% Source: https://arxiv.org/abs/1812.10211
\title{Genus six curves, K3 surfaces, and stable pairs}
\begin{abstract}
A general smooth curve of genus six lies on a quintic del Pezzo surface. In \cite{AK11}, Artebani and Kond\=o construct a birational period map for genus six curves by taking ramified double covers of del Pezzo surfaces. The map is not defined for special genus six curves. In this paper, we construct a smooth Deligne-Mumford stack $\mathfrak{P}_0$ parametrizing certain stable surface-curve pairs which essentially resolves this map. Moreover, we give an explicit description of pairs in $\mathfrak{P}_0$ containing special curves.
\end{abstract}
\section{Introduction}
In \cite{AK11}, the authors construct a birational period map
\begin{equation*}
\varphi: \mathcal{M}_6 \dashrightarrow D/\Gamma,
\end{equation*}
where the source denotes the moduli space of genus six curves and the target parametrizes certain lattice-polarized $K3$ surfaces (see, for example, \cite[Section 1]{Dol96}). Their construction of $\varphi$ is as follows. The canonical model of a general smooth curve $C$ of genus six is a quadric section of a unique smooth quintic del Pezzo surface $\Sigma_5$ embedded anti-canonically in $\mathbb{P}^5$. The double cover of $\Sigma_5$ branched along $C$ will be a $K3$ surface. Taking the period point of this surface defines $\varphi$. More precisely, the output of $\varphi$ is a lattice-polarized $K3$ surface where the lattice has rank $5$ (note that $\mathcal{M}_6$ and $D/\Gamma$ are $15$-dimensional, while the moduli space of polarized $K3$ surfaces is $19$ dimensional).
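For the reader's convenience, the dimension count behind this parenthetical is the standard one: the moduli space of $K3$ surfaces polarized by a lattice of rank $\rho$ is $(20-\rho)$-dimensional, so here
\begin{equation*}
\dim \mathcal{M}_6=3g-3=15, \qquad \dim D/\Gamma=20-5=15,
\end{equation*}
while polarized $K3$ surfaces (the case $\rho=1$) form a $19$-dimensional moduli space.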
A smooth curve of genus six is called \emph{special} if it is one of the following four types: hyperelliptic, trigonal, bielliptic, or plane quintic. The canonical model of any non-special smooth curve of genus six lies on a unique weak del Pezzo surface (see, for example, {\cite[Proposition 1.1]{AK11}}), so $\varphi$ extends over such curves. Note that $\varphi$ does not extend over special curves; the canonical models of these curves do not lie on weak quintic del Pezzo surfaces in $\mathbb{P}^5$. Moreover, Artebani and Kond\=o prove that the birational period map $\varphi$ induces an isomorphism
\begin{equation*}
\mathcal{M}_6 \setminus \{\text{special curves}\} \rightarrow (D \setminus \mathcal{H})/\Gamma,
\end{equation*}
where $\mathcal{H}$ denotes a \emph{discriminant divisor}. Artebani and Kond\=o show that $\mathcal{H}$ has $3$ irreducible components and that the general members of these components correspond, respectively, to a genus six curve with a node in $\Sigma_5$, the union of a plane quintic and a line in $\mathbb{P}^2$, and the union of a trigonal curve $C$ of genus six and a ruling $e$ of $\mathbb{P}^1 \times \mathbb{P}^1$ cutting out a divisor in $|K_C-2g^1_3|$ on $C$ (\cite[Theorem 0.2]{AK11}). The $K3$ surfaces corresponding to such curves are also constructed via double covers branched along these curves.
The goal of this paper is to construct a space resolving the indeterminacy of $\varphi$ and give a modular interpretation for this space. Studying birational period maps has been a topic of significant interest in the literature. Shah in \cite{Sh80} defines a period map for the GIT (geometric invariant theory) space of plane sextics by taking ramified double covers of $\mathbb{P}^2$. The indeterminacy occurs precisely along the locus of triple conic curves, which he resolves by blowing it up. Kond\=o in \cite{Kon00} defines a birational period map for curves of genus three by taking four-fold cyclic covers of $\mathbb{P}^2$ branched along quartic curves, which induces an isomorphism between the moduli space of non-hyperelliptic curves of genus three and the arithmetic quotient of a period domain minus a discriminant divisor. Similarly, Kond\=o in \cite{Kon02} constructs a birational period map for genus four curves by taking triple covers of quadric surfaces in $\mathbb{P}^3$ branched along non-hyperelliptic curves. Artebani in \cite{Ar09} expands upon Kond\=o's work in genus three by considering the GIT space for plane quartics and completely resolves the indeterminacy of the period map on the level of compactifications by blowing up the double conic locus. In \cite{CMJL12}, the authors expand upon Kond\=o's work in genus four by constructing a GIT model for $\overline{\mathcal{M}}_4$ and resolving the period map using techniques from Looijenga. In \cite{LO16}, Laza and O'Grady study the relationship between GIT and Satake-Baily-Borel compactifications of quartic $K3$ surfaces.
To resolve the period map $\varphi$ for genus six curves, rather than using GIT, we appeal to Hacking's theory of stable pairs developed in \cite{Hac01}, \cite{Hac04} and generalized in \cite{DH18}. A stable pair is a surface-curve pair satisfying certain properties for moduli theoretic purposes. The moduli spaces of stable pairs constructed in these papers are modified versions of the KSBA (Koll\'ar, Shepherd-Barron, Alexeev) compactification. In Section 3, we will formally define stable pairs and their allowable ($\mathbb{Q}$-Gorenstein) families. We remark that work of Hyeon and Lee in \cite{HL10} reveals that Hacking's compactification of plane quartic curves is already useful for the analogous period map question in genus three: in this case, Hacking's space extends the period map over hyperelliptic curves. We should also note that in \cite{AET19}, the authors construct a stable pair compactification of $K3$ surfaces of degree $2$.
Using Hacking's framework, we can consider a moduli stack $\mathfrak{P}$ of stable pairs whose general point is a pair of the form $(\Sigma_5, C)$, where $C$ is smooth and of class $-2K_{\Sigma_5}$ (see Definition~\ref{aux_stacks}). We define two open substacks of $\mathfrak{P}$ that will be necessary for us:
\begin{definition} \label{stacks}
Let $\mathfrak{P}_0 \subset \mathfrak{P}$ be the open substack parameterizing stable pairs $(X, D)$ such that:
\begin{enumerate}
\item The surface $X$ has only combinations of du Val, index two cyclic quotient singularities, and simple elliptic singularities.
\item The curve $D$ has at worst ADE singularities and avoids the singularities of $X$.
\end{enumerate}
In $(1)$, we allow ``empty" combinations of singularities, hence the surface $X$ may have only some of the listed singularities or may even be smooth.
Let $\mathfrak{P}_0^\text{sm} \subset \mathfrak{P}_0$ be the open substack parametrizing stable pairs $(X, D)$ satisfying properties $(1)$ and $(2)$ with $D$ smooth.
\end{definition}
We remark that an index $2$ cyclic quotient singularity arising on a stable pair in $\mathfrak{P}$ is a \emph{class $T$ singularity} (see Definition~\ref{classT}).
The main result of this paper is the following:
\begin{theorem} \label{mainthm}
The stack $\mathfrak{P}_0$ is Deligne-Mumford, smooth, and fits into the diagram
\begin{center}
\begin{tikzcd}[column sep=small]
& \mathfrak{P}_0 \arrow["j", dl, dashed, swap] \arrow[dr, "\tilde \varphi"] & \\
\overline {\mathcal{M}}_6 \arrow[dashed]{rr}{\varphi} & & (D/\Gamma)^\ast
\end{tikzcd}
\end{center}
where $j$ is the natural (birational) forgetting map and $\tilde \varphi$ extends the double cover construction of $\varphi$. Moreover, the map $j$ restricts to a surjective morphism
\begin{equation*}
j|_{\mathfrak{P}_0^\text{sm}}: \mathfrak{P}_0^\text{sm} \twoheadrightarrow \mathcal{M}_6 \setminus \mathcal{H}_6,
\end{equation*}
where $\mathcal{H}_6$ denotes the hyperelliptic locus.
\end{theorem}
In the statement of this theorem, $(D/\Gamma)^\ast$ denotes the Satake-Baily-Borel compactification of $D/\Gamma$. The content of this theorem is that $\mathfrak{P}_0^\text{sm}$ resolves the map $\varphi$ over plane quintic, trigonal, and bielliptic curves (all special curves except the hyperelliptic ones). The proof of this theorem will involve explicit construction of stable pairs $(X, D)$ containing special genus six curves. Using these pairs, we verify surjectivity of $j$ over smooth non-hyperelliptic curves. Table~\ref{table:pairs} in Section 4 gives a complete list of the pairs we construct. We note that these pairs lie in three distinct boundary loci in $\mathfrak{P}_0$, denoted $\mathcal{Z}_1$, $\mathcal{Z}_2$, and $\mathcal{Z}_3$. By ``boundary" here, we mean pairs $(X, D)$ such that $X$ has at least one singularity that is not du Val. We describe these boundary loci below by giving the dimension and general member of each.
\begin{enumerate} \label{bdry}
\item $\mathcal{Z}_1$: $14$ dimensional (a divisor). The general member is a pair $(X, D)$ where $X$ is constructed by choosing a line transverse to a smooth plane quintic curve in $\mathbb{P}^2$, blowing up the $5$ points of intersection, and contracting the strict transform of the line. This contraction produces a $\frac{1}{4}(1,1)$ cyclic quotient singularity. The curve $D$ is the image of the quintic in $X$.
\item $\mathcal{Z}_2$: $14$ dimensional (a divisor). The general member is a pair $(X, D)$ where $X$ is constructed by first choosing a trigonal curve of genus six $C$ on $\mathbb{P}^1 \times \mathbb{P}^1$ and a ruling $e$ meeting $C$ transversely in $4$ points. Then we blow up the four points of intersection between $C$ and $e$ and contract the strict transform of $e$. This contraction also produces a $\frac{1}{4}(1,1)$ cyclic quotient singularity. The curve $D$ is the image of $C$ in $X$.
\item $\mathcal{Z}_3$: $10$ dimensional. The general member is a pair $(X, D)$ where $X$ is a cone in $\mathbb{P}^5$ over an elliptic curve embedded in $\mathbb{P}^4$ via a degree $5$ line bundle and $D$ is a smooth quadric section of $X$ (a bielliptic curve).
\end{enumerate}
Moreover, we verify that given any pair $(X,D)$ in $\mathfrak{P}_0$, the double cover of $X$ branched along $D$ yields a (degeneration of a) $K3$ surface with
``insignificant limit singularities" (see \cite{Sh79}, \cite{Sh80} for the definition of such singularities). In dimension $2$, these are precisely the Gorenstein semi-log canonical (slc) singularities. Since the period map for $K3$ surfaces extends over degenerations with such singularities, the map $\tilde \varphi$ is indeed a morphism as asserted in the theorem (see Proposition~\ref{K3} and Proposition~\ref{period}). The $K3$ surfaces associated to the pairs $(X, D)$ over plane quintics and trigonal curves will be closely related to components of the discriminant divisor described by Artebani and Kond\=o (we discuss this in Remark~\ref{notable}).
We also remark that we can resolve the map $j$ via the simultaneous stable reduction of Casalaina-Martin and Laza for families of ADE curves (see {\cite[Theorem 3.5, Corollary 6.3]{CML13}}). We will see in Section 4 that this process yields a space that also resolves $\varphi$ over the hyperelliptic curves (see Remark~\ref{sim_st}).
We remark that one motivation for using stable pairs to resolve $\varphi$ stems from the Hassett-Keel program for genus six curves. In \cite{Mu14}, M\"uller shows that the final log canonical model of $\overline {\mathcal{M}}_6$ parametrizes quadric sections of $\Sigma_5$. Given a one-parameter degeneration of quadric sections of $\Sigma_5$ over the germ of a smooth curve, we can modify it so that the new special fiber is a stable pair. This stable reduction process involves applying techniques from the minimal model program. By enumerating singular quadric sections of $\Sigma_5$ by topological type and running this process, the hope is to construct a complete list of the pairs in $\mathfrak{P}$. We describe some explicit examples of this process in Section $5$, but since the list of singular quadric sections of $\Sigma_5$ is quite long, we save this approach for future work.
We should also remark that in \cite{HL10}, the authors identify both Hacking's compactification of plane quartic curves and the Satake-Baily-Borel compactification of Kond\=o's period space in \cite{Kon00} with certain log canonical models of $\overline{\mathcal{M}}_3$. One might ask: How do $\mathfrak{P}$ and $(D/\Gamma)^\ast$ fit into the Hassett-Keel story for genus six curves? We also leave this question for future work.
The paper is organized as follows. Section $2$ will describe some salient features of the geometry of special genus six curves. In Section $3$, we will recall the theory of stable pairs and establish a smoothability criterion for such pairs with mild surface singularities. Section $4$ will be devoted to proving Theorem~\ref{mainthm}. The proof will entail explicitly constructing surface-curve pairs using the geometry of special curves and then applying the smoothability criterion. Section $5$ gives some examples of computing stable limits of one-parameter degenerations of quadric sections of $\Sigma_5$, and we recover some of the pairs constructed in Section $4$.
\section{Geometry of special curves}
In this section, for each smooth non-hyperelliptic special curve $C$ of genus six mentioned in the introduction, we give a natural surface $S$ into which $C$ embeds. This will guide our search for stable pairs containing a given curve. We also introduce stratifications of plane quintic and trigonal curves after specifying certain marked divisors. Throughout this paper, $\mathbb{F}_{n}$ will denote the Hirzebruch surface $\mathbb{P}(\mathcal{O}_{\mathbb{P}^1} \oplus \mathcal{O}_{\mathbb{P}^1}(-n))$.
\vspace{.1in}
\subsection{Plane quintics} Of course, such a curve embeds in $\mathbb{P}^2$.
\begin{definition}
A \emph{marked plane quintic curve} is a pair $(C, E)$ where $C$ is a plane quintic curve and $E$ is a hyperplane section.
\end{definition}
In Section $4$, for each marked smooth plane quintic curve $(C, E)$, we exhibit a stable pair containing $C$. Marked smooth plane quintic curves $(C, E)$ are stratified by partitions $(a_1, \dots ,a_5)$ of $5$; the partition represents the non-zero coefficients of the points in the support of $E$. For example, a pair $(C, E)$ of type $(1,1,1,1,1)$ means that $E=\ell|_C$, where $\ell$ is a line transverse to $C$. On the other hand, a pair $(C, E)$ of type $(5)$ means that $E=\ell|_C$, where $\ell$ meets $C$ in a single point with intersection multiplicity $5$.
\vspace{.1in}
\subsection{Trigonal curves} Recall the construction of a rational normal surface scroll in $\mathbb{P}^{g-1}$. For two non-negative integers $a$ and $b$ such that $a+b=g-2$, a rational normal surface scroll $S_{a,b}$ is the join of two rational normal curves of degrees $a$ and $b$ with complementary linear spans. Equivalently, $S_{a,b}$ can be defined as the rational ruled surface $\mathbb{P}(\mathcal{O}_{\mathbb{P}^1}(-a)\oplus \mathcal{O}_{\mathbb{P}^1}(-b))$.
Now, consider a smooth trigonal curve $C \subset \mathbb{P}^{g-1}$. The linear system of quadrics containing $C$ cuts out a rational normal surface scroll $S_{a,b}$ (see \cite[Proposition 3.1]{ACGH85}). We now define some numerical invariants associated to the embeddings of smooth trigonal curves in scrolls that help us stratify such curves.
\begin{definition}
Let $S_{a,b}$ denote the rational normal surface scroll containing a given smooth trigonal curve $C$. The quantity $M=|a-b|$ is called the \emph{Maroni invariant} of $C$.
\end{definition}
Tautologically, a smooth trigonal curve $C$ of Maroni invariant $M$ embeds into the Hirzebruch surface $\mathbb{F}_M$. We note that for genus six, there are only two possible values for $M$: $0$ and $2$. When $M=0$, by genus considerations, such a curve has class $3e+4f$ on $\mathbb{P}^1 \times \mathbb{P}^1$, where $e$ and $f$ denote the classes of the two rulings.
When $M=2$, such a curve has class $3e+7f$ on $\mathbb{F}_2$, where $e$ denotes the negative section and $f$ denotes the fiber class of the projection $\mathbb{F}_2 \rightarrow \mathbb{P}^1$ (the latter cuts out the $g^1_3$ on $C$). The negative section has a unique point of intersection with $C$; denote this point $p$. Let $f_p$ denote the unique fiber containing $p$.
Any smooth trigonal curve $C$ of genus six has not only a unique $g_3^1$ but also a unique $g^1_4$ of class $K_C-2g^1_3$. If $C$ has Maroni invariant $0$, then this $g_4^1$ is cut out by $e$ on $\mathbb{P}^1 \times \mathbb{P}^1$. If $C$ has Maroni invariant $2$, the $g_4^1$ is cut out by $e+f$.
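Both class computations can be checked by adjunction. On $\mathbb{P}^1 \times \mathbb{P}^1$ we have $K=-2e-2f$, and on $\mathbb{F}_2$ we have $K=-2e-4f$ (with $e^2=-2$, $e \cdot f=1$, $f^2=0$), so
\begin{equation*}
2g-2=(3e+4f)\cdot(e+2f)=10 \qquad \text{and} \qquad 2g-2=(3e+7f)\cdot(e+3f)=10,
\end{equation*}
giving $g=6$ in both cases.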
\begin{definition}
A \emph{marked trigonal curve of genus six} is a pair $(C, E)$ where $C$ is a trigonal curve of genus six and $E$ is a divisor in the unique $g^1_4$ associated to $C$.
\end{definition}
In Section $4$, for each marked smooth trigonal curve of genus six $(C, E)$, we exhibit a stable pair containing $C$. We will use the following notation to stratify marked smooth trigonal curves of genus six $(C, E)$:
\begin{enumerate}
\item \emph{Type $(2; [a_1], a_2, a_3, a_4)$}: A pair $(C, E)$ is of this type if $C$ has Maroni invariant $2$ and
\begin{equation*}
E=a_1p+\displaystyle \sum_{i \geq 2} a_i p_i.
\end{equation*}
Note that when $C$ has Maroni invariant $2$, the point $p$ is always in the support of $E$. The $a_i$ necessarily form a partition of $4$. Note that $a_1>1$ if and only if $E=(e+f_p)|_C$, and $a_1=1$ if and only if $E=(e+f_0)|_C$ for some fiber $f_0 \neq f_p$.
\item \emph{Type $(0; b_1, b_2, b_3, b_4)$}: A pair $(C, E)$ is of this type if $C$ has Maroni invariant $0$ and
\begin{equation*}
E=\displaystyle \sum_i b_ip_i.
\end{equation*}
The $b_i$ necessarily form a partition of $4$.
\end{enumerate}
\subsection{Bielliptic curves}\label{biell_geo} A bielliptic curve is one that admits a $2:1$ cover of an elliptic curve. A smooth genus six bielliptic curve can be realized as a quadric section of a cone in $\mathbb{P}^5$ over a smooth elliptic curve embedded in $\mathbb{P}^4$ via a degree $5$ line bundle. The curve avoids the vertex of the cone (see
\cite[Lemma 3.3]{Ko05}, for example). Moreover, for a bielliptic curve of genus six (in fact for genus greater than five), the bielliptic involution is unique (see \cite[Chapter $5$]{Acc94}, for example) and the quotient by this involution is isomorphic to the exceptional elliptic curve for the minimal resolution of this cone.
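As a quick consistency check, let $H$ denote the hyperplane class of the cone $X \subset \mathbb{P}^5$; one checks on the minimal resolution that $K_X \sim -H$, so a quadric section $D \sim 2H \sim -2K_X$, and adjunction gives
\begin{equation*}
2g(D)-2=D\cdot(D+K_X)=2H \cdot H=2\deg X=10,
\end{equation*}
so $D$ indeed has genus six.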
\section{Moduli of stable pairs}
In this section, we outline the theory of stable pairs. We refer the reader to \cite{Hac01}, \cite{Hac04}, and \cite{DH18} for more details. The key idea is that the forthcoming definitions allow us to construct the moduli stacks $\mathfrak{P}$, $\mathfrak{P}_0$, and $\mathfrak{P}_0^\text{sm}$ (recall Definition~\ref{stacks}).
\begin{definition}
Let $X$ be a surface and $D$ an effective $\numset{Q}$-divisor on $X$. The pair $(X,D)$ is said to be \emph{semi log canonical (slc)} (resp. semi log terminal (slt)) if the following conditions hold:
\begin{enumerate}
\item $X$ is Cohen-Macaulay and has at worst normal crossings singularities in codimension $1$.
\item The divisor $K_X +D$ is $\numset{Q}$-Cartier.
\item Let $\nu: X^\nu \rightarrow X$ denote the normalization of $X$, $\delta$ the double curve of $X$, $D^\nu$ and $\delta^\nu$ the inverse images of $D$ and $\delta$. Then the pair $(X^\nu, \delta^\nu+D^\nu)$ is log canonical (resp. log terminal).
\end{enumerate}
\end{definition}
\begin{definition}[{\cite[Definition 2.1]{DH18}}]\label{stable_pair}
Let $m,n$ be positive co-prime integers with $m < n$. Let $X$ be a projective, reduced, connected, Cohen-Macaulay surface and $D$ an effective Weil divisor on $X$. We say that $(X,D)$ is a \emph{stable pair of type (m,n)} if the following conditions hold:
\begin{enumerate}
\item No component of $D$ is contained in the singular locus of $X$.
\item For some $\epsilon>0$, the pair $(X, (m/n+\epsilon)D)$ is slc, and the divisor $K_X+(m/n+\epsilon)D$ is ample.
\item The divisor $nK_X+mD$ is linearly equivalent to zero.
\item $\chi(\mathcal{O}_X)=1$.
\end{enumerate}
\end{definition}
\begin{definition}[{\cite[Definition 2.3]{DH18}}] \label{qg1}
A \emph{$\numset{Q}$-Gorenstein family of stable pairs of type $(m,n)$} is a pair $(\pi: \mathcal{X} \rightarrow T, \mathcal{D} \subset \mathcal{X})$, where $\mathcal{D}$ is a relative effective Weil divisor and $\pi$ is a flat, proper, Cohen-Macaulay morphism with slc surfaces as geometric fibers, satisfying the following additional conditions:
\begin{enumerate}
\item $\omega_\pi^{[i]}$ commutes with base change for every $i \in \numset{Z}$, and on each geometric fiber, some reflexive power of $\omega_\pi$ is invertible.
\item $\mathcal{O}_{\mathcal{X}}(\mathcal{D})^{[i]}$ commutes with base change for every $i \in \numset{Z}$.
\item Each geometric fiber is a stable pair of type $(m,n)$.
\end{enumerate}
\end{definition}
For brevity, we will occasionally write ``stable pair" and omit ``of type $(m,n)$." We will eventually specialize to the case $(m,n)=(1,2)$. Geometrically, $\numset{Q}$-Gorenstein families of stable pairs are those which lift locally to canonical coverings (to be defined below). It is often more convenient to use this geometric definition when discussing $\numset{Q}$-Gorenstein deformations of singularities. We formally define canonical cover and the geometric version of $\numset{Q}$-Gorenstein deformation of a stable pair below, following \cite{Hac04}. Recall that the \emph{index} of a $\mathbb{Q}$-Cartier Weil divisor $D$ at a point $P$ in a normal variety $X$ is the smallest positive integer $N$ such that $ND$ is Cartier near $P$.
\begin{definition}{\label{can_cover}}
Let $P \in X$ be an slc surface germ of index $N$. The \emph{canonical covering} $\pi: Z \rightarrow X$ is defined by
\begin{equation*}
Z= \Spec_X (\mathcal{O}_X \oplus \mathcal{O}_X(K_X) \oplus \cdots \oplus \mathcal{O}_X((N-1)K_X)),
\end{equation*}
where the multiplication structure is determined by a choice of isomorphism $\mathcal{O}_X(NK_X) \cong \mathcal{O}_X$.
\end{definition}
We will also use the terminology \emph{index one cover} to express the same idea (recall that $K_Z$ is Cartier, hence has index $1$). Let $\xi_N$ be a primitive $N^\text{th}$ root of unity. There is a natural $\mu_N$ action on each $\mathcal{O}_X(iK_X)$ given by multiplication by $\xi^i_N$, and we note that the canonical covering morphism $\pi$ is a cyclic quotient of degree $N$ by the induced action on $Z$.
\begin{definition}\label{qg}
Let $(P \in X, D)$ be the germ of a stable pair, $N$ be the index of $X$, $Z \rightarrow X$ the canonical covering, and $D_Z$ the inverse image of $D$. We say that a deformation $(\mathcal{X}, \mathcal{D})/S$ of $(X, D)$ is \emph{$\numset{Q}$-Gorenstein} if there is a $\mu_N$-equivariant deformation $(\mathcal{Z}, \mathcal{D}_\mathcal{Z})/S$ of $(Z, D_Z)$ extending the natural $\mu_N$ action on $Z$ whose $\mu_N$ quotient is $(\mathcal{X}, \mathcal{D})/S$.
\end{definition}
\begin{remark} \label{index}
We say that $(X,D)$ satisfies the \emph{index condition} if the divisorial pullback of $D$ to the canonical covering at every surface germ of $X$ is Cartier. For stable pairs of type $(m,n)=(1,2)$, this condition is vacuous. See {\cite[Definition 2.4]{DH18}} for more details. We note that Definition~\ref{qg} is equivalent to conditions $(1)$ and $(2)$ of Definition~\ref{qg1} if the index condition holds.
\end{remark}
It is clear how to modify the definition of $\numset{Q}$-Gorenstein family in the context of surfaces (no marked curve) or surface germs: simply forget all conditions involving the marked divisor.
\begin{definition}
We say that a stable pair $(X,D)$ is \emph{smoothable} if there is a $\numset{Q}$-Gorenstein deformation $(\mathcal{X}, \mathcal{D})/\Delta$ of $(X,D)$ over the germ of a smooth curve such that the generic fiber $\mathcal{X}_\eta$ of $\mathcal{X}/\Delta$ is smooth.
\end{definition}
It follows from parts $(2)$ and $(3)$ of Definition~\ref{stable_pair} that for a stable pair $(X,D)$, the divisors $-K_X$ and $D$ are both ample. In particular, if $(X,D)$ is smoothable, $X$ must smooth to a del Pezzo surface.
\begin{theorem}[{\cite[Theorem 2.5]{DH18}}]
There is a Deligne-Mumford stack $\mathfrak{F}$ whose objects are $\mathbb{Q}$-Gorenstein families of stable pairs of type $(m,n)$ satisfying the index condition.
\end{theorem}
\begin{definition} \label{aux_stacks}
Fix $(m,n)=(1,2)$. Let $\mathfrak{F}_{K^2=5} \subset \mathfrak{F}$ be the open and closed substack parametrizing stable pairs $(X,D)$ with $K_X^2=5$. Let $\mathfrak{P}$ denote the component of $\mathfrak{F}_{K^2=5}$ whose general point is a pair $(\Sigma_5, C)$ where $C$ is smooth of class $-2K_{\Sigma_5}$.
\end{definition}
We note that $\mathfrak{F}_{K^2=5}$ is in fact an open and closed substack of $\mathfrak{F}$ since $K^2$ (and moreover $(K+D)^2$) is constant in $\mathbb{Q}$-Gorenstein families of stable pairs (see, for example, \cite{Has99}). Also, now it makes sense to define the open substacks $\mathfrak{P}_0$ and $\mathfrak{P}_0^\text{sm}$ of Definition~\ref{stacks} (the openness follows from the fact that the set of allowed singularity types for pairs in these substacks is closed under $\mathbb{Q}$-Gorenstein deformation).
\begin{remark}
The properness of the stack $\mathfrak{F}$ in general is a delicate issue. There is a partial valuative criterion for properness proven in \cite[Proposition 2.11]{DH18}: Up to base change, a family over a DVR with smooth generic fiber can be completed to a family where the special fiber is a stable pair. Moreover, this new family will be $\mathbb{Q}$-Gorenstein if the special fiber satisfies the index condition (recall Remark~\ref{index}), but there is no reason that the index condition should hold a priori. However, as noted in Remark~\ref{index}, for stable pairs of type $(m,n)=(1,2)$, the index condition holds vacuously. Hence $\mathfrak{P}$ is proper.
\end{remark}
We will now give some properties of stable pairs of type $(m,n)$ and their families. We begin with a description of some singularities that arise on stable pairs and conclude with a smoothability criterion for pairs with such singularities.
\begin{definition}
Fix co-prime positive integers $a$ and $r$ with $a<r$. Let $\numset{Z}/r\numset{Z}$ act on $\mathbb{C}^2$ via the diagonal matrix
$$\begin{pmatrix}
\xi_r & 0 \\ 0 & \xi_r^a
\end{pmatrix},$$
where $\xi_r$ is a primitive $r^{\text{th}}$ root of unity. The resulting singularity is called a \emph{cyclic quotient singularity of type $\frac{1}{r}(1,a)$}.
\end{definition}
Such singularities are uniquely determined by their minimal resolutions. The exceptional locus of the minimal resolution of a cyclic quotient singularity of type $\frac{1}{r}(1, a)$ is a chain of rational curves $E_1, \dots, E_n$ with self-intersections $E^2_i=-c_i<0$ for all $i$. The $c_i$ can be computed via the continued fraction
\begin{equation} \label{cfrac}
\frac{r}{a}=c_1 - \frac{1}{c_2-\frac{1}{c_3-\dots}}.
\end{equation}
Conversely, given $E_i$, $c_i$, and a continued fraction representation as in ~(\ref{cfrac}), we say that the singularity created by contracting the $E_i$ is of type $\frac{1}{r}(1,a)$. We remark that this notation depends on one of the two possible orderings of the $E_i$.
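For instance, two of the cyclic quotient singularities appearing later in this paper correspond to the expansions
\begin{equation*}
\frac{4}{1}=4 \qquad \text{and} \qquad \frac{8}{3}=3-\frac{1}{3},
\end{equation*}
so a $\frac{1}{4}(1,1)$ point is resolved by a single $(-4)$-curve, while a $\frac{1}{8}(1,3)$ point is resolved by a chain of two $(-3)$-curves.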
\begin{definition}[{\cite[Definition 3.7]{KSB88}}]
\label{classT}
A surface singularity is said to be of \emph{class $T$} if it is a cyclic quotient singularity and admits a $\numset{Q}$-Gorenstein one-parameter smoothing.
\end{definition}
In the definition of class $T$ given in \cite{KSB88}, a deformation $\mathcal{X}/S$ is said to be $\numset{Q}$-Gorenstein if $K_\mathcal{X}$ is $\numset{Q}$-Cartier. This is an a priori weaker condition than the notion of $\numset{Q}$-Gorenstein given in Definition~\ref{qg}. However, as remarked in {\cite[Section 2.1]{HP10}}, the two notions coincide when the central fiber $X$ has quotient singularities and the base $S$ is a smooth curve. There is a well known classification of class $T$ singularities due to Koll\'ar and Shepherd-Barron which we now present.
\begin{proposition}[{\cite[Proposition 3.10]{KSB88}}]\label{Classt}
A class $T$ singularity is either a rational double point (ADE, du Val) or a cyclic quotient singularity of type
\begin{equation} \label{p,q}
\frac{1}{p^2q}(1, dpq-1)
\end{equation}
where $p,q$ are integers and $d$ is co-prime to $p$.
\end{proposition}
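As an illustration of this notation (a consistency check against the singularities constructed in Section 4): taking $(p,q,d)=(2,1,1)$ in ~(\ref{p,q}) recovers the $\frac{1}{4}(1,1)$ point, and more generally $(p,q,d)=(2,q,1)$ gives
\begin{equation*}
\frac{1}{p^2q}(1,dpq-1)=\frac{1}{4q}(1,2q-1),
\end{equation*}
which is the index two class $T$ singularity arising on the pairs over trigonal curves constructed in Section 4 (there $q=a_1+1$ in the notation of Section 2).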
We make a few remarks about class $T$ singularities. Class $T$ singularities are precisely the log terminal $\numset{Q}$-Gorenstein-smoothable surface singularities ({\cite[Theorem 3.4]{Pr17}}). For a given class $T$ singularity, there is an irreducible component of its deformation space parametrizing $\numset{Q}$-Gorenstein deformations. Hence, $\numset{Q}$-Gorenstein deformations of class $T$ singularities are class $T$ ({\cite[Theorem 3.9, Section 7]{KSB88}}). A non-du Val class $T$ singularity of the form in ~(\ref{p,q}) has index $p$ and canonical cover of type $A_{pq-1}$. The $\mu_p$ action on the equation
\begin{equation} \label{Apq}
f=xy+z^{pq}=0
\end{equation}
is given by
\begin{equation} \label{can_action}
(x,y,z) \mapsto (\xi x, \xi^{-1} y, \xi^d z)
\end{equation}
(see the remarks immediately following Proposition 5.3 in \cite{BR95}, for example). We will also need to make use of the following theorem.
\begin{theorem}[{\cite[Theorem 3.1]{HP10}}] \label{lc_big}
Let $X$ be a projective surface with log canonical singularities such that $-K_X$ is big. Then there are no local-to-global obstructions to deformations of $X$. In particular, if the singularities of $X$ admit $\mathbb{Q}$-Gorenstein smoothings, then $X$ admits a $\mathbb{Q}$-Gorenstein smoothing.
\end{theorem}
Before establishing the smoothability criterion, we need two important facts.
\begin{lemma} \label{lift}
Let $(X,D)$ be a stable pair of type $(m,n)$ such that $X$ has class $T$ singularities and $D$ is Cartier. Then $H^1(\mathcal{O}_D(D))=0$.
\end{lemma}
\begin{proof}
By Lemma $3.14$ in \cite{Hac04}, $H^1(\mathcal{O}_X(D))=0$ since $X$ is log terminal (this is a consequence of Kodaira vanishing). Now, the exact sequence
\begin{equation*}
0 \rightarrow \mathcal{O}_X \rightarrow \mathcal{O}_X(D) \rightarrow \mathcal{O}_D(D) \rightarrow 0
\end{equation*}
induces a long exact sequence in cohomology
\begin{equation*}
\cdots \rightarrow H^1(\mathcal{O}_X(D)) \rightarrow H^1(\mathcal{O}_D(D)) \rightarrow H^2(\mathcal{O}_X) \rightarrow \cdots
\end{equation*}
By Serre duality, $H^2(\mathcal{O}_X)=H^0(K_X)^\vee=0$ since $-K_X$ is ample. The result is immediate.
\end{proof}
\begin{lemma}[{\cite[Lemma 5.5]{Hac01}}] \label{Pic}
Let $X$ be a surface with log canonical and $\numset{Q}$-Gorenstein smoothable singularities with $-K_X$ ample, and let $\mathcal{X}/\Delta$ be a deformation of $X$ over the germ of a smooth curve. Then the restriction map
\begin{equation*}
\Pic \mathcal{X} \rightarrow \Pic X
\end{equation*}
is an isomorphism.
\end{lemma}
The proof mimics a portion of the proof of the lemma cited; we include it for the reader's convenience.
\begin{proof}
We have a commutative diagram
\begin{equation*}
\begin{tikzcd}
\Pic \mathcal{X} \arrow[r] \arrow[d]
& \Pic X \arrow[d] \\
H^2(\mathcal{X}, \numset{Z}) \arrow[r]
& H^2(X, \numset{Z})
\end{tikzcd}
\end{equation*}
The restriction map $H^2(\mathcal{X}, \numset{Z}) \rightarrow H^2(X, \numset{Z})$ is an isomorphism because $X$ is a homotopy retract of $\mathcal{X}$. The map $\Pic X \rightarrow H^2(X, \numset{Z})$ fits into the long exact sequence in cohomology
\begin{equation*}
\cdots \rightarrow H^1(\mathcal{O}_X) \rightarrow \Pic X \rightarrow H^2(X, \numset{Z}) \rightarrow H^2(\mathcal{O}_X) \rightarrow \cdots
\end{equation*}
associated to the exponential sequence. Using Serre duality and the fact that $-K_X$ is ample, we see that $H^2(\mathcal{O}_X)=0$ as in the proof of Lemma~\ref{lift}.
By Theorem~\ref{lc_big}, $X$ admits a one-parameter smoothing over the germ of a smooth curve to a del Pezzo surface $Y$. Since $\chi(\mathcal{O}_Y)=1$, we must have $H^1(\mathcal{O}_X)=0$. Hence the map $\Pic X \rightarrow H^2(X, \numset{Z})$ is an isomorphism.
Since $H^1(\mathcal{O}_X)=H^2(\mathcal{O}_X)=0$, by cohomology and base change applied to the family $f: \mathcal{X} \rightarrow \Delta$, we have $R^1f_\ast \mathcal{O}_\mathcal{X}=R^2f_\ast \mathcal{O}_\mathcal{X}=0$. Since $\Delta$ is affine, $H^1(\mathcal{O}_\mathcal{X})=H^2(\mathcal{O}_\mathcal{X})=0$ (see {\cite[Theorem III.3.7, Exercise III.8.1, Theorem III.12.11]{Har77}}). By considering the exponential sequence as before, we see that the map $\Pic \mathcal{X} \rightarrow H^2(\mathcal{X}, \numset{Z})$ is an isomorphism. Therefore, the restriction map $\Pic \mathcal{X} \rightarrow \Pic X$ is also an isomorphism.
\end{proof}
We now present the main theorem of this section. As in the previous lemma, let $\Delta$ denote the germ of a smooth curve.
\begin{theorem} \label{smooth}
Let $(X,D)$ be an slc stable pair such that $X$ has class $T$ singularities and $D$ is Cartier. Then the following hold:
\begin{enumerate}
\item $(X,D)$ is smoothable.
\item The generic fiber of any $\numset{Q}$-Gorenstein deformation of $(X,D)$ over $\Delta$ is smoothable.
\item Any $\numset{Q}$-Gorenstein deformation of the singularities of $X$ over $\Delta$ can be realized on a stable pair.
\end{enumerate}
\end{theorem}
\begin{proof}
By Theorem~\ref{lc_big}, a $\numset{Q}$-Gorenstein smoothing of the singularities of $X$ over $\Delta$ lifts to a $\numset{Q}$-Gorenstein smoothing $\mathcal{X}/\Delta$ of $X$. By Lemma~\ref{lift}, $H^1(\mathcal{O}_D(D))=0$, hence by {\cite[Corollary 3.2]{Has99}}, we can lift this family of surfaces in turn to a family of slc pairs $(\mathcal{X}, \mathcal{D})/\Delta$ satisfying all the conditions of a $\numset{Q}$-Gorenstein family except (a priori) that the generic fiber is a stable pair.
Since $nK_X +mD \sim 0$, by Lemma~\ref{Pic}, we have the relation $nK_\mathcal{X}+m\mathcal{D} \sim 0$. Therefore, $nK_{\mathcal{X}_\eta}+m\mathcal{D}_\eta \sim 0$ by restriction. We also note that $\chi(\mathcal{O}_{\mathcal{X}_\eta})=1$, since $\chi(\mathcal{O}_X)=1$. Now, fix $\epsilon$ such that $K_X+(m/n+\epsilon)D$ is ample. Passing to a sufficiently high multiple $N$ such that $N(K_\mathcal{X}+(m/n+\epsilon) \mathcal{D})$ is Cartier and restricting to the generic fiber shows that $K_{\mathcal{X}_\eta}+(m/n+\epsilon)\mathcal{D}_\eta$ is ample as well. This concludes the proof of $(1)$.
Since the hypotheses of the theorem are preserved under any $\numset{Q}$-Gorenstein deformation over $\Delta$, $(2)$ is immediate.
For $(3)$, lift a $\numset{Q}$-Gorenstein deformation of the singularities of $X$ to a $\numset{Q}$-Gorenstein deformation of slc pairs as above. Repeating the argument in the proof of $(1)$ shows that the generic fiber is also a stable pair.\end{proof}
\begin{remark}
The first claim in the theorem can be strengthened slightly: We may assume the singularities of $X$ are log canonical and $\numset{Q}$-Gorenstein-smoothable, if we also require that $H^1(\mathcal{O}_D(D))=0$.
\end{remark}
\section{Proof of main result: resolving $\varphi$}
This section will be devoted to proving Theorem~\ref{mainthm}. Let $\mathfrak{P}$ denote the moduli space in Definition~\ref{aux_stacks}. Again, for brevity, we will simply use the terminology ``stable pairs" (omitting ``of type $(m,n)$").
Using the stratifications in Section $2$, for each marked smooth plane quintic or trigonal curve $(D, E)$, we exhibit a stable pair $(X,D)$. For each bielliptic curve $D$, we exhibit a stable pair $(X, D)$. Stability of these pairs will be addressed in Proposition~\ref{stable_prop}. In this section, we also explain how to address the hyperelliptic curves. Throughout, we will abuse notation and write $D$ for both the curve that we start with and its image in any birational model of the surface into which $D$ naturally embeds.
\subsection{Marked plane quintics}\label{pl_q}
For a given marked plane quintic curve $(D, E)$ of type $(a_1, \dots, a_5)$, choose a line $\ell$ in $\mathbb{P}^2$ such that
\begin{equation*}
E=\ell|_D=\displaystyle \sum_i a_ip_i.
\end{equation*}
Separate $D$ from $\ell$ by blowing up, and contract the strict transform of $\ell$ and any exceptional curves of self-intersection strictly less than $-1$. We obtain a surface $X$ with singularity type
\begin{equation*}
\frac{1}{4}(1,1) \displaystyle \oplus \bigoplus_{a_i>1} A_{a_i-1}.
\end{equation*}
We have constructed the desired pair $(X, D)$.
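A quick check of the asserted singularity type (our computation, with the conventions above): since $D$ and $\ell$ meet at $p_i$ with intersection multiplicity $a_i$, separating them requires $a_i$ blow-ups over $p_i$, each centered at a point of the running strict transform of $\ell$, so
\begin{equation*}
(\ell')^2=1-\sum_i a_i=-4,
\end{equation*}
and contracting the $(-4)$-curve $\ell'$ produces the $\frac{1}{4}(1,1)$ point. Over each $p_i$ with $a_i>1$, the first $a_i-1$ exceptional curves form a contracted chain of $(-2)$-curves, accounting for the $A_{a_i-1}$ points.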
\subsection{Marked trigonal curves, type $(0; b_1, b_2, b_3, b_4)$} \label{M=0;4}
Given a marked trigonal curve of genus six and Maroni invariant $0$ denoted $(D, E)$, embed $D$ in $\mathbb{P}^1 \times \mathbb{P}^1$. The curve $D$ has class $3e+4f$. Choose a particular ruling $e_0 \in |e|$ such that $E=e_0|_D$ on $D$. Separate $D$ from $e_0$ by blowing up. Contracting the strict transform of $e_0$ and all exceptional curves of self-intersection strictly less than $-1$ yields a surface $X$ with singularity type
\begin{equation*}
\frac{1}{4}(1,1) \displaystyle \oplus \bigoplus_{b_i>1} A_{b_i-1}.
\end{equation*}
We have constructed the desired pair $(X, D)$.
\subsection{Marked trigonal curves, type $(2; [a_1], a_2, a_3, a_4)$}\label{M=2} For a given pair $(D, E)$ of this type, embed $D$ in $\mathbb{F}_2$. Let $e$ denote the negative section and choose $f$ such that $E=(e+f)|_D$. Separate $D$ from $e \cup f$ by blowing up. If necessary (this will depend on whether $a_1=1$ or $a_1>1$), further separate $D$ from the chain of curves connecting the strict transforms of $e$ and $f$ by blowing up. This process yields a chain $C$ of rational curves of self-intersection
\begin{equation*}
[-3, \underbrace{-2, \dots, -2}_{\text{$a_1-1$}} , -3].
\end{equation*}
Contracting $C$ along with any exceptional curves of self-intersection strictly less than $-1$ produces a surface $X$ with singularity type
\begin{equation*}
\frac{1}{4(a_1+1)}(1, 2a_1+1) \oplus \displaystyle \bigoplus_{ \underset{i\neq 1}{a_i >1}} A_{a_i-1}.
\end{equation*}
The cyclic quotient singularity of $X$ is an index two class $T$ singularity. We have constructed the desired pair $(X, D)$.
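As a consistency check on the singularity type, expanding $\frac{4(a_1+1)}{2a_1+1}$ as in ~(\ref{cfrac}) gives $c_1=c_{a_1+1}=3$ and $c_i=2$ otherwise, matching the chain above; for example,
\begin{equation*}
\frac{8}{3}=3-\frac{1}{3} \qquad \text{and} \qquad \frac{20}{9}=3-\cfrac{1}{2-\cfrac{1}{2-\cfrac{1}{2-\cfrac{1}{3}}}}
\end{equation*}
are the cases $a_1=1$ (chain $[-3,-3]$, singularity $\frac{1}{8}(1,3)$) and $a_1=4$ (chain $[-3,-2,-2,-2,-3]$, singularity $\frac{1}{20}(1,9)$).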
\subsection{Bielliptic curves}\label{biell} As noted in Section $2$, such a curve $D$ embeds as a quadric section of an elliptic cone $X$ of degree $5$ in $\mathbb{P}^5$, hence $D$ necessarily has class $-2K_ X$ (which is ample). Moreover, $H^1(\mathcal{O}_D(D))=0$ by Serre duality. Since $X$ is log canonical and $D$ avoids the singularity, $(X,D)$ is a smoothable slc stable pair by Theorem~\ref{smooth}. Note that any smoothing of the elliptic singularity is automatically $\numset{Q}$-Gorenstein, since the singularity is Gorenstein. Since $K_X^2=5$, the pair smooths to $(\Sigma_5, C)$ where $C$ is smooth of class $-2K_{\Sigma_5}$, as desired.
\subsection{Hyperelliptic curves}\label{hyper} There is a complete list of ADE-singular plane sextic curves given in \cite[Table 2]{Ya96}. In particular, we can find such a curve with an $A_{13}$ singularity and four nodes in general position. Blow up the four nodes to recover $\Sigma_5$, and let $D$ be the strict transform of the sextic. By construction, $D$ has class $-2K_{\Sigma_5}$. Stable reduction of a curve with an $A_{13}$ singularity yields a smooth genus six hyperelliptic curve. Moreover, every such curve arises in this way (see {\cite[Example 6.2.1]{Has00}}). It follows immediately from Definition~\ref{stable_pair} that the pair $(\Sigma_5, D)$ is stable. By deforming the $A_{13}$ curve in $\Sigma_5$, we obtain a $\mathbb{Q}$-Gorenstein smoothing of this pair to $(\Sigma_5, C)$ where $C$ is smooth of class $-2K_{\Sigma_5}$.
\subsection{Proof of main theorem} We have completed the list of pairs necessary to prove Theorem~\ref{mainthm}. The following sequence of propositions will constitute our proof.
\begin{proposition}\label{stable_prop}
All of the pairs constructed in (\ref{pl_q})~--~(\ref{hyper}) are smoothable stable pairs of type $(1,2)$. Moreover, all of these pairs lie in $\mathfrak{P}_0$.
\end{proposition}
\begin{proof}
We outline the general technique for showing that each pair over the trigonal curves and plane quintics lies in $\mathfrak{P}_0$ below. Note that we have already addressed the pairs associated to the bielliptic and hyperelliptic curves in ~(\ref{biell}) and ~(\ref{hyper}).
Fix one of these pairs $(X,D)$ such that $D$ is a smooth plane quintic or trigonal curve. Let $\phi: X' \rightarrow X$ be the minimal resolution. We have seen $X'$ can be realized as a sequence of blow-ups of a smooth surface in which $D$ naturally embeds and whose intersection theory is well understood (see Section $2$). As a result, there is a natural set of generators for $\Pic X'$. We have seen that $X$ has index $2$ class $T$ singularities (and potentially also has type $A$ singularities) and is in particular $\numset{Q}$-factorial. We can express $\phi^\ast(-2K_X)$ and $\phi^*(D)$ in terms of these Picard generators, and we determine that they are linearly equivalent. Moreover, $\phi^*(D)$ coincides with $D'$ (the strict transform of $D$), since $D$ avoids the singularities of $X$. By the projection formula, we obtain $D=-2K_X$. This computation also verifies that $-K_X$ is ample; one checks that $\phi^*(-K_X)$ is nef and trivial precisely along curves contracted by $\phi$. We also see that $(K_X)^2=5$. Moreover, since $X$ is log terminal and $D$ avoids singularities, $(X,D)$ is slc. Combining all of this, we see that $(X,D)$ satisfies the hypotheses of Theorem~\ref{smooth}. Thus, $(X,D)$ smooths to $(\Sigma_5, C)$, where $C$ is smooth of class $-2K_{\Sigma_5}$ as desired.
\end{proof}
\begin{Example}[\emph{Marked plane quintics, type $(1,1,1,1,1)$}] \label{pl_q_2}
Let $D$ be a smooth plane quintic and let $\ell$ be a line transverse to $D$. Let $\pi_1: X' \rightarrow \mathbb{P}^2$ be the blow-up of $\mathbb{P}^2$ at the $5$ points of intersection of $D$ and $\ell$, and let $\pi_2: X' \rightarrow X$ denote the contraction of $\ell'$ (the strict transform of $\ell$). Let $L$ denote the hyperplane class on $\mathbb{P}^2$, let $G_i$ be the five $\pi_1$-exceptional divisors, let $D'$ be the strict transform of $D$ under $\pi_1$, and let $D''$ denote its image in $X$. We show that $(X, D'')$ is a stable pair satisfying the hypotheses of Theorem~\ref{smooth} with $K_X^2=5$, hence this pair lies in $\mathfrak{P}$.
We compute
\begin{equation*}
\pi_2^\ast(-2K_X)=-2K_{X'}-\ell'=\pi_1^\ast(5L)-\sum_{i=1}^{5} G_i=D'=\pi_2^\ast (D''),
\end{equation*}
hence by the projection formula,
\begin{equation*}
D''=-2K_{X}.
\end{equation*}
To verify that $K_{X}+D''=-K_X$ is ample, we choose
\begin{equation*}
\ell'=\pi_1^\ast L - \sum_{i=1}^{5} G_i
\end{equation*}
and the $G_i$ as Picard generators for $X'$. Fix an irreducible curve $C \subset X$; we need to show that this curve is positive against $-K_{X}$. Since $X$ is $\numset{Q}$-factorial, we can pull back to the minimal resolution to compute intersection numbers. If $C'$ (the strict transform of $C$ under $\pi_2$) is not $\ell'$ or one of the $G_i$, then it meets each of them non-negatively, and by non-degeneracy of the intersection pairing on $X'$ it must meet at least one of them positively. Since we can write $\pi_2^\ast(-K_{X})$ as a positive linear combination of $\ell'$ and the $G_i$, it follows that we only need to check how this pullback intersects each of them. Ampleness of $-K_{X}$ is immediate.
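Explicitly, with the generators above one computes (our computation)
\begin{equation*}
\pi_2^\ast(-K_{X})=-K_{X'}-\tfrac{1}{2}\ell'=\tfrac{5}{2}\ell'+2\sum_{i=1}^{5}G_i,
\end{equation*}
which pairs to $\tfrac{1}{2}$ with each $G_i$ and to $0$ with the contracted curve $\ell'$, as it must.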
The pair $(X, D'')$ is slc since $X$ is log terminal (class $T$) and $D''$ avoids singularities. We conclude that $(X, D'')$ is an slc stable pair. Moreover, $K_{X}^2=5$ and the pair satisfies the hypotheses of Theorem~\ref{smooth}.
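For completeness, the value $K_{X}^2=5$ can be seen directly from the computation above: since $\pi_2^\ast K_{X}=K_{X'}+\tfrac{1}{2}\ell'$, with $K_{X'}^2=9-5=4$, $K_{X'}\cdot \ell'=2$, and $(\ell')^2=-4$, we get
\begin{equation*}
K_{X}^2=\Bigl(K_{X'}+\tfrac{1}{2}\ell'\Bigr)^2=4+2-1=5.
\end{equation*}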
\end{Example}
\begin{Example}[\emph{Marked trigonal curves, type $(2; [4])$}]\label{M=2, ex}
Let $D$ be such a curve in $\mathbb{F}_2$, let $e$ denote the negative section, and let $f_p$ denote the distinguished fiber (see Section $2$). Let $\phi_1: X' \rightarrow \mathbb{F}_2$ denote the sequence of blow-ups described in ~(\ref{M=2}), and let $G_i$ be the $\phi_1$-exceptional divisors ($i=1, \dots, 4$). Let $\phi_2: X' \rightarrow X$ be the minimal resolution of $X$. Let $D'$ be the strict transform of $D$ under $\phi_1$, and let $D''$ be its image in $X$. We show that the pair $(X, D'')$ is stable and satisfies the hypotheses of Theorem~\ref{smooth} with $K_X^2=5$, hence this pair lies in $\mathfrak{P}$.
We compute
\begin{equation}\label{4.1}
\phi_1^\ast(K_{\mathbb{F}_2})=K_{X'}-G_1-2G_2-3G_3-4G_4.
\end{equation}
On the other hand,
\begin{equation}\label{4.2}
\phi_1^\ast(K_{\mathbb{F}_2})=\phi_1^\ast(-2e-4f_p)=-2e'-4f_p'-6G_1-10G_2-14G_3-14G_4,
\end{equation}
hence
\begin{equation}\label{4.3}
K_{X'}=-2e'-4f'_p-5G_1-8G_2-11G_3-10G_4.
\end{equation}
In ~(\ref{4.2}), ~(\ref{4.3}), $e'$ and $f'_p$ refer to the strict transforms of $e$ and $f_p$ under $\phi_1$.
Next, using ~(\ref{4.3}), we see that
\begin{equation}\label{4.4}
\phi_2^\ast(-2K_{X})=3e'+7f_p'+9G_1+15G_2+21G_3+20G_4.
\end{equation}
Also, since $D''$ avoids the singularities of $X$, $\phi_2^\ast (D'')=D'$. We compute
\begin{equation}\label{4.5}
\phi_1^\ast(D)=D'+G_1+2G_2+3G_3+4G_4.
\end{equation}
On the other hand,
\begin{equation}\label{4.6}
\phi_1^\ast(D)=\phi_1^\ast(3e+7f_p)=3e'+7f_p'+10G_1+17G_2+24G_3+24G_4.
\end{equation}
Therefore, by combining ~(\ref{4.4}), ~(\ref{4.5}), and ~(\ref{4.6}),
\begin{equation}\label{4.7}
D'=3e'+7f_p'+9G_1+15G_2+21G_3+20G_4=\phi_2^\ast(-2K_{X}).
\end{equation}
By the projection formula, $D''=-2K_{X}$.
To verify ampleness of $K_{X}+D''=-K_{X}$, we choose $e', f'_p$ and the $G_i$ as Picard generators for $X'$. An analogous argument to that in Example~\ref{pl_q_2} implies that we only need to check that $-K_X$ is positive against $G_4$, which it is. We again note that the pair $(X, D'')$ is slc since $X$ is log terminal and $D''$ avoids singularities. Hence this pair is in fact an slc stable pair. Moreover, $K_{X}^2=5$ and the pair satisfies the hypotheses of Theorem~\ref{smooth}.
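Concretely (a direct check with the generators above; we use that in this configuration $G_4$ meets $G_3$ transversely and is disjoint from $e'$, $f_p'$, $G_1$, and $G_2$), ~(\ref{4.4}) gives
\begin{equation*}
\phi_2^\ast(-K_{X})\cdot G_4=\tfrac{1}{2}\bigl(21\,G_3\cdot G_4+20\,G_4^2\bigr)=\tfrac{1}{2}(21-20)=\tfrac{1}{2}>0.
\end{equation*}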
\end{Example}
\begin{remark}\label{bdry_rmk}
We note that the pairs $(X, D)$ associated to plane quintics (Subsection~\ref{pl_q}) lie in the boundary locus $\mathcal{Z}_1$ described in Section $1$ (which is now well-defined as a result of Proposition~\ref{stable_prop}). We remark that it follows from the construction of $(X, D)$ that $\mathcal{Z}_1$ is in fact a divisor: The locus of plane quintics in $\mathcal{M}_6$ is of dimension $12$, and the moduli space of $5$ points on $\mathbb{P}^1$ (the points of intersection between a quintic and a line) is of dimension $2$. We also see from this discussion that the fiber of the forgetting map $j: \mathfrak{P}_0 \dashrightarrow \overline{\mathcal{M}}_6$ over a smooth plane quintic curve is $2$ dimensional.
Similarly, the pairs $(X, D)$ associated to trigonal curves (Subsection~\ref{M=0;4} and Subsection~\ref{M=2}) lie in the boundary locus $\mathcal{Z}_2$ described in Section $1$. We remark that $\mathcal{Z}_2$ is in fact a divisor: The trigonal locus in $\mathcal{M}_6$ is of dimension $13$, and the moduli of $4$ points on $\mathbb{P}^1$ (the points of intersection between a trigonal curve with $M=0$ on $\mathbb{P}^1 \times \mathbb{P}^1$ and the appropriate ruling) is of dimension $1$. We also see from this discussion that the fiber of the forgetting map $j: \mathfrak{P}_0 \dashrightarrow \overline{\mathcal{M}}_6$ over a smooth trigonal curve of genus six is $1$ dimensional.
The pairs $(X, D)$ associated to bielliptic curves (Subsection~\ref{biell}) lie, by definition, in the boundary locus $\mathcal{Z}_3 \subset \mathfrak{P}_0$. Since the bielliptic locus in $\mathcal{M}_6$ is of dimension $10$ and the bielliptic involution is unique (recall Subsection~\ref{biell_geo}), the locus $\mathcal{Z}_3$ is in fact $10$ dimensional as asserted in Section $1$.
\end{remark}
\begin{proposition}
The stack $\mathfrak{P}_0$ is smooth and Deligne-Mumford.\end{proposition}
\begin{proof}
Since $\mathfrak{F}$ is Deligne-Mumford, so is $\mathfrak{P}_0$.
The $\mathbb{Q}$-Gorenstein deformation space of any singularity allowed on a surface in $\mathfrak{P}_0$ is smooth, since the possible singularities are du Val, index two cyclic quotient (class $T$), or simple elliptic of degree $5$. The smoothness of the deformation space of this simple elliptic singularity is proven in \cite[Section 9.2(b)]{Pi74}. Every deformation of this elliptic singularity is $\mathbb{Q}$-Gorenstein since this singularity is Gorenstein. There are no local-to-global obstructions for deformations of any of the surfaces in $\mathfrak{P}_0$ by Theorem~\ref{lc_big}. By {\cite[Proposition 3.3]{Has99}}, the $\mathbb{Q}$-Gorenstein deformation space of any pair in $\mathfrak{P}^\text{sm}_0$ is smooth.
Note that a pair $(X, D)$ in $\mathfrak{P}_0$ where $D$ has ADE singularities is not slc, but the conclusion of {\cite[Proposition 3.3]{Has99}} still holds since we require that $D$ avoids the singularities of $X$. Hence the deformation space of such a pair is smooth. We conclude that $\mathfrak{P}_0$ is smooth.
\end{proof}
\begin{proposition} \label{K3}
For any pair $(X, D)$ in $\mathfrak{P}_0$, the double cover of $X$ branched along $D$ is a $K3$ surface with Gorenstein slc singularities.
\end{proposition}
\begin{proof}
Since $D=-2K_X$, the double cover of $X$ branched along $D$ is, by definition,
\begin{equation*}
X^{(2)}=\Spec_X(\mathcal{O}_X \oplus \mathcal{O}_X(K_X)),
\end{equation*}
where the $\mathcal{O}_X$-algebra structure is determined by multiplication by $D$. Let $\pi: X^{(2)} \rightarrow X$ be the natural morphism.
By definition of $\mathfrak{P}_0$, the surface $X^{(2)}$ has only combinations of du Val and simple elliptic singularities (including ``empty" combinations): the cover branched along an ADE singularity of $D$ in the smooth locus of $X$ acquires a du Val point, over an index two class $T$ point of $X$ the cover is the canonical covering, which is du Val of type $A$, and over the remaining du Val and simple elliptic points of $X$ (which $D$ avoids) the cover is \'etale, so the singularity types are unchanged.
For any pair $(X, D)$ in $\mathfrak{P}_0$, since $D \sim -2K_X$, the adjunction formula for double covers gives $\omega_{X^{(2)}} \cong \pi^\ast\left(\omega_X \otimes \mathcal{O}_X(-K_X)\right) \cong \mathcal{O}_{X^{(2)}}$, so the line bundle $\omega_{X^{(2)}}$ is trivial. Moreover, since $\pi_\ast \mathcal{O}_{X^{(2)}}=\mathcal{O}_X \oplus \mathcal{O}_X(K_X)$,
\begin{equation*}
h^1(X^{(2)}, \mathcal{O}_{X^{(2)}})=h^1(X,\mathcal{O}_X) + h^1(X, \mathcal{O}_X(K_X))=0.
\end{equation*}
Thus, $X^{(2)}$ is a $K3$ surface with Gorenstein slc singularities as claimed.
\end{proof}
\begin{remark} \label{notable}
We comment on some notable aspects of the $K3$ surfaces associated to the pairs $(X, D)$ containing plane quintic and trigonal curves. For a generic pair $(X, D)$ associated to plane quintics in Subsection~\ref{pl_q} (the general member of $\mathcal{Z}_1$), the double cover of $X$ branched along $D$ is a $K3$ surface with an $A_1$ singularity. This $K3$ surface corresponds to the generic point of the component of the discriminant divisor $\mathcal{H}_2$ described by Artebani and Kond\=o. Moreover, the lattice-polarization in this case is isomorphic to $U(2) \oplus D_4$ (see \cite[Section $3$]{AK11}), which is in particular of rank $6$.
This $K3$ surface can also be constructed by taking the minimal resolution of the double cover of $\mathbb{P}^2$ branched along the union of $D$ and $\ell$ and contracting the strict transform of $\ell$ (a $(-2)$-curve). We note that constructing periods for pairs $(D, \ell)$ via such double covers has already been considered in full detail in \cite{Laz09} and is also mentioned in \cite{AK11}.
For a generic pair $(X, D)$ associated to trigonal curves in Subsection~\ref{M=0;4} (the general member of $\mathcal{Z}_2$), the double cover of $X$ branched along $D$ is a $K3$ surface with an $A_1$ singularity. This $K3$ surface is the generic point of the component of the discriminant divisor $\mathcal{H}_3$ described by Artebani and Kond\=o. Moreover, the lattice polarization in this case is isomorphic to $U \oplus A_1^{\oplus 4}$ (see \cite[Section 3]{AK11}), which is in particular of rank $6$.
A complete description of the singularities appearing on the $K3$ surfaces associated to the pairs constructed in this paper appears in Table~\ref{table:pairs}.
\end{remark}
\begin{proposition}\label{period}
There is a period map
\begin{equation*}
\tilde \varphi: \mathfrak{P}_0 \rightarrow (D/\Gamma)^\ast.
\end{equation*}
\end{proposition}
\begin{proof}
Consider the tautological family $\mathfrak{W} \rightarrow \mathfrak{P}_0$. Taking the double cover of $\mathfrak{W}$ branched along the marked curves yields a family whose fibers are $K3$ surfaces with at worst du Val singularities, except over pairs whose surfaces have simple elliptic singularities. Hence we have a rational period map
\begin{equation*}
\tilde \varphi : \mathfrak{P}_0 \dashrightarrow (D/\Gamma)^\ast
\end{equation*}
defined away from the elliptic cone pairs. Given a smoothing of such a pair over a germ of a smooth curve, this period map uniquely extends over the closed point. Since the double cover of any pair with elliptic singularities (in fact any pair in $\mathfrak{P}_0$ by Proposition~\ref{K3}) has insignificant limit singularities (see \cite[Theorem 1]{Sh79}), this extension in fact does not depend on the smoothing. See, for example, the discussion in {\cite[Section 3.3]{LO16}}. Since $\mathfrak{P}_0$ is smooth, this rational period map extends to a morphism
\begin{equation*}
\tilde \varphi: \mathfrak{P}_0 \rightarrow (D/\Gamma)^\ast
\end{equation*}
as claimed. The image of pairs with elliptic singularities (including the elliptic cones of Subsection~\ref{biell}) lies in the boundary of $(D/\Gamma)^\ast$.
\end{proof}
\begin{remark}
We see that, as expected, $\tilde \varphi$ can have positive dimensional fibers: We know that $\mathcal{Z}_3$ is $10$ dimensional in $\mathfrak{P}_0$ (recall Remark~\ref{bdry_rmk}), and by \cite[Remark $4.7$]{AK11}, bielliptic curves are mapped to a $1$ dimensional boundary component of $(D/\Gamma)^\ast$. We note for completeness that the boundary of $(D/\Gamma)^\ast$ has $2$ zero dimensional components and $14$ one dimensional components (\cite[Corollary $4.2$, Theorem $4.5$]{AK11}), although the component corresponding to bielliptic curves is the only part of the boundary we consider in this paper.
\end{remark}
\begin{proposition}
The natural birational forgetting map $j$ restricts to a surjective morphism
\begin{equation*}
j|_{\mathfrak{P}_0^\text{sm}}: \mathfrak{P}_0^\text{sm} \twoheadrightarrow \mathcal{M}_6 \setminus \mathcal{H}_6,
\end{equation*}
where $\mathcal{H}_6$ denotes the hyperelliptic locus.
\end{proposition}
\begin{proof}
By the explicit construction of the pairs in (\ref{pl_q})~--~(\ref{hyper}) and the definition of $\mathfrak{P}_0^\text{sm}$ (Definition~\ref{stacks}), every smooth genus six non-hyperelliptic curve arises on a pair in $\mathfrak{P}_0^\text{sm}$, and conversely, every curve on a pair in $\mathfrak{P}_0^\text{sm}$ is smooth and of genus six. This verifies the claim that $j$ restricts to a surjection of $\mathfrak{P}_0^\text{sm}$ onto $\mathcal{M}_6 \setminus \mathcal{H}_6$.
\end{proof}
\begin{remark} \label{sim_st}
Consider the tautological family $\mathfrak{W} \rightarrow \mathfrak{P}_0$ as in the proof of Proposition~\ref{period}. We can construct a diagram
\begin{center}
\begin{tikzcd}[column sep=small]
\tilde \mathfrak{P}_0 \arrow["\tilde j", d, swap] \arrow["\nu", r] & \mathfrak{P}_0 \arrow["j", dl, dashed, swap] & \\
\overline {\mathcal{M}}_6
\end{tikzcd}
\end{center}
via simultaneous stable reduction (\cite[Theorem 3.5, Corollary 6.3]{CML13}). Moreover, the image of $\tilde j$ is a partial compactification of ${\mathcal{M}}_6$. As noted previously, every hyperelliptic curve can be realized via stable reduction of a curve with an $A_{13}$ singularity. Therefore, the image of $\tilde j$ contains $\mathcal{M}_6$. By definition, $\mathfrak{P}_0$ contains pairs of the form $(\Sigma_5, C)$, where $C$ has ADE singularities. Stable reductions of such curves may (and will) be nodal; for example, consider cuspidal curves (these also exist on $\Sigma_5$ by the results in \cite[Table 2]{Ya96}). The image of $\tilde j$ consequently intersects the boundary of $\overline {\mathcal{M}}_6$. Note that $\tilde \mathfrak{P}_0$ resolves the indeterminacy of $j$, but it is not immediately clear how to describe this space in a modular way over pairs containing singular curves.
\end{remark}
Table~\ref{table:pairs} below summarizes the pairs constructed in this section and their associated $K3$ surfaces.
\begin{table}[ht]
\centering
\begin{tabular}{c c c c}
\hline\hline
$D$ & $X_{\text{sing}}$ & $\mathcal{Z}_i$ & Singularities of $K3$ \\ [0.5ex]
\hline
Plane quintic of type $(a_1, \dots, a_5)$ & $\frac{1}{4}(1,1) \displaystyle \oplus \bigoplus_{a_i>1} A_{a_i-1}$ & $\mathcal{Z}_1$ & $A_1 \displaystyle \oplus \bigoplus_{a_i>1} A^{\oplus 2}_{a_i-1}$ \\
Trigonal of type $(0; b_1, b_2, b_3, b_4)$ & $\frac{1}{4}(1,1) \displaystyle \oplus \bigoplus_{b_i>1} A_{b_i-1}$ & $\mathcal{Z}_2$ & $A_1 \displaystyle \oplus \bigoplus_{b_i>1} A^{\oplus 2}_{b_i-1}$ \\
Trigonal of type $(2; [a_1], a_2, a_3, a_4)$ & $\frac{1}{4(a_1+1)}(1, 2a_1+1) \displaystyle \oplus \bigoplus_{ \underset{i\neq 1}{a_i >1}} A_{a_i-1}$ & $\mathcal{Z}_2$ & $A_{2a_1+1} \displaystyle \oplus \bigoplus_{ \underset{i\neq 1}{a_i >1}} A^{\oplus 2}_{a_i-1}$ \\
Bielliptic & Simple elliptic & $\mathcal{Z}_3$ & Simple elliptic \\
$\sim -2K_{\Sigma_5}$ with an $A_{13}$ & Smooth & N/A & $A_{13}$ \\ [1ex]
\hline
\end{tabular}
\caption{Pairs $(X, D)$ constructed in subsections (\ref{pl_q})~--~(\ref{hyper}) and their associated $K3$ surfaces.}
\label{table:pairs}
\end{table}
\vspace{.60in}
\section{Construction of stable pairs via stable reduction}
\numberwithin{theorem}{section}
\counterwithin{theorem}{subsection}
\numberwithin{equation}{section}
\counterwithin{equation}{subsection}
In this section, we explain how to construct some of the pairs in the proof of Theorem~\ref{mainthm} via the Hassett-Keel program and stable reduction. We recall our discussion from Section $1$: in \cite{Mu14}, M\"uller shows that the final log canonical model of $\overline {\mathcal{M}}_6$ parametrizes quadric sections of $\Sigma_5$. Here we consider certain one-parameter degenerations of quadric sections of $\Sigma_5$ over the germ of a smooth curve, where the generic fiber is smooth. We show that these families of pairs can be modified so that the new special fiber is a stable pair. In fact, we will recover some of the pairs containing special curves constructed in the previous section. The stable reduction process will involve applying the relative log minimal model program. We describe some examples below.
\vspace{.1in}
\subsection{Marked plane quintics of type (1,1,1,1,1)} \begin{proposition}
There exist quadric sections of $\Sigma_5$ whose unique singularity has local analytic isomorphism type $y^5=x^5$.
\end{proposition}
\begin{proof}
Let $p_1, \dots, p_4$ denote points in $\mathbb{P}^2$ in general position. Choose a general fifth point $p_5$; the five points determine a smooth irreducible plane conic. Consider the union of this conic with the four lines connecting $p_5$ to each of the other $p_i$. This is a reducible plane sextic curve with $5$ components meeting pairwise transversely at $p_5$. Blowing up $p_1, \dots, p_4$ and anti-canonically embedding the resulting surface in $\mathbb{P}^5$ recovers $\Sigma_5$ with a quadric section of the desired singularity type.
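Explicitly, writing $H$ for the pullback of the hyperplane class and $E_1,\dots,E_4$ for the exceptional divisors on $\Sigma_5$, so that $-K_{\Sigma_5}=3H-\sum_i E_i$, the strict transform of the conic has class $2H-\sum_i E_i$ and the line through $p_5$ and $p_i$ has strict transform $H-E_i$; hence the curve constructed above has class
\begin{equation*}
\Big(2H-\sum_{i=1}^{4}E_i\Big)+\sum_{i=1}^{4}\big(H-E_i\big)=6H-2\sum_{i=1}^{4}E_i=-2K_{\Sigma_5},
\end{equation*}
so it is indeed a quadric section of $\Sigma_5\subset\mathbb{P}^5$.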
\end{proof}
\begin{remark}
For future reference, let $C_0$ denote a curve in $\Sigma_5$ with this singularity type. Note that the log canonical threshold of the pair $(\Sigma_5, C_0)$ is $2/5<1/2$, hence this pair cannot be stable.
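Indeed, let $\pi$ denote the blow-up of $\Sigma_5$ at the quintuple point of $C_0$, with exceptional curve $E$; then
\begin{equation*}
\pi^*\big(K_{\Sigma_5}+\lambda C_0\big)=K+\lambda\,\tilde C_0+(5\lambda-1)E,
\end{equation*}
where $\tilde C_0$ is the strict transform and $K$ the canonical divisor of the blow-up. Since the five branches of $\tilde C_0$ are smooth, pairwise disjoint, and meet $E$ transversely, the pair $(\Sigma_5,\lambda C_0)$ is log canonical precisely when $5\lambda-1\le 1$, i.e. $\lambda\le 2/5$.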
\end{remark}
\begin{proposition} \label{pl_q_mmp}
Let $(\mathcal{S}, \mathcal{C}) \rightarrow T$ be a family of surface-curve pairs over the germ of a smooth curve such that the generic fiber is a smooth quadric section of $\Sigma_5$ and the special fiber is $(\Sigma_5, C_0)$. There exists a family $(\mathcal{S'}, \mathcal{C'}) \rightarrow T'$ satisfying the following:
\begin{enumerate}
\item The generic fiber is isomorphic to the generic fiber of the original family.
\item The special fiber is a stable pair with a unique $\frac{1}{4}(1,1)$ singularity and marked curve isomorphic to a smooth plane quintic.
\end{enumerate}
\end{proposition}
\begin{proof}
We first run local stable reduction for the singularity of $C_0$ in the special fiber. We view $(\mathcal{S}, \mathcal{C})\rightarrow T$ as a family of surfaces $\mathcal{S}$ containing $\mathcal{C}$. We perform a base change $t \mapsto t^5$, where $t$ is a uniformizing parameter of $T$. We denote this finite cover of $T$ by $T'$. We then blow up $\mathcal{S}$ at the singular point of $C_0$.
This process yields a reducible surface $S_1 \cup S_2$ in the central fiber of the modified family. Let the double curve on $S_i$ be denoted by $B_i$. The surface $S_1$ is isomorphic to $\Sigma_4$ (a degree $4$ del Pezzo surface) marked with $C_1$ (the strict transform of $C_0$). A local computation shows that $S_2$ is isomorphic to $\mathbb{P}^2$ marked with a smooth plane quintic $C_2$ meeting $B_2$ transversely. On $S_1$, the curve $B_1$ is the exceptional divisor when we blow up $\Sigma_5$ at the singular point of $C_0$, and on $S_2$, the curve $B_2$ is the hyperplane class.
Note that the special fiber of the resulting family is still not a stable pair. Consider the components of $C_1$, denoted $F_i$ for $i=1, \dots, 5$. If $H$ is the pullback of the hyperplane class from $\mathbb{P}^2$ to $\Sigma_4$ and the $E_i$ are the exceptional divisors, we see explicitly:
\begin{enumerate}
\item $F_i=H-E_i-E_5$ for $i=1, \dots, 4$.
\item $F_5=2H-\underset{j=1}{\overset{5}{\sum}}E_j$.
\end{enumerate}
These $F_i$ are all irreducible $(-1)$-curves and hence span extremal rays in the closure of the cone of effective curves $\overline {NE}(S_1)$. Consequently, these curves also span extremal rays in the closure of the relative cone of curves for our modified family.
The $F_i$ are all $K_{S_1}+\alpha C_1+B_1$-negative for all $\alpha>1/2$ by adjunction. We explicitly construct flips of these curves. Note that after we flip one of these, each of the remaining $F_i$ can still be flipped via the same construction. A standard normal bundle computation shows that blowing up any one of the $F_i$ yields an exceptional divisor isomorphic to $\mathbb{P}^1 \times \mathbb{P}^1$, realizing the curve as one of the rulings. Projecting to the other ruling (this requires the contraction theorem) contracts $F_i$ on $S_1$ and blows up the point $B_2 \cap F_i$ on $S_2$.
Flipping all of the $F_i$ in this way yields a new surface $S'_1 \cup S'_2$, where $S'_1$ is isomorphic to $\mathbb{P}^2$ and $S'_2$ is isomorphic to $\mathbb{P}^2$ blown up at $5$ collinear points. Note that $S'_1$ has no marked curve and $S'_2$ is still marked with a curve isomorphic to a smooth plane quintic $C'_2$. The curve $B_1$ becomes a conic $B'_1$ in $S'_1$ after these flips. Hence the hyperplane class $H'$ in $S'_1$ is negative with respect to
\begin{equation*}
K_{S'_1}+B'_1=-H',
\end{equation*}
which induces a divisorial contraction of $S'_1$. We are left with a surface $S''_2$, which is simply the contraction of the $(-4)$-curve $B'_2$ (the strict transform of $B_2$ after the flips) on $S'_2$. Hence $S''_2$ has a unique cyclic quotient singularity of type $\frac{1}{4}(1,1)$.
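Indeed, $B_2$ is a line in $S_2\cong\mathbb{P}^2$, and each of the five flips blows up one point of $B_2$, so
\begin{equation*}
(B'_2)^2=B_2^2-5=1-5=-4,
\end{equation*}
which is why the contraction produces a singularity of type $\frac{1}{4}(1,1)$.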
\end{proof}
\begin{remark}
We note that $S''_2$ is precisely the surface constructed in ~(\ref{pl_q}) corresponding to marked plane quintics of type $(1,1,1,1,1)$.
\end{remark}
\vspace{.2in}
\subsection{Marked trigonal curves of type $(2; [4])$}
\begin{proposition}
There exist quadric sections of $\Sigma_5$ whose unique singularity has local analytic isomorphism type $y^3=x^7$.
\end{proposition}
\begin{proof}
By {\cite[1.10]{De90}}, there exists a plane sextic curve with such a singularity as well as four nodes in general position. Blowing up these nodes and anti-canonically embedding the resulting surface in $\mathbb{P}^5$ recovers $\Sigma_5$ with a quadric section of the desired singularity type.
\end{proof}
\begin{remark}
For future reference, we will denote by $C_0$ a curve with this singularity type. The log canonical threshold of the pair $(\Sigma_5, C_0)$ is less than $1/2$, hence this pair cannot be stable.
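Indeed, the log canonical threshold of the quasi-homogeneous singularity $y^3=x^7$ is $\min\{1,\tfrac13+\tfrac17\}$, and $C_0$ is smooth away from this point, so
\begin{equation*}
\mathrm{lct}(\Sigma_5, C_0)=\frac{1}{3}+\frac{1}{7}=\frac{10}{21}<\frac{1}{2}.
\end{equation*}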
\end{remark}
\begin{proposition}
Let $(\mathcal{S}, \mathcal{C}) \rightarrow T$ be a family of surface-curve pairs over the germ of a smooth curve such that the generic fiber is a smooth quadric section of $\Sigma_5$ and the special fiber is $(\Sigma_5, C_0)$. There exists a family $(\mathcal{S'}, \mathcal{C'}) \rightarrow T'$ satisfying the following:
\begin{enumerate}
\item The generic fiber is isomorphic to the generic fiber of the original family.
\item The special fiber is a stable pair with a unique $\frac{1}{20}(1,9)$ singularity and marked curve isomorphic to a smooth genus six trigonal curve.
\end{enumerate}
\end{proposition}
\begin{proof}
Running local stable reduction for the family (see \cite{Has00}) yields a reducible surface $S=S_1 \cup S_2$ in the central fiber. Define $B_i$ as in Proposition~\ref{pl_q_mmp}. The surface $S_1$ is constructed by computing the embedded resolution of $C_0$ and contracting the exceptional divisors disjoint from its strict transform $C_1$. Let $F_i$, $i=1,2,3,4,5$, denote the exceptional divisors for this embedded resolution, where the indexing indicates the order in which these divisors appear as we repeatedly blow up points. In particular, $F_5$ is the exceptional curve that is not contracted after the embedded resolution. We see that $S_1$ has two cyclic quotient singularities of type $\frac{1}{7}(1,4)$ and $\frac{1}{3}(1,2)$ along $B_1=F_5$. The surface $S_2$ is isomorphic to the weighted projective space $\mathbb{P}(7,3,1)$, which has two cyclic quotient singularities of type $\frac{1}{7}(1,3)$ and $\frac{1}{3}(1,1)$ along $B_2$. Note that the singular points of $S_1$ are also singular points of $S_2$. The curve $C_2 \subset S_2$ is a smooth trigonal curve of genus six, avoiding the singularities and meeting $B_2$ transversely at one point.
Note that by adjunction applied to $C_1$ in $S_1$, the pair $(S_1 \cup S_2, C_1 \cup C_2)$ is not stable. The embedded resolution computation also reveals that $C_1$ is an irreducible $(-1)$-curve. Flipping $C_1$ as in Proposition~\ref{pl_q_mmp} amounts to contracting $C_1$ on $S_1$ while blowing up the point $C_2 \cap B_2$ on $S_2$.
We denote the resulting reducible surface by $S'_1 \cup S'_2$. We will show that the divisor
\begin{equation*}
-K_{S'_1}-F'_5
\end{equation*}
is ample, where $F'_5$ denotes the image of $F_5$ in $S'_1$ after flipping $C_1$. Let $\pi_1: S_1 \rightarrow \Sigma_5$ denote the sequence of blow-ups required for the embedded resolution of $C_0$, and let $\pi_2: S_1 \rightarrow S'_1$ denote the contraction of $F_i$ for $i=1,2,3,4$ and $C_1$. We note that $S'_1$ is $\mathbb{Q}$-factorial, so pulling back divisors makes sense. We compute
\begin{equation}
\pi_1^*(-2K_{\Sigma_5})=-2K_{S_1}+2F_1+4F_2+6F_3+12F_4+18F_5.
\end{equation}
On the other hand,
\begin{equation}
\pi_1^*(-2K_{\Sigma_5})=\pi_1^*(C_0)=C_1+3F_1+6F_2+7F_3+14F_4+21F_5,
\end{equation}
hence
\begin{equation} \label{5.3}
-2K_{S_1}=C_1+F_1+2F_2+F_3+2F_4+3F_5.
\end{equation}
Next, using ~(\ref{5.3}), we see that
\begin{equation} \label{5.4}
\pi_2^*(-2K_{S'_1}-2F'_5)=C_1+\frac{1}{7}F_1+\frac{2}{7}F_2+\frac{1}{3}F_3+\frac{2}{3}F_4+F_5.
\end{equation}
Now, let $C' \neq F'_5$ be a curve in $S'_1$. We want to show that the strict transform of $C'$ under $\pi_2$, henceforth denoted $\tilde C'$, is positive against the divisor in ~(\ref{5.4}). Suppose further that $\tilde C'$ is not any of the $F_i$ or $C_1$; then it is non-negative along each of these divisors. Suppose by contradiction that $\tilde C'$ is trivial along the $F_i$ and $C_1$. By construction, $\tilde C'$ is not $\pi_1$--exceptional, hence triviality against $C_1$ means that the scheme-theoretic intersection of the images of these two curves on $\Sigma_5$ is supported on the singular point of $C_0$. However, triviality of $\tilde C'$ against the $F_i$ implies that the image of $\tilde C'$ on $\Sigma_5$ is disjoint from $C_0$. This is absurd, since $C_0$ is an ample divisor. We have shown that $\tilde C'$ is indeed positive against the divisor in ~(\ref{5.4}).
It follows from this discussion that to verify ampleness of $-K_{S'_1}-F'_5$, it is enough to check that the divisor in ~(\ref{5.4}) is positive against $F_5$, which it is. Hence we can divisorially contract $S'_1$ and we are left with a surface $S''_2$ with the desired cyclic quotient singularity.
\end{proof}
\begin{remark}
Let $\phi_2: S'_2 \rightarrow S''_2$ denote the minimal resolution of $S''_2$. By explicitly blowing down $(-1)$-curves on $S'_2$, we obtain $\mathbb{F}_2$. Moreover, we see that $S''_2$ is precisely the surface constructed in the proof of Theorem~\ref{mainthm} associated to marked smooth trigonal curves of genus six and type $(2; [4])$.
We also note that the divisorial contraction of $S'_1$ in this proof is of relative Picard number $5$, and the total space of the output family is not $\mathbb{Q}$-factorial.
\end{remark}
\vspace{.2in}
\subsection{Marked trigonal curves of type $(0; 1,1,1,1)$} Consider a triple plane conic $D$. Blow up four points of the conic in general position in $\mathbb{P}^2$ to recover $\Sigma_5$, and consider the union of the strict transform $\tilde D$ with the exceptional divisors $E_i$. The resulting reducible curve $D'$ has class $-2K_{\Sigma_5}$.
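Indeed, writing $H$ for the pullback of the hyperplane class, the strict transform of the underlying reduced conic has class $2H-\sum_{i=1}^{4}E_i$, so
\begin{equation*}
D'=3\Big(2H-\sum_{i=1}^{4}E_i\Big)+\sum_{i=1}^{4}E_i=6H-2\sum_{i=1}^{4}E_i=-2K_{\Sigma_5}.
\end{equation*}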
Consider a family of smooth quadric sections of $\Sigma_5$ degenerating to $D'$ as in the prior examples. Blow up the resulting family of surfaces along $\tilde D_{\text{red}}$. The exceptional divisor of this blow-up is isomorphic to $\mathbb{P}^1 \times \mathbb{P}^1$. So the new central fiber is a reducible surface $S_1 \cup S_2$, where $S_1 \cong \Sigma_5$ and $S_2 \cong \mathbb{P}^1 \times \mathbb{P}^1$. These surfaces are attached along one of the rulings of $\mathbb{P}^1 \times \mathbb{P}^1$. Each of the $E_i$ intersects the double curve at a single point. The strict transform of the blown up curve lies in $\mathbb{P}^1 \times \mathbb{P}^1$ and meets the double curve transversely in four points; these are precisely the intersection points of the $E_i$ with the double curve.
By adjunction applied to each $E_i$ in $S_1$, the reducible surface $S_1 \cup S_2$ and its marked curve do not form a stable pair. After flipping the $E_i$ as in Proposition~\ref{pl_q_mmp}, we obtain a reducible surface where one component is isomorphic to $\mathbb{P}^2$ and the other component is isomorphic to $\mathbb{P}^1 \times \mathbb{P}^1$ blown up at four points along a ruling. We can divisorially contract the $\mathbb{P}^2$ component as in Proposition~\ref{pl_q_mmp}. This amounts to contracting the $(-4)$-curve on this blow up of $\mathbb{P}^1 \times \mathbb{P}^1$, and we obtain the expected surface.
\vspace{.1in}
\subsection{Bielliptic curves} Consider a double plane cubic $D$. Blow up four points of the cubic in general position in $\mathbb{P}^2$ to recover $\Sigma_5$, and consider the strict transform $\tilde D$, which has class $-2K_{\Sigma_5}$. Consider a family of smooth quadric sections of $\Sigma_5$ degenerating to $\tilde D$ as in the prior examples. Blow up this family of surfaces along $\tilde D_{\text{red}}$. The exceptional divisor will be isomorphic to the minimal resolution of an elliptic cone of degree $5$. So the new central fiber consists of a reducible surface $S_1 \cup S_2$, where $S_1 \cong \Sigma_5$ and $S_2$ is isomorphic to the resolution of this cone. The strict transform of the blown-up curve lies in the exceptional divisor, disjoint from the double curve.
Let the double curve on $S_i$ be denoted by $B_i$ as in Proposition~\ref{pl_q_mmp}. Since $K_{S_1}+B_1$ is trivial, by taking the canonical model for the family, we can contract $S_1$. We obtain the expected elliptic cone of degree $5$.
\begin{acknowledgments}
I am extremely grateful to my advisor, Maksym Fedorchuk, for introducing me to this project and for his guidance and patience throughout. I would also like to thank Brian Lehmann for very helpful discussions regarding the minimal model program. I am also grateful to Changho Han for insightful conversations about the theory of stable pairs.
\end{acknowledgments}
\thispagestyle{empty}
{\small
\markboth{References}{References}
\bibliographystyle{amsalpha}
| {
"timestamp": "2019-12-11T02:02:32",
"yymm": "1812",
"arxiv_id": "1812.10211",
"language": "en",
"url": "https://arxiv.org/abs/1812.10211",
"abstract": "A general smooth curve of genus six lies on a quintic del Pezzo surface. In \\cite{AK11}, Artebani and Kondō construct a birational period map for genus six curves by taking ramified double covers of del Pezzo surfaces. The map is not defined for special genus six curves. In this paper, we construct a smooth Deligne-Mumford stack $\\mathfrak{P}_0$ parametrizing certain stable surface-curve pairs which essentially resolves this map. Moreover, we give an explicit description of pairs in $\\mathfrak{P}_0$ containing special curves.",
"subjects": "Algebraic Geometry (math.AG)",
"title": "Genus six curves, K3 surfaces, and stable pairs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109494088426,
"lm_q2_score": 0.7185944046238981,
"lm_q1q2_score": 0.7076796378575431
} |
https://arxiv.org/abs/1312.2309 | A Weak Galerkin Finite Element Method for the Maxwell Equations | This paper introduces a numerical scheme for time harmonic Maxwell's equations by using weak Galerkin (WG) finite element methods. The WG finite element method is based on two operators: discrete weak curl and discrete weak gradient, with appropriately defined stabilizations that enforce a weak continuity of the approximating functions. This WG method is highly flexible by allowing the use of discontinuous approximating functions on arbitrary shape of polyhedra and, at the same time, is parameter free. Optimal-order of convergence is established for the weak Galerkin approximations in various discrete norms which are either $H^1$-like or $L^2$ and $L^2$-like. An effective implementation of the WG method is developed through variable reduction by following a Schur-complement approach, yielding a system of linear equations involving unknowns associated with element boundaries only. Numerical results are presented to confirm the theory of convergence. | \section{Introduction}
In this paper, we are concerned with new developments of numerical
methods for the time-harmonic Maxwell equations in a heterogeneous
medium $\Omega\subset \mathbb{R}^3$. The model problem seeks unknown
functions ${\bf u}$ and $p$ satisfying
\begin{eqnarray}
\nabla\times(\mu\nabla\times {\bf u})-\epsilon\nabla p &=&{\bf f}_1\quad \mbox{in}\;\Omega,\label{moment1}\\
\nabla\cdot(\epsilon{\bf u})&=&g_1\quad\mbox{in}\;\Omega,\label{cont1}\\
{\bf u}\times{\bf n} &=& \phi\quad\mbox{on}\;\partial\Omega,\label{bcc1}\\
p&=&0\quad\mbox{on}\;\partial\Omega,\label{bc1}
\end{eqnarray}
where the coefficients $\mu>0$ and $\epsilon>0$ are the magnetic
permeability and the electric permittivity of the medium,
respectively.
A weak formulation for (\ref{moment1})-(\ref{bc1}) seeks $({\bf u}, p)
\in H(\hbox{curl};\Omega) \times H_0^1(\Omega)$ such that
${\bf u}\times{\bf n}=\phi$ on $\partial\Omega$ and
\begin{eqnarray}
(\nu\nabla\times{\bf u},\ \nabla\times{\bf v})-({\bf v},\nabla p)&=&({\bf f},\ {\bf v}),
\quad \forall {\bf v} \in H_0(\hbox{curl};\Omega)\label{w1}\\
({\bf u},\nabla q)&=&-(g,q),\quad\forall q\in H_0^1(\Omega),\label{w2}
\end{eqnarray}
where $\nu=\mu/\epsilon$, ${\bf f}={\bf f}_1/\epsilon$ and
$g=g_1/\epsilon$.
The Maxwell equations have been studied extensively in literature by
using various numerical methodologies including
$H(\hbox{curl};\Omega)$-conforming edge element approaches
\cite{boss,jin,monk, Nedelec, Nedelec1} and discontinuous Galerkin
methods \cite{bls,bcnl, hps,hps-1,ps,psm}. Particularly in
\cite{hps-2}, a mixed DG formulation for the problem
(\ref{moment1})-(\ref{bc1}) was introduced and analyzed. In this DG
formulation, both ${\bf u}$ and $p$ are approximated by piecewise
$[P_k(T)]^3$ and $P_k(T)$ functions if $T$ is a tetrahedron and by
piecewise $[Q_k(T)]^3$ and $Q_k(T)$ if $T$ is a parallelepiped,
where $P_k(T)$ denotes the set of polynomials of total degree $k$
and $Q_k(T)$ the set of polynomials of degree $k$ in each variable.
The weak Galerkin (WG) finite element method refers to a general
finite element technique for partial differential equations where
differential operators are approximated as discrete distributions or
discrete weak derivatives. The method was first introduced in
\cite{wy, wy-mixed} for second order elliptic equations, and was
later extended to other partial differential equations including the
Stokes equations \cite{wang-ye-stokes} and the biharmonic equation
\cite{mu-wang-ye-biharmonic, cwang-jwang-biharmonic}. The current
research indicates that the concept of discrete weak differential
operators offers a new paradigm in numerical methods for partial
differential equations.
In this paper, we apply the idea of weak Galerkin to the problem
(\ref{moment1})-(\ref{bc1}). In essence, this procedure shall
introduce a discrete curl operator, which shall be combined with the
discrete weak gradient as introduced in \cite{wy} to yield a finite
element scheme for the Maxwell equations. In this WG method, two
types of weak functions are used: ${\bf u}_h=\{{\bf u}_0,{\bf u}_b\}\in
[P_s(T)]^3\times [P_t(e)]^3$ and $p_h=\{p_0,p_b\}\in P_\ell(T)\times
P_\iota(e)$, with ${\bf u}_h={\bf u}_0$ and $p_h=p_0$ inside of each element
and ${\bf u}_h={\bf u}_b$ and $p_h=p_b$ on the boundary of the element.
Error estimates of optimal order are established for the WG
approximations in appropriate norms for the case of $s=t=k$ and
$\ell=k-1$, $\iota=k$ with $k\ge 1$. For the case of $s=t=k$ and
$\ell=\iota=k-1$, only numerical experiments are conducted to
illustrate the performance of the corresponding WG finite element
scheme; a theoretical study of this WG method is left to interested readers.
The use of weak functions and weak derivatives makes the WG method
highly flexible in the construction of finite element functions on partitions with arbitrary polygons or polyhedra. Compared with the
DG method in \cite{hps-2}, our WG methods make use of additional
variables ${\bf u}_b$ and $p_b$ defined on the boundary of the elements.
However, the variables ${\bf u}_0$ and $p_0$ defined on each element can
be eliminated through a local process/computation, yielding a system
of linear equations involving only the variables ${\bf u}_b$ and $p_b$.
Consequently, the WG method has far fewer globally coupled unknowns than DG methods. In addition, the weak Galerkin finite
element method is parameter independent in its stability and
convergence.
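Schematically, if the discrete linear system is written in block form with respect to the interior unknowns $({\bf u}_0,p_0)$ and the boundary unknowns $({\bf u}_b,p_b)$, say
\begin{equation*}
\begin{pmatrix} A_{00} & A_{0b}\\ A_{b0} & A_{bb}\end{pmatrix}
\begin{pmatrix} x_0\\ x_b\end{pmatrix}
=
\begin{pmatrix} f_0\\ f_b\end{pmatrix},
\end{equation*}
then $A_{00}$ is block diagonal with one block per element, and the elimination mentioned above amounts to solving the Schur complement problem
\begin{equation*}
\big(A_{bb}-A_{b0}A_{00}^{-1}A_{0b}\big)\,x_b=f_b-A_{b0}A_{00}^{-1}f_0,
\end{equation*}
followed by element-by-element back substitution for $x_0$. The block notation here is only schematic; the precise reduction is carried out in Section \ref{Section:Schur}.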
The paper is organized as follows. In Section
\ref{Section:preliminaries}, we introduce some basic notations. In
Section \ref{Section:wg-wd}, we discuss some discrete weak
differential operators, particularly a discrete weak curl. Section
\ref{Section:WGFEM} is devoted to a presentation of the weak
Galerkin finite element scheme for the problem
(\ref{w1})-(\ref{w2}). In Section \ref{Section:ErrorEquation}, we
derive an error equation for the WG finite element approximation. In
Section \ref{Section:L2Projections}, we introduce two types of $L^2$
projection operators and derive some estimates for them. Sections
\ref{Section:H1ErrorEstimates} and \ref{Section:L2ErrorEstimates}
are devoted to an error analysis for the WG finite element
approximations. In Section \ref{Section:Schur}, we discuss an
efficient implementation method by using variable
reductions/elimination. Finally in Section \ref{Section:NE}, we
present some numerical results that verify the theory established in
the previous sections.
\section{Preliminaries and Notations}\label{Section:preliminaries}
Let $D$ be any open bounded domain with Lipschitz continuous
boundary in $\mathbb{R}^3$. We use the standard definition for the
Sobolev space $H^s(D)$ and their associated inner products
$(\cdot,\cdot)_{s,D}$, norms $\|\cdot\|_{s,D}$, and seminorms
$|\cdot|_{s,D}$ for any $s\ge 0$. For example, for any integer $s\ge
0$, the seminorm $|\cdot|_{s, D}$ is given by
$$
|v|_{s, D} = \left( \sum_{|\alpha|=s} \int_D |\partial^\alpha v|^2
dD \right)^{\frac12}
$$
with the usual notation
$$
\alpha=(\alpha_1, \alpha_2, \alpha_3), \quad |\alpha| = \alpha_1+\alpha_2+\alpha_3,\quad
\partial^\alpha =\prod_{j=1}^3\partial_{x_j}^{\alpha_j}.
$$
The Sobolev norm $\|\cdot\|_{m,D}$ is given by
$$
\|v\|_{m, D} = \left(\sum_{j=0}^m |v|^2_{j,D} \right)^{\frac12}.
$$
The space $H^0(D)$ coincides with $L^2(D)$, for which the norm and
the inner product are denoted by $\|\cdot \|_{D}$ and
$(\cdot,\cdot)_{D}$, respectively. When $D=\Omega$, we shall drop
the subscript $D$ in the norm and inner product notation.
The space $H(\hbox{curl};D)$ is defined as the set of vector-valued
functions on $D$ which, together with their curl, are square
integrable; i.e.,
\[
H({\rm curl}; D)=\left\{ {\bf v}: \ {\bf v}\in [L^2(D)]^3, \nabla\times{\bf v} \in
[L^2(D)]^3\right\}.
\]
\section{Weak Derivatives}\label{Section:wg-wd}
The two differential operators used in (\ref{w1}) and (\ref{w2}) are
curl and gradient operators. The goal of this section is to
introduce analogues of the curl and gradient operators, called the weak curl and weak gradient operators, for functions that are
discontinuous.
\subsection{Weak gradient and discrete weak gradient}
The concept of weak gradient and its discrete analogue was
introduced in \cite{wy}. This subsection is included for the sake of completeness.
Let $K$ be any polyhedral domain with boundary $\partial K$. A weak
function on the region $K$ refers to a function $v=\{v_0, v_b\}$
such that $v_0\in L^2(K)$ and $v_b\in L^2(\partial K)$. The first
component $v_0$ can be understood as the value of $v$ in $K$, and
the second component $v_b$ represents $v$ on the boundary of $K$.
Note that $v_b$ need not be related to the trace of $v_0$ on $\partial K$, should such a trace be well-defined. Denote by
${\mathcal W}(K)$ the space of weak functions on $K$; i.e.,
\begin{equation}\label{hi.888}
{\mathcal W}(K):= \{v=\{v_0, v_b \}:\ v_0\in L^2(K),\; v_b\in L^2(\partial
K)\}.
\end{equation}
The weak gradient operator is defined as follows.
\medskip
\begin{defi} (Weak Gradient)
The dual of $L^2(K)$ can be identified with itself by using the
standard $L^2$ inner product as the action of linear functionals.
With a similar interpretation, for any $v\in {\mathcal W}(K)$, the weak
gradient of $v$ is defined as a linear functional $\nabla_w v$ in
the dual space of $[H^1(K)]^3$ whose action on each $q\in
[H^1(K)]^3$ is given by
\begin{equation}\label{wg}
(\nabla_w v, q)_K := -(v_0, \nabla\cdot q)_K + \langle v_b,
q\cdot{\bf n}\rangle_{\partial K},
\end{equation}
where ${\bf n}$ is the outward normal direction to $\partial K$,
$(v_0,\nabla\cdot q)_K=\int_K v_0 (\nabla\cdot q)dK$ is the $L^2$
inner product of $v_0$ and $\nabla\cdot q$, and $\langle v_b,
q\cdot{\bf n}\rangle_{\partial K}$ is the $L^2$ inner product of
$q\cdot{\bf n}$ and $v_b$ in $L^2(\partial K)$.
\end{defi}
The Sobolev space $H^1(K)$ can be embedded into the space ${\mathcal W}(K)$ by
an inclusion map $i_{\mathcal W}: \ H^1(K)\to {\mathcal W}(K)$ defined as follows
$$
i_{\mathcal W}(\phi) = \{\phi|_{K}, \phi|_{\partial K}\},\qquad \phi\in H^1(K).
$$
With the help of the inclusion map $i_{\mathcal W}$, the Sobolev space $H^1(K)$
can be viewed as a subspace of ${\mathcal W}(K)$ by identifying each $\phi\in
H^1(K)$ with $i_{\mathcal W}(\phi)$.
\medskip
Let $P_{r}(K)$ be the set of polynomials on $K$ with degree no more
than $r$.
\begin{defi}
(Discrete Weak Gradient) The discrete weak gradient operator, denoted by
$\nabla_{w,r, K}$, is defined as the unique polynomial
$(\nabla_{w,r, K}v) \in [P_r(K)]^3$ satisfying the following
equation
\begin{equation}\label{d-g}
(\nabla_{w,r, K}v, q)_K = -(v_0,\nabla\cdot q)_K+ \langle v_b,
q\cdot{\bf n}\rangle_{\partial K},\qquad \forall q\in [P_r(K)]^3.
\end{equation}
\end{defi}
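In computations, (\ref{d-g}) is a small problem local to each element: the coefficients of $\nabla_{w,r, K}v$ in a basis of $[P_r(K)]^3$ solve a mass-matrix system whose right-hand side is assembled from $v_0$ and $v_b$. The following fragment is only an illustrative sketch, written in Python for a single triangle in two dimensions and for the lowest case $r=0$, where the volume term vanishes because constant vector fields are divergence free; it is not the implementation used later in this paper (which works in three dimensions with $k\ge 1$).
\begin{verbatim}
import numpy as np

# Illustrative sketch only: the discrete weak gradient for r = 0 on a
# single 2D triangle K.  Constant test vectors q satisfy div q = 0, so
# the defining relation reduces to |K| * grad_w = sum_edges int_e v_b n ds.
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # counterclockwise

def weak_gradient_r0(vb_edge_values, verts=verts):
    # vb_edge_values[i] holds the values of v_b at the two endpoints of
    # edge i; v_b is assumed linear on each edge, so the trapezoid rule
    # below integrates it exactly.
    area = 0.5 * abs((verts[1, 0] - verts[0, 0]) * (verts[2, 1] - verts[0, 1])
                     - (verts[1, 1] - verts[0, 1]) * (verts[2, 0] - verts[0, 0]))
    g = np.zeros(2)
    for i in range(3):
        a, b = verts[i], verts[(i + 1) % 3]
        t = b - a
        n = np.array([t[1], -t[0]])   # outward normal scaled by edge length
        g += 0.5 * (vb_edge_values[i][0] + vb_edge_values[i][1]) * n
    return g / area

# Consistency check: if v_b is the trace of v(x, y) = x, the weak gradient
# reproduces the classical gradient (1, 0).
vb = [(0.0, 1.0), (1.0, 0.0), (0.0, 0.0)]
print(weak_gradient_r0(vb))   # approximately [1. 0.]
\end{verbatim}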
\subsection{Weak curl and discrete weak curl}\label{Section:WeakCurl}
To define weak curl, we require weak functions ${\bf v}=\{{\bf v}_0,
{\bf v}_b\}$ such that ${\bf v}_0\in [L^2(K)]^3$ and ${\bf v}_b\times{\bf n}\in
[L^2(\partial K)]^3$. The first component ${\bf v}_0$ can be understood
as the value of ${\bf v}$ in $K$. The second component ${\bf v}_b$
represents the value of ${\bf v}$ on the boundary of $K$.
Denote by ${\mathcal V}(K)$ the space of vector-valued weak functions on $K$;
i.e.,
\begin{equation}\label{hi.999}
{\mathcal V}(K) = \{{\bf v}=\{{\bf v}_0, {\bf v}_b \}:\ {\bf v}_0\in [L^2(K)]^3,\;
{\bf v}_b\times{\bf n}\in [L^{2}(\partial K)]^3\}.
\end{equation}
Then, we define a weak curl operator as follows.
\medskip
\begin{defi}
(Weak Curl) The dual of $[L^2(K)]^3$ can be identified with itself by using the
standard $L^2$ inner product as the action of linear functionals.
With a similar interpretation, for any ${\bf v}\in {\mathcal V}(K)$, the weak
curl of ${\bf v}$ is defined as a linear functional $\nabla_w
\times{\bf v}$ in the dual space of $[H^1(K)]^3$ whose action on each
$\varphi\in [H^1(K)]^3$ is given by
\begin{equation}\label{w-c}
(\nabla_w\times{\bf v}, \varphi)_K := ({\bf v}_0, \nabla\times\varphi)_K
+\langle {\bf v}_b\times{\bf n}, \varphi\rangle_{\partial K},
\end{equation}
where ${\bf n}$ is the outward normal direction to $\partial K$,
$({\bf v}_0,\nabla\times\varphi)_K=\int_K {\bf v}_0\cdot\nabla\times\varphi
dK$ is the $L^2$ inner product of ${\bf v}_0$ and $\nabla\times\varphi$,
and $\langle {\bf v}_b\times{\bf n}, \varphi\rangle_{\partial K}$ is the
inner product in $L^2(\partial K)$.
\end{defi}
The Sobolev space $[H^1(K)]^3$ can be embedded into the space ${\mathcal V}(K)$
by an inclusion map $i_{\mathcal V}: \ [H^1(K)]^3\to {\mathcal V}(K)$ defined as follows
$$
i_{\mathcal V}(\phi) =\{\phi|_{K}, \phi|_{\partial K}\},\qquad \phi\in [H^1(K)]^3.
$$
Let $K$ be any polyhedral domain with boundary $\partial K$. For
each face $e\in\partial K$, let ${\bf t}_1$ and ${\bf t}_2$ be two assigned
unit vectors on the face $e$ such that ${\bf t}_1$, ${\bf t}_2$ and ${\bf n}$
are orthogonal to each other. Thus, we may write
${\bf v}_b|_e=v_1{\bf t}_1+v_2{\bf t}_2+v_n{\bf n}$. Define
$\bar{{\bf v}}_b=v_1{\bf t}_1+v_2{\bf t}_2$.
Obviously, $\bar{{\bf v}}_b\times{\bf n}={\bf v}_b\times{\bf n}$. Since the quantity
of interest is not ${\bf v}_b$ but ${\bf v}_b\times{\bf n}$, we will let
${\bf v}_b=\bar{{\bf v}}_b$ in order to reduce the number of unknowns.
\medskip
\begin{defi} (Discrete Weak Curl)
For a given $K$, a discrete weak curl operator,
denoted by $\nabla_{w,r,K}\times$, is defined as the unique polynomial
$(\nabla_{w,r,K}\times{\bf v}) \in [P_r(K)]^3$ that satisfies the following
equation
\begin{equation}\label{d-c}
(\nabla_{w,r,K}\times{\bf v}, \varphi)_K := ({\bf v}_0, \nabla\times\varphi)_K
+ \langle {\bf v}_b\times{\bf n}, \varphi\rangle_{\partial K},\qquad
\forall \varphi\in [P_r(K)]^3.
\end{equation}
\end{defi}
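In the same spirit, the lowest-order case of (\ref{d-c}) is explicit: for $r=0$, every constant test field $\varphi$ satisfies $\nabla\times\varphi=0$, so
\begin{equation*}
\nabla_{w,0,K}\times{\bf v}=\frac{1}{|K|}\int_{\partial K}{\bf v}_b\times{\bf n}\,ds,
\end{equation*}
i.e. the discrete weak curl is determined entirely by the tangential boundary data ${\bf v}_b\times{\bf n}$. This identity is recorded only as an illustration of (\ref{d-c}); the scheme below uses $r=k-1$ with $k\ge 1$.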
\section{Numerical Algorithms}\label{Section:WGFEM}
Let ${\cal T}_h$ be a partition of the domain $\Omega$ with mesh
size $h$ that consists of polyhedra of arbitrary shape. Assume that
the partition ${\cal T}_h$ is shape regular in the sense as defined
in \cite{wy-mixed}; i.e. ${\cal T}_h$ satisfies a set of conditions
given in \cite{wy-mixed}. Denote by ${\cal E}_h$ the set of all
faces in ${\cal T}_h$, and let ${\cal E}_h^0={\cal
E}_h\backslash\partial\Omega$ be the set of all interior faces.
Let $e\in{\mathcal E}_h^0$ be shared by two elements $T_1$ and $T_2$. Let
${\bf t}_1$ and ${\bf t}_2$ be two tangential unit vectors on face $e\in
{\mathcal E}_h$. For $k\ge 1$, define the weak Galerkin finite element spaces
associated with ${\mathcal T}_h$ as
\begin{align} \label{Vh}
V_h =\Big\{ {\bf v}=\{{\bf v}_0, {\bf v}_b=v_1{\bf t}_1+v_2{\bf t}_2\} : & \ {\bf v}_0|_{T}\in
[P_k(T)]^3, \\
& v_1, v_2\in P_k(e),\ e\subset{\partial T}\Big\}, \nonumber
\end{align}
and
\begin{align} \label{Wh}
W_h =\Big\{w=\{w_0, w_b\} : \quad & \{w_0, w_b\}|_{T}\in
P_{k-1}(T)\times
P_{k}(e), \ e\subset{\partial T}, \\
& w_b=0 \ {\rm on}\ \partial\Omega\Big\}.\nonumber
\end{align}
We also introduce the following subspace of $V_h$,
\[
V_{h,0} =\left\{ {\bf v}=\{{\bf v}_0, {\bf v}_b\}\in V_h, \;
{\bf v}_b\times{\bf n}|_e=0,\ e\subset\partial\Omega \right\}.
\]
The discrete weak gradient $\nabla_{w,k}$ and the discrete weak
curl $\nabla_{w,k-1}\times$ on the finite element spaces $W_h$ and
$V_h$ can be computed by using (\ref{d-g}) and (\ref{d-c}) on each
element $T$ respectively; i.e.,
\begin{eqnarray*}
(\nabla_{w,k}v)|_T &=&\nabla_{w,k, T} (v|_T),\qquad \forall v\in
W_h\\
(\nabla_{w,k-1}\times{\bf v})|_T &=&\nabla_{w,k-1, T}\times ({\bf v}|_T),\qquad \forall {\bf v}\in V_h.
\end{eqnarray*}
For simplicity of notation, from now on we shall drop the subscript
$k$ in $\nabla_{w,k}$ and $k-1$ in $\nabla_{w,k-1}\times$ for the
discrete weak gradient and the discrete weak curl.
Corresponding to the bilinear forms in (\ref{w1})-(\ref{w2}), we
introduce the following bilinear forms:
\begin{eqnarray*}
(\nu\nabla_w\times{\bf v},\ \nabla_w\times {\bf w})_h&=&\sum_{T\in{\mathcal T}_h}(\nu\nabla_w\times {\bf v},\ \nabla_w\times {\bf w})_T\\
({\bf v},\ \nabla_w q)_h&=&\sum_{T\in{\mathcal T}_h}(\nabla_w q,\ {\bf v})_T.
\end{eqnarray*}
Furthermore, we stabilize the first one by adding an appropriate
stabilization term as follows:
\begin{eqnarray}\label{Stabilized-A-form}
a({\bf v},\ {\bf w})&=&(\nu\nabla_w\times{\bf v},\
\nabla_w\times{\bf w})_h+s_1({\bf v},{\bf w}),
\end{eqnarray}
where
\begin{eqnarray}\label{Stabilization-T1}
s_1({\bf v},\;{\bf w}) &=& \sum_{T\in {\cal T}_h}h^{-1}{\langle}
({\bf v}_0-{\bf v}_b)\times{\bf n},\;\;({\bf w}_0-{\bf w}_b)\times{\bf n}{\rangle}_{\partial T}.
\end{eqnarray}
We also introduce the bilinear form
\begin{eqnarray}\label{B-form}
b({\bf v},\ q)&=&({\bf v}_0,\nabla_w q)_h
\end{eqnarray}
and a second stabilization term
\begin{eqnarray}\label{Stabilization-T2}
s_2(p,\;q) & = & \sum_{T\in {\cal T}_h}h{\langle}
p_0-p_b,\;\;q_0-q_b{\rangle}_{\partial T}.
\end{eqnarray}
\begin{algorithm}
Find ${\bf u}_h=\{{\bf u}_0,{\bf u}_b\}\in V_h$ and
$p_h=\{p_0,p_b\}\in W_h$ satisfying ${\bf u}_b\times {\bf n}= Q_b\phi$ on
$\partial \Omega$ and
\begin{eqnarray}
a({\bf u}_h,\ {\bf v})-b({\bf v},\;p_h)&=&({\bf f},\;{\bf v}_0),\quad\forall\ {\bf v}=\{{\bf v}_0,\; {\bf v}_b\}\in
V_{h,0},\label{wg1}\\
b({\bf u}_h,\;q)+s_2(p_h,q)&=&-(g,q_0), \quad\forall\ q=\{q_0,\; q_b\}\in
W_h,\label{wg2}
\end{eqnarray}
where $Q_b \phi$ is an approximation of the boundary value in the
polynomial space $[P_k(\partial T\cap \partial\Omega)]^3$. For
simplicity, one may take $Q_b \phi$ as the standard $L^2$ projection
of the boundary value $\phi$ on each boundary segment.
\end{algorithm}
\smallskip
\begin{lemma}\label{lemma-ue}
The weak Galerkin finite element algorithm (\ref{wg1})-(\ref{wg2})
has a unique solution.
\end{lemma}
\smallskip
\begin{proof}
It suffices to show that zero is the only solution of
(\ref{wg1})-(\ref{wg2}) if ${\bf f}=0, \phi=0,$ and $g=0$. To this end,
assume that the homogeneous conditions are given. Take ${\bf v}={\bf u}_h$
and $q=p_h$ in (\ref{wg1})-(\ref{wg2}). By adding the two resulting
equations, we obtain
\begin{eqnarray*}
(\nu{\nabla_w\times}{\bf u}_h,\ {\nabla_w\times}{\bf u}_h)_h
& + &\sum_{T\in{\mathcal T}_h}h^{-1}{\langle}({\bf u}_0-{\bf u}_b)\times{\bf n},\
({\bf u}_0-{\bf u}_b)\times{\bf n}{\rangle}_{\partial T}\\
& + & \sum_{T\in{\mathcal T}_h}h{\langle} p_0-p_b,\ p_0-p_b{\rangle}_{\partial T}=0,
\end{eqnarray*}
which implies $\nabla_w\times{\bf u}_h=0$ on each $T$,
${\bf u}_0\times{\bf n}={\bf u}_b\times{\bf n}$ and $p_0=p_b$ on ${\partial T}$. Note that
the boundary condition implies ${\bf u}_b\times{\bf n}=0$ on each
$e\subset\partial\Omega$. Then, it follows from (\ref{d-c}) and the
integration by parts that for any ${\bf v}\in [P_{k-1}(T)]^3$
\begin{eqnarray*}
0&=&({\nabla_w\times}{\bf u}_h,{\bf v})_T\\
&=&({\bf u}_0,\ {\nabla\times}{\bf v})_T+{\langle}{\bf u}_b\times{\bf n},\ {\bf v}{\rangle}_{\partial T}\\
&=&({\nabla\times}{\bf u}_0,\ {\bf v})_T+{\langle}({\bf u}_b-{\bf u}_0)\times{\bf n},\ {\bf v}{\rangle}_{\partial T}\\
&=&({\nabla\times}{\bf u}_0,\ {\bf v})_T,
\end{eqnarray*}
which gives ${\nabla\times}{\bf u}_0=0$ on each $T\in{\mathcal T}_h$. Using (\ref{wg2}),
(\ref{d-g}) and the integration by parts, we have
\begin{eqnarray*}
0&=&\sum_{T\in{\mathcal T}_h}({\bf u}_0,\nabla_w
q)_T=-\sum_{T\in{\mathcal T}_h}(\nabla\cdot{\bf u}_0,\
q_0)_T+\sum_{T\in{\mathcal T}_h}{\langle}{\bf u}_0\cdot{\bf n},\ q_b{\rangle}_{\partial T}.
\end{eqnarray*}
Letting $q_0=\nabla\cdot{\bf u}_0$ and $q_b=0$ in the above equation
yields $\nabla\cdot {\bf u}_0=0$ on each $T\in{\mathcal T}_h$. Next, by letting
$q_0=0$ and $q_b$ be the jump of ${\bf u}_0\cdot{\bf n}$ on each interior
face $e$, we conclude that ${\bf u}_0$ is continuous across each
interior face $e$ in the normal direction.
Note that $\nabla\times {\bf u}_0=0$ on each element and that ${\bf u}_0\times{\bf n}$ is continuous across interior faces (both tangential traces equal ${\bf u}_b\times{\bf n}$), so $\nabla\times{\bf u}_0=0$ holds in $H({\rm curl};\Omega)$. Thus, if $\Omega$ is simply connected, there exists a potential function $\phi$ such that ${\bf u}_0=\nabla\phi$ on $\Omega$. It follows
from $\nabla\cdot {\bf u}_0=0$ and the fact
that ${\bf u}_0\cdot{\bf n}$ is continuous
that $\Delta\phi=0$ is strongly satisfied in $\Omega$. The boundary
condition of (\ref{bcc1}) implies that
${\bf u}_0\times{\bf n}=\nabla\phi\times{\bf n}=0$ on $\partial\Omega$.
Therefore, $\phi$ must be a constant on $\partial\Omega$. The uniqueness of the solution of the Dirichlet problem for the Laplace equation then implies that $\phi$ is constant in $\Omega$. Then we must have ${\bf u}_0=\nabla\phi=0$. Since
${\bf u}_b\times{\bf n}={\bf u}_0\times{\bf n}=0$, we have ${\bf u}_b=0$.
Since ${\bf u}_h=0$, we then have $b({\bf v},p_h)=0$ for any ${\bf v}\in
V_{h,0}$. It follows from the definition of $b(\cdot,\cdot)$ and
$\nabla_w$ that
\begin{eqnarray}\label{eu1}
0&=&b({\bf v},\ p_h)=({\bf v}_0,\nabla_w
p_h)_h\\
&=&-\sum_{T\in{\mathcal T}_h}(\nabla\cdot{\bf v}_0,\ p_0)_T+\sum_{T\in{\mathcal T}_h}{\langle}
{\bf v}_0\cdot{\bf n},\ p_b{\rangle}_{\partial T}\nonumber\\
&=&\sum_{T\in{\mathcal T}_h}({\bf v}_0,\nabla p_0)_T,\nonumber
\end{eqnarray}
where we have used the fact that $p_0=p_b$ on $\partial T$. Letting
${\bf v}=\{{\bf v}_0,{\bf v}_b\}=\{\nabla p_0, 0\}$ in (\ref{eu1}) gives $\nabla
p_0=0$ on each $T\in {\mathcal T}_h$, i.e. $p_0$ is a constant on $T\in{\mathcal T}_h$. Using the facts $p_0=p_b$
and $p_b=0$ on $\partial\Omega$, we obtain $p_h=0$.
\end{proof}
\section{Error Equations}\label{Section:ErrorEquation}
For each element $T\in {\mathcal T}_h$, denote by ${\bf Q}_0$ and $Q_0$ the $L^2$
projections onto $[P_k(T)]^3$ and $P_{k-1}(T)$ respectively. Let
$Q_b$ be the $L^2$ projection onto $P_k(e)$. Then we can define two
projections onto the finite element space $V_h$ and $W_h$ such that
on each element $T$,
$${\bf Q}_h{\bf v}=\{{\bf Q}_0{\bf v},Q_b{\bf v}=Q_b(v_1){\bf t}_1+Q_b(v_2){\bf t}_2\},\quad Q_hq=\{Q_0q,Q_bq\}.$$
In addition, denote by ${\mathbb Q}_h$ the local $L^2$ projection onto
$[P_{k-1}(T)]^3$. The projection operators ${\mathbb Q}_h$, $Q_h$ and ${\bf Q}_h$
have some useful properties as stated in the following Lemma.
\begin{lemma}\label{lem-0} Let ${\bf Q}_h=\{{\bf Q}_0, Q_b\}$ and
$Q_h=\{Q_0, Q_b\}$ be the projection operators onto the finite
element spaces $V_h$ and $W_h$ respectively. Then, we have
\begin{equation}\label{key}
{\nabla_w\times} ({\bf Q}_h {\bf u}) = {\mathbb Q}_h ({\nabla\times}{\bf u})\qquad \forall {\bf u}\in
H({\rm curl};\Omega)
\end{equation}
and
\begin{equation}\label{key11}
\nabla_w(Q_h q) = {\bf Q}_0(\nabla q)\qquad \forall q\in H^1(\Omega).
\end{equation}
\end{lemma}
\begin{proof}
Using (\ref{d-c}), the integration by parts, and the definition of
${\bf Q}_h$ and ${\mathbb Q}_h$, we have
\begin{eqnarray*}
({\nabla_w\times} ({\bf Q}_h {\bf u}),\; {\bf w})_T &=& ({\bf Q}_0{\bf u},\; {\nabla\times}{\bf w})_T + \langle
(Q_b{\bf u})\times{\bf n},\; {\bf w}\rangle_{{\partial T}}\\
&=&({\bf u},\; {\nabla\times}{\bf w})_T + \langle {\bf u}\times{\bf n},\; {\bf w}\rangle_{\partial T}\\
&=&({\nabla\times}{\bf u},\; {\bf w})_T=({\mathbb Q}_h({\nabla\times} {\bf u}),\; {\bf w})_T
\end{eqnarray*}
for any ${\bf w}\in [P_{k-1}(T)]^3$. This implies that (\ref{key}) holds
true.
As to (\ref{key11}), we use the definition of $Q_h$ and the discrete
gradient operator $\nabla_w$ to obtain
\begin{eqnarray*}
(\nabla_w (Q_h p),\; {\bf v})_T &=& -(Q_0p,\; \nabla\cdot{\bf v})_T +
\langle
Q_b p,\; {\bf v}\cdot{\bf n}\rangle_{{\partial T}}\\
&=&-(p,\; \nabla\cdot{\bf v})_T + \langle p,\; {\bf v}\cdot{\bf n}\rangle_{\partial T}\\
&=&(\nabla p,\; {\bf v})_T=({\bf Q}_0(\nabla p),\; {\bf v})_T
\end{eqnarray*}
for all ${\bf v}\in [P_k(T)]^3$, which verifies the desired relation
(\ref{key11}).
\end{proof}
\medskip
Define two error functions as follows
\begin{eqnarray}\label{error-u}
{\bf e}_h&=&\{{\bf e}_0,\;{\bf e}_b\}=\{{\bf Q}_0{\bf u}-{\bf u}_0,\;Q_b{\bf u}-{\bf u}_b\},\\
\epsilon_h&=&\{\epsilon_0,\;\epsilon_b\}=\{Q_0 p-p_0,\;Q_bp-p_b\}.\label{error-p}
\end{eqnarray}
The rest of this section is devoted to deriving some equations that the above
error functions must satisfy. For simplicity of analysis, we assume
that the coefficient $\nu$ in (\ref{w1}) is a piecewise constant
function with respect to the finite element partition ${\mathcal T}_h$.
\begin{lemma}\label{Lemma:error-equation}
Let $({\bf u}_h; p_h)$ be the WG finite element solution arising from
(\ref{wg1}) and (\ref{wg2}), and $({\bf e}_h; \varepsilon_h)$ be the
error between the WG finite element solution and the $L^2$
projection of the exact solution as defined in
(\ref{error-u})-(\ref{error-p}). Then, the following equations are
satisfied
\begin{eqnarray}
a({\bf e}_h,\ {\bf v})-b({\bf v},\ \epsilon_h)&=&\varphi_{\bf u}({\bf v})
\quad\;\;\forall{\bf v}\in V_{h,0},\label{ee1}\\
b({\bf e}_h,\ q)+s_2(\epsilon_h,\ q)&=&\phi_{{\bf u},p}(q)\quad\forall q\in
W_h,\label{ee2}
\end{eqnarray}
where
\begin{eqnarray}\label{varphi-u-v}
\varphi_{\bf u}({\bf v}) &=&s_1({\bf Q}_h{\bf u},\ {\bf v})-l_1({\bf u},\ {\bf v}),\\
\phi_{{\bf u},p}(q) &=& s_2(Q_hp,\ q)+l_2({\bf u},q),\label{phi-up-q}
\end{eqnarray}
and
\begin{eqnarray}
l_1({\bf u},\ {\bf v})&=&\sum_{T\in{\mathcal T}_h}{\langle}(I-{\mathbb Q}_h){\nabla\times}{\bf u}, \ \nu({\bf v}_b-{\bf v}_0)\times{\bf n} {\rangle}_{\partial T}\label{l1}\\
l_2({\bf u},\ q)&=&\sum_{T\in{\mathcal T}_h}\langle q_0-q_b,\
({\bf u}-{\bf Q}_0{\bf u})\cdot{\bf n}\rangle_{\partial T}.\label{l2}
\end{eqnarray}
\end{lemma}
\begin{proof}
Using (\ref{key}), (\ref{d-c}), and the integration by parts we have
\begin{eqnarray} \label{m1}
& & (\nu{\nabla_w\times} ({\bf Q}_h{\bf u}),\;{\nabla_w\times} {\bf v})_T \\
&=&(\nu{\mathbb Q}_h({\nabla\times}{\bf u}),\;{\nabla_w\times}{\bf v})_T\nonumber\\
&=&(\nu{\bf v}_0,\ {\nabla\times} {\mathbb Q}_h({\nabla\times} {\bf u}))_T+\langle \nu{\bf v}_b\times{\bf n},
\ {\mathbb Q}_h({\nabla\times}{\bf u})\rangle_{\partial T}\nonumber\\
&=&(\nu{\nabla\times}{\bf v}_0,\; {\mathbb Q}_h({\nabla\times} {\bf u}))_T+\langle \nu({\bf v}_b-{\bf v}_0)\times{\bf n},
\ {\mathbb Q}_h({\nabla\times}{\bf u})\rangle_{\partial T}\nonumber\\
&=&(\nu{\nabla\times}{\bf u},\;{\nabla\times}{\bf v}_0)_T+{\langle}
{\mathbb Q}_h({\nabla\times}{\bf u}),\nu({\bf v}_b-{\bf v}_0)\times{\bf n}{\rangle}_{\partial T}.\nonumber
\end{eqnarray}
It follows from (\ref{key11}) that
\begin{eqnarray}
(\nabla_w(Q_h p),{\bf v}_0)_T=({\bf Q}_0\nabla p,{\bf v}_0)_T=(\nabla p,{\bf v}_0)_T.\label{m2}
\end{eqnarray}
Next, using the definition of $\nabla_w$ and ${\bf Q}_0$, we obtain
\begin{eqnarray}
({\bf Q}_0{\bf u},\ \nabla_wq)_T&=&-(q_0, \nabla\cdot ({\bf Q}_0{\bf u}))_T+{\langle} q_b,\;{\bf Q}_0{\bf u}\cdot{\bf n}{\rangle}_{\partial T}\label{m33}\\
&=&(\nabla q_0,\ {\bf u})_T-\langle q_0-q_b,\ {\bf Q}_0{\bf u}\cdot{\bf n}\rangle_{\partial T}.\nonumber
\end{eqnarray}
Testing (\ref{moment1}) by ${\bf v}_0$ with ${\bf v}=\{{\bf v}_0,\;{\bf v}_b\}\in V_{h,0}$ gives
\begin{equation}\label{m3}
({\nabla\times}(\nu{\nabla\times}{\bf u}),\;{\bf v}_0)- (\nabla p,\ {\bf v}_0)=({\bf f},\; {\bf v}_0).
\end{equation}
It follows from the integration by parts that
\[
({\nabla\times}(\nu{\nabla\times}{\bf u}),\;{\bf v}_0)=\sum_{T\in{\mathcal T}_h}(\nu{\nabla\times}{\bf u},\
{\nabla\times}{\bf v}_0)_T+\sum_{T\in{\mathcal T}_h}{\langle}\nu({\bf v}_b-{\bf v}_0)\times{\bf n},\
{\nabla\times}{\bf u}{\rangle}_{\partial T},
\]
where we use the fact that $\sum_{T\in{\mathcal T}_h}\langle{\bf v}_b\times{\bf n}, \
\nu{\nabla\times}{\bf u}\rangle_{\partial T}=0$. Using (\ref{m1}) and the equation above,
we have
\begin{eqnarray} \label{m4}
({\nabla\times}(\nu{\nabla\times}{\bf u}),\;{\bf v}_0)&=&(\nu{\nabla_w\times} ({\bf Q}_h{\bf u}),\ {\nabla_w\times}{\bf v})_h\\
& &\quad + \sum_{T\in{\mathcal T}_h}{\langle} (I-{\mathbb Q}_h){\nabla\times}{\bf u},\
\nu({\bf v}_b-{\bf v}_0)\times{\bf n}{\rangle}_{\partial T}. \nonumber
\end{eqnarray}
Substituting (\ref{m2}) and (\ref{m4}) into (\ref{m3}) yields
\[
(\nu{\nabla_w\times} ({\bf Q}_h{\bf u}),\ {\nabla_w\times}{\bf v})_h-(\nabla_wQ_h p,{\bf v}_0)_h=({\bf f},\ {\bf v}_0)-l_1({\bf u},\ {\bf v}).
\]
Adding $s_1({\bf Q}_h{\bf u},\ {\bf v})$ to both sides of the equation above gives
\begin{equation}\label{m5}
a({\bf Q}_h{\bf u},\ {\bf v})-b({\bf v},\ Q_hp)=({\bf f},\;{\bf v}_0)+\varphi_{\bf u}({\bf v}).
\end{equation}
To derive a second equation, we test equation (\ref{cont1}) by $q_0$
with $q=\{q_0,q_b\}\in W_h$ and then use the integration by parts to
obtain
\begin{equation}\label{m41}
-\sum_{T\in{\mathcal T}_h}({\bf u},\;\nabla q_0)_T+ \sum_{T\in{\mathcal T}_h}{\langle} {\bf u}\cdot{\bf n},\
q_0-q_b{\rangle}_{{\partial T}}=(g,\; q_0),
\end{equation}
where we have used the fact $\sum_{T\in{\mathcal T}_h}\langle {\bf u}\cdot{\bf n}, \
q_b\rangle_{\partial T}=0$. Combining (\ref{m33}) with (\ref{m41}) gives
\[
\sum_{T\in{\mathcal T}_h}({\bf Q}_0{\bf u},\;\nabla_w q)_T=-(g,\; q_0)+l_2({\bf u},q).
\]
Adding $s_2(Q_hp,\ q)$ to both sides of the equation above gives
\begin{equation}\label{m6}
b({\bf Q}_h{\bf u},q)+s_2(Q_hp,\ q)=-(g,\; q_0)+\phi_{{\bf u},p}(q).
\end{equation}
Finally, the differences of (\ref{m5}) and (\ref{wg1}), (\ref{m6})
and (\ref{wg2}) yield the error equations (\ref{ee1}) and
(\ref{ee2}), respectively.
\end{proof}
\section{Preparation for Error Estimates}\label{Section:L2Projections}
For ${\bf v}=\{{\bf v}_0,{\bf v}_b\}\in V_{h,0}$, define ${|\hspace{-.02in}|\hspace{-.02in}|}{\bf v}{|\hspace{-.02in}|\hspace{-.02in}|}$ as
follows
\begin{equation}\label{3barnorm}
{|\hspace{-.02in}|\hspace{-.02in}|} {\bf v}{|\hspace{-.02in}|\hspace{-.02in}|}^2=a({\bf v},\;{\bf v})=\sum_{T\in{\mathcal T}_h}\nu\|{\nabla_w\times}{\bf v}\|_T^2
+\sum_{T\in{\mathcal T}_h}h^{-1}\|({\bf v}_0-{\bf v}_b)\times{\bf n}\|_{\partial T}^2.
\end{equation}
It is clear that ${|\hspace{-.02in}|\hspace{-.02in}|}\cdot{|\hspace{-.02in}|\hspace{-.02in}|}$ defines merely a semi-norm for
the linear space $V_{h,0}$. A norm can be derived from the semi-norm
${|\hspace{-.02in}|\hspace{-.02in}|}{\bf v}{|\hspace{-.02in}|\hspace{-.02in}|}$ by adding two more terms given as follows
\begin{equation}\label{3bar1nomr}
{|\hspace{-.02in}|\hspace{-.02in}|}{\bf v}{|\hspace{-.02in}|\hspace{-.02in}|}_1 = {|\hspace{-.02in}|\hspace{-.02in}|}{\bf v}{|\hspace{-.02in}|\hspace{-.02in}|} + \left(\sum_{T\in{\mathcal T}_h}
\|\nabla\cdot{\bf v}_0\|_T^2
\right)^{\frac12}+\left(\sum_{e\in{\mathcal E}_h^0}h^{-1}\|\jump{{\bf v}_0\cdot{\bf n}}\|_e^2
\right)^{\frac12},
\end{equation}
where $\jump{{\bf v}_0\cdot{\bf n}}$ is the jump of the function ${\bf v}_0$ at
each edge/face in the normal direction. The proof of Lemma
\ref{lemma-ue} can be employed to verify that ${|\hspace{-.02in}|\hspace{-.02in}|}\cdot{|\hspace{-.02in}|\hspace{-.02in}|}_1$ is
indeed a norm in $V_{h,0}$. For convenience, we also use the
following notation:
\begin{equation}\label{Semi-Norm-v1h}
|{\bf v}|_{1,h}:=
\left(\sum_{T\in{\mathcal T}_h}h^{-1}\|({\bf v}_0-{\bf v}_b)\times{\bf n}\|^2_{{\partial T}}\right)^{1/2}.
\end{equation}
The linear space $W_h$ can be equipped with the following norm
$$
{|\hspace{-.02in}|\hspace{-.02in}|} q{|\hspace{-.02in}|\hspace{-.02in}|}_0 = |q|_{0,h} + h \|\nabla q\|_{0,h},
$$
where
$$
|q|_{0,h}^2=\sum_{T\in{\mathcal T}_h}h\|q_0-q_b\|_{\partial T}^2
$$
and
\begin{equation}\label{q-h1-seminorm}
\|\nabla q\|_{0,h} = \left(\sum_{T\in{\mathcal T}_h} \|\nabla
q_0\|_{T}^2\right)^{\frac12}
\end{equation}
for any $q\in W_h$.
\smallskip
The following Lemma provides some approximation estimates for the
projections ${\bf Q}_h$, ${\mathbb Q}_h$, and $Q_h$.
\begin{lemma}\label{lem-1}
Let ${\mathcal T}_h$ be a WG shape regular partition of $\Omega$, ${\bf w}\in
[H^{t+1}(\Omega)]^3$, $\rho\in H^t(\Omega)$, and $0\le t\le k$.
Then, for $0\le s\le 1$, we have
\begin{eqnarray}
&&\sum_{T\in{\mathcal T}_h} h_T^{2s}\|{\bf w}-{\bf Q}_0{\bf w}\|_{s,T}^2\le C h^{2(t+1)}
\|{\bf w}\|^2_{t+1},\label{Qh}\\
&&\sum_{T\in{\mathcal T}_h} h_T^{2s}\|{\nabla\times}{\bf w}-{\mathbb Q}_h({\nabla\times}{\bf w})\|^2_{s,T} \le
Ch^{2t}
\|{\bf w}\|^2_{t+1},\label{Rh}\\
&&\sum_{T\in{\mathcal T}_h} h_T^{2s}\|\rho-Q_0\rho\|^2_{s,T} \le
Ch^{2t}\|\rho\|^2_{t}.\label{Lh}
\end{eqnarray}
\end{lemma}
Since the mesh ${\mathcal T}_h$ is assumed to be very general, the proof of
Lemma \ref{lem-1} is rather technical and can be found in
\cite{wy-mixed}.
\smallskip
Let $K$ be an element with $e$ as a face. For any function $g\in
H^1(K)$, the following trace inequality has been proved for
arbitrary polyhedra $K$ in \cite{wy-mixed}.
\begin{equation}\label{trace}
\|g\|_{e}^2 \leq C \left( h_K^{-1} \|g\|_K^2 + h_K \|\nabla
g\|_{K}^2\right).
\end{equation}
In particular, if $\xi$ is a polynomial on $K$, then the standard
inverse inequality can be applied to yield
\begin{equation}\label{trace-poly}
\|\xi\|_{e}^2 \leq C h_K^{-1} \|\xi\|_K^2.
\end{equation}
Using (\ref{trace}) and Lemma \ref{lem-1}, we can prove the
following result.
\begin{lemma}\label{Lemma:myestimates}
Let ${\bf w}\in [H^{t+1}(\Omega)]^3$, $p\in H^t(\Omega)$, ${\bf v}\in V_h$, and $q\in W_h$ with $\frac12 <t\le k$. Then
\begin{eqnarray}
|s_1({\bf Q}_h{\bf w},\ {\bf v})|+ |l_1({\bf w},\ {\bf v})| &\le& Ch^t\|{\bf w}\|_{t+1} |{\bf v}|_{1,h},\label{new-mmm1}\\
|s_2(Q_h p,\ q)|+ |l_2({\bf w},\ q)| &\le&
Ch^t(\|{\bf w}\|_{t+1}+\|p\|_{t})\ |q|_{0,h},\label{new-mmm888}
\end{eqnarray}
where $l_1({\bf w},{\bf v})$ and $l_2({\bf w},q)$ are defined in (\ref{l1}) and
(\ref{l2}).
\end{lemma}
\begin{proof}
Using the definition of $Q_b$, (\ref{trace}) and (\ref{Qh}), we have
\begin{eqnarray*}
|s_1({\bf Q}_h{\bf w},\ {\bf v})|&=&\left|\sum_{T\in{\mathcal T}_h} h^{-1}\langle
({\bf Q}_0{\bf w}-Q_b{\bf w})\times{\bf n},\;
({\bf v}_0-{\bf v}_b)\times{\bf n}\rangle_{\partial T}\right|\\
&=&\left|\sum_{T\in{\mathcal T}_h} h^{-1} \langle ({\bf Q}_0{\bf w}-{\bf w})\times{\bf n},\; ({\bf v}_0-{\bf v}_b)\times{\bf n}\rangle_{\partial T}\right|\\
&\le& \left(\sum_{T\in{\mathcal T}_h}(h^{-2}\|{\bf Q}_0{\bf w}-{\bf w}\|_T^2+\|\nabla ({\bf Q}_0{\bf w}-{\bf w})\|_T^2)\right)^{1/2} |{\bf v}|_{1,h}\\
&\le& Ch^t\|{\bf w}\|_{t+1} |{\bf v}|_{1,h}.
\end{eqnarray*}
Similarly, we have from (\ref{trace}) and (\ref{Rh}) that
\begin{eqnarray*}
\left|l_1({\bf w},\ {\bf v})\right|&\equiv&\left|\sum_{T\in{\mathcal T}_h}{\langle}(I-{\mathbb Q}_h){\nabla\times}{\bf w},\
\nu({\bf v}_b-{\bf v}_0)\times{\bf n}
{\rangle}_{\partial T}\right|\\
&\le&
\left(\sum_{T\in{\mathcal T}_h}h\|(I-{\mathbb Q}_h){\nabla\times}{\bf w}\|_{\partial T}^2\right)^{1/2}|{\bf v}|_{1,h}\\
&\le& Ch^t\|{\bf w}\|_{t+1} |{\bf v}|_{1,h}.
\end{eqnarray*}
This completes the proof of (\ref{new-mmm1}).
As to (\ref{new-mmm888}), note that
\begin{eqnarray*}
|s_2(Q_h p,\ q)|&=&\left|\sum_{T\in{\mathcal T}_h} h\langle Q_0p-Q_b p,\;
q_0-q_b\rangle_{\partial T}\right|\\
&\le &\sum_{T\in{\mathcal T}_h}h |\langle Q_0p- p,\; q_0-q_b\rangle_{\partial T}|\\
&\le& Ch^t\|p\|_{t}\ |q|_{0,h}.
\end{eqnarray*}
It follows from (\ref{trace}) and (\ref{Lh}) that
\begin{eqnarray*}
|l_2({\bf w},\ q)|&=&\left|\sum_{T\in{\mathcal T}_h}\langle q_0-q_b,\ ({\bf w}-{\bf Q}_0{\bf w})\cdot{\bf n}\rangle_{\partial T}\right|\\
&\le& \left(\sum_{T\in{\mathcal T}_h}h^{-1}\|{\bf w}-{\bf Q}_0{\bf w}\|_{\partial T}^2\right)^{1/2}\left(\sum_{T\in{\mathcal T}_h}h\|q_0-q_b\|^2_{{\partial T}}\right)^{1/2}\\
&\le& Ch^t\|{\bf w}\|_{t+1}\ |q|_{0,h}.
\end{eqnarray*}
Combining the above two estimates leads to the inequality
(\ref{new-mmm888}). This completes the proof of the lemma.
\end{proof}
\section{Error Estimates}\label{Section:H1ErrorEstimates}
The objective of this section is to establish some optimal order
error estimates for ${\bf u}_h$ and $p_h$ in certain discrete norms. We
start with a modified {\em inf-sup} condition commonly used for
analyzing saddle point problems.
\smallskip
\begin{lemma}\label{Lemma:inf-sup}
For any $q=\{q_0,q_b\}\in W_h$, there exists ${\bf v}_q=2h^2\{\nabla q_0, 0\}\in V_{h,0}$ such that
\begin{equation}\label{inf-sup}
b({\bf v}_q,q)\ge h^2 \|\nabla q\|_{0,h}^2-C |q|_{0,h}^2
\end{equation}
and
\begin{equation}\label{inf-sup-boundedness}
{|\hspace{-.02in}|\hspace{-.02in}|} {\bf v}_q{|\hspace{-.02in}|\hspace{-.02in}|} \leq C h \|\nabla q\|_{0,h},
\end{equation}
where $C$ is a constant independent of $h$.
\end{lemma}
\begin{proof}
For a given $q=\{q_0,q_b\}\in W_h$ and ${\bf v}=\{{\bf v}_0,{\bf v}_b\}\in
V_{h,0}$, from the definition of the discrete weak gradient we
obtain
\begin{equation*}
\begin{split}
b({\bf v},q) &= \sum_{T\in{\mathcal T}_h}({\bf v}_0, \nabla_w q)_T \\
&= \sum_{T\in{\mathcal T}_h} \left({\langle}{\bf v}_0\cdot{\bf n}, q_b {\rangle}_{{\partial T}}-
(\nabla\cdot{\bf v}_0, q_0)_T\right)\\
&= \sum_{T\in{\mathcal T}_h} \left( ({\bf v}_0, \nabla q_0)_T +{\langle}{\bf v}_0\cdot{\bf n},
q_b-q_0{\rangle}_{{\partial T}}\right),
\end{split}
\end{equation*}
where we have used the usual integration by parts in the last
equation. By choosing ${\bf v}_0=2h^2\nabla q_0$ and ${\bf v}_b=0$ we arrive
at
\begin{equation*}
b({\bf v},q) = 2h^2\sum_{T\in{\mathcal T}_h} (\nabla q_0, \nabla q_0)_T
+2h^2\sum_{T\in{\mathcal T}_h} {\langle}\nabla q_0\cdot{\bf n}, q_b-q_0{\rangle}_{{\partial T}}.
\end{equation*}
Now by the Cauchy-Schwarz inequality and the trace inequality
(\ref{trace-poly}) we obtain
\begin{equation*}
\begin{split}
b({\bf v},q) & \ge 2h^2\sum_{T\in{\mathcal T}_h} (\nabla q_0, \nabla q_0)_T
-2h^2\sum_{T\in{\mathcal T}_h} \|\nabla q_0\cdot{\bf n}\|_{{\partial T}}
\|q_b-q_0\|_{{\partial T}}\\
&\ge 2h^2\sum_{T\in{\mathcal T}_h} (\nabla q_0, \nabla q_0)_T - Ch^{1.5}
\sum_{T\in{\mathcal T}_h} \|\nabla q_0\|_{T} \|q_b-q_0\|_{{\partial T}}\\
&\ge h^2\sum_{T\in{\mathcal T}_h} (\nabla q_0, \nabla q_0)_T - Ch
\sum_{T\in{\mathcal T}_h} \|q_b-q_0\|_{{\partial T}}^2,
\end{split}
\end{equation*}
which gives rise to the inequality (\ref{inf-sup}). The boundedness
estimate (\ref{inf-sup-boundedness}) can be obtained by computing the
triple bar norm of ${\bf v}_q$ directly. This completes the proof of the
lemma.
\end{proof}
\smallskip
The following is an error estimate for the WG finite element
solutions.
\begin{theorem}\label{h1-bd}
Let $({\bf u}; p)\in [H^{t+1}(\Omega)]^3\times [H_0^1(\Omega)\cap
H^{\max\{1,t\}}(\Omega)]$ with $\frac12< t\le k$ and $({\bf u}_h;p_h)\in
V_h\times W_h$ be the solution of (\ref{moment1})-(\ref{bc1}) and
(\ref{wg1})-(\ref{wg2}) respectively. Then
\begin{eqnarray}
{|\hspace{-.02in}|\hspace{-.02in}|}{\bf e}_h{|\hspace{-.02in}|\hspace{-.02in}|}+|\epsilon_h|_{0,h}&\le&
Ch^t(\|{\bf u}\|_{t+1}+\|p\|_t),\label{err1}\\
h \|\nabla\epsilon_h\|_{0,h}&\leq&Ch^t(\|{\bf u}\|_{t+1}+\|p\|_t).
\label{err1-secondpart}
\end{eqnarray}
\end{theorem}
\smallskip
\begin{proof}
By letting ${\bf v}={\bf e}_h$ in (\ref{ee1}) and $q=\epsilon_h$ in
(\ref{ee2}) and adding the two resulting equations, we have
\begin{eqnarray}
{|\hspace{-.02in}|\hspace{-.02in}|}{\bf e}_h{|\hspace{-.02in}|\hspace{-.02in}|}^2+
|\epsilon_h|_{0,h}^2&=&\varphi_{\bf u}({\bf e}_h)+\phi_{{\bf u},p}(\epsilon_h).
\label{main}
\end{eqnarray}
The right-hand side of (\ref{main}) can be handled by using Lemma
\ref{Lemma:myestimates} as follows. Using (\ref{new-mmm1}) with
${\bf w}$ and ${\bf v}$ replaced by ${\bf u}$ and ${\bf e}_h$ we obtain
\begin{equation}\label{varphi-estimate}
|\varphi_{{\bf u}}({\bf e}_h)|\leq C h^t \|{\bf u}\|_{t+1} {|\hspace{-.02in}|\hspace{-.02in}|}{\bf e}_h{|\hspace{-.02in}|\hspace{-.02in}|}.
\end{equation}
Similarly, using (\ref{new-mmm888}) with ${\bf w}$ and $q$ replaced by
${\bf u}$ and $\epsilon_h$ we obtain
\begin{equation}\label{phi-estimate}
|\phi_{{\bf u},p}(\epsilon_h)|\leq C h^t (\|{\bf u}\|_{t+1}+\|p\|_t) \
|\epsilon_h|_{0,h}.
\end{equation}
Substituting (\ref{varphi-estimate}) and (\ref{phi-estimate}) into
(\ref{main}) yields
\begin{equation}\label{b-u}
{|\hspace{-.02in}|\hspace{-.02in}|} {\bf e}_h{|\hspace{-.02in}|\hspace{-.02in}|}^2+|\epsilon_h|_{0,h}^2 \le
Ch^t(\|{\bf u}\|_{t+1}+\|p\|_t)({|\hspace{-.02in}|\hspace{-.02in}|} {\bf e}_h{|\hspace{-.02in}|\hspace{-.02in}|}+ |\epsilon_h|_{0,h}),
\end{equation}
which implies the error estimate (\ref{err1}).
Next we will bound $\|\nabla\epsilon_h\|_{0,h}$. It follows from
(\ref{ee1}) that
\[
b({\bf v},\ \epsilon_h)=a({\bf e}_h,\ {\bf v})-\varphi_{\bf u}({\bf v})\qquad \forall
{\bf v}\in V_{h,0}.
\]
From Lemma \ref{Lemma:inf-sup}, by choosing
${\bf v}={\bf v}_{\epsilon_h}=2h^2\{\nabla \epsilon_h, 0\}$ we obtain
\begin{equation}\label{raining.100}
\begin{split}
h^2\|\nabla\epsilon_h\|_{0,h}^2&\leq |b({\bf v}_{\epsilon_h},
\epsilon_h)| + C |\epsilon_h|_{0,h}^2\\
&\leq |a({\bf e}_h,\ {\bf v}_{\epsilon_h})| +
|\varphi_{\bf u}({\bf v}_{\epsilon_h})|+C |\epsilon_h|_{0,h}^2\\
&\leq {|\hspace{-.02in}|\hspace{-.02in}|} {\bf e}_h{|\hspace{-.02in}|\hspace{-.02in}|}\ {|\hspace{-.02in}|\hspace{-.02in}|} {\bf v}_{\epsilon_h}{|\hspace{-.02in}|\hspace{-.02in}|} +
Ch^t\|{\bf u}\|_{t+1} {|\hspace{-.02in}|\hspace{-.02in}|}{\bf v}_{\epsilon_h}{|\hspace{-.02in}|\hspace{-.02in}|} + C
|\epsilon_h|_{0,h}^2\\
&\leq C({|\hspace{-.02in}|\hspace{-.02in}|} {\bf e}_h{|\hspace{-.02in}|\hspace{-.02in}|} + Ch^t\|{\bf u}\|_{t+1}) h
\|\nabla\epsilon_h\|_{0,h} + C |\epsilon_h|_{0,h}^2,
\end{split}
\end{equation}
where we have used the estimate (\ref{inf-sup-boundedness}) in the
last inequality. It follows from (\ref{raining.100}) and
(\ref{err1}) that (\ref{err1-secondpart}) holds true. This completes
the proof of the theorem.
\end{proof}
\medskip
Recall that ${|\hspace{-.02in}|\hspace{-.02in}|}{\bf v}{|\hspace{-.02in}|\hspace{-.02in}|}$ is merely a semi-norm on the finite
element space $V_{h,0}$. Thus, the error estimate (\ref{err1}) only
provides a partial answer to the convergence of the WG finite
element method, particularly for the vector component ${\bf u}_h$. The
norm ${|\hspace{-.02in}|\hspace{-.02in}|}\cdot{|\hspace{-.02in}|\hspace{-.02in}|}_1$, as defined by (\ref{3bar1nomr}), involves
two additional terms. The following theorem provides estimates for
those additional terms.
\medskip
\begin{theorem}\label{Theorem:divpart}
Let $({\bf u}; p)\in [H^{t+1}(\Omega)]^3\times (H_0^1(\Omega)\cap
H^{\max\{1,t\}}(\Omega))$ with $\frac12< t\le k$ and $({\bf u}_h;p_h)\in
V_h\times W_h$ be the solution of (\ref{moment1})-(\ref{bc1}) and
(\ref{wg1})-(\ref{wg2}) respectively. Then, we have
\begin{eqnarray}
\left(\sum_{e\in{\mathcal E}_h^0}h^{-1}\|\jump{{\bf e}_0\cdot{\bf n}}\|_e^2
\right)^{\frac12} &\leq& Ch^t(\|{\bf u}\|_{t+1}+\|p\|_t),\label{t-1}\\
\left(\sum_{T\in{\mathcal T}_h} \|\nabla\cdot{\bf e}_0\|_T^2 \right)^{\frac12} &\leq&
Ch^t(\|{\bf u}\|_{t+1}+\|p\|_t).\label{t-2}
\end{eqnarray}
\end{theorem}
\begin{proof}
Using the error equation (\ref{ee2}) we have
\begin{equation}\label{normal}
b({\bf e}_h,\ q)=\phi_{{\bf u},p}(q)-s_2(\epsilon_h,\ q).
\end{equation}
The definition of the weak gradient implies that
\begin{eqnarray}
b({\bf e}_h,q) &=& ({\bf e}_0, \nabla_w q)
=\sum_{T\in{\mathcal T}_h} \left({\langle}{\bf e}_0\cdot{\bf n},\ q_b{\rangle}_{\partial T} - (\nabla\cdot{\bf e}_0,\ q_0
)_T \right).\label{t0}
\end{eqnarray}
By letting $q=q_{{\bf e}_h}=\{0,h^{-1}\jump{{\bf e}_0\cdot{\bf n}}\}$ on the
interior edges, we obtain
\begin{equation*}
b({\bf e}_h,q_{{\bf e}_h})=\sum_{e\in{\mathcal E}_h^0}h^{-1}\|\jump{{\bf e}_0\cdot{\bf n}}\|_e^2.
\end{equation*}
Thus,
\begin{equation*}\label{t1}
\sum_{e\in{\mathcal E}_h^0}h^{-1}\|\jump{{\bf e}_0\cdot{\bf n}}\|_e^2=\phi_{{\bf u},p}(q_{{\bf e}_h})-s_2(\epsilon_h,\
q_{{\bf e}_h}).
\end{equation*}
It follows from (\ref{err1}) that
\begin{equation}\label{t3}
|s_2(\epsilon_h,\ q_{{\bf e}_h})|\le | \epsilon_h|_{0,h}\ |
q_{{\bf e}_h}|_{0,h}\le Ch^t(\|{\bf u}\|_{t+1}+\|p\|_t)\ |q_{{\bf e}_h}|_{0,h},
\end{equation}
and from (\ref{new-mmm888})
\begin{equation}\label{t3-new}
|\phi_{{\bf u},p}(q_{{\bf e}_h})|\le Ch^t(\|{\bf u}\|_{t+1}+\|p\|_t)\ |
q_{{\bf e}_h}|_{0,h}.
\end{equation}
Also, it is easy to see that
\begin{equation*}
\begin{split}
|q_{{\bf e}_h}|_{0,h}^2&=\sum_{T\in{\mathcal T}_h}h\|q_0-q_b\|_{{\partial T}\cap\Omega}^2\\
&=\sum_{T\in{\mathcal T}_h}h^{-1}\|\jump{{\bf e}_0\cdot{\bf n}}\|_{{\partial T}\cap\Omega}^2\\
&\le C\sum_{e\in{\mathcal E}_h^0}h^{-1}\|\jump{{\bf e}_0\cdot{\bf n}}\|_e^2.
\end{split}
\end{equation*}
Combining the above four inequalities yields
\begin{equation}\label{t100}
\left(\sum_{e\in{\mathcal E}_h^0}h^{-1}\|\jump{{\bf e}_0\cdot{\bf n}}\|_e^2\right)^{\frac12}
\leq C h^t(\|{\bf u}\|_{t+1}+\|p\|_t),
\end{equation}
which verifies the estimate (\ref{t-1}).
To derive (\ref{t-2}), we set $q=q_{{\bf e}_h}=\{-\nabla\cdot{\bf e}_0,\
0\}\in W_h$ in (\ref{t0}) so that
\begin{equation}\label{t6}
b({\bf e}_h,q_{{\bf e}_h})=\sum_{T\in{\mathcal T}_h}\|\nabla\cdot{\bf e}_0\|_T^2.
\end{equation}
Thus, it follows from (\ref{normal}) that
\begin{equation}\label{t888}
\sum_{T\in{\mathcal T}_h}\|\nabla\cdot{\bf e}_0\|_T^2=\phi_{{\bf u},p}(q_{{\bf e}_h})-s_2(\epsilon_h,\
q_{{\bf e}_h}).
\end{equation}
Substituting (\ref{t3}) and (\ref{t3-new}) into (\ref{t888}) implies
\begin{equation}\label{t999}
\sum_{T\in{\mathcal T}_h}\|\nabla\cdot{\bf e}_0\|_T^2\le
Ch^t(\|{\bf u}\|_{t+1}+\|p\|_t)\ |q_{{\bf e}_h}|_{0,h}.
\end{equation}
It follows from the definition of $|q|_{0,h}$ and the trace
inequality (\ref{trace-poly}) that
$$
|q_{{\bf e}_h}|_{0,h} \leq \left(\sum_{T\in{\mathcal T}_h}
h\|\nabla\cdot{\bf e}_0\|_{\partial T}^2 \right)^{\frac12}\leq
C\left(\sum_{T\in{\mathcal T}_h} \|\nabla\cdot{\bf e}_0\|_T^2 \right)^{\frac12},
$$
which, together with (\ref{t999}), leads to the following estimate
$$
\left(\sum_{T\in{\mathcal T}_h} \|\nabla\cdot{\bf e}_0\|_T^2 \right)^{\frac12} \leq
Ch^t(\|{\bf u}\|_{t+1}+\|p\|_t).
$$
This completes the proof.
\end{proof}
\medskip
To summarize, we have obtained the following error estimate for the
WG finite element solution arising from (\ref{wg1})-(\ref{wg2}).
\begin{theorem}\label{WG-ErrorEstimate}
Under the assumptions of Theorem \ref{h1-bd}, we have the following
error estimate for the WG finite element approximations:
\begin{eqnarray}
{|\hspace{-.02in}|\hspace{-.02in}|}{\bf e}_h{|\hspace{-.02in}|\hspace{-.02in}|}_1+{|\hspace{-.02in}|\hspace{-.02in}|}\epsilon_h{|\hspace{-.02in}|\hspace{-.02in}|}_{0}&\le&
Ch^t(\|{\bf u}\|_{t+1}+\|p\|_t).\label{HONEerr}
\end{eqnarray}
\end{theorem}
\section{An Error Estimate in $L^2$}\label{Section:L2ErrorEstimates}
To derive an $L^2$-error estimate for the WG approximation of the
vector component, we consider an auxiliary problem that seeks
$(\psi;\xi)$ satisfying
\begin{equation}\label{dual-problem}
\begin{split}
{\nabla\times}(\nu {\nabla\times}\psi)-\nabla \xi &={\bf e}_0\quad \mbox{in}\;\Omega,\\
\nabla\cdot\psi&=0\quad\mbox{in}\;\Omega,\\
\psi\times{\bf n} &= 0\quad\mbox{on}\;\partial\Omega,\\
\xi &= 0\quad\mbox{on}\;\partial\Omega.\\
\end{split}
\end{equation}
Assume that the problem (\ref{dual-problem}) has the
$[H^{1+s}(\Omega)]^3\times H^s(\Omega)$-regularity property in the
sense that the solution $(\psi; \xi)\in (H^{1+s}(\Omega))^3\times
H^s(\Omega)$ and the following a priori estimate holds true:
\begin{equation}\label{reg}
\|\psi\|_{1+s}+\|\xi\|_s\le C\|{\bf e}_0\|,
\end{equation}
where $0< s\le 1$.
\medskip
\begin{theorem} \label{L2-bd}
Let $({\bf u}; p)\in [H^{r+1}(\Omega)]^3\times [H_0^1(\Omega)\cap
H^{\max\{1,r\}}(\Omega)]$ and $({\bf u}_h;p_h)\in V_h\times W_h$ be the
solutions of (\ref{moment1})-(\ref{bc1}) and (\ref{wg1})-(\ref{wg2})
respectively. Let $\frac12 <r\le k$ and $0 < s \le 1$. Then,
\begin{equation}\label{l2-error}
\|{\bf Q}_0{\bf u}-{\bf u}_0\|\le Ch^{r+s}(\|{\bf u}\|_{r+1}+\|p\|_r).
\end{equation}
\end{theorem}
\begin{proof}
Testing the first equation of (\ref{dual-problem}) by ${\bf e}_0$ gives
\[
\|{\bf Q}_0{\bf u}-{\bf u}_0\|^2=({\bf e}_0,\ {\bf e}_0)=({\nabla\times}(\nu{\nabla\times}\psi),\
{\bf e}_0)-(\nabla\xi,\ {\bf e}_0).
\]
Using (\ref{m2}) and (\ref{m4}) (with $\psi, \xi, {\bf e}_h$ in the
place of ${\bf u}, p, {\bf v}$ respectively), the above equation becomes
\begin{eqnarray*}
\|{\bf Q}_0{\bf u}-{\bf u}_0\|^2&=&(\nu{\nabla_w\times}{\bf Q}_h\psi,\ {\nabla_w\times}{\bf e}_h)_h-({\bf e}_0,\
\nabla_w(Q_h\xi))_h+l_1(\psi,\ {\bf e}_h).
\end{eqnarray*}
Adding and subtracting $s_1({\bf Q}_h\psi,\ {\bf e}_h)$ to the equation
above yields
\begin{eqnarray*}
\|{\bf Q}_0{\bf u}-{\bf u}_0\|^2&=&a({\bf Q}_h \psi,\ {\bf e}_h)-b({\bf e}_h,\
Q_h\xi)-\varphi_\psi({\bf e}_h).
\end{eqnarray*}
The error equation (\ref{ee2}) implies
\[
b({\bf e}_h,\ Q_h\xi)=-s_2(\epsilon_h,\ Q_h\xi)+\phi_{{\bf u},p}(Q_h\xi).
\]
It now follows from the definitions of ${\bf Q}_0$ and $\nabla_w$ and the
second equation of (\ref{dual-problem}) that
\begin{eqnarray*}
b({\bf Q}_h\psi,\ \epsilon_h)&=&({\bf Q}_0\psi, \nabla_w\epsilon_h)_h=\sum_{T\in{\mathcal T}_h}({\langle} \epsilon_b,\ {\bf Q}_0\psi\cdot{\bf n}{\rangle}_{\partial T}-(\epsilon_0,\ \nabla\cdot({\bf Q}_0\psi))_T)\\
&=&\sum_{T\in{\mathcal T}_h}({\langle} \epsilon_b-\epsilon_0,\ {\bf Q}_0\psi\cdot{\bf n}{\rangle}_{\partial T}+(\nabla\epsilon_0,\ \psi)_T)\\
&=&\sum_{T\in{\mathcal T}_h}({\langle} \epsilon_b-\epsilon_0,\ {\bf Q}_0\psi\cdot{\bf n}{\rangle}_{\partial T}-{\langle} \epsilon_b-\epsilon_0,\ \psi\cdot{\bf n}{\rangle}_{\partial T})\\
&=&l_2(\psi,\epsilon_h),
\end{eqnarray*}
where we have used the fact that $\sum_{T\in{\mathcal T}_h}{\langle} \epsilon_b,\ \psi\cdot{\bf n}{\rangle}_{\partial T}=0$.
Using the equations above, we have
\begin{eqnarray*}
\|{\bf Q}_0{\bf u}-{\bf u}_0\|^2&=&a({\bf Q}_h\psi,\ {\bf e}_h)-b({\bf Q}_h\psi,\
\epsilon_h)
-\phi_{{\bf u},p}(Q_h\xi)-\varphi_\psi({\bf e}_h)+\phi_{\psi,\xi}(\epsilon_h).
\end{eqnarray*}
Using (\ref{ee1}) and the equation above, we have
\begin{eqnarray} \label{d1}
\|{\bf Q}_0{\bf u}-{\bf u}_0\|^2=\varphi_{\bf u}({\bf Q}_h\psi)-\phi_{{\bf u},p}(Q_h\xi)-\varphi_\psi({\bf e}_h)+\phi_{\psi,\xi}(\epsilon_h).
\end{eqnarray}
The four terms on the right-hand side of (\ref{d1}) can be handled
by the estimates presented in Lemma \ref{Lemma:myestimates}. To this
end, we use (\ref{new-mmm1}) and (\ref{new-mmm888}) with $t=r$ to
obtain
\begin{equation}\label{EQN:09-21-001}
|\varphi_{\bf u}({\bf Q}_h\psi)-\phi_{{\bf u},p}(Q_h\xi)|\leq C
h^r(\|{\bf u}\|_{r+1}+\|p\|_r) \left( |{\bf Q}_h\psi|_{1,h} + |Q_h\xi|_{0,h}
\right).
\end{equation}
Using the definition (\ref{Semi-Norm-v1h}) we have
\begin{equation} \label{d3}
\begin{split}
|{\bf Q}_h\psi|_{1,h}^2=&\sum_{T\in{\mathcal T}_h} h^{-1}\|({\bf Q}_0\psi-Q_b\psi)\times{\bf n}\|^2_{\partial T}\\
\le& \sum_{T\in{\mathcal T}_h} h^{-1}\|({\bf Q}_0\psi-\psi)\times{\bf n}\|_{\partial T}^2\\
\le& Ch^{2s}\|\psi\|_{s+1}^2.\end{split}
\end{equation}
Similarly, we have from the definition of $Q_b$, (\ref{trace}) and
(\ref{Lh})
\begin{equation}\label{d4}
\begin{split}
|Q_h\xi|_{0,h}^2&=\sum_{T\in{\mathcal T}_h} h\|Q_0\xi-Q_b\xi\|^2_{\partial T}\\
&\le \sum_{T\in{\mathcal T}_h} h\|Q_0\xi-\xi\|_{\partial T}^2\\
&\le Ch^{2s}\|\xi\|_{s}^2.
\end{split}
\end{equation}
Substituting (\ref{d3}) and (\ref{d4}) into (\ref{EQN:09-21-001})
gives
\begin{equation}\label{EQN:09-21-002}
\begin{split}
|\varphi_{\bf u}({\bf Q}_h\psi)-\phi_{{\bf u},p}(Q_h\xi)|&\leq C
h^{r+s}(\|{\bf u}\|_{r+1}+\|p\|_r) \left( \|\psi\|_{1+s} + \|\xi\|_s
\right)\\
&\leq C h^{r+s}(\|{\bf u}\|_{r+1}+\|p\|_r) \|{\bf e}_0\|,
\end{split}
\end{equation}
where the regularity estimate (\ref{reg}) was used in the second
inequality.
Analogously, we have from (\ref{new-mmm1}) and (\ref{new-mmm888})
with $t=s$ that
\begin{equation}\label{EQN:09-21-005}
\begin{split}
|\varphi_\psi({\bf e}_h)-\phi_{\psi,\xi}(\epsilon_h)| &\leq C
h^s(\|\psi\|_{s+1}+\|\xi\|_s) \left( |{\bf e}_h|_{1,h} + |
\epsilon_h|_{0,h} \right)\\
&\leq C h^s \left( {|\hspace{-.02in}|\hspace{-.02in}|}{\bf e}_h{|\hspace{-.02in}|\hspace{-.02in}|} + | \epsilon_h|_{0,h} \right)
\|{\bf e}_0\|\\
&\le C h^{r+s}(\|{\bf u}\|_{r+1}+\|p\|_r) \|{\bf e}_0\|,
\end{split}
\end{equation}
where we have used the error estimate (\ref{err1}) and the
regularity inequality (\ref{reg}). Finally, substituting
(\ref{EQN:09-21-002}) and (\ref{EQN:09-21-005}) into (\ref{d1})
yields the desired error estimate (\ref{l2-error}). This completes
the proof of the theorem.
\end{proof}
\section{An Effective Implementation through Variable Reduction}\label{Section:Schur}
The degrees of freedom for the WG formulation
(\ref{wg1})-(\ref{wg2}) are associated with ${\bf u}_h=\{{\bf u}_0,{\bf u}_b\}$
and $p_h=\{p_0,p_b\}$. In this section, we demonstrate how
${\bf u}_0$ and $p_0$ can be eliminated from the system in order to
obtain a global system that depends only on ${\bf u}_b$ and $p_b$. With
such a variable reduction, the number of unknowns of the WG method
is reduced significantly, yielding an efficient practical implementation.
Let ${\bf u}_h=\{{\bf u}_0,{\bf u}_b\}\in V_h$ and $p_h=\{p_0,p_b\}\in W_h$ be
the solution of the WG method (\ref{wg1})-(\ref{wg2}). Recall that
$({\bf u}_h;p_h)$ satisfies ${\bf u}_b\times{\bf n}=Q_b\phi$ on $\partial\Omega$
and the following equations:
\begin{eqnarray}
a({\bf u}_h,\ {\bf v})-b({\bf v},\;p_h)&=&({\bf f},\;{\bf v}_0),\quad\forall\ {\bf v}=\{{\bf v}_0,\; 0\}\in
V_{h,0},\label{wg11}\\
b({\bf u}_h,\;q)+s_2(p_h,q)&=&-(g,q_0), \quad\forall\ q=\{q_0,\; 0\}\in
W_h,\label{wg22}
\end{eqnarray}
and
\begin{eqnarray}
a({\bf u}_h,\ {\bf v})&=&0,\quad\forall\ {\bf v}=\{0,\; {\bf v}_b\}\in
V_{h,0},\label{wg111}\\
b({\bf u}_h,\;q)+s_2(p_h,q)&=&0, \quad\forall\ q=\{0,\; q_b\}\in
W_h.\label{wg222}
\end{eqnarray}
Denote by $V_k(T)$ and $W_k(T)$ the restrictions of $V_h$ and $W_h$
on $T$:
\[
V_k(T)=\{ {\bf v}=\{{\bf v}_0, {\bf v}_b=v_1{\bf t}_1+v_2{\bf t}_2\} :\ {\bf v}_0|_{T}\in
[P_k(T)]^3,\ v_1, v_2\in P_k(e),\ e\subset{\partial T}\}.
\]
and
\[
W_k(T)=\{q=\{q_0,q_b\}:\ q_0\in P_{k-1}(T),\ q_b\in
P_{k}(e),\ e\subset{\partial T}\}.
\]
Since the test functions ${\bf v}=\{{\bf v}_0,\; 0\}$ and $q=\{q_0,\;
0\}$ are supported locally on each element, the system of
equations (\ref{wg11})-(\ref{wg22}) is equivalent to the following
system of equations defined locally on each element $T$:
\begin{eqnarray}
a({\bf u}_h,\ {\bf v})-b({\bf v},\;p_h)&=&({\bf f},\;{\bf v}_0),\quad\forall\ {\bf v}=\{{\bf v}_0,\; 0\}\in
V_k(T),\label{wg1-local}\\
b({\bf u}_h,\;q)+s_2(p_h,q)&=&-(g,q_0), \quad\forall\ q=\{q_0,\; 0\}\in
W_k(T).\label{wg2-local}
\end{eqnarray}
If the exact values of ${\bf u}_b$ and $p_b$ were known on ${\partial T}$, then
the corresponding ${\bf u}_0$ and $p_0$ could be obtained by solving
(\ref{wg1-local}) and (\ref{wg2-local}) locally on each element.
Therefore, the key is to derive a system of equations that
determines ${\bf u}_b$ and $p_b$.
For any given ${\bf u}_b$ and $p_b$ on ${\partial T}$, let us solve
(\ref{wg1-local}) and (\ref{wg2-local}) to obtain ${\bf u}_0$ and $p_0$
on each element $T$. For simplicity, we introduce the following
notation
\begin{eqnarray}
{\bf u}_0:&=& D({\bf u}_b,p_b,{\bf f},g),\label{EQN:09-21-200}\\
p_0:&=& E({\bf u}_b, p_b,{\bf f},g).\label{EQN:09-21-201}
\end{eqnarray}
Then the solution ${\bf u}_h$ and $p_h$ of (\ref{wg1})-(\ref{wg2}) can
be written as ${\bf u}_h=\{{\bf u}_0,{\bf u}_b\}=\{D({\bf u}_b,p_b,{\bf
f},g),{\bf u}_b\}$ and $p_h=\{p_0,p_b\}=\{E({\bf u}_b,p_b,{\bf f},g),p_b\}$.
Let $D_1({\bf u}_b,p_b)=D({\bf u}_b,p_b,0,0)$ \ and \ $D_2({\bf
f},g)=D(0,0,{\bf f},g)$. Similarly let $E_1({\bf u}_b,p_b) =
E({\bf u}_b,p_b,0,0)$ and $E_2({\bf f},g)=E(0,0,{\bf f},g)$. Since
$a(\cdot,\cdot)$, $b(\cdot,\cdot)$ and $s_2(\cdot,\cdot)$ are
bilinear, superposition implies
\begin{equation*}
\begin{split}
({\bf u}_h; p_h)&=(\{{\bf u}_0,{\bf u}_b\}; \{p_0,p_b\})\\
&=(\{ D({\bf u}_b,p_b,{\bf f},g), {\bf u}_b\}; \{E({\bf u}_b,p_b,{\bf f},g),p_b\})\\
&=(\{ D({\bf u}_b,p_b,0,0), {\bf u}_b\}; \{E({\bf u}_b,p_b,0,0),p_b\})\\
&\ \ + (\{ D(0,0,{\bf f},g), 0\}; \{E(0,0,{\bf f},g),0\})\\
&=(\{ D_1({\bf u}_b,p_b), {\bf u}_b\}; \{E_1({\bf u}_b,p_b),p_b\}) + (\{
D_2({\bf f},g), 0\}; \{E_2({\bf f},g),0\}).
\end{split}
\end{equation*}
Substituting ${\bf u}_h=\{D({\bf u}_b,p_b,{\bf f},g),{\bf u}_b\}$ and
$p_h=\{E({\bf u}_b,p_b,{\bf f},g),p_b\}$ into the system
(\ref{wg111})-(\ref{wg222}) yields
\begin{eqnarray}
a(\{D_1({\bf u}_b,p_b),{\bf u}_b\},\ {\bf v})&=&\zeta_1({\bf v}),\label{wg1111}\\
b(\{D_1({\bf u}_b,p_b),{\bf u}_b\},\;q)+s_2(\{E_1({\bf u}_b,p_b),p_b\},q)&=&\zeta_2(q),
\label{wg2222}
\end{eqnarray}
for all ${\bf v}=\{0,\; {\bf v}_b\}\in V_{h,0}$ and $q=\{0,\; q_b\}\in W_h$.
Here
\begin{eqnarray*}
\zeta_1({\bf v})&=&-a(\{D_2({\bf f},g),0\},{\bf v})\\
\zeta_2(q)&=&-b(\{D_2({\bf f},g),0\},q)-s_2(\{E_2({\bf f},g),0\},q).
\end{eqnarray*}
Note that the system (\ref{wg1111})-(\ref{wg2222}) is a square
linear system with ${\bf u}_b$ and $p_b$ as the only unknowns; it is the
system of equations that ${\bf u}_b$ and $p_b$ must satisfy.
To summarize, our WG scheme (\ref{wg1})-(\ref{wg2}) can be
implemented as follows:
\begin{description}
\item[{\bf Step 1.}] Find ${\bf u}_b$ and $p_b$ with
${\bf u}_b\times{\bf n}=Q_b\phi$ and $p_b=0$ on $\partial\Omega$ satisfying
(\ref{wg1111})-(\ref{wg2222}).
\item[{\bf Step 2.}] Recover ${\bf u}_0$
and $p_0$ by ${\bf u}_0=D({\bf u}_b,p_b,{\bf f},g)$ and
$p_0=E({\bf u}_b,p_b,{\bf f},g)$ by solving (\ref{wg1-local}) and
(\ref{wg2-local}) locally on each element.
\end{description}
\smallskip
The system of equations (\ref{wg1111})-(\ref{wg2222}) is known as a
Schur complement of the original WG finite element scheme
(\ref{wg1})-(\ref{wg2}).
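To make the elimination procedure concrete, the following is a minimal
sketch (in Python/NumPy) of a Schur-complement reduction for a generic
block linear system in which the interior unknowns (playing the role of
${\bf u}_0$ and $p_0$) couple only locally; the block partitioning and the
matrix names are hypothetical placeholders and this is not the actual WG
implementation.
\begin{verbatim}
import numpy as np

def solve_by_static_condensation(K_ii, K_ib, K_bi, K_bb, f_i, f_b):
    """Solve [[K_ii, K_ib], [K_bi, K_bb]] [x_i; x_b] = [f_i; f_b]
    by first eliminating the interior unknowns x_i."""
    K_ii_inv = np.linalg.inv(K_ii)        # block-diagonal, element-local in practice
    S = K_bb - K_bi @ K_ii_inv @ K_ib     # Schur complement: Step 1 coefficient matrix
    g = f_b - K_bi @ K_ii_inv @ f_i       # condensed right-hand side
    x_b = np.linalg.solve(S, g)           # Step 1: boundary unknowns (u_b, p_b)
    x_i = K_ii_inv @ (f_i - K_ib @ x_b)   # Step 2: local recovery (u_0, p_0)
    return x_i, x_b
\end{verbatim}
In the actual WG implementation the interior block is never inverted
globally; the local solution operators $D$ and $E$ are formed element by
element.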
\section{Numerical Results}\label{Section:NE}
Our numerical tests are conducted for the Maxwell equations
\eqref{moment1}--\eqref{bc1} on the unit cube $\Omega=(0,1)^3$. The
first level grid consists of one cube. Each grid is refined by
subdividing every cube into eight half-sized cubes to define the next
level grid. We apply the first order weak Galerkin finite element
method; i.e., $V_h$ and $W_h$ are defined in \eqref{Vh} and
\eqref{Wh} with $k=1$, respectively. Thus, the vector component
${\bf u}$ is approximated by piecewise linear functions on each
cube and its faces; the scalar component $p$ is approximated by
constants on each cube and linear functions on its faces.
We compute four sets of solutions of \eqref{moment1}--\eqref{bc1},
which are
\begin{align}
\label{s1} && {\bf u}&= \begin{pmatrix} y-z \\ z-x \\ 3z-2y \end{pmatrix},
& p&=1. && \\
\label{s2} && {\bf u}&= \begin{pmatrix} yz \\ zx \\ 3z-2yx \end{pmatrix},
& p&=xz. && \\
\label{s3} && {\bf u}&= \begin{pmatrix} e^{yz} \\ z/(x+1) \\ e^{xy} \end{pmatrix},
& p&=e^{-xyz}. &&\\
\label{s4} && {\bf u}&= \begin{pmatrix} \cos(\pi x)\sin(\pi y)\sin(\pi z) \\
\sin(\pi x)\cos(\pi y)\sin(\pi z) \\
\sin(\pi x)\sin(\pi y)\cos(\pi z) \end{pmatrix},
& p&=\sin(2\pi x)\sin(2\pi y)\sin(2\pi z). &&
\end{align}
\begin{figure}[htb]\setlength\unitlength{1in}
\centering
\begin{picture}(3,5.0)(0,0)
\put(0,1.5){\resizebox{3.5in}{4.5in}{\includegraphics{ows.pdf}}}
\put(0,-0.5){\resizebox{3.5in}{4.5in}{\includegraphics{ow0.pdf}}}
\put(0,-2.5){\resizebox{3.5in}{4.5in}{\includegraphics{owby.pdf}} }
\end{picture}
\caption{The solution $p$ in \eqref{s3}, and the errors
$(p-p_0)$ and $(p-p_b)$ on level 4, at $z=0.3$. } \label{f-p}
\end{figure}
Observe that the solution $p$ in the first three test cases does not
satisfy the homogeneous boundary condition (\ref{bc1}). The
corresponding WG scheme (\ref{wg1})-(\ref{wg2}) needs to be modified
so that $p_b$ assumes the given non-homogeneous boundary value;
namely, the $L^2$ projection of the value of $p$ on the boundary.
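For illustration, the following is a small sketch (in Python/NumPy) of how
the $L^2$ projection of a given boundary value onto the linear polynomials
on a single face could be computed with a tensor-product Gauss rule; the
face, basis and quadrature order are hypothetical choices made only for
illustration and this is not the code used for the reported results.
\begin{verbatim}
import numpy as np

def l2_project_linear_on_face(p, nq=4):
    """L^2-project p(x, y) onto span{1, x, y} on the unit face [0,1]^2."""
    xq, wq = np.polynomial.legendre.leggauss(nq)
    xq = 0.5 * (xq + 1.0)                 # map nodes from [-1, 1] to [0, 1]
    wq = 0.5 * wq
    G = np.zeros((3, 3))                  # face mass (Gram) matrix of the basis
    b = np.zeros(3)                       # moments of p against the basis
    for i in range(nq):
        for j in range(nq):
            x, y, w = xq[i], xq[j], wq[i] * wq[j]
            phi = np.array([1.0, x, y])
            G += w * np.outer(phi, phi)
            b += w * p(x, y) * phi
    return np.linalg.solve(G, b)          # coefficients of c0 + c1*x + c2*y

# example: project p(x, y) = exp(-0.3*x*y) onto one face
coeff = l2_project_linear_on_face(lambda x, y: np.exp(-0.3 * x * y))
\end{verbatim}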
\begin{table}[htb]
\caption{ \label{table1} The errors, ${\bf e}_h={\bf Q}_h {\bf u}- {\bf u}_h$ in $H^1$-like norm ${|\hspace{-.02in}|\hspace{-.02in}|}\cdot{|\hspace{-.02in}|\hspace{-.02in}|}_1$,
$\epsilon_h=Q_hp -p_h$ in $L^2$-like norm ${|\hspace{-.02in}|\hspace{-.02in}|}\cdot{|\hspace{-.02in}|\hspace{-.02in}|}_0$,
${\bf e}_0={\bf Q}_0 {\bf u} - {\bf u}_0$ in $L^2$ norm,
and $\epsilon_0=Q_0p-p_0$ in $L^2$ norm,
for \eqref{s1} by $k=1$ finite elements \eqref{Vh}--\eqref{Wh}.}
\begin{center} \begin{tabular}{c|rr|rr|rr|rr}
\hline grid & $ {|\hspace{-.02in}|\hspace{-.02in}|} {\bf e}_h{|\hspace{-.02in}|\hspace{-.02in}|}_1$&$h^r$ & $ {|\hspace{-.02in}|\hspace{-.02in}|}\epsilon_h {|\hspace{-.02in}|\hspace{-.02in}|}_0$&$h^r$ &
$ \|\epsilon_0\|_{L^2}$&$h^r$ & $ \|{\bf e}_0\|_{L^2}$&$h^r$ \\ \hline
1& 0.00000&0.0& 0.00000&0.0& 0.00000&0.0& 0.00000&0.0 \\
2& 0.00000&0.0& 0.00000&0.0& 0.00000&0.0& 0.00000&0.0 \\
3& 0.00000&0.0& 0.00000&0.0& 0.00000&0.0& 0.00000&0.0 \\
4& 0.00000&0.0& 0.00000&0.0& 0.00000&0.0& 0.00000&0.0 \\
\hline
\end{tabular}\end{center} \end{table}
The first numerical test is used to check the correctness of the
code, where the exact solutions \eqref{s1} are linear and constant,
respectively. As expected, the weak Galerkin finite element
solutions are exact, up to computer accuracy. As shown in Table
\ref{table1}, all errors are zero.
\begin{table}[htb]
\caption{ \label{table2} The errors, ${\bf e}_h={\bf Q}_h {\bf u}- {\bf u}_h$ in $H^1$-like norm
${|\hspace{-.02in}|\hspace{-.02in}|} {\bf e}_h{|\hspace{-.02in}|\hspace{-.02in}|}_1$,
${\bf e}_0={\bf Q}_0 {\bf u} - {\bf u}_0$ in $L^2$ norm $\|{\bf e}_0\|$,
$\epsilon_h=Q_hp -p_h$ in $L^2$-like norm ${|\hspace{-.02in}|\hspace{-.02in}|}\epsilon_h{|\hspace{-.02in}|\hspace{-.02in}|}_0$,
and $\epsilon_0=Q_0p-p_0$ in $L^2$ norm $\|\epsilon_0\|$,
for \eqref{s2} by $k=1$ finite elements \eqref{Vh}--\eqref{Wh}.
The columns labeled $h^r$ list the observed order $r$ of convergence, as in $O(h^r)$.}
\begin{center} \begin{tabular}{c|lc|lc|lc|lc}
\hline grid & $ {|\hspace{-.02in}|\hspace{-.02in}|}{\bf e}_h{|\hspace{-.02in}|\hspace{-.02in}|}_1$&$h^r$ & $ \|{\bf e}_0\|_{L^2}$&$h^r$
& $ {|\hspace{-.02in}|\hspace{-.02in}|}\epsilon_h{|\hspace{-.02in}|\hspace{-.02in}|}_0$&$h^r$ &$ \|\epsilon_0\|_{L^2}$&$h^r$
\\ \hline
1 &2.26e-08 &- &7.76e-09 &- &0.0000 &- &0.0000 &-\\
2 &5.15e-02 &- &9.46e-03 &- &0.0000 &- &0.0000 &-\\
3 &2.28e-02 &1.1 &2.14e-03 &2.1 &0.0000 &- &0.0000 &-\\
4 &8.77e-03 &1.4 &4.15e-04 &2.4 &0.0000 &- &0.0000 &-\\
5 &3.03e-03 &1.5 &7.66e-05 &2.4 &0.0000 &- &0.0000 &-\\
\hline
\end{tabular}\end{center} \end{table}
In the second test \eqref{s2}, we choose bilinear functions as
the exact solution. The numerical results are reported in Table
\ref{table2}. It can be seen that the numerical solution for the
unknown function $p$ is numerically identical to the exact solution.
Moreover, the order of convergence for ${\bf u}$ is half an order higher
than what was proved in the theory. This superconvergence is
probably caused by the special form of the exact solution.
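The convergence orders reported in the $h^r$ columns are obtained from the
errors on two consecutive grid levels in the usual way,
$r=\log(e_{\ell-1}/e_\ell)/\log 2$; the following small sketch (in Python)
illustrates this standard computation on hypothetical error values.
\begin{verbatim}
import math

def observed_orders(errors, ratio=2.0):
    """Observed orders r, assuming e_l ~ C h_l^r and h_{l-1}/h_l = ratio."""
    return [math.log(errors[l - 1] / errors[l]) / math.log(ratio)
            for l in range(1, len(errors))]

# hypothetical errors on four consecutive grid levels
print(observed_orders([1.0e-1, 5.2e-2, 2.6e-2, 1.3e-2]))  # roughly [0.94, 1.0, 1.0]
\end{verbatim}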
\begin{table}[htb]
\caption{ \label{table3} The errors, ${\bf e}_h={\bf Q}_h {\bf u}- {\bf u}_h$ in $H^1$-like norm
${|\hspace{-.02in}|\hspace{-.02in}|} {\bf e}_h{|\hspace{-.02in}|\hspace{-.02in}|}_1$, ${\bf e}_0={\bf Q}_0 {\bf u} - {\bf u}_0$ in $L^2$ norm $\|{\bf e}_0\|$,
$\epsilon_h=Q_hp -p_h$ in $L^2$-like norm ${|\hspace{-.02in}|\hspace{-.02in}|}\epsilon_h{|\hspace{-.02in}|\hspace{-.02in}|}_0$,
${|\hspace{-.02in}|\hspace{-.02in}|}\epsilon_h{|\hspace{-.02in}|\hspace{-.02in}|}_{0,h}$, and $\epsilon_0=Q_0p-p_0$ in $L^2$
norm $\|\epsilon_0\|$,
for \eqref{s3} by $k=1$ finite elements \eqref{Vh}--\eqref{Wh}.
The columns labeled $h^r$ list the observed order $r$ of convergence, as in $O(h^r)$.}
\begin{center} \begin{tabular}{c|lc|lc|lc|lc|lc}
\hline grid & $ {|\hspace{-.02in}|\hspace{-.02in}|} {\bf e}_h{|\hspace{-.02in}|\hspace{-.02in}|}_1$&$h^r$ & $
\|{\bf e}_0\|_{L^2}$&$h^r$ & $ {|\hspace{-.02in}|\hspace{-.02in}|}\epsilon_h {|\hspace{-.02in}|\hspace{-.02in}|}_0$&$h^r$ & $
{|\hspace{-.02in}|\hspace{-.02in}|}\epsilon_h {|\hspace{-.02in}|\hspace{-.02in}|}_{0,h}$&$h^r$ &$ \|\epsilon_0\|_{L^2}$&$h^r$ \\
\hline
1 &7.02e-1 &- &3.32e-1 & - &6.56e-3 &- &6.56e-3 &- &2.68e-3 &-\\
2 &3.69e-1 &0.9 &8.71e-2 &1.9 &7.34e-2 &- &4.73e-3 &0.5 &2.33e-3 &0.2\\
3 &1.91e-1 &0.9 &2.10e-2 &2.1 &5.11e-2 &0.5 &1.09e-3 &2.1 &4.73e-4 &2.3\\
4 &1.02e-1 &0.9 &5.10e-3 &2.0 &2.91e-2 &0.8 &2.67e-4 &2.0 &1.18e-4 &2.0\\
5 &5.05e-2 &1.0 &1.26e-3 &2.0 &1.55e-2 &0.9 &6.59e-5 &2.0 &2.95e-5 &2.0\\
6 &2.52e-2 &1.0 &3.16e-4 &2.0 &7.73e-3 &1.0 &1.65e-5 &2.0 &7.39e-6 &2.0\\
\hline
\end{tabular}\end{center} \end{table}
In the third test \eqref{s3}, the exact solution is chosen as a
general function. The numerical results for this test case are
presented in Table \ref{table3}, confirming the theoretical
convergence estimates derived in Theorems \ref{WG-ErrorEstimate}
and \ref{L2-bd}.
\begin{figure}[htb]\setlength\unitlength{1in}
\centering
\begin{picture}(3,6.7)(0,0)
\put(0,3.5){\resizebox{3.5in}{4.5in}{\includegraphics{ovs3.pdf}}}
\put(0, 1.5){\resizebox{3.5in}{4.5in}{\includegraphics{ov03.pdf}}}
\put(0,-0.5){\resizebox{3.5in}{4.5in}{\includegraphics{ovb32.pdf}} }
\put(0,-2.5){\resizebox{3.5in}{4.5in}{\includegraphics{ovb33.pdf}} }
\end{picture}
\caption{The solution $({\bf u})_3$ (the third component) in \eqref{s3},
and the errors $({\bf u}-{\bf u}_0)_3$,
$({\bf u}-{\bf u}_b)_{3,{\bf t}_1}$ and
$({\bf u}-{\bf u}_b)_{3,{\bf t}_2}$ (the two tangential components of the third component)
on level 4, at $z=0.3$. } \label{f-u}
\end{figure}
Table \ref{table3} contains additional information for the scalar
approximation $p_h$; the column labeled ${|\hspace{-.02in}|\hspace{-.02in}|}\epsilon_h{|\hspace{-.02in}|\hspace{-.02in}|}_{0,h}$
reports the error of the scalar approximation measured at the center of
each face in a discrete $L^2$ fashion. More precisely, for each
$q_h=\{q_0, q_b\}\in W_h$,
the semi-norm ${|\hspace{-.02in}|\hspace{-.02in}|} q_h {|\hspace{-.02in}|\hspace{-.02in}|}_{0,h}$ is defined as follows:
$$
{|\hspace{-.02in}|\hspace{-.02in}|} q_h{|\hspace{-.02in}|\hspace{-.02in}|}_{0,h}^2=\sum_{T\in{\mathcal T}_h} h \| q_0 - \Pi
q_b\|_{\partial T}^2,
$$
where $\Pi$ is the averaging operator on each face. It can be seen
that the convergence in this discrete $L^2$ norm is of order
$O(h^2)$, which is higher than the theoretical prediction.
To illustrate this behavior, we plot the solutions and errors in Figures
\ref{f-p} and \ref{f-u}.
We believe that some superconvergence is at play in the weak
Galerkin finite element method; a rigorous investigation is left to
interested readers.
The fourth test \eqref{s4} was conducted with another solution of
general structure. The goal of this test is to re-confirm the
convergence results developed in the earlier sections. The numerical
results are presented in Table \ref{table4}. The numerical
performance of the weak Galerkin finite element method is similar to
that in the third test case.
\begin{table}[htb]
\caption{ \label{table4} The errors, ${\bf e}_h={\bf Q}_h {\bf u}- {\bf u}_h$ in $H^1$-like norm
${|\hspace{-.02in}|\hspace{-.02in}|} {\bf e}_h{|\hspace{-.02in}|\hspace{-.02in}|}_1$,
${\bf e}_0={\bf Q}_0 {\bf u} - {\bf u}_0$ in $L^2$ norm $\|{\bf e}_0\|$,
$\epsilon_h=Q_hp -p_h$ in $L^2$-like norm ${|\hspace{-.02in}|\hspace{-.02in}|}\epsilon_h{|\hspace{-.02in}|\hspace{-.02in}|}_0$,
${|\hspace{-.02in}|\hspace{-.02in}|}\epsilon_h{|\hspace{-.02in}|\hspace{-.02in}|}_{0,h}$,
and $\epsilon_0=Q_0p-p_0$ in $L^2$ norm $\|\epsilon_0\|$,
for \eqref{s4} by $k=1$ finite elements \eqref{Vh}--\eqref{Wh}.
The columns labeled $h^r$ list the observed order $r$ of convergence, as in $O(h^r)$.}
\begin{center} \begin{tabular}{c|lc|lc|lc|lc|lc}
\hline grid & $ {|\hspace{-.02in}|\hspace{-.02in}|} {\bf e}_h{|\hspace{-.02in}|\hspace{-.02in}|}_1$&$h^r$ & $
\|{\bf e}_0\|_{L^2}$&$h^r$ & $ {|\hspace{-.02in}|\hspace{-.02in}|}\epsilon_h {|\hspace{-.02in}|\hspace{-.02in}|}_0$&$h^r$ & $
{|\hspace{-.02in}|\hspace{-.02in}|}\epsilon_h {|\hspace{-.02in}|\hspace{-.02in}|}_{0,h}$&$h^r$ &$ \|\epsilon_0\|_{L^2}$&$h^r$ \\
\hline
1 &8.54e0 & - &1.35e0 &- &3.60e-1 &- &3.60e-1 &- &1.47e-1 &-\\
2 &2.27e0 &1.8 &4.77e-1 &1.5 &2.10e0 &- &2.08e0 &- &8.65e-1 &-\\
3 &9.86e-1 &1.2 &1.47e-1 &1.7 &5.17e-1 &2.0 &2.50e-1 &3.1 &1.78e-1 &2.3\\
4 &4.32e-1 &1.1 &3.86e-2 &1.9 &3.09e-1 &0.8 &4.53e-2 &2.5 &4.10e-2 &2.1\\
5 &1.97e-1 &1.1 &9.21e-3 &2.0 &1.71e-1 &0.9 &1.08e-2 &2.0 &1.06e-2 &1.9\\
6 &9.85e-2 &1.0 &2.26e-3 &2.0 &8.75e-2 &1.0 &2.69e-3 &2.0 &2.71e-3 &2.0\\
\hline
\end{tabular}\end{center} \end{table}
In the rest of our numerical experiments, we consider a version of
the finite element scheme (\ref{wg1})-(\ref{wg2}) for which no
convergence theory was developed in the present paper. More
precisely, the WG method still uses piecewise linear functions for
the vector component in $V_h$, but the scalar space is modified as
follows:
$$
W_h=\{w=\{w_0,w_b\}:\{w_0,w_b\}|_T\in P_0(T)\times
P_0(e),e\subset\partial T,w_b=0 \mbox{ on }\partial\Omega\}.
$$
In other words, the scalar variable is approximated by using
piecewise constants on both the interior and the boundary of each
element. Again, it is not known if the current theoretical result
can be extended to this simple WG element, though the numerical
results show an excellent approximation to the exact solution. Table
\ref{table5} contains the numerical results for the test case
\eqref{s3}, and Table \ref{table6} is for the test case \eqref{s4}.
\begin{table}[htb!]
\caption{The errors, ${\bf e}_h={\bf Q}_h {\bf u}- {\bf u}_h$ in $H^1$-like norm,
${\bf e}_0={\bf Q}_0 {\bf u} - {\bf u}_0$ in $L^2$ norm,
$\epsilon_h=Q_hp -p_h$ in $L^2$-like norm,
and $\epsilon_0=Q_0p-p_0$ in $L^2$ norm,
for \eqref{s3} by the lower order WG finite elements.
The columns labeled $h^r$ list the observed order $r$ of convergence, as in $O(h^r)$.}\label{table5}
\begin{center} \begin{tabular}{c|lc|lc|lc|lc}
\hline grid & $ {|\hspace{-.02in}|\hspace{-.02in}|} {\bf e}_h{|\hspace{-.02in}|\hspace{-.02in}|}_1$&$h^r$ & $
\|{\bf e}_0\|_{L^2}$&$h^r$ & $ {|\hspace{-.02in}|\hspace{-.02in}|}\epsilon_h {|\hspace{-.02in}|\hspace{-.02in}|}_0$&$h^r$ &
$ \|\epsilon_0\|_{L^2}$&$h^r$ \\ \hline
1 &6.72e-1 &- &3.24e-1 &- &0.00e0 &- &2.68e-3 &-\\
2 &3.66e-1 &0.9 &8.62e-2 &1.9 &5.73e-3 &- &2.92e-3 &-\\
3 &1.94e-1 &0.9 &2.09e-2 &2.0 &1.03e-3 &2.5 &6.16e-4 & 2.2\\
4 &1.02e-1 &0.9 &5.07e-3 &2.0 &1.89e-4 &2.5 &1.12e-4 & 2.4\\
5 &5.05e-2 &1.0 &1.26e-3 &2.0 &9.44e-5 &2.0 &2.71e-5 & 2.1 \\
6 &2.52e-2 &1.0 &3.14e-4 &2.0 &4.72e-5 &2.0 &6.78e-6 & 2.0\\
\hline
\end{tabular}\end{center} \end{table}
\begin{table}[htb!]
\caption{The errors, ${\bf e}_h={\bf Q}_h {\bf u}- {\bf u}_h$ in $H^1$-like norm,
${\bf e}_0={\bf Q}_0 {\bf u} - {\bf u}_0$ in $L^2$ norm,
$\epsilon_h=Q_hp -p_h$ in $L^2$-like norm,
and $\epsilon_0=Q_0p-p_0$ in $L^2$ norm,
for \eqref{s4} by the lower order WG finite elements.
The columns labeled $h^r$ list the observed order $r$ of convergence, as in $O(h^r)$.}\label{table6}
\begin{center} \begin{tabular}{c|lc|lc|lc|lc}
\hline grid & $ {|\hspace{-.02in}|\hspace{-.02in}|} {\bf e}_h{|\hspace{-.02in}|\hspace{-.02in}|}_1$&$h^r$ & $
\|{\bf e}_0\|_{L^2}$&$h^r$ & $ {|\hspace{-.02in}|\hspace{-.02in}|}\epsilon_h {|\hspace{-.02in}|\hspace{-.02in}|}_0$&$h^r$ &
$ \|\epsilon_0\|_{L^2}$&$h^r$ \\ \hline
1& 8.54e0 &- & 1.35e0 &- & 3.60e-1 &- & 1.47e-1&- \\
2& 2.21e0 &1.9 & 4.27e-1 &1.7 & 2.08e0 &- & 8.57e-1&- \\
3& 1.07e0 &1.1 & 1.63e-1 &1.4 & 1.99e-1 &3.4 & 1.23e-1&2.8 \\
4& 4.35e-1 &1.2 & 3.88e-2 &2.1 & 4.49e-2 &2.1 & 2.71e-2&2.2 \\
5& 1.96e-1 &1.1 & 9.19e-3 &2.1 & 1.07e-2 &2.1 & 7.23e-3&1.9 \\
6& 9.82e-2 &1.0 & 2.26e-3 &2.0 & 2.61e-3 &2.0 & 1.85e-3&2.0 \\
\hline
\end{tabular}\end{center} \end{table}
\newpage
| {
"timestamp": "2013-12-10T02:10:21",
"yymm": "1312",
"arxiv_id": "1312.2309",
"language": "en",
"url": "https://arxiv.org/abs/1312.2309",
"abstract": "This paper introduces a numerical scheme for time harmonic Maxwell's equations by using weak Galerkin (WG) finite element methods. The WG finite element method is based on two operators: discrete weak curl and discrete weak gradient, with appropriately defined stabilizations that enforce a weak continuity of the approximating functions. This WG method is highly flexible by allowing the use of discontinuous approximating functions on arbitrary shape of polyhedra and, at the same time, is parameter free. Optimal-order of convergence is established for the weak Galerkin approximations in various discrete norms which are either $H^1$-like or $L^2$ and $L^2$-like. An effective implementation of the WG method is developed through variable reduction by following a Schur-complement approach, yielding a system of linear equations involving unknowns associated with element boundaries only. Numerical results are presented to confirm the theory of convergence.",
"subjects": "Numerical Analysis (math.NA)",
"title": "A Weak Galerkin Finite Element Method for the Maxwell Equations",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109485172559,
"lm_q2_score": 0.7185944046238982,
"lm_q1q2_score": 0.707679637216854
} |
https://arxiv.org/abs/1602.08055 | Stability of explicit one-step methods for P1-finite element approximation of linear diffusion equations on anisotropic meshes | We study the stability of explicit one-step integration schemes for the linear finite element approximation of linear parabolic equations. The derived bound on the largest permissible time step is tight for any mesh and any diffusion matrix within a factor of $2(d+1)$, where $d$ is the spatial dimension. Both full mass matrix and mass lumping are considered. The bound reveals that the stability condition is affected by two factors. The first one depends on the number of mesh elements and corresponds to the classic bound for the Laplace operator on a uniform mesh. The other factor reflects the effects of the interplay of the mesh geometry and the diffusion matrix. It is shown that it is not the mesh geometry itself but the mesh geometry in relation to the diffusion matrix that is crucial to the stability of explicit methods. When the mesh is uniform in the metric specified by the inverse of the diffusion matrix, the stability condition is comparable to the situation with the Laplace operator on a uniform mesh. Numerical results are presented to verify the theoretical findings. | \section{Introduction}
\label{sec:introduction}
Adaptive meshes are commonly used for the numerical solution of partial differential equations (PDEs) to enhance computational efficiency, but there are still gaps in the mathematical understanding of the effects of the variation of element size and shape on the properties of numerical schemes for solving PDEs.
In this paper, we are concerned with the stability of explicit one-step time integration of the linear finite element approximation on general nonuniform simplicial meshes for the initial-boundary value problem (IBVP)
\begin{align}
\begin{cases}
\begin{alignedat}{3}
& \partial_t u = \nabla \cdot \left(\D \nabla u \right),
\qquad && \boldsymbol{x} \in \Omega,
\quad && t \in \left( 0, T \right],
\\
& u(\boldsymbol{x}, t) = 0,
\qquad && \boldsymbol{x} \in \Gamma_D,
\quad && t \in \left( 0, T \right],
\\
& \D \nabla u(\boldsymbol{x}, t) \cdot \bm{n} = 0,
\qquad && \boldsymbol{x} \in \Gamma_N,
\quad && t \in \left( 0, T \right],
\\
& u(\boldsymbol{x}, 0) = u_0(\boldsymbol{x}),
\qquad && \boldsymbol{x} \in \Omega
\end{alignedat}
\end{cases}
\label{eq:IBVP}
\end{align}
where $\Omega \subset \mathbb{R}^d$ ($d \ge 1$) is an interval, a bounded polygonal or polyhedral domain, $\Gamma_D \cup \Gamma_N = \partial \Omega$, $\Gamma_D$ has a positive $(d-1)$-volume, $u_0$ is a given initial function, and $\D$ is the diffusion matrix which is assumed to be symmetric and uniformly positive definite on $\Omega$.
In this study, we also assume that $\D$ is time independent, i.e.,\ $\D = \D(\boldsymbol{x})$.
Problem \cref{eq:IBVP} is isotropic when $\D(\boldsymbol{x}) = \alpha (\boldsymbol{x}) I$ for all $\boldsymbol{x}$ in $\Omega$, where $\alpha$ is a scalar function and $I$ is the $d$-by-$d$ identity matrix.
Otherwise, the problem is an anisotropic diffusion problem, which is the case we consider in this work.
Anisotropic diffusion arises in various areas of science and engineering, including plasma physics~\cite{GL09}, petroleum reservoir simulation~\cite{EAK01,MD06}, and image processing~\cite{KM09,Wei98}.
Assume that $u_0 \in H^1_D(\Omega) = \left\{ v \in H^1(\Omega): \text{$v=0$ on $\Gamma_D$} \right\}$.
Then, if $u$ is sufficiently smooth, we have the stability estimates
\begin{align}
\begin{cases}
\begin{alignedat}{2}
\Norm{u(\cdot,t)}_{L^2(\Omega)}
&\le \Norm{u_0}_{L^2(\Omega)},
&& \qquad t \in \left( 0, T \right],
\\[0.5ex]
\NormE{u(\cdot,t)}_{H^1(\Omega)}
&\le \NormE{u_0}_{H^1(\Omega)},
&& \qquad t \in \left( 0, T \right],
\end{alignedat}
\end{cases}
\label{energy-est}
\end{align}
where $\NormE{u(\cdot, t)}_{H^1(\Omega)} \equiv \norm{\D^{1/2}\nabla u}_{L^2(\Omega)}$ is the energy norm of $u(\cdot, t)$.
It is essential that a numerical scheme for \cref{eq:IBVP} preserves the stability estimates.
The stability of the time integration depends on the largest eigenvalue of the system related to the numerical scheme which, in turn, depends on the underlying meshes and the coefficients of the IBVP.
For a uniform mesh and the Laplace operator, it is well known that the largest permissible time step is proportional to the square of the element diameter.
In the case of a nonuniform mesh or a variable diffusion matrix the situation becomes more complicated.
Essentially, one needs to estimate the largest eigenvalue of $M^{-1} A$, where $M$ and $A$ are the mass and stiffness matrices corresponding to the discretization of the IBVP.\@
This can be done by estimating the extreme eigenvalues of $M$ and $A$.
Tight bounds on those of the mass matrix $M$ for linear finite elements with locally quasi-uniform meshes are available in the literature and are typically proportional to the extremal mesh element volumes~\cite{Fri73,GraMcL06,Wat87}, whereas those for the stiffness matrix $A$ are more difficult to obtain and only a few results are available in the literature for the case of nonuniform meshes.
For example, Fried~\cite{Fri73} shows how to obtain these bounds for the finite element approximation of the Laplace operator for general nonuniform meshes using local element mass and stiffness matrices.
A similar argument was used by Shewchuk~\cite{She02a} to develop a bound on the largest eigenvalue of $M^{-1} A$ in terms of the maximum eigenvalues of local element matrices for the case of a lumped mass matrix.
Graham and McLean~\cite{GraMcL06} study the finite/boundary element approximation of a general differential/integral operator on locally quasi-uniform meshes in terms of patch volumes and aspect ratios.
Du, Wang, and Zhu~\cite{DuWanZhu09} obtain bounds on the extreme eigenvalues of the stiffness matrix for the Galerkin approximation of a general diffusion operator in terms of element geometry.
Zhu and Du~\cite{ZhuDu11,ZhuDu14} further develop bounds on the largest permissible
time step for time dependent problems.
It is worth mentioning that these existing works allow anisotropic meshes.
However, the interplay between the mesh geometry and the diffusion matrix is not really taken into account, which, as we will see, is crucially important for the stability of explicit integration schemes.
A notable exception is the bound obtained by Shewchuk~\cite{She02a}, which takes the effects of the diffusion coefficients fully into account; see \cref{rem:zhudu} for details and \cref{ex:aniso} for a numerical example.
Moreover, the existing analysis either employs some mesh regularity assumptions such as the local uniformity or involves parameters in final estimates that are related to mesh regularity, such as the maximum ratio of volumes of neighboring elements and/or the maximum number of elements in a patch.
The objective of this work is to provide estimates for the largest permissible time step which are accurate and tight for \emph{any mesh} and \emph{any diffusion matrix}.
We utilize bounds recently obtained by Kamenski~et~al.~\cite{KamHuaXu13} on the extreme eigenvalues of $M$ and the largest eigenvalue of $A$ for a general diffusion operator with arbitrary meshes.
The obtained stability condition expressed in terms of matrix entries is tight within a constant factor which is independent of the mesh and the diffusion matrix.
No assumption on the mesh regularity is made in the development.
We show that the alignment of the mesh with the diffusion matrix plays a crucial role in the stability condition: the largest permissible time step depends only on the number of mesh elements and the mesh geometry in relation to the diffusion matrix.
In particular, if the mesh is uniform in the metric specified by $\D^{-1}$, the stability condition is essentially the same as that for the Laplace operator with a uniform mesh.
Although we consider only linear finite elements, the presented analysis is applicable to high order finite elements without major modifications~\cite{HuaKamLan15}.
The paper is organized as follows.
We start in \cref{sec:setting} with the problem setting and a detailed description of mesh quality measures which are needed for the geometric interpretations of stability estimates.
The main results on stability are given in \cref{sec:explicit}; both the full mass matrix and mass lumping are considered.
Numerical examples to demonstrate the theoretical findings are presented in \cref{sec:numerical:examples}, including a two-dimensional groundwater flow problem.
Conclusions are drawn in \cref{sec:conclusion}.
\section{Linear finite element approximation}
\label{sec:setting}
We consider the standard linear finite element method for the spatial discretization of IBVP~\cref{eq:IBVP}.
We assume that a family $\{ \Th\}$ of simplicial meshes is given for $\Omega$.
While having adaptive meshes in mind, we consider the meshes to be general nonuniform ones, which may contain elements of small size and large aspect ratio.
Let $K$ be an arbitrary element of $\Th$, $\hat{K}$ the \emph{reference element}, and $\omega_i$ the element \emph{patch} of the $i^{\text{th}}$ vertex (\cref{fig:mesh:elements}).
Element and patch volumes are denoted by
\[
\Abs{K}
\quad \text{and} \quad
\Abs{\omega_i} = \sum_{K \in \omega_i} \Abs{K}.
\]
For each mesh element $K\in \Th$ let $F_K$ be the invertible affine mapping from $\hat{K}$ to $K$ (\cref{fig:mesh:elements}) and $F_K'$ its Jacobian matrix.
Note that $F_K'$ is a constant matrix with $\det(F_K') = \Abs{K}$ (for simplicity, we assume that $\hat{K}$ is equilateral with $\abs{\hat{K}} = 1$).
\begin{figure}[t]
\tikzsetnextfilename{hkl2013-notation}
\input{hkl2013-notation.tikz}
\caption{Reference and mesh elements, mapping $F_K$,
$i^{\text{th}}$ node and its patch $\omega_i$\label{fig:mesh:elements}}
\end{figure}
Let $V^h$ be the linear finite element space associated with mesh $\Th$.
Defining $V^h_D=V^h\cap H^1_D(\Omega)=\left\{ v^h \in V^h: \text{$v^h=0$ on $\Gamma_D$} \right\}$, the piecewise linear finite element solution $u^h(t) \in V^h_D$, $t \in \left( 0, T \right]$ is defined by
\begin{align}
\int_\Omega v^h \partial_t u^h \;d\boldsymbol{x}
&= - \int_\Omega \nabla v^h \cdot \D \nabla u^h \;d\boldsymbol{x},
&& \forall v^h \in V^h_D, \quad t \in \left( 0, T \right],
\label{eq:FEM:i}
\\
\intertext{subject to the initial condition}
\int_\Omega u^h(\boldsymbol{x}, 0) v^h \;d\boldsymbol{x}
&= \int_\Omega u_0(\boldsymbol{x}) v^h \;d\boldsymbol{x},
&& \forall v^h \in V^h_D .
\label{eq:FEM:ii}
\end{align}
We denote the number of elements of $\Th$ by $N$ and the number of interior vertices plus the vertices associated with the Neumann boundary condition by $N_{vi}$.
If we express $u^h$ as
\[
u^h(\boldsymbol{x},t) = \sum_{j=1}^{N_{vi}} u^h_j(t) \phi_j (\boldsymbol{x}),
\]
where $\phi_j$ is the linear basis function associated with the $j^{\text{th}}$ vertex ($j = 1, \dotsc, N_{vi}$), from \cref{eq:FEM:i,eq:FEM:ii} we obtain
\begin{equation}
M \boldsymbol{U}_t = - A \boldsymbol{U},
\qquad \boldsymbol{U}(0) = \boldsymbol{U}_0,
\label{eq:fem:system}
\end{equation}
where $\boldsymbol{U} = {\left(u^h_1,\dotsc, u^h_{N_{vi}}\right)}^T$ and $M$ and $A$ are the mass and the stiffness matrices,
\begin{equation}
M_{ij} = \int_\Omega \phi_i \phi_j \;d\boldsymbol{x},
\qquad
A_{ij} = \int_\Omega \nabla \phi_i \cdot \D \nabla \phi_j \;d\boldsymbol{x} ,
\qquad i, j = 1, \dotsc, N_{vi}.
\label{eq:AM:i}
\end{equation}
We shall investigate how the geometry of the mesh and the anisotropy of the diffusion matrix affect the stability of explicit one-step methods for integrating \cref{eq:fem:system}.
In the following we assume that the mesh is fixed for all time steps.
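As an illustration of \cref{eq:AM:i}, the following is a minimal sketch (in
Python/NumPy) of the assembly of $M$ and $A$ for linear elements on a
triangular mesh ($d=2$) with an elementwise constant diffusion matrix; the
mesh data structures are hypothetical, dense matrices are used for brevity,
and boundary conditions are ignored.
\begin{verbatim}
import numpy as np

def assemble_P1(nodes, triangles, D_of_element):
    """Mass matrix M and stiffness matrix A for P1 elements on triangles.
    nodes: (n, 2) vertex coordinates; triangles: (ne, 3) vertex indices;
    D_of_element(e): 2x2 SPD diffusion matrix, constant on element e."""
    n = nodes.shape[0]
    M = np.zeros((n, n)); A = np.zeros((n, n))
    for e, tri in enumerate(triangles):
        p = nodes[tri]                                  # 3 x 2 vertex coordinates
        B = np.array([p[1] - p[0], p[2] - p[0]]).T      # Jacobian of the affine map
        area = 0.5 * abs(np.linalg.det(B))
        # physical gradients of the three P1 basis functions (columns)
        G = np.linalg.solve(B.T, np.array([[-1.0, 1.0, 0.0], [-1.0, 0.0, 1.0]]))
        Ae = area * G.T @ D_of_element(e) @ G           # local stiffness matrix
        Me = area / 12.0 * (np.ones((3, 3)) + np.eye(3))  # exact local P1 mass matrix
        for i in range(3):
            for j in range(3):
                A[tri[i], tri[j]] += Ae[i, j]
                M[tri[i], tri[j]] += Me[i, j]
    return M, A

# e.g., M, A = assemble_P1(nodes, triangles, lambda e: np.eye(2)) for the Laplacian
\end{verbatim}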
\subsection{Mathematical description of nonuniform meshes; mesh quality measures}
\label{sec:mesh:notation}
An adaptive mesh, which is typically nonuniform, can be generated as a uniform one in the metric specified by a given metric tensor, which is always assumed to be symmetric and uniformly positive definite in $\Omega$~\cite{Hua05a}.
On the other hand, a metric tensor can be defined for any given mesh such that all elements are uniform in the metric specified by this tensor~\cite{HuaRus11}.
Thus, it is natural to consider nonuniform meshes in relation to a given metric tensor.
In the following, we describe several quality measures and mathematical characterizations for (nonuniform) meshes in terms of a given metric tensor $\M = \M(\boldsymbol{x})$.
As we will see in \cref{sec:explicit}, the matching between the mesh metric tensor and the diffusion matrix plays a crucial role for the stability condition.
In our analysis, we slightly adjust the original definitions of the mesh quality measures in~\cite{Hua05} (see also~\cite{Hua07,HuaRus11}).
Let
\begin{equation}
\M_K = \frac{1}{\Abs{K}} \int_K \M \;d\boldsymbol{x},
\qquad
\Abs{K}_{\M} = \Abs{K} {\det(\M_K)}^{\frac{1}{2}},
\qquad
\Abs{\Omega}_{\M,h} = \sum_{K \in \Th} \Abs{K}_{\M} .
\label{eq:sigmah:def}
\end{equation}
Note that $\M_K$ is the average of $\M$ over the element $K$ and $\Abs{K}_{\M}$ and $\Abs{\Omega}_{\M,h}$ are approximate volumes of $K$ and $\Omega$ in the metric $\M$, viz.,
\[
\Abs{K}_{\M} \approx \int_K {\det\bigl(\M(\boldsymbol{x})\bigr)}^{\frac{1}{2}} \;d\boldsymbol{x}
\quad \text{and} \quad
\Abs{\Omega}_{\M,h}
\approx \sum_{K \in \Th} \int_K {\det\bigl(\M(\boldsymbol{x})\bigr)}^{\frac{1}{2}} \;d\boldsymbol{x}
= \Abs{\Omega}_{\M}.
\]
Hereafter, without confusion we will call $\Abs{K}_{\M}$ and $\Abs{\Omega}_{\M,h}$ the volumes of $K$ and $\Omega$ in the metric $\M$, respectively.
We also define the \emph{average diameter} of element $K$ and the \emph{global average element diameter} with respect to $\M$ as
\begin{align*}
h_{K,\M} = \Abs{K}_{\M}^{\frac{1}{d}}
\quad \text{and} \quad
h_{\M} = {\left(\frac{1}{N}\Abs{\Omega}_{\M,h}\right)}^{\frac{1}{d}} .
\end{align*}
The diameter $h_K$ of $K$ is defined as the length of the longest edge of $K$.
With these notations, we now are ready to describe the mesh quality measures.
The first one, the \emph{equidistribution} quality measure, is defined as the ratio of the average element volume to the volume of $K$, both measured in the metric specified by $\M_K$,
\begin{equation}
\QeqM(K)
= \frac{\frac{1}{N} \Abs{\Omega}_{\M,h}}{\Abs{K}_{\M}}
= {\left( \frac{h_{\M}}{h_{K,\M}} \right)}^d .
\label{eq:Qeq:def}
\end{equation}
It satisfies
\begin{equation}
0 < \QeqM(K) < \infty,
\qquad
\frac{1}{N} \sum_{K \in \Th} \frac{1}{\QeqM(K)} = 1,
\qquad
\max_{K \in \Th} \QeqM(K) \ge 1.
\label{eq:Qeq:properties}
\end{equation}
The second one, the \emph{alignment} quality measure, is local (elementwise) and measures how closely the principal directions of the circumscribed ellipsoid of $K$ are aligned with the eigenvectors of $\M_K$ and the semi-lengths of the principal axes are inversely proportional to the square root of the eigenvalues of $\M_K$.
It is defined as
\begin{equation}
\QaliM(K)
= \frac{\Norm{{\left(F_K'\right)}^{-1} \M_K^{-1} {\left(F_K'\right)}^{-T}}_2 }
{{\det\left(\FMkF\right)}^{\frac{1}{d}}} = h_{K,\M}^2 \Norm{{\left(F_K'\right)}^{-1} \M_K^{-1} {\left(F_K'\right)}^{-T}}_2.
\label{eq:Qali:def}
\end{equation}
Since $\Norm{{\left(F_K'\right)}^{-1} \M_K^{-1} {\left(F_K'\right)}^{-T}}_2 \ge {\det\left(\FMkF\right)}^{\frac{1}{d}}$, $\QaliM$ always satisfies
\[
1 \le \QaliM(K) < \infty
\]
with $\QaliM(K) = 1$ if and only if $K$ is equilateral with respect to $\M_K$.
The alignment quality measure can be seen as an alternative to the aspect ratio of $K$ in the metric specified by $\M_K$ and it satisfies
\begin{equation}
\QaliM(K) \le \hat{h}^2
\cdot {\left(\frac{h_{K,\M}}{\rho_{K,\M}} \right)}^2 ,
\label{eq:Qali:def-2}
\end{equation}
where $\hat{h}$ is the length of the longest edge of $\hat{K}$ and $\rho_{K,\M}$ is the diameter of the largest sphere inscribed in the element $K$ viewed in the metric $\M_K$.
To show this, we consider two points $\boldsymbol{x}_1, \boldsymbol{x}_2 \in K$ and the corresponding points $\boldsymbol{\xi}_1= F_K^{-1}(\boldsymbol{x}_1)$ and $\boldsymbol{\xi}_2 = F_K^{-1}(\boldsymbol{x}_2)$ in $\hat{K}$.
The distance between $\boldsymbol{x}_1$ and $\boldsymbol{x}_2$ in the metric $\M_K$ is
\begin{align*}
\Norm{\boldsymbol{x}_1 - \boldsymbol{x}_2}^2_{\M_K}
&= {\left(\boldsymbol{x}_1 - \boldsymbol{x}_2 \right)}^T \M_K \left(\boldsymbol{x}_1 - \boldsymbol{x}_2 \right)
= {\left(\boldsymbol{\xi}_1 - \boldsymbol{\xi}_2 \right)}^T {\left(F_K'\right)}^{T}
\M_K F_K' \left(\boldsymbol{\xi}_1 - \boldsymbol{\xi}_2 \right)
\\
&= \Norm{\boldsymbol{\xi}_1 - \boldsymbol{\xi}_2}_2^2
\cdot
\frac{{\left(\boldsymbol{\xi}_1 - \boldsymbol{\xi}_2 \right)}^T}{\Norm{\boldsymbol{\xi}_1 - \boldsymbol{\xi}_2}_2}
{\left(F_K'\right)}^{T} \M_K F_K'
\frac{\left(\boldsymbol{\xi}_1 - \boldsymbol{\xi}_2 \right)}{\Norm{\boldsymbol{\xi}_1 - \boldsymbol{\xi}_2}_2}
\\
&\le \hat{h}^2
\cdot
\frac{{\left(\boldsymbol{\xi}_1 - \boldsymbol{\xi}_2 \right)}^T}{\Norm{\boldsymbol{\xi}_1 - \boldsymbol{\xi}_2}_2}
{\left(F_K'\right)}^{T} \M_K F_K'
\frac{\left(\boldsymbol{\xi}_1 - \boldsymbol{\xi}_2 \right)}{\Norm{\boldsymbol{\xi}_1 - \boldsymbol{\xi}_2}_2}
.
\end{align*}
If we take the minimum over all pairs of opposing points on the largest sphere inscribed in the element $K$ viewed in the metric $\M_K$, then
\[
\rho_{K,\M}^2
\le \hat{h}^2 \lambda_{\min} \bigl( {\left(F_K'\right)}^T \M_K F_K' \bigr).
\]
Hence,
\begin{equation}
\Norm{{\left(F_K'\right)}^{-1} \M_K^{-1} {\left(F_K'\right)}^{-T}}_2
= \frac{1}{\lambda_{\min} \bigl( {\left(F_K'\right)}^T \M_K F_K' \bigr)}
\le {\left(\frac{\hat{h}}{\rho_{K,\M}} \right)}^2 ,
\label{eq:Qali:def-3}
\end{equation}
which, together with \cref{eq:Qali:def}, gives \cref{eq:Qali:def-2}.
The \emph{element quality} measure is defined as a combination of $\QaliM$ and $\QeqM$,
\begin{equation}
\QM(K)
= \QaliM(K) \cdot {\bigl(\QeqM(K)\bigr)}^{\frac{2}{d}}
= h_{\M}^2 \Norm{{\left(F_K'\right)}^{-1} \M_K^{-1} {\left(F_K'\right)}^{-T}}_2 .
\label{eq:QM:def}
\end{equation}
It measures how far $K$ is from being equilateral with a constant volume when viewed in the metric specified by $\M$.
By definition and from \cref{eq:Qali:def-3} it follows that
\begin{equation}
0 < \QM(K)
\le \hat{h}^2 {\left( \frac{h_{\M}}{\rho_{K,\M}} \right)}^2
< \infty
.
\label{eq:QM:geometric}
\end{equation}
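The quality measures \cref{eq:Qeq:def}, \cref{eq:Qali:def} and
\cref{eq:QM:def} can be evaluated directly from the element vertices and the
elementwise metric; the following is a small sketch (in Python/NumPy) for a
single triangle ($d=2$), using an equilateral reference element of unit area
as assumed above and a given global average diameter $h_{\M}$ supplied by the
caller.
\begin{verbatim}
import numpy as np

def quality_measures(verts, MK, hM):
    """Q_eq, Q_ali and Q_M for one triangle (d = 2).
    verts: 3 x 2 vertex coordinates; MK: averaged 2 x 2 metric on K;
    hM: global average element diameter in the metric M."""
    d = 2
    a = 2.0 / 3.0 ** 0.25                        # side of the equilateral reference triangle
    ref = np.array([[0.0, 0.0], [a, 0.0],
                    [0.5 * a, 0.5 * np.sqrt(3.0) * a]])   # reference element, |K_hat| = 1
    FK = (verts[1:] - verts[0]).T @ np.linalg.inv((ref[1:] - ref[0]).T)  # Jacobian F_K'
    J = np.linalg.inv(FK) @ np.linalg.inv(MK) @ np.linalg.inv(FK).T
    K_M = abs(np.linalg.det(FK)) * np.sqrt(np.linalg.det(MK))   # |K|_M
    h_KM = K_M ** (1.0 / d)                      # average diameter of K in the metric M
    Q_eq = (hM / h_KM) ** d
    Q_ali = h_KM ** 2 * np.linalg.norm(J, 2)     # spectral norm of J
    Q_M = hM ** 2 * np.linalg.norm(J, 2)
    return Q_eq, Q_ali, Q_M
\end{verbatim}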
When a mesh is uniform with respect to $\M$ (we will refer to it as an \emph{$\M$-uniform} mesh), it satisfies
\begin{equation}
\QaliM(K) = 1
\quad \text{and} \quad
\QeqM(K) = 1,
\qquad \forall K \in \Th,
\label{eq:eq:ali:muniform}
\end{equation}
which is equivalent to
\begin{equation}
\QM(K) = 1,
\qquad \forall K \in \Th.
\label{eq:eq:ali:i}
\end{equation}
Indeed, \cref{eq:eq:ali:i} follows directly from \cref{eq:eq:ali:muniform}.
On the other hand, since $\QaliM \ge 1$, \cref{eq:eq:ali:i} implies $\QeqM(K) \le 1$ for all $K$.
Due to the property \cref{eq:Qeq:properties}, the latter is only possible if $\QeqM(K) = 1$ for all $K$, which, in turn, implies $\QaliM(K) = 1$ for all $K$.
It is worth mentioning that an $\M$-uniform mesh satisfies
\begin{equation}
{\left(F_K'\right)}^{-1} \M_K^{-1} {\left(F_K'\right)}^{-T} = h_{\M}^{-2} I,
\quad \forall K \in \Th,
\label{eq:muniform:fmf}
\end{equation}
since \cref{eq:eq:ali:muniform} implies that all eigenvalues of ${\left(F_K'\right)}^{-1} \M_K^{-1} {\left(F_K'\right)}^{-T}$ are equal to $h_{\M}^{-2}$.
On the other hand, when a mesh is far from being $\M$-uniform, then
\[
\QaliM(K) \gg 1
\quad \text{or} \quad
\max\limits_K \QeqM(K) \gg 1
\]
and therefore
\[
\max\limits_K \QM(K) \gg 1 .
\]
\subsection{Preliminary results}
\label{sec:auxiliary:results}
In this subsection we present a few properties of the mass matrix $M$ and the stiffness matrix $A$ of linear finite elements, which will be used repeatedly in our analysis.
Throughout the paper the less-than-or-equal-to sign between matrix terms means that the difference between the right-hand side and left-hand side terms is positive semidefinite.
\begin{lemma1}[{\cite[Sect.~3]{KamHuaXu13}}]
\label{thm:mass:matrix}
The linear finite element mass matrix $M$ and its diagonal part $M_D$ satisfy
\begin{equation}
\frac{1}{2} M_D \le M \le \frac{d+2}{2} M_D
\quad \text{and} \quad
M_{ii} = \frac{2 \Abs{\omega_i}}{(d+1) (d+2)} ,
\qquad i=1,\dotsc,N_{vi} .
\label{eq:M}
\end{equation}
\end{lemma1}
\begin{lemma1}
\label{thm:mass:matrix:lumped}
Let $M_{lump}$ be the lumped linear finite element mass matrix defined through
\[
M_{ii, lump} = \int\limits_\Omega \phi_i(\boldsymbol{x}) \cdot \sum\limits_{j=1}^{N_{vi}} \phi_j(\boldsymbol{x}) \;d\boldsymbol{x},
\qquad i=1,\dotsc,N_{vi}.
\]
Then
\begin{equation}
\frac{2 \Abs{\omega_i}}{(d+1) (d+2)}
\le M_{ii, lump}
\le \frac{ \Abs{\omega_i}}{d+1}.
\label{eq:ML}
\end{equation}
\end{lemma1}
\begin{proof}
Since
\[
\phi_i(\boldsymbol{x}) \le \sum\limits_{j=1}^{N_{vi}} \phi_j(\boldsymbol{x}) \le 1,
\]
we have
\[
M_{ii,lump}
\ge \int\limits_\Omega \phi^2_i(\boldsymbol{x}) \;d\boldsymbol{x}
= \sum_{K \in \omega_i} \int_K \phi_i^2(\boldsymbol{x}) \;d\boldsymbol{x}
= \sum_{K \in \omega_i} \frac{2 \Abs{K} }{(d+1) (d+2)}
= \frac{2 \Abs{\omega_i}}{(d+1) (d+2)}
\]
and
\[
M_{ii, lump}
\le \int\limits_\Omega \phi_i(\boldsymbol{x}) \;d\boldsymbol{x}
= \sum_{K \in \omega_i} \int_K \phi_i(\boldsymbol{x}) \;d\boldsymbol{x}
= \sum_{K \in \omega_i} \frac{\Abs{K} }{d+1}
= \frac{\Abs{\omega_i}}{d+1}
.
\]
\end{proof}
\begin{lemma1}
\label{thm:mass:matrix:lumped:bounds}
The linear finite element mass matrix $M$ and the lumped mass matrix $M_{lump}$ satisfy
\begin{equation*}
\frac{1}{d+2} M_{lump} \le M \le \frac{d+2}{2} M_{lump}
.
\end{equation*}
\end{lemma1}
\begin{proof}
Since $M_D \le M_{lump}$ we get the upper bound directly from \cref{eq:M}.
Combining the lower bound in \cref{eq:M} with the upper bound in \cref{eq:ML} gives
\[
\frac{1}{d+2} M_{lump}
\le \frac{1}{(d+2)(d+1)}
\diag\left( \Abs{\omega_1},\dotsc,\Abs{\omega_{N_{vi}}} \right)
= \frac{1}{2}M_D
\le M
.
\]
\end{proof}
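As a quick sanity check of \cref{thm:mass:matrix:lumped:bounds}, the
eigenvalues of $M_{lump}^{-1}M$ restricted to a single element can be
computed explicitly; a small sketch (in Python/NumPy) for one triangle
($d=2$) of unit area, using the exact local P1 mass matrix, is given below.
\begin{verbatim}
import numpy as np

# local P1 mass matrix on a triangle of unit area and its lumped (row-sum) version
Me = (np.ones((3, 3)) + np.eye(3)) / 12.0
Ml = np.diag(Me.sum(axis=1))                      # lumping: row sums on the diagonal
lam = np.linalg.eigvals(np.linalg.solve(Ml, Me))
print(np.sort(lam.real))                          # [0.25, 0.25, 1.0], within [1/(d+2), (d+2)/2]
\end{verbatim}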
\begin{lemma1}[{\cite[Sect.~4]{KamHuaXu13}}]
\label{thm:stiffness:matrix}
The linear finite element stiffness matrix $A$ and its diagonal part $A_D$ satisfy
\begin{equation}
A \le (d+1) A_D .
\label{eq:A}
\end{equation}
\end{lemma1}
\begin{lemma1}
\label{thm:stiffness:matrix:ii}
Let $\D_K$ be the average of the diffusion matrix $\D$ over $K$,
\[
\D_K = \frac{1}{\Abs{K}} \int_K \D(\boldsymbol{x}) \;d\boldsymbol{x} .
\]
Then the diagonal entries of the linear finite element stiffness matrix $A$ are bounded by
\begin{equation}
C_{\hat\nabla} \sum\limits_{K\in \omega_i} \Abs{K}
\cdot \lambda_{\min}\bigl({\left(F_K'\right)}^{-1} \D_K {\left(F_K'\right)}^{-T}\bigr)
\le A_{ii}
\le C_{\hat\nabla}
\sum\limits_{K\in \omega_i} \Abs{K}
\cdot \lambda_{\max} \bigl({\left(F_K'\right)}^{-1} \D_K {\left(F_K'\right)}^{-T}\bigr) ,
\label{eq:A:fdf}
\end{equation}
where
$
C_{\hat\nabla}
= \frac{d}{d+1} {\left( \frac{\sqrt{d+1}}{d!} \right)}^{\frac{2}{d}}
.
\label{eq:C:nabla}
$
\end{lemma1}
\begin{proof}
From \cref{eq:AM:i} we have
\[
A_{ii}
= \int_\Omega \nabla \phi_i^T \D \nabla \phi_i \;d\boldsymbol{x}
= \sum_{K \in \omega_i} \int_K \nabla \phi_i^T \D \nabla \phi_i \;d\boldsymbol{x}
= \sum_{K \in \omega_i} \Abs{K} \; \nabla \phi_i^T \D_K \nabla \phi_i .
\]
Denote the gradient operator in $\hat{K}$ by $\hat{\nabla} = \frac{\partial}{\partial \boldsymbol{\xi} }$.
By the chain rule $\nabla = {(F_K')}^{-T} \hat{\nabla}$ and
\begin{align}
A_{ii}
&= \sum_{K \in \omega_i} \Abs{K} \;
\hat{\nabla} \hat{\phi}_i^T {\left(F_K'\right)}^{-1} \D_K {\left(F_K'\right)}^{-T} \hat{\nabla} \hat{\phi}_i
\label{eq:aii:duni}
\\
&\le \sum_{K \in \omega_i} \Abs{K} \;
\hat{\nabla} \hat{\phi}_i^T
\hat{\nabla} \hat{\phi}_i
\lambda_{\max}\bigl({\left(F_K'\right)}^{-1} \D_K {\left(F_K'\right)}^{-T}\bigr)
\notag
.
\end{align}
Recall that $\hat{K}$ is taken to be equilateral.
Thus, $\hat{\nabla} \hat{\phi}_i^T \hat{\nabla} \hat{\phi}_i = C_{\hat\nabla} $ for all $ i = 1, \dotsc, d+1$.
Consequently,
\[
A_{ii}
\le C_{\hat\nabla} \sum_{K \in \omega_i} \Abs{K} \;
\lambda_{\max}\bigl({\left(F_K'\right)}^{-1} \D_K {\left(F_K'\right)}^{-T}\bigr) .
\]
Similarly, we can obtain the left inequality of \cref{eq:A:fdf}.
\qquad\end{proof}
\vspace{1.5ex}
\begin{remark}
\normalfont{%
From \cref{eq:QM:def}, with $\M$ being replaced by $\D^{-1}$, the bound \cref{eq:A:fdf} on $A_{ii}$ can be expressed in terms of the element quality measure $\QD(K)$ as
\begin{equation}
A_{ii}
\le C_{\hat\nabla} h_{\D^{-1}}^{-2}
\sum\limits_{K \in \omega_i} \Abs{K} \QD(K).
\label{eq:Aii:Q}
\end{equation}
}
\end{remark}
\begin{remark}[$\D^{-1}$-nonobtuse meshes]
\label{rem:Duniform}
\normalfont{%
Note that \cref{thm:stiffness:matrix} is very general and valid for any given mesh.
It implies that
\begin{equation}
\lambda_{\max}(A) \le (d+1) \max_i A_{ii} .
\label{eq:lmaxA:Ajj}
\end{equation}
This bound can be sharpened for some special types of meshes.
For example, if a mesh has no obtuse angles with respect to $\D^{-1}$ then $A$ is an M-matrix (its off-diagonal entries are non-positive) and $\sum_j A_{ij} \ge 0$ for all $i$ (e.g.,\ see the proof of Theorem 2.1 of~\cite{LiHua10}).
From the Gershgorin circle theorem we have
\[
\lambda_{\max}(A)
\le \max_i \left(A_{ii} + \sum_{j \neq i} \Abs{A_{ij}}\right)
= \max_i \left(A_{ii} - \sum_{j \neq i} A_{ij}\right)
= \max_i \left( 2 A_{ii} - \sum_j A_{ij}\right)
\]
and thus
\begin{equation}
\lambda_{\max}(A)
\le 2 \max_i A_{ii} .
\label{eq:A-3}
\end{equation}
If further the mesh is $\D^{-1}$-uniform, then from \cref{eq:eq:ali:i,eq:Aii:Q} we have
\begin{equation}
\lambda_{\max}(A)
\le 2 \max_i A_{ii}
\le 2 C_{\hat\nabla} h_{\D^{-1}}^{-2}
\max_i\sum\limits_{K \in \omega_i} \Abs{K} \QD(K)
= 2 C_{\hat\nabla} h_{\D^{-1}}^{-2} \max_i \Abs{\omega_i}.
\label{eq:A-4}
\end{equation}
}
\end{remark}
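As a quick illustration (not taken from the paper; the toy setting and mesh size are our own choices), the following Python sketch checks the bounds \cref{eq:lmaxA:Ajj} and \cref{eq:A-3} for the 1D linear finite element stiffness matrix with $\D=1$ on a uniform mesh, which is trivially nonobtuse with respect to $\D^{-1}$.
\begin{verbatim}
import numpy as np

# Toy check (our own setup): 1D linear FEM stiffness matrix, D = 1, uniform mesh,
# homogeneous Dirichlet boundary conditions, d = 1.
n, d = 50, 1
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
lam_max = np.linalg.eigvalsh(A).max()
print(lam_max <= (d + 1) * A.diagonal().max())   # general bound (eq:lmaxA:Ajj), True
print(lam_max <= 2.0 * A.diagonal().max())       # nonobtuse (M-matrix) bound (eq:A-3), True
\end{verbatim}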
\section{Explicit time stepping and the stability condition}
\label{sec:explicit}
In this section we study stability conditions for explicit one-step methods applied to the finite element system \cref{eq:fem:system} and obtain estimates for the maximum time step.
Suppose we are given a constant time step $\tau$.
Then an explicit one-step integration scheme with $s$ stages of order $p$ computes approximations $\boldsymbol{U}_n\approx \boldsymbol{U}(n\tau)$ from
\begin{equation}
\boldsymbol{U}_n = R(-\tau\,M^{-1}A) \boldsymbol{U}_{n-1} ,
\label{erk:scheme}
\end{equation}
where the stability function $R(z)$ is a polynomial in $z$ and satisfies
\begin{equation}
R(z)
= 1 + z + \dotso + \frac{z^p}{p!} + \sum_{i=p+1}^s \alpha_i z^i
= e^z + \cO\left(z^{p+1}\right) .
\label{erk:stabfunc}
\end{equation}
Classical explicit one-step methods suffer from severe step size restrictions when applied to stiff problems such as \cref{eq:fem:system} with $N_{vi}\gg 1$.
An interesting alternative is provided by stabilized explicit Runge-Kutta (RK) methods, which have an extended stability domain along the negative real axis and therefore allow for larger time steps than classical explicit one-step methods.
Parameters $\alpha_{p+1}, \dotsc, \alpha_s \in \R$ in \cref{erk:stabfunc} are chosen such that $\Abs{R(z)} \le 1$ for $z\in [-r_s,0]$ and $r_s>0$ is as large as possible.
Explicit methods have low memory demand and can be considered as a good alternative to implicit methods when the solution of algebraic equations arising from the latter is difficult and/or costly.
Impressive examples and comparison results with VODEPK (a stiff ODE solver with Krylov iterations) are documented in~\cite{HunVer03}.
Commonly used stabilized explicit methods include DUMKA, Runge-Kutta-Chebyshev (RKC), and the orthogonal Runge-Kutta-Chebyshev (ROCK) methods.
A common practical choice is $p=2$, but DUMKA and ROCK methods of higher order also exist~\cite{HaiWan96}.
In the following we first study stability estimates for the approximate solutions $\boldsymbol{U}_n$ obtained from \cref{erk:scheme}, assuming that $M$ is a full mass matrix.
However, solving with (or factorizing) the consistent mass matrix within an explicit time integration method is in general not affordable, since an implicit scheme with a much larger time step could be used at the same cost.
Hence, we mainly discuss consequences of lumping the mass matrix, a routine procedure for (linear) finite elements.
Although appropriate mass lumping does not degrade the overall order of accuracy, it is well known that replacing the consistent mass matrix by a lumped one induces dispersion errors that can affect the quality of the numerical solution.
More generally, we consider symmetric positive definite, surrogate matrices $\tilde{M}$ that satisfy
\begin{equation}
c_1 \tilde{M} \le M \le c_2 \tilde{M}
\end{equation}
and have nearly the same complexity as the diagonal lumped mass matrix $M_{lump}$.
Correction techniques for the dispersive effects of mass lumping and several efficient choices for $\tilde{M}$ can be found in~\cite{GuePas13}.
Note that due to \cref{thm:mass:matrix:lumped:bounds} we have $c_1=1/(d+2)$ and $c_2=(d+2)/2$ for the special case $\tilde{M}=M_{lump}$.
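For illustration, the following Python sketch (ours) evaluates the stability function of the classical undamped $s$-stage, first-order Chebyshev method, $R_s(z)=T_s(1+z/s^2)$, a prototype of the stability functions \cref{erk:stabfunc}; practical RKC/ROCK implementations add a small damping and therefore have a slightly shorter stability interval.
\begin{verbatim}
import numpy as np

# Stability function of the undamped s-stage, first-order Chebyshev method:
# R_s(z) = T_s(1 + z/s^2), whose stability interval on the real axis is [-2 s^2, 0].
s = 5
coeffs = [0] * s + [1]                                   # coefficients selecting T_s
z = np.linspace(-2.0 * s**2, 0.0, 2001)
R = np.polynomial.chebyshev.chebval(1.0 + z / s**2, coeffs)
print(np.abs(R).max() <= 1.0 + 1e-12)                    # True: [-2 s^2, 0] lies in S
z_out = -2.0 * s**2 - 1.0                                # just outside the interval
print(abs(np.polynomial.chebyshev.chebval(1.0 + z_out / s**2, coeffs)) > 1.0)  # True
\end{verbatim}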
\subsection{Stability of explicit one-step integration schemes}
The investigation of the stability is based on the following main observation: if $B$ is a normal matrix and $R$ is a rational function, then
\begin{equation}
\Norm{R(B)}_2 = \max_i \Abs{R(\lambda_i(B))}.
\label{stab:l2norm}
\end{equation}
This fundamental relation is a direct consequence of the existence of a factorization $B=Q\,\diag \bigl(\lambda_1(B),\dotsc,\lambda_{N_{vi}}(B) \bigr)\,Q^{*}$ with a unitary matrix $Q$.
Using the fact that $M^{-\frac{1}{2}} A M^{-\frac{1}{2}}$ and $A^{\frac{1}{2}} M^{-1} A^{\frac{1}{2}}$ are normal matrices, we can prove the stability of the linear finite element approximation computed with an explicit one-step method.
\begin{theorem}
\label{thm:stability}
For a given explicit one-step method with the polynomial stability function $R$, the linear finite element approximation $u^{h}_n$ satisfies
\[
\Norm{u^{h}_{n}}_{L^2(\Omega)} \le \Norm{u^{h}_{0}}_{L^2(\Omega)}
\quad \text{and} \quad
\NormE{u^{h}_{n}}_{H^1(\Omega)} \le \NormE{u^{h}_{0}}_{H^1(\Omega)},
\]
if the time step $\tau$ is chosen such that
\[
\max_i \Abs{R\left(-\tau\lambda_i\left(M^{-1} A\right)\right)} \le 1.
\]
\end{theorem}
\begin{proof}
Since $R$ is a polynomial function, we have
\[
R(-\tau M^{-1} A)
= M^{-\frac{1}{2}} R(-\tau M^{-\frac{1}{2}} A M^{-\frac{1}{2}}) M^{\frac{1}{2}}
= A^{-\frac{1}{2}} R(-\tau A^{\frac{1}{2}} M^{-1} A^{\frac{1}{2}}) A^{\frac{1}{2}} .
\]
From this, it is easy to see that \cref{erk:scheme} can be written as
\begin{align}
M^{\frac{1}{2}} \boldsymbol{U}_n &= R(-\tau M^{-\frac{1}{2}} A M^{-\frac{1}{2}}) M^{\frac{1}{2}} \boldsymbol{U}_{n-1} ,
\label{eq:scheme4v}
\\
A^{\frac{1}{2}} \boldsymbol{U}_n &= R(-\tau A^{\frac{1}{2}} M^{-1} A^{\frac{1}{2}}) A^{\frac{1}{2}} \boldsymbol{U}_{n-1} .
\label{eq:scheme4w}
\end{align}
Since $M$ and $A$ are symmetric and positive definite, $M^{-\frac{1}{2}} A M^{-\frac{1}{2}}$ and $A^{\frac{1}{2}} M^{-1} A^{\frac{1}{2}}$ are symmetric and therefore normal.
From \cref{stab:l2norm}, our assumption on the time step and the fact that $M^{-1} A$, $M^{-\frac{1}{2}} A M^{-\frac{1}{2}}$, and $A^{\frac{1}{2}} M^{-1} A^{\frac{1}{2}}$ are similar to each other, we get
\[
\Norm{R(-\tau M^{-\frac{1}{2}} A M^{-\frac{1}{2}})}_2
= \Norm{R(-\tau A^{\frac{1}{2}} M^{-1} A^{\frac{1}{2}})}_2
= \max_i \Abs{R(-\tau\lambda_i(M^{-1} A))} \le 1 .
\]
Thus, equations \cref{eq:scheme4v,eq:scheme4w} imply
\begin{align*}
\Norm{u^{h}_{n}}_{L^2(\Omega)}
&= \Norm{M^{\frac{1}{2}} \boldsymbol{U}_n}_2
\le \Norm{M^{\frac{1}{2}} \boldsymbol{U}_{n-1}}_2
= \Norm{u^{h}_{n-1}}_{L^2(\Omega)}
,
\\
\NormE{u^{h}_{n}}_{H^1(\Omega)}
&= \Norm{A^{\frac{1}{2}} \boldsymbol{U}_n}_2
\le \Norm{A^{\frac{1}{2}} \boldsymbol{U}_{n-1}}_2
=\NormE{u^{h}_{n-1}}_{H^1(\Omega)}
.
\end{align*}
Successive application of these inequalities yields the assertion.
\qquad\end{proof}
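The key identity \cref{stab:l2norm} and the resulting step size condition can be checked numerically. The following Python sketch uses our own random symmetric positive definite matrices standing in for $M$ and $A$ together with the stability polynomial $1+z+z^2/2$ of a generic two-stage, second-order explicit method, and verifies that $\Norm{R(-\tau M^{-\frac{1}{2}} A M^{-\frac{1}{2}})}_2=\max_i\Abs{R(-\tau\lambda_i(M^{-1}A))}$.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

# Toy matrices (ours), not FEM matrices, just to illustrate the identity in the proof.
rng = np.random.default_rng(0)
n = 30
B = rng.standard_normal((n, n)); A = B @ B.T + n * np.eye(n)
C = rng.standard_normal((n, n)); M = C @ C.T + n * np.eye(n)

r = lambda z: 1.0 + z + z**2 / 2.0                 # stability polynomial, 2-stage order-2 RK
lam = eigh(A, M, eigvals_only=True)                # eigenvalues of M^{-1} A
tau = 2.0 / lam.max()                              # real stability interval of r is [-2, 0]

w, V = np.linalg.eigh(M)
Minv_half = V @ np.diag(w**-0.5) @ V.T             # M^{-1/2}
Z = -tau * Minv_half @ A @ Minv_half
RZ = np.eye(n) + Z + Z @ Z / 2.0
print(np.isclose(np.linalg.norm(RZ, 2), np.abs(r(-tau * lam)).max()))  # True
print(np.abs(r(-tau * lam)).max() <= 1.0 + 1e-12)                      # step size condition holds
\end{verbatim}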
We next consider the case where the linear finite element mass matrix $M$ is replaced by a symmetric positive definite, surrogate matrix $\tilde{M}$ of lower complexity.
That means, from now on we compute approximations $\boldsymbol{U}_n\approx \boldsymbol{U}(n\tau)$ from
\begin{equation}
\boldsymbol{U}_n = R(-\tau\,\tilde{M}^{-1}A) \boldsymbol{U}_{n-1} .
\label{erk:approx:scheme}
\end{equation}
\begin{theorem}
\label{thm:stability:approx:scheme}
For a given explicit one-step method with the polynomial stability function $R$ and a symmetric positive definite, surrogate matrix $\tilde{M}$ that satisfies $c_1 \tilde{M} \le M \le c_2 \tilde{M}$ for some positive constants $c_1$ and $c_2$, the linear finite element approximation $u^{h}_n$ satisfies
\[
\Norm{u^{h}_{n}}_{L^2(\Omega)}
\le \sqrt{\frac{c_2}{c_1}} \; \Norm{u^{h}_{0}}_{L^2(\Omega)}
\quad \text{and} \quad
\NormE{u^{h}_{n}}_{H^1(\Omega)}
\le \NormE{u^{h}_{0}}_{H^1(\Omega)},
\]
if the time step $\tau$ is chosen such that
\[
\max_i \Abs{R(-\tau\lambda_i(\tilde{M}^{-1}A))} \le 1\,.
\]
\end{theorem}
\begin{proof}
Replacing $M$ by $\tilde{M}$ in the proof of \cref{thm:stability} does not change the arguments and gives
\begin{align*}
\NormE{u^{h}_{n}}_{H^1(\Omega)}
= \Norm{A^{\frac{1}{2}} \boldsymbol{U}_n}_2
&\le \Norm{A^{\frac{1}{2}} \boldsymbol{U}_{n-1}}_2
= \NormE{u^{h}_{n-1}}_{H^1(\Omega)},
\\
\Norm{\tilde{M}^{\frac{1}{2}} \boldsymbol{U}_n}_2
&\le \Norm{\tilde{M}^{\frac{1}{2}} \boldsymbol{U}_{n-1}}_2 .
\end{align*}
From the first inequality, stability in the energy norm follows.
To derive stability in the $L^2$-norm, we make use of the assumption on $\tilde{M}$:
\begin{align*}
\Norm{u^{h}_{n}}^2_{L^2(\Omega)}
&= {(\boldsymbol{U}_n)}^T M \boldsymbol{U}_n
\le c_2 \, {(\boldsymbol{U}_n)}^T \tilde{M} \boldsymbol{U}_n
= c_2 \,\Norm{\tilde{M}^{\frac{1}{2}} \boldsymbol{U}_n}^2_2
\le c_2 \,\Norm{\tilde{M}^{\frac{1}{2}} \boldsymbol{U}_{n-1}}^2_2
\\
&\le \dotso
\le c_2 \,\Norm{\tilde{M}^{\frac{1}{2}} \boldsymbol{U}_{0}}^2_2
= c_2 \,{(\boldsymbol{U}_0)}^T \tilde{M} \boldsymbol{U}_0
\le \frac{c_2}{c_1}\, {(\boldsymbol{U}_0)}^T M \boldsymbol{U}_0
= \frac{c_2}{c_1}\, \Norm{u^h_0}^2_{L^2(\Omega)},
\end{align*}
which gives the desired result.
\qquad\end{proof}
In the special case $\tilde{M} = M_{lump}$ we have the following result.
\begin{corollary1}
Under the assumptions of \cref{thm:stability:approx:scheme} and $\tilde{M}=M_{lump}$, we have
\begin{equation*}
\Norm{u^{h}_{n}}_{L^2(\Omega)}
\le \frac{d+2}{\sqrt{2}} \; \Norm{u^h_0}_{L^2(\Omega)}
\quad \text{and} \quad
\NormE{u^{h}_{n}}_{H^1(\Omega)}
\le \NormE{u^h_0}_{H^1(\Omega)} .
\end{equation*}
\end{corollary1}
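The constants $c_1=1/(d+2)$ and $c_2=(d+2)/2$ can also be verified numerically. The sketch below (our own 1D toy setup) checks the spectral equivalence $c_1 M_{lump}\le M\le c_2 M_{lump}$ for the linear finite element mass matrix on a uniform mesh with $d=1$.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

# Toy check (ours): 1D linear FEM mass matrix on a uniform mesh with Dirichlet BCs;
# M_lump is obtained by element-level lumping (each interior node collects h/2 + h/2).
n, d = 50, 1
h = 1.0 / (n + 1)
M = h / 6.0 * (4.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))
M_lump = h * np.eye(n)
mu = eigh(M, M_lump, eigvals_only=True)          # eigenvalues of M_lump^{-1} M
print(mu.min() >= 1.0 / (d + 2) - 1e-12)         # True
print(mu.max() <= (d + 2) / 2.0 + 1e-12)         # True
\end{verbatim}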
\subsection{Estimates on the largest eigenvalue%
\texorpdfstring{ of $\tilde{M}^{-1}A$}{}}
The above results show that the contractivity of any given explicit one-step method is guaranteed if all eigenvalues of $- \tau \tilde{M}^{-1}A$ are in the corresponding stability domain $\mathcal{S} = \left\{ z \in \mathbb{C} : \Abs{R(z)} \le 1 \right\}$.
As a consequence, the key to the stability analysis of a given scheme is the estimation of the eigenvalues of $\tilde{M}^{-1}A$.
The following theorem provides such an estimate for two choices of $\tilde{M}$: $\tilde{M}=M$ and $\tilde{M}=M_{lump}$.
It turns out that in these cases the largest eigenvalue of $\tilde{M}^{-1}A$ is equivalent, up to a modest constant factor, to the largest ratio between the corresponding diagonal entries of $A$ and $\tilde{M}$.
\begin{theorem}
\label{thm:lmax:theorem}
The eigenvalues of $\tilde{M}^{-1}A$ with $\tilde{M}$ being either $M$ or $M_{lump}$ are real and positive.
Moreover, the largest eigenvalue is bounded by
\begin{equation}
\max_i \frac{A_{ii}}{\tilde{M}_{ii}}
\le \lambda_{\max} \left( \tilde{M}^{-1}A \right)
\le C_* \max_i \frac{A_{ii}}{\tilde{M}_{ii}},
\label{eq:lmax}
\end{equation}
where $C_*$ is given in \cref{tab:lmax}.
\end{theorem}
\begin{table}[t]
\caption{$C_*$ in \cref{thm:lmax:theorem}\label{tab:lmax}}
\begin{tabular}{lcc}
\toprule%
& general meshes & nonobtuse w.r.t.\ $\D^{-1}$\\
\midrule%
$\tilde{M} = M$ & $2\left(d+1\right)$ & $4$\\
$\tilde{M} = M_{lump}$ & $d+1$ & $2$\\
\bottomrule%
\end{tabular}
\end{table}
\begin{proof}
Since $\tilde{M}$ and $A$ are symmetric and positive definite and $\tilde{M}^{-1}A$ is similar to the symmetric matrix $\tilde{M}^{-\frac{1}{2}} A \tilde{M}^{-\frac{1}{2}}$, the eigenvalues of $\tilde{M}^{-1}A$ are real and positive.
Using the canonical basis vectors $\boldsymbol{e}_i$ gives
\begin{align*}
\lambda_{\max} \bigl( \tilde{M}^{-1}A \bigr)
&= \max_{\boldsymbol{v} \neq 0}
\frac {\boldsymbol{v}^T A \boldsymbol{v}}
{\boldsymbol{v}^T \tilde{M} \boldsymbol{v}}
\ge \max_i \frac {\boldsymbol{e}_i^T A \boldsymbol{e}_i}
{\boldsymbol{e}_i^T \tilde{M} \boldsymbol{e}_i}
= \max_i \frac{A_{ii}}{\tilde{M}_{ii}}.
\end{align*}
Let us first have a look at the case $\tilde{M}=M$.
\Cref{thm:mass:matrix,thm:stiffness:matrix} yield
\begin{equation}
\lambda_{\max} \left( M^{-1} A \right)
= \max_{\boldsymbol{v} \neq 0}
\frac {\boldsymbol{v}^T A \boldsymbol{v}}
{\boldsymbol{v}^T M \boldsymbol{v}}
\le \max_{\boldsymbol{v} \neq 0}
\frac {(d+1) \boldsymbol{v}^T A_D \boldsymbol{v}}
{\frac{1}{2} \boldsymbol{v}^T M_D \boldsymbol{v}}
\label{eq:lmax:halve}
= 2(d+1) \max_i \frac{A_{ii}}{M_{ii}}.
\end{equation}
For the special case of meshes with nonobtuse angles with respect to $\D^{-1}$, the above bound can be sharpened by replacing the factor $d+1$ in \cref{eq:lmax:halve} with $2$ (see \cref{rem:Duniform}).
If $\tilde{M}=M_{lump}$ then the factor $1/2$ in the denominator of \cref{eq:lmax:halve} can be replaced by $1$ since $M_{lump}$ is already diagonal.
\qquad\end{proof}
\vspace{1.5ex}
\begin{example}[Stabilized Runge-Kutta methods]
\label{ex:explicit:SRK}
\normalfont{%
The stability region of a stabilized RK method of order $p=1$ with $s$ stages extends along the negative real axis of the complex plane, including the interval $\left[-2s^2, 0\right]$~\cite[p.~31f.]{HaiWan96}.
Thus, the method is stable if all eigenvalues of $-\tau \tilde{M}^{-1}A$ are between $-2s^2$ and $0$.
This leads to the stability condition
\begin{equation}
\tau \le \frac{2s^2}{ \lambda_{\max}\left(\tilde{M}^{-1}A \right) }.
\label{eq:tau:srk:exact}
\end{equation}
Using \cref{thm:lmax:theorem} and noticing that ${\left(\max_i \frac{A_{ii}}{\tilde{M}_{ii}} \right)}^{-1} = \min_i \frac{\tilde{M}_{ii}}{A_{ii}}$, we obtain a bound for the largest permissible time step $\tau_{\max}$ as
\begin{equation}
\frac{2s^2}{C_*} \min_i \frac{\tilde{M}_{ii}}{A_{ii}}
\le \tau_{\max} \le
2s^2 \min_i \frac{\tilde{M}_{ii}}{A_{ii}} .
\label{eq:tau:SRK}
\end{equation}
Clearly, if
\[
\tau > 2s^2 \min_i \frac{\tilde{M}_{ii}}{A_{ii}},
\]
we have $\Abs{R\left(-\tau \lambda_{\max}\left(\tilde{M}^{-1}A\right)\right)} > 1$ and the scheme becomes unstable.
In order to guarantee stability, the step size has to be chosen such that
\[
\tau \le \frac{2s^2}{C_*} \min_i \frac{\tilde{M}_{ii}}{A_{ii}}.
\]
Note that here $\tilde{M}=M$ or $\tilde{M}=M_{lump}$.
}
\end{example}
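A minimal computational sketch of this stability condition (our own 1D toy problem with $\D=1$, the lumped mass matrix, and $C_*=2$) is given below; it confirms that the exact stability limit \cref{eq:tau:srk:exact} lies between $\tau_h$ and $C_*\tau_h$, as predicted by \cref{eq:tau:SRK}.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

# Toy 1D problem (ours): stabilized RK of order 1 with s stages, lumped mass matrix.
# Every 1D mesh is nonobtuse with respect to D^{-1}, hence C_* = 2.
n, s, Cstar = 100, 10, 2
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
M_lump = h * np.eye(n)
lam_max = eigh(A, M_lump, eigvals_only=True).max()
tau_max = 2.0 * s**2 / lam_max                                  # exact limit (eq:tau:srk:exact)
tau_h = 2.0 * s**2 / Cstar * (np.diag(M_lump) / np.diag(A)).min()
print(tau_h <= tau_max <= Cstar * tau_h)                        # True, as in (eq:tau:SRK)
\end{verbatim}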
\vspace{1.5ex}
In practice, a few steps of a nonlinear power method are often sufficient to estimate the spectral radius automatically, especially if the eigenvalues are close to the negative real axis.
However, the power method can degenerate in many ways, so precaution has to be taken and theoretical bounds can be helpful.
Such bounds are also necessary for gaining insight on the effects of mesh geometry on the stability of explicit integration schemes and the maximum allowed step size.
The estimate in \cref{thm:lmax:theorem} is easily computable but it does not explain how the mesh geometry affects the time step.
To reveal these effects, we provide several geometric formulations of the estimate in the following.
First, substituting \cref{eq:M,eq:Aii:Q} for $\tilde{M}_{ii}$ and $A_{ii}$ in \cref{thm:lmax:theorem} gives the following corollary.
\begin{corollary1}
\label{thm:lmax:geo}
The largest eigenvalue of $\tilde{M}^{-1}A$ is bounded by
\begin{align}
\lambda_{\max}(\tilde{M}^{-1}A)
& \le
C_* C_{\#}
\max_i \sum\limits_{K \in \omega_i}
\frac{\Abs{K}}{\Abs{\omega_i}} \Norm{{\left(F_K'\right)}^{-1} \D_K {\left(F_K'\right)}^{-T}}_2
\label{eq:stability:geo}
\\
&=
C_* C_{\#} h_{\D^{-1}}^{-2}
\max_i \sum\limits_{K \in \omega_i}
\frac{\Abs{K}}{\Abs{\omega_i}} \QD(K)
\label{eq:stability:geo-2}
,
\end{align}
where $C_{\#} = \frac{1}{2} C_{\hat\nabla} (d+1) (d+2)$, the constants $C_{\hat\nabla}$ and $C_*$ are given in \cref{eq:C:nabla,tab:lmax}, and the element quality $\QD(K)$ is defined in \cref{eq:QM:def} (with $\M$ being replaced by $\D^{-1}$).
\end{corollary1}
The factor $h_{\D^{-1}}^{-2}$ in \cref{eq:stability:geo-2} plays the role of the factor $h^{-2}$ behind the classic stability condition $\tau \sim h^2$ for the Laplace operator on uniform meshes.
Since
\[
h_{\D^{-1}}
= {(\Abs{\Omega}_{\D^{-1},h}/N)}^{\frac 1 d }
\to {(\Abs{\Omega}_{\D^{-1}}/N)}^{\frac 1 d }
\]
as the mesh is being refined, $h_{\D^{-1}}$ can be considered independent of the mesh geometry and therefore it essentially depends only on $N$, $\D$, and $\Omega$.
The mesh geometry effect is reflected mainly through the patch-average of $\Norm{{\left(F_K'\right)}^{-1} \D_K {\left(F_K'\right)}^{-T}}_2$ or, alternatively, the element quality measure $\QD(K)$.
Recall from \cref{eq:QM:geometric} that $\QD(K)$ can be seen as a ratio of the average element size to the diameter of the largest sphere inscribed in $K$, both measured in the metric $\D_K^{-1}$.
Hence, we can conclude that the largest possible time step depends on \emph{the number of mesh elements} and \emph{the correspondence of the geometry of the mesh elements to $\D^{-1}$}.
In other words, it is not the mesh geometry itself but \emph{the mesh geometry in relation to the diffusion matrix that matters for the stability of explicit schemes.}
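The following small Python sketch (all element coordinates and diffusion matrices are our own illustrative choices) makes this point concrete: for the same thin triangle, the quantity $\Norm{{\left(F_K'\right)}^{-1} \D_K {\left(F_K'\right)}^{-T}}_2$ entering \cref{eq:stability:geo} differs by two orders of magnitude depending on whether the anisotropy of $\D$ is aligned with the element or not.
\begin{verbatim}
import numpy as np

# Illustration (our own element and D): alignment of the element with the anisotropy of D.
a = 2.0 / 3.0 ** 0.25                                      # unit-area equilateral reference triangle
E_hat = np.column_stack(([a, 0.0], [a / 2.0, a * np.sqrt(3.0) / 2.0]))
E = np.column_stack(([1.0, 0.0], [0.5, 0.01]))             # physical triangle: long in x, thin in y
Finv = E_hat @ np.linalg.inv(E)                            # (F_K')^{-1}
for label, D in (("aligned", np.diag([100.0, 1.0])),
                 ("misaligned", np.diag([1.0, 100.0]))):
    print(label, np.linalg.norm(Finv @ D @ Finv.T, 2))     # misaligned is ~100x larger
\end{verbatim}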
We now study the situation with an $\M$-uniform mesh for a general metric tensor $\M$.
Recall that such a mesh satisfies \cref{eq:muniform:fmf}, which can be rewritten as
\[
{(F_K')}^{-T} {(F_K')}^{-1} = h_{\M}^{-2} \M_K
\qquad \forall K \in \Th.
\]
Then,
\[
\QD(K)
= h_{\D^{-1}}^2 \Norm{{\left(F_K'\right)}^{-1} \D_K {\left(F_K'\right)}^{-T}}_2
= {\left( \frac{h_{\D^{-1}}}{h_{\M}} \right)}^2
\Norm{\M_K \D_K}_2.
\]
Inserting this into \cref{eq:stability:geo} we get
\begin{equation}
\lambda_{\max}(\tilde{M}^{-1}A)
\le C_* C_{\#} h_{\M}^{-2}
\max_i \sum\limits_{K \in \omega_i}
\frac{\Abs{K}}{\Abs{\omega_i}} \cdot \Norm{\M_K \D_K}_2 .
\label{eq:stability:geo:ii}
\end{equation}
Once again, this shows that the largest eigenvalue of $\tilde{M}^{-1}A$ and, consequently, the largest permissible time step depend on the number of elements and the matching between the mesh (essentially determined by $\M$) and the diffusion matrix.
If mesh adaptation and the major diffusion directions match, the largest permissible time step depends mainly on the number of mesh elements.
A mismatch between (anisotropic) mesh adaptation and the diffusion directions can lead to a drastic reduction of the time step (see~\cref{ex:ZhuDu} in \cref{sec:numerical:examples}).
In particular, it implies that one obtains both accuracy and stability with the same grid if the solution anisotropy is in correspondence with the diffusion, whereas one has to trade off accuracy for stability if the demands of accuracy and stability contradict each other (see also remarks by Shewchuk~\cite[Sect.~4.3]{She02a}).
To some extent, the demands of accuracy and stability can be combined using a metric tensor in the form
\[
\M_K = \theta_K \D_K^{-1} \quad \forall K \in \Th,
\]
where $\theta_K$ is a scalar function based on some (isotropic) error estimate; a similar idea has been used in~\cite{LiHua10} to combine mesh adaptation with satisfaction of the maximum principle.
This will not provide full mesh adaptation but, at least, some degree of it.
\vspace{1.5ex}%
\begin{remark}[Special cases]%
\label{rem:special:cases}%
\normalfont{%
For a uniform mesh ($\M = I$), we have
\[
\lambda_{\max}(\tilde{M}^{-1}A)
\le C_* C_{\#} h^{-2}
\max_i \sum\limits_{K \in \omega_i}
\frac{\Abs{K}}{\Abs{\omega_i}} \cdot \Norm{\D_K}_2
\approx C_* C_{\#} h^{-2}
\max_i \Norm{\D_{\omega_i}}_2
,
\]
where $\D_{\omega_i}$ denotes the average of $\D$ over a patch $\omega_i$.
In case of coefficient-adaptive ($\D^{-1}$-uniform) meshes ($\M = \D^{-1}$), mesh adaptation and diffusion directions match exactly
and \cref{eq:muniform:fmf,eq:aii:duni} yield $A_{ii} = C_{\hat{\nabla}} \Abs{\omega_i} / h_{\D^{-1}}^2$.
Thus, using \cref{eq:M,thm:lmax:theorem} gives
\[
\lambda_{\max}(\tilde{M}^{-1}A) \sim h_{\D^{-1}}^{-2}
\sim N^{\frac{2}{d}}
.
\]
}%
\end{remark}
\begin{remark}[Comparison to results available in the literature]%
\label{rem:zhudu}%
\normalfont{%
For the full mass matrix Zhu and Du~\cite[Theorem~3.1]{ZhuDu14} developed an estimate in terms of the element geometry and the eigenvalues of the diffusion matrix, which is valid for $d \ge 2$ and $P_k$ finite elements.
For the linear finite elements it becomes
\begin{align}
\frac{\max\limits_K \lambda_{\min}(\D_K) \boldsymbol{Z}_K}{d(1 + c_1 p_{\max} (d+2) )}
&\le \lambda_{\max}\left(M^{-1} A\right) \le
(d+2) \max\limits_K \lambda_{\max}(\D_K) \boldsymbol{Z}_K,
\label{eq:lmax:ZhuDu}
\\
\boldsymbol{Z}_K & = \frac{d+1}{d^2} \sum_{i_K} \frac{ \Abs{V_{i_K}}^2}{\Abs{K}^2},
\notag
\end{align}
where $\Abs{V_{i_K}}$ is the volume of a $(d-1)$-dimensional face opposing the $i_K$-th vertex of $K$, $p_{\max}$ is the maximum number of elements in a patch, and $c_1$ is the maximum ratio between the volumes of neighboring elements.
The ratio of the upper bound to the lower one is approximately $d {(d+2)}^2 c_1 p_{\max} \kappa(\D)$, where $\kappa(\D) = \lambda_{\max}(\D_K) / \lambda_{\min}(\D_K)$.
Geometric bound \cref{eq:stability:geo} is similar to \cref{eq:lmax:ZhuDu} but there is a significant difference.
Since $\boldsymbol{Z}_K \sim \norm{{\left(F_K'\right)}^{-1} {\left(F_K'\right)}^{-T}}_2$,
the interplay between the mesh geometry and the diffusion matrix in \cref{eq:stability:geo,eq:lmax:ZhuDu} is mainly reflected by
\[
\Norm{{\left(F_K'\right)}^{-1} \D_K {\left(F_K'\right)}^{-T}}_2
\quad \text{and} \quad
\lambda_{\max}(\D_K) \Norm{{\left(F_K'\right)}^{-1} {\left(F_K'\right)}^{-T}}_2 ,
\]
respectively.
If either the mesh or $\D$ is isotropic, then the two factors are comparable.
However, if both the mesh and $\D$ are anisotropic then the former factor can be much smaller than the latter.
In the worst situation, the accuracy of \cref{eq:lmax:ZhuDu} can deteriorate proportionally to $\kappa(\D)$ (see \cref{ex:aniso} in \cref{sec:numerical:examples}), whereas the bound \cref{eq:lmax} in \cref{thm:lmax:theorem} in terms of matrix entries is sharp within a factor of at most $2 (d+1)$, independently of the mesh and $\D$.
In the case of mass lumping, Shewchuk~\cite[Sect.~3]{She02a} obtained geometric bounds in two and three dimensions. The bounds can be generalized to any dimension as
\begin{equation}
\frac{1}{d} \max_K \boldsymbol{S}_K \leq \lambda_{\max}\left(\tilde{M}^{-1}A\right) \leq p_{\max} \max_K \boldsymbol{S}_K,
\quad
\boldsymbol{S}_K = \frac{1}{d^2} \sum_{i_K}
\frac{\Abs{K}}{\tilde{M}_{i_K i_K}}
\frac{\abs{V_{i_K}}_{\D^{-1}}^2}{\Abs{K}_{\D^{-1}}^2 } ,
\label{eq:lmax:Shewchuk}
\end{equation}
where $\Abs{V_{i_K}}_{\D^{-1}}$ is the volume of a $(d-1)$-dimensional face opposing the $i_K$-th vertex of $K$ with respect to $\D^{-1}$ and $\tilde{M}_{i_K i_K}$ is the entry of the (global) lumped mass matrix corresponding to the node $i_K$.
The bound takes the interplay between the mesh shape and $\D$ fully into account and is tight within a factor of $d p_{\max}$, independently of $\D$, but it still has a weak mesh dependence through $p_{\max}$ (typically, $p_{\max} \ge 6$ in 2D and can be much larger in higher dimensions).
Numerical examples in \cref{sec:numerical:examples} show that it is comparable with, but less accurate than, the bound \cref{eq:lmax} obtained in this paper.
For the lumped case there is also an earlier result by Zhu and Du~\cite[Theorem~3.1]{ZhuDu11}, but we omit it in this study since it is less accurate than Shewchuk's bound \cref{eq:lmax:Shewchuk}.
}%
\end{remark}
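The difference between the two factors can be illustrated with a few lines of Python (the element and the diffusion matrix below are our own choices, not taken from the cited works): for an element adapted to an anisotropic $\D$, the combined quantity is smaller than the separated one by a factor of order $\kappa(\D)$.
\begin{verbatim}
import numpy as np

# Sketch (our own numbers): element shape adapted to an anisotropic D
# (small diffusion across the thin direction of the element).
a = 2.0 / 3.0 ** 0.25
E_hat = np.column_stack(([a, 0.0], [a / 2.0, a * np.sqrt(3.0) / 2.0]))
E = np.column_stack(([1.0, 0.0], [0.5, 0.01]))               # thin direction: y
Finv = E_hat @ np.linalg.inv(E)                              # (F_K')^{-1}
D = np.diag([1.0, 1.0e-3])                                   # kappa(D) = 1000
combined = np.linalg.norm(Finv @ D @ Finv.T, 2)
separated = np.linalg.eigvalsh(D).max() * np.linalg.norm(Finv @ Finv.T, 2)
print(combined, separated, separated / combined)             # ratio is of order kappa(D)
\end{verbatim}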
\section{Numerical examples}
\label{sec:numerical:examples}
To test the developed estimates we continue \cref{ex:explicit:SRK} (stabilized Runge-Kutta methods) and compare the exact value of the largest permissible time step \cref{eq:tau:srk:exact} with the lower bound \cref{eq:tau:SRK},
\[
\tau_{\max} = \frac{2s^2}{\lambda_{\max}(M^{-1} A)}
\quad \text{and} \quad
\tau_h = \frac{2s^2}{C_*} \min_i \frac{M_{ii}}{A_{ii}},
\]
and compute the ratio $\tau_{\max} / \tau_h$ to evaluate the accuracy of the estimate.
Since $\tau_{\max} / \tau_h$ is independent of the number of stages $s$, we rescale the values of $\tau_{\max}$ and $\tau_h$ by $s^{-2}$ so that the comparison does not depend on the particular method, i.e.,\ in the following we compare
\begin{equation}
\frac{\tau_{\max}}{s^2} = \frac{2}{\lambda_{\max}(M^{-1} A)}
\quad \text{with} \quad
\frac{\tau_h}{s^2} = \frac{2}{C_*} \min_i \frac{M_{ii}}{A_{ii}}.
\label{eq:tauh:mat}
\end{equation}
Note that \cref{eq:tau:SRK} implies that $1 \le \tau_{\max} / \tau_h \le C_*$ for any mesh and any diffusion matrix $\D$.
Moreover, from \cref{eq:stability:geo:ii},
\begin{equation}
\frac{\tau_{\max}}{s^2}
\ge \frac{2 h_{\M}^{2}}
{C_* C_{\#} \max_i \frac{1}{\Abs{\omega_i}}
\sum\limits_{K \in \omega_i} \Abs{K}\cdot \Norm{\M_K \D_K}_2} .
\label{eq:tauh:geo}
\end{equation}
\begin{example}[1D example~{\cite[Sects.~6.1 and~6.2]{PetSau12}}]
\label{sec:PetSau}
\normalfont{%
As a first example we consider the heat diffusion $u_t = {(\D u_x)}_x$ in $\Omega= (0,1)$ with the diffusion coefficients
\[
\D(x)
= {\left(
2 - \sin\left( 2 \pi \frac{x}{\varepsilon} \right)
\right)}^{-1}
\quad \text{and} \quad
\D(x)
= {\left(
2 - \sin\left( 2 \pi \tan \frac{(1-\varepsilon)\pi x}{2} \right)
\right)}^{-1} ,
\]
where $\varepsilon$ is a positive parameter (\cref{fig:PetSau}).
We choose $\varepsilon = 2^{-4}$ for our tests.
\begin{figure}[t]
\subcaptionbox{periodic\label{fig:PetSau:D:per}}[0.5\textwidth]
{%
\tikzsetnextfilename{hkl2013-ps-per-D}%
\centering%
\begin{tikzpicture}%
\begin{axis}[%
ymin=0.25, ymax=1.075,
xmin=-0.05, xmax=1.05,
width=0.45\linewidth,
height=0.32\linewidth,
]
\addplot[smooth, domain=0:1, samples=65]
{1 / ( 2 - sin(360*x*16) )};
\end{axis}%
\end{tikzpicture}%
}%
\subcaptionbox{nonperiodic\label{fig:PetSau:D:nonp}}[0.5\textwidth]
{%
\tikzsetnextfilename{hkl2013-ps-nonp-D}%
\centering%
\begin{tikzpicture}%
\begin{axis}[%
ymin=0.25, ymax=1.075,
xmin=-0.05, xmax=1.05,
width=0.45\linewidth,
height=0.32\linewidth,
]
\addplot[smooth, domain=0:tan(15*180/32), samples=257]
({atan(x)*32/15/180}, {1 / ( 2 - sin( 360 * x) )});
\end{axis}%
\end{tikzpicture}%
}
\caption{Diffusion coefficients $\D$ in 1D (\cref{sec:PetSau})\label{fig:PetSau}}
\end{figure}
Numerical results in \cref{tab:PetSau} show that $1.00 \le \tau_{\max} / \tau_h \le 1.45$ for all considered meshes and cases, which is consistent with the theoretical prediction $1 \le \tau_{\max} / \tau_h \le 2$ (with mass lumping) and $1 \le \tau_{\max} / \tau_h \le 4$ (without mass lumping).
Interestingly, for this example, the estimate appears to be even asymptotically exact ($\tau_{\max} / \tau_h \to 1$ as $N \to \infty$) except for the case of $\D^{-1}$-uniform meshes with mass lumping.
\Cref{tab:PetSau} further shows that with mass lumping $\tau_{\max}$ is roughly three times as large as without mass lumping.
The largest permissible time step $\tau_{\max}$ for $\D^{-1}$-uniform meshes is approximately \numrange{1.4}{1.8} times as large as for uniform meshes.
}\vspace{1.5ex}
\end{example}
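For reproducibility, a minimal Python sketch of the computation behind the uniform-mesh rows of \cref{tab:PetSau} is given below; the mesh size and the midpoint quadrature for the variable coefficient are our own choices. It assembles $M$ and $A$ for the periodic coefficient and evaluates $\tau_{\max}/s^2$ and $\tau_h/s^2$ as in \cref{eq:tauh:mat} with $C_*=4$ (full mass matrix).
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

# Sketch (ours): 1D linear FEM for u_t = (D u_x)_x, periodic D, uniform mesh,
# homogeneous Dirichlet BCs, element integrals of D by the midpoint rule.
eps, N = 2.0 ** -4, 256
h = 1.0 / N
xm = (np.arange(N) + 0.5) * h                         # element midpoints
Dm = 1.0 / (2.0 - np.sin(2.0 * np.pi * xm / eps))     # periodic diffusion coefficient
A = np.zeros((N + 1, N + 1)); M = np.zeros((N + 1, N + 1))
for k in range(N):
    idx = np.ix_([k, k + 1], [k, k + 1])
    A[idx] += Dm[k] / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    M[idx] += h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
A, M = A[1:-1, 1:-1], M[1:-1, 1:-1]                   # remove boundary nodes
lam_max = eigh(A, M, eigvals_only=True).max()
tau_max = 2.0 / lam_max                               # tau_max / s^2
tau_h = 2.0 / 4.0 * (np.diag(M) / np.diag(A)).min()   # tau_h / s^2 with C_* = 4
print(tau_max / tau_h)                                # lies in [1, 4]; close to 1 in the tables
\end{verbatim}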
\afterpage{%
\begin{table}[t]
\caption{Numerical results in 1D (\cref{sec:PetSau}\label{tab:PetSau})}
\begin{subtable}[c]{1.0\linewidth}
\centering
\subcaption{periodic $\D$ (\cref{fig:PetSau:D:per})\label{tab:PetSau:periodic}}
\begin{tabular}[b]{@{} r llr llr @{}}
\toprule%
& \multicolumn{3}{c}{with mass lumping}
& \multicolumn{3}{c}{without mass lumping}\\
\cmidrule(r){2-4}
\cmidrule(r){5-7}
$N\phantom{12}$
& $\frac{\tau_{\max}}{s^2}$
& $\frac{\tau_h}{s^2}$
& $\frac{\tau_{\max}}{\tau_h}$
& $\frac{\tau_{\max}}{s^2}$
& $\frac{\tau_h}{s^2}$
& $\frac{\tau_{\max}}{\tau_h}$ \\
\midrule%
\multicolumn{7}{c}{uniform meshes} \\
\midrule%
\input{hkl2013-ps-per-uni.dat}
\midrule%
\multicolumn{7}{c}{$\D^{-1}$-uniform meshes} \\
\midrule%
\input{hkl2013-ps-per-duni.dat}
\bottomrule%
\end{tabular}
\vspace{3ex}
\end{subtable}
\begin{subtable}[c]{1.0\linewidth}
\centering
\subcaption{nonperiodic $\D$
(\cref{fig:PetSau:D:nonp})\label{tab:PetSau:nonperiodic}}
\begin{tabular}[b]{@{} r llr llr @{}}
\toprule%
& \multicolumn{3}{c}{with mass lumping}
& \multicolumn{3}{c}{without mass lumping}\\
\cmidrule(r){2-4}
\cmidrule(r){5-7}
$N\phantom{12}$
& $\frac{\tau_{\max}}{s^2}$
& $\frac{\tau_h}{s^2}$
& $\frac{\tau_{\max}}{\tau_h}$
& $\frac{\tau_{\max}}{s^2}$
& $\frac{\tau_h}{s^2}$
& $\frac{\tau_{\max}}{\tau_h}$ \\
\midrule%
\multicolumn{7}{c}{uniform meshes} \\
\midrule%
\input{hkl2013-ps-nonp-uni.dat}
\midrule%
\multicolumn{7}{c}{$\D^{-1}$-uniform meshes} \\
\midrule%
\input{hkl2013-ps-nonp-duni.dat}
\bottomrule%
\end{tabular}
\end{subtable}
\end{table}
\clearpage{}
}
\begin{example}[2D example, $\D = I$]
\label{ex:ZhuDu}
\normalfont{%
In this example we consider the simplest case, $\D=I$.
Mesh examples are taken from~\cite{ZhuDu11,ZhuDu14}; they are: uniform isotropic, uniform anisotropic and strongly refined towards the boundary (\cref{fig:ZhuDu:mesh}).
Since these meshes have no obtuse angles, we can use sharper bounds with $C_* = 2$ (mass lumping) or $C_* = 4$ (no mass lumping) and therefore we expect that $1 \le \tau_{\max} / \tau_h \le 2$ or $1 \le \tau_{\max} / \tau_h \le 4$, respectively.
\Cref{tab:square:meshes} shows that $1.14 \le \tau_{\max} / \tau_h \le 1.69$ (mass lumping) and $1.18 \le \tau_{\max} / \tau_h \le 2.33$ (no mass lumping).
In comparison, the same ratio if using \cref{eq:lmax:ZhuDu} and \cref{eq:lmax:Shewchuk} ranges from
\numrange{1.78}{3.50}%
\footnote{In our tests, the estimate by Zhu and Du~\cite{ZhuDu14} seems to provide better results than in the numerical examples of the original paper.}
and \numrange{4.00}{6.77}, respectively.
Since $\D = I$ in this example, the difference is mainly due to the fact that estimates in terms of the mesh geometry are generally less tight than those in terms of the matrix entries, because the additional estimation steps decrease the accuracy.
Notice the significant reduction of $\tau_{\max}$ when the mesh gets adapted in the ``wrong'' way, i.e.,\ away from $\D^{-1}$.
For example, a $32 \times 32$ uniform mesh requires $\tau_{\max} = \num{2.38e-4}$, whereas the $4\times 256$ mesh with the same number of elements requires $\tau_{\max} = \num{6.36e-6}$, a reduction by a factor of $37$.
A strongly anisotropic $4\times16$ mesh adapted towards the boundary, with a much smaller number of elements, leads to a further reduction of the step size by a factor of \num{3000}.
Thus, the matching between the element geometry and the diffusion matrix has significant effects on the time step size and, depending on the anisotropy of the mesh and diffusion matrix, changes in the mesh alignment can result in changes in the time step size by orders of magnitude.
Again, mass lumping allows approximately \numrange{1.9}{3.2} times larger time steps.
\begin{figure}[t]
\hfill{}%
\subcaptionbox{uniform isotropic\label{fig:ZhuDu:mesh:iso}}[0.33\textwidth]
{%
\tikzsetnextfilename{hkl2013-zd-uni-iso-mesh}%
\centering%
\begin{tikzpicture}[scale=0.9]
\begin{axis}[axis equal, hide axis,
xmin = 0.0, xmax = 1.0, ymin = 0.0, ymax = 1.0,
colormap={bw}{gray(0cm)=(0); gray(1cm)=(0)},
height=0.40\textwidth,
]
\addplot[patch, fill=white, patch table = {hkl2013-zd-uni-iso-elements.dat}]
table [x index=0, y index=1] {hkl2013-zd-uni-iso-nodes.dat};
\end{axis}%
\end{tikzpicture}%
}%
\hfill{}%
\subcaptionbox{uniform anisotropic\label{fig:ZhuDu:mesh:ani}}[0.33\textwidth]
{%
\tikzsetnextfilename{hkl2013-zd-uni-ani-mesh}%
\centering%
\begin{tikzpicture}[scale=0.9]
\begin{axis}[axis equal, hide axis,
xmin = 0.0, xmax = 1.0, ymin = 0.0, ymax = 1.0,
colormap={bw}{gray(0cm)=(0); gray(1cm)=(0)},
height=0.40\textwidth,
]
\addplot[patch, fill=white, patch table = {hkl2013-zd-uni-ani-elements.dat}]
table [x index=0, y index=1] {hkl2013-zd-uni-ani-nodes.dat};
\end{axis}%
\end{tikzpicture}%
}%
\hfill{}%
\subcaptionbox{boundary layer\label{fig:ZhuDu:mesh:p2}}[0.33\textwidth]
{%
\tikzsetnextfilename{hkl2013-zd-p2-mesh}%
\centering%
\begin{tikzpicture}[scale=0.9]
\begin{axis}[axis equal, hide axis,
xmin = 0.0, xmax = 1.0, ymin = 0.0, ymax = 1.0,
colormap={bw}{gray(0cm)=(0); gray(1cm)=(0)},
height=0.40\textwidth,
]
\addplot[patch, fill=white, patch table = {hkl2013-zd-p2-elements.dat}]
table [x index=0, y index=1] {hkl2013-zd-p2-nodes.dat};
\end{axis}%
\end{tikzpicture}%
}%
\hfill{}%
\caption{Mesh examples in 2D (\cref{ex:ZhuDu})\label{fig:ZhuDu:mesh}}
\end{figure}
}\vspace{1.5ex}
\end{example}
\begin{table}[p]
\caption{Numerical results in 2D (\cref{ex:ZhuDu})\label{tab:square:meshes}}
\begin{tabular}[b]{@{} lr l lr lr @{}}
\toprule%
\multicolumn{3}{c}{without mass lumping}
& \multicolumn{2}{c}{new estimate \cref{eq:tauh:mat}}
& \multicolumn{2}{c}{Zhu \& Du~\cite{ZhuDu14}}\\
\cmidrule(r){4-5}
\cmidrule(r){6-7}
\multicolumn{1}{@{}l}{\phantom{1}mesh}
& $N\phantom{1}$
& $\frac{\tau_{\max}}{s^2}$
& $\frac{\tau_h}{s^2}$
& $\frac{\tau_{\max}}{\tau_h}$
& $\frac{\tau_h}{s^2}$
& $\frac{\tau_{\max}}{\tau_h}$ \\
\midrule%
\multicolumn{7}{c}{uniform isotropic (\cref{fig:ZhuDu:mesh:iso})} \\
\midrule%
\input{hkl2013-zd-uni-iso.dat}
\midrule%
\multicolumn{7}{c}{uniform anisotropic (\cref{fig:ZhuDu:mesh:ani})} \\
\midrule%
\input{hkl2013-zd-uni-ani.dat}
\midrule%
\multicolumn{7}{c}{boundary layer (\cref{fig:ZhuDu:mesh:p2})}\\
\midrule%
\input{hkl2013-zd-p2.dat}
\bottomrule%
\\[-0.313pt]
\toprule%
\multicolumn{3}{c}{with mass lumping}
& \multicolumn{2}{c}{new estimate \cref{eq:tauh:mat}}
& \multicolumn{2}{c}{Shewchuk~\cite{She02a}}\\
\cmidrule(r){4-5}
\cmidrule(r){6-7}
\multicolumn{1}{@{}l}{\phantom{1}mesh}
& $N\phantom{1}$
& $\frac{\tau_{\max}}{s^2}$
& $\frac{\tau_h}{s^2}$
& $\frac{\tau_{\max}}{\tau_h}$
& $\frac{\tau_h}{s^2}$
& $\frac{\tau_{\max}}{\tau_h}$ \\
\midrule%
\multicolumn{7}{c}{uniform isotropic (\cref{fig:ZhuDu:mesh:iso})} \\
\midrule%
\input{hkl2013-zd-uni-iso-lump.dat}
\midrule%
\multicolumn{7}{c}{uniform anisotropic (\cref{fig:ZhuDu:mesh:ani})} \\
\midrule%
\input{hkl2013-zd-uni-ani-lump.dat}
\midrule%
\multicolumn{7}{c}{boundary layer (\cref{fig:ZhuDu:mesh:p2})}\\
\midrule%
\input{hkl2013-zd-p2-lump.dat}
\bottomrule%
\end{tabular}%
\end{table}
\begin{example}[2D groundwater flow with jumping coefficients~\cite{MicPer08}]
\label{ex:waterflow}
\normalfont{%
As the next example we consider groundwater flow through an aquifer.
The problem is given by the IBVP~\cref{eq:IBVP} with $\Omega = (0,100) \times (0,100)$ and two impermeable subdomains $\Omega_1 = (0,80) \times (64,68)$ and $\Omega_2 = (20,100) \times (40,44)$.
\Cref{fig:waterflow:domain} shows the diffusion matrix $\D$ and the boundary conditions.
Although $\D$ is isotropic, it has a jump between the subdomains, leading to the anisotropic behavior of the solution.
We compute the solution by $h$-refinement in the standard way, use Hessian-recovery-based mesh adaptation to obtain adaptive meshes at particular time points, and compare the exact $\tau_{\max}$ with the lower bound $\tau_h$.
For our computation we used \texttt{KARDOS}~\cite{ErdLanRoi02} to solve the PDE and \texttt{BAMG}~\cite{bamg} for mesh generation.
Examples of adaptive meshes are shown in \cref{fig:waterflow} for the time points $t=\num{1.0e4}$ and $t=\num{1.0e5}$.
Note that these meshes have oblique elements and angles close to \ang{180}: the maximum angles in \cref{fig:waterflow:iv,fig:waterflow:v} are \ang{175} and \ang{177}, respectively.
\Cref{tab:waterflow} shows that the ratio $\tau_{\max} / \tau_h$ is about \numrange{2.13}{2.48} with mass lumping and \numrange{3.25}{3.87} without mass lumping, which is consistent with the theoretical upper bounds $d + 1 = 3$ and $2 (d + 1) = 6$.
In this example, mass lumping would allow \numrange{2.6}{2.8} times larger time steps, which is similar to \cref{ex:ZhuDu} (a factor of \numrange{1.9}{3.2} there).
In a practical computation, however, one would rather use a numerical approximation for $\lambda_{\max}(M^{-1} A)$.
Typically, five steps of the Lánczos method with a random starting vector approximate the largest eigenvalue within 10\%.
Another practical alternative is the power method, for which it is reported~\cite[Sect.~3.2]{SomShaVer98a} that, for the case of eigenvalues being close to the negative real axis, usually only a few iterations are required if the computed eigenvector from the previous step is used as a new starting vector.
To compare it with our theoretical estimate, we additionally computed $\tau_h$ using five steps of the Lánczos method with a random starting vector (divided by 1.1 as a safety factor, since the Lánczos method approximates the largest eigenvalue from below).
\Cref{tab:waterflow:lanczos} shows that the corresponding ratio $\tau_{\max} / \tau_h$ is about \numrange{1.00}{1.07}, i.e., the computed time step approximation is within 7\% from the optimal value.
In our computations, the accuracy of our theoretical estimate \cref{eq:tauh:mat} corresponds to about two to three steps of the Lánczos method.
We would also like to point out that the lower bound in~\cref{eq:lmax} can be used as a practical safety check for a numerical approximation: if the computed numerical approximation of $\lambda_{\max}(M^{-1} A)$ is smaller than this bound, the corresponding time step is guaranteed to violate the stability condition of the time integration method.
}\vspace{1.5ex}
\end{example}
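For completeness, a minimal power-iteration sketch in Python is given below; it is our own illustration and not the procedure implemented in \texttt{KARDOS}, and the random matrices merely stand in for the assembled finite element matrices. The generalized Rayleigh quotient approximates $\lambda_{\max}(M^{-1}A)$ from below, which is why a safety factor is advisable.
\begin{verbatim}
import numpy as np

# Minimal power-iteration sketch (ours) for estimating lambda_max(M^{-1} A) with a few
# M-solves; in practice one would reuse the previous eigenvector as the starting vector.
def estimate_lambda_max(A, M, steps=5, seed=1):
    v = np.random.default_rng(seed).standard_normal(A.shape[0])
    for _ in range(steps):
        v = np.linalg.solve(M, A @ v)              # one application of M^{-1} A
        v /= np.linalg.norm(v)
    return (v @ A @ v) / (v @ M @ v)               # generalized Rayleigh quotient (from below)

# toy demonstration with random SPD matrices standing in for the FEM matrices
rng = np.random.default_rng(0)
B = rng.standard_normal((40, 40)); A = B @ B.T + 40.0 * np.eye(40)
C = rng.standard_normal((40, 40)); M = C @ C.T + 40.0 * np.eye(40)
est = estimate_lambda_max(A, M)
exact = np.linalg.eigvals(np.linalg.solve(M, A)).real.max()
print(est / exact)                                 # <= 1, approaches lambda_max from below
\end{verbatim}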
\begin{figure}[p]
\subcaptionbox{Domain and the diffusion $\D$\label{fig:waterflow:domain}}
[0.33\linewidth]
{%
\tikzsetnextfilename{hkl2013-waterflow-omega}%
\input{hkl2013-waterflow-domain.tikz}%
}%
\hfill{}%
\subcaptionbox{$t = \num{1.0e4}$, $N = \num{5305}$\label{fig:waterflow:iv}}
[0.33\linewidth]
{%
\includegraphics[width=0.30\textwidth, clip]{hkl2013-waterflow-10e4}%
\newline{}%
\newline{}%
\includegraphics[width=0.30\textwidth, clip]{hkl2013-waterflow-10e4-zoom}%
}%
\hfill{}%
\subcaptionbox{$t = \num{1.0e5}$, $N = \num{20334}$\label{fig:waterflow:v}}
[0.33\linewidth]
{%
\includegraphics[width=0.30\textwidth, clip]{hkl2013-waterflow-10e5}%
\newline{}%
\newline{}%
\includegraphics[height=0.30\textwidth, clip]{hkl2013-waterflow-10e5-zoom}%
}%
\caption{%
Domain, mesh examples and close-ups at $[74, 82] \times [62, 70]$
(the upper right corner at the entrance of the tunnel)
for the groundwater flow (\cref{ex:waterflow})\label{fig:waterflow}%
}%
\end{figure}
\begin{table}[p]
\caption{Numerical results
for the groundwater flow (\cref{ex:waterflow})}
\begin{subtable}[c]{1.0\linewidth}
\centering
\subcaption{%
computing $\tau_h$ with \cref{eq:tauh:mat}\label{tab:waterflow}%
}
\begin{tabular}[b]{@{} lr llr llr @{}}
\toprule%
&& \multicolumn{3}{c}{with mass lumping}
& \multicolumn{3}{c}{without mass lumping}\\
\cmidrule(r){3-5}
\cmidrule(r){6-8}
\multicolumn{1}{@{}l}{\phantom{1}time}
& $N\phantom{12}$
& $\frac{\tau_{\max}}{s^2}$
& $\frac{\tau_h} {s^2}$
& $\frac{\tau_{\max}}{\tau_h}$
& $\frac{\tau_{\max}}{s^2}$
& $\frac{\tau_h} {s^2}$
& $\frac{\tau_{\max}}{\tau_h}$ \\
\midrule%
\input{hkl2013-waterflow.dat}
\bottomrule%
\end{tabular}
\end{subtable}%
\\[2ex]
\begin{subtable}[c]{1.0\linewidth}
\centering
\subcaption{%
computing $\tau_h$ with five steps of the Lánczos method
using a random starting vector\label{tab:waterflow:lanczos}%
}
\begin{tabular}[b]{@{} lr llr llr @{}}
\toprule%
&& \multicolumn{3}{c}{with mass lumping}
& \multicolumn{3}{c}{without mass lumping}\\
\cmidrule(r){3-5}
\cmidrule(r){6-8}
\multicolumn{1}{@{}l}{\phantom{1}time}
& $N\phantom{12}$
& $\frac{\tau_{\max}}{s^2}$
& $\frac{\tau_h} {s^2}$
& $\frac{\tau_{\max}}{\tau_h}$
& $\frac{\tau_{\max}}{s^2}$
& $\frac{\tau_h} {s^2}$
& $\frac{\tau_{\max}}{\tau_h}$ \\
\midrule%
\input{hkl2013-waterflow-lanczos.dat}
\bottomrule%
\end{tabular}
\end{subtable}%
\end{table}
\begin{example}[2D anisotropic diffusion]
\label{ex:aniso}
\normalfont{%
This example shows the importance of the interplay between the major diffusion directions and the mesh geometry.
Consider the IBVP~\cref{eq:IBVP} in $\Omega = {\left(0,1\right)}^2 \backslash {\left[\frac{4}{9},\frac{5}{9}\right]}^2$ with the homogeneous Dirichlet boundary condition and
\[
\D =
\begin{bmatrix}
\cos\theta & - \sin\theta \\
\sin \theta & \cos\theta
\end{bmatrix}
\begin{bmatrix}
1000 & 0 \\
0 & 1
\end{bmatrix}
\begin{bmatrix}
\cos\theta & \sin\theta \\
- \sin\theta & \cos\theta
\end{bmatrix},
\quad
\theta = \pi \sin x \cos y.
\]
First, we consider quasi-uniform meshes (\cref{fig:aniso:quni}), for which the elements are close to uniform in shape and size, $F_K' \approx \Abs{K}^{\frac{1}{d}} I$ and $\norm{{\left(F_K'\right)}^{-1} \D_K {\left(F_K'\right)}^{-T}}_2 \approx \lambda_{\max}(\D) \norm{{\left(F_K'\right)}^{-1} {\left(F_K'\right)}^{-T}}_2$.
Hence, using \cref{eq:stability:geo-2,eq:lmax:ZhuDu} provides comparable results, which is confirmed by the numerical results in \cref{tab:aniso}: for quasi-uniform grids \cref{eq:stability:geo-2} and \cref{eq:lmax:ZhuDu} or \cref{eq:lmax:Shewchuk} are accurate within a factor of \numrange{4.04}{6.35} and \numrange{4.52}{6.02}, respectively.
For $\D^{-1}$-uniform (coefficient-adaptive) meshes (\cref{fig:aniso:duni}) the situation is quite different and, as mentioned in \cref{rem:zhudu}, bound \cref{eq:stability:geo-2} should be more accurate than that using \cref{eq:lmax:ZhuDu}.
This is indeed confirmed by the numerical results: bound \cref{eq:stability:geo-2} is accurate within a factor of \numrange{3.40}{6.44}, whereas \cref{eq:lmax:ZhuDu} underestimates the real value by a factor of \numrange{347}{1020} (recalling $\kappa(\D) = \num{1000}$).
Note that Shewchuk's bound \cref{eq:lmax:Shewchuk} provides accurate results in any case, although not quite as accurate as \cref{eq:stability:geo-2}.
It is worth pointing out that the most accurate bound in all cases is \cref{eq:tauh:mat} in terms of the matrix entries.
This example also shows that $\D^{-1}$-uniform meshes allow larger time steps even if their elements have ``bad quality'' in the usual isotropic sense.
Hence, it is important to consider the quality of the mesh \emph{in relation to the diffusion} and not on its own.
}%
\end{example}
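The construction of $\D$ and the matching quantity $\Norm{\M_K \D_K}_2$ from \cref{eq:stability:geo:ii} can be illustrated with a few lines of Python (the sample point is our own choice): with $\M=\D^{-1}$ the quantity equals one, whereas with $\M=I$ it equals $\lambda_{\max}(\D)=1000$.
\begin{verbatim}
import numpy as np

# Diffusion matrix of this example at an arbitrary (our own) sample point, and the
# matching quantity ||M_K D_K||_2 for the two metric choices discussed in the text.
def diffusion(x, y):
    th = np.pi * np.sin(x) * np.cos(y)
    Q = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    return Q @ np.diag([1000.0, 1.0]) @ Q.T

D = diffusion(0.3, 0.7)
print(np.linalg.norm(np.linalg.inv(D) @ D, 2))   # M = D^{-1}: perfect matching, value 1
print(np.linalg.norm(D, 2))                      # M = I (quasi-uniform mesh): value 1000
\end{verbatim}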
\begin{figure}[p]
\subcaptionbox{quasi-uniform\label{fig:aniso:quni}}
[0.5\linewidth]
{
%
\tikzsetnextfilename{hkl2013-aniso-uni-mesh}
\begin{tikzpicture}[scale=0.9]
\begin{axis}[axis equal, hide axis,
xmin = 0.0, xmax = 1.0, ymin = 0.0, ymax = 1.0,
colormap={bw}{gray(0cm)=(0); gray(1cm)=(0)},
height=0.40\linewidth,
]
\addplot[patch, fill=white] table {hkl2013-aniso-uni-mesh.dat};
\end{axis}
\end{tikzpicture}
}%
\subcaptionbox{$\D^{-1}$-uniform\label{fig:aniso:duni}}
[0.5\linewidth]
{
%
\tikzsetnextfilename{hkl2013-aniso-duni-mesh}
\begin{tikzpicture}[scale=0.9]
\begin{axis}[axis equal, hide axis,
xmin = 0.0, xmax = 1.0, ymin = 0.0, ymax = 1.0,
colormap={bw}{gray(0cm)=(0); gray(1cm)=(0)},
height=0.40\linewidth,
]
\addplot[patch, fill=white] table {hkl2013-aniso-duni-mesh.dat};
\end{axis}
\end{tikzpicture}
}%
\caption{Mesh examples for the anisotropic diffusion
(\cref{ex:aniso})\label{fig:aniso}}
\vspace{3ex}
\captionof{table}{Numerical results for the anisotropic diffusion
(\cref{ex:aniso})\label{tab:aniso}}
\begin{subtable}[c]{1.0\linewidth}
\centering
\subcaption{%
without mass lumping
}%
\begin{tabular}[b]{@{} rl ll ll lr @{}}
\toprule%
\multicolumn{2}{c}{ }
& \multicolumn{2}{c}{new estimate \cref{eq:tauh:mat}}
& \multicolumn{2}{c}{geometric \cref{eq:tauh:geo}}
& \multicolumn{2}{c}{Zhu \& Du~\cite{ZhuDu14}}\\
\cmidrule(r){3-4}
\cmidrule(r){5-6}
\cmidrule(r){7-8}
$N\phantom{12}$
& \phantom{12}$\frac{\tau_{\max}}{s^2}$
& \phantom{12}$\frac{\tau_h}{s^2}$
& $\frac{\tau_{\max}}{\tau_h}$
& \phantom{12}$\frac{\tau_h}{s^2}$
& $\frac{\tau_{\max}}{\tau_h}$
& \phantom{12}$\frac{\tau_h}{s^2}$
& $\frac{\tau_{\max}}{\tau_h}$ \\
\midrule%
\multicolumn{8}{c}{quasi-uniform meshes (\cref{fig:aniso:quni})} \\
\midrule%
\input{hkl2013-aniso-uni.dat}
\midrule%
\multicolumn{8}{c}{$\D^{-1}$-uniform meshes (\cref{fig:aniso:duni})} \\
\midrule%
\input{hkl2013-aniso-duni.dat}
\bottomrule%
\end{tabular}%
\end{subtable}%
\\[1ex]
\begin{subtable}[c]{1.0\linewidth}
\centering
\subcaption{%
with mass lumping
}%
\begin{tabular}[b]{@{} rl ll ll lr @{}}
\toprule%
\multicolumn{2}{c}{ }
& \multicolumn{2}{c}{new estimate~\cref{eq:tauh:mat}}
& \multicolumn{2}{c}{geometric~\cref{eq:tauh:geo}}
& \multicolumn{2}{c}{Shewchuk~\cite{She02a}}\\
\cmidrule(r){3-4}
\cmidrule(r){5-6}
\cmidrule(r){7-8}
$N\phantom{12}$
& \phantom{12}$\frac{\tau_{\max}}{s^2}$
& \phantom{12}$\frac{\tau_h}{s^2}$
& $\frac{\tau_{\max}}{\tau_h}$
& \phantom{12}$\frac{\tau_h}{s^2}$
& $\frac{\tau_{\max}}{\tau_h}$
& \phantom{12}$\frac{\tau_h}{s^2}$
& $\frac{\tau_{\max}}{\tau_h}$ \\
\midrule%
\multicolumn{8}{c}{quasi-uniform meshes (\cref{fig:aniso:quni})} \\
\midrule%
\input{hkl2013-aniso-uni-lump.dat}
\midrule%
\multicolumn{8}{c}{$\D^{-1}$-uniform meshes (\cref{fig:aniso:duni})} \\
\midrule%
\input{hkl2013-aniso-duni-lump.dat}
\bottomrule%
\end{tabular}%
\end{subtable}%
\end{figure}
\newpage
\section{Conclusions}
\label{sec:conclusion}
\Cref{thm:lmax:theorem} gives an easily computable bound on the largest eigenvalue of the system matrix $\tilde{M}^{-1}A$ in terms of the diagonal entries of $\tilde{M}$ and $A$ with $\tilde{M}$ being either $M$ or $M_{lump}$.
The bound is tight for \emph{any mesh} and \emph{any diffusion matrix $\D$} within a small constant which is given explicitly and depends only on the dimension of the domain.
This allows efficient and accurate estimation of the largest permissible time step $\tau_{\max}$.
Moreover, the estimates \cref{eq:stability:geo,eq:stability:geo:ii} in terms of the mesh geometry reveal how the mesh and the diffusion matrix affect the stability condition.
Roughly speaking, $\tau_{\max}$ depends only on the number of mesh elements and the matching between the element geometry and the diffusion matrix.
Thus, it is not the element geometry itself but the \emph{element geometry in relation to the diffusion matrix} that is important for the stability.
The element quality measure $\QD$ provides a measure for the effect of a given element on the stability condition.
As seen in \cref{ex:ZhuDu}, strong anisotropic adaptation in the ``wrong'' direction can cause a significant reduction of the time step size.
Meanwhile, the result suggests that improvements in the element quality can significantly increase $\tau_{\max}$.
The results obtained can be extended to higher-order~\cite{HuaKamLan15} or even $p$-adaptive finite elements without major modifications.
Essentially, one only needs to recalculate the constants which depend on the choice of the basis functions.
Furthermore, numerical results suggest that, at least in one and two dimensions, mass lumping can increase the time step size by a factor of \numrange{2}{3}.
This topic deserves more detailed investigations.
\section*{Acknowledgement}
Lennard Kamenski is thankful to Klaus Gärtner for a helpful comment that led to \cref{rem:Duniform} and to Larissa Kaspar for providing parts of the code used in the computations in \cref{ex:waterflow}.
The authors are grateful to an anonymous referee and particularly to Jed Brown for their valuable comments and suggestions which helped to improve the quality of this paper.
| {
"timestamp": "2016-02-26T02:14:34",
"yymm": "1602",
"arxiv_id": "1602.08055",
"language": "en",
"url": "https://arxiv.org/abs/1602.08055",
"abstract": "We study the stability of explicit one-step integration schemes for the linear finite element approximation of linear parabolic equations. The derived bound on the largest permissible time step is tight for any mesh and any diffusion matrix within a factor of $2(d+1)$, where $d$ is the spatial dimension. Both full mass matrix and mass lumping are considered. The bound reveals that the stability condition is affected by two factors. The first one depends on the number of mesh elements and corresponds to the classic bound for the Laplace operator on a uniform mesh. The other factor reflects the effects of the interplay of the mesh geometry and the diffusion matrix. It is shown that it is not the mesh geometry itself but the mesh geometry in relation to the diffusion matrix that is crucial to the stability of explicit methods. When the mesh is uniform in the metric specified by the inverse of the diffusion matrix, the stability condition is comparable to the situation with the Laplace operator on a uniform mesh. Numerical results are presented to verify the theoretical findings.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Stability of explicit one-step methods for P1-finite element approximation of linear diffusion equations on anisotropic meshes",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109485172559,
"lm_q2_score": 0.7185944046238982,
"lm_q1q2_score": 0.707679637216854
} |
https://arxiv.org/abs/1411.6567 | Spectral geometry of the Steklov problem | The Steklov problem is an eigenvalue problem with the spectral parameter in the boundary conditions, which has various applications. Its spectrum coincides with that of the Dirichlet-to-Neumann operator. Over the past years, there has been a growing interest in the Steklov problem from the viewpoint of spectral geometry. While this problem shares some common properties with its more familiar Dirichlet and Neumann cousins, its eigenvalues and eigenfunctions have a number of distinctive geometric features, which makes the subject especially appealing. In this survey we discuss some recent advances and open questions, particularly in the study of spectral asymptotics, spectral invariants, eigenvalue estimates, and nodal geometry. | \section{Introduction}
\subsection{The Steklov problem}
Let $\Omega$ be a compact Riemannian manifold of dimension $n\ge
2$ with (possibly non-smooth) boundary $M=\partial\Omega$.
The
\emph{Steklov problem} on $\Omega$ is
\begin{equation}
\label{stek}
\begin{cases}
\Delta u=0& \mbox{ in } \Omega,\\
\partial_\nu u=\sigma \,u& \mbox{ on }M.
\end{cases}
\end{equation}
where $\Delta$ is the Laplace-Beltrami operator acting on functions on
$\Omega$, and $\partial_\nu$ is the outward normal derivative along the
boundary $M$. This problem was introduced by the Russian mathematician
V.A. Steklov at the turn of the 20th century (see \cite{KKKNPPS} for a historical discussion).
It is well known that the spectrum of the Steklov problem is discrete as long as the trace
operator $H^1(\Omega)\rightarrow L^2(\partial\Omega)$ is compact (see~\cite{arendtmazzeo}). In this case, the eigenvalues
form a sequence
$0=\sigma_0<\sigma_1\leq\sigma_2\leq\cdots\nearrow\infty$.
This is true under some mild regularity assumptions, for instance if
$\Omega$ has Lipschitz boundary (see~\cite[Theorem 6.2]{necas}).
The present paper focuses on the geometric properties of Steklov eigenvalues and eigenfunctions.
A lot of progress in this area has been made in the last few years,
and some fascinating open problems have emerged.
We will start by explaining the motivation to study the Steklov spectrum.
In particular, we will emphasize the differences between this eigenvalue problem and its
Dirichlet and Neumann counterparts.
\subsection{Motivation}
The Steklov eigenvalues can be interpreted as the eigenvalues of the
{\it Dirichlet-to-Neumann operator} ${\mathcal D}:H^{1/2}(M) \rightarrow
H^{-1/2}(M)$ which maps a function $f\in H^{1/2}(M)$ to ${\mathcal D}
f=\partial_\nu(Hf)$, where $Hf$ is the harmonic
extension of $f$ to $\Omega$. The study of the Dirichlet-to-Neumann
operator (also known as the voltage-to-current map) is essential for
applications to electrical impedance tomography, which is used in
medical and geophysical imaging (see~\cite{Uhlmann} for a recent
survey).
A rather striking feature of the asymptotic distribution of Steklov eigenvalues is its unusually high sensitivity (compared to the Dirichlet and Neumann cases) to the regularity of the boundary. On one hand, if the boundary of a
domain is smooth, the corresponding Dirichlet-to-Neumann operator is
pseudodifferential and elliptic of order one (see ~\cite{Taylor}). As a
result, one can show, for instance, that a surprisingly sharp
asymptotic formula for Steklov eigenvalues ~\eqref{formula:GPPSmain} holds for smooth surfaces.
However, this estimate already
fails for polygons (see section~\ref{sing}). It is in fact
likely that for domains which are not $C^\infty$-smooth but only of
class $C^k$ for some $k \ge 1$, the rate of decay of the remainder
in eigenvalue asymptotics depends on $k$. To our knowledge, for domains with Lipschitz
boundaries, even one-term spectral asymptotics have not yet been
proved. A summary of the available results is presented
in~\cite{Agranovich} (see also~\cite{AgranovichAmosov}).
One of the oldest topics in spectral geometry is shape
optimization. Here again, the Steklov spectrum holds some
surprises. For instance, the classical result of Faber--Krahn for the
first Dirichlet eigenvalue $\lambda_1(\Omega)$ states that among
Euclidean domains with fixed measure, $\lambda_1$ is minimized by a
ball. Similarly, the Szeg\H{o}--Weinberger inequality states that the first nonzero
Neumann eigenvalue $\mu_1(\Omega)$ is maximized by a ball. In both
cases, no topological assumptions are made. The analogous result
for Steklov eigenvalues is Weinstock's inequality, which states that
among planar domains with fixed perimeter, $\sigma_1$ is maximized by
a disk provided that $\Omega$ is simply--connected. In contrast with
the Dirichlet and Neumann case, this assumption cannot be removed. Indeed
the result fails for appropriate annuli (see
section~\ref{isoperim}). Moreover, maximization of the first Steklov eigenvalue among all planar domains of given perimeter is an open problem.
At the same time, it is known that for simply--connected planar domains, the $k$-th normalized Steklov eigenvalue is maximized in
the limit by a disjoint union of $k$ identical disks for
any $k\ge 1$ \cite{gp}. Once again, for the Dirichlet and Neumann eigenvalues the situation is quite different: the extremal domains for $k\ge 3$ are
known only at the level of experimental numerics, and, with a few exceptions, are expected to have rather complicated geometries.
Probably the most well--known question in spectral geometry is ``Can one hear the shape of a drum?'', or whether there
exist domains or manifolds that are isospectral but not isometric.
Apart from some easy examples discussed in section~\ref{isosp}, no
examples of Steklov isospectral non-isometric manifolds are
presently known. Their construction appears to be even trickier than for the
Dirichlet or Neumann problems. In particular, it is not known whether
there exist Steklov isospectral Euclidean domains which are not isometric. Note
that the standard transplantation techniques (see \cite{Berard,
Buser1986, BCDS}) are not applicable for the Steklov problem, as it
is not clear how to ``reflect'' Steklov eigenfunctions across the
boundary.
New challenges also arise in the study of the nodal domains and the nodal sets
of Steklov eigenfunctions. One of the problems is to understand
whether the nodal lines of Steklov eigenfunctions are dense at the
``wave-length scale'', which is a basic property of the zeros of
Laplace eigenfunctions. Another interesting question is the nodal
count for the Dirichlet-to-Neumann eigenfunctions. We touch upon
these topics in section~\ref{nodal}.
Let us conclude this discussion by mentioning that the Steklov
problem is often considered in the more general form
\begin{equation}
\label{stekgen}
\partial_\nu u = \sigma \rho u,
\end{equation}
where $\rho \in L^{\infty}(\partial \Omega)$ is a non-negative weight function on the boundary. If $\Omega$ is two-dimensional, the Steklov eigenvalues can be thought of
as the squares of the natural frequencies of a vibrating free membrane with
its mass concentrated along its boundary with density $\rho$ (see~\cite{LambPro}). A
special case of the Steklov problem with the boundary condition
\eqref{stekgen} is the sloshing problem, which describes the
oscillations of fluid in a container.
In this case, $\rho \equiv 1$ on
the free surface of the fluid and $\rho \equiv 0$ on the walls of the
container. There is an extensive literature on the properties of
sloshing eigenvalues and eigenfunctions,
see~\cite{foxkut,BanKulPoltSiu,KozKuz} and references
therein.
Since the present survey is directed towards geometric questions, in
order to simplify the analysis and presentation we focus on the pure
Steklov problem with $\rho\equiv 1$.
\subsection{Computational examples}\label{Section:Examples}
The Steklov spectrum can be explicitly computed in a few cases.
Below we discuss the Steklov
eigenvalues and eigenfunctions of cylinders and balls using
separation of variables.
\begin{example}
The Steklov eigenvalues of a unit disk are
$$0,1,1,2,2,\dots,k,k,\dots.$$
The corresponding eigenfunctions in polar coordinates $(r, \phi)$
are given by $$1, r\sin \phi, r\cos\phi,\dots, r^k\sin k\phi, r^k\cos k\phi,\dots.$$
\end{example}
\begin{example}
\label{balls}
The Steklov eigenspaces on the ball $B(0,R)\subset\mathbb{R}^n$ are
the restrictions of the spaces $H_k^n$ of homogeneous harmonic
polynomials of degree $k\in\mathbb{N}$ on $\mathbb{R}^n$. The
corresponding eigenvalue is $\sigma=k/R$ with multiplicity
$$\dim H_k^n={n+k-1\choose n-1}-{n+k-3\choose n-1}.$$
This is of course a generalization of the previous example.
\end{example}
\begin{example}\label{cyl}
This example is taken from~\cite{ceg2}. Let $\Sigma$ be a
compact Riemannian manifold without
boundary. Let
$$0=\lambda_1<\lambda_2\leq\lambda_3\cdots\nearrow\infty$$
be the spectrum of the Laplace-Beltrami operator $\Delta_\Sigma$ on
$\Sigma$, and let $(u_k)$ be an orthonormal basis of $L^2(\Sigma)$
such that
$$\Delta_{\Sigma} u_k=\lambda_ku_k.$$
Given any $L>0$, consider the cylinder $\Omega=[-L,L]\times\Sigma \subset \mathbb{R}\times\Sigma$.
Its Steklov spectrum is given by
\begin{gather*}
0,\ 1/L,\
\sqrt{\lambda_k}\tanh (\sqrt{\lambda_k}L),\
\sqrt{\lambda_k}\coth (\sqrt{\lambda_k}L),
\end{gather*}
and the corresponding eigenfunctions are
\begin{gather*}
1,\ t,\
\cosh(\sqrt{\lambda_k}t)\,u_k(x),\
\sinh(\sqrt{\lambda_k}t)\,u_k(x).
\end{gather*}
\end{example}
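To see, for instance, that $\sqrt{\lambda_k}\tanh(\sqrt{\lambda_k}L)$ belongs to the Steklov spectrum, note that $u(t,x)=\cosh(\sqrt{\lambda_k}t)u_k(x)$ is harmonic on the cylinder: with the convention $\Delta_\Sigma u_k=\lambda_k u_k$ used above, the term $\Delta_\Sigma u=\lambda_k u$ is exactly cancelled by $-\partial_t^2 u=-\lambda_k u$. On the boundary component $\{L\}\times\Sigma$ the outward normal derivative is
$$
\partial_\nu u=\partial_t u\big|_{t=L}=\sqrt{\lambda_k}\sinh(\sqrt{\lambda_k}L)\,u_k
=\sqrt{\lambda_k}\tanh(\sqrt{\lambda_k}L)\,u\big|_{t=L},
$$
and the same identity holds on $\{-L\}\times\Sigma$ since $u$ is even in $t$. The eigenvalue $1/L$ corresponds in the same way to the eigenfunction $t$, and $\sqrt{\lambda_k}\coth(\sqrt{\lambda_k}L)$ to $\sinh(\sqrt{\lambda_k}t)u_k(x)$.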
In sections \ref{section:spectrumsquare} and \ref{isoperim} we will discuss two more
computational examples: the Steklov eigenvalues of a
square and of annuli.
\subsection{Plan of the paper}
The paper is organized as follows.
In section~\ref{Section:Asymptotics} we survey results on the asymptotics and invariants of the Steklov
spectrum on smooth Riemannian manifolds. In section \ref{sing} we discuss asymptotics of Steklov eigenvalues on polygons, which turns out to be quite different from the case of smooth planar domains.
Section~\ref{section:geometricinequalities} is concerned
with geometric inequalities.
In section~\ref{section:IsospectralityRigidity} we discuss Steklov isospectrality and
spectral rigidity. Finally, section~\ref{section:NodalMultiplicity} deals with the nodal geometry of
Steklov eigenfunctions and the multiplicity bounds for Steklov
eigenvalues.
\section{Asymptotics and invariants of the Steklov
spectrum}\label{Section:Asymptotics}
\subsection{Eigenvalue asymptotics}
As above, let $n\geq 2$ be the dimension of the manifold $\Omega$, so that the dimension of the
boundary $M=\partial\Omega$ is $n-1$.
As was mentioned in the introduction, the Steklov eigenvalues of a compact manifold $\Omega$ with boundary
$M=\partial\Omega$ are the eigenvalues of the
Dirichlet-to-Neumann map. It is a first order elliptic pseudodifferential operator
which has the same principal symbol as the square root of the
Laplace-Beltrami operator on $M$. Therefore, applying standard
results of H\"ormander~\cite{hormander} we obtain the following
Weyl's law for Steklov eigenvalues:
$$
\#(\sigma_j < \sigma)=\frac{{\rm Vol}(\mathbb{B}^{n-1})\,{\rm Vol} (M)}{(2\pi)^{n-1}} \sigma^{n-1} + {\mathcal{O}}(\sigma^{n-2}),
$$
where $\mathbb{B}^{n-1}$ is a unit ball in $\mathbb{R}^{n-1}$.
This formula can be rewritten as
\begin{equation}
\label{Weylaw}
\sigma_j= 2\pi \left(\frac{j}{{\rm Vol}(\mathbb{B}^{n-1})\, {\rm Vol}(M)}\right)^\frac{1}{n-1} + {\mathcal{O}}(1).
\end{equation}
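In particular, for a surface ($n=2$) we have ${\rm Vol}(\mathbb{B}^{1})=2$, and \eqref{Weylaw} becomes
$$
\sigma_j=\frac{\pi j}{\ell(M)}+{\mathcal{O}}(1),
$$
where $\ell(M)$ is the total length of the boundary.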
In two dimensions, a much more precise asymptotic formula was proved in~\cite{GPPS}. Given a finite sequence $C=\{\alpha_1,\cdots,\alpha_k\}$ of positive numbers,
consider the following union of multisets (i.e. sets with multiplicities):
$\{0,\dots,0\}\cup \alpha_1 \mathbb{N}\cup \alpha_1 \mathbb{N}\cup
\alpha_2\mathbb N\cup\alpha_2\mathbb N\cup \dots
\cup\alpha_k\mathbb{N}\cup \alpha_k\mathbb{N}$, where the first multiset
contains $k$ zeros and $\alpha\mathbb{N}=\{\alpha,2\alpha,3\alpha,\dots,n\alpha,\dots\}$.
We rearrange the elements of this multiset into a monotone increasing sequence $S(C)$.
For example,
$S(\{1\})=\{0,1,1,2,2,3,3,\cdots\}$ and $S(\{1, \pi\})=\{0,0,1,1,2,2,3,3,\pi,\pi,4,4,5,5,6,6,2\pi,2\pi,7,7,\cdots\}$.
The following sharp spectral estimate was proved in~\cite{GPPS}.
\begin{theorem}
\label{main:GPPS}
Let $\Omega$ be a smooth compact Riemannian surface with boundary $M$.
Let $M_1,\cdots,M_k$ be the connected components of the
boundary $M=\partial\Omega$, with lengths $\ell(M_i), 1\leq i\leq
k$.
Set $R=\left\{\frac{2\pi}{\ell(M_1)},\cdots,\frac{2\pi}{\ell(M_k)}\right\}$. Then
\begin{gather}\label{formula:GPPSmain}
\sigma_{j}=S(R)_j+{\mathcal{O}}(j^{-\infty}),
\end{gather}
where $\mathcal O(j^{-\infty})$ means that the error term decays
faster than any negative power of~$j$.
\end{theorem}
In particular, for simply--connected surfaces we recover the following
result proved earlier by Rozenblyum and Guillemin--Melrose
(see~\cite{rozenbljumEnglish,edward}):
\begin{equation}
\label{refined}
\sigma_{2j}=\sigma_{2j-1} + {\mathcal{O}}(j^{-\infty})=\frac{2\pi}{\ell(M)}j+{\mathcal{O}}(j^{-\infty}) .
\end{equation}
The idea of the proof of Theorem \ref{main:GPPS} is as follows. For each boundary component $M_i$, $i=1,\dots, k$, we cut off a ``collar'' neighbourhood of the boundary and smoothly glue a cap onto it. In this way, one obtains $k$ simply--connected surfaces, whose boundaries are precisely $M_1, \dots, M_k$, and the Riemannian metric in the neighbourhood of each $M_i$, $i=1,\dots k$, coincides with the metric on $\Omega$. Denote by $\Omega^*$ the union of these simply--connected surfaces.
Using an explicit formula for the full symbol of the
Dirichlet-to-Neumann operator \cite{LeeUhlmann} we notice that the
Dirichlet-to-Neumann operators associated with $\Omega$ and $\Omega^*$
differ by a smoothing operator, that is, by a pseudodifferential operator with a smooth integral kernel; such operators are bounded as maps between any two Sobolev spaces $H^s(M)$ and $H^t(M)$, $s,t \in \mathbb{R}$.
Applying pseudodifferential techniques, we deduce that the
corresponding Steklov eigenvalues $\sigma_j(\Omega)$ and
$\sigma_j(\Omega^*)$ differ by ${\mathcal{O}}(j^{-\infty})$. Note that a similar
idea was used in \cite{HislopLutzer}. Now, in order to study the
asymptotics of the Steklov spectrum of $\Omega^*$, we map each of its
connected components to a disk by a conformal transformation and apply
the approach of Rozenblyum-Guillemin-Melrose which is also based on
pseudodifferential calculus.
\subsection{Spectral invariants}
\label{specinv}
The following result is an immediate corollary of Weyl's law~(\ref{Weylaw}).
\begin{corollary}
\label{cor1}
The Steklov spectrum determines the dimension of the manifold and
the volume of its boundary.
\end{corollary}
More refined information can be extracted from the Steklov spectrum of surfaces.
\begin{theorem}\label{xx}
The Steklov spectrum determines the number $k$ and the
lengths $\ell_1\geq \ell_2\geq\cdots\geq\ell_k$ of
the boundary components of a smooth compact Riemannian
surface. Moreover, if $\{\sigma_j\}$ is the monotone increasing
sequence of Steklov eigenvalues, then:
$$\ell_1=\frac{2\pi}{\limsup_{j\rightarrow\infty}(\sigma_{j+1}-\sigma_j)}.$$
\end{theorem}
This result is proved in~\cite{GPPS} by a combination of
Theorem~\ref{main:GPPS} and certain number-theoretic arguments
involving the Dirichlet theorem on simultaneous approximation of
irrational numbers.
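As a simple illustration of Theorem~\ref{xx}, recall that the Steklov spectrum of the unit disk is $0,1,1,2,2,\dots$, so the gaps $\sigma_{j+1}-\sigma_j$ alternate between $0$ and $1$. Hence $\limsup_{j\to\infty}(\sigma_{j+1}-\sigma_j)=1$, and the formula in Theorem~\ref{xx} recovers $\ell_1=2\pi$, the length of the boundary circle.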
As was shown in~\cite{GPPS}, a direct generalization of
Theorem \ref{xx} to higher dimensions is false. Indeed, consider
four flat rectangular tori: $T_{1,1}=\mathbb{R}^2/\mathbb{Z}^2$,
$T_{2,1} =\mathbb{R}/2\mathbb{Z}\times\mathbb{R}/\mathbb{Z}$,
$T_{2,2}~=~\mathbb{R}^2/(2\mathbb{Z})^2$ and
$T_{\sqrt{2},\sqrt{2}}=\mathbb{R}^2/(\sqrt{2}\mathbb{Z})^2$. It was shown in~\cite{DoyleRossetti,Parzanchevski}
that the disjoint union $\mathcal{T}=T_{1,1}\sqcup T_{1,1}\sqcup T_{2,2}$ is Laplace--Beltrami isospectral to the disjoint union $\mathcal{T}'=T_{2,1}\sqcup T_{2,1}\sqcup
T_{\sqrt{2},\sqrt{2}}$. It follows from Example~\ref{cyl} that for any
$L>0$, the two disjoint unions of cylinders $\Omega_1=[0,L]\times
\mathcal{T}$ and $\Omega_2=[0,L] \times \mathcal{T}'$ are Steklov
isospectral. At the same time,
$\Omega_1$ has four boundary components of area $1$ and two boundary components of area $4$, while $\Omega_2$ has six
boundary components of area $2$. Therefore, the collection of
areas of boundary components cannot be determined from the Steklov
spectrum.
Still, the following question can be asked:
\begin{open}
Is the number of boundary components of a manifold of dimension $\ge 3$ a Steklov spectral invariant?
\end{open}
Further spectral invariants can be deduced using the heat trace of the
Dirichlet-to-Neumann operator ${\mathcal D}$.
By the results of~\cite{DuistermaatGuillemin,Agranovich2,GrubbSeeley},
the heat trace admits an asymptotic expansion
\begin{equation}
\label{heatexp}
\sum_{i=0}^{\infty}e^{-t\sigma_i}=\operatorname{Tr} e^{-t{\mathcal D}}=\int_M e^{-t{\mathcal D}}(x,x)\ dx\sim\sum_{k=0}^{\infty}a_kt^{-n+1+k}+\sum_{l=1}^{\infty}b_lt^l\log t.
\end{equation}
The coefficients $a_k$ and $b_l$ are called the \emph{Steklov heat
invariants}, and it follows from (\ref{heatexp}) that they are
determined by the Steklov spectrum. The invariants
$a_0,\ldots,a_{n-1}$, as well as $b_l$ for all $l$, are local, in the
sense that they are integrals over $M$ of corresponding functions
$a_k(x)$ and $b_l(x)$ which may be computed directly from the symbol
of the Dirichlet-to-Neumann operator ${\mathcal D}$. The coefficients $a_k$
are not local for $k\geq n$~\cite{Gilkey,GilkeyGrubb} and hence are
significantly more difficult to study.
In~\cite{PoltSher}, explicit expressions for the Steklov heat invariants
$a_0$, $a_1$ and $a_2$ for manifolds of dimensions three or
higher were given in terms of the scalar curvatures of $M$ and
$\Omega$, as well as the mean curvature and the second order mean
curvature of $M$ (for further results in this direction, see~\cite{liu}).
For example, the formula for $a_1$ yields the
following corollary:
\begin{corollary}
\label{cor2}
Let $\dim \Omega \ge 3$. Then the integral of the mean curvature
over $\partial \Omega=M$ (i.e. the {\it total mean curvature} of
$M$) is an invariant of the Steklov spectrum.
\end{corollary}
The Steklov heat invariants will be revisited in section~\ref{isosp}.
\section{Spectral asymptotics on polygons}
\label{sing}
The spectral asymptotics given by formula \eqref{Weylaw} and Theorem \ref{main:GPPS} are obtained using pseudodifferential techniques which are valid
only for manifolds with smooth boundaries. In the presence of singularities, the study of the asymptotic distribution of Steklov eigenvalues is more difficult,
and the known remainder estimates are significantly weaker
(see~\cite{Agranovich} and references therein). Moreover, Theorem
\ref{main:GPPS} fails even for planar domains with corners. This can
be seen from the explicit computation of the spectrum for the
simplest nonsmooth domain: the square.
\begin{table}
\begin{tabular}{llll}
Eigenspace basis&Conditions on $\alpha$&
Eigenvalues&Asymptotic behaviour\\[.2cm]
$\cos(\alpha x)\cosh(\alpha y)$
&&\\
$\cos(\alpha y)\cosh(\alpha x)$
&$\tan(\alpha )=-\tanh(\alpha)$
&$\alpha\tanh(\alpha)$&$\frac{3\pi}{4}+\pi j+{\mathcal{O}}(j^{-\infty})$\\[.2cm]
\hline
$\sin(\alpha x)\cosh(\alpha y)$
&\\
$\sin(\alpha y)\cosh(\alpha x)$
&$\tan(\alpha )=\coth(\alpha)$
& $\alpha\tanh(\alpha)$&$\frac{\pi}{4}+\pi j+{\mathcal{O}}(j^{-\infty})$\\[.2cm]
\hline
$\cos(\alpha x)\sinh(\alpha y)$
&\\
$\cos(\alpha y)\sinh(\alpha x)$
&$\tan(\alpha )=-\coth(\alpha)$&$\alpha\coth(\alpha)$&$\frac{3\pi}{4}+\pi j+{\mathcal{O}}(j^{-\infty})$\\[.2cm]
\hline
$\sin(\alpha x)\sinh(\alpha y)$
&\\
$\sin(\alpha y)\sinh(\alpha x)$
&$\tan(\alpha)=\tanh(\alpha)$&$\alpha\coth(\alpha)$&$\frac{\pi}{4}+\pi j+{\mathcal{O}}(j^{-\infty})$\\[.2cm]
\hline
$xy$&&1
\end{tabular}
\bigskip
\caption{Eigenfunctions obtained by separation of variables on the
square $(-1,1)\times (-1,1)$.}
\label{table:eigenvalues}\label{SquareSepVar}
\end{table}
\subsection{Spectral asymptotics on the square}
\label{section:spectrumsquare}
The Steklov spectrum of the square $\Omega=(-1,1)\times (-1,1)$ is
described as follows.
For each positive root $\alpha$ of the following equations:
\begin{align*}
\tan(\alpha )+\tanh(\alpha )=0,\quad
\tan(\alpha )-\coth(\alpha )=0,\\
\tan(\alpha )+\coth(\alpha )=0,\quad
\tan(\alpha)-\tanh(\alpha)=0
\end{align*}
the corresponding number $\alpha\tanh(\alpha)$ or $\alpha\coth(\alpha)$ indicated in
Table~\ref{table:eigenvalues} is a Steklov eigenvalue of multiplicity two (see also Figure~\ref{figure:square}).
The function $f(x,y)=xy$ is also an eigenfunction, with a simple
eigenvalue $\sigma_3=1$. Starting from $\sigma_4$, the normalized eigenvalues
$\sigma L$, where $L$ denotes the perimeter of the square, are clustered in groups of $4$ around the odd multiples of $2\pi$:
\begin{gather*}
\sigma_{4j+l}L=(2j+1)2\pi+{\mathcal{O}}(j^{-\infty}),\qquad\mbox{ for }l=0,1,2,3.
\end{gather*}
This is compatible with Weyl's law since for $k=4j+l$ it follows that
$$\sigma_kL=\left(\frac{k-l}{2}+1\right)2\pi+{\mathcal{O}}(j^{-\infty})=\pi k+{\mathcal{O}}(1).$$
Nevertheless, the refined asymptotics~(\ref{refined}) does not
hold.
Let us discuss the spectrum of a square in more detail.
Separation of variables quickly leads to the 8 families of Steklov eigenfunctions
presented in Table~\ref{SquareSepVar} plus an ``exceptional'' eigenfunction $f(x,y)=xy$.
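To illustrate how the conditions in Table~\ref{SquareSepVar} arise, consider the family $u=\cos(\alpha x)\cosh(\alpha y)$, which is clearly harmonic. On the side $\{y=1\}$,
$$
\partial_\nu u=\alpha\cos(\alpha x)\sinh(\alpha)=\alpha\tanh(\alpha)\,u,
$$
while on the side $\{x=1\}$,
$$
\partial_\nu u=-\alpha\sin(\alpha)\cosh(\alpha y)=-\alpha\tan(\alpha)\,u.
$$
Since $u$ is even in both variables, the Steklov condition holds on all four sides precisely when $\alpha\tanh(\alpha)=-\alpha\tan(\alpha)$, that is, when $\tan(\alpha)=-\tanh(\alpha)$, and the eigenvalue is then $\alpha\tanh(\alpha)$. The remaining families in the table are treated in the same manner.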
\begin{figure}
\centering
\includegraphics[width=10cm]{Figures/tantan.png}
\caption{The Steklov eigenvalues of a square. Each intersection
corresponds to a double eigenvalue.}
\label{figure:square}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=7cm]{Figures/square.png}
\caption{Decomposition of the Steklov problem on a square into four mixed
problems on a triangle.}
\label{figure:squaresymmetries}
\end{figure}
One now needs to prove the completeness of this
system of orthogonal functions in $L^2(\partial\Omega)$. Using the
diagonal symmetries of the square (see Figure~\ref{figure:squaresymmetries}), we obtain symmetrized functions spanning the
same eigenspaces. Splitting the eigenfunctions into odd and even with
respect to the diagonal symmetries, we represent the spectrum as the
union of the spectra of four mixed Steklov problems on a right
isosceles triangle. In each of these problems the Steklov condition is
imposed on the hypotenuse, and on each of the sides the condition is
either Dirichlet or Neumann, depending on whether the corresponding
eigenfunctions are odd or even when reflected across this side. In
order to prove the completeness of this system of Steklov
eigenfunctions, it is sufficient to show that the corresponding
symmetrized eigenfunctions form a complete set of solutions for each
of the four mixed problems.
Let us show this property for the problem corresponding to even symmetries
across the diagonal. In this way, one gets a sloshing (mixed
Steklov--Neumann) problem on a right isosceles triangle. Solutions of
this problem have been known since the 1840s (see \cite{Lamb}). The
restrictions of the solutions to the hypotenuse (i.e. to the side of
the original square) turn out to be the eigenfunctions of the
\emph{free beam equation}:
\begin{align*}
\frac{d^4}{dx^4}f=\omega^4f\qquad&\mbox{ on }(-1,1)\\
\frac{d^3}{dx^3}f=\frac{d^2}{dx^2}f=0\quad&\mbox{ at }x=-1,1.
\end{align*}
This is a fourth order self-adjoint Sturm--Liouville equation. It is
known that its solutions form a complete set of functions on the interval
$(-1,1)$.
The remaining three mixed problems are dealt with similarly: one
reduces the problem to the study of solutions of the vibrating beam
equation with either the Dirichlet condition on both ends, or
the Dirichlet condition on one end and the Neumann on the other.
\subsection{Numerical experiments} Understanding fine spectral asy\-mptotics for the Steklov problem on
arbitrary polygonal domains is a difficult question. We have used
software from the FEniCS Project (see http://fenicsproject.org/ and \cite{fenicsbook})
to investigate the behaviour of the
Steklov eigenvalues for some specific examples. This was done using an
implementation due to B. Siudeja~\cite{bartekweb} which was already
applied in~\cite{KKKNPPS}. For the sake of completeness, we discuss two
of these experiments here.
\begin{example}(Equilateral triangle)
We have computed the first 60 normalized eigenvalues $\sigma_jL$ of
an equilateral triangle. The results lead to a conjecture that
$$\sigma_{2j}L=\sigma_{2j+1}L+{\mathcal{O}}(1)=2\pi j+{\mathcal{O}}(1).$$
\end{example}
\begin{example}(Right isosceles triangle)
For the right isosceles triangle with sides of lengths
$1,1,\sqrt{2}$, we have also computed the first 60 normalized
eigenvalues. The numerics indicate that the spectrum is
composed of two sequences of eigenvalues, one of which behaves as a
sequence of double eigenvalues
$$\pi j+{\mathcal{O}}(1)$$ and the other one as a sequence of simple eigenvalues
$$\frac{\pi}{\sqrt{2}}(j+1/2)+{\mathcal{O}}(1).$$
\end{example}
In the context of the sloshing problem, some related conjectures have
been proposed in~\cite{foxkut}.
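Let us briefly indicate, for the reader's convenience, how such computations can be organized; the following is only a schematic sketch written for the legacy FEniCS (DOLFIN/SLEPc) interface and is not the code of~\cite{bartekweb}. The starting point is the weak formulation of the Steklov problem: find $u\in H^1(\Omega)$ and $\sigma$ with $\int_\Omega\nabla u\cdot\nabla v\,dA=\sigma\int_{\partial\Omega}uv\,dS$ for all $v\in H^1(\Omega)$; after a finite element discretization this becomes a generalized matrix eigenvalue problem whose right-hand side involves a boundary mass matrix (which is singular, a point glossed over in the sketch below).
\begin{verbatim}
from dolfin import *

# Unit square and piecewise linear Lagrange elements
mesh = UnitSquareMesh(64, 64)
V = FunctionSpace(mesh, "CG", 1)
u, v = TrialFunction(V), TestFunction(V)

# Stiffness (Dirichlet energy) and boundary mass matrices
A, B = PETScMatrix(), PETScMatrix()
assemble(inner(grad(u), grad(v)) * dx, tensor=A)
assemble(u * v * ds, tensor=B)

# Generalized eigenproblem A x = sigma B x; shift-and-invert
# targets the lowest Steklov eigenvalues (starting near 0)
solver = SLEPcEigenSolver(A, B)
solver.parameters["spectrum"] = "target magnitude"
solver.parameters["spectral_transform"] = "shift-and-invert"
solver.parameters["spectral_shift"] = 1e-3
solver.solve(10)

for i in range(solver.get_number_converged()):
    r, c, rx, cx = solver.get_eigenpair(i)
    print(r)  # approximate Steklov eigenvalues of the unit square
\end{verbatim}
For other polygons one only needs to replace the mesh; the normalized quantities $\sigma_jL$ discussed above are then obtained by multiplying the computed eigenvalues by the perimeter of the domain.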
\section{Geometric inequalities for Steklov eigenvalues}\label{section:geometricinequalities}
\subsection{Preliminaries}
Let us start with the following simple observation: if a Euclidean domain $\Omega\subset\mathbb{R}^n$ is scaled by a factor $c>0$, then
\begin{gather}\label{sigmakscaling}
\sigma_k(c\,\Omega)=c^{-1}\sigma_k(\Omega).
\end{gather}
Because of this scaling property, maximizing $\sigma_k(\Omega)$ among domains with
fixed perimeter is equivalent to maximizing the normalized eigenvalues
$\sigma_k(\Omega)|\partial\Omega|^{1/{(n-1)}}$ on arbitrary domains. Here and further on we use the notation $|\cdot|$ to denote the volume of a manifold.
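Indeed, if $u$ is a trial function on $\Omega$ and $u_c(x):=u(x/c)$ denotes the corresponding function on $c\,\Omega$, a change of variables gives
$$
\frac{\int_{c\Omega}|\nabla u_c|^2\,dA}{\int_{\partial(c\Omega)}u_c^2\,dS}
=\frac{c^{n-2}\int_{\Omega}|\nabla u|^2\,dA}{c^{n-1}\int_{\partial\Omega}u^2\,dS},
$$
so each Rayleigh quotient appearing in the variational characterization \eqref{varcharsigmak} below is multiplied by $c^{-1}$, and \eqref{sigmakscaling} follows.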
All the results concerning geometric bounds are proved using a variational
characterization of the eigenvalues. Let
$\mathcal{E}(k)$ be the set of all $k$-dimensional subspaces of the
Sobolev space $H^1(\Omega)$ which are orthogonal to constants on the
boundary $\partial\Omega$. Then
\begin{gather}\label{varcharsigmak}
\sigma_k(\Omega,g)=\min_{E\in\mathcal{E}(k)}\sup_{0\neq u\in E}R(u),
\end{gather}
where the \emph{Rayleigh quotient} is
$$R(u)=\frac{\int_{\Omega}|\nabla u|^2\,dA}{\int_{M} u^2\,dS}.$$
In particular, the first nonzero eigenvalue is given by
\begin{gather*}
\sigma_1(\Omega)=\min\Bigl\{R(u)\,:\,u\in H^1(\Omega),\,\int_{\partial\Omega}u\,dS=0\Bigr\}.
\end{gather*}
These variational characterizations are similar
to those of Neumann eigenvalues on $\Omega$, where the integral in the
denominator of $R(u)$ would be on the domain $\Omega$ rather than on
its boundary.
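As a simple example, on the unit disk $\mathbb{D}$ the coordinate function $u(x,y)=x$ has zero mean on the boundary circle and
$$
R(u)=\frac{\int_{\mathbb{D}}|\nabla u|^2\,dA}{\int_{\partial\mathbb{D}}u^2\,dS}=\frac{\pi}{\int_0^{2\pi}\cos^2\theta\,d\theta}=1,
$$
so that $\sigma_1(\mathbb{D})\le 1$; in fact, $u$ is a first eigenfunction and $\sigma_1(\mathbb{D})=1$, in agreement with the computation in subsection~\ref{Section:Examples}.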
One last observation is in order before we discuss isoperimetric
bounds. Let $\Omega_\epsilon:=(-1,1)\times(-\epsilon,\epsilon)$ be a
thin rectangle ($0<\epsilon\ll 1$).
It is easy to see using~(\ref{varcharsigmak}) that
\begin{gather}
\label{collapse}
\lim_{\epsilon\rightarrow 0}\sigma_k(\Omega_\epsilon)=0,\qquad\mbox{ for each }k\in\mathbb{N}.
\end{gather}
In fact, it suffices for a family $\Omega_\epsilon$ of domains to have
a \emph{thin collapsing passage} (see Figure~\ref{figure:passage}) to
guarantee that $\sigma_k(\Omega_\epsilon)$ becomes arbitrarily small as
$\epsilon\searrow 0$ (see \cite[Section 2.2]{gp}).
\begin{figure}
\centering
\includegraphics[width=6cm]{Figures/chanel.jpg}
\caption{A domain with a thin passage.}
\label{figure:passage}
\end{figure}
This follows from the variational characterization: the idea is to construct a sequence of $k$ pairwise orthogonal test functions that oscillate
inside the thin passage and vanish outside. Then the Dirichlet energy
of such functions will be very small, while the denominator in the
Rayleigh quotient remains bounded away from zero, due to the
integration over the side of the passage. Hence, the Rayleigh quotient
will tend to zero, yielding \eqref{collapse}.
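For the thin rectangle $\Omega_\epsilon$ itself, such test functions can be written down explicitly. The functions $u_j(x,y)=\sin(j\pi x)$, $1\le j\le k$, have zero mean on $\partial\Omega_\epsilon$, vanish on the two short sides, and are pairwise orthogonal both on the boundary and with respect to the Dirichlet form, while
$$
R(u_j)=\frac{2\epsilon\,(j\pi)^2\int_{-1}^{1}\cos^2(j\pi x)\,dx}{2\int_{-1}^{1}\sin^2(j\pi x)\,dx}=(j\pi)^2\epsilon.
$$
Hence $\sigma_k(\Omega_\epsilon)\le(k\pi)^2\epsilon$, which proves \eqref{collapse}.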
When considering an isoperimetric constraint, it is therefore more
interesting to maximize eigenvalues.
\subsection{Isoperimetric upper bounds for Steklov eigenvalues on
surfaces}
\label{isoperim}
On a compact surface with boundary, the following theorem gives a
general upper bound in terms of the genus and the number of boundary components.
\begin{theorem}[\cite{gp3}]\label{thmHPSgenus}
Let $\Omega$ be a smooth orientable compact surface with boundary
$M=\partial\Omega$ of length $L$. Let $\gamma$ be the genus of $\Omega$ and let
$l$ be the number of its boundary components. Then the following holds:
\begin{gather}\label{InequalityGPHPS}
\sigma_p\sigma_q\,L^2\leq
\begin{cases}
\pi^2(\gamma+l)^2 (p+q)^2 &\mbox{ if }p+q\mbox{ is even},\\
\pi^2(\gamma+l)^2 (p+q-1)^2 &\mbox{ if }p+q\mbox{ is odd},
\end{cases}
\end{gather}
for any pair of integers $p,q \ge 1$.
In particular by setting $p=q=k$ one obtains the following bound:
\begin{gather}\label{InequalityHPSgenusSimple}
\sigma_k(\Omega)L(M)\leq 2\pi(\gamma+l)k.
\end{gather}
\end{theorem}
The proof of Theorem~\ref{thmHPSgenus} is based on the existence of a
proper holomorphic covering map $\phi:\Omega\rightarrow\mathbb{D}$ of
degree $\gamma+l$ (the Ahlfors map), which was proved in~\cite{gabard}, and on an ingenious complex analytic argument due to
J. Hersch, L. Payne and M. Schiffer~\cite{hps}, who used it to prove
inequality~(\ref{InequalityGPHPS}) for planar domains. In this
particular case, inequality~(\ref{InequalityHPSgenusSimple}) is
known to be sharp. Indeed, it was proved in~\cite{gp} that equality is
attained in the limit by a family $\Omega_\epsilon$ of domains
degenerating to a disjoint union of $k$ identical disks (see Figure~\ref{figure:pullapart}).
\begin{figure}
\centering
\includegraphics[width=6cm]{Figures/pullapart.jpg}
\caption{A family of domains $\Omega_\epsilon$ maximizing $\sigma_2L$
in the limit as $\epsilon \to 0$.}
\label{figure:pullapart}
\end{figure}
For $k=1$, inequality (\ref{InequalityHPSgenusSimple}) was proved in~\cite{fraschoen2}.
The earliest isoperimetric inequality for Steklov eigenvalues is that of
R. Weinstock~\cite{wein}. For simply--connected planar domains ($\gamma=0, l=1$), he proved that
\begin{gather}\label{Inequality:Weinstock}
\sigma_1(\Omega)L(\partial\Omega)\leq2\pi
\end{gather}
with equality if and only if $\Omega$ is a disk.
Weinstock used an
argument similar to that of G. Szeg\H{o}~\cite{szego}, who obtained an isoperimetric inequality for the first nonzero Neumann
eigenvalue of a simply--connected domain $\Omega$ normalized by the
measure $|\Omega|$ rather than its perimeter. In fact, Weinstock's
proof is the simplest application of the center of mass
renormalization (also known as Hersch's lemma, see ~\cite{Hersch,
SchoenYau, GNP, gp2}).
While Szeg\H{o}'s inequality can be generalized to an arbitrary Euclidean domain
(see \cite{weinberger}), this is not true for Weinstock's
inequality. In particular, as follows from the example below,
Weinstock's inequality fails for non-simply--connected planar domains.
\begin{example}
The Steklov eigenvalues and eigenfunctions of an annulus have been computed in
\cite{dittmar1}.
On the annulus $A_\epsilon=\mathbb{D}\setminus B(0,\epsilon)$, there
is a radially symmetric Steklov
eigenfunction
$$f(r)=-\left(\frac{1+\epsilon}{\epsilon\log(\epsilon)}\right)\log(r)+1,$$
with the corresponding eigenvalue
$\sigma=\frac{1+\epsilon}{\epsilon\log(1/\epsilon)}$.
All other
eigenfunctions are of the form
$$f_k(r,\theta)=(A_{k}r^k+A_{-k}r^{-k})T(k\theta)\qquad (\mbox{with }k\in\mathbb{N})$$
where $T(k\theta)=\cos(k\theta)$ or $T(k\theta)=\sin(k\theta)$. In order for
$f_k(r,\theta)$ to be a Steklov eigenfunction it is required that
$$\partial_r f_k(1,\theta)=\sigma
f_k(1,\theta)\qquad\mbox{ and}\qquad -\partial_r f_k(\epsilon,\theta)=\sigma
f_k(\epsilon,\theta),$$ which leads to the following system:
$$
\left(
\begin{array}{cc}
\sigma\epsilon^k+k\epsilon^{k-1}&\sigma\epsilon^{-k}-k\epsilon^{-k-1}\\
\sigma-k&\sigma+k
\end{array}
\right)
\left(
\begin{array}{c}
A_k\\
A_{-k}
\end{array}
\right)
=
\left(
\begin{array}{c}
0\\
0
\end{array}
\right).
$$
This system has a non-zero solution if and only if its determinant vanishes.
After some simplifications, the Steklov eigenvalues of the annulus $A_\epsilon=\mathbb{D}\setminus B(0,\epsilon)$
are seen to be the roots of the quadratic polynomials
$$p_k(\sigma)=\sigma^2
-\sigma k\left(\frac{\epsilon+1}{\epsilon}\right)\left(\frac{1+\epsilon^{2k}}{1-\epsilon^{2k}}\right)
+\frac{1}{\epsilon}k^2\qquad (k\in\mathbb{N}).$$
Each of these eigenvalues is double, corresponding to the choice of a
$\cos$ or $\sin$ function for the angular part $T(k\theta)$ of the corresponding eigenfunction.
For $\epsilon>0$ small enough, this leads in particular to
\begin{gather}\label{sigmaoneannulus}
\sigma_1(A_\epsilon)=\frac{1}{2\epsilon}\frac{1+\epsilon^{2}}{1-\epsilon}\left(1-\sqrt{1-4\epsilon\left(\frac{1-\epsilon}{1+\epsilon^{2}}\right)^2}\right).
\end{gather}
\end{example}
It follows from formula~(\ref{sigmaoneannulus}) that for the
annulus $A_\epsilon=B(0,1)\setminus B(0,\epsilon)$ one has
\begin{gather}\label{AsymptoticHole}
\sigma_1(A_\epsilon)L(\partial
A_\epsilon)=2\pi\sigma_1(\mathbb{D})+2\pi\epsilon+o(\epsilon)\quad\mbox{as
}\epsilon\searrow 0.
\end{gather}
Therefore, $\sigma_1(A_\epsilon)L(\partial
A_\epsilon)>2\pi\sigma_1(\mathbb{D})$ for $\epsilon>0$ small
enough (see Figure~\ref{figure:hole}), and hence Weinstock's inequality \eqref{Inequality:Weinstock} fails.
\begin{figure}
\centering
\includegraphics[width=6cm]{Figures/sigma1.jpg}
\caption{The normalized eigenvalue $\sigma_1(A_\epsilon)L(\partial A_\epsilon)$}
\label{figure:hole}
\end{figure}
\begin{remark} One can also compute
the Steklov eigenvalues of the spherical shell
$\Omega_\epsilon:=B(0,1)\setminus
B(0,\epsilon)\subset\mathbb{R}^{n}$ for $n\geq 3$. The eigenvalues
are the roots of certain quadratic polynomials which can be computed explicitly.
Here again, it is true that for $\epsilon>0$ small enough,
$\sigma_1(\Omega_\epsilon)|\partial\Omega_\epsilon|^{\frac{1}{n-1}}>\sigma_1(\mathbb{B})|\partial\mathbb{B}|^{\frac{1}{n-1}}$. This
computation was part of an undergraduate research project of E. Martel at
Universit\'e Laval.
\end{remark}
Given that Weinstock's inequality is no longer true for non-simply--connected planar domains, one may ask whether the supremum
of $\sigma_1L$ among all planar domains of fixed perimeter is
finite. This is indeed the case, as follows from the next theorem applied with $k=1$ and $\gamma=0$.
\begin{theorem}[\cite{ceg2}]
\label{unibound}
There exists a universal constant $C>0$ such that
\begin{gather}\label{InequalityNoBoundary}
\sigma_k(\Omega)L(\partial\Omega)\leq C(\gamma+1)k.
\end{gather}
\end{theorem}
Theorem \ref{unibound} leads to the following question:
\begin{open}
\label{ddd}
What is the maximal value of $\sigma_1(\Omega)$ among Euclidean
domains $\Omega\subset\mathbb{R}^n$ of fixed perimeter? On which domain (or in the limit of which sequence of domains) is it realized?
\end{open}
Some related results will be discussed in subsection
\ref{Section:FS}. In particular, in view of Theorem
\ref{thm:fraschoengenus0}~\cite{fraschoen2}, it is tempting to suggest that
the maximum is realized in the limit by a sequence of domains with the
number of boundary components tending to infinity.
The proof of Theorem \ref{unibound} is based on N. Korevaar's metric geometry
approach~\cite{kvr} as described in~\cite{gyn}. For $k=1$,
inequality~(\ref{InequalityNoBoundary}) holds with
$C=8\pi$ (see~\cite{kok}). For $k=1$ and $\gamma=0$, it holds with
$C=4\pi$~\cite{fraschoen2} (see Theorem~\ref{thm:fraschoengenus0} below).
It is also possible to ``decouple'' the genus $\gamma$ and the index
$k$. The following theorem was proved by
A. Hassannezhad~\cite{hassannezhad}, using a generalization of the
Korevaar method in combination with concentration results
from~\cite{ColbMaert}.
\begin{theorem}
There exist constants $A,B>0$ such that
$$\sigma_k(\Omega)L(\partial\Omega)\leq A\gamma+Bk.$$
\end{theorem}
At this point, we have considered maximization of the Steklov
eigenvalues under the constraint of fixed perimeter.
This is natural, since they are the eigenvalues of
the Dirichlet-to-Neumann operator, which acts on the
boundary. Nevertheless, it is also possible to normalize the
eigenvalues by fixing the measure of $\Omega$.
The following theorem
was proved by F. Brock~\cite{brock}.
\begin{theorem}\label{thm:brock}
Let $\Omega\subset\mathbb{R}^{n}$ be a bounded Lipschitz
domain. Then
\begin{gather}\label{Inequality:Brock}
\sigma_1(\Omega)|\Omega|^{1/n}\leq \omega_n^{1/n},
\end{gather}
with equality if and only if $\Omega$ is a ball. Here $\omega_n$ is
the volume of the unit ball $\mathbb{B}^n\subset\mathbb{R}^n$.
\end{theorem}
Observe that no connectedness assumption is required this time.
The proof of Theorem~\ref{thm:brock} is based on a weighted
isoperimetric inequality for moments of inertia of the boundary
$\partial\Omega$.
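As a consistency check, for the ball $B(0,R)\subset\mathbb{R}^n$ one has $\sigma_1=1/R$ by Example~\ref{balls}, while $|B(0,R)|^{1/n}=R\,\omega_n^{1/n}$, so equality indeed holds in \eqref{Inequality:Brock}.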
A quantitative improvement of Brock's theorem was obtained in~\cite{brasdephilruffini}
in terms of the \emph{Fraenkel asymmetry} of a
bounded domain $\Omega\subset\mathbb{R}^n$:
$$\mathcal{A}(\Omega):=\inf
\left\{
\frac{\|1_\Omega-1_B\|_{L^1}}{|\Omega|}\,:\, B\mbox{ is a ball with
} |B|=|\Omega|
\right\}.$$
\begin{theorem}\label{thm:stabilitySteklov}
Let $\Omega\subset\mathbb{R}^{n}$ be a bounded Lipschitz
domain. Then
\begin{gather}\label{Stability}
\sigma_1(\Omega)|\Omega|^{1/n}\leq \omega_n^{1/n}(1-\alpha_n\mathcal{A}(\Omega)^2),
\end{gather}
where $\alpha_n>0$ depends only on the dimension.
\end{theorem}
The proof of Theorem~\ref{thm:stabilitySteklov} is based on a
quantitative refinement of the isoperimetric inequality. It would be interesting to prove a
similar stability result for Weinstock's inequality:
\begin{open}
Let $\Omega$ be a planar simply--connected domain such that
the difference $2\pi-\sigma_1(\Omega)L(\partial\Omega)$ is small. Show that $\Omega$ must be close
to a disk (in the sense of Fraenkel asymmetry or some other measure of proximity).
\end{open}
\subsection{Existence of maximizers and free boundary minimal
surfaces}\label{Section:FS}
A \emph{free boundary submanifold} is a proper minimal submanifold of some unit ball
$\mathbb{B}^n$ with its boun\-dary meeting the sphere $\mathbb{S}^{n-1}$
orthogonally. These are characterized by their Steklov eigenfunctions.
\begin{lemma}[\cite{fraschoen}]
\label{lemmafs}
A properly immersed submanifold $\Omega$ of the ball $\mathbb{B}^n$
is a free boun\-dary submanifold if and only if the restrictions to
$\Omega$ of the coordinate functions $x_1,\cdots,x_n$ satisfy
\begin{equation*}
\begin{cases}
\Delta x_i=0& \mbox{ in } \Omega,\\
\partial_\nu x_i= \,x_i& \mbox{ on }\partial\Omega.
\end{cases}
\end{equation*}
\end{lemma}
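The simplest example is the equatorial disk $\mathbb{D}\times\{0\}\subset\mathbb{B}^3$: the restricted coordinate functions are $x_1=r\cos\theta$, $x_2=r\sin\theta$ and $x_3=0$, which are harmonic on the disk and satisfy $\partial_\nu x_i=\partial_r x_i=x_i$ on the boundary circle; in other words, the coordinate functions are Steklov eigenfunctions with eigenvalue $1$, and the equatorial disk is a free boundary submanifold of $\mathbb{B}^3$.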
This link was exploited by A. Fraser and
R. Schoen who developed the theory of extremal metrics for Steklov
eigenvalues. See~\cite{fraschoen,fraschoen2} and
especially~\cite{fraschoen3} where an overview is presented.
Let $\sigma^\star(\gamma,l)$ be the supremum of $\sigma_1L$ taken over
all Riemannian metrics on a compact surface of genus $\gamma$ with $l$
boundary components. In~\cite{fraschoen2}, a geometric
characterization of maximizers was proved.
\begin{proposition}\label{prop:fraschoencritmetric}
Let $\Omega$ be a compact surface of genus $\gamma$ with $l$
boundary components and let $g_0$ be a
smooth metric on $\Omega$ such that
$$\sigma_1(\Omega,g_0)L(\partial\Omega,g_0)=\sigma^\star(\gamma,l).$$
Then there exist eigenfunctions $u_1,\cdots,u_n$ corresponding to
$\sigma_1(\Omega)$ such that
the map
$$u=(u_1,\cdots,u_n):\Omega\rightarrow\mathbb{B}^n$$
is a conformal minimal immersion such that $u(\Omega)\subset\mathbb{B}^n$ is
a free boundary solution, and
is an isometry on $\partial\Omega$ up to a rescaling by a constant factor.
\end{proposition}
This result was extended to higher eigenvalues $\sigma_k$
in~\cite{fraschoen3}. This characterization is similar to that of
extremizers of the eigenvalues of the Laplace operator on surfaces
(see~\cite{nad,MR1781616,MR2378458}).
For surfaces of genus zero, Fraser and Schoen could also obtain an
existence and regularity result
for maximizers, which is the main result of their paper~\cite{fraschoen2}.
\begin{theorem}\label{thm:fraschoenexistence}
For each integer $l\geq 1$, there exists a smooth metric $g$ on the surface of
genus zero with $l$ boundary components such that
$$\sigma_1(\Omega,g)L_g(\partial\Omega)=\sigma^\star(0,l).$$
\end{theorem}
Similar existence results have been proved for the first nonzero
eigenvalue of the Laplace--Beltrami operator in a fixed conformal
class of a closed surface of
arbitrary genus, in which case conical singularities have to be allowed
(see~\cite{JLNNP, petrides}).
Proposition~\ref{prop:fraschoencritmetric} and
Theorem~\ref{thm:fraschoenexistence} can be used to study optimal
upper bounds for $\sigma_1$ on surfaces of genus zero. Observe that
inequality~(\ref{InequalityHPSgenusSimple}) can be restated as
$$\sigma^\star(\gamma,l)\leq 2\pi(\gamma+l).$$
This bound is not sharp in general. For
instance, Fraser and Schoen~\cite{fraschoen2} proved that on annuli
($\gamma=0, l=2$), the maximal value of
$\sigma_1(\Omega)L(\partial\Omega)$ is attained by the
\emph{critical catenoid} ($\sigma_1L\sim 4\pi/1.2$), which is
the minimal surface $\Omega\subset B^3$ parametrized by
$$\phi(t,\theta)=c(\cosh(t)\cos(\theta),\cosh(t)\sin(\theta),t),$$
where the scaling factor $c>0$ is chosen so that the boundary of the
surface $\Omega$ meets the sphere $\mathbb{S}^2$ orthogonally.
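More explicitly (in the parametrization above, with $t$ ranging over an interval $[-T,T]$), the free boundary condition of Lemma~\ref{lemmafs} requires the conormal $\partial_t\phi$ to be parallel to the position vector $\phi$ at $t=\pm T$, which amounts to
$$
\frac{\sinh T}{\cosh T}=\frac1T,\qquad\text{that is,}\qquad T=\coth T,
$$
whose unique positive solution is $T\approx 1.2$; the factor $c$ is then fixed by requiring $|\phi(\pm T,\theta)|=1$.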
\begin{theorem}[\cite{fraschoen2}]\label{thm:fraschoencritcat}
The supremum of $\sigma_1(\Omega) L(\partial\Omega)$ among surfaces
of genus 0 with two boundary components is attained by the
\emph{critical catenoid}. The maximizer is unique
up to conformal changes of the metric which are constant on the
boundary.
\end{theorem}
The uniqueness statement is proved using
Proposition~\ref{prop:fraschoencritmetric} by showing that the critical
catenoid is the unique free boundary annulus in a Euclidean ball.
The maximization of $\sigma_1L$ for the M\"obius bands has also
been considered in~\cite{fraschoen2}.
For surfaces of genus zero with an arbitrary number of boundary components,
the maximizers are not known explicitly, but the asymptotic behaviour for large
number of boundary components is understood~\cite{fraschoen2}.
\begin{theorem}\label{thm:fraschoengenus0}
The sequence $\sigma^\star(0,l)$ is strictly increasing and
converges to $4\pi$. For each $l\in\mathbb{N}$ a maximizing metric
is achieved by a free boundary minimal surface $\Omega_l$ of area
less than $2\pi$. The limit of these minimal surfaces as
$l\nearrow+\infty$ is a double disk.
\end{theorem}
The results discussed above lead to the following question:
\begin{open}
Let $\Omega$ be a surface of genus $\gamma$ with $l$ boundary
components. Does there exist a smooth Riemannian metric $g_0$ such that
$$\sigma_1(\Omega,g_0)L(\partial\Omega,g_0)\geq\sigma_1(\Omega,g)L(\partial\Omega,g)$$
for each Riemannian metric $g$?
\end{open}
Free boundary minimal surfaces were used as a tool in the
study of maximizers for $\sigma_1$, but this interplay can be turned around and
used to obtain interesting geometric results.
\begin{corollary}
For each $l\geq 1$, there exists an embedded minimal surface of genus
zero in $\mathbb{B}^3$ with $l$ boundary components satisfying the
free boundary condition.
\end{corollary}
\subsection{Geometric bounds in higher dimensions}
In dimensions $n=\mbox{dim}(\Omega)\geq 3$, isoperimetric inequalities for Steklov eigenvalues are
more complicated, as they involve other geometric quantities, such as the isoperimetric ratio:
$$I(\Omega)=\frac{|M|}{|\Omega|^{\frac{n-1}{n}}}.$$
For the first nonzero eigenvalue $\sigma_1$, it is possible to obtain
upper bounds for general compact manifolds with
boundary in terms of $I(\Omega)$ and of the relative conformal volume, which is defined below.
Let $\Omega$ be a compact manifold of dimension $n$ with smooth
boundary $M$. Let $m\in\mathbb{N}$ be a positive integer. The
\emph{relative $m$-conformal volume} of $\Omega$ is
\begin{gather*}
V_{rc}(\Omega,m)=\inf_{\phi:\Omega\hookrightarrow B^m}\sup_{\gamma\in M(m)}\mbox{Vol}(\gamma\circ\phi(\Omega)),
\end{gather*}
where the infimum is over all conformal immersions
$\phi:\Omega\hookrightarrow\mathbb{B}^m$ such that
$\phi(M)\subset\partial\mathbb{B}^m$, and $M(m)$ is the group of
conformal diffeomorphisms of the ball.
This conformal invariant was introduced in ~\cite{fraschoen}. It is similar to the celebrated conformal volume of
P. Li and S.-T. Yau~\cite{liyau}.
\begin{theorem}\cite{fraschoen}
Let $\Omega$ be a compact Riemannian manifold of dimension $n$
with smooth boundary $M$. For each positive integer $m$, the
following holds:
\begin{gather}\label{IneqFSconf}
\sigma_1(\Omega)|M|^{\frac{1}{n-1}}\leq \frac{nV_{rc}(\Omega,m)^{2/n}}{I(\Omega)^{\frac{n-2}{n-1}}}.
\end{gather}
In case of equality, there exists a conformal harmonic map
$\phi:\Omega\rightarrow\mathbb{B}^m$ which is a homothety on $M=\partial\Omega$ and such that $\phi(\Omega)$ meets $\partial B^m$
orthogonally. If $n\geq 3$, then $\phi$ is an isometric minimal
immersion of $\Omega$ and it is given by a subspace of the first
eigenspace.
\end{theorem}
The proof uses coordinate functions as test functions and is based on
the Hersch center of mass renormalization procedure. It is similar to
the proof of the Li-Yau inequality~\cite{liyau}.
For higher eigenvalues, the following upper bound for bounded domains was proved by
B. Colbois, A. El Soufi and the first author in~\cite{ceg2}.
\begin{theorem}\label{CEGconf}
Let $N$ be a Riemannian manifold of dimension $n$. If $N$ is
conformally equivalent to a complete Riemannian manifold with
non-negative Ricci curvature, then for each domain $\Omega\subset
N$, the following holds for each $k\geq 1$,
\begin{gather}\label{IneqCEGconf}
\sigma_k(\Omega)|M|^{\frac{1}{n-1}}\leq\frac{\alpha(n)}{I(\Omega)^{\frac{n-2}{n-1}}}k^{2/n},
\end{gather}
where $\alpha(n)$ is a constant depending only on $n$.
\end{theorem}
The proof of Theorem~\ref{CEGconf} is based on the methods of metric geometry
initiated in~\cite{kvr} and further developed in~\cite{gyn}.
In combination with the classical
isoperimetric inequality, Theorem~\ref{CEGconf} leads to the following corollary.
\begin{cor}\label{Cor:boundedeuclidean}
There exists a constant $C_n$ such that for any Euclidean domain
$\Omega\subset\mathbb{R}^{n}$
$$\sigma_k(\Omega)|\partial\Omega|^{\frac{1}{n-1}}\leq C_n k^{2/n}.$$
\end{cor}
Similar results also hold for domains in the hyperbolic space $\mathbb{H}^{n}$ and
in the hemisphere of $\mathbb{S}^{n}$.
An interesting question raised in~\cite{ceg2} is whether one can replace the exponent $2/n$ in Corollary \ref{Cor:boundedeuclidean} by
$1/(n-1)$, which should be optimal in view of Weyl's law \eqref{Weylaw}:
\begin{open}
Does there exist a constant $C_n$ such that any bounded Euclidean
domain $\Omega\subset\mathbb{R}^n$ satisfies
$$\sigma_k(\Omega)|\partial\Omega|^{\frac{1}{n-1}}\leq C_nk^{\frac{1}{n-1}}?$$
\end{open}
While it might be tempting to
think that inequality~(\ref{IneqCEGconf}) should also hold with the exponent
$1/(n-1)$, this is false since it would imply a universal upper
bound on the isoperimetric ratio $I(\Omega)$ for Euclidean domains.
\subsection{Lower bounds}
In~\cite{esco1}, J. Escobar proved the following lower bound.
\begin{theorem}
Let $\Omega$ be a smooth compact Riemannian manifold of
dimension $\geq 3$ with boundary $M=\partial\Omega$. Suppose that
the Ricci curvature of $\Omega$
is non-negative and that the second fundamental form of $M$ is
bounded below by $k_0>0$. Then
$\sigma_1>k_0/2$.
\end{theorem}
The proof is a simple application of Reilly's formula.
In~\cite{esco2}, Escobar conjectured the stronger bound
$\sigma_1\geq k_0$, which he proved for surfaces. For convex planar domains,
this had already been proved by Payne~\cite{payne}. Earlier lower
bounds for convex and starshaped planar domains are due to Kuttler and
Sigillito~\cite{kuttsigi2,kuttsigi3}.
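Note that both results are consistent with the case of the unit ball $\mathbb{B}^n$: its second fundamental form is the identity, so one may take $k_0=1$, while $\sigma_1(\mathbb{B}^n)=1$ by Example~\ref{balls}; in particular, the conjectured inequality $\sigma_1\geq k_0$ is attained on balls.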
In more general situations (e.g. no convexity assumption), it is still
possible to bound the first eigenvalue from below, similarly to
the classical Cheeger inequality. The classical Cheeger constant associated
to a compact Riemannian manifold $\Omega$ with boundary
$M=\partial\Omega$ is defined by
\begin{gather*}
h_{c}(\Omega):=\inf_{|A|\leq\frac{|\Omega|}{2}}\frac{|\partial A\cap
\mbox{int }\Omega|}{|A|},
\end{gather*}
where the infimum is taken over all Borel subsets $A$ of $\Omega$ such that
$|A|\leq |\Omega|/2$.
In~\cite{jammes1} P.~Jammes introduced the following Cheeger type
constant for the Steklov problem:
\begin{gather*}
h_{j}(\Omega):=\inf_{|A|\leq\frac{|\Omega|}{2}}\frac{|\partial A\cap
\mbox{int }\Omega|}{|A\cap\partial\Omega|}.
\end{gather*}
He proved the following lower bound.
\begin{theorem}\label{ThmCheegerJammes}
Let $\Omega$ be a smooth compact Riemannian manifold with boundary
$M=\partial\Omega$. Then
\begin{gather}\label{InequalityCheegerJammes}
\sigma_1(\Omega)\geq \frac{1}{4}h_c(\Omega)h_j(\Omega)
\end{gather}
\end{theorem}
The proof of this theorem uses the coarea formula and follows the proof of the classical Cheeger
inequality quite closely.
Previous lower bounds were also obtained in~\cite{esco1} in terms of
a related Cheeger type constant and of the first eigenvalue of a Robin
problem on $\Omega$.
\subsection{Surfaces with large Steklov eigenvalues}
The previous discussion immediately raises the question of whether
there exist surfaces with an arbitrarily large normalized first Steklov
eigenvalue. The question was settled by the first author and
B. Colbois in~\cite{colbgir1}.
\begin{theorem}\label{Theorem:CGLARGE}
There exists a sequence $\{\Omega_N\}_{N\in\mathbb{N}}$ of compact surfaces
with boundary and a constant $C>0$ such that for each $N\in\mathbb{N}$,
$\mbox{genus}(\Omega_N)=1+N,$ and
$$\sigma_1(\Omega_N)L(\partial\Omega_N)\geq CN.$$
\end{theorem}
The proof is based on the construction of surfaces which are modelled on
a family of expander graphs.
\begin{remark}
The literature on geometric bounds for Steklov eigenvalues is
expanding rather fast. There is some interest in considering the maximization of various
functions of the Steklov eigenvalues. See \cite{dittmar1,edward2,HenPhil}.
In the framework of comparison geometry, $\sigma_1$ was studied in~\cite{esco3} and more recently in~\cite{BinSan}.
For submanifolds of $\mathbb{R}^n$, upper bounds involving the mean
curvatures of $M=\partial\Omega$ have been obtained in~\cite{IliasMakhoul}. Higher eigenvalues on annuli have
been studied in~\cite{FanTamYu}.
\end{remark}
\section{Isospectrality and spectral rigidity}\label{section:IsospectralityRigidity}
\label{isosp}
\subsection{Isospectrality and the Steklov problem}
Adapting the celebrated question of M. Kac ``Can one hear the shape of a drum?'' to the Steklov problem, one may ask:
\begin{open}
\label{isospdomains}
Do there exist planar domains which are not isometric and have the same Steklov spectrum?
\end{open}
We believe the answer to this question is negative. Moreover, the
problem can be viewed as a special case of a conjecture put forward in
\cite{JolSharaf}: two surfaces have the same Steklov spectrum if and
only if there exists a conformal mapping between them such that the
conformal factor on the boundary is identically equal to one. Note
that the ``if'' part immediately follows from the variational principle
\eqref{varcharsigmak}. Indeed, the numerator of the Rayleigh quotient
for Steklov eigenvalues is the usual Dirichlet energy, which is
invariant under conformal transformations in two dimensions. The
denominator also stays the same if the conformal factor is equal to
one on the boundary. Therefore, the Steklov spectra of such
conformally equivalent surfaces coincide.
In higher dimensions, the Dirichlet energy is not conformally
invariant, and therefore the approach described above does not
work. However, one can construct Steklov isospectral manifolds of
dimension $ n\ge 3$ with the help of Example~\ref{cyl}. Indeed, given
two compact manifolds $M_1$ and $M_2$ which are Laplace-Beltrami
isospectral (there are many known examples of such pairs, see, for instance, \cite{Buser1986, Sunada,GordonPerrySchueth}), consider two cylinders $\Omega_1=M_1 \times
[0,L]$ and $\Omega_2 = M_2 \times [0,L]$, $L>0$. It follows from
Example~\ref{cyl} that $\Omega_1$ and $\Omega_2$ have the same Steklov
spectra. Recently, examples of higher-dimensional Steklov isospectral
manifolds with connected boundaries were announced
in~\cite{GordHerbWebb}.
In all known constructions of Steklov isospectral manifolds,
their boundaries are Laplace isospectral. The following question was
asked in \cite{GPPS}:
\begin{open}
Do there exist Steklov isospectral manifolds such that their
boundaries are not Laplace isospectral?
\end{open}
\subsection{Rigidity of the Steklov spectrum: the case of a ball} It
is an interesting and challenging question to find examples of
manifolds with boundary that are uniquely determined by their Steklov
spectrum. In this subsection we discuss the seemingly simple example
of Euclidean balls.
\begin{proposition}
A disk is uniquely determined by its Steklov spectrum among all smooth
Euclidean domains.
\end{proposition}
\begin{proof}
Let $\Omega$ be a Euclidean domain which has the same Steklov
spectrum as the disk of radius $r$. Then, by Corollary \ref{cor1}
one immediately deduces that $\Omega$ is a planar domain of
perimeter $2\pi r$. Moreover, it follows from Theorem \ref{xx} that
$\Omega$ is simply--connected. Therefore, since the equality in
Weinstock's inequality \eqref{Inequality:Weinstock} is achieved for
$\Omega$, the domain $\Omega$ is a disk of radius $r$.
\end{proof}
\begin{remark}
The smoothness hypothesis in the proposition above seems to be purely technical. We have to make this assumption since we make use of Theorem \ref{xx}.
\end{remark}
The above result motivates the following question:
\begin{open}
Let $\Omega \subset {\mathbb R}^n$ be a domain which is isospectral to a ball of radius $r$. Show that it is a ball of radius $r$.
\end{open}
Note that Theorem~\ref{thm:brock} does not yield a solution to this problem because the volume $|\Omega|$ is not a Steklov spectrum invariant.
Using the heat invariants of the Dirichlet-to-Neumann operator (see subsection \ref{specinv}), one can prove the following statement in dimension three.
\begin{proposition}
Let $\Omega\subset\mathbb R^3$ be a domain with connected and smooth
boundary $M$. Suppose its Steklov spectrum is equal to that of
a ball of radius $r$. Then $\Omega$ is a ball of radius $r$.
\end{proposition}
This result was obtained in~\cite{PoltSher}, and we sketch its proof below. First, let us show that $M$ is simply--connected. We use an adaptation of a theorem of Zelditch on
multiplicities~\cite{Zelditch} proved using microlocal analysis.
Namely, since $\Omega$ is Steklov isospectral to a ball, the multiplicities of its Steklov eigenvalues grow as $m_k=C k+{\mathcal{O}}(1)$,
where $C>0$ is some constant and $m_k$ is the multiplicity of the
$k$-th {\it distinct} eigenvalue (cf. Example~\ref{balls}).
Then one deduces that $M$ is a Zoll surface (that is, all geodesics on
$M$ are periodic with a common period), and hence it is simply--connected~\cite{Besse}.
Therefore, the following
formula holds for the coefficient $a_2$ in the Steklov heat trace
asymptotics~\eqref{heatexp} on $\Omega$:
\begin{gather*}
a_2=\frac{1}{16\pi}\int_M H_1^2+\frac{1}{12}.
\end{gather*}
Here $H_1(x)$ denotes the mean curvature of $M$ at the point $x$, and the term $\frac{1}{12}$ is obtained from the Gauss--Bonnet theorem using the fact
that $M$ is simply--connected. We have then: $\int_M H_1^2=\int_{S_r} H_1^2$, where $S_r=\partial B_r$.
On the other hand, it follows from \eqref{Weylaw} and Corollary
\ref{cor2} that ${\rm Vol}(M)$ and $\int_M H_1$ are Steklov spectral invariants. Therefore,
$${\rm Area}(M)={\rm Area}(S_r),\qquad \int_M H_1=\int_{S_r} H_1.$$
Hence
\begin{equation*}
\sqrt{{\rm Area}(M)}\left(\int_M H_1^2\right)^{1/2}-\left|\int_M H_1\right|=\sqrt{{\rm Area}(S_r)}\left(\int_{S_r} H_1^2\right)^{1/2}-\left|\int_{S_r}
H_1\right|=0.\end{equation*}
Since the Cauchy-Schwarz inequality becomes an equality only for
constant functions, one gets that $H_1$ must be constant on $M$. By a
theorem of Alexandrov \cite{Alexandrov}, the only compact surfaces of
constant mean curvature embedded in $\mathbb R^3$ are round
spheres. We conclude that $M$ is itself a sphere of radius $r$ and
therefore $\Omega$ is isometric to $B_r$. This completes the proof of the proposition.
\section{Nodal geometry and multiplicity bounds}\label{section:NodalMultiplicity}
\label{nodal}
\subsection{Nodal domain count}
The study of nodal domains and nodal sets of eigenfunctions is
probably the oldest topic in geometric spectral theory, going back to
the experiments of E. Chladni with vibrating plates. The fundamental
result in the subject is Courant's nodal domain theorem which states
that the $n$-th eigenfunction of the Dirichlet boundary value problem
has at most $n$ nodal domains. The proof of this statement uses
essentially two ingredients: the variational principle and the unique
continuation for solutions of second order elliptic equations. It can
therefore be extended essentially verbatim to Steklov eigenfunctions (see~\cite{kuttsigi2,KarKoPo}).
\begin{theorem}
Let $\Omega$ be a compact Riemannian manifold with boundary and
$\phi_n$ be an eigenfunction corresponding to the $n$-th nonzero
Steklov eigenvalue $\sigma_n$. Then $\phi_n$ has at most $n+1$
nodal domains.
\end{theorem}
Note that the Steklov spectrum starts with $\sigma_0=0$, and
therefore the $n$-th nonzero eigenvalue is actually the $(n+1)$-st Steklov
eigenvalue.
Apart from the ``interior'' nodal domains and nodal sets of Steklov
eigenfunctions, a natural problem is to study the boundary nodal
domains and nodal sets, that is, the nodal domains and nodal sets of
the eigenfunctions of the Dirichlet-to-Neumann operator.
The proof of Courant's theorem cannot be generalized to the Dirichlet-to-Neumann operator because the latter is nonlocal. The following problem therefore arises:
\begin{open}
\label{opennodal}
Let $\Omega$ be a Riemannian manifold with boundary $M$. Find an upper bound for the number of nodal domains of the $n$-th eigenfunction of the Dirichlet-to-Neumann operator on $M$.
\end{open}
For surfaces, a simple topological argument shows that the bound on
the number of interior nodal domains implies an estimate on the number
of boundary nodal domains of a Steklov eigenfunction. In particular,
the $n$-th nontrivial Dirichlet-to-Neumann eigenfunction on the
boundary of a simply--connected planar domain has at most $2n$ nodal
domains~\cite[Lemma 3.4]{AlessandriniMagnanini}.
\begin{figure}
\centering
\includegraphics[width=6cm]{Figures/urchin.jpg}
\caption{A surface inside a ball creating only two connected components in the interior and a large number of connected components on the boundary sphere.}
\label{figure:urchin}
\end{figure}
In higher dimensions, the number of interior nodal domains does not
control the number of boundary nodal domains (see Figure~\ref{figure:urchin}),
and therefore
new ideas are needed to tackle Open Problem \ref{opennodal}. However,
there are indications that a Courant-type (i.e. $O(n)$) bound should
hold in this case as well. For instance, this is the case for
cylinders and Euclidean balls (see Examples~\ref{balls} and~\ref{cyl}).
\subsection{Geometry of the nodal sets} The nodal sets of Steklov eigenfunctions, both interior and boundary, remain largely unexplored.
The basic property of the nodal sets of Laplace--Beltrami
eigenfunctions is their density on the scale of $1/{\sqrt{\lambda}}$,
where $\lambda$ is the eigenvalue (cf.~\cite{Zelditch2}, see also Figure~\ref{figure:nodalellipse}).
This means that for any manifold
$\Omega$, there exists a constant $C$ such that for any eigenvalue
$\lambda$ large enough, the corresponding eigenfunction $\phi_\lambda$
has a zero in any geodesic ball of radius $C/\sqrt{\lambda}$. This
motivates the following questions (see also Figure \ref{figure:nodalellipse}):
\begin{open}
(i) Are the nodal sets of Steklov eigenfunctions on a Riemannian
manifold $\Omega$ dense on the scale $1/\sigma$ in $\Omega$?
(ii) Are the nodal sets of the Dirichlet-to-Neumann eigenfunctions
dense on the scale $1/\sigma$ in $M=\partial \Omega$?
\end{open}
For smooth simply--connected planar domains, a positive
answer to question (ii) follows from the work of Shamma~\cite{shamma} on
asymptotic behaviour of Steklov eigenfunctions. On the other hand, the explicit representation of eigenfunctions on rectangles
implies that there exist eigenfunctions of arbitrarily high order which have zeros only on one pair of parallel sides. Therefore, a positive answer to (ii)
may possibly hold only under some regularity assumptions on the boundary.
\begin{figure}
\centering
\includegraphics[width=8cm]{Figures/ellipse30.png}
\caption{The nodal lines of the 30th eigenfunction on an ellipse.}
\label{figure:nodalellipse}
\end{figure}
Another fundamental problem in nodal geometry is to estimate the size of the nodal set. It was conjectured by S.-T. Yau that for any Riemannian manifold of dimension $n$,
$$C_1\sqrt{\lambda} \le \mathcal{H}_{n-1}(\mathcal{N}(\phi_\lambda)) \le C_2\sqrt{\lambda},$$
where $\mathcal{H}_{n-1}(\mathcal{N}(\phi_\lambda))$ denotes the $(n-1)$-dimensional Hausdorff measure of the nodal set $\mathcal{N}(\phi_\lambda)$
of a Laplace-Beltrami eigenfunction $\phi_\lambda$, and the constants $C_1, C_2$ depend only on the geometry of the manifold. Similar questions can be asked in the Steklov setting:
\begin{open}
\label{size}
Let $\Omega$ be an $n$-dimensional Riemannian manifold with boundary $M$. Let $\phi_\sigma$ be an eigenfunction of the Steklov problem on $\Omega$ corresponding to the eigenvalue $\sigma$ and let $u_\sigma=\phi_\sigma|_M$ be the corresponding eigenfunction of the Dirichlet-to-Neumann operator on $M$. Show that
\smallskip
(i) $C_1\sigma \le \mathcal{H}_{n-1}(\mathcal{N}(\phi_\sigma)) \le C_2\sigma,$
\smallskip
(ii) $C_1' \sigma \le \mathcal{H}_{n-2}(\mathcal{N}(u_\sigma)) \le C_2'\sigma,$
\smallskip
\noindent where the constants $C_1,C_2, C_1', C_2'$ depend only on the manifold.
\end{open}
Some partial results on this problem are known. In particular, the upper bound in (ii) was
conjectured in~\cite{BellovaLin} and proved in~\cite{Zelditch2}
for real analytic manifolds with real analytic boundary. A lower bound on the size of the nodal set $\mathcal{N}(u_\sigma)$ for smooth Riemannian manifolds (though weaker than the one conjectured in (ii) in dimensions $\ge 3$)
was recently obtained in~\cite{WangZhu} using an adaptation of the approach of~\cite{SoggeZelditch} to nonlocal operators.
The upper bound in (i) is related to the question of estimating the size of the zero set of a harmonic function in terms of its frequency (see \cite{HanLin}).
In \cite{PoltSherToth}, this approach is combined with the methods of potential theory and complex analysis in order to obtain both upper and lower bounds in (i) for simply--connected analytic surfaces.
Let us also note that the Steklov eigenfunctions decay rapidly away
from the boundary~\cite{HislopLutzer}, and therefore the problem of
understanding the properties of the nodal set in the interior is
somewhat analogous to the study of the zero sets of Schr\"odinger
eigenfunctions in the ``forbidden regions'' (see~\cite{HaZelZhou}).
\subsection{Multiplicity bounds for Steklov eigenvalues}
In two dimensions, the estimate on the number of nodal domains makes it
possible to control the eigenvalue multiplicities (see \cite{Besson, Cheng}).
The argument roughly goes as follows:
if the multiplicity of an eigenvalue is high, one can construct a
corresponding eigenfunction with a high enough vanishing order at a
certain point of a surface. In the neighbourhood of this point the
eigenfunction looks like a harmonic polynomial, and therefore the
vanishing order together with the topology of a surface yield a lower
bound on the number of nodal domains. To avoid a contradiction with
Courant's theorem, one deduces a bound on the vanishing order, and
hence on the multiplicity.
This general scheme was originally applied to Laplace-Beltrami
eigenvalues, but it can be also adapted to prove multiplicity bounds for Steklov eigenvalues. Interestingly enough, one can obtain estimates of two kinds.
Recall that the Euler characteristic $\chi$ of an orientable surface of genus
$\gamma$ with $l$ boundary components equals $2-2\gamma-l$, and of a non-orientable one is equal to $2-\gamma-l$.
Putting together the results of~\cite{KarKoPo,
jammes2, jammes3, fraschoen2} we get the following bounds:
\begin{theorem}
Let $\Sigma$ be a compact surface of Euler characteristic $\chi$
with $l$ boundary components. Then the multiplicity $m_k(\Sigma)$
for any $k \ge 1$ satisfies the following inequalities:
\begin{equation}
\label{mult1}
m_k(\Sigma) \le 2k-2\chi-2l+5,
\end{equation}
\begin{equation}
\label{mult2}
m_k(\Sigma) \le k-2\chi+3.
\end{equation}
\end{theorem}
Note that the right-hand side of \eqref{mult1} depends only on the index of the eigenvalue $k$ and on the genus $\gamma$ of the surface, while
the right-hand side of \eqref{mult2} depends also on the number of boundary components.
Inequality \eqref{mult2} in this form was proved in \cite{jammes3}. In
particular, it is sharp for the first eigenvalue of the disk
($\chi=1$, $l=1$, the maximal multiplicity is two) and of the M\"obius
band ($\chi=0$, $l=1$, the maximal multiplicity is four). Inequality \eqref{mult1} is sharp for the annulus ($\chi=0$, $l=2$, the
maximal multiplicity is three and attained by the critical catenoid,
see Theorem \ref{thm:fraschoencritcat}).
While these bounds are sharp in some cases, they are far from optimal
for large~$k$. In fact, the following result is an immediate corollary
of Theorem \ref{main:GPPS}.
\begin{corollary} \cite{GPPS}
For any smooth compact Riemannian surface $\Omega$ with $l$ boundary
components, there is a constant $N$ depending on the metric on
$\Omega$ such that for $j>N$, the multiplicity of $\sigma_j$ is at
most $2l$.
\end{corollary}
\begin{remark}
The multiplicity of the first nonzero eigenvalue $\sigma_1$ has been
linked to the relative chromatic number of the corresponding surface with boundary in~\cite{jammes3}.
\end{remark}
For manifolds of dimension $n\ge 3$, no general multiplicity bounds
for Steklov eigenvalues are available. Moreover, given a Riemannian
manifold $\Omega$ of dimension $n\ge 3$ and any non-decreasing
sequence of $N$ positive numbers, one can find a Riemannian metric
$g$ in a given conformal class such that this sequence coincides
with the first $N$ nonzero Steklov eigenvalues of $(\Omega,g)$
\cite{jammes1}.
\begin{theorem}\label{thm:prescriptionJammes}
Let $\Omega$ be a compact manifold with boundary. Let $n$ be a
positive integer and let
$0=s_0< s_1\leq\cdots\leq s_n$ be a finite
sequence. Then there exists a Riemannian metric $g$ on $\Omega$ such
that $\sigma_j=s_j$ for $j=0,\cdots,n$.
\end{theorem}
For Laplace-Beltrami eigenvalues, a similar result was
obtained in \cite{CdV87}. It is plausible that multiplicity bounds for
Steklov eigenvalues in higher dimensions could be obtained under
certain geometric assumptions, such as curvature constraints.
\subsection*{Acknowledgements}
The authors would like to thank Brian Davies for inviting them to
write this survey. The project started in 2012 at the conference on
\emph{Geometric Aspects of Spectral Theory}
at the Mathematical Research Institute in Oberwolfach,
and its hospitality is greatly appreciated.
We are grateful to Mikhail Karpukhin and David Sher for helpful
remarks on the preliminary version of the paper. We are also thankful to
Dorin Bucur, Fedor Nazarov, Alexander Strohmaier and John Toth for useful discussions,
as well as to Bartek Siudeja for letting us use his FEniCS
eigenvalue computation code.
\bibliographystyle{plain}
| {
"timestamp": "2014-11-25T02:21:29",
"yymm": "1411",
"arxiv_id": "1411.6567",
"language": "en",
"url": "https://arxiv.org/abs/1411.6567",
"abstract": "The Steklov problem is an eigenvalue problem with the spectral parameter in the boundary conditions, which has various applications. Its spectrum coincides with that of the Dirichlet-to-Neumann operator. Over the past years, there has been a growing interest in the Steklov problem from the viewpoint of spectral geometry. While this problem shares some common properties with its more familiar Dirichlet and Neumann cousins, its eigenvalues and eigenfunctions have a number of distinctive geometric features, which makes the subject especially appealing. In this survey we discuss some recent advances and open questions, particularly in the study of spectral asymptotics, spectral invariants, eigenvalue estimates, and nodal geometry.",
"subjects": "Spectral Theory (math.SP); Analysis of PDEs (math.AP); Differential Geometry (math.DG)",
"title": "Spectral geometry of the Steklov problem",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.984810954312569,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7076796354463496
} |
https://arxiv.org/abs/1007.0584 | Euler-Lagrange equations for composition functionals in calculus of variations on time scales | In this paper we consider the problem of the calculus of variations for a functional which is the composition of a certain scalar function $H$ with the delta integral of a vector valued field $f$, i.e., of the form $H(\int_{a}^{b}f(t,x^{\sigma}(t),x^{\Delta}(t))\Delta t)$. Euler-Lagrange equations, natural boundary conditions for such problems as well as a necessary optimality condition for isoperimetric problems, on a general time scale, are given. A number of corollaries are obtained, and several examples illustrating the new results are discussed in detail. | \section{Introduction}
The calculus on time scales was introduced by Bernd Aulbach and
Stefan Hilger in 1988 \cite{Au:Hilger}. The new theory bridges the
divide and extends the traditional areas of continuous and discrete
analysis and the various dialects of $q$-calculus \cite{dt:qc} into
a single theory \cite{BP,book:ts1,1st:book:ts}. The calculus of
variations on time scales was born with the works
\cite{cv:02,b7,Zeidan} and has interesting applications in Economics
\cite{Almeida:T,Atici:Uysal:06,Atici:Uysal:08,RuiDel07,Nat}.
Currently, several researchers are getting interested in the new
theory and contributing to its development (see, \textrm{e.g.},
\cite{ZbigDel,Bohner:F:T,bhn:Gus,Rui:Del:HO,Mal:Tor:09,Mal:Tor:Wei,AD:09,AD:10b,AD:10c}).
The present work is dedicated to the study of general
(non-classical) problems of calculus of variations on an arbitrary
time scale $\mathbb{T}$. As a particular case, by choosing
$\mathbb{T} = \mathbb{R}$, one gets the generalized calculus of
variations \cite{CLP} with functionals of the form
\begin{equation*}
H \left(\int_{a}^{b}f(t,x(t),x'(t))dt \right),
\end{equation*}
where $f$ has $n$ components and $H$ has $n$ independent variables.
Cases of calculus of variations as these appear in practical
applications (see \cite{CLP} and the references given therein) but
cannot be solved using the classical theory. Therefore, an extension
of this theory is needed.
The paper is organized as follows. In Section~\ref{sec:prm}, some
preliminaries on time scales are presented. Our results are given in
Section~\ref{sec:Euler} and Section~\ref{sec:Iso}. We begin
Section~\ref{sec:Euler} by formulating the general (non-classical)
problem of calculus of variations \eqref{vp} on an arbitrary time
scale. We obtain a general formula for the Euler-Lagrange equations
and natural boundary conditions for the general problem
(Theorem~\ref{thm:mr}), which are then applied to the product
(Corollary~\ref{cproduct}) and the quotient
(Corollary~\ref{cquotient}). In Section~\ref{sec:Iso} we prove a
necessary optimality condition for the general isoperimetric problem
(Theorem~\ref{th:iso} and Theorem~\ref{th:iso:abn}). Throughout the
paper several examples illustrating the new results are discussed in
detail.
\section{Preliminaries}
\label{sec:prm}
The following definitions and theorems will serve as a short
introduction to the calculus of time scales; they can be found in
\cite{BP,book:ts1}.
A nonempty closed subset of $\mathbb{R}$ is called a \emph{time
scale} and it is denoted by $\mathbb{T}$. The real numbers
$(\mathbb{R})$, the integers $(\mathbb{Z})$, the natural numbers
$(\mathbb{N})$, the $h$-numbers ($h\mathbb{Z}:=\{h z | z \in
\mathbb{Z}\}$, where $h>0$ is a fixed real number), and the
$q$-numbers ($q^{\mathbb{N}_0}:=\{q^k | k \in \mathbb{N}_0\}$, where
$q>1$ is a fixed real number) are examples of time scales, as are
$\{0,\frac{1}{2},1\}$, $[2,3]\cup \mathbb{N}$, and $[-1,1]\cup
[2,3]$, where $[-1,1]$ and $[2,3]$ are real number intervals. We
assume that a time scale $\mathbb{T}$ has the topology that it
inherits from the real numbers with the standard topology.
\begin{definition}
For $t\in\mathbb{T}$ we define the \emph{forward jump operator}
$\sigma:\mathbb{T}\rightarrow\mathbb{T}$ by
\begin{equation*}
\sigma(t)=\inf{\{s\in\mathbb{T}:s>t}\}, \mbox{ for all
$t\in\mathbb{T}$},
\end{equation*}
while the \emph{backward jump operator}
$\rho:\mathbb{T}\rightarrow\mathbb{T}$ is defined by
\begin{equation*}
\rho(t)=\sup{\{s\in\mathbb{T}:s<t}\},\mbox{ for all
$t\in\mathbb{T}$}.
\end{equation*}
\end{definition}
In this definition we consider $\sigma(M)=M$ if $\mathbb{T}$ has a
maximum $M$ and $\rho(m)=m$ if $\mathbb{T}$ has a minimum $m$.
A point $t\in\mathbb{T}$ is called \emph{right-dense},
\emph{right-scattered}, \emph{left-dense} and \emph{left-scattered}
if $\sigma(t)=t$, $\sigma(t)>t$, $\rho(t)=t$ and $\rho(t)<t$,
respectively. Points that are simultaneously right-scattered and
left-scattered are called \emph{isolated}. Points that are
simultaneously right-dense and left-dense are called \emph{dense}.
The \emph{graininess function} $\mu:\mathbb{T}\rightarrow[0,\infty)$
is defined by
\begin{equation*}
\mu(t)=\sigma(t)-t,\mbox{ for all $t\in\mathbb{T}$}.
\end{equation*}
\begin{example}
If $\mathbb{T} = \mathbb{R}$, then $\sigma(t) = \rho(t) = t$ and
$\mu(t)= 0$. If $\mathbb{T} = \mathbb{Z}$, then $\sigma(t) = t + 1$,
$\rho(t) = t - 1$, and $\mu(t)= 1$. On the other hand, if
$\mathbb{T} = q^{\mathbb{N}_0}$, where $q>1$ is a fixed real number,
then we have $\sigma(t) = q t$, $\rho(t) = q^{-1} t$, and $\mu(t)=
(q-1) t$.
\end{example}
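For concreteness, the jump operators and the graininess are straightforward to compute when the time scale is given as a finite set of points; the following Python sketch (purely illustrative, with hypothetical helper names \texttt{sigma} and \texttt{mu}) does this for the time scale $\{0,\frac{1}{2},1\}$ used in the examples below.
\begin{verbatim}
# Sketch: forward jump operator and graininess for a time scale given
# as a finite sorted list of points (helper names are hypothetical).
def sigma(T, t):
    later = [s for s in T if s > t]
    return min(later) if later else t   # sigma(max) = max by convention

def mu(T, t):
    return sigma(T, t) - t

T = [0, 0.5, 1]
print([(t, sigma(T, t), mu(T, t)) for t in T])
# [(0, 0.5, 0.5), (0.5, 1, 0.5), (1, 1, 0)]
\end{verbatim}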
\begin{definition}
A time scale $\mathbb{T}$ is called \emph{regular} if the following
two conditions are satisfied:
\begin{itemize}
\item[(i)] $\sigma(\rho(t))=t$, for all $t\in \mathbb{T}$; and
\item[(ii)] $\rho(\sigma(t))=t$, for all $t\in \mathbb{T}$.
\end{itemize}
\end{definition}
Following \cite{BP}, let us define
$$\mathbb{T}^{\kappa}=\left\{\begin{array}{lcl}
\mathbb{T}\setminus(\rho(\sup\mathbb{T}),\sup \mathbb{T}] &\mbox{if}&\sup \mathbb{T}<\infty \\
\mathbb{T} &\mbox{if}&\sup \mathbb{T}=\infty.
\end{array}\right.$$
\begin{definition}
\label{def:de:dif} We say that a function
$f:\mathbb{T}\rightarrow\mathbb{R}$ is \emph{delta differentiable}
at $t\in\mathbb{T}^{\kappa}$ if there exists a number
$f^{\Delta}(t)$ such that for all $\varepsilon>0$ there is a
neighborhood $U$ of $t$ (\textrm{i.e.},
$U=(t-\delta,t+\delta)\cap\mathbb{T}$ for some $\delta>0$) such that
$$|f(\sigma(t))-f(s)-f^{\Delta}(t)(\sigma(t)-s)|
\leq\varepsilon|\sigma(t)-s|,\mbox{ for all $s\in U$}.$$ We call
$f^{\Delta}(t)$ the \emph{delta derivative} of $f$ at $t$ and $f$ is
said \emph{delta differentiable} on $\mathbb{T}^{\kappa}$ provided
$f^{\Delta}(t)$ exists for all $t\in\mathbb{T}^{\kappa}$.
\end{definition}
\begin{remark}
If $t \in \mathbb{T} \setminus \mathbb{T}^\kappa$, then
$f^{\Delta}(t)$ is not uniquely defined, since for such a point $t$,
small neighborhoods $U$ of $t$ consist only of $t$ and, besides, we
have $\sigma(t) = t$. For this reason, maximal left-scattered points
are omitted in Definition~\ref{def:de:dif}.
\end{remark}
Note that at right-dense points $f^{\Delta}(t)=\lim_{s\rightarrow
t}\frac{f(t)-f(s)}{t-s}$, provided this limit exists, while at
right-scattered points $f^{\Delta}(t)=\frac{f(\sigma(t))-f(t)}{\mu(t)}$,
provided $f$ is continuous at $t$.
\begin{example}
\label{ex:der:QC} If $\mathbb{T} = \mathbb{R}$, then $f^{\Delta}(t)
= f'(t)$, \textrm{i.e.}, the delta derivative coincides with the
usual one. If $\mathbb{T} = \mathbb{Z}$, then $f^{\Delta}(t) =
\Delta f(t) = f(t+1) - f(t)$. If $\mathbb{T} = q^{\mathbb{N}_0}$,
$q>1$, then $f^{\Delta} (t)=\frac{f(q t)-f(t)}{(q-1) t}$,
\textrm{i.e.}, we get the usual derivative of quantum calculus
\cite{QC}.
\end{example}
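At right-scattered points the delta derivative is simply a weighted forward difference, which is easy to evaluate numerically; the sketch below (illustrative only, reusing the hypothetical \texttt{sigma} helper above) reproduces the $\mathbb{Z}$ and $q$-calculus formulas of Example~\ref{ex:der:QC} for $f(t)=t^2$.
\begin{verbatim}
# Sketch: delta derivative at a right-scattered point,
#   f^Delta(t) = (f(sigma(t)) - f(t)) / mu(t).
def delta_derivative(T, f, t):
    s = sigma(T, t)
    if s == t:
        raise ValueError("t is right-dense or maximal; formula not applicable")
    return (f(s) - f(t)) / (s - t)

f = lambda t: t**2
print(delta_derivative(list(range(6)), f, 2))                # Z:    (9-4)/1   = 5  = 2t+1
print(delta_derivative([2.0**k for k in range(6)], f, 4.0))  # q=2:  (64-16)/4 = 12 = (q+1)t
\end{verbatim}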
A function $f:\mathbb{T}\rightarrow\mathbb{R}$ is called
\emph{rd-continuous} if it is continuous at right-dense points and
if its left-sided limit exists at left-dense points. We denote the
set of all rd-continuous functions by C$_{\textrm{rd}}$ and the set
of all delta differentiable functions with rd-continuous derivative
by C$_{\textrm{rd}}^1$.
Now we introduce the concept of integral for a function
$f:\mathbb{T}\rightarrow\mathbb{R}$.
Let $a,b\in\mathbb{T}$ with $a\leq b$. We define the closed interval
$[a,b]$ in $\mathbb{T}$ by
\begin{equation*}
[a,b]:=\{t \in \mathbb{T}: a \leq t \leq b\}.
\end{equation*}
Open intervals and half-open intervals in $\mathbb{T}$ are defined
accordingly. In what follows all intervals will be time scale
intervals.
It is known that rd-continuous functions possess an
\emph{antiderivative}, \textrm{i.e.}, there exists a function $F$
with $F^{\Delta}=f$, and in this case the \emph{delta integral} is
defined by
\begin{equation*}
\int_{a}^{b}f(t)\Delta t=F(b)-F(a)
\end{equation*}
for all $a,b\in\mathbb{T}$.
The delta integral has the following properties:
\begin{itemize}
\item[(i)] if $f\in C_{rd}$ and $t \in \mathbb{T}^{\kappa}$, then
\begin{equation*}
\int_t^{\sigma(t)}f(\tau)\Delta\tau=\mu(t)f(t)\, ;
\end{equation*}
\item[(ii)]if $a,b\in\mathbb{T}$ and $f,g\in C_{rd}$, then
\begin{equation*}
\int_{a}^{b}f(\sigma(t))g^{\Delta}(t)\Delta t
=\left[(fg)(t)\right]_{t=a}^{t=b}-\int_{a}^{b}f^{\Delta}(t)g(t)\Delta
t \, ,
\end{equation*}
\begin{equation*}
\int_{a}^{b}f(t)g^{\Delta}(t)\Delta t
=\left[(fg)(t)\right]_{t=a}^{t=b}-\int_{a}^{b}
f^{\Delta}(t)g(\sigma(t))\Delta t;
\end{equation*}
\item[(iii)] if $[a,b]$ consists of only isolated points, then
\begin{equation*}
\int_{a}^{b}f(t)\Delta t =\sum_{t\in[a,b)}\mu(t)f(t).
\end{equation*}
\end{itemize}
\begin{example}
Let $a, b \in \mathbb{T}$ with $a < b$. If $\mathbb{T} =
\mathbb{R}$, then $\int_{a}^{b}f(t)\Delta t = \int_{a}^{b}f(t) dt$,
where the integral on the right-hand side is the classical Riemann
integral. If $\mathbb{T} = \mathbb{Z}$, then $\int_{a}^{b}f(t)\Delta
t = \sum_{k=a}^{b-1} f(k)$. If $\mathbb{T} = q^{\mathbb{N}_0}$,
$q>1$, then $\int_{a}^{b}f(t)\Delta t = (q - 1) \sum_{t \in [a,b)} t
f(t)$.
\end{example}
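On time scales whose points in $[a,b)$ are isolated, property (iii) turns the delta integral into a finite sum, which the following sketch (illustrative only) evaluates directly; it reproduces the $\mathbb{Z}$ and $q^{\mathbb{N}_0}$ formulas of the example above.
\begin{verbatim}
# Sketch: delta integral over isolated points via
#   int_a^b f(t) Delta t = sum_{t in [a,b)} mu(t) f(t).
def delta_integral(T, f, a, b):
    T = sorted(T)
    return sum((t2 - t1) * f(t1) for t1, t2 in zip(T, T[1:]) if a <= t1 < b)

print(delta_integral(range(6), lambda t: t, 0, 5))               # Z: 0+1+2+3+4 = 10
print(delta_integral([2**k for k in range(6)], lambda t: t, 1, 16))
# q = 2: (q-1) * (1*1 + 2*2 + 4*4 + 8*8) = 85
\end{verbatim}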
The Dubois-Reymond lemma of the calculus of variations on time
scales will be useful for our purposes.
\begin{lemma}(Lemma of Dubois-Reymond \cite{b7})
\label{lemma:DR} Let $\mathbb{T}=[a,b]$ be a time scale with at
least three points and let $g\in C_{\textrm{rd}}$,
$g:\mathbb{T}^\kappa\rightarrow\mathbb{R}$. Then,
$$\int_{a}^{b}g(t) \cdot \eta^\Delta(t)\Delta t=0 \quad
\mbox{for all $\eta\in C_{\textrm{rd}}^1$ with
$\eta(a)=\eta(b)=0$}$$ if and only if $g(t)=c$ on
$\mathbb{T}^\kappa$ for some $c\in\mathbb{R}$.
\end{lemma}
\section{Euler-Lagrange equations}
\label{sec:Euler}
Let $\mathbb{T}$ be a time scale. Throughout we let $A,B\in
\mathbb{T}$ with $A<B$. For an interval $[c,d]\cap \mathbb{T}$ we
simply write $[c,d]$. We also abbreviate $f\circ\sigma$ by
$f^\sigma$. Now let $[a,b]$, with $a,b\in \mathbb{T}$ and $b<B$, be
a subinterval of $[A,B]$.
The general (non-classical) problem of the calculus of variations on
time scales under our consideration consists of minimizing or
maximizing a functional of the form
\begin{equation}
\label{vp}
\begin{gathered}
\mathcal{L}[x]=H\left(\int_{a}^{b}f_{1}(t,x^{\sigma}(t),x^{\Delta}(t))\Delta
t,\ldots, \int_{a}^{b}f_{n}(t,x^{\sigma}(t),x^{\Delta}(t))\Delta
t\right),\\
(x(a)=x_{a}), \quad (x(b)=x_{b})
\end{gathered}
\end{equation}
over all $x\in C_{rd}^{1}$. Using parentheses around the end-point
conditions means that these conditions may or may not be present. We
assume that:
\begin{itemize}
\item[(i)] the function $H:\mathbb{R}^{n}\rightarrow \mathbb{R}$ has continuous
partial derivatives with respect to its arguments and we denote them
by $H'_{i}$, $i=1,\ldots,n$;
\item[(ii)] functions $(t,y,v)\rightarrow f_{i}(t,y,v)$ from $[a,b]\times \mathbb{R}^{2}$ to
$\mathbb{R}$, $i=1,\ldots,n$, have continuous partial derivatives with
respect to $y,v$ for all $t\in[a,b]$ and we denote them by $f_{iy}$,
$f_{iv}$;
\item [(iii)] $f_{i}$, $f_{iy}$,
$f_{iv}$, $i=1,\ldots,n$, are rd-continuous in $t$ for all $x\in
C_{rd}^{1}$.
\end{itemize}
A function $x\in C_{rd}^{1}$ is said to be admissible provided that it
satisfies the end-point conditions (if any are given).
Let us consider the following norm in $C_{rd}^{1}$:
\begin{equation*}
\|x\|_{1}=\sup_{t\in[a,b]}|x^{\sigma}(t)|+\sup_{t\in[a,b]}|x^{\Delta}(t)|.
\end{equation*}
\begin{definition}
An admissible function $\tilde{x}$ is said to be a \emph{weak local
minimizer} (respectively \emph{weak local maximizer}) for \eqref{vp}
if there exists $\delta >0$ such that $\mathcal{L}[\tilde{x}]
\leq \mathcal{L}[x]$ (respectively $\mathcal{L}[\tilde{x}] \geq \mathcal{L}[x]$)
for all admissible $x$ with $\|x-\tilde{x}\|_{1}<\delta$.
\end{definition}
The next theorem gives necessary optimality conditions for problem
\eqref{vp}.
\begin{theorem}
\label{thm:mr}
If $\tilde{x}$ is a weak local solution of the problem \eqref{vp},
then the Euler-Lagrange equation
\begin{equation}
\label{Euler}
\sum_{i=1}^{n}H'_{i}(\mathcal{F}_{1}[\tilde{x}],\ldots,
\mathcal{F}_{n}[\tilde{x}])\left(f_{iv}^{\Delta}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))
-f_{iy}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))\right)=0
\end{equation}
holds for all $t \in [a,b]^\kappa$, where
$\mathcal{F}_{i}[\tilde{x}]
=\int_{a}^{b}f_{i}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))\Delta t$,
$i=1,\ldots,n$. Moreover, if $x(a)$ is not specified, then
\begin{equation}
\label{nat:l}
\sum_{i=1}^{n}H'_{i}(\mathcal{F}_{1}[\tilde{x}],\ldots,
\mathcal{F}_{n}[\tilde{x}])f_{iv}(a,\tilde{x}^{\sigma}(a),\tilde{x}^{\Delta}(a))=0;
\end{equation}
and if $x(b)$ is not specified, then
\begin{multline}
\label{nat:r}
\sum_{i=1}^{n}H'_{i}(\mathcal{F}_{1}[\tilde{x}],
\ldots,\mathcal{F}_{n}[\tilde{x}])\Bigl(f_{iv}(\rho(b),\tilde{x}^{\sigma}(\rho
(b)),\tilde{x}^{\Delta}(\rho(b)))\Bigr.\\
+\Bigl.\int_{\rho(b)}^bf_{iy}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))\Delta
t\Bigr)=0.
\end{multline}
\end{theorem}
\begin{proof}
Suppose that $\mathcal{L}[x]$ has a weak local extremum at
$\tilde{x}$. For an admissible variation $h\in C_{rd}^{1}$ we define
a function $\phi:\mathbb{R}\rightarrow \mathbb{R}$ by
$\phi(\varepsilon) = \mathcal{L}[\tilde{x} + \varepsilon h]$. We
do not require $h(a)=0$ or $h(b)=0$ in case $x(a)$ or $x(b)$,
respectively, is free (it is possible that both are free). A
necessary condition for $\tilde{x}$ to be an extremizer for
$\mathcal{L}[x]$ is given by $\phi'(\varepsilon)|_{\varepsilon=0} =
0$. Using the chain rule for obtaining the derivative of a composed
function we get
\begin{equation*}
\phi'(\varepsilon)|_{\varepsilon=0}
=\sum_{i=1}^{n}H'_{i}(\mathcal{F}_{1}[\tilde{x}],\ldots,\mathcal{F}_{n}[\tilde{x}])
\int_a^b \left[ f_{iy}(\bullet) h^{\sigma}(t) + f_{iv}(\bullet)
h^{\Delta}(t)\right]\Delta t ,
\end{equation*}
where $(\bullet) =
\left(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t)\right)$.
Integration by parts of the first term of the integrand gives
\begin{equation*}
\int_a^b f_{iy}(\bullet) h^{\sigma}(t) \Delta t =\int_a^t
f_{iy}(\circ) \Delta \tau h(t)|_{t=a}^{t=b}-\int_a^b \left(\int_a^t
f_{iy}(\circ)\Delta \tau h^{\Delta}(t)\right) \Delta t,
\end{equation*}
where $(\circ) =
\left(\tau,\tilde{x}^{\sigma}(\tau),\tilde{x}^{\Delta}(\tau)\right)$. The
necessary condition $\phi'(\varepsilon)|_{\varepsilon=0} = 0$ can be
written as
\begin{multline}
\label{eq:aft:IP} 0 = \int_a^b
h^{\Delta}(t)\sum_{i=1}^{n}H'_{i}(\mathcal{F}_{1}[\tilde{x}],\ldots,\mathcal{F}_{n}[\tilde{x}])
\left(f_{iv}(\bullet)-\int_a^t
f_{iy}(\circ)\Delta \tau \right)\Delta t \\
\left.+\sum_{i=1}^{n}H'_{i}(\mathcal{F}_{1}[\tilde{x}],\ldots,\mathcal{F}_{n}[\tilde{x}])
\left( \int_a^t f_{iy}(\circ)\Delta \tau
h(t)\right)\right|_{t=a}^{t=b}.
\end{multline}
In particular, equation \eqref{eq:aft:IP} holds for all variations which are zero at both ends.
For all such $h$'s the second term in \eqref{eq:aft:IP} is zero and by the
Dubois-Reymond Lemma~\ref{lemma:DR}, we have
\begin{equation}\label{eq:EL}
\sum_{i=1}^{n}H'_{i}(\mathcal{F}_{1}[\tilde{x}],\ldots,\mathcal{F}_{n}[\tilde{x}])
\left(f_{iv}(\bullet)-\int_a^t f_{iy}(\circ)\Delta \tau
\right)\Delta t=c,
\end{equation}
for some $c\in \mathbb{R}$ and all $t\in[a,b]$. Hence, equation
\eqref{Euler} holds for all $t \in [a,b]^\kappa$. Equation
\eqref{eq:aft:IP} must be satisfied for all admissible values of
$h(a)$ and $h(b)$. Consequently, equations \eqref{eq:aft:IP} and
\eqref{eq:EL} imply that
\begin{multline*}
0=\left(c+\sum_{i=1}^{n}H'_{i}(\mathcal{F}_{1}[\tilde{x}],\ldots,\mathcal{F}_{n}[\tilde{x}])\int_a^b
f_{iy}(\bullet)\Delta t\right)h(b)\\
-\left(c+\sum_{i=1}^{n}H'_{i}(\mathcal{F}_{1}[\tilde{x}],\ldots,\mathcal{F}_{n}[\tilde{x}])\int_a^a
f_{iy}(\bullet)\Delta t\right)h(a).
\end{multline*}
From the properties of the delta integral and from \eqref{eq:EL}, it
follows that
\begin{multline}
\label{eq:1}
0=h(b)\left\{\sum_{i=1}^{n}H'_{i}(\mathcal{F}_{1}[\tilde{x}],\ldots,
\mathcal{F}_{n}[\tilde{x}])\left(f_{iv}(\rho(b),\tilde{x}^{\sigma}(\rho
(b)),\tilde{x}^{\Delta}(\rho(b)))\right.\right.\\
+\left.\left.\int_{\rho(b)}^bf_{iy}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))\Delta
t\right)\right\}
-h(a)\left(\sum_{i=1}^{n}H'_{i}(\mathcal{F}_{1}[\tilde{x}],\ldots,
\mathcal{F}_{n}[\tilde{x}])f_{iv}(a,\tilde{x}^{\sigma}(a),\tilde{x}^{\Delta}(a))\right).
\end{multline}
If $x(t)$ is not preassigned at either end-point, then $h(a)$ and
$h(b)$ are both completely arbitrary and we conclude that their
coefficients in \eqref{eq:1} must each vanish. It follows that
condition \eqref{nat:l} holds when $x(a)$ is not given, and
condition \eqref{nat:r} holds when $x(b)$ is not given.
\end{proof}
\begin{remark}
\label{rem:mr:reg} Let $\mathbb{T}$ be a regular time scale. Then
from the properties of the delta integral we have
\begin{equation*}
\int_{\rho(b)}^b
f_{iy}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))\Delta t
=\mu(\rho(b))f_{iy}(\rho(b),\tilde{x}^{\sigma}(\rho(b)),\tilde{x}^{\Delta}(\rho(b))).
\end{equation*}
Therefore \eqref{nat:r} can be written in the form
\begin{equation*}
\begin{split}
\sum_{i=1}^{n}H'_{i}(\mathcal{F}_{1}[\tilde{x}],\ldots,
\mathcal{F}_{n}[\tilde{x}])\left\{f_{iv}(\rho(b),\tilde{x}^{\sigma}(\rho
(b)),\tilde{x}^{\Delta}(\rho(b)))\right.\\
+\left.\mu(\rho(b))f_{iy}(\rho(b),\tilde{x}^{\sigma}(\rho(b)),\tilde{x}^{\Delta}(\rho(b)))\right\}=0.
\end{split}
\end{equation*}
\end{remark}
Choosing $\mathbb{T}=\mathbb{R}$ in Theorem~\ref{thm:mr} we
immediately obtain Theorem~3.1 and Equation~(4.1) in \cite{CLP}. The
Euler-Lagrange Equation for the product functional can be deduced
from Theorem~\ref{thm:mr}.
\begin{corollary}\label{cproduct}
If $\tilde{x}$ is a solution of the problem
\begin{equation*}
\begin{gathered}
\mathcal{L}[x]=\left(\int_{a}^{b}f_{1}(t,x^{\sigma}(t),x^{\Delta}(t))\Delta
t\right)\left( \int_{a}^{b}f_{2}(t,x^{\sigma}(t),x^{\Delta}(t))\Delta
t\right),\\
(x(a)=x_{a}), \quad (x(b)=x_{b}),
\end{gathered}
\end{equation*}
then the Euler-Lagrange equation
\begin{multline*}
\mathcal{F}_{2}[\tilde{x}]\left(f_{1v}^{\Delta}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))-
f_{1y}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))\right)\\
+\mathcal{F}_{1}[\tilde{x}]\left(f_{2v}^{\Delta}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))-
f_{2y}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))\right)=0
\end{multline*}
holds for all $t \in [a,b]^\kappa$. Moreover, if $x(a)$ is not
specified, then
\begin{equation*}
\mathcal{F}_{2}[\tilde{x}]f_{1v}(a,\tilde{x}^{\sigma}(a),\tilde{x}^{\Delta}(a))
+\mathcal{F}_{1}[\tilde{x}]f_{2v}(a,\tilde{x}^{\sigma}(a),\tilde{x}^{\Delta}(a))=0;
\end{equation*}
if $x(b)$ is not specified, then
\begin{multline*}
\mathcal{F}_{2}[\tilde{x}]\left(f_{1v}(\rho(b),\tilde{x}^{\sigma}(\rho
(b)),\tilde{x}^{\Delta}(\rho(b)))+\int_{\rho(b)}^bf_{1y}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))\Delta
t\right)\\
+\mathcal{F}_{1}[\tilde{x}]\left(f_{2v}(\rho(b),\tilde{x}^{\sigma}(\rho
(b)),\tilde{x}^{\Delta}(\rho(b)))+\int_{\rho(b)}^bf_{2y}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))\Delta
t\right)=0.
\end{multline*}
\end{corollary}
\begin{remark}
In the particular case $\mathbb{T}=\mathbb{R}$,
Corollary~\ref{cproduct} gives Equation~(3.17) in \cite{CLP}.
\end{remark}
\begin{example}
Consider the problem
\begin{equation}\label{ex:product}
\begin{gathered}
\text{minimize} \quad
\mathcal{L}[x]=\left(\int_{0}^{1}(x^{\Delta}(t))^2\Delta
t\right)\left(\int_{0}^{1}tx^{\Delta}(t) \Delta
t\right)\\
x(0)=0, \quad x(1)=1.
\end{gathered}
\end{equation}
If $\tilde{x}$ is a local minimum of \eqref{ex:product}, then the
Euler-Lagrange equation must hold, i.e.,
\begin{equation}\label{ex:product:euler}
2\tilde{x}^{\Delta \Delta}(t)Q_{2}+Q_{1}=0,
\end{equation}
where
\begin{equation*}
Q_{1}=\mathcal{F}_{1}[\tilde{x}]=\int_{0}^{1}(\tilde{x}^{\Delta}(t))^2\Delta
t, \quad
Q_{2}=\mathcal{F}_{2}[\tilde{x}]=\int_{0}^{1}t\tilde{x}^{\Delta}(t)
\Delta t.
\end{equation*}
If $Q_{2}= 0$, then also $Q_{1}=0$. This contradicts the fact that
on any time scale a global minimizer for the problem
\begin{equation*}
\begin{gathered}
\text{minimize} \quad
\mathcal{F}_{1}[x]=\int_{0}^{1}(x^{\Delta}(t))^2\Delta
t\\
x(0)=0, \quad x(1)=1
\end{gathered}
\end{equation*}
is $\bar{x}(t)=t$ and $\mathcal{F}_{1}[\bar{x}]=1$. Hence,
$Q_{2}\neq 0$ and equation \eqref{ex:product:euler} implies that
candidate solutions for problem \eqref{ex:product} are those
satisfying the delta differential equation
\begin{equation}\label{euler}
\tilde{x}^{\Delta \Delta}(t)=-\frac{Q_1}{2Q_2}
\end{equation}
subject to boundary conditions $x(0)=0$ and $x(1)=1$. Solving
equation \eqref{euler} we obtain
\begin{equation*}
x(t)=-\frac{Q_1}{2Q_2}\int_{0}^{t}\tau\Delta
\tau+\left(1+\frac{Q_1}{2Q_2}\int_{0}^{1}\tau\Delta \tau\right)t.
\end{equation*}
Therefore, a solution of \eqref{euler} depends on the time scale.
Let us consider, for example, $\mathbb{T}=\mathbb{R}$ and
$\mathbb{T}=\left\{0,\frac{1}{2},1\right\}$. On
$\mathbb{T}=\mathbb{R}$ we obtain
\begin{equation}\label{sol:P}
x(t)=-\frac{Q_1}{4Q_2}t^2+\frac{4Q_2+Q_1}{4Q_2}t.
\end{equation}
Substituting \eqref{sol:P} into functionals $\mathcal{F}_1$ and
$\mathcal{F}_2$ gives
\begin{equation}\label{equation:Q1,Q2}
\begin{cases}
\frac{48Q_2^2+Q_1^2}{48Q_2^2}=Q_1\\
\frac{12Q_2-Q_1}{24Q_2}=Q_2.
\end{cases}
\end{equation}
Solving the system of equations \eqref{equation:Q1,Q2} we obtain
\begin{gather*}
\begin{cases}
Q_1=0\\
Q_2=0,
\end{cases}
\quad
\begin{cases}
Q_1=\frac{4}{3}\\
Q_2=\frac{1}{3}.
\end{cases}
\end{gather*}
Therefore,
\begin{equation*}
\tilde{x}(t)=-t^2+2t
\end{equation*}
is a candidate extremizer for problem \eqref{ex:product} on
$\mathbb{T}=\mathbb{R}$. Note that nothing can be concluded as to
whether $\tilde{x}$ gives a minimum, a maximum, or neither of these
for $\mathcal{L}$. \\
The solution of \eqref{euler} on
$\mathbb{T}=\left\{0,\frac{1}{2},1\right\}$ is
\begin{gather}\label{euler:T}
x(t)=
\begin{cases}
0 & \text{ if } t=0 \\
\frac{1}{2}+\frac{Q_1}{16Q_2} & \text{ if } t=\frac{1}{2}\\
1 & \text{ if } t=1.
\end{cases}
\end{gather}
Constants $Q_1$ and $Q_2$ are determined by substituting
\eqref{euler:T} into functionals $\mathcal{F}_1$ and
$\mathcal{F}_2$. The resulting
system of equations is
\begin{equation}\label{equation:T}
\begin{cases}
1+\frac{Q_1^2}{64Q_2^2}=Q_1\\
\frac{1}{4}-\frac{Q_1}{32Q_2}=Q_2.
\end{cases}
\end{equation}
Since system of equations \eqref{equation:T} has no real solutions,
we conclude that there exists no extremizer for problem
\eqref{ex:product} on $\mathbb{T}=\left\{0,\frac{1}{2},1\right\}$
among the set of functions that we consider to be admissible.
\end{example}
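The $\mathbb{T}=\mathbb{R}$ computation in this example can be verified symbolically; the following sketch (illustrative only, assuming the \texttt{sympy} library is available) substitutes \eqref{sol:P} into $\mathcal{F}_1$ and $\mathcal{F}_2$, solves the resulting system \eqref{equation:Q1,Q2}, and recovers the candidate $\tilde{x}(t)=-t^2+2t$.
\begin{verbatim}
# Sketch: symbolic verification of the T = R case (sympy assumed available).
import sympy as sp

t, Q1, Q2 = sp.symbols('t Q1 Q2', real=True)
x = -Q1/(4*Q2)*t**2 + (4*Q2 + Q1)/(4*Q2)*t           # candidate (sol:P)
xp = sp.diff(x, t)
F1 = sp.integrate(xp**2, (t, 0, 1))                   # value of F_1 at x
F2 = sp.integrate(t*xp, (t, 0, 1))                    # value of F_2 at x
sols = sp.solve([sp.Eq(F1, Q1), sp.Eq(F2, Q2)], [Q1, Q2], dict=True)
print(sols)                                           # nontrivial solution: Q1 = 4/3, Q2 = 1/3
print(sp.expand(x.subs({Q1: sp.Rational(4, 3), Q2: sp.Rational(1, 3)})))  # -t**2 + 2*t
\end{verbatim}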
Assuming that the denominator does not vanish, the Euler-Lagrange
equation for the quotient problem can be deduced from
Theorem~\ref{thm:mr}.
\begin{corollary}
\label{cquotient}
If $\tilde{x}$ is a solution of the problem
\begin{equation*}
\begin{gathered}
\mathcal{L}[x]=\frac{\int_{a}^{b}f_{1}(t,x^{\sigma}(t),x^{\Delta}(t))\Delta
t}{\int_{a}^{b}f_{2}(t,x^{\sigma}(t),x^{\Delta}(t))\Delta
t},\\
(x(a)=x_{a}), \quad (x(b)=x_{b}),
\end{gathered}
\end{equation*}
then the Euler-Lagrange equation
\begin{multline*}
f_{1v}^{\Delta}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))-
f_{1y}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))\\
-Q\left(f_{2v}^{\Delta}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))-
f_{2y}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))\right)=0
\end{multline*}
holds for all $t \in [a,b]^\kappa$, where
$Q=\frac{\mathcal{F}_{1}[\tilde{x}]}{\mathcal{F}_{2}[\tilde{x}]}$.
Moreover, if $x(a)$ is not specified, then
\begin{equation*}
f_{1v}(a,\tilde{x}^{\sigma}(a),\tilde{x}^{\Delta}(a))
-Qf_{2v}(a,\tilde{x}^{\sigma}(a),\tilde{x}^{\Delta}(a))=0;
\end{equation*}
if $x(b)$ is not specified, then
\begin{multline*}
f_{1v}(\rho(b),\tilde{x}^{\sigma}(\rho
(b)),\tilde{x}^{\Delta}(\rho(b)))
+\int_{\rho(b)}^bf_{1y}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))\Delta t\\
-Q\left(f_{2v}(\rho(b),\tilde{x}^{\sigma}(\rho
(b)),\tilde{x}^{\Delta}(\rho(b)))
+\int_{\rho(b)}^bf_{2y}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))\Delta t\right)=0.
\end{multline*}
\end{corollary}
\begin{remark}
In the particular situation $\mathbb{T}=\mathbb{R}$,
Corollary~\ref{cquotient} gives Equation~(3.21) in \cite{CLP}.
\end{remark}
\begin{example}\label{ex:q:1}
Consider the problem
\begin{equation}
\label{ex:quotient:1}
\begin{gathered}
\text{minimize} \quad
\mathcal{L}[x]=\frac{\int_{0}^{2}(x^{\Delta}(t))^2 \Delta
t}{\int_{0}^{2}(x^{\Delta}(t)+(x^{\Delta}(t))^2)\Delta
t},\\
x(0)=0, \quad x(2)=4.
\end{gathered}
\end{equation}
If $\tilde{x}$ is a local minimizer for \eqref{ex:quotient:1}, then
the Euler-Lagrange equation must hold, i.e.,
\begin{equation*}
0=[2\tilde{x}^{\Delta}(t)-Q(1+2\tilde{x}^{\Delta}(t))]^\Delta, \quad
t\in[0,2]^{\kappa},
\end{equation*}
where
\begin{equation*}
Q=\frac{\int_{0}^{2}(\tilde{x}^{\Delta}(t))^2 \Delta
t}{\int_{0}^{2}(\tilde{x}^{\Delta}(t)+(\tilde{x}^{\Delta}(t))^2)\Delta
t}.
\end{equation*}
Therefore,
\begin{equation*}
0=2\tilde{x}^{\Delta \Delta}(t)-Q2\tilde{x}^{\Delta \Delta}(t),
\quad t\in[0,2]^{\kappa}.
\end{equation*}
Thus $\tilde{x}^{\Delta \Delta}(t)=0$ or $Q=1$. The solution of the
delta differential equation
\begin{equation*}
\begin{gathered}
x^{\Delta \Delta}(t)=0,\\
x(0)=0, \quad x(2)=4
\end{gathered}
\end{equation*}
does not depend on the time scale and it is $\tilde{x}(t)=2t$.
Observe that $\mathcal{L}[\tilde{x}]=\frac{2}{3}<1$. Therefore,
$\tilde{x}$ is a candidate local minimizer for problem
\eqref{ex:quotient:1}.
\end{example}
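A quick numerical check of the value $\mathcal{L}[\tilde{x}]=\frac{2}{3}$ is given below (an illustrative sketch assuming \texttt{scipy} is available).
\begin{verbatim}
# Sketch: numerical check that x(t) = 2t gives L[x] = 2/3 on T = R.
from scipy.integrate import quad

xp = lambda t: 2.0                                    # x(t) = 2t, so x'(t) = 2
num, _ = quad(lambda t: xp(t)**2, 0, 2)               # = 8
den, _ = quad(lambda t: xp(t) + xp(t)**2, 0, 2)       # = 12
print(num / den)                                      # 0.666... < 1
\end{verbatim}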
\begin{example}
\label{ex:q:2}
Consider the problem
\begin{equation}\label{ex:quotient:2}
\begin{gathered}
\text{extremize} \quad
\mathcal{L}[x]=\frac{\int_{0}^{1}tx^{\Delta}(t) \Delta
t}{\int_{0}^{1}(x^{\Delta}(t))^2\Delta
t},\\
x(0)=0, \quad x(1)=1.
\end{gathered}
\end{equation}
The Euler-Lagrange equation for this problem is
\begin{equation*}
0=1-2Qx^{\Delta \Delta}(t),
\end{equation*}
where $Q$ is the value of functional $\mathcal{L}$ in a solution of
\eqref{ex:quotient:2}. Since $Q\neq 0$, it follows that
\begin{equation}
\label{ex:2}
x^{\Delta \Delta}(t)=\frac{1}{2Q}.
\end{equation}
Solving equation \eqref{ex:2} we obtain
\begin{equation*}
x(t)=\frac{1}{2Q}\int_{0}^{t}\tau\Delta
\tau+\left(1-\frac{1}{2Q}\int_{0}^{1}\tau\Delta \tau\right)t.
\end{equation*}
Therefore, a solution of \eqref{ex:2} depends on the time scale. Let
us consider, for example, $\mathbb{T}=\mathbb{R}$ and
$\mathbb{T}=\{0,\frac{1}{2},1\}$. On $\mathbb{T}=\mathbb{R}$ we
obtain
\begin{equation}\label{sol:R}
x(t)=\frac{1}{4Q}t^2+\frac{4Q-1}{4Q}t.
\end{equation}
Substituting \eqref{sol:R} into functional $\mathcal{L}$ yields
\begin{equation}\label{equation:Q}
\frac{24Q^2+2Q}{48Q^2+1}=Q.
\end{equation}
Solving equation \eqref{equation:Q} we obtain
$Q\in\left\{\frac{1}{4}-\frac{\sqrt{3}}{6},0,\frac{1}{4}+\frac{\sqrt{3}}{6}\right\}$.
Therefore,
\begin{equation*}
x_{1}(t)=\frac{3}{3-2\sqrt{3}}t^2+\frac{2\sqrt{3}}{2\sqrt{3}-3}t
\end{equation*}
is a candidate local minimizer while
\begin{equation*}
x_{2}(t)=\frac{3}{3+2\sqrt{3}}t^2+\frac{2\sqrt{3}}{2\sqrt{3}+3}t
\end{equation*}
is a candidate local maximizer for problem \eqref{ex:quotient:2} on
$\mathbb{T}=\mathbb{R}$. \\
The solution of \eqref{ex:2} on
$\mathbb{T}=\left\{0,\frac{1}{2},1\right\}$ is
\begin{gather}\label{sol1:T}
x(t)=
\begin{cases}
0 & \text{ if } t=0 \\
\frac{1}{2}-\frac{1}{16Q} & \text{ if } t=\frac{1}{2}\\
1 & \text{ if } t=1.
\end{cases}
\end{gather}
The constant $Q$ is determined by substituting \eqref{sol1:T} into
$\mathcal{L}$. The resulting equation is
\begin{equation}\label{sol2:T}
\frac{1}{4}+\frac{1}{32Q}=Q+\frac{1}{64Q} \, .
\end{equation}
Solving \eqref{sol2:T} we obtain $Q\in
\left\{\frac{1}{8}-\frac{\sqrt{2}}{8},\frac{1}{8}+\frac{\sqrt{2}}{8}\right\}$
and stationary functions are
\begin{gather}\label{sol:T:1}
x_{1}(t)=
\begin{cases}
0 & \text{ if } t=0 \\
\frac{\sqrt{2}}{2\sqrt{2}-2} & \text{ if } t=\frac{1}{2}\\
1 & \text{ if } t=1,
\end{cases}
\end{gather}
and \begin{gather}\label{sol:T:2}
x_{2}(t)=
\begin{cases}
0 & \text{ if } t=0 \\
\frac{\sqrt{2}}{2\sqrt{2}+2} & \text{ if } t=\frac{1}{2}\\
1 & \text{ if } t=1.
\end{cases}
\end{gather}
\begin{figure}[ht]
\begin{minipage}[b]{0.49\linewidth}
\centering
\includegraphics[scale=0.26]{ex6_min}
\caption{The extremal minimizer of Example~\ref{ex:q:2} for
$\mathbb{T}=\mathbb{R}$ and $\mathbb{T}=\{0,\frac{1}{2},1\}$.}
\label{fig1:Ex6}
\end{minipage}
\begin{minipage}[b]{0.49\linewidth}
\centering
\includegraphics[scale=0.26]{ex6_max}
\caption{The extremal maximizer of Example~\ref{ex:q:2} for
$\mathbb{T}=\mathbb{R}$ and $\mathbb{T}=\{0,\frac{1}{2},1\}$.}
\label{fig2:Ex6}
\end{minipage}
\end{figure}
Therefore \eqref{sol:T:1} is a candidate local minimizer while
\eqref{sol:T:2} is a candidate local maximizer for problem
\eqref{ex:quotient:2} on
$\mathbb{T}=\left\{0,\frac{1}{2},1\right\}$.
\end{example}
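Both stationarity equations appearing in this example can be solved directly; the sketch below (illustrative only, \texttt{sympy} assumed) reproduces the values of $Q$ found above for $\mathbb{T}=\mathbb{R}$ and $\mathbb{T}=\{0,\frac{1}{2},1\}$.
\begin{verbatim}
# Sketch: solving the stationarity equations of the example symbolically.
import sympy as sp

Q = sp.symbols('Q', real=True)
# T = R, equation (equation:Q): roots 0, 1/4 - sqrt(3)/6, 1/4 + sqrt(3)/6
print(sp.solve(sp.Eq((24*Q**2 + 2*Q) / (48*Q**2 + 1), Q), Q))
# T = {0, 1/2, 1}, equation (sol2:T): roots 1/8 - sqrt(2)/8, 1/8 + sqrt(2)/8
print(sp.solve(sp.Eq(sp.Rational(1, 4) + 1/(32*Q), Q + 1/(64*Q)), Q))
\end{verbatim}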
\begin{example}\label{ex:SL}
Consider the problem
\begin{equation}
\label{ex:SL:1}
\begin{gathered}
\text{extremize} \quad
\mathcal{L}[x]=\frac{\int_{a}^{b}[(x^{\Delta}(t))^2-q(t)(x^{\sigma}(t))^2]
\Delta t}{\int_{a}^{b}(x^{\sigma}(t))^2\Delta
t},\\
x(a)=0, \quad x(b)=0,
\end{gathered}
\end{equation}
where $q:[a,b]\rightarrow \mathbb{R}$ is a continuous function. The
Euler-Lagrange equation for this problem is
\begin{equation}\label{ex:SL:2}
x^{\Delta \Delta}(t)+q(t)x^{\sigma}(t)+Qx^{\sigma}(t)=0,
\end{equation}
subject to
\begin{equation}\label{ex:SL:3}
x(a)=0, \quad x(b)=0,
\end{equation}
where $Q$ is the value of functional $\mathcal{L}$ in a solution of
\eqref{ex:SL:1}. It is easily seen that
\eqref{ex:SL:2}--\eqref{ex:SL:3} is a case of the Sturm-Liouville
eigenvalue problem on time scales (see \cite{ABW} and
\cite{Rui:Del:ISO}). It follows that the problem of determining
eigenfunctions of \eqref{ex:SL:2} subject to \eqref{ex:SL:3} is
equivalent to the problem of determining functions satisfying
\eqref{ex:SL:3} which render $\mathcal{L}$ stationary.
\end{example}
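In the classical case $\mathbb{T}=\mathbb{R}$ with $q\equiv 0$, this link between stationary values of $\mathcal{L}$ and Sturm--Liouville eigenvalues can be illustrated numerically: the smallest Dirichlet eigenvalue of $-x''=Qx$ on $(0,1)$ is $\pi^2$, and a standard finite-difference discretization (sketch below, \texttt{numpy} assumed; not part of the results above) recovers it.
\begin{verbatim}
# Sketch: first Dirichlet eigenvalue of -x'' = Q x on (0,1) via finite differences.
import numpy as np

n = 200
h = 1.0 / (n + 1)
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2   # discretized -d^2/dt^2
print(np.linalg.eigvalsh(A)[0], np.pi**2)   # ~9.8694 vs 9.8696...
\end{verbatim}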
\section{Isoperimetric problems}
\label{sec:Iso}
Let us consider now the general (non-classical) isoperimetric
problem on time scales. The problem consists of minimizing or
maximizing
\begin{equation}\label{ivp}
\mathcal{L}[x]=H\left(\int_{a}^{b}f_{1}(t,x^{\sigma}(t),x^{\Delta}(t))\Delta
t,\ldots, \int_{a}^{b}f_{n}(t,x^{\sigma}(t),x^{\Delta}(t))\Delta
t\right),
\end{equation}
in the class of functions $x\in C_{rd}^{1}$ satisfying the boundary
conditions
\begin{equation}\label{bivp}
x(a)=x_{a}, \quad x(b)=x_{b}
\end{equation}
and the constraint
\begin{equation}\label{civp}
\mathcal{K}[x]=P\left(\int_{a}^{b}g_{1}(t,x^{\sigma}(t),x^{\Delta}(t))\Delta
t,\ldots, \int_{a}^{b}g_{m}(t,x^{\sigma}(t),x^{\Delta}(t))\Delta
t\right)=k,
\end{equation}
where $x_{a}$, $x_{b}$, $k$ are given real numbers. We assume that:
\begin{itemize}
\item[(i)] functions $H:\mathbb{R}^{n}\rightarrow \mathbb{R}$
and $P:\mathbb{R}^{m}\rightarrow \mathbb{R}$ have continuous
partial derivatives with respect to their arguments and we denote
them by $H'_{i}$, $i=1,\ldots,n$, and $P'_{i}$, $i=1,\ldots,m$;
\item[(ii)] functions $(t,y,v)\rightarrow f_{i}(t,y,v)$, $i=1,\ldots,n$,
and $(t,y,v)\rightarrow g_{j}(t,y,v)$, $j=1,\ldots,m$,
from $[a,b]\times \mathbb{R}^{2}$ to $\mathbb{R}$
have continuous partial derivatives with respect to
$y,v$ for all $t\in[a,b]$ and we denote them by $f_{iy}$, $f_{iv}$
and $g_{jy}$, $g_{jv}$;
\item [(iii)] $f_{i}$, $f_{iy}$, $f_{iv}$, $i=1,\ldots,n$,
and $g_{j}$, $g_{jy}$, $g_{jv}$, $j=1,\ldots,m$,
are rd-continuous in $t$ for all $x\in C_{rd}^{1}$.
\end{itemize}
\begin{definition}
An admissible function $\tilde{x}$ is said to be a \emph{weak local
minimizer} (respectively \emph{weak local maximizer}) for the
isoperimetric problem \eqref{ivp}--\eqref{civp} if there exists
$\delta
>0$ such that $\mathcal{L}[\tilde{x}]\leq \mathcal{L}[x]$
(respectively $\mathcal{L}[\tilde{x}] \geq \mathcal{L}[x]$)
for all admissible $x$ satisfying the boundary conditions
\eqref{bivp}, the isoperimetric constraint \eqref{civp}, and
$\|x-\tilde{x}\|_{1}<\delta$.
\end{definition}
\begin{definition}
We say that $\tilde{x}$ is an extremal for $\mathcal{K}$ if
\begin{equation*}
\sum_{i=1}^{m}P'_{i}(\mathcal{G}_{1}[\tilde{x}],\ldots,\mathcal{G}_{m}[\tilde{x}])
\left(g_{iv}(\bullet)-\int_a^t g_{iy}(\circ)\Delta \tau \right)=c,
\end{equation*}
where $(\bullet) =
\left(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t)\right)$ and
$(\circ) =
\left(\tau,\tilde{x}^{\sigma}(\tau),\tilde{x}^{\Delta}(\tau)\right)$,
for some constant $c$ and for all $t \in [a,b]$. An extremizer
(i.e., a weak local minimizer or a weak local maximizer) for the
problem \eqref{ivp}--\eqref{civp} that is not an extremal for
$\mathcal{K}$ is said to be a normal extremizer; otherwise (i.e., if
it is an extremal for $\mathcal{K}$), the extremizer is said to be
abnormal.
\end{definition}
\begin{theorem}
\label{th:iso}
If $\tilde{x}$ is a normal extremizer for the isoperimetric problem
\eqref{ivp}--\eqref{civp}, then there exists a real $\lambda$ such that
\begin{multline}\label{iso}
\sum_{i=1}^{n}H'_{i}(\mathcal{F}_{1}[\tilde{x}],\ldots,
\mathcal{F}_{n}[\tilde{x}])\left(f_{iv}^{\Delta}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))
-f_{iy}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))\right)\\
-\lambda\sum_{i=1}^{m}P'_{i}(\mathcal{G}_{1}[\tilde{x}],\ldots,
\mathcal{G}_{m}[\tilde{x}])\left(g_{iv}^{\Delta}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))
- g_{iy}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))\right)=0
\end{multline}
for all $t \in [a,b]^\kappa$.
\end{theorem}
\begin{proof}
Consider a variation of $\tilde{x}$, say
$\bar{x}=\tilde{x}+\varepsilon_{1}h_{1}+\varepsilon_{2}h_{2}$, where
$h_{i}\in C_{rd}^{1}$ and $h_{i}(a)=h_{i}(b)=0$, $i=1,2$, and
$\varepsilon_{i}$ is a sufficiently small parameter
($\varepsilon_{1}$ and $\varepsilon_{2}$ must be such that
$\|\bar{x}-\tilde{x}\|_{1}<\delta$ for some $\delta>0$). Here,
$h_{1}$ is an arbitrary fixed function and $h_{2}$ is a fixed
function that will be chosen later. Define the real function
\begin{equation*}
\bar{K}(\varepsilon_{1},\varepsilon_{2})=\mathcal{K}[\bar{x}]
=P\left(\int_{a}^{b}g_{1}(t,\bar{x}^{\sigma}(t),\bar{x}^{\Delta}(t))\Delta t,
\ldots, \int_{a}^{b}g_{m}(t,\bar{x}^{\sigma}(t),\bar{x}^{\Delta}(t))\Delta t\right)-k.
\end{equation*}
We have
\begin{equation*}
\left.\frac{\partial\bar{K}}{\partial
\varepsilon_{2}}\right|_{(0,0)}
=\sum_{i=1}^{m}P'_{i}(\mathcal{G}_{1}[\tilde{x}],\ldots,\mathcal{G}_{m}[\tilde{x}])
\int_a^b \left[ g_{iy}(\bullet) h_{2}^{\sigma}(t) + g_{iv}(\bullet)
h_{2}^{\Delta}(t)\right]\Delta t,
\end{equation*}
where $(\bullet) =
\left(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t)\right)$. Since
$h_{2}(a)=h_{2}(b)=0$, integration by parts gives
\begin{equation*}
\int_{a}^{b}h_{2}^{\Delta}(t)\sum_{i=1}^{m}P'_{i}(\mathcal{G}_{1}[\tilde{x}],
\ldots,\mathcal{G}_{m}[\tilde{x}])
\left(g_{iv}(\bullet)-\int_a^t g_{iy}(\circ)\Delta \tau \right)\Delta t,
\end{equation*}
where $(\circ) =
\left(\tau,\tilde{x}^{\sigma}(\tau),\tilde{x}^{\Delta}(\tau)\right)$.
Since $\tilde{x}$ is a normal extremizer and hence not an extremal for
$\mathcal{K}$, Lemma~\ref{lemma:DR} guarantees that there exists $h_{2}$ such that
$\left.\frac{\partial\bar{K}}{\partial
\varepsilon_{2}}\right|_{(0,0)}\neq 0$. Since $\bar{K}(0,0)=0$, by
the implicit function theorem we conclude that there exists a
function $\varepsilon_{2}$ defined in the neighborhood of zero, such
that $\bar{K}(\varepsilon_{1},\varepsilon_{2}(\varepsilon_{1}))=0$,
i.e., we may choose a subset of variations $\bar{x}$ satisfying the
isoperimetric constraint. \\
Let us now consider the real function
\begin{equation*}
\bar{L}(\varepsilon_{1},\varepsilon_{2})=\mathcal{L}[\bar{x}]
=H\left(\int_{a}^{b}f_{1}(t,\bar{x}^{\sigma}(t),\bar{x}^{\Delta}(t))\Delta t,
\ldots, \int_{a}^{b}f_{n}(t,\bar{x}^{\sigma}(t),\bar{x}^{\Delta}(t))\Delta t\right).
\end{equation*}
By hypothesis, $(0,0)$ is an extremal of $\bar{L}$ subject to the
constraint $\bar{K}=0$ and $\nabla \bar{K}(0,0)\neq \textbf{0}$. By
the Lagrange multiplier rule, there exists some real $\lambda$ such
that $\nabla(\bar{L}(0,0)-\lambda\bar{K}(0,0))=\textbf{0}$. Having
in mind that $h_{1}(a)=h_{1}(b)=0$, we can write
\begin{equation*}
\left.\frac{\partial\bar{L}}{\partial
\varepsilon_{1}}\right|_{(0,0)}
=\int_{a}^{b}h_{1}^{\Delta}(t)\sum_{i=1}^{n}H'_{i}(\mathcal{F}_{1}[\tilde{x}],
\ldots,\mathcal{F}_{n}[\tilde{x}])
\left(f_{iv}(\bullet)-\int_a^t f_{iy}(\circ)\Delta \tau
\right)\Delta t
\end{equation*}
and
\begin{equation*}
\left.\frac{\partial\bar{K}}{\partial
\varepsilon_{1}}\right|_{(0,0)}
=\int_{a}^{b}h_{1}^{\Delta}(t)\sum_{i=1}^{m}P'_{i}(\mathcal{G}_{1}[\tilde{x}],
\ldots,\mathcal{G}_{m}[\tilde{x}])
\left(g_{iv}(\bullet)-\int_a^t g_{iy}(\circ)\Delta \tau
\right)\Delta t.
\end{equation*}
Therefore,
\begin{multline}\label{iso:3}
\int_{a}^{b}h_{1}^{\Delta}(t)\left\{\sum_{i=1}^{n}H'_{i}(\mathcal{F}_{1}[\tilde{x}],
\ldots,\mathcal{F}_{n}[\tilde{x}])
\left(f_{iv}(\bullet)-\int_a^t f_{iy}(\circ)\Delta \tau \right)\right.\\
-\left.\lambda \sum_{i=1}^{m}P'_{i}(\mathcal{G}_{1}[\tilde{x}],
\ldots,\mathcal{G}_{m}[\tilde{x}])
\left(g_{iv}(\bullet)-\int_a^t g_{iy}(\circ)\Delta \tau
\right)\right\}\Delta t=0.
\end{multline}
As \eqref{iso:3} holds for any $h_{1}$, by Lemma~\ref{lemma:DR}, we
have
\begin{multline}\label{iso:4}
\sum_{i=1}^{n}H'_{i}(\mathcal{F}_{1}[\tilde{x}],\ldots,\mathcal{F}_{n}[\tilde{x}])
\left(f_{iv}(\bullet)-\int_a^t f_{iy}(\circ)\Delta \tau \right)\\
-\lambda
\sum_{i=1}^{m}P'_{i}(\mathcal{G}_{1}[\tilde{x}],\ldots,\mathcal{G}_{m}[\tilde{x}])
\left(g_{iv}(\bullet)-\int_a^t g_{iy}(\circ)\Delta \tau \right)=c,
\end{multline}
for some $c\in \mathbb{R}$. Applying the delta derivative to both
sides of equation \eqref{iso:4}, we get \eqref{iso}.
\end{proof}
\begin{remark}
Choosing $H,P:\mathbb{R}\rightarrow \mathbb{R}$ with $H=P=\mathrm{id}$ in
Theorem~\ref{th:iso}, we immediately obtain Theorem~3.4 in
\cite{Rui:Del:ISO} and a particular case of Theorem~3.4 in
\cite{Mal:Tor:09}.
\end{remark}
One can easily cover abnormal extremizers within our result by
introducing an extra multiplier $\lambda_{0}$.
\begin{theorem}
\label{th:iso:abn}
If $\tilde{x}$ is an extremizer for the isoperimetric problem
\eqref{ivp}--\eqref{civp}, then there exist two constants
$\lambda_{0}$ and $\lambda$, not both zero, such that
\begin{multline}
\label{iso:abn}
\lambda_{0}\sum_{i=1}^{n}H'_{i}(\mathcal{F}_{1}[\tilde{x}],
\ldots,\mathcal{F}_{n}[\tilde{x}])\left(f_{iv}^{\Delta}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))
-f_{iy}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))\right)\\
-\lambda\sum_{i=1}^{m}P'_{i}(\mathcal{G}_{1}[\tilde{x}],\ldots,
\mathcal{G}_{m}[\tilde{x}])\left(g_{iv}^{\Delta}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))
-g_{iy}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))\right)=0
\end{multline}
for all $t \in [a,b]^\kappa$.
\end{theorem}
\begin{proof}
Following the proof of Theorem~\ref{th:iso}, since $(0,0)$ is an
extremal of $\bar{L}$ subject to the constraint $\bar{K}=0$, the
extended Lagrange multiplier rule (see, for instance,
\cite[Theorem~4.1.3]{Brunt}) asserts the existence of reals $\lambda_{0}$ and
$\lambda$, not both zero, such that
$\nabla(\lambda_{0}\bar{L}(0,0)-\lambda\bar{K}(0,0))=\textbf{0}$.
Therefore,
\begin{equation*}
\lambda_{0}\left.\frac{\partial\bar{L}}{\partial
\varepsilon_{1}}\right|_{(0,0)}-\lambda\left.\frac{\partial\bar{K}}{\partial
\varepsilon_{1}}\right|_{(0,0)}=0
\end{equation*}
\begin{multline}\label{abn:1}
\Leftrightarrow
\int_{a}^{b}h_{1}^{\Delta}(t)\left\{\lambda_{0}\sum_{i=1}^{n}H'_{i}(\mathcal{F}_{1}[\tilde{x}],
\ldots,\mathcal{F}_{n}[\tilde{x}])
\left(f_{iv}(\bullet)-\int_a^t f_{iy}(\circ)\Delta \tau \right)\right.\\
-\left.\lambda
\sum_{i=1}^{m}P'_{i}(\mathcal{G}_{1}[\tilde{x}],\ldots,\mathcal{G}_{m}[\tilde{x}])
\left(g_{iv}(\bullet)-\int_a^t g_{iy}(\circ)\Delta \tau
\right)\right\}\Delta t=0.
\end{multline}
Since \eqref{abn:1} holds for any $h_{1}$, it follows by
Lemma~\ref{lemma:DR} that
\begin{multline}\label{abn:2}
\lambda_{0}\sum_{i=1}^{n}H'_{i}(\mathcal{F}_{1}[\tilde{x}],\ldots,\mathcal{F}_{n}[\tilde{x}])
\left(f_{iv}(\bullet)-\int_a^t f_{iy}(\circ)\Delta \tau \right)\\
-\lambda
\sum_{i=1}^{m}P'_{i}(\mathcal{G}_{1}[\tilde{x}],\ldots,\mathcal{G}_{m}[\tilde{x}])
\left(g_{iv}(\bullet)-\int_a^t g_{iy}(\circ)\Delta \tau \right)=c,
\end{multline}
for some $c\in \mathbb{R}$. The desired condition \eqref{iso:abn}
follows by delta differentiation of \eqref{abn:2}.
\end{proof}
\begin{remark}
If $\tilde{x}$ is a normal extremizer for the isoperimetric problem
\eqref{ivp}--\eqref{civp}, then we can choose $\lambda_{0}=1$ in
Theorem~\ref{th:iso:abn} and obtain Theorem~\ref{th:iso}. For
abnormal extremizers, Theorem~\ref{th:iso:abn} holds with
$\lambda_{0}=0$. The condition $(\lambda_{0},\lambda)\neq\textbf{0}$
guarantees that Theorem~\ref{th:iso:abn} is a useful necessary
condition.
\end{remark}
\begin{corollary}
\label{c:iso:abn}
\begin{itemize}
\item[(i)] If $\tilde{x}$ is an extremizer for the isoperimetric
problem
\begin{equation*}
\begin{gathered}
\text{extremize}\quad
\mathcal{L}[x]=\left(\int_{a}^{b}f_{1}(t,x^{\sigma}(t),x^{\Delta}(t))\Delta
t\right)\left( \int_{a}^{b}f_{2}(t,x^{\sigma}(t),x^{\Delta}(t))\Delta
t\right),\\
x(a)=x_{a}, \quad x(b)=x_{b},
\end{gathered}
\end{equation*}
subject to the constraint
\begin{equation*}
\mathcal{K}[x]=\left(\int_{a}^{b}g_{1}(t,x^{\sigma}(t),x^{\Delta}(t))\Delta
t\right)\left( \int_{a}^{b}g_{2}(t,x^{\sigma}(t),x^{\Delta}(t))\Delta
t\right)=k,
\end{equation*}
then there exist two constants $\lambda_{0}$ and $\lambda$, not both
zero, such that
\begin{equation*}
\lambda_{0}\left\{\mathcal{F}_{2}[\tilde{x}]\left(f_{1v}^{\Delta}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))
-f_{1y}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))\right)\right.
\end{equation*}
\begin{equation*}
\left.+\mathcal{F}_{1}[\tilde{x}]\left(f_{2v}^{\Delta}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))
-f_{2y}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))\right)\right\}
\end{equation*}
\begin{equation*}
-\lambda \left\{
\mathcal{G}_{2}[\tilde{x}]\left(g_{1v}^{\Delta}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))
-g_{1y}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))\right)\right.
\end{equation*}
\begin{equation*}
+\left.\mathcal{G}_{1}[\tilde{x}]\left(g_{2v}^{\Delta}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))
-g_{2y}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))\right)\right\}=0
\end{equation*}
for all $t \in [a,b]^\kappa$.
\item[(ii)] Assume that denominators of $\mathcal{L}$ and $\mathcal{K}$ do not vanish.
If $\tilde{x}$ is an extremizer for the isoperimetric problem
\begin{equation*}
\begin{split}
\text{extremize}\quad
\mathcal{L}[x]=\frac{\int_{a}^{b}f_{1}(t,x^{\sigma}(t),x^{\Delta}(t))\Delta
t}{\int_{a}^{b}f_{2}(t,x^{\sigma}(t),x^{\Delta}(t))\Delta
t}, \quad x(a)=x_{a}, \quad x(b)=x_{b},\\
\end{split}
\end{equation*}
subject to the constraint
\begin{equation*}
\mathcal{K}[x]=\frac{\int_{a}^{b}g_{1}(t,x^{\sigma}(t),x^{\Delta}(t))\Delta
t}{\int_{a}^{b}g_{2}(t,x^{\sigma}(t),x^{\Delta}(t))\Delta
t}=k,
\end{equation*}
then there exist two constants $\lambda_{0}$ and $\lambda$, not both
zero, such that
\begin{equation*}
\lambda_{0}\left\{\mathcal{G}_{2}[\tilde{x}]\left(f_{1v}^{\Delta}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))
-f_{1y}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))\right)\right.
\end{equation*}
\begin{equation*}
\left.-\mathcal{G}_{2}[\tilde{x}]Q_{L}\left(f_{2v}^{\Delta}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))
-f_{2y}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))\right)\right\}
\end{equation*}
\begin{equation*}
-\lambda\left\{\mathcal{F}_{2}[\tilde{x}]\left(g_{1v}^{\Delta}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))
-g_{1y}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))\right)\right.
\end{equation*}
\begin{equation*}
\left.-\mathcal{F}_{2}[\tilde{x}]Q_{K}\left(g_{2v}^{\Delta}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))
-g_{2y}(t,\tilde{x}^{\sigma}(t),\tilde{x}^{\Delta}(t))\right)\right\}=0
\end{equation*}
holds for all $t \in [a,b]^\kappa$, where
$Q_{L}=\frac{\mathcal{F}_{1}[\tilde{x}]}{\mathcal{F}_{2}[\tilde{x}]}$
and
$Q_{K}=\frac{\mathcal{G}_{1}[\tilde{x}]}{\mathcal{G}_{2}[\tilde{x}]}$.
\end{itemize}
\end{corollary}
\begin{example}
\label{ex:iso}
Consider the problem
\begin{equation}
\label{ex:iso:1}
\begin{split}
\text{extremize} \quad
\mathcal{L}[x]=\frac{\int_{0}^{1}(x^{\Delta}(t))^2 \Delta
t}{\int_{0}^{1}tx^{\Delta}(t)\Delta
t},\\
x(0)=0, \quad x(1)=1,
\end{split}
\end{equation}
subject to the constraint
\begin{equation}\label{ex:iso:2}
\mathcal{K}[x]=\int_{0}^{1}tx^{\Delta}(t)\Delta t=1.
\end{equation}
Since
\begin{equation*}
g_{v}(t,x^{\sigma}(t),x^{\Delta}(t))
-\int_{0}^{t}g_{y}(\tau,x^{\sigma}(\tau),x^{\Delta}(\tau))\Delta\tau=t,
\end{equation*}
there are no abnormal extremals for the problem
\eqref{ex:iso:1}--\eqref{ex:iso:2}. Applying Theorem~\ref{th:iso},
we get the delta differential equation
\begin{equation}\label{rw}
2x^{\Delta\Delta}-Q-\lambda=0,
\end{equation}
where $Q$ is the value of functional $\mathcal{L}$ in a solution of
\eqref{ex:iso:1}--\eqref{ex:iso:2}. Solving equation \eqref{rw} we
obtain
\begin{equation*}
x(t)=\frac{Q+\lambda}{2}\int_{0}^{t}\tau\Delta
\tau+\left(1-\frac{Q+\lambda}{2}\int_{0}^{1}\tau\Delta \tau\right)t.
\end{equation*}
Therefore, a solution of \eqref{rw} depends on the time scale. Let
us consider, for example, $\mathbb{T}=\mathbb{R}$ and
$\mathbb{T}=\{0,\frac{1}{2},1\}$. On $\mathbb{T}=\mathbb{R}$ we
obtain
\begin{equation*}
x(t)=3t^{2}-2t
\end{equation*}
as a candidate local minimizer while on
$\mathbb{T}=\{0,\frac{1}{2},1\}$
\begin{gather*}
x(t)=
\begin{cases}
0 & \text{ if } t=0 \\
-1 & \text{ if } t=\frac{1}{2}\\
1 & \text{ if } t=1.
\end{cases}
\end{gather*}
is a candidate local minimizer for the problem
\eqref{ex:iso:1}--\eqref{ex:iso:2}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.4]{ex7}
\caption{The extremal minimizer of Example~\ref{ex:iso} for
$\mathbb{T}=\mathbb{R}$ and $\mathbb{T}=\{0,\frac{1}{2},1\}$.}
\label{fig:Ex7}
\end{figure}
\end{example}
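The $\mathbb{T}=\mathbb{R}$ candidate above is easy to check symbolically; the sketch below (illustrative only, \texttt{sympy} assumed) verifies the isoperimetric constraint and evaluates the numerator of $\mathcal{L}$ at $x(t)=3t^2-2t$.
\begin{verbatim}
# Sketch: checking the T = R candidate of the isoperimetric example.
import sympy as sp

t = sp.symbols('t')
x = 3*t**2 - 2*t
xp = sp.diff(x, t)
print(sp.integrate(t*xp, (t, 0, 1)))    # K[x] = 1, so the constraint holds
print(sp.integrate(xp**2, (t, 0, 1)))   # numerator of L[x] = 4; hence L[x] = 4
\end{verbatim}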
\section*{Acknowledgements}
The authors were supported by the R\&D unit CEOC, via FCT and the EC
fund FEDER/POCI 2010. The first author was also supported by
Bia{\l}ystok University of Technology, via a project of the Polish
Ministry of Science and Higher Education ``Wsparcie miedzynarodowej
mobilnosci naukowcow''.
We are grateful to two anonymous reviewers for their comments.
| {
"timestamp": "2010-07-06T02:01:24",
"yymm": "1007",
"arxiv_id": "1007.0584",
"language": "en",
"url": "https://arxiv.org/abs/1007.0584",
"abstract": "In this paper we consider the problem of the calculus of variations for a functional which is the composition of a certain scalar function $H$ with the delta integral of a vector valued field $f$, i.e., of the form $H(\\int_{a}^{b}f(t,x^{\\sigma}(t),x^{\\Delta}(t))\\Delta t)$. Euler-Lagrange equations, natural boundary conditions for such problems as well as a necessary optimality condition for isoperimetric problems, on a general time scale, are given. A number of corollaries are obtained, and several examples illustrating the new results are discussed in detail.",
"subjects": "Optimization and Control (math.OC)",
"title": "Euler-Lagrange equations for composition functionals in calculus of variations on time scales",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109538667758,
"lm_q2_score": 0.7185943985973773,
"lm_q1q2_score": 0.7076796351260052
} |
https://arxiv.org/abs/1002.3017 | Strictly positive definite functions on compact abelian groups | We study the Fourier characterisation of strictly positive definite functions on compact abelian groups. Our main result settles the case $G = F \times \mathbb{T}^r$, with $r \in \mathbb{N}$ and $F$ finite. The characterisation obtained for these groups does not extend to arbitrary compact abelian groups; it fails in particular for all torsion-free groups. | \section{Introduction}
Let $G$ be a compact abelian group. A complex-valued function $f$ on $G$ is
called positive definite if for all $x_1,\ldots,x_n$ in $G$
and all $c_1,\ldots,c_n$ in $ \mathbb{C}\setminus\{0\}$ we have \begin{equation} \underset{i,j=1}{\overset{n}{\sum}}c_i\overline{c_j}f(x_j^{-1}x_i)\geq
0.\end{equation}
If the inequality above becomes strict whenever the $x_i$ are distinct, we call $f$ strictly positive definite.
By $ \mathfrak{P}(G)$ we denote the set of continuous positive definite functions on
$G$ and by $\mathfrak{P}^+(G)$ the subset of strictly positive definite
functions.
Bochner's theorem \cite[Theorem 1.4.3]{R} provides a neat characterisation of
the elements of $\mathfrak{P}(G)$ via the Fourier transform: $f \in \mathfrak{P}(G)$ iff
$\widehat{f} \ge 0$. For $\mathfrak{P}^+(G)$ however, no simple general characterisation
is known. For the compact group $\mathbb{T}$ of complex numbers with modulus
one, the question has been studied in \cite{ER,P,Su}, and solved in \cite{ER,P}.
Partial results for general, not necessarily abelian, compact groups may be
found in \cite{AP}.
It is easy to observe that in order to decide whether $f \in \mathfrak{P}
(G)$ is in fact in $\mathfrak{P}^+ (G)$, only the support of
$\widehat{f}$ needs to be known; see Theorem \ref{thm:spd_sets} for a precise statement. Thus,
strict positive definiteness translates to a property of subsets of the dual
group $\widehat{G}$, and we accordingly call $K \subset \widehat{G}$ {\bf
strictly positive definite} if it is the support of $\widehat{f}$, for some $f
\in \mathfrak{P}^+ (G)$. The paper compares this notion to another property, called
ubiquity: A subset $K \subset \widehat{G}$ is called {\bf ubiquitous} if for all
$H < \widehat{G}$ of finite index and all $ \gamma \in \widehat{G}$, the
intersection $\gamma H \cap K$ is nonempty. It is fairly easy to see that $K$ is
ubiquitous if it is strictly positive definite; see Lemma \ref{lem:spd_ubiq}
below. The chief result of our paper states that the converse is true for $G = F
\times \mathbb{T}^r$:
\begin{theorem}
\label{thm:main} Let $G = F \times \mathbb{T}^r $, and let $K \subset
\widehat{G}$. Then $K$ is strictly positive definite iff $K$ is ubiquitous.
\end{theorem}
The case $r=1$, $F$ trivial was settled in \cite{ER,P}. The paper \cite{Su}
established partial results, apparently unaware of the previous source. While
strictly speaking the results of \cite{Su} are contained in the earlier paper,
we have found \cite{Su} to be a useful source of ideas; in particular the notion
of ubiquity is taken from that paper.
The paper proceeds as follows: Section \ref{sect:prelim} collects general
remarks and definitions relating to strict positive definiteness. We observe
that if $\mathfrak{P}^+(G)$ is nonempty, then $G$ is metrisable (Corollary
\ref{cor:spd_metrisable}). We then prove the implication ``strict positive
definite $\Rightarrow$ ubiquitous'' (Lemma \ref{lem:spd_ubiq}). A closer look at
the torsion subgroup of $G$ allows us to determine interesting classes of examples:
the converse of Lemma~\ref{lem:spd_ubiq} is true for all torsion groups, and fails for
all torsion-free groups. In the final section we focus on the proof of Theorem
\ref{thm:main}.
\section{Preliminaries and generalities}
\label{sect:prelim}
Throughout this paper, $G$ denotes a compact abelian group, and $\widehat{G}$
its character group. Throughout this section, we will write the group operations
in $G$ and $\widehat{G}$ multiplicatively.
In the context of compact groups, Bochner's theorem yields that every function
$f$ in $ \mathfrak{P}(G)$ has a uniformly converging Fourier series with positive
coefficients. I.e., there is a subset $ K$ of $\widehat{G}$ and a sequence
$(a_\gamma)_{\gamma\in K}$ of strictly positive numbers such that \begin{equation}
\label{eqn:pd_fourseries} f(x)=\underset{\gamma\in K}\sum a_\gamma\gamma(x).
\end{equation} Let $F=\{x_1,\ldots,x_n\}$ be a subset of $G$ and $c=(c_1,\ldots,c_n)^T$ a
vector in $\mathbb{C}^n$. A function $p_{c,F}$ on $\widehat{G}$ is defined via \begin{equation}
\gamma\mapsto \underset{i=1}{\overset{n}{\sum}}c_i\gamma(x_i)\end{equation} and we call
such a function a trigonometric polynomial on $\widehat{G}$. Note that the space
of trigonometric polynomials is closed under addition, multiplication and
complex conjugation. Furthermore, since characters over abelian groups are
linearly independent, any trigonometric polynomial arising from pairwise
different $x_i$ with nonzero coefficients $c_i$ will be nonzero. If $f$ is given
by (\ref{eqn:pd_fourseries}), then one has \begin{equation} \label{eqn:spd_sets}
\underset{i,j=1}{\overset{n}{\sum}}c_i\overline{c_j}f(x_j^{-1}x_i)=\underset{\gamma\in
K}\sum a_\gamma\left|\underset{i=1}{\overset{n}{\sum}} c_i\gamma(x_i)\right|^2=\underset{\gamma\in K}\sum
a_\gamma|p_{c,F}(\gamma)|^2.\end{equation}
In particular, (\ref{eqn:spd_sets}) vanishes iff the trigonometric polynomial
$p_{c,F}$ vanishes on $K$. This observation motivates the following definition:
\begin{defi}
A subset $K$ of $\widehat{G}$ is called strictly positive definite if there is
no trigonometric polynomial vanishing on $K$ except for the zero polynomial.
\end{defi}
The above calculations have established the following result:
\begin{theorem} \label{thm:spd_sets}
A function $f \in \mathfrak{P}(G)$ is in $ \mathfrak{P}^+(G)$ iff the support of $ \hat{f}$ is
strictly positive definite.
\end{theorem}
Let us now collect some basic properties of strictly positive definite sets. The
following arguments will rely mainly on duality theory. In particular, we recall
the notion of annihilator subgroups: For $M \subset \widehat{G}$, let $M^\bot =
\{ x \in G: \gamma(x) = 1, \forall \gamma \in M \}$. Likewise, $N^\bot = \{
\gamma \in \widehat{G} : \gamma (x) = 1, \forall x \in N \}$ for $N \subset G$.
\begin{lemma}
Let $K \subset \widehat{G}$ be strictly positive definite. Then $K$ generates
$\widehat{G}$.
\end{lemma}
\begin{proof}
Assume that $H = \langle K \rangle $ is a proper subgroup. Pick a nontrivial
character $\tilde{\chi}$ of the quotient group $\widehat{G}/ H$, then
$\chi(\gamma) = \tilde{\chi}(\gamma H)$ defines a character of $\widehat{G}$. By
Pontryagin duality there exists $x \in G$ such that $\chi(\gamma) = \gamma(x)$.
The nonzero trigonometric polynomial $p(\gamma) = \gamma(x)-1$ vanishes on $H
\supset K$, proving that $K$ is not strictly positive definite.
\end{proof}
\begin{corollary} \label{cor:spd_metrisable}
$\mathfrak{P}^+(G)$ is nonempty iff $G$ is metrisable.
\end{corollary}
\begin{proof}
Note that by \cite[Theorem 2.2.6]{R}, $G$ is metrisable iff $\widehat{G}$ is
countable. Now if $\mathfrak{P}^+(G)$ is nonempty, there exists a strictly positive
definite $K \subset \widehat{G}$. Since $K$ is the support of a converging
Fourier series, $K$ is countable. But then $\widehat{G} = \langle K \rangle$ is
countable.
For the converse, we pick a summable nowhere vanishing family
$(a_{\gamma})_{\gamma \in \widehat{G}}$ of positive numbers, which exists by
countability of $\widehat{G}$. Define $f$ according to (\ref{eqn:pd_fourseries}),
with $K = \widehat{G}$. Since the characters of $G$ are linearly independent, $\widehat{G}$
itself is a strictly positive definite set, and hence Theorem \ref{thm:spd_sets} implies
that $f \in \mathfrak{P}^+(G)$.
\end{proof}
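As a concrete illustration of this construction (not needed for the proof), take $G=\mathbb{T}$ and $a_n = 2^{-|n|}$ for $n\in\widehat{\mathbb{T}}=\mathbb{Z}$. Then $f(e^{i\theta})=\underset{n\in\mathbb{Z}}\sum 2^{-|n|}e^{in\theta}=3/(5-4\cos\theta)$, and (\ref{eqn:spd_sets}) predicts that every matrix $\left(f(x_j^{-1}x_i)\right)_{i,j}$ built from finitely many distinct points of $\mathbb{T}$ is positive definite. The following Python sketch (our illustration; the choice of library and of test points is ours) checks this numerically for one such configuration.
\begin{verbatim}
import numpy as np

def f(theta):
    # f(e^{i theta}) = sum_{n in Z} 2^{-|n|} e^{i n theta} = 3 / (5 - 4 cos(theta))
    return 3.0 / (5.0 - 4.0 * np.cos(theta))

thetas = np.array([0.3, 1.1, 2.0, 3.7, 5.2])      # five distinct points x_i = e^{i theta_i}
gram = f(thetas[:, None] - thetas[None, :])        # entries f(x_j^{-1} x_i)
print(np.linalg.eigvalsh(gram).min() > 0)          # True: the Gram matrix is positive definite
\end{verbatim}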
Let us next establish the general implication from strict positive
definiteness to ubiquity; the central question of this paper is when the
converse of the following result holds.
\begin{lemma} \label{lem:spd_ubiq}
If $K \subset \widehat{G}$ is strictly positive definite, it is ubiquitous.
\end{lemma}
\begin{proof}
First we prove that for proper subgroups $H< \widehat{G}$ of finite index and $
\gamma \in \widehat{G}$ there exists a trigonometric polynomial vanishing
precisely on $\gamma H$: By duality, $H^\bot \cong \left( \widehat{G}/H\right)^{\wedge}$ is finite, and
thus $H = H^{\bot \bot} = \{ x_1,\ldots,x_n \}^\bot$, with $x_1,\ldots,x_n \in
G$. Hence, if we define a trigonometric polynomial $p$ by \begin{equation}
p(\mu)=\underset{i=1}{\overset{n}{\sum}}|\overline{\gamma(x_i)}\mu(x_i)-1|^2 \mbox{ , } (\mu\in \widehat{G}),\end{equation} we
find that $p(\mu)=0$ iff $\gamma^{-1} \mu \in \{x_1,\ldots,x_n \}^\bot$, iff
$\mu\in \gamma H$.
Now, if $K$ is not ubiquitous, then $K \cap \gamma H = \emptyset$ for suitable
$H$ of finite index, and $\gamma \in \widehat{G}$. Write $\widehat{G} \setminus
\gamma H = \bigcup_{i=1}^m \mu_i H$, and pick trigonometric polynomials $p_i$
vanishing precisely on $\mu_i H$. Then $\prod_{i=1}^m p_i$ is a trigonometric
polynomial vanishing precisely outside of $\gamma H$. It is therefore nonzero
and vanishes on $K$, which then cannot be strictly positive definite.
\end{proof}
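To make the construction in this proof concrete (our illustration, not part of the argument), take $\widehat{G}=\mathbb{Z}$ (so $G=\mathbb{T}$), $H=3\mathbb{Z}$ and $\gamma=1$. Then $H^\bot=\{x\in\mathbb{T}: x^3=1\}$ consists of the three cube roots of unity, and the polynomial $p$ constructed above vanishes exactly on the coset $1+3\mathbb{Z}$. A short numerical check in Python:
\begin{verbatim}
import cmath

roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]   # H^perp: cube roots of unity

def p(mu, gamma=1):
    # p(mu) = sum_x |conj(x^gamma) * x^mu - 1|^2, with x running over H^perp
    return sum(abs(x ** (-gamma) * x ** mu - 1) ** 2 for x in roots)

zeros = [mu for mu in range(-9, 10) if p(mu) < 1e-12]
print(zeros)   # [-8, -5, -2, 1, 4, 7]: exactly the coset 1 + 3Z within this window
\end{verbatim}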
Next we characterise finite strictly positive definite subsets. As a byproduct,
we clarify the case of finite groups.
\begin{lemma} \label{lem:finite_spd}
Let $K \subset \widehat{G}$ be finite. Then $K$ is strictly positive definite
iff $G$ is finite and $K = \widehat{G}$.
\end{lemma}
\begin{proof}
Note that by definition, $K$ is strictly positive definite iff the restriction
map $p \mapsto p|_K$, defined on the space of trigonometric polynomials, has
trivial kernel. If $G$ is infinite, the space of trigonometric polynomials on
$\widehat{G}$ is infinite-dimensional; since the restriction map to the
finite-dimensional space of functions on a finite set $K$ then cannot be injective,
this precludes the existence of finite strictly positive definite sets.
Thus, if $K$ is finite, $G$ has to be finite as well, and ubiquity of $K \subset
\widehat{G}$ implies $K = \widehat{G}$. The converse is obvious.
\end{proof}
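For finite $G$ the lemma can also be seen by elementary linear algebra: identify $G=\widehat{G}=\mathbb{Z}/n$; the restriction map $c\mapsto p_{c,G}|_K$ is then given by the $|K|\times n$ submatrix of the discrete Fourier matrix whose rows are indexed by $K$, and this map is injective precisely when $K=\mathbb{Z}/n$. The following Python sketch (our illustration) computes the relevant ranks for $n=6$.
\begin{verbatim}
import numpy as np

n = 6
# entry (gamma, x) is gamma(x) = exp(2 pi i gamma x / n)
dft = np.exp(2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)

def is_spd(K):
    # K is strictly positive definite iff c -> (p_c(gamma))_{gamma in K} has trivial kernel
    return np.linalg.matrix_rank(dft[list(K), :]) == n

print(is_spd(range(n)))          # True:  K = Z/n
print(is_spd({0, 1, 2, 3, 4}))   # False: a proper subset is never strictly positive definite
\end{verbatim}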
Some clarification concerning the converse of Lemma \ref{lem:spd_ubiq} is
obtained by considering the torsion subgroup $G_t$ of $G$, defined as
\[
G_t = \{ x \in G : x^n = e_G, \mbox{ for suitable } n \in \mathbb{N} \}~.
\] The torsion subgroup is usually not closed; for instance, the torsion
subgroup of the torus group is dense. The following observation indicates how
the torsion subgroup is related to ubiquity.
\begin{lemma} \label{lem:dual_torsionsubg}
Let
\[ H_0 = \bigcap_{H< \widehat{G}, [\widehat{G}:H] < \infty} H~.\]
Then
\[
H_0 = G_t^\bot~,~ H_0^{\bot} = \overline{G_t}
\]
\end{lemma}
\begin{proof}
We first prove $G_t \subset H_0^\bot$: By the isomorphism theorem for groups,
$\gamma \in H_0$ iff $ \phi(\gamma) = e$, for all group homomorphisms $\phi$
with finite image. In particular, if $x \in G_t$, the mapping $\widehat{G} \ni
\gamma \mapsto \gamma(x) \in \mathbb{T}$ has finite image,
since $\gamma(x)^n = \gamma(x^n) = 1$. It follows for $\gamma \in H_0$ that
$\gamma(x) = 1$, which means $x \in H_0^\bot$.
For the proof of $G_t^\bot \subset H_0$ let $H< \widehat{G}$ be of finite index.
By duality theory, $H^\bot \cong \left( \widehat{G}/H \right)^{\wedge}$ is finite, hence a subgroup of
$G_t$. But then $G_t^\bot \subset H^{\bot \bot} = H$. Since $H< \widehat{G}$ was an
arbitrary subgroup of finite index, it follows that $G_t^\bot \subset H_0$.
Both inclusions shown so far imply $H_0 = H_0^{\bot \bot} \subset G_t^\bot
\subset H_0$, and thus $H_0 = G_t^\bot$. The second equality follows from this.
\end{proof}
We now settle the extreme cases $G_t=G$ and $G_t= \{ e \}$. First the good news.
\begin{theorem}
Let $G$ be a torsion group, and $K \subset \widehat{G}$. Then $K$ is strictly
positive definite iff $K$ is ubiquitous.
\end{theorem}
\begin{proof}
Only the ``if''-part needs to be shown.
Suppose that $K \subset \widehat{G}$ is not strictly positive definite. Hence
there is a nontrivial trigonometric polynomial
\[
p : \widehat{G} \ni \gamma \mapsto \sum_{i=1}^n c_i \gamma(x_i)
\] on $\widehat{G}$ vanishing on $K$. Since $G$ is a torsion group, $\langle
x_1,\ldots,x_n \rangle$ is finite, and by duality theory, $H = \{ x_1,\ldots,x_n
\}^\bot \subset \widehat{G}$ has finite index. Furthermore, for any $ \gamma \in
\widehat{G}$ and $\eta \in H$, one has
\[
p(\gamma \eta) = \sum_{i=1}^n c_i \gamma(x_i) \eta(x_i) = p(\gamma) ~,
\] since $\eta(x_i) = 1$ for all $i$. Hence $p$ is constant on cosets of $H$, and its
zero set is a union of $H$-cosets containing $K$. As $p$ is nonzero, there is an
$H$-coset on which $p$ does not vanish; this coset is disjoint from $K$, so
$\widehat{G} \setminus K$ contains an $H$-coset. Thus $K$ is not ubiquitous.
\end{proof}
The theorem applies to groups of the form $G = \prod_{i=1}^\infty F_i$, with
finite groups $F_i$ of bounded order. The other extreme provides a whole class
of examples for which the converse of Lemma \ref{lem:spd_ubiq} fails.
\begin{theorem} \label{thm:torsionfree}
Suppose that $G$ is nontrivial and torsion-free. Then every nonempty subset of
$\widehat{G}$ is ubiquitous, but finite subsets are not strictly positive
definite.
\end{theorem}
\begin{proof}
Since $G_t$ is trivial, Lemma \ref{lem:dual_torsionsubg} gives
$H_0 = G_t^\bot = \widehat{G}$.
Hence $\widehat{G}$ has no proper finite index subgroups, and then every
nonempty subset is ubiquitous. Since $G$ is torsion-free and nontrivial, it is
infinite, and then Lemma \ref{lem:finite_spd} implies that no finite $K \subset
\widehat{G}$ is strictly positive definite.
\end{proof}
This result applies, for instance, to the group $\mathbb{Z}_p$ of $p$-adic
integers.
\section{The case $G = F \times \mathbb{T}^r$}
The remainder of the paper is devoted to $G = F \times \mathbb{T}^r$. We
identify the dual group of $G$ with $F \times \mathbb{Z}^r$ (via the canonical
identification $\widehat{\mathbb{T}^r} = \mathbb{Z}^r$ and some isomorphism
$\widehat{F} \cong F$), and write the latter additively.
We first deal with the factor $\mathbb{T}^r$ separately; here we will need a fairly
deep theorem from number theory.
\subsection{Products of $ \mathbb{T}$}
We start with two lemmata that will be needed in the proof of the main
result. The first one is a fact from elementary group theory.
\begin{lemma} \label{lem:index_fi}
Every finite intersection of subgroups of finite index is a subgroup of finite
index as well.
\end{lemma}
\begin{proof}
This follows by induction and
\begin{enumerate}
\item If $A$ and $B$ are subgroups of a group $G$, then $[B:A\cap B] \le [G:A]$.
\item If $B<G$ and $A<B$, then $[G:A] = [G:B][B:A]$.
\end{enumerate}
\end{proof}
\begin{lemma} \label{lem:fi_supergrp}
Let $H$ be a subgroup of infinite index in $\mathbb{Z}^r$ and let $y\in \mathbb{Z}^r\setminus H$. There
is a subgroup $G$ of finite index such that $H\subset G$ and $y\notin G$.
\end{lemma}
\begin{proof}
Let $H$ be a subgroup of infinite index in $\mathbb{Z}^r$ and $y\notin H$. By the
elementary divisor theorem there are a basis $x_1,\ldots,x_r$ of $\mathbb{Z}^r$ and
$\alpha_1,\ldots, \alpha_s$ in $\mathbb{Z}\setminus\{0\}$ such that $ \alpha_1x_1,\ldots,\alpha_sx_s$ form a basis of
$H$ and $(\underset{i=1}{\overset{s}{\bigoplus}}\mathbb{Z} x_i)/H\cong
\underset{i=1}{\overset{s}{\bigoplus}}(\mathbb{Z}/\alpha_i\mathbb{Z})$, see \cite[2.9.2]{B}. Since $H$
has infinite index, we have $s<r$. Write $y=\underset{i=1}{\overset{r}{\sum}}\beta_ix_i$
with unique integers $\beta_1,\ldots,\beta_r$.
If $\beta_{s+1}=\ldots=\beta_{r}=0$, we define $G$ to be the subgroup generated by $ \alpha_1x_1,\ldots,\alpha_sx_s,x_{s+1},\ldots,x_r$. Then $G\supset H$ is of finite index, since $\mathbb{Z}^r/G\cong\underset{i=1}{\overset{s}{\bigoplus}}(\mathbb{Z}/\alpha_i\mathbb{Z})$ is finite, and $y\notin G$ because $y\notin H$ forces $\alpha_i\nmid\beta_i$ for some $i\le s$. If $\beta_{i_0}\neq 0$ for some $i_0\in\{s+1,\ldots,r\}$, we define $G$ to be the subgroup generated by $\alpha_1x_1,\ldots, \alpha_rx_r$, where $\alpha_{s+1},\ldots,\alpha_r$ are chosen in $\mathbb{Z}\setminus\{0\}$ with $\alpha_{i_0}$ not a divisor of $\beta_{i_0}$. Then once again $H$ is a subset of $G$, the subgroup $G$ is of finite index, and $y$ is not an element of $G$ by construction.
\end{proof}
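For instance, let $r=2$, let $x_1,x_2$ be the standard basis of $\mathbb{Z}^2$, and let $H$ be the subgroup generated by $6x_1$, so that $s=1$ and $\alpha_1=6$. For $y=2x_1\notin H$ we are in the first case, and $G=\langle 6x_1,x_2\rangle$ has index $6$ in $\mathbb{Z}^2$ and does not contain $y$; for $y=x_2$ we are in the second case, and choosing $\alpha_2=2$, which does not divide $\beta_2=1$, gives $G=\langle 6x_1,2x_2\rangle$ of index $12$, which again contains $H$ but not $y$.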
The main device for showing sufficiency of ubiquity is the following theorem due
to Laurent, see \cite{L}. Let $p(\gamma)=\underset{k=1}{\overset{n}{\sum}}c_k\gamma(x_k)$ be a
trigonometric polynomial on $\mathbb{Z}^r$ (so that $x_1,\ldots,x_n\in\mathbb{T}^r$). For a partition $ \mathcal{P}$ of $\{1,\ldots,n\}$ we
write $\gamma\in Null(p^\mathcal{P})$ if \begin{equation} 0=\underset{k\in P}\sum
c_k\gamma(x_k)\end{equation} for all $P\in \mathcal{P}$. Clearly,
$Null(p^\mathcal{P})\subset Null(p)=p^{-1}(\{0\})$. A partition $\mathcal{P}'$
is called finer than $\mathcal{P}$, if $\mathcal{P}'$ is a partition of
$\{1,\ldots,n\}$ and for all $P'\in \mathcal{P}'$ there is $P\in \mathcal{P}$
such that $P'\subset P$. In short, we write $\mathcal{P}'<\mathcal{P}$, if
$\mathcal{P}'\neq \mathcal{P}$ and $\mathcal{P}'$ is finer than $\mathcal{P}$.
Furthermore, $\gamma\in Null(p)$ is called maximal with respect to
$\mathcal{P}$, if $\gamma\in Null(p^\mathcal{P})$ and $\gamma\notin
Null(p^{\mathcal{P}'})$ for every $\mathcal{P}'<\mathcal{P}$.\\ By
$H_\mathcal{P}$ we denote the subgroup of $\mathbb{Z}^r$ defined by \begin{eqnarray*}
H_\mathcal{P} & = & \underset{P\in \mathcal{P}}\bigcap \{\gamma\in\mathbb{Z}^r :
\gamma(x_k)=\gamma(x_{l}) \mbox{ for } k,l\in P\}\\ & = & \underset{P\in
\mathcal{P}}\bigcap \{\gamma\in \mathbb{Z}^r: \gamma(x_kx_{l}^{-1})=1 \mbox{ for }
k,l\in P\} \\ & = & \underset{P\in \mathcal{P}}\bigcap \{x_kx_{l}^{-1}: k,l\in
P\}^\bot .\end{eqnarray*}
Finally, let $S_\mathcal{P}$ be the set of those $\gamma\in Null(p)$ which are
maximal with respect to $\mathcal{P}$. Then one has the following theorem due to
Laurent \cite{L}.
\begin{theorem}
$S_\mathcal{P}$ is a finite union of $H_\mathcal{P}$-co-sets.
\end{theorem}
This theorem can be regarded as a generalisation of a number-theoretic result
of Skolem, Mahler and Lech on linear recurrences, see \cite{P}. We are interested in
$Null(p)$, so we only need the following corollary. By definition we have \begin{equation} Null(p)
\supset\underset{\mathcal{P} \mbox{ partition of } \{1,\ldots,n\} }\bigcup
S_\mathcal{P} .\end{equation} Conversely, $\gamma\in Null(p)$ implies $\gamma\in
Null(p^\mathcal{P})$ for $\mathcal{P}=\{\{1,\ldots,n\}\}$. Suppose now that
$\gamma\notin S_{\mathcal{P}}$ for every partition $\mathcal{P}$; that is, for
every partition $\mathcal{P}$, the fact $\gamma\in Null(p^\mathcal{P})$ implies
$\gamma\in Null(p^{\mathcal{P}'})$ for some $\mathcal{P}'<\mathcal{P}$. Since
there are only finitely many partitions of $\{1,\ldots,n\}$, repeatedly passing to
strictly finer partitions must terminate, and it can only terminate at
$\mathcal{P}_0=\{\{1\},\ldots,\{n\}\}$, the finest partition with respect to $<$.
Then $\gamma\in Null(p^{\mathcal{P}_0})$ but $\gamma \notin S_{\mathcal{P}_0}$, which
contradicts the fact that no partition is strictly finer than $\mathcal{P}_0$. Hence $\gamma \in
S_\mathcal{P}$ for some partition $\mathcal{P}$. That is,
\begin{equation} Null(p) =\underset{\mathcal{P} \mbox{ partition of } \{1,\ldots,n\} }\bigcup
S_\mathcal{P} \end{equation}
and we get:
\begin{corollary} \label{cor:laurent}
Let $p$ be a nontrivial trigonometric polynomial on $\mathbb{Z}^r$. There are finitely
many subgroups $G_1,\ldots,G_n$ of $\mathbb{Z}^r$ and $x_1,\ldots,x_n\in\mathbb{Z}^r$ such that
\begin{equation} Null(p)=\underset{i=1}{\overset{n}{\bigcup}} x_i+G_i.\end{equation}
\end{corollary}
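As a simple illustration (ours) of the shape of such zero sets, consider $r=1$: the trigonometric polynomial $p(m)=\zeta^m-1$ on $\mathbb{Z}$, with $\zeta$ a primitive third root of unity, has $Null(p)=3\mathbb{Z}$, a single subgroup of finite index, while $p(m)=e^{2\pi i\alpha m}-1$ with $\alpha$ irrational has $Null(p)=\{0\}$, a coset of the trivial (infinite index) subgroup. A quick numerical scan in Python:
\begin{verbatim}
import cmath

def null_set(p, window=12, tol=1e-9):
    return [m for m in range(-window, window + 1) if abs(p(m)) < tol]

zeta = cmath.exp(2j * cmath.pi / 3)
alpha = 2 ** 0.5   # an irrational rotation number

print(null_set(lambda m: zeta ** m - 1))                              # multiples of 3
print(null_set(lambda m: cmath.exp(2j * cmath.pi * alpha * m) - 1))   # only [0]
\end{verbatim}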
\begin{theorem} \label{thm:ubiq_spd_torus}
Let $K$ be a subset of $\mathbb{Z}^r$. If K is ubiquitous then it is also strictly
positive definite.
\end{theorem}
\begin{proof} Assume that $K$ is not strictly positive definite. Then there
exists a non-zero trigonometric polynomial $p$ such that $K \subset S$, where
$S$ denotes the set of its zeros. By Corollary \ref{cor:laurent} we know that
$S$ can be written as $\underset{i=1}{\overset{n}{\bigcup}}\gamma_i+H_i$ for
some $\gamma_1,\ldots,\gamma_n$ in $ \mathbb{Z}^r$ and subgroups $H_1,\ldots,H_n$.
Without loss of generality we can assume that $H_1,\ldots,H_m$ are of finite index and
$H_{m+1},\ldots,H_n$ are of infinite index. Since $p$ is non-zero there is some
$\gamma$ in $\mathbb{Z}^r\setminus S$. But then \begin{equation} H'=\underset{i=1}{\overset{m}{\bigcap}}H_i\end{equation}
satisfies \begin{equation} (\gamma+H')\cap \underset{i=1}{\overset{m}{\bigcup}}(\gamma_i+H_i)=\emptyset\end{equation}
and is of finite index, see Lemma \ref{lem:index_fi}. Furthermore, for every
$i=m+1,\ldots,n$ we pick by virtue of Lemma \ref{lem:fi_supergrp} a subgroup
$I_i$ of finite index such that $H_i\subset I_i$ and $\gamma - \gamma_i \notin
I_i$. Now we put \begin{equation} H=H'\cap \underset{i=m+1}{\overset{n}{\bigcap}}I_i,\end{equation}
then $H$ is still of finite index.
Furthermore, for each $i \in \{ 1, \ldots, m \}$,
\begin{equation*}
(\gamma + H) \cap (\gamma_i + H_i) \subset (\gamma + H_i) \cap (\gamma_i + H_i) =
\emptyset
\end{equation*} by choice of $\gamma$, whereas for $i \in \{ m+1, \ldots, n
\}$,
\begin{equation*}
(\gamma + H) \cap (\gamma_i + H_i) \subset (\gamma + I_i) \cap (\gamma_i + I_i) =
\emptyset
\end{equation*}
by choice of $I_i$.
Hence finally,
\begin{equation} (\gamma + H) \cap K \subset (\gamma+H)\cap S=\emptyset ~, \end{equation}
which shows that $K$ is not ubiquitous.
\end{proof}
\subsection{Strict Positive Definiteness over Direct Products}
It remains to combine the results for the factors $F$ and $\mathbb{T}^r$,
obtained in Lemma \ref{lem:finite_spd} and Theorem \ref{thm:ubiq_spd_torus}
respectively. The following somewhat technical result illustrates that the
transfer of results for the factors to the product group is not entirely
trivial.
\begin{theorem} \label{thm:spd_dirprod}
Let $G = G_1 \times G_2$, with compact groups $G_1$ and $G_2$. Let $K\subset
\widehat{G_1} \times \widehat{G_2}$. For $\gamma\in \widehat{G_1}$ let \begin{equation}
K_2(\gamma)=\{\omega:(\gamma,\omega)\in K\}\end{equation} and \begin{equation} K_1=\{\gamma:K_2(\gamma)\mbox{
strictly positive definite}\}.\end{equation} Finally, let \begin{equation} \widetilde{K}=\underset{\gamma\in
K_1}\bigcup\{\gamma\}\times K_2(\gamma).\end{equation}
If $K_1$ is strictly positive definite, then so are $\widetilde K$ and, in
particular, $K \supset \widetilde K$.
\end{theorem}
\begin{proof}
For $\gamma\in K_1$ and $\omega\in K_2(\gamma)$ choose positive real numbers
$a_\gamma$ and $b_\omega$ such that $\underset{\gamma\in K_1}\sum
a_\gamma\left (\underset{\omega\in K_2(\gamma)}\sum b_\omega\right)$ is
convergent.
If we put $f=\underset{\gamma\in K_1}\sum a_\gamma\gamma\left
(\underset{\omega\in K_2(\gamma)}\sum b_\omega\omega\right)=\underset{(\gamma,\omega)\in
\tilde K}\sum a_\gamma b_\omega\gamma\omega$, then $f$ converges absolutely and
unconditionally on $G_1\times G_2$ by Fubini's theorem, see \cite[(21.13)]{HS}.
Now suppose that for distinct $z_1=(x_1,y_1),\ldots,z_n=(x_n,y_n)$ in $G_1
\times G_2$ and some complex $c_1,\ldots,c_n$ we have
\begin{equation}
0=\underset{i,j=1}{\overset{n}{\sum}}c_i\overline{c_j}f(z^{-1}_jz_i)=\underset{(\gamma,\omega)\in
\tilde K}\sum a_\gamma b_\omega\left|\underset{i=1}{\overset{n}{\sum}}c_i\gamma(x_i)\omega(y_i)\right|^2.
\end{equation}
Without loss of generality $y_1,\ldots,y_m$ are the distinct values among
$y_1,\ldots,y_n$, and we put
$I_l=\{k:y_k=y_l\}$. Thus $I_1,\ldots,I_m$ form a partition of $\{1,\ldots,n\}$,
and since all $a_\gamma b_\omega$ are positive, the preceding display forces, for every $(\gamma,\omega)\in \tilde{K}$,
\begin{equation} 0=\underset{i=1}{\overset{n}{\sum}}c_i\gamma(x_i)\omega(y_i)=\underset{l=1}{\overset{m}{\sum}}\left(\underset{k\in
I_l}\sum c_k\gamma(x_k)\right)\omega(y_l)\end{equation}
For fixed $\gamma\in K_1$ this holds for all $\omega\in K_2(\gamma)$; since the $y_l$ are
pairwise different and $K_2(\gamma)$ is strictly positive definite, it follows that
$0=\underset{k\in I_l}\sum c_k\gamma(x_k)$ for every $\gamma\in K_1$ and $l=1,\ldots,m$.
Since the $x_k$, $k\in I_l$, are pairwise different and $K_1$ is strictly positive
definite, this in turn forces $c_k=0$ for $k\in I_l$ and $l=1,\ldots,m$. Hence
$f\in\mathfrak{P}^+(G_1\times G_2)$, and by Theorem \ref{thm:spd_sets} the set $\widetilde K$,
and with it $K\supset\widetilde K$, is strictly positive definite.
\end{proof}
We do not know of an exhaustive characterisation of strictly positive definite subsets of the product group $\widehat{G}_1 \times \widehat{G}_2$ in terms of strictly positive definite subsets of $\widehat{G}_1$ and $\widehat{G}_2$. It is fairly easy to see that the sufficient condition of the theorem is not necessary: For a counterexample, consider the case $G_1 = G_2 = \mathbb{T}$. Let
\[ K = \bigcup_{n=1}^\infty \{ n \} \times \{ -n,\ldots,n \} ~,\]
and let $K_1$ be defined as in the theorem. Then Lemma \ref{lem:finite_spd} implies that $K_1 = \emptyset$. But $K$ is strictly positive definite, which can be easily seen by applying the theorem with the roles of $\widehat{G}_1$ and $\widehat{G}_2$ interchanged, and using the observation that
\[
K = \bigcup_{m=-\infty}^\infty \{ k : k \ge |m| \} \times \{ m \}~.
\] One could formulate a version of the theorem that covers this example as well, e.g. by introducing a condition that is symmetric with respect to the roles of $\widehat{G}_1$ and $\widehat{G}_2$. More generally, since strict positive definiteness is clearly preserved by the action of a group automorphism, the condition would have to be invariant under automorphisms of $\widehat{G}_1 \times \widehat{G}_2$ as well. The counterexample illustrates that the sufficient condition is not invariant under the automorphism $(\gamma,\omega) \mapsto (\omega,\gamma)$. It seems open to us whether a clean-cut characterisation working for all product groups is available.
\subsection{Proof of Theorem \ref{thm:main}}
By Lemma \ref{lem:spd_ubiq} it remains to prove the ``if''-direction. Assume
that $K \subset F \times \mathbb{Z}^r$ is ubiquitous, and define $K_2(\gamma)$,
for arbitrary $\gamma \in F$, according to Theorem \ref{thm:spd_dirprod}. It
suffices to show, for all $\gamma \in F$, that $K_2(\gamma) \subset
\mathbb{Z}^r$ is strictly positive definite: then $K_1 = F$, which is strictly
positive definite by Lemma \ref{lem:finite_spd}, and Theorem \ref{thm:spd_dirprod}
yields that $K$ is strictly positive definite.
Let $\gamma\in F$, let $H$ be a subgroup of finite index in $\mathbb{Z}^r$, and let $\omega\in
\mathbb{Z}^r$. Then $\{0\}\times H$ is a subgroup of finite index in $ F\times\mathbb{Z}^r$, and
by assumption the intersection \begin{equation} K\cap \big((\gamma,\omega)+(\{0\}\times
H)\big)=K\cap\big(\{\gamma\}\times (\omega+H)\big)=\{(\gamma,\chi):\chi\in K_2(\gamma)\cap (\omega+
H)\}\end{equation} is not empty. Hence its projection to the second coordinate,
which is nothing but the intersection of $K_2(\gamma)$ with
$\omega+H$, is also not empty. That is, $K_2(\gamma)$ is ubiquitous, for arbitrary $\gamma\in F$.
Now Theorem \ref{thm:ubiq_spd_torus} implies that $K_2(\gamma)$ is strictly
positive definite, and we are done. \hfill $\Box$
\bibliographystyle{amsplain}
| {
"timestamp": "2010-02-16T08:17:10",
"yymm": "1002",
"arxiv_id": "1002.3017",
"language": "en",
"url": "https://arxiv.org/abs/1002.3017",
"abstract": "We study the Fourier characterisation of strictly positive definite functions on compact abelian groups. Our main result settles the case $G = F \\times \\mathbb{T}^r$, with $r \\in \\mathbb{N}$ and $F$ finite. The characterisation obtained for these groups does not extend to arbitrary compact abelian groups; it fails in particular for all torsion-free groups.",
"subjects": "Functional Analysis (math.FA); Group Theory (math.GR)",
"title": "Strictly positive definite functions on compact abelian groups",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109538667758,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7076796351260051
} |
https://arxiv.org/abs/2206.05752 | Moduli for rational genus 2 curves with real multiplication for discriminant 5 | Principally polarized abelian surfaces with prescribed real multiplication (RM) are parametrized by certain Hilbert modular surfaces. Thus rational genus 2 curves correspond to rational points on the Hilbert modular surfaces via their Jacobians, but the converse is not true. We give a simple generic description of which rational moduli points correspond to rational curves, as well as give associated Weierstrass models, in the case of RM by the ring of integers of $\mathbb{Q}(\sqrt{5})$. To prove this, we provide some techniques for reducing quadratic forms over polynomial rings. | \section{Introduction}
We are interested in describing the space of rational genus 2 curves
which have a certain endomorphism structure on their Jacobians,
and which consequently correspond to modular forms.
Let $k$ be a field.
Let $D > 0$ be a discriminant, and $\mathcal O_D$ the quadratic order
of discriminant $D$. For an abelian surface $A/k$,
if $\mathcal O_D$ embeds in $\End_k(A)$, we say $A$ has real
multiplication (RM) by $\mathcal O_D$, and abbreviate this as RM-$D$.
By extension, if $C$ is a genus 2 curve and $A = \Jac(C)$ has
RM-$D$, we say $C$ has RM-$D$.
Typically Jacobians of genus 2 curves, and more generally
abelian surfaces, will have endomorphism ring $\mathbb Z$.
One interest in abelian surfaces $A$ with RM
(i.e., RM-$D$ for some $D$) is that they are
of GL(2) type, which by work of Ribet \cite{ribet} and
the proof of Serre's conjecture \cite{KW}, means that abelian
surfaces $A$ with RM over $k=\mathbb Q$ correspond to elliptic modular
forms of weight 2.
Parametrizing genus 2 curves, with or without an RM condition,
is essentially understood over $k=\mathbb C$, but much less clear over
$k=\mathbb Q$. In this paper, we give a relatively simple generic description of
moduli for genus 2 curves $C$ with RM-5 over $\mathbb Q$.
\begin{thm} \label{thm:main}
The $\mathbb C$-isomorphism classes of
genus $2$ curves $C/\mathbb Q$ with RM-5 are generically parametrized by
$(m,n) \in \mathbb Q^2$ such that $m^2-5n^2 - 5$ is a norm from
$\mathbb Q(\sqrt 5)$.
\end{thm}
This parametrization is in terms of birational coordinates for
the Hilbert modular surface $Y(5)$, and the invariants for the
curve $C$ are then polynomial expressions in $m$ and $n$.
We will also describe models for these curves (\cref{prop:model}),
and be more precise about the meaning of ``generically parametrized''
here (see \cref{thm:moduli} and \cref{sec:gh2mn}).
These results
extend to arbitrary subfields $k$ of $\mathbb C$, and the models
are rather simple when $k \supseteq \mathbb Q(\sqrt 5)$.
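Because $\mathbb Q(\sqrt 5)$ has class number one and a fundamental unit of norm $-1$, the norm condition in \cref{thm:main} can be tested prime by prime: a nonzero rational number is a norm from $\mathbb Q(\sqrt 5)$ if and only if every prime inert in $\mathbb Q(\sqrt 5)$, i.e., every prime $p \equiv \pm 2 \pmod 5$, divides it to an even power. The following Python sketch (our illustration; the function names are ours and not part of the paper) tests this condition at a rational point $(m,n)$.
\begin{verbatim}
from fractions import Fraction
from sympy import factorint

def is_norm_from_Q_sqrt5(t):
    # t is a norm from Q(sqrt(5)) iff every prime p = 2, 3 (mod 5) divides t to an even power
    t = Fraction(t)
    exponents = dict(factorint(t.numerator))
    for p, e in factorint(t.denominator).items():
        exponents[p] = exponents.get(p, 0) - e
    return all(e % 2 == 0 for p, e in exponents.items() if p % 5 in (2, 3))

def satisfies_norm_condition(m, n):
    t = Fraction(m) ** 2 - 5 * Fraction(n) ** 2 - 5
    return t != 0 and is_norm_from_Q_sqrt5(t)

print(satisfies_norm_condition(3, 1))   # m^2 - 5n^2 - 5 = -1 = N((1+sqrt(5))/2): True
print(satisfies_norm_condition(4, 1))   # 6 = 2*3, both inert to odd powers: False
\end{verbatim}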
In order to explain our results more completely, we will first
describe moduli for genus 2 curves over $\mathbb C$ in more detail.
Below, when the field of definition of a curve or variety is not
specified, it is assumed to be $\mathbb C$.
Let $\mathcal M_2$ be the (coarse) moduli space of genus 2 curves
and $\mathcal A_2$ be the moduli space of principally
polarized abelian surfaces.
The Torelli map $\mathcal M_2 \to \mathcal A_2$, corresponding
to sending a genus 2 curve $C$ to its Jacobian $A = \Jac(C)$,
is almost surjective---the complement of its image consists of
(moduli for) products of 2 elliptic curves.
We may identify a point in $\mathcal M_2$ corresponding to a genus
2 curve $C$ with Igusa--Clebsch invariants
$(I_2 : I_4 : I_6 : I_{10})$ in weighted projective space
$\mathbb P^3_{1,2,3,5}(\mathbb C)$. Each $(I_2 : I_4 : I_6 : I_{10})$
with $I_{10} \ne 0$ comes from a genus 2 curve.
The Igusa--Clebsch invariants $I_{2j}$ can be defined as degree $2j$
polynomial functions $I_{2j}(f)$ of the coefficients of a
sextic Weierstrass equation $y^2 = f(x)$ for $C$,
and up to projective equivalence do not depend on the model.
Consequently, if $C$ has a model over a subfield $k \subseteq \mathbb C$,
then the Igusa--Clebsch invariants are defined over $k$ (i.e., can
all be taken in $k$ after scaling).
However, the converse is not true.
(Contrast this to the
genus 1 situation: an elliptic curve has a rational model if and only if
its $j$-invariant is rational.)
If $C$ is a genus 2 curve without extra automorphisms over
$\mathbb C$ and its Igusa--Clebsch invariants
are defined over $k$, then Mestre \cite{mestre} showed that
$C$ is defined over $k$ if and only if a certain conic $L/k$ has a
$k$-rational point. (If $C$ has extra automorphisms, it has a model
over $k$ by \cite{cardona-quer}.)
The coefficients of the Mestre conic $L$ are polynomials in $I_2, I_4, I_6$ and $I_{10}$.
Nonetheless, there is no simple characterization
of when the Mestre obstruction vanishes, i.e., when $L$ has
a $k$-rational point.
Now we review moduli for genus 2 curves with RM-$D$ over $\mathbb C$.
For simplicity, assume $D$ is a fundamental discriminant,
so $\mathcal O_D$ is the ring of integers of $\mathbb Q(\sqrt D)$.
The Hilbert modular surface
$Y_-(D)$ is a smooth compactification of the quotient
$\SL_2(\mathcal O_D) \backslash (\mathfrak H^+ \times \mathfrak H^-)$,
or alternatively
$\SL_2(\mathcal O_D \oplus \mathcal O_D^*) \backslash (\mathfrak H^+ \times \mathfrak H^+)$,
where $\mathcal O_D^*$ is the inverse different of $\mathcal O_D$
(e.g., see \cite{vdG}). Then $Y_-(D)$ is a coarse moduli
space for principally polarized abelian surfaces with real multiplication
RM-$D$, where one fixes an action of $\mathcal O_D$ compatible
with the polarization.
Suppose $k \subseteq \mathbb C$.
A genus 2 curve $C/k$ with RM-$D$ corresponds to a $k$-rational
point on $Y_-(D)$. Again, the converse is not true. If $p$
is a rational point on $Y_-(D)$ which does not correspond
to the product of 2 elliptic curves, then it corresponds to a curve
$C$ with RM-$D$ over $\mathbb C$. For $p$ to correspond to
a curve over $k$ with RM-$D$ we need both that the Mestre
obstruction vanishes, and that some rational model for $C$
has RM-$D$ defined over $k$. (It can happen that some
$k$-rational models for $C$ have RM defined over $k$
and some do not.)
We will see that generically if the Mestre obstruction vanishes,
then the RM is defined over $k$. More precisely, if
$\End(\Jac(C))$ is commutative, then a field of definition for $C$
is a field of definition for the RM (\cref{prop:RMdef}).
\subsection{Strategy of proof}
In the special case of RM-5, the Hilbert modular surface
$Y(5) = Y_-(5)$ is a rational surface, i.e., birational to
$\mathbb P^2_{m,n}(\mathbb C)$.
Hence to prove \cref{thm:main}, it suffices to show that
the vanishing of the Mestre obstruction at a rational point $(m,n)$
in $Y(5)$ is generically equivalent to the condition that
$m^2 - 5n^2 - 5 = u^2 - 5v^2$ for some $u, v \in \mathbb Q$.
This is not at all obvious from the Mestre conic,
which is a conic over $\mathbb Q[m,n]$ whose coefficients are
degree $\le 14$ polynomials in $m$ and $n$, and whose
discriminant is of degree 30.
In fact, it was rather surprising to us that there was such a
simple characterization of the Mestre obstruction. It was
only through computational observations that we were led
to believe in \cref{thm:main}, and then were able to find
a proof after much trial.
The starting point for the proof relies on two birational models for
$Y_-(5)$ due to Elkies and Kumar \cite{EK}, which were obtained
by studying lattice polarizations of K3 surfaces. The
first model is a double cover of $\mathbb P^2_{g,h}$ of the form
$z^2 = f(g,h)$, where $f$ is a degree 5 polynomial in $g$ and $h$.
In this model, the norm condition in \cref{thm:main} can be
restated as $30g+4$ being a norm from $\mathbb Q(\sqrt 5)$. In particular,
the Mestre obstruction only depends on $g$ and not $h$.
(This was our initial computational observation that led to the
theorem.) The Igusa--Clebsch invariants now are low-degree
expressions in $g$ and $h$. In terms of $g$ and $h$,
the Mestre conic has coefficients in $\mathbb Q[g,h]$ which are of degree
$\le 7$ in $g$ and degree $\le 2$ in $h$, and its discriminant
is an integer multiple of $h^2(8h-9g^2)^2 z^2$.
To our knowledge, there are no general methods to reduce
quadratic forms over polynomial rings.
The standard technique taught to ``simplify'' quadratic forms over
fields is diagonalization, but unless one is very lucky this is
not useful in simplifying quadratic forms over rings.
E.g., diagonalizing the conic over $\mathbb Q(m,n)$ and
clearing denominators gives coefficients which are polynomials
of degrees 24, 28 and 32 in $m$ and $n$.
We describe a few simple techniques to reduce degrees of
polynomial coefficients and
remove factors from the discriminant, which we hope may be of
use in other situations. In our case,
we are able to use these methods to reduce the Mestre conic
in $g$ and $h$ to have polynomial coefficients of degree $\le 3$
and remove the factors of $h^2$ and $(8h-9g^2)$ from the
discriminant. Then we switch to the $(m,n)$ model and
apply our techniques to reduce the Mestre conic over
$\mathbb Q(m,n)$ to $x_1^2 - 5x_2^2 + (m^2 - 5n^2 - 5)x_3^2 = 0$, which proves
\cref{thm:main}.
We remark that we needed to use both of these models
for $Y_-(5)$ to carry out this reduction of the Mestre conic.
While the Mestre conic is simpler in $g$ and $h$, our final
reduced form, which is the same as $x_1^2 - 5x_2^2 + (30g+4)x_3^2 = 0$,
is \emph{not} equivalent to the original Mestre conic over $\mathbb Q(g,h)$.
That is, these conics are not equivalent over $\mathbb Q$ for a generic choice
of $g, h \in \mathbb Q$---the equivalence requires rational $g, h$
such that $f(g,h)$ is a rational square, i.e., $g$ and $h$ come from a
rational point on $Y_-(5)$, and it is not clear how to use the
relation $z^2 = f(g,h)$ to carry out this reduction solely in
terms of $g$ and $h$. On the other hand, we were unable
to carry out the reduction entirely in terms of $m$ and $n$
because finding suitable changes of variables is more difficult
with higher degree polynomial coefficients.
\subsection{Moduli of rational curves}
Here we briefly describe to what extent we can make the
``generic'' aspect of \cref{thm:main} precise. First, our reduction
of the Mestre conic $L$ over $\mathbb Q(m,n)$ does not give a
$\mathbb Q$-equivalent conic when specializing to points $(m,n) \in \mathbb Q^2$
such that $\disc L = 0$. This happens on a finite number of
curves in the moduli space, which we examine separately.
Second, as $(m,n)$ are only affine coordinates for a birational model
for $Y_-(5)$, the set of rational $(m,n)$ does not exhaust
the rational points on $Y_-(5)$. Fortunately, thanks to
work of Wilson \cite{wilson}, we can describe Igusa--Clebsch
invariants for the remaining points on $Y_-(5)$ and say
explicitly when such points correspond to a genus 2 curve defined
over $\mathbb Q$.
Consequently, in \cref{thm:moduli} we give an explicit description
of a set $\mathcal Y$ of rational moduli in $\mathcal M_2$ such that
any genus 2
curve $C/\mathbb Q$ with RM-5 corresponds to a point on $\mathcal Y$.
Moreover, any point in $\mathcal Y$ corresponds to a genus 2 curve
$C/\mathbb Q$ that has potential RM-5, i.e., RM-5 defined over $\bar \mathbb Q$
but not necessarily $\mathbb Q$.
We do not know if each such $C$ will always have a twist with RM-5
defined over $\mathbb Q$, but we were not able to find any examples to the
contrary.
At least the collection of such curves generically has RM-5, and we
explain two ways in which one can check that the RM-5 is defined
over $\mathbb Q$.
\subsection{Models of curves}
Several families of rational genus 2 curves $C/\mathbb Q$ with RM-5
have been constructed in the literature. For instance,
Mestre constructed a 2-parameter family in \cite{mestre:family}
and Brumer constructed a 3-parameter family (see \cite{brumer}
for an announcement, and \cite{hashimoto}
for a proof different from Brumer's).
For a rational choice of parameters these families
generically give rational genus 2 curves $C$ with RM-5 over $\mathbb Q$.
Moreover, over $\mathbb C$ these families are known to exhaust
all $\mathbb C$-isomorphism classes of genus 2 curves $C/\mathbb Q$ with
RM-5 (see \cite{hashimoto-sakai} for Brumer's family and
\cite{wilson} or \cite{sakai} for Mestre's family).
However, it is not known how to describe all such rational
curves with these families, or
how to describe what parameters give $\mathbb C$-isomorphic curves.
\cref{thm:main} generically parametrizes
such $C/\mathbb Q$. If $(m,n) \in \mathbb Q^2$ such that
$m^2-5n^2-5 = u^2-5v^2$ with $u, v \in \mathbb Q$, we give
a generic Weierstrass model $y^2 = f(x)$ for an associated
curve in terms of $(m,n,u,v)$. See \cref{prop:model}.
These results apply to arbitrary base fields $k \subseteq \mathbb C$.
If $k \supseteq \mathbb Q(\sqrt 5)$,
then the analogous norm condition in \cref{thm:main}
is automatically satisfied, and one can write down a
model solely in terms of $(m,n) \in k^2$. See
\cref{prop:modelQsqrt5}.
\subsection{Additional remarks}
Our original motivation for this project was to help understand
weight 2 elliptic modular forms with rationality field $\mathbb Q(\sqrt 5)$.
We hope to return to this in the future.
In \cref{sec:otherD}, we briefly describe some computational
evidence that there are similarly simple descriptions
for when the Mestre obstruction vanishes for some other
small values of $D$. However, in these cases, the
Mestre conics that arise are more complicated and we have
only been partially successful in applying our reduction methods
to these cases.
Calculations for this project were carried out in
Sage \cite{sage} and Magma \cite{magma}.
\subsection*{Acknowledgements}
We are particularly grateful to Noam Elkies for many helpful
discussions and comments. We also thank Armand Brumer and
John Voight for useful discussions.
Both authors were supported by grants from
the Simons Foundation (550031 for AC, and 512927 for KM).
Part of this work was carried out while the
second author was visiting MIT and Harvard, and he thanks
them for their hospitality.
\section{Moduli spaces}
Henceforth, $k$ denotes a subfield of $\mathbb C$.
Let $C$ be a genus 2 curve defined over $k$. Then it
has a rational Weierstrass model of the form $y^2 = f(x)$, where
$f(x) \in k[x]$ is a sextic with no repeated irreducible factors.
The Igusa--Clebsch
invariants $I_2, I_4, I_6, I_{10}$ are polynomial invariants of $f$ of
respective degrees $2, 4, 6, 10$ with $I_{10} = \disc(f)$. We view
the Igusa--Clebsch invariants as a point $(I_2 : I_4: I_6: I_{10})$ in weighted
projective space $\mathbb P^3_{1,2,3,5}$. In this way, the Igusa--Clebsch invariants
in $\mathbb P^3_{1,2,3,5}$ depend only on $C$ and not on the choice of the
Weierstrass equation. Moreover, the set of $(I_2 : I_4: I_6: I_{10})$ with
$I_{10} \ne 0$ forms a coarse moduli space $\mathcal M_2$ for genus 2 curves.
\subsection{Hilbert modular surfaces}
Here we review some facts about certain Hilbert modular surfaces.
See \cite{vdG} and \cite{EK} for more details.
Let $D > 0$ be a fundamental discriminant. The Hilbert
modular surface $Y_-(D)$ is a smooth compactification
of the quotient
$\SL_2(\mathcal O_D \oplus \mathcal O_D^*) \backslash \mathfrak H^+ \times \mathfrak H^+$.
When $K = \mathbb Q(\sqrt D)$ has narrow class number 1, this agrees
with the Hilbert modular surface often denoted $Y(D)$.
Fix an embedding $K \subseteq \mathbb C$ and
denote by $\tau$ the nontrivial Galois automorphism of $K$.
One can associate
to $(z_1, z_2) \in \mathfrak H^+ \times \mathfrak H^+$ a lattice
\[ L_{(z_1,z_2)} = \{ (a z_1 + b, a^\tau z_2 + b^\tau) :
a \in \mathcal O_D, b \in \mathcal O_D^* \} \subseteq V = \mathbb C^2. \]
Then
\[ E((w_1, w_2), (w'_1, w'_2)) = \frac{\Im (w_1 \bar w'_1)}{\Im z_1}
+ \frac{ \Im (w_2 \bar w'_2)}{\Im z_2} \]
(with bar denoting complex conjugation)
defines a Riemann form on $A = V/L_{(z_1, z_2)}$ such that
$L_{(z_1,z_2)}$ is unimodular with respect to this form.
This makes $A$ a principally polarized abelian surface (PPAS)
with an action of $\mathcal O_D$ via
$j(\alpha)(w_1, w_2) = (\alpha w_1, \alpha^\tau w_2)$.
In fact, one may check that $j : \mathcal O_D \hookrightarrow \End(A)^\dagger$,
where $\dagger$ denotes the Rosati involution.
This construction leads to the fact that $Y_-(D)$ is a moduli
space for such pairs $(A,j)$ of PPASs with RM-$D$.
The Humbert modular surface $\mathcal H_D$ is the image of
$Y_-(D)$ in $\mathcal A_2$, and the map $Y_-(D) \to \mathcal H_D$ is generically
2-to-1, corresponding to forgetting the action of $\mathcal O_D$.
Note that in the above construction, switching $z_1$ and $z_2$
corresponds to replacing $j$ with $j \circ \tau$, and for the points
$(z_1, z_1)$, the conjugate actions $j$ and $j \circ \tau$ are isomorphic.
If $A$ is a geometrically simple PPAS, then $\End(A)$ is isomorphic to
$\mathbb Z$, an order in a real quadratic field, an order in a quartic CM field,
or an order in an indefinite quaternion algebra.
If $A$ is not geometrically simple, but $\mathcal O_D$ embeds in $\End(A)$,
then $\End(A)$ is an order in either the split quaternion algebra $M_2(\mathbb Q)$ or in $M_2(F)$ where $F$ is an imaginary quadratic field,
according to whether $A$ is isogenous over $\overline{\mathbb Q}$ to a product
of isogenous elliptic curves without or with CM.
\subsection{Fields of definition}
We are interested in fields of definition of curves and endomorphisms.
In general, suppose $X$ is a coarse moduli space for a class of
varieties $V$ satisfying some property $P$. If $x$ corresponds to the
pair $(V,P)$, then the field of moduli for $(V,P)$ is the field of definition
of the point $x$. If both $V$ and $P$ are defined over $k$, then
the field of moduli contains $k$, but the converse is not true in general.
In particular, if $C$ is a genus 2 curve over $\mathbb C$, then the field of moduli
of $C$ is the field of definition of $(I_2 : I_4 : I_6 : I_{10})$, i.e.
the minimal field $k_0$ such that
$I_2, I_4, I_6, I_{10}$ can be taken in $k_0$ after scaling.
If $C$ is defined over $k$, then $k \supseteq k_0$.
However, $C$ need not be defined over $k_0$, i.e., there need not
be a curve $C'/k_0$ such that $C'$ and $C$ are isomorphic
over $\mathbb C$.
Generically, $\Aut(C)$ is generated by the hyperelliptic involution on $C$. If $\lvert \Aut(C) \rvert > 2$, then by \cite{cardona-quer}, $C$
is defined over $k_0$. When $\Aut(C) \simeq C_2$, Mestre \cite{mestre}
constructed a nonsingular conic $L/k_0$ such that $C$ is defined
over $k \supseteq k_0$ if and only if $L$ has a $k$-point. The coefficients
of $L$ are polynomials in $I_2, I_4, I_6$ and $I_{10}$---see
\cref{section:mestre-conic-background}
for details. We remark that since $L$ always
has a point over a quadratic extension $k'/k_0$, $C$ is always definable
over a (in fact, infinitely many) quadratic extension(s) of $k_0$.
Now consider a genus 2 curve $(C,j)$ with RM-$D$, where
$j$ is an embedding of $\mathcal O_D$ into $\End(A)$, $A = \Jac(C)$,
that respects the polarization as above. Then the field of moduli for
$(C,j)$ is the minimal field $k_0$ such that $(A,j)$ corresponds to
a $k_0$-rational point on $Y_-(D)$. This necessarily contains
the field of moduli for $C$ regarded as a general genus 2 curve,
and thus may be regarded as the field of moduli for the RM.
If $(C,j)$ is defined over $k$, i.e., there is a model for $C$
defined over $k$ such that $j(\mathcal O_D) \subseteq \End_k(A)$,
then $k \supseteq k_0$. Conversely, given $k \supseteq k_0$,
we would like a way to determine whether $(C,j)$ is defined
over $k$. Necessarily, $C$ must be defined over $k$, i.e.,
the Mestre conic $L$ must have a $k$-rational point.
The following says that, generically, when the Mestre conic
has a point the RM is also defined over $k$.
\begin{prop} \label{prop:RMdef}
Suppose $p$ is a $k$-rational point on $Y_-(D)$
corresponding to a PPAS $A$ defined over $k$
with an embedding $j : \mathcal O_D \hookrightarrow \End_\mathbb C(A)$.
If $\End_\mathbb C(A)$ is commutative,
then $j(\mathcal O_D) \subseteq \End_k(A)$.
\end{prop}
\begin{proof}
Let $\sigma \in G_k$, and $\eta = \frac{D + \sqrt D}2$.
Then $p$ being $k$-rational means
there is an isomorphism $\phi : (A, j) \to (A^\sigma, j^\sigma)$.
In particular, $\phi$ maps $j(\eta)$ to $j^\sigma(\eta) \in \End_\mathbb C(A^\sigma)$,
which we may identify with $j(\eta)^\sigma \in \End_\mathbb C(A)$.
Consequently,
there is an inner automorphism of $\End_\mathbb C(A)$
taking $j(\eta)$ to $j(\eta)^\sigma$.
Hence if $\End_\mathbb C(A)$ is commutative, this means
$j(\eta)^\sigma = j(\eta)$
for all $\sigma$, and thus $j(\eta) \in \End_k(A)$.
\end{proof}
Now we briefly address how to check the field of definition
of RM for specific curves $C$.
Suppose $C$ is defined over $k$, and let $A = \Jac(C)$.
Algorithms for numerically computing $\End_k(A)$ and $\End_\mathbb C(A)$
have been implemented in Magma, which one can use to
provably exhibit RM-$D$ using correspondences---e.g., see \cite{KM}
or \cite{CMSV}.
In the case we consider in this paper, $D=5$, another criterion which
is simpler to provably verify was provided by Wilson:
\begin{prop}[\cite{wilson}] \label{prop:wilson}
Let $y^2 = f(x)$ be a sextic Weierstrass model over $k$ for a genus $2$
curve $C$ with potential RM-5, i.e., $C$ has RM-5 defined over $\mathbb C$.
Then $C$ has RM-5 (defined over $k$) if and only if
$\Gal(f) = \Gal(f/k)$ is contained in a transitive copy of $A_5$ inside $S_6$.
\end{prop}
It is easy to verify whether $C$ has potential RM-5, because one
can check whether it comes from a point on $Y_-(5)$ via its
Igusa--Clebsch invariants. In particular, if $C: y^2=f(x)$ is a genus 2
curve over $k$ with $\deg f = 6$, then $C$ has RM-5 (over $k$) if
and only if its Igusa--Clebsch invariants are of one of the types
listed below in \cref{prop:necess-cond} and $\Gal(f)$ lies in one of the
transitive copies of $A_5$ inside $S_6$.
\subsection{Moduli for RM-5} \label{section:moduli}
Elkies and Kumar \cite{EK} give the following birational model for $Y_-(5)$:
\begin{equation} \label{eq:EK}
Y : z^2 = 2 (-972 g^{5} - 324 g^{4} - 27 g^{3} - 4500 g^{2} h - 1350 g h + 6250 h^{2} - 108 h).
\end{equation}
For $(z,g,h)$ on the surface $Y$ corresponding to a point on $\mathcal M_2 \subseteq \mathcal A_2$, the
Igusa--Clebsch invariants are
\[ (I_2 : I_4: I_6: I_{10}) = \left(24 g + 6 : 9 g^{2} : 81 g^{3} + 18 g^{2} + 36 h : 4 h^{2}\right) \]
The surface $Y_-(5)$ is rational, and Elkies and Kumar give a birational map
between $Y$ and $\mathbb P^2$, with affine coordinates $(m,n)$, via
\begin{align}\label{eq:gh2mn}
\nonumber 30g + 9 &= m^2 - 5n^2 \\
h &= m \frac{(30g+9)(15g+2)}{6250} + \frac{9(250g^2 + 75g + 6)}{6250} \\
\nonumber z &= n \frac{(30g+9)(15g+2)}{25}.
\end{align}
These equations give invertible transformations between the affine coordinates
$(z,g,h)$ on $Y$ and $(m,n)$ on $\mathbb P^2$ outside of the locus where
$g = \frac{m^2-5n^2-9}{30}$ is $- \frac 3{10}$ or $- \frac 2{15}$.
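As a quick consistency check on these formulas (a verification of ours in exact rational arithmetic, not taken from \cite{EK}), one can pick a rational point $(m,n)$, compute $(z,g,h)$ from \eqref{eq:gh2mn}, and confirm that the result satisfies \eqref{eq:EK}:
\begin{verbatim}
from fractions import Fraction as F

def gh_from_mn(m, n):
    m, n = F(m), F(n)
    g = (m**2 - 5*n**2 - 9) / 30
    h = m*(30*g + 9)*(15*g + 2)/6250 + 9*(250*g**2 + 75*g + 6)/6250
    z = n*(30*g + 9)*(15*g + 2)/25
    return z, g, h

def on_surface(z, g, h):
    # right-hand side of the Elkies-Kumar equation for the surface Y
    rhs = 2*(-972*g**5 - 324*g**4 - 27*g**3 - 4500*g**2*h - 1350*g*h + 6250*h**2 - 108*h)
    return z**2 == rhs

z, g, h = gh_from_mn(2, 1)                 # an arbitrary rational test point
print(on_surface(z, g, h))                  # True
print(24*g + 6, 9*g**2, 81*g**3 + 18*g**2 + 36*h, 4*h**2)   # its Igusa-Clebsch invariants
\end{verbatim}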
In an alternative approach, Wilson \cite{wilson} constructed
a coarse moduli space for genus 2 curves $C$ with RM-5
with coordinates $(z_6 : s_2 : \sigma_5) \in \mathbb P^2_{1,2,5}$ with $\sigma_5 \ne 0$ such
that
\[ (I_2 : I_4: I_6: I_{10}) = \left(-2s_2 + 2z_6^2 : \frac{(s_2 + 2z_6^2)^2}{16}
: \frac{9z_6 \sigma_5 - 4I_4(3s_2 - 2z_6^2)}{16} : \frac{\sigma_5^2}{1024}\right). \]
Moreover if $C$ is defined over $k$, then so is $(z_6 : s_2 : \sigma_5)$
and the quantity
\[ \Delta' = 64 z_{6}^{6} s_{2}^{2} + 96 z_{6}^{4} s_{2}^{3} + 48 z_{6}^{2} s_{2}^{4} - 256 z_{6}^{5} \sigma_{5} + 8 s_{2}^{5} - 400 z_{6}^{3} s_{2} \sigma_{5} - 1000 z_{6} s_{2}^{2} \sigma_{5} + 3125 \sigma_{5}^{2} \]
must be a square in $k$.
One can translate Wilson's coordinates to the Elkies--Kumar coordinates via
\[ (g, h) = \left(- \frac{2z_6^2 + s_2}{12z_6^2} , \frac{\sigma_5}{64 z_6^5} \right). \]
We remark that under this change of coordinates, $\Delta' = 2^{10} z^2$, so the condition that $\Delta'$ is a square in $k$
is automatically satisfied when $(z,g,h)$ is a $k$-rational point on $Y$.
If $z_6 \ne 0$, we can assume $z_6 = 1$ and this relation gives a one-to-one correspondence
between $(g,h) \in \mathbb C^2$ and $(s_2, \sigma_5) \in \mathbb C^2$. If
$z_6 = 0$, then the Igusa--Clebsch invariants of the point $(z_6 : s_2 : \sigma_5)$
must either be $(0 : 0 : 0 : 1)$ if $s_2 = 0$ or
\[ (I_2 : I_4: I_6: I_{10}) = \left(-8 : 1: -3: \frac{\sigma_5^2}{s_2^5}\right) \]
otherwise. Hence any genus 2 curve with RM-5 either corresponds to a point
$(g,h) \in \mathbb C^2$ or has Igusa--Clebsch invariants of the form
$(0 : 0 : 0 : 1)$ or $(8 : 1 : 3 : s)$ for $s \ne 0$.
When $z_6 = 0$, $\Delta' = 8 s_{2}^{5} + 3125 \sigma_{5}^{2}$.
Thus $\Delta'$ being a square in $k$ means either $\sqrt 5 \in k$ if $s_2 =0$
or $3125s^2-8s$ is a square, where $s = -\frac{\sigma_5^2}{s_2^5}$,
if $s_2 \ne 0$. It is easy to see that any two
of these possibilities are mutually exclusive.
Let us now consider the possibility that $(g,h)$ and $(g',h')$ give the
same Igusa--Clebsch invariants, i.e., there exists $\lambda \in \mathbb C^\times$ such that
\[ \left(24 g' + 6 : 9 g'^{2} : 81 g'^{3} + 18 g'^{2} + 36 h' : 4 h'^{2}\right) = \lambda \cdot
\left(24 g + 6 : 9 g^{2} : 81 g^{3} + 18 g^{2} + 36 h : 4 h^{2}\right) \]
Since we are interested in genus 2 curves, assume $h$ and $h'$ are both nonzero.
First note if $g=0$, then $g'=0$ and we have $h' = \lambda ^3 h$ and $h'^2 = \lambda^5 h^2$. Comparing these shows $\lambda = 1$. So assume $g, g'$ are
both nonzero. Then comparing $I_4$'s yields $\lambda = \varepsilon \frac{g'}g$, where
$\varepsilon = \pm 1$. Now comparing $I_2$'s shows $4g' + 1 = \varepsilon(4g' + \frac{g'}g)$.
If $\varepsilon = 1$, then $g = g'$, i.e., $\lambda = 1$ which implies $h = h'$.
Thus assume $\varepsilon = -1$. Then $g' = - \frac{g}{8g+1}$ and $\lambda = \frac 1{8g+1}$.
Examining the $I_6$'s and $I_{10}$'s then gives
$h' = \frac{g^3+2h}{2(8g+1)^3}$ and
\[ (h')^2 = \frac{(g^3+2h)^2}{4(8g+1)^6} = \frac{h^2}{(8g+1)^5}. \]
Using the assumption that $g \ne 0$, the latter equality holds if and only if
$32h^2 - 4g^2 h - g^5 = 0$, i.e., $h = \frac{g^2}{16}(1 + u)$ where $u^2 = 1+8g \ne 0$.
Note that if $g' = g$ then $g = - \frac 14$ and $\lambda = -1$, so $h'^2 = -h^2$,
which is impossible since $h \ne 0$; hence $g' \ne g$.
Hence for any $(g,h) = \left(g, \frac{g^2}{16}(1 \pm \sqrt{8g+1})\right)$ with
$g \ne 0, -\frac 18$,
the pairs $(g,h)$ and $(g',h') = \left(-\frac g{8g+1}, \frac{g^3 +2h}{2(8g+1)^3}\right)$ are
distinct coordinates with the same Igusa--Clebsch invariants,
and these are the only pairs of distinct $(g,h)$-coordinates with
this property.
Now suppose $(g,h)$ and $(g',h')$ are distinct $k$-rational pairs giving the same
Igusa--Clebsch invariants as above, with $u^2 = 8g+1$. Expressing $g, g', h, h'$ in terms of $u$,
we see that, for both $(g,h)$ and $(g',h')$, the right hand side of \eqref{eq:EK}
is in the $k^\times$-square class of $-(43u^2 + 22u + 43)$.
The above discussion yields the following.
\begin{prop} \label{prop:necess-cond}
Let $C$ be a genus $2$ curve with RM-5 defined over $k$.
Then the Igusa--Clebsch invariants of $C$
must be of one of the following types:
\begin{enumerate}
\item $(I_2 : I_4: I_6: I_{10}) = (0 : 0: 0: 1)$ when $\sqrt 5 \in k$;
\item $(I_2 : I_4: I_6: I_{10}) = (8 : 1 : 3 : s)$ for some nonzero $s \in k$ such that $3125s^2-8s$ is a square; or
\item $(I_2 : I_4: I_6: I_{10}) = \left(24 g + 6 : 9 g^{2} : 81 g^{3} + 18 g^{2} + 36 h : 4 h^{2}\right)$ for a $k$-rational solution $(z,g,h)$ to \eqref{eq:EK} with
$h \ne 0$.
\end{enumerate}
The above three cases are mutually exclusive. In case (2), $s$ is
unique. In case (3), the pair $(g,h)$ is unique except in the case that
$(g, h) = \left(\frac 18(u^2-1), \frac 1{1024}(u-1)^2(u+1)^3\right)$ for some
$u \in k^\times \setminus \{ \pm 1 \}$ such that $-(43u^2 + 22u + 43)$
is a square, in which case
$(g, h)$ and $(g', h') = \left(- \frac{g}{8g+1}, \frac{g^3+2h}{2(8g+1)^3}\right)$ are distinct elements of $k^2$
that both correspond to invariants
\[ \left(48u^2 + 48 : 36(1-u)^2(1+u)^2 :
72(1-u)^2(1+u)^2(9u^2 + 2u + 9) : 4(1-u)^4(1+u)^6 \right). \]
\end{prop}
We remark that $-(43u^2 + 22u + 43)$ can be a square in
a number field $k$ if and only if every infinite place of $k$ is complex and the completion $k_v$ at every place $v$ above $3$ is an extension of $\mathbb Q_3$ of even degree. In particular, when $k/\mathbb Q$ is quadratic this happens
if and only if $k$ is imaginary quadratic and non-split at 3.
\begin{rem}
If we consider the map $\phi(u) = \left(\frac 18(u^2-1), \frac 1{1024}(u-1)^2(u+1)^3\right)$, then the pairs $(g,h)$ and $(g',h')$
yielding the same Igusa--Clebsch invariants at the end of the proposition
are just the points $\phi(u)$ and $\phi(\tfrac 1u)$, which both lie on the curve
$X_6 : 32h^2-4g^2h - g^5 = 0$ on $Y$.
Noam Elkies explained to us how his work in \cite{elkies} implies
that $X_6$ is the image of the Shimura curve quotient
$X(6)/\langle w_6\rangle$ parametrizing
principally polarized abelian surfaces with quaternionic multiplication
by the maximal order in the rational quaternion algebra of discriminant 6.
Moreover, the involution on $X_6$ induced from $u \mapsto \tfrac 1u$
corresponds to the involution $w_2 = w_3$ of $X(6)/\langle w_6 \rangle$.
\end{rem}
\section{Reduction of quadratic forms over polynomial rings}
\label{sec:reduction}
Here we will explain our approach to reducing quadratic forms over
polynomial rings, which we will then apply to Mestre conics.
Say $R = k[t_1, \dots, t_m]$ is a polynomial ring over a field $k$
of characteristic not 2.
Let $Q(x_1, \dots, x_n)$ be a quadratic form over $R$. Thus
we can write $Q$ as
\[ Q(x_1, \dots, x_n) = \sum_{i, j} f_{i,j}(t_1, \dots, t_m) x_i x_j, \]
where each $A_{i,j} = f_{i,j}(t_1, \dots, t_m) \in R$ and
$A_{j, i} = A_{i, j}$. Then $A = (A_{i,j}) \in M_n(R)$ is the Gram
matrix for $Q$ with respect to the standard basis $\{e_1, \dots, e_n \}$.
Define the polynomial degree $\deg_k Q$ of $Q$ to be
$\max_{(i,j)} \deg A_{i,j}$.
Consider the following two reduction problems:
(i) reduce $Q$ to an equivalent quadratic form $Q'$ over $R$
with minimal polynomial degree; or
(ii) reduce $Q$ to a quadratic form $Q'$ over $R$
which is equivalent over the field of fractions $F$ of $R$
with minimal polynomial degree.
(By equivalence of quadratic forms, we mean isomorphism up to
invertible scaling.)
In case (i), specializations of $Q$ and $Q'$ to
any $t_1, \dots, t_m \in k$ will be $k$-equivalent.
In case (ii), specializations of $Q$ and $Q'$ will merely
be $k$-equivalent for generic choices of $t_1, \dots, t_m \in k$.
It is really reduction problem (ii) that we are interested in,
as it allows for much greater possibilities for reducing our
quadratic forms. Note that merely diagonalizing $Q$ over
$F$ and clearing denominators to obtain a form over $R$
is not typically helpful in reducing the polynomial degree.
(Conversely, one cannot always diagonalize and maintain
minimal polynomial degree---see \cref{ex:red1}, but fortunately
for our Mestre conic of interest, our reduction process
will also diagonalize the form.)
We first describe the types of reduction
steps we will use.
\begin{enumerate}
\item Simple degree reduction. By a $k$-linear change of basis,
we may assume the maximal degree of the $f_{i,j}$'s is attained
for some of the diagonal terms with $j=i$. Say $f_{j_0,j_0}$ attains
the maximal degree of the $f_{i,j}$'s.
Write $v = \sum h_i(t_1, \dots, t_m) e_i$ where each $h_i \in R$.
Search for a choice of polynomials $h_i$ such that $\deg Q(v) <
\deg f_{j_0, j_0}$ and $h_{j_0}$ has nonzero constant term.
Now make the change of variable corresponding to changing
basis for the Gram matrix by replacing
$e_{j_0}$ in the standard basis with $v$. The resulting quadratic
form will have $Q(v)$ as the coefficient of $x_{j_0}^2$ and so we have
reduced the degree of this diagonal term.
In our Mestre conic case, the degrees of the diagonal terms
turn out to control the polynomial degree of $Q$, so reducing
degrees of diagonal terms is sufficient for us. In general,
to reduce the degree of the $x_i x_j$ term, one could
similarly search for vectors $v$, $v'$ with polynomial
coefficients such that $\deg B(v, v') < \deg_k Q$, where $B$ denotes the symmetric bilinear form associated to $Q$,
and then change bases by replacing $e_i$ with $v$ and
$e_j$ with $v'$.
\item Discriminant reduction. Let $\Delta = \Delta(t_1, \dots, t_m) \in R$
be the discriminant of $Q$. By changing variables over $F$,
one may be able to remove polynomial factors from $\Delta$.
For instance, $Q_1 : x_1^2 + t_1 x_1 x_2 + t_1^2 x_2^2$ has
$\Delta = -3t_1^2$, and the change of variables
$x_2 \mapsto \frac 1{t_1} x_2$ gives the quadratic form
$Q_2 : x_1^2 + x_1 x_2 + x_2^2$ with discriminant $-3$ (this step is checked symbolically in the short sketch following this list).
In general, since an invertible change of variables preserves the
square class of the discriminant, we might hope to remove square
factors appearing in $\Delta$.
First divide out any polynomial factors of the gcd of the coefficients
of $Q$.
Now suppose $g(t_1, \dots, t_m) \in R$ is irreducible over $k$ of
positive degree such that $g^2 \mid \Delta$. Then we can
attempt the following:
(a) Search for a polynomial vector $v$ such that $g^2 \mid Q(v)$,
with at least one of the coefficients of $v$ having a nonzero
constant term (e.g., one can take $g(t_1) = t_1$ and $v = e_2$
with the above example of $Q_1$).
Then we can try a change of variables corresponding to
replacing some basis vector $e_i$ with $\frac{v}{g}$
where the $i$-th coefficient of $v$ has nonzero
constant term. This change of variables could introduce $g$
in the denominator of some $x_i x_j$ coefficients for $j \ne i$.
However, if we are fortunate, as always happens in our Mestre
conic reduction, then the resulting
quadratic form $Q'$ will still have coefficients in $R$,
and we will have removed a factor of $g^2$ from the discriminant.
(b) Assume $n \ge 3$, and if $n > 3$ that we have the
higher divisibility condition $g^{r} \mid \Delta$ for some
$r > \frac n2$. Then one can
look for $F$-linearly independent vectors $v_1, \dots, v_{r} \in R^n$
such that for each $1 \le i \le r$, $g \mid Q(v_i)$
but $g \nmid v_i$ (i.e., $g$ does not divide every
polynomial coefficient of $v_i$). Let $j_1, \dots, j_{n-r}$
be such that
that $e_{j_1}, \dots, e_{j_{n-r}}, v_1, \dots, v_{r} $ is a basis of $F^n$.
Then the change of basis $\{ e_1, \dots, e_n \}$
to $\{ g e_{j_1}, \dots, g e_{j_{n-r}}, v_1, \dots, v_{r} \}$ transforms $Q$
to a quadratic form $Q'$ with an extra factor of $g^{2(n-r)}$ in its
discriminant, but now each coefficient of $Q'$ is
divisible by $g$. Thus the $F$-equivalent form
$g^{-1}Q'$ has coefficients in $R$, and we will have
removed a factor of $g^{2r-n}$ from $\Delta$.
\end{enumerate}
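To illustrate discriminant reduction on the toy example from step (2) above, here is a short symbolic check (a sketch of ours using sympy, with $t_1$ abbreviated to $t$; it is not part of the reduction machinery itself): the substitution $x_2 \mapsto x_2/t$ turns $x_1^2 + t x_1 x_2 + t^2 x_2^2$ into $x_1^2 + x_1 x_2 + x_2^2$ and removes the square factor $t^2$ from the discriminant.
\begin{verbatim}
from sympy import symbols, Poly, factor

t, x1, x2 = symbols('t x1 x2')

def disc(Q):
    # discriminant b^2 - 4ac of a binary quadratic form a*x1^2 + b*x1*x2 + c*x2^2
    P = Poly(Q, x1, x2)
    a, b, c = P.coeff_monomial(x1**2), P.coeff_monomial(x1*x2), P.coeff_monomial(x2**2)
    return factor(b**2 - 4*a*c)

Q1 = x1**2 + t*x1*x2 + t**2*x2**2
Q2 = Q1.subs(x2, x2/t)          # the substitution x2 -> x2/t from step (2a)

print(Q2)                       # x1**2 + x1*x2 + x2**2
print(disc(Q1), disc(Q2))       # -3*t**2   -3
\end{verbatim}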
Simple degree reduction preserves $R$-equivalence,
whereas discriminant reduction only preserves
$F$-equivalence. Our strategy is to try simple
degree reduction, then discriminant reduction, and
repeat until the discriminant is squarefree, and then
finish with simple degree reduction.
First we give a baby example of simple degree reduction (1).
Below and in the next section, $e_1, \dots, e_n$ will denote
the standard basis of the relevant vector space, and $A_i$ will denote
the Gram matrix for $Q_i$ with respect to $\{ e_1, \dots, e_n \}$.
\begin{ex} \label{ex:red1}
Let $R=\mathbb Q[t]$, and let $\{ e_1, e_2 \}$ be the standard basis for $M = R^2$.
Let $Q_1 = Q$ be the quadratic form on $M$ given by
\[ Q_1(x,y) = \left(t^{4} + 1\right) x^{2} + \left(2 t^{3} + 2 t\right) x y + \left(t^{2} - 1\right) y^{2}. \]
We can perform simple degree reduction as follows.
We want to lower the degree of the
$x^2$-coefficient, so let $v = a_1 e_1 + (a_2 + b_2 t)e_2$.
Then
\[ Q_1(v) = (a_1 + b_2)^2 t^4 + 2a_2(a_1+b_2) t^3 +
(a_2^2 + 2a_1b_2 - b_2^2)t^2 + 2(a_1-b_2)a_2 t + (a_1^2 - a_2^2). \]
Hence setting $b_2 = -a_1$ makes $Q_1(v)$ a degree 2 polynomial
in $t$ with $t^2$-coefficient $(a_2^2-3a_1^2)$, which we cannot make
0 for nontrivial choices of $a_1, a_2 \in \mathbb Q$. However, we can choose
to make either the $t^1$- or $t^0$-coefficient 0 by taking $a_2 = 0$ or
$a_2 = a_1$. Let us take $v_1 = e_1 - te_2$ so $Q_1(v_1) =
1-3t^2$, and let
$A_2$ be the Gram matrix for $Q_1$ with respect to $\{ v_1, e_2 \}$.
Let $Q_2$ be the associated quadratic form, i.e., the quadratic form
which has Gram matrix $A_2$ with respect to $\{ e_1, e_2 \}$.
In other words, $Q_2$ is obtained from $Q_1$ by the change of
variables $x \mapsto x$, $y \mapsto -tx + y$.
Then
\[ Q_2(x,y) = \left(1 -3 t^{2}\right) x^{2} + \left(4 t\right) x y + \left(t^{2} - 1\right) y^{2}. \]
Note that $Q_2$ has discriminant $12t^4 + 4$, so
we cannot hope to reduce the degree any further over $R$.
We remark that straightforward diagonalization of $Q_1$ gives
$ \left(t^{4} + 1\right) x^{2} - \frac{3t^4+1}{t^4+1}y^2$
and for $Q_2$ gives $\left(1 -3 t^{2}\right) x^{2} +\frac{3t^4+1}{3t^2-1} y^2$.
Since the discriminant is irreducible over $\mathbb Q$, one cannot
diagonalize over $R$ and have polynomial coefficients of degree $< 4$.
\end{ex}
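The computations in \cref{ex:red1} can be reproduced with a few lines of computer algebra. The following SymPy sketch (ours, for illustration only) first solves the coefficient conditions for $v = a_1 e_1 + (a_2 + b_2 t)e_2$ and then carries out the change of basis on the Gram matrix.
\begin{verbatim}
from sympy import symbols, expand, solve, Matrix, factor

t, a1, a2, b2 = symbols('t a1 a2 b2')
Q1 = lambda X, Y: (t**4 + 1)*X**2 + (2*t**3 + 2*t)*X*Y + (t**2 - 1)*Y**2

# Conditions for deg_t Q1(v) < 4 with v = a1*e1 + (a2 + b2*t)*e2:
val = expand(Q1(a1, a2 + b2*t))
print(solve([val.coeff(t, 4), val.coeff(t, 3)], [a2, b2], dict=True))
# forces b2 = -a1, with a2 free

# Change of basis {e1, e2} -> {v1, e2}, v1 = e1 - t*e2 (columns of P):
A1 = Matrix([[t**4 + 1, t**3 + t], [t**3 + t, t**2 - 1]])
P = Matrix([[1, 0], [-t, 1]])
A2 = (P.T * A1 * P).applyfunc(factor)
print(A2)                                  # Gram matrix of Q2
print(factor(A1.det()), factor(A2.det()))  # Gram determinants: both -(3*t**4 + 1)
\end{verbatim}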
A slightly more interesting example of (1) is given in the
reduction of the Mestre conic from $Q_1$ to $Q_2$ in
\cref{subsection:conic-reduction-generic}.
Examples of (2a) are also given by the reductions from
$Q_2$ to $Q_3$, $Q_3$ to $Q_4$, and $Q_5$ to $Q_6$ in the
same section. Then the reduction from $Q_6$ to $Q_7$
gives an example of (2b).
All of these types of reduction involve finding a vector
$v = \sum_i h_i(t_1, \dots, t_m)e_i$ so that the coefficients
of $Q(v)$ satisfy certain conditions (e.g., no coefficients
above a certain degree, or whatever relations
are imposed upon the coefficients by a divisibility
condition).
In general, this may be computationally challenging,
as it involves
finding simultaneous solutions of many quadratic
equations in many variables to find suitable $h_i$'s.
As we do not have a general algorithm that will provably
minimize the polynomial degree, rather than trying to formulate
a precise reduction algorithm, we will just describe a few techniques
which can be used to lessen the computational difficulties
of these reduction steps in practice. The first two techniques
apply to both (1) and (2). The subsequent techniques
are just for discriminant reduction.
\begin{itemize}
\item \emph{Inductively try more complicated polynomial
combinations of basis vectors.} We begin by guessing
certain forms for the polynomial coefficients $h_i$ of
$v$. Each term of some $h_i$ with an unknown coefficient
adds another variable to solve for in finding a $Q(v)$
satisfying our desired criteria. E.g., in \cref{ex:red1}
we need to make certain expressions in the
unknown coefficients $a_1, a_2, b_2$ zero to reduce the
degree. To minimize the number of unknowns, we
begin by guessing as simple forms for the $h_i$'s as
we can hope for, and then try adding more terms as needed.
In \cref{ex:red1}, since we wanted to remove
$t^4$ from the coefficient of $x^2$, and the coefficient of $y^2$
is degree 2 in $t$, it makes sense to consider constant
multiples $h_1(t)$ of $e_1$ plus linear multiples
$h_2(t)$ of $e_2$ for $v$. In fact, we might have first tried
$h_1(t) = a_1$ and $h_2(t) = b_2 t$, and then if this
were not sufficient to remove the $t^4$ term, then
we would try including a constant term in $h_2(t)$.
If this were still unsuccessful, we could try letting $h_1(t)$
be a linear polynomial, which would necessitate $h_2(t)$
having degree 3. While this is of course not needed such
simple examples as \cref{ex:red1}, it may be necessary
in the presence of additional variables (both more $x_i$'s
and more $t_j$'s).
\item \emph{Look for coefficient conditions that factor.}
Say for instance that $m=2$, and we guess linear
forms $h_i(t_1,t_2) = a_i + b_i t_1 + c_i t_2$ for each $h_i$.
Then our desired conditions on $Q(v)$ may be something
like $\deg Q(v) < 4$ or $(t_1t_2 + 1)^2 \mid Q(v)$.
In the former case, say, we want to make each
$t_1^j t_2^{4-j}$ term of $Q(v)$ vanish. That gives 5
quadratic equations in $3n$ unknowns. How can we solve
this?
If our quadratic form is meant to reduce, we might hope
it does for algebraically simple reasons. If we are fortunate,
then some of these quadratic equations we need to solve
may factor, as in the case of the $t^4$-coefficient of $Q_1(v)$
in \cref{ex:red1}. If we are even more fortunate, this
forces one of our unknowns to be a certain linear combination
of other unknowns, and we can reduce the number of unknowns
and repeat. We are fortunate in this way in the case of the
Mestre conic we reduce in
\cref{subsection:conic-reduction-generic}.
\item \emph{Order of discriminant factor removal.}
In removing discriminant factors $g^r$, it may be easier to
remove certain factors before others. On one hand,
it may help to try to start with factors $g^2$ where $g$ is of small
degree, or $g$ only involves a small number of the variables
$t_1, \dots, t_m$,
to more easily find $h_i$ such that $g^2 | Q(v)$ or $g | Q(v)$.
For instance, if $m=2$, $g(t_1, t_2) = t_1$ and we want
$g^2 | Q(v)$, then any $t_1^i t_2^j$ term in $Q(v)$ with
$i \le 1$ must vanish. However, the main issue we encountered
in reducing our Mestre conic was that, at a given stage,
attempting to remove one factor may lead to
quadratic coefficient equations which factor, but attempting to
remove other factors does not.
Thus for (2) we propose a process roughly of the following form.
Try the simplest possible choices for $h_i$'s for removing
different factors $g^r$ of the discriminant.
Then pursue the ones that lead to linear relations among the
unknowns, inductively
adding more terms, and repeat until a factor is removed or
a bound for the complexity of the $h_i$'s is reached.
This approach is what led us to the (otherwise unexplained)
order of removing discriminant factors we use in
\cref{subsection:conic-reduction-generic}.
\item \emph{Change variables to remove constant terms.}
If we want to remove a factor of say $(t_1-3)^2$ from
the discriminant, writing down the divisibility conditions
is a bit easier in practice if we first change
the polynomial variables $t_1 \mapsto t_1 + 3$, so
one is asking about removing a factor of $t_1^2$ from
a transformed form $Q'$. For an example,
see the reduction of $Q_6$ in
\cref{subsection:conic-reduction-generic}.
\item \emph{Examine minors.}
If some factor $g^r$ divides the discriminant of $Q$,
depending on $n$ and $r$, it may not be clear whether
we should try (2a) or (2b). In this case, one can examine
the (determinant) minors of the Gram matrix.
If some power of $g$ divides
sufficiently many minors, this suggests that (2b) may be possible.
Furthermore, if many of the diagonal minors are divisible
by $g$ then we can try looking for vectors $v_i$ as in (2b)
whose projection to $e_j$ is 0, for each $j$ in a set corresponding
to the minors. E.g., if $r=n-1$ and each diagonal minor is divisible
by $g$, then we can look for vectors $v_1, \dots, v_{n-1}$ such that
the projection of $v_i$ to $e_i$ is 0 for each $i$. This helps
reduce the number of unknowns we need to use, and is used
in the reduction of $Q'_6$ in
\cref{subsection:conic-reduction-generic}.
\end{itemize}
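For the last technique above, one possible way to organize the minor computations is sketched below (our own helper, assuming SymPy; the function name and interface are ours). For an $n\times n$ Gram matrix it reports, for each index $i$, whether the diagonal $(n-1)\times(n-1)$ minor obtained by deleting the $i$-th row and column is divisible by $g$.
\begin{verbatim}
from sympy import Matrix, rem

def diagonal_minor_divisibility(gram, g, *gens):
    # For each i, test whether the principal minor obtained by deleting
    # row and column i of the Gram matrix is divisible by g.
    A = Matrix(gram)
    n = A.shape[0]
    out = {}
    for i in range(n):
        keep = [r for r in range(n) if r != i]
        out[i] = rem(A[keep, keep].det(), g, *gens) == 0
    return out
\end{verbatim}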
\section{Reducing the Mestre conic} \label{section:conic-reduction}
\subsection{Mestre's construction of genus 2 curves} \label{section:mestre-conic-background}
Suppose $k \subseteq \mathbb C$, and $(I_2, I_4, I_6, I_{10}) \in k^4$ are
Igusa--Clebsch invariants for a genus 2 curve $\mathcal C/\mathbb C$ without
extra automorphisms, i.e., $\Aut_\mathbb C(\mathcal C) \simeq C_2$.
(In this section only, we use $\mathcal C$ rather than $C$ to denote
a genus 2 curve to avoid conflict with the notation for Clebsch
invariants.)
In \cite{mestre}, Mestre gave a method to determine whether $\mathcal C$
is defined over $k$, and if so, find a model. Mestre worked in terms
of Clebsch invariants $(A,B,C,D)$ rather than Igusa--Clebsch invariants.
One can translate between these two sets of invariants via
\begin{align*}
&I_2 = -120 A, \quad I_4 = 90 (-8 A^{2} + 75 B), \quad
I_6 = 540 (16 A^{3} - 200 A B + 375 C) \\
&I_{10} = -162 (384 A^{5} - 6000 A^{3} B + 18750 A B^{2} - 10000 A^{2} C + 37500 B C + 28125 D).
\end{align*}
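For convenience, this change of invariants is easy to encode; the following is our direct transcription of the displayed formulas into SymPy.
\begin{verbatim}
from sympy import symbols

def igusa_clebsch_from_clebsch(A, B, C, D):
    # Direct transcription of the displayed translation of invariants.
    I2  = -120*A
    I4  = 90*(-8*A**2 + 75*B)
    I6  = 540*(16*A**3 - 200*A*B + 375*C)
    I10 = -162*(384*A**5 - 6000*A**3*B + 18750*A*B**2
                - 10000*A**2*C + 37500*B*C + 28125*D)
    return I2, I4, I6, I10

A, B, C, D = symbols('A B C D')
I2, I4, I6, I10 = igusa_clebsch_from_clebsch(A, B, C, D)
\end{verbatim}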
Mestre defines two elements $L$ and $M$ of $\mathbb Q(A,B,C,D)[x_1,x_2,x_3]$ as
$$L = \sum_{\substack{1 \leq i,j \leq 3}} L_{ij}x_ix_j\quad\text{and}\quad M = \sum_{\substack{1 \leq i,j,k \leq 3}} M_{ijk}x_ix_jx_k,$$
with
\begin{align*}
&\quad L_{11} = 2C + \tfrac{1}{3}AB &\quad L_{22} &= D&\\
&\quad L_{12} = \tfrac{2}{3}(B^2 + AC) &\quad L_{23} &= \tfrac{1}{3}B(B^2 + AC) + \tfrac{1}{3}C\left(2C + \tfrac{1}{3}AB\right)&\\
&\quad L_{13} = D
&\quad L_{33} &= \tfrac{1}{2}BD + \tfrac{2}{9}C(B^2 + AC),&
\end{align*}
\begin{flalign*}
&\quad M_{111} = \tfrac{2}{9}\left(A^2C - 6BC + 9D\right)&\\
&\quad M_{112} = \tfrac{1}{9}\left(2B^3 + 4ABC + 12C^2 + 3AD\right)&\\
&\quad M_{113} = \tfrac{1}{9}\left(AB^3 + \tfrac{4}{3}A^2BC + 4B^2C + 6AC^2 + 3BD\right)&\\
&\quad M_{122} = \tfrac{1}{9}\left(AB^3 + \tfrac{4}{3}A^2BC + 4B^2C + 6AC^2 + 3BD\right)&\\
&\quad M_{123} = \tfrac{1}{18}\left(2B^4 + 4AB^2C + \tfrac{4}{3}A^2C^2 + 4BC^2 + 3ABD + 12CD\right)&\\
&\quad M_{133} = \tfrac{1}{18}\left(AB^4 + \tfrac{4}{3}A^2B^2C + \tfrac{16}{3}B^3C + \tfrac{26}{3}ABC^2 + 8C^3 + 3B^2D + 2ACD\right)&\\
&\quad M_{222} = \tfrac{1}{9}\left(3B^4 + 6AB^2C + \tfrac{8}{3}A^2C^2 + 2BC^2 - 3CD\right)&\\
&\quad M_{223} = \tfrac{1}{18}\left(-\tfrac{2}{3}B^3C - \tfrac{4}{3}ABC^2 - 4C^3 + 9B^2D + 8ACD\right)&\\
&\quad M_{233} = \tfrac{1}{18}\left(B^5 + 2AB^3C + \tfrac{8}{9}A^2BC^2 + \tfrac{2}{3}B^2C^2 - BCD + 9D^2\right)&\\
&\quad M_{333} = \tfrac{1}{36}\left(-2B^4C - 4AB^2C^2 - \tfrac{16}{9}A^2C^3 - \tfrac{4}{3}BC^3 + 9B^3D + 12ABCD + 20C^2D\right),&
\end{flalign*}
and
\begin{align*}
&L_{ij} = L_{ji}, \, M_{ijk} = M_{jik} = M_{ikj}.
\end{align*}
The \textit{Mestre conic} and the \textit{Mestre cubic} associated to
$\mathcal C$ (or equivalently, the Clebsch or Igusa--Clebsch invariants)
are defined to be the projective varieties $L = 0$ and $M = 0$ over $\mathbb Q(A,B,C,D)$. In a slight abuse of terminology, we will occasionally say that $L$ itself is the Mestre conic, and similarly for $M$.
\begin{thm}[\cite{mestre}] \label{thm:mestre-conic}
Suppose $(A,B,C,D) \in k^4$ are the Clebsch invariants of a
genus 2 curve $\mathcal C/\mathbb C$ without extra automorphisms.
Then $\mathcal C$ is defined over $k$ if and only if the associated
Mestre conic $L = 0$ in $\mathbb P^2(k)$ has
a $k$-rational point.
\end{thm}
If the Mestre conic associated to $\mathcal C/\mathbb C$ has $k$-rational points then those rational points are parameterized by a single projective parameter which we will call $x$. We will write $x_i = x_i(x)$ with $i = 1,2,3$ to denote this parametrization.
\begin{thm}[\cite{mestre}] \label{thm:mestre-cubic}
Suppose $(A,B,C,D) \in k^4$ are the Clebsch invariants of a
genus 2 curve $\mathcal C/\mathbb C$ without extra automorphisms
and the associated Mestre conic $L = 0$ has a $k$-rational point.
Then a model for $\mathcal C$ over $k$ is given by
$$y^2 = M(x_1(x), x_2(x), x_3(x)),$$
where $M=0$ is the associated Mestre cubic.
\end{thm}
Finally, we elaborate on the condition that $\mathcal C/\mathbb C$ has
no extra automorphisms. The possibilities for
extra automorphisms of genus 2 curves were determined by Bolza.
The reduced automorphism group $\Aut^\red_\mathbb C(\mathcal C)$
is $\Aut_\mathbb C(\mathcal C)$ modulo the hyperelliptic involution.
If $\mathcal C$ has extra automorphisms, then
$\Aut^\red_\mathbb C(\mathcal C)$ either contains an involution or has order 5.
The latter case happens exactly when
the (Clebsch or Igusa--Clebsch) invariants of $\mathcal C$
are $(0 : 0 : 0 : 1) \in \mathbb P^3_{1,2,3,5}$.
As explained in \cite{mestre},
the Mestre conic attached to a genus 2 curve $\mathcal C/\mathbb C$
is singular if and only if the reduced automorphism group of
$\mathcal C$ contains an involution. Thus the condition that
$\mathcal C/\mathbb C$ has no extra automorphism can be restated
as: the Mestre conic $L=0$ is nonsingular and $I_2$, $I_4$ and $I_6$
are not all 0.
\subsection{The general case} \label{subsection:conic-reduction-generic}
Here we study the Mestre conic $L$ associated to a point $(z,g,h)$
of $Y$, i.e., to Igusa--Clebsch invariants
$\left(24 g + 6 : 9 g^{2} : 81 g^{3} + 18 g^{2} + 36 h : 4 h^{2}\right)$.
After scaling by $2^4 \cdot 3^7 \cdot 5^{14}$, the Mestre conic $L : \sum_{i, j = 1}^3 L_{ij} x_i x_j = 0$ defined above has coefficients
\begin{align*}
L_{11} &= 189843750 (-96 g^{3} - 337 g^{2} - 108 g + 400 h - 9) \\
L_{12} &= -2531250 (-144 g^{4} - 1299 g^{3} - 754 g^{2} + 2000 g h - 144 g + 500 h - 9) \\
L_{13} &= L_{22} = -3750 (1944 g^{5} + 40905 g^{4} + 36990 g^{3} - 68400 g^{2} h + 11835 g^{2} - 43200 g h \\
& \qquad \qquad + 50000 h^{2} + 1620 g - 5400 h + 81) \\
L_{23} &= 450 (324 g^{6} + 14931 g^{5} + 19395 g^{4} - 25800 g^{3} h + 9105 g^{3} - 30100 g^{2} h + 2020 g^{2} \\
& \qquad \qquad - 8400 g h + 10000 h^{2} + 216 g - 700 h + 9) \\
L_{33} &= - (2916 g^{7} + 283338 g^{6} + 499041 g^{5} - 496800 g^{4} h + 319140 g^{4} - 915300 g^{3} h \\
& \qquad \qquad + 525000 g^{2} h^{2} + 101160 g^{3} - 426300 g^{2} h + 500000 g h^{2} + 17214 g^{2} - 76800 g h \\
& \qquad \qquad + 100000 h^{2} + 1512 g - 4800 h + 54)
\end{align*}
The discriminant of $L$, by which we mean the determinant of the Gram
matrix, is then
\[ \disc(L) = 2^7 \cdot 3^3 \cdot 5^{22} \cdot h^{2} (8 h-9 g^{2})^{2} z^2. \]
Set $Q_1 = L$ and let $A_1$ be the Gram matrix of $Q_1$
with respect to the standard basis $\{e_1, e_2, e_3 \}$.
We will now perform a series of reductions on the Mestre conic
using the techniques described in the previous section.
Note that the $x_1^2$, $x_2^2$ and $x_3^2$ coefficients of
$L = Q_1$ are respectively degree 3, 5 and 7 polynomials in
$g$ (and degrees 1, 2 and 2 in $h$).
First we want to try to reduce the degree in $g$ of the $x_3^2$
coefficient. Consider
$v_1 = a_1 g^2 e_1 + a_2 g e_2 + e_3$, where $a_1$, $a_2$ denote
rational variables. Then $Q_1(v_1)$ has degree 7 in $g$, and
the $g^7$-coefficient is $-2916(2500a_1 - 50a_2 + 1)^2$. So set
$a_2 = 50a_1 + \frac1{50}$. This makes the $g^6$-coefficient of
$Q_1(v_1)$ equal
$-\frac{3^5 5^4}2(1250a_1 - 1)^2$. Taking $a_1 = \frac 1{1250}$ gives
\begin{multline*} Q_1(v_1) = -2916 g^{5} - 24354 g^{4} + 10800 g^{3} h - 1500000 g^{2} h^{2} - 21483 g^{3}
+ 78000 g^{2} h \\
+ 40000 g h^{2} - \frac{14259}{2} g^{2} + 39000 g h - 100000 h^{2} - 1026 g + 4800 h - 54, \end{multline*}
where
$v_1 = \frac 1{1250}g^2 e_1 + \frac 3{50}ge_2 + e_3$.
Thus we now consider the Gram matrix $A_2$ for $Q_1$
with respect to the basis $\{ e_1, e_2, v_1 \}$. Let $Q_2$ be
the resulting quadratic form from this change of variables, i.e.,
$Q_2(v) = {}^t v A_2 v$. In particular, the $x_3^2$-coefficient
of $Q_2$ is $Q_1(v_1)$.
The $x_1^2$, $x_2^2$ and $x_3^2$ coefficients of $Q_2$ are
degrees 3, 5 and 5 in $g$ (and no other coefficient has higher
degree). We may try to reduce the coefficient degrees for
$x_2^2$ and $x_3^2$ by replacing $e_2$ and $e_3$
with vectors of the form $a_1 g e_1 + e_2$ and
$b_1 g e_1 + e_3$. In this way, one can reduce $Q_2$
to a quadratic form whose coefficients are elements of
$\mathbb Q[g,h]$ of degree $\le 4$, but there are no obvious
ways to further reduce the degree from there, and this
reduction does not make the next step
any easier, so we will not do this.
Instead, we will next remove a polynomial factor from
the discriminant of $Q_2$, which is a rational multiple
of $h^2(8h-9g^2)^2 z^2$. The $h^2$ factor has the
lowest degree, so we will begin with that. We will find a vector
$v_2 = (a_1 + b_1 g)e_1 + a_2 e_2 + a_3 e_3$ such that
$Q_2(v_2)$ is divisible by $h^2$. This is essentially the simplest
polynomial combination of standard basis vectors where we
can hope to kill off all of the $g^j$ terms in $Q_2(v_2)$, and it
turns out to be sufficient.
The constant term of $Q_2(v_2)$ is
$-54 \, {\left(5625 \, a_{1} - 75 \, a_{2} + a_{3}\right)}^{2}$, so we
set $a_3 = 75 a_2 - 5625a_1$. Now we kill off the highest
degree $g^j$ terms.
Then the $g^5$-coefficient of
$Q_2(v_2)$ is $-1822500 {\left(225 a_{1} - a_{2} - 100 b_{1}\right)}^{2}$. Set $a_2 = 225a_1 - 100b_1$. Then the $g^4$-coefficient
is $-118652343750 {\left(3 a_{1} - b_{1}\right)}^{2}$. Setting
$b_1 = 3a_1$ yields $Q_2(v_2)$ is a multiple of $h^2$. Specifically,
take $a_1 = 2^{-2} \cdot 3^{-2} \cdot 5^{-6}$, and
then $Q_2(v_2) = -2(300 g^{2} + 2 g + 3)h^{2}$,
where $v_2 = \frac 1{562500}((1+3g)e_1 - 75e_2 - 11250e_3)$.
Let $A_3$ be the Gram matrix of $Q_2$ with respect to the basis
$\{ e_1, e_2, \frac 1h v_2 \}$, and $Q_3$ the associated quadratic
form. So $Q_3$ is not $\mathbb Q[g,h]$-equivalent to $Q_2$,
but after specializing to any $g, h \in \mathbb Q$ with $h \ne 0$, the forms
$Q_2$ and $Q_3$ are $\mathbb Q$-equivalent.
Now we will remove the $(8h-9g^2)^2$ factor from the determinant.
The degrees in $g$ of the $x_1^2$, $x_2^2$ and $x_3^2$ coefficients
of $Q_3$ are 3, 5 and 3.
Let $v_3 = (a_1 + b_1g)e_1 + a_2 e_2 + (a_3 + b_3g)e_3$.
We want $Q_3(v_3)$ to be a multiple of $(8h-9g^2)^2$.
To kill the constant term of $Q_3(v_3)$, we need to set
$a_3 = 225a_2 - 16875a_1$. Then to kill the $h$-coefficient,
we need $a_2 = 75a_1$. Then to kill the $g^2$-coefficient,
$b_3 = 67500a_1 - 16875b_1$. At this point there are only
nonzero $g^5$, $g^4$, $g^2h$ and $h^2$ terms, so for
$Q_3(v_3)$ to be a multiple of $(8h-9g^2)^2$ it needs to be
a rational multiple and the $g^5$ term must vanish. This is
accomplished with $b_1 = \frac 32 a_1$.
In summary
$v_3 = a_1 ((1+\frac 32 g)e_1 + 75e_2 + \frac{84375}2g e_3)$.
Taking $a_1 = 2 \cdot 3^{-1} \cdot 5^{-6}$ then gives
$Q_3(v_3) = -30(8h-9g^2)^2$. Now let $A_4$ be the
Gram matrix of $-Q_3$ with respect to the basis
$\{ \frac {e_1}{1875}, \frac {v_3}{8h-9g^2}, e_3 \}$,
and $Q_4$ the associated quadratic form,
\begin{multline*}
Q_4: \left(5184 g^{3} + 18198 g^{2} + 5832 g - 21600 h + 486\right) x_{1}^{2} + \left(612 g + 108\right) x_{1} x_{2} + 30 x_{2}^{2} \\+ \left(288 g^{2} + 684 g - 4000 h + 108\right) x_{1} x_{3} + \left(-240 g + 12\right) x_{2} x_{3} + \left(600 g^{2} + 4 g + 6\right) x_{3}^{2}.
\end{multline*}
Specializing $g, h$ to any rationals such that $h \ne 0$
and $8h \ne 9g^2$, $Q_4$ is $\mathbb Q$-equivalent to the original
Mestre conic $L$. The discriminant of $Q_4$ is $-9600z^2$.
Now there is no obvious way to further reduce the degree,
and indeed, it seems that there is not much further simplification
that can be done over $\mathbb Q(g,h)$. The reduction we
perform next will not preserve $\mathbb Q$-equivalence of quadratic
forms (even assuming $z \ne 0$) if $g, h$ are rational
but $z$ is not.
Let $Q_5$ be the quadratic form over $\mathbb Q[m,n]$ obtained by
converting $Q_4$ from $(g,h)$ to $(m,n)$ via \eqref{eq:gh2mn}.
Let $A_5$ be the Gram matrix of $Q_5$ with respect
to the standard basis.
The coefficients of $Q_5$ are elements of
$\mathbb Q[m,n]$ of degree $\le 6$, and the discriminant is
\[ -\frac{96}{25} n^{2} (m^{2} - 5 n^{2})^{2} (m^{2} - 5 n^{2} - 5)^{2}. \]
Let $v_5 = a_1 e_1 + a_2 e_2 + a_3 e_3$
be a rational linear combination of the standard basis vectors.
Then $Q_5(v_5)$ has constant term
$\frac{3}{250} {\left(63 a_{1} - 50 a_{2} - 70 a_{3}\right)}^{2}$.
Setting $a_3 = \frac{1}{70}(63 a_{1} - 50 a_{2})$ then gives that
$Q_5(v_5) = p_1(m,n) (m^2-5n^2)$ for some polynomial
$p_1(m,n)$ with constant term
$5 {\left(441 a_{1} - 50 a_{2}\right)}^{2}$.
Hence we set $a_1 = 50$ and $a_2 = 441$ (which makes $a_3 = -270$)
to get
\[ Q_5(v_5) = 30 {\left(16 m^{2} - 80 n^{2} + 2729\right)} {\left(m^{2} - 5 n^{2}\right)}^{2}. \]
Let $A_6$ be the Gram matrix of $Q_5$ with respect to the basis
$\{ \frac{v_5}{m^2-5n^2}, e_2, e_3 \}$, and $Q_6$ the associated
quadratic form, which has polynomial degree 4 and discriminant
$-9600 n^{2} (m^{2} - 5 n^{2} - 5)^{2}$.
Next one might try to remove the $n^2$ factor from the discriminant,
but evaluating $Q_6$ on simple combinations such as
$v = (a_1 + b_1 m) e_1 + a_2 e_2 + a_3 e_3$ leads to polynomials
without linear factors in $a_1, a_2, a_3, b_1$ for the coefficients
of powers of $m$. So it is not immediately clear how to find some
$v$ such that $Q_6(v)$ is divisible by $n^2$. On the other hand,
the diagonal minors of $A_6$ are divisible by $(m^{2} - 5 n^{2} - 5)$
which suggests we can remove a factor of $(m^{2} - 5 n^{2} - 5)$
from the discriminant by working with combinations of just 2 of the
standard basis vectors at a time.
To make it easier to look for multiples of $(m^{2} - 5 n^{2} - 5)$,
we first make the change of variables $m=m+5$, $n = n+2$.
This changes $(m^{2} - 5 n^{2} - 5)$ to $(m^{2} - 5 n^{2} + 10 m - 20 n)$,
which has no constant term. Let $Q'_6$ be the resulting
quadratic form. Now one can look for rational linear combinations $v$
of pairs of the basis vectors $e_1, e_2$ and $e_3$ such that
$Q_6'(v)$ has no constant term. In particular,
$u_1 = e_1 - 53 e_2$ and $u_2 = 11 e_2 - 15 e_3$ work and
both $Q_6'(u_1)$ and $Q_6'(u_2)$ are divisible by
$(m^{2} - 5 n^{2} + 10 m - 20 n)$. Let $A_7'$ be the
Gram matrix for $\tfrac{1}{6}(m^{2} - 5 n^{2} + 10 m - 20 n)^{-1}Q_6'$ with
respect to the basis
$\{ \frac {u_1}4, \frac{u_2}5, (m^{2} - 5 n^{2} + 10 m - 20 n)e_2 \}$,
and $Q_7'$ the resulting quadratic form. Let $Q_7$ and $A_7$
denote the result of reverting $Q_7'$ and $A_7'$ back to our original
variables $m=m-5$, $n = n-2$. Then $Q_7$ is:
\begin{multline*} Q_7 : 5x_{1}^{2} + 2m x_{1} x_{2} + \left(m^{2} - 5 n^{2} - 4\right) x_{2}^{2} \\
+ \left(4 m^{2} - 20 n^{2} - 20\right) x_{2} x_{3} + \left(5 m^{2} - 25 n^{2} - 25\right) x_{3}^{2} .
\end{multline*}
The discriminant of $Q_7$ is
$-25 n^{2} (m^{2} - 5 n^{2} - 5)$.
Now we can use a vector of the form
$v = (a_1 + b_1 m) e_1 + a_2 e_2 + a_3 e_3$ to remove the
$n^2$ factor from the determinant. Explicitly, by zeroing out
the coefficients of powers of $m$ in $Q_7(v)$, we find that
$Q_7(v_7) = -25 n^2$ where $v_7 = m e_1 - 5e_2 + 2e_3$.
Let $A_8$ be the Gram matrix for $\tfrac{1}{5} Q_7$
with respect to the basis $\{ e_1, \frac{v_7}{n}, e_3 \}$.
Then the associated quadratic form is
\[ Q_8 : x_1^2 - 5 x_2^2 + (m^2 - 5n^2 - 5) x_3^2. \]
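As a sanity check on this final step, the congruence can be verified directly with a computer algebra system; the following SymPy sketch (ours) rebuilds the Gram matrix of $Q_7$ from the displayed coefficients and recovers the Gram matrix $A_8$ of $Q_8$.
\begin{verbatim}
from sympy import Matrix, Rational, simplify, symbols

m, n = symbols('m n')
A7 = Matrix([[5, m,                     0                    ],
             [m, m**2 - 5*n**2 - 4,     2*m**2 - 10*n**2 - 10],
             [0, 2*m**2 - 10*n**2 - 10, 5*m**2 - 25*n**2 - 25]])
v7 = Matrix([m, -5, 2])
P  = Matrix.hstack(Matrix([1, 0, 0]), v7 / n, Matrix([0, 0, 1]))
A8 = (Rational(1, 5) * P.T * A7 * P).applyfunc(simplify)
print(A8)   # the diagonal matrix with entries 1, -5, m**2 - 5*n**2 - 5
\end{verbatim}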
For any $m, n \in \mathbb Q$ such that $\disc L \ne 0$
the form $Q_8 \in \mathbb Q[x_1, x_2, x_3]$ is similar to $Q_1$.
Thus for such $m, n$, the Mestre conic $L$ has a rational point
if and only if $\pm (m^2 - 5n^2 - 5)$ is a norm from $\mathbb Q(\sqrt 5)$.
(Note that $-1$ is a norm from $\mathbb Q(\sqrt 5)$.)
Consequently, the analogue holds for any extension $k \supseteq \mathbb Q$.
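Over the base case $k = \mathbb Q$ the norm condition is easy to test in practice. The following sketch is our own (it is not taken from the original computation): since $\mathbb Q(\sqrt 5)/\mathbb Q$ is cyclic, the Hasse norm theorem reduces the condition to local ones, and these amount to requiring that every prime $p \equiv \pm 2 \pmod 5$ (the primes inert in $\mathbb Q(\sqrt 5)$) divides the rational number in question to an even power; the sign and the power of $5$ impose no condition since $-1$ and $5$ are norms.
\begin{verbatim}
from fractions import Fraction
from sympy import factorint

def is_norm_from_Q_sqrt5(c):
    # True iff the nonzero rational c is a norm from Q(sqrt(5)):
    # every prime p = +-2 (mod 5) must divide c to an even power.
    c = Fraction(c)
    assert c != 0
    exps = factorint(c.numerator * c.denominator)  # same square class as c
    return all(e % 2 == 0 for p, e in exps.items() if p % 5 in (2, 3))

# Examples: -1 and 11 = 4^2 - 5 are norms, 3 is not.
print(is_norm_from_Q_sqrt5(-1), is_norm_from_Q_sqrt5(11),
      is_norm_from_Q_sqrt5(3))    # True True False
\end{verbatim}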
\subsection{Points at infinity} \label{sec:infty}
Here we consider $k$-rational points on $Y_-(5)$ not coming from
affine coordinates $(z,g,h) \in Y$. By \cref{prop:necess-cond},
if such a point corresponds to a genus 2 curve $C$,
there are two possibilities for the Igusa--Clebsch invariants:
(1) $(0 : 0 : 0 : 1)$ when $\sqrt 5 \in k$, and
(2) $(8 : 1 : 3 : s)$ where $s \in k^\times$ and $3125s^2-8s$
is a square in $k$.
We wish to determine when $C$ can be defined over $k$
in these cases. In case (1), $C$ is already defined over $\mathbb Q$
with a model $y^2 = x^5 - 1$. So we only need to analyze
case (2).
Let us consider the Mestre conic for Igusa--Clebsch invariants
as in (2).
After replacing $x_1$ with $2^{-1} \cdot 3^2 \cdot 5^3 x_1$,
$x_2$ with $2 \cdot 3^3 \cdot 5^5 x_2$ and $x_3$ with
$2^2 \cdot 3^4 \cdot 5^7 x_3$, the Gram matrix $A_1$ for the
Mestre conic $Q_1 = L$ is
\[ A_1 = \left(\begin{array}{rrr}
-1 & 2 & -3125 s + 2 \\
2 & -6250 s + 4 & 4 \\
-3125 s + 2 & 4 & -43750 s + 4
\end{array}\right) \]
Then
\[ \det A_1 = 2 \cdot 5^{10} (3125s - 8) s^2. \]
Then
$Q_1(a_1e_1 + a_2e_2 + a_3e_3)$ has constant term
$-(a_1 - 2 a_2 + 2 a_3)^{2}$.
Now letting $A_2$ be the Gram matrix with respect to the basis
$\{ e_1, \tfrac{1}{25}(2e_1 + e_2), \tfrac{1}{125}(2e_1-e_3) \}$, we see
\[ A_2 = \left(\begin{array}{rrr}
1 & 0 & 25 s \\
0 & -10 s & 2 s \\
25 s & 2 s & -2 s
\end{array}\right). \]
Scale $A_2$ by $s$ and replace $x_2$ and $x_3$ with $x_2/s$ and $x_3/s$,
respectively, to get the equivalent Gram matrix
\[ A_3 = \left(\begin{array}{rrr}
-s & 0 & 25 s \\
0 & -10 & 2 \\
25 s & 2 & -2
\end{array}\right) \]
with quadratic form
\[ Q_3: -s x_{1}^{2} -10 x_{2}^{2} + 50 s x_{1} x_{3} + 4 x_{2} x_{3} -2 x_{3}^{2}. \]
This has determinant $2 s (3125s - 8)$, and the
associated quadratic form is $\mathbb Q$-equivalent to the diagonal form
\[ 2 x_1^2 + 5s x_2^2 - (3125s - 8) x_3^2. \]
Assuming that $s(3125s-8)$ is a square in $k^\times$, this form is
$k$-equivalent to the forms
\[ 2 x_1^2 + 5s x_2^2 - s x_3^2 \sim 2s x_1^2 - (x_3^2 - 5x_2^2). \]
Clearly, this has a rational point if and only if $2s$ is a norm from $k(\sqrt 5)$
(which is automatic if $\sqrt 5 \in k$).
\section{Moduli for rational curves}
Here we state our main result and complete the proof.
\begin{thm} \label{thm:moduli}
Let $C$ be a genus $2$ curve with RM-5 defined over $k$.
Then the Igusa--Clebsch invariants $(I_2 : I_4 : I_6 : I_{10}) \in
\mathbb P^3_{1,2,3,5}$ are of one of the following forms:
\begin{enumerate}
\item $\left(24 g + 6 : 9 g^{2} : 81 g^{3} + 18 g^{2} + 36 h : 4 h^{2}\right)$ for a $k$-rational solution $(z,g,h)$ to \eqref{eq:EK} such
that $30g+4$ is a norm from $k(\sqrt 5)$ and $hz(8h-9g^2) \ne 0$;
\item $(8 : 1 : 3 : s)$ where $s \in k^\times$
such that $s(3125s-8)$ is a square in $k^\times$ and
$2s$ is a norm from $k(\sqrt 5)$;
\item $(12 (4 g + 1) : 36 g^{2} : 36 (18 g + 13) g^{2} : 162 g^{4})$ where $g \in k^\times$ such that
$-3(128g+9)$ is a square in $k$;
\item \begin{multline*}
\left( 20 (2 m^{2} - 3) : 25 (m - 3)^{2} (m + 3)^{2} : \right. \\
\left.
5 (m + 3)^{2} (75 m^{4} - 378 m^{3} + 428 m^{2} + 474 m - 711) : 8 (m - 2)^{4} (m + 3)^{6} \right)
\end{multline*}
where $m \in k$ or $m = \sqrt 5$;
\item $(8 : 1 : 3 : \frac 8{3125})$;
\item $(0 : 0 : 0 : 1)$ if $\sqrt 5 \in k$.\end{enumerate}
Cases (1) and (2) correspond to $\Aut_\mathbb C^\red(C) = \{ 1 \}$.
Cases (3)--(5) correspond to $\Aut_\mathbb C^\red(C)$ containing an involution,
and case (6) corresponds to $\# \Aut_\mathbb C^\red(C) = 5$.
Conversely, if $C$ is a genus $2$ curve over $\mathbb C$ with Igusa--Clebsch
invariants in one of the forms (1)--(6),
then $C$ can be defined over $k$ and $C$ has potential
RM-5. Moreover, in case (1), if $\End_\mathbb C(\Jac(C))$ is commutative,
then $C$ has RM-5 defined over $k$.
\end{thm}
In this theorem, by ``a norm from $k(\sqrt 5)$'' we mean in the image of
the relative norm map from $k(\sqrt 5)$ to $k$. Thus such a norm condition
is automatically satisfied if $\sqrt 5 \in k$.
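(For $k = \mathbb Q$, a concrete way to test this norm condition is sketched at the end of \cref{subsection:conic-reduction-generic}.)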
In \cref{sec:gh2mn}, we reformulate condition (1) in terms of $(m,n)$,
which removes the need to check \eqref{eq:EK} to determine the existence
of a $k$-rational point $(z,g,h) \in Y$ given $g, h \in k$.
\begin{rem} Suppose $k=\mathbb Q$ now, and that $C$ is a genus 2 curve over
$\mathbb Q$ with Igusa--Clebsch invariants of one of the forms
(1)--(5) in the theorem. We would like to be able to say
when $C$ (or a twist) actually has RM-5 defined over $k$.
Write $A = \Jac(C)$.
Generically, $\End_\mathbb C(A) \simeq \mathbb Z[\frac{1+\sqrt 5}2]$ in case (1) so
the RM-5 will be defined over $\mathbb Q$, but there
seems to be no simple way to describe the moduli points where
$A$ has (split or non-split) quaternionic multiplication over $\mathbb C$.
We do not know whether the RM-5 must be defined over $\mathbb Q$
if $\End_\mathbb C(A)$ is not commutative.
In case (2), we also expect that generically
$\End_\mathbb C(A) \simeq \mathbb Z[\frac{1+\sqrt 5}2]$, and the RM will
be defined over $\mathbb Q$---however we have not checked that
the points satisfying (2) always correspond to rational points on
$Y_-(5)$, so we cannot apply \cref{prop:RMdef}. Still, one can
check in examples for case (2), e.g., $s=\frac 2{25}$, that
one gets a genus 2 curve with RM-5 defined over $\mathbb Q$.
In cases (3)--(5), $\End_\mathbb C(A)$ is an order in a $2 \times 2$ matrix algebra,
hence not commutative, and \cref{prop:RMdef} does not apply.
Here $C$ has more than just quadratic twists, and one twist
may have RM-5 defined over $\mathbb Q$, and another may not.
For instance if $C : y^2 = x^6 - x^4 + 4x^2 -1$, which has
Igusa--Clebsch invariants $(88 : 169 : 28561 : 57122)$
corresponding to $g=-\frac{13}{96}$ in case (3),
then $C$ has RM-5 defined over $\mathbb Q$, but the twist
corresponding to $x \mapsto \sqrt{-1} x$ does not.
This may be checked, for instance, by computing Galois
groups and using \cref{prop:wilson}.
We do not know whether there will always be some
twist with RM-5 defined over $\mathbb Q$ in these cases.
\end{rem}
\begin{proof}
Suppose $C$ is a genus 2 curve over $\mathbb C$ with RM-5
and Igusa--Clebsch invariants $(I_2 : I_4 : I_6 : I_{10})$
defined over $k$. If $C$ as well as the RM-5 is defined over $k$, then by
\cref{prop:necess-cond}, we know that the Igusa--Clebsch invariants
must be of the form $(0 : 0 : 0 : 1)$ (only if $\sqrt 5 \in k$),
$(8 : 1 : 3 : s)$ for $s \in k^\times$
such that $3125s^2-8s$ is a square in $k$, or they correspond to a
$k$-rational point $(z,g,h) \in Y$ with $h \ne 0$, so we may assume
our Igusa--Clebsch invariants take one of these forms.
As explained earlier,
$C$ has a model over $k$ if and only if the Mestre conic has a
$k$-rational point or $C$ has extra automorphisms.
Thus, to prove both directions of the theorem, it will suffice to show
that: (i) when $C$ has no extra automorphisms,
the Mestre conic has a $k$-rational point exactly in cases (1) and (2);
and (ii) the Igusa--Clebsch
invariants from \cref{prop:necess-cond} corresponding to
curves with extra automorphisms are described exactly by cases
(3)--(6).
If the Igusa--Clebsch invariants are $(0 : 0 : 0 : 1)$, then $C$ has a model
over $\mathbb Q$ given by $y^2 = x^5 - 1$, and the RM-5 is defined over
$\mathbb Q(\sqrt 5)$. This verifies the theorem (in both directions)
for these Igusa--Clebsch invariants, i.e., case (6).
Assume now that the Igusa--Clebsch invariants come from a $k$-rational point
$(z,g,h) \in Y$ with $h \ne 0$.
First suppose $z(8h-9g^2) \ne 0$, so that the Mestre conic is nonsingular
and $C$ has no extra automorphisms.
Then the reduction we performed
over $\mathbb Q$ in \cref{subsection:conic-reduction-generic} implies that
the Mestre conic has a $k$-rational point if and only if $30g+4 = m^2 -5n^2 -5$
is a norm from $k(\sqrt 5)$, except in the two special cases
$g \in \{ - \frac 3{10}, - \frac 2{15} \}$, where there is not a one-to-one
correspondence between the $(z,g,h)$ and $(m,n)$ coordinates.
In \cref{sec:gh2mn} below, we check that one has a $k$-rational $(z,g,h) \in Y$
for $g \in \{ - \frac 3{10}, - \frac 2{15} \}$ if and only if $\sqrt 5 \in k$,
and in this case the Mestre conic always has a $k$-rational point.
This, together with \cref{prop:RMdef}, proves (both directions of)
the theorem in case (1).
For the cases where the Mestre conic is singular,
the $k$-rational $(z,g,h) \in Y$ with $z(8h-9g^2) = 0$
correspond to Igusa--Clebsch invariants of the forms in cases (3) and (4).
The details are given in \cref{sec:mest-sing}.
Finally, suppose the Igusa--Clebsch invariants are of the form $(8:1:3:s)$,
where $s \in k^\times$ and $3125s^2-8s$ is a square in $k$.
Both directions of the theorem in case (2) follow from the reduction
of the Mestre conic in \cref{sec:infty}. Case (5) follows from
\cref{sec:mest-sing}.
\end{proof}
\subsection{Translation to $(m,n)$-coordinates} \label{sec:gh2mn}
Here we explain how to translate \cref{thm:moduli} into the rational
model $\mathbb P_{m,n}^2$ for $Y_-(5)$, and treat the
exceptional cases $g \in \{ - \frac 3{10}, - \frac 2{15} \}$ in the proof of
\cref{thm:moduli}.
Recall that there is a one-to-one correspondence between $k$-rational
coordinates $(z,g,h) \in Y$
and $k$-rational coordinates $(m,n) \in \mathbb A^2$
such that
$g = \frac{m^2-5n^2-9}{30} \not \in \{ - \frac 3{10}, -\frac 2{15} \}$.
If $g = -\frac 3{10}$, the equation for $Y$ becomes
$z^2 = \frac{4}{3125}(3125h - 27)^2.$
Hence there are no $k$-rational points $(z,-\frac 3{10},h)$ on $Y$
if $\sqrt 5 \not \in k$. If $\sqrt 5 \in k$, then for all $h \in k$,
there is a $k$-rational $(z,-\frac 3{10},h) \in Y$.
Here the associated Mestre conic is nonsingular if $h \not \in \{ 0,
\frac{81}{800}, \frac{27}{3125} \}$, and always has a $k$-rational point.
For instance, in Sage we find the $k$-rational point
$(\frac{64000}{81}h^2 - \frac{2368}{75}h + \frac{284}{3125} :\frac{128}{15}h - \frac{51}{1250}: h + \frac 9{1000})$.
If $g = -\frac 2{15}$, the equation for $Y$ becomes
$z^2 = \frac{4}{3125}(3125h - 2)^2.$
Similarly, there are no $k$-rational points $(z,-\frac 2{15},h)$ on $Y$
if $\sqrt 5 \not \in k$, but if $\sqrt 5 \in k$, then there is a $k$-rational
$(z,-\frac 2{15},h) \in Y$ for all $h \in k$.
The associated Mestre conic is nonsingular if
$h \not \in \{ 0, \frac{1}{50}, \frac{2}{3125} \}$.
Again, one may check in Sage that the Mestre conic always
has a rational point.
These calculations complete the proof of \cref{thm:moduli} in
case (1). Consequently, we may alternatively formulate case (1)
of the theorem as saying that $(I_2 : I_4 : I_6 : I_{10})$ is of one
of the following forms:
\begin{enumerate}
\item[(1a)]
\begin{multline} \label{eq:mnICs}
(-20 (-2 m^{2} + 10 n^{2} + 3) :
25(-m^{2} + 5 n^{2} + 9)^{2} : \\
-5(-75 m^{6} + 1125 m^{4} n^{2} - 5625 m^{2} n^{4} + 9375 n^{6} - 72 m^{5} + 720 m^{3} n^{2} - 1800 m n^{4} \\
+ 1165 m^{4} - 11650 m^{2} n^{2} + 29125 n^{4} + 360 m^{3} - 1800 m n^{2} - 5985 m^{2} + 29925 n^{2} + 6399) : \\
8 (m^{5} - 10 m^{3} n^{2} + 25 m n^{4} + 5 m^{4} - 50 m^{2} n^{2} + 125 n^{4} - 5 m^{3} + 25 m n^{2} - 45 m^{2} + 225 n^{2} + 108)^{2})
\end{multline}
where $(m,n) \in k^2$ such that $m^2-5n^2-5$ is a norm from $k(\sqrt 5)$,
$n(m^2-5n^2)(m^2-5n^2-5) \ne 0$ and
$8 m^{5} - 80 m^{3} n^{2} + 200 m n^{4} - 85 m^{4} + 850 m^{2} n^{2} - 2125 n^{4} - 40 m^{3} + 200 m n^{2} + 1890 m^{2} - 9450 n^{2} - 9261 \ne 0$;
\item[(1b)] $(-12: 81 : 36000h - 567 : 400000h^2)$ if $\sqrt 5 \in k$
and $h \in k \setminus \{ 0, \frac{81}{800}, \frac{27}{3125} \}$; or
\item[(1c)] $(14 : 4 : 4500h + 16 : 12500h^2)$
if $\sqrt 5 \in k$ and $h \in k \setminus \{ 0, \frac{1}{50}, \frac{2}{3125} \}$.
\end{enumerate}
In particular, when $k=\mathbb Q$, we can deduce the following precise
interpretation of \cref{thm:main} from \cref{thm:moduli}: The
set of all genus 2 curves $C$ with RM-5 over $\mathbb Q$, up to
$\mathbb C$-isomorphism, such that $\Aut_\mathbb C^{\red}(C) = \{ 1 \}$,
excluding the 1-parameter family in case (2), corresponds to the set of
points $(m,n)$ with Igusa--Clebsch invariants as in (1a).
Moreover, each tuple of Igusa--Clebsch invariants
as in (1a) comes from such a curve, except possibly when
these Igusa--Clebsch invariants lead to a non-commutative
endomorphism algebra, in which case we only know that
such $(m,n)$ corresponds to a curve
defined over $\mathbb Q$ with potential RM-5.
Further, distinct points $(m,n)$ as in (1a) correspond
to distinct $\mathbb C$-isomorphism classes of genus 2 curves
by \cref{prop:necess-cond}.
\subsection{Singularities of the Mestre conic} \label{sec:mest-sing}
Here we describe the $k$-rational Igusa--Clebsch invariants
of types (2) and (3) in \cref{prop:necess-cond} for which the
Mestre conic is singular. By \cite{cardona-quer}, these
invariants always yield a genus 2 curve defined over $k$.
This will complete the proof of \cref{thm:moduli}.
First, as in \cref{subsection:conic-reduction-generic},
let $L$ be the Mestre conic associated to a $(z,g,h) \in Y$.
Then the Mestre conic is singular if and only if $h=0$,
$h=\frac {9g^2}8$ or $z=0$.
We remark that the curve $h=0$ on $Y$ is given by
$z^2 = -54 (6g + 1)^2 g^3$,
and the $k$-rational points are parametrized by $(g,0)$ where
$-6g$ is a square in $k$. However, $h=0$ means that $I_{10} = 0$, so
these points do not correspond to genus 2 curves.
The curve $h=\frac {9g^2}8$ on $Y$ is given by
$z^2 = -\frac{27}{16} (128g + 9) g^2 (3g - 4)^2,$
and the $k$-rational points are parametrized by
$(g, \frac{9g^2}8)$ where $-3(128g+9)$ is a square in $k$.
This completes case (3) of the theorem.
Now suppose $z=0$, which means that either
$g \in \{ - \frac 3{10}, - \frac 2{15} \}$ or $n=0$.
If $g = -\frac 3{10}$ or $g = -\frac 2{15}$,
then $h = \frac{27}{3125}$ or $h = \frac{2}{3125}$, respectively,
and these are clearly $k$-rational points on $Y$.
The corresponding Igusa--Clebsch invariants are
$(20 : 225 : 1185 : -384)$ and $(70 : 100 : 2360 : 16)$,
respectively.
As noted in \cite{EK},
the $k$-rational points on $Y$ with $n=0$ are given by
\[ (z, g, h) = \left(0, \frac{m^2-9}{30}, \frac{(m-2)^2(m+3)^3}{12500}\right),
\quad m \in k. \]
Viewing this as a map from points $(m,0)$ to $(0,g,h)$,
note that $m = 0$ and $m = \pm \sqrt 5$
respectively map to $(g,h) = (-\tfrac{3}{10}, \tfrac{27}{3125})$ and $(-\tfrac{2}{15}, \tfrac{2}{3125})$.
This gives case (4) of the theorem.
Now we consider the ``points at infinity'' discussed in \cref{sec:infty}.
The Mestre conic associated to Igusa--Clebsch invariants $(8 : 1 : 3: s)$
for $s \in k^\times$ is singular if and only if $s = \frac 8{3125}$.
In terms of Wilson's moduli, this point corresponds to
$(z_6, s_2, \sigma_5) = (0, -\frac 52, \frac 12)$. Here Wilson's
discriminant $\Delta'$ is 0. Using Magma, we can construct
a rational genus 2 curve with invariants $(8 : 1 : 3 : \frac 8{3125})$,
namely
\begin{equation} \label{eqn:813s-curve}
y^2 = f(x) = (2 x^{3} - 2 x^{2} - x - 1)(x^{3} - x^{2} + 2 x + 2).
\end{equation}
This yields case (5) of the theorem.
\begin{rem} Calculations in Magma indicate that the curve in
\eqref{eqn:813s-curve} has conductor $800^2$, and corresponds
to the weight 2 modular form $f(z) = q - \sqrt 5 q^3 - 2 \sqrt 5 q^7 +
2 q^9 - \sqrt 5 q^{11} + \cdots$ with Fourier coefficient ring $\mathbb Z[\sqrt 5]$
and LMFDB label \verb+800.2.a.l+.
\end{rem}
\section{Generic models}
\label{sec:models}
In this section we give explicit rational Weierstrass models for
$(m,n)$ in the birational model $\mathbb P^2_{m,n}$ for $Y_-(5)$.
\begin{prop} \label{prop:model}
Let $k \subseteq \mathbb C$ be a field which does not contain $\sqrt{5}$. For any $m,n \in k$ such that $-(m^2 - 5n^2 - 5)$ is the norm of some nonzero element $\eta \in k(\sqrt{5})/k$, let $\mu \coloneqq m + n\sqrt{5}$ and define $C/k$ to be the curve with Weierstrass model
\begin{align*}
y^2 = \text{\emph{Tr}}\!\left(\mu^2\eta^3\!\left(\frac{1-x\sqrt{5}}{1+x\sqrt{5}}\right)^3 - 2N(\mu)\mu\eta^2\!\left(\frac{1-x\sqrt{5}}{1+x\sqrt{5}}\right)^2 - 5N(\mu)(N(\mu)-5)\right)(1 - 5x^2)^3,
\end{align*}
where $N$ and $\text{\emph{Tr}}$ denote the norm and trace from $k(\sqrt{5})$ to $k$ respectively. When $C$ is a genus $2$ curve, the Igusa--Clebsch invariants of $C$ are as in \eqref{eq:mnICs},
i.e. $C$ corresponds to the point $(m,n)$ in
the $\mathbb P^2_{m,n}$ birational model for $Y_-(5)$.
\end{prop}
Note that the right hand side of the Weierstrass equation given in \cref{prop:model} is indeed a sextic in $x$; the factor of $(1 - 5x^2)^3$ clears denominators.
\begin{rem} Since $m^2 - 5n^2 - 5 + u^2 - 5v^2 = 0$ is a quadric in $\mathbb P^4$
it is birational to $\mathbb P^3$, so one may generically express the family of
curves in \cref{prop:model} in terms of three parameters $(a,b,c)$. For instance,
one may generically write
\[ v = (4a+2c)/(5a^2-b^2+5c^2-1), \quad
m = 5av - 2, \quad n=-bv, \quad u = 5cv-1 \]
to get a 3-parameter family of genus 2 curves with RM-5.
However, the
resulting models are rather complicated and we omit them.
\end{rem}
\begin{prop} \label{prop:modelQsqrt5}
Let $k \subseteq \mathbb C$ be a field containing $\sqrt{5}$. For any $m,n \in k$, define $C/k$ to be the curve with Weierstrass model
\begin{align*}
y^2 =\quad &(m - n\sqrt{5})^2x^6 - 2(m^2 - 5n^2)(m - n\sqrt{5})x^5 - 10(m^2 - 5n^2)(m^2 - 5n^2 - 5)x^3\\
-&2(m^2 - 5n^2)(m^2 - 5n^2 - 5)^2(m + n\sqrt{5})x - (m^2 - 5n^2 - 5)^3(m + n\sqrt{5})^2.
\end{align*}
When $C$ is a genus $2$ curve, the Igusa--Clebsch invariants of $C$ are as in \eqref{eq:mnICs},
i.e. $C$ corresponds to the point $(m,n)$ in
the $\mathbb P^2_{m,n}$ birational model for $Y_-(5)$.
\end{prop}
Note that both the $x^4$- and $x^2$-coefficients are zero in this model.
\begin{proof}[Proof of \cref{prop:model,prop:modelQsqrt5}]
In \cref{subsection:conic-reduction-generic}, we found a linear transformation $T$ defined over $\mathbb Q(m,n)$ and a scaling factor $c \in \mathbb Q(m,n)$ such that
$$cL_0(T(x_1,x_2,x_3)) = x_1^2 - 5x_2^2 + (m^2 - 5n^2 - 5)x_3^2,$$
whenever $\text{disc}\,L_0 \neq 0$, where $L_0$ is the Mestre conic associated to the Igusa--Clebsch invariants in \eqref{eq:mnICs}.
Applying the same transformation $T$ to the Mestre cubic $M_0$ and rescaling by some $c' \in \mathbb Q(m,n)$ yields
\begin{align*}
c'M_0(T(x_1,x_2,x_3)) = \quad & (m^2+5n^2)x_1^3 + 30mnx_1^2x_2 + 15(m^2+5n^2)x_1x_2^2 + 50mnx_2^3\\
&-(2m-3)(m^2-5n^2)x_1^2x_3 - 20n(m^2-5n^2)x_1x_2x_3 \\
&-5(2m+3)(m^2-5n^2)x_2^2x_3 - 2(m^2-5n^2-5)(m^2-5n^2)x_3^3.
\end{align*}
Define $L$ and $M$ to be these reduced forms of $L_0$ and $M_0$ respectively.
We first consider the case where $\sqrt{5} \not\in k$. By inspection, $L(k)$ has no points with $x_3 = 0$. Suppose that $(u_0\,:\,v_0\,:\,1) \in L(k)$. Parametrizing $L(k)$ in the usual way using this point gives
\begin{equation*} \label{eq:mestre-conic-param}
\begin{aligned}
&\left\{ (x_1\,:\,x_2\,:\,1) \in L(k) \right\} = \left\{ \left((1+5x^2)u_0 - 10xv_0 \,:\, (1+5x^2)v_0 - 2xu_0 \,:\, 1 - 5x^2\right)\,:\, x\in\mathbb P(k)\right\}.
\end{aligned}
\end{equation*}
It will be convenient for us to write this parametrization in terms of elements of $k(\sqrt{5})$. Define $\eta = u_0 + v_0\sqrt{5} \in k(\sqrt{5})/k$. The parametrization above can then be expressed as
$$\left\{ (x_1\,:\,x_2\,:\,1) \in L(k) \right\} = \left\{ \left(u \,:\, v \,:\, 1\right)\,:\, u + \sqrt{5}v = \eta\frac{1 - x\sqrt{5}}{1 + x\sqrt{5}},\,x\in\mathbb P(k)\right\}.$$
Let $\mu = m + n\sqrt{5} \in k(\sqrt{5})/k$. Then one verifies that, when $x_3 = 1$, the reduced Mestre cubic $M$ can be written as
$$\tfrac{1}{2}\text{Tr}(\mu^2(x_1 + x_2\sqrt{5})^3) + 3N(\mu)N(x_1 + x_2\sqrt{5}) - N(\mu)\text{Tr}(\mu(x_1 + x_2\sqrt{5})^2) - 2N(\mu)(N(\mu)-5),$$
where $N$ and $\text{Tr}$ denote the norm and trace from $k(\sqrt{5})$ to $k$. We can now substitute the parametrization of the $k$-rational points of $L$ into $M$ to obtain a $k$-rational Weierstrass model for the associated genus $2$ curve $C$, as described in \cref{thm:mestre-cubic}. This gives the $k$-rational model from \cref{prop:model}.
Now suppose that $\sqrt{5} \in k$. It is possible to mimic the calculations from the case where $\sqrt{5}\not\in k$ by taking
$$(u_0\,:\,v_0\,:\,1) \coloneqq \left(\left(m^2 - 5n^2 - 5 - \tfrac{1}{20}\right)\!\sqrt{5} \,:\, m^2 - 5n^2 - 5 + \tfrac{1}{20} \,:\, 1\right)$$
and working in the ring $k[t]/(t^2 - 5)$. However, we get a tidier Weierstrass model by instead using the point $(\sqrt{5}\,:\,1\,:\,0)$ to parameterize $L(k)$. A straightforward calculation yields the $k$-rational model given in \cref{prop:modelQsqrt5}.
\end{proof}
\begin{rem} Note that if we take $k=\mathbb Q(\sqrt 5)$ in
\cref{prop:modelQsqrt5}, and $m \in \mathbb Q$, $n \in \sqrt 5 \mathbb Q \setminus \{ 0 \}$,
we get a 2-parameter family of genus 2 curves defined over $\mathbb Q$
which have potential RM-5, but not RM-5 defined over $\mathbb Q$.
To see the RM-5 is not actually defined over $\mathbb Q$, one can check that for
$m \in \mathbb Q$, $n \in \sqrt 5 \mathbb Q \setminus \{ 0 \}$, Wilson's discriminant $\Delta'$
is in the square class of $n^2$, which is a non-square.
Hence the Igusa--Clebsch invariants are rational, but the
moduli points on $Y_-(5)$ are not rational, and so the RM-5
cannot be defined rationally.
(The irrationality of
these moduli points on $Y_-(5)$ is suggested by the fact
that $(m,n) \not \in \mathbb Q^2$ but is not \textit{a priori} implied by this
as $(m,n)$ are only coordinates for a birational model of $Y_-(5)$,
and we have not determined an explicit birational map from
$\mathbb P^2_{m,n}$ to $Y_-(5)$).
\end{rem}
\section{Comparisons with known families}
\subsection{Mestre's family}
Let $f$ be the polynomial
$$f(a,b,x) = x^5 + (a-3)x^4 + (-a + b + 3)x^3 + (a^2 - a - 2b - 1)x^2 + bx + a,$$
and let $X(a,b)$ be the genus $2$ curve
$$X(a,b): y^2 = xf(a,b,x).$$
In \cite{mestre}, Mestre proves that $X(a,b)$ has RM-5 for every $a,b$ in $\mathbb C$ such that $xf(a,b,x)$ has six distinct zeroes, and that the RM is defined over
$k = \mathbb Q(a,b)$. Using Humbert's criterion for RM-5, Wilson \cite{wilson}
showed that this family of curves over $k$ gives all genus 2 curves with RM-5
over $k$ which have a Weierstrass point in $k$, up to $k$-isomorphism.
In particular, for any genus $2$ curve $C$ with RM-5, there exist $a,b \in \mathbb C$ such that $C$ is $\mathbb C$-isomorphic to $X(a,b)$. See also \cite{sakai}
for an alternative proof of this last result.
Define $g_{a,b}$ and $h_{a,b}$ as
\begin{align*}
&g_{a,b} = \frac{2(3a^3 - 8a^2 - 5ab - b^2 - 3a)}{3(a^2 - 5a - 2b + 1)^2}\\
&\text{and}\\
&h_{a,b} = \frac{-a^2(4a^5 - 4a^4 - 24a^3b - a^2b^2 - 40a^3 + 34a^2b + 30ab^2 + 4b^3 + 91a^2 + 14ab - b^2 - 4a)}{2(a^2 - 5a - 2b + 1)^5}.
\end{align*}
Then, by comparing Igusa--Clebsch invariants, one can verify that $X(a,b)$ is $\mathbb C$-isomorphic to a genus $2$ curve associated to $(g,h) = (g_{a,b}, h_{a,b})$ in the Elkies--Kumar model \eqref{eq:EK}, assuming
$ a^2 - 5a - 2b + 1 \ne 0$ and $h_{a,b} \ne 0$.
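For convenience, these formulas are easy to use in a computer algebra system; the snippet below is our direct transcription into SymPy (the names are ours), which, combined with $g = \frac{m^2-5n^2-9}{30}$ from \cref{sec:gh2mn}, lets one pass from Mestre's parameters $(a,b)$ to the coordinates used in this paper.
\begin{verbatim}
from sympy import symbols, simplify

a, b = symbols('a b')
d = a**2 - 5*a - 2*b + 1
g_ab = 2*(3*a**3 - 8*a**2 - 5*a*b - b**2 - 3*a) / (3*d**2)
h_ab = -a**2*(4*a**5 - 4*a**4 - 24*a**3*b - a**2*b**2 - 40*a**3
              + 34*a**2*b + 30*a*b**2 + 4*b**3 + 91*a**2 + 14*a*b
              - b**2 - 4*a) / (2*d**5)

# Example: the (g, h) values attached to (a, b) = (1, 1).
print(simplify(g_ab.subs({a: 1, b: 1})), simplify(h_ab.subs({a: 1, b: 1})))
\end{verbatim}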
Since choices of $a, b \in k$ can only yield genus 2 curves with RM-5 over
$k$ which have a $k$-rational Weierstrass point, one cannot easily describe
all genus 2 curves with RM-5 over $k$ using Mestre's family.
For example, there are no rational values of $a$ and $b$ for which
$X(a,b)$ is $\mathbb C$-isomorphic to the genus $2$ curve associated to
$(g,h) = (-\tfrac{4}{15}, \tfrac{16}{3125})$.
\subsection{Brumer's family}
Brumer constructed a family of curves
$C_{b,c,d}$ defined by
\begin{align*}
C_{b,c,d}: y^2 + (x^3 + x + 1 + c(x^2 + x))y =\,\,&b + (1 + 3b)x + (1 - bd + 3b)x^2\\
+&(b - 2bd - d)x^3 - bdx^4,
\end{align*}
and showed that if $C_{b,c,d}$ is nonsingular, then it is a genus
2 curve with RM-5 over $\mathbb Q(b,c,d)$. Moreover, every genus $2$
curve with RM-5 is $\mathbb C$-isomorphic to $C_{b,c,d}$ for some $b,c,d \in \mathbb C$.
Brumer did not publish the details of his proof
(see \cite{brumer} for an announcement), but the above statements
were reproved by different methods in \cite{hashimoto} and
\cite{hashimoto-sakai}.
Define $g_{b,c,d}$ and $h_{b,c,d}$ as
\begin{align*}
&g_{b,c,d} = \frac{-c^4 + 8bc^2d - 16b^2d^2 + 6c^3 - 24bcd + 24bc + c^2 - 68bd - 24cd - 108b - 30c - 36d - 61}{6(c^2 - 4bd - 2b - 3c - 2d - 5)^2}\\
\end{align*}
and
\begin{align*}
h_{b,c,d} = \,&(c^2 - 4bd - 2b - 3c - 2d - 5)^{-5} \left(bc^6d - 12b^2c^4d^2 + 48b^3c^2d^3 - 64b^4d^4 - b^2c^4d - 9bc^5d \right.\\
&+ 8b^3c^2d^2 + 72b^2c^3d^2 - bc^4d^2 - 16b^4d^3 - 144b^3cd^3 + 8b^2c^2d^3 - 16b^3d^4 + bc^5 - 40b^2c^3d\\
&+ 12bc^4d - c^5d + 144b^3cd^2 - 152b^2c^2d^2 + 52bc^3d^2 + 416b^3d^3 - 192b^2cd^3 - b^2c^3 - 9bc^4\\
&+ 36b^3cd + 334b^2c^2d + 63bc^3d + 6c^4d + 24b^3d^2 + 132b^2cd^2 - 80bc^2d^2 + c^3d^2 + 528b^2d^3\\
&- 36bcd^3 - 27b^2c^2 + 13bc^3 - c^4 + 108b^3d - 720b^2cd + 74bc^2d + 5c^3d - 456b^2d^2 - 96bcd^2\\
&- 36c^2d^2 + 216bd^3 + 27b^3 + 252b^2c + 56bc^2 + 6c^3 - 66b^2d - 627bcd - 43c^2d - 381bd^2\\
&- \left.63cd^2 + 27d^3 - 567b^2 + 27bc + 4c^2 - 121bd - 147cd - 81d^2 - 484b - 39c - 34d - 103\right).
\end{align*}
Then, by comparing Igusa--Clebsch invariants, one can verify that $C_{b,c,d}$ is $\mathbb C$-isomorphic to the genus $2$ curve associated to $(g,h) = (g_{b,c,d}, h_{b,c,d})$ in the Elkies--Kumar model \eqref{eq:EK} when $c^2 - 4bd - 2b - 3c - 2d - 5 \ne 0$ and $h_{b,c,d} \ne 0$.
One can ask if Brumer's family provides a way to describe all
genus 2 curves $C$ with RM-5 defined over $k$. However, it
is not clear whether these will all come from a $k$-rational
choice of parameters $b, c, d$. E.g., if $(z,g,h)$ is a generic
rational point on $Y$ such that $30g+4$ is a norm from $\mathbb Q(\sqrt 5)$,
it is not clear if we can write $(g,h) = (g_{b,c,d}, h_{b,c,d})$ for
some $b, c, d \in \mathbb Q$.
While Brumer's models are simpler than what we give
in \cref{sec:models}, over $\mathbb Q$ they might not comprise all
rational curves $C$ with RM-5, even generically. Moreover
there is no simple description of which choices of $b, c, d$
will give $\mathbb C$-isomorphic curves.
\section{Beyond RM-5} \label{sec:otherD}
The Hilbert modular surface $Y_-(D)$ is rational if and only if $D$ is one of $5, 8, 12, 13,$ or $17$. One might wonder if there are analogues of \cref{thm:main} for each of these discriminants. Numerical experimentation suggests that the answer is yes.
Define
\begin{align*}
&p_5(m,n) = -m^2 + 5n^2 + 5,\\
&p_8(m,n) = m+1,\\
&p_{12}(m,n) = -27m^2 + n^2 + 27,\\
&p_{13}(m,n) = 1803m^2 - 72mn + n^2 + 3168m - 1440n - 768,\,\text{and}\\
&p_{17}(m,n) = 1.
\end{align*}
In \cite{EK}, Elkies and Kumar give rational models of $Y_-(D)$ for all fundamental discriminants between $1$ and $100$. The polynomials $p_D(m,n)$ above are all factors of the discriminant of the Mestre conic one obtains when using Igusa--Clebsch invariants from \cite{EK} in the construction given in \cref{section:mestre-conic-background}. We chose several thousand values of $(m,n) \in \mathbb Q^2$ at random, and for each of these the associated Mestre conic was equivalent to $x_1^2 - Dx_2^2 - p_D(m,n)x_3^2 = 0$ over $\mathbb Q$ whenever it was nonsingular. In particular, the Mestre obstruction appears to vanish generically for $D = 17$, which is quite surprising.
We have attempted to use the methods of
\cref{sec:reduction} to reduce the Mestre conics for these other
values of $D$, but thus far have only been partially successful
in removing the other polynomial factors from the discriminant.
\begin{bibdiv}
\begin{biblist}
\bib{brumer}{article}{
author={Brumer, Armand},
title={The rank of $J_0(N)$},
note={Columbia University Number Theory Seminar (New York, 1992)},
journal={Ast\'{e}risque},
number={228},
date={1995},
pages={3, 41--68},
issn={0303-1179},
}
\bib{cardona-quer}{article}{
author={Cardona, Gabriel},
author={Quer, Jordi},
title={Field of moduli and field of definition for curves of genus 2},
conference={
title={Computational aspects of algebraic curves},
},
book={
series={Lecture Notes Ser. Comput.},
volume={13},
publisher={World Sci. Publ., Hackensack, NJ},
},
date={2005},
pages={71--83},
}
\bib{CMSV}{article}{
author={Costa, Edgar},
author={Mascot, Nicolas},
author={Sijsling, Jeroen},
author={Voight, John},
title={Rigorous computation of the endomorphism ring of a Jacobian},
journal={Math. Comp.},
volume={88},
date={2019},
number={317},
pages={1303--1339},
issn={0025-5718},
}
\bib{elkies}{article}{
author={Elkies, Noam D.},
title={Shimura curve computations via $K3$ surfaces of N\'{e}ron-Severi rank
at least 19},
conference={
title={Algorithmic number theory},
},
book={
series={Lecture Notes in Comput. Sci.},
volume={5011},
publisher={Springer, Berlin},
},
date={2008},
pages={196--211},
}
\bib{EK}{article}{
author={Elkies, Noam},
author={Kumar, Abhinav},
title={K3 surfaces and equations for Hilbert modular surfaces},
journal={Algebra Number Theory},
volume={8},
date={2014},
number={10},
pages={2297--2411},
issn={1937-0652},
}
\bib{hashimoto}{article}{
author={Hashimoto, Ki-ichiro},
title={On Brumer's family of RM-curves of genus two},
journal={Tohoku Math. J. (2)},
volume={52},
date={2000},
number={4},
pages={475--488},
issn={0040-8735},
}
\bib{hashimoto-sakai}{article}{
author={Hashimoto, Kiichiro},
author={Sakai, Yukiko},
title={General form of Humbert's modular equation for curves with real
multiplication of $\Delta=5$},
journal={Proc. Japan Acad. Ser. A Math. Sci.},
volume={85},
date={2009},
number={10},
pages={171--176},
issn={0386-2194},
}
\bib{KW}{article}{
author={Khare, Chandrashekhar},
author={Wintenberger, Jean-Pierre},
title={Serre's modularity conjecture. I},
journal={Invent. Math.},
volume={178},
date={2009},
number={3},
pages={485--504},
issn={0020-9910},
}
\bib{KM}{article}{
author={Kumar, Abhinav},
author={Mukamel, Ronen E.},
title={Real multiplication through explicit correspondences},
journal={LMS J. Comput. Math.},
volume={19},
date={2016},
number={suppl. A},
pages={29--42},
}
\bib{magma}{article}{
author={Bosma, Wieb},
author={Cannon, John},
author={Playoust, Catherine},
title={The Magma algebra system. I. The user language},
note={Computational algebra and number theory (London, 1993)},
journal={J. Symbolic Comput.},
volume={24},
date={1997},
number={3-4},
pages={235--265},
issn={0747-7171},
label={Magma}
}
\bib{mestre:family}{article}{
author={Mestre, J.-F.},
title={Familles de courbes hyperelliptiques \`a multiplications r\'{e}elles},
language={French},
conference={
title={Arithmetic algebraic geometry},
address={Texel},
date={1989},
},
book={
series={Progr. Math.},
volume={89},
publisher={Birkh\"{a}user Boston, Boston, MA},
},
date={1991},
pages={193--208},
}
\bib{mestre}{article}{
author={Mestre, Jean-Fran\c{c}ois},
title={Construction de courbes de genre $2$ \`a partir de leurs modules},
language={French},
conference={
title={Effective methods in algebraic geometry},
address={Castiglioncello},
date={1990},
},
book={
series={Progr. Math.},
volume={94},
publisher={Birkh\"{a}user Boston, Boston, MA},
},
date={1991},
pages={313--334},
}
\bib{ribet}{article}{
author={Ribet, Kenneth A.},
title={Abelian varieties over $\bf Q$ and modular forms},
conference={
title={Modular curves and abelian varieties},
},
book={
series={Progr. Math.},
volume={224},
publisher={Birkh\"{a}user, Basel},
},
date={2004},
pages={241--261},
}
\bib{sage}{manual}{
author={Developers, The~Sage},
title={{S}agemath, the {S}age {M}athematics {S}oftware {S}ystem
({V}ersion 9.4)},
date={2021},
label={Sage},
note={{\tt https://www.sagemath.org}},
}
\bib{sakai}{article}{
author={Sakai, Yukiko},
title={Poncelet's theorem and curves of genus two with real
multiplication of $\Delta=5$},
journal={J. Ramanujan Math. Soc.},
volume={24},
date={2009},
number={2},
pages={143--170},
issn={0970-1249},
}
\bib{vdG}{book}{
author={van der Geer, Gerard},
title={Hilbert modular surfaces},
series={Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in
Mathematics and Related Areas (3)]},
volume={16},
publisher={Springer-Verlag, Berlin},
date={1988},
pages={x+291},
isbn={3-540-17601-2},
}
\bib{wilson}{article}{
author={Wilson, John},
title={Explicit moduli for curves of genus 2 with real multiplication by
${\bf Q}(\sqrt5)$},
journal={Acta Arith.},
volume={93},
date={2000},
number={2},
pages={121--138},
issn={0065-1036},
}
\end{biblist}
\end{bibdiv}
\end{document}
| {
"timestamp": "2022-06-14T02:17:58",
"yymm": "2206",
"arxiv_id": "2206.05752",
"language": "en",
"url": "https://arxiv.org/abs/2206.05752",
"abstract": "Principally polarized abelian surfaces with prescribed real multiplication (RM) are parametrized by certain Hilbert modular surfaces. Thus rational genus 2 curves correspond to rational points on the Hilbert modular surfaces via their Jacobians, but the converse is not true. We give a simple generic description of which rational moduli points correspond to rational curves, as well as give associated Weierstrass models, in the case of RM by the ring of integers of $\\mathbb{Q}(\\sqrt{5})$. To prove this, we provide some techniques for reducing quadratic forms over polynomial rings.",
"subjects": "Number Theory (math.NT); Algebraic Geometry (math.AG)",
"title": "Moduli for rational genus 2 curves with real multiplication for discriminant 5",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109534209825,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7076796348056605
} |
https://arxiv.org/abs/1709.08579 | Dynamics of Linear Systems over Finite Commutative Rings | The dynamics of a linear dynamical system over a finite field can be described by using the elementary divisors of the corresponding matrix. It is natural to extend the investigation to a general finite commutative ring. In a previous publication, the last two authors developed an efficient algorithm to determine whether a linear dynamical system over a finite commutative ring is a fixed point system or not. The algorithm can also be used to reduce the problem of finding the cycles of such a system to the case where the system is given by an automorphism. Here, we further analyze the cycle structure of such a system and develop a method to determine its cycles. | \section{Introduction}
\par
Let $R$ be a commutative ring with $1$, let $M$ be an $R$-module, and let $f: M\longrightarrow M$ be an $R$-module endomorphism. We may consider $f$ as a dynamical system via iteration:
\begin{eqnarray*}
f,\;\; f^2 = f\circ f,\;\; \cdots, \;\;f^m=\underbrace{f\circ f\circ\cdots\circ f}_{\mbox{$m$ copies}},\;\; \cdots,
\end{eqnarray*}
and investigate the behaviors of the system. For our convenience, we set $f^0 = id$. By studying the dynamics of the system, we mean to investigate its possible cycles (including fixed points) generated by the elements of $M$. It is clear that if the cardinality of $M$ is infinite, then there may not be any cycle except the obvious fixed point $0$. But if $M$ is finite, then for any initial input $a\in M$, the sequence $f^m(a)$, $m\ge 0$, will eventually either stabilize or enter a cycle. In this paper, we restrict our attention to the case where both $R$ and $M$ are finite, and we call such a system {\it a linear dynamical system over the finite ring $R$}.
\par\medskip
If $R = F$ is a finite field, then $M$ is a finite dimensional vector space over $F$, and the dynamics of $f$ can be described by using the elementary divisors of the $F$-linear map $f$. Suppose the elementary divisors of $f$ are (see \cite{Hu}, pp. 356-357):
\begin{eqnarray*}
p_i^{m_{ij}},\;\; 1\le i\le s,\;\; 1\le j\le k_i,
\end{eqnarray*}
where each $p_i$ is an irreducible polynomial in $F[x]$, and the $m_{ij}$ are integers such that for each $i$,
\begin{eqnarray*}
m_{i1}\ge m_{i2}\ge\cdots\ge m_{ik_i}> 0.
\end{eqnarray*}
Then there are $f$-cyclic subspaces $M_{ij}$ (i.e. there exists a vector $v\in M_{ij}$ and an integer $k\ge 0$ such that $(v, f(v), \ldots, f^k(v))$ is a basis of $M_{ij}$) with minimal polynomial $p_i^{m_{ij}}$, $1\le i\le s,\; 1\le j\le k_i$, such that $M$ is a direct sum of the $M_{ij}$'s.
\par
It is easy to see (over any ring $R$) that if $M = M_1\oplus M_2$ is a direct sum decomposition of $f$-invariant submodules (we will just call them $f$-submodules), and $C_i$ is a cycle in $M_i$ of length $c_i$ ($i=1,2$), then the direct product
\begin{eqnarray*}
C_1\times C_2 := \{(a,b)\;|\; a\in C_1,\;b\in C_2\}
\end{eqnarray*}
offers $GCD(c_1,c_2)$ cycles of length $LCM(c_1,c_2)$ for the module $M$. So we can just work with each direct summand independently.
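This count is easy to verify by brute force. The following minimal Python sketch (an illustration added for concreteness; the helper names are ours) rotates two cycles of lengths $c_1$ and $c_2$ simultaneously and checks that the product splits into $GCD(c_1,c_2)$ cycles of length $LCM(c_1,c_2)$.
\begin{verbatim}
from math import gcd

def product_cycle_structure(c1, c2):
    # Decompose C_1 x C_2 (two cycles advanced one step at a time)
    # into cycles and return the list of their lengths.
    unvisited = {(a, b) for a in range(c1) for b in range(c2)}
    lengths = []
    while unvisited:
        start = next(iter(unvisited))
        x, length = start, 0
        while True:
            unvisited.discard(x)
            x = ((x[0] + 1) % c1, (x[1] + 1) % c2)
            length += 1
            if x == start:
                break
        lengths.append(length)
    return lengths

for c1, c2 in [(4, 6), (3, 5), (6, 9)]:
    lengths = product_cycle_structure(c1, c2)
    lcm = c1 * c2 // gcd(c1, c2)
    assert lengths == [lcm] * gcd(c1, c2)
    print(c1, c2, "->", len(lengths), "cycles of length", lcm)
\end{verbatim}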
\par
Therefore, the problem of finding the cycles (including fixed points) of $f$ (over a field $F$) reduces to the problem of finding the cycles of the restricted linear maps
\begin{eqnarray*}
f_{ij} = f|_{M_{ij}}\; : \;M_{ij}\longrightarrow M_{ij},\;\; 1\le i\le s,\;\; 1\le j\le k_i.
\end{eqnarray*}
\par
If $p_i = x$, the corresponding linear maps $f_{ij},\; 1\le j\le k_i$, are nilpotent and thus all have just one fixed point $0$ and have no cycle of length $> 1$. So the question is to find the cycle structures of those $f_{ij}$ such that the corresponding $p_i$ are not $x$ (these $f_{ij}$ are automorphisms of the $M_{ij}$). For these cases, since the characteristic polynomial of each $f_{ij}$ is equal to its minimal polynomial, Elspas' formula describes its cycle structure \cite{Els}\cite{He} (we use $g_{ij}^k$ instead of $p_i^{m_{ij}}$ to simplify the notation in the formula):
\begin{eqnarray*}
G_{ij} = 1 + \sum_{k=1}^{s}\frac{q^{mk}-q^{m(k-1)}}{c_k}\mathcal{C}_k,
\end{eqnarray*}
where $q = |F|$, $1$ corresponds to the fixed point $0$, $\mathcal{C}_k$ is a cycle of length $c_k$, and $c_k = \mbox{ord}(g_{ij}^k)$ (i.e. $c_k$ is the least positive integer $t$ such that $g_{ij}^k$ divides $x^t - 1$). This formula gives a nice description of the cycle structure, though the actual computation is more involved: one first computes the normal form of the matrix $xI - A$, where $A$ is the matrix of $f$ with respect to a basis of $M$, or computes its minimal polynomial $m(x)$, then factors the polynomials that appear in the normal form of $xI - A$ (or $m(x)$) to get the $g_{ij}^k$, and then computes the order of each $g_{ij}^k$.
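A basic ingredient in this computation is the order of a polynomial over the finite field. A minimal Python sketch of this step over a prime field $\mathbb{Z}_p$ is given below (our illustration; the helper names are ours): it computes successive powers of $x$ modulo the given polynomial until the remainder equals $1$, which yields the least $t$ with $g \mid x^t-1$.
\begin{verbatim}
def poly_mod(a, g, p):
    # Remainder of a modulo g over Z_p (coefficient lists, low degree first).
    a = [c % p for c in a]
    inv_lead = pow(g[-1] % p, -1, p)
    while len(a) >= len(g):
        if a[-1] == 0:
            a.pop()
            continue
        factor = (a[-1] * inv_lead) % p
        shift = len(a) - len(g)
        for i, c in enumerate(g):
            a[shift + i] = (a[shift + i] - factor * c) % p
        a.pop()
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def poly_order(g, p, bound=10**6):
    # ord(g): the least t >= 1 with g(x) dividing x^t - 1 over Z_p.
    cur = [0, 1]                      # the polynomial x
    for t in range(1, bound + 1):
        cur = poly_mod(cur, g, p)     # x^t mod g
        if cur == [1]:
            return t
        cur = [0] + cur               # multiply by x for the next step
    raise RuntimeError("order not found within the given bound")

print(poly_order([1, 1, 1], 2))          # ord(x^2+x+1) over Z_2 is 3
print(poly_order([1, 0, 1, 0, 1], 2))    # ord((x^2+x+1)^2) = 3*2 = 6
\end{verbatim}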
\par
Note that if $g_{ij}^k$ divides $x^t-1$, then all $g_{ij}^s,\;0\le s\le k$, divide $x^t-1$. So by the comment about the cycles of a direct sum of $f$-modules, we have:
\begin{lemma}\label{L11}
The $F$-module endomorphism $f$ possesses a cycle of maximum length, say $c$, such that all other cycle lengths are divisors of $c$.
\end{lemma}
\par
For a general finite commutative ring $R$, since the factorization in $R[x]$ may not be unique, different approaches are needed. The study of linear dynamical systems over rings of the form $\mathbb{Z}_q$ was suggested in \cite{Co2} due to its relation with monomial dynamical systems over finite fields. In \cite{Bo}, an approach using Fitting's lemma \cite{Hu} (there is a positive integer $k$ such that $M = \mbox{Ker}f^k\oplus\mbox{Im}f^k$) was suggested. However, the key to the success of the proposed approach, which is to find a proper $k$ and compute $f^k$ efficiently, was not treated. In \cite{XZ}, an upper bound for $k$ was determined, and an algorithm that runs in time $O(n^3\log(n\log(q)))$, where $n$ is given by $M = R^n$ and $q =|R|$, to compute $f^k$ was developed. The results of \cite{XZ} provide a solution to the problem of determining whether or not such a system is a fixed point system, and also reduce the problem of finding the cycle structure of such a system to that of an automorphism (since $f:\mbox{Im}f^k\longrightarrow \mbox{Im}f^k$ is bijective).
\par
The main idea of \cite{XZ} is that, to determine the cycles of $f$, we can just consider the $f$-module $M$ as an abelian group ($\mathbb{Z}$-module) and determine the lengths of the cycles. After obtaining the information on the lengths of the cycles, we can find the cycles by solving linear systems of the form $(f^c - I)X = 0$, where $I$ is the identity map and $c$ is a cycle length. By the structure theorem of finitely generated abelian groups \cite{Hu}, we can assume that
\begin{eqnarray}\label{E1}
\mbox{Im}f^k =\mathbb{Z}_{p_1^{a_1}}\oplus \mathbb{Z}_{p_2^{a_2}}\oplus\cdots\oplus\mathbb{Z}_{p_m^{a_m}},
\end{eqnarray}
where $p_1,\ldots, p_m$ are (not necessarily distinct) primes, and $a_1,\ldots, a_m$ are (not necessarily distinct) positive integers.
\par
We further note that an $f$-module as in (\ref{E1}) can be written as a direct sum of $f$-submodules such that each of these submodules is formed by grouping those $\mathbb{Z}_{p_i^{a_i}}$'s with the same prime $p_i$ together. As discussed before, whenever we have a direct sum of $f$-submodules, we can just work with each direct summand independently. Therefore, our goal here is to analyze the cycle structure of an automorphism of an abelian group of the form ($p$ is a prime):
\begin{eqnarray}\label{E2}
\hspace{0.6cm} M = \mathbb{Z}_{p^{a_1}}\times \mathbb{Z}_{p^{a_2}}\times\cdots\times\mathbb{Z}_{p^{a_m}},\; 1\le a_1\le a_2 \le \ldots\le a_m.
\end{eqnarray}
Since abelian groups are $\mathbb{Z}$-modules, an endomorphism of $M$ can be represented by an integer matrix $A$; and if all $a_i = 1$, i.e. $M = \mathbb{Z}^m_p$, it is an endomorphism of the vector space $M$ over the field $\mathbb{Z}_p$.
In the case where all $a_i$'s are the same, i.e. $M=\mathbb{Z}^m_{p^a}$, an approach was developed in \cite{De} for an arbitrary endomorphism of $M$ using number theory techniques based on congruence of integers.
\par
In this paper, we consider the general case where $M$ is defined by (\ref{E2}) such that the $a_i$'s are not necessarily the same. Our emphasis is on the computation of the cycles; and for that, we need to be able to compute the possible cycle lengths efficiently. In Section 2, we give an upper bound for the cycle lengths such that all possible cycle lengths are factors of the given upper bound. The factors of the given upper bound are obtained from the induced linear systems over the finite field $\mathbb{Z}_p$, and thus there is no extra work needed to factor the upper bound. The results of this paper and the results in \cite{XZ} together provide an algorithm to determine the cycle structure of a linear dynamical system over a finite commutative ring. The algorithm is given in Section 3. We consider three examples in Section 4. The first example shows that the given upper bound for the cycle lengths is sharp, and the third example is a linear system derived from a real world cancer regulatory network.
\section{Result}
We begin by collecting some general facts. The following lemma holds for any ring $R$.
\par
\begin{lemma}\label{L21}
Let $M, N$ be $R$-modules, let $\varphi: M \rightarrow N$ be an onto $R$-module homomorphism, and let $K = \ker\varphi$. If $f: M \rightarrow M$ is an $R$-module endomorphism such that $f(K)\subset K$, then $f$ induces an $R$-module endomorphism $\bar{f}$ of $N$ by $\bar{f}(\varphi(m)) = \varphi(f(m)),\; m\in M$.
\end{lemma}
\begin{proof}
The proof is a straightforward verification of the definition.
\end{proof}
\par\medskip
Recall that we work with a finite commutative ring $R$ and a finite $R$-module, so it is possible to use a reduction approach to find the cycles of an $R$-module endomorphism $f$. For this purpose, we investigate the relationships among the cycles of the $f$-modules $M$, $N$, and $K$ (notation as in Lemma \ref{L21}). For any $m\in M$, we denote by $\ell(m)$ the length of the cycle generated by $m$ (if $m$ lies on a cycle). We have the following simple fact about fixed points:
\par
\begin{proposition} Notation as in Lemma \ref{L21}. Let $v$ be a fixed point of $N$ and let $u$ be a fixed point of $M$ such that $\varphi(u) = v$. Then $u_1$ is a fixed point of $M$ such that $\varphi(u_1) = v$ if and only if $u_1 = u+w$ for a fixed point $w\in K$. In particular, the number of fixed points $u$ of $M$ such that $\varphi(u) = v$ is $0$ or equal to the number of fixed points of $K$.
\end{proposition}
\begin{proof}
If $\varphi(u_1)=v$, then $\varphi(u_1-u)=0$, so $u_1=u+w$ for some $w\in K$. We have that $u_1 = u+w$ is a fixed point $\Leftrightarrow$ $f(u+w) = u+f(w) =u+w$ $\Leftrightarrow$ $f(w)=w$.
\end{proof}
\par\medskip
From now on, we assume that $f$ is an automorphism. Then every element of $M$ lies in a cycle. Note that by our assumption on $M$, $p^{a_m}(M) = 0$ (see (\ref{E2})), thus $p^{a_m}(K) = 0$. Suppose that $a$ is the least positive integer $i$ such that $p^i(K) = 0$. For $u\in M$, let $\varphi(u) = v\in N$, and let $\ell(v)=s$. With these assumptions, we have $f^{s}(u) = u+w$ for some $w\in K$. Let $\ell(w) = k$. We will write $[s,k]$ (respectively, $(s,k)$) for the least common multiple (respectively, the greatest common divisor) of $s$ and $k$.
\begin{lemma}\label{L22}
There exists $0\le b\le a$ such that $\ell(u)=p^b[s,k]$.
\end{lemma}
\begin{proof}
Let $\ell(u) = c$. Since $f^c(u) = u$, $\bar{f}^c(v) = v$ in $N$. So $c$ is a multiple of $s$. Let $c=ts$, then from $ f^{s}(u) = u+w$ we have $w = f^s(u) - u$ and
\begin{eqnarray*}
u = f^{ts}(u) = f^{(t-1)s}(u) + f^{(t-1)s}(w).
\end{eqnarray*}
Applying $f^s$ to this equation we have $f^{c}(w) = w$, which implies that $c$ is also a multiple of $k$. Thus $c$ is a common multiple of $s$ and $k$, say $c=c_0[s,k]$ for some positive integer $c_0$. This holds without the assumption that $p^a(K)=0$. If $p^a(K)=0$, we will have $c_0 =p^b$ for some $0\le b\le a$.
\par
To see that, let $[s,k]=ns$, where $n=\frac{k}{(s,k)}$. Starting with $f^{s}(u) = u+w$ and computing inductively, we have
\begin{eqnarray}\label{E3}
f^{ns}(u) = u+w+f^s(w)+\cdots+f^{(n-1)s}(w).
\end{eqnarray}
Note that since $ns$ is a multiple of $k$, $(f^{ns}-I)(w)=0$, which implies
\begin{eqnarray*}
(f^s - I)(w+f^s(w)+\cdots+f^{(n-1)s}(w)) = 0,
\end{eqnarray*}
that is\begin{eqnarray*}
w_1:=w+f^s(w)+\cdots+f^{(n-1)s}(w)
\end{eqnarray*}
is a fixed point of $f^s$. If $w_1=0$, then $c=[s,k]$. However, we do not know whether $w_1=0$ or not, so we use the assumption that $p^a(K)=0$. Starting from (\ref{E3}), i.e. $f^{ns}(u) = u+w_1$, and noting that $w_1$ is a fixed point of $f^{ns}$, we have
\begin{eqnarray*}
f^{p^ans}(u) = (f^{ns})^{p^a}(u) = u+p^aw_1 = u. \quad \mbox{(since $w_1\in K$)}
\end{eqnarray*}
Thus $\ell(u)$ is a factor of $p^ans$, which implies $c_0=p^b$ for some $0\le b\le a$.
\end{proof}
\par\medskip
We now assume that $f:M\longrightarrow M$ is a $\mathbb{Z}$-module automorphism, where $M$ is defined by (\ref{E2}). Note that $p^{a_m}(M)=0$. For each $0\le i\le a_m$, we define
\begin{eqnarray*}
M_i =\{v\in M\;|\; p^iv=0\}.
\end{eqnarray*}
Then each $M_i$ is an $f$-submodule of $M$. Consider the $f$-submodule filtration
\begin{eqnarray}\label{E4}
M_0=(0)\subset M_1\subset\cdots\subset M_{a_m-1}\subset M_{a_m}=M.
\end{eqnarray}
There are positive integers $k_i \le m,\; 1\le i\le a_m$, such that the quotients
\begin{eqnarray}\label{E5}
M_i/M_{i-1}\cong\mathbb{Z}_p^{k_i}.
\end{eqnarray}
The automorphism $f$ induces automorphisms
\begin{eqnarray*}
\bar{f}_i\; :\; M_i/M_{i-1}\cong\mathbb{Z}_p^{k_i}\longrightarrow
M_i/M_{i-1}\cong\mathbb{Z}_p^{k_i},\;\; 1\le i\le a_m.
\end{eqnarray*}
Since $\mathbb{Z}_p$ is a field, the dynamics of these automorphisms can be determined as discussed in the introduction section. For each $1\le i\le a_m$, let the set of cycle lengths of $\bar{f}_i$ be $L_{i} =\{c_{ij}\;:\; 1\le j\le \ell_i\}$,
where $\ell_i$ is a positive integer depending on $i$. We assume that $c_{i1}> c_{ij},\; 1\le i\le a_m,\; 1<j\le \ell_i$. Then by Lemma \ref{L11}, for any $1\le i\le a_m$, $c_{ij}|c_{i1}$ for all $1\le j\le \ell_i$.
\par
\begin{theorem}\label{T1}
Assume that $f:M\longrightarrow M$ is a $\mathbb{Z}$-module automorphism, where $M$ is defined by (\ref{E2}), and keep the notation introduced above.
\par
(1) For each integer $c_{i1}$ ($1\le i\le a_m$), the dynamical system $f:M\longrightarrow M$ possesses a cycle whose length is a multiple of $c_{i1}$.
\par
(2) All cycle lengths of $f$ are factors of
\begin{eqnarray}\label{c1}
p^{a_m-1}\cdot\mbox{LCM}(c_{11}, c_{21},\ldots, c_{a_{m}1}),
\end{eqnarray}
where LCM stands for the least common multiple.
\end{theorem}
\begin{proof}
(1) For a fixed $1\le i\le a_m$, choose $u\in M_i$ such that its image $\bar{u}\in M_i/M_{i-1}$ belongs to a cycle of length $c_{i1}$. If the cycle to which $u$ belongs has length $c$, then $f^c(u)=u$ and hence $\bar{f}_i^{\,c}(\bar{u})=\overline{f^c(u)}=\bar{u}$ in $M_i/M_{i-1}$, which implies that $c_{i1}|c$.
\par
(2) We use induction on $a_m$. There is nothing to prove if $a_m=1$ since $f$ is just an automorphism of the vector space $M=\mathbb{Z}^m_{p}$. Let $a_m \ge 2$. Then since $f$ induces an automorphism of each $f$-submodule $M_i$ in (\ref{E4}), it induces an automorphism of
\begin{eqnarray*}
M/M_1 \cong \mathbb{Z}_{p^{a_1-1}}\times \mathbb{Z}_{p^{a_2-1}}\times\cdots\times\mathbb{Z}_{p^{a_m-1}}.
\end{eqnarray*}
The filtration of (\ref{E4}) induces the following filtration:
\begin{eqnarray}
\hspace{0.6cm} (0)=M_1/M_1\subset M_2/M_1\subset\cdots\subset M_{a_m-1}/M_1\subset M/M_1.
\end{eqnarray}
Since for $2\le i\le a_m$
\begin{eqnarray}
(M_i/M_1)/(M_{i-1}/M_1)\cong M_i/M_{i-1},
\end{eqnarray}
by the induction assumption, we have that the possible cycle lengths of $M/M_1$ are factors of $r=p^{a_m-2}\mbox{LCM}(c_{21},\ldots, c_{a_m1})$. Now the theorem follows by noticing $p(M_1)=0$ and applying Lemma \ref{L11} to $M_1$ and Lemma \ref{L22} to the projection $M\rightarrow M/M_{1}$.
\end{proof}
\par\medskip
\begin{corollary}\label{C21}
If $M = \mathbb{Z}_{p^a}^m$ (i.e. the case where all $a_i$ are equal), the cycle lengths are factors of $p^{a-1}c$, where $c$ is the maximum cycle length of $M/M_{a-1}\cong \mathbb{Z}_p^m$ with the module structure induced by $f$.
\end{corollary}
\begin{proof}
Consider the onto $f$-module homomorphism $\varphi : M\rightarrow M_i/M_{i-1}$ defined by first taking $m\rightarrow p^{a-i}m,\; m\in M$, and then following with the projection from $M_i$ onto $M_i/M_{i-1}$. This homomorphism induces an $f$-module isomorphism $M/M_{a-1}\rightarrow M_i/M_{i-1}$ since its kernel is $M_{a-1}$. Thus all $M_i/M_{i-1}$ $(2\le i\le a)$ are isomorphic as $f$-modules and have the same cycle structure, which implies that $c_{i1} = c$ for all $i$, and the corollary follows from (2) of the theorem.
\end{proof}
\par\medskip
We remark that if $f$ is not an automorphism then the above corollary does not apply, since not every element of $M$ lies in a cycle.
\section{Computing the Cycles}
\par
Under the assumption that the cycle lengths of a linear system over a finite field can be computed, we can use Theorem \ref{T1} to find the cycles of a linear system $f : M\rightarrow M$ over a finite commutative ring $R$. Our algorithm is divided into four steps.
\par\medskip
\begin{enumerate}
\item {\bf Reduce to an automorphism.} By Theorem 2.1 of \cite{XZ}\footnote{The assumption on the module considered in Theorem 2.1 \cite{XZ} is somewhat different, but a quick check of the proof there reveals that it also applies to the case here.} $N=m\log_2(q)$ satisfies $f^{N+1}(M) = f^N(M)$, where $q = |R|$, and $m$ is given by a presentation of the $R$-module $M$ as a quotient of the free module $R^m$:
\begin{eqnarray*}
\varphi : R^m \longrightarrow M\;\;\mbox{and}\;\; R^m/\ker(\varphi) \cong M.
\end{eqnarray*}
Therefore we have an induced automorphism $f: f^N(M)\rightarrow f^N(M)$. Consider the structure of the finitely generated abelian group $f^N(M)$, and reduce it to the case where the module is defined by (\ref{E2}). Thus we assume the module $M$ is defined by (\ref{E2}) in steps 2 -- 4.
\par\medskip
\item
{\bf Compute the number $c_{i1}$ for each induced automorphism (see the paragraph preceding Theorem \ref{T1})}
\begin{eqnarray}\label{E7}
\bar{f}_i : \mathbb{Z}_{p}^{k_i}\rightarrow \mathbb{Z}_{p}^{k_i}, 1\le i\le a_m.
\end{eqnarray}
If the minimal polynomial of $\bar{f}_i$ is $g_i$ and $g_i=g_{i1}^{r_{i1}}\cdots g_{is_i}^{r_{is_i}}$ is its irreducible factorization, then $c_{i1} = LCM(h_{i1},\ldots, h_{is_i})$, where $h_{ij} = ord(g_{ij}^{r_{ij}})$. Since $\mathbb{Z}_p$ is a field, one can use the existing approaches \cite{Els}\cite{He}. See also the discussions later.
\par\medskip
\item {\bf Compute the order of $f$ as an element of the automorphism group of $M$.} Let $n :=p^{a_m-1}\cdot\mbox{LCM}(c_{11}, c_{21},\ldots, c_{a_{m}1})$. Then $n$ is a multiple of the order of $f$. Since the factorization of $n$ is known from the previous steps, we can use the following procedure to compute the order of $f$ (a short code sketch of this procedure is given after this list):
\begin{center}
\begin{tabular}{|lp{8cm}|}
\hline
Input: & $n$ --- a multiple of the order of $f$.\\
Output: & $ord$ --- the order of $f$.\\
\hline
& $ord \gets n$;\\
& {\tt while ( 1 ) do} \\
& \hspace{10mm} $ordersmaller \gets 0$;\\
& \hspace{10mm} {\tt for ( each prime factor $p$ of $ord$ )}\\
& \hspace{20mm} {\tt if ( $f^{\frac{ord}p} = I$ )}\\
& \hspace{30mm} $ord \gets \frac{ord}p$; \\
& \hspace{30mm} $ordersmaller \gets 1$; \\
& \hspace{30mm} {\tt break;} \\
& \hspace{10mm} {\tt if ( $ordersmaller = 0$ )}\\
&\hspace{20mm} {\tt return $ord$;}\\
\hline
\end{tabular}
\end{center}
\par\medskip
\item {\bf Compute the cycle lengths and the cycles.} Let the order of $f$ be $\mathbf{a}$. Then every cycle length of $f$ is a factor of $\mathbf{a}$, and the factors of $\mathbf{a}$ are known from step 2. For a factor $d$ of $\mathbf{a}$, solve the linear system $(f^d - I)X=0$; its solution set consists of the elements lying on cycles whose lengths divide $d$, from which the cycles of length $d$ are obtained.
\end{enumerate}
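As referenced in step 3, the following is a minimal Python sketch of the order-finding procedure (added purely as an illustration; matrices over $\mathbb{Z}_q$ are represented as lists of lists, and the prime factors of the multiple $n$ are assumed to be known).
\begin{verbatim}
def mat_mul(A, B, q):
    k = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(k)) % q
             for j in range(k)] for i in range(k)]

def mat_pow(A, e, q):
    # A^e mod q by repeated squaring.
    k = len(A)
    R = [[int(i == j) for j in range(k)] for i in range(k)]
    while e:
        if e & 1:
            R = mat_mul(R, A, q)
        A = mat_mul(A, A, q)
        e >>= 1
    return R

def is_identity(A):
    return all(A[i][j] == int(i == j)
               for i in range(len(A)) for j in range(len(A)))

def order_from_multiple(A, n, prime_factors, q):
    # n is a multiple of the order of A; strip superfluous prime factors.
    ord_ = n
    changed = True
    while changed:
        changed = False
        for p in prime_factors:
            while ord_ % p == 0 and is_identity(mat_pow(A, ord_ // p, q)):
                ord_ //= p
                changed = True
    return ord_

# Example: the Fibonacci matrix modulo 5; its order divides |GL_2(Z_5)| = 480.
Q = [[1, 1], [1, 0]]
print(order_from_multiple(Q, 480, [2, 3, 5], 5))   # -> 20
\end{verbatim}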
\par\medskip
{\bf Analysis of the algorithm.} We ignore the factors contributed by $q = |R|$ in our analysis, since for application purposes we can assume that $q$ is relatively small.
\par
In step 1, the computation of $f^N$ takes time $O(m^3)$. Also, to find the structure of the abelian group $f^N(M)$, one needs to compute the Smith normal form of the integer matrix of $f: f^N(M)\rightarrow f^N(M)$. It is well known that the computation of the Smith normal form takes time $O(m^3)$.
\par
In step 3, the given algorithm computes the order of $f$ in time $O(m^3)$ since computing $f^{\mathbf{a}/p}$ requires at most time $ O(m^2\sqrt{m}\log (m))$.
\par
The computations in step 4 are straightforward.
\par
Since we could not find any readily accessible reference on the algorithm and its analysis for the case where $R =\mathbb{Z}_p$ is a field, we give a discussion of the complexity of step 2 as follows.
\par
(1) There is a randomized algorithm \cite{NP} to compute the minimal polynomial $m(x)$ of a linear map
$T: \mathbb{Z}_{p}^k\rightarrow \mathbb{Z}_{p}^k$ in time $O(k^3)$ using $O(k)$ random vectors. Since $k\le m$, this computation requires expected time no worse than $O(m^3)$.
\par
(2) To factorize $m(x)$, some algorithms from \cite{GG} can be used. Assume the degree of $m(x)$ is $d$
and let ${\bf M}(d)$ be the number of field operations needed for multiplying two polynomials
of degree at most $d$, then $m(x)$ can be factorized in expected number of $O(d{\bf M}(d)\log (dp))$ field
operations. Since $d\le m$, this requires time no worse than $O(m^3)$.
\par
(3) Note that if $g(x)$ is an irreducible polynomial with $\mbox{ord}(g) = \alpha$, then
$\mbox{ord}(g^u) = \alpha p^c$ where $c$ is the smallest integer such that $p^c\ge u$ \cite{He}.
\par
Thus, our algorithm runs in expected time $\tilde{O}(m^3)$.
\par\medskip
\section{Conclusion and Examples}
We have developed an efficient algorithm to determine the cycle structure of a linear dynamical system over a finite commutative ring with identity and thus provided a solution to this problem.
\par
We conclude our paper by considering three examples. The first example shows that the bound given by (\ref{c1}) is sharp.
\begin{example}
Let $R = \mathbb{Z}_{16}$, let $M=\mathbb{Z}_{4}\times\mathbb{Z}_{8}\times\mathbb{Z}_{16}$, and let $f : M\longrightarrow M$ be defined by
\begin{eqnarray*}
A=\begin{pmatrix}
1 & 1 & 0 \\
0 & 1 & 1 \\
0 & 0 & 1 \\
\end{pmatrix}
\end{eqnarray*}
with respect to the generators $e_1 = (1,0,0)^t,\; e_2 = (0,1,0)^t,\; e_3=(0,0,1)^t$ of $M$. Then $a_m = 4$,
\begin{gather*}
M_0 = (0) \subset M_1 = \langle 2e_1, 4e_2, 8e_3\rangle \subset M_2 = \langle e_1, 2e_2, 4e_3\rangle\\
\subset M_3 = \langle e_1,e_2,2e_3\rangle \subset M_4 =M,
\end{gather*}
and
\begin{gather*}
M_4/M_3 \cong \mathbb{Z}_2,\; M_3/M_2 \cong \mathbb{Z}_2\times\mathbb{Z}_2,\\
M_2/M_1 \cong M_1/M_0 \cong \mathbb{Z}_2\times\mathbb{Z}_2\times\mathbb{Z}_2.
\end{gather*}
It is easy to see that the restriction of $f$ to any of these quotients is the identity map, so $c_{i1} = 1$ for $1\le i\le 4$. Thus according to Theorem \ref{T1}, the possible cycle lengths are factors of $2^3$. For this example, there are cycles for each of the possible factors of $2^3$. For instance, the cycle generated by $2e_1+2e_2$ has length $2$, the cycle generated by $e_2$ has length $4$, and the cycle generated by $e_3$ has length $8$.
\end{example}
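The cycle lengths stated in this example are easy to confirm by direct iteration; the short Python sketch below (added as an illustration) applies the matrix $A$ to elements of $\mathbb{Z}_{4}\times\mathbb{Z}_{8}\times\mathbb{Z}_{16}$ and reports the cycle lengths of the three generators mentioned above.
\begin{verbatim}
MODULI = (4, 8, 16)
A = [(1, 1, 0),
     (0, 1, 1),
     (0, 0, 1)]

def apply(v):
    return tuple(sum(A[i][j] * v[j] for j in range(3)) % MODULI[i]
                 for i in range(3))

def cycle_length(v):
    w, length = apply(v), 1
    while w != v:
        w, length = apply(w), length + 1
    return length

print(cycle_length((2, 2, 0)))   # 2e1 + 2e2 -> 2
print(cycle_length((0, 1, 0)))   # e2        -> 4
print(cycle_length((0, 0, 1)))   # e3        -> 8
\end{verbatim}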
\par\medskip
The second example shows that there may not be a cycle of length $p^bc$ (see Corollary \ref{C21} for notation) for any $b>0$.
\begin{example} For any prime $p$ and any integers $a,n > 1$, consider the $\mathbb{Z}_{p^a}$-module $\mathbb{Z}_{p^a}^n$ and the automorphism $f$ defined by the cyclic permutation of the standard basis $(e_1,e_2,\ldots, e_n)$:
\begin{eqnarray*}
e_1 \rightarrow e_{2}\rightarrow e_{3}\rightarrow \cdots\rightarrow e_{n} \rightarrow e_{1}.
\end{eqnarray*}
Then the order of $f$ is $n$ and the cycle lengths are the factors of $n$ (for each $d|n$, the non-zero element $e_1+e_{d+1}+e_{2d+1} + \cdots + e_{n-d+1}$ generates a cycle of length $d$).
\end{example}
\par\medskip
Next, we use a simplified version of the network given by the diagram of Fig. 2B in \cite{Zhang} to construct an example. The original network consists of $29$ nodes; the $7$ red output nodes that have a single input node are omitted to make the presentation more streamlined (these output nodes add very little extra to the computation). The omitted nodes are: FLIP, A20, RANTES, FasT, GZMB, MEK, and LCK. Our purpose here is to give an example from a real world network; finding a good linear model over a finite ring for the underlying biological system is beyond our scope here. We introduce the variables as in Table 1.
\par\medskip
\begin{table}[h]
\begin{center}
{\renewcommand{\arraystretch}{1.4}\small{\tiny
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
IL15 & RAS & ERK & JAK & IL2RBT & STAT3 & IFNGT & FasL\\ \hline
$x_1$ & $x_2$ & $x_3$ & $x_4$ & $x_5$ & $x_6$ & $x_7$ & $x_8$\\ \hline\hline
PDGF & PDGFR & PI3K & IL2 & BcIxL & TPL2 & SPHK & S1P \\ \hline
$x_9$ & $x_{10}$ & $x_{11}$ & $x_{12}$ & $x_{13}$ & $x_{14}$ & $x_{15}$ & $x_{16}$ \\ \hline\hline
sFas & Fas & DISC & Caspase & Apoptosis & IL2RAT &\cellcolor{Mygrey} &\cellcolor{Mygrey}\\ \hline
$x_{17}$ & $x_{18}$ & $x_{19}$ & $x_{20}$ & $x_{21}$ & $x_{22}$ &\cellcolor{Mygrey} &\cellcolor{Mygrey}\\ \hline
\end{tabular}}
}
\label{tab: pairing2}
\caption[Legend of variable names.]{\footnotesize Legend of variable names of the network given by Fig. 2B in \cite{Zhang} (with 7 red output nodes omitted).}
\end{center}
\end{table}
\par
\begin{example}
The base ring is $\mathbb{Z}_9$ and the state space is $\mathbb{Z}_9^{22}$. The update functions of the nodes are provided in Table 2.
\par
\begin{table}[h]
\begin{center}
{\renewcommand{\arraystretch}{1.4}\small{\tiny
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
$f_1$ & $f_2$ & $f_3$ & $f_4$ & $f_5$ & $f_6$ & $f_7$ & $f_8$\\ \hline
$x_1$ & $x_1$ & $x_2$ & $x_1$ & $x_1$ & $x_4$ & $3x_5+x_6$ & $2x_3+x_5+3x_{14}$\\ \hline\hline
$f_9$ & $f_{10}$ & $f_{11}$ & $f_{12}$ & $f_{13}$ & $f_{14}$ & $f_{15}$ & $f_{16}$ \\ \hline
$x_9$ & $x_{9}$ & $x_{10}$ & $-x_{4}-x_{11}$ & $-x_{4}-x_{11}$ & $x_{11}$ & $2x_{11}+x_{16}$ & $x_{15}$ \\ \hline\hline
$f_{17}$ & $f_{18}$ & $f_{19}$ & $f_{20}$ & $f_{21}$ & $f_{22}$ &\cellcolor{Mygrey} &\cellcolor{Mygrey}\\ \hline
$x_{15}$ & $-4x_{1}-4x_{11}-x_{17}$ & $x_{18}$ & $-x_1+x_{19}$ & $x_{20}$ & $x_{12}$ &\cellcolor{Mygrey} &\cellcolor{Mygrey}\\ \hline
\end{tabular}}
}
\label{tab: pairing3}
\caption[Formulas of the update functions.]{\footnotesize The update functions of the network given by Fig. 2B in \cite{Zhang}. The functions are $f_1 = x_1$, $f_2=x_1$, $f_3 =x_2$, ..., $f_8 = 2x_3+x_5+3x_{14}$, etc.}
\end{center}
\end{table}
Let $A$ be the matrix of the above linear system with respect to the standard basis $(e_1,e_2,\ldots,e_{22})$. According to Theorem 2.1 of \cite{XZ}, the upper bound of the exponent for $A^m$ to stabilize (i.e. $A^{m+1}(\mathbb{Z}_9^{22}) = A^{m}(\mathbb{Z}_9^{22})$) is $22\ln(9) < 48$. So we take $N=48$ and let $B=A^{N}$. To find the cycle structure of $A$, we need to find the cycle structure of the induced linear map (the other part is the kernel of $B$):
\begin{eqnarray*}
A:B(\mathbb{Z}_9^{22})\rightarrow B(\mathbb{Z}_9^{22}).
\end{eqnarray*}
\par
We make the following observation: if $U$ and $V$ are invertible matrices over $\mathbb{Z}_9$ such that $UBV$ is the Smith normal form for $B$, we can obtain the cycle lengths of $A$ from the cycle lengths of $UAU^{-1}$ as follows. Note that $UABV = UAU^{-1}(UBV)$ and $UBV$ is a diagonal matrix. Since $UBV$ gives the structure of the finite abelian group under consideration, we can read the action of $UAU^{-1}$ from the corresponding upper left submatrix of $UABV$ (i.e. we can just compute $UABV$).
\par
Our computation showed that the abelian group generated by the columns of $UBV$ is isomorphic to $\mathbb{Z}_9^4$ and the action induced by $A$ is given by the following matrix:
\begin{eqnarray*}
S=\begin{pmatrix}
1 & 0 & 1 & 0 \\
0 & 1 & 2 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0
\end{pmatrix}.
\end{eqnarray*}
The elementary divisors of $S$ over the field $\mathbb{Z}_3$ are $x+2$, $x+1$, and $x^2+x+1$, and the corresponding orders are $1,2$, and $3$. Thus, according to Theorem \ref{T1}, the possible cycle lengths are the factors of $3\cdot 6=18$. By considering $A^kB = B$, we find that the order of $S$ is $18$. This can also be obtained by computing the order of $S$ modulo $9$ directly. Thus, the cycle lengths of the linear model are $1, 2, 3, 6, 9$, and $18$. All the computations can be done using the LinearAlgebra package of MAPLE in less than one second. We remark that the computation can also be done in about the same amount of time for the full $29$-node network.
\end{example}
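The order computation at the end of this example can be reproduced directly: the following Python sketch (an added illustration) computes the multiplicative order of the $4\times 4$ matrix $S$ modulo $9$ by iterating matrix multiplication.
\begin{verbatim}
S = [[1, 0, 1, 0],
     [0, 1, 2, 0],
     [0, 0, 0, 1],
     [0, 0, 1, 0]]
I4 = [[int(i == j) for j in range(4)] for i in range(4)]

def mul(A, B, q=9):
    return [[sum(A[i][t] * B[t][j] for t in range(4)) % q
             for j in range(4)] for i in range(4)]

P, order = S, 1
while P != I4:
    P, order = mul(P, S), order + 1
print(order)   # -> 18
\end{verbatim}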
\subsection*{{\bf Acknowledgment}}
Part of the work in this paper was done during the visit of YJW to the Department of Mathematical Sciences at the University of Wisconsin-Milwaukee in the summer of 2015. YJW wishes to thank the University of Wisconsin-Milwaukee and its faculty for the hospitality she received during her visit.
| {
"timestamp": "2017-09-26T02:16:16",
"yymm": "1709",
"arxiv_id": "1709.08579",
"language": "en",
"url": "https://arxiv.org/abs/1709.08579",
"abstract": "The dynamics of a linear dynamical system over a finite field can be described by using the elementary divisors of the corresponding matrix. It is natural to extend the investigation to a general finite commutative ring. In a previous publication, the last two authors developed an efficient algorithm to determine whether a linear dynamical system over a finite commutative ring is a fixed point system or not. The algorithm can also be used to reduce the problem of finding the cycles of such a system to the case where the system is given by an automorphism. Here, we further analyze the cycle structure of such a system and develop a method to determine its cycles.",
"subjects": "Rings and Algebras (math.RA)",
"title": "Dynamics of Linear Systems over Finite Commutative Rings",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109516378093,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7076796335242823
} |
https://arxiv.org/abs/1903.00178 | Symbolic powers of certain cover ideals of graphs | In this paper, we compute the regularity and Hilbert series of symbolic powers of the cover ideal of a graph $G$ when $G$ is either a crown graph or a complete multipartite graph. We also compute the multiplicity of symbolic powers of cover ideals in terms of the number of edges. | \section{Introduction}
Symbolic powers of ideals have been studied intensely over the last two decades.
We refer the reader to \cite{DDAGHN} for a review in this direction.
There are many ideals associated to graphs, for example edge ideals and cover ideals.
Let $G$ denote a finite simple (no loops, no multiple edges) undirected graph with the vertex set
$V(G) =\{x_1,\ldots,x_n\}$ and edge set $E(G)$. For a graph $G$, by identifying the vertices with variables in
$S=\mathbb{K}[x_1,\ldots,x_n]$, we associate squarefree monomial ideals, the {\it edge ideal} $I(G)=\left(x_ix_j \mid \{x_i,x_j\} \in E(G) \right) $ and the {\it cover ideal}
$J(G)= \left( \prod_{x \in w}x \mid w \text{ is a minimal vertex cover of } G\right)$. By \cite[Proposition 2.7]{FHM}, $I(G)$ and $J(G)$ are dual to each other.
Recently, building a dictionary between combinatorial data of graphs and the algebraic properties of corresponding ideals has
been done by various authors
(cf. \cite{FHM}, \cite{HTrung}, \cite{PhDT103J}, \cite{Seyed16}, \cite{Fakhari17}, \cite{Seyed}, \cite{villarreal_book}).
In particular, establishing a relationship between Castelnuovo-Mumford regularity (or simply, regularity)
of powers of ideals, Hilbert series of ideals and combinatorial invariants associated with graphs is an active area of
research (cf. \cite{selvi_ha}, \cite{Good2013}, \cite{jayanthan}).
It was proved by Cutkosky,
Herzog and Trung, \cite{CHT}, and independently Kodiyalam,
\cite{vijay}, that if $I$ is a homogeneous ideal in $\mathbb{K}[x_1,
\ldots, x_n]$, then there exist non-negative integers $a, b$ and
$s_0$ such that $\mathrm{reg}(I^s) = as + b$ for all $s \geq s_0$.
While the
coefficient $a$ is well-understood (cf. \cite{CHT}, \cite{vijay}, \cite{TW}), the free constant $b$ and
stabilization index $s_0=\min\{s \mid \mathrm{reg}(I^s)=as+b\}$ are quite mysterious. In the case of symbolic powers,
Minh and Trung, \cite{MT17}, ask the following question.
\begin{question}
Let $I$ be a squarefree monomial ideal. Is $\mathrm{reg}(I^{(s)})$ a linear function for $s \gg 0$?
\end{question}
In \cite{HHT07},
Herzog, Hibi and Trung proved that, if $I$ is a monomial ideal, then $\mathrm{reg}(I^{(s)})$ is a quasi-linear
function for $s \gg 0$. For small dimension, more general results are
known in \cite{HHT02} and \cite{HT10}.
It is not known whether the regularity of symbolic powers of
squarefree monomial ideals is a linear function or not. In this article,
we determine the linear polynomial for the regularity of symbolic powers of certain cover ideals of graphs.
A crown graph $C_{n,n}$ is a graph obtained from the complete bipartite graph by
removing a perfect matching (see definition in Section \ref{pre}).
The Betti numbers of the edge ideal and the representation number of crown graphs have been studied
by several authors, \cite{GKP-crown}, \cite{Rather2018}.
Since the crown graph is bipartite, by \cite[Corollary 2.6]{GRV05},
$J(G)^s=J(G)^{(s)}$ for all $s \geq 1$.
In \cite{HTrung}, Hang and Trung proved that if $G$ is bipartite, then
$b \leq |V(G)|-\deg(J(G))-1$ and $s_0 \leq |V(G)|+2$, where $\deg(J(G))=\max\{|C| : C \text{ is a minimal vertex cover set of }
G\}.$ In the case of crown graphs we obtain that $b=0$ and $s_0=1$ (Theorem \ref{basecase}).
We now consider the complete multipartite graphs. The resolution of powers and the Cohen-Macaulayness of cover ideals of
complete multipartite graphs, as well as the vanishing ideal of the parametrized algebraic toric set associated to
complete multipartite graphs, have already been studied by several authors, \cite{JayanNeeraj}, \cite{RajivAjay},
\cite{RajivAjay18}, \cite{NV14}.
We prove that, if $G$ is complete multipartite with partition
$V(G)=V_1 \cup \cdots \cup V_k$, then $\mathrm{reg}(J(G)^{(s)})=s \deg(J(G))+p-1$ for all $s \geq 1$, where
$p=\min\{p_i : 1\le i\le k\}$ with $p_i=|V_i|$ (Theorem \ref{powerMultiThm}).
The Hilbert function,
Hilbert series and Hilbert polynomial are important invariants in commutative algebra and algebraic geometry
that measure the growth of the dimensions of the homogeneous components of a graded module.
In general, computing the Hilbert series of $S/I$ is a
difficult task, even in simple situations such as when $I$ is a monomial ideal (\cite{Bh1993}).
In \cite{Good2013}, Goodarzi computed the Hilbert series of squarefree monomial ideals.
We compute the Hilbert series of symbolic powers of cover ideals of crown and complete multipartite graphs (Theorem
\ref{Hil-crown}, Theorem \ref{C6.8}).
Computing and finding bounds for the multiplicity of a homogeneous ideal have been studied by a
number of researchers (see \cite{Bh1993}, \cite{Herzog'sBook}, \cite{villarreal_book}).
We compute the multiplicity of symbolic powers of cover ideals and edge ideals
in terms of combinatorial
invariants (Corollary \ref{multi-coverideal}).
In order to prove our main results, we first show that the minimal monomial generators of
symbolic powers of cover ideals
have a specific order that satisfies some nice properties (Lemma \ref{techlemma}, Lemma \ref{C6.4}).
Using this ordering and certain exact sequences, we obtain our main results.
Our paper is organized as follows.
In Section \ref{pre}, we
collect the necessary notion, terminology and some results that are used in the rest of the article.
The regularity and Hilbert series of symbolic powers of cover ideals
of crown and complete multipartite graphs are discussed in Sections \ref{crown} and \ref{complete}, respectively.
The multiplicity of symbolic powers of edge ideals and cover ideals is
studied in Section \ref{multiplicity}.
\section{Preliminaries}\label{pre}
In this section, we set up basic definitions, notation and some important results which are needed
for the rest of the paper.
\subsection{Notion from commutative algebra}
Let $M =\underset{k \in \mathbb{N}} \bigoplus M_k$ be a finite graded $S$-module.
The \emph{Hilbert series} of $M$, denoted by $H(M,t)$, is defined by
$H({M},t):= \underset{k \in \mathbb{N}} \sum \dim_{\mathbb{K}}(M_k) t^k$. By \cite[Proposition 4.4.1]{Bh1993}, there exists a polynomial $h_M(t)\in \mathbb{Z}[t]$ such that $H(M,t)=\dfrac{h_M(t)}{(1-t)^d}$, where $d$ is the dimension of $M$.
The \emph{multiplicity} of $M$, denoted by $e(M)$, is defined by $e(M)=h_M(1)$.
The
\emph{Castelnuovo-Mumford regularity} of $M$, denoted by $\mathrm{reg}(M)$, is defined as
$\mathrm{reg}(M)=\max \{j-i \mid \mathrm{Tor}_i^S(M,\mathbb{K})_j \neq 0\}$.
Let $I$ be an ideal in a Noetherian domain $R$. The $s$-th \emph{symbolic power} of $I$ is defined by
$I^{(s)}:= \bigcap\limits_{\mathfrak{p} \in \operatorname{Ass}(R/I)} (I^sR_{\mathfrak{p}} \cap R).$
It follows from \cite[Proposition 1.4.4]{Herzog'sBook} that if $I$ is a squarefree monomial ideal in $S$, then $s$-th
symbolic power of $I$ is
$
I^{(s)}= \bigcap_{\mathfrak{p}\in \operatorname{Ass}(S/I)} \mathfrak{p}^s.
$
\begin{remark}\label{symbCon} Let $\mathfrak{p}=(x_{i_1},\dots, x_{i_r})$. For a monomial $u$ in $S$, set $m_i(u)=\max\{j:x_i^j\mid u\}$ and $\deg_{\mathfrak{p}}(u)=\sum\limits_{k=1}^rm_{i_k}(u)$.
Let $I$ be a squarefree monomial ideal with $I=\bigcap\limits_{\mathfrak{p}\in \operatorname{Ass}(S/I)} \mathfrak{p}$. Then $u\in I^{(s)}$ if and only if $\deg_{\mathfrak{p}}(u)\geq s$ for all $\mathfrak{p} \in$ Ass$(S/I)$.
\end{remark}
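For illustration, the membership criterion of Remark \ref{symbCon} is straightforward to implement; the small Python sketch below (ours, with made-up helper names) checks whether a monomial, given by its exponent vector, lies in $I^{(s)}$ when $I$ is presented by its associated primes.
\begin{verbatim}
def in_symbolic_power(exponents, primes, s):
    # exponents: dict {variable index: exponent} describing the monomial u;
    # primes: list of index sets, the associated primes of S/I.
    return all(sum(exponents.get(i, 0) for i in P) >= s for P in primes)

# Cover ideal of the triangle on vertices 1,2,3:
# the associated primes are (x1,x2), (x1,x3), (x2,x3).
triangle = [{1, 2}, {1, 3}, {2, 3}]
print(in_symbolic_power({1: 1, 2: 1, 3: 1}, triangle, 2))  # x1*x2*x3: True
print(in_symbolic_power({1: 1, 2: 1}, triangle, 2))        # x1*x2:    False
\end{verbatim}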
\subsection{Notion from combinatorics}
Let $G$ be a finite simple graph with the vertex set $V(G)$
and edge set $E(G)$.
A subset
$X$ of $V(G)$ is called \textit{independent} if for all $x,y\in X$, $\{x,y\} \notin E(G)$. A graph $G$ is said to be \emph{bipartite} if there exist two disjoint independent sets $X$ and $Y$ such that $V(G)=X\cup Y$.
A graph $G$ is said to be \emph{complete multipartite} if $V(G)$ can be partitioned
into sets $V_1,\ldots,V_k$ for some $k \geq 2$ such that
$E(G)=\bigcup_{i \neq j}\left\{ \{x,y\}\mid x\in V_i,y\in V_j\right\}$
and it is denoted by $K_{p_1,\dots, p_k}$, where $p_i=|V_i|$.
An $n$-\emph{crown graph} (or simply a \emph{crown graph}), denoted as $C_{n,n}$,
is a bipartite graph on the vertex set $V(G)=\{x_1,\ldots,x_n,y_1,\ldots,y_n\}$ with the edge set
$E(G)=\Big\{\{x_i,y_j\} \mid 1 \leq i,j\leq n, i \neq j \Big\}.$
A subset $C \subset V(G)$ is a \emph{vertex cover} of
$G$ if for each $e \in E(G)$, $e\cap C \neq \emptyset$. If $C$ is minimal
with respect to inclusion, then $C$ is called a \textit{minimal vertex
cover} of $G$.
\begin{example}
Let $G=K_{2,2,1,1}$ and $H=C_{4,4}$ be complete multipartite graph
and crown graph on $\{x_{1,1},x_{1,2},x_{2,1},x_{2,2},x_{3,1}
,x_{4,1}\}$ and $\{x_1,\ldots,x_4,y_1,\ldots,y_4\}$ as given in the figure below.
\begin{minipage}{\linewidth}
\begin{minipage}{0.45\linewidth}
\begin{figure}[H]
\definecolor{ududff}{rgb}{0.30196078431372547,0.30196078431372547,1}
\begin{tikzpicture}[scale=0.75]
\draw [line width=1pt](1,2)-- (2.42,0.99);
\draw [line width=1pt] (1,3)-- (4,3);
\draw [line width=1pt] (4,3)-- (2.42,0.99);
\draw [line width=1pt] (4,2)-- (2.44,3.97);
\draw [line width=1pt] (2.44,3.97)-- (1,2);
\draw [line width=1pt] (4,2)-- (1,3);
\draw [line width=1pt] (1,2)-- (4,2);
\draw [line width=1pt] (1,2)-- (4,3);
\draw [line width=1pt] (1,3)-- (2.42,0.99);
\draw [line width=1pt] (1,3)-- (2.44,3.97);
\draw [line width=1pt] (4,2)-- (2.42,0.99);
\draw [line width=1pt] (4,3)-- (2.44,3.97);
\draw [line width=1pt] (2.44,3.97)-- (2.42,0.99);
\begin{scriptsize}
\draw [fill=black] (1,2) circle (1.5pt);
\draw[color=black] (1.08,1.6) node {$x_{1,2}$};
\draw [fill=black] (2.42,0.99) circle (1.5pt);
\draw[color=black] (2.58,.62) node {$x_{4,1}$};
\draw [fill=black] (1,3) circle (1.5pt);
\draw[color=black] (1.16,3.42) node {$x_{1,1}$};
\draw [fill=black] (4,3) circle (1.5pt);
\draw[color=black] (4.04,3.4) node {$x_{2,1}$};
\draw [fill=black] (4,2) circle (1.5pt);
\draw[color=black] (4.06,1.6) node {$x_{2,2}$};
\draw [fill=black] (2.44,3.97) circle (1.5pt);
\draw[color=black] (2.6,4.3) node {$x_{3,1}$};
\end{scriptsize}
\end{tikzpicture}
\caption*{$G$}
\end{figure}
\end{minipage}
\begin{minipage}{0.6\linewidth}
\begin{figure}[H]
\definecolor{ududff}{rgb}{0.30196078431372547,0.30196078431372547,1}
\begin{tikzpicture}[scale=1.]
\draw [line width=1pt] (1,4)-- (2,2);
\draw [line width=1pt] (2,2)-- (3,4);
\draw [line width=1pt] (3,4)-- (1,2);
\draw [line width=1pt] (1,2)-- (2,4);
\draw [line width=1pt] (2,4)-- (4,2);
\draw [line width=1pt] (4,2)-- (3,4);
\draw [line width=1pt] (3,2)-- (4,4);
\draw [line width=1pt] (4,4)-- (2,2);
\draw [line width=1pt] (3,2)-- (2,4);
\draw [line width=1pt] (1,4)-- (3,2);
\draw [line width=1pt] (1,2)-- (4,4);
\draw [line width=1pt] (4,2)-- (1,4);
\begin{scriptsize}
\draw [fill=black] (1,4) circle (1.5pt);
\draw[color=black] (1,4.3) node {$x_1$};
\draw [fill=black] (2,2) circle (1.5pt);
\draw[color=black] (2,1.75) node {$y_2$};
\draw [fill=black] (3,4) circle (1.5pt);
\draw[color=black] (3,4.3) node {$x_3$};
\draw [fill=black] (1,2) circle (1.5pt);
\draw[color=black] (1,1.75) node {$y_1$};
\draw [fill=black] (2,4) circle (1.5pt);
\draw[color=black] (2,4.3) node {$x_2$};
\draw [fill=black] (4,2) circle (1.5pt);
\draw[color=black] (4,1.75) node {$y_4$};
\draw [fill=black] (3,2) circle (1.5pt);
\draw[color=black] (3,1.75) node {$y_3$};
\draw [fill=black] (4,4) circle (1.5pt);
\draw[color=black] (4,4.3) node {$x_4$};
\end{scriptsize}
\end{tikzpicture}
\caption*{$H$}
\end{figure}
\end{minipage}
\end{minipage}
It can be noted that $\{x_{1,1},x_{1,2},x_{2,1},x_{2,2},x_{3,1}\}$
and
$\{x_2,x_3, x_4,y_2,y_3,y_4\}$ are minimal vertex covers of $K_{2,2,1,1}$ and $C_{4,4}$ respectively.
\end{example}
For any undefined terminology and further basic definitions, we refer the reader to \cite{Bh1993}, \cite{Herzog'sBook}.
\section{Crown graph}\label{crown}
In this section, we study the regularity and Hilbert series of symbolic powers of cover ideals of
crown graphs. Throughout this section, $G$ denotes a crown graph.
\subsection{Regularity}
In this subsection, we obtain the linear function for the regularity of $J(G)^{(s)}$ for all
$s \geq 1$.
Our result Theorem \ref{basecase} shows that $\mathrm{reg}(J(G)^{(s)})$ is a
linear function with the stabilization index $s_0=1$ and free constant $b=0$.
In order to prove this, we first fix certain notation.
\begin{notation} \label{setup}
For $n\geq 3$, let $G=C_{n,n}$ be a graph with $V(G)=\{x_1,\ldots,x_n,y_1,\ldots,y_n\}$. Set
\begin{align*}
M_x=\prod_{i=1}^n x_i, ~~ M_y=\prod_{i=1}^n y_i,~~
M=M_xM_y, ~~ M_i=\frac{M}{x_iy_i} \text{ for $1 \leq i \leq n$. }
\end{align*}
\end{notation}
First, we find the monomial generating set of
the cover ideal of a crown graph.
\begin{lemma}
Let $G=C_{n,n}$ with notation as in \ref{setup}. Then
$J(G)=(M_x,M_y,M_1,\ldots,M_n).$
In particular, $\deg(J(G))=2n-2$.
\end{lemma}
\begin{proof}
Since $J(G)=\bigcap\limits_{i\neq j}(x_i,y_j)$, $(M_x,M_y,M_1,\ldots,M_n) \subset J(G)$.
Let $u$ be a monomial in $J(G)$. If either $M_x \mid u$ or $M_y\mid u$, then we are done. Now, we assume that $M_x\nmid u$ and $M_y\nmid u$. This forces that there exist $i$ and $j$ such that $x_i\nmid u$ and $y_j\nmid u$. For $k\neq i$, $J(G)\subset (x_i,y_k)$ which forces that $y_k\mid u$. This implies that $j=i$, and by symmetry for $k\neq i$, $ x_k\mid u$. Therefore $M_i\mid u$ which gives the desired result.
\end{proof}
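For instance, for $n=3$ the lemma gives
\[
J(C_{3,3})=(x_1x_2x_3,\; y_1y_2y_3,\; x_2x_3y_2y_3,\; x_1x_3y_1y_3,\; x_1x_2y_1y_2),
\]
whose generators have degrees $3,3,4,4,4$, so that $\deg(J(C_{3,3}))=4=2\cdot 3-2$.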
For a monomial $u$, \emph{support} of $u$, denoted by $\mathrm{Supp}(u)$, is defined by $\mathrm{Supp}(u)=\{x_i:x_i\mid u\} $.
The following lemma summarizes some basic properties of $J(G)^{(s)}$.
\begin{lemma}\label{techlemma} Let $G=C_{n,n}$ with notation as in \ref{setup}. Then, for $s \geq 2$,
\begin{enumerate}[\rm i)]
\item $(M_x):M_y=(M_x)$.
\item $(M_x):M_i=(x_i)$ and $M_y:M_i=(y_i)$ for $1 \leq i \leq n$.
\item $(M_j):M_i=(x_iy_i)$ for $i\neq j$.
\item $(M_x,M_y,M_1,\ldots,M_{i-1}):M_{i}=(x_i,y_i)$ for $1 \leq i \leq n$.
\item $J(G)^{s}:M_x=J(G)^{s-1}$.
\item $(J(G)^{s},M_x):M_y=\left( J(G)^{s-1}, M_x\right) $.
\item $\left(J(G)^{s},M_x,M_y,M_1,\ldots,M_{i-1}\right):M_i=\left(x_i,y_i,M_i^{s-1}\right)$ for $1\leq i\leq n$.
\end{enumerate}
\end{lemma}
\begin{proof} (i)-(iv) are standard.\\
(v) The assertion follows from \cite[Lemma 3.2]{Seyed16}.\\
(vi) Since $\mathrm{Supp}(M_x) \cap \mathrm{Supp}(M_y)=\emptyset$, by (v), $(J(G)^{s},M_x):M_y=\left( J(G)^{s-1}, M_x\right) $.\\
(vii) By (iv), $(J(G)^{s},M_x,M_y,M_1,\ldots,M_{i-1}):M_i \supset (x_i,y_i,M_i^{s-1})$. Let
$u$ be a monomial in $(J(G)^{s},M_x,M_y,M_1,\ldots,M_{i-1}):M_i$. If either $x_i \mid uM_{i}$ or
$y_i \mid uM_i$, then $u \in (x_i,y_i, M_i^{s-1})$. Suppose $x_i \nmid uM_i$ and $y_i \nmid uM_i$.
Note that $(M_x,M_y,M_1,\dots,M_{i-1})\subset (x_i,y_i)$ and $x_i \nmid uM_i$ and $y_i \nmid uM_i$ forces that $uM_i\in J(G)^s $. Since $G$ is a bipartite graph, by \cite[Corollary 2.6]{GRV05}, $J(G)^{(s)}=J(G)^s$, and hence $J(G)^{s}=\bigcap\limits_{i\neq j}(x_i,y_j)^s$. For $k\neq i$, $J(G)^s\subset (x_i,y_k)^s$, and $x_i \nmid uM_i$ which implies that $y_k^s\mid uM_i$. Note that for $k\ne i$, $y_k^2\nmid M_i$ which forces that $y_k^{s-1}\mid u$. Similarly, we get that $x_k^{s-1}\mid u$ for all $k\neq i$. Hence $M_i^{s-1} \mid u$.
\end{proof}
We now proceed to compute the regularity of powers of $J(G)$.
\begin{theorem}\label{basecase} Let $G=C_{n,n}$. Then for all $s \geq 1$,
\[
\mathrm{reg}(J(G)^s)=s \cdot \deg(J(G)).
\]
\end{theorem}
\begin{proof}
It follows from \cite[Lemma 3.1]{Seyed} that $s\cdot\deg(J(G))\leq \mathrm{reg}(J(G)^s)$. We need to prove that
$\mathrm{reg}(J(G)^s) \leq s\cdot \deg(J(G))$. By \cite[Proposition 8.1.10]{Herzog'sBook}, $\mathrm{reg}(J(G))=\operatorname{pd}(S/I(G))$.
We proceed by induction on $s$. If $s=1$, then the result follows from \cite[Theorem 4.3]{Rather2018}. So, assume that $s \geq 2$.
Consider the following short exact sequence:
\begin{equation}\label{exactCrown}
0 \longrightarrow \dfrac{S} {J(G)^s:M_x}(-n) \longrightarrow \dfrac{S}{J(G)^s} \longrightarrow
\dfrac{S}{(J(G)^s,M_x)} \longrightarrow 0.
\end{equation}
By Lemma \ref{techlemma}, $J(G)^s:M_x=J(G)^{s-1}$.
Then, by the induction hypothesis, $$\mathrm{reg} \left( J(G)^{s-1}(-n)\right)\leq (s-1)\cdot\deg(J(G))+n \leq s\cdot\deg(J(G)).$$
Now, by Equation \eqref{exactCrown}, it is sufficient to show that $\mathrm{reg}(J(G)^s,M_x)) \leq s\cdot\deg(J(G)).$
\vskip 2mm \noindent
\textbf{Claim:} $\mathrm{reg}(J(G)^s,M_x) \leq s \cdot \deg(J(G))$ for all $s \geq 1$.
\vskip 1mm \noindent
\textit{Proof of the claim:}
We proceed by induction on $s$. If $s=1$, then $(J(G),M_x)=J(G)$ and the result follows from \cite[Theorem 4.3]{Rather2018}. Assume that $s \geq 2$.
Set $K=(J(G)^s,M_x)$, $K_1=(K,M_y)$ and for $2\leq l\leq n+1$, $K_l=(K_{l-1},M_{l-1})$. Note that $K_{n+1}=J(G)$.
Consider the following short exact sequences:
\begin{eqnarray}\label{exact1}
0 \longrightarrow \frac{S}{ K:M_{y}}(-n) \longrightarrow \frac{S}{K} \longrightarrow
\frac{S}{K_{1}} \longrightarrow 0,
\end{eqnarray}
for $1 \leq l \leq n,$
\begin{eqnarray}\label{exact2}
0 \longrightarrow \frac{S}{ K_l:M_{l}}(-(2n-2)) \longrightarrow \frac{S}{K_l} \longrightarrow
\frac{S}{K_{l+1}} \longrightarrow 0 .
\end{eqnarray}
Using Equations \eqref{exact1} and \eqref{exact2}, we get
\begin{eqnarray*}
\mathrm{reg}(K) \leq \max\left\{ \mathrm{reg}(K:M_y)+n,~\mathrm{reg}(J(G)),~\mathrm{reg}(K_l:M_{l})+2n-2 \text{ for }1\leq l \leq n\right\}.
\end{eqnarray*}
We now prove that each of the regularities appearing on the right hand side of the above inequality is
bounded above by $s \cdot \deg(J(G))$. By Lemma \ref{techlemma}, Theorem \ref{basecase} and
\cite[Theorem 4.3]{Rather2018},
we have
\begin{align*}
\mathrm{reg}(K:M_y)=&\mathrm{reg}(J(G)^{s-1},M_x),~\mathrm{reg}(J(G))= \deg(J(G))\\
\mathrm{reg}(K_l:M_{l})=&\mathrm{reg}(x_{l},y_{l}, M_{l}^{s-1}), \text{ for all $1\leq l \leq n$}.
\end{align*}
By induction, $\mathrm{reg}(K:M_y)\leq(s-1)\cdot\deg(J(G))$. Since $x_l,y_l,M_l^{s-1}$ is a regular sequence
with $\deg(x_l)=\deg(y_l)=1$ and $\deg(M_l^{s-1})=(s-1)\cdot\deg(J(G))$,
$\mathrm{reg}(K_l:M_l)=(s-1)\cdot \deg(J(G))$.
Therefore, $\mathrm{reg}(J(G)^s,M_x) \leq s\cdot\deg(J(G))$.
\end{proof}
\subsection{Hilbert series.}
We compute the Hilbert series of symbolic powers of cover ideals of crown graphs. We begin by computing the Hilbert series of cover ideal.
\begin{theorem}\label{hilbertscrown}
Let $G=C_{n,n}$ for $n \geq 3$ with notation as in \ref{setup}. Then
\[ H\left(\dfrac{S}{J(G)},t\right)= \frac{\displaystyle \sum_{i=0}^{n-1} (i+1)t^i + \displaystyle \sum_{i=0}^{n-3}(n-i-1)t^{n+i} - (n-1)t^{2n-2}}{(1-t)^{2n-2}}.
\]
\end{theorem}
\begin{proof}
Set $I_0=(M_x,M_y)$ and $I_{i}=(I_{i-1},M_i)$ for all $1 \leq i\leq n$.
For $1\leq i\leq n$, consider the exact sequence:
\begin{equation*}
0 \longrightarrow \frac{S}{I_{i-1}:M_i}(-(2n-2))
{\longrightarrow} \frac{S}{I_{i-1}} \longrightarrow
\frac{S}{I_i} \longrightarrow 0.\nonumber
\end{equation*}
We have
$H\left(\frac{S}{J(G)},t \right)
= H\left(\frac{S}{I_0},t\right)-t^{2n-2}\sum_{i=1}^n H\left(\frac{S}{I_{i-1}:M_i},t\right).$
Since $M_x,M_y$ is a regular sequence on $S$ of elements of degree $n$,
we get $H\left(\dfrac{S}{I_0},t\right)=\dfrac{(1-t^n)^2}{(1-t)^{2n}}.$
By Lemma \ref{techlemma}, $I_{i-1}:M_i=(x_i,y_i)$ for any $1 \leq i \leq n$
which implies that
$H\bigg( \frac{S} {I_{i-1}:M_i},t \bigg)=\frac{1}{(1-t)^{2n-2}}.$
Hence
\begin{eqnarray*}
H\left(\dfrac{S}{J(G)},t\right) =
\frac{ \displaystyle \sum_{i=0}^{n-1} (i+1)t^i + \displaystyle \sum_{i=0}^{n-3} (n-i-1)t^{n+i}-(n-1)t^{2n-2}}{(1-t)^{2n-2}}.
\end{eqnarray*}
\end{proof}
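As a concrete check of this formula (added for illustration), for $n=3$ the numerator specializes to $1+2t+3t^2+2t^3-2t^4$ over $(1-t)^4$, and a brute-force count of the monomials of $S/J(C_{3,3})$ in low degrees agrees with the expansion of the series. A minimal Python sketch of this check:
\begin{verbatim}
from itertools import product
from math import comb

# variables ordered x1,x2,x3,y1,y2,y3; generators M_x, M_y, M_1, M_2, M_3
gens = [(1, 1, 1, 0, 0, 0),
        (0, 0, 0, 1, 1, 1),
        (0, 1, 1, 0, 1, 1),
        (1, 0, 1, 1, 0, 1),
        (1, 1, 0, 1, 1, 0)]

def in_ideal(e):
    return any(all(e[i] >= g[i] for i in range(6)) for g in gens)

def brute_count(d):
    # dim_K (S/J)_d = number of degree-d monomials not lying in J.
    count = 0
    for e in product(range(d + 1), repeat=5):
        last = d - sum(e)
        if last >= 0 and not in_ideal(e + (last,)):
            count += 1
    return count

numerator = {0: 1, 1: 2, 2: 3, 3: 2, 4: -2}   # closed formula for n = 3
def formula_count(d):
    # coefficient of t^d in numerator / (1-t)^4
    return sum(c * comb(d - j + 3, 3) for j, c in numerator.items() if d >= j)

for d in range(7):
    assert brute_count(d) == formula_count(d)
print("Hilbert function of S/J(C_{3,3}) matches the formula in degrees 0..6")
\end{verbatim}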
We end this section by proving one of our main results.
\begin{theorem}\label{Hil-crown}
Let $G=C_{n,n}$ for $n \geq 3$ with notation as in \ref{setup}. Then for all $s \geq 1$, $H\left( \frac{S}{J(G)^s},t \right) $
\[
= \frac{ \displaystyle \sum_{i=0}^{ns-1} (i+1)t^i + \displaystyle \sum_{i=0}^{n-3}(n-i-1)st^{ns+i} - (n-1)st^{ns+n-2} - \displaystyle \sum_{i=0}^{s-2} (i+1)nt^{s(2n-2)-i(n-2)}}{(1-t)^{2n-2}}.
\]
\end{theorem}
\begin{proof}
We proceed by induction on $s$. By Theorem \ref{hilbertscrown}, the result is true for $s=1$.
Assume that $s\geq 2$.
Using Lemma \ref{techlemma} (v) and exact sequence \eqref{exactCrown}, we get
\begin{align}\label{main-eq}
H\left( \frac{S}{J(G)^s},t \right) = t^n H\left( \frac{S}{J(G)^{s-1}},t \right) + H\left( \frac{S} {\left( J(G)^s,M_x \right)},t \right).
\end{align}
\vskip 2mm \noindent
\textbf{Claim:} For all $s \geq 1$, $H\bigg( \frac{S} {\big( J(G)^s,M_x \big)},t \bigg) $
$$ =\frac{\displaystyle \sum_{i=0}^{n-1}
(i+1)t^i+n \displaystyle \sum_{i=n}^{ns-1}t^i + \displaystyle \sum_{i=1}^{n-2}(n-i)t^{ns+i-1} -
(n-1)t^{ns+n-2} - n \displaystyle \sum_{i=0}^{s-2}t^{s(2n-2)-i(n-2)}}{(1-t)^{2n-2}}.$$
Now, it is enough to prove the above claim, as the desired result follows from the induction argument, the claim and Equation \eqref{main-eq}.
\\ \textit{Proof of the claim:}
For $s=1$ the result follows from Theorem \ref{hilbertscrown} and
the fact that $(J(G),M_x)=J(G)$.
Assume that $s \geq 2$.
Using Equations \eqref{exact1} and \eqref{exact2}, we get
\begin{equation}\label{hilbertSeq}
H\left(\dfrac{S}{K},t\right)= t^nH\left(\dfrac{S}{K:M_y },t\right)+\sum_{i=1}^{n}t^{2n-2}H\left(\dfrac{S}{K_i:M_i},t\right)+H\left(\dfrac{S}{J(G)},t\right).
\end{equation}
By Lemma \ref{techlemma}(vi) and (vii), $K:M_y=(J(G)^{s-1}, M_x)$ and $K_i:M_i=(x_i,y_i, M_i^{s-1})$.
Since $x_i, y_i, M_i^{s-1}$ is a regular sequence with $\deg(M_i)=2n-2$, we get that
$$H\left(\dfrac{S}{K_i:M_i},t\right)=\dfrac{(1-t^{(s-1)(2n-2)})}{(1-t)^{2n-2}}.$$
Now the claim follows from Equation \eqref{hilbertSeq},
Theorem \ref{hilbertscrown} and induction.
\end{proof}
\section{Complete multipartite graph}\label{complete}
In this section, we study the regularity and Hilbert series of symbolic powers of cover
ideals of complete multipartite graphs. Throughout this section, $G$ denotes a complete multipartite graph.
\subsection{Regularity}
We determine the regularity of symbolic powers of cover ideals of
complete multipartite graphs. In order to compute the regularity of
$J(G)^{(s)}$, we first find the generators of $J(G)$ and its symbolic powers.
We begin by fixing some notation which is used for the rest of the section.
\begin{notation}\label{multi-nota}
Let $G=K_{p_1,\dots,p_k}$ be a complete multipartite graph with the vertex set
$$V(G)=\bigcup_{i=1}^{k} \{x_{ij}: 1\leq j\leq p_i \},~~ p_1\geq p_2\geq\dots\geq p_k\geq 1 \text{ and }
k\geq 2.$$
Set
$$n=p_1+p_2+\dots+p_k,~ M_i=\prod_{j=1}^{p_i}x_{ij} \text{ for }
1\leq i\leq k,~M=\prod_{i=1}^{k} M_i \text{ and }
N_i=\frac{M}{M_i} \text{ for } 1\leq i\leq k.$$
\end{notation}
The next lemma describes the minimal generating set of monomials of $J(G)$.
\begin{lemma} Let $G=K_{p_1,\dots,p_k}$ with the notation as in \ref{multi-nota}. Then $J(G)=(N_i \mid 1\leq i\leq k)$. In particular, $\deg(J(G))=n-p_k$.
\end{lemma}
\begin{proof}
Note that $\mathrm{Supp}(N_i)$ is a vertex cover of $G$ which implies that $(N_i\mid 1\leq i\leq k)\subset J(G)$. Let $u$ be a monomial in $J(G)$.
If for all $i,j$, $x_{i,j}\mid u$, then $u\in (N_i \mid 1\leq i\leq k)$. Now, without loss of generality, assume that $x_{1,1}\nmid u$.
Since for all
$i\neq 1$ and $j$, $J(G)\subset (x_{1,1},x_{i,j}) $, we get $x_{i,j}\mid u$, which further implies that $N_1\mid u$. This completes the proof.
\end{proof}
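For instance, for $G=K_{2,2,1}$ (so $n=5$, $p_1=p_2=2$ and $p_3=1$) we have $M_1=x_{11}x_{12}$, $M_2=x_{21}x_{22}$, $M_3=x_{31}$ and
\[
J(K_{2,2,1})=(N_1,N_2,N_3)=(x_{21}x_{22}x_{31},\; x_{11}x_{12}x_{31},\; x_{11}x_{12}x_{21}x_{22}),
\]
so that $\deg(J(G))=n-p_3=4$.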
If $k=2$, then by \cite[Corollary 2.6]{GRV05},
$J(G)^s=J(G)^{(s)}$ for all $s \geq 1$. If $k \geq 3$, then
$G$ is a non-bipartite graph and every vertex of $G$ is adjacent to every odd cycle in $G$.
Therefore, by \cite[Theorem 4.9 and Remark 4.10]{DrabGue},
$J(G)^{(s)}=MJ(G)^{(s-2)}+J(G)^s$.
Now, we further reduce the above expression.
\begin{lemma}\label{symbolicGen}
Let $G=K_{p_1,\dots,p_k}$ with the notation as in \ref{multi-nota}.
Then $$J(G)^{(s)}=MJ(G)^{(s-2)}+\left( N_j^s\mid j\in[k]\right).$$
\end{lemma}
\begin{proof} For $e\in E(G)$, let $\mathfrak{p}_e$ denote the ideal generated by the endpoints of $e$. Let $u$ be a monomial in $J(G)^{(s)}$. By Remark \ref{symbCon}, for every $e\in E(G)$, we have $\deg_{\mathfrak{p}_e}(u)\geq s$. Note that $\deg_{\mathfrak{p}_e}(M)=2$. If $u=Mv$, then $\deg_{\mathfrak{p}_e}(v)\geq s-2$, which implies that $v\in J(G)^{(s-2)}$.
Suppose $M \nmid u$. Then there exists $x_{i,j}$ such that $x_{i,j}$ does not
divide $u$. Without loss of generality, we may assume that $x_{1,1} \nmid u$.
Since $x_{1,1}$ is adjacent to $x_{i,j}$ for $i\geq 2$ and for all $j$, by Remark \ref{symbCon}, $N_1^s$ divides $u$. Thus, we have $J(G)^{(s)}\subset MJ(G)^{(s-2)}+
\left( N_j^s: j\in[k]\right)$. Clearly $\left( N_j^s: j\in[k]\right)\subset J(G)^{(s)}$. It follows from Remark \ref{symbCon} that $MJ(G)^{(s-2)} \subset J(G)^{(s)}$
which completes the proof.
\end{proof}
For a monomial ideal $I=(m_1,\ldots,m_r)$, let $I^{[s]}$ denote the ideal generated by
$m_1^s,\ldots,m_r^s$.
The following lemma plays a crucial role to compute the regularity of $J(G)^{(s)}$.
\begin{lemma}\label{C6.4} Let $G=K_{p_1,\dots, p_k}$ with the notation as in \ref{multi-nota}. Then
\begin{enumerate}[\rm i)]
\item $J(G)^{(s)}:M=J(G)^{(s-2)}$ for $s\geq 2$.
\item $(J(G)^{(s)},M)=(J(G)^{[s]},M)$ for $s\geq 1$.
\item $(N_1^s,\dots,N_{i-1}^s):N_i^s=(M_i^s)$ for $2\leq i\leq k$.
\item $J(G)^{[s]}:M=J(G)^{[s-1]}$ for $s\geq 2$.
\end{enumerate}
\end{lemma}
\begin{proof}
(i) and (ii) follow from \cite[Lemma 3.4]{Fakhari17} and Lemma \ref{symbolicGen}, respectively.\\
(iii) Clearly $(M_i^s)\subset (N_1^s,\dots,N_{i-1}^s):N_i^s$.
Let $u$ be a monomial in $(N_1^s,\dots,N_{i-1}^s):N_i^s$.
This forces that for some $1\leq j<i$, $N_j^s\mid uN_i^s$ which implies that $M_i^s \mid u$. \\
(iv) If $u \in J(G)^{[s]}:M$, then there exists $i$ such that $N_i^s\mid uM$ and so $N_i^{s-1} \mid u$.
Conversely, since $N_i\mid M$ for all $i$, we get $N_i^s\mid MN_i^{s-1}$.
\end{proof}
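Part (iii) of the lemma above is an elementary colon computation with monomials and can be spot-checked mechanically. The Python sketch below (an added illustration working on exponent vectors) computes $(I:m)$ for a monomial ideal $I$ and a monomial $m$, and confirms the statement for $K_{2,2,1}$ with $s=2$ and $i=3$.
\begin{verbatim}
def colon(gens, m):
    # (I : m) is generated by g / gcd(g, m) for the generators g of I.
    out = [tuple(max(g[i] - m[i], 0) for i in range(len(m))) for g in gens]
    minimal = [g for g in out
               if not any(h != g and all(h[i] <= g[i] for i in range(len(g)))
                          for h in out)]
    return sorted(set(minimal))

# K_{2,2,1}: variables (x11, x12, x21, x22, x31); N_1, N_2, N_3 as above
N1 = (0, 0, 1, 1, 1)
N2 = (1, 1, 0, 0, 1)
N3 = (1, 1, 1, 1, 0)
s = 2
sq = lambda v: tuple(s * e for e in v)

# (N_1^s, N_2^s) : N_3^s should equal (M_3^s) = (x31^2)
print(colon([sq(N1), sq(N2)], sq(N3)))   # -> [(0, 0, 0, 0, 2)]
\end{verbatim}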
For fixed $s\ge 2$ and $j\in [k]$, we associate an ideal $I_{s,j}=\left(M, N_1^s,\ldots,N_j^s \right)$.
Now, we compute the regularity of $I_{s,j}$ in terms of $s$ and $p_j$, which helps to compute the regularity of $J(G)^{(s)}$.
\begin{lemma}\label{regLem} For fixed $s\geq 2$ and $j\in [k]$, $\mathrm{reg}\left(I_{s,j}\right)=s(n-p_j)+p_j-1.$
\end{lemma}
\begin{proof}
We prove the assertion by induction on $j$. Suppose $j=1$, and consider the exact sequence
\[
0\longrightarrow \dfrac{S}{M:N_1^s}(-s(n-p_1)) \longrightarrow
\dfrac{S}{ M }\longrightarrow \dfrac{S}{\left( M,N_1^s\right)}\longrightarrow 0.
\]
Note that $\mathrm{reg} \left( M \right)=n$ and
$\left( M:N_1^s\right)= M_1$.
Therefore
$$\mathrm{reg}\left(\left( M:N_1^s\right) (-s(n-p_1))\right)=p_1+s(n-p_1).$$
Since $s \geq 2$, it follows from \cite[Lemma 1.2 (v)]{HTT} that
$\mathrm{reg}\left( M,N_1^s \right)=s(n-p_1)+p_1-1.$
Now assume $j\geq 2$ and consider the exact sequence
\[
0\longrightarrow \dfrac{S}{I_{s,j-1}:N_j^s}(-s(n-p_j))
\longrightarrow
\dfrac{S}{I_{s,j-1} }\longrightarrow \dfrac{S}{I_{s,j}}\longrightarrow 0.
\]
Note that $I_{s,j-1}:N_j^s= M_j$, and
hence $\mathrm{reg}\left(\left(I_{s,j-1}:N_j^s\right)(-s(n-p_j))\right)=s(n-p_j)+p_j$.
By induction, $\mathrm{reg}\left(I_{s,j-1}\right)=s(n-p_{j-1})+p_{j-1}-1.$
Hence by \cite[Lemma 1.2 (v)]{HTT}, we get
$
\mathrm{reg}\left(I_{s,j}\right)=s(n-p_j)+p_j-1.
$
\end{proof}
We now compute the regularity of $J(G)^{(s)}.$ Since a complete multipartite graph is a matroid, there is another way to compute the regularity of $J(G)^{(s)}$; see \cite[Theorem 4.5]{MT17}. We have provided here an elementary proof so
that the result is accessible to readers who are not familiar with matroids.
\begin{theorem}\label{powerMultiThm}
Let $G=K_{p_1,\dots,p_k}$ with the notation as in \ref{multi-nota}.
Then for all $s \geq 1$, $$\mathrm{reg}\left(J(G)^{(s)} \right)=s\cdot \deg(J(G))+p_k-1.$$
\end{theorem}
\begin{proof}
We prove the result by induction on $s$. If $s=1$, then the result follows from \cite[Theorem 5.3.8]{PhDT103J}. Assume that $s>1$. Consider the following exact sequence:
\begin{equation}\label{cmpMltSES}
0\longrightarrow \dfrac{S}{J(G)^{(s)}:M}(-n)\longrightarrow \dfrac{S}{J(G)^{(s)}}\longrightarrow
\dfrac{S}{(J(G)^{(s)}, M)}\longrightarrow0.
\end{equation}
By Lemma \ref{C6.4}(i), $J(G)^{(s)}:M=J(G)^{(s-2)}$ and by induction $$\mathrm{reg}\left(\left(J(G)^{(s)}:M\right)(-n)\right)=(s-2)\cdot \deg(J(G))+n+p_k-1. $$
It follows from Lemma \ref{symbolicGen} that $(J(G)^{(s)},M)=I_{s,k}$. Now, by Lemma \ref{regLem},
we get $$\mathrm{reg}\left(J(G)^{(s)}, M\right)=s\cdot \deg(J(G))+p_k-1.$$
Hence the assertion follows from \cite[Lemma 1.2]{HTT}.
\end{proof}
It follows from Theorem \ref{powerMultiThm} that if $p_k=1$, then the free constant $b=0$. In particular, we recover the result of Fakhari (\cite[Corollary 3.8]{Seyed}).
\begin{corollary}
Let $G$ be a complete graph on $n$ vertices. Then, for all $s \geq 1,$
$$\mathrm{reg}\left(J(G)^{(s)}\right)=s\cdot \deg(J(G))=s(n-1).$$
\end{corollary}
\subsection{Hilbert series}
In this subsection, we compute the Hilbert series of
symbolic powers of $J(G)$ in terms of the
number of vertices and the sizes of the parts of the partition.
To accomplish this, we first study the Hilbert series of
$\frac{S}{J(G)^{[s]}}$ for all $s \geq 1$.
\begin{proposition}\label{C6.5}
Let $G=K_{p_1,\dots, p_k}$ with the notation as in \ref{multi-nota}. Then, for all $s \geq 1,$
\[
H\left(\frac{S}{J(G)^{[s]}},t\right) = \frac{1- \displaystyle\sum_{i=1}^{k} t^{s(n-p_i)}+(k-1)t^{sn}}{(1-t)^n}.
\]
\end{proposition}
\begin{proof}
By Lemma \ref{C6.4}(iii), $(N_1^s,\dots,N_{i-1}^s):N_i^s=(M_i^s)$ for all $2\leq i\leq k$. Now, for $2\leq i\leq k$ consider the exact sequences:
\begin{eqnarray*}
0\longrightarrow \frac{S}{(M_i^s)}(-s(n-p_i)) \longrightarrow \frac{S}{(N_1^s,\dots,N_{i-1}^s)} \longrightarrow \frac{S}{(N_1^s,\dots,N_i^s)} \longrightarrow 0.
\end{eqnarray*}
We know that
$H\left(\frac{S}{(M_i^s)},t\right)=\frac{1-t^{sp_i}}{(1-t)^n} \text{ for all
$2\leq i\leq k$.}$
Therefore, by applying successively the above short exact sequences, we get
\begin{eqnarray*}
H\left(\frac{S}{J(G)^{[s]}},t\right) &=& H\left(\frac{S}{(N_1^s)},t\right) -\sum_{i=2}^kt^{s(n-p_i)}H\left(\dfrac{S}{(M_i^s)},t\right)
\\ &=& \dfrac{1-t^{s(n-p_1)}}{(1-t)^n}
-\sum\limits_{i=2}^k\left( \frac{t^{s(n-p_i)}-t^{sn}}{(1-t)^n} \right)
\\
&=& \frac{1-\displaystyle \sum_{i=1}^k t^{s(n-p_i)}+(k-1)t^{sn}}{(1-t)^n}.
\end{eqnarray*}
\end{proof}
To obtain the Hilbert series of $\frac{S}{J(G)^{(s)}}$, we need the following lemma:
\begin{lemma}\label{C6.7}
Let $G=K_{p_1,\dots, p_k}$ with the notation as in \ref{multi-nota}. Then for all $s \geq 1,$
\[
H\left(\frac{S}{(J(G)^{[s]},M)},t\right) = \frac{1-t^n- \displaystyle\sum_{i=1}^{k} t^{s(n-p_i)}+\displaystyle\sum_{i=1}^{k} t^{s(n-p_i)+p_i}}{(1-t)^n}.
\]
\end{lemma}
\begin{proof}
Consider the short exact sequence:
\begin{equation}\label{comMulSES1}
0\longrightarrow \frac{S}{J(G)^{[s]}:M}(-n) \longrightarrow \frac{S}{J(G)^{[s]}} \longrightarrow \frac{S}{(J(G)^{[s]},M)}\longrightarrow 0.
\end{equation}
By Lemma \ref{C6.4}(iv), $J(G)^{[s]}:M=J(G)^{[s-1]}$. Therefore
\begin{eqnarray*}
H\left(\frac{S}{(J(G)^{[s]},M)},t\right)= H\left(\frac{S}{J(G)^{[s]}},t\right)- t^nH\left(\frac{S}{J(G)^{[s-1]}},t\right).
\end{eqnarray*}
Using Proposition \ref{C6.5}, we get the result.
\end{proof}
We are now ready to establish the Hilbert series of $\dfrac{S}{J(G)^{(s)}}$ for all $s \geq 1$.
\begin{theorem}\label{C6.8}
Let $G=K_{p_1,\dots, p_k}$ with the notation as in \ref{multi-nota}.
Then $H\left(\dfrac{S}{J(G)^{(s)}},t\right)$ is
\[
\left\{\begin{array}{cc}\dfrac{1-t^{rn}+ \sum\limits_{j=0}^{r-1}\sum\limits_{i=1}^{k} (t^{p_i}-1)\ t^{(s-j)(n-p_i)+jp_i}}{(1-t)^n},
& \text{ if } s=2r, r\geq 1\\
\dfrac{1+(k-1)t^{(r+1)n}- \sum\limits_{i=1}^{k} t^{(n-p_i)+rn} + \sum\limits_{j=0}^{r-1}\sum\limits_{i=1}^{k} (t^{p_i}-1)\ t^{(s-j)(n-p_i)+jp_i}}{(1-t)^n} &\text{ if } s=2r+1, r\geq 0.
\end{array}\right.
\]
\end{theorem}
\begin{proof}
It follows from Lemma \ref{C6.4}(i) and (ii) that for $s\geq 2$, $J(G)^{(s)}:M=J(G)^{(s-2)}$ and $(J(G)^{(s)},M)=(J(G)^{[s]},M).$ Using Equation \eqref{cmpMltSES}, we get
\begin{eqnarray}\label{addHil}
H\left(\frac{S}{J(G)^{(s)}},t\right) = t^n H\left(\frac{S}{J(G)^{(s-2)}},t\right) + H\left(\frac{S}{(J(G)^{[s]},M)},t\right).
\end{eqnarray}
Suppose $s=2r$.
We prove this by induction on $r$.
If $r=1$, then, by Lemma \ref{symbolicGen}, $J(G)^{(2)}=(J(G)^{[2]},M)$. Now the result follows from
Lemma \ref{C6.7}.
Assume that $r \geq 2$.
Now by induction and Lemma \ref{C6.7}, we get the assertion.
Suppose $s=2r+1$. By Proposition \ref{C6.5}, the result holds for $r=0$. Now assume that $r \geq 1$. The assertion follows from induction, Lemma \ref{C6.7} and Equation \eqref{addHil}.
\end{proof}
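The closed form in Theorem \ref{C6.8} can be compared against a direct monomial count in small cases. The Python sketch below does this for $K_{2,3}$ and $s=2,3,4$, using only the membership criterion of Remark \ref{symbCon}; the part sizes, the values of $s$ and the truncation degree are ad hoc test choices.
\begin{verbatim}
# Compare the closed-form Hilbert series of Theorem C6.8 with a brute-force
# monomial count for K_{2,3}.  Membership in J(G)^(s) is tested by the
# edge-degree criterion (degree of u on every edge at least s).
from itertools import product
from math import comb

p = [2, 3]                            # part sizes p_1 <= ... <= p_k of K_{2,3}
parts, start = [], 0
for size in p:
    parts.append(list(range(start, start + size)))
    start += size
n, k = sum(p), len(p)
edges = [(a, b) for i in range(k) for j in range(i + 1, k)
         for a in parts[i] for b in parts[j]]

def hilbert_bruteforce(s, D):
    """Hilbert function of S/J(G)^(s) in degrees 0..D, by counting monomials."""
    H = [0] * (D + 1)
    for u in product(range(D + 1), repeat=n):
        d = sum(u)
        if d <= D and not all(u[a] + u[b] >= s for a, b in edges):
            H[d] += 1
    return H

def hilbert_formula(s, D):
    """Expand the closed-form series of the theorem above up to degree D."""
    num = {0: 1}                      # numerator polynomial: exponent -> coefficient
    def add(e, c):
        num[e] = num.get(e, 0) + c
    r = s // 2
    if s % 2 == 0:
        add(r * n, -1)
    else:
        add((r + 1) * n, k - 1)
        for pi in p:
            add((n - pi) + r * n, -1)
    for j in range(r):
        for pi in p:
            add((s - j) * (n - pi) + j * pi + pi, 1)
            add((s - j) * (n - pi) + j * pi, -1)
    return [sum(c * comb(d - e + n - 1, n - 1) for e, c in num.items() if e <= d)
            for d in range(D + 1)]

for s in (2, 3, 4):
    assert hilbert_bruteforce(s, 10) == hilbert_formula(s, 10), s
print("Theorem C6.8 agrees with brute-force counts for K_{2,3}, s = 2, 3, 4")
\end{verbatim}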
\section{Multiplicity}\label{multiplicity}
Next, we study the multiplicity of symbolic powers of cover ideals and edge ideals.
The following lemma is probably well-known. We include it for the sake of completeness.
\begin{lemma}\label{multLem}
Let $I$ be an ideal generated by $h$ linear forms. Then $e\left(\dfrac{S}{I^s}\right)=\displaystyle \binom{s+h-1}{h}.$
\end{lemma}
\begin{proof}
Let $x\in \mathfrak{m}\setminus I$ be a generator of $\mathfrak{m}$,
where $\mathfrak{m}=(x_1,\ldots,x_n)$ is the unique homogeneous maximal ideal in $S$.
Then $x$ is a regular element on $\dfrac{S}{I^s}$. This gives $H\left(\dfrac{S}{(I^s,x)},t\right)=(1-t)H\left(\dfrac{S}{I^s},t\right)$, and hence $e\left(\dfrac{S}{(I^s,x)}\right)=e\left(\dfrac{S}{I^s}\right).$ Repeating this reduction, we may assume that $I=\mathfrak{m}$ is the maximal ideal of a polynomial ring in $h$ variables. Thus, we get $e\left(\dfrac{S}{I^s}\right)=\dim_{\mathbb{K}}\left(\dfrac{S}{I^s}\right)=\binom{s+h-1}{h}.$
\end{proof}
\begin{obs}\label{rem-multi}
Let $I$ be a squarefree monomial ideal in $S$.
Since the minimal associated primes of squarefree monomial ideals
are generated by subsets of variables, by \cite[Corollary 4.7.8]{Bh1993} and
Lemma \ref{multLem}, we have $$e\left(\dfrac{S}{I^{(s)}}\right)=\binom{h+s-1}{h}\left|\operatorname{Minh}(I)\right|,$$
where $\operatorname{Minh}(I)=\{\mathfrak{p} \in \operatorname{Ass}(S/I) : \operatorname{ht}(\mathfrak{p})=\operatorname{ht}(I)\}.$
\end{obs}
As a consequence of Observation \ref{rem-multi}, we obtain the multiplicity of symbolic powers of
edge ideals and cover ideals in terms of combinatorial invariants.
\begin{corollary}\label{multi-coverideal}
Let $G$ be a graph and $h$ the size of the smallest vertex cover of $G$. Then for all $s \geq 1$,
\begin{enumerate}[\rm i)]
\item $e\left(\dfrac{S}{I(G)^{(s)}}\right)=\binom{h+s-1}{h} \mathcal{V}(G)$, where
$\mathcal{V}(G)$ is the number of minimal vertex covers of $G$ of minimal size.
\item $e\left(\dfrac{S}{J(G)^{(s)}}\right)=\binom{s+1}{2}|E(G)|.$
\end{enumerate}
\end{corollary}
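For a concrete graph, the combinatorial quantities entering Corollary \ref{multi-coverideal} are easy to tabulate. The Python sketch below does this for the $5$-cycle (an arbitrary test graph, not one considered above), enumerating minimal vertex covers by brute force and printing the two multiplicities for a sample value of $s$.
\begin{verbatim}
# Tabulate the combinatorial data used in the corollary above for C_5.
from itertools import combinations
from math import comb

vertices = range(5)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # the 5-cycle C_5

def is_cover(C):
    return all(a in C or b in C for a, b in edges)

covers = [set(C) for r in range(len(vertices) + 1)
          for C in combinations(vertices, r) if is_cover(set(C))]
minimal_covers = [C for C in covers if not any(D < C for D in covers)]
h = min(len(C) for C in minimal_covers)
num_min = sum(1 for C in minimal_covers if len(C) == h)

s = 3
print("minimal vertex covers:", minimal_covers)
print("h =", h, ", V(G) =", num_min, ", |E(G)| =", len(edges))
print("e(S/I(G)^(s)) =", comb(h + s - 1, h) * num_min)
print("e(S/J(G)^(s)) =", comb(s + 1, 2) * len(edges))
\end{verbatim}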
\vskip 5mm \noindent
\textbf{Acknowledgement:}
We would like to thank A. V. Jayanthan and J. K. Verma
for several helpful clarifications of our doubts.
The computational commutative algebra package Macaulay2 \cite{M2} was used extensively to compute several
examples. The authors were supported by NBHM,
IIT Madras, UGC India and IMSc Chennai, respectively.
\bibliographystyle{abbrv}
| {
"timestamp": "2019-03-04T02:09:14",
"yymm": "1903",
"arxiv_id": "1903.00178",
"language": "en",
"url": "https://arxiv.org/abs/1903.00178",
"abstract": "In this paper, we compute the regularity and Hilbert series of symbolic powers of the cover ideal of a graph $G$ when $G$ is either a crown graph or a complete multipartite graph. We also compute the multiplicity of symbolic powers of cover ideals in terms of the number of edges.",
"subjects": "Commutative Algebra (math.AC)",
"title": "Symbolic powers of certain cover ideals of graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109503004293,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7076796325632485
} |
https://arxiv.org/abs/1612.01983 | Three favorite sites occurs infinitely often for one-dimensional simple random walk | For a one-dimensional simple random walk $(S_t)$, for each time $t$ we say a site $x$ is a favorite site if it has the maximal local time. In this paper, we show that with probability 1 three favorite sites occurs infinitely often. Our work is inspired by Tóth (2001), and disproves a conjecture of Erdös and Révész (1984) and of Tóth (2001). | \section{Introduction}
Let $S_{t},~t\in \mathbb{N}$ be a one-dimensional simple random walk with $S_0=0$. We define the local time at $x$ by time $t$ to be
$
L(t,x)=\#\{0<k\leq t: S_k=x\}.
$
At time $t$, we say $x$ is a favorite site if it has the maximal local time, i.e., $L(t, x) = \max_y L(t, y)$, and we say that \emph{three favorite sites} occurs if there are exactly three sites which achieve the maximal local time.
Our main result states that
\begin{thm}\label{mainthm}
For one-dimensional simple random walk, with probability 1 three favorite sites occurs infinitely often.
\end{thm}
Theorem~\ref{mainthm} complements the result in \cite{toth2001no} which showed that there are no more than three favorite sites eventually, and disproves a conjecture of Erd\"{o}s and R\'ev\'esz \cite{ER84,ER87,ER91} and of \cite{toth2001no}. Previous to \cite{toth2001no}, it was shown in \cite{TW97} that eventually there are no more than three favorite edges.
Besides the number of favorite sites, the asymptotic behavior of favorite sites has been much studied (see \cite{ST00} for an overview): at time $n$ as $n\to \infty$, it was shown in \cite{BG85,LS04} that the distance between the favorite sites and the origin in the infimum limit sense is about $\sqrt{n}/\mathrm{poly}(\log n)$ while in the supremum limit sense is about $\sqrt{2n\log\log n}$; it was proved in \cite{CS98} that the distance between the edge of the range of the random walk and the set of favorites increases as fast as $\sqrt{n}/(\log \log n)^{3/2}$; in \cite{CRS00} the jump size for the position of the favorite site was studied and shown to be as large as $\sqrt{2n\log\log n}$; a number of other papers \cite{Eis97,BES00, Marcus01, HS00, EK02, HS15, CDH16} studied similar questions in broader contexts including symmetric stable processes, random walks in random environments and so on.
In two dimensions and higher, favorite sites for simple random walks have been intensively studied, where some intriguing fractal structures arise; see, e.g., \cite{DPRZ01, Dembo05, Abe15, Okada}. Such fractal structures also play a central role in the study of cover times for random walks; see, e.g., \cite{DPRZ04, Belius2016, Belius13}. We refrain from an extensive discussion of the literature on this topic as the mathematical connection to the concrete problem considered in the present article is limited. That being said, we remark that analogous questions on the number of favorite sites in two dimensions and higher are of interest for future research, which we expect to be more closely related to the literature mentioned in this paragraph as well as references therein.
Our proof is inspired by \cite{toth2001no}, which in turn was inspired by \cite{TW97}. Following \cite{toth2001no},
we define the number of upcrossings and downcrossings at $x$ by the time $t$ to be
\begin{align*}
&U(t,x)=\#\{0<k\leq t: S_k=x,~S_{k-1}=x-1\}, \\
&D(t,x)=\#\{0<k\leq t:S_k=x,~S_{k-1}=x+1\}.
\end{align*}
It is elementary to check that (see, e.g, \cite[Equation (1.6)]{toth2001no})
\begin{align}\label{L}
\begin{split}
L(t,x)=& D(t,x)+D(t,x-1)+\mathds{1}_{\{0<x\leq S(t)\}}-\mathds{1}_{\{S(t)<x\leq 0 \}} \\
=&U(t,x)+U(t,x+1)+\mathds{1}_{\{S(t)\leq x<0\}}-\mathds{1}_{\{0\leq x <S(t) \}}.
\end{split}
\end{align}
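Identity \eqref{L} is a deterministic statement about any nearest-neighbour path, so it can be confirmed numerically; the following Python sketch checks it along a simulated trajectory (the horizon and the range of sites tested are arbitrary choices).
\begin{verbatim}
# Pathwise check of identity (L) along one simulated trajectory.
import random

random.seed(0)
T, S = 2000, [0]
for _ in range(T):
    S.append(S[-1] + random.choice((-1, 1)))

def L(t, x): return sum(1 for k in range(1, t + 1) if S[k] == x)
def U(t, x): return sum(1 for k in range(1, t + 1) if S[k] == x and S[k-1] == x - 1)
def D(t, x): return sum(1 for k in range(1, t + 1) if S[k] == x and S[k-1] == x + 1)

for t in (500, 2000):
    for x in range(-30, 31):
        via_D = D(t, x) + D(t, x - 1) + (0 < x <= S[t]) - (S[t] < x <= 0)
        via_U = U(t, x) + U(t, x + 1) + (S[t] <= x < 0) - (0 <= x < S[t])
        assert L(t, x) == via_D == via_U
print("identity (L) verified along a sample path")
\end{verbatim}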
The set of favorite (or most visited) sites $\mathscr{K}(t)$ of the random walk at time $t\in \mathbb{N}$ consists of
those sites where the local time attains its maximum value, i.e.,
\begin{align*}
\mathscr{K}(t)=\left\{y\in \mathbb{Z}:~L(t,y)=\max_{z\in \mathbb{Z}} L(t,z) \right\}.
\end{align*}
For $r\geq 1$, let $f(r)$ be the (possibly infinite) number of times when the currently occupied site is one of the $r$ favorites:
\begin{align*}
f(r)=\#\{t\geq 1: ~S_t\in \mathscr{K}(t),~ \#\mathscr{K}(t)=r\}.
\end{align*}
We remark that one of the main conceptual contributions in \cite{toth2001no, TW97} is the introduction of this function $f(r)$. Effectively, $f(r)$ counts the clusters of instances for $r$ favorite sites; it is plausible that after the random walk leaves one of the favorite sites, within a non-negligible (random) number of steps those $r$ favorite sites will remain favorite sites. Therefore, the expectation of $f(r)$ is significantly smaller than the expected number of $t$ at which $r$ favorite sites occurs, and in fact it was shown in \cite{toth2001no} that $\mathbb{E} f(r) < \infty$ for all $r\geq 4$. It was then conjectured in \cite{toth2001no} that $f(3) <\infty$ with probability 1, even though from the computations in \cite{toth2001no} it was clear that $\mathbb{E} f(3) = \infty$.
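The quantity $f(r)$ is also easy to explore by simulation. The Python sketch below tracks the favorite sites of a simulated walk and tallies, for each $r$, the number of times the walker sits on a favorite site while there are exactly $r$ favorites; this finite-horizon count merely illustrates the definition and of course says nothing about whether $f(3)=\infty$.
\begin{verbatim}
# Finite-horizon Monte Carlo illustration of f(r) for a simple random walk.
import random
from collections import defaultdict

random.seed(1)
T = 500_000
S = 0
local = defaultdict(int)     # site -> local time L(t, x)
count = defaultdict(int)     # value v -> number of visited sites with local time v
m = 0                        # current maximal local time
f = defaultdict(int)         # r -> number of t <= T with S_t a favorite and #K(t) = r
for t in range(1, T + 1):
    S += random.choice((-1, 1))
    v = local[S]
    local[S] = v + 1
    count[v] -= 1            # (count[0] may go negative; it is never used below)
    count[v + 1] += 1
    if v + 1 > m:
        m = v + 1
    if local[S] == m:        # the walker currently sits on a favorite site
        f[count[m]] += 1
print({r: f[r] for r in sorted(f)})
\end{verbatim}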
In the current article, we will show, using the idea of counting clusters in \cite{toth2001no}, that the correlation becomes so small that the first moment dictates the behavior. That is to say, we will show that
\begin{equation}\label{eq-main-thm}
f(3)=\infty \mbox{ with probability 1},
\end{equation}
which then yields Theorem~\ref{mainthm}.
The rest of the paper is organized as follows: in Section~\ref{sec:prelim} we will set up the framework of our
proof following \cite{toth2001no}; in Section~\ref{sec:main} we first show that $f(3) = \infty$ with positive probability and then prove \eqref{eq-main-thm} by demonstrating a $0$-$1$ law. We emphasize that the first moment computation in Subsection~\ref{sec:firstmoment} follows from arguments in \cite{toth2001no}, and the main novelty of our work is on the second moment computation in Subsection~\ref{sec:secondmoment}.
\medskip
\noindent {\bf Acknowledgement.} We thank Yueyun Hu and Zhan Shi for introducing the problem on favorite sites and for interesting discussions, and we thank Steve Lalley and B\'alint T\'oth for many helpful discussions and useful comments for an early version of the manuscript.
\section{Preliminaries} \label{sec:prelim}
In this section, we recall the framework of \cite{toth2001no} with suitable adaptation to our setup, and collect a number of useful and well-understood facts. We claim no originality in this section; it is included mainly for completeness of notation and definitions.
\subsection{Three consecutive favorite sites}
It turns out that in order to show $f(3) =\infty$ it suffices to consider instances of three favorite sites which are consecutive. To this end, we define the inverse edge local times by
\begin{align*}
&T_U(k,x)\triangleq \inf\{t\geq 1:U(t,x)=k\} \mbox{ and } T_D(k,x)\triangleq \inf\{t\geq 1:D(t,x)=k\}.
\end{align*}
We consider the events of three consecutive favorite sites, i.e.,
$$A_{x,h}^{(k)}\triangleq \{\mathscr{K}(T_U(k+1,x))=\{x,x+1,x+2\},~ L(T_U(k+1,x),x)=h \}\,.$$
We write the events in $T_U(k+1,x)$ rather than $T_U(k,x)$ as it matches the form of the Ray-Knight representation which we will discuss later. We then let $I_h=(\frac{1}{2}(h+\sqrt{h}), \frac{1}{2}(h+2\sqrt{h}))$ and define
\begin{align*}
N_H=\sum_{h=1}^H \sum_{k\in I_h} \sum_{x=1}^\infty \mathds{1}_{A_{x,h}^{(k)}} \quad \text{ and } \quad
N=\llim{H} N_H = \sum_{h=1}^\infty \sum_{k\in I_h} \sum_{x=1}^\infty \mathds{1}_{A_{x,h}^{(k)}} \,.
\end{align*}
We observe that for each $h$, the events $A_{x, h}^{(k)}$ are mutually disjoint. In addition, we have that $f(3) \geq u(x)$ where
\begin{align*}
u(x)=& \sum_{t=1}^\infty \mathds{1}_{\{S(t-1)=x-1, ~S(t)=x, ~x\in \mathscr{K}(t), ~\#\mathscr{K}(t)=3\}} \\
= &\sum_{k=1}^\infty \mathds{1}_{ \{ x\in \mathscr{K}(T_U(k,x)),~ \# \mathscr{K}(T_U(k,x))=3 \} } \\
=& \sum_{k=0}^\infty \sum_{h=1}^\infty \mathds{1}_{\{x\in \mathscr{K}(T_U(k+1,x)),~ \#\mathscr{K}(T_U(k+1,x))=3,~ L(T_U(k+1,x),x)=h\}}\,.
\end{align*}
Therefore, we have that $f(3) \geq N$, and thus it suffices to show that $N = \infty$. We remark that the preceding discussions are extracted from decompositions in \cite[(2.3), (2.4), (2.5)]{toth2001no}, and they are the starting point for all computations in \cite{toth2001no} as well as the present article.
\subsection{Additive processes and the Ray-Knight representation}
Throughout this paper we denote by $Y_t$ a critical Galton-Watson branching process with geometric offspring distribution and by $Z_t$, $R_t$ critical geometric branching processes with one immigrant in each generation (in different ways). More precisely, we let $X_{t, i}$'s be i.i.d.\ geometric variables with mean 1 and recursively define
\begin{equation}\label{eq-Z}
Z_{t+1} = \mbox{$\sum_{i=1}^{Z_t+1}$} X_{t,i} \mbox{ and } R_{t+1}=1+\mbox{$\sum_{i=1}^{R_t}$} X_{t,i} \,.
\end{equation}
One can verify that $Y_t$, $Z_t$ and $R_t$ are Markov chains with state space $\mathbb{Z}_+$ and transition probabilities:
\begin{align}\label{pi}
\begin{split}
\mathbb{P}(Y_{t+1}=j|Y_t=i)=&\pi(i,j)
\triangleq
\begin{cases}
\delta_0(j), & \text{if } i=0, \\
2^{-i-j}~\frac{(i+j-1)!}{(i-1)!~j!}, & \text{if } i>0, \\
\end{cases}
\end{split} \\
\mathbb{P}(Z_{t+1}=j|Z_t=i)=&\rho(i,j)\triangleq \pi(i+1,j) \nonumber\\
\mbox{ and } \quad
\mathbb{P}(R_{t+1}=j|R_t=i)=&\rho^*(i,j)\triangleq \pi(i,j-1) \,. \nonumber
\end{align}
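As a quick Monte Carlo consistency check of \eqref{pi} (only for the case $i>0$), one can compare the empirical law of a sum of $i$ independent mean-one geometric variables with the stated kernel; the parameters below are arbitrary test values.
\begin{verbatim}
# Monte Carlo check of pi(i, j) = 2^{-i-j} (i+j-1)! / ((i-1)! j!) for i > 0:
# a step of the critical GW chain from state i is a sum of i independent
# mean-one geometric variables, and its law should match pi(i, .).
import random
from math import comb

random.seed(2)

def geom0():                           # mean-one geometric: P(X = j) = 2^{-(j+1)}
    x = 0
    while random.random() < 0.5:
        x += 1
    return x

def pi(i, j):
    return comb(i + j - 1, j) / 2 ** (i + j)

i, trials, cap = 5, 200_000, 30
freq = [0] * (cap + 1)
for _ in range(trials):
    freq[min(sum(geom0() for _ in range(i)), cap)] += 1
for j in range(13):
    print(j, round(freq[j] / trials, 4), round(pi(i, j), 4))
\end{verbatim}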
Let $k\geq 0$ and $x$ be fixed integers. When $x\geq 1$, define the following three processes:
\begin{enumerate}
\item $(Z_t^{(k)})_{t\geq 0}$, is a Markov chain with transition probability $\rho(i,j)$ and initial state $Z_0=k$.
\item $(Y_t^{(k)})_{t\geq -1}$, is a Markov chain with transition probabilities $\pi(i,j)$ and initial state $Y_{-1}=k$.
\item $(Y_t^{\prime(k)})_{t\geq 0}$, is a Markov chain with transition probabilities $\pi(i,j)$ and initial state $Y_{0}^{\prime (k)}=Z_{x-1}^{(k)}$.
\end{enumerate}
The three processes are independent, except for the fact that $Y_t^{\prime (k)}$ starts from the terminal state of $Z_t^{(k)}$.
We patch the three processes together to a single process:
$$
\Delta_{x}^{(k)}(y) \triangleq \begin{cases}
Z_{x-1-y}^{(k)}, & \text{if } 0\leq y\leq x-1,
\\
Y_{y-x}^{(k)}, &\text{if } x-1\leq y\leq \infty,
\\
Y_{-y}^{\prime(k)}, &\text{if } -\infty< y\leq 0.
\end{cases}
$$
We also define
\begin{align}\label{Lambda}
\Lambda_{x}^{(k)} (y) \triangleq \Delta_{x}^{(k)} (y)+\Delta_{x}^{(k)}(y-1)+\mathds{1}_{\{0<y\leq x \}}\,.
\end{align}
From the Ray-Knight Theorems on local time of simple random walks on $\mathbb{Z}$ (c.f. \cite[Theorem 1.1]{knight1963random}), it follows that for any integers $x\geq 1$ and $k\geq 0$,
\begin{align}\label{RKdcr}
(D(T_U(k+1,x),y),~y\in \mathbb{Z} )\stackrel{law}{=} (\Delta_{x}^{(k)}(y),~y\in \mathbb{Z}).
\end{align}
Using \eqref{L}, \eqref{Lambda} and \eqref{RKdcr}, we get
\begin{align}\label{RKrep}
(L(T_U(k+1,x),y),~y\in \mathbb{Z})\stackrel{law}{=} (\Lambda_{x}^{(k)}(y),~ y\in \mathbb{Z}).
\end{align}
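The representation \eqref{RKdcr} can be probed numerically as well. The Python sketch below compares, for small $x$ and $k$, the empirical means of the downcrossing counts of a simulated walk stopped at $T_U(k+1,x)$ with the empirical means of the patched process $\Delta^{(k)}_x$ at a few sites; the parameters are modest, and the number of walk trials is kept small because the stopping time has a heavy-tailed law.
\begin{verbatim}
# Monte Carlo comparison of the two sides of (RKdcr) at a few sites y.
import random

random.seed(3)

def geom0():
    x = 0
    while random.random() < 0.5:
        x += 1
    return x

x, k = 3, 2
ys = [-2, 0, 1, 2, 3, 5]

# Left side: downcrossing counts D(T_U(k+1, x), y) of a simple random walk.
walk_trials = 200            # kept small: T_U(k+1, x) is heavy-tailed, so this
walk_mean = {y: 0.0 for y in ys}  # part of the run may take a few seconds
for _ in range(walk_trials):
    S, up, down = 0, 0, {}
    while up < k + 1:
        step = random.choice((-1, 1))
        S += step
        if step == 1 and S == x:
            up += 1
        elif step == -1:
            down[S] = down.get(S, 0) + 1
    for y in ys:
        walk_mean[y] += down.get(y, 0) / walk_trials

# Right side: the patched process Delta_x^(k)(y) built from Z, Y and Y'.
chain_trials = 5000
delta_mean = {y: 0.0 for y in ys}
depth = max(max(ys) - x + 2, -min(ys) + 1)
for _ in range(chain_trials):
    Z = [k]                                          # Z_0 = k
    for _ in range(x - 1):
        Z.append(sum(geom0() for _ in range(Z[-1] + 1)))
    Y = [k]                                          # Y[0] plays the role of Y_{-1} = k
    Yp = [Z[-1]]                                     # Y'_0 = Z_{x-1}
    for _ in range(depth):
        Y.append(sum(geom0() for _ in range(Y[-1])))
        Yp.append(sum(geom0() for _ in range(Yp[-1])))
    for y in ys:
        if 0 <= y <= x - 1:
            val = Z[x - 1 - y]
        elif y >= x - 1:
            val = Y[y - x + 1]                       # = Y_{y-x}
        else:
            val = Yp[-y]                             # = Y'_{-y}
        delta_mean[y] += val / chain_trials

for y in ys:
    print(y, round(walk_mean[y], 2), round(delta_mean[y], 2))
\end{verbatim}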
Similarly, when $x\leq 0$, we define the processes
\begin{enumerate}
\item $(R_t^{(k)})_{t\geq -1}$, is a Markov chain with transition probability $\rho^*(i,j)$ and initial state $R_{-1}=k$.
\item $(Y_t^{(k)})_{t\geq 0}$, is a Markov chain with transition probabilities $\pi(i,j)$ and initial state $Y_{0}=k$.
\item $(Y_t^{\prime(k)})_{t\geq -1}$, is a Markov chain with transition probabilities $\pi(i,j)$ and initial state $Y_{-1}^{\prime (k)}=R_{-1-x}^{(k)}$.
\end{enumerate}
In this case, we patch the three processes together by
\begin{align*}
\Delta_{x}^{(k)}(y) \triangleq \begin{cases}
Y^{\prime(k)}_{y} , & \text{if } -1\leq y<\infty ,
\\
R_{y-x}, &\text{if } x-1\leq y\leq -1,
\\
Y_{x-1-y}^{(k)}, &\text{if } -\infty<y\leq x-1.
\end{cases}
\end{align*}
The corresponding $\Lambda_{x}^{(k)}$ is defined by
$$\Lambda_{x}^{(k)} (y) \triangleq \Delta_{x}^{(k)} (y)+\Delta_{x}^{(k)}(y-1)-\mathds{1}_{\{x<y\leq 0\}}\,.$$ By classical Ray-Knight Theorems, we get the couplings for the case $k\geq 0$, $x\leq 0$:
\begin{align}
(D(T_U(k+1,x),y),~y\in \mathbb{Z} )\stackrel{law}{=}& (\Delta_{x}^{(k)}(y),~y\in \mathbb{Z}), \label{RKdcr2} \\
(L(T_U(k+1,x),y),~y\in \mathbb{Z} )\stackrel{law}{=}& (\Lambda_{x}^{(k)}(y),~ y\in \mathbb{Z}). \label{RKrep2}
\end{align}
In this paper, we will mainly use the Ray-Knight representations \eqref{RKdcr} and \eqref{RKrep}, while \eqref{RKdcr2} and \eqref{RKrep2} will be used in the calculation of $\mathbb{E} N_H^2$. In the following, we assume by default that $x>0$ unless mentioned otherwise.
\subsection{Three favorite sites under Ray-Knight representation}
To utilize $(\ref{RKrep})$, given the additive processes $Y_t^{(k)}$, $Z_t^{(k)}$ and $Y_t^{\prime (k)}$, we define
$$
\tilde{Z}_t^{(k)} \triangleq Z_t^{(k)}+Z_{t-1}^{(k)}+1, \qquad \tilde{Y}_t^{(k)} \triangleq Y_t^{(k)}+Y_{t-1}^{(k)},
\qquad \tilde{Y}^{\prime (k)}_t\triangleq Y^{\prime (k)}_t+Y^{\prime (k)}_{t-1}.
$$
For $h\in \mathbb{Z}_+$, define the first hitting time of $[h,\infty)$ for $Y_t^{(k)}$ and $Z_t^{(k)}$ to be $\sigma^{(k)}_h$ and $\tau^{(k)}_h$ respectively and the extinction time of $Y_t^{(k)}$ to be $\omega^{(k)}$. That is,
\begin{equation}\label{stoppingtime}
\begin{split}
\sigma_h^{(k)} \triangleq& \inf\{ t\geq 0:~ Y_t^{(k)}\geq h\},
\quad \tau_h^{(k)} \triangleq \inf\{t\geq 0:~ Z_t^{(k)}\geq h\}, \\
\quad & \mbox{ and } \omega^{(k)}=\inf\{t\geq 0:~ Y_t^{(k)} =0\}.
\end{split}
\end{equation}
Correspondingly, we define the first hitting time of $[h,\infty)$ for the process $\tilde{Y}_t^{(k)}$ and $\tilde{Z}_t^{(k)}$ to be
$\tilde{\sigma}_{h}^{(k)}$ and $\tilde{\tau}_{h}^{(k)}$ respectively. Namely,
\begin{align*}
\tilde{\sigma}_h^{(k)} \triangleq& \inf\{ t\geq 0:~ \tilde{Y}_t^{(k)}\geq h\}, \qquad \tilde{\tau}_h^{(k)} \triangleq \inf\{t\geq 0:~ \tilde{Z}_t^{(k)}\geq h\} \,.
\end{align*}
Using the notation above, we can write $\mathbb{P}(A_{x,h}^{(k)})$ in its Ray-Knight representation form. That is, $\mathbb{P}(A_{x,h}^{(k)})$ is equal to
\begin{align*}
& \mathbb{P}\big(Y_0^{(k)}=h-k-1, Y_1^{(k)}= k+1,~ Y_2^{(k)}=h-k-1, \{\tilde{Y}_t^{(k)}<h, \mbox{ for }t\geq 3\}, \\
&\qquad \{\tilde{Z}_t^{(k)}<h, \mbox{ for }1\leq t\leq x-1\},~\{\tilde{Y}_t^{\prime(k)}<h, \mbox{ for }t\geq 1\} \big) \,.
\end{align*}
For all the notations above, when the initial state of a process is obvious, we omit the superscript ``$(k)$'' to avoid
cumbersome notations. We will also use conditional probability $\mathbb{P}(\cdot~|~Y_0=k)$ to indicate the initial state.
\subsection{Standard lemmas}
In this subsection we record a few well-understood lemmas that will be useful later.
\begin{lemma}\cite[(6.14) -- (6.15)]{toth2001no}\label{overshooting}
For any $0\leq k\leq h\leq u$ the following overshoot bounds hold:
\begin{align*}
\mathbb{P}\left( Y_{\sigma_h} \geq u \big|~ Y_0=k,~\sigma_h<\infty \right) \leq& \mathbb{P}(Y_1\geq u|~Y_0=h,~ Y_1\geq h)\,, \\
\mathbb{P}\left( Z_{\tau_h} \geq u \big|~ Z_0=k \right) \leq &\mathbb{P}( Z_1\geq u |~Z_0=h,~Z_1\geq h)\,.
\end{align*}
\end{lemma}
\begin{lemma}\label{tranprob}
We have that
\begin{enumerate}[(i)]
\item For $i,j\in \left(\frac{1}{2}(h-10\sqrt{h}),~\frac{1}{2}(h+10\sqrt{h}) \right)$, there exist positive constants $c$ and $C$ such that $c~h^{-\frac{1}{2}}\leq \pi(i,j)\leq C~h^{-\frac{1}{2}}$ for all $h\geq 1$.
\item For $i+j=h$, $\pi(i,j)\leq O(1) ~ h^{-\frac{1}{2}}$.
\item For $j<i_1<i_2$, $\pi(i_1,j)>\pi(i_2,j)$.
\end{enumerate}
\end{lemma}
\begin{proof}
Properties (i) and (ii) follow from straightforward computation using Stirling's formula and \eqref{pi}. For Property
(iii), we see that
$
\frac{\pi(i+1,j)}{\pi(i,j)} = \frac{i+j}{2i}<1
$ for $j<i$, and (iii) follows from induction.
\end{proof}
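The three properties in Lemma \ref{tranprob} can also be inspected numerically from the explicit formula \eqref{pi}; in the sketch below the values scaled by $\sqrt{h}$ should stay of order one as $h$ grows, illustrating the $h^{-1/2}$ behavior (the tested values of $h$ and the ranges are arbitrary).
\begin{verbatim}
# Numerical illustration of the three properties of the lemma above.
from math import comb

def pi(i, j):
    return comb(i + j - 1, j) / 2 ** (i + j) if i > 0 else float(j == 0)

# (i)-(ii): the values scaled by sqrt(h) should stay of order one as h grows.
for h in (100, 400, 1600):
    i0, r = h // 2, int(h ** 0.5)
    print(h,
          round(pi(i0, i0) * h ** 0.5, 4),             # centre of the window in (i)
          round(pi(i0 + r, i0 - r) * h ** 0.5, 4),     # off-centre point in the window
          round(max(pi(i, h - i) for i in range(1, h)) * h ** 0.5, 4))  # (ii): i + j = h

# (iii): pi(i, j) is decreasing in i once i > j (checked on a small range).
assert all(pi(i, j) > pi(i + 1, j) for j in range(30) for i in range(j + 1, 80))
print("monotonicity (iii) holds on the tested range")
\end{verbatim}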
\begin{lemma}\label{Etau}
We have that $\mathbb{E} \tau_h =\mathbb{E} Z_{\tau_h}-Z_0 $. In particular, we have that $\mathbb{E}\left[\tau_h | Z_0=k \right] \geq h-k$.
\end{lemma}
\begin{proof}
Applying the Optional Stopping Theorem to the martingale $Z_t-t$ at time $\tau_h$, we get
$
\mathbb{E}\tau_h =\mathbb{E} Z_{\tau_h} -Z_0 \geq h-k
$, as desired.
\end{proof}
\section{Proof of Theorem \ref{mainthm}} \label{sec:main}
The current section contains three parts: in Subsection~\ref{sec:firstmoment} we adapt the arguments in \cite{toth2001no} and provide a lower bound on the first moment for the number of instances for the consecutive three favorite sites; in Subsection~\ref{sec:secondmoment} (which contains the main novelty of the present paper), we show that the second moment is of the same order as the square of the first moment, thereby proving that three favorite sites occurs with non-vanishing probability; in Subsection~\ref{sec:01law} we prove a 0-1 law for three favorite sites and thus complete the proof of Theorem~\ref{mainthm}.
\subsection{Lower bound on the first moment}\label{sec:firstmoment}
For $x>0$ and $h\in \mathbb N$, in order to bound the probability for three consecutive favorite sites with local time $h$ at vertices $x$, $x+1$ and $x+2$, the main part is to control the probability for the local times below $h$ everywhere except at $x$, $x+1$ and $x+2$. To this end, it suffices to consider the edge local times (i.e., number of downcrossings) in the Ray-Knight representation with appropriate conditioning in the region of $(x, x+2)$. Then in the region outside of $(0, x+2)$, these edge local times evolve as martingales (when looking forward spatially in $(x+2, \infty)$ and backward spatially in $(-\infty, 0)$) and it is fairly standard to control the probability of staying below the level $h$; in the region $(0, x)$, the edge local times are not exactly a martingale (when looking backward spatially; see \eqref{eq-Z}) and the analysis is slightly more complicated. In the next lemma,
we prove a lower bound on the first moment of $\sum_{t=1}^{\tau_h}\tfrac{h-Z_t}{h}$. Combined with standard martingale analysis in the region outside of $(0, x+2)$ and a change of summation when summing over $x$ (see \eqref{prop32eq3}), this will then give a lower bound on the first moment of $N_H$ (see Proposition~\ref{ENH}).
\begin{lemma}\label{Efrac}
Suppose that $Z_0=k\in [h-2\sqrt{h},h-\sqrt{h}] $. Then there exists a constant $c>0$ such that $\mathbb{E}(\mbox{$ \sum_{t=1}^{\tau_h} $}\tfrac{h-Z_t}{h} ) \geq c\sqrt{h}$.
\end{lemma}
\begin{proof}
Let $M_t = \sum_{s=1}^t (Z_s-s)-t(Z_t-t)$, and let $\mathcal F_t = \sigma(Z_0, Z_1, \ldots, Z_t)$. We see that
$$\mbox{$\mathbb{E}(M_{t+1} \mid \mathcal F_t) = \left[ \sum_{s=1}^t (Z_s-s)+ (Z_t-t) \right]-(t+1)(Z_t-t) = M_t$}\,.$$
Thus $(M_t)$ is a martingale. By the Optional Stopping Theorem, we see that $\mathbb{E} \left(\sum_{t=1}^{\tau_h}(Z_t-t)\right) = \mathbb{E} \tau_h(Z_{\tau_h}-\tau_h)$
and hence
\begin{align}\label{Efraceq1}
\mathbb{E}\mbox{$\left( \sum_{t=1}^{\tau_h}\tfrac{h-Z_t}{h} \right) = (1+\frac{1}{2h})\mathbb{E} \tau_h-\frac{1}{h}\mathbb{E}[ \tau_h Z_{\tau_h}-\frac{1}{2} \tau_h^2 ] $}.
\end{align}
Now consider the process $M'_t=-\frac{1}{4}Z_t^2 +tZ_t-\frac{1}{2}t^2+\frac{1}{4} t$. By \eqref{eq-Z}, we see that
$$\mathbb{E}(M'_{t+1}\mid \mathcal F_t) = -\tfrac{1}{4}(Z_t^2+4Z_t+3)+(tZ_t+Z_t+t+1)-\tfrac{1}{2}(t^2+2t+1)+\tfrac{1}{4}(t+1)\,,$$
which is equal to $M'_t$.
So $(M'_t)$ is a martingale.
Applying the Optional Stopping Theorem to $(M'_t)$ at time $\tau_h$, we have
\begin{align}\label{Efraceq2}
\mbox{$\mathbb{E} \left[ \tau_h Z_{\tau_h} -\frac{1}{2}\tau_h^2 \right] =\mathbb{E}\left[ \frac{1}{4}Z_{\tau_h}^2-\frac{1}{4} \tau_h \right]
-\frac{1}{4} Z_0^2 = \frac{1}{4} \mathbb{E} (Z_{\tau_h}^2-Z_0^2) -\frac{1}{4} \mathbb{E} \tau_h$}\,.
\end{align}
Combining \eqref{Efraceq1}, \eqref{Efraceq2} and Lemma \ref{Etau}, we get
\begin{align*}
\mbox{$\mathbb{E}\left[ \sum_{t=1}^{\tau_h} \frac{h-Z_t}{h} \right]$} =&(1+\frac{1}{4h}) \mathbb{E} \tau_h -\frac{1}{4h} \mathbb{E}\left[ Z_{\tau_h}^2 -Z_0^2 \right] \\
= &(1+\frac{1}{4h}) \mathbb{E}( Z_{\tau_h}-Z_0 )-\frac{1}{4h}\mathbb{E}[ (Z_{\tau_h}-Z_0)(Z_{\tau_h}+Z_0) ] \\
\geq & \frac{1}{4h}\mathbb{E}[ (Z_{\tau_h}-Z_0)(4h-(Z_{\tau_h}+Z_0)) ]\,.
\end{align*}
Obviously $Z_{\tau_h}-Z_0\geq h-k \geq \sqrt{h}$ and by Lemma \ref{overshooting} we have that $\mathbb{E}(Z_{\tau_h}- Z_0)(Z_{\tau_h}+Z_0 - 2h) = O(h)$. Therefore there is a constant $c$ such that $\mathbb{E}\left[ \sum_{t=1}^{\tau_h} \frac{h-Z_t}{h} \right]\geq c\sqrt{h}$ for sufficiently large $h$.
\end{proof}
\begin{proposition}\label{ENH}
For a constant $c>0$ we have $\mathbb{E} N_H\geq c \log H$.
\end{proposition}
\begin{proof}
In what follows, $c_i$ for $i\geq 1$ and $c$ are all constants. By the Ray-Knight representation, $\mathbb{E} N_H$ is equal to the following product:
\begin{align*}
&\sum_{h=1}^H \sum_{k\in I_h} \mathbb{P}\Big( Y_0^{(k)}=h-k-1, ~Y_1^{(k)}= k+1,~ Y_2^{(k)}=h-k-1,\\
&\qquad \qquad \qquad \{\tilde{Y}_t^{(k)}<h, ~\mbox{ for }t\geq 3\} \Big) \\
&\quad \times \sum_{x=1}^\infty \mathbb{P}\big( \{\tilde{Z}_t^{(k)}<h,~1\leq t\leq x-1\},~\{\tilde{Y}_t^{\prime(k)}<h,~\mbox{ for }t\geq 1\} \big) \,.
\end{align*}
Thus, we get that
\begin{align*}
\mathbb{E} N_H \geq &\sum_{h=1}^H \sum_{k\in I_h} \pi(k,h-k-1)\pi(h-k-1,k+1) \pi(k+1,h-k-1)~ \\
& \cdot \mathbb{P}(Y_t^{(h-k-1)} < \tfrac{1}{2} h \text{ for $t\geq 0$} )
\cdot \sum_{x=1}^\infty \mathbb{P}( \tilde{\tau}_h \geq x, \{\tilde{Y}_t^{\prime(k)}<h, \mbox{ for }t\geq 1\} )\,.
\end{align*}
By Lemma \ref{tranprob} (i), all $\pi(\cdot,\cdot)$ in the above equation are at the scale $h^{-\frac{1}{2}}$. Since $Y_t$ is a martingale, by using the Optional Stopping Theorem at $\sigma_{\frac{h}{2}}\wedge \omega$ where $\sigma_{\frac{h}{2}}$ and $\omega$ are defined in \eqref{stoppingtime}, we have
\begin{align*}
\mathbb{P}(Y_t^{(h-k-1)} < \tfrac{h}{2} \text{ for $t\geq 0$} ) &= \mathbb{P}( Y_t^{(h-k-1)} \text{ hits 0 before } \tfrac{h}{2} ) \\
&\geq \tfrac{h/2-(h-k-1)}{h/2}\geq c_1 h^{-\frac{1}{2}} \,.
\end{align*}
So we get
\begin{align}\label{prop32eq1}
\mathbb{E} N_H \geq c_2 \sum_{h=1}^H \sum_{k\in I_h} \sum_{x=1}^\infty h^{-2} \mathbb{P}\left( \tilde{\tau}_h \geq x, ~\{\tilde{Y}_t^{\prime(k)}<h,~t\geq 1\} \right) \,.
\end{align}
Let $k_1=\frac{1}{2} (h-2\sqrt{h})$. By independence in the Ray-Knight representation,
\begin{align*}
& \sum_{x=1}^\infty \mathbb{P}( \tilde{\tau}_h \geq x, ~\{\tilde{Y}_t^{\prime(k)}<h,~\mbox{ for }t\geq 1\} ) \\
\geq & \sum_{x=1}^\infty \mathbb{P}( Z_1^{(k)}\leq k_1 , ~Z_t^{(k)}<\frac{h}{2} \text{ for } 2\leq t \leq x-1 ,~
~\{Y_t^{\prime(k)}<\tfrac{h}{2},~ \mbox{ for }t\geq 1\} ) \\
\geq & \sum_{x=2}^\infty \sum_{l=0}^{[\frac{h}{2}-1]} \Big(\mathbb{P}(Z_1^{(k)} \leq k_1 )\cdot \mathbb{P}( ~Z_t^{(k_1)}<\tfrac{h}{2} \text{ for } 1\leq t \leq x-2 , ~ Z_{x-2}^{(k_1)}=l) \\
& \qquad \qquad \times \mathbb{P}( Y_t^{ (l)} \text{ hits 0 before } \tfrac{h}{2})\Big) \,.
\end{align*}
By Lemma \ref{tranprob} (i), $ \mathbb{P}(Z_1^{(k)} \leq k_1 )\geq c_3$. Using the Optional Stopping Theorem again, we have $\mathbb{P}\left( Y_t^{(l)} \text{ hits 0 before } \frac{h}{2} \right)\geq \frac{h/2-l}{h/2}$. So
\begin{align}\label{prop32eq2}
& \sum_{x=1}^\infty \mathbb{P}\left( \tilde{\tau}_h \geq x, ~\{\tilde{Y}_t^{\prime(k)}<h,~t\geq 1\} \right) \nonumber \\
\geq & c_3 \cdot \sum_{x=1}^\infty \sum_{l=0}^{[\frac{h}{2}-1]}\mathbb{P}\left( \tau_{h/2}^{(k_1)}\geq x , ~ Z_{x-1}^{(k_1)} =l\right)\cdot \frac{h/2-l}{h/2} \,.
\end{align}
By interchange of the summation and the expectation (which is valid by the Monotone Convergence Theorem) and Lemma \ref{Efrac}, we have that the right hand side of \eqref{prop32eq2} is equal to
\begin{align}\label{prop32eq3}
& c_3\cdot \mathbb{E}\Big[ \sum_{l=0}^{[\frac{h}{2}-1]} \sum_{x=1}^{\tau_{h/2}^{(k_1)}} \frac{h/2-l}{h/2} \cdot \mathds{1}_{\big\{ Z_{x-1}^{(k_1)} =l \big\}} \Big]
= c_3 \mathbb{E}\Big( \sum_{t=0}^{\tau_{h/2}^{(k_1)}-1} \frac{h/2- Z_{t}^{(k_1)}}{h/2} \Big) \geq c_4 \sqrt{h} \,,
\end{align}
where the equality follows from the change of variable $t=x-1$. Thus by \eqref{prop32eq1} and \eqref{prop32eq3},
\begin{align*}
\mathbb{E} N_H\geq \sum_{h=1}^H \sum_{k\in I_h} & c_5~ h^{-\frac{3}{2}} \geq c_6 \cdot \sum_{h=1}^H \frac{1}{h} \geq c \log H \,,
\end{align*}
completing the proof of the proposition.
\end{proof}
\subsection{Upper bound on the second moment} \label{sec:secondmoment}
The calculation of the second moment involves two instances of three favorite sites that happen in chronological order. The key insight is that two instances of three favorite sites with no spatial overlap are almost independent. Before giving the bound for the second moment, we discuss some useful concepts and tools that characterize the independence of different instances of three favorite sites.
Let $D(t)=(D(t,x),x\in \mathbb{Z}) \in \mathbb{N}^{\mathbb{Z}}$ be the random vector that records the number of downcrossings of each site by the time $t$.
For $\ell\in \mathbb{N}^\mathbb{Z}$, we use $\ell(i)$, $i\in \mathbb{Z}$ to denote the $i$-th component of $\ell$.
For $\ell\in \mathbb{N}^\mathbb{Z}$, define $B_x(\ell)=\{\exists t<\infty:~ D(t)=\ell,~ S(t-1)=x-1,~S(t)=x \}$. Note that if $B_x(\ell)$ happens, there exists a unique $t \in \mathbb{N}$ such that $D(t)=\ell,~ S(t-1)=x-1$ and $S(t)=x$. Sometimes we abuse the terminology ``after $B_x(\ell)$ happens'' by meaning ``after the unique $t$ with $D(t)=\ell,~ S(t-1)=x-1,~S(t)=x$''. We also say ``$B_x(\ell)$ happens before $B_{x'}(\ell')$'' by meaning the unique $t$ (corresponding to $B_x(\ell)$) is less than the unique $t'$ (corresponding to $B_{x'}(\ell')$).
Let $\mathcal P =\{ \ell: \mathbb{P}(B_x(\ell))>0 \text{ for some $x$} \}$. Clearly for any $\ell \in \mathcal P$, $\ell$ has compact support.
For $\mathcal Q\subset \mathcal P$, denote $B_x(\mathcal Q)=\bigcup_{\ell\in \mathcal Q}B_x(\ell)$. Then we have
$A_{x,h}^{(k)}=B_x (\mathcal P_{x,h}^{(k)})$ where $\mathcal P_{x,h}^{(k)}$ is the collection of $\ell\in \mathcal P$ such that
\begin{align*}
&\ell(x-1)=k, ~\ell(x)=h-k-1,~ \ell(x+1)=k+1,~\ell(x+2)=h-k-1\,; \\
\qquad&\ell(i-1)+\ell(i)<h \mbox{ for all }i\neq x,x+1,x+2 \,.
\end{align*}
Our main intuition on bounding the correlation between two instances of three favorite sites is the following: Suppose at some time (say $T_1$) we have an instance of three favorite points at $x, x+1, x+2$ with edge local time (i.e., downcrossings) given by $\ell$. Our crucial observation is that conditioning on $B_x(\ell)$ does not increase much of the probability for producing an instance of three favorite sites in a future time (say $T_2$) which are spatially different from those of $\ell$. To this end, we let $\ell'$ be one of \emph{many} local perturbations of $\ell$ (which are obtained from $\ell$ by decreasing the values at $x+1$ and $x+2$). We note that (see Figure \ref{perturb} for an illustration)
\begin{itemize}
\item The event $B_x(\ell)$ (respectively, $B_x(\ell')$) corresponds to the event that the edge local time is $\ell$ (respectively, $\ell'$) when the random walk crosses the directed edge $(x-1, x)$ for the $(\ell(x-1)+1)$'th time (note that $\ell(x-1)= \ell'(x-1)$; and note that this corresponds to time $T_1$ in Figure \ref{perturb}). Conditioned on $B_x(\ell)$ (respectively, $B_x(\ell')$), the edge local time at a later time (which corresponds to $T_2$ in Figure \ref{perturb}) is $\ell$ (respectively, $\ell'$) superposed with an independent edge local time field which we denote by $\tilde \ell$. By the strong Markov property for random walks, the law of $\tilde \ell$ is the same regardless of conditioning on $B_x(\ell)$ or $B_x(\ell')$.
\item If the field $(\ell + \tilde \ell)$ produces three favorite sites which are spatially different from those of $\ell$, then the field $(\ell' + \tilde \ell)$ also produces three favorite sites.
\end{itemize}
\begin{figure}[H]
\centering
\includegraphics[width=12cm]{perturb.png}\\
\caption{The black bars represent vertex local times at $T_1$ and the grey bars represent ones at $T_2$. When we decrease the edge local times at $x+1$ and $x+2$, descent of vertex local times happens at $x+1,x+2$ and $x+3$. After the local time perturbation at time $T_1$, we will still get ``three favorite sites'' at $T_2$. }\label{perturb}
\end{figure}
In summary, we see that the conditional probability of producing an instance of three favorite sites which are spatially different from those of $\ell$ given $B_x(\ell)$ is the same as the conditional probability given $B_x(\ell')$. But the probability of the union of the $B_x(\ell')$'s, as $\ell'$ ranges over all legitimate perturbations, is much larger than that of $B_x(\ell)$ --- in fact larger by a factor of order $h = \ell(x-1) + \ell(x) + 1$ (see Lemma~\ref{fluc} below). This is a (quantitative) manifestation that the event $B_x(\ell)$ is uncorrelated with a spatially different instance of three favorite sites in the future.
Our formal proof does not exactly follow the discussion above on controlling the conditional probability, as it turns out slightly simpler to directly compute the joint probability for two instances of three favorite sites (but the intuition is the same). For the precise implementation, we let $\mathscr{A}$ be the set of all subsets of $\mathcal P$ and define a map $\varphi_x: \mathcal P \mapsto \mathscr{A}$ mapping an $\ell\in \mathcal P$ to a collection of vectors where we locally push down the values at locations $x+1$ and $x+2$. More precisely, we define $\varphi_x(\ell) $ to be
\begin{align*}
\{\ell^* \in \mathcal P:~ \ell^*(i) < \ell(i) \text{ for $i=x+1,x+2$, } \ell^*(i)=\ell(i) \text{ for $i\neq x+1,x+2$} \} .
\end{align*}
\begin{lemma}\label{disjoint}
For $i=1, 2$ and $\ell^*_i \in \varphi_{x_i}(\ell_i)$ with $\ell_i \in \mathcal P_{x_i, h}^{(k_i)}$, we have that $B_{x_1}(\ell_1^*) \cap B_{x_2}(\ell_2^*)=\emptyset$ if $(x_1, \ell_1) \neq (x_2, \ell_2)$. Further, we have $B_{x_1}(\ell_1^*) \cap B_{x_2}(\ell_2^*)=\emptyset$ if $(x_1, \ell_1) = (x_2, \ell_2)$ but $\ell_1^* \neq \ell_2^*$.
\end{lemma}
\begin{proof}
Case (i): Suppose $x_1 \neq x_2$. Since clearly $B_{x_1}(\ell^*_1)$ and $B_{x_2}(\ell^*_2)$ cannot happen at the same time $t$, we can then assume without loss of generality that $B_{x_1}(\ell^*_1)$ happens first. Then when $B_{x_2}(\ell^*_2)$ happens the vertex local time at $x_1$ is at least $h$, arriving at a contradiction. \\
Case (ii): Suppose that $x_1 = x_2$ but $\ell_1 \neq \ell_2$. In this case, we have $\ell^*_1\neq \ell^*_2$. Since clearly $B_{x_1}(\ell^*_1)$ and $B_{x_2}(\ell^*_2)$ cannot happen at the same time $t$, we can then assume without loss of generality that $B_{x_1}(\ell^*_1)$ happens first. In order for $B_{x_2}(\ell^*_2)$ to happen, the random walk has to leave $x_1 (=x_2)$ and revisit $x_1$. As a result, the vertex local time at $x_1$ will be strictly larger than $h$, arriving at a contradiction.\\
Case (iii): Suppose that $x_1 = x_2, \ell_1 = \ell_2$ but $\ell_1^* \neq \ell_2^*$. This follows from the same reasoning as in Case (ii).
\end{proof}
\begin{lemma}\label{fluc}
There exists a constant $c>0$ such that for any $\ell \in \mathcal P_{x,h}^{(k)}$ with $k\in I_h$,
\begin{align*}
\mathbb{P}\left( B_x(\varphi_x(\ell)) \right) \geq ch \mathbb{P}(B_x(\ell)) \,.
\end{align*}
\end{lemma}
\begin{proof}
We consider $\ell^*\in \varphi_x(\ell)$ such that $\ell^* (x+1)\in [k+1-\sqrt{h}, k+1)$ and $\ell^*(x+2)\in [h-k-1-\sqrt{h}, h-k-1)$.
According to Lemma \ref{tranprob} (i) and (iii), there is a constant $c>0$ such that
\begin{align*}
\frac{\mathbb{P}(B_x(\ell^*))}{\mathbb{P}(B_x(\ell))}&= \frac{\pi(\ell^*(x), \ell^*(x+1) ) \pi(\ell^*(x+1),\ell^*(x+2)) \pi(\ell^*(x+2), \ell(x+3))}{\pi(h-k-1, k+1) \pi(k+1,h-k-1)\pi(h-k-1, \ell(x+3))}\\
&\geq c.
\end{align*}
Note that there are about $h$ of such $\ell^*\in \varphi_x(\ell)$ that satisfy the inequality. By Lemma \ref{disjoint}, we get that $\mathbb{P}(B_x(\varphi_x(\ell))) \geq c h \mathbb{P}(B_x(\ell))$.
\end{proof}
\begin{proposition}\label{ENH2}
We have that
$
\mathbb{E} N_H^2 = O(\log H) \cdot \mathbb{E} N_H $.
\end{proposition}
\begin{proof}
We decompose the second moment into the following three parts:
\begin{align}\label{NH2ini}
\mathbb{E} N_H^2 & = 2 \sum_{1\leq h<h'\leq H} \sum_{k\in I_h} \sum_{k'\in I_{h'}} \sum_{x=1}^\infty \sum_{x'=1}^\infty \mathbb{P}\left(A_{x,h}^{(k)}, ~A_{x',h'}^{(k')} \right)+ \mathbb{E} N_H \nonumber \\
& \leq O(1)\cdot (\text{I}+\text{II}+\mathbb{E} N_H)\,,
\end{align}
where
\begin{align*}
\text{I}=& \sum_{1\leq h<h'\leq H} \sum_{k'\in I_{h'}} \sum_{k\in I_h} \sum_{|x'-x|> 3} \mathbb{P}\left(A_{x,h}^{(k)}, ~A_{x',h'}^{(k')} \right) \,,\\
\text{II}= &\sum_{1\leq h<h'\leq H} \sum_{k\in I_h} \sum_{k'\in I_{h'}} \sum_{|x'-x|\leq 3} \mathbb{P}\left(A_{x,h}^{(k)}, ~A_{x',h'}^{(k')} \right) \,.
\end{align*}
First we estimate $\text{I}$. By the Strong Markov Property,
\begin{align*}
\mathbb{P}(A_{x,h}^{(k)}, A_{x',h'}^{(k')}) = &
\sum_{\ell \in \mathcal P_{x,h}^{(k)}} \sum_{\ell' \in \mathcal P_{x',h'}^{(k')}} \mathbb{P}\left( B_x(\ell),B_{x'}(\ell') \right) \\
= &\sum_{\ell \in \mathcal P_{x,h}^{(k)}} \sum_{\tilde \ell:~\ell+\tilde \ell\in \mathcal P_{x',h'}^{(k')}}
\mathbb{P}^0 \left( B_x(\ell) \right) \cdot \mathbb{P}^x( B_{x'}(\tilde \ell ))\,,
\end{align*}
where the $x$ in $\mathbb{P}^x$ indicates the starting point of the random walk. For any $x'\in \mathbb{Z}_+$ and $k'\in I_{h'}$, using Lemma \ref{fluc}, we get
\begin{align*}
&\sum_{k \in I_h}\sum_{x:|x-x'|> 3} \mathbb{P}\left(A_{x,h}^{(k)}, ~A_{x',h'}^{(k')} \right) \\
=&
\sum_{k \in I_h}\sum_{x:|x-x'|> 3} \sum_{\ell \in \mathcal P_{x,h}^{(k)}~} \sum_{\tilde \ell:~\ell+\tilde \ell\in \mathcal P_{x',h'}^{(k')}} \mathbb{P}^0( B_x(\ell)) \cdot \mathbb{P}^x ( B_{x'}(\tilde \ell) ) \\
\leq & \sum_{k \in I_h}\sum_{x:|x-x'|> 3} \sum_{\ell\in \mathcal P_{x,h}^{(k)}~} \sum_{\tilde \ell: ~\ell+\tilde \ell\in \mathcal P_{x',h'}^{(k')}} O(1)h^{-1}~\mathbb{P}^0\left( B_x(\varphi_x(\ell)) \right)\cdot \mathbb{P}^x(B_{x'}(\tilde \ell)) \\
\leq & ~ O(1)h^{-1}~ \sum_{k \in I_h} \sum_{x:|x-x'|> 3} \sum_{\ell\in \mathcal P_{x,h}^{(k)}}
\sum_{\ell^* \in \varphi_x(\ell)~} \sum_{\tilde \ell:~\ell+\tilde \ell\in \mathcal P_{x',h'}^{(k')}} \mathbb{P}(B_x(\ell^*),~B_{x'}(\ell^*+\tilde \ell)) \,.
\end{align*}
The last inequality follows from Lemma \ref{disjoint} and Strong Markov Property. By Lemma \ref{disjoint}, all events $B_x(\ell^*)$ for $x\in \mathbb{N}$, $\ell^* \in \varphi_x(\ell)$, $k\in I_h$ and $\ell \in \mathcal P_{x, h}^{(k)}$ are disjoint. Note that $|x-x'|> 3$ and $\varphi_x$ only reduces the downcrossing number at $x+1$, $x+2$. So $\ell^*+\tilde \ell\in \mathcal P_{x',h'}^{(k')}$. Hence we have
\begin{align*}
\sum_{k \in I_h}\sum_{x:|x-x'|> 3} & \mathbb{P}\left(A_{x,h}^{(k)}, ~A_{x',h'}^{(k')} \right) \leq O(1) h^{-1} \mbox{$\sum_{\ell' \in \mathcal P_{x',h'}^{(k')}} $} \mathbb{P}\left( B_{x'}(\ell') \right)\,.
\end{align*}
As a result, we obtain that
\begin{align}\label{Iest}
\text{I} \leq & O(1)~\sum_{1\leq h<h'\leq H} h^{-1} \sum_{k'\in I_{h'}} \sum_{x'=1}^\infty \mathbb{P}\left( A_{x',h'}^{(k')} \right) \nonumber \\
\leq & O(1)~\left( \sum_{h=1}^H h^{-1}\right) \left( \sum_{h'=1}^H \sum_{k'\in I_{h'}} \sum_{x'=1}^\infty \mathbb{P}\left( A_{x',h'}^{(k')} \right) \right) = O(1) \log H \cdot \mathbb{E} N_H \,.
\end{align}
It remains to estimate $\text{II}$. In the case where the locations for favorite sites have overlap, we do have strong correlation between the two events. However, due to the overlap of locations for favorite sites the enumeration is hugely reduced. As a result the contribution to the second moment in this case can also be controlled, as we show in what follows.
Since $A_{x',h'}^{(k_1')}\cap A_{x',h'}^{(k_2')}=\emptyset$ for $k_1'\neq k_2'$, we have
\begin{align*}
\text{II} \leq &~ \sum_{x=1}^\infty \sum_{h=1}^H \sum_{k\in I_h} \sum_{h'=h+1}^H
7\cdot \sup_{x':~|x'-x|\leq 3} \mathbb{P}\left(A_{x,h}^{(k)}, ~ \{ \exists k' :~ A_{x',h'}^{(k')} \} \right)\,. \nonumber
\end{align*}
Note $\mathbb{P}\left(A_{x,h}^{(k)}, ~ \{ \exists k':~ A_{x',h'}^{(k')} \} \right)= \sum_{\ell \in \mathcal P_{x,h}^{(k)}} \mathbb{P}(B_x(\ell))\cdot \mathbb{P}\left( \exists k':~ A_{x',h'}^{(k')} \big|~B_x(\ell) \right)$. Conditioned on $B_x(\ell)$, in order for the event $ \left\{ \exists k':~ A_{x',h'}^{(k')} \right\}$ to occur, we must have:
\begin{enumerate}[(1)]
\item There exists a $k'\geq \ell(x')$ such that at some time $t$, $S(t-1)=x'-1$, $S(t)=x'$ and $D(t,x'-1)=k'$, $D(t,x')=h'-k'-1$ (if such $k'$ exists, it is unique). \label{condition1}
\item Once (1) happens, both $t$ and $k'$ are determined. The additional process after $B_x(\ell)$ needs to satisfy: $D(t,x'+1)-\ell(x'+1)=k'+1-\ell(x'+1)$ and $D(t,x'+2)-\ell(x'+2)=h'-k'-1-\ell(x'+2)$. \label{condition2}
\item $L(t,y) < h'$ for all $y\neq x',x'+1,x'+2$. \label{condition3}
\end{enumerate}
We omit the probability loss for \eqref{condition1} and \eqref{condition3} and only consider the probability for \eqref{condition2}.
Formally, define $T$ to be the time $t$ such that $S(t-1)=x'-1,~S(t)=x',~D(t,x'-1)+D(t,x')=h'-1$. Then, we have that $\mathbb{P}( \exists k':~ A_{x',h'}^{(k')} \big|~B_x(\ell) )$ is at most
\begin{align*}
\sum_{k'=\ell(x')}^{h'} \mathbb{P}\big( & T=T_U(k'+1,x'),~ D(T,x')=h'-k'-1, \\
& D(T,x'+1)=k'+1,~D(T,x'+2)=h'-k'-1 \big) \,.
\end{align*}
Using the Ray-Knight representation for the random walk started at $x$ after $B_x(\ell)$, we have that $\mathbb{P} \left( \exists k':~ A_{x',h'}^{(k')} \big|~B_x(\ell) \right)$ is at most
\begin{align*}
\sum_{k'=\ell(x')}^{h'} \mathbb{P}\big( &T=T_U(k'+1,x'),~ D(T,x')=h'-k'-1 \big) \\
&\times \pi^*( h'-k'-1-\ell(x'),k'+1-\ell(x'+1)) \\
&\times \pi^*(k'+1-\ell(x'+1),h'-k'-1-\ell(x'+2)) \,,
\end{align*}
where $\pi^*(\cdot,\cdot)$ is either $\pi(\cdot,\cdot)$ or $\rho^*(\cdot,\cdot)$ depending on the relative position of $x$ and $x'$ (see \eqref{RKdcr} and \eqref{RKdcr2}). Since both $(h'-k'-1-\ell(x'))+(k'+1-\ell(x'+1))$ and $(k'+1-\ell(x'+1))+(h'-k'-1-\ell(x'+2))$ are greater than or equal to $h'-h$, by Lemma \ref{tranprob} (ii) and the relation $\rho^*(i,j)=\pi(i,j-1)$, we see that
$$\pi^*( h'-k'-1-\ell(x'), k'+1-\ell(x'+1))\cdot \pi^*(k'+1-\ell(x'+1), h'-k'-1-\ell(x'+2))$$
is at most $\frac{O(1)}{h'-h}$ for any $\ell(x')\leq k'\leq h'$. Therefore,
\begin{align*}
&\mathbb{P} \left( \exists k':~ A_{x',h'}^{(k')} \big|~B_x(\ell) \right) \\
\leq & \sum_{k'=\ell(x')}^{h'} \mathbb{P}\big( T=T_U(k'+1,x'),~ D(T,x')=h'-k'-1 \big) \cdot
\frac{O(1)}{h'-h} \\
=& \mathbb{P}\left(\exists k': T=T_U(k'+1,x'),~ D(T,x')=h'-k'-1 \right) \cdot \frac{O(1)}{h'-h} \,,
\end{align*}
which is bounded by $\frac{O(1)}{h'-h}$. As a consequence, we get that
\begin{align*}
\mathbb{P}\left(A_{x,h}^{(k)}, ~ \{ \exists k':~ A_{x',h'}^{(k')} \} \right) &\leq \sum_{\ell \in \mathcal P_{x,h}^{(k)}} \mathbb{P}(B_x(\ell))\cdot \frac{O(1)}{h'-h} \\
& = \frac{O(1)}{h'-h} \cdot \mathbb{P}\left( A_{x,h}^{(k)} \right)
\end{align*}
and thus
\begin{align}\label{IIest}
\text{II} \leq & \sum_{x=1}^\infty \sum_{h=1}^H \sum_{k\in I_h} \sum_{h'=h+1}^H \frac{O(1)}{h'-h} \cdot \mathbb{P}\left( A_{x,h}^{(k)} \right) \\
\leq & O( \log H) \sum_{h=1}^H \sum_{k\in I_h} \sum_{x=1}^\infty \mathbb{P}(A_{x,h}^{(k)}) = O( \log H) \mathbb{E} N_H \,.
\end{align}
Combining \eqref{NH2ini}, \eqref{Iest} and \eqref{IIest}, we get that $\mathbb{E} N_H^2= O( \log H ) \mathbb{E} N_H$.
\end{proof}
We are now ready to show that $N = \infty$ with positive probability.
\begin{proposition}\label{secmoment}
There exists a constant $\delta>0$ such that $\mathbb{P}(N =\infty) \geq \delta $ where $N=\llim{H} N_H$.
\end{proposition}
\begin{proof}
By Cauchy-Schwarz inequality, we get that
\begin{align*}
\mathbb{E} N_H & =\mathbb{E} N_H \mathds{1}_{\{N_H> \log \log H \}} + \mathbb{E} N_H \mathds{1}_{\{N_H\leq \log\log H \}}\\
&\leq \sqrt{ \mathbb{E} N_H^2 \cdot \mathbb{P}(N_H>\log\log H) } + \log\log H\,.
\end{align*}
By Propositions \ref{ENH} and \ref{ENH2}, there exist constants $c, \delta>0$ such that
\begin{align*}
\mathbb{P}(N_H>\log\log H) \geq \frac{ (\mathbb{E} N_H- \log\log H )^2 }{ \mathbb{E} N_H^2 } \geq c \frac{ \mathbb{E} N_H \log H}{\mathbb{E} N_H^2} \geq \delta \,,
\end{align*}
for all sufficiently large $H$. Sending $H\rightarrow \infty$, we get that $\mathbb{P}(N=\infty)\geq \delta$.
\end{proof}
\subsection{0-1 Law} \label{sec:01law}
In this section, building on Proposition~\ref{secmoment} we show that $N = \infty$ occurs with probability 1. There are a few possible approaches, and here we choose to prove a 0-1 law taking advantage of the result on the transience of favorite sites.
Let $V(t)$ be an arbitrary element in $\mathscr{K}(t)$. It was shown in \cite{BG85} that uniformly in all $V(t)\in \mathscr{K}(t)$ we have with probability 1
\begin{equation}\label{transience}
\liminf_{t\rightarrow \infty} \frac{|V(t)|}{t^{\frac{1}{2}} (\log t)^{-11}}= \infty\,.
\end{equation}
Denote $\psi(t)=t^{\frac{1}{2}}(\log t)^{-11}$ and $E=\left\{ \lliminf{t} |V(t)| \geq \psi (t)\right\}$. By \eqref{transience}, we have $\mathbb{P}(E)=1$, and thus without loss of generality we can assume that $E$ occurs in what follows. Our goal is to show that the event $\{f(3) = \infty\}$ is a tail event and it suffices to show that the event $\{f(3) = \infty\}$ is independent of any $\sigma$-field $F_m$ (which is the $\sigma$-field generated by the first $m$ steps of the random walk) for all $m\in \mathbb{N}$.
To this end, for each $m\in \mathbb{N}$ we let $M$ be the first time such that for all $t \geq M$ favorite sites occurs outside of $[-2m, 2m]$. We see that $M$ is not necessarily a stopping time but $M<\infty$ with probability 1. Therefore, the event $\{f(3)=\infty\}$ depends only on whether after $M$ three favorite sites occurs infinitely often. Now consider the event $\{f_m(3) = \infty\}$ where $f_m(3)$ is defined analogously to $f(3)$ but for the random walk started at time $m$. We claim that the symmetric difference between $\{f(3) = \infty\}$ and $\{f_m(3) = \infty\}$ has probability zero since in the symmetric difference one must have a favorite site (for the original random walk) in the interval $[-2m, 2m]$ after $M$. Therefore, the event $\{f(3) = \infty\}$ is independent of $F_m$ for all $m\in \mathbb{N}$ and thus is a tail event. By Kolmogorov's 0-1 law, $\mathbb{P}( f(3)=\infty ) \in \{0,1\}$. Combined with Proposition~\ref{secmoment}, it completes the proof of \eqref{eq-main-thm}.
| {
"timestamp": "2017-10-26T02:03:20",
"yymm": "1612",
"arxiv_id": "1612.01983",
"language": "en",
"url": "https://arxiv.org/abs/1612.01983",
"abstract": "For a one-dimensional simple random walk $(S_t)$, for each time $t$ we say a site $x$ is a favorite site if it has the maximal local time. In this paper, we show that with probability 1 three favorite sites occurs infinitely often. Our work is inspired by Tóth (2001), and disproves a conjecture of Erdös and Révész (1984) and of Tóth (2001).",
"subjects": "Probability (math.PR)",
"title": "Three favorite sites occurs infinitely often for one-dimensional simple random walk",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109489630492,
"lm_q2_score": 0.7185943985973773,
"lm_q1q2_score": 0.7076796316022148
} |
https://arxiv.org/abs/2205.04130 | Robustness of Polynomial Stability with Respect to Sampling | We provide a partially affirmative answer to the following question on robustness of polynomial stability with respect to sampling: ``Suppose that a continuous-time state-feedback controller achieves the polynomial stability of the infinite-dimensional linear system. We apply an idealized sampler and a zero-order hold to a feedback loop around the controller. Then, is the sampled-data system strongly stable for all sufficiently small sampling periods? Furthermore, is the polynomial decay of the continuous-time system transferred to the sampled-data system under sufficiently fast sampling?'' The generator of the open-loop system is assumed to be a Riesz-spectral operator whose eigenvalues are not on the imaginary axis but may approach it asymptotically. We provide conditions for strong stability to be preserved under fast sampling. Moreover, we estimate the decay rate of the state of the sampled-data system with a smooth initial state and a sufficiently small sampling period. | \section{Introduction}
In this paper, we study the robustness of polynomial stability with
respect to sampling. To state our problem precisely, we
consider the following sampled-data system with sampling period $\tau >0$:
\begin{subequations}
\label{eq:sampled_data_sys_intro}
\begin{align}
\dot x(t) &= Ax(t) + Bu(t),\quad t\geq 0;\qquad x(0) = x^0 \in X \\
u(t) &= Fx(k\tau),\quad k\tau \leq t < (k+1)\tau,~k\in \mathbb{N}\cup \{0\},
\label{eq:control_input_intro}
\end{align}
\end{subequations}
where $A$ with domain $D(A)$ is the generator of a $C_0$-semigroup $(T(t))_{t \geq 0}$ on a Hilbert space $X$,
and
the control operator $B:\mathbb{C} \to X$
and the feedback operator $F:X \to \mathbb{C}$ are bounded
linear operators.
We assume that the $C_0$-semigroup
$(T_{BF}(t))_{t \geq 0}$ generated by $A+BF$ is polynomially stable
with parameter $\alpha >0$, which means that
$\sup_{t \geq 0}\|T_{BF}(t)\| < \infty$, the spectrum of $A+BF$ is contained in the open left half-plane, and for all $x \in D(A+BF) = D(A)$,
$\|T_{BF}(t)x\| = o(t^{-1/\alpha})$ as $t \to \infty$, i.e.,
for any $\varepsilon >0$, there exists $t_0 >0$ such that
for all $t \geq t_0$,
\[
\|T_{BF}(t)x\| \leq \frac{\varepsilon}{t^{1/\alpha}}.
\]
By density of $D(A)$, we see that under this assumption,
$(T_{BF}(t))_{t \geq 0}$ is strongly stable, that is,
\[
\lim_{t\to \infty}\|T_{BF}(t)x^0\| = 0
\]
for all $x^0 \in X$. Intuitively, as the sampling period $\tau >0$ goes to zero,
the sampled-data control input \eqref{eq:control_input_intro}
becomes closer to the continuous-time control input
given by $u(t) = Fx(t)$ for $t \geq 0$.
Therefore, the following two questions arise:
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi})}
\item Is the sampled-data system \eqref{eq:sampled_data_sys_intro}
with sufficiently small sampling period $\tau >0$ strongly stable
in the sense that
\[
\lim_{t \to \infty} x(t) = 0
\]
for every initial state $x^0 \in X$?
\item
Does the state $x$ of the sampled-data
system \eqref{eq:sampled_data_sys_intro} decay polynomially
for $x^0 \in D(A)$ and sufficiently small $\tau >0$ as
the orbit $T_{BF}(t) x^0$?
\end{enumerate}
We provide a partially affirmative answer to these questions
in this paper.
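Before developing the general theory, the questions above can be illustrated by a toy numerical experiment. The following Python sketch simulates \eqref{eq:sampled_data_sys_intro} for a diagonal truncation of a Riesz-spectral generator whose eigenvalues approach the imaginary axis, with a rank-one feedback that relocates the single unstable eigenvalue; all concrete numbers (the number of modes, the eigenvalue sequence, the sampling periods, the initial state) are illustrative choices and not taken from the paper, and since only one mode is actuated the example does not probe the delicate part of the robustness analysis.
\begin{verbatim}
# Toy simulation of the sampled-data system with an idealised sampler and
# zero-order hold, for a diagonal N-mode truncation.
from cmath import exp

N, alpha = 200, 1.0
lam = [0.1 + 0.0j] + [-1.0 / n ** alpha + 1j * n for n in range(2, N + 1)]
b = [1.0] + [0.0] * (N - 1)                  # B: only the first mode is actuated
f = [-1.0 - lam[0].real] + [0.0] * (N - 1)   # F: relocates lambda_1 to -1

def simulate(tau, steps, x0):
    """x((k+1)tau) = e^{A tau} x(k tau) + (int_0^tau e^{A s} ds) B F x(k tau)."""
    phi = [exp(l * tau) for l in lam]
    gamma = [(exp(l * tau) - 1) / l for l in lam]
    x = list(x0)
    norms = []
    for _ in range(steps):
        u = sum(fi * xi for fi, xi in zip(f, x))   # u = F x(k tau), held on [k tau,(k+1)tau)
        x = [p * xi + g * bi * u for p, g, xi, bi in zip(phi, gamma, x, b)]
        norms.append(sum(abs(xi) ** 2 for xi in x) ** 0.5)
    return norms

x0 = [1.0 / (n + 1) ** 2 for n in range(N)]        # a smooth (rapidly decaying) initial state
for tau in (0.5, 0.1, 0.02):
    norms = simulate(tau, int(200 / tau), x0)
    print(f"tau = {tau:4.2f}:  |x(50)| = {norms[int(50 / tau) - 1]:.4f},"
          f"  |x(200)| = {norms[-1]:.4f}")
\end{verbatim}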
The effect of sampling on systems can be regarded as
a kind of structured perturbation. In this sense,
the issue in the questions above is robustness analysis
of stability with respect to sampling.
For finite-dimensional linear systems, it is well known that
the closed-loop stability is preserved under fast sampling.
However, the robustness of stability with respect to sampling
is not guaranteed for all
infinite-dimensional linear systems; see \cite{Rebarber2002}.
It has been shown in \cite{Logemann2003,Rebarber2006}
that if $(T_{BF}(t))_{t \geq 0}$ is exponentially stable, that is,
there exist constants $M \geq 1$ and $\omega >0$ such that
$\|T_{BF}(t)\| \leq M e^{-\omega t}$ for all $t \geq 0$, then
the sampled-data system also has the same property of
exponential stability for sufficiently small sampling period $\tau>0$.
Exponential stability is a strong property, which one can see
from the fact that
exponential stability
is robust under small bounded perturbations and
even some classes of unbounded perturbations as shown, e.g., in
\cite{Pritchard1989, Pandolfi1991}.
Exploiting the advantages of exponential stability,
the robustness analysis developed in \cite{Logemann2003,Rebarber2006}
allows unbounded control operators mapping the input space
into a space larger than the state space, called an extrapolation space.
On the other hand, the
robustness of strong stability with respect to sampling
has been studied in \cite{Wakaiki2021SIAM}, where the control operator
needs an extra boundedness property related to the continuous spectrum
of $A$.
The reason for imposing
this boundedness property is that
strong stability is a rather delicate property that is highly sensitive
to perturbations; see \cite{Paunonen2014JDE,Paunonen2015Springer, Rastogi2020}
for the robustness of
strong stability of $C_0$-semigroups
(in the absence of polynomial stability).
Exponential stability leads to uniformly quantified asymptotic
behaviors of semigroup orbits for all initial values
from the unit ball of the state space.
This is a desirable property from the viewpoint of many applications.
Nevertheless, exponential stability is unachievable
in some control problems, for example, involving
wave equations or beam equations.
Although strong stability can be achieved in some of those problems,
it is a qualitative notion of stability unlike exponential stability, and
we do not obtain any information on decay rates
of semigroup orbits from strong stability itself.
Polynomial stability is an important subclass of semi-uniform stability, which
lies between the above two
extreme types of semigroup stability, exponential stability
and strong stability, and guarantees semi-uniform
decay rates for semigroup orbits with
initial values in the domain of the generator.
Various results on polynomial stability, and more generally
semi-uniform stability, have been obtained such as
characterizations of decay rates by resolvent estimates on the imaginary axis
$i\mathbb{R}$ \cite{Liu2005PDR,
Batkai2006, Batty2008,Borichev2010,Rozendaal2019} and
robustness to perturbations \cite{Paunonen2011,Paunonen2012SS,Paunonen2013SS, Paunonen2014OM, Rastogi2020}.
We also refer to \cite{Chill2020} for an overview of semi-uniform stability.
A discrete version of semi-uniform stability has been
investigated in the context of the quantified Katznelson-Tzafriri theorem \cite{Seifert2015, Seifert2016, Cohen2016, Ng2020}
and the Cayley transform of a generator \cite{Wakaiki2021JEE}.
However, to the author's knowledge, robustness
analysis with respect to sampling has not been well established for
polynomial stability.
To study the robustness of polynomial stability with respect to sampling,
this paper continues and expands the robustness analysis
developed in \cite{Wakaiki2021SIAM}.
We assume as in \cite{Wakaiki2021SIAM} that $A$ is a Riesz-spectral operator, which can be written
as
\begin{equation*}
A x =
\sum_{n=1}^\infty \lambda_n \langle x , \psi_n \rangle \phi_n
\end{equation*}
with domain
\begin{equation*}
D(A) = \left\{
x \in X: \sum_{n=1}^\infty |\lambda_n|^2~\! |\langle x, \psi_n \rangle|^2 < \infty
\right\},
\end{equation*}
where $(\lambda_n)_{n \in \mathbb{N}}$ are distinct complex numbers
not on $ i \mathbb{R}$,
$(\phi_n)_{n \in \mathbb{N}}$ forms a Riesz basis in $X$, and
$(\psi_n)_{n \in \mathbb{N}}$ is a biorthogonal
sequence to $(\phi_n)_{n \in \mathbb{N}}$; see Section~\ref{sec:RS_operator}
for the details of Riesz-spectral operators.
We restrict our attention to the situation where only a finite number of
the eigenvalues $(\lambda_n)_{n \in \mathbb{N}}$ are in the set
\[
\Omega_{\rm a} :=
\left\{
\lambda \in \mathbb{C} : \mathop{\rm Re}\nolimits \lambda > -\delta
\right\}
\cap
\left(
\mathbb{C} \setminus
\left\{
\lambda \in \mathbb{C} \setminus \mathbb{R} : \mathop{\rm Re}\nolimits \lambda \leq
\frac{- \Upsilon}{|\mathop{\rm Im}\nolimits \lambda|^\alpha}
\right\}
\right)
\]
for some $\delta,\alpha,\Upsilon >0$; in the notation introduced below, $\Omega_{\rm a} = \mathbb{C}_{-\delta} \cap (\mathbb{C} \setminus \Omega_{\alpha,\Upsilon})$.
In contrast, it is assumed in \cite{Wakaiki2021SIAM} that
the set
\[
\Omega_{\rm b} :=
\left\{
\lambda \in \mathbb{C} \setminus \{
0
\}: \mathop{\rm Re}\nolimits\lambda > - \delta,~| \arg \lambda | < \pi /2 + \vartheta
\right\}
\]
contains only finitely many eigenvalues for some $\delta>0$ and
$0< \vartheta \leq \pi/2$.
Figure~\ref{fig:Omega} illustrates the sets $\Omega_{\rm a}$ and $\Omega_{\rm b}$.
In our setting, the continuous spectrum of $A$ has empty
intersection with
$i\mathbb{R}$ unlike
the setting of \cite{Wakaiki2021SIAM}, but
the spectrum of $A$ may approach $i\mathbb{R}$ asymptotically.
Therefore, the type of non-exponential stability we consider in this paper
is
different from that in \cite{Wakaiki2021SIAM}.
It has been shown in \cite{Wakaiki2021SIAM} that
only strong stability is preserved under fast sampling, whereas
we here investigate the quantitative behavior of
the state of the sampled-data system in addition to strong stability.
\begin{figure}
\centering
\subcaptionbox{Set $\Omega_{\rm a}$ considered in this paper.
\label{fig:Omega1}}
[.49\linewidth]
{\includegraphics[width = 5cm,clip]{Omega1.pdf}}
\subcaptionbox{Set $\Omega_{\rm b}$ considered in \cite{Wakaiki2021SIAM}.
\label{fig:Omega2}}
[.49\linewidth]
{\includegraphics[width = 5cm,clip]{Omega2.pdf}}
\caption{Comparison of the sets that contain only finitely many
eigenvalues. \label{fig:Omega}}
\end{figure}
Another important difference from \cite{Wakaiki2021SIAM} is the assumption on
the control operator $B$ and the feedback operator $F$.
Let $b,f \in X$ and let
$B,F$ be written as $Bu = bu$ for
all $u \in \mathbb{C}$ and $Fx = \langle x, f \rangle$
for all $x \in X$. In this paper, we assume that
\begin{align*}
b &\in \mathcal{D}^\beta :=
\left\{ x \in X:
\sum_{n=1}^{\infty} |\lambda_n|^{2\beta}~\!
|\langle x, \psi_n \rangle |^2 < \infty\right\}\\
f &\in \mathcal{D}^\gamma_* :=
\left\{ x \in X:
\sum_{n=1}^{\infty} |\lambda_n|^{2\gamma}~\!
|\langle x,\phi_n \rangle |^2 < \infty\right\},
\end{align*}
where $\beta,\gamma \geq 0$ satisfy one of the following
conditions: (i) $\beta,\gamma \in \mathbb{N}_0$ and $\beta + \gamma \geq \alpha$; or (ii) $\beta + \gamma > \alpha$.
On the other hand, it is assumed in \cite{Wakaiki2021SIAM}
that
\[
b \in D(A^{-1}) = \left\{ x \in X:
\sum_{n=1}^{\infty} \frac{
|\langle x, \psi_n \rangle |^2}{|\lambda_n|^2} < \infty\right\}
\]
and $f \in X$.
Under
the assumption we make in this paper,
$B$ and $F$
have the parameters $\beta$ and $\gamma$ for design flexibility,
which increases the applicability
of the proposed robustness analysis.
The
bounded linear operator $\Delta(\tau)$ on $X$ defined by
\[
\Delta(\tau) := T(\tau) + \int^{\tau}_0 T(s)BFds
\]
plays a key role in the analysis of robustness with respect to sampling. In fact, the sampled-data system \eqref{eq:sampled_data_sys_intro}
is strongly stable if and only if
the discrete semigroup $(\Delta(\tau)^k)_{k \in \mathbb{N}}$
is strongly stable, i.e.,
\[
\lim_{k \to \infty} \|\Delta(\tau)^k x^0\| = 0
\]
for all $x^0 \in X$.
In \cite{Wakaiki2021SIAM}, the sufficient condition
for strong stability provided by
the Arendt-Batty-Lyubich-V\~u theorem~\cite{Arendt1998,Lyubich1988} is used to show that
$(\Delta(\tau)^k)_{k \in \mathbb{N}}$ is strongly stable.
This sufficient condition requires that
the intersection of the spectrum of $\Delta(\tau)$ and the unit circle
be countable, but this requirement is not generally satisfied in our setting.
Instead of the Arendt-Batty-Lyubich-V\~u theorem, we here employ
the characterization of strong stability by an integral condition on
resolvents developed in \cite{Tomilov2001}.
Moreover, we provide an integral condition on resolvents under which
the orbit $\Delta(\tau)^k x^0$ with
$x^0 \in \mathcal{D}^{\alpha/2}$ satisfies
\[
\|\Delta(\tau)^k x^0\| = o \left(
\sqrt{
\frac{\log k}{k}
}
\right)
\]
as $k \to \infty$. Using this integral condition, we show that
if $\alpha \leq 2$ or $\beta \geq \alpha$, then the state $x$
of the sampled-data system \eqref{eq:sampled_data_sys_intro}
with sufficiently small sampling period $\tau >0$
satisfies
\begin{equation}
\label{eq:decay_estimate_intro}
\|x(t)\| = o \left(
\sqrt{
\frac{\log t}{t}
}
\right)
\end{equation}
as $t\to \infty$
for every initial state $x^0 \in \mathcal{D}^{\alpha/2}$.
If $A$ generates a polynomially stable
$C_0$-semigroup $(T(t))_{t \geq 0}$ with parameter
$\alpha$ but not with any parameter
$\widetilde \alpha \in(0, \alpha)$,
the optimal decay rate of $\|T(t)x^0\|$ with $x^0 \in \mathcal{D}^{\alpha/2}$
is given by $\|T(t)x^0\| = o(1/\sqrt{t})$ as $t \to \infty$.
This observation suggests that
the logarithmic factor $\sqrt{\log t}$ might be removable from
the estimate \eqref{eq:decay_estimate_intro}, but whether it can actually be removed remains unknown.
The paper is organized as follows.
Section~\ref{sec:preliminaries} contains preliminaries
on polynomial stability of $C_0$-semigroups and
Riesz-spectral operators.
In Section~\ref{sec:SDS}, we present the main result of this paper
and introduce the discretized system for the proof.
Section~\ref{sec:resolvent_cond} is devoted to
resolvent conditions for stability.
To apply these conditions to the discretized system,
we investigate the spectrum of $\Delta(\tau)$ in Section~\ref{sec:spectrum}.
Section~\ref{sec:resolvent_discretized_sys} completes the proof
of the main result with the help of the resolvent conditions for stability.
To illustrate the theoretical results,
we study a wave equation in Section~\ref{sec:wave_eq}.
The conclusion is given in Section~\ref{sec:conclusion}.
\subsection*{Notation and terminology}
For $\delta \in \mathbb{R}$ and $r>0$, we write
\begin{align*}
\mathbb{C}_\delta &:=
\{
\lambda \in \mathbb{C}: \mathop{\rm Re}\nolimits \lambda > \delta
\}\\
\mathbb{D}_r &:=
\{
\lambda \in \mathbb{C}: |\lambda| < r
\} \\
\mathbb{E}_r &:=
\{
\lambda \in \mathbb{C}: |\lambda| > r
\}.
\end{align*}
For $\alpha ,\Upsilon >0$,
define
\[
\Omega_{\alpha,\Upsilon} :=
\left\{
\lambda \in \mathbb{C}\setminus \mathbb{R} : \mathop{\rm Re}\nolimits \lambda \leq \frac{-\Upsilon}{|\mathop{\rm Im}\nolimits \lambda|^\alpha}
\right\}.
\]
The closure of a subset $\Gamma$ of $\mathbb{C}$ and
the complex conjugate of $\lambda \in \mathbb{C}$ are
denoted by $\overline{\Gamma}$ and $\overline{\lambda}$, respectively.
For real-valued functions $f,g$ on $J \subset \mathbb{R}$, we write
\[
f(t) = O\big(g(t)\big)\qquad (t \to \infty)
\]
if
there exist $M>0$ and $t_0 \in J$ such that
$f(t) \leq Mg(t)$ for every $t \geq t_0$, and
similarly,
\[
f(t) = o\big(g(t)\big)\qquad (t \to \infty)
\]
if
for any $\varepsilon >0$, there exists $t_0 \in J$ such that
$f(t) \leq \varepsilon g(t)$ for every $t \geq t_0$.
Let $X$ and $Y$ be Banach spaces. For a linear operator $A:X\to Y$,
we denote by $D(A)$ and $\mathop{\rm ran}\nolimits (A)$ the domain and the range of $A$,
respectively.
The space of all bounded linear operators from $X$ to $Y$ is denoted by $\mathcal{L}(X,Y)$,
and we write $\mathcal{L}(X) := \mathcal{L}(X,X)$.
For a closed operator $A: D(A) \subset X \to X$, we denote by
$\sigma(A)$ and $\rho(A)$ the spectrum and
the resolvent set of $A$, respectively.
We write $R(\lambda,A) := (\lambda I - A)^{-1}$
for $\lambda \in \rho (A)$.
For a subset $S$ of $X$ and a linear operator
$A: D(A) \subset X \to Y$, we denote by $A|_S$
the restriction of $A$ to $S$, i.e., $A|_S x = Ax$ with domain
$D(A|_S) := D(A) \cap S$.
A $C_0$-semigroup $(T(t))_{t \geq 0}$ on a Banach space $X$
is called {\em uniformly bounded} if
$\sup_{t \geq 0} \|T(t)\| < \infty$ and
{\em strongly stable} if $\lim_{t \to \infty} T(t)x = 0$ for all $x \in X$.
By a {\em discrete semigroup} on $X$, we mean a family $(\Delta^k)_{k \in \mathbb{N}}$
of operators, where $\Delta \in \mathcal{L}(X)$.
A discrete semigroup
$(\Delta^k )_{k \in \mathbb{N}}$ on $X$ is called
{\em power bounded}
if $\sup_{k \in \mathbb{N}}\|\Delta^k\| < \infty$ and
{\em strongly stable} if $\lim_{k \to \infty} \Delta^k x = 0$
for all $x \in X$.
Let $X$ and $Y$ be Hilbert spaces.
We denote
the inner product on $X$ by $\langle x,y \rangle$ for $x,y \in X$.
For a densely
defined linear operator $A:D(A) \subset X \to Y$,
the Hilbert space adjoint of $A$ is denoted
by $A^*$.
\renewcommand{\theenumi}{\alph{enumi})}
\renewcommand{\theenumii}{(\roman{enumii})}
\section{Preliminaries}
\label{sec:preliminaries}
In this section, we review the definition and
some important properties of
polynomially stable $C_0$-semigroups and
Riesz-spectral operators.
\subsection{Polynomially stable $\bm{C_0}$-semigroups}
We start by recalling the polynomial stability of a $C_0$-semigroup.
\begin{dfntn}
A $C_0$-semigroup $(T(t))_{t\geq 0}$ on a Banach space $X$
generated by $A$ is {\em polynomially stable with parameter $\alpha >0$}
if the following three conditions are satisfied:
\begin{enumerate}
\item
$(T(t))_{t\geq 0}$ is uniformly bounded,
\item $i \mathbb{R} \subset \rho(A)$, and
\item $\|T(t)A^{-1}\| = O(t^{-1/\alpha})$ as $t\to \infty$.
\end{enumerate}
\end{dfntn}
Let $A$ be the generator of a uniformly bounded $C_0$-semigroup $(T(t))_{t\geq 0}$
on a Hilbert space. Then $-A$ and $-A^*$ are
sectorial in the sense of Chapter~2 of \cite{Haase2006}.
In particular, if $(T(t))_{t\geq 0}$ is a polynomially stable $C_0$-semigroup
on a Hilbert space, then
$A$ and $A^*$ are invertible, and hence
the fractional powers $(-A)^{\alpha}$ and $(-A^*)^{\alpha}$
are well defined for all $\alpha \in \mathbb{R}$.
We refer, e.g., to Chapter~3 of \cite{Haase2006} and
Section~II.5.3 of \cite{Engel2000} for the details of
fractional powers.
We use the following characterizations for polynomial decay
of a $C_0$-semigroup on a Hilbert space.
The proof can be found in Lemma~2.3 and
Theorem~2.4 of \cite{Borichev2010}. See also
Lemma~2.3 of \cite{Wakaiki2021JEE} for the result on
the decay rate of an individual orbit.
\begin{thrm}
\label{thm:decay_charac}
Let $(T(t))_{t\geq 0}$ be a uniformly bounded $C_0$-semigroup
on a Hilbert space $X$ with generator
$A$ such that $i \mathbb{R} \subset \rho(A)$.
For fixed $\alpha,\delta >0$, the following assertions are equivalent:
\begin{enumerate}
\item
$\|T(t)A^{-1}\| = O(t^{-1/\alpha})$ as $t\to \infty$. \vspace{2pt}
\item
$\|T(t)(-A)^{-\alpha \delta}x\| = o(t^{-\delta})$ as $t\to \infty$
for all $x \in X$. \vspace{2pt}
\item
$\|R(i\omega,A)\| = O(|\omega|^{\alpha})$ as $\omega \to \infty$.
\vspace{2pt}
\item
$\displaystyle
\sup_{\lambda \in \overline{\mathbb{C}_0}}
\|R(\lambda,A)(-A)^{\alpha}\| < \infty$.
\end{enumerate}
\end{thrm}
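As a simple illustration (a standard diagonal example, included here only as a sketch and not taken from the references above), let $X = \ell^2$, let $(\phi_n)_{n \in \mathbb{N}}$ be the standard orthonormal basis, and let $A$ be the diagonal operator defined by $A\phi_n = \lambda_n \phi_n$ with $\lambda_n := -n^{-\alpha} + i n$ for a fixed $\alpha >0$. The generated semigroup is a contraction semigroup, $i\mathbb{R} \subset \rho(A)$, and, since $A$ is normal,
\[
\|R(i\omega,A)\| = \sup_{n \in \mathbb{N}} \frac{1}{|i\omega - \lambda_n|}
= \sup_{n \in \mathbb{N}} \frac{1}{\sqrt{(\omega - n)^2 + n^{-2\alpha}}}
= O(|\omega|^{\alpha}) \qquad (\omega \to \infty),
\]
so assertion c) of Theorem~\ref{thm:decay_charac} shows that this semigroup is polynomially stable with parameter $\alpha$, although it is not exponentially stable because $\mathop{\rm Re}\nolimits \lambda_n \to 0$.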
The following estimate given in
Lemma~4 of \cite{Paunonen2014OM} is useful in the robustness analysis of polynomial stability.
\begin{lmm}
\label{lem:resolvent_estimate}
Let $A$ be the generator of a polynomially stable $C_0$-semigroup
with parameter $\alpha >0$
on a Hilbert space $X$.
Let $\beta,\gamma \geq 0$ satisfy $\beta+\gamma \geq \alpha$
and let $U$ be a Banach space.
There exists a constant $M \geq 1$ such that
if $B \in \mathcal{L}(U,X)$ and $F \in \mathcal{L}(X,U)$
satisfy $\mathop{\rm ran}\nolimits (B) \subset D((-A)^\beta)$ and
$\mathop{\rm ran}\nolimits (F^*) \subset D((-A^*)^\gamma)$, then
\[
\|FR(\lambda,A)B\| \leq M \|(-A)^{\beta}B\|~\! \|(-A^*)^\gamma F^*\|
\]
for all $\lambda \in \overline{\mathbb{C}_0}$.
\end{lmm}
\subsection{Riesz-spectral operators}
\label{sec:RS_operator}
Next we recall the definition of Riesz-spectral operators and
briefly state their most relevant properties. We refer the reader to
Section~3.2 of \cite{Curtain2020}, Section~2.4 of \cite{Tucsnak2009}, and Chapter~2 of \cite{Guo2019book}
for more details.
\begin{dfntn}[Definition 3.2.6 of \cite{Curtain2020}]
\label{def:RSO}
Let $X$ be a Hilbert space and let
$A:D(A) \subset X \to X$ be a closed linear operator with
simple eigenvalues $(\lambda_n)_{n \in \mathbb{N}}$ and
corresponding eigenvectors $(\phi_n)_{n \in \mathbb{N}}$.
We say that $A$ is a {\em Riesz-spectral operator} if
$(\phi_n)_{n \in \mathbb{N}}$ is a Riesz basis and
the set of eigenvalues $(\lambda_n)_{n \in \mathbb{N}}$
has at most finitely many accumulation points.
\end{dfntn}
Let $A$ be a Riesz-spectral operator on a Hilbert space $X$
with simple eigenvalues
$(\lambda_n)_{n \in \mathbb{N}}$ and
corresponding eigenvectors $(\phi_n)_{n \in \mathbb{N}}$.
Let $(\psi_n)_{n \in \mathbb{N}}$ be the eigenvectors
of the adjoint $A^*$ corresponding to the eigenvalues
$(\overline{\lambda_n})_{n \in \mathbb{N}}$.
Then
$(\psi_n)_{n \in \mathbb{N}}$ can be suitably scaled so that
$(\phi_n)_{n \in \mathbb{N}}$ and $(\psi_n)_{n \in \mathbb{N}}$
are {\em biorthogonal}, i.e.,
\[
\langle \phi_n, \psi_m \rangle =
\begin{cases}
1 & \text{if $n=m$} \\
0 & \text{otherwise}.
\end{cases}
\]
A sequence biorthogonal to a Riesz basis in $X$
is unique and also forms a Riesz basis in $X$.
Throughout this paper, we set the sequence $(\psi_n)_{n \in \mathbb{N}}$
of the eigenvectors of
the adjoint $A^*$ so that $(\psi_n)_{n \in \mathbb{N}}$
are biorthogonal to $(\phi_n)_{n \in \mathbb{N}}$.
There exist constants $M_{\rm a}, M_{\rm b} >0$ such that
for all $x \in X$,
\begin{align*}
M_{\rm a} \sum_{n=1}^\infty |\langle x, \psi_n \rangle |^2 \leq \|x\|^2 \leq
M_{\rm b} \sum_{n=1}^\infty |\langle x, \psi_n \rangle |^2
\\
\frac{1}{M_{\rm b}}\sum_{n=1}^\infty |\langle x, \phi_n \rangle |^2 \leq \|x\|^2 \leq
\frac{1}{M_{\rm a}} \sum_{n=1}^\infty |\langle x, \phi_n \rangle |^2.
\end{align*}
We shall frequently use these inequalities without comment. In particular, taking $x = \phi_n$ and using the biorthogonality yields $M_{\rm a} \leq \|\phi_n\|^2 \leq M_{\rm b}$ for every $n \in \mathbb{N}$.
The Riesz-spectral operator $A$ has the following representation:
\begin{equation}
\label{eq:RS_operator}
A x =
\sum_{n=1}^\infty \lambda_n \langle x , \psi_n \rangle \phi_n
\end{equation}
with domain
\begin{equation*}
D(A) = \left\{
x \in X: \sum_{n=1}^\infty |\lambda_n|^2~\! |\langle x, \psi_n \rangle|^2 < \infty
\right\}.
\end{equation*}
The spectrum of the Riesz-spectral operator $A$ is the closure of its point spectrum, that is,
$\sigma(A) = \overline{\{\lambda_n:n \in \mathbb{N}\}}$.
For $\lambda \in \rho(A)$, the resolvent $R(\lambda,A)$ is given by
\begin{equation}
\label{eq:RS_resolvent}
R(\lambda,A)x = \sum_{n=1}^\infty \frac{1}{\lambda- \lambda_n}
\langle x , \psi_n \rangle \phi_n\qquad \forall x \in X.
\end{equation}
The Riesz-spectral operator $A$ generates a $C_0$-semigroup on $X$
if and only if
$\sup_{n \in \mathbb{N}} \mathop{\rm Re}\nolimits \lambda_n < \infty$, and the $C_0$-semigroup $(T(t))_{t\geq 0}$ generated by $A$ can be written as
\[
T(t)x = \sum_{n=1}^\infty e^{t\lambda_n } \langle x , \psi_n \rangle \phi_n\qquad \forall t \geq 0,~\forall x \in X.
\]
The adjoint $A^*$ is also a Riesz-spectral operator and is
represented as
\[
A^* x = \sum_{n=1}^\infty \overline{\lambda_n} \langle
x, \phi_n
\rangle \psi_n
\]
with domain
\[
D(A^*) = \left\{
x \in X :
\sum_{n=1}^\infty
\left|
\lambda_n
\right|^2 ~\! \left|
\langle x, \phi_n \rangle
\right|^2 < \infty
\right\}.
\]
Moreover, the $C_0$-semigroup generated by $A^*$ is given by
$(T(t)^* )_{t\geq 0}$.
To
make assumptions on the ranges of the control operator $B\in \mathcal{L}(\mathbb{C},X)$ and
the adjoint of the feedback operator $F\in\mathcal{L}(X,\mathbb{C})$,
we use the following subsets:
\begin{align*}
\mathcal{D}^{\beta} &:=
\left\{ x \in X:
\sum_{n=1}^{\infty} |\lambda_n|^{2\beta}~\!
|\langle x, \psi_n \rangle |^2 < \infty\right\}\\
\mathcal{D}_*^{\gamma} &:=
\left\{ x \in X:
\sum_{n=1}^{\infty} |\lambda_n|^{2\gamma}~\!
|\langle x,\phi_n \rangle |^2 < \infty\right\}.
\end{align*}
\section{Stability of sampled-data systems}
\label{sec:SDS}
In this section, we
present the system under consideration and state the main result of this paper. We also introduce the discretized system as the first step of its proof.
\subsection{Main result}
Let $X$ be a Hilbert space, and
consider the sampled-data system with
state space $X$ and input space $\mathbb{C}$:
\begin{subequations}
\label{eq:sampled_data_sys}
\begin{align}
\dot x(t) &= Ax(t) + Bu(t),\quad t\geq 0;\qquad x(0) = x^0 \in X \\
u(t) &= Fx(k\tau),\quad k\tau \leq t < (k+1)\tau,~k \in \mathbb{N}\cup\{0\},
\end{align}
\end{subequations}
where
$x(t) \in X$ is the state, $u(t) \in \mathbb{C}$ is the control input,
$\tau>0$ is the sampling period,
$A: D(A) \subset X \to X$ is the generator of a $C_0$-semigroup
$(T(t) )_{t\geq 0}$ on $X$,
$B \in \mathcal{L}(\mathbb{C},X)$ is the control operator, and
$F \in \mathcal{L}(X,\mathbb{C})$ is the feedback operator.
\begin{dfntn}
The sampled-data system \eqref{eq:sampled_data_sys} is called
{\em strongly stable} if
\[\lim_{t\to \infty} \|x(t)\| = 0
\]
for every initial state $x^0 \in X$.
\end{dfntn}
We place the following assumption on the
sampled-data system
\eqref{eq:sampled_data_sys}.
\begin{assumption}
\label{assum:for_MR}
Let $A$ be a Riesz-spectral operator on a
Hilbert space $X$ with simple eigenvalues
$(\lambda_n)_{n \in \mathbb{N}}$ and
corresponding eigenvectors $(\phi_n)_{n \in \mathbb{N}}$.
Let $(\psi_n)_{n \in \mathbb{N}}$ be the eigenvectors of
$A^*$ such that $(\phi_n)_{n \in \mathbb{N}}$ and
$(\psi_n)_{n \in \mathbb{N}}$ are biorthogonal.
Let the control operator $B \in \mathcal{L}(\mathbb{C},X)$
and the feedback operator $F\in \mathcal{L}(X,\mathbb{C})$
be represented as
\begin{align}
\label{eq:B_F_rep}
Bu = bu,\quad u \in \mathbb{C};\qquad
Fx = \langle x, f\rangle,\quad x\in X
\end{align}
for some $b,f \in X$. Assume that
the operators $A$, $B$, and $F$ satisfy the following conditions:
\begin{enumerate}
\renewcommand{\theenumi}{(A\arabic{enumi})}
\item
\label{assump:finite_unstable}
There exist constants $\delta, \alpha, \Upsilon >0$ such that
$\mathbb{C}_{-\delta} \cap (\mathbb{C} \setminus \Omega_{\alpha,\Upsilon})$ contains
only finitely many elements of $(\lambda_n)_{n \in \mathbb{N}}$.
\item
\label{assump:imaginary}
$\{\lambda_n :n \in \mathbb{N}\} \cap i \mathbb{R} = \emptyset$.
\item
\label{assump:closed_loop}
$A+BF$ generates a polynomially stable $C_0$-semigroup
$(T_{BF}(t))_{t \geq0}$
with parameter $\alpha$ on $X$.
\item
\label{assump:b_f_cond}
There exist constants
$\beta,\gamma \geq 0$ such that
$
b \in \mathcal{D}^{\beta}$,
$f\in \mathcal{D}_*^{\gamma}
$,
and one of the following conditions holds:
\begin{enumerate}
\item $\beta ,\gamma \in \mathbb{N}_0$ and
$\beta + \gamma \geq \alpha$.
\item $\beta + \gamma > \alpha$.
\end{enumerate}
\end{enumerate}
\end{assumption}
\renewcommand{\theenumi}{\alph{enumi})}
\renewcommand{\theenumii}{(\roman{enumii})}
By (A\ref{assump:finite_unstable}), the Riesz-spectral operator $A$
generates a $C_0$-semigroup $(T(t))_{t \geq 0}$.
Since $\sigma(A) = \overline{\{\lambda_n:n \in \mathbb{N}\}}$, it follows that $\sigma(A) \cap i \mathbb{R} = \emptyset$
under (A\ref{assump:finite_unstable}) and (A\ref{assump:imaginary}).
The
eigenvalues $(\lambda_n)_{n \in \mathbb{N}}$ may approach $i \mathbb{R}$
asymptotically, and an upper bound of the asymptotic rate
is represented by the parameter $\alpha$
given in (A\ref{assump:finite_unstable}).
Note that
when $\lim_{k \to \infty }\mathop{\rm Re}\nolimits \lambda_{n_k} = 0$
for some subsequence $(\lambda_{n_k})_{k \in \mathbb{N}}$,
there does not exist a feedback operator $F \in \mathcal{L}(X,\mathbb{C})$
such that the $C_0$-semigroup $(T_{BF}(t))_{t \geq0}$ generated by
$A+BF$ is exponentially stable; see, e.g., Theorem~8.2.3 of \cite{Curtain2020}.
We assume by (A\ref{assump:b_f_cond}) that
$B$ and $F$ have stronger boundedness properties related to
the parameter $\alpha$ than the standard boundedness properties
$B \in \mathcal{L}(\mathbb{C},X)$ and $F \in \mathcal{L}(X,\mathbb{C})$.
Assumptions similar to (A\ref{assump:b_f_cond}) are imposed on
perturbation operators in the robustness analysis of
polynomial stability
developed in \cite{Paunonen2011,Paunonen2012SS,Paunonen2013SS}.
The following theorem is the main result of this paper, which
shows that polynomial stability is robust with respect to sampling.
\begin{thrm}
\label{thm:SD_SS}
If Assumption~\ref{assum:for_MR} is satisfied, then
there exists $\tau^*>0$ such that
the following statements hold
for all $\tau \in (0,\tau^*)$:
\begin{enumerate}
\item
The sampled-data system \eqref{eq:sampled_data_sys} is strongly stable.
\item
If $\alpha \leq 2$ or $\beta \geq \alpha$, then
the state $x$ of the sampled-data system \eqref{eq:sampled_data_sys}
satisfies
\begin{equation}
\label{eq:solution_conv}
\|x(t)\| = o \left(\sqrt{
\frac{\log t}{t} }\right)
\qquad (t \to \infty)
\end{equation}
for every initial state $x^0 \in \mathcal{D}^{\alpha/2}$.
\end{enumerate}
\end{thrm}
The assumption $\alpha \leq 2$ implies $D(A) \subset \mathcal{D}^{\alpha/2}$, since $\alpha/2 \leq 1$ gives $|\lambda_n|^{\alpha} \leq 1 + |\lambda_n|^{2}$ for every $n \in \mathbb{N}$.
On the other hand, the assumption $\beta \geq \alpha$ leads to
the uniform boundedness of $\|R(z,T(\tau)) S(\tau)\|$ on an annulus
$\{z \in \mathbb{C}:
1<|z| < 1+\varepsilon\}$ with some sufficiently small $\varepsilon>0$
for a fixed $\tau >0$, where
$S(\tau) \in \mathcal{L}(\mathbb{C},X)$ is defined by
\[
S(\tau)u := \int^\tau_0 T(s) Buds,\quad u \in \mathbb{C}.
\]
We will employ these assumptions in Section~\ref{sec:Integral_RSTR}.
If $A$ generates a polynomially stable $C_0$-semigroup $(T(t))_{t \geq 0}$
with parameter $\alpha$ but not with
any parameter $\widetilde \alpha \in (0,\alpha)$
and if $F=0$, then the optimal decay rate of
the state $x$ of the sampled-data system \eqref{eq:sampled_data_sys}
with initial state $x^0 \in D((-A)^{\alpha/2})=\mathcal{D}^{\alpha/2}$
is given by
$\|x(t)\| = o (1/\sqrt{t})$
as $t \to \infty$ by Theorem~\ref{thm:decay_charac}.
Whether the logarithmic correction in the estimate \eqref{eq:solution_conv}
can be omitted remains open.
The proof of Theorem~\ref{thm:SD_SS} is divided into several steps.
In the next subsection, we give the equivalence between the stability of
the sampled-data system \eqref{eq:sampled_data_sys} and that of
the discretized system.
Section~\ref{sec:resolvent_cond} is devoted to resolvent conditions
for the stability of discrete semigroups on Hilbert spaces.
To apply these resolvent conditions, in Section~\ref{sec:spectrum},
we investigate the spectrum of the operator
that represents the dynamics of the discretized system.
In Section~\ref{sec:resolvent_discretized_sys}, we complete the proof of
Theorem~\ref{thm:SD_SS} by using
the resolvent conditions presented in Section~\ref{sec:resolvent_cond}.
\subsection{Discretized system}
For $t\geq 0$,
define $\Delta(t) \in \mathcal{L}(X)$ by
\begin{equation*}
\Delta(t) := T(t) + S(t) F.
\end{equation*}
Then the state $x$ of
the sampled-data system \eqref{eq:sampled_data_sys}
satisfies
\begin{equation}
\label{eq:discretized_sys}
x\big((k+1)\tau\big) = \Delta(\tau) x(k\tau)\qquad \forall k\in \mathbb{N} \cup \{0\},
\end{equation}
which we call the {\em discretized system}.
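For completeness, here is a short verification of \eqref{eq:discretized_sys}, assuming the usual mild-solution formulation of \eqref{eq:sampled_data_sys}: since $u(t) = Fx(k\tau)$ on $[k\tau,(k+1)\tau)$, the variation-of-constants formula gives
\[
x\big((k+1)\tau\big) = T(\tau)x(k\tau) + \int^{\tau}_0 T(\tau - s) B F x(k\tau)\, ds
= \big(T(\tau) + S(\tau)F\big) x(k\tau) = \Delta(\tau)x(k\tau),
\]
where the substitution $s \mapsto \tau - s$ has been used in the integral.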
To prove Theorem~\ref{thm:SD_SS},
it suffices by the next result to investigate
the discrete semigroup
$(\Delta(\tau)^k)_{k \in \mathbb{N}}$.
\begin{prpstn}
\label{prop:SD_DT}
Let $A$ be the generator of a $C_0$-semigroup $(T(t))_{t\geq 0}$ on
a Banach space $X$. Let
$B \in \mathcal{L}(\mathbb{C},X)$ and
$F \in \mathcal{L}(X,\mathbb{C})$.
The following statements hold for a fixed $\tau >0$:
\begin{enumerate}
\item
The sampled-data system \eqref{eq:sampled_data_sys} is strongly stable
if and only if the discrete semigroup $(\Delta(\tau)^k)_{k \in \mathbb{N}}$
is strongly stable.
\item
Let $f:(0,\infty) \to \mathbb{R}$, and suppose that
there exist constants $k_0 \in \mathbb{N}$ and $M_1,M_2>0$ such that
for all $k \geq k_0$ and $s \in [0,\tau)$,
\begin{equation}
\label{eq:f_bound}
M_1 f(k\tau) \leq f(k) \leq M_2f(k\tau+s).
\end{equation}
Then
the state $x$ of
the sampled-data system \eqref{eq:sampled_data_sys}
with initial state $x^0\in X$
satisfies
\begin{equation}
\label{eq:solution_conv_from_dist}
\|x(t)\| = o \big(f(t) \big )\qquad (t \to \infty)
\end{equation}
if and only if $x^0$ satisfies
\begin{align}
\label{eq:dis_time_poly_decay}
\|\Delta(\tau)^k x^0\| =
o\big(
f(k)
\big)\qquad (k \to \infty).
\end{align}
\end{enumerate}
\end{prpstn}
\begin{proof}
The assertion a) is proved in
Proposition~2.2 in \cite{Wakaiki2021SIAM}, and therefore
we show only the assertion b).
Assume that \eqref{eq:solution_conv_from_dist} holds for
the state $x$ of
the sampled-data system \eqref{eq:sampled_data_sys}
with initial state $x^0\in X$.
Take $\varepsilon >0$. There exists $t_1 >0$ such that
\[
\|x(t)\| \leq \frac{\varepsilon}{M_1} f(t)\qquad \forall t \geq t_1.
\]
Choose $k_1 \in \mathbb{N}$
so that $k_1 \geq k_0$ and $k_1\tau \geq t_1$.
By \eqref{eq:discretized_sys} and \eqref{eq:f_bound},
we have that
\[
\|\Delta(\tau)^k x^0\| =
\|x(k \tau)\| \leq
\frac{\varepsilon}{M_1} f(k\tau)
\leq \varepsilon f(k)
\]
for all $k \geq k_1$.
Hence, \eqref{eq:dis_time_poly_decay} holds.
Conversely, assume that $x^0 \in X$ satisfies \eqref{eq:dis_time_poly_decay}, and
take $\varepsilon >0$. We have that
\[
1\leq K := \sup_{0\leq s < \tau} \| \Delta(s) \| < \infty.
\]
By assumption,
there exists $k_2 \in \mathbb{N}$ such that
\[
\|\Delta(\tau)^k x^0\| \leq
\frac{\varepsilon}{KM_2} f(k)
\qquad \forall k \geq k_2.
\]
Since \eqref{eq:discretized_sys} gives
\[
\|x(k\tau+s)\| \leq K \|x(k\tau)\| = K \|\Delta(\tau)^k x^0\|
\qquad \forall k \in \mathbb{N} \cup \{0 \},~\forall s \in [0,\tau),
\]
it follows from \eqref{eq:f_bound} that
for all
$k \geq \max \{k_0,k_2 \}$ and $s \in [0,\tau)$,
\[
\|x(k\tau+s)\|
\leq \frac{\varepsilon}{M_2} f(k)
\leq \varepsilon f(k\tau +s).
\]
Thus, we obtain \eqref{eq:solution_conv_from_dist}.
\end{proof}
If we define
\[
f(t) := \frac{\log t}{t},\quad t >0,
\]
then $f$ satisfies the condition in b) of Proposition~\ref{prop:SD_DT}
for a fixed $\tau >0$.
In fact, we obtain
\begin{align*}
\frac{\log (k\tau)}{k\tau} &=
\frac{\log (k\tau)}{\tau \log k} \cdot \frac{\log k}{k} \\
\frac{\log k}{k} &=
\frac{\frac{\log k}{\log (k\tau+s)}}{ \frac{k}{k\tau+s}} \cdot
\frac{\log (k\tau+s)}{k\tau+s}
\leq
\frac{\frac{\log k}{\log (k\tau)}}{ \frac{k}{(k+1)\tau}} \cdot
\frac{\log (k\tau+s)}{k\tau+s}
\end{align*}
for all $k \in \mathbb{N}$ with $k > \max\{1,1/\tau \}$
and all $s \in [0,\tau)$.
Therefore, there exists $k_0 \in \mathbb{N}$ such that
for all $k \geq k_0$ and $s \in [0,\tau)$,
\[
\frac{\tau}{2} \cdot \frac{\log (k\tau)}{k\tau}
\leq \frac{\log k}{k} \leq 2\tau \frac{\log (k\tau+s)}{k\tau+s}.
\]
Since this two-sided bound is preserved under taking square roots, the function $t \mapsto \sqrt{(\log t)/t}$ also satisfies the condition in b) of Proposition~\ref{prop:SD_DT}.
\section{Resolvent conditions for stability of discrete semigroups}
\label{sec:resolvent_cond}
First, we review the resolvent characterizations of stability
of discrete semigroups on Hilbert spaces.
The resolvent characterization of power bounded discrete semigroups has been
obtained in Theorem II.1.12 of \cite{Eisner2010}, which is an analogue of
the characterization of uniformly bounded $C_0$-semigroups due to
\cite{Gomilko1999,Shi2000}.
Moreover, the
resolvent characterization of strongly stable discrete semigroups has been
developed in Theorem~3.11 of
\cite{Tomilov2001}; see also Theorem II.2.23 of \cite{Eisner2010}.
\begin{thrm}
\label{thm:strong_stability_resol}
Let $X$ be a Hilbert space and
let $\Delta \in \mathcal{L}(X)$ satisfy $\mathbb{E}_1 \subset \rho(\Delta)$.
Then the following statements hold:
\begin{enumerate}
\item
The discrete semigroup $(\Delta^k)_{k \in \mathbb{N}}$ is power bounded
if and only if
\begin{align*}
\limsup_{r \downarrow 1} ~\!(r-1)
\int^{2\pi}_0
\|R(re^{i\theta}, \Delta)x\|^2 d\theta &< \infty \\
\limsup_{r \downarrow 1} ~\!(r-1)
\int^{2\pi}_0
\|R(re^{i\theta}, \Delta)^*y\|^2 d\theta &< \infty
\end{align*}
hold for all $x,y \in X$.
\item
The discrete semigroup $(\Delta^k)_{k \in \mathbb{N}}$ is strongly stable
if and only if
\begin{align*}
\lim_{r \downarrow 1} ~\! (r-1)
\int^{2\pi}_0
\|R(re^{i\theta}, \Delta)x\|^2 d\theta &=0\\
\limsup_{r \downarrow 1} ~\!(r-1)
\int^{2\pi}_0
\|R(re^{i\theta}, \Delta)^*y\|^2 d\theta &< \infty
\end{align*}
hold for all $x,y \in X$.
\end{enumerate}
\end{thrm}
Next, we investigate a resolvent condition for polynomial decay of
a discrete semigroup on a Hilbert space. To this end,
the following identities, given in Lemma~II.1.11 of \cite{Eisner2010}, are useful.
\begin{lmm}
\label{lem:T_powered}
Let $X$ be a Banach space and let $\Delta \in \mathcal{L}(X)$
with spectral radius $\mathop{\rm r}\nolimits(\Delta)$. Then
\[
\Delta ^k=
\frac{r^{k+1}}{2\pi} \int^{2\pi}_0 e^{i\theta (k+1)} R(re^{i\theta},\Delta )d\theta =
\frac{r^{k+2}}{2\pi(k+1)}
\int^{2\pi}_0 e^{i\theta (k+2)} R(re^{i\theta},\Delta )^2d\theta
\]
for all $k \in \mathbb{N}$ and $r > \mathop{\rm r}\nolimits(\Delta)$.
\end{lmm}
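For orientation, here is a sketch of the first identity under the stated assumption $r > \mathop{\rm r}\nolimits(\Delta)$: the Neumann series $R(re^{i\theta},\Delta) = \sum_{m=0}^{\infty} \Delta^m (re^{i\theta})^{-(m+1)}$ converges absolutely and uniformly in $\theta$, so term-by-term integration together with $\int^{2\pi}_0 e^{i\theta(k+1)} e^{-i\theta(m+1)}\, d\theta = 2\pi$ if $m = k$ and $0$ otherwise yields
\[
\frac{r^{k+1}}{2\pi} \int^{2\pi}_0 e^{i\theta(k+1)} R(re^{i\theta},\Delta)\, d\theta
= \frac{r^{k+1}}{2\pi} \cdot \frac{2\pi \Delta^k}{r^{k+1}} = \Delta^k.
\]
The second identity follows in the same way from $R(re^{i\theta},\Delta)^2 = \sum_{m=0}^{\infty} (m+1)\Delta^m (re^{i\theta})^{-(m+2)}$.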
We state the analogue of Proposition~3.1 (b) and (c) in \cite{Wakaiki2021JEE}
for discrete semigroups.
\begin{prpstn}
\label{prop:discrete_poly_decay}
Let $X$ be a Hilbert space, and suppose that a
discrete semigroup $(\Delta^k)_{k \in \mathbb{N}}$ on $X$ is power bounded.
Then the following statements hold:
\begin{enumerate}
\item
If $x \in X$ satisfies
\[
\|\Delta^kx\| = o\left(
\frac{1}{\sqrt{k}}
\right)\qquad (k \to \infty),
\]
then
\begin{align}
\label{eq:log_cond_T}
\lim_{r \downarrow 1}
\frac{1}{\log \frac{r}{r-1}}
\int^{2\pi}_0 \|R(re^{i\theta}, \Delta) x\|^2 d\theta = 0.
\end{align}
\item
Conversely, if $x \in X$ satisfies
\eqref{eq:log_cond_T},
then
\begin{align}
\label{eq:Tn_poly_log_conv}
\|\Delta^kx\| = o\left(\sqrt{
\frac{\log k}{k}
}\right)\qquad (k \to \infty).
\end{align}
\end{enumerate}
\end{prpstn}
\begin{proof}
a)
Assume that $x \in X$ satisfies $\|\Delta^kx\| = o(
1/\sqrt{k}
)$ as $k \to \infty$.
Take $\varepsilon>0$ and $r >1$.
There exists $k_0 \in \mathbb{N}$ such that
$\|\Delta^kx\|^2 \leq \varepsilon/k$ for all $k \geq k_0$.
By
Parseval's equality, we obtain
\[
\frac{1}{2\pi}
\int^{2\pi}_0 \|R(re^{i\theta},\Delta) x\|^2 d\theta =
\sum_{k=-\infty}^{\infty}
\left\|\frac{1}{2\pi}\int^{2\pi}_0 e^{ik\theta} R(re^{i\theta},\Delta) x d\theta\right\|^2.
\]
Combining this equality with Lemma~\ref{lem:T_powered},
we have that
\[
\frac{1}{2\pi}
\int^{2\pi}_0 \|R(re^{i\theta}, \Delta) x\|^2 d\theta
=
\sum_{k=0}^{\infty} \frac{\|\Delta^k x\|^2}{r^{2(k+1)}}.
\]
Since $(\Delta^k)_{k \in \mathbb{N}}$ is power bounded, it follows that
$M := \sup_{k \in \mathbb{N} \cup \{0\}}\|\Delta^k\| < \infty$.
Hence
\[
\sum_{k=0}^{k_0-1} \frac{\|\Delta^k x\|^2}{r^{2(k+1)}}
\leq k_0 M^2 \|x\|^2.
\]
Using the following well-known identity of polylogarithms (see, e.g.,
p.~3 of \cite{Hain1994}):
\[
\sum_{k=1}^\infty \frac{1}{kr^{2k}} = \log \frac{r^2}{r^2-1},
\]
we obtain
\[
\sum_{k=k_0}^{\infty} \frac{\|\Delta^k x\|^2}{r^{2(k+1)}}
\leq \frac{\varepsilon}{r^2}
\sum_{k=k_0}^\infty \frac{1}{kr^{2k}} \leq
\frac{\varepsilon}{r^2} \log \frac{r^2}{r^2-1}.
\]
Moreover,
\[
\log \frac{r^2}{r^2-1} =
\log
\frac{r}{r-1} + \log \frac{r}{r+1}
\leq \log
\frac{r}{r-1}.
\]
Hence
\[
\limsup_{r\downarrow 1}\frac{1}{\log \frac{r}{r-1}}
\sum_{k=0}^{\infty} \frac{\|\Delta^k x\|^2}{r^{2(k+1)}} \leq
\limsup_{r\downarrow 1}\frac{1}{\log
\frac{r}{r-1}}
k_0 M^2 \|x\|^2 + \limsup_{r\downarrow 1}\frac{\varepsilon}{r^2} \leq \varepsilon.
\]
Since $\varepsilon >0$ was arbitrary,
the desired conclusion \eqref{eq:log_cond_T} holds.
b)
Assume that \eqref{eq:log_cond_T} holds.
By Lemma~\ref{lem:T_powered} and the Cauchy-Schwarz inequality,
we have that for all $x,y \in X$,
\begin{align*}
|\langle \Delta^k x, y
\rangle|
&\leq
\frac{r^{k+2}}{2\pi(k+1)}
\int^{2\pi}_0 |
\langle R(re^{i\theta},\Delta)^2x, y \rangle
| d\theta \\
&=
\frac{r^{k+2}}{2\pi(k+1)}
\int^{2\pi}_0 |
\langle R(re^{i\theta},\Delta)x, R(re^{-i\theta},\Delta^*)y \rangle
| d\theta \\
&\leq
\frac{r^{k+2}}{2\pi(k+1)}
\left(
\int^{2\pi}_0 \|R(re^{i\theta},\Delta)x \|^2 d\theta
\right)^{1/2}
\left(
\int^{2\pi}_0 \|R(re^{i\theta},\Delta^*)y \|^2 d\theta
\right)^{1/2}.
\end{align*}
Take $r_1>1$.
Since $(\Delta^k)_{k \in \mathbb{N}}$ is power bounded,
Theorem~\ref{thm:strong_stability_resol}.a)
and the uniform boundedness principle imply that
there exists a constant $M_1 >0$ such that
\[
(r-1)
\int^{2\pi}_0 \|R(re^{i\theta},\Delta^*)y \|^2 d\theta
\leq M_1^2\|y\|^2 \qquad \forall y \in X,~\forall r \in (1,r_1).
\]
Hence
\[
\| \Delta^kx\| \leq
\frac{M_1r^{k+2}}{2\pi(k+1)}
\sqrt{\frac{\log
\frac{r}{r-1}}{r-1}}
\left(\frac{1}{\log
\frac{r}{r-1}}
\int^{2\pi}_0 \|R(re^{i\theta},\Delta)x \|^2 d\theta
\right)^{1/2}\qquad \forall x \in X,~\forall r \in (1,r_1).
\]
Put $r := 1 + \frac{1}{k+1}$. Then
\[
\frac{r^{k+2}}{(k+1)(r-1)} = \left(
1+ \frac{1}{k+1}
\right)^{k+2} \to e\qquad (k \to \infty).
\]
On the other hand,
\begin{align*}
(r-1)\log
\frac{r}{r-1}
&=
\frac{\log (k+2)}{k+1}.
\end{align*}
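Here we have used $r - 1 = \frac{1}{k+1}$ and $\frac{r}{r-1} = k+2$.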
Hence, there exists $k_1 \in \mathbb{N}$ such that
for all $k \geq k_1$,
\[
\frac{r^{k+2} }{k+1}
\sqrt{\frac{\log
\frac{r}{r-1}}{r-1}}
\leq 2 e \sqrt{
\frac{\log k}{k}}.
\]
Combining this estimate with \eqref{eq:log_cond_T},
we obtain \eqref{eq:Tn_poly_log_conv}.
\end{proof}
\section{Spectrum and sampling}
\label{sec:spectrum}
To apply Theorem~\ref{thm:strong_stability_resol} and Proposition~\ref{prop:discrete_poly_decay}
to the
discretized system \eqref{eq:discretized_sys},
we have to show that the inclusion $\mathbb{E}_1 \subset \rho (\Delta(\tau))$
holds.
The aim of this section is to prove the following theorem.
\begin{thrm}
\label{lem:resol_T_SF}
If {\rm (A\ref{assump:finite_unstable})}, {\rm (A\ref{assump:closed_loop})},
and {\rm (A\ref{assump:b_f_cond})} hold,
then there exists $\tau^* >0$
such that
$\mathbb{E}_1 \subset \rho (\Delta(\tau))$
for all $\tau \in (0,\tau^*)$.
\end{thrm}
First, we apply a spectral decomposition for $A$.
Next, we obtain the inclusion $\mathcal{D}^{\beta} \subset
D((-A-BF)^{\widetilde \beta})$ for all $\widetilde \beta \in [0,\beta)$.
Using this inclusion,
we also show that $|1-F(\lambda I - A)^{-1}B|$ is bounded from below
by a positive constant on $\rho(A) \cap \overline{\mathbb{C}_0}$.
This estimate for the continuous-time system
leads to an analogous result for the discretized system,
a lower bound of
$|1- F(zI - T(\tau))^{-1}S(\tau)|$ on
$\rho\big(T(\tau)\big) \cap \overline{\mathbb{E}_1}$. Finally,
the desired inclusion $\mathbb{E}_1 \subset \rho (\Delta(\tau))$
is proved.
\subsection{Spectral decomposition}
\label{sec:Spec_decomp}
We start by applying a spectral decomposition for $A$ under (A\ref{assump:finite_unstable}). A more general version of
spectral decompositions for unbounded operators can
be found in Lemma~2.4.7 of \cite{Curtain2020}
and Proposition~IV.1.16 of \cite{Engel2000}.
Since only finitely many elements of $(\lambda_n)_{n \in \mathbb{N}}$
are in
$\mathbb{C}_{-\delta} \cap (\mathbb{C} \setminus \Omega_{\alpha,\Upsilon})$,
there exists a smooth, positively oriented, and simple closed curve $\Phi$ in
$\rho(A)$ containing $\sigma(A) \cap \mathbb{C}_0$ in its interior and
$\sigma(A) \cap (\mathbb{C} \setminus \mathbb{C}_0)$ in its exterior.
The operator
\begin{equation}
\label{eq:projection}
\Pi := \frac{1}{2\pi i} \int_{\Phi} (\lambda I - A)^{-1} d\lambda
\end{equation}
is a projection on $X$ and yields
\[
X = X^+ \oplus X^-,
\]
where
\[
X^+ := \Pi X,\quad X^- := (I - \Pi) X.
\]
We have that $\dim X^+ < \infty$. Moreover, $X^+$ and $X^-$ are
$T(t)$-invariant for all $t \geq 0$. Define
\begin{align*}
A^+ := A|_{X^+} ,\quad A^- := A|_{D(A) \cap X^-}.
\end{align*}
Then
\[
\sigma(A^+) = \sigma(A) \cap \mathbb{C}_0,\quad
\sigma(A^-) = \sigma(A) \cap (\mathbb{C} \setminus \mathbb{C}_0).
\]
Let $N_{\rm a} \in \mathbb{N}$ satisfy
\begin{equation}
\label{eq:Ns_def}
\{\lambda_n:1\leq n \leq N_{\rm a} - 1 \} := \{\lambda_n
: n \in \mathbb{N}\} \cap \mathbb{C}_0
= \sigma(A^+)
\end{equation}
by changing the order of $(\lambda_n)_{n \in \mathbb{N}}$ if necessary.
In the subsequent developments, we
also use $N_{\rm b} \in \mathbb{N}$
satisfying
\begin{equation}
\label{eq:lambda_cond_large}
\mathop{\rm Re}\nolimits \lambda_n \leq -\delta\quad \text{or} \quad
\lambda_n \in \Omega_{\alpha,\Upsilon}\qquad \forall n \geq N_{\rm b}.
\end{equation}
By construction, we obtain $N_{\rm b} \geq N_{\rm a}$.
The series representations of $A^+$ and $A^-$ are given by
\begin{align*}
A^+x^+ &= \sum_{n=1}^{N_{\rm a}-1}
\lambda_n \langle x^+ , \psi_n \rangle \phi_n
\qquad \forall x ^+ \in D(A^+) = X^+ \\
A^-x^- &= \sum_{n=N_{\rm a}}^\infty \lambda_n \langle x^- , \psi_n \rangle \phi_n\qquad \forall x^- \in D(A^-) =
\left\{
x^- \in X^-: \sum_{n=N_{\rm a}}^\infty |\lambda_n|^2~\! |\langle x^-, \psi_n \rangle|^2 < \infty
\right\}.
\end{align*}
For all $\lambda \in \rho(A)$, $X^+$ and $X^-$ are
$(\lambda I - A)^{-1}$-invariant and
\[
(\lambda I - A^+)^{-1} =
(\lambda I - A)^{-1}|_{X^+},\quad (\lambda I - A^-)^{-1} =
(\lambda I - A)^{-1}|_{X^-}.
\]
For $t \geq 0$, we define
\[
T^+(t) := T(t)|_{X^+},\quad T^-(t) := T(t)|_{X^-}.
\]
Then $(T^+(t))_{t\geq 0}$
and $(T^-(t))_{t\geq 0}$ are $C_0$-semigroups with generators $A^+$ and $A^-$,
respectively.
Define
\begin{align*}
B^+ := \Pi B,\quad B^- := (I-\Pi)B,\quad
F^+ := F|_{X^+},\quad F^- := F|_{X^-}.
\end{align*}
From \eqref{eq:B_F_rep}, we have that
\begin{align*}
B^+u = b^+ u&,\quad B^- u = b^-u\qquad \forall u \in \mathbb{C} \\
F^+x^+ &= \langle x^+, f^+\rangle\qquad \forall x^+ \in X^+ \\
F^-x^- &= \langle x^-, f^-\rangle \qquad \forall x^- \in X^-,
\end{align*}
where
$b^+ := \Pi b$, $b^- := (I-\Pi)b$,
$f^+ := \Pi^*f$, and $f^- := (I-\Pi^*)f$.
The adjoint $\Pi^*$
is also a projection on $X$ and
yields a spectral decomposition for $A^*$.
We define
\[
X^+_* := \Pi^*X,\quad
X^-_* := (I-\Pi^*)X.
\]
The restriction $A^-_* := A^*|_{D(A^*)\cap X^-_*}$ of the adjoint $A^*$
is the generator of a $C_0$-semigroup
$(T^-_*(t))_{t\geq 0}$, where
\[
T^-_*(t):= T(t)^*|_{X^-_*}
\]
for $t \geq 0$.
The polynomial stability of
$(T^{-}(t))_{t\geq 0}$
and $(T^{-}_*(t))_{t\geq 0}$ under {\rm (A\ref{assump:finite_unstable})} and
{\rm (A\ref{assump:imaginary})}
is an immediate consequence of
the equivalence between a) and c) in Theorem~\ref{thm:decay_charac}.
\begin{lmm}
\label{lem:T_minus_poly_stable}
If {\rm (A\ref{assump:finite_unstable})} and
{\rm (A\ref{assump:imaginary})} hold, then
the $C_0$-semigroups $(T^{-}(t))_{t\geq 0}$
and $(T^{-}_*(t))_{t\geq 0}$
constructed as above
are polynomially stable with parameter $\alpha$.
\end{lmm}
\subsection{Relation between $\bm{\mathcal{D}^{\beta}}$ and
$\bm{D((-A-BF)^{\beta})}$}
Let a Riesz-spectral operator $A$ on a Hilbert space $X$
generate a $C_0$-semigroup. Let $B \in \mathcal{L}(\mathbb{C},X)$ and
$F \in \mathcal{L}(X,\mathbb{C})$
be such that $A+BF$
is the generator of a uniformly bounded $C_0$-semigroup.
Then,
the fractional power $(-A-BF)^{\beta}$ is well defined for every
$\beta >0$.
We will show that
if
$\mathop{\rm ran}\nolimits (B) \subset \mathcal{D}^{\beta}$ for some $\beta >0$, then
$\mathcal{D}^{\beta} \subset D((-A-BF)^{\widetilde \beta})$ holds for all
$\widetilde \beta \in [0,\beta)$.
To this end, the following result
is useful; see Lemma~5.4 of \cite{Wakaiki2021JEE} for the proof.
\begin{lmm}
\label{lem:A_AD_domain}
Let $X$ be a Banach space and let $V \in \mathcal{L}(X)$.
Suppose that $A$ and $A+V$ are the generators of exponentially stable
$C_0$-semigroups
on $X$.
Then $D((-A)^{\alpha_1}) \subset D((-A-V)^{\alpha_2})$
for all $\alpha_1,\alpha_2 \in (0,1)$ with $\alpha_2 < \alpha_1$.
\end{lmm}
We investigate the relation between $D(A^n)$ and $D((A+BF)^n)$
for $n \in \mathbb{N}$.
\begin{lmm}
\label{lem:integer_case}
Let $A$ be a linear operator on a Banach space $X$ and let
$F \in \mathcal{L}(X,\mathbb{C})$.
Define
$B\in \mathcal{L}(\mathbb{C},X)$ by
$Bu := bu$ for $u \in \mathbb{C}$, where $b \in X$.
For all $n \in \mathbb{N}$,
if $x \in D(A^n)$ and $b \in D(A^{n-1})$, then
$x \in D((A+BF)^n)$ and
\begin{equation}
\label{eq:A+BF_powered}
(A+BF)^n x = A^n x+q_{n-1} A^{n-1}b + \cdots + q_0 b,
\end{equation}
where $q_{m} := F(A+BF)^{n-m-1}x \in \mathbb{C}$ for $m=0,\dots,n-1$.
\end{lmm}
\begin{proof}
We prove the assertion by induction.
In the case $n=1$,
$x \in D(A)$ satisfies $x \in D(A+BF)$ and $(A+BF)x = Ax + (Fx)b$.
Now,
assume that the assertion holds for some $n \in \mathbb{N}$.
Let $x \in D(A^{n+1})$ and $b \in D(A^{n})$.
Then $A^{n} x \in D(A)$ and
\[
A^{\ell} b \in D(A)\qquad \forall \ell = 0,\dots,n-1.
\]
This and the inductive assumption imply
\[
(A+BF)^nx \in D(A) = D(A+BF),
\]
and hence
$x \in D((A+BF)^{n+1})$.
Moreover,
\begin{align*}
(A+BF)^{n+1}x &=
A \big(A^n x+q_{n-1} A^{n-1}b + \cdots + q_0 b \big) +
BF(A+BF)^{n}x\\
&=
A^{n+1} x + q_{n-1} A^{n}b + \cdots + q_0 Ab + (F(A+BF)^{n}x)b.
\end{align*}
Thus, \eqref{eq:A+BF_powered} holds when $n$ is replaced by $n+1$.
\end{proof}
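For orientation, in the case $n=2$ the formula \eqref{eq:A+BF_powered} reads
\[
(A+BF)^2x = A^2x + (Fx)\, Ab + \big(F(A+BF)x\big)\, b,
\]
which can also be verified by a direct computation.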
Combining Lemmas~\ref{lem:A_AD_domain} and \ref{lem:integer_case},
we obtain the following result:
\begin{lmm}
\label{lem:ABF_domain}
Let a Riesz-spectral operator $A$ on a Hilbert space $X$
generate a $C_0$-semigroup.
Define
$B\in \mathcal{L}(\mathbb{C},X)$ by
$Bu := bu$ for $u \in \mathbb{C}$, where $b \in \mathcal{D}^{\beta}$ for some $\beta >0$.
Let $F \in \mathcal{L}(X,\mathbb{C})$ be such that
$A+BF$ generates a uniformly bounded $C_0$-semigroup on $X$.
Then for all $\widetilde \beta \in [0,\beta)$, one has
$\mathcal{D}^{\beta} \subset D((-A-BF)^{\widetilde \beta})$;
in particular $b \in D((-A-BF)^{\widetilde \beta})$.
\end{lmm}
\begin{proof}
There exists $h >0$ such that $A_h :=A-h I$ generates
an exponentially stable $C_0$-semigroup on $X$.
We obtain $
\mathcal{D}^{\beta} = D((-A_h)^{\beta})
$ by Lemma~3.2.11.c of \cite{Curtain2020}.
Since
\[
D\big((-A_h)^{\beta} \big) \subset D\Big((-A_h)^{\widetilde \beta} \Big),\quad
D\big((-A-BF)^{\beta}\big) \subset D\Big((-A-BF)^{\widetilde \beta}\Big)
\]
for every $\widetilde \beta \in [0,\beta)$,
it suffices to consider the case where $n < \widetilde \beta < \beta < n+1$
for some $n \in \mathbb{N}_0$.
Put $\beta_0 := \beta - n$ and
$\widetilde \beta_0 := \widetilde \beta - n$.
Take $x\in \mathcal{D}^{\beta} = D((-A_h)^{\beta}) $.
We have from the first law of exponents (see, e.g., Proposition~3.1.1.c of \cite{Haase2006}) that
\begin{align}
D\big((-A_h)^\beta\big) &= \left\{
x \in D(A_h^n ) : A_h^nx \in D\big((-A_h)^{\beta_0}\big)
\right\} \label{eq:A_beta_domain}\\
D\Big((-A_h-BF)^{\widetilde \beta}\Big) &= \left\{
x \in D\big((A_h+BF)^n\big) : (A_h+BF)^nx \in D\Big((-A_h-BF)^{\widetilde \beta_0}\Big)
\right\}.\label{eq:ABF_beta_domain}
\end{align}
Since $x,b \in D(A_h^{n})$,
Lemma~\ref{lem:integer_case} implies that
$x \in D((A_h+BF)^{n})$ and
\begin{equation}
\label{eq:Aeps_BF}
(A_h+BF)^{n}x = A_h^nx + q_{n-1} A_h^{n-1}b + \cdots + q_0 b
\end{equation}
for some $q_0,\dots,q_{n-1} \in \mathbb{C}$.
By $b\in D(A_h^{n}) $,
\[
A_h^{n-1}b,A_h^{n-2}b,\dots, b \in D(A_h) \subset D\big((-A_h)^{\beta_0}\big).
\]
Moreover, we have from $x \in D((-A_h)^\beta)$
and \eqref{eq:A_beta_domain} that
\[
A_h^nx \in D\big((-A_h)^{\beta_0}\big).
\]
Hence $(A_h+BF)^{n}x \in D((-A_h)^{\beta_0}) $
by \eqref{eq:Aeps_BF}.
Lemma~\ref{lem:A_AD_domain} yields
\[
D\big((-A_h)^{\beta_0}\big) \subset D\Big((-A_h-BF)^{\widetilde \beta_0}\Big),
\]
and therefore \[
(A_h+BF)^{n}x \in D\Big((-A_h-BF)^{\widetilde \beta_0}\Big).
\]
This and
\eqref{eq:ABF_beta_domain} give
$x \in D((-A_h-BF)^{\widetilde \beta})$. Since
$D((-A_h-BF)^{\widetilde \beta}) = D((-A-BF)^{\widetilde \beta})$
by Proposition~3.1.9 of \cite{Haase2006},
we conclude that $\mathcal{D}^{\beta} \subset
D((-A-BF)^{\widetilde \beta})$.
\end{proof}
\subsection{Lower bound of $\bm {|1 - FR(z, T(\tau)) S(\tau)|}$}
In this subsection, we complete the proof of Theorem~\ref{lem:resol_T_SF},
by showing that
$|1 - FR(z,T(\tau))S(\tau)|$ is bounded from below
by a positive constant on $\rho(T(\tau)) \cap \overline{\mathbb{E}_1}$.
First, we estimate $|F R(\lambda, A+BF)B|$
with the help of Lemma~\ref{lem:ABF_domain}.
\begin{lmm}
\label{lem:FRB_bound}
If {\rm (A\ref{assump:finite_unstable})}, {\rm (A\ref{assump:closed_loop})},
and {\rm (A\ref{assump:b_f_cond})} hold, then
there exist constants $M \geq 1$, $\widetilde \beta \in [0,\beta]$, and $
\widetilde \gamma \in [0,\gamma]$ such that
\begin{equation}
\label{eq:bf_inclusions}
b \in D\Big((-A-BF)^{\widetilde \beta }\Big),\qquad
f \in D\Big((-A^*-F^*B^*)^{\widetilde \gamma}\Big)
\end{equation}
and
\begin{equation}
\label{eq:F_RABF_B_bound}
|F R(\lambda, A+BF)B| \leq M \big\|
(-A-BF)^{\widetilde \beta} b \big\|~\!\big\|
(-A^*-F^*B^*)^{\widetilde \gamma} f\big\|\qquad \forall \lambda \in
\overline{\mathbb{C}_0}.
\end{equation}
\end{lmm}
\begin{proof}
If $\beta,\gamma \in \mathbb{N}_0$, then
we have from Lemma~\ref{lem:integer_case} that
$b \in D((A+BF)^\beta)$ and $f \in D((A^*+F^*B^*)^\gamma)$.
If $\beta+\gamma > \alpha$, then
Lemma~\ref{lem:ABF_domain} implies that \eqref{eq:bf_inclusions} holds
for some $\widetilde \beta \in [0,\beta)$ and $
\widetilde \gamma \in [0,\gamma)$ satisfying $\widetilde \beta + \widetilde \gamma \geq \alpha$.
The inequality \eqref{eq:F_RABF_B_bound} immediately follows from Lemma~\ref{lem:resolvent_estimate}.
\end{proof}
Using
Lemma~\ref{lem:FRB_bound}, we next obtain an estimate of
$|1 - FR(\lambda,A)B|$.
\begin{lmm}
\label{lem:cont_time_trans_func_bound}
If {\rm (A\ref{assump:finite_unstable})}, {\rm (A\ref{assump:closed_loop})},
and {\rm (A\ref{assump:b_f_cond})} hold, then
there exists $\varepsilon>0$ such that
\[
|1 - FR(\lambda,A)B| > \varepsilon
\qquad
\forall \lambda \in \rho(A) \cap \overline{\mathbb{C}_0}.
\]
\end{lmm}
\begin{proof}
Let $\lambda \in \rho (A)$. Then
\[
\lambda I - A - BF = (\lambda I - A)(I - (\lambda I - A)^{-1}BF).
\]
Since $\sigma((\lambda I - A)^{-1}BF) \setminus \{0 \}
= \sigma(F(\lambda I - A)^{-1}B) \setminus \{0 \} $ (see, e.g.,
(3) in Section~III.2 of \cite{Gohberg1990}),
we obtain
\[
\lambda \in \rho (A+BF) \quad \Leftrightarrow \quad
1 \in \rho \big((\lambda I - A)^{-1}BF\big) \quad \Leftrightarrow \quad
1 \in \rho (F(\lambda I - A)^{-1}B).
\]
Moreover,
\begin{equation}
\label{eq:extension}
\frac{1}{1 - F(\lambda I - A)^{-1}B} = F(\lambda I - A - BF)^{-1}B + 1\qquad \forall \lambda \in \rho(A) \cap \rho(A+BF).
\end{equation}
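For the reader's convenience, here is a short verification of \eqref{eq:extension}, using only the standard rank-one resolvent identity: writing $G(\lambda) := F(\lambda I - A)^{-1}B \in \mathbb{C}$, for $\lambda \in \rho(A) \cap \rho(A+BF)$ (so that $1 - G(\lambda) \not= 0$ by the equivalences above) one has
\[
(\lambda I - A - BF)^{-1} = (\lambda I - A)^{-1} + (\lambda I - A)^{-1} B \big(1 - G(\lambda)\big)^{-1} F (\lambda I - A)^{-1},
\]
and hence
\[
F(\lambda I - A - BF)^{-1}B = G(\lambda) + \frac{G(\lambda)^2}{1 - G(\lambda)} = \frac{G(\lambda)}{1 - G(\lambda)},
\qquad
F(\lambda I - A - BF)^{-1}B + 1 = \frac{1}{1 - G(\lambda)}.
\]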
Since $(T_{BF}(t))_{t \geq 0}$ is polynomially stable, we have $\overline{\mathbb{C}_0} \subset \rho(A+BF)$.
Hence \eqref{eq:extension} and Lemma~\ref{lem:FRB_bound} imply that
there exists $\varepsilon>0$ such that
$|1 - FR(\lambda ,A)B| > \varepsilon$ for
all $\lambda \in \rho(A) \cap \overline{\mathbb{C}_0} $.
\end{proof}
The estimate on the continuous-time system obtained in
Lemma~\ref{lem:cont_time_trans_func_bound} leads to
an analogous estimate on the discretized system
as in the robustness analysis of exponential stability \cite{Rebarber2006} and
strong stability \cite{Wakaiki2021SIAM}.
To show this, we use the following estimate, which
is obtained from arguments similar to those in the proofs of Theorem~2.1 in \cite{Rebarber2006}
and Lemma~3.8 in \cite{Wakaiki2021SIAM}:
\begin{lmm}
\label{lem:frac_lam_bound}
Suppose that {\rm (A\ref{assump:finite_unstable})} holds. Let
$\widetilde \alpha \geq \alpha$ and let
$N_{\rm b} \in \mathbb{N}$ satisfy \eqref{eq:lambda_cond_large}.
Then there exist constants $\Upsilon_1,\Upsilon_2 >0$ such that
\begin{equation}
\label{eq:case1-3_bound}
\left|
\frac{1 - e^{\tau \lambda_n}}{z - e^{\tau \lambda_n}}
\right| ~\! \left|
\frac{1}{\lambda_n}
\right| \leq \max\big\{\Upsilon_1,
\Upsilon_2 |\lambda_n|^{\widetilde \alpha}
\big\}
\end{equation}
for all $\tau >0$, $z \in \rho(T(\tau)) \cap \overline{\mathbb{E}_1}$, and
$n \geq N_{\rm b}$.
\end{lmm}
\begin{proof}
Take $\tau > 0$ and $z \in \rho(
T(\tau) ) \cap \overline{\mathbb{E}_1}$.
Let $N_{\rm b} \in \mathbb{N}$ satisfy \eqref{eq:lambda_cond_large}.
For $n \geq N_{\rm b}$, we divide the proof into three
cases: (i) $\tau \mathop{\rm Re}\nolimits \lambda_n \leq -1$; (ii) $\tau \mathop{\rm Re}\nolimits \lambda_n > -1$
and $\mathop{\rm Re}\nolimits\lambda_n \leq -\delta$; and (iii) $\tau \mathop{\rm Re}\nolimits \lambda_n > -1$
and $\lambda_n \in \Omega_{\alpha,\Upsilon}$.
For all cases, the following inequality is useful:
\begin{equation}
\label{eq:dis_time_bound}
\left|
\frac{1 - e^{\tau \lambda_n}}{z - e^{\tau \lambda_n}}
\right|
\leq \frac{|1 - e^{\tau \lambda_n}|}{1 - e^{\tau \mathop{\rm Re}\nolimits \lambda_n}} =
\frac{\frac{|1 - e^{\tau \lambda_n}|}{\tau |\lambda_n|}}{ \frac{1 - e^{\tau \mathop{\rm Re}\nolimits \lambda_n}}{\tau |\mathop{\rm Re}\nolimits \lambda_n|}} \cdot
\frac{|\lambda_n|}{|\mathop{\rm Re}\nolimits \lambda_n|}\qquad \forall n \geq N_{\rm b}.
\end{equation}
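Here the first inequality follows from $|z| \geq 1$ and $|e^{\tau \lambda_n}| = e^{\tau \mathop{\rm Re}\nolimits \lambda_n} < 1$, which hold because $z \in \overline{\mathbb{E}_1}$ and $\mathop{\rm Re}\nolimits \lambda_n < 0$ for all $n \geq N_{\rm b}$.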
First we consider the case (i) $\tau \mathop{\rm Re}\nolimits \lambda_n \leq -1$.
By the property \eqref{eq:lambda_cond_large} of $N_{\rm b}$,
we obtain
$|\lambda_n| \geq \kappa$
for some
$\kappa >0$ and all $n \geq N_{\rm b}$.
Therefore, the estimate \eqref{eq:dis_time_bound} gives
\begin{equation}
\label{eq:case1_bound}
\left|
\frac{1 - e^{\tau \lambda_n}}{z - e^{\tau \lambda_n}}
\right| ~\! \left|
\frac{1}{\lambda_n}
\right| \leq
\frac{|1 - e^{\tau \lambda_n}|}{1 - e^{\tau \mathop{\rm Re}\nolimits \lambda_n}} \left|
\frac{1}{\lambda_n}
\right| \leq \frac{2}{(1 - e^{-1}) \kappa}.
\end{equation}
Next we examine the case (ii) $\tau \mathop{\rm Re}\nolimits \lambda_n > -1$
and $\mathop{\rm Re}\nolimits\lambda_n \leq -\delta$.
The function
\[
g(\lambda) :=
\begin{cases}
\frac{1 - e^{\lambda}}{\lambda} & \text{if $\lambda \not=0$} \\
-1 & \text{if $\lambda = 0$}
\end{cases}
\]
is holomorphic on $\mathbb{C}$.
Therefore,
there exists $M_1 >0$ such that $|g(\lambda)| \leq M_1$
for all $\lambda \in \mathbb{C}$ satisfying $-1 \leq \mathop{\rm Re}\nolimits \lambda \leq 0$
and $|\mathop{\rm Im}\nolimits \lambda| \leq \pi$.
For all
$\lambda \in \mathbb{C}$ with $|\mathop{\rm Im}\nolimits \lambda | \leq \pi$,
we obtain
\[
|g(\lambda \pm 2\ell \pi i)| = \left|\frac{1 - e^{\lambda}}{\mathop{\rm Re}\nolimits \lambda + i(\mathop{\rm Im}\nolimits \lambda \pm 2\ell \pi )}\right|
\leq |g(\lambda)|\qquad \forall \ell \in \mathbb{N}.
\]
Hence $|g(\lambda)| \leq M_1$ if $-1 \leq \mathop{\rm Re}\nolimits \lambda \leq 0$.
The above estimate on $g$ shows that
\begin{equation}
\label{eq:dis_bount1}
\frac{|1 - e^{\tau \lambda_n}|}{\tau |\lambda_n|} \leq M_1.
\end{equation}
Moreover, the mean value theorem yields
\begin{equation}
\label{eq:dis_bount2}
\frac{1 - e^{\tau \mathop{\rm Re}\nolimits \lambda_n}}{\tau |\mathop{\rm Re}\nolimits \lambda_n|} \geq e^{-1}.
\end{equation}
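Indeed, writing $\xi := \tau \mathop{\rm Re}\nolimits \lambda_n \in (-1,0)$, the mean value theorem gives $1 - e^{\xi} = |\xi| e^{\eta}$ for some $\eta \in (\xi,0)$, and hence $(1 - e^{\xi})/|\xi| = e^{\eta} \geq e^{\xi} \geq e^{-1}$.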
By the estimates \eqref{eq:dis_bount1} and \eqref{eq:dis_bount2},
we obtain
\begin{equation}
\label{eq:case2_bound}
\left|
\frac{1 - e^{\tau \lambda_n}}{z - e^{\tau \lambda_n}}
\right| ~\! \left|
\frac{1}{\lambda_n}
\right| \leq \frac{e M_1 }{|\mathop{\rm Re}\nolimits \lambda_n |} \leq \frac{e M_1 }{\delta}.
\end{equation}
Finally, we study the case (iii)
$\tau \mathop{\rm Re}\nolimits \lambda_n > -1$
and $\lambda_n \in \Omega_{\alpha,\Upsilon}$.
By $\lambda_n \in \Omega_{\alpha,\Upsilon}$,
\[
|\mathop{\rm Re}\nolimits \lambda_n| \geq \frac{\Upsilon}{|\mathop{\rm Im}\nolimits \lambda_n|^\alpha}.
\]
Using the fact that $|\lambda_n| \geq \kappa >0$
for all $n \geq N_{\rm b}$, we obtain
\begin{align}
\label{eq:real_part_estimate}
\frac{1}{|\mathop{\rm Re}\nolimits \lambda_n|} \leq
\frac{|\mathop{\rm Im}\nolimits \lambda_n|^\alpha}{\Upsilon} \cdot
\frac{ |\lambda_n|^{\widetilde \alpha}}
{|\lambda_n|^{\widetilde \alpha}} \leq
\frac{ |\lambda_n|^{\widetilde \alpha} }{\Upsilon \kappa^{\widetilde \alpha-\alpha}
}.
\end{align}
The estimates \eqref{eq:dis_bount1} and \eqref{eq:dis_bount2}
hold also in the case (iii).
Applying the estimates \eqref{eq:dis_bount1},
\eqref{eq:dis_bount2}, and \eqref{eq:real_part_estimate} to \eqref{eq:dis_time_bound} yields
\begin{align}
\label{eq:case3_bound}
\left|
\frac{1 - e^{\tau \lambda_n}}{z - e^{\tau \lambda_n}}
\right| ~\!\left|
\frac{1}{\lambda_n}
\right| \leq \frac{e M_1}{\Upsilon \kappa^{\widetilde \alpha-\alpha}} |\lambda_n|^{\widetilde \alpha} .
\end{align}
Define
the constants $\Upsilon_1,\Upsilon_2 > 0$ by
\[
\Upsilon_1 :=
\max\left\{
\frac{2}{(1 - e^{-1}) \kappa},~\frac{e M_1 }{\delta}
\right\},\quad
\Upsilon_2 :=
\frac{e M_1}{\Upsilon \kappa^{\widetilde \alpha-\alpha}}.
\]
From the estimates \eqref{eq:case1_bound}, \eqref{eq:case2_bound},
and \eqref{eq:case3_bound},
we conclude
that \eqref{eq:case1-3_bound} holds
for all $n \geq N_{\rm b}$.
\end{proof}
\begin{lmm}
\label{lem:discrete_time_trans_func_bound}
Suppose that {\rm (A\ref{assump:finite_unstable})} holds.
Let $b \in \mathcal{D}^{\beta}$ and $f \in \mathcal{D}_*^{\gamma}$
for some $\beta,\gamma \geq 0$ satisfying
$\beta + \gamma \geq \alpha$.
If there exists $\varepsilon_{\rm c}\in (0,1)$ such that
\begin{equation}
\label{eq:continuous_lower_bound}
|1 - FR(\lambda, A)B| > \varepsilon_{\rm c} \qquad \forall \lambda \in \rho(A) \cap \overline{\mathbb{C}_0},
\end{equation}
then, for any $\varepsilon_{\rm d} \in (0,\varepsilon_{\rm c})$,
there exists $\tau^*>0$ such that
for all $\tau \in (0,\tau^*)$,
\begin{equation}
\label{eq:discrete_lower_bound}
\big|1 - FR\big(z, T(\tau)\big) S(\tau)\big| > \varepsilon_{\rm d}\qquad
\forall z \in \rho\big(T(\tau)\big) \cap \overline{\mathbb{E}_1}.
\end{equation}
\end{lmm}
The proof of
Lemma~\ref{lem:discrete_time_trans_func_bound}
is based on the approximation
approach developed in the proof of
Theorem~2.1 of \cite{Rebarber2006} for
the preservation of exponential stability
under sampling.
We decompose
the transfer functions ${\mathbf G}(\lambda) := F(\lambda I - A)^{-1}B$ and
${\mathbf H}_{\tau}(z) := F(zI - T(\tau))^{-1}S(\tau)$ into
finite-dimensional truncations and
infinite-dimensional tails with approximation order $N \in \mathbb{N}$:
\begin{align*}
{\mathbf G}(\lambda) &=
\sum_{n = 1}^{N-1} \frac{\langle b , \psi_n \rangle \langle \phi_n, f\rangle}{\lambda - \lambda_n} +
\sum_{n = N}^\infty \frac{\langle b , \psi_n \rangle \langle \phi_n, f\rangle}{\lambda - \lambda_n},\quad \lambda \in \rho(A)\\
{\mathbf H}_{\tau}(z) &=\sum_{n = 1}^{N-1}
\frac{e^{\tau \lambda_n}-1}{z - e^{\tau \lambda_n}} \cdot
\frac{\langle b , \psi_n \rangle
\langle\phi_n,f \rangle}{\lambda_n} +
\sum_{n = N}^{\infty}
\frac{e^{\tau \lambda_n}-1}{z - e^{\tau \lambda_n}} \cdot
\frac{\langle b , \psi_n \rangle
\langle\phi_n,f \rangle}{\lambda_n},\quad z \in \rho\big(T(\tau)\big),
\end{align*}
where, for simplicity of notation,
we assume
that $0 \not\in \{\lambda_n:n \in \mathbb{N}\}$.
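For the reader's convenience, here is a sketch of how the series form of ${\mathbf H}_{\tau}$ arises from the spectral representation of $S(\tau)$ (interchanging the sum and the integral, which is justified by the Riesz-basis estimates):
\[
S(\tau)u = \int^{\tau}_0 T(s) b u\, ds = \sum_{n=1}^{\infty} \langle b, \psi_n \rangle \left(\int^{\tau}_0 e^{s\lambda_n}\, ds\right) \phi_n u
= \sum_{n=1}^{\infty} \frac{e^{\tau\lambda_n}-1}{\lambda_n}\, \langle b, \psi_n \rangle\, \phi_n u,
\]
and applying $F = \langle \cdot, f \rangle$ to $(zI - T(\tau))^{-1}S(\tau)u$ yields the stated expression for ${\mathbf H}_{\tau}(z)$.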
The main idea of
the approximation
approach in \cite{Rebarber2006} is twofold.
First, we prove that
the infinite-dimensional tails become arbitrarily small
as $N$ increases. Next, we show that
if $\tau>0$ is sufficiently small, then
the finite-dimensional truncations with a fixed $N \in \mathbb{N}$
are close (except near the unstable poles) under the relationship
$z = e^{\tau \lambda}$ of the variable $\lambda$ in the continuous-time
setting and
the variable $z$ in the discrete-time setting.
For the infinite-dimensional tails, a treatment different from the previous studies \cite{Rebarber2006,Wakaiki2021SIAM} is
required due to the geometric property of the eigenvalues of
the generator $A$ and the conditions on
the control operator $B$
and the feedback operator $F$. On the other hand,
the analysis of
the finite-dimensional truncations has no difficulty arising
from polynomial stability. Hence, to the finite-dimensional truncations,
one can apply
the arguments developed in the proof of
Theorem~2.1 of \cite{Rebarber2006} with only minor modifications; see also
the proof of Lemma~3.8 of \cite{Wakaiki2021SIAM}.
\begin{proof}[Proof of Lemma~\ref{lem:discrete_time_trans_func_bound}]
{\em Step 1:}
We show that
for all $\varepsilon >0$, there exists
$N_0^{\rm c} \in \mathbb{N}$ such that
\begin{equation}
\label{eq:continuous_large_case}
\sup_{\lambda \in \overline{\mathbb{C}_0} }
\left|
\sum_{n = N}^\infty \frac{\langle b , \psi_n \rangle \langle \phi_n, f\rangle}{\lambda - \lambda_n}
\right| \leq \varepsilon\qquad \forall N\geq N_0^{\rm c}.
\end{equation}
Let $N_{\rm b} \in \mathbb{N}$ satisfy
\eqref{eq:lambda_cond_large}.
As in the spectral decomposition described in
Section~\ref{sec:Spec_decomp},
there exists a smooth, positively oriented, and simple closed
curve $\Phi_{\rm b}$ in
$\rho(A)$ containing $\{\lambda_n:1\leq n \leq N_{\rm b} - 1 \} $ in its interior and
$\sigma(A) \setminus \{\lambda_n:1\leq n \leq N_{\rm b} - 1 \} $ in its exterior.
Define the projection $\Pi_{\rm b}$ on $X$ by
\[
\Pi_{\rm b} := \frac{1}{2\pi i} \int_{\Phi_{\rm b}} (\lambda I - A)^{-1} d\lambda,
\]
and put $X_{\rm b}^- := (I - \Pi_{\rm b}) X$.
For $t \geq 0$, define
$
T_{\rm b}^-(t) := T(t)|_{X_{\rm b}^-}.
$
Similarly to Lemma~\ref{lem:T_minus_poly_stable},
$(T_{\rm b}^-(t))_{t\geq 0}$
is a polynomially stable $C_0$-semigroup with parameter $\alpha$
on $X_{\rm b}^-$.
We denote by $A^-_{\rm b}$ the generator of $(T_{\rm b}^-(t))_{t\geq 0}$.
Theorem~\ref{thm:decay_charac}
implies that
\[
M:=
\sup_{\lambda \in \overline{\mathbb{C}_0}}
\|R(\lambda , A^-_{\rm b}) (-A^-_{\rm b})^{-\alpha}\| <\infty.
\]
For all $n \geq N_{\rm b}$ and $\lambda \in \overline{\mathbb{C}_0}$,
\begin{align*}
\frac{M_{\rm a} }{|\lambda-\lambda_n|^2~\! |\lambda_n|^{2\alpha}}
&\leq \|R(\lambda,A^-_{\rm b})(-A^-_{\rm b})^{-\alpha} \phi_n \|^2\\
&\leq \|R(\lambda,A^-_{\rm b})(-A^-_{\rm b})^{-\alpha}\|^2 ~\! \| \phi_n \|^2 \\
&\leq M^2M_{\rm b}.
\end{align*}
Therefore,
\[
\sup_{\lambda \in \overline{\mathbb{C}_0}}
\frac{1}{|\lambda-\lambda_n|~\! |\lambda_n|^{\alpha}} \leq
M\sqrt{\frac{M_{\rm b} }{M_{\rm a}}} \qquad \forall n \geq N_{\rm b}.
\]
By \eqref{eq:lambda_cond_large}, there exists a constant $\kappa >0$ such that $|\lambda_n| \geq \kappa$
for all $n \geq N_{\rm b}$.
The Cauchy-Schwarz inequality implies that
for all $N \geq N_{\rm b}$ and $\lambda \in \overline{\mathbb{C}_0}$,
\begin{align*}
\left|
\sum_{n= N}^\infty
\frac{\langle b, \psi_n\rangle \langle \phi_n, f \rangle }
{\lambda - \lambda_n}
\right| &\leq
\sum_{n= N}^\infty
\frac{1}{|\lambda-\lambda_n|~\! |\lambda_n|^{\alpha}} \cdot
\frac{|\lambda_n|^{\beta+\gamma} ~\!
|\langle b, \psi_n\rangle \langle \phi_n,f \rangle |
}{|\lambda_n|^{\beta+\gamma-\alpha} } \\
&\leq
\frac{M}{\kappa^{\beta+\gamma-\alpha}} \sqrt{\frac{M_{\rm b} }{M_{\rm a}}
\left(\sum_{n= N}^\infty |\lambda_n|^{2\beta} ~\! |\langle b, \psi_n\rangle|^2\right)
\left(\sum_{n= N}^\infty |\lambda_n|^{2\gamma} ~\! |\langle \phi_n, f \rangle|^2\right)}.
\end{align*}
Since $b \in \mathcal{D}^{\beta}$ and $f \in \mathcal{D}_*^{\gamma}$,
we obtain
\begin{equation}
\label{eq:b_bounded}
\sum_{n= 1}^\infty |\lambda_n|^{2\beta}~\! |\langle b, \psi_n\rangle|^2 < \infty,\qquad
\sum_{n= 1}^\infty |\lambda_n|^{2\gamma} ~\! |\langle \phi_n, f \rangle|^2 < \infty.
\end{equation}
Hence, for all $\varepsilon >0$, there exists
$N_0^{\rm c} \geq N_{\rm b}$ such that \eqref{eq:continuous_large_case} holds.
{\em Step 2:}
We shall show that
for all $\varepsilon >0$, there exists $N_0^{\rm d} \in \mathbb{N}$
such that
\begin{equation}
\label{eq:discrete_large_case}
\sup_{z \in \rho(
T(\tau) ) \cap \overline{\mathbb{E}_1}}
\left|
\sum_{n = N}^\infty
\frac{1 - e^{\tau \lambda_n}}{z - e^{\tau \lambda_n}} \cdot
\frac{\langle b , \psi_n \rangle
\langle\phi_n,f \rangle}{\lambda_n}
\right| \leq \varepsilon\qquad \forall \tau >0,~\forall N \geq N_0^{\rm d}.
\end{equation}
By Lemma~\ref{lem:frac_lam_bound}
with $\widetilde \alpha =\beta+\gamma$,
there are constants $\Upsilon_1,\Upsilon_2 >0$
such that
\[
\left|
\frac{1 - e^{\tau \lambda_n}}{z - e^{\tau \lambda_n}} \cdot
\frac{\langle b , \psi_n \rangle
\langle\phi_n,f \rangle}{\lambda_n}
\right| \leq \left(\Upsilon_1 +
\Upsilon_2 |\lambda_n|^{\beta+\gamma}
\right) |\langle b , \psi_n \rangle| ~\!
|\langle\phi_n,f \rangle|
\]
for all $\tau > 0$, $z \in \rho(
T(\tau) ) \cap \overline{\mathbb{E}_1}$, and $n \geq N_{\rm b}$.
Using the Cauchy-Schwarz inequality, we obtain
\begin{align}
\sum_{n = N}^\infty
\left|
\frac{1 - e^{\tau \lambda_n}}{z - e^{\tau \lambda_n}} \cdot
\frac{\langle b , \psi_n \rangle
\langle\phi_n,f \rangle}{\lambda_n}
\right| &\leq
\Upsilon_1 \sum_{n = N}^\infty |\langle b , \psi_n \rangle | ~\!
|\langle\phi_n,f \rangle | +
\Upsilon_2 \sum_{n = N}^\infty |\lambda_n|^{\beta+\gamma } ~\! |
\langle b , \psi_n \rangle | ~\!
|\langle\phi_n,f \rangle | \notag \\
&\leq
\Upsilon_1 \sqrt{\left(
\sum_{n = N}^\infty |\langle b , \psi_n \rangle |^2\right) \left( \sum_{n = N}^\infty
|\langle\phi_n,f \rangle |^2\right)} \notag \\&\quad + \Upsilon_2
\sqrt{
\left(\sum_{n= N}^\infty |\lambda_n|^{2\beta} ~\! |\langle b, \psi_n\rangle|^2\right)
\left(\sum_{n= N}^\infty |\lambda_n|^{2\gamma} ~\! |\langle \phi_n, f \rangle|^2\right)} \label{eq:discrete_time_transfer_suff_large}
\end{align}
for all $N \geq N_{\rm b}$.
As in Step 1,
it follows
from \eqref{eq:b_bounded} and
\[
\sum_{n= 1}^\infty |\langle b, \psi_n\rangle|^2 < \infty,\quad
\sum_{n= 1}^\infty |\langle \phi_n, f \rangle|^2 < \infty
\]
that for all $\varepsilon >0$,
there exists $N_0^{\rm d} \geq N_{\rm b}$ such that
\eqref{eq:discrete_large_case} holds.
{\em Step 3:}
Let
$\varepsilon_{\rm c} \in (0,1)$ satisfy \eqref{eq:continuous_lower_bound}, and
choose $\varepsilon \in (0,\varepsilon_{\rm c}/3)$ arbitrarily.
We have shown in Steps~1 and 2 that
there exists $N_0 \geq N_{\rm b}$ such that
for all $N \geq N_0$ and $\tau >0$,
\begin{subequations}
\begin{align}
\sup_{\lambda \in \overline{\mathbb{C}_0} }
&\left|
\sum_{n=N}^{\infty} \frac{\langle b , \psi_n \rangle \langle \phi_n, f \rangle}{\lambda - \lambda_n}
\right| < \varepsilon \label{eq:suff_large_cont}\\
\sup_{z \in \rho(
T(\tau) ) \cap \overline{\mathbb{E}_1}}
&\left|
\sum^{\infty}_{n=N}\frac{1-e^{\tau \lambda_n}}{z - e^{\tau \lambda_n}}
\cdot \frac{\langle b , \psi_n \rangle \langle \phi_n, f \rangle}{\lambda_n}
\right| < \varepsilon. \label{eq:suff_large_dist}
\end{align}
\end{subequations}
Take $N \geq N_0$.
For simplicity of notation, we assume that
$\lambda_n$ is nonzero
for all $1 \leq n \leq N-1$.
When $\lambda_n = 0$ for some $1 \leq n \leq N-1$,
the corresponding term in the argument of this step,
\[
\frac{e^{\tau \lambda_n}-1}{z - e^{\tau \lambda_n}} \cdot
\frac{\langle b , \psi_n \rangle
\langle\phi_n,f \rangle}{\lambda_n},
\]
is just replaced
by
\[
\frac{\tau \langle b , \psi_n \rangle
\langle\phi_n,f \rangle}{z-1}.
\]
We next investigate the finite-dimensional truncation
\[
\sum_{n = 1}^{N-1}
\frac{e^{\tau \lambda_n}-1}{z - e^{\tau \lambda_n}} \cdot
\frac{\langle b , \psi_n \rangle
\langle\phi_n,f \rangle}{\lambda_n}.
\]
This finite sum presents no difficulty specific to polynomial stability, and hence
we can apply the argument for exponential stability developed in
\cite{Rebarber2006}, which we outline here for completeness.
For $\tau, \eta, a>0$, define
the sets $\Omega_0$, $\Omega_1$, $\Omega_2$, and
$\Omega_3$ by
\begin{align*}
\Omega_0 &:=
\{z = e^{\tau \lambda }: \mathop{\rm Re}\nolimits \lambda \geq 0,~|\tau \lambda | < \eta\}
=
\{z = e^{\mu }: \mathop{\rm Re}\nolimits \mu \geq 0,~|\mu| < \eta\} \\
\Omega_1 &:=
\{z = e^{\tau \lambda }: |\lambda - \lambda_n| \geq a ~\text{for all
$ 1\leq n \leq N-1$}\} \\
&\qquad \cup
\{z = e^{\tau \lambda }: 0< |\lambda - \lambda_n| < a\text{~and~}
\langle b , \psi_n \rangle \langle \phi_n, f \rangle = 0
~\text{for some $ 1\leq n \leq N-1$} \} \\
\Omega_2 &:= \{z = e^{\tau \lambda }: 0< |\lambda - \lambda_n| < a
\text{~and~}
\langle b , \psi_n \rangle \langle \phi_n, f \rangle \not= 0
~\text{for some $ 1\leq n \leq N-1$} \} \\
\Omega_3 &:=\overline{\mathbb{E}_1} \setminus
\Omega_0.
\end{align*}
Take $0< \eta < \pi$. Then, for each $z \in \Omega_0$,
there exists a unique $\lambda \in\overline{\mathbb{C}_0}$ such that
$z = e^{\tau \lambda}$ and $|\tau \lambda| < \eta$.
This $\lambda$ is the complex variable in the continuous-time setting
corresponding to the complex variable $z$ in the discrete-time setting.
Put
$
a^* := \min \{|\lambda_n - \lambda_m|/2: 1\leq n,m \leq N-1\}.
$
Then there is no $\lambda \in \mathbb{C}$ such that one has
both
$|\lambda - \lambda_n| < a^*$ and
$|\lambda - \lambda_m| < a^*$
for some $1\leq n ,m \leq N-1$ with $n \not=m$.
By Steps 3) and 4) of the proof of Theorem~2.1 in \cite{Rebarber2006},
there exist
$\tau^* >0$, $\eta \in (0,\pi)$, and
$a \in (0,a^*)$
such that
the following three statements hold
for all $\tau \in (0,\tau^*)$:
\begin{enumerate}
\item \label{it:Omega_3}
For
all $1 \leq n \leq N-1$,
one has $e^{\tau \lambda_n} \in \mathbb{C} \setminus \Omega_3$.
\item \label{it:Omega_4}
For all $z \in \Omega_0 \cap \Omega_1 =: \Omega_4$ and
the corresponding $\lambda \in\overline{\mathbb{C}_0}$ satisfying
$z = e^{\tau \lambda}$ and $|\tau \lambda| < \eta$,
\begin{equation}
\label{eq:finite_case1}
\left|
\sum_{n=1}^{N-1} \frac{\langle b , \psi_n \rangle \langle \phi_n, f \rangle}{\lambda - \lambda_n} +
\sum^{N-1}_{n=1}\frac{1-e^{\tau \lambda_n}}{z - e^{\tau \lambda_n}}
\cdot \frac{\langle b , \psi_n \rangle \langle \phi_n, f \rangle}{\lambda_n}
\right| < \varepsilon.
\end{equation}
\item \label{it:Omega_5}
For all $z \in (\Omega_0 \cap \Omega_2) \cup
\Omega_3 =: \Omega_5$,
\begin{equation}
\label{eq:finite_case2}
\left|
1 + \sum^{N-1}_{n=1}\frac{1-e^{\tau \lambda_n}}{z - e^{\tau \lambda_n}}
\cdot \frac{\langle b , \psi_n \rangle \langle \phi_n, f \rangle}{\lambda_n}
\right|> \varepsilon_{\rm c}.
\end{equation}
\end{enumerate}
In what follows, $\tau,\eta, a>0$ are chosen so that
the above statements \ref{it:Omega_3}--\ref{it:Omega_5} hold.
Suppose that $z \in \rho(
T(\tau) ) \cap \Omega_4$, and let
$\lambda \in \overline{\mathbb{C}_0}$ satisfy
$z = e^{\tau \lambda}$ and $|\tau \lambda| < \eta$.
Combining the estimates
\eqref{eq:suff_large_cont}, \eqref{eq:suff_large_dist}, and
\eqref{eq:finite_case1} with
\[
|1 - F(\lambda I - A)^{-1}B | =
\left|
1 - \sum_{n=1}^{\infty} \frac{\langle b , \psi_n \rangle \langle \phi_n, f \rangle}{\lambda - \lambda_n}
\right| > \varepsilon_{\rm c},
\]
we obtain
\begin{align*}
\left|
1 + \sum_{n=1}^\infty \frac{1-e^{\tau \lambda_n}}{z - e^{\tau \lambda_n}}
\cdot \frac{\langle b , \psi_n \rangle \langle \phi_n, f \rangle}{\lambda_n}
\right| &>
\varepsilon_{\rm c} - 3\varepsilon.
\end{align*}
On the other hand, if $z \in \rho(
T(\tau) ) \cap\Omega_5$, then \eqref{eq:suff_large_dist}
and
\eqref{eq:finite_case2} yield
\begin{align*}
\left|
1 + \sum_{n=1}^\infty \frac{1-e^{\tau \lambda_n}}{z - e^{\tau \lambda_n}}
\cdot \frac{\langle b , \psi_n \rangle \langle \phi_n, f \rangle}{\lambda_n}
\right|
&> \varepsilon_{\rm c}-\varepsilon.
\end{align*}
{\em Step 4:}
It remains to show that
\begin{equation}
\label{eq:Omega45}
\big(
\rho(
T(\tau) ) \cap\Omega_4\big) \cup \big(\rho(
T(\tau) ) \cap\Omega_5 \big)
=
\rho\big (T(\tau)\big) \cap \overline{\mathbb{E}_1}.
\end{equation}
By definition,
\[
(\Omega_0 \cap \Omega_1) \cup
(\Omega_0 \cap \Omega_2) =
\Omega_0 \cap (\Omega_1 \cup \Omega_2) =
\Omega_0 \setminus \{e^{\tau \lambda_n}: 1\leq n \leq N-1 \}.
\]
Moreover, the statement \ref{it:Omega_3} above shows that
\[
\Omega_3 \cap
\{
e^{\tau \lambda_n} : 1 \leq n \leq N-1
\} = \emptyset.
\]
Hence
\begin{align*}
\Omega_4 \cup \Omega_5
&=
(\Omega_0 \setminus \{e^{\tau \lambda_n}: 1\leq n \leq N - 1 \} )
\cup \Omega_3 \\
&= (
\Omega_0 \cup \Omega_3
)
\setminus
\{e^{\tau\lambda_n} : 1 \leq n \leq N-1 \}
\\
&=
\overline{\mathbb{E}_1} \setminus
\{e^{\tau\lambda_n} : 1 \leq n \leq N-1 \}.
\end{align*}
This yields
\begin{align*}
\big(
\rho\big(
T(\tau) \big) \cap\Omega_4\big) \cup \big(\rho\big(
T(\tau) \big) \cap\Omega_5 \big) &=
\rho(
T(\tau) ) \cap (\Omega_4 \cup \Omega_5) \\
&=
\rho\big(
T(\tau) \big) \cap \big(\overline{\mathbb{E}_1} \setminus
\{e^{\tau\lambda_n} : 1 \leq n \leq N-1 \}\big).
\end{align*}
Since
$\sigma(T(\tau)) = \overline{\{e^{\tau \lambda_n}:
n \in \mathbb{N} \}}$,
we have that
\begin{align*}
\rho\big(
T(\tau) \big) \cap \big(\overline{\mathbb{E}_1} \setminus
\{e^{\tau\lambda_n} : 1 \leq n \leq N-1 \}\big)
&= \rho\big(
T(\tau) \big) \cap \overline{\mathbb{E}_1}.
\end{align*}
Thus, \eqref{eq:Omega45} holds.
\end{proof}
The following result can be obtained by a slight modification of
the proof of Lemma~4.6 in \cite{Wakaiki2021SIAM}.
\begin{lmm}
\label{lem:circle_resol}
Let $A$ be a Riesz-spectral operator on a Hilbert space $X$
with simple eigenvalues
$(\lambda_n)_{n \in \mathbb{N}}$.
Let $B \in \mathcal{L}(\mathbb{C},X)$ and
$F \in \mathcal{L}(X,\mathbb{C})$ be such that
$A+BF$ generates a uniformly bounded $C_0$-semigroup on $X$.
Suppose that only finitely many elements of $(\lambda_n)_{n \in \mathbb{N}}$
lie in $\mathbb{C}_0$.
If $\tau >0$ satisfies
\begin{align}
&\tau (\lambda_n - \lambda_m) \not= 2\ell \pi i \qquad
\forall \ell \in \mathbb{Z} \setminus \{0\} ,~\forall n,m \in \mathbb{N}
\text{~with~$\lambda_n,\lambda_m \in \mathbb{C}_{0}$}
\label{eq:sampling_cond}
\\
\label{eq:1_FRS}
&F R\big(z, T(\tau)\big)S(\tau) \not=1
\qquad \forall z \in \rho\big(T(\tau)\big) \cap\mathbb{E}_1,
\end{align}
then $\mathbb{E}_1 \subset \rho(\Delta(\tau))$.
\end{lmm}
The desired inclusion $\mathbb{E}_1 \subset \rho (\Delta(\tau))$
follows from Lemmas~\ref{lem:cont_time_trans_func_bound}, \ref{lem:discrete_time_trans_func_bound}, and \ref{lem:circle_resol}.
\begin{proof}[Proof of Theorem~\ref{lem:resol_T_SF}]
Lemmas~\ref{lem:cont_time_trans_func_bound} and \ref{lem:discrete_time_trans_func_bound},
together with (A\ref{assump:finite_unstable}), show that
there exist $\varepsilon >0$ and $\tau^* >0$ such that
for all $\tau \in (0,\tau^*)$,
\begin{align}
&\tau (\lambda_n - \lambda_m) \not= 2\ell \pi i \qquad
\forall \ell \in \mathbb{Z} \setminus \{0\} ,~\forall n,m \in \mathbb{N}
\text{~with~$1 \leq n,m \leq N_{\rm a} - 1$}
\\
&\big|1 - F R\big(z, T(\tau)\big)S(\tau)\big|
>\varepsilon \qquad \forall z \in \rho\big(T(\tau)\big) \cap \overline{\mathbb{E}_1}.
\end{align}
Hence we obtain $\mathbb{E}_1 \subset \rho(\Delta(\tau))$
for all $\tau \in (0,\tau^*)$
by Lemma~\ref{lem:circle_resol}.
\end{proof}
\section{Application of resolvent conditions to discretized system}
\label{sec:resolvent_discretized_sys}
In this section, we complete the proof of the main result (Theorem~\ref{thm:SD_SS}).
To do so, we prove that
for sufficiently small sampling period $\tau>0$,
the operator $\Delta(\tau) = T(\tau) + S(\tau)F$ satisfies
the integral conditions on resolvents given in
Theorem~\ref{thm:strong_stability_resol} and
Proposition~\ref{prop:discrete_poly_decay}.
We divide the resolvent $R(z,\Delta(\tau))$
into two terms, by applying
the well-known Sherman-Morrison-Woodbury formula
presented in the next lemma.
This formula can be obtained from a straightforward calculation.
\begin{prpstn}
\label{Prop:SMW}
Let $X$ and $U$ be Banach spaces and let
$A:D(A)\subset X \to X$ be a closed linear operator. Take
$B \in \mathcal{L}(U,X)$, $F \in \mathcal{L}(X,U)$, and $\lambda \in \rho(A)$.
If $1 \in \rho (FR(\lambda,A)B) $, then $\lambda \in \rho (A+BF)$ and
\[
R(\lambda,A+BF) = R(\lambda,A) + R(\lambda,A) B
(I-FR(\lambda,A)B)^{-1} FR(\lambda,A).
\]
\end{prpstn}
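As a purely illustrative aside (not used in the sequel), the identity of Proposition~\ref{Prop:SMW} can be checked numerically in finite dimensions. In the following Python sketch, the matrices $A$, $B$, $F$, the dimension, and the point $\lambda$ are arbitrary choices and carry no meaning for the systems studied in this paper.
\begin{verbatim}
import numpy as np

# Finite-dimensional sanity check of the Sherman-Morrison-Woodbury resolvent
# identity; A, B, F, the dimension, and lambda are arbitrary illustrative choices.
rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))            # plays the role of the generator
B = rng.standard_normal((n, 1))            # rank-one input operator (U = C)
F = rng.standard_normal((1, n))            # rank-one feedback operator
lam = 3.0 + 2.0j                           # a point in rho(A) for this sample

R = np.linalg.inv(lam * np.eye(n) - A)                 # R(lambda, A)
lhs = np.linalg.inv(lam * np.eye(n) - (A + B @ F))     # R(lambda, A + BF)
scalar = 1.0 - (F @ R @ B).item()                      # 1 - F R(lambda, A) B
rhs = R + (R @ B) @ (F @ R) / scalar
print(np.allclose(lhs, rhs))                           # expected output: True
\end{verbatim}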
Suppose that Assumption~\ref{assum:for_MR} holds.
By
Lemmas~\ref{lem:cont_time_trans_func_bound} and \ref{lem:discrete_time_trans_func_bound},
if the sampling period $\tau>0$ is sufficiently small, then
for all $z \in \rho \big(T(\tau)\big) \cap \overline{\mathbb{E}_1}$,
one has
$1 \in \rho (FR(z,T(\tau))S(\tau)) $.
Hence the Sherman-Morrison-Woodbury formula presented in Proposition~\ref{Prop:SMW} yields
\[
R(z, T(\tau)+S(\tau)F) =
R\big(z, T(\tau)\big) +
\frac{R\big(z, T(\tau)\big) S(\tau)FR\big(z, T(\tau)\big) }
{1 - FR\big(z, T(\tau)\big) S(\tau) }.
\]
In what follows, we separately investigate
the integrals of
$
\|R(z, T(\tau))\|^2$ and
$\|R(z, T(\tau)) S(\tau)FR(z, T(\tau))\|^2$.
\subsection{Integral of $\bm {\|R(z, T(\tau))\|^2}$}
We obtain the following result on the integral of
$\|R(z, T(\tau))\|^2$ on circles.
\begin{lmm}
\label{lem:Resol_T_int}
If {\rm (A\ref{assump:finite_unstable})} and {\rm (A\ref{assump:imaginary})}
hold, then
the $C_0$-semigroup $(T(t))_{t\geq 0}$ satisfies
the following properties for each $\tau >0$:
\begin{enumerate}
\item For all $x,y \in X$,
\begin{subequations}
\label{eq:Resol_T_int}
\begin{align}
&\lim_{r\downarrow 1} ~(r-1)
\int^{2\pi}_0
\big\|R\big(re^{i\theta}, T(\tau)\big) x\big\|^2 d\theta = 0 \label{eq:Resol_T_inta}\\
&\lim_{r\downarrow 1} ~(r-1)
\int^{2\pi}_0
\big\|R\big(re^{i\theta}, T(\tau)^*\big) y\big\|^2 d\theta = 0.\label{eq:Resol_T_intb}
\end{align}
\end{subequations}
\item For all $x \in \mathcal{D}^{\alpha/2}$ and all
$y \in \mathcal{D}^{\alpha/2}_*$,
\begin{subequations}
\label{eq:poly_Resol_T_int}
\begin{align}
&\lim_{r\downarrow 1} \frac{1}{\log \frac{r}{r-1}}
\int^{2\pi}_0
\big\|R\big(re^{i\theta}, T(\tau)\big) x\big\|^2 d\theta = 0 \label{eq:poly_Resol_T_inta}\\
&\lim_{r\downarrow 1} \frac{1}{\log \frac{r}{r-1}}
\int^{2\pi}_0
\big\|R\big(re^{i\theta}, T(\tau)^*\big) y\big\|^2 d\theta = 0.\label{eq:poly_Resol_T_intb}
\end{align}
\end{subequations}
\end{enumerate}
\end{lmm}
\begin{proof}
a)
Take $\tau >0$.
To obtain \eqref{eq:Resol_T_inta}, we
apply the spectral decomposition by the projection $\Pi$ given in \eqref{eq:projection}.
Let $x \in X$, and
define $x^+:= \Pi x \in X^+$ and $x^-:= (I-\Pi)x \in X^-$.
Since $(d_1+d_2)^2 \leq 2(d_1^2+d_2^2)$
for every $d_1,d_2 \geq 0$,
it follows that in order to show \eqref{eq:Resol_T_inta}, it suffices
to show that
\[
\lim_{r \downarrow 1} ~(r-1)
\int^{2\pi}_0 \big\|R\big(re^{i\theta}, T(\tau)\big) x^+\big\|^2 d\theta = 0,\quad
\lim_{r \downarrow 1} ~(r-1)
\int^{2\pi}_0 \big\|R\big(re^{i\theta}, T(\tau)\big) x^-\big\|^2 d\theta = 0.
\]
There exist constants $r_0 > 1$ and
$c_0 >0$ such that $|r e^{i \theta} - e^{\tau \lambda_n} | \geq c_0$ for all $r \in (1,r_0)$,
$\theta \in [0,2\pi)$, and
$1 \leq n \leq N_{\rm a}-1$. We have that
for all $r \in (1,r_0)$,
\begin{align}
\int^{2\pi}_0
\big\|R\big(r e^{i \theta}, T(\tau)\big) x^+\big\|^2 d\theta
&\leq M_{\rm b} \sum_{n=1}^{N_{\rm a}-1} |\langle
x^+ , \psi_n
\rangle|^2 \int^{2\pi}_0 \frac{1}{|re^{i\theta} - e^{\tau \lambda_n} |^2 } d\theta \notag \\
&\leq \frac{2\pi M_{\rm b}}{ c_0^2} \sum_{n=1}^{N_{\rm a}-1} |\langle
x^+ , \psi_n
\rangle|^2.
\label{eq:x_plus_bound}
\end{align}
Therefore,
\[
\lim_{r\downarrow 1} ~(r-1)
\int^{2\pi}_0
\big\|R\big(r e^{i \theta}, T(\tau)\big) x^+\big\|^2 d\theta = 0.
\]
Since
the discrete semigroup
$(T^-(\tau)^k )_{k \in \mathbb{N}}$ is strongly stable
by Lemma~\ref{lem:T_minus_poly_stable}, we see from
Theorem~\ref{thm:strong_stability_resol} that
\[
\lim_{r\downarrow 1} ~(r-1)
\int^{2\pi}_0
\big\|R\big(r e^{i \theta}, T^-(\tau)\big) x^-\big\|^2 d\theta =0.
\]
Hence \eqref{eq:Resol_T_inta} holds.
Applying the spectral decomposition for $A^*$ as in the case of $A$,
we obtain \eqref{eq:Resol_T_intb}.
b) Take
$x \in \mathcal{D}^{\alpha/2}$, and
define $x^+:= \Pi x \in X^+$ and $x^-:= (I-\Pi)x \in X^-$.
Then $x^- \in D((-A^-)^{\alpha/2})$.
By \eqref{eq:x_plus_bound}, we obtain
\[
\lim_{r\downarrow 1} \frac{1}{\log \frac{r}{r-1}}
\int^{2\pi}_0
\big\|R\big(r e^{i \theta}, T(\tau)\big) x^+\big\|^2 d\theta = 0.
\]
Since
$(T^-(t))_{t \geq 0}$ is polynomially stable with parameter $\alpha$
by Lemma~\ref{lem:T_minus_poly_stable}, it follows
from Theorem~\ref{thm:decay_charac} that
$x^- \in D((-A^-)^{\alpha/2})$ satisfies
\[
\|T^-(\tau)^k x^-\| =
\|T^-(k\tau) x^-\| = o\left(
\frac{1}{\sqrt{k}}
\right) \qquad (k \to \infty).
\]
Proposition~\ref{prop:discrete_poly_decay} implies that
\[
\lim_{r\downarrow 1} \frac{1}{\log \frac{r}{r-1}}
\int^{2\pi}_0
\big\|R\big(r e^{i \theta}, T^-(\tau)\big) x^-\big\|^2 d\theta =0.
\]
Therefore, \eqref{eq:poly_Resol_T_inta} holds. Analogously,
we obtain
\eqref{eq:poly_Resol_T_intb} by the spectral decomposition for $A^*$.
\end{proof}
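The first assertion can be visualized on a diagonal toy model; the following Python sketch is only an illustration and not part of the proof. The eigenvalues $\lambda_n = -n^{-2} + in$ (so $\alpha = 2$) and the vector $x$ with coordinates $x_n = 1/n$ are hypothetical choices, and for a diagonal operator the circle integral can be evaluated mode by mode via $\int_0^{2\pi}|re^{i\theta}-\mu|^{-2}\,d\theta = 2\pi/(r^2-|\mu|^2)$ for $|\mu|<r$.
\begin{verbatim}
import numpy as np

# Illustration of (r-1) * int_0^{2pi} ||R(r e^{i theta}, T(tau)) x||^2 dtheta -> 0
# (cf. \eqref{eq:Resol_T_inta}) for a diagonal model of T(tau); the eigenvalues
# and the vector x are hypothetical choices.
tau = 0.1
n = np.arange(1, 100001)
mu_abs2 = np.exp(-2 * tau / n**2)     # |e^{tau lambda_n}|^2 with Re lambda_n = -1/n^2
x2 = 1.0 / n**2                       # |x_n|^2

for r in [1.5, 1.1, 1.01, 1.001, 1.0001]:
    # mode-by-mode circle integral:
    # int_0^{2pi} |r e^{i theta} - mu_n|^{-2} dtheta = 2 pi / (r^2 - |mu_n|^2)
    integral = 2 * np.pi * np.sum(x2 / (r**2 - mu_abs2))
    print(r, (r - 1) * integral)      # decreases toward 0 as r tends to 1
\end{verbatim}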
\subsection{Integral of $\bm {\|R(z, T(\tau)) S(\tau)FR(z, T(\tau))\|^2}$}
\label{sec:Integral_RSTR}
Next, we study
the integral of $\|R(z, T(\tau)) S(\tau) FR(z, T(\tau))\|^2$.
\begin{prpstn}
\label{prop:RSFR_bound}
Suppose that {\rm (A\ref{assump:finite_unstable})} and {\rm (A\ref{assump:imaginary})} hold.
Let $b \in \mathcal{D}^{\beta}$
and $f \in \mathcal{D}_*^{\gamma}$
for some $\beta ,\gamma \geq 0$ satisfying $\beta+ \gamma \geq \alpha$.
Then the following statements hold for each $\tau >0$:
\begin{enumerate}
\item
One has
\begin{align}
\label{eq:RSFR_estimate}
\lim_{r \downarrow 1} ~(r-1)
\int^{2\pi}_0\big\|R\big(re^{i\theta}, T(\tau)\big) S(\tau)\big\|^2 ~\! \big\|FR\big(re^{i\theta}, T(\tau)\big)\big\|^2 d\theta = 0.
\end{align}
\item If $\alpha \leq 2$ or $\beta \geq \alpha$, then
\begin{subequations}
\label{eq:RSFR_estimate_poly_decay}
\begin{align}
&\lim_{r\downarrow 1}
\frac{1}{\log
\frac{r}{r-1}}
\int^{2\pi}_0 \big\|R\big(re^{i\theta}, T(\tau)\big) S(\tau)\big\|^2
~\!
\big\|F^+R\big(re^{i\theta}, T^+(\tau)\big) \big\|^2 d\theta = 0
\label{eq:RSFR_estimate_poly_decay_p}\\
&\lim_{r\downarrow 1}
\frac{1}{\log
\frac{r}{r-1}}
\int^{2\pi}_0 \big\|R\big(re^{i\theta}, T(\tau)\big) S(\tau)\big\|^2
~\!
\big\|F^-R\big(re^{i\theta},
T^-(\tau)\big) (-A^-)^{-\alpha/2} \big\|^2 d\theta= 0.
\label{eq:RSFR_estimate_poly_decay_m}
\end{align}
\end{subequations}
\end{enumerate}
\end{prpstn}
To prove Proposition~\ref{prop:RSFR_bound},
we start with a simple result.
Recall that
$b^+$, $b^-$, $f^+$, and $f^-$ were defined as
$b^+ := \Pi b$, $b^- := (I-\Pi) b$, $f^+ :=
\Pi^* f$, and $f^- := (I - \Pi^*) f$ in Section~\ref{sec:Spec_decomp}.
\begin{lmm}
\label{lem:F_adjoint}
Suppose that
{\rm (A\ref{assump:finite_unstable})} holds.
Let $\tau >0$ and $z \in \rho(T(\tau))$.
Under the spectral decomposition described in Section~\ref{sec:Spec_decomp},
the following inequalities hold:
\begin{align*}
\big\|F^+R\big(z,T^+(\tau)\big)\big\|
&\leq \|R\big(\overline{z},T(\tau)^*\big) f^+\| \\
\big\|F^-R\big(z,T^-(\tau)\big) (-A^-)^{-\alpha/2}\big\|
&\leq
\big\|
R\big(\overline{z},
T(\tau)^*\big) (-A^-_*)^{-\alpha/2}f^-
\big\|.
\end{align*}
\end{lmm}
\begin{proof}
We have that
\begin{align*}
\big\|F^+R\big(z,T^+(\tau)\big)\big\|
&= \sup\big\{ \big|F^+R\big(z,T^+(\tau)\big) x^+ \big|:x^+ \in X^+ \text{~with~} \|x^+\| = 1 \big\} \\
&=\sup\big\{ \big| \big\langle
R\big(z,T(\tau)\big) x^+,f^+ \big\rangle \big|
:x^+ \in X^+ \text{~with~} \|x^+\| = 1 \big\} \\
&=\sup\big\{ \big|\big\langle
x^+,R\big(\overline{z},T(\tau)^*\big) f^+ \big\rangle \big|
:x^+ \in X^+ \text{~with~} \|x^+\| = 1 \big\} \\
&\leq \|R\big(\overline{z},T(\tau)^*\big) f^+\|
\end{align*}
for all $\tau >0$ and $z \in \rho(T(\tau))$.
Analogously,
\begin{align*}
\big\|F^-R\big(z,T^-(\tau)\big) (-A^-)^{-\alpha/2}\big\|
&=\sup\big\{ \big|\big\langle
R\big(z,T(\tau)\big) (-A^-)^{-\alpha/2}x^- ,f^-\big\rangle \big|:
x^- \in X^- \text{~with~} \|x^-\| = 1 \big\}.
\end{align*}
A routine calculation shows that
\begin{align*}
\big\langle R\big(z,T(\tau)\big) (-A^-)^{-\alpha/2}x^-, f^- \big\rangle
=
\sum_{n=N_{\rm a}}^{\infty}
\frac{\langle x^-,\psi_n \rangle \langle \phi_n,f^- \rangle}
{(-\lambda_n)^{\alpha/2} (z - e^{\lambda_n \tau})} =
\big\langle x^-, R\big(\overline{z},
T(\tau)^*\big) (-A^-_*)^{-\alpha/2}f^- \big\rangle.
\end{align*}
Therefore,
\begin{align*}
\big\|F^-R\big(z,T^-(\tau)\big) (-A^-)^{-\alpha/2}\big\|
&=
\sup\big\{ \big|\big\langle x^-, R\big(\overline{z},
T(\tau)^*\big) (-A^-_*)^{-\alpha/2}f^- \big\rangle\big|:
x^- \in X^- \text{~with~} \|x^-\| = 1 \big\} \\
&\leq
\big\|
R\big(\overline{z},
T(\tau)^*\big) (-A^-_*)^{-\alpha/2}f^-
\big\|
\end{align*}
for all $\tau >0$ and $z \in \rho(T(\tau))$.
\end{proof}
We divide the proof of \eqref{eq:RSFR_estimate} and \eqref{eq:RSFR_estimate_poly_decay} into three cases as in the proof of Lemma~19 in \cite{Paunonen2014JDE}:
(i) $\beta \geq \alpha$; (ii) $\gamma \geq \alpha$; and (iii) $\beta,\gamma <\alpha$.
For the proof,
we introduce some constants.
Take $\tau >0$, and
let $N_{\rm b} \geq N_{\rm a}$ satisfy \eqref{eq:lambda_cond_large}.
Under (A\ref{assump:imaginary}),
there exist constants
$r_1 > 1$ and
$c_1 >0$ such that
\begin{equation}
\label{eq:unstable_eig_bound}
|z - e^{\tau \lambda_n} | \geq c_1
\end{equation}
for all $z \in \mathbb{D}_{r_1} \cap \mathbb{E}_1$ and
$1 \leq n \leq N_{\rm b}-1$.
\subsubsection{Case $\beta \geq \alpha$}
First, we consider the case $\beta \geq \alpha$.
\begin{lmm}
\label{lem:g_zero}
Suppose that
{\rm (A\ref{assump:finite_unstable})} and {\rm (A\ref{assump:imaginary})}
hold. Let
$b \in \mathcal{D}^{\beta}$ for some $\beta \geq \alpha$ and $f \in X$.
Then
\eqref{eq:RSFR_estimate} and \eqref{eq:RSFR_estimate_poly_decay}
hold for each $\tau >0$.
\end{lmm}
\begin{proof}
Since
\[
\big\|FR\big(z, T(\tau)\big)\big\| = \big\|R\big(\overline{z}, T(\tau)^*\big) f\big\|\qquad \forall z \in \rho\big(T(\tau)\big),
\]
it follows from Lemma~\ref{lem:Resol_T_int}.a) that
\[
\lim_{r \downarrow 1} ~(r-1)
\int^{2\pi}_0 \big\|FR\big(re^{i\theta}, T(\tau)\big)\big\|^2 d\theta = 0.
\]
In order to prove \eqref{eq:RSFR_estimate}, it suffices to verify that
\begin{equation}
\label{eq:gamma_zero_bounded}
\sup_{z \in \mathbb{D}_{r_1} \cap \mathbb{E}_{1}} \big\|R\big(z, T(\tau)\big)S(\tau)\big\|< \infty.
\end{equation}
Take $z \in \mathbb{D}_{r_1} \cap \mathbb{E}_{1}$.
We obtain
\begin{align*}
\big\|R\big(z, T(\tau)\big)S(\tau)\big\|^2 \leq
M_{\rm b}
\sum_{n=1}^\infty
\left|
\frac{1 - e^{\tau \lambda_n}}{z - e^{\tau \lambda_n}} \cdot
\frac{\langle b ,\psi_n\rangle }{\lambda_n}
\right|^2.
\end{align*}
By (A\ref{assump:finite_unstable}) and (A\ref{assump:imaginary}),
there is a constant $\kappa >0$ such that $|\lambda_n| \geq \kappa$
for all $n \in \mathbb{N}$.
By \eqref{eq:unstable_eig_bound},
\begin{align}
\label{eq:N1_smaller_gamma0}
\sum_{n=1}^{N_{\rm b}-1}
\left|
\frac{1 - e^{\tau \lambda_n}}{z - e^{\tau \lambda_n}} \cdot
\frac{\langle b ,\psi_n\rangle }{\lambda_n}
\right|^2 \leq
\frac{(1+e^{\tau \sup_{n \in \mathbb{N} } \mathop{\rm Re}\nolimits \lambda_n })^2}{c_1^2 \kappa^2}
\sum_{n=1}^{N_{\rm b}-1} \left|
\langle b ,\psi_n\rangle
\right|^2.
\end{align}
Lemma~\ref{lem:frac_lam_bound} with $\widetilde \alpha= \beta$ shows that
\begin{align}
\label{eq:N1_larger_gamma0}
\sum_{n=N_{\rm b}}^\infty
\left|
\frac{1 - e^{\tau \lambda_n}}{z - e^{\tau \lambda_n}} \cdot
\frac{\langle b ,\psi_n\rangle }{\lambda_n}
\right|^2
&\leq \Upsilon_1^2
\sum_{n=N_{\rm b}}^\infty
\left|
\langle b ,\psi_n\rangle
\right|^2 + \Upsilon_2^2
\sum_{n=N_{\rm b}}^\infty
|\lambda_n|^{2\beta} ~\! \left|
\langle b ,\psi_n\rangle
\right|^2
\end{align}
for some $\Upsilon_1,\Upsilon_2 >0$.
Since
$b \in \mathcal{D}^{\beta}$, the inequalities \eqref{eq:N1_smaller_gamma0}
and \eqref{eq:N1_larger_gamma0}
yield
\eqref{eq:gamma_zero_bounded}.
To show the second assertion \eqref{eq:RSFR_estimate_poly_decay},
we have from the estimate \eqref{eq:unstable_eig_bound} that
\begin{equation*}
\left\|R\big(z, T(\tau)^*\big) y^+ \right\|^2
\leq \frac{1}{M_{\rm a}}
\sum_{n = 1}^{N_{\rm a}-1} \frac{
|\langle y^+,\phi_n\rangle |^2
}{|z - e^{\tau \lambda_n }|^2}
\leq
\frac{M_{\rm b}}{M_{\rm a}} \cdot
\frac{\left\| y^+\right\|^2}{c_1^2}
\end{equation*}
for all $y^+ \in X^+_*$ and
$z \in \mathbb{D}_{r_1} \cap \mathbb{E}_1$.
By this inequality and Lemma~\ref{lem:F_adjoint},
\begin{align*}
\limsup_{r \downarrow 1} ~\!\frac{1}{\log \frac{r}{r-1}}
\int^{2\pi}_0
\big\|F^+R\big(re^{i\theta}, T^+(\tau)\big) \big\|^2 d\theta &\leq
\limsup_{r \downarrow 1} ~\!\frac{1}{\log \frac{r}{r-1}}
\int^{2\pi}_0 \big\|R\big(re^{i\theta}, T(\tau)^*\big)f^+ \big\|^2 d\theta \\
&= 0.
\end{align*}
Combining Lemmas~\ref{lem:Resol_T_int}.b) and \ref{lem:F_adjoint},
we obtain
\begin{align*}
&\limsup_{r \downarrow 1} ~\!\frac{1}{\log \frac{r}{r-1}}
\int^{2\pi}_0
\big\|F^-R\big(re^{i\theta},
T^-(\tau)\big) (-A^-)^{-\alpha/2} \big\|^2 d\theta \\
&\qquad \leq
\limsup_{r \downarrow 1} ~\!\frac{1}{\log \frac{r}{r-1}}
\int^{2\pi}_0 \big\|R\big(re^{i\theta},
T(\tau)^*\big) (-A^-_*)^{-\alpha/2}f^- \big\|^2 d\theta \\
&\qquad = 0.
\end{align*}
The second assertion \eqref{eq:RSFR_estimate_poly_decay}
then follows from the estimate \eqref{eq:gamma_zero_bounded}.
\end{proof}
\subsubsection{Case $\gamma \geq \alpha$}
For the case $\gamma \geq \alpha$ and the case $\beta,\gamma < \alpha$, we
show the following preliminary result.
\begin{lmm}
\label{lem:z_lambda_bound}
Suppose that {\rm (A\ref{assump:finite_unstable})} and {\rm (A\ref{assump:imaginary})} hold.
Fix $\tau >0$ and let $r_1 >1$ satisfy \eqref{eq:unstable_eig_bound}
for all $z \in \mathbb{D}_{r_1} \cap \mathbb{E}_1$,
$1 \leq n \leq N_{\rm b}-1$ and some $c_1 >0$. Then
there exists a constant $\Upsilon_0 >0$ such that
for all $z \in \mathbb{D}_{r_1} \cap \mathbb{E}_1$ and
$n \in \mathbb{N}$,
\begin{equation}
\label{eq:Up_zero}
\frac{1}{|z-e^{\tau \lambda_n} |~\! |\lambda_n|^{\alpha}} \leq \Upsilon_0.
\end{equation}
\end{lmm}
\begin{proof}
If (A\ref{assump:finite_unstable}) and (A\ref{assump:imaginary}) hold,
we have a constant $\kappa >0$ satisfying $|\lambda_n| \geq \kappa$
for all $n \in \mathbb{N}$.
If $1 \leq n \leq N_{\rm b} - 1$, then
\eqref{eq:unstable_eig_bound} gives
\[
\frac{1}{|z-e^{\tau \lambda_n} |~\! |\lambda_n|^{\alpha}}
\leq \frac{1}{c_1 \kappa^{\alpha}}
\]
for all $z \in \mathbb{D}_{r_1} \cap \mathbb{E}_1$.
Let $n \geq N_{\rm b}$ and $z \in \mathbb{E}_1$.
We consider the following three cases as in the proof of
Lemma~\ref{lem:frac_lam_bound}:
(i) $\tau \mathop{\rm Re}\nolimits \lambda_n \leq -1$; (ii) $\tau \mathop{\rm Re}\nolimits \lambda_n > -1$
and $\mathop{\rm Re}\nolimits\lambda_n \leq -\delta$; and (iii) $\tau \mathop{\rm Re}\nolimits \lambda_n > -1$
and $\lambda_n \in \Omega_{\alpha,\Upsilon}$.
Moreover,
we use the estimate
\begin{align*}
\frac{1}{|z-e^{\tau \lambda_n} |~\! |\lambda_n|^{\alpha}}
\leq
\frac{1}{(1-e^{\tau \mathop{\rm Re}\nolimits \lambda_n} )|\lambda_n|^{\alpha}}
=
\left|
\frac{1}{\frac{1-e^{\tau \mathop{\rm Re}\nolimits \lambda_n}}{\tau \mathop{\rm Re}\nolimits \lambda_n}}\right|
\frac{1}{\tau |\mathop{\rm Re}\nolimits \lambda_n|~\! |\lambda_n|^{\alpha}}.
\end{align*}
In the case (i) $\tau \mathop{\rm Re}\nolimits \lambda_n \leq -1$, we have that
\[
\frac{1}{|1-e^{\tau \mathop{\rm Re}\nolimits \lambda_n} |~\! |\lambda_n|^{\alpha}} \leq \frac{1}{(1-e^{-1})\kappa^\alpha}.
\]
We next consider the case (ii) $\tau \mathop{\rm Re}\nolimits \lambda_n > -1$
and $\mathop{\rm Re}\nolimits\lambda_n \leq -\delta$.
Since the mean value theorem implies that
\begin{equation}
\label{eq:dis_bount2_again}
\frac{1 - e^{\tau \mathop{\rm Re}\nolimits \lambda_n}}{\tau |\mathop{\rm Re}\nolimits \lambda_n|} \geq e^{-1},
\end{equation}
it follows that
\[
\left|
\frac{1}{\frac{1-e^{\tau \mathop{\rm Re}\nolimits \lambda_n}}{\tau \mathop{\rm Re}\nolimits \lambda_n}}\right| ~\!
\frac{1}{\tau |\mathop{\rm Re}\nolimits \lambda_n|~\! |\lambda_n|^{\alpha}} \leq
\frac{e}{\tau \delta \kappa^{\alpha}}.
\]
Finally, we study the case (iii) $\tau \mathop{\rm Re}\nolimits \lambda_n > -1$
and $\lambda_n \in \Omega_{\alpha,\Upsilon}$.
Note that the estimate \eqref{eq:dis_bount2_again} holds also
in the case (iii).
Since $\lambda_n \in \Omega_{\alpha,\Upsilon}$ implies
\[
\frac{1}{|\mathop{\rm Re}\nolimits \lambda_n|} \leq \frac{|\mathop{\rm Im}\nolimits \lambda_n|^{\alpha}}{\Upsilon},
\]
we obtain
\begin{align*}
\left|
\frac{1}{\frac{1-e^{\tau \mathop{\rm Re}\nolimits \lambda_n}}{\tau \mathop{\rm Re}\nolimits \lambda_n}}\right| ~\!
\frac{1}{\tau |\mathop{\rm Re}\nolimits \lambda_n|~\! |\lambda_n|^{\alpha}} \leq
\frac{e}{\tau \Upsilon} \cdot \frac{|\mathop{\rm Im}\nolimits \lambda_n|^{\alpha}}{|\lambda_n|^{\alpha}}
\leq \frac{e}{\tau \Upsilon} .
\end{align*}
Define
a constant $\Upsilon_0>0$ by
\[
\Upsilon_0 :=
\max\left\{
\frac{1}{c_1 \kappa^{\alpha}},~
\frac{1}{(1-e^{-1})\kappa^\alpha},~\frac{e}{\tau \delta
\kappa^{\alpha}},~\frac{e}{\tau \Upsilon}
\right\}.
\]
Then \eqref{eq:Up_zero} holds for all $z \in \mathbb{D}_{r_1} \cap \mathbb{E}_1$ and
$n \in \mathbb{N}$.
\end{proof}
We are now in a position to examine the case $\gamma \geq \alpha$.
\begin{lmm}
\label{lem:b_zero}
Suppose that
{\rm (A\ref{assump:finite_unstable})} and
{\rm (A\ref{assump:imaginary})} hold. Let
$b \in X$ and $f \in \mathcal{D}_*^{\gamma}$ for some $\gamma \geq \alpha$.
Then
\eqref{eq:RSFR_estimate} holds for all $\tau >0$.
Moreover, if $\alpha \leq 2$, then
\eqref{eq:RSFR_estimate_poly_decay}
holds for all $\tau >0$.
\end{lmm}
\begin{proof}
Take $\tau >0$ arbitrarily, and define
\[
b_0:= \int^\tau_0 T(s) bds.
\]
Lemma~\ref{lem:Resol_T_int}.a) implies that
\[
\lim_{r \downarrow 1} ~(r-1)
\int^{2\pi}_0 \big\|R\big(re^{i\theta}, T(\tau)\big) b_0\big\|^2 d\theta = 0.
\]
To obtain \eqref{eq:RSFR_estimate},
it suffices to show that
\begin{equation}
\label{eq:beta_zero_bounded}
\sup_{z \in \mathbb{D}_{r_1} \cap \mathbb{E}_{1}} \big\|FR\big(z, T(\tau)\big)\big\| < \infty.
\end{equation}
Take $z \in \mathbb{D}_{r_1} \cap \mathbb{E}_{1}$.
Under (A\ref{assump:finite_unstable}) and (A\ref{assump:imaginary}),
there exists a constant $\kappa>0$ such that $|\lambda_n| \geq \kappa$
for all $n \in \mathbb{N}$. Hence
\begin{align*}
\big\|FR\big(z, T(\tau)\big)\big\|^2 &=
\big\|R\big(\overline{z}, T(\tau)^*\big)f\big\|^2 \\
&\leq
\frac{1}{M_{\rm a} } \sum_{n=1}^\infty
\left|
\frac{\langle f, \phi_n \rangle }{\overline{z} - e^{\tau \overline{\lambda_n}}}
\right|^2 \\
&\leq
\frac{1}{M_{\rm a}}
\left( \sum_{n=1}^{N_{\rm b}-1}
\left|
\frac{\langle f, \phi_n \rangle }{\overline{z} - e^{\tau \overline{\lambda_n}}}
\right|^2 + \frac{1}{\kappa^{2(\gamma-\alpha)}}
\sum_{n=N_{\rm b}}^\infty
\frac{|\lambda_n|^{2\gamma}~\! |\langle f, \phi_n \rangle|^2}{|z - e^{\tau \lambda_n}|^2 ~\! |\lambda_n|^{2\alpha}} \right).
\end{align*}
It follows from \eqref{eq:unstable_eig_bound}
that
\[
\sum_{n=1}^{N_{\rm b}-1}
\left|
\frac{\langle f, \phi_n \rangle }{\overline{z} - e^{\tau \overline{\lambda_n}}}
\right|^2 \leq \frac{1}{c_1^2}
\sum_{n=1}^{N_{\rm b}-1} |\langle f, \phi_n \rangle |^2.
\]
Moreover,
Lemma~\ref{lem:z_lambda_bound} gives
\begin{align*}
\sum_{n=N_{\rm b}}^\infty
\frac{|\lambda_n|^{2\gamma}~\! |\langle f, \phi_n \rangle|^2}{|z - e^{\tau \lambda_n}|^2 ~\! |\lambda_n|^{2\alpha}}&\leq
\Upsilon_0^2
\sum_{n=N_{\rm b}}^\infty |\lambda_n|^{2\gamma}~\! |\langle f, \phi_n \rangle|^2
\end{align*}
for some $\Upsilon_0 >0$.
Therefore,
we obtain \eqref{eq:beta_zero_bounded} by $f \in \mathcal{D}_*^{\gamma}$.
Assume that $\alpha \leq 2$.
Then
$b_0 \in D(A) \subset \mathcal{D}^{\alpha/2}$.
Lemma~\ref{lem:Resol_T_int}.b) shows that
\[
\lim_{r \downarrow 1} \frac{1}{\log \frac{r}{r-1}}
\int^{2\pi}_0
\big\|R\big(re^{i\theta}, T(\tau)\big) b_0\big\|^2 d\theta = 0.
\]
Moreover,
for all $z \in \rho(T(\tau))$,
\begin{align*}
\big\|F^+R\big(z, T^+(\tau)\big) \big\| &=
\Big\|FR\big(z, T(\tau)\big)\big|_{X^+}\Big\| \leq
\big\|FR\big(z, T(\tau)\big)\big\|
\\
\big\|F^-R\big(z, T^-(\tau)\big)(-A^-)^{-\alpha/2}\big\|
&\leq
\Big\|FR\big(z, T(\tau)\big)\big|_{X^-} \Big\|
~\! \big\| (-A^-)^{-\alpha/2}\big\|
\leq \big\|FR\big(z, T(\tau)\big)\big\| ~\!
\big\| (-A^-)^{-\alpha/2}\big\|.
\end{align*}
Combining these estimates with \eqref{eq:beta_zero_bounded}
yields
\eqref{eq:RSFR_estimate_poly_decay}.
\end{proof}
\subsubsection{Case $\beta,\gamma < \alpha$}
Finally, we consider the case $\beta,\gamma < \alpha$.
\begin{lmm}
\label{lem:bg_posi1}
Suppose that
{\rm (A\ref{assump:finite_unstable})} and
{\rm (A\ref{assump:imaginary})}
hold. Let $0 \leq \beta,\gamma < \alpha $
and $\beta + \gamma \geq \alpha$. If $b \in \mathcal{D}^{\beta}$
and $f \in \mathcal{D}_*^{\gamma}$,
then
\eqref{eq:RSFR_estimate} holds for all $\tau >0$.
\end{lmm}
\begin{proof}
Since $\gamma < \alpha \leq \beta + \gamma$, we obtain $\beta > 0$, and similarly $\gamma > 0$.
There exist
$0 < \beta_1 \leq \beta$ and $0<\gamma_1 \leq \gamma$
such that $\beta_1+\gamma_1 =\alpha$.
Since $b \in \mathcal{D}^{\beta}$
and $f \in \mathcal{D}_*^{\gamma}$,
we obtain $b^- \in D((-A^-)^{\beta_1})$ and
$f^- \in D((-A_*^-)^{\gamma_1})$.
Take $\tau >0$ and $z \in \mathbb{D}_{r_1} \cap \mathbb{E}_1 \subset
\rho (T(\tau))$.
Define
\begin{equation}
\label{eq:b1_def}
b_1 := \int^\tau_0 T(s)(-A^-)^{\beta_1} b^-ds \in X^-.
\end{equation}
Since the resolvent
$R(z, T^-(\tau))$ and the operator on $X^-$
\[
x^- \mapsto
\int^{\tau}_0 T^-(s) x^-ds
\]
commute with $(-A^-)^{\beta_1}$
by Proposition~3.1.1.f) of \cite{Haase2006}, it follows that
\begin{equation}
\label{eq:Rb1}
(-A^-)^{\beta_1} R\big(z, T^-(\tau)\big) \int^{\tau}_0 T^-(s) b^- ds =
R\big(z, T^-(\tau)\big) b_1.
\end{equation}
By the moment inequality (see, e.g., Proposition~6.6.4 of
\cite{Haase2006}),
there exists $\varsigma_1>0$ such that
\[
\|(-A^-)^{-\beta_1}x^-\| \leq \varsigma_1 \|x^-\|^{1-\beta_1/\alpha}
~\! \|(-A^-)^{-\alpha} x^-\|^{\beta_1/\alpha}\qquad \forall x^- \in X^-.
\]
Applying this inequality to $x^- = R(z, T^-(\tau)) b_1$,
we have from \eqref{eq:Rb1} that
\begin{align*}
\left\|R\big(z, T(\tau)\big) \int^{\tau}_0 T(s) b^- ds\right\|
&=
\left\| (-A^-)^{-\beta_1}
R\big(z, T(\tau)\big) b_1
\right\| \\
&\leq
\varsigma_1 \left\|
R\big(z, T(\tau)\big) b_1
\right\|^{1-\beta_1/\alpha} ~\!
\left\|
(-A^-)^{-\alpha} R\big(z, T^-(\tau)\big) b_1
\right\|^{\beta_1/\alpha}.
\end{align*}
Hence
\begin{align}
\big\|R\big(z, T(\tau)\big) S(\tau)\big\|
&\leq
\left\|R\big(z, T(\tau) \big) \int^{\tau}_0 T(s) b^+ds
\right\|
+ \left\|R\big(z, T(\tau)\big) \int^{\tau}_0 T(s) b^- ds\right\| \notag \\
&\leq \left\|R\big(z, T(\tau)\big) \int^{\tau}_0 T(s) b^+ds
\right\| + \varsigma_1 \left\|
R\big(z, T(\tau)\big) b_1
\right\|^{1-\beta_1/\alpha} ~\!
\left\|
(-A^-)^{-\alpha} R\big(z, T^-(\tau)\big) b_1
\right\|^{\beta_1/\alpha} .
\label{eq:RS_bound}
\end{align}
Using Lemma~\ref{lem:z_lambda_bound}, we obtain
\begin{align*}
\left\|
(-A^-)^{-\alpha} R\big(z, T^-(\tau)\big) b_1
\right\|^2 &\leq M_{\rm b}
\sum_{n=N_{\rm a}}^{\infty}
\frac{|\langle b_1, \psi_n \rangle |^2}
{|z- e^{\tau \lambda_n}|^2 ~\! |\lambda_n|^{2\alpha}}\\
&\leq \frac{M_{\rm b} }{M_{\rm a}} \Upsilon_0^2 \|b_1\|^2
\end{align*}
for some $\Upsilon_0 >0$.
Therefore,
\begin{equation}
\label{eq:A_inv_alpha_b}
\left\|
(-A^-)^{-\alpha} R\big(z, T^-(\tau)\big) b_1
\right\|^{\beta_1/\alpha} \leq
\left(
\sqrt{\frac{M_{\rm b} }{M_{\rm a}}} \Upsilon_0\|b_1\|
\right)^{\beta_1/\alpha} =: \varpi_1 .
\end{equation}
Combining these estimates \eqref{eq:RS_bound} and
\eqref{eq:A_inv_alpha_b}, we obtain
\begin{equation}
\label{eq:RS_bound_final}
\big\|R\big(z, T(\tau)\big) S(\tau)\big\| \leq
\left\|R\big(z, T(\tau)\big) \int^{\tau}_0 T(s) b^+ds
\right\| + \varsigma_1\varpi_1 \left\|
R\big(z, T(\tau)\big) b_1
\right\|^{1-\beta_1/\alpha}.
\end{equation}
Define $f_1 := (-A^-_*)^{\gamma_1 }f^- \in X^-_*$.
By the moment inequality,
there exists $\varsigma_2 >0$ such that
\[
\left\|R\big(\overline{z}, T(\tau)^*\big) f^- \right\|
\leq
\varsigma_2 \left\|R\big(\overline{z}, T(\tau)^*\big) f_1 \right\|^{1-\gamma_1/\alpha} ~\!
\big\|(-A^-_*)^{-\alpha} R\big(\overline{z}, T^-_*(\tau)\big) f_1\big\|^{\gamma_1/\alpha}.
\]
Then
\begin{align*}
\big\|F R\big(z, T(\tau)\big)\big\|
&\leq
\left\|R\big(\overline{z}, T(\tau)^*\big) f^+
\right\|
+ \left\|R\big(\overline{z}, T(\tau)^*\big) f^- \right\| \\
&\leq \left\|R\big(\overline{z}, T(\tau)^*\big) f^+
\right\| +
\varsigma_2 \left\|R\big(\overline{z}, T(\tau)^*\big) f_1 \right\|^{1-\gamma_1/\alpha} ~\!
\big\|(-A^-_*)^{-\alpha} R\big(\overline{z}, T^-_*(\tau)\big) f_1\big\|^{\gamma_1/\alpha}.
\end{align*}
As in \eqref{eq:A_inv_alpha_b}, we obtain
\[
\big\|(-A^-_*)^{-\alpha} R\big(\overline{z}, T^-_*(\tau)\big) f_1\big\|^{\gamma_1/\alpha} \leq \left(
\sqrt{\frac{M_{\rm b} }{M_{\rm a}}} \Upsilon_0\|f_1\|
\right)^{\gamma_1/\alpha} =: \varpi_2.
\]
Hence
\begin{equation}
\label{eq:FR_final}
\big\|F R\big(z, T(\tau)\big)\big\| \leq
\left\|R\big(\overline{z}, T(\tau)^*\big) f^+
\right\| +
\varsigma_2\varpi_2 \left\|R\big(\overline{z}, T(\tau)^*\big) f_1 \right\|^{1-\gamma_1/\alpha}.
\end{equation}
Define
\[
p := \frac{1}{1 - \frac{\beta_1}{\alpha}},\quad
q := \frac{1}{1 - \frac{\gamma_1}{\alpha}}.
\]
Then $1/p + 1/q = (1 - \beta_1/\alpha) + (1 - \gamma_1/\alpha) = 2 - (\beta_1+\gamma_1)/\alpha = 1$ since $\beta_1 + \gamma_1 = \alpha$.
Since $(d_1+d_2)^2 \leq 2(d_1^2+d_2^2)$ for all $d_1,d_2 \geq 0$,
it follows from the estimates
\eqref{eq:RS_bound_final} and \eqref{eq:FR_final} that
\begin{align*}
\big\|R\big(z, T(\tau)\big) S(\tau)\big\|^2 ~\! \big\|FR\big(z, T(\tau)\big)\big\|^2 &\leq
4 \left( \left\|R\big(z, T(\tau)\big) \int^{\tau}_0 T(s) b^+ds
\right\|^2 +
(\varsigma_1 \varpi_1)^2\left\|
R\big(z, T(\tau)\big) b_1
\right\|^{2/p}
\right) \\
&\qquad \qquad
\times \left( \left\|R\big(\overline{z}, T(\tau)^*\big) f^+\right\|^2 +
(\varsigma_2 \varpi_2)^2 \left\|R\big(\overline{z}, T(\tau)^*\big) f_1 \right\|^{2/q}
\right).
\end{align*}
By H\"older's inequality and Lemma~\ref{lem:Resol_T_int}.a),
\begin{align*}
&\limsup_{r \downarrow 1} ~\!(r-1)
\int^{2\pi}_0 \left\|
R\big(re^{i\theta}, T(\tau)\big) b_1
\right\|^{2/p} ~\! \left\|R\big(re^{-i\theta}, T(\tau)^*\big) f_1
\right\|^{2/q} d\theta \\
&\quad \leq
\limsup_{r \downarrow 1} ~\!\left((r-1)
\int^{2\pi}_0 \left\|
R\big(re^{i\theta}, T(\tau)\big) b_1
\right\|^{2} d\theta \right)^{1/p} \cdot
\limsup_{r \downarrow 1} ~\!\left((r-1)
\int^{2\pi}_0 \left\|R\big(re^{i\theta}, T(\tau)^*\big) f_1 \right\|^{2} d\theta \right)^{1/q} \\
&\quad = 0.
\end{align*}
Since \eqref{eq:unstable_eig_bound} yields
\begin{align}
\label{eqResol_x_plus}
\left\|R\big(z, T(\tau)\big) x^+ \right\|^2
\leq M_{\rm b}
\sum_{n = 1}^{N_{\rm a}-1} \frac{
|\langle x^+,\psi_n\rangle |^2
}{|z - e^{\tau \lambda_n }|^2}
\leq
\frac{M_{\rm b}}{M_{\rm a}} \cdot
\frac{\left\| x^+\right\|^2}{c_1^2}
\end{align}
for all $x^+ \in X^+$ and
$z \in \mathbb{D}_{r_1} \cap \mathbb{E}_1$,
it follows that
\[
\limsup_{r \downarrow 1} ~\!
(r-1) \int^{2\pi}_0 \left\|R\big(re^{i\theta}, T(\tau)\big) \int^{\tau}_0 T(s) b^+ds
\right\|^{2p} d\theta= 0.
\]
Using H\"older's inequality and Lemma~\ref{lem:Resol_T_int}.a) again,
we obtain
\begin{align*}
&\limsup_{r \downarrow 1} ~\!(r-1)
\int^{2\pi}_0 \left\|R\big(re^{i\theta}, T(\tau)\big) \int^{\tau}_0 T(s) b^+ds
\right\|^2 ~\! \big\|R\big(re^{-i\theta}, T(\tau)^*\big) f_1 \big\|^{2/q} d\theta \\
&\quad \leq \limsup_{r \downarrow 1} ~\!
\left((r-1) \int^{2\pi}_0 \left\|R\big(re^{i\theta}, T(\tau)\big) \int^{\tau}_0 T(s) b^+ds
\right\|^{2p} d\theta
\right)^{1/p} \\
&\qquad \times \limsup_{r \downarrow 1} ~\!\left((r-1)
\int^{2\pi}_0 \big\|R\big(re^{i\theta}, T(\tau)^*\big) f_1 \big\|^{2} d\theta \right)^{1/q} \\
&\quad =0.
\end{align*}
Similarly,
\begin{align*}
&\limsup_{r \downarrow 1} ~\!(r-1)
\int^{2\pi}_0 \left\|
R\big(re^{i\theta}, T(\tau)\big) b_1
\right\|^{2/p} ~\! \big\|R\big(re^{-i\theta}, T(\tau)^*\big) f^+\big\|^2 d\theta =0\\
&\limsup_{r \downarrow 1} ~\!(r-1)
\int^{2\pi}_0 \left\|R\big(re^{i\theta}, T(\tau)\big) \int^{\tau}_0 T(s) b^+ds
\right\|^2 ~\! \big\|R\big(re^{-i\theta}, T(\tau)^*\big) f^+\big\|^2 d\theta =0.
\end{align*}
Thus, the desired conclusion \eqref{eq:RSFR_estimate} is obtained.
\end{proof}
\begin{lmm}
\label{lem:bg_posi2}
Suppose that
{\rm (A\ref{assump:finite_unstable})} and
{\rm (A\ref{assump:imaginary})}
hold. Let $b \in \mathcal{D}^{\beta}$
and $f \in \mathcal{D}_*^{\gamma}$
for some $0 \leq \beta,\gamma < \alpha $
satisfying $\beta + \gamma \geq \alpha$.
If $\alpha \leq 2$, then
\eqref{eq:RSFR_estimate_poly_decay}
holds for all $\tau >0$.
\end{lmm}
\begin{proof}
If $\alpha \leq 2$,
then $b_1$ defined by \eqref{eq:b1_def} satisfies
$b_1 \in D(A^-)
\subset D((-A^-)^{\alpha/2}).
$
Since $f^- \in D((-A^-_*)^{\gamma_1})$, we obtain
\[
f_2 := (-A^-_*)^{\gamma_1-\alpha/2 }f^- \in D\big((-A^-_*)^{\alpha/2 }\big).
\]
Take $\tau >0$ arbitrarily.
We have from
$ (-A^-_*)^{-\alpha/2 }f^- = (-A^-_*)^{-\gamma_1}f_2$
that for all $z \in \rho(T(\tau))$,
\[
\big\|R\big(\overline{z},
T(\tau)^*\big) (-A^-_*)^{-\alpha/2}f^-\big\|=
\big\|R\big(\overline{z}, T(\tau)^*\big) (-A^-_*)^{-\gamma_1}f_2 \big\|.
\]
Lemma~\ref{lem:F_adjoint} yields
\begin{align*}
\big\|F^+R\big(z, T^+(\tau)\big) \big\|
&\leq \|R\big(\overline{z},T(\tau)^*\big) f^+\| \\
\big\|F^-R\big(z,
T^-(\tau)\big) (-A^-)^{-\alpha/2} \big\|
&\leq \big\|R\big(\overline{z}, T(\tau)^*\big) (-A^-_*)^{-\gamma_1}f_2 \big\|
\end{align*}
for all $z \in \rho(T(\tau))$.
Therefore, we obtain \eqref{eq:RSFR_estimate_poly_decay}
by
arguments similar to those proving \eqref{eq:RSFR_estimate}, i.e.,
a combination of the moment inequality,
the H\"older's inequality, and
Lemma~\ref{lem:Resol_T_int}.b).
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:RSFR_bound}]
The assertion follows immediately from
Lemmas~\ref{lem:g_zero}, \ref{lem:b_zero}, \ref{lem:bg_posi1}, and
\ref{lem:bg_posi2}.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:SD_SS}}
Now we are able to prove the main result.
\begin{proof}[Proof of Theorem~\ref{thm:SD_SS}]
By Theorem~\ref{lem:resol_T_SF} and the combination of
Lemmas~\ref{lem:cont_time_trans_func_bound} and
\ref{lem:discrete_time_trans_func_bound},
there exist $M_0>0$ and $\tau^* >0 $ such that
for all
$\tau \in (0,\tau^*)$, we obtain $\mathbb{E}_1 \subset \rho (\Delta(\tau))$ and
\begin{align*}
&\left|\frac{1}{1 - FR\big(z, T(\tau)\big) S(\tau) } \right| \leq M_0\qquad
\forall z \in \rho \big(T(\tau)\big) \cap \overline{\mathbb{E}_1}.
\end{align*}
Take $\tau \in (0,\tau^*)$.
By
(A\ref{assump:finite_unstable}),
there exists $r_0 >1$ such that
$re^{i\theta} \in \rho(T(\tau))$
for all $r \in (1,r_0)$ and $\theta \in [0,2\pi)$.
By the Sherman-Morrison-Woodbury formula given in Proposition~\ref{Prop:SMW}, we obtain
\begin{equation}
\label{eq:SMW}
R(re^{i\theta}, T(\tau)+S(\tau)F) x =
R\big(re^{i\theta}, T(\tau)\big) x +
\frac{R\big(re^{i\theta}, T(\tau)\big) S(\tau)FR\big(re^{i\theta}, T(\tau)\big) x }
{1 - FR\big(re^{i\theta}, T(\tau)\big) S(\tau) }
\end{equation}
for all $x \in X$, $r \in (1,r_0)$, and $\theta \in [0,2\pi)$.
a)
If we show that for all $x,y \in X$,
\begin{subequations}
\label{eq:resol_estimate_for_PB}
\begin{align}
\lim_{r \downarrow 1} ~(r-1)
\int^{2\pi}_0
\big\|R\big(re^{i\theta}, \Delta(\tau)\big)x\big\|^2 d\theta &= 0
\label{eq:resol_estimate_for_PBa}\\
\limsup_{r \downarrow 1} ~(r-1)
\int^{2\pi}_0
\big\|R\big(re^{i\theta}, \Delta(\tau) ^*\big)y\big\|^2 d\theta &< \infty,
\label{eq:resol_estimate_for_PBb}
\end{align}
\end{subequations}
then the discrete semigroup $(\Delta(\tau)^k)_{k \in \mathbb{N}}$
is strongly stable by Theorem~\ref{thm:strong_stability_resol},
and therefore Proposition \ref{prop:SD_DT} implies that
the sampled-data system \eqref{eq:sampled_data_sys} is strongly stable.
Since $(d_1+d_2)^2 \leq 2(d_1^2+d_2^2)$ for all $d_1,d_2 \geq 0$,
the Sherman-Morrison-Woodbury formula \eqref{eq:SMW} yields
\begin{align}
\int^{2\pi}_0
\|R(re^{i\theta}, T(\tau)+S(\tau)F) x\|^2 d\theta &\leq
2 \int^{2\pi}_0
\big\|R\big(re^{i\theta}, T(\tau)\big) x\big\|^2 d\theta \notag \\
&\qquad + 2M_0^2 \|x\|^2 \int^{2\pi}_0 \big\|R\big(re^{i\theta}, T(\tau)\big) S(\tau)\big\|^2
~\!
\big\|FR\big(re^{i\theta}, T(\tau)\big)\big\|^2 d\theta \label{eq:R_square_int}
\end{align}
for all $x \in X$ and $r \in (1,r_0)$.
Combining this estimate \eqref{eq:R_square_int}
with Lemma~\ref{lem:Resol_T_int}.a) and Proposition~\ref{prop:RSFR_bound}.a),
we obtain \eqref{eq:resol_estimate_for_PBa} for all $x \in X$.
A similar calculation shows that \eqref{eq:resol_estimate_for_PBb} holds
for all $y \in X$.
In fact, a stronger result than \eqref{eq:resol_estimate_for_PBb},
\[
\lim_{r \downarrow 1} ~(r-1)
\int^{2\pi}_0
\big\|R\big(re^{i\theta}, \Delta(\tau) ^*\big)y\big\|^2 d\theta =0
\qquad \forall y \in X,
\]
is obtained from the following estimate:
\begin{align*}
\int^{2\pi}_0
\|R(re^{i\theta}, T(\tau)+S(\tau)F)^* y\|^2 d\theta
&=
\int^{2\pi}_0
\left\|R\big(re^{i\theta}, T(\tau)\big)^* y +
\left[ \frac{R\big(re^{i\theta}, T(\tau)\big)
S(\tau)FR\big(re^{i\theta}, T(\tau)\big) }
{1 - FR\big(re^{i\theta}, T(\tau)\big) S(\tau) } \right]^* y \right\|^2 d\theta
\\
&\quad \leq
2 \int^{2\pi}_0
\big\|R\big(re^{i\theta}, T(\tau)^*\big) y\big\|^2 d\theta \\
&\qquad + 2M_0^2 \|y\|^2 \int^{2\pi}_0
\big\|R\big(re^{i\theta}, T(\tau)\big) S(\tau)\big\|^2 ~\!
\big\|FR\big(re^{i\theta}, T(\tau)\big)\big\|^2 d\theta
\end{align*}
for all $y \in X$ and $r \in (1,r_0)$.
b)
Assume that $\alpha \leq 2$ or $\beta \geq \alpha$ holds.
Let
$x \in \mathcal{D}^{\alpha/2}$ and define
$x^+:= \Pi x$ and $x^- := (I-\Pi)x$. Then
$x^- \in D((-A^-)^{\alpha/2})$, and hence
$x^- = (-A^-)^{-\alpha/2} \xi^-$ for some $\xi^- \in X^-$.
For all $z \in \rho(T(\tau))$,
\begin{align*}
FR\big(z, T(\tau)\big)x&=
F^+R\big(z, T^+(\tau)\big)x^+ +
F^-R\big(z, T^-(\tau)\big)(-A^-)^{-\alpha/2} \xi^-.
\end{align*}
Since $(d_1+d_2+d_3)^2 \leq 3(d_1^2+d_2^2+d_3^2)$ for all
$d_1,d_2,d_3 \geq 0$,
the Sherman-Morrison-Woodbury formula \eqref{eq:SMW} yields
\begin{align*}
&\int^{2\pi}_0
\|R(re^{i\theta}, T(\tau)+S(\tau)F) x\|^2 d\theta \\
&\qquad \leq
3 \int^{2\pi}_0
\big\|R\big(re^{i\theta}, T(\tau)\big) x\big\|^2 d\theta
+ 3M_0^2 \|x^+\|^2 \int^{2\pi}_0 \big\|R\big(re^{i\theta}, T(\tau)\big) S(\tau)\big\|^2
~\!
\big\|F^+R\big(re^{i\theta}, T^+(\tau)\big) \big\|^2 d\theta \\
&\qquad\qquad + 3M_0^2 \|\xi^-\|^2 \int^{2\pi}_0 \big\|R\big(re^{i\theta}, T(\tau)\big) S(\tau)\big\|^2
~\!
\big\|F^-R\big(re^{i\theta}, T^-(\tau)\big)(-A^-)^{-\alpha/2}\big\|^2 d\theta .
\end{align*}
Using Lemma~\ref{lem:Resol_T_int}.b) and
Proposition~\ref{prop:RSFR_bound}.b), we obtain
\[
\lim_{r\downarrow 1} ~\frac{1}{\log \frac{r}{r-1}}
\int^{2\pi}_0
\big\|R\big(re^{i\theta}, \Delta (\tau)\big) x \big\|^2 d\theta =0.
\]
We have shown in the proof of a)
that the discrete semigroup $(\Delta(\tau)^k)_{k \in \mathbb{N}}$
is strongly stable.
Therefore, it is power bounded.
Proposition~\ref{prop:discrete_poly_decay} implies that
\begin{align*}
\|\Delta(\tau)^k x\| =
o\left(
\sqrt{\frac{\log k}{k} }
\right)\qquad (k \to \infty)
\end{align*}
for all $x \in \mathcal{D}^{\alpha/2}$.
By Proposition \ref{prop:SD_DT} and the subsequent discussion,
the state $x$ of the sampled-data system \eqref{eq:sampled_data_sys}
satisfies
\begin{equation*}
\|x(t)\| = o \left(
\sqrt{\frac{\log t}{t}}
\right)\qquad (t \to \infty)
\end{equation*}
for every initial state $x^0 \in \mathcal{D}^{\alpha/2}$.
\end{proof}
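The rate appearing in Theorem~\ref{thm:SD_SS}.b) can be visualized on a diagonal toy model of a polynomially stable discrete semigroup (the feedback part is omitted, so this only illustrates the rate from Proposition~\ref{prop:discrete_poly_decay}). In the following Python sketch, the eigenvalues $\lambda_n = -n^{-2} + in$ (so $\alpha = 2$), the sampling period $\tau = 0.1$, and the smooth initial state with coordinates $x_n = n^{-2}$ are hypothetical choices.
\begin{verbatim}
import numpy as np

# Illustration of sqrt(k / log k) * ||T(tau)^k x|| -> 0 on a diagonal model:
# ||T(tau)^k x||^2 = sum_n exp(-2 k tau / n^2) |x_n|^2 with |x_n|^2 = n^{-4},
# i.e. a smooth (here x lies in D(A)) hypothetical initial state.
tau = 0.1
n = np.arange(1, 200001, dtype=np.float64)
x2 = 1.0 / n**4                            # |x_n|^2

for k in [10, 100, 1000, 10000, 100000]:
    norm = np.sqrt(np.sum(np.exp(-2 * k * tau / n**2) * x2))
    print(k, np.sqrt(k / np.log(k)) * norm)    # tends to 0 as k grows
\end{verbatim}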
\section{Example}
\label{sec:wave_eq}
Consider the controlled
wave equation with Dirichlet boundary conditions
\begin{equation}
\label{eq:wave_eq}
\left\{
\begin{alignedat}{4}
\frac{\partial^2 w}{\partial t^2} (\xi,t) &=
\frac{\partial^2 w}{\partial \xi^2} (\xi,t)
+b_0(\xi) u(t)
,\quad 0\leq \xi \leq 1,~t \geq 0 \\
w(0,t) &= w(1,t) = 0,\quad t \geq 0\\
w(\xi,0) &= w_0(\xi),~ \frac{\partial w}{\partial t}(\xi,0) = w_1(\xi),\quad 0\leq\xi \leq 1,
\end{alignedat}
\right.
\end{equation}
where
$b_0:[0,1] \to \mathbb{R}$
is a ``shaping'' function
around the ``control point'' and
$u(t) \in \mathbb{R}$ is the control input at time $t \geq 0$.
First, we write the equation \eqref{eq:wave_eq} as an abstract evolution equation; see also Example~3.2.16 in \cite{Curtain2020} and Example~VI.8.3 in \cite{Engel2000}.
Define an operator $A_0 : D(A_0) \subset L^2(0,1) \to L^2(0,1)$
by
\[
A_0w := \frac{d^2w}{d\xi^2}
\]
with domain
$
D(A_0) := H^2_0(0,1)= \{
w \in H^{2}(0,1) : w(0)= w(1) = 0
\}
$.
The operator $-A_0$ is self-adjoint and positive definite on $L^2(0,1)$.
Hence there exists a unique positive definite square root $(-A_0)^{1/2}$
with domain $D((-A_0)^{1/2}) = H^{1}_0(0,1) =
\{
w \in H^{1}(0,1) : w(0)= w(1) = 0
\}.
$
Define a Hilbert space
$X := D((-A_0)^{1/2}) \times L^2(0,1)$ endowed with
the inner product
\[
\langle x, y \rangle :=
\Big \langle (-A_0)^{1/2}x_1, (-A_0)^{1/2}y_1 \Big\rangle_{L^2} +
\langle x_2, y_2 \rangle_{L^2},\quad x =
\begin{bmatrix}
x_1 \\ x_2
\end{bmatrix} \in X,~y =
\begin{bmatrix}
y_1 \\ y_2
\end{bmatrix} \in X.
\]
Let $b_0 \in L^2(0,1)$, and put
\[
b :=
\begin{bmatrix}
0 \\ b_0
\end{bmatrix} \in X.
\]
Define
\[
A_1 :=
\begin{bmatrix}
0 & I \\
A_0 & 0
\end{bmatrix}
\]
with domain $
D(A_1) := D(A_0) \times D((-A_0)^{1/2})
$
and
\[
Bu :=
bu,\quad u \in \mathbb{C}.
\]
Then
the controlled wave equation \eqref{eq:wave_eq} can be written
in the form $\dot x =A_1 x + Bu$, where
\[
x(t) :=
\begin{bmatrix}
w(\cdot,t) \vspace{6pt}\\ \dfrac{\partial w}{\partial t} (\cdot, t)\vspace{4pt}
\end{bmatrix},\quad t \geq 0.
\]
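For orientation, the modal (Fourier sine) truncation of the unperturbed operator $A_1$ can be written down explicitly. The following Python sketch is only illustrative: the number of retained modes and the shaping function $b_0 = \mathbf{1}_{[0.3,0.7]}$ are arbitrary assumptions, not choices made in this example.
\begin{verbatim}
import numpy as np

# Modal (Fourier sine) truncation of the unperturbed wave operator A_1.
# State ordering: (a_1,...,a_N, adot_1,...,adot_N), where
# w(xi, t) = sum_n a_n(t) * sqrt(2) * sin(n*pi*xi).
N = 20
n = np.arange(1, N + 1)
omega = n * np.pi                      # A_0 sin(n pi xi) = -(n pi)^2 sin(n pi xi)

A1 = np.block([[np.zeros((N, N)), np.eye(N)],
               [-np.diag(omega**2), np.zeros((N, N))]])

# The truncated eigenvalues lie at +/- i*n*pi, i.e. on the imaginary axis;
# this is why the rank-one perturbation V is introduced below in the text.
print(np.sort_complex(np.linalg.eigvals(A1))[:4])

# Fourier sine coefficients of an assumed shaping function b_0 = 1_{[0.3, 0.7]}
# (an arbitrary illustrative choice), computed in closed form.
b_coeff = np.sqrt(2) * (np.cos(0.3 * omega) - np.cos(0.7 * omega)) / omega
print(b_coeff[:5])
\end{verbatim}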
We denote by $H^{-1}(0,1)$ the dual of $H^1_0(0,1)$
with respect to the pivot space $L^2(0,1)$.
The duality pairing between $H^1_0(0,1)$ and $H^{-1}(0,1)$
is denoted by $\langle g, \nu \rangle_{H^1_0,H^{-1}}$ for
$g \in H^1_0(0,1)$ and $\nu \in H^{-1}(0,1)$.
Then $A_0$ has a unique extension such that $A_0 \in \mathcal{L}(
H^1_0(0,1), H^{-1}(0,1))$, and this extension is unitary; see Corollary~3.4.6 and
Proposition 3.5.1 of
\cite{Tucsnak2009}.
Let
$\zeta_1 \in H^{-1}(0,1)$ and
$\zeta_2,\eta_0 \in L^2(0,1)$.
We now consider the perturbed wave equation
\begin{equation}
\label{eq:wave_eq_perturbed}
\left\{
\begin{alignedat}{4}
\frac{\partial^2 w}{\partial t^2} (\xi,t) &=
\frac{\partial^2 w}{\partial \xi^2} (\xi,t)
+b_0(\xi) u(t)
+
\left(
\langle
w, \zeta_1
\rangle_{H^1_0,H^{-1}} +
\left\langle
\frac{\partial w}{\partial t}, \zeta_2
\right\rangle_{L^2}
\right)\eta_0(\xi),\quad 0\leq \xi \leq 1,~t \geq 0 \\
w(0,t) &= w(1,t) = 0,\quad t \geq 0\\
w(\xi,0) &= w_0(\xi),~ \frac{\partial w}{\partial t}(\xi,0) = w_1(\xi),\quad 0\leq\xi \leq 1.
\end{alignedat}
\right.
\end{equation}
Put
\[
\zeta :=
\begin{bmatrix}
-A_0^{-1}\zeta_1 \\ \zeta_2
\end{bmatrix} \in X,\quad
\eta :=
\begin{bmatrix}
0 \\ \eta_0
\end{bmatrix} \in X,
\]
and define $V \in \mathcal{L}(X)$ by
\[
Vx :=
\langle x, \zeta \rangle \eta =
\begin{bmatrix}
0 \\
\left(
\langle
x_1, \zeta_1
\rangle_{H^1_0,H^{-1}} +
\left\langle
x_2, \zeta_2
\right\rangle_{L^2}
\right)\eta_0
\end{bmatrix},
\quad x =
\begin{bmatrix}
x_1 \\ x_2
\end{bmatrix}\in X,
\]
which is a rank-one perturbation. The perturbed
wave equation \eqref{eq:wave_eq_perturbed} is transformed into
the abstract evolution equation $\dot x = A x + Bu$, where $A := A_1 +V$ with domain $D(A) = D(A_1)$.
Assume that $A$
is a Riesz-spectral operator of the form \eqref{eq:RS_operator} and
has
the spectral properties
(A\ref{assump:finite_unstable}) and (A\ref{assump:imaginary}).
Such perturbations $\zeta_1$, $\zeta_2$, and $\eta_0$ can be constructed
with slight modifications of the proof of
Theorem~13 in \cite{Paunonen2011}; see also
Theorem~1 in \cite{Xu1996}.
We apply the spectral decomposition by the projection $\Pi$ given in \eqref{eq:projection}.
Choose $f_1 \in X^+_* = \Pi^* X$ such that
the matrix
\begin{align}
\label{eq:wave_unstable_part}
\begin{bmatrix}
\lambda_1 & & 0 \\
& \ddots & \\
0 & & \lambda_{N_{\rm a}-1}
\end{bmatrix} +
\begin{bmatrix}
\langle b, \psi_1 \rangle \\
\vdots \\
\langle b, \psi_{N_{\rm a}-1} \rangle
\end{bmatrix}
\begin{bmatrix}
\langle \phi_1, f_1 \rangle & \cdots &
\langle \phi_{N_{\rm a} - 1}, f_1 \rangle
\end{bmatrix}
\end{align}
is Hurwitz, where $N_{\rm a} \in \mathbb{N}$
is chosen so that \eqref{eq:Ns_def} holds.
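As a small numerical check of this Hurwitz condition (with entirely hypothetical numbers: neither the unstable eigenvalues nor the coordinates of $b$ and $f_1$ below come from the example), the matrix in \eqref{eq:wave_unstable_part} can be formed and its eigenvalues inspected directly:
\begin{verbatim}
import numpy as np

# Hypothetical unstable eigenvalues, coordinates of b, and a hand-picked f_1;
# the check confirms that the resulting matrix in \eqref{eq:wave_unstable_part}
# is Hurwitz for these particular numbers.
lam_unstable = np.array([0.2 + 2.0j, 0.2 - 2.0j])
b_coord = np.array([1.0 + 0.5j, 1.0 - 0.5j])       # <b, psi_n>
f_coord = np.array([-1.0 - 0.3j, -1.0 + 0.3j])     # <phi_n, f_1>

M = np.diag(lam_unstable) + np.outer(b_coord, f_coord)
print(np.linalg.eigvals(M))    # here both eigenvalues have real part -0.65
\end{verbatim}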
Define
$F_1 \in \mathcal{L}(X,\mathbb{C})$
by
\[
F_1 x := \langle x, f_1\rangle,\quad x \in X.
\]
Since $(T^-(t))_{t \geq 0}$ is polynomially stable with parameter $\alpha$,
the $C_0$-semigroup $(T_{BF_1}(t))_{t\geq 0}$
generated by
$A+BF_1$ has the same stability property
by Theorem~9 of \cite{Paunonen2014OM}.
Let $\beta, \gamma \geq 0$ satisfy $\beta + \gamma >\alpha$.
Assume that $b \in \mathcal{D}^{\beta}$, and
take $f_2 \in \mathcal{D}_*^\gamma$.
We take
$\widetilde \beta \in [0,\beta)$ and
$\widetilde \gamma \in [0,\gamma)$ such that
$\widetilde \beta + \widetilde \gamma \geq \alpha$.
Since $b \in \mathcal{D}^\beta$ and
$f_1 \in \mathcal{D}_*^\gamma$,
Lemma~\ref{lem:ABF_domain} implies that
\[
b \in D\Big((-A-BF_1)^{\widetilde \beta}\Big),\quad
f_2 \in D\Big((-A^*-F_1^*B^*)^{\widetilde \gamma}\Big).
\]
Define the feedback operator $F \in \mathcal{L}(X,\mathbb{C})$ by
$Fx := \langle x, f_1 + f_2\rangle $ for $x \in X$.
By Theorem~6 of \cite{Paunonen2014JDE}, there exists $c >0$ such that
$A+BF$ also generates a polynomially stable $C_0$-semigroup
with parameter $\alpha$
whenever
\begin{equation}
\label{eq:wave_norm_cond}
\big\|(-A^*-F_1^*B^*)^{\widetilde \gamma} f_2 \big\| < c.
\end{equation}
Hence, if
$f_1 \in X^+_*$ and $f_2 \in \mathcal{D}^{\gamma}_*$ are chosen so that
the matrix given in \eqref{eq:wave_unstable_part} is Hurwitz and
the norm condition \eqref{eq:wave_norm_cond} holds, then
Assumption~\ref{assum:for_MR} is satisfied, and
by Theorem~\ref{thm:SD_SS},
the sampled-data system
\eqref{eq:sampled_data_sys}
is strongly stable for all sufficiently small sampling periods.
Moreover, let $\alpha = 2$. Then
$\mathcal{D}^{\alpha/2} = D(A) = D(A_1)$, and
therefore the state $x$ of the sampled-data system \eqref{eq:sampled_data_sys}
satisfies
\[
\|x(t)\| = o \left(
\sqrt{\frac{\log t}{t}}
\right)\qquad (t \to \infty)
\]
for every
initial state $x^0 \in D(A_1) = H^2_0(0,1) \times H^1_0(0,1)$.
\section{Conclusion}
\label{sec:conclusion}
We have studied the robustness of polynomial stability
with respect to sampling.
The generator we consider is a Riesz-spectral operator
whose eigenvalues may approach the imaginary axis asymptotically.
We have presented conditions for the preservation of strong stability
under fast sampling.
Moreover, a decay rate estimate of the state has been provided for
the sampled-data system with a smooth initial state and
a sufficiently small sampling period.
Future work will focus on
relaxing the assumption on the generator and obtaining sharper
estimates of the decay rate.
| {
"timestamp": "2022-05-10T02:33:12",
"yymm": "2205",
"arxiv_id": "2205.04130",
"language": "en",
"url": "https://arxiv.org/abs/2205.04130",
"abstract": "We provide a partially affirmative answer to the following question on robustness of polynomial stability with respect to sampling: ``Suppose that a continuous-time state-feedback controller achieves the polynomial stability of the infinite-dimensional linear system. We apply an idealized sampler and a zero-order hold to a feedback loop around the controller. Then, is the sampled-data system strongly stable for all sufficiently small sampling periods? Furthermore, is the polynomial decay of the continuous-time system transferred to the sampled-data system under sufficiently fast sampling?'' The generator of the open-loop system is assumed to be a Riesz-spectral operator whose eigenvalues are not on the imaginary axis but may approach it asymptotically. We provide conditions for strong stability to be preserved under fast sampling. Moreover, we estimate the decay rate of the state of the sampled-data system with a smooth initial state and a sufficiently small sampling period.",
"subjects": "Optimization and Control (math.OC); Functional Analysis (math.FA)",
"title": "Robustness of Polynomial Stability with Respect to Sampling",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109489630493,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7076796316022147
} |
https://arxiv.org/abs/1407.8443 | The 2-adic valuations of differences of Stirling numbers of the second kind | Let $m, n, k$ and $c$ be positive integers. Let $\nu_2(k)$ be the 2-adic valuation of $k$. By $S(n,k)$ we denote the Stirling numbers of the second kind. In this paper, we first establish a convolution identity of the Stirling numbers of the second kind and provide a detailed 2-adic analysis to the Stirling numbers of the second kind. Consequently, we show that if $2\le m\le n$ and $c$ is odd, then $\nu_2(S(c2^{n+1},2^m-1)-S(c2^n, 2^m-1))=n+1$ except when $n=m=2$ and $c=1$, in which case $\nu_2(S(8,3)-S(4,3))=6$. This solves a conjecture of Lengyel proposed in 2009. | \section{Introduction and the statements of main results}
Let $\mathbb{N}$ denote the set of nonnegative integers and
let $n, k\in\mathbb{N}$. The Stirling number of the second
kind $S(n,k)$ is defined as the number of ways to partition
a set of $n$ elements into exactly $k$ non-empty subsets.
Divisibility properties of Stirling numbers of the
second kind $S(n,k)$ have been studied from a number of different
perspectives. For each given $k$, the sequence $\{S(n,k),n\ge k\}$
is known to be periodic modulo prime powers. Carlitz \cite{[Ca]}
and Kwong \cite{[Kw]} studied the length of this period.
Chan and Manna \cite{[CM]} characterized $S(n,k)$
modulo prime powers in terms of binomial coefficients when $k$ is a
multiple of prime powers. Various congruences involving sums of
$S(n,k)$ are also known \cite{[S]}.
Given a prime $p$ and a positive integer $m$, there exist unique
integers $a$ and $n$, with $p\nmid a$ and $n\geq 0$, such that
$m=ap^n$. The number $n$ is called the {\it $p$-adic valuation}
of $m$, denoted by $n=v_p(m)$. The study of $p$-adic valuations
of Stirling numbers of the second kind is full of
challenging problems. The values $\min\{v_p(k!S(n,k)): m\leq k\leq n\}$
are important in algebraic topology, see, for example,
\cite{[BD],[CK],[D2], [D3], [D4], [Lu1], [Lu2]}. Some work
evaluating $v_p(k!S(n,k))$ has appeared in the above papers as well as
in \cite{[Cl],[D1],[Y]}. Lengyel \cite{[Le1]} studied the 2-adic
valuations of $S(n,k)$ and conjectured, and Wannemacker
\cite{[W]} later proved, that $\nu_2(S(2^n,k))=s_2(k)-1,$ where $s_2(k)$ denotes the
sum of the base-$2$ digits of $k$. Lengyel \cite{[Le2]} showed that if
$1\leq k\leq 2^n$, then $\nu_2(S(c2^n,k))=s_2(k)-1$ for any positive
integer $c$. Hong et al \cite{[HZZ]} proved that
$\nu_2(S(2^n+1,k+1))=s_2(k)-1$, which confirmed a conjecture of
Amdeberhan et al \cite{[AMM]}.
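These valuation formulas can be checked numerically; the following Python sketch (an illustrative sanity check only, with small, arbitrarily chosen ranges of $n$ and $k$) verifies Wannemacker's theorem and the result of Hong et al.\ using exact integer arithmetic.
\begin{verbatim}
# Sanity check of nu_2(S(2^n,k)) = s_2(k)-1 and nu_2(S(2^n+1,k+1)) = s_2(k)-1.
def stirling_row(n, kmax):
    """S(n,0), ..., S(n,kmax) via S(n,k) = S(n-1,k-1) + k*S(n-1,k)."""
    row = [1] + [0] * kmax
    for _ in range(n):
        row = [0] + [row[k - 1] + k * row[k] for k in range(1, kmax + 1)]
    return row

def v2(m):
    """2-adic valuation of a nonzero integer."""
    v = 0
    while m % 2 == 0:
        m //= 2
        v += 1
    return v

s2 = lambda k: bin(k).count("1")          # base-2 digit sum

for n in range(3, 8):
    row_a = stirling_row(2 ** n, 2 ** n)
    row_b = stirling_row(2 ** n + 1, 2 ** n + 1)
    for k in range(1, 2 ** n + 1):
        assert v2(row_a[k]) == s2(k) - 1
        assert v2(row_b[k + 1]) == s2(k) - 1
\end{verbatim}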
On the other hand, Lengyel \cite{[Le2]} studied the 2-adic
valuations of the difference $S(c2^{n+1},k)-S(c2^{n},k)$. It
appears that its 2-adic valuation increases by one as $n$ increases by one,
provided that $n$ is
large enough. As a consequence, Lengyel proposed the following conjecture.\\
\noindent{\bf Conjecture 1.1.} {\it\cite{[Le2]} Let $n, k, a, b, c\in
\mathbb{N}$ with $c\geq 1$ being odd and $3\leq k\leq 2^n$. Then
\begin{align}\label{ad1}
\nu_2(S(c2^{n+1},k)-S(c2^{n},k))=n+1-f(k)
\end{align}
and
\begin{align}\label{ad2}
\nu_2(S(a2^{n},k)-S(b2^{n},k))=n+1+\nu_2(a-b)-f(k)
\end{align}
for some function $f(k)$ which is independent of $n$ (for any
sufficiently large $n$).}\\
\\
When $k$ is a power of 2 minus 1, Lengyel suggested
the following conjecture which is stronger than (\ref{ad1}).\\
\noindent{\bf Conjecture 1.2.} {\it\cite{[Le2]}
Let $c, m, n\in \mathbb{N}$ with $c\geq
1$ being odd and $2\le m\le n$. Then}
\begin{align*}
\nu_2(S(c2^{n+1},2^m-1)-S(c2^n, 2^m-1))=n+1.
\end{align*}
Lengyel \cite{[Le2]} showed that (\ref{ad1}) is true if $s_2(k)\le
2$. For any real number $x$, as usual, let $\lceil x\rceil$ and
$\lfloor x\rfloor$ denote the smallest integer no less than $x$ and
the largest integer not exceeding $x$, respectively. In \cite{[ZHZ]},
we used Junod's congruence \cite{[J]} for the Bell polynomials
to show the following result.
\begin{thm}\label{lem10}
\cite{[ZHZ]} Let $n,k, a, b, c\in \mathbb{N}$ with $c\geq 1$
being odd, $3\leq k\leq 2^n$ and $a>b$. If $k$ is not a power
of 2 minus 1, then
\begin{align*}
v_2(S(a2^{n},k)-S(b2^{n},k))=n+v_2(a-b)-\lceil\log_2k\rceil
+s_2(k)+\delta(k),
\end{align*}
where $\delta(4)=2$, $\delta(k)=1$ if $k>4$ is a power of 2, and
$\delta(k)=0$ otherwise. In particular,
\begin{align*}
v_2(S(c2^{n+1},k)-S(c2^{n},k))=n-\lceil\log_2k\rceil
+s_2(k)+\delta(k).
\end{align*}
\end{thm}
Therefore Conjecture 1.1 is true except when $k$ is a power of
$2$ minus 1, in which case Conjecture 1.1 remains open.
It is also remarked in \cite{[ZHZ]} that the techniques used there are not
suitable for the remaining case where $k$ is a power of $2$ minus 1.
In this paper, we introduce a new method to investigate the 2-adic
valuations of differences of Stirling numbers of the second kind.
Our main goal in this paper is to study Conjecture 1.1 for the
remaining case and Conjecture 1.2.
We will develop a detailed 2-adic analysis of the Stirling numbers of
the second kind. The main results of this paper can be stated as follows.
\begin{thm}\label{thm0}
Let $a, b, n\in \mathbb{N}$ with $a>b\ge1$. Then each of the
following is true.
{\rm (i).} If $n\ge2$, then
\begin{align*}
\nu_2(S(a2^{n},3)-S(b2^n, 3)) \left\{\begin{array}{cl}
=n+1+\nu_2(a-b)& {\it if}~b2^n>n+2+\nu_2(a-b),\\
>n+1+\nu_2(a-b)& {\it if}~b2^n=n+2+\nu_2(a-b),\\
=b2^n-1& {\it if}~b2^n<n+2+\nu_2(a-b).
\end{array}
\right.
\end{align*}
{\rm (ii).} If $n\ge3$, then
\begin{align*}
\nu_2(S(a2^{n},7)-S(b2^n, 7)) \left\{\begin{array}{cl}
=n+1+\nu_2(a-b)& {\it if}~b2^n>n+3+\nu_2(a-b),\\
>n+1+\nu_2(a-b)& {\it if}~b2^n=n+3+\nu_2(a-b),\\
=b2^n-2& {\it if}~b2^n<n+3+\nu_2(a-b).
\end{array}
\right.
\end{align*}
\end{thm}
\begin{thm}\label{thm1} Let $c, m, n\in \mathbb{N}$ with $c\geq
1$ being odd and $2\le m\le n$. Then
\begin{align*}
\nu_2(S(c2^{n+1},2^m-1)-S(c2^n, 2^m-1))=n+1
\end{align*}
except when $n=m=2$ and $c=1$, in which case one has $\nu_2(S(8,3)-S(4,3))=6$.
\end{thm}
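Theorem~\ref{thm1} can be checked numerically for small parameters; the following Python sketch (an illustrative sanity check, not used in the proof) confirms the statement, including the exceptional case $n=m=2$, $c=1$.
\begin{verbatim}
# Check of the theorem above for small c, m, n, including the exceptional case.
def stirling_row(n, kmax):
    row = [1] + [0] * kmax
    for _ in range(n):
        row = [0] + [row[k - 1] + k * row[k] for k in range(1, kmax + 1)]
    return row

def v2(m):
    v = 0
    while m % 2 == 0:
        m //= 2
        v += 1
    return v

for c in (1, 3):
    for m in range(2, 5):
        for n in range(m, 8):
            k = 2 ** m - 1
            d = stirling_row(c * 2 ** (n + 1), k)[k] - stirling_row(c * 2 ** n, k)[k]
            expected = 6 if (n, m, c) == (2, 2, 1) else n + 1
            assert v2(d) == expected
\end{verbatim}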
Evidently, Theorem \ref{thm1} shows that Conjecture 1.2 is true except
in the exceptional case $n=m=2$ and $c=1$, in which case Conjecture 1.2
fails. It also implies that (\ref{ad1}) holds for the remaining
case where $k$ equals a power of 2 minus 1. By Theorem \ref{thm0},
we know that (\ref{ad2}) is true for $k=3$ and $k=7$
and sufficiently large $n$, but the truth of (\ref{ad2}) remains
open when $k$ is a power of 2 minus 1 and at least 15.
This paper is organized as follows. In Section 2, we recall some
known results and show also several new results that are needed
in the proof of Theorems \ref{thm0} and \ref{thm1}. The proofs
of Theorems \ref{thm0} and \ref{thm1} are given in Section 3.
The key new ingredients in this paper are to make use of a
classical congruence about binomial coefficients and to
establish a new convolution identity of
the Stirling numbers of the second kind.
To end this section, we list several elementary properties of
$S(n, k)$ that will be used freely throughout this paper:
$\bullet$ The recurrence relation
\begin{align}\label{eq1}
S(n,k)=S(n-1,k-1)+kS(n-1,k),
\end{align}
with initial condition $S(0,0)=1$ and $S(n,0)=0$ for $n>0$.
$\bullet$ The explicit formula
\begin{align}\label{1.1}
S(n,k)=\frac{1}{k!}\sum_{i=0}^k(-1)^i{k\choose i}(k-i)^n.
\end{align}
$\bullet$ The generating function
\begin{align}\label{q1}
(e^t-1)^k=k!\sum_{j=k}^{\infty}S(j,k)\frac{t^j}{j!}
\end{align}
and
\begin{align}\label{05}
\frac{1}{(1-x)(1-2x)\cdots(1-kx)}=\sum_{j=0}^{\infty}S(j+k,k)x^j.
\end{align}
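For readers who wish to experiment, the following Python sketch (not part of the paper) checks that the recurrence \eqref{eq1} and the explicit formula \eqref{1.1} produce the same values of $S(n,k)$ on small inputs.
\begin{verbatim}
# Cross-check: the recurrence and the explicit formula for S(n,k) agree.
from math import comb, factorial

def S_rec(n, k):
    row = [1] + [0] * k                    # row for n = 0
    for _ in range(n):
        row = [0] + [row[j - 1] + j * row[j] for j in range(1, k + 1)]
    return row[k]

def S_exp(n, k):
    return sum((-1) ** i * comb(k, i) * (k - i) ** n
               for i in range(k + 1)) // factorial(k)

for n in range(0, 12):
    for k in range(0, n + 1):
        assert S_rec(n, k) == S_exp(n, k)
\end{verbatim}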
\section{Preliminary lemmas}
In the present section, we give preliminary lemmas which
are needed in the proofs of Theorems \ref{thm0} and \ref{thm1}.
We begin with Kummer's identity.
\begin{lem}\label{lem01}\cite{[Ku]} (Kummer)
Let $k$ and $n\in\mathbb{N}$ be such that $k\le n$. Then
$$\nu_2\Big({n\choose k}\Big)=s_2(k)+s_2(n-k)-s_2(n).$$ Moreover,
$s_2(k)+s_2(n-k)\ge s_2(n).$
\end{lem}
The following classical congruence about binomial coefficients
is given in \cite{[R]}, which is the first new
key ingredient in the proof of Theorem 1.3.
\begin{lem}\cite{[R]}\label{lem02}
Let $n$ and $k$ be positive integers. For all primes $p$, we have
$${pn\choose pk}\equiv{n\choose k}\mod{pn\mathbb{Z}_p},$$
where $\mathbb{Z}_p$ stands for the ring of $p$-adic integers.
\end{lem}
\begin{lem}\label{lem03} \cite{[HZZ]}
Let $N\geq 2$ be an integer and $r$, $t$ be odd numbers. For any
$m\in \mathbb{Z}^+,$ we have $\nu_2((r2^N-1)^{t2^m}-1)=m+N.$
\end{lem}
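This identity is also easy to confirm by direct computation; the following Python sketch (an illustrative sanity check over small parameter ranges) does so with exact integer arithmetic.
\begin{verbatim}
# Check of nu_2((r*2^N - 1)^(t*2^m) - 1) = m + N for small odd r, t and N >= 2.
def v2(x):
    v = 0
    while x % 2 == 0:
        x //= 2
        v += 1
    return v

for N in range(2, 7):
    for r in (1, 3, 5):
        for t in (1, 3, 5):
            for m in range(1, 7):
                assert v2((r * 2 ** N - 1) ** (t * 2 ** m) - 1) == m + N
\end{verbatim}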
\begin{lem}\cite{[CM]}\label{lem04}
Let $a,n$ and $m\ge3$ be positive integers with $n>a2^m$. Then
\begin{align*}
S(n,a2^m)\equiv \left\{
\begin{array}{lc}
a2^{m-1}{\frac{n-1}{2}-a2^{m-2}-1\choose a2^{m-2}-1}, &2\nmid n \\
\\
a2^{m-1}{\frac{n}{2}-a2^{m-2}-2\choose
a2^{m-2}-1}+{\frac{n}{2}-a2^{m-2}-1\choose a2^{m-2}-1}, &2\mid n
\end{array}
\right.\mod {2^m}.
\end{align*}
\end{lem}
\begin{lem}\label{lem05}
Let $a,n$ and $m\ge3$ be positive integers with $n\ge a2^m$. Then
\begin{align*}
S(n,a2^m-1)\equiv \left\{
\begin{array}{lc}
a2^{m-1}{\frac{n}{2}-a2^{m-2}-1\choose a2^{m-2}-1}, &2\mid n \\
\\
a2^{m-1}{\frac{n+1}{2}-a2^{m-2}-2\choose
a2^{m-2}-1}+{\frac{n+1}{2}-a2^{m-2}-1\choose a2^{m-2}-1}, &2\nmid n
\end{array}
\right.\mod {2^m}.
\end{align*}
\end{lem}
\begin{proof}
Using the recurrence relation (\ref{eq1}), we know that
\begin{align}\label{q5}
S(n+1,a2^m)=S(n,a2^m-1)+a2^mS(n,a2^m).
\end{align}
Thus Lemma \ref{lem05} follows immediately from (\ref{q5}) and
Lemma \ref{lem04}.
\hfill$\Box$
\end{proof}
\begin{lem}\cite{[Le2]}\label{lem06}
Let $m,n,c\in\mathbb{N}$ and $0\le m<n$. Then
$\nu_2(S(c2^n+2^m,2^{n}))=n-1-m.$
\end{lem}
\begin{lem}\label{lem07}\cite{[Le2]}
Let $c,n,m\in\mathbb{N}$. If $2\le m\le n$ and $c\ge 1$, then
$S(c2^n,2^m)\equiv 1\mod 4$ and $S(c2^n,2^m-1)\equiv 3\cdot 2^{m-1}\mod {2^{m+1}}.$
\end{lem}
\begin{lem}\cite{[Le2]}\label{lem08}
Let $m$ be a positive integer. Then for $m\ge3$, one has
$$\prod_{i=1}^{2^{m-1}}(1-(2i-1)x)\equiv(1+3x^2)^{2^{m-2}}\mod {2^{m+1}},$$
and for $m\ge4$, one has
$$\prod_{i=1}^{2^{m-1}-1}(1-2ix)\equiv1+2^{m-1}x+2^{m-1}x^2+2^mx^4\mod {2^{m+1}}.$$
\end{lem}
\begin{lem}\label{lem09}
Let $c,n$ and $m$ be positive integers with $3\le m\le n$. Then
\begin{align}\label{eq01}
S(c2^n+2^{m-1},2^m)\equiv 3\mod 4
\end{align}
and
\begin{align}\label{eq02} S(c2^n+2^{m-1},2^m-1)\equiv 2^{m-1}\mod
{2^{m+1}}.\end{align}
\end{lem}
\begin{proof} For any integer $n\ge m\ge3$, we deduce that
\begin{align}\label{eq03}
{c2^{n-1}-1\choose2^{m-2}-1}&=\frac{(c2^{n-1}-1)(c2^{n-1}-2)
...(c2^{n-1}-2^{m-2}+1)}{(2^{m-2}-1)!}\nonumber\\
&=-1+\sum_{t=1}^{2^{m-2}-1}(c2^{n-1})^t(-1)^{2^{m-2}-1-t} \sum_{1\le
i_1<...<i_t\le 2^{m-2}-1}\frac{1}{i_1...i_t}\nonumber\\
&=-1+\sum_{t=1}^{2^{m-2}-1}c^t(-1)^{t+1} \sum_{1\le i_1<...<i_t\le
2^{m-2}-1}\frac{2^{t(n-1)}}{i_1...i_t}.
\end{align}
Obviously, $\nu_2(i)\le m-3$ for any integer $1\le i\le 2^{m-2}-1$.
So for any integers $i_1, ..., i_t$ with $1\le i_1<...<i_t\le 2^{m-2}-1$,
we have
\begin{align}\label{eq0301}
\nu_2\Big(\frac{2^{t(n-1)}}{i_1...i_t}\Big)\ge t(n-1)-t(m-3)\ge
t(n-m+2)\ge 2t\ge2
\end{align}
since $n\ge m\ge 3$. It then follows from Lemma \ref{lem04},
(\ref{eq03}) and (\ref{eq0301}) that
\begin{align*}
S(c2^n+2^{m-1},2^m)\equiv{c2^{n-1}-1\choose 2^{m-2}-1}\equiv3 \mod
4,
\end{align*}
which means that (\ref{eq01}) is true.
Now we prove that congruence (\ref{eq02}) is true. For $m=3$, one
has
\begin{align}\label{eq0302}
\prod_{i=1}^{2^{m-1}-1}(1-2ix)=(1-2x)(1-4x)(1-6x)\equiv1+4x+12x^2\mod
{2^{4}}.
\end{align}
So by Lemma \ref{lem08} and (\ref{eq0302}), one gets
\begin{align}\label{eq0303}
\prod_{i=1}^{2^{m-1}-1}(1-2ix)\equiv1+2^{m-1}x+2^{m-1}x^2
\phi_m(x)\mod {2^{m+1}},
\end{align}
where $\phi_3(x):=3$ and $\phi_m(x):=1+2x^2$ if $m\ge4$. Then by
(\ref{05}), (\ref{eq0303}) and Lemma \ref{lem08}, we get the
following congruence modulo $2^{m+1}$:
\begin{align}\label{06}
\sum_{j=0}^{\infty}S(j+2^m-1,2^{m}-1)x^j
=&\prod_{i=1}^{2^m-1}\frac{1}{1-ix}\nonumber\\
=&\Big(\prod_{i=1}^{2^{m-1}-1}\frac{1}{1-2ix}\Big)
\Big(\prod_{i=1}^{2^{m-1}}\frac{1}{1-(2i-1)x}\Big)\nonumber\\
\equiv& (1+2^{m-1}x+2^{m-1}x^2\phi_m(x))^{-1}(1+3x^2)^{-2^{m-2}}\nonumber\\
\equiv &(1+3\cdot2^{m-1}x+3\cdot2^{m-1}x^2\phi_m(x))
\sum_{i=0}^\infty{-2^{m-2}\choose i}3^ix^{2i}.
\end{align}
Note that the coefficient of $x^{c2^n-2^{m-1}+1}$ is
$S(c2^n+2^{m-1},2^m-1)$. By (\ref{06}), we have
\begin{align}\label{eq04}
S(c2^n+2^{m-1},2^m-1)&\equiv3\cdot 2^{m-1}\cdot
3^{c2^{n-1}-2^{m-2}}{-2^{m-2}\choose c2^{n-1}-2^{m-2}}\mod
{2^{m+1}}.
\end{align}
Then Lemma \ref{lem03} tells us that
\begin{align}\label{eq05}
3^{c2^{n-1}-2^{m-2}}=(2^2-1)^{(c2^{n-m+1}-1)2^{m-2}}
\equiv1\mod{2^m}.
\end{align}
By (\ref{eq03}), we obtain that
\begin{align}\label{eq06}
{-2^{m-2}\choose
c2^{n-1}-2^{m-2}}=(-1)^{c2^{n-1}-2^{m-2}}{c2^{n-1}-1\choose
c2^{n-1}-2^{m-2}}={c2^{n-1}-1\choose 2^{m-2}-1}\equiv3\mod{4}.
\end{align}
It then follows from (\ref{eq04}) to (\ref{eq06}) that (\ref{eq02}) holds.
The proof of Lemma \ref{lem09} is complete. \hfill $\Box$
\end{proof}\\
Now we give a new convolution identity about Stirling numbers of the
second kind. It is the second new key ingredient in the proof of
Theorem 1.3.
\begin{lem}\label{lem11} Let $k_1$, $k_2$ and $n$ be positive integers. Then
\begin{align*}
S(n,k_1+k_2) = \frac{k_1!k_2!}{(k_1+k_2)!}\sum_{i=k_1}^{n-k_2}
{n\choose i}S(i,k_1)S(n-i,k_2).
\end{align*}
\end{lem}
\begin{proof}
By (\ref{q1}), we have
\begin{align}\label{q01}
(e^t-1)^{k_1+k_2}&=(e^t-1)^{k_1}\cdot(e^t-1)^{k_2}\nonumber \\
&=\Big(k_1!\sum_{n=k_1}^{\infty}S(n,k_1)\frac{t^n}{n!}\Big)
\cdot\Big(k_2!\sum_{n=k_2}^{\infty}S(n,k_2)\frac{t^n}{n!}\Big)\nonumber \\
&=k_1!k_2!\Big(\sum_{n=k_1+k_2}^{\infty} \Big(\sum_{i=k_1}^{n-k_2}
\frac{S(i,k_1)S(n-i,k_2)}{i!(n-i)!}\Big)t^n\Big).
\end{align}
On the other hand, by (\ref{q1}) we know that
\begin{align}\label{q02}
(e^t-1)^{k_1+k_2}=(k_1+k_2)!\sum_{n=k_1+k_2}^{\infty}S(n,k_1+k_2)\frac{t^n}{n!}.
\end{align}
Then comparing the coefficients of $t^n$ in (\ref{q01}) and (\ref{q02}),
we obtain that
\begin{align}\label{q03}
S(n,k_1+k_2)\frac{(k_1+k_2)!}{n!} =k_1!k_2! \sum_{i=k_1}^{n-k_2}
\frac{S(i,k_1)S(n-i,k_2)}{i!(n-i)!}.
\end{align}
It then follows from (\ref{q03}) that
\begin{align*}
S(n,k_1+k_2) = \frac{k_1!k_2!}{(k_1+k_2)!}\sum_{i=k_1}^{n-k_2}
{n\choose i}S(i,k_1)S(n-i,k_2)
\end{align*}
as desired. The proof of Lemma \ref{lem11} is complete. \hfill$\Box$
\end{proof}
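The identity of Lemma~\ref{lem11} can be confirmed numerically as well; the following Python sketch (an illustrative sanity check on small parameters, not used in any proof) compares both sides with exact integer arithmetic.
\begin{verbatim}
# Check of the convolution identity just proved, on small k_1, k_2, n.
from math import comb, factorial

def S(n, k):
    row = [1] + [0] * k
    for _ in range(n):
        row = [0] + [row[j - 1] + j * row[j] for j in range(1, k + 1)]
    return row[k]

for k1 in range(1, 5):
    for k2 in range(1, 5):
        for n in range(k1 + k2, k1 + k2 + 8):
            lhs = factorial(k1 + k2) * S(n, k1 + k2)
            rhs = factorial(k1) * factorial(k2) * sum(
                comb(n, i) * S(i, k1) * S(n - i, k2)
                for i in range(k1, n - k2 + 1))
            assert lhs == rhs
\end{verbatim}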
\begin{lem}\label{lem12}
Let $c,i,s,t\in\mathbb{N}$ with $c\ge 1$ being odd. Then each of the
following is true.
{\rm (i).} If $1\le i<c2^s$ and $\nu_2(i)<s$, then
$$\nu_2\Big({c2^s\choose i}\Big)\ge s-\nu_2(i),$$
with equality holding if and only if $s_2(c-1-\lfloor
i/2^s\rfloor)+s_2(\lfloor i/2^s\rfloor)=s_2(c-1).$
{\rm (ii).} If $3\le i<c2^s$ and $t\ge 2$, then
$$\nu_2\Big(2^{ti}{c2^s\choose i}\Big)\ge s+6.$$
\end{lem}
\begin{proof}
(i). For any integer $i$ with $i<c2^s$ and $\nu_2(i)<s$, we can
write $i=a+b2^s$ with $a,b\in\mathbb{N}$ and $0<a<2^s$. Then one has
$b=\lfloor \frac{i}{2^s}\rfloor$ and $\nu_2(i)=\nu_2(a)$. Using
Lemma \ref{lem01}, we derive that
\begin{align}\label{q6}
\nu_2\Big({c2^s\choose i}\Big)&=s_2(c2^s-i)+s_2(i)-s_2(c2^s)\nonumber\\
&=s_2(c2^s-a-b2^s)+s_2(a+b2^s)-s_2(c2^s)\nonumber\\
&=s_2((c-1-b)2^s+2^s-a)+s_2(a)+s_2(b)-s_2(c)\nonumber\\
&=s_2(c-1-b)+s_2(b)+s_2(2^s-a)+s_2(a)-s_2(c).
\end{align}
Note that $s_2(2^s-a)=s-\nu_2(a)-s_2(a)+1$. Since $c\ge 1$ is odd, one has
$s_2(c)-s_2(c-1)=1$. It then follows from Lemma \ref{lem01} and (\ref{q6}) that
\begin{align*}
\nu_2\Big({c2^s\choose i}\Big)\ge
s_2(c-1)+s_2(2^s-a)+s_2(a)-s_2(c)=s-\nu_2(a)=s-\nu_2(i),
\end{align*}
and equality holds if and only if $s_2(c-1-b)+s_2(b)=s_2(c-1).$ This
proves part (i).
(ii). Clearly, if $\nu_2(i)\ge s$, then $\nu_2\big({c2^s\choose
i}\big)\ge 0\ge s-\nu_2(i).$ If $\nu_2(i)<s$, by part (i) one also
has $\nu_2\big({c2^s\choose i}\big)\ge s-\nu_2(i).$ Then we deduce
that
\begin{align}\label{q601}
\nu_2\Big(2^{ti}{c2^s\choose i}\Big)&\ge ti+s-\nu_2(i).
\end{align}
Since $t\ge 2$, by (\ref{q601}), one can easily check that
$\nu_2\big(2^{ti}{c2^s\choose i}\big)\ge s+6$ if $i\le 5.$ This
means that part (ii) is true for the case $i\le5$. Now let $i\ge6$. By
(\ref{q601}), we get that
\begin{align*}
\nu_2\Big(2^{ti}{c2^s\choose i}\Big)\ge (t-1)i+s+i-\nu_2(i)\ge s+6
\end{align*}
as desired. \hfill$\Box$
\end{proof}
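Part (i) of Lemma~\ref{lem12}, including the characterization of equality, can be tested numerically; the following Python sketch (an illustrative sanity check over small odd $c$ and small $s$) does so.
\begin{verbatim}
# Check of part (i): nu_2(C(c*2^s, i)) >= s - nu_2(i), with the equality test.
from math import comb

def v2(m):
    v = 0
    while m % 2 == 0:
        m //= 2
        v += 1
    return v

s2 = lambda k: bin(k).count("1")

for c in (1, 3, 5, 7):
    for s in range(1, 7):
        for i in range(1, c * 2 ** s):
            if v2(i) >= s:
                continue
            val = v2(comb(c * 2 ** s, i))
            assert val >= s - v2(i)
            equality = s2(c - 1 - i // 2 ** s) + s2(i // 2 ** s) == s2(c - 1)
            assert (val == s - v2(i)) == equality
\end{verbatim}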
\begin{lem}\label{lem13}
Let $c,r$ and $s$ be positive integers with $c\ge 1$ being odd and
$s\ge r\ge 3$. For any integer $i$ with $2^r-1\le i\le c2^s-2^r$, define
\begin{align}\label{ad3}
f_{r,s}(i):={c2^s\choose i}S(i,2^r-1)S(c2^s-i,2^r).
\end{align}
Then each of the following is true.
{\rm (i).} If $\nu_2(i)\le r-2$, then $\nu_2(f_{r,s}(i))\ge s+2$.
{\rm (ii).} If $\nu_2(i)=r-1$, then
$\nu_2(f_{r,s}(i))\ge s$ with equality holding if and only if
$$s_2(c-1-\lfloor i/2^s\rfloor)+s_2(\lfloor i/2^s\rfloor)=s_2(c-1).$$
{\rm (iii).}
$\nu_2\Bigg(\displaystyle\sum_{i=2^r-1
\atop \nu_2(i)\le r-1}^{c2^s-2^r}f_{r,s}(i)\Bigg)\ge s+1$.
{\rm (iv).} $f_{r,s}(2^r)+f_{r,s}(c2^s-2^r)\equiv 2^s\mod{2^{s+1}}$.
{\rm (v).} For any integer $l$ with $1\le l\le c2^{s-r}-2$, we have
$$f_{r,s+1}(l2^{r+1}+2^r)\equiv f_{r,s}(l2^{r}+2^{r-1})\mod{2^{s+2}}.$$
\end{lem}
\begin{proof}
(i). If $\nu_2(i)\le r-3$, then using Lemmas \ref{lem04},
\ref{lem05} and \ref{lem12} (i), we deduce that
\begin{align}\label{02}
\nu_2(f_{r,s}(i))&=\nu_2\Big({c2^s\choose i}\Big)
+\nu_2\big(S(i,2^r-1)S(c2^s-i,2^r)\big)\nonumber\\
&\ge s-\nu_2(i)+r-1\ge s+2.
\end{align}
So part (i) is true if $\nu_2(i)\le r-3$.
Now we let $\nu_2(i)=r-2$. Since $2^r-1\le i$, one may let
$i=2^{r-2}+i_12^{r-1}$ with $i_1\ge2$. Thus by Lemma \ref{lem01} we obtain that
\begin{align}\label{q10}
&\nu_2\Big({\frac{i}{2}-2^{r-2}-1\choose 2^{r-2}-1}\Big)
=\nu_2\Big({i_12^{r-2}-2^{r-3}-1\choose 2^{r-2}-1}\Big)\nonumber\\
&=s_2(2^{r-2}-1)+s_2((i_1-1)2^{r-2}-2^{r-3})-s_2(i_12^{r-2}-2^{r-3}-1)\nonumber\\
&=(r-2)+s_2((i_1-2)2^{r-2}+2^{r-3})-s_2((i_1-1)2^{r-2}+2^{r-3}-1)\nonumber\\
&=2+s_2(i_1-2)-s_2(i_1-1)\nonumber\\&=1+s_2(1)+s_2(i_1-2)-s_2(i_1-1)\ge 1.
\end{align}
Clearly $i$ is even since $\nu_2(i)=r-2$ and $r\ge 3$. From Lemma
\ref{lem05}, we get that
\begin{align}\label{q101}
S(i,2^r-1)\equiv 2^{r-1}{\frac{i}{2}-2^{r-2}-1\choose 2^{r-2}-1}\mod
{2^r}.
\end{align}
It then follows from (\ref{q10}) and (\ref{q101}) that $\nu _2(S(i,2^r-1))\ge r$.
Since $\nu_2(i)=r-2$, by Lemma \ref{lem12} one has
\begin{align}\label{q11}
\nu_2(f_{r,s}(i))&=\nu_2\Big({c2^s\choose i}\Big)
+\nu_2\big(S(i,2^r-1)S(c2^s-i,2^r)\big)\nonumber\\
&\ge s-\nu_2(i)+r= s-(r-2)+r\nonumber\\
&\ge s+2.
\end{align}
By (\ref{q11}), part (i) holds
for the case $\nu_2(i)=r-2$. So part (i) is proved.
(ii). Let $\nu_2(i)=r-1$. Then we can write $i=2^{r-1}+i_22^r$ with
$i_2\ge1$ since $i\ge 2^r-1$. So Lemma \ref{lem05} tells us that
\begin{align}\label{q7}
S(i,2^r-1)=S(2^{r-1}+i_22^r, 2^r-1)\equiv2^{r-1}{i_22^{r-1}-1\choose
2^{r-2}-1} \mod{2^r}
\end{align}
since $r\ge 3$. Applying Lemma \ref{lem01}, we deduce that
\begin{align}\label{q701}
\nu_2\Big({i_22^{r-1}-1\choose 2^{r-2}-1}\Big)&=s_2(2^{r-2}-1)
+s_2(i_22^{r-1}-2^{r-2})-s_2(i_22^{r-1}-1)\nonumber\\
&=r-2+s_2((i_2-1)2^{r-1}+2^{r-2})-s_2((i_2-1)2^{r-1}+2^{r-1}-1)
\nonumber\\&=0.
\end{align}
It then follows from (\ref{q7}) and (\ref{q701}) that
\begin{align}\label{q8}
\nu_2(S(i,2^r-1))=r-1.
\end{align}
On the other hand, by Lemma \ref{lem06}
and noticing that $s\ge r$, we get that
\begin{align}\label{q9}
\nu_2(S(c2^s-i,2^r))=\nu_2(S(c2^s-i_22^r-2^{r-1}, 2^r))=0.
\end{align}
Then from Lemma \ref{lem12} (i), (\ref{q8}) and (\ref{q9}), we deduce
that $\nu_2(f_{r,s}(i))\ge s$. Furthermore, the equality holds if
and only if $s_2(c-1-\lfloor i/2^s\rfloor)+s_2(\lfloor
i/2^s\rfloor)=s_2(c-1)$. Thus part (ii) is proved.
(iii). We define a subset of positive integers as follows:
$$J=\{i|2^r<i<c2^s-2^r, \nu_2(i)=r-1, s_2(c-1-\Big\lfloor
\frac{i}{2^s}\Big\rfloor)+s_2(\Big\lfloor
\frac{i}{2^s}\Big\rfloor)=s_2(c-1)\}.$$
We claim that $|J|$ is even.
Note that $J=\emptyset$ if $c=1$ and $r\le s \le r+1$. If $c=1$ and
$s\ge r+2$, then $J=\{i|2^r<i<2^s-2^r,\,\nu_2(i)=r-1\}.$ One can
easily compute that $|J|=2^{s-r}-2$. So the claim is true for the case
$c=1$. In what follows we deal with the case that $c\ge3$ is odd.
Let $c\ge3$ be an odd integer. One can define three subsets of $J$ as follows:
$$J_1=\{i\in J|2^r<i<2^s\},\,J_2=\{i\in J|2^s<i<(c-1)2^s\},\,
J_3=\{i\in J|(c-1)2^s<i<c2^s-2^r\}.$$
Clearly, $2^s\not\in J$ and $(c-1)2^s\not\in J$. Thus $J_1$, $J_2$ and
$J_3$ are disjoint and $J=J_1\cup J_2\cup J_3$, which implies that
$|J|=|J_1|+|J_2|+|J_3|$. If either $\lfloor \frac{i}{2^s}\rfloor=0$
or $\lfloor \frac{i}{2^s}\rfloor=c-1$, then
$s_2(c-1-\Big\lfloor \frac{i}{2^s}\Big\rfloor)+s_2(\Big\lfloor
\frac{i}{2^s}\Big\rfloor)=s_2(c-1)$. It follows that
$$J_1=\{i|2^r<i<2^s,\nu_2(i)=r-1\} \ {\rm and} \
J_3=\{i|(c-1)2^s<i<c2^s-2^r, \nu_2(i)=r-1\}.$$
One can compute that $|J_1|=|J_3|=2^{s-r}-1$. So to finish the proof of
the claim, it suffices to show that $|J_2|$ is even.
Now take any element $i\in J_2$. Then $2^s<i<(c-1)2^s$. Since $i=(i-\Big\lfloor
\frac{i}{2^s}\Big\rfloor2^s)+\Big\lfloor\frac{i}{2^s}\Big\rfloor2^s$,
one has $\nu_2(i)=\nu_2(i-\Big\lfloor \frac{i}{2^s}\Big\rfloor2^s)$
and $1\le \Big\lfloor \frac{i}{2^s}\Big\rfloor\le c-2$.
We can easily check that
$$i':=(i-\Big\lfloor\frac{i}{2^s}\Big\rfloor2^s)+(c-1-\Big\lfloor
\frac{i}{2^s}\Big\rfloor)2^s\in J_2.$$
Then $i\ne i'$. Otherwise, $i=i'$ implies that
$\lfloor \frac{i}{2^s}\rfloor=\frac{c-1}{2}$, and so
$$s_2(c-1-\Big\lfloor\frac{i}{2^s}\Big\rfloor)+s_2(\Big\lfloor
\frac{i}{2^s}\Big\rfloor)=2s_2\Big(\frac{c-1}{2}\Big)>s_2(c-1)$$
since $c\ge 3$ is odd. Hence $i\not\in J_2$. We arrive at a contradiction.
It then follows that $|J_2|$ is even. The claim is proved.
From part (i) and (ii) and the claim above, we derive that
\begin{align*}
\sum_{i=2^r-1,\nu_2(i)\le r-1}^{c2^s-2^r}f_{r,s}(i)
\equiv\sum_{i\in J}f_{r,s}(i)\equiv 0 \mod{2^{s+1}},
\end{align*}
which implies that part (iii) holds.
(iv). By Lemma \ref{lem01}, we get that
\begin{align}\label{q13}
\nu_2\Big({c2^s\choose 2^r}\Big)&=\nu_2\Big({c2^s\choose c2^s-2^r}\Big)\nonumber\\
&=s_2(c2^s-2^r)+s_2(2^r)-s_2(c2^s)\nonumber\\
&=s_2((c-1)2^s)+s_2(2^s-2^r)+1-s_2(c)\nonumber\\
&=s-r
\end{align}
since $s\ge r$ and $c\ge 1$ is odd. Then one may let
${c2^s\choose 2^r}=2^{s-r}(1+2k_0)$ with $k_0\in \mathbb{N}$.
From Lemma
\ref{lem07}, we deduce that there are three integers
$k_1$, $k_2$ and $k_3$ such that $S(2^r,2^r-1)=
2^{r-1}(3+4k_1)$, $S(c2^s-2^r,2^r-1)= 2^{r-1}(3+4k_2)$
and $S(c2^s-2^r,2^r)=1+4k_3.$ It then follows from (\ref{q13})
that
\begin{align}\label{q14}
f_{r,s}(2^r)&={c2^s\choose 2^r}S(2^r,2^r-1)S(c2^s-2^r,2^r)\nonumber\\
&=2^{s-r}(1+2k_0)2^{r-1}(3+4k_1)(1+4k_3)\nonumber\\
&\equiv 2^{s-1}(3+2k_0)\mod{2^{s+1}}
\end{align}
and
\begin{align}\label{q15}
f_{r,s}(c2^s-2^r)&={c2^s\choose c2^s-2^r}S(c2^s-2^r,2^r-1)S(2^r,2^r)\nonumber\\
&=2^{s-r}(1+2k_0)2^{r-1}(3+4k_2)\nonumber\\
&\equiv 2^{s-1}(3+2k_0)\mod{2^{s+1}}.
\end{align}
Then the desired result follows immediately
from (\ref{q14}) and (\ref{q15}). Part (iv) is proved.
(v). Lemma \ref{lem02} gives us that
\begin{align}\label{07}
{c2^{s+1}\choose l2^{r+1}+2^r}\equiv {c2^{s}\choose
l2^{r}+2^{r-1}}\mod 2^{s+1}.
\end{align}
From Lemma \ref{lem12} (i), we obtain that
\begin{align}\label{08}
\nu_2\Big({c2^{s}\choose l2^{r}+2^{r-1}}\Big)\ge s-r+1.
\end{align}
It then follows from (\ref{07}), (\ref{08}) and Lemma \ref{lem07}
that
\begin{align}\label{09}
f_{r,s+1}(l2^{r+1}+2^r)&={c2^{s+1}\choose
l2^{r+1}+2^r}S(l2^{r+1}+2^r,2^r-1)S(c2^{s+1}-l2^{r+1}-2^r,2^r)\nonumber\\
&=\Big({c2^{s}\choose l2^{r}+2^{r-1}}+a_1\cdot
2^{s+1}\Big)2^{r-1}(3+4a_2)(1+4a_3)\nonumber\\
&\equiv 3\cdot 2^{r-1}{c2^{s}\choose l2^{r}+2^{r-1}}\mod 2^{s+2},
\end{align}
where $a_1,a_2,a_3\in \mathbb{N}$.
On the other hand, by Lemma \ref{lem09} and (\ref{08}), we
derive that
\begin{align}\label{10}
f_{r,s}(l2^{r}+2^{r-1})&={c2^{s}\choose l2^{r}+2^{r-1}}
S(l2^{r}+2^{r-1},2^r-1)S(c2^{s}-l2^{r}-2^{r-1},2^r)\nonumber\\
&={c2^{s}\choose l2^{r}+2^{r-1}}2^{r-1}(1+4a_4)(3+4a_5) \nonumber\\
&\equiv3\cdot 2^{r-1}{c2^{s}\choose l2^{r}+2^{r-1}}\mod{2^{s+2}},
\end{align}
where $a_4,a_5\in \mathbb{N}$. So part (v) follows immediately from
(\ref{09}) and (\ref{10}).
This completes the proof of Lemma \ref{lem13}. \hfill$\Box$
\end{proof}
\section{Proofs of Theorems 1.2 and 1.3}
In this section, we use the lemmas presented in previous section to
show Theorems \ref{thm0} and \ref{thm1}. We begin with the proof
of Theorem \ref{thm0}.\\
\noindent{\bf Proof of Theorem \ref{thm0}}. (i). By (\ref{1.1}), one
has
\begin{align} \label{a1}
S(a2^{n},3)-S(b2^{n},3)=\frac{1}{6}(3^{a2^n}-3^{b2^n})
-\frac{1}{2}(2^{a2^n}-2^{b2^n}).
\end{align}
Lemma \ref{lem03} tells us that
$\nu_2(3^{(a-b)2^n}-1)=n+\nu_2(a-b)+2$. Then
\begin{align} \label{a2}
\nu_2\big(\frac{1}{6}(3^{a2^n}-3^{b2^n})\big)=-1+
\nu_2\big(3^{b2^n-1}(3^{(a-b)2^n}-1)\big)=n+\nu_2(a-b)+1.
\end{align}
Since $a>b$, we get that
\begin{align} \label{a3}
\nu_2\big(\frac{1}{2}(2^{a2^n}-2^{b2^n})\big)=b2^n-1.
\end{align}
It then follows from (\ref{a1}) to (\ref{a3}) that part (i) is true.
Part (i) is proved.
(ii). Using (\ref{1.1}), we deduce that
\begin{align} \label{b1}
7!(S(a2^{n},7)-S(b2^{n},7))&=\sum_{i=0}^7(-1)^i{7\choose
i}((7-i)^{a2^n}-(7-i)^{b2^n})=h(1)+h(2),
\end{align}
where
\begin{align} \label{b2}
h(1):=7^{b2^n}(7^{(a-b)2^n}-1)+{7\choose 2}5^{b2^n}(5^{(a-b)2^n}-1)+
{7\choose 4}3^{b2^n}(3^{(a-b)2^n}-1)
\end{align}
and
\begin{align} \label{b3}
h(2):={7\choose 1}(6^{b2^n}-6^{a2^n})+{7\choose
3}(4^{b2^n}-4^{a2^n})+ {7\choose 5}(2^{b2^n}-2^{a2^n}).
\end{align}
Let $d:=n+\nu_2(a-b)$. Then there exists an odd integer $c$ such
that $(a-b)2^n=c2^d$. From Lemma \ref{lem12} (ii), one knows that
$$\sum_{i=3}^{c2^d}(-1)^i8^i{c2^d\choose i}\equiv
\sum_{i=3}^{c2^d}(-1)^i4^i{c2^d\choose i}\equiv
\sum_{i=3}^{c2^d}4^i{c2^d\choose i}\equiv 0\mod 2^{d+6}.$$ It
follows that
\begin{align} \label{b4}
7^{c2^d}-1=(8-1)^{c2^d}-1=\sum_{i=1}^{c2^d}(-1)^i8^i{c2^d\choose
i}\equiv -c2^{d+3}-c2^{d+5}\mod{2^{d+6}},
\end{align}
\begin{align} \label{b5}
5^{c2^d}-1=(4+1)^{c2^d}-1=\sum_{i=1}^{c2^d}4^i{c2^d\choose i} \equiv
c2^{d+2}-c2^{d+3}\mod{2^{d+6}}
\end{align}
and
\begin{align} \label{b6}
3^{c2^d}-1=(4-1)^{c2^d}-1=\sum_{i=1}^{c2^d}(-1)^i4^i{c2^d\choose
i} \equiv -c2^{d+2}-c2^{d+3}\mod{2^{d+6}}.
\end{align}
Since $b\ge1$ and $n\ge3$, one has $7^{b2^n}\equiv 5^{b2^n} \equiv
3^{b2^n}\equiv 1 \mod 2^4$. Then by (\ref{b2}) and (\ref{b4}) to
(\ref{b6}), and noting that $c\ge1$ is odd, we get that
\begin{align}\label{b7}
h(1)&\equiv-c2^{d+3}-c2^{d+5}+{7\choose 2}
(c2^{d+2}-c2^{d+3})+{7\choose
4}(-c2^{d+2}-c2^{d+3})\nonumber\\
&\equiv c2^{d+5}\equiv 2^{d+5}\mod{2^{d+6}}.
\end{align}
Thus by (\ref{b7}), we have
\begin{align}\label{b8}
\nu_2(h(1))=d+5=n+5+\nu_2(a-b).
\end{align}
On the other hand, since $a>b\ge1$ and $n\ge3$, one has
$a2^n>b2^n+3$ and $b2^{n+1}>b2^n+3$. So by (\ref{b3}), we obtain
that
\begin{align} \label{b9}
h(2)\equiv 7\cdot 6^{b2^n}+{7\choose 5}2^{b2^n}=7\cdot
2^{b2^n}(3^{b2^n}+3)\equiv 2^{b2^n+2}\mod 2^{b2^n+3}
\end{align}
since $3^{b2^n}\equiv1 \mod 2^{n+1}$ and $n\ge3$. Hence (\ref{b9})
tells us that
\begin{align} \label{b10}
\nu_2(h(2))=b2^n+2.
\end{align}
Note that $\nu_2(7!)=4$. It then follows from (\ref{b1}), (\ref{b8})
and (\ref{b10}) that part (ii) is true.
The proof of
Theorem \ref{thm0} is complete. \hfill $\Box$\\
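Theorem~\ref{thm0} can likewise be confirmed numerically; the following Python sketch (an illustrative sanity check, not used in the proofs) tests part (i) for $k=3$ and small parameters, using the closed form $6\,S(N,3)=3^N-3\cdot2^N+3$ obtained from \eqref{1.1}.
\begin{verbatim}
# Check of part (i) of the theorem for k = 3 and small a, b, n.
def v2(m):
    v = 0
    while m % 2 == 0:
        m //= 2
        v += 1
    return v

def S3(N):
    return (3 ** N - 3 * 2 ** N + 3) // 6

for n in range(2, 8):
    for b in range(1, 5):
        for a in range(b + 1, b + 6):
            val = v2(S3(a * 2 ** n) - S3(b * 2 ** n))
            threshold = n + 2 + v2(a - b)
            if b * 2 ** n > threshold:
                assert val == n + 1 + v2(a - b)
            elif b * 2 ** n == threshold:
                assert val > n + 1 + v2(a - b)
            else:
                assert val == b * 2 ** n - 1
\end{verbatim}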
In what follows, we give the proof of Theorem \ref{thm1},
which concludes this paper.\\
\noindent{\bf Proof of Theorem \ref{thm1}}. We first deal with the
case $m=2$. If $n=2$ and $c=1$, then by (\ref{1.1}), we compute that
$$S(c2^{n+1},3)-S(c2^{n},3)=S(8,3)-S(4,3)=960,
$$
which implies that $\nu_2(S(8,3)-S(4,3))=6$. If either $n\ge3$ or
$c\ge3$, then $c2^n>n+2+\nu_2(c)$ since $c$ is odd. So by
Theorem \ref{thm0} (i), one knows that
$$\nu_2(S(c2^{n+1},3)-S(c2^{n},3))=n+1+\nu_2(c)=n+1
$$
as desired. So Theorem \ref{thm1} is proved for the case $m=2$.
In what follows we assume that $m\ge 3$. We proceed with induction
on $m$. Consider the case $m=3$. Since $n\ge3$ and $c$ is odd, one has
$c2^n>n+3+\nu_2(c)$. So we can apply Theorem \ref{thm0} (ii) and get that
$$
\nu_2(S(c2^{n+1},7)-S(c2^{n},7))=n+1+\nu_2(c)=n+1,
$$
which implies that Theorem \ref{thm1} is true for $m=3$.
Now let $m\ge 4$. Assume that Theorem \ref{thm1}
is true for the $m-1$ case. Then
$\nu_2(S(c2^{n+1}, 2^{m-1}-1)-S(c2^{n}, 2^{m-1}-1))=n+1$.
In the following we prove that Theorem \ref{thm1}
is true for the $m$ case. Write
\begin{align}\label{03} \Delta:= {2^m-1\choose
2^{m-1}}\big(S(c2^{n+1},2^m-1)-S(c2^n, 2^m-1)\big).
\end{align}
Then Lemma \ref{lem01} gives us that
\begin{align}\label{04}
\nu_2\big({2^{m}-1\choose
2^{m-1}}\big)=s_2(2^{m-1}-1)+s_2(2^{m-1})-s_2(2^{m}-1) =0.
\end{align}
It follows from (\ref{03}) and (\ref{04}) that
\begin{align*}
\nu_2(\Delta)=\nu_2(S(c2^{n+1},2^m-1)-S(c2^n, 2^m-1)).
\end{align*}
So to prove that Theorem \ref{thm1} is true for the
$m$ case, it is sufficient to show that
\begin{align} \label{q16}
\Delta\equiv 2^{n+1}\mod{2^{n+2}},
\end{align}
which will be done in what follows.
By Lemma \ref{lem11}, we obtain that
\begin{align}\label{q17}
{2^m-1\choose 2^{m-1}}S(c2^{n+1},2^m-1)&=\sum_{i=2^{m-1}-1}^{c2^{n+1}-2^{m-1}}
{c2^{n+1}\choose i}S(i,2^{m-1}-1)S(c2^{n+1}-i, 2^{m-1})\nonumber\\
&=\sum_{i=2^{m-1}-1,\atop \nu_2(i)\le m-2}^{c2^{n+1}-2^{m-1}}f_{m-1,n+1}(i)+
\sum_{i=2^{m-1}-1, \atop \nu_2(i)\ge m-1}^{c{2^{n+1}}-2^{m-1}}f_{m-1,n+1}(i)
\end{align}
and
\begin{align}
{2^m-1\choose
2^{m-1}}S(c2^{n},2^m-1)&=\sum_{i=2^{m-1}-1}^{c2^{n}-2^{m-1}}
{c2^{n}\choose i}S(i,2^{m-1}-1)S(c2^{n}-i, 2^{m-1})\nonumber\\
&=\sum_{i=2^{m-1}-1,\atop \nu_2(i)\le
m-3}^{c2^{n}-2^{m-1}}f_{m-1,n}(i) +\sum_{i=2^{m-1}-1, \atop
\nu_2(i)\ge m-2}^{c{2^{n}-2^{m-1}}}f_{m-1,n}(i),
\end{align}
where $f_{m-1,n}(i)$ and $f_{m-1,n+1}(i)$ are defined as in
(\ref{ad3}). Then letting $r=m-1$ and $s=n+1$ in Lemma \ref{lem13}
(iii) gives us that
\begin{align}
\nu_2\Big(\sum_{i=2^{m-1}-1,\atop \nu_2(i)\le
m-2}^{c2^{n+1}-2^{m-1}} f_{m-1,n+1}(i)\Big)\ge n+2.
\end{align}
Using Lemma \ref{lem13} (i) with $r=m-1$ and $s=n$, we deduce that
\begin{align}\label{q18}
\nu_2\Big(\sum_{i=2^{m-1}-1,\atop \nu_2(i)
\le m-3}^{c2^{n}-2^{m-1}}f_{m-1,n}(i)\Big)
\ge \min_{2^{m-1}-1\le i\le c2^{n}-2^{m-1}\atop \nu_2(i)
\le m-3}\{\nu_2(f_{m-1,n}(i))\}\ge n+2.
\end{align}
Then by (\ref{03}) and (\ref{q17})-(\ref{q18}), we conclude that
\begin{align}\label{q19}
\Delta&\equiv\sum_{i=2^{m-1}-1, \atop \nu_2(i)\ge
m-1}^{c{2^{n+1}}-2^{m-1}}f_{m-1,n+1}(i)-\sum_{i=2^{m-1}-1, \atop
\nu_2(i)\ge m-2}^{c{2^{n}-2^{m-1}}}f_{m-1,n}(i)\mod{2^{n+2}}.
\end{align}
On the other hand, we have
\begin{align}
\sum_{i=2^{m-1}-1, \atop \nu_2(i)\ge m-1}^{c{2^{n+1}}-2^{m-1}}f_{m-1,n+1}(i)
&=\sum_{i=2^{m-1}-1, \atop \nu_2(i)=m-1}^{c{2^{n+1}}-2^{m-1}}f_{m-1,n+1}(i)
+\sum_{i=2^{m-1}-1, \atop \nu_2(i)>m-1}^{c{2^{n+1}}-2^{m-1}}
f_{m-1,n+1}(i)\nonumber\\
&=\sum_{l=0}^{c2^{n-m+1}-1}f_{m-1,n+1}(l2^m+2^{m-1})+
\sum_{l=1}^{c2^{n-m+1}-1}f_{m-1,n+1}(l2^m)
\end{align}
and
\begin{align}
\sum_{i=2^{m-1}-1, \atop \nu_2(i)\ge m-2}^{c{2^{n}-2^{m-1}}}f_{m-1,n}(i)
&=\sum_{i=2^{m-1}-1, \atop \nu_2(i)=m-2}^{c{2^{n}}-2^{m-1}}f_{m-1,n}(i)
+\sum_{i=2^{m-1}-1, \atop \nu_2(i)>m-2}^{c{2^{n}}-2^{m-1}}
f_{m-1,n}(i)\nonumber\\
&=\sum_{l=1}^{c2^{n-m+1}-2}f_{m-1,n}(l2^{m-1}+2^{m-2})+
\sum_{l=1}^{c2^{n-m+1}-1}f_{m-1,n}(l2^{m-1}).
\end{align}
Using Lemma \ref{lem13} (iv) with $s=n+1$ and $r=m-1$, we obtain
that
\begin{align}
f_{m-1,n+1}(2^{m-1})+f_{m-1,n+1}(c2^{n+1}-2^{m-1})\equiv 2^{n+1}\mod {2^{n+2}}.
\end{align}
Letting $s=n$ and $r=m-1$ in Lemma \ref{lem13} (v), we derive that
\begin{align}\label{q20}
\sum_{l=1}^{c2^{n-m+1}-2}\big(f_{m-1,n+1}(l2^m+2^{m-1})
-f_{m-1,n}(l2^{m-1}+2^{m-2})\big)\equiv
0\mod {2^{n+2}}.
\end{align}
Let $L:=\{l\in\mathbb{N}|1\le l\le c2^{n-m+1}-1\}$.
We define the following three subsets of $L$:
$$L_1=\{l\in L|\nu_2(l)<n-m+1\},\,
L_2=\{l\in L|\nu_2(l)=n-m+1\},\, L_3=\{l\in L|\nu_2(l)>n-m+1\}.$$
Then $L_1, L_2$ and $L_3$ are disjoint and $L=L_1\cup L_2\cup L_3$.
For $i\in\{1,2,3\}$, we define
$$\Delta_i:=\sum_{l\in L_i}\big(f_{m-1,n+1}(l2^m)-f_{m-1,n}(l2^{m-1})\big).$$
It then follows from (\ref{q19}) to (\ref{q20}) that
\begin{align}\label{q21}
\Delta&\equiv2^{n+1}+\sum_{l\in L}\Big(f_{m-1,n+1}(l2^m)
-f_{m-1,n}(l2^{m-1})\Big)\nonumber
\\&\equiv 2^{n+1}+\Delta_1+\Delta_2+\Delta_3 \mod{2^{n+2}}.
\end{align}
We claim that
\begin{align}\label{q22}
\Delta_1+\Delta_2+\Delta_3\equiv0\mod{2^{n+2}}.
\end{align}
Then from (\ref{q21}) and the claim (\ref{q22}), we deduce that
$\Delta\equiv2^{n+1}\mod{2^{n+2}}$, i.e. (\ref{q16}) is true. In
other words, Theorem \ref{thm1} holds for the $m$ case. It remains
to show that (\ref{q22}) is true, which we will do in the following.
From Lemma \ref{lem02}, we obtain that
\begin{align}\label{q23}
{c2^{n+1}\choose l2^m}\equiv{c2^{n}\choose l2^{m-1}}\mod{2^{n+1}},
\end{align}
and Theorem \ref{lem10} tells us that
\begin{align}\label{q24}
\nu_2(S(c2^{n+1}-l2^m,2^{m-1})-S(c2^{n}-l2^{m-1},2^{m-1}))=
\nu_2(c2^{n-m+1}-l)+2.
\end{align}
By the inductive hypothesis, we infer that
\begin{align}\label{q25}
\nu_2(S(l2^m,2^{m-1}-1)-S(l2^{m-1},2^{m-1}-1))=\nu_2(l)+m.
\end{align}
It then follows from (\ref{q23}) to (\ref{q25}) and Lemma \ref{lem07}
that
\begin{align}\label{q26}
f_{m-1,n+1}(l2^m)=&{c2^{n+1}\choose l2^m}S(l2^m,2^{m-1}-1)
S(c2^{n+1}-l2^m,2^{m-1})\nonumber\\
=&\Big({c2^{n}\choose l2^{m-1}}+b_1\cdot 2^{n+1}\Big)
\Big(S(l2^{m-1},2^{m-1}-1)+(2b_2+1)\cdot 2^{\nu_2(l)+m}\Big)\nonumber\\
&\cdot\Big(S(c2^{n}-l2^{m-1},2^{m-1})+(2b_3+1)\cdot 2^{\nu_2(c2^{n-m+1}-l)+2}\Big),
\nonumber\\
\equiv&{c2^{n}\choose l2^{m-1}}\Big(S(l2^{m-1},2^{m-1}-1)
+(2b_2+1)\cdot 2^{\nu_2(l)+m}\Big)\nonumber\\
&\cdot\Big(S(c2^{n}-l2^{m-1},2^{m-1})+(2b_3+1)
\cdot 2^{\nu_2(c2^{n-m+1}-l)+2}\Big) \mod {2^{n+2}},
\end{align}
where $b_1,b_2,b_3\in \mathbb{N}$.
We first treat $\Delta_1$. For any element $l\in L_1$, by Lemma
\ref{lem12} (i), we get that
\begin{align}\label{q27}
\nu_2\Big({c2^{n}\choose l2^{m-1}}\Big)\ge n-m+1-\nu_2(l).
\end{align}
From Lemma \ref{lem07}, we derive that
\begin{align}\label{q28}
\nu_2\big(S(l2^{m-1},2^{m-1}-1)\cdot2^{\nu_2(l)+2}\big)=
\nu_2\big(S(c2^{n}-l2^{m-1},2^{m-1})\cdot 2^{\nu_2(l)+m}\big)=\nu_2(l)+m.
\end{align}
So by (\ref{q28}), we have
\begin{align}\label{q29}
\nu_2\big(S(l2^{m-1},2^{m-1}-1)\cdot2^{\nu_2(l)+2}+S(c2^{n}-l2^{m-1},
2^{m-1})\cdot 2^{\nu_2(l)+m}\big)\ge\nu_2(l)+m+1.
\end{align}
It then follows from (\ref{q26}), (\ref{q27}), (\ref{q29}), Lemma \ref{lem07}
and the fact $\nu_2(c2^{n-m+1}-l)=\nu _2(l)$ that
\begin{align*}
f_{m-1,n+1}(l2^m)\equiv&{c2^{n}\choose l2^{m-1}}
\Big(S(l2^{m-1},2^{m-1}-1)+2^{\nu_2(l)+m}\Big)
\Big(S(c2^{n}-l2^{m-1},2^{m-1})+2^{\nu_2(l)+2}\Big)\\
\equiv&{c2^{n}\choose l2^{m-1}}\Big(S(l2^{m-1},2^{m-1}-1)
\cdot2^{\nu_2(l)+2}+S(c2^{n}-l2^{m-1},2^{m-1})\cdot2^{\nu_2(l)+m}\Big)
\\
&+f_{m-1,n}(l2^{m-1})\equiv f_{m-1,n}(l2^{m-1})\mod{2^{n+2}}.
\end{align*}
That is, for each $l\in L_1$, one has
\begin{align}\label{q30}
f_{m-1,n+1}(l2^m)-f_{m-1,n}(l2^{m-1})\equiv 0\mod{2^{n+2}}.
\end{align}
Thus by (\ref{q30}), we deduce that
\begin{align}\label{q3001}
\Delta_1\equiv 0\mod{2^{n+2}}.
\end{align}
Next, we consider $\Delta_2$. Take an $l\in L_2$. One may write
$l=l_12^{n-m+1}$ with $1\le l_1\le c-1$ being odd. By Lemma
\ref{lem01}, we obtain that
\begin{align}\label{q31}
\nu_2\Big({c2^{n}\choose l2^{m-1}}\Big)
&=\nu_2\Big({c2^{n}\choose l_12^n}\Big)\nonumber\\
&=s_2(l_12^{n})+s_2((c-l_1)2^n)-s_2(c2^n)\nonumber\\
&=s_2(l_1)+s_2(c-l_1)-s_2(c).
\end{align}
It follows from Lemma \ref{lem07} and (\ref{q31}) that
\begin{align}\label{q32}
\nu_2\Big({c2^{n}\choose l2^{m-1}} 2^{\nu_2(l)+m}S(c2^{n}-l2^{m-1},
2^{m-1})\Big)=n+1+s_2(l_1)+s_2(c-l_1)-s_2(c).
\end{align}
Since $c$ is odd, one has $\nu_2(c2^{n-m+1}-l)\ge n-m+2$. Then
from (\ref{q26}), (\ref{q31}) and Lemma \ref{lem07}, we deduce that
\begin{align}\label{q33}
f_{m-1,n+1}(l2^m)&\equiv{c2^{n}\choose l2^{m-1}}\Big(S(l2^{m-1},
2^{m-1}-1)+ 2^{\nu_2(l)+m}\Big)S(c2^{n}-l2^{m-1},2^{m-1})\nonumber\\
&\equiv f_{m-1,n}(l2^{m-1})+{c2^{n}\choose l2^{m-1}} 2^{\nu_2(l)+m}
S(c2^{n}-l2^{m-1},2^{m-1})\mod {2^{n+2}}.
\end{align}
Note that $l_1=\frac{l}{2^{n-m+1}}$. Thus by (\ref{q32}) and
(\ref{q33}), we deduce that for each $l\in L_2$, we have
\begin{align}\label{q34}
f_{m-1,n+1}(l2^m)-f_{m-1,n}(l2^{m-1})\equiv
\left\{
\begin{array}{lc}
2^{n+1} & {\rm if} \ s_2(\frac{l}{2^{n-m+1}})
+s_2(c-\frac{l}{2^{n-m+1}})=s_2(c),\\
0 & {\rm otherwise}
\end{array}
\right.
\mod{2^{n+2}}.
\end{align}
It follows from (\ref{q34}) that
\begin{align}\label{q3401}
\Delta_2\equiv \sum_{l\in
L'_2}(f_{m-1,n+1}\big(l2^m)-f_{m-1,n}(l2^{m-1})\big)\equiv
2^{n+1}|L'_2|\mod{2^{n+2}},
\end{align}
where
$$L'_2:=\{l\in L_2|s_2(\frac{l}{2^{n-m+1}})
+s_2(c-\frac{l}{2^{n-m+1}})=s_2(c)\}.$$
Now we deal with $\Delta_3$. For any $l\in L_3,$ since $c$
is odd, one has $\nu_2(c2^{n-m+1}-l)=n-m+1$ and $1\le
c2^{n-m+1}-l\le c2^{n-m+1}-1$. Then there exists an odd integer
$1\le l_2\le c-1$ such that $c2^{n-m+1}-l=l_22^{n-m+1}$.
By Lemma \ref{lem01}, we get that
\begin{align}\label{q35}
\nu_2\Big({c2^{n}\choose l2^{m-1}}\Big)&=\nu_2\Big({c2^{n}\choose
(c2^{n-m+1}-l)2^{m-1}}\Big)\nonumber\\
&=\nu_2\Big({c2^{n}\choose l_22^n}\Big)\nonumber\\
&=s_2(l_2)+s_2(c-l_2)-s_2(c).
\end{align}
Furthermore, by Lemma \ref{lem07} and (\ref{q35}), we obtain that
\begin{align}\label{q36}
\nu_2\Big({c2^{n}\choose l2^{m-1}} 2^{n-m+3}S(l2^{m-1},
2^{m-1}-1)\Big)=n+1+s_2(l_2)+s_2(c-l_2)-s_2(c).
\end{align}
On the other hand, since $\nu_2(l)>n-m+1$ and $\nu_2(c2^{n-m+1}-l)=n-m+1$,
then by (\ref{q26}) and Lemma \ref{lem07} we know that
\begin{align}\label{q37} f_{m-1,n+1}(l2^m)&\equiv{c2^{n}\choose
l2^{m-1}}S(l2^{m-1},2^{m-1}-1)
\Big(S(c2^{n}-l2^{m-1},2^{m-1})+2^{\nu_2(c2^{n-m+1}-l)+2}\Big)\nonumber\\
&\equiv f_{m-1,n}(l2^{m-1})+{c2^{n}\choose l2^{m-1}} 2^{n-m+3}
S(l2^{m-1},2^{m-1}-1)\mod {2^{n+2}}.
\end{align}
Since $l_2=c-\frac{l}{2^{n-m+1}}$, by (\ref{q36}) and (\ref{q37}),
one deduces that for each $l\in L_3$,
\begin{align}\label{q38}
f_{m-1,n+1}(l2^m)-f_{m-1,n}(l2^{m-1})\equiv \left\{
\begin{array}{lc}
2^{n+1}& {\rm if} \ s_2(\frac{l}{2^{n-m+1}})+s_2(c-\frac{l}{2^{n-m+1}})
=s_2(c),\\
0&{\rm otherwise}
\end{array}
\right.
\mod{2^{n+2}}.
\end{align}
It then follows from (\ref{q38}) that
\begin{align}\label{q3801}
\Delta_3\equiv \sum_{l\in
L'_3}(f_{m-1,n+1}\big(l2^m)-f_{m-1,n}(l2^{m-1})\big)\equiv
2^{n+1}|L'_3|\mod{2^{n+2}},
\end{align}
where
$$L'_3:=\{l\in L_3|s_2(\frac{l}{2^{n-m+1}})+s_2(c-\frac{l}{2^{n-m+1}})=s_2(c)\}.$$
Let $\Psi: L'_2\rightarrow L'_3$ be a mapping defined for any $l\in L'_2$
by $\Psi(l):=c2^{n-m+1}-l.$ Obviously, $\Psi$ is well defined and injective.
Take an element $\tilde{l}\in L'_3$. Then $1\le \tilde{l}\le c2^{n-m+1}-1$,
$\nu_2(\tilde{l})>n-m+1$ and
$$s_2(\frac{\tilde{l}}{2^{n-m+1}})+s_2(c-\frac{\tilde{l}}{2^{n-m+1}})=s_2(c).$$
Let $l=c2^{n-m+1}-\tilde{l}.$ It follows that $1\le l\le c2^{n-m+1}-1$,
$\nu_2(l)=n-m+1$ and $$s_2(\frac{l}{2^{n-m+1}})
+s_2(c-\frac{l}{2^{n-m+1}})=s_2(c-\frac{\tilde{l}}{2^{n-m+1}})
+s_2(\frac{\tilde{l}}{2^{n-m+1}})=s_2(c).$$
Thus $l\in L'_2$. So $\Psi$ is surjective. Hence $\Psi$ is
bijective, which means that $|L'_2|=|L'_3|$. It then follows from
(\ref{q3001}), (\ref{q3401}) and (\ref{q3801}) that
\begin{align*}
\Delta_1+\Delta_2+\Delta_3&\equiv 2^{n+1}(|L'_2|+|L'_3|)\equiv 0 \mod{2^{n+2}}
\end{align*}
as claimed. Hence (\ref{q22}) is proved.
This completes the proof of Theorem
\ref{thm1}. \hfill$\Box$
\small
| {
"timestamp": "2014-08-01T02:10:29",
"yymm": "1407",
"arxiv_id": "1407.8443",
"language": "en",
"url": "https://arxiv.org/abs/1407.8443",
"abstract": "Let $m, n, k$ and $c$ be positive integers. Let $\\nu_2(k)$ be the 2-adic valuation of $k$. By $S(n,k)$ we denote the Stirling numbers of the second kind. In this paper, we first establish a convolution identity of the Stirling numbers of the second kind and provide a detailed 2-adic analysis to the Stirling numbers of the second kind. Consequently, we show that if $2\\le m\\le n$ and $c$ is odd, then $\\nu_2(S(c2^{n+1},2^m-1)-S(c2^n, 2^m-1))=n+1$ except when $n=m=2$ and $c=1$, in which case $\\nu_2(S(8,3)-S(4,3))=6$. This solves a conjecture of Lengyel proposed in 2009.",
"subjects": "Number Theory (math.NT)",
"title": "The 2-adic valuations of differences of Stirling numbers of the second kind",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109543125689,
"lm_q2_score": 0.7185943925708561,
"lm_q1q2_score": 0.7076796295113656
} |
https://arxiv.org/abs/1307.8246 | Note on Bessaga-Klee classification | We collect several variants of the proof of the third case of the Bessaga-Klee relative classification of closed convex bodies in topological vector spaces. We were motivated by the fact that we have not found anywhere in the literature a complete correct proof. In particular, we point out an error in the proof given in the book of C.~Bessaga and A.~Pełczyński (1975). We further provide a simplified version of T.~Dobrowolski's proof of the smooth classification of smooth convex bodies in Banach spaces which works simultaneously in the topological case. | \section{Introduction}
A well-known result due to Bessaga and Klee (see, for example, \cite[Section III.6]{BePe}) provides a classification of pairs $(X,U)$, where $X$ is a Hausdorff topological vector space and $U\subset X$ a closed convex body, up to a homeomorphism. Let us recall this result.
Let $X$ be a Hausdorff topological vector space and $U\subset X$ a closed convex body (i.e., a closed convex set with nonempty interior). The {\it characteristic cone} of $U$ (denoted by $\cc U$) is the set of those $x\in X$ such that the half-line $a+[0,+\infty)x$ is contained in $U$ for some $a\in U$. If $0\in\Int U$, then $\cc U$ is exactly the zero set of the Minkowski functional of $U$ (see, e.g., \cite[Section III.1]{BePe}).
Then the classification is summed up in the following theorem:
\begin{thm}\label{t:BK}
Let $X$ be a Hausdorff topological vector space and $U\subset X$ a closed convex body.
\begin{itemize}
\item[(i)] If $\cc U$ is a linear subspace of finite codimension $m$, then the pair $(X,U)$ is homeomorphic to the pair $(\cc U\times \er^m,\cc U\times[0,1]^m)$.
\item[(ii)] If $\cc U$ is a linear subspace of infinite codimension, then
the pair $(X,U)$ is homeomorphic to the pair $(X,X^+)$, where $X^+$ is a closed half-space of $X$.
\item[(iii)] If $\cc U$ is not a linear subspace, then the pair $(X,U)$ is homeomorphic also to the pair $(X,X^+)$.
\end{itemize}
\end{thm}
We have studied this result at a seminar with students using the book \cite{BePe} and we encountered a difficulty with proving the assertion (iii).
On page 112 of that book a formula is given, illustrated by a picture and followed by the claim that `it is not difficult to check' that this gives the required homeomorphism. After some effort we realized that this claim is not true -- the given formula need not provide a homeomorphism. This is explained in Section~\ref{s:BP} below.
After finding the error we tried to correct it and to look at the literature for a correct proof. The original reference for the result is the paper \cite{BeKl}. However, this paper does not contain explicit formulation of the theorem. The desired statement is a special case of the more general \cite[Lemma 1.3]{BeKl}.
And again, in its proof a formula is described and followed by the claim that `it is tedious but not difficult to verify' that the formula gives the desired homeomorphism. In this case the claim is correct. In fact, the proof is not even too tedious. In Section~\ref{s:BK} we describe this method applied directly to the case of the above theorem.
Before finding and analyzing the original paper we established a correction
of the proof from \cite{BePe}. This correction is described in Section~\ref{s:C1}.
It is quite complicated, but we think it contains several interesting features.
Later, after analyzing the original method we got an idea that the error in \cite{BePe} is probably due to a misprint. And really, this yields the proof described in Section~\ref{s:C2}. The proof is a bit more complicated than the original one.
Finally, we found the paper \cite{Dob} where an analogous classification of $C^p$-smooth convex bodies in Banach spaces up to a $C^p$-diffeomorphism is given. As a special case $p=0$ the homeomorphic classification is given. The proof of the case (iii) takes only half a page. It refers to the implicit function theorem
\cite[Theorem 10.2.5]{Dd}. However, the key parts of the proof are missing (for example the proof that the respective maps are bijections and the proof that the Fr\'echet differential at each point is an onto isomorphism). Further, there is
one small mistake in the definition of one of the important sets.
In Section~\ref{s:D} below we give a proof using the method of \cite{Dob} for
the homeomorphism case. Under the additional smoothness assumptions the same proof
provides the classification up to a diffeomorphism. Further, our proof is more elementary, since it uses only a simple version of the implicit function theorem (see Theorem~\ref{IFT} below).
In view of this situation we decided to write down several variants of the proof because we think that such a result deserves it.
\medskip
Let us fix some notation. We adopt the notation of \cite{BePe}, the notation in the other two works is different.
If $U$ is a convex set containing $0$ in its interior, we denote by $w_U$ the Minkowski functional of $U$. Further, $\cs U$ is the set of those $x\in U$ such that the line $a+\er x$ is contained in $U$ for some $a\in X$. In other words,
$\cs U=\cc U\cap \cc(-U)$.
\section{The basic method of the proof}
We will review below several possibilities of proving the assertion (iii) of Theorem~\ref{t:BK}. Not surprisingly, all the proofs follow the same pattern.
Let us describe this general pattern.
Let $U\subset X$ be a closed convex body such that $\cc U$ is not a linear subspace. This means that there is $y\in \cc U$ such that $-y\notin\cc U$. Without loss of generality we may suppose that $0\in \Int U$. Then $[0,+\infty)y\subset U$, $(-\infty,0]y\not\subset U$ and there is some $\varepsilon>0$ such that $(-\varepsilon,0]y\subset U$. Hence, without loss of generality we may suppose that $-y\in\partial U$. If we define a linear functional on $\er y$ by the formula $\psi_0(ty)=-t$, then $\psi_0(ty)\le w_U(ty)$ for each $t\in\er$. So, the Hahn-Banach theorem implies that there is a linear functional $\psi$ on $X$ extending $\psi_0$ such that $\psi(x)\le w_U(x)$ for each $x\in X$. Set
$\varphi=-\psi$. Then $\varphi$ is a linear functional on $X$ such that
$\varphi(-y)=-1$ and $\varphi(x)\ge -1$ for $x\in U$. In particular, $|\varphi(x)|\le 1$ on $U\cap (-U)$, so $\varphi$ is continuous.
Set $Z=\{x\in X:\varphi(x)=-1\}$.
Now, a basic method of constructing a homeomorphism of the pair $(X,U)$ onto the pair $(X,\varphi^{-1}([-1,+\infty)))$ is the following: To any $z\in Z$ assign some $c(z)\in [-1,+\infty)y$. Let $u(z)$ be the last point of the segment $[c(z),z]$ contained in $U$ and let $v(z)$ be a suitable point on the segment $(c(z),u(z))$.
Next, we choose a self-homeomorphism $h_z$ of the halfline $c(z)+(0,+\infty)(z-c(z))$ which is identity on the segment $(c(z),v(z)]$ and the segment $[v(z),u(z)]$ is mapped onto the segment $[v(z),z]$. Finally define the global homeomorphism $H$ by $h_z$ at the respective halfline and by the identity at the points not covered by such halflines.
\special{psfile=bk1p.ps vscale=70 hscale=70
voffset=-450 hoffset=-100}
\vskip60mm
Then a proof that $H$ is indeed a homeomorphism requires three steps:
\begin{itemize}
\item $H$ is well-defined (i.e., the respective halflines do not intersect).
\item $H$ is a self-homeomorphism of the union of the halflines.
\item $H$ remains homeomorphism if glued with the identity.
\end{itemize}
The proofs appearing in the literature differ in the formula for $c(z)$, the choice of $v(z)$ and the definition of $h_z$.
An important part of the proof (namely of the second step) consists in using the following easy lemma.
\begin{lemma}\label{l:Mink} Let $U\subset X$ be a closed convex body. Then the mapping
$$(u,v)\mapsto w_{U-u}(v)$$
is continuous on $\Int U\times X$.
\end{lemma}
\begin{proof} Let $c\in\er$ be arbitrary. We will show that the sets
$$\{(u,v)\in\Int U\times X:w_{U-u}(v)<c\}\mbox{ and }\{(u,v)\in\Int U\times X:w_{U-u}(v)>c\}$$
are open.
If $c\le 0$, then the first set is empty. For $c>0$ the inequality $w_{U-u}(v)<c$ is equivalent to $v\in c\Int(U-u)$, so $v+cu\in \Int U$. It follows that the first set is in this case open.
The second set equals $\Int U\times X$ for $c<0$. For $c=0$ it equals
$\Int U\times(X\setminus\cc U)$. Finally, for $c> 0$
the inequality $w_{U-u}(v)>c$ is equivalent to $v\notin c(U-u)$, i.e.,
$v+cu\in X\setminus cU$. In any case the second set is open as well.
\end{proof}
\section{Several variants of the proof}
In this section we collect several variants of the proof. We start with the original
proof, which is hidden in \cite{BeKl}; then we continue by explaining why the proof in \cite{BePe} is incorrect and suggest two possible corrections.
\subsection{The original proof}\label{s:BK}
As we have remarked above, the paper \cite{BeKl} in fact does not contain an explicit formulation of the theorem.
But the result follows from a more general Lemma 1.3. Let us give the proof to see that it is really easy,
if properly formulated.
Fix a closed convex body $V$ such that $[0,+\infty)y\subset \Int V\subset V\subset \Int U$.
For example, one can take $V=\frac12U$ or $V=\frac y2+U$. Set $W=V\cap(-V)\cap\Ker\varphi$. Then $W$ is a closed convex body in $\Ker\varphi$ and, moreover, $\cc W=\cs W=\cs V$.
For $z\in Z$ define $c(z)=w_W(z+y)y$ and let $v(z)$ be the last point of the segment $[c(z),z]$ contained in $V$. The homeomorphism $h_z$ is defined as identity on the segment $(c(z),v(z)]$, on $[v(z),u(z)]$ as the affine transformation sending this segment to $[v(z),z]$ and on the halfline $u(z)+(0,+\infty)(z-c(z))$ as a translation.
\special{psfile=bk2p.ps vscale=70 hscale=70
voffset=-450 hoffset=-100}
\vskip60mm
The proof that the glued mapping $H$ is a homeomorphism has three steps:
\medskip
\noindent{\sc Step 1:} The halflines $c(z)+(0,+\infty)(z-c(z))$, $z\in Z$, are pairwise disjoint and their union is the set $X\setminus (\cs V+[0,+\infty)y)$.
\smallskip
Let $x\in X$. Let us find out under which conditions there is $z\in Z$ such that $$x\in c(z)+(0,+\infty)(z-c(z)),$$ i.e., there are $z\in Z$ and $\alpha>0$ such that
\begin{equation}\label{eq:zero}x=c(z)+\alpha(z-c(z)).\end{equation}
This equation is equivalent to
\begin{equation}\label{eq:first}(x-\varphi(x) y) + \varphi(x) y
= \alpha (z+y) + ((1-\alpha) w_W(z+y)-\alpha) y.\end{equation}
Applying the functional $\varphi$ to both sides of this equation we get
\begin{equation}\label{eq:second}x-\varphi(x)y=\alpha(z+y)\quad \&\quad \varphi(x)=(1-\alpha) w_W(z+y)-\alpha.\end{equation}
More precisely, applying $\varphi$ to \eqref{eq:first} we get the second equation and plugging it into \eqref{eq:first} we get the first equation.
If we plug $z+y=\frac1\alpha(x-\varphi(x)y)$ to the second equation,
we get the quadratic equation
$$\alpha^2+\alpha(\varphi(x)+w_W(x-\varphi(x)y))-w_W(x-\varphi(x)y)=0.$$
If $w_W(x-\varphi(x)y)>0$, then this equation has one positive root and one negative root. Denote the positive root by $\alpha(x)$.
If $w_W(x-\varphi(x)y)=0$ and $\varphi(x)<0$, then the equation has one root equal to zero and the other one $\alpha(x)=-\varphi(x)>0$. If
$w_W(x-\varphi(x)y)=0$ and $\varphi(x)\ge 0$, the equation has no positive root.
Since the conditions $w_W(x-\varphi(x)y)=0$ and $\varphi(x)\ge 0$
hold if and only if $x\in\cs V+[0,+\infty)y$ we get that the $\alpha$ in \eqref{eq:zero} is always unique and it follows from the first equation in \eqref{eq:second} that the corresponding $z$ is also unique. We denote it by $z(x)$. This finishes the proof of Step 1.
Moreover, the above calculation shows that the mappings $x\mapsto z(x)$ and $x\mapsto \alpha(x)$ are continuous on $X\setminus(\cs V+[0,+\infty)y)$.
\medskip
\noindent{\sc Step 2.} $H$ is a homeomorphism of $X\setminus(\cs V+[0,+\infty)y)$ onto itself.
\smallskip
It is clear that $H$ is a bijection of $X\setminus(\cs V+[0,+\infty)y)$ onto itself. So, it is enough to show that $H$ and $H^{-1}$ are continuous on $X\setminus(\cs V+[0,+\infty)y)$. This can be done using Lemma~\ref{l:Mink}
and continuity of $z(x)$ and $\alpha(x)$.
More precisely, let us define $F$, a function of four real variables, on the set $$M=\{(\alpha,\beta,\gamma,\delta)\in(0,+\infty)^4: \gamma>\beta\ \&\ \delta>\beta\}$$ by the formula
$$F(\alpha,\beta,\gamma,\delta)=\begin{cases} \alpha & 0<\alpha\le\beta,\\
\beta+ \frac{\delta-\beta}{\gamma-\beta}(\alpha-\beta) & \beta\le\alpha\le\gamma,\\ \alpha+\delta-\gamma & \gamma\le\alpha.
\end{cases}$$
This function is continuous on its domain, since all the three formulas are continuous, their domains are relatively closed and the formulas agree on the intersections of their domains.
Further,
$$\begin{aligned}
H(x) &= c(z(x))+ F\Big(\alpha(x),\tfrac1{w_{V-c(z(x))}(z(x)-c(z(x)))},
\tfrac1{w_{U-c(z(x))}(z(x)-c(z(x)))},1\Big)\cdot\big(z(x)-c(z(x))\big),\\
H^{-1}(x) &= c(z(x))+ F\Big(\alpha(x),\tfrac1{w_{V-c(z(x))}(z(x)-c(z(x)))},
1,\tfrac1{w_{U-c(z(x))}(z(x)-c(z(x)))}\Big)\cdot\big(z(x)-c(z(x))\big),
\end{aligned}$$
so both $H$ and $H^{-1}$ are continuous.
\medskip
\noindent{\sc Step 3.} $H$ is a homeomorphism of $X$ onto itself.
\smallskip
Since $\cs V+[0,+\infty)y\subset\cc V\subset \Int V$ and $H$ is the identity on $V\setminus(\cs V+[0,+\infty)y)$, the global continuity of $H$ and $H^{-1}$ follows.
\begin{remark} Lemma 1.3 in \cite{BeKl}, which we have mentioned above, is more general. It deals with homeomorphisms of triples, not pairs. To the set $V$ from \cite{BeKl} corresponds our set $U$, while the sets $U$ and $P$ from \cite{BeKl} in our case both coincide with $V$.
The `tedious but not difficult' part skipped in \cite{BeKl} corresponds to our Steps 1 and 2. It is clear that the computation is not difficult, but especially Step 1 probably cannot be seen without a computation.
\end{remark}
\subsection{The incorrect proof in \cite{BePe}}\label{s:BP}
On page 112 of the quoted book the authors suggest the formulas $c(z)=(w_U(z+y)-1)y$ and $v(z)=\frac12(u(z)+c(z))$. Further, $h_z$ is defined as the identity on $(c(z),v(z)]$ and on the halfline $v(z)+[0,+\infty)(z-c(z))$ as an affine mapping fixing $v(z)$ and taking $u(z)$ to $z$.
We shall see that these
formulas do not provide a homeomorphism. The problem is that if $z+y\in\cc U$, we get $c(z)=-y$. In such a case $u(z)$ should be defined to be $z$ and already the mapping $z\mapsto u(z)$ may fail to be continuous.
Let us describe a counterexample. Set $X=\er^3$ and
$$U=\{(x_1,x_2,x_3)\in\er^3: x_1\ge (x_2)^+-1\ \&\ x_1\ge (x_3)^+-1\}.$$
Then $0\in\Int U$ and one can choose $y=(1,0,0)$ and $$Z=\{(x_1,x_2,x_3)\in\er^3:x_1=-1\}.$$
Let $x=(-1,-1,0)$. Then $x\in Z$, $w_U(x+y)=0$, hence $c(x)=-y$, $u(x)=x$ and hence $H(x)=x$.
Further, for any $n\in\en$ let $x_n=(-1,-1,\frac1n)$. Then $x_n\in Z$, $w_U(x_n+y)=\frac1n$, hence $c(x_n)=(-1+\frac1n,0,0)$. Further,
$u(x_n)=(-1+\frac1{2n},-\frac12,\frac1{2n})$ as this point is the intersection of the boundary of $U$ with the segment $[c(x_n),x_n]$. Hence $v(x_n)=(-1+\frac3{4n},-\frac14,\frac1{4n})$ and
$H(x_n)= v(x_n)+3(x_n-v(x_n))=(-1-\frac3{2n},-\frac52,\frac5{2n})$.
Since $x_n\to x$ and $H(x_n)\to(-1,-\frac52,0)\ne H(x)$, $H$ is not continuous.
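The discontinuity can also be checked numerically. The following Python sketch (our own illustration, not part of the argument, assuming the NumPy library) hard-codes the membership test for this particular $U$, uses the resulting closed form of $w_U$, locates $u(z)$ on the segment $[c(z),z]$ by bisection, and evaluates the mapping of \cite{BePe} at $x$ and at the points $x_n$:
\begin{verbatim}
import numpy as np

def in_U(p):
    # U = {x in R^3 : x1 >= max(x2,0) - 1  and  x1 >= max(x3,0) - 1}
    return p[0] >= max(p[1], 0.0) - 1 and p[0] >= max(p[2], 0.0) - 1

def w_U(v):
    # Minkowski functional of U; for this particular U it equals
    # max((v2)^+ - v1, (v3)^+ - v1, 0)
    return max(max(v[1], 0.0) - v[0], max(v[2], 0.0) - v[0], 0.0)

y = np.array([1.0, 0.0, 0.0])

def H_BP(z):
    # the mapping built from c(z) = (w_U(z+y) - 1) y, evaluated at z in Z
    c = (w_U(z + y) - 1.0) * y
    if w_U(z + y) == 0.0:            # degenerate case: c(z) = -y, u(z) := z
        return z.copy()
    lo, hi = 0.0, 1.0                # bisection for u(z) on the segment [c, z]
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if in_U(c + mid * (z - c)):
            lo = mid
        else:
            hi = mid
    u = c + lo * (z - c)
    v = 0.5 * (u + c)
    scale = np.linalg.norm(z - v) / np.linalg.norm(u - v)
    return v + scale * (z - v)       # affine map fixing v and sending u to z

x = np.array([-1.0, -1.0, 0.0])
print(H_BP(x))                       # [-1. -1.  0.], i.e. H(x) = x
for n in (10, 100, 1000):
    xn = np.array([-1.0, -1.0, 1.0 / n])
    print(n, H_BP(xn))               # approaches (-1, -2.5, 0) != H(x)
\end{verbatim}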
\subsection{Correction of the proof -- version 1}\label{s:C1}
In this section we present a possible correction of the proof from \cite{BePe}.
We change the formula for $c(z)$ while preserving the remaining assumptions.
Let us set $c(z)=(\sqrt{w_U(z+y)}-1)y$.
In this case the equality $c(z)=-y$ remains possible, but the square root changes the relevant rate of convergence and makes the respective mappings continuous. This version of the proof is the most complicated one, but we find it interesting.
So, let us give a proof.
\medskip
\noindent{\sc Step 1.} Set $Z'=\{z\in Z: w_U(z+y)>0\}$. Then the halflines $c(z)+(0,+\infty)(z-c(z))$, $z\in Z'$, are disjoint and cover the set $\{x\in X: w_U(x-\varphi(x)y)>0\}$.
\smallskip
Let $x\in X$. We will find out under which conditions there is $z\in Z'$ and $\alpha>0$ such that $$x=c(z)+\alpha(z-c(z)).$$
This equation is equivalent to
$$(x-\varphi(x) y)+\varphi(x) y
= \alpha(z+y) + ((1-\alpha)(\sqrt{w_U(z+y)}-1)-\alpha) y.$$
So, by applying $\varphi$ to both sides we get (similarly as in ``\eqref{eq:first}$\implies$\eqref{eq:second}'' above)
\begin{equation}\label{eq:fourth}x-\varphi(x)y=\alpha(z+y) \quad \&\quad \varphi(x)= (1-\alpha)(\sqrt{w_U(z+y)}-1)-\alpha.\end{equation}
From the first equation it follows that $w_U(x-\varphi(x)y)>0$
if we want $z\in Z'$. Further, if we isolate $z+y$ from the first equation
and plug the result into the second one, we get
$$\alpha\sqrt{w_U(x-\varphi(x)y)}+ \sqrt{\alpha}(\varphi(x)+1)-\sqrt{w_U(x-\varphi(x)y)}=0.$$
This is a quadratic equation for $\sqrt\alpha$ with a unique positive root $\alpha=\alpha(x)$. Hence, by the first equation in \eqref{eq:fourth}, there is a unique $z=z(x)$.
This completes the proof of Step 1. Moreover, the computation shows that the mappings $x\mapsto \alpha(x)$ and $x\mapsto z(x)$ are continuous on
$\{x\in X: w_U(x-\varphi(x)y)>0\}$.
\medskip
\noindent{\sc Step 2.} $H$ is a homeomorphism of $\{x\in X: w_U(x-\varphi(x)y)>0\}$ onto itself.
\smallskip
It is clear that $H$ is a bijection of the respective set onto itself. It remains to show that $H$ and $H^{-1}$ are continuous.
Let us define two functions of two real variables on the set $\er\times(0,2)$
by the formulas
$$\begin{aligned} G_1(\alpha,\beta)&=\begin{cases} \alpha & \alpha\le \frac\beta2,\\
\frac\beta2+\frac{2-\beta}{\beta}(\alpha-\frac\beta2) & \alpha\ge\frac\beta2,\end{cases}\\
G_2(\alpha,\beta)&=\begin{cases} \alpha & \alpha\le \frac\beta2,\\
\frac\beta2+\frac{\beta}{2-\beta}(\alpha-\frac\beta2) & \alpha\ge\frac\beta2.\end{cases}\end{aligned}$$
These functions are clearly continuous (the individual formulas are continuous, coincide on the intersection of the domains and the domains are relatively closed). Further, for $x\in\{x\in X: w_U(x-\varphi(x)y)>0\}$ we have
$$\begin{aligned}
H(x)&=c(z(x))+G_1(\alpha(x),\frac1{w_{U-c(z(x))}(z(x)-c(z(x)))})(z(x)-c(z(x))),\\
H^{-1}(x)&=c(z(x))+G_2(\alpha(x),\frac1{w_{U-c(z(x))}(z(x)-c(z(x)))})(z(x)-c(z(x))).\end{aligned}$$
It follows from Lemma~\ref{l:Mink} using the continuity of mappings $x\mapsto\alpha(x)$, $x\mapsto z(x)$ and $z\mapsto c(z)$ and the fact that $c(z(x))\in \Int U$ in this case that $H$ and $H^{-1}$ are continuous.
\medskip
\noindent{\sc Step 3:} $H$ is a homeomorphism of $X$ onto itself.
\smallskip
On the set $\{x\in X: w_U(x-\varphi(x)y)=0\}$ the mapping $H$ is defined to be identity. Since this set is closed, it is enough to show that whenever $x_\tau$ is a net in $\{x\in X: w_U(x-\varphi(x) y)>0\}$ such that $x_\tau\to x$ with $w_U(x-\varphi(x)y)=0$, then $H(x_\tau)\to x$ and $H^{-1}(x_\tau)\to x$.
So, let $(x_\tau)$ be such a net. Let us decompose the index set into two parts:
$$\begin{aligned}\Lambda_1&=\{\tau: w_{U-c(z(x_\tau))}(z(x_\tau)-c(z(x_\tau)))\le\frac1{2\alpha(x_\tau)}\},\\
\Lambda_2&=\{\tau: w_{U-c(z(x_\tau))}(z(x_\tau)-c(z(x_\tau)))>\frac1{2\alpha(x_\tau)}\}.\end{aligned}$$
For $\tau\in\Lambda_1$ we have $H(x_\tau)=H^{-1}(x_\tau)=x_\tau$, so it remains to show that the limit along $\Lambda_2$ is also $x$, provided $\Lambda_2$ is cofinal. Without loss of generality we may assume that $\Lambda_1=\emptyset$, i.e.
$$w_{U-c(z(x_\tau))}(z(x_\tau)-c(z(x_\tau)))>\frac1{2\alpha(x_\tau)}\mbox{\qquad for all }\tau.$$
Let us further compute the limit of $c(z(x_\tau))$. We have
$$c(z(x_\tau))=(\sqrt{w_U(z(x_\tau)+y)}-1)y =
(\sqrt{\frac{w_U(x_\tau-\varphi(x_\tau)y)}{\alpha(x_\tau)}}-1)y.$$
Since
$$\begin{aligned}\lim_\tau \sqrt{\frac{w_U(x_\tau-\varphi(x_\tau)y)}{\alpha(x_\tau)}}&=
\lim_\tau \frac{{\sqrt{w_U(x_\tau-\varphi(x_\tau)y)}}\cdot 2\sqrt{w_U(x_\tau-\varphi(x_\tau)y)}}{-(\varphi(x_\tau)+1)+\sqrt{(\varphi(x_\tau) +1)^2+4w_U(x_\tau-\varphi(x_\tau)y)}}
\\&=\lim_\tau \frac12((\varphi(x_\tau)+1)+\sqrt{(\varphi(x_\tau)
+1)^2+4w_U(x_\tau-\varphi(x_\tau)y)})
\\ &= \frac12((\varphi(x)+1)+\sqrt{(\varphi(x)
+1)^2+4w_U(x-\varphi(x)y)})
\\& =(\varphi(x)+1)^+,
\end{aligned}$$
we get $c(z(x_\tau))\to ((\varphi(x)+1)^+-1)y=\max(\varphi(x),-1)y$.
If $\varphi(x)>-1$, then $c(z(x_\tau))\to\varphi(x)y\in\Int U$, hence by Lemma~\ref{l:Mink} we get
$$w_{U-c(z(x_\tau))}(x_\tau-c(z(x_\tau)))\to w_{U-\varphi(x)y}(x-\varphi(x)y)=0$$
and hence $w_{U-c(z(x_\tau))}(x_\tau-c(z(x_\tau)))<\frac12$ for large $\tau$. It means that for large $\tau$ we have $\tau\in\Lambda_1$, a contradiction.
Thus $\varphi(x)\le -1$. Then $c(z(x_\tau))\to -y$.
We will show that $$w_{U-c(z(x_\tau))}(z(x_\tau)-c(z(x_\tau)))\to 1.$$ Suppose it is not the case. Since
$w_{U-c(z)}(z-c(z))\ge 1$ for each $z\in Z'$, up to passing to a subnet we may assume that there is some $d>1$ such that $$w_{U-c(z(x_\tau))}(z(x_\tau)-c(z(x_\tau)))>d\mbox{ for each }\tau.$$ It means that
$z(x_\tau)-c(z(x_\tau))\notin d(U-c(z(x_\tau)))$, hence
$$\frac1dz(x_\tau) + (1-\frac1d) c(z(x_\tau))\notin U\mbox{ for each }\tau.$$
So,
\begin{multline*}\frac{w_U(z(x_\tau)+y)}d \cdot \frac{z(x_\tau)+y}{w_U(z(x_\tau)+y)}
+(1-\frac{w_U(z(x_\tau)+y)}d)\cdot(-y) \\
+((1-\frac1d)\sqrt{w_U(z(x_\tau)+y)} - \frac{w_U(z(x_\tau)+y)}d)y\notin U.\end{multline*}
Since $w_U(z(x_\tau)+y)\to 0$, for $\tau$ large enough the sum of the first two terms is a convex combination of $\frac{z(x_\tau)+y}{w_U(z(x_\tau)+y)}$ and $-y$, hence it belongs to $U$. Further, the coefficient of the last term is positive for $\tau$ large enough (this is the place where the choice of the square root is essential), which yields a contradiction as $y\in\cc U$.
Thus indeed $w_{U-c(z(x_\tau))}(z(x_\tau)-c(z(x_\tau)))\to 1$.
Now we are ready to conclude.
To shorten the notation, set $\alpha_\tau=\alpha(x_\tau)$ and $w_\tau= w_{U-c(z(x_\tau))}(z(x_\tau)-c(z(x_\tau)))$. Since $\alpha_\tau>\frac1{2w_\tau}$, we have
$$\begin{aligned}
H(x_\tau)&=c(z(x_\tau))+G_1(\alpha_\tau,\frac1{w_\tau})(z(x_\tau)-c(z(x_\tau)))\\
&=c(z(x_\tau))+(\alpha_\tau(2w_\tau-1)-1+\frac1{w_\tau})(z(x_\tau)-c(z(x_\tau)))
\\&=c(z(x_\tau))+(2w_\tau-1-\frac1{\alpha_\tau}+\frac1{\alpha_\tau w_\tau})(x_\tau-c(z(x_\tau)))\\
&=2c(z(x_\tau))(1-w_\tau)+x_\tau(2w_\tau-1)+\frac{1-w_\tau}{\alpha_\tau w_\tau}(x_\tau-c(z(x_\tau)))\to x\end{aligned}$$
since $x_\tau\to x$, $c(z(x_\tau))\to-y$, $w_\tau\to 1$ and $\alpha_\tau w_\tau>\frac12$.
Similarly,
$$\begin{aligned}
H^{-1}(x_\tau)&=c(z(x_\tau))+G_2(\alpha_\tau,\frac1{w_\tau})(z(x_\tau)-c(z(x_\tau)))\\
&=c(z(x_\tau))+(\frac{\alpha_\tau}{2w_\tau-1}+\frac{w_\tau-1}{w_\tau(2w_\tau-1)})(z(x_\tau)-c(z(x_\tau)))
\\&=c(z(x_\tau))+(\frac{1}{2w_\tau-1}+\frac{w_\tau-1}{\alpha_\tau w_\tau(2w_\tau-1)})(x_\tau-c(z(x_\tau)))\to x.\end{aligned}$$
This completes the proof.
\subsection{Correction of the proof -- version 2}\label{s:C2}
Another possibility for correcting the proof is to use the formula $c(z)=(w_U(z+y)+1)y$ for $z\in Z$.
In this case the problem appearing in the original version and in the first correction disappears, since $c(z)\in \Int U$ for all $z\in Z$. Let us show that this modification works.
\medskip
\noindent{\sc Step 1.} The halflines $c(z)+(0,+\infty)(z-c(z))$, $z\in Z$, are pairwise disjoint and their union is the set $X\setminus ((\Ker\varphi\cap\cc U)+[1,+\infty)y)$.
\smallskip
Let $x\in X$. Let us find out under which conditions there are $z\in Z$ and $\alpha>0$ such that
$$x= c(z)+\alpha(z-c(z)).$$
This equation is equivalent to
$$(x-\varphi(x)y)+\varphi(x)y=\alpha(z+y)+((1-\alpha)(w_U(z+y)+1)-\alpha)y.$$
By applying $\varphi$ to both sides we see that the above equation is equivalent to (similarly as in ``\eqref{eq:first}$\implies$\eqref{eq:second}'' above)
\begin{equation}\label{eq:fifth}x-\varphi(x)y=\alpha(z+y)\quad\&\quad\varphi(x)= (1-\alpha)(w_U(z+y)+1)-\alpha.\end{equation}
From the first equation isolate $z+y$ and plug it to the second equation. We get thus
a quadratic equation for $\alpha$:
$$2\alpha^2+\alpha(\varphi(x)-1+w_U(x-\varphi(x)y))-w_U(x-\varphi(x)y)=0.$$
If $w_U(x-\varphi(x)y)>0$, there is a unique positive root $\alpha=\alpha(x)$.
If $w_U(x-\varphi(x)y)=0$ and $\varphi(x)<1$, there is a unique positive root $\alpha(x)=(1-\varphi(x))/2$.
If $w_U(x-\varphi(x)y)=0$ and $\varphi(x)\ge1$ (i.e., if $x\in(\Ker\varphi\cap\cc U)+[1,+\infty)y$), then there is no positive root. This shows there is a unique $\alpha=\alpha(x)$ and it follows from the first equation in \eqref{eq:fifth} that there is also a unique $z=z(x)$. This completes the proof of Step 1. Moreover, the computation shows that the mappings $x\mapsto \alpha(x)$ and $x\mapsto z(x)$ are continuous on $X\setminus ((\Ker\varphi\cap\cc U)+[1,+\infty)y)$.
\medskip
\noindent{\sc Step 2.} $H$ is a homeomorphism of $X\setminus ((\Ker\varphi\cap\cc U)+[1,+\infty)y)$ onto itself.
\smallskip
It is clear that $H$ is a bijection of the mentioned set onto itself.
Further, the formulas for $H$ and $H^{-1}$ are the same as in the previous case. Of course, $z(x)$, $\alpha(x)$ and $c(z(x))$ are given by different formulas, but since these mappings are continuous, we get that $H$ and $H^{-1}$ are continuous.
\medskip
\noindent{\sc Step 3.} $H$ is a homeomorphism of $X$ onto itself.
\smallskip
Since $H$ is defined as the identity on the set $(\Ker\varphi\cap\cc U)+[1,+\infty)y$ and this set is closed in $X$, it is enough to show the following: Let $(x_\tau)$ be a net in $X\setminus ((\Ker\varphi\cap\cc U)+[1,+\infty)y)$ converging to some $x\in (\Ker\varphi\cap\cc U)+[1,+\infty)y$.
Then $H(x_\tau)\to x$ and $H^{-1}(x_\tau)\to x$. So, let us have such a net.
We have $x_\tau=c(z(x_\tau))+\alpha(x_\tau)(z(x_\tau)-c(z(x_\tau)))$. We will show that for $\tau$ large enough $\alpha(x_\tau)\le\frac1{2w_{U-c(z(x_\tau))}(z(x_\tau)-c(z(x_\tau)))}$. Then the proof will be completed as it will follow that for $\tau$ large enough $H(x_\tau)=H^{-1}(x_\tau)=x_\tau$.
The desired inequality is equivalent to $w_{U-c(z(x_\tau))}(z(x_\tau)-c(z(x_\tau)))\le\frac1{2\alpha(x_\tau)}$, i.e.
$c(z(x_\tau))+2\alpha(x_\tau)(z(x_\tau)-c(z(x_\tau)))\in U$, equivalently
$$c(z(x_\tau))+2(x_\tau-c(z(x_\tau)))\in U.$$
Let us analyze the limit behaviour of the left-hand side. Set $t_\tau=\varphi(x_\tau)$ and $a_\tau=x_\tau-t_\tau y$. Then $t_\tau\to\varphi(x)$ and $a_\tau\to x-\varphi(x)y$, hence $w_U(a_\tau)\to 0$.
We have
$$c(z(x_\tau))=(w_U(z(x_\tau)+y)+1)y=(\frac{w_U(a_\tau)}{\alpha(x_\tau)}+1)y.$$
If $w_U(a_\tau)=0$, then $c(z(x_\tau))=y$. Moreover, $t_\tau<1$, hence $\varphi(x)\le 1$, so necessarily
$\varphi(x)=1$. It follows that $c(z(x_\tau))=\varphi(x)y$.
If $w_U(a_\tau)>0$, then $\alpha(x_\tau)$ is the positive root of the quadratic equation from Step 1,
so
$$\frac{w_U(a_\tau)}{\alpha(x_\tau)}=\frac12(\sqrt{(t_\tau-1 +w_U(a_\tau))^2+8w_U(a_\tau)}+t_\tau-1+w_U(a_\tau))\to \varphi(x)-1.$$
It follows that $c(z(x_\tau))\to\varphi(x)y$, hence
$$c(z(x_\tau))+2(x_\tau-c(z(x_\tau)))\to \varphi(x)y+2(x-\varphi(x)y).$$
Since $x-\varphi(x)y\in\cc U$, $y\in\cc U$ and $\varphi(x)\ge 1$, we get
$\varphi(x)y+2(x-\varphi(x)y)\in\cc U\subset\Int U$. Hence
$c(z(x_\tau))+2(x_\tau-c(z(x_\tau)))\in U$ for $\tau$ large enough and the proof is completed.
\begin{remark} It took us some time to discover that the proof in \cite{BePe} is incorrect. As remarked above, the error is already in the second step, since the assignment $z\mapsto u(z)$ fails to be continuous. The correction from Section~\ref{s:C1} is quite complicated but we find it interesting since it uses
some balance of asymptotic behaviour. The correction from Section~\ref{s:C2} is much simpler and now, a posteriori, we are convinced that this is the formula the authors had in mind. But it is still more complicated than the original proof; the main difference is in Step 3. While in the original version Step 3 is trivial, in the method described in Section~\ref{s:C2} Step 3 requires some nontrivial computation.
At least, we do not see how to prove it without any computation, as in the original version.
\end{remark}
\section{Topological version of Dobrowolski's proof}\label{s:D}
The approach of \cite{Dob} is a bit different: it focuses on smooth bodies in Banach spaces and refers to the implicit function theorem. As remarked above, the proof is extremely concise and the missing computations (checking the assumptions of the implicit function theorem) are nontrivial -- they would be much longer than the proof itself.
In this section we give a modification of the proof from \cite{Dob} which works simultaneously in the topological and the smooth cases. Our version is moreover simplified and more elementary. In particular, it uses a simpler version of the implicit function theorem (not only is its proof simpler, but its assumptions are also easier to check) and the form of our formula is simpler (although the mapping is the same), since we use the Minkowski functional of only one convex body. We will give the proof in the topological case and then comment on why it works also in the smooth case.
Firstly, let us choose two auxiliary $C^\infty$ functions $\lambda$ and $\gamma$ defined on $\er$ with the following properties:
\begin{itemize}
\item $\lambda$ is non-decreasing, $\lambda=0$ on $(-\infty,\frac12]$, $\lambda=1$ on $[1,+\infty)$.
\item $\gamma=0$ on $(-\infty,\frac12]$, $\lim_{t\to\infty}\gamma(t)=+\infty$ and $0\le\gamma'(t)<\frac1t(\gamma(t)+1)$ for $t>0$.
\end{itemize}
The existence of $\lambda$ is a well-known fact. The existence of $\gamma$ is not obvious and in \cite{Dob} it is just postulated. One can take, for example, $\gamma(t)=\delta\lambda(t)\ln(t+1)$
for $t>-1$, where $\delta>0$ is a small enough number, and extend this function by zero to the rest of $\er$.
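The inequality $0\le\gamma'(t)<\frac1t(\gamma(t)+1)$ for this choice of $\gamma$ can also be checked numerically. The following Python sketch is an illustration only; the particular smooth step $\lambda$ and the value $\delta=0.1$ are our own choices, and the derivative is approximated by central differences on a grid:
\begin{verbatim}
import numpy as np

def lam(t):
    # a standard C-infinity non-decreasing step: 0 on (-inf,1/2], 1 on [1,inf)
    def f(s):
        return np.where(s > 0, np.exp(-1.0 / np.maximum(s, 1e-12)), 0.0)
    a, b = f(t - 0.5), f(1.0 - t)
    return a / (a + b)

delta = 0.1                          # "small enough"; chosen here ad hoc

def gamma(t):                        # gamma(t) = delta * lambda(t) * ln(t+1)
    return delta * lam(t) * np.log1p(t)

t = np.linspace(1e-3, 100.0, 400000)
h = 1e-6
dgamma = (gamma(t + h) - gamma(t - h)) / (2.0 * h)   # numerical derivative
print(np.all(dgamma >= -1e-8))                       # True: gamma' >= 0
print(np.all(dgamma < (gamma(t) + 1.0) / t))         # True: gamma' < (gamma+1)/t
\end{verbatim}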
In the proof we will need the following version of the implicit function theorem.
\begin{thm}\label{IFT} Let $X$ be a topological space, $\Omega\subset X\times \er$ an open set,
$F=F(x,t):\Omega\to \er$ a function and $(x_0,t_0)\in \Omega$. Suppose that the following assumptions
are satisfied.
\begin{itemize}
\item $F$ and $\frac{\partial F}{\partial t}$ are continuous on $\Omega$.
\item $F(x_0,t_0)=0$.
\item $\frac{\partial F}{\partial t}(x_0,t_0)\ne0$.
\end{itemize}
Then there are a neighborhood $G$ of $x_0$ in $X$, a neighborhood $H$ of $t_0$ in $\er$, and a continuous function $f:G\to H$ such that $G\times H\subset\Omega$ and for $(x,t)\in G\times H$ one has $t=f(x)$ if and only if $F(x,t)=0$.
\end{thm}
This theorem follows from a more general \cite[Chapter III, Section 8, Theorem 25]{Schw} (which deals with a normed space in place of $\er$). However, our version is much simpler and can be proved in the same way as the easiest version
for $C^1$ functions from $\er^2$ to $\er$.
Let us now start the construction itself. For $z\in Z$ let $c(z)=\gamma(w_U(y+z))y$.
\medskip
\noindent{\sc Step 1:} The mapping $\Phi: (\alpha,z)\mapsto c(z)+\alpha(z-c(z))$ is a homeomorphism of $(0,+\infty)\times Z$ onto $X\setminus ((\cc U\cap \Ker\varphi)+[0,+\infty)y)$.
\smallskip
Note that in \cite{Dob} there is a small error, where instead of $\cc U\cap \Ker\varphi$ the author writes $\{0\}$. We proceed in the same way as above. Fix $x\in X$ and try to find $\alpha>0$ and $z\in Z$ such that $\Phi(\alpha,z)=x$. This equation is equivalent to
$$\alpha(z+y)+((1-\alpha)\gamma(w_U(z+y))-\alpha)y=(x-\varphi(x)y)+\varphi(x)y,$$
hence by applying $\varphi$ to both sides we get (similarly as in ``\eqref{eq:first}$\implies$\eqref{eq:second}'' above)
$$\alpha(z+y)=x-\varphi(x)y\quad\&\quad(1-\alpha)\gamma(w_U(z+y))-\alpha=\varphi(x).$$
If we isolate $z+y$ from the first equation and plug it in the second one,
we get
$$(1-\alpha)\gamma\left(\frac1\alpha w_U(x-\varphi(x)y)\right)-\alpha-\varphi(x)=0.$$
Denote the left-hand side by $F(x,\alpha)$. It is clear that $F$ is defined and continuous on $X\times(0,+\infty)$ and, moreover,
$$\frac{\partial F}{\partial \alpha}(x,\alpha)=
-\gamma\left(\frac1\alpha w_U(x-\varphi(x)y)\right)-\frac{1-\alpha}{\alpha^2}w_U(x-\varphi(x)y)
\gamma'\left(\frac1\alpha w_U(x-\varphi(x)y)\right)-1,$$
which is also continuous on $X\times(0,+\infty)$.
Further, $\frac{\partial F}{\partial \alpha}(x,\alpha)<0$ for $(x,\alpha)\in X\times(0,+\infty)$.
Indeed, if $w_U(x-\varphi(x)y)=0$, then $\frac{\partial F}{\partial \alpha}(x,\alpha)=-1$. If $w_U(x-\varphi(x)y)>0$
and $\alpha\le 1$, then $\frac{\partial F}{\partial \alpha}(x,\alpha)\le-1$. Finally, if $w_U(x-\varphi(x)y)>0$
and $\alpha>1$, then by the properties of $\gamma$ we get
$$\frac{\partial F}{\partial \alpha}(x,\alpha)
<-\gamma\left(\frac1\alpha w_U(x-\varphi(x)y)\right)+\frac{\alpha-1}{\alpha}
\left(\gamma\left(\frac1\alpha w_U(x-\varphi(x)y)\right)+1\right)-1\le0.$$
Let us continue by describing the range of $\Phi$. Fix $x\in X$. There are two possibilities:
\smallskip
Case 1: $w_U(x-\varphi(x)y)=0$. Then $F(x,\alpha)=-\alpha-\varphi(x)$. If $\varphi(x)<0$, there is a unique positive root
$\alpha=-\varphi(x)$. If $\varphi(x)\ge0$, there is no positive root.
\smallskip
Case 2: $w_U(x-\varphi(x)y)>0$. Then $\lim_{\alpha\to 0+}F(x,\alpha)=+\infty$ (as $\gamma(t)\to+\infty$ when $t\to+\infty$)
and $\lim_{\alpha\to+\infty}F(x,\alpha)=-\infty$ (as $\gamma$ vanishes on a neighborhood of zero). Further, since $\alpha\mapsto F(x,\alpha)$ is continuous and strictly decreasing on $(0,+\infty)$, there is a unique root.
\smallskip
It follows that $\Phi$ is one-to-one and its range is $X\setminus ((\cc U\cap \Ker\varphi)+[0,+\infty)y)$.
It is clear that $\Phi$ is continuous. By Theorem~\ref{IFT} the coordinate $x\mapsto\alpha(x)$ of the inverse is continuous;
the continuity of the coordinate $x\mapsto z(x)$ then follows, hence $\Phi^{-1}$ is continuous.
\medskip
\noindent {\sc Step 2.} The mapping $\Psi$ defined by the formula
$$\Psi(\alpha,z)=(\alpha\lambda(\alpha w_{U-c(z)}(z-c(z))) (w_{U-c(z)}(z-c(z))-1)
+ \alpha,z)$$
is a homeomorphism of $(0,+\infty)\times Z$ onto itself.
\smallskip
Since $w_{U-c(z)}(z-c(z))\ge 1$ whenever $z\in Z$, $\Psi$ maps $(0,+\infty)\times Z$ into itself. $\Psi$ is clearly continuous. To show that $\Psi$ is a bijection and the inverse is continuous, let us investigate the first coordinate, i.e., the mapping $$\theta(\alpha,z)=\alpha\lambda(\alpha w_{U-c(z)}(z-c(z))) (w_{U-c(z)}(z-c(z))-1) + \alpha.$$
We have
\begin{multline*}\frac{\partial}{\partial\alpha}\theta(\alpha,z)
= ( \lambda(\alpha w_{U-c(z)}(z-c(z)))\\+\alpha\lambda'(\alpha w_{U-c(z)}(z-c(z)))(w_{U-c(z)}(z-c(z))))(w_{U-c(z)}(z-c(z))-1)+1.\end{multline*}
This partial derivative is continuous and strictly positive on $(0,+\infty)\times Z$.
Moreover, for any $z\in Z$ we have
$$\lim_{\alpha\to0+}\theta(\alpha,z)=0 \mbox{ and }\lim_{\alpha\to+\infty}\theta(\alpha,z)=+\infty,$$
hence $\Psi$ is a bijection. Moreover, the continuity of $\Psi^{-1}$ follows from
Theorem~\ref{IFT}. This completes the proof of Step 2.
\medskip
\noindent{\sc Step 3.} The mapping $H=\Phi\circ\Psi\circ\Phi^{-1}$ is a homeomorphism of $X\setminus ((\cc U\cap \Ker\varphi)+[0,+\infty)y)$ onto itself. Moreover,
it maps each halfline $c(z)+\er^+(z-c(z))$ onto itself in an increasing manner such that the segment $(c(z),u(z)]$ is mapped onto $(c(z),z]$.
\smallskip
Indeed, $H$ is a homeomorphism as a composition of homeomorphisms. Further, from the construction it is clear that it preserves the mentioned halflines in an increasing manner. The last thing to show is that $H(u(z))=z$. To show this, notice first
that $\Phi^{-1}(u(z))=(\frac1{w_{U-c(z)}(z-c(z))},z)$, hence
$\Psi(\Phi^{-1}(u(z)))=(1,z)$, thus $H(u(z))=z$.
\medskip
\noindent{\sc Step 4.} If we extend $H$ by identity on $(\cc U\cap \Ker\varphi)+[0,+\infty)y$ we get a homeomorphism of $X$ onto itself with the required properties.
\smallskip
Since $(\cc U\cap \Ker\varphi)+[0,+\infty)y$ is closed, it is enough to check that $H$ and $H^{-1}$ are continuous at points of this set. So, let us fix $x$ in this set
and a net $x_\tau$ in the complement converging to $x$. Let $(\alpha_\tau,z_\tau)=\Phi^{-1}(x_\tau)$. Let us first show that $\alpha_\tau\to 0$.
Suppose not. Then, up to passing to a subnet, we may assume that $\alpha_\tau\to\alpha\in(0,+\infty]$.
We have
$$\varphi(x)=\lim_\tau\varphi(x_\tau)=\lim_{\tau}(1-\alpha_\tau)\gamma\left(\frac1{\alpha_\tau} w_U(x_\tau-\varphi(x_\tau)y)\right)-\alpha_\tau.$$
If $\alpha=+\infty$, then the limit on the right-hand side is $-\infty$ (since $\gamma$ is zero on a neighborhood of zero), which is not possible. If $\alpha\in(0,+\infty)$, then the right-hand side goes to $-\alpha$
(since $w_U(x_\tau-\varphi(x_\tau)y)\to w_U(x-\varphi(x)y)=0$). Thus $\varphi(x)<0$, a contradiction.
So, we have proved that $\alpha_\tau\to 0$. Further,
$$c(z_\tau)=\gamma(w_U(z_\tau+y))y =\gamma\left(\frac1{\alpha_\tau} w_U(x_\tau-\varphi(x_\tau)y)\right)y
=\frac{\varphi(x_\tau)+\alpha_\tau}{1-\alpha_\tau}y\to\varphi(x)y.$$
Hence $w_{U-c(z_\tau)}(x_\tau-c(z_\tau))\to w_{U-\varphi(x)y}(x-\varphi(x)y)$ by Lemma~\ref{l:Mink}.
But the latter value is zero, since $w_{U}(x-\varphi(x)y)=0$ and $y\in\cc U$, so for each $t>0$ we have $t(x-\varphi(x)y)+\varphi(x)y\in U$.
So, for $\tau$ large enough we have
$\alpha_\tau w_{U-c(z_\tau)}(z_\tau-c(z_\tau))= w_{U-c(z_\tau)}(x_\tau-c(z_\tau))<\frac12,$
hence $\Psi(\alpha_\tau,z_\tau)=(\alpha_\tau,z_\tau)$.
Finally, for those $\tau$ we have
$H(x_\tau)=H^{-1}(x_\tau)=x_\tau$.
This completes the proof.
\begin{remark} In case $X$ is a Banach space and $U$ is a $C^p$-smooth convex body (where $p\in\en\cup\{\infty\}$), the homeomorphism $H$ constructed above is a $C^p$-diffeomorphism. Indeed, first remark that, if in Theorem~\ref{IFT} we moreover assume that $X$ is a Banach space and $F$ is $C^p$-smooth, then $f$ is also $C^p$-smooth. Further, in this case the function from Lemma~\ref{l:Mink} is $C^p$-smooth on the complement of its zero set by \cite[Lemma 1]{Dob}. (The proof of this lemma is omitted in \cite{Dob}, but it is an easy consequence of the definition.) Further, the function $F$ used in Step 1 is $C^p$-smooth on $X\times(0,+\infty)$ (at points where $w_U(x_0-\varphi(x_0)y)>0$ this is a composition of the $C^p$-smooth functions mentioned above; if $w_U(x_0-\varphi(x_0)y)=0$, then $F(x,\alpha)=-\alpha-\varphi(x)$ on a neighborhood of $(x_0,\alpha_0)$). It follows that $\Phi$ is a $C^p$-diffeomorphism. Similarly we can see that the mapping $\Psi$ from Step 2 is a $C^p$-diffeomorphism. Finally, from the proof of Step 4 we see that
for each point from $(\cc U\cap \Ker\varphi)+[0,+\infty)y$ there is a neighborhood on which $H$ is the identity, so $H$ is a $C^p$-diffeomorphism.\end{remark}
\def\cprime{$'$}
| {
"timestamp": "2014-11-07T02:08:57",
"yymm": "1307",
"arxiv_id": "1307.8246",
"language": "en",
"url": "https://arxiv.org/abs/1307.8246",
"abstract": "We collect several variants of the proof of the third case of the Bessaga-Klee relative classification of closed convex bodies in topological vector spaces. We were motivated by the fact that we have not found anywhere in the literature a complete correct proof. In particular, we point out an error in the proof given in the book of C.~Bessaga and A.~Pełczyński (1975). We further provide a simplified version of T.~Dobrowolski's proof of the smooth classification of smooth convex bodies in Banach spaces which works simultaneously in the topological case.",
"subjects": "Functional Analysis (math.FA)",
"title": "Note on Bessaga-Klee classification",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109538667758,
"lm_q2_score": 0.7185943925708561,
"lm_q1q2_score": 0.7076796291910211
} |
https://arxiv.org/abs/1311.3113 | New upper and lower bounds for the additive degree-Kirchhoff index | Given a simple connected graph on $N$ vertices with size $|E|$ and degree sequence $d_{1}\leq d_{2}\leq ...\leq d_{N}$, the aim of this paper is to exhibit new upper and lower bounds for the additive degree-Kirchhoff index in closed forms, not containing effective resistances but a few invariants $(N,|E|$ and the degrees $d_{i}$) and applicable in general contexts. In our arguments we follow a dual approach: along with a traditional toolbox of inequalities we also use a relatively newer method in Mathematical Chemistry, based on the majorization and Schur-convex functions. Some theoretical and numerical examples are provided, comparing the bounds obtained here and those previously known in the literature. | \section{Introduction}
The Kirchhoff index $R(G)$ of a connected undirected graph $G=(V,E)$ with vertex set $\{1, 2, \ldots, N\}$ and edge set $E$ was defined by Klein and Randi\'c \cite{KleRan} as
$$R(G)=\sum_{i<j}R_{ij},$$
where $R_{ij}$ is the effective resistance between vertices $i$ and $j$. A lot of attention has been given in recent years to this index, as well as to several modifications of it that take into account the degrees of the graph under consideration. Indeed, Chen and Zhang defined in \cite{CheZha} the multiplicative degree-Kirchhoff index as
\begin{equation}
\label{multip}
R^*(G)=\sum_{i<j}d_id_jR_{ij},
\end{equation}
where $d_i$ is the degree (i.e., the number of neighbors) of the vertex $i$. References \cite{BCPT1}, \cite{Bozkurt2012}, \cite{Pal2011b} and \cite{Pal2011} deal with this index.
Also, Gutman et al. defined in \cite{Gutman} the additive degree-Kirchhoff index as
\begin{equation}
\label{addit}
R^+(G)=\sum_{i<j}(d_i+d_j)R_{ij},
\end{equation}
and worked on the identification of graphs with lowest such degree among unicyclic graphs.
The additive degree-Kirchhoff index is motivated by the degree distance of a graph, which is obtained by replacing the effective resistance $R_{ij}$ in equation (\ref{addit}) with the distance in the graph between $i$ and $j$; the two indices are equal in case the graph $G$ under consideration is a tree, since in that case the effective resistance coincides with the distance. See reference \cite{Dobrynin} for other details. \\
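For readers who wish to experiment with these indices, the following Python sketch (an illustration only, assuming the NetworkX and NumPy libraries; the function name is ours) computes $R(G)$, $R^*(G)$ and $R^+(G)$ from the Moore--Penrose pseudoinverse of the graph Laplacian, using the standard identity $R_{ij}=L^+_{ii}+L^+_{jj}-2L^+_{ij}$:
\begin{verbatim}
import numpy as np
import networkx as nx

def kirchhoff_indices(G):
    # return (R, R*, R+) of a connected graph G
    nodes = list(G.nodes())
    L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
    Lp = np.linalg.pinv(L)                  # Moore-Penrose pseudoinverse
    deg = dict(G.degree())
    R = Rstar = Rplus = 0.0
    for a in range(len(nodes)):
        for b in range(a + 1, len(nodes)):
            r = Lp[a, a] + Lp[b, b] - 2.0 * Lp[a, b]   # effective resistance
            di, dj = deg[nodes[a]], deg[nodes[b]]
            R += r
            Rstar += di * dj * r
            Rplus += (di + dj) * r
    return R, Rstar, Rplus

# sanity check on the star K_{1,N-1}, for which R^+ = 3N^2 - 7N + 4
N = 8
print(kirchhoff_indices(nx.star_graph(N - 1))[2], 3 * N**2 - 7 * N + 4)
\end{verbatim}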
Recently, one of the authors of this paper showed in \cite{Pal2013}, using Markov chain theory, that for any graph $G$
\begin{equation}\label{markov}
R^+(G)\ge 2(N-1)^2,
\end{equation}
and the lower bound is attained by the complete graph. Also, in \cite{Pal2013} it was shown that for any $G$
$$R^+(G)\le {1\over 3}(N^4-N^3-N^2+N),$$
and it was conjectured that the maximum of $R^+(G)$ over all graphs is attained by the $({1\over 3}, {1\over 3}, {1\over 3})$ barbell graph, which consists of two complete graphs on ${N\over 3}$ vertices united by a path of length ${N\over 3}$, and for which $R^+(G)\sim {2\over {27}}N^4$.
\vskip .2 in
The aim of the current article is to exhibit new upper and lower bounds for the additive degree-Kirchhoff index in closed forms, not containing effective resistances but a few invariants ($N$, $|E|$ and the degrees $d_i$), and applicable in general contexts. In what follows we only consider simple, undirected and connected graphs. In computing our bounds we follow a dual approach:
first we use a traditional toolbox of inequalities and then a relatively newer method in Mathematical Chemistry, based on majorization and Schur-convex
functions. Schur-convexity and majorization order are widely discussed in \cite{Marshall} and previous uses of the majorization partial order in
chemistry and a general overview are given in \cite{KleRan}.
One major advantage of this technique is to provide a unified
approach to recover many bounds in the literature as well as to obtain better ones. This technique has been applied in \cite{BT}, \cite{BCT1}, \cite{BCPT1}, \cite{BCPT2} for determining bounds of some relevant topological indicators of graphs which can be usefully expressed as Schur-convex functions.
\section{Lower bounds}
In order to produce new lower bounds for $R^+(G)$ we use the following inequalities for the effective resistances that can be found in \cite{Pal2011b}:
\begin{equation}
\label{una}
R_{ij}\ge {{d_i+d_j-2}\over {d_id_j-1}},
\end{equation}
in case $(i,j)\in E$ and
\begin{equation}
\label{dos}
R_{ij}\ge {1\over {d_i}}+{1\over {d_j}},
\end{equation}
in case $(i,j)\notin E$.
Then we can prove the following
\begin{theorem}\label{th:first}
For any graph $G$ with degree sequence $d_{1}\leq d_{2}\leq ...\leq d_{N}$,
\begin{equation} \label{last}
R^+(G)\ge N(N-4) + 2|E|\sum_{j=1}^N {1\over d_j}.
\end{equation}
\end{theorem}
\begin{proof} Inserting (\ref{dos}) and the weaker consequence $R_{ij}\ge {{d_i+d_j-2}\over {d_id_j}}$ of (\ref{una}) into (\ref{addit}) we get
\begin{equation}\label{eq:rifuno}
\begin{split} R^+(G) &\ge \sum_{{i<j}\atop{d(i,j)=1}} {{(d_i+d_j)(d_i+d_j-2)}\over {d_id_j}}+\sum_{{i<j}\atop{d(i,j)>1}} (2+{d_i\over d_j}+{d_j\over d_i})\\
&=\sum_{{i<j}\atop{d(i,j)=1}} (2+{d_i\over d_j}+{d_j\over d_i})-2\sum_{{i<j}\atop{d(i,j)=1}} ({1\over d_i}+{1\over d_j})+\sum_{{i<j}\atop{d(i,j)>1}} (2+{d_i\over d_j}+{d_j\over d_i})\\
&=\sum_{i<j}2+\sum_{i<j}({d_i\over d_j}+{d_j\over d_i})-2\sum_{{i<j}\atop{d(i,j)=1}} ({1\over d_i}+{1\over d_j})\\
&=N(N-1)+\sum_{i<j}({d_i\over d_j}+{d_j\over d_i})-2N.
\end{split}
\end{equation}
After some algebra, it is not difficult to see that
$$\sum_{i<j}({d_i\over d_j}+{d_j\over d_i})=2|E|\sum_{j=1}^N {1\over d_j}-N,$$
and inserting into \eqref{eq:rifuno} we get the bound (\ref{last}).
\end{proof}
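The bound (\ref{last}) is also easy to test numerically. The following Python sketch (an illustration only, assuming NetworkX and NumPy; the helper recomputes $R^+$ from the Laplacian pseudoinverse as in the sketch in the Introduction) checks the inequality on a few random connected graphs:
\begin{verbatim}
import numpy as np
import networkx as nx

def R_plus(G):
    nodes = list(G.nodes())
    L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
    Lp = np.linalg.pinv(L)
    d = [G.degree(v) for v in nodes]
    n = len(nodes)
    return sum((d[a] + d[b]) * (Lp[a, a] + Lp[b, b] - 2.0 * Lp[a, b])
               for a in range(n) for b in range(a + 1, n))

checked = 0
for seed in range(100):
    G = nx.gnp_random_graph(12, 0.3, seed=seed)
    if not nx.is_connected(G):
        continue
    N, E = G.number_of_nodes(), G.number_of_edges()
    degs = np.array([d for _, d in G.degree()], dtype=float)
    bound = N * (N - 4) + 2.0 * E * np.sum(1.0 / degs)
    assert R_plus(G) >= bound - 1e-9
    checked += 1
print(checked, "random connected graphs checked")
\end{verbatim}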
The expression of the lower bound given in (\ref{last}) depends on the summation $\sum_{j=1}^N {1\over d_j}$. Working on it, we get the following results
\begin{theorem}\label{th:segundo} Let $G$ be a graph with degree sequence $d_1 \le d_2 \le \cdots \le d_N$.
If $d_j=1$ for $1\le j \le M<N$ then
\begin{equation}\label{fino}
R^+(G) \ge N(N-4) +2|E|\left[M+{{(N-M)^2}\over {2|E|-M}}\right].
\end{equation}
\end{theorem}
\begin{proof}
If $d_j=1$ for $1\le j \le M<N$, then
\begin{equation}\label{extra}
R^+(G) \ge N(N-4)+2|E|M+ 2|E| \sum_{j=M+1}^N {1\over d_j}
\end{equation}
Applying the harmonic mean-arithmetic mean inequality to the last summation in the above inequality, we obtain
$$\sum_{j=M+1}^N {1\over d_j}\ge {{(N-M)^2}\over {2|E|-M}},$$
and inserting this into (\ref{extra}) we get the desired result.
\end{proof}
In the case of trees, by \eqref{fino} it is easy to obtain the following
\begin{corollary} If $T$ is a tree with $M\ge 2$ leaves and $N>2$ then
\begin{equation} \label{finoarbol}
R^+(T)\ge N(N-4)+2(N-1)\left[M+{{(N-M)^2}\over {2(N-1)-M}}\right].
\end{equation}
\end{corollary}
Now we will present three additional lower bounds for $R^+(G)$ starting again from (\ref{una}) and (\ref{dos}) and studying the behavior of a suitable real function depending on the degree sequence. Later on we will discuss which bounds turn out to be better.
\begin{theorem}\label{th:second}
For any graph $G$ with degree sequence $d_1 \le d_2 \le \cdots \le d_N$, $N>2$,
\begin{equation} \label{last1}
R^+(G)\ge N(N-2) + 2|E|\sum_{j=1}^N {1\over d_j} - \dfrac {4|E|}{1+d_1}.
\end{equation}
\end{theorem}
\begin{proof} Inserting (\ref{una}) and (\ref{dos}) into (\ref{addit}) we get
\begin{equation} \label{bound}
\begin{split}
R^+(G) &\ge \sum_{{i<j}\atop{d(i,j)=1}} {{(d_i+d_j)(d_i+d_j-2)}\over {d_id_j-1}}+\sum_{{i<j}\atop{d(i,j)>1}} (2+{d_i\over d_j}+{d_j\over d_i})\\
&=\sum_{{i<j}\atop{d(i,j)=1}} \left[ \dfrac {(d_i+d_j)(d_i+d_j-2)}{d_id_j-1} - (2+{d_i\over d_j}+{d_j\over d_i}) \right ] + \sum_{i<j} (2+{d_i\over d_j}+{d_j\over d_i})\\
& = \sum_{{i<j}\atop{d(i,j)=1}} \left[ \dfrac {(d_i+d_j)(d_i+d_j-2d_id_j)}{d_id_j(d_id_j-1)} \right ] +N(N-2) + 2|E|\sum_{i=1}^N \frac 1 {d_i}.
\end{split}
\end{equation}
To bound the first term, let us consider the real function $f(x)=\frac {(x+d_j)(x+d_j-2d_jx)}{d_jx(d_jx-1)} $ in the interval $I=[d_1,d_N]$, for $d_j \ge 2$. By Calculus, this function is increasing for $x \ge 2$ and moreover $f(2) \ge f(1)$. Thus for any integer $x \in I$ we get $f(x) \ge \frac {(d_1+d_j)(d_1+d_j-2d_jd_1)}{d_jd_1(d_jd_1-1)} $. A similar argument applied to the function $g(x)= \frac {(d_1+x)(d_1+x-2d_1x)}{d_1x(d_1x-1)}$ in the interval $I'=[2,d_N]$ shows that $g$ is increasing in $I'$ and thus $g(x) \ge g(d_1) = - \frac {4}{1+d_1}$ if $d_1 \ge 2$. When $d_1=1$, for $x \ge 2$, we get $g(x)= - \frac {x+1} x$ and $g(x) \ge g(2)= - \frac 32>-2.$ Therefore
\begin{equation}\label{extra1}
\sum_{{i<j}\atop{d(i,j)=1}} \dfrac {(d_i+d_j)(d_i+d_j-2d_id_j)}{d_id_j(d_id_j-1)} \ge - \dfrac {4|E|} {1+d_1}.
\end{equation}
Inserting (\ref{extra1}) in (\ref{bound}) we get (\ref{last1}).
\end{proof}
Finally, applying the harmonic mean-arithmetic mean inequality, from (\ref{last1}) we deduce the following corollaries
\begin{corollary}
For any graph $G$ with degree sequence $d_1 \le d_2 \le \cdots \le d_N$, $N>2$,
\begin{equation}\label{bound1}
R^+(G) \ge 2N(N-1)- \dfrac {4|E|} {1+d_1}.
\end{equation}
\end{corollary}
\begin{corollary}
If $d_j=1$ for $1\le j \le M<N$ then
\begin{equation}\label{fino1}
R^+(G) \ge N(N-2) +2|E|\left[M+{{(N-M)^2}\over {2|E|-M}}\right]-2|E|.
\end{equation}
\end{corollary}
\begin{corollary} If $T$ is a tree with $M\ge 2$ leaves then
\begin{equation} \label{finoarbol1}
R^+(T)\ge N(N-2)+2(N-1)\left[M+{{(N-M)^2}\over {2(N-1)-M}}\right]-2(N-1).
\end{equation}
\end{corollary}
\begin{remark}
It is a simple exercise in Calculus to show that the real functions
$$\Phi(x)=x+{{(N-x)^2}\over {2|E|-x}},\,\,\, x \ge 0$$
and
$$\Psi(x)=x\left[M+{{(N-M)^2}\over {2x-M}}\right],\,\,\, x \ge N-1$$
are increasing.
The fact that $\Phi$ is increasing tells us that the bounds (\ref{fino}), (\ref{finoarbol}), (\ref{fino1}) and (\ref{finoarbol1}) improve as the number of vertices with degree 1 increases. Indeed, the bound \eqref{fino} is worse than the universal bound \eqref{markov} only when $M=0$, a case we intentionally disregarded in the statement of Theorem \ref{th:segundo}, while for $M\ge 1$ our new bound improves on \eqref{markov}.
In fact, since $\Phi(x)$ and $\Psi(x)$ are increasing, if $M\ge 1$, then either $M=1$ and $|E|$ must be at least $N$, because $G$ cannot be a tree with one leaf, or $M\ge 2$.
In the first case, $M=1$ and $|E|=N$, a simple computation yields that the bound (\ref{fino}) is $N^2-4N+2N\left[\dfrac{N^2}{2N-1}\right]$ which is better than \eqref{markov} whenever $N\ge 4$.
In the second case, with $M=2$ and $|E|=N-1$, an easy calculation yields that the bound (\ref{fino}), for $N>2$, is $2N^2-3N-2$, which is better than \eqref{markov} for $N\geq4$.
So by the monotonicity of $\Phi$ and $\Psi$ all cases are covered except possibly $M=1$ and $N=2$ or $3$. But it is impossible for a graph to have just one leaf and 2 or 3 vertices.
The fact that $\Psi$ is increasing tells us that (\ref{fino}) improves as the number of edges $|E|$ increases, so in a sense the lower bound (\ref{finoarbol}) is the weakest. This is noticeable when $M$ is small, as in the linear graph, where the bound is quadratic and the value of $R^+$ is cubic, but it is not so when $M$ is large, as in the $N$-star graph, where the bound (\ref{finoarbol}) becomes $3N^2-8N+4$ and the actual value of $R^+$ is $3N^2-7N+4$. In the opposite direction, with $M$ small and $|E|$ large, \eqref{fino} performs well: if we take a complete $K_{N-1}$ with a single vertex attached with a single edge to any one of the vertices of the $K_{N-1}$, then we get $R^+=3N^2-8N+8-{2\over {N-1}}$, whereas the lower bound is $3N^2-9N+6$.
\end{remark}
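The explicit values mentioned in the remark can be reproduced numerically. The following Python sketch (an illustration only, assuming NetworkX and NumPy) computes $R^+$ for the $N$-star and for the complete graph $K_{N-1}$ with one pendant vertex, and compares with the closed forms above:
\begin{verbatim}
import numpy as np
import networkx as nx

def R_plus(G):
    nodes = list(G.nodes())
    L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
    Lp = np.linalg.pinv(L)
    d = [G.degree(v) for v in nodes]
    n = len(nodes)
    return sum((d[a] + d[b]) * (Lp[a, a] + Lp[b, b] - 2.0 * Lp[a, b])
               for a in range(n) for b in range(a + 1, n))

N = 10
star = nx.star_graph(N - 1)                    # the N-star, M = N-1 leaves
print(R_plus(star), 3 * N**2 - 7 * N + 4)      # actual value 3N^2 - 7N + 4
print(3 * N**2 - 8 * N + 4)                    # its lower bound from (finoarbol)

Kp = nx.complete_graph(N - 1)                  # K_{N-1} plus a pendant vertex
Kp.add_edge(0, N - 1)
print(R_plus(Kp), 3 * N**2 - 8 * N + 8 - 2 / (N - 1))
\end{verbatim}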
Summing up, by easy computations we have that:
\begin{enumerate}
\item bound (\ref{finoarbol1}) is always better than bound (\ref{finoarbol});
\item bound (\ref{fino}) is better than (\ref{fino1}) for $ |E|>N $ and they coincide for $|E|=N $, i.e. for unicyclic graphs;
\item bound (\ref{bound1}) is better than (\ref{markov}) if and only if
\begin{equation}\label{condition}
(1+d_1)(N-1)>2|E|
\end{equation}
and they coincide, for example, in case of trees and complete graphs. Note that (\ref{condition}) is satisfied, for instance, in case of $d$-regular graphs, with $d<N-1$.
\end{enumerate}
\section{Lower bounds via the majorization technique}
In this Section we show how majorization can be applied to bound the additive degree-Kirchhoff index. This approach can be pursued if we can identify a set of variables with constant sum and a Schur-convex function $f$ to be optimized on the set $S$ of these variables. In this case the global minimum (maximum) of $f$ is attained at the minimum (maximum) element of the set $S$ with respect to the majorization order (see \cite{BT} and \cite{BCT1} for more details).
\begin{theorem}\label{th:major1}
For any graph $G$ with degree sequence $d_1\le d_2 \le \cdots \le d_n$, let
\begin{equation}\label{eq:k}
\underset{i<j}{\sum }\dfrac{d_j}{d_i}=H.
\end{equation}
Then
\begin{equation}\label{eq:new_bound}
R^+(G) \ge N(N-3) + H + \left [ \frac {N(N-1)} 2\right ]^2 \frac 1 {H}.
\end{equation}
\end{theorem}
\begin{proof}
Let us consider the $\dfrac{N(N-1)}{2}$ variables $x_{ij}=\dfrac{d_j}{d_i}$, with $i<j$. The function
$$
f(x_{12}, x_{13}, \cdots x_{(N-1)N})= \sum_{i<j} \left ( \frac {d_i}{d_j} + \frac {d_j}{d_i} \right ) = \sum_{i=1}^{N-1} \sum_{j=i+1}^N \left ( x_{ij} + \frac 1 {x_{ij}} \right )
$$
is Schur-convex in the variables $x_{ij}$.
The minimal element of the set
$$
\Sigma_{H}= \{ \mathbf{w} \in \mathbb{R}^{N(N-1)/2} : w_1 \ge w_2 \ge \cdots \ge w_{N(N-1)/2} \ge 0 \,\, , \sum_i w_i =H \}
$$
with respect to the majorization order is $\frac {2H} {N(N-1)} \mathbf{s}^{N(N-1)/2}$, where $\mathbf{s}$ is the unit vector (see \cite{Marshall} ). Thus we get the lower bound:
\begin{equation}\label{eq:first bound}
\sum_{i=1}^{N-1} \sum_{j=i+1}^N \left ( x_{ij} + \frac 1 {x_{ij}} \right ) \ge H + \left [ \frac {N(N-1)} 2\right ]^2 \frac 1 {H}.
\end{equation}
By the inequality \eqref{eq:rifuno} in the proof of Theorem \ref{th:first} we know that
$$
R^+(G) \ge N(N-3) + \sum_{i<j} \left ( \frac {d_i}{d_j} + \frac {d_j}{d_i} \right )
$$
and by \eqref{eq:first bound} we obtain the expected bound.
\end{proof}
\begin{remark}
The majorization technique would work just as well if we considered the $\dfrac{N(N-1)}{2}$ variables $\dfrac 1 {x_{ij}} = \dfrac{d_i}{d_j}$ and the invariant quantity
\begin{equation}\label{eq:k**}
\underset{i<j}{\sum }\dfrac{d_i}{d_j}=H^{*}.
\end{equation}
Following the same steps as above we have another lower bound
\begin{equation}\label{eq:second bound}
\sum_{i=1}^{N-1} \sum_{j=i+1}^N \left ( x_{ij} + \frac 1 {x_{ij}} \right ) \ge H^{*} + \left [ \frac {N(N-1)} 2\right ]^2 \frac 1 {H^{*}}.
\end{equation}
Except for $d$-regular graphs, for which the bounds \eqref{eq:first bound} and \eqref{eq:second bound} coincide because $H=H^*$, the first bound is always better than the second one.
In fact, by means of the harmonic mean - arithmetic mean inequality we get:
$$
H^{*} \cdot H \ge \left [ \frac {N(N-1)} 2 \right ]^2
$$
If $G$ is not a $d$-regular graph, the inequality $H > \frac {N(N-1)}2 > H^{*}$ holds. Multiplying both sides of the last inequality by $(H-H^{\ast})$ yields:
\[
(H-H^{*}) \cdot H \cdot H^{*} \geq (H-H^{\ast }) \left[ \frac{N(N-1)}{2}\right]
^{2}.
\]
It follows that
$$
(H-H^{*} )\geq \left( \dfrac{H-H^*}{H \cdot H^{*}} \right) \left[ \dfrac{N(N-1)}{2}\right] ^{2}=\left( \dfrac{1}{H^{*}}-\dfrac{1}{H}\right) \left[ \dfrac{N(N-1)}{2}\right] ^{2}.
$$
Rearranging this inequality we conclude that the lower bound in \eqref{eq:first bound} is always better than the one in \eqref{eq:second bound}.
\end{remark}
\noindent The usefulness of \eqref{eq:new_bound} is limited by the computation of the graph invariant $H$.
We will list later some examples of graphs for which this computation can be easily handled and compare \eqref{eq:new_bound} with the other bounds.
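For concreteness, $H$ and the bound \eqref{eq:new_bound} can be computed directly from the degree sequence. The following Python sketch (an illustration only; function names are ours) does so for the star graph and compares the result with the universal bound \eqref{markov} and with the actual value $3N^2-7N+4$ of $R^+$:
\begin{verbatim}
import numpy as np

def H_invariant(degrees):
    # H = sum over i<j of d_j/d_i for the nondecreasing degree sequence
    d = np.sort(np.array(degrees, dtype=float))
    n = len(d)
    return sum(d[j] / d[i] for i in range(n) for j in range(i + 1, n))

def bound_new(degrees):
    # the lower bound of the theorem above, computed from the degrees only
    N = len(degrees)
    H = H_invariant(degrees)
    return N * (N - 3) + H + (N * (N - 1) / 2.0) ** 2 / H

# star graph K_{1,N-1}: degree sequence (1,...,1,N-1)
N = 8
degs = [1] * (N - 1) + [N - 1]
print(H_invariant(degs), bound_new(degs))       # H = 70, bound = 121.2
print(2 * (N - 1) ** 2, 3 * N**2 - 7 * N + 4)   # universal bound 98, actual 140
\end{verbatim}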
\vskip .2 in
Majorization is also the main argument in yet another possible approach for obtaining lower bounds. In reference \cite{Pal2013} it was shown that the following relationship between the additive and multiplicative degree-Kirchhoff indices holds :
\begin{equation}
\label{rela}
R^+(G)={N\over {2|E|}}R^*(G)+\sum_{j=1}^N\sum_{i\neq j} \pi_i E_iT_j,
\end{equation}
where $E_iT_j$ is the expected value of the number of steps $T_j$ that the random walk on $G$, started from vertex $i$, takes to reach vertex $j$. We recall that this random walk moves from a vertex $v$ to any neighboring vertex $w$ with uniform probabilities $p(v,w)={1\over {d_v}}$ and that the $N\times N$ matrix $P=\left(p(v,w)\right)$ of transition probabilities has a unique probabilistic left eigenvector $\pi=(\pi_i)$ (the stationary distribution), which is present in the summation in (\ref{rela}), and a spectrum $1=\lambda_1>\lambda_2\ge \lambda_3\ge \cdots \ge \lambda_N\ge -1$ in terms of which $R^*(G)$ can be expressed (see \cite{Pal2011}), namely
$$\displaystyle R^*(G)=2|E|\sum_{i=2}^N {1\over {1-\lambda_i}}.$$
With the preceding remarks and notation we can prove now the following
\begin{theorem}
For any graph $G$
\begin{equation}\label{eq:bound_major1}
R^+(G)\ge N\left[ {1\over {1+{\sigma \over \sqrt{N-1}}}}+{{(N-2)^2}\over {N-1-{\sigma \over \sqrt{N-1}}}}\right]+(N-1)^2,
\end{equation}
where
$$\sigma^2={2\over N}\sum_{(i,j)\in E}{1\over {d_id_j}}=\frac {{\rm tr}(P^2)}N$$
\end{theorem}
\begin{proof}
First, bound the summation with the hitting times as in \cite{Pal2013}:
$$R^+(G)\ge N \sum_{i=2}^N {1\over {1-\lambda_i}} + 2|E|\sum_{j=1}^N{1\over {d_j}}-2N+1.$$
Now apply the harmonic mean-arithmetic mean inequality only to the second addendum in order to get
$$R^+(G)\ge N \sum_{i=2}^N {1\over {1-\lambda_i}} + (N-1)^2.$$
Finally apply majorization to $\sum_{i=2}^N {1\over {1-\lambda_i}}$, as in \cite{BCPT1}, Proposition 11, in order to get the expected bound \eqref{eq:bound_major1}.
\end{proof}
We remark that
we recover the universal bound \eqref{markov} for the complete graph, for which $\sigma={1\over \sqrt{N-1}}$, and for all other graphs the bound is better than the universal one \eqref{markov}, as can be seen in the discussion in Section 3.1.1 of \cite{BCPT1}. \\
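As a small numerical illustration (not part of the argument; NetworkX and NumPy are assumed and the function name is ours), the quantity $\sigma$ and the bound \eqref{eq:bound_major1} can be evaluated directly from the edge list; for the cycle $C_8$ one obtains a value slightly above the universal bound \eqref{markov} and below the actual index $\frac{N^3-N}{3}$:
\begin{verbatim}
import numpy as np
import networkx as nx

def bound_sigma(G):
    # the lower bound of the preceding theorem, computed from sigma
    N = G.number_of_nodes()
    s2 = (2.0 / N) * sum(1.0 / (G.degree(u) * G.degree(v)) for u, v in G.edges())
    t = np.sqrt(s2) / np.sqrt(N - 1)
    return N * (1.0 / (1.0 + t) + (N - 2) ** 2 / (N - 1 - t)) + (N - 1) ** 2

N = 8
G = nx.cycle_graph(N)
print(2 * (N - 1) ** 2, bound_sigma(G), (N**3 - N) / 3)   # 98, about 98.1, 168
\end{verbatim}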
In order to complete our analysis, we study some particular classes of graphs for which the computation of $H$ is simple and we compare \eqref{eq:new_bound} with the other bounds.
\subsection{$d$-regular graph}
For $d$-regular graphs we have $H=\dfrac{N(N-1)}{2}$. The lower bound \eqref{eq:new_bound} becomes $2N(N-2)$, which is worse than bound \eqref{markov} and consequently worse than (\ref{bound1}) and \eqref{eq:bound_major1}.
\subsection{$(a,b)-$ semiregular graph}
Let us consider a semiregular graph that has $N_{1}$ vertices with degree $a$ and
$N_{2}$ vertices with degree $b$, $a<b$, $N=N_{1}+N_{2}$. Then $H=
\frac{N(N-1)}{2}+\left( \frac{b}{a}-1\right) N_{1}N_{2}$.\\
We deal with two examples: i) a semiregular bipartite graph and ii) a semiregular not bipartite graph.
\begin{itemize}
\item [i)] Let us consider a semiregular bipartite graph with $N_{1}$=10 vertices with degree $a=4$ and $N_{2}=4$ vertices with degree $b=10$.
For this graph we have that $H=151$, $\sigma=0.4689932$ which imply
\begin{table}[th]
\centering%
\begin{tabular}{|l|l|}
\hline
Bound (\ref{bound1}) & $R^+(G)\ge 332$ \\ \hline
Bound (\ref{markov}) & $R^+(G)\ge 338$ \\ \hline
Bound \eqref{eq:bound_major1} & $R^+(G)\ge 338.033$ \\ \hline
Bound \eqref{eq:new_bound} & $R^+(G)\ge 359.64$ \\ \hline
\end{tabular}
\end{table}
\begin{center}
Table 1
\end{center}
\bigskip
\noindent Hence, bound \eqref{eq:new_bound} performs better than the others.
\item [ii)] Let us take a semiregular graph on $N$ vertices ($N$ even, $N\geq 8$) that is the union of a complete $K_{N/2}$ and an $N/2$-cycle such that vertex $i$ of the cycle is linked to vertex $i$ of the complete graph with a single edge, for $1\le i \le N/2$.
This graph has $N_{1}=N_{2}=N/2$, $a=3$, $b=N/2$, thus $H=\frac{1}{24}N\left( 6N+N^{2}-12\right)$.
By \eqref{eq:new_bound} we get
\begin{equation}\label{bound_semireg1}\tag{19'}
R^+(G)\ge \frac{N\left( 228N^{2}-1152N+36N^{3}+N^{4}+1152\right) }{24\left(6N+N^{2}-12\right)}
\end{equation}
\noindent while bound (\ref{bound1}) becomes
\begin{equation}\label{bound_semireg2}\tag{14'}
R^+(G)\ge \frac{1}{8}N\left(15N-22\right)
\end{equation}
By Calculus, it is easy to show that, for $N>8$, the bound (\ref{bound_semireg1}) is better than both (\ref{bound_semireg2}) and (\ref{markov}). By virtue of (\ref{condition}), for $N>8$, bound (\ref{markov}) improves on bound (\ref{bound_semireg2}),
and they coincide for $N=8.$
\end{itemize}
We show in Table 2 a comparison between all bounds applicable to this example, for $N=20$:
\begin{table}[th]
\centering%
\begin{tabular}{|l|l|}
\hline
Bound (14') & $R^+(G)\ge 695$ \\ \hline
Bound (\ref{markov}) & $R^+(G)\ge 722$ \\ \hline
Bound \eqref{eq:bound_major1} & $R^+(G)\ge 722.001$ \\ \hline
Bound (19') & $R^+(G)\ge 848.61$ \\ \hline
\end{tabular}
\end{table}
\begin{center}
Table 2
\end{center}
Finally, looking at Table 2, the bound (\ref{bound_semireg1}) always performs better than \eqref{eq:bound_major1}, which in turn improves on both \eqref{markov} and (\ref{bound_semireg2}).
\subsection{Full binary tree of depth $d>1$}
We consider a full binary tree of depth $d>1$ which has $N_1=2^d$ vertices of degree 1, one vertex (the root) of degree 2 and $N_2=2^d-2$ vertices of degree 3. Then $H=\dfrac{N(N-1)}{2} +2N_1+\frac 3 2 N_2 + 2N_1N_2$.
Taking $d=3$ we obtain the results summarized in the following table, which shows that our new bounds are better than the universal one \eqref{markov}:
\begin{table}[th]
\centering
\begin{tabular}{|l|l|}
\hline
Bound (\ref{markov}) & $R^+(G)\ge 392$ \\ \hline
Bound \eqref{eq:bound_major1} & $R^+(G)\ge 392.14$\\ \hline
Bound \eqref{eq:new_bound} & $R^+(G)\ge 406$\\ \hline
Bound (14) & $R^+(G)\ge 459.6$ \\ \hline
\end{tabular}
\end{table}
\begin{center}
Table 3
\end{center}
On the other hand, if we connect all the pendant vertices of the above mentioned full binary tree, we get a graph with $N_1=3$ vertices with degree $a=2$ and $N_2=12$ vertices with degree $b=3$.
Hence, we are reduced to the previous class of $(2,3)$-semiregular graphs and we have \\
\begin{table}[th]
\centering%
\begin{tabular}{|l|l|}
\hline
Bound (\ref{markov}) & $R^+(G)\ge 392$ \\ \hline
Bound (\ref{bound1}) & $R^+(G)\ge 392$ \\ \hline
Bound \eqref{eq:bound_major1} & $R^+(G)\ge 392.12$ \\ \hline
Bound \eqref{eq:new_bound} & $R^+(G)\ge 392.63$ \\ \hline
\end{tabular}
\end{table}
\begin{center}
Table 4
\end{center}
In this case, bound \eqref{eq:new_bound} performs slightly better than the others.
\section{Upper bounds}
Now with respect to upper bounds, we can prove the following simple
\begin{theorem}
For any graph $G$ we have
\begin{equation}
\label{upper}
R^+(G)\le 2|E|(N-1)R,
\end{equation}
where $R=\max_{i,j} R_{ij}$.
\end{theorem}
\begin{proof}
$R^+(G)\le R\sum_{i<j}(d_i+d_j)=2|E|(N-1)R $.
\end{proof}
The inequality (\ref{upper}) may seem like a crude estimate. In fact, it recovers the right order of the upper bound, since
$$2|E|(N-1)R\le N(N-1)^3,$$
which is only worse than the bound found in \cite{Pal2013} by the constant of the largest $N^4$ term. For trees, (\ref{upper}) becomes
\begin{equation}
\label{uppert}
R^+(T)\le 2(N-1)^2D\le 2(N-1)^3,
\end{equation}
where $D$ is the diameter of the graph. Again, this inequality for trees gives the right order of the upper bound of the index, except at most for the constant of the $N^3$ term, as the linear graph shows, since for this tree $R^+(G)\sim {2 \over 3}N^3$. On the other hand, for the star graph, $D=2$ and (\ref{uppert}) becomes $4(N-1)^2$ which is off the actual value only by the constant $4$ instead of $3$.
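The quality of (\ref{upper}) and (\ref{uppert}) is easy to explore numerically. The following Python sketch (an illustration only, assuming NetworkX and NumPy; the helper is ours) computes $R=\max_{i,j}R_{ij}$, the bound $2|E|(N-1)R$ and the actual $R^+$ for the path on $N$ vertices:
\begin{verbatim}
import numpy as np
import networkx as nx

def Rmax_and_Rplus(G):
    nodes = list(G.nodes())
    L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
    Lp = np.linalg.pinv(L)
    d = [G.degree(v) for v in nodes]
    n = len(nodes)
    Rmax, Rplus = 0.0, 0.0
    for a in range(n):
        for b in range(a + 1, n):
            r = Lp[a, a] + Lp[b, b] - 2.0 * Lp[a, b]
            Rmax = max(Rmax, r)
            Rplus += (d[a] + d[b]) * r
    return Rmax, Rplus

N = 50
G = nx.path_graph(N)
Rmax, Rplus = Rmax_and_Rplus(G)
E = G.number_of_edges()
print(Rplus, 2 * E * (N - 1) * Rmax)   # actual value vs. the bound (both cubic)
print(Rplus / N**3)                    # tends to 2/3 as N grows
\end{verbatim}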
In reference \cite{Mark} a careful study of $R$ is carried out for distance-regular graphs, a large family for which we can apply the simple bound (\ref{upper}) and show that $R^+(G)$ must be less than twice the universal lower bound.
\begin{theorem}
If $G$ is distance-regular with degree $k>2$ then
$$R^+(G)\le \left(2+{{188}\over {101}}\right)(N-1)^2.$$
\end{theorem}
\begin{proof}
Insert the inequality $R\le\left(2+{{188}\over {101}}\right){{(N-1)}\over {Nk}}$ shown in \cite{Mark} (the equality holds only in the case of the Biggs-Smith graph) into (\ref{upper}) to prove the assertion.
\end{proof}
It is interesting to notice that the result is false for the distance-regular graphs with degree $k=2$, i.e., the cycles, for which $R^+(G)$ jumps from the quadratic values shown above for $k>2$ to the cubic value ${{N^3-N}\over 3}$.
\vskip .2 in
We conclude giving further upper bounds, in terms of the so-called spectral gap, obtained by combining ideas from Markov chains and majorization.
Recall that from (\ref{rela}) and subsequent comments we have
\begin{equation}
\label{rela2}
R^+(G)={N\over {2|E|}}R^*(G)+\sum_{j=1}^N\sum_{i=1}^N \pi_i E_iT_j=N \sum_{i=2}^N {1\over {1-\lambda_i}} + \sum_{j=1}^N\sum_{i=1}^N \pi_i E_iT_j
\end{equation}
We want to find an upper bound for the summation with the hitting times in (\ref{rela2}), for which we use some Markov chain theory found in reference \cite{Lov}, specifically:
\begin{equation}
\label{lovasz}
\sum_i \pi_iE_iT_j={1\over \pi_j}\sum_{k=2}^N{1\over {1-\lambda_k}}v_{kj}^2,
\end{equation}
where $v_{kj}$ is the $j$-th component of the eigenvector $v_k$ associated to the eigenvalue
$\lambda_k$ (the vectors $v_k$ can be chosen to be orthonormal), and
$$\sum_{k=2}^N v_{kj}^2=1-\pi_j.$$
It is clear that (\ref{lovasz}) can be bounded as follows:
$${1\over \pi_j}\sum_{k=2}^N{1\over {1-\lambda_k}}v_{kj}^2\le {1\over {(1-\lambda_2)\pi_j}}\sum_{k=2}^Nv_{kj}^2={1\over {1-\lambda_2}}{{1-\pi_j}\over \pi_j}.$$
And so the sum of expected hitting times can be bounded as:
$$\sum_{j=1}^N\sum_{i=1}^N \pi_i E_iT_j\le{1\over {1-\lambda_2}}\sum_j{{1-\pi_j}\over \pi_j}={1\over {1-\lambda_2}}(2|E|\sum_j{1\over d_j}-N).$$
Now use in (\ref{rela2}) the upper bounds in \cite{BCPT1} Section 3.2 for $\sum_{i=2}^N {1\over {1-\lambda_i}}$, to obtain the following corollaries:
\begin{corollary}
For any $G$ we have
\begin{equation}
\label{final1}
R^+(G)\le N\left({{N-k-2}\over {1-\lambda_2}}+{k\over 2}+{1\over \theta}\right)+{1\over {1-\lambda_2}}(2|E|\sum_j{1\over d_j}-N),
\end{equation}
where $\displaystyle k=\Bigg\lfloor {{\lambda_2(N-1)+1}\over {\lambda_2+1}}\Bigg\rfloor$ and $\displaystyle \theta=\lambda_2(N-k-2)-k+2$.
\end{corollary}
\begin{corollary}
For any bipartite $G$ we have
\begin{equation}
\label{final2}
R^+(G)\le N\left({1\over 2}+{{N-k-3}\over {1-\lambda_2}}+{k\over 2}+{1\over \theta}\right)+{1\over {1-\lambda_2}}(2|E|\sum_j{1\over d_j}-N),
\end{equation}
where $k$ and $\theta$ are defined above.
\end{corollary}
For the $N$-star graph we have that $\lambda_2=0$, $k=1$ and $\theta=1$ and therefore the bound (\ref{final2}) becomes $3N^2-7N+4$ and the actual value of $R^+$ is attained. This can be extended to the complete bipartite graph $K_{r,s}$, for arbitrary $r, s$, for which bound (\ref{final2}) becomes
\begin{equation}
\label{rs}
3r^2+3s^2+2rs-3r-3s,
\end{equation}
whose order is always $N^2$, and improves the bound $2|E|(N-1)D=4rs(r+s-1)$. The smallest value of (\ref{rs}) occurs for $r=s={N\over 2}$, where it takes the value $N(2N-3)$, which is equal to the actual value $N(2N-3)$ of $R^+(G)$.
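As a numerical illustration of the last claims (not part of the argument; the helper function is ours and NetworkX and NumPy are assumed), the following Python sketch compares the value (\ref{rs}) with the actual additive degree-Kirchhoff index of $K_{r,s}$ for a few choices of $r$ and $s$; in these instances the two agree, in line with the cases of equality noted above:
\begin{verbatim}
import numpy as np
import networkx as nx

def R_plus(G):
    nodes = list(G.nodes())
    L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
    Lp = np.linalg.pinv(L)
    d = [G.degree(v) for v in nodes]
    n = len(nodes)
    return sum((d[a] + d[b]) * (Lp[a, a] + Lp[b, b] - 2.0 * Lp[a, b])
               for a in range(n) for b in range(a + 1, n))

for r, s in [(1, 7), (3, 5), (4, 4), (4, 10)]:
    value = 3 * r**2 + 3 * s**2 + 2 * r * s - 3 * r - 3 * s
    print(r, s, round(R_plus(nx.complete_bipartite_graph(r, s)), 6), value)
\end{verbatim}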
\section{Conclusions}
We have derived upper and lower bounds for $R^+(G)$ whose expressions do not depend on the effective resistances, which in general are difficult to compute, but on a limited number of graph invariants. These bounds are not mutually exclusive and their performance depends on the particular structure of the graphs in question. Here is a table summarizing our best results concerning the lower bounds:
\begin{table}[th!]
\centering
\begin{tabular}{|l|l|l|}
\hline
\textbf{Graph} & \textbf{Bound} & \\ \hline
\multirow{3}{*}{Generic} & $R^+(G) \ge 2N(N-1)- \dfrac {4|E|} {1+d_1}$ & (\ref{bound1})\\
& $R^+(G) \ge N(N-3) + H + \left [ \frac {N(N-1)} 2\right ]^2 \frac 1 {H}$ & \eqref{eq:new_bound} \\
& $R^+(G)\ge N\left[ {1\over {1+{\sigma \over \sqrt{N-1}}}}+{{(N-2)^2}\over {N-1-{\sigma \over \sqrt{N-1}}}}\right]+(N-1)^2 $ & \eqref{eq:bound_major1} \\ \hline
$M$ leaves & $R^+(G) \ge N(N-4) +2|E|\left[M+{{(N-M)^2}\over {2|E|-M}}\right]$ & (\ref{fino}) \\ \hline
Tree & $R^+(T)\ge N(N-2)+2(N-1)\left[M+{{(N-M)^2}\over {2(N-1)-M}}\right]-2(N-1)$ & (\ref{finoarbol1})\\ \hline
\end{tabular}
\end{table}
\begin{center}
Table 5
\end{center}
\newpage
| {
"timestamp": "2013-11-14T02:08:06",
"yymm": "1311",
"arxiv_id": "1311.3113",
"language": "en",
"url": "https://arxiv.org/abs/1311.3113",
"abstract": "Given a simple connected graph on $N$ vertices with size $|E|$ and degree sequence $d_{1}\\leq d_{2}\\leq ...\\leq d_{N}$, the aim of this paper is to exhibit new upper and lower bounds for the additive degree-Kirchhoff index in closed forms, not containing effective resistances but a few invariants $(N,|E|$ and the degrees $d_{i}$) and applicable in general contexts. In our arguments we follow a dual approach: along with a traditional toolbox of inequalities we also use a relatively newer method in Mathematical Chemistry, based on the majorization and Schur-convex functions. Some theoretical and numerical examples are provided, comparing the bounds obtained here and those previously known in the literature.",
"subjects": "Combinatorics (math.CO)",
"title": "New upper and lower bounds for the additive degree-Kirchhoff index",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109538667758,
"lm_q2_score": 0.7185943925708561,
"lm_q1q2_score": 0.7076796291910211
} |
https://arxiv.org/abs/1505.03459 | On powers of interval graphs and their orders | It was proved by Raychaudhuri in 1987 that if a graph power $G^{k-1}$ is an interval graph, then so is the next power $G^k$. This result was extended to $m$-trapezoid graphs by Flotow in 1995. We extend the statement for interval graphs by showing that any interval representation of $G^{k-1}$ can be extended to an interval representation of $G^k$ that induces the same left endpoint and right endpoint orders. The same holds for unit interval graphs. We also show that a similar fact does not hold for trapezoid graphs. | \section{Introduction}
An interval graph is a graph whose vertices correspond to intervals of the real line, and such that two vertices are adjacent if and only if the corresponding intervals intersect. More generally, given a set $S$ of geometric objects, the \emph{intersection graph} $G$ of $S$ is the graph with vertex set $S$ such that two vertices are adjacent if and only if they have a nonempty intersection. We say that $S$ is a \emph{geometric representation} of $G$. Let $m\geq 0$ be an integer and consider $m+1$ parallel horizontal lines $L_1,\ldots,L_{m+1}$ indexed from bottom to top. An \emph{$m$-trapezoid} on $L_1,\ldots,L_{m+1}$ is determined by a set of $m+1$ intervals $I_1,\ldots,I_{m+1}$ with $I_i\subset L_i$ for each $i\in\{1,\ldots,m+1\}$. This $m$-trapezoid consists of the polygon with corners $\ell(I_1),\ldots,\ell(I_{m+1}),r(I_{m+1}),\ldots,r(I_1),\ell(I_1)$ connected in this order (where $\ell(I_i)$ and $r(I_i)$ denote the left and right endpoint of interval $I_i$, respectively). An \emph{$m$-trapezoid graph}~\cite{F95} is the intersection graph of a set of $m$-trapezoids defined on the same $m+1$ lines. Note that $0$-trapezoid graphs coincide with interval graphs, and $1$-trapezoid graphs are simply called trapezoid graphs (and were introduced in~\cite{DCP88}). An interval representation is \emph{proper} if no interval properly contains any other interval, and it is \emph{unit} if all intervals have unit length. A graph is a unit (proper, respectively) interval graph if it is the intersection graph of a unit (proper, respectively) interval representation. It is well-known that an interval graph is unit if and only if it is proper~\cite{R69}.
An $m$-trapezoid representation of an $m$-trapezoid graph $G$ naturally induces $2(m+1)$ orders for $V(G)$ corresponding to the orders of the left and right endpoints of the intervals of each line $L_i$\footnote{Note that two elements of $V(G)$ can be equal for some orders if they share a common left or right endpoint.}. We denote by $\leqslant_L^i$ the order corresponding to the left endpoints of the intervals on line $L_i$, and by $\leqslant_R^i$ the order corresponding to the right endpoints of the intervals on line $L_i$. If $m=0$ (i.e. we consider an interval representation of an interval graph), we simply denote these two orders $\leqslant_L$ and $\leqslant_R$.
Given a graph $G$ and an integer $k\geq 1$, the $k$th \emph{power} $G^k$ of $G$ is the graph with vertex set $V(G)$ where two vertices are adjacent in $G^k$ if and only if they are at distance at most~$k$ in $G$.
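As a small illustration (not needed for the results, but used again in the proof of Proposition~\ref{prop:P5} below): for the path $P_5$ with vertices $1,2,3,4,5$ and edges $\{i,i+1\}$, the square $P_5^2$ has edge set
\[
\{12,\,13,\,23,\,24,\,34,\,35,\,45\},
\]
i.e.\ exactly the pairs of vertices at distance at most~$2$ on the path.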
Raychaudhuri~\cite{R87} proved that for any graph $G$ and any integer $k\geq 2$, if $G^{k-1}$ is an interval graph, then so is $G^k$. Raychaudhuri also proved the same statement for unit interval graphs. Flotow~\cite{F95} extended Raychaudhuri's result as follows: for any $k\geq 2$ and every $m\geq 0$, if $G^{k-1}$ is an $m$-trapezoid graph, then $G^k$ is an $m$-trapezoid graph. Flotow also proved that the same statement is true for co-comparability graphs~\cite{F95}, and studied similar questions for circular-arc graphs~\cite{F96}. Note that a similar result does not hold for the related class of permutation graphs (which are exactly the graphs that are both comparability and co-comparability graphs), since the path $P_7$ is a permutation graph, but its square $P_7^2$ is not (indeed it is not a comparability graph). Similarly, this does not hold for chordal graphs (see~\cite{LS83} and~\cite{R87} for some discussion and related results).
Raychaudhuri's proof for (unit) interval graphs relies on classic characterizations of these graph classes in terms of forbidden structures, and he does not consider the interval representations of the graphs in question. In contrast, in Flotow's proof, an $m$-trapezoid representation of $G^{k-1}$ is extended in an inductive process to obtain an $m$-trapezoid representation of $G^{k}$. However, Flotow's proof does not yield any conclusion about the \emph{orders} induced by the two considered $m$-trapezoid representations of $G^{k-1}$ and $G^k$. We prove the following strengthening of Raychaudhuri's results from~\cite{R87}, providing a shorter proof for them.
\begin{theorem}\label{thm:main}
Let $G$ be a graph and $k\geq 2$ an integer such that $G^{k-1}$ is an interval graph. Given any interval representation $R$ of $G^{k-1}$, $R$ can be extended to an interval representation $R'$ of $G^k$ such that $R$ and $R'$ induce the same left and right endpoint orders.
\end{theorem}
\begin{corollary}\label{cor:unit}
Let $G$ be a graph and $k\geq 2$ an integer such that $G^{k-1}$ is a proper interval graph. Given any proper (respectively unit) interval representation $R$ of $G^{k-1}$, $R$ can be extended to a proper (respectively unit) interval representation $R'$ of $G^k$ such that $R$ and $R'$ induce the same left and right endpoint orders.
\end{corollary}
Finally, we show that a statement similar to the one of Theorem~\ref{thm:main} is not true for $m$-trapezoid graphs in the case $m=1$ and $k=2$ (i.e. for squares of trapezoid graphs).
\begin{proposition}\label{prop:P5}
There is a trapezoid representation $R$ of the path $P_5$ such that for any trapezoid representation $R'$ of $P_5^2$, the representations $R$ and $R'$ differ on at least one of the four induced orders $\leqslant_L^0$, $\leqslant_R^0$, $\leqslant_L^1$, $\leqslant_R^1$.
\end{proposition}
We remark that the proof of Theorem~\ref{thm:main} provides a polynomial-time algorithm for building the representation $R'$ from $R$ and $G$. Hence Theorem~\ref{thm:main} has (algorithmic) applications; see for example~\cite{algopaper}.
\section{Proofs}
\begin{proof}[Proof of Theorem~\ref{thm:main}]
We can assume that $G$ is connected and that in the representation $R$ there is no pair of intervals $I_x$ and $I_y$ with $r(I_x)=\ell(I_y)$ (otherwise we can modify $R$ so that it satisfies this property, without affecting $\leqslant_L$ and $\leqslant_R$).
For every vertex $x\in V(G^{k-1})$, we denote by $I_x$ the interval corresponding to $x$ in the representation $R$ of $G^{k-1}$.
Assume first that there exists a vertex at distance $k$ from $x$ (in $G$) whose corresponding interval of $R$ has a left endpoint larger than $\ell(I_x)$. Let $u_x$ be such a vertex whose corresponding interval $I_{u_x}$ of $R$ has the largest left endpoint. We define $r_k(x)$ to be a point of the real line located after $\ell(I_{u_x})$ and before the next left or right endpoint in $R$ (if it exists).
In the case $u_x=u_y$ for two distinct vertices $x$ and $y$ such that $x\leqslant_R y$, we choose $r_k(x)$ and $r_k(y)$ such that $r_k(x)<r_k(y)$ if $r(I_x)<r(I_y)$ and $r_k(x)=r_k(y)$ if $r(I_x)=r(I_y)$.
If no such vertex $u_x$ exists, we let $r_k(x)=r(I_x)$. In this case, each vertex $y$ whose corresponding interval of $R$ starts after $\ell(I_x)$ is at distance at most~$k-1$ from $x$, and no interval in $R$ starts after $r(I_x)$.
Now, we build $R'$ as follows: each interval $I_x=[\ell(I_x),r(I_x)]$ is replaced by interval $I'_x=[\ell(I_x),r_k(x)]$. In other words, all the intervals of $R$ are extended to the right until being adjacent to the last interval at distance at most~$k$ in $G$, while locally preserving the order $\leqslant_R$ in the event of ties.
It is clear that $R$ and $R'$ induce the same order $\leqslant_L$ since we have not modified the left endpoints. Assume for contradiction that $R$ and $R'$ do not induce the same order $\leqslant_R$. Note first that, by construction, $r_k(x)=r_k(y)$ implies $r(I_x)=r(I_y)$, so no tie of right endpoints is created. Thus there exist two vertices $x$ and $y$ with $r(I_x)\leq r(I_y)$ but $r_k(x)>r_k(y)$, i.e., $x\leqslant_R y$ in $R$ but not in $R'$. This implies in particular that $r(I_x)<r_k(x)$ and hence $u_x$ is well-defined.
Moreover, by definition of $r_k(x)$, we cannot have $\ell(I_{u_x})\leq r_k(y)<r_k(x)$, therefore $r_k(y)<\ell(I_{u_x})$ and the distance $d_G(y,u_x)$ is at least $k+1$. Since $d_G(x,u_x)=k$, there is a vertex $z$ at distance~$1$ from $u_x$ in $G$ that is at distance $k-1$ from $x$ in $G$ (for example $z$ lies on a shortest path from $u_x$ to $x$ in $G$). Hence, $I_{z}$ intersects both $I_x$ and $I_{u_x}$ in $R$. But then, $I_y$ also intersects $I_{z}$ (since $r(I_x)\leq r(I_y)<\ell(I_{u_x})$ and $I_z$ contains the whole segment $[r(I_x),\ell(I_{u_x})]$), implying that $d_G(y,u_x)\leq d_G(y,z)+d_G(z,u_x)\leq k-1+1=k$, a contradiction. Therefore $R$ and $R'$ induce the same order $\leqslant_R$.
It remains to show that the interval graph $G'$ defined by $R'$ is exactly $G^k$, i.e. that (i) all edges of $G^k$ are contained in $G'$, and (ii) that every edge of $G'$ belongs to $G^k$.
Note that each interval $I'_x$ of $R'$ contains the interval $I_x$ of $R$, hence all the edges of $G^{k-1}$ are contained in $G'$. Thus, assuming that (i) is false, we have two vertices $x,y$ at distance exactly~$k$ in $G$ that are not adjacent in $G'$. Assume without loss of generality that $x\leqslant_R y$. Since $d_G(x,y)=k$, the intervals $I_x$ and $I_y$ are disjoint in $R$, and together with $r(I_x)\leq r(I_y)$ this gives $\ell(I_y)>\ell(I_x)$. Hence $y$ is a candidate in the definition of $u_x$, so $r_k(x)>\ell(I_{u_x})\geq\ell(I_y)$, and therefore $I'_x$ and $I'_y$ do intersect in $R'$, a contradiction. Therefore (i) holds.
Now, assume that (ii) is false; then we have two vertices $x,y$ with $d_G(x,y)\geq k+1$ but $I'_x$ intersects $I'_y$. Without loss of generality assume that $x\leqslant_Ry$. Then we have $r_k(x)>\ell(I_y)$; moreover $u_x$ must be well-defined, since otherwise $r_k(x)=r(I_x)$ and $I_x$ would intersect $I_y$ in $R$, contradicting $d_G(x,y)\geq k+1$. Let $z$ be a vertex at distance~$1$ from $x$ and at distance at most~$k-1$ from $u_x$ (for example $z$ lies on a shortest path from $x$ to $u_x$ in $G$). Then, $I_z$ intersects both $I_x$ and $I_{u_x}$ in $R$, which implies that $I_z$ intersects $I_y$ in $R$ (since $r(I_x)<\ell(I_y)\leq\ell(I_{u_x})$ and $I_z$ contains the whole segment $[r(I_x),\ell(I_{u_x})]$). Hence, we have $d_G(y,x)\leq d_G(y,z)+d_G(z,x)\leq k-1+1=k$, a contradiction. Hence (ii) holds and the proof of Theorem~\ref{thm:main} is complete.
\end{proof}
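The construction above is effectively algorithmic, as noted in the introduction. The following Python sketch (ours, not part of the original paper; function names and the data layout are purely illustrative) computes the extended representation $R'$ from $G$ and $R$, under the simplifying assumption that all endpoints of $R$ are pairwise distinct; ties would require the more careful choice of $r_k$ described in the proof.
\begin{verbatim}
from collections import deque

def bfs_distances(adj, source):
    # Distances from `source` in the graph G, given as an adjacency dict.
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def extend_representation(adj, R, k):
    # R: dict vertex -> (left, right), an interval representation of G^(k-1).
    # Returns an interval representation of G^k inducing the same endpoint
    # orders, assuming all endpoints of R are pairwise distinct.
    endpoints = sorted(e for I in R.values() for e in I)
    gaps = [b - a for a, b in zip(endpoints, endpoints[1:])]
    delta = min(gaps) / 2 if gaps else 1.0

    target = {}  # x -> l(I_{u_x}), when a vertex u_x exists
    for x, (lx, rx) in R.items():
        dist = bfs_distances(adj, x)
        cand = [u for u, d in dist.items() if d == k and R[u][0] > lx]
        if cand:
            target[x] = max(R[u][0] for u in cand)

    R_new = dict(R)  # vertices with no u_x keep their interval
    groups = {}
    for x, t in target.items():
        groups.setdefault(t, []).append(x)
    for t, group in groups.items():
        # Place the new right endpoints just after t, respecting the
        # original right endpoint order within the group.
        group.sort(key=lambda x: R[x][1])
        for i, x in enumerate(group):
            R_new[x] = (R[x][0], t + delta * (i + 1) / (len(group) + 1))
    return R_new
\end{verbatim}
The grouping step mirrors the tie-breaking rule used in the proof when several vertices share the same $u_x$.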
\medskip
\begin{proof}[Proof of Corollary~\ref{cor:unit}]
An interval representation is proper if and only if the two orders $\leqslant_L$ and $\leqslant_R$ are the same.
Let $G$ be a graph such that $G^{k-1}$ is a proper interval graph and let $R$ be a proper interval representation of $G^{k-1}$. By Theorem~\ref{thm:main}, $R$ can be extended to an interval representation $R'$ of $G^k$ inducing the same left and right endpoint orders. Thus $R'$ is necessarily a proper interval representation.
For unit interval representations, it is enough to note that any proper interval representation $R$ of a graph $H$ can be transformed into a unit interval representation $R^u$ of $H$ such that $R$ and $R^u$ induce the same left and right endpoint orders (see for example~\cite{BW99}).\end{proof}
\medskip
\begin{proof}[Proof of Proposition~\ref{prop:P5}]
Let $V(P_5)=\{1,2,3,4,5\}$ and consider the following trapezoid representation, $R$, of $P_5$. Each vertex $i$ ($1\leq i\leq 5$) corresponds to the trapezoid $T_i$ with corners $\ell(I_i^0),\ell(I_i^1),r(I_i^1),r(I_i^0)$, where the intervals on $L_0$ are $I_1^0=[0,1]$, $I_2^0=[6,7]$, $I_3^0=[4,5]$, $I_4^0=[10,11]$, $I_5^0=[8,9]$ and the intervals on $L_1$ are $I_1^1=[4,5]$, $I_2^1=[3,4]$, $I_3^1=[8,9]$, $I_4^1=[6,7]$, $I_5^1=[12,13]$. See Figure~\ref{fig:P5} for an illustration. Note that we have $1\leqslant_L^0 3\leqslant_L^0 2\leqslant_L^0 5\leqslant_L^0 4$, $2\leqslant_L^1 1\leqslant_L^1 4\leqslant_L^1 3\leqslant_L^1 5$, $\leqslant_L^0=\leqslant_R^0$ and $\leqslant_L^1=\leqslant_R^1$.
\begin{figure}[ht]
\centering
\scalebox{0.8}{\begin{tikzpicture}[scale=0.4]
\foreach \I in {0,5,10}
{
\fill[pattern color=gray, draw=white, pattern=horizontal lines] (\I,-3) -- (\I+5,3) -- (\I+1+5,3) -- (\I+1,-3);
}
\foreach \I in {2.5,7.5}
{
\fill[pattern color=gray, draw=white, pattern=vertical lines] (\I,3) -- (\I+5,-3) -- (\I+1+5,-3) -- (\I+1,3);
}
\foreach \I in {0,5,10}
{
\draw[line width=1pt] (\I,-3) -- (\I+5,3) (\I+1,-3) -- (\I+1+5,3);
}
\foreach \I in {2.5,7.5}
{
\draw[line width=1pt] (\I,3) -- (\I+5,-3) (\I+1,3) -- (\I+1+5,-3);
}
\draw[line width=0.5pt] (-1,3) -- (18,3)
(-1,-3) -- (18,-3);
\node at (19,3) {$L_1$};
\node at (19,-3) {$L_0$};
\node at (0.5,-3.8) {$T_1$};
\node at (3,3.8) {$T_2$};
\node at (5.5,-3.8) {$T_3$};
\node at (8,3.8) {$T_4$};
\node at (10.5,-3.8) {$T_5$};
\end{tikzpicture}}
\caption{The trapezoid representation $R$ of graph $P_5$.}
\label{fig:P5}
\end{figure}
By contradiction, we assume that there exists a trapezoid representation $R'$ of $P_5^2$ inducing the same orders $\leqslant_L^0$, $\leqslant_R^0$, $\leqslant_L^1$ and $\leqslant_R^1$ as $R$. For each vertex $i$ ($1\leq i\leq 5$), denote by $T'_i$ the trapezoid corresponding to $i$ in $R'$, and by ${I'}_i^0$ and ${I'}_i^1$ the intervals of $L_0$ and $L_1$, respectively, belonging to $T'_i$.
Since $3$ and $5$ are adjacent in $P_5^2$, we have $T'_3\cap T'_5\neq\emptyset$. Since $3\leqslant_L^0 5$, $3\leqslant_R^0 5$, $3\leqslant_L^1 5$ and $3\leqslant_R^1 5$, we have $\ell({I'}_5^1)<r({I'}_3^1)$ or $\ell({I'}_5^0)<r({I'}_3^0)$. Since $3\leqslant_R^0 2$ but $2$ and $5$ are not adjacent in $P_5^2$, we have $r({I'}_3^0)<r({I'}_2^0)<\ell({I'}_5^0)$. This implies that $\ell({I'}_5^1)<r({I'}_3^1)$. Since $2$ and $4$ are adjacent in $P_5^2$ and moreover $2\leqslant_L^0 4$, $2\leqslant_R^0 4$, $2\leqslant_L^1 4$ and $2\leqslant_R^1 4$, we must have $\ell({I'}_4^1)<r({I'}_2^1)$ or $\ell({I'}_4^0)<r({I'}_2^0)$. But since $r({I'}_2^0)<\ell({I'}_5^0)<\ell({I'}_4^0)$, we necessarily have $\ell({I'}_4^1)<r({I'}_2^1)$. Finally, $1$ and $4$ are non-adjacent in $P_5^2$ while $1$ precedes $4$ in all four orders, so $T'_1\cap T'_4=\emptyset$ forces $r({I'}_1^1)<\ell({I'}_4^1)$. Together with $2\leqslant_R^1 1$ this gives $r({I'}_2^1)\leq r({I'}_1^1)<\ell({I'}_4^1)$, contradicting $\ell({I'}_4^1)<r({I'}_2^1)$. Hence no such representation $R'$ exists.
\end{proof}
| {
"timestamp": "2015-11-10T02:14:48",
"yymm": "1505",
"arxiv_id": "1505.03459",
"language": "en",
"url": "https://arxiv.org/abs/1505.03459",
"abstract": "It was proved by Raychaudhuri in 1987 that if a graph power $G^{k-1}$ is an interval graph, then so is the next power $G^k$. This result was extended to $m$-trapezoid graphs by Flotow in 1995. We extend the statement for interval graphs by showing that any interval representation of $G^{k-1}$ can be extended to an interval representation of $G^k$ that induces the same left endpoint and right endpoint orders. The same holds for unit interval graphs. We also show that a similar fact does not hold for trapezoid graphs.",
"subjects": "Combinatorics (math.CO)",
"title": "On powers of interval graphs and their orders",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109507462227,
"lm_q2_score": 0.7185943925708562,
"lm_q1q2_score": 0.7076796269486093
} |
https://arxiv.org/abs/1211.2153 | Local and global stability of equilibria for a class of chemical reaction networks | A class of chemical reaction networks is described with the property that each positive equilibrium is locally asymptotically stable relative to its stoichiometry class, an invariant subspace on which it lies. The reaction systems treated are characterised primarily by the existence of a certain factorisation of their stoichiometric matrix, and strong connectedness of an associated graph. Only very mild assumptions are made about the rates of reactions, and in particular, mass action kinetics are not assumed. In many cases, local asymptotic stability can be extended to global asymptotic stability of each positive equilibrium relative to its stoichiometry class. The results are proved via the construction of Lyapunov functions whose existence follows from the fact that the reaction networks define monotone dynamical systems with increasing integrals. | \section{Introduction}
Systems of chemical reactions can give rise to dynamical systems of various kinds (discrete or continuous time, discrete or continuous state, stochastic or deterministic, for example) and displaying a variety of behaviours \cite{erdi}. Perhaps most widely studied are models whose evolution is naturally described by systems of ordinary differential equations, namely deterministic, continuous time, spatially homogeneous models where chemical concentrations take nonnegative real values. A broad question of interest is when models of some {\bf chemical reaction network} (CRN) can be shown to allow, or forbid, certain behaviours for all reasonable choices of chemical reaction rates (kinetics). The attempt to make claims about the behaviour of CRNs which are to some degree independent of choices of kinetics is often termed {\bf chemical reaction network theory} (CRNT). Since the pioneering work of Feinberg \cite{feinberg0} and Horn and Jackson \cite{hornjackson}, there has been considerable progress in various directions, including the discovery of structural features of networks associated with multistationarity, oscillation, and the persistence of solutions.
Much, though not all, work in CRNT has focussed on reaction networks with mass action kinetics, but unknown rate constants, namely on particular polynomial differential equations with unknown parameters. Here we construct a class of CRNs which can be proved to have strong convergence properties with weaker assumptions on the kinetics. First note that the evolution of a CRN quite naturally takes place on certain invariant convex sets termed stoichiometry classes (to be defined below). The basic convergence properties of the networks we describe are:
\begin{enumerate}
\item No more than one positive equilibrium on each stoichiometry class, and local asymptotic stability of each positive equilibrium on its stoichiometry class.
\item Under additional assumptions, global asymptotic stability of a unique positive equilibrium on each nontrivial stoichiometry class.
\end{enumerate}
The precise meaning of these statements will be clarified below. The results will be proved using the theory of monotone dynamical systems \cite{hirschsmith,halsmith}. There is a considerable intersection between this theory and the study of CRNs, reflecting the fact that CRNs fairly frequently give rise to order-preserving dynamical systems -- see \cite{volpert,kunzesiegelmathchem,minchevasiegel,leenheer,banajidynsys,angelileenheersontag,angelisontagfutile,banajimierczynski} for example. The key geometrical insights for the results presented here come from the results on monotone systems with increasing integrals in \cite{mierczynski}, generalised in \cite{banajiangeli,banajiangelierratum}.
\section{Statement of the results}
The local and global results summarised above will be stated precisely as Theorems~\ref{mainthm0}~and~\ref{mainthm} below after some terminology and notation are introduced. Define $\mathbb{R}^n_{\geq 0}$ to be the nonnegative orthant in $\mathbb{R}^n$, i.e.
\[
\mathbb{R}^n_{\geq 0} = \{x \in \mathbb{R}^n\,:\, x_i \geq 0\,\,\,\mbox{for}\,\,\, i = 1, \ldots, n\}\,.
\]
Similarly
\[
\mathbb{R}^n_{\leq 0} = \{x \in \mathbb{R}^n\,:\, x_i \leq 0\,\,\,\mbox{for}\,\,\, i = 1, \ldots, n\}\,.
\]
A vector in $\mathbb{R}^n_{\geq 0}$ will be referred to as {\bf nonnegative}, while one in $\mathrm{int}(\mathbb{R}^n_{\geq 0})$ will be termed {\bf positive}. As chemical concentrations are necessarily nonnegative, $\mathbb{R}^n_{\geq 0}$ is the natural state-space for ODE models of systems of chemical reactions. \\
We will be considering dynamical systems of the form:
\begin{equation}
\label{eqmain}
\dot x = \Gamma v(x)\,,
\end{equation}
with the following assumptions:
\begin{enumerate}
\item[A1.] $x \in \mathbb{R}^n_{\geq 0}$, $\Gamma$ is an $n \times m$ matrix, $v:U \to \mathbb{R}^m$ is $C^1$ and is defined on some open neighbourhood $U$ of $\mathbb{R}^n_{\geq 0}$.
\item[A2.] The reaction rates $v(x)$ satisfy conditions K1, K2 and K3 listed in Appendix~\ref{appkinetic}.
\item[A3.] $\Gamma = \Lambda\Theta$, where
\begin{enumerate}
\item[i.] $\Lambda$ is an $n \times r$ matrix with each row containing exactly one nonzero entry, and no column of zeros.
\item[ii.] $\Theta$ is an $r \times m$ matrix such that $\Theta_{ij}\Theta_{kj} \leq 0$ for $i \not = k$ and $\mathrm{ker}(\Theta^T)$ is one dimensional and includes a positive vector.
\end{enumerate}
\item[A4.] The DSR graph for the system at each $x \in \mathrm{int}(\mathbb{R}^n_{\geq 0})$ is strongly connected.
\end{enumerate}
\vspace{0.3cm}
{\bf Remark on condition A1.} $x$ describes the concentrations of a set of $n$ chemical species involved in $m$ chemical reactions. The matrix $\Gamma$ is termed the {\bf stoichiometric matrix} of the system and $\Gamma_{ij}$ defines the net production/consumption of species $i$ by reaction $j$. It is convenient, though not necessary, to assume that the vector field is defined on an open set containing the nonnegative orthant, in order to avoid technicalities when discussing its derivative. Note that in this paper we follow the convention that a reversible reaction is treated as a single process (rather than as a pair of irreversible reactions), contributing one column to $\Gamma$ and one reaction rate in $v$. In the case of an irreversible reaction we adopt the convention that reactants occur on the left (and are hence associated with negative entries in $\Gamma$), while products occur on the right (and are associated with positive entries in $\Gamma$). In the case of reversible reactions, the choice of ``left'' and ``right'' is arbitrary: altering this choice for the $j$th reaction re-signs the $j$th column of $\Gamma$ and the $j$th reaction rate $v_j$. All results in this paper are independent of this choice.
{\bf Remark on condition A2.} This is a weak assumption on the kinetics. It can be crudely summarised via the statements: ``reactions need all their reactants to proceed'' and ``provided all reactants are present, increasing a concentration speeds up a reaction''. The assumption has been discussed and illustrated previously in \cite{angelileenheersontag,banajimierczynski} for example. The assumption implies, amongst other things, that the nonnegative orthant is positively invariant (Lemma~10 in \cite{banajimierczynski}). With condition A1, this guarantees that (\ref{eqmain}) defines a local semiflow $\phi$ on $\mathbb{R}^n_{\geq 0}$.
{\bf Remark on condition A3.} Condition~A3(i) implies that $\Lambda$ has rank $r$, and condition~A3(ii) implies that each nonzero column of $\Theta$ contains exactly one negative entry and exactly one positive entry. The implications of condition A3 will be explored in detail later, during the proofs of the results. In brief, the first factor $\Lambda$ will define a closed, convex and pointed cone in $\mathbb{R}^n$ and the system will be shown to be order-preserving with respect to the order defined by this cone. The nature of this order-cone and its relationship with invariant subspaces of the system will imply the local and global convergence results below. Several lemmas which precede the proofs of these results require weaker conditions on $\Lambda$ and $\Theta$ than A3(i) and A3(ii); condition A3 can thus be seen as the intersection of the various conditions needed in these preliminary lemmas. We will comment in the concluding section on how to identify matrices which admit a factorisation as in condition A3.
{\bf Remark on condition A4.} The DSR graph associated with a CRN \cite{banajicraciun2} is a signed, labelled, bipartite, multidigraph, with relationships to other well-known objects such as Petri nets. It is strongly connected if there is a (directed) path from each vertex to each other vertex. We need only the following reduced construction here. Given an $n \times m$ matrix $A$ and an $m \times n$ matrix $B$, the (reduced) DSR graph $G_{A, B}$ is defined as follows: it is a bipartite digraph on $n+m$ vertices, $S_1, \ldots, S_n$ and $R_1, \ldots, R_m$ with arc $R_jS_i$ if, and only if, $A_{ij}\not = 0$ and arc $S_iR_j$ if, and only if, $B_{ji} \not = 0$. If desired, the arcs may be given the signs of the associated entries in $A$ and $B$. For System (\ref{eqmain}), at each $x \in \mathbb{R}^n_{\geq 0}$ we can define $G(x) = G_{\Gamma, -Dv(x)}$. Under condition A2, $G(x)$ is constant on $\mathrm{int}(\mathbb{R}^n_{\geq 0})$, and in fact on each elementary face of $\mathbb{R}^n_{\geq 0}$ (defined below).\\
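Checking condition A4 in examples is a purely mechanical task on the nonzero patterns of $\Gamma$ and $Dv(x)$. The following Python sketch (ours, purely illustrative and not part of the paper) builds the reduced DSR graph $G_{A,B}$ from two matrices and tests strong connectivity; it is run on the stoichiometric matrix of Example~1 below, using $\Gamma^T$ as a stand-in for $-Dv(x)$, which has the same nonzero pattern at interior points when all reactions are reversible and A2 holds.
\begin{verbatim}
def reduced_dsr_graph(A, B):
    # Arc R_j -> S_i iff A[i][j] != 0; arc S_i -> R_j iff B[j][i] != 0.
    n, m = len(A), len(A[0])
    arcs = {('S', i): set() for i in range(n)}
    arcs.update({('R', j): set() for j in range(m)})
    for i in range(n):
        for j in range(m):
            if A[i][j] != 0:
                arcs[('R', j)].add(('S', i))
            if B[j][i] != 0:
                arcs[('S', i)].add(('R', j))
    return arcs

def strongly_connected(arcs):
    # Every vertex reachable from a fixed root in the digraph and its reverse.
    def reach(adj, root):
        seen, stack = {root}, [root]
        while stack:
            v = stack.pop()
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen
    vertices = list(arcs)
    rev = {v: set() for v in arcs}
    for v, ws in arcs.items():
        for w in ws:
            rev[w].add(v)
    root = vertices[0]
    return len(reach(arcs, root)) == len(vertices) \
        and len(reach(rev, root)) == len(vertices)

# Stoichiometric matrix of Example 1 below (species A, B, C, D; 3 reactions).
Gamma = [[-1, 0, 1], [1, -1, 0], [1, 0, -1], [0, 1, -1]]
GammaT = [list(col) for col in zip(*Gamma)]
print(strongly_connected(reduced_dsr_graph(Gamma, GammaT)))  # True: A4 holds
\end{verbatim}
Only the nonzero patterns of the two matrices matter for connectivity, which is why a sign-pattern stand-in suffices here.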
{\bf Notation and terminology.} Given any $n \times k$ matrix $A$, and $x, y \in \mathbb{R}^n$ we will write $x \sim^A y$ for $x - y \in \mathrm{Im}\,A$. Clearly $\sim^A$ is an equivalence relation on $\mathbb{R}^n$. Given any such matrix $A$ and any $x \in \mathbb{R}^n_{\geq 0}$, define
\[
\mathcal{C}_{A, x} \equiv (x + \mathrm{Im}(A)) \cap \mathbb{R}^n_{\geq 0} = \{y \in \mathbb{R}^n_{\geq 0}\,:\, y \sim^A x\}.
\]
In the study of chemical reactions, $\mathcal{C}_{\Gamma, x}$ is termed the {\bf stoichiometry class} of $x$ (also known as the ``stoichiometric compatibility class'' of $x$); for system (\ref{eqmain}) satisfying assumptions A1--A4, $\mathcal{C}_{\Lambda, x}$ will be termed the {\bf $\Lambda$-class} of $x$. Since A3 implies that $\mathrm{Im}(\Gamma) \subseteq \mathrm{Im}(\Lambda)$, stoichiometry classes are subsets of $\Lambda$-classes, and clearly both are (forward) invariant under the local semiflow $\phi$. Stoichiometry classes or $\Lambda$-classes intersecting $\mathrm{int}(\mathbb{R}^n_{\geq 0})$ will be termed {\bf nontrivial}. \\
The first result of this paper is the following local claim:
\begin{theorem}
\label{mainthm0}
Suppose that System (\ref{eqmain}) satisfies assumptions A1, A2, A3 and A4. Each equilibrium $e \in \mathrm{int}(\mathbb{R}^n_{\geq 0})$ is the unique equilibrium on its stoichiometry class $\mathcal{C}_{\Gamma, e}$ and is locally asymptotically stable relative to $\mathcal{C}_{\Gamma, e}$.
\end{theorem}\\
This will be proved via construction of a Lyapunov function on a neighbourhood in $\mathcal{C}_{\Gamma,e}$ of any positive equilibrium $e$. In order to extend this result to a global result, we need conditions ensuring that each nontrivial stoichiometry class contains an equilibrium, that the Lyapunov function exists on the whole relative interior of each nontrivial stoichiometry class, and that trajectories cannot approach the relative boundary of a nontrivial stoichiometry class. To make these notions precise we need to introduce some additional ideas.
Given an $n \times r$ matrix $\Lambda$, define the closed, convex cone $K(\Lambda) \subseteq \mathbb{R}^n$ associated with $\Lambda$ as:
\[
K(\Lambda) = \{\Lambda y\,:\, y \in \mathbb{R}^r_{\geq 0}\}\,.
\]
A local semiflow on $\mathbb{R}^n_{\geq 0}$ is {\bf persistent} if
\[
x \in \mathrm{int}(\mathbb{R}^n_{\geq 0}) \Rightarrow \omega(x) \cap \partial \,\mathbb{R}^n_{\geq 0} = \emptyset
\]
where $\omega(x)$ is the $\omega$-limit set of $x$.
Let $S \subseteq \{1, \ldots, n\}$ and $S^c = \{1, \ldots, n\}\backslash S$. Define
\[
F_S = \{x \in \mathbb{R}^n\,:\, x_i > 0, i \in S\,\,\, \mbox{and}\,\,\, x_i = 0, i \not \in S\}.
\]
$F_S$ will be referred to as an {\bf elementary face} of $\mathbb{R}^n_{\geq 0}$. (Elementary faces are the relative interiors of the closed faces of $\mathbb{R}^n_{\geq 0}$.) An elementary face other than $\mathrm{int}(\mathbb{R}^n_{\geq 0})$ or $\{0\}$ will be termed nontrivial. An elementary face $F_S$ is {\bf repelling} if, at each $x \in F_S$, there exists $i \in S^c$ such that $\dot x_i > 0$. Quite generally a repelling face of $\mathbb{R}^n_{\geq 0}$ can contain no $\omega$-limit points of a local semiflow on $\mathbb{R}^n_{\geq 0}$ (Lemma~11 in \cite{banajimierczynski}). \\
To the list of conditions A1--A4 we add two further conditions:
\begin{enumerate}
\item[A5.] $K(\Lambda) \cap \mathbb{R}^n_{\leq 0} = \{0\}$.
\item[A6.]
\begin{enumerate}
\item[i.] All reactions are reversible, or
\item[ii.] Every nontrivial elementary face of $\mathbb{R}^n_{\geq 0}$ which intersects a nontrivial stoichiometry class of the system is repelling.
\end{enumerate}
\end{enumerate}
\vspace{0.3cm}
{\bf Remark on condition A5.} It will be shown later that condition A5 guarantees that the Lyapunov function constructed via conditions A1--A4 extends to the entire relative interior of each nontrivial stoichiometry class. Given $\Lambda$ of the form in condition~A3(i), it is clear that condition~A5 is not satisfied if and only if $\Lambda$ includes a column in $\mathbb{R}^n_{\leq 0}$, i.e., with no positive entries.
{\bf Remark on condition A6.} It will be shown in Lemma~\ref{lemsiphon} that conditions A1--A3 and A6(i) imply condition A6(ii) which, by the remarks above, implies the following: given any $x \in \mathrm{int}(\mathbb{R}^n_{\geq 0})$ and $y \in \mathcal{C}_{\Gamma, x}$, then $\omega(y) \cap \partial \mathbb{R}^n_{\geq 0} = \emptyset$, namely $\mathcal{C}_{\Gamma, x} \cap \partial \mathbb{R}^n_{\geq 0}$ contains no limit points of the local semiflow $\phi$. Note that this is a stronger conclusion than persistence of $\left.\phi\right|_{\mathcal{C}_{\Gamma, x}}$. The conclusion that for systems of reversible reactions satisfying conditions A1--A3 and A6(i), equilibria on each nontrivial stoichiometry class attract the whole class, including its boundary, is of some interest in itself: invariant elementary faces of $\mathbb{R}^n_{\geq 0}$ do not intersect nontrivial stoichiometry classes at all, thus making persistence a {\em structural} feature of the systems. The occurrence of such structural persistence was already remarked on in \cite{feinberg}; in contrast proving persistence for CRNs with mass action kinetics in the general situation where nontrivial stoichiometry classes may intersect invariant faces of $\mathbb{R}^n_{\geq 0}$ is delicate (\cite{craciunnazarovpantea,panteapersistence} for example) .\\
We have the following global result:
\begin{theorem}
\label{mainthm}
Suppose that System (\ref{eqmain}) satisfies conditions A1, A2, A3, A4, A5 and A6. Then each nontrivial stoichiometry class contains exactly one equilibrium, which lies in $\mathrm{int}(\mathbb{R}^n_{\geq 0})$, and is globally asymptotically stable relative to its stoichiometry class.
\end{theorem}\\
In physical terms chemical reaction networks satisfying conditions A1--A6 have very simple behaviour: different initial conditions on the same nontrivial stoichiometry class converge to the same positive equilibrium.
\section{Examples}
The proofs of Theorems~\ref{mainthm0}~and~\ref{mainthm} are somewhat involved, so some examples of their application are provided first.
\subsection{Example 1}
Consider the system of three reactions involving four chemicals
\begin{equation}
\label{eqdependent}
A \rightleftharpoons B + C, \qquad B \rightleftharpoons D, \qquad C + D \rightleftharpoons A
\end{equation}
and the associated differential equation (\ref{eqmain}) with assumptions A1 and A2. The stoichiometric matrix $\Gamma$ admits a factorisation $\Gamma = \Lambda\Theta$ as follows:
\[
\left(\begin{array}{rrr}-1&0&1\\1&-1&0\\1&0&-1\\0&1&-1\end{array}\right) = \left(\begin{array}{rrr}1&\,\,\,0&\,\,\,0\\0&1&0\\-1&0&0\\0&0&1\end{array}\right)\left(\begin{array}{rrr}-1&0&1\\1&-1&0\\0&1&-1\end{array}\right)
\]
Note that $\Lambda$ and $\Theta$ fulfil condition A3. The DSR graph at each interior point (drawn under assumption A2) is shown in Figure~\ref{SRdependent}, and is clearly strongly connected. So condition A4 is satisfied. By observation, condition A5 holds. Finally, all reactions are assumed to be reversible, so condition A6 holds. By Theorem~\ref{mainthm}, each nontrivial stoichiometry class contains exactly one equilibrium which attracts the whole stoichiometry class.
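To make condition A3(ii) concrete in this example (a direct check, not carried out explicitly in the text), note that
\[
\Theta^T\left(\begin{array}{c}1\\1\\1\end{array}\right)
=\left(\begin{array}{rrr}-1&1&0\\0&-1&1\\1&0&-1\end{array}\right)\left(\begin{array}{c}1\\1\\1\end{array}\right)=0,
\]
so the positive vector $(1,1,1)^T$ lies in $\mathrm{ker}(\Theta^T)$; since $\Theta$ has rank $2$, this kernel is one-dimensional, as required.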
\begin{figure}[h]
\begin{center}
\begin{minipage}{0.6\textwidth}
\begin{center}
\begin{tikzpicture}[domain=0:4,scale=0.45]
\fill (1,4) circle (4pt);
\node at (4,4) {$B$};
\draw[-, thick] (1.4,4) -- (3.6,4);
\node at (2.5,2.5) {$C$};
\draw[-, thick] (2.2,2.8) -- (1.2, 3.8);
\draw[-, dashed, thick] (2.8,2.2) -- (3.8, 1.2);
\fill (4,1) circle (4pt);
\node at (1,1) {$A$};
\node at (7,1) {$D$};
\fill (7,4) circle (4pt);
\draw[-, thick] (1.5,1) -- (3.6,1);
\draw[-, dashed, thick] (1,1.5) -- (1,3.6);
\draw[-, thick, dashed] (4.4,1) -- (6.5,1);
\draw[-, dashed, thick] (4.5,4) -- (6.6,4);
\draw[-, thick] (7,1.5) -- (7,3.7);
\end{tikzpicture}
\end{center}
\end{minipage}
\end{center}
\caption{\label{SRdependent} The DSR graph for reaction system (\ref{eqdependent}) at any interior point in $\mathbb{R}^n_{\geq 0}$. Edge-labels are omitted. Negative edges are represented as dashed lines, while positive edges are shown with bold lines: only connectedness is important for the results here, and so edge signs are not needed; however including these makes the relationship between the DSR graph and the network of reactions (\ref{eqdependent}) clearer to see. Pairs of antiparallel arcs of the same sign are represented as single undirected edges.}
\end{figure}
We remark that this example also fulfils the conditions in \cite{angelileenheersontag}, from which global stability follows by a rather different argument. This is not the case for the next example.
\subsection{Example 2}
Consider the following system of 4 chemical reactions on 5 chemicals:
\begin{equation}
\label{eqdependent1}
A \rightleftharpoons B + C, \qquad B \rightleftharpoons D, \qquad C + D \rightleftharpoons A, \qquad C + E \rightleftharpoons A.
\end{equation}
As before, make assumptions A1 and A2 about the kinetics. The stoichiometric matrix $\Gamma$ admits a factorisation $\Gamma = \Lambda\Theta$ as follows:
\[
\left(\begin{array}{rrrr}-1&0&1&1\\1&-1&0&0\\1&0&-1&-1\\0&1&-1&0\\0&0&0&-1\end{array}\right) = \left(\begin{array}{rrrr}1&\,\,\,0&\,\,\,0&\,\,\,0\\0&1&0&0\\-1&0&0&0\\0&0&1&0\\0&0&0&1\end{array}\right)\left(\begin{array}{rrrr}-1&0&1&1\\1&-1&0&0\\0&1&-1&0\\0&0&0&-1\end{array}\right).
\]
It is again easy to confirm that condition A3 is satisfied, and that the DSR graph at each interior point under assumption A2 (Figure~\ref{SRdep1}) is strongly connected, so that condition A4 is satisfied.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[domain=0:4,scale=0.45]
\fill (1,4) circle (4pt);
\node at (4,4) {$C$};
\draw[-, thick, dashed] (1.4,4) -- (3.5,4);
\node at (2.5,2.5) {$E$};
\draw[-, thick, dashed] (2.2,2.8) -- (1.2, 3.8);
\fill (4,1) circle (4pt);
\node at (1,1) {$A$};
\node at (7,1) {$B$};
\fill (10,1) circle (4pt);
\fill (7,4) circle (4pt);
\node at (10,4) {$D$};
\draw[-, thick, dashed] (1.5,1) -- (3.6,1);
\draw[-, thick] (1,1.5) -- (1,3.6);
\draw[-, thick] (4.4,1) -- (6.5,1);
\draw[-, thick, dashed] (7.5,1) -- (9.6,1);
\draw[-, thick] (10,1.4) -- (10,3.5);
\draw[-, thick, dashed] (7.4,4) -- (9.5,4);
\draw[-, thick, dashed] (4.5,4) -- (6.6,4);
\draw[-, thick] (4,1.4) -- (4,3.5);
\draw[-, thick] (6.7,4.3) .. controls (1,7) and (-1.5,4.5) .. (0.7,1.3);
\end{tikzpicture}
\end{center}
\caption{\label{SRdep1} The DSR graph for reaction system (\ref{eqdependent1}) at each interior point of $\mathbb{R}^n_{\geq 0}$. Conventions are as in Figure~\ref{SRdependent}.}
\end{figure}
By observation, condition A5 holds. Condition A6 again holds automatically as reactions have been assumed to be reversible. By Theorem~\ref{mainthm} each nontrivial stoichiometry class contains exactly one equilibrium which attracts the whole stoichiometry class.
To illustrate the case where reactions are not all reversible, we consider one further example.
\subsection{Example 3}
We consider an example sometimes termed an ``enzymatic futile cycle'' \cite{angelisontagfutile} and whose biological importance is discussed in some detail in \cite{fellmetabolism}. Global stability in this example can be proven in many ways (\cite{angelisontagfutile} for example), but is also an immediate consequence of the results in this paper.
The enzymatic futile cycle:
\begin{equation}
\label{reac3}
\begin{array}{ccccc}
S_1 + E & \rightleftharpoons & ES_1 & \rightarrow & S_2 + E\\
S_2 + F & \rightleftharpoons & FS_2 & \rightarrow & S_1 + F
\end{array}
\end{equation}
has stoichiometric matrix which factorises as follows:
\[
\left(\begin{array}{rrrr}-1 & 0 & 0 & 1\\-1 & 1 & 0 & 0\\1 & -1 & 0 & 0\\0 & 1 & -1 & 0\\0 & 0 & -1 & 1\\0 & 0 & 1 & -1\end{array}\right) = \left(\begin{array}{rrrr}1&0&0&0\\0&-1&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&-1\\0&0&0&1\end{array}\right)\left(\begin{array}{rrrr}-1&0&0&1\\1&-1&0&0\\0&1&-1&0\\0&0&1&-1\end{array}\right),
\]
and DSR graph shown in Figure~\ref{DSR3}.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[domain=0:4,scale=0.45]
\node at (1,1.5) {$S_1$};
\fill (1,5) circle (4pt);
\fill (5,1.5) circle (4pt);
\node at (1,7.5) {$ES_1$};
\fill (4,7.5) circle (4pt);
\node at (8,7.5) {$S_2$};
\node at (4,5) {$E$};
\node at (5,4) {$F$};
\node at (8,1.5) {$FS_2$};
\fill (8,4) circle (4pt);
\node at (8,7.5) {$S_2$};
\draw[<-, thick] (1.5,1.5) -- (4.6,1.5);
\draw[-, thick, dashed] (5.4,1.5) -- (7.3,1.5);
\draw[-, thick, dashed] (5.5,4) -- (7.6,4);
\draw[-, thick, dashed] (1,2.1) -- (1,4.6);
\draw[-, thick] (1, 5.4) -- (1,7);
\draw[<-, thick] (4, 5.5) -- (4,7.1);
\draw[-, thick, dashed] (1.7,7.5) -- (3.6,7.5);
\draw[->, thick] (4.4,7.5) -- (7.4,7.5);
\draw[-, thick, dashed] (1.4,5) -- (3.5,5);
\draw[-, thick] (8,2.1) -- (8,3.6);
\draw[-, thick, dashed] (8, 4.4) -- (8,6.9);
\draw[->, thick] (5,1.9) -- (5,3.55);
\end{tikzpicture}
\end{center}
\caption{\label{DSR3} The DSR graph for reaction system (\ref{reac3}), which is clearly strongly connected. Conventions are as in Figure~\ref{SRdependent}. Note that, as a consequence of the irreversibility of some reactions, some edges in the DSR graph do not have an oppositely directed partner and so appear as directed.}
\end{figure}
Conditions A1 and A2 are assumed, while conditions A3--A5 are easy to confirm. On the other hand, since not all reactions are reversible, condition A6(i) fails, but condition A6(ii) can be checked directly. The details involve the notions of {\bf siphons} and associated faces of $\mathbb{R}^n_{\geq 0}$, developed in Appendix~\ref{appsiphon}, and are presented in Appendix~\ref{appEx3}.
\section{Proofs of the results}
\subsection{Summary of the proofs}
The proofs proceed roughly as follows. The key objects of interest are $\Lambda$-classes which are in general invariant $r$-dimensional objects in $\mathbb{R}^n$. Conditions A1--A3 will imply that on each $\Lambda$-class $\phi$ is monotone with respect to the ordering defined by $K(\Lambda)$. Additionally, if condition A4 holds, then $\phi$ is strongly monotone in the relative interior of each nontrivial $\Lambda$-class (in a sense to be made precise below). Condition A3 (ii) allows the construction of an increasing (linear) first integral on each $\Lambda$-class such that stoichiometry classes become the level sets of this function. One implication is that stoichiometry classes are antichains in the ordering defined by $K(\Lambda)$. Via a construction closely related to those in \cite{mierczynski,banajiangeli}, conditions A1--A4 allow the construction of a Lyapunov function on each stoichiometry class in a neighbourhood of any positive equilibrium, strictly increasing along nontrivial trajectories. Adding condition A5 allows this Lyapunov function to be extended to the entire relative interior of the stoichiometry class, while assumption A6 ensures that trajectories on nontrivial stoichiometry classes cannot have $\omega$-limit sets intersecting $\partial \mathbb{R}^n_{\geq 0}$. While some of the arguments are minor modifications of arguments in \cite{banajiangeli}, often they are simpler than the general arguments there, and so the presentation will be mostly self-contained.
\subsection{Basics on cones and partial orders}
A closed, convex and pointed cone $K \subseteq \mathbb{R}^n$ (i.e., a closed, convex cone additionally satisfying $K \cap (-K) = \{0\}$) will be termed a {\bf CCP cone}. If a CCP cone $K$ has nonempty interior, then it will be termed {\bf proper} \cite{berman}. Define $K^{*}$ to be the {\bf dual cone} to $K$, i.e., $K^* = \{\lambda \in \mathbb{R}^n\,|\, \langle \lambda, y \rangle \geq 0\,\,\mbox{for all}\,\, y \in K\}$.
Given an $n \times r$ matrix $\Lambda$ with rank $r$, then $K(\Lambda)$ is an $r$-dimensional CCP cone in $\mathbb{R}^n$ generated by $r$ extremal vectors (the columns of $\Lambda$). However, we can consider $K(\Lambda)$ to be proper on any coset of $\mathrm{Im}\,\Lambda$ (the affine hull of $K(\Lambda)$) in the following sense: given $z \in \mathbb{R}^r$ and $c \in \mathbb{R}^n$, the map $z \mapsto c + \Lambda z$ is a linear bijection from $\mathbb{R}^r$ to a coset of $\mathrm{Im}\,\Lambda$, which maps the proper cone $\mathbb{R}^r_{\geq 0}$ to $c + K(\Lambda)$. Via this identification, standard results on monotone dynamical systems can be lifted to cosets of $\mathrm{Im}\,\Lambda$.
The symbols $<,>,\leq, \geq, \ll, \gg$ will refer to the standard partial ordering on $\mathbb{R}^n$. For example, for $a,b \in \mathbb{R}^n$, $a < b$ means $b-a \in \mathbb{R}^n_{\geq 0} \backslash\{0\}$. When the ordering is that defined by some other cone $K$, the alternative symbols $\prec, \succ, \preceq, \succeq, \llcurly, \ggcurly$ will be used. (Where these symbols are used, the cone in question will be clear from the context.) For example, given some CCP cone $K\subseteq \mathbb{R}^n$ and $a, b \in \mathbb{R}^n$, $a \preceq b$ means $b - a \in K$. Normally $a \llcurly b$ means $b-a \in \mathrm{int}(K)$; here the meaning will be extended so that given any CCP cone $K\subseteq \mathbb{R}^n$, $a \llcurly b$ means $b-a \in \mathrm{relint}(K)$.
Given a partial order $\preceq$ defined by a cone $K$, and some set $X$, if $x, y \in X$ implies $x \not \prec y$, we will say that ``$X$ is a $K$-antichain''.
\subsection{$\Lambda$-classes are lattices}
The lemmas to follow will show that the cones considered here define orderings which make each $\Lambda$-class a lattice. Given $a, b \in \mathbb{R}^r$, define $c=a \wedge b$ by $c_i = \min\{a_i, b_i\}$, and $c=a \vee b$ by $c_i = \max\{a_i, b_i\}$.
\begin{lemma}
\label{latticefull}
Let $\Lambda$ be an $n \times r$ matrix with rank $r$ defining a cone $K(\Lambda) \subseteq \mathbb{R}^n$. Let $c \in \mathbb{R}^n$ be an arbitrary vector.
\begin{enumerate}
\item Consider $x, y \in \mathbb{R}^r$, $z = x \wedge y$, $x' = c+ \Lambda x$, $y' = c+ \Lambda y$, and $z' = c+ \Lambda z$. Then (i) $z' \preceq x'$ and $z' \preceq y'$ and (ii) Any $b' \in c+\mathrm{Im}(\Lambda)$ satisfying $b' \preceq x'$ and $b' \preceq y'$, also satisfies $b' \preceq z' $. In other words, $z'$ is the infimum of $x', y'$ in the order defined by $K(\Lambda)$.
\item Consider $x, y \in \mathbb{R}^r$, $z = x \vee y$, $x' = c+ \Lambda x$, $y' = c+ \Lambda y$, and $z' = c+ \Lambda z$. Then (i) $z' \succeq x'$ and $z' \succeq y'$ and (ii) Any $b' \in c+\mathrm{Im}(\Lambda)$ satisfying $b' \succeq x'$ and $b' \succeq y'$, also satisfies $b' \succeq z' $. In other words, $z'$ is the supremum of $x', y'$ in the order defined by $K(\Lambda)$.
\end{enumerate}
\end{lemma}
\begin{proof}
Part 1 will be proved; the proof of Part 2 is similar. Since $z \leq x$, $x-z = p \in \mathbb{R}^r_{\geq 0}$. So $x'-z' = (c+\Lambda x)-(c+\Lambda z) = \Lambda p$, i.e., $z' \preceq x'$. Similarly $z' \preceq y'$.
Consider a vector $b' = c+\Lambda b$ satisfying $b' \preceq x'$ and $b' \preceq y'$, i.e.,
\[
x'-b' = c+\Lambda x - (c+\Lambda b) = \Lambda p_1 \quad \mbox{and} \quad y' - b' = c+\Lambda y - (c + \Lambda b) = \Lambda p_2\,,
\]
where $p_1, p_2 \in \mathbb{R}^r_{\geq 0}$. Multiplying each equation by any matrix $\Lambda '$ such that $\Lambda '\Lambda = I$ gives $b \leq x$ and $b \leq y$, implying $b \leq x \wedge y \equiv z$, i.e. $z - b = p_3 \in \mathbb{R}^r_{\geq 0}$. So $z' - b' = (c+\Lambda z) - (c+\Lambda b) = \Lambda p_3 \in K(\Lambda)$, i.e. $b'\preceq z'$.
\end{proof}\\
{\bf Notation.} Given any $\Lambda$ satisfying the assumptions of Lemma~\ref{latticefull}, and $x, y \in \mathbb{R}^n$ with $x \sim^\Lambda y$, we write $x \curlywedge y$ for the infimum of $x, y$ under the order defined by $K(\Lambda)$. Similarly $x \curlyvee y$ refers to the supremum of $x, y$ under the order defined by $K(\Lambda)$. Clearly $(x \curlyvee y) \sim^\Lambda (x \curlywedge y) \sim^\Lambda x \sim^\Lambda y$.\\
The next lemma shows that additional assumptions on $\Lambda$ ensure that the intersection of a coset of $\mathrm{Im}(\Lambda)$ with any closed order interval under the standard ordering (including, for example, $\mathbb{R}^n_{\geq 0}$ itself) is a lattice under the order defined by $K(\Lambda)$.
\begin{lemma}
\label{lat0}
Let $\Lambda$ be an $n \times r$ matrix of rank $r$, and such that each row contains no more than one positive entry and no more than one negative entry. Given $c \in \mathbb{R}^n$, $x', y' \in c+ \mathrm{Im}(\Lambda)$ and $t \in \mathbb{R}^n$:
\begin{enumerate}
\item If $x', y' \geq t$, then $x' \curlywedge y' \geq t$.
\item If $x', y' \geq t$, then $x' \curlyvee y' \geq t$.
\item If $x', y' \leq t$, then $x' \curlywedge y' \leq t$.
\item If $x', y' \leq t$, then $x' \curlyvee y' \leq t$.
\end{enumerate}
\end{lemma}
\begin{proof}
Choose $x, y \in \mathbb{R}^r$ such that $x'=c+\Lambda x, y'=c+\Lambda y$. If row $i$ of $\Lambda$ contains a positive entry, then define $p(i)$ by $\Lambda_{i, p(i)} > 0$, and let $\alpha_i = \Lambda_{i, p(i)}$; otherwise let $p(i)=1$ and $\alpha_i = 0$. Similarly, if row $i$ of $\Lambda$ contains a negative entry, then define $q(i)$ by $\Lambda_{i, q(i)} < 0$, and let $\beta_i = -\Lambda_{i, q(i)}$; otherwise let $q(i) = 1$ and $\beta_i = 0$. Note that for each $i$, $\alpha_i, \beta_i \geq 0$. We get
\[
(\Lambda x)_i = \alpha_i x_{p(i)} - \beta_i x_{q(i)} \qquad \mbox{and} \qquad (\Lambda y)_i = \alpha_i y_{p(i)} - \beta_i y_{q(i)}\,.
\]
We prove Statements~1~and~2. Statements~3~and~4 follow analogously.
{\bf 1.} Let $z = x \wedge y$ so that, by Lemma~\ref{latticefull}, $z' = c+ \Lambda z = x' \curlywedge y'$.
\[
z'_i = c_i + (\Lambda z)_i = c_i + \alpha_i \min\{x_{p(i)}, y_{p(i)}\} - \beta_i \min\{x_{q(i)}, y_{q(i)}\}\,.
\]
So either
\begin{eqnarray*}
z'_i & = & c_i + \alpha_i x_{p(i)} - \beta_i x_{q(i)} = c_i + (\Lambda x)_i \geq t_i, \quad \mbox{or}\\
z'_i & = & c_i + \alpha_i y_{p(i)} - \beta_i y_{q(i)} = c_i + (\Lambda y)_i \geq t_i, \quad \mbox{or}\\
z'_i & = & c_i + \alpha_i x_{p(i)} - \beta_i y_{q(i)} \geq c_i + (\Lambda x)_i \geq t_i\quad\mbox{(because}\quad y_{q(i)} \leq x_{q(i)}), \quad \mbox{or}\\
z'_i & = & c_i + \alpha_i y_{p(i)} - \beta_i x_{q(i)} \geq c_i + (\Lambda y)_i\geq t_i\quad\mbox{(because}\quad x_{q(i)} \leq y_{q(i)}).
\end{eqnarray*}
In every case, $z'_i \geq t_i$, so $z' \geq t$.
{\bf 2.} Let $z = x \vee y$ so that, by Lemma~\ref{latticefull}, $z' = c+ \Lambda z = x' \curlyvee y'$.
\[
z'_i= c_i + (\Lambda z)_i = c_i + \alpha_i \max\{x_{p(i)}, y_{p(i)}\} - \beta_i \max\{x_{q(i)}, y_{q(i)}\}\,.
\]
So either
\begin{eqnarray*}
z'_i & = & c_i + \alpha_i x_{p(i)} - \beta_i x_{q(i)} = c_i + (\Lambda x)_i \geq t_i, \quad \mbox{or}\\
z'_i & = & c_i + \alpha_i y_{p(i)} - \beta_i y_{q(i)} = c_i + (\Lambda y)_i \geq t_i, \quad \mbox{or}\\
z'_i & = & c_i + \alpha_i x_{p(i)} - \beta_i y_{q(i)} \geq c_i + (\Lambda y)_i \geq t_i\quad\mbox{(because}\quad x_{p(i)} \geq y_{p(i)}), \quad \mbox{or}\\
z'_i & = & c_i + \alpha_i y_{p(i)} - \beta_i x_{q(i)} \geq c_i + (\Lambda x)_i\geq t_i\quad\mbox{(because}\quad y_{p(i)} \geq x_{p(i)}).
\end{eqnarray*}
In every case, $z'_i \geq t_i$, so $z' \geq t$.
\end{proof}\\
\begin{corollary}
\label{lat2}
Let $\Lambda$ be an $n \times r$ matrix with rank $r$, and such that each row contains no more than one positive entry and no more than one negative entry. Then, with the partial order defined by $K(\Lambda)$, $\mathcal{C}_{\Lambda, c}$ is a lattice for each $c \in \mathbb{R}^n_{\geq 0}$. Consequently, with assumption A3, each $\Lambda$-class of (\ref{eqmain}) is a lattice.
\end{corollary}
\begin{proof}
This follows from Statements~1~and~2 of Lemma~\ref{lat0} with $t$ as the origin.
\end{proof}
\subsection{Each $\Lambda$-class has an infimum}
We show that including condition A5 ensures the existence of a unique minimal element on each $\Lambda$-class.
\begin{lemma}
\label{minimal}
Consider any $n \times r$ matrix $\Lambda$ of rank $r$, and such that $K(\Lambda) \cap \mathbb{R}^n_{\leq 0} = \{0\}$. Then given any $c \in \mathbb{R}^n_{\geq 0}$, $\mathcal{C}_{\Lambda, c}$ contains a greatest lower bound in the ordering determined by $K(\Lambda)$.
\end{lemma}
\begin{proof}
Consider any chain (i.e., totally ordered subset) $C \subseteq \mathcal{C}_{\Lambda, c}$. Take any decreasing sequence $(x_i) \subseteq C$, i.e., $x_0 \succ x_1 \succ x_2 \succ \cdots$. We want to show that $(x_i)$ is bounded (and hence bounded below). Since the sequence is arbitrary, this will imply that $C$ is bounded below. Suppose, on the contrary, there exists a subsequence $(x_{i_k})$ with $|x_{i_k}|\to \infty$, implying that $|x_{i_k} - x_0|\to \infty$. Consider the bounded sequence $(x_{i_k} - x_{0})/|x_{i_k}|$, and by passing to a subsequence if necessary, assume that it is convergent, i.e., $(x_{i_k} - x_{0})/|x_{i_k}| \to y$. But $(x_{i_k} - x_{0})/|x_{i_k}| = x_{i_k}/|x_{i_k}| - x_0/|x_{i_k}|$, and since $x_{i_k}/|x_{i_k}|$ is nonnegative and of magnitude $1$, and $x_0/|x_{i_k}| \to 0$, it follows that $y$ is nonnegative and of magnitude $1$. On the other hand, $(x_{i_k} - x_{0})/|x_{i_k}| \in -K(\Lambda)$, and since $-K(\Lambda)$ is closed, $y \in -K(\Lambda)$. This implies that $-K(\Lambda)$ includes a nonzero, nonnegative vector, contradicting the assumptions of the theorem. Thus every chain has a lower bound. By Zorn's Lemma, $\mathcal{C}_{\Lambda, c}$ contains a minimal element. Since $\mathcal{C}_{\Lambda, c}$ is a lattice, this minimal element is unique.
\end{proof}
\subsection{With reversibility, $\omega$-limit sets of nontrivial initial conditions lie in $\mathrm{int}(\mathbb{R}^n_{\geq 0})$}
The next claim is that if all reactions are reversible, no limit sets of System (\ref{eqmain}) satisfying conditions A1--A3 on a nontrivial stoichiometry class can intersect $\partial \mathbb{R}^n_{\geq 0}$. In other words, conditions A1--A3 and A6(i) imply condition A6(ii):
\begin{lemma}
\label{lemsiphon}
Consider System (\ref{eqmain}) satisfying conditions A1--A3. Suppose additionally that all reactions are reversible, and let $S$ be some nonempty, proper subset of $\{1, \ldots, n\}$. If for some $c \gg 0$, $\mathcal{C}_{\Gamma,c}$ intersects $\overline{F}_S$, then $F_S$ is repelling.
\end{lemma}
\vspace{0.3cm}
This claim involves a somewhat lengthy digression, and so is proved and illustrated in Appendix~\ref{appsiphon}. Note that the assumption that chemical reactions are reversible means, mathematically, that the rate function $v(x)$ fulfils conditions K1 and K3 in Appendix~\ref{appkinetic}.
\subsection{An increasing first integral, stoichiometry classes are antichains}
In this subsection, make only the following assumptions on $\Gamma$, implied by assumption A3. Assume that $\Gamma = \Lambda \Theta$ where
\begin{enumerate}
\item[C1.] $\Lambda$ is an $n \times r$ matrix with rank $r$.
\item[C2.] $\Theta$ is an $r \times m$ matrix, $\mathrm{ker}(\Theta^T)$ is one dimensional, and there is a unit vector $y_\Theta \in \mathrm{ker}(\Theta^T)$ satisfying $y_\Theta \gg 0$.
\end{enumerate}
Since $\Lambda^T$ has rank $r$, it defines a surjective map from $\mathbb{R}^n$ to $\mathbb{R}^r$, and so we can choose and fix a vector $p_\Theta$ such that $\Lambda^T p_\Theta = y_\Theta$. Note that $p_\Theta \in \mathrm{int}(K(\Lambda)^*)$, the dual cone to $K(\Lambda)$, since for any $z > 0$, $p_\Theta^T\Lambda z = y_\Theta^T z > 0$. Define the linear scalar function $H:\mathbb{R}^n \to \mathbb{R}$ by $H(x) = p_\Theta^Tx$. The next lemma shows that $H$ is increasing with respect to the order defined by $K(\Lambda)$.
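(As an illustration of this construction, computed by us: for the enzymatic futile cycle (\ref{reac3}) of Example~3, with the species ordered $S_1,E,ES_1,S_2,F,FS_2$ consistently with the stoichiometric matrix displayed there, one finds, up to normalisation of $y_\Theta$,
\[
y_\Theta=(1,1,1,1)^T,\qquad p_\Theta=(1,0,1,1,0,1)^T,\qquad H(x)=x_{S_1}+x_{ES_1}+x_{S_2}+x_{FS_2},
\]
the total amount of substrate, free or bound; one checks directly that $\Lambda^Tp_\Theta=y_\Theta$ and $p_\Theta^T\Gamma=0$.)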
\begin{lemma}
\label{inc}
Consider $x, y \in \mathbb{R}^n$ such that $y \succ x$. Then $H(y) > H(x)$.
\end{lemma}
\begin{proof}
Note that $y = x + \Lambda z$ where $z > 0$. Then
\[
H(y)-H(x) = p_\Theta^Ty - p_\Theta^Tx = p_\Theta^T(\Lambda z) = y^T_\Theta z > 0,
\]
where the last inequality follows because $z > 0$, and $y_\Theta \gg 0$.
\end{proof}\\
Next, we prove that restricting attention to a $\Lambda$-class, stoichiometry classes are precisely the level sets of $H$.
\begin{lemma}
\label{Scchar}
For any $x \in \mathbb{R}^n_{\geq 0}$, $\mathcal{C}_{\Gamma,x} = \{y \sim^\Lambda x \,|\, H(y) = H(x)\}$.
\end{lemma}
\begin{proof}
From the definitions, $\mathcal{C}_{\Gamma,x} \subseteq \mathcal{C}_{\Lambda,x}$. Note that $p_\Theta^T\Gamma = p_\Theta^T\Lambda\Theta = y_\Theta^T\Theta = 0$. So given $y \sim^\Gamma x$, and writing $y = x + \Gamma y'$,
\[
H(y) = p_\Theta^Ty = p_\Theta^T(x+\Gamma y') = p_\Theta^Tx = H(x)\,.
\]
So $\mathcal{C}_{\Gamma,x} \subseteq \{y \sim^\Lambda x \,|\, H(y) = H(x)\}$.
On the other hand, consider any $y \sim^\Lambda x$ such that $H(y) = H(x)$. Write $y = x + \Lambda y'$ and note that
\[
0 = H(y) - H(x) = p_\Theta^T(y-x) = p_\Theta^T\Lambda y' = y_\Theta^T y'.
\]
Since $y_\Theta$ spans $\mathrm{ker}(\Theta^T)$ and $\mathrm{Im}(\Theta) = [\mathrm{ker}(\Theta^T)]^{\perp}$, $y' \in \mathrm{Im}(\Theta)$. So $y' = \Theta y''$ for some $y''$, i.e., $y = x + \Lambda \Theta y'' = x + \Gamma y''$. Thus $y \sim^\Gamma x$. So $\mathcal{C}_{\Gamma,x} \supseteq \{y \sim^\Lambda x \,|\, H(y) = H(x)\}$, proving the claim.
\end{proof}\\
\begin{corollary}
\label{unord1}
Each stoichiometry class of System (\ref{eqmain}) with assumption A3 is a $K(\Lambda)$-antichain.
\end{corollary}
\begin{proof}
Note that condition A3 implies conditions C1 and C2. By Lemma~\ref{Scchar}, $x \sim^\Gamma y$ implies $H(x) = H(y)$. On the other hand, by Lemma~\ref{inc}, $H(x) = H(y)$ implies $x \not \prec y$, and it follows that $\mathcal{C}_{\Gamma,y}$ is a $K(\Lambda)$-antichain.
\end{proof}\\
{\bf Notation.} Given the characterisation in Lemma~\ref{Scchar}, when we restrict attention to some $\Lambda$-class $\mathcal{C}_{\Lambda, c}$, we can refer to the stoichiometry class in $\mathcal{C}_{\Lambda, c}$ on which $H(\cdot)$ takes the value $h$ as $\mathcal{C}_{\Lambda, c}^h$, i.e. define $\mathcal{C}_{\Lambda, c}^h = \{y \in \mathcal{C}_{\Lambda, c}\,|\, H(y) = h\}$.
\subsection{$K$-quasipositivity}
{\bf Notation for matrices.} Given any matrix $M$, we refer to the $k$th column of $M$ as $M_k$ and the $k$th row of $M$ as $M^k$. It is convenient to phrase results in terms of ``qualitative classes'' of matrices and related ideas. Given a matrix $M$, the matrix-sets $\mathcal{Q}(M)$, $\mathcal{Q}_0(M)$ and $\mathcal{Q}_1(M)$ are defined in Appendix~\ref{appqualclass}.\\
{\bf Terminology.} Given a CCP cone $K \subseteq \mathbb{R}^n$, and an $n \times n$ matrix $J$, we say that $J$ is {\bf $K$-quasipositive} if there exists $\alpha \in \mathbb{R}$ such that $J + \alpha I \colon K \to K$. If in fact there exists $\alpha \in \mathbb{R}$ such that $J + \alpha I \colon K \to K$ and $J + \alpha I$ is $K$-irreducible (namely, if $F$ is a closed face of $K$ and $J + \alpha I \colon F \to F$, then either $F=\{0\}$ or $F=K$), then we say that $J$ is {\bf strictly $K$-quasipositive}.\\
The following lemma on $K$-quasipositivity of a special class of rank $1$ matrices appears as Theorems~5.3~and~5.4 in \cite{banajidynsys}.
\begin{lemma}
\label{suff2}
Consider a vector $\gamma$, any vector $v \in \mathcal{Q}_0(-\gamma)$ and a CCP cone $K$ with extremals $\{y_i\}$. Define two conditions as follows:
\begin{itemize}
\item[A.] $\gamma = ry_k$ for some $k$, and either (i) $r > 0$ and $y_j \in \mathcal{Q}_1(-\gamma)$ for all $j \not = k$ or (ii) $r < 0$ and $y_j \in \mathcal{Q}_1(\gamma)$ for all $j \not = k$.
\item[B.] There exist $r_1, r_2 > 0$ and $y_i \in \mathcal{Q}_1(\gamma)\backslash\mathcal{Q}_1(-\gamma)$, $y_j \in \mathcal{Q}_1(-\gamma)\backslash\mathcal{Q}_1(\gamma)$ such that $\gamma = r_1y_i - r_2y_j$. Moreover, $y_k \in (\mathcal{Q}_1(\gamma) \cap \mathcal{Q}_1(-\gamma))$ for $k \not \in \{i,j\}$.
\end{itemize}
If either $\gamma = 0$, or A or B holds then $\gamma v^T$ is $K$-quasipositive.
\end{lemma}
\begin{proof}
The case $\gamma = 0$ is trivial. For the remaining cases, see \cite{banajidynsys}. Although cones were assumed in that reference to be proper, the proofs are straightforward calculations which apply for any CCP cone.
\end{proof}\\
An immediate corollary is:
\begin{corollary}
\label{suff3}
Consider a matrix $\Gamma$, any matrix $V \in \mathcal{Q}_0(-\Gamma^T)$ and some CCP cone $K$. If each nonzero column of $\Gamma$ satisfies either condition A or condition B of Lemma~\ref{suff2}, then $\Gamma V$ is $K$-quasipositive.
\end{corollary}
\begin{proof}
$\Gamma V = \sum_i\Gamma_iV^i$, and by Lemma~\ref{suff2} for each $i$ there exists $\alpha_i$ such that $\Gamma_iV^i + \alpha_i I:K \to K$. Clearly, defining $\alpha = \sum_i \alpha_i$, $\Gamma V + \alpha I:K \to K$.
\end{proof}\\
This leads to:
\begin{lemma}
\label{quasipos}
Let $\Lambda$ be an $n \times r$ matrix with rank $r$, and no more than one nonzero entry in each row. Let $\Theta$ be an $r \times m$ matrix such that each column of $\Theta$ contains no more than one positive entry and no more than one negative entry. Let $\Gamma = \Lambda\Theta$, $V \in \mathcal{Q}_0(-\Gamma^T)$. Then $\Gamma V$ is $K(\Lambda)$-quasipositive.
\end{lemma}
\begin{proof}
We need to show that for each $i$, $\Gamma_i$ fulfils the conditions of Lemma~\ref{suff2}. The trivial case $\Theta_i = 0$ implies $\Gamma_i=0$. The reader can easily confirm that if $\Theta_i$ contains a single nonzero entry, then $\Gamma_i$ satisfies condition A, and if $\Theta_i$ contains a positive entry and a negative entry, then $\Gamma_i$ satisfies condition B.
\end{proof}\\
\begin{corollary}
\label{corqp}
Consider System (\ref{eqmain}) with assumptions A1--A3. At each $x \in \mathbb{R}^n_{\geq 0}$, the Jacobian matrix $\Gamma Dv(x)$ is $K(\Lambda)$-quasipositive.
\end{corollary}
\begin{proof}
Assumption A2 implies that $Dv(x)\in \mathcal{Q}_0(-\Gamma^T)$ at each $x \in \mathbb{R}^n_{\geq 0}$. Assumption A3 implies the assumptions of Lemma~\ref{quasipos}, which now gives the result.
\end{proof}
\subsection{Strict $K$-quasipositivity}
The aim in this section is to infer the following for System (\ref{eqmain}) satisfying conditions A1--A3: at any point $x$ where the DSR graph $G(x)$ is strongly connected, the Jacobian $J(x)$ is strictly $K(\Lambda)$-quasipositive (i.e., there exists $\alpha \in \mathbb{R}$ such that $J(x) + \alpha I\colon K(\Lambda) \to K(\Lambda)$ and $J(x) + \alpha I$ is $K(\Lambda)$-irreducible). The following is an immediate consequence of Theorem~1 in \cite{banajiburbanks}.
\begin{lemma}
\label{lemDSR}
Let $K \subseteq \mathbb{R}^n$ be a CCP cone, $A$ an $n \times m$ matrix, and $B'$ an $m \times n$ matrix. Suppose that $\mathrm{Im}\,A \not \subseteq \mathrm{span}\,F$ for any nontrivial face $F$ of $K$, and that $AB$ is $K$-quasipositive for each $B \in \mathcal{Q}_0(B')$. Then, for any $B \in \mathcal{Q}_0(B')$ such that the DSR graph $G_{A, B}$ is strongly connected, $AB$ is strictly $K$-quasipositive.
\end{lemma}\\
We wish to apply this result with $A = \Gamma$, $B' = -\Gamma^T$, and $K = K(\Lambda)$. We have already seen that $AB$ is $K$-quasipositive for all $B\in \mathcal{Q}_0(-\Gamma^T)$, and so we need:
\begin{lemma}
\label{lemnotinspan}
Consider an $n \times r$ matrix $\Lambda$ with rank $r$ and an $r \times m$ matrix $\Theta$ with no row of zeros. Let $\Gamma = \Lambda\Theta$. Then $\mathrm{Im}\,\Gamma \not \subseteq \mathrm{span}\,F$ for any nontrivial face $F\subseteq K(\Lambda)$.
\end{lemma}
\begin{proof}
Since $\mathrm{rank}\,\Lambda = r$, a vector $z \in \mathrm{Im}\,\Lambda$ has a unique representation $z = \sum \alpha_i\Lambda_i$. Consider some nontrivial face $F$ of $K(\Lambda)$, and choose some $k$ such that $\Lambda_k \not \in \overline F$. Since $\Theta$ has no row of zeros, choose some $i(k)$ such that $\Theta_{k,i(k)} = \alpha \not = 0$. Define $y = \Gamma \hat e_{i(k)} = \Lambda\Theta \hat e_{i(k)} = \alpha \Lambda_k + \cdots$. Since this representation of $y$ is unique, clearly $y \not \in \mathrm{span}\,F$ and so $\mathrm{Im}\,\Gamma \not \subseteq \mathrm{span}\,F$.
\end{proof}\\
We can deduce that:
\begin{corollary}
\label{corstrict}
Consider System (\ref{eqmain}) with assumptions A1--A3. Assume that at some $x \in \mathbb{R}^n_{\geq 0}$, the DSR graph $G_{\Gamma, Dv(x)}$ is strongly connected. Then the Jacobian matrix $\Gamma Dv(x)$ is strictly $K(\Lambda)$-quasipositive.
\end{corollary}
\begin{proof}
Condition A3(ii) implies that $\Theta$ has no row of zeros, for if $\Theta_{ij} = 0$ for some $i$ and all $j$, and $z$ is a positive vector in $\mathrm{ker}\,\Theta^T$, which exists by assumption, then $\{z, \hat e_i\}$ are two linearly independent vectors in $\mathrm{ker}\,\Theta^T$, contradicting the fact that $\mathrm{ker}\,\Theta^T$ is one-dimensional. So by Lemma~\ref{lemnotinspan}, $\mathrm{Im}\,\Gamma \not \subseteq \mathrm{span}\,F$ for any nontrivial face $F\subseteq K(\Lambda)$. Since, by Lemma~\ref{quasipos}, $\Gamma Dv(x)$ is $K(\Lambda)$-quasipositive for all $Dv(x)\in \mathcal{Q}_0(-\Gamma^T)$, Lemma~\ref{lemDSR} now implies that $\Gamma Dv(x)$ is strictly $K(\Lambda)$-quasipositive whenever $G_{\Gamma, Dv(x)}$ is strongly connected.
\end{proof}\\
It follows immediately that:
\begin{corollary}
\label{corstrict1}
Consider System (\ref{eqmain}) with assumptions A1--A4. The Jacobian matrix $\Gamma Dv(x)$ is strictly $K(\Lambda)$-quasipositive at each $x \in \mathrm{int}(\mathbb{R}^n_{\geq 0})$.
\end{corollary}
\subsection{Monotonicity and strong monotonicity}
We have shown in Corollaries~\ref{corqp}~and~\ref{corstrict1} that conditions A1--A4 imply that $\Gamma Dv$ is $K(\Lambda)$-quasipositive on all of $\mathbb{R}^n_{\geq 0}$ and strictly $K(\Lambda)$-quasipositive on $\mathrm{int}(\mathbb{R}^n_{\geq 0})$. We now briefly discuss the implications in terms of monotonicity and strong monotonicity of the local semiflow restricted to $\Lambda$-classes. \\
{\bf Notation.} Given $a, b \in \mathbb{R}^n$, define the closed segment $[a, b] = \{\lambda a + (1-\lambda)b\,:\,\lambda \in [0,1]\}$. Open and semi-open segments $(a, b)$, $(a, b]$ and $[a, b)$ are similarly defined.\\
The following is a consequence of results in \cite{hirschsmith} (see also \cite{banajimierczynski}):
\begin{lemma}
\label{hslem}
Consider a proper cone $K \subseteq \mathbb{R}^r$, some open set $U\subseteq \mathbb{R}^r$ , and a $C^1$ vector field $f \colon U \to\mathbb{R}^r$ with Jacobian matrix $Df$. Let $X \subseteq U$ be some convex domain, positively invariant under the local flow $\phi_U$ defined by $f$, and let $\phi$ be the induced local semiflow on $X$. Assume that $Df$ is $K$-quasipositive in $X$. Consider some $x, y \in X$ with $x \preceq y$. Then $\phi_t(x) \preceq \phi_t(y)$ for each $t > 0$ such that $\phi_t(x), \phi_t(y)$ exist. If $x \prec y$ and there exists $z \in [x,y]$ such that $Df(z)$ is strictly $K$-quasipositive, then $\phi_t(x) \llcurly \phi_t(y)$ for each $t > 0$ such that $\phi_t(x), \phi_t(y)$ exist.
\end{lemma}\\
Our interest is in $\Lambda$-classes. Recalling that $\Lambda$ can be regarded as a bijection from $\mathbb{R}^r$ to $\mathrm{Im}(\Lambda)$, mapping $\mathbb{R}^r_{\geq 0}$ to $K(\Lambda)$, Lemma~\ref{hslem} and Corollaries~\ref{corqp}~and~\ref{corstrict1} together imply:
\begin{corollary}
\label{hscor}
Consider System (\ref{eqmain}) with assumptions A1--A4 and $x, y \in \mathbb{R}^n_{\geq 0}$ with $x \preceq y$. Then $\phi_t(x) \preceq \phi_t(y)$ for each $t > 0$ such that $\phi_t(x), \phi_t(y)$ exist. If $x \prec y$ and at least one of $x, y$ is in $\mathrm{int}(\mathbb{R}^n_{\geq 0})$, then $\phi_t(x) \llcurly \phi_t(y)$ (namely, $\phi_t(y) - \phi_t(x) \in \mathrm{relint}\,K(\Lambda)$) for each $t > 0$ such that $\phi_t(x), \phi_t(y)$ exist.
\end{corollary}
\subsection{Structure of the equilibrium set}
Define $E \subseteq \mathbb{R}^n_{\geq 0}$ to be the equilibrium set of (\ref{eqmain}), i.e.
\[
E = \{x \in \mathbb{R}^n_{\geq 0}\,:\, \Gamma v(x) = 0\}.
\]
\begin{lemma}
\label{lemord0}
Consider System (\ref{eqmain}) with assumptions A1--A4 and two distinct equilibria $x, y$ with $x \sim^\Lambda y$ and at least one of $x, y \in \mathrm{int}(\mathbb{R}^n_{\geq 0})$. Then either $x \llcurly y$ or $y \llcurly x$. Consequently no stoichiometry class with an equilibrium in $\mathrm{int}(\mathbb{R}^n_{\geq 0})$ contains more than one equilibrium.
\end{lemma}
\begin{proof}
Assume, by relabelling $x$ and $y$ if necessary, that $x \not \succ y$.
\begin{enumerate}
\item[(i)] Suppose $x \prec y$, but $x \not \llcurly y$. Since at least one of $x$ or $y$ lies in $\mathrm{int}(\mathbb{R}^n_{\geq 0})$, the line segment $[x,y]$ certainly intersects $\mathrm{int}(\mathbb{R}^n_{\geq 0})$. Corollary~\ref{hscor} then implies that for $t > 0$, $x = \phi_t(x) \llcurly \phi_t(y) = y$, a contradiction.
\item[(ii)] Now suppose that $x$ and $y$ are not order related. Then $z \equiv x \curlywedge y$ is different from $x$ and $y$. By monotonicity $\phi_t(z) \preceq \phi_t(x) = x$ and $\phi_t(z) \preceq \phi_t(y) = y$. Since $z \equiv x \curlywedge y$, $\phi_t(z) \preceq z$. But $\phi_t(z) \in \mathcal{C}_{\Gamma,z}$ by invariance of $\mathcal{C}_{\Gamma,z}$, and $\mathcal{C}_{\Gamma,z}$ is an antichain (Corollary~\ref{unord1}), so $\phi_t(z) = z$ for all $t > 0$, i.e. $z \in E$. If $y \in \mathrm{int}(\mathbb{R}^n_{\geq 0})$ (resp. $x \in \mathrm{int}(\mathbb{R}^n_{\geq 0})$), applying the argument in part (i) to $z$ and $y$ (resp. $z$ and $x$) gives a contradiction.
\end{enumerate}
Since stoichiometry classes are subsets of $\Lambda$-classes and are antichains, it follows immediately that no stoichiometry class with an equilibrium in $\mathrm{int}(\mathbb{R}^n_{\geq 0})$ contains more than one equilibrium.
\end{proof}\\
{\bf Remark.} Strong ordering of equilibria followed from a considerably more involved argument in \cite{banajiangeli}. Here the fact that $\Lambda$-classes are lattices makes the conclusion simple. \\
Given $x \sim^\Lambda y$, define
\[
P(x, y) = ((x+K(\Lambda)) \cup (x - K(\Lambda))) \cap \mathcal{C}_{\Gamma,y}\,.
\]
Using the characterisation in Lemma~\ref{Scchar}, we can alternatively write
\[
P(x, y) = ((x+K(\Lambda)) \cup (x - K(\Lambda))) \cap \{w \sim^\Lambda x \,:\, H(w) = H(y)\}.
\]
\begin{lemma}
\label{foliate}
Assume that matrices $\Lambda$, $\Theta$ and $\Gamma$ satisfy Condition A3. Then $P(x, y)$ is nonempty for any $y \sim^\Lambda x$.
\end{lemma}
\begin{proof}
If $H(y) = H(x)$, then by Lemma~\ref{Scchar}, $x \in \mathcal{C}_{\Gamma,y}$ and we are done. Assume that $H(y) > H(x)$ (resp. $H(y) < H(x)$). By the lattice property of $\mathcal{C}_{\Lambda, x}$ (Corollary~\ref{lat2}), $z = x \curlyvee y \in \mathcal{C}_{\Lambda, x} \cap (x+K(\Lambda))$ (resp. $z = x \curlywedge y \in \mathcal{C}_{\Lambda, x} \cap (x-K(\Lambda))$), and by Lemma~\ref{inc} $H(z) \geq H(y) > H(x)$ (resp. $H(z) \leq H(y) < H(x)$). Since $(x+K(\Lambda)) \cap \mathcal{C}_{\Lambda, x}$ (resp. $(x-K(\Lambda)) \cap \mathcal{C}_{\Lambda, x}$) is convex, it includes $[x,z]$. By the intermediate value theorem there exists $w \in [x,z]$ such that $H(w) = H(y)$. By construction $w \in P(x, y)$.
\end{proof}\\
\begin{lemma}
\label{lembounded}
Consider any CCP cone $K \subseteq \mathbb{R}^n_{\geq 0}$ and some vector $p \in \mathrm{int}( K^*)$. Given any point $x \in \mathbb{R}^n$ and any constant $t > 0$, the set $(x + K) \cap \{y\,:\,p^Ty = p^Tx + t\}$ is bounded. Similarly, $(x - K) \cap \{y\,:\,p^Ty = p^Tx - t\}$ is bounded.
\end{lemma}
\begin{proof}
The first statement will be proved; the proof of the second is similar. With fixed $x$ and $p$, define
\[
R = \inf_{y\in K,|y| = 1}p^Ty.
\]
Since $p \in \mathrm{int}(K^*)$, we have $p^Ty > 0$ for every $y \in K\backslash\{0\}$, and so $R > 0$ as the infimum of a positive continuous function on a compact set. Consider any sequence $(y_n)$ in $(x + K) \cap \{y\,:\,p^Ty = p^Tx + t\}$. Since $y_n - x \in K$, we then have $t = p^T(y_n-x) \geq R|y_n-x|$, i.e., $|y_n-x| \leq t/R$, and so $(y_n)$ is bounded.
\end{proof}\\
\begin{lemma}
\label{compconv}
Assume that the matrices $\Lambda$, $\Theta$ and $\Gamma$ satisfy Condition A3. Then $P(x, y)$ is a nonempty, compact, convex set for any $y \sim^\Lambda x$.
\end{lemma}
\begin{proof}
It has been shown in Lemma~\ref{foliate} that $P(x, y)$ is nonempty. Either $H(x) = H(y)$, in which case $x \in \mathcal{C}_{\Gamma,y}$ by Lemma~\ref{Scchar} and $P(x, y) = \{x\}$ (indeed any $w \in P(x,y)$ is comparable with $x$, and both lie in the antichain $\mathcal{C}_{\Gamma,y}$, so $w = x$), which is trivially compact and convex; or $H(x) \neq H(y)$, in which case exactly one of $(x+K(\Lambda))\cap \mathcal{C}_{\Gamma,y}$ and $(x - K(\Lambda)) \cap \mathcal{C}_{\Gamma,y}$ is nonempty (since stoichiometry classes are antichains by Corollary~\ref{unord1}). For definiteness assume that $P(x, y) = (x+K(\Lambda))\cap \mathcal{C}_{\Gamma,y}$ is nonempty (the other case is similar). $P(x, y)$ is thus closed and convex as the intersection of closed, convex sets. Applying Lemma~\ref{lembounded} with $K=K(\Lambda)$ and $p=p_\Theta$, $P(x, y)$ is bounded.
\end{proof}\\
\begin{lemma}
\label{haseq}
Consider System (\ref{eqmain}) satisfying conditions A1--A3. Choose some $e \in E$. Then each stoichiometry class in $\mathcal{C}_{\Lambda,e}$ contains an equilibrium.
\end{lemma}
\begin{proof}
By Lemma~\ref{compconv}, for arbitrary $x \sim^\Lambda e$, the intersections $P(e, x)$ are nonempty, compact, convex sets. Moreover these sets are forward invariant under $\phi$: $((e+K(\Lambda)) \cup (e-K(\Lambda)))$ is forward invariant by monotonicity of $\phi$ (Corollary~\ref{hscor}) and since the stoichiometry class $\mathcal{C}_{\Gamma,x}$ is forward invariant, $P(e, x)$ is forward invariant as the intersection of forward invariant sets. By the Brouwer fixed point theorem (\cite{spanier} for example) $P(e, x)$ contains an equilibrium. Since $x$ was arbitrary, each stoichiometry class in $\mathcal{C}_{\Lambda,e}$ contains an equilibrium.
\end{proof}\\
\begin{lemma}
\label{localhomeo}
Consider System (\ref{eqmain}) satisfying conditions A1--A4 and some $e \in E \cap \mathrm{int}(\mathbb{R}^n_{\geq 0})$. There exists an equilibrium $e_0 \in (e - \mathrm{relint}(K(\Lambda))) \cap \mathrm{int}(\mathbb{R}^n_{\geq 0})$. Given any such equilibrium $e_0$ there exists a homeomorphism
\[
\psi:[H(e_0), H(e)] \to \psi([H(e_0), H(e)]) \subseteq E
\]
such that (i) $H(\psi(h)) = h$, (ii) $\psi([H(e_0), H(e)]) \subseteq \mathrm{int}(\mathbb{R}^n_{\geq 0})$ and (iii) $h_1 < h_2 \Rightarrow \psi(h_1) \llcurly \psi(h_2)$.
\end{lemma}
\begin{proof}
Certainly, $e$ is not a minimal element of $\mathcal{C}_{\Lambda, e}$ since it lies in $\mathrm{int}(\mathbb{R}^n_{\geq 0})$, and moreover $e$ lies in $\mathrm{relint}(\mathcal{C}_{\Gamma,e})$. Define $R$ as in the proof of Lemma~\ref{lembounded}, fixing $K = K(\Lambda)$ and $p = p_\Theta$, namely:
\[
R = \inf_{y\in K,|y| = 1}p_\Theta^Ty.
\]
Let $0 < \delta$ be the minimum distance from $e$ to $\partial \mathbb{R}^n_{\geq 0}$ and choose some positive $\epsilon < \delta R$. Then for any $x \preceq e$ such that $H(e)-H(x) \leq \epsilon$, $P(e, x) \subseteq \mathrm{int}(\mathbb{R}^n_{\geq 0})$: this follows since $|y-e| \leq (H(e)-H(x))/R < \delta$ for $y \in P(e, x)$ (see the proof of Lemma~\ref{lembounded}). By Lemma~\ref{compconv}, $P(e, x)$ is a nonempty, compact convex set, and consequently contains an equilibrium $e_{H(x)}$. Since $e_{H(x)} \in (e - \mathrm{relint}(K(\Lambda))) \cap \mathrm{int}(\mathbb{R}^n_{\geq 0})$, it is, by Lemma~\ref{lemord0}, the unique equilibrium in $\mathcal{C}_{\Gamma,x}$. Defining $e_0 = e_{H(e) - \epsilon}$, for $h \in [H(e_0), H(e)]$, the map $\psi: h \mapsto e_h$ is thus well defined, has image in $\mathrm{int}(\mathbb{R}^n_{\geq 0})$ and is clearly a bijection onto its image. $\psi^{-1}$ is continuous as it is simply the restriction of $H$ to the image of $\psi$.
We next show that $\psi$ is continuous (see also the proof of Lemma~5.12 in \cite{banajiangeli}). Consider any $h \in [H(e_0), H(e)]$, a sequence of values $h_i \in [H(e_0), H(e)]$ with $h_i \to h$, and the corresponding equilibria $e_i = \psi(h_i)$. Since all $e_i$ lie in the order interval $[[e_0, e]] = \{y \in \mathbb{R}^n\,:\, e_0 \preceq y \preceq e\} \subseteq \mathrm{int}(\mathbb{R}^n_{\geq 0})$, which is easily seen to be bounded, the sequence $(e_i)$ is bounded. Consider any convergent subsequence of $(e_i)$, say $e_{i_k} \to \tilde{e}$. By closedness of $E$ and $[[e_0, e]]$, $\tilde{e} \in E \cap [[e_0, e]]$, and by continuity of $H$, $H(\tilde{e}) = h$. Since $\psi(h)$ is the unique equilibrium satisfying these requirements, $\tilde{e} = \psi(h)$. Thus $\psi$ is continuous.
Finally, that $H(\psi(h)) = h$ is immediate from the definition, and that $h_1 < h_2 \Rightarrow \psi(h_1) \llcurly \psi(h_2)$ is immediate from Lemma~\ref{lemord0} since $\mathrm{Im}(\psi)\subseteq \mathrm{int}(\mathbb{R}^n_{\geq 0})$.
\end{proof}\\
{\bf Remark.} We could equally prove the existence of $e_0 \in (e + \mathrm{relint}(K(\Lambda))) \cap \mathrm{int}(\mathbb{R}^n_{\geq 0})$ and a homeomorphism $\psi:[H(e), H(e_0)] \to \psi([H(e), H(e_0)]) \subseteq E$.
\subsection{Local asymptotic stability of equilibria}
We are now in a position to prove the local stability of all positive equilibria for System (\ref{eqmain}) satisfying conditions A1--A4.
{\par{\it Proof of Theorem \ref{mainthm0}}. \ignorespaces}
Consider any positive equilibrium $e$. That $e$ is the unique equilibrium on $\mathcal{C}_{\Gamma,e}$ follows from Lemma~\ref{lemord0}. Choose and fix some equilibrium $e_0 \in (e - \mathrm{relint}(K(\Lambda))) \cap \mathrm{int}(\mathbb{R}^n_{\geq 0})$ as in the proof of Lemma~\ref{localhomeo}. Via Lemma~\ref{localhomeo} define a strictly increasing homeomorphism $\psi:[H(e_0),H(e)]\to \psi([H(e_0),H(e)]) \equiv E_{e_0,e} \subseteq E$ such that $\psi(H(e_0))=e_0$ and $\psi(H(e))=e$. Note that $E_{e_0,e} \subseteq \mathrm{int}(\mathbb{R}^n_{\geq 0})$ and is strictly ordered. Define $U = (e_0 + K(\Lambda)) \cap \mathcal{C}_{\Gamma,e}$. For any point $x \in U\backslash\{e\}$, $e_0 \in x - K(\Lambda)$. On the other hand $e \not \in x - K(\Lambda)$. Since $E_{e_0,e}$ is homeomorphic to a line segment (and hence connected), $E_{e_0,e}$ must intersect $x - \mathrm{relbd}\, K(\Lambda)$. Moreover this intersection is unique, otherwise some pair of distinct equilibria in $E_{e_0,e}$ must fail to be strictly ordered. It is also clear that $(e-\mathrm{relbd}\, K(\Lambda)) \cap E_{e_0,e} = \{e\}$. Thus for all $x \in U$, define $Q(x) \equiv (x-\mathrm{relbd}\, K(\Lambda)) \cap E_{e_0,e}$, and $L(x) \equiv H(Q(x))$. We make three claims (the reader may wish to compare Lemmas~5.16,~5.17~and~5.18 in \cite{banajiangeli}):
\begin{enumerate}
\item $L(e) > L(x)$ for $x \in U\backslash\{e\}$;
\item $Q$, and hence $L$, are continuous on $U$;
\item $L$ increases strictly along nontrivial orbits.
\end{enumerate}
The first statement is immediate since $H(e) > H(z)$ for $z \in E_{e_0,e}\backslash\{e\}$. Since both $x - \mathrm{relbd}\, K(\Lambda)$ and $E_{e_0,e}$ are closed sets, intersecting at a unique point, it is not hard to see that for $(x_i) \subseteq U$ with $x_i \to x$, $Q(x_i) \to Q(x)$ and the second claim follows. Finally, given $x \in U\backslash\{e\}$, by strong monotonicity (Corollary~\ref{hscor}), $\phi_t(x) \ggcurly \phi_t(Q(x)) = Q(x)$ for $t > 0$; if $z \preceq Q(x)$ then $z \llcurly \phi_{t}(x)$, i.e., $z \ne Q(\phi_{t}(x))$ (as $\phi_t(x) - Q(\phi_t(x)) \in \mathrm{relbd}\, K(\Lambda)$). So $Q(\phi_{t}(x)) \succ Q(x)$, and thus $L(\phi_{t}(x)) > L(x)$, proving the third claim.
Thus $L$ serves as a Lyapunov function for $\left. \phi\right|_U$, and, by standard arguments, $e$ is locally asymptotically stable relative to $\mathcal{C}_{\Gamma,e}$. In particular, each $x \in \mathrm{relint}\,U$ is attracted to $e$.
\endproof
\subsection{Global asymptotic stability of equilibria}
For a global result we need to show that the previous local constructions can be extended.
\begin{lemma}
\label{minimal1}
Consider System (\ref{eqmain}) with conditions A1--A4 defining a local semiflow $\phi$. Consider some $c \in \mathrm{int}(\mathbb{R}^n_{\geq 0})$. Suppose $\mathcal{C}_{\Lambda, c}$ contains an infimum $z \preceq \mathcal{C}_{\Lambda, c}$, and that given any $y \in \mathcal{C}_{\Lambda, c} \cap \mathrm{int}(\mathbb{R}^n_{\geq 0})$, $\phi$ has no $\omega$-limit points on $\mathcal{C}_{\Gamma,y} \cap \partial\,\mathbb{R}^n_{\geq 0}$. Then, for each $h \in [H(z), H(c)]$, the stoichiometry class $\mathcal{C}_{\Lambda, c}^h$ contains exactly one equilibrium. For $h \in (H(z), H(c)]$ this equilibrium is in $\mathrm{int}(\mathbb{R}^n_{\geq 0})$.
\end{lemma}
\begin{proof}
Since stoichiometry classes are antichains (Corollary~\ref{unord1}), and $z \preceq \mathcal{C}_{\Lambda, c}$, $\{z\}$ must be an entire stoichiometry class, and consequently (since stoichiometry classes are invariant) $z$ is an equilibrium. Since $\mathcal{C}_{\Lambda, c}$ contains an equilibrium, by Lemma~\ref{haseq}, each stoichiometry class in $\mathcal{C}_{\Lambda, c}$ contains an equilibrium. Since $\mathcal{C}_{\Gamma,c}$ is a nontrivial stoichiometry class, by the assumption on $\omega$-limit sets of $\phi$ all equilibria in $\mathcal{C}_{\Gamma,c}$ lie in $\mathrm{int}(\mathbb{R}^n_{\geq 0})$. By Lemma~\ref{lemord0}, there is in fact a unique equilibrium in $\mathcal{C}_{\Gamma,c}$. In summary, $\mathcal{C}_{\Gamma,c}$ contains a unique equilibrium and this equilibrium lies in $\mathrm{int}(\mathbb{R}^n_{\geq 0})$.
Choose any $h \in (H(z), H(c)]$. By continuity of $H$, there exists $x \in (z, c]$ such that $H(x) = h$, i.e. $x \in \mathcal{C}_{\Lambda, c}^h$. By basic properties of convex sets, $x \in \mathrm{int}(\mathbb{R}^n_{\geq 0})$, and so $\mathcal{C}_{\Gamma, x}$ is nontrivial. Applying to $\mathcal{C}_{\Gamma, x}$ the argument applied to $\mathcal{C}_{\Gamma,c}$, $\mathcal{C}_{\Gamma, x}$ contains exactly one equilibrium, and this equilibrium is in $\mathrm{int}(\mathbb{R}^n_{\geq 0})$.
\end{proof}\\
\begin{lemma}
\label{homeo}
Consider any $c \in \mathrm{int}(\mathbb{R}^n_{\geq 0})$ and the nontrivial $\Lambda$-class $\mathcal{C}_{\Lambda, c}$ with the assumptions of Lemma~\ref{minimal1}. There exists a strictly increasing homeomorphism $\psi:[H(z), H(c)] \to \psi([H(z), H(c)]) \subseteq E$ such that (i) $H(\psi(h)) = h$, (ii) $h_1 < h_2 \Rightarrow \psi(h_1) \llcurly \psi(h_2)$.
\end{lemma}
\begin{proof}
By Lemma~\ref{minimal1} for each $h \in [H(z), H(c)]$, $\mathcal{C}_{\Lambda, c}^h$ contains a unique equilibrium $e_h$ and for $h \in (H(z), H(c)]$, $e_h \in \mathrm{int}(\mathbb{R}^n_{\geq 0})$. Defining $\psi$ by $\psi(h) = e_h$ it is clear that $\psi$ is a bijection. Continuity of $\psi$ and $\psi^{-1}$ now follow as in Lemma~\ref{localhomeo}. That $h_1 < h_2 \Rightarrow \psi(h_1) \llcurly \psi(h_2)$ follows from Lemma~\ref{lemord0}.
\end{proof}\\
We are now finally ready to prove the main global convergence result.
{\par{\it Proof of Theorem \ref{mainthm}}. \ignorespaces}
Consider any $c \in \mathrm{int}(\mathbb{R}^n_{\geq 0})$, the nontrivial stoichiometry class $\mathcal{C}_{\Gamma,c}$ and the nontrivial $\Lambda$-class $\mathcal{C}_{\Lambda, c}$. By Lemma~\ref{minimal}, assumptions A3 and A5 imply that there exists $z \in \mathcal{C}_{\Lambda, c}$ with $z \preceq \mathcal{C}_{\Lambda, c}$. By condition A6, for $y \in \mathrm{int}(\mathbb{R}^n_{\geq 0})$, $\phi$ has no $\omega$-limit points on $\mathcal{C}_{\Gamma,y} \cap \partial\,\mathbb{R}^n_{\geq 0}$. Thus Lemma~\ref{minimal1} applies: for each $h \in [H(z), H(c)]$, $\mathcal{C}_{\Lambda, c}^h$ contains exactly one equilibrium $e_h$ and for $h \in (H(z), H(c)]$, $e_h \in \mathrm{int}(\mathbb{R}^n_{\geq 0})$. Let $e \equiv e_{H(c)}$ be the equilibrium in $\mathcal{C}_{\Gamma,c}$.
By Lemma~\ref{homeo}, the map $\psi:[H(z), H(c)] \to \psi([H(z), H(c)]) \equiv E_{z,c} \subseteq E$ defined by $\psi(h) = e_h$ is a strictly increasing homeomorphism. We now follow the proof of Theorem~\ref{mainthm0} with $e_0 = z$. Since $z \preceq \mathcal{C}_{\Lambda, c}$, $P(z, c) = (z + K(\Lambda)) \cap \mathcal{C}_{\Gamma,c} = \mathcal{C}_{\Gamma,c}$. One immediate consequence, by Lemma~\ref{compconv}, is that $\mathcal{C}_{\Gamma,c}$ is bounded. Thus given any $y \in \mathcal{C}_{\Gamma,c}$, $\omega(y)$ (the $\omega$-limit set of $y$) exists.
For $y \in \mathcal{C}_{\Gamma,c}$ we define $Q(y) = (y-\mathrm{relbd}\,K(\Lambda)) \cap E_{z,c}$ and construct the Lyapunov function $L(\cdot) = H(Q(\cdot))$ defined on all of $\mathcal{C}_{\Gamma,c}$. That $Q$ is well defined, that $L(e) > L(y)$ for $y \in \mathcal{C}_{\Gamma,c}\backslash\{e\}$, and that $Q$ (and hence $L$) is continuous on $\mathcal{C}_{\Gamma,c}$, all follow as in the proof of Theorem~\ref{mainthm0}. Note that $L$ is strictly increasing, namely $L(\phi_t(y)) > L(y)$ for $t>0$, if $y \in (\mathcal{C}_{\Gamma,c}\backslash\{e\}) \cap \mathrm{int}(\mathbb{R}^n_{\geq 0})$, but if $y \in\mathcal{C}_{\Gamma,c} \cap \partial \mathbb{R}^n_{\geq 0}$ we can only claim that $L$ is nondecreasing, namely $L(\phi_t(y)) \geq L(y)$ for $t>0$: it may occur that $Q(y) = z$, in which case the entire segment $[z,y]$ may lie in $\partial \mathbb{R}^n_{\geq 0}$ and we only have $\phi_t(y) \succeq \phi_t(z) = z$. Thus $\omega(y)$ lies in $\{e\} \cup (\mathcal{C}_{\Gamma,c} \cap \partial \mathbb{R}^n_{\geq 0})$. But by condition A6, $\phi$ has no $\omega$-limit points on $\partial \mathbb{R}^n_{\geq 0}$, and so $\omega(y) = \{e\}$. Thus all initial conditions in $\mathcal{C}_{\Gamma,c}$ are attracted to $e$.
\endproof
\section{Conclusions}
A class of CRNs has been described with strong convergence properties under only weak kinetic assumptions. The networks in this class are defined primarily by the existence of a certain factorisation of their Jacobian matrices, and strong connectedness of their DSR graphs. Roughly speaking, the convergence properties of these CRNs spring from the combination of monotonicity of the associated dynamical systems and the existence of integrals of motion.
A natural question is how one can identify systems of reactions which fall into the class described in this paper. Such identification would begin with deciding whether a matrix $\Gamma$ admits a factorisation as in condition~A3: we sketch how an algorithm for this purpose would proceed. The starting point is to identify collinear rows of $\Gamma$. A partition of the row-set of $\Gamma$ into $r$ maximal collinear subsets provides a candidate for a first factor $\Lambda$ of the correct form, i.e., as in condition A3(i). Given such a $\Lambda$, the second factor $\Theta$ is uniquely determined (since $\mathrm{ker}\,\Lambda$ is trivial), and it can be checked whether $\mathrm{ker}\,\Theta^T$ is one-dimensional, and if so, whether it intersects the interior of some orthant $\mathcal{O}$ in $\mathbb{R}^r$. If this is the case, but $\mathcal{O} \neq \mathbb{R}^r_{\geq 0}$, then defining $S$ to be the diagonal matrix with diagonal entries in $\{-1, 1\}$ which maps $\mathcal{O}$ to $\mathbb{R}^r_{\geq 0}$, we examine $(\Lambda S)\,(S\Theta)$, and check whether $S\Theta$ has at most one positive entry and at most one negative entry in each column. If this is the case, then $(\Lambda S)\,(S\Theta)$ now provides the desired factorisation.
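As a rough illustration of how such a check might be automated (this is our own sketch, not part of the results of the paper: the function names, tolerances and numerical shortcuts below are ours, and only the algebraic conditions described in the previous paragraph are tested), one could proceed along the following lines in Python/numpy:
\begin{verbatim}
import numpy as np

def try_factorise(Gamma, tol=1e-10):
    """Attempt to factorise Gamma = Lambda @ Theta as described above:
    Lambda with at most one nonzero entry per row, ker(Theta^T)
    one-dimensional and spanned by a vector with no zero entries, and
    (after a sign change S) each column of Theta with at most one
    positive and at most one negative entry.  Returns (Lambda, Theta)
    if such a factorisation is found, and None otherwise."""
    Gamma = np.asarray(Gamma, dtype=float)
    n, m = Gamma.shape

    # 1. Partition the nonzero rows of Gamma into maximal collinear classes.
    classes = []                               # lists of row indices
    for i, row in enumerate(Gamma):
        if np.linalg.norm(row) < tol:
            continue                           # zero rows of Gamma give zero rows of Lambda
        for idxs in classes:
            rep = Gamma[idxs[0]]
            if np.linalg.matrix_rank(np.vstack([rep, row]), tol=tol) <= 1:
                idxs.append(i)                 # row is a scalar multiple of rep
                break
        else:
            classes.append([i])                # start a new collinear class
    r = len(classes)

    # 2. Candidate factors: Theta takes one representative row per class,
    #    Lambda records the scalar relating each row of Gamma to it.
    Lambda, Theta = np.zeros((n, r)), np.zeros((r, m))
    for k, idxs in enumerate(classes):
        rep = Gamma[idxs[0]]
        Theta[k] = rep
        for i in idxs:
            Lambda[i, k] = (Gamma[i] @ rep) / (rep @ rep)

    # 3. ker(Theta^T) must be one-dimensional ...
    if np.linalg.matrix_rank(Theta, tol=tol) != r - 1:
        return None
    # ... and spanned by a vector with no zero entries, i.e. it must meet
    # the interior of some orthant O of R^r.
    z = np.linalg.svd(Theta.T)[2][-1]          # spans ker(Theta^T) when rank = r-1
    if np.any(np.abs(z) < tol):
        return None

    # 4. Replace (Lambda, Theta) by (Lambda S, S Theta) with S = diag(sign(z)),
    #    so that the null vector of the new Theta^T is positive (S^2 = I,
    #    so the product Gamma is unchanged).
    S = np.diag(np.sign(z))
    Lambda, Theta = Lambda @ S, S @ Theta

    # 5. Each column of Theta: at most one positive and one negative entry.
    for col in Theta.T:
        if np.sum(col > tol) > 1 or np.sum(col < -tol) > 1:
            return None
    return Lambda, Theta
\end{verbatim}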
For the class discussed here, monotonicity is with respect to an order defined by a cone with linearly independent extremal vectors. An interesting question is whether it is possible to extend the theory to more general cones, thus progressing with the program of identifying CRNs with simple behaviour: while several of the proofs here were simplified by the fact that $\Lambda$-classes were lattices, the results in \cite{banajiangeli,banajimierczynski} suggest that this may not be crucial to the geometric argument.
Finally, it was observed that Example 1 fell into a class which can also be proved to be monotone in ``reaction coordinates'' \cite{angelileenheersontag}. An interesting theme for future work is to move towards a synthesis of the approaches to monotonicity of CRNs in normal (``species'') coordinates and in reaction coordinates.
\section*{Acknowledgements}
The research of both authors was supported by EPSRC grant EP/J008826/1 ``Stability and order preservation in chemical reaction networks''.
| {
"timestamp": "2013-04-11T02:02:56",
"yymm": "1211",
"arxiv_id": "1211.2153",
"language": "en",
"url": "https://arxiv.org/abs/1211.2153",
"abstract": "A class of chemical reaction networks is described with the property that each positive equilibrium is locally asymptotically stable relative to its stoichiometry class, an invariant subspace on which it lies. The reaction systems treated are characterised primarily by the existence of a certain factorisation of their stoichiometric matrix, and strong connectedness of an associated graph. Only very mild assumptions are made about the rates of reactions, and in particular, mass action kinetics are not assumed. In many cases, local asymptotic stability can be extended to global asymptotic stability of each positive equilibrium relative to its stoichiometry class. The results are proved via the construction of Lyapunov functions whose existence follows from the fact that the reaction networks define monotone dynamical systems with increasing integrals.",
"subjects": "Dynamical Systems (math.DS); Classical Analysis and ODEs (math.CA)",
"title": "Local and global stability of equilibria for a class of chemical reaction networks",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109507462227,
"lm_q2_score": 0.7185943925708561,
"lm_q1q2_score": 0.7076796269486092
} |
https://arxiv.org/abs/1709.04002 | On the fine structure of the free boundary for the classical obstacle problem | In the classical obstacle problem, the free boundary can be decomposed into "regular" and "singular" points. As shown by Caffarelli in his seminal papers \cite{C77,C98}, regular points consist of smooth hypersurfaces, while singular points are contained in a stratified union of $C^1$ manifolds of varying dimension. In two dimensions, this $C^1$ result has been improved to $C^{1,\alpha}$ by Weiss \cite{W99}. In this paper we prove that, for $n=2$, singular points are locally contained in a $C^2$ curve. In higher dimension $n\ge 3$, we show that the same result holds with $C^{1,1}$ manifolds (or with countably many $C^2$ manifolds), up to the presence of some "anomalous" points of higher codimension. In addition, we prove that the higher dimensional stratum is always contained in a $C^{1,\alpha}$ manifold, thus extending to every dimension the result in \cite{W99}. We note that, in terms of density decay estimates for the contact set, our result is optimal. In addition, for $n\ge3$ we construct examples of very symmetric solutions exhibiting linear spaces of anomalous points, proving that our bound on their Hausdorff dimension is sharp. | \section{Introduction}
The classical obstacle problem consists in studying the regularity of solutions to the minimization problem
$$
\min_{v}\biggl\{\int_{B_1}\frac{|\nabla v|^2}{2}\,:\, v \geq \psi \text{ in $B_1$},\,v|_{\partial B_1}=g\biggr\},
$$
where $g:\partial B_1\to \mathbb{R}$ is some prescribed boundary condition, and the ``obstacle'' $\psi:B_1\to \mathbb{R}$
satisfies $\psi|_{\partial B_1}<g$.
Assuming that $\psi$ is smooth, it is well-known that this problem has a unique solution $v$ of class $C^{1,1}_{\rm loc}$ \cite{BK74}, and that $u:=v-\psi$ satisfies the Euler-Lagrange equation
$$
\Delta u=-\Delta\psi \,\chi_{\{u>0\}} \qquad \text{in $B_1$}.
$$
As already observed in \cite{C77,C98}, in order to prove some regularity results for the \emph{free boundary} $\partial\{u>0\}$ it is necessary to assume that $\Delta\psi<0$.
In addition, as also noticed in \cite{C98,W99,M03,PSU12}, from the point of view of the local structure it suffices to understand the model case $\Delta\psi\equiv -1$.
For this reason, from now on, we shall focus on the problem
\begin{equation}\label{obstaclepb}
\Delta u=\chi_{\{u>0\}},\quad u \geq 0 \qquad \mbox{in } B_1\subset \mathbb{R}^n.
\end{equation}
As shown by Caffarelli in his seminal papers \cite{C77,C98}, points of the {free boundary} $\partial\{u>0\}$ are divided into two classes: regular points and singular points. A free boundary point $x_\circ$ is either regular or singular depending on the type of blow-up of $u$ at that point. More precisely:
\begin{equation}\label{regular}
x_\circ \mbox{ is called \emph{regular} point} \quad \Leftrightarrow \quad \quad r ^{-2} u(x_\circ+ rx) \ \stackrel{r\downarrow 0}\longrightarrow\ \frac 1 2 \max\{\boldsymbol e\cdot x,0\}^2
\end{equation}
for some $\boldsymbol e=\boldsymbol e_{x_\circ}\in \mathbb S^{n-1}$, and
\begin{equation}\label{singular}
x_\circ \mbox{ is called \emph{singular} point} \quad \Leftrightarrow \quad r ^{-2} u(x_\circ+ rx) \ \stackrel{r\downarrow 0}\longrightarrow \ p_{*,x_\circ}(x) :=\frac 1 2 x\cdot A x
\end{equation}
for some symmetric nonnegative definite matrix $A=A_{x_\circ}$ with ${\rm tr}(A)=1$.
The existence of the previous limits in \eqref{regular} and \eqref{singular},
as well as the classification of possible blow-ups are well-known results; see \cite{C98,W99,M03,PSU12}.
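Before proceeding, let us record the standard observation that the blow-up profiles appearing in \eqref{regular} and \eqref{singular} are themselves global solutions of \eqref{obstaclepb}. Indeed, for $u_{\boldsymbol e}(x):=\frac12\max\{\boldsymbol e\cdot x,0\}^2$ we have $u_{\boldsymbol e}\ge 0$ and
\[
\Delta u_{\boldsymbol e} = \chi_{\{\boldsymbol e\cdot x>0\}} = \chi_{\{u_{\boldsymbol e}>0\}},
\]
with contact set the half-space $\{\boldsymbol e\cdot x\le 0\}$; while for $p(x)=\frac12\, x\cdot Ax$ with $A\ge 0$ and ${\rm tr}(A)=1$ we have $p\ge 0$, $\Delta p={\rm tr}(A)=1$, and $\{p=0\}={\rm ker}(A)$ has zero Lebesgue measure, so $\Delta p=\chi_{\{p>0\}}$ almost everywhere.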
By the theory in \cite{KN77,C77} (see also \cite{CR76,CR77,Sak91,Sak93,C98,M00,PSU12}), the free boundary is an analytic hypersurface near regular points. On the other hand, near singular points the \emph{contact set} $\{u=0\}$ forms cusps and can be pretty wild ---see for instance the examples given in \cite{Sch76} and \cite{KN77}. Moreover, as shown in \cite{Sch76}, even $C^\infty$ strictly superharmonic obstacles in the plane ($n=2$) may lead to contact sets with Cantor-set-like structures. In particular, in such examples, the contact set has (locally) an infinite number of connected components, each containing singular points.
Despite these ``negative'' results showing that singular points could be rather bad, it is still possible to prove some nice structure. More precisely, singular points are naturally stratified according to the dimension of the linear space
\[L_{x_\circ} := \{ p_{*,x_\circ} =0 \} = {\rm ker}(A_{x_\circ}).\]
For $m\in \{0,1,2,\dots, n-1\}$ we define the $m$-th stratum as
\[
\Sigma_{m} := \big\{ x_\circ \ : \ \mbox{ singular point with } \dim( L_{x_\circ}) =m\big\}.
\]
As shown by Caffarelli in \cite{C98},
each stratum $\Sigma_m$ is locally contained in an $m$-dimensional manifold of class $C^1$ (see also \cite{M03} for an alternative proof).
This result has been improved in dimension $n=2$ by Weiss \cite{W99}: using an epiperimetric-type approach, he has been able to prove that $\Sigma_1$ is locally contained in a $C^{1,\alpha}$ curve, for some universal exponent $\alpha>0$.
Along the same lines, in a recent paper Colombo, Spolaor, and Velichkov \cite{CSV17} have obtained a logarithmic epiperimetric inequality at singular points in any dimension $n\ge3$, thus improving the known $C^{1}$ regularity to a more quantitative $C^{1,\log^{\epsilon}}$ one.
\\
The aim of this paper is to improve the previous known results by showing that, up to the presence of some ``anomalous'' points of higher codimension, singular points can be covered by $C^{1,1}$ (and in some cases $C^2$) manifolds. As we shall discuss in Remark \ref{rmk:optimal}, this result provides the optimal decay estimate for the contact set. In addition, anomalous points may exist and our bound on their Hausdorff dimension is optimal.
Before stating our result we note that, as a consequence of \cite{C98}, points in $\Sigma_0$ are isolated and $u$ is strictly positive in a neighborhood of them. In particular $u$ solves $\Delta u=1$ in a neighborhood of $\Sigma_0$, hence it is analytic there. Thus, it is enough to understand the structure of $\Sigma_m$ for $m=1,\ldots,n-1$.
Here and in the sequel, ${\rm dim}_{\mathcal H}(E)$ denotes the Hausdorff dimension of a set $E$ (see \eqref{eq:def dim} for a definition).
Our main result is the following:
\begin{theorem}
\label{thm:main}
Let $u \in C^{1,1}(B_1)$ be a solution of \eqref{obstaclepb}, and let $\Sigma:=\cup_{m=0}^{n-1}\Sigma_m$ denote the set of singular points. Then:
\begin{enumerate}
\item[($n=2$)] $\Sigma_1$ is locally contained in a $C^{2}$ curve.
\item[($n\ge 3$)] \begin{enumerate}
\item
The higher dimensional stratum $\Sigma_{n-1}$
can be written as the disjoint union of ``generic points'' $\Sigma_{n-1}^g$ and ``anomalous points'' $\Sigma_{n-1}^a$, where:\\
- $\Sigma^g_{n-1}$ is locally contained in a $C^{1,1}$ $(n-1)$-dimensional manifold;\\
- $\Sigma^a_{n-1}$ is a relatively open subset of $\Sigma_{n-1}$ satisfying ${\rm dim}_{\mathcal H}(\Sigma^a_{n-1})\leq n-3$ (actually, $\Sigma^a_{n-1}$ is discrete when $n=3$).
Furthermore, $\Sigma_{n-1}$ can be locally covered by a $C^{1,\alpha_\circ}$ $(n-1)$-dimensional manifold, for some dimensional exponent $\alpha_\circ>0$.
\item For all $m=1,\ldots,n-2$ we can write $\Sigma_{m}=\Sigma_m^g\cup\Sigma_m^a$,
where:\\
- $\Sigma_m^g$ can be locally covered by a $C^{1,1}$
$m$-dimensional manifold;\\
- $\Sigma^a_m$ is a relatively open subset of $\Sigma_{m}$ satisfying ${\rm dim}_{\mathcal H}(\Sigma^a_m)\leq m-1$ (actually, $\Sigma^a_m$ is discrete when $m=1$).
In addition, $\Sigma_{m}$ can be locally covered by a $C^{1,\log^{\epsilon_\circ}}$ $m$-dimensional manifold, for some dimensional exponent $\epsilon_\circ>0$.
\end{enumerate}
\end{enumerate}
\end{theorem}
\begin{remark}\label{rmk:optimal}
We first discuss the optimality of the above theorem.
\begin{enumerate}
\item
Our $C^{1,1}$ regularity provides the optimal control on the contact set in terms of the density decay. Indeed our result implies that, at all singular points up to a $(n-3)$-dimensional set (in particular at all singular points when $n=2$, and at all singular points up to a discrete set when $n=3$), the following bound holds:
$$
\frac{|\{u=0\}\cap B_r(x_\circ)|}{|B_r(x_\circ)|}\leq Cr\qquad \forall\,r>0
$$
(see Proposition \ref{decayest}, Definition \eqref{eq:def anomalous}, and Lemmas \ref{lem11}, \ref{lem12}, \ref{lem21}, and \ref{lem22}).
In view of the two dimensional Example 1 in \cite[Section 1]{Sch76}, this estimate is optimal.
\item The possible presence of anomalous points comes from different reasons depending on the dimension of the stratum. More precisely, as the reader will see from the proof (see also the description of the strategy of the proof given below), the following holds:
\begin{enumerate}
\item The possible presence of points in $\Sigma_{n-1}^a$ comes from the potential existence, in dimension $n\ge 3,$ of $\lambda$-homogeneous solutions to the Signorini problem with $\lambda\in(2,3)$. Whether this set is empty or not is an interesting open problem.
\item The anomalous points in the strata $\Sigma^a_m$ for $m\le n-2$ come from the possibility that, around a singular point $x_\circ$, the function $(u-p_{*,x_\circ})|_{B_r(x_\circ)}$ behaves as $\varepsilon_r q$, where:\\
- $\varepsilon_r$ is infinitesimal as $r\to 0^+$, but $\varepsilon_r\gg r^\alpha$ for any $\alpha>0$;\\
- $q$ is a nontrivial second order harmonic polynomial.\\
Although this behavior may look strange, it can actually happen and our estimate on the size of $\Sigma_m^a$ is optimal.
Indeed, in the Appendix we construct examples of solutions for which ${\rm dim}(\Sigma_m^a)=m-1$.
\end{enumerate}
\end{enumerate}
\end{remark}
We now make some general comments on Theorem \ref{thm:main}.
\begin{remark}
\label{rmk:main thm}
\begin{enumerate}
\item Our result on the higher dimensional stratum $\Sigma_{n-1}$ extends the result of \cite{W99} to every dimension, and improves it in terms of the regularity. Actually, as shown in Theorem \ref{thm:C2}, for any $m =1,\ldots,n-1$ we can cover $\Sigma_{m}$ with countably many $C^2$ $m$-dimensional manifolds,
up to a set of dimension at most $m-1$.
\item
The last part of the statement in the case $(n\geq 3)$-(b) was recently proved in \cite{CSV17}. Here we reobtain the same result as a simple consequence of our analysis (see the proof of Theorem \ref{thm:main}).
\item
As we shall see,
the higher regularity of the free boundary stated in the previous theorem comes with a higher regularity of the solution $u$ around singular points. More precisely, $\Sigma$ being of class $C^{k,\alpha}$ at some singular point $x_\circ$ corresponds to $u$ being of class $C^{k+1,\alpha}$ at such point.
\item
The fact that $\Sigma_m^a$ is relatively open implies that if $x_\circ\in \Sigma_m^a$ then $B_\rho(x_\circ)\cap \Sigma_m^a=B_\rho(x_\circ)\cap \Sigma_m$ for $\rho>0$ small.
In particular
$\dim_\mathcal{H} \big(B_\rho(x_\circ)\cap \Sigma_m\big)\le m-1
$ ($\leq n-3$ if $m=n-1$).
In other words, the whole stratum $\Sigma_m$ is lower dimensional near anomalous points.
\item[(5)] In \cite{Sak91,Sak93}, Sakai proved very strong structural results for the free boundary in dimension $n=2$. However, his results are very specific to the two dimensional case with analytic right hand side, as they rely on complex analysis techniques. On the other hand, all the results mentioned before \cite{C77,C98,W99,CSV17} are very robust and apply to more general right hand sides. Analogously, our techniques are robust and can be extended to general right hand sides. In addition, our methods can be applied to the study of the regularity of the free boundary in the parabolic case (the so-called Stefan problem), a problem that cannot be studied with complex variable techniques even in dimension two.
\end{enumerate}
\end{remark}
\noindent
{\bf Strategy of the proof of Theorem \ref{thm:main}.}
The idea of the proof is the following:
let $0$ be a singular free boundary point. As shown in \cite{C98,M03}, $u$ is $C^2$ at 0, namely there exists a second order homogeneous polynomial $p_*$, with $D^2p_*\geq 0$ and $\Delta p_*=1$, such that $u(x)=p_*(x)+o(|x|^2)$. In order to obtain our result, our goal is to improve the convergence rate $o(|x|^2)$ to a quantitative bound of the form $O(|x|^{2+\gamma})$ for some $\gamma>0$.
In particular, to obtain $C^{1,1}$ regularity of the singular set we would like to show that $\gamma\geq 1$.
Using monotonicity formulae due to Weiss and Monneau, we are able to prove that the Almgren frequency function is monotone on $w:=u-p_*$ (this result came as a complete surprise to us, as the Almgren frequency formula has never been used in the classical obstacle problem). This allows us to perform blow-ups around $0$ by considering limits of
$$
\tilde w_r(x):=\frac{w(rx)}{\|w(r\,\cdot\,)\|_{L^2(\partial B_1)}}\qquad \text{as $r\to 0$},
$$
and prove that if $\lambda_*$ is the value of the frequency at $0$ then $u(x)=p_*(x)+O(|x|^{\lambda_*})$.
Although it is easy to see that $\lambda_*\geq 2$, it is actually rather delicate ---and sometimes impossible--- to exclude that $\lambda_*=2$ (note that, in such a case, we would get no new information with respect to what was already known). Hence our goal is to understand the possible values of $\lambda_*$.
To this aim, we consider $q$ a limit of $\tilde w_r$ and, exploiting the monotonicity of the frequency, we prove that $q\not \equiv 0$, $q$ is $\lambda_*$-homogeneous, and $q\Delta q\equiv 0$.
Then we distinguish between the two cases $m=n-1$ and $m\leq n-2$. While in the latter case we can prove that $q$ is harmonic (therefore $\lambda_* \in \{2,3,4,\ldots\}$), in the case $m=n-1$ we prove that $q$ is a solution of the so-called ``Signorini problem'' (see for instance \cite{AC04,ACS08}).
In particular, when $n=2$, this allows us to characterize all the possible values of $\lambda_*$ (as all global two-dimensional homogeneous solutions are classified). Still, this does not exclude that $\lambda_*=2$.
As shown in Proposition \ref{propblowup} this can be excluded in the case $m=n-1$, while the examples constructed in the Appendix show that $\lambda_*$ may be equal to $2$ if $m\leq n-2$.
To circumvent this difficulty, a key ingredient in our analysis comes from Equation \eqref{howisD2q} which shows that, whenever
$\lambda_*=2$, some strong relation between $p_*$ and $q$ holds. Thus, our goal becomes to prove that this relation cannot hold at ``too many'' singular points.
In order to estimate the size of the set where $\lambda_*<3$, we first consider the low-dimension cases $n=2$ and $n=3$, and then we develop a Federer-type dimension reduction principle to handle the case $n \geq 4$. Note that the Federer dimension reduction principle is not standard in this setting, the reason being that if $x_0$ and $x_1$ are two different singular points, then the blow-ups at such points come from different functions, namely $u-p_{*,x_0}$ and $u-p_{*,x_1}$. Still, we can prove the validity of a dimension reduction-type principle allowing us to conclude that, at most points, $\lambda_*\geq 3$.
This proves the main part of the theorem.
Then, to show that $\Sigma_{n-1}$ is contained in a $C^{1,\alpha_\circ}$ manifold, we prove that $\lambda_*\geq 2+\alpha_\circ>2$ at all points in $\Sigma_{n-1}$. Also, the $C^{1,\log^{\epsilon_\circ}}$ regularity of $\Sigma_m$ for $m\leq n-2$ comes as a simple consequence of our analysis combined with Caffarelli's asymptotic convexity estimate \cite{C77}.
Finally, the $C^2$ regularity in two dimensions requires a further argument based on a new monotonicity formula of Monneau type.
\\
The paper is organized as follows.
In Section \ref{sect:blow up} we introduce some classical monotonicity quantities, as well as some variants of them that will play a crucial role in our analysis. In particular,
we prove the validity of an Almgren-type monotonicity formula.
Then, given a singular free boundary point $x_\circ$, we investigate the properties of the blow-ups of $u(x_\circ+\cdot)-p_{*,x_\circ}$.
In Section \ref{sect:proof} we continue our analysis of
the possible homogeneities of the blow-ups and show the validity of a Federer-type reduction principle.
These results, combined with
the ones from Section \ref{sect:blow up}, allow us to prove Theorem \ref{thm:main} in dimensions $n\ge3$, as well as the $C^{1,1}$ regularity of $\Sigma_1$ in dimension $n=2$. The proof of the $C^2$ regularity of $\Sigma_1$ for $n=2$ is postponed to Section \ref{sect:C2}.
In the final Appendix we build solutions exhibiting anomalous points that show the sharpness of Theorem \ref{thm:main}(b).
\\
{\it Acknowledgments:} both authors are supported by ERC Grant ``Regularity and Stability in Partial Differential Equations (RSPDE)''.
\section{Notation, monotonicity formulae, and blow-ups}
\label{sect:blow up}
Let us denote
\begin{equation}
\label{eq:def M}
\mathcal M := \big\{ \mbox{symmetric $n\times n$ nonnegative definite matrices $B$ with ${\rm tr}\, B=1$}\big\}
\end{equation}
and
\[\mathcal P := \bigg\{ p(x) = { \frac 1 2} x\cdot Bx\ :\ B\in \mathcal M \bigg\}.\]
Given a singular free boundary point $x_\circ$, we denote
\[ p_{*,x_\circ}(x) = \lim_{r\to 0} r^{-2}u(x_\circ+ rx)\]
(the existence of this limit is guaranteed by \cite{C98}, see also \cite{M03}).
Note that $\Delta p_{*,x_\circ}\equiv 1$, hence $p_{*,x_\circ}\in \mathcal P$. When $x_\circ=0$, we will sometimes simplify the notation to $p_*$.
Throughout the paper we will assume that $u\not\equiv p_*$ in $B_1$, as otherwise Theorem \ref{thm:main} is trivial.
\subsection{Weiss, Monneau, and Almgren frequency formula}
In this section we assume that $x_\circ=0$ is a singular point. The goal of the section is to prove that,
for any given $p\in \mathcal P,$
the Almgren frequency formula
\[
\phi(r,w) :=\frac{r^{2-n}\int_{B_r} |\nabla w|^2}{ r^{1-n}\int_{\partial B_r} w^2},\qquad
w :=
u-p,
\]
is monotone nondecreasing in $r$.
(Note that, since by assumption $u \not\equiv p_*$, we have $w := u-p\not\equiv 0$ for any $p \in \mathcal P$, and hence $\phi(r,w)$ is well defined.)
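Before turning to the proof, we recall the elementary computation behind the name ``frequency'' (a standard fact, included only for orientation): if $v$ is $\lambda$-homogeneous in $B_1$ then
\[
r^{2-n}\int_{B_r}|\nabla v|^2 = r^{2\lambda}\int_{B_1}|\nabla v|^2,
\qquad
r^{1-n}\int_{\partial B_r} v^2 = r^{2\lambda}\int_{\partial B_1} v^2,
\]
so that $\phi(r,v)$ is independent of $r$; if in addition $v$ is harmonic, then $\int_{B_1}|\nabla v|^2 = \int_{\partial B_1} v\,\partial_\nu v = \lambda \int_{\partial B_1} v^2$, whence $\phi(r,v)\equiv \lambda$.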
To this aim, we first recall the definition of the Weiss function
\[
W(r,u) := \frac{1}{r^{n+2}} \int_{B_r} \Bigl(|\nabla u|^2 +2u\Bigr) -\frac{2}{r^{n+3}}\int_{\partial B_r} u^2.
\]
\begin{proposition}[Weiss monotone function \cite{W99}] \label{prop:Weiss}
If $0$ is a singular point then
\[
\frac{d}{dr} W(r,u) \ge 0
\]
and
\[
W(0^+, u) = \frac{\mathcal H^{n-1}(\partial B_1)}{2n(n+2)}=W(r,p)\qquad \forall\,p \in \mathcal P,\,\forall \,r>0.
\]
\end{proposition}
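As a quick sanity check of the normalising constant (an elementary verification, not needed in the sequel): since every $p\in\mathcal P$ is $2$-homogeneous, $W(r,p)$ does not depend on $r$; for instance, in dimension $n=2$ with $p(x)=\tfrac12 x_1^2$ one finds
\[
W(r,p)=\int_{B_1}\bigl(|\nabla p|^2+2p\bigr)-2\int_{\partial B_1}p^2
=\int_{B_1}2x_1^2-2\int_{\partial B_1}\tfrac{x_1^4}{4}
=\frac{\pi}{2}-\frac{3\pi}{8}=\frac{\pi}{8}
=\frac{\mathcal H^{1}(\partial B_1)}{2\cdot2\cdot(2+2)}.
\]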
To prove the monotonicity of $\phi$ we will use several times the following observation:
\begin{remark}\label{remsign} Since $\Delta u = \Delta p =1$ in $\{u>0\}$, we have
\[
w\Delta w =
\begin{cases}
0 &\mbox{in } \{u>0\}
\\
p\Delta p = p\ge 0 \quad &\mbox{in } \{u=0\}.
\end{cases}
\]
A short way to write this is
\begin{equation}\label{wLapw}
w\Delta w = p\chi_{\{u=0\}} \ge 0.
\end{equation}
\end{remark}
We also need the following auxiliary result, that is essentially due to Monneau \cite{M03}.
\begin{lemma}\label{lemWeiss}
Let $0$ be a singular point, $p \in \mathcal P$, and $w:=u-p$. Then
\begin{equation}
\label{eq:mon1}
\frac{1}{r^{n+2}}\int_{B_r} |\nabla w|^2\geq \frac{2}{r^{n+3}}\int_{\partial B_r} w^2
\end{equation}
and
\begin{equation}
\label{eq:mon2}
\frac{1}{r^{n+3}} \int_{\partial B_r} w( x\cdot\nabla w-2w) \ge \frac{1}{r^{n+2}}\int_{B_r} w\Delta w\ge 0
\end{equation}
for all $r>0$.
\end{lemma}
\begin{proof}
Since $W(0^+,u) = W(r,p)$ for all $r>0$ (see Proposition \ref{prop:Weiss}) and $\Delta p\equiv 1$, we have
\[
\begin{split}
0 &\le W(r,u) -W(0^+,u)
=W(r,u) - W(r,p)
\\
&=\frac{1}{r^{n+2}}\int_{B_r} \Bigl(|\nabla w|^2 + 2\nabla w\cdot \nabla p +2w\Bigr) -\frac{2}{r^{n+3}}\int_{\partial B_r} \Bigl(w^2 + 2w p\Bigr)
\\
&= \frac{1}{r^{n+2}}\int_{B_r} |\nabla w|^2 -\frac{2}{r^{n+3}}\int_{\partial B_r} w^2 + \frac{2}{r^{n+3}}\int_{\partial B_r} w(x\cdot\nabla p -2p)
\\
&= \frac{1}{r^{n+2}}\int_{B_r} |\nabla w|^2 -\frac{2}{r^{n+3}}\int_{\partial B_r} w^2 ,
\end{split}
\]
where we used that $p$ is $2$-homogeneous (hence $x\cdot \nabla p=2p$). This proves \eqref{eq:mon1}.
Now, since
$$
\frac{1}{r^{n+2}}\int_{B_r} |\nabla w|^2=
\frac{1}{r^{n+2}}\int_{B_r} - w\Delta w + \frac{1}{r^{n+3}}\int_{\partial B_r} w\,x\cdot \nabla w,
$$
\eqref{eq:mon2} follows from \eqref{eq:mon1}
and \eqref{wLapw}.
\end{proof}
We can now state and prove the monotonicity of the Almgren frequency function.
We remark that the fact that $\phi$ is monotone for all $p \in \mathcal P$ (and not only with $p=p_*$) will be crucial in the proof of Theorem \ref{thm:main}.
\begin{proposition}[Almgren frequency formula]\label{lemAlmgren}
Let $0$ be a singular point, $p \in \mathcal P$, and $w:= u-p$. Then
\[
\frac{d}{dr} \log \phi(r,w) \ge\frac 2 r \frac{
\left(r^{2-n} \int_{B_r} w\Delta w\right)^2 }{ r^{2-n} \int_{B_r}|\nabla w|^2 \ r^{1-n} \int_{\partial B_r} w^2 }\ge 0.
\]
\end{proposition}
\begin{proof}[Proof of Proposition \ref{lemAlmgren}]
Let us introduce the adimensional quantities
\[
D(r) := r^{2-n} \int_{B_r} |\nabla w|^2 = r^2\int_{B_1} |\nabla w|^2(r\,\cdot\,),
\qquad
H(r) := r^{1-n}\int_{\partial B_r} w^2 = \int_{\partial B_1} w^2(r\, \cdot\,),
\]
so that $\phi=D/H$.
By scaling it is enough to compute the derivative of $\phi$ at $r=1$ and prove that it is nonnegative.
Using lower indices to denote partial derivatives (so $w_i=\partial_{x_i}w,$ $w_{ij}=\partial^2_{x_ix_j}w$, etc.),
we have
\begin{equation}\label{zero}
\frac{d}{dr} \log \phi = \frac{D'}{D}- \frac{H'}{H}
\end{equation}
where
\begin{equation}\label{first}
\begin{split}
D'(1) &= \sum_{i,j} \int_{B_1} 2w_i x_j w_{ij} + 2D(1)
\\
&= \sum_{i,j}\int_{\partial B_1}2w_i x_j w_{j} \nu_i -\sum_{i,j} \int_{B_1} 2(w_{i}x_j)_i w_j +2D(1)
\\
&= 2\int_{\partial B_1} w_\nu ^2 - 2\int_{B_1} \Delta w \,(x\cdot \nabla w) - 2\int_{B_1} |\nabla w|^2 +2D(1)
\\
&= 2\int_{\partial B_1} w_\nu ^2 - 2\int_{B_1} \Delta w \,(x\cdot \nabla w)
\\
&= 2\int_{\partial B_1} w_\nu ^2 - 2\int_{B_1\cap\{u=0\}} \,(x\cdot \nabla p)
\\
&= 2\int_{\partial B_1} w_\nu ^2 - 4\int_{B_1\cap\{u=0\}} p.
\end{split}
\end{equation}
Here we used that $x\cdot \nabla w|_{\partial B_1}=w_\nu|_{\partial B_1}$ is the outer normal derivative, $\Delta w=-\chi_{\{u=0\}}$, and $x\cdot \nabla p=2p$ (since $p$ is $2$-homogeneous).
On the other hand, recalling \eqref{wLapw}, we have
\begin{equation}\label{secon}
\int_{\partial B_1}w w_\nu = \int_{B_1} w\Delta w + \int_{B_1}|\nabla w|^2
=\int_{B_1\cap\{u=0\}} p + \int_{B_1}|\nabla w|^2,
\end{equation}
therefore
\begin{equation}\label{third}
\begin{split}
H'(1) = 2\int_{\partial B_1} w w_\nu &= 2\int_{B_1\cap\{u=0\}} p + 2\int_{B_1}|\nabla w|^2.
\end{split}
\end{equation}
Hence, combining \eqref{first}, \eqref{secon}, \eqref{third}, and \eqref{zero}, and denoting
\[I:=\int_{B_1}w\Delta w = \int_{B_1\cap\{u=0\}} p \ge 0,\]
we obtain
\[
\begin{split}
\frac{d}{dr} \log \phi(1,w) &= 2\left(\frac{\int_{\partial B_1} w_\nu^2 -2I}{\int_{B_1} |\nabla w|^2} - \frac{\int_{\partial B_1}ww_\nu}{\int_{\partial B_1} w^2} \right)
\\
&= 2\, \frac{\big(\int_{\partial B_1} w_\nu^2 -2I \big)\int_{\partial B_1} w^2 - \int_{\partial B_1}ww_\nu \big(\int_{\partial B_1}ww_\nu -I\big) }{\int_{B_1} |\nabla w|^2 \int_{\partial B_1} w^2}
\\
&= 2\, \frac{\big\{\int_{\partial B_1} w_\nu^2\int_{\partial B_1} w^2 -\big(\int_{\partial B_1}ww_\nu\big)^2 \big\} + I \int_{\partial B_1}w(w_\nu-2w) }{\int_{B_1} |\nabla w|^2 \int_{\partial B_1} w^2}
\end{split}
\]
Observe that the first term inside the brackets is nonnegative by the Cauchy-Schwarz inequality. Also, recalling \eqref{eq:mon2}, we have that
\[
\int_{\partial B_1} w( w_\nu-2w) \ge \int_{B_1} w\Delta w =I.
\]
Since $I\geq 0$, the result follows.
\end{proof}
Note that, because $r\mapsto\phi(r,w)$ is monotone nondecreasing, it must have a limit as $r\downarrow 0.$
The first observation is that this limit is at least $2$.
\begin{lemma}\label{lemblowup0}
Let $0$ be a singular point, $p\in \mathcal P$, and $w:= u-p$.
Then $\phi(0^+,w) \geq 2$.
\end{lemma}
\begin{proof}
It suffices to observe that \eqref{eq:mon1} is equivalent to $\phi(r,w)\geq 2$ for all $r>0$.
\end{proof}
A first classical consequence of the frequency formula is the following monotonicity formula:
\begin{lemma}\label{Hincreasing}
Let $0$ be a singular point, $p\in \mathcal P$, and $w:= u-p$.
Given $\lambda >0$ denote
\[
H_\lambda(r,w) := \frac{1}{r^{n-1+2\lambda}} \int_{\partial B_r} w^2
\]
Then the function
$r\mapsto H_{\lambda}(r,w)$ is nondecreasing
for all $0\le \lambda \le \phi(0^+,w)$.
\end{lemma}
\begin{proof}
Denoting
\[w_r(x) := (u-p)(rx),\] we have
\[
\frac{H'_{\lambda}} {H_{\lambda}} (r,w) = \frac{ 2r^{-2\lambda} \int_{\partial B_1} w_r(x) \big(x\cdot \nabla w(rx)\big) -2\lambda r^{-2\lambda-1} \int_{\partial B_1} w_r ^2 }{ r^{-2\lambda} \int_{\partial B_1} w_r ^2}.
\]
Using that
\[
r\int_{\partial B_1} w_r(x) \big(x\cdot \nabla w(rx)\big)=\int_{\partial B_1} w_r (x\cdot \nabla w_r) = \int_{B_1} |\nabla w_r|^2 + \int_{B_1} w_r\Delta w_r
\]
and that $w_r\Delta w_r\ge 0$ (recall \eqref{wLapw}), we obtain
\[
\frac{H'_{\lambda}} {H_{\lambda}}(r,w) \ge \frac{2\int_{B_1} |\nabla w_r|^2}{r\int_{\partial B_1} w_r ^2 } - \frac{2\lambda}{r} = \frac{2}{r} \big(\phi(r,w)-\lambda\big).
\]
Since $\phi(r,w)\geq \phi(0^+,w) \geq \lambda$ (by Proposition \ref{lemAlmgren}), the result follows.
\end{proof}
\begin{corollary}[Monneau monotonicity formula \cite{M03}]\label{monneau}
Let $0$ be a singular point and let $H_\lambda$ be as in Lemma \ref{Hincreasing}.
The function $H_2(r, u-p)$ is monotone nondecreasing in $r$, for all $p$ in $\mathcal P$.
\end{corollary}
\begin{proof}
It is a direct consequence of Lemmas \ref{Hincreasing} and \ref{lemblowup0}.
\end{proof}
The following result shows the monotonicity for a modified Weiss function. It is remarkable that the quantity below is monotone for all $\lambda>0$, independently of the value of the frequency.
\begin{lemma}\label{modifieWeiss}
Let $0$ be a singular point, $\lambda\ge 0$, and $w:= u-p$, where $p\in \mathcal P$. Then the function
\[
W_\lambda(r,w) :=r^{-2\lambda}\bigg( r^{2-n} \int_{B_r} |\nabla w|^2 - \lambda\, r^{1-n}\int_{\partial B_r} w^2 \bigg)
\]
is monotone nondecreasing in $r$.
\end{lemma}
\begin{proof}
For $0\le \lambda\le \phi(0^+,w)$ we have $W_\lambda = (\phi -\lambda) H_\lambda$, the product of two nonnegative nondecreasing functions (thanks to Proposition \ref{lemAlmgren} and Lemma \ref{Hincreasing}), hence $W_\lambda$ is nondecreasing.
The result is more interesting for $\lambda >\phi(0^+,w)$ and it requires a different proof. Indeed, using the notation and calculations from the proof of Proposition \ref{lemAlmgren} we have, for $I := \int_{B_1} w\Delta w = \int_{B_1\cap\{u=0\}} p \ge 0$,
\[
\begin{split}
W'_\lambda(1) &= D'(1) - \lambda H'(1) -2\lambda \big(D(1)-\lambda H(1)\big)
\\
& = \bigg(2\int_{\partial B_1} w_\nu ^2- 4 I \bigg) - 2\lambda \int_{\partial B_1}ww_\nu -2\lambda\bigl( D(1)- \lambda H(1)\bigr)
\\
&= \bigg(2\int_{\partial B_1} w_\nu ^2- 4 I \bigg) - 2\lambda \int_{\partial B_1}ww_\nu -2\lambda \bigg(\int_{\partial B_1} ww_\nu - I \bigg) +2\lambda^2 H(1)
\\
&= 2 \bigg( \int_{\partial B_1} w_\nu ^2 - 2\lambda\int_{\partial B_1} ww_\nu + \lambda^2 \int_{\partial B_1} w^2\bigg) + 2(\lambda -2)I
\\
&= 2 \int_{\partial B_1} (w_\nu -\lambda w)^2 + 2(\lambda -2)I.
\end{split}
\]
Since $\lambda > \phi(0^+,w)\geq 2$ (by Lemma \ref{lemblowup0}), the result follows.
\end{proof}
As a consequence of this result we can prove that, given $\lambda>\lambda_*=\phi(0^+,u-p_*)$, the function $H_\lambda$ blows up at 0. This, combined with the monotonicity of $H_{\lambda_*}$ (see Lemma \ref{Hincreasing}), shows that
\begin{equation}
\label{eq:decay w}
r^{2\lambda}\lesssim \average\int_{\partial B_r}w^2 \lesssim r^{2\lambda_*}\qquad \text{ for $r \ll 1$}.
\end{equation}
Note that while this estimate is classical for harmonic functions (since the frequency function is related to the derivative of $H_\lambda$), in our case only an inequality is available (see the proof of Lemma \ref{Hincreasing}) and a different argument is needed.
\begin{corollary}
Let $0$ be a singular point, $w:=u-p_*$, $\lambda_*:=\phi(0^+,w)$, and fix $\lambda>\lambda_*$. Let $H_\lambda$ be as in as in Lemma \ref{Hincreasing}. Then
$$
H_\lambda(r,w)\to +\infty\qquad \text{as $r\downarrow0$.}
$$
\end{corollary}
\begin{proof}
Assume by contradiction that there exists a sequence $r_k\downarrow0$ such that $H_\lambda(r_k,w)\leq C$ for some constant $C$. Then, taking $\mu \in (\lambda_*,\lambda)$, it follows that $H_\mu(r_k,w)\to 0$.
Hence, with the notation of Lemma \ref{modifieWeiss}, this gives (since $W_\mu \geq -\mu \,H_\mu$)
$$
\liminf_{k\to \infty}W_\mu(r_k,w)\geq \liminf_{k\to \infty} -\mu \,H_\mu(r_k,w) =0.
$$
By the monotonicity of $W_\mu$, this implies that $W_\mu(r,w)\geq 0$ for all $r>0$, or equivalently
$$
r^{2-n} \int_{B_r} |\nabla w|^2 \geq \mu\, r^{1-n}\int_{\partial B_r} w^2\qquad \forall\,r>0.
$$
But this means that $\phi(r,w)\geq \mu$ for all $r>0$, a contradiction to the fact that $\mu>\lambda_*$.
\end{proof}
\subsection{Blow-up analysis}
We now start investigating the structure of possible blow-ups.
\begin{proposition}\label{propblowup}
Let $0$ be a singular point, $w:=u-p_*$, and for $r>0$ small define
\[ w_r(x): = w(rx) ,\qquad
\tilde w_r : = \frac{w_r}{\|w_r\|_{L^2(\partial B_1)}}.
\]
Let $ L := \{p_*=0\}$,
and $m\in \{0,1,2,\dots, n-1\}$ be the dimension of $L$. Also, let $\lambda_*:=\phi(0^+,w)$.
Then:
\begin{enumerate}
\item[(a)]
For $0\le m\le n-2$ we have $\lambda_*\in \{2,3,4,5,\dots\}$. Moreover, for every sequence $r_k\downarrow 0$ there is a subsequence $r_{k_\ell}$ such that $\tilde w_{r_{k_\ell}} \rightharpoonup q$ in $W^{1,2}(B_1)$ as $\ell \to \infty$, where $q\not\equiv 0$ is a $\lambda_*$-homogeneous harmonic polynomial.
In addition, if $\lambda_*=2$, then in an appropriate coordinate frame it holds
\begin{equation}\label{howisD2q}
D^2p_*=
\left(
\begin{array}{ccc|c}
\mu_1& & &\\
& \ddots & & 0_m^{n-m}\\
& &\mu_{n-m}&\\
\hline
& 0_{n-m}^m && 0_{m}^{m}
\end{array}
\right)
\quad
\mbox{and}
\quad
D^2 q=
\left(
\begin{array}{ccc|c}
t& & &\\
& \ddots & & 0_m^{n-m}\\
& &t&\\
\hline
& 0_{n-m}^m && -N
\end{array}
\right),
\end{equation}
where $\mu_1,\ldots,\mu_{n-m},t>0$, $\sum_{i=1}^{n-m}\mu_i=1$, and $N$ is a symmetric nonnegative definite $m\times m$ matrix with ${\rm tr }(N) =(n-m)t$.
\item[(b)] For $m=n-1$ we have $\lambda_*\ge 2+\alpha_\circ$, where $\alpha_\circ >0$ is a dimensional constant. Moreover, for every sequence $r_k\downarrow 0$ there is a subsequence $r_{k_\ell}$ such that $\tilde w_{r_{k_\ell}} \rightharpoonup q$ in $W^{1,2}(B_1)$, where $q\not\equiv 0$ is a $\lambda_*$-homogeneous solution of the Signorini problem (with obstacle $0$ on $L$):
\begin{equation}\label{TOP}
\Delta q\le 0\quad \text{and} \quad q\Delta q=0 \quad \mbox{in }\mathbb{R}^n, \quad \Delta q=0 \quad \mbox{in }\mathbb{R}^n\setminus L, \quad\mbox{and}\quad q\ge 0 \quad \mbox{on } L.
\end{equation}
\end{enumerate}
\end{proposition}
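To fix ideas, in the lowest-dimensional instance of case (a) with $\lambda_*=2$, namely $n=3$ and $m=1$, the structure \eqref{howisD2q} amounts to saying that, in suitable coordinates,
\[
p_*(x) = \frac{\mu_1}{2}\,x_1^2 + \frac{\mu_2}{2}\,x_2^2 \quad (\mu_1,\mu_2>0,\ \mu_1+\mu_2=1)
\qquad\mbox{and}\qquad
q(x) = \frac{t}{2}\,\big(x_1^2+x_2^2\big) - t\,x_3^2 \quad (t>0),
\]
so that $q$ is a harmonic quadratic polynomial which is strictly negative on $L=\{x_1=x_2=0\}\setminus\{0\}$.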
To prove Proposition \ref{propblowup}, we need the following auxiliary lemmas:
\begin{lemma}\label{charq}
Let $\tilde w_r$ be as in Proposition \ref{propblowup}
and assume that, for some sequence $r_{k_\ell}\downarrow 0$, it holds $\tilde w_{r_{k_\ell}} \rightharpoonup q$ in $W^{1,2}(B_1)$. Then
\begin{equation}\label{orthogonality}
\int_{\partial B_1} q(p_*-p) \ge 0 \qquad \mbox{for all } p \in \mathcal P.
\end{equation}
\end{lemma}
\begin{proof}
By the definition of $p_*$ it holds that
\[ w_r(x) = (u-p_*)(rx) = o(r^2)\quad \mbox{as } r\downarrow 0.\]
Let us denote $h_r:= \|w_r\| _{L^2{(\partial B_1)}} =o(r^2)$ and $\varepsilon_r: = h_r/r^2 = o(1)$ as $r\downarrow 0$.
Note that, by the compactness of the trace operator $W^{1,2}(B_1)\to L^2(\partial B_1)$, we have $\tilde w_{r_{k_\ell}}=w_{r_{k_\ell}}/h_{r_{k_\ell}}\to q$ in $L^2(\partial B_1)$.
By Corollary \ref{monneau} and the definition of $p_*$, for any fixed $p\in \mathcal P$ we have
\[
\int_{\partial B_1} \bigg(\frac{w_{r}}{r^2} + p_*-p\bigg)^2 = \int_{\partial B_1} \biggl(\frac{u(rx)-p(rx)}{r^2}\biggr)^2 \downarrow \int_{\partial B_1} (p_*-p )^2\qquad \text{as $r \downarrow 0$.}
\]
Hence, since $r^{-2}w_r=\varepsilon_r\tilde w_r$,
\[
\int_{\partial B_1} \big( \varepsilon_r\tilde w_r + p_* -p\big)^2 \ge \int_{\partial B_1} (p_*-p )^2\qquad \forall\,r>0,\,\forall\,p \in \mathcal P.
\]
Developing the squares
and taking $r=r_{k_\ell}$ we get
$$
\varepsilon_{r_{k_\ell}}^2 \int_{\partial B_1} \tilde w_{r_{k_\ell}}^2 + 2 \varepsilon_{r_{k_\ell}}\int_{\partial B_1} \tilde w_{r_{k_\ell}} (p_*-p) \ge 0.
$$
Dividing by $\varepsilon_{r_{k_\ell}}$ and letting $\ell \to \infty$ we obtain \eqref{orthogonality}.
\end{proof}
\begin{lemma}\label{lemmatrix}
Let $p_* \in \mathcal P$,
and assume that $q\not\equiv 0$ is a $2$-homogeneous harmonic polynomial satisfying \eqref{orthogonality}. Then, in an appropriate system of coordinates, \eqref{howisD2q} holds.
\end{lemma}
\begin{proof}
Take $p \in \mathcal P$ and define $A:= D^2 p_* $, $B:= D^2 p$, and $C := D^2q$. Then, since $x\cdot \nabla q=2q$
and $\Delta q=0$, it follows from \eqref{orthogonality} that
\[
\begin{split}
0\le \int_{\partial B_1} q(p_*-p) &= \frac{1}{2} \int_{\partial B_1} q_\nu(p_*-p) = \frac 1 2 \int_{B_1} \nabla q \cdot \nabla (p_*-p)
\\ &= \frac 1 2\int_{B_1} Cx\cdot (A-B) x\, dx = c_n {\rm tr}\big(C(A-B)\big),
\end{split}
\]
for some dimensional constant $c_n>0$. Hence, since $p\in \mathcal P$ was arbitrary, we
deduce that (recall \eqref{eq:def M})
\begin{equation}\label{ineq}
{\rm tr}(CA) \ge {\rm tr}(CB) \quad \mbox{ for all }B \in \mathcal M.
\end{equation}
To show that this implies \eqref{howisD2q},
let $\boldsymbol{v} \in \mathbb S^{n-1}$ be an eigenvector for $C$ corresponding to its largest eigenvalue $\nu_{\max}>0$, and choose $B:=\boldsymbol{v}\otimes \boldsymbol{v}$. Then, since $A\geq 0$ and ${\rm tr} (A)=1$, \eqref{ineq} yields
$$
\nu_{\max}= {\rm tr}(\nu_{\max} {\rm Id} A) \geq {\rm tr}(CA) \geq {\rm tr}(CB)= \nu_{\max}.
$$
Thus
$$
{\rm tr}([\nu_{\max} {\rm Id} - C]A)=0,
$$
and because both $A$ and
$\nu_{\max} {\rm Id} - C$ are symmetric and nonnegative definite, we deduce that their product vanishes, so that ${\rm range}(A)\subset {\rm ker}(\nu_{\max} {\rm Id} - C)$. In particular, if we set $L=\{p_*=0\}={\rm ker}(A)$, then $(\nu_{\max} {\rm Id} - C)|_{L^\perp}\equiv 0$. Thanks to this fact and recalling that ${\rm tr}(C)=0$ (since $q$ is harmonic),
the result follows easily.
\end{proof}
We can now prove Proposition \ref{propblowup}.
\begin{proof}[Proof of Proposition \ref{propblowup}] For the sake of clarity, we divide the proof into several steps.
\smallskip
- {\em Step 1}. We note that $\{\tilde w_r\}$ is precompact. Indeed, by Proposition \ref{lemAlmgren} we have
\[
\int_{\partial B_1} \tilde w_r^2 =1 \quad \mbox{and}\quad \int_{B_1} |\nabla \tilde w_r|^2 = \phi(r) \le \phi(1) < \infty.
\]
This yields uniform bounds $\| \tilde w_r \|_{W^{1,2}(B_1)} \le C$ for all $r\in (0,1)$.
As a consequence, given a sequence $r_k\downarrow 0$ there is a subsequence $r_{k_\ell}\downarrow 0$ such that
\[
\tilde w_{r_{k_\ell}} \rightharpoonup q
\qquad \mbox{in } W^{1,2}(B_1).
\]
In particular, by the compactness of the trace operator $W^{1,2}(B_1)\to L^2(\partial B_1)$, it follows that
\[
\| q\|_{L^2(\partial B_1)} =1.
\]
\smallskip
- {\em Step 2}. We prove (a). So, we assume $m\leq n-2$ and we consider $q$ a possible limit of a converging sequence $\tilde w_{r_{k_\ell}}$. We want to prove that $q$ is a harmonic homogeneous polynomial.
We first show that $q$ is harmonic. Note that
\begin{equation}
\label{eq:sign Delta w}
\Delta w_r(x) = r^2\big(\Delta u(rx) -\Delta p_*(rx)\big) = -r^2\chi_{\{u=0\}} (rx) \le 0,
\end{equation}
hence $\Delta \tilde w_r$ is a nonpositive measure.
Note also that the contact set $\{u(r\,\cdot\,)=0 \}$ converges in the Hausdorff sense to $L=\{p_*=0\}$ as $r\to0$ (this follows from the uniform convergence of $r^{-2}u(rx)$ to $p_*$ as $r\to 0$). This implies that the distributional Laplacian of $q$ is a nonpositive measure supported in $L$. Since $q\in W^{1,2}(B_1)$ (by Step 1) and $L$ has codimension $n-m\ge 2$ (and thus zero harmonic capacity), it follows that $q$ must be harmonic.
Let us prove next that $q$ is homogeneous. To this aim we show that
\begin{equation}\label{Ngoal}
\lambda_* = \phi(R,q) : =\frac{R^{2-n}\int_{\partial B_R} |\nabla q|^2}{R^{1-n}\int_{\partial B_R} q^2}\qquad \forall\,R\in (0,1].
\end{equation}
Indeed, by lower semicontinuity of the Dirichlet integral we have
$$\phi(1,q) \leq \liminf_{\ell\to \infty}\phi(1,\tilde w_{r_{k_\ell}})=\liminf_{\ell\to \infty}\phi(1,w_{r_{k_\ell}})=\liminf_{\ell\to \infty}\phi(r_{k_\ell},w)=\lambda_*.
$$
Also, since $q$ is harmonic, it follows that $R\mapsto \phi(R,q)$ is nondecreasing (this follows from the classical Almgren frequency formula, or equivalently from the proof of Proposition \ref{lemAlmgren}), thus $\phi(R,q)\leq \lambda_*$ for all $R\in (0,1]$.
To show the converse inequality we apply Lemma \ref{Hincreasing} to $\tilde w_{r_{k_\ell}}$
and let $\ell\to \infty$ to obtain
\begin{equation}\label{decayl2}
\frac{1}{\rho^{2\lambda_*}}\average\int_{\partial B_\rho} q^2 \le \average\int_{\partial B_1} q^2 =1.
\end{equation}
But since $q$ is harmonic (so, in particular, $q\Delta q\equiv 0$) we have
\[
\frac{H'_\lambda(R,q) }{H_\lambda(R,q)} = \frac 2R (\phi(R,q) -\lambda)
\]
(this is a classical identity that also follows from the proof of Lemma \ref{Hincreasing}).
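For completeness, here is the computation behind this identity: with $H_\lambda(R,q)=R^{-2\lambda}\average\int_{\partial B_R}q^2$ (the normalization of Lemma \ref{Hincreasing}, up to a harmless dimensional constant), differentiating in $R$ and using the divergence theorem together with $q\Delta q\equiv 0$ gives
\[
\frac{H_\lambda'(R,q)}{H_\lambda(R,q)} \,=\, -\frac{2\lambda}{R} + \frac{2\int_{\partial B_R} q\,q_\nu}{\int_{\partial B_R} q^2}
\,=\, -\frac{2\lambda}{R} + \frac{2\int_{B_R}\big(|\nabla q|^2 + q\,\Delta q\big)}{\int_{\partial B_R} q^2}
\,=\, \frac{2}{R}\,\big(\phi(R,q)-\lambda\big).
\]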
Hence, if we had $\phi(R,q)<\lambda_*$ for some $R\in(0,1)$ then, choosing $\lambda:= \phi(R,q)$, the function $H_\lambda$ would be nonincreasing on $(0,R)$. In particular we would find
\[
\frac{1}{\rho^{2\lambda}}\average\int_{\partial B_\rho} q^2 \ge \frac{1}{R^{2\lambda}}\average\int_{\partial B_R} q^2 >0 \quad \mbox{for $\rho\in (0,R)$},
\]
which contradicts \eqref{decayl2} for $\rho$ small since $\lambda<\lambda_*$.
Hence, we proved \eqref{Ngoal}.
Note that \eqref{Ngoal} says that the Almgren frequency formula $\phi(R,q)$ is constantly equal to $\lambda_*$ for all $R\in (0,1]$. As a classical consequence, $q$ is $\lambda_*$-homogeneous. Hence, since $q$ is harmonic, it follows that $q$ is a $\lambda_*$-homogeneous harmonic polynomial with $\lambda_*\in \{2,3,4,5,\dots\}$ (recall that $\lambda_*\geq 2$, see Lemma \ref{lemblowup0}).
Finally, to complete the proof of (a), it suffices to combine Lemmas \ref{charq} and \ref{lemmatrix} to obtain that \eqref{howisD2q} holds when $\lambda_*=2$.
\smallskip
- {\em Step 3}. We now prove the first part of (b): if $m = n-1$, then $q$ must be a homogeneous solution of the Signorini problem.
Indeed, let $\tilde w_{r_{k_\ell}} \rightarrow q$ in $L^2(B_1)$.
We first show uniform semiconvexity and Lipschitz estimates that are of independent interest and will be useful later on in the paper.
Namely, let us prove the estimate
\begin{equation}\label{semic}
\partial^2_{\boldsymbol e\boldsymbol e} \tilde w_r \ge -C \quad \mbox{in }B_R, \qquad \forall\, \boldsymbol e\in L\cap \mathbb S^{n-1}, \,\forall\,R<1,
\end{equation}
where $C= C(n,R)$ ---in particular $C$ is independent of $r$.
For this, given a vector $\boldsymbol e \in \mathbb S^{n-1}$ and $h>0$, let
\[\delta^2_{\boldsymbol e,h} f:= \frac{f(\,\cdot\,+h\boldsymbol e)+f(\,\cdot\,-h\boldsymbol e)-2f}{h^2}\]
denote a second order incremental quotient. For $\boldsymbol e\in L\cap \mathbb S^{n-1}$ we have $\delta^2_{\boldsymbol e,h} p_*\equiv 0$ (since $p_*$ is constant in the directions of $L$). Thus, since $\Delta u =1$ outside of $\{u=0\}$ and $\Delta u\le 1$ everywhere,
\[
\Delta \big(\delta^2_{\boldsymbol e,h} w_r \big)
= r^2\,\frac{\Delta u\big(r (\,\cdot\,+h\boldsymbol e)\big)+\Delta u\big(r (\,\cdot\,-h\boldsymbol e)\big) - 2\Delta u\big(r (\,\cdot\,)\big) }{h^2}
\le 0 \hspace{2mm}\mbox{ in } B_1\setminus \{u(r\,\cdot\,)=0\}.
\]
On the other hand, since $u\ge 0$ we have
\[
\delta^2_{\boldsymbol e,h} w_r = \delta^2_{\boldsymbol e,h} u(r\,\cdot\,) \ge 0 \quad \mbox{ in } \{u(r\,\cdot \,)=0\}.
\]
As a consequence, the negative part of the second order incremental quotient $(\delta^2_{\boldsymbol e,h} \tilde w_r)_-$ is a (nonnegative) subharmonic function, and so is its limit $(\partial_{\boldsymbol e\boldsymbol e}^2\tilde w_r)_-$
(recall that $u\in C^{1,1}$, hence $\delta^2_{\boldsymbol e,h} \tilde w_r\to \partial_{\boldsymbol e\boldsymbol e}^2\tilde w_r$ a.e. as $h \to 0$).
Therefore, given any radius $R' \in (R,1)$, by the weak Harnack inequality (see for instance \cite[Theorem 4.8(2)]{CC95}) there exists $\epsilon=\epsilon(n) \in (0,1)$ such that
$$
\|(\partial_{\boldsymbol e\boldsymbol e}^2\tilde w_r)_- \|_{L^\infty(B_R)} \le C(n,R,R') \biggl(\int_{B_{R'}} (\partial_{\boldsymbol e\boldsymbol e}\tilde w_r)^\epsilon_- \biggr)^{1/\epsilon}
\leq C(n,R,R') \biggl(\int_{B_{R'}} |\partial_{\boldsymbol e\boldsymbol e}\tilde w_r|^\epsilon \biggr)^{1/\epsilon}.
$$
Also, by standard interpolation inequalities, the $L^\epsilon$ norm (here we use $\epsilon<1$) can be controlled by the weak $L^1$ norm, namely
$$
\biggl(\int_{B_{R'}} |\partial_{\boldsymbol e\boldsymbol e}\tilde w_r|^\epsilon \biggr)^{1/\epsilon} \leq C(n,R')\,\sup_{t>0}t\bigl|\bigl\{|\partial_{\boldsymbol e\boldsymbol e}\tilde w_r| >t\bigr\}\cap B_{R'}\bigr|.
$$
Furthermore, by Calderon-Zygmund theory (see for instance \cite[Equation (9.30)]{GT01}), the right hand side above is controlled by
$\|\Delta \tilde w_r\|_{L^1(B_{R''})}+
\|\tilde w_r\|_{L^1(B_{R''})}$, with $R'' \in (R',1)$.
Finally, since $\Delta \tilde w_r \leq 0$, $\|\Delta \tilde w_r\|_{L^1(B_{R''})}$ is controlled by the $L^1$ norm of $\tilde w_r$ inside $B_1$: indeed, if $\chi$ is a smooth nonnegative cut-off function that is equal to $1$ in $B_{R''}$ and vanishes outside $B_1$, then
\begin{equation}
\label{eq:control Delta w}
\|\Delta \tilde w_r\|_{L^1(B_{R''})}\leq -\int_{B_1}\chi\,\Delta \tilde w_r =
-\int_{B_1}\Delta \chi\, \tilde w_r\leq C(n,R'')\int_{B_1}|\tilde w_r|.
\end{equation}
In conclusion, choosing $R'=\frac{2R+1}{3}$ and $R''=\frac{R+2}{3}$ we obtain
$$
\|(\partial_{\boldsymbol e\boldsymbol e}\tilde w_r)_- \|_{L^\infty(B_R)}\leq C(n,R)
\|\tilde w_r\|_{L^1(B_{1})} \leq C(n,R)
$$
(recall that $\tilde w_r$ is uniformly bounded in $W^{1,2}(B_1)\subset L^1(B_1)$, see Step 1),
which proves \eqref{semic}.
Note that, as a consequence of \eqref{semic}, the tangential part of the Laplacian of $\tilde w_r$ (that is, the sum of the pure second derivatives along $L$) is uniformly bounded from below.
Since $\Delta \tilde w_r \le 0$ everywhere and $L$ is $(n-1)$-dimensional, this implies a uniform semiconcavity estimate in the direction orthogonal to $L$, namely
\[
\partial^2_{\boldsymbol e'\boldsymbol e'} \tilde w_r \le C \qquad \mbox{in } B_R ,\quad \mbox{ for } \boldsymbol e'\in L^\perp \mbox{ with } |\boldsymbol e'|=1,
\]
where, as before, $R<1$ and $C= C(n,R)$.
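Indeed, fix an orthonormal basis $\{\boldsymbol e_1,\dots,\boldsymbol e_{n-1}\}$ of $L$ and a unit vector $\boldsymbol e'\in L^\perp$. Writing the Laplacian in the frame $\{\boldsymbol e_1,\dots,\boldsymbol e_{n-1},\boldsymbol e'\}$ and using \eqref{semic} together with $\Delta \tilde w_r\le 0$, we get
\[
\partial^2_{\boldsymbol e'\boldsymbol e'} \tilde w_r \,=\, \Delta \tilde w_r - \sum_{i=1}^{n-1}\partial^2_{\boldsymbol e_i\boldsymbol e_i}\tilde w_r \,\le\, (n-1)\,C(n,R) \qquad \mbox{a.e. in } B_R
\]
(recall that $u\in C^{1,1}$, so the pure second derivatives above exist almost everywhere).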
Thanks to the previous semiconvexity and semiconcavity estimates, we deduce in particular a uniform Lipschitz bound:
\begin{equation}
\label{eq:wk Lip}
|\nabla \tilde w_r| \le C(n,R) \qquad \mbox{in } B_R\quad \forall\,R<1.
\end{equation}
Hence, the convergence $\tilde w_{r_{k_\ell}}\rightarrow q$ holds also locally uniformly inside $B_1$.
Now, recall that by Proposition \ref{lemAlmgren} we have
\begin{equation}
\label{eq:Sign}
\begin{split}
r\phi'(r,w) &\ge \phi(r,w)\, \frac{ \left(r^{2-n} \int_{B_r} w\Delta w\right)^2 }{ r^{2-n} \int_{B_r}|\nabla w|^2 \ r^{1-n} \int_{\partial B_r} w^2 }\\
& = \left(\frac{r^{2-n} \int_{B_r} w\Delta w }{ r^{1-n} \int_{\partial B_r} w^2 } \right)^2 = \left(\int_{B_1} \tilde w_r\Delta\tilde w_r\right)^2.
\end{split}
\end{equation}
Since
$$
\average\int_{r_{k_\ell}}^{2r_{k_\ell}} r \phi'(r,w)\,dr \leq 2\int_{r_{k_\ell}}^{2r_{k_\ell}} \phi'(r,w)\,dr=2\bigl(\phi(2r_{k_\ell},w)-\phi(r_{k_\ell},w)\bigr) \to 0 \qquad \text{as $\ell \to \infty$}
$$
(because $\phi(r,w)\to \lambda_*$ as $r\to 0$),
using the mean value theorem we may choose $\bar r_{k_\ell}\in [r_{k_\ell}, 2r_{k_\ell}] $ such that $\bar r_{k_\ell} \phi'(\bar r_{k_\ell},w)\rightarrow 0$ as $\ell \to \infty$.
Hence, thanks to \eqref{eq:Sign} and \eqref{wLapw}, we deduce that, for $\rho_\ell := \bar r_{k_\ell}/ r_{k_\ell} \in [1,2]$,
\[\int_{B_1} \tilde w_{r_{k_\ell}}\Delta\tilde w_{r_{k_\ell}}\leq
\int_{B_{\rho_\ell}} \tilde w_{r_{k_\ell}}\Delta\tilde w_{r_{k_\ell}} \rightarrow 0.
\]
Since $\Delta\tilde w_{r_{k_\ell}} \rightarrow \Delta q$ weakly$^*$ as measures inside $B_1$, $\tilde w_{r_{k_\ell}} \rightarrow q$ strongly in $C^0_{\rm loc}(B_1)$,
and $\tilde w_r\Delta \tilde w_r\geq 0$, we obtain
\[
\int_{B_{R}} q \Delta q =0\qquad \forall\,R<1,
\]
therefore, letting $R\uparrow 1$, $q\Delta q\equiv 0$ inside $B_1$.
Now, since
\begin{itemize}
\item $\Delta w_r \le 0$ is supported on $\{u(r\,\cdot\,)=0 \}$, which converges to $L$ as $r\downarrow 0$
\item $w_r = (u-p_*)( r\,\cdot\, ) = u(r\,\cdot\,)\ge 0$ on $L$
\item $\tilde w_{r_{k_\ell}}\to q$ locally uniformly
\end{itemize}
in the limit we obtain that $\Delta q\le 0$, $\Delta q = 0$ outside of $L$, and $q\ge 0$ on $L$.
This proves that $q\in W^{1,2}(B_1)$ is a solution of the thin obstacle problem \eqref{TOP} inside $B_1$.
The same argument as the one used
in Step 2 for case (a) (which only used that $q\Delta q\equiv 0$) shows that $q$ is $\lambda_*$-homogeneous inside $B_1$. In particular we can extend $q$ by homogeneity to the whole space, and $q$ satisfies \eqref{TOP} in $\mathbb{R}^n$.
\smallskip
- {\em Step 4}. We conclude the proof of (b) by showing that $\lambda_*\ge 2+\alpha_\circ$ for some dimensional constant $\alpha_\circ>0$.
We argue by compactness.
Observe that any blow-up $q$ satisfies
\begin{equation}\label{properties}
x\cdot\nabla q = \lambda_* q, \quad \int_{\partial B_1}q^2 =1, \quad \Delta q\le 0, \quad q\Delta q=0,\quad q\ge 0 \mbox{ on }L, \quad q(0)=0.
\end{equation}
Also, by Lemma \ref{charq} we have that \eqref{orthogonality} holds.
Now, if we had a sequence of functions $q^{(k)}$ satisfying \eqref{properties} with $\lambda_*^{(k)} \downarrow 2$, then we would find some limiting function $q^{(\infty)}$ satisfying
\eqref{properties} with $\lambda_*^{(\infty)}=2$ and \eqref{orthogonality}.
Then $q^{(\infty)}$ would be a $2$-homogeneous solution of the thin obstacle problem and hence a quadratic harmonic polynomial (see for instance \cite[Lemma 1.3.4]{GP09}).
Thus, applying Lemma \ref{lemmatrix} with $m=n-1$ we find that, in an appropriate coordinate system,
\[
D^2p_*=
\renewcommand\arraystretch{1.5}
\left(
\begin{array}{c|c}
1 &0_{n-1}^1
\\
\hline
0^{n-1}_1 & 0_{n-1}^{n-1}
\end{array}
\right)
\quad
\mbox{and}
\quad
D^2 q^{(\infty)}=
\renewcommand\arraystretch{1.5}
\left(
\begin{array}{c|c}
t &0_{n-1}^1
\\
\hline
0^{n-1}_1 & -N
\end{array}
\right)
\]
where $N\ge 0$ with ${\rm tr}(N)=t>0$ (since $\|q^{(\infty)}\|_{L^2(\partial B_1)}=1$). However, since $q^{(\infty)}(0)=0$ and $q^{(\infty)}\ge 0$ on $L=\{p_*=0\} = {\rm ker}(D^2p_*) $, we must have $-N\ge 0$, a contradiction.
\end{proof}
We conclude this section with an interesting observation: the gap between the value of the frequency and $2$ controls the decay of the measure of the contact set (recall that $\phi(0^+,w)\geq 2$, see Lemma \ref{lemblowup0}).
\begin{proposition}\label{decayest}
Let $0$ be a singular point, $w:=u-p_*$,
and $\lambda_*:=\phi(0^+,w).$
Then
$$
\frac{|\{u=0\}\cap B_r|}{|B_r|}\leq Cr^{\lambda_*-2}\qquad \forall\,r>0.
$$
In addition, the constant $C>0$ can be chosen uniformly at all singular points in a neighborhood of $0$.
\end{proposition}
\begin{proof}
Let
$w_r$ and $\tilde w_r$ be defined as in the statement of Proposition \ref{propblowup}.
Since $\tilde w_r$ is bounded in $W^{1,2}(B_1)$ (see Step 1 in the proof of Proposition \ref{propblowup}) and $\Delta w_{r}\leq 0$ (see \eqref{eq:sign Delta w}), we can bound the mass of $\Delta \tilde w_r$ inside $B_{1/2}$ by considering a smooth nonnegative cut-off function $\chi$ that is equal to $1$ in $B_{1/2}$ and vanishes outside $B_1$, and then arguing as in \eqref{eq:control Delta w}.
In this way we get
$$
\int_{B_{1/2}}|\Delta \tilde w_r|\leq C\|\tilde w_r\|_{L^1(B_1)}\leq C.
$$
But since
$$
|\Delta \tilde w_r|=r^2\frac{\chi_{\{u(r\,\cdot\,)=0\}}}{\|w_r\|_{L^2(\partial B_1)}}
$$
and $\|w_r\|_{L^2(\partial B_1)}\leq Cr^{\lambda_*}$ (see \eqref{eq:decay w}),
we conclude that
$$
r^{2-\lambda_*}\frac{|\{u=0\}\cap B_{r/2}|}{|B_{r/2}|}=r^{2-\lambda_*}\frac{|\{u(r\,\cdot\,)=0\}\cap B_{1/2}|}{|B_{1/2}|}\leq C,
$$
as desired.
\end{proof}
Note that the density bound is actually stronger around points corresponding to lower dimensional strata $\{\Sigma_m\}_{1\leq m \leq n-2}$. Indeed, since $\Delta \tilde w_r \leq 0$ and any limit of $\tilde w_r$ is harmonic (see Proposition \ref{propblowup}(a)), it follows that
$\int_{B_{1/2}}|\Delta \tilde w_r| \to 0$
as $r\to 0$,
so in this case the constant $C$ appearing in the statement can be replaced by $o(1)$.
\begin{remark}
In the case when $0 \in \Sigma_{n-1}$, we can actually prove a stronger estimate, namely that $\{u=0\}\cap B_r$ is contained in a $Cr^{\lambda_*-1}$-neighborhood of $L=\{p_*=0\}$.
To show this, note that \eqref{eq:wk Lip}
implies that
\begin{equation} \label{Lipchitzest}
|\nabla \tilde w_r|\leq C\qquad \text{in $B_{1/2}$},\quad \forall\,r>0,
\end{equation}
or equivalently
\begin{equation}
\label{eq:grad u p}
|\nabla u(x)-\nabla p_*(x)|\leq C\frac{\|w_r\|_{L^2(\partial B_1)}}{r}
\qquad \forall\, x\in B_{r/2}.
\end{equation}
Observe now that $\nabla u= 0$ on $\{u=0\}$ (since $u\in C^{1,1}$ and $u\geq 0$). Also, since $p_*(x)=\frac12 (\boldsymbol e\cdot x)^2$ for some $\boldsymbol e \in \mathbb S^{n-1}$,
$$
|\nabla p_*(x)|=\,{\rm dist}(x,L)\qquad \forall\,x \in \mathbb{R}^n.
$$
Hence, it follows by \eqref{eq:grad u p} that
\[
{\rm dist}(x,L)\leq C\frac{\|w_r\|_{L^2(\partial B_1)}}{r}
\qquad \forall\, x\in B_{r/2}\cap \{u=0\}.
\]
Since $\|w_r\|_{L^2(\partial B_1)} \leq Cr^{\lambda_*}$ (see \eqref{eq:decay w}),
we conclude that
\begin{equation}
\label{eq:dist L}
{\rm dist}(x,L)\leq Cr^{\lambda_*-1}
\qquad \forall\, x\in B_{r/2}\cap \{u=0\}.
\end{equation}
\end{remark}
\section{Proof of Theorem \ref{thm:main}} \label{sect:proof}
In this section we prove Theorem \ref{thm:main}. This will require a fine analysis of the possible values of the frequency at singular points.
We begin with the simple case $n=2$.
\begin{lemma}\label{possiblefreq}
Let $n=2$ and $0$ be a singular point in $\Sigma_1$. Then $\lambda_* := \phi(0^+, u-p_*)$ belongs to the set
\[
\{3, 4,5,6,\ldots\} \cup { \textstyle \big\{\frac 7 2, \frac{11}{2}, \frac {15}{ 2}, \frac {19}{ 2}, \ldots\big\} }.
\]
In particular $\alpha_\circ \geq 1$ (here $\alpha_\circ$ is as in Proposition \ref{propblowup}(b)).
\end{lemma}
\begin{proof}
From Proposition \ref{propblowup}(b) we have that any possible blow-up $q$ is a $\lambda_*$-homogeneous solution of the Signorini problem in two dimensions with obstacle $0$ on $L$ and $\lambda_*\ge 2+\alpha_\circ>2$. In dimension two, homogeneous solutions to the Signorini problem that are symmetric with respect to $L$ are completely classified via a standard argument by separation of variables, and the set of their possible homogeneities is
\[
\{1,2,3,4,5\ldots\} \cup { \textstyle \big\{\frac 3 2, \frac 7 2, \frac{11}{2}, \frac {15}{ 2}, \frac {19}{ 2}, \ldots\big\} }
\]
(see for instance \cite{FS17}).
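For instance, the half-integer homogeneities in this list are attained by the explicit symmetric solutions (written in coordinates $(x_1,x_2)$ with $L=\{x_2=0\}$)
\[
q_k(x_1,x_2) \,=\, {\rm Re}\,\big( x_1 + i\,|x_2| \big)^{2k+\frac32},\qquad k=0,1,2,\dots,
\]
which are harmonic outside $L$, positive on $\{x_1>0,\,x_2=0\}$, vanish on $\{x_1\le 0,\,x_2=0\}$ (where their distributional Laplacian is a nonpositive measure), and have homogeneities $\frac32,\frac72,\frac{11}{2},\dots$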
In our case $q$ may also have an odd part. However, the odd part is easily seen to be harmonic, hence its possible homogeneity belongs to the set
\[
\{1,2,3,4,5,\ldots\} .
\]
In conclusion
$$
\lambda_* \in \{1,2,3, 4,5,\ldots\} \cup { \textstyle \big\{\frac 3 2,\frac 7 2, \frac{11}{2}, \frac {15}{ 2}, \frac {19}{ 2}, \ldots\big\} }.
$$
Since $\lambda_*>2$, the lemma follows.
\end{proof}
As explained in the introduction, our main goal is to prove that the set of points with frequency less than $3$ is small. For this, we need to understand what happens when too many singular points accumulate around another singular point. This is the purpose of the next two lemmata: the first concerns the case $m\leq n-2$, and the second deals with the case $m=n-1$.
\begin{lemma}\label{codge2}
Let $n\ge 3$ and suppose that $0$ is a singular point. Assume that $m := \dim(L) \le n-2$, and that
there is a sequence of singular points $x_k\to 0$ and radii $r_k\downarrow 0$ with $|x_k|\le r_k/2$ such that
\[
\tilde w_{r_k} := \frac{(u-p_*)(r_k \,\cdot\, )}{\|(u-p_*)(r_k\,\cdot\,)\|_{L^2(\partial B_1)}} \rightharpoonup q
\qquad \text{in $W^{1,2}(B_1)$,}
\]
and $y_k :=\frac{x_k}{r_k} \to y_\infty$.
Then $y_\infty\in L$ and $q(y_\infty) =0.$
\end{lemma}
\begin{proof}
Since $(u-p_*)(r\,\cdot\,) = o(r^2)$ and $u(r_ky_k)=u(x_k)=0$, it follows that $p_*( r_ky_k) = r^2_kp_*(y_k) = o(r_k^2)$ as $r_k\to 0$, therefore $p_*(y_\infty)=0$. This proves that $y_\infty\in L = \{p_*=0\}$. We now prove that $q(y_\infty)=0$.
Note that, since $q$ is homogeneous (see Proposition \ref{propblowup}), if $y_\infty=0$ then the result is trivial. So we can assume that $|y_\infty|>0$.
We now use that $x_k$ is a singular point for $u$. Thanks to Lemma \ref{lemblowup0}
applied at $x_k$ with $p=p_{*}$, we know that the frequency of $u(x_k+\,\cdot\,)-p_{*}$ is at least $2$,
therefore
\[\phi\big(1/2, u(r_k(y_k +\,\cdot\,)) -p_*(r_k\,\cdot\,) \big) \ge 2.
\]
(Note that here $p_{*}$ is the quadratic polynomial of $u$ at $0$, not at $x_k$!)
Equivalently, recalling the definition of $\tilde w_{r_k}$, we have
\begin{equation}\label{freqge2}
2\le \frac{1}{2}\frac{\int_{B_{1/2}}\big |\nabla \tilde w_{r_k}(y_k +\cdot ) + h_{r_k} ^{-1} \nabla \big(p_*(r_k y_k+r_k \,\cdot\, )- p_*(r_k \,\cdot\, )\big )\big|^2}
{\int_{\partial B_{1/2}}\big |\tilde w_{r_k}(y_k +\cdot ) + h_{r_k}^{-1}\big(p_*(r_k y_k+r_k \,\cdot\, )- p_*(r_k \,\cdot\, ) \big)\big|^2},
\end{equation}
where $h_{r_k} := \|(u-p_*)(r_k\,\cdot\,)\|_{L^2(\partial B_1)}$.
Note that, because $p_*$ is a quadratic polynomial that vanishes on $L$, we have
\begin{equation}
\label{eq:b c}
h_{r_k}^{-1}\big(p_*(r_ky_k+r_k \,\cdot\, )- p_*(r_k \,\cdot\, )\big) = c_k + b_k \cdot x
\end{equation}
for some constant $c_k \in \mathbb{R}$ and some vector $b_k \in \mathbb{R}^n$ with $b_k \perp L$.
We now observe that, since $|y_k|\leq 1/2$ we have $B_{1/2}(y_k) \subseteq B_1$, therefore
\[
\int_{B_{1/2}} \big|\nabla\tilde w_{r_k}(y_k + \,\cdot\,) \big|^2 + \int_{\partial B_{1/2}} \big|\tilde w_{r_k}(y_k + \,\cdot\,) \big|^2 \le \|\tilde w_{r_k}\|^2_{W^{1,2}(B_1)} \le C.
\]
We claim that
\[
|c_k| \le C \quad \mbox{and} \quad |b_k|\le C, \qquad \mbox{with $C$ independent of $k$}.
\]
Indeed, if this was false, dividing by $(|c_k|+|b_k|)^2$ both the numerator and the denominator in \eqref{freqge2}, we would obtain
\[
2\le \frac{1}{2}\frac{\int_{B_{1/2}}\big |\nabla (\varepsilon_k(x) + \bar c_k + \bar b_k \cdot x)\big|^2} {\int_{\partial B_{1/2}}\big | \varepsilon_k(x) + \bar c_k + \bar b_k \cdot x\big|^2},
\]
where $\bar c_k := c_k/ (|c_k|+|b_k|)$, $\bar b_k := b_k/ (|c_k|+|b_k|)$, and $\int_{B_{1/2}}|\nabla \varepsilon_k|^2+\int_{\partial B_{1/2}}\varepsilon_k^2 \rightarrow 0$. Thus, in the limit we would find
\[
2\le \frac{1}{2}\frac{\int_{B_{1/2}}\big |\nabla (\bar c_\infty + \bar b_\infty \cdot x)\big|^2} {\int_{\partial B_{1/2}}| \bar c_\infty + \bar b_\infty \cdot x|^2} = \frac{ |\bar b_\infty|^2}{4n|\bar c_\infty|^2 + |\bar b_\infty|^2} \le 1,
\]
a contradiction that proves the claim.
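The last equality in the display above follows from the elementary identities
\[
\int_{B_{1/2}} \big|\nabla(\bar c_\infty + \bar b_\infty\cdot x)\big|^2 \,=\, |\bar b_\infty|^2\,|B_{1/2}|,
\qquad
\int_{\partial B_{1/2}} \big|\bar c_\infty + \bar b_\infty\cdot x\big|^2 \,=\, \Big(|\bar c_\infty|^2 + \tfrac{1}{4n}|\bar b_\infty|^2\Big)\,|\partial B_{1/2}|
\]
(the cross term vanishes by symmetry and $\int_{\partial B_r}(\boldsymbol v\cdot x)^2=\frac{r^2}{n}|\boldsymbol v|^2|\partial B_r|$), together with $|B_{1/2}|=\frac{1}{2n}|\partial B_{1/2}|$.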
Thanks to the claim, up to a subsequence, $c_k \to c_\infty$ and $b_k\to b_\infty$ as $k \to \infty$, with $b_\infty \perp L$.
Note now that, since $x_k$ is a singular point, it follows by Corollary \ref{monneau} that, for all $\rho\in(0,1/2)$,
\[
\frac{1}{\rho ^4} \int_{\partial B_1} | u(x_k +r_k \rho \, \cdot\,) - p_*(r_k \rho \, \cdot\,)|^2 \le 2^4\int_{\partial B_1} \big| u\big(x_k +{\textstyle \frac {r_k} 2} \, \cdot\,\big) - p_*\big({\textstyle \frac {r_k} 2} \, \cdot\,\big)\big|^2 ,
\]
or equivalently, recalling \eqref{eq:b c},
\begin{equation}\label{control}
\frac{1}{\rho ^4} \average\int_{\partial B_\rho} | \tilde w_{r_k}(y_k + x) + c_k + b_k\cdot x|^2 \le 2^4\average\int_{\partial B_{1/2}} | \tilde w_{r_k}(y_k + x) + c_k + b_k\cdot x|^2 .
\end{equation}
Hence, in the limit (note that $B_{\rho}(y_\infty)\subset B_1$ for all $\rho \leq 1/2$)
\begin{equation}\label{controllim}
\frac{1}{\rho^4} \average\int_{\partial B_\rho} | q(y_\infty + x) + c_\infty + b_\infty\cdot x |^2 \le 2^4\average\int_{\partial B_{1/2}} | q(y_\infty + x) + c_\infty + b_\infty\cdot x|^2\qquad \forall\,\rho\in (0,1/2).
\end{equation}
Since $q$ is a homogeneous harmonic function, $y_\infty \in L$ with $|y_\infty|>0$, and $b_\infty$ is orthogonal to $L$, it follows by \eqref{controllim} that
the gradient of $q$ in the directions of $L$ must vanish at $y_\infty$, namely $\nabla_L q(y_\infty) =0$. Hence, by homogeneity we find $q(y_\infty)=0$, as desired.
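For the reader's convenience, we spell out these last two steps. Since $q$ is smooth, letting $\rho\downarrow 0$ in \eqref{controllim} forces the function $x\mapsto q(y_\infty+x)+c_\infty+b_\infty\cdot x$ to vanish at $x=0$ together with its gradient, that is
\[
q(y_\infty) = -c_\infty \qquad\mbox{and}\qquad \nabla q(y_\infty) = -b_\infty \in L^\perp,
\]
so that $\nabla_L q(y_\infty)=0$. Then, by Euler's identity for the $\lambda_*$-homogeneous function $q$,
\[
\lambda_*\, q(y_\infty) \,=\, y_\infty\cdot\nabla q(y_\infty) \,=\, y_\infty\cdot \nabla_L q(y_\infty) \,=\, 0
\]
(we used $y_\infty\in L$ in the second equality), and since $\lambda_*\ge 2$ we conclude $q(y_\infty)=0$.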
\end{proof}
\begin{lemma}\label{cod1}
Let $n\ge 2$ and suppose that $0$ is a singular point with $m:=\dim(L) =n-1$. Assume that
there is a sequence of singular points $x_k\to 0$ with $x_k\in \Sigma_{n-1}$, and let $r_k \downarrow 0$ with $ |x_k| \le r_k/2$.
Denote
\[
\lambda_*:=\phi(0^+,u-p_*)\qquad \mbox{and}\qquad \lambda_{*,x_k}:=\phi\big(0^+,u(x_k+\,\cdot\,)-p_{*,x_k}\big).
\]
Suppose that
\[
\tilde w_{r_k} := \frac{(u-p_*)(r_k \,\cdot\, )}{\|(u-p_*)(r_k\,\cdot\,)\|_{L^2(\partial B_1)}} \rightharpoonup q
\qquad \text{in $W^{1,2}(B_1)$,}
\]
and $y_k :=\frac{x_k}{r_k} \to y_\infty$.
Also, let $\mathbb{R}\boldsymbol e=L^\perp$ with $|\boldsymbol e|=1$, and denote by $q^{\rm even}$ and $q^{\rm odd}$ the even and odd part of $q$ with respect to $L$, namely
$$
q^{\rm even}(x) = \frac 1 2 \big\{ q(x) +q\big(x-2(\boldsymbol e\cdot x)\boldsymbol e\big) \big\} ,\qquad
q^{\rm odd}(x) = \frac 1 2 \big\{ q(x) -q\big(x-2(\boldsymbol e\cdot x)\boldsymbol e\big) \big\}.
$$
Finally, let $\alpha_\circ$ be as in Proposition \ref{propblowup}(b).
Then $y_\infty\in L$, and for $\lambda := \inf_k\{ \lambda_{*,x_k}\} \ge 2+\alpha_\circ$ we have
\begin{equation}
\label{eq:q even y infty}
\rho^{-2 \lambda} \average\int_{\partial B_\rho} q^{\rm even}(y_\infty+ x)^2 \le
2^{2\lambda} \average\int_{\partial B_{1/2}} q^{\rm even}(y_\infty+ x)^2 \quad \forall\,\rho\in(0,1/2).
\end{equation}
In addition, if $\lambda_*<3$ then
\begin{equation}
\label{eq:q y infty}
\rho^{-2\lambda} \average\int_{\partial B_\rho} q(y_\infty+ x)^2 \le 2^{2\lambda} \average\int_{\partial B_{1/2}} q(y_\infty+ x)^2
\qquad \forall\,\rho\in(0,1/2).
\end{equation}
\end{lemma}
\begin{proof}
Let
\[
p_{*,x_k}(x) = \lim_{r\downarrow 0} r^{-2} u(x_k +rx)
\]
and define the second order harmonic polynomial
\[ P_k (x) := \frac{1}{h_{r_k}} \big( p_{*,x_k}(r_k x) -p_{*,0}(x_k +r_k x) \big), \quad \mbox{where }h_{r_k} := \| (u-p_*)(r_k \,\cdot\,) \|_{L^2(\partial B_1)}. \]
Since $x_k\in \Sigma_{n-1}$, Proposition \ref{propblowup}(b) yields
\begin{equation}\label{basic}
\phi\big(r_k/2 , u(x_k + \, \cdot\,) -p_{*,x_k} \big)\geq \phi\big(0^+ , u(x_k + \, \cdot\,) -p_{*,x_k} \big) = \lambda_{*, x_k}\ge \lambda\ge 2+\alpha_\circ >2,
\end{equation}
therefore
\begin{equation}\label{freqge2+alpha}
2+\alpha_\circ
\le
\frac12 \frac{\int_{B_{1/2}}\big |\nabla u(x_k + r_k\, \cdot\,) - \nabla p_{*,x_k}(r_k\, \cdot\, ) \big|^2}
{\int_{\partial B_{1/2}}\big |u(x_k + r_k\, \cdot\, ) - p_{*,x_k}(r_k\, \cdot\,) \big|^2}
=
\frac12
\frac{\int_{B_{1/2}}\big |\nabla \tilde w_{r_k}(y_k +\cdot ) - \nabla P_k\big|^2}
{\int_{\partial B_{1/2}}\big |\tilde w_{r_k}(y_k +\cdot ) - P_k \big|^2}
\end{equation}
for all $k$.
We now claim that
\begin{equation}
\label{eq:bdd P}
|P_k| \le C \qquad \forall\, k.
\end{equation}
Indeed, if the coefficients of $P_k$ were not bounded then, dividing both the numerator and the denominator of \eqref{freqge2+alpha} by $|P_k|$, we would obtain
\[
2+\alpha_\circ\le \frac12\frac{\int_{B_{1/2}}\big |\nabla \varepsilon_k - \nabla \bar P_k |^2} {\int_{\partial B_{1/2}}\big | \varepsilon_k - \bar P_k\big|^2},
\]
where $\bar P_k := P_k /|P_k|$ and $\int_{B_1 }|\nabla \varepsilon_k|^2 \rightarrow 0$, thus in the limit we find
\begin{equation}
\label{eq:freq P}
2+\alpha_\circ \le\frac12 \frac{\int_{B_{1/2}} |\nabla \bar P_\infty |^2} {\int_{\partial B_{1/2}} | \bar P_\infty |^2}
\end{equation}
for some quadratic polynomial $\bar P_\infty.$
Note now that, since $0,x_k\in \Sigma_{n-1}$, we have
\[
p_{*,x_k} = \frac 12 (\boldsymbol e_k\cdot x)^2 \quad \mbox{for some } \boldsymbol e_k \in \mathbb S^{n-1}
\]
and
\[
p_{*,0} = \frac 12 (\boldsymbol e\cdot x)^2 \quad \mbox{for } \boldsymbol e \in \mathbb S^{n-1}\cap L^\perp.
\]
Also, up to replacing $\boldsymbol e_k$ with $-\boldsymbol e_k$ if needed,
we have that $\boldsymbol e_k\to\boldsymbol e$ (since $p_{*,x_k}\to p_{*,0}$ as $k \to \infty$).
Thus
\[
\begin{split}
P_k (x) &= \frac{1}{h_{r_k}} \big( p_{*,x_k}(r_k x) -p_{*,0}(x_k + r_k x) \big)
\\
&= \frac{r_k^2}{2h_{r_k}} \big( (\boldsymbol e_k \cdot x)^2 - (\boldsymbol e\cdot (y_k+x))^2 \big)
\\
&= \frac{r_k^2}{2h_{r_k}} \big( (\boldsymbol e_k \cdot x)^2 - (\boldsymbol e\cdot x)^2 - 2a_k( \boldsymbol e\cdot x) - a_k^2\big)
\end{split}
\]
where $a_k := (\boldsymbol e\cdot y_k) \to 0$ (since $y_k \to y_\infty \in L = \boldsymbol e^\perp$).
Thus, since the coefficients of $\bar P_k := P_k /|P_k|$ are uniformly bounded and $a_k^2 \ll 2a_k$ we must have $\bar P_\infty(0) =0$ and
therefore
$$
\bar P_\infty(x) = \bar c_1(\boldsymbol e'\cdot x)(\boldsymbol e \cdot x) + \bar c_2(\boldsymbol e \cdot x),
$$
for some constants $\bar c_1,\bar c_2 \in \mathbb{R}$, where
\[
\boldsymbol e' := \lim_{k\to \infty} \frac{\boldsymbol e_k- \boldsymbol e}{|\boldsymbol e_k- \boldsymbol e|} \in \mathbb S^{n-1}\cap L.
\]
Now, since $\boldsymbol e'\perp\boldsymbol e$,
a direct computation using the formula above yields
\[
\begin{split}
\frac12\frac{\int_{B_{1/2}} |\nabla \bar P_\infty |^2} {\int_{\partial B_{1/2}} | \bar P_\infty |^2}
=\frac12\frac{\bar c_1^2\int_{B_{1/2}} (\boldsymbol e'\cdot x)^2+\bar c_1^2\int_{B_{1/2}} (\boldsymbol e\cdot x)^2+\bar c_2^2|B_{1/2}|} {\bar c_1^2\int_{\partial B_{1/2}} (\boldsymbol e'\cdot x)^2(\boldsymbol e\cdot x)^2 +\bar c_2^2\int_{\partial B_{1/2}}(\boldsymbol e\cdot x)^2}
=\frac{\frac{1}{2(n+2)}\bar c_1^2+\bar c_2^2} {\frac{1}{4(n+2)}\bar c_1^2 +\bar c_2^2}\leq 2,
\end{split}
\]
a contradiction to \eqref{eq:freq P}.
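The second equality in the last display follows from the standard identities (for orthogonal unit vectors $\boldsymbol e\perp\boldsymbol e'$)
\[
\int_{\partial B_r} (\boldsymbol e\cdot x)^2 = \frac{r^2}{n}\,|\partial B_r|,\qquad
\int_{\partial B_r} (\boldsymbol e\cdot x)^2(\boldsymbol e'\cdot x)^2 = \frac{r^4}{n(n+2)}\,|\partial B_r|,\qquad
\int_{B_r} (\boldsymbol e\cdot x)^2\,dx = \frac{r^2}{n+2}\,|B_r|,
\]
applied with $r=1/2$, together with $|B_{1/2}|=\frac{1}{2n}|\partial B_{1/2}|$.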
Hence this proves \eqref{eq:bdd P}, and up to a subsequence $P_k\to P_\infty$ as $k\to \infty$, where $P_\infty$ is a second order harmonic polynomial. In addition, by the discussion above, $P_\infty$ has the form
\begin{equation}\label{2}
P_\infty(x) = c_1(\boldsymbol e'\cdot x)(\boldsymbol e \cdot x) + c_2(\boldsymbol e \cdot x),
\end{equation}
where $\boldsymbol e'\perp \boldsymbol e$ and $c_1,c_2 \in \mathbb{R}$.
Now, by Lemma \ref{Hincreasing} applied to $u(x_k +r_k \,\cdot\,)-p_{*,x_k}$ and using \eqref{basic}, we have
\[
\rho^{-2\lambda} \average\int_{\partial B_\rho} | \tilde w_{r_k} (y_k +\,\cdot\,) -P_k |^2 \le 2^{2\lambda}\average\int_{\partial B_{1/2}} | \tilde w_{r_k} (y_k +\,\cdot\,) -P_k |^2,
\]
for all $\rho\in(0,1/2)$, hence, in the limit,
\begin{equation}\label{controllim3}
\rho^{-2\lambda} \average\int_{\partial B_\rho} | q(y_\infty + \,\cdot\,) - P_\infty |^2 \le 2^{2\lambda}\average\int_{\partial B_{1/2}} | q(y_\infty + \,\cdot\,) - P_\infty |^2.
\end{equation}
Since $P_\infty$
is odd with respect to $L$ (see \eqref{2}), it follows by
\eqref{controllim3} that
\[\begin{split}
\rho^{-2\lambda}
\average\int_{\partial B_\rho} q^{\rm even}(y_\infty + \,\cdot\,)^2
&=\rho^{-2\lambda}
\average\int_{\partial B_\rho}
\big| \big( q(y_\infty+\,\cdot\,)-P_\infty\big)^{\rm even} \big|^2\\
&\leq \rho^{-2\lambda}
\average\int_{\partial B_\rho}
\big| q(y_\infty+\,\cdot\,)-P_\infty \big|^2\\
& \le 2^{2\lambda} \average\int_{\partial B_{1/2}}
\big| q(y_\infty+\,\cdot\,)-P_\infty \big|^2,
\end{split}
\]
where we used that $f \mapsto f^{\rm even}$ is an orthogonal projection in $L^2(\partial B_\rho)$.
This proves \eqref{eq:q even y infty}.
Assume now that in addition $\lambda_*<3$.
We claim that
\begin{equation}\label{qodd0}
q^{\rm odd}\equiv 0.
\end{equation}
Indeed, since the homogeneity of $q$ is at least $2+\alpha_\circ$ (by Proposition \ref{propblowup}(b)), we have
$\nabla q(0)= \nabla q^{\rm odd}(0) = 0$. On the other hand, $q^{\rm odd}$ is a harmonic function in $\mathbb{R}^n$ with sub-cubic growth at infinity (here we use the assumption $\lambda_*<3$) and vanishing on $L$, thus it must be $q^{\rm odd} = c(\boldsymbol e\cdot x )$ for some $c\in \mathbb{R}$ and hence (since $\nabla q^{\rm odd}(0) =0$)
$q^{\rm odd} \equiv 0$.
Thanks to \eqref{qodd0} we get $q=q^{\rm even}$, so \eqref{eq:q y infty} follows from \eqref{eq:q even y infty}.
\end{proof}
For $n\ge 3$ and $m\in\{1,2,\dots, n-1\}$ we define
\begin{equation}
\label{eq:def anomalous}
\Sigma^a_{m} : = \big\{ x_\circ \in \Sigma_m\ :\ \phi\big(0^+, u(x_\circ+\,\cdot\,)-p_{*,x_\circ} \big)<3 \big\},
\qquad
\Sigma^g_{m} := \Sigma_m \setminus \Sigma^a_{m}.
\end{equation}
We can now give the key lemmas needed to prove Theorem \ref{thm:main}. We begin by showing that points in $\Sigma_1^a$ are isolated inside $\Sigma$.
\begin{lemma}\label{lem11}
Assume $n\ge 3$. Then $\Sigma_1^a$ is a discrete set.
\end{lemma}
\begin{proof}
Assume by contradiction that $0\in \Sigma_1^a$ and $x_k\to 0$ is a sequence of singular points.
By definition, $0\in \Sigma_1^a$ means that $\dim(L)=1$ (where $L:= \{p_*=0\}$) and that $\lambda_* := \phi\big(0^+, u-p_{*} \big)<3$. Hence, since $n\ge 3$ we have $m=1\le n-2$, thus Proposition \ref{propblowup}(a) yields $\lambda_*=2$.
Let $r_k := 2|x_k|$. By Proposition \ref{propblowup} and Lemma \ref{codge2} we have (up to extracting a subsequence)
\[
\tilde w_{r_k} \rightarrow q \quad \mbox{in }L^2(B_1) \quad \mbox{and} \quad y_k := \frac{x_k}{r_k} \to y_\infty \in L\cap\partial B_{1/2}.
\]
where $q$ is a $2$-homogeneous harmonic polynomial satisfying $q(y_\infty)=0$. In addition, since $\lambda_*=2$ we know that \eqref{howisD2q} holds. Namely, in an appropriate coordinate frame (recall that $m=1$ here) we have
\begin{equation}\label{howisD2q1}
D^2p_*=
\left(
\begin{array}{ccc|c}
\mu_1& & &\\
& \ddots & & 0_1^{n-1}\\
& &\mu_{n-1}&\\
\hline
& 0_{n-1}^1 & & 0
\end{array}
\right)
\quad
\mbox{and}
\quad
D^2 q=
\left(
\begin{array}{ccc|c}
t& & &\\
& \ddots & & 0_1^{n-1}\\
& &t&\\
\hline
& 0_{n-1}^1 && -(n-1)t
\end{array}
\right),
\end{equation}
where $\mu_1,\ldots,\mu_{n-1},t>0$ and $\sum_{i=1}^{n-1}\mu_i=1$.
Note that, since $|y_\infty|=1/2$, $q(y_\infty)=0$, and $y_\infty\in L$, by homogeneity of $q$ we must have $q|_L \equiv 0$.
This contradicts the fact that $D^2q|_{L\otimes L}=-(n-1)t<0$ (see \eqref{howisD2q1})
and concludes the proof.
\end{proof}
In order to estimate the measure of $\Sigma_m^a$ for $m\geq 2$ we need to develop a Federer-type dimension reduction argument.
As a first step we need the following standard result in geometric measure theory, which we prove for the reader's convenience.
Before stating it, we recall some classical definitions.
Given $\beta >0$ and
$\delta \in (0,\infty]$,
the Hausdorff premeasures $\mathcal H^{\beta}_\delta(E)$ of a set $E$ are defined as follows:\footnote{In many textbooks, the definition of $\mathcal H^{\beta}_\delta$ includes a normalization constant chosen so that the Hausdorff measure of dimension $k$ coincides with the standard $k$-dimensional volume on smooth sets. However, such a normalization constant is irrelevant for our purposes, so we neglect it.}
\begin{equation}
\label{eq:def Haus}
\mathcal H^{\beta}_\delta(E):=\inf\biggl\{\sum_i {\rm diam}(E_i)^\beta\,:\,E\subset \bigcup_i E_i,\,{\rm diam}(E_i)<\delta \biggr\}.
\end{equation}
Then, one defines the $\beta$-dimensional Hausdorff measure $\mathcal H^\beta(E):=\lim_{\delta\to 0^+}\mathcal H^{\beta}_\delta(E)$.
We recall that the Hausdorff dimension can be defined in terms of $\mathcal H^{\beta}_\infty$ as follows:
\begin{equation}
\label{eq:def dim}
{\rm dim}_{\mathcal H}(E):=\inf\{\beta >0\,:\,\mathcal H^{\beta}_\infty(E)=0\}
\end{equation}
(this follows from the fact that $\mathcal H^{\beta}_\infty(E)=0$ if and only if $\mathcal H^{\beta}(E)=0$, see for instance \cite[Section 1.2]{Sim83}).
\begin{lemma}\label{abstract}
Let $E\subset \mathbb{R}^n$ be a set with $\mathcal{H}^\beta_\infty(E) >0$ for some $\beta\in (0,n]$.
Then:
\begin{enumerate}
\item[(a)] For $\mathcal{H}^\beta$-almost every point $x_\circ \in E$, there is a sequence $r_k\downarrow 0$ such that
\begin{equation}\label{denpt}
\lim_{k\to \infty} \frac{\mathcal{H}^\beta_\infty(E\cap B_{r_k}(x_\circ) )}{r_k^\beta} \ge c_{n,\beta}>0,
\end{equation}
where $c_{n,\beta}$ is a constant depending only on $n$ and $\beta$. Let us call these points ``density points''.
\item[(b)] Assume that $0$ is a ``density point'', let $r_k\downarrow 0$ be a sequence along which \eqref{denpt} holds, and define the ``accumulation set'' for $E$ at $0$ as
\[
\mathcal A=\mathcal A_{E} := \big\{z\in \overline{B_{1/2}} \ : \, \exists\,(z_\ell)_{\ell\ge 1},(k_\ell)_{\ell \ge 1} \text{ s.t. $z_{\ell}\in r_{k_\ell}^{-1}E \cap B_{1/2}$ and $z_{\ell}\to z$ } \big\}.
\]
Then
\[\mathcal{H}_\infty^\beta(\mathcal A) >0.\]
\end{enumerate}
\end{lemma}
\begin{proof}
Part (a) of the lemma is a standard property of the Hausdorff (pre)measures, see for instance \cite[Theorem 1.3.6(2)]{Sim83} for a proof. We now prove (b).
Assume that $0$ is a density point. Then by (a) we have
\begin{equation}\label{123}
\mathcal{H}_\infty^\beta(r_k^{-1}E \cap B_{1/2})
=\frac{\mathcal{H}_\infty^\beta(E \cap B_{r_k/2})}{r_k^\beta} \ge 2^{-(\beta+1)} c_{n,\beta}>0 \quad \mbox{for all }k \gg 1.
\end{equation}
Note that the accumulation set $\mathcal A$ is closed.
Assume by contradiction that $\mathcal{H}^\beta(\mathcal A) =0$.
Then, by definition of $\mathcal{H}^\beta_\infty$, given any $\varepsilon>0$ there exists a countable cover of balls $\{\hat B_i \}$ such that
\[
\mathcal A \,\subset \,\bigcup_{i\ge1} \hat B_i \quad \mbox{and} \quad \sum_{i\ge 1} {\rm diam}(\hat B_i)^\beta \le \varepsilon.
\]
Since $\mathcal A \subset \overline{B_1}$ is a compact set, we can find a finite subcover.
In particular, there exists $N \in \mathbb N$ such that
\[
\mathcal A \, \subset \,\bigcup_{i=1}^N \hat B_i \quad \mbox{and} \quad \sum_{i=1}^N {\rm diam}(\hat B_i)^\beta \le \varepsilon.
\]
But then, since
\[
r_{k}^{-1} E \cap \overline{ B_{1/2}} \subset \bigcup_{i=1}^N \hat B_i
\]
for $k$ large enough\footnote{Otherwise there would be a sequence of points $z_{\ell} \in r_{k_\ell}^{-1} E \cap \overline{ B_{1/2}}\setminus \bigcup_{i=1}^N \hat B_i$, and hence their limit $z$ ---up to a subsequence--- would satisfy at the same time $z\in \mathcal A$ and $z\in \overline{B_{1/2}} \setminus \bigcup_{i=1}^N \hat B_i$, a contradiction.}, by definition of $\mathcal{H}^\beta_\infty$ we obtain
\[
\mathcal{H}^\beta_\infty(r_{k}^{-1}E \cap \overline{ B_{1/2}} ) \le \varepsilon,
\]
a contradiction with \eqref{123} if $\varepsilon$ is small enough.
\end{proof}
We can now give an appropriate version of Lemma \ref{lem11} for the case $m = \dim(L)\in\{2,\ldots, n-2\}$.
\begin{lemma}\label{lem12}
Assume $n\ge 4$ and $m\in \{2,\dots, n-2\}$.
Then $\dim_\mathcal{H}(\Sigma_m^a) \le m-1$.
\end{lemma}
\begin{proof}
Recalling \eqref{eq:def dim},
we assume by contradiction that $\mathcal{H}_\infty^\beta(\Sigma_m^a)>0$ for some $\beta>m-1$.
By Lemma \ref{abstract}(a), there is a point $x_\circ\in \Sigma_m^a$ and $r_k\downarrow 0$ such that
\[
r_k^{-\beta} \mathcal{H}_\infty^\beta(\Sigma_m^a \cap B_{r_k}(x_\circ)) \ge c_{n,\beta} >0.
\]
Assume without loss of generality that $x_\circ =0$.
Hence, since $0\in \Sigma_m^a$ and $m\le n-2$, it follows by \eqref{eq:def anomalous} and Proposition \ref{propblowup}(a) that
\[
\lambda_* := \phi(0^+, u-p_*) =2
\]
and that, up to extracting a subsequence,
\[
\tilde w_{r_k} \rightarrow q \qquad \mbox{in }L^2(B_1),
\]
where $q$ is a $2$-homogeneous harmonic polynomial. In addition, since $\lambda_*=2$, we know that in an appropriate coordinate frame $D^2p_*$ and $D^2q$ are given by \eqref{howisD2q}.
Also, applying Lemma \ref{abstract}(b), we deduce that the ``accumulation set'' $\mathcal A=\mathcal A_{\Sigma_m^a}$
satisfies $\mathcal{H}_\infty^\beta(\mathcal A)>0$.
We claim that $\mathcal A\subset \overline B_1 \cap L \cap \{q=0\}$.
Indeed, by definition, a point $z$ belongs to $\mathcal A$ if there are sequences of singular points $x_k \to 0$ and of radii $r_k \downarrow 0$ such that $|x_k|\le r_k$ and $x_k/r_k \to z$.
Thus $x_k/(2r_k) \to z/2$, and by Lemma \ref{codge2} we obtain $z/2\in L$ and $q(z/2) =0$.
By homogeneity, this implies that $z\in L \cap \{q=0\}$ as claimed.
Finally we note that $L\cap \{q=0\}$ has dimension at most $m-1$. Indeed, if not, this would imply that $q \equiv 0$ on $L$, which
would contradict the fact that ${\rm tr}(D^2 q|_{L\otimes L})=-(n-m)t<0$ (see \eqref{howisD2q}).
Thus $\mathcal{H}^{m-1} \big(\overline{B_1}\cap L\cap \{q=0\}\big)<+\infty$, which yields (since $\beta>m-1$)
\[
0<\mathcal{H}^\beta_\infty(\mathcal A) \le \mathcal{H}^\beta_\infty\big(\overline B_1 \cap L \cap \{q=0\} \big)=0,
\]
a contradiction.
\end{proof}
We now analyze the size of $\Sigma_{n-1}^a$.
We begin with the case $n=3.$
\begin{lemma}\label{lem21}
Let $n=3$. Then $\Sigma^a_{n-1}$ is a discrete set.
\end{lemma}
\begin{proof}
Let us assume that $0\in \Sigma^a_{n-1}$ and that $x_k\to 0$, where $x_k \in \Sigma_{n-1}$.
By Proposition \ref{propblowup} and by definition of $\Sigma^a_{n-1}$ we have
\[
\lambda_* := \phi(0^+, u-p_*) \in [2+\alpha_\circ,3).
\]
Let $r_k :=2|x_k|$ and note that, by Proposition \ref{propblowup}(b), we have (up to subsequence)
\[
\tilde w_{r_k} \rightarrow q \quad \mbox{and} \quad \frac{x_k}{r_k} \to z \in \partial B_{1/2}
\]
where $q$ is a $\lambda_*$-homogeneous solution of the Signorini problem (with zero obstacle on $L$).
Also, since $\lambda_*<3$, it follows by Lemma \ref{cod1} that $z\in L$ and
\[
\rho^{-2(2+\alpha_\circ)} \average\int_{\partial B_\rho(z)} q^2 \le 2^{2(2+\alpha_\circ)} \average\int_{\partial B_{1/2} (z)} q^2, \qquad \forall \rho\in (0,1/2)
\]
(note that, since $x_k \in \Sigma_{n-1},$
$\inf_k\lambda_{*,x_k}\geq 2+\alpha_\circ$ by Proposition \ref{propblowup}(b)).
This implies that $q$, $Dq$, and $D^2q$ vanish at $z$, and that $\lambda^z := \phi(0^+,q(z+\,\cdot\,))\ge 2+\alpha_\circ$.
Since $q$ is a solution of the Signorini problem that is homogeneous with respect to the point $0$, it is a classical fact (following, for instance, from the monotonicity of the frequency function) that a blow-up at $z\in L$,
\[q^z = \lim_j q(z+r_j\,\cdot\,) / \|q(z+r_j\,\cdot\,) \|_{L^2(\partial B_1)} \]
has translation symmetry in the direction $z$, and it is $\lambda^z$-homogeneous.
Thus, since $n=3$, $q^z$ depends only on two variables (equivalently, it has 2-dimensional symmetry).
Since homogeneous 2-dimensional solutions of Signorini are completely classified (see the proof of Lemma \ref{possiblefreq}) we deduce that
\[
\lambda^z \in \{1,2,3,4,5, \dots\}\cup { \textstyle \big\{\frac 3 2, \frac 7 2, \frac{11}{2}, \frac {15}{ 2}, \,\dots\big\} }
\]
Recalling that $\lambda^z \ge 2+\alpha_\circ$, we get $\lambda^z \ge 3$.
But then we reach a contradiction since, by monotonicity of the frequency and the fact that the limit as $r\to +\infty$ of the frequency is independent of the point, we get
\[
3\leq \lambda^z = \phi(0^+,q(z+\,\cdot\,)) \le \phi(+\infty,q(z+\,\cdot\,)) = \phi(+\infty,q) = \lambda_* <3.
\]
\end{proof}
In order to control the size of $\Sigma_{n-1}^a$ for $n\geq 4$, we shall use the following result on the Signorini problem:
\begin{theorem}[{\cite[Theorem 1.3]{FS17}}]\label{FocSpa}
Let $L\subset \mathbb{R}^n$ be an $(n-1)$-dimensional subspace, and let $q$ be a solution of the Signorini problem in $\mathbb{R}^n$ with obstacle $0$ on $L$ (see \eqref{TOP}).
Then, for all $z$ in the contact set $\{q=0\}\subset L$ it holds
\[
\phi(0^+,q(z+\, \cdot\,)) \in \{1,2,3, 4, \dots\} \cup { \textstyle \big\{\frac 3 2, \frac 7 2, \frac{11}{2}, \frac {15}{ 2}, \,\dots\big\} }
\]
except for at most a set of Hausdorff dimension $n-3$.
\end{theorem}
\begin{lemma}\label{lem22}
Let $n\ge4$. Then $\dim_\mathcal{H}(\Sigma^a_{n-1})\le n-3$.
\end{lemma}
\begin{proof}
Recalling \eqref{eq:def dim}, assume by contradiction that $\mathcal{H}_\infty^\beta(\Sigma_{n-1}^a)>0$ for some $\beta>n-3$. Then
by Lemma \ref{abstract}(a) there exists a point $x_\circ\in \Sigma_{n-1}^a$ and $r_k\downarrow 0$ such that
\[
r_k^{-\beta}
\mathcal{H}_\infty^\beta(\Sigma_{n-1}^a \cap B_{r_k}(x_\circ)) \ge c_{n,\beta} >0.
\]
Without loss of generality we assume that $x_\circ =0$.
Then, since $0\in \Sigma_{n-1}^a$, by \eqref{eq:def anomalous} and Proposition \ref{propblowup}(b) we have
\[
\lambda_* := \phi(0^+, u-p_*) \in [2+\alpha_\circ, 3)
\]
and (up to a subsequence)
\[
\tilde w_{r_k} \rightarrow q \quad \mbox{in }L^2(B_1),
\]
where $q$ is a $\lambda_*$-homogeneous solution of the Signorini problem with obstacle $0$ on $L$.
Applying Lemma \ref{abstract}(b), the ``accumulation set'' $\mathcal A=\mathcal A_{\Sigma_{n-1}^a}$
satisfies $\mathcal{H}_\infty^\beta(\mathcal A)>0$.
Set
\[
\mathcal S := \big\{z \in \overline B_1\cap L \,:\, q(z)=0 \,\mbox{ and }\, \phi(0^+,q(z+\, \cdot\,)) \ge 2+\alpha_\circ\big\}.
\]
Then, by the same argument as in the proof of Lemma \ref{lem12} we deduce that $\mathcal A\subset \overline B_1\cap L \cap\{ q=0\}$.
Also, since $\lambda_*<3$,
as in the proof of Lemma \ref{lem21}
it follows by
Lemma \ref{cod1} that
$\phi(0^+,q(z+\,\cdot\,))\geq 2+\alpha_\circ$ for all $z \in \mathcal A$. Hence,
$$
\mathcal A\subset \mathcal S.
$$
We now note that, for all $z\in \mathcal S$, we have
\[
\phi(0^+,q(z+\, \cdot\,)) \le \phi(+\infty,q(z+\, \cdot\,)) = \phi(+\infty,q) = \lambda_*<3
\]
(since $0\in \Sigma^a_{n-1}$). Therefore it follows that
\[\phi\big(0^+,q(z+\, \cdot\,)\big) \in [2+\alpha_\circ,3)\quad \mbox{for all } z\in \mathcal S,\]
and Theorem \ref{FocSpa} yields $\dim_\mathcal{H}(\mathcal S)\le n-3$. In particular $\mathcal{H}_\infty^\beta(\mathcal S)= 0$ (since $\beta > n-3$) and we obtain
\[0<\mathcal{H}_\infty^\beta(\mathcal A)\le \mathcal{H}_\infty^\beta(\mathcal S)=0,\]
a contradiction.
\end{proof}
We will also need the following version of Whitney's extension theorem (see for instance \cite{Fef09} and the references therein):
\begin{lemma}[Whitney's Extension Theorem]\label{WET}
Let $\beta\in(0,1]$, $\ell \in \mathbb{N}$, $K\subset \mathbb{R}^n$ a compact set, and $f: K\rightarrow \mathbb{R}$ a given mapping.
Suppose that for any $x_\circ \in K$ there exists a polynomial $P_{x_\circ}$ of degree $\ell$ such that:
\begin{itemize}
\item[(i)] $P_{x_\circ} (x_\circ) = f(x_\circ)$;
\item[(ii)] $| D^kP_{x_\circ} (x) -D^k P_x(x) | \le C |x-x_\circ|^{\ell+\beta-k}$ for all $x\in K$ and $k\in\{0,1,\ldots,\ell\}$, where $C>0$ is independent of $x_\circ$.
\end{itemize}
Then there exists $F:\mathbb{R}^n\to \mathbb{R}$ of class $C^{\ell,\beta}$ such that
\[
F|_{K}\equiv f \qquad \text{and}\qquad F(x) = P_{x_\circ}(x) + O(|x-x_\circ|^{\ell+\beta}) \quad \forall\, x_\circ \in K.
\]
\end{lemma}
We now prove that the set of points with frequency $\geq \lambda$ is contained in a $C^{\lambda-1}$-manifold.
Since the classical argument provided in \cite[Theorem 7.9]{PSU12} only shows that the singular set is locally contained in a countable union of manifolds (while here we claim that locally we need only one manifold), we provide the details of the proof.
\begin{lemma} \label{lem:laststep}
Let $n\ge 2$, $m\in\{1,2,\dots, n-1\}$, and $\lambda>2$. Let $\ell\in \mathbb{N}$ and $\beta \in(0,1]$ satisfy $\ell+\beta=\lambda$,
and define
\[
S_{m, \lambda} := \big\{ x_\circ\in \Sigma_m \ :\ \phi\big(0^+, u(x_\circ +\,\cdot\,)-p_{*,x_\circ} \big) \ge \lambda \big\}.
\]
Then $S_{m, \lambda}$ is locally contained in an $m$-dimensional manifold of class $C^{\ell-1,\beta}$.
\end{lemma}
\begin{proof}
We prove the result in a neighborhood of the origin.
We begin by recalling that the singular set $\Sigma=\cup_{m=0}^n\Sigma_m$ is closed (this is a classical fact that follows from the relative openness of the set of regular points, see \cite{C77}).
In addition, we note that the monotonicity of the frequency implies that the map
$$
\Sigma\ni x_\circ \mapsto \phi\big(0^+, u(x_\circ +\,\cdot\,)-p_{*,x_\circ} \big)
$$
is upper semicontinuous, being the monotone decreasing limit (as $r\downarrow 0$) of the continuous functions
$$
\Sigma\ni x_\circ \mapsto \phi\big(r, u(x_\circ +\,\cdot\,)-p_{*,x_\circ} \big), \qquad r>0.
$$
Thanks to these facts we deduce that
\[
S_{\lambda} := \big\{ x_\circ\in \Sigma \ :\ \phi\big(0^+, u(x_\circ +\,\cdot\,)-p_{*,x_\circ} \big) \ge \lambda \big\}
\]
is closed. In particular, if we define the compact set $K:=
S_{\lambda}\cap \overline{B_{1/4}}$, we have that $\overline{S_{m,\lambda}\cap B_{1/4}}\subset K$.
Now, given $x_\circ \in K$, we define
\[
P_{x_\circ} (x) := p_{*, x_{\circ}} (x - x_\circ).
\]
We want to show that $K$, $f\equiv 0$, and $\{P_{x_\circ}\}_{x_\circ \in K}$ satisfy the assumptions of Lemma \ref{WET} with $\ell$ and $\beta$ as defined above.
Note that, by Lemma \ref{Hincreasing} and the definition of $S_{\lambda}$, for all $x_\circ \in K$ we have
\begin{equation}
\label{eq:bound}
\|u(x_\circ + \rho\,\cdot\,)-p_{*,x_\circ}(\rho\,\cdot\,)\|_{L^2(B_1)} \le (2\rho)^\lambda \big\|u\big(x_\circ + {\textstyle \frac 1 2}\,\cdot\, \big) - p_{*,x_\circ}\big({\textstyle \frac 1 2}\,\cdot\,\big)\big\|_{L^2{(\partial B_1)}}
\end{equation}
for all $\rho\in (0,1/2].$
Now, given $x_\circ,x \in K$,
set $\rho := |x-x_\circ|$ (note that $\rho \leq 1/2$), and for simplicity of notation assume that $x_\circ=0$.
Then it follows from \eqref{eq:bound}
applied both at $0$ and $x$ that
\begin{equation}\label{triangle!}
\begin{split}
\| (P_{0} -P_{x})(\rho\,\cdot\,)\|_{L^2(B_{1})} &\le \|u( \rho\,\cdot\,)-P_{0}( \rho\,\cdot\,)\|_{L^2(B_{1})} + \|u(\rho\,\cdot\,)-P_x( \rho\,\cdot\,) \|_{L^2(B_{1})}
\\
&= \big\|u(\rho\,\cdot\,)- p_*( \rho\,\cdot\,)\big\|_{L^2(B_{1})} +\big \|u\big( \rho \cdot \big)-p_{*,x} \big(\rho(\cdot-x/\rho)\big)\big\|_{L^2(B_{1})}
\\
&\le \big\|u(\rho\,\cdot\,)- p_*( \rho\,\cdot\,)\big\|_{L^2(B_{1})} +\big \|u\big(\rho \cdot \big)-p_{*,x} \big(\rho(\cdot-x/\rho)\big)\big\|_{L^2(B_{2}(x/\rho))}
\\
&= \big\|u(\rho\,\cdot\,)- p_*( \rho\,\cdot\,)\big\|_{L^2(B_{1})} +\big \|u\big(x+ \rho \cdot \big)-p_{*,x}( \rho\,\cdot\,)\big\|_{L^2(B_{2})}
\\
&\le C \rho^\lambda .
\end{split}
\end{equation}
In particular, since the norm $\|\cdot\|_{L^2(B_1)}$ is equivalent to the norm $\|\cdot\|_{C^\ell(B_1)}$ on the space of quadratic polynomials, we obtain the existence of a constant $C>0$ such that
\[
| D^kP_{x_\circ} (x) -D^k P_x(x) | \le C |x-x_\circ|^{\ell+\beta-k} \quad \mbox{for all } x_\circ,x\in K\mbox{ and }k\in\{0,1, \dots, \ell\}
\]
(recall that $\lambda=\ell+\beta$).
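In more detail (keeping the normalization $x_\circ=0$), for any polynomial $Q$ of degree at most two, any $k\in\{0,1,2\}$, and $\rho=|x|$, one has
\[
\rho^k\,\big|D^k Q(x)\big| \,\le\, \sup_{z\in \overline{B_1}}\big|D^k_z\big(Q(\rho z)\big)\big| \,\le\, \big\|Q(\rho\,\cdot\,)\big\|_{C^\ell(\overline{B_1})} \,\le\, C\,\big\|Q(\rho\,\cdot\,)\big\|_{L^2(B_1)}
\]
(for $3\le k\le \ell$ the corresponding derivatives vanish identically). Choosing $Q:=P_{0}-P_x$, this yields, thanks to \eqref{triangle!}, exactly $|D^kP_{0}(x)-D^kP_x(x)|\le C\rho^{\lambda-k}$.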
Since $P_{x_\circ}(x_\circ) = 0$ for $x_\circ\in K$, applying Lemma \ref{WET} we find a function $F \in C^{\ell,\beta}(\mathbb{R}^n)$ such that
\[
F(x) = p_{*,x_\circ}(x-x_\circ) + O(|x-x_\circ|^{\ell+\beta}) \quad \mbox{for all } x_\circ \in K.
\]
Therefore
\[
S_{m,\lambda} \cap B_{1/4}\subset K\subset \{\nabla F =0\} = \bigcap_{i=1}^n \{\partial_{x_i} F=0\}.
\]
Now, if $x_\circ \in S_{m,\lambda}\cap B_{1/4}$ then $\dim {\rm ker\,}\big(D^2 F(x_\circ)\big)=\dim {\rm ker\,}\big(D^2 p_{*,x_\circ}(0)\big)=m$. This implies that, up to a change of coordinates, the rank of $D^2_{(x_1,\ldots, x_{n-m})} F(x_\circ)$ is maximal, and we conclude by the Implicit Function Theorem that, in a neighborhood of $x_\circ$,
$\bigcap_{i=1}^{n-m} \{\partial_{x_i} F=0\}$ is an $m$-dimensional manifold of class $C^{\ell-1,\beta}$ that contains $S_{m,\lambda}$.
\end{proof}
We are now ready to prove Theorem \ref{thm:main} (except for the $C^2$ regularity in dimension $2$ that will follow from Theorem \ref{thm:C2} in the next section).
\begin{proof}[Proof of Theorem \ref{thm:main}]
We need to prove:
\begin{itemize}
\item[(a)] For $n=2$, $\Sigma_1$ is locally contained in a $C^{2}$ curve.
\item[(b)] For $n\ge 3$, $\Sigma^g_{n-1}$ is locally contained in a $C^{1,1}$ $(n-1)$-dimensional manifold and
$\Sigma^a_{n-1}$ is a relatively open subset of $\Sigma_{n-1}$ satisfying ${\rm dim}_{\mathcal H}(\Sigma^a_{n-1})\leq n-3$ (the latter set is discrete for $n=3$).
\item[(c)] For $n\ge 3$, $\Sigma_{n-1}$ can be locally covered by a $C^{1,\alpha_\circ}$ $(n-1)$-dimensional manifold, for some dimensional exponent $\alpha_\circ>0$.
\item[(d)] For $n\ge 3$ and $m=1,\ldots,n-2$, $\Sigma_m^g$ can be locally covered by a $C^{1,1}$
$m$-dimensional manifold and $\Sigma^a_m$ is a relatively open subset of $\Sigma_{m}$ satisfying ${\rm dim}_{\mathcal H}(\Sigma^a_m)\leq m-1$ (the latter set is discrete when $m=1$).
\item[(e)] For $n\ge 3$ and $m=1,\ldots,n-2$, $\Sigma_{m}$ can be locally covered by a $C^{1,\log^{\varepsilon_\circ}}$ $m$-dimensional manifold, for some dimensional exponent $\varepsilon_\circ>0$.
\end{itemize}
Throughout the proof, we will use the definition of $S_{m,\lambda}$ given in Lemma \ref{lem:laststep}.
\smallskip
{\it - Proof of (a).} By Lemma \ref{possiblefreq} we have that $\Sigma_{1} = S_{1,3}$. Thus, applying Lemma \ref{lem:laststep}, we obtain that $\Sigma_{1}$ is locally covered by a $C^{1,1}$ curve. To conclude that $\Sigma_1$ can be covered by a $C^2$ curve, we apply Theorem \ref{thm:C2} from the next section.
\smallskip
{\it - Proof of (b).} By Lemma \ref{lem22}, the Hausdorff dimension of $\Sigma^a_{n-1}$ is at most $n-3$. Also, by definition we have $\Sigma^g_{n-1} = S_{n-1,3}$, thus $\Sigma^g_{n-1}$ can be locally covered by a $C^{1,1}$ $(n-1)$-dimensional manifold, thanks to Lemma \ref{lem:laststep}.
The fact that $\Sigma_{n-1}^g$ is relatively closed in $\Sigma_{n-1}$ is a consequence of the fact that $x_\circ \mapsto \phi\big(0^+, u(x_\circ + \,\cdot\,) -p_{*,x_\circ}\big)$ is upper semicontinuous, as shown in the proof of Lemma \ref{lem:laststep}. In the case $n=3$, Lemma \ref{lem11} gives that $\Sigma_{n-1}^a$ is a discrete set.
\smallskip
{\it - Proof of (c).} By Proposition \ref{propblowup}(b) we have that the whole stratum $\Sigma_{n-1}$ is contained in $S_{n-1,2+\alpha_\circ}$, for some dimensional constant $\alpha_\circ>0$. As a consequence, the whole stratum $\Sigma_{n-1}$ can be covered by a $C^{1,\alpha_\circ}$ $(n-1)$-dimensional manifold.
\smallskip
{\it - Proof of (d).} By Lemma \ref{lem12}, for $1\le m\le n-2$ the Hausdorff dimension of $\Sigma^a_{m}$ is at most $m-1$ (in the case $m=1$, Lemma \ref{lem12} gives that $\Sigma_{1}^a$ is a discrete set). Also, since by definition $\Sigma^g_{m} = S_{m,3}$, applying again Lemma \ref{lem:laststep} we obtain that $\Sigma^g_m$ can be locally covered by a $C^{1,1}$ $m$-dimensional manifold.
Finally, as in the proof of (b), the relative closedness of $\Sigma^g_m$ follows from the upper semicontinuity of the frequency.
\smallskip
{\it - Proof of (e).} Let $m\leq n-2$.
We claim that the following estimate holds:
\begin{equation}\label{colomboandco}
\big\| u(x_\circ +r\,\cdot\,)-p_{*,x_\circ}(r \,\cdot\,) \big\|_{L^2(\partial B_1)} \le C r^{2} \log^{-\varepsilon_\circ} (1/r) \quad \forall\, x_\circ\in \Sigma_m\cap B_{1/2}, \ \forall \,r\in (0,1/2).
\end{equation}
Observe that it is enough to prove \eqref{colomboandco} at points $x_\circ$ such that
\[\lambda_{*,x_\circ} :=\phi\big(0^+, u(x_\circ + \,\cdot\,) -p_{*,x_\circ}\big) =2.\]
Indeed, if $\lambda_{*,x_\circ}>2$ then Proposition \ref{propblowup}(a) yields $\lambda_{*,x_\circ}\ge 3$,
hence \eqref{colomboandco} trivially holds (actually, with a much stronger estimate) thanks to Lemma \ref{Hincreasing}.
So, without loss of generality, we can assume that $\lambda_{*,x_\circ} =2$.
Let $M>1$ be a large constant to be fixed later.
By Caffarelli's asymptotic convexity estimate \cite{C77} (see also \cite[Corollary 5]{C98}), we have
\begin{equation}
\label{eq:Caff conv}
D^2 u \ge -C \log^{-\varepsilon_\circ}(1/r) \,{\rm Id} \quad \mbox{in } B_r(x_\circ)\qquad \forall\,x_\circ \in \Sigma,
\end{equation}
for some dimensional exponent $\varepsilon_\circ>0$.
Now, let $a_{r} :=\| r^{-2}u(x_\circ + r\,\cdot\,)-p_{*,x_\circ}\|_{L^2}=o(1)$ and $L_{x_\circ}:=\{p_{*,x_\circ}=0\}$.
Thanks to \eqref{eq:Caff conv} we have
\begin{equation}
\label{eq:Caff conv2}
\partial_{\boldsymbol e\boldsymbol e} \big( r^{-2}u(x_\circ + r\,\cdot\,)-p_{*,x_\circ} \big) = \partial_{\boldsymbol e\boldsymbol e} \bigl(r^{-2}u(x_\circ + r\,\cdot\,)\bigr) \ge -C \log^{-\varepsilon_\circ}(1/r) \qquad \text{in $B_1$}
\end{equation}
for all $\boldsymbol e\in L_{x_\circ}\cap \mathbb S^{n-1}$.
Assume by contradiction that
\[
a_{r_k} \ge M \log^{-\varepsilon_\circ}(1/r_k) \quad \mbox{ for some } r_k \downarrow 0.
\]
Then, recalling \eqref{eq:Caff conv2}, for any $\boldsymbol e\in L_{x_\circ}\cap \mathbb S^{n-1}$ we find
\[
\partial_{\boldsymbol e\boldsymbol e } \tilde w_{r_k} = \frac{1}{a_{r_k}} \partial_{\boldsymbol e\boldsymbol e} \big( r_k^{-2}u(x_\circ + r_k\,\cdot\,)-p_{*,x_\circ} \big) \ge -\frac{C}{M} \quad \mbox{in } B_1.
\]
Thus, since $\tilde w_{r_{k_\ell}} \rightarrow q$ in $L^2(B_1)$ for some subsequence $r_{k_\ell}$ (see Proposition \ref{propblowup}(a)), we have
\begin{equation}\label{semiconvlim}
\partial_{\boldsymbol e\boldsymbol e } q \ge -\frac{C}{M}\quad \mbox{in } B_1, \quad \forall\, \boldsymbol e\in L_{x_\circ}\cap \mathbb S^{n-1}.
\end{equation}
In addition, since $\lambda_{*,x_\circ} = 2$,
Proposition \ref{propblowup}(a) implies that $q$ is a quadratic polynomial satisfying
\[
D^2 q |_L \le 0, \quad D^2 q |_{L^\perp} \ge 0, \quad {\rm tr} (D^2 q)=0, \quad \mbox{and} \quad \|q\|_{L^2(\partial B_1)} =1.
\]
Thanks to this fact, a simple compactness argument shows that there exists ${\boldsymbol e'}\in L_{x_\circ}\cap \mathbb S^{n-1}$ such that
\[
\partial_{{\boldsymbol e'}{\boldsymbol e'}} q\leq -c_1<0 \quad \mbox{in } B_1,
\]
for some dimensional constant $c_1>0$. This contradicts \eqref{semiconvlim} for $M$ sufficiently large, thus establishing \eqref{colomboandco}.
Thanks to \eqref{colomboandco},
if we define
\[
P_{x_\circ}(x) := p_{*,x_\circ}(x-x_\circ)\qquad \forall\, x_\circ\in \Sigma_m,
\]
the argument in the proof of Lemma \ref{lem:laststep} yields
\begin{equation}\label{controlPs}
| D^kP_{x_\circ} (x) -D^k P_x(x) | \le C |x-x_\circ|^{2-k} \log^{-\varepsilon_\circ}\big(1/|x-x_\circ|\big) \quad \forall x_\circ, x \in \Sigma_m\cap \overline{B_{1/2}} , \ k\in\{0,1, 2\}.
\end{equation}
Hence, by Whitney's Extension Theorem (see \cite{Fef09} and the references therein) and the argument in the proof of Lemma \ref{lem:laststep}, we conclude from \eqref{controlPs} that $\Sigma_m$ is locally contained in a $C^{1,\log^{\varepsilon_\circ}}$ $m$-dimensional manifold.
\end{proof}
\section{On third order blow-ups}
\label{sect:C2}
In this section we investigate the uniqueness/continuity of third order blow-ups for points in $\Sigma_{m}$,
and prove that $\Sigma_{m}$
can be covered by $C^2$ manifolds, up to a lower dimensional set (see Theorem \ref{thm:C2} below).
We begin by showing the validity of a third-order almost-monotonicity formula
of Monneau-type for all singular points.
\begin{lemma}
\label{lem:mon 3}
Let $0$ be a singular point, assume that $\lambda_*:=\phi(0^+,u-p_*)\geq 3,$ and let $q$ be a $3$-homogeneous harmonic polynomial that vanishes on $L:=\{p_*=0\}$. Set $v:=u-p_*-q$,
and let $H_\lambda$ be as in Lemma \ref{Hincreasing}.
Then
$$
\frac{d}{dr}H_3(r,v) \geq -C\bigg\|\frac{q^2}{p_*}\bigg\|_{L^\infty(B_1)},
$$
where $C>0$ is a constant that can be chosen uniformly at all singular points in a neighborhood of $0$.
\end{lemma}
\begin{proof}
Set $w:=u-p_*$, $w_r(x):=r^{-3}w(rx)$, and $v_r(x):=r^{-3}v(rx)=w_r(x)-q(x)$.
Then $H_3(r,v)=H_3(1,v_r)$, and we have
\begin{equation}
\label{eq:der mon 3}
\frac{d}{dr}H_3(r,v)=\frac{d}{dr}H_3(1,v_r)=\frac{2}{r}\int_{\partial B_1}v_r((v_r)_\nu-3v_r).
\end{equation}
We now observe that, because $\lambda_*\geq 3$, it holds
$$
W_3(1,w_r)=(\phi(1,w_r)-3)H_3(1,w_r) \geq 0
$$
(here $W_\lambda$ is as in Lemma \ref{modifieWeiss}).
Also, because $q$ is a $3$-homogeneous harmonic polynomial, one easily checks that
$
W_3(1,q)=0.
$
Hence, similarly to the proof of Lemma \ref{lemWeiss}, we get
\[
\begin{split}
0 &\le W_3(1,w_r) -W_3(1,q)
\\
&=\int_{B_1} \Bigl(|\nabla v_r|^2 + 2\nabla v_r\cdot \nabla q\Bigr) -3\int_{\partial B_1} \Bigl(v_r^2 + 2v_r q\Bigr)
\\
&= \int_{B_1} |\nabla v_r|^2 -3\int_{\partial B_1} v_r^2 + \int_{\partial B_1} v_r(x\cdot\nabla q -3q)
\\
&= \int_{B_1} |\nabla v_r|^2 -3\int_{\partial B_1} v_r^2 \\
&= \int_{B_1} -v_r\Delta v_r +\int_{\partial B_1} v_r((v_r)_\nu-3v_r),
\end{split}
\]
where we used that $\Delta q\equiv 0$ and $x\cdot \nabla q=3q$. Thus, recalling \eqref{eq:der mon 3} we obtain
$$
\frac{d}{dr}H_3(r,v)\geq\frac2{r}\int_{B_1}v_r\Delta v_r=\frac{2}{r^{n+5}}\int_{B_r}v\Delta v.
$$
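For the reader's convenience, the last identity is simply the change of variables $y=rx$: since $v_r(x)=r^{-3}v(rx)$ we have $\Delta v_r(x)=r^{-1}(\Delta v)(rx)$, hence
$$
\frac{2}{r}\int_{B_1}v_r\,\Delta v_r\,dx=\frac{2}{r^{5}}\int_{B_1}(v\,\Delta v)(rx)\,dx=\frac{2}{r^{n+5}}\int_{B_r}v\,\Delta v\,dy.
$$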
Now, since $\Delta v=\Delta u-\Delta p_*-\Delta q=0$ inside $\{u>0\}$, we have
$$
v\Delta v=(p_*+q)\chi_{\{u=0\}},
$$
therefore
$$
\frac{d}{dr}H_3(r,v)\geq\frac2{r}\int_{B_1}v_r\Delta v_r=\frac{2}{r^{n+5}}\int_{B_r\cap\{u=0\}}(p_*+q).
$$
Noticing that
$$
p_*+q \ge p_* -|q| = \bigg(\sqrt{p_*}-\frac{|q|}{2\sqrt{p_*}}\bigg)^2-\frac{q^2}{4p_*} \geq -\frac{q^2}{4p_*} \geq -\frac{q^2}{2p_*}
$$
and that $\frac{q^2}{2p_*}$ is a $4$-homogeneous polynomial (this follows from the fact that $q=0$ on $\{p_*=0\}$, hence $q$ is divisible by $\sqrt{p_*}$),
we conclude that
$$
\frac{d}{dr}H_3(r,v)\geq-\frac{1}{r}\int_{B_1\cap\{u(r\,\cdot\,)=0\}}\frac{q^2}{p_*}\geq -\bigg\|\frac{q^2}{p_*}\bigg\|_{L^\infty(B_1)}\frac{|\{u(r\,\cdot\,)=0\}\cap B_1|}{r}.
$$
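Here, the passage from the integral over $B_r\cap\{u=0\}$ to the one over $B_1\cap\{u(r\,\cdot\,)=0\}$ uses the change of variables $y=rx$ together with the $4$-homogeneity of $q^2/p_*$:
$$
\int_{B_r\cap\{u=0\}}\frac{q^2}{2p_*}\,dy=r^{n+4}\int_{B_1\cap\{u(r\,\cdot\,)=0\}}\frac{q^2}{2p_*}\,dx.
$$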
Since $\lambda_*\geq 3$,
the result follows by Proposition \ref{decayest}.
\end{proof}
In order to apply the previous result, we need to check the size of the points where any third-order blow-up is harmonic and vanishes on $\{p_*=0\}$.
We begin with the case $n=2$.
\begin{lemma}\label{lemdim20}
Let $n=2$, $0\in \Sigma_1$, and $w:=u-p_*$. Assume that there exists a sequence $x_k\in \Sigma_1$ with $x_k \to 0$, and that
\[
w_{r_k} := \frac{(u-p_*)(r_k \,\cdot\, )}{r_k^3} \rightharpoonup \hat q
\qquad \text{in $W^{1,2}(B_1)$.}
\]
Then $\hat q$ is a $3$-homogeneous harmonic polynomial vanishing on $L:=\{p_*=0\}$ and satisfying $\|\hat q\|_{L^2(\partial B_1)}=H_3(0^+,w)$.
\end{lemma}
\begin{proof}
Note that if $\phi(0^+,u-p_*)>3$ then $\|w_r\|_{L^2(B_1)}=o(r^3)$ (see \eqref{eq:decay w}), hence $H_3(0^+,w)=0$ and the result holds with $\hat q\equiv 0$. So we can assume that $\phi(0^+,u-p_*)=3$.
Set
$$
\tilde w_{r_k}:=\frac{w_{r_k}}{\|w_{r_k}\|_{L^2(\partial B_1)}}=\frac{w_{r_k}}{H_3(r_k,w)},
$$
and denote by $q$ a limit point for $\tilde w_{r_k}$.
Note that $\|q\|_{L^2(\partial B_1)}=1$. Also, since $r\mapsto H_3(r,w)$ is monotone nondecreasing (see Lemma \ref{Hincreasing}) and $\hat q\not \equiv 0$, we deduce that $$\hat q=H_3(0^+,w)\,q.$$
This proves that $\|\hat q\|_{L^2(\partial B_1)}=H_3(0^+,w)$.
To conclude the proof it suffices to prove that $q$ is a $3$-homogeneous harmonic polynomial vanishing on $L:=\{p_*=0\}$.
We know by Proposition \ref{propblowup}(b) that $q$ is a $3$-homogeneous solution of the Signorini problem, see
\eqref{TOP}. Also,
applying Lemma \ref{cod1} with $r_k=2|x_k|$, we deduce that $y_k:=\frac{x_k}{r_k}\to y_\infty \in L\cap \partial B_{1/2}$ and that (thanks to \eqref{eq:q even y infty})
$$
\average\int_{\partial B_\rho}q^{\rm even}(y_\infty+x)^2 \leq C\rho^6
$$
(note that for $n=2$ we have that $\lambda_{*,x_k}\geq 3$ for all $k$, see Lemma \ref{possiblefreq}).
This implies in particular that $q^{\rm even}$ is $3$-homogeneous both with respect to $0$ and $y_\infty$, hence it must be one dimensional. Since $\Delta q=0$ outside $L$, this implies that
$q^{\rm even}$ is affine on each side of $L$, hence $q^{\rm even}\equiv 0$ (since $q^{\rm even}$ is $3$-homogeneous).
This proves that $q$ is odd with respect to $L$, so $q$ cannot have a singular Laplacian on $L$. Recalling that $\Delta q=0$ in $\mathbb{R}^2\setminus L$, this proves that $q$ is a $3$-homogeneous harmonic polynomial.
Finally, since $q \geq 0$ on $L$
and $q$ is $3$-homogeneous,
it must be $q|_L\equiv 0$.
\end{proof}
Thanks to the previous result and the Federer-type reduction argument developed in the previous section, we obtain the following:
\begin{lemma}
\label{lem:sigma h}
Let $n\geq 2$, $1\leq m \leq n-1$, and let
$\Sigma_{m}^{3rd}$
denote the set of singular points $x_\circ \in \Sigma_{m}$ such that $\phi(0^+,u(x_\circ+\,\cdot\,)-p_{*,x_\circ})\geq 3$ and the following holds: for any sequence $r_k\to 0$ such that
$$
w_{r_k} := \frac{u(x_\circ+r_k\,\cdot\,)-p_{*,x_\circ}(r_k \,\cdot\,)}{r_k^3} \rightharpoonup \hat q
\qquad \text{in $W^{1,2}(B_1)$,}
$$
$\hat q$ is a $3$-homogeneous harmonic polynomial vanishing on $\{p_{*,x_\circ}=0\}$ and satisfying $\|\hat q\|_{L^2(\partial B_1)}=H_3(0^+,u(x_\circ+\,\cdot\,)-p_{*,x_\circ})$.
Then:
\begin{enumerate}
\item[(i)] $\Sigma_{1}\setminus \Sigma_{1}^{3rd}$ consists of isolated points for $n\geq 2$;\\
\item[(ii)]${\rm dim}_\mathcal{H}(\Sigma_{m}\setminus \Sigma_{m}^{3rd})\leq m-1$ for $2 \leq m \leq n-1$.
\end{enumerate}
\end{lemma}
\begin{proof}
Since the argument is similar to those used in the previous section, we just explain the main steps, leaving the details to the interested reader.
Point (i) for $n=2$ follows immediately from Lemma \ref{lemdim20} and Lemma \ref{lem:mon 3}. The case $n\geq 3$
for $m=1$ follows instead by
Proposition \ref{propblowup}(a)
and Lemma \ref{codge2}.
Concerning the case $m= n-1\ge 2$,
recalling \eqref{eq:def anomalous} and Lemma \ref{abstract},
one can argue similarly to the proof of Lemma \ref{lem22} to prove that Lemma \ref{lemdim20} applies to all points in $\Sigma_{n-1}^g$
that are density points for $\Sigma_{n-1}^g$ with respect to the measure $H^\beta_\infty$, with $\beta>n-2$. Thus, thanks to Lemma \ref{abstract}(a), we deduce that Lemma \ref{lemdim20}
applies to all points in $\Sigma_{n-1}^g$ up to at most a set of Hausdorff dimension $n-2$.
Since ${\rm dim}_\mathcal{H}(\Sigma_{n-1}\setminus \Sigma_{n-1}^g)={\rm dim}_\mathcal{H}(\Sigma_{n-1}^a) \leq n-3$ (see Theorem \ref{thm:main}), this proves (ii) when $m=n-1$.
Analogously, in the case $2 \leq m \leq n-2$, using Proposition \ref{propblowup}(a) and arguing as in Lemma \ref{lem12},
we deduce that Lemma \ref{lemdim20}
applies to all points in $\Sigma_{m}^g$ up to at most a set of Hausdorff dimension $m-1$. Since ${\rm dim}_\mathcal{H}(\Sigma_{m}\setminus \Sigma_{m}^g)={\rm dim}_\mathcal{H}(\Sigma_{m}^a) \leq m-1$ (see Theorem \ref{thm:main}), this concludes the proof of (ii).
\end{proof}
We can now prove the uniqueness and continuity of third-order blow-ups at all points in $\Sigma_m^{3rd}$:
\begin{proposition}
\label{prop:C2}
Let $n\geq 2$, $1\leq m \leq n-1$,
and let $x_\circ \in \Sigma_m^{3rd}$. Then the following limit exists:
\begin{equation}
\label{eq:3rd limit}
\frac{u(x_\circ+rx)-p_{*,x_\circ}(rx)}{r^3}\rightharpoonup q_{*,x_\circ}(x)\qquad \text{in $W^{1,2}(B_1)$ as $r\to 0$,}
\end{equation}
where $q_{*,x_\circ}(x)$ is a 3-homogeneous harmonic polynomial vanishing on $\{p_{*,x_\circ}=0\}$ and satisfying $\|q_{*,x_\circ}\|_{L^2(\partial B_1)}=H_3(0^+,u(x_\circ+\,\cdot\,)-p_{*,x_\circ})$.
In addition, the above convergence is uniform on compact sets, and
$$
\text{the map }\Sigma_m^{3rd}\ni x_\circ \mapsto q_{*,x_\circ}\text{ is continuous.}
$$
\end{proposition}
\begin{proof}
Assume $0 \in \Sigma_m^{3rd}$. We first prove the existence of a limit.
Let $q_1$ and $q_2$ be two different limits obtained along two sequences $r_{k,1}$ and $r_{k,2}$ both converging to zero. Up to taking a subsequence of $r_{k,2}$ and relabeling the indices, we can assume that $r_{k,2}\leq r_{k,1}$ for all $k$.
Thus, thanks to Lemma \ref{lem:mon 3}, we have
$$H_3(r_{k,1},w-q_1)\geq H_3(r_{k,2},w-q_1)-C|r_{k,2}-r_{k,1}|\qquad \forall\,k,
$$
for some constant $C$ depending on $q_1$.
Thus, letting $k\to \infty$ we obtain
$$
0=\lim_{k\to \infty} \int_{B_1}(w_{r_{k,1}}-q_1)^2\geq \lim_{k\to \infty} \biggl(\int_{B_1}(w_{r_{k,2}}-q_1)^2-C|r_{k,2}-r_{k,1}|\biggr) =\int_{B_1}(q_2-q_1)^2.
$$
This proves the uniqueness of the limit.
We now prove the continuity of the map $x_\circ \mapsto q_{*,x_\circ}$ at $0 \in \Sigma_m^{3rd}$.
Fix $\varepsilon>0$, and consider a sequence $x_k\in \Sigma_m^{3rd}$ with $x_k\to 0$.
Thanks to \eqref{eq:3rd limit}, there exists a small radius $r_\varepsilon>0$ such that
\begin{equation}
\label{eq:eps}
\int_{\partial B_1}\biggl|\frac{u(r_\varepsilon x)-p_{*,0}(r_\varepsilon x)}{r_\varepsilon^3}- q_{*,0}(x)\biggr|^2 \leq \varepsilon.
\end{equation}
Now, let $R_k:\mathbb{R}^n\to \mathbb{R}^n$ be a rotation that maps the $m$-dimensional plane $L_k:=\{p_{*,x_k}=0\}$ onto $L_0:=\{p_{*,0}=0\}$,
and note that $R_k\to {\rm Id}$ as $k \to \infty$ (this follows by the continuity of $\Sigma_m\ni x\mapsto p_{*,x}$).
Then, since $q_{*,0}\circ R_k$ vanishes on $L_k$, we can apply Lemma \ref{lem:mon 3} at $x_k$ with $q=q_{*,0}\circ R_k$ to deduce that
\begin{align*}
\int_{\partial B_1}|q_{*,x_k} - q_{*,0}\circ R_k|^2 &=\lim_{r\to 0}
\int_{\partial B_1}\biggl|\frac{u(x_k+r x)-p_{*,x_k}(r x)}{r^3} - q_{*,0}\circ R_k(x)\bigg|^2\\
& \leq
\int_{\partial B_1}\biggl|\frac{u(x_k+r_\varepsilon x)-p_{*,x_k}(r_\varepsilon x)}{r_\varepsilon^3} - q_{*,0}\circ R_k(x)\bigg|^2 +Cr_\varepsilon.
\end{align*}
Note that the constant $C$ above is independent of $k$ since, by the continuity of $p_{*,x_k}$,
$p_{*,x_k}\circ R_k^{-1} \geq p_{*,0}/2$ for $k$ large enough, therefore
$$\bigg\|\frac{(q_{*,0}\circ R_k)^2}{p_{*,x_k}}\bigg\|_{L^\infty(B_1)}\leq 2\bigg\|\frac{q_{*,0}^2}{p_{*,0}}\bigg\|_{L^\infty(B_1)}\qquad \forall\,k\gg 1.$$
Hence, since $R_k\to {\rm Id}$, letting $k \to \infty$ and recalling \eqref{eq:eps} we obtain
\begin{align*}
\limsup_{k\to \infty}\int_{\partial B_1}
|q_{*,x_k} - q_{*,0}|^2&\leq \lim_{k\to \infty} \int_{\partial B_1}\biggl|\frac{u(x_k+r_\varepsilon x)-p_{*,x_k}(r_\varepsilon x)}{r_\varepsilon^3} - q_{*,0}\circ R_k(x)\bigg|^2+Cr_\varepsilon\\
& \le \varepsilon+Cr_\varepsilon.
\end{align*}
Since $\varepsilon>0$ is arbitrary, this proves the continuity at $0$.
In addition, arguing as above (using Lemma \ref{lem:mon 3}) one sees that the convergence in \eqref{eq:3rd limit} is locally uniform with respect to $x_\circ$.
\end{proof}
\begin{remark}
It is important to observe that the above proof shows something stronger:
if $x_k\in \Sigma_m^g$ (so their frequency is at least $3$, see \eqref{eq:def anomalous}) and $x_k \to x_\circ$ with $x_\circ \in \Sigma_m^{3rd}$ (as in the proof above, we take $x_\circ=0$), then
\begin{equation}
\label{eq:cont k}
\lim_{k\to \infty}\int_{\partial B_1}
|q_{x_k} - q_{*,0}|^2=0
\end{equation}
whenever $q_{x_k}$ is an arbitrary limit point
of $r^{-3}\left(u(x_k+r x)-p_{*,x_k}(r x)\right)$ as $r \to 0$. In other words, even if the third order blow-up of $u-p_{*,x_k}$ at $x_k$ may not be unique, any such limit has to converge to $q_{*,0}$ as $x_k\to x_\circ$.
Indeed, if $\{r_{k,j}\}_{j\geq 1}$ is a sequence converging to $0$ such that
$$
q_{x_k}(x)=\lim_{j\to \infty}\frac{u(x_k+r_{k,j} x)-p_{*,x_k}(r_{k,j} x)}{r_{k,j}^3},
$$
then Lemma \ref{lem:mon 3} applied at $x_k$ with $q=q_{*,0}\circ R_k$ yields (since $r_{k,j}\leq r_\varepsilon$ for $j\gg 1$)
\begin{align*}
\int_{\partial B_1}|q_{x_k} - q_{*,0}\circ R_k|^2 &=\lim_{j \to \infty}
\int_{\partial B_1}\biggl|\frac{u(x_k+r_{k,j} x)-p_{*,x_k}(r_{k,j} x)}{r_{k,j}^3} - q_{*,0}\circ R_k(x)\bigg|^2\\
& \leq
\int_{\partial B_1}\biggl|\frac{u(x_k+r_\varepsilon x)-p_{*,x_k}(r_\varepsilon x)}{r_\varepsilon^3} - q_{*,0}\circ R_k(x)\bigg|^2 +Cr_\varepsilon,
\end{align*}
and the result follows as in the proof of Proposition \ref{prop:C2}.
Note also that, when $n=2$, $q_{*,0}$ is odd with respect to the line $\{p_{*,0}=0\}$ (see the proof of Lemma \ref{lemdim20}).
Hence it follows by \eqref{eq:cont k}
and the continuity of $p_{*,x_k}$
that
\begin{equation}
\label{eq:cont k2}
\lim_{k\to \infty}\int_{\partial B_1}
|q_{x_k}^{\rm odd} - q_{*,0}|^2+\int_{\partial B_1}
|q_{x_k}^{\rm even}|^2=0,
\end{equation}
where $q_{x_k}^{\rm odd}$ (resp. $q_{x_k}^{\rm even}$) is the odd (resp. even) part of $q_{x_k}$ with respect to $\{p_{*,x_k}=0\}$. In particular, as in Lemma \ref{lemdim20}, $q_{x_k}^{\rm odd}$ is a 3-homogeneous harmonic polynomial.
\end{remark}
As a consequence of the previous results, we obtain the following result on the structure of the singular strata $\Sigma_{m}$.
\begin{theorem}
\label{thm:C2}
The following holds:
\begin{enumerate}
\item[($n=2$)] $\Sigma_1$ is locally contained in a $C^{2}$ curve.
\item[($n\geq 3$)] For any $m=1,\ldots,n-1$, the set $\Sigma_{m}$ can be covered by a countable family of $C^2$ $m$-dimensional manifolds, except for at most a set of Hausdorff dimension $m-1$.
\end{enumerate}
\end{theorem}
\begin{proof}
We start with the case $n=2$.
Let us consider the map
\begin{equation}
\label{eq:cont q odd}
\Sigma_1\ni x_\circ \mapsto q_{x_\circ}^{\rm odd},
\end{equation}
where $q_{x_\circ}^{\rm odd}$ is the odd part of $q_{x_\circ}$ with respect to $\{p_{*,x_\circ}=0\}$, and:\\
- if $x_\circ \in \Sigma_1^{3rd}$, then $q_{x_\circ}=q_{*,x_\circ}$ is the third order limit provided by Proposition \ref{prop:C2};\\
- if $x_\circ \in \Sigma_1\setminus\Sigma_1^{3rd}$, then $q_{x_\circ}$ is an arbitrary limit point
of $r^{-3}\left(u(x_\circ+r x)-p_{*,x_\circ}(r x)\right)$ as $r\to 0$ (recall that $\Sigma_1=\Sigma_1^g$ for $n=2$, see Lemma \ref{possiblefreq}).
Fix $R\in (0,1)$, and
for $(r,x_\circ) \in (0,1-R] \times (\Sigma_1\cap \overline B_R) $, let us define the function
\[
\mathcal F(r,x_\circ) :=
r^{-3} \biggl(\average\int_{B_r} \left(u(x_\circ+\cdot) -p_{*,x_\circ} - q_{x_\circ}^{\rm odd} \right)^2 \biggr)^{1/2 }.
\]
Note that, as a consequence of Lemma \ref{lem:mon 3}, the map $r\mapsto \mathcal F(r,x_\circ)$ is almost monotone, hence the limit as $r\to 0^+$ exists. Also, for $r>0$ fixed, the map $$\Sigma_1\cap \overline B_R\ni x_\circ\mapsto \mathcal F(r,x_\circ)$$ is continuous as a consequence of \eqref{eq:cont k} and \eqref{eq:cont k2} (recall that the set $\Sigma_1\setminus \Sigma_1^{3rd}$ consists of isolated points by Lemma \ref{lem:sigma h}(i), so $\mathcal F(r,\cdot)$
is trivially continuous at such points).
Thus, as in Lemma \ref{lem:laststep},
the almost monotonicity implies that $x_\circ\mapsto \mathcal F(0^+,x_\circ)$ is upper semicontinuous. Since $\mathcal F(0^+,\cdot)=0$
on $\Sigma_1^{3rd}$ (by Proposition \ref{prop:C2}) we deduce that, for any $\varepsilon>0$, there exists $r_\varepsilon>0$ such that
\begin{equation}
\label{eq:cont F}
\mathcal F(r,x_\circ)\leq \varepsilon \qquad \forall\,x_\circ \in \Sigma_1 \cap \overline B_R
\quad \text{s.t.}\quad {\rm dist}(x_\circ,\Sigma_1^{3rd})\leq r_\varepsilon,\qquad \forall\,r\in (0,r_\varepsilon].
\end{equation}
Now, to any point $x_\circ\in \Sigma_1$ we associate the third order polynomial
\[P_{x_\circ} (x) := p_{*,x_\circ}(x-x_\circ)+q_{x_\circ}^{\rm odd}(x-x_\circ),\]
and we consider the function $\mathcal G:\Sigma_1 \times \Sigma_1\to \mathbb{R}$ defined as
$$
\mathcal G(x_\circ, x):= \frac{1}{\rho_{x_\circ, x}^3}\big\| (P_{x_\circ} -P_x)(\rho_{x_\circ, x}\,\cdot\, )\big\|_{L^2(B_1)} ,\qquad \rho_{x_\circ, x}:=|x-x_\circ|.
$$
We want to prove that $\mathcal G$ is uniformly continuous on $(\Sigma_1 \cap \overline B_R)\times (\Sigma_1 \cap \overline B_R)$ for any $R\in (0,1)$.
Observe that, thanks to Lemma \ref{lem:sigma h}(i), the set
$$
O_{r,R}:=\left\{x_\circ \in \Sigma_1 \cap \overline B_R\,:\,{\rm dist}(x_\circ,\Sigma_1^{3rd})\geq r\right\}
$$
is finite for any $r>0$. In particular, if we define
$$
U_{r,R}:=\left\{x_\circ \in \Sigma_1 \cap \overline B_R\,:{\rm dist}(x_\circ,\Sigma_1^{3rd})\leq r\right\},
$$
then for any $\varepsilon>0$
there exists $\delta=\delta(\varepsilon)>0$ small enough such that
$$
{\rm dist}(x_1,x_2)> \delta\qquad \forall\,
(x_1,x_2) \in (O_{\varepsilon,r_\varepsilon/2}\times O_{\varepsilon,r_\varepsilon/2})\cup (O_{\varepsilon,r_\varepsilon}\times U_{\varepsilon,r_\varepsilon/2})
$$
(here $r_\varepsilon>0$ is as in \eqref{eq:cont F}).
Hence it is enough to check the continuity of $\mathcal G$ on $U_{\varepsilon,r_\varepsilon}\times U_{\varepsilon,r_\varepsilon}$.
Note that, arguing exactly as in \eqref{triangle!}, it follows that
\[
\mathcal G(x_\circ, x) \le \mathcal F(\rho_{x_\circ, x}, x_\circ) + \mathcal F(2\rho_{x_\circ, x}, x)\qquad \forall \,x_\circ, x \in \Sigma_1.
\]
In particular, provided $\rho_{x_\circ, x}=|x-x_\circ|\leq r_\varepsilon/2$, then it follows by \eqref{eq:cont F} that
$\mathcal G(x_\circ, x) \leq 2\varepsilon$
whenever $(x_\circ, x) \in U_{\varepsilon,r_\varepsilon}\times U_{\varepsilon,r_\varepsilon}$, which proves the desired uniform continuity of $\mathcal G$.
Since the norm $\|\cdot\|_{L^2(B_1)}$ is equivalent to the norm $\|\cdot\|_{C^3(B_1)}$ on the space of third order polynomials,
the uniform continuity of $\mathcal G$
implies that the polynomials $P_{x_\circ}$ are continuous in the sense of Whitney's Theorem: for any $R \in (0,1)$ there exists a modulus of continuity $\omega_R$ such that
$$
| D^kP_{x_\circ} (x) -D^k P_x(x) | \le \omega_{R}(|x-x_\circ|)|x-x_\circ|^{3-k} \qquad \forall\,x_\circ,x \in \Sigma_1 \cap \overline B_R,\,k=0,1,2,3.
$$
Since $\Sigma_1$ is closed, the set $\Sigma_1 \cap \overline B_R$ is compact, so
this allows us to apply the classical Whitney's Theorem to find a map $F \in C^3(\mathbb{R}^2)$ such that
$$
F(x)=p_{*,x_\circ}(x-x_\circ)+q_{*,x_\circ}(x-x_\circ)+o(|x-x_\circ|^3) \qquad \forall\, x_\circ \in \Sigma_1^{3rd}\cap \overline{B_R},
$$
and we conclude by the Implicit Function Theorem (see the proof of Lemma \ref{lem:laststep}).
\smallskip
Concerning the higher dimensional case,
since ${\rm dim}_\mathcal{H}(\Sigma_{m} \setminus \Sigma_{m}^{3rd})\leq m-1$
(see Lemma \ref{lem:sigma h}),
for any $j \in \mathbb N$ we can find a countable family of balls $\{\hat B_i\}$ such that
$$
\Sigma_{m} \setminus \Sigma_{m}^{3rd}
\subset \bigcup_i \hat B_i=:\mathcal O_j,
\quad \text{and}\quad \sum_i{\rm diam}(\hat B_i)^{m-1+1/j}<\frac1j.
$$
In particular $\mathcal H^{m-1+1/j}_\infty(\mathcal O_j)<1/j$ (see \eqref{eq:def Haus}). Note that, because $\Sigma_{m}$ is relatively open in $\Sigma$ (by the continuity of $x_\circ\mapsto p_{*,x_\circ}$), the set $\overline {\Sigma_m}\setminus \Sigma_m$ is closed. Define the sets
$$
\mathcal U_j := \{x \, :\, {\rm dist}(x,\overline{\Sigma_m}\setminus\Sigma_m)<1/j\},\qquad K_j:=\Sigma_{m}\setminus (\mathcal O_j\cup \mathcal U_j).
$$
Since $\mathcal O_j\cup \mathcal U_j$ is open and contains $\overline{\Sigma_m}\setminus\Sigma_m$, the set $K_j$ is closed.
Noticing that the polynomials $P_{x_\circ} (x) := p_{*,x_\circ}(x-x_\circ)+q_{*,x_\circ}(x-x_\circ)$
are continuous with respect to $x_\circ \in K_j$ (by Proposition \ref{prop:C2}), we can argue as we did above in case $n=2$ to conclude that $K_j$ can be locally covered by a $m$-dimensional manifold of class $C^2$.
Then the result follows by observing that
$\cup_jK_j=\Sigma_m\setminus \bigl(\cap_j \mathcal O_j\bigr)$ and
$\mathcal{H}^{\beta}_\infty(\cap_j \mathcal O_j)=0$ for any $\beta>m-1$, hence
${\rm dim}_\mathcal{H}(\cap_j \mathcal O_j)\leq m-1 $ (see \eqref{eq:def dim}).
\end{proof}
| {
"timestamp": "2017-11-28T02:05:15",
"yymm": "1709",
"arxiv_id": "1709.04002",
"language": "en",
"url": "https://arxiv.org/abs/1709.04002",
"abstract": "In the classical obstacle problem, the free boundary can be decomposed into \"regular\" and \"singular\" points. As shown by Caffarelli in his seminal papers \\cite{C77,C98}, regular points consist of smooth hypersurfaces, while singular points are contained in a stratified union of $C^1$ manifolds of varying dimension. In two dimensions, this $C^1$ result has been improved to $C^{1,\\alpha}$ by Weiss \\cite{W99}.In this paper we prove that, for $n=2$ singular points are locally contained in a $C^2$ curve. In higher dimension $n\\ge 3$, we show that the same result holds with $C^{1,1}$ manifolds (or with countably many $C^2$ manifolds), up to the presence of some \"anomalous\" points of higher codimension. In addition, we prove that the higher dimensional stratum is always contained in a $C^{1,\\alpha}$ manifold, thus extending to every dimension the result in \\cite{W99}.We note that, in terms of density decay estimates for the contact set, our result is optimal. In addition, for $n\\ge3$ we construct examples of very symmetric solutions exhibiting linear spaces of anomalous points, proving that our bound on their Hausdorff dimension is sharp.",
"subjects": "Analysis of PDEs (math.AP)",
"title": "On the fine structure of the free boundary for the classical obstacle problem",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109503004294,
"lm_q2_score": 0.7185943925708561,
"lm_q1q2_score": 0.7076796266282647
} |
https://arxiv.org/abs/1807.05923 | What is algebraic about algebraic effects and handlers? | This note recapitulates and expands the contents of a tutorial on the mathematical theory of algebraic effects and handlers which I gave at the Dagstuhl seminar 18172 "Algebraic effect handlers go mainstream". It is targeted roughly at the level of a doctoral student with some amount of mathematical training, or at anyone already familiar with algebraic effects and handlers as programming concepts who would like to know what they have to do with algebra. We draw an uninterrupted line of thought between algebra and computational effects. We begin on the mathematical side of things, by reviewing the classic notions of universal algebra: signatures, algebraic theories, and their models. We then generalize and adapt the theory so that it applies to computational effects. In the last step we replace traditional mathematical notation with one that is closer to programming languages. | \subsubsection*{Acknowledgment}
I thank Matija Pretnar for discussion and planning of the Dagstuhl tutorial and
these notes. Section~\ref{sec:what-coalg-about} on comodels was written jointly
with Matija and is based on his Dagstuhl tutorial with the same title.
\section{Algebraic theories}
\label{sec:algebraic-theories}
In algebra we study mathematical structures that are equipped with operations satisfying
equational laws. For example, a group is a structure $(G, \mathsf{u}, {\cdot}, {}^{-1})$,
where $\mathsf{u}$~is a constant, $\cdot$~is a binary operation, and ${}^{-1}$~is a unary
operation, satisfying the familiar group identities:
\begin{gather*}
(x \cdot y) \cdot z = x \cdot (y \cdot z),\\
\mathsf{u} \cdot x = x = x \cdot \mathsf{u},\\
x \cdot x^{-1} = \mathsf{u} = x^{-1} \cdot x.
\end{gather*}
There are alternative axiomatizations, for instance: a group is a monoid
$(G, \mathsf{u}, {\cdot})$ in which every element is invertible, i.e.,
$\all{x \in G} \some{y \in G} x \cdot y = \mathsf{u} = y \cdot x$. However, a
formulation all of whose axioms are equations is preferred, because its simple
logical form grants its models good structural properties.
It is important to distinguish the theory of an algebraic structure from the
algebraic structures it describes. In this section we shall study the
descriptions, which are known as \emph{algebraic} or \emph{equational theories}.
\subsection{Signatures, terms and equations}
\label{sec:signatures-equations}
A \emph{signature $\Sigma$} is a collection of \emph{operation symbols} with
\emph{arities} $\family{(\op{i}, \arity{i})}{i}$. The operation symbols
$\op{i}$ may be anything, but are usually thought of as syntactic entities,
while arities $\arity{i}$ are non-negative integers. An operation symbol whose
arity is~$0$ is called a \emph{constant} or a \emph{nullary} symbol. Operation
symbols with arities $1$, $2$ and $3$ are referred to as \emph{unary},
\emph{binary}, and \emph{ternary}, respectively.
A (possibly empty) list of distinct variables $x_1, \ldots, x_k$ is called a
\emph{context}. The \emph{$\Sigma$-terms in context $x_1, \ldots, x_k$} are
built inductively using the following rules:
\begin{enumerate}
\item each variable $x_i$ is a $\Sigma$-term in context $x_1, \ldots, x_k$,
\item if $t_1, \ldots, t_{\arity{i}}$ are $\Sigma$-terms in context $x_1, \ldots, x_k$ then
$\op{i}(t_1, \ldots, t_{\arity{i}})$ is a $\Sigma$-term in context $x_1, \ldots, x_k$.
\end{enumerate}
We write
\begin{equation*}
x_1, \ldots, x_k \mid t
\end{equation*}
to indicate that $t$ is a $\Sigma$-term in the given context. A \emph{closed
$\Sigma$-term} is a $\Sigma$-term in the empty context. No variables occur
in a closed term.
A \emph{$\Sigma$-equation} is a pair of $\Sigma$-terms $\ell$ and $r$ in a context
$x_1, \ldots, x_k$. We write
\begin{equation*}
x_1, \ldots, x_k \mid \ell = r
\end{equation*}
to indicate an equation in a context. We shall often elide the context and write simply
$\ell = r$, but it should be understood that there is an ambient context which contains at
least all the variables mentioned by $\ell$ and $r$.
A $\Sigma$-equation really is just a list of variables and a pair of terms, and
\emph{not} a logical statement. The context variables are \emph{not} universally
quantified, and we are not talking about first-order logic. Of course, a
$\Sigma$-equation is suggestively written as an equation because we do
eventually want to \emph{interpret} it as an assertion of equality, but until
such time (and even afterwards) it is better to think of contexts, terms, and
equations as ordinary mathematical objects, devoid of any imagined or special
meta-mathematical status. This remark will hopefully become clearer in
Section~\ref{sec:algebr-theor-with}.
When no confusion can arise we drop the prefix ``$\Sigma$-'' and simply speak about
terms and equations instead of $\Sigma$-terms and $\Sigma$-equations.
\begin{example}
\label{ex:monoid-signature}
%
The signature for the theory of a monoid has a nullary symbol $\mathsf{u}$ and a binary
symbol $\mathsf{m}$. There are infinitely many expressions in context $x, y$, such as
%
\begin{align*}
\mathsf{u}(),\quad
x,\quad
y,\quad
\mathsf{m}(\mathsf{u}(), \mathsf{u}()),\quad
\mathsf{m}(\mathsf{u}(), x),\quad
\mathsf{m}(y, \mathsf{u}()),\quad
\mathsf{m}(x, x),\quad
\mathsf{m}(y, x),
\ldots
\end{align*}
%
An equation in context $x, y$ is
%
\begin{equation*}
x, y \mid \mathsf{m}(y, x) = \mathsf{m}(\mathsf{m}(\mathsf{u}(), x), y).
\end{equation*}
%
It is customary to write a nullary symbol $\mathsf{u}()$ simply as $\mathsf{u}$, and to
use the infix operator~$\cdot$ in place of~$\mathsf{m}$. With such notation the
above equation would be written as
%
\begin{equation*}
x, y \mid y \cdot x = (\mathsf{u} \cdot x) \cdot y.
\end{equation*}
%
One might even omit $\cdot$ and the context, in which case the equation is
written simply as $y \, x = (\mathsf{u} \, x) \, y$. If we agree that $\cdot$
associates to the left then $(\mathsf{u} \, x) \, y$ may be written as
$\mathsf{u} \, x \, y$, and we are left with $y \, x = \mathsf{u} \, x \, y$,
which is what your algebra professor might write down. Note that we are
\emph{not} discussing validity of equations but only ways of displaying them.
\end{example}
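For readers who like to see things in code, here is a small illustrative sketch (ours, not part of the formal development; we use OCaml, and the names \texttt{term}, \texttt{Var} and \texttt{Op} are our own). Terms over a signature can be represented as an inductive datatype in which a variable is referred to by its position in the ambient context:
\begin{verbatim}
(* A hypothetical encoding of terms over a signature: operation symbols
   are strings, and a variable is an index into the ambient context.
   Arities are not enforced by the type; checking a term against a
   signature would be a separate well-formedness function. *)
type term =
  | Var of int                 (* the i-th context variable *)
  | Op  of string * term list  (* operation symbol applied to arguments *)

(* The two sides of the equation  x, y | m(y, x) = m(m(u(), x), y)
   in the context x, y, taking x = Var 0 and y = Var 1: *)
let lhs = Op ("m", [Var 1; Var 0])
let rhs = Op ("m", [Op ("m", [Op ("u", []); Var 0]); Var 1])
\end{verbatim}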
\subsection{Algebraic theories}
\label{sec:algebraic-theories-1}
An \emph{algebraic theory $\theory{T} = (\signature{T}, \equations{T})$}, also
called an \emph{equational theory}, is given by a signature~$\signature{T}$ and
a collection $\equations{T}$ of $\signature{T}$-equations.
We impose no restrictions on the number of operation symbols or equations, but at least in
classical treatments of the subject certain complications are avoided by insisting that
arities be non-negative integers.
\begin{example}
\label{ex:theory-group}
%
The theory~$\theory{Group}$ of a group is algebraic. In order to follow closely the
definitions we eschew the traditional notation $\cdot$ and ${}^{-1}$, and explicitly
display the contexts. We abide by such formalistic requirements once to demonstrate
them, but shall take notational liberties subsequently.
%
The signature $\signature{Group}$ is given by operation symbols $\mathsf{u}$,
$\mathsf{m}$, and $\mathsf{i}$ whose arities are $0$, $2$, and $1$, respectively. The
equations $\equations{Group}$ are:
%
\begin{align*}
x, y, z &\mid \mathsf{m}(\mathsf{m}(x, y), z) = \mathsf{m}(x, \mathsf{m}(y, z)),\\
x &\mid \mathsf{m}(\mathsf{u}(), x) = x \\
x &\mid \mathsf{m}(x, \mathsf{u}()) = x,\\
x &\mid \mathsf{m}(x, \mathsf{i}(x)) = \mathsf{u}()\\
x &\mid \mathsf{m}(\mathsf{i}(x), x) = \mathsf{u}().
\end{align*}
%
\end{example}
\begin{example}
\label{ex:semi-lattice}
%
The theory $\theory{Semilattice}$ of a semilattice is algebraic. It is given by a
nullary symbol $\bot$ and a binary symbol $\vee$, satisfying the equations
%
\begin{align*}
x \vee (y \vee z) &= (x \vee y) \vee z,\\
x \vee y &= y \vee x,\\
x \vee x &= x,\\
x \vee \bot &= x.
\end{align*}
%
It should be clear that the first equation has context $x, y, z$, the second one
in~$x, y$, and the last two in~$x$.
\end{example}
\begin{example}
\label{ex:field}
%
The theory of a field, as usually given, is not algebraic because the inverse~$0^{-1}$
is undefined, whereas the operations of an algebraic theory are always taken to be
total. However, a proof is required to show that there is no equivalent algebraic theory.
\end{example}
\begin{example}
\label{ex:pointed-set}
%
The theory $\theory{Set_\bullet}$ of a \emph{pointed set} has a constant $\bullet$ and
no equations.
\end{example}
\begin{example}
\label{ex:theory-empty}
%
The \emph{empty theory $\theory{Empty}$} has no operation symbols and no equations.
\end{example}
\begin{example}
\label{ex:theory-singleton}
%
The theory of a \emph{singleton $\theory{Singleton}$} has a constant $\star$ and the
equation $x = y$.
\end{example}
\begin{example}
\label{ex:lattice}
%
A bounded lattice is a partial order with finite infima and suprema. Such a formulation
is not algebraic because the infimum and supremum operators do not have fixed arities,
but we can reformulate it in terms of nullary and binary operations. Thus, the theory
$\theory{Lattice}$ of a bounded lattice has constants $\bot$ and $\top$, and two binary
operation symbols $\vee$ and $\wedge$, satisfying the equations:
%
\begin{align*}
x \vee (y \vee z) &= (x \vee y) \vee z, & x \wedge (y \wedge z) &= (x \wedge y) \wedge z,\\
x \vee y &= y \vee x, & x \wedge y &= y \wedge x,\\
x \vee x &= x, & x \wedge x &= x,\\
x \vee \bot &= x, & x \wedge \top &= x.
%
\intertext{In addition we need the \emph{absorption} laws:}
x \vee (x \wedge y) &= x, & x \wedge (x \vee y) &= x.
\end{align*}
%
Notice that the theory of a bounded lattice is the juxtaposition of two copies of
the theory of a semi-lattice from Example~\ref{ex:semi-lattice}, augmented with laws that relate them.
The partial order is recovered because $x \leq y$ is equivalent to
$x \vee y = y$ and to $x \wedge y = x$.
\end{example}
\begin{example}
\label{ex:finitely-generated-group}
%
A \emph{finitely generated group} is a group which contains a finite collection of
elements, called the \emph{generators}, such that every element of the group is obtained
by multiplications and inverses of the generators. It is not clear how to express this
condition using only equations, but a proof is required to show that there is no
equivalent algebraic theory.
\end{example}
\begin{example}
\label{ex:Cinfty-theory}
%
An example of an algebraic theory with many operations and equations is the theory of a
$\mathcal{C}^\infty$-ring. Let $\mathcal{C}^\infty(\RR^n, \RR^m)$ be the set of all smooth maps from $\RR^n$
to $\RR^m$. The signature for the theory of a $\mathcal{C}^\infty$-ring contains an $n$-ary
operation symbol $\op{f}$ for each $f \in \mathcal{C}^\infty(\RR^n, \RR)$. For all
$f \in \mathcal{C}^\infty(\RR^n, \RR)$, $h \in \mathcal{C}^\infty(\RR^m, \RR)$, and
$g_1, \ldots, g_n \in \mathcal{C}^\infty(\RR^m, \RR)$ such that
%
\begin{equation*}
f \circ (g_1, \ldots, g_n) = h,
\end{equation*}
%
the theory has the equation
%
\begin{equation*}
x_1, \ldots, x_m \mid
\op{f} (\op{g_1}(x_1, \ldots, x_m), \ldots, \op{g_n}(x_1, \ldots, x_m)) =
\op{h}(x_1, \ldots, x_m).
\end{equation*}
%
The theory contains the theory of a commutative unital ring as a subtheory. Indeed,
the ring operations on~$\RR$ are smooth maps, and so they appear as $\op{+}$,
$\op{\times}$, $\op{-}$ in the signature, and so do constants $\op{0}$ and $\op{1}$,
because all maps $\RR^0 \to \RR$ are smooth. The commutative ring equations are present
as well because the real numbers form a commutative ring.
\end{example}
\subsection{Interpretations of signatures}
\label{sec:interp-of-sign}
Let a signature $\Sigma$ be given. An \emph{interpretation~$I$} of $\Sigma$ is given by
the following data:
\begin{enumerate}
\item a set $\carrier{I}$, called the \emph{carrier},
\item for each operation symbol $\op{i}$ a map
%
\begin{equation*}
\sem{\op{i}}_I : \underbrace{\carrier{I} \times \cdots \times \carrier{I}}_{\arity{i}} \to \carrier{I},
\end{equation*}
%
called an \emph{operation}.
\end{enumerate}
The double bracket $\sem{{\ }}_I$ is called the \emph{semantic bracket} and is typically
used when syntactic entities (operation symbols, terms, equations) are mapped to
their mathematical counterparts. When no confusion can arise, we omit the subscript~$I$
and write just~$\sem{{\ }}$.
We abbreviate an $n$-ary product $\carrier{I} \times \cdots \times \carrier{I}$ as $\carrier{I}^n$. A
nullary product $\carrier{I}^0$ contains a single element, namely the empty tuple
$\unit$, so it makes sense to write $\carrier{I}^0 = \one = \set{\unit}$. Thus a nullary
operation symbol is interpreted by a map $\one \to \carrier{I}$, and such maps are in
bijective correspondence with the elements of~$\carrier{I}$, which would be the constants.
An interpretation~$I$ may be extended to $\Sigma$-terms. A $\Sigma$-term in context
\begin{equation*}
x_1, \ldots, x_k \mid t
\end{equation*}
is interpreted by a map
\begin{equation*}
\sem{x_1, \ldots, x_k \mid t}_I : \carrier{I}^k \to \carrier{I},
\end{equation*}
as follows:
\begin{enumerate}
\item the variable $x_i$ is interpreted as the $i$-th projection,
%
\begin{equation*}
\sem{x_1, \ldots, x_k \mid x_i}_I = \pi_i : \carrier{I}^k \to \carrier{I},
\end{equation*}
\item a compound term in context
%
\begin{equation*}
x_1, \ldots, x_k \mid \op{i}(t_1, \ldots, t_{\arity{i}})
\end{equation*}
%
is interpreted as the composition of maps
%
\begin{equation*}
\xymatrix@+6em{
{\carrier{I}^k} \ar[r]^{(\sem{t_1}_I, \ldots, \sem{t_{\arity{i}}}_I)}
&
{\carrier{I}^{\arity{i}}} \ar[r]^{\sem{\op{i}}_I}
&
{\carrier{I}}
}
\end{equation*}
%
where we elided the contexts $x_1, \ldots, x_k$ for the sake of brevity.
\end{enumerate}
\begin{example}
One interpretation of the signature from Example~\ref{ex:monoid-signature} is given by
the carrier set $\RR$ and the interpretations of operation symbols
%
\begin{align*}
\sem{\mathsf{u}}() &= 1 + \sqrt{5}, \\
\sem{\mathsf{m}}(a, b) &= a^2 + b^3.
\end{align*}
%
The term in context $x, y \mid \mathsf{m}(\mathsf{u}, \mathsf{m}(x, x))$ is interpreted
as the map $\RR \times \RR \to \RR$, given by the rule
%
\begin{equation*}
(a, b) \mapsto (a+1)^3 a^6 + 2 (3 + \sqrt{5}).
\end{equation*}
%
The same term in a context $y, x, z$ is interpreted as the map $\RR \times \RR \times \RR \to \RR$,
given by the rule
%
\begin{align*}
(a, b, c) &\mapsto (b+1)^3 b^6 + 2 (3 + \sqrt{5}).
\end{align*}
%
These are not the same map, as they do not even have the same domains!
\end{example}
The previous example shows why contexts should not be ignored. In mathematical practice
contexts are often relegated to guesswork for the reader, or are handled implicitly. For
example, in real algebraic geometry the solution set of the equation $x^2 + y^2 = 1$ is
either a unit circle in the plane or an infinitely extending cylinder of unit radius in
the space, depending on whether the context is $x, y$ or $x, y, z$. Which context is
meant is indicated one way or another by the author of the mathematical text.
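Continuing the illustrative OCaml sketch started after Example~\ref{ex:monoid-signature} (again ours, not part of the notes), the semantic bracket on terms is a structural recursion: variables are looked up in an environment listing the values of the context variables, and operation symbols are replaced by their interpretations. The helper names \texttt{eval} and \texttt{interp\_r} are our own.
\begin{verbatim}
(* Evaluation of a term under an interpretation.  [interp] interprets
   each operation symbol as a map on the carrier; [env] lists the values
   assigned to the context variables, so its length plays the role of
   the context. *)
let rec eval (interp : string -> 'a list -> 'a) (env : 'a list) (t : term) : 'a =
  match t with
  | Var i        -> List.nth env i
  | Op (f, args) -> interp f (List.map (eval interp env) args)

(* The interpretation on the reals from the example above:
   u() = 1 + sqrt 5   and   m(a, b) = a^2 + b^3. *)
let interp_r f args =
  match f, args with
  | "u", []     -> 1.0 +. sqrt 5.0
  | "m", [a; b] -> a *. a +. b *. b *. b
  | _           -> failwith "unknown operation symbol or wrong arity"
\end{verbatim}
For instance, \texttt{eval interp\_r [2.0; 3.0] (Op ("m", [Var 1; Var 0]))} computes $\sem{x, y \mid \mathsf{m}(y, x)}$ at $(2, 3)$, namely $3^2 + 2^3 = 17$.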
\subsection{Models of algebraic theories}
\label{sec:models-algebr-theor}
A \emph{model~$M$} of an algebraic theory~$\theory{T}$ is an interpretation of the signature
$\signature{T}$ which validates all the equations $\equations{T}$. That is, for every
equation
\begin{equation*}
x_1, \ldots, x_k \mid \ell = r
\end{equation*}
in~$\equations{T}$, the maps
\begin{equation*}
\sem{x_1, \ldots, x_k \mid \ell}_M : \carrier{M}^k \to \carrier{M}
\qquad\text{and}\qquad
\sem{x_1, \ldots, x_k \mid r}_M : \carrier{M}^k \to \carrier{M}
\end{equation*}
are equal. We refer to a model of~$\theory{T}$ as a \emph{$\theory{T}$-model} or
a \emph{$\theory{T}$-algebra}.
\begin{example}
A model $G$ of $\theory{Group}$, cf.\ Example~\ref{ex:theory-group}, is given by a
carrier set $\carrier{G}$ and maps
%
\begin{equation*}
\sem{u}_G : \one \to \carrier{G},\qquad
\sem{m}_G : \carrier{G} \times \carrier{G} \to \carrier{G},\qquad
\sem{i}_G : \carrier{G} \to \carrier{G},
\end{equation*}
%
interpreting the operation symbols $\mathsf{u}$, $\mathsf{m}$, and
$\mathsf{i}$, respectively, which validate the equations~$\equations{Group}$. This
amounts precisely to $(\carrier{G}, \sem{u}_G, \sem{m}_G, \sem{i}_G)$ being a
group, except that the unit is viewed as a map $\one \to \carrier{G}$ instead
of an element of~$\carrier{G}$.
\end{example}
\begin{example}
Every algebraic theory has the \emph{trivial model}, whose carrier is the
singleton~$\one$, and whose operations are interpreted by the unique maps
$\one^k \to \one$. All equations are satisfied because any two maps $\one^k \to \one$
are equal.
\end{example}
The previous example explains why one should \emph{not} require $0 \neq 1$ in a ring, as
that prevents the theory of a ring from being algebraic.
\begin{example}
The empty set is a model of a theory~$\theory{T}$ if, and only if, every operation symbol
of $\theory{T}$ has non-zero arity.
\end{example}
\begin{example}
A model of the theory~$\theory{Set_\bullet}$ of a pointed set, cf.\
Example~\ref{ex:pointed-set}, is a set $S$ together with an element $s \in S$ which
interprets the constant~$\bullet$.
\end{example}
\begin{example}
A model of the theory~$\theory{Empty}$, cf.\ Example~\ref{ex:theory-empty}, is the same
thing as a set.
\end{example}
\begin{example}
A model of the theory~$\theory{Singleton}$, cf.\ Example~\ref{ex:theory-singleton}, is
any set with precisely one element.
\end{example}
Suppose $L$ and $M$ are models of a theory~$\theory{T}$. Then we may form the
\emph{product of models} $L \times M$ by taking the cartesian product as the carrier,
\begin{equation*}
\carrier{L \times M} = \carrier{L} \times \carrier{M},
\end{equation*}
and pointwise operations,
\begin{equation*}
\sem{\op{i}}_{L \times M}(a, b) = (\sem{\op{i}}_L(a), \sem{\op{i}}_M(b)).
\end{equation*}
The equations $\equations{T}$ are valid in $L \times M$ because they are valid on each
coordinate separately. This construction can be extended to a product of any number of
models, including an infinite one.
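In the illustrative OCaml sketch (ours, not from the notes) the pointwise nature of the product is particularly transparent: an interpretation of the signature on the product of two carriers applies each interpretation to its own components.
\begin{verbatim}
(* Componentwise product of two interpretations of the same signature. *)
let product_interp
    (interp_l : string -> 'a list -> 'a)
    (interp_m : string -> 'b list -> 'b)
    : string -> ('a * 'b) list -> 'a * 'b =
  fun f args ->
    (interp_l f (List.map fst args), interp_m f (List.map snd args))
\end{verbatim}
That the equations of the theory remain valid in the product is precisely the observation that each coordinate is computed separately.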
\begin{example}
We may now prove that the theory of a field from Example~\ref{ex:field} is not
equivalent to an algebraic theory. There are fields of size 2 and 3, namely $\ZZ_2$ and
$\ZZ_3$. If there were an algebraic theory of a field, then $\ZZ_2 \times \ZZ_3$ would
be a field too, but it is not, and in fact there is no field of size~6.
\end{example}
\begin{example}
Similarly, the theory of a finitely generated group from
Example~\ref{ex:finitely-generated-group} cannot be formulated as an algebraic theory,
because an infinite product of non-trivial finitely generated groups is not finitely
generated.
\end{example}
\begin{example}
Let us give a model of the theory of a $\mathcal{C}^\infty$-ring from
Example~\ref{ex:Cinfty-theory}. Pick a smooth manifold~$M$, and let the carrier be the
set $\mathcal{C}^\infty(M, \RR)$ of all smooth scalar fields on~$M$. Given
$f \in \mathcal{C}^\infty(\RR^n, \RR)$, interpret the operation $\op{f}$ as composition with~$f$,
%
\begin{align*}
\sem{\op{f}} &: \mathcal{C}^\infty(M, \RR)^n \to \mathcal{C}^\infty(M, \RR) \\
\sem{\op{f}} &: (u_1, \ldots, u_n) \mapsto f \circ (u_1, \ldots, u_n).
\end{align*}
%
We leave it as an exercise to verify that all equations are validated by this
interpretation.
\end{example}
\subsection{Homomorphisms and the category of models}
\label{sec:homom-categ-models}
Suppose $L$ and $M$ are models of a theory~$\theory{T}$. A
\emph{$\theory{T}$-homomorphism} from $L$ to $M$ is a map $\phi : \carrier{L} \to \carrier{M}$ between the
carriers which commutes with operations: for every operation symbol $\op{i}$
of~$\theory{T}$, we have
\begin{equation*}
\phi \circ \sem{\op{i}}_L = \sem{\op{i}}_M \circ \underbrace{(\phi, \ldots, \phi)}_{\arity{i}}.
\end{equation*}
\begin{example}
A homomorphism between groups $G$ and $H$ is a map $\phi : \carrier{G} \to \carrier{H}$ between the
carriers such that, for all $a, b \in \carrier{G}$,
%
\begin{align*}
\phi(\sem{\mathsf{u}()}_G) &= \sem{\mathsf{u}()}_H,\\
\phi(\sem{\mathsf{m}}_G (a,b)) &= \sem{\mathsf{m}}_H (\phi(a), \phi(b)),\\
\phi(\sem{\mathsf{i}}_G (a)) &= \sem{\mathsf{i}}_H (\phi(a)).
\end{align*}
%
This is a convoluted way of saying that the unit maps to the unit, and that
$\phi$ commutes with the group operation and the inverses. Algebra textbooks
usually require only that a group homomorphism commute with the group
operation, which then implies that it also preserves the unit and commutes
with the inverse.
\end{example}
We may organize the models of an algebraic theory~$\theory{T}$ into a category $\Mod{T}$
whose objects are the models of the theory, and whose morphisms are homomorphisms of the
theory.
\begin{example}
The category of models of theory $\theory{Group}$, cf.\ Example~\ref{ex:theory-group},
is the usual category of groups and group homomorphisms.
\end{example}
\begin{example}
The category of models of the theory $\theory{Set_\bullet}$, cf.\
Example~\ref{ex:pointed-set}, has as its objects the pointed sets, which are pairs
$(S, s)$ with $S$ a set and $s \in S$ its \emph{point}, and as homomorphisms
the point-preserving functions between sets.
\end{example}
\begin{example}
The category of models of the empty theory $\theory{Empty}$, cf.\
Example~\ref{ex:theory-empty}, is just the category $\category{Set}$ of sets and
functions.
\end{example}
\begin{example}
The category of models of the theory of a singleton $\theory{Singleton}$, cf.\
Example~\ref{ex:theory-singleton}, is the category whose objects are all the singleton
sets. There is precisely one morphism between any two of them. This category is
equivalent to the trivial category which has just one object and one morphism.
\end{example}
\subsection{Models in a category}
\label{sec:models-category}
So far we have taken the models of an algebraic theory to be sets. More generally, we may
consider models in any category $\category{C}$ with finite products. Indeed, the
definitions of an interpretation and a model from Sections~\ref{sec:interp-of-sign}
and~\ref{sec:models-algebr-theor} may be directly transcribed so that they apply
to~$\category{C}$. An interpretation $I$ in $\category{C}$ is given by
\begin{enumerate}
\item an object $\carrier{I}$ in $\category{C}$, called the \emph{carrier},
\item for each operation symbol $\op{i}$ a morphism in $\category{C}$
%
\begin{equation*}
\sem{\op{i}}_I : \underbrace{\carrier{I} \times \cdots \times \carrier{I}}_{\arity{i}} \to \carrier{I}.
\end{equation*}
\end{enumerate}
Once again, we abbreviate the $k$-fold product of $\carrier{I}$ as $\carrier{I}^k$. Notice that a nullary
symbol is interpreted as a morphism $\carrier{I}^0 \to \carrier{I}$, which is a morphism $\one \to \carrier{I}$ from the terminal object in~$\category{C}$.
An interpretation~$I$ is extended to $\Sigma$-terms in contexts as follows:
\begin{enumerate}
\item the variable $x_1, \ldots, x_k \mid x_i$ is interpreted as the $i$-th projection,
%
\begin{equation*}
\sem{x_1, \ldots, x_k \mid x_i}_I = \pi_i : \carrier{I}^k \to \carrier{I},
\end{equation*}
\item a compound term in context
%
\begin{equation*}
x_1, \ldots, x_k \mid \op{i}(t_1, \ldots, t_{\arity{i}})
\end{equation*}
%
is interpreted as the composition of morphisms
%
\begin{equation*}
\xymatrix@+6em{
{\carrier{I}^k} \ar[r]^{(\sem{t_1}_I, \ldots, \sem{t_{\arity{i}}}_I)}
&
{\carrier{I}^{\arity{i}}} \ar[r]^{\sem{\op{i}}_I}
&
{\carrier{I}}
}
\end{equation*}
\end{enumerate}
A model of an algebraic theory~$\theory{T}$ in~$\category{C}$ is an interpretation~$M$ of
its signature $\signature{T}$ which validates all the equations. That is, for every
equation
\begin{equation*}
x_1, \ldots, x_k \mid \ell = r
\end{equation*}
in $\equations{T}$, the morphisms
\begin{equation*}
\sem{x_1, \ldots, x_k \mid \ell}_M : \carrier{M}^k \to \carrier{M}
\qquad\text{and}\qquad
\sem{x_1, \ldots, x_k \mid r}_M : \carrier{M}^k \to \carrier{M}
\end{equation*}
are equal.
The definition of a homomorphism carries over to the general setting as well. A
\emph{$\theory{T}$-homomorphism} between $\theory{T}$-models $L$ and $M$ in a category
$\category{C}$ is a morphism $\phi : \carrier{L} \to \carrier{M}$ in~$\category{C}$ such that, for every
operation symbol~$\op{i}$ in~$\theory{T}$, $\phi$ commutes with the interpretation of
$\op{i}$,
\begin{equation*}
\phi \circ \sem{\op{i}}_L = \sem{\op{i}}_M \circ \underbrace{(\phi, \ldots, \phi)}_{\arity{i}}.
\end{equation*}
The $\theory{T}$-models and $\theory{T}$-homomorphisms in a category $\category{C}$ form a
category $\ModC{\category{C}}{T}$.
\begin{example}
A model of the theory $\theory{Group}$ in the category $\category{Top}$ of
topological spaces and continuous maps is a topological group.
\end{example}
\begin{example}
What is a model of the theory $\theory{Group}$ in the category of groups $\category{Grp}$? Its
carrier is a group $(G, u, m, i)$ together with group homomorphisms
$\upsilon : \one \to G$, $\mu : G \times G \to G$, and $\iota : G \to G$ which satisfy
the group laws. Because $\upsilon$ is a group homomorphism, it maps the unit of the
trivial group~$\one$ to $u$, so the units $u$ and $\upsilon$ agree. The operations $m$
and $\mu$ agree too, because
%
\begin{equation*}
\mu(x, y) =
\mu(m(x, u), m(u, y)) =
m(\mu(x, u), \mu(u, y)) =
m(x, y),
\end{equation*}
%
where in the middle step we used the fact that $\mu$ is a group homomorphism. It is now
clear that the inverses $i$ and $\iota$ agree as well. Furthermore, taking into account
that $m$ and $\mu$ agree, we also obtain
%
\begin{equation*}
m(x, y) =
m(m(u, x), m(y, u)) =
m(m(u, y), m(x, u)) =
m(y, x).
\end{equation*}
%
The conclusion is that a group in the category of groups is an abelian group. The
category $\ModC{\category{Grp}}{Group}$ is therefore equivalent to the category of abelian groups.
\end{example}
\begin{example}
A model of the theory of a pointed set, cf.\ Example~\ref{ex:pointed-set}, in the
category of groups $\category{Grp}$ is a group $(G, u, m, i)$ together with a
homomorphism $\one \to G$ from the trivial group~$\one$ to $G$. However, there is
precisely one such homomorphism which therefore need not be mentioned at all. Thus a
pointed set in groups amounts to a group.
\end{example}
\subsection{Free models}
\label{sec:free-models}
Of special interest are the free models of an algebraic theory. Given an
algebraic theory~$\theory{T}$ and a set $X$, the \emph{free $\theory{T}$-model},
also called the \emph{free $\theory{T}$-algebra}, generated by~$X$ is a model
$M$ together with a map $\eta : X \to \carrier{M}$ such that, for every
$\theory{T}$-model $L$ and every map $f : X \to \carrier{L}$ there is a unique
$\theory{T}$-homomorphism $\overline{f} : M \to L$ for which the following
diagram commutes:
\begin{equation*}
\xymatrix{
{X}
\ar[r]^{\eta}
\ar[rd]_{f}
&
{\carrier{M}}
\ar[d]^{\overline{f}}
\\
&
\carrier{L}
}
\end{equation*}
The definition is a bit of a mouthful, but it can be understood as follows: the
free $\theory{T}$-model generated by $X$ is the ``most economical'' way of
making a $\theory{T}$-model out of the set~$X$.
\begin{example}
The free group generated by the empty set is the trivial group~$\one$ with
just one element. The map $\eta : \emptyset \to \one$ is the unique one, and
given any (unique) map $f : \emptyset \to \carrier{G}$ to a carrier of another
group~$G$, there is a unique group homomorphism $\overline{f} : \one \to G$.
The relevant triangle commutes automatically because it originates
at~$\emptyset$.
\end{example}
\begin{example}
The free group generated by the singleton set $\one$ is the group of integers
$(\ZZ, 0, {+}, {-})$. The map $\eta : \set{\star} \to \ZZ$ takes the generator
$\star$ to $1$. As an exercise you should verify that the integers have the
required universal property.
\end{example}
\begin{example}
Let $\finpow{X}$ be the set of all finite subsets of a set~$X$. We show that
$(\finpow{X}, \emptyset, {\cup})$ is the free semilattice generated by~$X$, cf.\
Example~\ref{ex:semi-lattice}. The map $\eta : X \to \finpow{X}$ takes $x \in X$ to the
singleton set $\eta(x) = \set{x}$. Given any semilattice $(L, \bot, {\vee})$ and a map
$f : X \to \carrier{L}$, define the homomorphism $\overline{f} : \finpow{X} \to \carrier{L}$ by
%
\begin{equation*}
\overline{f}(\set{x_1, \ldots, x_n}) = f(x_1) \vee \cdots \vee f(x_n).
\end{equation*}
%
Clearly, the required diagram commutes because
%
\begin{equation*}
\overline{f}(\eta(x)) = \overline{f}(\set{x}) = f(x).
\end{equation*}
%
If $g : \finpow{X} \to \carrier{L}$ is another homomorphism satisfying $g \circ \eta = f$ then
%
\begin{multline*}
g(\set{x_1, \ldots, x_n})
= g(\eta(x_1) \cup \cdots \cup \eta(x_n))
= g(\eta(x_1)) \vee \cdots \vee g(\eta(x_n)) \\
= f(x_1) \vee \cdots \vee f(x_n)
= \overline{f}(\set{x_1, \ldots, x_n}),
\end{multline*}
%
hence $\overline{f}$ is indeed unique.
\end{example}
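The extension $\overline{f}$ in this example can be computed as an ordinary fold. Here is a small illustrative sketch (ours, not from the notes), crudely representing finite subsets of the generators as lists; the names are our own, and $(\mathtt{bot}, \mathtt{join})$ stands for an assumed semilattice structure on the target.
\begin{verbatim}
(* The extension of f : X -> L along eta for the free semilattice on X.
   Finite subsets are represented as lists; duplicates and order do not
   matter because join is idempotent, commutative and associative. *)
let extend_semilattice (bot : 'a) (join : 'a -> 'a -> 'a)
    (f : 'x -> 'a) (s : 'x list) : 'a =
  List.fold_left (fun acc x -> join acc (f x)) bot s
\end{verbatim}
On the empty subset this returns \texttt{bot}, in accordance with $\overline{f}(\emptyset) = \bot$.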
\begin{example}
The free model generated by~$X$ of the theory of a pointed set, cf.\
Example~\ref{ex:pointed-set}, is the disjoint union $X + \one$ whose elements are of the
form $\iota_1(x)$ for $x \in X$ and $\iota_2(y)$ for $y \in \one$. The point is the
element $\iota_2(\unit)$. The map $\eta : X \to X + \one$ is the canonical
inclusion~$\iota_1$.
\end{example}
\begin{example}
The free model generated by~$X$ of the empty theory, cf.\ Example~\ref{ex:theory-empty},
is~$X$ itself, with $\eta : X \to X$ the identity map.
\end{example}
\begin{example}
The free model generated by~$X$ of the theory of a singleton, cf.\
Example~\ref{ex:theory-singleton}, is the singleton set~$\one$, with $\eta : X \to \one$
the only map it could be. This example shows that~$\eta$ need not be injective.
\end{example}
Every algebraic theory~$\theory{T}$ has a free model. Let us sketch its
construction. Given a signature $\Sigma$ and a set~$X$,
define~$\Tree{\Sigma}{X}$ to be the set of well-founded trees built inductively
as follows:
\begin{enumerate}
\item for each $x \in X$, there is a tree $\leaf{x} \in \Tree{\Sigma}{X}$,
\item for each operation symbol $\op{i}$ and trees
$t_1, \ldots, t_{\arity{i}} \in \Tree{\Sigma}{X}$, there is a tree, denoted by
$\op{i}(t_1, \ldots, t_{\arity{i}}) \in \Tree{\Sigma}{X}$, whose root is labeled by
$\op{i}$ and whose subtrees are $t_1, \ldots, t_{\arity{i}}$.
\end{enumerate}
By labeling the tree leaves with the keyword ``$\mathsf{return}$'' we are
anticipating their role in effectful computations, as will become clear later
on. From a purely formal point of view the choice of the label is immaterial.
The $\Sigma$-terms in context $x_1, \ldots, x_n$ are precisely the trees in
$\Tree{\Sigma}{\set{x_1, \ldots, x_n}}$, except that a variable $x_i$ is labeled
as $\leaf{x_i}$ when construed as a tree.
Suppose $x_1, \ldots, x_n \mid t$ is a $\Sigma$-term in context, and we are
given an assignment $\sigma : \set{x_1, \ldots, x_n} \to \Tree{\Sigma}{X}$ of
trees to variables. Then we may build the tree $\sigma(t)$ inductively as
follows:
\begin{enumerate}
\item $\sigma(t) = \sigma(x_i)$ if $t = x_i$,
\item $\sigma(t) = \op{i}(\sigma(t_1), \ldots, \sigma(t_n))$ if
$t = \op{i}(t_1, \ldots, t_n)$.
\end{enumerate}
In words, the tree $\sigma(t)$ is obtained by replacing each variable~$x_i$ in~$t$ with
the corresponding tree $\sigma(x_i)$.
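In the illustrative OCaml sketch above (ours, not part of the notes; the integer-indexed \texttt{term} type stands in for $\Tree{\Sigma}{X}$, with leaves represented by the constructor \texttt{Var}), this substitution is the evident structural recursion:
\begin{verbatim}
(* Substitution on trees: [sigma] assigns a tree to each variable, and
   [subst sigma t] replaces every leaf [Var i] of [t] by [sigma i]. *)
let rec subst (sigma : int -> term) (t : term) : term =
  match t with
  | Var i        -> sigma i
  | Op (f, args) -> Op (f, List.map (subst sigma) args)
\end{verbatim}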
Given a theory~$\theory{T}$, let $\approx_\theory{T}$ be the least equivalence relation on
$\Tree{\signature{T}}{X}$ such that:
\begin{enumerate}
\item for every equation $x_1, \ldots, x_n \mid \ell = r$ in $\equations{T}$ and for every
assignment $\sigma : \set{x_1, \ldots, x_n} \to \Tree{\signature{T}}{X}$, we have
%
\begin{equation*}
\sigma(\ell) \approx_{\theory{T}} \sigma(r).
\end{equation*}
%
\item $\approx_{\theory{T}}$ is a \emph{$\signature{T}$-congruence}: for every
operation symbol $\op{i}$ in $\signature{T}$, and for all trees
$s_1, \ldots, s_{\arity{i}}$ and $t_1, \ldots, t_{\arity{i}}$, if
%
\begin{equation*}
s_1 \approx_{\theory{T}} t_1,
\quad \ldots \quad,
s_{\arity{i}} \approx_{\theory{T}} t_{\arity{i}}
\end{equation*}
%
then
%
\begin{equation*}
\op{i}(s_1, \ldots, s_{\arity{i}}) \approx_{\theory{T}}
\op{i}(t_1, \ldots, t_{\arity{i}}).
\end{equation*}
\end{enumerate}
Define the carrier of the free model $\Free{T}{X}$ to be the quotient set
\begin{equation*}
\carrier{\Free{T}{X}} = \Tree{\signature{T}}{X} / {\approx_{\theory{T}}}.
\end{equation*}
Let $[t]$ be the $\approx_{\theory{T}}$-equivalence class of
$t \in \Tree{\signature{T}}{X}$. The interpretation of the operation symbol $\op{i}$ in
$\Free{T}{X}$ is the map $\sem{\op{i}}_{\Free{T}{X}}$ defined by
\begin{equation*}
\sem{\op{i}}_{\Free{T}{X}}([t_1], \ldots, [t_{\arity{i}}]) =
[\op{i}(t_1, \ldots, t_{\arity{i}})].
\end{equation*}
The map $\eta_X : X \to \Free{T}{X}$ is defined by
\begin{equation*}
\eta_X(x) = [\leaf{x}].
\end{equation*}
To see that we successfully defined a $\theory{T}$-model, and that it is freely generated
by~$X$, one has to verify a number of mostly straightforward technical details, which we
omit.
When a theory $\theory{T}$ has no equations, the free model generated by~$X$ is just the
set of trees $\Tree{\signature{T}}{X}$, because the relation $\approx_{\theory{T}}$ is equality.
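To make the construction of~$\Tree{\Sigma}{X}$ concrete, here is a minimal Haskell
sketch for the special case of a signature with a single binary operation symbol; the
names (\verb|Tree|, \verb|Leaf|, \verb|Join|, \verb|subst|) are our own illustration
rather than part of the development above, and the same pattern works for any finite
signature.
\begin{verbatim}
-- Well-founded trees over a signature with one binary operation symbol,
-- generated by variables of type x.
data Tree x = Leaf x | Join (Tree x) (Tree x)

-- Substitution: replace each leaf by the tree assigned to its variable.
subst :: (x -> Tree y) -> Tree x -> Tree y
subst sigma (Leaf x)   = sigma x
subst sigma (Join s t) = Join (subst sigma s) (subst sigma t)
\end{verbatim}
The free model of a theory with this signature would then be the quotient of
\verb|Tree x| by the least congruence generated by the equations, just as described
above.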
\subsection{Operations with general arities and parameters}
\label{sec:oper-gener-arit-param}
We have so far followed the classic mathematical presentation of algebraic
theories. To get a better fit with computational effects, we need to generalize
operations in two ways.
\subsubsection{General arities}
\label{sec:general-arities}
We shall require operations that accept an arbitrary, but fixed collection of
arguments. One might expect that the correct way to do so is to allow arities to
be ordinal or cardinal numbers, as these generalize natural numbers, but that
would be a thoroughly non-computational idea. Instead, let us observe that an
$n$-ary cartesian product
\begin{equation*}
\underbrace{X \times \cdots \times X}_{n}
\end{equation*}
is isomorphic to the exponential $X^{[n]}$, where
$[n] = \set{0, 1, \ldots, n-1}$. Recall that an exponential $B^A$ is the set of
all functions $A \to B$, and in fact we shall use the notations $B^A$ and
$A \to B$ interchangeably. If we replace $[n]$ by an arbitrary set~$A$, then we
can think of a map
\begin{equation*}
X^A \to X
\end{equation*}
as taking $A$-many arguments. We need reasonable notation for writing down an
operation symbol applied to $A$-many arguments, where $A$ is an arbitrary set. One
might be tempted to adapt the tuple notation and write something silly, such as
\begin{equation*}
\op{i}(\cdots t_a \cdots)_{a \in A},
\end{equation*}
but as computer scientists we know better than that. Let us use the notation that
is already provided to us by the exponentials, namely the $\lambda$-calculus. To
have $A$-many elements of a set $X$ is to have a map $\kappa : A \to X$, and
thus to apply the operation symbol $\op{i}$ to $A$-many arguments $\kappa$ we
simply write~$\op{i}(\kappa)$.
\begin{example}
Let us rewrite the group operations in the new notation. The empty set
$\emptyset$, the singleton $\one$, and the set of boolean values
%
\begin{equation*}
\bool = \set{\mathsf{false}, \mathsf{true}}
\end{equation*}
%
serve as arities. We use the conditional statement
%
\begin{equation*}
\cond{b}{x}{y}
\end{equation*}
%
as a synonym for what is usually written as definition by cases,
%
\begin{equation*}
\begin{cases}
x & \text{if $b = \mathsf{true}$,}\\
y & \text{if $b = \mathsf{false}$.}
\end{cases}
\end{equation*}
%
Now a group is given by a carrier set $G$ together with maps
%
\begin{align*}
\mathsf{u} &: G^\emptyset \to G,\\
\mathsf{m} &: G^\bool \to G,\\
\mathsf{i} &: G^\one \to G,
\end{align*}
%
satisfying the usual group laws, which we ought to write down using the
$\lambda$-notation. The associativity law is written like this:
%
\begin{multline*}
\mathsf{m}(\lam{b} \cond{b}{\mathsf{m}(\lam{c}\cond{c}{x}{y})}{z}) = \\
\mathsf{m}(\lam{b} \cond{b}{x}{\mathsf{m}(\lam{c} \cond{c}{y}{z})}).
\end{multline*}
%
Here is the right inverse law, where $\mathsf{O}_X : \emptyset \to X$ is
the unique map from $\emptyset$ to~$X$:
%
\begin{equation*}
\mathsf{m}(\lam{b} \cond{b}{x}{\mathsf{i}(\lam{\_}{x})}) =
\mathsf{u}(\mathsf{O}_G).
\end{equation*}
%
The symbol $\_$ indicates that the argument of the $\lambda$-abstraction is
ignored, i.e., that the function defined by the abstraction is constant. One
more example might help: $x$ squared may be written as
$\mathsf{m}(\lam{b} \cond{b}{x}{x})$ as well as $\mathsf{m}(\lam{\_} x)$.
\end{example}
Such notation is not well suited to algebraic manipulations, but it brings us
closer to the syntax of a programming language.
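To see how close we already are, here is a small Haskell sketch of the group
operations with general arities, instantiated (purely for illustration, as an
assumption of ours) with the additive group of integers; the names \verb|unitOp|,
\verb|mulOp|, \verb|invOp|, \verb|squareOp| are hypothetical.
\begin{verbatim}
import Data.Void (Void)

-- Operations with general arities on the carrier Integer, viewed as the
-- additive group: an A-ary operation takes its arguments as a map A -> Integer.
unitOp :: (Void -> Integer) -> Integer   -- arity: the empty set
unitOp _ = 0

mulOp :: (Bool -> Integer) -> Integer    -- arity: the booleans
mulOp k = k True + k False               -- the group operation is addition here

invOp :: (() -> Integer) -> Integer      -- arity: the singleton
invOp k = negate (k ())

-- "x squared" from the example, written with a constant argument map.
squareOp :: Integer -> Integer
squareOp x = mulOp (\_ -> x)
\end{verbatim}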
\subsubsection{Operations with parameters}
\label{sec:oper-with-param}
To motivate our second generalization, consider the theory of a module~$M$ over
a ring~$R$ (if you are not familiar with modules, think of the elements of $M$
as vectors and the elements of~$R$ as scalars). For it to be an algebraic
theory, we need to deal with scalar multiplication ${\cdot} : R \times M \to M$,
because it does not fit the established pattern. There are three possibilities:
\begin{enumerate}
\item We could introduce \emph{multi-sorted} algebraic theories whose operations
take arguments from several carrier sets. The theory of a module would have
two sorts, say $\mathsf{R}$ and $\mathsf{M}$, and scalar multiplication would
be a binary operation of arity $(\mathsf{R}, \mathsf{M}; \mathsf{M})$. (We
hesitate to write $\mathsf{R} \times \mathsf{M} \to \mathsf{M}$ lest the type
theorists get useful ideas.)
\item Instead of having a single binary operation taking a scalar and a vector,
we could have many unary operations taking a vector, one for each scalar.
\item We could view the scalar as an additional \emph{parameter} of a
unary operation on vectors.
\end{enumerate}
The second and the third options are superficially similar, but they differ in
their treatment of parameters. In one case the parameters are part of the
indexing of the signature, while in the other they are properly part of the
algebraic theory. We shall adopt operations with parameters because they
naturally model algebraic operations that arise as computational effects.
\begin{example}
The theory of a module over a ring~$(R, 0, {+}, {-}, {\cdot})$ has several
operations. One of them is scalar multiplication, which is a \emph{unary}
operation $\mathsf{mul}$ parameterized by elements of~$R$. That is, for every
$r \in R$ and term $t$, we may form the term
%
\begin{equation*}
\mathsf{mul}(r; t),
\end{equation*}
%
which we think of as~$t$ multiplied with~$r$. The remaining operations seem
not to be parameterized, but we can force them to be parameterized by fiat.
Addition is a binary operation $\mathsf{add}$ parameterized by the singleton
set~$\one$: the sum of $t_1$ and $t_2$ is written as
%
\begin{equation*}
\mathsf{add}(\unit; t_1, t_2).
\end{equation*}
%
We can use this trick in general: an operation without parameters is an
operation taking parameters from the singleton set.
\end{example}
Note that in the previous example we mixed theories and models. We spoke about
the \emph{theory} of a module with respect to a \emph{specific ring}~$R$.
\begin{example}
The theory of a $\mathcal{C}^\infty$-ring, cf.\ Example~\ref{ex:Cinfty-theory}, may be
reformulated using parameters. For every $n \in \NN$ there is an $n$-ary
operation symbol $\mathsf{app}_n$ whose parameter set is
$\mathcal{C}^\infty(\RR^n, \RR)$. What was written as
%
\begin{equation*}
\op{f}(t_1, \ldots, t_n)
\end{equation*}
%
in Example~\ref{ex:Cinfty-theory} is now written as
%
\begin{equation*}
\mathsf{app}_n(f; t_1, \ldots, t_n).
\end{equation*}
%
If you insist on the~$\lambda$-notation, replace the tuple
$(t_1, \ldots, t_n)$ of terms with a single function $t$ mapping from $[n]$ to
terms, and write $\mathsf{app}_n(f; t)$.
The operations $\mathsf{app}_n$ tell us what $\mathcal{C}^\infty$-rings are about: they
are structures whose elements can feature as arguments to smooth functions. In
contrast, an ordinary (commutative unital) ring is one whose elements can
feature as arguments to ``finite degree'' smooth maps, i.e., the polynomials.
\end{example}
\subsection{Algebraic theories with parameterized operations and general arities}
\label{sec:algebr-theor-with}
Let us restate the definitions of signatures and algebraic operations, with the
generalizations incorporated. For simplicity we work with sets and functions,
and leave consideration of other categories for another occasion.
A \emph{signature $\Sigma$} is given by a collection of \emph{operation symbols
$\op{i}$} with associated \emph{parameter sets $P_i$} and
\emph{arities~$A_i$}. For reasons that will become clear later, we write
\begin{equation*}
\opdecl{\op{i}}{P_i}{A_i}
\end{equation*}
to display an operation symbol $\op{i}$ with parameter set $P_i$ and
arity~$A_i$. The symbols may be anything, although we think of them as syntactic
entities, while $P_i$'s and $A_i$'s are sets.
Arbitrary arities require an arbitrary number of variables in context. We therefore
generalize terms in contexts to \emph{well-founded trees} over~$\Sigma$
generated by a set~$X$. These form a set $\Tree{\Sigma}{X}$ whose elements are
generated inductively as follows:
\begin{enumerate}
\item for every generator $x \in X$ there is a tree $\leaf{x}$,
\item if $p \in P_i$ and $\kappa : A_i \to \Tree{\Sigma}{X}$ then
$\op{i}(p, \kappa)$ is a tree whose root is labeled with~$\op{i}$ and whose
$A_i$-many subtrees are given by~$\kappa$.
\end{enumerate}
The usual $\Sigma$-terms in context $x_1, \ldots, x_k$ correspond to
$\Tree{\Sigma}{\set{x_1, \ldots, x_k}}$. Or to put it differently, the elements
of $\Tree{\Sigma}{X}$ may be thought of as terms with variables~$X$. In fact, we
shall customarily refer to them as terms.
An \emph{interpretation $I$ of a signature $\Sigma$} is given by:
\begin{enumerate}
\item a carrier set $\carrier{I}$,
\item for each operation symbol $\op{i}$ with parameter set~$P_i$ and arity~$A_i$,
a map
%
\begin{equation*}
\sem{\op{i}}_I : P_i \times \carrier{I}^{A_i} \longrightarrow \carrier{I}.
\end{equation*}
\end{enumerate}
The interpretation $I$ may be extended to trees. A tree $t \in \Tree{\Sigma}{X}$
is interpreted as a map
\begin{equation*}
\sem{t}_I : \carrier{I}^X \to \carrier{I}
\end{equation*}
as follows:
\begin{enumerate}
\item the tree $\leaf{x}$ is interpreted as the $x$-th projection,
%
\begin{align*}
\sem{\leaf{x}}_I : \carrier{I}^X \to \carrier{I},\\
\sem{\leaf{x}}_I : \eta \mapsto \eta(x),
\end{align*}
\item the tree $\op{i}(p, \kappa)$ is interpreted as the map
%
\begin{align*}
\sem{\op{i}(p, \kappa)}_I &: \carrier{I}^X \longrightarrow \carrier{I} \\
\sem{\op{i}(p, \kappa)}_I &:
\eta \mapsto
\sem{\op{i}}_I(p, \lam{a} \sem{\kappa(a)}_I(\eta)),
\end{align*}
\end{enumerate}
A \emph{$\Sigma$-equation} is a set $X$ and a pair of $\Sigma$-terms
$\ell, r \in \Tree{\Sigma}{X}$, written
\begin{equation*}
X \mid \ell = r.
\end{equation*}
We usually leave out~$X$. Given an interpretation $I$ of signature $\Sigma$, we
say that such an equation is \emph{valid} for~$I$ when the interpretations of
$\ell$ and $r$ give the same map.
An \emph{algebraic theory $\theory{T} = (\signature{T}, \equations{T})$} is
given by a signature $\signature{T}$ and a collection of $\Sigma$-equations
$\equations{T}$. A \emph{$\theory{T}$-model} is an interpretation for
$\signature{T}$ which validates all the equations~$\equations{T}$.
The notions of $\theory{T}$-morphisms and the category $\Mod{T}$ of
$\theory{T}$-models and $\theory{T}$-morphisms may be similarly generalized. We
do not repeat the definitions here, as they are almost the same. You should
convince yourself that every algebraic theory has a free model, which is still
built as a quotient of the set of well-founded trees.
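As a concrete rendering of the generalized signatures, the following Haskell sketch
(with names of our choosing) represents well-founded trees over a signature with a
single operation symbol of parameter set~$P$ and arity~$A$, together with the
extension of an interpretation to trees; a signature with several operation symbols
would simply have one constructor per symbol.
\begin{verbatim}
-- Trees over one operation symbol with parameter set p and arity a,
-- generated by variables of type x.
data Tree p a x = Leaf x | Op p (a -> Tree p a x)

-- An interpretation: a carrier c and the (curried) interpretation of the
-- operation symbol, of type p -> (a -> c) -> c.
newtype Interp p a c = Interp { opSem :: p -> (a -> c) -> c }

-- Extending the interpretation to trees: given an environment assigning
-- carrier elements to variables, compute the interpretation of a tree.
sem :: Interp p a c -> Tree p a x -> (x -> c) -> c
sem _ (Leaf x)     env = env x
sem i (Op p kappa) env = opSem i p (\y -> sem i (kappa y) env)
\end{verbatim}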
\section{Computational effects as algebraic operations}
\label{sec:comp-effects-as}
It is high time we provide some examples from programming. The original insight
by Gordon Plotkin and John
Power~\cite{plotkin01:_seman_algeb_operat,plotkin03:_algeb_operat_gener_effec}
was that many computational effects are naturally described by algebraic
theories. What precisely does this mean?
When a program runs on a computer, it interacts with the environment by
performing \emph{operations}, such as printing on the screen, reading from the
keyboard, inspecting and modifying external memory store, launching missiles,
etc. We may model these phenomena mathematically as operations on an algebra
whose elements are \emph{computations}. Leaving the exact nature of computations
aside momentarily, we note that a computation may be
\begin{itemize}
\item pure, in which case it terminates and returns a value, or
\item effectful, in which case it performs an operation.
\end{itemize}
(We are ignoring a third possibility, non-termination.) Let us write
\begin{equation*}
\return{v}
\end{equation*}
for a pure computation that returns the value~$v$. Think of a value as an inert
datum that needs no further computation, such as a boolean constant, a numeral,
or a $\lambda$-abstraction. An operation takes a \emph{parameter}~$p$, for
instance the memory location to be read, or the string to be printed, and a
\emph{continuation}~$\kappa$, which is a suspended computation expecting the
result of the operation, for instance the contents of the memory location that
has been read. Thus it makes sense to write
\begin{equation*}
\opcall{op}{p}{\kappa}
\end{equation*}
for the computation that performs the operation~$\kode{op}$, with parameter~$p$
and continuation~$\kappa$. The similarity with algebraic operations from
Section~\ref{sec:algebr-theor-with} is not incidental!
\begin{example}
The computation which increases the contents of memory location~$\ell$ by~$1$
and returns the original contents is written as
%
\begin{equation*}
\opcall{lookup}{\ell}{
\lam{x} \opcall{update}{(\ell,x + 1)}{
\lam{\_} \return{x}
}
}.
\end{equation*}
%
In some venerable programming languages we would write this as $\ell{+}{+}$.
Note that the operations happen from outside in: first the memory
location~$\ell$ is read, its value is bound to~$x$, then $x + 1$ is written to
memory location~$\ell$, the result of writing is ignored, and finally the
value of~$x$ is returned.
\end{example}
So far we have a notation that looks like algebraic operations, but to do things
properly we need signatures and equations. These depend on the computational
effects under consideration.
\begin{example}
\label{ex:theory-state}
%
The algebraic theory of \emph{state} with \emph{locations $L$} and
\emph{states $S$} has operations
%
\begin{equation*}
\opdecl{\kode{lookup}}{L}{S}
\qquad\text{and}\qquad
\opdecl{\kode{update}}{L \times S}{\one}.
\end{equation*}
%
First we have equations which state what happens on successive lookups and
updates to the same memory location. For all $\ell \in L$, $s \in S$ and
all continuations~$\kappa$:
%
\begin{align*}
\opcall{lookup}{\ell}{
\lam{s}{
\opcall{lookup}{\ell}{
\lam{t} \kappa \, s \, t}
}
} &=
\opcall{lookup}{\ell}{\lam{s}{\kappa \, s \, s}}
\\
\opcall{lookup}{\ell}{
\lam{s} \opcall{update}{(\ell, s)}{\kappa}
} &=
\kappa \, ()
\\
\opcall{update}{(\ell, s)}{
\lam{\_} \opcall{lookup}{\ell}{\kappa}
} &=
\opcall{update}{(\ell, s)}{\lam{\_} \kappa \, s}
\\
\opcall{update}{(\ell, s)}{
\lam{\_} \opcall{update}{(\ell, t)}{\kappa}
} &=
\opcall{update}{(\ell, t)}{\kappa}
\end{align*}
%
For example, the first equation says that two consecutive lookups from a
memory location give equal results. We ought to explain the precise nature
of~$\kappa$ in the above equations. If we translate the earlier examples into
the present notation, we see that~$\kappa$ corresponds to variables, which
leads to the idea that we should use a \emph{generic}~$\kappa$. Thus we let
$\kappa$ take some arguments and just return them as a tuple. In the first
equation we take $\kappa = \lam{s \, t } \leaf {(s, t)}$; in the second and
fourth equations we take $\kappa = \lam{\_} \leaf{\unit}$; and in the third
equation $\kappa = \lam{s} \leaf{s}$. All equations have empty contexts, as no
free variables occur in them. Unless specified otherwise, we shall
always take~$\kappa$ to be such a generic continuation.
There is a second set of equations stating that lookups and updates from
\emph{different} locations $\ell \neq \ell'$ distribute over each other:
%
\begin{align*}
\opcall{lookup}{\ell}{
\lam{s} \opcall{lookup}{\ell'}{\lam{s'} \kappa \, s \, s'}
} &=
\opcall{lookup}{\ell'}{
\lam{s'} \opcall{lookup}{\ell}{\lam{s} \kappa \, s \, s'}
}
\\
\opcall{update}{(\ell, s)}{
\lam{\_} \opcall{lookup}{\ell'}{\kappa}
} &=
\opcall{lookup}{\ell'}{
\lam{t} \opcall{update}{(\ell, s)}{
\lam{\_} \kappa \, t
}
} \\
\opcall{update}{(\ell, s)}{
\lam{\_} \opcall{update}{(\ell', s')}{\kappa}
} &=
\opcall{update}{(\ell', s')}{
\lam{\_} \opcall{update}{(\ell, s)}{\kappa}
}.
\end{align*}
%
Have we forgotten any equations?
It turns out that the theory is Hilbert-Post complete: if we add any equation
that does not already follow from these, the theory trivializes in the sense
that all equations become derivable.
\end{example}
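The signature of this theory is easily rendered in code. The following Haskell sketch
(our own illustration, with hypothetical names) represents computations as trees with
\verb|Lookup| and \verb|Update| nodes and re-expresses the earlier increment example;
the equations of the theory are not enforced by the data type.
\begin{verbatim}
-- Trees over the state signature: locations of type l, states of type s,
-- return values of type v.
data St l s v = Return v
              | Lookup l (s -> St l s v)
              | Update (l, s) (St l s v)

-- The increment computation from before, at location loc: read the contents,
-- write the incremented value, and return the original contents.
incrAt :: Num s => l -> St l s s
incrAt loc = Lookup loc (\x -> Update (loc, x + 1) (Return x))
\end{verbatim}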
\begin{example}
\label{ex:theory-io}
%
The theory of \emph{input and output} (I/O) has operations
%
\begin{equation*}
\opdecl{\kode{print}}{S}{\one}
\qquad\text{and}\qquad
\opdecl{\kode{read}}{\one}{S},
\end{equation*}
%
where $S$ is the set of entities that are read or written, for example bytes,
or strings. There are no equations. We may now write the obligatory hello world:
%
\begin{equation*}
\opcall{print}{\text{`Hello world!'}}{\lam{\_} \return{\unit}}.
\end{equation*}
\end{example}
\begin{example}
\label{ex:theory-exception}
%
The theory of a pointed set, cf.\ Example~\ref{ex:pointed-set}, is the theory of
an \emph{exception}. The point $\bullet$ is a constant, which we rename to a
nullary operation
%
\begin{equation*}
\opdecl{\kode{abort}}{\one}{\emptyset}.
\end{equation*}
%
There are no equations. For example, the computation
%
\begin{equation*}
\opcall{read}{\unit}{
\lam{x}{
\cond{x < 0}{\opcall{abort}{\unit}{\mathsf{O}_\ZZ}}{\return{(x+1)}}
}
}
\end{equation*}
%
reads an integer~$x$ from standard input, raises an exception if $x$ is
negative, otherwise it returns its successor.
\end{example}
\begin{example}
\label{ex:non-determinism}
%
Let us take the theory of a semilattice, cf.\ Example~\ref{ex:semi-lattice}, but without the
unit. It has a binary operation~$\vee$ satisfying
%
\begin{align*}
x \vee x &= x, \\
x \vee y &= y \vee x, \\
(x \vee y) \vee z &= x \vee (y \vee z).
\end{align*}
%
This is the algebraic theory of (one variant of) \emph{non-determinism}.
Indeed, the binary operation $\vee$ corresponds to a choice operation
%
\begin{equation*}
\opdecl{\kode{choose}}{\one}{\bool}
\end{equation*}
%
which (non-deterministically) returns a bit, or chooses a computation, depending
on how we look at it. Written in continuation notation, it chooses a bit~$b$
and passes it to the continuation~$\kappa$,
%
\begin{equation*}
\opcall{choose}{\unit}{\lam{b}{\kappa \, b}},
\end{equation*}
%
whereas with the traditional notation it chooses between two computations
$\kappa_1$ and $\kappa_2$,
%
\begin{equation*}
\kode{choose}(\kappa_1, \kappa_2).
\end{equation*}
%
\end{example}
\begin{example}
\label{ex:theory-single-state}
%
Algebraic theories may be combined. For example, if we want a theory
describing state and I/O we may simply adjoin the signatures and equations of
both theories to obtain their combination.
Sometimes we want to combine theories so that the operations between them
interact. To demonstrate this, let us consider the theory of a single stateful
memory location holding elements of a set $S$. The operations are
%
\begin{equation*}
\opdecl{\kode{get}}{\one}{S}
\qquad\text{and}\qquad
\opdecl{\kode{put}}{S}{\one}.
\end{equation*}
%
The equations are
%
\begin{align}
\label{eq:state-get-get}%
\opcall{get}{\unit}{
\lam{s}{
\opcall{get}{\unit}{
\lam{t} \kappa \, s \, t}
}
} &=
\opcall{get}{\unit}{\lam{s}{\kappa \, s \, s}}
\\
\label{eq:state-get-put}%
\opcall{get}{\unit}{
\lam{s} \opcall{put}{s}{\kappa}
} &=
\kappa \, ()
\\
\label{eq:state-put-get}%
\opcall{put}{s}{
\lam{\_} \opcall{get}{\unit}{\kappa}
} &=
\opcall{put}{s}{\lam{\_} \kappa \, s}
\\
\label{eq:state-put-put}%
\opcall{put}{s}{
\lam{\_} \opcall{put}{t}{\kappa}
} &=
\opcall{put}{t}{\kappa}
\end{align}
%
This is just the first group of equations from Example~\ref{ex:theory-state},
except that we need not specify which memory location to read from.
Can the theory of states with many locations from
Example~\ref{ex:theory-state} be obtained by a combination of many
instances of the theory of a single state? That is, to model $I$-many states,
we combine $I$-many copies of the theory of a single state, so that for every
$\iota \in I$ we have operations
%
\begin{equation*}
\opdecl{\kode{get}_\iota}{\one}{S_\iota}
\qquad\text{and}\qquad
\opdecl{\kode{put}_\iota}{S_\iota}{\one},
\end{equation*}
%
with the above equations. We also need to postulate \emph{distributivity} laws
expressing the fact that operations from instance~$\iota$ distribute over
those of instance~$\iota'$, so long as $\iota \neq \iota'$:
%
\begin{align*}
\opcall{get_\iota}{\unit}{
\lam{s} \opcall{get_{\iota'}}{\unit}{\lam{s'} \kappa \, s \, s'}
} &=
\opcall{get_{\iota'}}{\unit}{
\lam{s'} \opcall{get_\iota}{\unit}{\lam{s} \kappa \, s \, s'}
}
\\
\opcall{put_\iota}{s}{
\lam{\_} \opcall{get_{\iota'}}{\unit}{\kappa}
} &=
\opcall{get_{\iota'}}{\unit}{
\lam{t} \opcall{put_\iota}{s}{
\lam{\_} \kappa \, t
}
} \\
\opcall{put_\iota}{s}{
\lam{\_} \opcall{put_{\iota'}}{s'}{\kappa}
} &=
\opcall{put_{\iota'}}{s'}{
\lam{\_} \opcall{put_\iota}{s}{\kappa}
}.
\end{align*}
%
The theory so obtained is similar to that of Example~\ref{ex:theory-state},
with two important differences. First, the locations $\ell \in L$ are
parameters of operations in Example~\ref{ex:theory-state}, whereas in the
present case the instances $\iota \in I$ index the operations themselves.
Second, all memory locations in Example~\ref{ex:theory-state} share the same
set of states~$S$, whereas the combination of $I$-many separate states allows
a different set of states $S_\iota$ for every instance~$\iota \in I$.
\end{example}
\subsection{Computations are free models}
\label{sec:comp-are-free}
Among all the models of an algebraic theory of computational effects, which one
best describes the actual computational effects? If a theory of computational
effects truly is adequately described by its signature and equations, then the
free model ought to be the desired one.
\begin{example}
Consider the theory $\theory{State}$ of a state storing elements of~$S$ from
Example~\ref{ex:theory-single-state}. Let us verify whether the free model
$\Free{State}{V}$ adequately describes stateful computations returning values
from~$V$. As we saw in Section~\ref{sec:free-models}, the free model is a
quotient of the set of trees $\Tree{\signature{State}}{V}$ by a congruence
relation~$\approx_{\theory{State}}$. Every tree is congruent to one of the
form
%
\begin{equation}
\label{eq:state-normal-form}
%
\opcall{get}{\unit}{
\lam{s} \opcall{put}{f(s)}{\lam{\_} \return{g(s)}}
}
\end{equation}
%
for some maps $f : S \to S$ and $g : S \to V$. Indeed, by applying the
equations from Example~\ref{ex:theory-single-state}, we may contract any two
consecutive $\kode{get}$'s to a single one, and similarly for consecutive
$\kode{put}$'s, we may disregard a $\kode{get}$ after a $\kode{put}$, and
cancel a~$\kode{get}$ followed by a~$\kode{put}$. We are left with four
forms of trees,
%
\begin{gather*}
\return{v},\\
\opcall{get}{\unit}{\lam{s} \return{g(s)}},\\
\opcall{put}{t}{\lam{\_} \return{v}},\\
\opcall{get}{\unit}{
\lam{s} \opcall{put}{f(s)}{\lam{\_} \return{g(s)}}
},
\end{gather*}
%
but the first three may be brought into the form of the fourth one:
%
\begin{align*}
\return{v} &=
\opcall{get}{\unit}{\lam{s} \opcall{put}{s}{\lam{\_} \return{v}}},
\\
\opcall{get}{\unit}{\lam{s} \return{g(s)}} &=
\opcall{get}{\unit}{\lam{s} \opcall{put}{s}{\lam{\_} \return{g(s)}}},
\\
\opcall{put}{t}{\lam{\_} \return{v}} &=
\opcall{get}{\unit}{\lam{\_} \opcall{put}{t}{\lam{\_} \return{v}}}.
\end{align*}
%
Therefore, the free model~$\Free{State}{V}$ is isomorphic to the set of
functions
%
\begin{equation*}
S \to S \times V.
\end{equation*}
%
The isomorphism takes the element represented by~\eqref{eq:state-normal-form}
to the function $\lam{s} (f(s), g(s))$. (It takes extra effort to show that
each element is represented by unique $f$ and $g$.) The inverse takes a
function $h : S \to S \times V$ to the computation represented by the tree
%
\begin{equation*}
\opcall{get}{\unit}{
\lam{s} \opcall{put}{\pi_1(h(s))}{\lam{\_} \return{\pi_2(h(s))}}
}.
\end{equation*}
%
Functional programmers will surely recognize the genesis of the state monad.
\end{example}
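The normalization argument can be phrased as a short recursive program. The following
Haskell sketch (names are ours) computes, for each tree of the single-state theory, the
function $S \to S \times V$ that it represents; the equations of the theory guarantee
that congruent trees yield the same function, which is exactly the isomorphism above.
\begin{verbatim}
-- Computations over the single-state signature get/put, as well-founded trees.
data StateComp s v = Return v
                   | Get (s -> StateComp s v)
                   | Put s (StateComp s v)

-- The function S -> S x V represented by a tree, read off by evaluating
-- get and put against an explicit state argument.
normalize :: StateComp s v -> s -> (s, v)
normalize (Return v) s = (s, v)
normalize (Get k)    s = normalize (k s) s
normalize (Put t k)  _ = normalize k t
\end{verbatim}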
Let us expand on the last thought of the previous example and show, at the risk
of wading a bit deeper into category theory, that free models of an algebraic
theory~$\theory{T}$ form a monad. We describe the monad structure in the form of
a Kleisli triple, because it is familiar to functional programmers. First, we
have an endofunctor $\FreeFun{T}$ on the category of sets which takes a set~$X$
to the free model $\Free{T}{X}$ and a map $f : X \to Y$ to the unique
$\theory{T}$-homomorphism $\overline{f}$ for which the following diagram
commutes:
\begin{equation*}
\xymatrix{
{X}
\ar[r]^{\eta_X}
\ar[d]_{f}
&
**[r]{\Free{T}{X}}
\ar[d]^{\overline{f}}
\\
{Y}
\ar[r]_{\eta_Y}
&
**[r]{\Free{T}{Y}}
}
\end{equation*}
Second, the unit of the monad is the map $\eta_X : X \to \Free{T}{X}$ taking~$x$
to $\return{x}$. Third, a map $\phi : X \to \Free{T}{Y}$ is lifted to the unique
map $\lift{\phi} : \Free{T}{X} \to \Free{T}{Y}$ for which the following diagram
commutes:
\begin{equation*}
\xymatrix{
{X}
\ar[r]^{\eta_X}
\ar[rd]_{\phi}
&
**[r]{\Free{T}{X}}
\ar[d]^{\lift{\phi}}
\\
&
**[r]{\Free{T}{Y}}
}
\end{equation*}
Concretely, $\lift{\phi}$ is defined by recursion on
($\approx_{\theory{T}}$-equivalence classes of) trees by
\begin{align*}
\lift{\phi}([\return{x}]) &= \phi(x), \\
\lift{\phi}([\opcall{op}{p}{\kappa}]) &= [\opcall{op}{p}{\lift{\phi} \circ \kappa}],
\end{align*}
where~$\kode{op}$ ranges over the operations of~$\theory{T}$. The first equation
holds because the above diagram commutes, and the second because $\lift{\phi}$ is a
$\theory{T}$-homomorphism. We leave the verification of the monad laws as
an exercise.
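For readers who prefer code, here is a Haskell sketch of the free-model monad for a
signature with a single parameterized operation, showing the unit and the Kleisli
lifting just described; the names are ours, and equations are left implicit, so the
code works with trees as representatives of their equivalence classes.
\begin{verbatim}
-- The free model on generators of type x, for one operation symbol with
-- parameter set p and arity a.
data Free p a x = Ret x | Op p (a -> Free p a x)

-- The unit of the monad: a generator becomes a pure computation.
eta :: x -> Free p a x
eta = Ret

-- Kleisli lifting, defined by recursion on trees exactly as in the text.
lift :: (x -> Free p a y) -> Free p a x -> Free p a y
lift phi (Ret x)      = phi x
lift phi (Op p kappa) = Op p (lift phi . kappa)
\end{verbatim}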
\begin{example}
Let us resume the previous example. If there is any beauty in mathematics, the
monad for $\FreeFun{State}$ should be isomorphic to the usual state monad
$(T, \theta, {}^{*})$, given by
%
\begin{align*}
T(X) &= (S \to S \times X), \\
\theta_X(x) &= (\lam{s} (s, x)), \\
\psi^{*}(h) &= (\lam{s} \psi (\pi_2 (h(s))) (\pi_1 (h(s)))),
\end{align*}
%
where $x \in X$, $\psi : X \to T(Y)$, and $h : S \to S \times X$. In the
previous example we already verified that $\FreeFun{State}(X) \cong T(X)$ by
the isomorphism
%
\begin{equation*}
\Xi :
[\opcall{get}{\unit}{\lam{s} \opcall{put}{f(s)}{\lam{\_} \return{g(s)}}}]
\mapsto
(\lam{s} (f(s), g(s))).
\end{equation*}
%
Checking that~$\Xi$ transfers $\eta$ to $\theta$ and
$\lift{{}}$ to ${}^{*}$ requires a tedious but straightforward calculation
which is best done in the privacy of one's notebook. Nevertheless, here it is.
Note that
%
\begin{equation*}
\eta_X(x) = [\return{x}] = [\opcall{get}{\unit}{\lam{s} \opcall{put}{s}{\lam{\_} \return{x}}}]
\end{equation*}
%
hence $\Xi(\eta_X(x)) = (\lam{s} (s, x)) = \theta_X(x)$, as required. For lifting,
consider any $\phi : X \to \Free{State}{Y}$.
There corresponds to it a unique map $\psi : X \to (S \to S \times Y)$ satisfying
%
\begin{equation*}
\phi(x) = [\opcall{get}{\unit}{\lam{t} \opcall{put}{\pi_1(\psi(x)(t))}{\lam{\_} \return{\pi_2(\psi(x)(t))}}}].
\end{equation*}
%
First we compute $\lift{\phi}$ applied to an arbitrary element of the free model:
%
\begin{align*}
\lift{\phi}&([\opcall{get}{\unit}{\lam{s} \opcall{put}{f(s)}{\lam{\_} \return{g(s)}}}]) = \\
&[\opcall{get}{\unit}{\lam{s} \opcall{put}{f(s)}{\lam{\_} \phi(g(s))}}] = \\
&[\opcall{get}{\unit}{
\lam{s}
\opcall{put}{f(s)}{
\lam{\_}
\opcall{get}{\unit}{\lam{t} \opcall{put}{\pi_1 (\psi(g(s))(t))}{\lam{\_} \return{\pi_2 (\psi(g(s))(t))}}}
}
}] = \\
&[\opcall{get}{\unit}{\lam{s} \opcall{put}{f(s)}{\lam{\_}
\opcall{put}{\pi_1 (\psi(g(s))(f(s)))}{\lam{\_} \return{\pi_2 (\psi(g(s))(f(s)))}}
}}] = \\
&[\opcall{get}{\unit}{\lam{s}
\opcall{put}{\pi_1 (\psi(g(s))(f(s)))}{\lam{\_} \return{\pi_2 (\psi(g(s))(f(s)))}}
}].
\end{align*}
%
Then we compute $\psi^{*}$ applied to the corresponding element of the state monad:
%
\begin{align*}
\psi^{*}(\lam{s} (f(s), g(s)))
&= (\lam{s} \psi(g(s))(f(s))) \\
&= (\lam{s} (\pi_1 (\psi(g(s))(f(s))), \pi_2 (\psi(g(s))(f(s))))),
\end{align*}
%
and the two results agree under the isomorphism~$\Xi$.
\end{example}
\subsection{Sequencing and generic operations}
\label{sec:sequ-gener-oper}
We seem to have a good theory of computations, but our notation is an
abomination which neither mathematicians nor programmers would ever want to use.
Let us provide a better syntax that will make half of them happy.
Consider an algebraic theory~$\theory{T}$. For an operation $\opdecl{\kode{op}}{P}{A}$
in $\signature{T}$, define the corresponding \emph{generic operation}
\begin{equation*}
\opgen{op}{p} \defeq \opcall{op}{p}{\lam{x} \return{x}}.
\end{equation*}
In words, the generic version performs the operation and returns its result.
When the parameter is the unit we write $\opgen{op}{}$ instead of the silly
looking $\opgen{op}{\unit}$. After a while one also grows tired of the over-line
and simplifies the notation to just $\kode{op}(p)$, but we shall not do so here.
Next, we give ourselves a better notation for the monad lifting. Suppose
$t \in \Free{T}{X}$ and $h : X \to \Free{T}{Y}$. Define the \emph{sequencing}
\begin{equation*}
\seq{x}{t} h(x),
\end{equation*}
to be an abbreviation for $\lift{h}(t)$, with the proviso that $x$ is bound in
$h(x)$. Generic operations and sequencing allow us to replace the awkward
looking
\begin{equation*}
\opcall{op}{p}{\lam{x} t(x)}
\end{equation*}
with
\begin{equation*}
\seq{x}{\opgen{op}{p}} t(x).
\end{equation*}
Even better, nested operations
\begin{equation*}
\opcall{op_1}{p_1}{\lam{x_1}
\opcall{op_2}{p_2}{\lam{x_2}
\opcall{op_3}{p_3}{\lam{x_3}
\cdots
}}}
\end{equation*}
may be written in Haskell-like notation
\begin{align*}
&\seq{x_1}{\xopgen{\kode{op}_1}{p_1}} \\
&\seq{x_2}{\xopgen{\kode{op}_2}{p_2}} \\
&\seq{x_3}{\xopgen{\kode{op}_3}{p_3}} \\
&\cdots
\end{align*}
The syntax of a typical programming language only ever exposes the generic
operations. The generic operation $\overline{\kode{op}}$ with parameter set~$P$
and arity~$A$ looks to a programmer like a function of type $P \to A$, which is
why we use the notation $\opdecl{\kode{op}}{P}{A}$ to specify signatures.
Because sequencing is just lifting in disguise, it is governed by the same
equations as lifting:
\begin{align*}
(\seq{x}{\return{v}} h(x)) &= h(v), \\
(\seq{x}{\opcall{op}{p}{\kappa}} h(x)) &=
\opcall{op}{p}{\lam{y} \seq{x}{\kappa(y)} h(x)}.
\end{align*}
These allow us to eliminate sequencing from any expression. When we rewrite the
second equation with generic operations we get an associativity law for
sequencing:
\begin{equation*}
(\seq{x}{(\seq{y}{\opgen{op}{p}} \kappa(y))} h(x)) =
(\seq{y}{\opgen{op}{p}} \seq{x}{\kappa(y)} h(x)).
\end{equation*}
The ML aficionados may be pleased to learn that the sequencing notation in
an ML-style language is none other than $\kode{let}$-binding,
\begin{equation*}
\kode{let}\; x = t\;\kode{in}\;h(x).
\end{equation*}
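In code, generic operations become ``smart constructors'' and sequencing becomes an
explicit bind. Here is a Haskell sketch for the single-state signature (the names are
our own; a real language would of course provide \verb|do|/\verb|let| syntax instead of
the operator \verb|seqC|).
\begin{verbatim}
data StateComp s v = Return v
                   | Get (s -> StateComp s v)
                   | Put s (StateComp s v)

-- Sequencing, following its two defining equations.
seqC :: StateComp s x -> (x -> StateComp s y) -> StateComp s y
seqC (Return v) h = h v
seqC (Get k)    h = Get (\s -> seqC (k s) h)
seqC (Put t k)  h = Put t (seqC k h)

-- Generic operations: perform the operation and return its result.
getG :: StateComp s s
getG = Get Return

putG :: s -> StateComp s ()
putG s = Put s (Return ())

-- The increment example, written with generic operations and sequencing.
incr :: StateComp Int Int
incr = getG `seqC` \x -> putG (x + 1) `seqC` \_ -> Return x
\end{verbatim}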
\section{Handlers}
\label{sec:handlers}
So far the main take-away is that computations returning values from~$V$ and
performing operations of a theory~$\theory{T}$ are the elements of the free
model $\Free{T}{V}$. What about \emph{transformations} between computations,
what are they? An easy but useless answer is that they are just maps between the
carriers of free models,
\begin{equation*}
\carrier{\Free{T}{X}} \longrightarrow \carrier{\Free{T'}{X'}},
\end{equation*}
whereas a better answer should take into account the algebraic structure. Having
put so much faith in algebra, let us continue to do so and postulate that a
transformation between computations be a homomorphism. Should it be a
homomorphism with respect to~$\theory{T}$ or~$\theory{T}'$? We could weasel out
of the question by considering only homomorphisms of the form
$\Free{T}{X} \to \Free{T}{X'}$, but such homomorphisms are rather uninteresting,
because they amount to maps~$X \to \Free{T}{X'}$. We want
transformations between computations that transform the operations as well as the
values.
To get a reasonable notion of transformation, let us recall that the universal
property of free models speaks about maps \emph{from} a free model. Thus, a
transformation between computations should be a $\theory{T}$-homomorphism
\begin{equation*}
H : \carrier{\Free{T}{X}} \longrightarrow \carrier{\Free{T'}{X'}}.
\end{equation*}
For this to make any sense, the carrier $\carrier{\Free{T'}{X'}}$ must carry the
structure of a $\theory{T}$-model, i.e., in addition to~$H$ we must also provide
a $\theory{T}$-model on $\carrier{\Free{T'}{X'}}$. If we take into account the
fact that $H$ is uniquely determined by its action on the generators, we arrive
at the following notion. A \emph{handler} from computations $\Free{T}{X}$ to
computations $\Free{T'}{X'}$ is given by the following data:
\begin{enumerate}
\item a map $f : X \to \carrier{\Free{T'}{X'}}$,
\item for every operation $\opdecl{\op{i}}{P_i}{A_i}$ in $\signature{T}$, a map
%
\begin{equation*}
h_i : P_i \times \carrier{\Free{T'}{X'}}^{A_i} \to \carrier{\Free{T'}{X'}}
\end{equation*}
%
such that
\item the maps $h_i$ form a $\theory{T}$-model on~$\carrier{\Free{T'}{X'}}$, i.e., they
validate the equations~$\equations{T}$.
\end{enumerate}
The map $H : \carrier{\Free{T}{X}} \longrightarrow \carrier{\Free{T'}{X'}}$ induced by these
data is the unique one satisfying
\begin{align*}
H([\return{x}]) &= f(x), \\
H([\opcall{op}{p}{\kappa}]) &= h_i(p, H \circ \kappa).
\end{align*}
When $H$ is a handler from $\Free{T}{X}$ to $\Free{T'}{X'}$ we write
\begin{equation*}
H : \Free{T}{X} \hto \Free{T'}{X'}.
\end{equation*}
From a mathematical point of view handlers are just a curious
combination of algebraic notions, but they are much more interesting from a
programming point of view, as practice has shown.
We need a notation for handlers that neatly collects its defining data. Let us write
\begin{equation}
\label{eq:handler-notation}
%
\kode{handler}\; \{
\retclause{x} f(x), \;
\big( \opclause{\op{i}}{y}{\kappa} h_i(y, \kappa) \big)_{\op{i} \in \signature{T}}
\}
\end{equation}
for the handler~$H$ determined by the maps $f$ and $h_i$, as above, and
\begin{equation*}
\withhandle{H}{C}
\end{equation*}
for the application of~$H$ to a computation~$C$. The defining equations for
handlers written in the new notation are, where $H$ stands for the
handler~\eqref{eq:handler-notation}:
\begin{align*}
(\withhandle{H}{\return v}) &= f(v), \\
(\withhandle{H}{\seq{x}{\xopgen{\op{i}}{p}} \kappa(x)}) &=
h_i (p, \lam{x} \withhandle{H}{\kappa(x)}).
\end{align*}
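Concretely, a handler is a fold over the trees of the free model. Here is a Haskell
sketch for a signature with a single operation symbol (names are ours; whether the
operation clause validates the equations of the source theory is not checked by the
code).
\begin{verbatim}
data Free p a x = Ret x | Op p (a -> Free p a x)

-- A handler given by a return clause f and an operation clause h; this is the
-- map H determined by the two equations above.
handle :: (x -> t) -> (p -> (a -> t) -> t) -> Free p a x -> t
handle f _ (Ret x)      = f x
handle f h (Op p kappa) = h p (handle f h . kappa)

-- For the theory of an exception (parameter set 1, arity 0) an operation
-- clause ignores both of its arguments; for instance
--   handle Just (\_ _ -> Nothing)
-- interprets an exception-raising computation into the Maybe type.
\end{verbatim}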
\begin{example}
Let us consider the theory $\theory{Exn}$ of an exception, cf.\
Example~\ref{ex:theory-exception}. A handler
%
\begin{equation*}
H : \Free{Exn}{X} \hto \Free{T}{Y}
\end{equation*}
%
is given by a $\kode{return}$ clause and an $\kode{abort}$ clause,
%
\begin{equation*}
\kode{handler}\; \{
\retclause{x} f(x),
\opclause{\kode{abort}}{y}{\kappa} c
\},
\end{equation*}
%
where $f : X \to \Free{T}{Y}$ and $c \in \Free{T}{Y}$. Note that $c$ does not
depend on $y \in \one$ and $\kappa : \emptyset \to \Free{T}{Y}$ because they
are both useless. The theory of an exception has no equations, so the
handler is automatically well defined. Such a handler is quite similar to
exception handlers from mainstream programming languages, except that it
handles both the exception and the return value.
\end{example}
\section{What is coalgebraic about algebraic effects and handlers?}
\label{sec:what-coalg-about}
Handlers are a form of flow control (like loops, conditional statements,
exceptions, coroutines, and the dreaded ``goto''), to be used by programmers in
programs. In other words, they can be used to \emph{simulate} computational
effects, a bit like monads can simulate computational effects in a purely
functional language. What we still lack is a mathematical model of computational
effects at the level of the external environment in which the program runs. There is
always a barrier between the program and its external environment, be it a
virtual machine, the operating system, or the underlying hardware. The actual
computational effects cross the barrier, and cannot be modeled as handlers. A
handler gets access to the continuation, but when a real computational effect
happens, the continuation is not available. If it were, then after having
launched missiles, the program could change its mind, restart the continuation,
and establish world peace.
\subsection{Comodels of algebraic theories}
\label{sec:comod-algebr-theor}
We shall model the top-level computational effects with comodels, which were
proposed by Gordon Plotkin and John
Power~\cite{plotkin08:_tensor_comod_model_operat_seman}. A \emph{comodel} of a
theory~$\theory{T}$ in a category~$\category{C}$ is a model in the opposite
category~$\opcat{\category{C}}$. Comodels form a category
\begin{equation*}
\ComodC{\category{C}}{T} \defeq \opcat{(\ModC{\opcat{\category{C}}}{T})}.
\end{equation*}
We steer away from category theory and just spell out what a comodel is in the
category of sets. When we pass to the dual category all morphisms turn around,
and concepts are replaced with their duals.
Recall that the interpretation of an operation $\opdecl{\kode{op}}{P}{A}$ in a
model~$M$ is a map
\begin{equation*}
\sem{\kode{op}}_M :
P \times |M|^A \longrightarrow |M|,
\end{equation*}
which in the curried form is
\begin{equation*}
|M|^A \longrightarrow |M|^P.
\end{equation*}
In the opposite category the map turns its direction, and the exponentials
become products:
\begin{equation*}
A \times |M| \longleftarrow P \times |M|.
\end{equation*}
Thus, in a comodel~$W$ an operation $\opdecl{\kode{op}}{P}{A}$ is interpreted as a map
\begin{equation*}
\sem{\kode{op}}^W : P \times |W| \to A \times |W|,
\end{equation*}
which we call a \emph{cooperation}.
\begin{example}
Non-deterministic choice $\opdecl{\kode{choose}}{\one}{\bool}$, cf.\
Example~\ref{ex:non-determinism}, is interpreted as a cooperation
%
\begin{equation*}
|W| \to {\bool} \times |W|,
\end{equation*}
%
where on the left we replaced $\one \times |W|$ with the isomorphic set $|W|$.
If we think of $|W|$ as the set of all possible worlds, the cooperation
$\kode{choose}$ is the action by which the world produces a boolean value and
the next state of the world. Thus an external source of binary non-determinism
is a stream of booleans.
\end{example}
\begin{example}
Printing to standard output $\opdecl{\kode{print}}{S}{\one}$ is interpreted as
a cooperation
%
\begin{equation*}
S \times |W| \to |W|.
\end{equation*}
%
It is the action by which the world is modified according to the printed
message (for example, the implants on your retina might induce your visual
center to see the message).
\end{example}
\begin{example}
Reading from standard input $\opdecl{\kode{read}}{\one}{S}$ is interpreted as
a cooperation
%
\begin{equation*}
|W| \to S \times |W|.
\end{equation*}
%
This is quite similar to non-deterministic choice, except that the world
provides an element of~$S$ rather than a boolean value. The world might
accomplish such a task by inducing the user (who is considered as part of the
world) to press buttons on the keyboard.
\end{example}
\begin{example}
An exception $\opdecl{\kode{abort}}{\one}{\emptyset}$ is interpreted as a
cooperation
%
\begin{equation*}
\one \times |W| \to \emptyset \times |W|.
\end{equation*}
%
Unless $|W|$ is the empty set, there is no such map. An exception cannot
propagate to the outer world. The universe is safe from segmentation fault!
\end{example}
The examples are encouraging, so let us backtrack and spell out the basic
definitions properly. A \emph{cointerpretation}~$I$ of a signature $\Sigma$ is
given by a carrier set $\carrier{I}$, and for each operation symbol $\opdecl{\kode{op}}{P}{A}$
a map
\begin{equation*}
\sem{\kode{op}}^I : P \times \carrier{I} \to A \times \carrier{I},
\end{equation*}
called a \emph{cooperation}. The cointerpretation $I$ may be extended to
well-founded trees. A tree $t \in \Tree{\Sigma}{X}$ is interpreted as a map
\begin{equation*}
\sem{t}^I : \carrier{I} \to X \times \carrier{I}
\end{equation*}
as follows:
\begin{enumerate}
\item the tree $\leaf{x}$ is interpreted as the $x$-th injection,
%
\begin{align*}
\sem{X \mid x}^I &: \carrier{I} \to X \times \carrier{I},\\
\sem{X \mid x}^I &: \omega \mapsto (x, \omega).
\end{align*}
%
\item the tree $\op{i}(p, \kappa)$ is interpreted as the map
%
\begin{align*}
\sem{X \mid \op{i}(p, \kappa)}^I &: \carrier{I} \to X \times \carrier{I},\\
\sem{X \mid \op{i}(p, \kappa)}^I &: \omega \mapsto
\sem{X \mid \kappa(a)}^I(\varpi)
\quad\text{where $(a, \varpi) = \sem{\op{i}}^{I}(p, \omega)$.}
\end{align*}
\end{enumerate}
A \emph{comodel~$W$} of a theory~$\theory{T}$ is a
$\signature{T}$-cointerpretation which validates all the
equations~$\equations{T}$. As before, an equation is valid when the
interpretations of its left- and right-hand sides yield equal maps.
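Before looking at examples, here is a Haskell sketch of a cointerpretation and of its
extension to trees, for a signature with a single operation symbol (the names
\verb|Coop| and \verb|cosem| are ours). Note how running a tree against a world
threads the world through the cooperations from the root downwards.
\begin{verbatim}
-- A cooperation for one operation symbol with parameter set p and arity a,
-- on a carrier of worlds w.
type Coop p a w = (p, w) -> (a, w)

data Tree p a x = Leaf x | Op p (a -> Tree p a x)

-- Extending a cointerpretation to trees: run the tree against a world,
-- producing a generator together with the final world.
cosem :: Coop p a w -> Tree p a x -> w -> (x, w)
cosem _  (Leaf x)     world = (x, world)
cosem co (Op p kappa) world = let (a, world') = co (p, world)
                              in  cosem co (kappa a) world'
\end{verbatim}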
\begin{example}
Let us work out what constitutes a comodel~$W$ of the theory of state, cf.\
Example~\ref{ex:theory-single-state}. The operations
%
\begin{equation*}
\opdecl{\kode{get}}{\one}{S}
\qquad\text{and}\qquad
\opdecl{\kode{put}}{S}{\one}
\end{equation*}
%
are respectively interpreted by cooperations
%
\begin{equation*}
g : \carrier{W} \to S \times \carrier{W}
\qquad\text{and}\qquad
p : S \times \carrier{W} \to \carrier{W},
\end{equation*}
%
where we replaced $\one \times \carrier{W}$ with the isomorphic
set~$\carrier{W}$ (and we shall continue doing so in the rest of the example).
The cooperations~$p$ and~$g$ must satisfy the equations from
Example~\ref{ex:theory-single-state}. We first unravel the interpretation of
equation~\eqref{eq:state-get-put}. Recall that $\kappa$ is the generic
continuation $\kappa \, \unit = \leaf{\unit}$, and the context contains no variables
so it is interpreted by~$\one$. Thus the right- and left-hand sides are
interpreted as maps $\carrier{W} \to \carrier{W}$, namely
%
\begin{align*}
\sem{\kappa\,\unit}^W &: w \mapsto w,\\
\sem{\opcall{get}{\unit}{\lam{s} \opcall{put}{s}{\kappa}}}^W &: w \mapsto p(g(w)).
\end{align*}
%
These are equal precisely when, for all $w \in |W|$,
%
\begin{equation}
\label{eq:state-comodel-p-g}
p(g(w)) = w.
\end{equation}
%
Keeping in mind that the dual nature of cooperations requires reading
expressions from the inside out, so that in $p(g(w))$ the cooperation~$g$ happens
before~$p$, the intuitive meaning of~\eqref{eq:state-comodel-p-g} is clear:
the external world does not change when we read the state and write it right
back. Equations~\eqref{eq:state-get-get}, \eqref{eq:state-put-get}, and
\eqref{eq:state-put-put} may be similarly treated to respectively give
%
\begin{align}
\label{eq:state-comodel-g-g}
g(\pi_2 (g (w))) &= g(w),
\\
\notag
g(p(s, w)) &= (s, p (s, w)),
\\
\notag
p(t, p(s, w)) &= p(t, w).
\end{align}
%
%
%
%
%
From these equations various others can be derived. For instance,
by~\eqref{eq:state-comodel-p-g}, the cooperation~$g$ is a section of~$p$,
therefore we may cancel it on both sides of~\eqref{eq:state-comodel-g-g} to
derive $\pi_2(g(w)) = w$, which says that reading the state does not alter the
external world.
\end{example}
\begin{example}
A comodel~$W$ of the theory of non-determinism, cf.\
Example~\ref{ex:non-determinism}, is given by a cooperation
%
\begin{equation*}
c : \carrier{W} \to \bool \times \carrier{W}.
\end{equation*}
%
The cooperation must satisfy (the interpretations of) associativity,
idempotency, and commutativity. Commutativity is problematic because we get
from it that if $c(w) = (b, w')$ then also
$c(w) = (\mathop{\mathsf{not}} b, w')$, implying the nonsensical requirement
$b = \mathop{\mathsf{not}} b$. It appears that comodels of non-determinism require
fancier categories than the good old sets.
\end{example}
\subsection{Tensoring comodels and models}
\label{sec:tens-comod-models}
If we construe the elements of a $\theory{T}$-model~$\carrier{M}$ as effectful
computations and the elements of a $\theory{T}$-comodel ~$\carrier{W}$ as
external environments, it is natural to ask whether $M$ and $W$ interact to give
an account of running effectful programs in effectful external environments.
Let $\sim_\theory{T}$ be the least equivalence relation
on~$\carrier{M} \times \carrier{W}$ such that, for every operation symbol
$\opdecl{\kode{op}}{P}{A}$ in $\signature{T}$, and for all $p \in P$, $a \in A$,
$\kappa : A \to \carrier{M}$, and $w, w' \in \carrier{W}$ such that
$\sem{\kode{op}}^W(p, w) = (a, w')$,
\begin{equation}
\label{eq:tensor-equivalence}
%
(\sem{\kode{op}}_M(p, \kappa), w) \sim_\theory{T} (\kappa(a), w').
\end{equation}
Define the \emph{tensor $\tensor{M}{W}$} to be the quotient set
$(\carrier{M} \times \carrier{W})/{\sim_\theory{T}}$.
The tensor represents the interaction of~$M$ and $W$. The equivalence
$\sim_\theory{T}$ in~\eqref{eq:tensor-equivalence} has an operational reading:
to perform the operation $\sem{\kode{op}}_M(p, \kappa)$ in the external
environment~$w$, run the corresponding cooperation $\sem{\kode{op}}^W(p, w)$ to
obtain $a \in A$ and a new environment~$w'$, then proceed by executing
$\kappa(a)$ in environment~$w'$.
\begin{example}
Let us compute the tensor of $M = \Free{\theory{State}}{X}$, the free model of
the theory of state generated by~$X$, and the comodel $W$ defined by
%
\begin{equation*}
|W| \defeq S,
\qquad
\sem{\kode{get}}^W \defeq \lam{s} (s, s),
\qquad
\sem{\kode{put}}^W \defeq \lam{(s,t)} s.
\end{equation*}
%
We may read the equivalences
%
\begin{align*}
(\sem{\opcall{get}{\unit}{\kappa}}_M, s) &\sim_{\theory{State}} (\sem{\kappa}_M (s), s), \\
(\sem{\opcall{put}{t}{\kappa}}_M, s) &\sim_{\theory{State}} (\sem{\kappa}_M \unit, t),
\end{align*}
%
from left to right as rewrite rules which allow us to ``execute away'' all
the operations until we are left with a pair of the form
$(\sem{\return{x}}_M, s)$. Because
$(\sem{\return{x}}_M, s) \sim_{\theory{State}} (\sem{\return{y}}_M, t)$
implies $x = y$ and $s = t$ (the proof of which we skip), it follows that
$\tensor{M}{W}$ is isomorphic to $X \times S$. In other words, the execution
of a program in an initial state always leads to a return value paired with
the final state.
\end{example}
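The operational reading of the tensor is easy to express in code. Here is a Haskell
sketch (with names of our choosing) for the single-state theory and the comodel whose
carrier is the set of states itself: a step rewrites a pair (computation, world) along
the tensor equivalences, and running iterates the step until a return value is reached.
\begin{verbatim}
data StateComp s v = Return v
                   | Get (s -> StateComp s v)
                   | Put s (StateComp s v)

-- One rewrite step on a pair (computation, world), following the tensor
-- equivalences read from left to right.
step :: (StateComp s v, s) -> Either (v, s) (StateComp s v, s)
step (Return v, s) = Left (v, s)
step (Get k,    s) = Right (k s, s)
step (Put t k,  _) = Right (k, t)

-- Execute away all operations; the result lands in X x S, as computed above.
run :: (StateComp s v, s) -> (v, s)
run p = either id run (step p)
\end{verbatim}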
The next time the subject of tensor products comes up, you may impress your
mathematician friends by mentioning that you know how to tensor software with
hardware.
\section{Making a programming language}
\label{sec:making-progr-lang}
The mathematical theory of algebraic effects and handlers may be used in
programming language design, both as a mathematical foundation and a source of
inspiration for new programming concepts. This is a broad topic which far
exceeds the purpose and scope of these notes, so we only touch on the main
questions and issues, and provide references for further reading.
\begin{figure}[ht]
\small
\fbox{\parbox{\textwidth}{
\centering
\begin{align*}
\text{Value}\ v
\mathrel{\;{:}{:}\!=}\;& x & & \text{variable} \\
\mathrel{\;\big|\;}& \kode{false} \mathrel{\;\big|\;} \kode{true} & & \text{boolean constant} \\
\mathrel{\;\big|\;}& \lam{x} c & & \text{function} \\
\mathrel{\;\big|\;}&
\kode{handler}\; \{
\begin{aligned}[t]
& \retclause{x} c_r, \\
& \ldots, \opclause{\op{i}}{x}{k} c_i, \ldots \}
\end{aligned}
& & \text{handler}
\\
\text{Computation}\ c
\mathrel{\;{:}{:}\!=}\;& \return{v} & & \text{pure computation} \\
\mathrel{\;\big|\;}& \opgen{op}{v} & & \text{operation} \\
\mathrel{\;\big|\;}& \seq{x}{c_1} c_2 & & \text{sequencing} \\
\mathrel{\;\big|\;}& \cond{v}{c_1}{c_2} & & \text{conditional} \\
\mathrel{\;\big|\;}& v_1 \, v_2 & & \text{application} \\
\mathrel{\;\big|\;}& \withhandle{v}{c} & & \text{handling}
\\
\text{Value type}\ A, B
\mathrel{\;{:}{:}\!=}\;& \kode{bool} & & \text{boolean type} \\
\mathrel{\;\big|\;}& A \times B & & \text{product type} \\
\mathrel{\;\big|\;}& A \to \ct{C} & & \text{function type} \\
\mathrel{\;\big|\;}& \ct{C} \hto \ct{D} & & \text{handler type}
\\
\text{Computation type}\ \ct{C}, \ct{D}
\mathrel{\;{:}{:}\!=}\;& \dirt{A}{\{\op{1}, \ldots, \op{k}\}}
\end{align*}
}}
\caption{A core language with algebraic effects and handlers}
\label{fig:mini-eff}
\end{figure}
Figure~\ref{fig:mini-eff} shows the outline of a core language based on
algebraic theories, as presented so far. Apart from a couple of changes in
terminology and notation there is nothing new. Instead of generators and
generating sets we speak of \emph{values} and \emph{value types}, and instead of
trees and free models we speak of \emph{computations} and \emph{computation
types}. The computation type $\dirt{A}{\{\op{1}, \ldots, \op{k}\}}$
corresponds to the free model $\Free{T}{A}$ where $\theory{T}$ is the theory
with operations $\op{1}, \ldots, \op{k}$ without any equations. The rest of the
table should look familiar. The operational semantics and typing rules still
have to be given. For these we refer to Matija Pretnar's
tutorial~\cite{pretnar15:_introd_algeb_effec_handl}, and
to~\cite{bauer14:_effec_system_algeb_effec_handl,pretnar14:_infer_algeb_effec}
for a more thorough treatment of the language.
The programming language in Figure~\ref{fig:mini-eff} can express only the
terminating computations. To make it more realistic, we should add to it general
recursion and allow non-terminating computations. Such modifications cannot be
accommodated by the set-theoretic semantics, but they can be handled by domain
theory, as was shown in~\cite{bauer14:_effec_system_algeb_effec_handl}. Therein
you can find an adequate domain-theoretic semantics for algebraic effects and
handlers with support for general recursion.
Once a programming language is in place, the next task is to explore its
possibilities. Are user-defined operations and handlers good for anything?
Practice so far has shown that indeed they can be used for all sorts of things,
but also that it is possible to overuse and misuse them, just like any
programming concept. Handlers have turned out to be a versatile tool that
unifies and generalizes a number of techniques: exception handlers, backtracking
and other search strategies, I/O redirection, transactional memory, coroutines,
cooperative multi-threading, delimited continuations, probabilistic programming,
and many others. As this note is already getting quite long, we recommend
existing
material~\cite{pretnar15:_introd_algeb_effec_handl,bauer15:_progr,kammar13:_handl}
for further reading. For experimenting with handlers in practice, you can try
out one of the languages that implements handlers. The first such language was
Eff~\cite{bauer:_eff}, but there are by now others. The Effects Rosetta
Stone~\cite{effec_roset_stone} is a good starting point to learn about them and
to see how they compare. The Effect bibliography~\cite{effec} is a good source
for finding out what has been published in the area of computational effects.
\section{Exploring new territories}
\label{sec:expl-new-terr}
Lastly, we mention several aspects of algebraic effects and handlers that have
largely remained unexplored so far.
Perhaps the most obvious one is that existing implementations of effects and
handlers largely ignore equations. In a sense this is expected and
understandable. For~\eqref{eq:handler-notation} to define a handler
$\Free{T}{X} \hto \Free{T'}{Y}$, the operation clauses $h_i$ must satisfy the
equations of~$\theory{T}$. In general it is impossible to check algorithmically
whether this is the case, and so a compiler or a language interpreter should
avoid trying to do so. Thus existing languages with handlers solve the problem
by ignoring the equations. This is not as bad as it sounds, because in practice
we often want handlers that break the equations. Moreover, dropping equations
just means that we work with trees as representatives of their equivalence
classes, which is a common implementation technique (for instance, when we
represent finite sets by lists). Nevertheless, incorporating equations into
programming languages would have many benefits.
The idea of tensoring comodels and models as a mathematical explanation of the
interaction between a program and its external environment is very pleasing, but
has largely not been taken advantage of. There should be a useful programming
concept in there, especially if we can make tensoring a user-definable feature
of a programming language. The only attempt to do so known to me was the
\emph{resources} mechanism in an early version of Eff~\cite{bauer15:_progr}, but it
disappeared from later versions of the language. May it see the light of day
again.
At the Dagstuhl seminar~\cite{chandrasekaran18:_algeb} the topic of dynamic
creation of computational effects was recognized as important and mostly
unsolved. The operations of an algebraic theory are fixed by the signature, but
in real-world situations new instances of computational effects are created and
destroyed dynamically, for example, when a program opens or closes a file,
allocates or deallocates memory, spawns or terminates a new thread, etc. How
should such phenomena be accounted for mathematically? A good answer would
likely lead to new programming concepts for general resource management. Once
again, the only known implementation of dynamically created instances of effects
was provided in the original version of Eff~\cite{bauer15:_progr}, although some
languages allow dynamic creation of new effects by indirect means.
\bibliographystyle{plain}
| {
"timestamp": "2019-03-13T01:12:13",
"yymm": "1807",
"arxiv_id": "1807.05923",
"language": "en",
"url": "https://arxiv.org/abs/1807.05923",
"abstract": "This note recapitulates and expands the contents of a tutorial on the mathematical theory of algebraic effects and handlers which I gave at the Dagstuhl seminar 18172 \"Algebraic effect handlers go mainstream\". It is targeted roughly at the level of a doctoral student with some amount of mathematical training, or at anyone already familiar with algebraic effects and handlers as programming concepts who would like to know what they have to do with algebra. We draw an uninterrupted line of thought between algebra and computational effects. We begin on the mathematical side of things, by reviewing the classic notions of universal algebra: signatures, algebraic theories, and their models. We then generalize and adapt the theory so that it applies to computational effects. In the last step we replace traditional mathematical notation with one that is closer to programming languages.",
"subjects": "Logic in Computer Science (cs.LO); Programming Languages (cs.PL)",
"title": "What is algebraic about algebraic effects and handlers?",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109489630493,
"lm_q2_score": 0.7185943925708562,
"lm_q1q2_score": 0.707679625667231
} |
https://arxiv.org/abs/1902.02511 | Newton-Okounkov polytopes of flag varieties for classical groups | For classical groups SL(n), SO(n) and Sp(2n), we define uniformly geometric valuations on the corresponding complete flag varieties. The valuation in every type comes from a natural coordinate system on the open Schubert cell and is combinatorially related to the Gelfand-Zetlin pattern in the same type. In types A and C, we identify the corresponding Newton-Okounkov polytopes with the Feigin-Fourier-Littelmann-Vinberg polytopes. In types B and D, we compute low-dimensional examples and formulate open questions. | \section{Introduction}
Toric geometry and the theory of Newton polytopes have exhibited fruitful connections between algebraic geometry and convex geometry.
After the Kouchnirenko and Bernstein--Khovanskii theorems were proved in the 1970s (for a reminder see Section \ref{ss.toric}), Askold Khovanskii asked how to extend these results to the setting where a complex torus is replaced by an arbitrary connected reductive group.
In particular, he widely advertised the problem of finding the right analogs of Newton polytopes for non-toric varieties such as spherical varieties (classical examples of spherical varieties are reviewed in Section \ref{ss.sphere}).
The notion of Newton polytopes was extended to spherical varieties by Andrei Okounkov in the 1990s \cite{O97,O98}.
Later, his construction was developed systematically in \cite{KK, LM}, and the resulting theory of Newton--Okounkov convex bodies is now an active field of algebraic geometry.
While Newton--Okounkov convex bodies can be defined for line bundles on arbitrary varieties (without a group action), they are easier to deal with in the case of varieties with an action of a reductive group.
In the latter case, the theory of Newton--Okounkov convex bodies is closely related to representation theory.
For instance, Gelfand--Zetlin (GZ) polytopes and Feigin--Fourier--Littelmann--Vinberg (FFLV) polytopes (see Section \ref{s.GZ} for a reminder) arise naturally as Newton--Okounkov polytopes of flag varieties.
\subsection{Newton--Okounkov convex bodies}
\label{ss.toric}
In this section, we recall the construction of Newton--Okounkov convex bodies for a general mathematical audience.
Let us start with the definition of Newton polytopes.
\begin{defin} Let $f=\sum_{\alpha\in\mathbb{Z}^n} c_\alpha x^\alpha$ be a Laurent polynomial in $n$ variables (here the multiindex notation $x^\alpha$ for $x=(x_1,\ldots,x_n)$ and $\alpha=(\alpha_1,\ldots,\alpha_n)\in\mathbb{Z}^n$ stands for $x_1^{\alpha_1}\cdots x_n^{\alpha_n}$).
The {\em Newton polytope} $\Delta_f\subset\mathbb{R}^n$ is the convex hull of all $\alpha\in \mathbb{Z}^n$ such that $c_\alpha\ne0$.
\end{defin}
By definition, the Newton polytope is a lattice polytope, that is, its vertices lie in $\mathbb{Z}^n$.
\begin{example}\label{e.hyp} For $n=2$ and $f=1+2x_1+x_2+3x_1x_2$, the Newton polytope $\Delta_f$ is the square with
the vertices $(0,0)$, $(1,0)$, $(0,1)$ and $(1,1)$.
\end{example}
Note that Laurent polynomials with complex coefficients are well-defined functions at all points $(x_1,\ldots,x_n)\in\mathbb{C}^n$ such that $x_1,\ldots,x_n\ne 0$.
They are regular functions on the complex torus $(\mathbb{C}^*)^n:=\mathbb{C}^n\setminus\bigcup_{i=1}^n\{x_i=0\}$.
\begin{thm}\cite{Kou}\label{t.K}
For a given lattice polytope $\Delta\subset\mathbb{R}^n$, let
$f_1(x_1,\ldots,x_n)$,\ldots, $f_n(x_1,\ldots,x_n)$ be a generic collection of Laurent polynomials with the Newton polytope $\Delta$.
Then the system $f_1=\ldots=f_n=0$ has $n!{\rm Volume}(\Delta)$ solutions in the complex torus $(\mathbb{C}^*)^n$.
\end{thm}
The Kouchnirenko theorem can be viewed as a generalization of the classical Bezout theorem.
The Newton polytope serves as a refinement of the degree of a polynomial.
This makes the Kouchnirenko theorem applicable to collections of polynomials which are not generic among all polynomials of given degree but only among polynomials with given Newton polytope.
For instance, the Kouchnirenko theorem applied to a pair of generic polynomials with Newton polytope as in Example \ref{e.hyp} yields the correct answer $2$, while the Bezout theorem yields the incorrect answer $4$ (because of two extraneous solutions at infinity).
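This count is easy to verify by a direct computation. The following sketch (Python with sympy, for illustration only; the pair $f_1,f_2$ is an arbitrary choice of generic polynomials with the Newton polytope of Example \ref{e.hyp}) solves such a system and finds exactly two solutions, both lying in the torus.
\begin{verbatim}
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
# two generic polynomials with Newton polytope the unit square;
# f1 is the polynomial of Example 1.2, f2 is another generic choice
f1 = 1 + 2*x1 + x2 + 3*x1*x2
f2 = 2 + x1 + 3*x2 + x1*x2

sols = sp.solve([f1, f2], [x1, x2], dict=True)
print(sols)        # two solutions: x1 = 1, x2 = -3/4 and x1 = -1, x2 = -1/2
print(len(sols))   # 2 = 2! * Area of the square
\end{verbatim}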
A more geometric viewpoint on the Bezout theorem and its extensions stems from enumerative geometry and will be discussed in Section \ref{ss.sphere}.
The Kouchnirenko theorem was extended to systems of Laurent polynomials with distinct Newton polytopes by David Bernstein
and Khovanskii using mixed volumes of polytopes \cite{B75}.
Further generalizations include explicit formulas for the genus and Euler characteristic of complete intersections $\{f_1=0\}\cap\ldots\cap\{f_m=0\}$ in $(\mathbb{C}^*)^n$ for $m<n$ \cite{Kh78}.
We now consider a bit more general situation.
Fix a finite-dimensional vector space $V\subset\mathbb{C}(x_1,\ldots,x_n)$ of rational functions on $\mathbb{C}^n$.
Let $f_1$,\ldots, $f_n$ be a generic collection of functions from $V$, and let $X_0\subset\mathbb{C}^n$ be the open dense subset obtained by removing the poles of all functions from $V$.
How many solutions does a system $f_1=\ldots=f_n=0$ have in $X_0$?
For instance, if $V$ is the space spanned by all Laurent polynomials with a given Newton polytope, and $X_0=(\mathbb{C}^*)^n$, then
the answer is given by the Kouchnirenko theorem.
Here is a simple non-toric example from representation theory.
\begin{example}\label{e.flag}
Let $n=3$.
Consider the adjoint representation of $SL_3(\mathbb{C})$ on the space ${\rm End}(\mathbb{C}^3)$ of all linear operators on $\mathbb{C}^3$.
That is, $g\in SL_3(\mathbb{C})$ acts on an operator $X\in{\rm End}(\mathbb{C}^3)$ as follows:
$${\rm Ad}(g): X\mapsto gXg^{-1}.$$
Let $U^-\subset SL_3(\mathbb{C})$ be the subgroup of lower triangular unipotent matrices:
$$U^-=\left\{\begin{pmatrix}1&0&0\\x_1&1&0\\x_2&x_3&1\end{pmatrix} \ | \ (x_1,x_2,x_3)\in \mathbb{C}^3\right\}.$$
To define a subspace $V\subset\mathbb{C}(x_1,x_2,x_3)$ we restrict functions from the dual space ${\rm End}^*(\mathbb{C}^3)$ to the $U^-$-orbit ${\rm Ad}(U^-)E_{13}$ of the operator $E_{13}:=e_1\otimes e_3^*\in {\rm End}(\mathbb{C}^3)$ (here ($e_1$, $e_2$, $e_3$) is the standard basis in $\mathbb{C}^3$).
More precisely, a linear function $f\in {\rm End}^*(\mathbb{C}^3)$ yields the polynomial $\hat f(x_1,x_2,x_3)$ as follows:
$$\hat f(x_1,x_2,x_3):=f\left(\begin{pmatrix}1&0&0\\x_1&1&0\\x_2&x_3&1\end{pmatrix} \begin{pmatrix}0&0&1\\0&0&0\\0&0&0\end{pmatrix} \begin{pmatrix}1&0&0\\x_1&1&0\\x_2&x_3&1\end{pmatrix}^{-1} \right).$$
It is easy to check that the space $V$ is spanned by 8 polynomials: $1$, $x_1$, $x_2$, $x_3$, $x_1x_2-x_1^2x_3$, $x_1x_3$, $x_2x_3$, $x_2^2-x_1x_2x_3$.
It will be clear from the next section that the Kouchnirenko theorem does not apply to the space $V$, that is, the normalized volume of the Newton polytope of a generic polynomial from $V$ is bigger than the number of solutions of a generic system $f_1=f_2=f_3=0$ with $f_i\in V$.
\end{example}
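The claim about the space $V$ can be checked by a direct computation; the following sketch (using sympy, for illustration only) computes ${\rm Ad}(u)E_{13}$ symbolically and verifies that its matrix coefficients span an $8$-dimensional subspace of $\mathbb{C}[x_1,x_2,x_3]$.
\begin{verbatim}
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
u = sp.Matrix([[1, 0, 0], [x1, 1, 0], [x2, x3, 1]])
E13 = sp.Matrix(3, 3, lambda i, j: 1 if (i, j) == (0, 2) else 0)

M = (u * E13 * u.inv()).applyfunc(sp.expand)   # Ad(u) applied to E_13
entries = list(M)                              # the nine matrix coefficients

# dimension of the span of these coefficients inside C[x1, x2, x3]
monos = sorted({m for e in entries for m in sp.Poly(e, x1, x2, x3).monoms()})
rows = [[dict(sp.Poly(e, x1, x2, x3).terms()).get(m, 0) for m in monos]
        for e in entries]
print(sp.Matrix(rows).rank())                  # 8 = dim V
\end{verbatim}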
To assign the {\em Newton--Okounkov convex body} to $V$ we need an extra ingredient.
Choose a translation-invariant total order on the lattice $\mathbb{Z}^n$ (e.g., we can take the lexicographic order).
Consider a map
$$v:\mathbb{C}(x_1,\ldots,x_n)\setminus\{0\}\to\mathbb{Z}^n,$$
that behaves like the lowest order term of a polynomial, namely: $v(f+g)\ge \min\{v(f),v(g)\}$ and $v(fg)=v(f)+v(g)$ for all nonzero $f,g$.
Recall that maps with such properties are called {\em valuations}.
A straightforward construction of valuations is shown in Example \ref{e.flag2} below.
\begin{defin}
The {\em Newton--Okounkov convex body} $\Delta_v(V)$ is the closure
of the convex hull of the set
$$\bigcup_{k=1}^\infty\left\{\frac{v(f)}{k} \ |\ f\in V^k \right\}\subset\mathbb{R}^n.$$
By $V^k$ we denote the subspace spanned by the $k$-th powers of the functions from $V$.
\end{defin}
Different valuations might yield different Newton--Okounkov convex bodies.
An important application of Newton--Okounkov bodies is the following analog of the Kouchnirenko theorem.
Recall that by $X_0\subset\mathbb{C}^n$ we denoted an open dense subset where all functions from $V$ are regular (that is, do not have poles).
\begin{thm}\cite{KK,LM}\label{t.NO}
If $V$ is sufficiently big, then a generic system $f_1=\ldots=f_n=0$ with $f_i\in V$ has
$n!{\rm Volume}(\Delta_v(V))$ solutions in $X_0$.
\end{thm}
In particular, it follows that all Newton--Okounkov convex bodies for $V$ have the same volume.
For more details (in particular, for the precise meaning of ``sufficiently big'') we refer the reader to \cite[Theorem 4.9]{KK}.
\begin{example} \label{e.flag2} Let $V$ be the space from Example \ref{e.flag}.
Define a valuation $v$ by assigning to a polynomial $f\in\mathbb{C}[x_1,x_2,x_3]$ its lowest order term with respect to the
lexicographic ordering of monomials.
More precisely, we say that $x_1^{k_1}x_2^{k_2}x_3^{k_3}\succ x_1^{l_1}x_2^{l_2}x_3^{l_3}$ iff
there exists $j\le3$ such that $k_i=l_i$ for $i<j$ and $k_j>l_j$.
It is easy to check that $v(V)$ consists of $8$ lattice points $(0,0,0)$, $(1,0,0)$, $(0,1,0)$, $(0,0,1)$, $(1,1,0)$,
$(1,0,1)$, $(0,1,1)$, $(0,2,0)$.
Their convex hull is depicted in Figure 1.
This is the FFLV polytope $FFLV(1,0,-1)$ for the adjoint representation of $SL_3$ (in this case, it happens to be unimodularly equivalent to the GZ polytope).
In particular, $FFLV(1,0,-1)\subset\Delta_v(V)$.
\end{example}
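The valuations of the spanning polynomials of $V$ can be computed mechanically as well; the sketch below (illustrative only, with our own helper \texttt{val}) recovers the $8$ lattice points listed above.
\begin{verbatim}
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
basis = [1, x1, x2, x3, x1*x2 - x1**2*x3, x1*x3, x2*x3, x2**2 - x1*x2*x3]

def val(f):
    """Exponent vector of the lex-lowest monomial (x1 > x2 > x3)."""
    return min(sp.Poly(f, x1, x2, x3).monoms())

print(sorted(val(f) for f in basis))
# [(0,0,0), (0,0,1), (0,1,0), (0,1,1), (0,2,0), (1,0,0), (1,0,1), (1,1,0)]
\end{verbatim}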
\begin{figure}
\begin{center}\includegraphics[width=10cm]{FFLV_1_1.jpg}
\caption{The FFLV polytope $FFLV(1,0,-1)$ for the adjoint representation of $SL_3$}
\end{center}
\end{figure}
\subsection{Enumerative geometry}
\label{ss.sphere}
In this section, we give a brief introduction to enumerative geometry for the general mathematical audience.
Enumerative geometry motivated the study of Grassmannians, flag varieties and more general spherical varieties.
Recall two classical problems of enumerative geometry from the 19th century.
\begin{problem}[Schubert]\label{p.Sch}
How many lines in a 3-space intersect four given lines in general position?
\end{problem}
We can identify lines in $\mathbb{C}\mathbb{P}^3$ with vector planes in $\mathbb{C}^4$, that is, a line can be viewed as a point on the
Grassmannian $G(2,4)$.
The condition that a line $l\in G(2,4)$ intersects a fixed line $l_1$ defines a hypersurface $H_1\subset G(2,4)$.
Hence, the problem reduces to computing the number of intersection points of four hypersurfaces in $G(2,4)$.
It is not hard to check that the hypersurface $H_1$ is just a hyperplane section of the Grassmannian under the Pl\"ucker
embedding $G(2,4)\hookrightarrow \mathbb{P}(\Lambda^2\mathbb{C}^4)\simeq\mathbb{C}\mathbb{P}^5$.
The image of the Grassmannian is a quadric in $\mathbb{C}\mathbb{P}^5$.
The number of intersection points of a quadric in $\mathbb{C}\mathbb{P}^5$ with
four hyperplanes in general position is equal to $2$ by the Bezout theorem.
Hence, the answer to the Schubert problem is $2$.
Schubert's problem can also be solved for real lines in $\mathbb{R}^3$ by elementary methods (for instance, by using two families
of lines on a hyperboloid of one sheet).
In this context, Schubert's problem was recently applied to experimental physics \cite{Phys}.
\begin{problem}[Steiner]\label{p.St}
How many smooth conics are tangent to five given conics?
\end{problem}
Similarly to the Schubert problem, we can identify conics with points in $\mathbb{C}\mathbb{P}^5$, namely, the conic given by an equation
$ax^2+bxy+cy^2+dxz+eyz+fz^2=0$ corresponds to the point $(a:b:c:d:e:f)\in\mathbb{C}\mathbb{P}^5$.
Smooth conics form an open subset $C\subset\mathbb{C}\mathbb{P}^5$ (the complement $\mathbb{C}\mathbb{P}^5\setminus C$ is the zero set of the discriminant).
The condition that a conic is tangent to a given conic defines a hypersurface in $\mathbb{C}\mathbb{P}^5$ of degree $6$.
Using the Bezout theorem in $\mathbb{C}\mathbb{P}^5$, one might guess (as Jacob Steiner himself did) that the answer to the Steiner problem is $6^5$.
However, the correct answer is much smaller.
This is similar to the difference between the Bezout and Kouchnirenko theorems: the former yields
extraneous solutions that have no enumerative meaning.
The correct answer, $3264$, was found by Michel Chasles, who used (in modern terms) a {\em wonderful compactification} of $C$, namely, the space of complete conics.
Hermann Schubert developed a powerful general method (calculus of conditions) for solving problems of enumerative geometry such as Problems \ref{p.Sch}, \ref{p.St}.
In a sense, his method was based on an informal version of intersection theory.
The 15-th Hilbert problem asked for a rigorous foundation of Schubert calculus\footnote{Das Problem besteht darin, {\em diejenigen geometrischen Anzahlen
strenge und unter genauer Feststellung der Grenzen ihrer G\"ultigkeit
zu beweisen, die insbesondere Schubert auf Grund des sogenannten Princips
der speciellen Lage mittelst des von ihm ausgebildeten
Abz\"ahlungskalk\"uls bestimmt hat} (Hilbert).}.
In the first half of the 20th century, these foundations were developed both in the topological (cohomology rings) and algebraic (Chow rings) settings.
However, Schubert's version of intersection theory was formalized only in the 1980-s by Corrado De Concini and Claudio Procesi \cite{CP}.
In particular, many problems of enumerative geometry (including Problems \ref{p.Sch} and \ref{p.St}) reduce to computation of the self-intersection index of a hypersurface in homogeneous space $G/H$ where $G$ is a reductive group such as $SL_n(\mathbb{C})$, $SO_n(\mathbb{C})$ or $Sp_{2n}(\mathbb{C})$.
In the toric case ($G=(\mathbb{C}^*)^n$), the Kouchnirenko theorem yields an explicit formula for the self-intersection index of a hypersurface $\{f=0\}$ where $f$ is a generic polynomial with a given Newton polytope.
In the reductive case, explicit formulas were obtained by Boris Kazarnovskii (case of $(G\times G)/G^{\rm diag}$) and Michel Brion (general case) \cite{Kaz,Br89}.
Though the Brion--Kazarnovskii formula was originally stated in different terms, it can be reformulated using Newton--Okounkov polytopes \cite{KK2}.
\begin{example} \label{e.flag3} We now place Example \ref{e.flag} into the context of enumerative geometry.
Let $X=\{(V^1\subset V^2\subset \mathbb{C}^3) \ | \ \dim V^i=i \}$ be the variety of complete flags in $\mathbb{C}^3$.
This is a homogeneous space under the action of $SL_3(\mathbb{C})$, namely, $X=SL_3(\mathbb{C})/B$ where $B$ is the subgroup of upper-triangular matrices.
It is easy to check that $B$ acts on $X$ with an open dense orbit $U^-B/B\simeq U^-$.
We say that two flags $V^1\subset V^2$ and $W^1\subset W^2$ in $\mathbb{C}^3$ are not in general position if either $V^1\subset W^2$ or $W^1\subset V^2$.
How many flags in $\mathbb{C}^3$ are not in general position with three given flags?
By taking projectivizations of subspaces $V^1\subset V^2\subset\mathbb{C}^3$ we can regard a flag as $a\in l\subset\mathbb{C}\mathbb{P}^2$, where $a=\mathbb{P}(V^1)$ is a point and $l=\mathbb{P}(V^2)$ is a line on the projective plane.
Hence, we can reduce the question to the following elementary problem.
\begin{problem}[High school geometry]
There is a triangle $ABC$ on the plane.
Points $A'$, $B'$, $C'$ lie on the lines $BC$, $AC$ and $AB$, respectively.
Find all configurations $(X, YZ)$ (where a point $X$ lies on a line $YZ$) such that
$(X, YZ)$ is not in general position with the configurations $(A', BC)$, $(B', AC)$ and $(C', AB)$.
\end{problem}
It is easy to show that there are $6$ such configurations.
On the other hand, the same answer can be found using the simplest projective embedding of $X$:
$$p:X\hookrightarrow\mathbb{P}(\mathbb{C}^3)\times \mathbb{P}(\Lambda^2\mathbb{C}^3)\stackrel{\mbox{\tiny Segre}}{\hookrightarrow}\mathbb{P}({\rm End}(\mathbb{C}^3)); \quad
p:(V^1, V^2)\mapsto V^1\times V^2\mapsto V^1\otimes\Lambda^2 V^2,$$
and counting the number of intersection points of $p(X)$ with 3 generic hyperplanes in $\mathbb{C}\mathbb{P}^8$ (that is, the {\em degree} of $p(X)$).
Restricting the map $p$ to the open dense $B$-orbit $U^-\subset X$ we get that the latter problem reduces to the problem from Example \ref{e.flag}.
In particular, we can show that the inclusion $FFLV(1,0,-1)\subset\Delta_v(V)$ is an equality.
Indeed, by Theorem \ref{t.NO} the volume of $\Delta_v(V)$ times $3!$ is equal to the degree of $p(X)$, that is, to $6$.
Hence, the volume of $\Delta_v(V)$ is equal to $1$.
Since the volume of $FFLV(1,0,-1)$ is also equal to $1$, the inclusion $FFLV(1,0,-1)\subset\Delta_v(V)$ of convex polytopes implies the exact equality.
\end{example}
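The final volume comparison is easy to check numerically; the following sketch (using scipy, for illustration only) computes the volume of the convex hull of the $8$ points of Example \ref{e.flag2} and multiplies it by $3!$.
\begin{verbatim}
import numpy as np
from math import factorial
from scipy.spatial import ConvexHull

# the 8 lattice points v(V) from Example 1.5
points = np.array([(0,0,0), (1,0,0), (0,1,0), (0,0,1),
                   (1,1,0), (1,0,1), (0,1,1), (0,2,0)])
vol = ConvexHull(points).volume
print(vol, factorial(3) * vol)   # 1.0  6.0 = degree of p(X)
\end{verbatim}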
\section{GZ patterns and FFLV polytopes}\label{s.GZ}
In this section, we recall the definitions of GZ patterns in types $A$, $B$, $C$, $D$ and FFLV
polytopes in types $A$ and $C$.
Let $\lambda=(\lambda_1,\ldots,\lambda_n)$ denote a non-increasing collection of integers.
In what follows, we regard $\lambda$ as a dominant weight of a classical group.
GZ polytopes for classical groups $G$ were constructed using representation theory, namely, lattice points in the polytope $GZ(\lambda)$ parameterize the vectors of the GZ basis in the irreducible representation $V_\lambda$ of $G$ with the highest weight $\lambda$ (see \cite{M} for a survey on GZ bases).
Lattice points in FFLV polytopes $FFLV(\lambda)$ parameterize a different basis in the same representation (see \cite{FFL,FFL2}).
In particular, $GZ(\lambda)$ and $FFLV(\lambda)$ have the same Ehrhart and volume polynomials.
\subsection{GZ patterns}\label{ss.GZ}
\subsubsection{Type A}
We now regard $\lambda$ as a dominant weight of $SL_n$.
In convex geometric terms, the GZ polytope $GZ(\lambda)\subset\mathbb{R}^d$, where $d:=\frac{n(n-1)}{2}$, is defined as the set of all points
$(u^1_1,u^1_2,\ldots, u^1_{n-1};u^2_1,\ldots,u^2_{n-2};\ldots; u^{n-1}_1)\in\mathbb{R}^d$ that satisfy the following interlacing inequalities:
$$
\begin{array}{cccccccccc}
\lambda_1& & \lambda_2 & &\lambda_3 & &\ldots & & &\lambda_n \\
&\boxed{u^1_1}& &\boxed{u^1_2} & & \ldots & & &\boxed{u^1_{n-1}}& \\
& &\boxed{u^2_1} & & \ldots & & &\boxed{u^2_{n-2}}& & \\
& & & \ddots & & \ddots & & & & \\
& & & &\boxed{u^{n-2}_1}& & \boxed{u^{n-2}_2} & & & \\
& & & & &\boxed{u^{n-1}_1}& & & & \\
\end{array}
\eqno(GZ_A)$$
where the notation
$$
\begin{array}{ccc}
a & &b \\
& c &
\end{array}
$$
means $a\ge c\ge b$ (the table encodes $2d$ inequalities).
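As a sanity check of this description, one can count the integer points of GZ patterns by brute force and compare with dimensions of irreducible representations of $SL_n$; the following sketch (illustrative only, with our own helper \texttt{gz\_count}) does this for $SL_3$.
\begin{verbatim}
from itertools import product

def gz_count(lam):
    """Number of integer GZ patterns with top row lam (non-increasing integers)."""
    if len(lam) == 1:
        return 1
    # the next row interlaces lam: lam[i] >= u[i] >= lam[i+1]
    rows = [range(lam[i + 1], lam[i] + 1) for i in range(len(lam) - 1)]
    return sum(gz_count(u) for u in product(*rows))

print(gz_count((1, 0, 0)))   # 3 = dim of the standard representation of SL_3
print(gz_count((2, 1, 0)))   # 8 = dim of the adjoint representation of SL_3
\end{verbatim}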
\subsubsection{Types B and C}
Let $\lambda$ be a dominant weight of $Sp_{2n}(\mathbb{C})$, that is, all $\lambda_i$ are non-negative.
Put $d=n^2$.
Denote coordinates in $\mathbb{R}^d$ by $(x^1_1,\ldots, x^1_n; y^1_1,\ldots, y^1_{n-1};\ldots; x_1^{n-1}, x^{n-1}_2, y^{n-1}_1; x^n_1)$.
For every $\lambda$, define the {\em symplectic GZ polytope} $SGZ(\lambda)\subset\mathbb{R}^d$ for $Sp_{2n}(\mathbb{C})$ by the following interlacing inequalities:
$$
\begin{array}{ccccccccccc}
\lambda_1& & \lambda_2 & &\lambda_3 & & \ldots & \lambda_n & &0 &\\
&\boxed{x^1_1} & &\boxed {x^1_2} & &\ldots & & &\boxed{x^1_{n}} & &\\
& &\boxed{y^1_1} & &\boxed{y^1_2} & & \ldots &\boxed{y^1_{n-1}}& & 0 &\\
& & &\boxed{x^2_1} & &\ldots & & &\boxed{x^2_{n-1}}& &\\
& & & & \boxed{y^2_1} & & \ldots &\boxed{y^2_{n-2}}& & 0 &\\
& & & & &\ddots & & \vdots & &\vdots &\\
& & & & & & \boxed{x^{n-1}_1}& &\boxed{x^{n-1}_2}& &\\
& & & & & & &\boxed{y^{n-1}_1}& & 0 &\\
& & & & & & & & \boxed{x^{n}_1} & &\\
\end{array}
\eqno{GZ_C}$$
Again, every coordinate in this table is bounded from above by its upper left neighbor and bounded from below by its upper right neighbor (the table encodes $2d$ inequalities).
Roughly speaking, $SGZ(\lambda)$ is the polytope defined using half of the GZ pattern $(GZ_A)$ for $SL_{2n}(\mathbb{C})$.
To define the GZ polytope in type $B$ (that is, for $G=SO_{2n+1}(\mathbb{C})$) we use the same pattern and inequalities but choose a bigger lattice $L\subset\mathbb{R}^d$ so that the standard lattice $\mathbb{Z}^d\subset L$ has index $2$ in $L$ (see \cite{BZ} for more details).
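The same brute-force check works for the symplectic pattern; the sketch below (illustrative only) counts the integer points of $SGZ(\lambda)$ for $Sp_4$, that is, $n=2$, and compares them with dimensions of irreducible representations.
\begin{verbatim}
def sgz_count(l1, l2):
    """Integer points of SGZ(lambda) for Sp_4 (n = 2), lambda = (l1, l2)."""
    count = 0
    for x11 in range(l2, l1 + 1):              # l1 >= x^1_1 >= l2
        for x12 in range(0, l2 + 1):           # l2 >= x^1_2 >= 0
            for y11 in range(x12, x11 + 1):    # x^1_1 >= y^1_1 >= x^1_2
                for x21 in range(0, y11 + 1):  # y^1_1 >= x^2_1 >= 0
                    count += 1
    return count

print(sgz_count(1, 0))   # 4 = dim of the standard representation of Sp_4
print(sgz_count(1, 1))   # 5 = dim of V_{omega_2} for Sp_4
\end{verbatim}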
\subsubsection{Type D}
Let $\lambda$ be a dominant weight of $SO_{2n}(\mathbb{C})$.
Put $d=n(n-1)$.
Denote coordinates in $\mathbb{R}^d$ by $(y^1_1,\ldots, y^1_{n-1};\ldots; x_1^{n-1}, x^{n-1}_2, y^{n-1}_1; x^n_1)$.
For every $\lambda$, define the {\em even orthogonal GZ polytope} $OGZ(\lambda)\subset\mathbb{R}^d$ for $SO_{2n}(\mathbb{C})$ using the following table:
$$
\begin{array}{cccccccccc}
\lambda_1 & &\lambda_2 & &\ldots & & &\lambda_n & &\\
&\boxed{y^1_1} & &\boxed{y^1_2} & & \ldots &\boxed{y^1_{n-1}}& & &\\
& &\boxed{x^2_1} & &\ldots & & &\boxed{x^2_{n-1}}& &\\
& & & \boxed{y^2_1} & & \ldots &\boxed{y^2_{n-2}}& & &\\ & & & &\ddots & & \vdots & &\vdots &\\
& & & & & \boxed{x^{n-1}_1}& &\boxed{x^{n-1}_2}& &\\
& & & & & &\boxed{y^{n-1}_1}& & &\\
& & & & & & & \boxed{x^{n}_1} & &\\
\end{array}
\eqno{GZ_D}$$
Again, every coordinate in this table is bounded from above by its upper left neighbor and bounded from below by its upper right neighbor.
There are also extra inequalities for every $i=1$,\ldots ,$n-2$:
$$x^i_{n-i}+x^i_{n+1-i}+x^{i+1}_{n-i}\ge y^i_{n-i}; \quad x^{i+1}_{n-i-1}+x^i_{n+1-i}+x^{i+1}_{n-i}\ge y^i_{n-i},$$
and inequality $x^{n-1}_{1}+x^{n-1}_{2}+x^{n}_{1}\ge y^{n-1}_{1}$ (see \cite{BZ} for more details).
In what follows, we will not use the GZ polytopes themselves but rather the GZ tables.
\begin{remark}\label{r.shape}
If we rotate GZ tables in types $A$, $B$/$C$ and $D$ by $\frac{3\pi}4$ clockwise we will get the following tables:
$$
A \begin{array}{c}
\begin{array}[t]{|c|}
\hline
\ \\
\hline
\ \\
\hline
\ \\
\hline
\end{array}
\begin{array}[t]{|c|}
\hline
\ \\
\hline
\ \\
\hline
\end{array}
\begin{array}[t]{|c|}
\hline
\ \\
\hline
\end{array}
\end{array}, \quad
B/C \begin{array}{c}
\begin{array}[t]{|c|}
\hline
\ \\
\hline
\end{array}
\begin{array}[t]{|c|}
\hline
\ \\
\hline
\ \\
\hline
\end{array}
\begin{array}[t]{|c|}
\hline
\ \\
\hline
\ \\
\hline
\ \\
\hline
\end{array}
\begin{array}[t]{|c|}
\hline
\ \\
\hline
\ \\
\hline
\end{array}
\begin{array}[t]{|c|}
\hline
\ \\
\hline
\end{array}
\end{array},\quad
D \begin{array}{c}
\begin{array}[t]{|c|}
\hline
\ \\
\hline
\end{array}
\begin{array}[t]{|c|}
\hline
\ \\
\hline
\ \\
\hline
\end{array}
\begin{array}[t]{|c|}
\hline
\ \\
\hline
\ \\
\hline
\ \\
\hline
\end{array}
\begin{array}[t]{|c|}
\hline
\ \\
\hline
\ \\
\hline
\ \\
\hline
\end{array}
\begin{array}[t]{|c|}
\hline
\ \\
\hline
\ \\
\hline
\end{array}
\begin{array}[t]{|c|}
\hline
\ \\
\hline
\end{array}
\end{array}
$$
We will use this presentation of GZ tables in the proof of Theorem \ref{t.C}.
\end{remark}
\subsection{FFLV polytopes} \label{ss.FFLV}
\subsubsection{Type A}
For every dominant weight $\lambda$ of $SL_n(\mathbb{C})$, we now define the FFLV polytope $FFLV(\lambda)$.
Put $d:=\frac{n(n-1)}2$.
Label coordinates in $\mathbb{R}^d$ by
$(u^1_{n-1};u^2_{n-2},u^1_{n-2};\ldots;u^{n-1}_1,u^{n-2}_{1},\ldots,u^{1}_1)$
and organize them using the GZ table $(GZ_A)$.
The polytope $FFLV(\lambda)$ in type $A$ is defined by inequalities
$u^l_m\ge 0$ and
$$\sum_{(l,m)\in D}u^l_m\le \lambda_i-\lambda_j$$
for all Dyck paths $D$ going from $\lambda_i$ to $\lambda_j$ in table $(GZ_A)$, where $1\le i<j\le n$.
A {\em Dyck path} is a broken line whose segments either connect $u^i_j$ with $u^{i+1}_j$ or
connect $u^i_j$ with $u^i_{j+1}$.
Note that $FFLV(\lambda)$ only depends on the differences $(\lambda_1-\lambda_2)$,\ldots, $(\lambda_{n-1}-\lambda_n)$.
An example of an FFLV polytope, for $n=3$ and $\lambda=(1,0,-1)$, is depicted in Figure 1.
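For $n=3$ and $\lambda=(1,0,-1)$ the inequalities above reduce to $0\le u^1_1,u^1_2\le 1$, $u^2_1\ge 0$ and $u^1_1+u^2_1+u^1_2\le 2$, and the lattice points can be listed by brute force; the following sketch (illustrative only) recovers the $8$ points of Example \ref{e.flag2}.
\begin{verbatim}
from itertools import product

# lattice points of FFLV(1,0,-1) in the coordinates (u^1_1, u^2_1, u^1_2):
# 0 <= u1, u3 <= 1, u2 >= 0, u1 + u2 + u3 <= 2
points = [(u1, u2, u3) for u1, u2, u3 in product(range(3), repeat=3)
          if u1 <= 1 and u3 <= 1 and u1 + u2 + u3 <= 2]
print(len(points))   # 8, matching the lattice point count of the GZ polytope
\end{verbatim}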
\subsubsection{Type C}
Similarly to the type $A$ case, we define FFLV polytopes in type $C$ using the corresponding GZ table $(GZ_C)$.
The only difference from the type $A$ case is that we allow Dyck paths to end at one of the $0$ entries in the rightmost column of the table (see \cite{FFL2,ABS} for more details).
\section{Valuations on flag varieties}\label{s.main}
We now construct uniformly a valuation $v$ on flag varieties in types $A$, $B$, $C$ and $D$.
In types $A$ and $C$, we identify the corresponding Newton--Okounkov polytopes with FFLV polytopes.
In type $B_2$, we get a {\em symplectic DDO polytope} \cite[Section 4]{Ki16I}, which is not
combinatorially equivalent to either the FFLV or the GZ polytope in type $C_2$.
In type $D_3$, we get a polytope that is different from both GZ and FFLV polytopes in type $A_3$; however, the question of combinatorial equivalence is open.
Fix a complete flag of subspaces
$F^\bullet:=(F^1\subset F^2\subset\ldots\subset F^{n-1}\subset \mathbb{C}^n)$, and a basis $e_1$,\ldots, $e_n$ in $\mathbb{C}^n$ compatible with $F^\bullet$, that is, $F^i=\langle e_1,\ldots, e_i\rangle$.
Define a non-degenerate symmetric bilinear form $(\cdot,\cdot)$ on $\mathbb{C}^n$ as follows:
$$ (e_i,e_j)=\left\{
\begin{array}{cc} 1 &\mbox{ if } i+j=n+1;\\
0 & \mbox{ otherwise }.
\end{array} \right.$$
Similarly, we define a non-degenerate skew-symmetric form $\omega(\cdot,\cdot)$ for even $n$.
For $i<j$, put
$$\omega(e_i,e_j)=-\omega(e_j,e_i)=\left\{
\begin{array}{cc} 1 &\mbox{ if } i+j=n+1;\\
0 & \mbox{ if } i+j\ne n+1.
\end{array} \right.$$
Let $B\subset SL_n(\mathbb{C})$ be a subgroup of upper triangular matrices with respect to the basis $e_1$,\ldots, $e_n$.
Recall that the complete flag variety $SL_n(\mathbb{C})/B$ can be defined as the variety of complete flags of subspaces $M^\bullet=(\{0\}\subset V^1 \subset V^2\subset\ldots\subset V^{n-1}\subset\mathbb{C}^n)$.
Similarly, we regard $SO_n(\mathbb{C})/B$ for any $n$ and $Sp_{n}(\mathbb{C})/B$ for even $n$ as the subvarieties of {\em orthogonal} and {\em isotropic} flags, respectively, in $SL_n(\mathbb{C})/B$.
A complete flag $M^\bullet$ in $\mathbb{C}^n$ is {\em orthogonal} if $V^i$ is orthogonal to $V^{n-i}$ with respect to
$(\cdot,\cdot)$.
A complete flag $M^\bullet$ in $\mathbb{C}^{n}$ is called {\em isotropic} if the restriction of $\omega$ to $V^{\frac{n}{2}}$
is zero, and $V^{n-i}=\{v\in \mathbb{C}^{n} \ | \ \omega(v,u)=0 \mbox{ for all } u\in V^i\}$.
In particular, the flag $F^\bullet$ is orthogonal and isotropic by our choice of the forms $(\cdot,\cdot)$ and $\omega$.
Recall that if $G$ is a connected complex semisimple group (e.g., a classical group), then
the Picard group of the complete flag variety $G/B$ can be identified with the weight lattice of $G$ \cite[1.4.2]{B}.
In particular, there is a bijection between dominant weights $\lambda$ and globally generated line bundles $L_\lambda$.
Recall also that the space of global sections $H^0(G/B,L_\lambda)$ is isomorphic to $V_\lambda^*$ where
$V_\lambda$ is the irreducible representation of $G$ with the highest weight $\lambda$.
Let $v_\lambda\in V_\lambda$ be a highest weight vector, i.e., the line $\langle v_\lambda\rangle\subset V_\lambda$ is $B$-invariant.
There is a well-defined map
$$p_\lambda:G/B\to\mathbb{P}(V_\lambda), \quad gB\mapsto\langle g\,v_\lambda\rangle\in \mathbb{P}(V_\lambda).$$
For instance, if $G=SL_3$ and $\lambda=(1,0,-1)$, then $p_\lambda$ coincides with the map $p$ of Example \ref{e.flag3}.
Similarly to Example \ref{e.flag} we may identify $V_\lambda^*$ with a subspace of $\mathbb{C}(G/B)$.
This amounts to fixing a global section $s_0\in H^0(G/B,L_\lambda)$ and identifying $s\in H^0(G/B,L_\lambda)$ with $\frac{s}{s_0}\in \mathbb{C}(G/B)$.
Denote by $\Delta_v(G/B,L_\lambda)\subset\mathbb{R}^d$ the Newton--Okounkov convex body corresponding to $G/B$,
$L_\lambda$ and $v$ (we denote by $d$ the dimension of $G/B$).
In what follows, we use that the normalized volume of $\Delta_v(G/B,L_\lambda)$
is equal by Theorem \ref{t.NO} to the degree of $p_\lambda(G/B)\subset\mathbb{P}(V_\lambda)$.
The latter is equal to the normalized volume of $GZ(\lambda)$ and of $FFLV(\lambda)$ by Hilbert's theorem.
\subsection{Type A} Let $G=SL_n(\mathbb{C})$.
Put $d=\frac{n(n-1)}{2}$.
Recall that the open Schubert cell $X^\circ$ with respect to $F^\bullet$ is defined as the set of all flags $M^\bullet$ that are in general position with the standard flag $F^\bullet$, i.e., all intersections $M^i\cap F^j$ are transverse.
We can identify the open Schubert cell $X^\circ\subset G/B$ with an affine space $\mathbb{C}^d$ by choosing for every flag $M^\bullet$ a basis $v_1$,\ldots, $v_n$ in $\mathbb{C}^n$ of the form:
$$v_1=e_n+x^{1}_{n-1} e_{n-1}+\ldots+ x^1_1 e_1, $$
$$v_2=e_{n-1}+x^{2}_{n-2} e_{n-2}+\ldots+ x^2_1 e_1, \quad \ldots \quad, v_{n-1}=e_2+x^{n-1}_1e_1, \quad v_n=e_1,$$
so that $M^i=\langle v_1,\ldots, v_i\rangle$.
Such a basis is unique, hence the coefficients $(x^i_j)_{i+j\le n}$ are coordinates on the open cell.
In other words, every flag $M^\bullet\in X^\circ$ gets identified with a triangular matrix:
$$\begin{pmatrix}
x^1_1 & x^1_2& \ldots & x^1_{n-1} & 1\\
x^2_1 & x^2_2 & \ldots & 1 & 0\\
\vdots & \vdots & & & \vdots\\
x^{n-1}_1 & 1 & \ldots & 0 & 0\\
1 & 0 & & 0 & 0\\
\end{pmatrix}.\eqno(*)
$$
We order the coefficients $(x^i_j)_{i+j\le n}$ of this matrix by starting from column $(n-1)$ and going from top to bottom in every column and from right to left along columns.
More precisely, put $(y_1,\ldots,y_d):=(x^1_{n-1};x^1_{n-2},x^2_{n-2};\ldots;x^{1}_1,x^{2}_{1},\ldots,x^{n-1}_1)$.
For instance, if $n=4$ we get the ordering:
$$\begin{pmatrix}
y_4 & y_2 & y_1 & 1\\
y_5 & y_3 & 1 & 0\\
y_6 & 1 & 0 & 0\\
1 & 0 & 0 & 0\\
\end{pmatrix}.
$$
We fix the lexicographic ordering on monomials in coordinates $y_1$, \ldots, $y_d$ so that $y_1\succ y_2\succ\ldots\succ y_d$.
By the lexicographic ordering we mean that $y_1^{k_1}\cdots y_d^{k_d}\succ y_1^{l_1}\cdots y_d^{l_d}$ iff
there exists $j\le d$ such that $k_i=l_i$ for $i<j$ and $k_j>l_j$.
\begin{remark}\label{r.flag}
In \cite[Section 2.2]{K17}, there is a geometric construction of coordinates compatible with the flag of translated Schubert subvarieties:
$$w_0X_{\rm id}\subset w_0w_{d-1}^{-1}X_{w_{d-1}}\subset w_0w_{d-2}^{-1}X_{w_{d-2}}
\subset\ldots\subset w_0w_{1}^{-1}X_{w_{1}}\subset SL_n/B,$$
for $\overline{w_0}=(s_1)(s_2s_1)(s_3s_2s_1)\ldots(s_{n-1}\ldots s_1)$.
Here $w_k$ denotes the $k$-th terminal subword of $\overline{w_0}$, that is, $w_{d-1}=s_1$, $w_{d-2}=s_2s_1$ and so on.
It is not hard to check that coordinates
$(y_1,\ldots,y_d)$
are also compatible with the same flag, i.e.,
$w_0w_{k}^{-1}X_{w_k}\cap X^\circ=\{y_1=\ldots=y_k=0\}$.
\end{remark}
Let $v$ denote the lowest order term valuation on $\mathbb{C}(G/B)$, that is, if $y_1^{k_1}\cdots y_d^{k_d}$
is the lowest order term of a polynomial $f\in\mathbb{C}(G/B)$ then $v(f):=(k_1,\ldots,k_d)\in\mathbb{Z}^d$.
For the ratio $\frac{f}{g}$ of two polynomials we put $v(\frac{f}{g}):=v(f)-v(g)$.
Let $L_\lambda$ be the line bundle on $G/B$ corresponding to a dominant weight
$\lambda:=(\lambda_1,\ldots,\lambda_n)\in\mathbb{Z}^n$ of $G$.
\begin{thm}{\cite[Theorem 2.1]{K17}}\label{t.A}
In type $A$, the Newton--Okounkov convex body $\Delta_v(G/B,L_\lambda)$ coincides with the
FFLV polytope
$FFLV(\lambda)$.
\end{thm}
For instance, the computation of the polytope $\Delta_v(SL_n/B,L_\lambda)$ for $n=3$ and $\lambda=(1,0,-1)$ is illustrated in Examples \ref{e.flag}, \ref{e.flag2}, \ref{e.flag3}.
Using Remark \ref{r.flag} we could deduce Theorem \ref{t.A} directly from \cite[Theorem 2.1]{K17}.
Below we give another proof that works simultaneously for types $A$ and $C$.
\subsection{Type $C$} Let $n=2r$ be even, and $G=Sp_n(\mathbb{C})$.
Put $d=r^2$.
We define the open Schubert cell $X^\circ$ with respect to $F^\bullet$ as the set of all isotropic flags $M^\bullet$
that are in general position with the standard flag $F^\bullet$.
Again, we can identify the open Schubert cell $X^\circ\subset G/B$ with an affine space $\mathbb{C}^d$ using matrix $(*)$.
Since $M^\bullet$ is isotropic, the coefficients $(x^i_j)_{i+j\le n}$ are no longer independent variables.
It is not hard to check that exactly $d$ coefficients, namely, $(x^i_j)_{i+j\le n,\, i\le j}$, are independent.
Again, we order the coordinates by starting from column $(n-1)$ and going from top to bottom in every column and from right to left along columns.
That is, put $(y_1,\ldots,y_d):=(x^1_{n-1};x^1_{n-2},x^2_{n-2};\ldots;x^{1}_r,x^{2}_{r},\ldots,x^r_r; \ldots; x^1_2,x^2_2; x^1_1)$.
It is easy to check that every $x^i_j$ for $i>j$ can be expressed as a polynomial in coordinates $y_1$,\ldots, $y_d$ with the lowest order term $x^j_i$.
In particular, there is a table inside the matrix $(*)$ whose coefficients
are coordinates on the Schubert cell $X^\circ$.
Here is an example for $n=6$ (coefficients inside the table are boxed):
$$\begin{pmatrix}
\boxed{x^1_1} & \boxed{x^1_2} &\boxed{x^1_3} & \boxed{x^1_4}& \boxed{x^1_{5}} & 1\\
x^2_1 & \boxed{x^2_2} &\boxed{x^2_3} & \boxed{x^2_4}& 1 & 0\\
x^3_1 & x^3_2 &\boxed{x^3_3} & 1 & 0 & 0\\
x^4_1 & x^4_2 &1 & 0 & 0 & 0\\
x^5_1 & 1 &0 & 0 & 0 & 0\\
1 & 0 &0 & 0 & 0 & 0\\
\end{pmatrix}.
$$
Note that the table is shaped exactly as the GZ pattern in type $C$ rotated by $\frac{3\pi}4$ clockwise (see Remark \ref{r.shape}).
Similarly to the type $A$ case, let $v$ be the lowest term valuation on $\mathbb{C}(G/B)$ associated with this ordering.
Let $L_\lambda$ be the line bundle on $G/B$ corresponding to a dominant weight
$\lambda:=(\lambda_1,\ldots,\lambda_r)\in\mathbb{Z}^r$ of $G$.
As before, denote by $\Delta_v(G,L_\lambda)\subset\mathbb{R}^d$
the Newton--Okounkov convex body corresponding to $G/B$,
$L_\lambda$ and $v$.
\begin{thm}\label{t.C}
In type $C$, the Newton--Okounkov convex body $\Delta_v(G/B,L_\lambda)$ coincides with the
FFLV polytope $FFLV(\lambda)$ in type $C$.
\end{thm}
\begin{proof}
We will provide a uniform proof for types $A$ and $C$.
Note that in both types the irreducible representation corresponding to the fundamental weight $\omega_k$
is contained in the $k$-th exterior power of the tautological representation $G\subset GL_n(\mathbb{C})$ \cite[Exercise 15.14, Theorem 17.5]{FH}.
Hence, the space $H^0(G/B,L_{\omega_k})=V_{\omega_k}^*$ is spanned by the restrictions of
Pl\"ucker coordinates of the Grassmannian $G(k,n)\subset \mathbb{P}(\Lambda^k\mathbb{C}^n)$ to $X^\circ$.
More precisely, there is a map $X^\circ\subset G/B\to G(k,n)\to \mathbb{P}(\Lambda^k\mathbb{C}^n)\supset\mathbb{P}(V_{\omega_k})$, which allows us to identify
$V_{\omega_k}^*$ with a subspace of $\mathbb{C}(X^\circ)=\mathbb{C}(G/B)$ spanned by certain $k\times k$ minors of matrix $(*)$.
Namely, we take all $k\times k$ minors of the $k\times n$ submatrix of $(*)$ formed by the first $k$ rows.
This is equivalent to taking all minors of $k\times (n-k)$ submatrix $A_{k,n-k}$ of $(*)$ with coefficients $x^i_j$ where $i\le k$ and $j\le (n-k)$.
It follows easily from the definition of the valuation $v$ that the lowest order term in any minor of matrix $A_{k,n-k}$
is the diagonal term.
Hence, $v(V_{\omega_k}^*)$ consists precisely of those points with coordinates $(u^i_j)$ in $\mathbb{R}^d$ such that $u^i_j=0,1$ and
two nonzero $u^i_j$ never lie on the same Dyck path.
Hence, the convex hull of $v(V_{\omega_k}^*)$ coincides with the FFLV polytope $FFLV(\omega_k)$.
We get the inclusion $FFLV(\omega_k)\subset \Delta_v(G/B,L_{\omega_k})$.
By the superadditivity of Newton--Okounkov convex bodies \cite[Proposition 2.32]{KK} we also have that if
$\lambda=\sum m_i\omega_i$ then
$$\sum m_i\Delta_v(G/B,L_{\omega_i})\subset \Delta_v(G/B,L_\lambda),$$
where the addition on the left-hand side is the Minkowski sum.
By definition $FFLV(\lambda)=\sum m_i FFLV(\omega_i)$.
Hence, we get the inclusion
$$FFLV(\lambda)\subset \Delta_v(G/B,L_\lambda).$$
This inclusion is equality because both convex bodies have the same volume.
\end{proof}
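The claim that the lowest order term of every minor of $A_{k,n-k}$ is the diagonal term is easy to test by computer in type $A$, where the entries of $(*)$ are independent variables (in type $C$ one would first express the entries below the diagonal of the table through $y_1,\ldots,y_d$). The following sketch (illustrative only) performs this check for $n=5$.
\begin{verbatim}
from itertools import combinations
import sympy as sp

n = 5
# y-ordering of the entries x^i_j of (*): columns j = n-1, ..., 1, and within
# each column the rows i = 1, ..., n-j, read from top to bottom
order = [(i, j) for j in range(n - 1, 0, -1) for i in range(1, n - j + 1)]
x = {p: sp.Symbol('x_%d_%d' % p) for p in order}
yvars = [x[p] for p in order]                  # y_1, ..., y_d

def lowest(expr):
    # exponent vector of the lex-lowest monomial with respect to y_1, ..., y_d
    return min(sp.Poly(expr, *yvars).monoms())

for k in range(1, n):                          # fundamental weight omega_k
    for m in range(1, min(k, n - k) + 1):      # size of the minor
        for R in combinations(range(1, k + 1), m):
            for C in combinations(range(1, n - k + 1), m):
                sub = sp.Matrix(m, m, lambda a, b: x[(R[a], C[b])])
                diag = sp.Mul(*(x[(R[a], C[a])] for a in range(m)))
                assert lowest(sub.det()) == lowest(diag)
print('diagonal terms are lex-lowest for all minors, n =', n)
\end{verbatim}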
\begin{remark}
The proof relies on the fact that the volume of $FFLV(\lambda)$ is equal to the degree of $p_\lambda(G/B)\subset\mathbb{P}(V_\lambda)$.
In types $A$ and $C$, this fact has both representation theoretic \cite{FFL,FFL2} and combinatorial proofs \cite{ABS}.
In type $A$, there is also a convex geometric proof \cite[Section 4]{K17}.
It would be interesting to check whether this proof extends to type $C$.
\end{remark}
Similarly to the type $A$ case, the valuation $v$ in type $C$ can be defined using a flag of translated Schubert subvarieties; however, these subvarieties no longer correspond to terminal subwords of any decomposition of the longest element in the Weyl group of $G$.
For instance, if $n=4$ we get subvarieties corresponding to elements $s_2s_1s_2$, $s_1s_2$ and $s_1$ of the Weyl group.
\subsection{Type $B$} Let $n=2r+1$ be odd, and $G=SO_n(\mathbb{C})$.
Put $d=r^2$.
We define the open Schubert cell $X^\circ$ with respect to $F^\bullet$ as the set of all orthogonal flags $M^\bullet$ that are in general position with the standard flag $F^\bullet$.
Again, there is a table (shaped as the GZ pattern in type $C$) inside the matrix $(*)$ whose coefficients are coordinates on the Schubert cell $X^\circ$.
Here is an example for $n=5$ (coefficients inside the table are boxed):
$$\begin{pmatrix}
x^1_1 &\boxed{x^1_2} & \boxed{x^1_3}& \boxed{x^1_{4}} & 1\\
x^2_1 &x^2_2 & \boxed{x^2_3}& 1 & 0\\
x^3_1 &x^3_2 & 1 & 0 & 0\\
x^4_1 &1 & 0 & 0 & 0\\
1 &0 & 0 & 0 & 0\\
\end{pmatrix}.
$$
Put $(y_1,\ldots,y_d):=(x^1_{n-1};x^1_{n-2},x^2_{n-2};\ldots;x^{1}_{r+1},x^{2}_{r+1},\ldots,x^r_{r+1}; \ldots; x^1_3,x^2_3; x^1_2)$.
As before, let $v$ be the lowest term valuation on $\mathbb{C}(G/B)$ associated with the ordering $y_1\succ\ldots\succ y_d$.
It is easy to check that every $x^i_j$ for $i>j$ can be expressed as a polynomial in coordinates
$y_1$,\ldots, $y_d$ with the lowest order term $x^j_i$, while $x^i_i$ is a polynomial with the lowest order term $(x^i_{r+1})^2$.
While we may still use Pl\"ucker coordinates to compute $\Delta_v(G/B,L_{\omega_k})$ it is no longer true that the lowest
order term in any minor of matrix $A_{k,n-k}$ is the diagonal term
(because the diagonal coefficients $x^i_i$ might contribute higher order terms).
In particular, the defining inequalities for the convex hull $P_k$ of $v(H^0(G/B,L_{\omega_k}))$ will be more intricate.
Still, they can be described by generalizing the notion of Dyck paths.
It would be interesting to compare these inequalities with those of \cite{BK} (type $B_3$), see also \cite{Ma}.
To check whether the polytope $P_\lambda:=\sum m_iP_i$ coincides with the convex body $\Delta_v(G/B,L_{\lambda})$ for
$\lambda=\sum m_i\omega_i$ we have to compare their volumes.
For instance, one could try to construct a volume preserving piecewise linear map between
$P_\lambda$ and the corresponding GZ-polytope in type $B$ extending the construction of \cite[Section 4.2]{K17}.
For $B_2$, it is easy to check using Pl\"ucker coordinates that the convex hull $P_\lambda$ of $v(H^0(G/B,L_{\lambda}))\subset\mathbb{R}^4$
for $\lambda=\lambda_1\omega_1+\lambda_2\omega_2$ contains the Minkowski sum $\lambda_1 P_1+\lambda_2 P_2$, where
$P_1$ is the 3-dimensional simplex with the vertices $(0,0,0,0)$, $(1,0,0,0)$, $(0,2,0,0)$, $(0,0,0,1)$,
and $P_2$ is the 3-dimensional simplex with the vertices $(0,0,0,0)$, $(0,1,0,0)$, $(0,0,1,0)$, $(0,0,0,1)$.
Hence, $P_\lambda$ is identical to the Newton--Okounkov polytope computed in \cite[Proposition 4.1]{Ki16I} (up to relabeling of coordinates).
Denote coordinates in $\mathbb{R}^4$ by $(u_1,u_2,u_3,u_4)$. Then $P_\lambda$ is given by inequalities:
$$0\le u_1, u_2, u_3, u_4; \quad u_1 \le \lambda_1; \quad u_3\le \lambda_2; \quad 2u_1+u_2+2(u_3+u_4) \le 2(\lambda_1+\lambda_2); $$ $$2u_1+u_2+u_3+u_4 \le 2\lambda_1+\lambda_2.$$
In particular, its volume coincides with the degree of $p_\lambda(G/B)\subset\mathbb{P}(V_\lambda)$.
Hence, $P_\lambda=\Delta_v(G/B,L_{\lambda})$.
Note that $P_\lambda$ is not combinatorially equivalent to the FFLV polytope in type $C_2$
(see \cite[Section 2.4]{K17}).
\subsection{Type $D$}
Let $n=2r$ be even, and $G=SO_n(\mathbb{C})$.
Put $d=r(r-1)$.
There is a table (shaped as the GZ pattern in type $D$) inside the matrix $(*)$ whose coefficients are coordinates on the Schubert cell $X^\circ$.
Here is an example for $n=6$ (coefficients inside the table are boxed):
$$\begin{pmatrix}
x^1_1 &\boxed{x^1_2} & \boxed{x^1_3}& \boxed{x^1_{4}} & \boxed{x^1_{5}}&1\\
x^2_1 &x^2_2 & \boxed{x^2_3}& \boxed{x^2_{4}} & 1 &0\\
x^3_1 &x^3_2 & 0 & 1 & 0 &0\\
x^4_1 &x^4_2 & 1 & 0 & 0 &0\\
x^5_1 &1 & 0 & 0 & 0 &0\\
1 &0 & 0 & 0 & 0 &0\\
\end{pmatrix}.
$$
Put $(y_1,\ldots,y_d):=(x^1_{n-1};\,x^1_{n-2},x^2_{n-2};\ldots;\,x^{1}_{r+1},x^{2}_{r+1},\ldots, x^{r-1}_{r+1};\, x^{1}_{r},x^{2}_{r},\ldots,x^{r-1}_{r};\ldots;\, x^1_3,x^2_3;\, x^1_2)$, and define $v$ as before.
It is easy to check that every $x^i_j$ for $i>j$ can be expressed as a polynomial in coordinates
$y_1$,\ldots, $y_d$ with the lowest order term $x^j_i$, while $x^i_i$ is a polynomial with the lowest order term $x^i_r x^i_{r+1}$.
For $D_3$, it is not hard to check using Pl\"ucker coordinates that the convex hull $P_\lambda$ of $v(H^0(G/B,L_{\lambda}))\subset\mathbb{R}^6$
for $\lambda=\lambda_1\omega_1+\lambda_2\omega_2+\lambda_3\omega_3$ contains the Minkowski sum $\lambda_1 P_1+\lambda_2 P_2+\lambda_3P_3$, where
$P_1$ is the 4-dimensional polytope with the vertices $(0,0,0,0,0,0)$, $(1,0,0,0,0,0)$, $(0,1,0,0,0,0)$, $(0,0,0,1,0,0)$, $(0,0,0,0,0,1)$, $(0,1,0,1,0,0)$,
$P_2$ is the 3-dimensional simplex with the vertices $(0,0,0,0,0,0)$, $(0,1,0,0,0,0)$, $(0,0,1,0,0,0)$, $(0,0,0,0,0,1)$, and
$P_3$ is the 3-dimensional simplex with the vertices $(0,0,0,0,0,0)$, $(0,0,0,1,0,0)$, $(0,0,0,0,1,0)$, $(0,0,0,0,0,1)$.
By reordering coordinates, we can get that $P_1=FFLV(\widetilde\omega_2)$, $P_2=FFLV(\widetilde\omega_1)$, $P_3=FFLV(\widetilde\omega_3)$ for fundamental weights $\widetilde\omega_1$, $\widetilde\omega_2$, $\widetilde\omega_3$ of $SL_4(\mathbb{C})$.
However, these reorderings do not agree for different fundamental weights, so it is not clear whether $P_\lambda$ is unimodularly equivalent to $FFLV(\lambda)$ in type $A_3$ (or to other known polytopes).
To compare $P_\lambda$ with FFLV and GZ polytopes one might write down the inequalities that define $P_\lambda$ and use them to count the number of facets of $P_\lambda$.
It would also be interesting to compute the inequalities for $P_\lambda$ in the case of $D_4$ and compare them with those of
\cite{G}.
| {
"timestamp": "2019-02-08T02:06:57",
"yymm": "1902",
"arxiv_id": "1902.02511",
"language": "en",
"url": "https://arxiv.org/abs/1902.02511",
"abstract": "For classical groups SL(n), SO(n) and Sp(2n), we define uniformly geometric valuations on the corresponding complete flag varieties. The valuation in every type comes from a natural coordinate system on the open Schubert cell and is combinatorially related to the Gelfand-Zetlin pattern in the same type. In types A and C, we identify the corresponding Newton-Okounkov polytopes with the Feigin-Fourier-Littelmann-Vinberg polytopes. In types B and D, we compute low-dimensional examples and formulate open questions.",
"subjects": "Algebraic Geometry (math.AG)",
"title": "Newton-Okounkov polytopes of flag varieties for classical groups",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109489630492,
"lm_q2_score": 0.7185943925708561,
"lm_q1q2_score": 0.7076796256672307
} |
https://arxiv.org/abs/1506.03841 | Lipschitz geometry does not determine embedded topological type | We investigate the relationships between the Lipschitz outer geometry and the embedded topological type of a hypersurface germ in $(\mathbb C^n,0)$. It is well known that the Lipschitz outer geometry of a complex plane curve germ determines and is determined by its embedded topological type. We prove that this does not remain true in higher dimensions. Namely, we give two normal hypersurface germs $(X_1,0)$ and $(X_2,0)$ in $(\mathbb C^3,0)$ having the same outer Lipschitz geometry and different embedded topological types. Our pair consist of two superisolated singularities whose tangent cones form an Alexander-Zariski pair having only cusp-singularities. Our result is based on a description of the Lipschitz outer geometry of a superisolated singularity. We also prove that the Lipschitz inner geometry of a superisolated singularity is completely determined by its (non embedded) topological type, or equivalently by the combinatorial type of its tangent cone. | \section{Introduction}
A complex germ $(X,0)$ has two natural metrics up to bilipschitz
equivalence, the \emph{outer metric} given by embedding $(X,0)$
in some $(\C^n,0)$ and taking distance in $\C^n$ and the \emph{inner
metric} given by shortest distance along paths in $X$.
In this paper we investigate the relationships between the Lipschitz outer geometry and the embedded topological type of a hypersurface germ in $(\C^n,0)$.
It is well known that the Lipschitz outer geometry of a complex
plane curve germ determines and is determined by its embedded
topological type (\cite{PT}, see also \cite{fernandes} and \cite[Theorem 1.1.]{NP}). We
prove that this does not remain true in higher dimensions:
\begin{theorem}\label{th:main}
There exist two hypersurface germs in $(\C^3,0)$ having the same
Lipschitz outer geometry and distinct embedded topological types.
\end{theorem}
It is worth noting that for families of isolated hypersurfaces in
$\C^3$, the constancy of Lipschitz outer geometry implies constancy of embedded
topological type. Indeed, Varchenko proved in \cite{varchenko} that a
Zariski equisingular family of hypersurfaces in any dimension has
constant embedded topological type and it is proved in \cite{NP2} that
for a family of hypersurface singularities $(X_t,0)\subset(\C^3,0)$,
Zariski equisingularity is equivalent to constant Lipschitz outer
geometry.
It should also be noted that the converse question, which consists of
examining which part of the outer Lipschitz geometry of a
hypersurface can be recovered from its embedded topological type, seems
difficult. In particular, the outer geometry of a normal complex
surface singularity determines its multiplicity (\cite[Theorem 1.2 (2)]{NP2}), so this
question somehow contains the Zariski multiplicity question.
In order to prove Theorem \ref{th:main} we construct two germs of
hypersurfaces in $(\C^3,0)$ having the same Lipschitz outer geometry
and different embedded topological types. They consist of a pair of
superisolated singularities whose tangent cones form an
Alexander-Zariski pair of projective plane curves.
A surface singularity $(X,0)$ is {\it superisolated} (SIS for short)
if it is given by an equation
$$ f_{d} (x,y,z)+f_{d+1} (x,y,z)+f_{d+2}(x,y,z)+\dots=0, $$
where $d \geq 2$, $f_k$ is a homogeneous polynomial of degree $k$
and the projective curve $\{f_{d+1}=0 \} \subset \mathbb P^2$
contains no singular point of the projective curve $C= \{[x:y:z]:
f_{d} (x,y,z)=0\}$. In particular, the projectivized tangent cone $C$
of $(X,0)$ is reduced. In the sequel we will just consider
SISs with equations
$$ f_{d} (x,y,z)+f_{d+1} (x,y,z)=0\,. $$
\begin{definition}[Combinatorial type of a projective plane
curve]\label{def:combinatorial type}
The \emph{combinatorial type} of a reduced projective plane curve $C \subset \mathbb P^2$ is the homeomorphism type of a tubular neighborhood of it in $\mathbb
P^2$ (see, e.g., \cite[Remark 3]{ACT}; a more combinatorial version is also given there, which we describe in Section \ref{sec:superisolated}).
\end{definition}
It is well known that the combinatorial type
of the projectivized tangent cone of a SIS $(X,0)$ determines the topology of $(X,0)$. In fact, we will show:
\begin{theorem} \label{thm:inner} (i). The Lipschitz inner geometry of a
SIS determines and is determined by the combinatorial type of its projectivized tangent cone.
(ii). There exist SISs with the same combinatorial types of their projectivized tangent cones but different Lipschitz outer geometry.
\end{theorem}
\smallskip\noindent{\bf Acknowledgments.} We are grateful to
H\'el\`ene Maugendre for fruitful conversations and for communicating
to us the equations of the tangent cones for the examples in
the proof of Theorem \ref{thm:inner} (ii). Walter Neumann was supported by NSF grant
DMS-1206760. Anne Pichon was supported by ANR-12-JS01-0002-01 SUSI. We are also grateful for the
hospitality and support of the following institutions: Columbia University, Institut de Math\'ematiques de Marseille, Aix Marseille
Universit\'e and CIRM Luminy, Marseille.
\section{Proof of Theorem 1.1}
The proof of Theorem \ref{th:main} will need Lemma \ref{le:RL implies BLA} and Proposition \ref{thm:outer} below, which will be proved in section \ref{sec:outer geometry}. First a definition:
\begin{definition} We say that two germs $(C_1,0)$ and $(C_2,0)$ of
reduced irreducible plane curves are
\emph{\bla-equivalent} if for $i=1,2$ there are holomorphic maps $h_i\colon (\C^2,0)\to (\C,0)$ with $(h_i^{-1}(0),0)=(C_i,0)$, a homeomorphism $\psi \colon
(\C^2,0) \to (\C^2,0)$, a constant $K \geq 1$ and a neighborhood
$\cal U$
of the origin in $\C^2$ such that for all $a,a' \in \cal U$:
\begin{align*}
\frac{1}{K} || h_2(\psi(a)) (1,\psi(a)) - h_2(\psi(a'))&(1,\psi(a')) ||_{\C^3} \leq || h_1(a)(1,a) - h_1(a')(1,a') ||_{\C^3} \hskip1,5cm
\\ & \leq K || h_2(\psi(a)) (1,\psi(a)) - h_2(\psi(a')) (1,\psi(a')) ||_{\C^3}
\end{align*}
\end{definition}
\begin{lemma}\label{le:RL implies BLA}
\Bla-equivalence of reduced irreducible plane curve
germs $(C_1,0)$ and $(C_2,0)$ does
not depend on the choice of their defining functions $h_1$ and $h_2$. Moreover, it is implied by
analytic equivalence of $(C_1,0)$ and $(C_2,0)$ in the sense of Zariski \cite{Z1}
(also called RL-equivalence or $\cal A$-equivalence).
\end{lemma}
\begin{proposition} \label{thm:outer} Let $(X,0)$ be a SIS with
equation $f_d + f_{d+1} =0$. The Lipschitz outer geometry of
$(X,0)$ is determined by the combinatorial type of its projectivized
tangent cone and by the \bla-equivalence classes of corresponding
singularities of the projectivized tangent cones.
\end{proposition}
\begin{proof}[Proof of Theorem \ref{th:main}]
Recall that a Zariski pair is a pair of projective curves $C_1,C_2
\subset \mathbb P^2$ with the same combinatorial type but such that
$(\mathbb P^2,C_1)$ is not homeomorphic to $(\mathbb P^2,C_2)$. The
first example was discovered by Zariski: a pair of sextic curves
$C_1$ and $C_2$, each with six cusps, distinguished by the fact that
$C_1$ has the cusps lying on a quadric and $C_2$ does not. He
constructed those of type $C_1$ in \cite{Z1} and conjectured type
$C_2$, confirming their existence eight years later in \cite{Z2}. He
distinguished their embedded topology by the fundamental groups of their
complements, but they can also be
distinguished by their Alexander polynomials (Libgober
\cite{libgober}) so they are called \emph{Alexander-Zariski pairs}.
Let $(X_1,0)$ and $(X_2,0)$ be two SISs whose
tangent cones are sextics of types $C_1$ and $C_2$ as above.
According to \cite{Z}, the analytic type
of a cusp is uniquely determined, so its \bla-equivalence class is
determined (Lemma \ref{le:RL implies BLA}). Then by Proposition \ref{thm:outer}, $(X_1,0)$ and $(X_2,0)$
are outer Lipschitz equivalent.
On the other hand, Artal showed that $(X_1,0)$ and $(X_2,0)$ do not
have the same embedded topological type. In fact, he shows
(\cite[Theorem 1.6 (ii)]{A}) that a Zariski pair is distinguished by
its Alexander polynomials if and only if the corresponding
SISs are distinguished by the Jordan block
decompositions of their homological monodromies.
\end{proof}
\section{The inner geometry of a superisolated singularity}
\label{sec:superisolated}
We first recall how the topological type of a SIS is determined by the combinatorial type of its projectivized tangent cone.
We refer to
\cite{ALM} for details.
A SIS $(X,0)\subset (\C^3,0)$ is resolved by
blowing up the origin of $(\C^3,0)$. The exceptional divisor of this
resolution of $(X,0)$ is the projectivized tangent cone $C$ of $(X,0)$
and one obtains the minimal good resolution by blowing up the
singularities of $C$ which are not ordinary double points until one
obtains a normal crossing divisor $C'$. Let $\Gamma$ be the dual graph
of this resolution. Following \cite{BNP}, we call a component of $C'$ which is also a
component of $C$ an \L-curve, and we call any vertex
of $\Gamma$ representing an \L-curve an \L-node.
One can also resolve the singularities of $C$ as a projective
plane curve to obtain the same graph $\Gamma$ except that the
self-intersection numbers of the \L-curves are different (in the
example below the self-intersection number $-9$ becomes $+3$). The
graph $\Gamma$ with these data is equivalent to the combinatorial
type of $C$.
\begin{example} \label{ex:resolution} Consider the SIS
$(X,0)\subset (\C^3,0)$ given by $F(x,y,z)=y^3+xz^2-x^4=0$. Blowing
up the origin of $\C^3$ resolves the singularity: using the chart
$(x,v,w)\mapsto (x,y,z)= (x,xv,xw)$, the equation of the resolved
$X^*$ is $v^3+w^2-x=0$ and the exceptional curve has a cusp
singularity $x=v^3+w^2=0$. Blowing up further leads to the following dual
graph $\Gamma$, the black vertex
being the \L-node.
\begin{center}
\begin{tikzpicture}
\draw[thin] (0,0)--(2,0);
\draw[thin] (1,0)--(1,-1);
\draw[fill=white] (0,0) circle(2pt);
\draw[fill=white] (1,0) circle(2pt);
\draw[fill=white] (2,0) circle(2pt);
\draw[fill=black] (1,-1) circle(2pt);
\node(a)at(0,.3){$-2$};
\node(a)at(1,.3){$-1$};
\node(a)at(2,.3){$-3$};
\node(a)at(.58,-1){$-9$};
\end{tikzpicture}
\end{center}
The self-intersection $-9$ of the \L-curve is computed as follows. Let $E_1,\ldots,E_4$ be the components of the exceptional divisor indexed so that $E_1$ is the \L-curve and $E_2$, $E_3$ and $E_4$ correspond to the string of non \L-nodes indexed from left to right on the graph. Since the tangent cone is reduced with degree $3$, the strict transform $l_1^*$ of a generic linear form $l_1 \colon (X,0) \to (\Bbb C,0)$ consists of three smooth curves transverse to $E_1$. The total transform $l_1$ is given by the divisor:
$$ (l_1) = E_1 + 3 E_2 + 6 E_3 + 2E_4 + l_1^*\,.$$
Since $(l_1)$ is a principal divisor, we have $(l_1).E_1=0$,
which leads to $E_1.E_1 = -9$.
\end{example}
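These multiplicities can be verified by a short computation: with the intersection data read off from the dual graph above, the total transform $(l_1)$ must have zero intersection with every exceptional curve. The following sketch (illustrative only) checks all four equalities at once.
\begin{verbatim}
import numpy as np

# intersection matrix of E_1, ..., E_4 read from the dual graph above
# (E_1 is the L-curve with self-intersection -9, attached to the -1 vertex)
intersection = np.array([[-9,  0,  1,  0],
                         [ 0, -2,  1,  0],
                         [ 1,  1, -1,  1],
                         [ 0,  0,  1, -3]])
mult   = np.array([1, 3, 6, 2])   # coefficients of E_1, ..., E_4 in (l_1)
strict = np.array([3, 0, 0, 0])   # l_1^* meets E_1 transversally in 3 points

print(intersection @ mult + strict)   # [0 0 0 0]:  (l_1).E_i = 0 for all i
\end{verbatim}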
\begin{proof}[Proof of Theorem \ref{thm:inner} (i)]
Let $(X,0) \subset (\mathbb C^3,0)$ be a SIS with equation
$f_d + f_{d+1} = 0$. We set $f=f_d$ and $g=-f_{d+1}$.
Let $\ell \colon \C^3 \to \C^2$ be a generic linear projection for
$(X,0)$, let $\Pi$ be the polar curve of the restriction
$\ell \mid_{X}$ and $\Delta = \ell(\Pi)$ its discriminant curve.
Let $e$ be the blow-up of the origin of $\C^3$ and let $p$ be a singular point
of $e^{-1}(0)\cap X^*$. Without loss of generality, we
can assume $\ell = (x,y)$. We can also choose our coordinates so
that $p = (1,0,0) $ in the chart $(x,v,w)$ given by $(x,v,w)\mapsto
(x,y,z)= (x,xv,xw)$ in the blow-up $e$ (so $p$ corresponds to the
$x$-axis in the tangent cone of $X$). Then $X^*$ has equation
$$ f(1,v,w) - x g(1,v,w) = 0$$
and $g(1,v,w)$ is a unit at $p$ since
$\{g=0\} \cap Sing (f=0)=\emptyset$ in $\Bbb P^2$.
Let $e_0 \colon Y \to \C^2$ be the blow-up of the origin of $\C^2$. We
consider $e_0$ in the chart $(x,v) \mapsto (x,y)=(x,xv)$, we set
$q=(1,0) \in Y$ in this chart, and we denote by
$\tilde{\ell} \colon (X^*,p) \to (Y,q)$ the projection
$(x,v,w) \mapsto (x,v)$. So we have the commutative diagram:
$$
\xymatrix{
(X^*,p)\ar@{->}[r]^e\ar@{->}[d]^{\tilde \ell}&(X,0)\ar@{->}[d]^\ell \\
(Y,q)\ar@{->}[r]^{e_0}& (\C^2,0)\hbox to 0pt{\,.\hss}
}
$$
Now $\Pi = X \cap \{f_z - g_z = 0\}$, so the strict transform $\Pi^*$ of $\Pi$ by $e$ has equations:
$$f_w(1,v,w) - x g_w(1,v,w) =0 \quad\text{and}\quad f(1,v,w) -x g(1,v,w) =0\,,$$
which are also the equations of the polar curve of the projection
$\tilde{\ell} \colon (X^*,p) \to (Y,q)$.
Since $g(1,v,w) \in \C\{v,w\}$ is a unit at $p$, the quotient $h(v,w):= \frac{f(1,v,w)}{g(1,v,w)}$ defines a holomorphic function germ $h \colon (\C^2_{(v,w)},0) \to (\C,0)$. In terms of $h(v,w)$ the above equations for $(\Pi^*,p)$ can be written:
$$h_w(v,w) =0 \quad\text{and}\quad h(v,w) - x =0\,.$$
Consider the isomorphism $ proj \colon (X^*,p) \to (\C^2,0)$ which is the
restriction of the linear projection $(x,v,w)\mapsto (v,w)$. Then
$\Pi^*$ is the inverse image by $proj$ of the polar curve $\Pi'$ of
the morphism $\ell' \colon (\C_{(v,w)}^2,0) \to (\C_{(x,v)}^2,0)$
defined by $(v,w) \mapsto (h(v,w),v)$, i.e., the relative polar curve of the
map germ $(v,w) \mapsto h(v,w)$ for the generic projection $(v,w)
\mapsto v$.
We set
$\Delta' = \ell'(\Pi')$ and $q=(1,0)$ in $\C^2_{(x,v)}$. We then have a commutative diagram:
$$
\xymatrix{
(\C^2,\Pi',0)\ar@{<-}[r]^{proj}\ar@{->}[dr]^{\ell'} & (X^*,\Pi^*,p)\ar@{->}[r]^e\ar@{->}[d]^{\tilde \ell}&(X,\Pi,0)\ar@{->}[d]^\ell \\
& (Y,\Delta',q)\ar@{->}[r]^{e_0}& (\C^2,\Delta, 0)
}
$$
Let $(\Pi_0,0)$ be the part of $(\Pi,0)$ which is tangent to the
$x$-axis (i.e., it corresponds to $p\in e^{-1}(0)$ in our chosen
coordinates) and let $(\Delta_0,0)$ be its image by $\ell$. Let $V$ be
a cone around the $x$-axis in $(\C^3,0)$. As in \cite{BNP}, consider a
carrousel decomposition of $(\ell(V),0)$ with respect to the curve
germ $(\Delta_0, 0)$ such that the $\Delta$-wedges around $\Delta_0$
are D-pieces. We then consider the geometric decomposition of $(V,0)$
into A-, B- and D-pieces obtained by lifting by $\ell$ this
decomposition. Lifting the carrousel decomposition of $\ell(V)$ by
$e_0$ we get a carrousel decomposition of $(Y,q)$ with respect to
$\Delta'$. On the other hand the lifting by $e$ of the geometric
decomposition of $V$ is a geometric decomposition of $(X^*,p)$ which
coincides with the lifting by $\tilde \ell$ of the carrousel
decomposition of $(Y,q)$ just defined.
By the L\^e Swing Lemma \cite[Lemma 2.4.7]{LMW}, the pieces
beyond the first Puiseux exponents of the branches of $\Delta'$ at $q$
lift to pieces in $X^*$ which have trivial topology, i.e., their links
are solid tori. Therefore these are absorbed by the amalgamation
process consisting of amalgamating iteratively any D-piece which is
not a conical piece with the neighbor piece using \cite[Lemma 13.1]{BNP}.
Moreover, since $\Delta'$ is the strict transform of $\Delta$ by
$e_0$, the rate of each piece of the obtained decomposition of $X^*$
equals $s+1$, where $s$ is the first Puiseux exponent of a branch of
$\Delta'$. Let $\Gamma_p$ be the minimal resolution graph of the
curve $h=0$ at $p$. Let us call a {\it node} of $\Gamma_p$ any vertex
having at least three incident edges including the arrows representing
the components of $h$ and the root vertex of $\Gamma_p$ if $h=0$ has
more than one line in its tangent cone. According to
\cite[Th\'eor\`eme C]{LMW}, the rate $q$ equals the polar quotient
$$ \frac{m_{E_i}(l)}{m_{E_i}(h)}$$
where $v_i$ is the corresponding node in $\Gamma_p$ and where $l
\colon (\C_{v,w}^2,p) \to (\C,0)$ is a generic linear form at $p$.
Now, set $\tilde{f}(v,w)=f(1,v,w)$. Since $g(1,v,w)$ is a unit at
$p$, the curves $h=0$ and $\tilde{f}=0$ coincide, so $m_{E_i}(h) =
m_{E_i}(\tilde{f})$. Since the strict transform of $\tilde{f}$
coincides with the germ of the $\cal L$-curves at $p$, $\Gamma_p$ is a
connected component of $\Gamma$ minus its $\cal L$-nodes, with free edges
replaced by arrows. Therefore the polar quotients
$\frac{m_{E_i}(l)}{m_{E_i}(\tilde f)}$, and hence the inner rates of
$(X, 0)$, are computed from $\Gamma$.
\end{proof}
\begin{example} \label{ex:inner geometry} Consider again the
SIS $(X,0)$ of Example \ref{ex:resolution} with equation $
xz^2+y^3-x^4=0$. Its projectivized tangent cone $xz^2+y^3=0$ has a unique singular
point, and the corresponding graph $\Gamma_p$ is the resolution
graph of the cusp $w^2+v^3=0$, i.e., the graph $\Gamma$ of Example
\ref{ex:resolution} with the $\cal L$-node replaced by an arrow. The
multiplicity of $\tilde f$ along the curve $E_3$ corresponding to
the node of $\Gamma_p$ equals $6$ while that of a generic linear
form $(v,w)\mapsto l(v,w)$ equals $2$. We then obtain the polar
quotient $\frac{m_{E_3}(l)}{m_{E_3}(\tilde f)} = 1/3$, which gives
inner rate $1/3 + 1 = 4/3$.
The Lipschitz inner geometry is then completely described (see
\cite[Section 15]{BNP}) by the graph $\Gamma$ completed by labeling
its nodes by the inner rates of the corresponding geometric pieces:
\begin{center}
\begin{tikzpicture}
\draw[thin] (0,0)--(2,0);
\draw[thin] (1,0)--(1,-1);
\draw[fill=white] (0,0) circle(2pt);
\draw[fill=white] (1,0) circle(2pt);
\draw[fill=white] (2,0) circle(2pt);
\draw[fill=black] (1,-1) circle(2pt);
\node(a)at(0,.3){$-2$};
\node(a)at(1,.3){$-1$};
\node(a)at(2,.3){$-3$};
\node(a)at(.58,-1){$-9$};
\node(a)at(1,-1.3){$\mathbf 1$};
\node(a)at(1.4,-0.3){$\mathbf{4/3}$};
\end{tikzpicture}
\end{center}
\end{example}
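The chart computation used in the preceding proof is easy to reproduce symbolically for this example. The following is a minimal Python/SymPy sketch (an added illustration; the splitting $f=xz^2+y^3$, $g=x^4$ of the equation $xz^2+y^3-x^4=0$ follows the convention $f-g$ with $\deg f = d$ and $\deg g = d+1$): it recovers $h$, the equations of $\Pi^*$, and the branch $\Delta'$, whose first Puiseux exponent $1/3$ is consistent with the inner rate $1/3+1=4/3$ computed above.
\begin{verbatim}
# Minimal SymPy sketch for the SIS x z^2 + y^3 - x^4 = 0, in the chart
# (x, v, w) = (x, y/x, z/x) of the blow-up e. Here f = x z^2 + y^3 (degree 3)
# and g = x^4 (degree 4), so that X = {f - g = 0}.
import sympy as sp

x, y, z, v, w = sp.symbols('x y z v w')
f = x*z**2 + y**3
g = x**4

# h(v,w) = f(1,v,w)/g(1,v,w); here g(1,v,w) = 1 is a unit.
h = sp.simplify(f.subs({x: 1, y: v, z: w}) / g.subs({x: 1, y: v, z: w}))
print(h)                 # w**2 + v**3, the cusp of Example ex:resolution

# Strict transform Pi^* of the polar curve: h_w = 0 and h - x = 0.
print(sp.diff(h, w))     # 2*w, so Pi^* = {w = 0, x = v**3}

# Its image Delta' under (v,w) |-> (h(v,w), v) is the branch x = v**3,
# i.e. v = x^(1/3): first Puiseux exponent 1/3, inner rate 1/3 + 1 = 4/3.
print(h.subs(w, 0))      # v**3
\end{verbatim}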
\begin{example} \label{ex:inner geometry2} Consider the SIS $(X,0)$ with equation $(zx^2+y^3)(x^3+zy^2)+z^7= 0$,
which we already considered in \cite[Example 15.2]{BNP} and in
\cite{NP2}. The tangent cone consists of two unicuspidal curves $C$
and $C'$ with $6$ intersection points $p_1, \ldots, p_6$; the germ
$(C \cup C',p_1)$ consists of two transversal cusps, and the
remaining $5$ points are ordinary double points of $C \cup C'$.
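This count can be checked directly in the affine chart $z=1$, where all six points lie; the following minimal SymPy sketch (an added illustration) finds the common cusp point $p_1$ and the five ordinary double points.
\begin{verbatim}
# SymPy check, in the chart z = 1, of the 6 intersection points of the two
# components C = {z x^2 + y^3 = 0} and C' = {x^3 + z y^2 = 0} of the tangent cone.
import sympy as sp

x, y = sp.symbols('x y')
sols = sp.solve([x**2 + y**3, x**3 + y**2], [x, y], dict=True)
# expect 6 solutions: (0,0), where both components have a cusp, and the 5
# ordinary double points with y^5 = -1, x = 1/y
print(len(sols))
for s in sols:
    print(s[x], s[y])
\end{verbatim}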
For each $i=1,\ldots,6$, the tangent cone of $(C \cup C',p_i)$ has
two tangent lines and the quotient $m_{E_{v_0}}(l) / m_{E_{v_0}}(\tilde f)$
at the root vertex $v_0$ of $\Gamma_{p_i}$ is then a polar quotient
in the sense of \cite{LMW}. The root vertex $v_0$ has valency $2$
and it corresponds to a special annular piece in the sense of
\cite{BNP}, with inner rate $m_{E_{v_0}}(l) / m_{E_{v_0}}(\tilde f)
+1$. For $p_2,\ldots,p_6$, we obtain inner rate $1/2 + 1 = 3/2$ for
that special annular piece and for $p_1$, we obtain $1/4 + 1 =
5/4$. The inner rates at the two other nodes of $\Gamma_{p_1}$ both
equal $2/10 + 1 = 6/5$. We have thus recovered the inner geometry:
\begin{center}
\begin{tikzpicture}
\draw[] (-2,0)circle(2pt);
\draw[thin ](-2,0)--(-1,1);
\draw[thin ](0:0)--(-1,1);
\draw[thin ](0:0)--(1,1);
\draw[thin ](1,1)--(2,0);
\draw[] (2,0)circle(2pt);
\draw[thin ](1,1)--(1.5,2.5);
\draw[thin ](-1,1)--(-1.5,2.5);
\draw[ fill] (1.5,2.5)circle(2pt);
\draw[ fill] (-1.5,2.5)circle(2pt);
\draw[thin ](-1.5,2.5)--(1.5,2.5);
\draw[fill=white] (2,0)circle(2pt);
\draw[fill=white] (-2,0)circle(2pt);
\draw[thin] (-1.5,2.5)..controls (-0.5,3) and (0.5,3)..(1.5,2.5);
\draw[thin] (-1.5,2.5)..controls (-0.5,2) and (0.5,2)..(1.5,2.5);
\draw[thin] (-1.5,2.5)..controls (-0.5,3.5) and (0.5,3.5)..(1.5,2.5);
\draw[thin] (-1.5,2.5)..controls (-0.5,1.5) and (0.5,1.5)..(1.5,2.5);
\draw[fill=white] (0,2.5)circle(2pt);
\draw[fill=white] (0,2.86)circle(2pt);
\draw[fill=white] (0,2.12)circle(2pt);
\draw[fill=white] (0,1.75)circle(2pt);
\draw[fill=white] (0,3.25)circle(2pt);
\draw[fill=white] (-1,1)circle(2pt);
\draw[fill=white] (1,1)circle(2pt);
\draw[fill=white] (0,0)circle(2pt);
\node(a)at(-2,-0.35){-2};
\node(a)at(0,-0.35){-5};
\node(a)at(2,-0.35){-2};
\node(a)at(-1,0.65){-1};
\node(a)at(1,0.65){-1};
\node(a)at(-0.3,3.5){-1};
\node(a)at(0.4,3.5){$\mathbf{3/2}$};
\node(a)at(-0.3,1.55){-1};
\node(a)at(0.4,1.55){$\mathbf{3/2}$};
\node(a)at(-1.7,2.8){$\mathbf{1}$};
\node(a)at(-1.7,2.2){-23};
\node(a)at(1.7,2.8){$\mathbf{1}$};
\node(a)at(1.7,2.2){-23};
\node(a)at(-1.5,1){$\mathbf{6/5}$};
\node(a)at(1.5,1){$\mathbf{6/5}$};
\node(a)at(0,0.4){$\mathbf{5/4}$};
\draw[thin,>-stealth,->](1.5,2.5)--+(1.2,0.4);
\draw[thin,>-stealth,->](1.5,2.5)--+(1.3,0);
\draw[thin,>-stealth,->](1.5,2.5)--+(1.2,-0.4);
\draw[thin,>-stealth,->](-1.5,2.5)--+(-1.2,0.4);
\draw[thin,>-stealth,->](-1.5,2.5)--+(-1.3,0);
\draw[thin,>-stealth,->](-1.5,2.5)--+(-1.2,-0.4);
\end{tikzpicture}
\end{center}
This was also computed in \cite{BNP} with the help of Maple, in terms of
the carrousel decomposition of the discriminant curve of a generic
projection of $(X,0)$.
\end{example}
\begin{proof}[Proof of Theorem \ref{thm:inner} (ii)]
Consider the two SISs $(X_1,0)$ and $(X_2,0)$ with equations respectively:
\begin{align*}
X_1:& \qquad F_1(x,y,z)= (y^3-z^2x)(y^3 + z^2x) + (x+y+z)^7=0\\
X_2:& \qquad F_2(x,y,z)=(y^3-z^2x)(y^3 + 2 z^2x) + (x+y+z)^7=0
\end{align*}
We will prove that they have the same inner geometry and different outer geometries.
On the one hand, the projectivized tangent cones of $(X_1,0)$ and
$(X_2,0)$ have the same combinatorial type, so $(X_1,0)$ and $(X_2,0)$
have the same Lipschitz inner geometry (Theorem \ref{thm:inner}). The
tangent cone consists of two unicuspidal components $C$ and $C'$ with
two intersection points: one, $p_1$, at the cusps, with maximal
contact there, and one, $p_2$, at smooth points of $C$ and $C'$
intersecting with contact $3$ there. The inner geometry is given by
the following graph. In particular, the inner rates at the two non-$\cal L$-nodes
are computed from the corresponding polar quotients in the two
graphs $\Gamma_{p_1}$ and $\Gamma_{p_2}$. They both equal $1/6 + 1 = 7/6$.
\begin{center}
\begin{tikzpicture}
\draw[thin] (0,0)--(3,0);
\draw[thin] (-1,-1)--(0,0);
\draw[thin] (-1,1)--(0,0);
\draw[thin] (-1,1)--(-2,0);
\draw[thin] (-1,-1)--(-2,0);
\draw[thin] (-3,-1)--(-2,0);
\draw[thin] (-3,1)--(-2,0);
\draw[fill=white] (0,0) circle(2pt);
\draw[fill=white] (1.5,0) circle(2pt);
\draw[fill=white] (3,0) circle(2pt);
\draw[fill=black] (-1,-1) circle(2pt);
\draw[fill=black] (-1,1) circle(2pt);
\draw[fill=white] (-2,0) circle(2pt);
\draw[fill=white] (-3,-1) circle(2pt);
\draw[fill=white] (-3,1) circle(2pt);
\node(a)at(0,.3){$-1$};
\node(a)at(1.5,.3){$-2$};
\node(a)at(3,.3){$-2$};
\node(a)at(0.4,-0.3){$\mathbf{7/6}$};
\node(a)at(-1.4,-1.3){$-21$};
\node(a)at(-1.4,1.3){$-21$};
\node(a)at(-1,.7){$\mathbf{1}$};
\node(a)at(-1,-.7){$\mathbf{1}$};
\node(a)at(-2.5,0){$\mathbf{7/6}$};
\node(a)at(-2,0.3){$-1$};
\node(a)at(-3.4,-1){$-3$};
\node(a)at(-3.4,1){$-2$};
\end{tikzpicture}
\end{center}
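The contact orders entering this description can be checked symbolically. The following minimal SymPy sketch (an added illustration) computes them for the tangent cone $(y^3-z^2x)(y^3+z^2x)=0$ of $(X_1,0)$; in these coordinates $p_1=(1{:}0{:}0)$ and $p_2=(0{:}0{:}1)$, and replacing the second factor by $y^3+2z^2x$ gives the same orders, consistent with the two tangent cones having the same combinatorial type.
\begin{verbatim}
# SymPy check of the contact orders between C = {y^3 - z^2 x = 0} and
# C' = {y^3 + z^2 x = 0} at their two intersection points.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')

def order_in_t(expr):
    p = sp.Poly(sp.expand(expr), t)
    return min(m[0] for m in p.monoms())

# At p_2 = (0:0:1), chart z = 1: C = {y^3 - x = 0} is smooth, parametrized by
# (x, y) = (t^3, t); substituting into C' = y^3 + x gives contact 3.
print(order_in_t((y**3 + x).subs({x: t**3, y: t})))       # 3

# At p_1 = (1:0:0), chart x = 1: C = {y^3 - z^2 = 0} is a cusp, parametrized by
# (y, z) = (t^2, t^3); substituting into C' = y^3 + z^2 gives intersection
# multiplicity 6, i.e. the two cusps have maximal contact.
print(order_in_t((y**3 + z**2).subs({y: t**2, z: t**3})))  # 6
\end{verbatim}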
On the other hand, let us compute the multiplicities of the three functions $x, y$ and $z$ along each component of the exceptional locus. We obtain the following triples $(m_{E_j}(x), m_{E_j}(y), m_{E_j}(z))$ for both $X_1$ and $X_2$:
\begin{center}
\begin{tikzpicture}
\draw[thin] (0,0)--(3,0);
\draw[thin] (-1,-1)--(0,0);
\draw[thin] (-1,1)--(0,0);
\draw[thin] (-1,1)--(-2,0);
\draw[thin] (-1,-1)--(-2,0);
\draw[thin] (-3,-1)--(-2,0);
\draw[thin] (-3,1)--(-2,0);
\draw[fill=white] (0,0) circle(2pt);
\draw[fill=white] (1.5,0) circle(2pt);
\draw[fill=white] (3,0) circle(2pt);
\draw[fill=black] (-1,-1) circle(2pt);
\draw[fill=black] (-1,1) circle(2pt);
\draw[fill=white] (-2,0) circle(2pt);
\draw[fill=white] (-3,-1) circle(2pt);
\draw[fill=white] (-3,1) circle(2pt);
\node(a)at(3,.3){$(3,3,2)$};
\node(a)at(1.5,.3){$(6,5,4)$};
\node(a)at(0.5,-.3){$(9,7,6)$};
\node(a)at(-1,-1.3){$(1,1,1)$};
\node(a)at(-1,1.3){$(1,1,1)$};
\node(a)at(-3,0){$(12,14,15)$};
\node(a)at(-3.7,-1){$(4,5,5)$};
\node(a)at(-3.7,1){$(6,7,8)$};
\end{tikzpicture}
\end{center}
We compute from this the multiplicities of the partial derivatives $\frac{\partial F_i}{\partial x}$, $\frac{\partial F_i}{\partial y}$ and $\frac{\partial F_i}{\partial z}$ along the curves of the exceptional divisor. We obtain different values for two multiplicities (in bold) for $(X_1,0)$ and $(X_2,0)$, written in that order on the graph:
\begin{center}
\begin{tikzpicture}
\draw[thin] (0,0)--(3,0);
\draw[thin] (-1,-1)--(0,0);
\draw[thin] (-1,1)--(0,0);
\draw[thin] (-1,1)--(-2,0);
\draw[thin] (-1,-1)--(-2,0);
\draw[thin] (-3,-1)--(-2,0);
\draw[thin] (-3,1)--(-2,0);
\draw[fill=white] (0,0) circle(2pt);
\draw[fill=white] (1.5,0) circle(2pt);
\draw[fill=white] (3,0) circle(2pt);
\draw[fill=black] (-1,-1) circle(2pt);
\draw[fill=black] (-1,1) circle(2pt);
\draw[fill=white] (-2,0) circle(2pt);
\draw[fill=white] (-3,-1) circle(2pt);
\draw[fill=white] (-3,1) circle(2pt);
\node(a)at(4,0){$(11,12,12)$};
\node(a)at(1.5,.3){$(22,24,24)$};
\node(a)at(0.8,-.3){$(33,35,36)$};
\node(a)at(-1,-1.3){$(5,5,5)$};
\node(a)at(-1,1.3){$(5,5,5)$};
\node(a)at(-3.5,0){$\small (72,\mathbf{70} \ or\ \mathbf{69},69)$};
\node(a)at(-4,-1){$(24,24,23)$};
\node(a)at(-4,1.3){ $\small (36,35, \mathbf{36}\ or \ \mathbf{35} )$};
\end{tikzpicture}
\end{center}
We compute from this the resolution graph of the family of polar curves $a \frac{\partial F_i}{\partial x} + b \frac{\partial F_i}{\partial y} +c\frac{\partial F_i}{\partial z} =0$. In the $X_1$ case one has to blow up once more to resolve a basepoint. We then get the resolution graph of the polar curve of a generic plane projection of $(X_1,0)$ resp.\ $(X_2,0)$ (the arrows represent the strict transform, the numbers in parentheses are the multiplicities of the function $a \frac{\partial F_i}{\partial x} + b \frac{\partial F_i}{\partial y} +c\frac{\partial F_i}{\partial z}$ for generic $a,b,c$ and the negative numbers are self-intersections):
\begin{center}
\begin{tikzpicture}
\draw[thin] (0,0)--(3,0);
\draw[thin] (-1,-1)--(0,0);
\draw[thin] (-1,1)--(0,0);
\draw[thin] (-1,1)--(-2,0);
\draw[thin] (-1,-1)--(-2,0);
\draw[thin] (-3,-1)--(-2,0);
\draw[thin] (-4,2)--(-2,0);
\draw[thin,>-stealth,->](0,0)--+(0.7,1);
\draw[thin,>-stealth,->](-3,1)--+(-1.3,-0.3);
\draw[thin,>-stealth,->](-1,1)--+(0,1);
\draw[thin,>-stealth,->](-1,1)--+(0.3,0.8);
\draw[thin,>-stealth,->](-1,1)--+(-0.3,0.8);
\draw[thin,>-stealth,->](-1,-1)--+(0,-1);
\draw[thin,>-stealth,->](-1,-1)--+(0.3,-0.8);
\draw[thin,>-stealth,->](-1,-1)--+(-0.3,-0.8);
\draw[fill=white] (0,0) circle(2pt);
\draw[fill=white] (1.5,0) circle(2pt);
\draw[fill=white] (3,0) circle(2pt);
\draw[fill=black] (-1,-1) circle(2pt);
\draw[fill=black] (-1,1) circle(2pt);
\draw[fill=white] (-2,0) circle(2pt);
\draw[fill=white] (-3,-1) circle(2pt);
\draw[fill=white] (-3,1) circle(2pt);
\draw[fill=white] (-4,2) circle(2pt);
\node(a)at(3.5,0){$(11)$};
\node(a)at(1.5,.3){$(22)$};
\node(a)at(0.4,-.3){$(33)$};
\node(a)at(-1.4,-1){$(5)$};
\node(a)at(-1.4,1){$(5)$};
\node(a)at(-2.6,0){$\small (69)$};
\node(a)at(-3.5,-1){$(23)$};
\node(a)at(-3.5,2.2){ $\small (35)$};
\node(a)at(-2.5,1.2){ $\small (105)$};
\node(a)at(-6,2){ $(X_1,0)$};
{\small
\node(a)at(-0.4,0){$-1$};
\node(a)at(1.5,-.3){$-2$};
\node(a)at(3,-.3){$-2$};
\node(a)at(-0.55,-1){$-21$};
\node(a)at(-0.55,1){$-21$};
\node(a)at(-1.6,0){$-2$};
\node(a)at(-3.2,0.7){$-1$};
\node(a)at(-4.4,2){$-3$};
\node(a)at(-3,-1.3){$-3$};
}
\end{tikzpicture}
\end{center}
\begin{center}
\begin{tikzpicture}
\draw[thin] (0,0)--(3,0);
\draw[thin] (-1,-1)--(0,0);
\draw[thin] (-1,1)--(0,0);
\draw[thin] (-1,1)--(-2,0);
\draw[thin] (-1,-1)--(-2,0);
\draw[thin] (-3,-1)--(-2,0);
\draw[thin] (-3,1)--(-2,0);
\draw[thin,>-stealth,->](0,0)--+(0.7,1);
\draw[thin,>-stealth,->](-3,1)--+(-1.3,-0.3);
\draw[thin,>-stealth,->](-2,0)--+(-1.5,0);
\draw[thin,>-stealth,->](-1,1)--+(0,1);
\draw[thin,>-stealth,->](-1,1)--+(0.3,0.8);
\draw[thin,>-stealth,->](-1,1)--+(-0.3,0.8);
\draw[thin,>-stealth,->](-1,-1)--+(0,-1);
\draw[thin,>-stealth,->](-1,-1)--+(0.3,-0.8);
\draw[thin,>-stealth,->](-1,-1)--+(-0.3,-0.8);
\draw[fill=white] (0,0) circle(2pt);
\draw[fill=white] (1.5,0) circle(2pt);
\draw[fill=white] (3,0) circle(2pt);
\draw[fill=black] (-1,-1) circle(2pt);
\draw[fill=black] (-1,1) circle(2pt);
\draw[fill=white] (-2,0) circle(2pt);
\draw[fill=white] (-3,-1) circle(2pt);
\draw[fill=white] (-3,1) circle(2pt);
\node(a)at(3.5,0){$(11)$};
\node(a)at(1.5,.3){$(22)$};
\node(a)at(0.4,-.3){$(33)$};
\node(a)at(-1.4,-1){$(5)$};
\node(a)at(-1.4,1){$(5)$};
\node(a)at(-1.5,0){$\small (69)$};
\node(a)at(-3.5,-1){$(23)$};
\node(a)at(-2.5,1.2){ $\small (35)$};
{\small
\node(a)at(-0.4,0){$-1$};
\node(a)at(1.5,-.3){$-2$};
\node(a)at(3,-.3){$-2$};
\node(a)at(-0.55,-1){$-21$};
\node(a)at(-0.55,1){$-21$};
\node(a)at(-2,-0.4){$-1$};
\node(a)at(-3.2,0.7){$-2$};
\node(a)at(-3,-1.3){$-3$};
}
\node(a)at(-6,1){ $(X_2,0)$};
\end{tikzpicture}
\end{center}
The polar curves of $(X_1,0)$ and $(X_2,0)$ have different Lipschitz geometry since they don't even have the same number of components. Therefore, by \cite[Theorem 1.2 (6)]{NP2}, $(X_1,0)$ and $(X_2,0)$ have different outer Lipschitz geometries.
\end{proof}
\section{The outer geometry of a superisolated
singularity}\label{sec:outer geometry} \begin{proof}[Proof of
Lemma \ref{le:RL implies BLA}] We first re-formulate the
definition of \bla-equivalence. We will use coordinates $(v,w)$
in $\C^2$ and $(x,y,z)$ in $\C^3$. We have functions $h_1(v,w)$
and $h_2(v,w)$ whose zero sets are the curves $(C_1,0)$ and
$(C_2,0)$, a homeomorphism $\psi \colon (\C^2,0) \to (\C^2,0)$ of
germs, a constant $K \geq 1$ and a neighborhood $\cal U$ of the origin
in $\C^2$ such that for all $a,a' \in \cal U$:
\begin{align*}
\frac{1}{K}\, || h_2(\psi(a)) (1,\psi(a)) - h_2(\psi(a'))(1,\psi(a')) ||_{\C^3}
&\leq || h_1(a)(1,a) - h_1(a')(1,a') ||_{\C^3} \\
&\leq K\, || h_2(\psi(a)) (1,\psi(a)) - h_2(\psi(a')) (1,\psi(a')) ||_{\C^3}\,.
\end{align*}
For $i=1,2$ we define $H_i\colon (\C^2,0) \to (\C^3,0)$ by $$
H_i(v,w)=h_i(v,w)(1,v, w)$$ and denote by $(S_i,0)$ the image of
$H_i$ in $(\C^3,0)$. Note that $H_i$ maps $(C_i,0)$ to $0$ and is
otherwise injective. We can thus complete the maps $\psi$, $H_1$
and $H_2$ to a commutative diagram
$$\xymatrix{(\C^2,0)\ar@{->}[r]^{H_1}\ar@{->}[d]^\psi
&(S_1,0)\ar@{->}[d]^{\psi'\hbox to 0pt{\hbox to 3.5cm{}$(\star)$\hss}}\\
(\C^2,0)\ar@{->}[r]^{H_2} &(S_2,0)\\
}
$$
and $\psi'$ is bijective. \Bla-equivalence is now the
statement that $\psi'$ is bilipschitz for the outer geometry.
Now write
$h_1=Uh'_1$ and $H_1 =UH'_1$ where $U=U(v,w)\in \C\{v,w\}$
is a unit. Then we obtain a commutative diagram
$$\xymatrix{(\C^2,0)\ar@{->}[r]^{H'_1}\ar@{=}[d] &(S'_1,0)\ar@{->}[d]^{\eta}\\
(\C^2,0)\ar@{->}[r]^{H_1} &(S_1,0)\\
}$$
where $\eta$ is $(x,y,z)\mapsto U(\frac yx,\frac
zx)(x,y,z)$. The factor $U(\frac yx,\frac zx)=U(v,w)$ has the form
$\alpha_0+\sum_{i+j\ge 1}\alpha_{ij}v^iw^j$ with $\alpha_0\ne 0$,
so if the neighborhood $\cal U$ is small then the factor is close
to $\alpha_0$, and hence $\eta$ is bilipschitz. Thus $\psi'\circ \eta\colon
(S'_1,0)\to (S_2,0)$ is bilipschitz, so we have shown that modifying $h_1$
by a unit does not affect \bla-equivalence. The same holds
for
$h_2$, so \bla-equivalence does not depend on the choice of
defining functions for the curves $(C_1,0)$
and $(C_2,0)$.
It remains to show that analytic equivalence of $(C_1,0)$ and
$(C_2,0)$ implies \bla-equivalence. Analytic equivalence means
that there exists a biholomorphic germ $\psi\colon(\C^2,0)\to
(\C^2,0)$ and a unit $U\in\C\{v,w\}$ such that $Uh_1=h_2\circ
\psi$. We have already dealt with multiplication with a unit, so
we will assume we have $h_1=h_2\circ \psi$. If $\psi$ is a linear
change of coordinates, then we get a diagram as in $(\star)$
above, with $\psi'$ given by the corresponding coordinate change
in the $y,z$ coordinates of $\C^3$, so $\psi'$ is bilipschitz and
we have \bla-equivalence. For general $\psi$ the same is true up to higher
order in $v$ and $w$, so we still get \bla-equivalence.
\end{proof}
\begin{proof}[Proof of Proposition \ref{thm:outer}]
Let $(X_1,0)$ and $(X_2,0)$ be two SISs with equations respectively
$$f_1(x,y,z) - g_1(x,y,z) =0 \hbox{ and } f_2(x,y,z) - g_2(x,y,z) =0,$$
where for $i=1,2$, $f_i$ and $g_i$ are homogeneous polynomials of
degrees $d$ and $d+1$ respectively. We can assume that the projective
line $x=0$ does not contain any singular point of the projectivized
tangent cones $C_1= \{f_1=0\}$ and $C_2 = \{f_2=0\}$. We assume also
that $C_1$ and $C_2$ have the same combinatorial types and that
corresponding singular points of $C_1$ and $C_2$ are \bla-equivalent.
Since the tangent cone of a SIS $(X,0)$ is
reduced, the general hyperplane section of $(X,0)$ consists of
smooth transversal lines. Therefore, adapting the arguments of
\cite[Section 4]{NPP} by taking simply a line as test curve, we
obtain that the inner and outer metrics are Lipschitz equivalent
inside the conical part of $(X,0)$, i.e., outside cones around its
exceptional lines. So we just have to control outer distance
inside conical neighborhoods of the exceptional lines of $(X_1,0)$ and
$(X_2,0)$ whose projective points are corresponding singular points of
$C_1$ and $C_2$.
Let $p_1 \in Sing(C_1)$ and $p_2 \in Sing(C_2)$ be two singular points
in correspondence. After modifying $(X_1,0)$ and $(X_2,0)$ by
analytic isomorphisms, we can assume that $p_i = (1,0,0)$ for
$i=1,2$. We use again the notations of the proof of Theorem
\ref{thm:inner}, and we work in the chart $(x,v,w) = (x,y/x,z/x)$ for
the blow-up $e$.
Set $h_i(v,w)=f_i(1,v,w)/ g_i (1,v,w)$. Then the germs $(X_i^*,p_i)$
have equations $h_i(v,w)-x=0$.
Since $C_1$ and $C_2$ are \bla-equivalent and $h_i=0$ is an equation of
$C_i$, there exists a local homeomorphism $\psi \colon
(\C^2_{(v,w)},0) \to (\C^2_{(v,w)},0)$, a constant $K \geq 1$ and a
neighborhood $U$ of the origin in $\C^2$ such that for all
$(v,w),(v',w') \in U$:
\begin{align*}
\frac{1}{K}\, || h_2(\psi(v,w)) (1,\psi(v,w)) - h_2(\psi(v',w')) (1,\psi(v',w')) ||_{\C^3}
&\leq || h_1(v,w)(1,v,w) - h_1(v',w')(1,v',w') ||_{\C^3} \tag{$\ast$}\\
&\leq K\, || h_2(\psi(v,w)) (1,\psi(v,w)) - h_2(\psi(v',w')) (1,\psi(v',w')) ||_{\C^3}\,.
\end{align*}
Locally,
$$X_1^* = \{ x=h_1(v,w)\}
\quad\text{and}\quad
X_2^* = \{ x = h_2(\psi(v,w)) \}\,.$$
As in the proof of Theorem \ref{thm:inner} we consider
the isomorphisms $\operatorname{proj}_i\colon (X^*_i,p_i)\to (\C^2,0)$ for $i=1,2$, the
restrictions of the linear projections $(x,v,w)\mapsto (v,w)$. The
composition $\operatorname{proj}_2^{-1}\circ\psi\circ\operatorname{proj}_1$ gives a local
homeomorphism $\psi' \colon (W_1,p_1) \to (W_2,p_2)$, where $W_i$ is
an open neighborhood of $p_i$ in $X_i^*$.
Then, $\psi'$ induces a local homeomorphism $\psi'' \colon e(W_1) \to
e(W_2)$ such that $\psi'' \circ e = e \circ \psi'$. Notice that each
$e(W_i)$ contains the intersection of $X_i$ with a cone in $(\C^3,0)$
around the exceptional line represented by $p_i$.
Consider a pair of points $q=(x,xv,xw)$ and $q'=(x',x'v',x'w')$ in $e(W_1)$. By definition of $\psi''$, we have
$$||q-q'|| = || h_1(v,w)(1,v,w) - h_1(v',w')(1,v',w') ||_{\C^3}, $$
$$||\psi''(q)-\psi''(q')|| =|| h_2(\psi(v,w)) (1,\psi(v,w)) - h_2(\psi(v',w')) (1,\psi(v',w')) ||_{\C^3}.$$
Then $(\ast)$ implies that the
ratio $\frac{||\psi''(q)-\psi''(q')||}{||q-q'||}$ is bounded above and below in a neighborhood of the origin.
Now let $\widetilde W_i$ be the union of the open sets $W_i$ over all
pairs of corresponding singular points of $C_1$ and $C_2$,
and let $\psi' \colon \widetilde W_1 \to
\widetilde W_2$ be the homeomorphism whose restriction to each
such $W_1$ is the corresponding local $\psi'$. Then $\psi''\colon e(\widetilde W_1)\to e(\widetilde W_2)$ is the outer bilipschitz homeomorphism induced by
$\psi'$ and we must extend $\psi''$ over all of $X_1$.
Let $B$ be a Milnor ball for $X_1$ and $X_2$ around $0$. We set
$\widetilde{Y_i} = \overline{e^{-1}(B \cap X_i)\setminus
\widetilde{W_i}}$. For $i=1,2$ we can adjust $\widetilde{W_i}$ so
that $\widetilde{Y_i}$ is a $D^2$-bundle over the exceptional
divisor $C_i$ minus its intersection with $\widetilde{W_i}$, i.e.,
over $\widetilde C_i:=\overline{C_i \setminus \widetilde{W_i}}$, and
whose fibers are curvettes of $C_i$. We want to extend $\psi''\colon e(\widetilde W_1)\to e(\widetilde W_2)$ to a bilipschitz map over
the conical regions $e(\widetilde Y_1)$ and $e(\widetilde Y_2)$. For
this it suffices to extend $\psi'$ by a bundle isomorphism
$\widetilde Y_1 \to \widetilde Y_2$, since the resulting
$e(\widetilde Y_1) \to e(\widetilde Y_2)$ is then bilipschitz.
$(X_1,0)$ and $(X_2,0)$ are inner bilipschitz equivalent by Theorem
\ref{thm:inner}, so by \cite[1.9 (2)]{BNP} the image by $\psi''$
of the foliation of $e(\widetilde W_1)$ by Milnor fibers of a
generic linear form $\ell_1$ has the homotopy class of the
corresponding foliation by fibers of $\ell_2$ in $e(\widetilde
W_2)$. Since the projectivized tangent cones $C_1$ and $C_2$ are
reduced, a fiber of $\ell_i\circ e$ intersects each $D^2$-fiber over
$\partial \widetilde C_i$ in one point. This gives a trivialization
of the $D^2$-bundle over each $\partial \widetilde C_i$ and
therefore determines a relative Chern class for each component of
the bundle $\widetilde Y_i$ over $\widetilde C_i$. The map $\psi'$
restricted to the bundle over $\partial \widetilde C_1$ extends to
bundle isomorphisms between the components of $\widetilde Y_1$ and
$\widetilde Y_2$ if and only if their relative Chern classes
agree. But for $i=1,2$ these relative Chern classes are given by the
negative of the number of intersection points of $\ell_i^*$ with
each component of $C_i$ (i.e., the degrees of these components of
$C_i$), and these degrees agree since $C_1$ and $C_2$ are
combinatorially equivalent.
We have now constructed a map $\psi''\colon
(X_1,0)\to (X_2,0)$ which is outer bilipschitz if we restrict to distance
between pairs of points $x,y$ which are either both in a single
component of $e(\widetilde W_1)$ or both in the conical region
$e(\widetilde Y_1)$. Let $N\widetilde Y_i$ be a larger version of
the bundle $\widetilde Y_i$, so $e(N\widetilde Y_i)$ is a conical
neighborhood of $e(\widetilde Y_i)$. We still have an outer
bilipschitz constant for $\psi''$ for any $x$ and $y$ which are both in a
single component of $e(\widetilde W_1)$ or both in the conical region
$e(N\widetilde Y_1)$. Otherwise, either one of $x,y$ is in $e(\widetilde
W_1)\setminus e(N\widetilde Y_1)$ and the other in $e(\widetilde Y_1)$
or $x$ and $y$ are in different components of $e(\widetilde W_1)$. The ratio of inner to outer distance is clearly bounded for such point pairs, so since $\psi''$ is inner bilipschitz, it is outer bilipschitz.
\end{proof}
| {
"timestamp": "2015-11-26T02:01:09",
"yymm": "1506",
"arxiv_id": "1506.03841",
"language": "en",
"url": "https://arxiv.org/abs/1506.03841",
"abstract": "We investigate the relationships between the Lipschitz outer geometry and the embedded topological type of a hypersurface germ in $(\\mathbb C^n,0)$. It is well known that the Lipschitz outer geometry of a complex plane curve germ determines and is determined by its embedded topological type. We prove that this does not remain true in higher dimensions. Namely, we give two normal hypersurface germs $(X_1,0)$ and $(X_2,0)$ in $(\\mathbb C^3,0)$ having the same outer Lipschitz geometry and different embedded topological types. Our pair consist of two superisolated singularities whose tangent cones form an Alexander-Zariski pair having only cusp-singularities. Our result is based on a description of the Lipschitz outer geometry of a superisolated singularity. We also prove that the Lipschitz inner geometry of a superisolated singularity is completely determined by its (non embedded) topological type, or equivalently by the combinatorial type of its tangent cone.",
"subjects": "Algebraic Geometry (math.AG)",
"title": "Lipschitz geometry does not determine embedded topological type",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109485172559,
"lm_q2_score": 0.7185943925708561,
"lm_q1q2_score": 0.7076796253468862
} |
https://arxiv.org/abs/1309.2626 | Some new maximum VC classes | Set systems of finite VC dimension are frequently used in applications relating to machine learning theory and statistics. Two simple types of VC classes which have been widely studied are the maximum classes (those which are extremal with respect to Sauer's lemma) and so-called Dudley classes, which arise as sets of positivity for linearly parameterized functions. These two types of VC class were related by Floyd, who gave sufficient conditions for when a Dudley class is maximum. It is widely known that Floyd's condition applies to positive Euclidean half-spaces and certain other classes, such as sets of positivity for univariate polynomials.In this paper we show that Floyd's lemma applies to a wider class of linearly parameterized functions than has been formally recognized to date. In particular we show that, modulo some minor technicalities, the sets of positivity for any linear combination of real analytic functions is maximum on points in general position. This includes sets of positivity for multivariate polynomials as a special case. | \section{Introduction}
Maximum set systems are in some sense the most perfect systems of finite VC dimension. They arise most notably from the systems given by ``positive" half-spaces in Euclidean space. They also arise as the dual set system associated with a simple arrangement of hyperplanes. Their desirable features include a certain kind of recursive structure which allows for, among other things, the existence of so-called sample compression schemes, and as such they are central to most approaches to proving the long outstanding sample compression conjecture \cite{BL98,Fl89,FlWa95,BIP,KW}. Some further uses of maximum set systems exist in machine learning and model theory \cite{LBS, GuHi11,J11}.
In what follows we first provide the definitions for the basic notions of interest, including set systems, VC dimension, the maximum property and linearly parameterized set systems. We then go on to establish our results in the subsequent section.
Our results relate to two criteria given by Floyd which are sufficient for a linearly parameterized set system to have the maximum property. While several specific applications of Floyd's theorem have been given, other powerful applications seem to have been overlooked. In particular, there seems to be no mention in the literature that Floyd's result applies to general multivariate (rather than univariate) polynomials. More generally we note the important fact that any linear combination of analytic functions satisfies Floyd's criteria.
\section{Basic definitions}
Let $X$ be a set and $\mathcal{P}(X)$ its power set. We call any $\mathcal{C} \subseteq \mathcal{P}(X)$ a \textit{set system} on $X$. For any $X_0 \subseteq X$, we let $\mathcal{C}\vert_{X_0}$ denote $\{C \cap X_0 : C \in \mathcal{C}\}$. We say that $\mathcal{C}$ \textit{shatters} $X_0 \subseteq X$ if $\mathcal{C}\vert_{X_0} = \mathcal{P}(X_0)$.
The \textit{Vapnik-Chervonenkis (VC) dimension} \cite{VaCh71} of $\mathcal{C}$, when $\mathcal{C}$ is non-empty, is defined as
$$\text{VC}(\mathcal{C}) = \text{sup}\{|X_0| : X_0 \subseteq X \, \text{is shattered by } \mathcal{C}\}.$$
When $\mathcal{C} = \emptyset$ we will use the convention that VC$(\mathcal{C}) = -1$.
If $\text{VC}(\mathcal{C})$ is finite then $\mathcal{C}$ is said to be a \textit{VC-class}. For natural numbers $n$ and $k$, define
\[
{n \choose \leq k} = \begin{cases} \sum_{i=0}^k {n \choose i} &\mbox{if } n \geq k \\
2^n & \mbox{if } n<k. \end{cases}
\]
A key combinatorial fact relating to VC classes is Sauer's lemma \cite{Sa72,Sh72}.
\begin{lemma}
Let $\alpha = \text{VC}(\mathcal{C})$. Then for any finite $X_0 \subseteq X$
$$ |\mathcal{C}\vert_{X_0}| \leq {|X_0| \choose \leq \alpha}.$$
\end{lemma}
We say that $\mathcal{C}$ is \textit{maximum} \cite{We87} of VC dimension $\alpha$ if for any finite $X_0 \subseteq X$
$$ |\mathcal{C}\vert_{X_0}| = {|X_0| \choose \leq \alpha}.$$
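As a toy illustration of this definition (not one of the classes studied below), the class of all subsets of size at most $\alpha$ of a set $X$ is maximum of VC dimension $\alpha$; the following minimal Python sketch checks the defining equality for $\alpha = 2$ and $X=\{0,\dots,5\}$.
\begin{verbatim}
# Python check that C = {A : |A| <= 2} is maximum of VC dimension 2 on X = {0,...,5}.
from itertools import combinations
from math import comb

X = set(range(6))
alpha = 2
C = [frozenset(A) for r in range(alpha + 1) for A in combinations(X, r)]

def restriction(C, X0):
    return {c & X0 for c in C}

for m in range(len(X) + 1):
    X0 = frozenset(range(m))
    bound = sum(comb(m, i) for i in range(min(m, alpha) + 1))   # the quantity "m choose <= alpha"
    print(m, len(restriction(C, X0)) == bound)                  # True for every m
\end{verbatim}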
Many set systems arise naturally as the family of sets defined by a parameterized formula in a mathematical structure. For instance, let $X$ be a set, $A$ a parameter set, and $f:X\times A \rightarrow \mathbb{R}$ a real-valued function. We use the notation $f_a:X \rightarrow \mathbb{R}$ to represent the function defined by $x \mapsto f(x,a)$. Let $pos(f_a) = \{x \in X: f_a(x) > 0 \}$ and $Pos(f) = \{pos(f_a) : a \in A\}$. Then $Pos(f)$ is a set system on $X$ and has a well-defined VC dimension.
An interesting case occurs when $f$ parameterizes a vector space of real-valued functions. Specifically, suppose that $f_i:X \to \mathbb{R}$ for $i=1,2,\ldots,n$ are linearly independent real-valued functions, and $f_0:X \rightarrow \mathbb{R}$ is a real-valued function. Let $\mathcal{F} = \{a_1f_1(x) + a_2f_2(x) + \cdots +a_nf_n(x): a_1,\ldots,a_n \in \mathbb{R}\}$, and define $f_0(x) - \mathcal{F}$ to mean $\{f_0(x) - f(x): f \in \mathcal{F}\}.$ Then $\mathcal{F}$ is a real vector space, and $f_0(x) - \mathcal{F}$ is an affine real vector space. We will use $Pos(f_0 - \mathcal{F})$ to denote $\{pos(f_0-f): f \in \mathcal{F}\}$. Set systems of the form $Pos(f_0 - \mathcal{F})$ have been called \textit{Dudley classes} \cite{BL98}.
The following theorem is due to Dudley \cite{DW,Du99}. Cover proved a similar (non-affine) result in \cite{Co65}.
\begin{theorem} \label{TTTTT}
If $\mathcal{F}$ is an $n$-dimensional real vector space of real-valued functions defined on $X$, and $f_0:X \rightarrow \mathbb{R}$, then
$\text{VC}(Pos(f_0-\mathcal{F})) = n$.
\end{theorem}
Dudley classes include some natural set systems such as balls in Euclidean space, halfspaces in Euclidean space, and sets of positivity for polynomials, for which the coefficients are regarded as parameters. The following example is due to Dudley \cite{Du79}.
\\
{\bf{Example:}} We will show that balls in Euclidean 2-space (disks) form a Dudley class. The scheme of the example can be generalized to higher dimensions. Define $f_0(x,y) = -x^2 -y^2$ and $f(x,y) = a_3y+a_2x+a_1$. Then $f_0 - f \in f_0 - \mathcal{F}$ where $\mathcal{F} = \{a_3y+a_2x+a_1:a_1,a_2,a_3 \in \mathbb{R}\}$. Note that $pos(f_0-f)$ describes a disk with center $(\frac{-a_2}{2}, \frac{-a_3}{2})$ and radius $\sqrt{(\frac{a_2}{2})^2+ (\frac{a_3}{2})^2-a_1}$.\footnote{When $(\frac{a_2}{2})^2+ (\frac{a_3}{2})^2-a_1 < 0$, the radius is not defined; in this case $f_0-f$ is everywhere negative and $pos(f_0-f)$ is the empty set. Including this degenerate case does not affect the VC dimension, because the empty set can always be approximated by a sufficiently small disk.} Thus $Pos(f_0 - \mathcal{F})$ is the set system of all disks in the plane. Since it is a $3$-dimensional Dudley class, we can conclude from Theorem \ref{TTTTT} that the VC dimension of the set system of disks in the plane is 3. \qed
\\
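The completing-the-square identity behind the example can be verified symbolically; the following is a minimal SymPy sketch (an added check, not needed for the argument).
\begin{verbatim}
# SymPy check that f0 - f = -(x + a2/2)^2 - (y + a3/2)^2 + (a2/2)^2 + (a3/2)^2 - a1,
# so that pos(f0 - f) is the open disk with the center and radius stated above.
import sympy as sp

x, y, a1, a2, a3 = sp.symbols('x y a1 a2 a3')
f0 = -x**2 - y**2
f = a3*y + a2*x + a1
rhs = -(x + a2/2)**2 - (y + a3/2)**2 + (a2/2)**2 + (a3/2)**2 - a1
print(sp.simplify((f0 - f) - rhs))   # 0
\end{verbatim}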
The main link between maximum set systems and Dudley classes is due to Floyd \cite{Fl89} (Theorem 8.2).
\begin{lemma}
Let $\mathcal{F}$ be a vector space of real-valued functions on a set $X$, with $\dim(\mathcal{F})=n$, and let $f_0(x)$ be a function on $X$. Suppose further that
\begin{enumerate}
\item For any $A \subseteq X$ with $|A| = n$, the dimension of $\mathcal{F}$ restricted to $A$ is $n$.
\item For any $f \in \mathcal{F}$, there are at most $n$ zeros of $f_0 - f$ in $X$.
\end{enumerate}
Then $Pos(f_0 - \mathcal{F})$ is maximum of VC dimension $n$ on $X$.
\end{lemma}
The assumption (1) above is (by Dudley's theorem) equivalent to the requirement that $Pos(f_0 - \mathcal{F})$ shatters every set of size $n$ in $X$, which is a necessary condition for the maximum property. This will be satisfied for an ordinary univariate polynomial $y=a_0 + a_1x + \cdots + a_nx^n$ if $X$ projects 1-1 onto the $x$-axis.
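In the univariate polynomial case just mentioned, the determinant appearing in condition (1) is the Vandermonde determinant in the $x$-coordinates of the chosen points, which is non-zero precisely when these coordinates are distinct. A minimal SymPy sketch (an added illustration with four points and the monomial basis $1,x,x^2,x^3$):
\begin{verbatim}
# SymPy: for F spanned by 1, x, x^2, x^3, the determinant in condition (1)
# is the Vandermonde determinant, non-zero as soon as the points are distinct.
import sympy as sp

n = 4
xs = sp.symbols('x1:5')    # x1, x2, x3, x4
M = sp.Matrix([[xi**j for j in range(n)] for xi in xs])
print(sp.factor(M.det()))  # a product of the differences (x_i - x_j), up to sign
\end{verbatim}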
The assumption (2) above requires that $f_0 \not \in \mathcal{F}$, since otherwise the space $f_0 - \mathcal{F}$ includes the constantly zero function. If assumption (2) is not respected, Euclidean halfspaces provide a counter-example to the lemma, as observed in \cite{BL98}.
We now proceed to give arguably more natural criteria which guarantee that a linear system of real-valued functions satisfies Floyd's conditions. For instance, if $f_0,\ldots,f_n$ are linearly independent analytic functions and $X \subset \mathbb{R}^k$ is in general position, then with $\mathcal{F} = span \langle f_1,\ldots,f_n \rangle$, Floyd's lemma applies to $f_0 - \mathcal{F}$.
This gives examples of maximum families which have not been given in the literature to date. Some examples are given at the end of the next section.
\section{Results}
In this section we will introduce topological and analytic conditions on $\mathcal{F}$ and $X$ which are sufficient to guarantee that $Pos(f_0 - \mathcal{F})$ is a maximum set system when restricted to subsets of $X$ which are in general position (Theorem \ref{T:dskfjh}).
The basic strategy for proving Theorem \ref{T:dskfjh} is to associate subsets of $X$ of size $N$ with elements of $X^N$, and observe that the elements not satisfying Floyd's criteria constitute a thin part of $X^N$.
The elements of $X^N$ on which Floyd's conditions fail will be seen to lie on the zero sets of certain functions arising as determinants of matrices. Establishing that these determinants do indeed have thin zero sets is the aim of Lemma \ref{L:kjh}.
When $\mathcal{F}$ consists of analytic functions (which are defined before Proposition \ref{P:kdfh}) the elements of $X^N$ not satisfying Floyd's criteria will actually have Lebesgue measure zero. This implies that if a finite $X_0 \subseteq X$ is selected according to one of several common probability distributions, including the uniform and Gaussian distributions, then $Pos(f_0 - \mathcal{F})$ will almost surely be maximum when restricted to $X_0$ (see Corollary \ref{C:dskfjh}).
\subsection{}
Let $\mathcal{F}$ be an $n$-dimensional vector space of real-valued functions on a topological space $X$. We will say that $\mathcal{F}$ is \textit{admissible} if for any $f\in \mathcal{F}$,
\begin{enumerate}
\item $f$ is continuous
\item If $f^{-1}(0)$ has non-empty interior, then $f$ is constantly zero.
\end{enumerate}
Note that any subspace of an admissible $\mathcal{F}$ is admissible.
Equip $X^n = \underbrace{X \times X \times \cdots \times X}_{n\, \text{times}}$ with the product topology.
\begin{lemma}\label{L:kjh}
Suppose $\mathcal{F}$ is admissible and $f_1(x),\ldots,f_n(x)$ is a basis for $\mathcal{F}$. Let $F:X^n \to \mathbb{R}$ be given by
\[F(x_1,\ldots,x_n) = det
\begin{pmatrix}
f_{1}({x}_1) & f_{2}({x}_1) & \cdots & f_{n}({x}_1) \\
f_{1}({x}_2) & f_{2}({x}_2) & \cdots & f_{n}({x}_2) \\
\vdots & \vdots & \ddots & \vdots \\
f_{1}({x}_n) & f_{2}({x}_n) & \cdots & f_{n}({x}_n)
\end{pmatrix}
\]
Then $F^{-1}(0) \subseteq X^n$ has empty interior.
\end{lemma}
\begin{proof}
The argument is by induction on $n$. If $n=1$, the lemma follows from the assumptions on $\mathcal{F}$.
Suppose the lemma is known to hold for $n-1$ and consider $F(x_1,\ldots,x_n)$. Then
\[F = f_1(x_1) \begin{vmatrix}
f_{2}({x}_2) & \cdots & f_{n}({x}_2) \\
\vdots & \ddots & \vdots \\
f_{2}({x}_n) & \cdots & f_{n}({x}_n)
\end{vmatrix} + \cdots + f_n(x_1)(-1)^{1+n} \begin{vmatrix}
f_{1}({x}_2) & \cdots & f_{n-1}({x}_2) \\
\vdots & \ddots & \vdots \\
f_{1}({x}_n) & \cdots & f_{n-1}({x}_n)
\end{vmatrix}
\]
where the vertical bars denote the determinant.
Suppose $U \subset F^{-1}(0)$ is open. Assume, by way of contradiction, that $U$ is nonempty. Let $V$ be the projection of $U$ onto $x_2,\ldots,x_n$. By inductive hypothesis, there is some $(a_2,\ldots,a_n) \in V$ such that
\[
\begin{vmatrix}
f_{2}({a}_2) & \cdots & f_{n}({a}_2) \\
\vdots & \ddots & \vdots \\
f_{2}({a}_n) & \cdots & f_{n}({a}_n)
\end{vmatrix} \neq 0
\]
This gives
$$F(x_1,a_2,\ldots,a_n) = c_1 f_1(x_1) + \cdots + c_n f_n(x_1)$$
for real numbers $c_1,\ldots,c_n$, corresponding to the subdeterminants, and $c_1 \neq 0$. Define $U_{a_2,\ldots,a_n} = \{a \in X : (a,a_2,\ldots,a_n) \in U\}$. Then $U_{a_2,\ldots,a_n}$ is non-empty and open, and on this open set $F(x_1,a_2,\ldots,a_n) = 0$. But $F(x_1,a_2,\ldots,a_n)$, regarded as a function of $x_1$, belongs to $\mathcal{F}$, and therefore, by condition (2) of admissibility, $F(x_1,a_2,\ldots,a_n) = 0$ everywhere. This contradicts the linear independence of $f_1,\ldots,f_n$, because $c_1 \neq 0$. Thus $F^{-1}(0)$ has empty interior.
\end{proof}
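For a concrete instance of Lemma \ref{L:kjh}, take $X = \mathbb{R}^2$ with the basis $1, v, w$: the determinant $F$ is then the usual collinearity determinant, and its zero set, the set of collinear triples, indeed has empty interior in $(\mathbb{R}^2)^3$. A minimal SymPy sketch (an added illustration):
\begin{verbatim}
# SymPy illustration of Lemma L:kjh with the basis 1, v, w on R^2: F vanishes
# exactly on collinear triples of points.
import sympy as sp

v1, w1, v2, w2, v3, w3 = sp.symbols('v1 w1 v2 w2 v3 w3')
F = sp.Matrix([[1, v1, w1], [1, v2, w2], [1, v3, w3]]).det()
print(sp.expand(F))
# A generic (non-collinear) triple gives a non-zero value:
print(F.subs({v1: 0, w1: 0, v2: 1, w2: 0, v3: 0, w3: 1}))   # 1
\end{verbatim}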
Proposition \ref{P:kdfh}, below, appears in \cite{Fe96} on p. 240. The statement given there is for the more general context of Banach spaces. The proof uses the technique of approximate differentiation; we will give a more elementary argument.
Recall that an infinitely differentiable function $f: \mathbb{R}^n \rightarrow \mathbb{R}$ is \textit{analytic} if for every $x$ in the domain of $f$ there is an open set $U$ with $x\in U$ such that $f$ is equal to its Taylor series expansion on $U$.
We will use the fact that if $f: \mathbb{R} \rightarrow \mathbb{R}$ is analytic and not identically zero, then its zeros form a countable set \cite{KP02}.
\begin{proposition} \label{P:kdfh}
Let $f:\mathbb{R}^k \rightarrow \mathbb{R}$ be analytic. Suppose $f$ is not constantly zero and let $A= f^{-1}(0)$. Then $\lambda(A)=0$ where $\lambda$ is Lebesgue measure.
\end{proposition}
\begin{proof}
The proof is by induction on $k$. Suppose $k=1$. Then $A$ is countable and therefore $\lambda(A)= 0$. If $k>1$ then let $\chi_A$ be the indicator function for $A$.
Define $A_r = A \cap B_r$ where $B_r$ is a $k$-ball of radius $r$ centered at the origin. Then the functions $\{\chi_{A_n} : n \in \mathbb{N}\}$ converge monotonically to $\chi_A$. By the monotone convergence theorem \cite{Ro88},
$\lim_{n \to \infty} \int \chi_{A_n} = \int \chi_A$. Thus it suffices to show that $\int \chi_{A_n} = 0$ for all $n$. By replacing $A$ with $A_n$ if necessary, we may assume without loss that $\lambda(A) < \infty$.
Since $\chi_A$ takes only non-negative values, $\int_{\mathbb{R}^k} |\chi_A| \,d(x_1,\ldots,x_k) = \int_{\mathbb{R}^k} \chi_A \,d(x_1,\ldots,x_k) = \lambda(A) < \infty$. Thus $\chi_A$ satisfies the conditions of Fubini's theorem \cite{Fe96}. By Fubini's theorem we may evaluate $\int_{\mathbb{R}^k} \chi_A \,d(x_1,\ldots,x_k)$ by iterated integration.
From iterated integration and induction it is seen that $\int_{\mathbb{R}^k} \chi_A \,d(x_1,\ldots,x_k) = 0$. This implies that $\lambda(A) = 0$.
\end{proof}
\begin{corollary}\label{P:dkjh}
If $\mathcal{F}$ is a real vector space of real-valued functions defined on $\mathbb{R}^k$ and $\mathcal{F}$ has a basis consisting of real analytic functions then $\mathcal{F}$ is admissible.
\end{corollary}
\begin{proof}
Every element of $\mathcal{F}$ is a linear combination of real analytic functions and is therefore analytic, hence continuous. If such an element is not constantly zero, then by Proposition \ref{P:kdfh} its zero set has Lebesgue measure zero, and a set of Lebesgue measure zero has empty interior.
\end{proof}
\begin{corollary} \label{C:fjkh}
Suppose that $\mathcal{F}$ is a real vector space of real-valued functions defined on $\mathbb{R}^k$ for some $k \in \mathbb{N}$ and $\mathcal{F}$ has a basis of real analytic functions. Then $F^{-1}(0)$ has Lebesgue measure zero, where $F$ is as in Lemma \ref{L:kjh}.
\end{corollary}
\begin{proof}
Note that $F$ is analytic and not constantly zero, and therefore Proposition \ref{P:kdfh} applies.
\end{proof}
We will call a non-empty topological space $X$ a \textit{Baire space} if any countable union of closed sets with empty interior has empty interior. For natural numbers $n$ and $N$, by $[N]^n$ we mean the subsets of $\{1,\ldots,N\}$ of cardinality $n$. We will abuse notation slightly by writing $\langle i_1,\ldots,i_n \rangle \in [N]^n$ to mean that $\{i_1,\ldots,i_n\} \in [N]^n$ and $i_1<\cdots <i_n$. For an ordered set $q \in X^N$, we regard $q$ as the function with domain $[N]$ and codomain $X$ defined by $i \mapsto q_i$. Thus by $range(q)$ we mean the elements of $X$ occurring in the ordered set $q$.
\begin{theorem}\label{T:dskfjh}
Suppose $f_0,f_1,\ldots,f_n$ are linearly independent real-valued functions defined on an infinite topological space $X$ with the property that for every $N \in \mathbb{N}$, $X^N$ is Baire in the product topology. Put $\mathcal{F} = span \langle f_0,f_1,\ldots,f_n \rangle$ and $\mathcal{C} = Pos(f_0 - span\langle f_1,\ldots,f_n\rangle )$. If $\mathcal{F}$ is admissible, then for every $N > n$ there is $X_0 \subseteq X$ with $|X_0|=N$ such that $\mathcal{C} \vert_{X_0}$ is maximum of VC dimension $n$.
\end{theorem}
\begin{proof}
Let $N > n$ be given, and let $x_1,\ldots,x_N$ be variables ranging over $X$. We can express the statement that $x_1,\ldots,x_N$ satisfy conditions (1) and (2) of Floyd's lemma using determinants.
For any $B = \langle i_1,\ldots,i_n \rangle \in [N]^n$, let $F_B$ denote the function
\[
F_B(x_{1},\ldots,x_{N}) = det
\begin{pmatrix}
f_{1}({x}_{i_1}) & f_{2}({x}_{i_1}) & \cdots & f_{n}({x}_{i_1}) \\
f_{1}({x}_{i_2}) & f_{2}({x}_{i_2}) & \cdots & f_{n}({x}_{i_2}) \\
\vdots & \vdots & \ddots & \vdots \\
f_{1}({x}_{i_n}) & f_{2}({x}_{i_n}) & \cdots & f_{n}({x}_{i_n})
\end{pmatrix}.
\]
Note that $F_B$ ignores variables not in $B$.
Condition (1) will be true if for every $B \in [N]^n$, $F_B(x_1,\ldots,x_N) \neq 0$. Note that for each choice of $B \in [N]^n$, $F_B^{-1}(0)$ has empty interior as a subset of $X^N$, as a consequence of Lemma \ref{L:kjh}.
Condition (2) of Floyd's lemma will be satisfied if the system given by
\[
\begin{pmatrix}
f_{1}({x}_{i_1}) & f_{2}({x}_{i_1}) & \cdots & f_{n}({x}_{i_1}) \\
f_{1}({x}_{i_2}) & f_{2}({x}_{i_2}) & \cdots & f_{n}({x}_{i_2}) \\
\vdots & \vdots & \ddots & \vdots \\
f_{1}({x}_{i_{n+1}}) & f_{2}({x}_{i_{n+1}}) & \cdots & f_{n}({x}_{i_{n+1}})
\end{pmatrix}
\begin{pmatrix}
w_1 \\ w_2 \\ \vdots \\ w_n
\end{pmatrix}
=
\begin{pmatrix}
f_0({x}_{i_1}) \\ f_0({x}_{i_2}) \\ \vdots \\ f_0({x}_{i_{n+1}})
\end{pmatrix}
\]
is inconsistent for every choice of $x_{i_1},\ldots,x_{i_{n+1}}$ from $x_1,\ldots,x_N$.
If we define, for every $B = \langle i_1,\ldots,i_{n+1} \rangle \in [N]^{n+1}$,
\[
G_B(x_{1},\ldots,x_{N}) = det
\begin{pmatrix}
f_{1}({x}_{i_1}) & f_{2}({x}_{i_1}) & \cdots & f_{n}({x}_{i_1}) & f_0({x}_{i_1}) \\
f_{1}({x}_{i_2}) & f_{2}({x}_{i_2}) & \cdots & f_{n}({x}_{i_2}) & f_0({x}_{i_2}) \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
f_{1}({x}_{i_{n+1}}) & f_{2}({x}_{i_{n+1}}) & \cdots & f_{n}({x}_{i_{n+1}}) &f_0({x}_{i_{n+1}})
\end{pmatrix}
\]
then condition (2) will be satisfied provided that $G_B(x_1,\ldots,x_N) \neq 0$ for all $B \in [N]^{n+1}$. Note that for each choice of $B \in [N]^{n+1}$, $G_B^{-1}(0)$ has empty interior as a subset of $X^N$, as a consequence of Lemma \ref{L:kjh}.
To complete the argument, we must show that
$$ Q := X^N \setminus \left ( \bigcup_{B \in [N]^n} F_B^{-1}(0) \cup \bigcup_{B \in [N]^{n+1}} G_B^{-1}(0) \right ) $$
is nonempty. But since $X^N$ is Baire, $Q$ is actually open and dense. Taking $q \in Q \subseteq X^N$, we see that $X_0 := range(q)$ suffices, by Floyd's lemma.
\end{proof}
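The genericity used in this proof is also easy to observe numerically. In the following minimal NumPy sketch (an added illustration; the particular choice $f_0 = xy$ and $f_1,\ldots,f_5 = 1, x, y, x^2, y^2$ on $[0,1]^2$ is arbitrary), points drawn at random satisfy conditions (1) and (2) of Floyd's lemma, i.e., all the determinants $F_B$ and $G_B$ are non-zero.
\begin{verbatim}
# NumPy check of conditions (1) and (2) on randomly drawn points, via the
# determinants F_B and G_B from the proof above.
import itertools
import numpy as np

rng = np.random.default_rng(0)

f0 = lambda p: p[0] * p[1]
basis = [lambda p: 1.0,
         lambda p: p[0],
         lambda p: p[1],
         lambda p: p[0] ** 2,
         lambda p: p[1] ** 2]
n, N = len(basis), 8
pts = rng.uniform(size=(N, 2))

def F_B(B):   # n x n determinant for condition (1)
    return np.linalg.det(np.array([[f(pts[i]) for f in basis] for i in B]))

def G_B(B):   # (n+1) x (n+1) determinant for condition (2)
    return np.linalg.det(np.array([[f(pts[i]) for f in basis] + [f0(pts[i])] for i in B]))

print(all(abs(F_B(B)) > 1e-12 for B in itertools.combinations(range(N), n)))      # True
print(all(abs(G_B(B)) > 1e-12 for B in itertools.combinations(range(N), n + 1)))  # True
\end{verbatim}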
Note that in the special case in which $f_0,\ldots,f_n$ are real analytic functions defined on $\mathbb{R}^k$, the set $Q$ as in the proof of the theorem is not only dense and open, but co-null in the sense of Lebesgue measure by Corollary \ref{C:fjkh}. This gives applications to probability distributions which have the same null sets as Lebesgue measure. Recall that a measure $\nu$ defined on the Borel sets is \textit{absolutely continuous} with respect to $\lambda$ if $\lambda(B) =0$ always implies $\nu(B) = 0$. It is known that the Gaussian measures are absolutely continuous with respect to Lebesgue measure \cite{KS06}. The same is true for the uniform probability measure defined on a box in Euclidean space, since this is just Lebesgue measure normalized to a bounded set.
\begin{corollary} \label{C:dskfjh}
Suppose $f_0,f_1,\ldots,f_n$ are linearly independent real analytic functions defined on $$R = \underbrace{[0,1] \times [0,1] \times \cdots \times [0,1]}_{k\, \text{times}}.$$ Put $\mathcal{C} = Pos(f_0 - span\langle f_1,\ldots,f_n\rangle )$. For $N > n$, let $\nu$ be a probability measure on $R^N$ which is absolutely continuous with respect to Lebesgue measure. If $q \in R^N$ is selected at random according to $\nu$, then $\mathcal{C} \vert_{X_0}$ is maximum of VC dimension $n$ with probability 1, where $X_0 = range(q)$.
\end{corollary}
\begin{proof}
Observe that $q \in Q$ almost surely: the complement of $Q$ in $R^N$ has Lebesgue measure zero, so $\nu(R^N \setminus Q) = 0$ by absolute continuity, i.e., $\nu(Q) = 1$.
\end{proof}
Note that for any set of real variables $V=\{v_1,\ldots,v_m\}$, distinct monomials arising from $V$ are linearly independent and analytic. Thus the above results apply, in particular, to polynomial functions and their sets of positivity.
This generalizes the Floyd/Dudley result which states that the set of open balls in a Euclidean space has the maximum property on points in general position.
It also applies to some functions which seem not to have been considered before, such as trigonometric polynomials. That is, functions of the form
$$t(x,y;a_0,a_1,\ldots,a_N,b_1,\ldots,b_N) = a_0 + \sum_{n=1}^N a_n\cos(nx) + \sum_{n=1}^N b_n\sin(nx) - y$$
where the $a_i$ and $b_i$ are viewed as parameters. The Wronskian criterion \cite{So01} for the linear independence of functions can be used to generate still more examples.
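For the trigonometric family above, the linear independence required of the basis can indeed be verified by the Wronskian criterion just cited; a minimal SymPy sketch (an added illustration for $N=2$):
\begin{verbatim}
# SymPy: the Wronskian of 1, cos x, sin x, cos 2x, sin 2x is a non-zero constant,
# so these functions are linearly independent.
import sympy as sp

x = sp.symbols('x')
fs = [sp.Integer(1), sp.cos(x), sp.sin(x), sp.cos(2*x), sp.sin(2*x)]
print(sp.simplify(sp.wronskian(fs, x)))
\end{verbatim}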
The study of samples which are dense (in the product topology) and have certain desired properties was undertaken by Sontag in the context of neural networks \cite{S95}.
| {
"timestamp": "2013-09-11T02:11:26",
"yymm": "1309",
"arxiv_id": "1309.2626",
"language": "en",
"url": "https://arxiv.org/abs/1309.2626",
"abstract": "Set systems of finite VC dimension are frequently used in applications relating to machine learning theory and statistics. Two simple types of VC classes which have been widely studied are the maximum classes (those which are extremal with respect to Sauer's lemma) and so-called Dudley classes, which arise as sets of positivity for linearly parameterized functions. These two types of VC class were related by Floyd, who gave sufficient conditions for when a Dudley class is maximum. It is widely known that Floyd's condition applies to positive Euclidean half-spaces and certain other classes, such as sets of positivity for univariate polynomials.In this paper we show that Floyd's lemma applies to a wider class of linearly parameterized functions than has been formally recognized to date. In particular we show that, modulo some minor technicalities, the sets of positivity for any linear combination of real analytic functions is maximum on points in general position. This includes sets of positivity for multivariate polynomials as a special case.",
"subjects": "Probability (math.PR); Functional Analysis (math.FA)",
"title": "Some new maximum VC classes",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109547583621,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7076796238967261
} |
https://arxiv.org/abs/1710.02119 | A $τ$-Tilting Approach to Dissections of Polygons | We show that any accordion complex associated to a dissection of a convex polygon is isomorphic to the support $\tau$-tilting simplicial complex of an explicit finite dimensional algebra. To this end, we prove a property of some induced subcomplexes of support $\tau$-tilting simplicial complexes of finite dimensional algebras. | \section{Introduction}
The theory of cluster algebras gave rise to several interpretations of associahedra~\cite{ StasheffI,StasheffII, Tamari}. Fig.~\ref{fig:associahedron} shows two such interpretations for the rank~$3$ associahedron: as the exchange graph of triangulations of a hexagon and as the exchange graph of support $\tau$-tilting modules over the cluster tilted algebra whose quiver with relations is as depicted. This follows from results in the setting of the ``additive categorification of cluster algebras'' that was initiated in \cite{BuanMarshReinekeReitenTodorov, CalderoChapotonSchiffler}.
\begin{figure}[t]\centering
\includegraphics[width=.88\textwidth]{Stella-Fig1}
\caption{The exchange graph on triangulations of a hexagon (left) and the exchange graph on support $\tau$-tilting modules of the quiver with relations~$\overline{\quiver}$ (right).}\label{fig:associahedron}
\end{figure}
F.~Chapoton observed a similar isomorphism between the exchange graph of certain dissections of a heptagon and that of support $\tau$-tilting modules over the path algebra of the quiver \includegraphics[scale=.35]{Stella-Fig0}, subject to the relation $\beta\alpha = 0$. Fig.~\ref{fig:accordiohedron} shows these two exchange graphs, which can be found in~\cite[Fig.~7]{Chapoton-quadrangulations} and in~\cite[Example~6.4]{AdachiIyamaReiten}.
\begin{figure}[t]\centering
\includegraphics[width=.88\textwidth]{Stella-Fig2}
\caption{The $\dissection_\circ$-accordion complex of the dissection~$\dissection_\circ$ of Fig.~\ref{fig:exmAccordionDissections} (left) and the $2$-term silting complex of the quiver~$\overline{\quiver}(\dissection_\circ)$ (right).}\label{fig:accordiohedron}
\end{figure}
The purpose of this note is to show that this isomorphism is an avatar of a more general result in the theory of $\tau$-tilting modules. Any basic finite dimensional algebra $\Lambda$ gives rise to an exchange graph on support $\tau$-tilting modules. This exchange graph is the dual graph of the support $\tau$-tilting complex~\cite{AdachiIyamaReiten} (see Section~\ref{sec:recollections}). Let~$\{e_1, \dots, e_n\}$ be a complete set of primitive pairwise orthogonal idempotents of~$\Lambda$. Let~$J$ be a non-empty subset of~$[n]$ and~$e_J := \sum_{j \in J} e_j$. The following result forms the algebraic core of the paper (see Section~\ref{sec:recollections} for definitions and Section~\ref{sec:algRes} for the proof).
\begin{Theorem}\label{thm:mainAlgThm}The support $\tau$-tilting complex of~$e_J \Lambda e_J$ is isomorphic to the subcomplex of the support $\tau$-tilting complex of~$\Lambda$ induced by the support $\tau$-tilting modules whose $\b{g}$-vectors' coordinates vanish outside of~$J$.
\end{Theorem}
With this algebraic result, we can explain the isomorphism of Fig.~\ref{fig:accordiohedron}, and extend it to dissections of any polygon. Any reference dissection of a polygon gives rise to an exchange graph on certain dissections. This exchange graph is the dual graph of the accordion complex studied in~\cite{Chapoton-quadrangulations, GarverMcConville, MannevillePilaud-accordion} (see Section~\ref{sec:accordions}).
\begin{Theorem}\label{thm:mainDissection}Any accordion complex is isomorphic to the support $\tau$-tilting simplicial complex of an explicit finite dimensional algebra. Thus, the corresponding exchange graphs are isomorphic.
\end{Theorem}
Note that this statement could be seen as a consequence of a more general result~\cite[Proposition~2.44]{PaluPilaudPlamondon}. However, the latter combines several isomorphisms (from dissections, via non-kissing paths, through non-attracting chords, to support $\tau$-tilting modules) and the proof is quite intricate. In the present paper, we will easily deduce Theorem~\ref{thm:mainDissection} from Theorem~\ref{thm:mainAlgThm} and from the known case of triangulations of polygons (see Section~\ref{sec:accordions}).
The explicit finite dimensional algebras that appear in Theorem~\ref{thm:mainDissection} had previously appeared in~\cite{DavidRoesler, DavidRoeslerSchiffler}, where gentle algebras are associated to dissections of any surface without puncture. Applying Theorem~\ref{thm:mainAlgThm} to these algebras, one could obtain an analogue of the accordion complex for these surfaces.
\section[Recollections on $\tau$-tilting theory]{Recollections on $\boldsymbol{\tau}$-tilting theory}\label{sec:recollections}
The theory of $\tau$-tilting modules was introduced in~\cite{AdachiIyamaReiten}, and we mainly follow this source. Let $k$ be an algebraically closed field, let $\Lambda$ be a basic finite-dimensional $k$-algebra, and let $\{e_1, \dots, e_n\}$ be a complete set of primitive pairwise orthogonal idempotents in~$\Lambda$. Denote by $\mbox{{\rm mod \!}} \Lambda$ the category of finite-dimensional right $\Lambda$-modules, and by $\operatorname{{\rm proj }} \Lambda$ its full subcategory of projective modules. We denote by $\tau$ the Auslander--Reiten translation of $\mbox{{\rm mod \!}} \Lambda$ (see, for instance, \cite[Chapter~IV]{AssemSimsonSkowronski}). For any $\Lambda$-module $M$, we denote by $|M|$ the number of pairwise non-isomorphic direct summands appearing in any decomposition of $M$ into indecomposable modules.
\subsection[Support $\tau$-tilting pairs]{Support $\boldsymbol{\tau}$-tilting pairs}
Following~\cite[Definition~0.1]{AdachiIyamaReiten}, we say that a $\Lambda$-module $M$ is
\begin{itemize}\itemsep=0pt
\item \textit{$\tau$-rigid} if $\Hom{\Lambda}(M, \tau M) = 0$;
\item \textit{$\tau$-tilting} if it is $\tau$-rigid and $|M|=|\Lambda|$;
\item \textit{support $\tau$-tilting} if there exists an idempotent $e$ of $\Lambda$ such that $e$ is in the annihilator of~$M$ and~$M$ is a~$\tau$-tilting $\Lambda/(e)$-module.
\end{itemize}
Support $\tau$-tilting modules always exist: $\Lambda$ itself and the zero module are two examples.
It is useful to keep track of the idempotents in the annihilator of a support $\tau$-tilting module. For this reason, we will follow~\cite[Definition~0.3]{AdachiIyamaReiten} and call a pair $(M,P)$, with $M\in \mbox{{\rm mod \!}} \Lambda$ and $P\in \operatorname{{\rm proj }} \Lambda$, a
\begin{itemize}\itemsep=0pt
\item \textit{$\tau$-rigid pair} if $M$ is $\tau$-rigid and $\Hom{\Lambda}(P,M) = 0$;
\item \textit{support $\tau$-tilting pair} if it is a $\tau$-rigid pair and $|M| + |P| = |\Lambda|$;
\item \textit{almost complete support $\tau$-tilting pair} if it is a $\tau$-rigid pair and $|M| + |P| = |\Lambda|-1$.
\end{itemize}
We will say that the pair $(M,P)$ is \textit{basic} if both $M$ and $P$ are basic $\Lambda$-modules. We define direct sums of pairs componentwise.
One of the main theorems of~\cite{AdachiIyamaReiten} is the following.
\begin{Theorem}[{\cite[Theorem~0.4]{AdachiIyamaReiten}}] A basic almost complete support $\tau$-tilting pair is a direct summand of exactly two basic support $\tau$-tilting pairs.
\end{Theorem}
\begin{Definition}The \textit{support $\tau$-tilting complex} of~$\Lambda$ is the simplicial complex~$\operatorname{s} \! \tau\mathcal{C}(\Lambda)$ whose vertices are the isomorphism classes of indecomposable $\tau$-rigid pairs and whose faces are sets of indecomposable $\tau$-rigid pairs whose direct sum is a $\tau$-rigid pair. The \textit{exchange graph}~$\operatorname{s} \! \tau \! \operatorname{-tilt}(\Lambda)$ is the dual graph of~$\operatorname{s} \! \tau\mathcal{C}(\Lambda)$, i.e., the graph whose vertices are isomorphism classes of basic support $\tau$-tilting pairs, and where two vertices are joined by an edge whenever the corresponding support $\tau$-tilting pairs differ by exactly one direct summand.
\end{Definition}
\subsection{2-term silting objects}
The study of support $\tau$-tilting pairs turns out to be equivalent to that of another class of objects: $2$-term silting objects~\cite[Section~3]{AdachiIyamaReiten}. Let $K^b(\operatorname{{\rm proj }} \Lambda)$ be the homotopy category of bounded complexes of projective $\Lambda$-modules. Let $2 \! \operatorname{-cpx}(\Lambda)$ be the full subcategory of $K^b(\operatorname{{\rm proj }} \Lambda)$ consisting of \textit{$2$-term objects}, that is, complexes
\begin{gather*}
\complexP = \cdots \to P_{m+1} \to P_m \to P_{m-1} \to \cdots
\end{gather*}
such that $P_m$ is zero unless $m\in \{0,1\}$. We will write $P_1\to P_0$ to denote the complex
\begin{gather*}
\cdots \to 0 \to P_1 \to P_0 \to 0 \to \cdots
\end{gather*}
A $2$-term object $\complexP$ is \textit{rigid} if $\Hom{K^b}(\complexP, \complexP[1]) = 0$. It is \textit{silting} if
\begin{itemize}\itemsep=0pt
\item it is rigid, and
\item $|\complexP| = |\Lambda|$.
\end{itemize}
This is a special case of a more general definition of silting objects, see~\cite{KellerVossieck}. Examples of $2$-term silting objects include $0\to \Lambda$ and $\Lambda \to 0$.
\begin{Definition} The \textit{$2$-term silting complex} of~$\Lambda$ is the simplicial complex~$\mathcal{SC}(\Lambda)$ whose vertices are isomorphism classes of indecomposable rigid $2$-term objects in $K^b(\operatorname{{\rm proj }} \Lambda)$ and whose faces are sets of such objects whose direct sum is rigid. The \textit{exchange graph}~$2 \! \operatorname{-silt}(\Lambda)$ is the dual graph of~$\mathcal{SC}(\Lambda)$, i.e., the graph whose vertices are isomorphism classes of basic $2$-term silting objects in $K^b(\operatorname{{\rm proj }} \Lambda)$, and where two vertices are joined by an edge whenever the corresponding objects differ by exactly one direct summand.
\end{Definition}
For any $\Lambda$-module $M$, denote by $P_1^M\to P_0^M$ a minimal projective presentation of $M$.
\begin{Theorem}[{\cite[Theorem~3.2]{AdachiIyamaReiten}}] The map $(M,P) \mapsto \big(P_1^M\to P_0^M\big) \oplus (P\to 0)$ induces an isomor\-phism of simplicial complexes~$\operatorname{s} \! \tau\mathcal{C}(\Lambda) \cong \mathcal{SC}(\Lambda)$, and thus of exchange graphs $\operatorname{s} \! \tau \! \operatorname{-tilt}(\Lambda)$ $\cong 2 \! \operatorname{-silt}(\Lambda)$.
\end{Theorem}
\subsection{The $\b{g}$-vector of a 2-term object}
The results of this note rely on the following definition.
\begin{Definition} Let $\complexP$ be a $2$-term object in $2 \! \operatorname{-cpx}(\Lambda)$. The \textit{$\b{g}$-vector} of $\complexP$, denoted by $\b{g}(\complexP)$, is the class of $\complexP$ in the Grothendieck group $K_0\big(K^b(\operatorname{{\rm proj }} \Lambda)\big)$.
\end{Definition}
We will usually denote $\b{g}$-vectors as integer vectors by using the basis of the abelian group $K_0\big(K^b(\operatorname{{\rm proj }} \Lambda)\big)$ given by the classes of the indecomposable projective modules $e_1 \Lambda, \dots, e_n \Lambda$ concentrated in degree $0$. Thus, if $\complexP$ is the $2$-term object
\begin{gather*}
\bigoplus_{i \in [n]} (e_i \Lambda)^{\oplus b_i} \xrightarrow{} \bigoplus_{i \in [n]} (e_i \Lambda)^{\oplus a_i},
\end{gather*}
then its $\b{g}$-vector is $\b{g}(\complexP) = (a_i - b_i)_{i \in [n]}$.
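For instance, recording a $2$-term object by the multiplicity vectors $(a_1,\dots,a_n)$ and $(b_1,\dots,b_n)$ of the indecomposable projectives in degrees $0$ and $1$, the $\b{g}$-vector is computed entrywise. The following Python sketch is a minimal illustration of this bookkeeping (the encoding by multiplicity vectors is chosen only for this example and is not part of the theory).
\begin{verbatim}
def g_vector(a, b):
    """g-vector of the 2-term object  (+) (e_i L)^{b_i} --> (+) (e_i L)^{a_i},
    written in the basis of the indecomposable projectives in degree 0."""
    assert len(a) == len(b)
    return [ai - bi for ai, bi in zip(a, b)]

# Example with n = 3: the object (e_2 L) --> (e_1 L) + (e_3 L) has
# g-vector (1, -1, 1); the silting objects 0 --> Lambda and Lambda --> 0
# have g-vectors (1, 1, 1) and (-1, -1, -1) respectively.
print(g_vector([1, 0, 1], [0, 1, 0]))   # [1, -1, 1]
print(g_vector([1, 1, 1], [0, 0, 0]))   # [1, 1, 1]
print(g_vector([0, 0, 0], [1, 1, 1]))   # [-1, -1, -1]
\end{verbatim}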
In contrast to arbitrary $2$-term objects, rigid $2$-term objects are determined by their $\b{g}$-vector in the following sense.
\begin{Theorem}[{\cite[Sections~2.3 and 2.4]{DehyKeller}}]Let $\complexP$ and $\complexQ$ be two rigid $2$-term objects.
\begin{enumerate}[$(i)$]\itemsep=0pt
\item If ${\b{g}(\complexP) = \b{g}(\complexQ)}$, then $\complexP$ and $\complexQ$ are isomorphic.
\item The object $\complexP$ is isomorphic to an object of the form $(P_1\to P_0) \oplus \big(Q\stackrel{{\rm id}_Q}{\to} Q\big)$, where~$P_1$ and~$P_0$ do not have non-zero direct summands in common.
\end{enumerate}
\end{Theorem}
Note that $\big(Q\stackrel{{\rm id}_Q}{\to} Q\big)$ is isomorphic to zero in $K^b(\operatorname{{\rm proj }} \Lambda)$.
\section{Algebraic result}\label{sec:algRes}
We use the same notations as in the previous section. In particular, $\Lambda$ is a basic finite-dimensional $k$-algebra with complete set of pairwise orthogonal idempotents~$\{e_1, \dots, e_n\}$.
Let $J$ be a subset of $[n]$. We will study $2$-term objects that only involve the indecomposable projective modules $e_j \Lambda$ with $j\in J$.
\begin{Definition}Let $2 \! \operatorname{-cpx}_J(\Lambda)$ be the full subcategory of $2 \! \operatorname{-cpx}(\Lambda)$ whose objects are the $2$-term objects $P_1\to P_0$ such that all the indecomposable direct summands of $P_1$ and $P_0$ have the form~$e_j \Lambda$ with $j\in J$.
\end{Definition}
Our main interest will lie in the rigid objects in~$2 \! \operatorname{-cpx}_J(\Lambda)$.
\begin{Definition}Let~$\mathcal{SC}_J(\Lambda)$ be the subcomplex of~$\mathcal{SC}(\Lambda)$ \textit{induced} by~$J$, that is, the subcomplex whose vertices are rigid objects in $2 \! \operatorname{-cpx}_J(\Lambda)$.
Let $2 \! \operatorname{-silt}_J(\Lambda)$ be the dual graph of~$\mathcal{SC}_J(\Lambda)$. Its vertices are isomorphism classes of basic objects $\complexP$ in $2 \! \operatorname{-cpx}_J(\Lambda)$ satisfying
\begin{itemize}\itemsep=0pt
\item $\complexP$ is rigid;
\item if $\complexP'\in 2 \! \operatorname{-cpx}_J(\Lambda)$ and $\complexP\oplus \complexP'$ is rigid, then $\complexP'$ is a direct sum of direct summands of~$\complexP$.
\end{itemize}
Two vertices are joined by an edge whenever the corresponding objects differ by exactly one indecomposable direct summand.
\end{Definition}
In other words, the faces of $\mathcal{SC}_J(\Lambda)$ correspond to basic rigid objects whose $\b{g}$-vectors have zero coefficients in entries corresponding to elements not in $J$. In this sense, $\mathcal{SC}_J(\Lambda)$ is a~representa\-tion-theoretic analogue of the accordion complex~\cite{Chapoton-quadrangulations, GarverMcConville, MannevillePilaud-accordion} (see Theorem~\ref{thm:contractDiagonals}). This is the main motivation for the introduction of this object.
Let $e_J := \sum_{j\in J} e_j$, and consider the algebra $e_J \Lambda e_J$. Observe that $e_J \Lambda e_J$ is isomorphic to $\End{\Lambda}(\Lambda e_J)$. This has the following consequence. Let $\operatorname{{\rm proj }}_J(\Lambda)$ be the full subcategory of $\operatorname{{\rm proj }}(\Lambda)$ whose objects are isomorphic to direct sums of the indecomposable modules $e_j \Lambda$, with $j\in J$.
\begin{Lemma} The $k$-linear categories $\operatorname{{\rm proj }}_J(\Lambda)$ and $\operatorname{{\rm proj }}(e_J\Lambda e_J)$ are equivalent. In particular, the categories $K^b(\operatorname{{\rm proj }}_J(\Lambda))$ and $K^b(\operatorname{{\rm proj }}(e_J \Lambda e_J))$ are equivalent.
\end{Lemma}
This lemma immediately implies the following statement.
\begin{Theorem}The simplicial complexes~$\mathcal{SC}_J(\Lambda)$ and~$\mathcal{SC}(e_J \Lambda e_J)$ are isomorphic. In particular, their dual graphs $2 \! \operatorname{-silt}_J(\Lambda)$ and $2 \! \operatorname{-silt}(e_J \Lambda e_J)$ are isomorphic.
\end{Theorem}
\begin{Corollary}The simplicial complex~$\mathcal{SC}_J(\Lambda)$ is a pseudomanifold of dimension~$|J|-1$. In particular, its dual graph~$2 \! \operatorname{-silt}_J(\Lambda)$ is $|J|$-regular.
\end{Corollary}
\section{Application: accordion complexes of dissections}\label{sec:accordions}
Let~$\polygon$ be a convex polygon. We call \textit{diagonals} of~$\polygon$ the segments connecting two non-consecutive vertices of~$\polygon$. A \textit{dissection} of~$\polygon$ is a set~$\dissection$ of non-crossing diagonals. It dissects the polygon into \textit{cells}. We denote by~$\overline{\quiver}(\dissection)$ the quiver with relations whose vertices are the diagonals of~$\dissection$, whose arrows connect any two counterclockwise consecutive edges of a cell of~$\dissection$, and whose relations are given by triples of counterclockwise consecutive edges of a cell of~$\dissection$.
See Fig.~\ref{fig:exmAccordionDissections} for an example.
We now consider $2m$ points on the unit circle alternately colored black and white, and let~$\polygon_\circ$ (resp.~$\polygon_\bullet$) denote the convex hull of the white (resp.~black) points.
We fix an arbitrary reference dissection~$\dissection_\circ$ of~$\polygon_\circ$. A diagonal~$\delta_\bullet$ of~$\polygon_\bullet$ is a \textit{$\dissection_\circ$-accordion diagonal} if it crosses either none or two consecutive edges of any cell of~$\dissection_\circ$. In other words, the diagonals of~$\dissection_\circ$ crossed by~$\delta_\bullet$ together with the two boundary edges of~$\polygon_\circ$ crossed by~$\delta_\bullet$ form an accordion. A \textit{$\dissection_\circ$-accordion dissection} is a set of non-crossing \mbox{$\dissection_\circ$-accordion} diagonals. See Fig.~\ref{fig:exmAccordionDissections} for an example. We call \textit{$\dissection_\circ$-accordion complex} the simplicial complex~$\accordionComplex(\dissection_\circ)$ of $\dissection_\circ$-accordion dissections. This complex was studied in recent works of F.~Chapoton~\cite{Chapoton-quadrangulations}, A.~Garver and T.~McConville~\cite{GarverMcConville}, and T.~Manneville and V.~Pilaud~\cite{MannevillePilaud-accordion}.
\begin{figure}[t]\centering \includegraphics[width=.9\textwidth]{Stella-Fig3}
\caption{A dissection~$\dissection_\circ$ with its quiver~$\overline{\quiver}(\dissection_\circ)$ (left), a $\dissection_\circ$-accordion diagonal (middle) and a $\dissection_\circ$-accordion dissection (right).}
\label{fig:exmAccordionDissections}
\end{figure}
For a diagonal~$\delta_\circ$ of~$\dissection_\circ$ and a $\dissection_\circ$-accordion diagonal~$\delta_\bullet$ intersecting~$\delta_\circ$, we consider the three edges (including~$\delta_\circ$) crossed by~$\delta_\bullet$ in the two cells of~$\dissection_\circ$ containing~$\delta_\circ$. We define~$\sign{\delta_\circ}{\dissection_\circ}{\delta_\bullet}$ to be $1$, $-1$, or~$0$ depending on whether these three edges form a~$\ZZZ$, a~$\SSS$, or a~$\VVV$. The \textit{$\b{g}$-vector} of~$\delta_\bullet$ with respect to~$\dissection_\circ$ is the vector~$\gvector{\dissection_\circ}{\delta_\bullet} \in \R^{\dissection_\circ}$ whose $\delta_\circ$-coordinate is~$\sign{\delta_\circ}{\dissection_\circ}{\delta_\bullet}$.
\begin{Example}\label{exm:associahedron}When the reference dissection~$\dissection_\circ$ is a triangulation of~$\polygon_\circ$, any diagonal of~$\polygon_\bullet$ is a $\dissection_\circ$-accordion diagonal. The $\dissection_\circ$-accordion complex is thus an $n$-dimensional associahedron (of type~$A$), where~$n = m-3$. As explained in \cite{CalderoChapotonSchiffler}, the $\dissection_\circ$-accordion complex is isomorphic to the $2$-term silting complex of the quiver~$\overline{\quiver}(\dissection_\circ)$ of the triangulation~$\dissection_\circ$. The~isomorphism sends a~diagonal of~$\polygon_\bullet$ to the $2$-term silting object with the same $\b{g}$-vector. See Fig.~\ref{fig:associahedron} for an illustration.
\end{Example}
With the notations we introduced, we can now restate Theorem~\ref{thm:mainDissection} more precisely.
\begin{Theorem}\label{thm:bijectionAccordionComplexSiltingComplex}For any reference dissection~$\dissection_\circ$, the $\dissection_\circ$-accordion complex is isomorphic to the $2$-term silting complex of the quiver~$\overline{\quiver}(\dissection_\circ)$.
\end{Theorem}
One possible approach to Theorem~\ref{thm:bijectionAccordionComplexSiltingComplex} would be to provide an explicit bijective map between $\dissection_\circ$-accordion diagonals and $2$-term silting objects for~$\overline{\quiver}(\dissection_\circ)$. Such a map is easy to guess using \mbox{$\b{g}$-vectors}, but the proof that it is actually a bijection and that it preserves compatibility is intricate. This approach was developed in the more general context of non-kissing complexes of gentle quivers with relations in~\cite[Proposition~2.44]{PaluPilaudPlamondon}. In this note, we use an alternative, simpler strategy to obtain Theorem~\ref{thm:bijectionAccordionComplexSiltingComplex} by using Theorem~\ref{thm:mainAlgThm} and understanding accordion complexes as certain subcomplexes of the associahedron.
For that, consider two nested dissections~$\dissection_\circ \subset \dissection_\circ'$. Observe that any~$\dissection_\circ$-accordion diagonal is a $\dissection_\circ'$-accordion diagonal. Conversely, a $\dissection_\circ'$-accordion diagonal~$\delta_\bullet$ is a $\dissection_\circ$-accordion diagonal if and only if it does not cross any diagonal~$\delta_\circ'$ of~$\dissection_\circ' \ssm \dissection_\circ$ as a $\ZZZ$ or a~$\SSS$, that is, if and only if the~$\delta_\circ'$-coordinate of its $\b{g}$-vector~$\gvector{\dissection_\circ'}{\delta_\bullet}$ vanishes for any~$\delta_\circ' \in \dissection_\circ' \ssm \dissection_\circ$.
This observation shows the following statement.
\begin{Theorem}[{\cite[Section~4.2]{MannevillePilaud-accordion}}]\label{thm:contractDiagonals}For any two nested dissections~$\dissection_\circ \subset \dissection_\circ'$, the accordion complex~$\accordionComplex(\dissection_\circ)$ is isomorphic to the subcomplex of~$\accordionComplex(\dissection_\circ')$ induced by $\dissection_\circ'$-accordion diagonals~$\delta_\bullet$ whose $\b{g}$-vectors $\gvector{\dissection_\circ'}{\delta_\bullet}$ lie in the coordinate subspace spanned by elements in~$\dissection_\circ$.
\end{Theorem}
In order to prove Theorem~\ref{thm:bijectionAccordionComplexSiltingComplex} we now turn to associative algebras. Let~$\overline{\quiver} = (\quiver, \relations)$ be any gentle quiver with relations~\cite{ButlerRingel} and~$J$ be any subset of vertices of~$\overline{\quiver}$. We call \textit{shortcut quiver} the quiver with relations~$\overline{\quiver}_J = (\quiver_J, \relations_J)$ whose vertices are the elements of~$J$, whose arrows are the paths in~$\overline{\quiver}$ with endpoints in~$J$ but no internal vertex in~$J$, and whose relations are inherited from those of~$\overline{\quiver}$. Then the quotient~$k\quiver_J/\relations_J$ of the path algebra~$k\quiver_J$ is gentle and is isomorphic to the algebra~$e_J (k\quiver/\relations) e_J$.
\begin{Example}Quivers of dissections are shortcut quivers: if~$\dissection_\circ \subset \dissection_\circ'$, then~${\overline{\quiver}(\dissection_\circ) = \overline{\quiver}(\dissection_\circ')_{\dissection_\circ}}$.
In particular, for any dissection~$\dissection_\circ$, the quiver~$\overline{\quiver}(\dissection_\circ)$ is a shortcut quiver of the quiver with relations of a cluster tilted algebra of type~$A$.
\end{Example}
The following statement is an application of Theorem~\ref{thm:mainAlgThm} to gentle algebras.
\begin{Theorem}\label{thm:contractVertices}For any gentle quiver with relations~$\overline{\quiver}$ and any subset~$J$ of vertices of~$\overline{\quiver}$, the \mbox{$2$-term} silting complex~$\mathcal{SC}(\overline{\quiver}_J)$ for the shortcut quiver~$\overline{\quiver}_J$ is isomorphic to the subcomplex of the $2$-term silting complex~$\mathcal{SC}(\overline{\quiver})$ induced by $2$-term silting objects whose $\b{g}$-vectors lie in the coordinate subspace spanned by vertices in~$J$.
\end{Theorem}
Combining Theorems~\ref{thm:contractDiagonals} and~\ref{thm:contractVertices} together with Example~\ref{exm:associahedron}, we obtain Theorem~\ref{thm:bijectionAccordionComplexSiltingComplex} (and Theorem~\ref{thm:mainDissection}).
\section{Concluding remarks}
\begin{Remark}There is a geometric interpretation of the common phenomenon described in Theorems~\ref{thm:contractDiagonals} and~\ref{thm:contractVertices}. For a $\dissection_\circ$-accordion dissection~$\dissection_\bullet$, denote by~$\R_{\ge0}\,\gvector{\dissection_\circ}{\dissection_\bullet}$ the polyhedral cone generated by the set of $\b{g}$-vectors~$\gvector{\dissection_\circ}{\dissection_\bullet} := \set{\gvector{\dissection_\circ}{\delta_\bullet}}{\delta_\bullet \in \dissection_\bullet}$. The collection~$\gvectorFan$ of cones~$\R_{\ge0}\,\gvector{\dissection_\circ}{\dissection_\bullet}$ for all $\dissection_\circ$-accordion dissections~$\dissection_\bullet$ is a complete simplicial fan called \mbox{\textit{$\b{g}$-vector fan}} of~$\dissection_\circ$~\cite{MannevillePilaud-accordion}. The crucial feature of this fan is that no coordinate hyperplane meets the interior of any of its maximal cones. This is often referred to as the \textit{sign-coherence property} of $\b{g}$-vectors. It implies that for any two nested dissections~$\dissection_\circ \subset \dissection_\circ'$, the section of~$\gvectorFan[\dissection_\circ']$ with the coordinate subspace~$\R^{\dissection_\circ}$ is a subfan of~$\gvectorFan[\dissection_\circ']$. The content of Theorem~\ref{thm:contractDiagonals} is that this subfan is the $\b{g}$-vector fan~$\gvectorFan$. A~similar statement holds for Theorem~\ref{thm:contractVertices}.
\end{Remark}
\begin{Remark}In the theory of cluster algebras, a standard operation consists of freezing a subset of the initial cluster. This corresponds to taking a section of the $\b{d}$-vector fan by a coordinate subspace. To the best of our knowledge, the same operation on the $\b{g}$-vector fan studied in this note was not considered before in the literature.
\end{Remark}
\begin{Remark} The connection between representation theory and accordion complexes was already considered by A.~Garver and T.~McConville in~\cite[Section~8]{GarverMcConville}. However, their approach deals with $\b{c}$-vectors and simple-minded collections while our approach deals with $\b{g}$-vectors and silting objects.
\end{Remark}
\subsection*{Acknowledgements}
We are grateful to F.~Chapoton for pointing out to us the isomorphism between the two graphs of Fig.~\ref{fig:accordiohedron} which gave us the motivation for the present note. We also thank R.~Schiffler for his comments on a previous version. Finally, we are grateful to an anonymous referee for helpful suggestions on the presentation of this note. The first two authors are partially supported by the French ANR grant SC3A~(15\,CE40\,0004\,01). The last author is supported by the ISF grant 1144/16.
\pdfbookmark[1]{References}{ref}
| {
"timestamp": "2018-05-15T02:13:52",
"yymm": "1710",
"arxiv_id": "1710.02119",
"language": "en",
"url": "https://arxiv.org/abs/1710.02119",
"abstract": "We show that any accordion complex associated to a dissection of a convex polygon is isomorphic to the support $\\tau$-tilting simplicial complex of an explicit finite dimensional algebra. To this end, we prove a property of some induced subcomplexes of support $\\tau$-tilting simplicial complexes of finite dimensional algebras.",
"subjects": "Representation Theory (math.RT)",
"title": "A $τ$-Tilting Approach to Dissections of Polygons",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.984810954312569,
"lm_q2_score": 0.7185943865443352,
"lm_q1q2_score": 0.7076796235763818
} |
https://arxiv.org/abs/1910.10060 | A Fuss-Catalan variation of the caracol flow polytope | Recently, a combinatorial interpretation of Baldoni and Vergne's generalized Lidskii formula for the volume of a flow polytope was developed by Benedetti et al.. This converts the problem of computing Kostant partition functions into a problem of enumerating a set of objects called unified diagrams. We devise an enhanced version of this combinatorial model to compute the volumes of flow polytopes defined on a family of graphs called the k-caracol graphs, resulting in the first application of the model to non-planar graphs. At k=1 and k=n-1, we recover results for the classical caracol graph and the Pitman--Stanley graph. Furthermore, we introduce the notion of in-degree gravity diagrams for flow polytopes, which are equinumerous with (out-degree) gravity diagrams considered by Benedetti et al.. We show that for the k-caracol flow polytopes, these two kinds of gravity diagrams satisfy a natural combinatorial correspondence, which raises an intriguing question on the relationship in the geometry of two related polytopes. |
\section{Introduction}
In the paper~\cite{BGHHKMY}, we developed a combinatorial model for computing the volume of flow polytopes $\calF_G(\ba)$ which is based on the {\em generalized Lidskii volume formula} due to Baldoni and Vergne~\cite{BV}. We defined combinatorial objects called {\em unified diagrams}, whose enumeration gives the normalized volume of the flow polytope. We showed that the model can be applied to compute the volume of the flow polytopes of the Pitman--Stanley graph, the zigzag graph, and the caracol graph at various net flows, without the need for constant term identities.
In this paper, we show that the combinatorial model can be applied to compute the volume of the flow polytopes of a family of graphs which we call the {\em $k$-caracol graphs}. Setting $k=1$ recovers the results obtained for the caracol graph developed in~\cite{BGHHKMY}, and setting $k=n-1$ recovers some of the results for the Pitman--Stanley polytope~\cite{PS}.
We note that this is the first application of the combinatorial model to non-planar graphs. Indeed, the motivation for studying the flow polytopes of the $k$-caracol graphs was born of the desire to understand the combinatorics of the flow polytope of the complete graph. The Chan--Robbins--Yuen polytope $\mathrm{CRY}(n)$~\cite{CRY} can be realized as the flow polytope of the complete graph $K_{n+1}$ with net flow vector $\ba=\varepsilon_1-\varepsilon_{n+1}$. A well-known result due to Zeilberger~\cite{Z} states that the volume of $\mathrm{CRY}(n)$ is the product of the first $n-2$ Catalan numbers. Despite the combinatorial nature of the formula, the proof relies on an application of the Morris constant term identity, and no combinatorial proof is known.
Other generalizations of the volume of the flow polytope of the complete graph $K_{n+1}$ also involve products of combinatorial quantities. At net flow $\ba=\varepsilon_1+\varepsilon_2-2\varepsilon_{n+1}$, Corteel, Kim, and M\'esz\'aros~\cite{CKM} showed that the volume is \hbox{$2^{{n\choose2}-1}$} times the product of the first $n-2$ Catalan numbers, while at net flow $\ba=\sum_{i=1}^n \varepsilon_i - n\varepsilon_{n+1}$, M\'esz\'aros, Morales, and Rhoades~\cite{MMR} showed that the volume is the number of standard Young tableaux of staircase shape $(n-1,n-2,\ldots, 2,1,0)$ times the product of the first $n-1$ Catalan numbers. Both of these generalizations rely on the Morris constant term identity as well.
Combinatorial objects such as Dyck paths and parking functions appeared naturally in the study of the Pitman--Stanley polytope,
which is affinely equivalent to the flow polytope of the Pitman--Stanley graph.
These objects play a central role in the unified diagrams for flow polytopes, and we saw in~\cite{BGHHKMY} that the volume of the flow polytope of the caracol graph with net flow $\ba=\varepsilon_1-\varepsilon_{n+1}$ is the Catalan number $\Cat(n-2)$, while with net flow $\ba= \sum_{i=1}^n \varepsilon_i - n\varepsilon_{n+1}$, the volume is $\Cat(n-2)\cdot n^{n-2}$, the product of a Catalan number and the number of parking functions of length $n-1$. A main result of this paper is a generalization of this to the $k$-caracol family of graphs.\\%, whose volume is also a product of various combinatorial quantities.\\
\noindent{\bf Theorem~\ref{thm.oneoneone}.} For $k\in\bbN$ and $n>k$,
$$\vol\calF_{\car{n+1}{k}} (1,\ldots, 1,-n)
=\Cat(n-k,k(n-k)-1)\cdot k^{k(n-k)-2} \cdot n^{n-k-1},$$
where $\Cat(a,b)$ is a {\em rational Catalan number} (see~\cite{ALW}). For the special case when $b=ka-1$, it is a {\em generalized Fuss-Catalan number}, and when $b=a+1$, it is the classical Catalan number.
Many of the ideas from~\cite{BGHHKMY} are generalized in this present paper, but a refinement of the original combinatorial model is necessary in order to explain the appearance of the factor $k^{k(n-k)-2}$ in Theorem~\ref{thm.oneoneone}, which is undetected when $k=1$. We therefore introduce the {\em truncated unified diagrams}, whose `completions' are the standardized diagrams. The truncated diagrams are enumerated by the {\em $k$-parking numbers} (see the Appendix)
$$T_k(r,i) = (r+1)^{i-1} \multiset{k(r+1)}{r-i},$$
which interpolate between the generalized Fuss-Catalan numbers and the number of parking functions. We give these numbers a combinatorial interpretation involving a vehicle-parking scenario in Theorem~\ref{thm.multilabel}. The formula in Theorem~\ref{thm.oneoneone} is then obtained by a binomial transform of these numbers, up to a power of $k$.
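As a quick numerical illustration of these formulas, the following Python sketch evaluates the right-hand side of Theorem~\ref{thm.oneoneone} and the $k$-parking numbers for small parameters; here $\multiset{m}{j}$ is read as the multiset coefficient $\binom{m+j-1}{j}$, and the index range $1\leq i\leq r$ is used only for this sketch (both are assumptions made for illustration). At $k=1$ the volume formula reduces to $\Cat(n-2)\cdot n^{n-2}$, as recalled above, and at $i=r$ the $k$-parking number reduces to $(r+1)^{r-1}$, the number of parking functions of length $r$.
\begin{verbatim}
from math import comb

def rational_catalan(a, b):
    # Cat(a, b) = 1/(a+b) * binom(a+b, a); exact for coprime a, b
    return comb(a + b, a) // (a + b)

def classical_catalan(m):
    return comb(2 * m, m) // (m + 1)

def multiset_coeff(m, j):
    # multiset coefficient binom(m + j - 1, j); assumed reading of \multiset
    return comb(m + j - 1, j)

def volume_all_ones(n, k):
    # right-hand side of Theorem thm.oneoneone
    a = n - k
    return rational_catalan(a, k * a - 1) * k ** (k * a - 2) * n ** (n - k - 1)

def parking_number(k, r, i):
    # k-parking number T_k(r, i) = (r+1)^(i-1) * multiset(k(r+1), r-i),
    # evaluated here for 1 <= i <= r
    return (r + 1) ** (i - 1) * multiset_coeff(k * (r + 1), r - i)

# k = 1 recovers Cat(n-2) * n^(n-2), the caracol volume recalled above
for n in range(3, 8):
    assert volume_all_ones(n, 1) == classical_catalan(n - 2) * n ** (n - 2)

# at i = r the k-parking number is (r+1)^(r-1), the number of parking
# functions of length r
for k in range(1, 4):
    for r in range(1, 6):
        assert parking_number(k, r, r) == (r + 1) ** (r - 1)
\end{verbatim}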
This power of $k$ arises from counting the completions of the truncated diagrams to standardized diagrams. The factor of $k$ which appears in the formula of Theorem~\ref{thm.oneoneone} can be explained by considering a cyclic group action on the set of truncated diagrams, together with a delightful partitioning of the `$N$-th multinomial $(k-1)$-simplex' (better known as the `$N$-th row of Pascal's triangle' in the case $k=2$), whose entries sum to $k^N$ (Lemma~\ref{lem.thelemma}).
This paper is organized as follows. In Section~\ref{sec.lidskii}, we introduce the family of $k$-caracol graphs, state the generalized Lidskii volume formulas, and introduce one of the key ingredients of a unified diagram, called a {\em gravity diagram}. These are a combinatorial interpretation of the {\em Kostant partition function}. We also discuss the necessary background on rational Catalan combinatorics, and then give two bijective proofs to show that the volume of the flow polytope of the $k$-caracol graph $\car{n+1}{k}$ at unit flow $\ba=(1,0,\ldots,0,-1)$ is the generalized Fuss-Catalan number $\Cat(n-k,k(n-k)-1)$.
The combinatorics arising from Theorems~\ref{thm.onezerozero} and~\ref{prop.outgrav} give rise to an interesting geometric question (Remark~\ref{rem.duality}).
In Section~\ref{sec.kcaracol}, we introduce the unified diagrams for the flow polytopes of the $k$-caracol graphs, and their variations. We define the $k$-parking numbers, and show that they enumerate the truncated unified diagrams (Theorem~\ref{thm.multilabel}).
Having developed all the necessary enumerative tools, we end the section with a proof of Theorem~\ref{thm.oneoneone}.
In Section~\ref{sec.abbb}, we discuss a generalization of the volume of the flow polytopes of the $k$-caracol graphs at more general net flow vectors in Theorem~\ref{thm.abbb}, and show that $k$-parking numbers form {\em log-concave} sequences.
Finally in Section~\ref{sec.mcar}, we discuss some results for a multigraph which we call the {\em $k$-multicaracol graph}, whose volume formulas are closely related to those for the $k$-caracol graph at various net flows (Theorem~\ref{thm.mcarabbb}). We end with a suggestion of an alternative pathway towards another combinatorial proof of Theorem~\ref{thm.oneoneone}.
\section{Volume of the $k$-caracol polytope with unit flow}\label{sec.lidskii}
\subsection{Flow polytopes and the $k$-caracol graphs}
We define the family of {\em $k$-caracol graphs}.
\begin{defn} \label{defn.kcar}
Let $k\in\bbN=\bbZ_{\geq0}$ and $n>k$. The directed graph $G=\car{n+1}{k}$ has vertex set $V(G)=\{1,\ldots, n+1\}$ and edge set
$$
E(G)=\left\{(i,i+1), (i,k+1), \ldots, (i,n) \mid 1\leq i\leq k\right\}
\cup
\left\{(i,i+1), (i,n+1) \mid k+1 \leq i\leq n \right\}.
$$
For clarity, we point out that $\car{n+1}{k}$ is a graph without multiple edges, and note that the number of edges in $\car{n+1}{k}$ is $m=(k+1)(n-k)+n-2$. The graph $\car{n+1}{1}$ is the caracol graph, and the graph $\car{n+1}{n-1}$ is the Pitman--Stanley graph $\PS_n$ with an additional edge $(n,n+1)$. The flow polytopes of both graphs were previously studied in~\cite{BGHHKMY}. We note that our definition of $\PS_n$ differs from others found in the literature in that the edge $(n-1,n)$ is not repeated.
\end{defn}
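The following Python sketch (an informal illustration; the helper name is chosen only for this note) builds the edge set of Definition~\ref{defn.kcar} directly and checks the edge count $m=(k+1)(n-k)+n-2$ for small parameters.
\begin{verbatim}
def caracol_edges(n, k):
    """Edge set of Car(n+1, k) from Definition defn.kcar
    (stored as a set, so there are no multiple edges)."""
    E = set()
    for i in range(1, k + 1):
        E.add((i, i + 1))
        for j in range(k + 1, n + 1):
            E.add((i, j))
    for i in range(k + 1, n + 1):
        E.add((i, i + 1))
        E.add((i, n + 1))
    return sorted(E)

for n in range(2, 9):
    for k in range(1, n):
        assert len(caracol_edges(n, k)) == (k + 1) * (n - k) + n - 2
\end{verbatim}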
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=0.8]
\begin{scope}[yshift=0]
\node[vertex][fill,label=below:\footnotesize{$1$}](a0) at (0,0) {};
\node[vertex][fill,label=below:\footnotesize{$2$}](a1) at (1,0) {};
\node[vertex][fill,label=below:\footnotesize{$3$}](a2) at (2,0) {};
\node[vertex][fill,label=below:\footnotesize{$4$}](a3) at (3,0) {};
\node[vertex][fill,label=below:\footnotesize{$5$}](a4) at (4,0) {};
\node at (4.5,0) {$\cdots$};
\node[vertex][fill,label=above :\footnotesize{$n-1$}](a10) at (5,0) {};
\node[vertex][fill,label=above:\footnotesize{$n$}](a11) at (6,0) {};
\node[vertex][fill,label=above:\footnotesize{$n+1$}](a12) at (7,0) {};
\draw[->, >=stealth, thick] (a0)--(a1);
\draw[-stealth, thick] (a1)--(a2);
\draw[-stealth, thick] (a2)--(a3);
\draw[-stealth, thick] (a3)--(a4);
\draw[thick] (a4)--(4.15,0); \draw[thick] (4.75,0)--(4.85,0);
\draw[-stealth, thick] (4.85,0)--(a10);
\draw[-stealth, thick] (a10)--(a11);
\draw[-stealth, thick] (a11) to (a12);
\draw[-stealth, thick] (a10) to[out=-50,in=240] (a12);
\draw[-stealth, thick] (a4) to[out=-50,in=240] (a12);
\draw[-stealth, thick] (a3) to[out=-50,in=240] (a12);
\draw[-stealth, thick] (a2) to[out=-50,in=240] (a12);
\draw[-stealth, thick] (a1) to[out=-50,in=240] (a12);
\draw[-stealth, thick] (a0) to[out=-50,in=240] (a12);
\end{scope}
\begin{scope}[xshift=250]
\node[vertex][fill,label=below:\footnotesize{$1$}](a0) at (0,0) {};
\node[vertex][fill,label=below:\footnotesize{$2$}](a1) at (1,0) {};
\node[vertex][fill,label=below:\footnotesize{$3$}](a2) at (2,0) {};
\node[vertex][fill,label=below:\footnotesize{$4$}](a3) at (3,0) {};
\node[vertex][fill,label=below:\footnotesize{$5$}](a4) at (4,0) {};
\node at (4.5,0) {$\cdots$};
\node[vertex][fill,label=above :\footnotesize{$n-1$}](a10) at (5,0) {};
\node[vertex][fill,label=above:\footnotesize{$n$}](a11) at (6,0) {};
\node[vertex][fill,label=above:\footnotesize{$n+1$}](a12) at (7,0) {};
\draw[-stealth, thick] (0,0)--(.95,0);
\draw[-stealth, thick] (1,0)--(1.95,0);
\draw[-stealth, thick] (2,0)--(2.95,0);
\draw[-stealth, thick] (3,0)--(3.95,0);
\draw[thick] (4,0)--(4.15,0); \draw[thick] (4.75,0)--(4.85,0);
\draw[-stealth, thick] (4.85,0)--(4.95,0);
\draw[-stealth, thick] (5,0)--(5.95,0);
\draw[-stealth, thick] (6,0) to (6.95,0);
\draw[-stealth, thick] (a0) to[out=25,in=130] (a2);
\draw[-stealth, thick] (a0) to[out=30,in=130] (a3);
\draw[-stealth, thick] (a0) to[out=35,in=130] (a4);
\draw[-stealth, thick] (a0) to[out=40,in=130] (a10);
\draw[-stealth, thick] (a0) to[out=45,in=130] (a11);
\draw[-stealth, thick] (a10) to[out=-50,in=240] (a12);
\draw[-stealth, thick] (a4) to[out=-50,in=240] (a12);
\draw[-stealth, thick] (a3) to[out=-50,in=240] (a12);
\draw[-stealth, thick] (a2) to[out=-50,in=240] (a12);
\draw[-stealth, thick] (a1) to[out=-50,in=240] (a12);
\end{scope}
\end{tikzpicture}
\end{center}
\caption{The graphs $\PS_{n+1}$ and $\car{n+1}{1}=\Car_{n+1}$.}
\end{figure}
\begin{figure}[th]
\begin{center}
\begin{tikzpicture}[scale=0.8]
\begin{scope}
\node[vertex][fill,label=below:\footnotesize{$1$}](a0) at (0,0) {};
\node[vertex][fill,label=below:\footnotesize{$2$}](a1) at (1,0) {};
\node[vertex][fill,label=below:\footnotesize{$3$}](a2) at (2,0) {};
\node[vertex][fill,label=below:\footnotesize{$4$}](a3) at (3,0) {};
\node[vertex][fill,label=below:\footnotesize{$5$}](a4) at (4,0) {};
\node[vertex][fill,label=below:\footnotesize{$6$}](a10) at (5,0) {};
\node[vertex][fill,label=above:\footnotesize{$7$}](a11) at (6,0) {};
\node[vertex][fill,label=above:\footnotesize{$8$}](a12) at (7,0) {};
\draw[-stealth, thick] (0,0)--(.95,0);
\draw[-stealth, thick] (1,0)--(1.95,0);
\draw[-stealth, thick] (2,0)--(2.95,0);
\draw[-stealth, thick] (3,0)--(3.95,0);
\draw[-stealth, thick] (4,0)--(4.95,0);
\draw[-stealth, thick] (5,0)--(5.95,0);
\draw[-stealth, thick] (6,0) to (6.95,0);
\draw[-stealth, thick] (a0) to[out=25,in=130] (a2);
\draw[-stealth, thick] (a0) to[out=30,in=130] (a3);
\draw[-stealth, thick] (a0) to[out=35,in=130] (a4);
\draw[-stealth, thick] (a0) to[out=40,in=130] (a10);
\draw[-stealth, thick] (a0) to[out=45,in=130] (a11);
\draw[-stealth, thick] (a1) to[out=25,in=130] (a3);
\draw[-stealth, thick] (a1) to[out=30,in=130] (a4);
\draw[-stealth, thick] (a1) to[out=35,in=130] (a10);
\draw[-stealth, thick] (a1) to[out=40,in=130] (a11);
\draw[-stealth, thick] (a10) to[out=-50,in=230] (a12);
\draw[-stealth, thick] (a4) to[out=-50,in=235] (a12);
\draw[-stealth, thick] (a3) to[out=-50,in=240] (a12);
\draw[-stealth, thick] (a2) to[out=-50,in=240] (a12);
\end{scope}
\begin{scope}[xshift=250, yshift=0]
\node[vertex][fill,label=below:\footnotesize{$1$}](a0) at (0,0) {};
\node[vertex][fill,label=below:\footnotesize{$2$}](a1) at (1,0) {};
\node[vertex][fill,label=below:\footnotesize{$3$}](a2) at (2,0) {};
\node[vertex][fill,label=below:\footnotesize{$4$}](a3) at (3,0) {};
\node[vertex][fill,label=below:\footnotesize{$5$}](a4) at (4,0) {};
\node[vertex][fill,label=below:\footnotesize{$6$}](a10) at (5,0) {};
\node[vertex][fill,label=above:\footnotesize{$7$}](a11) at (6,0) {};
\node[vertex][fill,label=above:\footnotesize{$8$}](a12) at (7,0) {};
\draw[-stealth, thick] (0,0)--(.95,0);
\draw[-stealth, thick] (1,0)--(1.95,0);
\draw[-stealth, thick] (2,0)--(2.95,0);
\draw[-stealth, thick] (3,0)--(3.95,0);
\draw[-stealth, thick] (4,0)--(4.95,0);
\draw[-stealth, thick] (5,0)--(5.95,0);
\draw[-stealth, thick] (6,0) to (6.95,0);
\draw[-stealth, thick] (a0) to[out=30,in=130] (a3);
\draw[-stealth, thick] (a0) to[out=35,in=130] (a4);
\draw[-stealth, thick] (a0) to[out=40,in=130] (a10);
\draw[-stealth, thick] (a0) to[out=45,in=130] (a11);
\draw[-stealth, thick] (a1) to[out=25,in=130] (a3);
\draw[-stealth, thick] (a1) to[out=30,in=130] (a4);
\draw[-stealth, thick] (a1) to[out=35,in=130] (a10);
\draw[-stealth, thick] (a1) to[out=40,in=130] (a11);
\draw[-stealth, thick] (a2) to[out=30,in=130] (a4);
\draw[-stealth, thick] (a2) to[out=35,in=130] (a10);
\draw[-stealth, thick] (a2) to[out=40,in=130] (a11);
\draw[-stealth, thick] (a10) to[out=-50,in=230] (a12);
\draw[-stealth, thick] (a4) to[out=-50,in=235] (a12);
\draw[-stealth, thick] (a3) to[out=-50,in=240] (a12);
\end{scope}
\end{tikzpicture}
\end{center}
\caption{The graphs $\car82$ and $\car83$.}
\end{figure}
\begin{defn}\label{defn.graph}
Let $G$ be a connected acyclic directed graph with vertex set $V(G)=\{1,\ldots, n+1\}$ and edge multiset $E(G)$ with $m$ edges. Further assume that
\begin{itemize}
\item[(a)] the out-degree of each of the vertices $1$ through $n$ is at least one,
\item[(b)] the in-degree of each of the vertices $2$ through $n+1$ is at least one,
\item[(c)] the edges of $G$ are each directed from $i$ to $j$ if $i<j$.
\end{itemize}
For $i=1,\ldots, n$, let $t_i = \outdeg_i-1$ be one less than the out-degree of the vertex $i$. The {\em shifted out-degree vector} of $G$ is $\bt=(t_1,\ldots,t_n)\in\bbZ_{\geq0}^n$.
Similarly for $i=2,\ldots, n+1$, let $u_i=\indeg_i-1$ be one less than the in-degree of the vertex $i$. The {\em shifted in-degree vector} of $G$ is $\bu = (u_2,\ldots,u_{n+1})\in\bbZ_{\geq0}^n$.
Note that $\sum_{i=1}^n t_i= \sum_{i=2}^{n+1}u_i = m-n$.
Given $\ba=(a_1,\ldots, a_n, -\sum_{i=1}^n a_i)$ with $a_i \in \bbZ_{\geq0}$, an {\em $\ba$-flow on $G$} is a tuple $(f_e)_{e\in E(G)} \in \bbR_{\geq0}^{m}$ such that
$$\sum_{(j,k)\in E(G)} f_{(j,k)} - \sum_{(i,j)\in E(G)} f_{(i,j)} = a_j $$
for $j=1,\ldots, n$. The {\em flow polytope of $G$ with net flow $\ba$} is the set $\calF_G(\ba)$ of $\ba$-flows on $G$. Note that $\dim\calF_G(\ba) = m-n$.
\end{defn}
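A minimal Python sketch of these degree statistics (reusing the \texttt{caracol\_edges} helper from the earlier sketch; the data conventions are chosen only for illustration):
\begin{verbatim}
def shifted_degree_vectors(n, edges):
    """Shifted out-degree vector t = (t_1, ..., t_n) and shifted in-degree
    vector u = (u_2, ..., u_{n+1}) for a graph on the vertices 1, ..., n+1."""
    outdeg = [0] * (n + 2)
    indeg = [0] * (n + 2)
    for (i, j) in edges:
        outdeg[i] += 1
        indeg[j] += 1
    t = [outdeg[i] - 1 for i in range(1, n + 1)]
    u = [indeg[j] - 1 for j in range(2, n + 2)]
    return t, u

# Example: Car(6, 2) has m = 12 edges, so dim F_G(a) = m - n = 7,
# and both shifted degree vectors sum to m - n.
n, k = 5, 2
E = caracol_edges(n, k)      # from the earlier sketch
t, u = shifted_degree_vectors(n, E)
assert len(E) == 12 and sum(t) == sum(u) == len(E) - n == 7
\end{verbatim}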
\subsection{The generalized Lidskii formulas}
\begin{defn}
For $i=1,\ldots, n$, let $\alpha_i = \varepsilon_i-\varepsilon_{i+1}$, where $\varepsilon_i$ is the $i$-th standard basis vector of $\bbR^{n+1}$.
To each edge $e=(i,j)$ of $G$, we associate the vector
$$\alpha_e = \alpha_{(i,j)} = \alpha_i+\cdots + \alpha_{j-1} = \varepsilon_i-\varepsilon_j,$$
and let $\Phi_G^+ = \{\alpha_e \mid e\in E(G)\}$ be the multiset of {\em positive roots associated to $G$}.
The {\em Kostant partition function of $G$ evaluated at $\ba$}, denoted by $K_G(\ba)$, is the number of ways of expressing the vector $\ba\in \bbZ^{n+1}$ as a linear combination of the vectors in $\Phi_G^+$.
\end{defn}
In this setting, integral $\ba$-flows on $G$ are equivalent to vector partitions of $\ba$. Thus, the number of integral $\ba$-flows on $G$ is $K_G(\ba)$.
The {\em normalized volume} of a $d$-dimensional lattice polytope is $d!$ times its Euclidean volume.
\begin{theorem}[Lidskii formulas, {\cite[Theorem 38]{BV}}, {\cite[Theorem 1.1]{MM18}}]
\label{thm.lidskii}
Let $G$ be a connected acyclic directed graph with vertex set $\{1,\ldots, n+1\}$ and $m$ edges, along with the additional properties as outlined in Definition~\ref{defn.graph}.
The normalized volume of the flow polytope $\calF_G(\ba)$ of $G$ with net flow vector $\ba=(a_1,\ldots,a_n, -\sum_{i=1}^n a_i)$ is
$$\vol\calF_G(\ba) = \sum_{\bs \rhd \bt} {m-n \choose s_1,\ldots,s_n}\cdot
a_1^{s_1}\cdots a_n^{s_n}\cdot
K_{G}(s_1-t_1, \ldots, s_n-t_n, 0),$$
and the number of lattice points of $\calF_G(\ba)$ is
\begin{align*}
K_G(\ba)
&= \sum_{\bs\rhd\bt}
{a_1+t_1\choose s_1} \cdots {a_n+t_n\choose s_n} \cdot
K_G(s_1-t_1, \ldots, s_n-t_n, 0),\\
&= \sum_{\bs\rhd\bt}
\multiset{a_1-u_1}{s_1} \cdots \multiset{a_n-u_n}{s_n} \cdot
K_G(s_1-t_1, \ldots, s_n-t_n, 0),
\end{align*}
where $K_G$ is the Kostant partition function of $G$, and the sum is over weak compositions $\bs=(s_1,\ldots,s_n) \vDash m-n$ such that $\sum_{i=1}^{j} s_i \geq \sum_{i=1}^{j} t_i$ for every $j$, that is, $\bs \rhd \bt$.
\end{theorem}
The special case of the Lidskii volume formula at $\ba=(1,0,\ldots, 0,-1)$ plays a central role in the following sections.
\begin{corollary}\label{cor.main} Let $G$ be a directed graph with $n+1$ vertices and $m$ edges, with shifted out-degree and in-degree vectors $\bt=(t_1,\ldots, t_n)$ and $\bu=(u_2,\ldots, u_{n+1})$. Then
\begin{align*}
\vol\calF_G(1,0,\ldots,0,-1)
&= K_G(m-n-t_1, -t_2,\ldots, -t_n, 0), \label{eqn.outgrav}\\
&= K_G(0, u_2, \ldots, u_n, u_{n+1}-(m-n)).
\end{align*}
\end{corollary}
Thus, the volume of the flow polytope $\calF_G(1,0,\ldots,0,-1)$ can be computed by counting the number of lattice points of two related polytopes, as noted by M\'esz\'aros and Morales in~\cite{MM18}.
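For small $k$-caracol graphs this count can be carried out by brute force. The following Python sketch (an informal check, reusing \texttt{caracol\_edges} and \texttt{shifted\_degree\_vectors} from the earlier sketches, and not an efficient algorithm) counts integral flows vertex by vertex, which computes $K_G(\bv)$, and verifies that the two evaluations in Corollary~\ref{cor.main} agree and produce the rational Catalan numbers of Theorem~\ref{thm.onezerozero} below.
\begin{verbatim}
from math import comb

def rational_catalan(a, b):
    return comb(a + b, a) // (a + b)

def kostant(n, edges, v):
    """K_G(v) for v = (v_1, ..., v_{n+1}) summing to 0: the number of
    nonnegative integer combinations of the roots e_i - e_j, (i, j) in edges,
    equal to v.  Brute force via the flow interpretation: vertices are
    processed in order, and the required outflow of each vertex is split
    over its outgoing edges in all possible ways."""
    out = {i: sorted(j for (s, j) in edges if s == i) for i in range(1, n + 1)}

    def compositions(total, parts):
        if parts == 0:
            if total == 0:
                yield ()
            return
        for first in range(total + 1):
            for rest in compositions(total - first, parts - 1):
                yield (first,) + rest

    def count(vertex, excess):
        if vertex == n + 1:
            return 1          # conservation at the sink is automatic
        need = excess[vertex - 1]
        if need < 0:
            return 0
        total = 0
        for comp in compositions(need, len(out[vertex])):
            new_excess = list(excess)
            for tgt, c in zip(out[vertex], comp):
                new_excess[tgt - 1] += c
            total += count(vertex + 1, tuple(new_excess))
        return total

    return count(1, tuple(v))

# Corollary cor.main for the k-caracol graphs: both Kostant evaluations agree,
# and (anticipating Theorem thm.onezerozero) give a rational Catalan number.
for n in range(3, 6):
    for k in range(1, n):
        E = caracol_edges(n, k)            # from the earlier sketches
        t, u = shifted_degree_vectors(n, E)
        m = len(E)
        v_out = [m - n - t[0]] + [-x for x in t[1:]] + [0]
        v_in = [0] + u[:-1] + [u[-1] - (m - n)]
        vol = kostant(n, E, v_out)
        assert vol == kostant(n, E, v_in) == rational_catalan(n - k, k * (n - k) - 1)
\end{verbatim}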
\subsection{Kostant partition functions and gravity diagrams}
In~\cite{BGHHKMY}, we introduced a combinatorial interpretation of the Kostant partition function of a graph $G$, which we called (out-degree) gravity diagrams. Here, we introduce an analogous notion of in-degree gravity diagrams.
\begin{defn}
A vector $\bv = (v_1,\ldots, v_{n+1}) = c_1\alpha_1 + \cdots + c_n\alpha_n$ can be represented by an array of $c_j$ dots in the $j$-th column, with the dots drawn so that the column is justified upward.
A positive root $\alpha_e = \alpha_{(i,j)} = \alpha_i+\cdots + \alpha_{j-1} \in \Phi_G^+$ can then be viewed as a line segment that joins dots in consecutive columns, from the $i$-th column to the $(j-1)$-th column.
So, given a partition $\bv=\sum_{\alpha_{(i,j)}\in \Phi_G^+} p_{(i,j)}[\alpha_{(i,j)}]$ of the vector $\bv$ using roots from $\Phi_G^+$, a {\em line-dot diagram} for $\bv$ with respect to $\Phi_G^+$ is a pictorial representation of the vector partition that consists of the array of dots for $\bv$, and $p_{(i,j)}$ line segments from the $i$-th column to the $(j-1)$-th column for each edge $(i,j)\in E(G)$ in which each dot is incident to at most one nontrivial line segment. We consider a single dot to be a line of length zero, or a trivial line segment.
\end{defn}
Of course, a given vector partition may have multiple line-dot diagram representations. Two line-dot diagrams are {\em equivalent} if they represent the same vector partition, and a {\em gravity diagram} is a representative of an equivalence class of line-dot diagrams. Let $\calG_G(\bv)$ denote a set of gravity diagrams for the vector $\bv$ with respect to the graph $G$.
\begin{theorem}[{\cite[Theorem 3.1]{BGHHKMY}}] \label{thm.gravitydiagrams}
Let $G$ be a directed graph with $n+1$ vertices and whose edges are directed from $i$ to $j$ if $i<j$. For any vector $\bv = (v_1,\ldots, v_{n+1})$ such that $\sum_{i=1}^{n+1}v_i=0$,
$$K_G(\bv) = |\calG_G(\bv)|.$$
\end{theorem}
By Corollary~\ref{cor.main}, we see that the volume of the flow polytope $\calF_G(1,0,\ldots,0,-1)$ can be computed by counting a set of associated gravity diagrams, provided that they can be described systematically.
There are two vectors which are most pertinent to the study of volumes of flow polytopes with unit flow $\ba=(1,0,\ldots,0,-1)$.
\begin{defn} Given a directed graph $G$ with shifted out-degree vector $\bt$,
the set of {\em out-degree gravity diagrams} of $G$ is a set of gravity diagrams for the vector
\begin{align*}
\bv_{\mathrm{out}}
&= (m-n-t_1,-t_2,\ldots, -t_n,0)\\
&= (t_2+\cdots+t_n)\alpha_1 + (t_3+\cdots+t_n)\alpha_2 + \cdots + t_{n}\alpha_{n-1}
\end{align*}
with respect to the set of positive roots $\Phi_G^+$. This is denoted by $\outgrav_G(\bv_{\mathrm{out}})$.
In a similar vein, given a directed graph $G$ with shifted in-degree vector $\bu$, the set of {\em in-degree gravity diagrams} of $G$ is a set of gravity diagrams for the vector
\begin{align*}
\bv_{\mathrm{in}}
&=(0,u_2, \ldots, u_n, u_{n+1}-(m-n))\\
&= u_2\alpha_2 + (u_2+u_3)\alpha_3 + \cdots + (u_2+\cdots+u_n)\alpha_n
\end{align*}
with respect to the set of positive roots $\Phi_G^+$. This is denoted by $\ingrav_G(\bv_{\mathrm{in}})$.
\end{defn}
\begin{corollary} \label{cor.gravitydiagrams}
Combining Corollary~\ref{cor.main} and Theorem~\ref{thm.gravitydiagrams}, the volume of the flow polytope of $G$ with unit flow $\ba=(1,0,\ldots,0,-1)$ is equal to the number of out-degree gravity diagrams of $G$ and the number of in-degree gravity diagrams of $G$.
$$\vol\calF_G(1,0,\ldots,0,-1) = |\outgrav_G(\bv_{\mathrm{out}})| = |\ingrav_G(\bv_{\mathrm{in}})|. $$
\end{corollary}
In the next sections, we describe a canonical way to define out-degree and in-degree gravity diagrams for the $k$-caracol family of graphs. We note that some of our conventions differ from the ones originally chosen for the (classical) caracol graph $\Car_{n+1} = \car{n+1}{1}$ in~\cite{BGHHKMY}.
\subsection{Gravity diagrams for the $k$-caracol graphs} \label{sec.gravity}
The $k$-caracol graph $\car{n+1}{k}$ has $n+1$ vertices and $m=(k+1)(n-k)+n-2$ edges. Its shifted out-degree vector $\bt$ and shifted in-degree vector $\bu$ are
\begin{align*}
\bt
&= (\underbrace{n-k, \ldots, n-k}_{k-1}, n-k-1, \underbrace{1,\ldots, 1}_{n-k-1},0 ) \qquad\hbox{ and}
\\
\bu
&= (\underbrace{0, \ldots, 0}_{k-1}, k-1, \underbrace{k,\ldots, k}_{n-k-1}, n-k-1 ),
\end{align*}
and their coordinates sum to $m-n=(k+1)(n-k)-2$. We also have
\begin{align*}
\bv_{\mathrm{out}}
&= \sum_{j=1}^{k-1} ((k+1-j)(n-k)-2)\alpha_j + \sum_{j=k}^{n-2} (n-j-1)\alpha_j
\qquad\hbox{ and} \\
\bv_{\mathrm{in}}
&= \sum_{j=k+1}^{n} ((j-k)k-1)\alpha_{j}.
\end{align*}
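These closed forms can be checked mechanically. The following Python sketch (an informal check, reusing the helpers from the earlier sketches) compares them with the degree vectors computed directly from the edge set, and with the coefficients of $\bv_{\mathrm{out}}$ and $\bv_{\mathrm{in}}$ obtained as suffix sums of $\bt$ and prefix sums of $\bu$, as in their definitions above.
\begin{verbatim}
def caracol_closed_forms(n, k):
    """The displayed formulas for t, u and for the alpha-coefficients
    of v_out and v_in for Car(n+1, k)."""
    t = [n - k] * (k - 1) + [n - k - 1] + [1] * (n - k - 1) + [0]
    u = [0] * (k - 1) + [k - 1] + [k] * (n - k - 1) + [n - k - 1]
    c_out = ([(k + 1 - j) * (n - k) - 2 for j in range(1, k)]
             + [n - j - 1 for j in range(k, n - 1)]
             + [0, 0])
    c_in = [0] * k + [(j - k) * k - 1 for j in range(k + 1, n + 1)]
    return t, u, c_out, c_in

for n in range(3, 9):
    for k in range(1, n):
        E = caracol_edges(n, k)            # from the earlier sketches
        t, u = shifted_degree_vectors(n, E)
        tf, uf, c_out, c_in = caracol_closed_forms(n, k)
        assert t == tf and u == uf and sum(t) == sum(u) == len(E) - n
        # coefficient of alpha_j in v_out is t_{j+1} + ... + t_n
        assert [sum(t[j:]) for j in range(1, n + 1)] == c_out
        # coefficient of alpha_j in v_in is u_2 + ... + u_j
        assert [sum(u[:j - 1]) for j in range(1, n + 1)] == c_in
\end{verbatim}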
The in-degree gravity diagrams for $\bv_{\mathrm{in}}$ are defined on a triangular array of $(j-k)k-1$ dots in the $j$-th column for $j=k+1,\ldots,n$. Since the dots lie in columns indexed by $\alpha_j$ only for $j=k+1,\ldots, n$, for the purposes of defining the set of in-degree gravity diagrams we only need to consider the positive roots which correspond to the edges in the graph $G=\car{n+1}{k}$ when restricted to the vertex set $\{k+1,k+2,\ldots,n+1\}$. These positive roots are
$$\Phi_{G|_{V=\{k+1,\ldots, n+1\}}}^+ = \{ \alpha_j\}_{j=k+1}^n \cup \{\alpha_j + \cdots +\alpha_n \}_{j=k+1}^{n-1}, $$
so we see that each nontrivial line segment must end in the $n$-th column.
This leads us to choose the following conventions for the in-degree gravity diagrams:
\begin{enumerate}
\item[(a)] each line segment must be horizontal,
\item[(b)] a longer line segment must be in a row above that of a shorter line segment.
\end{enumerate}
This uniquely defines a representative for each equivalence class of in-degree line-dot diagrams for $\car{n+1}{k}$.
See Figures~\ref{fig.car61} and~\ref{fig.car62} for some examples.
The out-degree gravity diagrams for $\bv_{\mathrm{out}}$ are defined on the array consisting of $(k+1-j)(n-k)-2$ dots in the $j$-th column for $j=1,\ldots, k-1$, and $n-j-1$ dots in the $j$-th column for $j=k,\ldots, n-2$, where the latter portion forms a right-triangular array.
Given this, we only need to consider the positive roots which correspond to the edges in the graph $G=\car{n+1}{k}$ when restricted to the vertex set $\{1,2,\ldots, n-1\}$.
From this, we see that every nontrivial line segment for the out-degree gravity diagram begins in the $i$-th column for some $i=1,\ldots, k$, and ends in the $j$-th column for some $j = k, \ldots, n-2$. In other words, every nontrivial line segment contains a dot from the $k$-th column of the array, and this leads us to choose the following conventions for the out-degree gravity diagrams:
\begin{enumerate}
\item[(a)] each line segment must be horizontal,
\item[(b)] the line segments are ordered from top to bottom so that the line segments which end at the $q$-th column are above the line segments which end at the $p$-th column if $q>p$, and if two line segments end at the same column, then the longer line segment is above the shorter line segment.
\end{enumerate}
This uniquely defines a representative for each equivalence class of out-degree line-dot diagrams for $\car{n+1}{k}$.
See Figures~\ref{fig.car61} and~\ref{fig.car62} for some examples.
\begin{figure}[ht!]
\begin{center}
\input{1-caracol_gravity}
\end{center}
\caption{The out-degree and in-degree gravity diagrams for $\car61$.}
\label{fig.car61}
\end{figure}
\begin{figure}[ht!]
\begin{center}
\input{2-caracol_gravity}
\end{center}
\caption{The out-degree and in-degree gravity diagrams for $\car62$.}
\label{fig.car62}
\end{figure}
\begin{remark}\label{rem.trapezoid}
An out-degree gravity diagram of $\car{n+1}{k}$ is defined on an array consisting of $(k+1-j)(n-k)-2$ dots in the $j$-th column for $j=1,\ldots, k-1$, and $n-j-1$ dots in the $j$-th column for $j=k,\ldots, n-2$. However, note that we can truncate the dots in the first $k-1$ columns of the out-degree gravity diagram below the first $n-k-1$ rows of dots without loss of generality since every nontrivial line segment must contain a dot from the column indexed by $\alpha_k$, and so no line segments can be drawn on those dots below the first $n-k-1$ rows. In other words, we view the out-degree gravity diagrams of $\car{n+1}{k}$ as a trapezoidal array of $k-1+i$ dots in the $i$-th row for $i=1,\ldots, n-k-1$. See the left side of Figure~\ref{fig.3-car_bijection_out_d} for an example.
\end{remark}
\subsection{Fuss-Catalan volumes}
In the paper~\cite{BGHHKMY}, we computed the volume of the flow polytope of
the caracol graph $\car{n+1}{1}$ with unit flow $\ba=(1,0,\ldots, 0, -1)$ by describing a bijection between its out-degree gravity diagrams and a set of Dyck paths.
We now generalize this method and compute the volume of the flow polytope of the $k$-caracol graph $\car{n+1}{k}$ with unit flow $\ba=(1,0,\ldots, 0, -1)$ in two ways. The first is a bijection between in-degree gravity diagrams and a set of rational Catalan Dyck paths, and the second is a bijection between the out-degree gravity diagrams and the same set of rational Catalan Dyck paths.
Before we do this, we introduce some basic background on rational Catalan combinatorics, which is a generalization due to Armstrong, Loehr, and Warrington~\cite{ALW} of the classical Catalan numbers.
\begin{defn} Let $a,b$ be nonnegative integers such that $b\geq a$.
A {\em lattice path} from $(0,0)$ to $(b,a)$ is a path comprised of $a$ north steps $N=(0,1)$ and $b$ east steps $E=(1,0)$. We may equivalently represent the lattice path as a word $N^{s_1}EN^{s_2}E \cdots N^{s_b}E$, so that $\bs = (s_1,\ldots, s_b)\in \bbZ_{\geq0}^b$ is a weak composition of $a = |\bs| = s_1+\cdots + s_b$ of length $\ell(\bs) = b$. In this paper, we will often view lattice paths as weak compositions.
\end{defn}
\begin{defn}
Given two weak compositions $\bs=(s_1,\ldots, s_b)\vDash a$ and $\bt = (t_1,\ldots, t_b) \vDash a$, we say that $\bs$ {\em dominates} $\bt$ and we write $\bs \rhd \bt$
if $s_1+ \cdots + s_j \geq t_1 + \cdots + t_j$ for each $j=1,\ldots, b$.
A {\em $\bt$-Dyck path} is a weak composition $\bs = (s_1,\ldots, s_b)$ that dominates $\bt$.
\end{defn}
Visually, $\bs = (s_1,\ldots, s_b)\vDash a$ is the lattice path $N^{s_1}E \cdots N^{s_b}E$ on the rectangular grid from $(0,0)$ to $(b, a)$.
On this grid, the composition $\bt = (t_1,\ldots, t_b)\vDash a$ is represented by shading $t_j$ squares in the $j$-th column of squares, starting at height $t_1+\cdots+t_{j-1}$. The set of $\bt$-Dyck paths is then the set of lattice paths from $(0,0)$ to $(b, a)$ which lie above the shaded $\bt$-region.
The {\em area} of a $\bt$-Dyck path is the number of squares lying between the path and the shaded $\bt$-region.
\begin{defn}
For coprime positive integers $b>a$, a {\em rational $(a,b)$-Dyck path}
is a lattice path from $(0,0)$ to $(b,a)$ in the integer lattice $\bbZ^2$ comprised of north steps $N=(0,1)$ and east steps $E=(1,0)$ that stays above the diagonal line from $(0,0)$ to $(b,a)$. Let $\calD(a,b)$ denote the set of rational $(a,b)$-Dyck paths.
\end{defn}
\begin{remark} \label{rem.rationalDyck}
Rational $(a,b)$-Dyck paths are a special case of $\bt$-Dyck paths. By shading the squares on the $b$ by $a$ grid which intersect the line $y=\frac{a}{b}x$, we obtain the (row) signature $(r_1,\ldots, r_a)$ of the path, where $r_i$ is the number of shaded squares in the $i$-th row. The associated weak composition $\bt$ is then the transpose of $(r_1-1,\ldots, r_{a-1}-1, r_a)$. In the proof of Theorem~\ref{thm.onezerozero}, we will use the fact that for $a=n-k$ and $b=k(n-k)-1$, rational $(a,b)$-Dyck paths are $\bt$-Dyck paths where $\bt$ is the transpose of $(k-1,k^{n-k-1})$.
\end{remark}
\begin{defn} For coprime positive integers $b>a$, the {\em rational Catalan number}
\begin{equation}
\Cat(a,b)=\frac{1}{a+b}{a+b\choose a} = \frac{1}{b}{a+b-1\choose a} = \frac{1}{a}{a+b-1\choose b}
\end{equation}
enumerates rational $(a,b)$-Dyck paths~\cite[Section 3.2]{ALW}.
\end{defn}
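As a brute-force sanity check of this count, the following Python sketch (the helper names are chosen only for this note) enumerates rational $(a,b)$-Dyck paths directly and compares the result with the formula for $\Cat(a,b)$.
\begin{verbatim}
from math import comb, gcd

def rational_catalan(a, b):
    return comb(a + b, a) // (a + b)

def count_rational_dyck(a, b):
    """Number of lattice paths from (0,0) to (b,a) with steps N=(0,1), E=(1,0)
    that stay weakly above the line y = (a/b)x; for coprime a, b the path
    can only touch the line at its two endpoints."""
    def walk(x, y):
        if (x, y) == (b, a):
            return 1
        paths = 0
        if y < a:
            paths += walk(x, y + 1)            # a north step is always safe
        if x < b and b * y >= a * (x + 1):
            paths += walk(x + 1, y)            # east step only if the path stays above the line
        return paths
    return walk(0, 0)

for a in range(1, 5):
    for b in range(a + 1, 10):
        if gcd(a, b) == 1:
            assert count_rational_dyck(a, b) == rational_catalan(a, b)
\end{verbatim}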
\begin{remark} Two well-known special cases of the rational Catalan numbers are the classical Catalan numbers
$$\Cat(n) = \Cat(n,n+1) = \frac{1}{2n+1}{2n+1\choose n}
= \frac{1}{n+1}{2n\choose n}
= \frac{1}{n}{2n\choose n+1},$$
and the classical Fuss-Catalan numbers
$$\Cat(n,kn+1) = \frac{1}{(k+1)n+1}{(k+1)n+1\choose n}
= \frac{1}{kn+1}{(k+1)n\choose n}
= \frac{1}{n}{(k+1)n\choose kn+1},$$
for $k\in\bbN$.
\end{remark}
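A short Python sketch (informal, included only as a numerical check) verifying these specializations and the equality of the displayed expressions for small parameters:
\begin{verbatim}
from math import comb

def rational_catalan(a, b):
    return comb(a + b, a) // (a + b)

for n in range(1, 8):
    # classical Catalan numbers: Cat(n) = Cat(n, n+1)
    cat_n = rational_catalan(n, n + 1)
    assert cat_n == comb(2 * n, n) // (n + 1) == comb(2 * n, n + 1) // n
    for k in range(1, 5):
        # classical Fuss-Catalan numbers: Cat(n, kn+1)
        fc = rational_catalan(n, k * n + 1)
        assert fc == comb((k + 1) * n, n) // (k * n + 1) == comb((k + 1) * n, k * n + 1) // n
\end{verbatim}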
It turns out that the volume of the flow polytope of the $k$-caracol graph with unit flow $\ba=(1,0,\ldots,0,-1)$ is a generalized Fuss-Catalan number.
\begin{theorem} \label{thm.onezerozero}
For $k\in \bbN$ and $n>k$,
$$\vol \calF_{\car{n+1}{k}}(1,0,\ldots,0,-1)
=\Cat(n-k,k(n-k)-1).$$
\end{theorem}
\begin{proof} Let $a=n-k$ and $b=ka-1$.
We construct a bijection $\Psi_{\mathrm{in}}:\ingrav_G(\bv_{\mathrm{in}}) \rightarrow\calD(a,b)$ from the set of in-degree gravity diagrams of $\car{n+1}{k}$ to the set of rational $(a,b)$-Dyck paths.
Recall from Section~\ref{sec.gravity} that an in-degree gravity diagram of $\car{n+1}{k}$ is defined on a triangular array of $(j-k)k-1$ dots in the $j$-th column for $j=k+1,\ldots, n$. Given an in-degree gravity diagram $\Gamma$, we embed
it into the squares of the $\bbZ^2$ grid by rotating $\Gamma$ counterclockwise by ninety degrees, aligned so that the dots in the column indexed by $\alpha_n$ lie in the squares just above the line $y=a$. See Figure~\ref{fig.3-car_bijection_in} for an illustration.
As noted in Remark~\ref{rem.rationalDyck}, the set of $(a,b)$-Dyck paths is the set of $\bt$-Dyck paths where $\bt$ is the transpose of $(k-1,k^{n-k-1})$. With this interpretation, one can see that the columns of $\Gamma$ embed into the squares of $\bbZ^2$ precisely so that the lower boundary of $\Gamma$ consists of the shaded squares in the $(a,b)$-Dyck path diagram.
Line segments of the embedded $\Gamma$ extend along columns from the top row of the Dyck path diagram, and by the convention chosen for the in-degree gravity diagrams, the lengths of these columns are non-increasing from left to right. Thus, the line segments of $\Gamma$ define a unique rational $(a,b)$-Dyck path associated to $\Gamma$. Conversely, any $(a,b)$-Dyck path defines an in-degree gravity diagram for $\car{n+1}{k}$ whose line segments occupy every square on the northwest side of the Dyck path.
Therefore, $|\ingrav_{\car{n+1}{k}}(\bv_{\mathrm{in}})| = |\calD(a,b)| = \Cat(a,b)$, and
we conclude by Corollary~\ref{cor.gravitydiagrams} that $\vol\calF_{\car{n+1}{k}}(1,0,\ldots,0,-1) = \Cat(n-k,k(n-k)-1)$.
\end{proof}
\begin{corollary} We recover the following formulas as special cases.
At $k=1$,
$$\vol \calF_{\Car_{n+1}}(1,0,\ldots, 0,-1) = \Cat(n-1, n-2) = \frac{1}{n-1}{2n-4\choose n-2}$$
is a classical Catalan number.
At $k=n-1$,
$$\vol \calF_{\PS_n}(1,0,\ldots, 0,-1)
= \vol \calF_{\car{n+1}{n-1}}(1,0,\ldots, 0,-1) = \Cat(1, n-2) = 1.$$
\end{corollary}
\begin{proof}
Earlier, we observed that when $k=n-1$, the graph $\car{n+1}{n-1}$ is the graph $\PS_n$ with an extra edge $(n,n+1)$. Note that this edge does not affect the equations defining the polytope, so
$\calF_{\PS_n}(a_1,\ldots,a_{n-1}, \hbox{$-\sum_{i=1}^{n-1} a_i$})
= \calF_{\car{n+1}{n-1}}(a_1,\ldots,a_{n-1}, a_n, \hbox{$-\sum_{i=1}^{n} a_i$}).$
\end{proof}
\begin{remark}
M\'esz\'aros~\cite{M} developed a method for expressing the volumes of flow polytopes with unit flow as the number of certain triangular arrays, and as an application, used it to construct a family of flow polytopes $\calF_G$ with Fuss-Catalan volume $\Cat(a, ka+1)$.
Using the relationship between rational Dyck paths and in-degree gravity diagrams as a guide, the graph $H_{n+1}^{(k)}$, obtained by taking $G=\car{n+1}{k}$ and adding one more copy of the edge $(k, k+1)$, has shifted in-degree vector $\bu= (0, k^{n-k}, n-k)$, and $\bv_{\mathrm{in}}=\sum_{j=k+1}^n (j-k)k\alpha_j$. This means that an in-degree gravity diagram for $H_{n+1}^{(k)}$ can be embedded in the squares of a $k(n-k)$ by $n-k$ grid. We can, without loss of generality, extend this to a $b=k(n-k)+1$ by $a=n-k$ grid to ensure that $a$ and $b$ are coprime and the bijection between the rational $(a,b)$-Dyck paths and in-degree gravity diagrams will remain unchanged. We hope to explore this variation of the $k$-caracol graphs in future work.
\end{remark}
By Corollary~\ref{cor.gravitydiagrams}, we can obtain a second proof of Theorem~\ref{thm.onezerozero} by constructing a bijection from the set of out-degree gravity diagrams to the same set of rational Dyck paths as above.
\begin{defn} We set some notation that will be used in the proof of the next result.
Recall from Remark~\ref{rem.trapezoid} that without loss of generality, we may consider the out-degree gravity diagrams for $\car{n+1}{k}$ to be defined on a trapezoidal array of $k-1+i$ dots in the $i$-th row for $i=1,\ldots,n-k-1$. Since the line segments of the gravity diagram are horizontal, we let $L_i=[\ell_i,r_i]$ denote the line segment in the $i$-th row, from the $\ell_i$-th column to the $r_i$-th column. The {\em length} of $L_i$ is $d(L_i) = r_i-\ell_i$.
\end{defn}
For $k\in \bbN$ and $n>k$, let $a=n-k$ and $b=ka-1$.
We will define a map $\Psi_{\mathrm{out}}:\outgrav_G(\bv_{\mathrm{out}})\rightarrow \calD(a,b)$ (Definition~\ref{defn.psiout}) from the set of out-degree gravity diagrams for $G=\car{n+1}{k}$ to the set of rational $(a,b)$-Dyck paths in several steps.
Let $\Gamma\in \outgrav_G(\bv_{\mathrm{out}})$ be an out-degree gravity diagram with line segments $L_1,\ldots, L_{a-1}$.
Again, we view an $(a,b)$-Dyck path as a $\bt$-Dyck path where $\bt$ is the transpose of $(k-1, k^{a-1})$. Let $Z$ denote the $\bbZ^2$ grid from $(0,0)$ to $(b,a)$, with the $\bt$-region shaded (this is the lattice on which we can draw an $(a,b)$-Dyck path). Note that $Z$ has exactly $a-1$ nonempty rows of squares lying above its shaded $\bt$-region, so we will show that we can embed the line segments of $\Gamma$ into the rows of squares of $Z$ appropriately, which in turn will define the $(a,b)$-Dyck path associated to $\Gamma$.
To begin with, we label the $j(k-1)$-th column of squares of $Z$ by $\alpha_{k+j}$, for $j=0,\ldots, a-2$. The zeroth column lies to the left of the diagram. See the right side of Figure~\ref{fig.3-car_bijection_out_d} for an example.
We embed the line segment $L_i=[\ell_i,r_i]$ into the $(i+1)$-th row of squares of $Z$ by placing its left endpoint in the column indexed by $\alpha_{r_i}$.
To see that this procedure indeed embeds $L_i$ into the squares lying above the shaded $\bt$-region of $Z$, we first list a few properties which are satisfied by these embedded line segments.
\begin{lemma}\label{lem.lineproperties}
Let $L_i=[\ell_i,r_i]$ be the line segment in the $i$-th row of an out-degree gravity diagram $\Gamma\in \outgrav_G(\bv_{\mathrm{out}})$. Let $\lp(L_i)$, respectively $\rp(L_i)$, denote the column of the Dyck path diagram $Z$ that is occupied by the left (respectively right) endpoint of the embedded line segment $L_i$. Then
\begin{enumerate}
\item[(a)] $1\leq \ell_i\leq k$ and $k\leq r_i\leq k+i-1$,
\item[(b)] $r_i-k \leq d(L_i) \leq r_i-1 \leq k+i-2$,
\item[(c)] $\lp(L_i) \in \{ k-1, 2(k-1), \ldots, (i-1)(k-1)\}$, and
\item[(d)] $\rp(L_i) \leq ik-1$.
\end{enumerate}
\end{lemma}
\begin{proof}
Parts (a) and (b) follow directly from the conventions for $\Gamma$ as a gravity diagram. Part (c) holds because $\lp(L_i)$ occupies the column labeled by $\alpha_{r_i}$, which is the $(r_i-k)(k-1)$-th column of $Z$, and by part (a), $k\leq r_i\leq k+i-1$. Finally, part (d) follows because by part (c), the rightmost column which can be occupied by $\lp(L_i)$ is $(i-1)(k-1)$, and by part (b), the maximum length of $L_i$ is $k+i-2$, so $\rp(L_i) \leq (i-1)(k-1) + k+i-2 = ik-1$.
\end{proof}
Note that there are precisely $ik-1$ squares lying in the $(i+1)$-th row of $Z$, above the shaded $\bt$-region, so by part (d) of the above Lemma, each line segment $L_i$ of $\Gamma$ is embedded into the squares lying above the shaded $\bt$-region of $Z$, as claimed.
\begin{lemma} \label{lem.rightendpoints}
Let $G=\car{n+1}{k}$, and let $\Gamma\in \outgrav_G(\bv_{\mathrm{out}})$ be an out-degree gravity diagram with line segments $L_1,\ldots, L_{a-1}$. Then $\rp(L_1) \leq \cdots \leq \rp(L_{a-1})$.
\end{lemma}
\begin{proof}
We proceed by induction on $a$.
The base cases are for $k\geq1$ and $a=n-k=1$. In these cases, the only out-degree gravity diagram is the empty diagram, and the only $(1,b)$-Dyck path is $NE^b$, so the base cases hold.
Now given $k\geq1$, suppose $\Gamma$ has $a-1$ rows with line segments $L_1,\ldots, L_{a-1}$. By the induction hypothesis, the line segments $L_1,\ldots, L_{a-2}$ of $\Gamma$ embed into rows $2$ through $a-1$ of the Dyck path grid $Z$, and the shape of these embedded line segments defines a (partial) rational $(a-1,b-k)$-Dyck path from $(0,0)$ to $(c,a-1)$ for some $0\leq c \leq b-k$. We now consider embedding the last line segment $L_{a-1}$.
If $L_{a-1}$ and $L_{a-2}$ are embedded into $Z$ so that $\lp(L_{a-1}) = \lp(L_{a-2})$, then by the conventions defining the out-degree gravity diagrams, $d(L_{a-1}) \geq d(L_{a-2})$, and so $\rp(L_{a-1}) \geq \rp(L_{a-2})$.
Otherwise, by construction, $L_{a-1}$ must be embedded so that $\lp(L_{a-1})\geq \lp(L_{a-2})+k-1$. In other words, if $r_{a-2} = k+h$ for some $h\geq0$, then $r_{a-1} \geq k+h+1$. By part (b) of the previous Lemma, we have $d(L_{a-2})\leq k+h-1$ and $h+1 \leq d(L_{a-1})$. Putting this all together,
\begin{align*}
\rp(L_{a-1})
= \lp(L_{a-1})+d(L_{a-1})
&\geq \lp(L_{a-2}) + k-1 + h+1\\
&> \lp(L_{a-2}) + d(L_{a-2})
= \rp(L_{a-2}),
\end{align*}
so the right endpoint of $L_{a-1}$ lies (strictly) to the right of the right endpoint of $L_{a-2}$ in this case also.
\end{proof}
\begin{defn} \label{defn.psiout}
Lemma~\ref{lem.rightendpoints} shows that the line segments $L_1,\ldots, L_{a-1}$ of a gravity diagram $\Gamma\in \outgrav_G(\bv_{\mathrm{out}})$ are embedded into the $(a,b)$-Dyck path grid $Z$ so that the right endpoints of the line segments move weakly to the right. Therefore, we can define $\Psi_{\mathrm{out}}(\Gamma)$ to be the rational $(a,b)$-Dyck path defined by the `rectilinear convex hull' of the embedded line segments of $\Gamma$.
In other words, consider the region of squares that lie above the shaded $\bt$-region as the Ferrers diagram of the partition $\lambda(a,k) = ((a-1)k-1, (a-2)k-1, \ldots, 2k-1, k-1)$.
Then the right endpoints of the embedded line segments coming from $\Gamma$ define a subpartition of $\lambda$, and this subpartition defines the $(a,b)$-Dyck path associated to $\Gamma$.
\end{defn}
See Example~\ref{eg.pairofDycks} and Figure~\ref{fig.3-car_bijection_out_d} for an illustration.
\begin{figure} [ht!]
\begin{center}
\input{3-car_bijection_in}
\end{center}
\caption{An in-degree gravity diagram of $\car{12}{3}$, rotated to embed in its corresponding rational $(8,23)$-Dyck path under the bijection $\Psi_{\mathrm{in}}$.}
\label{fig.3-car_bijection_in}
\end{figure}
\begin{figure} [ht!]
\begin{center}
\input{3-car_bijection_out_d}
\end{center}
\caption{An out-degree gravity diagram of $\car{12}{3}$ and its corresponding rational $(8,23)$-Dyck path under the bijection $\Psi_{\mathrm{out}}$.
}
\label{fig.3-car_bijection_out_d}
\end{figure}
\begin{prop}\label{prop.outgrav}
For $k\in \bbN$ and $n>k$, the map
$\Psi_{\mathrm{out}}:\outgrav_G(\bv_{\mathrm{out}})\rightarrow \calD(a,b)$ from the set of out-degree gravity diagrams of $G=\car{n+1}{k}$ to the set of rational $(a,b)$-Dyck paths is a bijection.
\end{prop}
\begin{proof}
Let $a=n-k$ and $b=ka-1$.
To see that $\Psi_{\mathrm{out}}$ is a bijection, we will describe the inverse map by reconstructing the line segments $L_1,\ldots, L_{a-1}$ for an out-degree gravity diagram $\Gamma \in \outgrav_G(\bv_{\mathrm{out}})$.
Let $\bs$ be a rational $(a,b)$-Dyck path, and let $L_i=[\ell_i,r_i]$ denote the line segment that we will reconstruct from the $(i+1)$-th row of the Dyck path.
The shape of $\bs$ immediately dictates the location of the right endpoint of each embedded line segment, so we need only to determine the location of the left endpoint.
Suppose $jk \leq \rp(L_i) \leq (j+1)k-1$ for some $j=0,\ldots, i-1$.
By construction, $\lp(L_i)= (r_i-k)(k-1)$. We claim that $r_i-k=j$. From there, we would have the length $d(L_i)=\rp(L_i)-\lp(L_i)$, and then we can fully determine the line segment $L_i=[k+j-d(L_i),k+j]$.
As seen in Lemma~\ref{lem.lineproperties}(b), $r_i-k \leq d(L_i) \leq r_i-1$. And since $d(L_i) = \rp(L_i)-\lp(L_i)$, it follows that
$$(r_i-k)k \leq \rp(L_i) \leq (r_i-k+1)k -1.$$
Since we assumed that $\rp(L_i) \leq (j+1)k-1$, then the inequality on the right side implies $r_i-k\leq j$.
On the other hand, we also assumed that $jk\leq\rp(L_i)$, so the inequality on the left side implies $j\leq r_i-k$. Thus we have $r_i-k=j$, as claimed.
Since we can uniquely recover the embedded line segments and therefore the out-degree gravity diagram from any $(a,b)$-Dyck path $\bs$, then $\Psi_{\mathrm{out}}$ is a bijection.
\end{proof}
\begin{corollary} \label{cor.inout}
The composition $\Psi_{\mathrm{in}}^{-1}\circ\Psi_{\mathrm{out}}$ is a bijection between the sets of out-degree and in-degree gravity diagrams of $\car{n+1}{k}$.
\end{corollary}
\begin{proof}
The composition of two bijections is a bijection, and under it, an out-degree and an in-degree gravity diagram of $\car{n+1}{k}$ correspond to each other precisely when they have the same associated rational $(a,b)$-Dyck path.
\end{proof}
\begin{example}\label{eg.pairofDycks}
Figures~\ref{fig.3-car_bijection_in} and~\ref{fig.3-car_bijection_out_d} show a pair of in-degree and out-degree gravity diagrams for $G=\car{12}{3}$ which correspond to each other under the bijection $\Psi_{\mathrm{in}}^{-1}\circ\Psi_{\mathrm{out}}$ because they have the same associated rational $(8,23)$-Dyck path.
In Figure~\ref{fig.3-car_bijection_out_d}, we have an out-degree gravity diagram $\Gamma \in \outgrav_{G}(\bv_{\mathrm{out}})$.
For $j=0,\ldots, 6$, the $(2j)$-th column of squares of the Dyck path diagram is indexed by $\alpha_{3+j}$. These columns are indicated in light grey across the top row of the diagram.
The line segments of $\Gamma$ are $L_1,\ldots, L_7=[3,3], [2,3], [2,4], [1,4], [3,7], [3,7], [1,7]$, and each $L_i=[\ell_i,r_i]$ is embedded into the $(i+1)$-th row of squares of the associated rational $(8,23)$-Dyck path with its left endpoint occupying the column labeled by $\alpha_{r_i}$. The `rectilinear convex hull' of the embedded line segments forms the subpartition $(14,12,12,5,4,1)$ of the partition $\lambda(8,3)=(20,17,14,11,8,5,2)$.
\end{example}
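The passage from an out-degree gravity diagram to its rational $(a,b)$-Dyck path can be carried out mechanically from the conventions above. The following short Python sketch (purely illustrative; the function name and the data layout are ours, not part of the formal development) computes the right endpoints $\rp(L_i)$ and the resulting subpartition for the diagram of Example~\ref{eg.pairofDycks}.
\begin{verbatim}
# Illustrative sketch of Definition "psiout": embed the segments L_i = [l_i, r_i]
# of an out-degree gravity diagram (for a fixed k) and read off the subpartition
# of lambda(a,k) given by the right endpoints of the embedded segments.
def dyck_subpartition(segments, k):
    """segments: list of (l_i, r_i) for the rows i = 1, ..., a-1."""
    right_endpoints = []
    for (l, r) in segments:
        lp = (r - k) * (k - 1)   # left endpoint sits in column (r_i - k)(k - 1)
        rp = lp + (r - l)        # the length of L_i is d(L_i) = r_i - l_i
        right_endpoints.append(rp)
    # parts of the subpartition, read from the top row of Z downwards
    return [rp for rp in reversed(right_endpoints) if rp > 0]

segments = [(3, 3), (2, 3), (2, 4), (1, 4), (3, 7), (3, 7), (1, 7)]  # k = 3, a = 8
print(dyck_subpartition(segments, 3))   # [14, 12, 12, 5, 4, 1]
\end{verbatim}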
\begin{remark}
We make a few comments regarding the special case of the classical caracol graph $G=\car{n+1}{1}$. In this case, the out-degree gravity diagrams are defined on a triangular array of dots with $n-j-1$ dots in the $j$-th column for $j=1,\ldots, n-2$, and each horizontal line segment extends from the first column to the $j$-th column for some $j=1,\ldots, n-2$. Similarly, the in-degree gravity diagrams are defined on a triangular array of dots with $j-2$ dots in the $j$-th column for $j=3,\ldots, n$, and each horizontal line segment extends from the last column to the $j$-th column for some $j=3,\ldots, n$. See Figure~\ref{fig.car61} for the full sets of out-degree and in-degree gravity diagrams for $\car61$.
Given this, an `obvious' bijection between the out-degree and in-degree gravity diagrams for $\car{n+1}{1}$ is the reflection about a vertical axis. Corollary~\ref{cor.inout} gives a second bijection: the out-degree diagrams can equivalently be thought of as subpartitions of the staircase partition $\delta_{n-2}=(n-3,n-4,\ldots, 2,1)$, with the parts of the subpartition defined by the lengths of the horizontal line segments. The bijection $\Psi_{\mathrm{in}}^{-1}\circ\Psi_{\mathrm{out}}$ amounts to conjugation of partitions. See Figure~\ref{fig.car81} for an example for $\car81$.
\end{remark}
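Since the $k=1$ case of Corollary~\ref{cor.inout} is just conjugation of partitions, we record a minimal Python sketch of that operation (purely illustrative; the input below is an arbitrary subpartition of a staircase, not one taken from Figure~\ref{fig.car81}).
\begin{verbatim}
def conjugate(partition):
    """Conjugate (transpose) of a partition given as a weakly decreasing list."""
    if not partition:
        return []
    return [sum(1 for part in partition if part >= i)
            for i in range(1, partition[0] + 1)]

# a subpartition of the staircase (4, 3, 2, 1) and its conjugate
print(conjugate([4, 2, 1]))   # [3, 2, 1, 1]
\end{verbatim}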
\begin{figure}[ht!]
\include{1-car_bijection}
\caption{The bijection $\Psi_{\mathrm{in}}^{-1}\circ\Psi_{\mathrm{out}}$ for the out-degree and in-degree gravity diagrams for $\car{n+1}1$ amounts to the conjugation of partitions.}
\label{fig.car81}
\end{figure}
\begin{remark} \label{rem.duality}
To summarize, we have seen that the volume of the flow polytope of the $k$-caracol graph $G=\car{n+1}{k}$ with unit flow can be computed by counting the lattice points of flow polytopes associated to two different graphs. When $k\geq2$,
\begin{align*}
\big|\calF_{G_{\mathrm{in}}}
&\left(k-1, k^{n-k-1}, -k(n-k)+1\right) \cap \bbZ^{\dim G_{\mathrm{in}} } \big|\\
&=K_{G_{\mathrm{in}}} \left(k-1, k^{n-k-1}, -k(n-k)+1\right)\\
&=\vol \calF_G(1,0,\ldots, 0,-1)\\
&=K_{G_{\mathrm{out}}} \left(k(n-k)-2, -(n-k)^{k-2}, -(n-k-1), (-1)^{n-k-1}\right)\\
&=\big|\calF_{G_{\mathrm{out}}}\left(k(n-k)-2, -(n-k)^{k-2}, -(n-k-1), (-1)^{n-k-1}\right) \cap \bbZ^{\dim G_{\mathrm{out}} } \big|,
\end{align*}
where $G_{\mathrm{in}}$ is the restriction of $G=\car{n+1}{k}$ to the vertices $\{k,\ldots,n+1\}$ and $G_{\mathrm{out}}$ is the restriction of $G=\car{n+1}{k}$ to the vertices $\{1,\ldots,n-1\}$.
We point out that at $k=2$, $G_{\mathrm{in}}=\PS_{n-1}$, and for $k\geq3$, $G_{\mathrm{in}} = \Car_{n-k+2}$.
The case $k=1$ is trivial since $G_{\mathrm{in}} = \PS_{n-1} = G_{\mathrm{out}}^{\mathrm{rev}}$, and $\calF_{G_{\mathrm{in}}}(1^{n-2},-(n-2))$ is equal to $\calF_{G_{\mathrm{out}}}(n-2, (-1)^{n-2})$ by reversing the flow.
For $k\geq2$, it may be interesting to investigate any geometric implications behind the combinatorial correspondence given by $\Psi_{\mathrm{in}}^{-1} \circ \Psi_{\mathrm{out}}$ on the lattice points of these flow polytopes of different dimensions.
\end{remark}
\section{Volume of the $k$-caracol polytope with net flow $(1,\ldots,1,-n)$} \label{sec.kcaracol}
In~\cite{BGHHKMY}, we introduced a combinatorial interpretation of the Lidskii volume formula (Theorem~\ref{thm.lidskii}) and called the objects unified diagrams. In this section, we define unified diagrams for the $k$-caracol graph, and compute the volume of the flow polytope of $\car{n+1}{k}$ with net flow $\ba=(1,\ldots, 1,-n)$. As a corollary, we recover the analogous result for the classical caracol graph and the Pitman--Stanley graph.
We point out that the results in this section are the first application of unified diagrams to computing volumes of flow polytopes whose underlying graphs are not planar.
\subsection{Unified diagrams}
In this section, we restrict ourselves to defining unified diagrams for flow polytopes with net flow $\ba=(1,\ldots, 1,-n)$. We will discuss unified diagrams in full generality in Section~\ref{sec.abbb}.
\begin{defn} Let $\bt = (t_1,\ldots, t_p) \vDash q$.
A {\em labeled $\bt$-Dyck path} is a pair $(\bs,\sigma)$ where $\bs=(s_1,\ldots, s_p)$ is a $\bt$-Dyck path and $\sigma$ is a permutation in the symmetric group $\fS_q$, whose descent set is contained in $\{s_1+\cdots+s_j\mid j=1,\ldots, p-1\}$. Let $\PF_\bt$ denote the set of labeled $\bt$-Dyck paths, which are also known as {\em generalized parking functions}.
\end{defn}
\begin{defn}\label{defn.unified}
Let $G$ be an acyclic directed graph with $n+1$ vertices and shifted out-degree vector $\bt$. A {\em unified diagram} for the flow polytope $\calF_G(1,\ldots,1,-n)$ is a triple $(\bs, \sigma, \Gamma)$ where $(\bs,\sigma)$ is a labeled $\bt$-Dyck path and $\Gamma$ is an out-degree gravity diagram for $\outgrav_{G}(\bs-\bt,0)$. Let $\calU_G$ denote this set of unified diagrams.
\end{defn}
Visually, if $\bt=(t_1,\ldots, t_p)\vDash q$, then $(\bs,\sigma)$ is the lattice path $N^{s_1}E \cdots N^{s_p}E$ on the rectangular grid from $(0,0)$ to $(p, q)$ which lies above the shaded $\bt$-region, and whose north steps are labeled by the permutation $\sigma$ so that the labels on consecutive north steps are nondecreasing.
See Figure~\ref{fig.UDcar73} for an example where $G=\car73$. There, the shifted out-degree vector is $\bt=(3,3,2,1,1,0)$, indicated by the shaded squares, and the $\sigma$-labeled $\bt$-Dyck path $\bs=(5,4,0,1,0,0)$ is indicated in red. The gravity diagram $\Gamma$, which represents a vector partition of $\bs-\bt=2\alpha_1+3\alpha_2+\alpha_3+\alpha_4$ with respect to graph $\car73$, is embedded in the squares bounded between the $\bt$-Dyck path and the shaded $\bt$-region.
\begin{figure}[ht!]
\include{3-caracol_ud_73}
\caption{A unified diagram $U=(\bs,\sigma,\Gamma)$ for $\car73$.}
\label{fig.UDcar73}
\end{figure}
\begin{remark} Since $\sum_{j=1}^n t_j= m-n$, then $(m-n)\be_1=(m-n,0,\ldots,0) \rhd \bt$, and $\bv_{\mathrm{out}}=(m-n)\be_1-\bt$. All other $\bs\vDash m-n$ which dominate $\bt$ satisfy $(m-n)\be_1 \rhd \bs \rhd \bt$, and $\outgrav_G(\bs-\bt,0) \subseteq \outgrav_G(\bv_{\mathrm{out}})$ for all $\bs\rhd\bt$.
\end{remark}
Unified diagrams were created for the purpose of combinatorializing the generalized Lidskii volume formula. We restate the formula in a way that is convenient for us to use later on. This next result follows from the fact that the number of labeled $\bt$-Dyck paths is $|\PF_\bt| = \sum_{\bs\rhd\bt}{|\bt|\choose \bs}$.
\begin{theorem}[Parking function version of the Lidskii volume formula, {\cite[Theorems 4.3, 4.4]{BGHHKMY}}]
\label{thm.parkinglidskii}
With the same conditions as in Theorem~\ref{thm.lidskii}, the volume of the flow polytope $\calF_G(\ba)$ of $G$ with net flow vector $\ba$ is
$$\vol\calF_G(\ba) = \sum_{(\bs,\sigma)\in \PF_\bt} \ba^\bs \cdot K_G(\bs-\bt,0) = |\,\calU_G(\ba)|. $$
\end{theorem}
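The identity $|\PF_\bt| = \sum_{\bs\rhd\bt}{|\bt|\choose \bs}$ quoted above is easy to confirm by brute force for small $\bt$. The following Python sketch (illustrative only; the composition $\bt$ below is an arbitrary small example, not the shifted out-degree vector of a particular graph) enumerates the pairs $(\bs,\sigma)$ in the definition of labeled $\bt$-Dyck paths and compares the count with the sum of multinomial coefficients.
\begin{verbatim}
from itertools import permutations
from math import factorial

def compositions(total, parts):
    """All weak compositions of `total` into `parts` parts."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def dominates(s, t):
    """Partial sums of s are at least those of t, i.e. s is a t-Dyck path."""
    return all(sum(s[:j]) >= sum(t[:j]) for j in range(1, len(t) + 1))

def multinomial(total, s):
    out = factorial(total)
    for part in s:
        out //= factorial(part)
    return out

def count_labeled_t_dyck_paths(t):
    q, p = sum(t), len(t)
    total = 0
    for s in compositions(q, p):
        if not dominates(s, t):
            continue
        breaks = {sum(s[:j]) for j in range(1, p)}
        for sigma in permutations(range(1, q + 1)):
            descents = {j + 1 for j in range(q - 1) if sigma[j] > sigma[j + 1]}
            if descents <= breaks:    # descent set contained in the break positions
                total += 1
    return total

t = (2, 1, 1, 0)
lhs = count_labeled_t_dyck_paths(t)
rhs = sum(multinomial(sum(t), s)
          for s in compositions(sum(t), len(t)) if dominates(s, t))
print(lhs, rhs)   # the two counts agree
\end{verbatim}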
\subsection{Refinements of unified diagrams}
In this section, we set up the combinatorial tools necessary for enumerating the unified diagrams for the flow polytope of the $k$-caracol graphs. This will be achieved by stratifying the set of unified diagrams according to level.
\begin{defn} Let $\bt=(t_1,\ldots, t_p)\vDash q$. Given a $\bt$-Dyck path $\bs=(s_1,\ldots, s_p)$, its {\em $k$-th column level} is defined to be $q-(s_1+\cdots+s_k)$, for $k=1,\ldots, p$. Visually, this is the height at which the $k$-th east step of $\bs$ occurs, where the zero-th level starts from the top of the Dyck path at $y=q$. The possible levels in the $k$-th column are $i=0,\ldots, q-(t_1+\cdots+t_k)$.
\end{defn}
\begin{defn}
Let $G$ be a directed graph with $n+1$ vertices and shifted out-degree vector $\bt \vDash m-n$.
If $(\bs,\sigma)$ is a labeled $\bt$-Dyck path whose $k$-th column level is $i$, we can decompose it into two labeled Dyck paths $(\bp,\pi)$ and $(\bq,\kappa)$ respectively corresponding to the subpaths before and after the $k$-th east step of $\bs$.
We can standardize the labelings so that $\kappa \in \fS_i$ and $\pi\in \fS_{m-n-i}$. There are ${m-n\choose i}$ ways to choose a label set of size $i$, so
\begin{equation}\label{eqn.stdU}
|\,\calU_G| = \sum_{i=0}^{m-n-(t_1+\cdots+t_k)} {m-n\choose i}
|\,\mathcal{SU}_G^{(k,i)}|
\end{equation}
where $\mathcal{SU}_G^{(k,i)} = \{ ((\bp,\pi), (\bq,\kappa), \Gamma) \}$ is the set of {\em standardized level-$(k,i)$ unified diagrams} for $\calF_G$; the concatenation of $\bp$ and $\bq$ is a $\bt$-Dyck path $\bs$ with $s_1+\cdots+s_k = m-n-i$, the labels $\kappa\in \fS_i$ and $\pi\in \fS_{m-n-i}$, and $\Gamma\in \outgrav_G(\bs-\bt,0)$ is an out-degree gravity diagram with $m-n-(t_1+\cdots+t_k)-i$ dots in the $k$-th column.
\end{defn}
\begin{example}
The labeled $\bt$-Dyck path $(\bs,\sigma)$ in the unified diagram $U$ for $\car73$ from Figure~\ref{fig.UDcar73} has level $i=1$ in the third column, and it decomposes into the two labeled Dyck paths $(\bp,\pi)$ and $(\bq,\kappa)$ with standardized labelings, where $\bp = (5,4,0)$, $\pi=257891346\in \fS_9$, $\bq=(1,0,0)$, and $\kappa = 1\in \fS_1$.
\end{example}
We need one further refinement on the set of unified diagrams.
\begin{defn}
Let $G$ be a directed graph with $n+1$ vertices and shifted out-degree vector $\bt \vDash m-n$. For $k\in \bbN$ and $i\in \bbZ_{\geq0}$, a {\em truncated level-$(k,i)$ unified diagram} for the flow polytope $\calF_G$ is obtained by taking a standardized level-$(k,i)$ unified diagram $(\bs,\sigma, \Gamma)$ for $\calF_G$ and erasing the {\em initial part} $(\bp,\pi)$ of the labeled $\bt$-Dyck path $(\bs,\sigma)$ which occurs before (and including) the $k$-th east step of $\bs$.
In other words, this is a triple $(\bq,\kappa,\Gamma)$ where $(\bq, \kappa)$ is a labeled $\bt'=(t_{k+1},\ldots, t_n)$-Dyck path that begins at the coordinates $(k, m-n-i)$
and is labeled by $\kappa\in \fS_i$ so that the labels on consecutive north steps of
$\bq$ are non-decreasing, and $\Gamma$ is an out-degree gravity diagram for $G$ with $m-n-(t_1+\cdots+t_k)-i$ dots in its $k$-th column.
Let $\calU_G^{(k,i)}$ denote the set of truncated level-$(k,i)$ unified diagrams for $G$.
\end{defn}
\begin{example}
The left side of Figure~\ref{fig.3-caracol_ud_103} shows a truncated level-$(3,2)$ unified diagram for $G = \car{10}3$. Note that the only requirement on how the line segments of the embedded gravity diagram $\Gamma$ are depicted is that the line segments must occupy the lowest possible dots in each column. That is, `gravity' drags the line segments downwards.
\end{example}
\begin{figure}[ht!]
\include{3-caracol_ud_103}
\caption{On the left is a truncated unified diagram $U=(\bq,\kappa,\Gamma) \in \calU_G^{(3,2)}$ for $G=\car{10}3$, and on the right is its corresponding $3$-multi-labeled Dyck path $M$ under the bijection $\Theta: \calU_G^{(3,2)} \rightarrow \calT_3(5,2)$. $M$ encodes the parking preferences $\mathbf{pp}$.}
\label{fig.3-caracol_ud_103}
\end{figure}
\begin{defn}
For each truncated unified diagram $U=(\bq,\kappa, \Gamma)\in \calU_G^{(k,i)}$, let $S(U)$ be the number of ways to complete $U$ to obtain a standardized unified diagram $((\bp,\pi),(\bq,\kappa), \Gamma) \in \mathcal{SU}_G^{(k,i)}$.
\end{defn}
To be clear, a completion $(\bp,\pi)$ is a labeled $(t_1,\ldots, t_k)$-Dyck path $\bp$ from $(0,0)$ to $(k, m-n-i)$ whose last step is the east step from $(k-1, m-n-i)$ to $(k, m-n-i)$, labeled by $\pi\in \fS_{m-n-i}$, and such that $\Gamma$ is contained in the region between $\bp$ and the shaded $\bt$-region.
We then have
\begin{equation}\label{eqn.SUcompletion}
\left|\, \mathcal{SU}_{G}^{(k,i)}\right| = \sum_{U\in \,\calU_{G}^{(k,i)}} S(U).
\end{equation}
\subsection{Completions of truncated unified diagrams for the $k$-caracol graph}
In the remainder of this section, we let $G=\car{n+1}{k}$.
\begin{defn}
Let $U= (\bq,\kappa, \Gamma)\in \calU_{G}^{(k,i)}$ be a truncated unified diagram for $G$, with the gravity diagram drawn so that its line segments occupy the lowest possible dots in each column. The {\em $k$-hull} of $U$ is the weak composition $\bc=(c_1,\ldots, c_k)\vDash m-n-i$ which represents the shape of the $(t_1,\ldots, t_k)$-Dyck path $\bp=N^{c_1}E \cdots N^{c_k}E$ from $(0,0)$ to $(k, m-n-i)$ having the smallest possible area.
\end{defn}
Recall from Remark~\ref{rem.trapezoid} that without loss of generality, we can consider out-degree gravity diagrams for $\car{n+1}{k}$ to be defined on a trapezoidal array of dots with $k-1+i$ dots in the $i$-th row for $i=1,\ldots, n-k-1$.
In particular, the columns of the gravity diagram indexed by $\alpha_1,\ldots, \alpha_k$ form an $(n-k-1)\times k$ rectangle $R$. Given $\Gamma \in \outgrav_G(\bv_{\mathrm{out}})$, let $\Gamma|_R$ denote the restriction of the gravity diagram to the dots in $R$. Note that every line segment of $\Gamma|_R$ has its right endpoint in the $k$-th column.
\begin{lemma}\label{lem.cvector}
Let $G=\car{n+1}{k}$, and let $U=(\bq,\kappa,\Gamma)\in \calU_{G}^{(k,i)}$ be a truncated unified diagram for $G$. Let $L_1,\ldots, L_{n-k-1-i}$ be the (possibly trivial) line segments of $\Gamma|_R$, where $L_j=[\ell_j, k]$ for $1\leq \ell_j \leq k$ and $j=1,\ldots, n-k-1-i$. Let $\bh=(n-k,\ldots, n-k, 2(n-k-1)-i)$. The $k$-hull of $U$ is
$$\bc(U) = \bh + \sum_{j=1}^{n-k-1-i} (\be_{\ell_j} - \be_k). $$
\end{lemma}
\begin{proof}
Recalling from Section~\ref{sec.gravity} that the shifted out-degree vector for $\car{n+1}{k}$ is
$$\bt=(t_1,\ldots, t_n) = (\underbrace{n-k,\ldots, n-k}_{k-1},n-k-1, \underbrace{1,\ldots,1}_{n-k-1},0)\vDash m-n,$$
then $\bh=(h_1,\ldots, h_k) =(n-k,\ldots, n-k, 2(n-k-1)-i)$ is a composition of $m-n-i$ with $k$ parts that represents the hull of a truncated level-$(k,i)$ unified diagram having an empty gravity diagram.
Now, with the gravity diagram $\Gamma$ embedded into $U$, then $\bc(U)$ is determined by $\bh$, together with the line segments of $\Gamma|_R$. For each line segment $L_j = [\ell_j,k]$ beginning in the $\ell_j$-th column and ending in the $k$-th column, the $k$-hull of the truncated unified diagram is obtained by altering $\bh$ by $\be_{\ell_j}-\be_k$.
\end{proof}
\begin{example}
For the truncated unified diagram $U$ in Figure~\ref{fig.3-caracol_ud_103}, its gravity diagram $\Gamma$ is
$$\begin{tikzpicture}[scale=0.5]
\begin{scope}[xshift=0, scale=1.0]
\node[vertex][fill, minimum size=3pt, label=above:\tiny$\alpha_1$, color=red](a10) at (0,0) {};
\node[vertex][fill, minimum size=3pt, label=above:\tiny$\alpha_2$, color=red](a20) at (1,0) {};
\node[vertex][fill, minimum size=3pt, label=above:\tiny$\alpha_3$, color=red](a30) at (2,0) {};
\node[vertex][fill, minimum size=3pt, label=above:\tiny$\alpha_4$](a40) at (3,0) {};
\node[vertex][fill, minimum size=3pt, label=above:\tiny$\alpha_5$](a50) at (4,0) {};
\node[vertex][fill, minimum size=3pt, label=above:\tiny$\alpha_6$](a60) at (5,0) {};
\node[vertex][fill, minimum size=3pt, label=above:\tiny$\alpha_7$](a70) at (6,0) {};
\node[vertex][fill, minimum size=3pt, label=above:\tiny$\alpha_8$](a80) at (7,0) {};
\node[vertex][fill, minimum size=3pt, label=above:\tiny$\alpha_9$](a90) at (8,0) {};
\node[vertex][fill, minimum size=3pt, color=red](a11) at (0,-1) {};
\node[vertex][fill, minimum size=3pt, color=red](a12) at (0,-2) {};
\node[vertex][fill, minimum size=3pt, color=red](a21) at (1,-1) {};
\node[vertex][fill, minimum size=3pt, color=red](a22) at (1,-2) {};
\node[vertex][fill, minimum size=3pt, color=red](a31) at (2,-1) {};
\node[vertex][fill, minimum size=3pt, color=red](a32) at (2,-2) {};
\node[vertex][fill, minimum size=3pt](a41) at (3,-1) {};
\node[vertex][fill, minimum size=3pt](a42) at (3,-2) {};
\node[vertex][fill, minimum size=3pt](a51) at (4,-1) {};
\node[vertex][fill, minimum size=3pt](a52) at (4,-2) {};
\node[vertex][fill, minimum size=3pt](a61) at (5,-1) {};
\node[vertex][fill, minimum size=3pt](a62) at (5,-2) {};
\node[vertex][fill, minimum size=3pt](a71) at (6,-1) {};
\node[vertex][fill, minimum size=3pt](a72) at (6,-2) {};
\node[vertex][fill, minimum size=3pt](a81) at (7,-1) {};
\draw (a30) to (a50);
\draw[color=red] (a11) to (a31);
\end{scope}
\end{tikzpicture}
$$
with the restriction $\Gamma|_R$ depicted in red. We have $\bh= (6,6,8)$, and the $3$-hull of $U$ is
$$\bc(U) = (6,6,8) + (0,0,1-1) + (1,0,-1) + (0,0,1-1) = (7,6,7).$$
\end{example}
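The $k$-hull of Lemma~\ref{lem.cvector} is straightforward to compute. The following Python sketch (illustrative only, with the data of the preceding example hard-coded) reproduces the $3$-hull $(7,6,7)$.
\begin{verbatim}
def k_hull(n, k, i, left_endpoints_in_R):
    """k-hull of a truncated level-(k,i) unified diagram for the k-caracol graph
    on n+1 vertices; left_endpoints_in_R lists the left endpoints l_j of the
    (possibly trivial) segments [l_j, k] of Gamma restricted to the rectangle R."""
    c = [n - k] * (k - 1) + [2 * (n - k - 1) - i]   # the base composition h
    for l in left_endpoints_in_R:
        c[l - 1] += 1   # + e_{l_j}
        c[k - 1] -= 1   # - e_k
    return tuple(c)

# Car_{10}(3) at level (3,2): Gamma|_R consists of the segments [3,3], [1,3], [3,3]
print(k_hull(9, 3, 2, [3, 1, 3]))   # (7, 6, 7)
\end{verbatim}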
\begin{lemma}\label{lem.completions} Let $G=\car{n+1}{k}$, and let $\bc(U)=(c_1,\ldots, c_k)$ be the $k$-hull of the truncated unified diagram $U=(\bq,\kappa,\Gamma)\in \calU_{G}^{(k,i)}$. The number of ways to complete $U$ to a standardized unified diagram in $\mathcal{SU}_G^{(k,i)}$ is
$$S(U) = \sum_{\bd \in\calC(\bc(U))} {m-n-i\choose \bd},$$
where $\calC(\bc(U)) = \{\bd\vDash m-n-i \mid d_1 + \cdots +d_j \geq c_1+\cdots+c_j, \hbox{ for } j=1,\ldots, k-1 \}$.
\end{lemma}
\begin{proof} The $k$-hull $\bc(U)$ of $U$ represents the $(t_1,\ldots, t_k)$-Dyck path of smallest area which completes $U$ to a standardized unified diagram. Thus a Dyck path completion of $U$ is a weak composition $\bd$ that dominates $\bc(U)$, and the claim follows since there are ${m-n-i\choose \bd}$ ways to label the north steps of $\bd$ so that the labels from $\pi$ are nondecreasing on consecutive north steps.
\end{proof}
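Lemma~\ref{lem.completions} likewise translates into a short computation: enumerate the weak compositions $\bd\vDash m-n-i$ whose partial sums dominate those of $\bc(U)$ and add up the corresponding multinomial coefficients. A Python sketch (illustrative only) follows; for instance, a two-part hull $(3,4)$ with $m-n-i=7$ gives $\sum_{d_1\geq 3}{7\choose d_1}=99$, one of the completion counts appearing in the $\car62$ example later in this section.
\begin{verbatim}
from math import factorial

def compositions(total, parts):
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def multinomial(total, d):
    out = factorial(total)
    for part in d:
        out //= factorial(part)
    return out

def completions(c):
    """S(U) for a truncated diagram with k-hull c: sum the multinomial
    coefficients over all weak compositions d of |c| whose partial sums
    dominate those of c (Lemma lem.completions)."""
    total, k = sum(c), len(c)
    count = 0
    for d in compositions(total, k):
        if all(sum(d[:j]) >= sum(c[:j]) for j in range(1, k)):
            count += multinomial(total, d)
    return count

print(completions((3, 4)))      # 99
print(completions((7, 6, 7)))   # completions of the 3-hull computed above
\end{verbatim}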
\subsection{The $k$-parking numbers}
In this section, we enumerate the truncated level-$(k,i)$ unified diagrams for $G=\car{n+1}{k}$ by placing them in bijection with another family of combinatorial objects that we now define. After this is completed, we will show that for each truncated diagram $U \in \calU_G^{(k,i)}$, there are `on average' $k^{(k+1)(n-k)-3-i}$ ways to complete it to a standardized unified diagram.
\begin{defn}\label{defn.kparkingtrianglenumbers}
For $k\in \bbN$, $r\in \bbZ_{\geq0}$, and $i=0,\ldots, r$, let
$$T_k(r,i) = (r+1)^{i-1} \multiset{k(r+1)}{r-i}
= (r+1)^{i-1}{k(r+1)+r-1-i \choose r-i}.$$
For fixed $k$, the numbers $T_k(r,i)$ form the entries of the {\em $k$-parking triangle}. Tables of values for $T_k(r,i)$ are given in the Appendix, for $k=1,2,3,4$.
\end{defn}
\begin{remark} We note some special values of $T_k(r,i)$.
\begin{enumerate}
\item[(a)] At $i=0$,
$$T_k(r,0) = \frac{1}{r+1}{k(r+1) + r-1 \choose r}
= \Cat(r+1, k(r+1)-1)$$
is a generalized Fuss-Catalan number. This is equal to $\vol\calF_{\car{n+1}{k}}(1,0,\ldots,0,-1)$ if we let $r=n-k-1$.
\item[(b)] At $i=r$,
$T_k(r,r) = (r+1)^{r-1}$
is the number of parking functions of length $r$.
\item[(c)] At $i=r-1$,
$$T_k(r,r-1) = (r+1)^{r-2}{k(r+1)\choose k(r+1)-1} = k(r+1)^{r-1} =kT_k(r,r)$$
is $k$ times the number of parking functions of length $r$.
\end{enumerate}
\end{remark}
\begin{defn}
For $k\in \bbN$, $r\in \bbZ_{\geq0}$, and $i=0,\ldots, r$, let $\calT_k(r,i)$ be the set of classical Dyck paths from $(0,0)$ to $(r,r)$ with labeled north steps so that each of the labels from the set $\{1,2,\ldots, i\}$ appears exactly once, the remaining $r-i$ labels are chosen (possibly with repeats) from the set $\{\overline{k-1}, \ldots, \overline{1}, \overline{0}\}$, and the labels are nondecreasing on consecutive north steps. These labels are ordered by $\overline{k-1} < \cdots < \overline{1} < \overline{0} < 1 < 2< \cdots < i$. We call these the {\em $k$-multi-labeled Dyck paths}.
\end{defn}
\begin{theorem}\label{thm.multilabel}
For $k\in \bbN$, $r\in \bbZ_{\geq0}$, and $i=0,\ldots, r$,
$$|\calT_k(r,i)|= T_k(r,i). $$
\end{theorem}
\begin{proof}
Consider the scenario where there are $r+1$ parking spaces on a circular one-way street whose single entrance/exit is just before the first parking space. There are $r$ vehicles: $d_s$ identical motorcycles of model $s$ for $s=\overline{k-1},\ldots, \overline{0}$, and $i$ distinct cars, so that $d_{k-1}+\cdots+d_{1}+d_{0}+i=r$. Each group of model $s$ motorcycles has a multiset of $d_s$ preferred parking spaces, and each car has a preferred parking space as well. The motorcycles arrive in groups and park, followed by each car, and if a vehicle's preferred spot is already taken, then it parks in the next available space down the circular street. Since there are $r+1$ spaces and $r$ vehicles, every vehicle will be able to park.
We record the parking preferences as
$$\mathbf{pp}= \{p_{k-1,1},\ldots, p_{k-1,d_{k-1}} \} \times \cdots \times \{p_{0,1},\ldots, p_{0,d_0}\}\times (q_1,\ldots, q_i),$$
where $\{p_{s,1},\ldots, p_{s,d_s} \}$ is a multiset of parking space preferences for the model $s$ motorcycles, and $(q_1,\ldots, q_i)$ is the list of parking preferences for the $i$ cars.
The cyclic group $\bbZ/(r+1)\bbZ$ acts on the set of parking preferences by
$$z\cdot \mathbf{pp} = \{p_{k-1,1}+z,\ldots, p_{k-1,d_{k-1}}+z \} \times \cdots \times \{p_{0,1}+z,\ldots, p_{0,d_0}+z\} \times (q_1+z,\ldots, q_i+z) \!\!\! \mod r+1,$$
for $z\in \bbZ/(r+1)\bbZ$. If the parking preferences $\mathbf{pp}$ lead to the $j$-th vehicle parking in space $S_j$, then the parking preferences $z\cdot \mathbf{pp}$ lead to the $j$-th vehicle parking in space $(S_j+z) \mod r+1$. Thus each orbit of the cyclic group action on the set of parking preferences has size $r+1$. In each orbit, there is a unique parking configuration where the $(r+1)$-st space is empty, and this corresponds to an element in $\calT_k(r,i)$.
There are $(r+1)^i$ preference lists $(q_1,\ldots, q_i)$ for the cars, and
$$\sum_{d_0+\cdots+d_{k-1} =r-i }\multiset{r+1}{d_0}\cdots\multiset{r+1}{d_{k-1}}
= \multiset{k(r+1)}{r-i}$$
preference sets for the $k$ models of motorcycles. Therefore,
$$|\calT_k(r,i)|
= (r+1)^{i-1} \multiset{k(r+1)}{r-i}.$$
\end{proof}
\begin{example}
The right side of Figure~\ref{fig.3-caracol_ud_103} shows a multi-labeled Dyck path $M$. $M$ encodes the parking preferences $\mathbf{pp} = \{1 \} \times \emptyset \times\{1,3\} \times (4,1)$ for one model-$2$ motorcycle $\mathtt{M}_{\overline{2}}$, two identical model-$0$ motorcycles $\mathtt{M}_{\overline{0}}$, and two distinct cars $\mathtt{C}_1$ and $\mathtt{C}_2$. The resulting parked configuration is $(\mathtt{M}_{\overline{2}}, \mathtt{M}_{\overline{0}},\mathtt{M}_{\overline{0}}, \mathtt{C}_1, \mathtt{C}_2)$ for the vehicles.
\end{example}
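The circular parking procedure in the proof of Theorem~\ref{thm.multilabel} can be simulated directly. The Python sketch below (illustrative only; the vehicle names are just display strings) reproduces the parked configuration of the preceding example.
\begin{verbatim}
def park(num_spaces, vehicles):
    """Park vehicles on a circular one-way street with spaces 1, ..., num_spaces.
    vehicles is a list of (name, preferred_space); each vehicle takes its preferred
    space, or else the next available space going around the circle."""
    spaces = [None] * (num_spaces + 1)      # index 0 is unused
    for name, pref in vehicles:
        spot = pref
        while spaces[spot] is not None:
            spot = spot % num_spaces + 1    # move to the next space cyclically
        spaces[spot] = name
    return spaces[1:]

# preferences pp = {1} x {} x {1,3} x (4,1) from the example above (r = 5 vehicles)
vehicles = [("M_2bar", 1), ("M_0bar", 1), ("M_0bar", 3), ("C_1", 4), ("C_2", 1)]
print(park(6, vehicles))
# ['M_2bar', 'M_0bar', 'M_0bar', 'C_1', 'C_2', None]   (space 6 is left empty)
\end{verbatim}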
\begin{theorem} \label{thm.theta}
Let $k\in \bbN$ and $n>k$. The number of truncated level-$(k,i)$ unified diagrams for $G=\car{n+1}{k}$ is
$$\left|\,\calU_{\car{n+1}{k}}^{(k,i)}\right|
= T_k(n-k-1,i).$$
\end{theorem}
\begin{proof} Let $G=\car{n+1}{k}$.
We construct a bijection $\Theta: \calU_G^{(k,i)} \rightarrow \calT_k(n-k-1,i)$.
Let $U=(\bq, \kappa, \Gamma)$ be a truncated level-$(k,i)$ unified diagram. Recall that the embedded gravity diagram $\Gamma$ has $n-k-1-i$ dots in the $k$-th column, and every (possibly trivial) line segment in $\Gamma$ contains a dot from the $k$-th column, so we consider $\Gamma$ as having $n-k-1-i$ line segments.
From $U$, we create a $k$-multi-labeled Dyck path $M\in\calT_k(n-k-1,i)$ in the following way.
Let $\mathbf{1} = (1,\ldots, 1) \vDash n-k-1$. We may view $(\bq,\kappa)$ as a labeled $\mathbf{1}$-Dyck path with starting point $(0, n-k-1-i)$, and $\bq$ has $i$ north steps labeled by the permutation $\kappa\in \fS_i$. To create $M$, we need to add $n-k-1-i$ more north steps to $\bq$, and the $n-k-1-i$ line segments embedded between $\bq$ and the shaded region in $U$ define these uniquely; given one such line segment $L=[\ell,k+h]$ that begins in the $\ell$-th column for some $\ell=1,\ldots, k$, and ends in a $(k+h)$-th column for some $h = 0,\ldots, n-k-2$, create a new north step at $x=h$ with the label $\overline{k-\ell}$ so that the labels remain nondecreasing on consecutive north steps of $M$, with respect to the order $\overline{k-1} < \cdots < \overline{1} < \overline{0} <1<\cdots <i$.
We may visualize this construction of $M$ from $U$ as `sliding' the label $\overline{k-\ell}$ along the line segment $L=[\ell,k+h]$ of the gravity diagram to its end to create a new north step with that label.
To see that $M$ is indeed a $k$-multi-labeled Dyck path in $\calT_k(n-k-1,i)$, note that because the line segments of $\Gamma$ are embedded between $\bq$ and the shaded region, adding the north steps dictated by the right endpoints of the line segments creates a Dyck path from $(0,0)$ to $(n-k-1,n-k-1)$ that remains above the line $y=x$. The conditions on the labels of the north steps of $M$ are clearly satisfied by construction.
To see that $\Theta$ is a bijection, we describe the inverse construction. Let $M\in \calT_k(n-k-1,i)$. It has $n-k-1-i$ north steps with labels in $\{\overline{k-1},\ldots, \overline{1},\overline{0} \}$, so by removing those, we can recover the labeled Dyck path $(\bq,\kappa)$ with $\kappa\in \fS_i$. It remains to recover the embedded gravity diagram $\Gamma$, but this is easy as well, since each north step with label $\overline{k-\ell}$ at $x=h$ gives rise to a line segment $L=[\ell,k+h]$.
Since $\Theta$ is a bijection, then the result follows from Theorem~\ref{thm.multilabel}.
\end{proof}
\begin{example} Figure~\ref{fig.3-caracol_ud_103} shows a truncated unified diagram $U=(\bq,\kappa,\Gamma) \in \calU_G^{(3,2)}$ for $G=\car{10}3$, and its corresponding $3$-multi-labeled Dyck path $M$ under the bijection $\Theta: \calU_G^{(3,2)} \rightarrow \calT_3(5,2)$. Note that the embedded gravity diagram $\Gamma$ contains three line segments; the two which begin in the third column carry the label $\textcolor{cyan}{\overline{0}}$ and the one which begins in the first column carries the label $\textcolor{cyan}{\overline{2}}$. These labels `slide' along their line segments from left to right to form the $3$-multi-labeled Dyck path $M$.
\end{example}
\subsection{Partitioning the $N$-th multinomial $(k-1)$-simplex}
In this section, we finish the computation of the number of level-$(k,i)$ standardized unified diagrams for $G=\car{n+1}{k}$.
We shall see in Theorem~\ref{thm.kparkingtriangle} that `on average' there are $k^{(k+1)(n-k)-3-i}$ ways to complete any truncated level-$(k,i)$ unified diagram to a standardized unified diagram, but first we need a preliminary lemma.
\begin{lemma} \label{lem.thelemma}
Let $N\in \bbN$ be a positive integer, and let $k\in \bbZ_{\geq2}$.
Let $\calC(N,k)$ denote the set of weak compositions of $N$ with $k$ parts.
Given $\bc = (c_0,\ldots, c_{k-1}) \in \calC(N,k)$, and letting $\be_0=(1,0,0,\ldots)$, $\be_1 =(0,1,0,\ldots)$ etc., define
\begin{align*}
\bc_0 &= (c_{0,0},\ldots, c_{0,k-1}) =\bc,\\
\bc_j &= (c_{j,0},\ldots, c_{j,k-1})= \bc+\be_{k-1}-\be_{j-1},
\end{align*}
for $j=1,\ldots, k-1$.
Let
\begin{align*}
\calC(\bc_j) &= \left\{ \bd=(d_0,\ldots, d_{k-1}) \vDash N \mid d_j+\cdots +d_{j+i} \geq c_{j,j}+\cdots+c_{j,j+i}, \hbox{ for } i=0,\ldots, k-2 \right\},
\end{align*}
with the understanding that the indices of $d_{j+i}$ and $c_{j,j+i}$ are defined mod $k$, and $\calC(\bc_j)$ is empty if $\bc_j$ has negative entries.
Then
$$\calC(N,k) = \coprod_{0\leq j\leq k-1} \calC(\bc_j),$$
is a partition of the set of weak compositions of $N$ with $k$ parts.
\end{lemma}
\begin{proof}
Since $d_0+\cdots+d_{k-1}=N$, then we can rewrite the $k-1$ defining inequalities for each set $\calC(\bc_j)$ in terms of $d_0,\ldots, d_{k-2}$.
That is, an inequality involving $d_{k-1}$, generically of the form
$$d_j+\cdots + d_{k-1}+d_0 + \cdots +d_{\ell-1} \geq c_{j,j} + \cdots + c_{j,k-1+\ell} =M,$$
where $j\leq k-1$,
appears as a defining inequality only in $\calC(\bc_j)$, and it can be replaced by
$$d_{\ell} + \cdots + d_{j-1} \leq c_{j,k+\ell}+\cdots + c_{j,j-1} < N-M+1,$$
where $\ell <j$.
The only other set in which the expression $d_{\ell} + \cdots + d_{j-1}$ appears in a defining inequality is $\calC(\bc_\ell)$, and there, the inequality is
$$d_{\ell} + \cdots + d_{j-1} \geq c_{\ell,\ell}+\cdots+c_{\ell, j-1}.$$
Note that by definition,
\begin{align*}
\bc_\ell &= (c_{0,0}, \ldots, c_{0,\ell-1}-1, c_{0,\ell},\ldots\ldots\ldots \ldots, c_{0,k-1}+1),\\
\bc_j &= (c_{0,0}, \ldots \ldots\ldots \ldots, c_{0,j-1}-1, c_{0,j}, \ldots, c_{0,k-1}+1),
\end{align*}
so if $M=c_{j,j} + \cdots + c_{j,\ell-1} =c_{0,j} +\cdots + (c_{0,k-1}+1) + \cdots + c_{0,\ell-1}$, then
$$c_{\ell,\ell}+\cdots+c_{\ell, j-1} = c_{0,\ell}+ \cdots + c_{0,j-1} = N-M+1.$$
Therefore, the sets $\calC(\bc_j)$ are disjoint.
Since the sets $\calC(\bc_j)$ are separated by ${k\choose 2}$ hyperplanes, each of the form $d_a+\cdots+d_b = c_{0,a}+\cdots+c_{0,b}$ for $0\leq a\leq b \leq k-2$, and each of these hyperplanes contains the point $\bc_0$, it follows that $\calC(N,k)= \coprod_{0\leq j\leq k-1} \calC(\bc_j)$, as claimed.
\end{proof}
\begin{corollary} \label{cor.thecorollary}
With $\calC(\bc_j)$ defined as in Lemma~\ref{lem.thelemma}, let $S(\bc_j) = \sum_{\bd \in \calC(\bc_j)} {N\choose \bd}$. Then
$\sum_{j} S(\bc_j)=k^N.$
\end{corollary}
\begin{proof}
This follows from Lemma~\ref{lem.thelemma} and the multinomial theorem,
$\sum_{\bd\in \calC(N,k)} {N\choose \bd}= k^N$.
\end{proof}
\begin{example} The essence of Lemma~\ref{lem.thelemma} is to partition multinomial coefficients in a specific way that will be useful in the proof of Theorem~\ref{thm.kparkingtriangle}.
When $k=2$, this is simply a partition of the binomial coefficients for a fixed $N$. For example let $\bc_0=(c,N-c)$, so that $\bc_1=(c-1,N-c+1)$. We have
\begin{align*}
\calC(\bc_0) &= \{ (d,N-d) \mid d \geq c \},\\
\calC(\bc_1) &= \{ (d,N-d) \mid N-d \geq N-c+1\} = \{ (d,N-d) \mid d\leq c-1\},
\end{align*}
and $S(\bc_0) = \sum_{d=c}^{N} {N \choose d}$, $S(\bc_1) = \sum_{d=0}^{c-1} {N \choose d}$.
Simply put, we are partitioning the $N$-th row of Pascal's triangle into the set of binomial coefficients ${N \choose d}$ with $d\geq c$, and the set of binomial coefficients ${N\choose d}$ with $d< c$. Summing over the entire row of Pascal's triangle yields $S(\bc_0)+S(\bc_1) = \sum_{d=0}^N {N\choose d}=2^N$.
\end{example}
\begin{example}
This example explains the title of this section. Generalizing the previous example, the multinomial coefficients ${N \choose d_1,\ldots, d_k}$ can be arranged on the lattice points $(d_1,\ldots, d_k)\in \bbZ^k$ with $d_1+\cdots+d_k=N$, forming a $(k-1)$-simplex in $\bbZ^k$.
The left side of Figure~\ref{fig.trinomial_triangle} depicts the multinomial triangle for $N=6$ and $k=3$, with the weak composition $(d_1,d_2,d_3)$ listed below each entry. This partition of the triangle corresponds to the one defined by $\bc_0=(2,2,2)$, $\bc_1=(1,2,3)$, and $\bc_2=(2,1,3)$.
\end{example}
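The partition asserted by Lemma~\ref{lem.thelemma}, and hence Corollary~\ref{cor.thecorollary}, can be verified by brute force for small parameters. The Python sketch below (illustrative only) checks the case $N=6$, $k=3$, $\bc_0=(2,2,2)$ of Figure~\ref{fig.trinomial_triangle}.
\begin{verbatim}
from math import factorial

def compositions(total, parts):
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def multinomial(total, d):
    out = factorial(total)
    for part in d:
        out //= factorial(part)
    return out

def in_region(d, c_j, j, k):
    """Membership in C(c_j): cyclic partial-sum inequalities starting at index j."""
    return all(sum(d[(j + t) % k] for t in range(i + 1))
               >= sum(c_j[(j + t) % k] for t in range(i + 1))
               for i in range(k - 1))

N, k = 6, 3
c0 = (2, 2, 2)
regions = [c0] + [tuple(c0[t] + (t == k - 1) - (t == j - 1) for t in range(k))
                  for j in range(1, k)]
# regions == [(2, 2, 2), (1, 2, 3), (2, 1, 3)], as in the figure

sums = [0] * k
for d in compositions(N, k):
    members = [j for j in range(k) if in_region(d, regions[j], j, k)]
    assert len(members) == 1      # each weak composition lies in exactly one region
    sums[members[0]] += multinomial(N, d)
print(sums, sum(sums))            # the total is k**N = 729
\end{verbatim}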
\begin{figure}[ht!]
\begin{tikzpicture}
\begin{scope}[scale=1, xshift=-200, yshift=-40]
\node at (3,5.2) {$1$}; \node at (3,4.9) {\tiny\textcolor{gray}{$\,_{(0,0,6)}$}};
\node at (2.5,4.33) {$6$}; \node at (2.5,4.03) {\tiny\textcolor{gray}{$\,_{(1,0,5)}$}};
\node at (3.5,4.33) {$6$}; \node at (3.4,4.03) {\tiny\textcolor{gray}{$\,_{(0,1,5)}$}};
\node at (2,3.46) {$15$}; \node at (2,3.16) {\tiny\textcolor{gray}{$\,_{(2,0,4)}$}};
\node at (3,3.46) {$30$}; \node at (2.9,3.16) {\tiny\textcolor{gray}{$\,_{(1,1,4)}$}};
\node at (4,3.46) {$15$}; \node at (4,3.16) {\tiny\textcolor{gray}{$\,_{(0,2,4)}$}};
\node at (1.5,2.6) {$20$}; \node at (1.5,2.3) {\tiny\textcolor{gray}{$\,_{(3,0,3)}$}};
\node at (2.5,2.6) {$60$}; \node at (2.4,2.3) {\tiny\textcolor{lava}{$\,_{(2,1,3)}$}};
\node at (3.5,2.6) {$60$}; \node at (3.6,2.3) {\tiny\textcolor{bleudefrance}{$\,_{(1,2,3)}$}};
\node at (4.5,2.6) {$20$}; \node at (4.5,2.3) {\tiny\textcolor{gray}{$\,_{(0,3,3)}$}};
\node at (1,1.73) {$15$}; \node at (1,1.43) {\tiny\textcolor{gray}{$\,_{(4,0,2)}$}};
\node at (2,1.73) {$60$}; \node at (2,1.43) {\tiny\textcolor{gray}{$\,_{(3,1,2)}$}};
\node at (3,1.73) {$90$}; \node at (3,1.43) {\tiny\textcolor{applegreen}{$\,_{(2,2,2)}$}};
\node at (4,1.73) {$60$}; \node at (4.1,1.43) {\tiny\textcolor{gray}{$\,_{(1,3,2)}$}};
\node at (5,1.73) {$15$}; \node at (5,1.43) {\tiny\textcolor{gray}{$\,_{(0,4,2)}$}};
\node at (0.5,.87) {$6$}; \node at (.5,.57) {\tiny\textcolor{gray}{$\,_{(5,0,1)}$}};
\node at (1.5,.87) {$30$}; \node at (1.5,.57) {\tiny\textcolor{gray}{$\,_{(4,1,1)}$}};
\node at (2.5,.87) {$60$}; \node at (2.5,.57) {\tiny\textcolor{gray}{$\,_{(3,2,1)}$}};
\node at (3.5,.87) {$60$}; \node at (3.5,.57) {\tiny\textcolor{gray}{$\,_{(2,3,1)}$}};
\node at (4.5,.87) {$30$}; \node at (4.6,.57) {\tiny\textcolor{gray}{$\,_{(1,4,1)}$}};
\node at (5.5,.87) {$6$}; \node at (5.5,.57) {\tiny\textcolor{gray}{$\,_{(0,5,1)}$}};
\node at (0,0) {$1$}; \node at (0,-.3) {\tiny\textcolor{gray}{$\,_{(6,0,0)}$}};
\node at (1,0) {$6$}; \node at (1,-.3) {\tiny\textcolor{gray}{$\,_{(5,1,0)}$}};
\node at (2,0) {$15$}; \node at (2,-.3) {\tiny\textcolor{gray}{$\,_{(4,2,0)}$}};
\node at (3,0) {$20$}; \node at (3,-.3) {\tiny\textcolor{gray}{$\,_{(3,3,0)}$}};
\node at (4,0) {$15$}; \node at (4,-.3) {\tiny\textcolor{gray}{$\,_{(2,4,0)}$}};
\node at (5,0) {$6$}; \node at (5.1,-.3) {\tiny\textcolor{gray}{$\,_{(1,5,0)}$}};
\node at (6,0) {$1$}; \node at (6,-.3) {\tiny\textcolor{gray}{$\,_{(0,6,0)}$}};
\draw (1,2.07)--(3.23,2.07);
\draw (3,2.43)--(4.6,-0.15);
\draw (2.8,2.07)--(3.8,3.80);
\end{scope}
\begin{scope}[xshift=0, scale=0.5]
\draw[fill, color=gray!85] (0,0) rectangle (1,2);
\draw[fill, color=gray!85] (1,2) rectangle (2,4);
\draw[fill, color=gray!85] (2,4) rectangle (3,5);
\draw[fill, color=gray!85] (3,5) rectangle (4,6);
\draw[very thin, color=gray!100] (0,0) grid (4,6);
\node[vertex][fill, minimum size=3pt] at (2.5,5.5){};
\draw[very thick, color=red] (2,6)--(4,6);
\node at (0.5, 6.5) {\tiny$\alpha_1$};
\node at (1.5, 6.5) {\tiny$\alpha_2$};
\node at (2.5, 6.5) {\tiny$\alpha_3$};
\node at (2,-1) {\tiny$\bc(U_0)=\textcolor{black}{(2,2,2)}$};
\node at (2,-2) {\tiny$\bc_0=\textcolor{applegreen}{(2,2,2)}$};
\end{scope}
\begin{scope}[xshift=80, scale=0.5]
\draw[fill, color=gray!85] (0,0) rectangle (1,2);
\draw[fill, color=gray!85] (1,2) rectangle (2,4);
\draw[fill, color=gray!85] (2,4) rectangle (3,5);
\draw[fill, color=gray!85] (3,5) rectangle (4,6);
\draw[very thin, color=gray!100] (0,0) grid (4,6);
\node[vertex][fill, minimum size=3pt] at (2.5,5.5){};
\node[vertex][fill, minimum size=3pt] at (1.5,4.5){};
\draw (1.5,4.5)--(2.5,5.5);
\draw[very thick, color=red] (2,6)--(4,6);
\node at (0.5, 6.5) {\tiny$\alpha_1$};
\node at (1.5, 6.5) {\tiny$\alpha_2$};
\node at (2.5, 6.5) {\tiny$\alpha_3$};
\node at (2,-1) {\tiny$\bc(U_1)=\textcolor{black}{(2,3,1)}$};
\node at (2,-2) {\tiny$\bc_1=\textcolor{bleudefrance}{(1,2,3)}$};
\end{scope}
\begin{scope}[xshift=160, scale=0.5]
\draw[fill, color=gray!85] (0,0) rectangle (1,2);
\draw[fill, color=gray!85] (1,2) rectangle (2,4);
\draw[fill, color=gray!85] (2,4) rectangle (3,5);
\draw[fill, color=gray!85] (3,5) rectangle (4,6);
\draw[very thin, color=gray!100] (0,0) grid (4,6);
\node[vertex][fill, minimum size=3pt] at (.5,2.5){};
\node[vertex][fill, minimum size=3pt] at (1.5,4.5){};
\node[vertex][fill, minimum size=3pt] at (2.5,5.5){};
\draw (.5,2.5)--(1.5,4.5)--(2.5,5.5);
\draw[very thick, color=red] (2,6)--(4,6);
\node at (0.5, 6.5) {\tiny$\alpha_1$};
\node at (1.5, 6.5) {\tiny$\alpha_2$};
\node at (2.5, 6.5) {\tiny$\alpha_3$};
\node at (2,-1) {\tiny$\bc(U_2)=\textcolor{black}{(3,2,1)}$};
\node at (2,-2) {\tiny$\bc_2=\textcolor{lava}{(2,1,3)}$};
\end{scope}
\end{tikzpicture}
\caption{A partition of the multinomial triangle for $N=6$ and $k=3$ determined by $\bc_0,\bc_1,\bc_2$. The sum of the entries in each third is the number of completions to standardized unified diagrams for each truncated unified diagram on the right. Note that $\bc(U_j)$ is the backward cyclic shift of $\bc_j$ by $j$ positions.}
\label{fig.trinomial_triangle}
\end{figure}
\begin{theorem} \label{thm.kparkingtriangle}
Let $k\in \bbN$ and $n>k$.
The number of standardized level-$(k,i)$ unified diagrams for $\car{n+1}{k}$ is
$$\left|\,\mathcal{SU}_{\car{n+1}{k}}^{(k,i)}\right|
= k^{(k+1)(n-k)-3-i} \cdot T_k(n-k-1,i).$$
\end{theorem}
\begin{proof} Let $G=\car{n+1}{k}$.
When $k=1$, there is only one way to complete a truncated unified diagram $U=(\bq,\kappa,\Gamma)\in \calU_G^{(1,i)}$ to a standardized unified diagram because there is only one way to add $m-n-i$ north steps to complete $\bq$ in the first column. Thus it follows from Equation~\eqref{eqn.SUcompletion} and Theorem~\ref{thm.theta} that $|\,\mathcal{SU}_G^{(1,i)}|= |\,\calU_G^{(1,i)}| = T_1(n-2,i)$.
So suppose $k\geq2$. We first define a $\bbZ/k\bbZ$-action on the set of out-degree line-dot diagrams of $\car{n+1}{k}$ which satisfy the following:
\begin{enumerate}
\item[(a)] each line segment must be horizontal,
\item[(b')] the line segments are ordered from top to bottom so that the line segments with right endpoints in the $q$-th column are above the line segments with right endpoints at the $p$-th column if $q>p$.
\end{enumerate}
We point out that the last defining property of out-degree gravity diagrams, which specifies a certain ordering of the line segments, is omitted here.
Modifying Remark~\ref{rem.trapezoid} slightly to apply to these line-dot diagrams instead of gravity diagrams, we can still consider the line-dot diagrams for $\car{n+1}{k}$ to be defined on a trapezoidal array of dots with $k-1+i$ dots in the $i$-th row for $i=1,\ldots, n-k-1$. Let $\Gamma|_R$ denote the restriction of the line-dot diagram to the first $k$ columns, and note that every line segment of $\Gamma|_R$ has its right endpoint in the $k$-th column. Letting $L_1,\ldots, L_{n-k-1}$ be the (possibly trivial) line segments of $\Gamma|_R$ where $L_j=[\ell_j,k]$, we define $\rho(\Gamma) = (\ell_1-1,\ldots, \ell_{n-k-1}-1)$.
For $z\in \bbZ/k\bbZ$, let
$$z\cdot \rho(\Gamma) = (\tilde \ell_1,\ldots, \tilde \ell_{n-k-1})
=(\ell_1-1-z,\ldots, \ell_{n-k-1}-1-z) \mod{k},$$
and let $z\cdot \Gamma$ be the line-dot diagram obtained from $\Gamma$ by replacing the line segments $L_1,\ldots,$ $L_{n-k-1}$ in $\Gamma|_R$ by the line segments $[\tilde \ell_1 +1, k], \ldots, [\tilde \ell_{n-k-1}+1,k]$.
The configuration of the line segments in $\Gamma$ restricted to the columns indexed by $\alpha_{k+1},\ldots, \alpha_{n-2}$ remains unchanged.
We note that each orbit of the cyclic action of $\bbZ/k\bbZ$ on the set of line-dot diagrams of $\car{n+1}{k}$ has size $k$. As well, there is an action of $\bbZ/k\bbZ$ on the set of truncated unified diagrams $\calU_G^{(k,i)}$ that is induced in the following way.
Fix a labeled level-$(k,i)$ $(t_{k+1},\ldots, t_n)$-Dyck path $(\bq,\kappa)$, and consider the set of truncated unified diagrams $U_j=(\bq,\kappa,\Gamma_j) \in \calU_G^{(k,i)}$ for $j=0,\ldots, k-1$, where $\{\Gamma_0,\ldots, \Gamma_{k-1}\}$ is an orbit of line-dot diagrams under the $\bbZ/k\bbZ$-action.
Necessarily, each $\Gamma_j$ has at most $n-k-1-i$ line segments, and the cyclic $\bbZ/k\bbZ$-action is defined in the same way as before.
In each truncated unified diagram $U_j$, the embedded line-dot diagram becomes a gravity diagram as we take the convention that the line segments should occupy the lowest possible dots in each column, so an orbit of line-dot diagrams of size $k$ can induce an orbit of truncated unified diagrams of size less than $k$.
Given a $\bbZ/k\bbZ$-orbit $\calO$ of truncated unified diagrams, we will show that
$$\sum_{U\in \calO} S(U) = k^{m-n-i-1}|\calO|.$$
We first consider the case where the orbit $\calO = \{U_0,\ldots, U_{k-1}\}$ has size $k$.
Let $\bc(U_j)$ denote the $k$-hull of $U_j$, and let $\bc_0 = (c_1,\ldots, c_k)= \bc(U_0)$.
Let $\bc_j = \bc_0 + \be_{k}-\be_{j}$ be as in Lemma~\ref{lem.thelemma} (with a shift in indices).
We claim that $\bc(U_j)$ is the backward cyclic shift of $\bc_j$ by $j$ positions.
Suppose $\rho(\Gamma_0) = (\ell_1-1,\ldots, \ell_{n-k-1-i}-1)$ so that $\rho(\Gamma_j) = (\ell_1-1-j, \ldots, \ell_{n-k-1-i}-1-j) \mod{k}$.
Then by Lemma~\ref{lem.cvector},
\begin{align*}
\bc(U_0)
&= \bh + \bb - (n-k-1-i) \be_{k},\\
\bc(U_j) &= \bh + \beta^j(\bb) -(n-k-1-i)\be_{k},
\end{align*}
where $\bb=(b_1,\ldots, b_k)=\sum_{p=1}^{n-k-1-i} \be_{\ell_p}$, and $\beta^j$ denotes the backward cyclic shift of coordinates by $j$ positions.
This simplifies to
\begin{align*}
\bc_0
&= (c_1,\ldots, c_k) = \bc(U_0)\\
&= (n-k+b_1, n-k+b_2, \ldots, n-k+b_{k-1}, 2(n-k-1)-i+b_k-(n-k-1-i))\\
&= (n-k+b_1, n-k+b_2, \ldots, n-k+b_{k-1}, n-k+b_k-1),
\end{align*}
and similarly,
\begin{align*}
\bc(U_j)
&=(n-k+b_{j+1}, \ldots, n-k+b_k, n-k+b_1, \ldots, n-k+b_{j-1}, n-k+b_j-1)\\
&=(c_{j+1},\ldots, c_k+1, c_1, \ldots, c_{j-1}, c_j-1)\\
&=\beta^j(\bc_0 + \be_k - \be_j)
=\beta^j(\bc_j),
\end{align*}
as claimed.
Because $\bc(U_j)$ and $\bc_j$ are simply rearrangements of each other, the number of ways to complete $U_j$ to a standardized unified diagram is
$$S(U_j) = \sum_{\bd\in \calC(\bc(U_j))} {m-n-i\choose \bd} = \sum_{\bd\in \calC(\bc_j)} {m-n-i\choose \bd}.$$
By Corollary~\ref{cor.thecorollary}, we conclude that when $\calO$ is an orbit of size $k$,
$$\sum_{j=0}^{k-1} S(U_j) = k^{m-n-i}.$$
More generally, in the case that the orbit $\calO$ has size less than $k$, the difference is that the $\bbZ/k\bbZ$-action generates $k$ distinct line-dot diagrams but only $|\calO|$ distinct representatives as gravity diagrams, and so
$$\sum_{U\in \calO} S(U) = \frac{|\calO|}{k} k^{m-n-i}.$$
We finally see that
$$\left|\,\mathcal{SU}_G^{(k,i)}\right|
= \sum_{\calO} \sum_{U\in \calO} S(U)= \sum_{\calO} k^{m-n-1-i} |\calO|
= k^{m-n-1-i}\cdot T_k(n-k-1,i),$$
where the last equality follows because the sum is over all truncated level-$(k,i)$ unified diagrams for $G$, and by Theorem~\ref{thm.theta} there are $T_k(n-k-1,i)$ of these.
\end{proof}
\begin{example} \label{eg.4car}
Figure~\ref{fig.4-car_orbit} shows a $\bbZ/4\bbZ$-orbit of line-dot diagrams for $G=\car{9}{4}$.
The rectangular region $R$ of a line-dot diagram is the portion restricted to the columns labeled $\alpha_1,\ldots, \alpha_4$.
Note that the $\bbZ/4\bbZ$-action on the line-dot diagrams leaves the line segments which are supported on the columns $\alpha_4, \alpha_5, \alpha_6$ unchanged.
\begin{figure}[ht!]
\begin{center}
\input{4-car_orbit}
\end{center}
\caption{The $\bbZ/4\bbZ$-orbit of line-dot diagrams for $\car94$.}
\label{fig.4-car_orbit}
\end{figure}
The $4$-hull of a level-$(4,0)$ truncated unified diagram with empty gravity diagram is $\bh = (4,4,4,6)$. Fixing the level-$(4,0)$ labeled Dyck path $(\bq,\kappa)$ and embedding $\Gamma_0$ into the $8\times 18$ grid to obtain a
truncated unified diagram $U_0=(\bq,\kappa,\Gamma_0)$, the composition which represents its $4$-hull is
\begin{align*}
\bc(U_0)
&= (4,4,4,6) + (1,0,0,-1) + (0,0,1,-1) + (0,0,0,1-1)\\
&= (4,4,4,6) + (1,0,1,1) + (0,0,0,-3)\\
&= (5,4,5,4).
\end{align*}
In all, the compositions representing the $4$-hulls of the truncated unified diagrams in this orbit are
$$\bc(U_0)=(5,4,5,4), \quad
\bc(U_1)=(4,5,5,4), \quad
\bc(U_2)=(5,5,5,3), \quad
\bc(U_3)=(5,5,4,4), $$
and shifting $\bc(U_j)$ forwards by $j$ positions gives
$$\bc_0=(5,4,5,4), \quad
\bc_1=(4,4,5,5), \quad
\bc_2=(5,3,5,5), \quad
\bc_3=(5,4,4,5). $$
The number of ways to complete the truncated unified diagram $U_j=(\bq,\kappa,\Gamma_j)$ is $S(U_j) = \sum_{\bd\in\calC(\bc_j)} {18\choose \bd}$, where
by Lemma~\ref{lem.thelemma}, the sets
\begin{align*}
\calC(\bc_0) &=\{ \bd \vDash 18 \mid d_0\geq5, d_0+d_1\geq9, d_0+d_1+d_2\geq14\},\\
\calC(\bc_1) &=\{ \bd \vDash 18 \mid d_1\geq4, d_1+d_2\geq9, d_0<5\},\\
\calC(\bc_2) &=\{ \bd \vDash 18 \mid d_2\geq5, d_0+d_1<9, d_1<4\},\\
\calC(\bc_3) &=\{ \bd \vDash 18 \mid d_0+d_1+d_2<14, d_1+d_2<9, d_2<5\},
\end{align*}
partition the entire set $\calC(18,4)$ of weak compositions of $m-n=18$ with $k=4$ parts. Therefore, $\sum_{j=0}^3 S(U_j) = 4^{18}. $
\end{example}
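As a check on Lemma~\ref{lem.thelemma} and on the sets displayed above (this is an illustrative Python computation, not part of the proof), one can verify directly that the four regions partition $\calC(18,4)$ and that the completion counts sum to $4^{18}$.
\begin{verbatim}
from math import factorial

def compositions(total, parts):
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def multinomial(total, d):
    out = factorial(total)
    for part in d:
        out //= factorial(part)
    return out

# the four regions C(c_j) above, written out as explicit membership tests
regions = [
    lambda d: d[0] >= 5 and d[0] + d[1] >= 9 and d[0] + d[1] + d[2] >= 14,
    lambda d: d[1] >= 4 and d[1] + d[2] >= 9 and d[0] < 5,
    lambda d: d[2] >= 5 and d[0] + d[1] < 9 and d[1] < 4,
    lambda d: d[0] + d[1] + d[2] < 14 and d[1] + d[2] < 9 and d[2] < 5,
]

S = [0, 0, 0, 0]
for d in compositions(18, 4):
    members = [j for j, region in enumerate(regions) if region(d)]
    assert len(members) == 1          # the regions partition C(18, 4)
    S[members[0]] += multinomial(18, d)
print(sum(S) == 4 ** 18)              # True
\end{verbatim}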
\begin{example}
We have seen in Figure~\ref{fig.car62} that there are $\Cat(3,5)=7$ out-degree gravity diagrams for $G=\car62$. For each truncated unified diagram $U\in \calU_G^{(2,0)}$ with a specified out-degree gravity diagram, we compute the number $S(U)$ of standardized level-$(2,0)$ unified diagrams whose truncation is $U$.
\begin{center}
\begin{tikzpicture}[scale=0.4]
\begin{scope}[scale=1]
\draw[fill, color=gray!85] (0,0) rectangle (1,3);
\draw[fill, color=gray!85] (1,3) rectangle (2,5);
\draw[fill, color=gray!85] (2,5) rectangle (3,6);
\draw[fill, color=gray!85] (3,6) rectangle (4,7);
\draw[very thin, color=gray!100] (0,0) grid (4,7);
\node[vertex][fill, minimum size=3pt] at (2.5,6.5) {};
\node[vertex][fill, minimum size=3pt] at (1.5,5.5) {};
\node[vertex][fill, minimum size=3pt] at (1.5,6.5) {};
\node[vertex][fill, minimum size=3pt] at (0.5,3.5) {};
\node[vertex][fill, minimum size=3pt] at (0.5,4.5) {};
\node[vertex][fill, minimum size=3pt] at (0.5,5.5) {};
\node[vertex][fill, minimum size=3pt] at (0.5,6.5) {};
\draw[very thick, color=red] (1,7)--(4,7);
\node at (2,-1) {\footnotesize$S(U_1)=99$};
\end{scope}
\begin{scope}[scale=1, xshift=150]
\draw[fill, color=gray!85] (0,0) rectangle (1,3);
\draw[fill, color=gray!85] (1,3) rectangle (2,5);
\draw[fill, color=gray!85] (2,5) rectangle (3,6);
\draw[fill, color=gray!85] (3,6) rectangle (4,7);
\draw[very thin, color=gray!100] (0,0) grid (4,7);
\node[vertex][fill, minimum size=3pt] at (2.5,6.5) {};
\node[vertex][fill, minimum size=3pt] at (1.5,5.5) {};
\node[vertex][fill, minimum size=3pt] at (1.5,6.5) {};
\node[vertex][fill, minimum size=3pt] at (0.5,3.5) {};
\node[vertex][fill, minimum size=3pt] at (0.5,4.5) {};
\node[vertex][fill, minimum size=3pt] at (0.5,5.5) {};
\node[vertex][fill, minimum size=3pt] at (0.5,6.5) {};
\draw (0.5,3.5)--(1.5,5.5);
\draw[very thick, color=red] (1,7)--(4,7);
\node at (2,-1) {\footnotesize$S(U_2)=64$};
\end{scope}
\begin{scope}[scale=1, xshift=300]
\draw[fill, color=gray!85] (0,0) rectangle (1,3);
\draw[fill, color=gray!85] (1,3) rectangle (2,5);
\draw[fill, color=gray!85] (2,5) rectangle (3,6);
\draw[fill, color=gray!85] (3,6) rectangle (4,7);
\draw[very thin, color=gray!100] (0,0) grid (4,7);
\node[vertex][fill, minimum size=3pt] at (2.5,6.5) {};
\node[vertex][fill, minimum size=3pt] at (1.5,5.5) {};
\node[vertex][fill, minimum size=3pt] at (1.5,6.5) {};
\node[vertex][fill, minimum size=3pt] at (0.5,3.5) {};
\node[vertex][fill, minimum size=3pt] at (0.5,4.5) {};
\node[vertex][fill, minimum size=3pt] at (0.5,5.5) {};
\node[vertex][fill, minimum size=3pt] at (0.5,6.5) {};
\draw (1.5,5.5)--(2.5,6.5);
\draw[very thick, color=red] (1,7)--(4,7);
\node at (2,-1) {\footnotesize$S(U_3)=29$};
\end{scope}
\begin{scope}[scale=1, xshift=450]
\draw[fill, color=gray!85] (0,0) rectangle (1,3);
\draw[fill, color=gray!85] (1,3) rectangle (2,5);
\draw[fill, color=gray!85] (2,5) rectangle (3,6);
\draw[fill, color=gray!85] (3,6) rectangle (4,7);
\draw[very thin, color=gray!100] (0,0) grid (4,7);
\node[vertex][fill, minimum size=3pt] at (2.5,6.5) {};
\node[vertex][fill, minimum size=3pt] at (1.5,5.5) {};
\node[vertex][fill, minimum size=3pt] at (1.5,6.5) {};
\node[vertex][fill, minimum size=3pt] at (0.5,3.5) {};
\node[vertex][fill, minimum size=3pt] at (0.5,4.5) {};
\node[vertex][fill, minimum size=3pt] at (0.5,5.5) {};
\node[vertex][fill, minimum size=3pt] at (0.5,6.5) {};
\draw (0.5,4.5)--(1.5,6.5);
\draw (0.5,3.5)--(1.5,5.5);
\draw[very thick, color=red] (1,7)--(4,7);
\node at (2,-1) {\footnotesize$S(U_4)=29$};
\end{scope}
\begin{scope}[scale=1, xshift=600]
\draw[fill, color=gray!85] (0,0) rectangle (1,3);
\draw[fill, color=gray!85] (1,3) rectangle (2,5);
\draw[fill, color=gray!85] (2,5) rectangle (3,6);
\draw[fill, color=gray!85] (3,6) rectangle (4,7);
\draw[very thin, color=gray!100] (0,0) grid (4,7);
\node[vertex][fill, minimum size=3pt] at (2.5,6.5) {};
\node[vertex][fill, minimum size=3pt] at (1.5,5.5) {};
\node[vertex][fill, minimum size=3pt] at (1.5,6.5) {};
\node[vertex][fill, minimum size=3pt] at (0.5,3.5) {};
\node[vertex][fill, minimum size=3pt] at (0.5,4.5) {};
\node[vertex][fill, minimum size=3pt] at (0.5,5.5) {};
\node[vertex][fill, minimum size=3pt] at (0.5,6.5) {};
\draw (1.5,6.5)--(2.5,6.5);
\draw (0.5,3.5)--(1.5,5.5);
\draw[very thick, color=red] (1,7)--(4,7);
\node at (2,-1) {\footnotesize$S(U_5)=64$};
\end{scope}
\begin{scope}[scale=1, xshift=750]
\draw[fill, color=gray!85] (0,0) rectangle (1,3);
\draw[fill, color=gray!85] (1,3) rectangle (2,5);
\draw[fill, color=gray!85] (2,5) rectangle (3,6);
\draw[fill, color=gray!85] (3,6) rectangle (4,7);
\draw[very thin, color=gray!100] (0,0) grid (4,7);
\node[vertex][fill, minimum size=3pt] at (2.5,6.5) {};
\node[vertex][fill, minimum size=3pt] at (1.5,5.5) {};
\node[vertex][fill, minimum size=3pt] at (1.5,6.5) {};
\node[vertex][fill, minimum size=3pt] at (0.5,3.5) {};
\node[vertex][fill, minimum size=3pt] at (0.5,4.5) {};
\node[vertex][fill, minimum size=3pt] at (0.5,5.5) {};
\node[vertex][fill, minimum size=3pt] at (0.5,6.5) {};
\draw (0.5,3.5)--(1.5,5.5)--(2.5,6.5);
\draw[very thick, color=red] (1,7)--(4,7);
\node at (2,-1) {\footnotesize$S(U_6)=64$};
\end{scope}
\begin{scope}[scale=1, xshift=900]
\draw[fill, color=gray!85] (0,0) rectangle (1,3);
\draw[fill, color=gray!85] (1,3) rectangle (2,5);
\draw[fill, color=gray!85] (2,5) rectangle (3,6);
\draw[fill, color=gray!85] (3,6) rectangle (4,7);
\draw[very thin, color=gray!100] (0,0) grid (4,7);
\node[vertex][fill, minimum size=3pt] at (2.5,6.5) {};
\node[vertex][fill, minimum size=3pt] at (1.5,5.5) {};
\node[vertex][fill, minimum size=3pt] at (1.5,6.5) {};
\node[vertex][fill, minimum size=3pt] at (0.5,3.5) {};
\node[vertex][fill, minimum size=3pt] at (0.5,4.5) {};
\node[vertex][fill, minimum size=3pt] at (0.5,5.5) {};
\node[vertex][fill, minimum size=3pt] at (0.5,6.5) {};
\draw (0.5,3.5)--(1.5,5.5)--(2.5,6.5);
\draw (0.5,4.5)--(1.5,6.5);
\draw[very thick, color=red] (1,7)--(4,7);
\node at (2,-1) {\footnotesize$S(U_7)=29$};
\end{scope}
\end{tikzpicture}
\end{center}
Under the $\bbZ/2\bbZ$-action described in Theorem~\ref{thm.kparkingtriangle}, the orbits
are $\{U_1, U_4\}$, $\{U_3,U_7\}$, $\{U_5,U_6\}$, and $\{U_2\}$.
For example, counting the possible labeled Dyck path completions arising from the $\{U_1, U_4\}$ orbit, we have
$$S(U_1)+S(U_4) = \sum_{i=0}^4{7\choose i} + \sum_{i=0}^2{7\choose i} = \sum_{i=0}^7{7\choose i} =2^7. $$
Summing over all orbits,
$$\left|\, \mathcal{SU}_{\car62}^{(2,0)}\right|= \sum_{\calO} \sum_{U\in \calO} S(U) = 2^7 +2^7 + 2^7 + 2^6 = 7\cdot 2^6.$$
\end{example}
\begin{remark}
At $k=1$, equation~\eqref{eqn.stdU}, Theorem~\ref{thm.theta}, and Theorem~\ref{thm.kparkingtriangle} recover the analogous results for the classical caracol graph, proved in~\cite[Proposition 5.1, Theorem 5.6, and Theorem 5.9]{BGHHKMY}.
At $k=n-1$, Theorems~\ref{thm.theta} and~\ref{thm.kparkingtriangle} reduce to
$$\left|\,\calU_{\car{n+1}{n-1}}^{(n-1,i)}\right|
= T_{n-1}(0,i) = 1, \qquad\hbox{and}\qquad
\left|\, \mathcal{SU}_{\car{n+1}{n-1}}^{(n-1,i)}\right|
= (n-1)^{n-3-i},
$$
where $i$ is necessarily $0$. These are the same results obtained for the Pitman--Stanley graph $\PS_n$ in~\cite{BGHHKMY}.
\end{remark}
\subsection{Volume of the $k$-caracol polytope}
Having developed all the tools necessary, we conclude this section with the computation that yields the volume of the flow polytope of $\car{n+1}{k}$ with net flow $\ba=(1,\ldots, 1,-n)$.
\begin{theorem}\label{thm.oneoneone}
For $k\in \bbN$ and $n>k$, let $a=n-k$ and $b=k(n-k)-1$. Then
$$\vol\calF_{\car{n+1}{k}}(1,\ldots, 1,-n)
=\Cat(a,b)\cdot k^{b-1}\cdot n^{a-1}.$$
\end{theorem}
\begin{proof} Combining Theorem~\ref{thm.parkinglidskii} and Equation~\eqref{eqn.stdU},
$$\vol\calF_{\car{n+1}{k}}(1,\ldots, 1,-n)
= \left|\,\calU_{\car{n+1}{k}}\right|
= \sum_{i=0}^{n-k-1} {m-n\choose i} \left|\,\mathcal{SU}_{\car{n+1}{k}}^{(k,i)}\right|.
$$
We have $m-n= (k+1)(n-k)-2 = a+b-1$. Applying Theorem~\ref{thm.kparkingtriangle}, we have
\begin{align*}
\vol\calF_{\car{n+1}{k}}(1,\ldots, 1,-n)
&= \sum_{i=0}^{a-1} {a+b-1\choose i} {a+b-1-i\choose b} a^{i-1} k^{a+b-2-i}\\
&= \frac{1}{a}\frac{(a+b-1)!}{b!(a-1)!} \cdot k^{b-1} \cdot
\sum_{i=0}^{a-1} \frac{(a-1)!}{i!(a-1-i)!} a^{i} k^{a-1-i}\\
&= \Cat(a,b) \cdot k^{b-1} \cdot n^{a-1},
\end{align*}
where the last equality uses the binomial theorem together with $a+k=n$, as claimed.
\end{proof}
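As a small sanity check, take $n=5$ and $k=2$ as in the example above, so that $a=3$, $b=5$, and (as in the proof) $\Cat(3,5)=\frac{7!}{3!\,5!}=7$. The theorem then gives
$$\vol\calF_{\car62}(1,1,1,1,1,-5)=7\cdot 2^{4}\cdot 5^{2}=2800,$$
which agrees with evaluating Equation~\eqref{eqn.stdU} directly: the formula of Theorem~\ref{thm.kparkingtriangle} used in the proof gives $\left|\,\mathcal{SU}_{\car62}^{(2,0)}\right|=448$, $\left|\,\mathcal{SU}_{\car62}^{(2,1)}\right|=192$, and $\left|\,\mathcal{SU}_{\car62}^{(2,2)}\right|=48$, so that ${7\choose 0}448+{7\choose 1}192+{7\choose 2}48=2800$.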
\begin{remark}
At $k=1$, this recovers the result for the classical caracol graph~\cite[Theorem 5.10]{BGHHKMY},
$$\vol\calF_{\Car_{n+1}}(1,\ldots, 1,-n) = \Cat(n-2) \cdot n^{n-2}.$$
At $k=n-1$, we have $a=n-k=1$ and $b=ka-1=n-2$, so $\Cat(a,b)=\Cat(1,n-2)=1$, and we recover the result for the Pitman--Stanley graph,
$$\vol \calF_{\PS_n}(1,\ldots, 1, -(n-1)) = k^{b-1} = (n-1)^{n-3}.$$
\end{remark}
\section{The $k$-caracol polytope at other net flows} \label{sec.abbb}
The tools and combinatorial objects developed in the previous section can be augmented for some cases of more general net flow vectors. We now introduce unified diagrams for flow polytopes with net flow vector $\ba=(a_1,\ldots,a_n,-\sum_{i=1}^n a_i)$.
\begin{defn} \label{defn.unifieda}
Let $G$ be an acyclic directed graph with $n+1$ vertices and shifted out-degree vector $\bt$. A {\em unified diagram} for the flow polytope $\calF_G(\ba)$ is a tuple $(\bs, \sigma, \alpha, \Gamma)$ where $(\bs,\sigma)$ is a labeled $\bt$-Dyck path, $\Gamma$ is an out-degree gravity diagram in $\outgrav_G(\bs-\bt,0)$, and $\alpha$ is a vector in $[a_1]^{s_1}\times \cdots \times [a_n]^{s_n}$. Let $\calU_G(\ba)$ denote this set of unified diagrams.
\end{defn}
We may interpret $\alpha$ as a second labeling on the north steps of the $\bt$-Dyck path, where the north steps in the $j$-th column can have a label chosen from $\{1,\ldots, a_j\}$ in any order. We call $\alpha$ the {\em net flow label}. Observe that if any $a_j$ in the net flow vector is $0$, then the $\bt$-Dyck path in a corresponding unified diagram cannot have any north steps in its $j$-th column. Indeed, when $\ba=(1,0,\ldots, 0,-1)$, the set of unified diagrams for $\calF_G(\ba)$ is effectively just the set of out-degree gravity diagrams $\outgrav_G(\bv_{\mathrm{out}})$ because the only $\bt$-Dyck path allowed in the unified diagrams is $N^{m-n}E^{n}$ and it has a unique $\sigma$ labeling.
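For instance, if $n=2$, $\ba=(2,3,-5)$, and a given $\bt$-Dyck path has two north steps in its first column and one in its second, then Definition~\ref{defn.unifieda} allows $2^2\cdot 3=12$ choices of the net flow label $\alpha$ for that path.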
\begin{theorem}\label{thm.abbb}
For $k\in \bbN$ and $n>k$, let $a=n-k$ and $b=k(n-k)-1$. Let $\ba = (x^k,y^{n-k},-kx-(n-k)y)$ where $x\in \bbR_{>0}$ and $y\in \bbR_{\geq0}$. Then
$$\vol\calF_{\car{n+1}{k}}(\ba)
=\Cat(a,b)\cdot k^{b-1} x^b (kx+(n-k)y)^{a-1} .$$
\end{theorem}
\begin{proof}
As in Equation~\eqref{eqn.stdU}, when we partition the set of unified diagrams $\calU_G(\ba)$ according to standardized level-$(k,i)$ unified diagrams, there are ${m-n\choose i}$ ways to choose a parking function label set of size $i$ for the standardization, $x^{m-n-i}$ ways to choose net flow labels for the north steps of the $\bt$-Dyck path in the first $k$ columns, and $y^i$ ways to choose net flow labels for the remaining columns (for positive integer $x$ and $y$; the general case follows since both sides of the resulting identity are polynomials in $x$ and $y$).
Thus we have
\begin{equation}\label{eqn.logconcave}
\vol\calF_{\car{n+1}{k}}(\ba)
= \Big|\,\calU_{\car{n+1}{k}}(\ba)\Big|
= \sum_{i=0}^{n-k-1} {m-n\choose i} x^{m-n-i} y^i \Big|\,\mathcal{SU}_{\car{n+1}{k}}^{(k,i)}\Big|.
\end{equation}
Applying Theorem~\ref{thm.kparkingtriangle} with $m-n=(k+1)(n-k)-2=a+b-1$, we compute
\begin{align*}
\vol\calF_{\car{n+1}{k}}(\ba)
&= \sum_{i=0}^{a-1} {a+b-1\choose i} x^{a+b-1-i} y^i
{a+b-1-i\choose b} a^{i-1}k^{a+b-2-i}\\
&= (kx)^{b-1} x \cdot \Cat(a,b)\cdot
\sum_{i=0}^{a-1} {a-1\choose i} (kx)^{a-1-i} (ay)^i \\
&= (kx)^{b-1} x \cdot \Cat(a,b)\cdot (kx +ay)^{a-1},
\end{align*}
which is the claimed formula, since $ay=(n-k)y$ and $(kx)^{b-1}x=k^{b-1}x^{b}$; setting $x=y=1$ recovers Theorem~\ref{thm.oneoneone}.
\end{proof}
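Continuing the example $n=5$, $k=2$ (so $a=3$, $b=5$, and $\Cat(3,5)=7$), the theorem gives
$$\vol\calF_{\car62}(x,x,y,y,y,-2x-3y)=7\cdot 2^{4}\,x^{5}(2x+3y)^{2}=112\,x^{5}(2x+3y)^{2},$$
which specializes to $2800$ at $x=y=1$ and to $7\cdot 2^{6}=448$ at $(x,y)=(1,0)$, in agreement with Theorem~\ref{thm.oneoneone} and the corollary below.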
\begin{corollary}
For $k\in \bbN$ and $n>k$, let $a=n-k$ and $b=k(n-k)-1$. Then
$$\vol\calF_{\car{n+1}{k}}(\underbrace{1,\ldots,1}_k,\underbrace{0,\ldots, 0}_{n-k},-k)
=\Cat(a,b)\cdot k^{a+b-2}.$$
\end{corollary}
\begin{remark}
When $a_{k+1}=\cdots=a_n=0$, then the $\bt$-Dyck paths in the unified diagrams for $\calF_{\car{n+1}{k}}(1^k,0^{n-k},-k)$ can only have north steps in the first $k$ columns. In other words,
$$\vol\calF_{\car{n+1}{k}}(\underbrace{1,\ldots,1}_k,\underbrace{0,\ldots, 0}_{n-k},-k)
= \Cat(a,b)\cdot k^{a+b-2}
= \Big|\,\mathcal{SU}_{\car{n+1}{k}}^{(k,0)} \Big|$$
is the number of standardized level-$(k,0)$ unified diagrams, in agreement with Theorem~\ref{thm.kparkingtriangle}.
\end{remark}
\subsection{Log-concavity of the $k$-parking numbers}
Let $G=\car{n+1}{k}$ and $\ba=(x^k, y^{n-k}, -kx-(n-k)y)$ such that $x\in \bbR_{>0}$ and $y\in \bbR_{\geq0}$.
By a result of Baldoni and Vergne~\cite[Section 3.4]{BV}, the flow polytope $\calF_G(\ba)$ can be expressed as the Minkowski sum
$$\calF_G(\ba) = x\calF_G(\underbrace{1,\ldots,1}_k, \underbrace{0,\ldots,0}_{n-k}, -k)
+ y\calF_G(\underbrace{0,\ldots,0}_k, \underbrace{1,\ldots,1}_{n-k}, -(n-k)). $$
The {\em Aleksandrov-Fenchel inequalities}~\cite{A,F1,F2} state that for convex polytopes $P$ and $Q$ there exist $V_0,\ldots,V_d \in \bbR_{\geq0}$ (the mixed volumes of $P$ and $Q$) such that
$$\vol(xP+yQ) = \sum_{i=0}^d {d\choose i} x^{d-i} y^i V_i, $$
where $d=\dim(P+Q)$, and moreover, the $V_i$ are {\em log-concave}, that is,
$V_i^2 \geq V_{i-1}V_{i+1}$ for all $0<i<d$.
Combining our Equation~\eqref{eqn.logconcave} with Theorem~\ref{thm.kparkingtriangle}, we have
\begin{align*}
\vol\calF_G(\ba)
&= \sum_{i=0}^{n-k-1} {m-n\choose i}x^{m-n-i}y^i |\,\mathcal{SU}_G^{(k,i)}|\\
&= \sum_{i=0}^{n-k-1} {m-n\choose i} (kx)^{m-n-i}y^i k^{-1} T_k(n-k-1,i),
\end{align*}
so the Aleksandrov-Fenchel inequalities imply that, for fixed $n$ and $k$, the $k$-parking numbers $T_k(n-k-1,i)$,
$i=0,\ldots, n-k-1$, form a log-concave sequence. See the Appendix for some values.
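For example, when $n=5$ and $k=2$, one computes $T_2(2,0)=7$, $T_2(2,1)=6$, and $T_2(2,2)=3$ from this identification, and indeed $6^2=36\geq 7\cdot 3=21$.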
\section{A multigraph related to the $k$-caracol graph} \label{sec.mcar}
In the previous section, we applied techniques developed in~\cite{BGHHKMY} to compute the volumes of flow polytopes of graphs which are not planar. In this section, we will see that there is a family of planar multigraphs which give rise to flow polytopes with volume formulas that are similar to the formulas of the previous sections.
\subsection{Gravity diagrams for the $k$-multicaracol graph}
We next define the family of {\em $k$-multicaracol graphs}.
\begin{defn} Let $k,a\in \bbN$. The directed graph $G=\mcar{a+2}{k}$ on the vertex set $\{0,1,\ldots, a+1\}$ is constructed by starting with the Pitman--Stanley graph $\PS_{a+1}$, then adding the vertex $0$, and $k$ directed edges $(0,i)$ for $i=1,\ldots, a$.
\end{defn}
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=0.9]
\node[vertex][fill,label=below:\footnotesize{$0$}](a0) at (0,0) {};
\node[vertex][fill,label=below:\footnotesize{$1$}](a1) at (1,0) {};
\node[vertex][fill,label=below:\footnotesize{$2$}](a2) at (2,0) {};
\node[vertex][fill,label=below:\footnotesize{$3$}](a3) at (3,0) {};
\node[vertex][fill,label=below:\footnotesize{$4$}](a4) at (4,0) {};
\node at (4.5,0) {$\cdots$};
\node[vertex][fill,label=below :\footnotesize{$5$}](a10) at (5,0) {};
\node[vertex][fill,label=below:\footnotesize{$6$}](a11) at (6,0) {};
\node[vertex][fill,label=right:\footnotesize{$7$}](a12) at (7,0) {};
\draw[-stealth, thick, color=red] (a0) to[out=30,in=130] (a1);
\draw[-stealth, thick] (1,0)--(1.95,0);
\draw[-stealth, thick] (2,0)--(2.95,0);
\draw[-stealth, thick] (3,0)--(3.95,0);
\draw[thick] (4,0)--(4.15,0); \draw[thick] (4.75,0)--(4.85,0);
\draw[-stealth, thick] (4.85,0)--(4.95,0);
\draw[-stealth, thick] (5,0)--(5.95,0);
\draw[-stealth, thick] (6,0) to (6.95,0);
\draw[-stealth, thick, color=red] (a0) to[out=35,in=130] (a2);
\draw[-stealth, thick, color=red] (a0) to[out=40,in=130] (a3);
\draw[-stealth, thick, color=red] (a0) to[out=45,in=130] (a4);
\draw[-stealth, thick, color=red] (a0) to[out=50,in=130] (a10);
\draw[-stealth, thick, color=red] (a0) to[out=55,in=130] (a11);
\draw[-stealth, thick] (a10) to[out=-50,in=240] (a12);
\draw[-stealth, thick] (a4) to[out=-50,in=240] (a12);
\draw[-stealth, thick] (a3) to[out=-50,in=240] (a12);
\draw[-stealth, thick] (a2) to[out=-50,in=240] (a12);
\draw[-stealth, thick] (a1) to[out=-50,in=240] (a12);
\end{tikzpicture}
\end{center}
\caption{The graph $G=\mcar{8}{k}$. A red edge of the form $(0,i)$ in this picture represents $k$ distinct edges.
}
\end{figure}
\begin{remark}
We shall see that there are many similarities between the flow polytopes of the graphs $\car{n+1}{k}$ and $\mcar{a+2}{k}$, where $a=n-k$. First we note that they have the same dimension, $(k+1)(n-k)-2 = (k+1)a-2$. If $\{f_e\}_{e\in E(\car{n+1}{k})}$ is a flow on the graph $\car{n+1}{k}$, then for each $p=1,\ldots, k-1$, the flow on the edge $(p,p+1)$ is completely determined by the flow conservation equation
$$f_{(p,p+1)} = a_p + \sum_{(i,p)\in E(\car{n+1}{k})} f_{(i,p)}
- \sum_{\tiny\begin{array}{c} (p,j)\in E(\car{n+1}{k})\\ j\neq p+1 \end{array}} f_{(p,j)},$$
so if we project $\calF_{\car{n+1}{k}}(\ba)$ onto the coordinates $\{x_e\}_{e\notin \{ (i,i+1) \mid i=1,\ldots, k-1\}}$, then it may be viewed as a polytope contained in $\calF_{\mcar{a+2}{k}}(a_1+\cdots+a_k, a_{k+1},\ldots, a_n, -\sum a_i)$.
\end{remark}
The graph $\mcar{a+2}{k}$ has $a+2$ vertices and $m=(k+2)a-1$ edges. Its shifted out-degree vector and shifted in-degree vector are
$$
\bt = (t_0,\ldots, t_a) = (ak-1,\underbrace{1,\ldots, 1}_{a-1},0)
\qquad\hbox{and}\qquad
\bu = (u_1,\ldots, u_{a+1}) = (k-1,\underbrace{k, \ldots, k}_{a-1}, a-1),
$$
and their coordinates sum to $m-a-1= (k+1)a-2$. We also have
$$\bv_{\mathrm{out}} = \sum_{j=0}^{a-2} (a-1-j)\alpha_j
\qquad \hbox{and}\qquad
\bv_{\mathrm{in}} = \sum_{j=1}^a (jk-1)\alpha_j.$$
The in-degree gravity diagrams are defined on a triangular array of $jk-1$ dots in the $j$-th column for $j=1,\ldots,a$. As in the case of in-degree gravity diagrams for the $k$-caracol graphs, we may choose the following conventions for the in-degree gravity diagrams for $k$-multicaracol graphs:
\begin{enumerate}
\item[(a)] each line segment must be horizontal,
\item[(b)] a longer line segment must be in a row above that of a shorter line segment.
\end{enumerate}
To be precise, the set of in-degree gravity diagrams for the $k$-multicaracol graph $\mcar{a+2}{k}$ is identical to the set of in-degree gravity diagrams for the $k$-caracol graph $\car{n+1}{k}$, where $a=n-k$.
This observation immediately leads to the next result.
\begin{theorem} \label{thm.mcar_onezerozero}
For $k,a\in \bbN$,
$$\vol \calF_{\mcar{a+2}{k}} (1,0,\ldots, 0,-1) = \Cat(a, ka-1).
$$
\end{theorem}
\begin{proof} By Corollary~\ref{cor.gravitydiagrams} and Theorem~\ref{thm.onezerozero},
$$\vol \calF_{\mcar{a+2}{k}} (1,0,\ldots, 0,-1)
= |\ingrav_{\mcar{a+2}{k}}(\bv_{\mathrm{in}}) |
= |\ingrav_{\car{k+a+1}{k}}(\bv_{\mathrm{in}}) |
= \Cat(a,ka-1).$$
\end{proof}
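For instance, for $a=3$ and $k=2$ this gives $\vol\calF_{\mcar{5}{2}}(1,0,0,0,-1)=\Cat(3,5)=7$, which coincides with $\vol\calF_{\car{6}{2}}(1,0,\ldots,0,-1)$, as the chain of equalities in the proof shows.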
\begin{remark}
In~\cite{BGHHKMY}, we introduced a polynomial for the volume of flow polytopes with properties similar to those of the Ehrhart polynomial of a polytope.
Let $G$ be a directed graph with vertex set $\{1,\ldots, n+1\}$ and $m$ edges. For any nonnegative integer $x\in \bbZ_{\geq0}$, the directed graph $\widehat{G}(x)$ on the vertex set $\{0,1,\ldots, n+1\}$ is constructed by starting with the directed graph $G$, then adding the vertex $0$, and $x$ directed edges $(0,i)$ for $i=1,\ldots, n$. Define the polynomial
$$E_G(x) = \vol \calF_{\widehat{G}(x)}(1,0,\ldots, 0,-1). $$
In the context of this paper, $\mcar{a+2}{k} = \widehat{\PS}_{a+1}(k)$, and it follows from~\cite[Proposition 8.7]{BGHHKMY} that
\begin{equation}
\vol \calF_{\mcar{a+2}{k}} (1,0,\ldots,0,-1)
= E_{\PS_{a+1}}(k)
= \Cat(a,ka-1).
\end{equation}
\end{remark}
By definition, the number of out-degree diagrams is equal to the number of in-degree gravity diagrams for any fixed flow polytope, so from the proof of Theorem~\ref{thm.mcar_onezerozero}, we also know that
$$|\outgrav_{\mcar{a+2}{k}}(\bv_{\mathrm{out}}) |
= |\outgrav_{\car{n+1}{k}}(\bv_{\mathrm{out}}) | = \Cat(a,ka-1),$$
where $a=n-k$. We can prove this result directly via a bijection, which will be used in a later result. But first, we need to describe our conventions for the out-degree gravity diagrams for $\mcar{a+2}{k}$. These out-degree gravity diagrams are defined on a triangular array of $a-1-j$ dots in the $j$-th column for $j=0,\ldots, a-2$, so every non-trivial line segment begins in the zero-th column indexed by $\alpha_0$, and moreover, each line segment (including the trivial one of length zero) which begins in the zero-th column is assigned one of $k$ colours $c_1,\ldots, c_k$. In addition, we choose the following conventions for the out-degree gravity diagrams:
\begin{enumerate}
\item[(a)] each line segment must be horizontal,
\item[(b)] a longer line segment must be in a row above that of a shorter line segment,
\item[(c)] and if there are two line segments of the same length but different colours, say $c_p$ and $c_q$ with $p<q$, then the line segment of colour $c_p$ lies in a row above the line segment of colour $c_q$.
\end{enumerate}
See the diagram on the right side of Figure~\ref{fig.mcarcar} for an example of an out-degree gravity diagram for $\mcar{10}{3}$.
\begin{prop}\label{prop.mcarcar}
For $k,a\in \bbN$,
$|\outgrav_{\mcar{a+2}{k}}(\bv_{\mathrm{out}})| = \Cat(a,ka-1)$.
\end{prop}
\begin{proof} We construct a bijection $\Xi: \outgrav_{\car{k+a+1}{k}}(\bv_{\mathrm{out}})\rightarrow \outgrav_{\mcar{a+2}{k}}(\bv_{\mathrm{out}})$ between the sets of gravity diagrams. For the remainder of this proof, we let $\Car = \car{k+a+1}{k}$ and $\MCar = \mcar{a+2}{k}$ to simplify the notation.
Heuristically, the multigraph $\MCar$ is obtained by contracting the path of length $k-1$ on the vertices $1,\ldots, k$ in $\Car$, to the vertex $0$ in $\MCar$.
The essential observation here is that the $k$ edges $(0,j)$ in $\MCar$ come from the $k$ edges $(i,j)$ in $\Car$ for $i=1,\ldots, k$, so a line segment in an out-degree gravity diagram for $\MCar$ which is coloured $c_i$ should be thought of as representing a positive root $\alpha_i + \cdots$ in $\Phi_{\Car}^+$.
With these observations, we define $\Xi: \outgrav_{\Car}(\bv_{\mathrm{out}})\rightarrow \outgrav_{\MCar}(\bv_{\mathrm{out}})$ as follows. Given an out-degree gravity diagram $\Gamma\in \outgrav_{\Car}$, we define $\Xi(\Gamma)$ to be the diagram obtained by `projecting' the first $k$ columns of $\Gamma$ to the zero-th column of $\Xi(\Gamma)$, where each line segment in $\Gamma$ that begins in the $i$-th column is assigned the colour $c_i$ in $\Xi(\Gamma)$.
The map $\Xi$ is well-defined because the array of dots in columns $j=k,\ldots, n-2$ in an out-degree gravity diagram for $\MCar$ is the same as the array of dots in an out-degree gravity diagram for $\Car$. Moreover, every nontrivial line segment in $\Gamma$ contains a dot from the $k$-th column. The conventions for the respective out-degree gravity diagrams were chosen so that the horizontal line segments will appear in the same order.
To reverse the map $\Xi$, simply take an out-degree gravity diagram for $\MCar$ and for each line segment coloured $c_i$, extend it to a line segment which begins in the $i$-th column in the out-degree gravity diagram for $\Car$. So $\Xi$ is a bijection.
Together with Proposition~\ref{prop.outgrav}, the result follows.
\end{proof}
\begin{figure} [ht!]
\begin{center}
\input{mcar-car}
\end{center}
\caption{The bijection between out-degree gravity diagrams for $\car{12}{3}$ and $\mcar{10}{3}$. The colours of the gravity line segments for $\mcar{10}{3}$ are red, blue, and green, in that order.}
\label{fig.mcarcar}
\end{figure}
We have seen that at net flow $(1,0,\ldots,0,-1)$, the flow polytopes of the graphs $\mcar{a+2}{k}$ and $\car{n+1}{k}$ have the same volume.
Next, we will see that the volumes of the flow polytopes of this pair of graphs are closely related at other net flows as well.
\subsection{Unified diagrams for the $k$-multicaracol graph}
\begin{theorem}\label{thm.mcarabbb}
For $k,a\in \bbN$,
$$\vol\calF_{\mcar{a+2}{k}}(kx,y,\ldots,y,-(kx+ay)) = \Cat(a,ka-1) \cdot (kx)^{ka-1} (kx+ay)^{a-1}.$$
\end{theorem}
\begin{proof} Let $\ba=(kx, y,\ldots, y, -(kx+ay))$.
We have
$$\vol\calF_{\mcar{a+2}{k}}(\ba)
= \Big|\,\calU_{\mcar{a+2}{k}} (\ba)\Big|
= \sum_{i=0}^{a-1} {(k+1)a-2\choose i} (kx)^{(k+1)a-2-i} y^i
\Big|\,\calU_{\mcar{a+2}{k}}^{(0,i)}\Big|.
$$
Letting $n=a+k$, we can extend the bijection $\Xi$ in Proposition~\ref{prop.mcarcar} to the set of truncated unified diagrams
$$\widehat{\Xi}: \calU_{\mcar{a+2}{k}}^{(0,i)} \rightarrow \calU_{\car{n+1}{k}}^{(k,i)}: (\bq,\kappa,\Gamma) \mapsto (\bq,\kappa, \Xi(\Gamma)),$$
so $|\,\calU_{\mcar{a+2}{k}}^{(0,i)}|= |\,\calU_{\car{n+1}{k}}^{(k,i)}|$.
Applying Theorem~\ref{thm.theta} and computing in the same way as in Theorem~\ref{thm.abbb},
\begin{align*}
\vol\calF_{\mcar{a+2}{k}}(\ba)
&= \sum_{i=0}^{a-1} {(k+1)a-2\choose i} (kx)^{(k+1)a-2-i} y^i {(k+1)a-2-i\choose ka-1}a^{i-1}\\
&= \Cat(a,ka-1)\cdot (kx)^{ka-1} (kx+ay)^{a-1}.
\end{align*}
\end{proof}
\begin{corollary} We have the following specializations:
\begin{enumerate}
\item[(a)] $\displaystyle \vol\calF_{\mcar{a+2}{k}}(k,1,\ldots,1, -k-a) = \Cat(a,ka-1) \cdot k^{ka-1} (k+a)^{a-1}. $
\item[(b)] $\displaystyle \vol\calF_{\mcar{a+2}{k}}(k,0,\ldots,0, -k) = \Cat(a,ka-1) \cdot k^{(k+1)a-2}. $
\end{enumerate}
\end{corollary}
\begin{remark}
Let $a=n-k$, so that $b=ka-1$. Comparing the results of Theorem~\ref{thm.mcarabbb} and Theorem~\ref{thm.abbb}, we see that
$$\vol\calF_{\mcar{n-k+2}{k}}(kx, y^{n-k}, -kx-(n-k)y)
=k\cdot \vol\calF_{\car{n+1}{k}}(x^k, y^{n-k}, -kx-(n-k)y). $$
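Indeed, the ratio of the two volume formulas is $(kx)^{ka-1}\big/\bigl(k^{b-1}x^{b}\bigr)=(kx)^{ka-1}\big/\bigl(k^{ka-2}x^{ka-1}\bigr)=k$.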
This implies that one can obtain a different proof of Theorem~\ref{thm.abbb} (and its specialization Theorem~\ref{thm.oneoneone}) if one can construct a $k$ to $1$ map from the set of unified diagrams for $\calF_{\mcar{n-k+2}{k}}(kx, y^{n-k}, -kx-(n-k)y)$ to the set of unified diagrams for $\calF_{\car{n+1}{k}}(x^k, y^{n-k}, -kx-(n-k)y)$. It may be interesting to understand this map from a geometric viewpoint.
\end{remark}
| {
"timestamp": "2019-10-23T02:20:29",
"yymm": "1910",
"arxiv_id": "1910.10060",
"language": "en",
"url": "https://arxiv.org/abs/1910.10060",
"abstract": "Recently, a combinatorial interpretation of Baldoni and Vergne's generalized Lidskii formula for the volume of a flow polytope was developed by Benedetti et al.. This converts the problem of computing Kostant partition functions into a problem of enumerating a set of objects called unified diagrams. We devise an enhanced version of this combinatorial model to compute the volumes of flow polytopes defined on a family of graphs called the k-caracol graphs, resulting in the first application of the model to non-planar graphs. At k=1 and k=n-1, we recover results for the classical caracol graph and the Pitman--Stanley graph. Furthermore, we introduce the notion of in-degree gravity diagrams for flow polytopes, which are equinumerous with (out-degree) gravity diagrams considered by Benedetti et al.. We show that for the k-caracol flow polytopes, these two kinds of gravity diagrams satisfy a natural combinatorial correspondence, which raises an intriguing question on the relationship in the geometry of two related polytopes.",
"subjects": "Combinatorics (math.CO)",
"title": "A Fuss-Catalan variation of the caracol flow polytope",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.984810954312569,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7076796235763817
} |
https://arxiv.org/abs/0809.1404 | On the unfolding of simple closed curves | I show that every rectifiable simple closed curve in the plane can be continuously deformed into a convex curve in a motion which preserves arc length and does not decrease the Euclidean distance between any pair of points on the curve. This result is obtained by approximating the curve with polygons and invoking the result of Connelly, Demaine, and Rote that such a motion exists for polygons. I also formulate a generalization of their program, thereby making steps toward a fully continuous proof of the result. To facilitate this, I generalize two of the primary tools used in their program: the Farkas Lemma of linear programming to Banach spaces and the Maxwell-Cremona Theorem of rigidity theory to apply to stresses represented by measures on the plane. | \section{Introduction}\label{intro}
Imagine a loop of string lying flat on a table without
crossing itself. Now suppose the loop is slowly
deformed until it becomes convex, without stretching
or breaking it, in an {\it expansive} motion.
By expansive, I mean that
if you pick any pair of points on the string, then during
the deformation, the distance between them will be
nondecreasing. Then we can ask whether, given
an initial loop, there always exists an expansive motion
which deforms that loop until it becomes convex. If the
loop is a polygon, then the answer is yes, as proved by
Connelly, Demaine, and Rote \cite{cdr}. The first theorem
of this paper (Theorem \ref{main}) is that the answer is yes for any rectifiable
curve, no matter how complicated (in section \ref{path},
we give some examples of pathological curves to which the
theorem applies). This solves Problem 4 listed by Ghomi
\cite[p. 1]{open}.
My proof of the main theorem uses a limiting process,
relying on the result of \cite{cdr}. I next generalize
the program used in \cite{cdr}, which
relies on techniques of linear programming, specifically the Farkas
Lemma. This approach naturally lends itself to computation;
an example of research on the computation of nonexpansive unfoldings
of polygons is given by \cite{energy}. In my continuous
analogue of the program, I develop a version of the Farkas
Lemma for Banach spaces (Theorem \ref{farkas}) as well as a
continuous version of the Maxwell-Cremona Theorem (Theorem
\ref{maxwell}), a combinatorial version of which was used in
the program in \cite{cdr}. A different version of the Farkas Lemma
in Banach spaces and specifically in $\Lp$ spaces has been studied
in \cite{farkas}. I am not aware of any previous
generalization of the Maxwell-Cremona Theorem to the
case I consider here. Finally, I use the continuous
version of the program to give a different proof of the existence
of infinitesimal expansions for polygons. The hope is that
a continuous analogue of the discrete program could yield
a direct proof (one which does not rely on approximation by
polygons) of the main theorem for some class of curves more
general than polygons.
I would like to thank Robert Bryant for many useful
conversations about the work in this paper, regarding both
its content and presentation, and Robert Connelly for
suggesting some reorganization to clarify the results.
I also thank Andrew Ferrari for introducing me to many of
the techniques used here.
\subsection{Notation}
We will use the following function spaces:
\begin{description}
\item[$C(X,Y)$] the Banach space of continuous functions from
$X$ to $Y$ given the supremum norm.
\item[$C_c(X,Y)$] the subspace of $C(X,Y)$ consisting of
functions of compact support.
\item[$C_0(X,Y)$] the Banach space completion of $C_c(X,Y)$
with respect to the supremum norm. These are the functions
that ``vanish at infinity''.
\item[$C_0^\infty(X,Y)$] the subspace of $C_c(X,Y)$ consisting
of infinitely differentiable functions.
\item[$\Lp(X,Y)$] the Banach space of $\Lp$ functions from
$X$ to $Y$.
\end{description}
If $Y$ is left out, it is assumed to be $\R$, except in
section \ref{cpx}, where it is assumed to be $\C$.
All Hilbert and Banach spaces are implicitly assumed to
be over $\R$, except in section \ref{cpx}, where they will
be over $\C$. If $E$ is a Banach space, $E^*$ is its dual.
The duality bracket $\langle x,y\rangle$ will be used both
in the case that $x\in E^*$ and $y\in E$, and in the case
that $x,y\in H$, a Hilbert space. We will write $\Lin(X,Y)$
for the Banach space of bounded linear transformations from
$X$ to $Y$ given the operator norm.
\section{Proof for General Curves using \cite{cdr}}\label{cpx}
\subsection{Preliminaries}
Consider a simple closed curve in the plane. I wish to prove
the existence of a continuous deformation of the curve into
a convex curve, so that the intrinsic distance between every
pair of points on the curve stays constant, and the extrinsic
distance between every pair of points on the curve is
nondecreasing. Here, by intrinsic distance I mean the distance
along the curve, and by extrinsic distance I mean the Euclidean
distance in $\R^2$.
A curve is called {\it rectifiable} if a finite intrinsic
distance can be defined between every pair of points, that is,
the supremum of the lengths of all inscribed
polygons is finite:
\begin{equation}
L_x^y(\f):=\sup_{x=a_0<a_1<\cdots<a_k=y}\sum_{j=1}^k|\f(a_j)-\f(a_{j-1})|<\infty
\end{equation}
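For instance, for the circle $\f(s)=(r\cos s,r\sin s)$, every inscribed polygon has length less than $2\pi r$, and the supremum is $L_0^{2\pi}(\f)=2\pi r$.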
We will only consider rectifiable curves in
this paper. If a curve is rectifiable, then it has a {\it unit
speed parameterization}, that is $\f(s)=\int_0^s\f'(s')\,ds'$ and
$|\f'(s)|=1$ almost everywhere. Since a homothety will scale the
arc length of a curve, it suffices to consider simple closed
curves of length $2\pi$. Thus, given $\f_0$, we seek a
continuous family of simple closed curves $\f_t:\R/2\pi\map\R^2$
parameterized by $t\in[0,1]$ such that each curve is of unit
speed, $|\f_{t_1}(x)-\f_{t_1}(y)|\leq|\f_{t_2}(x)-\f_{t_2}(y)|$
whenever $t_1\leq t_2$, and $\f_1$ is convex.
\subsection{Main Result}
For this section, it will be natural to consider curves
in $\C$ (rather than $\R^2$). Thus Banach spaces will be
over $\C$. It will be convenient to have our curves
reside in the following space:
\begin{equation}
\mathcal D:=\left\{\f:\R/2\pi\map\C\Bigm|
\text{$\f(0)=0$, $\f$ absolutely continuous, $\f'\in\Linfty(\R/2\pi)$}\right\}
\end{equation}
There is, of course, the natural correspondence between
$\f\in\mathcal D$ and $\f'\in\{u\in\Linfty(\R/2\pi):\int u=0\}$.
Thus $\mathcal D$ is a Banach space with norm $\|\f'\|_\infty$. Now
topologize $\mathcal D$ using the weak-$*$ topology on
$\Linfty(\R/2\pi)$. Since $\Lone(\R/2\pi)$ is separable,
the Banach-Alaoglu Theorem implies that any norm bounded
sequence in $\mathcal D$ has a convergent subsequence.
The choice of topology on $\mathcal D$ is justified by
the following lemma.
\begin{lemma}\label{unif}
Suppose $\f_n\to\f$ in $\mathcal D$, then $\f_n\to\f$ uniformly.
\end{lemma}
\begin{proof}
By the Uniform Boundedness Principle, we know that
$\|\f_n\|$ is bounded. Thus there exists $M$
with $|\f_n'|\leq M$, hence $\{\f_n\}$ is an equicontinuous family.
It is clear that $\f_n\to\f$ pointwise since we have
$\int\chi_{[0,x]}\f_n'\to\int\chi_{[0,x]}\f'$. And
an equicontinuous sequence of functions converges
pointwise if and only if it converges uniformly.
\end{proof}
Define the continuous function $\mathcal E:\mathcal D\map\R$ by
$\mathcal E(\f)=\iint_{(\R/2\pi)^2}|\f(x)-\f(y)|$. Also define the following
order relation on $\mathcal D$: we say that $\f\trianglelefteq\g$
if and only if $|\f(x)-\f(y)|\leq|\g(x)-\g(y)|$
for all $x$ and $y$.
\begin{theorem}\label{main}
Given a unit speed simple closed curve $\f:\R/2\pi\map\C$,
there exists a continuous function $\h:[0,1]\map\mathcal D$ such
that:
\begin{itemize}
\item[(1)] $\h(0)=\f$.
\item[(2)] $\h(1)$ is convex.
\item[(3)] $\h(t)$ has unit speed for all $t$.
\item[(4)] If $t_1\leq t_2$, then $\h(t_1)\trianglelefteq\h(t_2)$.
\end{itemize}
\end{theorem}
\begin{proof}
For $n\geq 3$, consider the polygon $\mathcal P_n$
inscribed in $\f$ which has $n$ vertices spaced
out at multiples of $2\pi/n$ starting at zero.
Explicitly:
\begin{equation}
\mathcal P_n(x):=\left(1-\left\{\frac{nx}{2\pi}\right\}\right)
\f\left(\frac{2\pi}n\left\lfloor\frac{nx}{2\pi}\right\rfloor\right)
+\left\{\frac{nx}{2\pi}\right\}
\f\left(\frac{2\pi}n\left(\left\lfloor\frac{nx}{2\pi}\right\rfloor+1\right)\right)
\end{equation}
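In other words, $\mathcal P_n$ visits the sample points $\f(2\pi j/n)$, $j=0,\ldots,n-1$, in order, joining consecutive ones by straight segments parameterized affinely on each interval of length $2\pi/n$.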
This polygon may or may not be simple. It will,
however, divide the plane into a finite number of
simply connected regions. Let $\mathcal P_n'$ be a constant
speed $s_n\leq 1$ parameterization of the boundary
of that region which has greatest area. Then let
$\h_n:[0,1]\map\mathcal D$ be continuous and satisfy:
\begin{itemize}
\item[(1$'$)] $\h_n(0)=\mathcal P_n'$.
\item[(2$'$)] $\h_n(1)$ is convex.
\item[(3$'$)] $\h_n(t)$ has speed $s_n\leq 1$ for all $t$.
\item[(4$'$)] If $t_1\leq t_2$, then $\h_n(t_1)\trianglelefteq\h_n(t_2)$.
\item[(5$'$)] $\h_n(t)(\pi)\in\R_{>0}$ for all $t$.
\item[(6$'$)] $\mathcal E(\h_n(t))$ is a linear function of $t$.
\end{itemize}
The existence of an $\h_n$ satisfying (1$'$)--(4$'$) is
implied by Theorem 1 of \cite[p. 207]{cdr}.
Condition (5$'$) can be achieved by properly rotating
each curve. The motion of \cite{cdr} is {\it strictly}
expansive, so $\mathcal E(\h_n(t))$ will be strictly
increasing, so a simple reparameterization in $t$
suffices to make it linear and satisfy (6$'$).
Let $\Q_{[0,1]}=\Q\cap[0,1]$.
This set is countable; suppose $\{r_i\}_{i=1}^\infty$
is an enumeration of it. Let $\h_n^{(0)}=\h_n$.
Inductively, let $\h_n^{(i)}$ be a subsequence of
$\h_n^{(i-1)}$ such that $\h_n^{(i)}(r_i)$ converges.
(Such a subsequence is guaranteed to exist since
$\|\h_n^{(i)}(r_i)\|=s_n$ is bounded). Now $\h_j^{(j)}$
converges pointwise to a function $\tilde{\h}:\Q_{[0,1]}\map\mathcal D$
which satisfies:
\begin{itemize}
\item[(1$''$)] $\tilde{\h}(0)(\R/2\pi)=\f(\R/2\pi)$.
\item[(2$''$)] $\tilde{\h}(1)$ is convex.
\item[(3$''$)] $\tilde{\h}(t)$ has speed $\leq 1$.
\item[(4$''$)] If $t_1\leq t_2$, then $\tilde{\h}(t_1)\trianglelefteq\tilde{\h}(t_2)$.
\item[(5$''$)] $\tilde{\h}(t)(\pi)\in\R_{>0}$ for all $t$.
\item[(6$''$)] $\mathcal E(\tilde{\h}(t))$ is a linear function of $t$.
\end{itemize}
We will now construct $\h:[0,1]\map\mathcal D$. For every
$t\in[0,1]$, we set $\h(t)$ to be some arbitrary subsequential
limit of $\tilde{\h}(q_j)$ where $q_j$ is some sequence of
rationals converging to $t$. Clearly $\h$ satisfies
(1$''$)--(6$''$) as well. Now (1$''$) and (3$''$) together
mean that $\h(0)(s)=\f(s+\Delta)$ for some
$\Delta$. We can take $\Delta=0$. Hence we have
(1), (2), and (4). To prove (3), note that:
\begin{equation}
\left|\frac{\f(x+h)-\f(x)}h\right|\leq\left|\frac{\h(t)(x+h)-\h(t)(x)}h\right|\leq 1
\end{equation}
As $h\to 0$, the left hand side approaches $1$ for
almost all $x$, hence $|\h(t)'(x)|=1$ almost
everywhere as desired.
Finally, we must show that $\h$ is in fact continuous.
This follows from (5$''$) and (6$''$) in the following way.
Suppose the contrary, that there is some $t$ where
$\h$ is not continuous. Then there exists a sequence
$q_j\to t$ with either $q_j<t$ for all $j$ or $q_j>t$
for all $j$, and a neighborhood $N$ of $\h(t)$ such that
$\h(q_j)\notin N$ for all $j$. Now a subsequence of
$\h(q_j)$ will converge in $\mathcal D$ to a limit $\g$.
Now we have:
\begin{itemize}
\item[(i)] $\mathcal E(\g)=\mathcal E(\h(t))$
\item[(ii)] $\g\trianglelefteq\h(t)$ or $\g\trianglerighteq\h(t)$
depending on whether $q_j<t$ or $q_j>t$
\item[(iii)] $\g(0)=0=\h(t)(0)$
\item[(iv)] $\g(\pi),\h(t)(\pi)\in\R_{>0}$
\end{itemize}
The conditions (i) and (ii) imply
that $|\g(x)-\g(y)|=|\h(t)(x)-\h(t)(y)|$ for
all $x$ and $y$. This means that the curves are
rigid motions of each other. Then (iii) and (iv)
imply that they are actually the same
curve since they have the same orientation. Thus a
subsequence of $\h(q_j)$ converges to $\h(t)$. This
is of course a contradiction since each $\h(q_j)$
is outside the neighborhood $N$ of $\h(t)$.
This contradiction proves that $\h$ is continuous.
\end{proof}
\subsection{Pathological Rectifiable Curves}\label{path}
Define $f_-$ and $f_+$:
\begin{equation}
f_\pm(x)=\begin{cases}x^2\sin x^{-1}\pm e^{-1/x}&x>0\cr 0&x=0\end{cases}
\end{equation}
If we plot $f_-$ and $f_+$ on $[0,\pi^{-1}]$ and add
line segments around the left side of the curve to close
it, we get an infinite number of interlocking ``teeth''.
This example is based on a polygon with a finite number
of such teeth unfolded by Erik Demaine. We also have:
\begin{equation}
g(t)=\begin{cases}t^2e^{i/t}&t>0\cr 0&t=0\cr-t^2e^{-i/t}&t<0\end{cases}
\end{equation}
Plotting $g$ on $[-\pi^{-1},\pi^{-1}]$ and adding line
segments to close the curve gives a simple closed curve
with an infinite spiral. By Theorem \ref{main},
both of these curves can be unfolded in an expansive
motion, something which is not at all intuitive considering
their geometry.
\section{A Generalization of the CDR Program}
The program in \cite{cdr} proves the existence of an infinitesimal
expansion for any polygon. That is, if a nonconvex polygon has
vertices $\p_i$, it shows the existence of velocities $\vv_i$ satisfying:
\begin{align}
(\p_i-\p_{i+1})\cdot(\vv_i-\vv_{i+1})&=0\\
(\p_i-\p_j)\cdot(\vv_i-\vv_j)&>0\text{ for $i$ and $j$ not adjacent}
\end{align}
From this, it is relatively straightforward to solve a
differential equation of the form $\frac d{dt}\{\p_i\}=\{\tilde{\vv}_i\}$
(where the $\{\tilde{\vv}_i\}$ depend continuously on the $\{\p_i\}$),
thus constructing an expansive motion of the polygon. Clearly,
if we have a curve $\f$, then the analogue is to find a variation
$\varphi$ satisfying:
\begin{align}
\f'(x)\cdot\varphi'(x)&=0\text{ for all $x$}\\
(\f(x)-\f(y))\cdot(\varphi(x)-\varphi(y))&\geq 0\text{ for all $x$ and $y$}
\end{align}
The generalized program developed here will be able to
prove the existence of infinitesimal expansions for
polygons, a hard theoretical result of \cite{cdr}.
It also proves the existence of ``almost'' expansive variations
for all rectifiable curves which in a neighborhood of any point
look like the rotated graph of a function from $\R$ to $\R$.
By this I mean that for every $x\in\R/2\pi$, there exists
$\vv\in\R^2$ such that $\f(y)\cdot\vv$ is one to one in a
neighborhood of $x$. The final result of this generalized
program is Theorem \ref{gen}.
The generalizations of the Farkas Lemma and the Maxwell-Cremona
Theorem, the tools used in the program, are stated and proved
in sections \ref{farkass} and \ref{maxwells} respectively.
\subsection{Notation}
Let $H:=\{u\in C(\R/2\pi,\R^2):u(0)=0$, $u$ absolutely
continuous, and $\int|u'|^2<\infty\}$. So that $H$ is a Hilbert
space, equip it with the norm $\sqrt{\int|u'|^2}$
and inner product $\int u'\cdot v'$. Topologize $H$ with
the weak topology. We will need the sets:
\begin{align}
Q_\f&:=\{u\in H:u'\cdot\f'\equiv 0\}\text{ (a closed subspace)}\\
T&:=\{t\in C((\R/2\pi)^2)^*:t\geq 0\}
\end{align}
Note that we will be looking for $\varphi\in Q_\f$, since it is
these variations which preserve arc length. Also note that in
this section, we do not assume that $\f$ is parameterized by
arc length.
\begin{lemma}\label{unif2}
If $\g_n\to\g$ in the weak topology on $H$, then
$\g_n\to\g$ uniformly.
\end{lemma}
\begin{proof}
This is completely analogous to Lemma \ref{unif}. We know that
$\g_n\to\g$ pointwise. Observing that $\|\g_n\|$ is bounded, we
have the inequality:
\begin{equation}
\int_a^b|\g_n'|=\int_{\R/2\pi}\g_n'\cdot\frac{\g_n'}{|\g_n'|}\chi_{[a,b]}\leq
\sqrt{\int_{\R/2\pi}|\g_n'|^2}\sqrt{\int_a^b\left|\frac{\g_n'}{|\g_n'|}\right|^2}
\leq M\sqrt{b-a}
\end{equation}
This shows that the $\g_n$ are equicontinuous (uniformly in $n$), and hence
the pointwise convergence is in fact uniform.
\end{proof}
Define $D\subset H$, the set of curves we will consider,
to be the set of $\f\in H$ satisfying:
\begin{itemize}
\item[(1)] $\f$ is a simple closed curve, that is $\f$ is injective.
\item[(2)] $\f'\ne 0$ almost everywhere (this in fact is not implied by (1)).
\item[(3)] For every $x$, there exists $\delta>0$ and $\vv$
such that $\f(y)\cdot\vv$ is one to one for $|y-x|<\delta$. (locally graph-like)
\end{itemize}
The symbol $\f$ will always denote a member of $D$.
The following bounded operator will be essential
to the program; it is called the {\it Rigidity
Operator}:
\begin{gather}
R_\f:H\map C((\R/2\pi)^2)\cr
(R_\f\varphi)(x,y)=(\f(x)-\f(y))\cdot(\varphi(x)-\varphi(y))
\end{gather}
\subsection{Outline of the Program}
Before we state the final result of the program in its full
generality (Theorem \ref{gen}), it is useful to state the
following corollary which gives the general idea of the result.
\begin{corollary}\label{cgen}
Let $\f\in D$ not be convex. Let $V\subset(\R/2\pi)^2$ be
closed and have the property that for all $(x,y)\in V$,
the line segment between $\f(x)$ and $\f(y)$ is not completely
contained in $\f(\R/2\pi)$ (for example, if $\f$ has no
straight sections, we can take $V=\{(x,y)\in(\R/2\pi)^2:|x-y|>\epsilon\}$).
Then there exists a $\varphi\in Q_\f$ such that:
\begin{equation}\label{expan}
(\f(x)-\f(y))\cdot(\varphi(x)-\varphi(y))>0\text{ for all $(x,y)\in V$}
\end{equation}
\end{corollary}
This result includes the result \cite{cdr} of the
existence of infinitesimal expansions for nonconvex
polygons.
\begin{corollary}[Theorem 3 of {\cite[p. 215]{cdr}}]
If $\{\p_i\}$ is a nonconvex simple polygon with no
straight vertices, then there exist $\{\vv_i\}$ satisfying:
\begin{align}
(\p_i-\p_{i+1})\cdot(\vv_i-\vv_{i+1})&=0\\
(\p_i-\p_j)\cdot(\vv_i-\vv_j)&>0\text{ for $i$ and $j$ not adjacent}
\end{align}
\end{corollary}
\begin{proof}
Apply Corollary \ref{cgen} to $\f=\text{the polygon}$ and:
\begin{equation}
\begin{split}
V=\{(x,y)\in(\R/2\pi)^2:\text{ }&\text{there are two full edges
separating $x$ and $y$}\cr&\text{in both directions}\}
\end{split}
\end{equation}
Then we have a $\varphi$. Set $\vv_i=\varphi(\f^{-1}(\p_i))$.
Then $(\p_i-\p_j)\cdot(\vv_i-\vv_j)>0$ for $i$ and $j$ not
adjacent is clear from (\ref{expan}). Now:
\begin{equation}
(\vv_{i+1}-\vv_i)\cdot(\p_{i+1}-\p_i)=
\int_{\f^{-1}(\p_i)}^{\f^{-1}(\p_{i+1})}\!\varphi'(x)\cdot(\p_{i+1}-\p_i)\,dx=0
\end{equation}
since $\varphi\in Q_\f$ and $\f'$ is parallel to $\p_{i+1}-\p_i$ along that edge.
\end{proof}
Theorem \ref{gen}, the main result of the generalization of
the program of \cite{cdr} is essentially Corollary \ref{cgen}
made uniform over some suitable set of curves.
\begin{theorem}[Analogue of Theorem 3 of {\cite[p. 215]{cdr}}]\label{gen}
Suppose $D_1\subset D$ is (weakly) closed and contains no
convex curves, and that $V\subset D_1\cross (\R/2\pi)^2$
is closed. Additionally, suppose that for every $(\f,x,y)\in V$,
the line segment joining $\f(x)$ and $\f(y)$ is not completely
contained in $\f(\R/2\pi)$. Then there exists $\epsilon>0$ such
that for each $\f\in D_1$, there exists $\varphi\in Q_\f$ with:
\begin{itemize}
\item[(1)] $\|\varphi\|=1$.
\item[(2)] $R_\f\varphi(x,y)\geq\epsilon$ whenever $(\f,x,y)\in V$.
\end{itemize}
\end{theorem}
We can see that Corollary \ref{cgen} is obtained by taking $D_1$
to consist of a single curve. A corollary which does not lose
the uniformity is the following:
\begin{corollary}
Suppose $D_1\subset D$ is weakly closed, contains no convex
curves, and contains no curves with straight sections. Then
for every $\delta>0$, there exists $\epsilon>0$ such that for
every $\f\in D_1$, there exists $\varphi\in Q_\f$ satisfying:
\begin{itemize}
\item[(1)] $\|\varphi\|=1$.
\item[(2)] $(\varphi(x)-\varphi(y))\cdot(\f(x)-\f(y))\geq\epsilon$ if $|x-y|\geq\delta$.
\end{itemize}
\end{corollary}
\begin{proof}
Choose $V=D_1\cross\{(x,y)\in(\R/2\pi)^2:|x-y|\geq\delta\}$ and apply
Theorem \ref{gen}.
\end{proof}
The main difficulty in showing the existence of a $\varphi$
which is expansive for {\it all} pairs $x$ and $y$
is that the proof relies critically on $V$ being closed. Clearly
$(\f,x,x)$ can never be in $V$ since then we would conclude
that $(\varphi(x)-\varphi(x))\cdot(\f(x)-\f(x))>0$. Hence, we
must always exclude a neighborhood of the ``diagonal'' of
$(\R/2\pi)^2$. This means that we will not have shown that
$(\varphi(x)-\varphi(y))\cdot(\f(x)-\f(y))>0$ for all
pairs $x$ and $y$.
The following theorem is the essence of why expansive variations
exist. It relies on the generalization of the Maxwell-Cremona
Theorem (Theorem \ref{maxwell}).
\begin{theorem}[Analogue of Theorem 4 of {\cite[p. 216]{cdr}}]\label{needmax}
If $\f\in D$ and $t\in T$ such that
$\langle t,R_\f\alpha\rangle=0$ for all $\alpha\in Q_\f$, then
either:
\begin{itemize}
\item[(1)] The curve $\f$ is convex.\newline OR
\item[(2)] For all $(x,y)\in\operatorname{supp}t$, the line
segment connecting $\f(x)$ and $\f(y)$
is completely contained in $\f(\R/2\pi)$.
\end{itemize}
\end{theorem}
In the spirit of the generalization of the Farkas Lemma (Theorem
\ref{farkas}), it is possible to prove that Theorem \ref{needmax} implies
Theorem \ref{gen}.
\begin{proposition}[Analogue of Lemma 3 of {\cite[p. 216]{cdr}}]\label{impl}
Theorem \ref{needmax} implies Theorem \ref{gen}.
\end{proposition}
\subsection{Proof of Theorem \ref{needmax}}
\begin{proof}
Suppose that we have some $\f\in D$ and $t\in T$
with $\langle t,R_\f\alpha\rangle=0$ for all $\alpha\in Q_\f$.
Let $\mathbf{\hat f}'$ denote $\f'/|\f'|$.
First, let us show that there exists
$\beta\in\Ltwo(\R/2\pi)$ such that:
\begin{equation}
\langle t,R\alpha\rangle=\int_{\R/2\pi}\beta(x)\mathbf{\hat f}'(x)\cdot\alpha'(x)\,dx
\end{equation}
Clearly there exists $\mu\in\Ltwo(\R/2\pi,\R^2)$
such that
$\langle t,R\alpha\rangle=\int_{\R/2\pi}\mu(x)\cdot\alpha'(x)\,dx$.
Now we can constrain $\mu$ as follows. For any
$\lambda\in\Ltwo(\R/2\pi)$ satisfying
$\int\lambda\mathbf{\hat f}'=0$, we know that:
\begin{equation}
\int_{\R/2\pi}\left(\mu(x)\cdot i\mathbf{\hat f}'(x)\right)\lambda(x)\,dx=0
\end{equation}
The set $W$ of such $\lambda$ is of codimension $2$ in
$\Ltwo(\R/2\pi)$. Now
$\mu(x)\cdot i\mathbf{\hat f}'(x)\in W^\perp$,
which is of dimension $2$. But we can exercise
two dimensions of freedom by
adding constants to $\mu(x)$. Thus we can assume
$\mu(x)\cdot i\mathbf{\hat f}'(x)\equiv 0$,
in other words $\mu\parallel\f'$, and hence is
of the form $\beta(x)\mathbf{\hat f}'(x)$.
We will consider the operators $A_1,A_2\in\Lin(C_0(\R^2,\R^2),\R^2)$
defined by:
\begin{align}
A_1\U&:=\iint_{(\R/2\pi)^2}t(x,y)(\f(x)-\f(y))
\int_{\f(y)}^{\f(x)}\U\cdot\dd s\\
A_2\U&:=\int_{\R/2\pi}\beta(x)\mathbf{\hat f}'(x)[\U(\f(x))\cdot\f'(x)]\,dx
\end{align}
Since $A_1$ and $A_2$ are linear combinations of projections, they
are symmetric, that is there exist $a_j,b_j,e_j\in\Lin(C_0(\R^2,\R),\R)=C_0(\R^2)^*$
such that $A_j=\left(\smallmatrix a_j&b_j\cr b_j&e_j\endsmallmatrix\right)$.
Then $A:=A_1-A_2=\left(\smallmatrix a&b\cr b&e\endsmallmatrix\right)$,
where $a,b,e\in\Lin(C_0(\R^2,\R),\R)=C_0(\R^2)^*$. We have:
\begin{equation}
\begin{split}
A_1\grad g&=\iint_{(\R/2\pi)^2}t(x,y)(\f(x)-\f(y))(g(\f(x))-g(\f(y)))\cr
&=\Bigl(\langle t,R(\e_1g(\f(\cdot)))\rangle,\langle t,R(\e_2g(\f(\cdot)))\rangle\Bigr)\cr
&=\int_{\R/2\pi}\beta(x)\mathbf{\hat f}'(x)[\grad g(\f(x))\cdot\f'(x)]\,dx=A_2\grad g
\end{split}
\end{equation}
Hence $A\grad g=0$ for all $g\in C_0^\infty(\R^2)$.
By the generalization of the Maxwell-Cremona Theorem,
Theorem \ref{maxwell}, there exists a $c\in C_c(\R^2)$
such that we have (in the distributional sense):
\begin{equation}
A\U=\iint_{\R^2}\left(\begin{matrix}\hfill c_{yy}&-c_{xy}\cr-c_{xy}&\hfill c_{xx}\end{matrix}\right)\U\,dx\,dy
\end{equation}
Now the matrices $\left(\begin{smallmatrix}\hfill c_{yy}&-c_{xy}\cr
-c_{xy}&\hfill c_{xx}\end{smallmatrix}\right)$ and
$\left(\begin{smallmatrix}c_{xx}&c_{xy}\cr
c_{xy}&c_{yy}\end{smallmatrix}\right)$ are
related by a similarity transform. The former is
a positive linear combination of projections at
every point in $\R^2-\f(\R/2\pi)$, hence the latter
is positive at every point not on the curve as well. Hence
$c$ is locally convex on the interior of the curve
and on the exterior of the curve.
Now let $M=\sup_{\p\in\R^2}c(\p)$ and define the nonempty
closed set $S=\{\p\in\R^2:c(\p)=M\}$.
Suppose $\p\in\partial S$ and $\p\notin\f(\R/2\pi)$.
Then there is a neighborhood of $\p$ which is disjoint
from $\f(\R/2\pi)$. In this neighborhood, $c$ will
be convex. Hence the whole neighborhood will belong
to $S$, a contradiction. Thus
$\partial S\subseteq\f(\R/2\pi)$. We thus have four
cases:
\begin{itemize}
\item[(1)] $S$ is the closure of the exterior of the curve.
\item[(2)] $S$ is the closure of the interior of the curve.
\item[(3)] $S$ is a closed subset of the curve.
\item[(4)] $S$ is the whole plane.
\end{itemize}
If (1) is true, then $c$ is zero on the curve. This
implies that $\f$ is a level curve of a function with positive
hessian and as such must be convex. If (4) is true,
then $c\equiv 0$. Then for every $(x,y)\in\operatorname{supp}t$,
we will necessarily have the line segment joining
$\f(x)$ and $\f(y)$ completely contained in $\f(\R/2\pi)$.
This is because if not, then there would be a point
in $\R^2-\f(\R/2\pi)$ where the matrix
$\left(\begin{smallmatrix}\hfill c_{yy}&-c_{xy}\cr
-c_{xy}&\hfill c_{xx}\end{smallmatrix}\right)$ would be
positive, giving $c$ strict convexity there and contradicting $c\equiv 0$. The case (2) is
easily disposed of since $c=0$ outside the convex hull
of the curve and hence will be zero on
at least one point of the curve. Hence the maximum
value $c$ attains is zero, a contradiction. Thus it
suffices to show that case (3) cannot happen.
Assume (3) is true. We have two cases:
\begin{itemize}
\item[(1$'$)] There exists $x\in\R/2\pi$ such that for every
$\delta>0$, $\f([x,x+\delta])\nsubseteq S$ and $\f([x-\delta,x])\nsubseteq S$.
\item[(2$'$)] There does not exist such an $x\in\R/2\pi$.
\end{itemize}
I will deal with the easier case (1$'$) first. WLOG $x=0$.
Also, WLOG, $\f(x)\cdot\e_1$ is one to one for $|x|<\epsilon$.
Choose $\delta_1,\delta_2>0$ such that the curve in the square
$[-\delta_1,\delta_1]\cross[-\delta_2,\delta_2]\subset\R^2$ looks like the graph of
a function, that is, $\f^{-1}([-\delta_1,\delta_1]\cross[-\delta_2,\delta_2])\subseteq[-\epsilon,\epsilon]$.
Let $-\delta_1<x_-<0<x_+<\delta_1$ have $c(x_-,0)\ne M$ and
$c(x_+,0)\ne M$.
Now let:
\begin{equation}
M'=\frac 12\left(M+\max_{\p\in\partial[x_-,x_+]\cross[-\delta_2,\delta_2]}c(\p)\right)<M
\end{equation}
Let $y_+$ be the least $y>0$ such that $c(0,y)=M'$ and let
$y_-$ be the highest $y<0$ such that $c(0,y)=M'$. Consider
the level curves passing through $y_+$ and $y_-$. By the
convexity of $c$ they must curve away from $(0,0)$ where
the maximum occurs, but they must meet the curve on both
sides of $(0,0)$ at some $x_-'$ and $x_+'$. This is a
contradiction.
Now suppose (2$'$) is true. Let $[x,y]\subset\R/2\pi$
satisfy $c(\f([x,y]))=M$ and for every $\delta>0$,
$\f([x-\delta,x])\nsubseteq S$ and $\f([y,y+\delta])\nsubseteq S$.
Then $\f([x,y])$ is a level curve of $c$ restricted to
the interior of the curve. As the level curve of a
convex function it must be curved towards the interior
of the curve. But by the same reasoning, $\f([x,y])$
is a level curve of $c$ restricted to the outside of
the curve, and hence must be curved towards the outside
of the curve. Hence $\f([x,y])$ is a line segment.
As above, we can rotate $\f$ so it looks like the graph
of a function $\R\map\R$ near $\f(x)$ and near $\f(y)$. Using
the same procedure as above, we get a contradiction
by considering level curves of $M-\eta$ for a suitably small $\eta>0$.
\end{proof}
We have now justified every step in the proof of Theorem
\ref{gen} except for Proposition \ref{impl} and the generalized
Maxwell-Cremona Theorem. We will prove these next.
\section{A Generalization of the Farkas Lemma}\label{farkass}
The Farkas Lemma from linear programming is as follows:
\begin{lemma}[Farkas Lemma]\label{farkasl}
Let $A:\R^n\to\R^m$ be a linear transformation. Then exactly
one of the following two statements holds:
\begin{itemize}
\item[(1)] There exists a nonzero $y\in\R^m$ whose components
are all nonnegative and which satisfies $A^{\operatorname{T}}y=0$.
\item[(2)] There exists an $x\in\R^n$ such that every component
of $Ax$ is positive.
\end{itemize}
\end{lemma}
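For instance, if $A:\R\to\R^2$ is given by $Ax=(x,-x)$, then no $x$ makes both components of $Ax$ positive, while $y=(1,1)$ is a nonzero nonnegative vector with $A^{\operatorname{T}}y=0$, so alternative (1) holds.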
The generalization of the Farkas Lemma that we
will need will have the basic form:
\begin{theorem}\label{farkas}
Let $X$ be a compact Hausdorff space and $Y$ a (real) Hilbert space.
Let $A:Y\map C(X)$ be linear and bounded. Also let $A':C(X)^*\map Y$
denote its adjoint, that is
$\langle\lambda,Ay\rangle=\langle A'\lambda,y\rangle$. Then
exactly one of the following two statements holds:
\begin{itemize}
\item[(1)] There exists a nonzero positive $t\in C(X)^*$ such that $A't=0$.
\item[(2)] There exists a $y\in Y$ such that $Ay>0$.
\end{itemize}
\end{theorem}
We remark that if we take $Y$ to be finite dimensional and $X$ to consist
of a finite number of points, then we recover Lemma \ref{farkasl}.
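Concretely, when $X=\{1,\ldots,m\}$ with the discrete topology and $Y=\R^n$, we have $C(X)\cong\R^m$, the positive functionals in $C(X)^*$ are exactly the nonnegative vectors, the adjoint $A'$ is the transpose $A^{\operatorname{T}}$, and a function in $C(X)$ is positive precisely when all $m$ of its values are.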
\begin{proof}
It is trivial that (1) and (2) cannot simultaneously hold,
for if so, $0=\langle A't,y\rangle=\langle t,Ay\rangle>0$.
It remains to show that $\sim$(1)$\implies$(2). Let
$T:=\{t\in C(X)^*:t\geq 0\}$.
I claim that there exists $\epsilon>0$ such that
$\|A't\|\geq\epsilon\|t\|$ for all $t\in T$. If we
suppose the contrary, then there exists a sequence $t_n\in T$ with
$\|t_n\|=1$ such that $A't_n\to 0$. By the Banach-Alaoglu
Theorem, there exists a subnet $t_\alpha$ which converges to
$t$ in the weak-$*$ topology. Since each $t_\alpha\geq 0$, the limit satisfies $t\in T$, and since $X$ is compact,
$\|t\|=\langle t,1\rangle=\lim_\alpha\langle t_\alpha,1\rangle=1$. Also, for all $y\in Y$, we have:
\begin{equation}
0=\lim_\alpha\langle A't_\alpha,y\rangle=
\lim_\alpha\langle t_\alpha,Ay\rangle=\langle t,Ay\rangle=\langle A't,y\rangle
\end{equation}
Thus $A't=0$, contradicting $\sim$(1). Thus the claim is
true. I now can show (2).
Let $t_n\in T$ be a sequence such that $\|t_n\|=1$ and:
\begin{equation}
\|A't_n\|\to\inf_{\begin{smallmatrix}t\in T\cr\|t\|=1\end{smallmatrix}}
\|A't\|=:w\geq\epsilon
\end{equation}
Then a subnet $t_\alpha$ will converge in the weak-$*$
topology to a limit $t_{\infty}$. Now:
\begin{equation}
w\leq\|A't_{\infty}\|\leq\liminf_\alpha\|A't_\alpha\|=w
\end{equation}
Hence $\|A't_{\infty}\|=w$.
Let $y:=A't_{\infty}/\|A't_{\infty}\|$. I claim that
$(Ay)(x)\geq\epsilon$ for all $x\in X$. It suffices to
show that $\langle t,Ay\rangle\geq w$ for all $t\in T$
with $\|t\|=1$. But if $\langle t,Ay\rangle<w$ for some
$t\in T$ with $\|t\|=1$, then
$\langle A't,y\rangle<w$. Consider then:
\begin{equation}
\begin{split}
\left.\frac d{d\eta}\right|_{\eta=0}
&\|A'((1-\eta)t_{\infty}+\eta t)\|^2\cr
&=\left.\frac d{d\eta}
\left[(1-\eta)^2\|A't_{\infty}\|^2+2\eta(1-\eta)\langle A't,A't_{\infty}\rangle
+\eta^2 \|A't\|^2\right]\right|_{\eta=0}\cr
&=-2w^2+2\langle A't,wy\rangle<0
\end{split}
\end{equation}
This is a contradiction: by positivity, $\|(1-\eta)t_{\infty}+\eta t\|=1$ for all $\eta\in[0,1]$, so for small $\eta>0$ this convex combination would beat the infimum $w$.
Hence the proof is complete.
\end{proof}
We can prove Proposition \ref{impl} using the same proof outline
from Theorem \ref{farkas}. We will, however, need the following
approximation lemma.
\begin{lemma}\label{approx}
Suppose $\f_n\to\f$ in $D$ and that $q\in Q_\f$ is of the form
$q'=\lambda i\f'$ where $\lambda$ is smooth. Then there exist
$q_n\in Q_{\f_n}$ such that $q_n\to q$ (weakly).
\end{lemma}
\begin{proof}
We will search for $q_n$ of the form $q_n'=(\lambda+\nu_n)i\f_n'$.
We will have $\|q_n\|$ bounded if $\|\nu_n\|_\infty$ is bounded.
Hence we will have $q_n\to q$ weakly if $\|\nu_n\|_\infty$ is
bounded and $\langle\ell,q-q_n\rangle\to 0$ for all smooth
$\ell\in H$. Now $|\langle\ell,q-q_n\rangle|$ is equal to:
\begin{equation}
\begin{split}
\left|\int_{\R/2\pi}\ell'\cdot(q'-q_n')\right|&=\left|\int_{\R/2\pi}\ell'\cdot
(\lambda i\f'-\lambda i\f_n'-\nu_n i\f_n')\right|\cr
&\leq\left|\int_{\R/2\pi}\ell'\lambda\cdot
i(\f'-\f_n')\right|+\left|\int_{\R/2\pi}\nu_n\ell'\cdot i\f_n'\right|\cr
&=\left|\int_{\R/2\pi}[\ell''\lambda+\ell'\lambda']\cdot
i[\f-\f_n]\right|+\left|\int_{\R/2\pi}\nu_n\ell'\cdot i\f_n'\right|\cr
&\leq 2\pi\|\ell''\lambda+\ell'\lambda'\|_\infty\|\f-\f_n\|_\infty
+\|\nu_n\|_\infty\|\ell'\|_\infty\sqrt{2\pi}\|\f_n\|
\end{split}
\end{equation}
By Lemma \ref{unif2}, $\|\f-\f_n\|_\infty\to 0$. Thus in order
for $q_n\to q$ weakly, all we need is $\|\nu_n\|_\infty\to 0$ and
$\int_{\R/2\pi}(\lambda+\nu_n)\f_n'=0$ (because clearly we
must have $\int_{\R/2\pi}q_n'=0$). Using integration
by parts, this last equality can be written:
\begin{equation}\label{needforc}
\int_{\R/2\pi}\f_n\nu_n'=\int_{\R/2\pi}[\f-\f_n]\lambda'
\end{equation}
We can pick $a_1$, $a_2$, and $a_3$ in $\R/2\pi$ such that:
\begin{equation}
\left|\begin{matrix}1&1&1\cr\f(a_1)\cdot\e_1&\f(a_2)\cdot\e_1&\f(a_3)\cdot\e_1
\cr\f(a_1)\cdot\e_2&\f(a_2)\cdot\e_2&\f(a_3)\cdot\e_2\end{matrix}\right|\geq 2\epsilon>0
\end{equation}
There exists an $N$ such that for every $n\geq N$, the determinant
with $\f$ replaced with $\f_n$ is greater than $\epsilon$. It
suffices to choose $\nu_n$ for $n\geq N$. Set
$C_n=\int_{\R/2\pi}[\f-\f_n]\lambda'$. We know that
$|C_n|\leq 2\pi\|\lambda'\|_\infty\|\f-\f_n\|_\infty$. We solve
the following system of equations for $b_{n,i}\in\R$:
\begin{align}
\hphantom{\f_n(a_1)}b_{n,1}+\hphantom{\f_n(a_2)}b_{n,2}+\hphantom{\f_n(a_3)}b_{n,3}&=0\\
\f_n(a_1)b_{n,1}+\f_n(a_2)b_{n,2}+\f_n(a_3)b_{n,3}&=C_n
\end{align}
For $n\geq N$, we can use Cramer's Rule to give the following
bound on the solution:
\begin{equation}
|b_{n,i}|\leq\epsilon^{-1}2[2\pi\|\lambda'\|_\infty\|\f-\f_n\|_\infty]2[\sqrt{2\pi}\|\f_n\|]
\end{equation}
Set $\nu_n(0)=0$ and:
\begin{equation}
\nu_n'(x)=b_{n,1}\delta(x-a_1)+b_{n,2}\delta(x-a_2)+b_{n,3}\delta(x-a_3)
\end{equation}
Then we will guarantee $\int_{\R/2\pi}\nu_n'=0$, equation
(\ref{needforc}), and $\|\nu_n\|_\infty\to 0$. Thus we
will have $q_n\to q$ (weakly).
\end{proof}
\begin{proof}[Proof of Proposition \ref{impl}]
We will write $V(\f)$ for
$\{(x,y)\in(\R/2\pi)^2:(\f,x,y)\in V\}$. Also, if
$Z\subset(\R/2\pi)^2$, we will write $T_Z$ for
$\{t\in C((\R/2\pi)^2)^*:t\geq 0\text{ and }\supp t\subseteq Z\}$.
We assume Theorem \ref{needmax}. Let
$\pi_\f:H\map Q_\f$ be the orthogonal projection and let
$J_\f=\pi_\f\circ R_\f'$. Then Theorem \ref{needmax} implies
``If $\f\in D_1$, $t\in T_{V(\f)}$, and $J_\f t=0$, then
$t=0$''.
I claim that there exists $\epsilon>0$ such that
$\|J_\f t\|\geq\epsilon\|t\|$ for
all $\f\in D_1$ and $t\in T_{V(\f)}$. If we suppose
the contrary, then there exist two sequences, $\f_n\in D_1$
and $t_n\in T_{V(\f_n)}$ with $\|t_n\|=1$
such that $\|J_{\f_n}t_n\|\to 0$. Since
$D_1$ is weakly closed, it is compact by the
Banach-Alaoglu Theorem, hence there exists a convergent
subsequence of $\f_n$ which we assume WLOG is the
whole sequence, so that $\f_n\to\f$. Since
this means that $\f_n\to\f$ uniformly, we will have
$\|R'_{\f_n}-R'_\f\|\to 0$. Thus:
\begin{equation}
\|\pi_{\f_n}R'_{\f_n}t_n\|\to 0\implies\|\pi_{\f_n}R'_\f t_n\|\to 0
\end{equation}
Now there is also a weak-$*$ convergent subsequence of the
$t_n$ by the Banach-Alaoglu Theorem, which again WLOG
is the whole sequence. Thus $t_n\to t\in T_{V(\f)}$
since $V$ is closed; also $\|t\|=1$. Pick some $q\in Q_\f$
which can be written as $q'=\lambda i\f'$ where $\lambda$
is smooth (such $q$ are dense in $Q_\f$). Let $q_n\in Q_{\f_n}$
be the sequence guaranteed to exist by Lemma \ref{approx}. We note
that since $q_n$ is weakly convergent, it is bounded. Now:
\begin{equation}\label{feq1}
0=\lim_{n\to\infty}\langle\pi_{\f_n}R'_\f t_n,q_n\rangle
=\lim_{n\to\infty}\langle R'_\f t_n,q_n\rangle
=\lim_{n\to\infty}\langle t_n,R_\f q_n\rangle
\end{equation}
Now by Lemma \ref{unif2}, $R_\f q_n\to R_\f q$ strongly.
Thus the final limit in equation (\ref{feq1}) is equal to
$\langle t,R_\f q\rangle$. This means that $\langle R_\f't,q\rangle=0$
for a dense subset of $q\in Q_\f$. Thus $J_\f t=0$ where
$\f\in D_1$ and $t\in T_{V(\f)}-\{0\}$, contradicting
Theorem \ref{needmax}. Thus the claim is proved.
We can now show the existence of an appropriate $\varphi$
for every $\f\in D_1$ exactly as in the proof of Theorem
\ref{farkas}.
Fix some $\f\in D_1$. Let $t_n\in T_{V(\f)}$ be a
sequence such that $\|t_n\|=1$ and:
\begin{equation}
\|J_\f t_n\|\to\inf_{\begin{smallmatrix}t\in T_{V(\f)}\cr\|t\|=1\end{smallmatrix}}\|J_\f t\|
=:w\geq\epsilon
\end{equation}
A subsequence is weak-$*$ convergent (WLOG the whole
sequence) to a limit $t_{\infty}$. Using the same reasoning
as above, we conclude that $J_\f t_n\to J_\f t_{\infty}$ in
the weak topology, so:
\begin{equation}
w\leq\|J_\f t_{\infty}\|\leq\liminf\|J_\f t_n\|=w
\end{equation}
Thus $\|J_\f t_{\infty}\|=w$. Let $q:=J_\f t_{\infty}/\|J_\f t_{\infty}\|$.
Now I claim that $\langle J_\f t,q\rangle\geq w\|t\|$
for all $t\in T_{V(\f)}$. Suppose not, that we have $t\in T_{V(\f)}$
with $\|t\|=1$ and $\langle J_\f t,q\rangle<w$.
Then $\langle J_\f t,J_\f t_{\infty}\rangle<w^2$. But
consider then:
\begin{equation}
\begin{split}
\left.\frac d{d\eta}\right|_{\eta=0}
&\|J_\f((1-\eta)t_{\infty}+\eta t)\|^2\cr
&=\left.\frac d{d\eta}
\left[(1-\eta)^2\|J_\f t_{\infty}\|^2+2\eta(1-\eta)\langle J_\f t,J_\f t_{\infty}\rangle
+\eta^2 \|J_\f t\|^2\right]\right|_{\eta=0}\cr
&=-2w^2+2\langle J_\f t_{\infty}, J_\f t\rangle<0
\end{split}
\end{equation}
This is a contradiction since $\|(1-\eta)t_{\infty}+\eta t\|=1$.
Hence the claim is proved.
Let $\varphi=q$. Then:
\begin{equation}
\langle t,R_\f\varphi\rangle=\langle J_\f t,q\rangle\geq w\|t\|\geq\epsilon\|t\|
\text{ for all $t\in T_{V(\f)}$}
\end{equation}
This means that $R_\f\varphi(x,y)\geq\epsilon$ for all $(x,y)\in V(\f)$.
\end{proof}
\section{A Generalization of the Maxwell-Cremona Theorem}\label{maxwells}
Let $A\in\Lin(C_0(\R^2,\R^2),\R^2)$
have compact support. Then by the Riesz Representation Theorem,
$A$ can be thought of as a matrix of measures on $\R^2$:
\begin{equation}
A=\left(\begin{matrix}a&b\cr d&e\end{matrix}\right)
\end{equation}
We are concerned with the case when $A$ is symmetric, that
is $b=d$. For the moment, suppose
$a$, $b$, and $e$ are continuous functions. In this case,
at each point $A$ has orthogonal eigenvectors $\vv_1$ and $\vv_2$
with eigenvalues $\lambda_1$ and $\lambda_2$. We think of
$A$ as representing a ``stress'' on the plane, where at each
point, there is tension in the $\vv_i$ direction of magnitude
$\lambda_i$. It turns out that it is right to call such a
stress is an ``equilibrium stress'' if:
\begin{equation}\label{eqstress}
A\grad g=0\text{ for all $g\in C_0^\infty(\R^2)$}
\end{equation}
In the case that $a$, $b$, and $e$ are continuous, it is
straightforward to show that in fact:
\begin{equation}\label{maxc}
A=\left(\begin{matrix}a&b\cr b&e\end{matrix}\right)=
\left(\begin{matrix}\hfill c_{yy}&-c_{xy}\cr-c_{xy}&\hfill c_{xx}\end{matrix}\right)
\end{equation}
The function $c$ will be in $C_c(\R^2)$. This is the
Maxwell-Cremona ``lifting'' of the stress represented by
$A$.
However, the notion of being an equilibrium stress (\ref{eqstress})
makes sense for any compactly supported $A$, so one would
expect that (\ref{maxc}) should hold in some sense for all
equilibrium stresses $A$. If $\U$ is a smooth vector field
and we integrate
$\iint_{\R^2}\left(\begin{smallmatrix}\hfill c_{yy}&-c_{xy}\cr
-c_{xy}&\hfill c_{xx}\end{smallmatrix}\right)\U\,dx\,dy$ by parts, we get
$\iint_{\R^2}c[i\grad\curl\U]\,dx\,dy$, so if (\ref{maxc}) holds
in the distributional sense, we would like this last integral
to give $A\U$ for smooth $\U$. This is the intuition for
the following theorem.
\begin{theorem}\label{maxwell}
Let $A\in\Lin(C_0(\R^2,\R^2),\R^2)$ have compact support.
Suppose $A$ is symmetric, that is there exist
$a,b,c\in C_0(\R^2)^*$ such that:
\begin{equation}
A=\left(\begin{matrix}a&b\cr b&e\end{matrix}\right)
\end{equation}
Additionally, suppose that for every $g\in C_0^\infty(\R^2)$,
$A\grad g=0$. Then there exists $c\in C_c(\R^2)$ such that
for all $\U\in C_0^\infty(\R^2,\R^2)$:
\begin{equation}\label{mconc}
A\U=\iint_{\R^2}c[i\grad\curl\U]\,dx\,dy
\end{equation}
\end{theorem}
\begin{proof}
First, let us show that (the matrix of measures associated with)
$A$ has no pure point part. Let $\p$
and $\vv$ be arbitrary. Choose $g\in C_0^\infty(\R^2)$ so
that $\grad g(\p)=\vv$. Then $0=A(\grad g)(\epsilon(\cdot)+\p)$,
but as $\epsilon\to 0$, right hand side approaches the pure
point part of $A$ at $\p$ applied to $\vv$. Hence $A$ has
no pure point part.
Consider the measure $|A|\in C_0(\R^2)^*$, where the
$|\cdot|$ of a matrix is its operator norm. In other words,
for $f\geq 0$, we define:
\begin{equation}
|A|f:=\sup_{\begin{smallmatrix}\theta:\R^2\map\R\cr\psi:\R^2\map\R\end{smallmatrix}}\iint_{\R^2}
\left(\begin{matrix}\cos\theta&\sin\theta\end{matrix}\right)
\left(\begin{matrix}a&b\cr b&e\end{matrix}\right)
\left(\begin{matrix}\cos\psi\cr \sin\psi\end{matrix}\right)f
\end{equation}
We know $|A|$ comes from a measure, which we will also denote
$|A|$. Let $\mu(\theta)$
be the measure on the real line $\R$ at angle $\theta$ passing
through the origin, obtained by projecting the
measure $|A|$ orthogonally onto the line. In other words:
\begin{equation}
\int_\R f(x)\,d\mu(\theta)=\iint_{\R^2}f((x,y)\cdot(\cos\theta,\sin\theta))|A|
\end{equation}
Now let $\mu_{\text{pp}}(\theta)$ be the pure point part of
$\mu(\theta)$. I claim that $\mu_{\text{pp}}(\theta)\ne 0$ for
at most countably many $\theta$. We note that this is implied
by the following:
\begin{equation}\label{crit}
\sum_{i=1}^N\|\mu_{\text{pp}}(\theta_i)\|\leq\||A|\|\text{ whenever $\theta_i$ are distinct}
\end{equation}
But (\ref{crit}) is true because any part of $|A|$ which
contributes to both $\|\mu_{\text{pp}}(\theta_i)\|$ and
$\|\mu_{\text{pp}}(\theta_j)\|$ would have to be supported on
a countable set of points, and hence would have to be pure point,
which we know $A$, and hence $|A|$ does not have. Now let
$m(\theta,h)=\sup_{x\in\R}\int_x^{x+h}\mu(\theta)(y)\,dy$.
Now $m(\theta,h)\to 0$ as $h\to 0$ if $\mu(\theta)$ has no
pure point part, thus $m(\theta,h)\to 0$ for almost all $\theta$.
This fact being proved, we can proceed to the construction of $c$.
Let $\phi$ be a smooth real valued even function on
$\R^2$ with support contained in the unit disc which satisfies
$\phi\geq 0$ and $\iint_{\R^2}\phi=1$. Let
$\phi_\eta(\p)=\eta^{-2}\phi(\eta^{-1}\p)$. We can then define
the operator:
\begin{equation}
A_\eta=A*\phi_\eta=\left(\begin{matrix}
a^{(\eta)}&b^{(\eta)}\cr b^{(\eta)}&e^{(\eta)}\end{matrix}\right)
\end{equation}
Now we know that:
\begin{equation}
a^{(\eta)},b^{(\eta)},e^{(\eta)}\in C_0^\infty(\R^2)\text{ and that
$A_\eta\grad g=0$ for all $g\in C_0^\infty(\R^2)$}
\end{equation}
Thus the vector fields $(a^{(\eta)},b^{(\eta)})$ and
$(b^{(\eta)},e^{(\eta)})$ have zero divergence. That
means there exist $f^{(\eta)},g^{(\eta)}\in C_0^\infty(\R^2)$ such
that $a^{(\eta)}=f^{(\eta)}_y$,
$b^{(\eta)}=-f^{(\eta)}_x=-g^{(\eta)}_y$, and
$e^{(\eta)}=g^{(\eta)}_x$. The equality $f^{(\eta)}_x=g^{(\eta)}_y$ implies
that there exists $c^{(\eta)}\in C_0^\infty(\R^2)$ such
that $f^{(\eta)}=c^{(\eta)}_y$ and $g^{(\eta)}=c^{(\eta)}_x$.
In other words:
\begin{equation}
A_\eta=\left(\begin{matrix}\hfill c^{(\eta)}_{yy}&-c^{(\eta)}_{xy}
\cr-c^{(\eta)}_{xy}&\hfill c^{(\eta)}_{xx}\end{matrix}\right)
\end{equation}
{\bf Claim: For every $\epsilon>0$, there exist $\delta>0$
and $\eta_0>0$ such that:
\begin{equation}
\eta_0>\eta>0\text{ and }|\q-\p|<\delta\implies|c^{(\eta)}(\p)-c^{(\eta)}(\q)|<\epsilon
\end{equation}}
Let $\epsilon>0$ be given. Suppose $\p=(x_0,y)\in\R^2$ and
$\q\in\R^2$ and we wish to bound
$|c^{(\eta)}(\p)-c^{(\eta)}(\q)|$ given $|\q-\p|<\delta$.
To simplify notation, we will for the moment assume
that $\q=(x,y)$. Then:
\begin{equation}
\begin{split}
\left|c^{(\eta)}(\p)-c^{(\eta)}(\q)\right|&=\left|\int_{x_0}^xc^{(\eta)}_x(t,y)\,dt\right|
=\left|\int_{x_0}^x\int_{-\infty}^yc^{(\eta)}_{xy}(t,z)\,dz\,dt\right|\cr
&\leq\int_{x_0}^x\int_{-\infty}^\infty\left|b^{(\eta)}(t,z)\right|\,dz\,dt
\leq\int_{x_0-\eta}^{x+\eta}\int_{-\infty}^\infty\left|A(t,z)\right|\,dz\,dt\cr
&\leq m(0,\delta+2\eta)
\end{split}
\end{equation}
Similary, if $\theta_\p^\q$ is the angle of the segment from
$\p$ to $\q$, then we have:
\begin{equation}
\left|c^{(\eta)}(\p)-c^{(\eta)}(\q)\right|\leq m(\theta_\p^\q,2\eta+\delta)
\end{equation}
Now since $m(\theta,h)\to 0$ as $h\to 0$ for all but at most
countably many $\theta$, there exists $h>0$ such that the
measure of the set $\{\theta:m(\theta,h)<\epsilon/4\}$
is more than $\frac{5\pi}3$. Then if $2\eta+\delta<
\min(\epsilon/(4\pi\|t\|),h)$ and the slope the segment from
$\p$ to $\q$ is not in the exceptional set of $\theta$
(which has measure less than $\frac\pi 3$),
then $|c^{(\eta)}(\p)-c^{(\eta)}(\q)|\leq\epsilon/2$. But
for any $\p$ and $\q$ within $\delta$ of each other,
we can find a $\rr$ within $\delta$ of both $\p$ and
$\q$ so that neither of the segments $\p$ to $\rr$ and
$\rr$ to $\q$ are in the exceptional set of $\theta$.
Hence by the triangle inequality,
$|c^{(\eta)}(\p)-c^{(\eta)}(\q)|\leq\epsilon$ if we set
$\eta_0=\delta=\frac 14\min(\epsilon/(4\pi\|t\|),h)$.
Thus the claim is true.
Now by the Arzel\`a-Ascoli Theorem, there exists a
subsequence of $c^{(1/n)}$ which converges uniformly
to a continous function $c\in C_c(\R^2)$. Thus let
$\eta_i\to 0$ and satisfy $c^{(\eta_i)}\to c\in C_c(\R^2)$
uniformly as $i\to\infty$. As remarked before, if $\U$
is smooth compactly supported vector field, then it is
a straightforward integration by parts to show:
\begin{equation}
A(\U*\phi_{\eta_i})=A_{\eta_i}\U=\iint_{\R^2}c^{(\eta_i)}[i\grad\curl\U]\,dx\,dy
\end{equation}
Taking the limit as $i\to\infty$, we obtain (\ref{mconc})
as was to be shown.
\end{proof}
\section{Open Problems}
Now, I can state some conjectures on possible strengthening
of Theorem \ref{main}. For example, we can
conjecture that there exists an $\h$ which is not only
continuous, but in fact smooth. Also, if the initial curve is
smooth, we can require that the curve be smooth at every
time during the deformation.
\begin{conjecture}
Given a unit speed simple closed curve $\f:\R/2\pi\map\C$,
there exists a smooth function $\h:[0,1]\map\mathcal D$ satisfying
(1)--(4).
\end{conjecture}
\begin{conjecture}
Given a smooth unit speed simple closed curve $\f:\R/2\pi\map\C$,
there exists a continuous function $\h:[0,1]\map\mathcal D$ satisfying
(1)--(4) as well as:
\begin{itemize}
\item[(5)] $\h(t)(x)$ is a smooth function of $x$ for all $t\in[0,1]$.
\end{itemize}
\end{conjecture}
I also conjecture that it is possible to extend Corollary \ref{cgen}
to something resembling the following.
\begin{conjecture}
Suppose $\f:\R/2\pi\map\R^2$ is a rectifiable simple closed
curve which is not convex. Then there exists
$\varphi:\R/2\pi\map\R^2$ which is absolutely continuous and
satisfies $\f'\cdot\varphi'\equiv 0$, as well as
$(\f(x)-\f(y))\cdot(\varphi(x)-\varphi(y))>0$ whenever the
line segment connecting $\f(x)$ and $\f(y)$ is not
completely contained in $\f(\R/2\pi)$.
\end{conjecture}
Of course, this would be in preparation to prove:
\begin{mconjecture}
There exists a proof of Theorem \ref{main} which does not rely
on approximation by polygons.
\end{mconjecture}
\bibliographystyle{amsplain}
| {
"timestamp": "2008-09-08T20:49:33",
"yymm": "0809",
"arxiv_id": "0809.1404",
"language": "en",
"url": "https://arxiv.org/abs/0809.1404",
"abstract": "I show that every rectifiable simple closed curve in the plane can be continuously deformed into a convex curve in a motion which preserves arc length and does not decrease the Euclidean distance between any pair of points on the curve. This result is obtained by approximating the curve with polygons and invoking the result of Connelly, Demaine, and Rote that such a motion exists for polygons. I also formulate a generalization of their program, thereby making steps toward a fully continuous proof of the result. To facilitate this, I generalize two of the primary tools used in their program: the Farkas Lemma of linear programming to Banach spaces and the Maxwell-Cremona Theorem of rigidity theory to apply to stresses represented by measures on the plane.",
"subjects": "Differential Geometry (math.DG)",
"title": "On the unfolding of simple closed curves",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109529751892,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.707679622615348
} |
https://arxiv.org/abs/2009.03684 | Discrete Fourier transforms, quantum $6j$-symbols and deeply truncated tetrahedra | The asymptotic behavior of quantum $6j$-symbols is closely related to the volume of truncated hyperideal tetrahedra\,\cite{C}, and plays a central role in understanding the asymptotics of the Turaev-Viro invariants of $3$-manifolds. In this paper, we propose a conjecture relating the asymptotics of the discrete Fourier transforms of quantum $6j$-symbols on one hand, and the volume of deeply truncated tetrahedra of various types on the other. As supporting evidence, we prove the conjecture in the case that the dihedral angles are sufficiently small, and provide numerical calculations in the case that the dihedral angles are relatively big. A key observation is a relationship between quantum $6j$-symbols and the co-volume function of deeply truncated tetrahedra, which is of interest in its own right. More ambitiously, we extend the conjecture to the discrete Fourier transforms of the Yokota invariants of planar graphs and volume of deeply truncated polyhedra, and provide supporting evidence. | \section{Introduction}
Quantum $6j$-symbols are the main building blocks of the Turaev-Viro invariants of $3$-manifolds\,\cite{TV}; the asymptotic behavior of the former plays a central role in understanding the asymptotics of the latter\,\cite{CY, BEL, BDKY}. In \cite{C}, it is proved that the exponential growth rate of the quantum $6j$-symbols is closely related to the volume of truncated hyperideal tetrahedra. In this paper, we aim to study the discrete Fourier transforms of quantum $6j$-symbols and the relationship between their asymptotic behavior and the volume of deeply truncated tetrahedra of various types.
To be precise, let $(I,J)$ be a partition of $\{1,\dots,6\},$ and let $\Delta$ be a deeply truncated tetrahedron of type $(I,J),$ ie., $\{e_i\}_{i\in I}$ is the set of edges of deep truncation. (See Section \ref{tetra} for more details.)
For a $6$-tuple $(b_I, a_J)=((b_i)_{i\in I},(a_j)_{j\in J})$ of integers in $\{0,\dots, r-2\},$ let $\mathrm{\widehat {Y}}_r\big(b_I; a_J\big)$ be the discrete Fourier transform of the Yokota invariant of the trivalent graph $\cp{\includegraphics[width=0.5cm]{Yokota}}$ with respect to $(b_I,a_J),$ ie.,
$$\mathrm{ \widehat {Y}}_r\big(b_I; a_J\big)=\sum_{(a_i)_{i\in I}}\prod_{i\in I} \mathrm{H}(a_i,b_i)\bigg|\begin{matrix}
a_1 & a_2 & a_3\\
a_4 & a_5 & a_6
\end{matrix}\bigg|^2$$
where the sum is over all multi-integers $(a_i)_{i\in I}$ in $\{0,\dots, r-2\}$ so that the triples $(a_1, a_2, a_3),$ $(a_1, a_5, a_6),$ $(a_2, a_4, a_6)$ and $(a_3, a_4, a_5)$ are $r$-admissible,
$$\mathrm{H}(a_i,b_i)=(-1)^{a_i+b_i}\frac{q^{(a_i+1)(b_i+1)}-q^{-(a_i+1)(b_i+1)}}{q-q^{-1}},$$
and $\bigg|\begin{matrix} a_{1} & a_{2} & a_{3} \\ a_{4} & a_{5} & a_{6} \end{matrix} \bigg|$
is the quantum $6j$-symbol of the $6$-tuple $(a_{1},\dots,{a_{6}}).$
(See Sections \ref{6jsymbols}, \ref{yok} and \ref{pdft} for more details.)
\begin{conjecture}\label{conj} Suppose $\Delta(\theta_I;\theta_J)$ is a deeply truncated tetrahedron of type $(I,J)$ with $\theta_I=\{\theta_i\}_{i\in I}$ the set of dihedral angles at the edges of deep truncation and $\theta_J=\{\theta_j\}_{j\in J}$ the set of dihedral angles at the regular edges. Let $\{(b_I^{(r)}, a_J^{(r)})\}$ be a sequence of $6$-tuples with
$$\theta_i=\Big|\pi-\lim_{r\to \infty}\frac{2\pi b_i^{(r)}}{r}\Big|$$
for $i\in I,$ and
$$\theta_j=\Big|\pi-\lim_{r\to \infty}\frac{2\pi a_j^{(r)}}{r}\Big|$$
for $j\in J.$ Then evaluated at the root of unity $q=e^{\frac{2\pi \sqrt{-1}}{r}}$ and as $r$ varies over all positive odd integers,
$$\lim_{r\to \infty}\frac{2\pi}{r}\log \mathrm{\widehat {Y}}_r\big(b_I^{(r)};a_J^{(r)}\big)=2\rm{Vol}(\Delta(\theta_I;\theta_J)).$$
\end{conjecture}
As a convincing supporting evidence, we prove the following main result of this paper.
\begin{theorem}\label{main}
For any $(I,J),$ there exists an $\epsilon>0$ such that if all the dihedral angles of $\Delta(\theta_I;\theta_J)$ are less than $\epsilon,$ then Conjecture \ref{conj} is true.
\end{theorem}
Furthermore, we provide additional numerical evidence for Conjecture \ref{conj} with relatively big dihedral angles in Section \ref{appendix}.
An analogous Volume Conjecture for deeply truncated tetrahedra with one edge of deep truncation was suggested (and has been verified numerically in a handful of cases) in \cite[Section 5, Conjecture 3]{KM2}. Notice however that the quantities inside the logarithm in Conjecture \ref{conj} and in \cite{KM2}, albeit similar-looking, are different.
\\
We also propose a more ambitious conjecture for the discrete Fourier transforms of the Yokota invariants of planar graphs which are the $1$-skeleton of deeply truncated polyhedra. (See Sections \ref{dtp}, \ref{yok} and \ref{pdft} for more details.)
\begin{conjecture}\label{Ambconj}
Let $\Gamma\subset S^3$ be a planar graph and let $P$ be a deeply truncated polyhedron with $1$-skeleton $\Gamma.$ Let $\{\theta_i\}_{i\in I}$ be the angles at the edges of deep truncation and let $\{\theta_j\}_{j\in J}$ be the dihedral angles of $P$ at the regular edges. Let $\{(b^{(r)}_I,a^{(r)}_J)\}$ be a sequence of colorings of $\Gamma$ such that
$$\theta_i=\Big|\pi-\lim_{r\to \infty}\frac{2\pi b_i^{(r)}}{r}\Big|$$
for $i\in I$ and
$$\theta_j=\Big|\pi-\lim_{r\to \infty}\frac{2\pi a_j^{(r)}}{r}\Big|$$
for $j\in J,$ and let $\mathrm{\widehat {Y}}_r\big(\Gamma, b_I^{(r)};a_J^{(r)}\big)$ be the discrete Fourier transform of the Yokota invariant of $\Gamma$ with respect to $(b^{(r)}_I,a^{(r)}_J).$
Then as $r$ varies over all positive odd integers,
$$\lim_{r\to \infty}\frac{2\pi}{r}\log \mathrm{\widehat {Y}}_r\big(\Gamma,b_I^{(r)}; a_J^{(r)}\big)=2\rm{Vol}(P).$$
\end{conjecture}
\begin{remark} By using the same techniques as in the proof of Theorem \ref{main}, one can prove that Conjecture \ref{Ambconj} is true for all the graphs obtained from $\cp{\includegraphics[width=0.5cm]{Yokota}}$ by doing a sequence of the following triangle moves $\cp{\includegraphics[width=1.5cm]{triangle}}$ and with sufficiently small dihedral angles, providing supporting evidences to Conjecture \ref{Ambconj}.
\end{remark}
\bigskip
\noindent\textbf{Outline of the proof of Theorem \ref{main}.}
We follow the guideline of Ohtsuki's method. In Proposition \ref{computation}, we compute the discrete Fourier transform of the Yokota invariants of the graph $\cp{\includegraphics[width=0.5cm]{Yokota}},$ writing them as a sum of values of a holomorphic function $f_r$ at integer points. The function $f_r$ comes from Faddeev's quantum dilogarithm function. Using Poisson Summation Formula, we in Proposition \ref{Poisson} write the invariants as a sum of the Fourier transforms of $f_r$ computed in Propositions \ref{4.2}. In Proposition \ref{crit} we show that the critical value of the functions in the leading Fourier transforms has real part the volume of the deeply truncated tetrahedron. The key observation is Theorem \ref{co-vol} relating the asymptotics of quantum $6j$-symbols with the co-volume function of deeply truncated tetrahedra, which is of interest in its own right. Then we estimate the leading Fourier transforms in Sections \ref{leading} using the Saddle Point Method (Proposition \ref{saddle}). Finally, we estimate the non-leading Fourier transforms and the error term respectively in Sections \ref{ot} and \ref{ee} showing that they are neglectable, and prove Theorem \ref{main} in Section \ref{pf}. We discuss the result in as general a case as possible, and only in the last section we use the assumption that the dihedral angles are sufficiently small.
\\
\noindent\textbf{Acknowledgments.} The second author would like to thank Francis Bonahon, Zhengwei Liu, Feng Luo and Ka Ho Wong for helpful discussions. The second author is partially supported by NSF Grant DMS-1812008.
\section{Preliminaries}
\subsection{Deeply truncated tetrahedron and co-volume function}\label{tetra}
\begin{definition} A \emph{ deeply truncated tetrahedron} is a compact hyperbolic polyhedron with faces $H_1,$ $H_2,$ $H_3,$ $H_4,$ $T_1,$ $T_2,$ $T_3,$ and $T_4$ such that
\begin{enumerate}[(1)]
\item For each $i\in \{1,2,3,4\},$ $T_i\cap H_i=\emptyset.$
\item For each $\{i,j\} \subset \{1,2,3,4\},$ $T_i\cap H_j\neq\emptyset,$ and the dihedral angle between them is always $\frac{\pi}{2}.$
\item For each $\{i,j\} \subset \{1,2,3,4\},$ either $T_i\cap T_j\neq\emptyset$ or $H_i\cap H_j\neq\emptyset,$ but not both.
\end{enumerate}
\end{definition}
From the definition, we see that each face $H_i$ or $T_i$ is one of the following four types: (1) a hyperbolic triangle, (2) a hyperbolic quadrilateral with two right angles, (3) a hyperbolic pentagon with four right angles and (4) a hyperbolic hexagon with six right angles.
We only consider the intersection of $H_i$ and $H_j$ or the intersection of $T_i$ and $T_j$ as the \emph{edge} of the deeply truncated tetrahedron; therefore there are in total six edges. We call an edge between $H_i$ an $H_j$ a \emph{regular edge} and an edge between $T_i$ and $T_j$ an \emph{edge of deep truncation}.
A truncated hyperideal tetrahedron is one example of deeply truncated tetrahedra, where the $H_i$'s are the hexagonal faces, and $T_i$'s are the triangles from truncations (see Figure \ref{hyperideal}). For a truncated hyperideal tetrahedron, there are six regular edges and no edge of deep truncation. The name ``edge of deep truncation'' comes from the idea that if the truncations are ``deep'' enough, then two triangles $T_i$ and $T_j$ coming from truncations may intersect. That is also why the tetrahedron is called ``deeply truncated''. Deeply truncated tetrahedra were first studied by Kolpakov and Murakami in \cite{KM}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.3]{hyperideal.pdf}
\caption{Truncated hyperideal tetrahedron}
\label{hyperideal}
\end{figure}
Let $(I,J)$ be a partition of $\{1,\dots,6\}.$ A deeply truncated tetrahedron $\Delta$ is of type $(I,J)$ if $\{e_i\}_{i\in I}$ is the set of edges of deep truncation. Up to permutation of indices and interchange the role of $H_i$'s and $T_i$'s, all the types of deeply truncated tetrahedra besides the truncated hyperideal tetrahedron are listed in Figure \ref{deep}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.25]{deep.pdf}
\caption{Deeply truncated tetrahedra: The red edges are the edges of deep truncation and the blue edges are the regular edges. The dihedral angles at the grey edges are all right angles.}
\label{deep}
\end{figure}
Let $\Delta=\Delta(\theta_I,\theta_J)$ be the deeply truncated tetrahedron of type $(I,J);$ let $\{\theta\}_{i\in I}$ be the dihedral angles at the edges of deep truncation and let $\{\theta_j\}_{j\in J}$ be the dihedral angles at the regular edges. Let $\{l_i\}_{i\in I}$ be the lengths of the edges of deep truncation. Let $c_i=\cosh l_i$ for $i\in I,$ and let $c_j=\cos\theta_j$ for $j\in J.$ If without loss of generality we assume that $1\in I,$ then by the hyperbolic cosine law of polygons of various types (see \cite[Appendix A]{GL2}) and a direct computation, we have
\begin{equation}\label{cos1}
\cos\theta_1=\frac{c_4+c_2c_6+c_3c_5+(c_2c_5+c_3c_6)c_1-c_4c_1^2}{\sqrt{-1+c_1^2+c_2^2+c_3^2+2c_1c_2c_3}\sqrt{-1+c_1^2+c_5^2+c_6^2+2c_1c_5c_6}}.
\end{equation}
If we consider the following Gram matrix
\begin{equation*}
\begin{split}
G=\left[
\begin{array}{cccc}
1 & -c_1 & -c_2 & -c_6\\
-c_1& 1 & -c_3 & -c_5\\
-c_2 & -c_3 & 1 & -c_4 \\
-c_6 & -c_5 & -c_4 & 1 \\
\end{array}\right],
\end{split}
\end{equation*}
then (\ref{cos1}) can be written as
\begin{equation}\label{cos}
\cos\theta_1=\frac{G_{34}}{\sqrt{G_{33}G_{44}}}.
\end{equation}
A deeply truncated tetrahedron $\Delta$ of type $(I,J)$ is up to isometry determined by $\{l_i\}_{i\in I}$ and $\{\theta_j\}_{j\in J}.$ In particular, if we fix $\{\theta_j\}_{j\in J},$ then the dihedral angles $\{\theta_i\}_{i\in I}$ at the edges of deep truncation are functions of $\{l_i\}_{i\in I}$ and as a consequence the volume of $\Delta$ is a function of $\{l_i\}_{i\in I}.$
\begin{definition}[\cite{Luo, LY}]\label{cov}
For a fixed $\{\theta_j\}_{j\in J},$ let $\mathrm{Vol}$ and $\{\theta_i\}_{i\in I}$ respectively be the volume of $\Delta$ and the dihedral angles at the edges of deep truncation as functions of $l_I=(l_i)_{i\in I},$ and consider the following co-volume function $\mathrm{Cov}$ defined by
$$\mathrm{Cov}(l_I)=\mathrm{Vol}+\frac{1}{2}\sum_{i\in I}\theta_i\cdot l_i .$$
\end{definition}
The key property of the co-volume function is the following
\begin{lemma}\label{Sch} For $i\in I,$
$$\frac{\partial \mathrm{Cov}}{\partial l_i}=\frac{\theta_i}{2}.$$
\end{lemma}
\begin{proof} By the Schl\"afli formula, we have
$$\frac{\partial \mathrm{Vol}}{\partial \theta_i}=-\frac{l_i}{2}.$$
Then by the chain rule and the product rule, we have
\begin{equation*}
\begin{split}
\frac{\partial \mathrm{Cov}}{\partial l_i}=&\sum_{k\in I}\frac{\partial \mathrm{Vol}}{\partial\theta_k}\cdot\frac{\partial \theta_k}{\partial l_i}+\frac{1}{2}\sum_{k\in I}\frac{\partial}{\partial l_i}\Big(\theta_k\cdot l_k\Big)\\
=&-\sum_{k\in I}\frac{l_k}{2}\cdot \frac{\partial \theta_k}{\partial l_i}+\frac{1}{2}\sum_{k\in I}\cdot\frac{\partial \theta_k}{\partial l_i}\cdot l_k+\frac{\theta_i}{2}=\frac{\theta_i}{2}.
\end{split}
\end{equation*}
\end{proof}
Finally we include a sufficient condition to determine whether a deeply truncated tetrahedron with specified parameters exists.
\begin{proposition}\label{prop:criterion}
Let $(I,J)$ be a partition of $\{1,\dots,6\},$ $\{l_i\}_{i\in I}$ be positive real numbers and $\{\theta_j\}_{j\in J}$ be real numbers in $[0,\pi].$ Let $c_i=\cosh l_i$ for $i\in I$ and $c_j=\cos\theta_j$ for $j\in J,$ and let \begin{displaymath}
G=\left[
\begin{array}{cccc}
1 & -c_1 & -c_2 & -c_6\\
-c_1& 1 & -c_3 & -c_5\\
-c_2 & -c_3 & 1 & -c_4 \\
-c_6 & -c_5 & -c_4 & 1 \\
\end{array}\right]
\end{displaymath}
be the Gram matrix defined as above.
If
\begin{enumerate}[(1)]
\item $\textrm{Sign}(G)=(3,1),$
\item $G_{st}>0$ for $s\neq t$ and
\item $G_{ss}<0,$
\end{enumerate}
then there exists a deeply truncated tetrahedron of type $(I,J)$ with lengths $\{l_i\}_{i\in I}$ of the edges of deep truncation and
dihedral angles $\{\theta_j\}_{j\in J}$ at the regular edges.
\end{proposition}
\begin{proof}
The first part of the proof in \cite[Theorem 3.2]{U} works verbatim in the case of deeply truncated tetrahedra. Notice that we require that a deeply truncated tetrahedron has hyperideal vertices, while [Ushijima, Theorem 3.2] does not; this accounts for the extra condition $(3)$ (see \cite[Remark 3 after Theorem 3.2]{U}).
\end{proof}
\begin{remark}
The conditions in \cite[Theorem 3.2]{U} are necessary and sufficient. However, if the pair $(s,t)$ gives entry of the Gram matrix corresponding to an edge of deep truncation, then the condition $G_{st}>0$ would imply that the dihedral angle at this edge is less than $\frac{\pi}{2},$ which is not always the case. Hence the conditions in Proposition \ref{prop:criterion} are not necessary in general.
\end{remark}
\subsection{Deeply truncated polyhedra}\label{dtp}
We extend the definition of a deeply truncated tetrahedron to polyhedra of any combinatorial type, which are the objects involved in Conjecture \ref{Ambconj}.
Let $\Gamma\subset S^3$ be a polyhedral graph (that is, a graph that is the $1$-skeleton of a polyhedron), and let $V$ be its set of vertices and $F$ its set of faces.
\begin{definition}
A \emph{deeply truncated polyhedron} with $1$-skeleton $\Gamma$ is a compact hyperbolic polyhedron $P\subset \mathbb{H}^3$ with faces $\{T_v\}_{v\in V}\cup \{H_f\}_{f\in F}$ such that:
\begin{enumerate}[(1)]
\item $T_v\cap H_f\neq \varnothing$ if and only if $v\in f,$
\item if $T_v\cap H_f\neq \varnothing,$ then they intersect at a right angle, and
\item for every edge $e$ of $\Gamma$ with endpoints $v_1,v_2$ and adjacent to faces $f_1,f_2,$ exactly one of $T_{v_1}\cap T_{v_2}$ and $H_{f_1}\cap H_{f_2}$ is non-empty.
\end{enumerate}
The edge $e$ of $\Gamma$ is a \emph{regular edge} if $H_{f_1}\cap H_{f_2}\neq \varnothing;$ otherwise it is an \emph{edge of deep truncation}. In either case, the \emph{dihedral angle} of $P$ at $e$ is the angle between the two intersecting faces.
\end{definition}
\begin{remark}
Notice that $P$ by itself is simply a compact hyperbolic polyhedron; to obtain a deeply truncated polyhedron with $1$-skeleton $\Gamma$ we need the additional information of the partition of its faces according to vertices and faces of $\Gamma.$ In particular, $P$ as a simplicial complex in $\mathbb{H}^3$ does \emph{not} have $1$-skeleton $\Gamma;$ by definition $\Gamma$ is the $1$-skeleton of $P$ with the additional information.
\end{remark}
\begin{example}
A truncated hyperideal polyhedron $P$ with $1$-skeleton $\Gamma$\,\cite{BB} is a deeply truncated polyhedron with $1$-skeleton $\Gamma.$ The faces of $P$ give the set $H_F$ while the faces dual to the vertices of $P$ give the set $T_V.$ In this case all edges are regular.
\end{example}
\begin{remark}
The statement of Proposition \ref{Sch} is true for deeply truncated polyhedra as well; the same proof applies verbatim.
\end{remark}
\subsection{Quantum \texorpdfstring{$6j$}{6j}-symbols}\label{6jsymbols}
Let $r$ be an odd integer and $q$ be an $r$-th root of unity. For the context of this paper we are only interested in the case $q=e^{\frac{2\pi\sqrt{-1}}{r}},$ but the definitions and results in this section work with any choice of $q.$
As is customary we define $[n]=\frac{q^n-q^{-n}}{q-q^{-1}},$ $\{n\}=q^n-q^{-n}$ and the quantum factorial
$$[n]!=\prod_{k=1}^n[k].$$
A triple $(a_1,a_2,a_3)$ of integers in $\{0,\dots,r-2\}$ is \emph{$r$-admissible} if
\begin{enumerate}[(1)]
\item $a_i+a_j-a_k\geqslant 0$ for $\{i,j,k\}=\{1,2,3\}.$
\item $a_1+a_2+a_3\leqslant 2(r-2),$
\item $a_1+a_2+a_3$ is even.
\end{enumerate}
For an $r$-admissible triple $(a_1,a_2,a_3),$ define
$$\Delta(a_1,a_2,a_3)=\sqrt{\frac{[\frac{a_1+a_2-a_3}{2}]![\frac{a_2+a_3-a_1}{2}]![\frac{a_3+a_1-a_2}{2}]!}{[\frac{a_1+a_2+a_3}{2}+1]!}}$$
with the convention that $\sqrt{x}=\sqrt{|x|}\sqrt{-1}$ when the real number $x$ is negative.
A 6-tuple $(a_1,\dots,a_6)$ is \emph{$r$-admissible} if the triples $(a_1,a_2,a_3),$ $(a_1,a_5,a_6),$ $(a_2,a_4,a_6)$ and $(a_3,a_4,a_5)$ are $r$-admissible.
\begin{definition}
The \emph{quantum $6j$-symbol} of an $r$-admissible 6-tuple $(a_1,\dots,a_6)$ is
\begin{multline*}
\bigg|\begin{matrix} a_1 & a_2 & a_3 \\ a_4 & a_5 & a_6 \end{matrix} \bigg|
= \sqrt{-1}^{-\sum_{i=1}^6a_i}\Delta(a_1,a_2,a_3)\Delta(a_1,a_5,a_6)\Delta(a_2,a_4,a_6)\Delta(a_3,a_4,a_5)\\
\sum_{k=\max \{T_1, T_2, T_3, T_4\}}^{\min\{ Q_1,Q_2,Q_3\}}\frac{(-1)^k[k+1]!}{[k-T_1]![k-T_2]![k-T_3]![k-T_4]![Q_1-k]![Q_2-k]![Q_3-k]!},
\end{multline*}
where $T_1=\frac{a_1+a_2+a_3}{2},$ $T_2=\frac{a_1+a_5+a_6}{2},$ $T_3=\frac{a_2+a_4+a_6}{2}$ and $T_4=\frac{a_3+a_4+a_5}{2},$ $Q_1=\frac{a_1+a_2+a_4+a_5}{2},$ $Q_2=\frac{a_1+a_3+a_4+a_6}{2}$ and $Q_3=\frac{a_2+a_3+a_5+a_6}{2}.$
\end{definition}
Closely related, a triple $(\alpha_1,\alpha_2,\alpha_3)\in [0,2\pi]^3$ is \emph{admissible} if
\begin{enumerate}[(1)]
\item $\alpha_i+\alpha_j-\alpha_k\geqslant 0$ for $\{i,j,k\}=\{1,2,3\},$
\item $\alpha_i+\alpha_j+\alpha_k\leqslant 4\pi.$
\end{enumerate}
A $6$-tuple $(\alpha_1,\dots,\alpha_6)\in [0,2\pi]^6$ is \emph{admissible} if the triples $\{1,2,3\},$ $\{1,5,6\},$ $\{2,4,6\}$ and $\{3,4,5\}$ are admissible.
\subsection{The Yokota invariant}\label{yok}
In this section we recall the definition of the Yokota invariant, first introduced in \cite{Y}. It is an invariant that extends the Kauffman bracket for trivalent graphs to the case of graphs with vertices of any valence. For the sake of simplicity we only deal with the case of planar graphs with no $1$- or $2$-valent vertices; the general case of framed graphs in closed oriented manifolds is not conceptually more complex.
Let $\Gamma\subset S^3$ be a trivalent planar graph, $a_I$ be a coloring of its edges with elements in $\{0,\dots,r-2\},$ and denote with $\langle \Gamma,a_I\rangle$ the \emph{Kauffman bracket} evaluated at the $r$-th root of unity $q=e^{\frac{2\pi\sqrt{-1}}{r}}$ of the pair $\left(\Gamma,a_I\right)$ (see for example \cite[Section 9]{KL} for a definition).
We say that the coloring $a_I$ is \emph{$r$-admissible} if, whenever $i,j,k\in I$ are the indices of the edges of $\Gamma$ sharing a vertex, the triple $(a_i,a_j,a_k)$ is $r$-admissible.
\begin{definition}
A \emph{desingularization} of a planar graph $\Gamma$ with no $1$- and $2$-valent vertices is a graph $\Gamma'$ that coincides with $\Gamma$ outside of a neighborhood of the vertices of $\Gamma,$ and in a neighborhood of each vertex is a planar trivalent tree, as in Figure \ref{fig:desing}.
\end{definition}
\begin{figure}
\centering
\begin{minipage}{.4\textwidth}\centering \begin{tikzpicture}[scale=0.5]
\centering
\draw[thick] (3,3)--(3,-3);
\draw[thick] (0.5,2)--(5.5,-2);
\draw[thick] (5.5,2)--(0.5,-2);
\end{tikzpicture}
\end{minipage}
$\xrightarrow{\hspace*{1cm}}$
\begin{minipage}{.4\textwidth}
\centering
\begin{tikzpicture}[scale=0.4]
\centering
\draw[thick] (3,3)--(3,0);
\draw[thick] (3,2)--(3,-4);
\draw[thick] (0.5,2)--(3,0);
\draw[thick] (5.5,2)--(3,1);
\draw[thick](0.5,-3)--(3,-2);
\draw[thick] (3,-1)--(5.5,-3);
\end{tikzpicture}
\end{minipage}
\caption{Desingularization in a neigborhood of a vertex of valence $6$}\label{fig:desing}
\end{figure}
\begin{definition}
Let $\Gamma\subset S^3$ be a planar graph with no $1$- and $2$-valent vertices and $\Gamma'$ be a desingularization of $\Gamma.$ Let $I$ be the set of edges of $\Gamma$ and $I'$ be the set of edges of $\Gamma'$ so that $I\subset I'$ in a natural way, and let $V'$ be the set of vertices of $\Gamma'.$ Then the $r$-th \emph{Yokota invariant} of $\left(\Gamma,a_I\right)$ is
\begin{displaymath}
\mathrm Y_r(\Gamma,a_I)= \sum_{a_{I'}}\frac{\prod_{i\in I'\setminus I} \langle \cp{\includegraphics[width=0.5cm]{circle}}, a_i\rangle}{\prod_{v\in V'} \langle \cp{\includegraphics[width=0.5cm]{theta}}, (a_{v_1},a_{v_2},a_{v_3})\rangle}\langle \Gamma',a_{I'}\rangle^2,
\end{displaymath}
where the sum is over all $r$-admissible colorings $a_{I'}$ of $I'$ extending $a_I,$ and $(v_1,v_2,v_3)$ are the indices of the edges of $\Gamma'$ sharing $v.$ In particular, \begin{displaymath}
\mathrm Y_r(\cp{\includegraphics[width=0.5cm]{Yokota}},(a_1,\dots, a_6))= \bigg|\begin{matrix} a_1 & a_2 & a_3 \\ a_4 & a_5 & a_6 \end{matrix} \bigg|^2.
\end{displaymath}
\end{definition}
\begin{remark}
If $\Gamma$ has at least a vertex of valence greater than $3,$ then there are many different desingularizations; but $\mathrm Y_r(\Gamma,a_I)$ is independent of the choice of desingularization (\cite[Proposition 4.3]{Y}).
\end{remark}
\subsection{Discrete Fourier transforms}\label{pdft}
The discrete Fourier transforms of the Yokota invariants were introduced in \cite{BAR}; they were later shown to be a particular case of a general construction for modular tensor categories (see \cite[Section 1]{LIU} and references therein). They were used to prove the Turaev-Viro volume conjecture for a certain family of manifolds in \cite{BEL}, using a weak form of the Poisson Summation Formula; later \cite{WY2} established a strong version of the Poisson Summation Formula.
For simplicity we only deal with planar graphs in $S^3;$ and the definition could be extended to any graph in any closed, oriented $3$-manifold without any extra difficulty.
\begin{definition}
Let $\Gamma$ be a planar graph in $S^3,$ $(I,J)$ be a partition of the edges of $\Gamma,$ and $(b_I, a_J)$ be a coloring of the edges of $\Gamma.$ The \emph{discrete Fourier transform} of the Yokota invariant of $\Gamma$ with respect to $(b_I,a_J)$ is defined as
$$\mathrm{ \widehat {Y}}_r\big(\Gamma, b_I; a_J\big)=\sum_{(a_i)_{i\in I}}\prod_{i\in I} \mathrm{H}(a_i,b_i)\mathrm Y_r(\Gamma,a_I,a_J).$$
\end{definition}
\iffalse
\begin{remark}
For the remainder of this article we will only be interested in the case of $\Gamma$ the tetrahedral graph; to ease notation, we will simply denote $\mathrm{\widehat {Y}}_r(b_I;a_J)$ for $\mathrm{\widehat {Y}}_r(\cp{\includegraphics[width=0.5cm]{Yokota}},b_I;a_J).$
\end{remark}\fi
\begin{proposition}[Duality of the Fourier transform of the $6j$-symbol]\label{prop:duality}
Let $(I,J)$ be any partition of the edges of the tetrahedral graph. Then
\begin{displaymath}
\mathrm{\widehat {Y}}_r(b_I;a_J)=\bigg(\frac{ r}{2\sin^2\big(\frac{2\pi}{r}\big)}\bigg)^{3-\lvert I\rvert} \mathrm{\widehat {Y}}_r(a_J;b_I).
\end{displaymath}
\end{proposition}
\begin{proof}
The case of $J=\varnothing$ was proven in \cite{BAR}. The general case can be reduced to the case of $J=\varnothing$ using the fact that
\begin{displaymath}
\sum_{i=0}^{r-2}\sum_{j=0}^{r-2}\mathrm H(k,i)\mathrm H(j,l)= \frac{2\sin^2\big(\frac{2\pi}{r}\big)}{r} \delta_{kl}.
\end{displaymath}
\end{proof}
\subsection{Dilogarithm and Lobachevsky functions}
Let $\log:\mathbb C\setminus (-\infty, 0]\to\mathbb C$ be the standard logarithm function defined by
$$\log z=\log|z|+\sqrt{-1}\cdot\arg z$$
with $-\pi<\arg z<\pi.$
The dilogarithm function $\mathrm{Li}_2: \mathbb C\setminus (1,\infty)\to\mathbb C$ is defined by
$$\mathrm{Li}_2(z)=-\int_0^z\frac{\log (1-u)}{u}du$$
where the integral is along any path in $\mathbb C\setminus (1,\infty)$ connecting $0$ and $z,$ which is holomorphic in $\mathbb C\setminus [1,\infty)$ and continuous in $\mathbb C\setminus (1,\infty).$
The dilogarithm function satisfies the following properties (see eg. Zagier\,\cite{Z}).
\begin{enumerate}[(1)]
\item for any $z\in \mathbb{C}\setminus(1,\infty),$\begin{equation}\label{Li2}
\mathrm{Li}_2\Big(\frac{1}{z}\Big)=-\mathrm{Li}_2(z)-\frac{\pi^2}{6}-\frac{1}{2}\big(\log(-z)\big)^2.
\end{equation}
\item In the unit disk $\big\{z\in\mathbb C\,\big|\,|z|<1\big\},$
\begin{equation}\label{Li1}
\mathrm{Li}_2(z)=\sum_{n=1}^\infty\frac{z^n}{n^2},
\end{equation}
\item On the unit circle $\big\{ z=e^{2\sqrt{-1}\theta}\,\big|\,0 \leqslant \theta\leqslant\pi\big\},$
\begin{equation}\label{dilogLob}
\mathrm{Li}_2(e^{2\sqrt{-1}\theta})=\frac{\pi^2}{6}+\theta(\theta-\pi)+2\sqrt{-1}\cdot\Lambda(\theta).
\end{equation}
\end{enumerate}
Here $\Lambda:\mathbb R\to\mathbb R$ is the Lobachevsky function defined by
$$\Lambda(\theta)=-\int_0^\theta\log|2\sin t|dt,$$
which is an odd function of period $\pi$ (see eg. Thurston's notes\,\cite[Chapter 7]{T}).
\subsection{Quantum dilogarithm functions}
The following variant of Faddeev's quantum dilogarithm functions\,\cite{F, FKV} will play a key role in the proof of the main result.
Let $r\geqslant 3$ be an odd integer. Then the following contour integral
\begin{equation}
\varphi_r(z)=\frac{4\pi \sqrt{-1}}{r}\int_{\Omega}\frac{e^{(2z-\pi)x}}{4x \sinh (\pi x)\sinh (\frac{2\pi x}{r})}\ dx
\end{equation}
defines a holomorphic function on the domain $$\Big\{z\in \mathbb C \ \Big|\ -\frac{\pi}{r}<\mathrm{Re}z <\pi+\frac{\pi}{r}\Big\},$$
where the contour is
$$\Omega=\big(-\infty, -\epsilon\big]\cup \big\{z\in \mathbb C\ \big||z|=\epsilon, \mathrm{Im}z>0\big\}\cup \big[\epsilon,\infty\big),$$
for some $\epsilon\in(0,1).$
Note that the integrand has poles at $\sqrt{-1} n,$ $n\in\mathbb Z,$ and the choice of $\Omega$ is to avoid the pole at $0.$
\\
The function $\varphi_r(z)$ satisfies the following fundamental properties; their proof can be found in \cite[Lemma 2.1]{WY}.
\begin{lemma}
\begin{enumerate}[(1)]
\item For $z\in\mathbb C$ with $0<\mathrm{Re}z<\pi,$
\begin{equation}\label{fund}
1-e^{2 \sqrt{-1} z}=e^{\frac{r}{4\pi \sqrt{-1}}\Big(\varphi_r\big(z-\frac{\pi}{r}\big)-\varphi_r\big(z+\frac{\pi}{r}\big)\Big)}.
\end{equation}
\item For $z\in\mathbb C$ with $-\frac{\pi}{r}<\mathrm{Re}z<\frac{\pi}{r},$
\begin{equation}\label{f2}
1+e^{r\sqrt{-1}z}=e^{\frac{r}{4\pi \sqrt{-1}}\Big(\varphi_r(z)-\varphi_r\big(z+\pi\big)\Big)}.
\end{equation}
\end{enumerate}
\end{lemma}
Using (\ref{fund}) and (\ref{f2}), for $z\in\mathbb C$ with $\pi+\frac{2(n-1)\pi}{r}< \mathrm{Re}z< \pi+\frac{2n\pi}{r},$ we can define $\varphi_r(z)$ inductively by the relation
\begin{equation}\label{extension}
\prod_{k=1}^n\Big(1-e^{2 \sqrt{-1} \big(z-\frac{(2k-1)\pi}{r}\big)}\Big)=e^{\frac{r}{4\pi \sqrt{-1}}\Big(\varphi_r\big(z-\frac{2n\pi}{r}\big)-\varphi_r(z)\Big)},
\end{equation}
extending $\varphi_r(z)$ to a meromorphic function on $\mathbb C.$ The poles of $\varphi_r(z)$ have the form $(a+1)\pi+\frac{b\pi}{r}$ or $-a\pi-\frac{b\pi}{r}$ for all nonnegative integer $a$ and positive odd integer $b.$
Let $q=e^{\frac{2\pi \sqrt{-1}}{r}},$
and let $$(q)_n=\prod_{k=1}^n(1-q^{2k}).$$
\begin{lemma}\label{fact}
\begin{enumerate}[(1)]
\item For $0\leqslant n \leqslant r-2,$
\begin{equation}
(q)_n=e^{\frac{r}{4\pi \sqrt{-1}}\Big(\varphi_r\big(\frac{\pi}{r}\big)-\varphi_r\big(\frac{2\pi n}{r}+\frac{\pi}{r}\big)\Big)}.
\end{equation}
\item For $\frac{r-1}{2}\leqslant n \leqslant r-2,$
\begin{equation}
(q)_n=2e^{\frac{r}{4\pi \sqrt{-1}}\Big(\varphi_r\big(\frac{\pi}{r}\big)-\varphi_r\big(\frac{2\pi n}{r}+\frac{\pi}{r}-\pi\big)\Big)}.
\end{equation}
\end{enumerate}
\end{lemma}
Let $\{n\}!=\prod_{k=1}^n\{k\}.$ Then
$$\{n\}!=(-1)^nq^{-\frac{n(n+1)}{2}}(q)_n,$$ and
as a consequence of Lemma \ref{fact}, we have
\begin{lemma}\label{factorial}
\begin{enumerate}[(1)]
\item For $0\leqslant n \leqslant r-2,$
\begin{equation}
\{n\}!=e^{\frac{r}{4\pi \sqrt{-1}}\Big(-2\pi\big(\frac{2\pi n}{r}\big)+\big(\frac{2\pi}{r}\big)^2(n^2+n)+\varphi_r\big(\frac{\pi}{r}\big)-\varphi_r\big(\frac{2\pi n}{r}+\frac{\pi}{r}\big)\Big)}.
\end{equation}
\item For $\frac{r-1}{2}\leqslant n \leqslant r-2,$
\begin{equation} \label{move}
\{n\}!=2e^{\frac{r}{4\pi \sqrt{-1}}\Big(-2\pi\big(\frac{2\pi n}{r}\big)+\big(\frac{2\pi }{r}\big)^2(n^2+n)+\varphi_r\big(\frac{\pi}{r}\big)-\varphi_r\big(\frac{2\pi n}{r}+\frac{\pi}{r}-\pi\big)\Big)}.
\end{equation}
\end{enumerate}
\end{lemma}
We consider (\ref{move}) because there are poles in $(\pi,2\pi),$ and to avoid the poles we move the variables to $(0,\pi)$ by subtracting $\pi.$
The function $\varphi_r(z)$ and the dilogarithm function are closely related as follows.
\begin{lemma}\label{converge} \begin{enumerate}[(1)]
\item For every $z$ with $0<\mathrm{Re}z<\pi,$
\begin{equation}\label{conv1}
\varphi_r(z)=\mathrm{Li}_2(e^{2\sqrt{-1}z})+\frac{2\pi^2e^{2\sqrt{-1}z}}{3(1-e^{2\sqrt{-1}z})}\frac{1}{r^2}+O\Big(\frac{1}{r^4}\Big).
\end{equation}
\item For every $z$ with $0<\mathrm{Re}z<\pi,$
\begin{equation}\label{conv2}
\varphi_r'(z)=-2\sqrt{-1}\cdot\log(1-e^{2\sqrt{-1}z})+O\Big(\frac{1}{r^2}\Big).
\end{equation}
\item \cite[Formula (8)(9)]{O2}
$$\varphi_r\Big(\frac{\pi}{r}\Big)=\mathrm{Li}_2(1)+\frac{2\pi\sqrt{-1}}{r}\log\Big(\frac{r}{2}\Big)-\frac{\pi^2}{r}+O\Big(\frac{1}{r^2}\Big).$$
\end{enumerate}\end{lemma}
\section{The geometry of quantum \texorpdfstring{$6j$}{6j}-symbols}
\begin{definition} An $r$-admissible $6$-tuple $(a_1,\dots,a_6)$ is of the \emph{hyperideal type} if for $\{i,j,k\}=\{1,2,3\},$ $\{1,5,6\},$ $\{2,4,6\}$ and $\{3,4,5\},$
\begin{enumerate}[(1)]
\item $0\leqslant a_i+a_j-a_k<r-2,$
\item $r-2<a_i+a_j+a_k\leqslant 2(r-2),$
\item $a_i+a_j+a_k$ is even.
\end{enumerate}
\end{definition}
As a consequence of Lemma \ref{factorial} we have
\begin{proposition}\label{6jqd} The quantum $6j$-symbol at the root of unity $q=e^{\frac{2\pi \sqrt{-1}}{r}}$ can be computed as
$$\bigg|
\begin{matrix}
a_1 & a_2 & a_3 \\
a_4 & a_5 & a_6
\end{matrix} \bigg|=\frac{\{1\}}{2}\sum_{k=\max\{T_1,T_2,T_3,T_4\}}^{\min\{Q_1,Q_2,Q_3,r-2\}}e^{\frac{r}{4\pi \sqrt{-1}}U_r\big(\frac{2\pi a_1}{r},\dots,\frac{2\pi a_6}{r},\frac{2\pi k}{r}\big)},$$
where $U_r$ is defined as follows. If $(a_1,\dots,a_6)$ is of hyperideal type, then
\begin{equation}\label{termwithr}
\begin{split}
U_r(\alpha_1,\dots,\alpha_6,\xi)=&\pi^2-\Big(\frac{2\pi}{r}\Big)^2+\frac{1}{2}\sum_{i=1}^4\sum_{j=1}^3(\eta_j-\tau_i)^2-\frac{1}{2}\sum_{i=1}^4\Big(\tau_i+\frac{2\pi}{r}-\pi\Big)^2\\
&+\Big(\xi+\frac{2\pi}{r}-\pi\Big)^2-\sum_{i=1}^4(\xi-\tau_i)^2-\sum_{j=1}^3(\eta_j-\xi)^2\\
&-2\varphi_r\Big(\frac{\pi}{r}\Big)-\frac{1}{2}\sum_{i=1}^4\sum_{j=1}^3\varphi_r\Big(\eta_j-\tau_i+\frac{\pi}{r}\Big)+\frac{1}{2}\sum_{i=1}^4\varphi_r\Big(\tau_i-\pi+\frac{3\pi}{r}\Big)\\
&-\varphi_r\Big(\xi-\pi+\frac{3\pi}{r}\Big)+\sum_{i=1}^4\varphi_r\Big(\xi-\tau_i+\frac{\pi}{r}\Big)+\sum_{j=1}^3\varphi_r\Big(\eta_j-\xi+\frac{\pi}{r}\Big),\\
\end{split}
\end{equation}
where $\alpha_i=\frac{2\pi a_i}{r}$ for $i=1,\dots,6$ and $\xi=\frac{2\pi k}{r},$ $\tau_1=\frac{\alpha_1+\alpha_2+\alpha_3}{2},$ $\tau_2=\frac{\alpha_1+\alpha_5+\alpha_6}{2},$ $\tau_3=\frac{\alpha_2+\alpha_4+\alpha_6}{2}$ and $\tau_4=\frac{\alpha_3+\alpha_4+\alpha_5}{2},$ $\eta_1=\frac{\alpha_1+\alpha_2+\alpha_4+\alpha_5}{2},$ $\eta_2=\frac{\alpha_1+\alpha_3+\alpha_4+\alpha_6}{2}$ and $\eta_3=\frac{\alpha_2+\alpha_3+\alpha_5+\alpha_6}{2}.$
If $(a_1,\dots,a_6)$ is not of the hyperideal type, then $U_r$ will be changed according to Lemma \ref{factorial}.
\end{proposition}
\begin{definition} A $6$-tuple $(\alpha_1,\dots,\alpha_6)\in [0,2\pi]^6$ is of the \emph{hyperideal type} if
\begin{enumerate}[(1)]
\item $0\leqslant \alpha_i+\alpha_j-\alpha_k\leqslant 2\pi,$
\item $2\pi\leqslant \alpha_i+\alpha_j+\alpha_k\leqslant 4\pi.$
\end{enumerate}
\end{definition}
We notice that the six numbers $|\pi-\alpha_1|,\dots,|\pi-\alpha_6|$ are the dihedral angles of an ideal or a hyperideal tetrahedron if and only if $(\alpha_1,\dots,\alpha_6)$ is of the hyperideal type.
By Lemma \ref{converge}, $U_r=U-\frac{4\pi\sqrt{-1}}{r}\log\big(\frac{r}{2}\big)+O(\frac{1}{r}),$ where $U$ is defined by
\begin{equation}\label{term}
\begin{split}
U(\alpha_1,\dots,\alpha_6,\xi)=&\pi^2+\frac{1}{2}\sum_{i=1}^4\sum_{j=1}^3(\eta_j-\tau_i)^2-\frac{1}{2}\sum_{i=1}^4(\tau_i-\pi)^2\\
&+(\xi-\pi)^2-\sum_{i=1}^4(\xi-\tau_i)^2-\sum_{j=1}^3(\eta_j-\xi)^2\\
&-2\mathit{Li}_2(1)-\frac{1}{2}\sum_{i=1}^4\sum_{j=1}^3\mathit{Li}_2\big(e^{2i(\eta_j-\tau_i)}\big)+\frac{1}{2}\sum_{i=1}^4\mathit{Li}_2\big(e^{2i(\tau_i-\pi)}\big)\\
&-\mathit{Li}_2\big(e^{2i(\xi-\pi)}\big)+\sum_{i=1}^4\mathit{Li}_2\big(e^{2i(\xi-\tau_i)}\big)+\sum_{j=1}^3\mathit{Li}_2\big(e^{2i(\eta_j-\xi)}\big)\\
\end{split}
\end{equation}
on the region $\mathrm{B_{H,\mathbb C}}$ consisting of $(\alpha_1,\dots,\alpha_6,\xi)\in\mathbb C^7$ such that $(\mathrm{Re}(\alpha_1),\dots,\mathrm{Re}(\alpha_6))$ is of the hyperideal type and $\max\{\mathrm{Re}(\tau_i)\}\leqslant \mathrm{Re}(\xi)\leqslant \min\{\mathrm{Re}(\eta_j), 2\pi\}.$
Let
$$\mathrm{B_H}=\mathrm{B_{H,\mathbb C}}\cap \mathbb R^7.$$
Then by (\ref{dilogLob}), for $(\alpha_1,\dots,\alpha_6,\xi)\in\mathrm{B_H},$
\begin{equation}\label{UV}
U(\alpha_1,\dots,\alpha_6,\xi)=2\pi^2+2\sqrt{-1}\cdot V(\alpha_1,\dots,\alpha_6,\xi)
\end{equation}
for $V:\mathrm{B_H}\to\mathbb R$ defined by
\begin{equation}\label{V}
\begin{split}
V(\alpha_1,\dots,\alpha_6,\xi)=\,&\delta(\alpha_1,\alpha_2,\alpha_3)+\delta(\alpha_1,\alpha_5,\alpha_6)+\delta(\alpha_2,\alpha_4,\alpha_6)+\delta(\alpha_3,\alpha_4,\alpha_5)\\
&-\Lambda(\xi)+\sum_{i=1}^4\Lambda(\xi-\tau_i)+\sum_{j=1}^3\Lambda(\eta_j-\xi)
\end{split}
\end{equation}
with $\delta$ defined by
$$\delta(x,y,z)=-\frac{1}{2}\Lambda\Big(\frac{x+y-z}{2}\Big)-\frac{1}{2}\Lambda\Big(\frac{y+z-x}{2}\Big)-\frac{1}{2}\Lambda\Big(\frac{z+x-y}{2}\Big)+\frac{1}{2}\Lambda\Big(\frac{x+y+z}{2}\Big).$$
As a consequence, we have
\begin{equation}\label{pU}
\begin{split}
\frac{\partial U}{\partial \xi}=2\sqrt{-1}\cdot\frac{\partial V}{\partial \xi}=2\sqrt{-1}\cdot\log\frac{\sin(-\xi)\sin(\eta_1-\xi)\sin(\eta_2-\xi)\sin(\eta_3-\xi)}{\sin(\xi-\tau_1)\sin(\xi-\tau_2)\sin(\xi-\tau_3)\sin(\xi-\tau_4)},
\end{split}
\end{equation}
and
\begin{equation}\label{pUa}
\begin{split}
\frac{\partial U}{\partial \alpha_1}=2\sqrt{-1}\cdot\frac{\partial V}{\partial \alpha_1}=&\frac{\sqrt{-1}}{2}\cdot\log\frac{\sin(\frac{\alpha_1+\alpha_2-\alpha_3}{2})\sin(\frac{\alpha_1+\alpha_3-\alpha_2}{2})\sin(\frac{\alpha_1+\alpha_5-\alpha_6}{2})\sin(\frac{\alpha_1+\alpha_6-\alpha_5}{2})}{\sin(\frac{\alpha_2+\alpha_3-\alpha_1}{2})\sin(\frac{\alpha_1+\alpha_2+\alpha_3}{2})\sin(\frac{\alpha_5+\alpha_6-\alpha_1}{2})\sin(\frac{\alpha_1+\alpha_5+\alpha_6}{2})}\\
&+\sqrt{-1}\cdot\log\frac{\sin(\xi-\tau_1)\sin(\xi-\tau_2)}{\sin(\eta_1-\xi)\sin(\eta_2-\xi)}.
\end{split}
\end{equation}
A result of Costantino\,\cite{C} shows that for each $\alpha=(\alpha_1,\dots,\alpha_6)$ of the hyperideal type, there exists a unique $\xi_{\alpha}$ so that $(\alpha,\xi_\alpha)\in\mathrm{B_H}$ and
$$\frac{\partial V(\alpha,\xi)}{\partial \xi}\Big|_{\xi=\xi_\alpha}=0.$$
Indeed, oen can prove that for each $\alpha,$ $V$ is strictly concave down in $\xi$ with derivatives $\pm\infty$ at the boundary points of the interval of $\xi,$ hence there is a unique critical point which is the absolute maximum. Moreover, he shows that by the Murakami-Yano formula\,\cite{MY,U},
\begin{equation}\label{VV}
V(\alpha,\xi_\alpha)=\mathrm{Vol}(\Delta_{|\pi-\alpha|}),
\end{equation}
the volume of the hyperideal tetrahedron with dihedral angles $|\pi-\alpha_1,|,\dots, |\pi-\alpha_6|.$
As a consequence of (\ref{UV}), we have on $\mathrm{B_H}$
\begin{equation}\label{Costantino}
\frac{\partial U(\alpha,\xi)}{\partial \xi}\Big|_{\xi=\xi_\alpha}=0.
\end{equation}
Let $u_i=e^{\sqrt{-1}\alpha_i}$ for $i=1,\dots,6$ and let $z=e^{-2\sqrt{-1}\xi}.$ Then a direct computation shows that
\begin{equation}\label{pUC}
\frac{\partial U}{\partial \xi}=2\sqrt{-1}\cdot\log\frac{(1-z)(1-zu_1u_2u_4u_5)(1-zu_1u_3u_4u_6)(1-zu_2u_3u_5u_6)}{(1-zu_1u_2u_3)(1-zu_1u_5u_6)(1-zu_2u_4u_6)(1-zu_3u_4u_5)}\quad\quad(\mathrm{mod}\ 4\pi),
\end{equation}
and
\begin{equation}\label{pUaC}
\begin{split}
\frac{\partial U}{\partial \alpha_1}=&\frac{\sqrt{-1}}{2}\cdot\log\frac{(1-u_1u_2u_3^{-1})(1-u_1u_2^{-1}u_3)(1-u_1u_5u_6^{-1})(1-u_1u_5^{-1}u_6)}{u_1^4(1-u_1^{-1}u_2u_3)(1-u_1^{-1}u_2^{-1}u_3^{-1})(1-u_1^{-1}u_5u_6)(1-u_1^{-1}u_5^{-1}u_6^{-1})}\\
&+\sqrt{-1}\cdot\log\frac{u_4(1-zu_1u_2u_3)(1-zu_1u_5u_6)}{(1-zu_1u_2u_4u_5)(1-zu_1u_3u_4u_6)}\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad(\mathrm{mod}\ \pi).
\end{split}
\end{equation}
For a fixed $\alpha$ so that $\mathrm{Re}(\alpha)$ is of the hyperideal type, consider the function $U_\alpha$ of $\xi$ defined by $U_\alpha(\xi)=U(\alpha,\xi).$ From (\ref{pUC}), if $\xi$ is a critical point of $U_\alpha,$ then as a necessary condition
$$\frac{(1-z)(1-zu_1u_2u_4u_5)(1-zu_1u_3u_4u_6)(1-zu_2u_3u_5u_6)}{(1-zu_1u_2u_3)(1-zu_1u_5u_6)(1-zu_2u_4u_6)(1-zu_3u_4u_5)}=1,$$
which is equivalent to the following quadratic equation
\begin{equation}\label{qe}
Az^2+Bz+C=0,
\end{equation}
where
\begin{equation*}
\begin{split}
A=&u_1u_4+u_2u_5+u_3u_6-u_1u_2u_6-u_1u_3u_5-u_2u_3u_4-u_4u_5u_6+u_1u_2u_3u_4u_5u_6,\\
B=&-\Big(u_1-\frac{1}{u_1}\Big)\Big(u_4-\frac{1}{u_4}\Big)-\Big(u_2-\frac{1}{u_2}\Big)\Big(u_5-\frac{1}{u_5}\Big)-\Big(u_3-\frac{1}{u_3}\Big)\Big(u_6-\frac{1}{u_6}\Big),\\
C=&\frac{1}{u_1u_4}+\frac{1}{u_2u_5}+\frac{1}{u_3u_6}-\frac{1}{u_1u_2u_6}-\frac{1}{u_1u_3u_5}-\frac{1}{u_2u_3u_4}-\frac{1}{u_4u_5u_6}+\frac{1}{u_1u_2u_3u_4u_5u_6}.
\end{split}
\end{equation*}
Let
\begin{equation}\label{za}
z_\alpha=e^{-2\sqrt{-1}\xi(\alpha)}=\frac{-B+\sqrt{B^2-4AC}}{2A}
\end{equation}
and
$$z^*_\alpha=e^{-2\sqrt{-1}\xi^*(\alpha)}=\frac{-B-\sqrt{B^2-4AC}}{2A}.$$
Since the region of $\alpha$ is simply connected, we can choose the branch of $\sqrt{B^2-4AC}$ by analytic continuation.
Then we have
$$\frac{\partial U_\alpha}{\partial \xi}\Big|_{\xi=\xi(\alpha)}=4k\pi \quad\text{and}\quad\frac{\partial U_\alpha}{\partial \xi}\Big|_{\xi=\xi^*(\alpha)}=4k^*\pi $$
for some integers $k$ and $k^*.$
A direct computation shows that if $\alpha_1=\alpha_2=\cdots=\alpha_6=\pi,$ then $\xi(\alpha)=\frac{7\pi}{4}\in\big(\frac{3\pi}{2},2\pi\big)=(\max\{\tau_i\},\min\{\eta_j,2\pi\})$ and $\xi^*(\alpha)=\frac{5\pi}{4}\notin(\frac{3\pi}{2},2\pi).$ Hence for $\alpha$ real and of the hyperideal type, $\xi(\alpha)$ coincides with $\xi_\alpha$ in Costantino's result, and by (\ref{Costantino}) and the continuity of $\frac{\partial U}{\partial \xi},$ we have
$$\frac{\partial U_\alpha}{\partial \xi}\Big|_{\xi=\xi(\alpha)}=0$$
for any $\alpha$ so that $\mathrm{Re}(\alpha)$ is of the hyperideal type, ie, $\xi(\alpha)$ is a critical point of $U_\alpha.$
\begin{remark}
At this point, we do not know whether for any $\alpha$ so that $\mathrm{Re}(\alpha)$ is of the hyperideal type, $(\alpha,\xi(\alpha))$ always lies in $\mathrm{B_{H,\mathbb C}}.$ In Section \ref{Asy}, we will show that it does when $\theta_1,\dots,\theta_6$ are sufficiently small.
\end{remark}
For $\alpha\in\mathbb C^6$ so that $(\alpha,\xi(\alpha))\in\mathrm{B_{H,\mathbb C}},$ we define
\begin{equation}\label{W}
W(\alpha)=U(\alpha,\xi(\alpha)).
\end{equation}
Then we have the key result of this section.
\begin{theorem}\label{co-vol} For a partition $(I,J)$ of $\{1,\dots,6\}$ and a fixed $(\theta_j)_{j\in J},$
$$W\big((\pi\pm \sqrt{-1} l_i)_{i\in I},(\pi\pm \theta_j)_{j\in J}\big)=2\pi^2+2\sqrt{-1}\cdot\mathrm{Cov}\big((l_i)_{i\in I}\big)$$
for all $(l_i)_{i\in I}\in \mathbb R^I_{>0},$ where $\mathrm{Cov}$ is the co-volume function defined in Definition \ref{cov}.
\end{theorem}
\begin{proof} The proof follows the argument in \cite{MY, U, KM}: the key step is proving that $W$ satisfies the same differential identities as those of $\textrm{Cov}$ in Lemma \ref{Sch}. Without loss of generality assume that $1\in I.$ Let $W^*(\alpha)=U(\alpha,\xi^*(\alpha))-4k^*\pi \xi^*(\alpha).$ Then we have
\begin{equation}\label{former}
\frac{d W}{d \alpha_1}=\frac{\partial U}{\partial \alpha_1}\Big|_{\xi=\xi(\alpha)}+\frac{\partial U}{\partial \xi}\Big|_{\xi=\xi(\alpha)}\cdot\frac{\partial \xi(\alpha)}{\partial \alpha_1}=\frac{\partial U}{\partial \alpha_1}\Big|_{\xi=\xi(\alpha)},\end{equation}
and
\begin{equation}\label{latter}
\frac{d W^*}{d \alpha_1}=\frac{\partial U}{\partial \alpha_1}\Big|_{\xi=\xi^*(\alpha)}+\frac{\partial U-4k^*\pi i\xi}{\partial \xi}\Big|_{\xi=\xi^*(\alpha)}\cdot\frac{\partial \xi(\alpha)}{\partial \alpha_1}=\frac{\partial U}{\partial \alpha_1}\Big|_{\xi=\xi^*(\alpha)}.
\end{equation}
Let
$$F(\alpha)=\frac{1}{2}\big(W(\alpha)-W^*(\alpha)\big).$$
Then by (\ref{pUaC}), (\ref{former}) and (\ref{latter}),
\begin{equation*}
\begin{split}
\frac{d F}{d \alpha_1}=&\frac{\sqrt{-1}}{2}\cdot\log\frac{(1-z_\alpha u_1u_2u_3)(1-z_\alpha u_1u_5u_6)(1-z^*_\alpha u_1u_2u_4u_5)(1-z^*_\alpha u_1u_3u_4u_6)}{(1-z^*_\alpha u_1u_2u_3)(1-z^*_\alpha u_1u_5u_6)(1-z_\alpha u_1u_2u_4u_5)(1-z_\alpha u_1u_3u_4u_6)}\quad\quad(\text{mod}\ \pi)
.\\
\end{split}
\end{equation*}
Let $\mathrm R$ and $\mathrm S$ respectively be the terms in $(1-z_\alpha u_1u_2u_3)(1-z_\alpha u_1u_5u_6)(1-z^*_\alpha u_1u_2u_4u_5)(1-z^*_\alpha u_1u_3u_4u_6)$ not containing and containing $\sqrt{B^2-4AC}.$ Then by a direct computation (see also \cite{MY, U}),
$$\mathrm S =\mathrm Q\big(u_1^{-1}-u_1\big)\sqrt{B^2-4AC}, $$
where
$$\mathrm Q= \frac{1}{4A^2}u_1^2u_4^{-1}(u_4u_5-u_3)(u_3u_4-u_5)(u_2u_4-u_6)(u_4u_6-u_2),$$
and
$$\mathrm R=8\mathrm QG_{34},$$
where $G_{ij}$ is the $ij$-th cofactor of the Gram matrix $G$ in Section \ref{tetra}.
We also have
$$B^2-4AC=16\det G.$$
If $\alpha_1=\pi+\sqrt{-1} l_1,$ then $$\mathrm S=\mathrm Q\big(-2\sinh l_1\cdot\sqrt{16\det G}\big)=-8\mathrm Q\big(\sqrt{G_{34}^2-G_{33}G_{44}}\big).$$
Therefore, we have
\begin{equation*}
\frac{d F}{d \alpha_1}=\frac{\sqrt{-1}}{2}\cdot\log\frac{G_{34}-\sqrt{G_{34}^2-G_{33}c_{44}}}{G_{34}+\sqrt{G_{34}^2-G_{33}G_{44}}} \quad\quad(\text{mod } \pi).
\end{equation*}
Since
$$\cos \theta_1 =\frac{G_{34}}{\sqrt{G_{33}G_{44}}},$$
we have
$$\frac{G_{34}-\sqrt{G_{34}^2-G_{33}c_{44}}}{G_{34}+\sqrt{G_{34}^2-G_{33}G_{44}}}=e^{-2\sqrt{-1}\theta_1}.$$
Then
\begin{equation*}
\frac{d F}{d \alpha_1}=\frac{\sqrt{-1}}{2}\cdot\log e^{-2\sqrt{-1}\theta_1}=\theta_1 \quad\quad(\text{mod } \pi)
\end{equation*}
and
\begin{equation}\label{partialViii}
\frac{\partial F}{\partial l_1}=\sqrt{-1}\cdot\frac{d F}{d \alpha_1}=\sqrt{-1}\theta_1 \quad\quad(\text{mod } \sqrt{-1}\pi ).
\end{equation}
If $\alpha_1=\pi-\sqrt{-1} l_1,$ then
$$\mathrm S=\mathrm Q\big(2\sinh l_1\cdot\sqrt{16\det G}\big)=8\mathrm Q\big(\sqrt{G_{34}^2-G_{33}G_{44}}\big),$$
and
\begin{equation}\label{partialViiii}
\frac{\partial F}{\partial l_1}=-\sqrt{-1}\cdot\frac{d F}{d \alpha_1}=\frac{1}{2}\log\frac{G_{34}+\sqrt{G_{34}^2-G_{33}c_{44}}}{G_{34}-\sqrt{G_{34}^2-G_{33}G_{44}}}=\frac{1}{2}\log e^{2\sqrt{-1}\theta_1}=\sqrt{-1}\theta_1 \quad\quad(\text{mod } \sqrt{-1}\pi).
\end{equation}
Below a direct computation (see also \cite{MY}) shows that
\begin{equation}\label{partialW}
\frac{\partial}{\partial l_1}\big(W(\alpha)+W^*(\alpha)\big)=0 \quad\quad(\text{mod } \sqrt{-1}\pi).
\end{equation}
Indeed, it comes from the following calculation:
$$(1-z_\alpha u_1u_2u_4u_5)(1-z^*_\alpha u_1u_2u_4u_5)=\frac{1}{A}\frac{(u_1u_2u_4u_5)^2}{u_3u_6}\Big(1-\frac{u_3}{u_4u_5}\Big)\Big(1-\frac{u_6}{u_2u_4}\Big)\Big(1-\frac{u_6}{u_1u_5}\Big)\Big(1-\frac{u_3}{u_1u_2}\Big),$$
$$(1-z_\alpha u_1u_3u_4u_6)(1-z^*_\alpha u_1u_3u_4u_6)=\frac{1}{A}\frac{(u_1u_3u_4u_6)^2}{u_2u_5}\Big(1-\frac{u_2}{u_4u_6}\Big)\Big(1-\frac{u_5}{u_3u_4}\Big)\Big(1-\frac{u_2}{u_1u_3}\Big)\Big(1-\frac{u_5}{u_1u_6}\Big),$$
$$(1-z_\alpha u_1u_2u _3)(1-z^*_\alpha u_1u_2u_3)=\frac{1}{A }\frac{(u_1u_2u_3)^2}{u_4u_5u_6}\Big(1-\frac{u_4u_5}{u_3}\Big)\Big(1-\frac{u_4u_6}{u_2}\Big)\Big(1-\frac{u_5u_6}{u_1}\Big)\Big(1-\frac{1}{u_1u_2u_3}\Big),$$ and
$$(1-z_\alpha u_1u_5u_6)(1-z^*_\alpha u_1u_5u_6)=\frac{1}{A }\frac{(u_1u_5u_6)^2}{u_2u_3u_4}\Big(1-\frac{u_2u_4}{u_6}\Big)\Big(1-\frac{u_3u_4}{u_5}\Big)\Big(1-\frac{u_2u_3}{u_1}\Big)\Big(1-\frac{1}{u_1u_5u_6}\Big).$$
These, together with (\ref{pUaC}), (\ref{former}) and (\ref{latter}), imply (\ref{partialW}).
Putting (\ref{partialViii}), (\ref{partialViiii}) and (\ref{partialW}) together, we have
$$\frac{\partial W}{\partial l_1}=\sqrt{-1}\theta_1+\sqrt{-1} k\pi$$
for some integer $k.$
To find the value of $k,$ we take $\alpha_i=\pi$ for all $i\in I,$ ie, $l_i=0$ for all $i\in I.$ Then by (\ref{pUa}), $\frac{\partial W}{\partial l_1}=\sqrt{-1}\cdot\frac{\partial W}{\partial \alpha_1}$ is real. Also, by (\ref{cos1}), if $l_1=0$ then $\theta_1=0.$ This implies that $k=0,$ and
\begin{equation}
\frac{\partial W}{\partial l_1}=\sqrt{-1}\theta_1.
\end{equation}
By exactly the same argument, we have
\begin{equation}\label{Sch2}
\frac{\partial W}{\partial l_i}=\sqrt{-1}\theta_i
\end{equation}
for all $i\in I.$
By (\ref{UV}), (\ref{VV}) and Definition \ref{cov},
$$W\big((\pi)_{i\in I},(\pi\pm\theta_j)_{j\in J}\big)=2\pi^2+2\sqrt{-1}\cdot\mathrm{Vol}(\Delta_{(\mathbf 0;\theta_J)})=2\pi^2+2\sqrt{-1}\cdot\mathrm{Cov}(\mathbf 0).$$
Together with Lemma \ref{Sch} and (\ref{Sch2}), we have the result.
\end{proof}
\begin{remark} Theorem \ref{co-vol} also plays an essential role in the proof of the main theorems of \cite{WY2, Y}.
\end{remark}
\begin{corollary}\label{cv1} Let $\Delta$ be the deeply truncated tetrahedron of type $(I,J)$ with $\{l_i\}_{i\in I}$ the lengths of the edges of deep truncation and $\{\theta_j\}_{j\in J}$ the dihedral angles at the regular edges. Then
$$\mathrm{Vol}(\Delta)=\frac{1}{2}\mathrm{Im}\bigg(W-\sum_{i\in I} l_i\frac{\partial W}{\partial l_i}\bigg)\bigg|_{\big((\pi\pm \sqrt{-1} l_i)_{i\in I},(\pi\pm \theta_j)_{j\in J}\big)}.$$
\end{corollary}
Corollary \ref{cv1} is an immediate consequence of Theorem \ref{co-vol} and Lemma \ref{Sch}. See also \cite{KM} for a different volume formula involving both roots of the quadratic equation (\ref{qe}).
\section{Computation of the discrete Fourier transforms}
\begin{proposition}\label{computation} Let $(I,J)$ be a partition of $\{1,\dots,6\}.$ Then the discrete Fourier transform $\mathrm{\widehat {Y}}_r(b_I; a_J)$ at the root of unity $q=e^{\frac{2\pi \sqrt{-1}}{r}}$ can be computed as
$$\mathrm{\widehat {Y}}_r(b_I; a_J)=\frac{(-1)^{|I|\big(\frac{r}{2}+1\big)}\cdot n(a_J)}{4\{1\}^{|I|-2}}\sum_{a_I,k_1,k_2}\Big(\sum_{\epsilon_I}g_r^{\epsilon_I}(a_I,k_1,k_2)\Big),$$
where $n(a_J)$ is the number of $3$-admissible colorings $c$ such that $c_j \equiv a_j\ (\text{mod } 2)$ for each $j\in J,$ $\epsilon_I=(\epsilon_i)_{i\in I}\in\{1,-1\}^I$ runs over all multi-signs, $a_I=(a_i)_{i\in I}$ runs over all multi-even integers in $\{0,2\dots,r-3\}$ so that the triples $(a_1, a_2, a_3),$ $(a_1, a_5, a_6),$ $(a_2,a_4,a_6)$ and $(a_3,a_4,a_5)$ are $r$-admissible, and $k_1$ and $k_2$ run over all the integers in between $\max\{T_i\}$ and $\min\{Q_j,r-2\}$ with
$$g_r^{\epsilon_I}(a_I,k_1,k_2)=e^{\sum_{i\in I}\epsilon_i\cdot\frac{2\sqrt{-1}\pi(a_i+b_i+1)}{r} +\frac{r}{4\pi \sqrt{-1}}\mathcal W_r^{\epsilon_I}\big(\frac{2\pi a_I}{r},\frac{2\pi k_1}{r},\frac{2\pi k_2}{r}\big)}$$
where $\frac{2\pi a_I}{r}=\big(\frac{2\pi a_i}{r}\big)_{i\in I}$ and
\begin{equation*}
\begin{split}
\mathcal W_r^{\epsilon_{I}}(\alpha_I,\xi_1,\xi_2)=&-\sum_{i\in I}2\epsilon_i(\alpha_i-\pi)(\beta_i-\pi)+U_r(\alpha_1,\alpha_2,\dots,\alpha_6,\xi_1)+U_r(\alpha_1,\alpha_2,\dots,\alpha_6,\xi_2)
\end{split}
\end{equation*}
where $\alpha_I=(\alpha_i)_{i\in I}.$
\end{proposition}
\begin{proof} First, we observe that if we let the summation in the definition of $\mathrm{\widehat {Y}}_r(b_I; a_J)$ be over all multi-even integers $a_I$ instead of multi-integers, then the resulting quantity differs from $\mathrm{\widehat {Y}}_r(b_I; a_J)$ by a factor $n(a_J)$ by \cite[Lemma A.4, Theorem 2.9 and its proof]{DKY}. Next, we observe that
$$(-1)^{a+b}q^{(a+1)(b+1)}=-(-1)^{\frac{r}{2}}q^{\big((a-\frac{r}{2})(b-\frac{r}{2})+a+b+1\big)},$$
and
$$(-1)^{a+b}q^{-(a+1)(b+1)}=(-1)^{\frac{r}{2}}q^{-\big((a-\frac{r}{2})(b-\frac{r}{2})+a+b+1\big)}.$$
As a consequence, we have
\begin{equation*}
\begin{split}
\mathrm H(a_i,b_i)=&\frac{1}{q-q^{-1}}\bigg((-1)^{a_i+b_i}q^{(a_i+1)(b_i+1)}-(-1)^{a_i+b_i}q^{-(a_i+1)(b_i+1)}\bigg)\\
=&\frac{-(-1)^{\frac{r}{2}}}{q-q^{-1}}\bigg(q^{\big((a_i-\frac{r}{2})(b_i-\frac{r}{2})+a_i+b_i+1\big)}+q^{-\big((a_i-\frac{r}{2})(b_i-\frac{r}{2})+a_i+b_i+1\big)}\bigg)\\
=&\frac{(-1)^{\frac{r}{2}+1}}{\{1\}}\sum_{\epsilon_i\in\{-1,1\}}q^{\epsilon_i\big((a_i-\frac{r}{2})(b_i-\frac{r}{2})+a_i+b_i+1\big)}\\
=&\frac{(-1)^{\frac{r}{2}+1}}{\{1\}}\sum_{\epsilon_i\in\{-1,1\}}e^{\epsilon_i\frac{2\sqrt{-1}\pi(a_i+b_i+1)}{r}+\frac{r}{4\pi\sqrt{-1}}\Big(-2\epsilon_i\big(\frac{2\pi a_i}{r}-\pi\big)\big(\frac{2\pi b_i}{r}-\pi\big)\Big)},\\
\end{split}
\end{equation*}
and hence
\begin{equation*}
\begin{split}\prod_{i\in I}\mathrm H(a_i,b_i)=&\frac{(-1)^{|I|\big(\frac{r}{2}+1\big)}}{\{1\}^{|I|}}\sum_{\epsilon_I\in\{-1,1\}^{|I|}}e^{\sum_{i\in I}\epsilon_i\frac{2\sqrt{-1}\pi(a_i+b_i+1)}{r}+\frac{r}{4\pi\sqrt{-1}}\sum_{i\in I}\Big(-2\epsilon_i\big(\frac{2\pi a_i}{r}-\pi\big)\big(\frac{2\pi b_i}{r}-\pi\big)\Big)}\\
=&\frac{(-1)^{|I|\big(\frac{r}{2}+1\big)}}{\{1\}^{|I|}}\sum_{\epsilon_I\in\{-1,1\}^{|I|}}e^{\sum_{i\in I}\epsilon_i\sqrt{-1}(\alpha_i+\beta_i+\frac{2\pi}{r})+\frac{r}{4\pi\sqrt{-1}}\sum_{i\in I}\big(-2\epsilon_i(\alpha_i-\pi)(\beta_i-\pi)\big)}.\\
\end{split}
\end{equation*}
Then the result follows from Proposition \ref{6jqd}.
\end{proof}
We notice that the summation in Proposition \ref{computation} is finite, and to use the Poisson Summation Formula, we need an infinite sum over integral points. To this end, we consider the following regions and a bump function over them.
Let $\alpha_i=\frac{2\pi a_i}{r}$ for $i=1,\dots,6,$ $\beta_i=\frac{2\pi b_i}{r}$ for $i\in I,$ $\xi_s=\frac{2\pi k_s}{r}$ for $s=1,2,$ $\tau_i=\frac{2\pi T_i}{r}$ for $i=1,\dots,4,$ and $\eta_j=\frac{2\pi Q_j}{r}$ for $j=1,2,3.$ For a fixed $(\alpha_j)_{j\in J},$ let
$$\mathrm {D_A}=\Big\{(\alpha_I,\xi_1,\xi_2)\in\mathbb R^{|I|+2}\ \Big|\ (\alpha_1,\alpha_2,\dots,\alpha_6) \text{ is admissible, } \max\{\tau_i\}\leqslant \xi_s\leqslant \min\{\eta_j, 2\pi\}, s=1,2\Big\},$$
and let
$$\mathrm {D_H}=\Big\{(\alpha_I,\xi_1,\xi_2)\in\mathrm {D_A} \ \Big|\ (\alpha_1,\alpha_2,\dots,\alpha_6) \text{ is of the hyperideal type} \Big\}.$$
For a sufficiently small $\delta >0,$ let
$$\mathrm {D_H^\delta}=\Big\{(\alpha_I,\xi_1,\xi_2)\in\mathrm {D_H}\ \Big|\ d((\alpha_I,\xi_1,\xi_2), \partial\mathrm {D_H})>\delta \Big\},$$
where $d$ is the Euclidean distance on $\mathbb R^n.$
We let $\psi:\mathbb R^{|I|+2}\to\mathbb R$ be a $C^{\infty}$-smooth bump function supported on $(\mathrm{D_H}, \mathrm{D_H^\delta}),$ ie, \begin{equation*}
\left \{\begin{array}{rl}
\psi(\alpha_I,\xi_1,\xi_2)=1, & (\alpha_I,\xi_1,\xi_2)\in \overline{\mathrm{D_H^\delta}}\\
0<\psi(\alpha_I,\xi_1,\xi_2)<1, & (\alpha_I,\xi_1,\xi_2)\in \mathrm{D_H}\setminus \overline{\mathrm{D_H^\delta}}\\
\psi(\alpha_I,\xi_1,\xi_2)=0, & (\alpha_I,\xi_1,\xi_2)\notin \mathrm{D_H},\\
\end{array}\right.
\end{equation*}
and let
$$f^{\epsilon_I}_r(a_I,k_1,k_2)=\psi\Big(\frac{2\pi a_I}{r},\frac{2\pi k_1}{r},\frac{2\pi k_2}{r}\Big)g^{\epsilon_I}_r(a_I,k_1,k_2).$$
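For concreteness, the following is a minimal one-dimensional sketch, in Python, of a $C^{\infty}$-smooth bump function of the kind required of $\psi$: it equals $1$ on an inner interval, vanishes outside an outer interval, and takes values strictly between $0$ and $1$ in the transition region. The construction, the function names and the specific intervals are illustrative choices only and are not part of the argument above; analogous cutoff constructions in higher dimensions produce functions with the three properties listed for $\psi.$
\begin{verbatim}
import numpy as np

def smooth_step(t):
    # C-infinity step function: 0 for t <= 0, 1 for t >= 1, smooth in between.
    t = np.clip(np.asarray(t, dtype=float), 0.0, 1.0)
    def f(s):
        # f(s) = exp(-1/s) for s > 0, extended by 0 for s <= 0.
        return np.where(s > 0, np.exp(-1.0 / np.maximum(s, 1e-300)), 0.0)
    return f(t) / (f(t) + f(1.0 - t))

def bump(x, inner=(-1.0, 1.0), outer=(-2.0, 2.0)):
    # Equals 1 on the inner interval, 0 outside the outer interval,
    # and strictly between 0 and 1 in the two transition regions.
    a, b = outer[0], inner[0]      # left transition region [a, b]
    c, d = inner[1], outer[1]      # right transition region [c, d]
    return smooth_step((x - a) / (b - a)) * smooth_step((d - x) / (d - c))

print(bump(np.array([-3.0, -1.5, 0.0, 1.5, 3.0])))   # ~ [0, (0,1), 1, (0,1), 0]
\end{verbatim}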
In Proposition \ref{computation}, the sum is over multi-even integers $a_I.$ On the other hand, to use the Poisson Summation Formula, we need a sum over all integers. For this purpose, we for each $i\in I$ let $a_i=2a_i',$ $a_I'=(a_i')_{i\in I}$ and denote $(2a_i')_{i\in I}$ by $2a_I'.$ Then by Proposition \ref{computation},
$$\mathrm{\widehat {Y}}_r(b_I;a_J)=\frac{(-1)^{|I|\big(\frac{r}{2}+1\big)}\cdot n(a_J)}{4\{1\}^{|I|-2}}\sum_{(a_I',k_1,k_2)\in\mathbb Z^{|I|+2}}\Big(\sum_{\epsilon_I\in\{1,-1\}^I} f_r^{\epsilon_I}\big(2a_I',k_1,k_2\big)\Big)+\text{error term}.$$
Let $$f_r=\sum_{\epsilon_I\in\{1,-1\}^I} f_r^{\epsilon_I}.$$ Then
$$\mathrm{\widehat {Y}}_r(b_I;a_J)=\frac{(-1)^{|I|\big(\frac{r}{2}+1\big)}\cdot n(a_J)}{4\{1\}^{|I|-2}}\sum_{(a_I',k_1,k_2)\in\mathbb Z^{|I|+2}}f_r\big(2a_I',k_1,k_2\big)+\text{error term}.$$
Since $f_r$ is $C^{\infty}$-smooth and equals zero outside of $\mathrm{D_H},$ it is in the Schwartz space on $\mathbb R^{|I|+2}.$ Then by the Poisson Summation Formula (see e.g. \cite[Theorem 3.1]{SS}),
$$\sum_{(a_I',k_1,k_2)\in\mathbb Z^{|I|+2}}f_r\big(2a_I',k_1,k_2\big)=\sum_{(m_I,n_1,n_2)\in\mathbb Z^{|I|+2}}\widehat {f_r}(m_I,n_1,n_2),$$
where $m_I=(m_i)_{i\in I}\in \mathbb Z^I$ and $\widehat f_r(m_I,n_1,n_2)$ is the $(m_I,n_1,n_2)$-th Fourier coefficient of $f_r$ defined by
\begin{equation*}
\begin{split}
\widehat {f_r}(m_I,n_1,n_2)=\int_{\mathbb R^{|I|+2}}&f_r\big(2a_I',k_1,k_2\big)e^{\sum_{i\in I}2\pi \sqrt{-1}m_ia_i'+2\pi \sqrt{-1}n_1k_1+2\pi \sqrt{-1}n_2k_2}da'_Idk_1dk_2,
\end{split}
\end{equation*}
where $da'_I=\prod_{i\in I}da'_i.$
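Before computing these coefficients, we record a quick numerical sanity check of the Poisson Summation Formula itself. The following Python sketch is a generic one-dimensional illustration, with a Gaussian test function and a truncation range that are our own choices and are unrelated to $f_r$: it compares $\sum_{n\in\mathbb Z}f(n)$ with $\sum_{m\in\mathbb Z}\widehat f(m)$ for $f(x)=e^{-\pi t x^2}.$
\begin{verbatim}
import numpy as np

# Poisson summation in one dimension: sum_n f(n) = sum_m fhat(m), where
# fhat(m) = int f(x) exp(-2 pi i x m) dx.  Test function: f(x) = exp(-pi t x^2),
# whose Fourier transform is fhat(m) = exp(-pi m^2 / t) / sqrt(t).
t = 0.3
ns = np.arange(-50, 51)                                  # both tails are negligible
lhs = np.sum(np.exp(-np.pi * t * ns**2))                 # sum of f over the integers
rhs = np.sum(np.exp(-np.pi * ns**2 / t)) / np.sqrt(t)    # sum of fhat over the integers
print(lhs, rhs, abs(lhs - rhs))                          # agree to machine precision
\end{verbatim}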
By a change of variables, and by changing $2a_i'$ back to $a_i,$ the Fourier coefficients can be computed as follows.
\begin{proposition}\label{4.2}
$$\widehat{f_r}(m_I,n_1,n_2)=\sum_{\epsilon_I\in\{1,-1\}^I} \widehat{f^{\epsilon_I}_r}(m_I,n_1,n_2)$$
with
\begin{equation*}
\begin{split}
\widehat{f^{\epsilon_I}_r}(m_I,n_1,n_2)=\frac{r^{|I|+2}}{2^{2|I|+2}\cdot\pi^{|I|+2}}&\int_{\mathrm{D_H}}\psi(\alpha_I,\xi_1,\xi_2)e^{\sum_{i\in I}\epsilon_i\sqrt{-1}(\alpha_i+\beta_i+\frac{2\pi}{r})}\\
&\cdot e^{\frac{r}{4\pi \sqrt{-1}}\big(\mathcal W_r^{\epsilon_I}(\alpha_I,\xi_1,\xi_2)-\sum_{i\in I}2\pi m_i\alpha_i-4\pi n_1\xi_1-4\pi n_2\xi_2\big)}d\alpha_Id\xi_1d\xi_2,
\end{split}
\end{equation*}
where $d\alpha_I=\prod_{i\in I}d\alpha_i$ and
$$\mathcal W_r^{\epsilon_I}(\alpha_I,\xi_1,\xi_2)=-\sum_{i\in I}2\epsilon_i(\alpha_i-\pi)(\beta_i-\pi)+U_r(\alpha_1,\dots,\alpha_6,\xi_1)+U_r(\alpha_1,\dots,\alpha_6,\xi_2).$$
In particular,
\begin{equation*}
\begin{split}
\widehat{f^{\epsilon_I}_r}(0,\dots,0)=\frac{r^{|I|+2}}{2^{2|I|+2}\cdot\pi^{|I|+2}}\int_{\mathrm{D_H}}\psi(\alpha_I,\xi_1,\xi_2)e^{\sum_{i\in I}\epsilon_i\sqrt{-1}(\alpha_i+\beta_i+\frac{2\pi}{r})+\frac{r}{4\pi \sqrt{-1}}\mathcal W_r^{\epsilon_I}(\alpha_I,\xi_1,\xi_2)}d\alpha_Id\xi_1d\xi_2.
\end{split}
\end{equation*}
\end{proposition}
\begin{proposition}\label{Poisson}
$$\mathrm{\widehat {Y}}_r(b_I; a_J)=\frac{(-1)^{|I|\big(\frac{r}{2}+1\big)}\cdot n(a_J)}{4\{1\}^{|I|-2}}\sum_{(m_I,n_1,n_2)\in\mathbb Z^{|I|+2}}\widehat{ f_r}(m_I,n_1,n_2)+\text{error term}.$$
\end{proposition}
We will estimate the leading Fourier coefficients, the non-leading Fourier coefficients and the error term respectively in Sections \ref{leading}, \ref{ot} and \ref{ee}, and prove Theorem \ref{main} in Section \ref{pf}.
\section{Asymptotics}\label{Asy}
\begin{proposition}\label{saddle}
Let $D_{\mathbf z}$ be a region in $\mathbb C^n$ and let $D_{\mathbf a}$ be a region in $\mathbb R^k.$ Let $f(\mathbf z,\mathbf a)$ and $g(\mathbf z,\mathbf a)$ be complex valued functions on $D_{\mathbf z}\times D_{\mathbf a}$ which are holomorphic in $\mathbf z$ and smooth in $\mathbf a.$ For each positive integer $r,$ let $f_r(\mathbf z,\mathbf a)$ be a complex valued function on $D_{\mathbf z}\times D_{\mathbf a}$ holomorphic in $\mathbf z$ and smooth in $\mathbf a.$
For a fixed $\mathbf a\in D_{\mathbf a},$ let $f^{\mathbf a},$ $g^{\mathbf a}$ and $f_r^{\mathbf a}$ be the holomorphic functions on $D_{\mathbf z}$ defined by
$f^{\mathbf a}(\mathbf z)=f(\mathbf z,\mathbf a),$ $g^{\mathbf a}(\mathbf z)=g(\mathbf z,\mathbf a)$ and $f_r^{\mathbf a}(\mathbf z)=f_r(\mathbf z,\mathbf a).$ Suppose $\{\mathbf a_r\}$ is a convergent sequence in $D_{\mathbf a}$ with $\lim_r\mathbf a_r=\mathbf a_0,$ $f_r^{\mathbf a_r}$ is of the form
$$ f_r^{\mathbf a_r}(\mathbf z) = f^{\mathbf a_r}(\mathbf z) + \frac{\upsilon_r(\mathbf z,\mathbf a_r)}{r^2},$$
$\{S_r\}$ is a sequence of embedded real $n$-dimensional closed disks in $D_{\mathbf z}$ sharing the same boundary, and $\mathbf c_r$ is a point on $S_r$ such that $\{\mathbf c_r\}$ is convergent in $D_{\mathbf z}$ with $\lim_r\mathbf c_r=\mathbf c_0.$ If for each $r$
\begin{enumerate}[(1)]
\item $\mathbf c_r$ is a critical point of $f^{\mathbf a_r}$ in $D_{\mathbf z},$
\item $\mathrm{Re}f^{\mathbf a_r}(\mathbf c_r) > \mathrm{Re}f^{\mathbf a_r}(\mathbf z)$ for all $\mathbf z \in S_r\setminus \{\mathbf c_r\},$
\item the Hessian matrix $\mathrm{Hess}(f^{\mathbf a_r})$ of $f^{\mathbf a_r}$ at $\mathbf c_r$ is non-singular,
\item $|g^{\mathbf a_r}(\mathbf c_r)|$ is bounded from below by a positive constant independent of $r,$
\item $|\upsilon_r(\mathbf z, \mathbf a_r)|$ is bounded from above by a constant independent of $r$ on $D_{\mathbf z},$ and
\item the Hessian matrix $\mathrm{Hess}(f^{\mathbf a_0})$ of $f^{\mathbf a_0}$ at $\mathbf c_0$ is non-singular,
\end{enumerate}
then
\begin{equation*}
\begin{split}
\int_{S_r} g^{\mathbf a_r}(\mathbf z) e^{rf_r^{\mathbf a_r}(\mathbf z)} d\mathbf z= \Big(\frac{2\pi}{r}\Big)^{\frac{n}{2}}\frac{g^{\mathbf a_r}(\mathbf c_r)}{\sqrt{-\det\mathrm{Hess}(f^{\mathbf a_r})(\mathbf c_r)}} e^{rf^{\mathbf a_r}(\mathbf c_r)} \Big( 1 + O \Big( \frac{1}{r} \Big) \Big).
\end{split}
\end{equation*}
\end{proposition}
A proof can be found in \cite[Appendix]{WY2}.
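To illustrate the shape of this estimate in the simplest situation, the following Python sketch is a one-dimensional real Laplace-method example, with our own choice of $f$ and $g$ rather than the multi-dimensional complex setting needed above: it compares $\int_S e^{rf(x)}dx$ with the leading term $\sqrt{2\pi/r}\,g(c)e^{rf(c)}/\sqrt{-f''(c)}$ for $f(x)=\cos x-1$ and $g\equiv 1$ on $S=[-1,1],$ whose maximum is at $c=0.$
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# One-dimensional analogue of the estimate:
#   int_S g(x) exp(r f(x)) dx ~ sqrt(2 pi / r) * g(c) * exp(r f(c)) / sqrt(-f''(c)),
# with f(x) = cos(x) - 1 and g = 1 on S = [-1, 1]; the maximum is at c = 0,
# where f(0) = 0 and f''(0) = -1, so the leading term is simply sqrt(2 pi / r).
for r in (10, 100, 1000):
    integral, _ = quad(lambda x: np.exp(r * (np.cos(x) - 1.0)), -1.0, 1.0)
    leading = np.sqrt(2.0 * np.pi / r)
    print(r, integral, leading, abs(integral / leading - 1.0))   # error is O(1/r)
\end{verbatim}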
For fixed $\{\beta_i\}_{i\in I}$ and $\{\alpha_j\}_{j\in J},$ let $\theta_i=|\pi-\beta_i|$ for $i\in I$ and let $\theta_j=|\pi-\alpha_j|$ for $j\in J.$ The function $\mathcal W_r^{\epsilon_I}$ is approximated by the following function
$$\mathcal W^{\epsilon_I}(\alpha_I,\xi_1,\xi_2)=-\sum_{i\in I}2\epsilon_i(\alpha_i-\pi)(\beta_i-\pi)+U(\alpha_1,\dots,\alpha_6,\xi_1)+U(\alpha_1,\dots,\alpha_6,\xi_2).$$
The approximation will be specified in the proof of Proposition \ref{critical}. Notice that $\mathcal W^{\epsilon_I}$ is continuous on
$$\mathrm{D_{H,\mathbb C}}=\big\{(\alpha_I,\xi_1,\xi_2)\in\mathbb C^{|I|+2}\ \big|\ (\mathrm{Re}(\alpha_I),\mathrm{Re}(\xi_1),\mathrm{Re}(\xi_2))\in \mathrm{D_{H}}\big\}$$ and for any $\delta>0$ is analytic on
$$\mathrm{D^\delta_{H,\mathbb C}}=\big\{(\alpha_I,\xi_1,\xi_2)\in\mathbb C^{|I|+2}\ \big|\ (\mathrm{Re}(\alpha_I),\mathrm{Re}(\xi_1),\mathrm{Re}(\xi_2))\in \mathrm{D^\delta_{H}}\big\},$$
where $\mathrm{Re}(\alpha_I)=(\mathrm{Re}(\alpha_i))_{i\in I}.$
In the rest of this paper, we assume that $\theta_1,\dots,\theta_6$ are sufficiently close to $0,$ or equivalently, $\{\beta_i\}_{i\in I}$ and $\{\alpha_j\}_{j\in J}$ are sufficiently close to $\pi.$ In the special case $\beta_i=\alpha_j=\pi$ for all $i\in I$ and $j\in J,$ a direct computation shows that $\xi(\pi,\dots,\pi)=\frac{7\pi}{4}.$ We denote by $\pi_I$ the point $(\pi, \dots, \pi)\in \mathbb C^I.$ For $\delta>0,$ we denote by $\mathrm{D_{\delta,\mathbb C}}$ the $L^1$ $\delta$-neighborhood of $\big(\pi_I,\frac{7\pi}{4},\frac{7\pi}{4}\big)$ in $\mathbb C^{|I|+2},$ that is
$$\mathrm{D_{\delta,\mathbb C}}=\Big\{(\alpha_I,\xi_1,\xi_2)\in \mathbb C^{|I|+2}\ \Big|\ d_{L^1}\Big((\alpha_I,\xi_1,\xi_2),\Big(\pi_I,\frac{7\pi}{4},\frac{7\pi}{4}\Big)\Big)<\delta\Big\},$$
where $d_{L^1}$ is the real $L^1$ norm on $\mathbb C^n$ defined by
$$d_{L^1}(\mathbf x,\mathbf y)=\max_{i\in\{1,\dots,n\}}\{|\mathrm {Re}(x_i)-\mathrm{Re}(y_i)|, |\mathrm {Im}(x_i)-\mathrm{Im}(y_i)| \},$$
where $\mathbf x=(x_1,\dots,x_n)$ and $\mathbf y=(y_1,\dots,y_n).$ We will also consider the region
$$\mathrm{D_{\delta}}=\mathrm{D_{\delta,\mathbb C}}\cap \mathbb R^{|I|+2}.$$
\subsection{Critical points and critical values of \texorpdfstring{$\mathcal W^{\epsilon_I}$}{W}}
Let $\Delta(\theta_I; \theta_J)$ be the deeply truncated tetrahedron of type $(I,J)$ with edges of deep truncation $\{e_i\}_{i\in I}$ and regular edges $\{e_j\}_{j\in J},$ and with $\theta_i$ the dihedral angle at $e_i$ for $i\in I,$ and with $\theta_j$ the dihedral angle at $e_j$ for $j\in J.$
For $i\in I,$ let $l_i$ be the length of $e_i.$
\begin{proposition}\label{crit} Suppose $\{\beta_i\}_{i\in I}$ and $\{\alpha_j\}_{j\in J}$ are sufficiently close to $\pi.$ Let $\theta_i=|\pi-\beta_i|$ for $i\in I$ and let $\theta_j=|\pi-\alpha_j|$ for $j\in J.$ Let $\mu_i=1$ if $\beta_i\geqslant\pi$ and let $\mu_i=-1$ if $\beta_i\leqslant \pi$ for $i\in I.$ Then $\mathcal W^{\epsilon_I}(\alpha_I,\xi_1,\xi_2)$ has a critical point
$$z^{\epsilon_I}=\Big(\big(\pi+\epsilon_i\mu_i\sqrt{-1}l_i\big)_{i\in I}, \xi\big((\pi+\epsilon_i\mu_i\sqrt{-1}l_i)_{i\in I},\alpha_J\big), \xi\big((\pi+\epsilon_i\mu_i\sqrt{-1}l_i)_{i\in I},\alpha_J\big)\Big)$$
in $\mathrm{D_{\delta,\mathbb C}}$ with critical value $$4\pi^2+4\sqrt{-1}\cdot \mathrm{Vol}\big(\Delta(\theta_I;\theta_J)\big).$$
\end{proposition}
\begin{proof}
By (\ref{cos1}), for $i\in I,$ $l_i$ is sufficiently small for a sufficiently small $\theta_i.$ Then by the continuity of $\xi(\alpha_1,\dots,\alpha_6),$ the point $((\pi\pm \sqrt{-1} l_i)_{i\in I},\xi((\pi\pm \sqrt{-1} l_i)_{i\in I},\alpha_J),\xi((\pi\pm \sqrt{-1} l_i)_{i\in I},\alpha_J))$ lies in $\mathrm D_{\delta,\mathbb C}$ for sufficiently small $\theta_1,\dots,\theta_6.$
For any $\alpha_I=(\alpha_i)_{i\in I}$ so that $(\alpha_I,\xi(\alpha_I,\alpha_J),\xi(\alpha_I,\alpha_J))\in\mathrm{D_{H,\mathbb C}},$ and for $s=1,2,$
\begin{equation}\label{partialxi}
\frac{\partial \mathcal W^{\epsilon_I}}{\partial \xi_s}\Big|_{(\alpha_I,\xi(\alpha_I,\alpha_J),\xi(\alpha_I,\alpha_J))}=\frac{\partial U}{\partial \xi}\Big|_{((\alpha_I,\alpha_J),\xi(\alpha_I,\alpha_J))}=\frac{\partial U_{(\alpha_I, \alpha_J)}}{\partial \xi}\Big|_{\xi(\alpha_I, \alpha_J)}=0.
\end{equation}
In particular,
$$\frac{\partial \mathcal W^{\epsilon_I}}{\partial \xi_1}\Big|_{z^{\epsilon_I}}=\frac{\partial \mathcal W^{\epsilon_I}}{\partial \xi_2}\Big|_{z^{\epsilon_I}}=0.$$
Let $\alpha=(\alpha_I,\alpha_J)$ and let $W(\alpha)=U(\alpha,\xi(\alpha))$ be the function defined in (\ref{W}). Then for $i\in I$
$$\frac{\partial W}{\partial \alpha_i}\Big|_\alpha=\frac{\partial U}{\partial \alpha_i}\Big|_{(\alpha,\xi(\alpha))}+\frac{\partial U}{\partial \xi}\Big|_{(\alpha,\xi(\alpha))}\cdot\frac{\partial \xi(\alpha)}{\partial \alpha_i}\Big|_\alpha=\frac{\partial U}{\partial \alpha_i}\Big|_{(\alpha,\xi(\alpha))}.$$
Together with Theorem \ref{co-vol} and Lemma \ref{Sch}, we have
\begin{equation*}
\begin{split}
\frac{\partial U}{\partial \alpha_i}\Big|_{\big(\big((\pi+\epsilon_i\mu_i\sqrt{-1}l_i)_{i\in I},\alpha_J\big),\xi\big((\pi+\epsilon_i\mu_i \sqrt{-1}l_i)_{i\in I},\alpha_J\big)\big)}=&\frac{\partial W}{\partial \alpha_i}\Big|_{\big((\pi+\epsilon_i\mu_i \sqrt{-1}l_i)_{i\in I},\alpha_J\big)}\\
=&-\epsilon_i\mu_i \sqrt{-1}\cdot\frac{\partial W}{\partial l_i}\Big|_{(l_i)_{i\in I}}=\epsilon_i\mu_i \theta_i.
\end{split}
\end{equation*}
Then we have for $i\in I,$
\begin{equation*}
\begin{split}
\frac{\partial \mathcal W^{\epsilon_I}}{\partial \alpha_i}\Big|_{z^{\epsilon_I}}=&-2\epsilon_i(\beta_i-\pi)+2\frac{\partial U}{\partial \alpha_i}\Big|_{\big(\big((\pi+\epsilon_i\mu_i\sqrt{-1}l_i)_{i\in I},\alpha_J\big),\xi\big((\pi+\epsilon_i\mu_i \sqrt{-1}l_i)_{i\in I},\alpha_J\big)\big)}\\
=&2\epsilon_i\big(-(\beta_i-\pi)+\mu_i\theta_i\big)=0,
\end{split}
\end{equation*}
where the last equality comes from that $\mu_i\theta_i=\beta_i-\pi.$
For the critical value, by Theorem \ref{co-vol} and $\mu_i\theta_i=\beta_i-\pi$ again, we have
\begin{equation*}
\begin{split}
\mathcal W^{\epsilon_I}(z^{\epsilon_I})=&-\sum_{i\in I}2\epsilon_i(\sqrt{-1}\epsilon_i\mu_il_i)(\beta_i-\pi)+2\Big(2\pi^2+2\sqrt{-1}\Big(\mathrm{Vol}(\Delta(\theta_I;\theta_J))+\frac{1}{2}\sum_{i\in I}\theta_il_i\Big)\Big)\\
=&4\pi^2+4\sqrt{-1}\cdot \mathrm{Vol}(\Delta(\theta_I;\theta_J))+\sum_{i\in I}2\sqrt{-1}\big(-\mu_i(\beta_i-\pi)+\theta_i\big)l_i\\
=&4\pi^2+4\sqrt{-1}\cdot \mathrm{Vol}(\Delta(\theta_I;\theta_J)).
\end{split}
\end{equation*}
\end{proof}
\subsection{Convexity of \texorpdfstring{$\mathcal W^{\epsilon_I}$}{W}}
\begin{proposition}\label{convexity}
There exists a $\delta_0>0$ such that if all $\{\alpha_j\}_{j\in J}$ are in $(\pi-\delta_0,\pi+\delta_0),$ then for any $\epsilon_I,$ $\mathrm{Im}\mathcal W^{\epsilon_I}(\alpha_I,\xi_1,\xi_2)$ is strictly concave down in $\{\mathrm{Re}(\alpha_i)\}_{i\in I},$ $\mathrm{Re}(\xi_1)$ and $\mathrm{Re}(\xi_2),$ and is strictly concave up in $\{\mathrm{Im}(\alpha_i)\}_{i\in I},$ $\mathrm{Im}(\xi_1)$ and $\mathrm{Im}(\xi_2)$ on $\mathrm{D_{\delta_0,\mathbb C}}.$
\end{proposition}
\begin{proof} We first consider the special case where $\{\alpha_i\}_{i\in I},$ $\xi_1$ and $\xi_2$ are real. In this case,
$$\mathrm{Im}\mathcal W^{\epsilon_I}(\alpha_I,\xi_1,\xi_2)=2V(\alpha_1,\dots,\alpha_6,\xi_1)+2V(\alpha_1,\dots,\alpha_6,\xi_2)$$
with $V(\alpha_1,\dots,\alpha_6,\xi)$ defined in (\ref{V}).
At $\big(\pi,\dots,\pi,\frac{7\pi}{4},\frac{7\pi}{4}\big),$ we have $\frac{\partial ^2\mathrm{Im}\mathcal W^{\epsilon_I}}{\partial \alpha_i^2} =-4$ for $i\in I,$ $\frac{\partial ^2\mathrm{Im}\mathcal W^{\epsilon_I}}{\partial \alpha_i\partial \alpha_{i'}} =-2$ for $i\neq i'\in I,$ $\frac{\partial ^2\mathrm{Im}\mathcal W^{\epsilon_I}}{\partial \alpha_i\partial \xi_s} =4$ for $i\in I$ and $s=1,2,$ $\frac{\partial ^2\mathrm{Im}\mathcal W^{\epsilon_I}}{\partial \xi_s^2} =-16$ for $s=1,2$ and $\frac{\partial ^2\mathrm{Im}\mathcal W^{\epsilon_I}}{\partial \xi_1\partial \xi_2} =0.$ Then a direct computation shows that, at $\big(\pi,\dots,\pi,\frac{7\pi}{4},\frac{7\pi}{4}\big),$ the Hessian matrix
of $\mathrm{Im}\mathcal W^{\epsilon_I}$ is negative definite.
Then by the continuity, there exists a sufficiently small $\delta_0>0$ such that for all $\{\alpha_j\}_{j\in J}$ in $(\pi-\delta_0,\pi+\delta_0)$ and $(\alpha_I,\xi_1,\xi_2)\in \mathrm{D_{\delta_0,\mathbb C}},$ the Hessian matrix of $\mathrm{Im}\mathcal W^{\epsilon_I}$ with respect to $\{\mathrm{Re}(\alpha_i)\}_{i\in I},$ $\mathrm{Re}(\xi_1)$ and $\mathrm{Re}(\xi_2)$ is still negative definite, implying that $\mathrm{Im}\mathcal W^{\epsilon_I}$ is strictly concave down in $\{\mathrm{Re}(\alpha_i)\}_{i\in I},$ $\mathrm{Re}(\xi_1)$ and $\mathrm{Re}(\xi_2)$ on $\mathrm{D_{\delta_0,\mathbb C}}.$ Since $\mathcal W^{\epsilon_I}$ is holomorphic, $\mathrm{Im}\mathcal W^{\epsilon_I}$ is strictly concave up in $\{\mathrm{Im}(\alpha_i)\}_{i\in I},$ $\mathrm{Im}(\xi_1)$ and $\mathrm{Im}(\xi_2)$ on $\mathrm{D_{\delta_0,\mathbb C}}.$
\end{proof}
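The negative definiteness used in the proof above can also be checked numerically. The following Python sketch, which is our own verification with hypothetical helper names, assembles the $(|I|+2)\times(|I|+2)$ matrix of second derivatives listed in the proof and confirms that all of its eigenvalues are negative for every $0\leqslant |I|\leqslant 6.$
\begin{verbatim}
import numpy as np

def hessian_at_center(k):
    # Hessian of Im W^{epsilon_I} at (pi,...,pi, 7pi/4, 7pi/4) for |I| = k,
    # assembled from the second derivatives listed in the proof above.
    H = np.zeros((k + 2, k + 2))
    H[:k, :k] = -2.0 * (np.ones((k, k)) + np.eye(k))   # -4 on the diagonal, -2 off it
    H[:k, k:] = 4.0                                     # alpha_i -- xi_s entries
    H[k:, :k] = 4.0
    H[k:, k:] = -16.0 * np.eye(2)                       # xi block; the xi_1-xi_2 entry is 0
    return H

for k in range(7):
    eigenvalues = np.linalg.eigvalsh(hessian_at_center(k))
    print(k, eigenvalues.max() < 0)   # True for every k: the matrix is negative definite
\end{verbatim}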
\begin{proposition}\label{nonsingular} If all $\{\alpha_j\}_{j\in J}$ are in $(\pi-\delta_0,\pi+\delta_0),$ then
the Hessian matrix $\mathrm{Hess}\mathcal W^{\epsilon_I}$ of $\mathcal W^{\epsilon_I}$ with respect to $\{\alpha_i\}_{i\in I},$ $\xi_1$ and $\xi_2$ is non-singular on $\mathrm{D_{\delta_0,\mathbb C}}.$
\end{proposition}
\begin{proof} By Proposition \ref{convexity}, the imaginary part of $\mathrm{Hess}\mathcal W^{\epsilon_I},$ which is the real part of $\mathrm{Hess}\big(\frac{\mathcal W^{\epsilon_I}}{\sqrt{-1}}\big),$ is negative definite. Then by \cite[Lemma]{L}, $\mathrm{Hess}\mathcal W^{\epsilon_I}$ is non-singular.
\end{proof}
\subsection{Asymptotics of the leading Fourier coefficients}\label{leading}
\begin{proposition}\label{critical}
Suppose $\{\beta_i\}_{i\in I}$ and $\{\alpha_j\}_{j\in J}$ are in $(\pi-\epsilon,\pi+\epsilon)$ for a sufficiently small $\epsilon>0.$ For $\epsilon_I\in\{1,-1\}^I,$ let $z^{\epsilon_I}$ be the critical point of $\mathcal W^{\epsilon_I}$ described in Proposition \ref{crit}. Then
$$\widehat{f^{\epsilon_I}_r}(0,\dots,0)=\frac{C^{\epsilon_I}(z^{\epsilon_I})}{\sqrt{-\det\mathrm{Hess}\Big(\frac{\mathcal W^{\epsilon_I}(z^{\epsilon_I})}{4\pi\sqrt{-1}}\Big)}}e^{\frac{r}{2\pi}2\mathrm{Vol}(\Delta(\theta_I;\theta_J))}\Big( 1 + O \Big( \frac{1}{r} \Big) \Big)$$
where each $C^{\epsilon_I}(z^{\epsilon_I})$ depends continuously on $\{\beta_i
\}_{i\in I}$ and $\{\alpha_j\}_{j\in J};$ and when $\beta_i=\alpha_j=\pi,$
$$C^{\epsilon_I}(z^{\epsilon_I})=\frac{r^{\frac{|I|}{2}-1}}{2^{\frac{3|I|}{2}+1}\pi^{\frac{|I|}{2}+1}}.$$
\end{proposition}
For the proof of Proposition \ref{critical}, we need the following
\begin{lemma}\label{absm} For each $\epsilon_I\in\{1,-1\}^I$ and any fixed $\{\alpha_j\}_{j\in J},$
$$\max_{\mathrm{D_H}}\mathrm{Im}\mathcal W^{\epsilon_I} \leqslant \mathrm{Im}\mathcal W^{\epsilon_I}\Big(\pi,\dots,\pi,\frac{7\pi}{4},\frac{7\pi}{4}\Big)=4\mathrm{Vol}(\Delta_{(0,\dots,0)})$$
where $\Delta_{(0,\dots,0)}$ is the regular ideal octahedron, and the equality holds if and only if $\alpha_1=\dots=\alpha_6=\pi$ and $\xi_1=\xi_2=\frac{7\pi}{4}.$
\end{lemma}
\begin{proof} On $\mathrm{D_H},$ we have
$$\mathrm{Im}\mathcal W^{\epsilon_I}(\alpha_I,\xi_1,\xi_2)=2V(\alpha_1,\dots,\alpha_6,\xi_1)+2V(\alpha_1,\dots,\alpha_6,\xi_2)$$
for $V$ defined in (\ref{V}). Then the result is a consequence of the result of Costantino\,\cite{C} and the Murakami-Yano formula\,\cite{MY} (see Ushijima\,\cite{U} for the case of hyperideal tetrahedra). Indeed, by \cite{C}, for a fixed $\alpha=(\alpha_1,\dots,\alpha_6)$ of the hyperideal type, the function $f(\xi)$ defined by $f(\xi)=V(\alpha,\xi)$ is strictly concave down and the unique maximum point $\xi(\alpha)$ exists and lies in $(\max\{\tau_i\},\min\{\eta_j,2\pi\}),$ ie, $(\alpha,\xi(\alpha))\in\mathrm{B_H}.$ Then by \cite{U}, $V(\alpha,\xi(\alpha))=\mathrm{Vol}(\Delta_{|\pi-\alpha|}),$ the volume of the hyperideal tetrahedron $\Delta_{|\pi-\alpha|}$ with dihedral angles $|\pi-\alpha_1|,\dots, |\pi-\alpha_6|.$ Since $\xi(\pi,\dots,\pi)=\frac{7\pi}{4}$ and $\Delta_{(0,\dots,0)}$ has the maximum volume among all the hyperideal tetrahedra, $V\big(\pi,\dots,\pi,\frac{7\pi}{4}\big)=\mathrm{Vol}(\Delta_{(0,\dots,0)})\geqslant \mathrm{Vol}(\Delta_{|\pi-\alpha|})=V(\alpha,\xi(\alpha))\geqslant V(\alpha,\xi)$ for any $(\alpha,\xi)\in \mathrm{B_H}.$
For the equality part, suppose $(\alpha_1,\dots,\alpha_6,\xi_1,\xi_2)\neq \big(\pi,\dots,\pi,\frac{7\pi}{4},\frac{7\pi}{4}\big).$ If $(\alpha_1,\dots,\alpha_6)\neq (\pi,\dots,\pi),$ then
$\mathrm{Im}\mathcal W^{\epsilon_I}(\alpha_I,\xi_1,\xi_2)\leqslant 4\mathrm{Vol}(\Delta_{|\pi-\alpha|})<4\mathrm{Vol}(\Delta_{(0,\dots,0)}).$ If $(\alpha_1,\dots,\alpha_6)=(\pi,\dots,\pi)$ but, say, $\xi_1\neq \frac{7\pi}{4},$ then the strict concavity of $f(\xi)$ implies that $\mathrm{Im}\mathcal W^{\epsilon_I}(\pi,\dots,\pi,\xi_1,\xi_2)< \mathrm{Im}\mathcal W^{\epsilon_I}\big(\pi,\dots,\pi,\frac{7\pi}{4},\frac{7\pi}{4}\big).$
\end{proof}
\begin{proof}[Proof of Proposition \ref{critical}] Let $\delta_0>0$ be as in Proposition \ref{convexity}.
By Proposition \ref{convexity}, Lemma \ref{absm} and the compactness of $\mathrm{D_H}\setminus\mathrm{D_{\delta_0}},$
$$4\mathrm{Vol}(\Delta_{(0,\dots,0)})>\max_{\mathrm{D_H}\setminus\mathrm{D_{\delta_0}}} \mathrm{Im}\mathcal W^{\epsilon_I}.$$
By Proposition \ref{crit} and continuity, if $\{\beta_i\}_{i\in I}$ and $\{\alpha_j\}_{j\in J}$ are sufficiently close to $\pi,$ then the critical point $z^{\epsilon_I}$ of $\mathcal W^{\epsilon_I}$ as in Proposition \ref{crit} lies in $\mathrm{D_{\delta_0,\mathbb C}},$ and $\mathrm{Im}\mathcal W^{\epsilon_I}(z^{\epsilon_I})=4\mathrm{Vol}(\Delta(\theta_I;\theta_J))$ is sufficiently close to $4\mathrm{Vol}(\Delta_{(0,\dots,0)})$ so that
$$\mathrm{Im}\mathcal W^{\epsilon_I}(z^{\epsilon_I})>\max_{\mathrm{D_H}\setminus\mathrm{D_{\delta_0}}} \mathrm{Im}\mathcal W^{\epsilon_I}.$$
Therefore, we only need to estimate the integral on $\mathrm{D_{\delta_0}}.$ To do this, we consider as drawn in Figure \ref{surface} the surface $S^{\epsilon_I}=S^{\epsilon_I}_{\text{top}}\cup S^{\epsilon_I}_{\text{side}}$ in $\overline{\mathrm{D_{\delta_0,\mathbb C}}},$ where
$$S^{\epsilon_I}_{\text{top}}=\{ (\alpha_I,\xi_1,\xi_2)\in \mathrm{D_{\delta_0,\mathbb C}}\ |\ (\mathrm{Im}(\alpha_I),\mathrm{Im}(\xi_1),\mathrm{Im}(\xi_2))=\mathrm{Im}(z^{\epsilon_I})\}$$
and
$$S^{\epsilon_I}_{\text{side}}=\{ (\alpha_I,\xi_1,\xi_2)+t\sqrt{-1}\cdot \mathrm{Im}(z^{\epsilon_I})\ |\ (\alpha_I,\xi_1,\xi_2)\in\partial \mathrm{D_{\delta_0}},t\in[0,1]\}.$$
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.3]{surface2.pdf}
\caption{The deformed surface $S^{\epsilon_I}$}
\label{surface}
\end{figure}
By analyticity, the integral remains the same if we deform the domain from $\mathrm{D_{\delta_0}}$ to $S^{\epsilon_I}.$
By Proposition \ref{convexity}, $\mathrm{Im}\mathcal W^{\epsilon_I}$ is concave down on $S^{\epsilon_I}_{\text{top}}.$ Since $z^{\epsilon_I}$ is a critical point of $\mathrm{Im}\mathcal W^{\epsilon_I},$ it is the unique absolute maximum point on $S^{\epsilon_I}_{\text{top}}.$
On the side $S^{\epsilon_I}_{\text{side}},$ for each $(\alpha_I,\xi_1,\xi_2)\in \partial \mathrm{D_{\delta_0}},$ consider the function
$$g^{\epsilon_I}_{(\alpha_I,\xi_1,\xi_2)}(t)= \mathrm{Im}\mathcal W^{\epsilon_I}((\alpha_I,\xi_1,\xi_2)+t\sqrt{-1}\cdot \mathrm{Im}(z^{\epsilon_I}))$$
on $[0,1].$ By Proposition \ref{convexity}, $g^{\epsilon_I}_{(\alpha_I,\xi_1,\xi_2)}(t)$ is concave up for any $(\alpha_I,\xi_1,\xi_2)\in \partial \mathrm{D_{\delta_0}}.$ As a consequence, $g^{\epsilon_I}_{(\alpha_I,\xi_1,\xi_2)}(t)\leqslant \max\{g^{\epsilon_I}_{(\alpha_I,\xi_1,\xi_2)}(0), g^{\epsilon_I}_{(\alpha_I,\xi_1,\xi_2)}(1)\}.$ Now by the previous two steps, since $(\alpha_I,\xi_1,\xi_2)\in \partial \mathrm{D_{\delta_0}},$
$$g^{\epsilon_I}_{(\alpha_I,\xi_1,\xi_2)}(0)= \mathrm{Im}\mathcal W^{\epsilon_I}(\alpha_I,\xi_1,\xi_2)<\mathrm{Im}\mathcal W^{\epsilon_I}(z^{\epsilon_I});$$
and since $(\alpha_I,\xi_1,\xi_2)+\sqrt{-1}\cdot \mathrm{Im}(z^{\epsilon_I})\in S^{\epsilon_I}_{\text{top}},$
$$g^{\epsilon_I}_{(\alpha_I,\xi_1,\xi_2)}(1)= \mathrm{Im}\mathcal W^{\epsilon_I}((\alpha_I,\xi_1,\xi_2)+\sqrt{-1}\cdot \mathrm{Im}(z^{\epsilon_I}))<\mathrm{Im}\mathcal W^{\epsilon_I}(z^{\epsilon_I}).$$ As a consequence,
$$\mathrm{Im}\mathcal W^{\epsilon_I}(z^{\epsilon_I})>\max_{S^{\epsilon_I}_{\text{side}}} \mathrm{Im}\mathcal W^{\epsilon_I}.$$
Therefore, we have proved that $z^{\epsilon_I}$ is the unique maximum point of $\mathrm{Im}\mathcal W^{\epsilon_I}$ on $S^{\epsilon_I}\cup\big( \mathrm{D_H}\setminus\mathrm{D_{\delta_0}}\big),$ and $\mathcal W^{\epsilon_I}$ has critical value $4\pi^2+4\sqrt{-1}\cdot\mathrm{Vol}(\Delta(\theta_I;\theta_J))$ at $z^{\epsilon_I}.$
By Proposition \ref{nonsingular}, $\det\mathrm{Hess}\mathcal W^{\epsilon_I}(z^{\epsilon_I})\neq 0.$
Finally, we estimate the difference between $\mathcal W^{\epsilon_I}_r$ and $\mathcal W^{\epsilon_I}.$ By Lemma \ref{converge} (3), we have
$$\varphi_r\Big(\frac{\pi}{r}\Big)=\mathrm{Li}_2(1)+\frac{2\pi\sqrt{-1}}{r}\log\Big(\frac{r}{2}\Big)-\frac{\pi^2}{r}+O\Big(\frac{1}{r^2}\Big);$$
and for $z$ with $0<\mathrm{Re} z<\pi$ we have
$$\varphi_r\Big(z+\frac{k\pi}{r}\Big)=\varphi_r(z)+\varphi'_r(z)\cdot\frac{k\pi}{r}+O\Big(\frac{1}{r^2}\Big).$$
Then by Lemma \ref{converge}, in $\big\{(\alpha_I,\xi_1,\xi_2)\in \overline{\mathrm{D_{H,\mathbb C}^\delta}}\ \big|\ |\mathrm{Im}(\alpha_i)| < L\text{ for } i\in I, |\mathrm{Im}(\xi_1)| < L, |\mathrm{Im}(\xi_2)| < L\}$ for some $L>0,$
$$\mathcal W^{\epsilon_I}_r(\alpha_I,\xi_1,\xi_2)=\mathcal W^{\epsilon_I}(\alpha_I,\xi_1,\xi_2)-\frac{8\pi\sqrt{-1}}{r}\log\Big(\frac{r}{2}\Big)+\frac{4\pi \sqrt{-1} \cdot\kappa(\alpha_I,\xi_1,\xi_2)}{r}+\frac{\nu_r(\alpha_I,\xi_1,\xi_2)}{r^2},$$
with
\begin{equation*}
\begin{split}
&\kappa(\alpha_I,\xi_1,\xi_2)\\
=&\sum_{i=1}^4 \sqrt{-1}\tau_i- \sqrt{-1}\xi_1- \sqrt{-1}\xi_2-3\sqrt{-1}\pi \\
&+\frac{1}{2}\sum_{i=1}^4\sum_{j=1}^3\log\big(1-e^{2\sqrt{-1}(\eta_j-\tau_i)}\big)-\frac{3}{2}\sum_{i=1}^4\log\big(1-e^{2\sqrt{-1}(\tau_i-\pi)}\big)\\
&+\frac{3}{2}\log\big(1-e^{2\sqrt{-1}(\xi_1-\pi)}\big)-\frac{1}{2}\sum_{i=1}^4\log\big(1-e^{2\sqrt{-1}(\xi_1-\tau_i)}\big)-\frac{1}{2}\sum_{j=1}^3\log\big(1-e^{2\sqrt{-1}(\eta_j-\xi_1)}\big)\\
&+\frac{3}{2}\log\big(1-e^{2\sqrt{-1}(\xi_2-\pi)}\big)-\frac{1}{2}\sum_{i=1}^4\log\big(1-e^{2\sqrt{-1}(\xi_2-\tau_i)}\big)-\frac{1}{2}\sum_{j=1}^3\log\big(1-e^{2\sqrt{-1}(\eta_j-\xi_2)}\big)\\
\end{split}
\end{equation*}
and $|\nu_r(\alpha_I,\xi_1,\xi_2)|$ bounded from above by a constant independent of $r.$
Then
\begin{equation*}
\begin{split}
&e^{\sum_{i\in I}\epsilon_i\sqrt{-1}\big(\alpha_i+\beta_i+\frac{2\pi}{r}\big)+\frac{r}{4\pi \sqrt{-1}}{\mathcal W}^{\epsilon_I}_r(\alpha_I,\xi_1,\xi_2)}\\
=&\Big(\frac{r}{2}\Big)^{-2}e^{\sum_{i\in I}\epsilon_i\sqrt{-1}(\alpha_i+\beta_i)+\kappa(\alpha_I,\xi_1,\xi_2)}\cdot e^{\frac{r}{4\pi \sqrt{-1}}\Big(\mathcal W^{\epsilon_I}(\alpha_I,\xi_1,\xi_2)+\frac{\nu_r(\alpha_I,\xi_1,\xi_2)-\sum_{i\in I}\epsilon_i 8\pi^2}{r^2}\Big)}.
\end{split}
\end{equation*}
Now let $D_{\mathbf z}=\big\{(\alpha_I,\xi_1,\xi_2)\in \overline{\mathrm{D_{H,\mathbb C}^\delta}}\ \big|\ |\mathrm{Im}(\alpha_i)| < L\text{ for } i\in I, |\mathrm{Im}(\xi_1)| < L, |\mathrm{Im}(\xi_2)| < L\big\}$ for some $L>0,$ and let $\mathbf a_r=((\beta_i)_{i\in I},(\alpha_j)_{j\in J})$ (recall that $\beta_i=\frac{2\pi n_i}{r}$ and $\alpha_j=\frac{2\pi m_j}{r}$ depend on $r$),
$f^{\mathbf a_r}(\alpha_I,\xi_1,\xi_2)=\frac{\mathcal W^{\epsilon_I}(\alpha_I,\xi_1,\xi_2)}{4\pi\sqrt{-1}},$ $g^{\mathbf a_r}(\alpha_I,\xi_1,\xi_2)=\psi(\alpha_I,\xi_1,\xi_2)e^{\sum_{i\in I}\epsilon_i\sqrt{-1}(\alpha_i+\beta_i)+\kappa(\alpha_I,\xi_1,\xi_2)},$ $f_r^{\mathbf a_r}(\alpha_I,\xi_1,\xi_2)=\frac{{\mathcal W}_r^{\epsilon_I}(\alpha_I,\xi_1,\xi_2)}{4\pi\sqrt{-1}}-\frac{2}{r}\log\big(\frac{r}{2}\big),$ $\upsilon_r(\alpha_I,\xi_1,\xi_2)=\nu_r(\alpha_I,\xi_1,\xi_2)-\sum_{i\in I}\epsilon_i 8\pi^2,$ $S_r=S^{\epsilon_I}\cup \big(\mathrm{D_H}\setminus\mathrm{D_{\delta_0}}\big)$ and $\mathbf c_r=z^{\epsilon_I},$ the critical point of $f^{\mathbf a_r}$ in $D_{\mathbf z}.$ Then all the conditions of Proposition \ref{saddle} are satisfied and the result follows.
When $\beta_i=\alpha_j=\pi,$ a direct computation shows that
\begin{equation*}
\begin{split}
C^{\epsilon_I}(z^{\epsilon_I})=\frac{r^{|I|+2}}{2^{2|I|+2}\pi^{|I|+2}}\Big(\frac{2\pi}{r}\Big)^{
\frac{|I|+2}{2}}\Big(\frac{r}{2}\Big)^{-2} g\Big(\pi,\dots,\pi,\frac{7\pi}{4},\frac{7\pi}{4}\Big)=\frac{r^{\frac{|I|}{2}-1}}{2^{\frac{3|I|}{2}+1}\pi^{\frac{|I|}{2}+1}}.
\end{split}
\end{equation*}
\end{proof}
\begin{corollary}\label{5.8} If $\epsilon>0$ is sufficiently small and all $\{\beta_i\}_{i\in I}$ and $\{\alpha_j\}_{j\in J}$ are in $(\pi-\epsilon,\pi+\epsilon),$ then
$$\sum_{\epsilon_I\in\{1,-1\}^I}\frac{C^{\epsilon_I}(z^{\epsilon_I})}{\sqrt{-\det\mathrm{Hess}\Big(\frac{\mathcal W^{\epsilon_I}(z^{\epsilon_I})}{4\pi\sqrt{-1}}\Big)}}\neq 0.$$
\end{corollary}
\begin{proof} If $\beta_i=\alpha_j=\pi$ for all $i\in I$ and $j\in J,$ then all $z^{\epsilon_I}=\big(\pi,\dots,\pi,\frac{7\pi}{4},\frac{7\pi}{4}\big)$ and all $\mathcal W^{\epsilon_I}$ are the same functions. As a consequence, all the $C^{\epsilon_I}(z^{\epsilon_I})$'s and all Hessian determinants $\det\mathrm{Hess}\Big(\frac{\mathcal W^{\epsilon_I}(z^{\epsilon_I})}{4\pi\sqrt{-1}}\Big)$'s are the same at this point, implying that the sum is not equal to zero. Then by continuity, if $\epsilon$ is small enough, then the sum remains non-zero.
\end{proof}
\begin{remark} We suspect that all $C^{\epsilon_I}(z^{\epsilon_I})$'s and all $\det\mathrm{Hess}\Big(\frac{\mathcal W^{\epsilon_I}(z^{\epsilon_I})}{4\pi\sqrt{-1}}\Big)$'s are always the same for any given $\{\beta_i\}_{i\in I}$ and $\{\alpha_j\}_{j\in J}.$
\end{remark}
\subsection{Estimate of the other Fourier coefficients}\label{ot}
\begin{proposition}\label{other} Suppose $\{\beta_i\}_{i\in I}$ and $\{\alpha_j\}_{j\in J}$ are in $(\pi-\epsilon,\pi+\epsilon)$ for a sufficiently small $\epsilon>0.$ If $(m_I,n_1,n_2)\neq(0,\dots,0),$ then
$$\Big|\widehat{f^{\epsilon_I}_r}(m_I,n_1,n_2)\Big|<O\Big(e^{\frac{r}{2\pi}\big(2\mathrm{Vol}(\Delta(\theta_I;\theta_J))-\epsilon'\big)}\Big)$$
for some $\epsilon'>0.$
\end{proposition}
\begin{proof} Recall that if $\beta_i=\alpha_j=\pi$ for all $i\in I$ and $j\in J,$ then the total derivative
$$D\mathcal W^{\epsilon_I}\Big(\pi,\dots,\pi,\frac{7\pi}{4}, \frac{7\pi}{4}\Big)=(0,\dots,0).$$
Hence there exists a $\delta_1>0$ and an $\epsilon>0$ such that if $\{\beta_i\}_{i\in I}$ and $\{\alpha_j\}_{j\in J}$ are in $(\pi-\epsilon,\pi+\epsilon),$ then for all $(\alpha_I,\xi_1,\xi_2)\in D_{\delta_1,\mathbb C}$ and for any unit vector $\mathbf u=((u_i)_{i\in I},w_1,w_2)\in \mathbb R^{|I|+2},$ the directional derivatives
$$|D_{\mathbf u}\mathrm{Im}\mathcal W^{\epsilon_I}(\alpha_I,\xi_1,\xi_2)|=\Big|\sum_{i\in I}u_i\frac{\partial \mathrm{Im}\mathcal W^{\epsilon_I}}{\partial \mathrm{Im}(\alpha_i)}+w_1\frac{\partial \mathrm{Im}\mathcal W^{\epsilon_I}}{\partial \mathrm{Im}(\xi_1)}+w_2\frac{\partial \mathrm{Im}\mathcal W^{\epsilon_I}}{\partial \mathrm{Im}(\xi_2)}\Big|<\frac{2\pi-\epsilon''}{2\sqrt{2|I|+4}}$$
for some $\epsilon''>0.$
On $\mathrm{D_H},$ we have
\begin{equation*}
\begin{split}
&\mathrm{Im}\Big(\mathcal W^{\epsilon_I}(\alpha_I,\xi_1,\xi_2)-\sum_{i\in I}2\pi m_i\alpha_i-4\pi n_1\xi_1-4\pi n_2\xi_2\Big)=\mathrm{Im}\mathcal W^{\epsilon_I}(\alpha_I,\xi_1,\xi_2).
\end{split}
\end{equation*}
Then by Proposition \ref{convexity}, Lemma \ref{absm} and the compactness of $\mathrm{D_H}\setminus\mathrm{D_{\delta_1}},$
$$4\mathrm{Vol}(\Delta_{(0,\dots,0)})>\max_{\mathrm{D_H}\setminus\mathrm{D_{\delta_1}}} \mathrm{Im}\Big(\mathcal W^{\epsilon_I}(\alpha_I,\xi_1,\xi_2)-\sum_{i\in I}2\pi m_i\alpha_i-4\pi n_1\xi_1-4\pi n_2\xi_2\Big)+\epsilon'''$$
for some $\epsilon'''>0.$
By Proposition \ref{crit} and continuity, if $\{\beta_i\}_{i\in I}$ and $\{\alpha_j\}_{j\in J}$ are sufficiently close to $\pi,$ then the critical point $z^{\epsilon_I}$ of $\mathcal W^{\epsilon_I}$ as in Proposition \ref{crit} lies in $\mathrm{D_{\delta_1,\mathbb C}},$ and $\mathrm{Im}\mathcal W^{\epsilon_I}(z^{\epsilon_I})=4\mathrm{Vol}(\Delta(\theta_I;\theta_J))$ is sufficiently close to $4\mathrm{Vol}(\Delta_{(0,\dots,0)})$ so that
\begin{equation}\label{b}
\mathrm{Im}\mathcal W^{\epsilon_I}(z^{\epsilon_I})>\max_{\mathrm{D_H}\setminus\mathrm{D_{\delta_1}}} \mathrm{Im}\Big(\mathcal W^{\epsilon_I}(\alpha_I,\xi_1,\xi_2)-\sum_{i\in I}2\pi m_i\alpha_i-4\pi n_1\xi_1-4\pi n_2\xi_2\Big)+\epsilon'''.
\end{equation}
Therefore, we only need to estimate the integral on $\mathrm{D_{\delta_1}}.$
If $(m_I,n_1,n_2)\neq (0,\dots,0),$ then at least one of $\{m_i\}_{i\in I},$ $n_1$ and $n_2$ is nonzero. Without loss of generality, assume that $m_1\neq 0.$
If $m_1>0,$ then consider the surface $S^+=S^+_{\text{top}}\cup S^+_{\text{side}}$ in $\overline{\mathrm{D_{\delta_1,\mathbb C}}}$ where
$$S^+_{\text{top}}=\{ (\alpha_I,\xi_1,\xi_2)\in \mathrm{D_{\delta_1,\mathbb C}}\ |\ (\mathrm{Im}(\alpha_I),\mathrm{Im}(\xi_1),\mathrm{Im}(\xi_2))=(\delta_1,0,\dots,0)\}$$
and
$$S^+_{\text{side}}=\{ (\alpha_I,\xi_1,\xi_2)+(t\sqrt{-1}\delta_1,0,\dots,0)\ |\ (\alpha_I,\xi_1,\xi_2)\in\partial \mathrm{D_{\delta_1}}, t\in[0,1]\}.$$
On the top, for any $(\alpha_I,\xi_1,\xi_2)\in S^+_{\text{top}},$ by the Mean Value Theorem,
\begin{equation*}
\begin{split}
\big|\mathrm{Im}\mathcal W^{\epsilon_I}(z^{\epsilon_I})-\mathrm{Im}\mathcal W^{\epsilon_I}(\alpha_I,\xi_1,\xi_2)\big|
=&\big|D_{\mathbf u}\mathrm{Im}\mathcal W^{\epsilon_I}(z)\big|\cdot\big\|z^{\epsilon_I}-(\alpha_I,\xi_1,\xi_2)\big\|\\
<&\frac{2\pi-\epsilon''}{2\sqrt{2|I|+4}}\cdot2\sqrt{2|I|+4} \delta_1\\
=&2\pi\delta_1-\epsilon''\delta_1,
\end{split}
\end{equation*}
where $z$ is some point on the line segment connecting $z^{\epsilon_I}$ and $(\alpha_I,\xi_1,\xi_2),$ $\mathbf u=\frac{z^{\epsilon_I}-(\alpha_I,\xi_1,\xi_2)}{\|z^{\epsilon_I}-(\alpha_I,\xi_1,\xi_2)\|}$ and $2\sqrt{2|I|+4} \delta_1$ is the diameter of $\mathrm{D_{\delta_1,\mathbb C}}.$
Then
\begin{equation*}
\begin{split}
\mathrm{Im}\Big(\mathcal W^{\epsilon_I}(\alpha_I,\xi_1,\xi_2)-\sum_{i\in I}2\pi m_i\alpha_i-4\pi n_1\xi_1-4\pi n_2\xi_2\Big)=&\mathrm{Im}\mathcal W^{\epsilon_I}(\alpha_I,\xi_1,\xi_2)-2\pi m_1\delta_1\\
<&\mathrm{Im}\mathcal W^{\epsilon_I}(z^{\epsilon_I})+2\pi\delta_1-\epsilon''\delta_1-2\pi\delta_1\\
=&\mathrm{Im}\mathcal W^{\epsilon_I}(z^{\epsilon_I})-\epsilon'' \delta_1.
\end{split}
\end{equation*}
On the side, for any point $(\alpha_I,\xi_1,\xi_2)+(t\sqrt{-1}\delta_1,0,\dots,0)\in S^+_{\text{side}},$ by the Mean Value Theorem again, we have
$$\big|\mathrm{Im}\mathcal W^{\epsilon_I}\big((\alpha_I,\xi_1,\xi_2)+(t\sqrt{-1}\delta_1,0,\dots,0)\big)-\mathrm{Im}\mathcal W^{\epsilon_I}(\alpha_I,\xi_1,\xi_2)\big|<\frac{2\pi-\epsilon''}{2\sqrt{2|I|+4}} t\delta_1.$$
Then
\begin{equation*}
\begin{split}
\mathrm{Im}\mathcal W^{\epsilon_I}\big((\alpha_I,\xi_1,\xi_2)+(t\sqrt{-1}\delta_1,0,\dots,0)\big)-2\pi m_1 t\delta_1<&\mathrm{Im}\mathcal W^{\epsilon_I}(\alpha_I,\xi_1,\xi_2)+\frac{2\pi-\epsilon''}{2\sqrt{2|I|+4}} t\delta_1-2\pi t\delta_1\\
<&\mathrm{Im}\mathcal W^{\epsilon_I}(\alpha_I,\xi_1,\xi_2)\\
<&\mathrm{Im}\mathcal W^{\epsilon_I}(z^{\epsilon_I})-\epsilon''',
\end{split}
\end{equation*}
where the last inequality comes from the fact that $(\alpha_I,\xi_1,\xi_2)\in \partial \mathrm{D_{\delta_1}}\subset \mathrm{D_H}\setminus\mathrm{D_{\delta_1}}$ and (\ref{b}).
Now let $\epsilon'=\min\{\epsilon''\delta_1,\epsilon'''\}.$ Then on $S^+\cup \big(\mathrm{D_H}\setminus\mathrm{D_{\delta_1}}\big),$
$$\mathrm{Im}\Big(\mathcal W^{\epsilon_I}(\alpha_I,\xi_1,\xi_2)-\sum_{i\in I}2\pi m_i\alpha_i-4\pi n_1\xi_1-4\pi n_2\xi_2\Big)<\mathrm{Im}\mathcal W^{\epsilon_I}(z^{\epsilon_I})-\epsilon',$$
and the result follows.
If $m_1<0,$ then we consider the surface $S^-=S^-_{\text{top}}\cup S^-_{\text{side}}$ in $\overline{\mathrm{D_{\delta_1,\mathbb C}}}$ where
$$S^-_{\text{top}}=\{ (\alpha_I,\xi_1,\xi_2)\in \mathrm{D_{\delta_1,\mathbb C}}\ |\ (\mathrm{Im}(\alpha_I),\mathrm{Im}(\xi_1),\mathrm{Im}(\xi_2))=(-\delta_1,0,\dots,0)\}$$
and
$$S^-_{\text{side}}=\{ (\alpha_I,\xi_1,\xi_2)-(t\sqrt{-1}\delta_1,0,\dots,0)\ |\ (\alpha_I,\xi_1,\xi_2)\in\partial \mathrm{D_{\delta_1}}, t\in[0,1]\}.$$
Then the same estimate as in the previous case proves that on
$S^-\cup \big(\mathrm{D_H}\setminus\mathrm{D_{\delta_1}}\big),$
$$\mathrm{Im}\Big(\mathcal W^{\epsilon_I}(\alpha_I,\xi_1,\xi_2)-\sum_{i\in I}2\pi m_i\alpha_i-4\pi n_1\xi_1-4\pi n_2\xi_2\Big)<\mathrm{Im}\mathcal W^{\epsilon_I}(z^{\epsilon_I})-\epsilon',$$
from which the result follows.
\end{proof}
\subsection{Estimate of the error term}\label{ee}
The goal of this section is to estimate the error term in Proposition \ref{Poisson}.
\begin{proposition}\label{error} Suppose $\{\alpha_j\}_{j\in J}$ are in $(\pi-\epsilon,\pi+\epsilon)$ for a sufficiently small $\epsilon>0.$ Then
the error term in Proposition \ref{Poisson} is less than $O\big(e^{\frac{r}{2\pi}(2\mathrm{Vol}(\Delta(\theta_I;\theta_J))-\epsilon')}\big)$
for some $\epsilon'>0.$
\end{proposition}
For the proof we need the following estimate, which first appeared in \cite[Proposition 8.2]{GL} for $q=e^{\frac{\pi \sqrt{-1}}{r}},$ and for the root $q=e^{\frac{2\pi \sqrt{-1}}{r}}$ in \cite[Proposition 4.1]{DK}.
\begin{lemma}\label{est}
For any integer $0<n<r$ and at $q=e^{\frac{2\pi \sqrt{-1}}{r}},$
$$ \log\left|\{n\}!\right|=-\frac{r}{2\pi}\Lambda\left(\frac{2n\pi}{r}\right)+O\left(\log(r)\right).$$
\end{lemma}
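The following Python sketch is our own numerical illustration of Lemma \ref{est}; it assumes the usual convention $\{n\}!=\prod_{k=1}^{n}\{k\}$ with $\{k\}=q^k-q^{-k},$ so that $\left|\{n\}!\right|=\prod_{k=1}^{n}2\big|\sin\frac{2\pi k}{r}\big|$ at $q=e^{\frac{2\pi \sqrt{-1}}{r}},$ and it evaluates the Lobachevsky function through its Fourier series $\Lambda(\theta)=\frac{1}{2}\sum_{k\geqslant 1}\frac{\sin(2k\theta)}{k^2}.$ The sample values of $n$ and $r$ are arbitrary.
\begin{verbatim}
import numpy as np

def lobachevsky(theta, terms=200000):
    # Lambda(theta) = (1/2) * sum_{k >= 1} sin(2 k theta) / k^2  (Fourier series).
    k = np.arange(1, terms + 1)
    return 0.5 * np.sum(np.sin(2.0 * k * theta) / k**2)

def log_quantum_factorial(n, r):
    # log |{n}!| at q = exp(2 pi i / r), using |{k}| = 2 |sin(2 pi k / r)|.
    k = np.arange(1, n + 1)
    return np.sum(np.log(2.0 * np.abs(np.sin(2.0 * np.pi * k / r))))

r, n = 2001, 700
exact = log_quantum_factorial(n, r)
predicted = -(r / (2.0 * np.pi)) * lobachevsky(2.0 * np.pi * n / r)
print(exact, predicted, exact - predicted)   # the difference is of order log(r)
\end{verbatim}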
\begin{proof}[Proof of Proposition \ref{error} ]
For a fixed $\alpha_J=(\alpha_j)_{j\in J},$ let
\begin{equation*}
M_{\alpha_J}=\max\big\{V(\alpha_1,\dots,\alpha_6,\xi_1)+V(\alpha_1,\dots,\alpha_6,\xi_2)\ \big|\ (\alpha_I,\xi_1,\xi_2)\in\partial \mathrm {D_H}\cup\big(\mathrm {D_A}\setminus \mathrm{D_H}\big)\big\}
\end{equation*}
where $V$ is as defined in (\ref{V}). Then by \cite[Sections 3 \& 4]{BDKY},
$$M_{\alpha_J}<2v_8=2\mathrm{Vol}(\Delta_{(0,\dots,0)});$$
and by continuity, if $\epsilon$ is sufficiently small and $\theta_1,\dots,\theta_6$ are all less than $\epsilon,$ then
$$M_{\alpha_J}<2\mathrm{Vol}(\Delta{(\theta_I;\theta_J)}).$$
Now by Lemma \ref{est} and the continuity, for $\epsilon'=\frac{2\mathrm{Vol}(\Delta{(\theta_I;\theta_J)})-M_{\alpha_J}}{2},$ we can choose a sufficiently small $\delta>0$ so that if $\big(\frac{2\pi a_I}{r},\frac{2\pi k_1}{r},\frac{2\pi k_2}{r}\big)\notin \mathrm{D_H^\delta},$ then
$$\Big|g_r^{\epsilon_I}(a_I, k_1, k_2)\Big|<O\Big(e^{\frac{r}{2\pi}(M_{\alpha_J}+\epsilon')}\Big)=O\Big(e^{\frac{r}{2\pi}(2\mathrm{Vol}(\Delta{(\theta_I;\theta_J)})-\epsilon')}\Big).$$
Let $\psi$ be the bump function supported on $(\mathrm{D_H}, \mathrm{D_H^{\delta}}).$ Then the error term in Proposition \ref{Poisson} is less than $O\big(e^{\frac{r}{2\pi}(2\mathrm{Vol}(\Delta{(\theta_I;\theta_J)})-\epsilon')}\big).$
\end{proof}
\subsection{Proof of Theorem \ref{main}}\label{pf}
\begin{proof}[Proof of Theorem \ref{main}] Let $\epsilon>0$ be sufficiently small so that the conditions of Propositions \ref{critical}, \ref{other} and \ref{error} and of Corollary \ref{5.8} are satisfied, and suppose $\{\beta_i\}_{i\in I}$ and $\{\alpha_j\}_{j\in J}$ are all in $(\pi-\epsilon, \pi+\epsilon).$
By Propositions \ref{4.2}, \ref{Poisson}, \ref{critical}, \ref{other} and \ref{error},
\begin{equation*}
\begin{split}
\mathrm{\widehat {Y}}_r(b_I; a_J)=&\frac{(-1)^{|I|\big(\frac{r}{2}+1\big)}\cdot n(a_J)}{4\{1\}^{|I|-2}}\Big(\sum_{\epsilon_I\in\{1,-1\}^I}\widehat{ f_r^{\epsilon_I}}(0,\dots,0)\Big)\Big(1+O\big(e^{\frac{r}{2\pi}{(-\epsilon')}}\big)\Big)\\
=&\frac{(-1)^{|I|\big(\frac{r}{2}+1\big)}\cdot n(a_J)}{4\{1\}^{|I|-2}}\bigg( \sum_{\epsilon_I\in\{1,-1\}^I}\frac{C^{\epsilon_I}(z^{\epsilon_I})}{\sqrt{-\det\mathrm{Hess}\Big(\frac{\mathcal W^{\epsilon_I}(z^{\epsilon_I})}{4\pi\sqrt{-1}}\Big)}}\bigg) e^{\frac{r}{2\pi}2\mathrm{Vol}(\Delta(\theta_I;\theta_J))}\Big( 1 + O \Big( \frac{1}{r} \Big) \Big);
\end{split}
\end{equation*}
and by Corollary \ref{5.8},
$$\sum_{\epsilon_I\in\{1,-1\}^I}\frac{C^{\epsilon_I}(z^{\epsilon_I})}{\sqrt{-\det\mathrm{Hess}\Big(\frac{\mathcal W^{\epsilon_I}(z^{\epsilon_I})}{4\pi\sqrt{-1}}\Big)}}\neq 0,$$
which completes the proof.
\end{proof}
\section{Numerical evidence for Conjecture \ref{conj}}\label{appendix}
In this appendix we show numerical evidence supporting Conjecture \ref{conj}. We provide calculations for two deeply truncated tetrahedra of type $((1),(23456))$ and one deeply truncated tetrahedron of every other type in Figure \ref{deep}. All the calculations are performed with Mathematica.
We provide plots for:
\begin{enumerate}[(1)]
\item Every tetrahedron with five angles equal to $0$ and a deeply truncated edge with angle between $0$ and $\frac{\pi}{2}$ at $r=2017.$
\item Two tetrahedra with a single deeply truncated edge.
\item Two tetrahedra with two deeply truncated edges (one per type).
\item Three tetrahedra with three deeply truncated edges (one per type).
\end{enumerate}
Because of Proposition \ref{prop:duality} this actually accounts for all partitions $(I,J)$ up to relabeling.
In every case shown, if a tetrahedron has angle $\alpha$ at an edge, the sequence of colorings we choose for that edge is $\left\lfloor \frac{r}{4\pi}(\pi-\alpha)\right\rfloor.$
The fact that the angles and lengths we list correspond to hyperbolic tetrahedra can be checked directly from the Gram matrix using Proposition \ref{prop:criterion}.
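For instance, the coloring rule above can be evaluated directly; the following short Python snippet, with sample values only, prints the coloring chosen for an edge with dihedral angle $\frac{\pi}{5}$ at the levels $r$ appearing in the tables below.
\begin{verbatim}
import math

def coloring(r, alpha):
    # Coloring chosen for an edge with dihedral angle alpha at level r,
    # following the rule floor(r/(4 pi) * (pi - alpha)) quoted above.
    return math.floor(r / (4.0 * math.pi) * (math.pi - alpha))

for r in (509, 1009, 2017, 6049):
    print(r, coloring(r, math.pi / 5))
\end{verbatim}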
\vspace{.5cm}
\begin{minipage}{.45\textwidth}
\includegraphics[width=.95\textwidth]{Plot1edgeallangles.pdf}
\end{minipage}
\begin{minipage}{.45\textwidth}
\begin{flushright}
{\tabulinesep=1.2mm
\begin{tabu}{|C{2.3cm}|C{4.5cm}|}
\hline
Type & $\left( (1),(23456)\right)$\\ \hline
Angles & $(\alpha),\left(0,0,0,0,0\right)$ \\ \hline Edge lengths & $l$ \\ \hline Error & $<0.1\%$\\
\hline
\end{tabu}}
\end{flushright}
\end{minipage}
\vspace{1.5cm}
\begin{minipage}{.45\textwidth}
\includegraphics[width=.95\textwidth,left]{Plot1edgezeroangle.pdf}
\end{minipage}
\begin{minipage}{.45\textwidth}
\begin{flushright}
{\tabulinesep=1.2mm
\begin{tabu}{|C{2.3cm}|C{4.5cm}|}
\hline
Type & $\left( (1),(23456)\right)$\\ \hline
Angles & $(0),\left(\frac{\pi}{5},\frac{\pi}{4},\frac{\pi}{4},\frac{\pi}{4},\frac{\pi}{4}\right)$ \\ \hline Edge lengths & $0$ \\ \hline Volume & $2.8543$ \\ \hline $\frac{\pi}{6049}\log \widehat{\mathrm Y}_{6049}$ & $2.84835$ \\ \hline Error & $0.2\%$\\
\hline
\end{tabu}}
\end{flushright}
\end{minipage}
\vspace{1.5cm}
\begin{minipage}{.45\textwidth}
\includegraphics[width=.95\textwidth]{Plot1edgeposangle.pdf}
\end{minipage}
\begin{minipage}{.45\textwidth}
\begin{flushright}
{\tabulinesep=1.2mm
\begin{tabu}{|C{2.3cm}|C{4.5cm}|}
\hline
Type & $\left( (1),(23456)\right)$\\ \hline
Angles & $(0.4005),\left(\frac{\pi}{5},\frac{\pi}{4},\frac{\pi}{4},\frac{\pi}{4},\frac{\pi}{4}\right)$ \\ \hline Edge lengths & $0.3214$ \\ \hline Volume & $2.8223$ \\ \hline $\frac{\pi}{6049}\log \widehat{\mathrm Y}_{6049}$ & $2.8163$ \\ \hline Error & $0.2\%$\\
\hline
\end{tabu}}
\end{flushright}
\end{minipage}
\vspace{1.5cm}
\begin{minipage}{.45\textwidth}
\includegraphics[width=.95\textwidth,left]{Plot2edgestype2.pdf}
\end{minipage}
\begin{minipage}{.45\textwidth}
\begin{flushright}
{\tabulinesep=1.2mm
\begin{tabu}{|C{2.3cm}|C{5cm}|}
\hline
Type & $\left( (12),(3456)\right)$\\ \hline
Angles & $(0.1638,0.2160),\left(\frac{\pi}{5},\frac{\pi}{6},\frac{\pi}{5},\frac{\pi}{6}\right)$ \\ \hline Edge lengths & $0.1486, 0.2024$ \\ \hline Volume & $3.2937$ \\ \hline $\frac{\pi}{1009}\log \widehat{\mathrm Y}_{1009}$ & $3.2825$ \\ \hline Error & $0.3\%$\\
\hline
\end{tabu}}
\end{flushright}
\end{minipage}
\vspace{1.5cm}
\begin{minipage}{.45\textwidth}
\includegraphics[width=.95\textwidth,left]{Plot2edgestype1.pdf}
\end{minipage}
\begin{minipage}{.45\textwidth}
\begin{flushright}
{\tabulinesep=1.2mm
\begin{tabu}{|C{2.3cm}|C{5cm}|}
\hline
Type & $\left( (14),(2356)\right)$\\ \hline
Angles & $(0.3862,0.2302),\left(\frac{\pi}{4},\frac{\pi}{5},\frac{\pi}{4},\frac{\pi}{4}\right)$ \\ \hline Edge lengths & $0.2842, 0.1673$ \\ \hline Volume & $3.0362$ \\ \hline $\frac{\pi}{1009}\log \widehat{\mathrm Y}_{1009}$ & $3.0293$ \\ \hline Error & $0.2\%$\\
\hline
\end{tabu}}
\end{flushright}
\end{minipage}
\vspace{1.5cm}
\begin{minipage}{.45\textwidth}
\includegraphics[width=.95\textwidth,left]{Plot3edgestype1.pdf}
\end{minipage}
\begin{minipage}{.45\textwidth}
\begin{flushright}
{\tabulinesep=1.2mm
\begin{tabu}{|C{2.3cm}|C{6cm}|}
\hline
Type & $\left( (123),(456)\right)$\\ \hline
Angles & $(0.1282,0.2060,0.2955),\left(\frac{\pi}{8},\frac{\pi}{6},\frac{\pi}{5}\right)$ \\ \hline Edge lengths & $0.1210, 0.2008,0.29683$ \\ \hline Volume & $3.4136$ \\ \hline $\frac{\pi}{509}\log \widehat{\mathrm Y}_{509}$ & $3.4366$ \\ \hline Error: & $0.7\%$\\
\hline
\end{tabu}}
\end{flushright}
\end{minipage}
\vspace{1.5cm}
\begin{minipage}{.45\textwidth}
\includegraphics[width=.95\textwidth,left]{Plot3edgestype2.pdf}
\end{minipage}
\begin{minipage}{.45\textwidth}
\begin{flushright}
{\tabulinesep=1.2mm
\begin{tabu}{|C{2.3cm}|C{6cm}|}
\hline
Type & $\left( (124),(356)\right)$\\ \hline
Angles & $(0.1042,0.1802,0.1339),\left(\frac{\pi}{7},\frac{\pi}{6},\frac{\pi}{5}\right)$ \\ \hline Edge lengths & $0.0931, 0.1743,0.1203$ \\ \hline Volume & $3.4277$ \\ \hline $\frac{\pi}{509}\log \widehat{\mathrm Y}_{509}$ & $3.4504$ \\ \hline Error: & $0.7\%$\\
\hline
\end{tabu}}
\end{flushright}
\end{minipage}
\vspace{1.5cm}
\begin{minipage}{.45\textwidth}
\includegraphics[width=.95\textwidth,left]{Plot3edgestype3.pdf}
\end{minipage}
\begin{minipage}{.45\textwidth}
\begin{flushright}
{\tabulinesep=1.2mm
\begin{tabu}{|C{2.3cm}|C{6cm}|}
\hline
Type & $\left( (126),(345)\right)$\\ \hline
Angles & $(0.4041,0.5014,0.4064),\left(\frac{2\pi}{13},\frac{3\pi}{13},\frac{4\pi}{17}\right)$ \\ \hline Edge lengths & $0.4284, 0.5045,0.3817$ \\ \hline Volume & $3.1123$ \\ \hline $\frac{\pi}{509}\log \widehat{\mathrm Y}_{509}$ & $3.1280$ \\ \hline Error: & $0.5\%$\\
\hline
\end{tabu}}
\end{flushright}
\end{minipage}
| {
"timestamp": "2021-03-23T01:43:16",
"yymm": "2009",
"arxiv_id": "2009.03684",
"language": "en",
"url": "https://arxiv.org/abs/2009.03684",
"abstract": "The asymptotic behavior of quantum $6j$-symbols is closely related to the volume of truncated hyperideal tetrahedra\\,\\cite{C}, and plays a central role in understanding the asymptotics of the Turaev-Viro invariants of $3$-manifolds. In this paper, we propose a conjecture relating the asymptotics of the discrete Fourier transforms of quantum $6j$-symbols on one hand, and the volume of deeply truncated tetrahedra of various types on the other. As supporting evidence, we prove the conjecture in the case that the dihedral angles are sufficiently small, and provide numerical calculations in the case that the dihedral angles are relatively big. A key observation is a relationship between quantum $6j$-symbols and the co-volume function of deeply truncated tetrahedra, which is of interest in its own right. More ambitiously, we extend the conjecture to the discrete Fourier transforms of the Yokota invariants of planar graphs and volume of deeply truncated polyhedra, and provide supporting evidence.",
"subjects": "Geometric Topology (math.GT); Mathematical Physics (math-ph); Quantum Algebra (math.QA)",
"title": "Discrete Fourier transforms, quantum $6j$-symbols and deeply truncated tetrahedra",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109520836026,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.707679621974659
} |
https://arxiv.org/abs/2204.03153 | Spectrum of the Transposition graph | Transposition graph $T_n$ is defined as a Cayley graph over the symmetric group generated by all transpositions. It is known that all eigenvalues of $T_n$ are integers. However, an explicit description of the spectrum is unknown. In this paper we prove that for any integer $k\geqslant 0$ there exists $n_0$ such that for any $n\geqslant n_0$ and any $m \in \{0, \dots, k\}$, $m$ is an eigenvalue of $T_n$. In particular, it is proved that zero is an eigenvalue of $T_n$ for any $n\neq2$, and one is an eigenvalue of $T_n$ for any odd $n\geqslant 7$ and for any even $n \geqslant 14$. We also present exact values of the third and the fourth largest eigenvalues of $T_n$ with their multiplicities. | \section{Introduction}\label{sec1}
The {\it Transposition graph} $T_n$ is defined as a Cayley graph over the symmetric group $\mathrm{Sym}_n$ generated by all transpositions. The graph $T_n, n\geqslant 2$, is a connected bipartite $\binom{n}{2}$-regular graph of order $n!$ and diameter $(n-1)$~\cite{K08}. It is an edge--transitive graph but not distance--regular, and hence not a distance--transitive graph. It was shown in~\cite{KL20} that the Transposition graph is integral, which means that all eigenvalues of its adjacency matrix are integers~\cite{HS74}. Since $T_n$ is bipartite, its spectrum $Spec(T_n)$ is symmetric with respect to zero, where the spectrum of a graph is defined as the multiset of distinct eigenvalues together with their multiplicities~\cite{BH12}. Independently, the integrality of $T_n$ was shown in~\cite{KY97}, along with a computation of the bisection width of the Transposition network $T_n$. More precisely, the following theorem was proved.
\begin{theorem}\label{KY-08} {\rm \cite[Lemma~3]{KY97}} The Transposition graph $T_n, n\geqslant 2,$ is an integral graph such that its largest eigenvalue is $\frac{n(n-1)}{2}$ with multiplicity $1$; its second largest eigenvalue is $\frac{n(n-3)}{2}$ with multiplicity $(n-1)^2$; and for any $k, 3\leqslant k \leqslant n$, the value $\frac{n(n-2k+1)}{2}$ is an eigenvalue of $T_n$ with multiplicity at least $\frac{n!}{n(n-k)!(k-1)!}$.
\end{theorem}
This theorem, among other things, gives an idea of what the spectrum of the Transposition graph looks like. However, an explicit description of the spectrum is unknown. The next theorem concerns the arrangement of eigenvalues around zero in the spectrum of this graph.
\begin{theorem} \label{10}
For any integer $k\geqslant 0$, there exists $n_0$ such that for any $n\geqslant n_0$ and any $m \in \{0, \dots, k\}$, $m \in Spec(T_n)$. \end{theorem}
Thus, for large enough $n$, this result shows the existence of all integers in $Spec(T_n)$ up to some upper bound. Moreover, since $T_n$ is bipartite, $-m \in Spec(T_n)$ as well. To prove this theorem we use basic facts from the representation theory of the symmetric group. We also prove new results on a correspondence between eigenvalues of the graph $T_n$ and partitions of $n$. These technical results are presented in Section~\ref{Sec2} along with definitions and notation. In particular, it is proved that zero is an eigenvalue of $T_n$ for any $n\neq2$, and one is an eigenvalue of $T_n$ for any odd $n\geqslant 7$ and for any even $n \geqslant 14$. Then we prove Theorem~\ref{10} in Section~\ref{Sec3}. Finally, in Section~\ref{Sec4} we determine exact values of the third and the fourth largest eigenvalues of the graph $T_n$ and present their multiplicities.
\section{Preliminaries}\label{Sec2}
\subsection{Basic facts}
Let $G$ be a finite group with an identity element $1_G$, and let $S$ be its generating set. Then a Cayley graph $\Gamma=Cay(G,S)$ is called {\it normal} if its generating set $S$ is closed under conjugation, i.~e. $S$ is a union of conjugacy classes of $G$. The following theorem makes it possible to compute the eigenvalues and their multiplicities of any normal Cayley graph $\Gamma$ in terms of the complex character values of $G$.
\begin{theorem}\label{Z-88} {\rm \cite[Theorem~1]{Z88}} Let $G$ be a finite group with $s$ conjugacy classes and let $\{\chi_1,\chi_2,\ldots \chi_s \}$ be the set of all irreducible complex characters of $G$. Then the eigenvalues $\lambda_i, \ i=1,2,\ldots,s$, of any normal Cayley graph $\Gamma=Cay(G,S)$ are given by the following expression:
\begin{equation}\label{eigen_expr}
\lambda_i=\sum_{g\in S} \frac{\chi_i(g)}{\chi_i(1_G)},
\end{equation}
and the multiplicity $mul(\lambda_i)$ of $\lambda_i$ is given by the formula:
\begin{equation}\label{mul_expr}
mul(\lambda_i)=\sum_{\substack{j=1,\dots,s \\ \lambda_j=\lambda_i}} \chi_j(1_G)^2.
\end{equation}
\end{theorem}
It was shown in~\cite{KY97} that the conditions of Theorem~\ref{Z-88} hold for the Transposition graph $T_n=Cay(\mathrm{Sym}_n,T)$, where $T$ is the set of all transpositions. Moreover, the following useful expressions were obtained.
It is a well-known fact (see~\cite{Sa01}) that there is a one-to-one correspondence between the irreducible complex characters $\chi_i$, $i=1,2,\ldots,p(n)$, of the symmetric group $\mathrm{Sym}_n$ and the partitions of $n$, where $p(n)$ is the number of partitions of $n$. Let a nonincreasing sequence $(n_1,n_2,\ldots,n_k), \ k\geqslant 1$, where $\sum_{j=1}^k n_j=n$, be the partition ${\bf i}=(n_1,\dots,n_k)\vdash n$ of $n$ corresponding to an irreducible complex character $\chi_i$. Then the following expression holds:
\begin{equation}\label{Formula3-KY}
\frac{\chi_i(\tau)}{\chi_i(I_n)}=\sum_{j=1}^k \frac{n_j(n_j-2j+1)}{n(n-1)},
\end{equation}
where $\tau$ is a transposition and $I_n$ is the identity permutation. Since the generating set $T$ of $T_n$ consists of $\frac{n(n-1)}{2}$ transpositions, equations~(\ref{eigen_expr}) and~(\ref{Formula3-KY}) give the following expression for the eigenvalue $\lambda_{\bf i}$ corresponding to the partition ${\bf i}$:
\begin{equation}\label{transp_eigen}
\lambda_{\bf i} = \sum_{j=1}^k \frac{n_j(n_j-2j+1)}{2}.
\end{equation}
Moreover, the last expression is bounded as follows:
\begin{equation}\label{eigen_ineq}
\sum_{j=1}^k \frac{n_j(n_j-2j+1)}{2} \leqslant \frac{(n-n_k)(n-n_k+1)}{2}+\frac{n_k(n_k-2k+1)}{2}.
\end{equation}
By Theorem~\ref{Z-88}, to compute multiplicities of eigenvalues of the Transposition graph $T_n$ we have to be able to compute $\chi_i(I_n)$. It is known (see~\cite{Sa01} for more details) that this value can be computed from the Young diagram associated with the partition of $n$ by the Frame-Robinson-Thrall hook-length formula:
\begin{equation}\label{hook_formula}
\chi_i(I_n)=\frac{n!}{\prod_{t=1}^{k}\prod_{j=1}^{n_t} h_{tj}},
\end{equation}
where $h_{tj}$ is the hook-length of a box $(t,j)$ in a Young diagram.
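As an illustration of formulas~(\ref{transp_eigen}) and~(\ref{hook_formula}), consider the partition $(n-1,1)\vdash n$. By~(\ref{transp_eigen}),
$$\lambda_{(n-1,1)}=\frac{(n-1)(n-2)}{2}+\frac{1\cdot(1-4+1)}{2}=\frac{n(n-3)}{2},$$
and since the hook-lengths of $(n-1,1)$ are $n,n-2,n-3,\dots,1$ in the first row and $1$ in the second row, formula~(\ref{hook_formula}) gives
$$\chi_{(n-1,1)}(I_n)=\frac{n!}{n\cdot(n-2)!}=n-1,$$
in agreement with the second largest eigenvalue $\frac{n(n-3)}{2}$ of multiplicity $(n-1)^2$ in Theorem~\ref{KY-08}.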
\subsection{New technical results}
We start by showing that the eigenvalue zero is in the spectrum of the graph $T_n$ for any $n\neq 2$. In what follows, we use the notation $(n_1,\dots,n_k,1\times t)$ for a partition in which $1$ appears $t$ times, where $t \geqslant 0$.
\begin{lemma}\label{lemma1}
For any odd $n\geqslant 1$, the partition $\left(\frac{n+1}{2}, 1\times\frac{n-1}{2}\right)$ corresponds to the eigenvalue zero of the Transposition graph $T_n$. For any even $n\geqslant 4$, the partition $\left(\frac{n}{2}, 2, 1\times \frac{n-4}{2}\right)$ corresponds to the eigenvalue zero of $T_n$.
\end{lemma}
\begin{proof} We prove the lemma by a direct substitution of partitions into the expression~(\ref{transp_eigen}). Indeed, if $n$ is odd then we have:
$$\lambda_{\left(\frac{n+1}{2},1\times\frac{n-1}{2}\right)}=\frac{1}{2}\left(\frac{n+1}{2}\cdot \left(\frac{n+1}{2}-2+1\right) \right) + \frac{1}{2}\sum\limits_{j=2}^{\frac{n-1}{2}+1}(1-2j+1)=$$ $$\frac{n^2-1}{8}-\frac{n^2-1}{8}=0,$$
and if $n$ is even then we have:
$$\lambda_{\left(\frac{n}{2},2,1\times\frac{n-4}{2}\right)}=\frac{1}{2}\cdot\frac{n}{2}\cdot\left(\frac{n}{2}-2+1\right)+\frac{1}{2}\cdot2\cdot(2-4+1)+\frac{1}{2}\sum\limits_{j=3}^{\frac{n-4}{2}+2}(1-2j+1)=$$
$$=\frac{n\cdot(n-2)}{8}-1-\frac{(n+2)\cdot(n-4)}{8}=0.$$
Note that $\left(\frac{n+1}{2},1\times\frac{n-1}{2}\right)$ is a partition of $n$ for any odd $n\geqslant 1$, and
$(\frac{n}{2},2,1\times \frac{n-4}{2})$ is a partition of $n$ for any even $n\geqslant 4$. \hfill $\square$ \end{proof}
\begin{corollary} \label{cor-1}
Zero is an eigenvalue of the Transposition graph $T_n$ for any $n\neq2$.
\end{corollary}
\begin{proof} For $n=2$ there are only two partitions: $(2)$ and $(1,1)$. Substituting these partitions into the expression~(\ref{transp_eigen}) we have:
$$\lambda_{(2)}=\frac{1}{2} \cdot 2\cdot(2-2+1)=1 \neq 0,$$
and
$$\lambda_{(1,1)}=\frac{1}{2}\cdot 1 \cdot (1-2+1)+\frac{1}{2} \cdot 1 \cdot (1-4+1)=-1 \neq 0.$$
Thus, zero is not an eigenvalue of $T_n$ when $n=2$. However, by Lemma~\ref{lemma1}, for any $n\geqslant 3$ we have $0\in Spec(T_n)$. \hfill $\square$
\end{proof}\\
A similar result is obtained for the eigenvalue one.
\begin{lemma}
For any odd $n\geqslant 7$, the partition $\left(\frac{n-1}{2},3,1\times \frac{n-5}{2}\right)$ corresponds to the eigenvalue one of the Transposition graph $T_n$. For any even $n \geqslant 14$, the partition $\left(\frac{n-6}{2},4,4,2,1\times \frac{n-14}{2}\right)$ corresponds to the eigenvalue one of $T_n$.
\end{lemma}
\begin{proof} We prove the lemma by a direct substitution of partitions into the expression~(\ref{transp_eigen}). Indeed, if $n$ is odd then we have:
$$\lambda_{\left(\frac{n-1}{2},3,1\times \frac{n-5}{2}\right)}=\frac{1}{2}\cdot\frac{n-1}{2}\cdot\left(\frac{n-1}{2}-2+1\right)+ \frac{1}{2}\cdot3\cdot(3-4+1)+$$
$$\frac{1}{2}\cdot\sum\limits_{j=3}^{\frac{n-5}{2}+2}(1-2j+1)=\frac{(n-1)\cdot(n-3)}{8}+0-\frac{(n+1)\cdot(n-5)}{8}=\frac{8}{8}=1,$$
and if $n$ is even then we have:
$$\lambda_{\left(\frac{n-6}{2},4,4,2,1\times \frac{n-14}{2}\right)}=\frac{1}{2}\cdot\frac{n-6}{2}\cdot(\frac{n-6}{2}-2+1)+\frac{1}{2}\cdot4\cdot(4-4+1)+\frac{1}{2}\cdot4\cdot(4-6+1)+$$
$$+\frac{1}{2}\cdot2\cdot(2-8+1)+\frac{1}{2}\cdot\sum\limits_{j=5}^{\frac{n-14}{2}+4}(1-2j+1)=\frac{(n-6)\cdot(n-8)}{8}-5-\frac{n\cdot(n-14)}{8}=6-5=1.$$
Note that $\left(\frac{n-1}{2},3,1\times \frac{n-5}{2}\right)$ is a partition of $n$ for any odd $n\geqslant 7$, and $\left(\frac{n-6}{2},4,4,2,1\times \frac{n-14}{2}\right)$ is a partition of $n$ for any even $n\geqslant 14$. \hfill $\square$
\end{proof}\\
The following two technical lemmas are used in Section~\ref{Sec3} to prove Theorem~\ref{10}.
\begin{lemma}\label{lemma3}
If $n \geqslant 7$ is odd, the partition $(\frac{n-2\lambda+1}{2}, \lambda+2, 2\times (\lambda-1), 1\times \frac{n-4\lambda-1}{2})$ corresponds to the eigenvalue $\lambda \in \mathbb{N}$, where $1\leqslant \lambda \leqslant \frac{n-3}{4}$.
\end{lemma}
\begin{proof} By a direct substitution of the partition into the expression~(\ref{transp_eigen}) we immediately have:
$$\lambda_{\left(\frac{n-2\lambda+1}{2}, \lambda+2, 2\times (\lambda-1), 1\times \frac{n-4\lambda-1}{2}\right)}=$$
$$=\underbrace{\frac{1}{2}\left(\frac{n-2\lambda+1}{2}\right)\left(\frac{n-2\lambda+1}{2}-2+1\right)}_{(1)}+\underbrace{\frac{1}{2}(\lambda+2)(\lambda+2-2\cdot2+1)}_{(2)}+$$
$$+\underbrace{\frac{1}{2}\sum\limits_{j=3}^{\lambda+1}2(2-2j+1)}_{(3)}+\underbrace{\frac{1}{2}\sum\limits_{j=\lambda+2}^{\lambda+2+\frac{n-4\lambda-1}{2}-1}1(1-2j+1)}_{(4)},$$
where after calculations we have:
\begin{enumerate}[(1)]
\item=$\frac{1}{8}(n-2\lambda+1)(n-2\lambda-1);$
\item=$\frac{1}{2}(\lambda+2)(\lambda-1);$
\item=$\sum\limits_{j=3}^{\lambda+1}(3-2j)=\left(\frac{(3-6)+(3-2(\lambda+1))}{2}\right)(\lambda-1)=-(\lambda^2-1);$
\item=$\sum\limits_{j=\lambda+2}^{\lambda+\frac{n-4\lambda-1}{2}+1}(1-j)=\left(\frac{1-\lambda-2+1-(\lambda+1+\frac{n-4\lambda-1}{2})}{2}\right)\frac{n-4\lambda-1}{2}=-\frac{(n+1)(n-4\lambda-1)}{8}.$
\end{enumerate}
Finally, putting all the members of the expression together we have:
$$\frac{1}{8}(n-2\lambda+1)(n-2\lambda-1)+\frac{1}{2}(\lambda+2)(\lambda-1)-(\lambda^2-1)-\frac{(n+1)(n-4\lambda-1)}{8}=\lambda.$$
It is easy to see that $\left(\frac{n-2\lambda+1}{2},\lambda+2,2\times(\lambda-1),1\times\frac{n-4\lambda-1}{2}\right)$ is a partition if and only if $\frac{n-2\lambda+1}{2}\geqslant \lambda+2$ and $\lambda\geqslant 1$. Therefore, if $\lambda \leqslant \frac{n-3}{4}$ then $\lambda \in Spec(T_n)$. Since $\lambda \geqslant 1$, this implies $\frac{n-3}{4} \geqslant 1$. Thus, $n \geqslant 7$ which completes the proof. \hfill $\square$
\end{proof}\\
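For example, for $n=11$ and $\lambda=2$, Lemma~\ref{lemma3} gives the partition $(4,4,2,1)\vdash 11$, and indeed, by~(\ref{transp_eigen}),
$$\lambda_{(4,4,2,1)}=\frac{4\cdot 3}{2}+\frac{4\cdot 1}{2}+\frac{2\cdot(2-6+1)}{2}+\frac{1\cdot(1-8+1)}{2}=6+2-3-3=2.$$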
\begin{lemma}\label{lemma4}
If $n\geqslant 14$ is even, the partition $(\frac{n-6\lambda}{2},2\lambda+2,\lambda+3,3\times (\lambda-1),2\times\lambda,1\times \frac{n-10\lambda-4}{2})$ corresponds to the eigenvalue $\lambda\in \mathbb{N}$, where $1\leqslant \lambda \leqslant \frac{n-4}{10}$.
\end{lemma}
\begin{proof} Using the same arguments as in the proof of Lemma~\ref{lemma3}, we substitute the partition into the expression~(\ref{transp_eigen}) and have:
$$\lambda_{\left(\frac{n-6\lambda}{2},2\lambda+2,\lambda+3,3\times(\lambda-1),2\times\lambda,1\times\frac{n-10\lambda-4}{2}\right)}=$$
$$=\underbrace{\frac{1}{2}\left(\frac{n-6\lambda}{2}\right)\left(\frac{n-6\lambda}{2}-2+1\right)}_{(1)} + \underbrace{\frac{1}{2}(2\lambda+2)(2\lambda+2-4+1)}_{(2)}+$$
$$+\underbrace{\frac{1}{2}(\lambda+3)(\lambda+3-6+1)}_{(3)} +
\underbrace{\frac{1}{2}\sum\limits_{j=4}^{4+\lambda-2}3(3-2j+1)}_{(4)}+$$
$$+\underbrace{\frac{1}{2}\sum\limits_{j=\lambda+3}^{2\lambda+3-1}2(2-2j+1)}_{(5)} +
\underbrace{\frac{1}{2}\sum\limits_{j=2\lambda+3}^{2\lambda+3+\frac{n-10\lambda-4}{2}-1}1(1-2j+ 1)}_{(6)},$$
and after calculations we obtain:
\begin{enumerate}[(1)]
\item=$\frac{1}{2}(\frac{n-6\lambda}{2})(\frac{n-6\lambda}{2}-2+1)=\frac{1}{8}(n-6\lambda)(n-6\lambda-2);$
\item=$\frac{1}{2}(2\lambda+2)(2\lambda+2-4+1)=(\lambda+1)(2\lambda-1);$
\item=$\frac{1}{2}(\lambda+3)(\lambda+3-6+1)=\frac{1}{2}(\lambda+3)(\lambda-2);$
\item=$\frac{1}{2}\sum\limits_{j=4}^{4+\lambda-2}3(3-2j+1)=-\frac{1}{2}(3\lambda+6)(\lambda-1);$
\item=$\frac{1}{2}\sum\limits_{j=\lambda+3}^{2\lambda+3-1}2(2-2j+1)=-(3\lambda+2)\lambda;$
\item=$\frac{1}{2}\sum\limits_{j=2\lambda+3}^{2\lambda+3+\frac{n-10\lambda-4}{2}-1}1\cdot(1-2j+1)=-\frac{(n+2-2\lambda)(n-10\lambda-4)}{8},$
\end{enumerate}
for which a summation gives $\lambda$.
Note that the expression $\left(\frac{n-6\lambda}{2},2\lambda+2,\lambda+3,3\times(\lambda-1),2\times\lambda,1\times\frac{n-10\lambda-4}{2}\right)$ is a partition if and only if $\frac{n-6\lambda}{2}\geqslant 2\lambda+2$ and $\lambda\geqslant 1$. Therefore, if $\lambda \leqslant \frac{n-4}{10}$ then $\lambda \in Spec(T_n)$. Since $\lambda \geqslant 1$, this implies $\frac{n-4}{10} \geqslant 1$ which gives $n \geqslant 14$ and completes the proof. \hfill $\square$
\end{proof}
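For example, for $n=14$ and $\lambda=1$, Lemma~\ref{lemma4} gives the partition $(4,4,4,2)\vdash 14$, and indeed, by~(\ref{transp_eigen}),
$$\lambda_{(4,4,4,2)}=\frac{4\cdot 3}{2}+\frac{4\cdot 1}{2}+\frac{4\cdot(4-6+1)}{2}+\frac{2\cdot(2-8+1)}{2}=6+2-2-5=1.$$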
\section{Proof of Theorem~\ref{10}}\label{Sec3}
Let us choose $n_0$ such that
\begin{equation}\label{n0}
{\mathrm{min}}\left(\frac{n_0-3}{4},\frac{n_0-4}{10}\right)=k.
\end{equation}
If $k\geqslant 0$ then $n_0\geqslant 4$, and since $\frac{n_0-4}{10}\leqslant\frac{n_0-3}{4}$ for every $n_0\geqslant 3$, equality~(\ref{n0}) is equivalent to $\frac{n_0-4}{10}=k$. Therefore, \begin{equation}\label{n01}
n_0 = 10k + 4.
\end{equation}
Now we prove that for any $n\geqslant n_0$ and for any $m\in\{0,\dots,k\}$, $m \in Spec(T_n)$.
Since $n \geqslant n_0 \geqslant 4$, by Lemma~\ref{lemma1} the eigenvalue zero is in the spectrum.
If $n$ is odd then by Lemma~\ref{lemma3}, for any $1\leqslant m \leqslant \frac{n-3}{4}$, we have $m \in Spec(T_n)$. It follows from~(\ref{n01}) that $\frac{n-3}{4} \geqslant \frac{n_0-3}{4} \geqslant \frac{10k+4-3}{4}>m$ for any $m\in\{1,\dots,k\}$. Therefore, $m\in Spec(T_n)$ for any $m\in \{0,\dots,k\}$.
If $n$ is even then by Lemma~\ref{lemma4}, for any $1\leqslant m \leqslant \frac{n-4}{10}$, we have $m \in Spec(T_n)$. It follows from~(\ref{n01}) that $\frac{n-4}{10}\geqslant \frac{n_0-4}{10} \geqslant \frac{10k+4-4}{10} \geqslant m$ for any $m\in \{1,\dots,k\}$. Again, we have that $m \in Spec(T_n)$ for any $m \in \{0, \dots, k\}$.
Thus, for any $n\geqslant n_0=10k+4$ and for any $m\in\{0,\dots,k\}$, $m\in Spec(T_n)$, which completes the proof of Theorem~\ref{10}.
\hfill $\square$
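For example, for $k=1$ we get $n_0=14$, so $0$ and $1$ (and, since $T_n$ is bipartite, also $-1$) belong to $Spec(T_n)$ for every $n\geqslant 14$, in accordance with Corollary~\ref{cor-1} and the lemma on the eigenvalue one from Section~\ref{Sec2}.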
\section{The third and the fourth largest eigenvalues}\label{Sec4}
In this section we present exact values of the third and the fourth largest eigenvalues of the Transposition graph $T_n, n\geqslant 4,$ and their multiplicities.
\begin{theorem} \label{3} The third largest eigenvalue of the Transposition graph $T_n, n\geqslant 4,$ is $\frac{(n-1)(n-4)}{2}$ with multiplicity $\left(\frac{n(n-3)}{2}\right)^2$.
\end{theorem}
\begin{proof} We say that a partition ${\bf i_1}=(n_1,n_2,\dots,n_k)\vdash n$ is greater than a partition ${\bf i_2}=(m_1,m_2,\dots,m_l) \vdash n$, and write ${\bf i_1}>{\bf i_2}$, if the eigenvalue $\lambda_{\bf i_1}$ corresponding to $\bf i_1$ is greater than the eigenvalue $\lambda_{\bf i_2}$ corresponding to $\bf i_2$.
By Theorem~\ref{KY-08}, the first and the second largest eigenvalues are $\frac{n(n-1)}{2}$ and $\frac{n(n-3)}{2}$, respectively. Moreover, these eigenvalues are associated with the partitions $(n)$ and $(n-1,1)$, correspondingly~\cite{KY97}. Obviously, $(n)>(n-1,1)>(n-2,2)$.
Our main goal now is to show that the partition $(n-2,2)$ is greater than any other partition except $(n)$ and $(n-1,1)$, and that it is the only partition associated with the third largest eigenvalue of $T_n, n\geqslant 4$.
To show this, it is sufficient to prove that the following two inequalities hold:
\begin{equation}\label{ineq_part_2}
(n-2,2)>(n-k,k)
\end{equation}
for any $k>2$ and $k\leqslant \frac{n}{2}$, and
\begin{equation}\label{ineq_part_k}
(n-2,2)>(n_1,\dots,n_k) \vdash n
\end{equation}
for any $k \geqslant 3$.
To prove~(\ref{ineq_part_2}), let us consider partitions $(n-k,k) \vdash n$ and $(n-k-1,k+1) \vdash n$. Then, the following inequality
$$(n-k,k)>(n-k-1,k+1)$$
holds if $n>2k$. Indeed, by~(\ref{transp_eigen}) we have to consider the inequality $(n-k)(n-k-2+1)+k(k-4+1)>(n-k-1)(n-k-1-2+1)+(k+1)(k+1-4+1),$ which gives $n>2k$ after reductions. Moreover, the condition $(n-k-1,k+1) \vdash n$ implies that $n-k-1 \geqslant k+1$, which gives us $n\geqslant 2k+2>2k$.
Now let us show that inequality~(\ref{ineq_part_k}) holds for any $k \geqslant 3$. By~(\ref{transp_eigen}), we have the following expression for the eigenvalue corresponding to the partition $(n-2,2)$:
$$\lambda_{(n-2,2)}=\frac{(n-2)(n-2-2+1)}{2}+\frac{2(2-2\cdot 2+1)}{2}=\frac{(n-1)(n-4)}{2}.$$
Since $(n_1,\dots,n_k)\vdash n$ and $k\geqslant 3$, we have $n_k \leqslant \frac{n}{3}$. Therefore, using the inequality (\ref{eigen_ineq}) leads to the following expression:
$$\lambda_{(n_1,\dots, n_k)} \leqslant \frac{1}{2}\left(\left(n-\frac{n}{3}\right)\left(n-\frac{n}{3}+1\right)+\frac{n}{3}\left(\frac{n}{3}-2\cdot 3+1\right)\right),$$
and finally we have:
$$\frac{(n-1)(n-4)}{2} > \frac{1}{2}\left(\left(n-\frac{n}{3}\right)\left(n-\frac{n}{3}+1\right)+\frac{n}{3}\left(\frac{n}{3}-2\cdot 3+1\right)\right),$$
which after reductions gives $(n-3)^2> 0$ holding for any $n \geqslant 4$.
Hence, taking into account Theorem~\ref{KY-08} and the inequalities~(\ref{ineq_part_2}),~(\ref{ineq_part_k}), for any $n \geqslant 4$, we have:
$$\lambda_{(n)}>\lambda_{(n-1,1)}>\lambda_{(n-2,2)}>\lambda_{\bf i},$$
where ${\bf i} \in \{{\bf i_j}=(n_1,\dots,n_k)\vdash n \ | \ {\bf i_j} \notin \{(n),(n-1,1),(n-2,2)\}\}$.
Thus, it is shown that $\frac{(n-1)(n-4)}{2}$ is the third largest eigenvalue of $T_n$, associated with the partition $(n-2,2)$. Now let us compute its multiplicity.
It is easy to see that the hook-lengths of the partition $(n-2,2)$ are given as $h_{11}=n-1$, $h_{12}=n-2$, $h_{21}=2$, $h_{22}=1$, and $h_{1j}=n-j-1$ for any $j\in\{3,\ldots,n-2\}$.
Hence, by equations~(\ref{mul_expr}) and~(\ref{hook_formula}) we immediately have:
$${\rm mul}\left(\frac{(n-1)(n-4)}{2}\right) = \left(\frac{n!}{2\cdot(n-1)\cdot(n-2)\cdot(n-4)!}\right)^2 = \left(\frac{n(n-3)}{2}\right)^2,$$
which gives the multiplicity of the third largest eigenvalue and completes the proof. \hfill $\square$
\end{proof}
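For example, for $n=4$ the third largest eigenvalue is $\frac{3\cdot 0}{2}=0$, attained by the partition $(2,2)$, and its multiplicity is $\left(\frac{4\cdot 1}{2}\right)^2=4$, which agrees with the value ${\rm mul}(0)=4$ for $n=4$ in Table~1.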
\begin{theorem}\label{4} The fourth largest eigenvalue of the Transposition graph $T_n, n > 6,$ is $\frac{n(n-5)}{2}$ with multiplicity $\left(\frac{(n-1)(n-2)}{2}\right)^2$.
\end{theorem}
\begin{proof} Our main goal is to show that the partition $(n-2,1,1)$ is greater than any other partition except $(n),(n-1,1)$ and $(n-2,2)$. Moreover, it is the only partition associated with the fourth largest eigenvalue of $T_n, \ n>6$. To show this, it is sufficient to prove that the following three inequalities hold:
\begin{equation}\label{t4_1ineq}
(n-2,1, 1)>(n_1, \dots, n_k),
\end{equation}
for any $k\geqslant 4$ and $(n_1,\dots,n_k)\vdash n$, where $n>6$;
\begin{equation} \label{t4_2ineq}
(n-2,1,1)>(n_1,n_2,n_3),
\end{equation}
for any $(n_1,n_2,n_3)\vdash n$ and $(n_1,n_2,n_3)\neq (n-2,1,1)$;
\begin{equation}\label{t4_3ineq}
(n-2,1,1)>(n_1,n_2),
\end{equation}
if $n_1 \leqslant n-3$ for any $(n_1,n_2)\vdash n$, where $n>6$.
First, let us show that inequality~(\ref{t4_1ineq}) holds for any $k\geqslant 4$. By~(\ref{transp_eigen}), we have the following expression for the eigenvalue corresponding to the partition $(n-2,1, 1)$:
$$\lambda_{(n-2,1,1)}=\frac{(n-2)(n-2-2+1)}{2}+\frac{1(1-2\cdot 2+1)}{2}+\frac{1(1-2\cdot 3+1)}{2}=\frac{n(n-5)}{2}.$$
Since $(n_1,\dots,n_k)\vdash n$ and $k\geqslant 4$, we have $n_k \leqslant \frac{n}{4}$. Therefore, using the inequality (\ref{eigen_ineq}) leads to the following expression:
\begin{align*}
\lambda_{(n_1,\dots,n_k)} \leqslant \sum\limits_{j=1}^k \frac{n_j(n_j - 2j + 1)}{2} \leqslant \frac{(n-\frac{n}{4})(n-\frac{n}{4}-1)}{2}+\frac{\frac{n}{4}(\frac{n}{4}-2\cdot 4 + 1)}{2}= \\ =\frac{n \cdot (5n-20)}{16}.
\end{align*}
Comparing $\frac{n(n-5)}{2}$ and $\frac{n \cdot (5n-20)}{16}$ gives $3n>20$ which holds for any integer $n>6$.
To prove inequality~(\ref{t4_2ineq}), let us write an expression for the eigenvalue corresponding to the partition $(n_1, n_2, n_3)$:
\begin{equation}\label{t4_2_p}
\begin{split}
\lambda_{(n_1,n_2,n_3)}=\frac{n_1\cdot(n_1-2+1)}{2}+\frac{n_2\cdot(n_2-4+1)}{2}+\frac{n_3\cdot(n_3-6+1)}{2} \\
= \frac{n_1\cdot(n_1-1)}{2}+\frac{n_2\cdot(n_2-3)}{2}+\frac{n_3\cdot(n_3-5)}{2}.
\end{split}
\end{equation}
Thus,~(\ref{t4_2ineq}) is equivalent to the following inequality:
$$n^2-5 n>n_1\cdot(n_1-1)+n_2\cdot(n_2-3)+n_3\cdot(n_3-5)$$ for any $(n_1,n_2,n_3) \vdash n$. Moreover, since $n_3=n-n_1-n_2$, then we have:
$$n^2-5n>n_1\cdot(n_1-1)+n_2\cdot(n_2-3)+(n-n_1-n_2)(n-n_1-n_2-5),$$
which after calculations can be written as follows:
$$2n_1^2+2n_2^2+4n_1+2n_2-2nn_1-2nn_2+2n_1n_2< 0$$
or as follows:
$$n_1^2+n_2^2+2n_1+n_2-(nn_1+nn_2-n_1n_2)< 0.$$
If we rewrite the last inequality in the form:
$$n\cdot(n_1+n_2)>n_1^2+n_2^2+2n_1+n_2+n_1n_2,$$
then since $n=n_1+n_2+n_3$ we immediately have the inequality:
\begin{equation}\label{t_4_in2_p}
n_1n_2+n_3(n_1+n_2)>2n_1+n_2.
\end{equation}
Let us show that the last inequality is true. Indeed, if $n_3>1$ then $n_1n_2+n_3(n_1+n_2)>n_1n_2+n_1+n_2$, and since $n_2\geqslant n_3>1$ in this case, we immediately have $n_1n_2+n_1+n_2>n_1+n_1+n_2=2n_1+n_2$, which means that~(\ref{t_4_in2_p}) holds. If $n_3=1$, then~(\ref{t_4_in2_p}) becomes $n_1n_2>n_1$, which, since $n_1>0$, is equivalent to $n_2>1$. Thus,~(\ref{t_4_in2_p}) holds for all partitions of the form $(n_1,n_2,1)\vdash n$ with $n_2\geqslant 2$. If $n_2=1$ we have the partition $(n-2,1,1)$, and this completes the verification of~(\ref{t_4_in2_p}).
Now we prove inequality~(\ref{t4_3ineq}). Let us consider the expression~(\ref{transp_eigen}) corresponding to the partition $(n_1,n_2,0)$; it is the same as considering the partition $(n_1,n_2)$. Thus, if we prove inequality~(\ref{t_4_in2_p}) for $(n_1,n_2,0)$, then we show that~(\ref{t4_3ineq}) is true. Indeed, if $n_3=0$ then~(\ref{t_4_in2_p}) is rewritten as $n_1n_2>2n_1+n_2$, or as $n_1(n_2-2)>n_2$ (note that the condition $n_1\leqslant n-3$ implies $n_2\geqslant 3$). Since $n_1\geqslant n_2$, this holds for $n_2>3$. If $n_2=3$ we get $(n-3)(3-2)>3$. Hence, it is true for any $n>6$, which means that~(\ref{t4_3ineq}) holds.
Therefore, we have shown that the fourth largest eigenvalue is $\frac{n(n-5)}{2}$ and that it is attained only by the partition $(n-2,1,1)$ for any $n>6$. Since the hook-lengths of $(n-2,1,1)$ are $n,n-3,n-4,\dots,1$ (first row) together with $2$ and $1$ (the remaining boxes of the first column), by~(\ref{mul_expr}) and~(\ref{hook_formula}) we immediately obtain its multiplicity as follows:
$${\rm mul}\bigg(\frac{n(n-5)}{2}\bigg)=\bigg(\frac{n!}{2\cdot n\cdot (n-3)!}\bigg)^2=\bigg(\frac{(n-1)\cdot(n-2)}{2}\bigg)^2,$$
which completes the proof. \hfill $\square$
\end{proof}
\section{Discussions and further research}
Although we know something about the eigenvalues of the Transposition graph, not much is known about their multiplicities. In particular, there are no explicit formulas for the multiplicities of the eigenvalues zero and one. Computational results on their multiplicities are presented in Table~1 and Table~2.
As one can see from Table~1, the behavior of the multiplicity of the eigenvalue zero is quite unpredictable. For instance, for $n=9$ its multiplicity is smaller than for $n=8$. We know that for a given $n$ the multiplicities of eigenvalues depend on the number of partitions of $n$, and as $n$ grows the number of corresponding partitions should grow as well. To understand this growth for any eigenvalue in the spectrum, or even to find an approach for obtaining explicit formulas for the multiplicities of the eigenvalues from Theorem~\ref{10}, is one of the challenging problems.
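The values in Tables~1 and~2 can be recomputed directly from formulas~(\ref{mul_expr}),~(\ref{transp_eigen}) and~(\ref{hook_formula}). The following short Python sketch (the function names and the sample values of $n$ are only illustrative) enumerates the partitions of $n$, evaluates the eigenvalue and the dimension $\chi_i(I_n)$ of each partition, and sums the squared dimensions over partitions with equal eigenvalues.
\begin{verbatim}
from fractions import Fraction
from math import factorial
from collections import defaultdict

def partitions(n, max_part=None):
    # all partitions of n as non-increasing tuples
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def eigenvalue(part):
    # formula (transp_eigen): sum_j n_j (n_j - 2j + 1) / 2
    return sum(Fraction(nj * (nj - 2 * j + 1), 2)
               for j, nj in enumerate(part, start=1))

def dimension(part):
    # chi_i(I_n) via the hook-length formula (hook_formula)
    n, prod = sum(part), 1
    for t, row in enumerate(part):
        for j in range(row):
            arm = row - j - 1
            leg = sum(1 for r in part[t + 1:] if r > j)
            prod *= arm + leg + 1
    return factorial(n) // prod

def multiplicities(n):
    # formula (mul_expr): mul(lambda) = sum of chi_j(1_G)^2 over lambda_j = lambda
    mul = defaultdict(int)
    for part in partitions(n):
        mul[eigenvalue(part)] += dimension(part) ** 2
    return mul

print(multiplicities(5)[0])   # 36, as in Table 1
print(multiplicities(7)[1])   # 441, as in Table 2
\end{verbatim}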
\vspace{5mm}
\begin{table}[h!]
\centering
\captionsetup[table]{labelformat=empty}
\begin{tabular}{ |c|c|c|c|c|c|c|c|c|c|c|}
\hline
$n$ & 1 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 \\
\hline
${\rm mul(0)}$ & 1 & 4 & 4 & 36 & 256 & 400 & 9864 & 6664 & 790528 & 1474848 \\
\hline
\end{tabular}
\captionof{table}{Table 1: Multiplicities of the eigenvalue zero for $n\leqslant 11$, $n\neq 2$}
\end{table}
\begin{table}[h!]
\centering
\captionsetup[table]{labelformat=empty}
\begin{tabular}{ |c|c|c|c|c|c|c|}
\hline
$n$ & 7 & 9 & 11 & 13 & 15 & 17 \\
\hline
${\rm mul(1)}$ & 441 & 46656 & 3052225 & 87609600 & {\small 2701400625} & {\small 3928998225152} \\
\hline
\end{tabular}
\vspace{3mm}
\begin{tabular}{ |c|c|c|c|c|}
\hline
$n$ & 14 & 16 & 18 & 20\\
\hline
${\rm mul(1)}$ & {\small 566130565} & {\small 301532774400} & {\small 274422662958600} & {\small 86181028874240000}\\
\hline
\end{tabular}
\captionof{table}{Table 2: Multiplicities of the eigenvalue one for some odd $n\geqslant 7$ and some even $n\geqslant 14$}
\end{table}
\section*{Acknowledgements}
The work of Artem Kravchuk was supported by the Mathematical Center in Akademgorodok, under agreement No. 075-15-2019-1613 with the Ministry of Science and Higher Education of the Russian Federation.
| {
"timestamp": "2022-04-08T02:08:03",
"yymm": "2204",
"arxiv_id": "2204.03153",
"language": "en",
"url": "https://arxiv.org/abs/2204.03153",
"abstract": "Transposition graph $T_n$ is defined as a Cayley graph over the symmetric group generated by all transpositions. It is known that all eigenvalues of $T_n$ are integers. However, an explicit description of the spectrum is unknown. In this paper we prove that for any integer $k\\geqslant 0$ there exists $n_0$ such that for any $n\\geqslant n_0$ and any $m \\in \\{0, \\dots, k\\}$, $m$ is an eigenvalue of $T_n$. In particular, it is proved that zero is an eigenvalue of $T_n$ for any $n\\neq2$, and one is an eigenvalue of $T_n$ for any odd $n\\geqslant 7$ and for any even $n \\geqslant 14$. We also present exact values of the third and the fourth largest eigenvalues of $T_n$ with their multiplicities.",
"subjects": "Combinatorics (math.CO)",
"title": "Spectrum of the Transposition graph",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.984810949854636,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7076796203729361
} |
https://arxiv.org/abs/1708.04708 | Generalized Jiang and Gottlieb groups | Given a map $f\colon X \to Y$, we extend a Gottlieb's result to the generalized Gottlieb group $G^f(Y,f(x_0))$ and show that the canonical isomorphism $\pi_1(Y,f(x_0))\xrightarrow{\approx}\mathcal{D}(Y)$ restricts to an isomorphism $G^f(Y,f(x_0))\xrightarrow{\approx}\mathcal{D}^{\tilde{f}_0}(Y)$, where $\mathcal{D}^{\tilde{f}_0}(Y)$ is some subset of the group $\mathcal{D}(Y)$ of deck transformations of $Y$ for a fixed lifting $\tilde{f}_0$ of $f$ with respect to universal coverings of $X$ and $Y$, respectively. | \section*{Introduction}
Throughout this paper, all spaces are path-connected with homotopy types of $CW$-complexes.
We do not distinguish between a map and its homotopy class.
Let $X$ be a connected space, $x_0\in X$ a base-point and $\mathbb{S}^1$ the circle. The \textit{Gottlieb group} $G(X,x_0)$ of $X$ defined in \cite{gottlieb1} is the subgroup of the fundamental group $\pi_1(X,x_0)$ consisting of all elements which can be represented by a map $\alpha \colon \mathbb{S}^1\to X$ such that $\mathrm{id}_X \vee \alpha \colon X\vee \mathbb{S}^1\to X$ extends (up to homotopy) to a map $F \colon X\times \mathbb{S}^1\to X$.
Following \cite{gottlieb1}, we recall that $P(X,x_0)$ is the set of elements of $\pi_1(X,x_0)$ whose Whitehead products
with all elements of all homotopy groups $\pi_m(X,x_0)$ are zero for $m\geq 1$. It turns out that $P(X,x_0)$
forms a subgroup of $\pi_1(X,x_0)$ called the \textit{Whitehead center group} and, by \cite[Theorem~I.4]{gottlieb1}, it holds $G(X,x_0)\subseteq P(X,x_0)$.
Now, given a map $f \colon X\to Y$, in view of \cite{gottlieb} (see also \cite{kim}),
the \textit{generalized Gottlieb group} $G^f(Y,f(x_0))$ is defined
as the subgroup of $\pi_1(Y,f(x_0))$ consisting of all elements which can be
represented by a map $\alpha \colon \mathbb{S}^1\to Y$ such that $f\vee \alpha \colon X\vee \mathbb{S}^1\to Y$ extends (up to homotopy) to a map $F \colon X\times \mathbb{S}^1\to Y$.
The \textit{generalized Whitehead center group} $P^f(Y,f(x_0))$ as defined in \cite{kim} consists of all elements $\alpha\in\pi_1(Y,f(x_0))$ whose Whitehead products $[f\beta,\alpha]$ are zero for all $\beta\in\pi_m(X,x_0)$ with $m\ge 1$. It turns out that $P^f(Y,f(x_0))$ forms a subgroup of $\pi_1(Y,f(x_0))$ and $G^f(Y,f(x_0))\subseteq P^f(Y,f(x_0))\subseteq \mathcal{Z}_{\pi_1(Y,f(x_0))}f_*(\pi_1(X,x_0))$, the centralizer of $f_*(\pi_1(X,x_0))$ in $\pi_1(Y,f(x_0))$.
If $X=Y$ then the group $G^f(Y,f(x_0))$ is considered in \cite[Chapter~II, 3.5~Definition]{jiang}, denoted by $J(f,x_0)$ and called the \textit{Jiang subgroup} of the map $f\colon Y \to Y$. The role played by $J(f,x_0)$ in that theory has been intensively studied in the book \cite{brown} as well. More precisely, it is observed that the group $J(f,x_0)$ acts on the right on the set of all fixed point classes of $f$, and any two equivalent fixed point classes under this action have the same index. Further, Bo-Ju Jiang in \cite[Chapter~II, 3.1~Definition]{jiang} considered also the group $J(\tilde{f}_0)$ for a fixed lifting $\tilde{f}_0$ of $f$ to the universal covering of $Y$ and stressed its importance to the Nielsen--Wecken theory of fixed point classes.
If $f=\mathrm{id}_X$ then, by \cite[Theorem~II.1]{gottlieb1}, the groups $J(f,x_0)$ and $J(\tilde{f}_0)$ are isomorphic and, according to \cite[Chapter~II, 3.6~Lemma]{jiang}, the groups $J(f,x_0)$ and $J(\tilde{f}_0)$ are isomorphic for any self-map $f\colon X\to X$ but no proof is given.
The aim of this paper is to follow the proof of \cite[Theorem~II.1]{gottlieb1} and give not only a proof of \cite[Chapter~II, 3.6~Lemma]{jiang} but also present a proof of its generalized version for any map $f\colon X \to Y$.
The paper is divided into two sections. Section~\ref{sec.1} follows some results from \cite{gottlieb1} and deals with some properties of fibre-preserving maps and deck transformations used in the sequel. In particular, we show the functoriality of the fundamental group via deck transformations. Section~\ref{sec.2} takes up the systematic study of the group $G^f(Y,f(x_0))$. If $X=Y$, $f=\mathrm{id}_X$ and $x_0\in X$ is a base-point then the group $G^f(X,x_0)=G(X,x_0)$ has been described in \cite[Theorem~II.1]{gottlieb1} via the deck transformation group of $X$ and by \cite[Chapter~II, 3.6~Lemma]{jiang} the groups $G^f(X,x_0)=J(f,x_0)$ and $J(\tilde{f}_0)$ are isomorphic for a self-map $f\colon X\to X$.
Denote by $\mathcal{D}(Y)$ the group of all deck transformations of a space $Y$. Given a map $f\colon X \to Y$, write $\mathcal{L}^f(Y)$ for the set of all liftings $\tilde{f}\colon \tilde{X}\to \tilde{Y}$ of $f$ with respect to universal coverings of $X$ and $Y$, respectively. Now, for a fixed $\tilde{f}_0\in \mathcal{L}^f(Y)$, we denote by $\mathcal{D}^{\tilde{f}_0}(Y)$ the set (being a group) of all elements $h\in \mathcal{D}(Y)$ such that $\tilde{f}_0\simeq_{\tilde{H}} h\tilde{f}_0$, where $\tilde{H}\colon \tilde{X}\times I \to \tilde{Y}$ is a fibre-preserving homotopy with respect to the universal covering maps $p\colon \tilde{X}\to X$ and $q\colon \tilde{Y}\to Y$. Then, the main result, Theorem~\ref{qq} generalizes \cite[Theorem~II.1]{gottlieb1} and \cite[Chapter~II, 3.6~Lemma]{jiang} as follows:
\textit{Given $f\colon X \to Y$, the canonical isomorphism $\pi_1(Y,f(x_0))\xrightarrow{\approx}\mathcal{D}(Y)$ restricts to an isomorphism $G^f(Y,f(x_0))\xrightarrow{\approx} \mathcal{D}^{\tilde{f}_0}(Y)$.}
\section{Preliminaries}\label{sec.1}
Let $p\colon X\to A$ and $q\colon Y\to B$ be maps. We say that $f\colon X\to Y$ is a \textit{fibre-preserving map} with respect to $p,q$ provided $p(x)=p(x')$ implies $qf(x)=qf(x')$ for any $x,x'\in X$.
We say that $H\colon X\times I \to Y$ is a \textit{fibre-preserving homotopy} with respect to $p,q$ if $H$ is a fibre-preserving map with respect to $p\times \mathrm{id}_I\colon X\times I \to A\times I$ and $q\colon Y \to B$.
It is clear that the commutativity of a diagram \[ \xymatrix{ X\ar[r]^{f} \ar[d]_p & Y\ar[d]^q \\ A \ar[r]^g& B }\] guarantees that $f$ is a fibre-preserving map.
\begin{remark}
(1) Let $p,q,f$ be maps as above. If $p\colon X \to A$ is surjective then there exists a map $g\colon A\to B$ such that $qf=g p$. In addition, if $p$ is a quotient map, then $g$ is continuous.
(2) Given discrete groups $H$ and $K$, consider actions $H\times X \to X$ and $K\times Y \to Y$ and write $p\colon X \to X/H$ and $q\colon Y \to Y/K$ for the quotient maps. If $f\colon X \to Y$ is a $\varphi$-equivariant map for a homomorphism $\varphi\colon H \to K$ then $f$ is a fibre-preserving map with respect to $p$ and $q$.
\end{remark}
If $f\colon X \to Y$ is a fibre-preserving map and $g=\mathrm{id}_A$ then the map $f$ is a fibrewise map in the sense of \cite[Chapter~1]{james}. However, a fibre-preserving map need not be a fibrewise map, as the following example shows:
\begin{example}
Let $p=q\colon \mathbb{S}^1\times I \to \mathbb{S}^1$ be the projection. Fix $1\neq \lambda\in \mathbb{S}^1$ and define $f\colon \mathbb{S}^1\times I\to \mathbb{S}^1\times I$ by $f(z,t)= (\lambda z,t)$ for $(z,t)\in \mathbb{S}^1\times I$. Then, $f$ is a fibre-preserving map but clearly $qf\neq p$.
\end{example}
Write $\mathcal{D}(X)$ for the group of all deck transformations of $X$ and recall that there is an isomorphism $\mathcal{D}(X)\approx \pi_1(X,x_0)$. Next, given a map $f\colon X\to Y$, consider the set $\mathcal{L}^f(Y)$ of all maps $\tilde{f}\colon \tilde{X}\to \tilde{Y}$ such that the diagram \[ \xymatrix{ \tilde{X}\ar[r]^{\tilde{f}} \ar[d]_p & \tilde{Y}\ar[d]^q \\ X \ar[r]^f & Y }\] is commutative, where $p,q$ are universal covering maps.
Fixing $\tilde{f}_0\in \mathcal{L}^f(Y)$, we follow \cite[Chapter~I, 1.2~Proposition]{jiang} to show:
\begin{proposition}\label{star}Given a map $f\colon X\to Y$, for any lifting $\tilde{f}\colon \tilde{X}\to \tilde{Y}$ of $f$ there is a unique $h\in \mathcal{D}(Y)$ such that $\tilde{f}=h\tilde{f}_0$.
\end{proposition}
\begin{proof} First, fix $x_0\in X$ and $\tilde{x}_0\in p^{-1}(x_0)$, and
write $y_0=f(x_0)$, $\tilde{y}_0=\tilde{f}_0(\tilde{x}_0)$, and
$\tilde{y}=\tilde{f}(\tilde{x}_0)$. Obviously, $\tilde{y}_0,\tilde{y}\in
q^{-1}(y_0)$. Then, there exists a unique $h\in \mathcal{D}(Y)$ with
$h(\tilde{y}_0)=\tilde{y}$, that is, $h\tilde{f}_0(\tilde{x}_0)=
\tilde{f}(\tilde{x}_0)$. Since both $\tilde{f}$ and $h\tilde{f}_0$ are
lifts of $fp\colon \tilde{X}\to Y$, the unique lifting property
guarantees that $\tilde{f}=h\tilde{f}_0$.
\par Now, suppose that $\tilde{f}=h\tilde{f}_0=h'\tilde{f}_0$ for some $h,h'\in \mathcal{D}(Y)$. Then, $h\tilde{f}_0(\tilde{x}_0)=h'\tilde{f}_0(\tilde{x}_0)$ implies $h(\tilde{y}_0)=h'(\tilde{y}_0)$. Consequently, $h=h'$ and the proof is complete.
\end{proof}
For a deck transformation $l\in \mathcal{D}(X)$, we notice that $\tilde{f}_0l$ is also a lifting of $f$. By Proposition~\ref{star}, there exists a unique $h_l\in \mathcal{D}(Y)$ such that $\tilde{f}_0l=h_l\tilde{f}_0$.
Then, we define \[f_*\colon \mathcal{D}(X)\to \mathcal{D}(Y)\] by $f_*(l)=h_l$ for any $l\in \mathcal{D}(X)$. Obviously, the map $f_*$ is a homomorphism. Notice that the map $f_*$ has been already defined in \cite[Chapter~II, 1.1~Definition]{jiang} for any self-map $f\colon X \to X$.
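For example, let $X=Y=\mathbb{S}^1$ and let $f\colon \mathbb{S}^1\to \mathbb{S}^1$ be given by $f(z)=z^d$. Take $p=q\colon \mathbb{R}\to \mathbb{S}^1$, $p(t)=e^{2\pi i t}$, as universal covering maps, so that $\mathcal{D}(\mathbb{S}^1)$ consists of the integer translations $l_m(t)=t+m$, and fix the lifting $\tilde{f}_0(t)=dt$ of $f$. Since $\tilde{f}_0l_m(t)=d(t+m)=l_{dm}\tilde{f}_0(t)$, we get $f_*(l_m)=l_{dm}$; that is, under the identification $\mathcal{D}(\mathbb{S}^1)\approx\pi_1(\mathbb{S}^1)\approx\mathbb{Z}$, the homomorphism $f_*$ is multiplication by $d$, as for the induced homomorphism on fundamental groups.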
Given $\tilde{f}_1,\tilde{f}_2\in \mathcal{L}^f(Y)$, we define $\tilde{f}_1\ast\tilde{f}_2=h_1h_2\tilde{f}_0$, where $\tilde{f}_1=h_1\tilde{f}_0$ and $\tilde{f}_2=h_2\tilde{f}_0$ for $h_1,h_2\in \mathcal{D}(Y)$ as in Proposition~\ref{star}. This leads to a group structure on $\mathcal{L}^f(Y)$ with $\tilde{f}_0$ as the identity element. Notice that the groups $\mathcal{L}^f(Y)$ and $\mathcal{D}(Y)$ are isomorphic. In the sequel we identify those two groups, if necessary.
For a homotopy $\tilde{H}\colon \tilde{X}\times I \to \tilde{Y}$, we write $\tilde{H}_t=\tilde{H}(-,t)$ with $t\in I$.
\begin{lemma}\label{Ht}Let $f\colon X \to Y$. A homotopy $\tilde{H}\colon \tilde{X}\times I \to \tilde{Y}$ with $\tilde{H}_0=\tilde{f}_0$ is a fibre-preserving homotopy if and only if for any $l\in \mathcal{D}(X)$ and $t\in I$ the following diagram
\[ \xymatrix@C=1.5cm{ \tilde{X}\ar[r]^{\tilde{H}_t} \ar[d]_-{l} & \tilde{Y}\ar[d]^-{f_*(l)} \\ \tilde{X} \ar[r]^{\tilde{H}_t} & \tilde{Y} } \] commutes.
\end{lemma}
\begin{proof}Let $(\tilde{x},t),(\tilde{x}',t')\in \tilde{X}\times I$ with $(p\times \mathrm{id}_I)(\tilde{x},t)=(p\times \mathrm{id}_I)(\tilde{x}',t')$. Then, $p(\tilde{x})=p(\tilde{x}')$ and $t=t'$. Next, consider $l\in \mathcal{D}(X)$ such that $l(\tilde{x})=\tilde{x}'$. Because $\tilde{H}_t=f_*(l)\tilde{H}_tl^{-1}$, we conclude that $q\tilde{H}_t(\tilde{x})=qf_*(l)\tilde{H}_tl^{-1}(\tilde{x})=qf_*(l)\tilde{H}_t(\tilde{x}')=q\tilde{H}_t(\tilde{x}')$. Hence $\tilde{H}$ is fibre-preserving.
Conversely, suppose $\tilde{H}$ is fibre-preserving, let $l\in \mathcal{D}(X)$ and take $\tilde{x}\in \tilde{X}$, $t\in I$. Then, $\tilde{x}$ and $l(\tilde{x})$ are in the same fibre of $p$. Since $\tilde{H}_t(\tilde{x})$ and $\tilde{H}_t(l(\tilde{x}))$ are in the same fibre of $q$, there exists a unique $h\in \mathcal{D}(Y)$ such that $h\tilde{H}_t(\tilde{x})=\tilde{H}_tl(\tilde{x})$. If $\varepsilon>0$ is sufficiently small, $h\tilde{H}_{t-\varepsilon}(\tilde{x})=\tilde{H}_{t-\varepsilon}l(\tilde{x})$. Thus, the greatest lower bound of the set of $t$'s such that $h\tilde{H}_t(\tilde{x})=\tilde{H}_tl(\tilde{x})$ must occur when $t=0$. Therefore, by continuity, $h\tilde{H}_0(\tilde{x})=\tilde{H}_0l(\tilde{x})$. But $\tilde{H}_0=\tilde{f}_0$, so we get $h\tilde{f}_0(\tilde{x})=\tilde{f}_0l(\tilde{x})=f_*(l)\tilde{f}_0(\tilde{x})$. This can occur only when $h=f_*(l)$. Consequently, $\tilde{H}_tl=f_*(l)\tilde{H}_t$ and the proof is complete.
\end{proof}
Now, fix $\tilde{f}_0\in \mathcal{L}^{f}(Y)$ and consider the subset $\mathcal{D}^{\tilde{f}_0}(Y)$ of elements $h\in \mathcal{D}(Y)$ such that $\tilde{f}_0\simeq_{\tilde{H}} h\tilde{f}_0$, where $\tilde{H}\colon \tilde{X}\times I \to \tilde{Y}$ is a fibre-preserving homotopy with respect to the universal covering maps $p,q$.
Equivalently, in view of Lemma~\ref{Ht}, the set $\mathcal{D}^{\tilde{f}_0}(Y)$ coincides with the set of all elements $h\in \mathcal{D}(Y)$ for which there is a homotopy $f\simeq_H f$ which lifts to a homotopy $\tilde{H}$ with $\tilde{f}_0\simeq_{\tilde{H}}h\tilde{f}_0$.
Next, write $\mathcal{Z}_{\mathcal{D}(Y)} f_*(\mathcal{D}(X))$ for the centralizer of $ f_*(\mathcal{D}(X))$ in $\mathcal{D}(Y)$. Then, the result below generalizes \cite[Chapter~II, 3.2~Proposition, 3.3~Lemma]{jiang} as follows:
\begin{proposition}\label{prop.center}The subset $\mathcal{D}^{\tilde{f}_0}(Y)$ is contained in $\mathcal{Z}_{\mathcal{D}(Y)} f_*(\mathcal{D}(X))$ and is a subgroup of $\mathcal{D}(Y)$.
\end{proposition}
\begin{proof}
Let $h\in \mathcal{D}^{\tilde{f}_0}(Y)$. Then, there is a fibre-preserving homotopy $\tilde{H}\colon \tilde{X}\times I \to \tilde{Y}$ with $\tilde{H}_0=\tilde{f}_0$ and $\tilde{H}_1=h\tilde{f}_0$. But, by Lemma~\ref{Ht}, it holds $f_*(l)\tilde{H}_t=\tilde{H}_tl$ for any $l\in \mathcal{D}(X)$ and $t\in I$. Hence, for $t=0,1$ we get $f_*(l)\tilde{f}_0=\tilde{f}_0l$ and $f_*(l)h\tilde{f}_0=h\tilde{f}_0l=hf_*(l)\tilde{f}_0$. Consequently, $f_*(l)h=hf_*(l)$ and we get that $h\in \mathcal{Z}_{\mathcal{D}(Y)} f_*(\mathcal{D}(X))$.
To show the second part, take $h,h'\in \mathcal{D}^{\tilde{f}_0}(Y)$. Then, there are fibre-preserving homotopies $\tilde{H},\tilde{H}'\colon \tilde{X}\times I \to \tilde{Y}$ with $\tilde{H}_0=\tilde{H}_0'=\tilde{f}_0$, $\tilde{H}_1=h\tilde{f}_0$ and $\tilde{H}'_1=h'\tilde{f}_0$. Next, consider the map $\tilde{H}''\colon \tilde{X}\times I \to \tilde{Y}$ given by $\tilde{H}''(\tilde{x},t)=hh'^{-1}\tilde{H}'(\tilde{x},1-t)$ for $(\tilde{x},t)\in \tilde{X}\times I$ and notice that $\tilde{H}''$ is a fibre-preserving homotopy with $\tilde{H}''_0=h\tilde{f}_0$ and $\tilde{H}''_1=hh'^{-1}\tilde{f}_0$. Finally, the concatenation $\tilde{H}\bullet \tilde{H}'' \colon \tilde{X}\times I \to \tilde{Y}$ is a fibre-preserving homotopy with $(\tilde{H}\bullet \tilde{H}'')_0=\tilde{f}_0$ and $(\tilde{H}\bullet \tilde{H}'')_1=hh'^{-1}\tilde{f}_0$. Consequently, $hh'^{-1}\in \mathcal{D}^{\tilde{f}_0}(Y)$ and the proof is complete.
\end{proof}
Notice that if $f\colon X \to X$ is a self-map then the group $\mathcal{D}^{\tilde{f}_0}(X)$ coincides with the group $J(\tilde{f}_0)$ defined in \cite[Chapter~II, 3.1~Definition]{jiang}.
\section{Main result}\label{sec.2}
Given spaces $X$ and $Y$, write $Y^X$ for the space of continuous maps from $X$ into $Y$ with the compact-open topology. Next, consider the evaluation map $\mathrm{ev} \colon Y^X\to Y$, i.e., $\mathrm{ev}(f)=f(x_0)$ for $f\in Y^X$ and the base-point $x_0\in X$.
Then, it holds \[G^f(Y,f(x_0))=\operatorname{Im}\bigl(\mathrm{ev}_\ast \colon \pi_1(Y^X,f)\to \pi_1(Y,f(x_0))\bigr).\]
Certainly, $G^f(X,f(x_0))$ coincides with the group $J(f,x_0)$ defined in \cite[Chapter~II, 3.5~Definition]{jiang} for a self-map $f\colon X\to X$.
\par Now, we follow \textit{mutatis mutandis} the result \cite[Theorem~II.1]{gottlieb1} to generalize \cite[Chapter~II, 3.6~Lemma]{jiang} as follows:
\begin{theorem}\label{qq}Given $f\colon X \to Y$, the canonical isomorphism $\pi_1(Y,f(x_0))\xrightarrow{\approx} \mathcal{D}(Y)$ restricts to an isomorphism $G^f(Y,f(x_0))\xrightarrow{\approx} \mathcal{D}^{\tilde{f}_0}(Y)$.
\end{theorem}
\begin{proof}Let $\alpha\in G^f(Y,f(x_0))$ and $h\in \mathcal{D}(Y)$ be the corresponding deck transformation. Then, there is a homotopy $H\colon X\times I \to Y$ such that $H_0=H_1=f$ and $H(x_0,-)=\alpha$, where $x_0\in X$ is a base-point. Next, consider the commutative diagram
\[ \xymatrix@C=1.5cm{
\tilde{X} \ar[d]_{i_0}\ar[rr]^{\tilde{f}_0} && \tilde{Y} \ar[d]^q \\
\tilde{X}\times I \ar[r]^{p\times \mathrm{id}_I} & X\times I \ar[r]^H & Y\rlap{.}
} \] Then, by the lifting homotopy property there is a map $\tilde{H}\colon \tilde{X}\times I \to \tilde{Y}$ such that $\tilde{H}i_0=\tilde{H}_0=\tilde{f}_0$ and $q\tilde{H}=H(p\times \mathrm{id}_I)$. This implies that $\tilde{H}$ is a fibre-preserving homotopy. Further, because $H_0=H_1=f$, we also derive that $\tilde{H}_0,\tilde{H}_1\in \mathcal{L}^f(Y)$.
Now, since the path $\tilde{\tau}\colon I\to \tilde{Y}$ defined by $\tilde{\tau}=\tilde{H}(\tilde{x}_0,-)$ runs from $\tilde{f}_0(\tilde{x}_0)$ to $\tilde{H}_1(\tilde{x}_0)$, we derive that $\alpha=q\tilde{\tau}$. Consequently, by means of Proposition~\ref{star} we get $\tilde{H}_1=h\tilde{f}_0$ and so $\tilde{H}$ is the required fibre-preserving homotopy with $\tilde{f}_0\simeq_{\tilde{H}} h\tilde{f}_0$.
Conversely, given $h\in \mathcal{D}^{\tilde{f}_0}(Y)$, there is a fibre-preserving homotopy $\tilde{H}\colon \tilde{X}\times I \to \tilde{Y}$ with $\tilde{f}_0\simeq_{\tilde{H}} h\tilde{f}_0$. This implies a homotopy $H\colon X\times I \to Y$ such that $H_0=H_1=f$ and $q\tilde{H}=H(p\times \mathrm{id}_I)$. Then, the path $\tau\colon I\to Y$ given by $\tau=H(x_0,-)$ leads to the required loop in $G^f(Y,f(x_0))$.
\end{proof}
Notice that by Theorem~\ref{qq} the group $\mathcal{D}^{\tilde{f}_0}(Y)$ is independent of the lifting $\tilde{f}_0\in\mathcal{L}^f(Y)$. Further, the advantage of $G^f(Y,f(x_0))$ over $\mathcal{D}^{\tilde{f}_0}(Y)$ is that it does not involve the covering spaces $\tilde{X}$ and $\tilde{Y}$ explicitly, hence it is easier to handle.
Next, Lemma~\ref{Ht} and Theorem~\ref{qq} yield:
\begin{corollary}\label{cor2}Given a map $f\colon X \to Y$, the group $G^f(Y,f(x_0))$ is isomorphic to the subgroup of $\mathcal{D}(Y)$ given by those deck transformations $h$ for which there are homotopies $\tilde{H}\colon \tilde{X}\times I \to \tilde{Y}$ such that $\tilde{f}_0\simeq_{\tilde{H}} h\tilde{f}_0$ and the diagrams
\[ \xymatrix@C=1.5cm{ \tilde{X}\ar[r]^{\tilde{H}_t} \ar[d]_-{l} & \tilde{Y}\ar[d]^-{f_*(l)} \\ \tilde{X} \ar[r]^{\tilde{H}_t} & \tilde{Y} } \] commute for any $l\in \mathcal{D}(X)$ and $t\in I$. Equivalently, the homotopies $\tilde{H}\colon \tilde{X}\times I \to \tilde{Y}$ are $f_*$-equivariant.
\end{corollary}
Let $\mathcal{H}^{\tilde{f}_0}(Y)$ be the subset of all $h\in\mathcal{Z}_{\mathcal{D}(Y)}f_*(\mathcal{D}(X))$ such that $\tilde{f}_0\simeq h\tilde{f}_0$. By similar arguments as in the proof of Proposition~\ref{prop.center}, it is easy to verify that $\mathcal{H}^{\tilde{f}_0}(Y)$ is a subgroup of $\mathcal{Z}_{\mathcal{D}(Y)}f_*(\mathcal{D}(X))$.
Now, we proceed as in the proof of \cite[Theorem~II.6]{gottlieb1} to show:
\begin{proposition}\label{pp}Given $f\colon X \to Y$, there are inclusions \[ G^f(Y,f(x_0))\subseteq \mathcal{H}^{\tilde{f}_0}(Y)\subseteq P^f(Y,f(x_0)). \]
\end{proposition}
\begin{proof} Certainly, the inclusion $G^f(Y,f(x_0))\subseteq \mathcal{H}^{\tilde{f}_0}(Y)$ is a direct consequence of Proposition~\ref{prop.center} and Theorem~\ref{qq}.
Now, let $h\in \mathcal{H}^{\tilde{f}_0}(Y)$ and $\tilde{H} \colon \tilde{X}\times I\to \tilde{Y}$ be a homotopy with $\tilde{f}_0\simeq_{\tilde{H}} h\tilde{f}_0$. Next, consider the path $\tilde{\phi} \colon I\to \tilde{Y}$ defined by $\tilde{\phi}=\tilde{H}(\tilde{x}_0,-)$, where $p(\tilde{x}_0)=x_0$. Then, the loop $\phi=q\tilde{\phi}$ corresponds to $h$.
Notice that $\phi$ acts trivially on $f_\ast(\pi_m(X,x_0))$ for $m>1$ if and only if there is a map $F \colon \mathbb{S}^m\times \mathbb{S}^1\to Y$ such that the diagram
\[ \xymatrix@C=2cm{\mathbb{S}^m\vee \mathbb{S}^1\ar@{^{(}->}[d] \ar[r]^-{f\alpha\vee \phi} & Y \\
\mathbb{S}^m\times \mathbb{S}^1 \ar@{-->}[ru]_F &} \]
commutes (up to homotopy) for any $\alpha\in \pi_m(X,x_0)$.
Given $\alpha\in \pi_m(X,x_0)$ with $m>1$, there exists $\tilde{\alpha}\in\pi_m(\tilde{X},\tilde{x}_0)$ such that $p\tilde{\alpha}=\alpha$. Thus, we define a map \[F' \colon \mathbb{S}^m\times I\xrightarrow{\tilde{\alpha}\times \mathrm{id}_I}\tilde{X}\times I\xrightarrow{\tilde{H}}\tilde{Y}\xrightarrow{q}Y.\]
Because $F'(s,0)=q\tilde{H}(\tilde{\alpha}(s),0)=q\tilde{f}_0(\tilde{\alpha}(s))=fp\tilde{\alpha}(s)$ and
$F'(s,1)=q\tilde{H}(\tilde{\alpha}(s),1)=qh\tilde{f}_0(\tilde{\alpha}(s))=q\tilde{f}_0(\tilde{\alpha}(s))=fp\tilde{\alpha}(s)$ for $s\in \mathbb{S}^m$, the map $F'$
implies the required map $F\colon \mathbb{S}^m\times \mathbb{S}^1\to Y$.
Since $\mathcal{H}^{\tilde{f}_0}(Y)\subseteq \mathcal{Z}_{\mathcal{D}(Y)}f_*(\mathcal{D}(X))$, we derive that $\phi$ acts trivially also on $f_\ast(\mathcal{D}(X))$. This gives the inclusion $\mathcal{H}^{\tilde{f}_0}(Y)\subseteq P^f(Y,f(x_0))$ and the proof is complete.
\end{proof}
Let $H$ be a finite group acting freely on a $(2n+1)$-dimensional homotopy sphere $\Sigma(2n+1)$.
If $\Sigma(2n+1)/H$ is the corresponding space form then, following \cite[Chapter~VII, Proposition~10.2]{brown}, the action of $H=\mathcal{D}(\Sigma(2n+1)/H)$ on $\pi_m(\Sigma(2n+1)/H,y_0)$ is trivial for $m>1$. In particular, $H$ acts trivially on $\pi_{2n+1}(\Sigma(2n+1)/H,y_0)\approx \pi_{2n+1}(\Sigma(2n+1),\tilde{y}_0)$. This implies that for any $h\in H$, the induced homeomorphism $h_*\colon \Sigma(2n+1)\to \Sigma(2n+1)$ is homotopic to $\mathrm{id}_{\Sigma(2n+1)}$. Consequently, if $f\colon X\to \Sigma(2n+1)/H$ is a map then $\mathcal{H}^{\tilde{f}_0}(\Sigma(2n+1)/H)=\mathcal{Z}_{H}f_*(\mathcal{D}(X))$. Because $P^f(\Sigma(2n+1)/H,f(x_0))\subseteq \mathcal{Z}_{H}f_*(\mathcal{D}(X))$, Proposition~\ref{pp} yields \[\mathcal{H}^{\tilde{f}_0}(\Sigma(2n+1)/H)=P^f(\Sigma(2n+1)/H,f(x_0))=\mathcal{Z}_{H}f_*(\mathcal{D}(X)). \]
Further, the result \cite[Theorem~1.17]{gol}, Theorem~\ref{qq} and Proposition~\ref{pp} lead to:
\begin{corollary}
If $f\colon X\to\Sigma(2n+1)/H $ is a map as in \cite[Theorem~1.14]{gol} then $\mathcal{D}^{\tilde{f}_0}(\Sigma(2n+1)/H)=\mathcal{H}^{\tilde{f}_0}(\Sigma(2n+1)/H)=\mathcal{Z}_Hf_*(\mathcal{D}(X))$. In particular, $J(f,x_0)=\mathcal{Z}_Hf_*(H)$ for any self-map $f\colon \Sigma(2n+1)/H\to \Sigma(2n+1)/H$.
\end{corollary}
Given a free action of a finite group $H$ on $\mathbb{S}^{2n+1}$, Oprea \cite[\textsc{Theorem~A}]{oprea} has shown that $G(\mathbb{S}^{2n+1}/H,y_0)=\mathcal{Z}H$, the center of $H$.
In the special case of a free linear action of $H$ on $\mathbb{S}^{2n+1}$, the description of $G(\mathbb{S}^{2n+1}/H,y_0)$ via deck transformations presented in \cite[Theorem~II.1]{gottlieb1} has been applied in \cite{bro} to give a very nice representation-theoretic proof of \cite[\textsc{Theorem~A}]{oprea}. The result stated in Theorem~\ref{qq} might be applied to extend the methods from \cite{bro} to simplify the proof of \cite[Theorem~1.17]{gol} in the case of a free linear action of $H$ on $\mathbb{S}^{2n+1}$ as well.
| {
"timestamp": "2017-08-17T02:01:59",
"yymm": "1708",
"arxiv_id": "1708.04708",
"language": "en",
"url": "https://arxiv.org/abs/1708.04708",
"abstract": "Given a map $f\\colon X \\to Y$, we extend a Gottlieb's result to the generalized Gottlieb group $G^f(Y,f(x_0))$ and show that the canonical isomorphism $\\pi_1(Y,f(x_0))\\xrightarrow{\\approx}\\mathcal{D}(Y)$ restricts to an isomorphism $G^f(Y,f(x_0))\\xrightarrow{\\approx}\\mathcal{D}^{\\tilde{f}_0}(Y)$, where $\\mathcal{D}^{\\tilde{f}_0}(Y)$ is some subset of the group $\\mathcal{D}(Y)$ of deck transformations of $Y$ for a fixed lifting $\\tilde{f}_0$ of $f$ with respect to universal coverings of $X$ and $Y$, respectively.",
"subjects": "Algebraic Topology (math.AT)",
"title": "Generalized Jiang and Gottlieb groups",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109480714625,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7076796190915576
} |
https://arxiv.org/abs/1803.03551 | An iterative method for elliptic problems with rapidly oscillating coefficients | We introduce a new iterative method for computing solutions of elliptic equations with random rapidly oscillating coefficients. Similarly to a multigrid method, each step of the iteration involves different computations meant to address different length scales. However, we use here the homogenized equation on all scales larger than a fixed multiple of the scale of oscillation of the coefficients. While the performance of standard multigrid methods degrades rapidly under the regime of large scale separation that we consider here, we show an explicit estimate on the contraction factor of our method which is independent of the size of the domain. We also present numerical experiments which confirm the effectiveness of the method, with openly available source code. | \section{Introduction}
\subsection{Informal summary of results}
In this paper, we introduce a new iterative method for the numerical approximation of solutions of elliptic problems with rapidly oscillating coefficients. For definiteness, we consider the Dirichlet problem
\begin{equation}
\label{e.model.problem}
\left\{
\begin{aligned}
& -\nabla \cdot \left( \a(x) \nabla u \right) = f & \mbox{in } & U_r,\\
& u = w & \mbox{on } & \partial U_r,
\end{aligned}
\right.
\end{equation}
where $r>0$ is the length scale of the problem, which is typically very large ($r\gg1$), and we denote $U_r = r U$ for a fixed, bounded $C^{1,1}$ domain $U\subseteq {\mathbb{R}^d}$ in dimension $d\geq 2$.
The boundary condition $w$ belongs to $H^1(U_r)$, and the right-hand side $f$ belongs to $H^{-1}(U_r)$. The coefficients $\a(x)$ are symmetric, uniformly elliptic and H\"older continuous. Moreover, in order to ensure that \emph{quantitative homogenization} holds on large scales, we assume that the coefficients are sampled by a probability measure which is $\mathbb{Z}^d$-stationary and has a unit range of dependence (see below for the precise formulation of these assumptions).
Our goal is to build a numerical method for the computation of $u$ which remains efficient in the regime of fast oscillations of the coefficient field (which in our setting corresponds to the case in which the length scale is very large, $r \gg 1$) and does not rely on scale separation for convergence (the method computes the true solution for fixed~$r$ and not only in the limit $r\to \infty$).
\smallskip
In the absence of fast oscillations of the coefficient field, contemporary technology makes it possible to compute numerical approximations of elliptic problems involving billions of degrees of freedom. One of the most successful methods allowing one to achieve such results is the \emph{multigrid method} (see \cite{FFT} for benchmarks). However, the performance of this method degrades quickly as the coefficient field becomes more rapidly oscillating (see for instance \cite[Table~IV]{sundar2012parallel} and \cite[Tables~II and III]{compare-mg}).
\smallskip
We seek to remedy this problem by leveraging on \emph{homogenization}. While standard multigrid methods use a decomposition of the elliptic problem into a series of scales, the difficulty in our context is that the slow eigenmodes of the heterogeneous operator still have fast oscillations, and are thus not easily captured through a coarse representation. We overcome this by introducing a suitable variant of the multigrid method that succeeds in replacing the heterogeneous operator by the homogenized one
on length scales larger than a large but finite multiple of the correlation length scale. The result is a new iterative method that converges exponentially fast in the number of iterations, each of which is relatively inexpensive to compute---the memory and number of computations required scale linearly in the volume, and the computation is very amenable to parallelization. We give a rigorous proof of convergence and present numerical experiments which establish the efficiency of the method from a practical point of view.
\subsection{Statement of the main result}
We introduce some notation in order to state our main result. We begin with the precise assumptions on the coefficient field.
We fix parameters~$\Lambda > 1$ and~$\alpha \in (0,1]$ and require our coefficient fields $\a(x)$ to satisfy
\begin{equation}
\label{e.holder}
\forall x,y \in {\mathbb{R}^d}, \quad |\a(y) - \a(x)| \le \Lambda|x-y|^\alpha
\end{equation}
and
\begin{equation}
\label{e.unif.ell}
\forall x \in {\mathbb{R}^d}, \quad \forall \xi \in {\mathbb{R}^d}, \quad \Lambda^{-1} |\xi|^2 \le \xi \cdot \a(x) \xi \le \Lambda |\xi|^2.
\end{equation}
We denote by~$\mathbb{R}^{d\times d}_{\mathrm{sym}}$ the set of~$d$-by-$d$ real symmetric matrices and define
\begin{equation*}
\Omega := \left\{ \a : \mathbb{R}^d \to \mathbb{R}^{d \times d}_\mathrm{sym} \ \mbox{satisfying \eqref{e.holder} and \eqref{e.unif.ell}} \right\}.
\end{equation*}
For each Borel set $V \subset {\mathbb{R}^d}$, we denote by $\mathcal{F}_V$ the Borel $\sigma$-algebra on $\Omega$ generated by the family of mappings
\begin{equation*}
\a \mapsto \int_{\mathbb{R}^d} \chi \, \a_{ij} , \qquad i,j \in \{1,\ldots,d\}, \ \chi \in C^\infty_c(V).
\end{equation*}
We also set $\mathcal{F} := \mathcal{F}_{{\mathbb{R}^d}}$.
For each $y \in {\mathbb{R}^d}$, we denote by $T_y:\Omega\to\Omega$ the action of translation by~$y$:
\begin{equation*}
\forall x \in {\mathbb{R}^d}, \quad T_y \a(x) := \a(x+y).
\end{equation*}
We assume that~$\P$ is a probability measure on $(\Omega, \mathcal{F})$ satisfying:
\begin{itemize}
\item stationarity with respect to $\mathbb{Z}^d$-translations: for every $y \in \mathbb{Z}^d$ and $A \in \mathcal{F}$,
\begin{equation*}
\P \left[ T_y A \right] = \P \left[ A \right] ;
\end{equation*}
\item unit range of dependence: for every Borel sets $V,W \subset {\mathbb{R}^d}$,
\begin{equation*}
\dist(V,W) \ge 1 \implies \mathcal{F}_{V} \text{ and } \mathcal{F}_{W} \mbox{ are $\P$-independent.}
\end{equation*}
\end{itemize}
The expectation associated with the probability measure $\P$ is denoted by $\mathbb{E}$. We recall that, by classical homogenization theory (see~\cite{PV1,AKMbook}), the heterogeneous operator $-\nabla \cdot \a(x) \nabla$ homogenizes to the homogeneous operator $-\nabla \cdot \overline{\a} \nabla$, where $\overline{\a} \in \mathbb{R}^{d\times d}$ is a deterministic, constant, positive definite matrix. For every $s, \theta > 0$ and random variable $X$, we write
\begin{equation}
\label{e.def.Os}
X \le \O_s \left( \theta \right) \quad \text{if and only if} \quad \mathbb{E} \left[ \exp \left( \left( \theta^{-1} \max(X,0) \right)^s \right)\right] \le 2.
\end{equation}
We also set, for every $\lambda \in (0,1]$,
\begin{equation}
\label{e.def.llambda}
\ell(\lambda) :=
\left\{
\begin{aligned}
& \left( \log (1+\lambda^{-1}) \right)^\frac 1 2 & \quad \text{if } d = 2, \\
& 1 & \quad \text{if } d \ge 3.
\end{aligned}
\right.
\end{equation}
For notational convenience, from now on we will suppress the explicit dependence on the spatial variable in the operator $-\nabla \cdot \a(x) \nabla$ and simply write $-\nabla \cdot \a \nabla$.
\smallskip
We now state the main result of the paper. We recall that $\P$ is a probability measure on $(\Omega,\mathcal{F})$ which specifies the law of the coefficient field~$\a(x)$ and satisfies the assumptions stated above, that~$\overline{\a}$ is the homogenized matrix associated to~$\P$, and that $U\subseteq {\mathbb{R}^d}$ is a bounded domain with $C^{1,1}$ boundary.
\begin{theorem}[$H^1$ contraction]
\label{t.contract}
For each~$s \in (0,2)$, there exists a constant $C(s,U,\Lambda,\alpha,d) < \infty$ such that the following statement holds.
Fix $r \geq 1$, $\lambda \in \left[r^{-1},1\right]$, $f\in H^{-1}(U_r)$, $w\in H^1(U_r)$ and let $u \in w+H^1_0(U_r)$ be the solution of~\eqref{e.model.problem}. Also fix a function~$v \in w + H^1_0(U_r)$ and define the functions $u_0, \bar u, \tilde u \in H^1_0(U_r)$ to be the solutions of the following equations (with null Dirichlet boundary condition on~$\partial U_r$):
\begin{align*}
(\lambda^2 - \nabla \cdot \a \nabla)u_0 &
= f + \nabla \cdot \a \nabla v & \text{in } U_r,
\\
- \nabla \cdot \overline{\a} \nabla\bar u & = \lambda^2 u_0 & \text{in } U_r,
\\
(\lambda^2 - \nabla \cdot \a \nabla)\tilde u & = (\lambda^2-\nabla \cdot \overline{\a} \nabla )\bar u& \text{in } U_r.
\end{align*}
For~$\hat v \in w+H^1_0(U_r)$ defined by
\begin{equation}
\label{e.def.hatv}
\hat v := v + u_0 + \tilde u,
\end{equation}
we have the estimate
\begin{equation}
\label{e.contract}
\left\| \nabla ( \hat v-u ) \right\|_{L^2(U_r)} \le \O_s \left(C \ell(\lambda)^\frac 1 2 \, \lambda^{\frac 1 2} \, \left\| \nabla ( v-u ) \right\|_{L^2(U_r)} \right).
\end{equation}
\end{theorem}
The function $u \in H^1(U_r)$ appearing in Theorem~\ref{t.contract} is the unknown we wish to approximate, and $v \in H^1(U_r)$ should be thought of as the current best approximation to~$u$. The function~$\hat{v}$ is then the new, updated approximation to~$u$ and the estimate~\eqref{e.contract} says that, if $\lambda$ is chosen small enough, then the error in our approximation will be reduced by a multiplicative factor of~$1/2$.
As explained more precisely around \eqref{e.unif.contract} below, we can then iterate this procedure and obtain rapid convergence to the solution. The only assumption we make on~$v$ is that it satisfies the correct boundary condition, that is, $v \in w + H^1_0(U_r)$. In particular, we may begin the iteration with~$v = w$ as the initial guess (or any other function with the correct boundary condition).
The computation of $\hat v$ reduces to solving the problems for $u_0, \bar u$, and $\tilde u$ listed in the statement, and the point is that each of these problems is relatively inexpensive to compute, provided that~$\lambda$ is not too small.
A fundamental aspect of the result is therefore that the required smallness of the parameter~$\lambda$ (so that~\eqref{e.contract} gives us a strict contraction in~$H^1$) does not depend on the length scale~$r$ of the problem. In other words, we may need to take~$\lambda$ to be small, but it will still be of order one, no matter how large~$r$ is.
\smallskip
Similarly to standard multigrid methods, the equation for $u_0$ is meant to resolve the small-scale discrepancies between $u$ and $v$. Note that the equation for $u_0$ can be rewritten as
\begin{equation*}
(\lambda^2 - \nabla \cdot \a \nabla)u_0 = - \nabla \cdot \a \nabla (u-v) \qquad \text{in } U_r.
\end{equation*}
The parameter~$\lambda^{-1}$ is the characteristic length scale of this problem, and in practice we will take it to be some fixed multiple of the scale of oscillations of the coefficients. The computation of $u_0$ can thus be decomposed into a large number of essentially unrelated elliptic problems posed on subdomains of side length of the order of $\lambda^{-1}$. In analogy with multigrid methods, we may also think of~$\lambda^{-2}$ as the number of elementary pre-smoothing steps performed during one global iteration.
\smallskip
As announced, we then use the homogenized operator on scales larger than~$\lambda^{-1}$. This is what the problem for $\bar u$ is meant to capture. Since the elliptic problem for~$\bar u$ involves the homogenized operator $-\nabla \cdot \overline{\a} \nabla$, it can be solved efficiently using the standard multigrid method. We note that the equation for~$\bar u$ can be rewritten, if desired, in the form
\begin{equation}
\label{e.alt.eq.baru}
- \nabla \cdot \overline{\a} \nabla\bar u = -\nabla \cdot \a \nabla(u- v-u_0) \qquad \text{in } U_r.
\end{equation}
\smallskip
The final step of the iteration, involving the definition of $\tilde u$, is meant to add back some small-scale details to the function $\bar u$. It is analogous to the post-smoothing step in the standard $V$-cycle implementation of the multigrid method, and the parameter $\lambda^{-2}$ represents the number of post-smoothing steps.
\smallskip
We next discuss the more probabilistic aspects involved in the statement of Theorem~\ref{t.contract}. Since the coefficient field is random, the statement of this theorem can only be valid with high probability, but not almost surely. Indeed, with non-zero probability, the coefficient field can be essentially arbitrary, and on such small-probability events, any strategy that leverages homogenization can only perform badly (recall that we aim for a convergence result for large but~fixed $r$, as opposed to asymptotic convergence). It may help the intuition to observe that, by Chebyshev's inequality, the assumption of \eqref{e.def.Os} implies that
\begin{equation}
\label{e.tail}
\forall x \ge 0, \quad \P \left[ X \ge \theta x \right] \le 2 \exp(-x^s),
\end{equation}
and that conversely, the assumption of \eqref{e.tail} implies that $X \le \O_s(C \theta)$ for some constant $C(s) < \infty$ (see \cite[Lemma~A.1]{AKMbook}).
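For the reader's convenience, here is the short computation behind the first implication. Recalling that, as in \cite{AKMbook}, the statement $X \le \O_s(\theta)$ in \eqref{e.def.Os} amounts to the exponential moment bound $\mathbb{E} \left[ \exp \left( \left( \theta^{-1} X \right)_+^s \right) \right] \le 2$, we have, for every $x \ge 0$, by Markov's inequality,
\begin{equation*}
\P \left[ X \ge \theta x \right]
\le \P \left[ \exp \left( \left( \theta^{-1} X \right)_+^s \right) \ge \exp(x^s) \right]
\le \exp(-x^s) \, \mathbb{E} \left[ \exp \left( \left( \theta^{-1} X \right)_+^s \right) \right]
\le 2 \exp(-x^s).
\end{equation*}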
\smallskip
We remark that Theorem~\ref{t.contract} is new even when restricted to the subclass of periodic coefficient fields. In this case, neither the probabilistic part of the estimate nor the logarithmic factor $\ell(\lambda)$ is present, and~\eqref{e.contract} can be replaced with the simpler form
\begin{equation*}
\|\nabla(\hat v-u)\|_{L^2(U_r)} \le C
\lambda^{\frac 1 2} \, \|\nabla(v - u)\|_{L^2(U_r)}.
\end{equation*}
\smallskip
We stress that the probabilistic statement in \eqref{e.contract} is valid for each fixed choice of~$u,v \in H^1(U_r)$. In fact, the following stronger, uniform estimate is now proved in \cite{chenlin}. For each $s \in (0,2)$, there exist a constant $C(s,p,U,\Lambda,d) < \infty$ and, for each $r \geq 1$ and $\lambda \in [r^{-1},1]$, a random variable $\mathcal{X}_{s,r,\lambda} : \Omega \to [0,+\infty]$ satisfying
\begin{equation*}
\mathcal{X}_{s,r,\lambda} \le \O_s \left( C\right)
\end{equation*}
such that, for every $u,v \in H^1(U_r)$ and $\hat v$ as in the statement of Theorem~\ref{t.contract},
\begin{equation}
\label{e.unif.contract}
\left\| \nabla (\hat v - u) \right\|_{ L^2(U_r)}
\leq
\mathcal{X}_{s,r,\lambda} \,
\ell(\lambda)^{\frac12} \,
\lambda^\frac 1 2 \,
(\log r)^\frac 1 s \,
\left\|\nabla (v-u)\right\|_{ L^2(U_r)}.
\end{equation}
As is apparent, the price one has to pay for the uniformity of this estimate in the functions $u$ and $v$ is a slight degradation of the contraction factor, by a slowly diverging logarithmic factor of the domain size. Due to randomness, uniform estimates such as \eqref{e.unif.contract} must necessarily contain some logarithmic divergence in the domain size. Indeed, consider for instance the case of a coefficient field given by a random checkerboard: independently for each $z \in \mathbb{Z}^d$, we toss a fair coin to decide whether the coefficient field on $z + [0,1)^d$ is equal to $I_d$ or to $2I_d$. Then, with probability tending to one as $r$ tends to infinity, the domain $U_r$ will contain a region of side length of the order of $(\log r)^\frac 1 d$ on which the coefficient field is identically equal to $I_d$. If the support of the solution we seek is concentrated in this region, then the iteration described in Theorem~\ref{t.contract} will perform badly unless $\lambda^{-1}$ is chosen larger than $(\log r)^\frac 1 d$.
\smallskip
The iteration proposed in Theorem~\ref{t.contract} requires the user to make a judicious choice of the length scale~$\lambda^{-1}$. Ideally, it would be preferable to devise an adaptive method which discovers a good choice for~$\lambda^{-1}$ automatically. The contraction of the iteration would then be guaranteed with probability one, and more subtle probabilistic quantifiers would instead enter into the complexity analysis of the method.
A suitably designed adaptive algorithm would likely also work on more general coefficient fields than those considered here, allowing one, for instance, to drop the assumption of stationarity. An assumption of approximate local stationarity would then also enter into the complexity analysis of the method. We leave the development of such adaptive methods to future work.
\smallskip
The method proposed here also requires the user to compute~$\overline{\a}$ beforehand. An efficient method for doing so was presented in \cite{efficient} in a discrete setting; see also~\cite{Gloria, cemracs} and references therein for previous work on this problem. Moreover, one can check that in order to guarantee the contraction property of the iteration described in Theorem~\ref{t.contract}, say by a factor of~$1/2$, a coarse approximation of~$\overline{\a}$, which may be off by a small but fixed positive amount, suffices.
\smallskip
The proof of Theorem~\ref{t.contract} can be modified so that the~$L^2$ norms in~\eqref{e.contract} are replaced by $L^p$ norms, for any exponent~$p<\infty$. Up to some additional logarithmic factors in~$\lambda$, the contraction factor in the estimate would then be of order~$\lambda^{\frac 1p}$ rather than $\lambda^{\frac 12}$. This modification requires the application of large-scale Calder\'on-Zygmund-type~$L^p$ estimates which can be found in~\cite[Chapter 7]{AKMbook}. The main required modification to the proof of Theorem~\ref{t.contract} is simply to upgrade the two-scale expansion result of Theorem~\ref{t.twoscale} from $p=2$ to larger exponents by adapting the argument of~\cite[Theorem 7.10]{AKMbook}.
\subsection{Previous works}
There has been a considerable amount of work on numerical algorithms that become accurate only in the limit of infinite scale separation (see for instance~\cite{NJW, MS, brandt, eq-free,eh-book,E-book, hmm} and the references therein). That is, the error between the true solution~$u$ and its numerical approximation becomes small only as~$r\to \infty$. Such algorithms typically have a computational complexity scaling sublinearly with the volume of the domain. An example of such a method in the context of the homogenization problem considered here is to compute an approximation of the solution to the homogenized equation. In addition to relying on scale separation, we note that such a sublinear method can only give an accurate global approximation in a weaker space such as~$L^2$, but not in stronger norms such as~$H^1$ which are sensitive to small-scale oscillations.
\smallskip
We now turn our attention to numerical algorithms that, like ours, converge to the true solution for each finite value of~$r$.
As pointed out in \cite{knap1,knap2}, direct applications of standard multigrid methods result in coarse-scale systems that do not capture the relevant large-scale properties of the problem. Indeed, standard coarsening procedures produce effective coefficients that are simple arithmetic averages of the original coefficient field, instead of the homogenized coefficients. To remedy this problem, \cite{knap1,knap2} propose more subtle, matrix-dependent choices for the restriction and prolongation operators. The idea is to try to approximate a Schur complement calculation, while preserving some calculability constraints such as matrix sparsity. The method proposed there is shown numerically to perform better than simple averaging, but no theoretical guarantee is provided.
\smallskip
In \cite{EL1,EL2}, the authors propose, in the periodic setting, to solve local problems for the correctors, deduce locally homogenized coefficients, and build coarsened operators from these. For the special two-dimensional case with $\a(x) = \tilde \a(x_1 - x_2)$ for some $1$-periodic $\tilde \a \in C([0,1];\mathbb{R}^{2\times 2}_{\mathrm{sym}})$, they show (in our notation) that $O(r^{\frac 5 3} \log r)$ smoothing steps suffice to guarantee the contractivity of the two-step multigrid method (assuming that the chosen coarsening scale is a bounded multiple of the oscillation scale). For comparison, this roughly corresponds to the choice of $\lambda \simeq r^{-\frac 5 6}$ in our method. They also report better numerical performance than predicted by their theoretical arguments.
\smallskip
Beyond our current assumption of stationarity of the coefficient field, one can look for numerical methods for the resolution of general elliptic problems with rapidly oscillating coefficients. Possibly the simplest such method is to rely on the uniform ellipticity assumption~\eqref{e.unif.ell} and appeal to a preconditioned conjugate gradient method, using the standard Laplacian as a preconditioner. However, this method comes with convergence results that are naturally stated in terms of the~$L^2$ norm of the solution (as opposed to the $H^1$ norm), and its performance degrades quickly if the ellipticity contrast becomes large.
\smallskip
Algebraic multigrid methods are intended to solve completely arbitrary linear systems of equations, by automatically discovering a hierarchy of coarsened problems~\cite{amg}. In practice, it is however necessary to make some judicious choices of coarsening operators. In a sense, the present contribution as well as those of \cite{knap1,knap2,EL1,EL2} are descriptions of specific coarsening procedures which, under stronger assumptions such as stationarity, are shown to have fast convergence properties.
\smallskip
Many alternative approaches to the computation of elliptic problems with arbitrary coefficient fields have been developed. We mention in particular, without going into details, hierarchical matrices \cite{bebendorf}, generalized multiscale finite element methods \cite{BL,EGH,GGS}, polyharmonic splines \cite{OZB}, local orthogonal decompositions \cite{malpet}, subspace correction methods \cite{KY} and gamblets \cite{roulette}.
\smallskip
Since the preconditioned conjugate gradient method already achieves essentially optimal asymptotic complexity for the class of problems we consider here, we believe that the most informative theoretical tests should relate to the regime of \emph{high ellipticity contrast}, $\Lambda \gg 1$. In view of the results of \cite{AD}, we expect that properties comparable with Theorem~\ref{t.contract} can be shown for the highly degenerate, percolation-type environments studied there.
\smallskip
\subsection{Organization of the paper}
We introduce some more notation in Section~\ref{s.notation}. Section~\ref{s.proof} is devoted to the proof of Theorem~\ref{t.contract}. We report on our numerical results in Section~\ref{s.numerics}. Finally, an appendix recalls some classical Sobolev and elliptic estimates for the reader's convenience.
\section{Notation}
\label{s.notation}
In this section, we collect some notation used throughout the paper. Recall that the notation~$\O_s(\cdot)$ was defined in \eqref{e.def.Os}. We will need the following fact, which says that~$\O_s$ behaves like a norm: for each $s \in (0,\infty)$, there exists~$C_s < \infty$ (with $C_s = 1$ for $s \ge 1$) such that the following triangle inequality for $\O_s(\cdot)$ holds: for any measure space $(E,\mathcal{S},\mu)$, measurable function $\theta: E \to (0,\infty)$ and jointly measurable family $\left\{ X(z) \right\}_{z \in E}$ of random variables, we have (see \cite[Lemma~A.4]{AKMbook})
\begin{equation}
\label{e.Osums}
\forall z\in E, \ X(z) \le \O_s(\theta(z))
\implies
\int_{E} X\,d\mu \leq \O_s\left( C_s \int_E \theta \,d\mu \right).
\end{equation}
We denote by $(e_1,\ldots,e_d)$ the canonical basis of ${\mathbb{R}^d}$, and write $B(x,r) \subset {\mathbb{R}^d}$ for the Euclidean ball centered at $x \in {\mathbb{R}^d}$ and of radius $r > 0$. For a Borel set $V \subset {\mathbb{R}^d}$, we denote its Lebesgue measure by $|V|$. If $|V| < \infty$, then for every $p \in [1,\infty)$ and $f \in L^p(V)$ we denote the scaled~$L^p$ norm of~$f$ by
\begin{equation*}
\|f\|_{\underline L^p(V)} := \left( |V|^{-1} \int_V |f|^p \right)^{\frac 1 p} = |V|^{-\frac 1 p} \|f\|_{L^p(V)}.
\end{equation*}
For each $k \in \mathbb{N}$, we denote by $H^{k}(V)$ the classical Sobolev space on $V$, whose norm is given by
\begin{equation*}
\|f\|_{H^{k}(V)} := \sum_{0 \le |\beta| \le k} \|\partial^\beta f\|_{L^2(V)}.
\end{equation*}
In the expression above, the parameter $\beta = (\beta_1, \ldots, \beta_d)$ is a multi-index in $\mathbb{N}^d$, and we used the notation
\begin{equation*}
|\beta| := \sum_{i = 1}^d \beta_i \quad \text{ and } \quad \partial^\beta f = \partial_{x_1}^{\beta_1} \cdots \partial_{x_d}^{\beta_d} f.
\end{equation*}
Whenever $|V| < \infty$, we define the scaled Sobolev norm by
\begin{equation*}
\|f\|_{\underline H^{k}(V)} := \sum_{0 \le |\beta| \le k} |V|^{\frac{|\beta| - k}{d}}\|\partial^\beta f\|_{\underline L^2(V)}.
\end{equation*}
We denote by $H^1_0(V)$ the completion in $H^1(V)$ of the space $C^\infty_c(V)$ of smooth functions with compact support in $V$.
We write $H^{-1}(V)$ for the dual space to $H^1_0(V)$, which we endow with the (scaled) norm
\begin{equation*}
\|f\|_{\underline H^{-1}(V)} := \sup \left\{ |V|^{-1} \int_V f \, g, \quad g \in H^1_0(V), \ \|g\|_{\underline H^1(V)} \le 1 \right\} .
\end{equation*}
The integral sign above is an abuse of notation and should be understood as the duality pairing between $H^{-1}(V)$ and $H^1_0(V)$. The spaces $H^{-1}(V)$ and $H^1_0(V)$ can be continuously embedded into the space of distributions, and we make sure that the duality pairing is consistent with the integral expression above whenever $f$ and $g$ are smooth functions.
For every~$r > 0$ and~$x \in {\mathbb{R}^d}$, we denote the time-slice of the heat kernel with length scale~$r$ by
\begin{equation}
\label{e.gaussian}
\Phi_r(x) := (4\pi r^2)^{-\frac d 2} \exp \left( -\frac{|x|^2}{4r^2} \right) .
\end{equation}
We denote by $\zeta \in C^\infty_c({\mathbb{R}^d})$ the standard mollifier
\begin{equation}
\label{e.def.zeta}
\zeta(x) : = \left\{
\begin{aligned}
& c_d \, \exp\left( - (1-|x|^2)^{-1} \right) & \mbox{if} & \ |x|<1,\\
& 0 & \mbox{if} & \ |x| \geq 1,
\end{aligned}
\right.
\end{equation}
where the constant~$c_d$ is chosen so that $\int_{\mathbb{R}^d} \zeta = 1$.
For $f \in L^p({\mathbb{R}^d})$ and $g \in L^{p'}({\mathbb{R}^d})$ with $\frac 1 p + \frac 1 {p'} = 1$, we denote the convolution of $f$ and $g$ by
\begin{equation*}
f \ast g (x) := \int_{\mathbb{R}^d} f(y) g(x-y) \, dy.
\end{equation*}
\section{Proof of Theorem~\ref{t.contract}}
\label{s.proof}
This section is devoted to the proof of~Theorem~\ref{t.contract}. We begin by introducing the notion of \emph{(first-order) corrector}: for each $p \in {\mathbb{R}^d}$, the corrector in the direction of $p$ is the function $\phi_p \in H^1_{\mathrm{loc}}({\mathbb{R}^d})$ solving
\begin{equation*}
-\nabla \cdot \a \left( p + \nabla \phi_p \right) = 0 \qquad \text{in } {\mathbb{R}^d},
\end{equation*}
and such that the mapping $x \mapsto \nabla \phi_p(x)$ is $\mathbb{Z}^d$-stationary and satisfies
\begin{equation*}
\mathbb{E} \left[ \int_{[0,1]^d} \nabla \phi_p \right] = 0.
\end{equation*}
The corrector $\phi_p$ is unique up to an additive constant (see~\cite[Definition~4.2]{AKMbook} for instance). We also recall that one can define the homogenized matrix $\overline{\a} \in \mathbb{R}^{d\times d}_\mathrm{sym}$ via the formula
\begin{equation*}
\forall p \in {\mathbb{R}^d}, \quad \overline{\a} p = \mathbb{E} \left[ \int_{[0,1]^d} \a(p + \nabla \phi_p )\right] ,
\end{equation*}
or equivalently,
\begin{equation*}
\forall p \in {\mathbb{R}^d}, \quad p \cdot \overline{\a} p = \mathbb{E} \left[ \int_{[0,1]^d} (p + \nabla \phi_p ) \cdot \a (p + \nabla \phi_p )\right] ,
\end{equation*}
and in particular, as a consequence of \eqref{e.unif.ell}, we have
\begin{equation}
\label{e.ahom.ell}
\forall \xi \in {\mathbb{R}^d}, \quad \Lambda^{-1} |\xi|^2 \le \xi \cdot \overline{\a} \xi \le \Lambda |\xi|^2.
\end{equation}
For each $k \in \{1,\ldots,d\}$ and $\lambda > 0$, we denote
\begin{equation*}
\phi^{(\lambda)}_{e_k} := \phi_{e_k} - \phi_{e_k} \ast \Phi_{\lambda^{-1}}.
\end{equation*}
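Numerically, if a realization of $\phi_{e_k}$ has been sampled on a uniform grid, the mollification $\phi_{e_k} \ast \Phi_{\lambda^{-1}}$ is a Gaussian filtering, since $\Phi_{\lambda^{-1}}$ is a centered Gaussian with standard deviation $\sqrt 2 \, \lambda^{-1}$ in each coordinate. The following sketch (in which the grid spacing \texttt{h}, the periodic wrapping at the boundary and the sampled field \texttt{phi} are illustrative assumptions) computes $\phi^{(\lambda)}_{e_k}$ on such a grid.
\begin{verbatim}
# phi^(lambda) = phi - phi * Phi_{1/lambda} on a uniform grid (sketch).
import numpy as np
from scipy.ndimage import gaussian_filter

def mollified_corrector(phi, lam, h):
    sigma = np.sqrt(2.0) / (lam * h)  # std. dev. of Phi_{1/lambda} in grid units
    return phi - gaussian_filter(phi, sigma=sigma, mode="wrap")
\end{verbatim}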
A key ingredient in the proof of Theorem~\ref{t.contract} is the following \emph{quantitative two-scale expansion} for the operator~$\left(\lambda^2 - \nabla \cdot \a\nabla \right)$. It is the only input from the quantitative theory of stochastic homogenization used in this paper and it follows from some estimates which can be found in~\cite{AKMbook}.
\begin{theorem}[Two-scale expansion and error estimate]
\label{t.twoscale}
For each $s \in (0,2)$, there exists a constant $C(s,U,\Lambda,\alpha, d) < \infty$ such that, for every $r \ge 1$, $\lambda \in [r^{-1},1]$, and $\bar v \in H^1_0(U_r) \cap H^{2}(U_r)$, defining
\begin{equation}
\label{e.def.w}
w := \bar v + \sum_{k = 1}^d \phi^{(\lambda)}_{e_k} \, \partial_{x_k} \bar v,
\end{equation}
we have the estimate
\begin{equation}
\label{e.twoscale}
\left\| \nabla \cdot \left( \a \nabla w - \overline{\a} \nabla \bar v \right)\right\|_{\underline H^{-1}(U_r)}
\le
\O_s \left( C \ell(\lambda)\|\bar v\|_{\underline H^{2}(U_r)} + C \lambda^{\frac d 2} \, \|\bar v\|_{\underline H^{1}(U_r)} \right).
\end{equation}
Moreover,
for every $\mu \in [0,\lambda]$ and
$v \in H^1_0(U_r)$ such that
\begin{equation}
\label{e.eq.v.barv}
\left(\mu^2 -\nabla \cdot \a \nabla\right) v = \left(\mu^2 - \nabla \cdot \overline{\a} \nabla\right) \bar v,
\end{equation}
we have the estimate
\begin{multline}
\label{e.error.estimates}
\left\| v - w \right\|_{\underline H^1(U_r)} + \left(\mu + r^{-1} \right) \|v-\bar v\|_{\underline L^2(U_r)} + \left(\mu + r^{-1} \right)^2 \|v-\bar v\|_{\underline H^{-1}(U_r)}
\\
\le \O_s \left( C \left(\mu \ell(\lambda) + \lambda^\frac d 2\right) \|\bar v\|_{\underline H^1(U_r)} + C \ell(\lambda)^\frac 1 2 \,\| \bar v \|_{\underline H^1(U_r)}^\frac 1 2 \, \|\bar v\|_{\underline H^2(U_r)}^\frac 1 2 + C \ell(\lambda) \|\bar v\|_{\underline H^2(U_r)} \right).
\end{multline}
\end{theorem}
\smallskip
The proof of Theorem~\ref{t.twoscale} follows that of a similar result from Chapter~6 of~\cite{AKMbook}. The main difference here is the presence of the zeroth order term with the factor of~$\mu^2$, which presents no additional difficulty. We begin by recalling the concept of a flux corrector and stating some estimates on the correctors proved in~\cite{AKMbook}.
\smallskip
For each $p \in {\mathbb{R}^d}$, we denote the (centered) flux of the corrector $\phi_p$ by
\begin{equation*}
(\mathbf{g}_{p,i})_{1 \le i \le d} = \mathbf{g}_p := \a(p + \nabla \phi_p) - \overline{\a} p.
\end{equation*}
Since $\nabla \cdot \mathbf{g}_p = 0$, the flux of the corrector admits a representation as the ``curl'' of some vector potential, by Helmholtz's theorem. This vector potential, the \emph{flux corrector}, will be useful for the proof of Theorem~\ref{t.twoscale}. For each $p \in {\mathbb{R}^d}$, the vector potential $(\S_{p,ij})_{1 \le i,j \le d}$ is a matrix-valued random field with entries in $H^1_{\mathrm{loc}}({\mathbb{R}^d})$ satisfying, for each $i,j \in \{1,\ldots,d\}$,
\begin{equation*}
\S_{p,ij} = - \S_{p,ji},
\end{equation*}
\begin{equation}
\label{e.def.S}
\nabla \cdot \S_{p} = \mathbf{g}_p,
\end{equation}
and such that $x \mapsto \nabla \S_{p,ij}(x)$ is a stationary random field with mean zero. In \eqref{e.def.S}, we used the shorthand notation
\begin{equation*}
(\nabla \cdot \S_p)_i := \sum_{j = 1}^d \partial_{x_j} \S_{p,ij}.
\end{equation*}
The conditions above do not specify the flux corrector uniquely. One way to ``fix the gauge'' is to enforce that, for each $i,j \in \{1,\ldots,d\}$,
\begin{equation*}
\Delta \S_{p,ij} = \partial_{x_j} \mathbf{g}_{p,i} - \partial_{x_i} \mathbf{g}_{p,j}.
\end{equation*}
This latter choice then defines $\S_{p,ij}$ uniquely, up to the addition of a constant. We refer to \cite[Section~6.1]{AKMbook} for more precision on this construction.
We set
\begin{equation}
\label{e.Sl}
\mathbf{S}^{(\lambda)}_{e} := \S_e - \S_e \ast \Phi_{\lambda^{-1}}.
\end{equation}
The fundamental ingredient for the proof of Theorem~\ref{t.twoscale} is the following proposition, which quantifies the convergence to zero of the spatial averages of the gradients of the correctors.
\begin{proposition}[Corrector estimates]
For each $s \in (0,2)$, there exists a constant $C(s,U,\Lambda,\alpha,d) < \infty$ such that for every $\lambda \in (0,1)$, $x \in {\mathbb{R}^d}$ and $i,j,k \in \{1,\ldots,d\}$,
\label{p.corrector}
\begin{equation}
\label{e.apriori.nablaphi}
|\nabla \phi_{e_k}(x)| \le \O_s \left( C \right) ,
\end{equation}
\begin{equation}
\label{e.bound.gradphi}
\left| \left(\nabla \phi_{e_k} \ast \Phi_{\lambda^{-1}}\right) (x) \right|
+ \left| \left(\nabla \S_{e_k,ij} \ast \Phi_{\lambda^{-1}} \right)(x) \right| \le \O_s \left( C \lambda^{\frac d 2} \right) ,
\end{equation}
\begin{equation}
\label{e.bound.phi}
\left|\phi^{(\lambda)}_{e_k}(x)\right| + \left|\mathbf{S}^{(\lambda)}_{e_k,ij}(x)\right| \le \O_s \left( C \ell(\lambda) \right) .
\end{equation}
\end{proposition}
\begin{proof}
By \cite[Lemma~4.4]{AKMbook}, we have
\begin{equation*}
\|\nabla \phi_{e_k}\|_{L^2(B(0,1))} \le \O_s \left( C \right) .
\end{equation*}
By the assumption of \eqref{e.holder}, we can apply standard Schauder estimates, see e.g.\ \cite[Theorems 3.1 and 3.8]{HL}, to deduce \eqref{e.apriori.nablaphi}. The estimates in \eqref{e.bound.gradphi} are proved in \cite[Theorem~4.9 and Proposition~6.2]{AKMbook}. The estimates in \eqref{e.bound.phi} also follow from \cite[Theorem~4.9 and Proposition~6.2]{AKMbook}, combined with the assumption of \eqref{e.holder} and the Schauder estimate in \cite[Corollary~3.2 and Theorem~3.8]{HL}.
\end{proof}
In the next lemma, we provide a convenient representation of~$\nabla \cdot \a \nabla w$ in terms of the correctors.
\begin{lemma}
\label{l.two-scale}
Let $\lambda >0$, $\bar v \in H^1(U_r)$, and let $w \in H^1(U_r)$ be defined by~\eqref{e.def.w}.
Then
\begin{equation*}
\nabla \cdot \left(\a \nabla w - \overline{\a} \nabla \bar v\right) = \nabla \cdot \mathbf F,
\end{equation*}
where the $i$-th component of the vector field $\mathbf F$ is given by
\begin{multline}
\label{e.def.Fi}
\mathbf F_i :=
\sum_{j,k = 1}^d \left(\a_{ij} \phi^{(\lambda)}_{e_k} - \mathbf{S}^{(\lambda)}_{e_k,ij} \right) \partial_{x_j} \partial_{x_k} \bar v \\
+ \sum_{j,k = 1}^d \left(\a_{ij}\left(\partial_{x_j} \phi_{e_k} \ast \Phi_{\lambda^{-1}}\right) + \partial_{x_j} \S_{e_k,ij} \ast \Phi_{\lambda^{-1}}\right)\partial_{x_k} \bar v.
\end{multline}
\end{lemma}
\begin{proof}
The argument is very similar to that for \cite[Lemma~6.6]{AKMbook}, the main difference being that the definition of $\phi^{(\lambda)}_{e_k}$ is slightly different from that of $\phi^\varepsilon_{e_k}$ there. We recall the argument here for the reader's convenience.
Observe that, for each $j \in \{1,\ldots,d\}$,
\begin{equation}
\label{e.split.in.three}
\partial_{x_j} w =
\sum_{k = 1}^d \left( \left(\delta_{jk} + \partial_{x_j} \phi_{e_k}\right)\partial_{x_k} \bar v -\left( \partial_{x_j} \phi_{e_k} \ast \Phi_{\lambda^{-1}}\right)\partial_{x_k} \bar v + \phi^{(\lambda)}_{e_k} \, \partial_{x_j}\partial_{x_k} \bar v \right).
\end{equation}
We start by studying the contribution of the first summand. By~\eqref{e.def.S} and~\eqref{e.Sl}, we have, for every $i,k \in \{1,\ldots, d\}$,
\begin{equation*}
\sum_{j = 1}^d \partial_{x_j} \mathbf{S}^{(\lambda)}_{e_k,ij} = \sum_{j = 1}^d \left( \a_{ij} \left( \delta_{jk} + \partial_{x_j} \phi_{e_k} \right) - \overline{\a}_{ij} \delta_{jk} -\partial_{x_j} \S_{e_k,ij} \ast \Phi_{\lambda^{-1}}\right) .
\end{equation*}
We deduce that, for each $i \in \{1,\ldots,d\}$,
\begin{equation}
\label{e.integr.to.S}
\sum_{j,k = 1}^d \a_{ij} \left(\delta_{jk} + \partial_{x_j} \phi_{e_k}\right) \partial_{x_k} \bar v
= \sum_{j,k = 1}^d \left( \overline{\a}_{ij} \delta_{jk} + \partial_{x_j} \mathbf{S}^{(\lambda)}_{e_k,ij} + \partial_{x_j} \S_{e_k,ij} \ast \Phi_{\lambda^{-1}}\right) \partial_{x_k} \bar v,
\end{equation}
and thus
\begin{multline*}
\sum_{i,j,k = 1}^d \partial_{x_i} \left(\a_{ij} \left(\delta_{jk} + \partial_{x_j} \phi_{e_k}\right) \partial_{x_k} \bar v \right)
= \nabla \cdot \overline{\a} \nabla \bar v\\
+ \sum_{i,j,k = 1}^d \partial_{x_i} \left( \partial_{x_j} \mathbf{S}^{(\lambda)}_{e_k,ij} \, \partial_{x_k} \bar v \right)
+ \sum_{i,j,k = 1}^d \partial_{x_i} \left( \left(\partial_{x_j} \S_{e_k,ij} \ast \Phi_{\lambda^{-1}} \right)\partial_{x_k} \bar v \right) .
\end{multline*}
By the skew-symmetry of $\mathbf{S}^{(\lambda)}_e$, we have
\begin{align*}
0 & = \sum_{i,j,k = 1}^d \partial_{x_i} \partial_{x_j} \left( \mathbf{S}^{(\lambda)}_{e_k,ij}\, \partial_{x_k} \bar v\right) \\
& = \sum_{i,j,k = 1}^d \partial_{x_i} \left( \partial_{x_j} \mathbf{S}^{(\lambda)}_{e_k,ij} \, \partial_{x_k} \bar v\right) + \sum_{i,j,k = 1}^d \partial_{x_i} \left( \mathbf{S}^{(\lambda)}_{e_k,ij} \, \partial_{x_j} \partial_{x_k} \bar v \right) ,
\end{align*}
and thus
\begin{multline*}
\sum_{i,j,k = 1}^d \partial_{x_i} \left(\a_{ij} \left(\delta_{jk} + \partial_{x_j} \phi_{e_k}\right) \partial_{x_k} \bar v \right)
= \nabla \cdot \overline{\a} \nabla \bar v\\
- \sum_{i,j,k = 1}^d \partial_{x_i} \left( \mathbf{S}^{(\lambda)}_{e_k,ij} \, \partial_{x_j} \partial_{x_k} \bar v \right)
+ \sum_{i,j,k = 1}^d \partial_{x_i} \left( \left(\partial_{x_j} \S_{e_k,ij} \ast \Phi_{\lambda^{-1}} \right) \partial_{x_k} \bar v\right) .
\end{multline*}
Recalling \eqref{e.split.in.three}, we obtain the announced result.
\end{proof}
We next present the proof of Theorem~\ref{t.twoscale}, which can be compared to the one of~\cite[Theorem 6.9]{AKMbook}.
\begin{proof}[Proof of Theorem~\ref{t.twoscale}]
We will proceed by proving first \eqref{e.twoscale}, and then the $H^1$, $L^2$ and $H^{-1}$ estimates appearing in \eqref{e.error.estimates}, in this order. We decompose the arguments into seven steps.
\smallskip
\emph{Step 1.} We prove \eqref{e.twoscale}. In view of Lemma~\ref{l.two-scale}, it suffices to show that, for the vector field $\mathbf F$ defined in \eqref{e.def.Fi},
\begin{equation}
\label{e.est.F}
\|\mathbf F\|_{\underline L^2(U_r)} \le \O_s \left( C \ell(\lambda)\|\bar v\|_{\underline H^{2}(U_r)} + C \lambda^{\frac d 2} \, \|\bar v\|_{\underline H^{1}(U_r)} \right).
\end{equation}
We estimate each of the terms appearing in the definition of $\mathbf F$.
By Proposition~\ref{p.corrector} and \eqref{e.Osums}, we have, for every $i,j,k \in \{1,\ldots,d\}$,
\begin{equation*}
\left\| \left(\a_{ij} \phi^{(\lambda)}_{e_k} - \mathbf{S}^{(\lambda)}_{e_k,ij} \right) \partial_{x_j} \partial_{x_k} \bar v \right\|_{\underline L^2(U_r)} \le \O_s \left( C \ell(\lambda) \|\bar v\|_{\underline H^2(U_r)} \right) ,
\end{equation*}
as well as
\begin{equation*}
\left\|\left(\a_{ij}\left(\partial_{x_j} \phi_{e_k} \ast \Phi_{\lambda^{-1}}\right) + \partial_{x_j} \S_{e_k,ij} \ast \Phi_{\lambda^{-1}}\right)\partial_{x_k} \bar v\right\|_{\underline L^2(U_r)} \le \O_s \left(C \lambda^\frac d 2 \| \bar v\|_{\underline H^{1}(U_r)} \right),
\end{equation*}
and thus \eqref{e.est.F} follows.
\smallskip
\emph{Step 2.}
In order to show \eqref{e.error.estimates}, we first need to evaluate the contribution of a boundary layer.
For every $\ell \ge 0$, we write $\zeta_\ell := \ell^{-d} \zeta(\ell^{-1} \, \cdot\,)$ (recall the definition of $\zeta$ in \eqref{e.def.zeta}) and
\begin{equation}
\label{e.def.url}
U_{r,\ell} := \left\{x \in U_r \ : \ \dist(x,\partial U_r) > \ell\right\}.
\end{equation}
With the definition of $\ell(\lambda)$ given in \eqref{e.def.llambda}, we set
\begin{equation*}
T := \left(\mathbf{1}_{{\mathbb{R}^d} \setminus U_{r,2\ell(\lambda)}}\ast \zeta_{\ell(\lambda)}\right) \sum_{k = 1}^d \phi^{(\lambda)}_{e_k} \, \partial_{x_k} \bar v .
\end{equation*}
In the next step, we will use the function $T$ as a test function to obtain an upper bound on the size of the actual boundary layer. In this step, we show that there exists $C(s,U,\Lambda,\alpha,d) < \infty$ such that
\begin{equation}
\label{e.estim.nablaT}
\|\nabla T \|_{\underline L^2(U_r)} \le \O_s \left( C \,\ell(\lambda)^\frac 1 2 \,\| \bar v \|_{\underline H^1(U_r)}^\frac 1 2 \, \|\bar v\|_{\underline H^2(U_r)}^\frac 1 2 + C \ell(\lambda) \|\bar v\|_{\underline H^2(U_r)} \right)
\end{equation}
and
\begin{equation}
\label{e.estim.T}
\|T\|_{\underline L^2(U_r)} \le \O_s \left( C \, \ell(\lambda)^\frac 3 2 \,\| \bar v \|_{\underline H^1(U_r)}^\frac 1 2 \, \|\bar v\|_{\underline H^2(U_r)}^\frac 1 2 \right) .
\end{equation}
By the chain rule,
\begin{equation*}
\|\nabla T\|_{L^2(U_r)}
\le C \sum_{k = 1}^d \left\| \left(\frac{ |\nabla \bar v| }{\ell(\lambda)} + |\nabla^2\bar v| \right) \, \left|\phi^{(\lambda)}_{e_k}\right| + \left| \nabla \bar v \right| \, \left|\nabla \phi^{(\lambda)}_{e_k}\right| \right\|_{L^2(U_r\setminus U_{r,3\ell(\lambda)})}.
\end{equation*}
By Proposition~\ref{p.corrector} and \eqref{e.Osums}, we have
\begin{equation*}
\left\| |\nabla^2\bar v | \, \left| \phi^{(\lambda)}_{e_k} \right| \right\|_{L^2(U_r \setminus U_{r,3\ell(\lambda)})}
\le \O_s \left( C \ell(\lambda) \|\nabla^2 \bar v\|_{L^2(U_r)} \right) .
\end{equation*}
Similarly,
\begin{equation}
\label{e.for.estim.T}
\left\| \frac{ |\nabla \bar v | }{\ell(\lambda)} \, \left| \phi^{(\lambda)}_{e_k} \right| \right\|_{L^2(U_r \setminus U_{r,3\ell(\lambda)})} \le \O_s \left( C \|\nabla \bar v\|_{L^2(U_r \setminus U_{r,3\ell(\lambda)})} \right) ,
\end{equation}
and by Proposition~\ref{p.trace},
\begin{equation}
\label{e.est.boundary.barv}
\|\nabla \bar v \|_{L^2(U_r \setminus U_{r,3\ell(\lambda)})} \le C \, \ell(\lambda)^\frac 1 2 \, r^{\frac d 2} \, \|\bar v\|_{\underline H^1(U_r)}^\frac 1 2 \, \|\bar v\|_{\underline H^2(U_r)}^\frac 1 2.
\end{equation}
Finally, using again Proposition~\ref{p.corrector} and \eqref{e.Osums}, we have
\begin{equation*}
\left\| |\nabla \bar v | \, \left| \nabla \phi^{(\lambda)}_{e_k} \right| \right\|_{L^2(U_r \setminus U_{r,3\ell(\lambda)})} \le \O_s \left( C \|\nabla \bar v\|_{L^2(U_r \setminus U_{r,3\ell(\lambda)})} \right) ,
\end{equation*}
and we can appeal once more to \eqref{e.est.boundary.barv} to estimate the norm of $\nabla \bar v$ on the right side above. This completes the proof of \eqref{e.estim.nablaT}. The estimate \eqref{e.estim.T} follows from \eqref{e.for.estim.T} and \eqref{e.est.boundary.barv}.
\smallskip
\emph{Step 3.}
We now evaluate the size of the boundary layer $b \in H^1(U_r)$ defined as the solution of
\begin{equation}
\label{e.eq.b}
\left\{
\begin{aligned}
& \left(\mu^2 - \nabla \cdot \a \nabla\right) b = 0 & \quad \text{in } & U_r, \\
& b = \sum_{k = 1}^d \phi^{(\lambda)}_{e_k} \, \partial_{x_k} \bar v & \quad \text{on } & \partial U_r.
\end{aligned}
\right.
\end{equation}
Since $T$ and $b$ share the same boundary condition on $\partial U_r$, by the variational formulation of \eqref{e.eq.b}, we have
\begin{equation*}
\int_{U_r} \left( \mu^2 b^2 + \nabla b \cdot \a \nabla b \right) \le \int_{U_r} \left( \mu^2 T^2 + \nabla T \cdot \a \nabla T \right).
\end{equation*}
By the result of the previous step, we thus obtain, for every $\mu \in [0,\lambda]$,
\begin{multline}
\label{e.estim.b}
\mu \|b\|_{\underline L^2(U_r)} + \|\nabla b\|_{\underline L^2(U_r)}
\\
\le \O_s \left( C \,\ell(\lambda)^\frac 1 2 \,\| \bar v \|_{\underline H^1(U_r)}^\frac 1 2 \, \|\bar v\|_{\underline H^2(U_r)}^\frac 1 2 + C \ell(\lambda) \|\bar v\|_{\underline H^2(U_r)} \right) .
\end{multline}
\smallskip
\emph{Step 4.}
We are now prepared to prove that
\begin{multline}
\label{e.announce.X1}
\left\| \nabla (v - w) \right\|_{\underline L^2(U_r)} + \mu \|v-w\|_{\underline L^2(U_r)}
\\
\le \O_s \left( C \left(\mu \ell(\lambda) + \lambda^\frac d 2\right) \|\bar v\|_{\underline H^1(U_r)} + C \ell(\lambda)^\frac 1 2 \,\| \bar v \|_{\underline H^1(U_r)}^\frac 1 2 \, \|\bar v\|_{\underline H^2(U_r)}^\frac 1 2 + C \ell(\lambda) \|\bar v\|_{\underline H^2(U_r)} \right).
\end{multline}
For concision, we define
\begin{equation*}
\mathcal{X}_1 := \|-\nabla \cdot (\overline{\a} \nabla \bar v - \a \nabla w)\|_{\underline H^{-1}(U_r)},
\end{equation*}
and recall that, by \eqref{e.twoscale},
\begin{equation}
\label{e.estim.X1}
\mathcal{X}_1 \le \O_s \left( C \ell(\lambda) \|\bar v\|_{\underline H^2(U_r)} + C \lambda^\frac d 2 \|\bar v\|_{\underline H^1(U_r)} \right) .
\end{equation}
Moreover, by \eqref{e.eq.v.barv} and \eqref{e.eq.b},
\begin{align*}
-\nabla \cdot (\overline{\a} \nabla \bar v - \a \nabla w)
&
= -\nabla \cdot \a \nabla (v-w) + \mu^2 (v-\bar v) \\
& = -\nabla \cdot \a \nabla (v-w+b) + \mu^2(v-\bar v+b) .
\end{align*}
Since $v-w+b \in H^1_0(U_r)$, we deduce that
\begin{multline*}
|U_r|^{-1} \int_{U_r} \left(\nabla (v-w+b) \cdot \a \nabla (v-w+b) + \mu^2 (v-w+b)(v-\bar v+b) \right)
\\
\le \mathcal{X}_1 \|\nabla(v-w+b)\|_{\underline L^2(U_r)} ,
\end{multline*}
and by the uniform ellipticity of $\a$ and H\"older's inequality,
\begin{multline*}
\|\nabla (v-w+b)\|_{\underline L^2(U_r)}^2 + \mu^2 \|v-w+b\|_{\underline L^2(U_r)}^2
\\
\le C \mathcal{X}_1 \|\nabla (v-w+b)\|_{\underline L^2(U_r)} + \mu^2 \|w-\bar v\|_{\underline L^2(U_r)} \, \|v-w+b\|_{\underline L^2(U_r)}.
\end{multline*}
Using Proposition~\ref{p.corrector} and \eqref{e.Osums}, we verify that
\begin{equation}
\label{e.est.w-barv}
\|w - \bar v\|_{\underline L^2(U_r)} \le \O_s \left( C \ell(\lambda) \|\bar v\|_{\underline H^1(U_r)} \right) .
\end{equation}
Combining these two estimates with \eqref{e.estim.X1} and Young's inequality, we obtain that
\begin{multline*}
\|\nabla (v-w+b)\|_{\underline L^2(U_r)} + \mu \|v-w+b\|_{\underline L^2(U_r)} \\
\le \O_s \left(C \left(\mu \ell(\lambda) + \lambda^\frac d 2\right) \|\bar v\|_{\underline H^1(U_r)} + C \ell(\lambda) \|\bar v\|_{\underline H^2(U_r)} \right) .
\end{multline*}
An application of \eqref{e.estim.b} then yields the announced estimate \eqref{e.announce.X1}.
\smallskip
\emph{Step 5.}
In this step, we complete the proof of the fact that
$\|v-w\|_{\underline H^1(U_r)}$
is bounded by the right side of \eqref{e.error.estimates}. In view of \eqref{e.announce.X1}, it suffices to show that
\begin{multline}
\label{e.basic.l2}
r^{-1} \|v-w\|_{\underline L^2(U_r)} \\
\le \O_s \left( C \left(\mu \ell(\lambda) + \lambda^\frac d 2\right) \|\bar v\|_{\underline H^1(U_r)} + C \ell(\lambda)^\frac 1 2 \,\| \bar v \|_{\underline H^1(U_r)}^\frac 1 2 \, \|\bar v\|_{\underline H^2(U_r)}^\frac 1 2 + C \ell(\lambda) \|\bar v\|_{\underline H^2(U_r)} \right).
\end{multline}
By \eqref{e.estim.nablaT} and \eqref{e.announce.X1}, we have
\begin{multline*}
\|\nabla (v-w+T)\|_{\underline L^2(U_r)} \\
\le \O_s \left( C \left(\mu \ell(\lambda) + \lambda^\frac d 2\right)\|\bar v\|_{\underline H^1(U_r)} + C \ell(\lambda)^\frac 1 2 \,\| \bar v \|_{\underline H^1(U_r)}^\frac 1 2 \, \|\bar v\|_{\underline H^2(U_r)}^\frac 1 2 + C \ell(\lambda) \|\bar v\|_{\underline H^2(U_r)} \right).
\end{multline*}
The estimate \eqref{e.basic.l2} then follows by the Poincar\'e inequality and \eqref{e.estim.T}.
\smallskip
\emph{Step 6.}
We now complete the proof that $\left( \mu + r^{-1} \right) \|v-\bar v\|_{\underline L^2(U_r)}$ is bounded by the right side of \eqref{e.error.estimates}.
For $\mu \ge r^{-1}$, the result follows from \eqref{e.announce.X1} and \eqref{e.est.w-barv}, while for $\mu \le r^{-1}$, it follows from \eqref{e.basic.l2} and \eqref{e.est.w-barv}.
\smallskip
\emph{Step 7.}
We finally complete the proof of \eqref{e.error.estimates} by showing the estimate for the $H^{-1}$ norm of $v - \bar v$.
If $\mu \le r^{-1}$, then the conclusion is immediate from the estimate on the $L^2$ norm of $v-\bar v$, by scaling. Otherwise, by the equations for $v$ and $\bar v$, we have
\begin{equation*}
\mu^2(v- \bar v) = \nabla \cdot \left( \a \nabla v - \overline{\a} \nabla \bar v \right),
\end{equation*}
and moreover,
\begin{align*}
& \left\| \nabla \cdot \left( \a \nabla v - \overline{\a} \nabla \bar v\right) \right\|_{\underline H^{-1}(U_r)}
\\
& \qquad \le
\left\| \nabla \cdot \left( \a \nabla w - \overline{\a} \nabla \bar v\right) \right\|_{\underline H^{-1}(U_r)} + \left\| \nabla \cdot \left( \a \nabla v - \a \nabla w\right) \right\|_{\underline H^{-1}(U_r)}
\\
& \qquad \le
\left\| \nabla \cdot \left( \a \nabla w - \overline{\a} \nabla \bar v\right) \right\|_{\underline H^{-1}(U_r)} + C \left\| \nabla v - \nabla w \right\|_{\underline L^2(U_r)}.
\end{align*}
The terms on the right side above have been estimated in \eqref{e.twoscale} and \eqref{e.announce.X1} respectively, so the proof is complete.
\end{proof}
We next give the proof of the main result.
\begin{proof}[Proof of Theorem~\ref{t.contract}]
Let $u,v,u_0,\bar u, \tilde u \in H^1(U_r)$ be as in the statement of Theorem~\ref{t.contract}. We first show the a priori estimates
\begin{equation}
\label{e.apriori.u0}
\lambda \|u_0\|_{\underline L^2(U_r)} + \|\nabla u_0\|_{\underline L^2(U_r)} \le C \|u-v\|_{\underline H^1(U_r)} ,
\end{equation}
and
\begin{equation}
\label{e.apriori.baru}
\|\bar u\|_{\underline H^1(U_r)} + \lambda^{-1} \|\bar u\|_{\underline H^2(U_r)} \le C \|u-v\|_{\underline H^1(U_r)} .
\end{equation}
By the variational formulation of the equation for $u_0\in H^1_0(U_r)$, we have
\begin{equation*}
\int_{U_r} \left( \lambda^2 u_0^2 + \nabla u_0 \cdot \a \nabla u_0 \right) = \int_{U_r} \nabla u_0 \cdot \a \nabla (u-v).
\end{equation*}
By H\"older's and Young's inequalities and the uniform ellipticity of $\a$, we get~\eqref{e.apriori.u0}.
Using the equation \eqref{e.alt.eq.baru} satisfied by $\bar u \in H^1_0(U_r)$ and the estimate \eqref{e.apriori.u0}, we deduce
\begin{align*}
\|\nabla \bar u\|_{\underline L^2(U_r)} & \le C \|\nabla (u-v-u_0)\|_{\underline L^2(U_r)} \\
& \le C \|\nabla (u-v)\|_{\underline L^2(U_r)}.
\end{align*}
By Proposition~\ref{p.h2.estim} and the $L^2$ estimate in \eqref{e.apriori.u0},
we also have
\begin{equation*}
\|\bar u\|_{\underline H^2(U_r)} \le C \lambda^2 \|u_0\|_{\underline L^2(U_r)} \le C \lambda \|u-v\|_{\underline H^1(U_r)},
\end{equation*}
as announced in \eqref{e.apriori.baru}.
\smallskip
We now introduce the two-scale expansion
\begin{equation*}
w := \bar u + \sum_{k = 1}^d \phi^{(\lambda)}_{e_k} \, \partial_{x_k} \bar u.
\end{equation*}
Using the equation for $\bar u$ in \eqref{e.alt.eq.baru} and Theorem~\ref{t.twoscale} with $\mu = 0$, we obtain
\begin{multline*}
\|v + u_0 + w - u\|_{\underline H^1(U_r)} \\
\le \O_s \left( C \lambda^{\frac d 2} \|\bar u\|_{\underline H^1(U_r)} + C \ell(\lambda)^\frac 1 2 \|\bar u\|_{\underline H^1(U_r)}^\frac 1 2 \, \|\bar u\|_{\underline H^2(U_r)}^\frac 1 2 + C \ell(\lambda) \|\bar u\|_{\underline H^2(U_r)} \right) ,
\end{multline*}
and thus, by \eqref{e.apriori.baru},
\begin{equation}
\label{e.h-1.expansion}
\|v + u_0 + w - u \|_{\underline H^1(U_r)} \le \O_s \left( C \ell(\lambda)^\frac 1 2 \lambda^\frac 1 2 \|u-v\|_{\underline H^1(U_r)} \right) .
\end{equation}
In order to complete the proof of Theorem~\ref{t.contract}, it remains to estimate the $H^1$ norm of $w- \tilde u$. By the equation for $\tilde u$, Theorem~\ref{t.twoscale} and \eqref{e.apriori.baru}, we have
\begin{align*}
& \left\| w - \tilde u \right\|_{\underline H^1(U_r)}
\\
& \qquad \le \O_s \bigg( C \left(\lambda \ell(\lambda) + \lambda^\frac d 2\right) \|\bar u\|_{\underline H^1(U_r)} \\
& \qquad \qquad \qquad \qquad \qquad + C \ell(\lambda)^\frac 1 2 \,\| \bar u \|_{\underline H^1(U_r)}^\frac 1 2 \, \|\bar u\|_{\underline H^2(U_r)}^\frac 1 2 + C \ell(\lambda) \|\bar u\|_{\underline H^2(U_r)} \bigg)
\\
& \qquad \le \O_s \left( C \ell(\lambda)^\frac 1 2 \lambda^\frac 1 2 \|u-v\|_{\underline H^1(U_r)} \right) ,
\end{align*}
as desired.
\end{proof}
\section{Numerical results}
\label{s.numerics}
In this section, we report on numerical tests demonstrating the performance of the iterative method described in Theorem~\ref{t.contract}. The code used in the tests can be consulted at
\begin{center}
\small{\url{https://github.com/ahannuka/homo_mg}}
\end{center}
Throughout this section, we consider a two-dimensional {\em random checkerboard} coefficient field $x \mapsto \a(x)$, which is defined as follows: let $(b(z))_{z \in \mathbb{Z}^2}$ be a family of independent random variables such that for every $z \in \mathbb{Z}^2$,
\begin{equation*}
\P \left[ b(z) = 1 \right] = \P \left[ b(z) = 9 \right] = \frac 1 2.
\end{equation*}
We then set, for every $x \in z + [0,1)^2$,%
\begin{equation*}
\a(x) := b(z) \, I_2,
\end{equation*}
where $I_2$ denotes the $2$-by-$2$ identity matrix. For this particular coefficient field, the homogenized matrix can be computed analytically as $\overline{\a} = 3 I_2$ (see \cite[Exercise~2.3]{AKMbook}). When such an analytical expression does not exist, the homogenized coefficient can be approximated numerically, for example, by using the method presented in \cite{efficient}.
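As an illustration, a realization of this coefficient field can be generated in a few lines (a sketch; only the sampling of one independent value per unit cell matters here).
\begin{verbatim}
# Sample the random checkerboard a(x) = b(z) I_2 on U_r = (0,r)^2  (sketch).
import numpy as np

def sample_checkerboard(r, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # one independent fair coin per unit cell z + [0,1)^2, values 1 or 9
    return rng.choice([1.0, 9.0], size=(r, r))

def a_of_x(b, x):
    # piecewise-constant evaluation: a(x) = b(z) I_2 for x in z + [0,1)^2
    z = np.floor(x).astype(int)
    return b[z[0], z[1]] * np.eye(2)
\end{verbatim}
For this two-dimensional, two-phase checkerboard, the value $\overline{\a} = \sqrt{1 \cdot 9} \, I_2 = 3 I_2$ is the geometric mean of the two phases, consistently with the analytical formula recalled above.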
\begin{figure}[ht]
\includegraphics[scale=0.7]{Acof_r=100-eps-converted-to.pdf}
\includegraphics[scale=0.35]{solution_r=100.jpg}
\caption{On the left, a typical realization of the coefficient field $\a(x)$, with $r=100$ (yellow corresponds to the value $1$ and blue to the value $9$). On the right, the corresponding solution.}
\label{fig:sol_cof}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.7]{solution_ue_y=55-eps-converted-to.pdf}
\includegraphics[scale=0.7]{solution_uh_y=55-eps-converted-to.pdf}
\end{center}
\caption{On the left, the FE-solution to the heterogeneous problem, and on the right, the FE-solution to the corresponding homogenized problem. Both solutions are plotted along the line $y=55$. The fast oscillation in the left figure is clearly visible.}
\label{fig:sol_y55}
\end{figure}
For each $r > 0$, we write $U_r := (0,r)^2$. We aim to compute the solution to the continuous partial differential equation in \eqref{e.model.problem} with $w = 0$ (null Dirichlet boundary condition) and load function $f = 1$. We discretize this problem using a first-order finite element method. Let $\mathcal{T}$ be a triangular mesh of the domain $U_r$ constructed by first dividing each cell~$z + [0,1)^2$ ($z \in \mathbb{Z}^2$) into two triangles, and then using three levels of uniform mesh refinement. This results in a sufficiently fine mesh to capture the oscillations present in the exact solution $u$. The first order finite element space
\begin{equation*}
V_h := \left\{ u \in H^1_0(U_r) \; | \; u_{|K} \in P^1(K) \; \quad \forall K \in \mathcal{T} \right\}
\end{equation*}
with standard nodal basis is used in all computations. The finite element solution $u_h \in V_h$ satisfies
\begin{equation}
\label{eq:fem}
\forall v_h \in V_h, \quad ( \a \nabla u_h, \nabla v_h) = (f,v_h) .
\end{equation}
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.7]{rho_dist_rev-eps-converted-to.pdf}
\includegraphics[scale=0.7]{cfactor_rev-eps-converted-to.pdf}
\end{center}
\caption{On the left, the empirical distribution of the factor $\rho$ for~$\lambda = 0.1$ and $r=100$, based on $100$ runs. On the right, the error in the $H^1$ seminorm for $r=100$ and~$\lambda = 0.1$, after each iteration. The method converges after $8$ iterations.}
\label{fig:cdist}
\end{figure}
A typical realization of the coefficient field $\a(x)$ and of the corresponding exact solution $u$ are visualized in Figure \ref{fig:sol_cof}. The high-frequency oscillations in the solution are clearly visible in Figure \ref{fig:sol_y55}, where the solution is visualized along the line $y=55$.
\smallskip
Our interest lies in the contraction factor of the iterative procedure. The contraction factor is studied by first solving the finite dimensional problem (\ref{eq:fem}) exactly using a direct solver. Then a sequence of approximate solutions $\{ u_h^{(i)} \}_{i=1}^N$ is generated by starting from $u_h^{(1)} = 0$ and applying the iterative procedure described in Theorem~\ref{t.contract}. The logarithm of the error $\|\nabla( u_h - u_h^{(i)}) \|_{L^2(U_r)}$ is computed for each $i \in \{1,\ldots,10\}$, a regression line is fitted, and the slope of this line is denoted by $\rho$. It is our numerical estimate of the logarithm of the contraction factor; roughly speaking,
\begin{equation*}
\rho \approx \log{ \left( \frac{ \| \nabla (u_h - u_h^{(i+1)} )\|_{L^2(U_r)}}{ \| \nabla (u_h - u_h^{(i)} ) \|_{L^2(U_r)}} \right)}
\end{equation*}
(``$\log$'' denotes the natural logarithm.)
The iteration is said to converge when the relative error is smaller than $10^{-9}$. Past this threshold, the error between the exact and the iterative solutions is smaller than the accuracy of the discretization itself, and thus cannot be measured.
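Concretely, $\rho$ can be extracted from the recorded error norms by a least-squares fit of a line to their logarithms, for instance as follows (a sketch; \texttt{errors} is assumed to contain the values $\|\nabla(u_h - u_h^{(i)})\|_{L^2(U_r)}$ for $i = 1, \ldots, 10$).
\begin{verbatim}
# Slope of the regression line through the points (i, log error_i)  (sketch).
import numpy as np

def contraction_exponent(errors):
    i = np.arange(1, len(errors) + 1)
    slope, _intercept = np.polyfit(i, np.log(errors), 1)
    return slope   # numerical estimate of rho = log(contraction factor)
\end{verbatim}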
\smallskip
Since the coefficient field is random, the contraction factor will vary for different realizations of $\a$. For the choice of $\lambda = 0.1$ and $r = 100$, the empirical distribution of the contraction factor is given in Figure \ref{fig:cdist}, based on one hundred samples of the coefficient field. Apart from the purpose of displaying this histogram, each of our estimates for $\rho$ is an average over ten realizations of the coefficient field.
\smallskip
In our first test, the parameter $\lambda$~is successively fixed to $\lambda = 0.1$, $0.2$, and $0.4$. The size of the domain $r$ is varied between $10$ and $200$. The averaged factor $\rho$ is visualized on the left side of Figure \ref{fig:rho_fig}. The results are in excellent agreement with Theorem~\ref{t.contract}. After a pre-asymptotic region, the contraction factor becomes independent of the size of the domain $r$. The pre-asymptotic region is due to the fact that for small values of $r$, the pre- and post-smoothing steps are essentially sufficient to solve the equation. We emphasize that the contraction factor remains very good, of the order of $0.1$, even for the relatively large value of $\lambda = 0.4$.
\smallskip
In the second test, the size of the domain $r$ takes values $r = 100$, $200$, and~$300$, while $\lambda$~is varied between~$0.01$~and~$0.5$. For each~$\lambda$, the averaged factor~$\rho$ is computed based on ten simulation runs. The results are presented on the right side of Figure \ref{fig:rho_fig}. After a pre-asymptotic region, the contraction factor (the exponential of the averaged factor~$\rho$) behaves like $\lambda^{1/2}$, as predicted by Theorem~\ref{t.contract}. The pre-asymptotic region is roughly characterized by the scaling $r \lesssim 10 \lambda^{-1}$.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.7]{rho_L=01_rev-eps-converted-to.pdf}\hfill
\includegraphics[scale=0.7]{rho_lambda_rev-eps-converted-to.pdf}
\end{center}
\caption{On the left, averaged factor $\rho$ as a function of $r$, for~$\lambda = 0.1$, $0.2$, and~$0.4$. On the right, the exponential of the averaged factor $\rho$ as a function of $\lambda$ for $r=100$, $200$, and $300$. In all cases, the average is computed from ten simulation runs.}
\label{fig:rho_fig}
\end{figure}
| {
"timestamp": "2019-01-11T02:08:56",
"yymm": "1803",
"arxiv_id": "1803.03551",
"language": "en",
"url": "https://arxiv.org/abs/1803.03551",
"abstract": "We introduce a new iterative method for computing solutions of elliptic equations with random rapidly oscillating coefficients. Similarly to a multigrid method, each step of the iteration involves different computations meant to address different length scales. However, we use here the homogenized equation on all scales larger than a fixed multiple of the scale of oscillation of the coefficients. While the performance of standard multigrid methods degrades rapidly under the regime of large scale separation that we consider here, we show an explicit estimate on the contraction factor of our method which is independent of the size of the domain. We also present numerical experiments which confirm the effectiveness of the method, with openly available source code.",
"subjects": "Numerical Analysis (math.NA); Analysis of PDEs (math.AP)",
"title": "An iterative method for elliptic problems with rapidly oscillating coefficients",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9848109480714625,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7076796190915576
} |
https://arxiv.org/abs/1803.08033 | Hereditarily minimal topological groups | We study locally compact groups having all subgroups minimal. We call such groups hereditarily minimal. In 1972 Prodanov proved that the infinite hereditarily minimal compact abelian groups are precisely the groups $\mathbb Z_p$ of $p$-adic integers. We extend Prodanov's theorem to the non-abelian case at several levels. For infinite hypercentral (in particular, nilpotent) locally compact groups we show that the hereditarily minimal ones remain the same as in the abelian case. On the other hand, we classify completely the locally compact solvable hereditarily minimal groups, showing that in particular they are always compact and metabelian.The proofs involve the (hereditarily) locally minimal groups, introduced similarly. In particular, we prove a conjecture by He, Xiao and the first two authors, showing that the group $\mathbb Q_p\rtimes \mathbb Q_p^*$ is hereditarily locally minimal, where $\mathbb Q_p^*$ is the multiplicative group of non-zero $p$-adic numbers acting on the first component by multiplication. Furthermore, it turns out that the locally compact solvable hereditarily minimal groups are closely related to this group. | \section{Introduction}
A Hausdorff topological group $(G, \tau)$ is called {\it minimal} if there exists no Hausdorff group topology on $G$ which is strictly coarser than $\tau$ (see \cite{Doitch,S71}).
This class of groups, containing all compact ones, has been extensively studied over the last five decades (see the papers \cite{B,DS,DU,EDS,Meg,P}, the surveys \cite{DMe,S74} and the book \cite{DPS}). Since it is not stable under taking quotients, the following stronger notion was introduced in \cite{DP}: a minimal group $G$ is {\it totally minimal} if the quotient group $G/N$ is minimal for every closed normal subgroup $N$ of $G$.
This is precisely a group $G$ satisfying the open mapping theorem, i.e., every continuous surjective homomorphism with domain $G$ and codomain any Hausdorff topological group is open. Clearly, every compact group is totally minimal.
Examples of locally compact non-compact minimal groups can be found in \cite{RS}; they are all non-abelian. Indeed, Stephenson \cite{S71} noticed much earlier that local compactness and minimality jointly imply compactness for abelian groups:
\begin{fact}\cite[Theorem 1]{S71} \label{Steph:Thm} A minimal locally compact abelian group is compact.
\end{fact}
In particular, ${\mathbb R}$ is not minimal. This failure of minimality to embrace also local compactness was repaired by Morris and Pestov \cite{MP}. They called {\it locally minimal} a topological group $(G,\tau)$ having a neighborhood $V$ of the identity of $G$ such that for every coarser Hausdorff group topology $\sigma\subseteq \tau$ with $V\in \sigma$, one has $\sigma=\tau$. Clearly, every minimal group is locally minimal. Moreover, every locally compact group is locally minimal.
Neither minimality nor local minimality are inherited by all subgroups (although they are inherited by closed central subgroups). This justifies the following definition, crucial for this paper (these properties are abbreviated sometimes to $\operatorname{HM}$ and $\operatorname{HLM}$ in the sequel):
\begin{defi}
A topological group $G$ is said to be {\em hereditarily} {\em (locally)} {\em minimal}, if every subgroup of $G$ is (locally) minimal.
\end{defi}
The following theorem of Prodanov provided an interesting characterization of the group of $p$-adic integers ${\mathbb Z}_p$ in terms of hereditary minimality.
\begin{fact}\label{TeoP}{\em \cite{P}}\label{fac:HMZ}
An infinite compact abelian group $K$ is isomorphic to ${\mathbb Z}_p$ for some prime $p$ if and only if $K$ is hereditarily minimal.
\end{fact}
Dikranjan and Stoyanov classified all hereditarily minimal \ abelian groups.
\begin{fact}\label{DS-theorem}{\em \cite{DS}}
Let $G$ be a topological abelian group. Then the following conditions are equivalent:
\begin{enumerate}
\item each subgroup of $G$ is totally minimal;
\item $G$ is hereditarily minimal;
\item $G$ is topologically isomorphic to one of the following groups:\begin{enumerate} [(a)]
\item a subgroup of ${\mathbb Z}_p$ for some prime $p$,
\item a direct sum $\bigoplus F_p$, where for each prime $p$, the group $F_p$ is a finite abelian $p$-group,
\item $X\times F_p$, where $X$ is a rank-one subgroup of ${\mathbb Z}_p$, and $F_p$ is a finite abelian $p$-group.
\end{enumerate} \end{enumerate}
\end{fact}
Note that only the groups from item (a) can be infinite locally compact. Indeed, using Fact \ref{Steph:Thm} one can extend Fact \ref{fac:HMZ} to locally compact abelian groups
as follows:
\begin{corol}\label{cor:spr}
An infinite hereditarily minimal\ locally compact abelian group is isomorphic to ${\mathbb Z}_p$.
\end{corol}
The main aim of this paper is to extend this result to non-abelian groups at various levels of non-commutativity. In particular, we obtain an extension to hypercentral (e.g., nilpotent) groups (Corollary \ref{cor:nprodanov}), yet some non-abelian groups may appear beyond the class of hypercentral groups (see Theorem C or Theorem D for a complete description in the case of solvable groups).
Let us mention here that without any restraint on commutativity one can find examples of very exotic hereditarily minimal \ groups even in the discrete case (see \S 3.1, entirely dedicated to discrete hereditarily minimal \ groups and their connection to categorically compact groups).
As far as hereditarily locally minimal \ groups are concerned, the following question was raised in \cite[Problem 7.49]{DMe} in these terms:
{\em if a connected locally compact group $G $ is hereditarily locally minimal, is $G$ necessarily a Lie group}? Inspired by this question and the above results, hereditarily locally minimal groups were characterized in \cite{DHXX} among the locally compact groups which are either abelian or connected as follows:
\begin{fact}{\rm \cite[Corollary 1.11]{DHXX}}\label{fac:ext}
For a locally compact group $K$ that is either abelian or connected, the following conditions are equivalent:
\begin{enumerate}[(a)]
\item $K$ is a hereditarily locally minimal \ group;
\item $K$ is either a Lie group or has an open subgroup isomorphic to ${\mathbb Z}_p$ for some prime $p.$\end{enumerate}
\end{fact}
This provides, among others, a characterization of the connected Lie groups as connected locally compact hereditarily locally minimal \ groups.
\subsection{Main Results}
It was mentioned in \cite{DHXX}, that item (a) in Fact \ref{fac:ext} might not be equivalent to item (b) in the non-abelian case, and it was conjectured (see \cite[Conjecture 5.1]{DHXX}) that a possible counter-example could be the group $({\mathbb Q}_p,+)\rtimes {\mathbb Q}_p^*$, where ${\mathbb Q}_p^* = ({\mathbb Q}_p\setminus\{0\},\cdot)$ (here the natural action of ${\mathbb Q}_p^*$ on ${\mathbb Q}_p$ by multiplication is intended). We prove that this conjecture holds true.
\begin{theoremA} Let $p$ be a prime. Then $({\mathbb Q}_p,+)\rtimes {\mathbb Q}_p^*$ is hereditarily locally minimal. \end{theoremA}
We report in Theorem \ref{thm:retro} a result of Megrelishvili, ensuring that the group in Theorem A is minimal.
Although it is not hereditarily minimal (e.g., its subgroup $({\mathbb Q}_p,+)$ is not minimal by Fact \ref{Steph:Thm}), we show in the classification Theorem D that it contains (up to isomorphism) most locally compact solvable HM groups.
\medskip
Clearly, a hereditarily minimal\ group is hereditarily locally minimal. In order to ensure the converse implication, we give the next definition.
\begin{defi}\label{def:CFN}
For a topological group $G$ we consider the following properties:\\
($\mathcal{N}_{fn}$) $G$ contains no finite normal non-trivial subgroups;\\
($\mathcal{C}_{fn}$)
Every infinite compact subgroup of $G$ satisfies ($\mathcal{N}_{fn}$).
\end{defi}
\begin{remark}\label{rem:pltor} Obviously, a torsionfree group $G$ satisfies both properties. Moreover, if $G$ is abelian, then $G$ is torsionfree if and only if it satisfies ($\mathcal{N}_{fn}$). It is also clear that ($\mathcal{C}_{fn}$) implies ($\mathcal{N}_{fn}$) when $G$ is infinite compact.
\end{remark}
\begin{theoremB}\label{thm:herequiv}
For a compact group $G$, the following conditions are equivalent:
\begin{enumerate} [(a)]
\item $G$ is a hereditarily minimal \ group;
\item $G$ is a hereditarily locally minimal \ group satisfying ($\mathcal{C}_{fn}$).
\end{enumerate}
\end{theoremB}
\begin{remark}\label{new:rem} One cannot replace compact by locally compact as the groups ${\mathbb Q}_p$ and ${\mathbb R}\rtimes {\mathbb Z}(2)$
show (easy counter-examples are provided also by arbitrary infinite discrete abelian groups).
\end{remark}
Our next result extends Corollary \ref{cor:spr}.
\begin{theoremC}\label{thm:tozp}
Let $G$ be an infinite hereditarily minimal\ locally compact group that is either compact or locally solvable. Then $G$ is either center-free or isomorphic to ${\mathbb Z}_p$, for some prime $p$.
\end{theoremC}
In order to introduce our main result we need to first recall some folklore facts and fix the relevant notation.
\begin{notation} \label{new:notation}
For a prime $p$, let ${\mathbb Z}_p^*=({\mathbb Z}_p\setminus p{\mathbb Z}_p,\cdot)$. Its torsion subgroup $F_p$ consists only of the $(p-1)$-th
roots of unity if $p>2$, and $F_p = \{1,-1\}$ (the square roots of unity) if $p=2$. Moreover, it is cyclic and
\begin{equation*}
F_p \cong \begin{cases}
{\mathbb Z}(2) & \text{if } p = 2,\\
{\mathbb Z}(p-1) & \text{otherwise}.
\end{cases}
\end{equation*}
We denote by $C_{p}$ the subgroup of ${\mathbb Z}_p^*$ defined by
\begin{equation*}
C_{p}= \begin{cases}
1+4{\mathbb Z}_2 & \text{if } p = 2,\\
1+p{\mathbb Z}_p & \text{otherwise}.
\end{cases}
\end{equation*}
Finally, for $n\in {\mathbb N}$ we let $C_{p}^{p^n}=\{x^{p^n}: x\in C_p\} \leq C_p,$ so
\begin{equation}\label{Cppn:form}
C_{p}^{p^n}= \begin{cases}
(1+4{\mathbb Z}_2)^{2^n} = 1+2^{n+2}{\mathbb Z}_2 & \text{if } p = 2,\\
(1+p{\mathbb Z}_p)^{p^n} = 1+p^{n+1}{\mathbb Z}_p & \text{otherwise}.
\end{cases}
\end{equation}
\end{notation}
It is well known that $C_{p}\cong ({\mathbb Z}_p,+)$ and ${\mathbb Z}_p^* = C_p F_p \cong C_p \times F_p \cong ({\mathbb Z}_p,+) \times F_p$ as topological groups.
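To illustrate the formula (\ref{Cppn:form}), here is a routine verification of its first non-trivial instance $C_p^{p}=1+p^{2}{\mathbb Z}_p$ for $p>2$, included only for the reader's convenience (the remaining cases follow in the same way, by induction on $n$). For every $t\in {\mathbb Z}_p$ one has
\begin{equation*}
(1+pt)^{p} = 1 + p(pt) + \binom{p}{2}(pt)^{2} + \dots + (pt)^{p} \in 1 + p^{2}{\mathbb Z}_p,
\end{equation*}
so $C_p^{p}\leq 1+p^{2}{\mathbb Z}_p$. On the other hand, as $C_{p}\cong ({\mathbb Z}_p,+)$, the subgroup $C_p^{p}$ has index $p$ in $C_p$, and so does $1+p^{2}{\mathbb Z}_p$; hence the two subgroups coincide.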
Now we provide two series of examples of hereditarily minimal metabelian locally compact groups that play a
prominent
role in our Theorem D:
\begin{example}\label{Exaaa}
Consider the natural action of ${\mathbb Z}_p^*$ on $({\mathbb Z}_p,+)$ by multiplication, and the semidirect product
\[
K = ({\mathbb Z}_p,+)\rtimes {\mathbb Z}_p^*.
\]
Then $K$ is a subgroup of the group considered in Theorem A, so it is hereditarily locally minimal. Moreover, $K \cong ({\mathbb Z}_p,+)\rtimes \left( ({\mathbb Z}_p,+) \times F_p \right)$, so $K$ is compact and all of its non-abelian subgroups are minimal by Corollary \ref{cor:prec}. However, $K$ is not hereditarily minimal, as for example its compact abelian subgroup $\{0\} \rtimes {\mathbb Z}_p^* \cong {\mathbb Z}_p^* \cong ({\mathbb Z}_p,+) \times F_p$ is not hereditarily minimal by Fact \ref{TeoP}.
\begin{itemize}
\item[(i)] For a subgroup $F$ of $F_p$ and an integer $n\in {\mathbb N}$, consider the following subgroups of $K$:
\begin{gather*}
K_{p,F} = ({\mathbb Z}_p,+)\rtimes F \leq K_{p,F_p} \\
M_{p,n} = ({\mathbb Z}_p,+)\rtimes C_p^{p^n} \leq M_{p,0}.
\end{gather*}
For example, $K_{p,\{1\}} = ({\mathbb Z}_p,+)\rtimes \{1\}$ is isomorphic to ${\mathbb Z}_p$.
In Example \ref{ex:padic:rtimes:F}(a) we use a criterion for hereditary minimality of a compact solvable group
(Theorem \ref{thm:charmeta}) in order to prove that $K_{p,F}$ is hereditarily minimal, while we use Theorem A and Theorem B to prove that $M_{p,n}$ is hereditarily minimal in Example \ref{padics:rtimes:padics}.
\item[(ii)] In case $p=2$ and $n\in {\mathbb N}$, we consider
also the group $T_n=({\mathbb Z}_2,+)\rtimes_{\beta_n}C_2^{2^n}$ with the faithful action $\beta_n:C_2^{2^n}\times {\mathbb Z}_2 \to {\mathbb Z}_2 $ defined by
\begin{equation*}
\beta_n(y,x) =
\begin{cases}
yx & \text{ if } y\in C_2^{2^{n+1}},\\
-yx & \text{ if } y\in C_2^{2^{n}} \setminus C_2^{2^{n+1}}.
\end{cases}
\end{equation*}
(when no confusion is possible, we denote $\beta_n$ simply by $\beta$; a direct verification that $\beta_n$ is indeed a continuous action by automorphisms is given right after this example).
Note that the restriction $\beta' = \beta_n\restriction_{C_2^{2^{n+1}}\times {\mathbb Z}_2}:C_2^{2^{n+1}}\times {\mathbb Z}_2 \to {\mathbb Z}_2$ is the natural action by multiplication in the ring ${\mathbb Z}_2$, so $T_n \geq ({\mathbb Z}_2,+)\rtimes_{\beta'}C_2^{2^{n+1}} = M_{2,n+1}$. Obviously $[T_n:M_{2,n+1}]=2$, so $M_{2,n+1}$ is normal in $T_n$.
Another application of the criterion for hereditary minimality (Theorem \ref{thm:charmeta}) shows in Example \ref{ex:padic:rtimes:F}(b) that also the groups $T_n$ are hereditarily minimal.
\end{itemize}
\end{example}
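The following routine verification, not needed in the sequel, shows that $\beta_n$ from Example \ref{Exaaa}(ii) is indeed a continuous, faithful action by topological automorphisms. Write $\varepsilon(y)=1$ if $y\in C_2^{2^{n+1}}$ and $\varepsilon(y)=-1$ otherwise, so that $\beta_n(y,x)=\varepsilon(y)yx$; since $[C_2^{2^n}:C_2^{2^{n+1}}]=2$, the map $\varepsilon\colon C_2^{2^n}\to\{1,-1\}$ is a homomorphism, hence
\begin{equation*}
\beta_n(yy',x)=\varepsilon(yy')yy'x=\varepsilon(y)y\,\varepsilon(y')y'x=\beta_n(y,\beta_n(y',x)) \qquad (x\in {\mathbb Z}_2,\ y,y'\in C_2^{2^n}),
\end{equation*}
i.e., $y\mapsto\beta_n(y,-)$ is a continuous homomorphism of $C_2^{2^n}$ into ${\mathbb Z}_2^*\leq {\mathrm Aut}\,({\mathbb Z}_2)$. It is injective (so $\beta_n$ is faithful), as $\varepsilon(y)y=1$ forces $y=\pm 1$, while $-1\notin C_2^{2^n}\leq 1+4{\mathbb Z}_2$, hence $y=1$.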
The following theorem classifies the locally compact solvable hereditarily minimal\ groups by showing that these are precisely
the groups described in Example \ref{Exaaa}. Note that the only abelian ones among them are the groups $K_{p,\{1\}} = ({\mathbb Z}_p,+)\rtimes \{1\} \cong {\mathbb Z}_p$, for prime $p$.
\begin{theoremD}\label{thm:hmabc}
Let $G$ be an infinite locally compact solvable group. Then the following conditions are equivalent:
\begin{enumerate}
\item $G$ is hereditarily minimal;
\item $G$ is topologically isomorphic to one of the following groups:
\begin{enumerate} [(a)]
\item $K_{p,F} = {\mathbb Z}_p \rtimes F$, where $F\leq F_p$ for some prime $p$;
\item $M_{p,n}={\mathbb Z}_p \rtimes C_p^{p^n}$, for some prime $p$ and \ $n\in {\mathbb N}$;
\item $T_n=({\mathbb Z}_2,+)\rtimes_{\beta}C_2^{2^n} $, for some $n\in {\mathbb N}$.
\end{enumerate} \end{enumerate}
\end{theoremD}
A locally compact hereditarily minimal\ group need not be compact in general, as witnessed by the large supply of infinite discrete hereditarily minimal\ groups.
Nonetheless, as the groups in Theorem D are compact and metabelian, one has the following:
\begin{corol}
If $G$ is an infinite hereditarily minimal\ locally compact solvable group, then $G$ is compact metabelian.
\end{corol}
Another nice consequence of Theorem D is the following: every closed non-abelian subgroup of $M_{p,n}$ is isomorphic to one of the groups in (b) or (c), while the closed abelian subgroups of $M_{p,n}$ are isomorphic to ${\mathbb Z}_p\cong K_{p,\{1\}}$.
The proof of Theorem D is covered by Theorem \ref{thm:free} and Theorem \ref{prop:nonfree}, where we consider
the torsionfree case and the non-torsionfree case, respectively.
Another application of Theorem D is Theorem \ref{thm:htmkpf} in which we classify the infinite locally compact, solvable, hereditarily totally minimal groups (see Definition \ref{def:htm}).
\begin{remark}\label{cor:compnofin}
Call a topological group $G$ {\em compactly hereditarily} {\em (locally)} {\em minimal}, if every compact subgroup of $G$ is hereditarily (resp., locally) minimal.
We abbreviate it to $\operatorname{CHM}$ and $\operatorname{CHLM}$ in the sequel. For compact groups, being hereditarily minimal is equivalent to being compactly hereditarily minimal. Clearly, every discrete group is compactly hereditarily minimal. Applying Theorem B to compact subgroups one can prove that
a topological group $G$ is compactly hereditarily minimal \ if and only if $G$ is a compactly hereditarily locally minimal \ group satisfying ($\mathcal{C}_{fn}$).
\end{remark}
\medskip
The next diagram summarizes some of the interrelations between the properties considered so far. The double arrows denote implications that always hold. The single arrows denote implications valid under some additional assumptions.
\smallskip
\ \ \ \ \ \ \ \ \ \ \ \ \ \
$${\xymatrix@!0@C4.2cm@R=3.2cm{
\mbox{CHLM}
\ar@/_1.2pc/|-{(2)}[d]
\ar@/_1.2pc/|-{(1)
}[r]
&
\mbox{CHM} \ar@{=>}[l]
\ar@/^1.2pc/|-{\hspace{10pt} compact}[d] &\\
\mbox{HLM}
\ar@{=>}[u]
\ar@/^1.2pc/|-{(3)
}[r]
&
\mbox{HM}
\ar@{=>}[l]
\ar@{=>}[u]
}}$$
\smallskip
\noindent (1): This implication holds true for
groups satisfying ($\mathcal{C}_{fn}$) (Remark \ref{cor:compnofin}). \\
(2): This implication holds true for totally disconnected locally compact groups (Proposition \ref{prop:ltdc}(1)).\\
(3): This implication holds true for compact
groups satisfying ($\mathcal{C}_{fn}$) (Theorem B).
\smallskip
The group ${\mathbb Q}_{p}$ witnesses that the implication $\operatorname{HM} \Longrightarrow \operatorname{CHM}$ \ cannot be inverted if compact is replaced by locally compact (and totally disconnected). Indeed, ${\mathbb Q}_{p}$ is not minimal (so not hereditarily minimal), yet every compact subgroup of ${\mathbb Q}_{p}$ is isomorphic to ${\mathbb Z}_p$, which means that ${\mathbb Q}_{p}$ is $\operatorname{CHM}$. On the other hand, every non-trivial compact subgroup of the locally compact group $G={\mathbb R}\times {\mathbb Z}_p$ is topologically isomorphic to ${\mathbb Z}_p$, so $G$ is $\operatorname{CHM}$ by Prodanov's theorem. Yet $G$ is not hereditarily locally minimal \ by Fact \ref{fac:ext}.
This shows that none of the vertical arrows in the diagram can be inverted in general.
\bigskip
The paper is organized as follows. The proof of Theorem A, given in \S\ref{Proof of Theorem A}, is articulated in several steps.
We first recall the crucial criteria for (local) minimality of dense subgroups (Fact \ref{Crit}). Using these criteria, we show in \S\ref{sub:nab} that every non-abelian subgroup $H$ of the group $L=({\mathbb Q}_p,+)\rtimes {\mathbb Q}_p^*$ from Theorem A is locally minimal, while \S \ref{sub:main} takes care of the abelian subgroups of $L$.
\S\ref{lcHMSection} contains some general results on locally compact HM groups
and the proof of Theorem B.
In \S\ref{discreteHM} we provide a brief review on the relevant connection between discrete categorically compact groups
and the discrete HM groups.
In \S \ref{subsection:proofB} we give some general results on non-discrete locally compact HM groups, proving in Proposition \ref{prop:ndhmlc} that they are totally disconnected, and contain a copy of ${\mathbb Z}_p$ (in particular, an infinite locally compact HM group is torsion if and only if it is discrete, and in this case it is not locally finite).
Furthermore, such a group $G$ satisfies ($\mathcal{N}_{fn}$),
so that either $Z(G) = \{e\}$, or $Z(G) \cong {\mathbb Z}_p$ for a prime $p$. In the
latter case, $G$ is also torsionfree by Corollary \ref{cor:pgr}. So a non-discrete locally compact HM group is either center-free or torsionfree.
Finally, we prove Theorem B (making use of Proposition \ref{pro:tor}) and apply Theorem B to see in Example \ref{padics:rtimes:padics} that the groups $M_{p,n}$ are HM.
In \S\ref{lcHM} we explore infinite non-discrete locally compact HM groups with non-trivial center,
proving in Corollary \ref{cor:nprodanov} that the hypercentral ones are isomorphic to ${\mathbb Z}_p$ and we give a proof of Theorem C.
In Theorem \ref{add:prop:thm} we collect some necessary conditions a non-discrete locally compact HM group with non-trivial center must satisfy.
In \S\ref{Semidirect products of p-adic integers} we prepare the tools for the proof of Theorem D, by proving that the groups introduced in Example \ref{Exaaa} are pairwise non-isomorphic (Corollary \ref{thm:pair} and Propositions \ref{prop:p=2} and \ref{Tn-Mpn:non-isom}) and by classifying the semidirect products of ${\mathbb Z}_p$ with some compact subgroups of ${\mathbb Z}_p^*$ (Proposition \ref{prop:alpha} and Lemma \ref{lem:uonique}).
Theorem D is proved in \S\ref{Proof of Theorem D}. To this end we first provide a criterion (Theorem \ref{thm:charmeta}), used in Example \ref{ex:padic:rtimes:F} to check that the groups $K_{p,F}$ are HM. Another consequence of Theorem \ref{thm:charmeta} is Lemma \ref{laaaasts:lemma}, which we apply to
show that the groups $T_n$ are HM in Example \ref{ex:Tn}.
We start the proof of Theorem D in \S\ref{The general case}, by proving some reduction results
(Proposition \ref{prop:ms}), and some general results (Propositions \ref{prop:tri}, \ref{prop:abc} and
\ref{prop:dich}).
In \S\ref{Torsionfree case} we consider the torsionfree case of Theorem D in Theorem \ref{thm:free}, while the non-torsionfree case Theorem \ref{prop:nonfree} is proved in \S\ref{Non-torsionfree case}, based on the technical result Proposition \ref{prop:sknmin} dealing with the case $p=2$.
We dedicate \S\ref{Hereditarily totally minimal topological groups} to hereditarily totally minimal groups (HTM for short, see Definition \ref{def:htm}). The only locally compact solvable ones to consider are the groups classified in Theorem D, and we first prove in Proposition \ref{prop:kpfhtm} that
the groups $K_{p,F}$ are HTM. Then we see in Proposition \ref{prop:mpntn} that the HM groups $M_{p,n}$ and $T_n$ are not HTM, leading us to Theorem \ref{thm:htmkpf}, which describes the groups $K_{p,F}$ as the only infinite locally compact solvable HTM groups.
The last \S\ref{Open questions and concluding remarks} collects some open questions, a partial converse to Theorem \ref{add:prop:thm}, and some final remarks.
\bigskip
\subsection{Notation and terminology}
We denote by ${\mathbb Z}$ the group of integers, by ${\mathbb R}$ the real numbers, and by ${\mathbb N}$ and ${\mathbb N}_{+}$ the non-negative integers and positive natural numbers, respectively. For $n\in {\mathbb N}_+$, we denote by ${\mathbb Z}(n)$ the finite cyclic group with $n$ elements.
If $p$ is a prime number, ${\mathbb Q}_p$ stands for the field of $p$-adic numbers, and ${\mathbb Z}_p$ is its subring of $p$-adic integers.
Let $G$ be a group. We denote by $e$ the identity element.
If $A$ is a non-empty subset of $G$, we denote by $\langle A\rangle$ the subgroup of $G$ generated by $A$. In particular, if $x$ is an element of $G$, then $\langle x\rangle$ is a cyclic subgroup. If $F=\langle x\rangle$ is finite, then $x$ is called a \emph{torsion} element, and $o(x)= |F|$ is the \emph{order} of $x$. We denote by $t(G)$ the torsion part of the group $G$ and $G$ is called \emph{torsionfree} if $t(G)$ is trivial. The {\it centralizer} of $x$ is $C_G(x)$. If the {\it center} $Z(G)$ is trivial, then we say that $G$ is \emph{center-free}. A group $G$ is called \emph{$n$-divisible} if $nG=G$ for $n\in {\mathbb N}_+$.
Let $\mathcal P$ be an algebraic (or set-theoretic) property. A group is called \emph{locally $\mathcal P$} if every finitely generated subgroup has the property $\mathcal P$. For example, in a locally finite group every finitely generated subgroup is finite.
The \emph{$n$-th center} $Z_n(G)$ is defined as follows for $n\in {\mathbb N}$. Let $Z_0(G) = \{e\}$, $Z_1(G) = Z(G)$, and assume that $n > 1$ and $Z_{n-1}(G)$ is already defined. Consider the canonical projection $\pi \colon G \to G/Z_{n-1}(G)$ and let $Z_n(G) = \pi^{-1} Z (G/Z_{n-1}(G) )$. Note that
$Z_n(G)=\{x\in G: [x,y]\in Z_{n-1}(G) \text{ for every } y\in G\}$. This produces an ascending chain of subgroups $Z_n(G)$ called the upper central series of $G$, and a group is {\em nilpotent} if $Z_n(G) = G$ for some $n\in {\mathbb N}$. In this case, its nilpotency class is the minimum of such $n$. For example, the groups with nilpotency class at most $1$ are the abelian groups.
One can continue the upper central series to infinite ordinal numbers via transfinite recursion: for a limit ordinal $\lambda$, define $Z_{\lambda }(G)=\bigcup _{\alpha <\lambda } Z_{\alpha }(G)$.
A group is called {\it hypercentral} if it coincides with $Z_{\alpha }(G)$ for some ordinal $\alpha$.
Nilpotent groups are obviously hypercentral, while hypercentral groups are locally nilpotent.
We denote by $G'=G^{(1)}$ the {\it derived subgroup} of $G$, namely the subgroup of $G$ generated by all commutators $[a,b]=aba^{-1}b^{-1},$ where $a,b\in G.$ For $n\geq 1$, define $G^{(n)}=(G^{(n-1)})'$ and also $G^{(0)}=G$. We say that $G$ is {\em solvable} of class $n$ for some $n\in {\mathbb N}$, if $G^{(n)}=\{e\}$ and $G^{(m)}\ne\{e\}$ for $0\leq m<n$. If $G$ is solvable of class $n$, then $G^{(n-1)}$ is abelian. In particular, $G$ is {\em metabelian}, if $G$ is solvable of class at most $2$.
For an integral domain (in particular, a field) $A$ we denote by $A^*$ the multiplicative group of all invertible elements of $A$ (resp., the group $(A\setminus\{0\},\cdot)$).
All the topological groups in this paper are assumed to be Hausdorff. For a topological group $G$, the connected component of $G$ is denoted by $c(G)$. For a subgroup $H\leq G$, the closure of $H$ is denoted by $\overline{H}.$
A topological group is \emph{precompact} if it is isomorphic to a subgroup of a compact group. Let $S$ and $T$ be topological groups and $\alpha:S\times T\to T$ be a continuous action by automorphisms. We say that the action $\alpha$ is \emph{faithful}
if $\ker\alpha=\{s\in S: \forall t\in T \ \alpha(s,t)=t \}$ is trivial.
All unexplained terms related to general topology can be found in \cite{En}. For background on abelian groups, see \cite{Fuc}.
\section{Proof of Theorem A}\label{Proof of Theorem A}
There exist useful criteria for establishing the minimality (local minimality) of a dense subgroup of a minimal (respectively, locally minimal) group. These criteria are based on the following definitions.
\begin{defi}
Let $H$ be a subgroup of a topological group $G$.
\begin{enumerate}
\item \cite{P,S71} $H$ is said to be {\it essential} in $G$ if $H\cap N\neq \{e\}$ for every non-trivial closed normal subgroup $N$ of $G.$
\item \cite{ACDD} $H$ is {\it locally essential} in $G$ if there exists a neighborhood $V$ of $e$ in $G$ such that $H\cap N\neq \{e\}$ for every non-trivial closed normal subgroup $N$ of $G$ which is contained in $V.$\end{enumerate}
\end{defi}
\begin{fact}\label{Crit} Let $H$ be a dense subgroup of a topological group $G.$
\begin{enumerate} \item \cite[Minimality Criterion]{B} $H$ is minimal if and only if $G$ is minimal and $H$ is essential in $G$ (for compact $G$ see also \cite{P,S71}).
\item \cite [Local Minimality Criterion]{ACDD} $H$ is locally minimal if and only if $G$ is locally minimal and $H$ is locally essential in $G.$ \end{enumerate}
\end{fact}
In this section, $L$ denotes the group $({\mathbb Q}_p,+)\rtimes {\mathbb Q}_p^*$.
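For the computations below it is convenient to have the operation of $L$ written out explicitly: with the natural action of ${\mathbb Q}_p^*$ on $({\mathbb Q}_p,+)$ by multiplication,
\begin{equation*}
(x_1,y_1)(x_2,y_2)=(x_1+y_1x_2,\ y_1y_2), \qquad (x,y)^{-1}=(-y^{-1}x,\ y^{-1}),
\end{equation*}
for $x,x_1,x_2\in {\mathbb Q}_p$ and $y,y_1,y_2\in {\mathbb Q}_p^*$; in particular, $[(x,1),(0,y)]=((1-y)x,1)$.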
In \S\ref{sub:nab}, we show that every non-abelian subgroup $H$ of $L$ is essential (in particular, locally essential) in its closure $\overline{H}.$ Since the latter group is locally compact (and thus locally minimal), we conclude by the above criterion that $H$ is also locally minimal. At the same time, we deduce by the above Minimality Criterion that $H$ is minimal if and only if $\overline{H}$ is minimal in Corollary \ref{thm:dense}. In particular, every precompact non-abelian subgroup of $L$ is minimal. Finally, using Fact \ref{fac:ext}, we prove in \S \ref{sub:main} that also the abelian subgroups of $L$ are locally minimal.
\subsection{Non-abelian subgroups of $L$}\label{sub:nab}
We begin this section with some easy lemmas of independent interest.
\begin{lemma}\label{lem:tre}
If $G$ is a group, then $G'$ non-trivially meets every normal subgroup $N$ of $G$ not contained in $Z(G)$.
In particular, if $G$ is a topological group with trivial center, then $G'$ is essential in $G$.
\end{lemma}
\begin{proof}
Let $a\in N \setminus Z(G)$,
and let $b\in G$ be such that $e \neq [a,b]=aba^{-1}b^{-1}$. Then $[a,b]\in N\cap G'$, as $N$ is normal.
\end{proof}
\begin{lemma} \label{lem:abel}
If $H$ is a non-abelian subgroup of a group $G$, then $H\cap G'$ is non-trivial.
\end{lemma}
\begin{proof}
If $a,b$ are non-commuting elements of $H$, then $\{e\} \neq [a,b]\in H\cap G'.$
\end{proof}
\begin{lemma}\label{lem:cyc}
Every non-trivial subgroup $H$ of ${\mathbb Q}_p$ is essential.
\end{lemma}
\begin{proof}
First note that an element $e \neq x \in {\mathbb Q}_p$ has the form $x = p^n a$ for $a \in {\mathbb Z}_p^*$, and $n \in {\mathbb Z}$,
so its generated subgroup is $\langle x \rangle = x {\mathbb Z} = p^n a {\mathbb Z}$ (note that here ${\mathbb Z}$ carries the $p$-adic topology, induced by ${\mathbb Q}_p$).
Let $N$ be a non-trivial closed subgroup of ${\mathbb Q}_p$, and pick non-trivial elements $p^n a \in H$ and $p^m b \in N$, where $a, b \in {\mathbb Z}_p^*$ and $n, m \in {\mathbb Z}$, so that $H \geq p^n a {\mathbb Z}$ and $N \geq p^m b {\mathbb Z}$.
Being closed, $N$ also contains the subgroup $p^m b {\mathbb Z}_p$, which coincides with $p^m {\mathbb Z}_p$.
Then $e \neq a p^{ \max \{n,m\} } \in N \cap H$: indeed, $a p^{\max \{n,m\}} \in p^n a {\mathbb Z} \leq H$, and $a p^{\max \{n,m\}} \in p^m {\mathbb Z}_p \leq N$ since $a \in {\mathbb Z}_p^*$.
\end{proof}
\begin{lemma}\label{lem:trabel}
If $H$ is a non-abelian subgroup of $L$, then $Z(H)$ is trivial.
\end{lemma}
\begin{proof}
Let $(m,n)\in Z(H)$, and we show that $m=0$ and $n=1$.
Let $(a,b), (c,d)\in H$ be non-commuting elements.
Then $(m,n)$ commutes with both $(a,b)$ and $(c,d)$ while the latter two elements do not commute with each other. This implies the following:
\begin{enumerate} \item $a(1-n)=m(1-b)$,
\item $c(1-n)=m(1-d)$,
\item $a(1-d)\neq c(1-b)$.\end{enumerate}
Multiply $(1)$ by $c$ to obtain $ac(1-n)=mc(1-b).$ This together with $(2)$ imply that $am(1-d)=mc(1-b).$ In view of $(3),$ the latter equality is possible only if $m=0.$ Using $(1)-(2)$ we now obtain $a(1-n)=c(1-n)=0.$ By $(3)$, either $a \neq 0$ or $c \neq 0$, and thus $n=1.$
\end{proof}
\begin{prop}\label{nonab:subgr:essent}
If $H$ is a non-abelian subgroup of $L$, then $H$ is essential in $\overline{H}$.
\end{prop}
\begin{proof}
We first consider the subgroup $H_1 = H \cap L'$, which is non-trivial by Lemma \ref{lem:abel}. As $L'={\mathbb Q}_p\rtimes \{1\}$ is isomorphic to ${\mathbb Q}_p$, Lemma \ref{lem:cyc} implies that $H_1$ is essential in $L'$.
Now let $N$ be a non-trivial closed normal subgroup of $\overline{H}$, and we have to prove that $N \cap H$ is non-trivial.
Since $\overline{H}$ is non-abelian, its center is trivial by Lemma \ref{lem:trabel}, so $\overline{H}'$ is essential in $\overline H$ by Lemma \ref{lem:tre}. Then $N\cap \overline{H}'$ is non-trivial, and, in particular, $N_1 = N \cap L'$ is non-trivial. Obviously, $N$ is closed in $L$, so $N_1$ is a closed subgroup of $L'$. The essentiality of $H_1$ in $L'$ gives that $N_1 \cap H_1$ is non-trivial, so also $N \cap H$ is non-trivial.
\end{proof}
\begin{corol}\label{thm:dense}
Let $H$ be a non-abelian subgroup of $L$. Then:
\begin{enumerate} \item $H$ is locally minimal;
\item $H$ is minimal if and only if $\overline{H}$ is minimal.\end{enumerate}
\end{corol}
\begin{proof}
$(1)$: Since $\overline{H}$ is locally compact, and thus locally minimal, we can apply Proposition \ref{nonab:subgr:essent} and the Local Minimality Criterion.
$(2)$: Apply the Minimality Criterion.
\end{proof}
Since $L$ is complete, and compact groups are minimal, we immediately obtain the following consequence of Corollary \ref{thm:dense}(2).
\begin{corol}\label{cor:prec}
If $H$ is a precompact non-abelian subgroup of $L$,
then $H$ is minimal.
\end{corol}
Recall that a group $G$ is \emph{perfectly minimal} if $G \times M$ is minimal for every minimal group $M$. The above result should be compared with the following theorem of Megrelishvili, ensuring that $L$ is perfectly minimal.
Let $K$ be a topological division ring. A subset $B$ of $K$ is called \emph{bounded} if for every neighborhood $X$ of $0$ there is a neighborhood $Y$ of $0$ such that $YB \subseteq X$ and $BY \subseteq X$. A subset $V$ of $K$ is \emph{retrobounded} if $0 \in V$ and $(K \setminus V)^{-1}$ is bounded.
Then $K$ is called \emph{locally retrobounded} if it has a local base at $0$ consisting of \emph{retrobounded} neighborhoods.
For example, $K$ is locally retrobounded if it is: locally compact,
topologized by an absolute value,
or a linearly ordered field.
\begin{fact}\cite[Theorem 4.7(a)]{Meg}\label{thm:retro}
Let $K$ be a non-discrete locally retrobounded division ring.
Then the group $G = K \rtimes K^*$
is perfectly minimal, where the natural action of $K^* $ on $K = (K,+)$ by multiplication is considered.
\end{fact}
As $K = {\mathbb Q}_p$ is locally retrobounded, Fact \ref{thm:retro} entails that $L$ is perfectly minimal. By Lemma \ref{lem:trabel}, the non-abelian subgroups of $L$ are center-free, so we can apply the above results to obtain the following corollary.
\begin{corol}\label{prod:nonAB:subgr:ofL}
For every $i\in I$, let $H_i$ be a non-abelian subgroup of $L$ that is either dense or precompact. Then, the product $\displaystyle{\prod}_{i\in I} H_i$ is perfectly minimal.
\end{corol}
\begin{proof}
If $H_i$ is precompact,
then it is minimal by Corollary \ref{cor:prec}. If $H_i$ is dense in $L$, then it is minimal by Fact \ref{thm:retro} and Corollary \ref{thm:dense}(2). Thus, $H_i$ is minimal for every $i\in I.$ These subgroups are center-free according to Lemma \ref{lem:trabel}. As the arbitrary product of center-free minimal groups is perfectly minimal by \cite[Theorem 1.15]{Meg95}, we conclude that $\displaystyle{\prod}_{i\in I} H_i$ is perfectly minimal.
\end{proof}
\subsection{Abelian subgroups of $L$} \label{sub:main}
\begin{lemma}\label{last:lemma}
An infinite compact subgroup $C$ of $G = {\mathbb Q}_p^*$ contains an open subgroup of $G$ isomorphic to ${\mathbb Z}_p$.
\end{lemma}
\begin{proof}
Note that $G \cong {\mathbb Z} \times {\mathbb Z}_p^*$, where ${\mathbb Z}$ is equipped with the discrete topology, so $G\cong {\mathbb Z} \times F_p \times \mathbb Z_p$, and we identify these two groups.
Being infinite and compact, $C$ meets the open subgroup $U = \{0\} \times \{0\} \times \mathbb Z_p$ of $G$ non-trivially: otherwise the projection $G \to G/U$ would be injective on $C$, and its image would be an infinite compact subgroup of the discrete group $G/U$, which is impossible.
So consider the non-trivial subgroup $O = C\cap U$ of $C$.
As $O$ is a closed subgroup of $U \cong {\mathbb Z}_p$,
it is isomorphic to ${\mathbb Z}_p$ itself and it has finite index in $U$, so it is also open in $U$. As $U$ is open in $G$, we conclude that $O$ is open in $G$.
\end{proof}
Now we are in position to prove
Theorem A.
\medskip
\noindent{\bf Proof of Theorem A.}
If $H$ is a non-abelian subgroup of $L$, then Corollary \ref{thm:dense}(1) applies.
Let $H$ be an abelian subgroup of $L$, and consider the following two possibilities:
\smallskip
Case 1: there is $e \neq h\in H \cap L' $. Then obviously $H \leq C_L(h)$, as $H$ is abelian.
Since $L' = ({\mathbb Q}_p,+) \rtimes \{1\}$,
we have $C_L(h) = L'$: indeed, writing $h=(x,1)$ with $x\neq 0$, an element $(a,b)\in L$ commutes with $h$ precisely when $bx=x$, i.e., $b=1$. It follows that $H \leq L'\cong {\mathbb Q}_p$. Then $H$ is locally minimal by Fact \ref{fac:ext}.
\smallskip
Case 2: the subgroup $H \cap L' $ is trivial, so the projection $L \to L/L'$ restricted to $H$ gives a continuous group isomorphism $q: H \to q(H)$.
It is not restrictive to assume $H$ to be non-discrete, and if we prove that the closure of $H$ in $L$ has an open subgroup isomorphic to ${\mathbb Z}_p$, it will follow by Fact \ref{fac:ext} that $H$ is locally minimal; so we can assume $H$ to be closed in $L$.
Then $H$ is a non-discrete locally compact and totally disconnected group, so $H$ contains an infinite compact open subgroup $K$ by van Dantzig's theorem \cite{vD}, and the restriction $q\restriction_{ K }: K \to q(K)$ is a closed map, hence a topological group isomorphism. The infinite compact subgroup $q(K)$ of $q(H) \leq L/L' \cong {\mathbb Q}_p^*$
contains an open subgroup $O$ isomorphic to ${\mathbb Z}_p$ by Lemma \ref{last:lemma}.
Obviously, also $q\restriction_{ K }^{\phantom{K}-1} (O)$ is isomorphic to ${\mathbb Z}_p$, and it is open in $K$, hence in $H$. \qed
\section{Locally compact $\operatorname{HM}$ groups}\label{lcHMSection}
In this section we consider hereditarily minimal locally compact groups $G$. We start with the following immediate consequence of Corollary \ref{cor:spr}.
\begin{lemma}\label{Prod+Steph:lemma}
Let $G$ be a hereditarily minimal locally compact group, and let $A$ be a closed abelian subgroup of $G$. Then either $A$ is finite, or $A \cong {\mathbb Z}_p$ for some prime $p$.
\end{lemma}
In particular, if $G$ is not a torsion group, then it contains a copy of some ${\mathbb Z}_p$,
so $G$ is not discrete.
Now we give some results on the discrete hereditarily minimal\ groups.
\subsection{Discrete $\operatorname{HM}$ groups}\label{discreteHM}
Clearly, every locally finite group is torsion. In the following result we recall that the converse holds true for locally solvable groups.
\begin{fact}\cite[Proposition 1.1.5]{DIX}\label{fac:lslf}
Every torsion locally solvable group is locally finite.
\end{fact}
The next fact guarantees the existence of an infinite abelian group inside every infinite locally finite group.
\begin{fact}\cite{HK}\label{fac:lfia}
Every infinite locally finite group contains an infinite abelian subgroup.
\end{fact}
From Fact \ref{fac:lslf} and Fact \ref{fac:lfia} we obtain the following corollary.
\begin{corol}\label{cor:lsia}
An infinite locally solvable group contains an infinite abelian subgroup.
\end{corol}
The next result will be used in the sequel to conclude that an infinite locally solvable hereditarily minimal\ group is not discrete.
\begin{lemma}\label{lem:tnlf}
If $G$ is an infinite discrete hereditarily minimal\ group, then the abelian subgroups of $G$ are finite.
In particular, the center of $G$ is finite and $G$ is torsion, but $G$ is neither locally finite nor locally solvable.
\end{lemma}
\begin{proof}
By Lemma \ref{Prod+Steph:lemma}, $G$ has no infinite abelian subgroups, so the center of $G$ is finite.
In particular $G$ is torsion, but it is not locally finite by Fact \ref{fac:lfia}. Finally, $G$ is not locally solvable by Fact \ref{fac:lslf}.
\end{proof}
So a discrete HM group is torsion by Lemma \ref{lem:tnlf}. This result should be compared with Proposition \ref{prop:ndhmlc}(2), where we prove that non-discrete locally compact HM groups are not torsion, as they contain a copy of the $p$-adic integers ${\mathbb Z}_p$ for some $p$.
Whether an infinite discrete group can be minimal was a long-standing open question of Markov. Since minimal discrete groups admit no non-discrete Hausdorff group topologies at all, such groups are also called {\em non-topologizable}.
The first example of a non-topologizable group was provided by Shelah \cite{Sh} under the assumption of the Continuum Hypothesis CH. His example is simple and torsionfree, so that discrete group is also totally minimal, yet not hereditarily minimal. A countable example of a non-topologizable group was built a bit later by
Ol$'$shanskij \cite{O} (it was an appropriate quotient of Adjan's group $A(m,n)$ built for the solution of Burnside problem).
There exist infinite hereditarily minimal\ discrete groups; this fact, recently established in \cite{KOO}, is related to another interesting topic,
namely {\em categorically compact groups}. According to \cite{DU} a Hausdorff topological group $G$ is {\em categorically compact} (briefly, $C$-compact) if for every topological group $H$ the projection $p: G \times H \to H$ sends closed subgroups of $G \times H $ to closed subgroups of $H$.
Compact groups are obviously $C$-compact by the Kuratowski closed projection theorem,
while solvable $C$-compact groups can be shown to be compact \cite[Corollary 3.5]{DU}.
The $C$-compact groups are two-sided complete
and the class of $C$-compact groups has a number of nice properties typical for the compact groups: stability under taking closed subgroups, finite products and
Hausdorff quotient groups. Moreover, the $\omega$-narrow (in particular, separable) $C$-compact groups are totally minimal \cite[Corollary 3.5]{DU}.
The question of whether all $C$-compact groups are compact, raised in \cite{DU}, remained open for some time even in the case of locally compact groups
(the connected $C$-compact locally compact groups are compact \cite[ Proposition 5.1]{DU}).
It was proved in \cite[Theorem 5.5]{DU}, that a countable discrete group $G$ is $C$-compact if and only if every subgroup of $G$ is totally minimal
(i.e., non-topologizable along with all its quotients).
This gives rise to the following notion (the specific term was proposed later in \cite{LG2}).
\begin{defi}
A group $G$ is {\it hereditarily non-topologizable} when all subgroups of $G$ are non-topologizable along with all their quotients (i.e., totally minimal in the discrete topology).
\end{defi}
It was shown in \cite[Corollary 5.4]{DU} that every discrete hereditarily non-topologizable group is $C$-compact, yet the existence of infinite hereditarily non-topologizable groups remained open until the recent paper \cite{KOO}, which provided many examples of such groups (hence, of discrete $C$-compact groups that obviously fail to be compact). These examples have various additional properties displaying various levels of being infinite (infinite exponent, non-finitely generated, uncountable, etc.).
\subsection{Proof of Theorem B}\label{subsection:proofB}
The first item of the next result is taken from \cite[Lemma 2.8]{DHXX}.
\begin{lemma}\label{lem:HLM}
Let $G$ be a topological group, and $N$ be an open subgroup of $G$.
\begin{enumerate}
\item
If $N$ is locally minimal, then $G$ is locally minimal.
\item If $N$ is hereditarily locally minimal, then $G$ is hereditarily locally minimal.\end{enumerate}
\end{lemma}
\begin{proof}
$(2)$: Let $H$ be a subgroup of $G.$ By our assumption on $N$, the locally minimal subgroup $H\cap N$ is open in $H$. It follows from item (1) that $H$ is locally minimal.
\end{proof}
The following two deep results are due to Zelmanov and Kaplansky, respectively. Note that the second part of (a) makes use of Fact \ref{fac:lfia}.
\begin{fact}
\begin{itemize}
\item [(a)]\label{fac:zel} \cite[Theorem 2]{Z}
Every
compact torsion group is locally finite. In particular, every infinite compact group contains an infinite compact abelian subgroup.
\item [(b)]\label{fac:kap} \cite[Theorem 6]{K}
Every non-discrete Lie group contains a non-trivial continuous homomorphic image of ${\mathbb R}$.
\end{itemize}
\end{fact}
The next result is the starting point of our exploration of hereditarily minimal\ locally compact groups.
Recall that a topological group $G$ is called {\it compactly covered} if each element of $G$ is contained in some compact subgroup.
\begin{prop}\label{prop:ndhmlc}
Let $G$ be a hereditarily minimal locally compact group.
\begin{enumerate}
\item Then $G$ is totally disconnected and compactly covered.
\item If $G$ is non-discrete, then it contains a copy of ${\mathbb Z}_p$ for some prime $p$ and satisfies ($\mathcal{N}_{fn}$). In particular, $Z(G)$ is trivial or $Z(G) \cong {\mathbb Z}_p$ for some prime $p$.
\end{enumerate}
\end{prop}
\begin{proof} $(1)$ : Assume that the connected component $c(G)$ is non-trivial. As $c(G)$ is a hereditarily minimal connected locally compact group, it is a Lie group by Fact \ref{fac:ext}, and of course it is non-discrete. Using Fact \ref{fac:kap}(b), $c(G) $ contains an infinite abelian Lie group $K$, which is isomorphic to ${\mathbb Z}_p$ by Corollary \ref{cor:spr}, contradicting the fact that $K$ is a Lie group.
The second assertion follows from Lemma \ref{Prod+Steph:lemma}.
$(2)$: By $(1)$, $G$ is a non-discrete totally disconnected locally compact group. By \cite{vD}, it contains an infinite compact open subgroup $H$. According to Fact \ref{fac:zel}(a), $H$ contains an infinite hereditarily minimal \ compact abelian group $A.$ By Fact \ref{TeoP}, $A$ is isomorphic to ${\mathbb Z}_p$ for some prime $p.$
To prove the second assertion we need to check that $G$ contains no finite normal non-trivial subgroups.
Assume for a contradiction that $G$ has a finite non-trivial normal subgroup $F$.
As $G\geq A\cong {\mathbb Z}_p$, we deduce that
$A \cap F$ is trivial, and the natural
action by conjugations $\alpha:A\times F\to F$ has an infinite kernel $M =\ker \alpha$: indeed, $A/M$ embeds into the finite group ${\mathrm Aut}\,(F)$, so $M$ has finite index in the infinite group $A$. Being a non-trivial closed subgroup of $A\cong {\mathbb Z}_p$, $M$ is also isomorphic to ${\mathbb Z}_p$. It follows that the subgroup $FM$ of $G$ is isomorphic to $F\rtimes_{\alpha}M=F\times M$, hence hereditarily minimal. Let $C$ be a cyclic non-trivial subgroup of $F$. Then $C\times {\mathbb Z}_p$ is a hereditarily minimal \ compact abelian group, contradicting Fact \ref{TeoP}.
For the final assertion, apply Lemma \ref{Prod+Steph:lemma} to the closed normal subgroup $Z(G)$.
\end{proof}
The following example shows that the hypotheses of the above proposition cannot be relaxed.
\begin{example}\label{ex:tarski}
Our first item shows that hereditary minimality cannot be relaxed to hereditary local minimality, while in item (b) we show that an infinite discrete HM group may have finite non-trivial center, so need not satisfy ($\mathcal{N}_{fn}$). So ``non-discrete'' cannot be removed in Proposition \ref{prop:ndhmlc}(2).
\begin{enumerate}[(a)]
\item The Lie group ${\mathbb R}\times {\mathbb Z}(2)$ is certainly hereditarily locally minimal by Fact \ref{fac:ext}, but not totally disconnected.
\item Let $T$ be a countable discrete hereditarily non-topologizable group (see \S\ref{discreteHM}). By \cite[Theorem 5.5]{DU}, $T$ is $C$-compact. As this class is stable under taking finite products, also the direct product $G = {\mathbb Z}(2) \times T$ is $C$-compact. Since $G$ is also countable and discrete, \cite[Theorem 5.5]{DU} applies again, and $G$ is hereditarily non-topologizable. Thus, $G$ is a discrete hereditarily minimal group with finite non-trivial center.\end{enumerate}
\end{example}
Proposition \ref{prop:ndhmlc}(2) has the following easy consequence: as ${\mathbb Z}_p \times {\mathbb Z}_q$ is not HM for any pair $p,q$ of primes, direct products of non-discrete locally compact hereditarily minimal \ groups are never hereditarily minimal.
Recall that a profinite group is a totally disconnected compact group. The following proposition can be applied for example to profinite torsionfree groups, as a torsionfree group obviously satisfies ($\mathcal{N}_{fn}$).
\begin{prop}\label{pro:tor}
Let $G$ be an infinite profinite group satisfying ($\mathcal{N}_{fn}$). If $H$ is a locally minimal dense subgroup of $G$, then $H$ is minimal.
\end{prop}
\begin{proof}
We prove first that if $H$ is a locally essential subgroup of $G$, then it is essential in $G$.
Since $G$ is profinite, there exists a local base at the identity $\mathcal B$ consisting of compact open normal subgroups. Suppose that $H$ is locally essential in $G$, and let $K\in \mathcal B$ be such that $M\cap H$ is not trivial for every non-trivial closed normal subgroup $M$ of $G$ contained in $K$.
Let $N$ be a non-trivial closed normal subgroup of $G$, and we will show that $N\cap H$ is not trivial.
As $N$ is compact, and $M=N\cap K$ is an open subgroup of $N$, the index $[N:M]$ is finite.
This yields that $M$ is infinite since $N$ is infinite by our assumption on $G$.
Hence, $M\subseteq K$ is a non-trivial closed normal subgroup of $G.$ This implies that $M\cap H$ is not trivial. Since $M\cap H\subseteq N\cap H$ the latter group is also non-trivial.
Let $H$ be a locally minimal dense subgroup of
the compact group $G$. By the Local Minimality Criterion, $H$ is locally essential in $G$, hence essential by the above argument. Now apply the Minimality Criterion to conclude that $H$ is minimal.
\end{proof}
The compact group ${\mathbb T}$ has plenty of dense non-minimal subgroups and they are all locally minimal by Fact \ref{fac:ext}, so ``profinite'' cannot be relaxed to ``compact'' in the above proposition.
By Proposition \ref{prop:ndhmlc}, the compact HM groups satisfy the assumptions of Proposition \ref{pro:tor}. Using this fact, now we prove Theorem B: a compact group is hereditarily minimal \ if and only if it is hereditarily locally minimal \ and satisfies ($\mathcal{C}_{fn}$).
\bigskip
\noindent{\bf Proof of Theorem B.}
If $G$ is a hereditarily minimal \ group, then it is hereditarily locally minimal. To prove $G$ satisfies ($\mathcal{C}_{fn}$), pick an infinite compact subgroup $H$ of $G$. Clearly, $H$ is also hereditarily minimal, so
satisfies ($\mathcal{N}_{fn}$) by Proposition \ref{prop:ndhmlc}(2). This means that $G$ satisfies ($\mathcal{C}_{fn}$).
For the converse implication, suppose that $G$ is a hereditarily locally minimal\ compact group satisfying ($\mathcal{C}_{fn}$).
We first show that $G$ is totally disconnected. Assuming the contrary, the connected component $c(G)$ is a non-trivial hereditarily locally minimal, connected, compact group. By Fact \ref{fac:ext}, $c(G)$ is a non-discrete Lie group.
According to Fact \ref{fac:kap}(b), $c(G)$ contains an infinite compact abelian Lie group $C$. In particular, $C$ contains a copy of ${\mathbb T}$, hence contains torsion elements. So $C$ does not satisfy ($\mathcal{N}_{fn}$), contradicting our assumption that $G$ satisfies ($\mathcal{C}_{fn}$).
Let $H$ be a subgroup of the profinite group $G$. Without loss of generality we may assume that $H$ is infinite. By our assumption, $\overline H$ is a profinite hereditarily locally minimal\ group satisfying ($\mathcal{N}_{fn}$). By Proposition \ref{pro:tor}, applied to $\overline H$ and its subgroup $H$, we deduce that $H$ is minimal.\qed
\bigskip
Our first application of Theorem B
shows that the groups $M_{p,n}$ are $\operatorname{HM}$ for every prime $p$ and every $n\in {\mathbb N}.$
\begin{example}\label{padics:rtimes:padics}
Being a subgroup of $({\mathbb Z}_p,+)\rtimes {\mathbb Z}_p^*$, the group $M_{p,n}=({\mathbb Z}_p,+)\rtimes C_p^{p^n}$ is hereditarily locally minimal\ by Theorem A. As $M_{p,n}$ is torsionfree, it is also hereditarily minimal\ by Theorem B.
\end{example}
The following proposition proves implication (2) of the diagram in the introduction under an assumption much weaker than total disconnectedness, namely the existence of a compact open subgroup (equivalent to the compactness of the connected component\footnote{or to containing no lines, i.e., no copies of ${\mathbb R}$.}).
\begin{prop} \label{prop:ltdc}
Let $G$ be a locally compact group having a compact open subgroup $H$.
\begin{enumerate}
\item If $G$ is compactly hereditarily locally minimal, then it is hereditarily locally minimal.
\item If $G$ is compactly hereditarily minimal, then it is either discrete or contains a copy of ${\mathbb Z}_p$ for some prime $p$.
\end{enumerate}
\end{prop}
\begin{proof}
$(1)$: Since $G$ is compactly hereditarily locally minimal, $H$ is hereditarily locally minimal, and Lemma \ref{lem:HLM}(2) applies.
$(2)$: If $G$ is non-discrete and compactly hereditarily minimal, then also $H$ is non-discrete. Being also hereditarily minimal\ and compact, Proposition \ref{prop:ndhmlc}(2) applies to $H$.
\end{proof}
\section{Proof of Theorem C}\label{lcHM}
We start this section with the following apparently folklore property of central extensions. Recall that a group is \emph{$p$-torsionfree} if it has no element of order $p$.
\begin{lemma}\label{lem:Z1Z2}
Let $G$ be a group with non-trivial center. If $Z(G)$ is $p$-torsionfree and $G/Z(G)$ is a $p$-group, then $Z_2(G)=Z(G)$. If in addition $G/Z(G)$ is finite, then $G$ is abelian.
\end{lemma}
\begin{proof}
Let $Z_1=Z(G)$ be $p$-torsionfree and $G/Z_1$ be a $p$-group. Assume for a contradiction that $Z_1\lneqq Z_2$, where $Z_2=Z_2(G)$. We can choose $x\in Z_2 \setminus Z_1$ such that $x^p \in Z_1$. Since $x\notin Z_1$, there exists $y\in G$ that does not commute with $x$. Then $a =[x,y]=xyx^{-1}y^{-1} \ne e$, and $a\in Z_1$, by the definition of $Z_2$. Let $\phi:G\to G$ be the inner automorphism induced by $x$ and observe that $\phi(y)=xyx^{-1}=ay$. Since $x^p \in Z_1$ is a central element, $\phi^p = Id_G.$ On the other hand, as $a\in Z_1$, we also have $\phi^p(y) = {x^p}yx^{-p}=a^py.$ So $y = a^py$, and it follows that $a^p = e$. Since $a\in Z_1$, which is $p$-torsionfree, this yields $a = e$, a contradiction.
For the last assertion, recall that finite $p$-groups are nilpotent, so if $G$ is not abelian, then $Z(G/Z_1(G))$ is non-trivial, i.e., $Z_1(G)\lneqq Z_2(G)$, contradicting the first assertion.
\end{proof}
In the following result, we prove some more properties a non-discrete hereditarily minimal, locally compact group with non-trivial center satisfies.
\begin{corol}\label{cor:pgr}
Let $G$ be a non-discrete hereditarily minimal \ locally compact group with non-trivial center. Then there is a prime number $p$ such that:
\begin{enumerate}
\item for every non-central element $x \in G$, we have ${\mathbb Z}_p \cong Z(G) \leq \overline{\langle x\rangle} \cong {\mathbb Z}_p$. In particular, for every closed subgroup $H$ of $G$, either $H \leq Z(G)$, or $Z(G) \leq H$;
\item $G$ is torsionfree, and $G/Z(G)$ is a center-free $p$-group.
\end{enumerate}
\end{corol}
\begin{proof}
$(1)$: The center $Z(G)$ is isomorphic to ${\mathbb Z}_p$ for some prime $p$ by Proposition \ref{prop:ndhmlc}(2).
Let $x \in G $ be a non-central element, and consider the abelian subgroup $N_x = \overline{\langle Z(G), x\rangle}$ of $G$. As $N_x$ contains $Z(G) \cong {\mathbb Z}_p$, it is isomorphic to ${\mathbb Z}_p$ by Lemma \ref{Prod+Steph:lemma}. As the closed subgroups of ${\mathbb Z}_p$ are totally ordered by inclusion and $\overline{\langle x\rangle} \not\leq Z(G)$, we obtain that $Z(G) \leq \overline{\langle x\rangle} = N_x \cong {\mathbb Z}_p$.
It follows that if $H$ is a closed subgroup of $G$ not contained in $Z(G)$, then choosing $x\in H\setminus Z(G)$ gives $Z(G)\leq \overline{\langle x\rangle}\leq H$.
$(2)$: If $x\notin Z(G)$, then by item $(1)$ we have $\overline{\langle x\rangle}\cong {\mathbb Z}_p$, so the index $[\overline{\langle x\rangle}:Z(G)]=p^n$ for some $n\in {\mathbb N}$ and $x^{p^n}\in Z(G)$. This proves that $G/Z(G)$ is a $p$-group; moreover, $G$ is torsionfree, as every non-trivial $x\in G$ lies in a closed subgroup isomorphic to ${\mathbb Z}_p$ (namely $Z(G)$ if $x$ is central, and $\overline{\langle x\rangle}$ otherwise). Hence $Z_2(G)=Z(G)$ by Lemma \ref{lem:Z1Z2}, i.e., $G/Z(G)$ is center-free.
\end{proof}
In the next result, we use Corollary \ref{cor:pgr} to extend Corollary \ref{cor:spr}.
\begin{corol}\label{cor:nprodanov}
An infinite hereditarily minimal\ locally compact hypercentral group $G$ is isomorphic to ${\mathbb Z}_p$ for some prime $p$.
\end{corol}
\begin{proof} By a well-known theorem of Mal'cev (see \cite[page 8]{DIX}), all hypercentral groups are locally nilpotent, so in particular $G$ is locally solvable. Then $G$ is not discrete by Lemma \ref{lem:tnlf}, and clearly it has non-trivial center. Hence $Z_2(G)=Z(G)\cong {\mathbb Z}_p$ by Corollary \ref{cor:pgr}. Since $G$ is hypercentral, we deduce that $G=Z(G)\cong {\mathbb Z}_p$.
\end{proof}
The next results will be used in the subsequent proof of Theorem C.
\begin{prop}\label{new:thmB}
If $G$ is a hereditarily minimal \ locally compact group with non-trivial center, then every abelian subgroup of the quotient group $G/Z(G)$ is finite.
\end{prop}
\begin{proof}
Let $B$ be an abelian subgroup of $G/Z(G)$; its closure $A=\overline{B}$ is also abelian, and it suffices to prove that $A$ is finite. Let $H = q^{-1}(A)$, where $q:G \to G/Z(G)$ is the canonical homomorphism. So, $H$ is a closed, hence locally compact, subgroup of $G$ with $H \geq Z(G)$. Hence, $Z(H) \geq Z(G)\cap H=Z(G)$, and by the third isomorphism theorem we deduce that $H/Z(H)$ is a quotient of the abelian group $H/Z(G) \cong A$, so $H$ is nilpotent (of class $\leq 2$).
If $G$ is discrete, then $H$ is finite by Lemma \ref{lem:tnlf}. As $H = q^{-1}(A)$, we deduce that also $A$ is finite.
Now we assume that $G$ is non-discrete. By Proposition \ref{prop:ndhmlc}(2), $Z(G) \cong {\mathbb Z}_p$ for some prime $p$. As $H$ contains $Z(G)$, Corollary \ref{cor:nprodanov} implies that also $H$ is isomorphic to ${\mathbb Z}_p$. So $A \cong H/Z(G)$ is finite.
\end{proof}
The quotient group $Q=G/Z(G)$ is torsion, by Lemma \ref{lem:tnlf} and Corollary \ref{cor:pgr}.
According to Fact \ref{fac:lslf} and Fact \ref{fac:lfia}, the property of $Q$ described in Proposition \ref{new:thmB} is equivalent also to having no infinite locally solvable subgroup, or having no infinite locally finite subgroup.
\begin{corol}\label{new:corol:for:ThmB}
Let $G$ be a non-discrete hereditarily minimal \ locally compact group with non-trivial center. If $G/Z(G)$ is a locally finite group, then $G \cong {\mathbb Z}_p$ for some prime $p$.
\end{corol}
\begin{proof}
By Corollary \ref{cor:pgr}, we obtain that $G/Z(G)$ is a $p$-group and $Z(G)\cong {\mathbb Z}_p$.
If $G/Z(G)$ is infinite, then it has an infinite abelian subgroup by Fact \ref{fac:lfia}, contradicting Proposition \ref{new:thmB}. So $G/Z(G)$ is finite, and $G$ is abelian by Lemma \ref{lem:Z1Z2}.
\end{proof}
As a consequence of Corollary \ref{new:corol:for:ThmB} we now prove Theorem C.
\medskip
\noindent
{\bf Proof of Theorem C.} Let $G$ be an infinite hereditarily minimal\ locally compact group with non-trivial center that is either compact or locally solvable. We have to prove that $G \cong {\mathbb Z}_p$, for some prime $p$.
First note that $G$ is non-discrete by Lemma \ref{lem:tnlf}. Applying Corollary \ref{cor:pgr}, we obtain that $G/Z(G)$ is a $p$-group and $Z(G)\cong {\mathbb Z}_p$. In view of Corollary \ref{new:corol:for:ThmB}, it suffices to prove that $G/Z(G)$ is locally finite.
If $G$ is locally solvable, then its quotient $G/Z(G)$ is locally finite by Fact \ref{fac:lslf}.
If $G$ is compact, then $G/Z(G)$ is compact torsion, so locally finite by Fact \ref{fac:zel}(a).
\qed
\bigskip
Let us see that the assumption ``compact or locally solvable'' cannot be removed in Theorem C.
Recall that the countable discrete group $G = {\mathbb Z}(2) \times T$ from Example \ref{ex:tarski} is hereditarily minimal with non-trivial center. Clearly, this group is non-abelian, and it is neither locally solvable nor compact.
\medskip
Applying Theorem C to the groups studied in Corollary \ref{cor:pgr} (see also Proposition \ref{new:thmB}), we can deduce additional properties they share.
\begin{thm}\label{add:prop:thm}
Let $G$ be a non-discrete hereditarily minimal locally compact group, and assume $\{e\} \neq Z(G) \neq G$.
Then there exists a prime $p$ such that:
\begin{enumerate}
\item every non-trivial closed subgroup $H$ of $G$ (e.g., $Z(G)$) is open;
\item every non-trivial compact subgroup $H$ of $G$ is isomorphic to ${\mathbb Z}_p$;
\item every finite subgroup of $G/Z(G)$ is cyclic and $G/Z(G)$ satisfies $(\mathcal N_{fn})$.
\end{enumerate}
\end{thm}
\begin{proof}
By Corollary \ref{cor:pgr}, $G$ is torsionfree, $Z(G)\cong {\mathbb Z}_p$ and $G/Z(G)$ is a $p$-group, for some prime $p$.
$(1)$: First we prove that $Z(G)$ is open in $G$.
By Proposition \ref{prop:ndhmlc}(1), $G$ is totally disconnected, so it has a local base at the identity consisting of compact open subgroups, and let $K$ be one of these subgroups. If $K$ is central, then $Z(G)$ is open. Otherwise, we have $K\geq Z(G)$ by Corollary \ref{cor:pgr}(1), so $K\cong {\mathbb Z}_p$ by Theorem C. Moreover, the index $[K:Z(G)]$ is finite and $Z(G)$ is open in $K$, so $Z(G)$ is open in $G$.
Now let $H$ be a non-trivial closed subgroup of $G$. If it is contained in $Z(G)$ then, being closed in $Z(G) \cong {\mathbb Z}_p$, it is also open in $Z(G)$, hence open in $G$. Otherwise, $H$ contains $Z(G)$ by Corollary \ref{cor:pgr}(1), so $H$ is open.
$(2)$: If $H$ is a non-trivial compact subgroup of $G$, then $H$ is infinite as $G$ is torsionfree. This implies that $H\cap Z(G)\neq \{e\}$, since $Z(G)$ is open in $G$. So we deduce that $H \cong {\mathbb Z}_p$ by Theorem C.
$(3)$: Let $\pi: G \to G/Z(G)$ be the canonical map, and let $F$ be a finite subgroup of $G/Z(G)$.
Then $\pi^{-1}(F)$ is a closed subgroup of $G$, containing $\ker \pi = Z(G)\cong {\mathbb Z}_p$, and such that $[\pi^{-1}(F) : Z(G)] = |F|$ is finite. Then $\pi^{-1}(F)$ is compact, hence isomorphic to ${\mathbb Z}_p$ by item $(2)$. If $g \in G$ is such that ${\mathbb Z}_p \cong \overline{\langle g \rangle} = \pi^{-1}(F)$, then $F = \langle \pi (g) \rangle$.
To check that $G/Z(G)$ satisfies $(\mathcal N_{fn})$ pick a finite non-trivial normal subgroup $N$ of $G/Z(G)$ and let $|N| = p^n$.
By what we have just proved, $H = \pi^{-1}(N)$ is isomorphic to ${\mathbb Z}_p$, and indeed ${\mathbb Z}_p \cong Z(G) = p^n H$. Let $x \in H \setminus Z(G)$, and $y \in G$ be an element non-commuting with $x$. As $H$ is normal in $G$, we can consider the conjugation by $y$ as a map $\phi : H \to H$. Since $p^n x \in Z(G)$, we have $\phi (p^n x ) = p^n x$; on the other hand, $\phi (p^n x ) = p^n \phi ( x )$, so we conclude $p^n \phi ( x ) = p^n x$. As $H$ is abelian and torsionfree, we deduce $\phi(x) = x$, a contradiction.
\end{proof}
Note that $G$ as in Theorem \ref{add:prop:thm} is neither compact nor locally solvable, by Theorem C.
See Questions \ref{que:exist} and \ref{QQ:DC} for further comments and Proposition \ref{prop:cand} for a partial converse.
We conclude this section by listing the three possibilities (trichotomy) for an infinite locally compact non-abelian HM group $G$:
\begin{enumerate}
\item $G$ is discrete if and only if $G$ is torsion (by Lemma \ref{lem:tnlf} and Proposition \ref{prop:ndhmlc}(2)). In this case $G$ is not locally finite but may have finite non-trivial center (see Example \ref{ex:tarski}).
\hspace{-1.0cm}If $G$ is not discrete, we apply Proposition \ref{prop:ndhmlc}, and we obtain the following two cases:
\item $Z(G) = \{e\}$, $G$ contains a copy of ${\mathbb Z}_p$, and satisfies ($\mathcal{N}_{fn}$).
\item $Z(G) = Z_2(G) \cong {\mathbb Z}_p$ is open and proper in $G$, and $G$ has the properties listed in Corollary \ref{cor:pgr} and Theorem \ref{add:prop:thm}.
\end{enumerate}
Case (1) was treated in \S 3.1, while case (2) (for solvable groups) will be the subject of the rest of the paper.
Note that the groups in (1) and in (3) are neither compact, nor locally solvable, by Lemma \ref{lem:tnlf} and Theorem C.
\section{Semidirect products of $p$-adic integers}\label{Semidirect products of p-adic integers}
We begin this section with a general result on semidirect products, and their quotients. We then apply it in the subsequent Lemmas \ref{lemma:speder} and \ref{commTn}, where we consider some of the semidirect products introduced in Example \ref{Exaaa}.
\begin{lemma}\label{claim:gender}
Let $\alpha: Y\times X\to X$ be a continuous action of an abelian group $Y$ on an abelian group $X$, and consider the topological semidirect product $G = X\rtimes_\alpha Y$.
Consider the subgroup of $X$ defined by $A = \langle x-\alpha(y,x): x\in X, y\in Y \rangle$.
Then the derived group $G'$ coincides with $A \rtimes_\alpha \{e_Y\}$, and the quotient group $G/G'$ is topologically isomorphic to $(X/A) \times Y$.
\end{lemma}
\begin{proof}
For $y\in Y$, let $A_y = \{x-\alpha(y,x): x\in X\}$; then $A = \langle A_y: y \in Y\rangle $. Note that the commutator $[(x,e_Y), (e_X,y)] = (x-\alpha(y,x),e_Y)$ for every $x\in X$ and $y\in Y$, so
$A \rtimes \{e_Y\} \leq G'$.
On the other hand, every $y,y'\in Y$ commute, thus
\[
\alpha(y', A_y) = \{ \alpha(y',x) - \alpha(y', \alpha(y,x)) : x\in X\} = \{ \alpha(y',x) - \alpha(y, \alpha(y',x)) : x\in X\} \leq A_y \leq A,
\]
which implies that $A \rtimes_\alpha \{e_Y\}$ is normal in $G$.
Let $\chi:G\to (X/A) \times Y$ be defined by $\chi(x,y)=(\phi(x),y)$, where $\phi:X\to X/A$ is the canonical map. Since $\phi$ is a continuous open surjection, it follows that $\chi$ is a continuous open surjection. Moreover, the definitions of $\phi$ and $A$ imply that $\chi$ is also a homomorphism. It is easy to see that $\ker\chi=A \rtimes_\alpha \{e_Y\}$, so $G \big/(A \rtimes_\alpha \{e_Y\})\cong(X/A) \times Y$ is abelian and $G'\leq A \rtimes_\alpha \{e_Y\}$.
\end{proof}
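For instance, applying Lemma \ref{claim:gender} to $L=({\mathbb Q}_p,+)\rtimes {\mathbb Q}_p^*$ with the multiplication action gives $A=\langle (1-y)x: x\in {\mathbb Q}_p,\ y\in {\mathbb Q}_p^*\rangle={\mathbb Q}_p$ (take, e.g., $y=p$, so that $1-y$ is invertible), hence $L'=({\mathbb Q}_p,+)\rtimes \{1\}$, in accordance with the equality used in \S\ref{Proof of Theorem A}.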
Recall that the topological group $(C_p,\cdot)$ is isomorphic to the group $({\mathbb Z}_p,+)$ (essentially, via the $p$-adic logarithm), so the closed subgroups of $(C_p,\cdot)$ are totally ordered by inclusion, and the non-trivial ones have the form $C_p^{p^n}$, for $n \in {\mathbb N}$, as described in (\ref{Cppn:form}).
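Concretely (a standard fact, recorded here only for the reader's convenience and not needed in the proofs), the $p$-adic logarithm
\begin{equation*}
\log(1+u)=\sum_{k\geq 1}(-1)^{k+1}\frac{u^{k}}{k}
\end{equation*}
defines a topological isomorphism $(C_p,\cdot)\to (p{\mathbb Z}_p,+)$ for $p>2$, and $(C_2,\cdot)\to (4{\mathbb Z}_2,+)$ for $p=2$, with inverse given by the $p$-adic exponential; under this identification $C_p^{p^n}$ corresponds to $p^{n+1}{\mathbb Z}_p$ (resp.\ $2^{n+2}{\mathbb Z}_2$), in accordance with (\ref{Cppn:form}).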
Now we apply Lemma \ref{claim:gender} to the case when $(C_p^{p^n},\cdot)$, viewed as a subgroup of the automorphism group ${\mathrm Aut}\,({\mathbb Z}_p)$, acts on ${\mathbb Z}_p$ via the natural action by multiplication. Recall that we denote by $M_{p,n}= {\mathbb Z}_p \rtimes C_p^{p^n}$ the semidirect product arising this way.
\begin{lemma}\label{lemma:speder}
For the group $M_{p,n}= {\mathbb Z}_p \rtimes C_p^{p^n}$, the following hold:
\begin{itemize}
\item if $p>2$, then $M_{p,n}' = p^{n+1}{\mathbb Z}_p \rtimes \{1\}$ and $M_{p,n} / M_{p,n}' \cong {\mathbb Z}(p^{n+1})\times C_p^{p^n}$;
\item if $p=2$, then $M_{2,n}' = 2^{n+2}{\mathbb Z}_2 \rtimes \{1\}$ and $M_{2,n} / M_{2,n}' \cong {\mathbb Z}(2^{n+2})\times C_2^{2^n}$.
\end{itemize}
\end{lemma}
\begin{proof}
In the notation of Lemma \ref{claim:gender}, let $A= \langle (1-y)x: x\in {\mathbb Z}_p , y\in C_p^{p^n}\rangle$.
By (\ref{Cppn:form}), every generator $(1-y)x$ of $A$ lies in $p^{n+1}{\mathbb Z}_p$ if $p>2$ (resp.\ in $2^{n+2}{\mathbb Z}_2$ if $p=2$), while the choice $x=1$, $y=1+p^{n+1}$ (resp.\ $y=1+2^{n+2}$) shows that $p^{n+1}{\mathbb Z}_p\leq A$ (resp.\ $2^{n+2}{\mathbb Z}_2\leq A$). Hence $A = p^{n+1}{\mathbb Z}_p$ if $p>2$, and $A = 2^{n+2}{\mathbb Z}_2$ if $p=2$.
\end{proof}
In the following remark we give the explicit isomorphisms stated in the above Lemma \ref{lemma:speder}.
\begin{remark}\label{rem:last}
Using the proof of Lemma \ref{claim:gender} we can write explicitly the isomorphisms in Lemma \ref{lemma:speder}.
Let $p$ be a prime and $n\in {\mathbb N}$. The isomorphism $\widetilde{\psi}\colon M_{p,n}/M_{p,n}'\to {\mathbb Z}(p^{n+1})\times C_p^{p^n}$ (resp.\ ${\mathbb Z}(2^{n+2})\times C_2^{2^n}$ if $p=2$) satisfies the equality $\widetilde{\psi}((x,y)M_{p,n}')=\psi(x,y) $ for every $(x,y)\in M_{p,n}$, where:
\begin{itemize}
\item if $p>2$, $\psi:M_{p,n}\to {\mathbb Z}(p^{n+1})\times C_p^{p^n}$ is defined by $\psi(x,y)=(x\mod p^{n+1},y )$;
\item if $p=2$, $\psi:M_{2,n}\to {\mathbb Z}(2^{n+2})\times C_2^{2^n}$ is defined by $\psi(x,y)=(x\mod 2^{n+2},y )$.
\end{itemize}
In other words, we have $\psi=\widetilde{\psi}\circ q$, where $q:M_{p,n}\to M_{p,n}/M_{p,n}'$ is the canonical map.
\end{remark}
Obviously, if two groups $M_{p,n}$, $M_{p',n'}$ are isomorphic, then $p = p'$. Under this assumption, we now prove that also $n = n'$.
\begin{corol}\label{thm:pair}
For $n \in {\mathbb N}$, the subgroups $M_{p,n}= {\mathbb Z}_p\rtimes C_p^{p^n}$ of $M_{p,0}=M_p= {\mathbb Z}_p\rtimes C_p$ are pairwise non-isomorphic.
\end{corol}
\begin{proof}
Let $n,m \in {\mathbb N}$, and assume that $\psi: M_{p,n} \to M_{p,m}$ is an isomorphism. Then $\psi(M_{p,n}') = M_{p,m}'$, and
$\psi$ induces an isomorphism $\bar \psi: A \to B$, where $A = M_{p,n}/M_{p,n}'$, and $B = M_{p,m}/M_{p,m}'$.
By Lemma \ref{lemma:speder}, comparing the torsion subgroups of $A$ and $B$, we obtain ${\mathbb Z}(p^{n+1}) \cong {\mathbb Z}(p^{m+1})$ when $p>2$, or ${\mathbb Z}(2^{n+2}) \cong {\mathbb Z}(2^{m+2})$ when $p=2$. In any case, $n = m$.
\end{proof}
The following result is the counterpart of Lemma \ref{lemma:speder} for the groups $T_n$.
\begin{lemma}\label{commTn}
For every $n\in {\mathbb N}$, the commutator subgroup of the group $T_n$ is $T_n'= 2{\mathbb Z}_2\rtimes\{1\}$, and the quotient group $T_n / T_n'$ is isomorphic to ${\mathbb Z}(2) \times C_2^{2^n}$.
\end{lemma}
\begin{proof}
Let $A=\langle x-\beta(y,x): x\in {\mathbb Z}_2, \ y\in C_2^{2^{n}} \rangle \leq {\mathbb Z}_2$.
In view of Lemma \ref{claim:gender}, we have to prove that $A=2{\mathbb Z}_2$. By the definition of $\beta$ we have $A=\langle V, W\rangle$, where $V=\langle x-yx: x\in {\mathbb Z}_2, \ y\in C_2^{2^{n+1}} \rangle \leq {\mathbb Z}_2$ and $W=\langle x+yx: x\in {\mathbb Z}_2, \ y\in C_2^{2^{n}}\setminus C_2^{2^{n+1}}\rangle \leq {\mathbb Z}_2$.
Note that if $y \in C_2^{2^{n+1}}$, then $y \in 1 + 2{\mathbb Z}_2$, so $1-y \in 2 {\mathbb Z}_2$. In particular, $V=\langle (1-y)x: x\in {\mathbb Z}_2, \ y\in C_2^{2^{n+1}} \rangle \leq 2{\mathbb Z}_2$.
To study $W$, first observe that $C_2^{2^{n}}\setminus C_2^{2^{n+1}}=(1+2^{n+2}{\mathbb Z}_2)\setminus (1+2^{n+3}{\mathbb Z}_2)=1+2^{n+2}+2^{n+3}{\mathbb Z}_2$. Hence,
\[
W=\langle x(1+y): x\in {\mathbb Z}_2, \ y\in 1+2^{n+2}+2^{n+3}{\mathbb Z}_2 \rangle =\langle xt:x\in {\mathbb Z}_2, \ t\in 2+2^{n+2}+2^{n+3}{\mathbb Z}_2\rangle \leq 2{\mathbb Z}_2.
\]
On the other hand,
$2{\mathbb Z}_2=(2+2^{n+2}){\mathbb Z}_2\leq W$, since $t=2+2^{n+2}=2(1+2^{n+1})\in 2+2^{n+2}+2^{n+3}{\mathbb Z}_2$ and $1+2^{n+1}\in {\mathbb Z}_2^*$.
It is now clear that $A=\langle V, W\rangle= W= 2{\mathbb Z}_2$, which completes the proof.
\end{proof}
\begin{prop}\label{prop:p=2}
For $n\in {\mathbb N}$, the groups $T_n$ are pairwise non-isomorphic.
\end{prop}
\begin{proof} Assume that there exists a topological isomorphism $\psi: T_n\to T_m$. Let $\pi_2:T_m\to C_2^{2^m}$ be the projection on the second coordinate. We first show that $\pi_2(\psi({\mathbb Z}_2\rtimes\{1\}))=1$. If $a\in {\mathbb Z}_2$ and $\pi_2(\psi(a,1))=c$, then $\pi_2(\psi(2a,1))=c^2$. By Lemma \ref{commTn}, $T_n'=T_m' = 2{\mathbb Z}_2\rtimes\{1\}$, which implies that $\psi(2{\mathbb Z}_2\rtimes\{1\})=2{\mathbb Z}_2\rtimes\{1\}$ and $\pi_2(\psi(2{\mathbb Z}_2\rtimes\{1\}))=1$. As $(2a,1)\in 2{\mathbb Z}_2\rtimes\{1\}$, we deduce that $c^2=1$. It follows that $c=1$, since $C_2^{2^m}$ is torsionfree.
Consider the subgroups $M_{2,n+1}={\mathbb Z}_2\rtimes_{\beta_n}C_2^{2^{n+1}}$ and $M_{2,m+1}={\mathbb Z}_2\rtimes_{\beta_m}C_2^{2^{m+1}}$ of $T_n$ and $T_m$, respectively. We will prove that $\psi(M_{2,n+1})=M_{2,m+1}$. Since $\psi^{-1}$ is also a topological isomorphism, it suffices to show that $\psi(M_{2,n+1})\leq M_{2,m+1}$.
For this aim we prove that $\pi_2(\psi(M_{2,n+1}))\leq C_2^{2^{m+1}}$. Note that an element of $M_{2,n+1}$ has the form $(a,b^2)$, where $a\in {\mathbb Z}_2$ and $b\in C_2^{2^n}$. Clearly, $\pi_2(\psi(0,b))\in C_2^{2^m}$, so $$\pi_2(\psi(a,b^2))=\pi_2(\psi(a,1))\pi_2(\psi(0,b))^2= 1\cdot \pi_2(\psi(0,b))^2\in C_2^{2^{m+1}}.$$ Hence, $\pi_2(\psi(M_{2,n+1}))\leq C_2^{2^{m+1}}$, which means that $\psi(M_{2,n+1})\leq M_{2,m+1}$. By Corollary \ref{thm:pair}, we deduce that $m=n$.
\end{proof}
It is not difficult to see that if $T_n \cong M_{p,m}$ for some $n,m,p$, then $p=2$. In the next result we apply Lemma \ref{commTn} to prove also that a group $T_n$ is not isomorphic to any of the groups $M_{2,m}$.
\begin{prop}\label{Tn-Mpn:non-isom}
For every $n,m\in {\mathbb N}$, and every prime number $p$, the groups $T_n$ and $M_{p,m}$ are not isomorphic.
\end{prop}
\begin{proof}
By contradiction, assume that $\psi: T_n \to M_{p,m}$ is an isomorphism. As in the proof of Corollary \ref{thm:pair}, $\psi$ induces an isomorphism between the torsion groups $t(T_n/T_{n}')$ and $t(M_{p,m}/M_{p,m}')$.
But $t(T_n/T_{n}')\cong {\mathbb Z}(2)$ by Lemma \ref{commTn}, while $t(M_{2,m}/M_{2,m}')\cong {\mathbb Z}(2^{m+2})$ and $t(M_{p,m}/M_{p,m}')\cong {\mathbb Z}(p^{m+1})$ when $p>2$ by Lemma \ref{lemma:speder}, a contradiction.
\end{proof}
In the sequel, we consider a faithful action $\alpha: {\mathbb Z}_p \times {\mathbb Z}_p \to {\mathbb Z}_p$ of ${\mathbb Z}_p$ on ${\mathbb Z}_p$, and the semidirect product $M_{p,\alpha}= ({\mathbb Z}_p,+)\rtimes_{\alpha} ({\mathbb Z}_p,+)$ arising this way.
Recall that ${\mathrm Aut}\,({\mathbb Z}_p) \cong {\mathbb Z}_p^*$, as every $\phi\in {\mathrm Aut}\,({\mathbb Z}_p)$ has the form $\phi(x)=m\cdot x$ for $m =\phi(1) \in {\mathbb Z}_p^*$; identifying $\phi$ with $\phi(1)$, the action $\alpha$ gives a group monomorphism $f:({\mathbb Z}_p,+)\to {\mathbb Z}_p^*$ such that $\alpha(y,x)= f(y)\cdot x$.
\begin{prop}\label{prop:alpha}
For a prime $p$, consider the semidirect product $M_{p,\alpha}= ({\mathbb Z}_p,+)\rtimes_{\alpha} ({\mathbb Z}_p,+)$, where $\alpha$ is a faithful action.
\begin{itemize}
\item If $p >2$, then $M_{p,\alpha} \cong M_{p,n}$ for some $n\in {\mathbb N}$.
\item If $p =2$, then either $M_{p,\alpha} \cong M_{p,n}$, or $M_{p,\alpha} \cong T_n$, for some $n\in {\mathbb N}$.
\end{itemize}
\end{prop}
\begin{proof}
Since $\alpha$ is faithful, there is a group monomorphism $f:({\mathbb Z}_p,+)\to {\mathbb Z}_p^*= C_pF_p$ such that $\alpha(y,x)= f(y)\cdot x$. Now we consider two cases, depending on whether the image of $f$ is contained in $C_p $ or not.
If it is, then $f:({\mathbb Z}_p,+)\to C_p$ is continuous when we equip these two copies of $({\mathbb Z}_p,+)$ with the $p$-adic topology. So $f({\mathbb Z}_p)= C_p^{p^n}$ for some $n\in {\mathbb N}$, and $f: ({\mathbb Z}_p,+)\to C_p^{p^n}$ is a topological isomorphism.
We define $\phi:({\mathbb Z}_p,+)\rtimes_{\alpha} ({\mathbb Z}_p,+)\to ({\mathbb Z}_p,+)\rtimes (C_p^{p^n},\cdot)$ by $(x,y)\mapsto (x,f(y))$. To prove that $\phi$ is a topological isomorphism, it remains only to check that it is a homomorphism, as follows:
\begin{equation}\label{homomorphism}
\begin{split}
\phi((x_1,y_1)(x_2,y_2))=\phi(x_1+\alpha(y_1,x_2), y_1+y_2)=\phi(x_1+f(y_1)\cdot x_2, y_1+y_2)=(x_1+f(y_1)\cdot x_2, f(y_1)f(y_2)),
\\
\phi(x_1,y_1)\phi(x_2,y_2)=(x_1,f(y_1))(x_2,f(y_2))=(x_1+f(y_1)\cdot x_2, f(y_1)f(y_2)).
\end{split}
\end{equation}
Now we consider the case when $f({\mathbb Z}_p) \nsubseteq C_p$.
First we see that this happens only if $p=2$; indeed this follows from the fact that if $p>2$ then $({\mathbb Z}_p,+)$ is $(p-1)$-divisible, while $F_p$ has cardinality $p-1$.
So we have $p=2$ and $f:({\mathbb Z}_2,+)\to {\mathbb Z}_2^* =C_2 F_2$ such that $\alpha(y,x)= f(y)\cdot x$ and $f({\mathbb Z}_2) \nsubseteq C_2$.
Recall that $C_2=1+4{\mathbb Z}_2$ and $F_2 =\{ 1,-1 \}$.
Equipping $C_2 \cong {\mathbb Z}_2$ with the $2$-adic topology, $F_2$ with the discrete topology, and the codomain of $f$ with the product topology, it is easy to see that $f: ({\mathbb Z}_2,+)\to C_2 \cdot F_2$ is continuous,
so $f(1)\in (-1)\cdot C_2$.
Consider the (continuous) canonical projection $\pi: C_2 \cdot F_2 \to C_2$, and call $\tilde f$ the composition map $\tilde f = \pi \circ f : {\mathbb Z}_2 \to C_2$. Then $\tilde f$ is a continuous homomorphism, $\tilde f (1)=-f(1)$, and it is easy to see that
\begin{equation*}
\tilde f(y) =
\begin{cases}
f(y) & \text{ if } y\in 2{\mathbb Z}_2,\\
-f(y) & \text{ if } y \in {\mathbb Z}_2 \setminus 2{\mathbb Z}_2.
\end{cases}
\end{equation*}
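Indeed, $f(1)^2\in C_2$ (the square of every element of ${\mathbb Z}_2^*=C_2F_2$ lies in $C_2$, as $F_2$ has exponent $2$), so the continuity of $f$ and the closedness of $C_2$ give
\[
f(2{\mathbb Z}_2)\subseteq \overline{\{f(1)^{2k}:k\in{\mathbb Z}\}}\subseteq C_2 ,
\]
hence $\tilde f=\pi\circ f$ agrees with $f$ on $2{\mathbb Z}_2$; while for $y\in {\mathbb Z}_2\setminus 2{\mathbb Z}_2$ we have $f(y)=f(1)f(y-1)\in \big((-1)\cdot C_2\big)\cdot C_2=(-1)\cdot C_2$, so $\pi(f(y))=-f(y)$.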
Since ${\mathbb Z}_2$ is compact, there is $n\in {\mathbb N}$ such that $\tilde f({\mathbb Z}_2)=C_2^{2^n}=(1+4{\mathbb Z}_2)^{2^n}$, and we now prove that $M_{2,\alpha}$ is isomorphic to $T_n$.
Let $\phi:M_{2,\alpha}\to T_{n}$ be defined by $\phi(x,y)=(x,\tilde f(y))$.
As $\tilde f: ({\mathbb Z}_2,+) \to C_2^{2^n}$ is a topological group isomorphism, we deduce that $\phi$ is a homeomorphism. Let us show that $\phi$ is also a homomorphism. If $(x_1,y_1), (x_2,y_2)\in M_{2,\alpha}$, then
\[
\phi(x_1,y_1) \phi (x_2,y_2)=(x_1,\tilde f(y_1))(x_2,\tilde f(y_2))=
( x_1+\beta_n \big(\tilde f(y_1),x_2 \big) , \tilde f(y_1)\tilde f(y_2) ).
\]
On the other hand, since $\tilde f$ is a homomorphism we have
\[
\phi((x_1,y_1)(x_2,y_2))= \phi(x_1+f(y_1) \cdot x_2,y_1+y_2)=
(x_1+f(y_1) \cdot x_2,\tilde f(y_1+y_2))=(x_1+f(y_1) \cdot x_2,\tilde f(y_1)\tilde f(y_2) ).
\]
To finish the proof, we now check that $\beta_n(\tilde f(y_1),x_2) = f(y_1) \cdot x_2$.
If $y_1\in 2{\mathbb Z}_2$, then $\tilde f(y_1)=f(y_1)$ and also $\tilde f(y_1)\in C_2^{2^{n+1}}$, so
$\beta_n(\tilde f(y_1),x_2)=\tilde f(y_1)\cdot x_2=f(y_1) \cdot x_2$.
If $y_1 \in {\mathbb Z}_2 \setminus 2{\mathbb Z}_2$, then $\tilde f(y_1)=-f(y_1)$, and moreover $\tilde f(y_1)\in C_2^{2^{n}} \setminus C_2^{2^{n+1}}$, so $\beta_n(\tilde f(y_1),x_2)=-\tilde f(y_1)\cdot x_2=f(y_1)\cdot x_2$.
\end{proof}
In the following lemma we describe the faithful actions of a finite group on the $p$-adic integers.
\begin{lemma}\label{lem:uonique}
Let $\alpha: H\times {\mathbb Z}_p\to {\mathbb Z}_p$ be a faithful action of a finite group $H$ on ${\mathbb Z}_p$. Then $H$ is isomorphic to a subgroup $F$ of $F_p$, and $G=({\mathbb Z}_p,+)\rtimes_{\alpha} H$ is topologically isomorphic to $K_{p,F}$.
\end{lemma}
\begin{proof}
Since $\alpha$ is faithful, there is a group monomorphism $f:H\to {\mathbb Z}_p^*$ such that $\alpha(h,x)= f(h)\cdot x$. As $H$ is finite, its image $F$ is contained in the torsion subgroup $F_p$ of ${\mathbb Z}_p^*$.
Consider the map $\phi: G\to K_{p,F}$ defined by $\phi(a,b)=(a,f(b))$ for every $(a,b)\in G$.
Following the argument in (\ref{homomorphism}), one can verify that $\phi$ is a topological isomorphism.
\end{proof}
\section{Proof of Theorem D}\label{Proof of Theorem D}
In this section we prove that infinite locally compact solvable $\operatorname{HM}$ groups are metabelian (see Proposition \ref{prop:ms}). Then we use it to classify all locally compact solvable $\operatorname{HM}$ groups.
We start this section with two general results that we use in the sequel. In particular, Lemma \ref{lem:difp} will be used in Theorem \ref{thm:charmeta}, Proposition \ref{prop:tri}, Proposition \ref{prop:dich} and Theorem \ref{prop:nonfree}.
\begin{lemma} \label{lem:meta}
Let $G$ be an infinite hereditarily minimal \ locally compact solvable group of class $n>1$. Then, $\overline{G^{(n-1)}}$ is isomorphic to ${\mathbb Z}_p$ for some prime $p$.
\end{lemma}
\begin{proof}
Since $G$ is solvable of class $n>1$, $G^{(n-1)}$ is a non-trivial normal abelian subgroup of $G$. By Lemma \ref{lem:tnlf}, $G$ is non-discrete.
Hence, $G^{(n-1)}$ is infinite, by Proposition \ref{prop:ndhmlc}. Being an infinite hereditarily minimal\ locally compact abelian group, $\overline{G^{(n-1)}}$ is isomorphic to ${\mathbb Z}_p$ for some prime $p$, by Corollary \ref{cor:spr}.
\end{proof}
Let $p$ be a prime. Recall that an abelian group $G$ is {\em $p$-divisible} if $pG=G$.
\begin{lemma} \label{lem:difp}
Let $p$ and $q$ be distinct primes and $\alpha:{\mathbb Z}_q\times {\mathbb Z}_p\to {\mathbb Z}_p$ be a continuous action by automorphisms. Then ${\mathbb Z}_p\rtimes_{\alpha} {\mathbb Z}_q$ is not hereditarily locally minimal.
\end{lemma}
\begin{proof}
If $K=\ker\alpha$ is trivial, then ${\mathbb Z}_q$ is algebraically isomorphic to a subgroup of ${\mathrm Aut}\,({\mathbb Z}_p)\cong {\mathbb Z}_p^* $.
This is impossible since ${\mathbb Z}_q$ is $p$-divisible, while ${\mathbb Z}_p^*= C_p F_p$ contains no infinite $p$-divisible subgroups.
Hence, $K$ is a non-trivial closed subgroup of ${\mathbb Z}_q$, so isomorphic to ${\mathbb Z}_q$ itself.
By Fact \ref{fac:ext}, the group ${\mathbb Z}_p\rtimes_{\alpha} K\cong {\mathbb Z}_p\times {\mathbb Z}_q$ is not hereditarily locally minimal.
\end{proof}
The following theorem provides a criterion for hereditary minimality in terms of properties of the closed subgroups of a compact solvable group. We are going to use it in Example \ref{ex:padic:rtimes:F} to check that the groups $K_{p,F}$ are hereditarily minimal.
\begin{thm}\label{thm:charmeta}
Let $G$ be a compact solvable group. The following conditions are equivalent:
\begin{enumerate}
\item $G$ is hereditarily minimal;
\item there exists a prime $p$, such that for every infinite closed subgroup $H$ of $G$ either $Z(H) = \{e\}$ or $H \cong {\mathbb Z}_p$;
\item there exists a prime $p$, such that for every infinite closed subgroup $H$ of $G$ either $Z(H) = \{e\}$ or $Z(H) \cong {\mathbb Z}_p$.
\end{enumerate}
\end{thm}
\begin{proof} Since the assertion of the theorem is trivially true for finite or abelian groups, we can assume that $G$ is infinite and solvable of class $n>1$.
$(1) \Rightarrow (2)$: By Lemma \ref{lem:meta}, $B=\overline{G^{(n-1)}}\cong{\mathbb Z}_p$ for some prime $p$.
Let $H$ be an infinite closed subgroup of $G$ such that $Z(H)$ is non-trivial. By Theorem C, $H\cong {\mathbb Z}_q$ for some prime $q$. If $q\neq p$, then $H\cap B$ is trivial. Since $B$ is a normal subgroup of $G$, we deduce that $B\rtimes H \cong {\mathbb Z}_p\rtimes {\mathbb Z}_q$ is hereditarily minimal, contradicting Lemma \ref{lem:difp}.
$(2)\Rightarrow (3)$: Trivial.
$(3) \Rightarrow (1)$: Let $H$ be an infinite subgroup of $G$. We will prove that $H$ is minimal. If $H$ is abelian, then $Z(H)=H\leq Z(\overline {H})$ and by $(3)$ we deduce that $Z(\overline {H})$ is isomorphic to ${\mathbb Z}_p$, so, in particular, its subgroup $H$ is minimal by Prodanov's theorem. Hence, we can assume that $H$ is non-abelian, so solvable of class $m$ where $n\geq m>1$, and we have to show that it is essential in
$\overline H$.
Consider the closed non-trivial abelian subgroup $M=\overline{H^{(m-1)}}$ of $\overline H$. We prove that $M\cong {\mathbb Z}_p$. It suffices to show, by our assumption $(3)$, that $M$ is infinite. Assume, to the contrary, that $M$ is finite. By Corollary \ref{cor:lsia},
$\overline H$ contains an infinite
closed abelian subgroup $A$. By $(3), \ A\cong {\mathbb Z}_p$ and thus $A\cap M$ is trivial. As $M$ is a normal subgroup of $\overline H$, the topological semidirect product $M\rtimes_{\alpha}A$ is well defined, where $\alpha$ is the natural action by conjugations $\alpha:A\times M\to M.$ Being an infinite group that acts on a finite group, $A$ must have a non-trivial kernel $K=\ker\alpha.$ Hence, $K\cong {\mathbb Z}_p$ as a closed non-trivial subgroup of $A\cong {\mathbb Z}_p$. It follows that $\overline H$ contains a subgroup isomorphic to $M\rtimes_{\alpha}K\cong M\times K\cong M\times {\mathbb Z}_p$. Let $C$ be a non-trivial cyclic subgroup of $M$, then $G$ contains an infinite closed abelian subgroup $L\cong C\times {\mathbb Z}_p$ that is not isomorphic to ${\mathbb Z}_p$, contradicting $(3)$.
Coming back to the proof of the essentiality of $H$ in $\overline{H}$, let $N$ be a non-trivial closed normal subgroup of $\overline H$.
If $N_1=N\cap M$ is trivial, then $NM\cong N\times M\cong N\times {\mathbb Z}_p$. Following the same ideas of the first part of the proof, one can find an infinite abelian subgroup of $G$ not isomorphic to ${\mathbb Z}_p$, contradicting $(3)$.
Therefore $N_1$ and $H_1=H\cap M$ are non-trivial subgroups of $M\cong {\mathbb Z}_p$, with $N_1$ closed in $M$. By Lemma \ref{lem:cyc}, $N_1\cap H_1$ is non-trivial.
Since $N\cap H\geq N_1\cap H_1$, we conclude that $N\cap H\neq \{e\}$, proving the essentiality of $H$ in $\overline{H}$.\end{proof}
Now we give an application of Theorem \ref{thm:charmeta} that we use in Example \ref{ex:Tn}.
\begin{lemma}\label{laaaasts:lemma}
Let $G$ be a compact solvable torsionfree group containing a closed hereditarily minimal\ solvable subgroup $G_1$ of class $k$.
If $\overline{G_1^{(k-1)}}\cong{\mathbb Z}_p$, and $[G:G_1]=p^n$, for some $n\in {\mathbb N}$ and a prime $p$, then $G$ is hereditarily minimal.
\end{lemma}
\begin{proof} According to Theorem \ref{thm:charmeta}, it suffices to check that for every infinite closed subgroup $H$ of $G$, either $Z(H) = \{e\}$ or $H\cong {\mathbb Z}_p$. Assume that $H$ is an infinite closed subgroup of $G$, and let $H_1 = G_1 \cap H$. Then there exists $r\leq n$ such that
\begin{equation}\label{dead:equation}
[H:H_1]=p^r.
\end{equation}
So, $H_1$ is an infinite closed subgroup of $G_1$.
Now assume that $Z(H)$ is non-trivial and pick any $e \ne z\in Z(H)$. Then $e\ne z^{p^m} \in H_1$ for some integer $0 < m \leq r$, by (\ref{dead:equation}). As $z^{p^m} \in Z(H_1)$, this proves that $Z(H_1) \ne \{e\}$, so $H_1\cong {\mathbb Z}_p$ by Theorem \ref{thm:charmeta}.
Let $A=Z(H)\cap H_1$. As $z^{p^m}\in A$, it is a non-trivial closed subgroup of $H_1\cong {\mathbb Z}_p$, so $A$ is isomorphic to ${\mathbb Z}_p$ and $[H_1:A]=p^s$ for some $s\in {\mathbb N}$. Then $p^{r+s} = [H:H_1] [H_1:A] = [H:A] = [H:Z(H)] [Z(H):A]$, so $H/Z(H)$ is a finite $p$-group. Thus $H$ is abelian by Lemma \ref{lem:Z1Z2}.
Since the compact abelian torsionfree group $H$ contains a subgroup isomorphic to ${\mathbb Z}_p$ of finite index, we deduce that $H$ itself is isomorphic to ${\mathbb Z}_p$.
\end{proof}
\begin{example}\label{ex:padic:rtimes:F}\label{ex:Tn}
Now we use Theorem \ref{thm:charmeta} and Lemma \ref{laaaasts:lemma} to see that the infinite compact metabelian groups $K_{p,F}$ and $T_n=({\mathbb Z}_2,+)\rtimes_{\beta} C_2^{2^n}$ are hereditarily minimal.
\begin{itemize}
\item[(a)] To show that $K_{p,F}={\mathbb Z}_p\rtimes F$ is hereditarily minimal, pick an infinite closed subgroup $H$ of $K_{p,F}$. If $H$ is non-abelian, then $Z(H)$ is trivial by Lemma \ref{lem:trabel}. If $H$ is abelian, then there exists $e\ne x\in H \cap ( {\mathbb Z}_p\rtimes \{1\} )$ since $F$ is finite. As $H\leq C_{ K_{p,F} }(x)\leq {\mathbb Z}_p\rtimes \{1\}$, we obtain that $H\cong {\mathbb Z}_p$. By Theorem \ref{thm:charmeta}, we conclude that $K_{p,F}$ is hereditarily minimal.
\item[(b)] The group $T_n$ is torsionfree, and its compact subgroup $({\mathbb Z}_2,+)\rtimes_{\beta}C_2^{2^{n+1}}\cong M_{2,n+1}$ has index $2$ in $T_n$. Moreover, $M_{2,n+1}$ is hereditarily minimal\ by Example \ref{padics:rtimes:padics}, while $M_{2,n+1}' \cong {\mathbb Z}_2$ by Lemma \ref{lemma:speder}. So we deduce by Lemma \ref{laaaasts:lemma} that $T_n$ is hereditarily minimal.
Alternatively, since its open subgroup $M_{2,n+1}$ is hereditarily locally minimal, the group $T_n$ is also hereditarily locally minimal\ by Lemma \ref{lem:HLM}(2). As $T_n$ is torsionfree, Theorem B implies that $T_n$ is hereditarily minimal.
\end{itemize}
\end{example}
The next result, in which we study when some semidirect products are hereditarily minimal, should be compared with Proposition \ref{prop:alpha} and Lemma \ref{lem:uonique}.
\begin{lemma}\label{lemma:kerhm}
Let $G=({\mathbb Z}_p,+)\rtimes_{\alpha} T$, where $T$ is either finite or (topologically) isomorphic to $({\mathbb Z}_p,+)$. Then $G$ is hereditarily minimal\ if and only if $\alpha$ is faithful.
\end{lemma}
\begin{proof}
If $K =\ker\alpha$ is not trivial, then $({\mathbb Z}_p,+)\times K\leq G$ is not hereditarily minimal\ by Prodanov's theorem,
so $G$ is also not hereditarily minimal.
Now assume that $\alpha$ is faithful. If $T$ is finite, then $T$ is isomorphic to a subgroup $F$ of $F_p$, and $G\cong K_{p,F}$ by Lemma \ref{lem:uonique}. By Example \ref{ex:padic:rtimes:F}, $G$ is hereditarily minimal. If $T$ is isomorphic to ${\mathbb Z}_p$, then by Proposition \ref{prop:alpha} either $G\cong M_{p,n}$ or $G\cong T_n$ for some $n\in {\mathbb N}.$ We use Example \ref{padics:rtimes:padics} and Example \ref{ex:Tn}, respectively, to conclude that $G$ is hereditarily minimal.
\end{proof}
\subsection{The general case}\label{The general case}
The next proposition offers a reduction from the general case to a specific situation that will be used repeatedly in the sequel, often without explicitly recalling all the details.
\begin{prop}\label{prop:ms}
Let $G$ be an infinite hereditarily minimal\ locally compact group, which is either compact or locally solvable.
If $G$ has a non-trivial normal solvable subgroup, then $G$ is metabelian. In particular, it has a normal subgroup $N\cong {\mathbb Z}_p$, such that $N = C_G(N)$, and there exists a monomorphism
\[
j: G/N \hookrightarrow {\mathrm Aut}\,(N)\cong {\mathrm Aut}\,({\mathbb Z}_p) \cong {\mathbb Z}_p \times F_p.\eqno{(\dag)}
\]
\end{prop}
\begin{proof}
Let $A$ be a non-trivial normal solvable subgroup of $G$, and assume that $n > 0$ is the solvability class of $A$. Then $A^{(n-1)}$ is non-trivial and abelian; it is characteristic in $A$, so normal in $G$. Hence, we can assume $A$ to be abelian.
Now note that $G$ is non-discrete by Lemma \ref{lem:tnlf}, so $A$ is infinite by Proposition \ref{prop:ndhmlc}.
Let $N_1 = C_G(A)$, and note that also $N_1$ is normal in $G$. As $A$ is abelian, we have $A \leq N_1$, and indeed $A \leq Z(N_1)$, so $Z(N_1)$ is infinite. Moreover, $N_1$ is closed in $G$, so it is hereditarily minimal, and locally compact.
Since $N_1$ is also either compact or locally solvable, we have $N_1 \cong {\mathbb Z}_p$ for some prime $p$ according to Theorem C.
By induction, define $N_{n+1} = C_G(N_n)$ for $n \geq 1$, and similarly prove that $ N_n \cong{\mathbb Z}_p $ is normal in $G$ for every $n$. Then $H = \displaystyle{ \overline{ \bigcup_{n} N_n } }$ is abelian, locally compact, hereditarily minimal, so also $H \cong {\mathbb Z}_p$ by Corollary \ref{cor:spr}. Then $[H: N_1]$ is finite, so the ascending chain of subgroups $\{N_n\}_n$ stabilizes and there exists $n_0\in {\mathbb N}_+$ such that $H = N_{n_0} = C_G (N_{n_0}) \cong {\mathbb Z}_p$ is normal in $G$.
Let $N = N_{n_0}$. Then $G$ acts on $N$ by conjugation, and the kernel of this action is $N=C_G(N)$. Hence $G/C_G(N)=G/N$ is isomorphic to a subgroup of ${\mathrm Aut}\,(N)$ via a monomorphism $j$ as in $(\dag)$. Since $N\cong {\mathbb Z}_p$, we have ${\mathrm Aut}\,(N)\cong {\mathrm Aut}\,({\mathbb Z}_p) \cong {\mathbb Z}_p \times F_p$. This implies that $G/N$ is abelian, so $G$ is metabelian.
\end{proof}
Let us see that the assumption `compact or locally solvable' cannot be removed in Proposition \ref{prop:ms}.
Consider the countable discrete group $G = {\mathbb Z}(2) \times T$ from Example \ref{ex:tarski}. The hereditarily minimal group $G$ has a non-trivial normal abelian subgroup, yet it is not metabelian; indeed, $G$ is neither compact nor locally solvable.
In the rest of this section $G$ is a hereditarily minimal \ locally compact, solvable group, and $N$ and $j$ are as in Proposition \ref{prop:ms}.
\begin{prop}\label{prop:tri}
If $x\in G\setminus N$, then $H_x=\overline{\langle x\rangle}$ trivially meets $N$. Moreover, if $x$ is non-torsion, then $G_1=N \cdot H_x \cong N\rtimes H_x$ is isomorphic to either $M_{p,n}$ or $T_n$, for some prime $p$ and $n\in {\mathbb N}$.
\end{prop}
\begin{proof} Clearly, we can consider only the case when $x$ is non-torsion, as $N$ is torsionfree. Hence, $H_x\cong {\mathbb Z}_q$ for some prime $q$ by Lemma \ref{Prod+Steph:lemma}.
If $q \neq p$, then $H_x\cap N= \{e\}$ and $G\geq N\cdot H_x\cong N\rtimes H_x$, where the action is the conjugation in $G$. Moreover, this semidirect product is also isomorphic to ${\mathbb Z}_p\rtimes_{\alpha} {\mathbb Z}_q$ for some action $\alpha$, contradicting Lemma \ref{lem:difp}. So we deduce that $H_x\cong {\mathbb Z}_p$.
As $G$ is hereditarily minimal, so is its subgroup $C=\langle x\rangle$. In particular, $C$ is essential in its closure $H_x$. Since $N$ is normal in $G$, it follows that $H_x \cap N$ is trivial if and only if $C\cap N$ is trivial.
Assume by contradiction that $C\cap N$ is non-trivial.
Let $\varphi_x: G\to G$ be the conjugation by $x$,
and note that $\varphi _x\upharpoonright_N\neq Id_N$, as $x\notin N=C_G(N)$. Obviously, $\varphi_x\upharpoonright _C = Id_C$, so $\varphi_x\upharpoonright _{H_x} = Id_{H_x}$, and in particular $\varphi_x\upharpoonright _{N\cap H_x} = Id_{N\cap H_x}$.
As $N\cap H_x$ is a non-trivial closed subgroup of $N\cong {\mathbb Z}_p$, we have $N\cap H_x= p^k N$ for an integer $k$. Take an arbitrary element $t\in N$, so $p^kt\in N\cap H_x$ and $\varphi_x(p^kt) = p^kt$.
On the other hand, $\varphi_x(p^kt) = p^k\varphi_x(t)$,
which means $\varphi_x(t)=t$, as $N$ is torsionfree. So $\varphi_x\upharpoonright_N= Id_{N}$, a contradiction.
Thus, $G_1=N \cdot H_x \cong N\rtimes H_x \cong {\mathbb Z}_p\rtimes_{\alpha} {\mathbb Z}_p$, for a faithful action $\alpha$ by Lemma \ref{lemma:kerhm}. Finally, apply Proposition \ref{prop:alpha}.
\end{proof}
In the following result, we consider the canonical projection $q: G \to G/N$, and we study how the torsion elements of $G$ are related to those of $G/N$. To this end, recall that $G/N$ is (algebraically isomorphic to) a subgroup of ${\mathrm Aut}\,(N)\cong {\mathrm Aut}\,({\mathbb Z}_p)\cong {\mathbb Z}_p \times F_p$, so the torsion subgroup of $G/N$ is isomorphic to a subgroup of $F_p$.
\begin{prop}\label{prop:abc}
\begin{enumerate}[(a)]
\item Let $x \in G \setminus N$. Then $C = \langle x\rangle \cong \langle q(x)\rangle$, so $x$ is torsion if and only if $q(x)$ is torsion. In the latter case, $C$ is isomorphic to a subgroup of $F_p$.
\item
$G/N$ is torsionfree if and only if $G$ is torsionfree.
\item Assume $G$ has torsion, and fix a torsion element $x_0$ of $G$ of maximum order $m$. Then the subgroup $S_p= N\cdot\langle x_0\rangle$ contains all torsion elements of $G$, and indeed
$t (G) = t(S_p) = S_p \setminus N$. Furthermore, $S_p\cong K_{p,F}$, where $F$ is the subgroup of $F_p$ isomorphic to $ \langle x_0\rangle.$ \end{enumerate}
\end{prop}
\begin{proof}
(a) By Proposition \ref{prop:tri}, the map $q$ restricted to $C$ induces an isomorphism $C \cong \langle q(x)\rangle$. For the last part, use the fact that the torsion subgroup of $G/N$ is isomorphic to a subgroup of $F_p$.
(b) If $G$ has a non-trivial torsion element $g$, then $g \notin N\cong {\mathbb Z}_p$. So, $q(g)$ is a non-trivial torsion element of $G/N$.
By item (a), if $q(g)$ is a non-trivial torsion element of $G/N$, then
$g$ is a non-trivial torsion element of $G$.
(c) Using item (a) and the fact that $t(G/N)$ is cyclic we deduce
that $t(G/N)=t(S_p/N)=\langle q(x_0) \rangle$. Moreover, we have
$$t(G)=q^{-1}(t(G/N))\setminus N= q^{-1}(t(S_p/N))\setminus N=S_p\setminus N.$$
As $S_p$ is hereditarily minimal\ (being a subgroup of $G$), Lemma \ref{lemma:kerhm} implies that $S_p\cong N\rtimes_{\alpha} \langle x_0 \rangle $, for some faithful action $\alpha$. The isomorphism $S_p\cong K_{p,F}$ now follows from Lemma \ref{lem:uonique}.
\end{proof}
By Proposition \ref {prop:ms} we may identify $G/N$ with a subgroup of ${\mathbb Z}_p\times F_p$ via the monomorphism $j$. For $p>2$ the following dichotomy holds.
\begin{prop}\label{prop:dich}
If $p>2$, then either $j(G/N)\leq {\mathbb Z}_p$ or $j(G/N)\leq F_p$.
\end{prop}
\begin{proof} By contradiction, let $(x,t)\in j(G/N)$ be such that $x\neq 0_{{\mathbb Z}_p}$ and $t\neq e_{F_p}$. Let $q:G\to G/N$ be the canonical map and pick $z\in G$ such that $q(z)=(x,t).$ Then $C = \langle z\rangle$ misses $N$, as $(x,t)$ is non-torsion. Furthermore, $C_1=\overline{C}\cong {\mathbb Z}_q$ for some prime $q$. As $N_1 = N \cap C_1$ misses $ C$ and as $C$ is minimal
(since $G$ is HM), we deduce that the closed subgroup $N_1$ of $ C_1$ must be trivial by the Minimality Criterion.
Hence, the subgroup $N\rtimes C_1\cong {\mathbb Z}_p\rtimes {\mathbb Z}_q$ is hereditarily minimal. This is possible only if $q = p$ by Lemma \ref{lem:difp}. Thus, $C_1\cong {\mathbb Z}_p$, so also $H=q(C_1)$ is
(even topologically) isomorphic to ${\mathbb Z}_p$. Since ${\mathbb Z}_p$ is $(p-1)$-divisible we have $H=(p-1)H$. Now let $\pi_2:{\mathbb Z}_p\times F_p \to F_p$ be the projection on the second coordinate. We have $\pi_2(H)=\pi_2( (p-1)H)=\{e_{F_p}\}.$ On the other hand, $e_{F_p}\neq t\in \pi_2(H)$, a contradiction.
\end{proof}
\subsection{Torsionfree case}\label{Torsionfree case}
The next theorem classifies the locally compact solvable HM groups, that are also torsionfree. The non-torsionfree groups are considered in Theorem \ref{prop:nonfree}.
\begin{thm} \label{thm:free} Let $G$ be an infinite locally compact solvable torsionfree group. Then the following conditions are equivalent:
\begin{enumerate}
\item $G$ is hereditarily minimal;
\item $G$ is topologically isomorphic to one of the following groups:
\begin{enumerate} [(a)]
\item ${\mathbb Z}_p$ for some prime $p$;
\item $M_{p,n}={\mathbb Z}_p \rtimes C_p^{p^n}$, for some prime $p$ and \ $n\in {\mathbb N}$;
\item $T_n=({\mathbb Z}_2,+)\rtimes_{\beta}C_2^{2^n} $, for some $n\in {\mathbb N}$.
\end{enumerate} \end{enumerate}
\end{thm}
\begin{proof} $(2)\Rightarrow (1):$ If $G\cong {\mathbb Z}_p$, then it is hereditarily minimal\ by Prodanov's theorem. If $G\cong M_{p,n}$, then it is hereditarily minimal\ by Example \ref{padics:rtimes:padics}(a).
If $G\cong T_n$ for some $n$, then $G$ is hereditarily minimal\ by Example \ref{ex:Tn}(b).
$(1)\Rightarrow (2)$: If $G$ is abelian, then $G\cong {\mathbb Z}_p$ for some prime $p$, by Prodanov's theorem.
Assume from now on that $G$ is non-abelian, so $G \ne N$ and let $x_1\in G \setminus N$. By Proposition \ref{prop:tri},
$N$ trivially meets $\overline{\langle x_1\rangle}$, and the subgroup $G_1 = N\cdot \overline{\langle x_1\rangle}$ is isomorphic to ${\mathbb Z}_p \rtimes_\alpha {\mathbb Z}_p$,
for some faithful action $\alpha.$ Clearly, $G' \leq N\leq G_1$, so $G_1$ is normal.
Moreover, $G_1/N$ is topologically isomorphic to $\overline{\langle x_1\rangle}$, hence to ${\mathbb Z}_p$ by Lemma \ref{Prod+Steph:lemma}.
Now we claim that there is a monomorphism $f:G/N \to {\mathbb Z}_p$. Since $G$ is torsionfree, $G/N$ is also torsionfree by Proposition \ref{prop:abc}. Moreover, according to Proposition \ref{prop:ms}, there exists a group monomorphism $j:G/N \to {\mathbb Z}_p\times F_p$, so also $W=j(G/N)$ is torsionfree. If $p> 2$, then $W\leq{\mathbb Z}_p$ by Proposition \ref{prop:dich}, so we simply take $f=j$. If $p=2$, the subgroup $2W$ of $W$ is isomorphic to a subgroup of ${\mathbb Z}_2\times \{e_{F_2}\}$. We define $\phi :{\mathbb Z}_2\times F_2\to {\mathbb Z}_2$ by $\phi(x)= 2x$. Then $\phi\upharpoonright_W$ is an isomorphism between $W$ and $2W$ since $W$ is torsionfree. Hence, $f=\phi\circ j$ is a monomorphism $G/N \to {\mathbb Z}_2$.
Equipping the codomain of $f$ with the $p$-adic topology, we now prove that $f$ is a topological isomorphism onto its range.
As $G_1/N$ is topologically isomorphic to ${\mathbb Z}_p$, the restriction $f\upharpoonright_{G_1/N }:G_1/N \to {\mathbb Z}_p$ is continuous.
In particular, $f(G_1/N)\cong G_1/N$ is a non-trivial compact subgroup of ${\mathbb Z}_p$. Hence, $f(G_1/N)$ is open and $G_1/N \cong f(G_1/N) = p^k{\mathbb Z}_p$ for some $k\in {\mathbb N}$. We deduce that $f(G/N)$ is also open, so $f(G/N) = p^t{\mathbb Z}_p$ for some $t\in {\mathbb N}$. Since $N$ is a common closed normal subgroup of both $G$ and $G_1$, we get
\[
(G/N) \big/ (G_1/N) \cong f(G/N)\big/f(G_1/N)\cong p^t{\mathbb Z}_p\big/p^k{\mathbb Z}_p\cong {\mathbb Z}(p^{k-t}).
\]
As $G_1/N$ is open in $G/N$ which is compact, we obtain that $f:G/N\to f(G/N)$ is a topological isomorphism. In particular, $G/N$ is topologically isomorphic to ${\mathbb Z}_p$.
For every $x \in G\setminus N$, let
$G_x$ be the closed subgroup of $G$ generated by $N$ and $x$. Consider the families $\mathcal F_1=\{G_x\}_{x\notin N}$ and $\mathcal F_2=\{q(G_x)\}_{x\notin N}$,
where $q: G \to G/N$ is the canonical map.
As $G/N\cong {\mathbb Z}_p$, the family $\mathcal F_2$ is totally ordered by inclusion and every member of $\mathcal F_2$ has finitely many successors in $\mathcal F_2$.
Since all the subgroups $G_x$ contain $N$, it follows that $\mathcal F_1$ is also totally ordered and has the same property.
In particular, the members in $\mathcal F_1$ that contain $G_1$ are finitely many, so there is a maximal member among them
and it coincides with $G$ as $\displaystyle \bigcup_{x\notin N} G_x=G$. Finally, apply Proposition \ref{prop:tri}.
\end{proof}
\subsection{Non-torsionfree case}\label{Non-torsionfree case}
Given an action $\alpha:({\mathbb Z}_2,+)\times ({\mathbb Z}_2,+)\to ({\mathbb Z}_2,+)$ and $(b,c)\in {\mathbb Z}_2\times {\mathbb Z}_2$ we write shortly $b(c)$ in place of $\alpha(b,c)$.
A map ${\varepsilon}:({\mathbb Z}_2,+)\to ({\mathbb Z}_2,+)$ is called a {\em crossed homomorphism}, if
\begin{equation*}
{\varepsilon}(b_1+b_2)={\varepsilon}(b_1)+b_1({\varepsilon}(b_2)) \quad \text{for all } b_1, b_2\in {\mathbb Z}_2.
\end{equation*}
The following technical result is used in the proof of Theorem \ref{prop:nonfree}, in the special case of $p=2$.
\begin{prop}\label{prop:sknmin} Let $\alpha:({\mathbb Z}_2,+)\times ({\mathbb Z}_2,+)\to ({\mathbb Z}_2,+)$ be a continuous faithful action by automorphisms and let $f$ be
an automorphism of $M_{2,\alpha}= ({\mathbb Z}_2,+)\rtimes_{\alpha} ({\mathbb Z}_2, +)$ of order $2$. If the automorphism $\bar f: M_{2,\alpha}/M_{2,\alpha}' \to M_{2,\alpha}/M_{2,\alpha}'$ induced by $f$ is the identity, then the group $G=(({\mathbb Z}_2,+)\rtimes_{\alpha} ({\mathbb Z}_2, +))\rtimes F$, where $F=\{f,Id\}$, is not hereditarily minimal.
\end{prop}
\begin{proof}
Since the group $F$ is finite we deduce that the topological semidirect product $G=M_{2,\alpha}\rtimes F$ is well defined. According to Theorem B, it suffices to show that $G$ does not satisfy ($\mathcal{C}_{fn}$). Consider the element $g_0:= (e,f)$ of $G$, note that it has order $2$ in $G$, and let $P=\langle g_0\rangle$. It is sufficient to find an infinite compact abelian subgroup $H$ of $G$ containing $P$. To this end, we shall provide a non-torsion element $g \in G$, commuting with $g_0$, and let $H=\overline{\langle g, g_0\rangle}$.
In view of the semidirect product structure of $G$ and since $M_{2,\alpha}$ is torsionfree, this amounts to finding a fixed point of $f$, i.e., an element $e\ne x\in M_{2,\alpha}$ with $f(x) = x$,
since in this case $g= (x,f) \in G $ works, as $g$ commutes with $g_0$.
First of all, we need to find an explicit form of $f$. For the sake of brevity put $N = {\mathbb Z}_2 \times \{0\}$ and $T = \{0\}\times {\mathbb Z}_2$. Our hypothesis that $\bar f$ is the identity allows us to claim that also the automorphism $T\to T$, induced by $\bar f$, of the quotient $T\cong M_{2,\alpha}/N$ is the identity, so
we can write
$f(0,b)=({\varepsilon}(b),b)$ for every $b\in {\mathbb Z}_2$, where ${\varepsilon} : {\mathbb Z}_2\to {\mathbb Z}_2$ is a continuous crossed homomorphism, as $f$ is a continuous homomorphism.
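Indeed, writing $b(c)$ for $\alpha(b,c)$ as above, a comparison of
\[
f\big((0,b_1)(0,b_2)\big)=f(0,b_1+b_2)=\big({\varepsilon}(b_1+b_2),\,b_1+b_2\big)
\quad\text{with}\quad
f(0,b_1)f(0,b_2)=\big({\varepsilon}(b_1)+b_1({\varepsilon}(b_2)),\,b_1+b_2\big)
\]
yields exactly the crossed homomorphism identity for ${\varepsilon}$.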
Moreover, since $f$ is an automorphism of $M_{2,\alpha}$ and $N\geq M_{2,\alpha}'$ it follows that $f\restriction_{N}$ is an automorphism of $N$.
In fact, as $N\cong {\mathbb Z}_2$, $f\restriction_{N}$ is the multiplication by some $m\in {\mathbb Z}_2^*$, and $m\in F_2=\{\pm 1\}$ as $o(f)=2$.
Hence, we can write $f$ as follows:
\begin{equation}\label{888}f(a,b)=f (a,0)f(0,b)=(ma,0)({\varepsilon}(b),b)=(ma+{\varepsilon}(b),b).
\end{equation}
Assume that $m= 1$. Then (\ref{888}) and $f^2 = Id$ give $2 {\varepsilon}(b)= 0$, hence ${\varepsilon}(b)= 0$, for every $b \in {\mathbb Z}_2$.
Take any $b\ne 0$ in ${\mathbb Z}_2$ and put $x =(0,b)$. Then obviously $f(x) = x$ and we are done.
Now consider the case $m=-1$. Take any $b\neq0$ in ${\mathbb Z}_2$, so $2b\neq 0$. Recall that ${\mathrm Aut}\, ({\mathbb Z}_2) \cong {\mathbb Z}_2^* = {\mathbb Z}_2 \setminus 2 {\mathbb Z}_2 = 1 + 2{\mathbb Z}_2$, so the action $\alpha$ gives $m_b\in {\mathbb Z}_2^* = 1 + 2{\mathbb Z}_2$ such that $b({\varepsilon}(b))=m_b{\varepsilon}(b)$. Hence,
\[
{\varepsilon}(2b) = {\varepsilon}(b) + m_b {\varepsilon}(b) = (1+m_b){\varepsilon}(b)\in 2{\mathbb Z}_2,
\]
which means there exists $s\in {\mathbb Z}_2$ such that ${\varepsilon}(2b)=2s$. Consider the element $x = (s,2b) \in M_{2,\alpha}$, and observe that it is not torsion as
$2b\neq 0$. Finally, $x$ is a fixed point of $f$. Indeed, $f(x) = ( -s+{\varepsilon}(2b), 2b) = (-s + 2s, 2b) = (s , 2b) = x$.
\end{proof}
Here follows the counterpart of Theorem \ref{thm:free} for non-torsionfree groups, which concludes the classification of the locally compact solvable HM groups given in Theorem D.
\begin{thm}\label{prop:nonfree}
Let $G$ be an infinite locally compact solvable non-torsionfree group. Then the following conditions are equivalent:
\begin{enumerate} [(a)]
\item $G$ is hereditarily minimal;
\item $G$ is topologically isomorphic to $K_{p,F}$ for some prime $p$ and a non-trivial subgroup $F$ of $F_p$.
\end{enumerate}
\end{thm}
\begin{proof}
$(b)\Rightarrow (a)$: If $G\cong K_{p,F}$, then $G$ is hereditarily minimal\ by Example \ref{ex:padic:rtimes:F}.
$(a)\Rightarrow (b)$: In the notation of Proposition \ref{prop:abc}, let $S_p=N\cdot \langle x_0\rangle$, where $x_0$ is a torsion element of $G$ of maximal order. Moreover, $t(G)\subseteq S_p$ and $S_p\cong K_{p,F}$ for some non-trivial $F\leq F_p$. So, it suffices to show that $G=S_p$.
By contradiction, pick $x\in G\setminus S_p$ and observe that $x\notin N$ and $x$ is non-torsion. The subgroup $T=\overline{\langle x \rangle}$ is isomorphic to ${\mathbb Z}_q$ for some prime $q$ by Prodanov's theorem. Moreover,
\begin{equation}\label{999}
T\cap N=\{e\},
\end{equation}
by Proposition \ref{prop:tri}. If $p>2$, then $G/N$ is isomorphic to a subgroup of $F_p$ by Proposition \ref{prop:dich} and Proposition \ref{prop:abc}(b). Hence $[G:N]<\infty$, and this contradicts (\ref{999}), as $T$ is infinite.
Now assume that $p=2$. In view of the normality of $N\cong {\mathbb Z}_p$ in $G$ and (\ref{999}), we can apply Lemma \ref{lem:difp} to deduce that $q=2$ as well. So
$L=NT$ is isomorphic to $N\rtimes T\cong M_{2,\alpha}$, where $\alpha$ is a faithful action, by Lemma \ref{lemma:kerhm}. In the sequel we identify $L = NT\cong N\rtimes T$ and $M_{2,\alpha}$. Since $G'\leq N \leq L$ by Proposition \ref{prop:ms}, it follows that $L$ is normal in $G$.
As $x_0$ has order two and $L$ is torsionfree, we obtain $$G\geq L\rtimes \langle x_0 \rangle \cong ({\mathbb Z}_2\rtimes_{\alpha} {\mathbb Z}_2)\rtimes_{\beta}F_2,$$ where $\beta$ is a faithful action. Indeed, if $\ker\beta$ is non-trivial, then $\ker\beta=F_2$ and we deduce that
$$
({\mathbb Z}_2\rtimes_{\alpha} {\mathbb Z}_2)\rtimes_{\beta}F_2\cong ({\mathbb Z}_2\rtimes_{\alpha} {\mathbb Z}_2)\times F_2,
$$ contradicting the fact that $G$ is hereditarily minimal.
The action $\beta$ coincides with the action $\phi:L\to L$ by conjugation by $x_0$, and $o(\phi)=2$ as $o(x_0)=2$ and $x_0\notin C_G(N)=N\leq L$. Clearly $\phi(L')=L'$, as $L'$ is a normal subgroup of $G$, so $\phi$ induces an automorphism $\bar \phi: L/L' \to L/L'$. Since $\bar \phi $ is still an inner automorphism of $L/L'$ and this quotient is abelian, we deduce that $\bar \phi = Id_{L/L'}$. Now we can apply Proposition \ref{prop:sknmin} to $f = \phi$ to deduce that $({\mathbb Z}_2\rtimes_{\alpha} {\mathbb Z}_2)\rtimes F_2$ is not hereditarily minimal, contradicting the fact that $G$ is hereditarily minimal.
\end{proof}
\begin{corol}
Let $G$ be an infinite hereditarily minimal locally compact group, which is either compact or locally solvable.
Then $G$ contains a subgroup $K \cong {\mathbb Z}_p$ for some prime $p$, such that its normalizer $N_G(K)$ is isomorphic to one of the groups in $(a)$, $(b)$ or $(c)$, in Theorem D.
\end{corol}
\begin{proof}
As $G$ is non-discrete by Lemma \ref{lem:tnlf}, the existence of $K$ is guaranteed by Proposition \ref{prop:ndhmlc}(2). The subgroup $N = N_G(K)$ is closed in $G$ and contains $K$, so $N$ satisfies all the properties of $G$ listed in the statement. As $K$ is normal in $N$, we can apply Proposition \ref{prop:ms} to conclude that $N$ is metabelian. Finally, Theorem D applies to $N$.
\end{proof}
\subsection{Hereditarily totally minimal topological groups}\label{Hereditarily totally minimal topological groups}
In this subsection we classify the infinite locally compact solvable hereditarily totally minimal groups. We also provide an equivalent condition for a non-discrete locally compact group with non-trivial center to be HTM.
\begin{defi}\label{def:htm}
Call a topological group $G$ hereditarily totally minimal, if every subgroup of $G$ is totally minimal.
\end{defi}
The following concept has a key role in the Total Minimality Criterion (see Fact \ref{fac:TMC}).
\begin{defi}
A subgroup $H$ of a topological group $G$ is totally dense if for every closed normal subgroup $N$ of $G$ the intersection $N\cap H$ is dense in $N$.
\end{defi}
\begin{fact} \label{fac:TMC}
\cite[Total Minimality Criterion]{DP} Let $H$ be a dense subgroup of a topological group $G$. Then $H$ is totally minimal if and only if $G$ is totally minimal and $H$ is totally dense in $G$.
\end{fact}
As hereditarily totally minimal groups are HM, the groups $K_{p,F}, M_{p,n}$ and $T_n$ are the only locally compact solvable groups we need to consider, according to Theorem D. In the next proposition we prove that the groups $K_{p,F}$ are hereditarily totally minimal.
\begin{prop}\label{prop:kpfhtm}
Let $p$ be a prime and $F\leq F_p$. Then $K_{p,F}$ is hereditarily totally minimal.
\end{prop}
\begin{proof}
Let $H$ be an infinite subgroup of $K_{p,F}$; we have to prove that $H$ is totally minimal. As $K_{p,F}$ is compact, it suffices to show that $H$ is totally dense in $\overline{H}$, by Fact \ref{fac:TMC}. To this aim, let $N$ be a non-trivial closed normal subgroup of $\overline{H}$. As $\overline{H}$ is an infinite compact HM group, its subgroup $N$ is infinite by Proposition \ref{prop:ndhmlc}, so it intersects non-trivially the subgroup ${\mathbb Z}_p\rtimes \{1\}$. Moreover, there exists $n\in {\mathbb N}$ such that $N\geq N\cap ({\mathbb Z}_p\rtimes \{1\})= p^n{\mathbb Z}_p\rtimes \{1\}$. This implies that $N$ is open in $K_{p,F}$, so in $\overline{H}$, hence $\overline{H\cap N}=N$.
\end{proof}
The following Lemma will be used in the proof of Proposition \ref{prop:mpntn}.
\begin{lemma}\label{lemma:strong}
If $G$ is a hereditarily totally minimal group, then all quotients of $G$ are hereditarily totally minimal.
\end{lemma}
\begin{proof}
Let $N$ be a closed normal subgroup of $G$ and let $q:G \to G/N$ be the quotient map. Take a subgroup $D$
of $G/N$ and let $D_1 = q^{-1}(D)$. We prove that $D$ is totally minimal. Consider the restriction $q': D_1 \to D$; clearly, $q'$ is a continuous surjective homomorphism. Since $D_1$ is totally minimal by our hypothesis on $G$, we obtain that $q'$ is open. Hence $D$, being a quotient group of the totally minimal group $D_1$, is itself totally minimal.
\end{proof}
Now we show that the HM groups $M_{p,n}$ and $T_n$ are not hereditarily totally minimal.
\begin{prop}\label{prop:mpntn}
Let $p$ be a prime and $n\in {\mathbb N}$, then:
\begin{enumerate}
\item $M_{p,n}$ is not hereditarily totally minimal;
\item $T_n$ is not hereditarily totally minimal.
\end{enumerate}
\end{prop}
\begin{proof}
(1): Assume for a contradiction that $M_{p,n}$ is hereditarily totally minimal for some prime $p$ and $n\in {\mathbb N}$. Then the quotient group $M_{p,n} / M_{p,n}'$ is hereditarily totally minimal by Lemma \ref{lemma:strong}.
If $p>2$, then $M_{p,n} / M_{p,n}'\cong {\mathbb Z}(p^{n+1})\times C_p^{p^n}$ by Lemma \ref{lemma:speder}, but this group is not even hereditarily minimal\ by Fact \ref{TeoP}. In case $p=2$, as $M_{2,n} / M_{2,n}'\cong {\mathbb Z}(2^{n+2})\times C_2^{2^n}$, we get a similar contradiction.
(2): By Lemma \ref{commTn} the quotient group $T_n / T_n'$ of $T_n$ is isomorphic to ${\mathbb Z}(2) \times C_2^{2^n}$, which is not hereditarily totally minimal. Alternatively, one can note that $T_n$ contains $M_{2,n+1}$, which is not hereditarily totally minimal by (1).
\end{proof}
\begin{thm}\label{thm:htmkpf}
Let $G$ be an infinite locally compact solvable group. Then the following conditions are equivalent:
\begin{enumerate}
\item $G$ is hereditarily totally minimal;
\item $G$ is topologically isomorphic to $K_{p,F} = {\mathbb Z}_p \rtimes F$, where $F\leq F_p$ for some prime $p$.\end{enumerate}
\end{thm}
\begin{proof}
$(1)\Rightarrow(2)$: Clearly, a hereditarily totally minimal group is hereditarily minimal. So, by Theorem D, $G$ is topologically isomorphic to one of the three types of groups: $K_{p,F}$, $M_{p,n}$ or $T_n$. In addition, neither $M_{p,n}$ nor $T_n$ are hereditarily totally minimal by Proposition \ref{prop:mpntn}.
Hence, $G$ is topologically isomorphic to $K_{p,F}$ for some prime $p$ and $F\leq F_p$.\\
$(2)\Rightarrow(1)$: Use Proposition \ref{prop:kpfhtm}.
\end{proof}
The next fact was originally proved in \cite{EDS}, and we use it in the subsequent Theorem \ref{thm:htmfi}.
\begin{fact}\cite[Theorem 7.3.1]{DPS}\label{fac:EDS}
If a topological group $G$ contains a compact normal subgroup $N$ such that $G/N$ is (totally) minimal, then $G$ is (resp., totally) minimal.
\end{fact}
The groups $K_{p,F}$, where $F$ is a non-trivial subgroup of $F_p$, are center-free. The next theorem deals with the case of non-trivial center.
\begin{thm}\label{thm:htmfi}
Let $G$ be a locally compact non-discrete group with non-trivial center. The following conditions are equivalent:
\begin{enumerate} \item $G$ is hereditarily totally minimal;
\item every non-trivial closed subgroup of $G$ is open, $Z(G)\cong {\mathbb Z}_p$ for some prime $p$ and $G/Z(G)$ is hereditarily totally minimal.\end{enumerate}
\end{thm}
\begin{proof}
By Fact \ref{DS-theorem}, the assertion holds true in the abelian case, so we may assume that $Z(G)\ne G$.
$(1)\Rightarrow (2)$: If $G$ is hereditarily totally minimal, then $G/Z(G)$ is hereditarily totally minimal, by Lemma \ref{lemma:strong}.
Applying Theorem \ref{add:prop:thm} we conclude that every closed non-trivial subgroup of $G$ is open and $Z(G)\cong {\mathbb Z}_p$ for some prime $p$.
$(2)\Rightarrow (1)$: Note that $G/Z(G)$ is a discrete hereditarily totally minimal group.
We first prove that if $H$ is a closed subgroup of $G$, then $H$ is totally minimal. As $Z(G)\cong {\mathbb Z}_p$, the subgroup $H\cap Z(G)$ is compact and normal in $H$. We also have $H/(H\cap Z(G)) \cong HZ(G)/Z(G)\leq G/Z(G)$, so $H/(H\cap Z(G))$ is totally minimal since $G/Z(G)$ is hereditarily totally minimal. By Fact \ref{fac:EDS}, $H$ is totally minimal.
Now let $H$ be a subgroup of $G$. By Fact \ref{fac:TMC}, it remains to show that $H$ is totally dense in $\overline{H}$. To this aim, take a non-trivial closed normal subgroup $N$ of $\overline{H}$. Clearly, $N$ is also closed in $G$ and by our assumption it is open in $G$. In particular, $N$ is open in $\overline{H}$ so
$\overline{H\cap N}=N$.
\end{proof}
Now we recall a special case of a theorem known as the {\em Countable Layer Theorem}.
\begin{thm}\label{coro1}
\cite[Theorem 9.91]{HM}
Any profinite group $G$ has a canonical countable descending sequence
\begin{equation}\label{eq:CLT}
G = \Omega_0(G)\supseteq \Omega_1(G) \supseteq \ldots \supseteq \Omega_n(G) \supseteq \ldots
\end{equation}
of closed characteristic subgroups of $G$ with the following two properties:
\begin{enumerate}
\item $\bigcap_{n=1}^{\infty} \Omega_n(G) = \{e\}$;
\item for each $n \in {\mathbb N}_+$, the quotient $\Omega_{n-1}(G)/\Omega_n(G)$ is isomorphic to a cartesian product of simple finite groups. \end{enumerate}
\end{thm}
\begin{thm}\label{neww}
Every locally compact hereditarily totally minimal group is metrizable.
\end{thm}
\begin{proof} Let $G$ be a locally compact HTM group. By Proposition \ref{prop:ndhmlc}(1), $G$ is totally disconnected. Let $K$ be a
compact open subgroup of $G$. It is enough to show that $K$ is metrizable.
According to the Countable Layer Theorem, one has a decreasing chain of closed normal subgroups (\ref{eq:CLT})
with $\bigcap_{n=1}^{\infty} \Omega_n(K) = \{e\}$.
Since $K$ is HTM (being a subgroup of $G$), Lemma \ref{lemma:strong} implies that the quotient group $K/\Omega_1(K)$ is HM. Since this quotient is a cartesian product of finite simple groups, this is possible only if $K/\Omega_1(K)$ is finite. Similarly, each quotient group $\Omega_n(K)/ \Omega_{n+1}(K)$ is finite.
Hence, also each quotient group $K/\Omega_{n}(K)$ is finite. This implies that each subgroup $\Omega_n(K)$ is open in $K$. Now $\bigcap_{n=1}^{\infty} \Omega_n(K) = \{e\}$ and the compactness of $K$ imply that the subgroups $\{\Omega_n(K): n\in {\mathbb N}\}$ form a local base at $e$. Therefore, $K$ is metrizable.
\end{proof}
\section{Open questions and concluding remarks}\label{Open questions and concluding remarks}
Recall that a group is hereditarily non-topologizable when it is hereditarily totally minimal in the discrete topology.
\begin{question}\label{CC:groups}
Are discrete hereditarily minimal\ groups also hereditarily non-topologizable?
\end{question}
In case of an affirmative answer, one can deduce from the results quoted in \S \ref{discreteHM} that the class of countable discrete hereditarily minimal\ groups
is stable under taking finite products. On the other hand, if $G$ and $H$ are discrete hereditarily minimal\ groups such that their product is not hereditarily minimal, then
one of these groups is not hereditarily non-topologizable, and so provides a counter-example to Question \ref{CC:groups}.
\smallskip
In view of Theorem \ref{add:prop:thm}, the following natural question arises:
\begin{quest}\label{que:exist}
Does there exist a non-discrete hereditarily minimal locally compact group $G$ with $\{e\} \neq Z(G) \neq G$?
\end{quest}
By Theorem \ref{add:prop:thm}, a positive answer to Question \ref{que:exist} provides a group $G$ such that the center is open, and has infinite index in $G$, so $G$ is not compact (this follows also from Theorem C). So by Corollary \ref{cor:spr}, Question \ref{que:exist} has this equivalent formulation: does there exist a hereditarily minimal\ locally compact group $G$ which is neither discrete nor compact, and has non-trivial center?
In other words:
\begin{question}\label{QQ:DC}
\begin{enumerate}
\item
Does there exist a hereditarily minimal\ locally compact group which is neither discrete nor compact?
\item Can such a group have non-trivial center?
\end{enumerate}
\end{question}
As already noted, a positive answer to Question \ref{que:exist} gives a positive answer to both items in Question \ref{QQ:DC}. On the other hand, a group $G$ providing a positive answer to Question \ref{QQ:DC}, if not center-free, gives also a positive answer to Question \ref{que:exist}.
\medskip
One can focus the search for an answer to Questions \ref{que:exist} and \ref{QQ:DC} by using Proposition \ref{prop:cand} below.
Theorem \ref{add:prop:thm} provides some necessary conditions for a non-discrete locally compact group $G$ satisfying $\{e\} \neq Z(G) \neq G$ to be also hereditarily minimal. Among others: being torsionfree, having the center open and isomorphic to ${\mathbb Z}_p$, having the quotient $G/Z(G)$ a (discrete) $p$-group. This justifies the hypotheses in the following proposition, which proves a partial converse to Theorem \ref{add:prop:thm}.
\begin{prop}\label{prop:cand}
Let $p$ be a prime number, and $G$ be a torsionfree group such that $Z(G)\cong {\mathbb Z}_p$ is open in $G$. If $G/Z(G)$ is hereditarily minimal, then $G$ is hereditarily minimal.
\end{prop}
\begin{proof}
Note that $G/Z(G)$ is a discrete hereditarily minimal\ group by our assumption.
By Fact \ref{fac:EDS} and using similar arguments as in the proof of Theorem \ref{thm:htmfi} (see the implication $(2)\Rightarrow (1)$), one can prove that every closed subgroup of $G$ is minimal.
Now let $H$ be a subgroup of $G$, and we show that $H$ is essential in $\overline{H}.$ Since $G$ is torsionfree and $G/Z(G)$ is torsion by Lemma \ref{lem:tnlf}, every non-trivial subgroup of $G$ meets the center non-trivially. Let $N$ be a closed normal non-trivial subgroup of $\overline H$. So, $H\cap Z(G)$ and $N\cap Z(G)$ are non-trivial subgroups of $Z(G)$ and $N\cap Z(G)$ is also closed. As $Z(G)\cong {\mathbb Z}_p$, Lemma \ref{lem:cyc} implies that $N\cap H\cap Z(G)$ is non-trivial. In particular, $N\cap H$ is non-trivial.
\end{proof}
If $G$ is a torsionfree group such that $Z(G)\cong {\mathbb Z}_p$ and $G/Z(G)$ is an infinite discrete hereditarily minimal\ $p$-group, then $G$ is hereditarily minimal\ by Proposition \ref{prop:cand}, and clearly $G$ is neither discrete nor compact. So such a group provides a positive answer to Question \ref{que:exist} (so in particular also to Question \ref{QQ:DC}).
We conjecture that such a group exists on the basis of a remarkable example of a locally compact group $M$ built in \cite[Theorem 4.5]{HMOWO}, having
the following properties:
\begin{enumerate}
\item $Z(M)$ is open and $Z(M) \cong {\mathbb Z}_p$;
\item $M/Z(M)$ is a Tarski monster of exponent $p$ having $p$ conjugacy classes;
\item every element of $M$ is contained in a subgroup, which contains $Z(M)$ and is isomorphic to ${\mathbb Z}_p$ (in particular, $M$ is torsionfree);
\item all normal subgroups of $M$ are central;
\item for every proper closed subgroup $H$ of $M$, $H \cong {\mathbb Z}_p$ and either $H \leq Z(M)$ or $Z(M) \leq H$.\end{enumerate}
In particular, we see that $M$ has all of the properties listed in Theorem \ref{add:prop:thm}, but it is not clear if such an $M$ must be
HM. This will be ensured by Proposition \ref{prop:cand} if one can ensure the Tarski monster $M/Z(M)$ to be HM (i.e., hereditarily
non-topologizable). Tarski monsters $T$ with this property were built in \cite{KOO}, so it remains to check if for such a $T$ one can
build an extension $M$ as above.
\smallskip
By Proposition \ref{prop:ms}, if $G$ is an infinite locally compact HM group, which is either compact or locally solvable, with a non-trivial normal solvable subgroup, then $G$ is metabelian. So in particular $G$ is solvable, and Theorem D applies.
\begin{question}
Can Theorem D be extended to locally solvable groups?
\end{question}
Fact \ref{DS-theorem} shows that the abelian HM groups are second countable, while Theorem D shows that the conclusion remains true if ``abelian'' is replaced by ``solvable and locally compact''.
On the other hand, uncountable discrete HTM groups were built in \cite{KOO}, showing that a locally compact HTM non-solvable group need not be second countable even in the discrete case. Yet this leaves open the following:
\begin{question}
Are locally compact $\operatorname{HM}$ groups metrizable?
\end{question}
Theorem \ref{neww} shows that the answer is affirmative for locally compact HTM groups. On the other hand, Fact \ref{fac:ext} suggests that the answer can be affirmative even for locally compact HLM groups.
Proposition \ref{prop:mpntn} provides examples of locally compact hereditarily minimal \ groups with trivial center that are not HTM.
We are not aware if a locally compact hereditarily minimal \ group with non-trivial center can be non-HTM.
\subsection*{Acknowledgments}
The first-named author takes this opportunity to thank Professor Dikranjan for his generous hospitality and support.
The second-named author is partially supported by grant PSD-2015-2017-DIMA-PRID-2017-DIKRANJAN PSD-2015-2017-DIMA - progetto PRID TokaDyMA
of Udine University. The third and fourth-named authors are supported by Programma SIR 2014 by MIUR, project GADYGR, number RBSI14V2LI, cup G22I15000160008 and by INdAM - Istituto Nazionale di Alta Matematica.
| {
"timestamp": "2018-03-22T01:12:22",
"yymm": "1803",
"arxiv_id": "1803.08033",
"language": "en",
"url": "https://arxiv.org/abs/1803.08033",
"abstract": "We study locally compact groups having all subgroups minimal. We call such groups hereditarily minimal. In 1972 Prodanov proved that the infinite hereditarily minimal compact abelian groups are precisely the groups $\\mathbb Z_p$ of $p$-adic integers. We extend Prodanov's theorem to the non-abelian case at several levels. For infinite hypercentral (in particular, nilpotent) locally compact groups we show that the hereditarily minimal ones remain the same as in the abelian case. On the other hand, we classify completely the locally compact solvable hereditarily minimal groups, showing that in particular they are always compact and metabelian.The proofs involve the (hereditarily) locally minimal groups, introduced similarly. In particular, we prove a conjecture by He, Xiao and the first two authors, showing that the group $\\mathbb Q_p\\rtimes \\mathbb Q_p^*$ is hereditarily locally minimal, where $\\mathbb Q_p^*$ is the multiplicative group of non-zero $p$-adic numbers acting on the first component by multiplication. Furthermore, it turns out that the locally compact solvable hereditarily minimal groups are closely related to this group.",
"subjects": "General Topology (math.GN); Group Theory (math.GR)",
"title": "Hereditarily minimal topological groups",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109480714625,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7076796190915576
} |
https://arxiv.org/abs/1912.00620 | Pfaffian Pairs and Parities: Counting on Linear Matroid Intersection and Parity Problems | Spanning trees are a representative example of linear matroid bases that are efficiently countable. Perfect matchings of Pfaffian bipartite graphs are a countable example of common bases of two matrices. Generalizing these two examples, Webb (2004) introduced the notion of Pfaffian pairs as a pair of matrices for which counting of their common bases is tractable via the Cauchy-Binet formula.This paper studies counting on linear matroid problems extending Webb's work. We first introduce "Pfaffian parities" as an extension of Pfaffian pairs to the linear matroid parity problem, which is a common generalization of the linear matroid intersection problem and the matching problem. We enumerate combinatorial examples of Pfaffian pairs and parities. The variety of the examples illustrates that Pfaffian pairs and parities serve as a unified framework of efficiently countable discrete structures. Based on this framework, we derive celebrated counting theorems, such as Kirchhoff's matrix-tree theorem, Tutte's directed matrix-tree theorem, the Pfaffian matrix-tree theorem, and the Lindström-Gessel-Viennot lemma.Our study then turns to algorithmic aspects. We observe that the fastest randomized algorithms for the linear matroid intersection and parity problems by Harvey (2009) and Cheung-Lau-Leung (2014) can be derandomized for Pfaffian pairs and parities. We further present polynomial-time algorithms to count the number of minimum-weight solutions on weighted Pfaffian pairs and parities. Our algorithms make use of Frank's weight splitting lemma for the weighted matroid intersection problem and the algebraic optimality criterion of the weighted linear matroid parity problem given by Iwata-Kobayashi (2017). | \section{Introduction}
Let $A$ be a totally unimodular matrix of full row rank; that is, any minor of $A$ is $0$ or $\pm 1$.
The (generalized) \emph{matrix-tree theorem}~\cite{Maurer1976} claims that the number of column bases of $A$ is equal to $\det A \trsp{A}$.
This can be observed by setting $A_1 = A_2 = A$ in the \emph{Cauchy--Binet formula}
\begin{align} \label{eq:cauchy_binet}
\det A_1\trsp{A_2} = \sum_{J \subseteq E : \card{J} = r} \det A_1[J] \det A_2[J],
\end{align}
where $A_1, A_2$ are matrices of size $r \times n$ with common column set $E$ and $A_k[J]$ denotes the submatrix of $A_k$ indexed by columns $J \subseteq E$ for $k = 1, 2$.
In the case where $A$ comes from the incidence matrix of an undirected graph, the formula~\eqref{eq:cauchy_binet} provides the celebrated matrix-tree theorem due to Kirchhoff~\cite{Kirchhoff1847} for counting spanning trees.
From a matroidal point of view, the matrix-tree theorem is regarded as a theorem for counting bases of regular matroids, which are a subclass of linear matroids represented by totally unimodular matrices.
Regular matroids are recognized as the largest class of matroids for which base counting is exactly tractable.
For general matroids (even for binary or transversal matroids), base counting is \textsf{\#P-complete}~\cite{Colbourn1995,Snook2012} and hence approximation algorithms have been well-studied~\cite{Anari2018,Anari2019}.
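To illustrate~\eqref{eq:cauchy_binet} on a toy instance, take for example
\begin{align*}
A_1 = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix},
\qquad
A_2 = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix},
\end{align*}
with $r = 2$ and $E = \set{1, 2, 3}$.
The maximal minors of $A_1$ for $J = \set{1,2}, \set{1,3}, \set{2,3}$ are $1, 1, -1$ and those of $A_2$ are $1, 1, 1$, so the right-hand side of~\eqref{eq:cauchy_binet} is $1 + 1 - 1 = 1$, which indeed equals $\det A_1 \trsp{A_2} = \det \begin{psmallmatrix} 1 & 1 \\ 1 & 2 \end{psmallmatrix} = 1$.
Note the cancellation between the terms for $J = \set{1,3}$ and $J = \set{2,3}$; ruling out such cancellations is exactly the point of the Pfaffian pairs introduced below.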
Another example of a polynomial-time countable object is perfect matchings of graphs with \emph{Pfaffian orientation}~\cite{Kasteleyn1961}.
The \emph{Pfaffian} is a polynomial of matrix entries defined for a skew-symmetric matrix $S$ of even order.
If $S$ is the Tutte matrix of a graph $G$, its Pfaffian is a sum over all perfect matchings of $G$ in which each matching carries an associated sign.
Suppose that the edges of $G$ are oriented so that, when each variable in the Tutte matrix is set to $+1$ or $-1$ according to the edge direction, all terms in the Pfaffian become $+1$.
This means that there are no cancellations in the Pfaffian, and thus it coincides with the number of perfect matchings of $G$.
Such an orientation is called \emph{Pfaffian} and a graph that admits a Pfaffian orientation is also called \emph{Pfaffian}.
If $G$ is bipartite, we can consider the determinant of the Edmonds matrix instead of the Pfaffian of the Tutte matrix.
Whereas counting of perfect matchings is \textsf{\#P-complete} even for bipartite graphs~\cite{Valiant1977}, characterizations of Pfaffian graphs and polynomial-time algorithms to give a Pfaffian orientation have been intensively studied~\cite{Kasteleyn1961,Little1974,Robertson1999,Temperley1961,Vazirani1989}.
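For a quick illustration, consider the $4$-cycle on vertices $1, 2, 3, 4$, whose two perfect matchings are $\set{\set{1,2}, \set{3,4}}$ and $\set{\set{2,3}, \set{1,4}}$.
Orient the edges as $1 \to 2$, $2 \to 3$, $3 \to 4$, and $1 \to 4$, and let $S$ be the Tutte matrix with each variable set to $+1$ or $-1$ according to this orientation, i.e., $S_{u,v} = +1$ if the edge $\set{u, v}$ is oriented from $u$ to $v$, $S_{u,v} = -1$ if it is oriented from $v$ to $u$, and $S_{u,v} = 0$ otherwise.
A direct computation gives $\pf S = S_{1,2}S_{3,4} - S_{1,3}S_{2,4} + S_{1,4}S_{2,3} = 1 + 1 = 2$, the number of perfect matchings, so this orientation is Pfaffian; reversing the edge $1 \to 4$ flips the sign of one term and yields $0$ instead, so that orientation is not Pfaffian.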
From the viewpoint of matroids again, the bipartite matching problem is generalized to the \emph{linear matroid intersection problem}~\cite{Edmonds1968,Edmonds1970}.
This is the problem to find a common column base of two matrices $A_1, A_2$ of the same size.
Besides the bipartite matching problem, the linear matroid intersection problem includes a large number of combinatorial optimization problems as special cases, such as problems of finding an arborescence, a colorful spanning tree, and a vertex-disjoint $S$--$T$ path~\cite{Tutte1965}.
The \emph{weighted linear matroid intersection problem} is to find a common base of $A_1, A_2$ that minimizes a given column weight $\funcdoms{w}{E}{\setR}$.
Various polynomial-time algorithms have been proposed for both the unweighted and the weighted linear matroid intersection problems~\cite{Edmonds1968,Edmonds1970,Frank2011,Harvey2009}.
However, the counting of common bases is intractable even for a pair of totally unimodular matrices, as it includes the counting of perfect bipartite matchings.
Commonly generalizing Pfaffian bipartite graphs and regular matroids, Webb~\cite{Webb2004} introduced the notion of a \emph{Pfaffian} (\emph{matrix}) \emph{pair} as a pair of totally unimodular matrices $A_1, A_2$ such that $\det A_1[B] \det A_2[B]$ is constant for any common base $B$ of $A_1$ and $A_2$.
By the Cauchy--Binet formula~\eqref{eq:cauchy_binet}, this condition means that the number of common bases of $(A_1, A_2)$ can be retrieved from $\det A_1 \trsp{A_2}$.
For example, bases of a totally unimodular matrix $A$ are clearly common bases of a Pfaffian pair $(A, A)$.
Webb~\cite{Webb2004} indicated that the set of perfect matchings of a Pfaffian bipartite graph can also be represented as common bases of a Pfaffian pair.
Although the concept of Pfaffian pairs nicely unifies these two celebrated countable objects, its existence and importance do not seem to have been recognized beyond Webb's original thesis~\cite{Webb2004}.
We remark that one can remove the assumption of total unimodularity on $A_1$ and $A_2$ for counting purposes.
The linear matroid intersection and the (nonbipartite) matching problems are commonly generalized to the \emph{linear matroid parity problem}~\cite{Lawler1976}, which is explained as follows.
Let $A$ be a $2r \times 2n$ matrix whose column set is partitioned into pairs, called \emph{lines}.
Let $L$ be the set of lines.
In this paper, we call $(A, L)$ a (\emph{linear}) \emph{matroid parity}.
The linear matroid parity problem on $(A, L)$ is to find a \emph{parity base} of $(A, L)$, which is a column base of $A$ consisting of lines.
Applications of the linear matroid parity problem include the maximum hyperforest problem on a 3-uniform hypergraph, the disjoint $A$-path problem and the disjoint \calS-path problem~\cite{Lovasz1980}.
The linear matroid parity problem is known to be solvable in polynomial time since the pioneering work of Lovász~\cite{Lovasz1980}.
Recently, Iwata--Kobayashi~\cite{Iwata2017} presented the first polynomial-time algorithm for the \emph{weighted linear matroid parity problem}, which is to find a parity base of $(A, L)$ that minimizes a given line weight $\funcdoms{w}{L}{\setR}$.
In this paper, we explore Pfaffian pairs and their generalization to the linear matroid parity problem, which we call \emph{Pfaffian} (\emph{linear matroid}) \emph{parities}.
The contributions of this paper are twofold: structural and algorithmic results.
\subsection{Structural Results}
We introduce a new concept ``Pfaffian parity'' as a matroid parity $(A, L)$ such that $\det A[B]$ is constant for every parity base $B$ of $(A, L)$.
As in the case of Pfaffian pairs, this condition ensures that the number of parity bases can be retrieved from the Pfaffian of a skew-symmetric matrix associated with $(A, L)$.
The proof of this fact relies on a generalization of the Cauchy--Binet formula~\eqref{eq:cauchy_binet} to the Pfaffian given by Ishikawa--Wakayama~\cite{Ishikawa1995} (see \cref{prop:ishikawa_pfaffian_cauchy_binet}).
We then consolidate a list of discrete structures that can be represented as common bases of Pfaffian pairs or parity bases of Pfaffian parities.
Some of them are already (explicitly or implicitly) known, and some are newly proved.
The variety of this list illustrates that Pfaffian pairs and parities serve as a unified framework of discrete structures for which counting is tractable.
Many celebrated counting theorems can be explained within this framework, such as Kirchhoff's matrix-tree theorem~\cite{Kirchhoff1847}, Tutte's directed matrix-tree theorem~\cite{Tutte1948}, the Pfaffian matrix-tree theorem due to Masbaum--Vaintrob~\cite{Masbaum2002}, and the Lindström--Gessel--Viennot (LGV) lemma~\cite{Gessel1985,Lindstrom1973}.
An overview of the list is as follows; see each linked section for the exact problem definitions.
\paragraph{Regular Matroids and Regular Delta-Matroids~\textmd{(\cref{sec:regular_matroids,sec:regular_delta_matroids})}.}
Bases of regular matroids are a trivial example of Pfaffian pairs, as explained above.
We obtain Kirchhoff's matrix-tree theorem~\cite{Kirchhoff1847} as a corollary.
Webb~\cite{Webb2004} showed that one can represent the set of nonsingular principal submatrices of a skew-symmetric totally unimodular matrix as common bases of a Pfaffian pair.
This can be slightly generalized to the feasible sets of \emph{regular delta-matroids}, which are a generalization of regular matroids introduced by Bouchet~\cite{Bouchet1995, Bouchet1998}.
A combinatorial example of regular delta-matroids is Euler tours in 4-regular directed graphs~\cite{Bouchet1995}.
\paragraph{Arborescences~\textmd{(\cref{sec:arborescences})}.}
An \emph{arborescence} of a directed graph $G$ is a rooted directed spanning tree.
It is well-known that arborescences of $G$ are characterized as common bases of two matrices $A_1, A_2$ associated with $G$.
Tutte's directed matrix-tree theorem~\cite{Tutte1948} claims that the number of arborescences of $G$ is equal to $\det A_1 \trsp{A_2}$.
Some known proofs of the directed matrix-tree theorem essentially show that $(A_1, A_2)$ is Pfaffian.
This means that the directed matrix-tree theorem can be treated in the framework of Pfaffian pairs.
\paragraph{Perfect Matchings of Pfaffian Graphs~\textmd{(\cref{sec:perfect_matchings})}.}
We show that the set of perfect matchings of a Pfaffian graph can be seen as parity bases of a Pfaffian parity.
This is an extension of the relationship between a Pfaffian bipartite graph and a Pfaffian pair observed by Webb~\cite{Webb2004}.
\paragraph{Spanning Hypertrees of 3-Pfaffian 3-Uniform Hypergraphs~\textmd{(\cref{sec:spanning_hypertrees})}.}
Let $H = (V, \mathcal{E})$ be a 3-graph (3-uniform hypergraph) and $\vec{H} = (V, \vec{\mathcal{E}})$ an orientation of $H$.
Namely, each element in $\mathcal{E}$ is an unordered triple over $V$, and each element in $\vec{\mathcal{E}}$ is an ordered triple.
A spanning hypertree of $H$ carries a sign determined by the orientation $\vec{H}$.
The Pfaffian matrix-tree theorem~\cite{Masbaum2002} claims that the Pfaffian of a skew-symmetric matrix associated with $\vec{H}$ is the signed sum of all spanning hypertrees of $H$.
The orientation $\vec{H}$ is called \emph{3-Pfaffian} if the signs of all the spanning hypertrees are the same~\cite{Goodall2011}.
This means that the absolute value of the Pfaffian turns out to be the number of spanning hypertrees of $H$.
The 3-graph $H$ is also called \emph{3-Pfaffian} if it admits a 3-Pfaffian orientation.
3-Pfaffian orientations for 3-graphs include Pfaffian orientations for graphs as a special case~\cite{Goodall2011}.
Lovász~\cite{Lovasz1980} presented a reduction of the problem to find a spanning hypertree of $H$ to the linear (graphic) matroid parity problem.
This reduction yields a one-to-one correspondence between spanning hypertrees of $H$ and parity bases of a matroid parity $(A, L)$ constructed from $H$ (indeed with $L = \mathcal{E}$).
By appropriately encoding the orientation in the matrix $A$, we show that $(A, L)$ becomes Pfaffian when the orientation is 3-Pfaffian.
More generally, we prove that the sign of a spanning hypertree $T$ is equal to $\det A[B]$, where $B$ is the parity base of $(A, L)$ corresponding to $T$.
Although we can easily confirm this fact by using the Pfaffian matrix-tree theorem, we also provide another proof without relying on the Pfaffian matrix-tree theorem.
This leads us to a new proof of the Pfaffian matrix-tree theorem.
\paragraph{Disjoint $S$--$T$ Paths of DAGs~\textmd{(\cref{sec:dag})}.}
Let $G$ be a directed acyclic graph (DAG) and take disjoint vertex subsets $S$ and $T$ with $\card{S} = \card{T} = k$.
An \emph{$S$--$T$ path} of $G$ is the union of $k$ directed paths, each from a vertex in $S$ to a vertex in $T$
\footnote{
An $S$--$T$ path generally refers to a single path from $S$ to $T$ rather than the union of such paths.
In some literature, an $S$--$T$ path in our definition is called a \emph{perfect Menger-type linking}~\cite[Section~2.2.4]{Murota2000}.
}.
An $S$--$T$ path is called (vertex-)\emph{disjoint} if its constituent directed paths are pairwise vertex-disjoint.
A disjoint $S$--$T$ path carries a sign determined by the pattern of which vertices in $S$ are connected to which vertices in $T$.
The LGV lemma~\cite{Gessel1985,Lindstrom1973} provides a formula on the sum of signs of disjoint $S$--$T$ paths in $G$ via the determinant.
The LGV lemma has various applications in combinatorics and linear algebra~\cite{Gessel1985}; the Cauchy--Binet formula~\eqref{eq:cauchy_binet} can be proved via the LGV lemma for example.
The disjoint $S$--$T$ path problem on $G$ reduces to the bipartite matching problem~\cite{Frank2011}, in which a disjoint $S$--$T$ path is mapped to a perfect bipartite matching bijectively.
We show that this map retains the signs as well.
This means that if all disjoint $S$--$T$ paths of $G$ have the same sign, the set of disjoint $S$--$T$ paths forms common bases of a Pfaffian pair.
We say that such $(S, T)$ is in the \emph{LGV position} on $G$ and illustrate two examples arising from planar graphs.
We further provide a new proof of the LGV lemma making use of this map.
\paragraph{Shortest Disjoint $S$--$T$ Paths and $S$--$T$--$U$ Paths~\textmd{(\cref{sec:st,sec:stu})}.}
We generalize the above arguments of the disjoint $S$--$T$ path problem on DAGs to \emph{Mader's disjoint \calS-path problem}~\cite{Gallai1964,Mader1978} on undirected graphs.
Let $G$ be an undirected graph and $\mathcal{S}$ a family of disjoint vertex subsets.
Suppose that $\card{\cup_{S \in \mathcal{S}} S} = 2k$.
An \emph{\calS-path} of $G$ is the union of $k$ paths, each of which connects vertices belonging to distinct parts in $\mathcal{S}$.
An \calS-path is called \emph{disjoint} if its constituent paths are pairwise vertex-disjoint.
The disjoint \calS-path problem on $G$ is to find a disjoint \calS-path of $G$.
As with disjoint $S$--$T$ paths of DAGs, the sign of a disjoint \calS-path is defined based on its connection pattern on $\mathcal{S}$.
When $\card{\mathcal{S}} = 2$, we call an \calS-path an $S$--$T$ path (with $\mathcal{S} = \set{S, T}$).
When $\card{\mathcal{S}} = 3$, we refer to an \calS-path as an $S$--$T$--$U$ path (with $\mathcal{S} = \set{S, T, U}$).
Tutte~\cite{Tutte1965} proposed a reduction of the disjoint $S$--$T$ path problem to the linear (graphic) matroid intersection problem.
Subsequently, Schrijver~\cite{Schrijver2003} presented a reduction of the disjoint \calS-path problem to the linear matroid parity problem based on Lovász's reduction~\cite{Lovasz1980}.
Suppose also that $G$ is equipped with a positive edge length, and consider the shortest disjoint \calS-path problem, which is to find a disjoint \calS-path that minimizes the sum of edge lengths.
Yamaguchi~\cite{Yamaguchi2016} showed that the shortest disjoint \calS-path problem is reduced to the weighted linear matroid parity problem.
As a special case, the shortest disjoint $S$--$T$ path problem reduces to the weighted linear matroid intersection problem.
This is a generalization of the reduction from the disjoint $S$--$T$ path problem on a DAG to the bipartite matching problem.
We first deal with the disjoint $S$--$T$ path problem on $G$.
Unfortunately, Tutte's reduction provides only a one-to-many correspondence between disjoint $S$--$T$ paths and common bases of a matrix pair $(A_1, A_2)$.
Nevertheless, we show that the sign of a disjoint $S$--$T$ path $P$ coincides with $\det A_1[B] \det A_2[B]$, where $B$ is a common base corresponding to $P$.
In addition, the weighted reduction gives a one-to-one correspondence between optimal solutions.
As a consequence, if $(S, T)$ is in the \emph{LGV position}, i.e., the signs of all disjoint $S$--$T$ paths are the same, we can represent the set of shortest disjoint $S$--$T$ paths of $G$ as minimum-weight common bases of a Pfaffian pair.
We next consider the general disjoint \calS-path problem.
As in the $S$--$T$ case, Schrijver's reduction for the unweighted problem constructs only a one-to-many correspondence, whereas Yamaguchi's reduction for the weighted problem provides a one-to-one correspondence.
Unlike the $S$--$T$ case, however, it turns out that $\det A[B]$ depends on a factor other than the sign of a disjoint \calS-path $P$, where $A$ is the matrix in the reduced linear matroid parity problem and $B$ is a parity base corresponding to $P$.
Nonetheless, when $\card{\mathcal{S}} = 3$, i.e., in the $S$--$T$--$U$ case, this factor is constant over all disjoint \calS-paths.
This means that the shortest disjoint $S$--$T$--$U$ paths can be represented as minimum-weight parity bases of a weighted Pfaffian parity when $S, T, U$ are in the \emph{LGV position}.
\subsection{Algorithmic Results}
Let $(A_1, A_2)$ be an $r \times n$ Pfaffian pair and $(A, L)$ a $2r \times 2n$ Pfaffian parity.
The definitions of Pfaffian pairs and parities guarantee that one can count the number of common bases of $(A_1, A_2)$ and the number of parity bases of $(A, L)$ just by matrix computations.
We observe that these computations can be done in $\Order\prn{nr^{\omega-1}}$ or $\Order\prn{nr^{\omega-1} + r^3}$-time (see \cref{thm:complexity_c_known}), where we assume that arithmetic operations can be performed in constant time.
Here, $2 < \omega \le 3$ is the matrix multiplication exponent, i.e., the multiplication of two $r \times r$ matrices is performed in $\Order\prn{r^\omega}$-time.
The current best value of $\omega$ is $2.3728639$~\cite{LeGall2014}.
More generally and precisely, when the matrices are over a field $\setK$ of characteristic $\ch(\setK)$, we can compute the number of common or parity bases modulo $\ch(\setK)$ within these times.
In the above arguments, we implicitly assumed that we know the value of $\det A_1[B] \det A_2[B]$ for an arbitrary common base $B$ of $(A_1, A_2)$ and of $\det A[B]$ for an arbitrary parity base $B$ of $(A, L)$.
These values are called \emph{constants}.
If we do not know the values of the constants, then we need to obtain one common or parity base $B$ beforehand by executing linear matroid intersection and parity algorithms.
The current best time complexities for solving the linear matroid intersection problem are deterministic $\Order\prn{nr^{\frac{5-\omega}{4-\omega}} \log r}$-time due to Gabow--Xu~\cite{Gabow1996} and randomized $\Order\prn{nr^{\omega-1}}$-time due to Harvey~\cite{Harvey2009}.
Similarly, the current best time complexities for the linear matroid parity problem are deterministic $\Order\prn{nr^\omega}$-time due to Gabow--Stallmann~\cite{Gabow1986} and Orlin~\cite{Orlin2008}, and randomized $\Order\prn{nr^{\omega-1}}$-time due to Cheung--Lau--Leung~\cite{Cheung2014}.
Therefore, we are confronted with a choice between sticking to deterministic algorithms and employing randomized algorithms to retain the faster running time.
We would face the same trade-off in finding one common or parity base even if we knew the constants.
For the case of $\ch(\setK) = 0$, we show that it is possible to combine the advantages of both choices as follows.
\begin{theorem}\label{thm:complexity_of_unweighted_counting}
When $\ch(\setK) = 0$, we can count the number of common bases of an $r \times n$ Pfaffian pair and the number of parity bases of a $2r \times 2n$ Pfaffian parity in deterministic $\Order\prn{nr^{\omega-1}}$-time.
In addition, we can construct one common or parity base in the same time complexity.
\end{theorem}
Intuitively, the algorithms of Harvey~\cite{Harvey2009} and Cheung--Lau--Leung~\cite{Cheung2014} make use of randomness to take a random vector avoiding numerical cancellations between common or parity bases.
We show that Pfaffian pairs and parities do not involve such numerical cancellations by their definitions.
We next consider the problems of counting the number of minimum-weight common bases of a column-weighted Pfaffian pair and the number of minimum-weight parity bases of a line-weighted Pfaffian parity.
These problems can be algebraically formulated by using a univariate polynomial matrix (assuming the weight to be integral).
In these formulations, the number of minimum-weight common or parity bases is obtained as the coefficient of the lowest degree term in the determinant or Pfaffian of the polynomial matrix.
While we can compute it by performing a symbolic computation, this yields only a pseudo-polynomial time algorithm.
Broder--Mayr~\cite{Broder1997} and Hayashi--Iwata~\cite{Hayashi2018} presented polynomial-time counting algorithms for minimum-weight spanning trees and minimum-weight arborescences, respectively.
Their algorithms first compute dual optimal solutions of linear programming (LP) formulations and then perform graphic operations on trees constructed from the dual optimal solutions.
Generalizing these algorithms from a matroidal perspective, we present a polynomial-time counting algorithm for minimum-weight common bases of a Pfaffian pair.
We make use of Frank's weight splitting lemma~\cite{Frank1981}, which reveals the dual structure of the weighted matroid intersection problem.
Applying a row operation, we reduce the counting on a weighted Pfaffian pair to the counting on an unweighted Pfaffian pair.
Our reduction can be seen as a succinct description of a known trick to represent minimum-weight common bases of a weighted matrix pair as the set of common bases of an unweighted matrix pair.
The running time is estimated as follows.
\begin{theorem}\label{thm:complexity_of_counting_weighted_pairs}
Let $(A_1, A_2)$ be a Pfaffian pair with column weight $\funcdoms{w}{E}{\setR}$.
We can compute the number of minimum-weight common bases of $(A_1, A_2)$ modulo $\ch(\setK)$ in deterministic $\Order\prn{nr^\omega + nr \log n}$-time.
\end{theorem}
We also present a polynomial-time counting algorithm for weighted Pfaffian parities.
Although an LP formulation of the weighted linear matroid parity problem is not yet known, Iwata--Kobayashi~\cite{Iwata2017} gave an algebraic optimality criterion, which associates the minimum weight of a parity base with the maximum weight of a perfect matching of a graph.
Based on this association, we show that the number of minimum-weight parity bases coincides with the leading coefficient of the Pfaffian of a skew-symmetric polynomial matrix that the algorithm of Iwata--Kobayashi outputs as a byproduct.
We then apply Murota's upper-tightness testing algorithm~\cite{Murota1995a} to compute the leading coefficient.
Murota's algorithm was originally presented in the context of combinatorial relaxation, which is to compute the degree of the determinant (Pfaffian) of a skew-symmetric polynomial matrix.
The time complexity is summarized as follows.
\begin{theorem}\label{thm:counting_minimum_weight_parity_bases}
Let $(A, L)$ be a Pfaffian parity with line weight $\funcdoms{w}{L}{\setR}$.
We can compute the number of minimum-weight parity bases of $(A, L)$ modulo $\ch(\setK)$ in deterministic $\Order\prn{n^3r}$-time.
\end{theorem}
On describing time complexities, we have assumed that arithmetic operations on $\setK$ can be performed in constant time.
This assumption is reasonable when $\setK$ is a finite field of fixed order.
When $\setK$ is the field $\setQ$ of rational numbers, there is no guarantee that a direct application of the algorithm of Iwata--Kobayashi~\cite{Iwata2017} does not swell the bit-lengths of intermediate numbers.
Instead, they showed that one can solve the weighted linear matroid parity problem over $\setQ$ by applying their algorithm over a sequence of finite fields.
We give a polynomial-time counting algorithm with $\setK = \setQ$ based on their reduction.
\begin{theorem}\label{thm:bit_complexity}
Let $(A, L)$ be a Pfaffian parity over $\setQ$ with line weight $\funcdoms{w}{L}{\setR}$.
We can deterministically compute the number of minimum-weight parity bases of $(A, L)$ in time polynomial in the binary encoding length of $A$.
\end{theorem}
As seen above, our counting algorithms for weighted Pfaffian pairs and parities are based on different approaches: the one for pairs reduces to unweighted counting, while the one for parities goes through the matching problem.
We show in the Appendix that the algorithm for Pfaffian pairs can also be derived by an approach based on the bipartite matching problem.
\subsection{Organization}
The rest of this paper is organized as follows.
After introducing some preliminaries, \cref{sec:pfaffian_matroid_pairs_and_parities} gives formal definitions of Pfaffian pairs and parities as well as their properties.
\Cref{sec:examples,sec:lgv} exhibit examples of Pfaffian pairs and parities.
The family of disjoint path problems is dealt with in \cref{sec:lgv}, and the other examples are in \cref{sec:examples}.
Finally, \cref{sec:algorithms} presents our counting algorithms for unweighted and weighted Pfaffian pairs and parities.
\section{Pfaffian Pairs and Pfaffian Parities}\label{sec:pfaffian_matroid_pairs_and_parities}
\subsection{Preliminaries}\label{sec:preliminaries}
Let $\setZ$, $\setQ$, and $\setR$ denote the sets of integers, rational numbers, and real numbers, respectively.
For a nonnegative integer $n$, we denote $\set{1, 2, \dotsc, n}$ by $\intset{n}$.
Let $\setK$ be a field of characteristic $\ch(\setK)$.
Unless otherwise stated, all matrices are over $\setK$ in \cref{sec:pfaffian_matroid_pairs_and_parities,sec:algorithms} and are over $\setQ$ in \cref{sec:examples,sec:lgv}.
For $n \in \setZ$, we define $n$ modulo $0$ as $n$ for convenience.
For $n,m \in \setZ$, ``$n$ is equal to $m$ over $\setK$'' means $n$ is congruent to $m$ modulo $\ch(\setK)$.
For a matrix $A$, we denote by $A[I, J]$ the submatrix of $A$ with row subset $I$ and column subset $J$.
If $I$ is all the rows of $A$, we denote $A[I, J]$ by $A[J]$.
Although this paper could be described without formally defining matroids, we give the general definition here.
A \emph{matroid} is the pair $\mathbf{M} = (E, \mathcal{B})$ of a finite set $E$ and a nonempty set family $\mathcal{B} \subseteq 2^E$ over $E$ satisfying the following: for any $B_1, B_2 \in \mathcal{B}$ and $x \in B_1 \setminus B_2$, there exists $y \in B_2 \setminus B_1$ such that $\prn{B_1 \setminus \set{x}} \cup \set{y} \in \mathcal{B}$.
Each element of $\mathcal{B}$ is called a \emph{base} of $\mathbf{M}$ and $E$ is called the \emph{ground set} of $\mathbf{M}$.
Typical examples of matroids arise from matrices.
Let $A \in \setK^{r \times n}$ be a matrix with column set $E$.
Define
\begin{align}
\mathcal{B}(A) \defeq \set{B \subseteq E}[\text{$\card{B} = r$, $A[B]$ is nonsingular}].
\end{align}
If $A$ is of full row rank, then $\mathbf{M}(A) \defeq (E, \mathcal{B}(A))$ forms a matroid, called a \emph{linear matroid} represented by $A$.
We refer to each element of $\mathcal{B}(A)$ as a \emph{base} of $A$.
In this paper, we consider $A$ to have no base if $A$ is not of full row rank.
Recall that the \emph{determinant} of a square matrix $A = \prn{A_{i,j}}_{i,j \in \intset{n}} \in \setK^{n \times n}$ is defined as
\begin{align}\label{def:determinant}
\det A \defeq \sum_{\sigma \in \sym_n} \sgn \sigma \prod_{i=1}^n A_{i, \sigma(i)},
\end{align}
where $\sym_n$ is the set of all permutations on $\intset{n}$ and $\sgn \sigma$ denotes the sign of a permutation $\sigma \in \sym_n$.
A square matrix $S$ is said to be \emph{skew-symmetric} if $\trsp{S} = -S$ and all diagonal entries are zero, where the latter condition matters only when $\ch(\setK) = 2$.
For a skew-symmetric matrix $S = \prn{S_{i,j}}_{i,j \in \intset{2n}} \in \setK^{2n \times 2n}$ of even order, the \emph{Pfaffian} of $S$ is defined as
\begin{align}\label{def:pfaffian}
\pf S \defeq \sum_{\sigma \in F_{2n}} \sgn \sigma \prod_{i=1}^n S_{\sigma(2i-1), \sigma(2i)},
\end{align}
where $F_{2n}$ is the subset of $\sym_{2n}$ given by
\begin{align}\label{def:F}
F_{2n} \defeq \set{\sigma \in \sym_{2n}}[\text{$\sigma(1) < \sigma(3) < \dotsb < \sigma(2n-1)$ and $\sigma(2i-1) < \sigma(2i)$ for $i \in \intset{n}$}].
\end{align}
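For example, for $n = 2$ the set $F_4$ consists of the three permutations that pair up $\intset{4}$ as $\set{1,2}\set{3,4}$, $\set{1,3}\set{2,4}$, and $\set{1,4}\set{2,3}$, with signs $+1$, $-1$, and $+1$, respectively, so that
\begin{align*}
\pf S = S_{1,2}S_{3,4} - S_{1,3}S_{2,4} + S_{1,4}S_{2,3}
\end{align*}
for every $4 \times 4$ skew-symmetric matrix $S$.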
It is well-known that
\begin{align}
\prn{\pf S}^2 & = \det S,\label{eq:pf_det} \\
\pf AS\trsp{A} & = \det A \pf S\label{eq:pfaffian_multiplicativity}
\end{align}
hold, where $A \in \setK^{2n \times 2n}$ is any square matrix.
The following formula is a generalization of the Cauchy--Binet formula~\eqref{eq:cauchy_binet} to the Pfaffian, given by Ishikawa--Wakayama~\cite{Ishikawa1995}.
\begin{proposition}[{\cite[Theorem~1]{Ishikawa1995}}]\label{prop:ishikawa_pfaffian_cauchy_binet}
Let $S \in \setK^{2n \times 2n}$ be a skew-symmetric matrix and $A \in \setK^{2r \times 2n}$ a matrix.
Suppose that the row and column sets of $S$ and the column set of $A$ are indexed by $E$.
Then it holds
\begin{align}
\pf AS\trsp{A} = \sum_{\condit{J \subseteq E}[\card{J} = 2r]} \det A[J] \pf S[J, J].\label{eq:pfaffian_cauchy_binet}
\end{align}
\end{proposition}
Let $A$ be a square matrix partitioned as $A = \begin{psmallmatrix} X & Y \\ Z & W \end{psmallmatrix}$.
Suppose that $X$ and $W$ are square, and $W$ is nonsingular.
The \emph{Schur complement} of $A$ with respect to $W$ is the matrix $X - YW^{-1}Z$.
This matrix is the one that appears at the position of $X$ after eliminating $Y$ and $Z$ by elementary operations using $W$.
Since elementary operations do not change the determinant, it holds
\begin{align}\label{eq:shurt_det}
\det A = \det W \det\prn[\big]{X - YW^{-1}Z}.
\end{align}
Elementary operations also retain the Pfaffian of a skew-symmetric matrix by~\eqref{eq:pfaffian_multiplicativity}.
Thus if $S = \begin{psmallmatrix} X & Y \\ -\trsp{Y} & W \end{psmallmatrix}$ is skew-symmetric and $W$ is nonsingular, we have
\begin{align}\label{eq:shurt_pf}
\pf S = \pf W \pf\prn[\big]{X + YW^{-1}\trsp{Y}}.
\end{align}
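As a small consistency check of~\eqref{eq:shurt_pf}, let $S$ be a $4 \times 4$ skew-symmetric matrix with blocks $X = S[\set{1,2}, \set{1,2}]$, $Y = S[\set{1,2}, \set{3,4}]$, and $W = S[\set{3,4}, \set{3,4}]$, and suppose $S_{3,4} \ne 0$.
Then $\pf W = S_{3,4}$ and a direct computation gives $\pf\prn[\big]{X + YW^{-1}\trsp{Y}} = S_{1,2} + \prn{S_{1,4}S_{2,3} - S_{1,3}S_{2,4}}/S_{3,4}$, so the product in~\eqref{eq:shurt_pf} equals $S_{1,2}S_{3,4} - S_{1,3}S_{2,4} + S_{1,4}S_{2,3}$, in agreement with the expansion of the Pfaffian of a $4 \times 4$ skew-symmetric matrix given above.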
\subsection{Linear Matroid Intersection Problem and Pfaffian Pairs}\label{sec:linear_matroid_intersection_problem_and_pfaffian_pairs}
The \emph{matroid intersection problem} introduced by Edmonds~\cite{Edmonds1968,Edmonds1970} is the following: given two matroids $\mathbf{M}_1 = (E, \mathcal{B}_1)$ and $\mathbf{M}_2 = (E, \mathcal{B}_2)$ over the same ground set $E$, we find a common base $B \in \mathcal{B}_1 \cap \mathcal{B}_2$ of $\mathbf{M}_1$ and $\mathbf{M}_2$.
The \emph{linear matroid intersection problem} is to find a common base of two linear matroids.
We regard the input of the linear matroid intersection problem as a \emph{matrix pair} $(A_1, A_2)$, which is the pair of matrices $A_1, A_2 \in \setK^{r \times n}$ of the same size over the same ground field $\setK$.
We denote the set of common bases of $A_1$ and $A_2$ by $\mathcal{B}(A_1, A_2) \defeq \mathcal{B}(A_1) \cap \mathcal{B}(A_2)$.
The linear matroid intersection problem can be algebraically formulated as follows.
For a vector $z = \prn{z_j}_{j \in E}$ indexed by the common column set $E$ of $(A_1, A_2)$, we denote the diagonal matrix $\diag\prn{z_j}_{j \in E}$ by $D(z)$.
We also define a block matrix
\begin{align}\label{def:Xi}
\Xi_{A_1, A_2}(z)
\defeq
\begin{pmatrix}
O & A_1 \\
\trsp{A_2} & D(z)
\end{pmatrix},
\end{align}
where $O$ denotes the zero matrix of appropriate size.
We henceforth omit the subscript $A_1, A_2$ of $\Xi$ as it will be clear from the context.
\begin{proposition}[{see~\cite{Geelen2001,Tomizawa1974}}]\label{prop:algebraic_formulation_of_intersection}
Let $(A_1, A_2)$ be a matrix pair and $z = \prn{z_j}_{j \in E}$ a vector of distinct indeterminates indexed by the common column set $E$ of $(A_1, A_2)$.
Then the following are equivalent:
%
\begin{enumerate}
\item $(A_1, A_2)$ has a common base.\label{item:algebraic_formulation_of_intersection_1}
\item $A_1 D(z) \trsp{A_2}$ is nonsingular.\label{item:algebraic_formulation_of_intersection_2}
\item $\Xi(z)$ is nonsingular.\label{item:algebraic_formulation_of_intersection_3}
\end{enumerate}
\end{proposition}
Here, the nonsingularity in \cref{prop:algebraic_formulation_of_intersection} is in the sense of matrices over the rational function field $\setK(z) \defeq \setK\prn{\set{z_j}[j \in E]}$.
As indicated by Tomizawa--Iri~\cite{Tomizawa1974}, the equivalence of \cref{prop:algebraic_formulation_of_intersection}~\ref{item:algebraic_formulation_of_intersection_1} and~\ref{item:algebraic_formulation_of_intersection_2} can be seen from the Cauchy--Binet formula~\eqref{eq:cauchy_binet} because the formula expands $\det A_1 D(z) \trsp{A_2}$ as
\begin{align}\label{eq:cauchy_binet_1}
\det A_1 D(z) \trsp{A_2}
= \sum_{B \in \mathcal{B}(A_1, A_2)} \det A_1[B] \det A_2[B] \prod_{j \in B} z_j.
\end{align}
This equation means that $\det A_1 D(z) \trsp{A_2} \ne 0$ if and only if $\mathcal{B}(A_1, A_2) \ne \varnothing$ since the distinct monomials in $z$ prevent cancellations in the summation.
Considering the formula~\eqref{eq:shurt_det} on the Schur complement and~\eqref{eq:cauchy_binet_1}, we also have
\begin{align}\label{eq:cauchy_binet_2}
\det \Xi(z)
= \det A_1 {D(z)}^{-1} \trsp{A_2} \cdot \det D(z)
= \sum_{B \in \mathcal{B}(A_1, A_2)} \det A_1[B] \det A_2[B] \prod_{j \in E \setminus B} z_j.
\end{align}
Hence all the claims in \cref{prop:algebraic_formulation_of_intersection} are equivalent.
See also Harvey~\cite{Harvey2009} and Murota~\cite[Remark~2.3.37]{Murota2000}.
Now we define Pfaffian matrix pairs, slightly generalizing the definition of Webb~\cite{Webb2004}.
\begin{definition}[{Pfaffian matrix pair; see~\cite{Webb2004}}]\label{def:pfaffian_pair}
We say that a matrix pair $(A_1, A_2)$ is \emph{Pfaffian} if there exists $c \in \setK \setminus \set{0}$ such that $\det A_1[B] \det A_2[B] = c$ for all $B \in \mathcal{B}(A_1, A_2)$.
The value $c$ is called the \emph{constant} of $(A_1, A_2)$.
\end{definition}
We abbreviate a Pfaffian matrix pair as a Pfaffian pair.
If $(A_1, A_2)$ is Pfaffian, nonzero terms in the summation of~\eqref{eq:cauchy_binet_1} and~\eqref{eq:cauchy_binet_2} do not cancel out.
Hence the following holds for Pfaffian pairs.
\begin{proposition}\label{prop:counting_formula_for_pfaffian_pairs}
Let $(A_1, A_2)$ be a Pfaffian pair of constant $c$ and $z = \prn{z_j}_{j \in E}$ a vector indexed by the common column set $E$ of $(A_1, A_2)$.
Then it holds
%
\begin{align}
\det A_1 D(z) \trsp{A_2} & = c \sum_{B \in \mathcal{B}(A_1, A_2)} \prod_{j \in B} z_j,\label{eq:symbolic_counting_pairs} \\
\det \Xi(z) & = c \sum_{B \in \mathcal{B}(A_1, A_2)} \prod_{j \in E \setminus B} z_j.
\end{align}
%
In particular, the number of common bases of $(A_1, A_2)$ is equal to $c^{-1} \det A_1\trsp{A_2} = c^{-1} \det \Xi(\onevec)$ over $\setK$, where $\onevec$ denotes the vector of ones with appropriate dimension.
\end{proposition}
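As a toy example, let $A_1 = A_2 = \begin{psmallmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \end{psmallmatrix}$, which is totally unimodular.
Then $(A_1, A_2)$ is Pfaffian with constant $c = 1$ because $\det A_1[B] \det A_2[B] = \prn{\det A_1[B]}^2 = 1$ for every common base $B$.
\Cref{prop:counting_formula_for_pfaffian_pairs} gives $\det A_1 \trsp{A_2} = \det \begin{psmallmatrix} 2 & 1 \\ 1 & 2 \end{psmallmatrix} = 3$, matching the three common bases $\set{1,2}$, $\set{1,3}$, and $\set{2,3}$.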
Next, consider a column-weighted version of matrix pairs.
Let $(A_1, A_2)$ be a matrix pair and $\funcdoms{w}{E}{\setR}$ a weight on the common column set $E$ of $(A_1, A_2)$.
The \emph{weight} $w(J)$ of $J \subseteq E$ is defined as $w(J) \defeq \sum_{j \in J} w(j)$.
The \emph{weighted linear matroid intersection problem} is to find a common base of $(A_1, A_2)$ that minimizes the weight $w$ among all common bases.
It is well-known that one can algebraically encode the information on the weight $w$ by putting it in the exponent of an indeterminate $\theta$, as the following proposition shows.
We define $\theta^w \defeq \pbig{\theta^{w(j)}}_{j \in E}$.
\begin{proposition}\label{prop:algebraic_weighted_pair}
Let $(A_1, A_2)$ be a matrix pair with column weight $\funcdoms{w}{E}{\setR}$ and let $\theta$ be an indeterminate.
For $x \in \setR$, the coefficient of $\theta^x$ in $\det A_1 D\pbig{\theta^w} \trsp{A_2}$ and the coefficient of $\theta^{w(E)-x}$ in $\det \Xi\pbig{\theta^w}$ are equal to
%
\begin{align}\label{eq:weighted_counting_pair}
\sum_{B \in \mathcal{B}_x} \det A_1[B] \det A_2[B],
\end{align}
%
where $\mathcal{B}_x \defeq \set{B \in \mathcal{B}(A_1, A_2)}[w(B) = x]$.
In particular, if $(A_1, A_2)$ is Pfaffian with constant $c$, the coefficients of $\theta^x$ in $\det A_1 D\pbig{\theta^w} \trsp{A_2}$ and of $\theta^{w(E)-x}$ in $\det \Xi\pbig{\theta^w}$ are equal to $c\,\card{\mathcal{B}_x}$ over $\setK$; hence $\card{\mathcal{B}_x}$ is recovered as $c^{-1}$ times these coefficients.
\end{proposition}
\begin{proof}
By~\eqref{eq:cauchy_binet_1}, we have
%
\begin{align}
\det A_1 D\pbig{\theta^w} \trsp{A_2}
& = \sum_{B \in \mathcal{B}(A_1, A_2)} \det A_1[B] \det A_2[B] \prod_{j \in B} \theta^{w(j)} \\
& = \sum_{B \in \mathcal{B}(A_1, A_2)} \det A_1[B] \det A_2[B] \theta^{w(B)}.\label{eq:expansion_of_weighted_pair}
\end{align}
%
Hence the coefficient of $\theta^x$ in~\eqref{eq:expansion_of_weighted_pair} is equal to~\eqref{eq:weighted_counting_pair} for $x \in \setR$.
We can show the claim for $\det \Xi\pbig{\theta^w}$ in the same way via~\eqref{eq:cauchy_binet_2}.
\end{proof}
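To illustrate \cref{prop:algebraic_weighted_pair}, take again $A_1 = A_2 = \begin{psmallmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \end{psmallmatrix}$, now with column weight $w = (0, 0, 1)$.
Then $D\pbig{\theta^w} = \diag(1, 1, \theta)$ and
\begin{align*}
\det A_1 D\pbig{\theta^w} \trsp{A_2} = \det \begin{pmatrix} 1 + \theta & \theta \\ \theta & 1 + \theta \end{pmatrix} = 1 + 2\theta,
\end{align*}
whose coefficients report one common base of weight $0$ (namely $\set{1,2}$) and two common bases of weight $1$ ($\set{1,3}$ and $\set{2,3}$), as expected.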
\subsection{Linear Matroid Parity Problem and Pfaffian Parities}
Let $\mathbf{M} = (E, \mathcal{B})$ be a matroid with $\card{E}$ being even.
The ground set $E$ is partitioned into pairs, called \emph{lines}.
Let $L$ be the set of lines.
The \emph{matroid parity problem} (also known as the \emph{matchoid problem} or the \emph{matroid matching problem}), introduced by Lawler~\cite{Lawler1976}, is to find a base of $\mathbf{M}$ consisting of lines.
Such a base is called a \emph{parity base} of $\mathbf{M}$ (with respect to $L$).
In the general case, the matroid parity problem requires an exponential number of membership oracle calls to $\mathcal{B}$~\cite{Lovasz1980}.
Nevertheless, Lovász~\cite{Lovasz1980} showed that the \emph{linear matroid parity problem} admits a polynomial-time algorithm, in which the linear matroid is given as a matrix $A$.
Here, the numbers of rows and columns of $A$ are even, say, $A \in \setK^{2r \times 2n}$.
We call the pair $(A, L)$ a (linear) \emph{matroid parity}.
We regard each parity base of $(A, L)$ as a subset of $L$ and denote by $\mathcal{B}(A, L)$ the set of all parity bases of $\mathbf{M}(A)$ with respect to $L$.
For $J \subseteq L$, we denote by $A[J]$ the submatrix of $A$ consisting of columns corresponding to lines in $J$.
The linear matroid parity problem also has algebraic formulations.
For a vector $z = \prn{z_l}_{l \in L}$ indexed by $L$, we denote by $\Delta_L(z)$ the $2n \times 2n$ skew-symmetric block-diagonal matrix defined as follows: the row and column sets are indexed by $E$, and each block corresponding to a line $l \in L$ is a $2 \times 2$ skew-symmetric matrix $\begin{psmallmatrix} \phantom{+}0 & +z_l \\ -z_l & \phantom{+}0 \end{psmallmatrix}$.
Similarly to~\eqref{def:Xi}, we define a skew-symmetric block matrix
\begin{align}\label{def:Phi}
\Phi_{A, L}(z)
\defeq
\begin{pmatrix}
O & A \\
-\trsp{A} & \Delta_L(z)
\end{pmatrix}.
\end{align}
We also omit the subscript $L$ of $\Delta$ and the subscripts $A, L$ of $\Phi$, as they will always be clear from the context.
\begin{proposition}[{\cite{Geelen2005,Lovasz1979}}]\label{prop:algebraic_formulation_of_parity}
Let $(A, L)$ be a matroid parity and $z = \prn{z_l}_{l \in L}$ a vector of distinct indeterminates indexed by $L$.
Then the following are equivalent:
%
\begin{enumerate}
\item $(A, L)$ has a parity base.\label{prop:algebraic_formulation_of_parity_1}
\item $A \Delta(z) \trsp{A}$ is nonsingular.\label{prop:algebraic_formulation_of_parity_2}
\item $\Phi(z)$ is nonsingular.\label{prop:algebraic_formulation_of_parity_3}
\end{enumerate}
\end{proposition}
We note that the matrix $A \Delta(z) \trsp{A}$ in \cref{prop:algebraic_formulation_of_parity}~\ref{prop:algebraic_formulation_of_parity_2} can also be written as
\begin{align}\label{eq:rewrite_ADeltaA}
A \Delta(z) \trsp{A}
= A_1 D(z) \trsp{A_2} - A_2 D(z) \trsp{A_1}
= \sum_{l = (v, \bar{v}) \in L} z_l \pbig{a_v\trsp{a_{\bar{v}}} - a_{\bar{v}} \trsp{a_v}},
\end{align}
where $a_v \defeq A[\set{v}]$ is the $v$th column of $A$ for $v \in E$ and $A_1, A_2$ are $2r \times n$ submatrices of $A$ consisting of column vectors $a_v$ and $a_{\bar{v}}$ for each line $(v, \bar{v}) \in L$, respectively.
Lovász~\cite[Theorem~3]{Lovasz1979} described the equivalence of \cref{prop:algebraic_formulation_of_parity}~\ref{prop:algebraic_formulation_of_parity_1} and~\ref{prop:algebraic_formulation_of_parity_2} by representing $A \Delta(z) \trsp{A}$ in the rightmost form of~\eqref{eq:rewrite_ADeltaA}.
The equivalence of \cref{prop:algebraic_formulation_of_parity}~\ref{prop:algebraic_formulation_of_parity_1} and~\ref{prop:algebraic_formulation_of_parity_3} is due to Geelen--Iwata~\cite[Theorem~4.1]{Geelen2005}; see also Cheung--Lau--Leung~\cite{Cheung2014} and Murota~\cite[Remark~7.3.23]{Murota2000}.
These equivalences can also be observed from the following identities.
\begin{proposition}\label{prop:pfaffian_cauchy_binet}
Let $(A, L)$ be a matroid parity and $z = \prn{z_l}_{l \in L}$ a vector indexed by $L$.
Then it holds
%
\begin{align}
\pf A\Delta(z)\trsp{A} & = \sum_{B \in \mathcal{B}(A, L)} \det A[B] \prod_{l \in B} z_l,\label{eq:pfaffian_cauchy_binet_1} \\
\pf \Phi(z) & = \sum_{B \in \mathcal{B}(A, L)} \det A[B] \prod_{l \in L \setminus B} z_l.\label{eq:pfaffian_cauchy_binet_2}
\end{align}
\end{proposition}
\begin{proof}
Applying the expansion formula~\eqref{eq:pfaffian_cauchy_binet} to $\pf A\Delta(z)\trsp{A}$, we have
%
\begin{align}
\pf A\Delta(z)\trsp{A} = \sum_{\condit{J \subseteq E}[\card{J} = 2r]} \det A[J] \pf \Delta(z)[J, J],
\end{align}
%
where $2r$ is the number of rows of $A$ and $E$ is the column set of $A$.
From the definitions of $\Delta(z)$ and Pfaffian, $\Delta(z)[J, J]$ is nonsingular only if $J$ consists of lines.
In this case, $\pf \Delta(z)[J, J]$ is equal to the product of $z_l$ over the lines $l$ constituting $J$.
Hence~\eqref{eq:pfaffian_cauchy_binet_1} is obtained.
The latter identity~\eqref{eq:pfaffian_cauchy_binet_2} is obtained by applying the formula~\eqref{eq:shurt_pf} on the Schur complement to $\Phi(z)$.
Note that $\Delta(z)$ can be regarded as nonsingular by treating each $z_l$ as an indeterminate.
\end{proof}
Now we define Pfaffian matroid parities in the same manner as \cref{def:pfaffian_pair}.
\begin{definition}[{Pfaffian matroid parity}]\label{def:pfaffian_parity}
We say that a matroid parity $(A, L)$ is \emph{Pfaffian} if there exists $c \in \setK \setminus \set{0}$ such that $\det A[B] = c$ for all $B \in \mathcal{B}(A, L)$.
The value $c$ is called the \emph{constant} of $(A, L)$.
\end{definition}
We abbreviate a Pfaffian matroid parity as a Pfaffian parity.
The following is immediately obtained from \cref{prop:pfaffian_cauchy_binet} and \cref{def:pfaffian_parity}.
\begin{proposition}\label{prop:counting_formula_for_pfaffian_prities}
Let $(A, L)$ be a Pfaffian parity of constant $c$ and $z = \prn{z_l}_{l \in L}$ a vector indexed by $L$.
Then it holds
%
\begin{align}
\pf A\Delta(z)\trsp{A} & = c \sum_{B \in \mathcal{B}(A, L)} \prod_{l \in B} z_l, \\
\pf \Phi(z) & = c \sum_{B \in \mathcal{B}(A, L)} \prod_{l \in L \setminus B} z_l.
\end{align}
%
In particular, the number of parity bases of $(A, L)$ is equal to $c^{-1} \pf A\Delta(\onevec)\trsp{A} = c^{-1} \pf \Phi(\onevec)$ over $\setK$.
\end{proposition}
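For a minimal example, let $A = \begin{psmallmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \end{psmallmatrix}$ with $L = \set{l_1, l_2}$, where $l_1 = (1, 2)$ and $l_2 = (3, 4)$.
The parity bases are $\set{l_1}$ and $\set{l_2}$ with $\det A[\set{l_1}] = \det A[\set{l_2}] = 1$, so $(A, L)$ is Pfaffian with constant $1$.
Indeed, $A \Delta(\onevec) \trsp{A} = \begin{psmallmatrix} 0 & 2 \\ -2 & 0 \end{psmallmatrix}$ and $\pf A \Delta(\onevec) \trsp{A} = 2$, the number of parity bases.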
We next consider the weighted linear matroid parity problem.
Let $(A, L)$ be a matroid parity and $\funcdoms{w}{L}{\setR}$ a weight on lines.
The \emph{weight} of $J \subseteq L$ is defined as $w(J) \defeq \sum_{j \in J} w(j)$.
The \emph{weighted linear matroid parity problem} is to find a parity base of $(A, L)$ having the minimum weight with respect to $w$ among all parity bases.
The following is obtained in the same way as \cref{prop:algebraic_weighted_pair} via \cref{prop:pfaffian_cauchy_binet}; see also Iwata--Kobayashi~\cite{Iwata2017}.
\begin{proposition}\label{prop:algebraic_weighted_parity}
Let $(A, L)$ be a matroid parity equipped with a line weight $\funcdoms{w}{L}{\setR}$.
Let $\theta$ be an indeterminate.
For $x \in \setR$, the coefficient of $\theta^x$ in $\pf A \Delta\pbig{\theta^w} \trsp{A}$ and the coefficient of $\theta^{w(L)-x}$ in $\pf \Phi\pbig{\theta^w}$ are equal to
%
\begin{align}\label{eq:weighted_counting_parity}
\sum_{B \in \mathcal{B}_x} \det A[B],
\end{align}
%
where $\mathcal{B}_x \defeq \set{B \in \mathcal{B}(A, L)}[w(B) = x]$.
In particular, if $(A, L)$ is Pfaffian with constant $c$, the coefficients of $\theta^x$ in $\pf A \Delta\pbig{\theta^w} \trsp{A}$ and of $\theta^{w(L)-x}$ in $\pf \Phi\pbig{\theta^w}$ are equal to $c\,\card{\mathcal{B}_x}$ over $\setK$; hence $\card{\mathcal{B}_x}$ is recovered as $c^{-1}$ times these coefficients.
\end{proposition}
\subsection{Reducing Pfaffian Pairs to Pfaffian Parities}
Lawler~\cite{Lawler1976} presented the following reduction of the linear matroid intersection problem to the linear matroid parity problem.
Let $(A_1, A_2)$ be an $r \times n$ matrix pair with common column set $E$.
We define a $2r \times 2n$ matrix $A$ as follows: we associate two columns of $A$ with each $j \in E$ and set the $2r \times 2$ submatrix associated with $j$ as $\begin{psmallmatrix} A_1[\set{j}] & 0 \\ 0 & A_2[\set{j}] \end{psmallmatrix}$.
Through this association, we regard $E$ as the set of lines of $A$.
Then $B \subseteq E$ is a common base of $(A_1, A_2)$ if and only if $B$ is a parity base of $(A, E)$~\cite[Chapter~9.2]{Lawler1976}.
We show that when $(A_1, A_2)$ is Pfaffian, so is $(A, E)$.
\begin{proposition}
Let $(A_1, A_2)$ be an $r \times n$ matrix pair and $(A, E)$ the $2r \times 2n$ matroid parity defined above.
If $(A_1, A_2)$ is Pfaffian with constant $c$, then $(A, E)$ is Pfaffian with constant $\prn{-1}^{\frac{r(r-1)}{2}}c$.
\end{proposition}
\begin{proof}
Let $B \subseteq E$ be a common base of $(A_1, A_2)$ as well as a parity base of $(A, E)$.
By an appropriate column permutation, $A[B]$ is transformed into $\begin{psmallmatrix} A_1[B] & O \\ O & A_2[B] \end{psmallmatrix}$, whose determinant is $\det A_1[B] \det A_2[B] = c$.
The sign of this column permutation is $\prn{-1}^{1 + \cdots + (r-1)} = \prn{-1}^{\frac{r(r-1)}{2}}$.
Hence the claim holds.
\end{proof}
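For instance, for the Pfaffian pair $A_1 = A_2 = I_2$ with $r = 2$ and constant $c = 1$, the reduction produces a $4 \times 4$ matrix $A$ whose unique parity base $B = E$ satisfies $\det A[B] = -1 = \prn{-1}^{\frac{r(r-1)}{2}} c$: sorting the interleaved columns back into the block-diagonal form requires a single column transposition.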
\section{Examples}\label{sec:examples}
In this section, we enumerate discrete structures that can be represented as common bases of Pfaffian pairs or parity bases of Pfaffian parities.
\subsection{Regular Matroids and Spanning Trees}\label{sec:regular_matroids}
A matroid is called \emph{regular} if it is represented by a totally unimodular matrix, or equivalently, it is representable by a matrix over any field.
If $A$ is a totally unimodular matrix, a pair $(A, A)$ is clearly Pfaffian with constant 1.
Hence, as observed by Webb~\cite[Section~3.5]{Webb2004}, the number of bases of $A$ is equal to $\det A\trsp{A}$ by $\mathcal{B}(A) = \mathcal{B}(A, A)$ and \cref{prop:counting_formula_for_pfaffian_pairs}.
This well-known formula on regular matroids was initially indicated by Maurer~\cite{Maurer1976}.
Regular matroids typically arise from graphs.
Let $G = (V, E)$ be a connected undirected graph and $\mathcal{B}(G) \subseteq 2^E$ denote the set of all spanning trees of $G$.
Then $\mathbf{M}(G) \defeq (E, \mathcal{B}(G))$ forms a matroid, called the \emph{graphic matroid} of $G$.
Consider any orientation $\vec{G} = (V, \vec{E})$ of $G$.
Throughout this paper, we denote the directed edge in $\vec{E}$ corresponding to $e \in E$ by $\vec{e}$ and the directed edge set corresponding to $F \subseteq E$ by $\vec{F}$.
We define the \emph{incidence matrix} $A = \prn{A_{v,e}}_{v \in V, e \in E}$ of $\vec{G}$ as a matrix over $\setQ$ by
\begin{align}\label{def:incidence_matrix}
A_{v,e} \defeq \begin{cases*}
+1 & ($v = \tail{\vec{e}}$), \\
-1 & ($v = \head{\vec{e}}$), \\
\phantom{+}0 & (otherwise)
\end{cases*}
\end{align}
for $v \in V$ and $e \in E$, where $\tail{\vec{e}}$ and $\head{\vec{e}}$ denote the tail and the head of $\vec{e}$, respectively.
The incidence matrix $A$ is known to be totally unimodular.
For $r \in V$, let $A^{(r)}$ denote the submatrix of $A$ obtained by removing the row corresponding to $r$.
Then $A^{(r)}$ represents $\mathcal{B}(G)$, i.e., $\mathcal{B}(G) = \mathcal{B}\pbig{A^{(r)}}$.
Hence the number of spanning trees of $G$ is equal to $\det A^{(r)}\trsp{{A^{(r)}}}$, which is the $(r, r)$th cofactor of the \emph{Laplacian matrix} $A\trsp{A}$ of $G$.
This is exactly Kirchhoff's matrix-tree theorem~\cite{Kirchhoff1847}.
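For instance, for the triangle on $V = \set{1, 2, 3}$ with edges oriented as $1 \to 2$, $2 \to 3$, $1 \to 3$ and with $r = 3$, we have $A^{(3)} = \begin{psmallmatrix} 1 & 0 & 1 \\ -1 & 1 & 0 \end{psmallmatrix}$ and $\det A^{(3)}\trsp{{A^{(3)}}} = \det \begin{psmallmatrix} 2 & -1 \\ -1 & 2 \end{psmallmatrix} = 3$, matching the three spanning trees of the triangle.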
Refer to~\cite{Oxley2011} for details on regular and graphic matroids.
\subsection{Regular Delta-Matroids and Euler Tours in 4-Regular Directed Graphs}\label{sec:regular_delta_matroids}
Let $S \in \setQ^{n \times n}$ be a skew-symmetric matrix whose rows and columns are indexed by a finite set $E$.
We also assume that $S$ is \emph{principally unimodular}; that is, any principal minor of $S$ is in $\set{+1, 0, -1}$~\cite{Bouchet1998}.
Since $S$ is skew-symmetric, all the principal minors of $S$ must be $0$ or $+1$.
Define
\begin{align}
\mathcal{F}(S) \defeq \set{F \subseteq E}[\text{$S[F, F]$ is nonsingular}]
\end{align}
and denote $(E, \mathcal{F}(S))$ by $\mathbf{D}(S)$.
For $X \subseteq E$, we let $\mathbf{D}(S) \symdif X \defeq (E, \mathcal{F}(S) \symdif X)$ with
\begin{align}
\mathcal{F}(S) \symdif X \defeq \set{F \symdif X}[F \in \mathcal{F}(S)],
\end{align}
where $F \symdif X$ means the \emph{symmetric difference} of $F$ and $X$, that is, $F \symdif X \defeq (F \setminus X) \cup (X \setminus F)$.
Then $\mathbf{D}(S) \symdif X$ is called the \emph{regular delta-matroid} represented by $S$ (and $X$)~\cite{Bouchet1995,Geelen1995}.
Elements in $\mathcal{F}(S) \symdif X$ are called \emph{feasible sets} of $\mathbf{D}(S) \symdif X$.
Regular delta-matroids are a generalization of regular matroids; see~\cite{Bouchet1998}.
Webb~\cite[Section~3.5]{Webb2004} indicated that the set of nonsingular principal submatrices of a skew-symmetric totally unimodular matrix can be represented by a Pfaffian pair.
This can be slightly generalized to the feasible sets of a regular delta-matroid as follows.
Define matrices $A_1 \defeq \begin{pmatrix} S & I_n\end{pmatrix}$ and $A_2 \defeq \begin{pmatrix} I_n & I_n \end{pmatrix}$ with common column set $E \cup \overline{E}$, where $I_n$ is the identity matrix of order $n = \card{E}$ and $\overline{E}$ is a disjoint copy of $E$ corresponding to the right blocks of $A_1$ and $A_2$.
Note that $A_1$ is not necessarily totally unimodular.
For $J \subseteq E$, denote by $\overline{J}$ the corresponding subset of $\overline{E}$ to $J$.
\begin{proposition}[{see~\cite[Section~3.5]{Webb2004}}]\label{prop:regular_delta_matroid_is_pfaffian_pair}
The matrix pair $(A_1, A_2)$ is Pfaffian with constant 1.
In addition, there is a one-to-one correspondence between common bases of $(A_1, A_2)$ and feasible sets of $\mathbf{D}(S) \symdif X$ given by $B \mapsto (B \cap E) \symdif X$ for $B \in \mathcal{B}(A_1, A_2)$.
\end{proposition}
\begin{proof}
We first show that $(A_1, A_2)$ is Pfaffian.
Note that $J \subseteq E \cup \overline{E}$ with $\card{J} = n$ is a base of $A_2$ if and only if $\overline{E \setminus J} = J \cap \overline{E}$.
Taking such a column subset $J$, put $T_1 \defeq A_1[J]$ and $T_2 \defeq A_2[J]$.
By a row permutation on $T_1$ and $T_2$, we transform $T_1$ and $T_2$ to
%
\begin{align}
T_1 = \begin{pmatrix}
S[J \cap E, J \cap E] & O \\
S[E \setminus J, J \cap E] & I_{n-k}
\end{pmatrix},
\quad
T_2 = \begin{pmatrix}
I_k & O \\
O & I_{n-k}
\end{pmatrix},
\end{align}
%
where $k \defeq \card{J \cap E}$.
Note that $\det T_1 \det T_2$ does not change since the same row permutation is performed on both $T_1$ and $T_2$.
Now we have
%
\begin{align}\label{eq:regular_delta_matroid_as_pfaffian_pair}
\det T_1 \det T_2 = \det S[J \cap E, J \cap E].
\end{align}
%
Since $S$ is skew-symmetric and principally unimodular,~\eqref{eq:regular_delta_matroid_as_pfaffian_pair} is either 0 or 1.
Hence $(A_1, A_2)$ is Pfaffian with constant 1.
The equation~\eqref{eq:regular_delta_matroid_as_pfaffian_pair} also implies that $B \subseteq E \cup \overline{E}$ is a common base of $(A_1, A_2)$ if and only if $\overline{E \setminus B} = B \cap \overline{E}$ and $B \cap E \in \mathcal{F}(S)$.
Hence $B \in \mathcal{B}(A_1, A_2)$ corresponds to $B \cap E \in \mathcal{F}(S)$ one-to-one.
The latter part of the proposition is obtained by taking the symmetric difference with $X$.
\end{proof}
\Cref{prop:regular_delta_matroid_is_pfaffian_pair} yields the following corollary.
\begin{corollary}
The number of feasible sets of a regular delta-matroid $\mathbf{D}(S) \symdif X$ is equal to $\det (S+I_n)$.
\end{corollary}
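As a sanity check, for $S = \begin{psmallmatrix} 0 & 1 \\ -1 & 0 \end{psmallmatrix}$, which is principally unimodular, and $X = \varnothing$, the feasible sets of $\mathbf{D}(S)$ are $\varnothing$ and $\set{1, 2}$ (under the usual convention that the empty matrix is nonsingular), and indeed $\det(S + I_2) = \det \begin{psmallmatrix} 1 & 1 \\ -1 & 1 \end{psmallmatrix} = 2$.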
Taking the symmetric difference with $X$ does not affect the number of feasible sets.
However, it changes the correspondence between elements of $E$ and columns of $(A_1, A_2)$.
Namely, assigning a variable $z_j$ to an element $j \in E$ of $\mathbf{D}(S) \symdif X$ amounts to assigning $z_j$ in $(A_1, A_2)$ to the column $j \in E$ if $j \notin X$ and to the column $\overline{j} \in \overline{E}$ if $j \in X$.
Note this fact when applying the formula~\eqref{eq:symbolic_counting_pairs} to regular delta-matroids.
A combinatorial example of regular delta-matroids was given by Bouchet~\cite{Bouchet1995} as follows.
Let $G = (V, E)$ be a directed 4-regular Eulerian graph; that is, $G$ is strongly connected, and every vertex of $G$ is of in- and out-degree two.
A (directed) \emph{Euler tour} of $G$ is a tour that traverses every edge exactly once.
Any Euler tour $T$ of $G$ visits every vertex exactly twice as $G$ is 4-regular.
For each vertex $v \in V$ with incoming edges $e_1, e_2 \in E$ and outgoing edges $e_3, e_4 \in E$, there are exactly two possible ways for $T$ to visit $v$ twice: either $T$ traverses $e_3$ just after $e_1$ and $e_4$ just after $e_2$, or it traverses $e_4$ just after $e_1$ and $e_3$ just after $e_2$.
Therefore, fixing an Euler tour $U$ of $G$, we can represent every Euler tour $T$ of $G$ as a vertex subset $F_U(T) \subseteq V$ defined as follows: $v \in V$ is in $F_U(T)$ if and only if $T$ visits $v$ in a different way from $U$.
The map $T \mapsto F_U(T)$ is injective.
Define $\mathbf{D}_U(G) \defeq (V, \mathcal{F}_U(G))$ with
$
\mathcal{F}_U(G) \defeq \set{F_U(T)}[\text{$T$ is an Euler tour of $G$}].
$
For each vertex $v \in V$, we label the two directed edges leaving $v$ as $e_v^+$ and $e_v^-$.
Define a skew-symmetric matrix $S^U = \pbig{S^U_{u,v}}_{u,v \in V}$ over $\setQ$ as
\begin{align} \label{def:A_U}
S^U_{u,v} \defeq \begin{cases*}
+1 & ($U$ traverses edges in the order of $\cdots e_u^+ \cdots e_v^+ \cdots e_u^- \cdots e_v^- \cdots$), \\
-1 & ($U$ traverses edges in the order of $\cdots e_u^+ \cdots e_v^- \cdots e_u^- \cdots e_v^+ \cdots$), \\
\phantom{+}0 & (otherwise)
\end{cases*}
\end{align}
for $u, v \in V$.
Then $S^U$ is principally unimodular~\cite[Theorem~11]{Bouchet1995} and $\mathbf{D}_U(G)$ coincides with $\mathbf{D}\pbig{S^U}$~\cite[Corollary~12]{Bouchet1995}.
Hence the set of Euler tours in a 4-regular directed graph can be represented as common bases of a Pfaffian pair through \cref{prop:regular_delta_matroid_is_pfaffian_pair}.
\begin{remark}
For an arbitrary directed graph $G = (V, E)$ in which every vertex has the same in- and out-degree, there exists a formula, the so-called \emph{BEST theorem}, for counting the number of Euler tours in $G$ (see, e.g.,~\cite[Theorem~6.36]{Cai2017}).
This theorem states that the number of Euler tours in $G$ is
%
\begin{align} \label{eq:BEST}
T\prod_{v \in V} (d_v - 1)!,
\end{align}
%
where $d_v$ is the in-degree ($=\text{out-degree}$) of $v \in V$ and $T$ is the number of $r$-arborescences of $G$ with arbitrary root $r \in V$.
In the case where $G$ is 4-regular, the BEST theorem~\eqref{eq:BEST} claims that the number of Euler tours in $G$ is equal to $T$, which can be computed by the directed matrix-tree theorem~\cite{Tutte1948}.
Hence the Pfaffian-pair representation of Euler tours in 4-regular directed graph might seem useless.
Nevertheless, this representation is needed when we apply the formula~\eqref{eq:symbolic_counting_pairs}, which involves the variables $z$, because the corresponding ``symbolic'' version of the BEST theorem is not yet known.
\end{remark}
\subsection{Arborescences}\label{sec:arborescences}
Let $G = (V, E)$ be a directed graph and take a vertex $r \in V$.
An $r$-\emph{arborescence}, or a \emph{directed tree} rooted at $r$, of $G$ is an edge subset $F \subseteq E$ satisfying the following:
\begin{enumerate}[label={(A\arabic*)}]
\item $F$ is a spanning tree if the orientation is ignored.\label{item:A1}
\item The in-degree of every $v \in V \setminus \set{r}$ is exactly one in $F$.\label{item:A2}
\end{enumerate}
It is well-known that $r$-arborescences can be represented as common bases of a matrix pair.
Let $A$ be the incidence matrix~\eqref{def:incidence_matrix} of $G$ and $R = \prn{R_{v,e}}_{v \in V, e \in E}$ a matrix over $\setQ$ defined by
\begin{align}
R_{v,e} \defeq \begin{cases*}
-1 & ($v$ is the head of $e$), \\
\phantom{+}0 & (otherwise)
\end{cases*}
\end{align}
for $v \in V$ and $e \in E$.
The matrix $R$ is totally unimodular since each column has at most one nonzero entry.
Matroids represented by such matrices are called \emph{partition matroids}.
Put $A_1 \defeq A^{(r)}$ and $A_2 \defeq R^{(r)}$.
Then $B \subseteq E$ is a base of $A_1$ and $A_2$ if and only if $B$ satisfies~\ref{item:A1} and~\ref{item:A2}, respectively.
Hence common bases of $(A_1, A_2)$ correspond to $r$-arborescences of $G$.
The matrix $L \defeq A\trsp{R}$ is called the (directed) \emph{Laplacian matrix} of $G$.
The directed matrix-tree theorem due to Tutte~\cite{Tutte1948} claims that the $(r,r)$th cofactor of $L$, which is equal to $\det A_1 \trsp{A_2}$, coincides with the number of $r$-arborescences of $G$.
This implies:
\begin{proposition}\label{prop:arborescence_pfaffian_pair}
The pair $(A_1, A_2)$ is Pfaffian with constant 1.
\end{proposition}
\begin{proof}
Since both $A_1$ and $A_2$ are totally unimodular, $\det A_1[B] \det A_2[B]$ is $\pm1$ for all $B \in \mathcal{B}(A_1, A_2)$.
If $(A_1, A_2)$ is not Pfaffian, $\det A_1 \trsp{A_2}$ is strictly less than $\card{\mathcal{B}(A_1, A_2)}$ due to cancellations in the right-hand side of~\eqref{eq:cauchy_binet}.
This contradicts the statement of the directed matrix-tree theorem.
\end{proof}
We can directly show that $(A_1, A_2)$ is Pfaffian without the directed matrix-tree theorem; see the proof of~\cite[Theorem~6.35]{Cai2017} for example.
Conversely, such a direct proof yields the directed matrix-tree theorem via the Cauchy--Binet formula~\eqref{eq:cauchy_binet}.
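As a concrete illustration of the correspondence above, the following Python sketch (illustration only) counts $r$-arborescences as $\det A_1 \trsp{A_2}$, assuming the standard signing of the incidence matrix ($+1$ at the tail and $-1$ at the head of each edge) and writing $A^{(r)}$ for $A$ with the row of $r$ removed.
\begin{verbatim}
# Illustration only: counting r-arborescences as det(A_1 A_2^T).
import numpy as np

def count_arborescences(n, edges, r=0):
    m = len(edges)
    A = np.zeros((n, m))   # incidence matrix: +1 at tail, -1 at head
    R = np.zeros((n, m))   # partition-matroid matrix: -1 at the head of each edge
    for j, (u, v) in enumerate(edges):
        A[u, j] += 1
        A[v, j] -= 1
        R[v, j] -= 1
    A1 = np.delete(A, r, axis=0)   # A^{(r)}
    A2 = np.delete(R, r, axis=0)   # R^{(r)}
    return round(np.linalg.det(A1 @ A2.T))

# Example: a directed triangle 0 -> 1 -> 2 -> 0 plus a chord 0 -> 2.
print(count_arborescences(3, [(0, 1), (1, 2), (2, 0), (0, 2)]))   # prints 2
\end{verbatim}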
\subsection{Perfect Matchings of Pfaffian Graphs}\label{sec:perfect_matchings}
A \emph{matching} of an undirected graph $G$ is an edge subset $M$ such that no two distinct edges in $M$ share the same end.
We also define a matching for a directed graph by ignoring its orientation.
A matching $M$ is said to be \emph{perfect} if every vertex of $G$ is covered by some edge in $M$.
Matching theory has two faces depending on whether $G$ is bipartite or general.
First, let $G = (U \cup V, E)$ be a simple undirected bipartite graph.
The vertex set of $G$ is bipartitioned as $\set{U, V}$ with $n \defeq \card{U} = \card{V}$ and all edges are between $U$ and $V$.
We define totally unimodular matrices $A_U = \pbig{A^U_{u,e}}_{u \in U, e \in E}$ and $A_V = \pbig{A^V_{v,e}}_{v \in V, e \in E}$ as
\begin{align}\label{def:bipartite_matching_A1_A2}
A^U_{u,e} \defeq \begin{cases}
+1 & (u \in e), \\
\phantom{+}0 & (\text{otherwise}),
\end{cases}
\quad
A^V_{v,e} \defeq \begin{cases}
+1 & (v \in e), \\
\phantom{+}0 & (\text{otherwise})
\end{cases}
\end{align}
for $u \in U, v \in V$ and $e \in E$.
Note that both $\mathbf{M}(A_U)$ and $\mathbf{M}(A_V)$ are partition matroids.
Then $M \subseteq E$ is a perfect matching of $G$ if and only if $M \in \mathcal{B}(A_U, A_V)$.
Suppose that vertices in $U$ and $V$ are ordered as $u_1, \ldots, u_n$ and $v_1, \ldots, v_n$.
For $i \in \intset{n}$, the $i$th rows of $A_U$ and $A_V$ are associated with $u_i$ and $v_i$, respectively.
A perfect matching $M$ of $G$ uniquely corresponds to a permutation $\sigma \in \sym_n$ on $\intset{n}$ such that $\set[\big]{u_i, v_{\sigma(i)}} \in M$ for all $i \in \intset{n}$.
Denote this permutation by $\sigma_M$.
We define the \emph{sign} of $M$ (with respect to the current ordering of $U$ and $V$) as $\sgn M \defeq \sgn \sigma_M$.
Let $z = \prn{z_e}_{e \in E}$ be a vector of distinct indeterminates indexed by $E$.
Putting $A_1 \defeq A_U$ and $A_2 \defeq A_V$, we call the matrix $A_1 D(z) \trsp{A_2}$ the \emph{Edmonds matrix} of $G$.
Its $(i,j)$th entry is $z_e$ if $e = \set{u_i, v_j} \in E$ and 0 otherwise for $i, j \in \intset{n}$.
By the definition~\eqref{def:determinant} of the determinant, it holds
\begin{align}\label{eq:edmonds_1}
\det A_1 D(z) \trsp{A_2} = \sum_{M \in \mathcal{B}(A_1, A_2)} \sgn M \prod_{e \in M} z_e.
\end{align}
On the other hand, by~\eqref{eq:cauchy_binet_1}, we have
\begin{align}\label{eq:edmonds_2}
\det A_1 D(z) \trsp{A_2} = \sum_{M \in \mathcal{B}(A_1, A_2)} \det A_1[M] \det A_2[M] \prod_{e \in M} z_e.
\end{align}
Comparing the coefficients of~\eqref{eq:edmonds_1} and~\eqref{eq:edmonds_2}, we have the following.
\begin{lemma}\label{lem:bipartite_matching_sign}
The sign of a perfect matching $M$ of $G$ is equal to $\det A_1[M] \det A_2[M]$.
\end{lemma}
Consider an orientation $\vec{G} = (U \cup V, \vec{E})$ of $G$.
We define a vector $s = \prn{s_e}_{e \in E}$ indexed by $E$ as
\begin{align}
s_e \defeq \begin{cases}
+1 & (\vec{e} = (u, v)), \\
-1 & (\vec{e} = (v, u))
\end{cases}
\end{align}
for $e = \set{u, v} \in E$ with $u \in U$ and $v \in V$.
Put $\vec{A}_1 \defeq A_1 D(s)$ and $\vec{A}_2 \defeq A_2$.
Namely, $\vec{A}_1$ is a matrix obtained from $A_1$ by reversing the sign of every column corresponding to an edge from $V$ to $U$.
Note that $\mathcal{B}(A_1, A_2) = \mathcal{B}(\vec{A}_1, \vec{A}_2)$.
The matrix $N = \pbig{N_{i,j}}_{i,j \in \intset{n}} \defeq \vec{A}_1 \trsp{{\vec{A}_2}} = A_1 D(s) \trsp{A_2}$ is called the (\emph{directed}) \emph{bipartite adjacency matrix} of $\vec{G}$.
Its $(i,j)$th entry $N_{i,j}$ is
\begin{align}\label{eq:bipartite_adjacency_matrix}
N_{i,j} = \begin{cases}
+1 & ((u_i, v_j) \in \vec{E}), \\
-1 & ((v_j, u_i) \in \vec{E}), \\
\phantom{+}0 & (\text{otherwise})
\end{cases}
\end{align}
for $i, j \in \intset{n}$.
Recall that, for a matching $M$ of $G$, the corresponding arc subset $\vec{M}$ of $\vec{G}$ is also called a matching of $\vec{G}$.
We define the \emph{sign} of a perfect matching $\vec{M}$ of $\vec{G}$ as
\begin{align}\label{def:sign_of_directed_perfect_bipartite_matching}
\sgn \vec{M}
\defeq \sgn M \prod_{e \in M} s_e
= \sgn M \prod_{i=1}^n N_{i, \sigma_M(i)}
\in \set{+1, -1}.
\end{align}
\begin{lemma}\label{lem:bipartite_matching_sign_directed}
The sign of a perfect matching $\vec{M}$ of $\vec{G}$ is equal to $\det \vec{A}_1[M] \det \vec{A}_2[M]$.
\end{lemma}
\begin{proof}
By \cref{lem:bipartite_matching_sign}, we have $\sgn M = \det A_1[M] \det A_2[M]$.
We also have $\det \vec{A}_1[M] = \det A_1[M] \prod_{e \in M} s_e$ and $\det \vec{A}_2[M] = \det A_2[M]$ by the definitions of $\vec{A}_1$ and $\vec{A}_2$.
Hence the claim holds.
\end{proof}
An orientation $\vec{G}$ of $G$ is called \emph{Pfaffian} if the signs of all perfect matchings of $\vec{G}$ are the same~\cite{Kasteleyn1961,Temperley1961}.
The following theorem, which was observed by Webb~\cite{Webb2004}, follows from \cref{lem:bipartite_matching_sign_directed}.
\begin{theorem}[{\cite[Observation~3.7]{Webb2004}}]\label{thm:bipartite_matching_pfaffian}
Let $G$ be a bipartite graph, $\vec{G}$ an orientation of $G$, and $(\vec{A}_1, \vec{A}_2)$ the matrix pair defined above from $\vec{G}$.
Then $\mathcal{B}(\vec{A}_1, \vec{A}_2)$ coincides with the set of perfect matchings of $G$.
In addition, if $\vec{G}$ is Pfaffian, $(\vec{A}_1, \vec{A}_2)$ is also Pfaffian with constant $\sgn \vec{M}$, where $M$ is an arbitrary perfect matching of $G$.
\end{theorem}
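The following small Python sketch (an illustration only, not an algorithm from this paper) verifies \cref{thm:bipartite_matching_pfaffian} on a $4$-cycle: it builds the directed bipartite adjacency matrix $N$ of~\eqref{eq:bipartite_adjacency_matrix}, brute-forces the signs~\eqref{def:sign_of_directed_perfect_bipartite_matching} of all perfect matchings, and checks that $\det N$ equals the common sign times the number of perfect matchings.
\begin{verbatim}
# Illustration only: checking Theorem bipartite_matching_pfaffian on a 4-cycle.
from itertools import permutations
import numpy as np

n = 2
# arcs (i, j, d): an edge between u_i and v_j, oriented U -> V if d = +1, V -> U if d = -1.
arcs = [(0, 0, +1), (1, 0, +1), (1, 1, +1), (0, 1, -1)]
N = np.zeros((n, n))                  # directed bipartite adjacency matrix
for i, j, d in arcs:
    N[i, j] = d

signs = []                            # sgn of every perfect matching
for sigma in permutations(range(n)):
    if all(N[i, sigma[i]] != 0 for i in range(n)):
        sgn_sigma = round(np.linalg.det(np.eye(n)[list(sigma)]))
        signs.append(sgn_sigma * int(np.prod([N[i, sigma[i]] for i in range(n)])))

print("Pfaffian orientation:", len(set(signs)) == 1)        # True
print("det N =", round(np.linalg.det(N)),                   # 2
      " c * #PM =", signs[0] * len(signs))                  # 2
\end{verbatim}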
We extend the above arguments to nonbipartite graphs.
Let $G = (V, E)$ be a simple undirected graph that is not necessarily bipartite.
Suppose that $\card{V}$ is even and vertices are ordered as $v_1, \ldots, v_{2n}$.
We define a totally unimodular matrix $A \in \setR^{\card{V} \times 2\card{E}}$ as follows: each row is indexed by a vertex $v \in V$ and each edge $e \in E$ is associated with two columns.
For $v \in V$ and $e = \set{v_i, v_j} \in E$ with $i < j$, the $1 \times 2$ submatrix of $A$ corresponding to $v$ and $e$ is defined to be $\begin{pmatrix} +1 & 0 \end{pmatrix}$ if $v = v_i$, $\begin{pmatrix} 0 & +1 \end{pmatrix}$ if $v = v_j$ and $O$ otherwise.
We regard each $e \in E$ as a line of $A$.
Then $M \subseteq E$ is a perfect matching of $G$ if and only if $M \in \mathcal{B}(A, E)$~\cite[Chapter~9.2]{Lawler1976}.
Recall that $F_{2n}$ is the subset of $\sym_{2n}$ defined by~\eqref{def:F}.
A perfect matching $M$ of $G$ uniquely corresponds to a permutation $\sigma \in F_{2n}$ such that $\set[\big]{v_{\sigma(2i-1)}, v_{\sigma(2i)}} \in M$ for all $i \in \intset{n}$.
Denote this permutation by $\sigma_M$.
We define the \emph{sign} of $M$ as $\sgn M \defeq \sgn \sigma_M$.
Let $z = \prn{z_e}_{e \in E}$ be a vector of distinct indeterminates indexed by $E$.
The skew-symmetric matrix $A \Delta(z) \trsp{A}$ is called the \emph{Tutte matrix} of $G$.
Its $(i,j)$th entry is equal to $z_e$ if $e = \set{v_i, v_j} \in E$ and $i < j$, to $-z_e$ if $e = \set{v_i, v_j} \in E$ and $i > j$, and to 0 otherwise for $i, j \in \intset{2n}$.
By the definition~\eqref{def:pfaffian} of the Pfaffian, it holds
\begin{align}\label{eq:tutte_1}
\pf A \Delta(z) \trsp{A} = \sum_{M \in \mathcal{B}(A, E)} \sgn M \prod_{e \in M} z_e.
\end{align}
We also have
\begin{align}\label{eq:tutte_2}
\pf A \Delta(z) \trsp{A} = \sum_{M \in \mathcal{B}(A, E)} \det A[M] \prod_{e \in M} z_e
\end{align}
by~\eqref{eq:pfaffian_cauchy_binet_1}.
Hence the following holds as an extension of \cref{lem:bipartite_matching_sign}.
\begin{lemma}\label{lem:matching_sign}
The sign of a perfect matching $M$ of $G$ is equal to $\det A[M]$.
\end{lemma}
In the same way as the bipartite case, we next consider an orientation $\vec{G} = (V, \vec{E})$ of $G$.
Define a vector $s = \prn{s_e}_{e \in E}$ indexed by $E$ as follows: for each $\vec{e} = (v_i, v_j) \in \vec{E}$, we set
\begin{align}
s_e \defeq \begin{cases}
+1 & (i < j), \\
-1 & (i > j).
\end{cases}
\end{align}
We also construct a symmetric block diagonal matrix $X = \diag\prn{X_e}_{e \in E}$, where $X_e$ is a $2 \times 2$ matrix defined by $X_e \defeq I_2$ if $s_e = +1$ and $X_e \defeq \begin{psmallmatrix} \phantom{+}0 & +1 \\ +1 & \phantom{+}0 \end{psmallmatrix}$ if $s_e = -1$ for $e \in E$.
Put $\vec{A} \defeq AX$, i.e., $\vec{A}$ is obtained from $A$ by interchanging the two columns associated with each edge $(v_i, v_j) \in \vec{E}$ with $i > j$.
Note that $X\Delta(\onevec)X = \Delta(s)$ and $\mathcal{B}(A, E) = \mathcal{B}(\vec{A}, E)$ hold.
The skew-symmetric matrix $S = \pbig{S_{i,j}}_{i,j \in \intset{2n}} \defeq \vec{A} \Delta(\onevec) \trsp{\vec{A}} = A \Delta(s) \trsp{A}$ is called the (\emph{directed}) \emph{skew-symmetric adjacency matrix} of $\vec{G}$.
It can be confirmed that
\begin{align}\label{eq:skew_symmetric_adjacency_matrix}
S_{i,j} = \begin{cases}
+1 & ((v_i, v_j) \in \vec{E}), \\
-1 & ((v_j, v_i) \in \vec{E}), \\
\phantom{+}0 & (\text{otherwise})
\end{cases}
\end{align}
holds for $i, j \in \intset{2n}$.
For a perfect matching $M$ of $G$, it holds $s_e = S_{\sigma_M(2i-1), \sigma_M(2i)}$ for every $e = \set[\big]{v_{\sigma_M(2i-1)}, v_{\sigma_M(2i)}} \in M$ since $\sigma_M(2i-1) < \sigma_M(2i)$.
We define the \emph{sign} of a perfect matching $\vec{M}$ of $\vec{G}$ as
\begin{align}
\sgn \vec{M}
\defeq \sgn M \prod_{e \in M} s_e
= \sgn M \prod_{i=1}^n S_{\sigma_M(2i-1), \sigma_M(2i)}
\in \set{+1, -1}.
\end{align}
\begin{lemma}\label{lem:matching_sign_directed}
The sign of a perfect matching $\vec{M}$ of $\vec{G}$ is equal to $\det \vec{A}[M]$.
\end{lemma}
\begin{proof}
The claim follows from $\det \vec{A}[M] = \det A[M] \prod_{e \in M} s_e$ and $\sgn M = \det A[M]$ by \cref{lem:matching_sign}.
\end{proof}
An orientation $\vec{G}$ of $G$ is also called \emph{Pfaffian} if the signs of all perfect matchings of $\vec{G}$ are constant.
The following holds from \cref{lem:matching_sign_directed} as a generalization of \cref{thm:bipartite_matching_pfaffian}.
\begin{theorem}\label{thm:matching_pfaffian}
Let $G = (V, E)$ be a graph, $\vec{G}$ an orientation of $G$ and $\vec{A}$ the matrix defined above from $\vec{G}$.
Then $\mathcal{B}(\vec{A}, E)$ coincides with the set of perfect matchings of $G$.
In addition, if $\vec{G}$ is Pfaffian, $(\vec{A}, E)$ is also Pfaffian with constant $\sgn \vec{M}$, where $M$ is an arbitrary perfect matching of $G$.
\end{theorem}
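A well-known consequence of \cref{thm:matching_pfaffian} is that the number of perfect matchings of a graph equipped with a Pfaffian orientation equals $\abs{\pf S} = \sqrt{\det S}$ for the skew-symmetric adjacency matrix $S$ of~\eqref{eq:skew_symmetric_adjacency_matrix}.
The following Python sketch (illustration only; the stated orientation of $K_4$ happens to be Pfaffian) checks this equality by brute force.
\begin{verbatim}
# Illustration only: perfect matchings of a Pfaffian-oriented graph = sqrt(det S).
import numpy as np

n = 4
arcs = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3), (3, 1)]   # an orientation of K_4
S = np.zeros((n, n))                                      # skew-symmetric adjacency matrix
for u, v in arcs:
    S[u, v], S[v, u] = +1, -1

def count_pm(vertices, edges):
    """Brute-force count of perfect matchings on the given vertex list."""
    if not vertices:
        return 1
    u, rest = vertices[0], vertices[1:]
    return sum(count_pm([w for w in rest if w != v], edges)
               for v in rest if (u, v) in edges or (v, u) in edges)

edges = set(arcs)
print("perfect matchings:", count_pm(list(range(n)), edges))   # 3
print("sqrt(det S)      :", round(np.sqrt(np.linalg.det(S))))  # 3
\end{verbatim}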
\subsection{Spanning Hypertrees of 3-Pfaffian 3-Uniform Hypergraphs}\label{sec:spanning_hypertrees}
\input{3-Pfaffian.tex}
\section{Examples from Disjoint \calS-Path Problem}\label{sec:lgv}
Continuing from \cref{sec:examples}, this section provides further examples of Pfaffian pairs and parities in the framework of Mader's disjoint \calS-path problem.
\input{DAG.tex}
\input{Undirected.tex}
\input{STU.tex}
\section{Algorithms}\label{sec:algorithms}
In this section, we present algorithms for Pfaffian pairs and parities.
In \cref{sec:unweighted_counting}, we see that the current fastest randomized algorithms~\cite{Cheung2014,Harvey2009} for the linear matroid intersection and parity problems can be derandomized for Pfaffian pairs and parities with $\ch(\setK) = 0$.
\Cref{sec:counting_on_weighted_pfaffian_pairs,sec:counting_on_weighted_pfaffian_parities} describe counting algorithms for minimum-weight common bases of Pfaffian pairs and minimum-weight parity bases of Pfaffian parities, respectively.
Unless otherwise stated, we deal with matrices over a field $\setK$, whose characteristic is denoted by $\ch(\setK)$.
We assume that we can perform arithmetic operations on $\setK$ in constant time.
\subsection{Counting on Unweighted Pfaffian Pairs and Parities}\label{sec:unweighted_counting}
\Cref{prop:counting_formula_for_pfaffian_pairs,prop:counting_formula_for_pfaffian_prities} claim that the number of common bases of a Pfaffian pair $(A_1, A_2)$ of constant $c$ is equal to $c^{-1} \det A_1 \trsp{A_2}$ and the number of parity bases of a Pfaffian parity $(A, L)$ of constant $c$ is equal to $c^{-1} \pf A \Delta(\onevec) \trsp{A}$, both over $\setK$.
Therefore, if the value of $c$ is already known, we can compute these quantities just by performing matrix computations.
\begin{theorem}\label{thm:complexity_c_known}
Suppose that we are given the values of the constants.
Then the following hold:
\begin{enumerate}
\item We can compute the number of common bases of an $r \times n$ Pfaffian pair modulo $\ch(\setK)$ in deterministic $\Order\prn{nr^{\omega-1}}$-time.\label{item:complexity_c_known_1}
\item We can compute the number of parity bases of a $2r \times 2n$ Pfaffian parity modulo $\ch(\setK)$ in deterministic $\Order\prn{nr^{\omega-1} + r^3}$-time.
When $\ch(\setK) = 0$, the running time can be improved to $\Order\prn{nr^{\omega-1}}$.\label{item:complexity_c_known_2}
\end{enumerate}
\end{theorem}
\begin{proof}
\ref{item:complexity_c_known_1}
For an $r \times n$ Pfaffian pair $(A_1, A_2)$, we can compute the matrix multiplication $A_1 \trsp{A_2}$ in $\Order\prn{nr^{\omega-1}}$-time and its determinant in $\Order\prn{r^\omega}$-time~\cite[Theorem~6.6]{Aho1974}.
\ref{item:complexity_c_known_2}
Let $(A, L)$ be a $2r \times 2n$ Pfaffian parity.
We rewrite $A \Delta(\onevec) \trsp{A}$ as
%
\begin{align}\label{eq:reqrite_ADeltaA_onevec}
A \Delta(\onevec) \trsp{A} = A_1 \trsp{A_2} - A_2 \trsp{A_1}
\end{align}
%
using~\eqref{eq:rewrite_ADeltaA}, where $A_1, A_2 \in \setK^{2r \times n}$ are the submatrices of $A$ defined in the same way as~\eqref{eq:rewrite_ADeltaA}.
Then we can compute $A \Delta(\onevec) \trsp{A}$ in $\Order\prn{nr^{\omega-1}}$-time through~\eqref{eq:reqrite_ADeltaA_onevec}.
The computation of the Pfaffian requires $\Order(r^3)$-time via naive Gaussian elimination.
While we can compute the square of the Pfaffian using the fast determinant computation through~\eqref{eq:pf_det}, in general we cannot determine which of the two square roots of $\det A \Delta(\onevec) \trsp{A}$ is $\pf A \Delta(\onevec) \trsp{A}$.
When $\ch(\setK) = 0$, we have $c^{-1} \pf A \Delta(\onevec) \trsp{A} \ge 0$ since it is the cardinality of $\mathcal{B}(A, L)$.
Thus we can compute the Pfaffian in $\Order(r^\omega)$-time.
\end{proof}
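The $\Order(r^3)$ Pfaffian computation mentioned in the proof can be realized by skew-symmetric Gaussian elimination.
The following Python routine is one minimal (unoptimized) sketch of this step: the matrix is reduced by congruence transformations, which leave the Pfaffian unchanged up to the sign of row/column swaps, and the pivots of the resulting $2 \times 2$ blocks multiply to the Pfaffian.
\begin{verbatim}
# Illustration only: Pfaffian of a skew-symmetric matrix by O(r^3) elimination.
import numpy as np

def pfaffian(S):
    S = S.astype(float).copy()
    n = S.shape[0]
    val = 1.0
    for k in range(0, n, 2):
        piv = next((j for j in range(k + 1, n) if abs(S[j, k]) > 1e-12), None)
        if piv is None:
            return 0.0
        if piv != k + 1:                            # swap rows/columns piv and k+1:
            S[[k + 1, piv]] = S[[piv, k + 1]]       # a congruence of determinant -1,
            S[:, [k + 1, piv]] = S[:, [piv, k + 1]]
            val = -val                              # so the Pfaffian flips sign
        val *= S[k, k + 1]
        for j in range(k + 2, n):                   # eliminate columns k and k+1 below
            for (p, q) in ((k + 1, k), (k, k + 1)): # by congruence (Pfaffian unchanged)
                c = S[j, q] / S[p, q]
                S[j, :] -= c * S[p, :]
                S[:, j] -= c * S[:, p]
    return val

S = np.array([[0, 1, 1, 1], [-1, 0, 1, -1], [-1, -1, 0, 1], [-1, 1, -1, 0]])
print(pfaffian(S))   # 3.0 = S12*S34 - S13*S24 + S14*S23
\end{verbatim}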
In practice, the value of $c$ can be typically retrieved from the reduction of discrete structures to Pfaffian pairs or parities, as seen in \cref{sec:examples,sec:lgv}.
However, if this is not the case, one common base or parity base $B$ must first be obtained in order to compute $c$.
If we stick to deterministic algorithms~\cite{Gabow1996,Orlin2008}, solving the linear matroid intersection or parity problem makes the running time larger than $\Order\prn{nr^{\omega-1}}$; alternatively, we can employ randomized algorithms~\cite{Cheung2014,Harvey2009} to keep the running time within this bound.
We face the same trade-off when finding one common or parity base even if we know $c$.
Indeed, for Pfaffian pairs and parities with $\ch(\setK) = 0$, we can derandomize the linear matroid intersection algorithm of Harvey~\cite{Harvey2009} and the linear matroid parity algorithm of Cheung--Lau--Leung~\cite{Cheung2014}.
In these algorithms, randomness is used only to find a vector over $\setK$ satisfying some genericity conditions, summarized below.
See~\cite[Section~4]{Harvey2009} and~\cite[Section~6]{Cheung2014} for details on their algorithms.
Let $(A_1, A_2)$ be a matrix pair with common column set $E$.
A column subset $J \subseteq E$ is said to be \emph{extensible} if there exists a common base of $(A_1, A_2)$ containing $J$.
Similarly, for a matroid parity $(A, L)$, we call $J \subseteq L$ \emph{extensible}%
\footnote{
Cheung--Lau--Leung~\cite{Cheung2014} call such $J$ ``growable.''
}
if there exists a parity base of $(A, L)$ containing $J$.
For a vector $z = \prn{z_j}_{j \in E}$ and $J \subseteq E$, let $\phi_J(z)$ denote the vector whose components are defined as
\begin{align}
{\phi_J(z)}_j \defeq \begin{cases}
0 & (j \in J), \\
z_j & (j \in E \setminus J)
\end{cases}
\end{align}
for $j \in E$.
We also define $\phi_J(z)$ for $z = \prn{z_l}_{l \in L}$ and $J \subseteq L$ in the same way.
Recall that matrices $\Xi(z)$ and $\Phi(z)$ are defined by~\eqref{def:Xi} and by~\eqref{def:Phi}, respectively.
\begin{lemma}[{\cite{Cheung2014,Harvey2009}}]\label{lem:validity_of_derandomization}
The following hold.
%
\begin{enumerate}
\item Let $(A_1, A_2)$ be an $r \times n$ matrix pair with common column set $E$.
Suppose that we are given a vector $z = \prn{z_j}_{j \in E} \in \setK^n$ such that $J$ is extensible if and only if $\Xi\prn{\phi_J(z)}$ is nonsingular for every $J \subseteq E$.
Then we can construct a common base of $(A_1, A_2)$ (if it exists) in deterministic $\Order\prn{nr^{\omega-1}}$-time.\label{item:validity_of_derandomization_pairs}
\item Let $(A, L)$ be a $2r \times 2n$ matroid parity.
Suppose that we are given a vector $z = \prn{z_l}_{l \in L} \in \setK^n$ such that $J$ is extensible if and only if $\Phi\prn{\phi_J(z)}$ is nonsingular for every $J \subseteq L$.
Then we can construct a parity base of $(A, L)$ (if it exists) in deterministic $\Order\prn{nr^{\omega-1}}$-time.\label{item:validity_of_derandomization_parities}
\end{enumerate}
\end{lemma}
It is shown in~\cite[Theorem~4.4]{Harvey2009} and in~\cite[Theorem~6.4]{Cheung2014} that a vector of distinct indeterminates satisfies the requirements of $z$ in \cref{lem:validity_of_derandomization}.
The algorithms of Harvey~\cite{Harvey2009} and Cheung--Lau--Leung~\cite{Cheung2014} use a random vector over $\setK$ instead of indeterminates to avoid symbolic computations.
For Pfaffian pairs and parities, we can use $\onevec$ for $z$ as follows.
\begin{lemma}\label{lem:derandomizing_for_pfaffian}
Let $\setK$ be a field of characteristic zero.
For Pfaffian pairs and parities over $\setK$, we can choose $z = \onevec$ in \cref{lem:validity_of_derandomization}.
\end{lemma}
\begin{proof}
Let $(A_1, A_2)$ be a Pfaffian pair of constant $c$ with common column set $E$.
By \cref{prop:counting_formula_for_pfaffian_pairs}, we have
%
\begin{align}
\det \Xi\prn{\phi_J(\onevec)}
= c^{-1} \sum_{B \in \mathcal{B}(A_1, A_2)} \prod_{j \in E \setminus B} {\phi_J(\onevec)}_j
= c^{-1} \card{\set{B \in \mathcal{B}(A_1, A_2)}[J \subseteq B]}
\end{align}
%
for $J \subseteq E$.
Since $\ch(\setK) = 0$, the set $\set{B \in \mathcal{B}(A_1, A_2)}[J \subseteq B]$ is nonempty if and only if its cardinality is nonzero over $\setK$.
Hence the nonsingularity of $\Xi(\phi_J(\onevec))$ is equivalent to the extensibility of $J$.
The same argument can be applied to Pfaffian parities by using \cref{prop:counting_formula_for_pfaffian_prities}.
Let $(A, L)$ be a Pfaffian parity of constant $c$.
Then
%
\begin{align}
\pf \Phi\prn{\phi_J(\onevec)}
= c^{-1} \sum_{B \in \mathcal{B}(A, L)} \prod_{l \in L \setminus B} {\phi_J(\onevec)}_l
= c^{-1} \card{\set{B \in \mathcal{B}(A, L)}[J \subseteq B]}
\end{align}
%
for $J \subseteq L$.
Thus $J$ is extensible if and only if $\Phi(\phi_J(\onevec))$ is nonsingular.
\end{proof}
The proof of \cref{lem:derandomizing_for_pfaffian} can also be seen as alternative simple proofs of~\cite[Theorem~4.4]{Harvey2009} and~\cite[Theorem~6.4]{Cheung2014}.
Now \cref{thm:complexity_of_unweighted_counting} is obtained as a conclusion of \cref{thm:complexity_c_known,lem:validity_of_derandomization,lem:derandomizing_for_pfaffian}.
\subsection{Counting on Weighted Pfaffian Pairs}\label{sec:counting_on_weighted_pfaffian_pairs}
Let $(A_1, A_2)$ be an $r \times n$ weighted Pfaffian pair of constant $c$ with column weight $\funcdoms{w}{E}{\setR}$.
In this section, we consider counting the number of minimum-weight common bases of $(A_1, A_2)$ over $\setK$.
While we can compute it by naively expanding $\det A_1 D\prn{\theta^w} \trsp{A_2}$ or $\det \Xi\prn{\theta^w}$ from \cref{prop:algebraic_weighted_pair}, this expansion requires pseudo-polynomial time with respect to the maximum absolute value of a weight (assuming $w$ to be integral).
Instead, we reduce the problem to the counting on an unweighted Pfaffian pair.
We introduce some notions to make our descriptions rigorous.
Since $w$ is real-valued, the entries and the determinant of $A_1 D\prn{\theta^w} \trsp{A_2}$ are formal $\setK$-linear combinations of real powers of $\theta$.
Namely, each such combination $f(\theta)$ is formally expressed as
\begin{align}
f(\theta) = \sum_{x \in X} a_x \theta^x
\end{align}
with finite $X \subseteq \setR$ and $a_x \in \setK$ for $x \in X$.
Abusing terminology, we call $f(\theta)$ a \emph{polynomial} in $\theta$.
We define the \emph{degree} $\deg f(\theta)$ and the \emph{order} $\ord f(\theta)$ of $f(\theta)$ as the maximum and the minimum $x \in X$ such that $a_x \ne 0$, respectively.
We set $\deg 0 \defeq -\infty$ and $\ord 0 \defeq +\infty$ for convenience.
The \emph{constant term} of $f(\theta)$ means $a_0$.
We begin to describe the algorithm.
Suppose that $(A_1, A_2)$ has at least one common base and we have obtained a minimum-weight common base $B \in \mathcal{B}(A_1, A_2)$ by solving the weighted linear matroid intersection problem.
We first perform row transformations on $A_1$ and $A_2$ so that $A_1[B]$ and $A_2[B]$ become the identity matrix $I_r$.
This operation, called \emph{pivoting}, keeps $(A_1, A_2)$ Pfaffian but changes its constant to 1.
Now we can regard the row sets of $A_1$ and $A_2$ as $B$ since $A_1[B]$ and $A_2[B]$ are identity.
Frank's \emph{weight splitting lemma}~\cite{Frank1981} reveals the dual structure of the weighted matroid intersection problem.
It claims that there exist $\funcdoms{w_1, w_2}{E}{\setR}$ such that
\begin{enumerate}[label={(W\arabic*)}]
\item $w_1(j) + w_2(j) = w(j)$ for $j \in E$, and\label{item:W1}
\item a common base $B' \in \mathcal{B}(A_1, A_2)$ minimizes the weight $w$ if and only if $B'$ minimizes $w_1$ among all bases of $A_1$ and $B'$ minimizes $w_2$ among all bases of $A_2$.\label{item:W2}
\end{enumerate}
Let $w_1, w_2$ be split weights satisfying~\ref{item:W1} and~\ref{item:W2}.
The following observation is easy but important.
\begin{proposition}\label{prop:weighted_intersection_cocircuit}
For $k = 1, 2$, $u \in B$ and $j \in E$, if the $(u,j)$th entry of $A_k$ is nonzero, then it holds $w_k(u) \le w_k(j)$.
\end{proposition}
\begin{proof}
By $A_k[B] = I_r$, the set $B' \defeq B \setminus \set{u} \cup \set{j}$ is a base of $A_k$.
If $w_k(u) > w_k(j)$, we have $w_k(B') < w_k(B)$, which contradicts~\ref{item:W2} and the minimality of $B$.
\end{proof}
For $k = 1,2$, let $A_k^\# \in \setK^{r \times n}$ be the matrix with row set $B$ and column set $E$ whose $(u,j)$th entry $\pbig{A_k^\#}_{u,j}$ is defined by
\begin{align}\label{def:A_k_sharp}
\pbig{A_k^\#}_{u,j} \defeq \begin{cases}
\text{the $(u,j)$th entry of $A_k$} & (w_k(u) = w_k(j)), \\
0 & (\text{otherwise})
\end{cases}
\end{align}
for $u \in B$ and $j \in E$.
\begin{lemma}\label{lem:pair_sharp}
The set of minimum-weight common bases of $(A_1, A_2)$ with respect to $w$ is equal to $\mathcal{B}\pbig{A_1^\#, A_2^\#}$.
In addition, $\pbig{A_1^\#, A_2^\#}$ is Pfaffian with constant 1.
\end{lemma}
\begin{proof}
We first show that $\mathcal{B}\pbig{A_k^\#}$ is the set of minimum-weight bases of $A_k$ with respect to the weight $w_k$ for $k = 1,2$.
Then the first claim of the lemma follows from~\ref{item:W2}.
Define $\funcdoms{p}{B}{\setR}$ by $p(u) \defeq -w_k(u)$ for $u \in B$ and put $\tilde{A}_k(\theta) \defeq D\pbig{\theta^p} A_k D\pbig{\theta^{w_k}}$, where $\theta$ is an indeterminate.
Note that each nonzero entry in $\tilde{A}_k(\theta)$ is a ``monomial,'' i.e., its degree and order are the same.
Take $B' \subseteq E$ with $\card{B'} = r$.
Then we have $\det \tilde{A}_k(\theta)[B'] = \theta^{w_k(B') - w_k(B)} \det A_k[B']$.
Thus, $B'$ is a minimum-weight base of $A_k$ with respect to $w_k$ if and only if $\det \tilde{A}_k(\theta)[B']$ is in $\setK \setminus \set{0}$.
From \cref{prop:weighted_intersection_cocircuit}, the degree of each nonzero entry in $\tilde{A}_k(\theta)$ is nonnegative; this implies that any term of positive degree in $\tilde{A}_k(\theta)[B']$ cannot contribute to the constant term of $\det \tilde{A}_k(\theta)[B']$.
In addition, the constant term of each entry in $\tilde{A}_k(\theta)$ is the same as that of $A_k^\#$ by its definition.
Hence the constant term of $\det \tilde{A}_k(\theta)[B']$ is equal to $\det A_k^\#[B']$, which means that $B'$ is in $\mathcal{B}\pbig{A_k^\#}$ if and only if $B'$ minimizes $w_k$ among $\mathcal{B}(A_k)$.
In the above argument, $\det A_k^\#[B'] = \det \tilde{A}_k(\theta)[B'] = \det A_k[B']$ is proved for $B' \in \mathcal{B}\pbig{A_k^\#}$.
Hence we have $\det A_1^\#[B'] \det A_2^\#[B'] = \det A_1[B'] \det A_2[B'] = 1$ for all $B' \in \mathcal{B}\pbig{A_1^\#, A_2^\#}$.
\end{proof}
By \cref{lem:pair_sharp} and \cref{prop:counting_formula_for_pfaffian_pairs}, we have the following corollary, which leads us to an algorithm described in \cref{alg:weighted_pfaffian_pair}.
\begin{corollary}\label{cor:counting_via_sharp}
The number of minimum-weight common bases of $(A_1, A_2)$ modulo $\ch(\setK)$ is equal to $\det A_1^\# \trsp{{A_2^\#}}$ over $\setK$.
\end{corollary}
\begin{algorithm}[tbp]
\caption{Computing the number of minimum-weight common bases of a Pfaffian pair.}\label{alg:weighted_pfaffian_pair}
\begin{algorithmic}[1]
\Input{An $r \times n$ Pfaffian pair $(A_1, A_2)$ and a column weight $\funcdoms{w}{E}{\setZ}$}
\Output{The number of minimum-weight common bases of $(A_1, A_2)$ modulo $\ch(\setK)$}
\State{Compute a minimum-weight common base $B \in \mathcal{B}(A_1, A_2)$ and split weights $w_1, w_2$}
\State{$A_1 \gets {A_1[B]}^{-1}A_1, \, A_2 \gets {A_2[B]}^{-1}A_2$}
\State{Construct the matrices $A_1^\#$ and $A_2^\#$ defined by~\eqref{def:A_k_sharp}}
\State{\Return $\det A_1^\# \trsp{{A_2^\#}}$}
\end{algorithmic}
\end{algorithm}
Now \cref{thm:complexity_of_counting_weighted_pairs} can be proved as follows.
\begin{proof}[{of \cref{thm:complexity_of_counting_weighted_pairs}}]
The correctness of \cref{alg:weighted_pfaffian_pair} follows from \cref{cor:counting_via_sharp}, which was proved above.
We analyze its time complexity.
Frank's weighted matroid intersection algorithm~\cite{Frank1981} can be implemented for linear matroids in $\Order\prn{nr^\omega + nr \log n}$-time (see, e.g.,~\cite[Corollary~41.10a]{Schrijver2003}).
Other computations can be done within this time.
\end{proof}
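For concreteness, the following Python sketch implements the last three lines of \cref{alg:weighted_pfaffian_pair}, assuming that a minimum-weight common base $B$ and split weights $w_1, w_2$ have already been computed by a weighted matroid intersection solver (which is not reimplemented here); the $i$th row of the pivoted matrices is identified with the $i$th element of $B$.
\begin{verbatim}
# Illustration only: steps 2-4 of Algorithm alg:weighted_pfaffian_pair.
import numpy as np

def count_min_weight_common_bases(A1, A2, B, w1, w2):
    """A1, A2: r x n arrays; B: list of r column indices forming a minimum-weight
    common base; w1, w2: length-n split weights satisfying (W1) and (W2)."""
    A1 = np.linalg.inv(A1[:, B]) @ A1            # pivoting: A1[B] and A2[B] become I_r
    A2 = np.linalg.inv(A2[:, B]) @ A2
    def sharp(A, wk):                            # the matrix A_k^# of (def:A_k_sharp)
        Ash = np.zeros_like(A)
        for i, u in enumerate(B):                # row i is identified with u in B
            for j in range(A.shape[1]):
                if wk[u] == wk[j]:
                    Ash[i, j] = A[i, j]
        return Ash
    return round(np.linalg.det(sharp(A1, w1) @ sharp(A2, w2).T))
\end{verbatim}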
\subsection{Counting on Weighted Pfaffian Parities}\label{sec:counting_on_weighted_pfaffian_parities}
Let $(A, L)$ be a $2r \times 2n$ Pfaffian parity of constant $c$ with line weight $\funcdoms{w}{L}{\setR}$.
We describe an algorithm to count the number of minimum-weight parity bases of $(A, L)$ modulo $\ch(\setK)$.
Suppose that $(A, L)$ has at least one parity base.
Let $\zeta$ denote the minimum weight of a parity base of $(A, L)$ and $N$ the number of minimum-weight parity bases modulo $\ch(\setK)$.
Note that $N$ is nonzero if $\ch(\setK) = 0$.
We put $\delta \defeq w(L) - \zeta$.
Then the following holds from \cref{prop:algebraic_weighted_parity}.
\begin{lemma}\label{lem:algebraic_form_of_weighted_parity}
The coefficient of $\theta^{\delta}$ in $\pf \Phi\pbig{\theta^w}$ is equal to $cN$.
In addition, it holds $\delta \ge \deg \pf \Phi\pbig{\theta^w}$ and the equality is attained if and only if $N \ne 0$.
\end{lemma}
We first obtain a minimum-weight parity base $B \in \mathcal{B}(A, L)$ by applying the algorithm of Iwata--Kobayashi~\cite{Iwata2017}.
Then we perform a row transformation and a line (column) permutation on $A$ so that the left $2r$ columns of $A$ correspond to $B$ and $A[B] = I_{2r}$.
Namely, $A$ is in the form of $A =\begin{pmatrix} I_{2r} & C \end{pmatrix}$ for some matrix $C \in \setK^{2r \times (2n - 2r)}$.
Note that these transformations keep $(A, L)$ Pfaffian but change the constant to 1.
We perform the same transformations on $\Phi\pbig{\theta^w}$ (and on $\Delta\pbig{\theta^w}$) accordingly.
Now the polynomial matrix $\Phi\pbig{\theta^w}$ is in the form of
\begin{align}
\Phi\pbig{\theta^w} \defeq \prn{\begin{array}{c|cc}
O & I_{2r} & C \\\hline
-I_{2r} & \multicolumn{2}{c}{\multirow{2}{*}{$\Delta\pbig{\theta^w}$}} \\
-\trsp{C} & \multicolumn{2}{c}{}
\end{array}}
\begin{array}{l}
\gets U \\
\gets B \\
\gets E \setminus B,
\end{array}
\end{align}
where $U$ is the row set of $A$ identified with $B$.
Besides the minimum-weight parity base $B$, the algorithm of Iwata--Kobayashi~\cite{Iwata2017} outputs an extra matrix $C^*$.
Its row set $U^*$ and column set $E^*$ contain $U$ and $E \setminus B$, respectively, and the elements in $U^* \setminus U$ and $E^* \setminus E = E^* \setminus (E \setminus B)$ are newly introduced ones.
The Schur complement of $C^*$ with respect to $Y \defeq C^*[U^* \setminus U, E^* \setminus E]$ coincides with $C$, i.e., it holds
\begin{align}\label{eq:C_star_Schur}
C = C^*[U, E \setminus B] - C^*[U, E^* \setminus E]Y^{-1} C^*[U^* \setminus U, E \setminus B].
\end{align}
In addition, the cardinalities of $U^*$ and $E^*$ are guaranteed to be $\Order(n)$.
We put $W \defeq U^* \cup B \cup E^*$ and $c^* \defeq \det Y$.
Consider the skew-symmetric polynomial matrix $\Phi^*(\theta) = \pbig{\Phi_{u,v}^*(\theta)}_{u,v \in W}$ defined by
\begin{align}\label{def:phi_star_theta}
\Phi^*(\theta) = \prn{\begin{array}{c|c|c|c|c}
\multicolumn{2}{c|}{\multirow{2}{*}{$O$}} & O & \multicolumn{2}{c}{\multirow{2}{*}{$C^*$}} \\\cline{3-3}
\multicolumn{2}{c|}{} & I_{2r} & \multicolumn{2}{c}{} \\\hline
O & -I_{2r} & \multicolumn{2}{c|}{\multirow{2}{*}{$\Delta\pbig{\theta^w}$}} & \multirow{2}{*}{$O$} \\\cline{1-2}
\multicolumn{2}{c|}{\multirow{2}{*}{$-\trsp{{C^*}}$}} & \multicolumn{2}{c|}{} & \\\cline{3-5}
\multicolumn{2}{c|}{} & \multicolumn{2}{c|}{O} & O
\end{array}}
\begin{array}{l}
\gets U^* \setminus U \\
\gets U \\
\gets B \\
\gets E \setminus B \\
\gets E^* \setminus E.
\end{array}
\end{align}
Then we have the following claim, which is essentially the same as Claim~6.2 in the arXiv preprint of~\cite{Iwata2017}.
\begin{lemma}\label{lem:pfaffian_equality_phi_star}
It holds $\pf \Phi^*(\theta) = c^* \pf \Phi\pbig{\theta^w}$.
\end{lemma}
\begin{proof}
From the property~\eqref{eq:C_star_Schur} of $C^*$ on the Schur complement, we can transform $\Phi^*(\theta)$ by elementary operations as
%
\begin{align}
\hat{\Phi}(\theta) \defeq
\prn{\begin{array}{c|c|c|c|c}
\multicolumn{2}{c|}{\multirow{2}{*}{$O$}} & O & O & Y \\\cline{3-5}
\multicolumn{2}{c|}{} & I_{2r} & C & O \\\hline
O & -I_{2r} & \multicolumn{2}{c|}{\multirow{2}{*}{$\Delta\pbig{\theta^w}$}} & \multirow{2}{*}{$O$} \\\cline{1-2}
O & \raisebox{-.3ex}{$-\trsp{C}$} & \multicolumn{2}{c|}{} & \\\hline
\raisebox{-.3ex}{$-\trsp{Y}$} & O & \multicolumn{2}{c|}{O} & O
\end{array}}
\begin{array}{l}
\gets U^* \setminus U \\
\gets U \\
\gets B \\
\gets E \setminus B \\
\gets E^* \setminus E.
\end{array}
\end{align}
%
Then we have
%
\begin{align}
\pf \Phi^*(\theta)
= \pf \hat{\Phi}(\theta)
= \pf \begin{pmatrix}
O & O & Y \\
O & \Phi\pbig{\theta^w} & O \\
-\trsp{Y} & O & O
\end{pmatrix}
=
\pf \begin{pmatrix}
O & Y & O \\
-\trsp{Y} & O & O \\
O & O & \Phi\pbig{\theta^w}
\end{pmatrix}
=
c^* \pf \Phi\pbig{\theta^w}.
\end{align}
%
Note that the permutation which we applied in the third equality is even since the order of $\Phi\pbig{\theta^w}$ is $2r + 2n$.
Hence the claim holds.
\end{proof}
\Cref{lem:algebraic_form_of_weighted_parity} can be rephrased in terms of $\Phi^*(\theta)$ by using \cref{lem:pfaffian_equality_phi_star} as follows.
\begin{lemma}\label{lem:algebraic_form_of_weighted_parity_star}
The coefficient of $\theta^\delta$ in $\pf \Phi^*(\theta)$ is equal to $c^*N$.
In addition, it holds $\delta \ge \deg \pf \Phi^*(\theta)$ and the equality is attained if and only if $N \ne 0$.
\end{lemma}
We next define an undirected graph $G = G(\Phi^*)$ associated with $\Phi^*(\theta)$.
The vertex set of $G$ is $W$ and the edge set is given by
\begin{align}\label{def:graph_of_phi_star_theta}
F \defeq \set[\big]{\set{u, v}}[u, v \in W, \, \Phi_{u,v}^*(\theta) \ne 0].
\end{align}
We set the weight of every edge $\set{u, v} \in F$ to $\deg \Phi_{u,v}^*(\theta)$.
Let $\hat{\delta}(\Phi^*)$ denote the maximum weight of a perfect matching of $G$.
We set $\hat{\delta}(\Phi^*) \defeq -\infty$ if $G$ has no perfect matching.
Here we put $\hat{\delta} \defeq \hat{\delta}(\Phi^*)$.
From the definition~\eqref{def:pfaffian} of Pfaffian, $\hat{\delta}$ serves as a combinatorial upper bound on $\deg \pf \Phi^*(\theta)$.
For later use, we define $G(S)$ and $\hat{\delta}(S)$ for any skew-symmetric polynomial matrix $S(\theta)$ in the same manner.
The dual problem of the maximum-weight perfect matching problem on $G$ is as follows (see~\cite{Iwata2017} and~\cite[Theorem~25.1]{Schrijver2003}):
\begin{align}
\text{(D)} \quad
\begin{array}{|c>{\hspace{-.5em}}l}
\underset{\pi, \xi}{\text{minimize}} & \begin{array}[t]{>{\displaystyle}l}
\sum_{u \in W} \pi(u) - \sum_{Z \in \Omega} \xi(Z)
\end{array} \\
\text{subject to} & \begin{array}[t]{>{\displaystyle}l>{\displaystyle}l}
\pi(u) + \pi(v) - \sum_{Z \in \Omega_{u,v}} \xi(Z) \ge \deg \Phi^*_{u,v}(\theta) & (\set{u,v} \in F), \\
\xi(Z) \ge 0 & (Z \in \Omega),
\end{array}
\end{array}
\end{align}
where $\Omega \defeq \set{Z \subseteq W}[\text{$\card{Z}$ is odd and $\card{Z} \ge 3$}]$ and $\Omega_{u,v} \defeq \set{Z \in \Omega}[\card{Z \cap \set{u,v}} = 1]$ for $u,v \in W$.
The following claim is proved in~\cite{Iwata2017} as a key ingredient of the optimality certification on the weighted linear matroid parity problem.
\begin{proposition}[{\cite[Claim~6.3 in the arXiv preprint]{Iwata2017}}]\label{prop:feasibility_of_D2}
There exists a feasible solution of (D) having the objective value $\delta$.
\end{proposition}
We make use of \cref{prop:feasibility_of_D2} for the purpose of counting.
\begin{lemma}\label{lem:tightness_of_parity}
It holds $\delta \ge \hat{\delta} \ge \deg \pf \Phi^*(\theta)$.
The equalities are attained if $N \ne 0$.
\end{lemma}
\begin{proof}
We have $\delta \ge \hat{\delta}$ by \cref{prop:feasibility_of_D2} and the weak duality of (D).
We also have $\hat{\delta} \ge \deg \pf \Phi^*(\theta)$ from the definition of Pfaffian.
The equality condition is obtained from \cref{lem:algebraic_form_of_weighted_parity_star}.
\end{proof}
By \cref{lem:tightness_of_parity}, it holds $N = 0$ if $\delta > \hat{\delta}$.
Otherwise, our goal is to compute the coefficient of $\theta^\delta = \theta^{\hat{\delta}}$ in $\pf \Phi^*(\theta)$ by \cref{lem:algebraic_form_of_weighted_parity_star}.
This can be obtained by executing Murota's upper-tightness testing algorithm on combinatorial relaxation~\cite[Section~4.4]{Murota1995a} (with $\det$ replaced with $\pf$).
\begin{proposition}[{see~\cite[Section~4.4]{Murota1995a}}]\label{prop:combinatorial_relaxation}
Let $S(\theta)$ be a $2n \times 2n$ skew-symmetric polynomial matrix.
We can compute the coefficient of $\theta^{\hat{\delta}(S)}$ in $\pf S(\theta)$ in $\Order(n^3)$-time.
\end{proposition}
\begin{algorithm}[tbp]
\caption{Computing the number of minimum-weight parity bases of a Pfaffian parity.}\label{alg:weighted_pfaffian_parity}
\begin{algorithmic}[1]
\Input{A $2r \times 2n$ Pfaffian parity $(A, L)$ and a line weight $\funcdoms{w}{L}{\setZ}$}
\Output{The number of minimum-weight parity bases of $(A, L)$ modulo $\ch(\setK)$}
\State{Compute a minimum-weight parity base $B \in \mathcal{B}(A, L)$ and the matrix $C^*$}
\State{Construct the matrix $\Phi^*\pbig{\theta}$ and the graph $G = G(\Phi^*)$}
\State{Compute the maximum weight $\hat{\delta} = \hat{\delta}(\Phi^*)$ of a perfect matching of $G$}
\If{$\delta \defeq w(L) - w(B) > \hat{\delta}$}
\State{\Return 0}
\Else{}
\State{Compute the coefficient $a$ of $\theta^{\hat{\delta}}$ in $\pf \Phi^*(\theta)$}
\State{$c^* \defeq \det C^*[U^* \setminus U, E^* \setminus E]$}
\State{\Return ${c^*}^{-1}a$}
\EndIf{}
\end{algorithmic}
\end{algorithm}
\Cref{alg:weighted_pfaffian_parity} shows the entire procedure of our algorithm.
Its time complexity, which is stated in \cref{thm:counting_minimum_weight_parity_bases}, is analyzed as follows.
\begin{proof}[{of \cref{thm:counting_minimum_weight_parity_bases}}]
The algorithm of Iwata--Kobayashi~\cite{Iwata2017} runs in $\Order\prn{n^3 r}$-time~\cite[Theorem~11.1]{Iwata2017}.
The maximum-weight perfect matching problem can be solved in $\Order\prn{n^3}$-time as $\card{W} = \Order\prn{n}$; see~\cite[Section~26.3a]{Schrijver2003}.
The coefficient of $\theta^{\hat{\delta}}$ in $\pf \Phi^*(\theta)$ can also be computed in $\Order\prn{n^3}$-time by \cref{prop:combinatorial_relaxation}.
Hence the total running time is dominated by $\Order\prn{n^3 r}$.
\end{proof}
In the above arguments, we have assumed that arithmetic operations on $\setK$ can be performed in constant time.
This assumption is reasonable when $\setK$ is a finite field of fixed order.
When $\setK = \setQ$, it has not been proved that a direct application of the algorithm of Iwata--Kobayashi~\cite{Iwata2017} keeps the bit-lengths of intermediate numbers polynomially bounded.
Instead, they showed that one can obtain a minimum-weight parity base $B$ by applying their algorithm over a sequence of finite fields.
However, since our counting algorithm requires not only $B$ but also $C^*$, we cannot directly execute our counting algorithm if we use the weighted linear matroid parity algorithm for $\setK = \setQ$ as a black-box.
Here, we describe a polynomial-time counting algorithm for $\setK = \setQ$, which is based on the same reduction to problems over finite fields as~\cite{Iwata2017}.
Let $(A, L)$ be a Pfaffian parity over $\setQ$ equipped with a line weight $\funcdoms{w}{L}{\setR}$.
Multiplying the product of denominators of entries in $A$, we may assume that entries in $A$ are integral.
Applying the weighted linear matroid parity algorithm for $\setK = \setQ$, we first compute the minimum weight $\eta$ of a parity base and the constant $c \in \setZ$ of $(A, L)$.
Let $\gamma$ be the maximum absolute value of the entries of $A$ and put $K \defeq \ceil{r \log (nr\gamma)} + 2$.
We compute $K$ smallest prime numbers $p_1, \ldots, p_K$ by the sieve of Eratosthenes.
Since $K$ is bounded by a polynomial in the bit-length of $A$ and $p_K = \Order(K \log K)$ by the prime number theorem, this computation can be done in polynomial time.
Let $N$ be the number of minimum-weight parity bases of $(A, L)$.
For $i \in \intset{K}$, we consider the problem of computing $cN$ modulo $p_i$.
We have $cN \equiv 0$ modulo $p_i$ if $p_i$ divides $c$.
Suppose that $c$ is not a multiple of $p_i$.
Since $\det A[B] = c$ for all $B \in \mathcal{B}(A, L)$, a line subset $B \subseteq L$ is a parity base of $(A, L)$ if and only if $B$ is a parity base of $(A^{p_i}, L)$, where $A^{p_i}$ is a matrix over $\GF(p_i)$ obtained by regarding each entry of $A$ as an element of $\GF(p_i)$.
Therefore, the number of minimum-weight parity bases of $(A^{p_i}, L)$ counted over $\GF(p_i)$ is equal to $N$ modulo $p_i$, and multiplying it by $c$ yields $cN$ modulo $p_i$.
We compute the former quantity by applying \cref{alg:weighted_pfaffian_parity} to $(A^{p_i}, L)$.
Since arithmetic operations on $\GF(p_i)$ can be performed in polynomial time, this computation can also be done in polynomial time by \cref{thm:counting_minimum_weight_parity_bases}.
Using the Chinese remainder theorem and the Euclidean algorithm, we compute $cN$ modulo $\prod_{i=1}^K p_i$ from $cN$ modulo $p_1, \ldots, p_K$.
Then we have
\begin{align}
2\abs{cN}+1
< 4\abs{cN}
\le 4r! \gamma^r \binom{n}{r}
\le 4\prn{r\gamma n}^{r}
\le 2^K
\le \prod_{i=1}^K p_i,
\end{align}
which implies that $cN$ is uniquely determined from $cN$ modulo $\prod_{i=1}^K p_i$.
These arguments certify the correctness of \cref{thm:bit_complexity}.
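The final reconstruction step can be made explicit as follows; the Python sketch below (illustration only, assuming the residues $cN \bmod p_i$ have already been computed as described above) recovers the signed integer $cN$ by the Chinese remainder theorem.
\begin{verbatim}
# Illustration only: recovering the signed integer cN from its residues modulo
# the primes p_1, ..., p_K, valid because 2|cN| + 1 < p_1 * ... * p_K.
from math import prod

def crt_signed(residues, primes):
    P = prod(primes)
    x = 0
    for r, p in zip(residues, primes):
        q = P // p
        x = (x + r * q * pow(q, -1, p)) % P   # q * q^{-1} = 1 (mod p), = 0 (mod other primes)
    return x if x <= P // 2 else x - P        # signed representative

primes = [2, 3, 5, 7]                         # a toy instance with cN = -5
print(crt_signed([(-5) % p for p in primes], primes))   # prints -5
\end{verbatim}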
\subsection{Counting on Weighted Pfaffian Pairs Revisited}
In this section, we derive \cref{alg:weighted_pfaffian_pair} in a different manner than \cref{sec:counting_on_weighted_pfaffian_pairs}.
While it is a somewhat roundabout approach, it might provide a connection between \cref{alg:weighted_pfaffian_pair} and the counting algorithm for weighted Pfaffian parities explained in \cref{sec:counting_on_weighted_pfaffian_parities}.
Let $(A_1, A_2)$ be a Pfaffian pair with constant $c$ and column weight $\funcdoms{w}{E}{\setR}$.
Suppose that $(A_1, A_2)$ has at least one common base.
Let $\zeta$ be the minimum weight of a common base and $N$ the number of minimum-weight common bases modulo $\ch(\setK)$.
Define $P(\theta) = \pbig{P_{u,v}(\theta)}_{u \in U_1, v \in U_2} \defeq A_1 D\pbig{\theta^w} \trsp{A_2}$, where $\theta$ is an indeterminate and $U_1$ and $U_2$ are row sets of $A_1$ and $A_2$, respectively.
From \cref{prop:algebraic_weighted_pair}, we have the following algebraic characterization on $\zeta$ and $N$.
\begin{lemma}\label{lem:algebraic_form_of_weighted_intersection}
The coefficient of $\theta^{\zeta}$ in $\det P(\theta)$ is equal to $cN$.
In addition, it holds $\zeta \le \ord \det P(\theta)$ and the equality is attained if and only if $N \ne 0$.
\end{lemma}
Let $B \in \mathcal{B}(A_1, A_2)$ be a minimum-weight common base.
As in \cref{sec:counting_on_weighted_pfaffian_pairs}, we first apply pivoting to $A_1$ and $A_2$ so that $A_1[B] = A_2[B] = I_r$.
We perform the same row and column operation on $P(\theta)$ accordingly.
Now we can identify $U_1$ and $U_2$ with $B$ since $A_1[B]$ and $A_2[B]$ are identity.
Next, we construct a bipartite graph $G = (U_1 \cup U_2, F)$ from $P(\theta)$ as follows.
The vertex set $U_1 \cup U_2$ is bipartitioned as $\set{U_1, U_2}$.
The edge set $F$ is given by
\begin{align}
F \defeq \set{(u,v)}[u \in U_1, v \in U_2, P_{u,v}(\theta) \ne 0]
\end{align}
and we set the weight of every edge $(u,v) \in F$ as $\ord P_{u,v}(\theta)$.
Let $\hat{\zeta}$ denote the minimum weight of a perfect matching of $G$.
If $G$ has no perfect matching, we let $\hat{\zeta} \defeq +\infty$.
Then it is easily observed from the definition~\eqref{def:determinant} of the determinant that $\hat{\zeta}$ serves as a combinatorial lower bound on $\ord \det P(\theta)$ (see, e.g.,~\cite[Proposition~2.1]{Murota1995a}), and hence on $\zeta$ if $N \neq 0$ by \cref{lem:algebraic_form_of_weighted_intersection}.
Indeed, these quantities satisfy the following relation.
\begin{lemma}\label{lem:tightness_of_intersection}
It holds $\zeta \le \hat{\zeta} \le \ord \det P(\theta)$.
The equalities are attained if $N \ne 0$.
\end{lemma}
\begin{proof}
By \cref{lem:algebraic_form_of_weighted_intersection} and $\hat{\zeta} \le \ord \det P(\theta)$, it suffices to show $\zeta \le \hat{\zeta}$.
To show the claim, we use the dual problem of the minimum-weight perfect bipartite matching problem on $G$, which is formulated as follows:
%
\begin{align}
\text{(DB)} \quad
\begin{array}{|c>{\hspace{-.5em}}l}
\underset{p_1,p_2}{\text{maximize}} & \begin{array}[t]{>{\displaystyle}l}
\sum_{u \in B} p_1(u) + \sum_{v \in B} p_2(v)
\end{array} \\
\text{subject to} & \begin{array}[t]{>{\displaystyle}l>{\displaystyle}l}
p_1(u) + p_2(v) \le \ord P_{u,v}(\theta) & ((u,v) \in F). \\
\end{array}
\end{array}
\end{align}
%
See~\cite[Theorem~17.5]{Schrijver2003} for example.
Note again that $U_1$ and $U_2$ are identified with $B$.
Using split weights $w_1, w_2$ satisfying~\ref{item:W1} and~\ref{item:W2}, we define $p_1(u) \defeq w_1(u)$ and $p_2(v) \defeq w_2(v)$ for $u,v \in B$.
We show that this $p_1$ and $p_2$ are feasible on (DB).
For every $(u, v) \in F$, there exists $j \in E$ with $w(j) = \ord P_{u,v}(\theta)$ such that both the $(u,j)$th entry of $A_1$ and the $(v,j)$th entry of $A_2$ are nonzero.
By \cref{prop:weighted_intersection_cocircuit}, we have $w_1(u) \le w_1(j)$ and $w_2(v) \le w_2(j)$.
Thus $p_1(u) + p_2(v) = w_1(u) + w_2(v) \le w_1(j) + w_2(j) = w(j) = \ord P_{u,v}(\theta)$, where the third equality follows from~\ref{item:W1}.
Hence $(p_1, p_2)$ is feasible.
The value of the objective function with respect to $p_1$ and $p_2$ is
%
\begin{align}
\sum_{u \in B} p_1(u) + \sum_{v \in B} p_2(v)
= \sum_{u \in B} w_1(u) + \sum_{v \in B} w_2(v)
= w(B) = \zeta.
\end{align}
%
This equality means that $\zeta$ is at most the optimal value of the dual program (DB), and hence at most $\hat{\zeta}$ by the weak duality of the linear program.
\end{proof}
If $\zeta < \hat{\zeta}$, then $N$ must be zero by \cref{lem:tightness_of_intersection}.
Otherwise, it holds $\zeta = \hat{\zeta} = \ord \det P(\theta)$ and $N$ is equal to the coefficient of $\theta^{\hat{\zeta}}$ in $\det P(\theta)$.
In the definition~\eqref{def:determinant} of the determinant of $P(\theta)$, every minimum-weight perfect matching of $G$ contributes to the coefficient of $\theta^{\hat{\zeta}}$ in $\det P(\theta)$.
In this sense, we can regard the computation of $N$ as a kind of ``counting operation'' on minimum-weight perfect matchings of the bipartite graph $G$.
Murota~\cite{Murota1995a} gave a characterization of the coefficient of $\theta^{\hat{\zeta}}$ in $\det P(\theta)$ (for a general polynomial matrix $P(\theta)$) as follows.
Let $(p_1, p_2)$ be a feasible solution of (DB).
The \emph{tight coefficient matrix} $P^\# = \prn{P_{u,v}^\#}_{u,v \in B}$ of $P(\theta)$ with respect to $(p_1, p_2)$ is defined by
\begin{align}
P_{u,v}^\# \defeq \text{the coefficient of $\theta^{p_1(u)+p_2(v)}$ in $P_{u,v}(\theta)$}
\end{align}
for $u,v \in B$.
\begin{proposition}[{\cite[Propositions~2.4 and~2.6]{Murota1995a}}]\label{prop:combinatorial_relaxation_det}
Let $(p_1, p_2)$ be a feasible solution of $\mathrm{(DB)}$.
If $(p_1, p_2)$ is not optimal, then $P^\#$ is singular.
If $(p_1, p_2)$ is optimal, then $\det P^\#$ is equal to the coefficient of $\theta^{\hat{\zeta}}$ in $\det P(\theta)$.
\end{proposition}
\Cref{prop:combinatorial_relaxation_det} essentially follows from the complementarity of the linear program.
At this point, we have obtained a polynomial-time algorithm for computing $N$ since $P^\#$ can be calculated in polynomial time.
We need one more argument to reach \cref{alg:weighted_pfaffian_pair}.
Let $w_1$ and $w_2$ be split weights satisfying~\ref{item:W1} and~\ref{item:W2}.
For $k = 1,2$, let $A_k^\#$ be the matrix obtained from $A_k$ and $w_k$ by~\eqref{def:A_k_sharp}.
We also put $p_1(u) \defeq w_1(u)$ and $p_2(u) \defeq w_2(u)$ for $u \in B$.
Note that $(p_1, p_2)$ is feasible on (DB) as shown in the proof of \cref{lem:tightness_of_intersection}.
\begin{lemma}\label{lem:from_A_sharp_to_tcm}
The matrix $A_1^\# \trsp{{A_2^\#}}$ is equal to the tight coefficient matrix of $P(\theta)$ with respect to $(p_1, p_2)$.
\end{lemma}
\begin{proof}
Fix $u, v \in B$ and $j \in E$.
Let $a$ and $b$ be the $(u,j)$th entry of $A_1$ and the $(v,j)$th entry of $A_2$, respectively.
Assume that $ab \ne 0$.
Then $ab$ contributes to $P^\#_{u,v}$ if and only if $w_1(u)+w_2(v) = w(j)$, which is equivalent to $w_1(u) = w_1(j)$ and $w_2(v) = w_2(j)$ by \cref{prop:weighted_intersection_cocircuit} and~\ref{item:W1}.
Hence $ab$ contributes to $P^\#_{u,v}$ if and only if it contributes to the $(u,v)$th entry of $A_1^\# \trsp{{A_2^\#}}$.
\end{proof}
We prove \cref{cor:counting_via_sharp} using \Cref{lem:from_A_sharp_to_tcm}, which guarantees the validity of \cref{alg:weighted_pfaffian_pair}.
\begin{proof}[of \cref{cor:counting_via_sharp}]
Suppose that $\zeta = \hat{\zeta}$.
Then $(p_1, p_2)$ is optimal on (DB) since the associated objective value is $\zeta = \hat{\zeta}$.
By \cref{lem:algebraic_form_of_weighted_intersection}, $N$ is equal to the coefficient of $\theta^{\hat{\zeta}}$ in $\det P(\theta)$, which is the same as $\det A_1^\# \trsp{{A_2^\#}}$ by the latter part of \cref{prop:combinatorial_relaxation_det} and \cref{lem:from_A_sharp_to_tcm}.
If $\zeta < \hat{\zeta}$, then $N = 0$ by \cref{lem:tightness_of_intersection}.
In addition, $(p_1, p_2)$ is not optimal on (DB).
Hence $\det A_1^\# \trsp{{A_2^\#}}$ must be zero by the former claim of \cref{prop:combinatorial_relaxation_det}.
Thus $\det A_1^\# \trsp{{A_2^\#}} = N$ holds in both cases.
\end{proof}
\begin{remark}
In the above arguments, we have constructed the bipartite graph $G$ from $P(\theta) = A_1 D\prn{\theta^w} \trsp{A_2}$.
On the other hand, \cref{alg:weighted_pfaffian_parity} builds a graph from $\Phi^*\prn{\theta}$ instead of $Q(\theta) \defeq A \Delta\prn{\theta^w} \trsp{A}$ for a Pfaffian parity $(A, L)$.
Assuming that $A$ is pivoted so that $A[B] = I_{2r}$ with a minimum-weight parity base $B$, we conjecture that the number of minimum-weight parity bases is equal to the coefficient of $\theta^{\hat{\delta}(Q)}$ in $\pf Q(\theta)$, where $\hat{\delta}(Q)$ is defined in \cref{sec:counting_on_weighted_pfaffian_parities}.
If this is the case, we can improve the running time of the weighted counting algorithm for Pfaffian parities.
Moreover, we can use an arbitrary algorithm to output $B$ since $C^*$ is no longer needed, which might further improve the running time for specific instances.
This conjecture is true for linear matroid intersection by \cref{lem:tightness_of_intersection,lem:algebraic_form_of_weighted_intersection} and for the matching problem by the definition of Pfaffian.
\end{remark}
\section{Appendix}
\input{a2_weighted_pairs.tex}
\subsection{Disjoint \ST\ Paths on Directed Acyclic Graphs}\label{sec:dag}
Let $G = (V, E)$ be a directed acyclic graph (DAG) and take disjoint vertex subsets $S, T \subseteq V$ with $k \defeq \card{S} = \card{T}$.
Suppose that vertices in $S$ and $T$ are ordered as $s_1, \ldots, s_k$ and $t_1, \ldots, t_k$.
We assume that the in-degree of $s_i$ and the out-degree of $t_j$ are zero for all $i,j \in \intset{k}$.
We regard directed paths of $G$ as edge subsets.
A (\emph{directed}) \emph{$S$--$T$ path} $P \subseteq E$ of $G$ is the union of $k$ directed paths $P_1, \ldots, P_k$ of $G$ satisfying the following:
\begin{enumerate}[label={(P1)}]
\item There exists a permutation $\sigma \in \sym_k$ of $\intset{k}$ such that every $P_i$ is a path from $s_i$ to $t_{\sigma(i)}$ for $i \in \intset{k}$.\label{item:P1}
\end{enumerate}
We call an $S$--$T$ path $P$ (\emph{vertex-})\emph{disjoint} if $P_i$ and $P_j$ have no common vertices for distinct $i, j \in \intset{k}$.
We denote the permutation in~\ref{item:P1} by $\sigma_P$.
Note that $\sigma_P$ is well-defined for disjoint $S$--$T$ paths.
We define the \emph{sign} of a disjoint $S$--$T$ path $P$ as $\sgn P \defeq \sgn \sigma_P$.
We introduce the Lindström--Gessel--Viennot (LGV) lemma, which was provided by Gessel--Viennot~\cite{Gessel1985} based on the work of Lindström~\cite{Lindstrom1973}.
Let $z = \prn{z_e}_{e \in E}$ be a vector of distinct indeterminates indexed by $E$.
Define a $k \times k$ matrix $\Omega(z) = \prn{\Omega_{i,j}(z)}_{i,j \in \intset{k}}$ by
\begin{align}\label{def:Omega_entry}
\Omega_{i,j}(z) \defeq \sum_{P \in \mathcal{P}_{i,j}} \prod_{e \in P} z_e
\end{align}
for $i,j \in \intset{k}$, where $\mathcal{P}_{i,j}$ is the set of all disjoint $s_i$--$t_j$ paths of $G$.
\begin{lemma}[{LGV lemma~\cite{Gessel1985,Lindstrom1973}}]\label{lem:normal-lgv}
It holds
%
\begin{align}\label{eq:normal-lgv}
\det \Omega(z) = \sum_{P \in \mathcal{P}} \sgn P \prod_{e \in P} z_e,
\end{align}
%
where $\mathcal{P}$ is the set of all disjoint $S$--$T$ paths of $G$.
\end{lemma}
We say that $(S,T)$ is in the \emph{LGV position} on $G$ if $\sgn P$ is constant for any disjoint $S$--$T$ path $P$ of $G$.
When $(S, T)$ is in the LGV position, the number of disjoint $S$--$T$ paths of $G$ can be computed through the LGV lemma.
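A minimal Python sketch of this use of the LGV lemma is given below (illustration only): for a small DAG in which only the identity pattern admits a vertex-disjoint system, so that $(S, T)$ is in the LGV position, the number of disjoint $S$--$T$ paths is recovered as $\abs{\det \Omega(\onevec)}$, where the entries of $\Omega(\onevec)$ are path counts computed by dynamic programming.
\begin{verbatim}
# Illustration only: counting disjoint S--T paths of a DAG in the LGV position
# as |det Omega(1)|, where Omega_{i,j}(1) is the number of s_i--t_j paths.
from functools import lru_cache
import numpy as np

def count_disjoint_paths(n, edges, S, T):
    out = {v: [] for v in range(n)}
    for u, v in edges:
        out[u].append(v)

    @lru_cache(maxsize=None)
    def paths(u, t):                 # number of directed u--t paths (finite: G is a DAG)
        return 1 if u == t else sum(paths(v, t) for v in out[u])

    Omega = np.array([[paths(s, t) for t in T] for s in S], dtype=float)
    return abs(round(np.linalg.det(Omega)))

# s_1 = 0, s_2 = 1, t_1 = 2, t_2 = 3; internal vertices 4, 5.
edges = [(0, 4), (1, 4), (1, 5), (4, 2), (4, 3), (5, 3)]
print(count_disjoint_paths(6, edges, S=[0, 1], T=[2, 3]))   # prints 1
\end{verbatim}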
The \emph{disjoint $S$--$T$ path problem} on a DAG $G = (V, E)$, which is to find a disjoint $S$--$T$ path of $G$, can be reduced to the bipartite matching problem.
We review the reduction presented in the proof of~\cite[Theorem~2.5.9]{Frank2011}.
Let $\tilde{V}_S$ and $\tilde{V}_T$ be disjoint copies of $\tilde{V} \defeq V \setminus (S \cup T)$.
For $v \in \tilde{V}$, we denote the corresponding vertices to $v$ in $\tilde{V}_S$ and $\tilde{V}_T$ by $v_s$ and $v_t$, respectively.
Also $v_s$ and $v_t$ indicate $v$ itself for $v \in S$ and $v \in T$, respectively.
We construct a bipartite graph $\Gamma$ as follows.
The vertex set of $\Gamma$ is the disjoint union of $V_S \defeq S \cup \tilde{V}_S$ and $V_T \defeq T \cup \tilde{V}_T$.
The edge set of $\Gamma$ is $F_1 \cup F_2$, where
\begin{align}
F_1 &\defeq \set{\set{u_s, v_t}}[(u, v) \in E], \\
F_2 &\defeq \set[\big]{\set{v_s, v_t}}[v \in \tilde{V}].
\end{align}
\begin{lemma}[{see~\cite[Theorem~2.5.9]{Frank2011}}]\label{lem:dag_correspondense}
There is a one-to-one correspondence between disjoint $S$--$T$ paths of $G$ and perfect matchings of $\Gamma$.
\end{lemma}
\begin{proof}
Let $P$ be a disjoint $S$--$T$ path of $G$ and $U \subseteq V$ the set of vertices covered by $P$.
Let $M$ be the union of $M_1$ and $M_2$, where
%
\begin{align}
M_1 &\defeq \set{\set{u_s, v_t}}[(u, v) \in P] \subseteq F_1, \\
M_2 &\defeq \set[\big]{\set{v_s, v_t}}[v \in \tilde{V} \setminus U] \subseteq F_2.
\end{align}
%
Then each $u_s \in V_S$ is covered by $\set{u_s, v_t} \in M_1$ for some $v_t \in V_T$ if $u \in U$ and by $\set{u_s, u_t} \in M_2$ if $u \notin U$.
In addition, such an edge in $M$ is unique since $P$ is disjoint.
The same argument holds for vertices in $V_T$.
Hence $M$ is a perfect matching of $\Gamma$.
Conversely, let $M$ be a perfect matching of $\Gamma$.
Define $P \defeq \{(u, v) \mid \set{u_s, v_t} \in M \cap F_1\}$.
Then the in- and out-degrees of $v \in \tilde{V}$ are the same (zero or one) and the in-degree of $t_j \in T$ and the out-degree of $s_i \in S$ are one in $P$.
Thus $P$ is the disjoint union of a disjoint $S$--$T$ path $P'$ of $G$ and cycles on $\tilde{V}$.
In particular, since $G$ is acyclic, it holds $P = P'$.
It is easily confirmed that these two correspondences are inverse to each other.
\end{proof}
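The construction of $\Gamma$ is easy to write down explicitly; the following Python sketch (illustration only) builds $V_S$, $V_T$, and $F_1 \cup F_2$ from a DAG, assuming as above that no edge enters $S$ or leaves $T$.
\begin{verbatim}
# Illustration only: the bipartite graph Gamma of Lemma lem:dag_correspondense.
def build_gamma(n, edges, S, T):
    inner = [v for v in range(n) if v not in S and v not in T]   # the set \tilde{V}
    vs = {v: ('s', v) for v in list(S) + inner}   # copy v_s (v itself for v in S)
    vt = {v: ('t', v) for v in list(T) + inner}   # copy v_t (v itself for v in T)
    F1 = [(vs[u], vt[v]) for (u, v) in edges]     # one edge of Gamma per edge of G
    F2 = [(vs[v], vt[v]) for v in inner]          # edges matching unused inner copies
    return list(vs.values()), list(vt.values()), F1 + F2

edges = [(0, 4), (1, 4), (1, 5), (4, 2), (4, 3), (5, 3)]
V_S, V_T, F = build_gamma(6, edges, S=[0, 1], T=[2, 3])
print(len(V_S), len(V_T), len(F))   # 4 4 8
\end{verbatim}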
Orienting every edge of $\Gamma$ appropriately, we show that the correspondence given in \cref{lem:dag_correspondense} preserves the signs of disjoint $S$--$T$ paths and perfect matchings up to a factor of $\prn{-1}^k$.
Suppose that vertices in $V_S$ and $V_T$ are ordered so that the first $k$ vertices are $s_1, \ldots, s_k$ and $t_1, \ldots, t_k$, respectively.
Construct a directed bipartite graph $\vec{\Gamma} = (V_S \cup V_T, \vec{F}_1 \cup \vec{F}_2)$, where $\vec{F}_1$ is the orientation of $F_1$ from $V_T$ to $V_S$ and $\vec{F}_2$ is the orientation of $F_2$ from $V_S$ to $V_T$.
Recall from \cref{sec:perfect_matchings} that $\sgn \vec{M}$ is defined by~\eqref{def:sign_of_directed_perfect_bipartite_matching} for a perfect matching $M$ of $\Gamma$.
\begin{lemma}\label{lem:normal-sgn}
Let $P$ be a disjoint $S$--$T$ path of $G$ and $M$ the perfect matching of $\Gamma$ corresponding to $P$.
Then it holds $\sgn P = \prn{-1}^k \sgn \vec{M}$.
\end{lemma}
\begin{proof}
Let $\vec{\Gamma}^*$ be the directed bipartite graph obtained from $\vec{\Gamma}$ by appending $k$ directed edges
%
\begin{align}
\vec{F}_3 \defeq \set[\big]{\pbig{t_{\sigma_P(i)}, s_i}}[i \in \intset{k}].
\end{align}
%
Then $\vec{M}' \defeq \vec{F}_2 \cup \vec{F}_3$ is a perfect matching of $\vec{\Gamma}^*$.
It is clear that $\sgn M' = \sgn P$.
The number of edges in $\vec{M}'$ from $V_T$ to $V_S$ is $\card[\big]{\vec{F}_3} = k$.
Hence the sign of $\vec{M}'$ is $\prn{-1}^k \sgn P$.
We show $\sgn \vec{M} = \sgn \vec{M}'$, which implies the claim.
For $i \in \intset{k}$, let $P_i$ be the $s_i$--$t_{\sigma_P(i)}$ path in $P$ and $U_i$ the set of vertices in $\tilde{V}$ covered by $P_i$.
Let $\vec{C}_i$ be the disjoint union of $\vec{C}_{i,1}, \vec{C}_{i,2}$ and $\vec{C}_{i,3}$ defined by
%
\begin{align}
\vec{C}_{i,1} &\defeq \set{(v_t, u_s)}[(u, v) \in P_i] \subseteq \vec{F}_1, \\
\vec{C}_{i,2} &\defeq \set{(v_s, v_t)}[v \in U_i] \subseteq \vec{F}_2, \\
\vec{C}_{i,3} &\defeq \set[\big]{(t_{\sigma_P(i)}, s_i)} \subseteq \vec{F}_3.
\end{align}
%
Then $\set[\big]{\vec{C}_1, \ldots, \vec{C}_k}$ is the set of alternating cycles of $M \symdif M'$.
Note that $M$ is also a perfect matching of $\vec{\Gamma}^*$.
An even cycle of directed edges is said to be \emph{oddly oriented} if the number of edges consistent with the direction of a traversal is odd for either choice of traversals.
Every alternating cycle $\vec{C}_i$ is oddly oriented because the unique edge in $\vec{C}_{i,3}$ is opposite to all the other edges of $\vec{C}_i$.
This means that the signs of $M$ and $M'$ are the same (see, e.g.,~\cite[Lemma~8.3.1]{Plummer1986}).
Hence the claim holds.
\end{proof}
By \cref{lem:normal-sgn,lem:matching_sign_directed,lem:dag_correspondense}, we have the following.
\begin{theorem}
Let $G$ be a DAG and take disjoint vertex subsets $S, T$ with $\card{S} = \card{T}$.
Let $\vec{\Gamma}$ be the directed bipartite graph defined above and $(\vec{A}_1, \vec{A}_2)$ the matrix pair representing perfect matchings of $\vec{\Gamma}$ given in \cref{sec:perfect_matchings}.
Then $\mathcal{B}(\vec{A}_1, \vec{A}_2)$ and disjoint $S$--$T$ paths of $G$ correspond one-to-one.
In addition, if $(S, T)$ is in the LGV position, then $(\vec{A}_1, \vec{A}_2)$ is Pfaffian with constant $\pm 1$.
\end{theorem}
\Cref{fig:LGVDAG} illustrates two examples of $(S, T)$ that are in the LGV position.
If $\sigma_P$ is the identity for every disjoint $S$--$T$ path $P$ of $G$, then $(S, T)$ is clearly in the LGV position.
Such $(S, T)$ is called \emph{nonpermutable}~\cite{Gessel1985} and one famous example arises from a planar DAG $G$.
Suppose that $s_1, \ldots, s_k, t_k, \ldots, t_1$ are aligned clockwise on the boundary of one face of $G$.
Then $(S, T)$ is nonpermutable because an $S$--$T$ path $P$ cannot be disjoint if $\sigma_P$ is not the identity.
Another example of the LGV position also arises from a planar DAG $G$.
Suppose that all the vertices in $S$ are on the boundary of a face $F$ of $G$, and all the vertices in $T$ are on the boundary of another face that does not adjoin $F$.
Then $\sigma_P$ must be a power of a cyclic permutation over $\intset{k}$, whose sign is always $+1$ when $k$ is odd.
\tikzset{s/.style={fill, circle, inner sep=0pt, minimum size=4pt}}
\tikzset{t/.style={draw, circle, inner sep=0pt, minimum size=4pt}}
\tikzset{arc/.style={-{Stealth[length=2mm, width=2mm, angle'=40]}}}
\begin{figure}[tbp]
\begin{minipage}[b]{0.5\textwidth}
\centering
\begin{tikzpicture}
\node (s1) [s] at (0, 1) {}; \node [left=0mm of s1] {$s_1$};
\node (s2) [s] at (0, 2) {}; \node [left=0mm of s2] {$s_2$};
\node (s3) [s] at (0, 3) {}; \node [left=0mm of s3] {$s_3$};
\node (s4) [s] at (0, 4) {}; \node [left=0mm of s4] {$s_4$};
\node (t1) [t] at (5, 1) {}; \node [right=0mm of t1] {$t_1$};
\node (t2) [t] at (5, 2) {}; \node [right=0mm of t2] {$t_2$};
\node (t3) [t] at (5, 3) {}; \node [right=0mm of t3] {$t_3$};
\node (t4) [t] at (5, 4) {}; \node [right=0mm of t4] {$t_4$};
\node (e) at (1.5,1.0) {};
\node (f) at (1.5,1.5) {};
\node (g) at (1.5,2.5) {};
\node (h) at (1.5,3.0) {};
\node (i) at (1.5,4.0) {};
\node (e2) at (3.5,1.0) {};
\node (f2) at (3.5,1.5) {};
\node (g2) at (3.5,2.5) {};
\node (h2) at (3.5,3.5) {};
\node (i2) at (3.5,4) {};
\draw[rounded corners=1cm,fill=gray!20] (1,0.5) rectangle ++(3,4) node [midway] {} ;
\draw[arc] (s1) -- (e);
\draw[arc] (s1) -- (f);
\draw[arc] (s2) -- (f);
\draw[arc] (s3) -- (f);
\draw[arc] (s3) -- (g);
\draw[arc] (s4) -- (h);
\draw[arc] (s4) -- (i);
\draw[arc] (e2) -- (t1);
\draw[arc] (f2) -- (t1);
\draw[arc] (f2) -- (t2);
\draw[arc] (g2) -- (t2);
\draw[arc] (g2) -- (t3);
\draw[arc] (h2) -- (t3);
\draw[arc] (i2) -- (t4);
\end{tikzpicture}
\subcaption{}
\end{minipage}%
\begin{minipage}[b]{0.5\textwidth}
\centering
\begin{tikzpicture}
\draw[fill = gray!20] (0, 0) circle (1.5cm);
\draw[fill = white] (0, 0) circle (0.9cm);
\node (s1) [s] at ( 0 , 0.5 ) {}; \node [right=0mm of s1] {$s_1$};
\node (s2) [s] at ( 0.433, -0.25) {}; \node [below left=-1mm of s2] {$s_2$};
\node (s3) [s] at (-0.433, -0.25) {}; \node [above=0mm of s3] {$s_3$};
\node (t1) [t] at ( 1.732, 1) {}; \node [right=0mm of t1] {$t_1$};
\node (t2) [t] at ( 0, -2) {}; \node [right=0mm of t2] {$t_2$};
\node (t3) [t] at (-1.732, 1) {}; \node [left=0mm of t3] {$t_3$};
\draw[arc] (s1) -- (0, 1.3);
\draw[arc] (s1) -- (-0.6, 1);
\draw[arc] (s2) -- (1.2, -0.1);
\draw[arc] (s3) -- (-1.2, -0.1);
\draw[arc] (s3) -- (-0.6, -1);
\draw[arc] (s3) -- (-1, -0.6);
\draw[arc] (1.1, 0.4) -- (t1);
\draw[arc] (0, -1.2) -- (t2);
\draw[arc] (0.6, -1.25) -- (t2);
\draw[arc] (-0.6, -1.25) -- (t2);
\draw[arc] (-1.1, 0.4) -- (t3);
\draw[arc] (-0.95, 0.8) -- (t3);
\end{tikzpicture}
\subcaption{}
\end{minipage}
\caption{%
Examples of $(S, T)$ that are in the LGV position.
Gray areas represent planar DAGs.
(a) $s_1, \ldots, s_4, t_4, \ldots, t_1$ are aligned clockwise on the boundary of an (outer) face.
(b) $S$ adjoins a face, $T$ adjoins another face, and $\card{S} = \card{T}$ is odd.
}\label{fig:LGVDAG}
\end{figure}
Lastly, we present another application of \cref{lem:normal-sgn}.
Let $(\vec{A}_1, \vec{A}_2)$ be the matrix pair representing perfect matchings of $\vec{\Gamma}$.
Let $z = \prn{z_e}_{e \in F_1 \cup F_2}$ be a vector of distinct indeterminates indexed by $F_1 \cup F_2$.
We substitute $z_e \defeq 1$ for $e \in F_2$.
When indexing a component of $z$, we identify $\set{u_s, v_t} \in F_1$ with $(u, v) \in E$.
Put $N(z) \defeq \vec{A}_1 D(z) \trsp{{\vec{A}_2}}$.
Denote by $\mathcal{M}$ the set of perfect matchings of $\Gamma$, by $\mathcal{P}$ the set of disjoint $S$--$T$ paths of $G$, and by $\Omega(z)$ the matrix defined in the LGV lemma for $G$.
Then we have
\begin{align}\label{eq:derive_LGV}
\det N(z)
= \sum_{M \in \mathcal{M}} \sgn \vec{M} \prod_{e \in M} z_e
= \sum_{P \in \mathcal{P}} \prn{-1}^k \sgn P \prod_{e \in P} z_e
= \det \prn{-\Omega(z)},
\end{align}
where the first equality follows from~\eqref{eq:cauchy_binet_1}, the second one follows from \cref{lem:dag_correspondense,lem:normal-sgn}, and the last one is due to the LGV lemma.
Indeed, we can prove the equality $\det N(z) = \det \prn{-\Omega(z)}$ without using the LGV lemma, which turns out to give a new proof of the lemma.
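Before turning to that proof, the identity~\eqref{eq:derive_LGV} can also be checked by brute force on small instances. The following Python sketch is ours and purely illustrative (exponential time); the adjacency dictionary and the weight dictionary $z$ are conventions of the example, not of the reduction above.
\begin{verbatim}
# Brute-force check of det(Omega(z)) = sum_P sgn(P) prod_{e in P} z_e
# on a small DAG (illustrative only; exponential time).
import numpy as np
from itertools import permutations, product

def all_paths(adj, s, t, banned=frozenset()):
    # simple directed s--t paths, returned as lists of edges
    if s == t:
        yield []
        return
    for v in adj.get(s, []):
        if v not in banned:
            for rest in all_paths(adj, v, t, banned | {s}):
                yield [(s, v)] + rest

def lgv_check(adj, z, S, T):
    k = len(S)
    Omega = np.array([[sum(np.prod([z[e] for e in p])
                           for p in all_paths(adj, S[i], T[j]))
                       for j in range(k)] for i in range(k)])
    brute = 0.0
    for perm in permutations(range(k)):
        sgn = (-1) ** sum(perm[a] > perm[b]
                          for a in range(k) for b in range(a + 1, k))
        for paths in product(*(list(all_paths(adj, S[i], T[perm[i]]))
                               for i in range(k))):
            verts = [S[i] for i in range(k)]
            verts += [v for p in paths for (_, v) in p]
            if len(verts) == len(set(verts)):   # vertex-disjoint system
                brute += sgn * np.prod([z[e] for p in paths for e in p])
    return np.isclose(np.linalg.det(Omega), brute)
\end{verbatim}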
\begin{proof}[{Proof of \cref{lem:normal-lgv}}]
Recall that entries of $N(z)$ satisfy~\eqref{eq:bipartite_adjacency_matrix} up to the factor of $x$.
The matrix $N(z)$ can be partitioned as
%
\begin{align}
N(z) =
\begin{pmatrix}
X & Y \\
Z & W
\end{pmatrix},
\end{align}
%
where the row and column sets of $X$ correspond to $S$ and $T$, respectively, and the other rows and columns are indexed by $\tilde{V}$.
By sorting $\tilde{V}$ in a topological order with respect to $\vec{G}$, we can transform $W$ into an upper triangular matrix, as $G$ is acyclic.
In addition, diagonal entries of $W$ are 1 since $(v_s, v_t) \in \vec{F}_2$ for $v \in \tilde{V}$.
Hence we have $\det W = 1$.
By the formula~\eqref{eq:shurt_det} on the Schur complement, we have
%
\begin{align}
\det N(z)
= \det W \det \pbig{X - Y{W}^{-1}Z}
= \det \pbig{X - YW^{-1}Z}.
\end{align}
We show $X - YW^{-1}Z = -\Omega(z)$.
Fix $i, j \in \intset{k}$ and let $G^{(i,j)}$ be the subgraph of $G$ obtained by deleting $S \setminus \set{s_i}$ and $T \setminus \set{t_j}$.
We consider the disjoint $\set{s_i}$--$\set{t_j}$ path problem on $G^{(i,j)}$.
The directed bipartite graph $\vec{\Gamma}^{(i,j)}$ on this problem is the subgraph of $\vec{\Gamma}$ induced by $\pbig{\tilde{V}_S \cup \set{s_i}} \cup \pbig{\tilde{V}_T \cup \set{t_j}}$.
Similarly, the counterpart $N^{(i,j)}(z)$ of $N(z)$ on this problem is $N(z)[\tilde{V}_S \cup \set{s_i}, \tilde{V}_T \cup \set{t_j}]$, which means
%
\begin{align}
N^{(i,j)}(z) =
\begin{pmatrix}
X[\set{s_i}, \set{t_j}] & Y[\set{s_i}, \tilde{V}] \\
Z[\tilde{V}, \set{t_j}] & W
\end{pmatrix}.
\end{align}
%
By~\eqref{eq:shurt_det}, it holds
%
\begin{align}
\det N^{(i,j)}(z) = X[\set{s_i}, \set{t_j}] - Y[\set{s_i}, \tilde{V}] W^{-1} Z[\tilde{V}, \set{t_j}],
\end{align}
%
which coincides with the $(i,j)$th entry of $X - YW^{-1}Z$.
Denote by $\mathcal{M}^{(i,j)}$ the set of perfect matchings of $\Gamma^{(i,j)}$ and by $\mathcal{P}^{(i,j)}$ the set of disjoint $\set{s_i}$--$\set{t_j}$ paths of $G^{(i,j)}$, which are exactly $s_i$--$t_j$ paths of $G$.
In the same way as~\eqref{eq:derive_LGV}, we have
%
\begin{align}
\det N^{(i,j)}(z)
&= \sum_{M \in \mathcal{M}^{(i,j)}} \sgn \vec{M} \prod_{e \in M} z_e \\
&= \sum_{P \in \mathcal{P}^{(i,j)}} \prn{-1}^1 \sgn P \prod_{e \in P} z_e \\
&= -\sum_{P \in \mathcal{P}^{(i,j)}} \prod_{e \in P} z_e
= -\Omega_{i,j}(z).\label{eq:LGV_proof_last}
\end{align}
%
Note that we did not apply the LGV lemma on the last equality of~\eqref{eq:LGV_proof_last}; it is just the definition~\eqref{def:Omega_entry} of $\Omega_{i,j}(z)$.
Hence $X - YW^{-1}Z = -\Omega(z)$ holds.
The LGV lemma follows from this fact together with the first two equalities in~\eqref{eq:derive_LGV}.
\end{proof}
\section*{Acknowledgments}
The authors thank Satoru Iwata for his helpful comments, and Yusuke Kobayashi, Yutaro Yamaguchi, and Koyo Hayashi for discussions.
This work was supported by JST ACT-I Grant Number JPMJPR18U9, Japan, and Grant-in-Aid for JSPS Research Fellow Grant Number JP18J22141, Japan.
\subsection{Shortest Disjoint \STU\ Paths on Undirected Graphs}\label{sec:stu}
We further extend the shortest disjoint $S$--$T$ path problem to the shortest disjoint $S$--$T$--$U$ path problem, which is a special case of the shortest disjoint \calS-path problem.
We first introduce Mader's disjoint \calS-path problem~\cite{Gallai1964,Mader1978}.
Let $G = (V, E)$ be an undirected graph and $\mathcal{S} = \{S_1, \ldots, S_s\}$ a family of disjoint nonempty subsets of $V$.
Suppose that $\Sigma \defeq S_1 \cup \cdots \cup S_s$ is of cardinality $2k$ and ordered as $u_1, \ldots, u_{2k}$ so that $i \le j$ means $\alpha \le \beta$, where $u_i \in S_{\alpha}$ and $u_j \in S_{\beta}$ for $i, j \in \intset{2k}$.
Vertices in $\Sigma$ are called \emph{terminals}.
Recall that $F_{2k}$ is the subset of $\sym_{2k}$ defined by~\eqref{def:F}.
An \emph{\calS-path} $P$ of $G$ is the union of $k$ paths $P_1, \ldots, P_k \subseteq E$ of $G$ satisfying the following:
\begin{enumerate}[label={(P2)}]
\item There exists a permutation $\sigma \in F_{2k}$ such that $P_i$ is a path between $u_{\sigma(2i-1)} \in S_\alpha$ and $u_{\sigma(2i)} \in S_\beta$ with $\alpha \ne \beta$ for each $i \in \intset{k}$.\label{item:P2}
\end{enumerate}
Namely, $P$ is an \calS-path if the ends of each $P_i$ belong to distinct parts in $\mathcal{S}$ and the ends of $P_i$ and $P_j$ are disjoint for all distinct $i, j \in \intset{k}$.
We call an \calS-path $P$ \emph{disjoint} if $P_i$ and $P_j$ have no common vertices for all distinct $i,j \in [k]$.
For a disjoint \calS-path $P$, a permutation satisfying~\ref{item:P2} uniquely exists in $F_{2k}$ and we denote it by $\sigma_P$.
The \emph{sign} of a disjoint \calS-path $P$ is defined as $\sgn P \defeq \sgn \sigma_P$.
The \emph{disjoint \calS-path problem} on $G$ is to find a disjoint \calS-path of $G$.
We also consider the situation when $G$ is equipped with a positive edge length $l \colon E \to \setR_{>0}$.
Then the \emph{length} of an \calS-path $P$ is defined as $l(P) \defeq \sum_{e \in P} l(e)$.
The \emph{shortest disjoint \calS-path problem} is to find a disjoint \calS-path of $G$ with minimum length.
We next describe a reduction of the disjoint \calS-path problem to the linear matroid parity problem, based on Schrijver's linear representation~\cite{Schrijver2003} of Lov\'asz' reduction~\cite{Lovasz1980}.
We assume that there are no edges connecting terminals.
Put $\tilde{V} \defeq V \setminus \Sigma$, $\tilde{E} \defeq \set[\big]{\set{u,v} \in E}[u,v \in \tilde{V}]$, $m \defeq \card{E}$ and $\tilde{m} \defeq \card[\big]{\tilde{E}}$.
Fix two-dimensional row vectors $b_1, \ldots, b_s$ which are pairwise linearly independent.
We construct a matrix
\begin{align}
X = \begin{pmatrix}
X_1 & O \\
X_2 & X_3
\end{pmatrix}
\end{align}
from $G$ as follows.
The size of each block is $2k \times 2(m - \tilde{m})$ for $X_1$, $(2n-4k) \times 2(m - \tilde{m})$ for $X_2$, and $(2n-4k) \times 2\tilde{m}$ for $X_3$.
Each edge $e \in E \setminus \tilde{E}$ is associated with two columns of $\begin{psmallmatrix} X_1 \\ X_2 \end{psmallmatrix}$ and each $e \in \tilde{E}$ is associated with two columns of $\begin{psmallmatrix} O \\ X_3 \end{psmallmatrix}$.
Each terminal $u_i \in \Sigma$ corresponds to the $i$th row of $\begin{pmatrix} X_1 & O \end{pmatrix}$ for $i \in \intset{2k}$ and each $v \in \tilde{V}$ is associated with two rows of $\begin{pmatrix} X_2 & X_3 \end{pmatrix}$.
Entries of each block are determined as follows.
\begin{itemize}
\item The $1 \times 2$ submatrix of $X_1$ associated with $u_i \in \Sigma$ and $e \in E \setminus \tilde{E}$ is $b_\alpha$ if $e \cap S_{\alpha} = \set{u_i}$ and $O$ otherwise.
\item The $2 \times 2$ submatrix of $X_2$ associated with $v \in \tilde{V}$ and $e \in E \setminus \tilde{E}$ is the identity matrix $I_2$ of order two if $v \in e$ and $O$ otherwise.
\item The matrix $X_3$ is defined to be the Kronecker product $H[\tilde{V}, \tilde{E}] \otimes I_2$, where $H$ is the incidence matrix~\eqref{def:incidence_matrix} of any orientation of $G$.
Namely, $X_3$ is obtained from $H[\tilde{V}, \tilde{E}]$ by replacing $+1$ with $+I_2$, $-1$ with $-I_2$, and $0$ with $O$.
\end{itemize}
We regard each edge $e \in E$ as a line of $X$, which consists of the two columns associated with $e$.
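A small Python sketch (ours; the vertex and edge orderings and the concrete choice of the vectors $b_\alpha$ are left to the caller) of the construction of $X$ may help to parse the block structure above.
\begin{verbatim}
# Illustrative construction of the matrix X of the reduction above.
# parts = [S_1, ..., S_s] is the list of terminal classes, b[alpha] is the
# two-dimensional row vector fixed for part alpha (0-based); edges joining
# two terminals are assumed to have been removed beforehand.
import numpy as np

def build_X(V, edges, parts, b):
    terminals = [u for part in parts for u in part]
    part_of = {u: a for a, part in enumerate(parts) for u in part}
    Vtil = [v for v in V if v not in part_of]
    row = {u: i for i, u in enumerate(terminals)}        # one row per terminal
    for j, v in enumerate(Vtil):                         # two rows per v
        row[v] = len(terminals) + 2 * j
    X = np.zeros((len(terminals) + 2 * len(Vtil), 2 * len(edges)))
    for j, (u, v) in enumerate(edges):                   # one line per edge
        cols = slice(2 * j, 2 * j + 2)
        if u in part_of or v in part_of:                 # e in E \ E~
            t, w = (u, v) if u in part_of else (v, u)
            X[row[t], cols] = b[part_of[t]]
            X[row[w]:row[w] + 2, cols] = np.eye(2)
        else:                                            # e in E~
            X[row[u]:row[u] + 2, cols] = np.eye(2)
            X[row[v]:row[v] + 2, cols] = -np.eye(2)
    return X
\end{verbatim}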
\begin{lemma}[{\cite[Lemma~4]{Yamaguchi2016}}]\label{lem:Yamaguchi2017}
An edge subset $B \subseteq E$ is a parity base of $(X, E)$ if and only if $B$ is a spanning forest of $G$ such that every connected component covers exactly two terminals belonging to distinct parts of $\mathcal{S}$.
\end{lemma}
Note that if $B$ is a parity base of $(X, E)$, the number of connected components must be $k$ since $B$ covers all the vertices of $G$ by \cref{lem:Yamaguchi2017}.
Hence $(X, E)$ has a parity base if and only if $G$ has a disjoint \calS-path.
Unfortunately, as in the $S$--$T$ path case described in \cref{sec:st}, this reduction does not provide a one-to-one correspondence between $\mathcal{B}(X, E)$ and the set of disjoint \calS-paths of $G$.
Yamaguchi~\cite{Yamaguchi2016} showed that the shortest disjoint \calS-path problem can be reduced to the weighted linear matroid parity problem.
Here we present a simplified reduction for our setting (where an \calS-path covers all terminals), together with a one-to-one correspondence of optimal solutions.
Let $G^*$ be the graph obtained from $G$ by adding a new vertex $v^*$ and an edge set $E' \defeq \set[\big]{\set{v, v^*}}[v \in \tilde{V}]$.
We construct a matrix $X^*$ from $G^*$ in the same way as the construction of $X$ from $G$.
Let $A$ be the matrix obtained from $X^*$ by removing the two rows corresponding to $v^*$.
Namely, by an appropriate column permutation and an edge orientation on $E'$, the matrix $A$ is written as
\begin{align}\label{eq:form_of_A}
\begin{pmatrix}
X_1 & O & O \\
X_2 & X_3 & I_{2n-4k}
\end{pmatrix},
\end{align}
where every two columns of the left, middle, and right blocks correspond to an edge in $E \setminus \tilde{E}$, $\tilde{E}$, and $E'$, respectively.
Regarding $E^* \defeq E \cup E'$ as the set of lines on $A$, we set a line weight $\funcdoms{w}{E^*}{\setR}$ as $w(e) \defeq l(e)$ for $e \in E$ and as $w(e) = 0$ for $e \in E'$.
\begin{lemma}\label{lem:stu_correspondence}
The minimum length of a disjoint \calS-path of $G$ with respect to $l$ is equal to the minimum weight of a parity base of $(A, E^*)$ with respect to $w$.
In addition, there is a one-to-one correspondence between shortest disjoint \calS-paths of $G$ and minimum-weight parity bases of $(A, E^*)$.
\end{lemma}
\begin{proof}
The argument is almost the same as in the proof of \cref{lem:st-one2one}.
Since $A$ can be transformed into~\eqref{eq:form_of_A}, $B \subseteq E^*$ is a parity base of $(A, E^*)$ if and only if a submatrix of $X$ is nonsingular, where the rows and columns of the submatrix are ones associated with $V \setminus \set{v}[\set{v, v^*} \in B \cap E']$ and $B \cap E$, respectively.
By \cref{lem:Yamaguchi2017}, this is equivalent to the condition that $B$ consists of $k+1$ connected components $B_0 = B \cap E', B_1, \ldots, B_k$ in $G^*$ such that $B_0$ contains $v^*$ and each of $B_1, \ldots, B_k$ is a tree covering exactly two terminals belonging to distinct parts of $\mathcal{S}$.
Thus any parity base $B \in \mathcal{B}(A, E^*)$ contains a unique disjoint \calS-path $P$ of $G$.
We have $l(P) \le w(B)$ by the nonnegativity of $w$.
Conversely, given a disjoint \calS-path $P$, one can construct a parity base $B$ by adding $\set{v, v^*} \in E'$ for each $v \in \tilde{V}$ that is not covered by $P$.
Then we have $w(B) = l(P)$ as $w(e) = 0$ for $e \in E'$.
Hence the minimum length of a disjoint \calS-path of $G$ is equal to the minimum weight of a parity base of $(A, E^*)$.
Any minimum-weight parity base $B \in \mathcal{B}(A, E^*)$ contains a unique shortest disjoint \calS-path $P$ of $G$, as we have seen above.
Conversely, for any shortest disjoint \calS-path, there exists a unique minimum-weight parity base $B$ containing it because $l(e)$ is positive for all $e \in E$.
Hence the correspondence is one-to-one.
\end{proof}
We next give a formula that connects $\sgn P$ and $\det A[B]$.
For a disjoint \calS-path $P$, define
\begin{align}\label{def:c_P}
c_P \defeq \prod_{i=1}^k \det
\begin{pmatrix}
b_{\alpha_{2i-1}} \\
b_{\alpha_{2i}}
\end{pmatrix}
\in \setR \setminus \set{0},
\end{align}
where $\alpha_i$ is the element in $\intset{s}$ such that $u_{\sigma_P(i)} \in S_{\alpha_i}$ for $i \in \intset{2k}$.
\begin{lemma}\label{lem:stu_sign_correspondence}
Let $P$ be a disjoint \calS-path of $G$ and $B$ a parity base of $(A, E^*)$ containing $P$.
Then we have
%
\begin{align}\label{eq:stu_sgn_P}
c_P \sgn P = \det A[B].
\end{align}
\end{lemma}
\begin{proof}
As shown in the proof of \cref{lem:stu_correspondence}, $B$ consists of $k+1$ connected components $B_0 = B \cap E', B_1, \ldots, B_k$ in $G^*$ such that $B_0$ covers $v^*$ and $B_i$ contains the path $P_i \subseteq P$ between $u_{\sigma_P(2i-1)}$ and $u_{\sigma_P(2i)}$ for each $i \in \intset{k}$.
Let $V_i$ be the vertex subset covered by $B_i$ for $i = 0, \ldots, k$.
By row and column permutations, $A[B]$ is transformed into a block diagonal matrix $Z = \diag(Z_0, Z_1, \ldots, Z_k)$.
For $i = 0, \ldots, k$, the columns of $Z_i$ correspond to $B_i$ and the rows of $Z_i$ are associated with $V_i$ if $i \ne 0$ and with $V_0 \setminus \set{v^*}$ if $i = 0$.
We can assume that the permutations are taken so that they preserve the two rows and the two columns associated with each $v \in \tilde{V}$ and $e \in E^*$.
This means that the column permutation is even.
We also arrange the rows corresponding to $u_{\sigma_P(2i-1)}$ and $u_{\sigma_P(2i)}$ on the first and second rows of $Z_i$, respectively, for $i \in \intset{k}$.
Since the first $2k$ rows of $A$ correspond to $u_1, \ldots, u_{2k}$ from top to bottom, the sign of the row permutation coincides with $\sgn P$.
Hence we have
%
\begin{align}\label{eq:eq:stu_sgn_P_mid}
\det A[B] = \sgn P \prod_{i=0}^k \det Z_i.
\end{align}
Applying row and column permutations preserving each two rows and columns, $Z_0$ becomes a block diagonal matrix whose diagonal blocks are $I_2$ or $-I_2$.
Thus $\det Z_0 = 1$.
Consider $Z_i$ for $i \in \intset{k}$.
Let $\alpha, \beta \in \intset{s}$ with $u_{\sigma_P(2i-1)} \in S_\alpha$, $u_{\sigma_P(2i)} \in S_\beta$.
Permuting rows and columns and reversing the signs of some lines (two consecutive columns) in $B_i \cap \tilde{E}$, we can transform $Z_i$ into
%
\begin{align}\label{eq:Zi}
\prn{\begin{array}{ccccc|c}
b_\alpha & & & & & \multirow{6}{*}{$C$} \\
& & & & b_\beta & \\
I_2 & -I_2 & & & & \\
& I_2 & \ddots & & & \\
& & \ddots & -I_2 & & \\
& & & I_2 & I_2 & \\\hline
& & & & & I_{2p}
\end{array}},
\end{align}
%
for some matrix $C$, where empty cells indicate zero and $p \defeq \card{B_i \setminus P_i}$.
Note that these permutations and sign reversals again preserve the determinant.
The determinant of~\eqref{eq:Zi} is equal to $\det \begin{psmallmatrix} b_\alpha \\ b_\beta \end{psmallmatrix}$.
Hence~\eqref{eq:stu_sgn_P} holds via~\eqref{eq:eq:stu_sgn_P_mid}.
\end{proof}
We say that $\mathcal{S}$ is in the \emph{LGV position} on $G$ if $\sgn P$ is constant for all disjoint \calS-paths $P$ of $G$.
Since $\det A[B]$ depends not only on $\sgn P$ but on $c_P$ by \cref{lem:stu_sign_correspondence}, the matroid parity $(A, E^*)$ might not be Pfaffian even if $\mathcal{S}$ is in the LGV position.
Nevertheless, $c_P$ is constant when $\card{\mathcal{S}} = 3$, as claimed in the following.
We refer to the (shortest) disjoint \calS-path problem with $\mathcal{S} = \set{S = S_1, T = S_2, U = S_3}$ as the (\emph{shortest}) \emph{disjoint $S$--$T$--$U$ path problem}.
An \emph{$S$--$T$--$U$ path} means an $\set{S, T, U}$-path.
\begin{lemma}\label{lem:stu_c_P_constant}
Let $G = (V, E)$ be an undirected graph and $S, T, U \subseteq V$ disjoint vertex subsets.
Then $c_P$ defined by~\eqref{def:c_P} is constant over all disjoint $S$--$T$--$U$ paths $P$ of $G$.
\end{lemma}
\begin{proof}
Let $P$ be a disjoint $S$--$T$--$U$ path of $G$ and $x$, $y$, and $z$ the numbers of paths in $P$ connecting $S$ to $T$, $T$ to $U$, and $U$ to $S$, respectively.
We have $x + y = \card{T}$, $y + z = \card{U}$, and $z + x = \card{S}$.
This means that $x, y, z$ are uniquely determined from $\card{S}, \card{T}, \card{U}$.
The value of $c_P$ is equal to
%
\begin{align}
c_P =
\prn{\det \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}}^x
\prn{\det \begin{pmatrix} b_2 \\ b_3 \end{pmatrix}}^y
\prn{\det \begin{pmatrix} b_1 \\ b_3 \end{pmatrix}}^z,
\end{align}
%
where $b_1, b_2$ and $b_3$ are two-dimensional row vectors corresponding to $S$, $T$, and $U$, respectively.
Hence $c_P$ does not depend on the choice of $P$.
\end{proof}
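For instance, if $\card{S} = \card{T} = 3$ and $\card{U} = 2$ (as in \cref{fig:LGV-STU} below), the system reads $x + y = 3$, $y + z = 2$ and $z + x = 3$, so every disjoint $S$--$T$--$U$ path has $(x, y, z) = (2, 1, 1)$.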
We have the following conclusion by \cref{lem:stu_correspondence,lem:stu_sign_correspondence,lem:stu_c_P_constant}.
\begin{theorem}
Let $G$ be an undirected graph and take disjoint vertex subsets $S, T, U$.
Let $(A, E^*)$ be the matroid parity defined above.
If $\set{S, T, U}$ is in the LGV position, then $(A, E^*)$ is Pfaffian with constant $c_P \sgn P$, where $P$ is an arbitrary disjoint $S$--$T$--$U$ path of $G$.
In addition, when $G$ is equipped with a positive edge length $l$, there is a one-to-one correspondence between shortest disjoint $S$--$T$--$U$ paths of $G$ and minimum-weight parity bases of $(A, E^*)$ with respect to the line weight $w$ defined above.
\end{theorem}
We show one example of the LGV position for the $S$--$T$--$U$ case.
Let $G$ be a planar graph and suppose that terminals are aligned on the boundary of one face of $G$ in the order of $S$, $U$, $T$ clockwise.
Then, as shown in the proof of \cref{lem:stu_c_P_constant}, the connecting pattern of the terminals is uniquely determined by $\card{S}$, $\card{T}$ and $\card{U}$.
Hence $\sigma_P$ is constant over all disjoint $S$--$T$--$U$ paths $P$ of $G$, which means that $\set{S, T, U}$ is in the LGV position.
See \cref{fig:LGV-STU} for an illustration.
\tikzset{u/.style={draw, fill=gray!50, circle, inner sep=0pt, minimum size=4pt}}
\begin{figure}
\centering
\begin{tikzpicture}
\node (s1) [s] at (0, 0.5) {}; \node [left=0mm of s1] {$s_1$};
\node (s2) [s] at (0, 1.5) {}; \node [left=0mm of s2] {$s_2$};
\node (s3) [s] at (0, 2.5) {}; \node [left=0mm of s3] {$s_3$};
\node (t1) [t] at (5, 0.5) {}; \node [right=0mm of t1] {$t_1$};
\node (t2) [t] at (5, 1.5) {}; \node [right=0mm of t2] {$t_2$};
\node (t3) [t] at (5, 2.5) {}; \node [right=0mm of t3] {$t_3$};
\node (u1) [u] at (2, 3.7) {}; \node [above=0mm of u1] {$u_1$};
\node (u2) [u] at (3, 3.7) {}; \node [above=0mm of u2] {$u_2$};
\draw [rounded corners=1cm,fill=gray!20] (1,0) rectangle ++(3,3) node [midway] {} ;
\draw (s1) -- (1.5, 0.5);
\draw (s2) -- (1.5, 1);
\draw (s2) -- (1.5, 1.5);
\draw (s2) -- (1.5, 1.8);
\draw (s3) -- (1.5, 2);
\draw (s3) -- (1.5, 2.5);
\draw (t1) -- (3.5, 0.5);
\draw (t2) -- (3.5, 1);
\draw (t2) -- (3.5, 1.5);
\draw (t2) -- (3.5, 1.8);
\draw (t3) -- (3.5, 2);
\draw (t3) -- (3.5, 2.5);
\draw (u1) -- (1.8, 2.5);
\draw (u1) -- (2.2, 2.5);
\draw (u2) -- (3, 2.5);
\end{tikzpicture}
\caption{%
Example of $S, T, U$ that are in the LGV position, where $S = \set{s_1, s_2, s_3}$, $T = \set{t_1, t_2, t_3}$ and $U = \set{u_1, u_2}$.
The gray area represents a planar graph.
}\label{fig:LGV-STU}
\end{figure}
\subsection{Shortest Disjoint \ST\ Paths on Undirected Graphs}\label{sec:st}
Let $G=(V,E)$ be an undirected graph and $S = \set{s_1, \ldots, s_k}$ and $T = \set{t_1, \ldots, t_k}$ disjoint vertex subsets of cardinality $k$.
An \emph{$S$--$T$ path} $P \subseteq E$ of $G$ is the union of $k$ paths $P_1, \ldots, P_k$ of $G$ satisfying~\ref{item:P1} (with direction ignored).
We denote the permutation in~\ref{item:P1} by $\sigma_P$.
An $S$--$T$ path $P$ is said to be (\emph{vertex}-)\emph{disjoint} if $P_i$ and $P_j$ share no vertices for all distinct $i, j \in \intset{k}$.
The \emph{disjoint $S$--$T$ path problem} on $G$ is to find a disjoint $S$--$T$ path of $G$.
We also equip $G$ with a positive edge length $\funcdoms{l}{E}{\setR_{>0}}$.
The \emph{length} $l(P)$ of a disjoint $S$--$T$ path $P$ of $G$ is defined as the sum of the lengths of all edges in $P$.
The \emph{shortest disjoint $S$--$T$ path problem} on $G$ is to find a disjoint $S$--$T$ path of $G$ with minimum length.
We first show that the shortest disjoint $S$--$T$ path problem on an undirected graph is a generalization of the disjoint $S$--$T$ path problem on a DAG\@.
Let $\vec{G} = (V, \vec{E})$ be a DAG and take disjoint vertex subsets $S, T \subseteq V$ of cardinality $k$.
Let $v_1, \ldots, v_n$ be a topological ordering of $V$ with respect to $\vec{G}$, i.e., $(v_i, v_j) \notin \vec{E}$ for all $i, j \in \intset{n}$ with $i \ge j$.
Let $G = (V, E)$ be the undirected graph obtained from $\vec{G}$ by ignoring the orientation.
We set an edge length for $G$ as $l(e) \defeq j-i$ for $e = \set{v_i, v_j} \in E$ with $i < j$.
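As a small illustration (ours, not part of the original reduction), these lengths can be read off directly from a topological order:
\begin{verbatim}
# Edge lengths of the reduction, computed from a topological order (sketch).
def reduction_lengths(topo_order, edges):
    pos = {v: i + 1 for i, v in enumerate(topo_order)}   # 1-based position
    return {frozenset(e): abs(pos[e[0]] - pos[e[1]]) for e in edges}
\end{verbatim}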
\begin{proposition}
Suppose that $\vec{G}$ has at least one disjoint directed $S$--$T$ path.
Then $P \subseteq E$ is a shortest disjoint $S$--$T$ path of $G$ if and only if the corresponding directed edge set $\vec{P} \subseteq \vec{E}$ is a disjoint directed $S$--$T$ path of $\vec{G}$.
\end{proposition}
\begin{proof}
Let $P = P_1 \cup \cdots \cup P_k$ be a disjoint $S$--$T$ path of $G$, where $P_i$ is the $s_i$--$t_{\sigma_P(i)}$ path contained in $P$ for $i \in [k]$.
For each $e \in P$, we denote by $\partial^s e$ and $\partial^t e$ the ends of $e$ closer to $s_i$ and to $t_{\sigma_P(i)}$ along the path $P_i$ containing $e$, respectively.
Let $f(v_i) \defeq i$ for $i \in [n]$.
Then we have
%
\begin{align}\label{eq:l_P_i}
l(P_i)
= \sum_{e \in P_i} l(e)
= \sum_{\{u,v\} \in P_i} |f(v) -f(u)|
\ge \sum_{e \in P_i} \prn{f\pbig{\partial^t e} - f\pbig{\partial^s e}}
= f\pbig{t_{{\sigma_P}(i)}} - f(s_i)
\end{align}
%
for $i \in \intset{k}$.
Summing~\eqref{eq:l_P_i} up over all $i \in \intset{k}$, we have
%
\begin{align}\label{eq:l_P}
l(P)
= \sum_{i=1}^k l(P_i)
\ge \sum_{i=1}^k \prn{f\pbig{t_{{\sigma_P}(i)}} - f(s_i)}
= \sum_{v \in T} f(v) - \sum_{u \in S} f(u).
\end{align}
%
The equality of~\eqref{eq:l_P} is attained if and only if $f\pbig{\partial^s e} \le f\pbig{\partial^t e}$ for every $e \in P$.
Since $V$ is topologically ordered, this is equivalent to the condition that $\vec{P}$ is a disjoint directed $S$--$T$ path of $\vec{G}$.
By assumption, such a disjoint directed $S$--$T$ path exists in $\vec{G}$, so the bound~\eqref{eq:l_P} is attained.
Hence the set of shortest disjoint $S$--$T$ paths of $G$ corresponds to the set of disjoint directed $S$--$T$ paths of $\vec{G}$.
\end{proof}
We next describe a reduction from the disjoint $S$--$T$ path problem on an undirected graph $G = (V, E)$ to the graphic matroid intersection problem, given by Tutte~\cite{Tutte1965}.
Let $G_S$ and $G_T$ be the graphs obtained from $G$ by shrinking $S$ and $T$ into a single vertex $v^*$, respectively.
Recall from \cref{sec:regular_matroids} that $\mathbf{M}(G_S)$ and $\mathbf{M}(G_T)$ denote the graphic matroids of $G_S$ and $G_T$.
An edge subset $B \subseteq E$ is a common base of $\mathbf{M}(G_S)$ and $\mathbf{M}(G_T)$ if and only if $B$ is a spanning forest of $G$ consisting of $k$ connected components each of which covers exactly one vertex belonging to $S$ and exactly one vertex belonging to $T$ in $G$.
This means that any common base $B$ contains a unique disjoint $S$--$T$ path $P$ of $G$.
Conversely, given a disjoint $S$--$T$ path $P$, we can construct a common base $B$ by adding some of the remaining edges to $P$ so that it forms a spanning forest with $k$ connected components.
This means that $\mathbf{M}(G_S)$ and $\mathbf{M}(G_T)$ have a common base if and only if $G$ has a disjoint $S$--$T$ path.
Note, however, that this correspondence is not one-to-one: distinct common bases may contain the same disjoint $S$--$T$ path, so the map $B \mapsto P$ is not injective.
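The reduction itself is easy to carry out in code. The following Python sketch (ours, illustrative) keeps the edge list of $G$ as the common ground set and tests independence in $\mathbf{M}(G_S)$ and $\mathbf{M}(G_T)$ by a union--find forest check after relabelling the shrunken vertices; a common base is then a maximum-size common independent set.
\begin{verbatim}
# Illustrative sketch of Tutte's reduction: edges are indexed 0, ..., m-1 and
# edges[e] = (u, v); shrinking a vertex set W means relabelling its vertices
# to a single new vertex before the forest test.
def is_forest_after_shrinking(edge_subset, edges, W, star="v*"):
    relabel = lambda x: star if x in W else x
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for e in edge_subset:
        u, v = edges[e]
        ru, rv = find(relabel(u)), find(relabel(v))
        if ru == rv:
            return False                    # cycle in the shrunk graph
        parent[ru] = rv
    return True

def is_common_independent(B, edges, S, T):
    return (is_forest_after_shrinking(B, edges, set(S)) and
            is_forest_after_shrinking(B, edges, set(T)))
\end{verbatim}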
Analogously to Tutte's reduction, the shortest disjoint $S$--$T$ path problem can be reduced to the weighted graphic matroid intersection problem.
This is a special case of Yamaguchi's reduction~\cite{Yamaguchi2016} of the shortest disjoint \calS-path problem, which will be described in \cref{sec:stu}, on our $S$--$T$ path setting.
While this is a simple extension of Tutte's reduction, it provides a one-to-one correspondence between optimal solutions of these problems.
Let $G^*$ be the graph obtained from $G$ by appending $v^*$ as a new vertex and adding the edge set $E' \defeq \set[\big]{\set{v, v^*}}[v \in \tilde{V}]$, where $\tilde{V} := V \setminus (S \cup T)$.
Similarly, let $G_S^*$ and $G_T^*$ be the graphs obtained from $G_S$ and $G_T$ by adding $E'$, respectively.
Denote $E \cup E'$ by $E^*$.
We set a column weight $\funcdoms{w}{E^*}{\setR}$ of $\mathbf{M}\pbig{G_S^*}$ and $\mathbf{M}\pbig{G_T^*}$ as $w(e) \defeq l(e)$ for $e \in E$ and as $w(e) \defeq 0$ for $e \in E'$.
\begin{lemma}\label{lem:st-one2one}
The minimum length of a disjoint $S$--$T$ path of $G$ with respect to $l$ is equal to the minimum weight of a common base of $\mathbf{M}\pbig{G_S^*}$ and $\mathbf{M}\pbig{G_T^*}$ with respect to $w$.
In addition, there is a one-to-one correspondence between shortest disjoint $S$--$T$ paths of $G$ and minimum-weight common bases of $\mathbf{M}\pbig{G_S^*}$ and $\mathbf{M}\pbig{G_T^*}$.
\end{lemma}
\begin{proof}
As in the unweighted case, $B \subseteq E^*$ is a common base of $\mathbf{M}\pbig{G_S^*}$ and $\mathbf{M}\pbig{G_T^*}$ if and only if $B$ is a spanning forest of $G^*$ consisting of $k+1$ connected components $B_0 = B \cap E', B_1, \ldots, B_k \subseteq B$ such that $v^*$ is covered only by $B_0$ and each of $B_1, \ldots, B_k$ covers exactly one vertex belonging to $S$ and exactly one vertex belonging to $T$.
Thus any common base $B$ contains a unique disjoint $S$--$T$ path $P$ of $G$.
Since $w$ is nonnegative, we have $w(B) \ge l(P)$.
Conversely, given a disjoint $S$--$T$ path $P$, we can construct a common base $B$ by adding $\set{v, v^*} \in E'$ for each $v \in \tilde{V}$ that is not covered by $P$.
We have $w(B) = l(P)$ as $w(e) = 0$ for $e \in E'$.
Hence the shortest length of a disjoint $S$--$T$ path of $G$ is equal to the minimum weight of a common base of $\mathbf{M}\pbig{G_S^*}$ and $\mathbf{M}\pbig{G_T^*}$.
As we discussed above, every minimum-weight common base contains a unique shortest disjoint $S$--$T$ path, and for every shortest disjoint $S$--$T$ path $P$ there exists a minimum-weight common base $B$ containing $P$.
Since $l(e) > 0$ for all $e \in E$, such a base cannot contain any edge of $E$ outside $P$, so it is uniquely determined by $P$; hence the correspondence is one-to-one.
\end{proof}
We next consider the correspondence in \cref{lem:st-one2one} by means of matrices.
Let $A$ be the incidence matrix~\eqref{def:incidence_matrix} of any orientation of $G^*$.
We define $A_1 \defeq A[S \cup \tilde{V}, E^*]$ and $A_2 \defeq A[T \cup \tilde{V}, E^*]$.
Then $A_1$ and $A_2$ represent $\mathbf{M}(G_T^*)$ and $\mathbf{M}(G_S^*)$, respectively; indeed, deleting the rows of $T \cup \set{v^*}$ (resp.\ $S \cup \set{v^*}$) from $A$ amounts to identifying these vertices into one.
We assume that for each $i \in \intset{k}$, the $i$th rows of $A_1$ and $A_2$ correspond to $s_i$ and $t_i$, respectively.
Then the sign of a disjoint $S$--$T$ path $P$ can be written using the subdeterminants of $A_1$ and $A_2$ as follows, even if $P$ is not the shortest.
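The pair $(A_1, A_2)$ and the right-hand side of the formula in the next lemma can be set up as follows (a Python sketch of ours; the edge orientation and the placement of $v^*$ are arbitrary choices of the sketch).
\begin{verbatim}
# Illustrative construction of (A_1, A_2) and evaluation of the sign formula.
import numpy as np

def build_pair(V, edges, S, T):
    Vtil = [v for v in V if v not in S and v not in T]
    Vstar = list(V) + ["v*"]
    Estar = list(edges) + [(v, "v*") for v in Vtil]       # append E'
    row = {v: i for i, v in enumerate(Vstar)}
    A = np.zeros((len(Vstar), len(Estar)))
    for j, (u, v) in enumerate(Estar):                    # arbitrary orientation
        A[row[u], j], A[row[v], j] = 1.0, -1.0
    A1 = A[[row[v] for v in list(S) + Vtil], :]           # rows s_1, ..., s_k first
    A2 = A[[row[v] for v in list(T) + Vtil], :]           # rows t_1, ..., t_k first
    return A1, A2, Estar

def sign_formula(A1, A2, B, k):
    # (-1)^k det A_1[B] det A_2[B] for a column subset B forming a common base
    return (-1) ** k * round(np.linalg.det(A1[:, B])) \
                     * round(np.linalg.det(A2[:, B]))
\end{verbatim}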
\begin{lemma}\label{lem:st_sign}
Let $P$ be a disjoint $S$--$T$ path of $G$ and $B$ a common base of $(A_1, A_2)$ containing $P$.
Then we have
%
\begin{align}
\sgn P = \prn{-1}^k \det A_1[B] \det A_2[B].
\end{align}
\end{lemma}
\begin{proof}
As shown in the proof of \cref{lem:st-one2one}, $B$ consists of $k+1$ connected components $B_0$, $B_1, \ldots, B_k$ in $G^*$.
Here, $B_0$ covers $v^*$ and $B_i$ contains the $s_i$--$t_{\sigma_P(i)}$ path contained in $P$ for $i \in \intset{k}$.
Let $V_i$ be the vertex subset covered by $B_i$ for $i = 0, \ldots, k$.
By row and column permutations, $A_1[B]$ is transformed into a block diagonal matrix $X = \diag(X_0, X_1, \ldots, X_k)$, where $X_0$ has the row set $V_0 \setminus \set{v^*}$ and the column set $B_0$ and $X_i$ has the row set $V_i \setminus \set[\big]{t_{\sigma_P(i)}}$ and the column set $B_i$ for $i \in \intset{k}$.
Similarly, we permute rows and columns of $A_2[B]$ so that it becomes a block diagonal matrix $Y = \diag(Y_0, Y_1, \ldots, Y_k)$, where $Y_0$ has the row set $V_0 \setminus \set{v^*}$ and the column set $B_0$ and $Y_i$ has the row set $V_i \setminus \set[\big]{s_i}$ and the column set $B_i$ for $i \in \intset{k}$.
We can assume that column permutations of these two transformations are the same.
We can also assume that $X_0$ and $Y_0$ have the same ordering of rows, and for each $i \in \intset{k}$, the first rows of $X_i$ and $Y_i$ correspond to $s_i$ and $t_{\sigma_P(i)}$, respectively, and other rows are in the same ordering.
This implies that the product of the signs of these two row permutations on $A_1$ and $A_2$ is $\sgn P$.
Hence we have
%
\begin{align}\label{eq:st_sign_mid}
\det A_1[B] \det A_2[B]
= \sgn P \det X \det Y
= \sgn P \prod_{i=0}^k \det X_i \det Y_i.
\end{align}
Here, we evaluate $\det X_i \det Y_i$ for $i = 0, \ldots, k$.
Note that $\det X_i \det Y_i$ is in $\set{+1, -1}$ since the incidence matrix $A$ is totally unimodular and $A_1[B]$ and $A_2[B]$ are nonsingular.
For $i = 0$, we have $\det X_0 \det Y_0 = 1$ by $X_0 = Y_0 = A[V_0 \setminus \set{v^*}, B_0]$.
For $i \in \intset{k}$, let $Z_i$ be the matrix such that the first row is the sum of those of $X_i$ and $Y_i$ and other rows are $X_i[V_i \cap \tilde{V}, B_i] = Y_i[V_i \cap \tilde{V}, B_i] = A[V_i \cap \tilde{V}, B_i]$.
Note that $\det X_i + \det Y_i = \det Z_i$.
Since both the ends of every edge in $B_i$ are in $V_i$, every column of $Z_i$ contains exactly one $+1$ and one $-1$ and other entries are zero.
Hence $Z_i$ is singular and thus we have $\det X_i = -\det Y_i$.
The claim of the lemma now follows from~\eqref{eq:st_sign_mid}.
\end{proof}
We have the following conclusion by \cref{lem:st-one2one,lem:st_sign}.
\begin{theorem}
Let $G$ be an undirected graph and take disjoint vertex subsets $S, T$ with $k \defeq \card{S} = \card{T}$.
Let $(A_1, A_2)$ be the matrix pair associated with $G, S$ and $T$.
If $(S, T)$ is in the LGV position, then $(A_1, A_2)$ is Pfaffian with constant ${(-1)}^k \sgn P$, where $P$ is an arbitrary disjoint $S$--$T$ path of $G$.
In addition, when $G$ is equipped with a positive edge length $l$, there is a one-to-one correspondence between shortest disjoint $S$--$T$ paths of $G$ and minimum-weight common bases of $(A_1, A_2)$ with respect to the column weight $w$ defined above.
\end{theorem}
We say that $(S, T)$ is in the \emph{LGV position} on $G$ if $\sgn P$ is constant for any disjoint $S$--$T$ path $P$ of $G$.
\Cref{fig:LGV-ST} illustrates two examples of the LGV position, which are obtained by ignoring edge directions in \cref{fig:LGVDAG}.
\begin{figure}[tbp]
\begin{minipage}[b]{0.5\textwidth}
\centering
\begin{tikzpicture}
\node (s1) [s] at (0, 1) {}; \node [left=0mm of s1] {$s_1$};
\node (s2) [s] at (0, 2) {}; \node [left=0mm of s2] {$s_2$};
\node (s3) [s] at (0, 3) {}; \node [left=0mm of s3] {$s_3$};
\node (s4) [s] at (0, 4) {}; \node [left=0mm of s4] {$s_4$};
\node (t1) [t] at (5, 1) {}; \node [right=0mm of t1] {$t_1$};
\node (t2) [t] at (5, 2) {}; \node [right=0mm of t2] {$t_2$};
\node (t3) [t] at (5, 3) {}; \node [right=0mm of t3] {$t_3$};
\node (t4) [t] at (5, 4) {}; \node [right=0mm of t4] {$t_4$};
\node (e) at (1.5,1.0) {};
\node (f) at (1.5,1.5) {};
\node (g) at (1.5,2.5) {};
\node (h) at (1.5,3.0) {};
\node (i) at (1.5,4.0) {};
\node (e2) at (3.5,1.0) {};
\node (f2) at (3.5,1.5) {};
\node (g2) at (3.5,2.5) {};
\node (h2) at (3.5,3.5) {};
\node (i2) at (3.5,4) {};
\draw[rounded corners=1cm,fill=gray!20] (1,0.5) rectangle ++(3,4) node [midway] {} ;
\draw (s1) -- (e);
\draw (s1) -- (f);
\draw (s2) -- (f);
\draw (s3) -- (f);
\draw (s3) -- (g);
\draw (s4) -- (h);
\draw (s4) -- (i);
\draw (e2) -- (t1);
\draw (f2) -- (t1);
\draw (f2) -- (t2);
\draw (g2) -- (t2);
\draw (g2) -- (t3);
\draw (h2) -- (t3);
\draw (i2) -- (t4);
\end{tikzpicture}
\subcaption{}
\end{minipage}%
\begin{minipage}[b]{0.5\textwidth}
\centering
\begin{tikzpicture}
\draw[fill = gray!20] (0, 0) circle (1.5cm);
\draw[fill = white] (0, 0) circle (0.9cm);
\node (s1) [s] at ( 0 , 0.5 ) {}; \node [right=0mm of s1] {$s_1$};
\node (s2) [s] at ( 0.433, -0.25) {}; \node [below left=-1mm of s2] {$s_2$};
\node (s3) [s] at (-0.433, -0.25) {}; \node [above=0mm of s3] {$s_3$};
\node (t1) [t] at ( 1.732, 1) {}; \node [right=0mm of t1] {$t_1$};
\node (t2) [t] at ( 0, -2) {}; \node [right=0mm of t2] {$t_2$};
\node (t3) [t] at (-1.732, 1) {}; \node [left=0mm of t3] {$t_3$};
\draw (s1) -- (0, 1.3);
\draw (s1) -- (-0.6, 1);
\draw (s2) -- (1.2, -0.1);
\draw (s3) -- (-1.2, -0.1);
\draw (s3) -- (-0.6, -1);
\draw (s3) -- (-1, -0.6);
\draw (1.1, 0.4) -- (t1);
\draw (0, -1.2) -- (t2);
\draw (0.6, -1.25) -- (t2);
\draw (-0.6, -1.25) -- (t2);
\draw (-1.1, 0.4) -- (t3);
\draw (-0.95, 0.8) -- (t3);
\end{tikzpicture}
\subcaption{}
\end{minipage}
\caption{%
Examples of $(S, T)$ that are in the LGV position.
Gray areas represent planar graphs.
}\label{fig:LGV-ST}
\end{figure}
| {
"timestamp": "2020-05-11T02:09:11",
"yymm": "1912",
"arxiv_id": "1912.00620",
"language": "en",
"url": "https://arxiv.org/abs/1912.00620",
"abstract": "Spanning trees are a representative example of linear matroid bases that are efficiently countable. Perfect matchings of Pfaffian bipartite graphs are a countable example of common bases of two matrices. Generalizing these two examples, Webb (2004) introduced the notion of Pfaffian pairs as a pair of matrices for which counting of their common bases is tractable via the Cauchy-Binet formula.This paper studies counting on linear matroid problems extending Webb's work. We first introduce \"Pfaffian parities\" as an extension of Pfaffian pairs to the linear matroid parity problem, which is a common generalization of the linear matroid intersection problem and the matching problem. We enumerate combinatorial examples of Pfaffian pairs and parities. The variety of the examples illustrates that Pfaffian pairs and parities serve as a unified framework of efficiently countable discrete structures. Based on this framework, we derive celebrated counting theorems, such as Kirchhoff's matrix-tree theorem, Tutte's directed matrix-tree theorem, the Pfaffian matrix-tree theorem, and the Lindström-Gessel-Viennot lemma.Our study then turns to algorithmic aspects. We observe that the fastest randomized algorithms for the linear matroid intersection and parity problems by Harvey (2009) and Cheung-Lau-Leung (2014) can be derandomized for Pfaffian pairs and parities. We further present polynomial-time algorithms to count the number of minimum-weight solutions on weighted Pfaffian pairs and parities. Our algorithms make use of Frank's weight splitting lemma for the weighted matroid intersection problem and the algebraic optimality criterion of the weighted linear matroid parity problem given by Iwata-Kobayashi (2017).",
"subjects": "Data Structures and Algorithms (cs.DS); Combinatorics (math.CO); Optimization and Control (math.OC)",
"title": "Pfaffian Pairs and Parities: Counting on Linear Matroid Intersection and Parity Problems",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109547583621,
"lm_q2_score": 0.7185943805178139,
"lm_q1q2_score": 0.7076796179617421
} |
https://arxiv.org/abs/1603.06352 | Online Learning with Low Rank Experts | We consider the problem of prediction with expert advice when the losses of the experts have low-dimensional structure: they are restricted to an unknown $d$-dimensional subspace. We devise algorithms with regret bounds that are independent of the number of experts and depend only on the rank $d$. For the stochastic model we show a tight bound of $\Theta(\sqrt{dT})$, and extend it to a setting of an approximate $d$ subspace. For the adversarial model we show an upper bound of $O(d\sqrt{T})$ and a lower bound of $\Omega(\sqrt{dT})$. | \section{Introduction}
Arguably the most well known problem in online learning theory is the so-called \emph{prediction with expert advice} problem. In its simplest form, a learner wishes to make an educated decision and at each round chooses to take the advice of one of $N$ experts. The learner then suffers a loss between $0$ and $1$. Her goal is to minimize her regret, the difference between her cumulative loss and that of the best single expert in hindsight.
It is a standard result in online learning that, without further assumptions, the best strategy for the learner will incur $\Theta(\sqrt{T \log N})$ regret~\citep{CesaBianchiLugosi}. However, it is natural to assume that while experts are abundant, their decisions rest on common paradigms and their decision making involves only a few degrees of freedom -- for example, if experts are indeed experts, their political bias, social background or school of thought largely dominates their decision making. Experts can also be assets on which the learner wishes to distribute her wealth. In this setting, weather, market conditions and interests are dominant factors.
It is also sensible to assume that one can exploit this structure to achieve better regret bounds, potentially independent of the actual number of experts while still maintaining a strategy of picking an expert's advice at each round. Our main result is of this flavor and we show how a learner can exploit hidden structure in the problem in an online setting.
We model the problem as follows: We assume that each expert $i$ corresponds to a vector $u_i \in \mathbb{R}^d$ where $d$ is potentially small. Then at each round the loss of expert $i$ is the scalar product of $u_i$ with a vector $v_t$ chosen arbitrarily, and possibly in an adversarial manner. The learner does not observe the chosen embedding of the experts in Euclidean space nor the vectors $v_t$, and can only observe the loss of each expert.
To further motivate our setting, let us consider the low rank expert model in the stochastic case. It is well known that for linear predictors in $d$-dimensional space the regret will be $O(\sqrt{dT})$, independent of the number of experts. Indeed, we show that a simple follow the leader algorithm will achieve this regret bound. In fact, one novelty of this paper is a regret bound that depends on an approximate rank -- formally we show that one can improve on the $O(\sqrt{T\log N})$ regret bound and derive bounds that depend on the \emph{approximate rank} rather than the number of experts.
The non-stochastic setting is more challenging. It is true that for linear predictors in $d$ dimensions one can achieve an $O(\sqrt{dT})$ regret bound even in the non-stochastic case. But the result assumes that the learner has access to the geometric structure of the problem, namely, the embedding of the experts in Euclidean space. Given the embedding one can apply a \emph{Follow the Regularized Leader} approach with proper regularization to derive the desired regret bound.
Our main result is a regret minimization algorithm that achieves an $O(d\sqrt{T})$ regret in this low $d$-rank setting, when the learner
{\em does not} have access to the experts' embedding in Euclidean space.
Our algorithm does not need to know the value of the rank $d$, and adapts to it on the fly.
Thus we demonstrate a regret bound that is independent of the number of experts.
We accompany this upper bound with an $\Omega(\sqrt{dT})$ lower bound.
Our results are part of a larger agenda in online learning.
A working premise in Online Learning is that in most cases the stochastic case is the hardest case.
Indeed, the literature is filled with generalization bounds and their analogue regret bounds.
However, a striking difference is that the statistical bounds are often achieved using simple ERM algorithms,
that are oblivious to any structure in the problem, even if the structure is required for the generalization bounds to be valid.
In contrast, to achieve the analogue regret bound, one has to work harder.
For finite hypothesis classes the $\log N$ factor is achieved by a sophisticated algorithm,
and for more general convex problems in Euclidean space a problem-specific regularization needs to be invoked in order to achieve optimality.
Thus, a key difference is that online algorithms need to be tailored to the structure of the problem. This leads to the disappointing fact that to achieve optimal regret bounds, it is not enough for the problem to be structured but the learner needs to actively understand the structure.
Our current research is an attempt to better understand this key difference:
we wish to understand whether an online linear predictor can somehow exploit the geometry of the problem in an implicit manner, similarly to batch ERM algorithms, and how.
For this, we invoke a setting where the learner must choose its predictor without the a-priori ability to devise a regularizer.
Our findings so far demonstrate that even without access to the structure the learner can indeed overcome her dependence on the irrelevant parameter $N$.
Technically, one should compare our regret bound of $O(d\sqrt{T})$ to the standard regret bound of $O(\sqrt{T \log N})$.
For our bound to be superior one needs that $d =o (\sqrt{\log N})$;
while this can indeed be the case in various settings, our result can be better seen as a first step in a more general research direction.
We aim to understand how online algorithms can take advantage of structural assumptions in the losses, without being given any explicit information about it.
\subsection{Related Work}
Low rank assumptions are ubiquitous in the Machine Learning literature. They have been successfully applied to various problems, most notably to matrix completion (\citealp{candes2009exact, foygel2011concentration, srebro2004maximum}) but also in the context of classification with missing data (\citealp{goldberg2010transduction,hazan2015missing}) and large scale optimization \citep{shalev2011large}.
A similar problem that was studied in the literature is the \emph{Branching Experts Problem}~\citep{gofer2013regret}. In the branching expert problem $N$ potential experts are effectively only $k$ distinct experts, but the clustering of the experts to the $k$ clusters is unknown a-priori. This case can be considered as a special instance of our setting as indeed we can embed each expert as a $k$-dimensional vector. \cite{gofer2013regret} proved a sharp $\Theta(\sqrt{kT})$ regret (the bound is tight only when $k < c\log{N}$ for some constant $c>0$). It is perhaps worth noting that when effectively only $k$ experts appear, the stochastic bound is $O(\sqrt{T \log k})$, thus showing that in this similar problem, it is not true that the stochastic case is the hardest case.
\paragraph{Complexity measures for online learning.}
We are not the first to try and understand what is the proper analogue for ERM in the online setting. Notions like the VC-dimension and Rademacher complexities have been extended to notions of Littlestone-Dimension (\citealp{littlestone1988learning,shalev2011online}), and Sequential Rademacher Complexity \citep{rakhlin2010online} respectively.
The SOA algorithm suggested by \cite{ben2009agnostic} is a general framework for regret minimization that depends solely on the Littlestone dimension. However, the SOA algorithm is conceptually distinct from an ERM algorithm within our framework:
to implement the SOA algorithm, one has to have access to the structure of the class (specifically, one needs to compute the Littlestone dimension of subclasses within the algorithm).
Sequential Rademacher complexity seems like a powerful tool for improving our bounds and answering some of our open problems. There are also advances in constructing effective algorithms within this framework \citep{rakhlin2012relax}. However, as the branching expert example shows, there is no general argument that shows that structure in the problem leads to stochastic--analogue bounds on the complexity.
\paragraph{Learning from easy data.}
In another line of research, which is similar in spirit to ours, several authors attempt to go beyond worst-case analysis in online learning, and provide algorithms and bounds that can exploit deficiencies in the data.
Work in this direction includes the study of
worst-case robust online algorithms that can also adapt to stochastic i.i.d.~data (e.g., \citealp{NIPS2009_3795,rakhlin2013localization, de2014follow, sani2014exploiting}),
as well as the exploration of
various structural assumptions that can be leveraged for obtaining improved regret guarantees (e.g., \citealp{cesa2007improved,hazan2010extracting,hazan2011better,chiang2012online,rakhlin2013online}).
However, to the best of our knowledge, low rank assumptions in online learning have not been explored in this context.
\paragraph{Adaptive online algorithms.}
Online adaptive learning methods have recently been the topic of extensive study and are effective for large scale stochastic optimization in practice. One of the earliest and most widely used methods in this family is the AdaGrad algorithm \citep{duchi2011adaptive}, a subgradient descent method that dynamically incorporates knowledge of the geometry of the data from earlier iterations. Our problem can be cast into an online linear optimization problem and subgradient descent methods are indeed applicable. It might seem at first sight that adapting the regularization via AdaGrad can lead to desired results. However, the analysis of the AdaGrad algorithm can only yield an $O(\sqrt{dNT})$ bound on the regret in our low-rank setting. In fact, a closer inspection reveals that the $\sqrt{N}$ factor in the latter bound is unavoidable for AdaGrad: as we show in \apref{sec:adagrad}, in our setting the regret of AdaGrad is lower bounded by $\Omega(\min\{\sqrt{N},T\})$.
\section{Problem Setup and Main Results}
We recall the standard adversarial online experts model for $T$ rounds with $N$ experts.
At each round $t=1,\ldots,T$, the learner chooses a probability vector $x_t \in \Delta_N $, where $\Delta_N$ denotes the $N$-simplex, namely the set of all possible distributions over $N$ experts,
\[\Delta_N=\left\{ x\in \mathbb{R}^N \;:\; \forall i, ~ x(i) \ge 0 ~~\mbox{and}~~ \sum\nolimits_{i=1}^N x(i)=1 \right\}.\]
An adversary replies by choosing a loss vector $\ell_t \in [-1,1]^N$,%
\footnote{As will become apparent later, in our setup it is more natural to consider symmetric $[-1,1]$ loss values rather than the typical $[0,1]$ losses. The two variants of the problem are equivalent up to a simple shift and scaling of the losses---a transformation that preserves the rank of the loss matrix.}
and the learner suffers a loss
$x_t(\ell_t) = x_t\cdot \ell_t.$
The objective of the learner is to minimize her regret, which is defined as follows,
\[\mathrm{Regret}_T ~=~ \sum_{t=1}^T x_t\cdot \ell_t - \min_{i \in [N]} \sum_{t=1}^T \ell_t(i).\]
In the stochastic online experts model, the adversary selects a distribution $\mathcal{D}$ over the loss vectors in $ [-1,1]^N$,
and at time $t$ a random $\ell_t \in [-1,1]^N$ is drawn from $\mathcal{D}$. The regret is then
\[\mathrm{Regret}_T ~=~ \sum_{t=1}^T x_t\cdot \mathbb{E}[\ell_t] - \min_{i \in [N]}\sum_{t=1}^T \mathbb{E}[\ell_t(i)]~,\]
where the expectations are taken over the random loss vectors selected from $\mathcal{D}$.
In our setting, we wish to assume that there is a structure over the experts which implies that the loss vectors are structured,
and are derived from a low rank subspace.
Therefore we will add the following constraint over the adversary:
let $L\in \mathbb{R}^{N\times T}$ be the loss matrix obtained in hindsight (i.e., the $t$'th column of $L$ is $\ell_t$).
We restrict the feasible strategies of the adversary to those that satisfy:
\[\mathrm{rank}(L) =d ~.\]
An equivalent formulation of our model is as follows: An adversary chooses at the beginning of the game a matrix $U \in \mathbb{R}^{N\times d}$,
where each row corresponds to an expert.
At round $t$ the adversary chooses a vector $v_t$, and the learner gets to observe $\ell_t$ where $\ell_t = U v_t$.
The objective of the learner remains the same: to choose at each round a probability distribution $x_t$ that minimizes the regret.
We stress that the learner observes only the loss vectors $\ell_t$, and does not have access to either $U$ or the vectors $v_t$.
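The following Python snippet (ours, purely illustrative) generates a loss sequence of this form; the distributions of $U$ and of the vectors $v_t$, as well as the rescaling used to keep the losses in $[-1,1]$, are arbitrary choices made only for the example.
\begin{verbatim}
# Generating a loss sequence of rank at most d (illustrative).
import numpy as np

rng = np.random.default_rng(0)
N, d, T = 1000, 5, 200
U = rng.uniform(-1.0, 1.0, size=(N, d))      # hidden embedding of the experts
losses = []
for t in range(T):
    v_t = rng.uniform(-1.0, 1.0, size=d)     # adversary's hidden vector
    ell_t = U @ v_t
    ell_t /= max(1.0, np.abs(ell_t).max())   # keep the losses in [-1, 1]
    losses.append(ell_t)
L = np.column_stack(losses)                  # rank(L) <= d
\end{verbatim}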
\subsection{Main Results}
We next state the main results of this paper:
\begin{theorem}
\label{thm:main}
The $T$-round regret of \algref{alg:main} (described in \secref{sec:upper} below) is at most $O(d\sqrt{T})$, where $d=\mathrm{rank}(L)$.
\end{theorem}
We remark that a regret upper bound of $O(\sqrt{T} \min\set{d,\sqrt{\log{N}}})$ is attainable by combining the standard multiplicative-updates algorithm with our algorithm.%
\footnote{A standard way to accomplish this is by running the two online algorithms in parallel, and choosing between their predictions by treating them as two meta-experts in another multiplicative-weights algorithm.}
Our upper bound is accompanied by the following lower bound.
\begin{theorem}
\label{thm:lowbound}
For any online learning algorithm, $T$ and $d \le \log_2{N}$, there exists a sequence of loss vectors $\ell_1,\ldots \ell_T \in [-1,1]^N$ such that
\[ \mathrm{Regret}_T ~=~
\sum_{t=1}^T x_t \cdot \ell_t - \min_{i \in [N]}\sum_{t=1}^T \ell_t(i)
~\ge~ \sqrt{\frac{d T}{8}}~,
\]
and $\mathrm{rank}(L)=d$.
\end{theorem}
\section{Preliminaries}
\subsection{Notation}
\label{sec:notation}
Let $I_n$ be the $n\times n$ identity matrix. Let $\bm{1}_n$ be a vector of length $n$ with all $1$ entries.
The columns of a matrix $U$ are denoted by $u_1,u_2,\ldots$.
The $i$'th coordinate of a vector $x$ is denoted by $x(i)$.
For a matrix $M$, we denote by $M^\dagger$ the Moore-Penrose pseudo-inverse of $M$.
For a positive definite matrix $H \succ 0$ we will denote its corresponding norm
$\|x\|_H = \sqrt{ x^\top H x},$ and its dual norm
$\|x\|^*_H =\sqrt{ x^\top H^{-1} x}.$
Given a positive semi-definite matrix $M \succeq 0$ its corresponding Ellipsoid is defined as:
\[
\mathcal{E}(M) = \{x ~:~ x^\top M^\dagger x \le 1 \}~.
\]
\subsection{Ellipsoidal Approximation of Convex Bodies}
\label{sec:John}
A main tool in our algorithm is an Ellipsoid approximation of convex bodies. Recall John's theorem for symmetric zero-centered convex bodies.
\begin{theorem}[John's Theorem; e.g, \citealp{ball1997elementary}]
\label{thm:john}
Let $K$ be a convex body in $\mathbb{R}^d$ that is symmetric around zero (i.e., $K=-K$).
Let $\mathcal{E}$ be an ellipsoid with minimum volume enclosing $K$.
Then:
\[ \frac{1}{\sqrt{d}}\mathcal{E} \subseteq K \subseteq \mathcal{E}.\]
\end{theorem}
While computing the minimum volume enclosing ellipsoid is computationally hard, for symmetric convex bodies it can be approximated to within a $(1+\epsilon)$ factor in polynomial time.
Specifically, given as input a matrix $A \in \mathbb{R}^{N\times d}$, consider the polytope $P_A= \set{x \,:\, \| Ax \|_\infty \leq 1}$. We have the following.
\begin{theorem}[\citealp{grotschel2012geometric}, Theorem~4.6.5]
There exists a poly-time procedure $\mvee{A}$ that receives as input a matrix $A\in \mathbb{R}^{N\times d}$ and returns a matrix $M$ such that
\begin{equation*}
\frac{1}{\sqrt{2d}} \mathcal{E}(M) \subseteq P_A \subseteq \mathcal{E}(M).
\end{equation*}
\end{theorem}
\subsection{Online Mirror Descent}
\label{sec:omd}
Another main tool in our analysis is the well-known \emph{Online Mirror Descent} algorithm for online convex optimization. Online Mirror Descent is a subgradient descent method for optimization over a convex set in $\mathbb{R}^d$ that employs a regularization function chosen a-priori. In \algref{alg:omd} we describe the algorithm for the special case where the convex set is $\Delta_N$ and the regularization function is chosen to be $\|\cdot\|^2_H$ for some input matrix $H\succ 0$:
\begin{algorithm}[h!]
\caption{OMD: Online Mirror Descent}
\begin{algorithmic}[1]
\STATE \textbf{input:} $H\succ 0$, $\{\eta_t\}_{t=1}^T$, $x_1\in \Delta_N$.
\FOR {$t=1$ to $T$}
\STATE Play $x_t$
\STATE Suffer cost $x_{t}\cdot \ell_t$ and observe $\ell_t$
\STATE Update
\[x_{t+1} = \arg\min_{x\in \Delta_N}\ell_t\cdot x + \eta_t^{-1}\|x-x_t\|^2_H.\]
\ENDFOR
\end{algorithmic}
\label{alg:omd}
\end{algorithm}
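A direct way to carry out the update in \algref{alg:omd} is to solve the small quadratic program over the simplex numerically. The following Python sketch is ours and uses SciPy's SLSQP solver; it is meant only to illustrate the update, not as an efficient or exact implementation.
\begin{verbatim}
# One OMD update over the simplex with regularizer ||x - x_t||_H^2 (sketch).
import numpy as np
from scipy.optimize import minimize

def omd_step(x_t, ell_t, H, eta):
    N = len(x_t)
    obj = lambda x: ell_t @ x + (x - x_t) @ H @ (x - x_t) / eta
    simplex = ({"type": "eq", "fun": lambda x: np.sum(x) - 1.0},)
    res = minimize(obj, x_t, method="SLSQP",
                   bounds=[(0.0, 1.0)] * N, constraints=simplex)
    return res.x
\end{verbatim}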
The regret bound of the algorithm is dependent on the choice of regularization and is given as follows:
\begin{lemma}[e.g., \citealp{hazan2015online}]
\label{lem:omd}
The $T$-round regret of the OMD algorithm (\algref{alg:omd}) is bounded as follows:
\[
\sum_{t=1}^T \ell_t \cdot x_t - \sum_{t=1}^T \ell_t \cdot x^*
\leq
\frac{1}{\eta_T} \norm{x_1-x^*}_H^2
+ \frac{1}{2} \sum_{t=1}^T \eta_t (\norm{\ell_t}_H^*)^2
~.
\]
\end{lemma}
\subsection{Rademacher Complexity}
\label{sec:radamacher}
Our tool for analyzing the stochastic case will be the Rademacher complexity;
specifically, we will use it to bound the regret of a ``Follow The Leader'' (FTL) algorithm.
Recall that the FTL algorithm selection rule is defined as follows:
\[ x_t = \arg\min_{x \in \Delta_N} \sum_{i=1}^{t-1} \ell_i \cdot x.\]
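Since the objective is linear and the domain is the simplex, a minimizer can always be taken to be a vertex, i.e., FTL simply puts all its mass on the best expert so far. A minimal illustrative sketch (ties broken arbitrarily; not taken from the paper):
\begin{verbatim}
import numpy as np

def ftl(past_losses, N):
    """FTL over the simplex: all mass on the expert with smallest cumulative loss."""
    if len(past_losses) == 0:                # first round: no information yet
        return np.ones(N) / N
    cum = np.sum(past_losses, axis=0)        # cumulative loss per expert
    x = np.zeros(N)
    x[int(np.argmin(cum))] = 1.0
    return x
\end{verbatim}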
One way to bound the regret of the FTL algorithm in the stochastic case is by bounding the Rademacher complexity of the feasible samples. Recall that the Rademacher complexity of a class of target functions $\mathcal{F}$ over a sample $S_t=\{\ell_1,\ldots, \ell_t\}$ is defined as follows:
\[R(\mathcal{F},S_t) = \mathbb{E}_{\sigma} \left[ \sup_{f\in \mathcal{F}} \frac{1}{t}\sum_{i=1}^t \sigma_{i} f(\ell_i)\right],\]
where $\sigma \in \{-1,1\}^t$ are i.i.d.~Rademacher distributed random variables.
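Over the simplex the supremum in this definition is attained at a vertex, so $R(\Delta_N,S_t)$ is the expected maximum coordinate of $\frac{1}{t}\sum_{i} \sigma_i\ell_i$ and can be estimated by Monte Carlo, as in the following illustrative sketch (not part of the paper):
\begin{verbatim}
import numpy as np

def rademacher_simplex(S, num_draws=1000, seed=0):
    """Monte Carlo estimate of R(Delta_N, S) for a t-by-N array S of loss vectors."""
    rng = np.random.default_rng(seed)
    t = S.shape[0]
    vals = []
    for _ in range(num_draws):
        sigma = rng.choice([-1.0, 1.0], size=t)   # i.i.d. Rademacher signs
        vals.append(np.max(sigma @ S) / t)        # sup over the simplex = max coordinate
    return float(np.mean(vals))
\end{verbatim}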
The following bound is standard and well known, and for completeness we provide a proof in \apref{sec:standard}.%
\footnote{Surprisingly, we could not find any specific reference that precisely derives it.}
\begin{lemma}
\label{lem:standard}
Let $K$ be a symmetric convex set centered around zero in $\mathbb{R}^d$. Recall that the dual set $K^*$ is defined as follows:
\[K^* = \{x: \sup_{y\in K} |y\cdot x| \le 1\}.\]
Let $S_t=\{\ell_1,\ldots,\ell_t\}\subseteq K$ and let $\mathcal{F}\subseteq \alpha K^*$ be a subclass of linear functions, then:
\[ R(\mathcal{F},S_t) \le \alpha\sqrt{\frac{d}{t}}.\]
\end{lemma}
Another standard bound applies to the case where $\mathcal{F}$ is bounded in the $l_1$-norm.
\begin{lemma}[\citealp{kakade2009complexity}]
\label{lem:radl1}
Let $S_t=\{\hat{\ell}_1,\ldots,\hat{\ell}_t\}\subseteq \mathbb{R}^N$ and let $\mathcal{F}_1$
be a subclass of linear functions such that $\sup\{ \|f\|_1 : f\in \mathcal{F}_1\} \leq 1$; then:
\[ R(\mathcal{F}_1,S_t) \le \max_i \|\hat{\ell_i}\|_\infty \sqrt{\frac{2\log N}{t}}.\]
\end{lemma}
The Rademacher complexity is a powerful tool in statistical learning theory and it allows us to bound the generalization error of an FTL algorithm. Namely, for every sample $S_t=\{\ell_1,\ldots,\ell_t\}$ denote:
\[f_S = \arg\min_{f\in \mathcal{F}} \sum_{i=1}^t f(\ell_i).\]
Then we have the following bound for every $f^* \in \mathcal{F}$ (for i.i.d.~loss vectors; see for example \citealp{shalev2014understanding}):
\[
\mathop\mathbb{E}_{S_t\sim D}\mathop\mathbb{E}_{\ell\sim D} [f_{S_t}(\ell)- f^*(\ell)]
\le 2 \mathop\mathbb{E}_{S_t\sim D} [ R(\mathcal{F},S_t) ] ~.
\]
Applying this to FTL in the experts setting we have, in terms of regret, that for any $x^*$:
\begin{align}
\label{eq:radbound}
\mathbb{E}\left[\sum_{t=1}^T \ell_t \cdot x_t - \ell_t\cdot x^*\right]
&=\sum_{t=1}^T \mathop \mathbb{E}_{\ell_1,\ldots,\ell_{t-1}\sim D} \mathop\mathbb{E}_{\ell_t\sim D}[\ell_t \cdot x_t - \ell_t\cdot x^*]\nonumber\\
&\le 2\sum_{t=1}^T \mathop\mathbb{E}_{S_{t-1}\sim D} [R(\Delta_N,S_{t-1})]~.
\end{align}
\section{Upper Bound}
\label{sec:upper}
In this section we discuss our online algorithm for the adversarial model, which is given in \algref{alg:main}.
The algorithm is a version of Online Mirror Descent with adaptive regularization.
It maintains a positive-definite matrix $H$, which is updated whenever the newly observed loss vector $\ell_t$ is not in the span of previously observed losses.
In all other time steps---i.e., when $\ell_t$ remains in the previous span---the algorithm performs an Online Mirror Descent type update (see \algref{alg:omd}), with the function $\norm{x}_H^2 = x^\top H x$ as a regularizer.
The algorithm updates the regularization matrix $H$ so as to adapt to the low-dimensional geometry of the set of feasible loss vectors.
Indeed, as our analysis below reveals, $H$ is an ellipsoidal approximation of a certain low-dimensional convex set in $\mathbb{R}^N$ to which the loss vectors $\ell_t$ can be localized.
This low-dimensional set is the intersection of the unit cube in $N$ dimensions---in which the loss vectors $\ell_t$ reside by definition---and the low dimensional subspace spanned by previously observed loss vectors, given by $\mathrm{span}(U)$.
Whenever the latter subspace changes, namely, once a newly observed loss vector leaves the span of previous vectors, the ellipsoidal approximation is recomputed and the matrix $H$ is updated accordingly.
\begin{algorithm}[h!]
\caption{Online Low Rank Experts}
\begin{algorithmic}[1]
\STATE Initialize: $x_1= \frac{1}{N} \bm{1}_N$ , $\tau = 0$, $k = 0$, $U = \set{}$
\FOR {$t=1$ to $T$}
\STATE Observe $\ell_t$, suffer cost $x_{t}\cdot \ell_t$.
\IF{ $\ell_t \notin \mathrm{span}(U)$}
\STATE Add $\ell_t$ as a new column of $U$, reset $\tau=0$, and set $k \leftarrow k+1$.
\STATE Compute $M=\mvee{U^\top}$ and $H=I_n +U^\top M U$.
\ENDIF
\STATE let $\tau \leftarrow \tau+1$ and $\eta_t = 4\sqrt{k/\tau}$, and set:
\[x_{t+1} = \arg\min_{x\in \Delta_N}\ell_t\cdot x + \eta_t^{-1}\|x-x_t\|^2_H.\]
\ENDFOR
\end{algorithmic}
\label{alg:main}
\end{algorithm}
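A minimal Python skeleton of \algref{alg:main} is given below for illustration; it is not the paper's implementation. The routines \texttt{mvee} (the approximate minimum-volume enclosing ellipsoid procedure of \secref{sec:John}) and \texttt{omd\_step} (an OMD update, e.g., the sketch given after \algref{alg:omd}) are taken as inputs, and for dimensional consistency the sketch follows the convention that observed losses are stacked as the columns of an $N\times k$ matrix, so the regularizer update is written as $H = I_N + U M U^\top$ under that convention.
\begin{verbatim}
import numpy as np

def in_span(cols, ell, tol=1e-9):
    """Check whether the loss vector ell lies in the span of previously observed losses."""
    if len(cols) == 0:
        return np.allclose(ell, 0.0, atol=tol)
    U = np.column_stack(cols)                          # N x k
    coeff = np.linalg.lstsq(U, ell, rcond=None)[0]
    return np.linalg.norm(U @ coeff - ell) <= tol

def low_rank_experts(loss_stream, N, mvee, omd_step):
    """Skeleton of the online low rank experts algorithm; mvee/omd_step are placeholders."""
    x, cols, tau, k, H = np.ones(N) / N, [], 0, 0, np.eye(N)
    for ell in loss_stream:
        # (the cost x @ ell is suffered here, then ell is observed)
        if not in_span(cols, ell):
            cols.append(np.asarray(ell, dtype=float))
            tau, k = 0, k + 1
            U = np.column_stack(cols)                  # N x k
            M = mvee(U)                                # k x k ellipsoid matrix
            H = np.eye(N) + U @ M @ U.T                # adaptive regularizer
        tau += 1
        eta = 4.0 * np.sqrt(k / tau) if k > 0 else 1.0
        x = omd_step(x, ell, H, eta)
    return x
\end{verbatim}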
To derive \thmref{thm:main}, we begin by analyzing a simpler case where the learner is aware of the subspace from which the losses are drawn. Specifically, assume that at the beginning of the game the learner is equipped with a rank-$d$ matrix $U$ such that all losses satisfy $\ell_1,\ell_2,\ldots \in \mathrm{span}(U)$, where $\mathrm{span}(U)$ denotes the span of the columns of the matrix $U$.
In this simplified setting, we can obtain a regret bound of $O(\sqrt{dT})$ via John's theorem (\thmref{thm:john}).%
\footnote{We remark that for the simplified setting, the $O(\sqrt{dT})$ regret bound is in fact tight, as our $\Omega(\sqrt{dT})$ lower bound (given in \secref{sec:lower}) applies in a setting where the subspace of the loss vectors is known a-priori to the learner.}
As discussed above, the loss vectors $\ell_1,\ldots,\ell_T$ can be localized to the intersection of the unit cube in $N$ dimensions with the $d$-dimensional subspace spanned by the columns of $U$.
Then, John's theorem asserts that the minimal-volume enclosing ellipsoid of the intersection is a $\sqrt{d}$-approximation to the set of feasible loss vectors.
\begin{theorem}
\label{thm:known}
Run \algref{alg:omd} with input $H$, $\{\eta_t\}$, and $x_1$ defined as follows: (i) $H=I_n + U^\top M U$, where $M=\mvee{U^\top}$;
(ii) $\eta_t= 4\sqrt{d/t}$, where $d=\mathrm{rank}(U)$; and (iii) $x_1 \in \Delta_N$ arbitrary.
If $\ell_1,\ldots, \ell_T \in \mathrm{span}(U)$, then the $T$-round regret of the algorithm is at most $8\sqrt{dT}$.
\end{theorem}
\begin{proof}
Consider the $d$-dimensional polytope \[P = \set{v \in \mathbb{R}^d \,:\, \norm{U^\top v}_\infty \le 1}.\]
Then by John's Theorem (Theorem~\ref{thm:john}),
we have,
\begin{equation}\label{eq:MPM}
\mathcal{E}(\tfrac{1}{2d} M) ~\subseteq~ P ~\subseteq~ \mathcal{E}(M) ~.
\end{equation}
In order to apply \lemref{lem:omd}, we need to bound both $\norm{\ell_t}_H^*$ and $\norm{x_1-x^*}_H^2$.
We first bound the norms $\norm{\ell_t}_H^*$.
Notice that for each loss vector $\ell_t$ there exists $v_t \in P$ such that $\ell_t = U^\top v_t$ (as $\ell_t \in \mathrm{span}(U)$ and $\norm{\ell_t}_\infty \le 1$).
Thus, we can write,
\[ (\norm{\ell_t}_H^*)^2
=
\ell_t^\top H^{-1} \ell_t
=
v_t^\top U (I_n + U^\top M U)^{-1} U^\top v_t
\leq
v_t^\top U (U^\top M U)^\dagger U^\top v_t
=
v_t^\top M^{-1} v_t
~,
\]
where we have used \lemref{lem:LML} (see \apref{ap:technical}).
Now, since $v_t \in P$ and $\mathcal{E}(M)$ is enclosing~$P$, we obtain $v_t^\top M^{-1} v_t \le 1$.
This proves that $$(\norm{\ell_t}_H^*)^2 \le 1.$$
Next we bound $\|x_1-x^*\|_H$.
Since $\|x_1-x^*\|_H \leq 2 \max_{x \in \symplex_n} \norm{x}_H$, it suffices to bound
$ \max_{x \in \symplex_n} \norm{x}_H$.
Hence, our goal is to show that $\norm{x}_H \le 2\sqrt{d}$ for all $x \in \symplex_n$.
Since $\norm{x}_H^2 = \norm{x}_2^2 + 2d \, \norm{x}_{H'}^2 \le 1 + 2d \, \norm{x}_{H'}^2$ with $H' = \tfrac{1}{2d} U^\top M U$, it is enough to bound the norm $\norm{x}_{H'}^2$.
Given a convex set in $\mathbb{R}^d$, recall that the dual set is given by
\[ P^*= \{x: \sup_{p\in P} |x\cdot p| \le 1\}.\]
The dual of an ellipsoid $\mathcal{E}(M)$ is given by $(\mathcal{E}( M))^*= \mathcal{E}(M^{-1})$ and it is standard to show that \eqref{eq:MPM} implies in the dual:
\[ (\mathcal{E}( M))^* \subseteq P^* \subseteq (\mathcal{E}( \tfrac{1}{2d}M))^*.\]
Taken together we obtain that $P^*\subseteq \mathcal{E}(2d M^{-1})$.
Note that by definition the columns of $U$ are in $P^*$, hence, for every $u_i$,
\[\|u_i\|_M^2\le 2d.\]
Since $x\in \Delta_N$,
\[
\|x\|_{H'}^2= \tfrac{1}{2d}\|Ux\|_M^2\le \tfrac{1}{2d}\max_i \|u_i\|_M^2 \le 1
~.
\]
Equipped with the bounds $\norm{x}_H \le \sqrt{1+2d}\leq 2\sqrt{d}$ for all $x \in \symplex_n$ and $\norm{\ell_t}_H^* \le 1$ for all $t$, we are now ready to analyze the regret of the algorithm, which via \lemref{lem:omd} can be bounded as follows:
\begin{align*}
\mathrm{Regret}_T &= \sum_{t=1}^T \ell_t \cdot x_t - \sum_{t=1}^T \ell_t \cdot x^*\\
&\leq \frac{1}{\eta_T} \norm{x_1-x^*}_H^2
+ \frac{1}{2} \sum_{t=1}^T \eta_t (\norm{\ell_t}_H^*)^2
& \hfill \because~~ \textrm{\lemref{lem:omd}}\\
&\leq
\frac{4}{\eta_T} \max_{x \in \symplex_n} \norm{x}_H^2
+ \frac{1}{2} \sum_{t=1}^T \eta_t (\norm{\ell_t}_H^*)^2
& \because~~ \|x_1-x^*\|_H \leq 2 \max_{x \in \symplex_n} \norm{x}_H \\
&\leq
\frac{16d}{\eta_T} + \frac{1}{2} \sum_{t=1}^T \eta_t
& \because~~ \max_{x \in \symplex_n} \norm{x}_H^2 \leq 4d, ~ \norm{\ell_t}_H^*\leq 1
.
\end{align*}
A choice of $\eta_t = 4\sqrt{d/t}$, together with the inequality $\sum_{t=1}^T 1/\sqrt{t} \le 2\sqrt{T}$, gives the theorem.
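Explicitly, with $\eta_t = 4\sqrt{d/t}$ the last bound evaluates to
\[
\frac{16d}{\eta_T} + \frac{1}{2}\sum_{t=1}^T \eta_t
= 4\sqrt{dT} + 2\sqrt{d}\sum_{t=1}^T \frac{1}{\sqrt{t}}
\le 4\sqrt{dT} + 4\sqrt{dT} = 8\sqrt{dT}~.
\]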
\end{proof}
The $d$-low rank setting does not assume that the learner has access to the subspace $U$, and potentially an adversary may adapt her choice of subspace to the learner's strategy. However, the learner can still obtain regret bounds that are independent of the number of experts. We are now ready to prove \thmref{thm:main}.\\
\begin{proof}[Proof of \thmref{thm:main}]
Let $t_0 = 1$, $t_{d+1} = T$, and for all $1 \le k \le d$ let $t_k$ be the round in which the $k$'th column is added to $U$. Also, let $T_k = t_{k+1}-t_k$ be the length of the $k$'th epoch.
Notice that between rounds $t_k$ and $t_{k+1}$ the algorithm's execution is identical to \algref{alg:omd} with the input described in \thmref{thm:known}. Therefore its regret in this time period is at most $8\sqrt{kT_k}$.
The total regret is then bounded by
\begin{align*}
8 \sum_{k=0}^d \sqrt{k T_k}
\leq
8 \sqrt{\sum_{k=0}^d k} \cdot \sqrt{\sum_{k=0}^d T_k}
\leq
8d \sqrt{T}
~,
\end{align*}
where the first inequality is by the Cauchy--Schwarz inequality and the second uses $\sum_{k=0}^d k \le d^2$ and $\sum_{k=0}^d T_k = T$; the theorem follows.
\end{proof}
\subsection{Stochastic Online Experts}
\label{sec:stochastic}
We now turn to analyze the regret in the stochastic model, where the loss vectors $\ell_t$ are chosen i.i.d.~from some unknown distribution.
In this case we can achieve a tight regret bound of $O(\sqrt{d T})$ using a simple ``Follow The Leader" (FTL) algorithm.
We will in fact show an even stronger result for the stochastic case: an approximate low rank is already enough to bound the regret.
Recall that the approximate rank, $\mathop{\mathrm{rank}_\epsilon}(L)$, of a matrix is defined as follows (see \citealp{alon2013approximate}):
\[{\mathop{\mathrm{rank}_\epsilon}}(L)= \min \{ \mathrm{rank}(L') : \|L'-L\|_{\infty} < \epsilon\}.\]
The following statement is the main result for this section:
\begin{theorem}
\label{thm:stochastic}
Assume that an adversary chooses her losses $\{\ell_t\}$ i.i.d.~from some distribution $\mathcal{D}$ supported on $[-1,1]^N$. Then the $T$-round regret of the FTL algorithm is bounded by:
\[ \mathrm{Regret}_T \le 8\mathbb{E}\left[ \sqrt{T \cdot \mathop{\mathrm{rank}_\epsilon}(L)}\right] + \epsilon \sqrt{T \log N},\]
for every $0\le \epsilon<1$. In particular, if $\mathrm{rank}(L) \le d$ almost surely, then
$\mathrm{Regret}_T = O(\sqrt{dT}).$
\end{theorem}
\begin{proof}
Our proof relies on \eqref{eq:radbound} and a bound for $R(\Delta_N,S_t)$.
Fix a sequence $S_T=\{\ell_1,\ldots,\ell_T\}$, let $d=\mathop{\mathrm{rank}_\epsilon}(L)$, and let $U$ be an $N\times d$ matrix such that
\[L= U V + \hat{L}~,\]
where $\max_{i,j} |\hat{L}_{i,j}|<\epsilon$. We denote the columns of $\hat{L}$ by $\hat{\ell}_1,\ldots, \hat{\ell}_T$ and the columns of $V$ by $v_1,\ldots,v_T$.
We define a symmetric convex set centered around zero in $\mathbb{R}^d$ as follows:
\[K= \{v: \sup\nolimits_{i} |u_i \cdot v| \le 2\}~,\]
where $u_1,\ldots,u_N\in\mathbb{R}^d$ denote the rows of $U$.
Note that $v_t \in K$ for every $t$ whenever $\epsilon\leq 1$, since $|u_i\cdot v_t| = |\ell_t(i) - \hat{\ell}_t(i)| \le 1+\epsilon \le 2$. By the definition of $K$ we have
$u_i\in 2 K^*$ for every $i$. One can verify that $K^*$ is convex, hence if we let $\mathcal{F}=\mathrm{conv}(u_1,\ldots, u_N)$ we have that $\mathcal{F}\subseteq 2 K^*$. We can think of $\mathcal{F}$ as a class of linear functions, where $f_u(v)= u\cdot v$.
It follows by \lemref{lem:standard} (with $\alpha=2$) that $R(\mathcal{F},S_t) \le 2\sqrt{d/t}$. Finally,
\[
R(\Delta_N ,S_t) = \mathbb{E} \left[\sup_{x\in \Delta_N} \sum_{i=1}^t\frac{1}{t} \sigma_i x\cdot \ell_i \right]
\le \mathbb{E} \left[ \sup_{x\in \Delta_N} \frac{1}{t}\sum_{i=1}^t \sigma_i x\cdot U v_i \right]
+ \mathbb{E} \left[ \sup_{x\in \Delta_N}\frac{1}{t}\sum_{i=1}^t \sigma_i x\cdot \hat{\ell}_i \right] ~.
\]
Next, we have:
\begin{align}\label{eq:rankbound}
\mathbb{E} \left[ \sup_{x\in \Delta_N} \frac{1}{t}\sum_{i=1}^t \sigma_i x\cdot U v_i \right]
&=\mathbb{E} \left[ \sup_{x\in \Delta_N} \frac{1}{t}\sum_{i=1}^t \sigma_i (U^\top x)\cdot v_i \right] \nonumber\\
&= \mathbb{E} \left[ \sup_{f\in \mathrm{conv}(u_1,\ldots,u_N)} \frac{1}{t}\sum_{i=1}^t \sigma_i f(v_i) \right] \nonumber\\
&=R(\mathcal{F},S_t)\le 2\sqrt{\frac{d}{t}}~.
\end{align}
and by \lemref{lem:radl1},
\begin{equation}\label{eq:noisebound}
\mathbb{E} \left[ \sup_{x\in \Delta_N} \frac{1}{t} \sum_{i=1}^t \sigma_i x\cdot \hat{\ell}_i \right]
\le \epsilon \sqrt{\frac{2 \log N}{t}} ~.
\end{equation}
Taking \eqref{eq:rankbound} and \eqref{eq:noisebound} together, we have:
\[R(\Delta_N ,S_t) \le 2\sqrt{\frac{\mathop{\mathrm{rank}_\epsilon}(L)}{t}}+\epsilon \sqrt{\frac{2 \log N}{t}}~.\]
The statement now follows from \eqref{eq:radbound}.
\end{proof}
\section{Lower Bound}
\label{sec:lower}
We now prove \thmref{thm:lowbound}. For our proof we will rely on lower bounds for online learning of hypotheses classes with respect to the Littlestone dimension (see \citealp{shalev2011online}). For a class $\mathcal{H}$ of target functions $h:\mathcal{X}\to \{0,1\}$, the Littlestone dimension $\mathrm{Ldim}(\mathcal{H})$ measures the complexity, or online learnability, of the class.
To define $\mathrm{Ldim}(\mathcal{H})$ one considers trees whose internal nodes are labeled by instances. Any branch in such a tree can be described as a sequence of examples $(x_1,y_1),\ldots, (x_d,y_d)$, where $x_i$ is the instance associated with the $i$'th node in the path, $y_i=1$ if $x_{i+1}$ is the right child of the $i$'th node, and $y_i=0$ otherwise.
$\mathrm{Ldim}(\mathcal{H})$ is then defined as the depth of the largest binary tree that is shattered by $\mathcal{H}$. An instance-labeled tree is said to be shattered by a class $\mathcal{H}$ if for any root-to-leaf path $(x_1,y_1),\ldots,(x_d,y_d)$ there is some $h\in \mathcal{H}$ such that $h(x_i)=y_i$ for all $i$.
To prove \thmref{thm:lowbound}, we need the following result about the Littlestone dimension.
\begin{lemma}[\citealp{ben2009agnostic}]
\label{lem:pal}
Let $\mathcal{H}$ be any hypothesis class with finite Littlestone dimension $\mathrm{Ldim}(\mathcal{H})$.
For any (possibly randomized) algorithm, there exists a sequence of labeled instances $(v_1,y_1),\ldots, (v_T,y_T)$ with $y_t\in \{0,1\}$ such that
\[\mathbb{E}\left[ \sum_{t=1}^T |\hat{y}_t - y_t| \right] - \min_{h\in \mathcal{H}} \sum_{t=1}^T |h(v_t)-y_t| \ge \sqrt{\frac{\mathrm{Ldim}(\mathcal{H})T}{8}}~,\]
where $\hat{y}_t$ is the algorithm's output at iteration $t$.
\end{lemma}
\begin{proof}[Proof of \thmref{thm:lowbound}]
We let $\mathcal{H}$ be the $2^d$ vertices of the $d$-dimensional hypercube.
We define a function class $\mathcal{F}$ over the domain $\mathcal{X} = \{e_1,\ldots, e_d\}$ of standard basis vectors.
A function $f_u \in \mathcal{F}$ is labeled by $u \in \mathcal{H}$ and defined on the basis vectors $e_j$ as follows:
\[
f_u(e_j) = \begin{cases}
0 & \text{if~~$u(j)=-1$,} \\
1 & \text{if~~$u(j)=1$.}
\end{cases}
\]
One can verify that $\mathrm{Ldim}(\mathcal{F})=d$.
For each $u_i \in \mathcal{H}$ and $y\in \{0,1\}$, we can write $$|f_{u_i}(e_j) -y| = \frac{1-(2y-1)\cdot u_i\cdot e_j}{2}.$$
By \lemref{lem:pal},
we deduce that for any algorithm,
there exists a sequence $(v_1,\bar{y}_1),\ldots,(v_T,\bar{y}_T)$ of standard basis vectors $v_1,\ldots,v_T$ and $\bar{y}_1,\ldots \bar{y}_T \in \{-1,1\}$ such that:
\begin{equation}\label{eq:lowbound}
\sum_{t=1}^T \sum_i x_t(i) u_i \cdot (\bar{y}_t v_t ) - \min_{u} \sum_{t=1}^T u\cdot (\bar{y}_t v_t ) \ge 2\sqrt{\frac{dT}{8}}~.
\end{equation}
We now consider an adversary that chooses $U$ as the expert matrix; at round $t$ the learner observes $\ell_t = U(\bar{y}_t v_t)$. The lower bound now follows from \eqref{eq:lowbound}; the fact that $\mathrm{rank}(L)=d$ follows from the fact that our experts are embedded in $\mathbb{R}^d$.
\end{proof}
\section{Discussion and Open Problems}
We considered the problem of experts with a hidden low rank structure. Our findings are that in the non-stochastic case, similar to the stochastic case, the regret bounds are independent of the number of experts.
The most natural question is then to bridge the gap between the upper and lower bounds:
\begin{question}
Is there an algorithm that can achieve regret $O(\sqrt{dT})$ for any sequence $\ell_1,\ldots,\ell_T$ such that $\mathrm{rank}(L)=d$?
Alternatively, can one prove a lower bound of $\Omega(d\sqrt{T})$?
\end{question}
As discussed, our agenda is more general than the low-rank setting. Our aim is to construct new online algorithms that can exploit structure in the data, without explicit information on the structure. Other settings can also be considered within our framework.
Another interesting setting, which avoids dependence on the dimension, is to assume that the experts are embedded in a Hilbert space. By isomorphism of Hilbert spaces this is equivalent to an adversary that chooses an expert embedding matrix $U \in \mathbb{R}^{N\times N}$ such that $\|u_i\|_2\leq 1$ for every $u_i$, and correspondingly at each time point we receive a vector $v_t$ such that $\|v_t\|_2\leq 1$. As a result we have a factorization:
\[L = UV^\top, \qquad \|U\|_{2,\infty},\|V\|_{2,\infty} \leq 1,\]
where $\|X\|_{2,\infty} = \sup_{\|y\|\leq 1} \|X y\|_\infty$.
Recall the definition of the max-norm, also called the $\gamma_2$-norm \citep{srebro2005rank}:
\[\|L\|_{\max} = \min_{UV^\top=L} \|U\|_{2,\infty}\cdot \|V\|_{2,\infty} .\]
Thus, similar to the low rank setting we can define this setting as follows: At each round a learner chooses $x_t \in \Delta_N $, an adversary replies by choosing a loss vector $\ell_t$, and the learner incurs the corresponding loss. The adversary is restricted to strategies such that
\[ \|L\|_{\max} \le 1.\]
The importance of this setting is that the proper generalization bound for this case is dimension independent (e.g., \citealp{kakade2009complexity}). Hence, we ask the following question:
\begin{question}
Is there an algorithm that can achieve regret $O(\sqrt{T})$ for any sequence $\ell_1,\ldots,\ell_T$ such that $\|L\|_{\max}\le 1$?
\end{question}
We can also generalize this setting to any pair of norms $\|\cdot\|$ and its dual $\|\cdot\|^*$, where the description of the game remains the same. The adversary chooses an embedding $U$ of the experts with bounded $\|\cdot \|$ norm. Then, at each round he chooses a vector $v_t$ with bounded $\|\cdot \|^*$ norm.
Finally, a different interesting direction to pursue in future work is to extend the noisy result to the adversarial setting. Namely,
\begin{question}
Is there an algorithm that can achieve regret $O(\sqrt{dT}+ \epsilon\sqrt{T\log N} )$ for any sequence $\ell_1,\ldots,\ell_T$ such that $\mathop{\mathrm{rank}_\epsilon}(L) \leq d$?
\end{question}
\bibliographystyle{abbrvnat}
| {
"timestamp": "2016-05-24T02:15:06",
"yymm": "1603",
"arxiv_id": "1603.06352",
"language": "en",
"url": "https://arxiv.org/abs/1603.06352",
"abstract": "We consider the problem of prediction with expert advice when the losses of the experts have low-dimensional structure: they are restricted to an unknown $d$-dimensional subspace. We devise algorithms with regret bounds that are independent of the number of experts and depend only on the rank $d$. For the stochastic model we show a tight bound of $\\Theta(\\sqrt{dT})$, and extend it to a setting of an approximate $d$ subspace. For the adversarial model we show an upper bound of $O(d\\sqrt{T})$ and a lower bound of $\\Omega(\\sqrt{dT})$.",
"subjects": "Machine Learning (cs.LG)",
"title": "Online Learning with Low Rank Experts",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.984810952529396,
"lm_q2_score": 0.7185943805178139,
"lm_q1q2_score": 0.7076796163600195
} |
https://arxiv.org/abs/1804.01172 | Applying Computer Algebra Systems with SAT Solvers to the Williamson Conjecture | We employ tools from the fields of symbolic computation and satisfiability checking---namely, computer algebra systems and SAT solvers---to study the Williamson conjecture from combinatorial design theory and increase the bounds to which Williamson matrices have been enumerated. In particular, we completely enumerate all Williamson matrices of even order up to and including 70 which gives us deeper insight into the behaviour and distribution of Williamson matrices. We find that, in contrast to the case when the order is odd, Williamson matrices of even order are quite plentiful and exist in every even order up to and including 70. As a consequence of this and a new construction for 8-Williamson matrices we construct 8-Williamson matrices in all odd orders up to and including 35. We additionally enumerate all Williamson matrices whose orders are divisible by 3 and less than 70, finding one previously unknown set of Williamson matrices of order 63. | \section{Introduction}\label{sec:introduction}
In recent years SAT solvers have been used to solve or make progress on
mathematical conjectures which have otherwise
resisted solution from some of the world's best mathematicians.
Some prominent problems which fit into this trend
include the Erd\H{o}s discrepancy conjecture, which was open for
80 years and had a special case solved by~\cite{DBLP:conf/sat/KonevL14};
the Ruskey--Savage conjecture, which has been open for 25 years
and had a special case solved by~\cite{DBLP:conf/cade/ZulkoskiGC15};
the Boolean Pythagorean triples problem, which was open for 30 years
and solved by~\cite{heule2016solving}; and the
determination of the fifth Schur number, which was open for 100 years
and solved by~\cite{DBLP:journals/corr/abs-1711-08076}.
Although these are problems
which arise in completely separate fields
and have no obvious connection to propositional
satisfiability checking, nevertheless SAT solvers were found
to be extremely effective at pushing the state-of-the-art and
sometimes absolutely crucial in the problem's ultimate solution.
In this paper we apply a SAT solver to the Williamson conjecture from
combinatorial design theory.
Our work is similar in spirit to the aforementioned works but
we would like to highlight two main differences.
Firstly, we employ a programmatic
SAT solver (as introduced by~\cite{DBLP:conf/sat/GaneshOSDRS12})
which is able to learn conflict clauses through a piece of code
specifically tailored to the problem domain.
This code encodes domain-specific knowledge that an
off-the-shelf SAT solver would otherwise not be able to exploit. This
framework is not limited to any specific domain;
any external library or function can be used
as long as it is callable by the SAT solver. As we will see
in Section~\ref{sec:sat+cas}, the clauses that
are learned in this fashion can enormously cut down the search space as well
as the solver's runtime.
Secondly, similar in style to~\citep{DBLP:conf/cade/ZulkoskiGC15}
we incorporate functionality from computer algebra systems to
increase the efficiency of the search in what we
call the ``SAT+CAS'' paradigm.
This approach of combining computer algebra systems with SAT
or SMT solvers was also independently proposed
at the conference ISSAC by~\cite{DBLP:conf/issac/Abraham15}.
More recently, it has been argued by the SC$^2$ project~\citep{sc2} that
the fields of satisfiability checking and symbolic computation
are complementary and combining the tools of both fields
(i.e., SAT solvers and computer algebra systems)
in the right way can solve problems more efficiently than
could be done by applying the tools of either field in isolation,
and our work provides evidence for this view.
We describe the Williamson conjecture, its history, and state the necessary
properties of Williamson matrices that we require in Section~\ref{sec:willconj}.
In particular, we derive a new version of Williamson's product theorem that applies
to Williamson matrices of even order (Theorem~\ref{thm:willprodeven}).
We give an overview of the
SAT+CAS paradigm in Section~\ref{sec:sat+cas}, describe our SAT+CAS method
in Section~\ref{sec:sat+casmethod}, and give a summary
of our results in Section~\ref{sec:results}.
The present work is an extension of our
previous work~\citep{bright2017sat+} which enumerated Williamson matrices
of even order up to order~$64$. The present work extends this enumeration
to order~$70$ and extends the method to enumerate Williamson matrices
with orders divisible by~$3$. In doing so, we find a previously
undiscovered set of Williamson matrices of order~$63$, the first new set of Williamson
matrices of odd order discovered since
one of order $43$ was found
over ten years ago by~\cite{holzmann2008williamson}.
Additionally, we improve our treatment of equivalence checking (see Section~\ref{sec:equivcheck}),
identify a new equivalence operation that applies to Williamson
matrices of even order (see Section~\ref{sec:willequiv}),
derive a new doubling construction for Williamson matrices (Theorem~\ref{thm:willdbl}),
and a new construction for 8-Williamson matrices (Theorem~\ref{thm:8will}).
Using this construction we construct 8-Williamson matrices in all
odd orders $n\leq35$, improving on the result of \cite{kotsireas2009hadamard}
that constructed 8\nobreakdash-Williamson matrices in all odd orders $n\leq29$.
Finally, in Section~\ref{sec:conclusion}
we use our experience developing
systems that combine SAT solvers with computer algebra systems
to give some guidelines about the kind of problems such an approach is
likely to be effective for.
\section{The Williamson Conjecture}\label{sec:willconj}
\cite{williamson1944hadamard} introduced the matrices which
now bear his name while developing a method of constructing Hadamard matrices---square
matrices with $\pm1$ entries and pairwise orthogonal rows.
The \emph{Hadamard conjecture}
states that Hadamard matrices exist for all orders divisible by~$4$.
Williamson's construction has been extensively used to construct Hadamard matrices
in many different orders and the \emph{Williamson conjecture}
states that it can be used to construct a Hadamard matrix of any order divisible by~$4$;
\cite{turyn1972infinite} states it as follows:
\begin{quote}
Only a finite number of Hadamard matrices of Williamson type are known
so far; it has been conjectured that one such exists of any order $4t$.
\end{quote}
Williamson matrices have also found use in digital
communication systems and this motivated mathematicians from NASA's
Jet Propulsion Laboratory to construct Williamson matrices of order~$23$
while developing codes allowing the transmission of
signals over a long range~\citep{baumert1962}.
These Williamson matrices were consequently used to
construct a Hadamard matrix of order $4\cdot23=92$~\citep{nasablog}.
(In some older works the Hadamard matrix constructed in this way was itself referred to
as a Williamson matrix but we follow modern convention and do not use
this terminology.)
Williamson matrices are also studied for their elegant mathematical properties
and their relationship to other mathematical conjectures~\citep{schmidt1999williamson}.
Although Williamson defined his matrices for both even and odd orders (see Section~\ref{sec:background}),
most subsequent work has focused on the odd case.
A complete enumeration of Williamson matrices was completed
for all odd orders up to~$23$ by~\cite{baumert1965hadamard}.
An enumeration in orders~$25$ and~$27$ was completed by~\cite{40002775009}
but this enumeration was later found to be incomplete by~\cite{djokovic1995note},
who gave a complete enumeration in the order~$25$ as well as (in a previous paper)
the orders~$29$ and~$31$~\citep{dokovic1992williamson}.
The orders~$33$ and~$39$ were
claimed to be completely enumerated by~\cite{koukouvinos1988hadamard,koukouvinos1990there}
but these searches were demonstrated to be incomplete when a complete
enumeration of the orders $33$, $35$, and~$39$ was completed
by~\cite{dokovic1993williamson}.
Most recently, all odd orders up to~$59$ were enumerated by~\cite{holzmann2008williamson}
and the order $61$ was enumerated by~\cite{Lang2012}.
Historically, less attention was paid to the even order cases,
although generalizations of Williamson matrices
were explicitly constructed in even orders by~\cite{Wallis1974}
as well as~\cite{agayan1981recurrence}.
Williamson matrices were constructed in all
even orders up to~$22$ by~\cite{kotsireas2006constructions},
up to~$34$ by~\cite{DBLP:conf/casc/BrightGHKNC16},
and up to~$42$ by~\cite{DBLP:journals/jar/ZulkoskiBHKCG17}.
\cite{kotsireas2006constructions} provided an exhaustive search
up to order $18$ but otherwise these works did not
contain complete enumerations.
A complete enumeration in the even orders up to~$44$ was given
by~\cite{DBLP:phd/basesearch/Bright17} and this was extended to order $64$ by~\cite{bright2017sat+}.
One reason why more attention has traditionally
been given to the odd order case is
due to the fact that if it was possible to construct Williamson
matrices in all odd orders this would resolve the
Hadamard conjecture. On the other hand,
constructing Williamson matrices in all even orders
would not resolve the Hadamard conjecture
because Hadamard matrices constructed using
Williamson matrices of even order have orders which are divisible by~$8$.
However, it is still not even known if Hadamard
matrices exist for all orders divisible by~$8$, so nevertheless
studying Williamson
matrices of even order
has the potential to shed light on the Hadamard conjecture as well.
The Williamson conjecture was shown to be false by \cite{dokovic1993williamson}
who showed that such matrices do not exist in order~$35$.
Later, when an enumeration of Williamson matrices for odd orders $n<60$
was completed~\citep{holzmann2008williamson} it was found that Williamson matrices also do not
exist for orders $47$, $53$, and~$59$ but exist for all other odd orders under~$65$
since Turyn's construction~\citep{turyn1972infinite} works in orders $61$ and~$63$.
In this paper we provide for the first time
a complete enumeration of Williamson
matrices in the orders~$63$, $66$, $68$, $69$, and~$70$.
In particular, we show that Williamson matrices exist in every even order up to $70$.
This leads us to state what we call the \emph{even Williamson conjecture}:
\begin{conjecture}\label{conj:will}
Williamson matrices exist in every even order.
\end{conjecture}
The fact that Williamson matrices of even order turn out to be somewhat
plentiful gives some evidence for the truth of Conjecture~\ref{conj:will}.
Though we do not know how to prove Conjecture~\ref{conj:will}
our enumeration could potentially uncover structure in Williamson matrices
which might then be exploited in a proof of the conjecture.
Additionally, we point out that the existence of Williamson matrices
of order~$70=2\cdot35$ is especially interesting since $35$ is the
smallest order for which Williamson matrices do not exist.
Using complex Hadamard matrices, \cite{turyn1970} showed that the existence
of Williamson matrices of odd order~$n$
implies the existence of Williamson matrices
of orders $2^kn$ for $k=1$, $2$, $3$, $4$.
Since Williamson matrices exist for all odd orders $n<35$
Turyn's result implies that Williamson matrices exist for all even orders
strictly less than $70$.
Since Williamson matrices of order $35$ do not exist
Turyn's result cannot be used to show the existence of Williamson
matrices of order $70$; the question of existence in
order $70$ was open until this paper.
We also determine that there are exactly two sets of Williamson matrices
(up to the equivalence given in Section~\ref{sec:willequiv}) of order $63$.
One of these falls under the aforementioned construction given
by~\cite{turyn1972infinite} while the other is new and is the first newly
discovered set of Williamson matrices in an odd order since one
was found using an exhaustive search in order $43$ by~\cite{holzmann2008williamson}.
In order $69$ our enumeration method produced just one set of Williamson
matrices and that set falls under the construction given by Turyn.
\subsection{Williamson matrices and sequences}\label{sec:background}
We now give the background on Williamson matrices
and their properties which is necessary to understand the remainder of the paper.
The definition of Williamson matrices is motivated by the following theorem
used for constructing Hadamard matrices by~\cite{williamson1944hadamard}:
\begin{theorem}
\label{thm:williamson}
Let\/ $n \in \mathbb{N}$ and let\/ $A$, $B$, $C$, $D \in \{\pm 1\}^{n\times n}$.
Further, suppose that
\begin{enumerate}
\item $A$, $B$, $C$, and\/ $D$ are symmetric;
\item $A$, $B$, $C$, and\/ $D$ commute pairwise (i.e., $AB=BA$, $AC=CA$, etc.);
\item $A^2 + B^2 + C^2 + D^2 = 4nI_n$, where\/ $I_n$ is the identity
matrix of order\/ $n$.
\end{enumerate}
Then
\[
\begin{bmatrix}
A & B & C & D \\
-B & A & -D & C\\
-C & D & A & -B\\
-D & -C & B & A
\end{bmatrix}
\]
is a Hadamard matrix of order $4n$.
\end{theorem}
To make the search for such matrices more tractable, and in particular
to make condition 2 trivial, Williamson also required the matrices
$A$, $B$, $C$, $D$ to be circulant matrices, as defined below.
\begin{definition}\label{def:circmatrix}
An\/ $n\times n$ matrix\/ $A=(a_{ij})$ is circulant if\/ $a_{ij}=a_{0,(j-i)\bmod n}$
for all\/ $i$ and\/ $j\in\brace{0,\dotsc,n-1}$.
\end{definition}
Circulant matrices $A$, $B$, $C$, $D$ which satisfy the conditions of Theorem~\ref{thm:williamson}
are known as \emph{Williamson matrices} in honor of Williamson.
Since Williamson matrices are circulant they
are defined in terms of their first row $[x_0,\dotsc,x_{n-1}]$
and since they are symmetric this row must be a symmetric sequence,
i.e., satisfy $x_i=x_{n-i}$ for $1\leq i<n$.
Given these facts, it is often convenient to work in terms of sequences rather than matrices.
When working with sequences in this context the
following function becomes very useful.
\begin{definition}
The \emph{periodic autocorrelation function} of the sequence\/ $A=[a_0,\dotsc,a_{n-1}]$ is
the function given by
\[ \PAF_A(s) \coloneqq \sum_{k=0}^{n-1}a_k a_{(k+s) \bmod n} . \]
We also use\/ $\PAF_A$ to refer to a sequence
containing the values of the above function (which has period\/ $n$), i.e.,
\[ \PAF_A \coloneqq \brack[\big]{\PAF_A(0),\dotsc,\PAF_A(n-1)} . \]
\end{definition}
This function allows us to easily give a definition of Williamson matrices in terms of sequences.
\begin{definition}\label{def:williamsonsequences}
Four symmetric sequences\/ $A$, $B$, $C$, $D\in\brace{\pm1}^n$ are called \emph{Williamson sequences}
if they satisfy
\begin{equation*}\label{eq:willdef}
\PAF_A(s) + \PAF_B(s) + \PAF_C(s) + \PAF_D(s) = 0
\end{equation*}
for\/ $s=1$, $\dots$, $\floor{n/2}$.
\end{definition}
It is straightforward to see that there is an equivalence between
such sequences and Williamson matrices~\citep[\S3.4]{DBLP:conf/casc/BrightGHKNC16}
and so for the remainder of this paper we will work directly with these sequences instead of
Williamson matrices.
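To make the sequence formulation concrete, a short Python sketch of the periodic autocorrelation function and of the Williamson check follows; it is illustrative only and not taken from the paper.
\begin{verbatim}
def paf(A, s):
    """Periodic autocorrelation PAF_A(s)."""
    n = len(A)
    return sum(A[k] * A[(k + s) % n] for k in range(n))

def is_williamson(A, B, C, D):
    """Check that four symmetric +/-1 sequences have PAFs summing to zero."""
    n = len(A)
    seqs = (A, B, C, D)
    symmetric = all(X[i] == X[n - i] for X in seqs for i in range(1, n))
    return symmetric and all(sum(paf(X, s) for X in seqs) == 0
                             for s in range(1, n // 2 + 1))

# A Williamson sequence of order 3:
print(is_williamson([1, -1, -1], [1, -1, -1], [1, -1, -1], [1, 1, 1]))  # True
\end{verbatim}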
\subsection{Williamson equivalences}\label{sec:willequiv}
Given a Williamson sequence $A$, $B$, $C$, $D$ of order $n$ there are five types of invertible operations which can be applied to produce another Williamson sequence, though two of the operations only apply when $n$ is even. These operations allow us to define \emph{equivalence classes} of Williamson sequences. If a single Williamson sequence is known it is straightforward to generate all Williamson sequences in the same equivalence class, so it suffices to search for Williamson sequences up to these equivalence operations.
\begin{enumerate}
\item[E1.] (Reorder) Reorder the sequences $A$, $B$, $C$, $D$ in any way.
\item[E2.] (Negate) Negate all the entries of any of $A$, $B$, $C$, or~$D$.
\item[E3.] (Shift) If $n$ is even, cyclically shift all the entries in any of $A$, $B$, $C$, or~$D$ by an offset of~$n/2$.
\item[E4.] (Permute entries) Apply an automorphism of the cyclic group $C_n$ to all the indices of the entries of each of $A$, $B$, $C$, and~$D$ simultaneously.
\item[E5.] (Alternating negation) If $n$ is even, negate every second entry in each of $A$, $B$, $C$, and~$D$ simultaneously.
\end{enumerate}
These equivalence operations are well known~\citep{holzmann2008williamson}
except for the shift and alternating negation operations which have not traditionally
been used because they only apply when $n$ is even.
In fact, they were overlooked until a careful examination of the sequences produced by
our enumeration method.
\subsection{Fourier analysis}
We now give an alternative definition of Williamson sequences using concepts from Fourier analysis.
First, we define the power spectral density of a sequence.
\begin{definition}\label{def:psd}
The \emph{power spectral density} of the sequence\/ $A=[a_0,\dotsc,a_{n-1}]$ is
the function
\[ \PSD_A(s) \coloneqq \abs[\big]{\DFT_A(s)}^2 \]
where\/ $\DFT_A$ is the \emph{discrete Fourier transform}
of\/ $A$, i.e., $\DFT_A(s) \coloneqq \sum_{k=0}^{n-1} a_k e^{2\pi iks/n}$.
Equivalently, we may also consider the power spectral density
to be a sequence containing the values of the above function, i.e.,
\[ \PSD_A \coloneqq \brack[\big]{\PSD_A(0),\dotsc,\PSD_A(n-1)} . \]
\end{definition}
It now follows by~\cite[Theorem~2]{dokovic2015compression} that Williamson
sequences have the following alternative definition.
\begin{theorem}\label{thm:willdef}
Four symmetric sequences\/ $A$, $B$, $C$, $D\in\brace{\pm1}^n$ are Williamson sequences
if and only if
\begin{equation}\label{eq:willpsd}\tag{$*$}
\PSD_A(s) + \PSD_B(s) + \PSD_C(s) + \PSD_D(s) = 4n
\end{equation}
for\/ $s=0$, $\dots$, $\floor{n/2}$.
\end{theorem}
\begin{corollary}\label{cor:psdtest}
If\/ $\PSD_A(s)>4n$ for any value\/ $s$ then\/ $A$ cannot be part of a Williamson sequence.
\end{corollary}
\begin{proof}
Since $\PSD$ values are nonnegative, if $\PSD_A(s)>4n$ then the relationship~\eqref{eq:willpsd}
cannot hold and thus $A$ cannot be part of a Williamson sequence.
\end{proof}
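In practice the PSD values can be computed with any FFT implementation; the following Python/NumPy sketch (illustrative only) computes $\PSD_A$ and applies the test of Corollary~\ref{cor:psdtest}. The sign convention of the transform is immaterial here since only absolute values are used.
\begin{verbatim}
import numpy as np

def psd(A):
    """Power spectral density PSD_A(0), ..., PSD_A(n-1) via the FFT."""
    return np.abs(np.fft.fft(np.asarray(A, dtype=float))) ** 2

def passes_psd_test(A):
    """A can be part of a Williamson sequence only if PSD_A(s) <= 4n for all s."""
    n = len(A)
    return bool(np.all(psd(A) <= 4 * n + 1e-9))
\end{verbatim}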
Similarly, one can extend this so-called PSD test in Corollary~\ref{cor:psdtest} to apply to more than one sequence
at a time:
\begin{corollary}\label{cor:psdtestext}
If\/ $\PSD_A(s)+\PSD_B(s)>4n$ for any value of\/ $s$ then $A$ and $B$ do not occur
together in a Williamson sequence
and if\/ $\PSD_A(s)+\PSD_B(s)+\PSD_C(s)>4n$ for any value of\/ $s$
then\/ $A$, $B$, and\/ $C$ do not occur together in a Williamson sequence.
\end{corollary}
\subsection{Compression}\label{sec:compression}
As in the work by~\cite{dokovic2015compression} we now introduce the notion of \emph{compression}.
\begin{definition}
Let\/ $A = [a_0,a_1,\ldots,a_{n-1}]$ be a sequence of length\/ $n = dm$ and set
\[ a_j^{(d)}=a_j + a_{j+d} + \dotsb + a_{j+(m-1)d}, \qquad j=0, \dotsc, d-1.
\label{compression} \]
Then we say that the sequence\/
$A^{(d)} = [a_0^{(d)},a_1^{(d)},\ldots,a_{d-1}^{(d)}]$
is the \emph{$m$-compression} of\/ $A$.
\end{definition}
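A direct Python transcription of this definition (illustrative only):
\begin{verbatim}
def compress(A, d):
    """The compression A^(d) of a sequence A of length n = d*m."""
    n = len(A)
    assert n % d == 0
    return [sum(A[j + k * d] for k in range(n // d)) for j in range(d)]

# For instance, compress([1, -1, 1, -1, 1, 1], 3) returns the 2-compression [0, 0, 2].
\end{verbatim}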
From~\cite[Theorem~3]{dokovic2015compression} we have the following result.
\begin{theorem}\label{thm:willcomp}
If\/ $A$, $B$, $C$, $D$ is a Williamson sequence of order\/ $n$ then
\[ \PAF_{A'} + \PAF_{B'} + \PAF_{C'} + \PAF_{D'} = [4n,0,\dotsc,0] \]
and
\[ \PSD_{A'} + \PSD_{B'} + \PSD_{C'} + \PSD_{D'} = [4n,\dotsc,4n] \]
for any compression\/ $A'$, $B'$, $C'$, $D'$ of that Williamson sequence.
\end{theorem}
\begin{corollary}\label{cor:willrowsum}
If\/ $A$, $B$, $C$, $D$ is a Williamson sequence of order\/ $n$ then
\begin{equation*}
R_A^2 + R_B^2 + R_C^2 + R_D^2 = 4n
\label{eq:willdioeq}\tag{$**$}
\end{equation*}
where\/ $R_X$ denotes the rowsum of\/ $X$.
\end{corollary}
\begin{proof}
Let $X'$ be the $n$-compression of $X\in\brace{\pm1}^n$, i.e., $X'$ is a sequence with one
entry whose value is $R_X$. Note that $\PSD_{X'}=[R_X^2]$, so the result follows
by Theorem~\ref{thm:willcomp}.
\end{proof}
\subsection{Williamson's product theorem}
\cite{williamson1944hadamard} proved the following theorem:
\begin{theorem}\label{thm:willprododd}
If\/ $A$, $B$, $C$, $D$ is a Williamson sequence of odd order\/ $n$ then\/
$a_ib_ic_id_i=-a_0b_0c_0d_0$ for\/ $1\leq i<n/2$.
\end{theorem}
We prove a version of this theorem for even $n$:
\begin{theorem}\label{thm:willprodeven}
If\/ $A$, $B$, $C$, $D$ is a Williamson sequence of even order\/ $n=2m$ then\/
$a_ib_ic_id_i=a_{i+m}b_{i+m}c_{i+m}d_{i+m}$ for\/ $0\leq i<m$.
\end{theorem}
Although this theorem is not an essential part of our algorithm it improves its
efficiency by allowing us to cut down the size of the search space.
Our algorithm uses the theorem in the following form:
\begin{corollary}\label{cor:willprod}
If\/ $A'$, $B'$, $C'$, $D'$ is a\/ $2$\nobreakdash-compression of a Williamson sequence
then\/ $A'+B'+C'+D'\equiv[0,\dotsc,0]\pmod{4}$.
\end{corollary}
Proofs of Theorem~\ref{thm:willprodeven} and Corollary~\ref{cor:willprod}
are available on the arXiv~\citep{willprodthm}.
\subsection{Doubling construction}\label{sec:dblconstr}
We now give a simple construction which generates Williamson sequences of
order~$2n$ from Williamson sequences of odd order $n$ using the following
three operations on sequences $A=[a_0,\dotsc,a_{n-1}]$ and $B=[b_0,\dotsc,b_{n-1}]$:
\begin{enumerate}
\item Negation. Individually negate each entry of $A$, i.e., $-A\coloneqq[-a_0,\dotsc,-a_{n-1}]$.
\item Shift. Cyclically shift the entries of $A$ by an offset of $(n-1)/2$,
i.e.,
\[ A' \coloneqq [a_{(n-1)/2},\dotsc,a_{n-1},a_0,a_1,\dotsc,a_{(n-3)/2}] . \]
\item Interleave. Interleave the entries of $A$ and $B$ in a perfect shuffle, i.e.,
\[ A\mathbin{\text{\cyr x}} B \coloneqq [a_0,b_0,a_1,b_1,\dotsc,a_{n-1},b_{n-1}] . \]
\end{enumerate}
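For concreteness, a direct Python transcription of these three operations (illustrative only; the shift is written for odd $n$, as in the construction):
\begin{verbatim}
def negate(A):
    return [-a for a in A]

def shift(A):
    """Cyclic shift by an offset of (n-1)/2, for odd n."""
    k = (len(A) - 1) // 2
    return A[k:] + A[:k]

def interleave(A, B):
    """Perfect shuffle, e.g. interleave([1,2,3], [4,5,6]) == [1,4,2,5,3,6]."""
    return [x for pair in zip(A, B) for x in pair]
\end{verbatim}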
Our doubling construction is captured by the following theorem whose
proof is available on the arXiv~\citep{willdblthm}.
\begin{theorem}\label{thm:willdbl}
Let $A$, $B$, $C$, $D$ be Williamson sequences of odd order $n$.
Then
{\rm\[ A\mathbin{\text{\cyr x}} B',\, (-A)\mathbin{\text{\cyr x}} B',\, C\mathbin{\text{\cyr x}} D',\, (-C)\mathbin{\text{\cyr x}} D' \]}%
are Williamson sequences of order $2n$.
\end{theorem}
We remark that a single Williamson sequence of order $n$
can often be used to generate more
than one Williamson sequence of order~$2n$ by applying equivalence operations
to the Williamson sequence $A$, $B$, $C$, $D$ before using the construction.
For example, the single inequivalent Williamson sequence of order 5
can be used to generate both inequivalent Williamson sequences of order 10
using this construction with an appropriate reordering of $A$, $B$, $C$, $D$.
We also remark that this doubling construction can be reversed in the sense that
if Williamson sequences of order $2n$ exist for $n$ odd then symmetric sequences
$X_1$, $\dotsc$, $X_8\in\brace{\pm1}^n$
can be constructed which satisfy the Williamson
property
\[ \sum_{i=1}^8 \PAF_{X_i}(s) = 0 \qquad\text{for $s=1$, $\dotsc$, $n-1$} . \]
We call such sequences \emph{8-Williamson sequences} because they form the
first rows of \emph{8-Williamson matrices} as defined by for example
\cite{kotsireas2006constructions}. Note that the equivalence operations
of Section~\ref{sec:willequiv} also define an equivalence on 8-Williamson
sequences so long as they are written to apply to~8 sequences instead of~4.
\begin{theorem}\label{thm:8will}
Let $A$, $B$, $C$, $D$ be Williamson sequences of order $2n$
with $n$ odd and write
{\rm\[ A = A_1\mathbin{\text{\cyr x}} A_2',\, B = B_1\mathbin{\text{\cyr x}} B_2',\, C = C_1\mathbin{\text{\cyr x}} C_2',\, D = D_1\mathbin{\text{\cyr x}} D_2' . \]}%
Then $A_1$, $A_2$, $B_1$, $B_2$, $C_1$, $C_2$, $D_1$, $D_2$ are 8-Williamson sequences of order $n$.
\end{theorem}
\begin{proof}
The fact that the constructed sequences are symmetric, have $\pm1$ entries,
and are of order $n$ follows directly from the construction and because
the sequences they are constructed from are symmetric,
have $\pm1$ entries, and are of order $2n$. Since $A$, $B$, $C$, $D$ are
Williamson we have that
\[ \sum_{X=A,B,C,D}\PAF_X(2s) = 0 \qquad\text{for $s=1$, $\dotsc$, $n-1$} . \]
Using the fact that $\PAF_{X\mathbin{\text{\cyr x}} Y}(2s)=\PAF_X(s)+\PAF_Y(s)$
and $\PAF_{Y'}(s)=\PAF_Y(s)$ this sum becomes exactly the Williamson property
$\sum_{i=1}^8 \PAF_{X_i}(s) = 0$.
\end{proof}
\section{The SAT+CAS paradigm}\label{sec:sat+cas}
The idea of combining SAT solvers with computer algebra systems
originated independently in two works published in 2015:
In a paper at the conference CADE entitled ``\textsc{MathCheck}: A Math Assistant
via a Combination of Computer Algebra Systems and SAT Solvers'' by~\cite{DBLP:conf/cade/ZulkoskiGC15}
and in an invited talk at the conference ISSAC entitled
``Building Bridges between Symbolic Computation and Satisfiability Checking''
by~\cite{DBLP:conf/issac/Abraham15}.
The CADE paper describes a tool called \textsc{MathCheck}\ which combines
the general-purpose search capability of SAT solvers with the domain-specific
knowledge of computer algebra systems.
The paper made the case that \textsc{MathCheck}\
\begin{quote}
\dots combines the efficient search routines of modern
SAT solvers, with the expressive power of CAS, thus complementing
both.
\end{quote}
As evidence for the power of this paradigm, they used
\textsc{MathCheck}\ to improve the best known bounds in two conjectures in graph theory.
Independently, the computer scientist Erika \'Abrah\'am observed
that the fields of satisfiability checking and symbolic computation share
many common aims but in practice are quite separated, with little communication
between the fields:
\begin{quote}
\ldots collaboration between symbolic computation and SMT [SAT modulo theories]
solving is still (surprisingly) quite restricted\ldots
\end{quote}
Furthermore, she outlined reasons why combining the insights from both fields
had the potential to solve certain problems more efficiently than would be
otherwise possible. To this end, the SC$^2$ project~\citep{sc2} was started
with the aim of fostering collaboration between the two communities.
\subsection{Programmatic SAT}
The idea of a \emph{programmatic} SAT solver was introduced by~\cite{DBLP:conf/sat/GaneshOSDRS12}.
A programmatic SAT solver can generate conflict clauses programmatically, i.e.,
by a piece of code which runs as the SAT solver carries out its search.
Such a SAT solver can learn clauses which are more useful than the conflict
clauses which it learns by default. Not only can this make the SAT solver's search more efficient,
it allows for increased expressiveness as many types of constraints which are
awkward to express in a conjunctive normal form format can naturally be expressed
using code. Additionally, it allows one to compile \emph{instance-specific} SAT solvers
which are tailored to solving one specific type of instance. In this framework
instances no longer have to solely consist of a set of clauses in conjunctive normal form.
Instead, instances can consist of both a set of CNF clauses and a piece of code which
encodes constraints which are too cumbersome to be written in CNF format.
As an example of this, consider the case of searching for Williamson sequences using
a SAT solver. One could encode Definition~\ref{def:williamsonsequences} in CNF
format by using Boolean variables to represent the entries in the Williamson sequences and by
using binary adders to encode the summations; such a method was used by~\cite{DBLP:conf/casc/BrightGHKNC16}.
However, one could also use the equivalent definition given in Theorem~\ref{thm:willdef}.
This alternate definition has the advantage that it becomes easy to apply
Corollaries~\ref{cor:psdtest} and~\ref{cor:psdtestext}, which allows one to filter many
sequences from consideration and greatly speed up the search.
Because of this, our method will use the constraints~\eqref{eq:willpsd}
from Theorem~\ref{thm:willdef}
to encode the definition of Williamson sequences in our SAT instances.
However, encoding the equations in~\eqref{eq:willpsd} would be extremely cumbersome
to do using CNF clauses, because of the involved nature of computing the PSD values.
However, the equations~\eqref{eq:willpsd} are easy to express programmatically---%
as long as one has a method of computing the PSD values. This can be done
efficiently using the fast Fourier transform which is available in many
computer algebra systems and mathematical libraries.
Thus, our SAT instances will not use CNF clauses to encode the defining property of
Williamson sequences but instead encode those clauses programmatically.
This is done by writing a \emph{callback function}
which is compiled with the SAT solver and programmatically expresses the constraints
in Theorem~\ref{thm:willdef} and
the filtering criteria of Corollaries~\ref{cor:psdtest}
and~\ref{cor:psdtestext}.
\subsection{Programmatic Williamson encoding}\label{sec:progwill}
We now describe in detail our programmatic encoding of Williamson sequences.
The encoding takes the form of a piece of code which examines a partial assignment
to the Boolean variables defining the sequences $A$, $B$, $C$, and~$D$ (where true
encodes~$1$ and false encodes~$-1$).
In the case when the partial assignment can be ruled out using
Corollaries~\ref{cor:psdtest} or~\ref{cor:psdtestext}, a conflict
clause is returned which encodes a reason why the partial assignment
no longer needs to be considered. If the sequences actually form a Williamson
sequence then they are recorded in an auxiliary file; at this point the
solver can return SAT and stop, though our implementation continues the search
because we
want to do a
complete enumeration of the space.
The programmatic callback function does the following:
\begin{enumerate}
\item Initialize $S\coloneqq\emptyset$. This variable will be a set which contains
the sequences whose entries are all currently assigned.
\item Check if all the variables encoding the entries in sequence~$A$ have been assigned;
if so, add $A$ to the set~$S$ and compute $\PSD_A$, otherwise skip to the next step.
When $\PSD_A(s)>4n$
for some value of $s$ then learn a clause prohibiting the entries of $A$ from being
assigned the way they currently are, i.e., learn the clause
\begin{equation*}
\lnot(a_0^{\text{cur}}\land a_1^{\text{cur}} \land\dotsb\land a_{n-1}^{\text{cur}}) \equiv \lnot a_0^{\text{cur}}\lor\lnot a_1^{\text{cur}}\lor\dotsb\lor\lnot a_{n-1}^{\text{cur}}
\end{equation*}
where $a_i^{\text{cur}}$ is the literal $a_i$ when $a_i$ is currently assigned to true
and is the literal $\lnot a_i$ when $a_i$ is currently assigned to false.
\item Check if all the variables encoding the entries in sequence~$B$ have been assigned;
if so, add $B$ to the set~$S$ and compute $\PSD_B$. When there is some $s$ such that $\sum_{X\in S}\PSD_X(s)>4n$ then learn a clause
prohibiting the values of the sequences in $S$ from being assigned the way they currently are.
\item Repeat the last step again twice, once with $B$ replaced with $C$ and then again with $B$ replaced with $D$.
\item If all the variables in sequences $A$, $B$, $C$, and $D$ are assigned then
record the sequences in an auxiliary file
and learn a clause prohibiting the values of the sequences from being
assigned the way they currently are so that this assignment is not examined again.
\end{enumerate}
After the search is completed the auxiliary file will contain
all sequences which passed the PSD tests and thus all Williamson sequences
will be in this list (verifying a sequence is in fact Williamson can be done using
Definition~\ref{def:williamsonsequences}).
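To make the clause-learning logic concrete, the following Python-style sketch mirrors steps 1--5 for a single invocation of the callback. It is illustrative only: the actual callback is compiled into the SAT solver and works with the solver's internal literal representation, which we do not model here; a conflict clause is represented below simply as the list of (sequence, index, value) assignments to be negated.
\begin{verbatim}
import numpy as np

def psd(X):
    return np.abs(np.fft.fft(np.asarray(X, dtype=float))) ** 2

def callback(partial, n):
    """partial maps 'A','B','C','D' to length-n lists of +/-1 or None (unassigned).
    Returns a conflict clause (assignments to negate) or None if no conflict is found."""
    assigned, psd_sum = [], np.zeros(n)
    for name in 'ABCD':
        seq = partial[name]
        if any(v is None for v in seq):
            continue                            # sequence not fully assigned yet
        assigned.append(name)
        psd_sum = psd_sum + psd(seq)
        if np.any(psd_sum > 4 * n + 1e-9):      # PSD test failed: prune this assignment
            return [(m, i, partial[m][i]) for m in assigned for i in range(n)]
    return None                                 # no conflict (possibly a Williamson sequence)
\end{verbatim}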
Note that the clauses learned by this function allow the SAT solver to
execute the search significantly faster than would be
possible using a brute-force technique. As a rough estimate
of the benefit, note that there are approximately
$2^{n/2}$ possibilities for each member $A$, $B$, $C$, $D$ in a Williamson
sequence. If no clauses are learned in steps~2--4 then the SAT solver
will examine all $2^{4(n/2)}$ total possibilities.
Conversely, if a clause is always learned in step~2 then the SAT solver
will only need to examine the $2^{n/2}$ possibilities for~$A$.
Of course, one will not always learn a clause in steps~2\nobreakdash--4
but in practice such a clause is learned quite frequently
and this more than makes up for the overhead of
computing the PSD values (this accounted for about 20\%
of the SAT solver's runtime in our experiments).
The programmatic approach
was essential for the largest orders which we were able to solve;
see Table~\ref{tbl:satresults} in Section~\ref{sec:results}
for a comparison between the running times
of a SAT solver using the CNF and programmatic encodings.
However, on its own the programmatic SAT solver was much too slow to perform the complete enumeration.
\section{Our enumeration algorithm}\label{sec:sat+casmethod}
A high-level summary of the components of our enumeration algorithm
is given in Figure~\ref{fig:diagram}. We require two kinds of
functions from computer algebra systems or mathematical libraries,
namely, one that can solve the quadratic Diophantine equation~\eqref{eq:willdioeq}
and one that can compute the discrete Fourier transform of a
sequence.
In the following description we have step~1 handled by the
Diophantine equation solver, steps~2--4 handled by
the driver script, and step~5 handled by the programmatic
SAT solver. The driver script is responsible for constructing
the SAT instances and passing them off to the programmatic SAT solver.
It also implicitly passes encoding information to the system responsible
for performing the programmatic Williamson encoding described in
Section~\ref{sec:progwill}, i.e., the system needs to know
which variables in the SAT instance correspond to which
Williamson sequence entries, but this can be fixed in advance.
We now give a complete description of our method which enumerates
all Williamson sequences of a given order~$n$ divisible by~$2$ or~$3$.
Let $m\in\brace{2,3}$ be the smallest prime divisor of $n$.
\begin{figure}\centering
\begin{tikzpicture}
\tikzset{>=latex'}
\tikzset{auto}
\tikzstyle{block} = [draw, rectangle, minimum height=3em, minimum width=6.975em, text width=6.975em, align=center]
\node (input) {\small $n\mspace{1mu}$};
\node [block,right of=input,node distance=2cm] (gen) {\small Driver script};
\node [block,right of=gen,node distance=5.5cm] (sat) {\small Programmatic\\SAT solver};
\node [block,above of=gen,node distance=2.75cm] (cas) {\small Diophantine solver\\Fourier transform};
\node [block,above of=sat,node distance=2.75cm] (cas2) {\small Fourier transform};
\draw [draw,->] (input) -- (gen);
\draw [draw,->] (sat.140) -- node[text width=1.375cm,align=center] {\small Partial\\assignment\\} (cas2.220);
\draw [draw,<-] (sat.40) -- node[right,text width=1cm,align=center] {\small Conflict\\clause\\} (cas2.320);
\draw [draw,->] (gen.140) -- node[text width=1.125cm,align=center] {\small External\\call\\} (cas.220);
\draw [draw,<-] (gen.40) -- node[right,text width=0.75cm,align=center] {\small Result} (cas.320);
\draw [draw,->] (gen) -- node[below] {\small SAT instances} (sat);
\draw [draw,->,dashed] (gen.12.5) -| (4.25,2.75) --node[text width=4cm,align=center]{\small Encoding\\information\\} (cas2);
\node [right of=sat,node distance=2.75cm,text width=4.5em,align=center] (output) {\small Enumeration\\in order $n$};
\draw [draw,->] (sat) -- (output);
\end{tikzpicture}
\caption{Outline of our algorithm for enumerating Williamson sequences of order $n$.
The boxes on the left correspond to the preprocessing which encodes and decomposes
the original problem into SAT instances. The boxes
on the right correspond to an SMT-like setup where the
system that computes the discrete Fourier transform
takes on the role of the theory solver.}\label{fig:diagram}\end{figure}
\subsection{Step 1: Generate possible sum-of-squares decompositions}
First, note that by Corollary~\ref{cor:willrowsum} every Williamson sequence
gives rise to a decomposition of $4n$ into a sum of four squares.
We query a computer algebra system such as \textsc{Maple}
or \textsc{Mathematica} to get all possible solutions of
the Diophantine equation~\eqref{eq:willdioeq}.
Because we only care about Williamson sequences up to equivalence,
we add the inequalities
\[ 0 \leq R_A \leq R_B \leq R_C \leq R_D \]
to the Diophantine equation; it is clear that any Williamson
sequence can be transformed into another Williamson sequence
which satisfies these inequalities by applying the reorder and/or
negate equivalence operations.
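For illustration, the decomposition search can be written in a few lines of Python
(our implementation queries \textsc{Maple} for this step; the brute-force search below
and its function name are only meant as a sketch and ignore any parity restrictions
on the rowsums):
\begin{verbatim}
from math import isqrt

def sum_of_squares_decompositions(n):
    """All (R_A, R_B, R_C, R_D) with 0 <= R_A <= R_B <= R_C <= R_D
    and R_A^2 + R_B^2 + R_C^2 + R_D^2 = 4*n."""
    target, bound, sols = 4 * n, isqrt(4 * n), []
    for ra in range(bound + 1):
        for rb in range(ra, bound + 1):
            for rc in range(rb, bound + 1):
                rest = target - ra * ra - rb * rb - rc * rc
                if rest < rc * rc:
                    break              # R_D >= R_C is impossible from here on
                rd = isqrt(rest)
                if rd * rd == rest:
                    sols.append((ra, rb, rc, rd))
    return sols

# For example, 4*35 = 140 = 1^2 + 3^2 + 7^2 + 9^2 is one such decomposition.
print(sum_of_squares_decompositions(35))
\end{verbatim}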
\subsection{Step 2: Generate possible Williamson sequence members}
Next, we form a list of the sequences which could possibly appear as a member of
a Williamson sequence of order $n$.
To do this, we examine every symmetric sequence $X\in\brace{\pm1}^n$.
For all such $X$ we compute $\PSD_X$ and ignore those which satisfy
$\PSD_X(s)>4n$ for some $s$.
We also ignore those~$X$ whose rowsum does not appear in any possible
solution $(R_A,R_B,R_C,R_D)$ of the sum-of-squares
Diophantine equation~\eqref{eq:willdioeq}.
The sequences $X$ which remain after this process form a list of the sequences
which could possibly appear as a member of a Williamson sequence.
At this stage we could generate all Williamson sequences of order $n$
by trying all ways of grouping the possible sequences $X$ into quadruples and
filtering those which are not Williamson. However, because of the large number
of ways in which this grouping into quadruples can be done,
this is not feasible except when $n$ is very small.
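The following Python sketch illustrates this filtering, assuming the usual definition
$\PSD_X(s)=|\mathrm{DFT}_X(s)|^2$; the exhaustive loop over symmetric sequences is
exponential and only intended for small $n$, the treatment of negative rowsums is a
choice made for the sketch, and the tolerance anticipates the floating-point issue
discussed in Section~\ref{sec:results} (our actual implementation uses C++ with FFTW
and differs in detail):
\begin{verbatim}
import itertools
import numpy as np

def psd(x):
    """Power spectral density: PSD_x(s) = |DFT(x)[s]|^2."""
    return np.abs(np.fft.fft(x)) ** 2

def candidate_members(n, decompositions, eps=1e-2):
    """Symmetric {-1,+1}-sequences of length n passing the PSD and rowsum
    filters; 'decompositions' is the output of step 1."""
    allowed = {r for sol in decompositions for r in sol}
    half = n // 2
    survivors = []
    for head in itertools.product((1, -1), repeat=half + 1):
        x = list(head) + [head[n - i] for i in range(half + 1, n)]  # x[i] = x[n-i]
        if abs(sum(x)) not in allowed:        # rowsum filter, up to sign
            continue
        if psd(x).max() > 4 * n + eps:        # PSD filter with FFT tolerance
            continue
        survivors.append(tuple(x))
    return survivors
\end{verbatim}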
\subsection{Step 3: Perform compression}
In order to reduce the size of the problem so that the possible
sequences generated in step~2 can be grouped into quadruples we first
compress the sequences using the process
described in Section~\ref{sec:compression}.
For each solution $(R_A,R_B,R_C,R_D)$ of the sum-of-squares Diophantine equation~\eqref{eq:willdioeq}
we form four lists $L_A$, $L_B$, $L_C$, and $L_D$. The list $L_A$ will contain
the $m$\nobreakdash-compressions of the sequences $X$ generated in step~2 which have rowsum $R_A$
(and the other lists
will be defined in a similar manner).
Note that the sequences in these lists will be $\brace{\pm2,0}$-sequences
if $n$ is even and $\brace{\pm3,\pm1}$-sequences if $n$ is odd
since they are $m$\nobreakdash-compressions of the sequences $X$
which are $\brace{\pm1}$-sequences.
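A short Python sketch of the compression operation, matching the relations
$x'_i=\sum_{j=0}^{m-1}x_{i+jn/m}$ quoted in step~5 below (the helper name is
chosen only for this sketch):
\begin{verbatim}
def compress(x, m):
    """m-compression of a sequence x of length n (m must divide n):
    x'[i] = x[i] + x[i + n/m] + ... + x[i + (m-1)*n/m]."""
    n = len(x)
    assert n % m == 0
    d = n // m
    return [sum(x[i + j * d] for j in range(m)) for i in range(d)]

# A {+1,-1}-sequence of even length 2-compresses to a {+2,-2,0}-sequence:
print(compress([1, -1, 1, 1, 1, -1], 2))  # [2, 0, 0]
\end{verbatim}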
\subsection{Step 4: Match the compressions}
By construction, the lists $L_A$, $L_B$, $L_C$, and $L_D$ contain all possible
$m$\nobreakdash-compressions of the members of Williamson sequences whose sum-of-squares
decomposition is $R_A^2+R_B^2+R_C^2+R_D^2$. Thus, by trying all possible sum-of-squares
decompositions and all ways of matching
together the sequences from the lists $L_A$, $L_B$, $L_C$, $L_D$ we can find all
$m$\nobreakdash-compressions of Williamson sequences of order $n$.
By Theorem~\ref{thm:willcomp}, a necessary condition for $A$, $B$, $C$, $D$
to be a Williamson sequence is that
\[ \PSD_{A'}+\PSD_{B'}+\PSD_{C'}+\PSD_{D'} = [4n,\dotsc,4n] \]
where $A'$, $B'$, $C'$, $D'$ are the $m$\nobreakdash-compressions of $A$, $B$, $C$, $D$.
Therefore, one could perform this step by enumerating all
$(A',B',C',D')\in L_A\times L_B\times L_C\times L_D$
and outputting those whose PSDs sum to $[4n,\dotsc,4n]$ as a potential $m$\nobreakdash-compression
of a Williamson sequence.
However, there will typically be far too many elements of
$L_A\times L_B\times L_C\times L_D$ to try in a reasonable amount
of time.
Instead, we will enumerate all $(A',B')\in L_A\times L_B$ and $(C',D')\in L_C\times L_D$
and use a string sorting technique by~\cite{DBLP:journals/ol/KotsireasKP10} to find which
$(A',B')$ and $(C',D')$ can be matched together to form potential
$m$\nobreakdash-compressions of Williamson sequences.
To determine which pairs can be matched together we use the necessary
condition from Theorem~\ref{thm:willcomp} in a slightly rewritten form,
\[ \PAF_{A'}+\PAF_{B'} = [4n,0,\dotsc,0]-(\PAF_{C'}+\PAF_{D'}) . \]
Our matching procedure outputs a list of the $(A',B',C',D')$
which satisfy this condition, and therefore outputs a list of potential
$m$\nobreakdash-compressions of Williamson sequences.
In detail, our matching procedure performs the following steps:
\begin{algorithmic}[1]
\State \textbf{initialize} $L_{AB}$ and $L_{CD}$ to empty lists
\For{$(A',B')\in L_A\times L_B$}
\If{$\PSD_{A'}(s)+\PSD_{B'}(s)<4n$ for all $s$}
\State \textbf{add} $\PAF_{A'}+\PAF_{B'}$ to $L_{AB}$
\EndIf
\EndFor
\For{$(C',D')\in L_C\times L_D$}
\If{$\PSD_{C'}(s)+\PSD_{D'}(s)<4n$ for all $s$}
\State \textbf{add} $[4n,0,\dotsc,0]-(\PAF_{C'}+\PAF_{D'})$ to $L_{CD}$
\EndIf
\EndFor
\For{each $X$ common to both $L_{AB}$ and $L_{CD}$}
\State \textbf{output} $(A',B')$ and $(C',D')$ which $X$ was generated from in an auxiliary file
\EndFor
\end{algorithmic}
Line~8 can be done efficiently by sorting the lists $L_{AB}$ and $L_{CD}$
and then performing a linear scan through the sorted lists to find the elements
common to both lists. Line~9 can be done efficiently if, with each element in
the lists $L_{AB}$ and $L_{CD}$, we also keep track of a pointer to the sequences
$(A',B')$ or $(C',D')$ that the element was generated from in line~4 or~7.
Also in line~9 if $n$ is even we only
output sequences for which $A'+B'+C'+D'$ is the zero vector mod~$4$ as this
is an invariant of all $2$\nobreakdash-compressed Williamson sequences by Corollary~\ref{cor:willprod}.
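The following Python sketch illustrates the matching step. It uses a hash table keyed
by the $\PAF$ vectors where our implementation sorts the two lists and scans them, and
it omits the mod-$4$ filter for even $n$ mentioned above; the definition
$\PAF_X(s)=\sum_k x_kx_{(k+s)\bmod\ell}$ (with $\ell$ the compressed length) and the
function names are assumptions of the sketch:
\begin{verbatim}
from collections import defaultdict
import numpy as np

def psd(x):
    return np.abs(np.fft.fft(x)) ** 2

def paf(x):
    """Periodic autocorrelation: PAF_x(s) = sum_k x[k]*x[(k+s) mod len(x)]."""
    ell = len(x)
    return tuple(sum(x[k] * x[(k + s) % ell] for k in range(ell))
                 for s in range(ell))

def match_compressions(LA, LB, LC, LD, n):
    """(A',B',C',D') with PAF_A'+PAF_B' = [4n,0,...,0] - (PAF_C'+PAF_D'),
    after the PSD prefilter on each pair."""
    ell = len(LA[0])
    target = (4 * n,) + (0,) * (ell - 1)
    pairs_ab = defaultdict(list)
    for a in LA:
        for b in LB:
            if (psd(a) + psd(b)).max() < 4 * n:
                key = tuple(p + q for p, q in zip(paf(a), paf(b)))
                pairs_ab[key].append((a, b))
    matches = []
    for c in LC:
        for d in LD:
            if (psd(c) + psd(d)).max() < 4 * n:
                key = tuple(t - p - q
                            for t, p, q in zip(target, paf(c), paf(d)))
                for a, b in pairs_ab.get(key, ()):
                    matches.append((a, b, c, d))
    return matches
\end{verbatim}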
\subsection{Step 5: Uncompress the matched compressions}
It is now necessary to find the Williamson sequences, if any, which when compressed
by a factor of~$m$ produce one of the sequences generated in step~4.
In other words, we want to find a way to perform uncompression on the matched compressions
which we generated. To do this, we formulate the uncompression problem as a Boolean
satisfiability instance and use a SAT solver's combinatorial search facilities
to search for solutions to the uncompression problem.
We use Boolean variables to represent the entries of the uncompressed
Williamson sequences, with true representing the value of $1$ and false representing
the value of $-1$. Since Williamson sequences consist of four sequences of length~$n$
they contain a total of $4n$ entries, namely,
\[ a_0,\dotsc,a_{n-1},b_0,\dotsc,b_{n-1},c_0,\dotsc,c_{n-1},d_0,\dotsc,d_{n-1} . \]
However, because Williamson sequences are symmetric we actually only need
to define the distinct variables
\[ a_0,\dotsc,a_{\floor{n/2}},b_0,\dotsc,b_{\floor{n/2}},c_0,\dotsc,c_{\floor{n/2}},d_0,\dotsc,d_{\floor{n/2}} . \]
Any variable $x_i$ with $i>n/2$ can simply be replaced with the equivalent variable
$x_{n-i}$; in what follows we implicitly use this substitution when necessary.
Thus, the SAT instances which we generate will contain
$2n+4$ variables when $n$ is even and $2n+2$ variables when $n$ is odd.
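One possible variable numbering is sketched below; it is shown only to make the
variable count concrete and is not necessarily the numbering used in our
implementation:
\begin{verbatim}
def make_var(n):
    """Map (sequence j, index i) to a DIMACS variable, j in {0:A,1:B,2:C,3:D}.
    The symmetry x_i = x_{n-i} is folded in, giving 4*(floor(n/2)+1) variables,
    i.e. 2n+4 for even n and 2n+2 for odd n."""
    half = n // 2
    def var(j, i):
        i = i if i <= half else n - i   # replace x_i by the equivalent x_{n-i}
        return j * (half + 1) + i + 1   # DIMACS variables are numbered from 1
    return var
\end{verbatim}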
Say that $(A',B',C',D')$ is one of the
$m$\nobreakdash-compressions generated in step~4.
By the definition of $m$\nobreakdash-compression, we have that
$a'_i = a_i + a_{i+n/2}$ if $n$ is even and $a'_i= a_i + a_{i+n/3} + a_{i+2n/3}$
if $n$ is odd.
Since $-3\leq a'_i\leq 3$ there are
seven possibilities we must consider for each $a'_i$.
Case 1. If $a'_i=3$ then we must have $a_i=1$, $a_{i+n/3}=1$, and $a_{i+2n/3}=1$.
Thinking of the entries as Boolean variables, we add the clauses
\[ a_i \land a_{i+n/3} \land a_{i+2n/3} \]
to our SAT instance.
Case 2. If $a'_i=2$ then we must have $a_i=1$ and $a_{i+n/2}=1$. Thinking of the
entries as Boolean variables, we add the clauses
\[ a_i \land a_{i+n/2} \]
to our SAT instance.
Case 3. If $a'_i=1$ then exactly one of the entries
$a_i$, $a_{i+n/3}$, and~$a_{i+2n/3}$ must be $-1$.
Thinking of the entries as Boolean variables, we add the clauses
\[ (\lnot a_i\lor\lnot a_{i+n/3}\lor\lnot a_{i+2n/3})\land(a_i\lor a_{i+n/3})\land(a_i\lor a_{i+2n/3})\land(a_{i+n/3}\lor a_{i+2n/3}) \]
to our SAT instance. These clauses specify in conjunctive normal form
that exactly one of the variables $a_i$, $a_{i+n/3}$, and~$a_{i+2n/3}$ is false.
Case 4. If $a'_i=0$ then we must have $a_i=1$ and $a_{i+n/2}=-1$ or vice versa.
Thinking of the entries as Boolean variables, we add the clauses
\[ (a_i \lor a_{i+n/2}) \land (\lnot a_i \lor \lnot a_{i+n/2}) \]
to our SAT instance. These clauses specify in conjunctive normal form
that exactly one of the variables $a_i$ and $a_{i+n/2}$ is true.
Case 5. If $a'_i=-1$ then exactly one of the entries
$a_i$, $a_{i+n/3}$, and~$a_{i+2n/3}$ must be $1$.
Thinking of the entries as Boolean variables, we add the clauses
\[ (a_i\lor a_{i+n/3}\lor a_{i+2n/3})\land(\lnot a_i\lor\lnot a_{i+n/3})\land(\lnot a_i\lor\lnot a_{i+2n/3})\land(\lnot a_{i+n/3}\lor\lnot a_{i+2n/3}) \]
to our SAT instance. These clauses specify in conjunctive normal form
that exactly one of the variables $a_i$, $a_{i+n/3}$, and~$a_{i+2n/3}$ is true.
Case 6. If $a'_i=-2$ then we must have $a_i=-1$ and $a_{i+n/2}=-1$. Thinking of the
entries as Boolean variables, we add the clauses
\[ \lnot a_i \land \lnot a_{i+n/2} \]
to our SAT instance.
Case 7. If $a'_i=-3$ then we must have $a_i=-1$, $a_{i+n/3}=-1$, and $a_{i+2n/3}=-1$.
Thinking of the entries as Boolean variables, we add the clauses
\[ \lnot a_i \land \lnot a_{i+n/3} \land \lnot a_{i+2n/3} \]
to our SAT instance.
For each entry $a'_i$ in $A'$ we add the clauses from the
appropriate case to the SAT instance, as well
as add clauses from a similar case analysis for the entries
from $B'$, $C'$, and $D'$.
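The case analysis can be summarised in a few lines of Python. The sketch emits
DIMACS-style clauses (lists of signed variable numbers) for one compressed sequence,
with a per-entry variable map passed in (for instance a partial application of the
numbering sketched above); it is a reformulation for illustration only:
\begin{verbatim}
def uncompression_clauses(x_comp, n, var):
    """Clauses forcing the symmetric {+1,-1}-sequence with entry variables
    var(0..n-1) to m-compress to x_comp, following cases 1-7 above
    (true encodes +1, false encodes -1)."""
    d = len(x_comp)
    m = n // d
    clauses = []
    for i, v in enumerate(x_comp):
        idx = [var(i + j * d) for j in range(m)]
        if v == m:                      # cases 1 and 2: all entries are +1
            clauses += [[k] for k in idx]
        elif v == -m:                   # cases 6 and 7: all entries are -1
            clauses += [[-k] for k in idx]
        elif m == 2 and v == 0:         # case 4: exactly one of the two is true
            clauses += [[idx[0], idx[1]], [-idx[0], -idx[1]]]
        elif m == 3 and v == 1:         # case 3: exactly one of three is false
            clauses += [[-idx[0], -idx[1], -idx[2]],
                        [idx[0], idx[1]], [idx[0], idx[2]], [idx[1], idx[2]]]
        elif m == 3 and v == -1:        # case 5: exactly one of three is true
            clauses += [[idx[0], idx[1], idx[2]],
                        [-idx[0], -idx[1]], [-idx[0], -idx[2]],
                        [-idx[1], -idx[2]]]
        else:
            raise ValueError("value impossible for an m-compressed entry")
    return clauses
\end{verbatim}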
A satisfying assignment to the generated SAT instance provides an uncompression
$(A,B,C,D)$ of $(A',B',C',D')$. However, the uncompression need not be a
Williamson sequence. To ensure that the solutions produced by the SAT solver
are in fact Williamson sequences we additionally use the programmatic SAT
Williamson encoding as described in Section~\ref{sec:progwill}.
For each $(A',B',C',D')$ generated in step~4 we generate a SAT instance which
contains the clauses specified above. We then solve the SAT instances with
a programmatic SAT solver whose programmatic clause generator specifies
that any satisfying assignment of the instance encodes a Williamson sequence
and performs an exhaustive search to find all solutions.
By construction, every Williamson sequence of order $n$ will have its
$m$\nobreakdash-compression generated in step~4, making this search
totally exhaustive (up to the discarded equivalences).
\subsection{Postprocessing: Remove equivalent Williamson sequences}\label{sec:equivcheck}
After step~5 we have produced a list of all the Williamson sequences of order $n$
which have certain sum-of-squares decompositions. We chose the
decompositions in such a way that every Williamson sequence is equivalent to
one with such a decomposition, but this does not account for all possible equivalences, so
some Williamson sequences which we generate may be equivalent to each other.
For the purpose of counting the total number of inequivalent
Williamson sequences which exist in order $n$ it is necessary to
examine each Williamson sequence in the list and determine if it
is equivalent to another Williamson sequence in the list. This can be done
by repeatedly applying the equivalence operations from Section~\ref{sec:willequiv}
on the Williamson sequences in the list
and discarding those which are equivalent to a previously
found Williamson sequence.
However, this can be inefficient because there are
typically a large number of Williamson sequences in each
equivalence class. Instead, a more efficient way of testing for equivalence
is to define a single representative in each equivalence class
which is easy to compute. Then two Williamson sequences can
be tested for equivalence by testing that their representatives are equal.
As a first step in defining a single representative
in each equivalence class of Williamson
sequences we first consider only the equivalence operations E1, E2, and~E3 (reorder,
negate, and shift). The operations E2 and~E3 apply to individual sequences $X$ and
there are up to four sequences which could be generated using E2 and~E3, namely,
$X$, $\E2(X)$, $\E3(X)$, and $\E2(\E3(X))$. Let $M_X$ be the lexicographic
minimum of these four sequences. Given a Williamson sequence $(A,B,C,D)$, we compute
$(M_A,M_B,M_C,M_D)$ and then use operation E1 on the sequences in the quadruple
to sort those sequences in increasing lexicographic order. The resulting sequence
is the lexicographic minimum of all sequences equivalent to $(A,B,C,D)$
using the operations E1, E2, and~E3 and is therefore a unique single representative of the
equivalence class which we denote $M_{(A,B,C,D)}$.
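A Python sketch of this representative computation, with the single-sequence
operations E2 and E3 passed in as callables (their definitions from
Section~\ref{sec:willequiv} are not reproduced here):
\begin{verbatim}
def min_e2_e3(x, e2, e3):
    """M_X: the lexicographic minimum of X, E2(X), E3(X), and E2(E3(X))."""
    return min(tuple(x), tuple(e2(x)), tuple(e3(x)), tuple(e2(e3(x))))

def representative_e123(quad, e2, e3):
    """M_{(A,B,C,D)}: minimise each member under E2/E3, then sort the four
    members lexicographically (operation E1, reorder)."""
    return tuple(sorted(min_e2_e3(x, e2, e3) for x in quad))
\end{verbatim}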
Next, consider the equivalence operation~E4 (permute entries). Let $\sigma$
be an automorphism of the cyclic group $C_n$ and let $\sigma(X)$ be the sequence
whose $i$th entry is $x_{\sigma(i)}$. Then the lexicographic minimum of the set
\[ S_{(A,B,C,D)} \coloneqq \brace[\big]{\, M_{(\sigma(A),\sigma(B),\sigma(C),\sigma(D))} : \sigma\in\Aut(C_n) \,} \]
is the lexicographic minimum of all sequences equivalent to $(A,B,C,D)$
using the operations E1, E2, E3, and~E4. (This is due to the fact that
E4 commutes with E1, E2, and~E3, so it is always possible to find the
global lexicographic minimum by first trying all possible ways of applying~E4
and only afterwards considering E1, E2, and~E3.)
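The previous sketch extends to E4 using the fact that the automorphisms of $C_n$
are the maps $i\mapsto ki\bmod n$ with $\gcd(k,n)=1$:
\begin{verbatim}
from math import gcd

def representative_e1234(quad, e2, e3):
    """Lexicographic minimum of S_{(A,B,C,D)}: apply every automorphism
    sigma_k(i) = k*i mod n and take the smallest E1/E2/E3-representative."""
    n = len(quad[0])
    candidates = []
    for k in range(1, n):
        if gcd(k, n) == 1:
            permuted = [tuple(x[(k * i) % n] for i in range(n)) for x in quad]
            candidates.append(representative_e123(permuted, e2, e3))
    return min(candidates)
\end{verbatim}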
Finally, if $n$ is even we consider the equivalence operation~E5 (alternating negation).
The lexicographic minimum of the set
$S_{(A,B,C,D)} \cup S_{\E5(A,B,C,D)}$
will be the lexicographic minimum of all sequences equivalent to $(A,B,C,D)$
and is therefore a single unique representative of the equivalence class.
(Again, this is due to the
fact that E5 commutes with the other operations so it is always possible
to find the global lexicographic minimum by
first trying all possible ways of applying~E5 before applying the other operations.)
\subsection{Optimizations}\label{sec:optimizations}
While the procedure just described will correctly enumerate all Williamson sequences of
a given order $n$ divisible by $2$ or $3$, there are a few optimizations which can be used to improve
the efficiency of the search.
Note that in step~3 we have not generated \emph{all} possible
$m$\nobreakdash-compression quadruples; we only generate those quadruples that have rowsums
$(R_A,R_B,R_C,R_D)$ which correspond to solutions of~\eqref{eq:willdioeq}, and we
use the negation and reordering equivalence operations to cut down the number
of possible rowsums necessary to check. However, there still remain equivalences which
can be removed; if~$\sigma$ is an automorphism of the cyclic group $C_n$ then
$(A,B,C,D)$ is a Williamson sequence if and only if $(\sigma(A),\sigma(B),\sigma(C),\sigma(D))$
is a Williamson sequence (with $\sigma(X)$ defined so that its $i$th entry is $x_{\sigma(i)}$).
Thus if both $A$ and $\sigma(A)$ are in the list generated
in step~2 we can remove one from consideration. Unfortunately, we cannot do the same
in the lists for $B$, $C$, and~$D$, since it is not possible to know which
representatives for $B$, $C$, and~$D$ to keep, as the representatives
must match with the representative for $A$ that was kept.
Similarly, in step~5 one can ignore any SAT instance which can be transformed
into another SAT instance using the equivalence operations from Section~\ref{sec:willequiv}.
In this case the solutions in the ignored SAT instance will be equivalent to those
in the SAT instance associated to it through the equivalence transformation.
In the programmatic Williamson encoding we can often learn shorter clauses
with a slight modification of the procedure described in Section~\ref{sec:progwill}.
Instead of checking $\sum_{X\in S}\PSD_X(s)>4n$ directly, we
find the smallest subset $S'$ of $S$ such that $\sum_{X\in S'}\PSD_X(s)>4n$
(if such a subset exists). This is done by sorting the values
of $\PSD_X(s)$
and performing the check using the largest values
$\PSD_X(s)$ before considering the smaller values. For example,
if $\PSD_B(s)>\PSD_A(s)$ then we would check $\PSD_B(s)>4n$ before checking
$\PSD_A(s)+\PSD_B(s)>4n$.
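A sketch of this subset search: adding the largest $\PSD$ values first exceeds the
bound with as few terms as possible, so a greedy pass suffices (the function name is
chosen only for this sketch):
\begin{verbatim}
def smallest_violating_subset(psd_values, n):
    """Indices of a smallest subset of the given PSD_X(s) values whose sum
    exceeds 4n, or None if not even the full set does."""
    order = sorted(range(len(psd_values)),
                   key=lambda i: psd_values[i], reverse=True)
    total, chosen = 0.0, []
    for i in order:
        total += psd_values[i]
        chosen.append(i)
        if total > 4 * n:
            return chosen
    return None
\end{verbatim}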
When $n$ is odd we can use Theorem~\ref{thm:willprododd} to provide additional
information to the SAT solver. For simplicity, suppose we fix $a_0=1$;
as shown in~\citep[\S3.1.2]{DBLP:phd/basesearch/Bright17},
this can be done by fixing the sign of $\rowsum(A)$ so that it satisfies
$\rowsum(A)\equiv n\pmod{4}$ (rather than requiring it to be nonnegative).
Also fixing the values $b_0=c_0=d_0=1$, Theorem~\ref{thm:willprododd} says that
\[ a_kb_kc_kd_k = -1 \qquad \text{for $k=1$, $\dotsc$, $(n-1)/2$} . \]
Thinking of the entries as Boolean variables, we can encode the
multiplicative constraint in conjunctive normal form as
\[ (a_k\lor b_k\lor c_k\lor d_k)\land(\lnot a_k\lor\lnot b_k\lor c_k\lor d_k)\land\dotsb\land(\lnot a_k\lor\lnot b_k\lor\lnot c_k\lor\lnot d_k) \]
(that is, all the clauses on the four variables $a_k$, $b_k$, $c_k$, $d_k$
with an even number of negative literals).
We add these clauses for $k=1$, $\dotsc$, $(n-1)/2$ into each SAT instance
generated in each odd order $n$.
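For illustration, the clause generation for the constraint $a_kb_kc_kd_k=-1$ at a
single index $k$ can be written as follows, where the four arguments are the DIMACS
variable numbers of $a_k$, $b_k$, $c_k$, $d_k$ (the numbering itself is left abstract):
\begin{verbatim}
from itertools import product

def product_theorem_clauses(va, vb, vc, vd):
    """Clauses encoding a_k*b_k*c_k*d_k = -1 (an odd number of the four
    variables is false): all clauses with an even number of negative literals."""
    clauses = []
    for signs in product((1, -1), repeat=4):
        if signs.count(-1) % 2 == 0:
            clauses.append([s * v for s, v in zip(signs, (va, vb, vc, vd))])
    return clauses

print(product_theorem_clauses(1, 2, 3, 4))  # eight clauses on variables 1-4
\end{verbatim}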
\section{Results}\label{sec:results}
We implemented the algorithm described in Section~\ref{sec:sat+casmethod},
including all optimizations,
and ran it in all orders $n\leq70$ divisible by $2$ or $3$.
Step~1 was completed using
the computer algebra system \textsc{Maple}.
Steps~2--4 and the postprocessing were completed
using C++ code which used the library FFTW~\citep{frigo2005design}
for computing PSD values. Step~5 was completed using
\textsc{MapleSAT}~\citep{DBLP:conf/sat/LiangKPCG17} modified to support a programmatic
interface and also used FFTW for computing PSD values.
Since FFTW introduces some floating-point errors in the values it returns,
when checking the $\PSD$ values of $A$ we actually ensure that
$\PSD_A(s)>4n+\epsilon$ for some $\epsilon$ which is small but larger than
the error of the discrete Fourier transform used,
e.g., $\epsilon=10^{-2}$.
Our computations were performed on a cluster
of 64\nobreakdash-bit Intel Xeon E5-2683V4 2.1~GHz processors limited to
6~GB of memory and running \mbox{CentOS}~7.
Timings for running our entire algorithm (in hours) in even orders
are given in Table~\ref{tbl:results}, and timings for the running
of the SAT solver alone are given in Table~\ref{tbl:satresults}.
The bottleneck of our method
for large even~$n$ was the matching procedure described in step~4,
which requires enumerating and then sorting a very large number of vectors.
For example, when $n=64$ and $R_A=R_B=8$ there were over $26.6$ billion
vectors added to $L_{AB}$.
Table~\ref{tbl:results} also includes the number of SAT instances
which we generated in each order, as well as the total number of
Williamson sequences which were found up to equivalence
(denoted by $\#W_n$). The counts for $\#W_n$ are not identical to those
given in~\citep{bright2017sat+} because that work did not
use the equivalence operation E5 (alternating negation) but the
results up to order $64$ (the largest order previously solved)
are otherwise identical.
\begin{table}
\begin{center}
\begin{tabular}{c@{\qquad}c@{\qquad}c@{\qquad}c}
$n$ & Time (h) & \# inst. & $\#W_n$ \\ \hline
2 & 0.00 & 1 & 1 \\
4 & 0.00 & 1 & 1 \\
6 & 0.00 & 1 & 1 \\
8 & 0.00 & 1 & 1 \\
10 & 0.00 & 2 & 2 \\
12 & 0.00 & 3 & 3 \\
14 & 0.00 & 3 & 5 \\
16 & 0.00 & 5 & 6 \\
18 & 0.00 & 22 & 23 \\
20 & 0.00 & 14 & 17 \\
22 & 0.00 & 22 & 15 \\
24 & 0.00 & 40 & 72 \\
26 & 0.00 & 24 & 26 \\
28 & 0.00 & 78 & 83 \\
30 & 0.00 & 281 & 150 \\
32 & 0.00 & 70 & 152 \\
34 & 0.00 & 214 & 91 \\
36 & 0.00 & 1013 & 477 \\
38 & 0.00 & 360 & 50 \\
40 & 0.01 & 4032 & 1499 \\
42 & 0.02 & 2945 & 301 \\
44 & 0.01 & 1163 & 249 \\
46 & 0.03 & 1538 & 50 \\
48 & 0.09 & 4008 & 9800 \\
50 & 0.45 & 3715 & 275 \\
52 & 0.78 & 4535 & 926 \\
54 & 3.00 & 25798 & 498 \\
56 & 0.98 & 18840 & 40315 \\
58 & 15.97 & 9908 & 73 \\
60 & 27.14 & 256820 & 4083 \\
62 & 64.74 & 19418 & 61 \\
64 & 65.52 & 34974 & 69960 \\
66 & 764.96 & 109566 & 262 \\
68 & 593.77 & 122150 & 1113 \\
70 & 957.96 & 71861 & 98
\end{tabular}\end{center}
\caption{A summary of the running time in hours, number
of SAT instances used, and number of inequivalent
Williamson sequences generated in each even order $n\leq70$.}\label{tbl:results}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{ccc}
& \multicolumn{2}{c}{SAT Solving Time (hours)} \\
$n$ & CNF encoding & Programmatic \\ \hline
30 & 0.01 & 0.00 \\
32 & 0.01 & 0.00 \\
34 & 0.03 & 0.00 \\
36 & 0.36 & 0.00 \\
38 & 0.12 & 0.00 \\
40 & 2.01 & 0.01 \\
42 & 4.31 & 0.01 \\
44 & 5.59 & 0.00 \\
46 & 8.65 & 0.01 \\
48 & 18.78 & 0.02 \\
50 & 53.12 & 0.02 \\
52 & $-$ & 0.05 \\
54 & $-$ & 0.18 \\
56 & $-$ & 0.18 \\
58 & $-$ & 0.13 \\
60 & $-$ & 9.56 \\
62 & $-$ & 0.45 \\
64 & $-$ & 0.94 \\
66 & $-$ & 5.14 \\
68 & $-$ & 17.18 \\
70 & $-$ & 8.07
\end{tabular}\end{center}
\caption{The total time spent running \textsc{MapleSAT}
in each even order $30\leq n\leq 70$ using the CNF encoding
and the programmatic encoding. A timeout of 100 hours was used.}\label{tbl:satresults}
\end{table}
Table~\ref{tbl:oddresults} contains the same information
as Table~\ref{tbl:results} except in the odd orders.
The bottleneck for our algorithm in these orders
was the uncompression step (since uncompressing by a factor
of $3$ is more challenging than uncompressing by a factor of $2$),
i.e., solving the SAT instances.
The counts for $\#W_n$ in these cases exactly match
those given by~\cite{holzmann2008williamson} up to
the largest order they solved.
We found one previously unknown Williamson sequence of
order $63$ using 466,561 $3$\nobreakdash-compressed
quadruples.
We give this Williamson sequence here explicitly, with `\verb|+|'
representing $1$, `\verb|-|' representing $-1$, and
each sequence member on a new line:
{\microtypesetup{activate=false}
\begin{center}
\verb|-++++-+---+-+----++--++-+++++-+--+-+++++-++--++----+-+---+-++++| \\
\verb|-++--+-+-++----++++-+--+--+++-++++-+++--+--+-++++----++-+-+--++| \\
\verb|-++--+---+---+++--+++++-+-+++-++++-+++-+-+++++--+++---+---+--++| \\
\verb|-----+--++++---+-+--+++-+----+-++-+----+-+++--+-+---++++--+----|
\end{center}}
\begin{table}
\begin{center}
\begin{tabular}{c@{\qquad}c@{\qquad}c@{\qquad}c}
$n$ & Time (h) & \# inst. & $\#W_n$ \\ \hline
3 & 0.00 & 1 & 1 \\
9 & 0.00 & 3 & 3 \\
15 & 0.00 & 8 & 4 \\
21 & 0.00 & 30 & 7 \\
27 & 0.00 & 172 & 6 \\
33 & 0.01 & 364 & 5 \\
39 & 0.05 & 1527 & 1 \\
45 & 1.12 & 15542 & 1 \\
51 & 4.57 & 17403 & 2 \\
57 & 61.26 & 58376 & 1 \\
63 & 1670.95 & 466561 & 2 \\
69 & 8162.50 & 600338 & 1
\end{tabular}\end{center}
\caption{A summary of the running time in hours, number
of SAT instances used, and number of inequivalent
Williamson sequences generated in each odd order $n$ divisible by $3$ and less than $70$.}\label{tbl:oddresults}
\end{table}
We also used our enumeration of Williamson sequences of order $2n$ for $n\leq35$
along with Theorem~\ref{thm:8will} to explicitly construct 8-Williamson sequences in all
odd orders $n\leq35$. Table~\ref{tbl:8willresults} contains the counts (denoted
by $\#8W_n$) of how many
inequivalent 8\nobreakdash-Williamson sequences can be constructed in this fashion.
Note that $\#8W_n$ does not count the total number of 8-Williamson
sequences in order $n$, only those that can be constructed via
the construction of Theorem~\ref{thm:8will}. We explicitly give one example
of an 8-Williamson sequence of order $35$, with `\verb|+|'
representing $1$ and `\verb|-|' representing~$-1$:
{\small\microtypesetup{activate=false}
\begin{center}
\verb|++++-+++--+-+----++----+-+--+++-+++ +++---+-+++-++--+--+--++-+++-+---++| \\
\verb|++-+-+++-+-----++++++-----+-+++-+-+ ++-+--+--++---+-++++-+---++--+--+-+| \\
\verb|++---++-+-+--+--------+--+-+-++---+ ++---++-+-+--+--------+--+-+-++---+| \\
\verb|+--+++-----+---+-++-+---+-----+++-- +---++--++++-++-+--+-++-++++--++---|
\end{center}}\noindent
These sequences can be used to generate a Hadamard matrix of order $8\cdot35=280$;
for details see \cite{kotsireas2006constructions,kotsireas2009hadamard}.
\begin{table}
\begin{center}
\begin{tabular}{cccccccccc}
$n$ & 1 & 3 & 5 & 7 & 9 & 11 & 13 & 15 & 17 \\
$\#8W_n$ & 1 & 1 & 1 & 4 & 13 & 10 & 18 & 129 & 79 \\[-0.75em] \\
$n$ & 19 & 21 & 23 & 25 & 27 & 29 & 31 & 33 & 35 \\
$\#8W_n$ & 43 & 280 & 48 & 257 & 486 & 71 & 58 & 240 & 78
\end{tabular}
\end{center}
\caption{The number of inequivalent 8-Williamson sequences generated using Theorem~\ref{thm:8will}
in each odd order $n\leq35$.}\label{tbl:8willresults}
\end{table}
An explicit enumeration of all the Williamson
sequences and 8-Williamson sequences we constructed has also been made available
online~\citep{willsat}.
\section{Conclusion}\label{sec:conclusion}
In this paper we have shown the power of the SAT+CAS paradigm (i.e.,
the technique of applying the tools from the fields of satisfiability checking
and symbolic computation)
as well as the power and flexibility of the programmatic SAT approach.
Our focus was applying the SAT+CAS paradigm to the Williamson conjecture
from combinatorial design theory, but we believe the SAT+CAS paradigm
shows promise to be applicable to other problems and conjectures.
However, the SAT+CAS paradigm is not something that can
be effortlessly applied to problems or expected to be effective
on all types of problems.
Our experiments in this area allow us to offer some guidance about
the kind of problems in which the SAT+CAS paradigm would work
particularly well.
In particular, \cite{DBLP:phd/basesearch/Bright17}~highlights the
following properties of problems which makes them good candidates
to study using the SAT+CAS paradigm:
\begin{enumerate}
\item \emph{There is an efficient encoding of the problem into a Boolean setting.}
Since the problem has to be translated into a SAT instance or multiple SAT instances
the encoding should ideally be straightforward and easy to compute. Not only does
this make the process of generating the SAT instances easier and less error-prone
it also means that the SAT solver is executing its search through a domain which
is closer to the original problem. In general, the more convoluted the encoding the less likely
the SAT solver will be able to efficiently search the space. For example, in our
application we were fortunate to be able to encode $\pm1$ values as Boolean variables.
\item \emph{There is some way of splitting the Boolean formula into multiple instances
using the knowledge from a CAS\@.} Of course, a SAT instance can always be split into multiple
instances by hard-coding the values of certain variables and then generating instances
which cover all possible assignments of those variables. However, this strategy is typically
not an ideal way of splitting the search space. The instances generated in this fashion
tend to have wildly unbalanced difficulties, with some very easy instances and some
very hard instances, limiting the benefits of using many processors to search the space.
Instead, splitting using domain-specific
knowledge means that instances which encode a part of the search space that
can be discarded on domain-specific grounds never need to be generated
at all. For example, in our application
we only needed to generate SAT instances with a few possibilities for the rowsums
of the sequences $A$, $B$, $C$, and~$D$ and could ignore all other possible rowsums.
\item \emph{The search space can be split into a reasonable number of cases.}
One of the disadvantages of using SAT solvers is that it can be difficult to tell how much progress
is being made as the search is progressing.
The process of splitting the search space allows one to get a better estimate of the progress
being made, assuming the difficulty of the instances isn't extremely unbalanced.
In our experience, splitting the search space into instances which can be solved
relatively quickly worked well, assuming the number of instances isn't too large so
that the overhead of calling the SAT solver is small.
This allowed the space to be searched significantly faster (especially
when using multiple processors) than a single instance would have taken to complete.
In our application the order $n=69$
required the most amount of splitting; in this case we split the search
space into 600,338 SAT instances and each instance took an average of $48.8$ seconds to solve.
\item \emph{The SAT solver can learn something about the space as the search is running.}
The efficiency of SAT solvers is in part due to the facts (learned clauses) that they
acquire as the search progresses. It can often be difficult for a human to make sense
of these facts, but they play a vital role inside the SAT solver; therefore a problem
where the SAT solver can take advantage of its ability to learn nontrivial clauses is
one for which the SAT+CAS paradigm is well suited.
For more sophisticated
lemmas that the SAT solver would be unlikely to learn (because they
rely on domain-specific knowledge) it is useful to learn clauses programmatically via
the programmatic SAT idea~\citep{DBLP:conf/sat/GaneshOSDRS12}.
For example, the timings in Table~\ref{tbl:satresults} show how
important the learned programmatic clauses were to the efficiency of the SAT solver.
\item \emph{There is domain-specific knowledge which can be efficiently given to the SAT solver.}
Domain-specific knowledge was found to be critical to solving all but the
smallest instances of the problems.
The instances which were generated
using naive encodings were typically only able to be solved for small sizes
and all significant increases in the size of the problems past that point
came from the usage of domain-specific knowledge.
Of course, for the information to be useful to the solver there needs to be an efficient way
for the solver to be given the information; it can be encoded directly in the SAT instances
or generated on-the-fly using programmatic SAT functionality.
For example, in our application we show how to encode
Williamson's product theorem for odd orders $n$ directly into the SAT instance in Section~\ref{sec:optimizations}
and we show how to programmatically encode the PSD test in Section~\ref{sec:progwill}.
\item \emph{The solutions of the problem lie in spaces which cannot be simply enumerated.}
If the search space is highly structured and there exists an efficient search algorithm
which exploits that structure then using this algorithm directly
is likely to be a better choice.
A SAT solver could also perform this search
but would probably do so less efficiently; instead,
SAT solvers have a relative advantage when the search space is less structured.
For example, in our application we require searching for sequences
whose compressions are equal to some given sequence and use the PSD test to filter
certain sequences from consideration. The space is specified by a number of simple
but ``messy'' constraints and SAT solvers are good at dealing with that kind of complexity.
\end{enumerate}
Perhaps the most surprising result of our work on the Williamson conjecture
is our discovery that there are typically many more Williamson matrices
in even orders than there are in odd orders. In fact, every odd order $n$
in which a search has been carried out has $\#W_n\leq10$,
while we have shown that every even order $18\leq n\leq 70$ has $\#W_n>10$
and there are some orders which contain thousands of inequivalent
Williamson matrices.
Part of this dichotomy can be explained by Theorem~\ref{thm:willdbl}
which generates Williamson matrices of order $2n$ from Williamson matrices
of odd order $n$.
For example, the two classes of Williamson matrices
of order $10$ can be generated from the single class of Williamson matrices
of order $5$.
However, this still does not fully explain the relative abundance of Williamson
matrices in even orders. In particular,
it cannot possibly explain why Williamson matrices exist in order $70$
because Williamson matrices of order $35$ do not exist.
\section*{Acknowledgements}
This work was made possible because of the resources available on the
petascale supercomputer Graham at the University of Waterloo,
managed by Compute Canada and SHARCNET,
the Shared Hierarchical Academic Research Computing Network.
The authors thank Dragomir \DJ okovi\'c for
informing us about the construction of \cite{turyn1970}
using complex Hadamard matrices.
| {
"timestamp": "2018-04-05T02:03:02",
"yymm": "1804",
"arxiv_id": "1804.01172",
"language": "en",
"url": "https://arxiv.org/abs/1804.01172",
"abstract": "We employ tools from the fields of symbolic computation and satisfiability checking---namely, computer algebra systems and SAT solvers---to study the Williamson conjecture from combinatorial design theory and increase the bounds to which Williamson matrices have been enumerated. In particular, we completely enumerate all Williamson matrices of even order up to and including 70 which gives us deeper insight into the behaviour and distribution of Williamson matrices. We find that, in contrast to the case when the order is odd, Williamson matrices of even order are quite plentiful and exist in every even order up to and including 70. As a consequence of this and a new construction for 8-Williamson matrices we construct 8-Williamson matrices in all odd orders up to and including 35. We additionally enumerate all Williamson matrices whose orders are divisible by 3 and less than 70, finding one previously unknown set of Williamson matrices of order 63.",
"subjects": "Logic in Computer Science (cs.LO); Symbolic Computation (cs.SC); Combinatorics (math.CO)",
"title": "Applying Computer Algebra Systems with SAT Solvers to the Williamson Conjecture",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109525293959,
"lm_q2_score": 0.7185943805178139,
"lm_q1q2_score": 0.7076796163600194
} |
https://arxiv.org/abs/1910.11302 | Color-critical Graphs and Hereditary Hypergraphs | A quick proof of Gallai's celebrated theorem on color-critical graphs is given from Gallai's simple, ingenious lemma on factor-critical graphs, in terms of partitioning the vertex-set into a minimum number of hyperedges of a hereditary hypergraph, generalizing the chromatic number. We then show examples of applying the results to new problems and indicate the way to algorithms and refined complexity results for all these examples at the same time. | \section{Introduction }
Graphs and digraphs are without loops or parallel edges. Given a hypergraph $H=(V,E)$, $(E\subseteq\Pscr(V),$ where $\Pscr(V)$ is the power-set of $V)$ we will call the elements of $V$ {\em vertices}, and those of $E$ {\em hyperedges}, $n:=|V|$. A {\em cover} is a family $\Cscr\subseteq E$ such that $\displaystyle \cup \Cscr := \cup_{C\in \Cscr}C=V$. We suppose that $E$ is a cover. The minimum number of hyperedges in a cover is denoted $\rho:=\rho(H)$. The {\em hereditary closure} of $H$ is $H^h=(V,E^h)$ where $E^h:=\{X\subseteq e: e\in E\}$, and $H$ is {\em hereditary}, if $H^h=H$.
In this paper we study {\em hereditary} hypergraphs (sometimes also called independence systems or simplicial complexes in the literature). Hyperedges of cardinality $1$ will be called {\em singletons}, and hyperedges of cardinality $2$ are called {\em edges}. Deleting a vertex $v$ of the hypergraph $H$ results in the hypergraph $H-v=(V\setminus \{v\}, \{e\in E: v\notin e\} )$. For hereditary hypergraphs this is the same as deleting $v$ from all hyperedges. Like for coloring, for hereditary hypergraphs a minimum cover can be supposed to be a {\em partition} of $V$, and {\em we will suppose this}! Indeed, a vertex contained in several hyperedges can be deleted from one of these hyperedges. This assumption is of primary importance, since the edges and the singletons play a major role in such partitions.
If $H$ is a hereditary hypergraph, we have $\rho(H)-1\le \rho(H-v)\le \rho(H)$; if the first inequality is satisfied with equality for all $v\in V$, we say that $H$ is {\em critical}.
Given a hypergraph $H=(V,E)$, denote by $E_2$ the set of its edges, $H_2:=(V,E_2)$. The {\em components} of $H$ are defined as those of $(H^h)_2$. These form a partition of $V$, and correspond to the usual hypergraph components: $H$ is {\em connected} if $(H^h)_2$ is connected. Abusing terminology, the vertex-set of a component will also be called component.
The maximum size of a matching in a graph $G$ is denoted by $\nu(G)$.
We prove Gallai's ingenious, simple lemma \cite{gallai:factorCritical} for self-containedness:
\begin{lem*}
If $G=(V,E)$ is a connected graph, and $\nu(G-v)=\nu (G)$ for all $v\in V$, then $\nu(G)=\frac{n-1}2$.
\end{lem*}
\proof Suppose for a contradiction that $M$ is a maximum matching and $u\ne v\in V$ are not covered by $M$. Let the distance of $u$ and $v$ be minimum among all maximum matchings and two of their uncovered vertices. Let $P\subseteq E(G)$ be a shortest path between $u$ and $v$. Clearly, $|P|\ge 2$, otherwise the edge $uv$ could be added to $M$, contradicting the maximality of $M$.
Let $u'$ be the neighbor of $u$ on $P$, and $M'$ a maximum matching of $G-u'$. The symmetric difference $D$ of $M$ and $M'$ is the disjoint union of even paths and even circuits alternating between $M$ and $M'$. If $u$ and $u'$ are not in the same component of $D$, then, after interchanging the edges of $M$ and $M'$ in the component of $u'$, neither $u$ nor $u'$ is covered by a matching edge, leading to the same contradiction as before.
On the other hand, if they are in the same component, the same interchange of the edges leads to a maximum matching that leaves $u'$ and $v$ uncovered, contradicting the minimum choice of the distance between $u$ and $v$.
\qed
Gallai's theorem on color-critical graphs \cite{gallai:colorCritical} is a beautiful statement but its original proof was rather complicated, essentially more difficult than the above lemma on factor-critical graphs \cite{gallai:factorCritical}. Stehl\'\i k \cite{STCC} gave a simpler proof. We show here that the generalization to hereditary hypergraphs can be shortly reduced to Gallai's Lemma (Section~\ref{sec:main}), in addition giving rise to a wide range of known and new examples (Section~\ref{sec:a})\footnote{The Theorem below and its proof have been included in a more complex framework of an unpublished manuscript \cite{AM}. Several occurrences of old and recent, direct special cases of hereditary hypergraphs make it useful to provide an exclusive, short presentation of this general theorem, with some examples of hereditary hypergraphs of interest.}, to algorithms, and clarifications of their complexity issues.
\section{Theorem and Proof}\label{sec:main}
\begin{thm*
In a connected, hereditary, critical hypergraph $\rho\le \frac{n+1}{2}$. Furthermore, either the inequality is strict and there is a minimum cover without a singleton, or equality holds, and there are minimum covers consisting only of edges and exactly one singleton, which can be any vertex.
\end{thm*}
\proof For $n\le 1$ the statement is obvious, so suppose $H$ is critical, $n\ge 2$. Then for all $v\in V$ there exists a minimum cover containing $\{v\}$: indeed, adding $\{v\}$ to a minimum cover of $H-v$, we get a minimum cover of $H$. Consider a minimum cover of $H-v$ (partitioning $V\setminus \{v\}$), {\em maximizing $C_v:=\cup\,\Cscr_v$, where $\Cscr_v$ is the set of its non-singleton elements}. Clearly, $\rho= |\Cscr_v|+ |V\setminus C_v|$.
\medskip\noindent
{\bf Claim~1}. For all $u, v\in V$ and each component $C\subseteq V$ of $\Cscr_u\cup\Cscr_v:\, |C\cap C_u|=|C\cap C_v|.$
\smallskip
Indeed, if $k_u$ and $k_v$ are the number of hyperedges in this component of $\Cscr_u$ and $\Cscr_v$ respectively, then $k_u + |C\setminus C_u|= k_v + |C\setminus C_v|$ for if say $k_u + |C\setminus C_u| > k_v + |C\setminus C_v|$, then
in the minimum cover consisting of $\Cscr_u$ and $|V\setminus C_u|$ singletons,
replace the hyperedges in $C$, that is, $k_u$ hyperedges of $\Cscr_u$ and $|C\setminus C_u|$ singletons,
by the $k_v$ hyperedges of $\Cscr_v$ in $C$, and $|C\setminus C_v|$ singletons, leading to a cover of size
\[\rho+ k_v - k_u + |C\setminus C_v| - |C\setminus C_u| < \rho,\]
a contradiction. But then the proven equality implies that the same replacement -- in either direction -- of the hyperedges leads to a minimum cover, increasing the size of $C_u$ if say $|C\cap C_u|<|C\cap C_v|$, and this contradiction with the choice of $C_u$ proves the claim.
\medskip\noindent
{\bf Claim~2}. If each minimum cover of $H$ contains a singleton, then $\Cscr_v\subseteq E_2$ for all $v\in V$.
\smallskip
Let $u\in C_v$ be arbitrary, and let us prove that $|e_u|=2$ for the hyperedge $e_u$, $u\in e_u\in\Cscr_v$. Since $u\in C_v\setminus C_u$, by Claim~1, the component $C$ of $\Cscr_u\cup\Cscr_v$ containing $u$ also contains a vertex $v_0\in C_u\setminus C_v$. Let $P$ be a shortest path between $v_0$ and $u$ in $\Cscr_u\cup\Cscr_v$ (in the connected component $C$). Let $v_0, v_1,v_2\ldots$ be the vertices of $P\subseteq E$, in fact $P\subseteq E_2$, in this order, necessarily alternating between subsets of hyperedges in $\Cscr_u$ and subsets of hyperedges in $\Cscr_v$. We prove by induction on $|P|$ {\em the assertion that the latter subsets (of hyperedges in $\Cscr_v$) are in fact in $\Cscr_v$}:
Note first that $\{v_1,v_2\}\in\Cscr_v$, because if it were only a subset of a hyperedge $e\in\Cscr_v$ with $|e|\ge 3$, then replacing $\{v_0\}$ and $e$ in the minimum cover $\Cscr_v\cup\{\{v'\}: v'\in V\setminus C_v\}$ by $\{v_0,v_1\}$ and $e\setminus \{v_1\}$, we get a minimum cover where the hyperedges of size at least two cover $C_v\cup \{v_0\}$, contradicting the definition of $C_v$, provided $v\ne v_0$. If $v=v_0$, $C_{v'}:=C_v\cup \{v_0\}$ can occur in Claim~2 choosing any $v'\in V\setminus (C_v\cup \{v_0\})$; $v'$ exists, since $V\setminus (C_v\cup \{v_0\})\ne\emptyset$ because of the condition of Claim~2.
This proves the assertion for $|P|=2$.
Let $\Cscr_{v_2}:=(\Cscr_v \setminus \{\{v_1,v_2\}\}) \cup \{\{v_0,v_1\}\}$, and $P':=P-\{v_1,v_2\}$; $\Cscr_{v_2}$ is a minimum cover of $H-v_2$ maximizing the union of non-singletons, and $|P'|<|P|$. Now the induction hypothesis finishes the proof of the assertion and of Claim~2.
\medskip
To finish the proof note first that a minimum cover without a singleton implies $\rho\le\frac n2$ and we are done.
Otherwise, Claim~2 can be applied, and $\rho=\frac{|C_v|}2+ |V\setminus C_v|$ follows for all $v\in V$. This formula also shows that a larger matching $\Cscr'_v$ would provide a smaller cover. So $\Cscr_v$ is a maximum matching of $H_2$ and does not cover $v$, so $\nu(H_2-v)=\nu(H_2)$ for all $v\in V$. The connectivity of $H$ means by definition that $H_2$ is connected, so the conditions of Gallai's Lemma are satisfied for $H_2$: $H_2$ is factor-critical, and $\{v\}$ $(v\in V)$ with a perfect matching of $H_2-v$ provide a cover of size $1+\frac{n-1}{2}=\frac{n+1}{2}$.
\qed
Let us restate the inequality of the Theorem so that it directly contains the formulation of \cite{gallai:colorCritical}:
\begin{cor}\label{cor:Gallai}
A hereditary hypergraph with $n\le 2(\rho -1)$ is either not critical, or not connected.
\end{cor}
\section{Examples, Algorithms and Conclusions}\label{sec:a}
In this section we show some examples applying the results to particular hypergraphs. Any hereditary hypergraph is an example, so we cannot aim for completeness, but we try to show how the specialization works. An important surprise is that the role of larger hyperedges turned out to be secondary; {\em $H_2$ plays the main role: the covers appearing in the Theorem consist only of edges; in the corollaries the components and connectivity depend only on $H_2$.}
\begin{cor}\label{cor:concrete}
Let $H=(V,E)$ be a hereditary hypergraph with $|V|\le 2(\rho(H) -1)$. Then
either there exists $v\in V$ so that $\rho(H-v)=\rho(H)-1$,
or $H$ is not connected, that is, there exists a partition $\{V_1, V_2\}$ of $V$ so that $\{v_1,v_2\}\notin E$ for all $v_1\in V_1$, $v_2\in V_2$. \qed
\end{cor}
\noindent
{\bf 3.1 Hereditary hypergraphs from graphs}
\smallskip
Let $G=(V(G),E(G))$ be either an undirected graph or a digraph, the context always determines the current meaning, and we define hereditary hypergraphs on $V(G)$. The more deeply the hypergraphs are related to $G$, the more interesting the results.
Fix a (not necessarily finite) set $\Gscr$ of (di)graphs and for each (di)graph $G$, let $H(G,\Gscr):=(V(G), F)$, where
\[F:=\{U\subseteq V(G): \hbox{$U$ induces in $G$ a graph without any induced subgraph in $\Gscr$}\}.\]
When are the Theorem or its corollaries meaningful or even interesting for $H(G,\Gscr)$?
If neither of the $2$-vertex graphs is in $\Gscr$, the hypergraphs $H(G,\Gscr)$ are connected for every graph $G$, and our Theorem and its corollaries are trivial. On $2$ vertices there are two undirected graphs: one without, and one with an edge between the two vertices. If the only graph of $\Gscr$ on two vertices is the edge-less graph, $H(G,\Gscr)$ consists of cliques of $G$;
if it is the edge on two vertices, $H(G,\Gscr)$ consists of stable sets of $G$.
In turn, according to Corollary~\ref{cor:concrete}, in the former case the disconnectivity of $H(G,\Gscr)$ means the disconnectivity of $G$, and in the latter case it means the disconnectivity of the complement of $G$. In these cases, the Theorem specializes to Gallai's theorem.
It is easy to see that in these cases the only possibility to add more graphs to $\Gscr$ is to add a clique (or stable set) of a given size. Then for some $k\in\mathbb{N}$, $H(G,\Gscr)$ is the family of cliques or stable sets of size at most $k$ on $V$. For $k\ge 2$ the Theorem applies without change and it is then about coloring with at most $k$ vertices of each color.
\medskip\noindent
{\bf 3.2 Hereditary hypergraphs from digraphs}
\smallskip
Similarly, for digraphs one of the subgraphs on two vertices has to be excluded: there are now three digraphs on two vertices: with or without an arc as in the undirected case, or with an arc in both directions ($2$-cycle). If $\Gscr$ contains only the latter, we also do not get anything new: keeping only arcs in both directions as an undirected edge we reduce the problem to Gallai's colorings in undirected graphs. However, if there are some other graphs in $\Gscr$ we have three interesting special cases: cliques, stable sets (Gallai), and a third case we discuss below, as well as cases from multigraphs.
\begin{cor}\label{cor:graphs]}
Let $G$ be a graph, $\Gscr$ a set of graphs, $H:=H(G,\Gscr)$, and $|V(G)|\le 2(\rho(H) -1)$. Then
either there exists $v\in V$ so that $\rho(H-v)=\rho(H)-1$,
or $H$ is not connected, that is, there exists a partition $\{V_1, V_2\}$ of $V$ so that for all $v_1\in V_1$, $v_2\in V_2$, the pair $\{v_1,v_2\}$ induces a graph in $\Gscr$.\qed
\end{cor}
As argued before the corollary, the interesting cases are when {\em the unique graph on two vertices of $\Gscr$ is an edge, a non-edge or a $2$-cycle}, and in the last case there are many possibilities to exclude further induced subgraphs. For instance we can include in $\Gscr$ $3$-cycles and all graphs on $4$ vertices having $4$-cycles. Actually an arbitrary subset of graphs having directed cycles, or the set of all such graphs, can be contained in $\Gscr$, and this will not make any change in the relevant critical graphs (as compared to including only $3$- and $4$-cycles, since no larger hyperedge plays a role). Corollary~\ref{cor:graphs]} holds, and partitioning into hyperedges of $H(G,\Gscr)$ means then partitioning into vertex-sets that induce acyclic digraphs: this is ``digraph coloring'',
for which Corollary~\ref{cor:graphs]} was asked in \cite{BS}. (The Theorem has then already been proved, see footnote~1. Stehl\'\i k \cite{M} missed its specialization to acyclic induced subgraphs, and answered \cite{BS} using the Edmonds-Gallai structure theorem.)
\medskip\noindent
{\bf 3.3 More Examples}
\smallskip
Clearly, common hyperedges of an arbitrary number of hereditary hypergraphs on the same set of vertices form a hereditary hypergraph. If all of them arise as stable sets of graphs, the intersection will be just the stable-set-hypergraph of the graph which is the union of the considered graphs. However, if the considered hypergraphs arise in different ways, the intersections may provide nontrivial new cases, if the role of the edges is kept in mind.
More generally, a {\em stable-set} in a (not necessarily hereditary) hypergraph $H=(V,E)$ is a set $S\subseteq V$ so that $S$ does not contain any $e\in E$. (Independent sets of matroids are those that do not contain a hyperedge from the circuit-hypergraph.) The family $\Sscr$ of all stable sets is obviously a hereditary family; $S\in \Sscr$, if and only if $V\setminus S$ is a {\em transversal} or blocker of the hyperedges; the family of transversals is an upper hereditary hypergraph, another source of examples:
In {\em upper hereditary} hypergraphs the supersets of hyperedges are also hyperedges.
The {\em dual} of $H=(V,E)$ is $H^d:=(V,E^d)$, where $E^d:=\{V\setminus e: e\in E\}$. The dual of a hereditary hypergraph is upper hereditary and vice versa, generating more examples; $(H^d)^d=H$. Each example of upper hereditary hypergraphs provides an example of hereditary hypergraphs, and vice versa. Upper hereditary hypergraphs arise for instance from vertex-sets of graphs that do contain one of a fixed set of graphs as induced subgraphs; being non-planar or non-bipartite is a special case.
In multi (di)graphs $G$ with for instance edge-multiplicities $z:E(G)\rightarrow \Rset$ and $\lambda\in\Rset$ we may consider the hereditary hypergraph $\{U\subseteq V(G): \hbox{sum of $z(e)$ on the edges induced by $U$}\le \lambda\}$, when Corollary~\ref{cor:graphs]} is again meaningful. The upper bound can be replaced by any monotone function of $z(e)$ and the graph, combined with vertex multiplicities or edge- and vertex-colored graphs, $\ldots$
\noindent
{\bf 3.4 Algorithms and Complexity}
\smallskip
The focus of the examples of the previous subsections was the Theorem. Algorithmic and complexity questions are less ``choosy'' and become meaningful and nontrivial for more examples.
Once in a while questions about particular, critical hereditary hypergraphs are raised anew, sometimes as open problems like in \cite{BS} about partitioning the vertex-set into acyclic digraphs. How can the NP-hard covering participate in well-characterizing minmax theorems? The discussion of this question is beyond the possibilities of this note.
This will be laid out in forthcoming papers; here we only mention the key to the solution briefly:
It is NP-hard to compute $\rho(H)$, and for hereditary hypergraphs $H$ it is not easier, since taking the hereditary closure does not affect $\rho$!
The covering problem for the hereditary closure of $3$-uniform hypergraphs contains the $3$-dimensional matching problem \cite{GJ}, and is therefore NP-hard even if the hyperedges of the hereditary hypergraph are given explicitly, and their number is polynomial in the input size. Indeed, $\rho = n/3$ if and only if there exists a partition into triangles.
However, the maximum number $\mu$ of vertices covered by non-singletons in a cover of a hereditary hypergraph can be computed in polynomial time, and the vertex-weighted generalization can also be solved! It can be seen that this maximum does not change if we write here ``minimum cover'' instead of ``cover''. This allows some aspects of minimum covers to be handled with well-characterizing minmax theorems and in polynomial time \cite{T}, for which results of Bouchet \cite{B}, Cornu\'ejols, Hartvigsen and Pulleyblank \cite{CP}, \cite{CHP} play an enlightening role.
\medskip\noindent
{\bf 3.5 Conclusion}:
We tried to show by the Theorem and multiple examples how results on graph colorings may be extended to covers in hypergraphs. We continue this work with minmax and structure theorems, develop algorithms at the general level of hereditary hypergraphs, and show more applications and connections between various problems \cite{T}, \cite{AM}.
We hope the reader will also have the reflex of using hereditary hypergraphs when a new special case is coming up!
\small
| {
"timestamp": "2019-10-25T02:20:37",
"yymm": "1910",
"arxiv_id": "1910.11302",
"language": "en",
"url": "https://arxiv.org/abs/1910.11302",
"abstract": "A quick proof of Gallai's celebrated theorem on color-critical graphs is given from Gallai's simple, ingenious lemma on factor-critical graphs, in terms of partitioning the vertex-set into a minimum number of hyperedges of a hereditary hypergraph, generalizing the chromatic number. We then show examples of applying the results to new problems and indicate the way to algorithms and refined complexity results for all these examples at the same time.",
"subjects": "Combinatorics (math.CO); Discrete Mathematics (cs.DM)",
"title": "Color-critical Graphs and Hereditary Hypergraphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109511920161,
"lm_q2_score": 0.7185943805178139,
"lm_q1q2_score": 0.7076796153989858
} |
https://arxiv.org/abs/0809.4814 | Reduction of the Number of Quantifiers in Real Analysis through Infinitesimals (Master Thesis, Mathematics Department, California Polytechnic State University, San Luis Obispo) | We construct the non-standard complex (and real) numbers using the ultrapower method in the spirit of Cauchy's construction of the real numbers. We show that the non-standard complex numbers are a non-archimedean, algebraically closed field, and that the non-standard real numbers are a totally ordered, real-closed, non-archimedean field. We explore the various types of non-standard numbers, and develop the non-standard completeness results (Saturation Principle, Supremum Completeness of Bounded Internal Sets, etc) for $\starr$. We give non-standard characterizations for such usual topological objects as open, closed, bounded, and compact sets in terms of monads. We also consider such traditional topics of real analysis as limits, continuity, uniform continuity, convergence, uniform convergence, etc. in a non-standard setting. In both topology and real analysis we reduce (and in some cases eliminate) the number of quantifiers in the non-standard setting. | \chapter{Introduction}
\section{Historical Discussion}
When calculus was discovered by Leibniz in the late seventeenth century, he invented the notation $dx$ to mean an infinitely small change in the variable $x$. As to the existence of an infinitely small quantity, Leibniz explained:
\begin{quote}
It will be sufficient if, when we speak of infinitely great (or more strictly unlimited), or of infinitely small quantities (i.e. the very least of those within our knowledge), it is understood that we mean quantities that are indefinitely great or indefinitely small, i.e., as great as you please, or as small as you please, so that the error that one may assign may be less than a certain assigned quantity... (Goldblatt \cite{rGoldblatt} p.6)
\end{quote}
When Euler developed infinite series for logarithmic, exponential, and trigonometric functions, he frequently used the ideas of arbitrarily small and arbitrarily large (Goldblatt \cite{rGoldblatt} p.8).
Though the ideas of infinitely large and small numbers were used to great success by Leibniz, Newton, and Euler, the larger mathematical community was skeptical of Leibniz's explanation given above. In the late nineteenth century the work of Dedekind, Cantor, Cauchy, Bolzano, and Weierstrass expunged infinitesimals from analysis and replaced them with the $\varepsilon$-$\delta$ formulations which are widely taught today.
In 1966, Abraham Robinson discovered non-standard analysis \cite{aRob66}, providing a firm foundation for the infinitesimals which were banished in the late nineteenth century. The version of non-standard analysis developed by Robinson relies heavily on formal logic. The ultrapower formulation (constructionist approach) was discovered soon after by Luxemburg \cite{wLuxNotes} (see also Stroyan and Luxemburg \cite{StroLux76}). For a more contemporary presentation of the ultrapower method we refer to Lindstr\o m \cite{tLin}.
Whereas the ultrapower method in Luxemburg \cite{wLuxNotes}, Lindstr\o m \cite{tLin}, and Goldblatt \cite{rGoldblatt} utilizes the natural numbers $\mathbb{N}$ as the index set, we shall use $\mathbb{R}_+$ as the index set. Note that another ultrapower non-standard model (with different index set and different ultrafilter) was used in Guy Berger's thesis \cite{gBerger05} for studying delta-like solutions of Hopf's equation.
It is our belief that the use of infinitesimals in analysis simplifies the presentation of key concepts such as limits (of usual functions and sequences), continuity, derivatives, and topological concepts such as open, closed, and compact sets in the usual topology on $\mathbb{R}$. Furthermore, we stress that the use of the non-commuting quantifiers $\forall$ and $\exists$ in standard analysis creates statements for foundational concepts which can be confusing to the beginner. We demonstrate that in the framework of non-standard analysis we are able to take the previous list of concepts and formulate them with fewer quantifiers (and sometimes none).
For additional reading we refer to Lindstr\o m \cite{tLin} and Davis \cite{mDavis}. For the reader interested in teaching calculus in the language of infinitesimals we refer to Keisler \cite{jKeisE} \cite{jKeisF} and to Todorov \cite{tdTod2000a}.
\section{Summary}
We take a constructive approach to the non-standard complex (and real) numbers. Our approach is similar in nature to the Cauchy construction of the real numbers as equivalence classes of fundamental sequences of rational numbers. Chapter \ref{C: Preliminaries} is an introduction to Non-Standard Analysis via the ultrapower construction. Section \ref{S: Filters and Ultrafilters} discusses the basic theory of filters and ultrafilters with the intention of specifying a specific ultrafilter with which to define an equivalence relation among sequences of complex numbers. In Section \ref{S: A Non-Standard Extension of C} we demonstrate that the ultrapower set modulo the equivalence relation is a field, which we call ${^{*}\mathc}$. In Sections \ref{S: Algebraic Properties} and \ref{S: Order} we show that ${^{*}\mathc}$ is an algebraically closed field, with ${^{*}\mathr}$ as a real-closed, totally ordered subfield.
In Section \ref{S: Trichotomy} we define and give examples of infinitesimal, finite, and unlimited elements of ${^{*}\mathc}$; thus demonstrating that both ${^{*}\mathc}$ and ${^{*}\mathr}$ are non-archimedean fields. We introduce the standard part mapping in Section \ref{S: SPM} to connect the non-standard ${^{*}\mathc}$ to the standard $\mathbb{C}$. It may then seem natural that the standard part mapping is the ring homomorphism which proves that the set of finite non-standard numbers modulo the infinitesimals is isomorphic to the complex numbers.
In Section \ref{S: NSE of Set} we generalize the method of Section \ref{S: A Non-Standard Extension of C} for any set. In a similar spirit of generalizing our methods, we develop internal sets in Section \ref{S: Internal Sets} from nets of subsets of $\mathbb{C}$ (indexed by $\mathbb{R}_+$), and we demonstrate that the work of Section \ref{S: NSE of Set} is a special case of the internal sets. Having defined internal sets, we then develop the completeness results for ${^{*}\mathr}$ in Section \ref{S: Completeness}. We first prove Dedekind Completeness, which is the non-standard analogue of the supremum completeness of $\mathbb{R}$. We discuss the Spilling Principles to address the infinitesimal/finite and finite/unlimited barriers. The Saturation Principle is proven in simplified form, and the Cantor Principle is given as a direct corollary.
Sections \ref{S: Transfer by Example} and \ref{S: Logic for Transfer} develop the necessary background for the Transfer Principle; these are the only sections in which we use formal logic. The Transfer Principle is a powerful theorem allowing us to literally transfer properly formed statements about $\mathbb{R}$ into statements about ${^{*}\mathr}$, and vice versa. The Transfer Principle also tells us where to place the asterisks in the formalization of traditional real analytic statements.
Chapter \ref{C: Topology} reviews open, closed, bounded, and compact sets in the context of the usual topology on $\mathbb{R}$. We introduce the monad, and think of it as a universally open infinitesimal interval. We proceed to develop the non-standard characterizations of the items in the list above. We emphasize that the non-standard characterizations reduce the number of quantifiers, and in some cases eliminate them completely (hence the title of the work: "Reduction of the number of quantifiers..."). For example, the standard definition of a compact set has two quantifiers (for every cover there is a finite subcover). The non-standard characterization of compactness is given in terms of monads and is free of quantifiers.
Chapter \ref{C: Analysis} is concerned with usual topics from real analysis in a non-standard setting. We review the standard definitions of limits, continuity, uniform continuity, sequences, uniform convergence, and derivatives. We then present and prove the non-standard characterizations, and again emphasize the reduction of quantifiers. For example, the standard definition of a limit has three quantifiers, whereas the non-standard characterization has only one quantifier. As another, more subtle, example, the standard definition of uniform continuity has four quantifiers, whereas the non-standard characterization has two quantifiers. Moreover, the non-standard setting gives a definition which conforms to what our intuition would expect uniform continuity to mean.
The examples cited above are only a few instances in which we may reduce the number of quantifiers.
\chapter{Introduction to Non-Standard Analysis}\label{C: Preliminaries}
We present aspects of the theory of filters and ultrafilters in order to construct the non-standard complex (and real) numbers via the ultrapower method. This process is then generalized to turn standard sets and functions into non-standard sets and functions. It turns out that our extension of any standard set is only a special case of an internal set. Knowing what an internal set is allows us to discuss the various completeness theorems on ${^{*}\mathr}$, some of which may come as a surprise. We end the chapter with the Transfer Principle, which is the most general statement of how to move from a standard statement to a non-standard statement, and vice versa.
\section{Filters and Ultrafilters}\label{S: Filters and Ultrafilters}
Let us begin with the basic theory of filters and ultrafilters. In the following, $I$ is an arbitrary infinite set and $\mathcal{P}(I)$ is the power set of $I$.
\begin{definition}[Filter and Free Filter]\label{D: Filter and Free Filter}\index{Filter}\index{Free Filter}
A non-empty set $\mathcal{F} \subseteq \mathcal{P}(I)$ is a \textbf{filter} on $I$ if for each $A, B \in \mathcal{P}(I)$ we have
\begin{quote}
\begin{description}
\item[(a)] $\varnothing \notin \mathcal{F}$.
\item[(b)] $A, B \in \mathcal{F}$ implies $A \cap B \in \mathcal{F}$.
\item[(c)] $A \in \mathcal{F}$ and $A \subseteq B \subseteq I$ implies $B \in \mathcal{F}$.
\end{description}
Further, a filter $\mathcal{F}$ is a \textbf{free filter} if:
\begin{description}
\item[(d)] $\cap_{A \in \mathcal{F}} A = \varnothing$
\end{description}
A filter $\mathcal{F}$ is called \textbf{countably incomplete} if:
\begin{description}
\item[(e)] There is a decreasing sequence of sets $I = I_0 \supset I_1 \supset I_2 \supset ...$ in $\mathcal{F}$ such that $\bigcap_{n=0}^{\infty} I_n = \varnothing$.
\end{description}
\end{quote}
\end{definition}
\begin{definition}[Ultrafilter]\label{D: Ultrafilter}\index{Ultrafilter}
A filter $\mathcal{U}$ on a set $I$ is an \textbf{ultrafilter} (or maximal filter) if there is no filter $\mathcal{F}$ on $I$ that properly contains $\mathcal{U}$.
\end{definition}
\begin{theorem}[Characterization of Ultrafilters]\label{T: Characterization of Ultrafilter}
Let $\mathcal{U}$ be a filter on $I$, then $\mathcal{U}$ is an ultrafilter on $I$ if and only if for every $A \subseteq I$, either $A \in \mathcal{U}$ or $I \setminus A \in \mathcal{U}$.
\end{theorem}
\begin{proof}
\begin{description}
\item[($\Rightarrow$)] Let $\mathcal{U}$ be an ultrafilter on $I$, and suppose to the contrary that for some fixed $A \subseteq I$ we have $A \notin \mathcal{U}$ and $I \setminus A \notin \mathcal{U}$. Define $\overline{\mathcal{U}} =: \{ X \in \mathcal{P}(I) : A \cup X \in \mathcal{U}\}$. It is painless to show that $\overline{\mathcal{U}}$ is a filter on $I$. Note, $\mathcal{U} \subseteq \overline{\mathcal{U}}$ by taking $B \in \mathcal{U}$ and using \textbf{(c)} from Definition \ref{D: Filter and Free Filter}. Also, $I \setminus A \in \overline{\mathcal{U}}$ since $A \cup (I \setminus A) = I \in \mathcal{U}$, while by assumption $I \setminus A \notin \mathcal{U}$. So $\mathcal{U}$ is a proper subset of $\overline{\mathcal{U}}$, which contradicts the assumption that $\mathcal{U}$ is a maximal filter.
\item[($\Leftarrow$)] Let $\mathcal{U}$ be a filter on $I$ such that for every $A \subseteq I$, either $A \in \mathcal{U}$ or $I \setminus A \in \mathcal{U}$, and suppose $\mathcal{U}$ is not an ultrafilter. Then there exists a filter $\mathcal{V}$ on $I$ properly containing $\mathcal{U}$. Let $A \in \mathcal{V} \setminus \mathcal{U}$. Since $A \notin \mathcal{U}$, our assumption gives $I \setminus A \in \mathcal{U} \subset \mathcal{V}$. But then both $A$ and $I \setminus A$ lie in $\mathcal{V}$, so by \textbf{(b)} of Definition \ref{D: Filter and Free Filter} we would have $A \cap (I \setminus A) = \varnothing \in \mathcal{V}$, contradicting \textbf{(a)}.
\end{description}
\end{proof}
\begin{corollary}[A Generalization]\label{C: Extension of Characterization}
Let $\mathcal{U}$ be an ultrafilter on $I$. If $\bigcup_{n=1}^{m} A_n \in \mathcal{U}$ for some $m \in \mathbb{N}$, then $A_n \in \mathcal{U}$ for \emph{at least} one $n$. Additionally, if the sets $A_n$ ($n = 1, 2, ..., m$) are mutually disjoint, then $A_n \in \mathcal{U}$ holds for \emph{exactly} one $n$.
\end{corollary}
\begin{proof}
For $m = 2$: suppose $A_1 \cup A_2 \in \mathcal{U}$ and $A_1 \notin \mathcal{U}$. By Theorem \ref{T: Characterization of Ultrafilter}, $I \setminus A_1 \in \mathcal{U}$, so $(A_1 \cup A_2) \cap (I \setminus A_1) \in \mathcal{U}$ by \textbf{(b)}; this set is contained in $A_2$, so $A_2 \in \mathcal{U}$ by \textbf{(c)}. Induction on $m$ gives the first statement. If the $A_n$ are mutually disjoint and $A_j, A_k \in \mathcal{U}$ for $j \not= k$, then $\varnothing = A_j \cap A_k \in \mathcal{U}$, a contradiction; so exactly one $A_n$ lies in $\mathcal{U}$.
\end{proof}
\begin{examples}$ $
\begin{quote}
\begin{description}
\item[(i)] Let $I = \mathbb{R}_+$ and let $i \in \mathbb{R}_+$. The set $\mathcal{F}_0 =: \{ A \in \mathcal{P}(\mathbb{R}_+) : i \in A \}$ is a filter which is not free. Note $\mathbb{R}_+ \in \mathcal{F}_0$, so $\mathcal{F}_0 \not= \varnothing$. Indeed, $\varnothing \not\in \mathcal{F}_0$ since $i \notin \varnothing$. Suppose $A, B \in \mathcal{F}_0$ so that $i \in A$ and $i \in B$. Then $i \in A \cap B$, therefore $A \cap B \in \mathcal{F}_0$. Now, let $A \in \mathcal{F}_0$ and suppose $A \subseteq B \subseteq \mathbb{R}_+$. Then $i \in A$, and since $A \subseteq B$, also $i \in B$, therefore $B \in \mathcal{F}_0$. Hence, $\mathcal{F}_0$ is a filter. To show that $\mathcal{F}_0$ is not free note that $i \in A$ for all $A \in \mathcal{F}_0$ and so $\{ i \} \subseteq \cap_{A \in \mathcal{F}_0} A \not= \varnothing$.
\item[(ii)] Let $I = \mathbb{N}$. The set $\mathcal{F}_{r} =: \{ A \in \mathcal{P}(\mathbb{N}) : (\mathbb{N} \setminus A)$ is finite $\}$ is the Fr\'echet filter, which is not an ultrafilter. Indeed, $\varnothing \not\in \mathcal{F}_{r}$, for if it were then $\mathbb{N} \setminus \varnothing = \mathbb{N}$ would be finite, which is a contradiction. Let $A, B \in \mathcal{F}_{r}$. Then $(\mathbb{N} \setminus A)$ and $(\mathbb{N} \setminus B)$ are finite. Therefore, $(\mathbb{N} \setminus A) \cup (\mathbb{N} \setminus B)$ is finite. By DeMorgan's Laws, $\mathbb{N} \setminus (A \cap B)$ is finite. Hence, $A \cap B \in \mathcal{F}_{r}$. Next, let $A \in \mathcal{F}_{r}$ and $A \subseteq B \subseteq \mathbb{N}$. Then $(\mathbb{N} \setminus A)$ is finite, and since $A \subseteq B$ we have $(\mathbb{N} \setminus B) \subseteq (\mathbb{N} \setminus A)$, so $(\mathbb{N} \setminus B)$ is finite. Hence, $B \in \mathcal{F}_{r}$. To see that $\mathcal{F}_{r}$ is not an ultrafilter, consider the set of even numbers: its complement, the set of odd numbers, is infinite, so the evens are not in $\mathcal{F}_r$. Likewise, the complement of the odds (the evens) is infinite, so the odds are not in $\mathcal{F}_r$ either. We have found a set for which neither it, nor its complement, is in the filter. Therefore, $\mathcal{F}_r$ is not an ultrafilter by Theorem \ref{T: Characterization of Ultrafilter}.
\end{description}
\end{quote}
\end{examples}
We now focus our attention on the existence of the ultrafilter. To that end, let $\mathcal{S}$ be a partially ordered set with $\leq$ a partial order. An element $m \in \mathcal{S}$ is a \textbf{maximal element} of $\mathcal{S}$ if there is no $s \in \mathcal{S}$ such that $m \lneqq s.$
\begin{lemma}[Zorn's Lemma]\label{L: Zorn's Lemma}\index{Zorn's Lemma}
If $\mathcal{S}$ is a partially ordered set such that every totally ordered subset $\mathcal{L}$ of $\mathcal{S}$ has an upper bound $b \in \mathcal{S}$, then $\mathcal{S}$ has maximal elements.
\end{lemma}
Recall that Zorn's Lemma is equivalent to the Axiom of Choice (over the remaining axioms of set theory) and thus cannot be proven from them; we accept it without proof.
\begin{theorem}[Existence of Ultrafilter on $I$]\label{T: Existence}
Let $I$ be an infinite set and $I_n \subseteq I$ such that $$I = I_0 \supset I_1 \supset I_2 \supset ... \textnormal{ and } \bigcap^{\infty}_{n=0} I_n = \varnothing.$$ Then there exists a free ultrafilter $\mathcal{U}$ on $I$ such that $I_n \in \mathcal{U}$ for all $n \in \mathbb{N}$.
\end{theorem}
\begin{proof}
Let $\mathcal{S}$ be the set of all free filters $\mathcal{F}$ on $I$ such that $I_n \in \mathcal{F}$ for all $n \in \mathbb{N}$. We order $\mathcal{S}$ by set inclusion, $\subseteq$. Note, $\mathcal{S} \not= \varnothing$ since $\mathcal{F}_0 = \{ A \in \mathcal{P}(I) : I_n \subseteq A$ for some $n \}$ is a free filter on $I$ and $I_n \in \mathcal{F}_0$ for all $n$. Let $\mathcal{L} \subseteq \mathcal{S}$ be a totally ordered subset of $\mathcal{S}$. So each $L \in \mathcal{L}$ is a free filter on $I$ with $I_n \in L$ for all $n$.
Define $\mathcal{L}^* = \cup_{L \in \mathcal{L}} L$. Then $\mathcal{L}^*$ is also a free filter on $I$ containing every $I_n$. Indeed, $\varnothing \notin \mathcal{L}^*$, since $\varnothing \in \mathcal{L}^*$ would imply $\varnothing \in L$ for some $L \in \mathcal{L}$, which is a contradiction. Next, let $A, B \in \mathcal{L}^*$. Then $A \in L_1$ and $B \in L_2$ for some $L_1, L_2 \in \mathcal{L}$. Without loss of generality, let $L_1 \subseteq L_2$, so $A \in L_2$ and $B \in L_2$. Therefore $A \cap B \in L_2$, so $A \cap B \in \mathcal{L}^*$. Lastly, let $A \in \mathcal{L}^*$ and $A \subseteq B \subseteq I$. Since $A \in \mathcal{L}^*$, we have $A \in L_1$ for some $L_1 \in \mathcal{L}$; since $L_1$ is a filter, $B \in L_1$, so $B \in \mathcal{L}^*$. Freeness follows from $\cap_{A \in \mathcal{L}^*} A \subseteq \bigcap_{n=0}^{\infty} I_n = \varnothing$, and $I_n \in \mathcal{L}^*$ for all $n$ since this holds for each $L \in \mathcal{L}$. Therefore $\mathcal{L}^* \in \mathcal{S}$, and $\mathcal{L}^*$ is an upper bound for $\mathcal{L}$, so Zorn's Lemma implies $\mathcal{S}$ has a maximal element. Let $\mathcal{U} \in \mathcal{S}$ be such a maximal element; then $\mathcal{U}$ is a free filter on $I$ with $I_n \in \mathcal{U}$ for all $n$. Finally, $\mathcal{U}$ is an ultrafilter: if some filter $\mathcal{V}$ on $I$ properly contained $\mathcal{U}$, then $\mathcal{V}$ would contain every $I_n$ and would be free (since $\cap_{A \in \mathcal{V}} A \subseteq \bigcap_{n=0}^{\infty} I_n = \varnothing$), so $\mathcal{V} \in \mathcal{S}$, contradicting the maximality of $\mathcal{U}$.
\end{proof}
\section{Probability Measure on $\mathcal{P}(\mathbb{R}_+)$}\label{S: Probability Measure}
We now fix $I = \mathbb{R}_+$, $I_n = (0, \frac{1}{n})$, and $\mathcal{U}$ to be an ultrafilter on $\mathbb{R}_+$ with $I_n \in \mathcal{U}$ for all $n \in \mathbb{N}$. We know such an ultrafilter exists by Theorem \ref{T: Existence}. The purpose of this excursion into probability measures is to justify our later use of the phrase "almost everywhere", which will ease our notational burden in some instances.
\begin{definition}[Probability Measure]\label{D: Probability Measure}\index{Probability Measure}
A function $p: \mathcal{P}(\mathbb{R}_+) \to \{0,1\}$ is a two-valued \textbf{probability measure} if it has the following properties:
\begin{quote}
\begin{description}
\item[(a)] The probability measure $p$ is finitely additive. That is, $p(A \cup B) = p(A) + p(B)$ for disjoint $A$ and $B$,
\item[(b)] We have $p(I_n) = 1$, where $I_n = \{ \varepsilon \in \mathbb{R}_+ : 0 < \varepsilon < \frac{1}{n}\}$, and
\item[(c)] For every finite subset $A$ of $\mathbb{R}_+$, $p(A) = 0$; in particular, $p(\varnothing) = 0$.
\end{description}
\end{quote}
\end{definition}
\begin{theorem}[Ultrafilter implies Probability Measure] \label{T: Ultrafilter implies Probability Measure}
Let $\mathcal{U}$ be an ultrafilter. Define the probability measure $p: \mathcal{P}(\mathbb{R}_+) \to \{0,1\}$ by
\begin{align}
p(A) =
\begin{cases}
0 &\text{if $A \notin \mathcal{U}$}\\
1 &\text{if $A \in \mathcal{U}$}\notag
\end{cases}
\end{align}
Then $p$ is a finitely additive measure which satisfies \emph{\textbf{(a)} - \textbf{(c)}} of Definition \ref{D: Probability Measure}.
\end{theorem}
The above theorem, together with its converse (which produces an ultrafilter, namely $\mathcal{U} = \{ A \in \mathcal{P}(\mathbb{R}_+) : p(A) = 1 \}$, from any probability measure as defined in Definition \ref{D: Probability Measure}), establishes a one-to-one correspondence between our fixed ultrafilter $\mathcal{U}$ and our finitely additive two-valued probability measure $p$. Given the existence of $\mathcal{U}$ on $\mathbb{R}_+$ from Theorem \ref{T: Existence}, we are guaranteed the existence of such a probability measure.
We now develop the meaning of "almost everywhere" in the language of predicates.
\begin{definition}[Almost Everywhere]\label{D: Predicate}\index{Almost Everywhere}
Let $P: \mathbb{R}_+ \to \{ \textnormal{true, false}\}$ be a predicate in one variable on $\mathbb{R}_+$. We say that $P$ holds \textbf{almost everywhere} (written a.e.) if $p (\{ \varepsilon \in \mathbb{R}_+ : P(\varepsilon) \textnormal{ is true} \} ) = 1$. Alternately $P$ holds a.e. if $\{ \varepsilon \in \mathbb{R}_+ : P(\varepsilon) \textnormal{ is true} \}\in \mathcal{U}$.
\end{definition}
Before presenting examples of predicates, we introduce a definition. For non-empty sets $A$ and $B$, the set of all functions from $A$ to $B$ is the \textbf{ultrapower set}, which is denoted by $B^A$. Let $S$ be a set. We call the elements of $S^{\mathbb{R}_+}$ \textbf{nets} in $S$, and we denote them by $(a_\varep) \in S^{\mathbb{R}_+}$, where $\varepsilon \in \mathbb{R}_+$.
\begin{examples}
The following are examples of predicates:
\begin{quote}
\begin{description}
\item[(i)] Let $(a_\varep), (b_\varep) \in \mathbb{C}^{\mathbb{R}_+}$. Let $P(\varepsilon)$ be the predicate "$a_\varep = b_\varep$".
\item[(ii)] Let $(a_\varep) \in \mathbb{C}^{\mathbb{R}_+}$ and $\mathcal{S} \subseteq \mathbb{C}$. Let $P(\varepsilon)$ be the predicate "$a_\varep \in \mathcal{S}$".
\item[(iii)] Let $(a_\varep) \in \mathbb{C}^{\mathbb{R}_+}$ and $(\mathcal{S}_\varep) \in \mathcal{P}(\mathbb{C})^{\mathbb{R}_+}$. Let $P(\varepsilon)$ be the predicate "$a_\varep \in \mathcal{S}_\varep$".
\item[(iv)] Let $(a_\varep), (b_\varep) \in \mathbb{R}^{\mathbb{R}_+}$. Let $P(\varepsilon)$ be the predicate "$a_\varep < b_\varep$".
\end{description}
\end{quote}
\end{examples}
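To see how such a predicate is actually tested against the fixed ultrafilter, here is a small worked check (included only as an illustration; it uses nothing beyond the fact that $I_n \in \mathcal{U}$ and the filter axioms).
\begin{example}
In \textbf{(iv)}, take the nets $a_\varep = \varepsilon$ and $b_\varep = \frac{1}{3}$ for all $\varepsilon \in \mathbb{R}_+$. The predicate "$a_\varep < b_\varep$" holds a.e., since $\{ \varepsilon \in \mathbb{R}_+ : \varepsilon < \frac{1}{3} \} = I_3 \in \mathcal{U}$. On the other hand, the predicate "$a_\varep \geq b_\varep$" does not hold a.e.: its truth set is disjoint from $I_3$, so if it were in $\mathcal{U}$, then \textbf{(b)} of Definition \ref{D: Filter and Free Filter} would give $\varnothing \in \mathcal{U}$, contradicting \textbf{(a)}.
\end{example}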
\section{A Non-Standard Extension of $\mathbb{C}$}\label{S: A Non-Standard Extension of C}
We now extend the standard complex numbers $\mathbb{C}$ to the non-standard complex numbers ${^{*}\mathc}$. This will be done by means of ultrapowers and the ultrafilter which was fixed in Section \ref{S: Probability Measure}.
It is clear that $\mathbb{C}^{\mathbb{R}_+}$, with the ring operations defined pointwise, is a commutative ring with unity, and that the set $J = \{ (a_\varep) \in \mathbb{C}^{\mathbb{R}_+} : a_\varep = 0 \textnormal{ a.e.} \}$ is an ideal of $\mathbb{C}^{\mathbb{R}_+}$. Recall from Definition \ref{D: Predicate} that this means $\{ \varepsilon \in \mathbb{R}_+ : a_\varep = 0\} \in \mathcal{U}$.
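For completeness, here is a quick check of the second claim (it uses only parts \textbf{(b)} and \textbf{(c)} of Definition \ref{D: Filter and Free Filter}): if $(a_\varep) \in J$ and $(c_\varep) \in \mathbb{C}^{\mathbb{R}_+}$, then
\begin{align}
\{ \varepsilon \in \mathbb{R}_+ : c_\varep \, a_\varep = 0 \} \supseteq \{ \varepsilon \in \mathbb{R}_+ : a_\varep = 0 \} \in \mathcal{U},\notag
\end{align}
so $(c_\varep a_\varep) \in J$; and for $(a_\varep), (b_\varep) \in J$ we have $\{ \varepsilon \in \mathbb{R}_+ : a_\varep + b_\varep = 0 \} \supseteq \{ \varepsilon : a_\varep = 0 \} \cap \{ \varepsilon : b_\varep = 0 \} \in \mathcal{U}$, so $(a_\varep + b_\varep) \in J$.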
\begin{definition}[Equivalence Relation]\label{D: Equivalence Relation}
For $(a_\varep), (b_\varep) \in \mathbb{C}^{\mathbb{R}_+}$, we say $(a_\varep) \sim (b_\varep)$ \iff $a_\varep = b_\varep$ a.e. Alternately (from Definition \ref{D: Predicate}), we say $(a_\varep) \sim (b_\varep)$ \iff $\{ \varepsilon \in \mathbb{R}_+ : a_\varep = b_\varep \} \in \mathcal{U}$.
\end{definition}
\begin{definition}[Equivalence Classes]\label{D: Equivalence Classes}
If $(a_\varep) \in \mathbb{C}^{\mathbb{R}_+}$, we denote by $\ensuremath{\langle} a_\varep \ensuremath{\rangle}$ the equivalence class of $(a_\varep)$. We denote the corresponding factor ring $\mathbb{C}^{\mathbb{R}_+} / \sim$ by ${^{*}\mathc}$.
\end{definition}
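As a quick illustration of these definitions (a worked example only, not needed later), note that two representatives may differ on a set which is not in the ultrafilter.
\begin{example}
Let $a_\varep = \varepsilon$ for all $\varepsilon \in \mathbb{R}_+$, and let $b_\varep = \varepsilon$ for $\varepsilon < 1$ and $b_\varep = 7$ for $\varepsilon \geq 1$. Then $\{ \varepsilon \in \mathbb{R}_+ : a_\varep = b_\varep \} \supseteq I_1 = (0,1) \in \mathcal{U}$, so by \textbf{(c)} of Definition \ref{D: Filter and Free Filter} we have $(a_\varep) \sim (b_\varep)$, and hence $\ensuremath{\langle} a_\varep \ensuremath{\rangle} = \ensuremath{\langle} b_\varep \ensuremath{\rangle}$ in ${^{*}\mathc}$.
\end{example}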
At the very least we would like ${^{*}\mathc}$ as defined above to be a field. Indeed, we will show it is. However, we will get so much more. Eventually, we show ${^{*}\mathc}$ is a non-archimedean, algebraically closed field which has $\mathbb{C}$ as an embedded subfield.
\begin{theorem}\label{T: Field}
If $J = \{ (a_\varep) \in \mathbb{C}^{\mathbb{R}_+} : a_\varep = 0$ a.e.$\}$, as before, then $J$ is a maximal ideal of $\mathbb{C}^{\mathbb{R}_+}$; therefore ${^{*}\mathc}$ is a field.
\end{theorem}
\begin{proof}
Suppose $K \subseteq \mathbb{C}^{\mathbb{R}_+}$ is an ideal such that $J \subsetneqq K \subseteq \mathbb{C}^{\mathbb{R}_+}$. Let $(a_\varep) \in K \setminus J$. That is, $A =: \{ \varepsilon \in \mathbb{R}_+ : a_\varep \not= 0 \} \in \mathcal{U}$. Define $(b_\varep) \in \mathbb{C}^{\mathbb{R}_+}$ by
\begin{align}
b_\varep =
\begin{cases}
\frac{1}{a_\varep} &\text{ if $\varepsilon \in A$}\\
1 &\text{if $\varepsilon \in \mathbb{R}_+ \setminus A$}\notag
\end{cases}
\end{align}
Let $B = \{ \varepsilon \in \mathbb{R}_+ : a_\varep \cdot b_\varep = 1\}$. We have $A \subseteq B$, since $\varepsilon \in A$ implies $b_\varep = \frac{1}{a_\varep}$ and $a_\varep \cdot b_\varep = 1$. By \textbf{(c)} of Definition \ref{D: Filter and Free Filter} we have $B \in \mathcal{U}$, so $a_\varep \cdot b_\varep = 1$ a.e. Hence $(a_\varep)(b_\varep) - (1) \in J \subseteq K$, while $(a_\varep)(b_\varep) \in K$ because $K$ is an ideal; therefore $(1) \in K$. It follows that $\mathbb{C}^{\mathbb{R}_+} \subseteq K$ and $K = \mathbb{C}^{\mathbb{R}_+}$. So $J$ is a maximal ideal and ${^{*}\mathc}$ is a field.
\end{proof}
By embedding $\mathbb{C}$ in ${^{*}\mathc}$ we may discuss standard elements of $\mathbb{C}$ as elements of ${^{*}\mathc}$ so as to compare standard to non-standard elements.
\begin{definition}\label{D: Embedding}\index{Embedding}
We \textbf{embed} $\mathbb{C}$ in ${^{*}\mathc}$, denoted $\mathbb{C} \hookrightarrow {^{*}\mathc}$, by $c \to \ensuremath{\langle} c_\varep \ensuremath{\rangle}$, with $c_\varep = c$ for all $\varepsilon \in \mathbb{R}_+$. We write $\ensuremath{\langle} c \ensuremath{\rangle}$ instead of $\ensuremath{\langle} c_\varep \ensuremath{\rangle}$, and $\mathbb{C} \subset {^{*}\mathc}$ rather than the more precise $\mathbb{C} \hookrightarrow {^{*}\mathc}$.
\end{definition}
\begin{lemma}\label{L: Proper Extension}
The non-standard extension ${^{*}\mathc}$ is a proper extension of $\mathbb{C}$. That is, ${^{*}\mathc} \setminus \mathbb{C} \not= \varnothing$.
\end{lemma}
\begin{proof}
Let $a_\varep = \varepsilon$. We demonstrate that $\ensuremath{\langle} \varepsilon \ensuremath{\rangle} \in {^{*}\mathc} \setminus \mathbb{C}$. Assume to the contrary that $\ensuremath{\langle} \varepsilon \ensuremath{\rangle} = \ensuremath{\langle} c \ensuremath{\rangle}$ for some $c \in \mathbb{C}$. Then $A = \{ \varepsilon \in \mathbb{R}_+ : \varepsilon = c\} \in \mathcal{U}$. Observe that $A \not= \varnothing$ so $c \in \mathbb{R}_+$ implying $A = \{ c \} \in \mathcal{U}$, a contradiction since $p(\{c\}) = 0$. Therefore $\ensuremath{\langle} \varepsilon \ensuremath{\rangle} \not= \ensuremath{\langle} c \ensuremath{\rangle}$ for any $c \in \mathbb{C}$. So $\ensuremath{\langle} a_\varep \ensuremath{\rangle} = \ensuremath{\langle} \varepsilon \ensuremath{\rangle} \notin \mathbb{C}$.
\end{proof}
\section{Algebraic Properties of ${^{*}\mathc}$ and ${^{*}\mathr}$}\label{S: Algebraic Properties}
We demonstrate that ${^{*}\mathc}$ is an algebraically closed field and that ${^{*}\mathr}$ is a real-closed field.
\begin{theorem}\label{T: Algebraically Closed}
The non-standard extension ${^{*}\mathc}$ is an algebraically closed field. That is, every polynomial of non-zero degree with coefficients in ${^{*}\mathc}$ has a root in ${^{*}\mathc}$.
\end{theorem}
\begin{proof}
Let $P \in {^{*}\mathc}[x]$ with $\deg P \geq 1$. In other words, let $P(x) = \sum_{n=0}^{m} \alpha_n x^n$, where $\alpha_n \in {^{*}\mathc}$ and $\alpha_m \not= 0$. Since $\alpha_n \in {^{*}\mathc}$, $\alpha_n = \ensuremath{\langle} a_{n\varepsilon} \ensuremath{\rangle}$ for some $(a_{n\varepsilon}) \in \mathbb{C}^{\mathbb{R}_+}$ ($n = 0, 1, ... , m$). Define $P_\varepsilon(x) := \sum_{n=0}^{m} a_{n\varepsilon} x^n$. For any fixed $\varepsilon \in \mathbb{R}_+$, $P_\varepsilon \in \mathbb{C}[x]$. Since $\alpha_m \not= 0$, the set $A =: \{ \varepsilon \in \mathbb{R}_+ : a_{m\varepsilon} \not= 0 \}$ belongs to $\mathcal{U}$ (its complement $\{ \varepsilon : a_{m\varepsilon} = 0 \}$ is not in $\mathcal{U}$, so Theorem \ref{T: Characterization of Ultrafilter} applies). For $\varepsilon \in A$ the polynomial $P_\varepsilon$ has degree $m \geq 1$, so since $\mathbb{C}$ is algebraically closed there exists $z_\varepsilon \in \mathbb{C}$ such that $P_\varepsilon(z_\varepsilon) = 0$; for $\varepsilon \notin A$ set $z_\varepsilon = 0$.
Now, define $z = \ensuremath{\langle} z_\varepsilon \ensuremath{\rangle} \in {^{*}\mathc}$. We have $\{ \varepsilon \in \mathbb{R}_+ : P_\varepsilon(z_\varepsilon) = 0 \} \supseteq A \in \mathcal{U}$, so this set is in $\mathcal{U}$ by \textbf{(c)} of Definition \ref{D: Filter and Free Filter}. Define the internal function $\ensuremath{\langle} P_\varepsilon \ensuremath{\rangle} \in {^{*}\mathc}[x]$, where $\ensuremath{\langle} P_\varepsilon \ensuremath{\rangle} : {^{*}\mathc} \to {^{*}\mathc}$ is given by $\ensuremath{\langle} P_\varepsilon \ensuremath{\rangle} ( \ensuremath{\langle} c_\varepsilon \ensuremath{\rangle} ) = \ensuremath{\langle} P_\varepsilon (c_\varepsilon) \ensuremath{\rangle}$. So $\ensuremath{\langle} P_\varepsilon \ensuremath{\rangle} ( \ensuremath{\langle} z_\varepsilon \ensuremath{\rangle} ) = \ensuremath{\langle} P_\varepsilon ( z_\varepsilon ) \ensuremath{\rangle} = \ensuremath{\langle} 0 \ensuremath{\rangle} = 0$.
We must now show that $\ensuremath{\langle} P_\varepsilon \ensuremath{\rangle} = P$. Let $x = \ensuremath{\langle} x_\varepsilon \ensuremath{\rangle} \in {^{*}\mathc}$ (arbitrarily). Then $\ensuremath{\langle} P_\varepsilon \ensuremath{\rangle}(x) = \ensuremath{\langle} P_\varepsilon \ensuremath{\rangle} (\ensuremath{\langle} x_\varepsilon \ensuremath{\rangle}) \overset{\text{def}}{=} \ensuremath{\langle} P_\varepsilon (x_\varepsilon) \ensuremath{\rangle} = \ensuremath{\langle} \sum_{n=0}^{m} a_{n\varepsilon} x_{\varepsilon}^{n} \ensuremath{\rangle} = \sum_{n=0}^{m} \ensuremath{\langle} a_{n\varepsilon} \ensuremath{\rangle} \ensuremath{\langle} x_{\varepsilon}^{n} \ensuremath{\rangle} = \sum_{n=0}^{m} \ensuremath{\langle} a_{n\varepsilon} \ensuremath{\rangle} \ensuremath{\langle} x_\varepsilon \ensuremath{\rangle}^{n} = \sum_{n=0}^{m} \alpha_n \ensuremath{\langle} x_\varepsilon \ensuremath{\rangle}^n = \sum_{n=0}^{m} \alpha_n x^n = P(x)$.
\end{proof}
We now take the short route to showing that ${^{*}\mathr}$ is a real-closed field.
\begin{definition}\label{D: Real Closed}
An arbitrary field $\mathbb{K}$ is \textbf{real closed} if
\begin{quote}
\begin{description}
\item[(a)] For all $a \in \mathbb{K}$ either $x^2 = a$ or $x^2 = -a$ is solvable in $\mathbb{K}$.
\item[(b)] Every polynomial of odd degree with coefficients in $\mathbb{K}$ has a root in $\mathbb{K}$.
\end{description}
\end{quote}
\end{definition}
Recall that each real closed field $\mathbb{K}$ is uniquely orderable by $x \geq 0$ \iff there exists $y\in \mathbb{K}$ such that $x = y^2$ (van der Waerden \cite{VanDerWaerden}, ch. 11).
\begin{theorem}[Artin-Schreier]
If $\mathbb{K}$ is a totally ordered field, then the following are equivalent:
\begin{quote}
\begin{description}
\item[(a)] $\mathbb{K}$ is a real-closed field.
\item[(b)] $\mathbb{K}(i)$ is an algebraically closed field.
\end{description}
\end{quote}
\end{theorem}
We refer, again, to \cite{VanDerWaerden}.
\begin{lemma}\label{L: Adjoin i}
The non-standard extension ${^{*}\mathc} = {^{*}\mathr}(i)$ in the sense that each $z \in {^{*}\mathc}$ can be uniquely represented as $z = a + bi$ where $a, b \in {^{*}\mathr}$.
\end{lemma}
\begin{proof}
Let $z = \ensuremath{\langle} z_\varepsilon \ensuremath{\rangle}$ for some $(z_\varepsilon) \in \mathbb{C}^{\mathbb{R}_+}$. For each fixed $\varepsilon \in \mathbb{R}_+$, write $z_\varep = a_\varep + b_\varep i$ with $a_\varep, b_\varep \in \mathbb{R}$. Setting $a = \ensuremath{\langle} a_\varep \ensuremath{\rangle}$ and $b = \ensuremath{\langle} b_\varep \ensuremath{\rangle}$, we have $a, b \in {^{*}\mathr}$ and $z = a + bi$. Uniqueness follows since $a + bi = 0$ with $a, b \in {^{*}\mathr}$ forces $a_\varep = b_\varep = 0$ a.e., that is, $a = b = 0$.
\end{proof}
\begin{corollary}\label{C: Real Closed}
The non-standard extension ${^{*}\mathr}$ is a real-closed field.
\end{corollary}
\begin{proof}
From Theorem \ref{T: Algebraically Closed}, Lemma \ref{L: Adjoin i}, and in light of the Artin-Schreier Theorem \cite{VanDerWaerden}, ${^{*}\mathr}$ is a real-closed field.
\end{proof}
\section{Order Relation in ${^{*}\mathr}$}\label{S: Order}
The question may arise as to whether the richness of ${^{*}\mathr}$ destroys the total ordering that we hope to inherit from $\mathbb{R}$. This turns out not to be the case.
\begin{definition}
Let $\ensuremath{\langle} a_\varep \ensuremath{\rangle}, \ensuremath{\langle} b_\varep \ensuremath{\rangle} \in {^{*}\mathr}$. Then $\ensuremath{\langle} a_\varep \ensuremath{\rangle} < \ensuremath{\langle} b_\varep \ensuremath{\rangle}$ if $a_\varep < b_\varep$ a.e. Alternatively, $\ensuremath{\langle} a_\varep \ensuremath{\rangle} < \ensuremath{\langle} b_\varep \ensuremath{\rangle}$ if $\{ \varepsilon : a_\varep < b_\varep \} \in \mathcal{U}$.
\end{definition}
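Here is a small worked comparison (illustrative only, using specific representative nets).
\begin{example}
Let $a_\varep = \varepsilon$ and $b_\varep = \varepsilon^2$. Since $\varepsilon^2 < \varepsilon$ for all $0 < \varepsilon < 1$, we have $\{ \varepsilon \in \mathbb{R}_+ : b_\varep < a_\varep \} \supseteq I_1 \in \mathcal{U}$, and therefore $\ensuremath{\langle} \varepsilon^2 \ensuremath{\rangle} < \ensuremath{\langle} \varepsilon \ensuremath{\rangle}$ in ${^{*}\mathr}$.
\end{example}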
\begin{theorem}[Totally Ordered Field]\label{T: Totally Ordered Field}$ $
\begin{quote}
\begin{description}
\item[(a)] The non-standard real numbers ${^{*}\mathr}$ are a totally ordered field.
\item[(b)] The real numbers $\mathbb{R}$ are a totally ordered subfield of ${^{*}\mathr}$.
\end{description}
\end{quote}
\end{theorem}
\begin{proof}
We leave it to the reader to check that $\leq$ is an order relation on ${^{*}\mathr}$ (i.e., reflexive, anti-symmetric, and transitive).
\begin{description}
\item[(a)] Let $\ensuremath{\langle} a_\varep \ensuremath{\rangle}, \ensuremath{\langle} b_\varep \ensuremath{\rangle} \in {^{*}\mathr}$. We must show that $\ensuremath{\langle} a_\varep \ensuremath{\rangle} < \ensuremath{\langle} b_\varep \ensuremath{\rangle}$, $\ensuremath{\langle} a_\varep \ensuremath{\rangle} = \ensuremath{\langle} b_\varep \ensuremath{\rangle}$, or $\ensuremath{\langle} a_\varep \ensuremath{\rangle} > \ensuremath{\langle} b_\varep \ensuremath{\rangle}$. Indeed, $\{ \varepsilon \in \mathbb{R}_+ : a_\varep < b_\varep \} \cup \{ \varepsilon \in \mathbb{R}_+ : a_\varep = b_\varep \} \cup \{ \varepsilon \in \mathbb{R}_+ : a_\varep > b_\varep \} = \mathbb{R}_+$, since $\mathbb{R}$ is totally ordered. By Corollary \ref{C: Extension of Characterization}, exactly one of these disjoint sets belongs to the ultrafilter $\mathcal{U}$, which gives the desired trichotomy.
\item[(b)] We have to show that $0 \notin {^{*}\mathr}_+$ (which is evidently true), and that ${^{*}\mathr}_+$ is closed under addition and multiplication. We shall leave addition to the reader and prove closure under multiplication. Let $\ensuremath{\langle} a_\varep \ensuremath{\rangle}, \ensuremath{\langle} b_\varep \ensuremath{\rangle} \in {^{*}\mathr}_+$, so that $\ensuremath{\langle} a_\varep \ensuremath{\rangle} > 0$ and $\ensuremath{\langle} b_\varep \ensuremath{\rangle} > 0$. Then $A =: \{ \varepsilon \in \mathbb{R}_+ : a_\varep > 0\} \in \mathcal{U}$ and $B =: \{ \varepsilon \in \mathbb{R}_+ : b_\varep > 0\} \in \mathcal{U}$. Considering $C =: \{ \varepsilon \in \mathbb{R}_+ : a_\varep b_\varep > 0\}$, let $\varepsilon \in A \cap B$; then $a_\varep > 0$ and $b_\varep > 0$, so $a_\varep b_\varep > 0$ (in $\mathbb{R}$), implying $\varepsilon \in C$. Therefore, $A \cap B \subseteq C$, and by \textbf{(c)} of Definition \ref{D: Filter and Free Filter}, $C \in \mathcal{U}$. Finally, $\mathbb{R}$ is a totally ordered subfield of ${^{*}\mathr}$ because the embedding $c \to \ensuremath{\langle} c \ensuremath{\rangle}$ of Definition \ref{D: Embedding} preserves sums, products, and the order.
\end{description}
\end{proof}
\section{Non-Standard Numbers}\label{S: Trichotomy}
We classify the non-standard complex numbers as finite or infinitely large, and we single out the infinitesimals among the finite numbers. We then explore the properties of and among these sets. Looking forward, we view $\mathbb{Q} \subset {^{*}\mathc}$ (as an embedding in the spirit of Definition \ref{D: Embedding}) so that we may apply the order properties from Section \ref{S: Order}.
\begin{definition}[Classification]\label{D: Trichotomy}\index{Finite Non-Standard Number}\index{Infinitesimal}\index{Infinitely Large Non-Standard Number}
Let $z \in {^{*}\mathc}$.
\begin{quote}
\begin{description}
\item[(a)] If there exists an $n \in \mathbb{N}$ such that $|z| \leq n$ then $z$ is \textbf{finite}. We denote by $\mathcal{F}({^{*}\mathc})$ the set of finite numbers.
\item[(b)] If $|z| < \frac{1}{n}$ for all $n \in \mathbb{N}$, then $z$ is \textbf{infinitesimal}. We denote by $\mathcal{I}({^{*}\mathc})$ the set of infinitesimal numbers.
\item[(c)] If $n < |z|$ for all $n \in \mathbb{N}$, then $z$ is \textbf{infinitely large}. We denote by $\mathcal{L}({^{*}\mathc})$ the set of infinitely large numbers.
\end{description}
\end{quote}
\end{definition}
It is important to note that $\mathcal{F}({^{*}\mathc})$ and $\mathcal{L}({^{*}\mathc})$ are disjoint, as are $\mathcal{L}({^{*}\mathc})$ and $\mathbb{C}$. Furthermore, the finite numbers combined with the infinitely large numbers give all of ${^{*}\mathc}$; indeed, if $z$ is not finite, then $|z| > n$ for every $n \in \mathbb{N}$ by the total ordering of ${^{*}\mathr}$. That is, $\mathcal{F}({^{*}\mathc}) \cup \mathcal{L}({^{*}\mathc}) = {^{*}\mathc}$. We also have $\mathbb{C} \cap \mathcal{I}({^{*}\mathc}) = \{0\}$ and $\mathbb{R} \cap \mathcal{I}({^{*}\mathr}) = \{ 0\}$, since $0$ is the only standard number whose modulus is less than $\frac{1}{n}$ for every $n \in \mathbb{N}$.
\begin{lemma}\label{L: Finite Integral Domain}
The set of finite non-standard numbers $\mathcal{F}({^{*}\mathc})$ is an integral domain.
\end{lemma}
\begin{proof}
It is a straightforward proof by definition to show that $\scf(\starc)$ is a subring of ${^{*}\mathc}$. The reader may wonder whether $\scf(\starc)$ has zero divisors, since $\mathbb{C}^{\mathbb{R}_+}$ is a ring with zero divisors and ${^{*}\mathc}$ is simply $\mathbb{C}^{\mathbb{R}_+} / \sim$ (as in Section \ref{S: A Non-Standard Extension of C}).
Indeed, if $x, y \in \mathcal{F}({^{*}\mathc}) \subseteq {^{*}\mathc}$ and $xy = 0$, then either $x = 0$ or $y = 0$, since ${^{*}\mathc}$ is a field. Hence, $\mathcal{F}({^{*}\mathc})$ has no zero divisors and is an integral domain.
\end{proof}
\begin{lemma}\label{L: Maximal Convex}
The set of infinitesimals $\mathcal{I}({^{*}\mathc})$ is a convex maximal ideal in $\mathcal{F}({^{*}\mathc})$, in the sense that $\mathcal{I}({^{*}\mathc})$ is a maximal ideal in $\mathcal{F}({^{*}\mathc})$ such that whenever $y \in \mathcal{I}({^{*}\mathc})$ and $|x| \leq |y|$, then $x \in \mathcal{I}({^{*}\mathc})$.
\end{lemma}
\begin{proof}
First we show that $\mathcal{I}({^{*}\mathc})$ is an ideal of $\mathcal{F}({^{*}\mathc})$. Clearly, $\sci(\starc) \subseteq \scf(\starc)$. Let $x, y \in \sci(\starc)$. Then for all $n \in \mathbb{N}$, $|x| < \frac{1}{n}$ and $|y| < \frac{1}{n}$. Then $|x + y| \leq |x| + |y| < \frac{2}{n}$ for all $n \in \mathbb{N}$. Therefore, $x + y \in \sci(\starc)$. Next, let $x \in \scf(\starc)$ and $y \in \sci(\starc)$. Then there exists $n_1 \in \mathbb{N}$ such that $|x| \leq n_1$, and $|y| < \frac{1}{n}$ for all $n \in \mathbb{N}$. Hence, $|xy| < \frac{n_1}{n}$ for all $n \in \mathbb{N}$. Therefore, $xy \in \sci(\starc)$. So $\sci(\starc)$ is an ideal of $\scf(\starc)$.
Next we show that $\sci(\starc)$ is maximal in $\scf(\starc)$. Suppose $K \subseteq \scf(\starc)$ is an ideal so that $\sci(\starc) \subsetneqq K \subseteq \scf(\starc)$. So there exists $x \in K \setminus \mathcal{I}({^{*}\mathc})$. That is, there exists $n_1 \in \mathbb{N}$ such that $|x| \geq \frac{1}{n_1}$. Hence, $\frac{1}{|x|} \leq n_1$ and so $\frac{1}{x} \in \mathcal{F}({^{*}\mathc})$, which implies $x \frac{1}{x} = 1 \in K$ since $K$ is an ideal. Therefore $K = \scf(\starc)$ and $\sci(\starc)$ is a maximal ideal in $\scf(\starc)$.
The final item to prove is the convexity of the maximal ideal $\sci(\starc)$, and this is straightforward. Let $y \in \sci(\starc)$ with $|x| \leq |y|$. Then for all $n \in \mathbb{N}$, $|y| < \frac{1}{n}$, so $|x| \leq |y| < \frac{1}{n}$, putting $x \in \sci(\starc)$.
\end{proof}
It is now possible to imagine the factor ring $\mathcal{F}({^{*}\mathc}) / \mathcal{I}({^{*}\mathc})$, but we wait for the next section to discuss it in any detail.
\begin{definition}[Infinitesimal Relation]\label{D: Infinitely Close}\index{Infinitesimal Relation}
The symbol $\approx$ denotes the infinitesimal relation. We write $z \approx 0$ \iff $z \in \mathcal{I}({^{*}\mathc})$. Similarly, $a \approx b$ \iff $a - b \approx 0$.
\end{definition}
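For instance (an immediate consequence of the definition together with Lemma \ref{L: Maximal Convex}), any two infinitesimals are infinitely close to each other:
\begin{example}
If $h, k \in \mathcal{I}({^{*}\mathc})$, then $h - k \in \mathcal{I}({^{*}\mathc})$, so $h \approx k$ and $h \approx 0$. Also, for any non-zero infinitesimal $h$ we have $5 + h \approx 5$, although $5 + h \not= 5$.
\end{example}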
We now explore some of the properties of the sets of non-standard numbers.
\begin{corollary}
The infinitesimal relation $\approx$ is an equivalence relation on ${^{*}\mathc}$ that preserves addition in ${^{*}\mathc}$ and multiplication by a scalar in $\mathcal{F}({^{*}\mathc})$. That is,
\begin{quote}
\begin{description}
\item[(a)] For all $x, y, z, t \in {^{*}\mathc}$, if $x \approx y$ and $z \approx t$, then $x + z \approx y + t$.
\item[(b)] For all $x, y \in {^{*}\mathc}$ and for all $\lambda \in \mathcal{F}({^{*}\mathc})$, if $x \approx y$, then $\lambda x \approx \lambda y$.
\end{description}
\end{quote}
Additionally, $\approx$ preserves all ring operations in $\mathcal{F}({^{*}\mathc})$. That is to say,
\begin{quote}
\begin{description}
\item[(c)] For all $x, y, z, t \in \mathcal{F}({^{*}\mathc})$, if $x \approx y$ and $z \approx t$, then $xz \approx yt$.
\end{description}
\end{quote}
\end{corollary}
\begin{proof}
Parts \textbf{(a)} and \textbf{(b)} stem from Definition \ref{D: Infinitely Close}; specifically that $a \approx b$ \iff $a-b\approx 0$. Part \textbf{(c)} stems from the fact that $\mathcal{I}({^{*}\mathc})$ is an ideal in $\mathcal{F}({^{*}\mathc})$.
\end{proof}
The proofs of both of the following lemmas follow directly from Definition \ref{D: Trichotomy}.
\begin{lemma}
If $z \in {^{*}\mathc}$ is infinitesimal, finite, or infinitely large, then so is $|z|$, respectively.
\end{lemma}
\begin{lemma}\label{L: Reciprocal}
Let $z \in {^{*}\mathc}$ such that $z \not= 0$. Then $z \in \mathcal{L}({^{*}\mathc})$ \iff $\frac{1}{z} \in \mathcal{I}({^{*}\mathc})$.
\end{lemma}
\begin{example}[Canonical Infinitesimal]\label{E: Canonical Infinitesimal}\index{Canonical Infinitesimal}
Let $\rho = \ensuremath{\langle} \varepsilon \ensuremath{\rangle}$ for $\varepsilon \in \mathbb{R}_+$. Then $\rho > 0$, $\rho \approx 0$, and $\rho$ is called the canonical infinitesimal. Recall that we fixed $\mathcal{U}$ in Section \ref{S: Probability Measure} to be an ultrafilter such that $I_n = (0, \frac{1}{n}) \in \mathcal{U}$ for all $n \in \mathbb{N}$. We need $\ensuremath{\langle} 0 \ensuremath{\rangle} < \ensuremath{\langle} \varepsilon \ensuremath{\rangle} < \ensuremath{\langle} \frac{1}{n} \ensuremath{\rangle}$, where we think of $\ensuremath{\langle} 0 \ensuremath{\rangle}$ and $\ensuremath{\langle} \frac{1}{n} \ensuremath{\rangle}$ as elements of $\mathbb{Q}$ embedded in ${^{*}\mathc}$ as in Definition \ref{D: Embedding}. Since $I_n = \{ \varepsilon \in \mathbb{R}_+ : 0 < \varepsilon < \frac{1}{n}\} \in \mathcal{U}$ for any $n \in \mathbb{N}$, the desired string of inequalities is satisfied, thus $\rho \in \mathcal{I}({^{*}\mathc})$.
\end{example}
\begin{example}[More Infinitesimals]
Given Example \ref{E: Canonical Infinitesimal} and the fact that $\mathcal{I}({^{*}\mathc})$ is an ideal, we immediately know that $\rho^2, \rho^3, ..., \rho^n, ...$ are infinitesimals. Also, since ${^{*}\mathr}$ is a real-closed field (Corollary \ref{C: Real Closed}), the solutions to $x^2 = \rho$ for $x \geq 0$ exist and are unique, and similarly for higher roots. Therefore, $\sqrt{\rho}, \sqrt[3]{\rho}, ..., \sqrt[n]{\rho}, ...$ are infinitesimals: if $\sqrt[n]{\rho} \geq \frac{1}{m}$ for some $m \in \mathbb{N}$, then $\rho \geq \frac{1}{m^n}$, contradicting $\rho \approx 0$.
\end{example}
\begin{example}[Finite, Non-Standard Number]
It is possible to have a finite number $z \in {^{*}\mathc}$ which is neither infinitesimal, nor standard (in the sense that it is in $\mathbb{C}$). Indeed, $5 + \rho$, where $\rho$ is the canonical infinitesimal from above, is a finite, non-standard number.
A plethora of examples abound. For instance, a rational expression such as $\frac{4 + \rho^2}{3 + \rho}$ is a finite, non-standard number. Indeed, $\frac{4 + \rho^2}{3 + \rho} \approx \frac{4}{3}$.
\end{example}
\begin{example}[Canonical Infinitely Large Number]\label{E: Canonical Infinitely Large}\index{Canonical Infinitely Large Number}
Let $(\nu_\varepsilon) \in \mathbb{N}^{\mathbb{R}_{+}}$. Define
\begin{align}
\nu_\varepsilon =
\begin{cases}
n &\text{if $\varepsilon \in I_n \setminus I_{n+1}$}\\
1 &\text{if $\varepsilon \geq 1$}\notag
\end{cases}
\end{align}
for $n = 1, 2, 3, ...$. We claim that $\ensuremath{\langle} \nu_\varepsilon \ensuremath{\rangle}$ is an infinitely large positive number. We must show that for all $m \in \mathbb{N}$, $\{ \varepsilon \in \mathbb{R}_+ : \nu_\varepsilon > m \} \in \mathcal{U}$. Recall that $\mathcal{U}$ was fixed to be an ultrafilter such that $I_n = (0, \frac{1}{n}) \in \mathcal{U}$ for all $n \in \mathbb{N}$. Let $m \in \mathbb{N}$ be arbitrary and fixed, and suppose $\varep_{0} \in I_{m+1}$. Then $\varep_{0} \in I_n \setminus I_{n+1}$ for exactly one $n \in \mathbb{N}$, and since $\frac{1}{n+1} \leq \varep_{0} < \frac{1}{m+1}$ we must have $n > m$. Therefore, $\nu_{\varep_{0}} = n > m$, and so $I_{m+1} \subseteq \{ \varepsilon \in \mathbb{R}_+ : \nu_\varepsilon > m\}$. Since $I_{m+1} \in \mathcal{U}$, Definition \ref{D: Filter and Free Filter} part \textbf{(c)} gives $\{\varepsilon \in \mathbb{R}_+ : \nu_\varepsilon > m\} \in \mathcal{U}$. As $m$ was arbitrary, this holds for all $m \in \mathbb{N}$. Therefore $\ensuremath{\langle} \nu_\varepsilon \ensuremath{\rangle}$ is infinitely large.
\end{example}
\begin{example}[Infinitely Large Numbers]
Similarly, $\nu^2, \nu^3, ...$ are also infinitely large numbers in ${^{*}\mathn} \setminus \mathbb{N}$. In fact, ${^{*}\mathn} \setminus \mathbb{N}$ consists of infinitely large numbers only and $\mathcal{F}({^{*}\mathn}) = \mathbb{N}$. That is to say, there are no infinitesimals in the non-standard natural numbers ${^{*}\mathn}$.
\end{example}
\begin{example}[More Infinitely Large Numbers]
In view of Lemma \ref{L: Reciprocal} and Example \ref{E: Canonical Infinitesimal}, the following are infinitely large numbers: $$\frac{1}{\rho}, \frac{1}{\rho^2}, ..., \frac{1}{\rho^n}, ..., \frac{1}{\sqrt{\rho}}, \frac{1}{\sqrt[3]{\rho}}, ..., \frac{1}{\sqrt[n]{\rho}}, ...$$
\end{example}
\begin{examples}In light of the examples in Section \ref{S: Functions} we have additional examples of infinitesimals and infinitely large numbers.
\begin{quote}
\begin{description}
\item[(i)] Let $\ln \rho =: \ensuremath{\langle} \ln \varepsilon \ensuremath{\rangle}$. Then $\ln \rho$ is an infinitely large negative number.
\item[(ii)] Let $\sin \rho =: \ensuremath{\langle} \sin \varepsilon \ensuremath{\rangle}$. Then $\sin \rho$ is a positive infinitesimal.
\item[(iii)] Let $e^{\frac{1}{\rho}} =: \ensuremath{\langle} e^{\frac{1}{\varepsilon}} \ensuremath{\rangle}$. Then $e^{\frac{1}{\rho}}$ is an infinitely large number.
\end{description}
\end{quote}
\end{examples}
\section{Standard-Part Mapping}\label{S: SPM}
In Section \ref{S: Trichotomy} we used the infinitesimal relation $\approx$ to relate non-standard numbers in ${^{*}\mathc}$ to standard numbers in $\mathbb{C}$. The Standard-Part Mapping ($\ensuremath{{{\rm{st}}}}$-mapping) is another way to express this relationship. Moreover it gives us a field isomorphism between the factor ring $\mathcal{F}({^{*}\mathc}) / \mathcal{I}({^{*}\mathc})$ and $\mathbb{C}$.
\begin{definition}[Standard-Part Mapping]\label{D: SPM}\index{Standard Part Mapping}
We begin with $\mathcal{F}({^{*}\mathr})$ as the domain:
\begin{quote}
\begin{description}
\item[(a)] $\ensuremath{{{\rm{st}}}} : \mathcal{F}({^{*}\mathr}) \to \mathbb{R}$ is defined to be $\ensuremath{{{\rm{st}}}} (x) = \sup \{ r \in \mathbb{R} : r \leq x \}$. For $x \in \mathcal{F}({^{*}\mathr})$, we call $\ensuremath{{{\rm{st}}}}(x)$ the \textbf{standard part} of $x$.
\end{description}
\end{quote}
We may extend the above to $\mathcal{F}({^{*}\mathc})$:
\begin{quote}
\begin{description}
\item[(b)] $\ensuremath{{{\rm{st}}}} : \mathcal{F}({^{*}\mathc}) \to \mathbb{C}$ is defined to be $\ensuremath{{{\rm{st}}}} (x + iy) = \ensuremath{{{\rm{st}}}}(x) + i\cdot \ensuremath{{{\rm{st}}}}(y)$.
\end{description}
\end{quote}
We may also extend to $\mathcal{L}({^{*}\mathr})$:
\begin{quote}
\begin{description}
\item[(c)] When $x \in \mathcal{L}({^{*}\mathr})$ we may extend the $\ensuremath{{{\rm{st}}}}$-mapping to $\ensuremath{{{\rm{st}}}} : {^{*}\mathr} \to \mathbb{R} \cup \{\pm\infty\}$ by $\ensuremath{{{\rm{st}}}}(x) = \pm \infty$ for $x > 0$ in $\mathcal{L}({^{*}\mathr})$ or $x < 0$ in $\mathcal{L}({^{*}\mathr})$, respectively.
\end{description}
\end{quote}
\end{definition}
We now discuss the asymptotic expansion of the finite numbers.
\begin{theorem}\label{T: Representation}
Each $z \in \mathcal{F}({^{*}\mathc})$ has asymptotic expansion as $z = \ensuremath{{{\rm{st}}}}(z) + dz$ where $dz \approx 0$.
\end{theorem}
\begin{proof}
We consider the case where $z \in \mathcal{F}({^{*}\mathr})$. Define $dz =: z - \ensuremath{{{\rm{st}}}}(z)$, and observe that $\ensuremath{{{\rm{st}}}}(z)$ exists by virtue of the completeness of $\mathbb{R}$: since $z$ is finite, the set $\{ r \in \mathbb{R} : r \leq z \}$ is non-empty and bounded above.
We now show that $dz \approx 0$. Stated in another way, we show $z \approx \ensuremath{{{\rm{st}}}}(z)$.
Consider $S_z = \{ r \in \mathbb{R} : r \leq z\}$ and recall that $\ensuremath{{{\rm{st}}}}(z) = \sup S_z$. Suppose on the contrary that $\ensuremath{{{\rm{st}}}}(z) \not\approx z$; then there exists $n \in \mathbb{N}$ such that $| \ensuremath{{{\rm{st}}}} (z) - z | \geq \frac{1}{n}$.
\begin{description}
\item[Case 1] If $z > \ensuremath{{{\rm{st}}}}(z)$, then $|\ensuremath{{{\rm{st}}}}(z) - z| = z - \ensuremath{{{\rm{st}}}}(z)$ and $z - \ensuremath{{{\rm{st}}}}(z) \geq \frac{1}{n}$ which implies $z \geq \ensuremath{{{\rm{st}}}}(z) + \frac{1}{n}$. Thus, $\ensuremath{{{\rm{st}}}}(z) + \frac{1}{n} \in S_z$ so $\ensuremath{{{\rm{st}}}}(z) + \frac{1}{n} \leq \ensuremath{{{\rm{st}}}}(z)$ since $\ensuremath{{{\rm{st}}}}(z) = \sup S_z$. Therefore, $\frac{1}{n} \leq 0$, a contradiction.
\item[Case 2] If $z < \ensuremath{{{\rm{st}}}}(z)$ then $|\ensuremath{{{\rm{st}}}}(z) - z| = \ensuremath{{{\rm{st}}}}(z) - z \geq \frac{1}{n}$ which implies $\ensuremath{{{\rm{st}}}}(z) - \frac{1}{n} \geq z$. Since $\ensuremath{{{\rm{st}}}}(z)$ is the least upper bound for $S_z$ the set $\{ r \in S_z : \ensuremath{{{\rm{st}}}}(z) - \frac{1}{n} < r < \ensuremath{{{\rm{st}}}}(z) \}$ is non-empty. So there exists an $r_0 \in S_z$ such that $\ensuremath{{{\rm{st}}}}(z) - \frac{1}{n} < r_0 < \ensuremath{{{\rm{st}}}}(z)$ hence $r_0 > z$, a contradiction since $r_0 \in S_z$.
\end{description}
Therefore $z \approx \ensuremath{{{\rm{st}}}}(z)$ and $dz \approx 0$, so $z = \ensuremath{{{\rm{st}}}}(z) + dz$. The case of $z \in \mathcal{F}({^{*}\mathc})$ follows from $\ensuremath{{{\rm{st}}}}(x + iy) = \ensuremath{{{\rm{st}}}}(x) + i\ensuremath{{{\rm{st}}}}(y)$.
\end{proof}
\begin{corollary}
Every $z \in \mathcal{F}({^{*}\mathc})$ may be \emph{uniquely} represented as $z = c + h$ where $c \in \mathbb{C}$ and $h \in \mathcal{I}({^{*}\mathc})$.
\end{corollary}
\begin{proof}
The existence of such an expansion follows directly from Theorem \ref{T: Representation} with $c = \ensuremath{{{\rm{st}}}}(z)$ and $h = dz$. As to the uniqueness, let $z = c_1 + h_1$ for some $c_1 \in \mathbb{C}$ and $h_1 \in \mathcal{I}({^{*}\mathc})$. Then $c + h = c_1 + h_1$, implying $c - c_1 = h_1 - h$, hence $c - c_1 \approx 0$. Thus $c = c_1$, since $0$ is the only infinitesimal in $\mathbb{C}$, and consequently $h = h_1$.
\end{proof}
Perhaps surprisingly, the complex/real numbers may be presented as a factor ring of sets of non-standard complex/real numbers, respectively.
\begin{theorem}
$\mathcal{F}({^{*}\mathc}) / \mathcal{I}({^{*}\mathc})$ is \emph{field} isomorphic to $\mathbb{C}$ under $[z] \to \ensuremath{{{\rm{st}}}}(z)$. Similarly, $\mathcal{F}({^{*}\mathr}) / \mathcal{I}({^{*}\mathr})$ is isomorphic to $\mathbb{R}$, as an ordered field, under $[x] \to \ensuremath{{{\rm{st}}}}(x)$.
\end{theorem}
\begin{proof}
We shall focus on the real case, leaving the complex case to the reader. Recall that $\mathcal{F}({^{*}\mathr})$ is an integral domain and $\mathcal{I}({^{*}\mathr})$ is a maximal convex ideal in $\mathcal{F}({^{*}\mathr})$ (Lemmas \ref{L: Finite Integral Domain} and \ref{L: Maximal Convex}), thus the factor ring is a field. Recall, $[x] = \{ y \in \mathcal{F}({^{*}\mathr}) : x - y \in \mathcal{I}({^{*}\mathr})\}$.
We need to demonstrate that this function is well-defined, bijective, and a field homomorphism. Indeed, let $[x] = [y]$, then $\ensuremath{{{\rm{st}}}}(x) = \ensuremath{{{\rm{st}}}}(y)$ since $x - y \approx 0$, thus the function is well-defined. Next, suppose $\ensuremath{{{\rm{st}}}}(x) = \ensuremath{{{\rm{st}}}}(y)$. Then $x \approx y$, therefore $[x] = [y]$, and the function is injective. Let $r \in \mathbb{R}$. Then by viewing $r$ as an embedded element of $\mathcal{F}({^{*}\mathr}) \subseteq {^{*}\mathr}$, we have $\ensuremath{{{\rm{st}}}}(r) = r$, hence the function is surjective.
Next we show that the function is a field homomorphism. For addition (with the help of Theorem \ref{T: Representation}) we have $x + y = \ensuremath{{{\rm{st}}}}(x) + \ensuremath{{{\rm{st}}}}(y) + dx + dy$ and $\ensuremath{{{\rm{st}}}}(x + y) = \ensuremath{{{\rm{st}}}}(\ensuremath{{{\rm{st}}}}(x) + \ensuremath{{{\rm{st}}}}(y)) = \ensuremath{{{\rm{st}}}}(x) + \ensuremath{{{\rm{st}}}}(y)$ since $\ensuremath{{{\rm{st}}}}(x), \ensuremath{{{\rm{st}}}}(y) \in \mathbb{R}$. For multiplication we have $xy = \ensuremath{{{\rm{st}}}}(x)\ensuremath{{{\rm{st}}}}(y) + \ensuremath{{{\rm{st}}}}(x)dy + \ensuremath{{{\rm{st}}}}(y)dx + dxdy$ and $\ensuremath{{{\rm{st}}}}(xy) = \ensuremath{{{\rm{st}}}}(x)\ensuremath{{{\rm{st}}}}(y)$. For division, assume $y \not\approx 0$ so that $\frac{x}{y} = z$ exists in $\mathcal{F}({^{*}\mathr})$. Then $x = zy$ hence $\ensuremath{{{\rm{st}}}}(x) = \ensuremath{{{\rm{st}}}}(z)\ensuremath{{{\rm{st}}}}(y)$.
As to the preservation of the order relation, for $x_1 < x_2$ we have $\ensuremath{{{\rm{st}}}}(x_1) \leq \ensuremath{{{\rm{st}}}}(x_2)$ where the equality holds whenever $x_1 \approx x_2$.
Therefore, the function is a well-defined, bijective, field homomorphism from $\mathcal{F}({^{*}\mathr}) / \mathcal{I}({^{*}\mathr}) \to \mathbb{R}$, which preserves the order relation.
\end{proof}
\begin{examples} Having established the $\ensuremath{{{\rm{st}}}}$-mapping as a field isomorphism we apply it.
\begin{quote}
\begin{description}
\item[(i)] $\ensuremath{{{\rm{st}}}} (5 + \rho) = 5$, just as $5 + \rho \approx 5$.
\item[(ii)] $\ensuremath{{{\rm{st}}}} \left (\frac{7 + \rho^5}{8 + \sqrt{\rho}} \right ) = \frac{\ensuremath{{{\rm{st}}}}(7 + \rho^5)}{\ensuremath{{{\rm{st}}}}(8 + \sqrt{\rho})} = \frac{7}{8}$.
\item[(iii)]
\begin{align}
\ensuremath{{{\rm{st}}}} \left ( \frac{\sqrt{1 + dx} - 1}{dx} \right ) &= \ensuremath{{{\rm{st}}}} \left ( \frac{(\sqrt{1 + dx} - 1)(\sqrt{1 + dx} + 1)}{dx (\sqrt{1 + dx} + 1)} \right ) = \ensuremath{{{\rm{st}}}} \left (\frac{1 + dx - 1}{dx(\sqrt{1 + dx} + 1)} \right)\notag\\
&= \ensuremath{{{\rm{st}}}} \left (\frac{1}{(\sqrt{1 + dx} + 1)} \right) = \frac{\ensuremath{{{\rm{st}}}}(1)}{\ensuremath{{{\rm{st}}}}(\sqrt{1 + dx} + 1)}\notag\\
&= \frac{1}{\ensuremath{{{\rm{st}}}}(\sqrt{1 + dx}) + \ensuremath{{{\rm{st}}}}(1)} = \frac{1}{1 + 1} = \frac{1}{2}\notag
\end{align}
\item[(iv)] Let $dx > 0$. Then, $\ensuremath{{{\rm{st}}}} \left ( \frac{1}{dx^2 + dx} \right) = \ensuremath{{{\rm{st}}}} \left ( \frac{1}{dx(dx + 1)} \right) = \ensuremath{{{\rm{st}}}} \left( \frac{1}{dx} \right) \ensuremath{{{\rm{st}}}} \left ( \frac{1}{dx + 1} \right) = \infty \cdot 1 = \infty$.
\end{description}
\end{quote}
\end{examples}
For a method of teaching calculus based on the standard part mapping we refer to Keisler \cite{jKeisE} and Todorov \cite{tdTod2000a}.
\section{Non-Standard Extension of a Standard Set}\label{S: NSE of Set}
We generalize the process of Section \ref{S: A Non-Standard Extension of C} to any set.
\begin{definition}\label{D: Extension of Set}\index{Non-Standard Extension of a Set}
Let $S \subseteq \mathbb{C}$. Then the \textbf{non-standard extension} ${^{*}S}$ of $S$ is given by ${^{*}S} = \{ \ensuremath{\langle} s_\varep \ensuremath{\rangle} \in {^{*}\mathc} : \{\varepsilon \in \mathbb{R}_+ : s_\varep \in S\} \in \mathcal{U} \}$, where $(s_\varep) \in \mathbb{C}^{\mathbb{R}_+}$ is a net in $\mathbb{C}$.
\end{definition}
From the above definition we can see that $s_\varep \in S$ quite often, but not necessarily for all $\varepsilon \in \mathbb{R}_+$. It just so happens that we can select some representative, say $b_\varep$, such that $b_\varep \in S$ for all $\varepsilon \in \mathbb{R}_+$.
\begin{lemma}[Regular Representatives]\label{L: Regular Representatives}
Let $S \subseteq \mathbb{C}$. Then for each $\ensuremath{\langle} s_\varep \ensuremath{\rangle} \in {^{*}S}$ there exists a net $(x_\varep) \in \mathbb{C}^{\mathbb{R}_+}$ such that $\ensuremath{\langle} s_\varep \ensuremath{\rangle} = \ensuremath{\langle} x_\varep \ensuremath{\rangle}$ and $x_\varep \in S$ for all $\varepsilon \in \mathbb{R}_+$. We say $(x_\varep)$ is a \textbf{regular representative} of $\ensuremath{\langle} s_\varep \ensuremath{\rangle}$ relative to $S$.
\end{lemma}
\begin{proof}
If $\ensuremath{\langle} s_\varep \ensuremath{\rangle} \in {^{*}S}$ then $A =: \{ \varepsilon \in \mathbb{R}_+ : s_\varep \in S\} \in \mathcal{U}$. Let $(x_\varep) \in \mathbb{C}^{\mathbb{R}_+}$ be the net defined by
\begin{align}
x_\varep =
\begin{cases}
s_\varep &\text{if $\varepsilon \in A$}\\
s \in S \textnormal{ arbitrarily }&\text{if $\varepsilon \in \mathbb{R}_+ \setminus A$}\notag
\end{cases}
\end{align}
Then we have $\ensuremath{\langle} s_\varep \ensuremath{\rangle} = \ensuremath{\langle} x_\varep \ensuremath{\rangle}$, and $x_\varep \in S$ for all $\varepsilon \in \mathbb{R}_+$.
\end{proof}
\begin{examples}$ $
\begin{quote}
\begin{description}
\item[(i)] ${^{*}\mathr}$ is the non-standard extension of $\mathbb{R}$ with elements $\ensuremath{\langle} a_\varep \ensuremath{\rangle} \in {^{*}\mathr}$ if $a_\varep \in \mathbb{R}$ a.e.
\item[(ii)] ${^{*}\mathn}$ is the non-standard extension of $\mathbb{N}$ with elements $\ensuremath{\langle} a_\varep \ensuremath{\rangle} \in {^{*}\mathn}$ if $a_\varep \in \mathbb{N}$ a.e.
\item[(iii)] ${^{*}\rplus}$ is the non-standard extension of $\mathbb{R}_+$ with elements $\ensuremath{\langle} a_\varep \ensuremath{\rangle} \in {^{*}\rplus}$ if $a_\varep \in \mathbb{R}_+$ a.e.
\end{description}
\end{quote}
\end{examples}
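One more worked example (illustrative only), showing via Corollary \ref{C: Extension of Characterization} that extending a finite set produces nothing new.
\begin{example}
Let $S = \{0, 1\} \subseteq \mathbb{C}$ and let $\ensuremath{\langle} a_\varep \ensuremath{\rangle} \in {^{*}S}$, so that $\{ \varepsilon \in \mathbb{R}_+ : a_\varep \in S \} = \{ \varepsilon : a_\varep = 0 \} \cup \{ \varepsilon : a_\varep = 1 \} \in \mathcal{U}$. By Corollary \ref{C: Extension of Characterization}, one of the two sets on the right belongs to $\mathcal{U}$, so $\ensuremath{\langle} a_\varep \ensuremath{\rangle} = 0$ or $\ensuremath{\langle} a_\varep \ensuremath{\rangle} = 1$. Hence ${^{*}S} = S$, and the same argument shows ${^{*}S} = S$ for every finite set $S$.
\end{example}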
\begin{theorem}[Boolean Properties]\label{T: Boolean}
Let $A, B \subseteq \mathbb{C}$. Then the following are properties of the non-standard extensions ${^*A}$ and ${^*B}$:
\begin{quote}
\begin{description}
\item[(a)] ${^*\varnothing} = \varnothing$
\item[(b)] ${^*(A \cup B)} = {^{*}A} \cup {^{*}B}$
\item[(c)] ${^*(A \cap B)} = {^{*}A} \cap {^{*}B}$
\item[(d)] ${^*(A \setminus B)} = {^{*}A} \setminus {^{*}B}$
\item[(e)] $A \subseteq B$ \iff ${^{*}A} \subseteq {^{*}B}$
\end{description}
\end{quote}
\end{theorem}
\begin{proof}
We shall prove \textbf{(e)}, leaving the rest to the reader.
\begin{description}
\item[($\Rightarrow$)] Let $A \subseteq B$, and let $\ensuremath{\langle} z_\varep \ensuremath{\rangle} \in {^{*}A}$, so that $A_0 =: \{ \varepsilon \in \mathbb{R}_+ : z_\varep \in A\} \in \mathcal{U}$. Since $A \subseteq B$, $z_\varep \in A$ implies $z_\varep \in B$, so $A_0 \subseteq B_0 =: \{ \varepsilon \in \mathbb{R}_+ : z_\varep \in B\}$. By \textbf{(c)} of Definition \ref{D: Filter and Free Filter}, $B_0 \in \mathcal{U}$, hence $\ensuremath{\langle} z_\varep \ensuremath{\rangle} \in {^{*}B}$. Therefore ${^{*}A} \subseteq {^{*}B}$.
\item[($\Leftarrow$)] Suppose ${^{*}A} \subseteq {^{*}B}$, and suppose to the contrary that $A \not\subseteq B$. Then there exists some $a \in A$ such that $a \not\in B$. Taking the constant net $a_\varep = a$ for all $\varepsilon \in \mathbb{R}_+$, we have $\ensuremath{\langle} a_\varep \ensuremath{\rangle} \in {^{*}A} \subseteq {^{*}B}$, so $\{ \varepsilon \in \mathbb{R}_+ : a_\varep \in B \} \in \mathcal{U}$. But this set is empty, since $a \notin B$; a contradiction.
\end{description}
\end{proof}
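For the interested reader, here is one possible argument for part \textbf{(c)}; the remaining parts follow similar lines. Let $\ensuremath{\langle} z_\varep \ensuremath{\rangle} \in {^{*}\mathc}$. Then $\ensuremath{\langle} z_\varep \ensuremath{\rangle} \in {^*(A \cap B)}$ \iff $\{ \varepsilon \in \mathbb{R}_+ : z_\varep \in A \cap B \} = \{ \varepsilon : z_\varep \in A \} \cap \{ \varepsilon : z_\varep \in B \} \in \mathcal{U}$. If this intersection lies in $\mathcal{U}$, then both $\{ \varepsilon : z_\varep \in A \}$ and $\{ \varepsilon : z_\varep \in B \}$ lie in $\mathcal{U}$ by \textbf{(c)} of Definition \ref{D: Filter and Free Filter}; conversely, if both lie in $\mathcal{U}$, so does their intersection by \textbf{(b)}. Hence $\ensuremath{\langle} z_\varep \ensuremath{\rangle} \in {^*(A \cap B)}$ \iff $\ensuremath{\langle} z_\varep \ensuremath{\rangle} \in {^{*}A} \cap {^{*}B}$.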
\section{Internal Sets}\label{S: Internal Sets}
As it happens, our previous construction of using a net of points in $\mathbb{R}$ to create a single point of $^*\mathbb{R}$ can work for other objects. In this section we apply the same process to a net of subsets of $\mathbb{C}$ (indexed by $\mathbb{R}_+$). It should be noted that the non-standard extensions of standard sets from the previous section are a special case of the more general internal sets.
\begin{definition}\label{D: Internal}\index{Internal Set}
Let $(\mathcal{S}_\varep) \in \mathcal{P}(\mathbb{C})^{\mathbb{R}_+}$, and define $\ensuremath{\langle} \mathcal{S}_\varep \ensuremath{\rangle} =: \{ \ensuremath{\langle} z_\varep \ensuremath{\rangle} \in {^{*}\mathc} : z_\varep \in \mathcal{S}_\varep$ a.e.$\}$. Sets of the form $\ensuremath{\langle} \mathcal{S}_\varep \ensuremath{\rangle}$ are \textbf{internal sets}. The internal set $\ensuremath{\langle} \mathcal{S}_\varep \ensuremath{\rangle}$ is \textbf{generated} by the net $(\mathcal{S}_\varep)$. The set of all internal subsets of ${^{*}\mathc}$ is denoted ${\star\mathcal{P}(\mathbb{C})}$. If $S$ is not internal, then it is \textbf{external}. The set of external sets is denoted $\mathcal{P}({^{*}\mathc}) \setminus \star\mathcal{P}(\mathbb{C})$.
\end{definition}
\begin{example}
The standard sets $\mathbb{N}, \mathbb{N}_0, \mathbb{Q}, \mathbb{R}, \mathbb{R}_+, \mathbb{C}$ are all external. We wait until the Dedekind Completeness result (Section \ref{S: Completeness}) to prove this.
\end{example}
\begin{lemma}
Let $S \subseteq \mathbb{C}$. Then ${^{*}S}$ is an internal set generated by the constant net $S_\varepsilon = S$ for all $\varepsilon \in \mathbb{R}_+$. That is to say, ${^{*}S} = \ensuremath{\langle} S \ensuremath{\rangle}$.
\end{lemma}
\begin{proof}
By Definition \ref{D: Extension of Set}, ${^{*}S} = \{ \ensuremath{\langle} z_\varep \ensuremath{\rangle} \in {^{*}\mathc} : z_\varep \in S \textnormal{ a.e.}\}$, which is precisely the internal set generated by the constant net $S_\varepsilon = S$ (Definition \ref{D: Internal}).
\end{proof}
So the non-standard extension of a set is a special case of an internal set. We may think of internal sets as a generalization of the process of creating a non-standard extension.
\begin{example}[Infinitesimal Interval]
Let $(0, \rho) = \{ x \in {^{*}\mathr} : 0 < x < \rho \}$. Take as our net of real subsets $S_\varepsilon = (0, \varepsilon) = \{ x \in \mathbb{R} : 0 < x < \varepsilon\}$, where $\rho = \ensuremath{\langle} \varepsilon \ensuremath{\rangle}$. Then $\ensuremath{\langle} S_\varepsilon \ensuremath{\rangle} = (0, \rho)$. Since $\rho$ is a positive infinitesimal, $(0, \rho)$ contains no standard real numbers, so it cannot be the non-standard extension ${^{*}S}$ of any non-empty standard set $S$ (recall that $S \subseteq {^{*}S}$); thus it is a proper internal set.
\end{example}
\begin{example}[Intervals in ${^{*}\mathr}$]
More generally, let $\alpha, \beta \in {^{*}\mathr}$ with $\alpha < \beta$. With the intervals $(\alpha, \beta), [\alpha, \beta], [\alpha, \beta), (\alpha, \beta]$ in ${^{*}\mathr}$ defined in the usual sense, each is an internal set.
\end{example}
\begin{example}
We briefly depart from the non-standard real numbers to discuss subsets of the non-standard natural numbers. Recall that $\nu = \ensuremath{\langle} \alpha_\varepsilon \ensuremath{\rangle}$ is infinitely large from Example \ref{E: Canonical Infinitely Large}. Let $\Omega = \{ 1, 2, 3, ... , \nu\}$, which is to say, $\Omega = \{n \in {^{*}\mathn} : n \leq \nu\}$. We claim $\Omega$ is internal: if $\Omega_\varepsilon = \{ n \in \mathbb{N} : n \leq \alpha_\varepsilon\}$, then $\ensuremath{\langle} \Omega_\varepsilon \ensuremath{\rangle} = \Omega$.
Indeed, let $\ensuremath{\langle} a_\varepsilon \ensuremath{\rangle} \in \Omega$. Then $a_\varepsilon \in \mathbb{N}$ a.e. and $a_\varepsilon \leq \alpha_\varepsilon$ a.e. Hence $a_\varepsilon \in \Omega_\varepsilon$ a.e., which implies $\ensuremath{\langle} a_\varepsilon \ensuremath{\rangle} \in \ensuremath{\langle} \Omega_\varepsilon \ensuremath{\rangle}$.
Conversely, if $\ensuremath{\langle} a_\varepsilon \ensuremath{\rangle} \in \ensuremath{\langle} \Omega_\varepsilon \ensuremath{\rangle}$, then $a_\varepsilon \in \Omega_\varepsilon$ a.e., so $a_\varepsilon \in \mathbb{N}$ a.e. and $a_\varepsilon \leq \alpha_\varepsilon$ a.e., which implies $\ensuremath{\langle} a_\varepsilon \ensuremath{\rangle} \in {^{*}\mathn}$ and $\ensuremath{\langle} a_\varepsilon \ensuremath{\rangle} \leq \nu$. That is, $\ensuremath{\langle} a_\varepsilon \ensuremath{\rangle} \in \Omega$. Sets of the form $\Omega$ are called \textbf{hyperfinite}.
\end{example}
\begin{example}
Let $r \in \mathbb{R}$ and $\nu \in {^{*}\mathn}$. Defining ${^*B(r, \frac{1}{\nu}}) =: \{ x \in {^{*}\mathr} : r - \frac{1}{\nu} < x < r + \frac{1}{\nu} \}$, we have that ${^*B(r, \frac{1}{\nu}})$ is an internal set. Indeed, $\nu = \ensuremath{\langle} \nu_\varepsilon \ensuremath{\rangle}$ and ${^*B(r, \frac{1}{\nu}}) = \ensuremath{\langle} B(r, \frac{1}{\nu_\varepsilon}) \ensuremath{\rangle}$.
\end{example}
\section{Completeness of ${^{*}\mathr}$}\label{S: Completeness}
Our focus turns to the completeness of ${^{*}\mathr}$. First, we discuss \textbf{Dedekind Completeness}, which is the non-standard analogue of the Supremum Principle in $\mathbb{R}$. We apply Dedekind Completeness to discuss the \textbf{Spilling Principles}. Next we present two versions of the \textbf{Saturation Principle}: one for nested open intervals in ${^{*}\mathr}$ and a general one for families of internal sets; as a direct corollary of the open interval version of the Saturation Principle we obtain the \textbf{Cantor Principle} in ${^{*}\mathr}$.
\begin{lemma}\label{L: Order Completeness Lemma}
Let $\ensuremath{\langle} A_{\varep} \ensuremath{\rangle} \subseteq {^{*}\mathr}$ be an internal set that is bounded from above by $\ensuremath{\langle} b_\varep \ensuremath{\rangle} \in {^{*}\mathr}$. That is, for all $\alpha \in \ensuremath{\langle} A_{\varep} \ensuremath{\rangle}$, $\alpha \leq \ensuremath{\langle} b_\varep \ensuremath{\rangle}$. Then, $J =: \{ \varepsilon \in \mathbb{R}_+ :$ For all $x \in A_{\varep}$, $x \leq b_\varep\} \in \mathcal{U}$.
\end{lemma}
\begin{proof}
Suppose to the contrary that $J = \{ \varepsilon \in \mathbb{R}_+ :$ for all $x \in A_{\varep}$, $x \leq b_\varep\} \notin \mathcal{U}$. Since $\mathcal{U}$ is an ultrafilter, $\mathbb{R}_+ \setminus J =: \{ \varepsilon \in \mathbb{R}_+ :$ there exists $x \in A_{\varep}$, such that $x > b_\varep\} \in \mathcal{U}$. Let
\begin{equation}\label{E: X Epsilon}
X_{\varep} = \{ x \in A_{\varep} : x > b_\varep\}
\end{equation}
and observe that $K =: \{ \varepsilon \in \mathbb{R}_+ : X_{\varep} \not= \varnothing\} \in \mathcal{U}$. We define $(a_\varep) \in \mathbb{R}^{\mathbb{R}_+}$ by $a_\varep \in X_{\varep}$ if $\varepsilon \in K$ and by $a_\varep = 1$ if $\varepsilon \in \mathbb{R}_+ \setminus K$ (such a definition requires the Axiom of Choice). Let $\alpha = \ensuremath{\langle} a_\varep \ensuremath{\rangle}$; then $a_\varep \in A_{\varep}$ a.e. and $a_\varep > b_\varep$ a.e., and so by the definition of $X_{\varep}$ in (\ref{E: X Epsilon}) we have $\alpha \in \ensuremath{\langle} A_{\varep} \ensuremath{\rangle}$ and $\alpha > \ensuremath{\langle} b_\varep \ensuremath{\rangle}$, contradicting the assumption that $\ensuremath{\langle} A_{\varep} \ensuremath{\rangle}$ is bounded from above by $\ensuremath{\langle} b_\varep \ensuremath{\rangle}$.
\end{proof}
\begin{theorem}[Dedekind (Order) Completeness in ${^{*}\mathr}$]\label{T: Order Completeness}\index{Supremum Completeness}
If a non-empty internal subset $\mathcal{A}$ of ${^{*}\mathr}$ is bounded from above, then $\sup \mathcal{A}$ in ${^{*}\mathr}$ exists.
\end{theorem}
\begin{proof}
By assumption there exists $\beta \in {^{*}\mathr}$ such that for all $\alpha \in \mathcal{A}$, $\alpha \leq \beta$. We have $\mathcal{A} = \ensuremath{\langle} A_{\varep} \ensuremath{\rangle}$ and $\beta = \ensuremath{\langle} b_\varep \ensuremath{\rangle}$ for some $(A_{\varep}) \in \mathcal{P}(\mathbb{R})^{\mathbb{R}_+}$ and $(b_\varep) \in \mathbb{R}^{\mathbb{R}_+}$, respectively. From Lemma \ref{L: Order Completeness Lemma} we have $J =: \{ \varepsilon \in \mathbb{R}_+ :$ for all $x \in A_{\varep}$, $x \leq b_\varep\} \in \mathcal{U}$. Define $(s_\varep) \in \mathbb{R}^{\mathbb{R}_+}$ by
\begin{align}
s_\varep =
\begin{cases}
\sup (A_{\varep}) &\text{if $\varepsilon \in J$ and $A_{\varep} \not= \varnothing$}\\
1 &\text{otherwise}\notag
\end{cases}
\end{align}
We now show that $\ensuremath{\langle} s_\varep \ensuremath{\rangle}$ is an upper bound of $\mathcal{A}$. Let $\ensuremath{\langle} a_\varep \ensuremath{\rangle} \in \mathcal{A}$ and denote $L =: \{ \varepsilon \in \mathbb{R}_+ : a_\varep \in A_{\varep} \}$. We have $L \in \mathcal{U}$ hence $J \cap L \in \mathcal{U}$ (\textbf{(b)} from Definition \ref{D: Filter and Free Filter}). Thus $\{ \varepsilon \in \mathbb{R}_+ : a_\varep \leq s_\varep \} \in \mathcal{U}$ since $J \cap L \subseteq \{ \varepsilon \in \mathbb{R}_+ : a_\varep \leq s_\varep \}$ by the definition of $(s_\varep)$.
Lastly, we show that $\ensuremath{\langle} s_\varep \ensuremath{\rangle}$ is the \emph{least} upper bound. Let $\gamma = \ensuremath{\langle} c_\varep \ensuremath{\rangle} \in {^{*}\mathr}$ with $\gamma < \ensuremath{\langle} s_\varep \ensuremath{\rangle}$; we must find an element of $\mathcal{A}$ exceeding $\gamma$. Since $\mathcal{A} \not= \varnothing$, fix $\ensuremath{\langle} a_\varep \ensuremath{\rangle} \in \mathcal{A}$ and let $L =: \{ \varepsilon \in \mathbb{R}_+ : a_\varep \in A_{\varep} \} \in \mathcal{U}$ as before. The set $M =: \{ \varepsilon \in \mathbb{R}_+ : c_\varep < s_\varep \}$ is in $\mathcal{U}$ since $\gamma < \ensuremath{\langle} s_\varep \ensuremath{\rangle}$, hence $M \cap J \cap L \in \mathcal{U}$. For $\varepsilon \in M \cap J \cap L$ we have $A_{\varep} \not= \varnothing$ and $c_\varep < s_\varep = \sup (A_{\varep})$, so $A_{\varep}$ contains an element greater than $c_\varep$. Define $(x_\varep) \in \mathbb{R}^{\mathbb{R}_+}$ (using the Axiom of Choice) by choosing such an element $x_\varep \in A_{\varep}$ with $x_\varep > c_\varep$ for $\varepsilon \in M \cap J \cap L$, and $x_\varep = 1$ otherwise. Then $\ensuremath{\langle} x_\varep \ensuremath{\rangle} \in \mathcal{A}$ and $\ensuremath{\langle} x_\varep \ensuremath{\rangle} > \gamma$, so $\gamma$ is not an upper bound of $\mathcal{A}$. Therefore $\sup \mathcal{A} = \ensuremath{\langle} s_\varep \ensuremath{\rangle}$ exists in ${^{*}\mathr}$.
\end{proof}
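We illustrate the theorem on a concrete internal set.
\begin{example}
Consider the internal set $(0, \rho) = \ensuremath{\langle} (0, \varepsilon) \ensuremath{\rangle}$ from Section \ref{S: Internal Sets}, where $\rho = \ensuremath{\langle} \varepsilon \ensuremath{\rangle}$. It is non-empty and bounded from above, for example by $\rho$ itself. Following the proof above with $\ensuremath{\langle} b_\varep \ensuremath{\rangle} = \rho$ we get $s_\varep = \sup (0, \varepsilon) = \varepsilon$, so $\sup (0, \rho) = \ensuremath{\langle} \varepsilon \ensuremath{\rangle} = \rho$. One can also check this directly: $\rho$ is an upper bound of $(0, \rho)$, and if $\gamma < \rho$, then $\frac{\rho}{2}$ (when $\gamma \leq 0$) or $\frac{\gamma + \rho}{2}$ (when $\gamma > 0$) is an element of $(0, \rho)$ exceeding $\gamma$. Note that the supremum of an internal set need not belong to the set.
\end{example}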
\begin{corollary}
The standard sets $\mathbb{N}, \mathbb{N}_0, \mathbb{Q}, \mathbb{R}, \mathbb{R}_+, \mathbb{C}$ are all external sets.
\end{corollary}
\begin{proof}
We shall only prove that $\mathbb{R}$ is external. Suppose on the contrary that it is internal. Every real number is finite, so $\mathbb{R}$ is bounded from above in ${^{*}\mathr}$ by any infinitely large positive element of ${^{*}\mathr}$. By Order Completeness (Theorem \ref{T: Order Completeness}), $\sup \mathbb{R}$ exists in ${^{*}\mathr}$; being an upper bound for $\mathbb{R}$, it is infinitely large. But then $\sup \mathbb{R} - 1$ is also infinitely large, hence also an upper bound for $\mathbb{R}$, contradicting the fact that $\sup \mathbb{R}$ is the \emph{least} upper bound. Therefore, $\mathbb{R}$ is external.
\end{proof}
\begin{corollary}[Spilling Principles]\label{C: Spilling Principles}\index{Spilling Principles}
If $S \subseteq {^{*}\mathr}$ is an internal set, then:
\begin{quote}
\begin{description}
\item[(a)] \textbf{Overflow of $\mathcal{F}({^{*}\mathr})$:} If $S$ contains arbitrarily large finite positive numbers, then $S$ contains arbitrarily small infinitely large positive numbers.
\item[(b)] \textbf{Underflow of $\mathcal{F}({^{*}\mathr})$:} If $S$ contains arbitrarily small finite non-infinitesimal numbers, then $S$ contains arbitrarily large positive infinitesimals.
\item[(c)] \textbf{Underflow of $\mathcal{L}({^{*}\mathr})$:} If $S$ contains arbitrarily small infinitely large positive numbers, then $S$ contains arbitrarily large finite positive numbers.
\item[(d)] \textbf{Overflow of $\mathcal{I}({^{*}\mathr})$:} If $S$ contains arbitrarily large positive infinitesimals, then $S$ contains arbitrarily small finite non-infinitesimal positive numbers.
\end{description}
\end{quote}
\end{corollary}
\begin{proof}
We begin with \textbf{(a)}. Suppose that $S$ contains arbitrarily large finite positive numbers. In the case that $S$ is unbounded from above we may immediately conclude that $S$ contains an infinitely large positive number, for otherwise any infinitely large positive number would be an upper bound of $S$.
In the case where $S$ is bounded from above, we know from Theorem \ref{T: Order Completeness} (Dedekind Completeness) that $\alpha = \sup (S)$ exists in ${^{*}\mathr}$. Furthermore, by the assumption made on $S$, $\alpha$ exceeds every finite positive number and is therefore infinitely large. Since $\alpha = \sup (S)$, there exists $x \in S$ such that $\frac{\alpha}{2} < x \leq \alpha$, and since $\frac{\alpha}{2}$ is infinitely large, $x$ must be as well.
Now that we have shown that the set of positive infinitely large numbers in $S$ is non-empty ($\mathcal{L}(S_+) \not= \varnothing$), we show that it has no infinitely large lower bound. Suppose to the contrary that $l \leq x$ for some $l \in \mathcal{L}({^{*}\mathr}_+)$ and all $x \in \mathcal{L}(S_+)$. Notice the set $S_l = \{ x \in S : 0 < x < l \}$ is internal since $S_l = S \cap (0 , l)$. Since every finite positive element of $S$ is smaller than $l$, we have $S \cap \mathcal{F}({^{*}\mathr}_+) \subseteq S_l$, so $S_l$ also contains arbitrarily large finite positive numbers and is bounded from above by $l$. By the same argument as before, $\mathcal{L}(S_l) \not= \varnothing$; that is, there exists an infinitely large $x \in S$ with $0 < x < l$, a contradiction to the choice of $l$.
Therefore there exist arbitrarily small infinitely large positive numbers in $S$. We leave \textbf{(b)} - \textbf{(d)} to the reader.
\end{proof}
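A typical application of Overflow is the following.
\begin{example}
Let $S \subseteq {^{*}\mathr}$ be an internal set with $\mathbb{N} \subseteq S$. Since every finite positive number is smaller than some standard natural number, $S$ contains arbitrarily large finite positive numbers, and so by \textbf{(a)} of Corollary \ref{C: Spilling Principles} $S$ contains an infinitely large positive number. In particular, since $\mathbb{N}$ contains no infinitely large numbers, no internal set is equal to $\mathbb{N}$; this gives another way to see that $\mathbb{N}$ is external.
\end{example}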
\begin{theorem}[Saturation Principle for nested open intervals]\label{T: Interval Saturation}\index{Saturation Principle!Open Intervals}
Every nested sequence of open intervals in ${^*}\mathbb{R}$ has a non-empty intersection.
\end{theorem}
\begin{proof}
Given $\alpha_n, \beta_n \in {^*}\mathbb{R}$ such that $\alpha_n < \beta_n$ for all $n \in \mathbb{N}$, and $\alpha_n \leq \alpha_{n+1} < \beta_{n+1} \leq \beta_n$, we have to show that there exists $\gamma \in {^*}\mathbb{R}$ such that $\alpha_m < \gamma < \beta_m$ for all $m \in \mathbb{N}$. Let $\alpha_n = \ensuremath{\langle} a_{n\varepsilon} \ensuremath{\rangle}$ and $\beta_n = \ensuremath{\langle} b_{n\varepsilon} \ensuremath{\rangle}$. From $\alpha_n < \beta_n$ for all $n \in \mathbb{N}$, if we define $A_n = \{ \varepsilon \in \mathbb{R}_+ : a_{n\varepsilon} < b_{n\varepsilon} \}$, then $A_n \in \mathcal{U}$.
For $n=1$, without loss of generality we can assume that $A_1 = \mathbb{R}_+$. Indeed, suppose not, then $\mathbb{R}_+ \setminus A_1 \not= \varnothing$. Redefine:
\begin{align}
a'_{1\varepsilon} &= a_{1\varepsilon}\notag\\
b'_{1\varepsilon} &= \begin{cases} b_{1\varepsilon} & \text{ if $\varepsilon \in A_1$,}
\\
a_{1\varepsilon} + 1 & \text{ if $\varepsilon \in \mathbb{R}_+ \setminus A_1$}
\end{cases}\notag
\end{align}
Then we have $a'_{1\varepsilon} < b'_{1\varepsilon}$ for all $\varepsilon \in \mathbb{R}_+$. Also, $\alpha_1 = \ensuremath{\langle} a'_{1\varepsilon} \ensuremath{\rangle}$ and $\beta_1 = \ensuremath{\langle} b'_{1\varepsilon} \ensuremath{\rangle}$. So $A'_1 = \{ \varepsilon \in \mathbb{R}_+ : a'_{1\varepsilon} < b'_{1\varepsilon} \} = \mathbb{R}_+$. This justifies our assumption that $A_1 = \mathbb{R}_+$.
Henceforth, we will keep the original notation, using $a_{1\varepsilon}$ and $b_{1\varepsilon}$ without the primes. So we assume that $a_{1\varepsilon} < b_{1\varepsilon}$ for all $\varepsilon \in \mathbb{R}_+$.
Next, we define $\mu : \mathbb{R}_+ \to \mathbb{N}$ by $\mu(\varepsilon) := \max\{m \in \mathbb{N} : m \leq 1 + \frac{1}{\varepsilon} \text{ and } \bigcap_{n=1}^{m}(a_{n\varepsilon}, b_{n\varepsilon}) \not= \varnothing\}$. Notice that $\mu(\varepsilon)$ is well defined: the set on the right is finite (its elements are bounded by $1 + \frac{1}{\varepsilon}$), and it contains $m = 1$ because of our assumption that $A_1 = \mathbb{R}_+$.
Next, choose $(c_\varepsilon) \in \mathbb{R}^{\mathbb{R}_+}$ such that $c_\varepsilon \in \bigcap_{n=1}^{\mu(\varepsilon)} (a_{n\varepsilon}, b_{n\varepsilon})$. Notice that the intersection is non-empty for all $\varepsilon \in \mathbb{R}_+$ by our choice of $\mu(\varepsilon)$. Define $\gamma \in {^*}\mathbb{R}$ by $\gamma = \ensuremath{\langle} c_\varepsilon \ensuremath{\rangle}$.
And now, for the heart of the proof. Let $m \in \mathbb{N}$. We must show that $\alpha_m < \gamma < \beta_m$, that is, that $a_{m\varep} < c_\varep < b_{m\varep}$ a.e.
Notice that $\{ \varepsilon \in \mathbb{R}_+ : \bigcap_{n =1}^{\mu(\varepsilon)} (a_{n\varep}, b_{n\varep}) \not= \varnothing \} = \mathbb{R}_+$ by definition of $\mu(\varepsilon)$ and so it is in $\mathcal{U}$. Consider $S_m = \{ \varepsilon \in \mathbb{R}_+ : \bigcap_{n=1}^{m} (a_{n\varep}, b_{n\varep}) \not= \varnothing\}$. Since $\bigcap_{n=1}^{m} (\alpha_n, \beta_n) \not= \varnothing$ (in ${^{*}\mathr}$), we know that $S_m \in \mathcal{U}$.
We now claim $S_m \cap (0, \frac{1}{m}) \subseteq \{\varepsilon \in \mathbb{R}_+ : a_{m\varep} < c_\varep < b_{m\varep}\}$. Indeed, if $\varepsilon \in S_m$ and $\varepsilon < \frac{1}{m}$, then $m \leq 1 + \frac{1}{\varepsilon}$ and $\bigcap_{n=1}^{m} (a_{n\varep}, b_{n\varep}) \not= \varnothing$, so $m \leq \mu(\varepsilon)$ by the definition of $\mu(\varepsilon)$, which implies $c_\varep \in \bigcap_{n=1}^{\mu(\varepsilon)} (a_{n\varep}, b_{n\varep}) \subseteq (a_{m\varep}, b_{m\varep})$. Moreover, $(0, \frac{1}{m}) = \{\varepsilon \in \mathbb{R}_+ : \varepsilon < \frac{1}{m}\} \in \mathcal{U}$, since $\ensuremath{\langle} \varepsilon \ensuremath{\rangle}$ is a positive infinitesimal. Hence $S_m \cap (0, \frac{1}{m}) \in \mathcal{U}$ by \textbf{(b)} of Definition \ref{D: Filter and Free Filter}, and by \textbf{(c)} of the same definition $a_{m\varep} < c_\varep < b_{m\varep}$ a.e. Since $m \in \mathbb{N}$ was arbitrary, this holds for every $m \in \mathbb{N}$.
Therefore, $\gamma = \ensuremath{\langle} c_\varep \ensuremath{\rangle} \in {^{*}\mathr}$ is an element of $\bigcap_{n=1}^{\infty}(\alpha_n, \beta_n)$, making the intersection non-empty.
\end{proof}
\begin{corollary}[Cantor Principle in ${^{*}\mathr}$]\index{Cantor Principle}
Every nested sequence of closed intervals in ${^{*}\mathr}$ has a non-empty intersection.
\end{corollary}
\begin{proof}
If some interval of the sequence is degenerate, say $[\alpha_m, \beta_m] = \{\alpha_m\}$, then $\alpha_m$ belongs to every interval of the sequence (it lies in the earlier ones by nestedness, and every later interval is a non-empty subset of $\{\alpha_m\}$, hence equal to it), so the intersection is non-empty. Otherwise $\alpha_n < \beta_n$ for all $n \in \mathbb{N}$, and the open intervals $(\alpha_n, \beta_n)$ form a nested sequence; by Theorem \ref{T: Interval Saturation} they have a common point, which lies in every $[\alpha_n, \beta_n]$ as well.
\end{proof}
We now present the general version of the Saturation Principle. In the following $c^+$ denotes the cardinal number that is the successor of $c = \ensuremath{{\rm{card}}}(\mathbb{R})$; notice that $\ensuremath{{\rm{card}}}(\mathbb{R}_+) = c$, as well. Recall that a collection of sets $\{S_\gamma\}_{\gamma \in \Gamma}$ has the \textbf{finite intersection property} if $\cap_{\gamma \in F} S_\gamma \not= \varnothing$ for each finite subset $F \subseteq \Gamma$.
\begin{theorem}[General Saturation Principle]\label{T: General Saturation}\index{Saturation Principle!General}
The non-standard complex numbers ${^{*}\mathc}$ are $c^{+}$-saturated in the sense that every family $\{S_\gamma\}_{\gamma \in \Gamma}$ of internal sets of ${^{*}\mathc}$ with the finite intersection property and with $\ensuremath{{\rm{card}}}(\Gamma) \leq c$ has a non-empty intersection.
\end{theorem}
We omit the proof, but note that it is similar to the proof of Theorem \ref{T: Interval Saturation} but utilizes more complex combinatorial arguments.
\begin{remark}
Observe that $\mathbb{N}, \mathbb{Q}, \mathbb{R}, \mathbb{R}_+, \mathbb{R}^{n}, \mathbb{C}, \mathbb{C}^{n}, \mathcal{D}(\mathbb{R}^{d})$ all have cardinality at most $c$ and so may be used as the index set in Theorem \ref{T: General Saturation}. Here $\mathcal{D}(\mathbb{R}^{d})$ denotes the class of $C^\infty$-functions from $\mathbb{R}^d$ to $\mathbb{C}$ with compact support.
\end{remark}
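To see the principle in action, consider the following family of intervals.
\begin{example}
For each $n \in \mathbb{N}$ let $S_n = (0, \frac{1}{n}) = \{ x \in {^{*}\mathr} : 0 < x < \frac{1}{n} \}$, an interval in ${^{*}\mathr}$ and hence an internal set. The family $\{S_n\}_{n \in \mathbb{N}}$ has the finite intersection property (the intersection of finitely many members is the smallest of them, which is non-empty), and $\ensuremath{{\rm{card}}}(\mathbb{N}) \leq c$. By Theorem \ref{T: General Saturation}, $\bigcap_{n \in \mathbb{N}} S_n \not= \varnothing$, and every element of this intersection is a positive infinitesimal. In this way saturation recovers the existence of positive infinitesimals such as $\rho = \ensuremath{\langle} \varepsilon \ensuremath{\rangle}$.
\end{example}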
\section{Non-Standard Extension of a Function}\label{S: Functions}
Just as a standard set may be extended to a non-standard set, we may extend an ordinary function to a non-standard function.
\begin{definition}\label{D: NSE of Function}\index{Non-Standard Extension of a Function}
Let $f: X \to \mathbb{C}$ where $X \subseteq \mathbb{C}$, be a (standard) function, and let $\ensuremath{\langle} x_\varep \ensuremath{\rangle} \in {^{*}X}$. Without loss of generality we can assume that $x_\varep \in X$ for all $\varepsilon \in \mathbb{R}_+$ by Lemma \ref{L: Regular Representatives}. We define the non-standard extension ${^{*}f}: {^{*}X} \to {^{*}\mathc}$ by the formula ${^{*}f}(\ensuremath{\langle} x_\varep \ensuremath{\rangle}) = \ensuremath{\langle} f(x_\varep) \ensuremath{\rangle}$.
\end{definition}
\begin{remark}
We drop the asterisks in front of $f$ for well known functions such as $e^x$, $\ln x$, etc.
\end{remark}
\begin{theorem}[Properties of Non-Standard Extension of $f$]\label{T: Properties of f-star} Consider $f$ and ${^{*}f}$ as in the above definition. Then,
\begin{quote}
\begin{description}
\item[(a)] ${^{*}f}$ is an extension of $f$ in that ${^{*}f} \mid_{X} = f$ (Notice $X \subseteq {^{*}X}$ by Definition \ref{D: Extension of Set})
\item[(b)] $\ensuremath{{\rm{dom}}}({^{*}f}) = $ $^*(\ensuremath{{\rm{dom}}}(f))$, and
\item[(c)] $\ensuremath{{\rm{ran}}}({^{*}f}) = $ $^*(\ensuremath{{\rm{ran}}}(f))$
\end{description}
\end{quote}
\end{theorem}
\begin{proof} $ $
\begin{quote}
\begin{description}
\item[(a)] $\ensuremath{\langle} x_\varep \ensuremath{\rangle} \in X$ \iff there exists $x \in X$ such that $\{ \varepsilon \in \mathbb{R}_+ : x_\varep = x\} \in \mathcal{U}$ \iff $\ensuremath{\langle} x_\varep \ensuremath{\rangle} = \ensuremath{\langle} x \ensuremath{\rangle}$. So ${^{*}f}(\ensuremath{\langle} x_\varep \ensuremath{\rangle}) = {^{*}f}(\ensuremath{\langle} x \ensuremath{\rangle}) = \ensuremath{\langle} f(x) \ensuremath{\rangle} = f(x)$, where the last equality is the usual identification of a constant net with its value.
\item[(b)] $\ensuremath{{\rm{dom}}}({^{*}f}) = {^{*}X} = $ $^*(\ensuremath{{\rm{dom}}}(f))$.
\item[(c)] Let $\ensuremath{\langle} y_\varep \ensuremath{\rangle} \in \ensuremath{{\rm{ran}}}({^{*}f})$. Then ${^{*}f}(\ensuremath{\langle} x_\varep \ensuremath{\rangle}) = \ensuremath{\langle} y_\varep \ensuremath{\rangle}$ for some $\ensuremath{\langle} x_\varep \ensuremath{\rangle} \in \ensuremath{{\rm{dom}}}({^{*}f})$, that is, $\{ \varepsilon \in \mathbb{R}_+ : f(x_\varep) = y_\varep\} \in \mathcal{U}$. We have $\{ \varepsilon \in \mathbb{R}_+ : f(x_\varep) = y_\varep\} \subseteq \{\varepsilon \in \mathbb{R}_+ : y_\varep \in \ensuremath{{\rm{ran}}}(f)\}$, so $\{\varepsilon \in \mathbb{R}_+ : y_\varep \in \ensuremath{{\rm{ran}}}(f)\} \in \mathcal{U}$, that is, $\ensuremath{\langle} y_\varep \ensuremath{\rangle} \in {^*(\ensuremath{{\rm{ran}}}(f))}$. Conversely, let $\ensuremath{\langle} y_\varep \ensuremath{\rangle} \in {^*(\ensuremath{{\rm{ran}}}(f))}$, so that $y_\varep \in \ensuremath{{\rm{ran}}}(f)$ a.e. Using the Axiom of Choice, choose $x_\varep \in X$ with $f(x_\varep) = y_\varep$ for those $\varepsilon$, and $x_\varep \in X$ arbitrary otherwise. Then $\ensuremath{\langle} x_\varep \ensuremath{\rangle} \in {^{*}X} = \ensuremath{{\rm{dom}}}({^{*}f})$ and ${^{*}f}(\ensuremath{\langle} x_\varep \ensuremath{\rangle}) = \ensuremath{\langle} y_\varep \ensuremath{\rangle}$, so $\ensuremath{\langle} y_\varep \ensuremath{\rangle} \in \ensuremath{{\rm{ran}}}({^{*}f})$.
\end{description}
\end{quote}
\end{proof}
We now present some basic examples to fortify Definition \ref{D: NSE of Function}.
\begin{examples}\label{E: NSF}$ $
\begin{quote}
\begin{description}
\item[(i)] Consider $\ln x$ with $\ensuremath{{\rm{dom}}}(\ln x) = \mathbb{R}_+$ and $\ensuremath{{\rm{ran}}}(\ln x) = \mathbb{R}$. For the non-standard extension $\star\ln x$ we have $\ensuremath{{\rm{dom}}}(\star\ln x) = {^{*}\mathr}_+$ and $\ensuremath{{\rm{ran}}}(\star\ln x) = {^{*}\mathr}$ by Theorem \ref{T: Properties of f-star}.
Let $\ensuremath{\langle} x_\varep \ensuremath{\rangle} \in {^{*}\mathr}_+$. Without loss of generality (Lemma \ref{L: Regular Representatives}) we may assume that $x_\varep \in \mathbb{R}_+$ for all $\varepsilon \in \mathbb{R}_+$. Therefore, $\star\ln \ensuremath{\langle} x_\varep \ensuremath{\rangle} = \ensuremath{\langle} \ln x_\varep \ensuremath{\rangle}$.
\item[(ii)] Next, consider $e^x$ with $\ensuremath{{\rm{dom}}}(e^x) = \mathbb{R}$ and $\ensuremath{{\rm{ran}}}(e^x) = \mathbb{R}_+$. Once again, from Theorem \ref{T: Properties of f-star}, the non-standard extension $\star e^x$ has $\ensuremath{{\rm{dom}}}(\star e^x) = {^{*}\mathr}$ and $\ensuremath{{\rm{ran}}}(\star e^x) = {^{*}\mathr}_+$. The values may be computed via Definition \ref{D: NSE of Function} so that $\star e^{\ensuremath{\langle} x_\varep \ensuremath{\rangle}} = \ensuremath{\langle} e^{x_\varep} \ensuremath{\rangle}$. Notice we need not invoke Lemma \ref{L: Regular Representatives} here, since $\ensuremath{{\rm{dom}}}(e^x) = \mathbb{R}$ and so every representative net already lies in the domain.
\item[(iii)] The extensions of all trigonometric functions ($\sin$, $\cos$, etc.) are similar. In particular, consider $\sin x$ with $\ensuremath{{\rm{dom}}}(\sin x) = \mathbb{R}$ and $\ensuremath{{\rm{ran}}}(\sin x) = [-1, 1]$. We have $\ensuremath{{\rm{dom}}}(\star\sin x) = {^{*}\mathr}$ and $\ensuremath{{\rm{ran}}}(\star\sin x) = \star[-1,1] \subset {^{*}\mathr}$ with values $\star\sin \ensuremath{\langle} x_\varep \ensuremath{\rangle} = \ensuremath{\langle} \sin x_\varep \ensuremath{\rangle}$.
\end{description}
\end{quote}
\end{examples}
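We add one more computation in the same spirit.
\begin{example}
Let $f(x) = x^2$ with $\ensuremath{{\rm{dom}}}(f) = \mathbb{R}$ and $\ensuremath{{\rm{ran}}}(f) = [0, \infty)$. By Definition \ref{D: NSE of Function}, ${^{*}f}(\ensuremath{\langle} x_\varep \ensuremath{\rangle}) = \ensuremath{\langle} x_\varep^2 \ensuremath{\rangle} = \ensuremath{\langle} x_\varep \ensuremath{\rangle}^2$. In particular ${^{*}f}(\rho) = \rho^2$ is again a positive infinitesimal, while ${^{*}f}(\nu) = \nu^2$ is infinitely large for every infinitely large $\nu \in {^{*}\mathr}$.
\end{example}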
\section{Transfer Principle by Example}\label{S: Transfer by Example}
Remember that the real numbers may be constructed from the rationals via Dedekind cuts, in which case the Completeness of the reals is a theorem; alternatively, the real numbers may be introduced axiomatically, in which case Completeness is an axiom. The same holds true for the non-standard reals. Up until now we have constructed the non-standard reals using ultrafilters and an equivalence relation, but there is the alternative of an axiomatic approach. The Transfer Principle is one of the key axioms which allows us to connect standard analysis to non-standard analysis.
We proceed by recalling theorems proven in this paper using ultrafilters and the equivalence relation on $\mathbb{C}^{\mathbb{R}_+}$, and noting that formalizing the equivalent standard result, and putting asterisks in the proper places leads us to the non-standard result.
\begin{example}
In Theorem \ref{T: Field} we proved that the non-standard complex numbers ${^{*}\mathc}$ form a field. We proved our assertion using the framework of ultrafilters, but this needn't be the case. Recall that ${^{*}\mathc} = \mathbb{C}^{\mathbb{R}_+} / \sim$ and so ${^{*}\mathc}$ is already a ring. So the assertion that ${^{*}\mathc}$ is a field reduces to saying that each non-zero element has a multiplicative inverse. This is formalized as:
\begin{equation}\label{E: Formal Field}
(\forall z \in {^{*}\mathc})(\exists w \in {^{*}\mathc})[(z = 0) \vee (zw = 1)].
\end{equation}
We now notice that (\ref{E: Formal Field}) may be obtained by replacing $\mathbb{C}$ with ${^{*}\mathc}$ in the standard statement: $$(\forall z \in \mathbb{C})(\exists w \in \mathbb{C})[(z = 0) \vee (zw = 1)],$$
which is true since $\mathbb{C}$ is a field.
\end{example}
\begin{example}
In Theorem \ref{T: Algebraically Closed} we proved that ${^{*}\mathc}$ is an algebraically closed field. In the proof we relied heavily upon the equivalence classes of ${^{*}\mathc}$ and the fixed ultrafilter. The formalization of Theorem \ref{T: Algebraically Closed} (every non-constant polynomial has a root) is:
\begin{equation}\label{E: Formal Closed}
(\forall P \in {^{*}\mathc}[x] \setminus {^{*}\mathc})(\exists z \in {^{*}\mathc})[P(z) = 0].
\end{equation}
Notice (\ref{E: Formal Closed}) is obtained by replacing $\mathbb{C}[x]$ with ${^{*}\mathc}[x]$ and $\mathbb{C}$ with ${^{*}\mathc}$ in the standard statement: $$(\forall P \in \mathbb{C}[x] \setminus \mathbb{C})(\exists z \in \mathbb{C})[P(z) = 0],$$
which is true since $\mathbb{C}$ is an algebraically closed field.
\end{example}
\begin{example}
In Theorem \ref{T: Totally Ordered Field} we proved, using the properties of ultrafilters, that ${^{*}\mathr}$ is a totally ordered field. The trichotomy part of Theorem \ref{T: Totally Ordered Field} is formalized as:
\begin{equation}\label{E: Formal Total}
(\forall x, y \in {^{*}\mathr})[(x > y) \vee (x < y) \vee (x = y)].
\end{equation}
Notice (\ref{E: Formal Total}) may be obtained by replacing $\mathbb{R}$ with ${^{*}\mathr}$ in the standard statement: $$(\forall x, y \in \mathbb{R})[(x > y) \vee (x < y) \vee (x = y)],$$
which is true since $\mathbb{R}$ is a totally ordered field.
\end{example}
\begin{example}
In Theorem \ref{T: Properties of f-star} we showed that $\ensuremath{{\rm{dom}}}({^{*}f}) = {\star(\ensuremath{{\rm{dom}}}(f))}$. Recall that the definition of the domain $\ensuremath{{\rm{dom}}}(f)$ may be formalized as:
\begin{equation}\label{E: Formal dom}
(\forall x \in \mathbb{R})[ x \in \ensuremath{{\rm{dom}}}(f) \Leftrightarrow (\exists y \in \mathbb{C})[f(x) = y]].
\end{equation}
Note that replacing $\mathbb{R}$ with ${^{*}\mathr}$, $\mathbb{C}$ with ${^{*}\mathc}$, $\ensuremath{{\rm{dom}}}(f)$ with ${\star(\ensuremath{{\rm{dom}}}(f))}$ and $f$ with ${^{*}f}$ in (\ref{E: Formal dom}) we obtain: $$(\forall x \in {^{*}\mathr})[x \in {\star(\ensuremath{{\rm{dom}}}(f))} \Leftrightarrow (\exists y \in {^{*}\mathc})[{^{*}f}(x) = y]].$$ We now note that the right side of the if and only if statement is what we mean by $x \in \ensuremath{{\rm{dom}}}({^{*}f})$, and therefore ${\star(\ensuremath{{\rm{dom}}}(f))} = \ensuremath{{\rm{dom}}}({^{*}f})$ as was already proven.
\end{example}
\begin{example}
In Theorem \ref{T: Properties of f-star} we showed that $\ensuremath{{\rm{ran}}}({^{*}f}) = {\star(\ensuremath{{\rm{ran}}}(f))}$. Recall that the definition of the range $\ensuremath{{\rm{ran}}}(f)$ may be formalized as:
\begin{equation}\label{E: Formal ran}
(\forall y \in \mathbb{C})[ y \in \ensuremath{{\rm{ran}}}(f) \Leftrightarrow (\exists x \in \ensuremath{{\rm{dom}}}(f))[f(x) = y]].
\end{equation}
Note that by replacing $\mathbb{C}$ with ${^{*}\mathc}$, $\ensuremath{{\rm{ran}}}(f)$ with ${\star(\ensuremath{{\rm{ran}}}(f))}$, $\ensuremath{{\rm{dom}}}(f)$ with ${\star(\ensuremath{{\rm{dom}}}(f))}$, and $f$ with ${^{*}f}$ in (\ref{E: Formal ran}) we obtain: $$(\forall y \in {^{*}\mathc})[ y \in {\star(\ensuremath{{\rm{ran}}}(f))} \Leftrightarrow (\exists x \in {\star(\ensuremath{{\rm{dom}}}(f))})[{^{*}f}(x) = y]].$$ As ${\star(\ensuremath{{\rm{dom}}}(f))} = \ensuremath{{\rm{dom}}}({^{*}f})$, the right side of the if and only if statement is what we mean by $y \in \ensuremath{{\rm{ran}}}({^{*}f})$, and therefore ${\star(\ensuremath{{\rm{ran}}}(f))} = \ensuremath{{\rm{ran}}}({^{*}f})$ as was already proven.
\end{example}
From just these five examples we see that by placing asterisks in the right location we arrive at a non-standard result which we have already proven ``the hard way'' using ultrafilters. This is not simply a coincidence; indeed, we are hinting at a powerful general theorem which asserts the validity of a standard result if and only if the non-standard result is likewise valid; this is the Transfer Principle. Before we describe the Transfer Principle in any detail, we present a cautionary example.
\begin{warning}
The Completeness of the real numbers may be stated as follows: $$\textnormal{Any non-empty subset of $\mathbb{R}$ that is bounded from above has a supremum.}$$ Casually putting asterisks as we have done before yields the statement: $$\textnormal{Any non-empty subset of ${^{*}\mathr}$ that is bounded from above has a supremum.}$$ Whereas the standard result is certainly true, the ``transferred'' statement is \emph{not} true. As a counterexample consider the set $\mathcal{F}({^{*}\mathr}) \subset {^{*}\mathr}$. It is non-empty and bounded from above by any infinitely large positive element of ${^{*}\mathr}$, yet $\mathcal{F}({^{*}\mathr})$ does not have a supremum: any upper bound of $\mathcal{F}({^{*}\mathr})$ is infinitely large, and subtracting $1$ from it produces a smaller upper bound, just as in the proof that $\mathbb{R}$ is external.
\end{warning}
In order to rigorously state the Transfer Principle, we must clearly define the mathematical language with which we shall speak.
\section{Logical Basis for Transfer}\label{S: Logic for Transfer}
The Transfer Principle has quite an extensive logical foundation, and for the purposes of this text, we shall only take what is absolutely necessary for the \emph{statement} of the Transfer Principle. We shall not concern ourselves with the details of the proof (save but to cite it). We begin with a discussion of our framework.
\begin{definition}[Superstructure]\label{D: Superstructure}\index{Superstructure}
Let $S$ be an infinite set. The \textbf{superstructure}, denoted $V(S)$, on $S$ is the union $$V(S) = \bigcup_{n = 0}^{\infty} V_n(S),$$ where the $V_n(S)$ are inductively defined by $V_0(S) = S$, $V_1(S) = S \cup \mathcal{P}(S)$, ..., $V_{n+1}(S) = V_n(S) \cup \mathcal{P}(V_n(S))$.
Elements of $V(S)$ are \textbf{mathematical objects}, elements of $V(S) \setminus S$ are \textbf{sets}, and elements of $S$ are \textbf{individuals}. We do not think of individuals as sets. To understand this, imagine we are in the superstructure $V(\mathbb{R})$; we have $5 \in \mathbb{R}$ and $\pi \in \mathbb{R}$, but we would never say $5 \in \pi$.
\end{definition}
\begin{theorem}[Properties of $V_n(S)$ and $V(S)$]
$ $
\begin{quote}
\begin{description}
\item[(a)] We have $S = V_0(S) \subset V_1(S) \subset ... \subset V(S)$, and since $V_{n+1}(S) \supseteq \mathcal{P}(V_n(S))$ we also have $\ensuremath{{\rm{card}}}(S) < \ensuremath{{\rm{card}}}(V_1(S)) < ... < \ensuremath{{\rm{card}}}(V(S))$. Similarly, $S = V_0(S) \in V_1(S) \in ... \in V(S)$.
\item[(b)] Each $V_n(S)$ is \textbf{transitive} in the superstructure in the sense that $A \in V_n(S)$ implies either $A \in S$ ($A$ is an individual) or $A \subset V_n(S)$ ($A$ is a set). Written another way, $V_n(S) \subseteq S \cup \mathcal{P}(V_n(S))$, for all $n \in \mathbb{N}$.
\item[(c)]We also have that the superstructure itself is transitive, which is to say that if $A \in V(S)$, then either $A \in S$ or $A \subset V(S)$, that is, $V(S) \subseteq S \cup \mathcal{P}(V(S))$.
\end{description}
\end{quote}
\end{theorem}
The transitivity of the superstructure is the \emph{key} property of the superstructure. It tells us that everything in the superstructure is either an individual or a set. This allows us to apply all of our set theoretic operations ($\cup$, $\cap$, $\setminus$, etc) on the non-individual objects of the superstructure.
We claim that the superstructure $V(\mathbb{R})$ contains all of the objects from analysis that we need. To give an idea of the breadth of the superstructure we demonstrate that the Cartesian product $\mathbb{R} \times \mathbb{R}$ is in the superstructure, and that by induction $\mathbb{R}^n$ is in the superstructure. From there we are able to show that functions, and the algebraic operations are members of the superstructure.
In order to demonstrate that the Cartesian product $\mathbb{R} \times \mathbb{R}$ is in the superstructure we define the ordered pair to be $\ensuremath{\langle} a, b \ensuremath{\rangle} =: \{ \{a\}, \{a, b\}\}$. If we can use this definition to prove the following lemma, then we will have justified our definition.
\begin{lemma}
Let $a, b, a', b' \in \mathbb{R}$. Then $\ensuremath{\langle} a, b \ensuremath{\rangle} = \ensuremath{\langle} a', b' \ensuremath{\rangle} \Leftrightarrow a = a'$ and $b = b'$.
\end{lemma}
\begin{proof} $\ensuremath{\langle} a, b \ensuremath{\rangle} = \ensuremath{\langle} a', b' \ensuremath{\rangle} \Leftrightarrow \{\{a\}, \{a, b\}\} = \{\{a'\}, \{a', b'\}\}$.
\begin{description}
\item[($\Rightarrow$)]
\begin{description}
\item[Case 1:] $\{a\} = \{a'\}$ and $\{a, b\} = \{a', b'\}$ implies $a = a'$ implies $\{a, b\} = \{a, b'\}$ implies $b = b'$.
\item[Case 2:] $\{a\} = \{a', b'\}$ and $\{a, b\} = \{a'\}$. The former implies $a = a' = b'$ and the latter implies $a = b = a'$ as desired.
\end{description}
\item[($\Leftarrow$)] Trivial.
\end{description}
\end{proof}
From the above lemma we have established that ordered pairs may be represented as sets. With respect to the superstructure we have $\{a\} \in \mathcal{P}(\mathbb{R}) \subseteq V_1(\mathbb{R})$ and $\{a, b\} \in \mathcal{P}(\mathbb{R}) \subseteq V_1(\mathbb{R})$. So $\{\{a\}, \{a, b\}\} \in \mathcal{P}(V_1(\mathbb{R})) \subset V_2(\mathbb{R})$. This puts the ordered pair, $\ensuremath{\langle} a, b \ensuremath{\rangle} \in V_2(\mathbb{R}) \subset V(\mathbb{R})$ and therefore $\mathbb{R}^2 \subseteq V_2(\mathbb{R})$, hence $\mathbb{R}^2 \in V_3(\mathbb{R}) \subset V(\mathbb{R})$.
We may now define ordered triples as $\ensuremath{\langle} a, b, c \ensuremath{\rangle} =: \ensuremath{\langle} \ensuremath{\langle} a, b \ensuremath{\rangle}, c \ensuremath{\rangle}$. We may inductively define $n$-tuples in this way as well. This allows us to say that $\mathbb{R}^n$ is in the superstructure for any $n \in \mathbb{N}$. As a consequence, $\mathbb{C}$ is in the superstructure since $\mathbb{C} = \mathbb{R}^2$.
As mentioned previously, functions are in the superstructure. Indeed, we may identify a function $f : \mathbb{R} \to \mathbb{R}$ with its graph, which is a subset of $\mathbb{R} \times \mathbb{R}$ and is certainly in the superstructure. In other words, $f \subseteq \mathbb{R}^2 \subseteq V_2(\mathbb{R})$, hence $f \in V_3(\mathbb{R})$.
We may also think of addition, subtraction, multiplication, and division as being in the superstructure since we may define each as sets of ordered triples. The breadth of the superstructure is apparent.
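To make this concrete, addition on $\mathbb{R}$ may be identified with the set of ordered triples $\{ \ensuremath{\langle} \ensuremath{\langle} a, b \ensuremath{\rangle}, a + b \ensuremath{\rangle} : a, b \in \mathbb{R} \}$. Since $\ensuremath{\langle} a, b \ensuremath{\rangle} \in V_2(\mathbb{R})$ and $a + b \in \mathbb{R}$, both $\{ \ensuremath{\langle} a, b \ensuremath{\rangle} \}$ and $\{ \ensuremath{\langle} a, b \ensuremath{\rangle}, a + b \}$ are subsets of $V_2(\mathbb{R})$ and therefore elements of $V_3(\mathbb{R})$, so each triple $\ensuremath{\langle} \ensuremath{\langle} a, b \ensuremath{\rangle}, a + b \ensuremath{\rangle}$ lies in $V_4(\mathbb{R})$. The set of all such triples is then an element of $\mathcal{P}(V_4(\mathbb{R})) \subseteq V_5(\mathbb{R}) \subset V(\mathbb{R})$.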
We now discuss the language of the superstructure, denoted $\mathcal{L}(S)$. Do not forget that the goal of our language is to be able to formulate statements, and know exactly where to put the asterisks when we wish to transfer them to their non-standard form.
\begin{definition}[Alphabet]\label{D: Alphabet}\index{Language!Alphabet}
Let $S$ be an infinite set and $V(S)$ be the superstructure of $S$. The \textbf{alphabet} $\mathcal{A} \cup \mathcal{B} \cup \mathcal{C}$ of the language $\mathcal{L}(S)$ consists of the sets:
\begin{quote}
\begin{description}
\item[(a)] The set $\mathcal{A}$ is composed of the \textbf{symbols}: $$= \textnormal{ } \in \textnormal{ } \neg \textnormal{ } \wedge \textnormal{ }( \textnormal{ } ) \textnormal{ }[ \textnormal{ } ] \textnormal{ } \{ \textnormal{ } \} \textnormal{ } \ensuremath{\langle} \textnormal{ } \ensuremath{\rangle} \textnormal{ } \exists \textnormal{ } \upharpoonright,$$
\item[(b)] The set $\mathcal{B}$ is composed of countably many \textbf{variables}: $$x, y, z, x_1, x_2, x_3, ..., X, Y, Z, X_1, X_2, X_3, ...$$
\item[(c)] The set $\mathcal{C} = V(S)$ is the superstructure discussed previously. The members of $\mathcal{C}$ are called \textbf{constants} of the language $\mathcal{L}(S)$.
\end{description}
\end{quote}
Members of the alphabet $\mathcal{A} \cup \mathcal{B} \cup \mathcal{C}$ are \textbf{letters}.
\end{definition}
\begin{definition}[Vocabulary]\label{D: Vocabulary}\index{Language!Vocabulary}
The vocabulary of $\mathcal{L}(S)$ consists of \textbf{words, terms, predicates} and \textbf{propositions} defined recursively as follows:
\begin{quote}
\begin{description}
\item[(a)] A \textbf{word} is any finite sequence of letters.\index{Language!Word}
\item[(b)] A word $\tau$ is a \textbf{term} if there exists a finite sequence of words $\tau_1, \tau_2, ..., \tau_m$ such that $\tau = \tau_m$ and for each $n \in \{1, 2, ..., m\}$ we have:\index{Language!Term}
\begin{quote}
\begin{description}
\item[(1)] $\tau_n$ is either a variable \textbf{or}
\item[(2)] $\tau_n$ is a constant \textbf{or}
\item[(3)] $\tau_n = \ensuremath{\langle} \tau_i, \tau_j \ensuremath{\rangle}$ for some $i, j < n$ (i.e. an ordered pair of variables and/or constants) \textbf{or}
\item[(4)] $\tau_n = (\tau_i \upharpoonright \tau_j)$ for some $i, j < n$.
\end{description}
\end{quote}
A term containing no variables is a \textbf{closed term}. The set of closed terms in $\mathcal{L}(S)$ is denoted by $\mathcal{T}(S)$.\index{Language!Closed Term}
\end{description}
We should take the time to define the symbol $\upharpoonright$. It is essentially function evaluation. For $A \in V(S) \setminus S$ and $a \in V(S)$ we define
\begin{align}
A \upharpoonright a =
\begin{cases}
b &\text{if ($\exists ! b \in V(S)$)[$\ensuremath{\langle} a, b \ensuremath{\rangle} \in A$]}\\
\varnothing &\text{otherwise.}\notag
\end{cases}
\end{align}
\begin{remark}\label{R: Evaluation}
Let $f \in V(S) \setminus S$ be a function. Let $a \in V(S)$ and $x \in \mathcal{B}$. We shall write $f(a)$ and $f(x)$ instead of $f \upharpoonright a$ and $f \upharpoonright x$, respectively.
\end{remark}
\begin{description}
\item[(c)] A word $P$ is a \textbf{predicate} if there exists a finite sequence of words $P_1, P_2, ..., P_m$ such that $P = P_m$ and for each $n \in \{1, 2, ..., m\}$ we have:\index{Language!Predicate}
\begin{quote}
\begin{description}
\item[(1)] $P_n$ is either $( \sigma = \tau )$ for some terms $\sigma$ and $\tau$, \textbf{or}
\item[(2)] $P_n$ is $(\sigma \in \tau)$ for some terms $\sigma$ and $\tau$, \textbf{or}
\item[(3)] $P_n = \neg P_i$ for some $i < n$, \textbf{or}
\item[(4)] $P_n = (P_i \wedge P_j)$ for some $i, j < n$, \textbf{or}
\item[(5)] $P_n = (\exists x \in A)P_i$ for some $i < n$ and for some term $A$ where the variable $x$ does not occur.
\end{description}
\end{quote}
\item[(d)] A variable $x$ in the predicate $P$ is \textbf{bounded} if it occurs in $P$ only within expressions of the form $$(\exists x \in A)R(x) \textnormal{ \textbf{or} } (\forall x \in A)R(x),$$ for some predicate $R(x)$ and for some term $A$ in which $x$ does not occur.\index{Language!Bounded Variable}
\item[(e)] A variable of $P$ that is not bounded is \textbf{free}. If $x_1, x_2, ..., x_n$ are all the free variables in $P$, then we write $P(x_1, x_2, ..., x_n)$ instead of $P$.\index{Language!Free Variable}
\item[(f)] A \textbf{proposition} is a predicate in which no variable is free. The set of all propositions in $\mathcal{L}(S)$ is denoted by $\Pi(S)$.\index{Language!Proposition}
\end{description}
\end{quote}
\end{definition}
\begin{remark}
Predicates such as $(\forall x)P(x)$ or $(\exists x)P(x)$ are disallowed. That is, we do not allow quantifiers that are not bounded by a term.
\end{remark}
\begin{example}[Words]
Let $x, y, w, z \in \mathcal{B}$. We consider $(x, y$ to be a word since it is just a finite sequence of letters. Note, a word does not have to \emph{make sense}, in fact, it can be wholly meaningless, as in our example.
We also consider $\ensuremath{\langle} w, z \ensuremath{\rangle}$ to be a word, by the definition.
\end{example}
\begin{example}[Terms]
Let $x, w, z \in \mathcal{B}$. From the last example, $\ensuremath{\langle} w, z \ensuremath{\rangle}$ is a word, but it is also a term. So we see that there is a bit of structure required to give us a term. Of course, a term does not necessarily have to have meaning.
Using our notation from Remark \ref{R: Evaluation} we see that $(\sin \upharpoonright x) = \sin x$ is a term. Similarly, $\sin \pi$ is a term. Moreover, it is a closed term since it contains only constants.
\end{example}
\begin{example}[Predicates]
Let $x, y, z, B \in \mathcal{B}$ be variables and $A, \mathbb{R} \in \mathcal{C}$ be constants. We consider $(\forall x \in \mathbb{R})(\exists y \in \mathbb{R})[ (A \upharpoonright x) = y]$ to be a predicate.
We may look at something a bit more familiar, like the statement that a set is dense in itself: $P(B) = (\forall x, y \in B)(\exists z \in B)[x < z < y]$. This is another example of a predicate; in this case, $B$ is a free variable, to be replaced by constants from $V(\mathbb{R})$.
\end{example}
\begin{example}[Propositions]
Considering the density predicate in the above example, we see that letting $B = \mathbb{N}$, $B = \mathbb{Q}$, or $B = \mathbb{R}$ gives propositions that are either true or false.
\end{example}
With our language in mind, we are prepared to state the Transfer Principle.
\begin{theorem}[Transfer Principle]\index{Transfer Principle}
Let $P$ be a proposition in the form $P = P(A_1, A_2, ..., A_r)$ for some predicate $P(x_1, x_2, ..., x_r)$ in the language $\mathcal{L}(\mathbb{R})$, some $r \in \mathbb{N}$, and some $A_n \in V(\mathbb{R})$. Let ${\star P}$ be the proposition in $\mathcal{L}({^{*}\mathr})$ obtained by replacing all $A$'s by ${\star A}$'s. That is, ${\star P =: P({\star A_1}, {\star A_2}, ..., {\star A_r})}.$ Then $P$ and ${\star P}$ are equivalent.
\end{theorem}
\begin{proof} We refer to M. Davis \cite{mDavis} pp. 24-33 or T. Lindstr\o m \cite{tLin} pp. 73-81.
\end{proof}
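As a quick illustration of how the asterisks are placed, consider the Archimedean property of $\mathbb{R}$.
\begin{example}
The standard proposition $(\forall x \in \mathbb{R})(\exists n \in \mathbb{N})[x < n]$ is true, so by the Transfer Principle $$(\forall x \in {^{*}\mathr})(\exists n \in {^{*}\mathn})[x < n]$$ is also true: every non-standard real number is exceeded by some non-standard natural number. Note that $\mathbb{N}$ must be replaced by ${^{*}\mathn}$; the statement $(\forall x \in {^{*}\mathr})(\exists n \in \mathbb{N})[x < n]$ is false, as witnessed by any infinitely large $x$.
\end{example}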
\chapter{The Usual Topology of $\mathbb{R}$ and Monads}\label{C: Topology}
We apply our conception of infinitely close to discuss open, closed, bounded, and compact sets. Each shall be characterized with respect to an infinitesimal ball about a point, which we call a monad. We think of monads as a kind of universal infinitesimal interval.
\section{Monads}\label{S: Monads}
\begin{definition}\index{Monad}
Let $S \subseteq \mathbb{R}$ and $s \in \mathbb{R}$.
\begin{quote}
\begin{description}
\item[(a)] The set $\mu(s) =: \{s + dx : dx \in \mathcal{I}({^{*}\mathr})\}$ is the \textbf{monad at $s$}.
\item[(b)] The set $\mu(S) =: \{s + dx : s \in S$, $dx \in \mathcal{I}({^{*}\mathr})\}$ is the \textbf{monad on $S$}.\index{Monad!Monad on a Set}
\item[(c)] The set $\mu_0(s) =: \mu(s) \setminus \{s\}$ is the \textbf{deleted monad at $s$}.\index{Monad!Deleted Monad}
\end{description}
\end{quote}
\end{definition}
Henceforth we shall use the following notation:
\begin{itemize}
\item The standard $\frac{1}{n}$-ball centered at $s \in \mathbb{R}$ is $B(s, \frac{1}{n}) =: \{ x \in \mathbb{R} : | x - s | < \frac{1}{n} \}$.
\item The standard deleted $\frac{1}{n}$-ball centered at $s$ is $B_0(s, \frac{1}{n}) =: \{ x \in \mathbb{R} : 0 < | x - s | < \frac{1}{n} \}$.
\item The non-standard $\frac{1}{n}$-ball centered at $s$ is ${^*B(s, \frac{1}{n})} =: \{ x \in {^{*}\mathr} : | x - s | < \frac{1}{n} \}$.
\item The non-standard deleted $\frac{1}{n}$-ball centered at $s$ is ${\star B}_0(s, \frac{1}{n}) =: \{ x \in {^{*}\mathr} : 0 < | x - s | < \frac{1}{n} \}$.
\end{itemize}
\begin{lemma}[Characterization of Monad]\label{L: CM}
Let $s \in \mathbb{R}$. Then, $\mu(s) = \cap_{n \in \mathbb{N}} {^*B(s, \frac{1}{n})}$. Similarly, $\mu_0(s) = \cap_{n \in \mathbb{N}} {\star B}_0 (s, \frac{1}{n})$.
\end{lemma}
\begin{proof}
Let $s \in \mathbb{R}$. We have $x \in \mu(s)$ \iff there exists $dx \in \mathcal{I}({^{*}\mathr})$ such that $x =s+ dx$ \iff $x -s\approx 0$ \iff $| x - s | < \frac{1}{n}$ for all $n \in \mathbb{N}$ \iff $x \in \cap_{n \in \mathbb{N}} {^*B(s, \frac{1}{n})}$.
\end{proof}
\begin{lemma}[Balloon Principle]\label{L: BP}
With $s \in \mathbb{R}$ we have $\star B(s, \frac{1}{\nu}) \subset \mu(s) \subset \star B(s, \frac{1}{n})$ for all $n \in \mathbb{N}$ and for all $\nu \in {^{*}\mathn} \setminus \mathbb{N}$.
\end{lemma}
\begin{proof}
From Lemma \ref{L: CM} we have that $\mu(s) \subset \star B(s, \frac{1}{n})$. For the other inclusion observe that $x \in \star B(s, \frac{1}{\nu})$ implies $x \approx s$, which implies $x \in \mu(s)$.
\end{proof}
The reader may find it interesting that in the statement of the Balloon Principle, both $\star B(s, \frac{1}{n})$ and $\star B(s, \frac{1}{\nu})$ are internal sets whereas $\mu(s)$ is external. Indeed, an allusion to this is the fact that $\mu(0) = \mathcal{I}({^{*}\mathr})$, making $\mu(0)$ external since $\mathcal{I}({^{*}\mathr})$ is.
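We note one more elementary property of monads.
\begin{example}
Monads of distinct standard points are disjoint: if $r, s \in \mathbb{R}$ and $r \not= s$, then $\mu(r) \cap \mu(s) = \varnothing$. Indeed, if $x \in \mu(r) \cap \mu(s)$, then both $x - r$ and $x - s$ are infinitesimal, so $r - s = (r - x) + (x - s)$ is infinitesimal; but $r - s$ is a non-zero standard real number, a contradiction. Combined with Theorem \ref{T: Representation}, this shows that the monads $\mu(s)$, $s \in \mathbb{R}$, partition $\mu(\mathbb{R}) = \mathcal{F}({^{*}\mathr})$.
\end{example}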
\section{Interior Points and Open Sets}\label{S: Interior and Open Sets}
For $S \subseteq \mathbb{R}$ and $s \in \mathbb{R}$ recall the following standard definitions:
\begin{itemize}
\item The point $s \in \mathbb{R}$ is an \emph{interior point} of $S$ if there exists $n \in \mathbb{N}$ such that $B(s, \frac{1}{n}) \subseteq S$. We denote the set of interior points of S by $int(S)$.\index{Interior Point}
\item The set $S$ is \emph{open} \iff $S = int(S)$.\index{Open Set}
\end{itemize}
\begin{theorem}[Characterization of Interior Points and Open Sets]\label{T: IP}
Let $S \subseteq \mathbb{R}$ and $s \in S$. Then $s \in int(S)$ \iff $\mu(s) \subseteq {^{*}S}$. Consequently, the following are equivalent:
\begin{quote}
\begin{description}
\item[(a)] $S$ is open.
\item[(b)] $(\forall s \in S)[\mu(s) \subseteq {^{*}S}]$.
\item[(c)] $\mu(S) \subseteq {^{*}S}$.
\end{description}
\end{quote}
\end{theorem}
\begin{proof}
We begin by proving the statement for interior points of $S$.
\begin{description}
\item[($\Rightarrow$)] Let $s$ be an interior point of $S$. Then there exists $n \in \mathbb{N}$ such that $B(s, \frac{1}{n}) \subseteq S$ which implies there exists $n \in \mathbb{N}$ such that ${^*B(s, \frac{1}{n})} \subseteq {^{*}S}$ (by \textbf{(e)} of Theorem \ref{T: Boolean}). Therefore, $\mu(s) \subseteq {^{*}S}$ since $\mu(s) \subset {^*B(s, \frac{1}{n})}$ by Lemma \ref{L: BP}.
\item[($\Leftarrow$)] Suppose $\mu(s) \subseteq {^{*}S}$. Then ${^*B(s, \frac{1}{\nu})} \subset {^{*}S}$ for all $\nu \in {^{*}\mathn} \setminus \mathbb{N}$ by Lemma \ref{L: BP}. Trivially, there exists $\nu_0 \in {^{*}\mathn}$ such that ${^*B(s, \frac{1}{\nu_0})} \subset {^{*}S}$. By the Transfer Principle there exists $n \in \mathbb{N}$ such that $B(s, \frac{1}{n}) \subseteq S$, which is to say that $s \in int(S)$.
\end{description}
The equivalence of \textbf{(a)} and \textbf{(b)} is given by noting $S$ open \iff $S = int(S)$ and using the statement just proved. So, $s \in S$ \iff $s \in int(S)$ \iff $\mu(s) \subseteq {^{*}S}$. As $s$ was arbitrary, we have that for each $s \in S$, $\mu(s) \subseteq {^{*}S}$.
The equivalence of \textbf{(b)} and \textbf{(c)} is clear by the definition of $\mu(S)$.
\end{proof}
\begin{remark}[Reduction of Quantifiers]\index{Reduction of Quantifiers}
Notice that the standard formulation of open set requires two quantifiers: $(\forall s \in S)(\exists n \in \mathbb{N})[B(s, \frac{1}{n}) \subseteq S]$. On the other hand, the non-standard formulation requires \emph{none}: $\mu(S) \subseteq {^{*}S}$.
\end{remark}
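The characterization makes verifying openness straightforward.
\begin{example}
Let $S = (0, 1) \subseteq \mathbb{R}$. If $s \in S$ and $x \in \mu(s)$, then $x = s + dx$ with $dx \approx 0$, so $0 < x < 1$ because $|dx|$ is smaller than both $s$ and $1 - s$; hence $x \in {^{*}S} = \{ y \in {^{*}\mathr} : 0 < y < 1 \}$. Thus $\mu(s) \subseteq {^{*}S}$ for every $s \in S$, and $S$ is open by Theorem \ref{T: IP}. On the other hand, $T = [0, 1)$ is not open: $0 \in T$ and $-\rho \in \mu(0)$, yet $-\rho \notin {^{*}T}$ since every element of ${^{*}T}$ is non-negative.
\end{example}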
\section{Cluster Points, Adherent Points, and Closed Sets}\label{S: Cluster and Closed}
For $S \subseteq \mathbb{R}$ and $s \in \mathbb{R}$ recall the following standard definitions:
\begin{itemize}
\item The point $s$ is a \emph{cluster point of $S$} if for all $\varepsilon \in \mathbb{R}_+$ the set $\mathcal{S}_\varep = B_0(s, \varepsilon) \cap S$ is non-empty. The set of all cluster points is denoted $S'$.\index{Cluster Point}
\item The point $s$ is \emph{adherent to $S$} if $B(s, \frac{1}{n}) \cap S \not= \varnothing$ for all $n \in \mathbb{N}$. The set of all points adherent to $S$ is called the \emph{closure} and is denoted $\overline{S}$.\index{Adherent Point}
\item The set $S$ is \emph{closed} if $S = \overline{S}$.\index{Closed Set}
\end{itemize}
The following characterization will become useful for us when we discuss the non-standard characterization of limits.
\begin{theorem}[Characterization of Cluster Points]\label{T: AP}
Let $s \in \mathbb{R}$ and $S \subseteq \mathbb{R}$. Then the following statements are equivalent:
\begin{quote}
\begin{description}
\item[(a)] $s$ is a cluster point of $S$.
\item[(b)] $s + dx \in {^{*}S}$ for some non-zero infinitesimal $dx$.
\item[(c)] There exists $x \in {^{*}S}$ such that $x \approx s$ and $x \not= s$.
\item[(d)] ${^{*}S} \cap \mu_0(s) \not= \varnothing$.
\end{description}
\end{quote}
Consequently, $S' = \{ s \in \mathbb{R} : {^{*}S} \cap \mu_0(s) \not= \varnothing \}$.
\end{theorem}
\begin{proof}
We first prove the the equivalence of \textbf{(a)} and \textbf{(b)}.
\begin{description}
\item[($\Rightarrow$)] Let $s$ be a cluster point of $S$. Then for all $\varepsilon \in \mathbb{R}_+$ the set $\mathcal{S}_\varep = B_0(s, \varepsilon) \cap S$ is non-empty, and by the Axiom of Choice there exists a net $(x_\varep) \in \mathbb{R}^{\mathbb{R}_+}$ such that $x_\varep \in \mathcal{S}_\varep$ for all $\varepsilon \in \mathbb{R}_+$. Equivalently, $x_\varep \in B_0(s, \varepsilon) \cap S$ for all $\varepsilon \in \mathbb{R}_+$, so we have $\{ \varepsilon \in \mathbb{R}_+ : x_\varep \in B_0(s, \varepsilon) \cap S\} = \mathbb{R}_+ \in \mathcal{U}$. Note, $\{ \varepsilon \in \mathbb{R}_+ : x_\varep \in B_0(s, \varepsilon) \cap S\} = \{ \varepsilon \in \mathbb{R}_+ : x_\varep \in S$ and $0 < | x_\varep - s | < \varepsilon\}$, so we conclude that $\ensuremath{\langle} x_\varep \ensuremath{\rangle} \in {^{*}S}$, and letting $dx =: \ensuremath{\langle} x_\varep -s\ensuremath{\rangle} = \ensuremath{\langle} x_\varep \ensuremath{\rangle} - s$ we conclude that $dx$ is a non-zero infinitesimal, since $0 < |dx| \leq \ensuremath{\langle} \varepsilon \ensuremath{\rangle} \approx 0$. Thus $s + dx = \ensuremath{\langle} x_\varep \ensuremath{\rangle} \in {^{*}S}$.
\item[($\Leftarrow$)] Let $s + dx \in {^{*}S}$ for some non-zero infinitesimal $dx$. Looking at representatives, $dx = \ensuremath{\langle} \delta_\varepsilon \ensuremath{\rangle}$ for some $(\delta_\varepsilon) \in \mathbb{R}^{\mathbb{R}_+}$. For each $n \in \mathbb{N}$ define $S_n = \{ \varepsilon \in \mathbb{R}_+ : 0 < | \delta_\varepsilon | < \frac{1}{n}$ and $s + \delta_\varepsilon \in S \}$. Then $S_n \in \mathcal{U}$ by our assumptions and is therefore non-empty. Picking $\varepsilon \in S_n$ and setting $x =: s + \delta_\varepsilon$, we obtain $x \in S$ with $0 < | x - s | < \frac{1}{n}$, that is, $B_0(s, \frac{1}{n}) \cap S \not= \varnothing$. Since every $\varepsilon' \in \mathbb{R}_+$ satisfies $\frac{1}{n} < \varepsilon'$ for some $n \in \mathbb{N}$, it follows that $B_0(s, \varepsilon') \cap S \not= \varnothing$ for all $\varepsilon' \in \mathbb{R}_+$, which means that $s$ is a cluster point of $S$.
\end{description}
The equivalence of \textbf{(b)} and \textbf{(c)} is immediate by taking $x =: s+ dx$, and the equivalence of \textbf{(c)} and \textbf{(d)} is clear from the definition of $\mu_0(s)$.
\end{proof}
\begin{remark}[Reduction of Quantifiers]\index{Reduction of Quantifiers}
Notice that the standard formulation of a cluster point requires one quantifier: $(\forall \varepsilon \in \mathbb{R}_+)[B_0 (s, \varepsilon) \cap S \not= \varnothing]$. On the other hand, the non-standard formulation requires \emph{none}: ${^{*}S} \cap \mu_0(s) \not= \varnothing$.
\end{remark}
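Here is a concrete instance of the characterization.
\begin{example}
Let $S = \{ \frac{1}{n} : n \in \mathbb{N} \}$. Setting $x_\varep = \frac{1}{\lceil 1/\varepsilon \rceil}$ we have $x_\varep \in S$ and $0 < x_\varep \leq \varepsilon$ for every $\varepsilon \in \mathbb{R}_+$, so $\ensuremath{\langle} x_\varep \ensuremath{\rangle} \in {^{*}S}$ and $0 < \ensuremath{\langle} x_\varep \ensuremath{\rangle} \leq \rho \approx 0$. Hence ${^{*}S} \cap \mu_0(0) \not= \varnothing$, and $0$ is a cluster point of $S$ by Theorem \ref{T: AP}.
\end{example}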
We use the work on cluster points to discuss adherent points, and then of closed sets.
\begin{theorem}[Characterization of Adherent Points]\label{T: NSAP}
Let $s \in \mathbb{R}$ and $S \subseteq \mathbb{R}$. Then $s \in \overline{S}$ \iff ${^{*}S} \cap \mu(s) \not= \varnothing$. Consequently, $\overline{S} = \{ s \in \mathbb{R} : {^{*}S} \cap \mu(s) \not= \varnothing \}$.
\end{theorem}
\begin{proof}
\begin{description}
\item[($\Rightarrow$)] Let $s \in \overline{S}$. If $s \in S$, then $s \in {^{*}S} \cap \mu(s)$, so the intersection is non-empty. If $s \notin S$, then every ball $B(s, \frac{1}{n})$ meets $S$ in a point different from $s$, so $s$ is a cluster point of $S$, and by Theorem \ref{T: AP} we have $\varnothing \not= {^{*}S} \cap \mu_0(s) \subseteq {^{*}S} \cap \mu(s)$.
\item[($\Leftarrow$)] Assume ${^{*}S} \cap \mu(s) \not= \varnothing$, so $s + dx \in {^{*}S}$ for some infinitesimal $dx$. If $dx \not= 0$, then $s$ is a cluster point of $S$ by Theorem \ref{T: AP}, and any cluster point is adherent to $S$. If $dx = 0$, then $s \in {^{*}S}$; since a standard number lies in ${^{*}S}$ precisely when it lies in $S$, this gives $s \in S$, and so $s \in \overline{S}$ as well.
\end{description}
\end{proof}
\begin{remark}[Reduction of Quantifiers]\index{Reduction of Quantifiers}
Notice that the standard formulation of an adherent point requires one quantifier: $(\forall \varepsilon \in \mathbb{R}_+)[B (s, \varepsilon) \cap S \not= \varnothing]$. On the other hand, the non-standard formulation requires \emph{none}: ${^{*}S} \cap \mu(s) \not= \varnothing$.
\end{remark}
Before characterizing the closed sets we must extend our definition of the $\ensuremath{{{\rm{st}}}}$-mapping a bit further from Definition \ref{D: SPM}.
\begin{definition}\index{Standard Part Mapping!on a Set}
Let $A \subseteq {^{*}\mathr}$. We define $\ensuremath{{{\rm{st}}}}(A) =: \{ \ensuremath{{{\rm{st}}}}(x) : x \in A \cap \mathcal{F}({^{*}\mathr}) \}$.
\end{definition}
Notice that $\ensuremath{{{\rm{st}}}}(\mu(S)) = S$ for any $S \subseteq \mathbb{R}$ by the definition of $\mu(S)$.
\begin{theorem}[Characterization of Closed Sets]\label{T: NSCS}
Let $s \in \mathbb{R}$ and $S \subseteq \mathbb{R}$. Then the following are equivalent:
\begin{quote}
\begin{description}
\item[(a)] $S$ is closed.
\item[(b)] For each $s \in \mathbb{R}$, if ${^{*}S} \cap \mu(s) \not= \varnothing$, then $s \in S$.
\item[(c)] $S = \{ s \in \mathbb{R} : {^{*}S} \cap \mu(s) \not= \varnothing \}$.
\item[(d)] $S = \ensuremath{{{\rm{st}}}}({^{*}S})$.
\end{description}
\end{quote}
\end{theorem}
\begin{proof} First, we prove the equivalence of \textbf{(a)} and \textbf{(b)}.
\begin{description}
\item[($\Rightarrow$)] The set $S$ is closed \iff $(\mathbb{R} \setminus S)$ is open \iff for all $r \in (\mathbb{R} \setminus S)$ we have $\mu(r) \subseteq \star(\mathbb{R} \setminus S)$. Let $s \in \mathbb{R}$ be such that ${^{*}S} \cap \mu(s) \not= \varnothing$. Either $s \in S$ or $s \in (\mathbb{R} \setminus S)$. If $s \in (\mathbb{R} \setminus S)$ then $\mu(s) \subseteq \star(\mathbb{R} \setminus S)$ which implies ${^{*}S} \cap \mu(s) = \varnothing$, a contradiction since ${\star(\mathbb{R} \setminus S)} = {\star \mathbb{R}} \setminus {\star S}$. Therefore $s \in S$.
\item[($\Leftarrow$)] For every $s \in \mathbb{R}$ we have ${^{*}S} \cap \mu(s) \not= \varnothing$ implies $s \in S$ by assumption. Suppose to the contrary that $S$ is not closed. Then there exists $r \in \overline{S} \setminus S$. Thus $B(r, \frac{1}{n}) \cap S \not= \varnothing$ for all $n \in \mathbb{N}$, and hence ${\star B}(r, \frac{1}{n}) \cap {^{*}S} \not= \varnothing$ for all $n \in \mathbb{N}$ since $B(r, \frac{1}{n}) \cap S \subseteq {\star B}(r, \frac{1}{n}) \cap {^{*}S}$. Further, note that each ${\star B}(r, \frac{1}{n}) \cap {^{*}S}$ is internal, and the family $\{{\star B}(r, \frac{1}{n}) \cap {^{*}S}\}_{n \in \mathbb{N}}$ has the finite intersection property because the balls are nested. So by the Saturation Principle $\bigcap_{n \in \mathbb{N}} ( {\star B}(r, \frac{1}{n}) \cap {^{*}S} ) = \bigcap_{n \in \mathbb{N}} {\star B}(r, \frac{1}{n}) \cap {^{*}S} = \mu(r) \cap {^{*}S} \not= \varnothing$. Therefore $r \in S$, a contradiction.
\end{description}
The equivalence of \textbf{(b)} and \textbf{(c)} follows since $S$ closed \iff $S = \overline{S}$ and $\overline{S} = \{ s \in \mathbb{R} : {^{*}S} \cap \mu(s) \not= \varnothing \}$ from Theorem \ref{T: NSAP}.
Last, we prove the equivalence of \textbf{(c)} and \textbf{(d)}.
\begin{description}
\item[($\Rightarrow$)] With $s \in S$ we have $\ensuremath{{{\rm{st}}}}(s) = s$. Therefore $s \in {^{*}S} \cap \mathcal{F}({^{*}\mathr})$ giving $s \in \ensuremath{{{\rm{st}}}}({^{*}S})$. Conversely, let $s \in \ensuremath{{{\rm{st}}}}({^{*}S})$. Then $s = \ensuremath{{{\rm{st}}}}(x)$ for some finite $x \in {^{*}S}$. Since ${^{*}S} \cap \mu(s) \not= \varnothing$ we have $s \in S$ by assumption.
\item[($\Leftarrow$)] We have $S \subseteq \{ s \in \mathbb{R} : {^{*}S} \cap \mu(s) \not= \varnothing \}$ trivially since $S \subseteq {^{*}S}$. Conversely, suppose ${^{*}S} \cap \mu(s) \not= \varnothing$ for some $s \in \mathbb{R}$. We have $s = \ensuremath{{{\rm{st}}}}(x)$ for some $x \in {^{*}S} \cap \mu(s)$, which implies $x \in {^{*}S} \cap \mathcal{F}({^{*}\mathr})$ and therefore $s \in S$ (since $S = \ensuremath{{{\rm{st}}}}({^{*}S})$).
\end{description}
\end{proof}
\begin{remark}[Reduction of Quantifiers]\index{Reduction of Quantifiers}
Notice that the standard formulation of a closed set requires two quantifiers: $(\forall s \in \mathbb{R})[(\forall \varepsilon \in \mathbb{R}_+)[B(s, \varepsilon) \cap S \not= \varnothing] \Rightarrow s \in S]$. On the other hand, the non-standard formulation requires \emph{none}: $S = \ensuremath{{{\rm{st}}}}({^{*}S})$.
\end{remark}
\section{Bounded Sets}\label{S: Bounded Sets}
Recall that a standard set $S \subseteq \mathbb{R}$ is \textbf{bounded} if there exists $M \in \mathbb{R}_+$ such that for all $s \in S$ we have $|s| \leq M$.\index{Bounded Sets}
\begin{theorem}[Characterization of Bounded Sets]\label{T: SBS}
Let $S \subseteq \mathbb{R}$. Then $S$ is bounded \iff ${^{*}S} \subseteq \mathcal{F}({^{*}\mathr})$. Similarly, $S$ is bounded from above (below) \iff ${^{*}S}$ does not contain positive (negative) infinitely large numbers.
\end{theorem}
\begin{proof}
\begin{description}
\item[($\Rightarrow$)] Suppose $S$ is bounded, and let $M \in \mathbb{R}_+$ be a bound for $S$. Then, $(\forall s \in S)(|s| \leq M)$ \iff (by Transfer Principle) $(\forall s \in {^{*}S})(|s| \leq M)$ which implies ${^{*}S} \subseteq \mathcal{F}({^{*}\mathr}).$
\item[($\Leftarrow$)] Suppose ${^{*}S} \subseteq \mathcal{F}({^{*}\mathr})$. Then $(\exists M \in {^{*}\mathr}_+)(\forall s \in {^{*}S})(|s| \leq M)$; indeed, any infinitely large positive $M$ works, since every finite number is smaller in absolute value than every infinitely large positive number. By the Transfer Principle we have $(\exists M \in \mathbb{R}_+)(\forall s \in S)(|s| \leq M)$. That is, $S$ is bounded.
\end{description}
\end{proof}
\begin{remark}[Reduction of Quantifiers]\index{Reduction of Quantifiers}
Notice that the standard formulation of a bounded set requires two quantifiers: $(\exists M \in \mathbb{R}_+)(\forall s \in S)[|s| \leq M]$. On the other hand, the non-standard formulation requires \emph{none}: ${^{*}S} \subseteq \mathcal{F}({^{*}\mathr})$.
\end{remark}
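We illustrate Theorem \ref{T: SBS} with two quick examples.
\begin{example}
The set $\mathbb{N}$ is unbounded: any $\nu \in {^{*}\mathn} \setminus \mathbb{N}$ is infinitely large, so ${^{*}\mathn} \not\subseteq \mathcal{F}({^{*}\mathr})$. On the other hand, $[0,1]$ is bounded: applying the Transfer Principle to $(\forall x \in [0,1])(|x| \leq 1)$ gives $(\forall x \in {\star[0,1]})(|x| \leq 1)$, hence ${\star[0,1]} \subseteq \mathcal{F}({^{*}\mathr})$.
\end{example}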
\section{Compact Sets}\label{S: Compact Sets}
Recall that a subset $S \subseteq \mathbb{R}$ is compact \iff it is closed and bounded. We shall use this characterization rather than open covers for the sake of simplicity.\index{Compact Sets}
\begin{theorem}[Characterization of Compact Sets]\label{T: NSCoS}
Let $S \subseteq \mathbb{R}$. Then the following are equivalent:
\begin{quote}
\begin{description}
\item[(a)] $S$ is compact.
\item[(b)] ${^{*}S} \subseteq \mu(S)$.
\item[(c)] ${^{*}S} \subseteq \bigcup_{s \in S} \mu(s)$.
\end{description}
\end{quote}
\end{theorem}
\begin{proof}
First, we prove the equivalence of \textbf{(a)} and \textbf{(b)}.
\begin{description}
\item[($\Rightarrow$)] $S$ compact \iff $S$ is closed and bounded \iff $S = \ensuremath{{{\rm{st}}}}({^{*}S})$ and ${^{*}S} \subseteq \mathcal{F}({^{*}\mathr})$ (by part \textbf{(d)} of Theorem \ref{T: NSCS} and Theorem \ref{T: SBS}, respectively). Let $x \in {^{*}S}$, then $x \in \mathcal{F}({^{*}\mathr})$ and by Theorem \ref{T: Representation} we have $x = \ensuremath{{{\rm{st}}}}(x) + dx$ for some $dx \approx 0$. Note, $\ensuremath{{{\rm{st}}}}(x) \in \ensuremath{{{\rm{st}}}}({^{*}S}) = S$ therefore $x \in \mu(S)$.
\item[($\Leftarrow$)] Suppose ${^{*}S} \subseteq \mu(S)$. As $\mu(S) \subseteq \mathcal{F}({^{*}\mathr})$ we have $S$ bounded by Theorem \ref{T: SBS}. We must show that $S = \ensuremath{{{\rm{st}}}}({^{*}S})$. Trivially we have $S \subseteq \ensuremath{{{\rm{st}}}}({^{*}S})$. For the reverse containment we have $s \in \ensuremath{{{\rm{st}}}}({^{*}S})$ \iff $s = \ensuremath{{{\rm{st}}}}(x)$ for some $x \in {^{*}S} \cap \mathcal{F}({^{*}\mathr})$. Therefore $x \in \mu(S)$ (from our assumption), giving $s = \ensuremath{{{\rm{st}}}}(x) \in S$. Therefore $S$ is closed and bounded.
\end{description}
The equivalence of \textbf{(b)} and \textbf{(c)} is given by noting that $\mu(S) = \bigcup_{s \in S} \mu(s)$.
\end{proof}
\begin{remark}[Reduction of Quantifiers]\index{Reduction of Quantifiers}
Notice that the standard formulation of a compact set requires two quantifiers: Every open cover has a finite subcover. On the other hand, the non-standard formulation requires \emph{none}: ${^{*}S} \subseteq \bigcup_{s \in S} \mu(s)$.
\end{remark}
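We illustrate Theorem \ref{T: NSCoS} with two quick examples.
\begin{example}
The interval $[0,1]$ is compact: every $x \in {\star[0,1]}$ satisfies $0 \leq x \leq 1$, hence is finite, and $\ensuremath{{{\rm{st}}}}(x) \in [0,1]$, so $x \in \mu(\ensuremath{{{\rm{st}}}}(x)) \subseteq \mu([0,1])$. By contrast, $(0,1)$ is not compact: any positive infinitesimal $dx$ lies in ${\star(0,1)}$, yet $dx \notin \mu(s)$ for every $s \in (0,1)$, since $\ensuremath{{{\rm{st}}}}(dx) = 0 \notin (0,1)$.
\end{example}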
\chapter{Topics in Real Analysis in a Non-Standard Setting}\label{C: Analysis}
We give non-standard characterizations for such standard analytic concepts as: sequences, limits, continuity, uniform continuity, derivatives, sequences of functions, and uniform convergence. We emphasize with each characterization the reduction of quantifiers.
\section{Limits}\label{S: Limits}
In our discussion of the $\ensuremath{{{\rm{st}}}}$-mapping, the reader may have noticed its resemblance to taking a classical limit. This observation is quite valid and we devote this section to proving it.
Let $r \in \mathbb{R}$ be a cluster point of $X \subseteq \mathbb{R}$ (Theorem \ref{T: AP}). Let $f: X \to \mathbb{C}$ be a function. Recall the following standard definitions:
\begin{itemize}
\item $\lim_{x \to r} f(x) = L$ if, by definition, for each $\varepsilon \in \mathbb{R}_+$ there exists $\delta \in \mathbb{R}_+$ such that for all $x \in X$, if $0 < | x-r | < \delta$ then $|f(x) - L| < \varepsilon$.
\item Phrased in countable variables, $\lim_{x \to r} f(x) = L$ if, by definition, for each $m \in \mathbb{N}$ there exists $n \in \mathbb{N}$ such that for all $x \in X$, if $0 < | x-r | < \frac{1}{n}$, then $|f(x) - L| < \frac{1}{m}$.
\item $\lim_{x \to r} f(x) \not= L$ \iff there exists $m \in \mathbb{N}$ such that for all $\delta \in \mathbb{R}_+$ there exists $x \in X$ so that $0 < |x -r | < \delta$ and $|f(x) - L| \geq \frac{1}{m}$.
\end{itemize}
\begin{theorem}[Limits]\label{T: NSL}\index{Limit}
Let $f : X \to \mathbb{C}$ where $X \subseteq \mathbb{R}$. Suppose $r$ is a cluster point of $X$ and $L \in \mathbb{C}$ (Theorem \ref{T: AP}). Then the following statements are equivalent:
\begin{quote}
\begin{description}
\item[(a)] $\lim_{x \to r} f(x) = L$.
\item[(b)] ${^{*}f}(r + dx) \approx L$ for all non-zero infinitesimals $dx$, such that $r + dx \in {^{*}X}$.
\item[(c)] ${^{*}f}(x) \approx L$ for all $x \in {^{*}X}$ such that $x \approx r$, $x \not= r$.
\item[(d)] $\ensuremath{{{\rm{st}}}}[{^{*}f}(r + dx)] = L$ for all non-zero infinitesimals $dx$, such that $r + dx \in {^{*}X}$.
\end{description}
\end{quote}
\end{theorem}
\begin{proof}
We prove the equivalence of \textbf{(a)} and \textbf{(b)} in detail.
From Theorem \ref{T: AP} we know that $r$ a cluster point for $X \subseteq \mathbb{R}$ \iff $r + dx \in {^{*}X}$ for some non-zero infinitesimal $dx$. As in the proof of Theorem \ref{T: AP} we have $dx = \ensuremath{\langle} \delta_\varepsilon \ensuremath{\rangle}$ for some $(\delta_\varepsilon) \in \mathbb{R}^{\mathbb{R}_+}$ and $A_n =: \{ \varepsilon \in \mathbb{R}_+ : 0 < | \delta_\varepsilon | < \frac{1}{n}$ and $r + \delta_\varepsilon \in X \} \in \mathcal{U}$ for all $n \in \mathbb{N}$.
\begin{description}
\item[($\Rightarrow$)] We use the countable formulation of the definition of the limit. Fixing $m \in \mathbb{N}$, there exists an $n \in \mathbb{N}$ such that for all $x \in X$, if $0 < | x-r | < \frac{1}{n}$ then $|f(x) - L| < \frac{1}{m}$. Define $B_m =: \{ \varepsilon \in \mathbb{R}_+ : |f(r + \delta_\varepsilon) - L | < \frac{1}{m}\}$. Notice that $A_n \subseteq B_m$. Indeed, let $\varepsilon \in A_n$ so that $0 < |\delta_\varepsilon | < \frac{1}{n}$ and $r + \delta_\varepsilon \in X$. Let $x =: r + \delta_\varepsilon$, then $| f(r + \delta_\varepsilon) - L | < \frac{1}{m}$ so that $\varepsilon \in B_m$. As $A_n \in \mathcal{U}$, by the properties of ultrafilters, $B_m \in \mathcal{U}$. So $|{^{*}f}(r + dx) - L | < \frac{1}{m}$, and since $m \in \mathbb{N}$ was arbitrary, ${^{*}f}(r + dx) - L \approx 0$ as desired.
\item[($\Leftarrow$)] Assume that ${^{*}f}(r + dx) \approx L$ for all non-zero infinitesimals $dx$ such that $r + dx \in {^{*}X}$. We shall use the hybrid (continuous/countable) negation of the definition of the limit. Suppose to the contrary that $\lim_{x \to r} f(x) \not= L$. So there exists $m \in \mathbb{N}$ such that for all $\delta \in \mathbb{R}_+$ the set $X_{\delta} =: \{ x \in X : 0 < |x - r | < \delta$ and $|f(x) - L| \geq \frac{1}{m}\}$ is non-empty. By the Axiom of Choice there exists $(x_\delta) \in \mathbb{R}^{\mathbb{R}_+}$ such that $x_\delta \in X_{\delta}$ for all $\delta \in \mathbb{R}_+$. Define $dx =: \ensuremath{\langle} x_\delta - r\ensuremath{\rangle}$. As $x_\delta \in X_{\delta}$ for all $\delta \in \mathbb{R}_+$ we have $r + dx \in {^{*}X}$. Also, we have $0 < |dx| < \ensuremath{\langle} \delta \ensuremath{\rangle}$ and $|{^{*}f}(r + dx) - L| \geq \frac{1}{m}$. Therefore, $dx$ is a non-zero infinitesimal and ${^{*}f}(r + dx) - L \not\approx 0$, a contradiction.
\end{description}
Note the equivalence of \textbf{(b)} and \textbf{(c)} is immediate by letting $x =: r + dx$, and we obtain the equivalence of \textbf{(c)} and \textbf{(d)} by noting $L \in \mathbb{C}$ and applying Theorem \ref{T: Representation}.
\end{proof}
\begin{remark}[Reduction of Quantifiers]\index{Reduction of Quantifiers}
A particularly nice feature of the non-standard characterization of limits is that the number of quantifiers is reduced as compared to the standard characterization. Indeed, observe the formalization of the standard definition for the limit of $f: X \to \mathbb{R}$ at the point $c \in \mathbb{R}$ with limit $L \in \mathbb{R}$:
\begin{equation}\label{E: SLIM}
(\forall \varepsilon \in \mathbb{R}_+)(\exists \delta \in \mathbb{R}_+)(\forall x \in X)[0 < | x - c | < \delta \Rightarrow | f(x) - L | < \varepsilon]
\end{equation}
Now observe the formalization of the non-standard characterization from Theorem \ref{T: NSL}:
\begin{equation}\label{E: NSLIM}
(\forall x \in {^{*}X})[x \approx c \textnormal{ and } x \not= c \Rightarrow {^{*}f}(x) \approx L]
\end{equation}
Notice that whereas the former has three quantifiers, the latter only has one. Moreover, observe that the non-standard characterization is intuitively what we think of as a limit, but we have made rigorous the idea of infinitely close!
\end{remark}
We now present some examples to illustrate our characterization of the limit.
\begin{examples}\label{E: Lim}$ $
\begin{quote}
\begin{description}
\item[(i)] $\lim_{x \to 1} \frac{x}{1+x} = \frac{1}{2}$. Indeed, let $dx$ be a non-zero infinitesimal, then $1 + dx \in {^{*}\mathr} \setminus \{-1\}$. We have $\ensuremath{{{\rm{st}}}}[{^{*}f}(1 + dx)] = \ensuremath{{{\rm{st}}}}[\frac{1 + dx}{2 + dx}] = \frac{\ensuremath{{{\rm{st}}}}[1 + dx]}{\ensuremath{{{\rm{st}}}}[2+dx]} = \frac{1}{2}$. Hence, by part \textbf{(d)} of Theorem \ref{T: NSL} the limit is $\frac{1}{2}$.
\item[(ii)] If $x \approx 0$, then $\star\sin x \approx 0$. Indeed by Theorem \ref{T: NSL} $\ensuremath{{{\rm{st}}}}[\star\sin x] = \lim_{x \to 0} \sin x = 0$. Therefore, $\star\sin x \approx 0$.
\end{description}
\end{quote}
\end{examples}
\section{Limits at Infinity}\label{S: Limits at Infinity}
To facilitate a discussion on the limit of a sequence in the language of non-standard analysis we first discuss the limit of a function as $x$ goes to infinity.
Let $X \subseteq \mathbb{R}$ be unbounded from above, let $f: X \to \mathbb{C}$, and suppose $L \in \mathbb{C}$. Recall that $\lim_{x \to \infty} f(x) = L$ if (by definition),
\begin{equation}\label{E: Lim to Infinity}
(\forall \varepsilon \in \mathbb{R}_+)(\exists K \in \mathbb{R}_+)(\forall x \in X)[x > K \Rightarrow |f(x) - L| < \varepsilon].
\end{equation}
In the following, ${^{*}X}_+$ denotes the set of positive numbers in ${^{*}X}$ and $\mathcal{L}({^{*}X}_+)$ denotes the set of positive infinitely large numbers in ${^{*}X}$. Notice that $\mathcal{L}({^{*}X}_+) \not= \varnothing$ by Theorem \ref{T: SBS}, since $X$ is unbounded from above.
\begin{theorem}[Characterization of Limits at Infinity]\label{T: NSLI}\index{Limit!at Infinity}
Let $X \subseteq \mathbb{R}$ be a set which is unbounded from above. Let $f: X \to \mathbb{C}$ and suppose $L \in \mathbb{C}$. Then $\lim_{x \to \infty} f(x) = L$ \iff $(\forall x \in \mathcal{L}({^{*}X}_+))[{^{*}f}(x) \approx L]$.
\end{theorem}
\begin{proof}$ $
\begin{description}
\item[($\Rightarrow$)] We assume (\ref{E: Lim to Infinity}). Let $\varepsilon \in \mathbb{R}_+$ be fixed so that there exists $K \in \mathbb{R}_+$ such that $(\forall x \in X)[x > K \Rightarrow |f(x) - L| < \varepsilon]$. Apply the Transfer Principle so that $(\forall x \in {^{*}X})[x > K \Rightarrow |{^{*}f}(x) - L| < \varepsilon]$. As $K$ is standard, picking any $x \in \mathcal{L}({^{*}X}_+)$ gives $x > K$ so that $(\forall x \in \mathcal{L}({^{*}X}_+))[|{^{*}f}(x) - L| < \varepsilon]$, which implies $(\forall x \in \mathcal{L}({^{*}X}_+))[{^{*}f}(x) \approx L]$, as desired.
\item[($\Leftarrow$)] Assume $(\forall x \in \mathcal{L}({^{*}X}_+))[{^{*}f}(x) \approx L]$. Let $\varepsilon \in \mathbb{R}_+$ (arbitrarily fixed). Then $(\forall x \in \mathcal{L}({^{*}X}_+))[|{^{*}f}(x) - L| < \varepsilon]$. Trivially, $(\exists K \in {^{*}\mathr}_+)(\forall x \in {^{*}X})[x > K \Rightarrow |{^{*}f}(x) - L| < \varepsilon]$ (For instance, pick $K \in {^{*}\mathn} \setminus \mathbb{N}$). Apply the Transfer Principle so that, $(\exists K \in \mathbb{R}_+)(\forall x \in X)[x > K \Rightarrow |f(x) - L| < \varepsilon]$. As $\varepsilon \in \mathbb{R}_+$ was arbitrary, we have $\lim_{x \to \infty} f(x) = L$.
\end{description}
We present an alternate proof which uses the Overflow Principle.
\begin{description}
\item[($\Leftarrow$)] Assume $(\forall x \in \mathcal{L}({^{*}X}_+))[{^{*}f}(x) \approx L]$. This translates to $(\forall \varepsilon \in \mathbb{R}_+)(\forall x \in \mathcal{L}({^{*}X}_+))[|{^{*}f}(x) - L| < \varepsilon]$. Suppose to the contrary that $(\exists \varep_{0} \in \mathbb{R}_+)(\forall K \in \mathbb{R}_+)(\exists x \in X)[x > K$ and $|f(x) - L| \geq \varep_{0}]$. Fix $\varep_{0} \in \mathbb{R}_+$ so that, $$(\forall K \in \mathbb{R}_+)(\exists x \in X)[x > K \textnormal{ and } |f(x) - L| \geq \varep_{0}].$$ Apply the Transfer Principle so that
\begin{equation}\label{E: Nonempty}
(\forall K \in {^{*}\mathr}_+)(\exists x \in {^{*}X})[x > K \textnormal{ and } |{^{*}f}(x) - L| \geq \varep_{0}].
\end{equation}
Consider $A =: \{ x \in {^{*}X} : |{^{*}f}(x) - L| \geq \varep_{0} \}$. From (\ref{E: Nonempty}) we know that $A \not= \varnothing$. Either $A$ contains infinitely large positive numbers (in which case we contradict our given assumption), or $A$ contains arbitrarily large finite numbers. In this case, we apply the Overflow Principle so that $A$ contains at least one infinitely large number, contradicting our given assumption.
\end{description}
\end{proof}
\begin{example}
$\lim_{x \to \infty} \frac{\sin x}{x} = 0$. Let $dx$ be a positive infinitesimal. Notice that we may neither use l'Hospital's rule (the limit in the numerator does not exist), nor may we distribute the limit (for the same reason). We have $$\lim_{x \to \infty} \frac{\sin x}{x} = \lim_{x \to 0^+} \frac{\sin \frac{1}{x}}{\frac{1}{x}} = \lim_{x \to 0^+} x \sin \frac{1}{x} = \ensuremath{{{\rm{st}}}} \left[ dx \sin\left(\frac{1}{dx}\right)\right] = \ensuremath{{{\rm{st}}}}(dx)\ensuremath{{{\rm{st}}}}\left[\sin\left(\frac{1}{dx}\right)\right]= 0$$ as required, since $\ensuremath{{{\rm{st}}}}(dx) = 0$ and $\ensuremath{{{\rm{st}}}}\left[\sin\left(\frac{1}{dx}\right)\right] \in \mathbb{R}$ because $\sin\left(\frac{1}{dx}\right)$ is finite (as a number in ${\star[-1, 1]}$).
\end{example}
\begin{remark}[Reduction of Quantifiers]\index{Reduction of Quantifiers}
Again, whereas the standard definition of a limit at infinity contains three quantifiers, $(\forall \varepsilon \in \mathbb{R}_+)(\exists K \in \mathbb{R}_+)(\forall x \in X)[x > K \Rightarrow |f(x) - L| < \varepsilon],$ the non-standard characterization in Theorem \ref{T: NSLI}, $(\forall x \in \mathcal{L}({^{*}X}_+))[{^{*}f}(x) \approx L]$, has a \emph{single} quantifier.
\end{remark}
The characterization of the limit of a sequence follows from the characterization in Theorem \ref{T: NSLI}. We take $X = \mathbb{N}$ and note that $\mathcal{L}({^{*}\mathn}) = {^{*}\mathn} \setminus \mathbb{N}$.
\begin{corollary}[Characterization of Limits of Sequences]\label{C: NSLS}\index{Limit!of a Sequence}
Let $(a_n) \in \mathbb{C}^\mathbb{N}$ be a sequence, and suppose $L \in \mathbb{C}$. Then $\lim_{n \to \infty} a_n = L$ \iff $(\forall n \in {^{*}\mathn} \setminus \mathbb{N})[{^*a_n} \approx L].$
\end{corollary}
\begin{proof}
Take $X = \mathbb{N}$ and $f(n) = a_n$. Then the proof follows directly from Theorem \ref{T: NSLI}.
\end{proof}
\begin{examples}$ $
\begin{quote}
\begin{description}
\item[(i)] $\lim_{n \to \infty} \frac{\sqrt{n + 1}}{n} = 0$. Indeed, let $\nu \in {^{*}\mathn} \setminus \mathbb{N}$. Then $\frac{\sqrt{\nu + 1}}{\nu} = \frac{\sqrt{\nu}}{\nu}\sqrt{1 + \frac{1}{\nu}} = \frac{1}{\sqrt{\nu}}\sqrt{1 + \frac{1}{\nu}} \approx 0$, since $\frac{1}{\sqrt{\nu}} \approx 0$ and $\sqrt{1 + \frac{1}{\nu}} \approx 1$.
\item[(ii)] $\lim_{n \to \infty} \frac{n + 5}{n + 3} = 1$. Indeed, let $\nu \in {^{*}\mathn} \setminus \mathbb{N}$. Then $\frac{\nu + 5}{\nu + 3} = \frac{ 1 + \frac{5}{\nu}}{1 + \frac{3}{\nu}} \approx 1$.
\end{description}
\end{quote}
\end{examples}
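\begin{example}
Corollary \ref{C: NSLS} detects divergence as well. The sequence $a_n = (-1)^n$ has no limit: pick $\nu \in {^{*}\mathn} \setminus \mathbb{N}$; by the Transfer Principle either $\nu$ or $\nu + 1$ is even, so among the infinitely large hypernaturals ${^*a_n}$ takes both of the values $1$ and $-1$, and hence no single $L$ satisfies ${^*a_n} \approx L$ for all $n \in {^{*}\mathn} \setminus \mathbb{N}$.
\end{example}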
\section{Continuity}\label{S: Continuity}
We characterize ordinary continuity (both at a point and on a set) and then discuss the more difficult characterization of uniform continuity.
Let $r \in X$ and $X \subseteq \mathbb{R}$. Recall the following standard definitions:
\begin{itemize}
\item $f: X \to \mathbb{C}$ is \emph{continuous at the point $r$} if, by definition, for all $\varepsilon \in \mathbb{R}_+$, there exists $\delta \in \mathbb{R}_+$ such that for all $x \in X$, if $| x - r |< \delta$, then $|f(x) - f(r)| < \varepsilon$.
\item Consequently, $f: X \to \mathbb{C}$ is \emph{continuous on the set $X$} if, by definition, for all $r \in X$ and $\varepsilon \in \mathbb{R}_+$, there exists $\delta \in \mathbb{R}_+$ such that for all $x \in X$, if $|x - r|<\delta$, then $|f(x) - f(r)|< \varepsilon$.
\end{itemize}
Note that countable formulations may be made as before, and we shall freely use them.
\begin{theorem}[Continuity]\label{T: NSC}\index{Continuity!at a Point}
Let $X \subseteq \mathbb{R}$, $r \in X$, and $f: X \to \mathbb{C}$. The following statements are equivalent:
\begin{quote}
\begin{description}
\item[(a)] $f$ is continuous at the point $r$.
\item[(b)] ${^{*}f}(r + dx) \approx {^{*}f}(r)$ for all infinitesimals $dx$ with $r + dx \in {^{*}X}$.
\item[(c)] ${^{*}f}(x) \approx {^{*}f}(r)$ for all $x \in {^{*}X}$ with $x \approx r$.
\end{description}
\end{quote}
\end{theorem}
\begin{proof}
Since $r \in X$ by assumption, we have ${^{*}f}(r) = f(r)$. The equivalence of \textbf{(a)} and \textbf{(b)} follows by letting $f(r) = L$ and applying Theorem \ref{T: NSL}. The equivalence of \textbf{(b)} and \textbf{(c)} is immediate by letting $x =: r + dx$.
\end{proof}
\begin{corollary}\index{Continuity!on a Set}
Let $X \subseteq \mathbb{R}$ and $f: X \to \mathbb{C}$. Then $f$ is continuous on the set $X$ \iff ${^{*}f}(x) \approx f(r)$ for all $r \in X$ and $x \in {^{*}X}$ such that $x \approx r$.
\end{corollary}
\begin{remark}[Reduction of Quantifiers]\index{Reduction of Quantifiers}
We formalize both standard and non-standard characterizations of continuity with our focus on comparing the quantifiers. Observe that $f: X \to \mathbb{R}$ is continuous on the set $X \subset \mathbb{R}$ if:
\begin{equation}\label{E: CSET}
(\forall r \in X)(\forall \varepsilon \in \mathbb{R}_+)(\exists \delta \in \mathbb{R}_+)(\forall x \in X)[ |x - r| < \delta \Rightarrow |f(x) - f(r)| < \varepsilon].
\end{equation}
Observe the non-standard characterization:
\begin{equation}\label{E: NSCSET}
(\forall r \in X)(\forall x \in {^{*}X})[x \approx r \Rightarrow {^{*}f}(x) \approx f(r)].
\end{equation}
Again we notice that while the former has four non-commuting quantifiers, the latter has two \emph{commuting} quantifiers.
\end{remark}
We again turn to some examples to demonstrate our characterization.
\begin{examples}\label{E: Cont}$ $
\begin{quote}
\begin{description}
\item[(i)] The function $f: \mathbb{R} \to \mathbb{R}$ defined by $f(x) = x$ is continuous on $\mathbb{R}$. To show this, let $r \in \mathbb{R}$ be fixed and arbitrary. Suppose $x \in {^{*}\mathr}$ such that $x \approx r$. Then ${^{*}f}(x) = x \approx r = {^{*}f}(r)$, and by part \textbf{(c)} of Theorem \ref{T: NSC} $f(x) = x$ is continuous at $r \in \mathbb{R}$. Since $r$ was arbitrary, $f(x) = x$ is continuous on $\mathbb{R}$.
\item[(ii)] The function $f: \mathbb{R}_+ \to \mathbb{R}$ defined by $f(x) = \frac{1}{x}$ is continuous on $\mathbb{R}_+$. Let $r \in \mathbb{R}_+$ be fixed and arbitrary. Let $dx$ be a non-zero infinitesimal so that $r + dx \in {^{*}\mathr}_+$. Then $\ensuremath{{{\rm{st}}}}[{^{*}f}(r + dx) - {^{*}f}(r)] = \ensuremath{{{\rm{st}}}}[\frac{1}{r + dx} - \frac{1}{r}] = \ensuremath{{{\rm{st}}}}[\frac{-dx}{r^2 + rdx}] = \frac{\ensuremath{{{\rm{st}}}}[-dx]}{\ensuremath{{{\rm{st}}}}[r^2 + rdx]} = \frac{0}{r^2} = 0$. Therefore, ${^{*}f}(r + dx) \approx {^{*}f}(r)$ so by part \textbf{(b)} of Theorem \ref{T: NSC} $f(x) = \frac{1}{x}$ is continuous at $r \in \mathbb{R}_+$. As $r$ was arbitrary, $f(x) = \frac{1}{x}$ is continuous on $\mathbb{R}_+$.
\end{description}
\end{quote}
\end{examples}
\section{Uniform Continuity}\label{S: Uniform Continuity}
As was shown, the continuity of a function (both at a point and on a set) followed directly by substitution from the work done on limits. Unfortunately we are not so lucky with respect to \textbf{uniform continuity}. As with ordinary continuity, we recall the formalized standard definition of uniform continuity. The function $f: X \to \mathbb{C}$ is uniformly continuous on $X \subseteq \mathbb{R}$ if, by definition, for all $\varepsilon \in \mathbb{R}_+$ there exists $\delta \in \mathbb{R}_+$ such that for all $x, r \in X$, $|x - r|< \delta$ implies $|f(x) - f(r)| < \varepsilon$.
\begin{theorem}[Uniform Continuity]\label{T: NSUC}\index{Uniform Continuity}
Let $X \subseteq \mathbb{R}$. The function $f: X \to \mathbb{C}$ is uniformly continuous on $X$ \iff ${^{*}f}(x) \approx {^{*}f}(r)$ for all $x, r \in {^{*}X}$ such that $x \approx r$.
\end{theorem}
\begin{proof}$ $
\begin{description}
\item[($\Rightarrow$)] Suppose $f$ is uniformly continuous on $X \subseteq \mathbb{R}$. Let $m \in \mathbb{N}$, then there exists $n \in \mathbb{N}$ such that for all $u, v \in X$, $|u - v| < \frac{1}{n}$ implies $|f(u) - f(v)| < \frac{1}{m}$. Suppose $x = \ensuremath{\langle} x_\varep \ensuremath{\rangle}$ and $r = \ensuremath{\langle} r_\varep \ensuremath{\rangle}$ are in ${^{*}X}$ such that $x \approx r$. We must show that ${^{*}f}(x) \approx {^{*}f}(r)$. Indeed, $A_n =: \{ \varepsilon \in \mathbb{R}_+ : |x_\varep - r_\varep| < \frac{1}{n}\} \in \mathcal{U}$. As $f$ is uniformly continuous, $A_n \subset B_m =: \{ \varepsilon \in \mathbb{R}_+ : |f(x_\varep) - f(r_\varep)| < \frac{1}{m}\}$ and so $B_m \in \mathcal{U}$. Therefore, $|{^{*}f}(x) - {^{*}f}(r)| < \frac{1}{m}$ for all $m \in \mathbb{N}$ which implies ${^{*}f}(x) \approx {^{*}f}(r)$.
\item[($\Leftarrow$)] Assume for all $x, r \in {^{*}X}$, $x \approx r$ implies ${^{*}f}(x) \approx {^{*}f}(r)$. Suppose to the contrary that $f$ is not uniformly continuous on $X$. That is, there exists $\varep_{0} \in \mathbb{R}_+$ such that for all $\varepsilon \in \mathbb{R}_+$ there exists $u, v \in X$ such that $|u - v| < \varepsilon$ and $|f(u) - f(v)| \geq \varep_{0}$. For each $\varepsilon \in \mathbb{R}_+$ the set $A_{\varep} =: \{ (u,v) \in X \times X : |u - v| < \varepsilon$ and $|f(u) - f(v)| \geq \varep_{0}\}$ is non-empty. By the Axiom of Choice there exist nets $(x_\varep)$ and $(r_\varep)$ such that $(x_\varep, r_\varep) \in A_{\varep}$ for all $\varepsilon \in \mathbb{R}_+$. Therefore, $x =: \ensuremath{\langle} x_\varep \ensuremath{\rangle}$ and $r =: \ensuremath{\langle} r_\varep \ensuremath{\rangle}$ are each in ${^{*}X}$. So we have $|x - r| < \ensuremath{\langle} \varepsilon \ensuremath{\rangle}$, giving $x \approx r$ (since $\ensuremath{\langle} \varepsilon \ensuremath{\rangle} \approx 0$) and $|{^{*}f}(x) - {^{*}f}(r)| \geq \varep_{0}$. Hence, ${^{*}f}(x) \not\approx {^{*}f}(r)$, a contradiction.
\end{description}
\end{proof}
\begin{remark}[Reduction of Quantifiers]\index{Reduction of Quantifiers}
Once again, we formalize both standard and non-standard characterizations of uniform continuity in order to compare the quantifiers. Recall that $f: X \to \mathbb{R}$ is uniformly continuous on $X \subseteq \mathbb{R}$ if:
\begin{equation}\label{E: UC}
(\forall \varepsilon \in \mathbb{R}_+)(\exists \delta \in \mathbb{R}_+)(\forall x,r \in X)[ |x - r| < \delta \Rightarrow |f(x) - f(r)| < \varepsilon].
\end{equation}
Likewise, the non-standard characterization from Theorem \ref{T: NSUC} states:
\begin{equation}\label{E: NSUC}
(\forall r, x \in {^{*}X})[x \approx r \Rightarrow {^{*}f}(x) \approx {^{*}f}(r)].
\end{equation}
Yet again, the former has four non-commuting quantifiers while the latter has two \emph{commuting} quantifiers.
Notice that in the non-standard characterization of continuity the points of continuity $r$ were required to be in $\mathbb{R}$, whereas in uniform continuity they are allowed to be in the non-standard extension ${^{*}\mathr}$.
\end{remark}
\begin{examples}$ $
\begin{quote}
\begin{description}
\item[(i)]Let $f: \mathbb{R}_+ \to \mathbb{R}$ be defined by $f(x) = \frac{1}{x}$. By part \textbf{(ii)} of Example \ref{E: Cont} $f(x)$ is continuous on $\mathbb{R}_+$. However, it is \emph{not} uniformly continuous on $\mathbb{R}_+$. Indeed, let $r = \rho$ and $x = \rho^2$ where $\rho = \ensuremath{\langle} \varepsilon \ensuremath{\rangle}$ as in Example \ref{E: Canonical Infinitesimal}. Certainly they are both in ${^{*}\mathr}_+$ so they are in the domain of ${^{*}f}(x) = \frac{1}{x}$. Furthermore, $\rho \approx \rho^2$ but $| {^{*}f}(\rho) - {^{*}f}(\rho^2) | = |\frac{1}{\rho} - \frac{1}{\rho^2} | = \frac{1}{\rho^2} - \frac{1}{\rho} = \frac{1- \rho}{\rho^2}$ is not infinitesimal since $1 - \rho$ is finite but not infinitesimal and $\frac{1}{\rho^2}$ is infinitely large.
\item[(ii)] Let $f: \mathbb{R} \to \mathbb{R}$ be defined by $f(x) = \sin x$. Then $f(x)$ is uniformly continuous on $\mathbb{R}$. Indeed, let $x, r \in {^{*}\mathr}$ such that $x \approx r$. We must show that $\star\sin x \approx \star\sin r$. We have $| \star\sin x - \star\sin r | = 2 \left| \star\sin \frac{x - r}{2} \right| \left| \star\cos \frac{x + r}{2} \right|$ using trigonometric identities. As $\frac{x-r}{2} \approx 0$, by part \textbf{(ii)} of Example \ref{E: Lim} we have $\star\sin \frac{x-r}{2} \approx 0$. Also, we have $\star\cos \frac{x+r}{2}$ is a finite number since $\star\cos x$ is bounded (Examples \ref{E: NSF} part \textbf{(iii)}), which gives that $| \star\sin x - \star\sin r | \approx 0$.
\item[(iii)] Let $f: \mathbb{R} \to \mathbb{R}$ be defined by $f(x) = e^x$. It is well known that $f(x)$ is continuous on $\mathbb{R}$, however, it is not uniformly so. Indeed, select $\frac{1}{\rho}$ and $\frac{1}{\rho} + \rho$ in $\mathcal{L}({^{*}\mathr})$. We have $\frac{1}{\rho} \approx \frac{1}{\rho} + \rho$ but ${^{*}f}\left(\frac{1}{\rho} + \rho\right) - {^{*}f}\left(\frac{1}{\rho}\right) = e^{\frac{1}{\rho}}\left(e^\rho - 1\right) > e^{\frac{1}{\rho}} \frac{\rho}{2} \not\approx 0$. The inequality is justified via the asymptotic expansion of the Taylor Series for $e^\rho - 1$. In fact, $e^{\frac{1}{\rho}} \frac{\rho}{2}$ is infinitely large: letting $g(x) =: e^{\frac{1}{x}}\frac{x}{2}$ we have $$\lim_{x \to 0^+} g(x) = \frac{1}{2} \lim_{x \to \infty} \frac{e^x}{x} = \frac{1}{2} \lim_{x \to \infty} \frac{e^x}{1} = \infty,$$ where the penultimate step is accomplished by l'Hospital's rule; hence for each $n \in \mathbb{N}$ we have $g(x) > n$ for all sufficiently small $x \in \mathbb{R}_+$, and by the Transfer Principle ${^{*}g}(\rho) = e^{\frac{1}{\rho}} \frac{\rho}{2} > n$, since $\rho$ is a positive infinitesimal.
\end{description}
\end{quote}
\end{examples}
\begin{theorem}
Let $X \subseteq \mathbb{R}$ be a compact set and $f: X \to \mathbb{C}$. If $f$ is continuous on $X$, then $f$ is uniformly continuous on $X$.
\end{theorem}
\begin{proof}
As $X$ is compact we have ${^{*}X} \subseteq \mu(X)$ by Theorem \ref{T: NSCoS}. Since $\ensuremath{{{\rm{st}}}}(\mu(X)) = X$ and $X \subseteq \ensuremath{{{\rm{st}}}}({^{*}X})$ trivially we have $\ensuremath{{{\rm{st}}}}({^{*}X}) = X$. Suppose $x, y \in {^{*}X}$ such that $x \approx y$. Then $c =: \ensuremath{{{\rm{st}}}}(x) = \ensuremath{{{\rm{st}}}}(y) \in X$. Therefore ${^{*}f}(x) \approx f(c) \approx {^{*}f}(y)$ since $f$ assumed continuous at $c$. Therefore $f$ is uniformly continuous on $X$.
\end{proof}
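\begin{example}
For instance, $f(x) = \frac{1}{x}$ is uniformly continuous on the compact set $[1, 2]$ by the theorem above, even though, as part \textbf{(i)} of the preceding examples shows, it fails to be uniformly continuous on the non-compact set $\mathbb{R}_+$.
\end{example}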
\section{Derivatives}\label{S: Derivatives}
Let $X \subseteq \mathbb{R}$ and $c \in \mathbb{R}$ be a cluster point of $X$. Recall that a function $f: X \to \mathbb{C}$ is \emph{differentiable at $c$} with derivative $L \in \mathbb{C}$ if, by definition, for all $\varepsilon \in \mathbb{R}_+$ there exists $\delta \in \mathbb{R}_+$ such that if $x \in X$ is such that $0 < | x - c | < \delta$, then $$\left | \frac{f(x) - f(c)}{x - c} - L \right | < \varepsilon.$$
\begin{theorem}[Characterization of Derivatives]\label{T: NSD}\index{Derivative}
Let $X \subseteq \mathbb{R}$, and consider the function $f: X \to \mathbb{C}$. The following are equivalent:
\begin{quote}
\begin{description}
\item[(a)] The function $f$ is differentiable at $c \in \mathbb{R}$
\item[(b)] There is a number $L \in \mathbb{C}$ such that for all $x \approx c$, $x \not= c$ we have: $$\frac{{^{*}f}(x) - {^{*}f}(c)}{x - c} \approx L.$$ Furthermore, when $L$ exists we have $f'(c) = L$.
\item[(c)] $\ensuremath{{{\rm{st}}}} \left ( \frac{{^{*}f}(c + dx) - {^{*}f}(c)}{dx} \right ) = L$ for all non-zero infinitesimals $dx$ such that $c + dx \in {^{*}X}$.
\end{description}
\end{quote}
\end{theorem}
\begin{proof} We prove the equivalence of \textbf{(a)} and \textbf{(b)}.
\begin{description}
\item[($\Rightarrow$)] Suppose $f : X \to \mathbb{C}$ is differentiable at $c \in \mathbb{R}$ with derivative $L = f'(c) \in \mathbb{C}$. Then $\lim_{x \to c} \frac{f(x) - f(c)}{x - c} = L.$ By part \textbf{(c)} of Theorem \ref{T: NSL} we have $\frac{{^{*}f}(x) - {^{*}f}(c)}{ x - c } \approx L = f'(c)$ for all $x \in {^{*}\mathr}$ such that $x \approx c$.
\item[($\Leftarrow$)] Suppose there exists $L \in \mathbb{C}$ such that $\frac{{^{*}f}(x) - {^{*}f}(c)}{x-c} \approx L = f'(c)$ for all $x \approx c$, $x \not= c$. By Theorem \ref{T: NSL} we have that $\lim_{x \to c}\frac{f(x) - f(c)}{x -c} = L = f'(c)$.
\end{description}
As for the equivalence of \textbf{(b)} and \textbf{(c)} we refer to the equivalence of \textbf{(b)} and \textbf{(d)} from Theorem \ref{T: NSL}.
\end{proof}
\begin{corollary}
Let $X \subseteq \mathbb{R}$ and consider $f: X \to \mathbb{C}$. If $f$ is differentiable at $x \in X$, then $f'(x) = \ensuremath{{{\rm{st}}}} \left ( \frac{ {^{*}f}(x + dx) - {^{*}f}(x) } {dx} \right )$ for all $dx \approx 0$, $dx \not= 0$ such that $x + dx \in {^{*}X}$.
\end{corollary}
\begin{remark}[Reduction of Quantifiers]\index{Reduction of Quantifiers}
Recall for $f: X \to \mathbb{C}$, $L \in \mathbb{C}$, and $c \in \mathbb{R}$ a cluster point of $X$, $L$ is the derivative of $f(x)$ at $c$ if:
\begin{equation}
(\forall \varepsilon \in \mathbb{R}_+)(\exists \delta \in \mathbb{R}_+)(\forall x \in X)\left [0 < |x - c| < \delta \Rightarrow \left |\frac{f(x)-f(c)}{x-c} - L \right| < \varepsilon \right ]
\end{equation}
The formalization from Theorem \ref{T: NSD} is:
\begin{equation}
(\forall x \in {^{*}X})\left [ x \approx c \textnormal{ and } x \not= c \Rightarrow \frac{{^{*}f}(x) - {^{*}f}(c)}{x - c} \approx L \right ]
\end{equation}
As with limits and continuity before, we see that the non-standard characterization has two fewer quantifiers and that those quantifiers commute (though in this case trivially).
\end{remark}
\begin{examples}[Computations]$ $
\begin{quote}
\begin{description}
\item[(i)] Let $f(x) = x^3$. Let $dx \approx 0$, $dx \not= 0$ such that $x + dx \in {^{*}X}$ and apply the above corollary so that:
\begin{align}
f'(x) &= \ensuremath{{{\rm{st}}}} \left ( \frac{(x + dx)^3 - x^3}{dx} \right )\notag\\
&= \ensuremath{{{\rm{st}}}} \left ( \frac{ x^3 + 3x^2 dx + 3x dx^2 + dx^3 - x^3}{dx} \right )\notag\\
&= \ensuremath{{{\rm{st}}}} \left ( \frac{dx ( 3x^2 + 3xdx + dx^2)}{dx} \right )\notag\\
&= \ensuremath{{{\rm{st}}}}(3x^2 + 3xdx + dx^2) = 3x^2.\notag
\end{align}
\item[(ii)] (Power Rule) Let $f(x) = x^n$. Let $dx \approx 0$, $dx \not= 0$ such that $x + dx \in {^{*}X}$ and apply the above corollary so that:
\begin{align}
f'(x) &= \ensuremath{{{\rm{st}}}} \left ( \frac{(x+dx)^n - x^n}{dx} \right )\notag\\
&= \ensuremath{{{\rm{st}}}} \left ( \frac{\sum_{i=0}^{n} {n \choose i} x^{n-i} dx^{i} - x^n}{dx} \right )\notag\\
&= \ensuremath{{{\rm{st}}}} \left ( \frac{\sum_{i=1}^{n} {n \choose i} x^{n-i} dx^{i}}{dx} \right )\notag\\
&= \ensuremath{{{\rm{st}}}} \left ( \frac{ dx ( \sum_{i=1}^{n} {n \choose i} x^{n-i} dx^{i-1} )}{dx} \right )\notag\\
&= \ensuremath{{{\rm{st}}}} \left ( \sum_{i=1}^{n} {n \choose i} x^{n-i} dx^{i-1} \right )\notag\\
&= \ensuremath{{{\rm{st}}}} \left ( {n \choose 1} x^{n-1} + \sum_{i = 2}^{n} {n \choose i}x^{n-i} dx^{i - 1} \right )\notag\\
&= \ensuremath{{{\rm{st}}}} (n x^{n - 1} ) + \ensuremath{{{\rm{st}}}} \left ( \sum_{i = 2}^{n} {n \choose i} x^{n- i} dx^{i - 1} \right ) = n x^{n - 1}.\notag
\end{align}
\item[(iii)] Let $f(x) = \sin(x)$. Let $dx \approx 0$, $dx \not= 0$ such that $x + dx \in {^{*}X}$ and apply the above corollary so that:
\begin{align}
f'(x) &= \ensuremath{{{\rm{st}}}} \left ( \frac{ \sin(x + dx) - \sin(x)}{dx} \right )\notag\\
&= \ensuremath{{{\rm{st}}}} \left ( \frac{\sin(x)\cos(dx) + \cos(x)\sin(dx) - \sin(x)}{dx} \right )\notag\\
&= \ensuremath{{{\rm{st}}}} \left ( \sin(x)\frac{(\cos(dx) - 1)}{dx} + \cos(x)\frac{\sin(dx)}{dx} \right )\notag
\end{align}
Now, $\ensuremath{{{\rm{st}}}} \left ( \frac{\sin(dx)}{dx} \right ) = \lim_{x \to 0} \frac{\sin(x)}{x} = 1$ and $\ensuremath{{{\rm{st}}}} \left ( \frac{\cos(dx) - 1}{dx} \right ) = \lim_{x \to 0} \frac{ \cos(x) - 1}{x} = 0$, both by l'Hospital's rule. Continuing the calculations on $f'(x)$ we have: $$\ensuremath{{{\rm{st}}}} \left ( \sin(x)\frac{(\cos(dx) - 1)}{dx} + \cos(x)\frac{\sin(dx)}{dx} \right ) = 0 + \cos(x) = \cos(x).$$
\end{description}
\end{quote}
\end{examples}
We now demonstrate the ease with which this characterization allows us to prove the usually burdensome chain rule.
\begin{corollary}[Chain Rule]\label{C: Chain}
Let $X \subseteq \mathbb{R}$ and let $Y \subseteq \mathbb{R}$ be such that $g[X] \subseteq Y$. If $g: X \to \mathbb{C}$ is differentiable at $c \in \mathbb{R}$ and $f: Y \to \mathbb{C}$ is differentiable at $g(c)$, then $f \circ g$ is differentiable at $c$ and $(f \circ g)'(c) = f'(g(c))g'(c)$.
\end{corollary}
\begin{proof}
We show that for all $x \approx c$ with $x \not= c$, $$\frac{{^{*}f}({^{*}g}(x)) - {^{*}f}({^{*}g}(c))}{x - c} \approx f'(g(c))g'(c).$$ Indeed, let $x \approx c$. If ${^{*}g}(x) = {^{*}g}(c)$ then we are done. In the case that they are not equal, by Theorem \ref{T: NSD}, we have $$\frac{{^{*}f}({^{*}g}(x)) - {^{*}f}({^{*}g}(c))}{x - c} = \left (\frac{{^{*}f}({^{*}g}(x)) - {^{*}f}({^{*}g}(c))}{{^{*}g}(x) - {^{*}g}(c)} \right ) \left (\frac{{^{*}g}(x) - {^{*}g}(c)}{x - c}\right ) \approx f'(g(c))g'(c).$$
\end{proof}
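\begin{example}
Combining Corollary \ref{C: Chain} with parts \textbf{(i)} and \textbf{(iii)} of the computations above: the function $h(x) = \sin(x^3)$ is differentiable on $\mathbb{R}$ with $h'(x) = \cos(x^3) \cdot 3x^2$.
\end{example}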
\section{Sequences of Functions}\label{S: Sequences of Functions}
We now turn our attention to sequences of functions, and we explore what we may say about them in the language of non-standard analysis. We recall what it means for a sequence of functions to converge in standard analysis.
Let $X \subseteq \mathbb{R}$, $S \subseteq X$, $f_n, f : X \to \mathbb{C}$, and let $(f_n)$ be a sequence of functions. Recall that $(f_n)$ converges pointwise to $f$ on $S$ if (by definition),
\begin{equation}\label{E: SequenceCont}
(\forall x \in S)(\forall \varepsilon \in \mathbb{R}_+)(\exists K \in \mathbb{N})(\forall n \in \mathbb{N})[n > K \Rightarrow | f_n(x) - f(x) | < \varepsilon].
\end{equation}
Also recall that $(f_n)$ converges to $f$ \emph{uniformly} ($(f_n) \rightrightarrows f$) on $S$ if (by definition),
\begin{equation}\label{E: SequenceUCont}
(\forall \varepsilon \in \mathbb{R}_+)(\exists K \in \mathbb{N})(\forall x \in S)(\forall n \in \mathbb{N})[n > K \Rightarrow | f_n(x) - f(x) | < \varepsilon].
\end{equation}
\begin{theorem}[Convergence of a Sequence of Functions]\label{T: NSCSF}\index{Limit!of a Sequence of Functions}
Let $X \subseteq \mathbb{R}$, $S \subseteq X$, and suppose $f_n, f: X \to \mathbb{C}$ with $(f_n)$ a sequence of functions. Then the following are equivalent:
\begin{quote}
\begin{description}
\item[(a)] $(f_n)$ converges to $f$ on $S$.
\item[(b)] $(\forall x \in S)(\forall n \in {^{*}\mathn} \setminus \mathbb{N})[{^{*}f}_n(x) \approx f(x)]$.
\end{description}
\end{quote}
\end{theorem}
\begin{proof}
We have $(f_n) \to f$ on $S$ \iff $(\forall x \in S)[\lim_{n \to \infty} f_n(x) = f(x)]$ \iff (by Corollary \ref{C: NSLS}) $(\forall x \in S)(\forall n \in {^{*}\mathn} \setminus \mathbb{N})[{^{*}f}_n(x) \approx f(x)]$.
\end{proof}
\begin{theorem}[Uniform Convergence of a Sequence of Functions]\label{T: NSUCSF}\index{Uniform Convergence of a Sequence of Functions}
Let $X \subseteq \mathbb{R}$, $S \subseteq X$, and suppose $f_n, f: X \to \mathbb{C}$ with $(f_n)$ a sequence of functions. Then the following are equivalent:
\begin{quote}
\begin{description}
\item[(a)] $(f_n)$ converges uniformly to $f$ on $S$.
\item[(b)] $(\forall x \in {^{*}S})(\forall n \in {^{*}\mathn} \setminus \mathbb{N})[{^{*}f}_n(x) \approx {^{*}f}(x)]$.
\end{description}
\end{quote}
\end{theorem}
\begin{proof}$ $
\begin{description}
\item[($\Rightarrow$)] Let $(f_n) \to f$ uniformly on $S$. So, (\ref{E: SequenceUCont}) holds. Fix $\varepsilon \in \mathbb{R}_+$. Then there exists $K \in \mathbb{N}$ such that $(\forall x \in S)(\forall n \in \mathbb{N})[n > K \Rightarrow |f_n(x) - f(x)| < \varepsilon].$ By the Transfer Principle we obtain, $(\forall x \in {^{*}S})(\forall n \in {^{*}\mathn})[n > K \Rightarrow |{^{*}f}_n(x) - {^{*}f}(x)| < \varepsilon].$ Certainly if $n \in {^{*}\mathn} \setminus \mathbb{N}$ then $n > K$ for any $K \in \mathbb{N}$. Thus, $(\forall x \in {^{*}S})(\forall n \in {^{*}\mathn} \setminus \mathbb{N})[|{^{*}f}_n(x) - {^{*}f}(x)| < \varepsilon],$ which gives $(\forall x \in {^{*}S})(\forall n \in {^{*}\mathn} \setminus \mathbb{N})[{^{*}f}_n(x) \approx {^{*}f}(x)]$, as desired.
\item[($\Leftarrow$)] Assume \textbf{(b)}. If $\varepsilon \in \mathbb{R}_+$ (arbitrarily fixed), then $(\forall x \in {^{*}S})(\forall n \in {^{*}\mathn} \setminus \mathbb{N})[|{^{*}f}_n(x) - {^{*}f}(x)| < \varepsilon]$. So trivially, $(\exists K \in {^{*}\mathn})(\forall x \in {^{*}S})(\forall n \in {^{*}\mathn})[n > K \Rightarrow |{^{*}f}_n(x) - {^{*}f}(x)| < \varepsilon]$ (For instance pick $K \in {^{*}\mathn} \setminus \mathbb{N}$). Apply the Transfer Principle so that $(\exists K \in \mathbb{N})(\forall x \in S)(\forall n \in \mathbb{N})[n > K \Rightarrow |f_n(x) - f(x)| < \varepsilon]$. As $\varepsilon \in \mathbb{R}_+$ was arbitrary, we have $(f_n) \rightrightarrows f$.
\end{description}
\end{proof}
\begin{remark}[Reduction of Quantifiers]\index{Reduction of Quantifiers}
Comparing (\ref{E: SequenceCont}) with \textbf{(b)} from Theorem \ref{T: NSCSF}, and (\ref{E: SequenceUCont}) with \textbf{(b)} from Theorem \ref{T: NSUCSF} we note the now common result. There are two fewer quantifiers in the non-standard characterizations, and these quantifiers commute.
\end{remark}
\begin{theorem}
Suppose $(f_n)$ is a sequence of continuous functions on $X \subseteq \mathbb{R}$ for all $n \in \mathbb{N}$, and that $(f_n) \rightrightarrows f$ on $X$. Then $f$ is continuous on $X$.
\end{theorem}
\begin{proof}
Let $x \in {^{*}X}$ and $c \in X$ with $x \approx c$. Our goal is to show ${^{*}f}(x) \approx f(c)$. By assumption, for all $n \in \mathbb{N}$, ${^{*}f}_n(x) \approx f_n(c)$.
We claim there exists $\nu \in {^{*}\mathn} \setminus \mathbb{N}$ such that ${^{*}f}_\nu(x) \approx {^{*}f}_\nu(c)$. Indeed, consider $A =: \{ n \in {^{*}\mathn} : | {^{*}f}_n(x) - {^{*}f}_n(c)| < \frac{1}{n} \}$. We know $A \not= \varnothing$ since $\mathbb{N} \subseteq A$ (by assumption). So, $A$ contains arbitrarily large finite numbers, and by the Overflow Principle (Corollary \ref{C: Spilling Principles}), $A$ contains an infinitely large $\nu$.
Having established $\nu \in {^{*}\mathn} \setminus \mathbb{N}$ such that ${^{*}f}_\nu(x) \approx {^{*}f}_\nu(c)$ we have $$|{^{*}f}(x) - f(c)| \leq |{^{*}f}(x) - {^{*}f}_\nu(x)| + |{^{*}f}_\nu(x) - {^{*}f}_\nu(c)| + |{^{*}f}_\nu(c) - f(c)|.$$ The first and third summands are infinitesimal since $\nu \in {^{*}\mathn} \setminus \mathbb{N}$ and $(f_n) \rightrightarrows f$, and the second summand is infinitesimal by the above claim. Hence, $|{^{*}f}(x) - f(c)|$ is infinitesimal.
\end{proof}
\begin{examples}$ $
\begin{quote}
\begin{description}
\item[(i)] Consider $(f_n) = (x^n)$. Notice that $f_n(x)$ converges to $0$ on $[0, 1)$. We first show that $(f_n)$ is not uniformly convergent on $[0,1)$. Let $\nu \in {^{*}\mathn} \setminus \mathbb{N}$, and define $x =: 1 - \frac{1}{\nu} \in {\star[0,1)}$. Then $x^\nu = (1 - \frac{1}{\nu})^\nu \approx \lim_{n \to \infty} (1 - \frac{1}{n})^n = e^{-1} \not= 0$. Therefore, $x^\nu \not\approx 0$ and so $(f_n)$ is not uniformly convergent on $[0, 1)$.
\item[(ii)] Consider $(f_n) = (x^n)$, on the interval $[0, \delta)$ for any $0 < \delta < 1$. We claim that $(f_n) \rightrightarrows 0$ on $[0, \delta)$. Note ${\star[0, \delta)} = \{ x \in {^{*}\mathr} : 0 \leq x < \delta\}$ where $\delta$ is standard. Let $x \in {\star[0, \delta)}$ and $\nu \in {^{*}\mathn} \setminus \mathbb{N}$. Since $0 \leq x < \delta < 1$, we have $0 \leq x^\nu < \delta^\nu \approx 0$, therefore $x^\nu \approx 0$, giving $x^n \rightrightarrows 0$ on $[0, \delta)$ by Theorem \ref{T: NSUCSF}.
\item[(iii)] Consider $(f_n) = (\frac{1}{n}\sin(nx))$. We show that $(f_n)$ is uniformly convergent to $0$ on $\mathbb{R}$. Note that $|{^*\sin(nx)}| \leq 1$ for all $n \in {^{*}\mathn}$ and $x \in {^{*}\mathr}$. Choose $x \in {^{*}\mathr}$ and $n \in {^{*}\mathn} \setminus \mathbb{N}$. Then $|\frac{1}{n}{^*\sin(nx)}| \leq \frac{1}{n} \approx 0$. As $x$ and $n$ were arbitrary, $(f_n) \rightrightarrows 0$ on $\mathbb{R}$ by Theorem \ref{T: NSUCSF}.
\item[(iv)] Consider $(f_n) = (e^{-(x-n)^2})$. We illustrate the subtle difference between the convergence and uniform convergence of a sequence of functions.
We first show that $(f_n)$ converges to $0$ on $\mathbb{R}$. Let $x \in \mathbb{R}$ and $\nu \in {^{*}\mathn} \setminus \mathbb{N}$ be arbitrarily chosen and fixed. Then $e^{-(x-\nu)^2} \approx 0$ since $\nu$ is infinitely large while $x$ is finite, therefore $(x - \nu)$ is infinitely large. Squaring it only improves our situation.
We now show that $(f_n)$ is not uniformly convergent to $0$ on $\mathbb{R}$. Let $\nu \in {^{*}\mathn} \setminus \mathbb{N}$ be arbitrarily chosen and fixed. Then let $x = \nu \in {^{*}\mathr}$ so that $e^{-(x - \nu)^2} = e^{-(\nu - \nu)^2} = e^0 = 1 \not\approx 0$. Therefore, $(f_n)$ is not uniformly convergent to $0$ on $\mathbb{R}$.
The key to ordinary (pointwise) convergence was that $x$ was required to be \emph{standard}, while in uniform convergence $x$ is allowed to range over ${^{*}\mathr}$. Therefore, $x$ could, in effect, keep up with $\nu \in {^{*}\mathn} \setminus \mathbb{N}$.
\item[(v)] Consider $(f_n) = (\frac{1}{n}e^{-(x-n)^2})$. We show that $(f_n)$ is uniformly convergent to $0$ on $\mathbb{R}$. We know that for all $x \in {^{*}\mathr}$ and for all $\nu \in {^{*}\mathn} \setminus \mathbb{N}$, $e^{-(x-\nu)^2} \leq 1$. Now, let $x \in {^{*}\mathr}$ and $\nu \in {^{*}\mathn} \setminus \mathbb{N}$ be arbitrarily chosen and fixed. Then $\frac{1}{\nu}e^{-(x-\nu)^2} \leq \frac{1}{\nu} \approx 0$. So by Theorem \ref{T: NSUCSF}, $(f_n) \rightrightarrows 0$ on $\mathbb{R}$.
\end{description}
\end{quote}
\end{examples}
| {
"timestamp": "2008-10-10T07:52:27",
"yymm": "0809",
"arxiv_id": "0809.4814",
"language": "en",
"url": "https://arxiv.org/abs/0809.4814",
"abstract": "We construct the non-standard complex (and real) numbers using the ultrapower method in the spirit of Cauchy's construction of the real numbers. We show that the non-standard complex numbers are a non-archimedean, algebraically closed field, and that the non-standard real numbers are a totally ordered, real-closed, non-archimedean field. We explore the various types of non-standard numbers, and develop the non-standard completeness results (Saturation Principle, Supremum Completeness of Bounded Internal Sets, etc) for $\\starr$. We give non-standard characterizations for such usual topological objects as open, closed, bounded, and compact sets in terms of monads. We also consider such traditional topics of real analysis as limits, continuity, uniform continuity, convergence, uniform convergence, etc. in a non-standard setting. In both topology and real analysis we reduce (and in some cases eliminate) the number of quantifiers in the non-standard setting.",
"subjects": "Classical Analysis and ODEs (math.CA); Logic (math.LO)",
"title": "Reduction of the Number of Quantifiers in Real Analysis through Infinitesimals (Master Thesis, Mathematics Department, California Polytechnic State University, San Luis Obispo)",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109503004293,
"lm_q2_score": 0.7185943805178138,
"lm_q1q2_score": 0.7076796147582964
} |
https://arxiv.org/abs/1612.07365 | A Proximity Measure using Blink Model | This paper proposes a new graph proximity measure. This measure is a derivative of network reliability. By analyzing its properties and comparing it against other proximity measures through graph examples, we demonstrate that it is more consistent with human intuition than competitors. A new deterministic algorithm is developed to approximate this measure with practical complexity. Empirical evaluation by two link prediction benchmarks, one in coauthorship networks and one in Wikipedia, shows promising results. For example, a single parameterization of this measure achieves accuracies that are 14-35% above the best accuracy for each graph of all predictors reported in the 2007 Liben-Nowell and Kleinberg survey. | \section{Approximation Algorithm} \label{sec:algorithm}
We present a deterministic algorithm
that approximates (\ref{eq:metric}) directly.
Without loss of generality (per Section~\ref{sec:basic}),
we describe this algorithm under the conditions of all node weights being 1
and that $G$ is a simple directed graph where parallel edges are already merged.
\subsection{Overall flow} \label{sec:flow}
A \emph{minimal path} from node A to node B is defined as a path from A to B without repeating nodes.
For a finite graph $G$, there are finitely many minimal paths.
Consider a single minimal path $i$ from node A to node B. We define the following\footnote{We
omit A and B from $s_{\textrm{path }i}$ and $\hat{s}_{\textrm{path }i}$ notations to keep formulas concise.
Note that any path refers to a minimal path in $G$ from A to B.}
as the \emph{nominal contribution} of this path to (\ref{eq:metric}).
\begin{equation} \label{eq:nominal}
s_{\textrm{path }i} = - \log \left( 1 - \prod_{\textrm{edge }e\textrm{ on path }i}w_E\left(e\right) \right)
\end{equation}
By the additivity property (\ref{eq:add}), if all minimal paths from A to B are mutually disjoint,
we can compute (\ref{eq:metric}) exactly by summing (\ref{eq:nominal}) over all minimal paths.
Of course this is not true for general graphs where paths from A to B overlap each other
and (\ref{eq:metric}) is less than the sum of $s_{\textrm{path }i}$ values.
However, if we consider only length-1 and length-2 minimal paths, they can never
share an edge with each other, and their nominal contributions can be added according to the additivity property.
Further invoking the monotonicity property (\ref{eq:mono}), we obtain the following inequality.
\begin{equation} \label{eq:bounds}
\sum_{\textrm{path }i\textrm{ with length 1 or 2}} {s_{\textrm{path }i}}
\leq s \left( \text{A},\text{B} \right)
\leq \sum_{\textrm{path }i} {s_{\textrm{path }i}}
\end{equation}
Therefore, the key to approximate (\ref{eq:metric}) is to quantify the contribution
of minimal paths that are longer than 2.
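To make (\ref{eq:nominal}) and the lower bound in (\ref{eq:bounds}) concrete, the following short sketch (our own illustration rather than the implementation used later; the dictionary-of-dictionaries graph representation and the function names are assumptions) computes the nominal contribution of a minimal path and sums it over the length-1 and length-2 paths from A to B.
\begin{verbatim}
import math

# graph[u][v] = w_E(u -> v): weight of the directed edge from u to v.
def nominal_contribution(graph, path):
    """Nominal contribution of one minimal path (given as a node list)."""
    prod = 1.0
    for u, v in zip(path, path[1:]):
        prod *= graph[u][v]
    return -math.log(1.0 - prod)

def short_path_lower_bound(graph, a, b):
    """Sum of nominal contributions over length-1 and length-2 minimal paths."""
    total = 0.0
    if b in graph.get(a, {}):                      # length-1 path a -> b
        total += nominal_contribution(graph, [a, b])
    for mid in graph.get(a, {}):                   # length-2 paths a -> mid -> b
        if mid not in (a, b) and b in graph.get(mid, {}):
            total += nominal_contribution(graph, [a, mid, b])
    return total
\end{verbatim}
By (\ref{eq:bounds}), \texttt{short\_path\_lower\_bound(graph, a, b)} never exceeds $s \left( \text{A},\text{B} \right)$, provided all edge weights lie strictly between 0 and 1.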
We start the approximation by making the following conjecture that the contribution
of each minimal path $i$ is quantifiable as a value $\hat{s}_{\textrm{path }i}$ such that
$s \left( \text{A},\text{B} \right) = \sum_{\textrm{path }i} {\hat{s}_{\textrm{path }i}}$.
We use $G^\prime$ to denote a subgraph of $G$
and $s_{G^\prime} \left( \text{A},\text{B} \right)$ to denote the measure value (\ref{eq:metric}) in $G^\prime$.
\newtheorem{assumption}{Conjecture}
\begin{assumption}\label{assume}
A value $\hat{s}_{\textrm{path }i}$ exists for each minimal path $i$ from node A to node B,
such that these $\hat{s}_{\textrm{path }i}$ values satisfy the following conditions:
\begin{equation} \label{eq:assumption}
\begin{aligned}
\hat{s}_{\textrm{path }i} = s_{\textrm{path }i}, & \textrm{ if path }i\textrm{ has length 1 or 2} \\
0 \leq \hat{s}_{\textrm{path }i} \leq s_{\textrm{path }i}, & \textrm{ if path }i\textrm{ is longer than 2}
\end{aligned}
\end{equation}
\begin{equation} \label{eq:assumption2}
\begin{aligned}
s_{G^\prime} \left( \text{A},\text{B} \right) & \geq \sum_{\textrm{path }i\textrm{ is contained in }G^\prime} {\hat{s}_{\textrm{path }i}} \\
s_{G^\prime} \left( \text{A},\text{B} \right) & \leq \sum_{\textrm{path }i\textrm{ overlaps with }G^\prime} {\hat{s}_{\textrm{path }i}}
\end{aligned}
\end{equation}
for any subgraph $G^\prime$.
\end{assumption}
Our algorithm works best when Conjecture~\ref{assume} holds,
while the approximation would be coarser when it does not.
We have not found any graph that breaks Conjecture~\ref{assume},
and it remains unproven whether it holds for all graphs.
We use Conjecture~\ref{assume} in two ways.
By selecting a special set of subgraphs $G^\prime$, we utilize (\ref{eq:assumption2})
to iteratively approximate $\hat{s}_{\textrm{path }i}$ values.
Then, after obtaining approximate $\hat{s}_{\textrm{path }i}$ values,
we invoke Conjecture~\ref{assume} for a special case of $G^\prime=G$,
where the two sums in condition (\ref{eq:assumption2}) are identical
and therefore (\ref{eq:assumption2}) becomes two equalities.
This justifies that $s \left( \text{A},\text{B} \right) = \sum_{\textrm{path }i} {\hat{s}_{\textrm{path }i}}$
which achieves our purpose.
One observation is that Conjecture~\ref{assume} does not uniquely define $\hat{s}_{\textrm{path }i}$ values
as there may exist two sets of $\hat{s}_{\textrm{path }i}$ values that both
satisfy (\ref{eq:assumption})(\ref{eq:assumption2}).
However, by definition they both sum up to the same end result (\ref{eq:metric}),
and therefore we only need to find one such set of $\hat{s}_{\textrm{path }i}$ values.
A second observation is that the lower bound in (\ref{eq:assumption2}) is tight for a variety of subgraphs $G^\prime$,
while the upper bound is tight only for large subgraphs.
We exploit this observation in the proposed algorithm:
we will select/design a certain set of subgraphs $G^\prime$ where the lower bound in (\ref{eq:assumption2}) is tight,
and then use the lower bound as an equality to iteratively refine the approximated $\hat{s}_{\textrm{path }i}$ values.
The proposed algorithm is illustrated in Algorithm~\ref{pseudo}.
$\hat{s}_{\textrm{path }i}$ values are initialized to be the nominal.
A subgraph $G^\prime_i$, to be elaborated later, is selected for each path $i$ longer than 2,
and $s_{G^\prime_i} \left( \text{A},\text{B} \right)$ is computed/approximated.
During each iteration, for each path $i$ longer than 2,
we use the lower bound in (\ref{eq:assumption2}) as an equality and convert it to the following update formula.
\begin{equation} \label{eq:update}
\hat{s}_{\textrm{path }i}^{k+1} = \frac{\hat{s}_{\textrm{path }i}^k \cdot \left( s_{G^\prime_i} \left( \text{A},\text{B} \right) - \sum_{\textrm{path }j\in\Upsilon_i} {s_{\textrm{path }j}} \right)}{\sum_{\textrm{path }j\in\Xi_i} {\hat{s}_{\textrm{path }j}^k}}
\end{equation}
where $k$ is the iteration index,
$\Xi_i$ is the set of minimal paths from A to B that are contained in $G^\prime_i$
and that have length more than 2,
and $\Upsilon_i$ is the set of minimal paths from A to B that are contained in $G^\prime_i$
and that have length of 1 or 2.
\newtheorem{algorithm}{Algorithm}
\begin{algorithm}
Overall flow of the proposed algorithm.
\begin{center}
\begin{tabular}{l}
\hline
\texttt{\upshape $\hat{s}^0_{\textrm{path }i} \gets s_{\textrm{path }i}$, $\forall$ path $i$, using (\ref{eq:nominal}); }
\\
\texttt{\upshape select subgraphs $G^\prime_i$, $\forall$ path $i$ longer than 2;}
\\
\texttt{\upshape for $k=0,1,\cdots$, until convergence:}
\\
$\quad$ \texttt{\upshape compute $\hat{s}_{\textrm{path }i}^{k+1}$ using (\ref{eq:update}), $\forall$ path $i$ longer than 2;}
\\
\texttt{\upshape $s \left( \text{A},\text{B} \right) \gets$ sum of final $\hat{s}_{\textrm{path }i}$ values;}
\\
\hline
\end{tabular}
\end{center}
\label{pseudo}
\end{algorithm}
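As a concrete rendering of Algorithm~\ref{pseudo}, the sketch below (our own illustration; the data structures, the non-negativity clamp, the convergence test, and the callable \texttt{measure\_on\_subgraph} are assumptions, and $G^\prime_i$ is kept fixed across iterations as in the first variation below) spells out the initialization and the update (\ref{eq:update}).
\begin{verbatim}
def approximate_measure(nominal, long_paths, subgraph_info, measure_on_subgraph,
                        n_iters=50, tol=1e-9):
    # nominal:       dict path_id -> nominal contribution s_path
    # long_paths:    ids of minimal A-to-B paths longer than 2
    # subgraph_info: dict path_id -> (G_i, upsilon_i, xi_i), with upsilon_i / xi_i
    #                the ids of the length<=2 / length>2 paths contained in G'_i
    # measure_on_subgraph(G_i) evaluates s_{G'_i}(A, B)
    s_hat = dict(nominal)                                     # initialization
    s_sub = {i: measure_on_subgraph(subgraph_info[i][0]) for i in long_paths}
    for _ in range(n_iters):
        new = dict(s_hat)
        for i in long_paths:
            _, upsilon_i, xi_i = subgraph_info[i]
            short_sum = sum(nominal[j] for j in upsilon_i)
            long_sum = sum(s_hat[j] for j in xi_i)
            if long_sum > 0.0:
                new[i] = s_hat[i] * max(s_sub[i] - short_sum, 0.0) / long_sum
        delta = max((abs(new[i] - s_hat[i]) for i in long_paths), default=0.0)
        s_hat = new
        if delta < tol:
            break
    return sum(s_hat.values())                                # approximates s(A, B)
\end{verbatim}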
One way to interpret this algorithm is that it is an iterative solver that solves a linear system
where there is one equation for each $G^\prime_i$ and the unknowns are $\hat{s}_{\textrm{path }i}$.
This interpretation holds for the variation in Section~\ref{sec:high},
however it does not hold in Sections~\ref{sec:mid} and \ref{sec:low},
in which we will present an alternative interpretation.
In the next three sections, we present three variations of the proposed algorithm.
They differ in how they select/construct subgraph $G^\prime_i$ for a given path $i$.
Section~\ref{sec:paths} discusses selecting minimal paths to work on.
\begin{figure}
\centering
\subfloat[]{\includegraphics[width=1.6in]{subgraph}}
\subfloat[]{\includegraphics[width=1.6in]{simplified}}
\caption{(i) The selected subgraph of Figure~\ref{fig:mark}
for the highlighted path, and (ii) its simplified form.}
\label{fig:subgraph}
\end{figure}
\subsection{High-accuracy variation} \label{sec:high}
In this variation of the proposed algorithm,
we select subgraph $G^\prime_i$ as the minimal subgraph
that contains all minimal paths from A to B which overlap with path $i$ by at least one edge.
One example is illustrated in Figure~\ref{fig:subgraph}(i),
which shows the subgraph of Figure~\ref{fig:mark}
for the highlighted path from A to $\textrm{B}_2$,
and it is used in (\ref{eq:update}) to update
$\hat{s}$ of the highlighted path during each iteration.
Note that $G^\prime_i$ only needs to be identified once
and $s_{G^\prime_i} \left( \text{A},\text{B} \right)$
only needs to be evaluated once, and the same value
is reused in (\ref{eq:update}) across iterations.
A main computation in this variation is the evaluation of
$s_{G^\prime_i} \left( \text{A},\text{B} \right)$.
Since $G^\prime_i$ is much smaller than the whole graph $G$ in
typical applications, many techniques from the network reliability field
can be applied to approximately evaluate (\ref{eq:nr}) in $G^\prime_i$
and hence $s_{G^\prime_i} \left( \text{A},\text{B} \right)$.
For example, it is known that (\ref{eq:nr}) is invariant under
certain topological transformations \cite{ball,satyanarayana}.
Applying such transformations, the graph in Figure~\ref{fig:subgraph}(i)
can be simplified to Figure~\ref{fig:subgraph}(ii) without loss of accuracy.
Then a Monte Carlo method can be applied on the simplified graph
and approximate $s_{G^\prime_i} \left( \text{A},\text{B} \right)$.
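In the same illustrative spirit as before (a sketch under the assumption that all minimal A-to-B paths have already been enumerated as node lists; it is not the authors' implementation), the subgraph selection of this variation can be written as follows, returning the edge set of $G^\prime_i$.
\begin{verbatim}
def select_overlapping_subgraph(paths, i):
    # Edges of the minimal subgraph containing every minimal A-to-B path
    # that shares at least one edge with path i (path i itself included).
    edges_i = set(zip(paths[i], paths[i][1:]))
    sub_edges = set()
    for p in paths:
        p_edges = set(zip(p, p[1:]))
        if p_edges & edges_i:
            sub_edges |= p_edges
    return sub_edges
\end{verbatim}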
\subsection{Medium-accuracy variation} \label{sec:mid}
Instead of identifying and solving each $G^\prime_i$ as an actual subgraph,
in this variation we construct $G^\prime_i$ as a hypothetical subgraph for each path $i$
during each iteration.
We start the construction by considering the
amount of sharing on each edge.
In the $k^\textrm{th}$ iteration, for edge $e$, define
\begin{equation} \label{eq:usage}
u_e^k = \sum_{\textrm{path }j\textrm{ contains }e\textrm{ and is longer than 2}}{\hat{s}_{\textrm{path }j}^k}
\end{equation}
Intuitively, $u_e^k$ quantifies usage of edge $e$ by A-to-B minimal paths,
based on current knowledge at the $k^\textrm{th}$ iteration.
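A direct transcription of (\ref{eq:usage}) follows (again only a sketch; paths are assumed to be stored as node lists keyed by path id).
\begin{verbatim}
def edge_usage(paths, s_hat):
    # u_e^k: for each edge e, sum s_hat over the A-to-B minimal paths
    # longer than 2 that traverse e.
    usage = {}
    for i, path in paths.items():
        if len(path) - 1 <= 2:        # skip length-1 and length-2 paths
            continue
        for e in zip(path, path[1:]):
            usage[e] = usage.get(e, 0.0) + s_hat[i]
    return usage
\end{verbatim}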
\begin{figure}
\centering
\subfloat[]{\includegraphics[width=2.6in]{path1}}\\
\subfloat[]{\includegraphics[width=2.6in]{path2}}
\caption{(i) Example of a middle section of a path.
(ii) The same middle section after adding hypothetical edges.}
\label{fig:path}
\end{figure}
For each path $i$, we annotate each edge on this path with $u_e^k$.
Figure~\ref{fig:path}(i) illustrates a middle section of an example path $i$.
We use $u_{\textrm{XY}}^k$ to denote $u_e^k$ when $e$ is an edge from node X to node Y,
and $w_{\textrm{XY}}$ to denote its edge weight.
Without loss of generality, we assume that
$u_{\textrm{FG}}^k > u_{\textrm{CD}}^k > u_{\textrm{EF}}^k > u_{\textrm{DE}}^k$.
We construct the hypothetical subgraph $G^\prime_i$ starting from path $i$ itself
and by adding hypothetical edges.
Since $u_{\textrm{EF}}^k > u_{\textrm{DE}}^k$, there must exist one or more A-to-B path(s)
that passes the edge from E to F but that does not
pass the edge from D to E. A hypothetical new edge from D to E is added
to approximate the effect of such path(s);
furthermore, we know that the sum of $\hat{s}^k$ of these paths is
equal to $u_{\textrm{EF}}^k - u_{\textrm{DE}}^k$, and we use this fact to assign
the following weight.
\begin{equation} \label{eq:hypoweight1}
w_\textrm{DE}^\prime = 1 - \left( 1 - w_\textrm{DE} \right)^{\left(u_{\textrm{EF}}^k - u_{\textrm{DE}}^k\right)/ u_{\textrm{DE}}^k}
\end{equation}
In plain words, we assume that this hypothetical edge
is $\left( u_{\textrm{EF}}^k - u_{\textrm{DE}}^k \right)/u_{\textrm{DE}}^k$
times as strong as the original D-to-E edge.
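For a numerical illustration (with values chosen purely for exposition), if $w_\textrm{DE} = 0.5$, $u_{\textrm{DE}}^k = 1$ and $u_{\textrm{EF}}^k = 3$, then (\ref{eq:hypoweight1}) gives $w_\textrm{DE}^\prime = 1 - \left(1 - 0.5\right)^{2} = 0.75$; that is, the hypothetical edge behaves like two independent copies of the original D-to-E edge placed in parallel.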
Similarly, since $u_{\textrm{CD}}^k > u_{\textrm{EF}}^k$, there must exist one or
more A-to-B path(s) that passes the edge from C to D, but that does not pass the edge
from E to F, the edge from D to E, or paths represented by the hypothetical edge from
D to E. A hypothetical new edge from D to F is added to approximate the
effect of such path(s). Again, we know that the sum of $\hat{s}^k$ of these paths is
equal to $u_{\textrm{CD}}^k - u_{\textrm{EF}}^k$, and by the same argument for (\ref{eq:hypoweight1}),
we assign the following edge weight.
\begin{equation} \label{eq:hypoweight2}
w_\textrm{DF}^\prime = 1- \left( 1 - w_\textrm{EF} \cdot \left( 1 - \left( 1 - w_\textrm{DE} \right)^{u_{\textrm{EF}}^k / u_{\textrm{DE}}^k} \right) \right)^\frac{u_{\textrm{CD}}^k - u_{\textrm{EF}}^k}{u_{\textrm{EF}}^k}
\end{equation}
The same rationale applies to adding the hypothetical edge from C to F, and so on.
The above construction process for $G^\prime_i$ processes edges on path $i$ one by one
in the order of increasing $u_e^k$
values and adds hypothetical edges.
The last step of construction is to add to $G^\prime_i$
the length-2 A-to-B paths that overlap with path $i$,
since they are not visible in $u_e^k$
values.
The completed hypothetical subgraph $G^\prime_i$ is then used
in (\ref{eq:update}) to compute $\hat{s}_{\textrm{path }i}^{k+1}$ for the next iteration,
and the overall algorithm proceeds.
Note that the denominator $\sum_{\textrm{path }j\in\Xi_i} {\hat{s}_{\textrm{path }j}^k}$ in (\ref{eq:update})
is simply equal to the largest $u_e^k$ along path $i$; let it be $u_\textrm{max}^k$.
One distinction between this variation and Section~\ref{sec:high}
is that the exact evaluation of $s_{G^\prime_i} \left( \text{A},\text{B} \right)$
has linear complexity with respect to path length.
The hypothetical edges form series-parallel structures that
are friendly to topological transformations \cite{satyanarayana,politof}.
Using Figure~\ref{fig:path}(ii) as an example,
the hypothetical D-to-E edge and the original D-to-E edge can be
merged into a single edge;
then it and the E-to-F edge can be merged into a single D-to-F edge;
then it and the hypothetical D-to-F edge can be merged, and so on.
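The following Python sketch of the merging just described uses the two elementary reductions from Section~\ref{sec:basic}; the edge weights are placeholders.
\begin{verbatim}
def parallel(w1, w2):
    """Merge two parallel edges: 1 - (1-w1)(1-w2)."""
    return 1.0 - (1.0 - w1) * (1.0 - w2)

def series(w1, w2):
    """Merge two edges in series through a degree-2 node: w1 * w2."""
    return w1 * w2

# Following Figure fig:path(ii): merge the hypothetical D-to-E edge with the
# original D-to-E edge, then in series with E-to-F, then in parallel with the
# hypothetical D-to-F edge; placeholder weights are used throughout.
w_de, w_de_hypo, w_ef, w_df_hypo = 0.3, 0.41, 0.6, 0.25
w = parallel(w_de, w_de_hypo)    # combined D-to-E strength
w = series(w, w_ef)              # now a single D-to-F edge
w = parallel(w, w_df_hypo)       # absorb the hypothetical D-to-F edge
print(w)
\end{verbatim}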
Another distinction between this variation and Section~\ref{sec:high}
is that $G^\prime_i$ is no longer the same across iterations.
As a result, the linear-system interpretation mentioned in Section~\ref{sec:flow}
no longer holds.
Instead, the following interpretation is more intuitive.
The calculation by (\ref{eq:update})
applies a dilution factor $\hat{s}_{\textrm{path }i}^k/u_\textrm{max}^k$
to the strength of $G^\prime_i$, excluding length-2 paths.
The more path $i$ overlaps with other paths,
the larger $u_\textrm{max}^k$ and the smaller the dilution factor;
$G^\prime_i$ is a hypothetical subgraph that mimics a path on which every edge has usage $u_\textrm{max}^k$.
\begin{figure}
\centering
\includegraphics[width=1.9in]{low}
\caption{Hypothetical subgraph in low-accuracy variation.}
\label{fig:low}
\end{figure}
\subsection{Low-accuracy variation} \label{sec:low}
Continuing the interpretation from the last section,
we may construct $G^\prime_i$ in the form
of Figure~\ref{fig:low}, which trades
further accuracy for lower runtime.
For minimal path $i$, let C and D be the first and last intermediate nodes,
let the original edge weights along this path be $w_1$, $w_2$, $\cdots$, $w_n$,
and let edge usages (\ref{eq:usage})
along this path be $u_1^k$, $u_2^k$, $\cdots$, $u_n^k$.
We construct a hypothetical subgraph $G^\prime_i$ in the form of Figure~\ref{fig:low}.
In Figure~\ref{fig:low}, dashed arrows from A to D and from C to B represent edges
that may exist in $G$ and form length-2 paths A-D-B and A-C-B;
if these dashed edges do exist, they have their original weights.
The weights of the three solid edges are:
\begin{eqnarray} \label{eq:low}
w_\textrm{AC}^\prime & = & 1- \left( 1 - w_1 \right)^{u_\textrm{max}^k / u_1^k} \\
w_\textrm{CD}^\prime & = & 1- \left( 1 - \prod_{j=2}^{n-1}{w_j} \right)^{u_\textrm{max}^k / u_\textrm{mean}^k} \\
w_\textrm{DB}^\prime & = & 1- \left( 1 - w_n \right)^{u_\textrm{max}^k / u_n^k}
\end{eqnarray}
where $u_\textrm{mean}^k$ is a weighted average of $u_2^k,\cdots,u_{n-1}^k$:
\begin{equation} \label{eq:mean}
u_\textrm{mean}^k = \left( \sum_{j=2}^{n-1}{u_j^k\cdot\log\left(w_j\right)} \right) / {\sum_{j=2}^{n-1}{\log\left(w_j\right)}}
\end{equation}
Intuitively, $G^\prime_i$ still mimics a path where every edge has usage $u_\textrm{max}^k$,
and is constructed more crudely than in Section~\ref{sec:mid}.
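A minimal Python sketch of (\ref{eq:low}) and (\ref{eq:mean}) follows; the weight and usage vectors are placeholders.
\begin{verbatim}
import math

def low_accuracy_weights(w, u, u_max):
    """Solid-edge weights of Figure fig:low per (eq:low) and (eq:mean).
    w: original edge weights w_1..w_n along path i; u: usages u_1^k..u_n^k."""
    n = len(w)
    logs = [math.log(wj) for wj in w[1:n-1]]
    u_mean = sum(uj * lj for uj, lj in zip(u[1:n-1], logs)) / sum(logs)
    w_ac = 1.0 - (1.0 - w[0]) ** (u_max / u[0])
    w_cd = 1.0 - (1.0 - math.prod(w[1:n-1])) ** (u_max / u_mean)
    w_db = 1.0 - (1.0 - w[-1]) ** (u_max / u[-1])
    return w_ac, w_cd, w_db

print(low_accuracy_weights(w=[0.5, 0.6, 0.7, 0.5],
                           u=[0.2, 0.3, 0.4, 0.25], u_max=0.4))
\end{verbatim}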
\begin{table*}[!t]
\centering
\caption{Performance on Figure~\ref{fig:mark}.
$\textrm{Error}_{\text{B}_2}$ and $\textrm{Error}_\Delta$ are relative errors in $s \left( \text{A},\text{B}_2 \right)$
and in $s \left( \text{A},\text{B}_2 \right) - s \left( \text{A},\text{B}_1 \right)$.}
\label{tbl:toy}
\footnotesize
\setlength\tabcolsep{1pt}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|} \hline
Edge weight & \multicolumn{3}{|c|}{0.1} & \multicolumn{3}{|c|}{0.5} & \multicolumn{3}{|c|}{0.9} \\ \hline
& $\textrm{Error}_{\text{B}_2}$ & $\textrm{Error}_\Delta$ & Runtime(s)
& $\textrm{Error}_{\text{B}_2}$ & $\textrm{Error}_\Delta$ & Runtime(s)
& $\textrm{Error}_{\text{B}_2}$ & $\textrm{Error}_\Delta$ & Runtime(s) \\ \hline
High accuracy & 0.07\% & 19.2\% & 7.68E-3 & 2.40\% & 12.4\% & 6.73E-3 & 2.40\% & 6.73\% & 7.74E-3 \\ \hline
Medium accuracy & 8.86\% & 100\% & 1.48E-3 & 17.2\% & 100\% & 1.50E-3 & 12.0\% & 100\% & 4.78E-4 \\ \hline
MC, 1K samples & 37.7\% & 4.21E2 & 9.53E-4 & 3.38\% & 282\% & 1.94E-3 & 43.1\% & 228\% & 2.09E-3 \\ \hline
MC, 10K samples & 12.9\% & 1.56E2 & 8.45E-3 & 1.07\% & 106\% & 1.84E-2 & 3.82\% & 272\% & 1.91E-2 \\ \hline
MC, 100K samples & 3.55\% & 4751\% & 8.27E-2 & 0.35\% & 29.2\% & 0.174 & 1.05\% & 70.2\% & 0.182 \\ \hline
MC, 1M samples & 1.13\% & 1852\% & 0.812 & 0.11\% & 8.69\% & 1.74 & 0.33\% & 23.7\% & 1.78 \\ \hline
\end{tabular}
\end{table*}
\subsection{Accuracy and implementation issues} \label{sec:paths}
The proposed algorithm can discern $\text{B}_1$ and $\text{B}_2$ correctly for all cases in Section~\ref{sec:graphs};
Table~\ref{tbl:toy} shows details for Figure~\ref{fig:mark},
with three cases in which all edge weights are 0.1, 0.5, and 0.9 respectively.
The medium-accuracy variation is unable to distinguish $\text{B}_1$ and $\text{B}_2$
and hence has 100\% $\textrm{Error}_\Delta$.
Each Monte Carlo (MC) measurement is repeated with 100 different random number
generation seeds, and the reported error/runtime is the average over the 100 runs.
Not surprisingly, MC favors 0.5 edge weight and has larger errors for higher or lower weights,
while our algorithm is stable across the range.
Table~\ref{tbl:toy} suggests that the high-accuracy variation
has accuracy comparable to 10K MC samples for individual-score estimation,
and is more accurate in differentiating $\text{B}_1$ and $\text{B}_2$ than one million samples for two of the three cases.
It also suggests that an MC method needs at least tens of thousands of samples to reliably differentiate nodes.
For this graph, the high-accuracy runtime is equivalent to that of 3500--8000 MC samples,
while the medium-accuracy runtime is equivalent to that of 200--1500 samples.
In practice, the variations can form a hybrid: the medium- or even low-accuracy variation is used
for all target nodes to identify a critical subset of targets,
while the high-accuracy variation ranks only within the critical subset.
With any variation, the input to the proposed algorithm is a set of minimal paths from A to B,
and the output is an $\hat{s}$ value for each path in the set.
For large graphs, to maintain practical complexity, the set is a subset of all minimal paths,
and this results in underestimation of the Blink Model score (\ref{eq:metric}).
A natural strategy is to ignore long and weak paths, similar to \cite{liu} with respect to PageRank.
This is motivated by the fact that the measure (\ref{eq:nr}) has locality \cite{karger}.
In our implementation, a minimal path $i$ is given to the proposed algorithm
if and only if it satisfies both of the following conditions:
\begin{equation} \label{eq:filter1}
s_{\textrm{path }i} \ge t_1
\end{equation}
\begin{equation} \label{eq:filter2}
\prod_{\textrm{edge }e\textrm{ on path }i}{\frac{\log\left(1-w_E\left(e\right)\right)}{\sum_{\textrm{edge }f\in\Theta_e}{\log\left(1-w_E\left(f\right)\right)}}} \ge t_2
\end{equation}
where $\Theta_e$ is the set of out-going edges from the source node of $e$,
and $t_1$ and $t_2$ are two constant thresholds that control runtime-accuracy tradeoff.
Condition~(\ref{eq:filter2}) is essentially a fan-out limit on paths.
When making predictions in Section~\ref{sec:result},
we use $t_2 = 2\textrm{E-}6$, which implies that we consider
at most 500,000 paths from A.\footnote{Identification
of minimal paths to many B's can be achieved by
a single graph traversal from A. For example, if the traversal finds a path
composed of nodes A, $\textrm{B}_1$, $\textrm{B}_2$, $\cdots$, $\textrm{B}_n$,
which satisfy (\ref{eq:filter1})(\ref{eq:filter2}), then this provides one qualified path
to $\textrm{B}_1$, one to $\textrm{B}_2$, $\cdots$, and one to $\textrm{B}_n$.}
For node B that is close to A, the above strategy provides good coverage of
a ``local'' region and thus causes little underestimation.
The further away B is from A, the less complete this coverage is and
the more the underestimation is.
In applications where we rank multiple B nodes for a given A, e.g. link prediction,
fidelity is maintained because the degree of underestimation
is negatively correlated with exact scores.
For a distant node B for which no path satisfies the bounds,
we use the single path with the largest $s_{\textrm{path }i}$, which can be found
efficiently with a Dijkstra-like procedure.
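One way to realize that search is sketched below: the path maximizing $s_{\textrm{path }i}$ (with all node weights 1) maximizes the product of its edge weights, so Dijkstra's algorithm on edge costs $-\log w_E(e)$ suffices. The graph and weights in this Python sketch are placeholders.
\begin{verbatim}
import heapq
import math

def strongest_path_score(adj, source, target):
    """Largest product of edge weights over all source-to-target paths,
    found by Dijkstra's algorithm on costs -log(w).  adj: {u: [(v, w), ...]}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return math.exp(-d)            # product of weights on the best path
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d - math.log(w)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return 0.0                             # target unreachable

adj = {"A": [("C", 0.9), ("D", 0.4)], "C": [("B", 0.5)], "D": [("B", 0.95)]}
print(strongest_path_score(adj, "A", "B"))   # 0.45 via A-C-B beats 0.38 via A-D-B
\end{verbatim}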
On a last note, our algorithm is essentially a method to quantify values
for a set of uncertain and mutually overlapping pieces of evidence, each piece being a minimal path.
The algorithm is orthogonal to the choice of input evidence,
for which any path collection implementation can be used,
while the quality of output is influenced by the quality of input.
\section{Conclusions}
This manuscript proposes the Blink Model graph proximity measure.
We demonstrate that it matches human intuition better than others,
develop an approximation algorithm,
and empirically verify its predictive accuracy.
\bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro}
Humans have intuitions for graph proximity.
Given a graph, certain pairs of nodes are perceived to have
stronger relation strength than others.
We know that a larger number of shorter paths indicates
greater proximity, yet a precise mathematical formulation of this perception is elusive.
Many measures in the literature
can be viewed as quantitative
proxies of graph proximity:
shortest path, Jaccard index \cite{jaccard}, Katz \cite{katz},
personalized PageRank \cite{pr_paper},
SimRank \cite{simrank}, Adamic/Adar \cite{adamicadar}
and others \cite{potamias,blondel,fouss,koren2006,tong,yen,lao,liu,chebotarev,barabasi,leicht}.
Although they each have characteristics that suit specific applications,
they generally have varying degrees of agreement with human intuition.
This manuscript adds one more entry to the list.
This graph proximity measure is called the Blink Model and
is a derivative of network reliability.
By studying its properties and a series of graph examples,
we argue that it matches human intuition better than
many existing measures.
We develop a practical algorithm to approximately compute this measure,
and demonstrate its predictive power through empirical validation.
Some of the contents appeared in \cite{patent}.
Relational data, or graph-structured data, are ubiquitous.
Graph proximity measures, i.e., the ability to quantify relation strength,
are fundamental building blocks in many applications.
They can be used to recommend new contacts in social networks \cite{liben},
to make product recommendations based on a graph model of products and users \cite{aggarwal},
to rank web search results or documents in general \cite{pr_paper},
or to predict new facts in knowledge graphs \cite{cbmm}.
They can be used to single out anomalies by identifying implausible links.
The list of applications goes on and on.
The proposed measure is a derivative of
terminal network reliability \cite{ball}, which has
other forms in various fields to be reviewed in Section~\ref{sec:liter}.
Network reliability has largely been ignored as a candidate measure in the
aforementioned applications.
For example, \cite{potamias} concluded that network reliability was
one of the least predictive measures.
We reach the opposite conclusion with our Blink Model measure
by including the winning measure from \cite{potamias} in both
theoretical and empirical comparisons.
The discrepancy may be due to the fact that \cite{potamias} used a Monte Carlo approximation
with only 50 samples, which was not sufficient to reach accurate results;
running a sufficient number of samples might have been computationally infeasible.
Exact evaluation of the Blink Model measure has the
same complexity as terminal network reliability,
which is known to be \#P-complete \cite{valiant}.
We will present a new deterministic algorithm
that approximates the measure directly with practical complexity
and thereby enables the proposed measure in applications.
To quantify the benefit of being consistent with human intuition,
we use two link prediction tasks
to compare our measure against other topological proximity measures.
The first is a replication of \cite{liben}.
A remarkable conclusion of \cite{liben} was that the best proximity measure is case dependent.
A specific parameterization of a specific measure may perform
well on one graph yet underperform significantly
on another, and there does not exist a consistent winner.
We compare against the oracle, i.e., the highest accuracy achieved on each graph
by any predictor in \cite{liben}, and demonstrate that
a single parameterization of our measure outperforms the oracle by
14--35\% on each graph.
The second task is predicting additions of
inter-wikipage citations in Wikipedia
from April 2014 to March 2015, and again substantial accuracy advantage is shown.
Through these tasks, we also demonstrate a simple yet practical and automatic method of
training graph weighting parameters,
which can naturally incorporate application-specific domain knowledge.
This manuscript is organized as follows.
The rest of this section defines the problem, the proposed proximity measure
and several competing measures,
and reviews related fields and basic arithmetic.
Section~\ref{sec:metric} studies properties of the measure and compares
it with others through graph examples.
Section~\ref{sec:algorithm} describes the proposed approximation algorithm.
Section~\ref{sec:result} presents empirical comparisons.
\subsection{Problem statement} \label{sec:ps}
The problem statement for a proximity measure is the following.
The input is a graph $G = \langle V,E \rangle$,
its node weights $w_V:V \rightarrow \left( 0, 1 \right]$,
and its edge weights $w_E:E \rightarrow \left( 0, 1 \right]$.
The output is, for any pair of nodes A and B in $V$, a value score(A,B).
Note that not all proximity measures consider $w_V$ and $w_E$.
Some use $w_E$ and ignore $w_V$, while some consider only topology $G$.
Although $w_V$ and $w_E$ can be from any source,
we present a simple yet practical method in Section~\ref{sec:weight}
to train a few parameters and thereby set $w_V$ and $w_E$.
It is applicable to all proximity measures and is used in Section~\ref{sec:result}.
We will focus on directed simple edges.
Undirected edges can be represented by two directed edges,
and most discussions are extendable to hyperedges.
\subsection{Definitions} \label{sec:def}
Let us first define the proposed graph proximity measure.
Consider the input $G$ as a graph that blinks:
an edge exists with a probability equal to its weight;
a node exists with a probability equal to its weight.
Edges and nodes each blink independently.
A path is considered existent if and only if
all edges and all intermediate nodes on it exist;
note that we do not require the two end nodes to exist.
The proposed proximity measure is
\begin{eqnarray}
s \left( \text{A},\text{B} \right) & = & - \log \left( 1-b \left( \text{A},\text{B} \right) \right) \label{eq:metric} \\
\quad\textrm{where}\; b \left( \text{A},\text{B} \right) & = & \text{P} \left[ \text{ at least one path exists from A to B } \right] \label{eq:nr}
\end{eqnarray}
We will refer to (\ref{eq:metric}) as the Blink Model measure,
its properties and generalizations to be presented in Section~\ref{sec:metric}.
It is straightforward to see that $s$ and $b$ are monotonic functions of each other
and hence order-equivalent; the reason to choose $s$ over $b$ will become evident in Section~\ref{sec:prop}.
Next we define several competing measures.
For brevity, SimRank \cite{simrank} and commute-time \cite{fouss} are not defined here;
they are compared in Section~\ref{sec:graphs}.
Personalized PageRank (PPR) \cite{pr_paper} with weights
considers a Markov chain that has the topology of $G$
plus a set of edges from each node to node A.
These additional edges all have transition probability $\alpha \in \left( 0, 1 \right)$.
For each original edge $e \in E$, let X be its source node
and let $w_{\textrm{sum,X}}$ be the sum of weights of X's out-going edges;
the transition probability of edge $e$ is then $\left( 1-\alpha \right) \cdot w_E(e) / w_{\textrm{sum,X}} $.
The PPR measure $\text{score}_\text{PPR} \left( \textrm{A,B} \right)$
is defined as this Markov chain's stationary distribution on node B.
PPR does not use node weights.
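A minimal power-iteration sketch of this definition is given below in Python; the graph, $\alpha$, and the iteration count are placeholders, and nodes without out-going edges are assumed (our assumption, not stated above) to send all of their transition mass to A.
\begin{verbatim}
def ppr(adj, alpha, source, iters=100):
    """Personalized PageRank by power iteration.
    adj: {u: [(v, w), ...]}.  Every node jumps to `source` with probability
    alpha and follows edge (u, v) with probability (1-alpha)*w / w_sum_u.
    Dangling nodes send all of their mass to `source` (assumption)."""
    nodes = set(adj) | {v for nbrs in adj.values() for v, _ in nbrs}
    pi = {v: 1.0 / len(nodes) for v in nodes}
    for _ in range(iters):
        nxt = {v: 0.0 for v in nodes}
        for u, p in pi.items():
            nbrs = adj.get(u, [])
            w_sum = sum(w for _, w in nbrs)
            nxt[source] += alpha * p if w_sum > 0 else p
            for v, w in nbrs:
                nxt[v] += (1.0 - alpha) * p * w / w_sum
        pi = nxt
    return pi       # stationary distribution; score_PPR(A, B) = pi[B]

adj = {"A": [("B1", 1.0), ("C", 1.0)], "C": [("B2", 1.0), ("D", 1.0)]}
print(ppr(adj, alpha=0.15, source="A")["B1"])
\end{verbatim}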
The original Katz measure \cite{katz} does not use edge or node weights,
and we define a modified Katz measure:
\begin{equation} \label{eq:katz}
\text{score}_\text{Katz} \left( \textrm{A,B} \right) = \sum_{l=1}^{\infty}{ \beta^l \cdot \sum_{\textrm{length-}l\textrm{ A-to-B path }i}{p_{i,l}} }
\end{equation}
where $\beta\in \left( 0, 1 \right)$ is a parameter,
and $p_{i,l}$ is the product of edge weights
and intermediate node weights for the $i^\textrm{th}$ path with length $l$.
This measure is divergent if $\beta$ is larger than the reciprocal
of the spectral radius of the following matrix $M$:
entry $M_{i,j}$ is the product of the $i^\textrm{th}$ node weight and
the sum of edge weights from the $i^\textrm{th}$ node to the $j^\textrm{th}$ node.
The effective conductance (EC) measure is defined as
the effective conductance between two nodes by viewing edges as resistors.
It can be generalized to be a directed measure, and notable variants include
cycle-free effective conductance (CFEC) \cite{koren2006}
and EC with universal sink \cite{faloutsos,tong}.
Expected Reliable Distance (ERD) is the winning measure in \cite{potamias}.
Consider the same blinking graph as in the Blink Model
and let $D$ be the shortest-path distance from A to B;
ERD is an inverse-proximity measure:
\begin{equation} \label{eq:relidist}
\text{score}_\text{ERD} \left( \textrm{A,B} \right) = \text{E} \left[ D | D \neq \infty \right]
\end{equation}
Our implementation of Adamic/Adar \cite{adamicadar} is:
\begin{equation} \label{eq:adamicadar}
\text{score}_\text{AA} \left( \textrm{A,B} \right) = \sum_\textrm{C}{\frac{n_\textrm{A,C} \cdot n_\textrm{C,B}}{\log{d_\textrm{C,in}}+\log{d_\textrm{C,out}}}}
\end{equation}
where $n_\textrm{A,C}$ is the number of A-to-C edges,
$n_\textrm{C,B}$ is the number of C-to-B edges,
and $d_\textrm{C,in}$ and $d_\textrm{C,out}$ are the numbers of in-coming and out-going edges of node C.
\subsection{Related work} \label{sec:liter}
This section briefly reviews fields related to the proposed measure.
Network reliability is an area in operations research
which focuses on evaluating two-terminal reliability, i.e. (\ref{eq:nr}),
all-terminal reliability and other similar quantities.
Target applications included
assessing the reliability of the ARPANET, tactical radio networks, etc.
Exact evaluation of (\ref{eq:nr}) was proved to
be \#P-complete \cite{valiant}.
Fast exact methods were developed for special topologies \cite{satyanarayana,politof}.
Deterministic methods were developed for evaluating bounds \cite{ball,brecht}.
Monte Carlo methods were developed for estimation \cite{vanslyke,fishman,karger},
and were considered the choice for larger graphs with general topologies \cite{ball}.
Most of these methods have poor scalability to large graphs.
In a 2007 work \cite{hardy},
a modern implementation based on binary decision diagrams (BDDs),
the largest square-2D-grid benchmark had only 144 nodes.
In a 2005 work \cite{ramirez}, which used a Monte Carlo method,
the largest benchmark had only 11 nodes.
Our blinking graph definition belongs to
the category of random graphs \cite{gilbert,erdos}.
It generalizes the Gilbert model
by allowing a different probability for each edge
(zero probabilities remove edges from the Gilbert model).
In particular, the branch of percolation theory \cite{kesten,bollobas}
and works on uncertain graphs \cite{potamias,jin,zou,yuan}
are closely related to our study.
Network influence studies the spread of influence through social networks.
A popular model, called independent cascade model \cite{kempe03},
considers the same blinking graph with all node weights being 1,
and the influence $\sigma\left( S \right)$ is, given a set of starting nodes $S$,
the expected number of reachable nodes from $S$.
In the case of $S=\{A\}$,
$\sigma\left( S \right)$ is equal to the sum of (\ref{eq:nr}) over all node B's.
The goal of optimization is to select $S$
to maximize $\sigma\left( S \right)$.
Since the quantity of interest is a sum of (\ref{eq:nr}) values,
it is easier to compute than individual (\ref{eq:nr}) values.
For example, many fewer Monte Carlo samples are needed to reach the same
relative error.
Methods have been developed to quickly estimate $\sigma\left( S \right)$ \cite{chen,wang},
e.g., \cite{wang} uses the largest-probability single path to each node
as a surrogate for an individual (\ref{eq:nr}) value.
Although these methods showed fidelity at the $\sigma\left( S \right)$ level,
they incur too much error for our purpose.
\subsection{Basic arithmetic} \label{sec:basic}
For clarity of presentation and without loss of generality\footnote{Blinking
graphs with edge weights alone, setting all node weights to 1,
are equally expressive.
A node weight can be expressed as an edge weight by splitting a node
into two nodes, one being the sink of in-coming edges and the other the source of out-going edges,
and adding an auxiliary edge between the two, with edge weight
equal to the original node weight \cite{ball}.},
this section assumes all node weights are 1.
Exact evaluation of the Blink Model measure can proceed as follows.
Enumerate all subgraphs of $G$, each of which is a state of the blinking graph
and has a probability equal to the product of $w_E(e)$ for edges $e$ that exist
and $1-w_E(e)$ for edges $e$ that do not.
(\ref{eq:nr}) is the sum of the probabilities of subgraphs in which a path exists from A to B,
and (\ref{eq:metric}) is then calculated accordingly.
This has impractical complexity.
Monte Carlo evaluation of the Blink Model measure can proceed as follows.
Each sample traverses the subgraph reachable from A in one instance of the blinking graph.
(\ref{eq:nr}) is approximated by the fraction of samples that reach B,
and (\ref{eq:metric}) is then approximated accordingly.
This can be expensive:
in Table~\ref{tbl:toy}, we demonstrate that
at least tens of thousands of samples are needed to reliably discern different pairs of nodes,
yet in Section~\ref{sec:wikipedia}, for a graph that represents the Wikipedia citation network,
practical run time allows only 100 samples.
If edges $e_1=(\textrm{X},\textrm{Y})$ and $e_2=(\textrm{Y},\textrm{Z})$ are the only edges to/from node Y,
they can be replaced by a single edge from X to Z with weight $w_E(e_1)\cdot w_E(e_2)$,
without altering the Blink Model measure for any pair of nodes.
Two parallel edges $e_1$ and $e_2$ can be replaced by a single edge with weight $1-(1-w_E(e_1))\cdot(1-w_E(e_2))$,
without altering the Blink Model measure for any pair of nodes.
$x$ parallel edges with weight $w$ are equivalent to a single edge with weight $1-(1-w)^x$.
In other words, an edge with weight $1-(1-w)^x$ is $x$ times as strong as an edge with weight $w$.
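For very small graphs, the exact evaluation and the reductions above can be checked directly with the brute-force Python sketch below (illustrative only; complexity is exponential in the number of edges, and node weights are assumed to be 1).
\begin{verbatim}
import itertools
import math

def exact_b(edges, source, target):
    """Exact (eq:nr) by enumerating all 2^|E| states of the blinking graph.
    edges: list of (u, v, weight) triples; node weights assumed to be 1."""
    total = 0.0
    for mask in itertools.product([0, 1], repeat=len(edges)):
        prob, adj = 1.0, {}
        for bit, (u, v, w) in zip(mask, edges):
            prob *= w if bit else (1.0 - w)
            if bit:
                adj.setdefault(u, []).append(v)
        stack, seen = [source], {source}       # reachability check
        while stack:
            u = stack.pop()
            for v in adj.get(u, []):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        if target in seen:
            total += prob
    return total

def exact_s(edges, source, target):
    return -math.log(1.0 - exact_b(edges, source, target))

# Sanity check of the parallel-edge reduction: two parallel A-to-B edges of
# weight w score the same as one A-to-B edge of weight 1 - (1-w)^2.
w = 0.3
print(exact_s([("A", "B", w), ("A", "B", w)], "A", "B"))
print(exact_s([("A", "B", 1.0 - (1.0 - w) ** 2)], "A", "B"))
\end{verbatim}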
\section{The Measure} \label{sec:metric}
This section starts with two important properties of the proposed proximity measure,
followed by its generalizations and variations.
We then use a series of graph studies to demonstrate that the proposed measure
is more consistent with human intuition than competitors,
and finally discuss setting edge and node weights in applications.
\subsection{Properties} \label{sec:prop}
Let us begin with two properties of (\ref{eq:metric}), \emph{additivity} and \emph{monotonicity},
which are important in the coming sections.
\emph{Additivity}.
Let $G_1 = \langle V_1,E_1 \rangle$ and $G_2 = \langle V_2,E_2 \rangle$ be two graphs
such that $V_1 \cap V_2 = \left\{ \textrm{A,B} \right\} $, $E_1 \cap E_2 = \emptyset $.
Let $G_3 = \langle V_1 \cup V_2 ,E_1 \cup E_2 \rangle$ be a combined graph that has the same
node and edge weights as in $G_1$ and $G_2$.
Let $s_{G_1} \left( \text{A},\text{B} \right)$, $s_{G_2} \left( \text{A},\text{B} \right)$ and
$s_{G_3} \left( \text{A},\text{B} \right)$ be the measure value (\ref{eq:metric}) in these
three graphs respectively.
Then this condition holds:
\begin{equation} \label{eq:add}
s_{G_3} \left( \text{A},\text{B} \right) = s_{G_1} \left( \text{A},\text{B} \right) + s_{G_2} \left( \text{A},\text{B} \right)
\end{equation}
Among competitors defined in Section~\ref{sec:def},
Adamic/Adar and EC have the same additivity property.
Katz does not have this property,
as in general (\ref{eq:katz}) in $G_3$ exceeds the sum of its values in $G_1$ and $G_2$,
and may even be divergent.
The additivity property is consistent with human intuition.
When multiple independent sets of evidence are combined,
our perception of proximity becomes the sum of proximity values derived from each individual set.
To state the same in more general terms, the proposed proximity measure (\ref{eq:metric}) is
proportional to the amount of evidence, which is why we choose it over (\ref{eq:nr}).
This additivity is also crucial to the development of
the approximation algorithm in Section~\ref{sec:algorithm}.
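The property holds because, with $V_1 \cap V_2 = \left\{ \textrm{A,B} \right\}$ and disjoint edge sets, every A-to-B path of $G_3$ lies entirely in $G_1$ or in $G_2$ and the two subgraphs blink independently, so $1-b_{G_3} = (1-b_{G_1})(1-b_{G_2})$. A tiny numeric Python illustration, with placeholder graphs and weights:
\begin{verbatim}
import math

def s_from_b(b):
    return -math.log(1.0 - b)

# G1: a single path A-C-B with edge weights 0.5 and 0.5  ->  b1 = 0.25.
# G2: a single edge A-B with weight 0.4                  ->  b2 = 0.40.
b1, b2 = 0.5 * 0.5, 0.4
# G3 glues G1 and G2 at A and B only, so the two routes fail independently:
b3 = 1.0 - (1.0 - b1) * (1.0 - b2)
print(s_from_b(b1) + s_from_b(b2))   # 0.7985...
print(s_from_b(b3))                  # 0.7985... -- additivity (eq:add) holds
\end{verbatim}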
\emph{Monotonicity}.
Let $G_1 = \langle V_1,E_1 \rangle$ and $G_2 = \langle V_2,E_2 \rangle$ be two graphs
such that $V_1 \subseteq V_2$, $E_1 \subseteq E_2$, and that their weights satisfy
that $w_{V_1}\left( \textrm{X} \right) \leq w_{V_2}\left( \textrm{X} \right),\forall\textrm{X}\in V_1$
and that $w_{E_1}\left( e \right) \leq w_{E_2}\left( e \right),\forall e \in E_1$.
Let $s_{G_1} \left( \text{A},\text{B} \right)$ and $s_{G_2} \left( \text{A},\text{B} \right)$
be the measure value (\ref{eq:metric}) in these two graphs respectively.
Then the following condition holds.
\begin{equation} \label{eq:mono}
s_{G_1} \left( \text{A},\text{B} \right) \leq s_{G_2} \left( \text{A},\text{B} \right), \forall\textrm{A,B}\in V_1
\end{equation}
In plain language, if an edge is added to a graph or if a node/edge weight is increased in a graph,
then the proposed measure (\ref{eq:metric}) will not decrease for any pair of nodes.
This again is consistent with human intuition.
Among competitors defined in Section~\ref{sec:def},
Katz and EC have the same monotonicity property,
assuming that the additional edges or added weights do not cause (\ref{eq:katz}) to diverge.
In Adamic/Adar's (\ref{eq:adamicadar}), if the denominator is viewed as the reciprocal of $w_V( \textrm{C} )$
which implies a specific choice\footnote{Such
choice of weights is shown to be beneficial in social networks \cite{adamicadar}.
With Blink Model, this can easily be encoded as domain knowledge, to be described in Section~\ref{sec:weight}.
In fact similar schemes are used in Section~\ref{sec:result}.} of $w_V$,
then it also satisfies the monotonicity property.
Note that (inverse) ERD is not monotonic because
additional edges may form a new long path from A to B and hence increase (\ref{eq:relidist}).
\subsection{Generalizations} \label{sec:gen}
The measure (\ref{eq:metric}) is defined on a particular event,
``a path exists from A to B''.
This definition is a pair-wise proximity measure and is
useful in, for example, link-prediction applications.
For other applications, the definition (\ref{eq:metric}) can be
generalized to other events in the blinking graph:
e.g., for a set of nodes $S_\textrm{A}$ and another set of nodes $S_\textrm{B}$,
\begin{equation} \label{eq:alt1}
\tilde{s} \left( S_\textrm{A},S_\textrm{B} \right) = - \log ( 1-\textrm{P} \left[ \textrm{ a path exists from any of } S_\textrm{A} \textrm{ to each of }S_\textrm{B} \right] )
\end{equation}
Or for three nodes A, B and C,
\begin{multline} \label{eq:alt3}
\tilde{\tilde{s}} \left( \textrm{A,B,C} \right) = - \log ( 1-\textrm{P} [ \textrm{ a path exists from A to B but no path exists}\\ \textrm{from A to C } ] )
\end{multline}
And there are many more possibilities.
In particular, when edges are labeled to indicate types of relations \cite{cbmm},
the choice of event can involve edge labels.
The measure (\ref{eq:metric}) is a proximity measure.
Another variation is a distance measure:
\begin{equation} \label{eq:dist}
d \left( \text{A},\text{B} \right) = - \log \left( b \left( \text{A},\text{B} \right) \right)
\end{equation}
It is straightforward to verify that the above definition satisfies the triangle inequality.
It also has the monotonicity property.
It has an additivity property that differs from the one in Section~\ref{sec:prop}:
it holds for two graphs connected in series.
\subsection{Graph studies} \label{sec:graphs}
This section uses graph examples to compare the proposed proximity measure
with competitors to demonstrate that it is more consistent with human intuition.
Comparison with Adamic/Adar (\ref{eq:adamicadar}) or common-neighbor (replacing the sum
in (\ref{eq:adamicadar}) by a sum of 1's) is straightforward since they are limited to
length-2 paths, and an example is omitted.
In examples in Figures~\ref{fig:multi}--\ref{fig:mark},
we argue that human intuition would say that node A has stronger relation to node $\textrm{B}_2$ than to $\textrm{B}_1$.
A key notion is that human intuition not only prefers more and shorter paths,
but also prefers structures that are mutually corroborated.
If an edge or path has no corroboration, its existence in the graph may be a random
coincidence and hence does not indicate strong proximity.
On the flip side, proximity is strong for an aggregate structure that is impervious to edges randomly existing.
In discussing all examples, we assume uniform node weights of 1 and uniform edge weights of $w<1$.
Let us begin with Figure~\ref{fig:multi} of two undirected graphs.
It could be perceived that there are two random paths from $\textrm{A}_1$ to $\textrm{B}_1$,
while the two length-2 paths from $\textrm{A}_2$ to $\textrm{B}_2$ are less likely to be random because
the crossing edge provides mutual corroboration between them,
and therefore human intuition prefers $(\textrm{A}_2,\textrm{B}_2)$ over $(\textrm{A}_1,\textrm{B}_1)$.
Table~\ref{tbl:multi} lists various proximity scores, none of which is consistent with human intuition.
Shortest-path and EC conclude that $(\textrm{A}_1,\textrm{B}_1)$ and $(\textrm{A}_2,\textrm{B}_2)$ are equally related,
while CFEC \cite{koren2006} and commute-time \cite{fouss} conclude that $(\textrm{A}_1,\textrm{B}_1)$ is stronger than $(\textrm{A}_2,\textrm{B}_2)$.
In contrast, the Blink Model score is $-2\cdot\log(1-w^2)$ for $(\textrm{A}_1,\textrm{B}_1)$
and $-\log(1-2w^2-2w^3+5w^4-2w^5)$ for $(\textrm{A}_2,\textrm{B}_2)$, and the latter is strictly larger than the former.
This shows a weakness of EC in that it sees no effect from the crossing edge in the second graph;
the EC variant of CFEC \cite{koren2006} exacerbates this trait;
another EC variant \cite{faloutsos,tong} adds a universal sink to the EC model,
and it is straightforward to verify that it also ranks $(\textrm{A}_1,\textrm{B}_1)$ as stronger than $(\textrm{A}_2,\textrm{B}_2)$,
and similar effects of the universal sink have been reported in \cite{koren2006}.
A spectrum of measures was proposed in \cite{yen}, where shortest-path is one end of the spectrum
while commute-time is the other end;
although we are unable to judge intermediate variants of \cite{yen},
Table~\ref{tbl:multi} suggests that both of its corner variants produce counterintuitive rankings for Figure~\ref{fig:multi}.
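The claim that the second score is strictly larger can be checked numerically from the two closed-form expressions quoted above; the following Python snippet is a hedged illustration that uses only those expressions.
\begin{verbatim}
import math

def s_pair1(w):   # (A1, B1): two independent length-2 paths
    return -2.0 * math.log(1.0 - w * w)

def s_pair2(w):   # (A2, B2): two length-2 paths plus the crossing edge
    return -math.log(1.0 - 2*w**2 - 2*w**3 + 5*w**4 - 2*w**5)

for w in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(w, s_pair1(w) < s_pair2(w))   # True for every tested w
\end{verbatim}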
\begin{figure}
\centering
\includegraphics[width=2.4in]{multi}
\caption{A pair of graph examples.}
\label{fig:multi}
\end{figure}
\begin{figure}
\centering
\subfloat[]{\includegraphics[width=2.4in]{pagerank}} \\
\subfloat[]{\includegraphics[width=2.4in]{katz}} \\
\subfloat[]{\includegraphics[width=2.1in]{relidist}}
\caption{Graph examples for PPR, Katz and ERD.}
\label{fig:many}
\end{figure}
\begin{table}[t]
\centering
\caption{Some proximity measures on Figure~\ref{fig:multi}.}
\label{tbl:multi}
\small
\begin{tabular}{|l|c|c|} \hline
Measure & $\textrm{A}_1$,$\textrm{B}_1$ & $\textrm{A}_2$,$\textrm{B}_2$ \\ \hline
1/shortest-path & $1/(2w)$ & $1/(2w)$ \\ \hline
1/commute-time & $1/(8w)$ & $1/(10w)$ \\ \hline
EC & $w$ & $w$ \\ \hline
CFEC & $w$ & $8w/9$ \\ \hline
\end{tabular}
\end{table}
Next let us consider Figure~\ref{fig:many}(i).
There are two equal-length paths from A to $\textrm{B}_1$ and to $\textrm{B}_2$,
but there are more paths going from A to $\textrm{B}_2$.
So there is more reason to believe that the connection between A and $\textrm{B}_2$
is not a coincidence than there is for $\textrm{B}_1$.
The PPR scores are $\text{score}_\text{PPR} \left( \textrm{A,B}_1 \right) = (1-\alpha)^2/2$
and $\text{score}_\text{PPR} \left( \textrm{A,B}_2 \right) = (1-\alpha)^2/4$.
In other words, PPR considers that A has greater proximity to $\textrm{B}_1$ than to $\textrm{B}_2$,
and this holds true for any parameterization.
In contrast, the Blink Model score (\ref{eq:metric}) is higher for $\textrm{B}_2$ than $\textrm{B}_1$.
Consider the Katz measure (\ref{eq:katz}) on Figure~\ref{fig:many}(ii).
It is straightforward to verify that, with any $w$ and $\beta$ values,
we have $\text{score}_\text{Katz} \left( \textrm{A,B}_1 \right)=\text{score}_\text{Katz}\left( \textrm{A,B}_2 \right)$,
including when $w$ is 1 and (\ref{eq:katz}) becomes the original Katz.
In other words, the (modified) Katz measure cannot discern $\textrm{B}_1$ and $\textrm{B}_2$ relative to A,
because it sees no difference between the four paths to $\textrm{B}_1$ and those to $\textrm{B}_2$.
In contrast, the Blink Model measure (\ref{eq:metric}) recognizes that
the edge to the left of A, which all paths to $\textrm{B}_1$ depend on, has no corroboration,
and we have $s \left( \text{A},\text{B}_1 \right) < s \left( \text{A},\text{B}_2 \right)$,
consistent with human intuition.
Next consider the ERD measure (\ref{eq:relidist}) on Figure~\ref{fig:many}(iii),
and we have $\text{score}_\text{ERD} \left( \textrm{A,B}_1 \right) = 1$
and $\text{score}_\text{ERD} \left( \textrm{A,B}_2 \right) > 1$.
Since ERD is an inverse proximity, the conclusion is that
A has greater proximity to $\textrm{B}_1$ than to $\textrm{B}_2$,
which is inconsistent with human intuition.
The Blink Model ranks them in the reverse order.
Next consider SimRank \cite{simrank} on a three-node undirected complete graph
and on a four-node undirected complete graph.
It is straightforward to verify that the SimRank score, under any parameterization,
is higher for a pair of nodes in the former than in the latter,
and in fact the score always decreases as the size of a complete graph increases.
This is contrary to our Blink Model and human intuition.
To be fair, SimRank was designed as a similarity measure and was not intended
to be a proximity measure. This is also the likely reason that
its performance in \cite{liben} was reported to be mediocre.
\begin{figure}
\centering
\includegraphics[width=3.3in]{mark}
\caption{A graph example.}
\label{fig:mark}
\end{figure}
The last example graph is Figure~\ref{fig:mark}, which demonstrates
the advantage of measure (\ref{eq:metric}) over a class of methods.
Continuing the intuition from Figure~\ref{fig:multi},
on the left there exists mutual corroboration between the top pair of length-3 paths to $\text{B}_1$
and between the bottom pair of length-3 paths, but none exists between the two pairs.
On the right there exists mutual corroboration among all four length-3 paths to $\text{B}_2$,
and the proximity to $\text{B}_2$ is perceived as more robust to a human.
This is analogous to using four strings to reinforce four poles, and a human
is more likely to use a pattern similar to the right half of Figure~\ref{fig:mark} than the left.
The Blink Model recognizes that $\text{B}_2$ is more connected to A than $\text{B}_1$,
e.g. when $w$ is 0.5, $s \left( \text{A},\text{B}_1 \right) = 0.795$
and $s \left( \text{A},\text{B}_2 \right) = 0.809$.
Consider any algorithm that operates on local storage per node:
it starts with special storage at source node A, with all other nodes starting equal;
on each successive iteration, it updates the information stored in each node
based on the information stored in adjacent nodes from the previous iteration.
Such an algorithm can easily compute, for example, the shortest-path distance from A,
the PPR score, the Katz score, and the EC score.
However, such an algorithm, even with an infinite amount of storage and an infinite number of iterations,
cannot distinguish $\text{B}_1$ and $\text{B}_2$ in Figure~\ref{fig:mark};
in fact, the eight nodes that are distance-2 from A are also indistinguishable.
Algorithms of this type encompass a large range of methods.
In particular, any matrix computation based on the adjacency matrix or variants of the adjacency matrix,
which includes almost all measures that have a random-walk-based definition,
falls into this category,
and no such linear-algebraic computation can determine that $\text{B}_2$ is closer than $\text{B}_1$.
This is a blessing and a curse: the Blink Model can discern such cases correctly,
but is inherently hard to compute.
\subsection{Training weights} \label{sec:weight}
This section addresses a practical issue of using the Blink Model in an application:
how to set edge and node weights.
There are many ways, and we describe a simple yet practical method to do so by training a few parameters.
Let two functions $f_E:E \rightarrow \mathbb{R}_{>0}$ and $f_V:V \rightarrow \mathbb{R}_{>0}$ represent domain knowledge.
In applications where we have no domain knowledge beyond the topology $G$, we simply set $f_E$ and $f_V$ to 1 everywhere.
In applications where we do, we assume that $f_E$ and $f_V$ are larger for more important or more reliable edges and nodes,
and that their values exhibit linearity:
two parallel edges $e_1$ and $e_2$ can be replaced by a single edge $e_3$ with $f_E(e_3)=f_E(e_1)+f_E(e_2)$.
Our method sets graph edge and node weights as:
\begin{equation} \label{eq:weight}
\begin{aligned}
w_E(e) & = 1 - \left( 1 - b_1 \right) ^ {f_E(e)}, \forall e \in E \\
w_V(v) & = 1 - \left( 1 - b_2 \right) ^ {f_V(v)}, \forall v \in V
\end{aligned}
\end{equation}
where $b_1,b_2\in(0,1)$ are two tunable parameters\footnote{One way to interpret $b_1$ and $b_2$,
or $w_E$ and $w_V$ in general, is that they encode varying personalities, while
the analysis engine (\ref{eq:metric}) is invariant. This is similar to how different
people, when presented with the same evidence, may make different decisions.
In Section~\ref{sec:result} for example, we scan for $b_1$ and $b_2$ that match the collective
behavior of physicists or Wikipedia contributors.}.
It is straightforward to verify that the linearity assumption on $f_E$
is consistent with the arithmetic in Section~\ref{sec:basic}.
Parameters $b_1$ and $b_2$ for Blink Model are similar to $\alpha$ for PageRank and $\beta$ for Katz,
and we search for best values by using training data in an application.
If $f_E$ and $f_V$ have tunable parameters, those can be trained in the same process.
Since we introduce only two parameters, the training process is straightforward
and can be brute-force scan and/or greedy search.
This method is applicable to all proximity measures and is used for all in Section~\ref{sec:result}.
One caveat is that certain measures work better with linear weights:
\begin{equation} \label{eq:linearweight}
\begin{aligned}
w_E(e) & = b_1 \cdot f_E(e), \forall e \in E \\
w_V(v) & = b_2 \cdot f_V(v), \forall v \in V
\end{aligned}
\end{equation}
For example, we observe empirically that PPR works better with (\ref{eq:linearweight}),
which is intuitive given the linearity assumption on $f_E$,
while Modified Katz and ERD prefer (\ref{eq:weight}).
Note that when $b_1$ and $b_2$ are small, (\ref{eq:weight}) asymptotically becomes (\ref{eq:linearweight}).
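A minimal Python sketch of both mappings, with hypothetical domain-knowledge values:
\begin{verbatim}
def exponential_weights(f, b):
    """Map domain-knowledge values f(.) to blinking weights per (eq:weight)."""
    return {key: 1.0 - (1.0 - b) ** val for key, val in f.items()}

def linear_weights(f, b):
    """Alternative linear mapping per (eq:linearweight)."""
    return {key: b * val for key, val in f.items()}

f_E = {("A", "C"): 1.0, ("C", "B"): 2.0}    # hypothetical domain knowledge
print(exponential_weights(f_E, b=0.1))       # {('A','C'): 0.1, ('C','B'): 0.19}
print(linear_weights(f_E, b=0.1))            # {('A','C'): 0.1, ('C','B'): 0.2}
# For small b the two mappings nearly coincide, as noted above.
\end{verbatim}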
\section{Experimental Results} \label{sec:result}
This section compares the predictive power of our method
against competitors on two temporal link prediction tasks.
Data and benchmarking code are released at \cite{data}.
All blink-model runs use the variation of Section~\ref{sec:mid};
single-thread run time is 2.9--5.0 seconds per starting node in coauthorship graphs
and 5.3 seconds in the Wikipedia graph,
on a Linux server with Intel E7-8837 processors at 2.67GHz.
We use the method of Section~\ref{sec:weight} with two scenarios:
graph \#1 is with no domain knowledge, $f_E$ and $f_V$ being 1 for all,
and hence with uniform edge and node weights $b_1$ and $b_2$;
graph \#2 is with domain knowledge expressed in $f_E$ and $f_V$.
In Section~\ref{sec:arxiv}, we follow the practice of \cite{liben} and scan parameters for each
predictor without separating training and test sets.
In Section~\ref{sec:wikipedia}, we separate data into a training set and a test set,
and the training set is used to scan for the best parameterization,
while the test set is used for evaluation.
Best parameter values for each predictor are listed in Tables \ref{tbl:arxiv} and \ref{tbl:wiki};
no-effect parameters are omitted, e.g., PPR scores are invariant with respect to $b_1$ and $b_2$.
We focus on evaluating individual proximity measures and do not compare with ensemble methods \cite{lichtenwalter}.
We use structured input data as is, and do not utilize text of arXiv papers or Wikipedia pages.
In real life applications, natural language processing (NLP)
can be used to provide additional edges,
edge labels and more meaningful edge weights \cite{cbmm}, and thereby enhance prediction accuracy.
Furthermore, such data from NLP are inherently noisy,
which fits perfectly with the underlying philosophy of the Blink Model that any evidence is uncertain.
\begin{table*}[ht]
\centering
\caption{Statistics of the coauthorship networks. Entry
format is our-number/number-reported-in-\cite{liben}.
Column Collaborations denotes pairwise relations in the training period.
Column $|E_\textrm{old}|$ denotes pairwise relations among Core authors in the training period.
Column $|E_\textrm{new}|$ denotes new pairwise relations among Core authors formed in the test period.}
\label{tbl:stats}
\footnotesize
\setlength\tabcolsep{1pt}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline
& \multicolumn{3}{|c|}{Training Period} & \multicolumn{3}{|c|}{Core} \\ \cline{2-7}
& Authors & Articles & Collaborations & Authors & $|E_\textrm{old}|$ & $|E_\textrm{new}|$ \\ \hline
astro-ph & 5321/5343 & 5820/5816 & 41575/41852 & 1563/1561 & 6189/6178 & 5719/5751 \\ \hline
cond-mat & 5797/5469 & 6698/6700 & 23373/19881 & 1387/1253 & 2367/1899 & 1514/1150 \\ \hline
gr-qc & 2150/2122 & 3292/3287 & 5800/5724 & 484/486 & 523/519 & 397/400 \\ \hline
hep-ph & 5482/5414 & 10277/10254 & 47788/47806 & 1786/1790 & 6629/6654 & 3281/3294 \\ \hline
hep-th & 5295/5241 & 9497/9498 & 15963/15842 & 1434/1438 & 2316/2311 & 1569/1576 \\ \hline
\end{tabular}
\end{table*}
\subsection{Link prediction in arXiv} \label{sec:arxiv}
This section replicates the experiments in \cite{liben} which are the following.
For five areas in arXiv, given the coauthors of papers published in the training period 1994-1996,
the task is to predict new pairwise coauthorship relations formed in the test period 1997-1999.
Predictions are only scored for those within \emph{Core} authors, defined as
those who have at least 3 papers in the training period and at least 3 papers in the test period;
this Core list is unknown to predictors.
Table~\ref{tbl:stats} gives statistics of the five graphs and prediction tasks.
Let $E_\textrm{new}$ be the set of new pairwise coauthorship relations among Core authors formed in the test period.
Let $E_p$ be the top $|E_\textrm{new}|$ pairs among Core authors that are predicted by a predictor,
and the score of this predictor is defined as $|E_p \cap E_\textrm{new}|/|E_\textrm{new}|$.
Table~\ref{tbl:stats} shows that our setup matches \cite{liben} closely for
four of the five benchmarks.
We focus on benchmarks astro-ph, hep-ph and hep-th,
for the following reasons.
Benchmark cond-mat differs significantly from that reported in \cite{liben},
and thus is not a valid benchmark to compare against the oracle of \cite{liben}.
In gr-qc, 131 out of the 397 new relations were formed by a single project
which resulted in three papers in the test period
with nearly identical author lists of 45--46 authors, \cite{blackhole} being one of the three.
Because the size of gr-qc is not large enough relative to this single event,
the scores of the predictors are distorted.
Thus it is not a surprise that \cite{liben} reported that the best predictor for
gr-qc is one that deletes 85-90\% of edges as a preprocessing step,
and that the same predictor delivers poor performance on the other benchmarks.
\begin{table*}[t]
\centering
\caption{Comparison of predictor accuracies on coauthorship networks.
$A$ denotes the accuracy score of a predictor.
$R$ denotes the ratio of $A$ over that of oracle of \cite{liben}.}
\label{tbl:arxiv}
\footnotesize
\setlength\tabcolsep{1pt}
\begin{tabular}{|l|l|c|c|c|c|c|c|} \hline
& & \multicolumn{2}{|c|}{astro-ph} & \multicolumn{2}{|c|}{hep-ph} & \multicolumn{2}{|c|}{hep-th} \\ \cline{3-8}
& parameters & $A$ & $R$ & $A$ & $R$ & $A$ & $R$ \\ \hline
Oracle of \cite{liben} & varying & 8.55\% & & 7.2036\% & & 7.9407\% & \\ \hline
Oracle of Blink, graph \#1 & varying & 9.075\% & 1.061 & 8.3816\% & 1.164 & 8.8592\% & 1.116 \\ \hline
Blink, graph \#1 & $b_1=0.5$, $b_2=0.4$ & 7.7461\% & 0.906 & 7.8025\% & 1.083 & 8.0306\% & 1.011 \\ \hline
Blink, graph \#2 & $b_1=0.8$, $b_2=0.6$, $\gamma=5$ & 10.264\% & {\bf{1.200}} & 9.6922\% & {\bf{1.345}} & 9.0504\% & {\bf{1.140}} \\ \hline
PPR, graph \#2 & $\alpha=0.50$ & 8.5330\% & 0.998 & 6.7358\% & 0.935 & 7.9031\% & 0.995 \\ \hline
Modified Katz, graph \#2 & $b_1=0.5$, $b_2=0.1$, $\beta=0.1$, $\gamma=5$ & 8.4106\% & 0.984 & 8.2292\% & 1.142 & 7.8394\% & 0.987 \\ \hline
ERD, 10K samples, graph \#1 & $b_1=0.9$, $b_2=0.9$ & 8.4281\% & 0.986 & 8.1682\% & 1.134 & 7.1383\% & 0.899 \\ \hline
ERD, 10K samples, graph \#2 & $b_1=0.9$, $b_2=0.9$, $\gamma=4$ & 9.5471\% & 1.117 & 8.7473\% & 1.214 & 7.1383\% & 0.899 \\ \hline
\end{tabular}
\end{table*}
In Table~\ref{tbl:arxiv}, the oracle of \cite{liben} is the highest score for
each benchmark, by all predictors including meta-approaches;
note that no single predictor has such performance,
and PPR and Katz on uniformly weighted graphs are dominated by the oracle.
In graph \#1, each paper is modeled by a hyperedge with uniform weight.
Allowing the best $b_1$ and $b_2$ per graph leads to the oracle of Blink which easily
beats the oracle of \cite{liben};
for a single parameterization, we get the next row where Blink wins two out of three.
Such performance already puts Blink above all predictors reported in \cite{liben}.
In graph \#2, each paper is modeled as a node,
and it connects to and from each of its authors with two directed simple edges.
We provide domain knowledge through the following $f_E$ and $f_V$.
For an edge $e=(\textrm{X},\textrm{Y})$, $f_E(e) = 1 / \max ( 1 , \log_\gamma d_\textrm{X} )$,
where $d_\textrm{X}$ is the out degree of X.
For a paper node, we set $f_V$ to infinity and hence weight to 1.
For an author node, $f_V(\textrm{author}) = 1 / \max ( 1 , \log_\gamma m_\textrm{author} )$,
where $m_\textrm{author}$ is the number of coauthors of this author in the training period.
$\gamma$ is a tunable parameter
and is scanned with other parameters and reported in Table~\ref{tbl:arxiv}.
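A hedged Python sketch of these $f_E$ and $f_V$ choices follows; the degrees, coauthor counts, and $\gamma$ are placeholders, and paper nodes, whose $f_V$ is set to infinity, are omitted.
\begin{verbatim}
import math

def f_edge(out_degree_of_source, gamma):
    """f_E for an edge (X, Y): 1 / max(1, log_gamma(out-degree of X))."""
    return 1.0 / max(1.0, math.log(out_degree_of_source, gamma))

def f_author(n_coauthors, gamma):
    """f_V for an author node: 1 / max(1, log_gamma(#coauthors))."""
    return 1.0 / max(1.0, math.log(n_coauthors, gamma))

gamma = 5
print(f_edge(out_degree_of_source=50, gamma=gamma))   # about 0.41
print(f_author(n_coauthors=25, gamma=gamma))          # 0.5
\end{verbatim}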
With graph \#2, the predictor scores are asymmetric.
For PPR and Katz, we experimented with max, min, sum and product of
two directional scores, and the max gives the best results and is reported.
For the Blink Model, we define symmetric
$\textrm{score}(\textrm{A,B}) = - \log ( 1-b ( \text{A},\text{B} ) \cdot b ( \text{B},\text{A} ) )$.
For ERD, the shortest-path distance in a Monte Carlo sample is defined as the shorter
between the A-to-B path and the B-to-A path.
Table~\ref{tbl:arxiv} demonstrates the proposed proximity measure's superior predictive power
over competitors, as expected from discussions in Section~\ref{sec:metric},
and a single parameterization of Blink Model outperforms oracle of \cite{liben} by 14--35\%.
Figure~\ref{fig:roc} shows receiver-operating-characteristic (ROC) curves.
Blink with graph \#2 clearly dominates the ROC curves.
There is not a clear runner-up among the three competitors.
Note that ERD \#2 performs well on hep-ph for early guesses at around 5\% true positive rate,
but it degrades quickly after that and becomes the worst of the four by the 20\% rate.
\begin{figure}
\centering
\subfloat[astro-ph]{\includegraphics[width=1.7in]{roc_as}}
\subfloat[hep-ph]{\includegraphics[width=1.7in]{roc_ph}}\\
\subfloat[hep-th]{\includegraphics[width=1.7in]{roc_th}}
\subfloat{\includegraphics[width=1.7in]{legend}}
\caption{ROC curves on coauthorship networks.}
\label{fig:roc}
\end{figure}
\begin{table*}[ht]
\centering
\caption{Statistics of Wikipedia citation networks and prediction tasks.
$n_\textrm{page}$ denotes the number of pages;
$d_{2014}$ and $d_{2015}$ denote the average number of out-going citations on a 2014/2015 page;
$d_{2014,\textrm{unique}}$ and $d_{2015,\textrm{unique}}$ denote the average number of unique out-going citations on a 2014/2015 page;
$n_\textrm{prediction}$ denotes the average number of additions to predict per task.}
\label{tbl:wikistats}
\footnotesize
\setlength\tabcolsep{1pt}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline
& $n_\textrm{page}$ & $d_{2014}$ & $d_{2014,\textrm{unique}}$ & $d_{2015}$ & $d_{2015,\textrm{unique}}$ & $n_\textrm{prediction}$ \\ \hline
2014 all pages & 4731544 & 24.66 & 23.99 & & & \\ \hline
2015 all pages & 4964985 & & & 24.90 & 24.23 & \\ \hline
qualified tasks & 93845 & 156.1 & 151.0 & 162.4 & 157.0 & 10.06 \\ \hline
training tasks & 1000 & 142.2 & 137.9 & 147.5 & 143.1 & 10.14 \\ \hline
test tasks & 1000 & 159.0 & 153.3 & 165.7 & 159.6 & 9.85 \\ \hline
trimmed test tasks & 949 & 157.6 & 151.7 & 164.3 & 157.9 & 4.63 \\ \hline
\end{tabular}
\end{table*}
\subsection{Link prediction in Wikipedia} \label{sec:wikipedia}
Our second experiment is
predicting additions of inter-wikipage citations in Wikipedia.
The rationale is that citation links reflect
Wikipedia contributors' perception of relation strength between subjects.
We obtain an inter-wikipage-citation graph from \cite{dbpedia14}
which was based on Wikipedia dumps generated in April/May 2014,
and another graph from \cite{dbpedia15} which was based on those in February/March 2015.
In both graphs, each node represents a Wikipedia page,
and each directed edge from node A to node B represents a citation on page A to page B.
The ordering of out-going citations from a page A, as they appear in the text on page A,
is known and will be utilized by some predictors.
Statistics are shown in Table~\ref{tbl:wikistats}.
4,631,780 of the 2014 nodes are mapped to 2015 nodes by exact name matching,
and another 87,368 are mapped to 2015 nodes by redirection data from \cite{dbpedia15}
which are pages that have been renamed or merged.
The remaining 12,396 of the 2014 nodes cannot be mapped:
the majority are Wikipedia pages that have been deleted,
and some are due to noise in the data collection of \cite{dbpedia14,dbpedia15}.
Such noise is a small fraction and has negligible impact on our measurements.
For each mapped node A,
we identify $S_{\textrm{A},2014}$ as the set of 2014 nodes that page A cites in the 2014 graph and that remain in the 2015 graph,
$S_{\textrm{A},2015}$ as the set of 2014 nodes that page A cites in the 2015 graph,
and $X_{\textrm{A},2014}$ as the set of 2014 nodes that cite page A.
If page A satisfies the condition that
$5 \leq |\left( S_{\textrm{A},2015} \backslash S_{\textrm{A},2014} \right) \backslash X_{\textrm{A},2014} | \leq |S_{\textrm{A},2014}|\cdot20\%$,
we consider page A as a qualified prediction task.
The rationale behind the size limits is to choose test pages that have undergone thoughtful edits
and whose 2014 page contents were already relatively mature;
the rationale for excluding in-coming neighbors $X_{\textrm{A},2014}$ is to make
the tasks more challenging, since simple techniques like heavily weighting in-coming edges have no effect.
Statistics are shown in Table~\ref{tbl:wikistats}.
The number of qualified tasks is large,
and we randomly sample a 1000-task training set and a 1000-task test set.
Tasks vary in difficulty. If edits were to make a page more complete,
the new links are often reasonably predictable and some are obvious.
However, if edits were driven by a recent event, the new links are next to impossible to predict.
We form a trimmed test set by removing from the test set
targets that are too easy or too difficult, utilizing the outputs of four
best-performing predictors on the test set: Adamic/Adar, Blink Model \#2, Personalized PageRank \#2,
and Modified Katz \#2.
A new link is removed if all four predictors rank it within the top 20 (too easy) or if all four rank it beyond 1000 (too difficult).
The removed prediction targets are excluded from predictor outputs during mean-average-precision evaluation.
The results are listed in the last row of Table~\ref{tbl:wikistats}
and the last two columns of Table~\ref{tbl:wiki}.
\begin{table*}[ht]
\centering
\caption{Comparison of predictor accuracies on additions of inter-wikipage citations in Wikipedia.
Each predictor uses its best parameters selected based on the training set.
$R$ denotes the ratio of MAP of a predictor over MAP of Adamic/Adar.}
\label{tbl:wiki}
\footnotesize
\setlength\tabcolsep{1pt}
\begin{tabular}{|l|l|c|c|c|c|c|c|} \hline
& & \multicolumn{2}{|c|}{training} & \multicolumn{2}{|c|}{test} & \multicolumn{2}{|c|}{trimmed test} \\ \cline{3-8}
& parameters & MAP & $R$ & MAP & $R$ & MAP & $R$ \\ \hline
Adamic/Adar & & 0.0291 & & 0.0281 & & 0.0163 & \\ \hline
Blink, graph \#1 & $b_1=0.5$, $b_2=0.1$ & 0.0295 & 1.014 & 0.0263 & 0.937 & 0.0166 & 1.019 \\ \hline
Blink, graph \#2 & $b_1=0.8$, $b_2=0.8$, $\gamma=10$ & 0.0362 & 1.244 & 0.0362 & {\bf{1.289}} & 0.0233 & {\bf{1.428}} \\ \hline
PPR, graph \#1 & $\alpha=0.5$ & 0.0299 & 1.029 & 0.0291 & 1.038 & 0.0186 & 1.140 \\ \hline
PPR, graph \#2 & $\alpha=0.2$, $\gamma=500$ & 0.0321 & 1.104 & 0.0309 & 1.100 & 0.0206 & 1.263 \\ \hline
Modified Katz, graph \#1 & $\beta=5\textrm{E-}6$ & 0.0269 & 0.925 & 0.0241 & 0.860 & 0.0151 & 0.924 \\ \hline
Modified Katz, graph \#2 & $b_1=0.8$, $b_2=0.8$, $\beta=0.1$, $\gamma=10$ & 0.0341 & 1.173 & 0.0328 & 1.170 & 0.0198 & 1.213 \\ \hline
ERD, 100 samples, graph \#1 & $b_1=0.4$, $b_2=0.9$ & 0.0266 & 0.914 & 0.0233 & 0.830 & 0.0162 & 0.996 \\ \hline
ERD, 100 samples, graph \#2 & $b_1=0.9$, $b_2=0.9$, $\gamma=10$ & 0.0238 & 0.817 & 0.0218 & 0.778 & 0.0154 & 0.944 \\ \hline
\end{tabular}
\end{table*}
In Table~\ref{tbl:wiki}, mean average precision (MAP) is the accuracy score.
Unlike in Section~\ref{sec:arxiv}, the relations to predict
are asymmetric (page A adds a citation to page B)
and hence all predictors use their one-directional A-to-B score as is.
When multiple node B's have the same score, we use their in-coming degrees as a tie breaker:
a node with higher in-coming degree is ranked first.
Adamic/Adar is used as a reference, as it represents what can be achieved through good-quality local analysis.
In graph \#2, we provide domain knowledge through the following $f_E$ and $f_V$.
For an edge from node X to node Y that represents the $i^\textrm{th}$ citation link on page X:
\begin{equation}
f_E(\textrm{edge}) = \frac{\delta_\textrm{Y,X}}{\max \left( 1 , \log_\gamma i \right) \cdot \max \left( 1 , \log_\gamma{d_\textrm{Y,in}} \right)}
\end{equation}
where $\delta_\textrm{Y,X}$ is 2 if an edge exists from Y to X, and 1 otherwise.
$\gamma$ is again a tunable parameter.
The above scheme gives higher weight to a citation link if it is located at an earlier location on a page,
or if it points to a less-cited page, or if a returning citation exists.
Our $f_V$ function is a direct adaptation of Adamic/Adar's (\ref{eq:adamicadar}):
$f_V(\textrm{node}) = 1/( \log{d_\textrm{node,in}}+\log{d_\textrm{node,out}} )$.
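A hedged Python sketch of these functions is given below; the citation position, in-degree, and $\gamma$ values are placeholders, and natural logarithms are assumed in $f_V$.
\begin{verbatim}
import math

def f_edge(i, d_y_in, returning, gamma):
    """f_E for the i-th citation link on page X pointing to page Y.
    d_y_in: in-degree of Y; returning: True if Y also cites X."""
    delta = 2.0 if returning else 1.0
    return delta / (max(1.0, math.log(i, gamma)) *
                    max(1.0, math.log(d_y_in, gamma)))

def f_node(d_in, d_out):
    """f_V adapted from Adamic/Adar's (eq:adamicadar)."""
    return 1.0 / (math.log(d_in) + math.log(d_out))

gamma = 10
print(f_edge(i=3, d_y_in=1000, returning=True, gamma=gamma))   # about 0.67
print(f_node(d_in=50, d_out=120))
\end{verbatim}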
To get best results, PPR uses (\ref{eq:linearweight}) while Katz and ERD use (\ref{eq:weight}).
A remarkable observation about Katz with graph \#2 is that its best
performance occurs when (\ref{eq:katz}) is almost divergent:
with $b_1=0.8$, $b_2=0.8$ and $\gamma=10$, the divergence limit for $\beta$ is 0.1075.
For ERD, the number of samples is reduced relative to Section~\ref{sec:arxiv} because the Wikipedia graph is much larger and denser.
Blink Model with graph \#2 is clearly the best performer in Table~\ref{tbl:wiki}.
\begin{figure}[ht]
\centering
\subfloat[Test set]{\includegraphics[width=3in]{test}}\\
\subfloat[Trimmed test set]{\includegraphics[width=3in]{trimmed_test}}
\caption{Accuracy curves on Wikipedia citation network.}
\label{fig:curve}
\end{figure}
Figure~\ref{fig:curve} shows a more detailed comparison by plotting
true positive rate as a function of the number of predictions made.
Blink Model \#2 clearly dominates the curves; for example, it needs 20 fewer predictions than the
closest competitor to reach a 20\% true positive rate on the test set.
| {
"timestamp": "2016-12-23T02:00:42",
"yymm": "1612",
"arxiv_id": "1612.07365",
"language": "en",
"url": "https://arxiv.org/abs/1612.07365",
"abstract": "This paper proposes a new graph proximity measure. This measure is a derivative of network reliability. By analyzing its properties and comparing it against other proximity measures through graph examples, we demonstrate that it is more consistent with human intuition than competitors. A new deterministic algorithm is developed to approximate this measure with practical complexity. Empirical evaluation by two link prediction benchmarks, one in coauthorship networks and one in Wikipedia, shows promising results. For example, a single parameterization of this measure achieves accuracies that are 14-35% above the best accuracy for each graph of all predictors reported in the 2007 Liben-Nowell and Kleinberg survey.",
"subjects": "Social and Information Networks (cs.SI)",
"title": "A Proximity Measure using Blink Model",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9559813513911654,
"lm_q2_score": 0.7401743620390163,
"lm_q1q2_score": 0.7075928868871525
} |
https://arxiv.org/abs/hep-ph/0510295 | DGLAP evolution of truncated moments of parton densities within two different approaches | We solve the LO DGLAP QCD evolution equation for truncated Mellin moments of the nucleon nonsinglet structure function. The results are compared with those, obtained in the Chebyshev-polynomial approach for $x$-space solutions. Computations are performed for a wide range of the truncation point $10^{-5}\leq x_0\leq 0.9$ and $1\leq Q^2\leq 100 {\rm GeV}^2$. The agreement is perfect for higher moments ($n\geq 2$) and not too large $x_0$ ($x_0\leq 0.1$), even for a small number of terms in the truncated series (M=4). The accuracy of the truncated moments method increases for larger $M$ and decreases very slowly with increasing $Q^2$. For M=30 the relative error in a case of the first moment at $x_0\leq 0.1$ and $Q^2=10 {\rm GeV}^2$ doesn't exceed 5% independently on the shape of the input parametrisation. This is a quite satisfactory result. Using the truncated moments approach one can avoid uncertainties from the unmeasurable $x\to 0$ region and also study scaling violations without making any assumption on the shape of input parametrisation of parton distributions. Therefore the method of truncated moments seems to be a useful tool in further QCD analyses. | \section{Introduction}
The DGLAP evolution \cite{b1} is the most familiar resummation
technique, which describes scaling violations of parton densities.
Measurements of deep-inelastic scattering structure functions of the nucleon
allow the determination of free parameters of the input parton distributions
and the verification of so-called sum rules. There exist different sum rules
for unpolarised and polarised structure functions, which refer to the moments
of the structure functions. From a phenomenological point of view, however, QCD
tests based on the moments $\int_{0}^{1} dx\, x^{n-1} F(x,Q^2)$ are
unreliable. The limit $x\rightarrow 0$, which implies that the invariant
energy $W^2$ of the inelastic lepton-hadron scattering becomes infinite
($W^2=Q^2(1/x-1)$), will never be attained experimentally. In the theoretical
approach to structure functions there are two ways to avoid the problem of
dealing with the unphysical region $x\rightarrow 0$. The first one is to work in
$x$-space and obtain directly the evolution of the parton distributions (not of
their moments). Then one has integro-differential equations (e.g. the DGLAP equation)
in $x$ and $Q^2$, but the integration over $x$ runs only over $x\geq x_0$. In this
case no extrapolation to the unmeasurable $x\rightarrow 0$ region is needed.
The second way is to use evolution equations for truncated moments of structure
functions, $\int_{x_1}^{x_2} dx\, x^{n-1} F(x,Q^2)$, instead of full
moments. In the standard method of solving QCD evolution equations, one
takes the (full) Mellin transform of these equations and obtains analytical
solutions. After the inverse Mellin transform (performed numerically) one
recovers the solutions of the original equations in $x$-space. In this way, e.g.
in the case of the DGLAP approximation, the integro-differential equations for the parton
distributions $q(x,Q^2)$ turn, after the Mellin transform, into simple diagonalised
differential equations in the moment variable $n$. The only problem is the knowledge of
the input parametrisation over the whole region $0\leq x \leq 1$, which is necessary
to determine the initial moments of the distribution functions.
Using the truncated moments approach one can avoid uncertainties from the
unmeasurable $x\rightarrow 0$ region and also obtain important theoretical
results incorporating perturbative QCD effects at small $x$, which could be
verified experimentally. The use of truncated moments of parton distributions in solving
the DGLAP equations has been presented in \cite{b5}. The authors have shown that the
evolution equations for truncated moments, though not diagonal, can be solved
with quite good precision for $n\geq 2$. This is because each $n$-th truncated
moment couples only with the ($n+j$)-th ($j\geq 0$) truncated moments. In \cite{b6}
the truncated moments method has been adapted to the double logarithmic $\ln^2 x$
resummation. There are a number of papers in which the best-known methods for
solving the $Q^2$ evolution equations for parton distributions are reviewed
(see e.g. \cite{b7},\cite{b8}). Their authors compare the DGLAP framework
for the full Mellin moments method with brute-force or Laguerre-polynomial
approaches used for the $x$-space version of the evolution equation. In this paper
we compare the solutions of LO DGLAP $Q^2$ evolution equations written for the
truncated Mellin moments of the structure functions with those obtained by
using the Chebyshev-polynomial method in $x$-space. In both these approaches we
compute the truncated moments $\int_{x_0}^{1} dx x^{n-1} F(x,Q^2)$. As a test
structure function $F(x,Q^2)$ we take two different spin-like nonsinglet parton
distributions. We perform the computations for a wide range of the truncation
point $10^{-5}\leq x_0\leq 0.9$ and $1\leq Q^2\leq 100 {\rm GeV}^2$. In the
next section we briefly recall the idea of the evolution equation for truncated
moments of parton distributions. The main topic of our paper, i.e. the
comparison of the Chebyshev-polynomial and truncated moments techniques in
solving the LO DGLAP evolution equation for the nonsinglet structure function,
is presented in Section 3. Finally, Section 4 contains conclusions.
\section{Truncated Mellin moments of the nonsinglet structure function
$q^{NS}(x,t)$ within LO DGLAP approach.}
For (full) Mellin moments of parton distributions $f(x,Q^2)$
\begin{equation}\label{r3.1}
\bar{f}(n,Q^2)=\int\limits_0^1 dx x^{n-1} f(x,Q^2)
\end{equation}
the DGLAP evolution equation can be solved analytically. This is because one
obtains in the moment space $n$ simple diagonalised differential equations. The
only problem is the knowledge of the input parametrisation for the whole region
$0\leq x\leq 1$, which is necessary in the determination of the initial moments
$\bar{f}(n,Q^2=Q_0^2)$:
\begin{equation}\label{r3.2}
\bar{f}(n,Q_0^2)=\int\limits_0^1 dx x^{n-1} f(x,Q_0^2).
\end{equation}
Using the truncated moments approach one can avoid the uncertainties from the
region $x\rightarrow 0$, which will never be attained experimentally. The
derivation of the DGLAP equations for truncated moments of parton distributions
has been presented in \cite{b5}. The evolution equations for truncated moments
$\bar{f}(x_0,n,Q^2)$
are not diagonal, and therefore solving this problem is not as easy as in the
case of the full-moments technique. Nevertheless, this method has an
advantage over other approaches that goes beyond the cut-off of the unphysical
region $x\rightarrow 0$: the truncated moments technique within the DGLAP
approximation enables one to study scaling violations without making any
assumption on the shape of the input parametrisation of parton distributions.
While the solution of the evolution equations in the $x$-space requires
knowledge of inputs $f(x,Q_0^2)$ with many parameters (fitted in detailed
comparison with the data), the initial values of truncated moments can be
obtained directly from data. Following the authors of \cite{b5}, we have found
the LO DGLAP evolution equation for the truncated at $x_0$ Mellin moment of
the nonsinglet structure function $q^{NS}(x,Q^2)$ in the form:
\begin{equation}\label{r3.4}
\frac{d\bar{q}^{NS}(x_0,n,t)}{dt}=\frac{\alpha_s(t)}{2\pi}
\int\limits_{x_0}^1 dy y^{n-1} q^{NS}(y,t) G_{n}\left(\frac{x_0}{y}\right).
\end{equation}
$\bar{q}^{NS}(x_0,n,t)$ is the truncated at $x_0$ moment of the nonsinglet
structure function:
\begin{equation}\label{r3.5}
\bar{q}^{NS}(x_0,n,t)=\int\limits_{x_0}^1 dx x^{n-1} q^{NS}(x,t),
\end{equation}
where
\begin{equation}\label{r2.4}
t\equiv \ln\frac{Q^2}{\Lambda_{QCD}^2}
\end{equation}
and
\begin{equation}\label{r3.6}
G_n\left(\frac{x_0}{y}\right)\equiv\int\limits_{x_0/y}^1 dz z^{n-1}
P_{qq}(z).
\end{equation}
For $x_0=0$ the kernel $G_n(x_0/y)$ is simply equal to the anomalous
dimension $\gamma_{qq}(n)$:
\begin{equation}\label{r2.5}
\gamma_{qq}(n)=\int\limits_0^1 z^{n-1} P_{qq}(z) dz.
\end{equation}
Expanding $G_n$ in a Taylor series around $y=1$, one has
\begin{eqnarray}\label{r3.7}
G_n\left(\frac{x_0}{y}\right) = \gamma_{qq}(n)-\frac{4}{3}\sum\limits_{k=0}^{\infty}
[2\sum\limits_{i=n+2}^{\infty}\frac{(i+k-1)!}{i!}x_0^i \nonumber\\
+ \frac{(n+k-1)!}{n!}
(x_0^n+\frac{n+k}{n+1} x_0^{n+1})]\sum\limits_{p=0}^{k}\frac{(-1)^p
y^p}{p!(k-p)!}.
\end{eqnarray}
Truncating the above expansion at order $M$ and using the following relation
\begin{equation}\label{r3.8}
\sum\limits_{k=0}^{M}\sum\limits_{p=0}^{k} \longrightarrow
\sum\limits_{p=0}^{M}\sum\limits_{k=p}^{M}
\end{equation}
one can find that the evolution equation (\ref{r3.4}) becomes
\begin{equation}\label{r3.9}
\frac{d\bar{q}^{NS}(x_0,n,t)}{dt}=\frac{\alpha_s(t)}{2\pi}
\sum\limits_{p=0}^M C_{pn}^{(M)}(x_0) \:\bar{q}^{NS}(x_0,n+p,t)
\end{equation}
and
\begin{equation}\label{r3.9a}
G_n\left(\frac{x_0}{y}\right) = \sum\limits_{p=0}^M C_{pn}^{(M)}(x_0) \:
y^p,
\end{equation}
where
\begin{eqnarray}\label{r3.10}
C_{pn}^{(M)}(x_0) = \gamma_{qq}(n)\delta_{p0} - \frac{4}{3}\sum\limits_{k=p}^{M}
\frac{(-1)^p}{p!(k-p)!}
\:[\:2\sum\limits_{i=n+2}^{\infty}\frac{(i+k-1)!}{i!}x_0^i\nonumber\\
+ \frac{(n+k-1)!}{n!}(x_0^n+\frac{n+k}{n+1} x_0^{n+1})\:].
\end{eqnarray}
Note that the evolution equations for truncated moments (\ref{r3.9}),
(\ref{r3.10}) are not diagonal, but each $n$-th moment couples only with the
($n+p$)-th ($p\geq 0$) moments. As shown in \cite{b5}, the series of
couplings to higher moments is convergent and, furthermore, the value of the
($n+p$)-th moments decreases rapidly in comparison with the $n$-th moment. Hence
one can retain from (\ref{r3.9}) the closed system of $M+1$ equations:
\begin{displaymath}
\frac{d\bar{q}^{NS}(x_0,N_0,t)}{dt}=\frac{\alpha_s(t)}{2\pi}
[C_{0,N_0}^{(M)}(x_0) \bar{q}^{NS}(x_0,N_0,t)
\end{displaymath}
\begin{displaymath}
+ C_{1,N_0}^{(M)}(x_0) \bar{q}^{NS}(x_0,N_0+1,t) + \\
...+ C_{M,N_0}^{(M)}(x_0) \bar{q}^{NS}(x_0,N_0+M,t)]
\end{displaymath}
\begin{displaymath}
\frac{d\bar{q}^{NS}(x_0,N_0+1,t)}{dt}=\frac{\alpha_s(t)}{2\pi}
[C_{0,N_0+1}^{(M-1)}(x_0) \bar{q}^{NS}(x_0,N_0+1,t)
\end{displaymath}
\begin{displaymath}
+ C_{1,N_0+1}^{(M-1)}(x_0) \bar{q}^{NS}(x_0,N_0+2,t) +
...+ C_{M-1,N_0+1}^{(M-1)}(x_0) \bar{q}^{NS}(x_0,N_0+M,t)]
\end{displaymath}
\begin{displaymath}
...
\end{displaymath}
\begin{equation}\label{r3.11}
\frac{d\bar{q}^{NS}(x_0,N_0+M,t)}{dt}=\frac{\alpha_s(t)}{2\pi}
C_{0,N_0+M}^{(0)}(x_0) \bar{q}^{NS}(x_0,N_0+M,t).
\end{equation}
$N_0$ denotes the lowest moment in the calculations. The above system can be
solved numerically like a standard system of coupled differential equations, e.g. using
the Runge-Kutta method. We have also found an analytical solution of (\ref{r3.11})
in the form:
\begin{eqnarray}\label{r3.12}
\bar{q}^{NS}(x_0,i,t) = \left(\bar{q}^{NS}(x_0,i,t_0)
- \sum\limits_{k=i+1}^{N_0+M}
A_{ik}(x_0)\bar{q}^{NS}(x_0,k,t_0)\right)\nonumber\\
\times \exp\left(\frac{\alpha_s}{2\pi}D_{ii}^{(M)}(x_0)
(t-t_0)\right) + \sum\limits_{k=i+1}^{N_0+M} A_{ik}(x_0)\bar{q}^{NS}(x_0,k,t)
\end{eqnarray}
for $\alpha_s=$const and
\begin{eqnarray}\label{r3.13}
\bar{q}^{NS}(x_0,i,t) = \left(\bar{q}^{NS}(x_0,i,t_0) - \sum\limits_{k=i+1}^{N_0+M}
A_{ik}(x_0)\bar{q}^{NS}(x_0,k,t_0)\right)\nonumber\\
\times \exp\left(c_f D_{ii}^{(M)}(x_0)\ln{\frac{t}{t_0}}\right)
+ \sum\limits_{k=i+1}^{N_0+M} A_{ik}(x_0)\bar{q}^{NS}(x_0,k,t)
\end{eqnarray}
for the running $\alpha_s$. Matrix elements $D_{ij}^{(M)}(x_0)$ and
$A_{ij}(x_0)$ are given in Appendix B. For details about properties of
triangular matrices like $D$ see also \cite{b5}. We have made sure that the
results (\ref{r3.12})-(\ref{r3.13}) agree with the solutions obtained with
the help of the Runge-Kutta method. In the next section we compare predictions for the
truncated moments $\bar{q}^{NS}(x_0,n,t)$ obtained by solving eq.~(\ref{r3.11})
with those computed in the Chebyshev polynomial approach.
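For concreteness, a minimal numerical sketch of this procedure is given below. It is illustrative only (it is not the code used for the results of the next section) and rests on several assumptions stated in the comments: the standard LO splitting function $P_{qq}$ with $C_F=4/3$, a one-loop running coupling with $N_f=3$ and $\Lambda_{QCD}=0.25~{\rm GeV}$, and a finite cut-off of the $i$-sum in (\ref{r3.10}); all helper names are ours.
\begin{verbatim}
# Illustrative sketch: LO DGLAP evolution of truncated nonsinglet moments
# via the closed system (3.11), integrated with a Runge-Kutta solver.
import math
import numpy as np
from scipy.integrate import quad, solve_ivp

CF = 4.0 / 3.0                          # colour factor of P_qq (LO, assumption)
NF = 3                                  # number of flavours (assumption)
LAMBDA2 = 0.25**2                       # Lambda_QCD^2 in GeV^2 (assumption)
C_FLAV = 2.0 / (11.0 - 2.0 * NF / 3.0)  # c_f of eq. (2.9)

def alpha_s(t):                         # one-loop coupling, t = ln(Q^2/Lambda^2)
    return 2.0 * math.pi * C_FLAV / t

def gamma_qq(n):                        # anomalous dimension of eq. (2.5)
    harmonic = sum(1.0 / j for j in range(1, n + 1))
    return CF * (1.5 + 1.0 / (n * (n + 1)) - 2.0 * harmonic)

def fact_ratio(a, b):                   # a!/b! as a float (via log-gamma)
    return math.exp(math.lgamma(a + 1) - math.lgamma(b + 1))

def c_coef(p, n, M, x0, i_max=200):     # coefficient C^{(M)}_{pn}(x0), eq. (3.10)
    total = gamma_qq(n) if p == 0 else 0.0
    for k in range(p, M + 1):
        inner = 2.0 * sum(fact_ratio(i + k - 1, i) * x0**i
                          for i in range(n + 2, i_max))
        inner += fact_ratio(n + k - 1, n) * (x0**n + (n + k) / (n + 1) * x0**(n + 1))
        total -= CF * (-1.0)**p / (math.factorial(p) * math.factorial(k - p)) * inner
    return total

def evolve(x0, n0, M, q_input, Q2_0=1.0, Q2=10.0):
    """Evolve the moments n0..n0+M of q_input(x), truncated at x0, from Q2_0 to Q2."""
    t0, t1 = math.log(Q2_0 / LAMBDA2), math.log(Q2 / LAMBDA2)
    q0 = [quad(lambda x, n=n: x**(n - 1) * q_input(x), x0, 1.0)[0]
          for n in range(n0, n0 + M + 1)]
    A = np.zeros((M + 1, M + 1))        # triangular coupling matrix of (3.11)
    for i in range(M + 1):
        for p in range(M + 1 - i):
            A[i, i + p] = c_coef(p, n0 + i, M - i, x0)
    rhs = lambda t, q: (alpha_s(t) / (2.0 * math.pi)) * (A @ q)
    return solve_ivp(rhs, (t0, t1), q0, rtol=1e-8, atol=1e-12).y[:, -1]

# Example: input (4.1) with a_1 = 1, truncated at x0 = 0.01, evolved to 10 GeV^2.
print(evolve(x0=0.01, n0=1, M=4, q_input=lambda x: (1.0 - x)**3))
\end{verbatim}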
\section{Results for truncated moments of the nonsinglet structure function
$\bar{q}^{NS}(x_0,n,t)$ within LO approximation of the DGLAP approach.}
We solve the system of evolution equations for truncated moments
(\ref{r3.11}) and compare the results with the predictions obtained in the
Chebyshev polynomial approach.
The Chebyshev polynomial technique \cite{b15} was successfully used by
J.~Kwieci\'nski in many QCD analyses, e.g. \cite{b4},\cite{b12}. Using this
method one obtains a system of linear differential equations instead of the
original integro-differential ones. The Chebyshev expansion provides a robust
method of discretising a continuous problem. This allows one to compute the parton
distributions for ``not too singular'' input parametrisations in the whole
$x\in (0;1)$ region. A more detailed description of the Chebyshev polynomial
method for solving the QCD evolution equations is given in Appendix A.
In this paper we use two spin-like input parametrisations of the parton
distribution $q^{NS}(x,Q_0^2)$ at $Q_0^2=1 {\rm GeV}^2$, namely:
\begin{eqnarray}\label{r4.1}
q^{NS}(x,Q_0^2) &=& a_1 (1-x)^3, \\ \label{r4.2}
q^{NS}(x,Q_0^2) &=& a_2 x^{-0.4}(1-x)^{2.5},
\end{eqnarray}
where the constants $a_1$ and $a_2$ are determined by the appropriate sum rules.
The input (\ref{r4.2}), more singular at small $x$, incorporates the present
knowledge about the low-$x$ behaviour of the polarised structure functions
\cite{b14}. We start our analysis with a simple test, in which the truncation
point is $x_0=0$. Then the results should of course be equal to the analytical
ones:
\begin{equation}\label{r4.2a}
\bar{q}^{NS}(n,Q^2)=\bar{q}^{NS}(n,Q_0^2)\left(\frac{\alpha_s(Q_0^2)}
{\alpha_s(Q^2)}\right)^{c_f\gamma_{qq}(n)}.
\end{equation}
$\alpha_s(Q^2)$ is the running coupling and $c_f$ depends on the number of
quark flavours $N_f$:
\begin{equation}\label{r2.9}
c_f=\frac{2}{11-\frac{2}{3}N_f}.
\end{equation}
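For illustration, eq.~(\ref{r4.2a}) can be evaluated directly, re-using the helper functions of the sketch at the end of Section 2 (the one-loop coupling and the numerical integration of the input are assumptions of that sketch):
\begin{verbatim}
# Analytical LO evolution of a full (x0 = 0) moment, eq. (4.2a).
def full_moment_evolved(n, q_input, Q2_0=1.0, Q2=10.0):
    t0, t1 = math.log(Q2_0 / LAMBDA2), math.log(Q2 / LAMBDA2)
    qbar0 = quad(lambda x: x**(n - 1) * q_input(x), 0.0, 1.0)[0]
    ratio = alpha_s(t0) / alpha_s(t1)   # equals t1/t0 at one loop
    return qbar0 * ratio**(C_FLAV * gamma_qq(n))
\end{verbatim}
Such values provide the reference against which the Chebyshev results of Table 1 are tested.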
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline\hline
$q^{NS}(x,Q_0^2)$ & $Q^2$ & $n$ & $\bar{q}^{NS}(n,Q^2)$ &
$\Delta_{Cheb}\%$ \\ \hline\hline
& & 1 & $2.112\cdot 10^{-1}$ & $< 4\cdot 10^{-1}$ \\
\cline{3-5}
& & 2 & $2.820\cdot 10^{-2}$ & $< 4\cdot 10^{-2}$ \\
\cline{3-5}
& 100 & 3 & $7.492\cdot 10^{-3}$ & $< 2\cdot 10^{-1}$ \\
\cline{3-5}
& & 4 & $2.732\cdot 10^{-3}$ & $< 3\cdot 10^{-1}$ \\
\cline{3-5}
$a_1 (1-x)^3$ & & 5 & $1.204\cdot 10^{-3}$ & $< 7\cdot 10^{-1}$ \\
\cline{2-5}
& & 1 & $2.112\cdot 10^{-1}$ & $< 2\cdot 10^{-1}$ \\
\cline{3-5}
& & 2 & $3.296\cdot 10^{-2}$ & $< 2\cdot 10^{-2}$ \\
\cline{3-5}
& 10 & 3 & $9.556\cdot 10^{-3}$ & $< 5\cdot 10^{-2}$ \\
\cline{3-5}
& & 4 & $3.709\cdot 10^{-3}$ & $< 2\cdot 10^{-1}$ \\
\cline{3-5}
& & 5 & $1.716\cdot 10^{-3}$ & $< 3\cdot 10^{-1}$ \\
\hline
& & 1 & $2.112\cdot 10^{-1}$ & $< 2$ \\
\cline{3-5}
& & 2 & $2.098\cdot 10^{-2}$ & $< 5\cdot 10^{-2}$ \\
\cline{3-5}
& 100 & 3 & $5.245\cdot 10^{-3}$ & $< 2\cdot 10^{-1}$ \\
\cline{3-5}
& & 4 & $1.902\cdot 10^{-3}$ & $< 3\cdot 10^{-1}$ \\
\cline{3-5}
$a_2 x^{-0.4}(1-x)^{2.5}$ & & 5 & $8.502\cdot 10^{-4}$ & $< 2$ \\
\cline{2-5}
& & 1 & $2.112\cdot 10^{-1}$ & $< 9\cdot 10^{-1}$ \\
\cline{3-5}
& & 2 & $2.452\cdot 10^{-2}$ & $< 2\cdot 10^{-2}$ \\
\cline{3-5}
& 10 & 3 & $6.691\cdot 10^{-3}$ & $< 4\cdot 10^{-2}$ \\
\cline{3-5}
& & 4 & $2.583\cdot 10^{-3}$ & $< 9\cdot 10^{-2}$ \\
\cline{3-5}
& & 5 & $1.212\cdot 10^{-3}$ & $< 2\cdot 10^{-1}$ \\
\hline
\end{tabular}
\caption{Test of the Chebyshev polynomial method: comparison with the analytical
results for the $n$-th (full) moments $\bar{q}^{NS}(n,Q^2)$ for different $Q^2$
and input functions $q^{NS}(x,Q_0^2)$.}
\end{center}
\end{table}
Table 1 shows the analytical values of the full moments $\bar{q}^{NS}(n,t)$ for
two values of $Q^2$, 10 ${\rm GeV}^2$ and 100 ${\rm GeV}^2$, together with the
percentage errors of the Chebyshev results, $\Delta_{Cheb}\%$:
\begin{equation}\label{r4.3}
\Delta_{Cheb}\% =
\frac{\mid\bar{q}^{NS}(n,t)(analytical)-\bar{q}^{NS}(n,t)(Chebyshev)\mid}
{\bar{q}^{NS}(n,t)(analytical)}\cdot 100\%.
\end{equation}
Note the good agreement of the Chebyshev solutions for
$\bar{q}^{NS}(n,Q^2)$ with the exact analytical results. The
percentage error defined in (\ref{r4.3}) does not exceed 1\% in the case of the
flat input (\ref{r4.1}) and 2\% in the case of the input (\ref{r4.2}), more
singular at small $x$. The accuracy is better for lower $Q^2$, where the DGLAP
evolution is shorter. Based on the results of Table 1, we expect a similar
precision for the truncated moments as well. Thus we assume that the Chebyshev
method predictions are reliable, with carefully estimated errors: 1\% for the
parametrisation (\ref{r4.1}) and 2\% for (\ref{r4.2}). In Tables 2 and 3 we
compare the results for the moments truncated at $x_0$ (0.01 and 0.1, respectively),
obtained from (\ref{r3.13}) (FMPR), with those found
within the Chebyshev approach (Cheb.). We again use two scales of $Q^2$: 10
${\rm GeV}^2$ and 100 ${\rm GeV}^2$.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline\hline
$q^{NS}(x,Q_0^2)$ & $Q^2$ & $n$ & $\bar{q}(Cheb.)$ &
$\bar{q}(FMPR)$ \\ \hline\hline
& & 1 & $1.892\cdot 10^{-1}$ & $2.006\cdot 10^{-1}$ \\
\cline{3-5}
& & 2 & $2.812\cdot 10^{-2}$ & $2.817\cdot 10^{-2}$ \\
\cline{3-5}
& 100 & 3 & $7.499\cdot 10^{-3}$ & $7.491\cdot 10^{-3}$ \\
\cline{3-5}
& & 4 & $2.740\cdot 10^{-3}$ & $2.732\cdot 10^{-3}$ \\
\cline{3-5}
$a_1 (1-x)^3$ & & 5 & $1.212\cdot 10^{-3}$ & $1.204\cdot 10^{-3}$ \\
\cline{2-5}
& & 1 & $1.951\cdot 10^{-1}$ & $2.015\cdot 10^{-1}$ \\
\cline{3-5}
& & 2 & $3.289\cdot 10^{-2}$ & $3.293\cdot 10^{-2}$ \\
\cline{3-5}
& 10 & 3 & $9.561\cdot 10^{-3}$ & $9.556\cdot 10^{-3}$ \\
\cline{3-5}
& & 4 & $3.714\cdot 10^{-3}$ & $3.709\cdot 10^{-3}$ \\
\cline{3-5}
& & 5 & $1.721\cdot 10^{-3}$ & $1.716\cdot 10^{-3}$ \\
\cline{1-5}
& & 1 & $1.658\cdot 10^{-1}$ & $1.817\cdot 10^{-1}$ \\
\cline{3-5}
& & 2 & $2.082\cdot 10^{-2}$ & $2.090\cdot 10^{-2}$ \\
\cline{3-5}
& 100 & 3 & $5.250\cdot 10^{-3}$ & $5.245\cdot 10^{-3}$ \\
\cline{3-5}
& & 4 & $1.907\cdot 10^{-3}$ & $1.902\cdot 10^{-3}$ \\
\cline{3-5}
$a_2 x^{-0.4}(1-x)^{2.5}$& & 5 & $8.550\cdot 10^{-4}$ & $8.502\cdot 10^{-4}$ \\
\cline{2-5}
& & 1 & $1.732\cdot 10^{-1}$ & $1.826\cdot 10^{-1}$ \\
\cline{3-5}
& & 2 & $2.437\cdot 10^{-2}$ & $2.443\cdot 10^{-2}$ \\
\cline{3-5}
& 10 & 3 & $6.692\cdot 10^{-3}$ & $6.691\cdot 10^{-3}$ \\
\cline{3-5}
& & 4 & $2.585\cdot 10^{-3}$ & $2.583\cdot 10^{-3}$ \\
\cline{3-5}
& & 5 & $1.214\cdot 10^{-3}$ & $1.212\cdot 10^{-3}$ \\
\hline
\end{tabular}
\caption{Truncated at $x_0=0.01$ n-th moments $\bar{q}^{NS}(x_0,n,Q^2)$ within
FMPR and Chebyshev approaches for different $Q^2$ and input functions
$q^{NS}(x,Q_0^2)$.}
\end{center}
\end{table}
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline\hline
$q^{NS}(x,Q_0^2)$ & $Q^2$ & $n$ & $\bar{q}(Cheb.)$ &
$\bar{q}(FMPR)$ \\ \hline\hline
& & 1 & $9.921\cdot 10^{-2}$ & $1.236\cdot 10^{-1}$ \\
\cline{3-5}
& & 2 & $2.377\cdot 10^{-2}$ & $2.567\cdot 10^{-2}$ \\
\cline{3-5}
& 100 & 3 & $7.229\cdot 10^{-3}$ & $7.368\cdot 10^{-3}$ \\
\cline{3-5}
& & 4 & $2.721\cdot 10^{-3}$ & $2.724\cdot 10^{-3}$ \\
\cline{3-5}
$a_1 (1-x)^3$ & & 5 & $1.210\cdot 10^{-3}$ & $1.203\cdot 10^{-3}$ \\
\cline{2-5}
& & 1 & $1.138\cdot 10^{-1}$ & $1.294\cdot 10^{-1}$ \\
\cline{3-5}
& & 2 & $2.883\cdot 10^{-2}$ & $3.011\cdot 10^{-2}$ \\
\cline{3-5}
& 10 & 3 & $9.303\cdot 10^{-3}$ & $9.401\cdot 10^{-3}$ \\
\cline{3-5}
& & 4 & $3.695\cdot 10^{-3}$ & $3.699\cdot 10^{-3}$ \\
\cline{3-5}
& & 5 & $1.719\cdot 10^{-3}$ & $1.715\cdot 10^{-3}$ \\
\cline{1-5}
& & 1 & $7.112\cdot 10^{-2}$ & $9.086\cdot 10^{-2}$ \\
\cline{3-5}
& & 2 & $1.664\cdot 10^{-2}$ & $1.816\cdot 10^{-2}$ \\
\cline{3-5}
& 100 & 3 & $5.003\cdot 10^{-3}$ & $5.116\cdot 10^{-3}$ \\
\cline{3-5}
& & 4 & $1.890\cdot 10^{-3}$ & $1.895\cdot 10^{-3}$ \\
\cline{3-5}
$a_2 x^{-0.4}(1-x)^{2.5}$& & 5 & $8.536\cdot 10^{-4}$ & $8.497\cdot 10^{-4}$ \\
\cline{2-5}
& & 1 & $8.237\cdot 10^{-2}$ & $9.519\cdot 10^{-2}$ \\
\cline{3-5}
& & 2 & $2.026\cdot 10^{-2}$ & $2.131\cdot 10^{-2}$ \\
\cline{3-5}
& 10 & 3 & $6.446\cdot 10^{-3}$ & $6.528\cdot 10^{-3}$ \\
\cline{3-5}
& & 4 & $2.568\cdot 10^{-3}$ & $2.572\cdot 10^{-3}$ \\
\cline{3-5}
& & 5 & $1.213\cdot 10^{-3}$ & $1.211\cdot 10^{-3}$ \\
\hline
\end{tabular}
\caption{Truncated at $x_0=0.1$ n-th moments $\bar{q}^{NS}(x_0,n,Q^2)$ within
FMPR and Chebyshev approaches for different $Q^2$ and input functions
$q^{NS}(x,Q_0^2)$.}
\end{center}
\end{table}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=90mm]{fig1.eps}
\caption{First truncated moment: $\bar{q}^{NS}(x_0,1,Q^2)Cheb.$ (solid),
$\bar{q}^{NS}(x_0,1,Q^2)FMPR$ (dashed $M=4$, dotted $M=20$)
for different inputs: (\ref{r4.1}) - upper lines and
(\ref{r4.2}) - lower lines. $Q^2=10 {\rm GeV}^2$.}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=90mm]{fig2.eps}
\caption{Second truncated moment: $\bar{q}^{NS}(x_0,2,Q^2)Cheb.$ (solid),
$\bar{q}^{NS}(x_0,2,Q^2)FMPR$ (dashed $M=4$, dotted $M=20$)
for different inputs: (\ref{r4.1}) - upper lines and
(\ref{r4.2}) - lower lines. $Q^2=10 {\rm GeV}^2$.}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=90mm]{fig3.eps}
\caption{First truncated at $x_0=0.01$ moment: $\bar{q}^{NS}(x_0,1,Q^2)Cheb.$
(solid), $\bar{q}^{NS}(x_0,1,Q^2)FMPR$ (dashed $M=4$, dashed-dotted $M=20$,
dotted $M=60$). The upper lines correspond to the input
parametrisation (\ref{r4.1}), the lower ones to (\ref{r4.2}).}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=90mm]{fig4.eps}
\caption{Second truncated at $x_0=0.01$ moment: $\bar{q}^{NS}(x_0,2,Q^2)Cheb.$
(solid), $\bar{q}^{NS}(x_0,2,Q^2)FMPR$ (covered with Cheb. for different
$M\geq 4$). The upper line corresponds to the input parametrisation (\ref{r4.1}),
the lower one to (\ref{r4.2}).}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=90mm]{fig5.eps}
\caption{First truncated at $x_0=0.1$ moment: $\bar{q}^{NS}(x_0,1,Q^2)Cheb.$
(solid), $\bar{q}^{NS}(x_0,1,Q^2)FMPR$ (dashed $M=4$, dashed-dotted $M=20$,
dotted $M=60$). The upper lines correspond to the input
parametrisation (\ref{r4.1}), the lower ones to (\ref{r4.2}).}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=90mm]{fig6.eps}
\caption{Second truncated at $x_0=0.1$ moment: $\bar{q}^{NS}(x_0,2,Q^2)Cheb.$
(solid), $\bar{q}^{NS}(x_0,2,Q^2)FMPR$ (dashed $M=4$, dashed-dotted $M=20$,
dotted $M=60$). The upper lines correspond to the input
parametrisation (\ref{r4.1}), the lower ones to (\ref{r4.2}).}
\end{center}
\end{figure}
\noindent Notice the quite satisfactory agreement of both presented
methods, even for a very small value of $M$ ($M=4$).
The accuracy of the determination of higher moments is better despite the
fact that fewer terms $(M-n)$ are included. The accuracy of the truncated
moments method depends on the convergence of the expansion of $G_n(x_0/y)$,
which is the truncated counterpart of the anomalous dimension $\gamma_{qq}(n)$.
Because $G_n(x_0/y)$ is expanded in powers of $y$ around $y=1$,
the small-$y$ region ($y\sim x_0$) in the integral of the evolution equation
(\ref{r3.4}) is badly reproduced. Therefore the convergence is better for
higher moments, which have a smaller contribution from the low-$y$ region.
Lower moments are more sensitive to the lower limit of the integration $x_0$ in
(\ref{r3.4}). On the other hand, for sufficiently small $x_0$, the factors
$x_0^i$ in the coefficients $C_{pn}^M(x_0)$ (\ref{r3.10}) make the convergence
of $G_n(x_0/y)$ better. Hence the difference
between $\bar{q}^{NS}(x_0,n,Q^2)FMPR$ and $\bar{q}^{NS}(x_0,n,Q^2)Cheb.$ is
larger for $x_0=0.1$ than for $x_0=0.01$. Furthermore, as $x_0\rightarrow 1$,
the agreement between $\bar{q}^{NS}(x_0,n,Q^2)FMPR$ and
$\bar{q}^{NS}(x_0,n,Q^2)Cheb.$ becomes better again because the
structure functions vanish in this limit. Comparisons of $\bar{q}^{NS}(x_0,n,Q^2)FMPR$
with $\bar{q}^{NS}(x_0,n,Q^2)Cheb.$ as a function of $x_0$ for first ($n=1$)
and second ($n=2$) moments are shown in Figs.1,2. In Figs.3-6 we present the
$Q^2$ dependence of $\bar{q}^{NS}(x_0,n,Q^2)FMPR$ and
$\bar{q}^{NS}(x_0,n,Q^2)Cheb.$ at fixed $x_0=0.01, 0.1$ and for $n=1$,
$n=2$ respectively. The plots are given for different $M$ and both
parametrisations (\ref{r4.1}),(\ref{r4.2}). The agreement of the truncated
moment method with the Chebyshev approach is perfect for $n=2$ at
$x_0\leq 0.01$, independently of the inputs, $Q^2$ and even the value of $M$. The
other results are also very satisfactory. The relative difference between
$\bar{q}^{NS}(x_0,n,Q^2)FMPR$ and $\bar{q}^{NS}(x_0,n,Q^2)Cheb.$ does not
exceed 5\% for $n\geq 2$ and not too large $x_0$ ($x_0\leq 0.1$), already at
$M=4$. For the first moment this difference also decreases down to a few \%
for $M=30$ and $x_0=0.1$ (for smaller $x_0$ the accuracy is much better).
The error function
\begin{equation}\label{r4.3a}
R_n^M(x_0,Q^2) =
\frac{\mid\bar{q}^{NS}(x_0,n,Q^2)FMPR-\bar{q}^{NS}(x_0,n,Q^2)Cheb.\mid}
{\bar{q}^{NS}(x_0,n,Q^2)Cheb.}\cdot 100\%
\end{equation}
grows very slowly with $Q^2$ (see Figs.3,5,6). In Tables 4 and 5 we show
the error function $R_n^M(x_0,Q^2)$ for different $M$, $Q^2=10 {\rm GeV}^2$
and two values of $x_0$: 0.01 and 0.1, respectively.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline\hline
$x_0=0.01$ & \multicolumn{2}{c|}{$a_1 (1-x)^{3}$} & \multicolumn{2}{c|}
{$a_2 x^{-0.4}(1-x)^{2.5}$} \\ \hline\hline
$M$ & $~R_1~$ & $R_2$ & $~~~R_1~~~$ & $R_2$ \\ \hline
4 & 4 & $\ll 1$ & 6 & $\ll 1$ \\ \hline
10 & 3 & $\ll 1$ & 5 & $\ll 1$ \\ \hline
20 & 3 & $\ll 1$ & 5 & $\ll 1$ \\ \hline
30 & 2 & $\ll 1$ & 4 & $\ll 1$ \\ \hline
60 & 2 & $\ll 1$ & 3 & $\ll 1$ \\ \hline
\end{tabular}
\caption{The percentage error function $R_n\equiv R_n^M(x_0,Q^2)$ defined in
(\ref{r4.3a}), for $x_0=0.01$ and different input functions $q^{NS}(x,Q_0^2)$.
$Q^2=10 {\rm GeV}^2$, the values of $n$ and $M$ shown.}
\end{center}
\end{table}
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline\hline
$x_0=0.1$ & \multicolumn{2}{c|}{$a_1 (1-x)^{3}$} & \multicolumn{2}{c|}
{$a_2 x^{-0.4}(1-x)^{2.5}$} \\ \hline\hline
$M$ & $~R_1~$ & $R_2$ & $~~~R_1~~~$ & $R_2$ \\ \hline
4 & 13 & 5 & 16 & 5 \\ \hline
10 & 10 & 4 & 11 & 4 \\ \hline
20 & 6 & 3 & 7 & 3 \\ \hline
30 & 4 & 2 & 5 & 2 \\ \hline
\end{tabular}
\caption{The percentage error function $R_n\equiv R_n^M(x_0,Q^2)$ defined in
(\ref{r4.3a}), for $x_0=0.1$ and different input functions $q^{NS}(x,Q_0^2)$.
$Q^2=10 {\rm GeV}^2$, the values of $n$ and $M$ shown.}
\end{center}
\end{table}
Note that with increasing $M$ the accuracy of the truncated moments method
increases systematically, though slowly. This improvement of the accuracy
breaks down, however, for larger $M$ ($M\simeq 70$ at $x_0=0.01$ and $M\simeq 40$ at
$x_0=0.1$) because of increasing numerical errors. All results presented above
refer to the running coupling $\alpha_s(Q^2)$. We have also found that for
a constant $\alpha_s$ the error function $R$ (\ref{r4.3a}) grows
approximately proportionally to the strength of $\alpha_s$:
\begin{equation}\label{r4.3b}
\frac{R(\alpha_{s1})}{R(\alpha_{s2})} \sim \frac{\alpha_{s1}}{\alpha_{s2}}.
\end{equation}
Summarising, the LO DGLAP evolution of any moment of the parton distribution
truncated at $x_0\leq 0.1$ can be reproduced with satisfactory accuracy,
with a relative error $\leq 5\%$.
\section{Summary and conclusions.}
Analysis of the QCD $Q^2$ evolution equations for truncated moments of
parton distributions is very interesting from both the theoretical and the
experimental point of view. The truncated moments technique is complementary
to the existing methods for solving the evolution equations, based on the
full moments or on $x$-space approaches. Moreover, it refers directly to
physical quantities, the moments (rather than to the parton distributions), which
makes it possible to use a wide range of deep-inelastic scattering data in terms of
a smaller number of parameters. In this way, no assumptions on the shape of the
parton distributions are needed. Dealing with Mellin moments truncated at $x_0$,
$\int_{x_0}^{1} dx\, x^{n-1} f(x,Q^2)$, one can also avoid the
uncertainty from the unmeasurable very small $x\rightarrow 0$ region.
In this paper we have compared the solutions of LO DGLAP $Q^2$ evolution
equations written for the truncated Mellin moments of the structure functions
with those obtained by using the Chebyshev-polynomial technique.
In both these approaches we have calculated numerically and
semi-analytically the truncated moments $\int_{x_0}^{1} dx x^{n-1}
F(x,Q^2)$. As a test structure function $F(x,Q^2)$ we have taken two different
spin-like nonsinglet parton distributions. The computations have been
performed for a wide range of $x_0$ ($10^{-5}\leq x_0\leq 0.9$) and $Q^2$
($1\leq Q^2\leq 100 {\rm GeV}^2$). Treating the Chebyshev results
as exact, we have found that the truncated moments method is very promising
for any moment, including the first one. The precision of the truncated
moments approach is perfect for higher moments ($n\geq 2$) and a not too large
truncation point $x_0$ ($x_0\leq 0.1$), even for small $M=4$. Larger
values of $M$ (e.g. $M=30$) enable one to obtain quite satisfactory accuracy
(relative error $\leq 5\%$) also for the first truncated moment. The
original truncated moments technique \cite{b5} has been further developed in \cite{b5a},
which could improve the numerical efficiency. This technique can be a valuable
tool, e.g. in the determination of the contribution to the moments of the gluon
distribution from the experimentally accessible region. We think that the
method of truncated moments can be useful in further theoretical and
experimental QCD investigations.
| {
"timestamp": "2005-10-21T22:28:14",
"yymm": "0510",
"arxiv_id": "hep-ph/0510295",
"language": "en",
"url": "https://arxiv.org/abs/hep-ph/0510295",
"abstract": "We solve the LO DGLAP QCD evolution equation for truncated Mellin moments of the nucleon nonsinglet structure function. The results are compared with those, obtained in the Chebyshev-polynomial approach for $x$-space solutions. Computations are performed for a wide range of the truncation point $10^{-5}\\leq x_0\\leq 0.9$ and $1\\leq Q^2\\leq 100 {\\rm GeV}^2$. The agreement is perfect for higher moments ($n\\geq 2$) and not too large $x_0$ ($x_0\\leq 0.1$), even for a small number of terms in the truncated series (M=4). The accuracy of the truncated moments method increases for larger $M$ and decreases very slowly with increasing $Q^2$. For M=30 the relative error in a case of the first moment at $x_0\\leq 0.1$ and $Q^2=10 {\\rm GeV}^2$ doesn't exceed 5% independently on the shape of the input parametrisation. This is a quite satisfactory result. Using the truncated moments approach one can avoid uncertainties from the unmeasurable $x\\to 0$ region and also study scaling violations without making any assumption on the shape of input parametrisation of parton distributions. Therefore the method of truncated moments seems to be a useful tool in further QCD analyses.",
"subjects": "High Energy Physics - Phenomenology (hep-ph)",
"title": "DGLAP evolution of truncated moments of parton densities within two different approaches",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9678992951349231,
"lm_q2_score": 0.7310585903489891,
"lm_q1q2_score": 0.7075910943011171
} |
https://arxiv.org/abs/1807.09479 | Maker-Breaker domination game | We introduce the Maker-Breaker domination game, a two player game on a graph. At his turn, the first player, Dominator, select a vertex in order to dominate the graph while the other player, Staller, forbids a vertex to Dominator in order to prevent him to reach his goal. Both players play alternately without missing their turn. This game is a particular instance of the so-called Maker-Breaker games, that is studied here in a combinatorial context. In this paper, we first prove that deciding the winner of the Maker-Breaker domination game is PSPACE-complete, even for bipartite graphs and split graphs. It is then showed that the problem is polynomial for cographs and trees. In particular, we define a strategy for Dominator that is derived from a variation of the dominating set problem, called the pairing dominating set problem. | \section{Introduction}
Since their introduction by Erd\H{o}s and Selfridge in \cite{erdos-1973}, positional games have been widely studied in the literature (see \cite{hefetz2014positional} for a recent survey book on the topic). These games are played on a hypergraph with vertex set $X$ and a finite set $\mathcal{F}\subseteq 2^X$ of hyperedges. The set $X$ is often called the {\em board} of the game, and an element of $\mathcal{F}$ a {\em winning set}. The game involves two players who alternately occupy a previously unoccupied vertex of $X$. The winner is determined by a convention: in the {\em Maker-Maker} convention, the first player to occupy all the vertices of a winning set is the winner. Such games may end in a draw, as is the case in Tic-Tac-Toe. In the {\em Maker-Breaker} convention, the objectives are opposite: one player (the {\em Maker}) aims to occupy all the vertices of a winning set, whereas {\em Breaker} wins if she occupies a vertex in every winning set. In view of the complexity of solving both kinds of games, Maker-Breaker instances are the ones more often considered in the literature since, by definition, there is always a winner. In addition, rulesets of such games are often built from a graph. For example, one can mention the famous {\em Shannon switching game} (popularized as the game {\sc Bridg-it}) \cite{shannon}, where, given a graph $G=(V,E)$ and two particular vertices $u$ and $v$, the board $X$ corresponds to $E$, and the winning sets are all the subsets of $E$ that form a $u-v$ path in $G$. In the {\em Hamiltonicity} game \cite{connectivity}, the winning sets are all the sets of edges containing a Hamiltonian cycle. \\
In view of such examples, converting a graph property into a $2$-player game is a natural operation. Hence it is not surprising that it has also been done for dominating sets. More precisely, several games with different rulesets, known as {\em domination games}, have been defined in the literature. For example, in \cite{alon,favaron}, a move consists in orienting an edge of a given graph $G$ and the two players try to maximize (resp. minimize) the domination number of the resulting digraph. In \cite{Bicoldom}, the rules require two colors during the play. In \cite{bresar}, the domination game is defined in such a way that both players select vertices and try to maximize (resp. minimize) the length of the play before a dominating set is built. Since then, this version has become the standard one for the domination game, with regular progress on it \cite{dom1,dom2,dom3,dom4}. However, among the different variants of the domination game, the natural Maker-Breaker version (in the sense of Erd\H{o}s and Selfridge) has never been considered in the literature. In this paper, we consider the so-called {\em Maker-Breaker domination game}, where, given a graph $G=(V,E)$, the board $X$ is the set $V$, and $\mathcal{F}$ is the set of all the dominating sets of $G$. In other words, the two players alternately occupy a not yet occupied vertex of $G$. Maker wins if he manages to build a dominating set of $G$, whereas Breaker wins if she manages to occupy a vertex and all its neighbors. In what follows, and in order to be consistent with the standard domination game, Maker will be called {\em Dominator}, and Breaker will be called {\em Staller}. \\
When dealing with Maker-Breaker games, there are two main questions that naturally arise:
\begin{itemize}
\item Given a graph $G$, which player has a winning strategy for the Maker-Breaker domination game\ on $G$?
\item If {Dominator}\ has a winning strategy on $G$, what is the minimum number of turns needed to win?
\end{itemize}
The current paper addresses the first question. In the next section, we give definitions for the different possible outcomes of the game, together with first general results. Section 3 deals with the algorithmic complexity of the problem, where {\sc pspace}-completeness is proved. In Section 4, a so-called {\em pairing strategy} is given, yielding a strategy for {Dominator}\ in graphs having certain properties. The last section is about graph operators that lead to polynomial strategies on trees and cographs.
\section{Preliminaries}
A \emph{position} of the Maker-Breaker domination game\ is denoted by a triplet $G=(V,E,c)$, where $V$ is a set of vertices, $E$ is a set of edges on $V$ and $c$ is a function $c:V \rightarrow \{{\tt Dominator},{\tt Staller}, {\tt Unplayed}\}$. In other words, the function $c$ allows one to describe any game position encountered during the play.
If, for all $u$ in $V$, $c(u)={\tt Unplayed}$, then $G$ is said to be a \emph{starting position}. In this case, we will identify $G$ with the graph $(V,E)$. At his turn, Dominator (respectively Staller) chooses one vertex $u$ with $c(u)={\tt Unplayed}$ and changes its value to ${\tt Dominator}$ (resp. ${\tt Staller}$). When there are no more {\tt Unplayed} vertices, either the set of vertices $c^{-1}({\tt Dominator})$ forms a dominating set, and Dominator wins, or there is a vertex $u$ whose whole closed neighbourhood has value ${\tt Staller}$, and Staller wins. In the latter case, we say that Staller {\em isolates} $u$. Note that whenever $c^{-1}({\tt Dominator})$ is a dominating set or a vertex has been isolated by Staller, the winner is already determined and cannot change, since the two conditions are complementary. Thus we will often consider that the game stops when one of the two conditions holds.\\
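As an illustration only (this is not part of the paper's formal development), the two complementary end-of-game conditions can be checked as follows; representing a fully played position by a NetworkX graph together with a dictionary $c$ is our assumption.
\begin{verbatim}
# Sketch: winning conditions of a fully played position; c maps each vertex
# of the NetworkX graph G to "Dominator" or "Staller".
import networkx as nx

def dominator_wins(G: nx.Graph, c: dict) -> bool:
    """True iff the vertices chosen by Dominator form a dominating set of G."""
    chosen = {v for v in G if c[v] == "Dominator"}
    return all(v in chosen or any(u in chosen for u in G[v]) for v in G)

def staller_wins(G: nx.Graph, c: dict) -> bool:
    """True iff some vertex is isolated, i.e. its whole closed
    neighbourhood was chosen by Staller."""
    return any(c[v] == "Staller" and all(c[u] == "Staller" for u in G[v])
               for v in G)
\end{verbatim}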
The Maker-Breaker domination game\ is a finite game with perfect information and no draw. Thus, there is always a winning strategy for one of the players. There are four cases - also called {\em outcomes} - to characterize the winner of the game, according to who starts. We define $\mathcal D$, $\mathcal S$, $\mathcal N$ and $\mathcal P$ as the different possible outcomes for a position of the Maker-Breaker domination game.
\begin{definition}
A position $G$ has four possible outcomes:
\begin{itemize}
\item $\mathcal D$ if Dominator has a winning strategy as first and second player,
\item $\mathcal S$ if Staller has a winning strategy as first and second player,
\item $\mathcal N$ if the next player (i.e., the one who starts) has a winning strategy,
\item $\mathcal P$ otherwise (i.e., the second player wins).
\end{itemize}
\end{definition}
Note that, due to the proximity with combinatorial game theory, the notion of outcome and the last two notations are derived from it \cite{Siegel}. In addition, the outcome of $G$ is denoted $o(G)$.\\
The following proposition is a direct application of a general result on Maker-Breaker games stated in \cite{beck2008combinatorial, hefetz2014positional}. It ensures that the outcome $\mathcal P$ never occurs. For the sake of completeness, we here give a proof of this result adapted to our particular case.
\begin{proposition}[Imagination strategy]
\label{prop:imagination}
There is no position $G$ of the Maker-Breaker domination game\ such that $o(G)=\mathcal P$.
\end{proposition}
\begin{proof}
Assume there is a position $G$ of the Maker-Breaker domination game\ such that $o(G)=\mathcal P$. This means in particular that {Dominator}\ wins playing second on $G$. We next give a winning strategy for {Dominator}\ as first player, which will imply that $o(G)=\mathcal D$, a contradiction.
The strategy for {Dominator} as first player is the following. He first plays any unplayed vertex and then imagines he did not. He thus considers himself as the second player, seeing this vertex as an extra vertex.
Whenever his winning strategy (as a second player) requires to play the extra vertex, he plays any other unplayed vertex $u$, and considers $u$ as the new extra vertex.
If {Dominator}\ was winning before all the vertices were chosen, he still wins no later than his last move in the game where he was playing second.
Otherwise, when {Staller}\ chooses the last vertex of the graph, her strategy asks her to play the extra vertex since it is the only one available in the imagined game, but it means that {Dominator}\ had already won on the previous turn.
\end{proof}
Note that this proposition is valid for any position of the game (and not only for starting positions). In other words, it ensures that a player has no interest in missing his/her turn. Figure~\ref{fig:ex_outcome} gives an example of a graph for each of the three remaining outcomes.\\
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\node at (-0.5,0){
\begin{tikzpicture}
\node[noeud] (a) at (-1,0){};
\node[noeud] (b) at (0,-0){};
\node[noeud] (c) at (1,-0){};
\node[noeud] (d) at (2,-0){};
\draw (a) -- (b) -- (c)--(d);
\end{tikzpicture}
};
\node at (3.5,0){
\begin{tikzpicture}
\node[noeud] (a) at (-1,0){};
\node[noeud] (b) at (0,-0){};
\node[noeud] (c) at (1,-0){};
\draw (a) -- (b) -- (c);
\end{tikzpicture}
};
\node at (7.5,0){
\begin{tikzpicture}
\node[noeud] (b) at (0,0){};
\node[noeud] (c) at (1,0){};
\node[noeud] (d) at (-0.866,-0.5){};
\node[noeud] (e) at (-0.866,0.5){};
\node[noeud] (f) at (1.866,-0.5){};
\node[noeud] (g) at (1.866,0.5){};
\draw (c) -- (b);
\draw (b) -- (d);
\draw (b) -- (e);
\draw (c) -- (f);
\draw (c) -- (g);
\end{tikzpicture}
};
\node at (-0.5,-1) {$\mathcal D$};
\node at (3.5,-1) {$\mathcal N$};
\node at (7.5,-1) {$\mathcal S$};
\node at (-0.5,-1.5) {\footnotesize Dominator always wins};
\node at (3.5,-1.5) {\footnotesize First player wins};
\node at (7.5,-1.5) {\footnotesize Staller always wins};
\end{tikzpicture}
\caption{Example of a graph for each possible outcome.}
\label{fig:ex_outcome}
\end{figure}
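For small graphs such as those of Figure~\ref{fig:ex_outcome}, the outcome can be checked by exhaustive game-tree search. The sketch below is an illustration only (its representation of graphs via NetworkX is our assumption) and runs in exponential time.
\begin{verbatim}
# Brute-force outcome computation for a starting position on a small graph.
import networkx as nx
from functools import lru_cache

def outcome(G: nx.Graph) -> str:
    """Return 'D', 'N' or 'S'; outcome P never occurs (imagination strategy)."""
    vertices = tuple(G.nodes())
    closed = {v: set(G[v]) | {v} for v in vertices}

    @lru_cache(maxsize=None)
    def dom_wins(dom, stal, dominator_to_move):
        free = [v for v in vertices if v not in dom and v not in stal]
        if not free:                       # game over: check domination
            return all(closed[v] & set(dom) for v in vertices)
        if dominator_to_move:
            return any(dom_wins(dom | {v}, stal, False) for v in free)
        return all(dom_wins(dom, stal | {v}, True) for v in free)

    first = dom_wins(frozenset(), frozenset(), True)
    second = dom_wins(frozenset(), frozenset(), False)
    if first and second:
        return "D"
    if not first and not second:
        return "S"
    return "N"                             # remaining case: first player wins
\end{verbatim}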
According to the three possible outcomes of a position, we now introduce an order relation on the outcomes derived from combinatorial game theory: $\mathcal{ S \prec N \prec D}$. This allows us to state the following proposition.
\begin{proposition}
\label{prop:subgraph}
Let $G=(V,E,c)$ be a position of the Maker-Breaker domination game\ and let $H=(V,E',c)$ be another position, with $E' \subseteq E$. Then $o(H) \preceq o(G)$.
\end{proposition}
\begin{proof} A reformulation of the proposition is that if {Dominator}\ has a winning strategy on $H$, then he also has a winning strategy on $G$.
Assume {Dominator}\ has a winning strategy on $H$.
A winning strategy for {Dominator}\ on $G$ is to apply the same strategy as on $H$.
Indeed, for every possible sequence of moves of {Staller}, {Dominator}\ is able to dominate $H$. Since every edge of $H$ is also in $G$, {Dominator}\ is also able to dominate $G$.
\end{proof}
In other words, adding edges to a position can only benefit {Dominator}, and removing edges can only benefit {Staller}. Note that this property does not hold in the standard domination game.\\
Another result can be derived from Maker-Breaker games. The following theorem is a well known result from the early studies about positional games.
\begin{theorem}[Erd\H os-Selfridge Criterion~\cite{erdos-1973}]
\label{thm:criterion}
Given a Maker-Breaker game $G$ on an hypergraph $(X,\mathcal F)$, if
$ \sum_{A \in \mathcal F} 2^{-|A|}<\frac{1}{2}$ then Breaker wins on $G$ playing second.
\end{theorem}
In order to apply this theorem to the Maker-Breaker domination game, we need to consider a reverse version of it. Indeed, as the set $\mathcal F$ corresponds to the dominating sets of $G$, the sizes of the winning sets are not easy to control. Thus, we can also consider the Maker-Breaker domination game\ as the Maker-Breaker game where $\mathcal F$ is the set of the closed neighborhoods of every vertex of $G$. In that case, {Dominator}\ is the Breaker, and {Staller}\ is the Maker. Now Theorem~\ref{thm:criterion} can be applied to this game:
\begin{proposition}
Let $G$ be a starting position of the Maker-Breaker domination game\ and let $\delta$ be the minimum degree of $G$. If $|V|< 2^\delta$ then {Dominator}\ has a winning strategy for the Maker-Breaker domination game\ on $G$ playing second.
\end{proposition}
\begin{proof}
As stated before, the Maker-Breaker domination game\ on $G$ is a Maker-Breaker game played on $\mathcal H= (V, \mathcal{F})$ where $\mathcal{F}$ is the set of the closed neighborhoods of $G$, and {Staller}\ plays the role of Maker in this game. Applying the Erd\H os-Selfridge Criterion, we know that if $\sum_{u \in V} 2^{-|N[u]|}<\frac{1}{2}$ then {Dominator}\ has a winning strategy. For all $u$ in $V$, we have $|N[u]|\geq \delta +1$, hence $2^{-|N[u]|} \leq 2^{-(\delta+1)}$. Thus if $|V| \times 2^{-(\delta + 1)}<\frac{1}{2}$ then {Dominator}\ has a winning strategy.
\end{proof}
This result can be applied to prove that some families of graphs are $\mathcal D$ (e.g. $r$-regular graphs having $r>\log_2 |V|$). In addition, it also suggests that highly connected graphs are more advantageous for {Dominator}.
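This sufficient condition is easy to test on a given graph; a small sketch (illustrative only, with NetworkX assumed) follows.
\begin{verbatim}
# Sketch: Erdos-Selfridge sufficient condition applied to closed neighbourhoods.
import networkx as nx

def erdos_selfridge_guarantees_D(G: nx.Graph) -> bool:
    """True if sum_v 2^(-|N[v]|) < 1/2; then Dominator (playing the role of
    Breaker here) wins even as the second player."""
    return sum(2.0 ** -(G.degree(v) + 1) for v in G) < 0.5
\end{verbatim}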
\section{Complexity}
In this section, we consider the computational complexity of deciding whether a game position of the Maker-Breaker domination game\ is $\mathcal{S}$, $\mathcal{N}$, or $\mathcal{D}$. First, remark that in the general case, deciding the outcome of a Maker-Breaker game $(X,\mathcal{F})$ is {\sc pspace}-complete. Indeed, this game exactly corresponds to the game {\sc pos-cnf} that was proved to be {\sc pspace}-complete in~\cite{poscnf}.\\
{\sc pos-cnf} is played on a formula $F$ in conjunctive normal form, with variables $X_1, \ldots,X_n$, where every variable occurs only positively, that is $F = C_1 \wedge \cdots \wedge C_m$ with clauses $C_i = X_{i_1} \vee \cdots \vee X_{i_{k_i}}$. Two players, Prover and Disprover, alternate turns in choosing a variable that has not been chosen yet.
When all variables have been chosen, variables chosen by Prover are set to true, while variables chosen by Disprover are set to false.
Prover wins if $F$ is true under this valuation and Disprover wins otherwise.
Without loss of generality, we can consider that each variable appears in the formula; otherwise we consider the formula $F' = F \wedge (X_1 \vee \cdots \vee X_n)$. Clearly, any Maker-Breaker game $(X,\mathcal{F})$ is equivalent to a {\sc pos-cnf} game, as $X$ corresponds to the set of variables, and the winning sets correspond to the clauses. Prover has the same role as Breaker, and Maker has the role of Disprover. \\
The complexity of this game remains {\sc pspace}-complete when reduced to instances of the Maker-Breaker domination game:
\begin{theorem}\label{thm:pspace}
Deciding the outcome of a Maker-Breaker domination game\ position is {\sc pspace}-complete on bipartite graphs.
\end{theorem}
\begin{proof}
We reduce the problem from {\sc pos-cnf}.
Let $F = C_1 \wedge \cdots \wedge C_m$ be a positive formula in conjunctive normal form using $n$ variables $X_1, \ldots, X_n$.
We build a bipartite graph $G = (V,E)$ from $F$ as follows. There is one vertex for each variable and two vertices for each clause:
$$V = \{x_i|1 \leq i \leq n\} \cup \{c^k_j|1 \leq j \leq m, 0 \leq k \leq 1\},$$
and an edge between a variable vertex $x_i$ and a clause vertex $c^k_j$ (with $k\in \{0,1\}$) if the variable $X_i$ appears in the clause $C_j$. Figure~\ref{fig:reduction_POScNF} shows an example of such a construction for the formula $F=(X_1 \vee X_2) \wedge (X_1 \vee X_4) \wedge (X_2 \vee X_3 \vee X_4)$.\\
\begin{figure}[ht]
\centering
\scalebox{1}{
\begin{tikzpicture}
\node[noeud] (x1) at (0,0){};
\node[noeud] (x2) at (1.25,0){};
\node[noeud] (x3) at (2.5,0){};
\node[noeud] (x4) at (3.75,0){};
\node[above] at (x1) {$x_1$};
\node[above] at (x2) {$x_2$};
\node[above] at (x3) {$x_3$};
\node[above] at (x4) {$x_4$};
\node[noeud] (c1) at (-0.8,-3){};
\node[noeud] (c1b) at (0.1,-3){};
\node[noeud] (c2) at (1.5,-3){};
\node[noeud] (c2b) at (2.4,-3){};
\node[noeud] (c3) at (3.8,-3){};
\node[noeud] (c3b) at (4.7,-3){};
\node[below] at (c1){$c_1^0$};
\node[below] at (c1b) {$c_1^1$};
\node[below] at (c2) {$c_2^0$};
\node[below] at (c2b) {$c_2^1$};
\node[below] at (c3) {$c_m^0$};
\node[below] at (c3b) {$c_m^1$};
\draw (x1) -- (c1);
\draw (x1) -- (c1b);
\draw (x1) -- (c2);
\draw (x1) -- (c2b);
\draw (x2) -- (c1);
\draw (x2) -- (c1b);
\draw (x2) -- (c3);
\draw (x2) -- (c3b);
\draw (x3) -- (c3);
\draw (x3) -- (c3b);
\draw (x4) -- (c2);
\draw (x4) -- (c2b);
\draw (x4) -- (c3);
\draw (x4) -- (c3b);
\end{tikzpicture}
}
\caption{Reduction from {\sc pos-cnf} on $(X_1 \vee X_2) \wedge (X_1 \vee X_4) \wedge (X_2 \vee X_3 \vee X_4)$.}
\label{fig:reduction_POScNF}
\end{figure}
We now show that Prover has a winning strategy in $F$ as first player (respectively second player) if and only if {Dominator}\ has a winning strategy in $G$ as first (resp. second) player.
Assume Prover has a winning strategy in $F$. We first consider the case where Prover is the last player to play in {\sc pos-cnf} (i.e. $n$ is odd if Prover plays first and even if Prover plays second).
{Dominator}\ builds his strategy on $G$ as follows:
\begin{itemize}
\item If Prover and {Dominator}\ are starting the game, {Dominator}\ chooses the vertex $x_i$ corresponding to the variable $X_i$ played by Prover in his winning strategy.
\item Whenever {Staller}\ chooses a vertex $c^k_j$, {Dominator}\ answers by choosing the vertex $c^{1-k}_j$.
\item Whenever {Staller}\ chooses a vertex $x_i$, {Dominator}\ assumes Disprover chose the variable $X_i$. Then he answers by choosing the vertex $x_j$ corresponding to the variable $X_j$ played by Prover in his winning strategy.
\end{itemize}
This last step is always possible since we assume that Prover plays last in the {\sc pos-cnf} game. When all vertices are chosen, since Prover was winning in $F$, for each vertex $c^k_j$ there is a neighbor $x_i$ that was chosen by {Dominator}.
As all variables are in a clause, and for each $j$, {Dominator}\ chose either $c^0_j$ or $c^1_j$, all vertices of the form $x_i$ are also dominated by {Dominator}'s choice of vertices.
Hence {Dominator}\ wins the game.
If Prover is not the last player to move, {Dominator}\ follows the same strategy, but when {Staller}\ plays the last variable vertex, {Dominator}\ cannot answer with a variable vertex. Then he plays any clause vertex $c^k_j$ and imagines he did not, as in the Imagination strategy of Proposition~\ref{prop:imagination}, and goes on according to his strategy. If Staller answers the second vertex of the clause $C_j$ at some point, then {Dominator}\ chooses another unplayed clause vertex. At the end, we will also have, as before, one vertex of each clause played by {Dominator}, and the same conclusion holds.
Assume now Disprover has a winning strategy in $F$. The strategy for {Staller}\ is exactly the same:
\begin{itemize}
\item Whenever Disprover's strategy requires to choose a variable $X_i$, {Staller}\ chooses the vertex $x_i$.
\item Whenever {Dominator}\ chooses a vertex $c^k_j$, {Staller}\ answers by choosing the vertex $c^{1-k}_j$.
\item Whenever {Dominator}\ chooses a vertex $x_i$, {Staller}\ assumes Prover chose the variable $X_i$.
\end{itemize}
If the last step is not possible, this means that all the variables are chosen. Then, there exists a clause $C_j$ for which no variables are chosen by Prover. If $c^0_j$ and $c^1_j$ are already played, one of them has been chosen by {Staller}, and thus is isolated. Otherwise, {Staller}\ chooses $c^0_j$ and isolates it. In both cases, {Staller}\ wins.
\end{proof}
\begin{corollary}
Deciding the outcome of a Maker-Breaker domination game\ position is {\sc pspace}-complete on chordal graphs, and in particular on split graphs.
\end{corollary}
\begin{proof}
The proof of Theorem~\ref{thm:pspace} remains valid when edges are added between the variable vertices. In particular, if they form a clique, the resulting graph is a split graph, which is a special case of a chordal graph.
\end{proof}
In view of these complexity results, the question of the threshold between {\sc pspace}-completeness and polynomiality is of natural interest. The following section is a first step towards it, with a characterization of a certain structure in the graph that induces a natural winning strategy for {Dominator}.
\section{Pairing strategy}
A natural winning strategy for Breaker in a Maker-Breaker game is the so-called {\em pairing strategy} as defined in \cite{hefetz2014positional}. This strategy can be applied when a subset of the board $X$ can be partitioned into pairs such that each winning set contains one of the pairs. In that case, a strategy for Breaker as a second player consists in occupying the other element of the pair whose first element has just been occupied by Maker. By doing so, Breaker will occupy at least one element in each winning set and thus win the game. In the context of the Maker-Breaker domination game, such a subset corresponds to a special dominating set that we introduce below.
\begin{definition}
Given a graph $G=(V,E)$, a set of pairs of vertices $\{(u_1,v_1),\ldots,(u_k,v_k)\}$ of $V$ is a {\em pairing dominating set} if all the vertices are distinct and if the intersections of the closed neighborhoods of the pairs together cover the vertices of the graph: $$V=\bigcup_{i=1}^k N[u_i]\cap N[v_i].$$
\end{definition}
Figure~\ref{fig:pair_dom_not_edge} shows an example of a pairing dominating set. Clearly, if one chooses one vertex from each pair $(u_i,v_i)$ of a {\em pairing dominating set}, the resulting set is a dominating set of $G$.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\node[noeud] (1) at (-1.866,0.866){};
\node[noeud] (2) at (-1.866,-0.866){};
\node[noeud] (3) at (-1,0){};
\node[noeud] (4) at (0,0.866){};
\node[noeud] (5) at (0,0){};
\node[noeud] (6) at (0,-0.866){};
\node[noeud] (7) at (1,0){};
\node[noeud] (8) at (1.866,0.866){};
\node[noeud] (9) at (1.866,-0.866){};
\node[left] at (1){$u_1$};
\node[left] at (2){$v_1$};
\node[right] at (8){$u_2$};
\node[right] at (9){$v_2$};
\node[above] at (3){$u_3$};
\node[above] at (7){$v_3$};
\draw (3)--(1)--(2)--(3)--(4)--(7)--(8)--(9)--(7)--(5)--(3)--(6)--(7);
\end{tikzpicture}
\caption{The set $\{(u_1,v_1),(u_2,v_2),(u_3,v_3)\}$ is a pairing dominating set.}
\label{fig:pair_dom_not_edge}
\end{figure}
From this definition, we will say that a vertex $w$ is {\em pairing dominated} if there exists a pair $(u,v)$ from a pairing dominating set such that $w\in N[u]\cap N[v]$. In addition, all the pairs $(u,v)$ satisfying $N[u]\cap N[v]=\emptyset$ are useless in the construction of a pairing dominating set. Note that a pair $(u,v)$ of a pairing dominating set is not necessarily an edge of the graph.\\
The pairing strategy applied to the Maker-Breaker domination game\ can be translated into a strategy on a pairing dominating set:
\begin{proposition}\label{prop:pds}
If a graph $G$ admits a pairing dominating set, then $o(G)=\mathcal{D}$.
\end{proposition}
\begin{proof}
If $G$ admits a pairing dominating set, then {Dominator}\ applies the following strategy as a second player: each time {Staller}\ occupies a vertex of a pair $(u_i,v_i)$ for some $i$, {Dominator}\ answers by occupying the other vertex of the same pair if it is not yet occupied. Otherwise, {Dominator}\ plays an arbitrary unoccupied vertex. By definition of a pairing dominating set, this ensures that the vertices chosen by {Dominator}\ form a dominating set of $G$.
Hence {Dominator}\ has a winning strategy as second player, and thus also as first player by Proposition~\ref{prop:imagination}.
\end{proof}
This result induces the following corollary that ensures a winning strategy for {Dominator}\ as a first player.
\begin{corollary}\label{cor:pds}
Given a graph $G$, if there exists a vertex $u$ of $G$ such that $G\setminus N[u]$ admits a pairing dominating set, then $\mathcal{N} \preceq o(G)$.
\end{corollary}
\begin{proof}
If such a vertex exists, then {Dominator}\ starts by occupying it. He then applies his pairing strategy on $G\setminus N[u]$ as a second player to dominate the rest of the graph.
\end{proof}
From this property, a natural question that arises is the detection of graphs having a pairing dominating set. An example of such graphs is when the vertices of the graph can be partitioned into cliques of size at least $2$. In that case, a trivial pairing dominating set consists of choosing any two vertices in each clique. Note that the question of the existence of such a partition is often referred to as the {\em packing by cliques} problem (with cliques of size at least $2$). It was proved to be polynomial by Hell and Kirkpatrick in \cite{hell}. A particular case of this decomposition is when the graph admits a perfect matching. As an example, Proposition~\ref{prop:pds} ensures that paths and cycles of even order have outcome $\mathcal{D}$ as they have a perfect matching. \\
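As an illustration of this particular case, a perfect matching can be turned directly into a pairing dominating set. The sketch below (our own code; the use of the networkx library is an assumption, the paper does not rely on any particular implementation) finds a maximum matching and returns its edges as pairs when the matching is perfect.
\begin{verbatim}
# Sketch: derive a pairing dominating set from a perfect matching,
# assuming the networkx library is available.
import networkx as nx

def pairing_dominating_set_from_matching(G):
    # maximum-cardinality matching, returned as a set of edges (u, v)
    M = nx.max_weight_matching(G, maxcardinality=True)
    if 2 * len(M) != G.number_of_nodes():
        return None      # no perfect matching: this sufficient condition fails
    # each matched edge uv gives N[u] & N[v] containing both u and v,
    # so the matched edges cover all vertices
    return list(M)

G = nx.path_graph(4)     # P_4 has a perfect matching, hence outcome D
print(pairing_dominating_set_from_matching(G))   # e.g. [(0, 1), (3, 2)]
\end{verbatim}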
\begin{remark}
\label{rem:pairing}
The condition of Proposition~\ref{prop:pds} is not necessary. Indeed, the graphs of Figure~\ref{fig:D_no_pairing} are examples with outcome $\mathcal{D}$ and it can be shown that they do not admit a pairing dominating set. Yet, we will see in Section 5 two families of graphs (cographs and trees) for which there is an equivalence between the existence of a winning strategy for {Dominator}\ and the existence of a pairing dominating set.
\end{remark}
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\node at (0,0){
\begin{tikzpicture}
\foreach \thet in {18,90,162,234,306}{
\node[noeud] (\thet) at (\thet:1){};
}
\draw (18) -- (90) -- (162) -- (234) -- (306) -- (18);
\end{tikzpicture}
};
\node at (4,0){
\begin{tikzpicture}
\tikzstyle{every node}=[noeud]
\node(1) at (-1.5,0) {};
\node(2) at (-0.5,0) {};
\node(3) at (0.5,0) {};
\node(4) at (1.5,0) {};
\node(5) at (-1,-1) {};
\node(6) at (0,-1) {};
\node(7) at (1,-1) {};
\node(8) at (-1,1) {};
\node(9) at (0,1) {};
\node(10) at (1,1) {};
\draw (1)--(5)--(6)--(1)--(8)--(9)--(1) (9)--(6)--(2)--(9)--(10)--(4)--(9)--(3)--(6)--(7)--(4)--(6);
\end{tikzpicture}
};
\end{tikzpicture}
\caption{Graphs with outcome $\mathcal D$ and without a pairing dominating set.}
\label{fig:D_no_pairing}
\end{figure}
We conclude this section with a study of the complexity of the pairing dominating set problem.
\begin{theorem}
\label{thm:pair_dom}
Given a graph $G$, it is {\sc np}-complete to decide whether $G$ admits a pairing dominating set.
\end{theorem}
\begin{proof}
Let $G=(V,E)$ be a graph. By definition, the problem is clearly in {\sc np}. It remains to prove the {\sc np}-hardness of the problem by a reduction from {\sc 3-sat}. Let $F= C_1 \wedge \cdots \wedge C_m$ be an instance of {\sc 3-sat} over the variables $X_1,\ldots,X_n$. Without loss of generality, one can assume that every variable appears in both its positive and its negative form in $F$, but not in the same clause. From $F$, we build the following graph $G$ as illustrated by Figure~\ref{fig:gadget}.
\begin{itemize}
\item Each clause $C_j$, $1\leq j\leq m$, is associated to a vertex $c_j$.
\item Each variable $X_i$, $1\leq i\leq n$ is associated to a gadget over seven vertices $\{x_i,y_i,z_i,x'_i,y'_i,z'_i,t_i\}$ such that $x_iy_iz_i$ and $x'_iy'_iz'_i$ are two triangles, and $t_i$ is adjacent to both $x_i$ and $x'_i$. The pairs $(x_i,y_i)$ and $(x'_i,y'_i)$ will be denoted $e_i$ and $\overline{e_i}$ respectively.
\item For each variable $X_i$ and clause $C_j$, we add the two edges $c_jx_i$ and $c_jy_i$ (resp. $c_jx'_i$ and $c_jy'_i$) if $X_i$ appears in clause $C_j$ in its positive (resp. negative) form.
\end{itemize}
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\node[noeud] (x) at (-1,0){};
\node[noeud] (y) at (-1.866,0.5){};
\node[noeud] (z) at (-1.866,-0.5){};
\node[below] at (x){\scriptsize $x_i$};
\node[left] at (y){\scriptsize $y_i$};
\node[left] at (z){\scriptsize $z_i$};
\node[noeud] (t) at (0,0){};
\node[below] at (t){\scriptsize $t_i$};
\node[noeud] (x') at (1,0){};
\node[noeud] (y') at (1.866,0.5){};
\node[noeud] (z') at (1.866,-0.5){};
\node[below] at (x'){\scriptsize $x'_i$};
\node[right] at (y'){\scriptsize $y'_i$};
\node[right] at (z'){\scriptsize $z'_i$};
\node[noeud] (c1) at (-1.3,1.5){};
\node[noeud] (c2) at (-0.6,1.5){};
\node[noeud] (c3) at (0.71,1.5){};
\node at (0.055,1.5) {$\cdots$};
\node[above] at (c1){$c_{j_1}$};
\node[above] at (c2){$c_{j_2}$};
\node[above] at (c3){$c_{j_k}$};
\draw[very thick, color = rougejoli] (x) -- (y) node[midway, above] {$e_i$};
\draw[very thick, color = rougejoli] (x') -- (y') node[midway, above] {$\overline{e_i}$};
\draw (y)--(z) -- (x) -- (t) -- (x') -- (z')--(y');
\draw (x) -- (c1) -- (y) --(c2) -- (x);
\draw (x') -- (c3) -- (y');
\end{tikzpicture}
\caption{Gadget around a variable $X_i$ for the proof of NP-completeness. The clauses $C_{j_1},\ldots,C_{j_k}$ are those where the variable $X_i$ appears.}
\label{fig:gadget}
\end{figure}
We first claim that any assignment of the variables $X_1,\ldots,X_n$ that makes $F$ satisfiable induces a pairing dominating set in $G$. Let $\sigma$ be such an assignment. We build the following set $D$ of pairs of vertices: for each variable $X_i$, we add the pairs $\{(x_i,y_i),(t_i,x'_i),(y'_i,z'_i)\}$ to $D$ if $X_i$ is {\sc true} in $\sigma$, and the pairs $\{(x'_i,y'_i),(t_i,x_i),(y_i,z_i)\}$ otherwise. It now suffices to check that $D$ is a pairing dominating set. First of all, one can easily check that all the vertices of the gadgets (i.e., the vertices other than the clause vertices $c_j$) are pairing dominated by $D$. In addition, as each clause $C_j$ is satisfied by $\sigma$, each vertex $c_j$ is adjacent to at least one pair $(x_i,y_i)$ or $(x'_i,y'_i)$ of $D$, and any choice of vertex in such a pair dominates $c_j$. \\
Now consider a pairing dominating set $D$ of $G$. We first show that for each gadget associated with a variable $X_i$, up to symmetry, there are only four ways to pairing dominate the vertices $t_i$, $z_i$ and $z'_i$, depicted in Figure~\ref{fig:pair_dom_gadget}. Indeed, since each vertex $t_i$ has degree $2$, there are three cases for it to be pairing dominated by $D$: either the pair $(t_i,x'_i)$, or $(t_i,x_i)$, or $(x_i,x'_i)$ must belong to $D$.
{\bf (i)} The pair $(t_i,x'_i)$ belongs to $D$. Then, by considering the vertex $z'_i$, of degree $2$, the pair $(y'_i,z'_i)$ must belong to $D$. The vertex $z_i$ is necessarily pairing dominated by a pair of vertices from the triangle $x_iy_iz_i$, leading to the three cases $(a)$, $(b)$ and $(c)$ of Figure~\ref{fig:pair_dom_gadget}.
{\bf(ii)} The pair $(t_i,x_i)$ belongs to $D$. By symmetry of the gadget, this case is similar to the previous one and we get the pairs symmetric to those of cases $(a)$, $(b)$ and $(c)$.
{\bf(iii)} The pair $(x_i,x'_i)$ belongs to $D$ (Figure~\ref{fig:pair_dom_gadget} (d)). Then both vertices $z_i$ and $z'_i$ must belong to $D$, in the pairs $(y_i,z_i)$ and $(y'_i,z'_i)$.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\node at (0,0){
\begin{tikzpicture}
\clip (-2.2,-1.9) rectangle (2.2,2.2);
\node[petit_noeud] (x) at (-1,0){};
\node[petit_noeud] (y) at (-1.866,0.5){};
\node[petit_noeud] (z) at (-1.866,-0.5){};
\node[petit_noeud] (t) at (0,0){};
\node[petit_noeud] (x') at (1,0){};
\node[petit_noeud] (y') at (1.866,0.5){};
\node[petit_noeud] (z') at (1.866,-0.5){};
\draw[very thick, color = rougejoli] (x) -- (y) node[midway, above right] {$e_i$};
\draw[very thick, color = rougejoli] (x') -- (y') node[midway, above] {$\overline{e_i}$};
\draw (y)--(z) -- (x) -- (t) -- (x') -- (z')--(y');
\draw[dashed] (1.866,0) ellipse (0.2 and 0.75);
\draw[dashed] (0.5,0) ellipse (0.75 and 0.2);
\begin{scope}[shift= {(-1,0)}]
\begin{scope}[shift={(150:0.5)},rotate = 150]
\draw[dashed] (0,0) ellipse (0.75 and 0.2);
\end{scope}
\end{scope}
\begin{scope}[shift={(-1,0)}]
\foreach \thet in {30,60}{
\draw (x) --(\thet :0.3);
\draw[dotted] (\thet :0.3) -- (\thet : 0.5);
}
\end{scope}
\begin{scope}[shift={(-1.866,0.5)}]
\foreach \thet in {65,90,125}{
\draw (y) --(\thet :0.3);
\draw[dotted] (\thet :0.3) -- (\thet : 0.5);
}
\end{scope}
\begin{scope}[shift={(1,0)}]
\foreach \thet in {120,150}{
\draw (x') --(\thet :0.3);
\draw[dotted] (\thet :0.3) -- (\thet : 0.5);
}
\end{scope}
\begin{scope}[shift={(1.866,0.5)}]
\foreach \thet in {65,90,125}{
\draw (y') --(\thet :0.3);
\draw[dotted] (\thet :0.3) -- (\thet : 0.5);
}
\end{scope}
\node at (0,-1.1) {\small $(a)$};
\end{tikzpicture}
};
\node at (5,0){
\begin{tikzpicture}
\clip (-2.2,-1.9) rectangle (2.2,2.2);
\node[petit_noeud] (x) at (-1,0){};
\node[petit_noeud] (y) at (-1.866,0.5){};
\node[petit_noeud] (z) at (-1.866,-0.5){};
\node[petit_noeud] (t) at (0,0){};
\node[petit_noeud] (x') at (1,0){};
\node[petit_noeud] (y') at (1.866,0.5){};
\node[petit_noeud] (z') at (1.866,-0.5){};
\draw[very thick, color = rougejoli] (x) -- (y) node[midway, above] {$e_i$};
\draw[very thick, color = rougejoli] (x') -- (y') node[midway, above] {$\overline{e_i}$};
\draw (y)--(z) -- (x) -- (t) -- (x') -- (z')--(y');
\draw[dashed] (1.866,0) ellipse (0.2 and 0.75);
\draw[dashed] (0.5,0) ellipse (0.75 and 0.2);
\begin{scope}[shift= {(-1,0)}]
\begin{scope}[shift={(-150:0.5)},rotate = -150]
\draw[dashed] (0,0) ellipse (0.75 and 0.2);
\end{scope}
\end{scope}
\begin{scope}[shift={(-1,0)}]
\foreach \thet in {30,60}{
\draw (x) --(\thet :0.3);
\draw[dotted] (\thet :0.3) -- (\thet : 0.5);
}
\end{scope}
\begin{scope}[shift={(-1.866,0.5)}]
\foreach \thet in {65,90,125}{
\draw (y) --(\thet :0.3);
\draw[dotted] (\thet :0.3) -- (\thet : 0.5);
}
\end{scope}
\begin{scope}[shift={(1,0)}]
\foreach \thet in {120,150}{
\draw (x') --(\thet :0.3);
\draw[dotted] (\thet :0.3) -- (\thet : 0.5);
}
\end{scope}
\begin{scope}[shift={(1.866,0.5)}]
\foreach \thet in {65,90,125}{
\draw (y') --(\thet :0.3);
\draw[dotted] (\thet :0.3) -- (\thet : 0.5);
}
\end{scope}
\node at (0,-1.1) {\small $(b)$};
\end{tikzpicture}
};
\node at (0,-2.5){
\begin{tikzpicture}
\clip (-2.2,-1.9) rectangle (2.2,2.2);
\node[petit_noeud] (x) at (-1,0){};
\node[petit_noeud] (y) at (-1.866,0.5){};
\node[petit_noeud] (z) at (-1.866,-0.5){};
\node[petit_noeud] (t) at (0,0){};
\node[petit_noeud] (x') at (1,0){};
\node[petit_noeud] (y') at (1.866,0.5){};
\node[petit_noeud] (z') at (1.866,-0.5){};
\draw[very thick, color = rougejoli] (x) -- (y) node[midway, above] {$e_i$};
\draw[very thick, color = rougejoli] (x') -- (y') node[midway, above] {$\overline{e_i}$};
\draw (y)--(z) -- (x) -- (t) -- (x') -- (z')--(y');
\draw[dashed] (-1.866,0) ellipse (0.2 and 0.75);
\draw[dashed] (0.5,0) ellipse (0.75 and 0.2);
\draw[dashed] (1.866,0) ellipse (0.2 and 0.75);
\begin{scope}[shift={(-1,0)}]
\foreach \thet in {30,60}{
\draw (x) --(\thet :0.3);
\draw[dotted] (\thet :0.3) -- (\thet : 0.5);
}
\end{scope}
\begin{scope}[shift={(-1.866,0.5)}]
\foreach \thet in {65,90,125}{
\draw (y) --(\thet :0.3);
\draw[dotted] (\thet :0.3) -- (\thet : 0.5);
}
\end{scope}
\begin{scope}[shift={(1,0)}]
\foreach \thet in {120,150}{
\draw (x') --(\thet :0.3);
\draw[dotted] (\thet :0.3) -- (\thet : 0.5);
}
\end{scope}
\begin{scope}[shift={(1.866,0.5)}]
\foreach \thet in {65,90,125}{
\draw (y') --(\thet :0.3);
\draw[dotted] (\thet :0.3) -- (\thet : 0.5);
}
\end{scope}
\node at (0,-1.1) {\small $(c)$};
\end{tikzpicture}
};
\node at (5,-2.5){
\begin{tikzpicture}
\clip (-2.2,-1.9) rectangle (2.2,2.2);
\node[petit_noeud] (x) at (-1,0){};
\node[petit_noeud] (y) at (-1.866,0.5){};
\node[petit_noeud] (z) at (-1.866,-0.5){};
\node[petit_noeud] (t) at (0,0){};
\node[petit_noeud] (x') at (1,0){};
\node[petit_noeud] (y') at (1.866,0.5){};
\node[petit_noeud] (z') at (1.866,-0.5){};
\draw[very thick, color = rougejoli] (x) -- (y) node[midway, above] {$e_i$};
\draw[very thick, color = rougejoli] (x') -- (y') node[midway, above] {$\overline{e_i}$};
\draw (y)--(z) -- (x) -- (t) -- (x') -- (z')--(y');
\draw[dashed] (-1.866,0) ellipse (0.2 and 0.75);
\draw[dashed] (1.866,0) ellipse (0.2 and 0.75);
\draw[dashed] (-1.2,0) arc (180:0:0.2);
\draw[dashed] (0.8,0) arc (180:0:0.2);
\draw[dashed] (-1.2,0) arc (-180:0:1.2);
\draw[dashed] (-0.8,0) arc (-180:0:0.8);
\begin{scope}[shift={(-1,0)}]
\foreach \thet in {30,60}{
\draw (x) --(\thet :0.3);
\draw[dotted] (\thet :0.3) -- (\thet : 0.5);
}
\end{scope}
\begin{scope}[shift={(-1.866,0.5)}]
\foreach \thet in {65,90,125}{
\draw (y) --(\thet :0.3);
\draw[dotted] (\thet :0.3) -- (\thet : 0.5);
}
\end{scope}
\begin{scope}[shift={(1,0)}]
\foreach \thet in {120,150}{
\draw (x') --(\thet :0.3);
\draw[dotted] (\thet :0.3) -- (\thet : 0.5);
}
\end{scope}
\begin{scope}[shift={(1.866,0.5)}]
\foreach \thet in {65,90,125}{
\draw (y') --(\thet :0.3);
\draw[dotted] (\thet :0.3) -- (\thet : 0.5);
}
\end{scope}
\node at (0,-1.6) {\small $(d)$};
\end{tikzpicture}
};
\end{tikzpicture}
\caption{Possible pair dominating sets for the gadget of the proof of Theorem~\ref{thm:pair_dom} (up to symmetry).}
\label{fig:pair_dom_gadget}
\end{figure}
In order to find an assignment for $F$, we now show that $D$ can be transformed into a pairing dominating set whose pairs in each gadget are as in Figure~\ref{fig:pair_dom_gadget} (a) (or its symmetric counterpart, in case $(ii)$). Suppose first that, for the gadget associated with some variable $X_i$, the pairs of $D$ are those depicted in Figure~\ref{fig:pair_dom_gadget} $(b)$. As the vertex $z_i$ has no neighbors other than $x_i$ and $y_i$, replacing the pair $(x_i,z_i)$ by the pair $(x_i,y_i)$ in $D$ yields a valid pairing dominating set, since both $x_i$ and $y_i$ are adjacent to $z_i$. This replacement is clearly possible if $y_i$ is not in $D$. In the case where $y_i$ is already in $D$, say in a pair $(y_i,u)$, note that removing this pair from $D$ does not break the pairing dominating property once $(x_i,y_i)$ is added. Indeed, since, by construction of $G$, $x_i$ and $y_i$ have the same closed neighborhood except for $t_i$ (which already belongs to a pair), we have $N[u]\cap N[y_i]\subseteq N[x_i]\cap N[y_i]$. Since $x_i$ and $y_i$ play symmetric roles, the same argument allows us to replace the pairs of Figure~\ref{fig:pair_dom_gadget} $(c)$ by those of $(a)$ in $D$. The last case is when the pairs of $D$ are those of Figure~\ref{fig:pair_dom_gadget} $(d)$ for the variable $X_i$. Since $N[y_i]\cap N[z_i]\subseteq N[y_i]\cap N[x_i]$ and $N[x_i]\cap N[x'_i] = \{t_i\} \subset N[t_i]\cap N[x'_i]$ (as $X_i$ and $\overline{X_i}$ cannot appear in the same clause), we can replace the pairs of Figure~\ref{fig:pair_dom_gadget} $(d)$ by those of Figure~\ref{fig:pair_dom_gadget} $(a)$ without breaking the pairing dominating property of $D$. In case $t_i$ was already in $D$, say in a pair $(t_i,u)$, this pair can once again be removed from $D$, as $N[t_i]\cap N[u]$ is either empty or a subset of $\{x_i,x'_i\}$, which is already pairing dominated by the pairs of Figure~\ref{fig:pair_dom_gadget} $(a)$.
Hence we have transformed $D$ so that all the vertices other than the $c_j$ are pairing dominated by pairs of vertices as in Figure~\ref{fig:pair_dom_gadget} $(a)$. In addition, if $D$ contains pairs other than those depicted in Figure~\ref{fig:pair_dom_gadget} $(a)$, these pairs are necessarily of the form $(c_j,c_l)$, $(z_i,u)$, or $(z'_i,u)$. The last two types of pairs can be removed from $D$, as the vertices of $N[z_i]$ and $N[z'_i]$ are already pairing dominated. The pairs $(c_j,c_l)$ can also be removed from $D$, as the sets $N[c_j]\cap N[c_l]$ are contained in the gadgets (and contain no clause vertices), and are thus already pairing dominated. \\
We now build the following assignment of the variables of $F$: for all $1\leq i \leq n$, the variable $X_i$ is set to {\sc true} if and only if the pair $e_i$ belongs to $D$. As each vertex $c_j$ is pairing dominated in $D$ by at least one pair $e_i$ or $\overline{e_i}$ for some $i$, each corresponding clause $C_j$ contains at least one literal set to {\sc true}, which concludes the proof.
\end{proof}
\section{Graph operations}
In the first part of this section, we study how the outcome behaves under operations applied to graphs whose outcomes are already known. This leads to polynomial-time algorithms solving the Maker-Breaker domination game\ on cographs and forests, as these families can be built using unions, joins, and the attachment of pendant paths.
\subsection{Union and join}
Let $G=(V_G,E_G)$ and $H=(V_H,E_H)$ be disjoint graphs. The \emph{union} $G \cup H$ of $G$ and $H$ is the graph with vertex set $V_G \cup V_H$ and edge set $E_G \cup E_H$.
The \emph{join} $G \bowtie H$ of $G$ and $H$ is the graph with vertex set $V_G \cup V_H$ and edge set $E_G \cup E_H \cup \{uv|u\in V_G, v\in V_H\}$.
\begin{theorem}
\label{thm:union}
Let $G$ and $H$ be two starting positions of the Maker-Breaker domination game .
\begin{itemize}
\item If $o(G)=\mathcal S$ or $o(H)=\mathcal S$ then $o(G\cup H)=\mathcal S$.
\item If $o(G)=o(H) = \mathcal N$ then $o(G\cup H)=\mathcal S$.
\item If $o(G)=o(H) = \mathcal D$ then $o(G\cup H)=\mathcal D$.
\item Otherwise, $o(G\cup H)=\mathcal N$.
\end{itemize}
\end{theorem}
This result is summarized in Table~\ref{tab:union}.
Note that the outcome $\mathcal S$ is absorbing for the union, while the outcome $\mathcal D$ is neutral.
\begin{table}[ht]
\centering
\begin{tabular}{|c||c|c|c|}
\hline
\diagbox{$o(G)$}{$o(H)$} & ${\cal D}$ & ${\cal N}$ & ${\cal S}$ \\ \hline\hline
${\cal D}$ & ${\cal D}$ & ${\cal N}$ & ${\cal S}$ \\ \hline
${\cal N}$ & ${\cal N}$ & ${\cal S}$ & ${\cal S}$ \\ \hline
${\cal S}$ & ${\cal S}$ & ${\cal S}$ & ${\cal S}$ \\ \hline
\end{tabular}
\caption{Outcomes of the Maker-Breaker domination game\ played on the union of $G$ and $H$.}
\label{tab:union}
\end{table}
\begin{proof}
Assume {Staller}\ has a winning strategy on $G$ or $H$. Then she has a winning strategy on $G \cup H$. Indeed, without loss of generality assume that she has a winning strategy on $G$. Her strategy on $G\cup H$ is to play only on $G$ following her winning strategy. If at some point {Dominator}\ is playing on $H$, this can be considered as a passing move in $G$ and by Proposition~\ref{prop:imagination} this does not compromise Staller's strategy. At some point she will isolate a vertex in $G$ and thus in $G\cup H$.
Thus if $G$ or $H$ has outcome $\mathcal{S}$, then whatever {Dominator}\ plays as a first move, {Staller}\ still has a winning strategy on this graph. If both positions have outcome $\mathcal{N}$ then after {Dominator} 's first move, {Staller}\ can play on the other component and also wins. This proves the first two points.
If both positions have outcome $\mathcal D$, then {Dominator}\ has a winning strategy on both graphs playing second. He answers every move of {Staller}\ in the component she plays in, using his winning strategy on that component. If one of the graphs is full (all of its vertices have been played), {Dominator}\ can play any vertex of the other graph and imagine he did not, as in the imagination strategy of Proposition~\ref{prop:imagination}. At the end, {Dominator}\ dominates both components and so $G \cup H$ has outcome $\mathcal{D}$.
Finally, assume without loss of generality that $o(G)=\mathcal N$ and $o(H)=\mathcal D$. If {Staller}\ plays first, then, as in the first case, by applying her winning strategy as first player in $G$ she will be able to isolate a vertex and win. On the other hand, if {Dominator}\ plays first, he can play his winning move on $G$ and then answer each move of {Staller}\ in the component she has played in, using his winning strategy on that component. So the first player has a winning strategy and the outcome is $\mathcal N$.
\end{proof}
\begin{theorem}
\label{thm:join}
Let $G$ and $H$ be two starting positions of the Maker-Breaker domination game .
\begin{itemize}
\item[(i)] If $G=K_1$ and $o(H) = \mathcal S$ (or $H=K_1$ and $o(G) = \mathcal S$)\\ then $o(G \bowtie H)=\mathcal N$.
\item[(ii)] Otherwise, $o(G \bowtie H)=\mathcal D$.
\end{itemize}
\end{theorem}
\begin{proof}
(i) Assume that $G= K_1$ and $o(H) = \mathcal S$. If {Dominator}\ starts, he wins by playing the unique vertex of $G$, which dominates the join, so he has a winning strategy as first player. However, since $o(H)= \mathcal S$, if {Staller}\ starts, she can play the unique vertex of $G$ and then apply her winning strategy as second player on $H$. So she also wins on $G \bowtie H$ as first player, and $o(G \bowtie H)=\mathcal N$.
(ii) Since we are not in the first case, there are two possibilities: either both $G$ and $H$ have at least two vertices or, without loss of generality, $G=K_1$ and $o(H) \succeq \mathcal N$.
Assume first that both $G$ and $H$ have at least two vertices.
Let $u_1$, $v_1$ be two vertices of $G$ and $u_2$, $v_2$ two vertices of $H$. Since every vertex of $G$ is a neighbor of every vertex of $H$ and conversely, $\{(u_1,v_1),(u_2,v_2)\}$ forms a pairing dominating set for $G \bowtie H$ and the outcome is $\mathcal D$ according to Proposition~\ref{prop:pds}.
Assume now that $G=K_1$ and $o(H) \succeq \mathcal N$. Note that {Dominator}\ has a winning strategy on $H$ as first player.
Assume that {Staller}\ is the first player. If on her first move she does not play on the vertex of $G$, then {Dominator}\ wins immediately by playing on it. If she does play on it, then {Dominator}\ will apply his winning strategy as first player on $H$. This will allow him to dominate $H$ and, since each vertex of $H$ dominates $G$, all the vertices of $G \bowtie H$ will be dominated. {Dominator}\ has a winning strategy as second player, hence $o(G \bowtie H) = \mathcal D$.
\end{proof}
The combination of these two results gives a complexity result on the class of cographs. Recall that cographs (or $P_4$-free graphs) can be inductively built from a single vertex by taking the union of two cographs or the join of two cographs. In addition, from a given cograph, recovering this construction from unions and joins can be found with a linear time algorithm~\cite{corneil1985linear}. Since we know the outcome of Maker-Breaker domination game\ for $K_1$ and for the union and the join operators, we can deduce the following corollary.
\begin{corollary}
Deciding the outcome of the Maker-Breaker domination game\ on cographs can be done in polynomial time.
\end{corollary}
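For illustration, the resulting decision procedure can be phrased as a simple recursion over a cotree, whose construction we assume given (for instance by the linear-time algorithm of \cite{corneil1985linear}). The sketch below is our own code with illustrative naming conventions; it simply combines the tables of Theorems~\ref{thm:union} and~\ref{thm:join}.
\begin{verbatim}
# Sketch: outcome of the Maker-Breaker domination game on a cograph
# given as a cotree.  A cotree node is either ('leaf',),
# ('union', left, right) or ('join', left, right).
def union_outcome(a, b):
    if 'S' in (a, b):         return 'S'
    if a == 'N' and b == 'N': return 'S'
    if a == 'D' and b == 'D': return 'D'
    return 'N'                # one D and one N

def join_outcome(a, b, size_a, size_b):
    # outcome N only when one side is K_1 and the other has outcome S
    if (size_a == 1 and b == 'S') or (size_b == 1 and a == 'S'):
        return 'N'
    return 'D'

def outcome(node):
    # returns (outcome, number of vertices)
    if node[0] == 'leaf':
        return 'N', 1         # o(K_1) = N
    o1, n1 = outcome(node[1])
    o2, n2 = outcome(node[2])
    if node[0] == 'union':
        return union_outcome(o1, o2), n1 + n2
    return join_outcome(o1, o2, n1, n2), n1 + n2

# K_2 is the join of two single vertices and has outcome D
print(outcome(('join', ('leaf',), ('leaf',))))   # ('D', 2)
\end{verbatim}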
As stated in Remark~\ref{rem:pairing}, for some families of graphs the outcome of a starting position is $\mathcal D$ if and only if it admits a pairing dominating set. We show that the family of cographs satisfies this property.
\begin{theorem}
A cograph $G$ has outcome $\mathcal D$ if and only if it admits a pairing dominating set.
\end{theorem}
\begin{proof}
By Proposition~\ref{prop:pds}, if a graph admits a pairing dominating set, then it has outcome $\mathcal D$. It remains to prove that every cograph with outcome $\mathcal D$ admits a pairing dominating set.
The proof is done by induction on the number $n$ of vertices of $G$.
First note that the result is true when $n \leq 2$. The only such cographs are $K_1$, $K_2$ and $K_1\cup K_1$, and among them the only graph with outcome $\mathcal D$ is $K_2$. $K_2$ admits a perfect matching and thus a pairing dominating set.
Assume now that every cograph of outcome $\mathcal D$ with at most $n$ vertices admits a pairing dominating set. Let $G$ be a cograph of outcome $\mathcal D$ with $n+1$ vertices. By definition of a cograph, $G$ is either the union or the join of two smaller cographs.
If $G$ is the union of two cographs $G_1$ and $G_2$, they necessarily have outcome $\mathcal D$ by Theorem~\ref{thm:union}. By induction hypothesis, they both admit a pairing dominating set, whose union is a pairing dominating set for $G$.
Assume now that $G$ is the join of two cographs $G_1$ and $G_2$.
If both $G_1$ and $G_2$ have at least two vertices, then, taking any two vertices $u_1$, $v_1$ of $G_1$ and any two vertices $u_2$, $v_2$ of $G_2$, the set $\{(u_1,v_1),(u_2,v_2)\}$ forms a pairing dominating set for $G$.
Assume now that $G_1 = K_1$ and let $x$ be its unique vertex. Then $G_2$ has either outcome $\mathcal N$ or $\mathcal D$ by Theorem~\ref{thm:join}. If $G_2$ has outcome $\mathcal D$ then by induction hypothesis, it admits a pairing dominating set. Every vertex of this pairing dominating set is a neighbor of $x$ and it remains also a pairing dominating set for $G$.
Assume now that $o(G_2) = \mathcal N$. If $G_2=K_1$, then $G=K_2$, whose two vertices form a pairing dominating set. Otherwise, $G_2$ is either the union of two cographs or the join of two cographs.
If $G_2$ is the join of two cographs, by Theorem~\ref{thm:join}, it must be the join of a graph $K_1$ with vertex $y$ and of a graph $H$ with outcome $\mathcal S$. Notice that $x$ and $y$ are both universal vertices so $\{(x,y)\}$ is a pairing dominating set for $G$.
If $G_2$ is the union of $H_1$ and $H_2$ then, without loss of generality, by Theorem~\ref{thm:union} $o(H_1)=\mathcal D$ and $o(H_2)=\mathcal N$. By induction hypothesis, $H_1$ admits a pairing dominating set $S_1$. Note also that by Theorem~\ref{thm:join}, $x \bowtie H_2$ has outcome $\mathcal D$, so by induction hypothesis it admits a pairing dominating set $S_2$. Since $S_1$ pairing dominates $H_1$ and $S_2$ pairing dominates $\{x\} \cup H_2$, $S_1 \cup S_2$ forms a pairing dominating set for $G$.
\end{proof}
\subsection{Glue operator and trees}
We now study the operator consisting of gluing two graphs on a vertex. This operator will be useful in the study of trees. A more formal definition is the following:
\begin{definition}
Let $G=(V_G,E_G)$ and $H=(V_H,E_H)$ be graphs and let $u \in V_G$ and $v\in V_H$ be two vertices. The \emph{glued graph} of $G$ and $H$ at $u$ and $v$ is the graph $G \glue{u}{v} H$ with vertex set $(V_G \setminus \{u\})\cup (V_H \setminus \{v\}) \cup \{w\} $ (where $w$ is a new vertex) in which $xy$ is an edge if and only if $xy$ is an edge of $G$ or of $H$, or $y=w$ and $xu$ is an edge of $G$ or $xv$ is an edge of $H$.
\end{definition}
If the vertex $u$ is clear from the context or does not matter, the glued graph will be denoted by $G \glue{}{v} H$. Similarly, if the vertex $v$ is not needed in the notation, we may also omit it.
Figure~\ref{fig:glue} gives a representation of the glued graph of two graphs.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\draw (0,0) circle (1);
\draw (0,0) node {$G$};
\node[noeud] at (1,0) {};
\draw (1,0) node[left] {$u$};
\draw (2.5,0) circle (1);
\draw (2.5,0) node {$H$};
\node[noeud] at (1.5,0) {};
\draw (1.5,0) node[right] {$v$};
\draw (5,0) circle (1);
\draw (5,0) node {$G$};
\draw (7,0) circle (1);
\draw (7,0) node {$H$};
\draw (6,1.4) node {$G\glue{u}{v} H$};
\node[noeud] at (6,0) {};
\draw (6,0) node[left] {$w$};
\end{tikzpicture}
\caption{Representation of the glued graph of $G$ and $H$ on $u$ and $v$.}
\label{fig:glue}
\end{figure}
Let $H$ be a graph and $v$ a vertex of $H$. We say that the couple $(H,v)$ is \emph{neutral} for the glue operator if for every graph $G$ and every vertex $u$ of $G$, $o(G\glue{u}{v} H)=o(G)$.
\begin{theorem}
\label{thm:neutral}
Let $H$ be a graph and $v$ be a vertex of $H$. $(H,v)$ is neutral for the glue operator if and only if $o(H)=\mathcal N$ and $o(H \setminus \{v\})=\mathcal D$.
\end{theorem}
\begin{proof}
First, let $H$ be a graph and $v$ be a vertex of $H$. Assume that $(H,v)$ is neutral. Then $o(K_1\glue{}{v} H)=o(K_1)$. Notice that $K_1\glue{}{v}H=H$ and since $o(K_1)=\mathcal N$, we necessarily have $o(H)=\mathcal N$.
Now consider the graph $G=K_2 \glue{}{v} H$ that consists of $H$ with a pendant vertex $v'$ attached to $v$. Since $(H,v)$ is neutral, $G$ has the same outcome as $K_2$, namely $\mathcal D$.
In particular, {Dominator}\ has a winning strategy on $G$ playing second. If {Staller}\ plays first on $v$, {Dominator}\ has to answer on $v'$ (otherwise {Staller}\ could isolate it). The rest of his strategy is then a winning strategy as second player on $H \setminus \{v\}$, hence $o(H \setminus \{v\})=\mathcal D$.
This proves that the conditions are necessary for $(H,v)$ to be neutral. We now prove that they are sufficient.
Let $H$ be a graph, $v$ be a vertex of $H$ and $H' = H \setminus \{v\}$, such that $o(H)= \mathcal N$ and $o(H') = \mathcal D$. Let $G$ be a graph and $u$ a vertex of $G$. In the following, we identify the vertices $u$ and $v$ to $w$ and the glued graph of $G$ and $H$ will be denoted by $G \glue{}{} H$.
Since $o(H')=\mathcal{D}$, $o(G \cup H')= o(G)$ by Theorem~\ref{thm:union}. Note that $G \cup H'$ is a subgraph of $G\glue{}{} H$ where only edges are removed so, by Proposition~\ref{prop:subgraph}, $o(G\glue{}{} H) \succeq o(G \cup H') = o(G)$.
We now show that $o(G\glue{}{} H) \preceq o(G)$ to conclude the proof. Note that if $o(G) = \mathcal D$ we necessarily have $o(G\glue{}{} H) \preceq o(G)$.
Assume that $o(G) \preceq \mathcal{N}$. This means that {Staller}\ has a winning strategy on $G$ as first player. Since $o(H)=\mathcal N$, {Staller}\ also has a winning strategy on $H$ as first player. The following strategy is a winning strategy on $G \glue{}{} H$ for {Staller}\ as first player. {Staller}\ begins by applying her winning strategy on $H$ until the strategy requires her to play on $w$. If during this stage {Dominator}\ plays on $w$, by following her strategy, {Staller}\ will isolate a vertex on $H$ different from $w$. This vertex is not connected to $G$ so she wins. If {Dominator}\ plays on $G \setminus \{w\}$ then {Staller}\ can imagine that {Dominator}\ has played on $w$ and will win similarly. So we can assume that {Dominator}\ always answers in $H'$.
When {Staller} 's strategy on $H$ is to play on $w$, instead of playing $w$, she switches to her winning strategy on $G$. Similarly as before, if {Dominator}\ does not answer in $G \setminus\{w\}$, {Staller}\ will win by isolating a vertex of $G$ different from $w$. Thus we can assume that {Dominator} plays only on $G \setminus \{w\}$. Then {Staller}\ continues to apply her winning strategy on $G$ until this strategy requires her to play on $w$. Note that at this point $w$ is a winning move for {Staller}\ both in $G$ and $H$.
{Staller}\ now plays $w$ and answers to every move of {Dominator}\ with her strategy in the same component. Since she follows her winning strategy in $G$ and $H$ she will isolate a vertex in each of these graphs. If one of those two vertices is not $w$, then {Staller}\ wins because this vertex is isolated in $G \glue{}{} H$. If both of these vertices are $w$, then $w$ and its whole neighborhood are played by {Staller}\ in the glued graph and Staller wins.
So {Staller}\ has a winning strategy as first player in $G\glue{}{} H$ and $o(G\glue{}{} H)\preceq \mathcal N$.
Assume now that $o(G)= \mathcal S$, i.e., {Staller}\ has a winning strategy on $G$ as second player. If {Dominator}\ begins by playing on $w$, then {Staller}\ can apply her winning strategy in $G$; she will isolate a vertex different from $w$ and win. If {Dominator}\ begins by playing in $H'$, then {Staller}\ can imagine that he played on $w$, apply her winning strategy on $G$, and win as before. So we can assume that {Dominator}\ begins by playing in $G \setminus \{w\}$. Then {Staller}\ can follow the same strategy as above: she plays her winning strategy on $G$ until she wins or has to play on $w$; when this is the case, she switches to her winning strategy as first player on $H$ until she wins or has to play on $w$; finally she isolates $w$. As before, {Dominator}\ has to answer to {Staller}\ in the graph she has just played in. Thus $o(G\glue{}{} H) = \mathcal S$.
These three cases prove that $o(G\glue{u}{v} H) \preceq o(G)$. Since we also have $o(G\glue{u}{v} H) \succeq o(G)$, this proves that $o(G\glue{u}{v} H) = o(G)$.
\end{proof}
A natural question is whether such neutral couples exist. We answer it by exhibiting an infinite family of them:
\begin{definition}
For $n\geq 2$, the \emph{hanging split graph} of size $n$, denoted $H_n$, is the graph composed of a clique of size $n$ with vertex set $\{v,v_1,\ldots , v_{n-1}\}$ and an independent set of size $n-1$ with vertex set $\{u_1,\ldots,u_{n-1}\}$, together with an edge $u_iv_i$ for all $1 \leq i\leq n-1$.
\end{definition}
Figure~\ref{fig:glue_neutral} gives a representation of the first two hanging split graphs and of the general case.
\begin{proposition}
\label{prop:H_n}
For all $n \geq 2$, $(H_n, v)$ is neutral for the glue operator.
\end{proposition}
\begin{proof}
Note that $H_n \setminus \{v\}$ has a perfect matching so it has outcome $\mathcal D$ by Proposition~\ref{prop:pds}.
If {Dominator}\ plays first on $H_n$, a winning strategy is to start on $v$; the remaining graph $H_n\setminus\{v\}$ has a perfect matching (the edges $u_iv_i$), so he can then follow the corresponding pairing strategy and win.
If {Staller}\ plays first on $H_n$, a winning strategy is to play the vertices $v_i$ one after the other. Each time, {Dominator}\ has to answer on $u_i$, otherwise {Staller}\ wins immediately by isolating this vertex. When every $v_i$ has been played, she plays $v$ and isolates it.
So both players have a winning strategy when playing first and thus $o(H_n) = \mathcal N$. By Theorem~\ref{thm:neutral}, $(H_n,v)$ is neutral.
\end{proof}
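For concreteness, $H_n$ is easy to construct explicitly; the sketch below (our own code, with illustrative vertex names) builds its adjacency lists.
\begin{verbatim}
# Sketch: adjacency lists of the hanging split graph H_n.
def hanging_split_graph(n):
    clique = ['v'] + ['v%d' % i for i in range(1, n)]
    adj = {x: set(clique) - {x} for x in clique}    # clique of size n
    for i in range(1, n):
        u, vi = 'u%d' % i, 'v%d' % i
        adj[u] = {vi}                               # pendant vertex u_i
        adj[vi].add(u)
    return adj

print(sorted(hanging_split_graph(3)))  # ['u1', 'u2', 'v', 'v1', 'v2']
\end{verbatim}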
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\node at (0,0){
\begin{tikzpicture}
\node[noeud] (a) at (-1,0){};
\node[noeud] (b) at (0,0){};
\node[noeud] (c) at (1,0){};
\draw (a)--(b)--(c);
\node[below right] at (1,0) {$v$};
\end{tikzpicture}
};
\node at (3,0){
\begin{tikzpicture}
\node[noeud] (a) at (0,0){};
\node[noeud] (b) at (60:1){};
\node[noeud] (c) at (120:1){};
\node[noeud] (d) at (60:2){};
\node[noeud] (e) at (120:2){};
\draw (d)--(b)--(a)--(c)--(e);
\draw (b)--(c);
\node[below right] at (0,0) {$v$};
\end{tikzpicture}
};
\node at (6,0){
\begin{tikzpicture}
\node at (0,0) {$K_n$};
\draw (0,0) circle (1);
\foreach \val in {0,60,120,240,300}{
\node[noeud] (u\val) at (\val:1){};
}
\foreach \val in {60,120,240,300}{
\node[noeud] (v\val) at (\val:1.5){};
\draw (u\val) -- (v\val);
}
\node at (168:1.25) {\tiny $\bullet$};
\node at (180:1.25) {\tiny $\bullet$};
\node at (192:1.25) {\tiny $\bullet$};
\node[below right] at (1,0) {$v$};
\end{tikzpicture}
};
\node at (0,-2) {$H_2$};
\node at (3,-2) {$H_3$};
\node at (6,-2) {$H_n$};
\end{tikzpicture}
\caption{Examples of hanging split graphs.}
\label{fig:glue_neutral}
\end{figure}
One interest of neutral couples is that if a graph $G$ is of the form $G'\glue{}{v} H$ with $(H,v)$ neutral, then the study of $G$ reduces to the study of $G'$. In the following, we apply this idea to trees by noticing that $P_3$, glued at one of its endpoints, is isomorphic to $H_2$ and is thus neutral.
We define a \emph{$P_2$-irreducible} graph as a graph without a pendant $P_2$, where a pendant $P_2$ is a $P_2$ attached to a graph by an edge. Note that attaching a pendant $P_2$ to a vertex is equivalent to gluing a $P_3$, at one of its endpoints, to the same vertex.
\begin{lemma}
\label{lem:remove_p2}
Every $P_2$-irreducible tree has one of the following forms:
\begin{itemize}
\item $K_1$
\item $P_2$
\item $K_{1,n}$ with $n \geq 3$
\item Trees containing at least two vertices each having at least two leaves as neighbors.
\end{itemize}
\end{lemma}
Figure~\ref{fig:red_trees} shows a representation of these different cases.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\draw (0,0) node[noeud] {};
\node[noeud] (a1) at (2,0){};
\node[noeud] (a2) at (3,0){};
\draw (a1) --(a2);
\draw (5,0) -- ++(18:0.7);
\draw (5,0) -- ++(90:0.7);
\draw (5,0) -- ++(162:0.7);
\draw (5,0) -- ++(234:0.7);
\draw (5,0) -- ++(306:0.7);
\draw (5,0) node[noeud] {};
\draw (5,0)++(18:0.7) node[noeud] {};
\draw (5,0)++(90:0.7) node[noeud] {};
\draw (5,0)++(162:0.7) node[noeud] {};
\draw (5,0)++(234:0.7) node[noeud] {};
\draw (5,0)++(306:0.7) node[noeud] {};
\draw (8.5,0) circle (0.7);
\draw (8.5,0) node {$T$};
\draw (8.5,0)++(140:0.7) -- ++(140:0.5);
\draw (8.5,0)++(30:0.7) -- ++(30:0.5);
\draw (8.5,0)++(-60:0.7) -- ++(-60:0.5);
\draw (8.5,0)++(140:1.2) -- ++(70:0.4);
\draw (8.5,0)++(140:1.2) -- ++(140:0.4);
\draw (8.5,0)++(140:1.2) -- ++(210:0.4);
\draw (8.5,0)++(30:1.2) -- ++(-40:0.4);
\draw (8.5,0)++(30:1.2) -- ++(30:0.4);
\draw (8.5,0)++(30:1.2) -- ++(100:0.4);
\draw (8.5,0)++(-60:1.2) -- ++(-25:0.4);
\draw (8.5,0)++(-60:1.2) -- ++(-95:0.4);
\draw (8.5,0)++(140:1.2) node[noeud] {};
\draw (8.5,0)++(140:1.6) node[noeud] {};
\draw (8.5,0)++(157:1.45)node[noeud] {};
\draw (8.5,0)++(123:1.45)node[noeud] {};
\draw (8.5,0)++(30:1.2) node[noeud] {};
\draw (8.5,0)++(30:1.6) node[noeud] {};
\draw (8.5,0)++(13:1.45) node[noeud] {};
\draw (8.5,0)++(47:1.45) node[noeud] {};
\draw (8.5,0)++(-60:1.2) node[noeud] {};
\draw (8.5,0)++(-68:1.5) node[noeud] {};
\draw (8.5,0)++(-52:1.5) node[noeud] {};
\draw (0,-1) node {$K_{1}$};
\draw (2.5,-1) node {$K_{2}$};
\draw (5,-1) node {$K_{1,n}$};
\end{tikzpicture}
\caption{Different possible reductions for trees.}
\label{fig:red_trees}
\end{figure}
\begin{proof}
Let $T$ be a $P_2$-irreducible graph.
If $T$ has only vertices of degree $1$ and $2$, then $T$ is a path. The only paths that are $P_2$-irreducible are $K_1$ and $K_2$.
Otherwise, let $r$ be a vertex of degree at least 3 and consider $T$ as a tree rooted at $r$. Let $T_1,\ldots,T_k$ be the subtrees attached to $r$.
Consider a subtree $T_i$ that is not a single vertex. Let $x_i$ be a leaf of maximal depth in $T_i$. The parent $y_i$ of $x_i$ has degree at least 3 (otherwise $T$ is not $P_2$-irreducible) and thus has at least one other child, which is necessarily a leaf by maximality of the depth of $x_i$.
Hence in every subtree that is not a single vertex, there is a vertex with at least two leaves as neighbours.
If there are two subtrees of size at least two, we are in the last case. If all the subtrees have size one, the tree is a star with at least three leaves. If there is exactly one subtree of size at least two, then $r$ is a second vertex with at least two leaves as neighbors (the other subtrees being single vertices attached to $r$), and we are again in the last case.
\end{proof}
\begin{theorem}
Deciding the outcome of the Maker-Breaker domination game\ on trees is polynomial.
\end{theorem}
\begin{proof}
The following algorithm solves the Maker-Breaker domination game\ on trees in polynomial time:
For a tree $T$, iteratively remove a pendant $P_2$ until it is not possible anymore. Let $T'$ be the obtained tree. If $T'=P_2$, return the answer $\mathcal D$. If $T'=K_1$ or $K_{1,n}$ with $n \geq 3$, then return $\mathcal N$. Otherwise, return $\mathcal S$.
Note that the above algorithm is polynomial. Indeed, removing pendant $P_2$'s can be done in polynomial time by maintaining the set of leaves and updating it after each removal. Verifying that a tree is $K_1$, $P_2$ or a star can also be done in polynomial time.
We now prove the correctness of the algorithm. Let $T_1, \ldots , T_k$ be the intermediate trees obtained after each removal of a pendant $P_2$. From Proposition~\ref{prop:H_n}, we know that $P_3$ is neutral, and attaching a pendant $P_2$ can be seen as gluing a $P_3$. So $o(T) = o(T_1) = \ldots = o(T_k) = o(T')$.
Since $T'$ is $P_2$-irreducible, it corresponds to one of the situations described in Lemma~\ref{lem:remove_p2}. If it is a $P_2$, the outcome is $\mathcal D$. If it is $K_1$ or $K_{1,n}$ with $n \geq 3$, {Dominator}\ wins as first player by playing the central (or unique) vertex, while {Staller}\ wins as first player by playing this vertex and then isolating a leaf (or by playing the unique vertex of $K_1$); thus the outcome is $\mathcal N$. In the last case, two distinct vertices each have at least two leaf neighbors. Assume that {Staller}\ plays second on $T'$. After {Dominator} 's first move, one of these two vertices and its leaves are unplayed by {Dominator}. {Staller}\ can play this vertex, and will isolate one of its leaf neighbors on her next move. Hence $T'$ indeed has outcome $\mathcal S$ in this last case.
We conclude that the outcome of $T$ is the same as the outcome of $T'$ and the algorithm correctly returns the right output.
\end{proof}
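The reduction described in this proof is straightforward to implement. The sketch below is our own code (the tree representation and function name are assumptions, not from the paper): it repeatedly removes a pendant $P_2$ and then classifies the $P_2$-irreducible tree that remains.
\begin{verbatim}
# Sketch: outcome of the Maker-Breaker domination game on a tree given
# as an adjacency dict {vertex: set(neighbors)}.
def mbd_outcome_tree(adj):
    adj = {v: set(nb) for v, nb in adj.items()}   # work on a copy
    removed = True
    while removed:                    # repeatedly remove a pendant P_2
        removed = False
        for x in list(adj):
            if len(adj[x]) == 1:      # x is a leaf ...
                (y,) = adj[x]
                if len(adj[y]) == 2:  # ... attached to a degree-2 vertex y
                    z = next(iter(adj[y] - {x}))
                    adj[z].discard(y)
                    del adj[x], adj[y]
                    removed = True
                    break
    n = len(adj)
    if n == 2:
        return 'D'                    # reduced tree is P_2
    degrees = sorted(len(nb) for nb in adj.values())
    if n == 1 or (n >= 4 and degrees == [1] * (n - 1) + [n - 1]):
        return 'N'                    # K_1 or a star K_{1,m}, m >= 3
    return 'S'

# P_4 reduces to P_2, hence outcome D (it has a perfect matching)
print(mbd_outcome_tree({1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}))
\end{verbatim}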
\begin{remark}
Note that a tree has outcome $\mathcal D$ if and only if the graph remaining after removing pendant $P_2$'s is a $P_2$. This means that a tree has outcome $\mathcal D$ if and only if it admits a perfect matching, and thus if and only if it has a pairing dominating set.
\end{remark}
\section{Conclusion and perspectives}
In this paper, the complexity of the Maker-Breaker domination game\ is studied for different classes of graphs. {\sc pspace}-completeness is proved for split and bipartite graphs, whereas polynomial algorithms are given for cographs and trees. An interesting equivalence property is that in these last two cases, the outcome is $\mathcal D$ if and only if the graph admits a pairing dominating set. The study of the pairing dominating set problem might be a key in the study of the threshold between {\sc pspace} and {\sc p} for the Maker-Breaker domination game .
As stated in the introduction, another problem that might be relevant to consider is the number of moves needed by {Dominator}\ to win. In particular, it could be worth studying how this value relates to the domination number or the game domination number.
Also, this game has been built from the dominating set problem. Other remarkable structures in graphs could have been chosen, such as total dominating sets. Another variant would be to consider the game in an oriented version.
\section*{Acknowledgment}
The authors would like to thank Simon Schmidt, Milo\v s Stojakovi{\'c} and Sandi Klav\v zar for the fruitful discussions about this topic.
| {
"timestamp": "2018-09-19T02:11:55",
"yymm": "1807",
"arxiv_id": "1807.09479",
"language": "en",
"url": "https://arxiv.org/abs/1807.09479",
"abstract": "We introduce the Maker-Breaker domination game, a two player game on a graph. At his turn, the first player, Dominator, select a vertex in order to dominate the graph while the other player, Staller, forbids a vertex to Dominator in order to prevent him to reach his goal. Both players play alternately without missing their turn. This game is a particular instance of the so-called Maker-Breaker games, that is studied here in a combinatorial context. In this paper, we first prove that deciding the winner of the Maker-Breaker domination game is PSPACE-complete, even for bipartite graphs and split graphs. It is then showed that the problem is polynomial for cographs and trees. In particular, we define a strategy for Dominator that is derived from a variation of the dominating set problem, called the pairing dominating set problem.",
"subjects": "Discrete Mathematics (cs.DM); Combinatorics (math.CO)",
"title": "Maker-Breaker domination game",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9678992942089575,
"lm_q2_score": 0.7310585844894971,
"lm_q1q2_score": 0.7075910879527838
} |
https://arxiv.org/abs/1009.4683 | Efficient Computation of Optimal Trading Strategies | Given the return series for a set of instruments, a \emph{trading strategy} is a switching function that transfers wealth from one instrument to another at specified times. We present efficient algorithms for constructing (ex-post) trading strategies that are optimal with respect to the total return, the Sterling ratio and the Sharpe ratio. Such ex-post optimal strategies are useful analysis tools. They can be used to analyze the "profitability of a market" in terms of optimal trading; to develop benchmarks against which real trading can be compared; and, within an inductive framework, the optimal trades can be used to to teach learning systems (predictors) which are then used to identify future trading opportunities. |
\section{Discussion}
Our main goal was to provide the theoretical basis for the computation
of \emph{a posteriori} optimal trading strategies, with respect to
various criteria.
In particular, we have presented the algorithms, together
with the proofs of their correctness.
The highlights of our contributions are that return and Sterling optimal
strategies can be computed very efficiently, even with constraints
on the number of trades. Sharpe optimal strategies prove to be much
tougher to compute. However, for slightly modified notions of the
Sharpe ratio, where one ignores the impact of the squared bid-ask spread,
we can compute the optimal strategy efficiently.
This is a reasonable approach since in most cases, the bid-ask spread
is \math{\sim 10^{-4}}.
optimal
strategy is not far from optimal with respect to the unmodified
Sharpe ratio.
Interesting topics of future research are to use these optimal strategies
to learn how to trade optimally, which was the original motivation of this
work. Further, one could use them to benchmark trading strategies as well as
markets.
A natural open problem is whether Sharpe optimal strategies can be computed
under constraints on the number of trades.
We suspect that the monotone hierarchy we created with respect to the
SSR has an important role to play, but the result has been elusive.
The algorithms presented here would be even more useful if one could relax Assumption
{\bf A1}. We have used such an assumption because it considerably
simplifies the analysis, and in many cases, the optimal strategy is in fact
an all-or-nothing strategy.
At the core of some of our algorithms is a new technique for
optimizing quotients over
intervals of a sequence.
This technique is based on relating the
problem to convex set operations, and for our purposes has
direct application to optimizing the \math{MDD}, the simplified Sharpe
ratio (SSR),
which is an integral component in optimizing the Sharpe ratio, and
the Downside Deviation Ratio (DDR). This technique may be of more general
use in optimizing other financially important criteria.
As a result, such an algorithm may be of independent interest.
\subsection{Overview of the Algorithm}
Based on the results in the previous section, the main observation is that
if the Sterling optimal strategy makes two or more trades, then its
\math{MDD} will be the bid-ask spread, and hence it must be
the return optimal strategy. Thus we only need to consider the case when the
Sterling optimal strategy makes exactly one trade. A useful observation
is that no optimal strategy will exit a trade in the middle of
a sequence of positive returns or enter a trade in the middle of a
sequence of negative returns. Thus the return sequence can be contracted
by combining any sequence of consecutive positive returns into a single
return, and similarly any sequence of consecutive negative returns. Possible entry points
\math{a_i} then correspond to the beginning of a sequence of positive returns, and
possible exit points to the end of a sequence of positive returns.
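For illustration, the contraction step can be coded in a few lines; the sketch below is our own (assuming additive, e.g.\ log, returns, which is an assumption on our part) and simply merges maximal runs of same-sign returns.
\begin{verbatim}
# Sketch: contract a return sequence by summing maximal runs of
# consecutive same-sign returns.
def contract_returns(r):
    runs = []
    for x in r:
        if runs and (runs[-1] >= 0) == (x >= 0):
            runs[-1] += x        # extend the current same-sign run
        else:
            runs.append(x)       # start a new run
    return runs

# Entry candidates: starts of positive runs; exit candidates: their ends.
print(contract_returns([1, 2, -1, -2, 3]))   # [3, -3, 3]
\end{verbatim}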
The main task is to find the Sterling optimal strategy making exactly one
trade. We show that this problem can be reduced to convex hull operations
in 2-dimensions, and hence obtain an \math{O(n\log n)} algorithm for
constructing the optimal solution.
In order to construct the Sterling optimal strategy which makes at most
\math{K} trades, the basic idea is to start with the return optimal
strategy whose trade intervals
cannot be enlarged (maximal return optimal strategies).
If this strategy makes at most \math{K} trades, then we are back to the
unconstrained case. Otherwise, we show that one can successively merge
neighboring trades in the return optimal strategy to obtain a
sterling optimal strategy. The main difficulty in the algorithm is
to determine which trades to merge, and we will show that a greedy
merging strategy is in fact optimal. | {
"timestamp": "2010-09-24T02:02:21",
"yymm": "1009",
"arxiv_id": "1009.4683",
"language": "en",
"url": "https://arxiv.org/abs/1009.4683",
"abstract": "Given the return series for a set of instruments, a \\emph{trading strategy} is a switching function that transfers wealth from one instrument to another at specified times. We present efficient algorithms for constructing (ex-post) trading strategies that are optimal with respect to the total return, the Sterling ratio and the Sharpe ratio. Such ex-post optimal strategies are useful analysis tools. They can be used to analyze the \"profitability of a market\" in terms of optimal trading; to develop benchmarks against which real trading can be compared; and, within an inductive framework, the optimal trades can be used to to teach learning systems (predictors) which are then used to identify future trading opportunities.",
"subjects": "Computational Engineering, Finance, and Science (cs.CE); Computational Finance (q-fin.CP)",
"title": "Efficient Computation of Optimal Trading Strategies",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9678992932829918,
"lm_q2_score": 0.731058584489497,
"lm_q1q2_score": 0.7075910872758485
} |
https://arxiv.org/abs/2011.15110 | Derivative-Informed Projected Neural Networks for High-Dimensional Parametric Maps Governed by PDEs | Many-query problems, arising from uncertainty quantification, Bayesian inversion, Bayesian optimal experimental design, and optimization under uncertainty-require numerous evaluations of a parameter-to-output map. These evaluations become prohibitive if this parametric map is high-dimensional and involves expensive solution of partial differential equations (PDEs). To tackle this challenge, we propose to construct surrogates for high-dimensional PDE-governed parametric maps in the form of projected neural networks that parsimoniously capture the geometry and intrinsic low-dimensionality of these maps. Specifically, we compute Jacobians of these PDE-based maps, and project the high-dimensional parameters onto a low-dimensional derivative-informed active subspace; we also project the possibly high-dimensional outputs onto their principal subspace. This exploits the fact that many high-dimensional PDE-governed parametric maps can be well-approximated in low-dimensional parameter and output subspace. We use the projection basis vectors in the active subspace as well as the principal output subspace to construct the weights for the first and last layers of the neural network, respectively. This frees us to train the weights in only the low-dimensional layers of the neural network. The architecture of the resulting neural network captures to first order, the low-dimensional structure and geometry of the parametric map. We demonstrate that the proposed projected neural network achieves greater generalization accuracy than a full neural network, especially in the limited training data regime afforded by expensive PDE-based parametric maps. Moreover, we show that the number of degrees of freedom of the inner layers of the projected network is independent of the parameter and output dimensions, and high accuracy can be achieved with weight dimension independent of the discretization dimension. | \section{Introduction}
Many problems in computational science and engineering require
repeated evaluation of an expensive nonlinear parametric map for
numerous instances of input parameters drawn from a probability
distribution $\nu$. These {\em many-query problems} arise in such
problems as Bayesian inference, forward uncertainty quantification,
optimization under uncertainty, and Bayesian optimal experimental
design (OED), and are often governed by partial
differential equations (PDEs).
The maps are parameterized by model parameters $m$ with joint probability distribution $\nu$; these parameters are mapped to outputs $q$ through an implicit dependence on the solution of the PDEs for the state $u$:
\begin{equation}
\underbrace{q(m) = q(u(m))}_\text{Implicit dependence} \text{ where } u \text{ depends on } m \text{ through } \underbrace{R(u,m) = 0}_\text{PDE model}
\end{equation}
The variables $m\in \mathbb{R}^{d_M},u\in \mathbb{R}^{d_U}$ are
formally discretizations of functions in Banach spaces, where the
dimensions $d_M$ and $d_U$ are often large. We assume that the outputs
$q \in \mathbb{R}^{d_Q}$ are differentiable with respect to $m$. The
outputs can be the full PDE states $u$, or state-dependent quantities of interest, such as
integrals or (mollified) pointwise evaluations of the states or their
fluxes, as arise in inverse problems, optimal design and control, and
OED.
Each evaluation of $q$ requires solution of the PDEs at one instance
of $m$, thus making solution of many-query problems prohibitive when
the solution of the governing PDEs is expensive due to nonlinearity,
multiphysics or multiscale nature, and/or complex geometry. For
example, solution of complex PDEs can take hours or even days or
weeks on a supercomputer, making the solution of many-query problems
prohibitive using the full high fidelity PDE solution.
Thus, practical solution of high dimensional many-query problems
requires surrogates for the mapping $m \mapsto q$. The goal of surrogate
construction is to construct approximations of the map that are
accurate over the probability distribution for input parameters, $\nu$,
i.e., to find a surrogate $f: \mathbb{R}^{d_M} \to \mathbb{R}^{d_Q}$ that is inexpensive
to construct and evaluate and that satisfies
\begin{equation}\label{expected_error_bound}
\mathbb{E}_\nu [\| q - f\|^2] = \int \|q(m) - f(m)\|^2 \, d\nu(m) < \epsilon
\end{equation}
for a suitable tolerance $\epsilon >0$, in a suitable norm. Here $\mathbb{E}_\nu$ is the expectation with respect to $\nu$, as defined in equation \eqref{expected_error_bound}.
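In practice, the left-hand side of \eqref{expected_error_bound} is typically estimated by Monte Carlo sampling over held-out input--output pairs; the following is a minimal numpy sketch (our own, not from the paper; names and shapes are illustrative assumptions).
\begin{verbatim}
import numpy as np

def mc_surrogate_error(q_samples, f_samples):
    """Monte Carlo estimate of E_nu[ ||q - f||^2 ] from paired samples
    drawn from nu; both arrays have shape (n_samples, d_Q)."""
    diff = np.asarray(q_samples) - np.asarray(f_samples)
    return np.mean(np.sum(diff ** 2, axis=1))
\end{verbatim}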
To tackle this challenge, various forms of surrogates have been
developed that exploit intrinsic properties of the high-dimensional
parameter-to-output map, $m \mapsto q$ such as sparsity and
low-dimensionality, including, e.g., sparse polynomials
\cite{ BabuskaNobileTempone10,ChenQuarteroni15, CohenDeVore15,
XiuKarniadakis02a}, and reduced basis methods
\cite{Bui-ThanhWillcoxGhattas08a, ChenQuarteroniRozza2017,ChenSchwab2016,
CohenDeVore15}. Dimension-independent complexity to
achieve a desired accuracy has been shown for these methods under
suitable assumptions; see the review in \cite{CohenDeVore15} and
references therein. In recent years, neural networks have shown great
promise in representing high dimensional nonlinear mappings,
especially in data-rich applications such as computer vision and
natural language processing, see for example \cite{GoodfellowBengioCourville2016}. In these
regimes neural networks with high weight dimensions can learn patterns
in very large data sets. More recently, significant interest has
arisen in using neural networks as nonlinear surrogates for
PDE-governed high-dimensional parametric maps in the many-query
setting \cite{BhattacharyaHosseiniKovachki2020,HanJentzenWeinan18,
RuthottoOsherLiEtAl20, SchwabZech19,
ZhuZabarasKoutsourelakisEtAl19}.
However, the PDE-governed many-query regime is fundamentally different
from the data-rich regime of typical data science problems. The key
difference is that in the former, few data may be available to train a
surrogate due to the expense of evaluating the map, and the input--output
pairs may be very high dimensional. For example, consider a high
parameter dimension climate model: the parameters (e.g., unknown
initial conditions) and outputs (e.g., future states) may number in the
millions or billions, but one can afford only hundreds of evaluations
of the mapping $m \mapsto q$. Because the number of weights in a dense
neural network model would then
scale with the square of the number of parameters/outputs, the number
of climate model runs needed to fully inform the weights would need to
scale with the number of parameters and outputs. This is orders of
magnitude larger than is feasible.
Black box strategies for dimension reduction such as convolution
kernels can efficiently represent discretizations of stationary
processes on regular (Cartesian) grids. The limitation to regular
grids presents difficulties for many PDE problems discretized on
highly unstructured meshes. Recent works in geometric deep learning have extended convolution processes to non-Cartesian data \cite{BronsteinBrunaLeCunEtAl17,MontiBoscainiMasciEtAl17}, but even when the meshes are
uniform, the questions of finding the right architecture and performing the
associated dimension reduction remain. Genetic algorithms can be used
to select neural network architectures
\cite{Branke1995,SuganumaShirakawaNagao2017,Yao1993}, but this can be
prohibitive, since each proposed network needs to be trained in order
to test its viability, and the number of trained networks that need to
be explored will be large due to the heuristic nature of these
algorithms.
The key idea for overcoming this critical challenge faced by the PDE-governed
setting is that the mapping $m \mapsto q$
often has low-dimensional structure and smoothness that can be exploited to
aid architecture selection as well as
provide informed subspaces of the input and intrinsic subspaces of the
outputs, within which the mapping can be well approximated.
This intrinsic low-dimensionality is revealed by the Jacobian
and higher-order derivatives of $q$ with respect to $m$. This has
been proven for certain model problems and demonstrated numerically
for many others in model reduction \cite{AlgerChenGhattas20,BashirWillcoxGhattasEtAl08,
ChenGhattas2019}, Bayesian inversion
\cite{Bui-ThanhBursteddeGhattasEtAl12_gbfinalist, Bui-ThanhGhattas12a,
Bui-ThanhGhattas12, Bui-ThanhGhattas13a, Bui-ThanhGhattas15,
Bui-ThanhGhattasMartinEtAl13,ChenGhattas20,ChenVillaGhattas2017, ChenWuChenEtAl19, FlathWilcoxAkcelikEtAl11,
IsaacPetraStadlerEtAl15, MartinWilcoxBursteddeEtAl12,PetraMartinStadlerEtAl14a}, optimization
under uncertainty \cite{AlexanderianPetraStadlerEtAl17,ChenGhattas20a, ChenHabermanGhattas20,
ChenVillaGhattas19}, and
Bayesian optimal experimental design
\cite{AlexanderianPetraStadlerEtAl14, AlexanderianPetraStadlerEtAl16,
CrestelAlexanderianStadlerEtAl17, WuChenGhattas20}.
In this work we consider strategies for constructing reduced dimension
projected neural network surrogates using Jacobian-based derivative
information for effective input dimension reduction, and represent the outputs in their principal subspace. The resulting networks
can be viewed as encoder-decoder networks with a priori computed
reduced input and output bases that reflect the geometry and
low-dimensionality of the parameter-to-output map. When the bases for
the inputs and outputs are fixed, the weight dimension of the neural
network is independent of the discretization dimensions of the
PDE model. On the other hand, when these bases are permitted to be
modified during neural network training, they serve as good initial
guesses that impose the parametric map's structure on the full
encoder-decoder training problem.
We consider active subspaces (AS) \cite{ConstantineDowWang2014,
ZahmConstantinePrieurEtAl2020} for the input space reduction, and the principal subspace given by
proper orthogonal decomposition (POD)
\cite{ManzoniNegriQuarteroni2016, QuarteroniManzoniNegri2015} for the
output reduction. AS incorporates Jacobian information to detect input
subspaces to which the outputs are sensitive, while POD provides a low dimensional
subspace in which the outputs are efficiently represented. We compare
this strategy to the use of Karhunen-Lo\`{e}ve expansion (KLE)
projections for the input space and POD for the output projection, as
was done in \cite{BhattacharyaHosseiniKovachki2020}. KLE exploits low
dimensionality of the parameter probability distribution, as exposed
by the eigenvalue decay of its covariance operator, but does not
account for the outputs. KLE and POD are related to principal component
analysis (PCA) of the parameter data and output data respectively. PCA
is a popular strategy for dimension reduction in neural network
architecture selection; see for example \cite{GargPandaRoy2019}. We
favor the AS for the input projection, since it is derivative-informed; AS
explicitly incorporates the sensitivity of the outputs to the
inputs in determining the dimension reduction.
To motivate this strategy, we derive an input--output projection error
bound based on optimal ridge functions for Gaussian parametric
mappings. We construct projected neural network surrogates based
on AS-to-POD neural network ridge functions, which we compare against
KLE-to-POD neural network ridge functions as well as conventional
approaches using full space neural networks.
We test the resulting surrogates on two different PDE
parameter-to-output map problems: one parametric mapping involving
pointwise observations of the state for a nonlinear
convection-reaction-diffusion problem, and another involving a high
frequency Helmholtz problem. We consider problems where $d_M$ is
large, but $d_Q$ is smaller; thus, we keep the first layer fixed but
train the output layer. These numerical experiments demonstrate that
full space neural network surrogates that depend on the dimension of
the discretization parameter perform poorly in generalization accuracy
in the low data regime. On the other hand, the projected neural
network surrogates are capable of achieving high accuracy in the low
data regime; in particular the derivative-informed projection strategy (AS) performs
best. We also test against neural network ridge functions with
identical architectures that instead use Gaussian random projection
bases to test the effect of the structured bases (AS-to-POD and
KLE-to-POD), and demonstrate that the random projections perform
worse than the structured ones.
\textbf{Contributions:} We propose a general framework to construct
surrogates for high-dimensional parameter-to-output maps governed by
PDEs in the form of \linebreak derivative-informed low-dimensional projected neural
networks that generalize well even with limited numbers of training
data. This framework exploits informed input and output subspaces
that reveal the intrinsic dimensionality of the parameter-to-output
map. The strategy involves constructing ridge function surrogates that
attempt to learn only the nonlinear mapping between informed modes of
the input parameters and output spaces.
Our work differs from \cite{BhattacharyaHosseiniKovachki2020},
which uses KLE-based input projections; instead,
we use derivative-informed active subspace-based projections. Both
representations have input--output projection error bounds given
respectively by the eigenvalue decays of the AS, KLE, and POD
operators. For a fixed rank ridge function surrogate, the bounds for
the AS-to-POD surrogates are tighter than those for the KLE-to-POD,
which involve the square of the Lipschitz constant for the
parameter-to-output map. These are conservative upper bounds, but one
can reasonably expect that incorporating specific information about
the outputs in the input basis can provide better accuracy with fewer
modes. This is confirmed in numerical experiments.
Numerical experiments demonstrate that the derivative-informed strategy
using AS performs well in the limited data regime typical of
PDE-governed parametric maps. When the squared Jacobian of the
parameter-to-output map has spectral structure similar to the
covariance operator of the input parameter distribution, the KLE
eigenvectors perform well as a reduced basis---nearly as well as the
AS strategy. When the parameter-to-output map differs from the
covariance operator, the KLE strategy is seen to perform worse than
the AS strategy, and only slightly better than using Gaussian random
bases.
The rest of the paper is organized as follows: In Section 2, we discuss techniques for input-output dimension reduction for parametric maps, and derive an input--output projection error
bound based on optimal ridge functions for Gaussian parametric
mappings based on AS, KLE and POD. In Section 3 we present strategies for constructing parsimonious neural network surrogates using these building blocks, and discuss errors incurred in the surrogate construction procedure. In Section 4 we present numerical experiments that demonstrate the viability of the derivative-informed projected neural network approach for two problems arising from parametric PDE inference problems.
\section{Input-Output Projected Ridge Function Surrogates}
We proceed by discussing strategies for input-output projected ridge functions of the form:
\begin{equation} \label{input_output_ridge_function}
q(m) \approx f(m) = \Phi_{r_Q}f_r(V_{r_M}^Tm) + b_Q,
\end{equation}
where $V_{r_M}$ is a reduced basis for the input space, $\Phi_{r_Q}$ is a reduced basis for the outputs, and $f_r:\mathbb{R}^{r_M}\rightarrow \mathbb{R}^{r_Q}$ is a ridge function. We begin by examining strategies for input and output projection, and known bounds for projection errors, which lead to an input--output bound that motivates our general approach.
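The following Python sketch illustrates how a surrogate of the form \eqref{input_output_ridge_function} is evaluated once the reduced bases and the low-dimensional map are in hand; the function names are ours and purely illustrative.
\begin{verbatim}
import numpy as np

def ridge_surrogate(m, V_rM, Phi_rQ, f_r, b_Q):
    """Evaluate f(m) = Phi_rQ f_r(V_rM^T m) + b_Q.

    m      : (d_M,)      input parameter vector
    V_rM   : (d_M, r_M)  reduced input basis
    Phi_rQ : (d_Q, r_Q)  reduced output basis
    f_r    : callable mapping (r_M,) -> (r_Q), e.g., a small neural network
    b_Q    : (d_Q,)      output shift (e.g., the sample mean of the outputs)
    """
    z = V_rM.T @ m            # encode: project onto the reduced input basis
    y = f_r(z)                # low-dimensional nonlinear map
    return Phi_rQ @ y + b_Q   # decode: lift back to the full output space
\end{verbatim}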
\subsection{Input Projection: Active Subspace and Karhunen-Lo\`{e}ve Expansion}
In the setting of parametric mappings, the input parameter $m$ comes from a discretization of an infinite-dimensional parameter field. Algorithmic complexity suffers from the curse of dimensionality as the discretization dimension grows. To avoid this, we seek to exploit low-dimensional structure of the parameter-to-output map in architecting neural network surrogates.
An effective strategy for constructing an input dimension reduced surrogate is to construct an optimal ridge function approximation of $m \mapsto q$. A ridge function is a composition of the form $g \circ h$, where $h:\mathbb{R}^{d_M}\rightarrow \mathbb{R}^r$ is a linear mapping (represented by a matrix with $r \ll d_M$ rows), and $g:\mathbb{R}^r \rightarrow \mathbb{R}^{d_Q}$ is a function in $L^2(\mathbb{R}^{d_M}, \sigma(h),\mathbb{R}^{d_Q})$, i.e., measurable with respect to the $\sigma$-algebra generated by $h$. Input dimension reduced dense neural networks have this form.
We seek to construct ridge function surrogates for the mapping $m \mapsto q$ that are restricted to subspaces of the input space that resolve the intrinsic low dimensionality of the mapping. The goal is to find a (nonlinear) ridge function $g$ and a rank-$r$ linear map $V_r:\mathbb{R}^{d_M} \rightarrow \mathbb{R}^r$ such that for a given tolerance $\epsilon >0$,
\begin{equation}
\mathbb{E}_\nu [\| q - g \circ V_r\|_X] \leq \epsilon
\end{equation}
in a suitable norm $X$. The key issues that remain are how to find $V_r$, i.e., both the dimension $r$ and the subspace $X_r \subset \mathbb{R}^{d_M}$ spanned by the columns of $V_r$.
One approach we consider is to use Karhunen-Lo\`{e}ve expansion (KLE) \cite{SchwabTodor2006}, which exploits low dimensional correlation structure of the probability distribution $\nu$ of the parameters $m$. This approach is equivalent to principal component analysis (PCA) of input data. It is considered for solving parametric PDEs using dimension reduced neural networks in \cite{BhattacharyaHosseiniKovachki2020}.
For a Gaussian distribution $\nu = \mathcal{N}(m_0,C)$, with mean $m_0 \in \mathbb{R}^{d_M}$ and covariance $C \in \mathbb{R}^{d_M \times d_M}$, the optimal rank $r$ basis $V_r \in \mathbb{R}^{d_M \times r}$ for representing samples $m_i \sim \nu$ is given by the eigenvectors corresponding to the $r$ largest eigenvalues of the covariance matrix $C = \Psi \text{diag}(c) \Psi^T$, with $(c_i,\Psi_i)_{i \geq 1}$ denoting the eigenpairs of $C$ ordered such that $c_1 \geq c_2 \geq \dots \geq c_{d_M} \geq 0$, i.e.,
\begin{equation}
\min_{V_r \in \mathbb{R}^{d_M \times r}} \mathbb{E}_{m \sim \nu}[\|(I_{d_M} - V_rV_r^T)(m - m_0)\|^2_2] = \sum_{i = r+1}^{d_M} c_i.
\end{equation}
See \cite{ZahmConstantinePrieurEtAl2020}.
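As an illustration, the following Python sketch computes a rank-$r$ KLE basis from a dense covariance matrix; for the large $d_M$ arising in practice one would instead use the matrix-free randomized methods discussed below.
\begin{verbatim}
import numpy as np

def kle_basis(C, r):
    """Rank-r KLE basis: eigenvectors of the covariance C (d_M x d_M)
    corresponding to its r largest eigenvalues.  A dense eigendecomposition
    is used here only for illustration."""
    c, Psi = np.linalg.eigh(C)            # eigenvalues in ascending order
    idx = np.argsort(c)[::-1][:r]         # indices of the r largest
    return Psi[:, idx], c[idx]            # V_r and its eigenvalues
\end{verbatim}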
The input data are restricted to the dominant modes of the covariance of the parameter distribution $\nu$. This approach is appropriate in regimes where the output variability is dominated by the parameter variability in the leading input modes, and the parameter covariance has rapidly decaying eigenvalues.
Input dimension reduction using KLE has limitations since it does not explicitly take into account the mapping $m \mapsto q$, it is only dependent on the distribution $\nu$. For this reason we propose to use derivative-informed input dimension reduction based on the map Jacobian information. The Jacobian of the outputs with respect to the parameters can be used to find a global subspace in which the outputs are most sensitive to the input parameters over the parameter space. Similar techniques have been popularized for dimension reduction for scalar valued functions under the name ``active subspaces'' (AS) \cite{ConstantineDowWang2014}, for which projection error bounds can be established using Poincar\'{e} inequalities. The ideas have been generalized to vector valued functions \cite{ZahmConstantinePrieurEtAl2020}, or scenarios where Poincar\'{e} inequalities do not hold \cite{ParenteWallinWohlmuth2020}.
The active subspace is constructed by the eigenvectors of the Jacobian information matrix, or a Gauss-Newton Hessian of $\frac{1}{2}\|q(m)\|^2_{\ell^2(\mathbb{R}^{d_Q})}$ averaged with respect to the parameter distribution, i.e.,
\begin{equation} \label{averaged_gn_hess}
H_\text{GN} = \mathbb{E}_\nu [\nabla q ^T \nabla q ] = \int_{\mathbb{R}^{d_M}} \nabla q(m)^T\nabla q(m) \, d\nu(m) \in \mathbb{R}^{d_M\times d_M}.
\end{equation}
The leading eigenvectors, corresponding to the largest eigenvalues of $H_\text{GN}$, represent directions in which the parameter-to-output map $m \mapsto q$ is most sensitive to the input parameters $m \in \mathbb{R}^{d_M}$ in the $\ell^2(\mathbb{R}^{d_Q})$ sense (i.e., the sense relevant to a regression problem).
Consider ridge function parametrizations of the form $g(P_rm)$, where $P_r \in \mathbb{R}^{d_M \times d_M}$ is a rank-$r$ projector. In \cite{ZahmConstantinePrieurEtAl2020}, upper bounds are established for the approximation error of $m \mapsto q$ by a conditional expectation of $q$ restricted to the dominant modes of the averaged Gauss-Newton Hessian \eqref{averaged_gn_hess}. Specifically, let $(\lambda_i,v_i)_{i\geq 1}$ denote the eigenpairs of the generalized eigenvalue problem
\begin{equation}\label{generalized_EVP_AS}
H_\text{GN}v_i = \lambda_i C^{-1}v_i,
\end{equation}
then an upper bound for the approximation error $\|q - \mathbb{E}_\nu[q|\sigma(P_r)]\|^2_{\mathcal{H}}$ can be obtained (see Proposition 2.6 in \cite{ZahmConstantinePrieurEtAl2020}):
\begin{equation}\label{trailing_bound_active_subspace}
\|q - \mathbb{E}_\nu[q|\sigma(P_r)]\|_\mathcal{H}^2 \leq \sum_{i=r+1}^{d_M} \lambda^M_i,
\end{equation}
when the projector is taken to be
\begin{equation} \label{as_projector_definition}
P_r = V_rV_r^TC^{-1}.
\end{equation}
Here the function space $\mathcal{H} = L^2(\mathbb{R}^{d_M},\nu;\mathbb{R}^{d_Q}) $, and the $\mathcal{H}$ norm is induced by the inner product:
\begin{equation}
(u,v)_\mathcal{H} = \int_{\mathbb{R}^{d_M}} (u(m),v(m))_{\ell^2(\mathbb{R}^{d_Q})} d\nu(m).
\end{equation}
The conditional expectation $\mathbb{E}_\nu[q|\sigma(P_r)](m)$ represents the mapping $m\mapsto q$ with the directions in the orthogonal complement of $\text{range}(P_r)$ in the parameter space marginalized out. When the projector $P_r$ is orthogonal with respect to the covariance $C$, the conditional expectation with respect to the $\sigma$-algebra generated by $P_r$ can be written as
\begin{equation} \label{q_conditional_expectation}
\mathbb{E}_\nu[q|\sigma(P_r)](m) = \mathbb{E}_{y \sim \nu}[q(P_rm + (I_{d_M} - P_r)y)].
\end{equation}
This bound establishes that when the spectrum of the Gauss-Newton Hessian decays fast, the mapping $m \mapsto q$ can be well approximated in expected value by the conditional expectation $\mathbb{E}_\nu[q|\sigma(P_r)](m)$, which is marginalized to the subspace spanned by the dominant modes of $H_\text{GN}$.
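The following sketch shows one way to approximate the active subspace: a Monte Carlo estimate of $H_\text{GN}$ from sampled Jacobians, followed by the generalized eigenvalue problem \eqref{generalized_EVP_AS} solved via a Cholesky factor of $C$. Dense linear algebra is used here only for illustration.
\begin{verbatim}
import numpy as np
from scipy.linalg import cholesky, eigh

def active_subspace(jacobians, C, r):
    """Rank-r active subspace from sampled Jacobians J_i = grad q(m_i).

    jacobians : list of (d_Q, d_M) arrays at samples m_i ~ nu
    C         : (d_M, d_M) parameter covariance
    Solves H_GN v = lambda C^{-1} v via v = L w with C = L L^T, so that the
    returned columns of V_r are C^{-1}-orthonormal.
    """
    H = sum(J.T @ J for J in jacobians) / len(jacobians)  # Monte Carlo H_GN
    L = cholesky(C, lower=True)
    lam, W = eigh(L.T @ H @ L)              # standard symmetric EVP
    idx = np.argsort(lam)[::-1][:r]
    return L @ W[:, idx], lam[idx]          # V_r and leading eigenvalues
\end{verbatim}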
When the mapping $q$ is Lipschitz continuous with constant $L \geq 0$, and the eigenvalues of the parameter distribution covariance decay quickly, the KLE can be used to construct a similar ridge function bound, i.e., there exists a function $g$ and a projector $P_r$ such that
\begin{equation}\label{eq:KLE-error}
\| q - g \circ P_r\|_\mathcal{H}^2 \leq L^2 \sum_{i=r+1}^{d_M} c_i,
\end{equation}
where $c_i$ is the $i^{th}$ eigenvalue of the covariance $C$. Moreover, it is established (by Proposition 3.1 in \cite{ZahmConstantinePrieurEtAl2020}) that the upper bound of the active subspace projection error in \eqref{trailing_bound_active_subspace} is smaller than or equal to the upper bound of the KLE projection error in \eqref{eq:KLE-error}, i.e.,
\begin{equation}\label{trailing_bound_active_subspace_vs_kle}
\sum_{i=r+1}^{d_M} \lambda^M_i \leq L^2 \sum_{i=r+1}^{d_M} c_i.
\end{equation}
Both KLE and AS can be used to detect low dimensionality of the input space, as well as to provide dominant basis vectors for the input space. KLE can be used to reduce the input dimensionality when the covariance of the parameter distribution $\nu$ has rapid eigenvalue decay; its limitation is that it does not take the outputs $q$ into account. In contrast, AS can be used to reduce the input dimensionality when the covariance-preconditioned Gauss-Newton Hessian has rapid eigenvalue decay; it directly accounts for the sensitivity of the outputs to the input parameters and is therefore applicable in a broader set of circumstances than KLE. The AS projection is likely to be more accurate than the KLE projection, as suggested by the relation between their error bounds \eqref{trailing_bound_active_subspace_vs_kle} and as demonstrated by our numerical experiments.
The limitations of the AS approach are that the parametric mapping is
assumed to be differentiable with respect to the parameters $m$, and
that the computation of the AS basis is formally more expensive than
the computation of the KLE basis. Both bases can be calculated via
matrix free randomized methods \cite{HalkoMartinssonTropp2011}, which do not require the explicit
formation of the matrices, and instead require only matrix-vector
products. KLE requires access to just the covariance operator, while
AS requires evaluations of the Jacobian of the parametric mapping at
different sample points. However, the Jacobian matrix-vector products
required in the computation of the AS subspace require only marginally
more work than the computation of the training data. This is because
once the forward PDE is solved at a sample point to construct a
training data pair, the Jacobian matrix-vector product at that point
requires solution of linearized forward and adjoint PDE problems with
$O(r)$ right hand sides each. When direct solvers (or iterative solvers
with expensive-to-construct preconditioners) are employed to solve these
problems, little additional work is required beyond the forward
solve. This is because triangular solves are asymptotically cheap
compared to the factorizations (or for iterative solves, the
preconditioner needs to be constructed just once). Moreover, blocking
the $O(r)$ right hand sides in the forward/adjoint solves results in
better cache performance, thereby making the Jacobian action less
expensive than it might seem.
In practice, constructing the AS may require fewer samples than
constructing the training data (in our numerical experiments we use
significantly fewer samples for AS construction than for training
data). When the spectrum of
the parameter covariance-preconditioned Gauss-Newton Hessian decays
rapidly, $r$ is small and only few matrix-vector products (and hence
linearized forward/adjoint solves) are required. For more information
see Appendix \ref{jacobian_with_adjoints}.
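For completeness, the sketch below shows a basic double-pass randomized eigensolver in the spirit of \cite{HalkoMartinssonTropp2011}, assuming only access to matrix-vector products $x \mapsto H_\text{GN}x$; each such product is itself a Monte Carlo average of Jacobian-transpose-times-Jacobian actions computed with linearized forward and adjoint solves. In the generalized setting \eqref{generalized_EVP_AS} the resulting basis is additionally $C^{-1}$-orthogonalized; we omit that step here.
\begin{verbatim}
import numpy as np

def randomized_eig(matvec, d, r, oversample=10):
    """Leading eigenpairs of a symmetric PSD operator, accessed only through
    matrix-vector products x -> H x (randomized range finder + small EVP)."""
    k = r + oversample
    Omega = np.random.randn(d, k)                         # random test matrix
    Y = np.column_stack([matvec(Omega[:, j]) for j in range(k)])
    Q, _ = np.linalg.qr(Y)                                # orthonormal range basis
    T = Q.T @ np.column_stack([matvec(Q[:, j]) for j in range(k)])
    lam, S = np.linalg.eigh(T)                            # small dense EVP
    idx = np.argsort(lam)[::-1][:r]
    return Q @ S[:, idx], lam[idx]
\end{verbatim}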
\subsection{Output Projection: Proper Orthogonal Decomposition}
Reduced order modeling (ROM) methods, e.g., reduced basis methods (RBM), have been developed to reduce the dimension of PDE outputs (solutions and quantities of interest) \cite{BennerGugercinWillcox2015,ChenQuarteroniRozza2013,ChenQuarteroniRozza2017,QuarteroniManzoniNegri2015,WillcoxPeraire2002}. This methodological framework has made tractable the solution of many-query problems involving PDEs (optimization, inference, control, inverse problems, etc.) \cite{BuiDamodaranWillcox2004}. In these methods, low dimensional representations of the outputs are constructed from snapshots.
Due to the optimality of the representation in the sense of Proposition \ref{POD_bound_proposition}, this approach is also considered for solving parametric PDEs using dimension reduced neural networks \cite{BhattacharyaHosseiniKovachki2020}.
The proper orthogonal decomposition (POD) for the outputs is the eigenvalue decomposition of the expectation of the following matrix,
\begin{equation}
\mathbb{E}_\nu[qq^T] = \int_{\mathbb{R}^{d_M}} q(m)q(m)^T d\nu(m) = \Phi D\Phi^T.
\end{equation}
Using the Hilbert-Schmidt Theorem, the POD basis is shown to be optimal for the following minimization problem (see \cite{ManzoniNegriQuarteroni2016,QuarteroniManzoniNegri2015}).
\begin{proposition}[Proposition 2.1 in \cite{ManzoniNegriQuarteroni2016}] \label{POD_bound_proposition}
The POD basis $\Phi \in \mathbb{R}^{d_Q \times r_Q}$ is such that
\begin{align}
&\int_{\mathbb{R}^{d_M}}\| q(m) - \Phi\Phi^Tq(m)\|^2_{\ell^2(\mathbb{R}^{d_Q})} d\nu(m) \nonumber \\
= \min_{W \in \mathbb{R}^{d_Q \times r_Q}, W^TW = I_{r_Q}} &\int_{\mathbb{R}^{d_M}}\| q(m) - WW^Tq(m)\|^2_{\ell^2(\mathbb{R}^{d_Q})} d\nu(m).
\end{align}
The error is given by the trailing eigenvalues of $\mathbb{E}_\nu[qq^T]$:
\begin{equation}
\int_{\mathbb{R}^{d_M}}\| q(m) - \Phi\Phi^Tq(m)\|^2_{\ell^2(\mathbb{R}^{d_Q})} d\nu(m) = \sum_{i = r_Q +1}^{\text{rank}(\mathbb{E}_\nu[qq^T])} \lambda^Q_i.
\end{equation}
\end{proposition}
POD serves as a constructive prescription for a low rank basis that is optimal in the $\ell^2$ sense. It also serves as a means of reliably calculating the inherent dimensionality of the outputs for the mapping $m \mapsto q$. The POD basis can be approximated via Monte Carlo samples directly from training data using the ``method of snapshots'' \cite{Pinnau08}.
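A minimal sketch of the method of snapshots follows: the POD basis and eigenvalues are obtained from an SVD of the snapshot matrix, whose columns are Monte Carlo samples of the outputs (mean centering omitted for brevity).
\begin{verbatim}
import numpy as np

def pod_basis(Q_snapshots, r_Q):
    """Rank-r_Q POD basis from output snapshots.

    Q_snapshots : (d_Q, N) matrix whose columns are samples q(m_i), m_i ~ nu.
    The left singular vectors are the eigenvectors of the Monte Carlo
    estimate of E_nu[q q^T]; its eigenvalues are the squared singular
    values divided by N.
    """
    U, s, _ = np.linalg.svd(Q_snapshots, full_matrices=False)
    N = Q_snapshots.shape[1]
    return U[:, :r_Q], (s[:r_Q] ** 2) / N
\end{verbatim}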
\subsection{Input--Output Error Bound for Optimal Ridge Function} \label{section_input_output_bound}
Reduced bases for the parameter space $\mathbb{R}^{d_M}$ and the outputs $\mathbb{R}^{d_Q}$ can be used to design input--output reduced ridge function surrogates for the mapping $m \mapsto q$ as in Equation \eqref{input_output_ridge_function}. When the input dimensions that inform the outputs are well approximated by the modes spanned by $V_{r_M}$, and the outputs are well approximated when restricted to the span of $\Phi_{r_Q}$ then the mapping $m \mapsto q$ can be well approximated by ridge functions of this form. A schematic for this ridge function strategy is given in Figure \ref{ridge_schematic}.
\begin{figure}[H]
\center
\begin{tikzpicture}[scale = 0.9,every node/.style={draw,outer sep=0pt,thick}]
\node[bag] (Input) at (0,0) [minimum width=1cm,minimum height=3cm] {Input basis\\ $\mathbb{R}^{d_M} \rightarrow \mathbb{R}^{r_M}$};
\node[bag] (NN) at (4,0) [minimum width=3cm,minimum height=0.6cm] {$f_r:\mathbb{R}^{r_M}\rightarrow \mathbb{R}^{r_Q}$};
\draw (Input.north east) -- (NN.north west) (Input.south east) -- (NN.south west);
\node[bag] (Output) at (8,0)[minimum width=1cm,minimum height=1.2cm] {Output basis\\ $\mathbb{R}^{r_Q} \rightarrow \mathbb{R}^{d_Q}$};
\draw (NN.north east) -- (Output.north west) (NN.south east) -- (Output.south west);
\end{tikzpicture}
\caption{Dimension reduced representation by conditional expectation ridge function.}
\label{ridge_schematic}
\end{figure}
Combining each of the KLE and AS approaches with POD, an error bound can be established for the conditional expectation ridge function that is restricted to the dominant modes of the POD basis.
\begin{proposition}[Input--Output Ridge Function Error Bound]\label{prop_input_output_error_bound}
Let $P_{r_M} = V_{r_M}V_{r_M}^TC^{-1}\in\mathbb{R}^{d_M \times d_M}$ be the projector, defined as in \eqref{as_projector_definition}, whose basis comes from the active subspace generalized eigenvalue problem \eqref{generalized_EVP_AS}, and define the conditional expectation of the outputs $q$ with respect to the $\sigma$-algebra generated by $P_{r_M}$:
\begin{equation}
q_{r_M}(m) = \mathbb{E}_\nu[q|\sigma(P_{r_M})](m).
\end{equation}
Define the rank $r_Q$ POD decomposition for $q_{r_M}$ as follows:
\begin{equation}
\big[\mathbb{E}_\nu[q_{r_M}q_{r_M}^T]\big]_{r_Q} = \bigg[\int_{\mathbb{R}^{d_M}}q_{r_M}(m)q_{r_M}(m)^T d\nu(m)\bigg]_{r_Q} = \widehat{\Phi}_{r_Q}\widehat{D}_{r_Q} \widehat{\Phi}^T_{r_Q}.
\end{equation}
Then one has
\begin{equation}
\int_{\mathbb{R}^{d_M}} \|q(m) - \widehat{\Phi}_{r_Q}\widehat{\Phi}_{r_Q}^Tq_{r_M}(m)\|^2_{\ell^2(\mathbb{R}^{d_Q})}d\nu(m) \leq \sum_{i=r_M+1}^{d_M}\lambda_i^M + \sum_{i=r_Q +1}^{d_Q} \lambda_i^Q
\end{equation}
Further if $q$ is Lipschitz continuous with constant $L\geq 0$, then
\begin{equation}
\int_{\mathbb{R}^{d_M}} \|q(m) - \widehat{\Phi}_{r_Q}\widehat{\Phi}_{r_Q}^Tq_{r_M}(m)\|^2_{\ell^2(\mathbb{R}^{d_Q})}d\nu(m) \leq L^2\sum_{i=r_M+1}^{d_M}c_i + \sum_{i=r_Q +1}^{d_Q} \lambda_i^Q
\end{equation}
\end{proposition}
\begin{proof}
This result follows from the triangle inequality
\begin{align}
&\int_{\mathbb{R}^{d_M}} \|q(m) - \widehat{\Phi}_{r_Q}\widehat{\Phi}_{r_Q}^Tq_{r_M}(m)\|^2_{\ell^2(\mathbb{R}^{d_Q})}d\nu(m) \nonumber\\
\leq &\int_{\mathbb{R}^{d_M}} \|q(m) - q_{r_M}(m)\|^2_{\ell^2(\mathbb{R}^{d_Q})} + \|q_{r_M}(m) - \widehat{\Phi}_{r_Q}\widehat{\Phi}_{r_Q}^Tq_{r_M}(m)\|^2_{\ell^2(\mathbb{R}^{d_Q})}d\nu(m),
\end{align}
and application of \eqref{trailing_bound_active_subspace},\eqref{trailing_bound_active_subspace_vs_kle} and Proposition \ref{POD_bound_proposition}.
\end{proof}
This result establishes that when the spectra for $\mathbb{E}_\nu[\nabla q^T \nabla q]$ and $\mathbb{E}_\nu[qq^T]$ decay rapidly, the map $m\mapsto q$ can be well approximated over $\nu$ when restricted to the dominant subspaces of AS and POD. Further if $q$ is Lipschitz continuous with constant $L$ not too large, and the covariance of $\nu$ has quick eigenvalue decay, then the map can be well approximated when restricted to the dominant subspace of KLE and POD.
If the mapping $m \mapsto q$ satisfies these conditions, then we need only find a low dimensional ridge function mapping $f_r:\mathbb{R}^{r_M} \rightarrow \mathbb{R}^{r_Q}$. In the next section we consider feedforward neural networks for the nonlinear ridge functions. This input--output dimension reduction strategy is general and can be readily extended to other nonlinear representations such as polynomials, Gaussian processes, etc.
\section{Projected Neural Network} \label{section_nn_architecture}
In this section we present strategies for constructing projected neural networks. A projected neural network, parametrized by weights $\mathbf{w} = [w,b_Q] \in \mathbb{R}^{d_W}$, can be written as follows:
\begin{equation}
f(m,\mathbf{w}) = \Phi_{r_Q} f_r(V_{r_M}^Tm,w) + b_Q.
\end{equation}
The function $f_r$ represents a sequence of nonlinear compositions of affine mappings between successive latent representation spaces $\mathbb{R}^{d_\text{layer}}$. The architecture of the neural network includes the choice of layers, layer dimensions (widths) and nonlinear activation functions, for more information see \cite{GoodfellowBengioCourville2016}. Once an architecture is chosen, optimal weights are found via the solution of an empirical risk minimization problem over given training data. Here we consider least squares regression: given $\{(m_i,q_i)|m_i \sim \nu\}_{i=1}^{N_\text{train}}$, find the weights $\mathbf{w}$ that minimize
\begin{equation} \label{nn_least_squares}
\min_{\mathbf{w} \in \mathbb{R}^{d_W}}\frac{1}{N_\text{train}}\sum_{i=1}^{N_\text{train}} \|f(m_i,\mathbf{w}) - q(m_i)\|^2_{\ell^2(\mathbb{R}^{d_Q})}.
\end{equation}
In the setting we consider, a limited number of data are expected to be available for the empirical risk minimization problem. A general rule of thumb is to have a number of independent training data $N_\text{train}$ that is commensurate with the number of weights, $d_W$, to be inferred (for linear problems, $N_\text{train} > d_W / d_Q$ linearly independent data are needed). Since we cannot afford many training data, we instead try to keep $d_W$ small and the network parsimonious.
A common dimension reduction strategy for regression problems in machine learning is to use an encoder-decoder network \cite{Kramer1991}. In an encoder-decoder input data is nonlinearly contracted to lower dimensions and then extended to the outputs. The main issue in designing an encoder-decoder is to find a reduced dimensionality that is appropriate. In this setting the intrinsic dimensionalities of the inputs and outputs are exposed by AS, KLE and POD. These decompositions can be used to help identify appropriate dimensions for encoder-decoder architectures.
Additionally the basis representations that result from these decompositions can be used explicitly in the neural network configuration. When training a full neural network (i.e. including the first and last layers as weights) the projectors can be used as initial guesses for neural network training, which can ease the difficulty of the optimization problem, by initializing the portions of the weights corresponding to the first and last layers to regions of the parameter space and outputs that are informed by the parameters $m$ or the parameter-to-output map.
Otherwise one can architect an extremely parsimonious network by keeping the input or output projectors fixed. In this case the dimension of the trainable neural network does not depend explicitly on $d_M$ or $d_Q$, but only on $r_M$ and $r_Q$. The network is constrained to learn only patterns between the dominant subspaces of the parameter space and outputs, as exposed by the projectors. If the dependence of the weight dimension on $d_M$ or $d_Q$ is removed by constraining the neural network to these dominant subspaces, the weights can be inferred using few training data. This makes the strategy particularly desirable in settings where the input or output dimension is very large and the mapping needs to be queried many times, as in many-query applications such as inverse problems, optimal experimental design, and forward uncertainty quantification.
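The following PyTorch sketch shows one such parsimonious configuration: the input projection is a fixed (non-trainable) linear layer, the nonlinear core $f_r$ is a small dense network, and the output layer is trainable but initialized from the POD basis. This is an illustrative sketch, not the exact architecture or hyperparameters used in our experiments.
\begin{verbatim}
import torch
import torch.nn as nn

class ProjectedNet(nn.Module):
    """Sketch of f(m, w) = Phi_rQ f_r(V_rM^T m, w) + b_Q with a fixed input
    projector and an output layer initialized from the POD basis."""
    def __init__(self, V_rM, Phi_rQ, b_Q, width=64):
        super().__init__()
        d_M, r_M = V_rM.shape
        d_Q, r_Q = Phi_rQ.shape
        self.encoder = nn.Linear(d_M, r_M, bias=False)
        self.encoder.weight.data = torch.as_tensor(V_rM.T, dtype=torch.float32)
        self.encoder.weight.requires_grad_(False)   # fixed input projection
        self.f_r = nn.Sequential(                   # small trainable core
            nn.Linear(r_M, width), nn.Softplus(),
            nn.Linear(width, r_Q), nn.Softplus())
        self.decoder = nn.Linear(r_Q, d_Q)          # trainable output layer
        self.decoder.weight.data = torch.as_tensor(Phi_rQ, dtype=torch.float32)
        self.decoder.bias.data = torch.as_tensor(b_Q, dtype=torch.float32)

    def forward(self, m):
        return self.decoder(self.f_r(self.encoder(m)))
\end{verbatim}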
\subsection{Discussion of Errors} \label{section:discussion_of_errors}
In Section \ref{section_input_output_bound} we discussed errors in approximating $m \mapsto q$ by a ridge function restricted to the dominant modes of both the inputs and outputs. The bounds there are for projection errors. When we attempt to represent the map by a neural network ridge function other errors are introduced. There is an error in the approximation of the low dimensional mapping by the neural network instead of other nonlinear ridge functions. We can think of this error as
\begin{equation}
\int_{\mathbb{R}^{d_M}}\|f(m,\mathbf{w}) - q_{r_M}(m)\|_{\ell^2(\mathbb{R}^{d_Q})}^2 d\nu(m) = \text{Representation Error}.
\end{equation}
Some theoretical guarantees have been established for neural network representation errors; see for example \cite{SchwabZech19,MaWuE2019,MaWojtowytschWuEtAl20}.
As was discussed in the previous section, the neural network weights are found via the solution of an empirical risk minimization problem, for which we use Monte Carlo samples to approximate the integral with respect to the parameter distribution $\nu$, which incurs a sampling error that leads to a generalization gap \cite{MaWojtowytschWuEtAl20}. The empirical risk minimization problem is nonconvex, and so finding a global minimizer is NP-hard. One has to instead settle for local minimizers that perform reasonably well, and this introduces an additional optimization error:
\begin{subequations}
\begin{equation}
\mathbf{w}^\dagger \neq \mathbf{w}^* = \text{argmin}_{\mathbf{w} \in \mathbb{R}^{d_W}} \frac{1}{N_\text{train}}\sum_{i=1}^{N_\text{train}} \| f(m_i,\mathbf{w})- q(m_i)\|_{\ell^2(\mathbb{R}^{d_Q})}^2.
\end{equation}
The approximate solution ($\mathbf{w}^\dagger$) to the nonconvex optimization problem, is generally different than the global minimizer $(\mathbf{w}^*)$, and the difference between the local minimum and the global minimum is the optimization error
\begin{align}
\bigg|\frac{1}{N_\text{train}}\sum_{i=1}^{N_\text{train}} \| f(m_i,\mathbf{w}^\dagger) - q(m_i)\|_{\ell^2(\mathbb{R}^{d_Q})}^2 - \| f(m_i,\mathbf{w}^*) - q(m_i)\|_{\ell^2(\mathbb{R}^{d_Q})}^2\bigg| \nonumber \\
= \text{Optimization Error}.
\end{align}
\end{subequations}
Recent work suggests that for feedforward neural networks this error can be mitigated by an iterative layerwise optimization procedure \cite{ChanYuYouEtAl20}. An additional sampling error is incurred in the Monte Carlo approximations made for the computation of the AS and POD projectors. Bounds for such errors are investigated in \cite{HolodnakIpsenSmith18}. In total, the errors incurred can be decomposed into projection error, representation error, sampling errors, and optimization error. Each of these errors plays a role in the final surrogate one obtains when constructing it in this form.
\section{Numerical Experiments}
We conduct numerical experiments on input--output projection based neural networks on two PDE based regression problems. We consider projection based neural networks based on AS (compared to KLE) for input projection and POD for output projection. Proposition \ref{prop_input_output_error_bound} suggests that the decay of the eigenvalues for KLE, AS and POD can be used to choose the ranks for these architectures. In order to have a consistent error tolerance criterion between AS and KLE, the Lipschitz constant $L$ appearing in the upper bound \eqref{trailing_bound_active_subspace_vs_kle} must be used for comparison. Since this constant is not known a priori, we use fixed ranks in order to have a fair comparison between KLE and AS. For simplicity we take the input and output ranks to be the same to demonstrate the generalization accuracy of the projected neural networks compared to the full space network. More representative projected architectures could be achieved by more detailed rank selection procedures, but at additional computational cost.
To demonstrate the benefit of using the projection basis vectors to construct the neural networks, we also use Gaussian random vectors for comparison. Moreover, to demonstrate the projection errors separately, without involving the neural network representation and optimization errors, we also consider the following projection error
\begin{equation}\label{input_output_projection_error}
\mathbb{E}_{m \sim \nu}\left[\frac{\|P_\text{POD}q(P_\text{input}m) - q(m)\|_{\ell^2(\mathbb{R}^{d_Q})}}{\|q(m)\|_{\ell^2(\mathbb{R}^{d_Q})}}\right].
\end{equation}
Here $P_\text{POD}$ is the projector onto the principal subspace of the outputs as represented by POD, and $P_\text{input}$ is the input projector (either AS or KLE). These errors are studied as a function of rank.
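The projection error \eqref{input_output_projection_error} is estimated by a sample average, as in the following sketch (mean shifts are omitted for brevity; for an orthonormal KLE basis one would pass the identity in place of $C^{-1}$).
\begin{verbatim}
import numpy as np

def projection_error(q, samples, V_in, C_inv, Phi_pod):
    """Sample-average estimate of the relative input--output projection error.

    q       : callable m -> q(m)
    samples : parameter samples m_i ~ nu
    V_in    : (d_M, r) input basis; the input projector is V_in V_in^T C^{-1}
              for AS, or V_in V_in^T for an orthonormal KLE basis.
    Phi_pod : (d_Q, r) POD basis; the output projector is Phi_pod Phi_pod^T.
    """
    errs = []
    for m in samples:
        qm = q(m)
        q_in = q(V_in @ (V_in.T @ (C_inv @ m)))   # q at the projected input
        q_proj = Phi_pod @ (Phi_pod.T @ q_in)     # project onto POD subspace
        errs.append(np.linalg.norm(q_proj - qm) / np.linalg.norm(qm))
    return np.mean(errs)
\end{verbatim}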
For the training of the projected neural network, once the architectures (including projection bases, network layers, layer dimensions, and nonlinear activation functions) are chosen, the locally optimal weights $\mathbf{w}^\dagger \in \mathbb{R}^{d_W}$ are found via solving an empirical risk minimization problem. Given training data $X_\text{train} = \{(m_i,q(m_i))\}_{i=1}^{N_\text{train}}$, candidate optimal weights are found via solving the empirical risk least-squares minimization problem,
\begin{equation} \label{model_arch_emp_risk}
\min_{\mathbf{w}\in \mathbb{R}^{d_W}} F_{X_\text{train}}(\mathbf{w}) = \frac{1}{2N_\text{train}} \sum_{i=1}^{N_\text{train}} \| q(m_i) - f(m_i,\mathbf{w})\|^2_{\ell^2(\mathbb{R}^{d_Q})}.
\end{equation}
We used a subsampled inexact Newton CG method because it performed reliably better than Adam, SGD, and other first order methods on the problems we tested \cite{OLearyRoseberryAlgerGhattas2019}. We define the relative error as
\begin{equation}
\text{Relative Error} = \sqrt{\frac{\sum_{i=1}^{N_\text{test}}\|q(m_i) - f(m_i,\mathbf{w}^\dagger)\|^2}{\sum_{i=1}^{N_\text{test}}\|q(m_i)\|^2}},
\end{equation}
and the accuracy as
\begin{equation}
\text{Accuracy} = 100(1 - \text{Relative Error}).
\end{equation}
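Concretely, the reported quantities are computed from the test set as in the following snippet.
\begin{verbatim}
import numpy as np

def test_accuracy(q_true, q_pred):
    """Relative error and accuracy over the test set.
    q_true, q_pred : (N_test, d_Q) arrays of true and predicted outputs."""
    rel = np.sqrt(np.sum((q_true - q_pred) ** 2) / np.sum(q_true ** 2))
    return rel, 100.0 * (1.0 - rel)
\end{verbatim}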
We are particularly interested in which surrogates can achieve good performance with only a limited number of training data, since these are the conditions of the problems that motivate this work. We study the performance of the neural networks as a function of the number of training data, $N_\text{train}$. For each problem $2048$ total data are generated, with $512$ set aside as testing data.
Since the performance of the stochastic optimizer can be sensitive to the choice of initial guesses for the weights not given by the projectors, we average results over ten initial guesses for each neural network, and report averages and standard deviations. We are interested in neural networks that are robust to the choice of initial guess for the weights.
We compare the projected neural networks against a full space neural network. The full space neural network (FS) maps the input data to the output dimension, $d_Q$. We consider $d_Q < d_M$, so that the FS is not too high dimensional. For both the projected and full space networks we use two nonlinear layers for simplicity and ease of training, with softplus activation functions; these choices performed well during training. For each of the projected neural networks the weights in the input layer are fixed during training, while the weights in the output layer are trained, which we found improved the generalization accuracy. This allows for a neural network with a parsimonious training weight dimension independent of $d_M$. We do not include the effect of training the input layer in these results; these networks only performed well when the trained input reduced network weights were used as an initial guess, and even then they did not outperform the input reduced networks. This came at the expense of increasing the neural network weight dimension by orders of magnitude. The input and output projectors were orthogonalized and rescaled, which significantly improved generalization accuracy. For more details see Appendix \ref{appendix_implementation_specifics}.
The naming conventions for the neural networks used in this work are as follows: AS for networks that use the AS projector for the inputs and the POD projector for the outputs; KLE for networks that use KLE for the inputs and POD for the outputs; RS (random space) for the Gaussian random projected networks; and FS for the full space networks. We investigate two numerical examples that involve parametric maps related to PDE-governed inference problems.
The first example is a 2D semi-linear convection-reaction-diffusion PDE, where the parameter $m$ represents a coefficient in a nonlinear reaction term. The second example is a 2D Helmholtz PDE with a single source, where the parameter $m$ represents a variable coefficient. In both examples the outputs are the PDE state evaluated at points in the interior of the PDE domain.
Both problems use a Gaussian distribution with Mat\'{e}rn covariance for the uncertain parameter distribution $\nu$; for more information see Appendix \ref{section_matern_prior}. The Mat\'{e}rn covariance can be represented by the inverse of a coercive elliptic operator. The KLE basis vectors are taken as eigenvectors corresponding to the leading eigenvalues of the covariance operator discretized by a finite element method.
\subsection{Convection Reaction Diffusion Problem} \label{section_confusion}
For a first case study, we investigate a 2D semi-linear convection-reaction-diffusion problem with a nonlinear dependence on the random field $m$. The formulation of the problem is
\begin{subequations}
\begin{align}
- \nabla \cdot (k \nabla u) + \mathbf{v} \cdot \nabla u + e^m u^3 &= \mathbf{f} \quad \text{in } \Omega \\
u &= 0 \text{ on } \partial \Omega \\
q(m) = Bu(m) &= [u(\mathbf{x}^{(i)},m)] \quad \text{at } \mathbf{x}^{(i)} \in (0.6,0.8)^2\\
\Omega &= (0,1)^2.
\end{align}
\end{subequations}
In this problem the uncertain parameter enters through the cubic nonlinear reaction term; the parameter $m$ has mean zero. The volumetric forcing function $\mathbf{f}$ is a Gaussian bump located at $x_1 = 0.7,x_2=0.7$. The velocity field $\mathbf{v}$ used for each simulation is the solution of a steady-state Navier-Stokes equation with side walls driving the flow. See Appendix \ref{num_exp_appendix} for more information.
For this problem we study two meshes. Both are unit square meshes parametrized by $n_x,n_y$; the two meshes we consider are $n_x = n_y = 64$ and $n_x = n_y = 192$. The Gaussian Mat\'{e}rn distribution for $\nu$ is parametrized by $\gamma = 0.1,\delta = 1.0$. We start by investigating the eigenvalue decompositions for AS, KLE and POD, and the input--output projection errors given by \eqref{input_output_projection_error}.
In Figure \ref{as_kle_spectra_confusion}, the AS and KLE spectra each agree between the two mesh discretizations and are roughly mesh independent. The AS spectra decay more quickly than the KLE spectra.
\begin{figure}[H]
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{{figures/as_confusion_g0.1_d1.0}.pdf}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{{figures/kle_confusion_g0.1_d1.0}.pdf}
\end{subfigure}
\caption{Active subspace and KLE eigenvalue decay for $\gamma =0.1, \delta = 1.0$}
\label{as_kle_spectra_confusion}
\end{figure}
In Figure \ref{as_kle_vectors_confusion}, the dominant vectors of AS are localized to the part of the domain where the observations are present. The KLE eigenvectors represent the dominant eigenmodes of the fractional PDE operator that is the covariance matrix. For this problem, which has Dirichlet boundary conditions and a unit square mesh, these modes correspond to the constant, and sines and cosines that arise in separation of variables. These eigenvectors do not pick up local information about the mapping $m \mapsto q$.
\begin{figure}[H]
\begin{minipage}{0.85\textwidth}
\begin{subfigure}{0.25\textwidth}
\center
\includegraphics[width=\textwidth]{figures/as_vectors_confusion/frame0_no_colorbar.pdf}
\end{subfigure}%
\begin{subfigure}{0.25\textwidth}
\center
\includegraphics[width=\textwidth]{figures/as_vectors_confusion/frame7_no_colorbar.pdf}
\end{subfigure}%
\begin{subfigure}{0.25\textwidth}
\center
\includegraphics[width=\textwidth]{figures/as_vectors_confusion/frame15_no_colorbar.pdf}
\end{subfigure}%
\begin{subfigure}{0.25\textwidth}
\center
\includegraphics[width=\textwidth]{figures/as_vectors_confusion/frame31_no_colorbar.pdf}
\end{subfigure}
\begin{subfigure}{0.25\textwidth}
\center
\includegraphics[width=\textwidth]{figures/kle_vectors_confusion/frame0_no_colorbar.pdf}
\end{subfigure}%
\begin{subfigure}{0.25\textwidth}
\center
\includegraphics[width=\textwidth]{figures/kle_vectors_confusion/frame7_no_colorbar.pdf}
\end{subfigure}%
\begin{subfigure}{0.25\textwidth}
\center
\includegraphics[width=\textwidth]{figures/kle_vectors_confusion/frame15_no_colorbar.pdf}
\end{subfigure}%
\begin{subfigure}{0.25\textwidth}
\center
\includegraphics[width=\textwidth]{figures/kle_vectors_confusion/frame31_no_colorbar.pdf}
\end{subfigure}
\end{minipage}%
\begin{minipage}{0.15\textwidth}
\includegraphics[scale = 0.6]{figures/plot_onlycbar.pdf}
\end{minipage}
\caption{AS and KLE eigenvectors for $\gamma =0.1, \delta = 1.0$}
\label{as_kle_vectors_confusion}
\end{figure}
In Figure \ref{pod_eigs_and_errors_confusion}, the POD spectra agree in the dominant modes and have similar qualitative decay. They begin to diverge, however, in the small modes; the finer discretization contains more information in the tail of the spectrum. The input--output projection errors in Figure \ref{pod_eigs_and_errors_confusion} are sample average approximations of \eqref{input_output_projection_error} computed for AS and KLE. Both use POD for the outputs, and in each case the ranks are taken to be the same for both the input and the output projectors. Since the output projectors are the same, this allows a study of how informed the input projectors are. The AS projector contains more information that informs the outputs than KLE, particularly in the first $15$--$20$ modes; after that the decay rates of the projection errors are roughly the same.
\begin{figure}[H]
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{{figures/pod_confusion_g0.1_d1.0}.pdf}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{{figures/confusion_g0.1_d1.0input_output_error}.pdf}
\end{subfigure}
\caption{Plot of spectra for POD on the two meshes (left), and the input--output projection error (right) for the convection diffusion problem}
\label{pod_eigs_and_errors_confusion}
\end{figure}
In what follows we train the projected neural networks and compare against the FS network. In Table \ref{table_summary_of_dimensions_confusion} below, the input parameter dimension $d_M$ and the neural network weight dimension $d_W$ for each network are summarized.
\begin{table}[H]
\center
\begin{tabular}{|l||l|l|}
\hline
$d_M$ & $4,225$ & $37,249$ \\
\hline
\hline (AS, KLE, RS) projected network $r = 8$ & $1,124$ & $1,124$ \\
\hline (AS, KLE, RS) projected network $r = 32$ & $6,500$ & $6,500$ \\
\hline (AS, KLE, RS) projected network $r = 128$ & $49,740$ & $49,740$ \\
\hline FS network & $442,800$ & $3,745,200$ \\
\hline
\end{tabular}
\caption{Neural network weight dimensions and parameter dimension for different meshes.}
\label{table_summary_of_dimensions_confusion}
\end{table}
For a first set of results we compare the rank $r = 8$ projected networks with the FS network for the coarse mesh. In Figure \ref{confusion_fixed_rank_coarse}, the AS projected network performs better than all three other networks. The KLE network performs about $2\%$ worse in generalization accuracy; both of these networks perform about the same for different initial guesses for the inner dense layer weights. The RS network performs better than the FS network, which suggests that the reduced dimension architecture has benefits in this regime; however, it is very sensitive to the initial guesses for the weights, as indicated by the one standard deviation error bars. The AS and KLE networks perform significantly better than the RS network, which suggests that there is a clear benefit to choosing informed input and output bases for these networks.
\begin{figure}[H]
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{{figures/avg_accuracy_vs_data_g0.1d1.0nx64}.pdf}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{{figures/proj_avg_accuracy_vs_data_g0.1d1.0nx64}.pdf}
\end{subfigure}
\caption{Generalization accuracy vs number of training data seen for all networks (left), and just the projected networks (right) for the coarse mesh convection diffusion problem.}
\label{confusion_fixed_rank_coarse}
\end{figure}
Figure \ref{confusion_fixed_rank_fine} shows that as the parameter dimension $d_M$ grows, the RS and FS networks become harder to train, while the AS and KLE networks perform comparably well to how they did for the coarse mesh. This suggests that there is mesh independent information that can be encapsulated by the AS, KLE and POD eigenvectors.
\begin{figure}[H]
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{{figures/avg_accuracy_vs_data_g0.1d1.0nx192}.pdf}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{{figures/proj_avg_accuracy_vs_data_g0.1d1.0nx192}.pdf}
\end{subfigure}
\caption{Accuracy vs number of training data seen for all networks (left), and just the projected networks (right) for the fine mesh convection diffusion problem.}
\label{confusion_fixed_rank_fine}
\end{figure}
For the next set of numerical results we compare the performance of the AS and KLE projected networks for different choices of the rank $r$. Figure \ref{confusion_fixed_rank_as_kle_small} shows that the advantages of the AS network are pronounced when the rank is small, but as the rank $r$ grows the KLE network starts to catch up. While the superior performance of AS in low dimensions agrees with the projection errors observed in Figure \ref{pod_eigs_and_errors_confusion}, the comparable performance of KLE to AS for higher dimensions does not agree. We believe that this has to do with the errors inherent in the neural network representation and training, as discussed in Section \ref{section:discussion_of_errors}.
As the input bases grow, there will be more overlap between AS and KLE, and more room for the KLE network to learn patterns captured by the low dimensional AS basis. Another thing to note is that generally as the input and output bases grow the variance with respect to the neural network initial guesses decreases. Similar trends are observed in Figure \ref{confusion_fixed_rank_as_kle_large} for the finer mesh discretization.
\begin{figure}[H]
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{{figures/kle_as_avg_accuracy_vs_data_g0.1d1.0nx64}.pdf}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{{figures/large_kle_as_avg_accuracy_vs_data_g0.1d1.0nx64}.pdf}
\end{subfigure}
\caption{Accuracy vs number of training data seen for AS and KLE networks as a function of rank for the coarse mesh convection diffusion problem.}
\label{confusion_fixed_rank_as_kle_small}
\end{figure}
\begin{figure}[H]
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{{figures/kle_as_avg_accuracy_vs_data_g0.1d1.0nx192}.pdf}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{{figures/large_kle_as_avg_accuracy_vs_data_g0.1d1.0nx192}.pdf}
\end{subfigure}
\caption{Accuracy vs number of training data seen for AS and KLE networks as a function of rank for the fine mesh convection diffusion problem.}
\label{confusion_fixed_rank_as_kle_large}
\end{figure}
\subsection{Helmholtz Problem}
For a second case study, we investigate a 2D high frequency Helmholtz problem with a nonlinear dependence on the random field $m$. The formulation of the problem is
\begin{subequations}
\begin{align}
- \Delta u - (k e^m)^2u &= \mathbf{f} \quad \text{in } \Omega \\
\text{PML Boundary Condition} &\text{ on } \partial \Omega \setminus \Gamma_\text{top} \\
\nabla u \cdot n &= 0 \text{ on } \Gamma_\text{top} \\
q(m) = Bu(m) &= [u(\mathbf{x}^{(i)},m)] \quad \text{at } \mathbf{x}^{(i)} \in \Omega\\
\Omega &= (0,3)^2.
\end{align}
\end{subequations}
The PML boundary condition on the sides and bottom of the domain simulates truncation of a semi-infinite domain; waves can only be reflected on the top surface. Due to the PML domain truncation, the PDE for the Mat\'{e}rn covariance for the parameter distribution has Robin boundary conditions to eliminate boundary artifacts. The problem has a single source at a point $(0.775,2.85)$, and the $100$ observation points are clustered in a box $(0.575,0.975)\times (2.75,2.95)$; none of the observation points coincide with the source. Since the PDE state variable here is a velocity field, the outputs have dimension $d_Q = 200$ for this problem. The wave number for this problem is $9.118$. The Gaussian Mat\'{e}rn distribution $\nu$ is parametrized by $\gamma = 1.0,\delta = 5.0$. We consider two meshes for this problem again, $n_x = n_y = 64$ and $128$. We again start by investigating the eigenvalue decompositions for AS, KLE and POD, and the input--output projection errors given by \eqref{input_output_projection_error}.
In Figure \ref{as_kle_spectra_helmholtz}, the AS and KLE spectra each agree between the two mesh discretizations and are roughly mesh independent. Note that for this problem the decay of the dominant modes of the AS spectra relative to KLE is much more pronounced than in Figure \ref{as_kle_spectra_confusion}.
\begin{figure}[H]
\begin{subfigure}{0.5\textwidth}
\center
\includegraphics[width = \textwidth]{{figures/as_helmholtz_g1.0_d5.0}.pdf}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\center
\includegraphics[width = \textwidth]{{figures/kle_helmholtz_g1.0_d5.0}.pdf}
\end{subfigure}%
\caption{Active subspace and KLE spectra for $\gamma =1.0, \delta = 5.0$}
\label{as_kle_spectra_helmholtz}
\end{figure}
The dominant AS vectors are localized to the part of the domain where the observations are present. The first mode captures strictly locally supported information about the uncertain parameters, and the higher modes start to capture higher frequency effects inherent to the Helmholtz problem. The KLE eigenvectors again correspond to the eigenvectors arising from separation of variables; note that for this problem the PDE forward map is dissimilar to the covariance operator, so this basis is not an optimal representation of the PDE state, nor of the outputs, which are a linear restriction of the state. These eigenvectors again do not pick up local information about the mapping $m \mapsto q$.
\begin{figure}[H]
\begin{minipage}{0.85\textwidth}
\begin{subfigure}{0.25\textwidth}
\center
\includegraphics[width=\textwidth]{figures/as_vectors_helmholtz/frame0_no_colorbar.pdf}
\end{subfigure}%
\begin{subfigure}{0.25\textwidth}
\center
\includegraphics[width=\textwidth]{figures/as_vectors_helmholtz/frame7_no_colorbar.pdf}
\end{subfigure}%
\begin{subfigure}{0.25\textwidth}
\center
\includegraphics[width=\textwidth]{figures/as_vectors_helmholtz/frame15_no_colorbar.pdf}
\end{subfigure}%
\begin{subfigure}{0.25\textwidth}
\center
\includegraphics[width=\textwidth]{figures/as_vectors_helmholtz/frame31_no_colorbar.pdf}
\end{subfigure}
\begin{subfigure}{0.25\textwidth}
\center
\includegraphics[width=\textwidth]{figures/kle_vectors_helmholtz/frame0_no_colorbar.pdf}
\end{subfigure}%
\begin{subfigure}{0.25\textwidth}
\center
\includegraphics[width=\textwidth]{figures/kle_vectors_helmholtz/frame7_no_colorbar.pdf}
\end{subfigure}%
\begin{subfigure}{0.25\textwidth}
\center
\includegraphics[width=\textwidth]{figures/kle_vectors_helmholtz/frame15_no_colorbar.pdf}
\end{subfigure}%
\begin{subfigure}{0.25\textwidth}
\center
\includegraphics[width=\textwidth]{figures/kle_vectors_helmholtz/frame31_no_colorbar.pdf}
\end{subfigure}
\end{minipage}%
\begin{minipage}{0.15\textwidth}
\includegraphics[scale = 0.5]{figures/plot_onlycbar.pdf}
\end{minipage}
\caption{AS and KLE eigenvectors for $\gamma =1.0, \delta = 5.0$}
\label{as_kle_vectors_helmholtz}
\end{figure}
In Figure \ref{pod_eigs_and_errors_helmholtz}, the POD spectra agree in the dominant modes and have similar qualitative decay. They begin to diverge, however, in the small modes -- even more so than in the convection diffusion problem; the finer discretization contains more information in the tail of the spectrum. The AS projector contains more information about the outputs than KLE, particularly in the first $15$--$20$ modes; after that the decay rates of the projection errors are roughly the same.
\begin{figure}[H]
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{{figures/pod_helmholtz_g1.0_d5.0}.pdf}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{{figures/helmholtz_f600_g1.0_d5.0input_output_error}.pdf}
\end{subfigure}
\caption{Plot of spectra for POD on the two meshes (left), and the input--output projection error (right) for the Helmholtz problem}
\label{pod_eigs_and_errors_helmholtz}
\end{figure}
In what follows we train the projected neural networks and compare against the FS network. In Table \ref{table_summary_of_dimensions_helmholtz} below, the input parameter dimension $d_M$ and the weight dimension $d_W$ for each network are summarized.
\begin{table}[H]
\center
\begin{tabular}{|l||l|l|}
\hline
$d_M$ & $4,225$ & $16,641$ \\
\hline
\hline (AS, KLE, RS) projected network $r = 8$ & $2,024$ & $2,024$ \\
\hline (AS, KLE, RS) projected network $r = 32$ & $9,800$ & $9,800$ \\
\hline (AS, KLE, RS) projected network $r = 64$ & $25,544$ & $25,544$ \\
\hline FS network & $925,600$ & $3,408,800$ \\
\hline
\end{tabular}
\caption{Neural network weight dimensions and parameter dimension for the different meshes used.}
\label{table_summary_of_dimensions_helmholtz}
\end{table}
For a first set of results we again compare the rank $r = 8$ projected networks with the FS network for the coarse mesh. In Figure \ref{helmholtz_fixed_rank_coarse}, the AS projected network performs better than all three other networks. For this problem the KLE network performs about $10\%$ worse in generalization accuracy; both of these networks perform about the same for different initial guesses for the inner dense layer weights. The RS and KLE networks perform better than the FS network in the low data regime, but the FS network begins to outperform these two networks when more training data are available. The FS network nearly catches up to the parsimonious AS $r = 8$ network in the high data limit. The benefit of the AS and FS networks over the KLE and RS networks seems to be their ability to resolve more oscillatory information. The FS network is able to do this when more data are available, since it has many weights to fit; the AS network can do this in both the limited data regime and when more data are available, since the AS basis is able to represent these modes. The KLE network represents smooth modes of the parameter distribution covariance, which are not as useful when the parametric mapping is dominated by highly oscillatory modes, as in this example.
The benefit of KLE for the convection-reaction-diffusion problem is that the forward PDE is similar to the elliptic PDE that underlies the Gaussian Mat\'{e}rn covariance. The limitations of KLE show up for the Helmholtz problem, where the forward PDE mapping is dissimilar from the parameter distribution covariance operator $C$.
\begin{figure}[H]
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{{figures/avg_accuracy_vs_data_g1.0d5.0nx64}.pdf}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{{figures/proj_avg_accuracy_vs_data_g1.0d5.0nx64}.pdf}
\end{subfigure}
\caption{Accuracy vs number of training data seen for all networks (left), and just the projected networks (right) for the coarse mesh Helmholtz problem.}
\label{helmholtz_fixed_rank_coarse}
\end{figure}
Figure \ref{helmholtz_fixed_rank_fine} shows that as the parameter dimension $d_M$ grows, the RS and FS networks become harder to train, while the AS and KLE networks perform comparably well to how they did for the coarse mesh. The FS network again nearly catches up to the AS $r = 8$ network in the high data limit.
\begin{figure}[H]
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{{figures/avg_accuracy_vs_data_g1.0d5.0nx128}.pdf}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{{figures/proj_avg_accuracy_vs_data_g1.0d5.0nx128}.pdf}
\end{subfigure}
\caption{Accuracy vs number of training data seen for all networks (left), and just the projected networks (right) for the fine mesh Helmholtz problem.}
\label{helmholtz_fixed_rank_fine}
\end{figure}
For the next set of numerical results we again compare the performance of the AS and KLE projected networks for different choices of the rank $r$. Figure \ref{helmholtz_fixed_rank_as_kle_small} shows that for this problem the AS networks consistently outperform the KLE networks. The low dimensional AS network performs well with little data, but its relative performance degrades as more training data become available. The AS networks benefit from increasing the rank to $32$ and $64$, while both the KLE and AS networks show reduced accuracy for $r = 128$. The Helmholtz data are evidently harder to fit than the convection-reaction-diffusion data, which are less oscillatory, as seen from the AS eigenvectors for the two problems (Figures \ref{as_kle_vectors_confusion} and \ref{as_kle_vectors_helmholtz}). When the dimension of the neural network grows sufficiently large, the difficulties of neural network training begin to dominate the benefits of better approximation in larger reduced bases.
Similar trends are observed in Figure \ref{helmholtz_fixed_rank_as_kle_large}.
\begin{figure}[H]
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{{figures/kle_as_avg_accuracy_vs_data_g1.0d5.0nx64}.pdf}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{{figures/large_kle_as_avg_accuracy_vs_data_g1.0d5.0nx64}.pdf}
\end{subfigure}
\caption{Accuracy vs number of training data seen for AS and KLE networks as a function of rank for the coarse mesh Helmholtz problem.}
\label{helmholtz_fixed_rank_as_kle_small}
\end{figure}
\begin{figure}[H]
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{{figures/kle_as_avg_accuracy_vs_data_g1.0d5.0nx128}.pdf}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{{figures/large_kle_as_avg_accuracy_vs_data_g1.0d5.0nx128}.pdf}
\end{subfigure}
\caption{Accuracy vs number of training data seen for AS and KLE networks as a function of rank for the fine mesh Helmholtz problem.}
\label{helmholtz_fixed_rank_as_kle_large}
\end{figure}
In these numerical results we compared fixed ranks for the inputs and the outputs, in order to have a straightforward comparison between the KLE and AS networks. In general we recommend the AS-to-POD network, which uses explicit information about the parametric map to construct an informed subspace of the input. The decay of the AS and POD spectra alone can be used to decide the ranks of the input and output projectors, keeping in mind the projection error result from Proposition \ref{prop_input_output_error_bound} for this rank choice. This result is less useful when using the KLE spectral decay to choose a network structure, since this additionally requires knowledge of the Lipschitz constant $L$ for the mapping $m \mapsto q$. Further, the numerical results demonstrate that when the PDE mapping does not resemble the covariance operator, KLE does not perform much better than random projectors, while the AS projector performed well for both numerical experiments.
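For concreteness, one simple way to turn the spectral decay into a rank choice is to pick the smallest rank whose neglected eigenvalue mass falls below a tolerance. The sketch below assumes the AS and POD eigenvalues are available as arrays; the arrays and the tolerance here are illustrative, not the values used in our experiments.
\begin{verbatim}
import numpy as np

def pick_rank(eigenvalues, tol=1e-3):
    """Smallest r such that the sum of the trailing eigenvalues is below
    tol times the total; the neglected eigenvalues control the input (AS)
    and output (POD) projection errors."""
    lam = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]
    tail = np.cumsum(lam[::-1])[::-1]        # tail[r] = sum of lam[r:]
    ok = np.nonzero(tail <= tol * tail[0])[0]
    return int(ok[0]) if ok.size else lam.size

# illustrative use with synthetic spectra
as_eigs  = np.logspace(0, -6, 200)
pod_eigs = np.logspace(0, -8, 200)
print(pick_rank(as_eigs), pick_rank(pod_eigs))
\end{verbatim}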
\subsection{Software}
Code for the numerical experiments can be found in hIPPYflow\footnote{\url{https://github.com/hippylib/hippyflow}}, a Python library for parametric PDE-based dimension reduction and surrogate modeling built on hIPPYlib\footnote{\url{https://github.com/hippylib/hippylib}}, a Python library for adjoint-based inference problems \cite{VillaPetraGhattas2018}. Neural network training was handled by hessianlearn\footnote{\url{https://github.com/tomoleary/hessianlearn}}, a Python library for Hessian-based stochastic optimization in TensorFlow and Keras \cite{AbadiAgarwalBarhamEtAl2016,OLearyRoseberryAlgerGhattas2019,OLearyRoseberryAlgerGhattas2020}.
\section{Conclusions}
In this work we have presented a framework for the dimension-scalable
construction of derivative-informed projected neural network
surrogates for parametric maps governed by partial differential
equations.
The need for these surrogates arises in many-query problems for
expensive-to-solve PDEs characterized by high-dimensional parameters.
Such settings are challenging for surrogate construction, since the
expense of the PDE solves implies that a limited number of data can be
generated for training, far fewer than typical data-hungry surrogates
require in high dimensions.
In order to address these challenges, we present
projected neural networks that use derivative (Jacobian) information
for input dimension reduction and principal subspaces of the output to
construct parsimonious network architectures. We advocate the use
of Jacobian-informed active subspace (AS) projection for the
parameters, and proper orthogonal decomposition (POD) projection for
the outputs, and motivate the strategy with analysis.
In numerical experiments we compare this AS-to-POD strategy against a
similar strategy that instead uses Karhunen Lo\`{e}ve Expansion (KLE)
for the input dimension \cite{BhattacharyaHosseiniKovachki2020},
conventional full space dense networks, and similarly architected
projected neural networks using random projectors for the input and
output bases. The full space network performed poorly in the low data
regime, demonstrating the need for projected neural networks in high
dimensional surrogate problems with few training data. The KLE-to-POD
and AS-to-POD networks outperformed the random subspace projected
network, showing that it is important to choose basis vectors for the
input and output representation in the neural network that exploit the
structure of the parametric PDE-based map.
The KLE-to-POD strategy worked well for the
convection-reaction-diffusion PDE, but not as well in the Helmholtz
problem. The derivative-informed AS-to-POD strategy performed well in
both numerical examples, consistently outperforming all of the other
neural networks. This makes sense in light of the projection error
plots (Figure \ref{pod_eigs_and_errors_confusion} and Figure
\ref{pod_eigs_and_errors_helmholtz}); in the
convection-reaction-diffusion case the KLE basis was able to reduce
the error similar to AS, but in the Helmholtz case the KLE basis was
not able to reduce the projection error much. Figures
\ref{as_kle_vectors_confusion} and \ref{as_kle_vectors_helmholtz}
demonstrate that in both cases the AS eigenvectors were able to
resolve key localized information, in particular highly-oscillatory
smaller length scales for the Helmholtz problem, while the KLE
eigenvectors resembled typical elliptic PDE eigenfunctions as expected.
In future work we will employ these surrogates in many-query settings
such as Bayesian inference, optimal experimental design, and
stochastic optimization. The neural network architectures in this work
were kept simple in order to limit the sources of errors (more
complicated neural networks are harder to train). There is
considerable opportunity to tune the parametrizations of the
input--output projected networks to achieve higher performing
networks. Nevertheless, the present work demonstrates the viability of
the general approach.
\bibliographystyle{siamplain}
| {
"timestamp": "2020-12-01T02:53:13",
"yymm": "2011",
"arxiv_id": "2011.15110",
"language": "en",
"url": "https://arxiv.org/abs/2011.15110",
"abstract": "Many-query problems, arising from uncertainty quantification, Bayesian inversion, Bayesian optimal experimental design, and optimization under uncertainty-require numerous evaluations of a parameter-to-output map. These evaluations become prohibitive if this parametric map is high-dimensional and involves expensive solution of partial differential equations (PDEs). To tackle this challenge, we propose to construct surrogates for high-dimensional PDE-governed parametric maps in the form of projected neural networks that parsimoniously capture the geometry and intrinsic low-dimensionality of these maps. Specifically, we compute Jacobians of these PDE-based maps, and project the high-dimensional parameters onto a low-dimensional derivative-informed active subspace; we also project the possibly high-dimensional outputs onto their principal subspace. This exploits the fact that many high-dimensional PDE-governed parametric maps can be well-approximated in low-dimensional parameter and output subspace. We use the projection basis vectors in the active subspace as well as the principal output subspace to construct the weights for the first and last layers of the neural network, respectively. This frees us to train the weights in only the low-dimensional layers of the neural network. The architecture of the resulting neural network captures to first order, the low-dimensional structure and geometry of the parametric map. We demonstrate that the proposed projected neural network achieves greater generalization accuracy than a full neural network, especially in the limited training data regime afforded by expensive PDE-based parametric maps. Moreover, we show that the number of degrees of freedom of the inner layers of the projected network is independent of the parameter and output dimensions, and high accuracy can be achieved with weight dimension independent of the discretization dimension.",
"subjects": "Numerical Analysis (math.NA); Computational Engineering, Finance, and Science (cs.CE); Machine Learning (cs.LG)",
"title": "Derivative-Informed Projected Neural Networks for High-Dimensional Parametric Maps Governed by PDEs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9678992932829917,
"lm_q2_score": 0.731058584489497,
"lm_q1q2_score": 0.7075910872758484
} |
https://arxiv.org/abs/2011.00090 | Exact closed-form and asymptotic expressions for the electrostatic force between two conducting spheres | We present exact closed-form expressions and complete asymptotic expansions for the electrostatic force between two charged conducting spheres of arbitrary sizes. Using asymptotic expansions of the force we confirm that even like-charged spheres attract each other at sufficiently small separation unless their voltages/charges are the same as they would be at contact. We show that for sufficiently large size asymmetries, the repulsion between two spheres $\textit{increases}$ when they separate from contact if their voltages or their charges are held constant. Additionally, we show that in the constant voltage case, this like-voltage repulsion can be further increased and maximised though an optimal $\textit{lowering}$ of the voltage on the larger sphere at an optimal sphere separation. | \section*{Acknowledgments}
We thank the Mac Armour Fellowship for support.
\section{Introduction}\label{sec:introduction}
The interaction of two charged conducting spheres is a fundamental problem in electrostatics that has a history of more than one hundred years. Lord Kelvin~\cite{Thomson1872-jh}, Poisson, Kirchhoff, Maxwell~\cite{Maxwell1954-ka}, and Russell~\cite{Russell1927-lr} are among those who have worked on this problem. Due to its nontrivial nature, the problem continues to generate interest to this day~\cite{Lekner2011-pz,Lekner2012-pr,Lekner2012-ua,Kolikov2012-zq,Meyer2015-fd,Banerjee2017-xl,Lekner2016-df,Banerjee2019-vf}. The simple and ubiquitous geometry of spheres makes the analytical results relevant in a wide variety of applications where electrostatic forces play an important role, such as the interaction of raindrops in clouds~\cite{Harrison2020-xo}, dust and powder interactions~\cite{Feng2003-wu,Cordero2017-dk}, cellular and molecular interactions~\cite{Varadwaj2017-rb}, and atomic force microscopy~\cite{Hudlet1998-fe,Law2002-go}.
The calculation and analysis of the force become difficult as the two spheres approach each other due to charge polarisation on each sphere. In the classical solution using the method of images, for example, an ever-increasing number of images needs to be included for convergence, and the solution fails completely when the two spheres touch each other. In the near limit, asymptotic solutions provide an alternative series to calculate the force accurately with a short computation time.
However, in this asymptotic limit, the force expansions are known only to the first order in the sphere surface-to-surface separation~\cite{Lekner2012-pr}. Due to the difficulty of this analysis, only recently has it been shown by Lekner~\cite{Lekner2012-ua} that two like-charged spheres attract each other at sufficiently small separation unless they have the exact same charge ratio as they would at contact.
In this paper we present a complete asymptotic analysis of the force in the near limit to {\it all} orders in sphere separation. Additionally, we derive exact closed-form expressions for the force using the $q$-digamma function~\cite{Krattenthaler1996-ev}.
Our analysis confirms the attraction between like-charged spheres and provides new results on how the electrostatic force varies with size asymmetry. In particular, we find that at large size asymmetries the behaviour of the force is radically different from the behaviour at small asymmetries.
\begin{figure}
\centering
\scalebox{0.204}{\includegraphics{Figures/Figure_1}}
\caption{Two conducting spheres $1$ and $2$ are held at voltages $V_1$ and $V_2$ respectively. The permittivity of the surrounding medium is $\epsilon$. The spheres acquire charges $Q_1$ and $Q_2$ to maintain their given voltages in the presence of each other. These charges vary with separation. If the batteries are disconnected at some given charge, then the voltages of the two spheres vary with separation.}
\label{fig:TwoSpheres}
\end{figure}
The setup of the problem is shown in Fig.~\ref{fig:TwoSpheres}. Two spheres 1 and 2 with radii $R_1$ and $R_2$ have their centres separated by a distance $S$ and are given the voltages $V_1$ and $V_2$ respectively. The charges $Q_1$ and $Q_2$ acquired by the spheres are related to their given voltages through the matrix equation
\begin{align} \label{eq:Q=CV}
\begin{pmatrix}
Q_1 \\
Q_2
\end{pmatrix} \equiv
\begin{pmatrix}
C_{11} & C_{12} \\
C_{21} & C_{22}
\end{pmatrix}
\begin{pmatrix}
V_1 \\
V_2
\end{pmatrix} ,
\end{align}
where $C_{11}$ and $C_{22}$ are the self capacitances of the two spheres and $C_{12}=C_{21}$ is their mutual capacitance. These capacitance coefficients are geometric properties of the interaction and depend only on $R_1$, $R_2$, and $S$~\cite{Maxwell1954-ka,Smythe1968-rd}.
In a recent paper~\cite{Banerjee2019-vf} we presented exact closed-form and asymptotic solutions to all orders in sphere separation for these capacitance coefficients. In Sec.~\ref{sec:capacitance} we present a summary of these results along with the well known classical solutions since the knowledge of the capacitance coefficients is vital to the analysis presented in this paper.
The basic formalism for calculating the force between any two conductors (see Art. 93 in Ref.~\cite{Maxwell1954-ka} for example) starts by writing the overall electrostatic energy of the system as
\begin{align}
W_V
=&\frac{1}{2} \left[C_{11} V_1^2 + 2 C_{12} V_1 V_2+C_{22} V_2^2\right]
\end{align}
when the voltages of the two spheres are held constant. The mutual electrostatic force between the two spheres is related to the variation of this energy with distance
\begin{align} \label{eq:fvdefinition}
F_V=\frac{dW_V}{dS}=\frac{1}{2} \left[\frac{dC_{11}}{dS} V_1^2 + 2 \frac{dC_{12}}{dS} V_1 V_2+\frac{dC_{22}}{dS} V_2^2\right].
\end{align}
In Sec.~\ref{sec:capacitance_derivatives} we carry out the derivatives of all three forms of the capacitance coefficients discussed in Sec.~\ref{sec:capacitance}. These derivatives are the essential components required for calculation of the force.
In Sec.~\ref{sec:voltage} we calculate and plot this force at constant voltage for some typical sphere size ratios. For spheres with sufficiently large size asymmetries, we show that the repulsion at contact initially {\it increases} with distance. Additionally, we show that the maximum repulsion between two spheres occurs not when both spheres have the same voltage but when the voltage of the larger sphere is optimally {\it lower} and the spheres are at an optimal non-zero separation. Our analysis reveals numerical values for the size asymmetry thresholds above which these anomalous behaviours of the force are observed.
If the spheres have a constant amount of charge instead, then inverting the charge voltage relation gives
\begin{align}
\begin{pmatrix} V_1\\ V_2
\end{pmatrix}
\,=\,\begin{pmatrix}
C_{11} & C_{12} \\
C_{12} & C_{22}
\end{pmatrix}^{-1}
\begin{pmatrix}
Q_1\\
Q_2
\end{pmatrix}\,\equiv\,
\begin{pmatrix}
P_{11} & P_{12} \\
P_{12} & P_{22}
\end{pmatrix}
\begin{pmatrix}
Q_1\\
Q_2
\end{pmatrix},
\end{align}
where $P_{ij}$ are the coefficients of potential~\cite{Maxwell1954-ka}. The electrostatic energy of the system and the electrostatic force are given by (see Art. 93 in Ref.~\cite{Maxwell1954-ka})
\begin{align}
W_Q
=&\frac{1}{2} \left[P_{11} Q_1^2 + 2 P_{12} Q_1 Q_2+P_{22} Q_2^2\right],~~~~ F_Q =
-\frac{dW_Q}{dS}.
\end{align}
By analysing the coefficients $P_{ij}$, Lekner showed that two charged spheres attract each other at sufficiently small separation unless they have the exact same charge ratio as they would at contact. In this paper we confirm this result by the explicit calculation of the asymptotic force to all orders in sphere separation. In addition, we show that above a certain size asymmetry, similar to the constant voltage case, the repulsion between two charged spheres {\it increases} at first when they separate from contact even as their charges remain fixed.
\section{Capacitance coefficients} \label{sec:capacitance}
As discussed in Sec.~\ref{sec:introduction}, the key to understanding the force between the two spheres lies in examining the geometrical dependence of the capacitance coefficients. In this section we present a recap of the classical solutions~\cite{Maxwell1954-ka,Smythe1968-rd} and some recently published asymptotic and closed-form capacitance results~\cite{Banerjee2019-vf} that are required for accurate force calculations.
\subsection{Preliminary definitions}
In writing down expressions for the capacitance coefficients, it is convenient to write the following two geometric definitions:
\begin{equation}
s \equiv \frac{S}{{R_1}+{R_2}}
\end{equation}
is the dimensionless distance between the spheres. When the two spheres are about to touch, $s\rightarrow 1$. The dimensionless surface separation of the two spheres is $s-1$ in units of $R_1+R_2$.
In addition to $s$ we define a second parameter that measures the asymmetry in the radii of the two spheres
\begin{equation}
r \equiv \frac{{R_1}-{R_2}}{{R_1}+{R_2}}.
\end{equation}
For equal sized spheres, $r=0$, and when the second sphere is much smaller in size, $r\rightarrow 1$. Interchanging the two spheres takes $r \rightarrow -r$.
The quantities $s$ and $r$ are a complete description of the geometry of the problem. In terms of $s$ and $r$ it is convenient to define
\begin{align}
\label{eq:alpha}\alpha&= \left( \frac{\sqrt{s^2-r^2}- \sqrt{s^2-1}}{\sqrt{1-r^2}}\right)^{2},
\end{align}
which scales the entire distance range between the two spheres to the unit interval. When the two spheres are about to touch $\alpha \rightarrow 1$, and when the spheres move very far away $\alpha \rightarrow 0$.
The relation between distance variables $s$ and $\alpha$ in Eq.~(\ref{eq:alpha}) can be inverted to give
\begin{equation}\label{eq:s-of-alpha}
s=\frac{\sqrt{(1+\alpha)^2 -r^2(1-\alpha)^2}}{2 \sqrt{\alpha }}.
\end{equation}
To write the capacitances in compact form it is useful to define
\begin{align}
\lambda = \frac{(1-r^2)}{2s}\left(\frac{1-\alpha^2}{\alpha}\right)=\frac{2(1-r^2)\sinh \mu}{\sqrt{1 - r^2 \tanh^2 \mu}}
\end{align}
and
\begin{equation}
\frac{1}{2}-x =y-\frac{1}{2}=\frac{\tanh^{-1}(r\tanh \log\!\sqrt{\alpha}\hspace{0.1em})}{\log \alpha}=\frac{\tanh^{-1}(r\tanh \mu )}{2\mu},
\end{equation}
where $\mu = - \log \sqrt \alpha$ or we may write $\alpha = \exp(-2\mu)$. Similar to $s-1$, $\mu \rightarrow 0$ when the spheres are about to touch. Note that the closely related variable $u = -\log {\alpha}$ is used by Lekner to express the capacitance coefficients using the Abel-Plana formula~\cite{Lekner2011-pz}. Maxwell first defined this $u$ variable (see $\varpi$ in Chapter XI of Ref.~\cite{Maxwell1954-ka}) and Russell~\cite{Russell1927-lr} uses this variable in his papers as well.
The size asymmetry $r$, along with one distance variable $\alpha$, $\mu$, or $s$, completely specifies the geometry of the problem. For our numerical calculations we use $r$ and $\alpha$ as our main quantities, and express $\mu$, $s$, $\lambda$, $x$ and $y$ in terms of these two quantities. We choose $\alpha$ as the main distance variable in our numerical calculations because it is easily converted to $s$ or $\mu$ through the simple relations discussed above; this choice optimises the computation time for the force plots presented in this paper.
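A short Python sketch collecting these definitions may be useful for readers who wish to reproduce the plots; it takes $r$ and $\alpha$ as the primary inputs and returns $\mu$, $s$, $\lambda$, $x$ and $y$ (the function and variable names are ours, chosen only for illustration).
\begin{verbatim}
import numpy as np

def geometry(r, alpha):
    """Geometric quantities for two spheres, parametrised by (r, alpha)."""
    mu = -0.5 * np.log(alpha)                              # alpha = exp(-2 mu)
    s = np.sqrt((1 + alpha)**2 - r**2 * (1 - alpha)**2) / (2 * np.sqrt(alpha))
    lam = (1 - r**2) * (1 - alpha**2) / (2 * s * alpha)    # lambda
    x = 0.5 - np.arctanh(r * np.tanh(np.log(np.sqrt(alpha)))) / np.log(alpha)
    y = 1.0 - x
    return mu, s, lam, x, y

# equal spheres (r = 0) give x = y = 1/2 at any separation
print(geometry(0.0, 0.3))
\end{verbatim}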
To gain more insight into the interdependence of these quantities it is useful to look at their behaviour when the spheres are about to touch each other. For small $\mu$ we have
\begin{align}
s-1&= \frac{ (1 - r^2)}{2}\mu^2 + \frac{(1 + 2 r^2 - 3 r^4)}{24} \mu^4 + {\cal O}(\mu^6),\\
\lambda &= 2(1- r^2)\,\mu +\frac{(1+2 r^2-3 r^4)}{3}\mu ^3 + {\cal O}(\mu^5),
\end{align}
and
\begin{align}
\frac{1}{2}-x&=y-\frac{1}{2}=\frac{r}{2}-\frac{r (1-r^2)}{6} \mu ^2 +\frac{ r\, (2-5 r^2+3 r^4)}{30} \mu ^4+ {\cal O}(\mu^6).
\end{align}
For spheres of equal size, both $x$ and $y$ are equal to $1/2$ independent of the sphere separation. The variable $\lambda$ is the derivative of $2s$ with respect to $\mu$ (see Sec.~\ref{sec:capacitance_derivatives}).
\subsection{Capacitance expressions}
It is useful to define the dimensionless capacitance coefficients for the two spheres as $c_{ij} \equiv C_{ij}/2\pi \epsilon(R_1+R_2)$.
With the definitions above, the classical solutions using the method of images can be written in the following infinite Lambert series~\cite{Banerjee2017-wk} form
\begin{align} \label{eq:c11-Classical}
c_{11} &=\lambda\,\sum_{n=0}^\infty \frac{\alpha^{n+x}}{1-\alpha^{2n+2x}},~~~~~~
c_{22} =\lambda\,\sum_{n=0}^\infty \frac{\alpha^{n+y}}{1-\alpha^{2n+2y}},
\end{align}
and
\begin{align}
\label{eq:c12-Classical}
c_{12}
&=-\lambda \sum_{n=1}^\infty \frac{\alpha^n}{1-\alpha^{2n}}.
\end{align}
The series above (with minor differences) were first given by Kirchhoff~\cite{Maxwell1954-ka} and discussed in Ref.~\cite{Banerjee2019-vf} in their current form. We include terms through $n=10$ and limit the range to $\alpha \le 0.15$ with errors of order $10^{-10}$ or less for the calculations presented in this paper. Note that these series are defined for all $\alpha \neq 1$.
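A minimal numerical sketch of these truncated series is given below; it builds on the \texttt{geometry} helper sketched above, and the truncation $n=10$ follows the discussion in the text.
\begin{verbatim}
import numpy as np

def capacitance_classical(r, alpha, n_max=10):
    """Dimensionless c11, c22, c12 from the truncated Lambert series."""
    mu, s, lam, x, y = geometry(r, alpha)      # helper sketched above
    n = np.arange(0, n_max + 1)
    c11 = lam * np.sum(alpha**(n + x) / (1 - alpha**(2*n + 2*x)))
    c22 = lam * np.sum(alpha**(n + y) / (1 - alpha**(2*n + 2*y)))
    m = np.arange(1, n_max + 1)
    c12 = -lam * np.sum(alpha**m / (1 - alpha**(2*m)))
    return c11, c22, c12

print(capacitance_classical(1/3, 0.1))         # R1 = 2 R2, well-separated spheres
\end{verbatim}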
Using the sum of the Lambert series in terms of the $q$-digamma function~\cite{Krattenthaler1996-ev,Banerjee2017-wk} the capacitance coefficients can be written in closed-form as~\cite{Banerjee2019-vf}
\begin{align}\label{eq:c11closedform}
c_{11} &= \frac{\lambda}{4\mu}\, \left[{\psi_{\alpha^2}(x)-2\psi_{\alpha}(x)+\log \tfrac{1+\alpha}{1-\alpha}}\right],\\
\label{eq:c22closedform} c_{22} &= \frac{\lambda}{4\mu}\, \left[{\psi_{\alpha^2}(y)-2\psi_{\alpha}(y)+\log \tfrac{1+\alpha}{1-\alpha}}\right],
\end{align}
and
\begin{equation}\label{eq:c12closedform}
c_{12} = \frac{\lambda} {4\mu}\left[{\psi_{\alpha^2}(\tfrac{1}{2})+\log (1-\alpha^2)}\right],
\end{equation}
where $\psi_{\alpha}(x)$ is the $q$-digamma function of $x$ with $q=\alpha$. These solutions are valid for $0<\alpha< 1$. They are not defined at $\alpha=0$ or $\alpha=1$.
In their range of validity, the closed-form expressions above can be used for calculation of the coefficients by utilising the existing special function library of numerical software such as \textit{Mathematica}~\cite{Mathematica}. Built-in convergence tests in the software can optimise the number of terms required for accuracy in calculating the $q$-digamma function. We use the closed-form solutions for calculating relative errors when deciding on the number of terms and the range of use of the classical solutions above and the asymptotic solutions discussed below.
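If a $q$-digamma routine is not available, $\psi_q(x)$ can also be evaluated directly from its defining series for $0<q<1$; the sketch below does this for $c_{11}$ and compares against the classical series as a consistency check (the truncation length is an illustrative choice).
\begin{verbatim}
import numpy as np

def q_digamma(x, q, n_max=200):
    """psi_q(x) = -log(1-q) + log(q) * sum_{n>=0} q^(n+x)/(1-q^(n+x))."""
    n = np.arange(0, n_max + 1)
    return -np.log(1 - q) + np.log(q) * np.sum(q**(n + x) / (1 - q**(n + x)))

def c11_closed_form(r, alpha):
    mu, s, lam, x, y = geometry(r, alpha)      # helper sketched earlier
    return lam / (4 * mu) * (q_digamma(x, alpha**2) - 2 * q_digamma(x, alpha)
                             + np.log((1 + alpha) / (1 - alpha)))

# consistency check against the classical series at moderate separation
print(c11_closed_form(1/3, 0.1), capacitance_classical(1/3, 0.1)[0])
\end{verbatim}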
Both the classical and closed-form solutions show a slowdown in convergence when the spheres are near each other, i.e., $\mu \ll 1$. In the near regime ($\mu \lesssim 1$) the following asymptotic solutions provide fast and accurate computation~\cite{Banerjee2019-vf}:
\begin{align}\label{eq:AsymptoticC11}
c_{11} \approx
\frac{\lambda}{4\mu} &\, \Bigg[\log\frac{1}{\mu}-\psi(x)-\sum_{k=1}^{K}\frac{2^{4k-1} B_{2k} (x) B_{2k}\! \left(\frac{1}{2}\right)}{(2k)! \, k}\mu^{2k} -2\pi \sin (2\pi x) \, e^{-\frac{\pi^2}{\mu}}\Bigg],\\
c_{22} \approx
\frac{\lambda}{4\mu}&\, \Bigg[\log\frac{1}{\mu}-\psi(y)-\sum_{k=1}^{K}\frac{2^{4k-1} B_{2k} (y) B_{2k}\! \left(\frac{1}{2}\right)}{(2k)! \, k}\mu^{2k}-2\pi \sin (2\pi y) \, e^{-\frac{\pi^2}{\mu}}\Bigg],
\end{align}
and
\begin{align}\label{eq:AsymptoticC12}
c_{12}
&\approx -\frac{\lambda}{4\mu } \Bigg[\log\frac{1}{\mu}+\gamma-\sum_{k=1}^{K}\frac{2^{4k-1} B_{2k} B_{2k}\! \left(\frac{1}{2}\right)}{(2k)! \,k}\mu^{2k}\Bigg],
\end{align}
where $\psi=\psi_{\alpha=1}$ is the digamma function, $\gamma=0.5772...$ is the Euler's constant, and $B_{2k}$, $B_{2k}(x)$ are the $2k$\hspace{0.01in}th Bernoulli number and Bernoulli polynomial~\cite{Abramowitz1972-nf}.
Using the cut-off $K\simeq \pi^2/2\mu$ leads to optimally low errors in capacitances at any given $\mu$~\cite{Banerjee2019-vf}. Using $K=5$ is a practical cut-off for $\mu \le 0.35$ (or $\alpha \ge 0.5$) with errors of order $10^{-10}$ or less. Note that using too large a $K$ value makes the errors worse. For $\mu=0.35$, the optimal $K$ is $\pi^2/0.7 \simeq 14$. The $\exp{(-\pi^2/\mu)}$ term in $c_{11}, c_{22}$ matters only if the optimum $K$ is used. In this paper, with $K=5$ as cut-off, we omit that $\exp{(-\pi^2/\mu)}$ term as it is smaller than the error.
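A sketch of the asymptotic evaluation with the $K=5$ cut-off follows; the Bernoulli polynomials and the digamma function are taken from \texttt{mpmath}, the exponentially small term is dropped as discussed above, and the \texttt{geometry} helper is the one sketched earlier.
\begin{verbatim}
import numpy as np
from math import factorial
from mpmath import bernpoly, digamma, euler

def capacitance_asymptotic(r, alpha, K=5):
    """Near-limit c11, c22, c12 with cut-off K; exp(-pi^2/mu) term omitted."""
    mu, s, lam, x, y = geometry(r, alpha)
    B_half = [float(bernpoly(2*k, 0.5)) for k in range(1, K + 1)]

    def bracket(z):
        tail = sum(2.0**(4*k - 1) * float(bernpoly(2*k, z)) * B_half[k - 1]
                   / (factorial(2*k) * k) * mu**(2*k) for k in range(1, K + 1))
        return np.log(1.0 / mu) - float(digamma(z)) - tail

    c11 = lam / (4 * mu) * bracket(x)
    c22 = lam / (4 * mu) * bracket(y)
    tail12 = sum(2.0**(4*k - 1) * float(bernpoly(2*k, 0)) * B_half[k - 1]
                 / (factorial(2*k) * k) * mu**(2*k) for k in range(1, K + 1))
    c12 = -lam / (4 * mu) * (np.log(1.0 / mu) + float(euler) - tail12)
    return c11, c22, c12

print(capacitance_asymptotic(1/3, 0.7))        # mu ~ 0.18, inside the near regime
\end{verbatim}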
\section{Capacitance derivatives}
\label{sec:capacitance_derivatives}
The electrostatic force in Eq.~(\ref{eq:fvdefinition}) is a linear combination of the derivatives of the capacitance coefficients with respect to the sphere separation. In this section we calculate these derivatives of the capacitance coefficients discussed in Sec.~\ref{sec:capacitance}.
\subsection{Preliminary derivatives}
It is convenient to first calculate the derivatives of the main quantities used in expressing the capacitance coefficients. These derivatives allow us to express the derivatives of the capacitance coefficients in a more compact manner.
The variables $\alpha$ and $\mu$ are mathematical measures of the distance between the spheres. Their derivatives with respect to $s$ are
\begin{align}
\frac{d\alpha}{ds} =-\frac{4\alpha}{\lambda} ~~~\text{and}~~~\frac{d\mu}{ds}= \frac{2}{\lambda}.
\end{align}
The capacitance coefficients in the previous section are expressed in terms of $\mu$ and $\alpha$. The derivatives above allow us to relate the $\mu$ and $\alpha$ derivatives of the capacitance to the derivatives with respect to $s$.
The derivative of $\lambda$ with respect to $s$ is
\begin{align}
\frac{d\lambda}{ds}=4 \left(\frac{1+\alpha^2}{1-\alpha^2}\right)-\frac{\lambda}{ s}=4 \coth 2\mu-\frac{\lambda}{ s}.
\end{align}
The variation of $x,y$ with respect to $\alpha$ and $\mu$ are
\begin{align}
x'(\alpha)&=-\frac{ (1-2x)}{4 \alpha \mu}+\frac{r\cosh [(1-2x) \, \mu ]^2}{(1+\alpha )^2 \mu },~~~~x'(\mu)=-2\alpha x'(\alpha),\\
y'(\alpha)&=-x'(\alpha),~~~~ y'(\mu)=-x'(\mu).
\end{align}
These derivatives of $x$ and $y$ go to zero as $\alpha \rightarrow 1$ or $\mu \rightarrow 0$.
\subsection{Classical capacitance derivatives}
Differentiating the classical result for capacitance $c_{11}$ in Eq.~(\ref{eq:c11-Classical}) with respect to $s$ gives
\begin{align}\label{eq:dc11classical}
\frac{dc_{11}}{ds}=&\frac{c_{11} }{\lambda} \frac{d\lambda}{ds}-4 \sum_{n=0}^\infty \frac{ \left(\alpha ^{3n+3x}+\alpha ^{n+x}\right) \left[n+x+\alpha x'(\alpha ) \log \alpha \right]}{\left(\alpha ^{2n+2x}-1\right)^2},
\end{align}
and for capacitance $c_{12}$ in Eq.~(\ref{eq:c12-Classical}) we get
\begin{align}\label{eq:dc12classical}
\frac{dc_{12}}{ds}
&=\frac{c_{12} }{\lambda} \frac{d\lambda}{ds}+4 \sum_{n=1}^\infty \frac{ \left(\alpha ^{3n}+\alpha^{n}\right) n}{\left(\alpha ^{2n}-1\right)^2}.
\end{align}
The derivative of $c_{22}$ can be obtained by replacing $x$ with $y$ and $c_{11}$ with $c_{22}$ in Eq.~(\ref{eq:dc11classical}). These derivatives of the classical results converge well when the spheres are relatively far away. Only terms through $n=10$ are needed for a numerical accuracy of order $10^{-8}$ for $\alpha \le 0.15$ (or $\mu \ge 0.95$). The errors are calculated using the closed-form derivatives discussed below.
\subsection{Closed-form capacitance derivatives}
Differentiating the closed-form expression for $c_{11}$ in Eq.~(\ref{eq:c11closedform}) with respect to $s$ we have
\begin{align}\label{eq:dc11closedform}\nonumber
\frac{dc_{11}}{ds} &= -\frac{2\alpha} {\mu} \left[\alpha\, \psi_{\alpha^2}^{(1,0)}(x)-\psi_{\alpha}^{(1,0)}(x)+ \frac{1}{1-\alpha^2} \right]-c_{11}\left(\frac{2}{\lambda\mu}+\frac{1}{s }-\frac{4\,\coth 2 \mu}{\lambda}\right)\\
&~~~~-\frac{\alpha x'(\alpha)} {\mu}\left[ \psi_{\alpha^2}^{(0,1)}(x)-2\psi_{\alpha}^{(0,1)}(x)\right],
\end{align}
where the superscript $(0,1)$ indicates the derivative with respect to $x$ and $(1,0)$ indicates the derivative with respect to $\alpha$ or $\alpha^2$ depending on the case.
Similarly,
\begin{align}
\frac{dc_{12}}{ds} &=-\frac{2\alpha^2}{\mu}\left[ \psi_{\alpha^2}^{(1,0)}\left(\tfrac{1}{2}\right)- \frac{1}{1-\alpha^2} \right]-c_{12}\left(\frac{2}{\lambda\mu}+\frac{1}{s }-\frac{4\,\coth 2 \mu}{\lambda}\right).
\end{align}
The derivative of $c_{22}$ can be obtained by replacing $x$ with $y$ and $c_{11}$ with $c_{22}$ in Eq.~(\ref{eq:dc11closedform}). These derivatives are defined at all points except $\alpha=0$ and $\alpha=1$.
Note that the $\alpha$ derivative of $\psi_\alpha(x)$ is numerically unstable in {\it Mathematica}~\cite{Mathematica} for some parts of the range $0<\alpha < 1$. To avoid inaccuracies in calculating the capacitance derivatives above, we use the inversion symmetry~\cite{Krattenthaler1996-ev}
\begin{align}\label{eq:psi_inv}
\psi_\alpha(x) &= \left(x-\tfrac{3}{2}\right) \log \alpha + \psi_{1/\alpha}(x)
\end{align}
as an alternate definition for $\psi_\alpha(x)$ within the software.
In principle, these closed-form expressions for the capacitance derivatives can be used to calculate the electrostatic force between the spheres for all $0<\alpha < 1$. However, derivative calculations of the $q$-digamma function encounter numerical errors near $\alpha=0$ and convergence problems near $\alpha =1$. Therefore, for fast and accurate force calculations, we use these derivatives only in the mid-distance range, i.e., $0.15 \leq \alpha \leq 0.5$, which corresponds to $0.95 \ge \mu \ge 0.35$.
\subsection{Asymptotic capacitance derivatives}
Differentiating the asymptotic expression for $c_{11}$ in Eq.~(\ref{eq:AsymptoticC11}) gives
\begin{align}\label{eq:dc11asymptotic} \nonumber
&\frac{dc_{11}}{ds}
\approx -\frac{1}{2\mu^2}+\frac{c_{11}}{\lambda}\left(\frac{{d\lambda}}{ds}-\frac{2}{\mu} \right)
-\frac{x'(\mu)}{2\mu }\, \Bigg[ \psi'(x)+\sum_{k=1}^{K}\frac{2^{4k} B_{2k-1} (x) B_{2k}\! \left(\frac{1}{2}\right)}{(2k)!}\mu^{2k}\Bigg]\\
-&\sum_{k=1}^{K}\frac{2^{4k-1} B_{2k} (x) B_{2k}\! \left(\frac{1}{2}\right)}{(2k)!}\mu^{2k-2}-\frac{\pi^2 e^{-\frac{\pi^2}{\mu}}}{\mu^3} \left[ \pi \sin (2\pi x) +2\mu^2 \cos (2 \pi x)\, x'(\mu)\right],
\end{align}
and differentiating asymptotic $c_{12}$ in Eq.~(\ref{eq:AsymptoticC12}) we get
\begin{align}\label{eq:dc12asymptotic}
\frac{dc_{12}}{ds}
\approx&\frac{1}{2\mu^2}+\frac{c_{12}}{\lambda}\left(\frac{{d\lambda}}{ds}-\frac{2}{\mu} \right) +\sum_{k=1}^{K}\frac{2^{4k-1} B_{2k} B_{2k}\! \left(\frac{1}{2}\right)}{(2k)!}\mu^{2k-2}.\end{align}
The asymptotic derivative of $c_{22}$ is obtained by replacing $x$ with $y$ and $c_{11}$ with $c_{22}$ in Eq.~(\ref{eq:dc11asymptotic}).
The asymptotic derivatives are useful for fast and accurate calculations in the near range ($\mu \lesssim 1$) provided the cut-off $K$ is carefully chosen~\cite{Banerjee2019-vf}. For practical considerations we choose $K=5$, which gives the derivatives accurately to within order $10^{-8}$ for $\mu \le 0.35$ (or $\alpha \ge 0.5$). The $\exp(-\pi^2/\mu)$ term in Eq.~(\ref{eq:dc11asymptotic}) is omitted in this approximation as discussed in Sec.~\ref{sec:capacitance}.
\section{Energy and force at constant voltage} \label{sec:voltage}
In this section we analyse the electrostatic force when both spheres are held at constant voltages. The force in this case is a linear combination of the capacitance derivatives discussed in Sec.~\ref{sec:capacitance_derivatives}. We combine the classical, closed-form, and asymptotic expressions to create an accurate description of the force at all separations between the spheres.
\subsection{Dimensionless energy and force}
\label{sec:FV}
We define a dimensionless form of the electrostatic energy as
\begin{align}
w_V\equiv \frac{W_V}{\pi \epsilon (R_1 +R_2) V_1^2}
=& \, c_{11}+ 2 c_{12} v +c_{22} {v}^{2},
\end{align}
where $v=V_2/V_1$ is the voltage of the second sphere relative to the first sphere. We assume that $V_1$ is the larger of the two voltages so that $-1 \le v \le 1$. Since one of the two spheres has to be at a non-zero voltage for the problem to be meaningful, normalising the energy using the larger voltage ensures that there is no division by zero.
We now define a dimensionless version of the force in Eq.~(\ref{eq:fvdefinition}) as
\begin{align}
f_V\equiv\frac{F_V}{\pi\epsilon V_1^2}
= \frac{dc_{11}}{ds} + 2v \frac{dc_{12}}{ds} +{v}^{2}\frac{dc_{22}}{ds} ,
\end{align}
for which the capacitance derivatives are calculated in Sec.~\ref{sec:capacitance_derivatives}.
As defined, the dimensionless force at constant voltage satisfies the following symmetry under the interchanging of the spheres and their voltages
\begin{align}\label{eq:fvsymmetry}
\frac{f_V (r,v)}{v} = \left[\frac{f_V (r,1/v)}{1/v}\right]_{r=-r}.
\end{align}
All possible voltage and size scenarios can be covered by analysing the full range of the size asymmetry ratio $-1 <r<1$. The equations are valid for $|v|>1$ as well but do not provide any extra information, as shown by the symmetry above.
For analysis presented in this paper, it is useful to explicitly calculate the force to order $\mu^2$ in the asymptotic limit so that the coefficients only depend upon the size asymmetry and the voltage ratio. Using the asymptotic derivatives in Sec. \ref{sec:capacitance_derivatives} we get
\begin{align} \nonumber
f_V &= -\frac{(1-v)^2}{2\mu^2} + \frac{(1-v)^2 (1+3r^2)}{6} \log \frac{1}{\mu}-\frac{(1-3r^2)(1+v^2)+4v}{36}\\ \nonumber
&-\frac{(1+3r^2)}{3} \phi_v(y_0)+\frac{(r-r^3)}{3} \phi_v^\prime(y_0)-(1-v)^2\frac{1+60r^2-45r^4}{90} \mu^2 \log \frac{1}{\mu}\\ \nonumber
&-\bigg[\frac{(33+330 r^2-515 r^4)(1+v^2)+8 (13+25 r^2)v}{3600}-\frac{1+60r^2-45r^4}{45}\phi_v(y_0)\\
&\hspace{0.18in}+\frac{(19r-70r^3+51r^5)}{90}\phi_v^\prime(y_0)+\frac{(r-r^3)^2}{18}\phi_v^{\prime\prime}(y_0)\bigg]\,\mu^2+{\cal O} \left(\mu^4\log \mu\right),
\label{eq:fvasymptotic}
\end{align}
where $\phi_v(x)\equiv [v^2\psi(x)+\psi(1\!-\!x)+2v\,\gamma]/2$ is a voltage weighted digamma function.
Note that the first term in Eq.~(\ref{eq:fvasymptotic}) is always attractive and independent of the size asymmetry. It agrees with Lekner's calculation of the force at constant voltage~\cite{Lekner2012-pr}. The second term is always repulsive, symmetric under $r \rightarrow -r$, and increases with size asymmetry. The ${\cal O}(1)$ term changes under $r \rightarrow -r$ due to the presence of $\phi_v^\prime$ which involves the trigamma function.
For the numerical calculations presented in this paper (see Fig.~\ref{fig:fvr1over3} for example) we use the closed-form expressions for the middle region $0.35\le \mu \le 0.95 $ which corresponds to $0.5 \ge \alpha \ge 0.15$. In the near region $\mu\le 0.35$ (or $\alpha \ge 0.5$) we use the asymptotic capacitance and derivatives with a cut-off of $K=5$, and in the far region $\mu \ge 0.95$ (or $\alpha \le 0.15$) we use the classical capacitances and their derivatives with terms through $n=10$.
If the $q$-digamma function is unavailable, then the asymptotic solutions (with $K=5$) can be used for $\mu \le 0.6$ ($\alpha \ge 0.3$) and the classical solutions (with $n=10$) thereafter, with relative errors of $10^{-5}$ or less. The regions of strong overlap among the three forms of solutions let us accurately calculate the force between the spheres at any distance with a high degree of confidence.
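As a cross-check of this piecewise strategy, the sketch below assembles $c_{ij}(s)$ from the regimes above and evaluates $f_V$ by a central finite difference in $s$. This is a simplification for illustration only (the plots in this paper use the analytic derivatives of Sec.~\ref{sec:capacitance_derivatives}), and for the mid range the sketch simply keeps more terms of the classical series instead of calling the $q$-digamma form.
\begin{verbatim}
import numpy as np

def capacitance(r, s):
    """c11, c22, c12 at centre distance s (in units of R1 + R2)."""
    alpha = ((np.sqrt(s**2 - r**2) - np.sqrt(s**2 - 1)) / np.sqrt(1 - r**2))**2
    if alpha >= 0.5:                               # near regime, mu <= 0.35
        return capacitance_asymptotic(r, alpha)    # sketched above
    n_max = 10 if alpha <= 0.15 else 60            # far / mid regime
    return capacitance_classical(r, alpha, n_max)  # sketched above

def f_V(r, v, s, h=1e-5):
    """Dimensionless constant-voltage force by central differencing of w_V."""
    def w(sv):
        c11, c22, c12 = capacitance(r, sv)
        return c11 + 2 * v * c12 + v**2 * c22
    return (w(s + h) - w(s - h)) / (2 * h)

print(f_V(1/3, 1.0, 1.05))      # equal voltages, R1 = 2 R2, small separation
\end{verbatim}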
\subsection{Force $f_V$ when $R_1>R_2$}
\label{sec:FvR1gtR2}
\begin{figure}
\centering
\scalebox{0.145}{\includegraphics{Figures/fv_r_equals_1over3.png}}
\caption{The dimensionless force at constant voltage, $f_V$, is plotted versus the relative surface separation, $s-1$, of the two spheres for $r=\frac{1}{3}$ (or when $R_1=2R_2$). The force is attractive (negative) at sufficiently small distances for all voltage ratios $v\ne 1$. The force is always repulsive (positive) when the two spheres have the exact same voltage, $v=1$. This repulsion at
equal voltages decreases monotonically with increasing sphere separation.}
\label{fig:fvr1over3}
\end{figure}
We first analyse the case where the first sphere, with the higher voltage, is larger in size as well. In Fig.~\ref{fig:fvr1over3} we plot the force between the two spheres for $r=\frac{1}{3}$, which corresponds to the case where $R_1 = 2 R_2$. As expected, the force is attractive at sufficiently small sphere separation when the two spheres are not at the exact same voltage. The force is repulsive at all distances only when the two spheres have the exact same voltage, i.e., $v=1$. This attraction at sufficiently small separation when $v\ne 1$ is understood mathematically from Eq.~(\ref{eq:fvasymptotic}): the leading term is negative and eventually dominates all the other terms. The behaviour of the force versus separation in Fig.~\ref{fig:fvr1over3} is qualitatively similar to that of equal-sized spheres held at constant voltages~\cite{Banerjee2017-xl}.
To qualitatively understand the attraction between spheres with like but unequal voltages, consider the shortest field line (along the two centres) that connects the surfaces of the two spheres. Such a field line must exist when the second sphere is at a lower voltage. At the end of this field line there must be negative charge even though the second sphere has a positive voltage. The distance between this negative charge and the positive charge at the beginning of the field line goes to zero as the two spheres are about to touch. This attractive force eventually overcomes any repulsion between the spheres. There is no such field line only when the two spheres are at the exact same voltage. In this case the force is finite even at zero separation because the like charges move away from each other.
In Fig.~\ref{fig:fvr9over11} we plot the force between the two spheres for $r=\frac{9}{11}$ which corresponds to the highly asymmetric case where $R_1 = 10 R_2$. Even in this case the force is attractive at sufficiently small sphere separation when $v \ne 1$ due to the same reasons discussed above. The force is repulsive at all distances only when $v=1$. This repulsive force, however, shows non-monotonic behaviour versus sphere separation. The repulsion initially {\it increases} as the spheres move away from each other at contact and reaches a maximum of $1.65$ times the value at contact at $s-1=0.353$ before eventually decreasing.
We may qualitatively understand this increase in the repulsion as the spheres move away from contact by noting that, when the separation increases to $s-1=0.353$, the smaller sphere gains $2.45$ times its charge at contact while the charge on the larger sphere decreases only to $98.9\%$ of its original value. The product of the two charges therefore increases by a factor of $2.42$. However, since the separation between the charges increases as well, the force increases only by a factor of $1.65$. Note that the first two leading order terms in Eq.~(\ref{eq:fvasymptotic}) go to zero at $v=1$. We analyse this anomalous repulsion in more detail again in Sec.~\ref{sec:normalizedrepulsion}.
\begin{figure}
\centering
\scalebox{0.145}{\includegraphics{Figures/fv_r_equals_9over11.png}}
\caption{The dimensionless force at constant voltage, $f_V$, is plotted versus the relative sphere surface separation, $s-1$, for $r=\frac{9}{11}$ (when $R_1=10 R_2$). Similar to the $r=\frac{1}{3}$ case, the force is attractive at sufficiently small distances for all voltage ratios $v\ne 1$ and repulsive at all distances when $v=1$. However, unlike the $r=\frac{1}{3}$ case, here the repulsion at $v=1$ {\it increases} with separation for small surface separations.}
\label{fig:fvr9over11}
\end{figure}
\subsection{Force $f_V$ when $R_1<R_2$}
We now examine the case where sphere 1 with the higher voltage has a smaller radius than sphere 2. In Fig.~\ref{fig:fvrnegative1over3} we plot the force between the two spheres for $r=-\frac{1}{3}$ which corresponds to the case when $R_1 = \frac{1}{2}R_2$. The force is attractive at sufficiently small sphere separation when the two spheres are not at the exact same voltage. The force is repulsive at all distances only when the two spheres have the exact same voltage, i.e., $v=1$. The polarisation effect, which causes the attraction at small separation, is weaker in this case than for $r=\frac{1}{3}$. This makes sense if we note that the smaller sphere at lower voltage loses charge rapidly with decreasing distance to maintain its given voltage in the presence of the larger sphere. In comparison, the charge on the lower voltage larger sphere does not change much in the vicinity of the smaller sphere at a higher voltage.
\begin{figure}
\centering
\scalebox{0.145}{\includegraphics{Figures/fv_r_equals_1over3_and_negative_1over3.png}}
\caption{The dimensionless force, $f_V$, is plotted versus the relative surface separation, $s-1$, for $r=-\frac{1}{3}$ (solid lines), i.e., when $R_1 = \frac{1}{2}R_2$. The behaviour is similar to the $r=\frac{1}{3}$ case (dotted lines), except that the polarisation effect is weaker for the solid lines. This effect can be seen by comparing the switch to attraction which happens at comparatively smaller separation for the same voltage ratio.}
\label{fig:fvrnegative1over3}
\end{figure}
In Fig.~\ref{fig:fvrnegative9over11} we plot the force between the spheres for $r=-\frac{9}{11}$, which corresponds to the case where $R_1 =\frac{1}{10} R_2$. The force shows similar behaviour to the $r=\frac{9}{11}$ case in Fig.~\ref{fig:fvr9over11}. The $v=1$ curves are identical by definition since, when the two spheres have equal voltage, either one can be chosen as sphere 1. However, unlike the $r=\frac{9}{11}$ case, at certain distances it is possible to increase the repulsive force between the two spheres by {\it decreasing} the voltage of the larger sphere! The value $v=0.867...$ is calculated numerically to maximise the relative vertical jump from the $v=1$ curve over all voltages and separations.
How can a sphere at a lower voltage exert a greater repulsion? Calculations show that decreasing the voltage on the larger sphere to $86.7\%$ at $s-1=0.112$ reduces its charge by $15\%$, but the smaller sphere acquires $61\%$ more charge to maintain its voltage. The product of the two charges thus increases by $37\%$. However, we hypothesise that the extra charge on the smaller sphere pushes the charge on the larger sphere further away, so the effective distance between the two charge distributions increases. Additionally, there is now some negative charge on the larger sphere as well, which causes some attraction. Thus, there is an overall increase of $18\%$ in the repulsive force.
\begin{figure}
\centering
\scalebox{0.145}{\includegraphics{Figures/fv_r_equals_negative_9over11.png}}
\caption{The dimensionless force, $f_V$, is plotted versus the relative surface separation, $s-1$, for $r=-\frac{9}{11}$ (i.e., $R_1 = \frac{1}{10}R_2$). The force plots are similar to the $r=\frac{9}{11}$ case, with one notable exception: for some separations, decreasing the voltage on the larger sphere can {\it increase} the repulsion. The ``optimal'' $v=0.867...$ maximises the relative vertical ``jump'' from the $v=1$ curve over all voltage ratios and sphere separations (see Table \ref{table:forceinversion} for details).}
\label{fig:fvrnegative9over11}
\end{figure}
In Table~\ref{table:forceinversion} we list such force anomalies for several different sizes of the two spheres. For every $r$ value we numerically calculate the optimum lowering of the voltage from $v=1$ and the optimum separation, $s-1$, that causes the maximum relative increase in the force. Note that when the two spheres are not at equal voltage, there are some field lines from sphere 1 that end up on sphere 2 and cause some attraction between charges of opposite polarity. Even then, the lower voltage case has overall more repulsion than the equal voltage case.
\renewcommand{\arraystretch}{1.4}
\begin{table}
\centering
\begin{tabular}{||c|c|c| c|c|c|c||}
\hline
$\hspace{0.1in}r \hspace{0.1in}$ & \!\!$R_1:R_2$\!\! & \!optimal $v$\! & $s\!-\!1$ & \!\!$f_V(v \!=\! v_{opt})$\!\! & \!$f_V(v \!=\! 1)$\! & \!\!$f_V(v_{opt})/f_V(1)$\!\! \\ [0.5ex]
\hline\hline
--\,$\nicefrac{1}{2}$ & $1:3$ & 0.978 & 0.0790 & 0.175 & 0.174 & 1.0056\\
\hdashline[4pt / 1pt]
--\,$\nicefrac{3}{5}$ & $1:4$ & 0.951& 0.103 & 0.138 & 0.135 & 1.022\\
\hline
--\,$\nicefrac{2}{3}$ & $1:5$ & 0.929 & 0.112 & 0.113 & 0.108 & 1.045\\
\hdashline[4pt / 1pt]
--\,$\nicefrac{5}{7}$ & $1:6$ & 0.912 & 0.116 & 0.0961 & 0.0896 & 1.072\\
\hline
--\,$\nicefrac{3}{4}$ & $1:7$ & 0.897 & 0.116 & 0.0831 & 0.0756 & 1.099\\
\hdashline[4pt / 1pt]
--\,$\nicefrac{7}{9}$ & $1:8$ & 0.885 & 0.115 & 0.0731 & 0.0649 & 1.128\\
\hline
--\,$\nicefrac{4}{5}$ & $1:9$ & 0.875 & 0.114 & 0.0653 & 0.0565 & 1.156\\
\hdashline[4pt / 1pt]
--\,$\nicefrac{9}{11}$ & $1:10$ & 0.867 & 0.112 & 0.0589 & 0.0497 & 1.184\\
[0.5ex]
\hline
\end{tabular}
\caption{The data shows that the repulsive force is {\it greater} when the larger sphere is at an optimal {\it lower} voltage $f_V(v\!=\!v_{opt})$ than when both spheres are at the same voltage, $f_V(v\!=\!1)$. Also listed is the optimal separation at which both forces are calculated. See Fig. \ref{fig:fvrnegative9over11} for example. The voltage and separation are optimised for achieving the largest ratio (last column) of the two forces. }
\label{table:forceinversion}
\end{table}
To calculate the minimum size asymmetry where lowering the voltage can increase the force, let the voltage ratio be $v = 1 - \varepsilon$, where $\varepsilon \ll 1$. Substituting this voltage ratio in the expression for the force we get
\begin{equation}
\begin{aligned}
f_V&=\frac{d c_{11}}{ds} +2(1 - \varepsilon)\frac{d c_{12}}{ds}+ (1-\varepsilon)^2 \frac{d c_{22}}{ds} \\
&=f_V(v\!=\!1)-2\varepsilon \left( \frac{d c_{12}}{ds} +\frac{d c_{22}}{ds}\right)+{\cal O}(\varepsilon^2).
\end{aligned}
\end{equation}
So the criterion for the force to increase when the voltage is reduced is that the coefficient of $\varepsilon$ has to be positive at some separation.
The conditions for the critical case are given by
\begin{align}
\frac{d }{ds}(c_{12}+c_{22})=0~~~\text{and}~~~\frac{d^2 }{ds^2}(c_{12}+c_{22})=0.
\end{align}
Solving for the two conditions numerically gives
$r = -0.3226...$ and $s-1 \rightarrow 0$. Thus, the critical case is when two spheres with $R_1 \simeq \frac{1}{2} R_2$ are about to touch each other. Note that the data in Table~\ref{table:forceinversion} alludes to this critical point as well.
Analysing the capacitance coefficients near $\mu=0$ yields
\begin{align}
\frac{c_{12}+c_{22}}{1-r^2} =& -\frac{1}{2} \left[\gamma + \psi \left(y_0\right)\right] +\frac{\mu^2}{24}\times \\ \nonumber
&\left[-1+r^2- (2+6r^2)[\gamma + \psi(y_0)]+2r(1-r^2)\psi^{\prime} (y_0)\right]+{\cal{O} }(\mu^4),
\end{align}
where $y_0 = (1+r)/2$ is the value of $y$ as $\mu\rightarrow 0$. Plotting the coefficient of $\mu^2$ shows that it is negative for $r < -0.3226...$ which corresponds to the size ratio $R_1 \simeq \frac{1}{2} R_2$. That is, the repulsion can be higher at lower voltage if the first sphere is smaller than about half the size of the second sphere. Note that $\mu^2 \sim (s-1)$ in this limit.
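This threshold is straightforward to reproduce by locating the sign change of the $\mu^2$ coefficient above; in the sketch below the digamma and trigamma functions are taken from \texttt{scipy}, and the bracketing interval is an illustrative choice.
\begin{verbatim}
import numpy as np
from scipy.special import digamma, polygamma
from scipy.optimize import brentq

def mu2_coefficient(r):
    """Coefficient of mu^2 in (c12 + c22)/(1 - r^2) near contact."""
    y0 = (1 + r) / 2
    g = np.euler_gamma + digamma(y0)
    return -1 + r**2 - (2 + 6*r**2) * g + 2*r*(1 - r**2) * polygamma(1, y0)

r_crit = brentq(mu2_coefficient, -0.6, -0.1)
print(r_crit)      # approximately -0.3226, i.e. R1 roughly half of R2
\end{verbatim}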
\section{Energy and force at constant charge}\label{sec:charge}
In this section we analyse the electrostatic force when the charges on the spheres are held constant. In addition to the standard definition discussed in Sec.~\ref{sec:introduction}, we develop another alternative formulation for the force which allows us to use our results from the constant voltage case. We combine the classical, closed-form, and asymptotic expressions to create an accurate description of the force at all separations between the spheres.
\subsection{Dimensionless formulation}
We define a dimensionless form of the electrostatic energy as
\begin{equation}
w_Q \equiv \frac{4\pi\epsilon(R_1+R_2) W_Q}{q_0 Q_1^2}= \frac{1}{q_0} \left(p_{11} + 2 p_{12} q + p_{22} q^2\right),
\end{equation}
where $p_{ij}\equiv 2\pi\epsilon (R_1+R_2)P_{ij}$ are dimensionless, $q=Q_2/Q_1$ is the charge ratio between the spheres, and
\begin{align}
q_{0}
&=\frac{\gamma+\psi(y_0 )}{\gamma+\psi(x_0)}=\frac{\phi_{1}(y_0)-\frac{\pi}{2}\cot\pi y_0}{\phi_{1}(y_0)+\frac{\pi}{2}\cot\pi y_0}
\end{align}
is the charge ratio at contact with $\phi_{1}=\phi_{v=1}$. In the ratio $q_0$ above, $x_0 = (1-r)/2$ is the value of $x$ as $\mu\rightarrow 0$. Note that $y_0=1-x_0$.
This normalisation of $W_Q$ ensures that the dimensionless energy and force at contact are the same regardless of which sphere is designated as $1$. Sphere $1$ is chosen such that $q_0 \, |Q_1|\ge |Q_2|$ to ensure that $|V_1|\ge |V_2|$ near contact. Additionally, sphere $1$ is chosen to be of positive polarity without loss of generality since the energy and the force remain the same if we switch polarities of both spheres.
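For reference, the contact charge ratio is simple to evaluate from its digamma form; the size ratios chosen in the short check below are the ones plotted later in this section.
\begin{verbatim}
import numpy as np
from scipy.special import digamma

def q_contact(r):
    """Contact charge ratio q0 = Q2/Q1 for size asymmetry r."""
    x0, y0 = (1 - r) / 2, (1 + r) / 2
    return (np.euler_gamma + digamma(y0)) / (np.euler_gamma + digamma(x0))

print(q_contact(1/3), q_contact(9/11))     # R1 = 2 R2 and R1 = 10 R2
\end{verbatim}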
We can now calculate the dimensionless force at constant charge as
\begin{align}\label{eq:fq} \nonumber
f_Q&\equiv\frac{4\pi\epsilon(R_1+R_2)^2}{q_0 Q_1^2} F_Q =
-\frac{4\pi\epsilon(R_1+R_2)^2}{q_0 Q_1^2}\frac{dW_Q}{dS}\\
&=-\frac{dw_Q}{ds}= -\frac{1}{q_0}\left(\frac{dp_{11}}{ds} + 2 q \frac{dp_{12}}{ds} +q^2 \frac{dp_{22}}{ds}\right).
\end{align}
As defined, the dimensionless force at constant charge satisfies the symmetry
\begin{align}
f_Q (r,q_0) = \left[f_Q (r,q_0)\right]_{r=-r}
\end{align}
at $q=q_0$ and more generally for any $q$,
\begin{align}\label{eq:fqsymmetry}
\frac{q_0 f_Q (r,q)}{q} = \left[ \frac{q_0 f_Q (r,1/q)}{1/q}\right]_{r=-r}.
\end{align}
All possible charge and size scenarios are covered by analysing the parameter space $-q_0\le q \le q_0$ along with $-1<r<1$. Although we restrict ourselves to $|q|\le q_0$, the force expressions hold for $|q|>q_0$ as well. However, those cases do not provide any information beyond that already discussed in the paper. Any $|q|>q_0$ is equivalent to a case with ratio $q\rightarrow 1/q$ (and thus $|q|<q_0$) and $r \rightarrow -r$ in our parameter space
as shown by the symmetry above.
\subsection{Alternate formulation of force at fixed charge}
The derivatives of the coefficients of potential $p_{ij}$ are nontrivial to calculate and simplify. Here, we rewrite $f_Q$ in terms of the force at constant voltage discussed earlier in Sec.~\ref{sec:FV} in the manner below.
Imagine disconnecting the batteries at a certain distance $s$. At this point the spheres now are at fixed charge. But the force before and after disconnecting the batteries should not change at the same $s$ value since the charge on the sphere stays the same. Setting $F_V=F_Q$ and replacing $V_1$ in terms of its corresponding charge values gives
\begin{align}
f_V=\frac{F_V}{\pi\epsilon V_1^2}
= \frac{F_Q}{\pi\epsilon (P_{11} Q_1 + P_{12}Q_2)^2}
=\frac{4\pi \epsilon (R_1+R_2)^2 F_Q}{ Q_1^2 (p_{11} + p_{12}q)^2}.
\end{align}
Therefore
\begin{equation}\label{eq:fq-of-fv}
\begin{aligned}
\frac{4\pi \epsilon (R_1+R_2)^2 F_Q}{q_0 Q_1^2 }=
f_Q=& \frac{(p_{11} + p_{12}q )^2}{q_0} f_V.
\end{aligned}
\end{equation}
Note that $f_V$ is in terms of $v$ which should be replaced by its equivalent in terms of $q$,
\begin{equation}
v = \frac{V_2}{V_1} = \frac{ p_{22}q + p_{12}}{p_{11} + p_{12}q} = \frac{c_{11}\,q-c_{12} }{c_{22} - c_{12}\, q},
\end{equation}
since the charges are now fixed whereas the voltage ratio $v$ varies with distance. After expressing all the $p_{ij}$ in terms of $c_{ij}$ and comparing the coefficients of the derivatives of $c_{ij}$, it is easily verified that the $f_Q$ expressions in Eqs.~(\ref{eq:fq})~and~(\ref{eq:fq-of-fv}) are equal to each other.
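This conversion translates directly into a few lines of code, building on the \texttt{capacitance}, \texttt{f\_V} and \texttt{q\_contact} sketches above; in the dimensionless variables the matrix of the $p_{ij}$ is simply the inverse of the matrix of the $c_{ij}$.
\begin{verbatim}
import numpy as np

def f_Q(r, q, s):
    """Dimensionless constant-charge force via the f_V conversion."""
    c11, c22, c12 = capacitance(r, s)
    p = np.linalg.inv(np.array([[c11, c12], [c12, c22]]))   # p_ij coefficients
    v = (p[1, 1]*q + p[0, 1]) / (p[0, 0] + p[0, 1]*q)       # voltage ratio at s
    return (p[0, 0] + p[0, 1]*q)**2 * f_V(r, v, s) / q_contact(r)

print(f_Q(1/3, q_contact(1/3), 1.05))   # charges held at their contact ratio
\end{verbatim}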
To understand the plots presented below it is useful to expand this force at constant charge in the limit $\mu \rightarrow 0$ as
\begin{align}\label{eq:fqmu}
f_Q &= - \frac{2\left(1-\frac{q}{q_0}\right)^2 (1-\xi^2)}{(1-r^2)^2 \mu^2 \left[2\log\frac{1}{\mu}-\phi_{-1}(y_0)+\xi^2 \phi_1(y_0)\right]^2}\\ \nonumber
&+\frac{\left[\left(1\!+\!\frac{q}{q_0}\right)+\xi \left(1\!-\!\frac{q}{q_0}\right)\right]^2 \left[(1\!-\!r^2) (2 r \phi_1'(y_0)\!-\!1)-(2\!+\!6 r^2) \phi_1(y_0)\right]}{6 \left(1\!-\xi ^2\right) \left(1-r^2\right)^2 \phi_1(y_0)^2}
+ {\cal O} \left(\!\frac{1}{\log {\mu}}\!\right),
\end{align}
with $\xi= \pi \cot(\pi y_0)/2\phi_1(y_0)$ where $\phi_{\pm 1} = \phi_v$ at $v=\pm 1$. Note that the leading term in Eq.~(\ref{eq:fqmu}) is always negative.
Thus for $q\ne q_0$ this leading term always dominates the higher order terms and causes an overall attractive (negative) force for sufficiently small $\mu$, which agrees with Lekner's result~\cite{Lekner2012-pr}. The second term, dominant when $q=q_0$, is always repulsive and approaches a finite limit as $\mu \rightarrow 0$. All higher order terms go to zero in this limit.
\subsection{Calculations of force $f_Q$}
We now analyse the electrostatic force between the two spheres at constant charge using the expressions developed above. The sphere designated as 1 is the more ``positive'' of the two and does not develop any negative charge density even upon polarisation. The second sphere develops some negative charge density before contact (because of its lower voltage) even if it has an overall positive charge. To cover all possible charge and size scenarios, we examine both $r>0$ ($R_1>R_2$) and $r<0$ $(R_1 < R_2)$ cases since even the smaller sphere can be the more positive of the two.
\begin{figure}
\centering
\scalebox{0.145}{\includegraphics{Figures/fq_r_equals_1over3_and_negative_1over3.png}}
\caption{The dimensionless force, $f_Q$, is plotted versus the relative surface separation, $s-1$, for $r=-\frac{1}{3}$ (solid lines), i.e., $R_1=\frac{1}{2}R_2$. Also plotted is the $r=\frac{1}{3}$ case (dotted lines) for comparison. The force is attractive at sufficiently small distances for all charge ratios when $q < q_0$ and always repulsive only when the two spheres have the charge ratio at contact, $q=q_0$. The polarisation effect that causes attraction at small separation is stronger when the smaller sphere is the more ``positive'' of the two.}
\label{fig:fqr1over3}
\end{figure}
In Fig.~\ref{fig:fqr1over3} we plot the force when one sphere is twice the size of the other, i.e., $r=\pm \frac{1}{3}$. In both cases the spheres attract each other at sufficiently small separations unless their charge ratio is exactly the same as the charge ratio at contact, i.e., $q = q_0$. This attraction due to charge polarisation is stronger for $r=-\frac{1}{3}$ (solid lines), where the smaller sphere is the more positive of the two.
The behaviour of the force versus separation in Fig.~\ref{fig:fqr1over3} is qualitatively similar to that of equal-sized spheres with fixed charges~\cite{Banerjee2017-xl}.
The ``perfect'' charge ratio, $q_0$, ensures that both spheres are at the same voltage right before contact and that there are no electric field lines between them. Thus, there is no negative charge density on either sphere that could cause any attraction. When one of the spheres has less charge than this ideal ratio, there is at least one field line along the line joining the two, running from the sphere with more than its ``perfect'' charge to the sphere with less. The opposite ends of this field line carry opposite charge densities. When regions of opposite charge density get sufficiently close before contact, their attraction overcomes the repulsion between the like charges and causes an overall attraction between the spheres.
\begin{figure}
\centering
\scalebox{0.145}{\includegraphics{Figures/fq_r_equals_9over11_and_negative_9over11.png}}
\caption{The dimensionless force, $f_Q$, is plotted versus the relative surface separation, $s-1$, for $r=\frac{9}{11}$ (dotted lines) and $r=-\frac{9}{11}$ (solid lines). The force is attractive at sufficiently small distances for all charge ratios $q < q_0$ and repulsive at all distances only when the two spheres have $q=q_0$, the charge ratio at contact. This plot is similar to that in Fig.~\ref{fig:fqr1over3} with one exception: the repulsion at $q=q_0$ {\it increases} as the two spheres separate from contact before eventually decreasing.}
\label{fig:fqr9over11}
\end{figure}
In Fig.~\ref{fig:fqr9over11} we plot the force when one sphere is ten times the size of the other, i.e., $r=\pm \frac{9}{11}$. In both cases the spheres attract each other at sufficiently small separations unless their charge ratio is exactly the same as the charge ratio at contact, $q = q_0$. The features of this plot are similar to those for the $r=\pm \frac{1}{3}$ case in Fig.~\ref{fig:fqr1over3}, with one exception: as for $v=1$ in the constant voltage case, the repulsive force at $q = q_0$ is not maximal when the two spheres are in contact and {\it increases} as the two spheres separate.
To understand how two spheres can repel more at a finite separation than when at contact, we hypothesise that the charge on the larger sphere moves back towards the point of contact when the two spheres separate. This charge rearrangement must be the main mechanism that causes a stronger horizontal push along the axis of symmetry since neither sphere gains any charge.
Comparing the force plots for constant charge with the force plots at constant voltage in Sec.~\ref{sec:voltage}, we see that the polarisation effects in the constant charge and constant voltage cases act in opposite directions; that is, the dashed curves lie on opposite sides of the solid curves where the force switches to attraction. This contrasting behaviour is due to the difference in the main mechanism of charge polarisation in the two cases.
In the constant voltage case the main mechanism that drives the force characteristics is the gaining or losing of charge by the smaller sphere. This mechanism is stronger when the larger sphere has the higher voltage. In comparison, the main mechanism of polarisation in the constant charge case is the redistribution of charge on the larger sphere. This charge redistribution is stronger when the smaller sphere has more than its fair share of the charge and is thus able to cause more charge separation on the larger sphere.
Another contrast between the constant charge and the constant voltage cases is the value of the separation at which the repulsion switches to attraction. This separation is much smaller in the constant charge case for similar numerical values of the voltage and charge ratios. Expanding the variable voltage ratio, $v$, at a fixed charge ratio $q$, we find
\begin{align}
v= 1-\frac{q \psi\left(\frac{1-r}{2}\right)-\gamma (1-q)-\psi\left(\frac{1+r}{2}\right)}{(1+q) \log \frac{1}{\mu }+\gamma q-\psi\left(\frac{1+r}{2}\right)} +{\cal O}\left(\frac{\mu^2}{\log{\mu}}\right).
\end{align}
Note that the voltage ratio $v$ goes to $1$ as the separation $\mu \rightarrow 0$, regardless of the charge ratio $q$. Hence, much smaller separations are needed to overcome the repulsion characteristic of $v=1$.
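As a quick numerical illustration (ours, not part of the original analysis), the leading-order expression above can be evaluated with standard special functions. The sketch below uses SciPy's digamma for $\psi$ and NumPy's value of the Euler--Mascheroni constant for $\gamma$; the function and variable names are ours, and $q$, $r$, and $\mu$ denote the charge ratio, size-asymmetry parameter, and gap parameter as used in the text.
\begin{verbatim}
import numpy as np
from scipy.special import digamma

def voltage_ratio_leading_order(q, r, mu):
    # Leading-order v = V2/V1 at fixed charge ratio q, size parameter r
    # (e.g. r = 1/3 corresponds to R1 = 2 R2) and small gap parameter mu,
    # transcribed from the expansion quoted in the text.
    gamma = np.euler_gamma
    num = q * digamma((1 - r) / 2) - gamma * (1 - q) - digamma((1 + r) / 2)
    den = (1 + q) * np.log(1 / mu) + gamma * q - digamma((1 + r) / 2)
    return 1 - num / den

# v approaches 1 as mu -> 0, for any charge ratio q:
for mu in (1e-2, 1e-4, 1e-8):
    print(mu, voltage_ratio_leading_order(q=0.3, r=1/3, mu=mu))
\end{verbatim}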
\section{Summary}
The main new results presented in this paper are the asymptotic expansions and closed-form expressions for the electrostatic force between two conducting spheres. We supplement these results with the well-known classical method-of-images series in the far region, which gives the best convergence and computational speed at all distances while maintaining high accuracy.
These force expressions allow us to investigate the force as a function of the sphere size asymmetry. Our calculations reveal some new results when the size asymmetry between the spheres is large. We highlight these results by proposing the following three experiments:
{\it Experiment 1.} Let one sphere be 19 times larger than the other (see $r=9/10$ curve in Fig.~\ref{fig:NormalizedFV}). Hold both spheres at equal voltages and measure the electrostatic force at contact. Now move the spheres to 1.42 times this distance of closest approach while keeping their voltages the same. The measured force should now be 2.60 times larger.
{\it Experiment 2.} Let one sphere be 10 times larger than the other (see Fig.~\ref{fig:fvrnegative9over11}). Hold both spheres at equal voltages and measure the electrostatic force at a centre-to-centre distance of $1.112(R_1+R_2)$. Now decrease the voltage of the larger sphere to $86.7\%$ of the original value while keeping the separation fixed. The measured force should now be $18.4\%$ larger.
{\it Experiment 3.} Let one sphere be 19 times larger than the other (see $r=9/10$ curve in Fig.~\ref{fig:NormalizedFQ}). Charge both spheres and measure their electrostatic force at contact. Now move the spheres to 1.026 times this distance of closest approach while keeping their charges fixed. The measured force should now be 1.042 times larger. This force increase is similar to but much smaller than that in experiment 1.
The main mechanism for the force increase in the first two experiments is the substantial gaining of charge by the smaller sphere. This mechanism is missing in the third experiment since the charges on both spheres are fixed. We hypothesise that the mechanism behind the force increase in the third experiment is the rearrangement of charges on the larger sphere which decreases the effective distance between the spheres even as their centre-to-centre distance increases.
The three experiments listed above highlight the anomalous behaviour of the electrostatic force for high sphere size asymmetries. Additional details of such behaviour are provided in Tables~\ref{table:forceinversion}~and~\ref{table:repulsionatcontact}. For convenience we work with the dimensionless form of the electrostatic force. If needed, the dimensionless forms can be converted back to SI units through the relations
\begin{align}
F_V = \pi \epsilon V_1^2 f_V~~~\text{and}~~~ F_Q = \frac{q_0 Q_1^2}{4\pi\epsilon(R_1+R_2)^2} f_Q.
\end{align}
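A minimal conversion helper (a sketch with names of our choosing, assuming the surrounding medium is vacuum) is:
\begin{verbatim}
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity in F/m

def F_V_si(f_V, V1, eps=EPS0):
    # force in newtons at constant voltage, from the dimensionless f_V
    return np.pi * eps * V1**2 * f_V

def F_Q_si(f_Q, Q1, q0, R1, R2, eps=EPS0):
    # force in newtons at constant charge, from the dimensionless f_Q
    return q0 * Q1**2 / (4 * np.pi * eps * (R1 + R2)**2) * f_Q
\end{verbatim}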
In the paper we designate sphere 1 as the ``stronger'' (more positive) of the two in order to bound the voltage and charge ratios and avoid repeating cases. However, there is no such requirement; all relations hold as long as $V_1$ and $Q_1$ are nonzero.
\section{Repulsion increase with separation}
\label{sec:normalizedrepulsion}
The anomalous behaviour of the electrostatic force at contact, where the repulsion increases as the spheres separate, occurs only for sufficiently large size asymmetries. Similar behaviour is observed in both the constant voltage and constant charge cases, although the force increase is much larger in the constant voltage case. In this section we analyse this repulsion increase with separation between the two spheres as a function of their size asymmetry.
For the constant voltage case, setting $v=1$ and expanding $f_V$ for small $\mu$ gives
\begin{align}\nonumber
f_V (v\!=\!1)=& -\left(\frac{1}{3}+r^2\right)\phi_1(y_0)+\left(\frac{1-r^2}{6}\right)\Big[2 r\,\phi_1^\prime(y_0) - 1\Big]+\frac{\mu^2}{360}\times\\ \label{eq:fvmu2}
& \bigg[-\left(17+86 r^2-103 r^4\right)+\left(8+480 r^2-360 r^4\right) \phi_1(y_0) \\ \nonumber
&\hspace{0.0in}- \left(76 r-280 r^3+204 r^5\right) \phi_1^\prime(y_0)-20 \left(r-r^3\right)^2 \phi_1^{\prime\prime}(y_0)\bigg]+{\cal O}\left(\mu^4\right).
\end{align}
Setting the coefficient of $\mu^2$ to zero and numerically solving for the critical asymmetry ratio yields $r_c=0.423...$ ($R_1\simeq 2.5 R_2$). For $|r|>r_c$, the coefficient of $\mu^2$ is positive and the force increases with separation starting at contact! As discussed earlier in Sec.~\ref{sec:FvR1gtR2}, this increase in force is mainly due to the increase in charge on the smaller sphere.
\begin{figure}
\centering
\scalebox{0.145}{\includegraphics{Figures/fv_normalized_force.png}}
\caption{The force at constant voltage, $f_V$, normalised by the force at contact, $f_{V0}$, is plotted against the surface separation, $s-1$, for the voltage ratio $v=1$. Above a critical size asymmetry, $r_c=0.423...$ ($R_1 \simeq 2.5 R_2$), the force {\it increases} as the two spheres separate from contact. For $r=0.9$ the force increases to 2.6 times its value at contact at $s-1=0.41$ (see Table~\ref{table:repulsionatcontact}).}
\label{fig:NormalizedFV}
\end{figure}
In Fig.~\ref{fig:NormalizedFV} we highlight this non-monotonic behaviour of the repulsive force with sphere separation as a function of the size asymmetry. To compare all the cases we normalise the force by its value at contact. For size ratios greater than about 5:2, the repulsion at first increases as the spheres move apart from contact. We observe that the peak of the normalised force increases monotonically with $r$.
Using $f_V(v\!=\!1)$ from Eq.~(\ref{eq:fvmu2}) and setting $q=q_0$, the force at constant charge for small $\mu$ becomes
\begin{align}\label{eq:fqmu2}
f_Q (q\!=\!q_0) &= \frac{4f_V(v\!=\!1)}{(1-r^2)^2[\phi_1(y_0)^2-(\pi^2/4)\cot^2(\pi y_0)]}\times\\\nonumber
&\left [1-\mu^2\frac{1-r^2+(2+6r^2)\,\phi_1(y_0)-2(r-r^3)\,\phi_1^\prime(y_0)}{6\,\phi_1(y_0)} \right]+{\cal O}\left(\frac{\mu^2}{\log \mu}\right).
\end{align}
Isolating the coefficient of $\mu^2$ above and setting it to zero gives $r_c=0.4872...$ (i.e., $R_1 \simeq 3 R_2$). The force at contact, $f_{Q0}$, in its dimensionless form is the same as the Kelvin factor calculated in Ref.~\cite{Lekner2012-ua}.
In Fig.~\ref{fig:NormalizedFQ} we plot $f_Q$ normalised by its value at contact for several different values of $r$. Compared to the constant voltage case in Fig.~\ref{fig:NormalizedFV}, the normalised force shows a much smaller increase and the peak occurs at much smaller separations. Since neither sphere gains any additional charge, the only mechanism by which the force can increase is the redistribution of charge, which creates a greater horizontal component of the force as the separation increases.
\begin{figure}
\centering
\scalebox{0.145}{\includegraphics{Figures/fq_normalized_force.png}}
\caption{The force $f_Q$ normalised by the force at contact, $f_{Q0}$, is plotted against separation $s-1$ for the charge ratio $q_0$. Above a critical size asymmetry, $r_c=0.487...$ ($R_1 \simeq 3 R_2$), the force {\it increases} as the two spheres move away from contact. For $r=0.9$ the force increases to 1.042 times its contact value (see Table~\ref{table:repulsionatcontact}). The increase in repulsion here is much smaller than that in Fig.~\ref{fig:NormalizedFV}. High size asymmetries (see $r=0.99$) are needed for a significant ($14\%$) force increase.}
\label{fig:NormalizedFQ}
\end{figure}
In Table~\ref{table:repulsionatcontact} we list values of the maximum normalised repulsion between the spheres and the corresponding separations at which this maximum occurs. These data can be tested against computer simulations or experimental measurements. The constant voltage case appears to be the more likely candidate for experimental verification since the normalised force values are much larger than in the constant charge case. Also, it is easier to maintain two spheres at constant voltage than at constant charge, owing to the variety of mechanisms through which charge can dissipate from objects that are not perfectly isolated.
\renewcommand{\arraystretch}{1.4}
\begin{table}
\centering
\begin{tabular}{||c|c|c| c|c|c||}
\hline
$\hspace{0.1in}r \hspace{0.1in}$ &$R_1:R_2$ &\!\! Max \!$f_V/f_{V0}$ & $s-1$ \!\!&\! \! Max \!$f_Q/f_{Q0}$ \!\!& $s-1$ \\ [0.5ex]
\hline\hline
$\nicefrac{1}{2}$ & $3:1$ & $1.014$ & 0.100 &--&--\\
\hdashline[4pt / 1pt]
$\nicefrac{3}{5}$ & $4:1$ & $1.077$ & 0.194 & --&--\\
\hline
$\nicefrac{2}{3}$ & $5:1$ & $1.159$ & 0.244 & -- &--\\
\hdashline[4pt / 1pt]
$\nicefrac{5}{7}$ & $6:1$ & $1.250$ & 0.279 & 1.003 &0.00890\\
\hline
$\nicefrac{3}{4}$ & $7:1$ & $1.346$ & 0.304 & 1.006 &0.0127\\
\hdashline[4pt / 1pt]
$\nicefrac{7}{9}$ & $8:1$ & $1.445$ & 0.324 & 1.009& 0.0157\\
\hline
$\nicefrac{4}{5}$ & $9:1$ & $1.546$ & 0.340 & 1.012&0.0180\\
\hdashline[4pt / 1pt]
$\nicefrac{9}{11}$ & $10:1$ & $1.648$ & 0.353 & 1.015 & 0.0198\\
\hline
$\nicefrac{6}{7}$ & $13:1$ & $1.961$ & 0.382 & 1.025 & 0.0233\\
\hdashline[4pt / 1pt]
$\nicefrac{15}{17}$ & $16:1$ & $2.278$ & 0.401 & 1.034 & 0.0249\\
\hline
$\nicefrac{9}{10}$ & $19:1$ & $2.597$ & 0.415 & 1.042 & 0.0257\\
[0.5ex]
\hline
\end{tabular}
\caption{The data shows that the repulsion between the two spheres {\it increases} as they move away from contact. The value of the force is listed as a ratio to the value of the force at contact, $f_V/f_{V0}$ for constant voltage, and $f_Q/f_{Q0}$ for constant charge. Also listed is the separation $s-1$ at which this force ratio is a maximum. The values $f_{V0}$ and $f_{Q0}$ are given by the $\mu \rightarrow 0$ limits of Eqs.~(\ref{eq:fvmu2})~and~(\ref{eq:fqmu2}).}
\label{table:repulsionatcontact}
\end{table} | {
"timestamp": "2020-11-24T02:40:26",
"yymm": "2011",
"arxiv_id": "2011.00090",
"language": "en",
"url": "https://arxiv.org/abs/2011.00090",
"abstract": "We present exact closed-form expressions and complete asymptotic expansions for the electrostatic force between two charged conducting spheres of arbitrary sizes. Using asymptotic expansions of the force we confirm that even like-charged spheres attract each other at sufficiently small separation unless their voltages/charges are the same as they would be at contact. We show that for sufficiently large size asymmetries, the repulsion between two spheres $\\textit{increases}$ when they separate from contact if their voltages or their charges are held constant. Additionally, we show that in the constant voltage case, this like-voltage repulsion can be further increased and maximised though an optimal $\\textit{lowering}$ of the voltage on the larger sphere at an optimal sphere separation.",
"subjects": "Classical Physics (physics.class-ph)",
"title": "Exact closed-form and asymptotic expressions for the electrostatic force between two conducting spheres",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9678992914310605,
"lm_q2_score": 0.731058584489497,
"lm_q1q2_score": 0.7075910859219782
} |
https://arxiv.org/abs/0908.3347 | A survey of graphical languages for monoidal categories | This article is intended as a reference guide to various notions of monoidal categories and their associated string diagrams. It is hoped that this will be useful not just to mathematicians, but also to physicists, computer scientists, and others who use diagrammatic reasoning. We have opted for a somewhat informal treatment of topological notions, and have omitted most proofs. Nevertheless, the exposition is sufficiently detailed to make it clear what is presently known, and to serve as a starting place for more in-depth study. Where possible, we provide pointers to more rigorous treatments in the literature. Where we include results that have only been proved in special cases, we indicate this in the form of caveats. | \section{Introduction}
There are many kinds of monoidal categories with additional structure
--- braided, rigid, pivotal, balanced, tortile, ribbon, autonomous,
sovereign, spherical, traced, compact closed, *-autonomous, to name a
few. Many of them have an associated graphical language of ``string
diagrams''. The proliferation of different notions is often confusing
to non-experts, and occasionally to experts as well. To add to the
confusion, one concept often appears in the literature under multiple
names (for example, ``rigid'' is the same as ``autonomous'',
``sovereign'' is the same as ``pivotal'', and ``ribbon'' is the same
as ``tortile'').
In this survey, I attempt to give a systematic overview of the main
notions and their associated graphical languages. My initial intention
was to summarize, without proof, only the main definitions and
coherence results that appear in the literature. However, it quickly
became apparent that, in the interest of being systematic, I had to
include some additional notions. This led to the sections on spacial
categories, and planar and braided traced categories.
Historically, the terminology was often fixed for special cases before
more general cases were considered. As a result, some concepts have a
common name (such as ``compact closed category'') where another name
would have been more systematic (e.g.~``symmetric autonomous
category''). I have resisted the temptation to make major changes to
the established terminology. However, I propose some minor tweaks that
will hopefully not be disruptive. For example, I prefer ``traced
category'', which can be combined with various qualifying adjectives,
to the longer and less flexible ``traced monoidal category''.
Many of the coherence results are widely known, or at least presumed
to be true, but some of them are not explicitly found in the
literature. For those that can be attributed, I have attempted to do
so, sometimes with a caveat if only special cases have been proved in
the literature. For some easy results, I have provided proof sketches.
Some unproven results have been included as conjectures.
While the results surveyed here are mathematically rigorous, I have
shied away from giving the full technical details of the definitions
of the graphical languages and their respective notions of equivalence
of diagrams. Instead, I present the graphical languages somewhat
informally, but in a way that will be sufficient for most
applications. Where appropriate, full mathematical details can be
found in the references.
Readers who want a quick overview of the different notions are
encouraged to first consult the summary chart at the end of this
article.
An updated version of this article will be maintained on the ArXiv, so
I encourage readers to contact me with corrections, literature
references, and updates.
\paragraph{Graphical languages: an evolution of notation.}
The use of graphical notations for operator diagrams in physics goes
back to Penrose {\cite{Pen71}}. Initially, such notations applied to
multiplications and tensor products of linear operators, but it became
gradually understood that they are applicable in more general
situations.
To see how graphical languages arise from matrix multiplication,
consider the following example. Let $M:A\ii B$, $N:B\x C\ii D$,
and $P:D\ii E$ be linear maps between finite dimensional vector spaces
$A,B,C,D,E$. These maps can be combined in an obvious way to obtain a
linear map $F:A\x C\ii E$. In functional notation, the map $F$
can be written
\begin{equation}\label{eqn-functional}
F = P\cp N\cp (M\x \id_C).
\end{equation}
The same can be expressed as a summation over matrix indices, relative
to some chosen basis of each space. In mathematical notation, suppose
$M=(m_{j,i})$, $N=(n_{l,jk})$, $P=(p_{m,l})$, and $F=(f_{m,ik})$,
where $i,j,k,l,m$ range over basis vectors of the respective spaces.
Then
\begin{equation}\label{eqn-math}
f_{m,ik} = \sum_j\sum_l p_{m,l} n_{l,jk} m_{j,i}.
\end{equation}
In physics, it is more common to write column indices as superscripts
and row indices as subscripts. Moreover, one can drop the summation
symbols by using Einstein's summation convention.
\begin{equation}\label{eqn-physics}
F_{m}^{ik} = P_{m}^{l} N_{l}^{jk} M_{j}^{i}.
\end{equation}
In (\ref{eqn-math}) and (\ref{eqn-physics}), the order of the
factors in the multiplication is not relevant, as all the information
is contained in the indices. Also note that, while the notation
mentions the chosen bases, the result is of course basis independent.
This is because indices occur in pairs of opposite variance (if on the
same side of the equation) or equal variance (if on opposite sides of
the equation). It was Penrose {\cite{Pen71}} who first pointed out
that the notation is valid in many situations where the indices are
purely formal symbols, and the maps may not even be between vector
spaces.
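As a concrete aside (ours), the index formula (\ref{eqn-physics}) is exactly what NumPy's einsum computes, and it agrees with the composite map $P\cp N\cp(M\x \id_C)$ written as a matrix product under the usual row-major (Kronecker) flattening of tensor indices:
\begin{verbatim}
import numpy as np

a, b, c, d, e = 2, 3, 4, 5, 6          # dimensions of A, B, C, D, E
rng = np.random.default_rng(0)
M = rng.normal(size=(b, a))            # m_{j,i} :  A -> B
N = rng.normal(size=(d, b, c))         # n_{l,jk}:  B (x) C -> D
P = rng.normal(size=(e, d))            # p_{m,l} :  D -> E

# f_{m,ik} = sum_{j,l} p_{m,l} n_{l,jk} m_{j,i}
F_index = np.einsum('ml,ljk,ji->mik', P, N, M)

# the same map as P . N . (M (x) id_C), as a matrix product
F_matrix = P @ N.reshape(d, b * c) @ np.kron(M, np.eye(c))
assert np.allclose(F_index.reshape(e, a * c), F_matrix)
\end{verbatim}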
Since the only non-trivial information in (\ref{eqn-physics}) is in
the pairing of indices, it is natural to represent these pairings
graphically by drawing a line between paired indices. Penrose
{\cite{Pen71}} proposed to represent the maps $M,N,P$ as boxes, each
superscript as an incoming wire, and each subscript as an outgoing
wire. Wires corresponding to the same index are connected. Thus, we
obtain the graphical notation:
\begin{equation}
\vcenter{\wirechart{@R-.2cm}{
*{}\wireright{r}{k}&\blank\\
&\blank\nnbox{[u].[d]}{F}\wireright{r}{m}&*{}\\
*{}\wireright{r}{i}&\blank\\
}}
=
\vcenter{\wirechart{@R-.2cm}{
*{}\wireright{rr}{k}&&\blank\\
&&\blank\nnbox{[u].[d]}{N}\wireright{r}{l}&\blank\nnbox{[]}{P}\wireright{r}{m}&*{}\\
*{}\wireright{r}{i}&\blank\nnbox{[]}{M}\wireright{r}{j}&\blank
}}
\end{equation}
Finally, since the indices no longer serve any purpose, one may omit
them from the notation. Instead, it is more useful to label each wire
with the name of the corresponding space.
\begin{equation}\label{eqn-cat-diagram}
\vcenter{\wirechart{@R-.2cm}{
*{}\wireright{r}{C}&\blank\\
&\blank\nnbox{[u].[d]}{F}\wireright{r}{E}&*{}\\
*{}\wireright{r}{A}&\blank\\
}}
=
\vcenter{\wirechart{@R-.2cm}{
*{}\wireright{rr}{C}&&\blank\\
&&\blank\nnbox{[u].[d]}{N}\wireright{r}{D}&\blank\nnbox{[]}{P}\wireright{r}{E}&*{}\\
*{}\wireright{r}{A}&\blank\nnbox{[]}{M}\wireright{r}{B}&\blank
}}
\end{equation}
In the notation of monoidal categories, (\ref{eqn-cat-diagram}) can be
expressed as a commutative diagram
\begin{equation}
\xymatrix{
A\x C \ar[r]^<>(.5){F} \ar[d]_{M\x\id_C} & E \\
B\x C \ar[r]^<>(.5){N} & D \ar[u]_{P},
}
\end{equation}
or simply:
\begin{equation}
F = P\cp N\cp (M\x \id_C).
\end{equation}
Thus, we have completed a full circle and arrived back at the notation
(\ref{eqn-functional}) that we started with.
\paragraph{Organization of the paper.}
In each of the remaining sections of this paper, we will consider a
particular class of categories and its associated graphical language.
\paragraph{Acknowledgments.}
I would like to thank Gheorghe {\Stefanescu} and Ross Street for their
help in locating hard-to-obtain references, and for providing some
background information. Thanks to Fabio Gadducci, Chris Heunen, and
Micah McCurdy for useful comments on an earlier draft.
\section{Categories}\label{sec-categories}
We only give the most basic definitions of categories, functors, and
natural transformations. For a gentler introduction, with more details
and examples, see e.g.~Mac~Lane {\cite{ML71}}.
\begin{definition}
A {\em category} $\Cc$ consists of:
\begin{itemize}
\item a class $\obj{\Cc}$ of {\em objects}, denoted $A$, $B$, $C$,
\ldots;
\item for each pair of objects $A,B$, a set $\hom_{\Cc}(A,B)$ of
{\em morphisms}, which are denoted $f:A\ii B$;
\item {\em identity morphisms} $\id_A:A\ii A$ and the operation of
{\em composition}: if $f:A\ii B$ and $g:B\ii C$, then
\[ g\cp f:A\ii C,
\]
\end{itemize}
subject to the three equations
\[ \id_B\cp f = f, \sep f\cp\id_A = f, \sep (h\cp g)\cp f = h\cp(g\cp f)
\]
for all $f:A\ii B$, $g:B\ii C$, and $h:C\ii D$.
\end{definition}
The terms ``map'' or ``arrow'' are often used interchangeably with
``morphism''.
\begin{examples}
Some examples of categories are: the category $\Set$ of sets (with
functions as the morphisms); the category $\Rel$ of sets (with
relations as the morphisms); the category $\Vect$ of vector spaces
(with linear maps); the category $\Hilb$ of Hilbert spaces (with bounded
linear maps); the category $\UHilb$ of Hilbert spaces (with unitary
maps); the category $\Top$ of topological spaces (with continuous
maps); the category $\Cob$ of $n$-dimensional oriented manifolds
(with oriented cobordisms). Note that in each case, we need to
specify not only the objects, but also the morphisms (and
technically the composition and identities, although they are often
clear from the context).
Categories also arise in other sciences, for example in logic (where
the objects are propositions and the morphisms are proofs), and in
computing (where the objects are data types and the morphisms are
programs).
\end{examples}
Many concepts associated with sets and functions, such as {\em
inverse}, {\em monomorphism} (injective map), {\em idempotent}, {\em
cartesian product}, etc., are definable in an arbitrary category.
\paragraph{Graphical language.}
In the graphical language of categories, objects are represented as
{\em wires} (also called {\em edges}) and morphisms are represented as
{\em boxes} (also called {\em nodes}). An identity morphism is
represented as a continuing wire, and composition is represented by
connecting the outgoing wire of one diagram to the incoming wire of
another. This is shown in Table~\ref{tab-graphical-cats}.
\begin{table}
\begin{center}
\begin{tabular}{@{}llc@{}}
Object& $A$ &
$\wirechart{@C=.7cm}{*{}\wireright{rr}{A}&&}$ \\\\
Morphism &$f:A\ii B$ &
$\wmetamorph{f}{A}{B}$\\\\
Identity & $\id_A:A\ii A$ &
$\wirechart{@C=.7cm}{*{}\wireright{rr}{A}&&}$ \\\\
Composition & $t\cp s$ &
$\wirechart{}{*{}\wireright{r}{A}&
\wwblank{8mm}\ulbox{[]}{s}\wireright{r}{B}&
\wwblank{8mm}\ulbox{[]}{t}\wireright{r}{C}&
}$
\end{tabular}
\end{center}
\caption{The graphical language of categories}
\label{tab-graphical-cats}
\end{table}
\paragraph{Coherence.}
Note that the three defining axioms of categories (e.g., $\id_B\cp
f=f$) are automatically satisfied ``up to isomorphism'' in the graphical
language. This property is known as {\em soundness}. A converse of
this statement is also true: every equation that holds in the
graphical language is a consequence of the axioms. This property is
called {\em completeness}. We refer to a soundness and completeness
theorem as a {\em coherence theorem}.
\begin{theorem}[Coherence for categories]\label{thm-coherence-categories}
A well-formed equation between two morphism terms in the language of
categories follows from the axioms of categories if and only if it
holds in the graphical language up to isomorphism of diagrams.
\end{theorem}
Hopefully it is obvious what is meant by {\em isomorphism of
diagrams}: two diagrams are isomorphic if the boxes and wires of the
first are in bijective correspondence with the boxes and wires of the
second, preserving the connections between boxes and wires.
Admittedly, the above coherence theorem for categories is a
triviality, and is not usually stated in this way. However, we have
included it for the sake of uniformity, and for comparison with the less
trivial coherence theorems for monoidal categories in the following
sections. The proof is straightforward, since by the associativity
and unit axioms, each morphism term is uniquely equivalent to a term
of the form $((f_n\cp \ldots)\cp f_2)\cp f_1$ for $n\geq 0$, with
corresponding diagram
\[
\wirechart{}{*{} \wireright{r}{}&
\blank\ulbox{[]}{f_1} \wireright{r}{}&
\blank\ulbox{[]}{f_2} \wireright{r}{}&
*+{\cdots} \wireright{r}{}&
\blank\ulbox{[]}{f_n} \wireright{r}{}& *{.}
}
\]
\begin{remark}
We have equipped wires with a left-to-right arrow, and boxes with a
marking in the upper left corner. These markings are of no use at
the moment, but will become important as we extend the language in
the following sections.
\end{remark}
\subsection*{Technicalities}
\paragraph{Signatures, variables, terms, and equations.}
So far, we have not been very precise about what the wires and boxes
of a diagram are labeled with. We have also glossed over what was
meant by ``a well-formed equation between morphism terms in the
language of categories''. We now briefly explain these notions,
without giving all the formal details. For a more precise mathematical
treatment, see e.g.~Joyal and Street {\cite{JS91}}.
The wires of a diagram are labeled with {\em object variables}, and
the boxes are labeled with {\em morphism variables}. To understand
what this means, consider the familiar language of arithmetic
expressions. This language deals with {\em terms}, such as
$(x+y+2)(x+3)$, which are built up from {\em variables}, such as $x$
and $y$, {\em constants}, such as $2$ and $3$, by means of {\em
operations}, such as addition and multiplication. Variables can be
viewed in three different ways: first, they can be viewed as {\em
symbols} that can be compared (e.g.~the variable $x$ occurs twice in
the given term, and is different from the variable $y$). They can also
be viewed as placeholders for arbitrary {\em numbers}, for example
$x=5$ and $y=15$. Here $x$ and $y$ are allowed to represent different
numbers or the same number; however, the two occurrences of $x$ must
denote the same number. Finally, variables can be viewed as
placeholders for arbitrary {\em terms}, such as $x=a+b$ and $y=z^2$.
The formal language of category theory is similar, except that we
require two sets of variables: object variables (for labeling wires)
and morphism variables (for labeling boxes). We must also equip each
morphism variable with a specified domain and codomain. The following
definition makes this more precise.
\begin{definition}
A {\em simple (categorical) signature} $\Sigma$ consists of a set
$\Sigma_0$ of {\em object variables}, a set $\Sigma_1$ of {\em
morphism variables}, and a pair of functions
$\dom,\cod:\Sigma_1\ii\Sigma_0$. Object variables are usually
written $A,B,C,\ldots$, morphism variables are usually written
$f,g,h,\ldots$, and we write $f:A\ii B$ if $\dom(f)=A$ and
$\cod(f)=B$.
\end{definition}
Given a simple signature, we can then build {\em morphism terms},
such as $f\cp(g\cp \id_A)$, which are built from morphism variables
(such as $f$ and $g$) and morphism constants (such as $\id_A$), via
operations (i.e., composition). Each term is recursively equipped with
a domain and a codomain, and we must require compositions to respect
the domain and codomain information. A term that obeys these rules is
called {\em well-formed}. Finally, an equation between terms is called
a {\em well-formed equation} if the left-hand side and right-hand side
are well-formed terms that moreover have equal domains and equal
codomains.
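To make this bookkeeping concrete, here is a tiny sketch (ours, not from the literature) of a simple signature together with a well-formedness check for composites:
\begin{verbatim}
# object variables and typed morphism variables: name -> (dom, cod)
objects = {"A", "B", "C"}
morphisms = {"f": ("A", "B"), "g": ("B", "C")}

def compose(g_name, f_name):
    # return the (dom, cod) of g . f, or fail if the composite is ill-formed
    f_dom, f_cod = morphisms[f_name]
    g_dom, g_cod = morphisms[g_name]
    if f_cod != g_dom:
        raise TypeError(g_name + " . " + f_name + " is not well-formed")
    return (f_dom, g_cod)

print(compose("g", "f"))   # ('A', 'C')
\end{verbatim}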
The graphical language is also relative to a given signature. The
wires and boxes are labeled, respectively, with object variables and
morphism variables from the signature, and the labeling must respect
the domain and codomain information. This means that the wire entering
(respectively, exiting) a box labeled $f$ must be labeled by the
domain (respectively, codomain) of $f$.
The above remark about the different roles of variables in arithmetic
also holds for the diagrammatic language of categories. On the one
hand, the labels can be viewed as formal symbols. This is the view
used in the coherence theorem, where the formal labels are part of the
definition of equivalence (in this case, isomorphism) of diagrams.
The labels can also be viewed as placeholders for specific objects and
morphisms in an actual category. Such an assignment of objects and
morphisms is called an {\em interpretation} of the given signature.
More precisely, an interpretation $i$ of a signature $\Sigma$ in a
category $\Cc$ consists of a function $i_0:\Sigma_0\ii\obj{\Cc}$, and
for any $f\in\Sigma_1$ a morphism $i_1(f):i_0(\dom f)\ii i_0(\cod f)$.
By a slight abuse of notation, we write $i:\Sigma\ii\Cc$ for such an
interpretation.
Finally, a morphism variable can be viewed as a placeholder for an
arbitrary (possibly composite) diagram. We occasionally use this
latter view in schematic drawings, such as the schematic
representation of $t\cp s$ in Table~\ref{tab-graphical-cats}. We then
label a box with a morphism term, rather than a formal variable, and
understand the box as a short-hand notation for a possibly composite
diagram corresponding to that term.
\paragraph{Functors and natural transformations.}
\begin{definition}
Let $\Cc$ and $\Dd$ be categories. A {\em functor} $F:\Cc\ii\Dd$
consists of a function $F:\obj{\Cc}\ii\obj{\Dd}$, and for each pair
of objects $A,B\in\obj{\Cc}$, a function
$F:\hom_{\Cc}(A,B)\ii\hom_{\Dd}(FA,FB)$, satisfying $F(g\cp
f)=F(g)\cp F(f)$ and $F(\id_A)=\id_{FA}$.
\end{definition}
\begin{definition}
Let $\Cc$ and $\Dd$ be categories, and let $F,G:\Cc\ii\Dd$ be
functors. A {\em natural transformation} $\natt:F\ii G$ consists of
a family of morphisms $\natt_A:FA\ii GA$, one for each object
$A\in\obj{\Cc}$, such that the following diagram commutes for all
$f:A\ii B$:
\[\xymatrix{
FA\ar[r]^{\natt_A} \ar[d]_{Ff} & GA\ar[d]^{Gf} \\
FB\ar[r]^{\natt_B} & GB. \\
}
\]
\end{definition}
\paragraph{Coherence and free categories.}
Most coherence theorems are proved by characterizing the {\em free}
categories of a certain kind.
\begin{definition}
We say that a category $\Cc$ is {\em free} over a signature $\Sigma$
if it is equipped with an interpretation $i:\Sigma\ii\Cc$, such that
for any category $\Dd$ and interpretation $j:\Sigma\ii\Dd$, there
exists a unique functor $F:\Cc\ii\Dd$ such that $j=F\cp i$.
\end{definition}
\begin{theorem}
The graphical language of categories over a signature $\Sigma$, with
identities and composition as defined in
Table~\ref{tab-graphical-cats}, and up to isomorphism of diagrams,
forms the free category over $\Sigma$.
\end{theorem}
Theorem~\ref{thm-coherence-categories} is indeed a consequence of this
theorem: by definition of freeness, an equation holds in all
categories if and only if it holds in the free category. By the
characterization of the free category, an equation holds in the free
category if and only if it holds in the graphical language.
\section{Monoidal categories}\label{sec-progressive}
In this section, we consider various notions of monoidal categories.
We sometimes refer to these notions as ``progressive'', which means
they have graphical languages where all arrows point left-to-right.
This serves to distinguish them from ``autonomous'' notions, which
will be discussed in Section~\ref{sec-autonomous}, and ``traced''
notions, which will be discussed in Section~\ref{sec-traced}.
\subsection{(Planar) monoidal categories}\label{subsec-planar-monoidal}
A {\em monoidal category} (also sometimes called {\em tensor
category}) is a category with an associative unital tensor product.
More specifically:
\begin{definition}[\cite{ML71,JS93}]
A {\em monoidal category} is a category with the following
additional structure:
\begin{itemize}
\item a new operation $A\x B$ on objects and a new object constant $I$;
\item a new operation on morphisms: if $f:A\ii C$ and $g:B\ii D$, then
\[ f\x g:A\x B\ii C\x D;
\]
\item and isomorphisms
\[ \begin{array}{ll}
\alpha_{A,B,C}:&(A\x B)\x C\catarrow{\iso} A\x(B\x C),\\
\lambda_A:&I\x A\catarrow{\iso} A,\\
\rho_A:&A\x I\catarrow{\iso} A,\\
\end{array}
\]
\end{itemize}
subject to a number of equations:
\begin{itemize}
\item $\x$ is a bifunctor, which means $\id_A\x\id_B = \id_{A\x
B}$ and $(k\x h)\cp(g\x f) = (k\cp g)\x(h\cp f)$;
\item $\alpha$, $\lambda$, and $\rho$ are natural transformations,
i.e., $(f\x (g\x h))\cp\alpha_{A,B,C} = \alpha_{A',B',C'}\cp((f\x
g)\x h)$, $f\cp\lambda_{A} = \lambda_{A'}\cp(\id_I\x f)$, and
$f\cp\rho_{A} = \rho_{A'}\cp(f\x\id_I)$;
\item plus the following two coherence axioms, called the ``pentagon
axiom'' and the ``triangle axiom'':
\[\xymatrix@R-5mm@C-20mm{
&(A\x (B\x C))\x D
\ar[rr]\ar@{}@<1ex>[rr]^{\alpha_{A,B\x C,D}}&&
A\x((B\x C)\x D)
\ar[dr]^<>(.9){A\x\alpha_{B,C,D}}
\\
((A\x B)\x C)\x D
\ar[ur]^<>(.1){\alpha_{A,B,C}\x D}
\ar[drr]_<>(.3){\alpha_{A\x B,C,D}}&&&&
A\x(B\x (C\x D))\\
&&(A\x B)\x (C\x D)
\ar[rru]_<>(.7){\alpha_{A,B,C\x D}}
\\
}
\]
\[\xymatrix@R-5mm@C-10mm{
(A\x I)\x B\ar[dr]_{\rho_A\x\id_B} \ar[rr]^{\alpha_{A,I,B}}
&& A\x (I\x B)\ar[dl]^{\id_A\x\lambda_B} \\
& A\x B \\
}
\]
\end{itemize}
\end{definition}
When we specifically want to emphasize that a monoidal category is not
assumed to be braided, symmetric, etc., we sometimes also refer to it
as a {\em planar monoidal category}.
\begin{examples}
Examples of monoidal categories include: the category $\Set$ (of
sets and functions), together with the cartesian product $\times$;
the category $\Set$ together with the disjoint union operation $+$;
the category $\Rel$ with either $\times$ or $+$; the category
$\Vect$ (of vector spaces and linear functions) with either
$\oplus$ or $\x$; the category $\Hilb$ of Hilbert spaces with either
$\oplus$ or $\x$; the categories $\Top$ and $\Cob$ with disjoint
union $+$. Note that in each case, we need to specify a category and
a tensor product (in general there are multiple choices).
Technically, we should also specify associativity maps etc., but
they are usually clear from the context.
\end{examples}
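As a small numerical aside (ours, not from the text): in the example $(\Vect,\x)$, under the standard Kronecker-product representation of tensor products of linear maps, the tensor is associative on the nose at the level of matrices, so the associativity isomorphisms are invisible in concrete computations:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 3))
B = rng.normal(size=(4, 2))
C = rng.normal(size=(3, 5))

lhs = np.kron(np.kron(A, B), C)   # (A (x) B) (x) C
rhs = np.kron(A, np.kron(B, C))   # A (x) (B (x) C)
assert np.allclose(lhs, rhs)
\end{verbatim}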
\paragraph{Graphical language.}
We extend the graphical language of categories as follows. A tensor
product of objects is represented by writing the corresponding wires
in parallel. The unit object is represented by zero wires. A morphism
variable $f:A_1 \x \ldots\x A_n \ii B_1\x \ldots\x B_m$ is represented
as a box with $n$ input wires and $m$ output wires. A tensor product
of morphisms is represented by stacking the corresponding diagrams.
This is shown in Table~\ref{tab-graphical-monoidal}.
\begin{table}
\begin{center}
\begin{tabular}{@{}llc@{}}
Tensor product& $S\x T$ &
$\vcenter{\wirechart{@C=1.5cm@R=0.4cm}{\wireright{rr}{T}&&\\\wireright{rr}{S}&&}}$\\\\[-.5ex]
Unit object& $I$ &
(empty)\\\\[-.5ex]
Morphism &$f:$ {\small $A_1\,{\x}\,\ldots\,{\x}\, A_n\,{\ii}\, B_1\,{\x}\,\ldots\,{\x}\, B_m$} &
$\wirechart{@C=1.5cm@R=0.4cm}{
*{}\wireright{r}{A_n}&
\blank\wireright{r}{B_m}&
\\
*{}\wireright{r}{A_1}^<>(.8){\vdots}&
\blank\ulbox{[].[u]}{f}\wireright{r}{B_1}^<>(.2){\vdots}&
\\
}$\\\\
Tensor product &$s\x t$ &
$\vcenter{\wirechart{@C=1.5cm@R=0.4cm}{
\vsblank\wireright{r}{C}&\blank\ulbox{[]}{t}\wireright{r}{D}&\\
\vsblank\wireright{r}{A}&\blank\ulbox{[]}{s}\wireright{r}{B}&\\
}}$
\end{tabular}
\end{center}
\caption{The graphical language of monoidal categories}
\label{tab-graphical-monoidal}
\end{table}
Note that it is our convention to write tensor products in the
bottom-to-top order. Similar conventions apply to objects as to
morphisms: thus, a single wire is labeled by an {\em object variable}
such as $A$, while a more general object such as $A\x B$ or $I$ is
represented by zero or more wires. For more details, see ``Monoidal
signatures'' below.
\paragraph{Coherence.}
It is easy to check that the graphical language for monoidal
categories is sound, up to deformation of diagrams in the plane. As
an example, consider the following law, which is a consequence of
bifunctoriality:
\[ (\id_C\x g)\cp(f\x\id_B) =
(f\x\id_D)\cp(\id_A\x g).
\]
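In the example $(\Vect,\x)$, identifying $f\x g$ with the Kronecker product of matrices, this law can be checked numerically (a sketch, ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
f = rng.normal(size=(4, 2))   # f : A -> C   (dim A = 2, dim C = 4)
g = rng.normal(size=(5, 3))   # g : B -> D   (dim B = 3, dim D = 5)

lhs = np.kron(np.eye(4), g) @ np.kron(f, np.eye(3))   # (id_C (x) g) . (f (x) id_B)
rhs = np.kron(f, np.eye(5)) @ np.kron(np.eye(2), g)   # (f (x) id_D) . (id_A (x) g)
assert np.allclose(lhs, rhs)                          # both equal f (x) g
\end{verbatim}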
Translated into the graphical language, this becomes
\[\vcenter{\wirechart{@C-2mm@R6mm}{
*{}\wireright{rr}{B}&&\blank\ulbox{[]}{g}\wireright{r}{D}&*{}\\
*{}\wireright{r}{A}&\blank\ulbox{[]}{f}\wireright{rr}{C}&&*{}\\
}}
\ssep=\ssep
\vcenter{\wirechart{@C-2mm@R7mm}{
*{}\wireright{r}{B}&\blank\ulbox{[]}{g}\wireright{rr}{D}&&*{}\\
*{}\wireright{rr}{A}&&\blank\ulbox{[]}{f}\wireright{r}{C,}&*{}\\
}}
\]
which obviously holds up to deformation of diagrams. We have the
following coherence theorem:
\begin{theorem}[Coherence for planar monoidal categories {\cite[Thm.~1.5]{JS88}}, {\cite[Thm.~1.2]{JS91}}]\label{thm-coherence-planar}
A well-formed equation between morphism terms in the language of
monoidal categories follows from the axioms of monoidal categories
if and only if it holds, up to planar isotopy, in the graphical
language.
\end{theorem}
Here, by ``planar isotopy'', we mean that two diagrams, drawn in a
rectangle in the plane with incoming and outgoing wires attached to
the boundaries of the rectangle, are equivalent if it is possible to
transform one to the other by continuously moving around boxes in the
rectangle, without allowing boxes or wires to cross each other or to
be detached from the boundary of the rectangle during the moving. To
make these notions mathematically precise, it is usually easier to
represent morphisms as points, rather than boxes. For precise
definitions and a proof of the coherence theorem, see
Joyal and Street {\cite{JS88,JS91}}.
\begin{caveat}
Technically, Joyal and Street's proof in {\cite{JS88,JS91}} only
applies to planar isotopies where each intermediate diagram during
the deformation remains progressive, i.e., with all arrows oriented
left-to-right. Joyal and Street call such an isotopy ``recumbent''.
We conjecture that the result remains true if one allows arbitrary
planar deformations. Similar caveats also apply to the coherence
theorems for braided and balanced monoidal categories below.
\end{caveat}
The following is an example of two diagrams that are not isomorphic in
the planar embedded sense:
\begin{equation}\label{eqn-non-spacial}
\vcenter{\wirechart{@R+0.1cm}{
\blank\wireright{rr}{B}&&\blank\\
\ulbox{[u].[d]}{f} & \blank\ulbox{[]}{h} & \ulbox{[u].[d]}{g} \\
\blank\wireright{rr}{A}&&\blank\\
}}
\sep\neq\sep
\vcenter{\wirechart{@R+0.1cm}{
\blank\wireright{rr}{B}&&\blank\\
\blank\ulbox{[u].[]}{f}\wireright{rr}{A}&&\blank\ulbox{[u].[]}{g},\\
& \blank\ulbox{[]}{h} & \\
}}
\end{equation}
where $f:I\ii A\x B$, $g:A\x B\ii I$, and $h:I\ii I$. And indeed, the
corresponding equation $g\cp ((\rho_A\cp(\id_A\x
h)\cp\rho\inv_A)\x\id_B) \cp f = g\cp
((\lambda_A\cp(h\x\id_A)\cp\lambda\inv_A)\x\id_B) \cp f$ {\em does
not follow} from the axioms of monoidal categories. This is an easy
consequence of soundness.
Note that because of the coherence theorem, it is not actually
necessary to memorize the axioms of monoidal categories: indeed, one
could use the coherence theorem as the {\em definition} of monoidal
category! For practical purposes, reasoning in the graphical language
is almost always easier than reasoning from the axioms. On the other
hand, the graphical definition is not very useful when one has to
check whether a given category is monoidal; in this case, checking
finitely many axioms is easier.
\paragraph{Relationship to traditional coherence theorems.}
Many category theorists are familiar with coherence theorems of the
form ``all diagrams of a certain type commute''. Mac~Lane's
traditional coherence theorem for monoidal categories {\cite{ML63}} is
of this form. It states that all diagrams built from only $\alpha$,
$\lambda$, $\rho$, $\id$, $\cp$, and $\x$ commute.
The coherence results in this paper are of a more general form (cf.
Kelly {\cite[p.~107]{K72b}}). Here, the object is to characterize {\em
all} formal equations that follow from a given set of axioms. We
note that the traditional coherence theorem is an easy consequence of
the general coherence result of Theorem~\ref{thm-coherence-planar}:
namely, if a given well-formed equation is built only from $\alpha$,
$\lambda$, $\rho$, $\id$, $\cp$, and $\x$, then both the left-hand
side and right-hand side denote identity diagrams in the graphical
language. Therefore, by Theorem~\ref{thm-coherence-planar}, the
equation follows from the axioms of monoidal categories. Analogous
remarks hold for all the coherence theorems of this article.
\subsection*{Technicalities}
\paragraph{Monoidal signatures.}
To be precise about the labels on diagrams of monoidal categories, and
about the meaning of ``well-formed equation'' in the coherence
theorem, we introduce the concept of a monoidal signature. This
generalizes the simple signatures introduced in
Section~\ref{sec-categories}. Monoidal signatures were introduced
under the name {\em tensor schemes} by Joyal and Street
{\cite{JS88,JS91}}. We give a non-strict version of the definition.
\begin{definition}[{\cite[Def.~1.4]{JS91}}, {\cite[Def.~1.6]{JS88}}]
Given a set $\Sigma_0$ of {\em object variables}, let
$\MonTerm(\Sigma_0)$ denote the free $(\x,I)$-algebra generated by
$\Sigma_0$, i.e., the set of {\em object terms} built from object
variables and $I$ via the operation $\x$. For example, if
$A,B\in\Sigma_0$, then the term $(A\x B)\x(I\x A)$ is an element of
$\MonTerm(\Sigma_0)$.
A {\em monoidal signature} consists of a set $\Sigma_0$ of object
variables, a set $\Sigma_1$ of {\em morphism variables}, and a pair
of functions $\dom,\cod:\Sigma_1\ii\MonTerm(\Sigma_0)$.
\end{definition}
The concept of well-formed morphism terms and equations (in the
language of monoidal categories) is defined relative to a given
monoidal signature. In the graphical language, wires and boxes are
labeled by object variables and morphism variables as before. An
object term expands to zero or more parallel wires, by the rules of
Table~\ref{tab-graphical-monoidal}. As before, the labellings must
respect the domain and codomain information, which now involves
possibly multiple wires connected to a box. Just as we sometimes label
a box by a morphism term in schematic drawings to denote a possibly
composite diagram, we sometimes label a wire by an object term, such
as $S$ and $T$ in Table~\ref{tab-graphical-monoidal}. In this case, it
is a short-hand notation for zero or more parallel wires.
Given a monoidal signature $\Sigma$ and a monoidal category $\Cc$, an
{\em interpretation} $i:\Sigma\ii\Cc$ consists of an object function
$i_0:\Sigma_0\ii\obj{\Cc}$, which then extends in a unique way to
$\hat i_0: \MonTerm(\Sigma_0)\ii\obj{\Cc}$ such that $\hat i_0(A\x
B)=\hat i_0(A)\x\hat i_0(B)$ and $\hat i_0(I)=I$, and for any
$f\in\Sigma_1$ a morphism $i_1(f):i_0(\dom f)\ii i_0(\cod f)$.
The remaining graphical languages in this
Section~\ref{sec-progressive} are all given relative to a monoidal
signature.
\paragraph{Monoidal functors and natural transformations.}
\begin{definition}
A {\em strong monoidal functor} (also sometimes called a {\em tensor
functor}) between monoidal categories $\Cc$ and $\Dd$ is a functor
$F:\Cc\ii\Dd$, together with natural isomorphisms $\phi^2:FA\x FB\ii
F(A\x B)$ and $\phi^0: I\ii FI$, such that the following diagrams
commute:
\[ \xymatrix@C-2ex{
(FA\x FB)\x FC \ar[rr]^{\phi^2\x \id} \ar[d]_{\alpha} &&
F(A\x B)\x FC \ar[r]^{\phi^2} &
F((A\x B)\x C) \ar[d]^{F(\alpha)} \\
FA\x (FB\x FC) \ar[rr]^{\id\x\phi^2} &&
FA\x F(B\x C) \ar[r]^{\phi^2} &
F(A\x (B\x C)) \\
}
\]
\[ \xymatrix{
FA\x I \ar[r]^{\rho} \ar[d]_{\id\x\phi^0} &
FA \ar[d]^{F(\rho)} \\
FA\x FI \ar[r]^{\phi^2} &
F(A\x I)
}
\sep
\xymatrix{
I\x FA \ar[r]^{\lambda} \ar[d]_{\phi^0\x\id} &
FA \ar[d]^{F(\lambda)} \\
FI\x FA \ar[r]^{\phi^2} &
F(I\x A)
}
\]
\end{definition}
\begin{definition}
Let $\Cc$ and $\Dd$ be monoidal categories, and let $F,G:\Cc\ii\Dd$
be strong monoidal functors. A natural transformation $\natt:F\ii
G$ is called {\em monoidal} (or a {\em tensor transformation}) if
the following two diagrams commute for all $A,B$:
\[ \xymatrix{
FA\x FB \ar[r]^{\phi^2} \ar[d]_{\natt_A\x\natt_B} &
F(A\x B) \ar[d]^{\natt_{A\x B}} \\
GA\x GB \ar[r]^{\phi^2} &
G(A\x B)
}
\]
\end{definition}
\paragraph{Coherence and free monoidal categories.}
Similarly to what we stated for categories, the coherence theorem for
monoidal categories is a consequence of a characterization of the free
monoidal category. However, due to the extra coherence conditions in
the definition of a strong monoidal functor, the definition of
freeness is slightly more complicated.
\begin{definition}
A monoidal category $\Cc$ is a {\em free monoidal category} over a
monoidal signature $\Sigma$ if it is equipped with an interpretation
$i:\Sigma\ii\Cc$ such that for any monoidal category $\Dd$ and
interpretation $j:\Sigma\ii\Dd$, there exists a strong monoidal
functor $F:\Cc\ii\Dd$ such that $j=F\cp i$, and $F$ is unique up to
a unique monoidal natural isomorphism.
\end{definition}
As before, the coherence theorem can be re-formulated as a freeness
theorem.
\begin{theorem}
The graphical language of monoidal categories over a monoidal
signature $\Sigma$, with identities, composition, and tensor as
defined in Tables~\ref{tab-graphical-cats} and
{\ref{tab-graphical-monoidal}}, and up to planar isotopy of
diagrams, forms a free monoidal category over $\Sigma$.
\end{theorem}
Most of the coherence theorems (and conjectures) of this article can
be similarly formulated in terms of freeness. An exception to
this are the traced categories without braidings in
Sections~\ref{subsec-right-traced}--\ref{subsec-spacial-traced} and
{\ref{subsec-dagger-traced}}, as explained in
Remark~\ref{rem-planar-not-free}. From now on, we will only mention
freeness when it is not entirely automatic, such as in
Section~\ref{subsec-planar-autonomous}.
\subsection{Spacial monoidal categories}
\begin{definition}
A monoidal category is {\em spacial} if it satisfies the additional
axiom
\begin{equation}\label{eqn-spacial}
\rho_A\cp(\id_A\x h)\cp\rho\inv_A=\lambda_A\cp(h\x\id_A)\cp
\lambda\inv_A,
\end{equation}
for all $h:I\ii I$.
\end{definition}
In the graphical language, this means that
\[
\vcenter{\wirechart{@R+0.3cm}{
\blank \\
\blank & \blank\ulbox{[]}{h} & \blank \\
\blank\wireright{rr}{A}&&\blank\\
}}
=
\vcenter{\wirechart{@R+0.3cm}{
\blank\wireright{rr}{A}&&\blank,\\
\blank & \blank\ulbox{[]}{h} & \blank \\
\blank \\
}},
\]
so in particular, it implies that the two terms in
(\ref{eqn-non-spacial}) are equal. The author does not know whether
the concept of a spacial monoidal category appears in the literature,
or if it does, under what name.
\paragraph{Graphical language.}
The graphical language for spacial monoidal categories is the same as
that for monoidal categories, except that planarity is dropped from
the notion of diagram equivalence, i.e., diagrams are considered up to
isomorphism. Obviously the axioms are sound; we conjecture that they
are also complete.
\begin{conjecture}[Coherence for spacial monoidal categories]
A well-formed equation between morphism terms in the language of
spacial monoidal categories follows from the axioms of spacial
monoidal categories if and only if it holds, up to isomorphism of
diagrams, in the graphical language.
\end{conjecture}
Note that, in the case of planar diagrams, the notion of isomorphism
of diagrams coincides with ambient isotopy in 3 dimensions. This
explains the term ``spacial''.
\subsection{Braided monoidal categories}
\begin{definition}[\cite{JS93}]
A {\em braiding} on a monoidal category is a natural family of
isomorphisms $\sym_{A,B}:A\x B\ii B\x A$, satisfying the
following two ``hexagon axioms'':
\[\xymatrix@R-3mm@C-5mm{
&(B\x A)\x C\ar[rr]^{\alpha_{B,A,C}}&&
B\x(A\x C) \ar[dr]^{\id_B\x\sym_{A,C}}\\
(A\x B)\x C
\ar[ur]^{\sym_{A,B}\x\id_C}
\ar[dr]_{\alpha_{A,B,C}}&&&&
B\x(C\x A).\\
&A\x(B\x C)\ar[rr]^{\sym_{A,B\x C}}&&
(B\x C)\x A \ar[ur]_{\alpha_{B,C,A}}\\
}
\]
\[\xymatrix@R-3mm@C-5mm{
&(B\x A)\x C\ar[rr]^{\alpha_{B,A,C}}&&
B\x(A\x C) \ar[dr]^{\id_B\x\symi_{C,A}}\\
(A\x B)\x C
\ar[ur]^{\symi_{B,A}\x\id_C}
\ar[dr]_{\alpha_{A,B,C}}&&&&
B\x(C\x A).\\
&A\x(B\x C)\ar[rr]^{\symi_{B\x C,A}}&&
(B\x C)\x A \ar[ur]_{\alpha_{B,C,A}}\\
}
\]
\end{definition}
Note that every braided monoidal category is spacial; this follows
from the naturality (in $I$) of $\sym_{A,I}:A\x I\ii I\x A$.
A braided monoidal functor between braided monoidal categories is a
monoidal functor that is compatible with the braiding in the following
sense:
\[ \xymatrix{
FA\x FB \ar[r]^{\phi^2}\ar[d]_{\sym_{FA,FB}} &
F(A\x B)\ar[d]^{F\sym_{A,B}} \\
FB\x FA \ar[r]^{\phi^2} &
F(B\x A).
}
\]
\paragraph{Graphical language.}
One extends the graphical language of monoidal categories with the {\em
braiding}:
\begin{center}
\begin{tabular}{@{}llc@{}}
Braiding & $\sym_{A,B}$ &
$\vcenter{\wirechart{@C=1.5cm@R=0.8cm}{
*{}\wireright{r}{B}&\blank\wirecross{d}\wireright{r}{A}&\\
*{}\wireright{r}{A}&\blank\wirebraid{u}{.3}\wireright{r}{B}&
}}$ \\
\end{tabular}
\end{center}
In general, if $A$ and $B$ are composite object terms, the braiding
$\sym_{A,B}$ is represented as the appropriate number of wires
crossing each other.
Note that the braiding satisfies
$\symi_{A,B}\cp\sym_{A,B}=\id_{A\x B}$, but not
$\sym_{B,A}\cp\sym_{A,B}=\id_{A\x B}$. Graphically:
\[\vcenter{\wirechart{@C=1.5cm@R=0.8cm}{
*{}\wireright{r}{B}&
\blank\wirecross{d}\wireright{r}{A}&
\blank\wirebraid{d}{.3}\wireright{r}{B}&
\\
*{}\wireright{r}{A}&
\blank\wirebraid{u}{.3}\wireright{r}{B}&
\blank\wirecross{u}\wireright{r}{A}&
\\
}}
= \id_{A\x B},
\]
\[\vcenter{\wirechart{@C=1.5cm@R=0.8cm}{
*{}\wireright{r}{B}&
\blank\wirecross{d}\wireright{r}{A}&
\blank\wirecross{d}\wireright{r}{B}&
\\
*{}\wireright{r}{A}&
\blank\wirebraid{u}{.3}\wireright{r}{B}&
\blank\wirebraid{u}{.3}\wireright{r}{A}&
\\
}}
\neq \id_{A\x B}.
\]
\begin{example}
The hexagon axiom translates into the following in the graphical
language:
\[ (\id_B\x\sym_{A,C}) \cp \alpha_{B,A,C} \cp (\sym_{A,B}\x\id_C)
= \alpha_{B,C,A} \cp \sym_{A,B\x C} \cp \alpha_{A,B,C}
\]
\[\vcenter{\wirechart{@R=6mm}{
\wireright{r}{C}&
\wireid\wireright{r}{C}&
\blank\wirecross{d}{.3}\wireright{r}{A}&
\\
\wireright{r}{B}&
\blank\wirecross{d}{.3}\wireright{r}{A}&
\blank\wirebraid{u}{.3}\wireright{r}{C}&
\\
\wireright{r}{A}&
\blank\wirebraid{u}{.3}\wireright{r}{B}&
\wireid\wireright{r}{B}&
\\
}}
\ssep=\ssep
\vcenter{\wirechart{@R=6mm}{
\wireright{r}{C}&
\blank\wirecross{d}{.3}\wireright{r}{A}&
\\
\wireright{r}{B}&
\blank\wirecross{d}{.3}\wireright{r}{C}&
\\
\wireright{r}{A}&
\blank\wirebraid{uu}{.2}\wireright{r}{B}&
\\
}}
\]
\end{example}
\begin{example}
The {\em Yang-Baxter equation} is the following equation, which is a
consequence of the hexagon axiom and naturality:
\[ (\sym_{B,C}\x\id_A)\cp(\id_B\x\sym_{A,C})\cp(\sym_{A,B}\x\id_C) =
(\id_C\x\sym_{A,B})\cp(\sym_{A,C}\x\id_B)\cp(\id_A\x\sym_{B,C}).
\]
In the graphical language, it becomes:
\[\vcenter{\wirechart{@C-4mm@R=6mm}{
\wireright{r}{C}&
\wireid\wireright{r}{C}&
\blank\wirecross{d}{.3}\wireright{r}{A}&
\wireid\wireright{r}{A}&
\\
\wireright{r}{B}&
\blank\wirecross{d}{.3}\wireright{r}{A}&
\blank\wirebraid{u}{.3}\wireright{r}{C}&
\blank\wirecross{d}{.3}\wireright{r}{B}&
\\
\wireright{r}{A}&
\blank\wirebraid{u}{.3}\wireright{r}{B}&
\wireid\wireright{r}{B}&
\blank\wirebraid{u}{.3}\wireright{r}{C}&
\\
}}
\ssep=\ssep
\vcenter{\wirechart{@C-4mm@R=6mm}{
\wireright{r}{C}&
\blank\wirecross{d}{.3}\wireright{r}{B}&
\wireid\wireright{r}{B}&
\blank\wirecross{d}{.3}\wireright{r}{A}&
\\
\wireright{r}{B}&
\blank\wirebraid{u}{.3}\wireright{r}{C}&
\blank\wirecross{d}{.3}\wireright{r}{A}&
\blank\wirebraid{u}{.3}\wireright{r}{B}&
\\
\wireright{r}{A}&
\wireid\wireright{r}{A}&
\blank\wirebraid{u}{.3}\wireright{r}{C}&
\wireid\wireright{r}{C}&
\\
}}
\]
\end{example}
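For the symmetric special case of vector spaces with the swap map (which is in particular a braiding), the Yang-Baxter equation can be verified numerically; the sketch below (ours) builds the swap as a permutation matrix on the Kronecker basis:
\begin{verbatim}
import numpy as np

def swap(a, b):
    # matrix of sigma_{A,B} : A (x) B -> B (x) A for dim A = a, dim B = b
    return (np.eye(a * b).reshape(a, b, a, b)
            .transpose(1, 0, 2, 3).reshape(b * a, a * b))

a, b, c = 2, 3, 4
lhs = (np.kron(swap(b, c), np.eye(a))
       @ np.kron(np.eye(b), swap(a, c))
       @ np.kron(swap(a, b), np.eye(c)))
rhs = (np.kron(np.eye(c), swap(a, b))
       @ np.kron(swap(a, c), np.eye(b))
       @ np.kron(np.eye(a), swap(b, c)))
assert np.allclose(lhs, rhs)   # Yang-Baxter holds for the swap
\end{verbatim}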
\begin{theorem}[Coherence for braided monoidal categories {\cite[Thm.~3.7]{JS91}}]
A well-formed equation between morphisms in the language of braided
monoidal categories follows from the axioms of braided monoidal
categories if and only if it holds in the graphical language up to
isotopy in 3 dimensions.
\end{theorem}
Here, by ``isotopy in 3 dimensions'', we mean that two diagrams, drawn
in a 3-dimensional box with incoming and outgoing wires attached to
the boundaries of the box, are isotopic if it is possible to transform
one to the other by moving around nodes in the box, without allowing
nodes or edges to cross each other or to be detached from the boundary
during the moving. Also, the linear order of the edges entering and
exiting each node must be respected. This is made more precise in
Joyal and Street {\cite{JS91}}.
\begin{caveat}
The proof by Joyal and Street {\cite{JS91}} is subject to some minor
technical assumptions: graphs are assumed to be {\em smooth}, and
the isotopies are progressive, with continuously changing tangent
vectors.
\end{caveat}
\subsection{Balanced monoidal categories}\label{subsec-balanced}
\begin{definition}[\cite{JS93}]
A {\em twist} on a braided monoidal category is a natural family of
isomorphisms $\theta_A:A\ii A$, satisfying $\theta_I=\id_I$ and such
that the following diagram commutes for all $A,B$:
\begin{equation}\label{eqn-balanced}
\xymatrix{
A\x B \ar[d]_{\theta_{A\x B}}\ar[r]^{\sym_{A,B}} & B\x A \ar[d]^{\theta_B\x\theta_A} \\
A\x B & B\x A. \ar[l]^{\sym_{B,A}}
}
\end{equation}
A {\em balanced monoidal category} is a braided monoidal category
with twist.
\end{definition}
A balanced monoidal functor between balanced monoidal categories is a
braided monoidal functor that is also compatible with the twist, i.e.,
such that $F(\theta_A)=\theta_{FA}$ for all $A$.
\paragraph{Graphical language.}
The graphical language of balanced monoidal categories is similar to
that of braided monoidal categories, except that morphisms are
represented by flat ribbons, rather than 1-dimensional wires. A
ribbon can be thought of as a pair of parallel wires that are
infinitesimally close to each other, or as a wire that is equipped
with a {\em framing} {\cite{JS91}}. For example, the braiding looks
like this:
\[ \sym_{A,B} =
\mnew{\resizebox{2cm}{!}{\includegraphics{braid-balanced}}}.
\]
The twist map $\theta_A$ is represented as a 360-degree twist in a
ribbon, or in several ribbons together, if $A$ is a composite object
term. This is most easily seen in the following illustration.
\[ \theta_A =
\resizebox{3cm}{!}{\includegraphics{twistA}}~,
\sep
\theta_{A\x B} =
\mnew{\resizebox{4.5cm}{!}{\includegraphics{twistAB}}}.
\]
The meaning of (\ref{eqn-balanced}) should then be obvious.
\begin{theorem}[Coherence for balanced monoidal categories {\cite[Thm.~4.5]{JS91}}]
A well-formed equation between morphisms in the language of balanced
monoidal categories follows from the axioms of balanced monoidal
categories if and only if it holds in the graphical language up to
framed isotopy in 3 dimensions.
\end{theorem}
\subsection{Symmetric monoidal categories}\label{subsec-symmetric-monoidal}
\begin{definition}
A {\em symmetric monoidal category} is a braided monoidal category
where the braiding is self-inverse, i.e.:
\[ \sym_{A,B} = \symi_{B,A}
\]
In this case, the braiding is called a {\em symmetry}.
\end{definition}
\begin{remark}\label{rem-symmetric-from-balanced}
Because of equation (\ref{eqn-balanced}), a symmetric monoidal
category can be equivalently defined as a balanced monoidal category
in which $\theta_A=\id_A$ for all $A$.
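Indeed, setting $\theta_A=\id_A$ for all $A$ in equation
(\ref{eqn-balanced}) yields
\[ \sym_{B,A}\cp\sym_{A,B} = \id_{A\x B},
\]
which is precisely the condition $\sym_{A,B}=\symi_{B,A}$;
conversely, on a symmetric monoidal category, $\theta_A=\id_A$
trivially satisfies (\ref{eqn-balanced}).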
\end{remark}
\begin{remark}\label{rem-twisted-symmetric}
The previous remark notwithstanding, there exist symmetric monoidal
categories that possess a non-trivial twist (in addition to the
trivial twist $\theta_A=\id_A$). Thus, in a balanced monoidal
category, the symmetry condition $\sym_{A,B} = \symi_{B,A}$ does not
in general imply $\theta_A=\id_A$. In other words, a balanced
monoidal category that is symmetric as a braided monoidal category
is not necessarily symmetric as a balanced monoidal category. An
example is the category of finite dimensional vector spaces and
linear bijections, with $\theta_A(x)=nx$, where $n=\dim(A)$.
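To see that this is indeed a twist (a brief verification): naturality
holds because linear bijections preserve dimension,
$\theta_I=\dim(I)\cdot\id_I=\id_I$, and equation (\ref{eqn-balanced})
follows from the multiplicativity of dimension:
\[ \sym_{B,A}\cp(\theta_B\x\theta_A)\cp\sym_{A,B}
= \dim(A)\dim(B)\cdot\id_{A\x B}
= \dim(A\x B)\cdot\id_{A\x B}
= \theta_{A\x B}.
\]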
\end{remark}
\begin{examples}
On the monoidal category $(\Set,\times)$ of sets with cartesian
product, a symmetry is given by $\sym(x,y)=(y,x)$. On the category
$(\Vect,\x)$ of vector spaces with tensor product, a symmetry is
given by $\sym(x\x y)=y\x x$.
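In $(\Set,\times)$, for instance, the Yang-Baxter equation can be
verified directly: both sides map $(a,b,c)$ to $(c,b,a)$, namely
\[ (a,b,c)\mapsto(b,a,c)\mapsto(b,c,a)\mapsto(c,b,a)
\sep\mbox{and}\sep
(a,b,c)\mapsto(a,c,b)\mapsto(c,a,b)\mapsto(c,b,a).
\]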
\end{examples}
\paragraph{Graphical language.}
The symmetry is graphically represented by a crossing:
\begin{center}
\begin{tabular}{@{}llc@{}}
Symmetry & $\sym_{A,B}$ &
$\vcenter{\wirechart{@C=1.5cm@R=0.8cm}{
*{}\wireright{r}{B}&\blank\wirecross{d}\wireright{r}{A}&\\
*{}\wireright{r}{A}&\blank\wirecross{u}\wireright{r}{B}& }}$ \\
\end{tabular}
\end{center}
\begin{theorem}[Coherence for symmetric monoidal categories {\cite[Thm.~2.3]{JS91}}]
\label{thm-coherence-symmetric-monoidal}
A well-formed equation between morphisms in the language of
symmetric monoidal categories follows from the axioms of symmetric
monoidal categories if and only if it holds, up to isomorphism of
diagrams, in the graphical language.
\end{theorem}
Note that the graphical language for symmetric monoidal categories is
up to isomorphism of diagrams, without any reference to 2- or
3-dimensional structure. However, isomorphism of diagrams is
equivalent to ambient isotopy in 4 dimensions, so we can still regard
it as a geometric notion.
\section{Autonomous categories}\label{sec-autonomous}
Autonomous categories are monoidal categories in which the objects
have {\em duals}. In terms of graphical language, this means that some
wires are allowed to run from right to left.
\subsection{(Planar) autonomous categories}\label{subsec-planar-autonomous}
\begin{definition}[\cite{JS93}]
In a (without loss of generality strict) monoidal category, an {\em
exact pairing} between two objects $A$ and $B$ is given by a pair
of morphisms $\eta:I\ii B\x A$ and $\eps:A\x B\ii I$, such that the
following two adjunction triangles commute:
\begin{equation}\label{eqn-pairing}
\vcenter{\xymatrix{
A \ar[r]^<>(.5){\id_A\x\eta} \ar[dr]_{\id_A} &
A\x B\x A \ar[d]^{\eps\x\id_A} \\
& A,
}}
\hspace{.7in}
\vcenter{\xymatrix{
B \ar[r]^<>(.5){\eta\x\id_{B}} \ar[dr]_{\id_{B}} &
B\x A\x B \ar[d]^{\id_{B}\x\eps} \\
& B.
}}
\end{equation}
In such an exact pairing, $B$ is called the {\em right dual} of $A$
and $A$ is called the {\em left dual} of $B$.
\end{definition}
\begin{remark}
The maps $\eta$ and $\eps$ determine each other uniquely, and they
are respectively called the {\em unit} and the {\em counit} of the
adjunction. Moreover, the triple $(B,\eta,\eps)$, if it exists, is
uniquely determined by $A$ up to isomorphism. The existence of duals
is therefore a property of a monoidal category, rather than an
additional structure on it. Moreover, every strong monoidal functor
automatically preserves existing duals.
\end{remark}
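For instance, in the category $(\FdVect,\x)$ of finite dimensional
vector spaces, every object has a dual: if $\s{e_i}$ is a basis of
$A$ with dual basis $\s{e^i}$ of the dual space $A^*$, then an exact
pairing is given by
\[ \eta:I\ii A^*\x A,\quad 1\mapsto\textstyle\sum_i e^i\x e_i,
\sep
\eps:A\x A^*\ii I,\quad v\x\phi\mapsto\phi(v).
\]
The first triangle of (\ref{eqn-pairing}) then amounts to the familiar
calculation $v\mapsto\sum_i v\x e^i\x e_i\mapsto\sum_i e^i(v)\,e_i=v$.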
\begin{definition}[\cite{JS86,JS88,JS93}]
A monoidal category is {\em right autonomous} if every object $A$
has a right dual, which we then denote $\rA$. It is {\em left
autonomous} if every object $A$ has a left dual, which we then
denote $\lA$. Finally, the category is {\em autonomous} if it is
both right and left autonomous.
\end{definition}
\begin{remark}[Terminology]
A [right, left, --] autonomous category is also called [right, left,
--] rigid, see e.g.~{\cite[p.~78]{SR72}}. Also, the term
``autonomous'' is sometimes used in the weaker sense of ``monoidal
closed''. Although this latter usage is no longer common, it still
lives on in the terminology ``*-autonomous category'' (Barr
{\cite{Bar79}}, see also Section~\ref{sec-star-autonomous}).
If we wish to emphasize that an autonomous category is not
necessarily symmetric or braided, we sometimes call it a {\em planar
autonomous category}.
\end{remark}
\paragraph{Graphical language.}
If $A$ is an object variable, the objects $\rA$ and $\lA$ are both
represented in the same way: by a wire labeled $A$ running from right
to left. The unit and counit are represented as half turns:
\begin{center}
\begin{tabular}{@{}ll@{}cl@{}c@{}}
Dual& $\rA, \lA$ &
$\wirechart{@C=1cm}{*{}\wireleft{rr}{A}&&}$ \\\\
Unit&
$\eta_A:I\ii \rA\x A$ &
$\vcenter{\wirechart{@C=1.5cm@R=0.5cm}{
\blank\wireopen{d}\wireright{r}{A}&\\
\blank\wireleft{r}{A}&
}}
$ &
$\eta'_A:I\ii A\x \lA$ &
$\vcenter{\wirechart{@C=1.5cm@R=0.5cm}{
\blank\wireopen{d}\wireleft{r}{A}&\\
\blank\wireright{r}{A}&
}}
$\\\\
Counit&
$\eps_A:A\x \rA\ii I$ &
$\vcenter{\wirechart{@C=1.5cm@R=0.5cm}{
\wireleft{r}{A}&\blank\wireclose{d}\\
\wireright{r}{A}&\blank
}}
$ &
$\eps'_A:\lA\x A\ii I$ &
$\vcenter{\wirechart{@C=1.5cm@R=0.5cm}{
\wireright{r}{A}&\blank\wireclose{d}\\
\wireleft{r}{A}&\blank
}}
$\\\\
\end{tabular}
\end{center}
More generally, if $A$ is a composite object represented by a number
of wires, then $\rA$ and $\lA$ are represented by the same set of
wires running backward (rotated by 180 degrees), and the units and
counits are represented as multiple wires turning.
\begin{example}
The two diagrams in (\ref{eqn-pairing}), where $B=\rA$, translate
into the graphical language as follows:
\[
\vcenter{\wirechart{@C=1.0cm@R=0.5cm}{
\blank\wireopen{d}\wireright{r}{A}&\\
\blank\wireleft{r}{A}&\blank\\
\wireright{r}{A}&\blank\wireclose{u}
}}
= \ssep
{\wirechart{@C=1.0cm@R=0.5cm}{
\wireid\wireright{r}{A}&\wireid
}},
\hspace{.5cm}
\vcenter{\wirechart{@C=1.0cm@R=0.5cm}{
\wireleft{r}{A}&\blank\wireclose{d}\\
\blank\wireright{r}{A}&\blank\\
\blank\wireopen{u}\wireleft{r}{A}&
}}
= \ssep
{\wirechart{@C=1.0cm@R=0.5cm}{
\wireid\wireleft{r}{A}&\wireid
}}.
\]
\end{example}
\begin{example}
For any morphism $f:A\ii B$, it is possible to define morphisms
$\rd{f}:\rd{B}\ii\rd{A}$ and $\ld{f}:\ld{B}\ii\ld{A}$, called the
{\em adjoint mates} of $f$, as follows:
\[ \rd{f} \ssep =
\vcenter{\wirechart{@C=0.5cm@R=0.5cm}{
\vsblank\wireleft{r}{B}&\wireid\wire{r}{}&\blank\wireclose{d}\\
\blank\wireright{r}{A}&\blank\ulbox{[]}{f}\wireright{r}{B}&\blank\\
\blank\wireopen{u}\wire{r}{}&\wireid\wireleft{r}{A}&\vsblank
}}
\sep
\ld{f} \ssep =
\vcenter{\wirechart{@C=0.5cm@R=0.5cm}{
\blank\wireopen{d}\wire{r}{}&\wireid\wireleft{r}{A}&\vsblank\\
\blank\wireright{r}{A}&\blank\ulbox{[]}{f}\wireright{r}{B}&\blank\\
\vsblank\wireleft{r}{B}&\wireid\wire{r}{}&\blank\wireclose{u}
}}
\]
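Translated into algebraic notation (writing, without loss of
generality, as if the monoidal structure were strict), these two
diagrams read
\[ \rd{f} = (\id_{\rd{A}}\x\eps_B)\cp(\id_{\rd{A}}\x f\x\id_{\rd{B}})\cp(\eta_A\x\id_{\rd{B}}),
\sep
\ld{f} = (\eps'_B\x\id_{\ld{A}})\cp(\id_{\ld{B}}\x f\x\id_{\ld{A}})\cp(\id_{\ld{B}}\x\eta'_A).
\]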
With these definitions, $\rd{(-)}$ and $\ld{(-)}$ become
contravariant functors.
\end{example}
\begin{theorem}[Coherence for planar autonomous categories {\cite[Thm.~2.7]{JS88}}]
\label{thm-coherence-planar-autonomous}
A well-formed equation between morphisms in the language of
autonomous categories follows from the axioms of autonomous
categories if and only if it holds in the graphical language up to
planar isotopy.
\end{theorem}
Here, the notion of planar isotopy is the same as before, except that
the wires are of course no longer restricted to being oriented
left-to-right during the deformation. However, the ability to turn
wires upside down does not extend to boxes: the notion of isotopy for
this theorem does not include the ability to rotate boxes. See
Joyal and Street {\cite{JS88}} for a more precise statement.
\begin{caveat}
The proof by Joyal and Street {\cite{JS88}} assumes that the
diagrams are piecewise linear.
\end{caveat}
Note that the same theorem applies to left autonomous, right
autonomous, or autonomous categories. Indeed, each individual term in
the language of autonomous categories involves only finitely many
duals, and thus may be translated into a term of (say) left
autonomous categories by replacing each object variable $A$ by
$A^{***\ldots*}$, for a sufficiently large, even number of $*$'s. The
resulting term maps to the same diagram.
The same coherence theorem also holds for categories that are only
right (or left) autonomous. This is a consequence of the following
proposition.
\begin{proposition}\label{prop-right-autonomous}
Each right (or left) autonomous category can be fully embedded in an
autonomous category.
\end{proposition}
\begin{proof}
Let $\Cc$ be a right autonomous category, and consider the strong
monoidal functor $F:\Cc\ii\Cc$ given by $F(A)=A^{**}$. This functor
is full and faithful, and every object in the image of $F$ has a
left dual. Now let $\hat\Cc$ be the colimit (in the large category
of right autonomous categories and strong monoidal functors) of the
sequence
\[ \Cc\catarrow{F}\Cc\catarrow{F}\Cc\catarrow{F}\ldots
\]
Then $\hat\Cc$ is autonomous, and $\Cc$ is fully and faithfully
embedded in $\hat\Cc$. The proof for left autonomous categories is
analogous. \eot
\end{proof}
\begin{corollary}[Coherence for right (left) autonomous categories]
A well-formed equation between morphisms in the language of right
(left) autonomous categories follows from the axioms of right (left)
autonomous categories if and only if it holds in the graphical
language up to planar isotopy.
\end{corollary}
\begin{proof}
It suffices to show that an equation (in the language of right
autonomous categories) holds in all right autonomous categories if
and only if it holds in all autonomous categories. The ``only if''
direction is trivial, since every autonomous category is right
autonomous. For the opposite direction, suppose some equation holds
in all autonomous categories, and let $\Cc$ be a right autonomous
category. Then $\Cc$ can be faithfully embedded in an autonomous
category $\hat\Cc$. By assumption, the equation holds in $\hat\Cc$,
and therefore also in $\Cc$, since the embedding is faithful.\eot
\end{proof}
\subsection*{Technicalities}
\paragraph{Autonomous signatures.}
The diagrams of autonomous categories, and the concept of well-formed
equation in the coherence theorem, are defined relative to the notion
of an autonomous signature. These were called {\em autonomous tensor
schemes} by Joyal and Street {\cite{JS88}}. We give a non-strict
version of the definition.
\begin{definition}{\cite[Def.~2.5]{JS88}}
Given a set $\Sigma_0$ of {\em object variables}, let
$\AutTerm(\Sigma_0)$ denote the free
$(\x,I,\ld{(-)},\rd{(-)})$-algebra generated by $\Sigma_0$, i.e.,
the set of {\em object terms} built from object variables and $I$
via the operations $\x$, $\ld{(-)}$, and $\rd{(-)}$. For example,
if $A,B\in\Sigma_0$, then the term $\rd{B}\x\rd{({}^{**}I\x A)}$
is an element of $\AutTerm(\Sigma_0)$.
An {\em autonomous signature} consists of a set $\Sigma_0$ of object
variables, a set $\Sigma_1$ of {\em morphism variables}, and a pair
of functions $\dom,\cod:\Sigma_1\ii\AutTerm(\Sigma_0)$.
\end{definition}
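For example (a small illustrative signature): $\Sigma_0=\s{A,B}$ and
$\Sigma_1=\s{f,g}$ with $\dom(f)=A\x\rd{B}$, $\cod(f)=I$,
$\dom(g)=I$, and $\cod(g)=\ld{A}\x B$ is an autonomous signature.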
The concepts of a {\em right autonomous signature} and a {\em left
autonomous signature} are defined analogously. The remaining
graphical languages in this Section~\ref{sec-autonomous} are all given
relative to an autonomous signature.
\paragraph{Functors and natural transformations of autonomous categories.}
Any strong monoidal functor preserves exact pairings: if $\eta:I\ii
B\x A$ and $\eps:A\x B\ii I$ define an exact pairing, then so do
\[ \hat F\eta:
I \catarrow{\phi^0}
FI \catarrow{F\eta}
F(B\x A) \catarrow{(\phi^2)\inv}
FB\x FA
\]
and
\[ \hat F\eps:
FA\x FB \catarrow{\phi^2}
F(A\x B) \catarrow{F\eps}
FI \catarrow{(\phi^0)\inv}
I.
\]
In particular, if $\Cc$ and $\Dd$ are autonomous categories and
$F:\Cc\ii\Dd$ is a monoidal functor, by uniqueness of duals, there
will be a unique induced natural isomorphism $F(\rd{A})\iso \rd{(FA)}$
such that
\[ \mnew{\xymatrix{
I\ar[dr]_<>(.5){\eta_{FA}}\ar[r]^<>(.5){\hat F\eta_A} &
F(\rd{A})\x FA\ar[d]^{{\iso}\x\id} \\
& \rd{(FA)}\x FA
}}
\sep\mbox{and}\sep
\mnew{\xymatrix{
FA\x F(\rd{A}) \ar[d]^{\id\x{\iso}} \ar[r]^<>(.5){\hat F\eps_A} &
I, \\
FA\x \rd{(FA)} \ar[ur]_<>(.5){\eps_{FA}}
}}
\]
and similarly for $F(\ld{A})\iso\ld{(FA)}$.
For natural transformations, we have the following lemma:
\begin{lemma}[Saavedra Rivano {\cite[Prop.~5.2.3]{SR72}}, see also {\cite[Prop.~7.1]{JS93}}]
\label{lem-saavedra-rivano}
Suppose $\natt:F\ii G$ is a monoidal natural transformation between
strong monoidal functors $F,G:\Cc\ii\Dd$. If $A$ has a right dual
$A^*$ in $\Cc$, then $\natt_{A^*}$ and $(\natt_A)^*$ are mutually
inverse in $\Dd$ (up to the above canonical isomorphism), or more
precisely:
\[ \xymatrix{
F(A^*)\ar[r]^{\natt_{A^*}} \ar[d]_{\iso} &
G(A^*) \ar[d]^{\iso} \\
(FA)^* &
(GA)^*\ar[l]^{(\natt_A)^*} &
}
\]
In particular, if $\Cc$ is autonomous, then any such monoidal
natural transformation is invertible.
\end{lemma}
\paragraph{Coherence and free autonomous categories.}
The graphical language, as we have defined it above for autonomous
categories, is sufficient for the purposes of
Theorem~\ref{thm-coherence-planar-autonomous}. However, it does not
characterize the free autonomous category over an autonomous signature
as stated. For example, consider a signature with a single morphism
variable $f:A\ii A$. The problem is that there are clearly some
diagrams, such as
\begin{equation}\label{eqn-not-autonomous}
\vcenter{\wirechart{@R=.5cm}{
\blank\wireopen{d}\wireleft{rr}{} && \blank\wireclose{d} \\
\blank\wireright{r}{A} & \blank\ulbox{[]}{f}\wireright{r}{A} & \blank,
}
}
\end{equation}
which are not translations of any well-formed term of autonomous
categories. Indeed, for this diagram to correspond to a well-formed
term, we would have to have e.g.~$f:A^{**}\ii A$ or $f:A\ii {}^{**}A$.
Joyal and Street {\cite{JS88}} characterize the free autonomous
category by equipping each edge with a winding number. Effectively,
the horizontal segments of edges are labeled with pairs $(A,n)$, where
$A$ is an object variable and $n$ is an integer winding number.
Left-to-right segments have even winding numbers, right-to-left
segments have odd winding numbers; winding numbers increase by one
on counterclockwise turns and decrease by one on clockwise turns. The
winding numbers on the input and output of each box, and on the global
inputs and outputs, are restricted to be consistent with the domain
and codomain information, where e.g.~$A^{**}$ corresponds to $(A,2)$,
and ${}^{***}B$ to $(B,-3)$. See {\cite{JS88}} for precise details.
Here is an example of a well-formed diagram of type $I\ii B^{**}\x A$,
where $g:I\ii A\x B$:
\[ \m{\resizebox{1.0in}{!}{\includegraphics{winding-numbers}}}
\]
\begin{theorem}\label{thm-free-autonomous}
The graphical language (with winding numbers) of autonomous
categories over an autonomous signature $\Sigma$, up to planar
isotopy of diagrams, forms a free autonomous category over $\Sigma$.
\end{theorem}
We remark that if a diagram of planar autonomous categories can be
labeled with winding numbers, then this labeling is necessarily
unique. In particular, for the purposes of
Theorem~\ref{thm-coherence-planar-autonomous}, there is no harm in
dropping the winding numbers, because by hypothesis, the theorem only
considers diagrams that are the translation of well-formed terms,
whose winding numbers can therefore be uniquely reconstructed.
\subsection{(Planar) pivotal categories}
A pivotal category is an autonomous category with a suitable
isomorphism $A\iso A^{**}$.
\begin{definition}[\cite{FY89,FY92,JSXX}]
A {\em pivotal category} is a right autonomous category equipped
with a monoidal natural isomorphism $i_A:A\ii A^{**}$.
\end{definition}
Note that any pivotal category is immediately left autonomous,
therefore autonomous. The requirement that $i_A$ is a {\em monoidal}
natural transformation here means that $i_I$ is the canonical
isomorphism $I\iso I^{**}$, and that the following diagram commutes,
where the horizontal arrow is the canonical isomorphism derived from
the autonomous structure:
\begin{equation}\label{eqn-pivotal-monoidal}
\mnew{\xymatrix@C=0cm{
&A\x B\ar[dl]_{i_A\x i_B}\ar[dr]^{i_{A\x B}}\\
A^{**}\x B^{**}\ar[rr]^{\iso}&&(A\x B)^{**}.
}}
\end{equation}
The following property, which is sometimes taken as part of the
definition of pivotal categories {\cite[Def.~3.1.1]{JSXX}}, is a
direct consequence of Saavedra Rivano's Lemma
(Lemma~\ref{lem-saavedra-rivano}).
\begin{lemma}
In any pivotal category, the following diagram commutes:
\[ \xymatrix{
A^* \ar[r]^{i_{A^*}} \ar[dr]_{\id_{A^*}}&
A^{***} \ar[d]^{i_A^*} \\
& A^*.
}
\]
\end{lemma}
\begin{remark}
One can equivalently define a pivotal category as an autonomous
category equipped with a monoidal natural isomorphism (of
contravariant monoidal functors) $\phi:\rd{A}\catarrow{\iso}\ld{A}$.
This was done by Freyd and Yetter {\cite{FY92}}. Condition (S) of
{\cite[Def.~4.1]{FY92}} is also a consequence of Saavedra Rivano's
Lemma, and is therefore redundant.
\end{remark}
\begin{remark}[Terminology]
Freyd and Yetter {\cite{FY92}} also introduced the term {\em
sovereign category} for a pivotal category.
\end{remark}
A pivotal functor between pivotal categories is a monoidal functor
that also satisfies
\[ \xymatrix{
FA \ar[r]^{F(i_A)}\ar[rd]_{i_{FA}} &
F(A^{**})\ar[d]^{\iso} \\
& (FA)^{**}.
}
\]
\paragraph{Graphical language.}
The graphical language for pivotal categories is the same as that for
autonomous categories, where the isomorphism $i_A:A\ii A^{**}$ is
represented like an identity map. Of course, there are now additional
diagrams that are the translation of well-formed terms. For example,
when $f:A\ii A$, then (\ref{eqn-not-autonomous}) is a well-formed
diagram of pivotal categories, but not of autonomous categories.
Indeed, in the case of pivotal categories, the problem of winding
numbers (discussed before Theorem~\ref{thm-free-autonomous})
disappears, as winding numbers are taken modulo 2, and hence add
nothing beyond orientation.
\begin{theorem}[Coherence for pivotal categories]
\label{thm-coherence-pivotal}
A well-formed equation between morphisms in the language of pivotal
categories follows from the axioms of pivotal categories if and only
if it holds in the graphical language up to planar isotopy,
including rotation of boxes.
\end{theorem}
\begin{caveat}
Only special cases of this theorem have been proved in the
literature. Freyd and Yetter {\cite[Thm.~4.4]{FY92}} considered the
case of the free pivotal category generated by a category. In our
terminology, this means that they only considered diagrams for
pivotal categories over {\em simple signatures}, rather than
over {\em autonomous signatures}. In other words, they only
considered boxes of the form
\[ \wmetamorph{f}{A}{B},
\]
with exactly one input and one output. Joyal and Street's draft report
{\cite{JSXX}} claims the general result but contains no proof.
\end{caveat}
The notion of planar isotopy for pivotal categories includes the
ability to rotate boxes in the plane of the diagram. For example, the
following two diagrams are isotopic in this sense:
\begin{equation}\label{eqn-rotation}
\vcenter{\wirechart{}{
\blank\wireopen{ddd}\wireleft{rr}{} &&
\blank\wireclose{d}
\\
& \blank\ulbox{[].[d]}{f}\wireright{r}{} & \blank\\
& \blank \wireright{r}{} & *{} \\
\blank \wireright{rr}{} && *{}
}
}
=
\vcenter{\wirechart{}{
\blank\wireopen{ddd}\wireright{rr}{} &&
*{}
\\
& \blank\ulbox{[].[d]}{f}\wireright{r}{} & *{}\\
& \blank \wireright{r}{} & \blank\wireclose{d} \\
\blank \wireleft{rr}{} && \blank
}
}
\end{equation}
This also explains why we have marked a corner of each box. With the
ability to rotate boxes, we need to keep track of their ``natural''
orientation, so that the diagrams from (\ref{eqn-rotation}) can also
be represented like this:
\[ \vcenter{\wirechart{}{
\blank\wireopen{d}\wireright{rr}{} && *{}
\\
\blank\wireleft{r}{}& \blank\drbox{[].[d]}{f}\\
\blank\wireleft{r}{}& \blank\\
\blank\wireopen{u}\wireright{rr}{} && *{}
}
}
\]
More generally, the adjoint mate of $f:A\ii B$ can be represented by a
rotated box:
\begin{equation}\label{eqn-adjoint-mate}
\rd{f} \ssep =
\vcenter{\wirechart{@C=0.5cm@R=0.5cm}{
\vsblank\wireleft{r}{B}&\wireid\wire{r}{}&\blank\wireclose{d}\\
\blank\wireright{r}{A}&\blank\ulbox{[]}{f}\wireright{r}{B}&\blank\\
\blank\wireopen{u}\wire{r}{}&\wireid\wireleft{r}{A}&\vsblank
}}
\ssep=\ssep
\wirechart{@C=1.4cm@R=0.5cm}{
\wireleft{r}{B}&\blank\drbox{[]}{f}\wireleft{r}{A}&
}
\end{equation}
Also note that if $f$ is a composite diagram, then the whole diagram
may be rotated to obtain $f^*$.
\subsection{Spherical pivotal categories}\label{subsec-spherical-pivotal}
\begin{definition}[Barrett and Westbury \cite{BW99}]
A pivotal category is {\em spherical} if for all objects $A$ and
morphisms $f:A\ii A$,
\begin{equation}\label{eqn-spherical}
\vcenter{\wirechart{@R=.5cm}{
\blank\wireopen{d}\wireleft{rr}{} && \blank\wireclose{d} \\
\blank\wireright{r}{A} & \blank\ulbox{[]}{f}\wireright{r}{A} & \blank
}
}
= \vcenter{\wirechart{@R=.5cm}{
\blank\wireopen{d}\wireright{r}{A} &\blank\ulbox{[]}{f}\wireright{r}{A}& \blank\wireclose{d} \\
\blank\wireleft{rr}{} & & \blank \\
}
}
\end{equation}
\end{definition}
The intuition behind the ``spherical'' axiom is that diagrams should
be embedded in a 2-sphere, rather than the plane. It is then obvious
that the left-hand side of (\ref{eqn-spherical}) can be continuously
transformed into the right-hand side, namely by moving the loop across
the back of the 2-sphere.
\paragraph{Failure of coherence.}
The spherical axiom is not sound for the graphical language of
diagrams embedded in the 2-sphere. The problem is that the notion of
``diagram embedded in the 2-sphere'' is not compatible with
composition or tensor. The following is a consequence of the
spherical axiom, but does not hold up to isotopy in the 2-sphere.
\[
\vcenter{\wirechart{@C=.5cm@R=.8cm}{
\blank\wireopen{dd}\wire{rr}{}&&
\blank\wireclose{dd}
\\
&\blank\ulbox{[]}{g}\\
\blank\wireright{r}{A}&
\blank\ulbox{[]}{f}\wireright{r}{A}&
\blank
\\}}
=
\vcenter{\wirechart{@C=.5cm@R=.8cm}{
&\blank\ulbox{[]}{g}\\
\blank\wireright{r}{A}&
\blank\ulbox{[]}{f}\wireright{r}{A}&
\blank
\\
\blank\wireopen{u}\wire{rr}{}&&
\blank\wireclose{u}
\\}}
=
\vcenter{\wirechart{@C=.5cm@R=.8cm}{
&\blank\ulbox{[]}{g}\\
\blank\wireopen{d}\wire{rr}{}&&
\blank\wireclose{d}
\\
\blank\wireright{r}{A}&
\blank\ulbox{[]}{f}\wireright{r}{A}&
\blank
\\}}
\]
Note that this counterexample is similar to the spacial axiom
(\ref{eqn-spacial}), but does not quite imply it. If one adds the
spacial axiom, as we are about to do, then any notion of isotopy is
lost and equivalence of diagrams collapses to isomorphism.
\subsection{Spacial pivotal categories}
\begin{definition}
A pivotal category is {\em spacial} if it satisfies the spacial
axiom (\ref{eqn-spacial}) and the spherical axiom
(\ref{eqn-spherical}).
\end{definition}
\paragraph{Graphical language and coherence.}
The graphical language for spacial pivotal categories is the same as
that for planar pivotal categories, except that equivalence of
diagrams is now taken up to isomorphism. Clearly, the axioms are sound
for the graphical language. We conjecture that they are also complete.
\begin{conjecture}[Coherence for spacial pivotal categories]
\label{conj-coherence-spacial-pivotal}
A well-formed equation between morphisms in the language of spacial
pivotal categories follows from the axioms of spacial pivotal
categories if and only if it holds in the graphical language up to
isomorphism.
\end{conjecture}
\subsection{Braided autonomous categories}\label{subsec-braided-autonomous}
A braided autonomous category is an autonomous category that is also
braided (as a monoidal category). The notion of braided autonomous
categories is not extremely natural, as the graphical language is only
sound for a restricted form of isotopy called {\em regular isotopy}.
Nevertheless, it is useful to collect some facts about braided
autonomous categories.
\begin{lemma}[{\cite[Prop.~7.2]{JS93}}]\label{lem-braided-right-autonomous}
A braided monoidal category is autonomous if and only if it is right
autonomous.
\end{lemma}
\begin{proof}
If $\eta:I\ii B\x A$ and $\eps:A\x B\ii I$ form an exact pairing,
then so do $\symi_{A,B}\cp\eta:I\ii A\x B$ and
$\eps\cp\sym_{B,A}:B\x A\ii I$. Therefore any right dual of $A$ is
also a left dual of $A$. \eot
\end{proof}
In any braided autonomous category $\Cc$, we can define a natural
isomorphism $\lop_A:A^{**}\ii A$. This follows from the proof of
Lemma~\ref{lem-braided-right-autonomous}, using the fact that both $A$
and $A^{**}$ are right duals of $A^*$. More concretely, $\lop_A$
and its inverse are defined by:
\[ \begin{array}{lll}
\lop_A &=& A^{**}\catarrow{\eta_A\x\id}A^*\x A\x A^{**}
\catarrow{\id\x\sym_{A,A^{**}}}A^*\x A^{**}\x A\catarrow{\eps_{A^*}\x\id}
A, \\
\lopi_A &=& A\catarrow{\id\x\eta_{A^*}}A\x A^{**}\x A^*
\catarrow{\symi_{A^{**},A}\x\id}A^{**}\x A\x A^*\catarrow{\id\x\eps_A}
A^{**}.
\end{array}
\]
Here we have written, without loss of generality, as if $\Cc$ were
strict monoidal. Graphically, $\lop_A$ and its inverse look like this:
\[ \lop_A =
\vcenter{\wirechart{@C-4ex}{
*{}\wireright{rr}{A^{**}}&&
\blank\wirecross{d}\wireright{rr}{A}&&
*{}
\\&
\blank\wire{r}{}&
\blank\wirebraid{u}{.3}\wire{r}{}&
\blank
\\&
\blank\wireopen{u}\wire{rr}{A^*}&&
\blank\wireclose{u}
}}
\sep
\lopi_A =
\vcenter{\wirechart{@C-4ex}{
&\blank\wireopen{d}\wire{rr}{A^*}&&
\blank\wireclose{d}
\\
&\blank\wire{r}{}&
\blank\wirebraid{d}{.3}\wire{r}{}&
\blank
\\
*{}\wireright{rr}{A}&&
\blank\wirecross{u}\wireright{rr}{A^{**}}&&
*{}
}}
\]
We must note that although $\lop_A$ is a natural isomorphism, it is
not canonical. In general, there exist infinitely many natural
isomorphisms $A\iso A^{**}$. Also, $\lop$ is not a {\em monoidal}
natural transformation, and therefore does not define a pivotal
structure on $\Cc$. A general braided autonomous category is not
pivotal.
\paragraph{Graphical language and coherence.}
The graphical language of braided autonomous categories is obtained
simply by adding braids to the graphical language of autonomous
categories. However, the correct notion of equivalence of diagrams is
neither planar isotopy (like for autonomous categories), nor
3-dimensional isotopy (like for braided monoidal categories), but
an in-between notion called {\em regular isotopy} {\cite{Kau86}}.
\begin{table}
\[
\begin{array}{lc}
\m{(R1)}& \m{\resizebox{1.7in}{!}{\includegraphics{reidemeister1}}}\\[3ex]
\m{(R2)}& \m{\resizebox{1.0in}{!}{\includegraphics{reidemeister2}}}\\[3ex]
\m{(R3)}& \m{\resizebox{1.0in}{!}{\includegraphics{reidemeister3}}}\\
\end{array}
\sep
\begin{array}{lc}
\m{($\Lambda$1)}& \m{\resizebox{1.7in}{!}{\includegraphics{reidemeister4}}}\\[4.5ex]
\m{($\Lambda$2)}& \m{\resizebox{1.7in}{!}{\includegraphics{reidemeister5}}}\\
\end{array}
\]
\caption{Reidemeister moves and $\Lambda$-moves}
\label{tab-reidemeister}
\end{table}
It is well-known that 3-dimensional isotopy of links and tangles is
equivalent to planar isotopy of their (non-degenerate) projections
onto a 2-dimensional plane, plus the three {\em Reidemeister moves}
{\cite{Rei32}} shown as (R1)--(R3) in Table~\ref{tab-reidemeister}.
To extend this to diagrams with nodes, one also has to add the moves
($\Lambda$1) and ($\Lambda$2).
{\em Regular isotopy} is defined to be the equivalence obtained by
dropping Reidemeister move (R1). Note that regular isotopy is an
equivalence on 2-dimensional representations of 3-dimensional diagrams
(and not of 3-dimensional diagrams themselves).
\begin{theorem}[Coherence for braided autonomous categories]
A well-formed equation between morphisms in the language of braided
autonomous categories follows from the axioms of braided autonomous
categories if and only if it holds in the graphical language up to
regular isotopy.
\end{theorem}
\begin{caveat}
Only special cases of this theorem have been proved in the
literature. Freyd and Yetter {\cite[Thm.~3.8]{FY92}} proved this
only for diagrams over a simple signature.
\end{caveat}
\subsection{Braided pivotal categories}\label{subsec-braided-pivotal}
\begin{lemma}[Deligne, see {\cite[Prop.~2.11]{Yet92}}]\label{lem-braided-pivotal}
Let $\Cc$ be a braided autonomous category. Then giving a twist
$\theta_A:A\ii A$ on $\Cc$ (making $\Cc$ into a balanced category)
is equivalent to giving a pivotal structure $i_A:A\ii A^{**}$
(making $\Cc$ into a pivotal category).
\end{lemma}
The lemma is remarkable because the concept of a braided autonomous
category does not include any assumption relating the braided
structure to the autonomous structure. Moreover, the axioms for a
twist depend only on the braided structure, whereas the axioms for a
pivotal structure depend only on the autonomous structure. Yet, they
are equivalent if $\Cc$ is braided autonomous.
\begin{proofof}{Lemma~\ref{lem-braided-pivotal}}
Recall the natural isomorphism $\lop_A:A^{**}\ii A$ that was defined
in Section~\ref{subsec-braided-autonomous} for any braided
autonomous category. Given a twist $\theta_A:A\ii A$, we define a
pivotal structure by
\begin{equation}\label{eqn-i-from-theta}
i_A = A\catarrow{\theta_A}A\catarrow{\lopi_A}A^{**}.
\end{equation}
Conversely, given a pivotal structure $i_A:A\ii A^{**}$, we define
a twist by
\begin{equation}\label{eqn-theta-from-i}
\theta_A = A\catarrow{i_A}A^{**}\catarrow{\lop_A}A.
\end{equation}
The two constructions are clearly each other's inverse. To verify
their properties, it is obvious that $i_A$ is a natural isomorphism
if and only if $\theta_A$ is a natural isomorphism. Moreover,
$\theta_I=\id$ iff $i_I=\lopi_I$, and $\lopi_I$ is the canonical
isomorphism $I\iso I^{**}$. What remains to be shown is that
$\theta$ satisfies equation (\ref{eqn-balanced}) if and only if $i$
satisfies equation (\ref{eqn-pivotal-monoidal}). However, this is a
direct consequence of the following fact about $\lop$, which is
easily verified:
\[ \xymatrix@R-5mm{
A^{**}\x B^{**} \ar[d]_{\iso}\ar[r]^{\sym_{A^{**},B^{**}}} &
B^{**}\x A^{**} \ar[dd]^{\lop_B\x\lop_A} \\
(A\x B)^{**}\ar[d]_{\lop_{A\x B}}
\\
A\x B & B\x A. \ar[l]^{\sym_{B,A}}
}
\]
\eottwo
\end{proofof}
\begin{corollary}
A braided pivotal category is the same thing as a balanced
autonomous category. \eot
\end{corollary}
\begin{remark}\label{rem-braided-pivotal-other-theta}
While Lemma~\ref{lem-braided-pivotal} establishes a one-to-one
correspondence between twists and pivotal structures, the
correspondence is not canonical. Indeed, instead of
(\ref{eqn-i-from-theta}) and (\ref{eqn-theta-from-i}), we could have
equally well used
\begin{equation}\label{eqn-i-from-theta-alt}
i_A = A\catarrow{\theta\inv_A}A\catarrow{\lopalt_A}A^{**}
\end{equation}
and
\begin{equation}\label{eqn-theta-from-i-alt}
\theta_A = A\catarrow{\lopalt_A}A^{**}\catarrow{i\inv_A}A,
\end{equation}
where
\[ \lopalt_A = \m{\resizebox{.4in}{!}{\includegraphics{lopalt}}}.
\]
In fact, there are countably many such one-to-one
correspondences, all induced by the existence of a monoidal natural
transformation $\lopalt_A{}\inv\cp i_A\cp \lop_A\cp i_A:A\ii A$.
They all coincide if and only if the category is tortile, as
discussed in the next section.
\end{remark}
\paragraph{Graphical language and coherence.}
The graphical language for braided pivotal categories is the same as
the graphical language for pivotal categories, with the addition of
braids. Equivalence of diagrams is up to regular isotopy, just as for
braided autonomous categories (see
Section~\ref{subsec-braided-autonomous}).
\begin{theorem}[Coherence for braided pivotal categories]
\label{thm-coherence-braided-pivotal}
A well-formed equation between morphisms in the language of braided
pivotal categories follows from the axioms of braided pivotal
categories if and only if it holds in the graphical language up to
regular isotopy.
\end{theorem}
\begin{caveat}\label{cav-coherence-braided-pivotal}
Only special cases of this theorem have been proved in the
literature. Freyd and Yetter {\cite[Thm.~4.4]{FY92}} proved this
only for diagrams over a simple signature.
\end{caveat}
\begin{remark}
The equation
\[ \resizebox{1.8in}{!}{\includegraphics{braided-pivotal}}
\]
holds up to regular isotopy, as it can be proved using only the
Reidemeister moves (R2) and (R3). It is therefore valid in braided
pivotal categories (or even braided autonomous categories). On the
other hand, the equation
\[ \resizebox{1.8in}{!}{\includegraphics{tortile}}
\]
holds up to isotopy, but not up to regular isotopy (because regular
isotopy preserves total curvature, as pointed out by Freyd and Yetter
{\cite[p.~169]{FY89}}). It is therefore not valid in braided pivotal
categories. The use of regular isotopy does not seem natural, and
this is precisely the reason why Joyal and Street introduced tortile
categories, which we discuss in the next section.
\end{remark}
\begin{remark}\label{braided-not-spherical}
A braided pivotal category is not in general spherical (and
therefore also not spacial). Indeed, instead of the spherical axiom
(\ref{eqn-spherical}), only the following holds up to regular
isotopy:
\[ \resizebox{2.4in}{!}{\includegraphics{braided-not-spherical}}
\]
Along with Remark~\ref{rem-braided-pivotal-other-theta}, this is
further evidence that braided pivotal categories (and braided
autonomous categories) are not ``natural'' notions.
\end{remark}
\subsection{Tortile categories}
\begin{lemma}\label{lem-tortile}
Consider a braided pivotal category, which is equivalently balanced
autonomous via (\ref{eqn-i-from-theta}) and
(\ref{eqn-theta-from-i}). For any object $A$ the following are
equivalent:
\begin{enumerate}\alphalabels
\item $
(\eps_{A^*}\x\id_A) \cp
(\id_{A^*}\x\symi_{A^{**},A}) \cp
(\eta_{A}\x\id_{A^{**}})\cp
i_A \cp
(\eps_{A^*}\x\id_A) \cp
(\id_{A^*}\x\sym_{A,A^{**}}) \cp
(\eta_{A}\x\id_{A^{**}})\cp
i_A
= \id_A,
$
or graphically:
\[ \resizebox{1.8in}{!}{\includegraphics{tortile-A}}
\]
\item $\theta_{A^*}=(\theta_A)^*$.
\end{enumerate}
\end{lemma}
\begin{proof}
The proof is a straightforward calculation, but it is best explained
by the fact that the following hold in the graphical language:
\[ \theta_A = \m{\resizebox{.4in}{!}{\includegraphics{theta1}}} \sep
(\theta_A)^* = \m{\resizebox{.4in}{!}{\includegraphics{theta2}}} \sep
\theta_{A^*} = \m{\resizebox{.4in}{!}{\includegraphics{theta3}}} \sep
(\theta_{A^*})\inv = \m{\resizebox{.4in}{!}{\includegraphics{theta4}}}.
\]
Therefore, the equation (b) is equivalent to
\[ \resizebox{1.3in}{!}{\includegraphics{tortile-A-star}},
\]
which is the adjoint mate of (a).\eot
\end{proof}
\begin{remark}
The condition in Lemma~\ref{lem-tortile}(a) holds if and only if the
two definitions of $\theta_A$ from (\ref{eqn-theta-from-i}) and
(\ref{eqn-theta-from-i-alt}) coincide.
\end{remark}
\begin{definition}[{\cite{JS93}}]
A {\em tortile category} is a braided pivotal category satisfying
the condition of Lemma~\ref{lem-tortile}(a). Equivalently, a tortile
category is a balanced autonomous category satisfying the condition
of Lemma~\ref{lem-tortile}(b).
\end{definition}
\begin{remark}[Terminology]
A tortile category is also sometimes called a {\em ribbon category},
see e.g.~{\cite{Tur94}}.
\end{remark}
\paragraph{Graphical language and coherence.}
The graphical language for tortile categories is like the graphical
language for braided pivotal categories, except that morphisms are
represented by ribbons, rather than wires. These ribbons are just like
the ones for balanced categories from Section~\ref{subsec-balanced}.
Units and counits are represented in the obvious way, for example
\[ \eta_A = \m{\resizebox{.4in}{!}{\includegraphics{tortile-unit}}},
\sep \eps_A = \m{\resizebox{.4in}{!}{\includegraphics{tortile-counit}}}.
\]
The twist map $\theta_A:A\ii A$ can be represented in several
equivalent ways:
\[ \theta_A = \m{\resizebox{2.5cm}{!}{\includegraphics{twistA}}}
= \m{\resizebox{2.3cm}{!}{\includegraphics{twist-loop-down}}}
= \m{\resizebox{2.3cm}{!}{\includegraphics{twist-loop-up}}}.
\]
Note that these diagrams are equivalent up to framed 3-dimensional
isotopy, and define the same morphism in a tortile category. (On the
other hand, in a mere braided pivotal category, the latter two
diagrams are not equal). Also note that the map $\lop_A$ from
Section~\ref{subsec-braided-autonomous} is also represented in the
graphical language as
\[ \lop_A = \m{\resizebox{2.3cm}{!}{\includegraphics{twist-loop-down}}},
\]
but this is of type $\lop_A:A^{**}\ii A$, whereas $\theta_A:A\ii A$.
They differ, of course, only by an invisible pivotal map $i_A:A\ii
A^{**}$.
\begin{theorem}[Coherence for tortile categories]
\label{thm-coherence-tortile}
A well-formed equation between morphisms in the language of tortile
categories follows from the axioms of tortile categories if and only
if it holds in the graphical language up to framed 3-dimensional
isotopy.
\end{theorem}
\begin{caveat}
\label{cav-coherence-tortile}
Only special cases of this theorem have been proved in the
literature. Shum {\cite[Thm.~6.1]{Shum94}} proved it for the case of
the free tortile category generated by a category, i.e., for
diagrams over a simple signature only.
\end{caveat}
\subsection{Compact closed categories}\label{subsec-compact-closed}
A compact closed category is a tortile category that is symmetric (as
a balanced monoidal category) in the sense of
Section~\ref{subsec-symmetric-monoidal}. Equivalently, because of
Remark~\ref{rem-symmetric-from-balanced}, a compact closed category is
a tortile category in which $\theta_A=\id_A$ for all $A$.
The definition can be simplified. Notice that a right autonomous
symmetric monoidal category is automatically autonomous (by
Lemma~\ref{lem-braided-right-autonomous}), balanced (with
$\theta_A=\id_A$) and therefore pivotal (by
Lemma~\ref{lem-braided-pivotal}). Moreover, it is tortile (because
$\theta_{A^*}=(\theta_A)^*=\id_{A^*}$). We can therefore define:
\begin{definition}
A {\em compact closed category} is a right autonomous symmetric
monoidal category.
\end{definition}
\begin{remark}
By analogy with Remark~\ref{rem-twisted-symmetric}, it is possible
for a compact closed category to possess a non-trivial twist (with
the associated non-trivial pivotal structure), in addition to the
trivial twist $\theta_A=\id_A$, making it into a tortile category.
In other words, for a given tortile category, the symmetry condition
$\sym_{A,B} = \symi_{B,A}$ does not in general imply
$\theta_A=\id_A$. However, it does imply $\theta_A^2=\id_A$, as the
following argument shows:
\[ \theta_A^2 = \m{\resizebox{1in}{!}{\includegraphics{double-twist}}}
= \m{\resizebox{.9in}{!}{\includegraphics{no-twist}}} = \id_A.
\]
To construct an example where $\theta\neq\id$, consider the category
$\Cc$ of finite-dimensional real vector spaces and linear functions.
Define an equivalence relation on objects by $A\sim B$ iff $\dim(A\x
B)$ is a square. Then define a subcategory $\Cc_{\sim}$ by
\[ \hom_{\Cc_{\sim}}(A,B) = \left\{\begin{array}{lp{1in}}
\hom_{\Cc}(A,B) & if $A\sim B$,\\
\emptyset & else.
\end{array}\right.
\]
Then $\Cc_{\sim}$ is compact closed. Let $\N^+=\s{1,2,3,\ldots}$ be
the positive integers, and consider some multiplicative homomorphism
$\phi:\N^+\ii\s{-1,1}$. Any such homomorphism is determined by a
sequence $a_1,a_2,\ldots\in\s{-1,1}$ via
\[ \phi(p_1^{n_1}p_2^{n_2}\cdots p_k^{n_k}) = a_1^{n_1}a_2^{n_2}\cdots
a_k^{n_k},
\]
where $p_i$ is the $i$th prime number. Finally, define the twist map
$\theta_A$ as multiplication by the scalar $\phi(\dim(A))$, or as
$\id_A$ if $A$ is 0-dimensional. With this twist, $\Cc_{\sim}$ is
tortile. In fact, this shows that there exists a continuum of
possible twists on $\Cc_{\sim}$.
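To check that each such $\theta$ is indeed a twist making
$\Cc_{\sim}$ tortile (a brief verification): if
$\hom_{\Cc_{\sim}}(A,B)$ is non-empty, then $\dim(A\x B)$ is a
square, so
\[ \phi(\dim(A))\phi(\dim(B))=\phi(\dim(A)\dim(B))=1,
\]
hence $\phi(\dim(A))=\phi(\dim(B))$ and $\theta$ is natural (the
cases where $A$ or $B$ is $0$-dimensional are trivial, since all maps
into or out of the zero space are zero). Moreover
$\theta_I=\phi(1)\cdot\id_I=\id_I$, equation (\ref{eqn-balanced})
holds because
$\phi(\dim(A))\phi(\dim(B))\cdot\id_{A\x B}=\phi(\dim(A\x B))\cdot\id_{A\x B}=\theta_{A\x B}$,
and the condition of Lemma~\ref{lem-tortile}(b) holds because
$\theta_{A^*}=\phi(\dim(A^*))\cdot\id_{A^*}=\phi(\dim(A))\cdot\id_{A^*}=(\theta_A)^*$.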
\end{remark}
\begin{examples}
The monoidal category $(\Rel,\times)$ is compact closed with
$A^*=A$. The category $(\FdVect,\x)$ of finite dimensional vector
spaces is compact closed with $A^*$ the dual space of $A$, and
similarly for the category of finite dimensional Hilbert spaces
$(\FdHilb,\x)$. The corresponding categories of possibly infinite
dimensional spaces are not autonomous. $(\Cob,+)$ is compact closed
with $A^*$ equal to $A$ with reversed orientation.
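In $(\Rel,\times)$, for instance, the unit and counit witnessing
$A^*=A$ can be taken to be the relations
\[ \eta_A=\s{(*,(a,a))\mid a\in A}\subseteq I\times(A\times A)
\sep\mbox{and}\sep
\eps_A=\s{((a,a),*)\mid a\in A}\subseteq (A\times A)\times I,
\]
where $I=\s{*}$ is the one-element set; the triangles
(\ref{eqn-pairing}) are easily checked by composing relations.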
\end{examples}
\paragraph{Graphical language and coherence.}
The graphical language for compact closed categories is like that of
tortile categories, except that we remove the framing and twist maps,
and use symmetries instead of braidings.
\begin{theorem}[Coherence for compact closed categories]
\label{thm-coherence-compact-closed}
A well-formed equation between morphisms in the language of compact
closed categories follows from the axioms of compact closed
categories if and only if it holds, up to isomorphism of diagrams,
in the graphical language.
\end{theorem}
\begin{caveat}
\label{cav-coherence-compact-closed}
The special case of diagrams over a simple signature was proven
by Kelly and Laplaza {\cite[Thm.~8.2]{KL80}}. The general case does
not appear in the literature.
\end{caveat}
\section{Traced categories}\label{sec-traced}
The graphical languages considered in Section~\ref{sec-progressive}
were {\em progressive}, which means that all wires were oriented
left-to-right. By contrast, the graphical languages of autonomous
categories in Section~\ref{sec-autonomous} allow wires to be oriented
left-to-right or right-to-left. We now turn our attention to an
intermediate notion, namely {\em traced} categories.
Like autonomous graphical languages, traced graphical languages permit
loops, but with a restriction: all wires must be directed
left-to-right at their endpoints. In other words, traced diagrams are
like autonomous diagrams, but are taken relative to a {\em monoidal
signature} (see Section~\ref{subsec-planar-monoidal}), rather than
an {\em autonomous signature} (see
Section~\ref{subsec-planar-autonomous}).
Table~\ref{tab-traced-vs-autonomous} shows a typical example of a
traced diagram, and a typical example of an autonomous diagram that is
not a traced diagram.
\begin{table}
\[ (a)\ssep\m{\wirechart{@C=1cm@R=.8cm}{
\blank\wireopen{d}\wire{rr}{}&&
\blank\wireclose{d}
\\
\blank\wireright{r}{}&
\blank\wireright{r}{}&
\blank
\\
*{}\wireright{r}{}&
\blank\ulbox{[].[u]}{f}\wireright{r}{}&
*{}
\\}}\sep
(b)\ssep\raisebox{-5mm}{\m{\wirechart{@C=1cm@R=.8cm}{
\wireright{r}{}&
\blank\ulbox{[].[d]}{f}\wireright{r}{}&
\blank\wireclose{d}\\
\wireleft{r}{}&
\blank\wireleft{r}{} &
\blank
}}}
\]
\caption{(a) A traced diagram. (b) An autonomous diagram that is not traced.}
\label{tab-traced-vs-autonomous}
\end{table}
Logically, we should have considered traced categories before pivotal
categories, because traced categories have less structure than pivotal
categories (i.e., every pivotal category is traced, and not the other
way around). However, many of the coherence theorems of this section
are consequences of the corresponding theorems for pivotal categories,
and therefore it made sense to present the pivotal notions first.
Symmetric traced categories and their graphical language (in the
strict monoidal case, and with one additional axiom) were first
introduced in the 1980's by {\Stefanescu} and {\Cazanescu} under the
name ``biflow'' {\cite{6,12,8}}. Joyal, Street, and Verity later
rediscovered this notion independently, generalized it to balanced
monoidal categories, and proved the fundamental embedding theorem
relating balanced traced categories to tortile categories
{\cite{JSV96}}.
\begin{remark}
Joyal, Street, and Verity use the term {\em traced monoidal
category}. However, I prefer {\em traced category}, usually
prefixed by an adjective such as planar, spacial, balanced,
symmetric. The word ``monoidal'' is redundant, because one cannot
have a traced structure without a monoidal structure. Also, by
putting the adjective before the word ``traced'', rather than after
it, we make it clear that the traced structure, and not just the
underlying monoidal structure, is being modified.
\end{remark}
\subsection{Right traced categories}\label{subsec-right-traced}
\begin{definition}
A {\em right trace} on a monoidal category is a family of operations
\[ \TrR^X:\hom(A\x X,B\x X)\ii\hom(A,B),
\]
satisfying the following four axioms. For notational convenience, we
assume without loss of generality that the monoidal structure is
strict.
\begin{enumerate}\alphalabels
\item Tightening (naturality in $A,B$): $\TrR^X((g\x\id_X)\cp f\cp
(h\x\id_X))=g\cp (\TrR^Xf)\cp h$;
\item Sliding (dinaturality in $X$): $\TrR^Y(f\cp(\id_A\x
g))=\TrR^X((\id_B\x g)\cp f)$, where $f:A\x X\ii B\x Y$ and $g:Y\ii
X$;
\item Vanishing: $\TrR^I f=f$ and $\TrR^{X\x Y} f=\TrR^X(\TrR^Y(f))$;
\item Strength: $\TrR^X(g\x f)=g\x\TrR^X f$.
\end{enumerate}
A {\em (planar) right traced category} is a monoidal category equipped
with a right trace.
\end{definition}
These axioms are similar to those of Joyal, Street, and Verity
{\cite{JSV96}}, except that we have omitted the yanking axiom, which
does not apply in the planar case, and we have replaced the non-planar
``superposing'' axiom by the planar ``strength'' axiom. I do not know
whether this set of planar axioms appears in the literature.
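As a concrete illustration (a standard example, which in fact carries
far more structure than just a right trace): on $(\FdVect,\x)$, a
right trace is given by the partial trace. Writing morphisms as
matrices with respect to chosen bases, for $f:A\x X\ii B\x X$ one sets
\[ (\TrR^X f)_{b,a} = \sum_{x} f_{(b,x),(a,x)},
\]
where $x$ ranges over a basis of $X$; the four axioms above then
become straightforward matrix identities.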
\paragraph{Graphical language and coherence.}
The right trace of a diagram $f:A\x X\ii B\x X$ is graphically
represented by drawing a loop from the output $X$ to the input $X$, as
follows:
\begin{equation}\label{eqn-right-trace}
\TrR^X f = \vcenter{\wirechart{@C=1cm@R=.8cm}{
\blank\wireopen{d}\wire{rr}{}&&
\blank\wireclose{d}
\\
\blank\wireright{r}{X}&
\blank\wireright{r}{X}&
\blank
\\
*{}\wireright{r}{A}&
\blank\ulbox{[].[u]}{f}\wireright{r}{B}&
*{}
\\}}
\end{equation}
Note that in the graphical language of right traced categories, parts
of wires can be oriented right-to-left, but each wire must be oriented
left-to-right near the endpoints. The four axioms of right traced
categories are illustrated in the graphical language in
Table~\ref{tab-right-traced}.
\begin{table}
\hfill\resizebox{0.95\textwidth}{!}{\includegraphics{traced-planar}}\hfill
\caption{The axioms of right traced categories}
\label{tab-right-traced}
\end{table}
The axioms of right traced categories are obviously sound for the
graphical language, up to planar isotopy. We conjecture that they are
also complete.
\begin{conjecture}[Coherence for right traced categories]
A well-formed equation between morphism terms in the language of
right traced categories follows from the axioms of right traced
categories if and only if it holds in the graphical language up to
planar isotopy.
\end{conjecture}
This is a weak conjecture, in the sense that there is not much
empirical evidence to support it, nor is there an obvious strategy for
a proof. If this conjecture turns out to be false, the axioms for
right traced categories should be amended until it becomes true.
The concept of a {\em left trace} is defined similarly as a family of
operations
\[ \TrL^X:\hom(X\x A,X\x B)\ii\hom(A,B),
\]
satisfying symmetric axioms. A left trace is graphically depicted as
follows:
\begin{equation}\label{eqn-left-trace}
\TrL^X g = \vcenter{\wirechart{@C=1cm@R=.8cm}{
*{}\wireright{r}{A}&
\blank\wireright{r}{B}&
*{}
\\
\blank\wireright{r}{X}&
\blank\ulbox{[].[u]}{g}\wireright{r}{X}&
\blank
\\
\blank\wireopen{u}\wire{rr}{}&&
\blank\wireclose{u}
\\}}
\end{equation}
We say that a monoidal functor $F$ {\em preserves right traces} if
$F(\TrR^X f) = \TrR^{FX}((\phi^2)\inv\cp Ff\cp\phi^2)$, and similarly
for left traces.
\subsection{Planar traced categories}
\begin{definition}
A {\em planar traced category} is a monoidal category equipped with
a right trace and a left trace, such that the two traces satisfy
three additional axioms:
\begin{enumerate}\alphalabels
\item Interchange: $\TrR^X(\TrL^Y f)=\TrL^Y(\TrR^X f)$, for all
$f:Y\x A\x X\ii Y\x B\x X$;
\item Left pivoting: $\TrR^B(\id_B\x f)=\TrL^A(f\x\id_A)$, for all
$f:I\ii A\x B$;
\item Right pivoting: $\TrR^B(\id_B\x f)=\TrL^A(f\x\id_A)$, for all
$f:A\x B\ii I$.
\end{enumerate}
\end{definition}
\paragraph{Graphical language and coherence.}
The graphical language of planar traced categories consists of
diagrams using the left and right trace together, modulo planar
isotopy. The axioms of interchange, left pivoting, and right pivoting
are shown graphically in Table~\ref{tab-planar-traced}. Compare also
equation~(\ref{eqn-rotation}) on page~\pageref{eqn-rotation}.
\begin{table}
\[ \begin{array}{ccc}
\m{\resizebox{4cm}{!}{\includegraphics{interchange}}} &
\m{\resizebox{3cm}{!}{\includegraphics{left-pivoting}}} &
\m{\resizebox{3cm}{!}{\includegraphics{right-pivoting}}} \\
\mbox{(a) interchange} &
\mbox{(b) left pivoting} &
\mbox{(c) right pivoting}
\end{array}
\]
\caption{Axioms relating left and right trace}
\label{tab-planar-traced}
\end{table}
The axioms are clearly sound; we conjecture that they are also
complete:
\begin{conjecture}[Coherence for planar traced categories]
\label{conj-coherence-planar-traced}
A well-formed equation between morphism terms in the language of
planar traced categories follows from the axioms of planar traced
categories if and only if it holds in the graphical language up to
planar isotopy.
\end{conjecture}
As for right traced categories, this conjecture is weak. If it turns
out to be false, then one should amend the axioms of planar traced
categories accordingly.
\begin{remark}\label{rem-planar-not-free}
Even if the conjecture is true, the graphical language does not in
itself give an easy description of the free planar traced category.
This is because there are diagrams, such as the following, that
``look'' planar traced, but are not actually the diagram of any planar
traced term (not even up to planar isotopy).
\[ \resizebox{1.6in}{!}{\includegraphics{not-planar-traced}}
\]
It is not obvious how to characterize the ``planar traced'' diagrams
intrinsically, or how to extend the notion of planar traced
categories to encompass all such diagrams.
\end{remark}
\begin{remark}
\label{rem-pivotal-is-traced}
An autonomous category is not necessarily traced. However, every
pivotal category is planar traced with the obvious definitions of
left and right trace:
\[ \begin{array}{lll}
\TrR^X f &=& (\id_B\x\eps_X)\cp((f\cp (\id_A\x i\inv_X))\x\id_{X^*})\cp(\id_A\x\eta_{X^*}),\\
\TrL^X f &=& (\eps_{X^*}\x\id_B)\cp(\id_{X^*}\x((i_X\x\id_B)\cp f))\cp(\eta_{X}\x\id_A).\\
\end{array}
\]
In the graphical language, this looks just like the diagrams
(\ref{eqn-right-trace}) and (\ref{eqn-left-trace}). As a
consequence, each diagram of planar traced categories can be
regarded as a diagram of planar pivotal categories, but not the
other way around.
\end{remark}
\subsection{Spherical traced categories}
The concept of a spherical traced category is analogous to that of
spherical pivotal categories from
Section~\ref{subsec-spherical-pivotal}.
\begin{definition}
A planar traced category satisfies the {\em spherical axiom} if for
all $f:A\ii A$,
\begin{equation}\label{eqn-spherical-traced}
\TrL^A f = \TrR^A f,
\end{equation}
or equivalently, in the graphical language:
\[
\vcenter{\wirechart{@R=.5cm}{
\blank\wireopen{d}\wireright{r}{A} &\blank\ulbox{[]}{f}\wireright{r}{A}& \blank\wireclose{d} \\
\blank\wire{rr}{} & & \blank \\
}
}
= \vcenter{\wirechart{@R=.5cm}{
\blank\wireopen{d}\wire{rr}{} && \blank\wireclose{d} \\
\blank\wireright{r}{A} & \blank\ulbox{[]}{f}\wireright{r}{A} & \blank
}
}
\]
A {\em spherical traced category} is a planar traced category
satisfying the spherical axiom.
\end{definition}
Every spherical pivotal category is spherical traced.
\paragraph{Failure of coherence.}
Just like for spherical pivotal categories, the graphical language of
spherical traced categories is not coherent for any geometrically
useful notion of equivalence of diagrams.
\subsection{Spacial traced categories}\label{subsec-spacial-traced}
\begin{definition}
A {\em spacial traced category} is a planar traced category that
satisfies the spacial axiom (\ref{eqn-spacial}) and the spherical
axiom (\ref{eqn-spherical-traced}).
\end{definition}
\paragraph{Graphical language and coherence.}
The graphical language for spacial traced categories is the same as
that for planar traced categories, except that equivalence of diagrams
is now taken up to isomorphism.
\begin{conjecture}[Coherence for spacial traced categories]
\label{conj-coherence-spacial-traced}
A well-formed equation between morphism terms in the language of
spacial traced categories follows from the axioms of spacial traced
categories if and only if it holds, up to isomorphism of diagrams,
in the graphical language.
\end{conjecture}
\begin{remark}
Every spacial pivotal category is clearly spacial traced. I do not
know whether conversely every spacial traced category can be
faithfully embedded in a spacial pivotal category. If this is true,
then Conjecture~\ref{conj-coherence-spacial-traced} follows from
Conjecture~\ref{conj-coherence-spacial-pivotal}.
\end{remark}
\subsection{Braided traced categories}
Braided traced categories, like braided pivotal categories, are a
somewhat unnatural notion, because coherence is only satisfied up to
regular isotopy. (If one considers full isotopy, one obtains the more
natural notion of balanced traced categories, which we will consider
in the next section). Nevertheless, we include this section on braided
traced categories, not least because it is the first traced notion for
which we can actually prove a coherence theorem (modulo
Caveat~\ref{cav-coherence-braided-pivotal}).
\begin{definition}
A {\em braided traced category} is a planar traced category with a
braiding (as a monoidal category), such that
\begin{equation}\label{eqn-braided-traced}
(\TrL^A\sym_{A,A})\cp(\TrR^A\symi_{A,A}) = \id_A,
\end{equation}
or graphically:
\[ \resizebox{1.8in}{!}{\includegraphics{braided-pivotal}}.
\]
\end{definition}
\begin{lemma}
\begin{enumerate}\alphalabels
\item The axiom (\ref{eqn-braided-traced}) does not follow from the
remaining axioms.
\item In the presence of the remaining axioms,
(\ref{eqn-braided-traced}) is equivalent to
\begin{equation}\label{eqn-braided-traced-2}
(\TrL^A\symi_{A,A})\cp(\TrR^A\sym_{A,A}) = \id_A,
\end{equation}
or graphically:
\[ \resizebox{1.8in}{!}{\includegraphics{braided-traced-2}}.
\]
\item In the presence of the remaining axioms of braided traced
categories, the left and right pivoting axioms are redundant.
\end{enumerate}
\end{lemma}
\begin{proof}
(a) To see this, consider morphism terms in the language of braided
traced categories with one object generator and no morphism
generators. Define the {\em degree} of a term to be the tensor
product of all traced-out objects, i.e., $\deg(\id)=I$, $\deg(f\cp
g)=\deg(f)\x \deg(g)$, $\deg(\TrR^Xf) = X\x \deg(f)$, etc. This is
well-defined up to isomorphism. All the axioms of planar traced
categories and braided categories respect degree; the only axioms
where the left-hand side and right-hand side could potentially have
different degree are sliding in Table~\ref{tab-right-traced} and
pivoting in Table~\ref{tab-planar-traced}. However, in the absence
of morphism generators, it is easy to show that all morphism terms
are of the form $f:A\ii B$ where $A\iso B$. Therefore, neither
sliding nor pivoting change the degree (the latter because it is
vacuous). Therefore degree is an invariant. On the other hand,
(\ref{eqn-braided-traced}) is not degree-preserving; therefore it
cannot follow from the other axioms.

(b) The following graphical proof sketch can be turned into an
algebraic proof:
\[ \resizebox{3in}{!}{\includegraphics{braided-traced-12}}
\]

(c) Here is a proof sketch for the left pivoting axiom. Notably, the
second to last step uses dinaturality (sliding).
\[ \resizebox{4in}{!}{\includegraphics{pivoting-redundant}}
\]
\end{proof}
\begin{remark}
Each braided traced category possesses a balanced structure (as a
braided monoidal category) given by $\theta_A=\TrL^A\symi_{A,A}$,
with inverse $\theta\inv_A=\TrR^A\sym_{A,A}$ (cf.
(\ref{eqn-braided-traced})). However, this twist is not canonical;
for example, another twist can be defined by
$\theta'_A=\TrR^A\sym_{A,A}$ with inverse
$\theta'_A{}\inv=\TrL^A\symi_{A,A}$ (cf.
(\ref{eqn-braided-traced-2})). In fact, there are countably many
other possible twists. This is entirely analogous to
Remark~\ref{rem-braided-pivotal-other-theta}. The various twists
coincide if and only if the yanking equation (\ref{eqn-yanking})
holds, yielding a balanced traced category as discussed in
Section~\ref{subsec-balanced-traced} below.
\end{remark}
We note that every braided pivotal category is braided traced, with
the traced structure as given in Remark~\ref{rem-pivotal-is-traced}.
Moreover, there is an embedding theorem giving a partial converse:
\begin{theorem}[Representation of braided traced categories]\label{thm-embedding-braided-traced}
Every braided traced category $\Cc$ can be fully and faithfully
embedded into a braided pivotal category $\Int(\Cc)$, via a braided
traced functor.
\end{theorem}
\begin{proof}
The proof exactly mimics the Int-construction of Joyal, Street, and
Verity {\cite{JSV96}}, except that we must replace the twist by
\m{\resizebox{!}{1em}{\includegraphics{loop}}}, and be careful only
to use the braided traced axioms. We omit the details, which are
both lengthy and tedious.\eot
\end{proof}
\begin{remark}
A braided traced category is not necessarily spherical (and
therefore not spacial). This is analogous to
Remark~\ref{braided-not-spherical}.
\end{remark}
\paragraph{Graphical language and coherence.}
The graphical language for braided traced categories is obtained by
adding braids to the graphical language of planar traced categories.
Equivalence of diagrams is up to {\em regular isotopy} (see
Section~\ref{subsec-braided-autonomous}).
\begin{theorem}[Coherence for braided traced categories]
\label{thm-coherence-braided-traced}
A well-formed equation between morphisms in the language of braided
traced categories follows from the axioms of braided traced
categories if and only if it holds in the graphical language up to
regular isotopy.
\end{theorem}
\begin{proof}
Soundness is easy to check by inspection of the axioms.
Completeness is a consequence of
Theorems~\ref{thm-coherence-braided-pivotal} and
{\ref{thm-embedding-braided-traced}}. Namely, consider an
equation in the language of braided traced categories that holds in
the graphical language up to regular isotopy. The diagrams
corresponding to the left-hand side and right-hand side of the
equation can also be regarded as diagrams of braided pivotal
categories, and since they are regularly isotopic, the equation
holds in all braided pivotal categories by
Theorem~\ref{thm-coherence-braided-pivotal}. Since any braided
traced category $\Cc$ can be faithfully embedded in a braided
pivotal category $\Int(\Cc)$ by
Theorem~\ref{thm-embedding-braided-traced}, an equation that
holds in $\Int(\Cc)$ must also hold in $\Cc$. It follows that the
equation in question holds in all braided traced categories $\Cc$,
and therefore, it is a consequence of the axioms.\eot
\end{proof}
\begin{caveat}
Because of the dependence on
Theorem~\ref{thm-coherence-braided-pivotal},
Caveat~\ref{cav-coherence-braided-pivotal} also applies here.
\end{caveat}
\subsection{Balanced traced categories}\label{subsec-balanced-traced}
\begin{definition}[\cite{JSV96}]
A {\em balanced traced category} is a balanced monoidal category
equipped with a right trace $\Tr$, and satisfying the following {\em
yanking axioms}:
\begin{equation}\label{eqn-yanking}
\Tr^X(\sym_{X,X}) = \theta_X
\sep\mbox{and}\sep
\Tr^X(\symi_{X,X}) = \theta\inv_X
\end{equation}
\end{definition}
\paragraph{Graphical language and coherence.}
The graphical language of balanced traced categories combines the
ribbons and twists of balanced categories with the loops of traced
categories. The trace is represented as expected:
\[ \Tr^X f =
\raisebox{-10mm}{\resizebox{3cm}{!}{\includegraphics{trace-balanced}}}.
\]
Note that there is no need to postulate a left trace, because a left
trace is definable from the right trace and braidings as follows:
\[ \TrL^X f =
\raisebox{-10mm}{\resizebox{3cm}{!}{\includegraphics{trace-balanced-left}}}
:=
\raisebox{-8mm}{\resizebox{4cm}{!}{\includegraphics{trace-balanced-left-def}}}
\]
\begin{remark}
The defined left trace automatically satisfies interchange and the
pivoting axioms (Table~\ref{tab-planar-traced}), as well as the
spherical axiom (\ref{eqn-spherical-traced}) and the braided traced
axiom (\ref{eqn-braided-traced}). The spacial axiom
(\ref{eqn-spacial}) is satisfied by any braided monoidal category.
Therefore, any balanced traced category is spacial traced and
braided traced.
\end{remark}
The graphical validity of the yanking axiom is easily verified using a
shoe string:
\[
\raisebox{-1mm}{\resizebox{1.5cm}{!}{\includegraphics{yank}}}
~=
\resizebox{2.7cm}{!}{\includegraphics{twistA}}~,
\sep
\raisebox{-1mm}{\resizebox{1.5cm}{!}{\includegraphics{yank-inverse}}}
~=
\resizebox{2.7cm}{!}{\includegraphics{twistA-inverse}}~.
\]
Every tortile category is balanced traced, with the traced structure as
given in Remark~\ref{rem-pivotal-is-traced}. Moreover, there is an
embedding theorem:
\begin{theorem}[Representation of balanced traced categories {\cite[Prop.~5.1]{JSV96}}]
\label{thm-embedding-balanced-traced}
Every balanced traced category can be fully and faithfully embedded
into a tortile category, via a balanced traced functor.
\end{theorem}
\begin{theorem}[Coherence for balanced traced categories]
\label{thm-coherence-balanced-traced}
A well-formed equation between morphisms in the language of balanced
traced categories follows from the axioms of balanced
traced categories if and only if it holds in the
graphical language up to framed isotopy in 3 dimensions.
\end{theorem}
\begin{proof}
This follows from Theorems~\ref{thm-coherence-tortile} and
{\ref{thm-embedding-balanced-traced}}, by the exact same
argument that was used in the proof of
Theorem~\ref{thm-coherence-braided-traced}.\eot
\end{proof}
\begin{caveat}
Because of the dependence on Theorem~\ref{thm-coherence-tortile},
Caveat~\ref{cav-coherence-tortile} also applies here.
\end{caveat}
\begin{remark}
In any braided monoidal category with a right trace, the twist and
its inverse are definable by equation (\ref{eqn-yanking}). These
maps are automatically natural and satisfy $\theta_I=\id_I$ and
(\ref{eqn-balanced}). However, they are not automatically inverse to
each other. Therefore, a balanced traced category could be
equivalently defined as a braided monoidal category with a right
trace, satisfying
\[
\Tr^X(\symi_{X,X}) = \Tr^X(\sym_{X,X})\inv.
\]
\end{remark}
\subsection{Symmetric traced categories}
\begin{definition}[\cite{8,12,JSV96}]
A {\em symmetric traced category} is a symmetric monoidal
category with a right trace $\Tr$, satisfying the {\em symmetric
yanking axiom}:
\[ \Tr^X(\sym_{X,X})=\id_X.
\]
\end{definition}
\begin{remark}
Because of Remark~\ref{rem-symmetric-from-balanced}, a symmetric
traced category can be equivalently defined as a balanced
traced category in which $\theta_A=\id_A$ for all $A$.
\end{remark}
Obviously every compact closed category is symmetric traced with the
structure from Remark~\ref{rem-pivotal-is-traced}. Here, too, we have
an embedding theorem:
\begin{theorem}[Representation of symmetric traced categories \cite{JSV96}]
\label{thm-embedding-symmetric-traced}
Every symmetric traced category can be fully and faithfully embedded
into a compact closed category, via a symmetric traced functor.
\end{theorem}
\begin{example}[\cite{JSV96}]\label{exa-rel-as-traced}
Consider the category $\Rel$ of sets and relations, with biproducts
given by disjoint union $A+B$. Given a relation $R:A+X\ii B+X$,
define its trace $\Tr^X(R):A\ii B$ by $(a,b)\in\Tr^X(R)$ iff there
exists $n\geq 0$ and $x_1,\ldots,x_n\in X$ such that
$a\,R\,x_1\,R\,x_2\,R\,\ldots \,R\,x_n\,R\,b$. This defines a
symmetric traced category which is not compact closed.
\end{example}
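For readers who prefer a computational description, the following Python
sketch (the tagged-pair encoding of the disjoint unions $A+X$ and $B+X$ is
our own choice, not part of the categorical definition) computes this trace
for finite relations by chasing chains through $X$:
\begin{verbatim}
# Sketch: the trace of a relation R : A+X -> B+X in Rel, computed by
# following chains a R x1 R ... R xn R b through the summand X.
# Elements of a disjoint union are encoded as tagged pairs such as
# ('A', a), ('X', x), ('B', b); this encoding is our own choice.

def trace(R, A):
    """Return Tr^X(R) as a set of pairs (a, b)."""
    result = set()
    for a in A:
        frontier = {t for (s, t) in R if s == ('A', a)}
        seen = set(frontier)
        while frontier:
            nxt = set()
            for s in frontier:
                if s[0] == 'X':        # only X-elements may be chased further
                    for (u, t) in R:
                        if u == s and t not in seen:
                            nxt.add(t)
                            seen.add(t)
            frontier = nxt
        result |= {(a, t[1]) for t in seen if t[0] == 'B'}
    return result

# Example: a -> 1 -> 2 -> b, with X = {1, 2}
R = {(('A', 'a'), ('X', 1)), (('X', 1), ('X', 2)), (('X', 2), ('B', 'b'))}
print(trace(R, {'a'}))                 # {('a', 'b')}
\end{verbatim}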
\paragraph{Graphical language and coherence.}
The graphical language is like that of planar traced categories,
combined with the symmetry. A typical diagram looks like this:
\[ \resizebox{2.7cm}{!}{\includegraphics{typical-symmetric-traced}}.
\]
The notion of equivalence of diagrams is isomorphism.
\begin{theorem}[Coherence for symmetric traced categories]
\label{thm-coherence-symmetric-traced}
A well-formed equation between morphisms in the language of
symmetric traced categories follows from the axioms of symmetric
traced categories if and only if it holds in the graphical language
up to isomorphism of diagrams.
\end{theorem}
\begin{proof}
A consequence of Theorems~\ref{thm-coherence-compact-closed} and
{\ref{thm-embedding-symmetric-traced}}, as in
Theorems~\ref{thm-coherence-braided-traced} and
{\ref{thm-coherence-balanced-traced}}.
\end{proof}
\begin{caveat}
Because of the dependence on
Theorem~\ref{thm-coherence-compact-closed},
Caveat~\ref{cav-coherence-compact-closed} also applies here.
\end{caveat}
\begin{remark}
Strict symmetric traced categories, with the additional axiom
\begin{equation}\label{eqn-stefanescu}
\Tr^X(\id_{A\x X})=\id_A,
\end{equation}
first appear in the work of {\Stefanescu} under the name ``biflow''. A
precursor of the definition appears in {\cite{6}}, and the axioms
were given their modern form by {\Cazanescu} and {\Stefanescu}
{\cite{12,8}}. The paper {\cite{6}} also contains a detailed
proof sketch of coherence, namely, that the graphical language,
modulo isomorphism and the equation (\ref{eqn-stefanescu}), forms
the free biflow over a monoidal signature. This proof sketch remains
valid with respect to the modern definition, provided that one
assumes coherence for symmetric monoidal categories.
\end{remark}
\section{Products, coproducts, and biproducts}\label{sec-products}
In this section, we consider graphical languages for monoidal
categories where the monoidal structure is given by a categorical
product, coproduct, or biproduct. The main difference with the
graphical languages of ``purely'' monoidal categories from
Sections~\ref{sec-progressive}--\ref{sec-traced} is that equivalence
of diagrams must now be defined up to diagrammatic equations.
\subsection{Products}\label{subsec-products}
\begin{table}
\[ \begin{array}{lll}
\mbox{Naturality axioms:}\\
\Delta_B\cp f = (f\x f)\cp\Delta_A : & A\ii B\x B\\
\Diamond_B\cp f = \Diamond_A : & A\ii I\\[2ex]
\mbox{Commutative comonoid axioms:}\\
(\id_A\x \Delta_A)\cp\Delta_A = (\Delta_A\x\id_A)\cp\Delta_A :& A\ii A\x A\x A\\
(\id_A\x \Diamond_A)\cp\Delta_A = \rho\inv_A :& A\ii A\x I\\
(\Diamond_A\x\id_A)\cp\Delta_A = \lambda\inv_A :& A\ii I\x A\\
\sym_{A,A}\cp\Delta_A = \Delta_A :& A\ii A\x A\\[2ex]
\mbox{Coherence axioms:}\\
\Delta_I=\lambda\inv_I:&I\ii I\x I\\
(\id_A\x \sym_{B,A}\x\id_B)\cp\Delta_{A\x B}=\Delta_A\x\Delta_B :& A\x B\ii A\x A\x B\x B \\
\Diamond_I=\id_I:&I\ii I\\
\Diamond_{A\x B} = \lambda_I\cp (\Diamond_A\x\Diamond_B):&A\x B\ii I
\end{array}
\]
\caption{The axioms for products}
\label{tab-product-axioms}
\end{table}
\begin{definition}
In a category, a {\em product} of objects $A$ and $B$ is given by an
object $A\times B$, together with morphisms $\pi_1:A\times B\ii A$
and $\pi_2:A\times B\ii B$, such that for all objects $C$ and pairs
of morphisms $f:C\ii A$ and $g:C\ii B$, there exists a unique
morphism $h:C\ii A\x B$ such that the following diagram commutes:
\[\xymatrix{
&C\ar[dl]_{f}\ar[dr]^{g}\ar@{-->}[d]^{h}\\
A&A\x B\ar[l]_{\pi_1}\ar[r]^{\pi_2}&B.
}
\]
The unique morphism $h$ is often written as $h=\apair{f,g}$. An
object $I$ is {\em terminal} if for all objects $C$, there exists a
unique morphism $h:C\ii I$. A {\em finite product category} (or {\em
cartesian category}) is a category with a chosen terminal object,
and a chosen product for each pair of objects.
\end{definition}
Equivalently, a finite product category can be described as a
symmetric monoidal category, together with natural families of {\em
copy} and {\em erase} maps
\[ \Delta_A:A\ii A\x A,\sep \Diamond_A:A\ii I
\]
subject to a number of axioms, shown in Table~\ref{tab-product-axioms}.
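This equivalent presentation is easy to spell out in the category of sets.
The following Python sketch (our own illustration, not part of the formal
development) implements copy, erase, and the pairing $\apair{f,g}$, and
checks one naturality axiom and the universal property on a few inputs:
\begin{verbatim}
# Sketch (our own illustration): products in the category of sets,
# presented via copy, erase, and pairing.

def copy(a):                 # Delta_A : A -> A x A
    return (a, a)

def erase(a):                # Diamond_A : A -> I, with I the one-element set {()}
    return ()

def pair(f, g):              # the mediating morphism <f,g> : C -> A x B
    return lambda c: (f(c), g(c))

def pi1(p):                  # projection A x B -> A
    return p[0]

def pi2(p):                  # projection A x B -> B
    return p[1]

f = lambda n: n + 1
g = lambda n: n * n

# Naturality of copy:  Delta_B . f  =  (f x f) . Delta_A
for a in range(5):
    assert copy(f(a)) == tuple(map(f, copy(a)))

# Universal property:  pi1 . <f,g> = f  and  pi2 . <f,g> = g
h = pair(f, g)
for c in range(5):
    assert pi1(h(c)) == f(c) and pi2(h(c)) == g(c)
\end{verbatim}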
\paragraph{Graphical language.}
We extend the graphical language of symmetric monoidal categories by
adding the following representations of the copy and erase maps.
\begin{center}
\begin{tabular}{@{}llc@{}}
Copy & $\Delta_A:A\ii A\x A$
& $\vcenter{\wirechart{@C=1cm@R=.4cm}{
&&*{}\\
*{}\wireright{r}{A}&
*{\bullet}\wireright{ur}{A}\wireright{dr}{A}\\
&&*{}\\
}}$
\\\\[-.5ex]
Erase & $\Diamond_A:A\ii I$
& $\vcenter{\wirechart{@C=1cm@R=.4cm}{
*{}\wireright{r}{A}&
*{\bullet}&
*{}
}}$
\\
\end{tabular}
\end{center}
As usual, if $A$ is a composite object term, a wire labeled $A$ should
be replaced by multiple parallel wires.
Table~\ref{tab-product-examples} contains graphical representations of
some of the axioms for finite product categories.
\begin{table}
\[ \begin{array}{c@{\hspace{1cm}}c}
\m{\resizebox{1.6in}{!}{\includegraphics{finite-product-comonoid}}} &
\m{\resizebox{1.8in}{!}{\includegraphics{finite-product-naturality}}}
\\\\[-.5ex]
\mbox{Commutative comonoid axioms} & \mbox{Naturality}
\end{array}
\]
\caption{Graphical representation of some product axioms}
\label{tab-product-examples}
\end{table}
Note that the projections $\pi_1:A\times B\ii A$ and $\pi_2:A\times
B\ii B$, and the pairing $h:C\ii A\x B$ of $f:C\ii A$ and $g:C\ii B$,
are represented graphically as follows:
\[ \m{\resizebox{2.8in}{!}{\includegraphics{finite-product-projections}}}
\]
\paragraph{Coherence.}
As the equivalences in Table~\ref{tab-product-examples} demonstrate,
coherence in the graphical language of finite product categories does
not hold up to isomorphism or isotopy of diagrams. Rather, it holds up
to {\em manipulations} of diagrams. So unlike the graphical languages
considered in Sections~\ref{sec-categories}--\ref{sec-traced}, we now
have to consider axioms on diagrams.
\begin{theorem}[Coherence for finite product categories]
\label{thm-coherence-product}
A well-formed equation between morphism terms in the language of
finite product categories follows from the axioms of finite product
categories if and only if it holds in the graphical language, up to
isomorphism of diagrams and the diagrammatic manipulations shown
in Table~\ref{tab-product-examples}.
\end{theorem}
This theorem is a simple consequence of coherence for symmetric
monoidal categories (Theorem~\ref{thm-coherence-symmetric-monoidal}),
together with the fact that all the axioms of finite product
categories, except those shown in Table~\ref{tab-product-examples},
hold up to isomorphism of diagrams.
\subsection{Coproducts}
The definition of coproducts and initial objects is dual to that of
products and terminal objects, i.e., it is obtained by reversing all
the arrows in Section~\ref{subsec-products}. Explicitly, an object $0$
is {\em initial} if for all objects $C$, there exists a unique
morphism $h:0\ii C$. A {\em coproduct} of objects $A,B$ is given by an
object $A+B$, together with morphisms $\iota_1:A\ii A+B$ and
$\iota_2:B\ii A+B$, such that for all objects $C$ and pairs of
morphisms $f:A\ii C$ and $g:B\ii C$, there exists a unique morphism
$h:A+B\ii C$ such that $h\cp\iota_1=f$ and $h\cp\iota_2=g$. One often
writes $h=[f,g]$. A category with finite coproducts is also called a
{\em co-cartesian category}.
Dualizing the presentation of Section~\ref{subsec-products}, one can
equivalently define a finite coproduct category as a symmetric
monoidal category with natural families of {\em merge} and {\em
initial} maps
\[ \nabla_A:A\x A\ii A,\sep \Box_A:I\ii A,
\]
satisfying the duals of the axioms in Table~\ref{tab-product-axioms}.
\paragraph{Graphical language.}
The graphical language of finite coproduct categories is obtained by
dualizing that of finite product categories, with the duals of the
axioms from Table~\ref{tab-product-examples}.
\begin{center}
\begin{tabular}{@{}llc@{}}
Merge & $\nabla_A:A\x A\ii A$
& $\vcenter{\wirechart{@C=1cm@R=.4cm}{
*{}\wireright{dr}{A}&&\\
&*{\bullet}\wireright{r}{A}&
*{}\\
*{}\wireright{ur}{A}&&\\
}}$
\\\\[-.5ex]
Initial & $\Box_A:I\ii A$
& $\vcenter{\wirechart{@C=1cm@R=.4cm}{
*{}&
*{\bullet}\wireright{r}{A}&
*{}
}}$
\\
\end{tabular}
\end{center}
\subsection{Biproducts}\label{subsec-biproduct}
\begin{definition}
An object is called a {\defit zero object} if it is initial and
terminal. If $\zero$ is a zero object, then there is a distinguished
map $A\ii \zero\ii B$ between any two objects, denoted $0_{A,B}$. A
{\em biproduct} of objects $A_1$ and $A_2$ is given by an object
$A_1\oplus A_2$, together with morphisms $\iota_i:A_i\ii A_1\oplus
A_2$ and $\pi_i:A_1\oplus A_2\ii A_i$, for $i=1,2$, such that
$A_1\oplus A_2$ is a product with $\pi_1,\pi_2$, a coproduct with
$\iota_1,\iota_2$ and such that $\pi_i\cp \iota_j=\delta_{ij}$.
Here $\delta_{ii}=\id_{A_i}$ and $\delta_{ij}=0_{A_j,A_i}$ when
$i\neq j$. We say that $\Cc$ is a {\em biproduct category}
if it has a chosen zero object $\zero$ and a chosen biproduct for
any pair of objects.
\end{definition}
\begin{remark}
The axiom $\pi_i\cp \iota_j=\delta_{ij}$ is equivalent to the
assertion that the symmetric monoidal structure defined by $\oplus$
``as a product'' coincides with the symmetric monoidal structure
defined by $\oplus$ ``as a coproduct''. Therefore, a
biproduct category is symmetric monoidal in a canonical way.
\end{remark}
Equivalently, a biproduct category can be defined as a
symmetric monoidal category, together with natural families of
morphisms
\[ \Delta_A:A\ii A\x A,\sep \Diamond_A:A\ii I,\sep
\nabla_A:A\x A\ii A,\sep \Box_A:I\ii A,
\]
satisfying the axioms in Table~\ref{tab-product-axioms}, as well as
their duals.
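A standard concrete example is the category of real matrices, where
$\oplus$ is the direct sum. The following NumPy sketch (our own
illustration) writes the injections and projections as block matrices and
checks the axiom $\pi_i\cp\iota_j=\delta_{ij}$:
\begin{verbatim}
# Sketch (our own illustration): the direct-sum biproduct of matrices.
import numpy as np

def iota1(m, n):   # injection A1 -> A1 (+) A2
    return np.vstack([np.eye(m), np.zeros((n, m))])

def iota2(m, n):   # injection A2 -> A1 (+) A2
    return np.vstack([np.zeros((m, n)), np.eye(n)])

def pi1(m, n):     # projection A1 (+) A2 -> A1
    return np.hstack([np.eye(m), np.zeros((m, n))])

def pi2(m, n):     # projection A1 (+) A2 -> A2
    return np.hstack([np.zeros((n, m)), np.eye(n)])

m, n = 2, 3
assert np.allclose(pi1(m, n) @ iota1(m, n), np.eye(m))         # pi_1 . iota_1 = id
assert np.allclose(pi2(m, n) @ iota2(m, n), np.eye(n))         # pi_2 . iota_2 = id
assert np.allclose(pi1(m, n) @ iota2(m, n), np.zeros((m, n)))  # pi_1 . iota_2 = 0
\end{verbatim}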
\paragraph{Graphical language.}
The graphical language of biproducts is obtained by combining the
graphical languages for products and coproducts. In this case, one has
the equalities in Table~\ref{tab-biproduct-naturality}, which are
consequences of the naturality axioms in
Table~\ref{tab-product-examples}. Note that the axiom
$\pi_i\cp\iota_j=\delta_{ij}$ holds automatically in the graphical
language.
\begin{table}
\[
\m{\resizebox{3.7in}{!}{\includegraphics{biproduct-naturality}}}
\]
\caption{Some biproduct laws}
\label{tab-biproduct-naturality}
\end{table}
\begin{theorem}[Coherence for biproduct categories]
A well-formed equation between morphism terms in the language of
biproduct categories follows from the axioms of biproduct
categories if and only if it holds in the graphical language, up to
isomorphism of diagrams, the diagrammatic manipulations shown
in Table~\ref{tab-product-examples}, and their duals.
\end{theorem}
This theorem is a simple consequence of coherence for symmetric
monoidal categories, together with the fact that the axioms in
Table~\ref{tab-product-examples} (and their duals) are exactly the
graphical representations of the axioms in
Table~\ref{tab-product-axioms} (and their duals) that do not already
hold up to graphical isomorphism.
\subsection{Traced product, coproduct, and biproduct categories}
It potentially makes sense to revisit each of the notions of
Sections~\ref{sec-progressive}--{\ref{sec-traced}} and consider the
case where the monoidal structure is given by a product, coproduct, or
biproduct. Since products, coproducts, and biproducts are
automatically symmetric, we do not need to consider the weaker notions
(such as balanced, braided, etc).
Moreover, we do not need to consider any autonomous cases, because an
autonomous category where the tensor is given by a product (or
coproduct) is trivial. Indeed, for any objects $A,B$, the morphisms
$f:A\ii B$ are in one-to-one correspondence with morphisms $A\x B^*\ii
I$. Since $I$ is terminal, there is exactly one such morphism, and
therefore there is a unique morphism between any two objects. Such a
category is equivalent to the one-object one-morphism category.
Therefore, the only new notion from
Sections~\ref{sec-progressive}--\ref{sec-traced} that admits
non-trivial examples in the context of products, coproducts, or
biproducts is that of a symmetric traced category.
\begin{definition}
A {\em traced product [coproduct, biproduct] category} is a
symmetric traced category where the tensor is given by a categorical
product [coproduct, biproduct].
\end{definition}
\begin{example}[\cite{JSV96}]\label{exa-rel-as-traced-biprod}
The symmetric traced category $(\Rel,+)$ from
Example~\ref{exa-rel-as-traced} is a traced biproduct category.
\end{example}
\begin{example}
Consider the category $\Set_{\bot}$ whose objects are sets, and
whose morphisms are partial functions, regarded as a subcategory of
$\Rel$ from Example~\ref{exa-rel-as-traced-biprod}. In this category, the
empty set $0$ is a zero object, and the disjoint union operation
$A+B$ defines a coproduct (but not a product). Trace is given as in
Example~\ref{exa-rel-as-traced-biprod}. With these definitions,
$\Set_{\bot}$ is a traced coproduct category.
\end{example}
\paragraph{Graphical language.}
As expected, the graphical language of traced product [coproduct,
biproduct] categories is given by adding a trace (as in
Section~\ref{sec-traced}) to the graphical language of finite product
[finite coproduct, biproduct] categories.
\begin{theorem}[Coherence for traced product {[coproduct, biproduct]} categories]
A well-formed equation between morphism terms in the language of
traced product [coproduct, biproduct] categories follows from the
respective axioms if and only if it holds in the graphical language,
up to isomorphism of diagrams, and the diagrammatic manipulations
shown in Table~\ref{tab-product-examples} and/or their duals (as
appropriate).
\end{theorem}
\begin{remark}\label{rem-while-loop}
In computer science, traces arise naturally in the context of {\em
data flow} (as fixed points), and in the context of {\em control
flow} (as iteration). The two situations correspond to traced
product categories and traced coproduct categories, respectively.
The duality between data flow and control flow was first described
by Bainbridge {\cite{Bai76}}. The following are typical examples of
a data flow diagram (on the left) and a control flow diagram (on the
right). The data flow diagram represents the fixed point expression
$y = (3+x)(x+y)$, parametric on an input $x$. The control flow
diagram represents a generic ``while loop''. Note that data flow
diagrams have a notion of ``copying'' data, whereas control flow
diagrams have a dual notion of ``merging'' control paths.
\[
\m{\resizebox{!}{.6in}{\includegraphics{data-flow}}}
\sep
\m{\resizebox{!}{.5in}{\includegraphics{while-loop}}}
\]
\end{remark}
\begin{proposition}[{\Cazanescu} and {\Stefanescu} {\cite{12,8}}]
\label{prop-iteration}
In a category with finite coproducts, giving a trace is equivalent
to giving an {\em iteration operator}. Here, an iteration operator
is a family of operations
\[ \iter^X : \hom(X,A+X) \ii \hom(X,A),
\]
natural in $A$ and dinatural in $X$, satisfying
\begin{enumerate}
\item Iteration:
$\iter(f) = [\id_A,\iter(f)]\cp f$, for all $f:X\ii A+X$;
\item Diagonal property:
$\iter(\iter(f)) = \iter((\id_A+[\id_X,\id_X])\cp f)$,
for all $f:X\ii A+X+X$.
\end{enumerate}
Dually, on a finite product category, giving a trace is equivalent
to a {\em fixed point operator} $\fix^X : \hom(A\times X,X) \ii
\hom(A,X)$.
\end{proposition}
This makes precise the intuitive idea that in the presence of
coproducts, the while loop in Remark~\ref{rem-while-loop} is
sufficient for constructing arbitrary traces.
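Concretely, for sets and partial functions, the iteration operator is
literally a while loop. The following Python sketch (the tagged encoding of
$A+X$ is our own) implements $\iter$ and illustrates the iteration axiom
$\iter(f)=[\id_A,\iter(f)]\cp f$:
\begin{verbatim}
# Sketch (our own encoding): a morphism f : X -> A + X is a Python function
# returning ('A', a) or ('X', x).  The iteration operator runs f as a while
# loop, feeding X-results back in until an A-result appears (it may fail to
# terminate, matching partiality).

def iterate(f):
    def run(x):
        r = f(x)
        while r[0] == 'X':
            r = f(r[1])
        return r[1]
    return run

# Example: repeated halving until an odd number is reached
f = lambda x: ('A', x) if x % 2 == 1 else ('X', x // 2)
print(iterate(f)(40))    # 5
\end{verbatim}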
\begin{remark}
In the presence of the other axioms, the diagonal property is
equivalent to the so-called Beki\v{c} Lemma:
\[ \iter[f,g] = [\id_A,\iter([\id_{A+X},\iter(g)]\cp f)]\cp[\inj{2},\iter(g)],
\]
for all $f:X\ii A+X+Y$ and $g:Y\ii A+X+Y$ {\cite[Prop.~B.1]{2a}}.
\end{remark}
\begin{remark}
Iteration operators in the sense of Proposition~\ref{prop-iteration}
were first defined, using different but equivalent axioms, by
{\Cazanescu} and Ungureanu {\cite{22,25}}, under the name
``algebraic theory with iterate''.
\end{remark}
\begin{proposition}[{\cite{8}}]
In a category with finite biproducts, giving a trace is equivalent
to giving a {\em repetition operation}, i.e., a family of operators
\[ *:\hom(A,A)\ii\hom(A,A)
\]
satisfying
\begin{enumerate}
\item $f^* = \id + ff^*$,
\item $(f+g)^* = (f^*g)^*f^*$.
\item $(fg)^*f = f(gf)^*$ (dinaturality).
\end{enumerate}
Here, $f+g$ denotes the morphism $\nabla_A\cp(f\oplus
g)\cp\Delta_A:A\ii A$, for $f,g:A\ii A$.
\end{proposition}
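For instance, in $(\Rel,+)$ from Example~\ref{exa-rel-as-traced-biprod}, a
relation $f:A\ii A$ on a finite set is a boolean matrix and $f^*$ is its
reflexive-transitive closure. The following NumPy sketch (the matrix
encoding is our own) checks the first axiom $f^* = \id + ff^*$:
\begin{verbatim}
# Sketch (our own encoding): the repetition operator in (Rel, +) is the
# reflexive-transitive closure of a boolean matrix.
import numpy as np

def bmul(f, g):
    """Boolean matrix product (composition of relations)."""
    return (f.astype(int) @ g.astype(int)) > 0

def star(f):
    """Repetition f*: reflexive-transitive closure of a relation on n elements."""
    n = f.shape[0]
    result = np.eye(n, dtype=bool)
    power = np.eye(n, dtype=bool)
    for _ in range(n):                 # paths of length at most n suffice
        power = bmul(power, f)
        result = result | power
    return result

f = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=bool)
fs = star(f)
# f* = id + f f*   (here the order of the matrix product does not matter,
# since both sides expand to f + f^2 + ...)
assert np.array_equal(fs, np.eye(3, dtype=bool) | bmul(f, fs))
\end{verbatim}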
\subsection{Uniformity and regular trees}
\begin{definition}
Suppose we are given a traced category with a distinguished subclass
of morphisms called the {\em strict} morphisms. Then the trace is
called {\em uniform} if for all $f:A\x X\ii B\x X$, $g:A\x Y\ii B\x
Y$, and strict $h:X\ii Y$, the following implication holds:
\[ (\id_B\x h)\cp f = g \cp (\id_A\x h) \ssep\imp\ssep
\Tr^X(f) = \Tr^Y(g).
\]
Equivalently, in pictures:
\[
\m{\resizebox{!}{.4in}{\includegraphics{uniform1}}}
\sep\imp\sep \m{\resizebox{!}{.4in}{\includegraphics{uniform2}}}\sep,
\]
whenever $h$ is strict. Note that uniformity is not an equational
property.
\end{definition}
\begin{proposition}[{\cite{8}}]
A traced coproduct category is uniformly traced if and only if for
all $f:X\ii A+X$, $g:Y\ii A+Y$, and strict $h:X\ii Y$,
\[ (\id_A+h)\cp f = g\cp h
\ssep\imp\ssep
\iter^X(f)=\iter^Y(g)\cp h.
\]
Moreover, a traced biproduct category is uniformly traced if and
only if for all $f:X\ii X$, $g:Y\ii Y$, and strict $h:X\ii Y$,
\[ h\cp f=g\cp h
\ssep\imp\ssep
h\cp f^*=g^*\cp h.
\]
\end{proposition}
In the particular case where the class of strict morphisms is taken to
be the smallest co-cartesian subcategory containing all objects,
{\Stefanescu} {\cite{2a,5}} proved that the free uniformly
traced coproduct category over a monoidal signature is given by the
graphical language of traced coproduct categories, modulo a suitable
notion of simulation equivalence on diagrams. This simulation
equivalence is easiest to describe in the case where all morphism
variables are of input arity 1. In this case, two diagrams are
simulation equivalent if and only if they have the same infinite tree
unwinding. There is also an analogous result for biproducts. We refer
the reader to {\cite{2a,2b,4}} for full details.
The following is an example of an equation that holds up to infinite
tree unwinding, but fails in general traced coproduct categories:
\begin{equation}\label{eqn-tree}
\m{\resizebox{!}{.5in}{\includegraphics{unwinding}}}
\end{equation}
{\'E}sik's ``iteration theories'' {\cite{15}} are a direct equational
axiomatization of such infinite tree unwindings. They include an
iteration operator as in Proposition~\ref{prop-iteration}, but with an
infinite family of additional properties, such as (\ref{eqn-tree}).
\subsection{Cartesian center}
Sometimes it is useful to consider notions that are weaker than
product categories, yet still have copy and erase maps $\Delta_A:A\ii
A\x A$ and $\Diamond_A:A\ii I$. For example, it is common to drop the
naturality axioms, while retaining the commutative comonoid and
coherence axioms (see Tables~\ref{tab-product-axioms} and
{\ref{tab-product-examples}}). An equivalent way to describe such a
category is as a symmetric monoidal category with (faithful) {\em
cartesian center} {\cite{Has97}}, i.e., a symmetric monoidal
category with a symmetric monoidal subcategory that contains all the
objects and is cartesian. Similar ideas have occurred, with varying
degrees of explicitness, in the literature on flowcharts, see e.g.
{\cite{22,10,11}}.
Similarly, if one omits naturality from the axioms for coproducts, one
obtains categories with a co-cartesian center. A weakened version of
biproducts is obtained by combining the axioms of cartesian center and
co-cartesian center. In this case, one requires the operations
$\Delta$, $\Diamond$, $\nabla$, $\Box$ to be natural with respect to
one another, yielding the properties from
Table~\ref{tab-biproduct-naturality}. More generally, one may require
any subset of the operations $\Delta$, $\Diamond$, $\nabla$, $\Box$ to
exist, and a further subset to be natural transformations. As the
reader may imagine, this leads to a nearly endless number of
categorical notions and corresponding graphical languages; see
e.g.~{\cite{11,4}}.
\section{Dagger categories}
The concept of a dagger category (also called {\em involutive
category} or {\em *-category} in the literature) is motivated by the
category of Hilbert spaces, where each morphism $f:A\ii B$ has an {\em
adjoint} $f\da:B\ii A$.
\begin{definition}
A {\em dagger category} is a category $\Cc$ together with an
involutive, identity-on-objects, contravariant functor
$\dagger:\Cc\ii\Cc$.
\end{definition}
Concretely, this means that to every morphism $f:A\ii B$, one
associates a morphism $f\da:B\ii A$, called the {\defit adjoint} of
$f$, such that for all $f:A\ii B$ and $g:B\ii C$:
\[ \begin{array}{l@{~}l}
\id_A\da = \id_A &: A\ii A, \\
(g\cp f)\da = f\da\cp g\da &: C\ii A, \\
f\dada = f&:A\ii B, \\
\end{array}
\]
\begin{example}
The category $\Hilb$ of Hilbert spaces and bounded linear maps is a
dagger category, where $f\da:B\ii A$ is given by the usual
adjointness property of linear algebra, i.e., $\iprod{f\da
x}{y}=\iprod{x}{fy}$ for all $x\in B$ and $y\in A$.
\end{example}
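In finite dimension, the adjoint is simply the conjugate transpose. The
following NumPy sketch (our own illustration; we take the inner product to
be conjugate-linear in the first argument, as computed by
\verb|numpy.vdot|) checks the dagger axioms and the defining property
$\iprod{f\da x}{y}=\iprod{x}{fy}$ on random matrices:
\begin{verbatim}
# Sketch (our own illustration): in finite dimension the adjoint f-dagger is
# the conjugate transpose, and the dagger axioms hold on the nose.
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))   # f : A -> B
g = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))   # g : B -> C

dag = lambda m: m.conj().T

assert np.allclose(dag(g @ f), dag(f) @ dag(g))       # (g . f)† = f† . g†
assert np.allclose(dag(dag(f)), f)                    # f†† = f
assert np.allclose(dag(np.eye(2)), np.eye(2))         # id† = id

# <f† x, y> = <x, f y>  for all x in B, y in A
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(2) + 1j * rng.standard_normal(2)
assert np.isclose(np.vdot(dag(f) @ x, y), np.vdot(x, f @ y))
\end{verbatim}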
\begin{definition}{\bf (Unitary map, self-adjoint map)}
In a dagger category, a morphism $f:A\ii B$ is called {\defit unitary}
if it is an isomorphism and $f^{-1}=f\da$. A morphism $f:A\ii A$ is
called {\defit self-adjoint} or {\defit hermitian} if $f=f\da$.
\end{definition}
A {\em dagger functor} between dagger categories is a functor that
satisfies $F(f\da)=(Ff)\da$ for all $f$.
\paragraph{Graphical language.}
The graphical language of dagger categories extends that of
categories. The adjoint of a morphism variable $f:A\ii B$ is
represented diagrammatically as follows:
\begin{center}
\begin{tabular}{@{}llc@{}}
& $f:A\ii B$ &
$\wirechart{@C=1.7cm@R=0.5cm}{\wireright{r}{A}&\blank\ulbox{[]}{f}\wireright{r}{B}&
}$\\\\
& $f\da:B\ii A$ &
$\wirechart{@C=1.7cm@R=0.5cm}{\wireright{r}{B}&\blank\urbox{[]}{f}\wireright{r}{A}&
}$\\\\
\end{tabular}
\end{center}
More generally, the adjoint of any diagram is its mirror image. Note
that the mirror image of a box is visually distinguishable because we
have marked the upper left corner of each box representing a morphism
variable. Also note that, while we have taken the mirror image of
each box, we have reversed the location, but not the direction, of the
wires. Contrast this with (\ref{eqn-adjoint-mate}).
\begin{theorem}[Coherence for dagger categories]
\label{thm-coherence-dagger-categories}
A well-formed equation between two morphism terms in the language of
dagger categories follows from the axioms of dagger categories if
and only if it holds in the graphical language up to isomorphism of
diagrams.
\end{theorem}
\begin{proof}
This is a consequence of coherence for categories, from
Theorem~\ref{thm-coherence-categories}. As usual, soundness is easy
to check. For completeness, notice that any morphism term $t$ of
dagger categories can be transformed, via the axioms $(g\cp
f)\da=f\da\cp g\da$, $\id\da = \id$, and $f\dada = f$, into an
equivalent term $t'$ with the property that $\dagger$ is only
applied to morphism variables in $t'$. Such a term can be regarded
as a term in the language of categories, over the extended set of
morphism variables $\s{f,f\da,\ldots}$. Now if $t$ and $s$ are two
terms that have isomorphic diagrams, then by soundness, $t'$ and
$s'$ have isomorphic diagrams. By
Theorem~\ref{thm-coherence-categories}, $t'$ and $s'$ are provably
equal from the axioms of categories. Therefore $t$ and $s$ are
provably equal from the axioms of dagger categories.\eot
\end{proof}
We now consider ``dagger notions'' for the various monoidal categories
from Sections~\ref{sec-progressive}--\ref{sec-traced}.
\subsection{Dagger monoidal categories}
\begin{definition}
A {\em dagger monoidal category} is a monoidal category that is a
dagger category, such that the dagger structure is compatible with
the monoidal structure in the following sense:
\begin{enumerate}\alphalabels
\item $(f\x g)\da = f\da\x g\da$, for all $f,g$;
\item the canonical isomorphisms of the monoidal structure,
$\alpha_{A,B,C}: (A\x B)\x C\ii A\x(B\x C)$, $\lambda_A:I\x A\ii
A$, and $\rho_A: A\x I\ii A$, are unitary.
\end{enumerate}
\end{definition}
\paragraph{Graphical language.}
The graphical language of dagger monoidal categories is like the
graphical language of monoidal categories, with the adjoint of a
diagram given by its mirror image. For example,
\[ \left[\mnew{\wirechart{@R+1ex@C+1em}{
*{}\wireright{r}{A}&\blank\ulbox{[].[d]}{f}\wire{rr}{}&\blank\wireright{r}{E}&*{}\\
*{}\wireright{r}{B}&\blank\wireright{r}{D}&\blank\urbox{[].[d]}{g}\wireright{r}{F}&*{}\\
*{}\wireright{r}{C}\wire{rr}{}&\blank&\blank\wireright{r}{G}&*{}\\
}}\right]\largeda
\ssep=\ssep
\mnew{\wirechart{@R+1ex@C+1em}{
*{}\wireright{r}{E}\wire{rr}{}&\blank&\blank\wireright{r}{A}&*{}\\
*{}\wireright{r}{F}&\blank\wireright{r}{D}&\blank\urbox{[].[u]}{f}\wireright{r}{B}&*{}\\
*{}\wireright{r}{G}&\blank\ulbox{[].[u]}{g}\wire{rr}{}&\blank\wireright{r}{C}&*{}\\
}}
\]
\begin{theorem}[Coherence for planar dagger monoidal categories]
\label{thm-coherence-dagger-monoidal}
A well-formed equation between morphism terms in the language of
dagger monoidal categories follows from the axioms of dagger
monoidal categories if and only if it holds, up to planar isotopy,
in the graphical language.
\end{theorem}
\begin{proof}
This is a consequence of coherence for planar monoidal categories,
from Theorem~\ref{thm-coherence-planar}. The proof is analogous to
that of Theorem~\ref{thm-coherence-dagger-categories}. Note that the
axioms of dagger monoidal categories are precisely what is needed to
ensure that all occurrences of $\dagger$ can be removed from a
morphism term, except where applied directly to a morphism
variable.\eot
\end{proof}
\subsection{Other progressive dagger monoidal notions}
We can now ``daggerize'' the other progressive monoidal notions from
Section~\ref{sec-progressive}:
\begin{definition}
\begin{itemize}
\item A dagger monoidal category is {\em spacial} if it is spacial
as a monoidal category.
\item A {\em dagger braided monoidal category} is a dagger monoidal
category with a unitary braiding $\sym_{A,B}:A\x B\ii B\x A$.
\item A {\em dagger balanced monoidal category} is a dagger braided
monoidal category with a unitary twist $\theta_A:A\ii A$.
\item A {\em dagger symmetric monoidal category} {\cite{Sel05}} is a
dagger braided monoidal category such that the unitary braiding is
a symmetry.
\end{itemize}
\end{definition}
\paragraph{Graphical languages.}
In each case, the graphical language extends the corresponding
language from Section~\ref{sec-progressive}, with the dagger of a
diagram taken to be its mirror image. Each notion has a coherence
theorem, proved by the same method as
Theorems~\ref{thm-coherence-dagger-categories}
and~\ref{thm-coherence-dagger-monoidal}. The requirement that the
braiding and twist be unitary ensures that the dagger can be removed
from the corresponding terms. The respective caveats from
Section~\ref{sec-progressive} also apply to the dagger cases.
\begin{example}
The category $\Hilb$ of Hilbert spaces is dagger symmetric monoidal,
with the usual tensor product and symmetry.
\end{example}
\subsection{Dagger pivotal categories}
In defining dagger variants of the notions of
Section~\ref{sec-autonomous}, we find that the notion of a dagger
autonomous category and a dagger pivotal category coincide. This is
because the presence of a dagger structure on an autonomous category
already induces a canonical isomorphism $A\iso A^{**}$, which
automatically satisfies the pivotal axioms under mild assumptions.
To be more precise, consider a dagger monoidal category that is also
right autonomous (as a monoidal category). Because $\eta_A:I\ii A^*\x
A$ has an adjoint $\eta\da_A:A^*\x A\ii I$, we can define a family of
isomorphisms
\[ i_A = A \catarrow{\iso} I\x A\catarrow{\eta_{A^*}\x \id_A} A^{**}\x A^*\x A \catarrow{\id_{A^{**}}\x \eta\da_A}A^{**}\x I\catarrow{\iso} A^{**}.
\]
We can represent this schematically as follows (but bearing in mind
that we do not yet have a formal graphical language to work with):
\begin{equation}\label{eqn-definition-i}
\wirechart{}{\wire{r}{A}&\circbox{i_A}\wire{r}{A^{**}}&}
\ssep=\ssep
\raisebox{1ex}{\mnew{\wirechart{@C=1.0cm@R=0.5cm}{
\wire{r}{A}&\blank\wirecloselabel{d}{\eta\da_A}\\
\blank\wire{r}{A^*}&\blank\\
\blank\wireopenlabel{u}{\eta_{A^*}}\wire{r}{A^{**}}&
}}}
\end{equation}
\begin{lemma}\label{lem-dagger-pivotal-1}
The following are equivalent in a right autonomous, dagger monoidal
category:
\begin{itemize}\alphalabels
\item the family of isomorphisms $i_A:A\ii A^{**}$, as defined
above, determines a pivotal structure;
\item for all $A,B$, the canonical isomorphisms $(A\x B)^*\iso B^*\x
A^*$ and $I^*\iso I$ (determined by the right autonomous
structure) are unitary, and for all $f:A\ii B$, the equation
$f^{*\dagger}=f^{\dagger*}$ holds.
\end{itemize}
\end{lemma}
\begin{proof}
By a direct calculation from the definitions, one can check three
separate and independent facts:
\begin{itemize}
\item For any given $f:A\ii B$, the diagram
\[ \xymatrix{
A \ar[r]^{i_A} \ar[d]_{f} &
A^{**} \ar[d]^{f^{**}} \\
B \ar[r]^{i_B} &
B^{**}
}
\]
commutes if and only if $f^{*\dagger}=f^{\dagger*}$. In
particular, the family $i_A$ is a natural transformation if and
only if this condition holds for all $f$.
\item The diagram from (\ref{eqn-pivotal-monoidal}),
\[ \xymatrix@C=0cm{
&
A\x B \ar[dl]_{i_A\x i_B} \ar[dr]^{i_{A\x B}} \\
A^{**}\x B^{**} \ar[rr]^{\iso} &&
(A\x B)^{**}
}
\]
commutes if and only if the canonical isomorphism $(A\x B)^*\iso B^*\x
A^*$ is unitary.
\item The morphism $i_I:I\ii I^{**}$ is equal to the canonical
isomorphism (from the right autonomous structure) if and only if
the canonical isomorphism $I\ii I^*$ is unitary.
\end{itemize}
Since the three conditions are the defining conditions for a pivotal
structure, the lemma follows. \eot
\end{proof}
\begin{lemma}\label{lem-dagger-pivotal-2}
Under the equivalent conditions of Lemma~\ref{lem-dagger-pivotal-1},
the following hold:
\begin{enumerate}\alphalabels
\item[(a)] $i_A$ is unitary.
\item[(b)] $i_A = A \catarrow{\iso} A\x
I\catarrow{\id_A\x\eps\da_{A^*}} A\x A^*\x A^{**}
\catarrow{\eps_A\x\id_{A^{**}}}I\x A^{**}\catarrow{\iso}
A^{**}$:
\[ \wirechart{}{\wire{r}{A}&\circbox{i_A}\wire{r}{A^{**}}&}
\ssep=\ssep
\raisebox{1ex}{\mnew{\wirechart{@C=1.0cm@R=0.5cm}{
\blank\wireopenlabel{d}{\eps\da_{A^*}}\wire{r}{A^{**}}&\\
\blank\wire{r}{A^*}&\blank\\
\wire{r}{A}&\blank\wirecloselabel{u}{\eps_A}\\
}}}
\]
\item[(c)] $\eta\da_A = \eps_{A^*}\cp(\id_{A^*}\x i_A)$:
\[\raisebox{1ex}{\mnew{\wirechart{}{
\wire{r}{A}&\blank\wirecloselabel{d}{\eta\da_A} \\
\wire{r}{A^*}&\blank
}}}
\ssep=\ssep
\raisebox{1ex}{\mnew{\wirechart{}{
\wire{r}{A}&\circbox{i_A}\wire{r}{A^{**}}&\blank\wirecloselabel{d}{\eps_{A^*}} \\
\wire{rr}{A^*}&&\blank
}}}
\]
\item[(d)] $\eps\da_A = (i\inv_A\x\id_{A^*})\cp\eta_{A^*}$:
\[\raisebox{1ex}{\mnew{\wirechart{}{
\blank\wireopenlabel{d}{\eps\da_A}\wire{r}{A^*}&\\
\blank\wire{r}{A}&
}}}
\ssep=\ssep
\raisebox{1ex}{\mnew{\wirechart{}{
\blank\wireopenlabel{d}{\eta_{A^*}}\wire{rr}{A^*}&&\\
\blank\wire{r}{A^{**}}&\circbox{i\inv_A}\wire{r}{A}&
}}}
\]
\end{enumerate}
\end{lemma}
\begin{proof}
To prove (a), first consider
\[ (i_A)\da
\ssep=\ssep
\raisebox{1ex}{\mnew{\wirechart{@C=1.0cm@R=0.5cm}{
\blank\wireopenlabel{d}{\eta_A}\wire{r}{A}&\\
\blank\wire{r}{A^*}&\blank\\
\wire{r}{A^{**}}&\blank\wirecloselabel{u}{\eta\da_{A^*}}\\
}}}
\]
By definition of adjoint mates, we have
\[ (i_A)^{\dagger*}
\ssep=\ssep
\raisebox{1ex}{\mnew{\wirechart{@C=1.0cm@R=0.5cm}{
\wire{r}{A^*}&\blank\wirecloselabel{d}{\eta\da_{A^*}}\\
\blank\wire{r}{A^{**}}&\blank\\
\blank\wireopenlabel{u}{\eta_{A^{**}}}\wire{r}{A^{***}}&
}}}
\]
But this is just the definition of $i_{A^*}$, therefore
$(i_A)^{\dagger*}=i_{A^*}$. By definition, $i_A$ is unitary iff
$(i_A)\da=i_A^{-1}$, iff $(i_A)^{\dagger*}=(i_A^{-1})^*$, iff
$i_{A^*} = (i_A^{-1})^* = (i_A^*)^{-1}$. Since $i$ is a monoidal
natural transformation, this holds by Saavedra Rivano's Lemma
(Lemma~\ref{lem-saavedra-rivano}).
To prove (b), note that the right-hand side is the inverse of
$(i_A)\da$. Therefore, (b) is equivalent to (a).
Finally, equations (c) and (d) are restatements of the definition of
$i_A$ from (\ref{eqn-definition-i}). \eot
\end{proof}
\begin{remark}
The equivalence between (a) and (b) in
Lemma~\ref{lem-dagger-pivotal-2} holds only if $i_A$ is defined as
in (\ref{eqn-definition-i}). It does not hold for an arbitrary
pivotal structure on a right autonomous dagger monoidal category.
\end{remark}
Armed with these results, we finally state the two equivalent
definitions of a dagger pivotal category:
\begin{definition}
A {\em dagger pivotal category} is defined in one of the following
equivalent ways:
\begin{enumerate}
\item as a dagger monoidal, right autonomous category such that the
natural isomorphisms $(A\x B)^*\iso B^*\x A^*$ and $I^*\iso I$
(from the right autonomous structure) are unitary, and such that
$f^{*\dagger}=f^{\dagger*}$ holds for all morphisms $f$; or
\item as a pivotal, dagger monoidal category satisfying the
condition in Lemma~\ref{lem-dagger-pivotal-2}(c) (or equivalently,
(d)).
\end{enumerate}
\end{definition}
The first form of this definition is much easier to check in practice.
The second form is more suitable for the proof of the coherence
theorem below.
\begin{remark}
In a dagger pivotal category, the operation $(-)^*$ arises from an
adjunction (in the categorical sense) of {\em objects}, with
associated unit, counit, and adjoint mates. On the other hand, the
operation $(-)\da$ arises from an adjunction (in the linear algebra
sense) of {\em morphisms}. The two concepts should not be confused
with each other.
\end{remark}
\paragraph{Graphical language.}
The graphical language of dagger pivotal categories is like that of
pivotal categories, where the adjoint of a diagram is given, as usual,
by its mirror image. For example:
\[ \left[\mnew{\wirechart{@R-1ex}{
\wireright{r}{A}&\blank\ulbox{[].[dd]}{g}&\blank \\
&\blank\wireright{r}{C}& \\
\blank\wireopen{dd}\wireright{r}{B}&\blank \\\\
\blank\wireleft{rr}{B}&&
}}\right]\largeda
\ssep=\ssep
\mnew{\wirechart{@R-1ex}{
&\blank\urbox{[].[dd]}{g}\wireright{r}{A}& \\
\wireright{r}{C}&\blank& \\
\blank&\blank\wireright{r}{B}&\blank\wireclose{dd} \\\\
\wireleft{rr}{B}&&\blank
}}
\]
Note that in the graphical language, adjoint mates $f^*:B^*\ii A^*$
are represented by rotation and adjoints $f\da:B\ii A$ by mirror
image. Therefore, each morphism variable $f:A\ii B$ induces four
kinds of boxes:
\[ \begin{array}{rr}
f=\vcenter{\wirechart{}{
\wireright{r}{A}&\blank\ulbox{[]}{f}\wireright{r}{B}&
}}
&
f\da=\vcenter{\wirechart{}{
\wireright{r}{B}&\blank\urbox{[]}{f}\wireright{r}{A}&
}}
\\\\
f^{*\dagger}=\vcenter{\wirechart{}{
\wireleft{r}{A}&\blank\dlbox{[]}{f}\wireleft{r}{B}&
}}
&
f^*=\vcenter{\wirechart{}{
\wireleft{r}{B}&\blank\drbox{[]}{f}\wireleft{r}{A}&
}}
\end{array}
\]
Also note that, unlike the informal notation used above, the graphical
language does not explicitly display the isomorphism $i_A:A\ii
A^{**}$, and it does not explicitly distinguish $\eta_A:I\ii A^*\x A$
from $\eps\da_{A^*}:I\ii A^*\x A^{**}$. This is justified by the
following coherence theorem.
\begin{theorem}[Coherence for dagger pivotal categories]
A well-formed equation between morphisms in the language of dagger
pivotal categories follows from the axioms of dagger pivotal
categories if and only if it holds in the graphical language up to
planar isotopy, including rotation of boxes (by multiples of 180
degrees).
\end{theorem}
\begin{proof}
This follows from coherence of pivotal categories
(Theorem~\ref{thm-coherence-pivotal}), by the same argument used in
the proof of Theorem~\ref{thm-coherence-dagger-monoidal}. The
equations from Lemma~\ref{lem-dagger-pivotal-2}(c) and (d), and the
fact that $i_A$ is unitary, can be used to replace $\eta\da_A$,
$\eps\da_A$, and $i\da_A$ by equivalent terms not containing
$\dagger$.\eot
\end{proof}
\subsection{Other dagger pivotal notions}
It is possible to define dagger variants of the remaining pivotal
notions from Section~\ref{sec-autonomous}:
\begin{definition}
A dagger pivotal category is {\em spherical} (respectively {\em
spacial}) if it is spherical (respectively spacial) as a pivotal
category.
\end{definition}
\begin{definition}
A {\em dagger braided pivotal category} is a dagger pivotal category
with a unitary braiding $\sym_{A,B}:A\x B\ii B\x A$.
\end{definition}
\begin{remark}
Like any braided pivotal category, a dagger braided pivotal category
is balanced by Lemma~\ref{lem-braided-pivotal}. However, in general
the resulting twist $\theta_A:A\ii A$ is not unitary. In fact,
$\theta_A$ is unitary in this situation if and only if
$\theta_{A^*}=(\theta_A)^*$, i.e., if and only if the category is
tortile.
\end{remark}
\begin{definition}
A {\em dagger tortile category} is defined in one of the following
equivalent ways:
\begin{enumerate}
\item as a dagger braided pivotal category in which the canonical
twist $\theta_A$, defined as in Lemma~\ref{lem-braided-pivotal},
is unitary;
\item as a tortile, dagger monoidal category such that the braiding
is unitary, and such that $\eps_A$ and $\eta_A$ satisfy the
(equivalent) conditions of Lemma~\ref{lem-dagger-pivotal-2}(c) and
(d); or
\item as a dagger balanced monoidal category that is right
autonomous and satisfies
\begin{equation}\label{eqn-theta-dagger}
\theta_A
\ssep=\ssep
\vcenter{\wirechart{@C-4ex}{
*{}\wire{rr}{A}&&
\blank\wirecross{d}\wire{rr}{A}&&
*{}
\\
&\blank\wire{r}{}&
\blank\wirebraid{u}{.3}\wire{r}{}&
\blank
\\
&\blank\wireopenlabel{u}{\eta_A}\wire{rr}{A^*}&&
\blank\wirecloselabel{u}{\eta\da_A}
}}
\end{equation}
\end{enumerate}
\end{definition}
The first form of this definition emphasizes the relationship to
dagger pivotal categories. The second form is easiest to check if a
category is already known to be tortile. Finally, the third form takes
$\eps_A$, $\eta_A$, $\sym_{A,B}$ and $\theta_A$ as primitive
operations and does not mention the pivotal structure $i_A$ at all.
The pivotal structure, in this case, is definable from
(\ref{eqn-i-from-theta}) or (\ref{eqn-definition-i}), with the condition
(\ref{eqn-theta-dagger}) ensuring that the two definitions coincide.
\begin{definition}[\cite{AC04,Sel05}]
A {\em dagger compact closed category} is a dagger tortile category
such that $\theta_A=\id_A$ for all $A$. Equivalently, it is a dagger
symmetric monoidal category that is right autonomous and satisfies
\begin{equation}\label{eqn-dagger-compact-closed}
\raisebox{1ex}{\mnew{\wirechart{}{
\blank\wireopenlabel{d}{\eta_A}\wire{r}{A}&\\
\blank\wire{r}{A^*}&
}}}
\ssep=\ssep
\raisebox{1ex}{\mnew{\wirechart{}{
\blank\wireopenlabel{d}{\eps\da_{A}}\wire{r}{A^*}&\blank\wirecross{d}\wire{r}{A}&\\
\blank\wire{r}{A}&\blank\wirecross{u}\wire{r}{A^*}&
}}}
\end{equation}
\end{definition}
The equivalence of the two definitions is immediate from the third form
of the definition of dagger tortile categories. Note that
(\ref{eqn-theta-dagger}) is equivalent to
(\ref{eqn-dagger-compact-closed}) in the symmetric case. Further,
these conditions are equivalent to the condition in
Lemma~\ref{lem-dagger-pivotal-2}(d).
\begin{example}
The category $\FdHilb$ of finite dimensional Hilbert spaces is
dagger compact closed, with $A^*$ the usual dual space of linear
functions from $A$ to $I$, and with $f\da$ the usual linear algebra
adjoint.
\end{example}
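The following NumPy sketch (our own encoding, identifying $A^*$ with $A$
via the standard basis) writes $\eta_A$ and $\eps_A$ as Kronecker-product
data, and checks one zig-zag identity from the autonomous structure
together with the condition (\ref{eqn-dagger-compact-closed}), i.e., that
$\eta_A$ agrees with $\eps\da_A$ composed with the symmetry exchanging the
two tensor factors:
\begin{verbatim}
# Sketch (our own encoding): FdHilb with A of dimension n and A* identified
# with A via the standard basis.  The unit/counit are the vector
# eta = sum_i e_i (x) e_i, written out with Kronecker products.
import numpy as np

n = 3
I = np.eye(n)
eta = np.eye(n).reshape(n * n, 1)   # eta_A : I -> A* (x) A,  sum_i e_i (x) e_i
eps = eta.T                         # eps_A : A (x) A* -> I   (same vector, as a row)

# zig-zag identity:  (eps_A (x) id_A) . (id_A (x) eta_A) = id_A
assert np.allclose(np.kron(eps, I) @ np.kron(I, eta), I)

# dagger compactness (cf. eqn-dagger-compact-closed):
# eta_A = (swap of the two tensor factors) . eps_A-dagger
swap = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        swap[j * n + i, i * n + j] = 1   # sends e_i (x) e_j to e_j (x) e_i
assert np.allclose(swap @ eps.conj().T, eta)
\end{verbatim}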
\paragraph{Graphical languages.}
Each of the notions defined in this section (except the spherical
notion) has a graphical language, extending the corresponding
graphical language from Section~\ref{sec-autonomous}, with the dagger
of a diagram taken to be its mirror image. Each notion has a coherence
theorem, proved by the same method as
Theorems~\ref{thm-coherence-dagger-categories}
and~\ref{thm-coherence-dagger-monoidal}. As expected, equivalence of
diagrams is up to isomorphism (for spacial dagger pivotal categories);
up to regular isotopy (for dagger braided pivotal categories); up to
framed 3-dimensional isotopy (for dagger tortile categories); and up
to isomorphism (for dagger compact closed categories).
\subsection{Dagger traced categories}\label{subsec-dagger-traced}
There is no difficulty in defining dagger variants of each of the
traced notions of Section~\ref{sec-traced}. A (left or right) trace on
a dagger monoidal category is called a {\em dagger trace} if it
satisfies
\begin{equation}\label{eqn-trace-dagger}
(\Tr f)\da = \Tr(f\da)
\end{equation}
For example: a {\em dagger right traced category} is a right traced
dagger monoidal category satisfying (\ref{eqn-trace-dagger}). A
balanced traced category is {\em dagger balanced traced} if it is
dagger balanced and satisfies (\ref{eqn-trace-dagger}). And similarly
for the other notions. The representation theorems of
Section~\ref{sec-traced} extend to these dagger variants:
\begin{theorem}[Representation of dagger braided/balanced/symmetric traced categories]
Every dagger braided [balanced, symmetric] traced category can be
fully and faithfully embedded in a dagger braided pivotal [dagger
tortile, dagger compact closed] category, via a dagger braided
[balanced, symmetric] traced functor.\eot
\end{theorem}
The proof, in each case, is by Joyal, Street, and Verity's
Int-construction {\cite{JSV96}}, which respects the dagger structure.
\paragraph{Graphical languages.}
The graphical language of each class of traced categories extends to
the corresponding dagger traced categories, in a way suggested by
equation (\ref{eqn-trace-dagger}). As usual, the dagger of a diagram
is its mirror image, thus for example
\[ \left[\mnew{\wirechart{@C=1cm@R=.8cm}{
\blank\wireopen{d}\wire{rr}{}&&
\blank\wireclose{d}
\\
\blank\wireright{r}{X}&
\blank\wireright{r}{X}&
\blank
\\
*{}\wireright{r}{A}&
\blank\ulbox{[].[u]}{f}\wireright{r}{B}&
*{}
\\}}\right]\largeda
\ssep=\ssep
\mnew{\wirechart{@C=1cm@R=.8cm}{
\blank\wireopen{d}\wire{rr}{}&&
\blank\wireclose{d}
\\
\blank\wireright{r}{X}&
\blank\wireright{r}{X}&
\blank
\\
*{}\wireright{r}{B}&
\blank\urbox{[].[u]}{f}\wireright{r}{A}&
*{}
\\}}
\]
The coherence theorems of Section~\ref{sec-traced} extend to this setting.
\subsection{Dagger biproducts}
In a dagger category, if $A\oplus B$ is a categorical product (with
projections $\pi_1:A\oplus B\ii A$ and $\pi_2:A\oplus B\ii B$), then
it is automatically a coproduct (with injections $\pi\da_1:A\ii
A\oplus B$ and $\pi\da_2:B\ii A\oplus B$). It therefore makes sense to
define a notion of {\em dagger biproduct}.
\begin{definition}
A {\em dagger biproduct category} is a biproduct category carrying a
dagger structure, such that $\pi\da_i=\iota_i:A_i\ii A_1\oplus A_2$
for $i=1,2$.
\end{definition}
As in Section~\ref{subsec-biproduct}, we can equivalently define a
dagger biproduct category as a dagger symmetric monoidal category,
together with natural families of morphisms
\[ \Delta_A:A\ii A\x A,\sep \Diamond_A:A\ii I,\sep
\nabla_A:A\x A\ii A,\sep \Box_A:I\ii A,
\]
such that $\Delta\da_A=\nabla_A$ and $\Diamond\da_A=\Box_A$,
satisfying the axioms in Table~\ref{tab-product-axioms}.
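Continuing the matrix illustration from above: for finite-dimensional
spaces with the direct sum as biproduct, copy and merge are stacked and
juxtaposed identity blocks, and the conditions $\Delta\da_A=\nabla_A$ and
$\Diamond\da_A=\Box_A$ hold on the nose. A brief NumPy check (our own
sketch):
\begin{verbatim}
# Sketch (our own illustration): for the direct-sum biproduct of matrices,
# Delta_A = <id, id> and Nabla_A = [id, id] are conjugate transposes of each
# other, and likewise Diamond_A (the map to the zero space) and Box_A.
import numpy as np

n = 3
Delta = np.vstack([np.eye(n), np.eye(n)])     # Delta_A : A -> A (+) A
Nabla = np.hstack([np.eye(n), np.eye(n)])     # Nabla_A : A (+) A -> A
Diamond = np.zeros((0, n))                    # Diamond_A : A -> 0
Box = np.zeros((n, 0))                        # Box_A : 0 -> A

assert np.allclose(Delta.conj().T, Nabla)
assert np.array_equal(Diamond.conj().T, Box)  # both are the empty n x 0 matrix
\end{verbatim}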
\paragraph{Graphical language.}
The graphical language of dagger biproduct categories is like that of
biproduct categories, where the dagger of a diagram is taken to be its
mirror image. For example,
\[ \left[\mnew{\resizebox{3.5cm}{!}{\includegraphics{dagger-biproduct-example}}}\right]\largeda
\ssep=\ssep
\mnew{\resizebox{3.5cm}{!}{\includegraphics{dagger-biproduct-example-flipped}}}
\]
\begin{theorem}[Coherence for dagger biproduct categories]
A well-formed equation between morphism terms in the language of
dagger biproduct categories follows from the axioms of dagger biproduct
categories if and only if it holds in the graphical language, up to
isomorphism of diagrams, the diagrammatic manipulations shown
in Table~\ref{tab-product-examples}, and their duals.
\end{theorem}
\begin{proof}
By reduction to biproduct categories, as in the proofs of
Theorems~\ref{thm-coherence-dagger-categories}
and~\ref{thm-coherence-dagger-monoidal}. The axioms
$\Delta\da_A=\nabla_A$ and $\Diamond\da_A=\Box_A$ allow $\dagger$ to
be removed from anywhere but a morphism variable.\eot
\end{proof}
Finally, there is an obvious notion of {\em dagger traced biproduct
category} (which is really a dagger traced dagger biproduct
category), with graphical language derived from traced biproduct
categories.
\section{Bicategories}
A bicategory {\cite{Ben67}} is a generalization of a monoidal
category. In addition to objects $A,B,\ldots$ and morphisms
$f,g,\ldots$, one now also considers {\em 0-cells}
$\alpha,\beta,\ldots$, which we can visualize as {\em colors}. For
example, consider the following diagram. It is a standard diagram for
monoidal categories, except that the areas between the wires have been
colored.
\[ \resizebox{1.6in}{!}{\includegraphics{bicategory}}
\]
As usual, we have objects $A,B,C,D,E,F$ and morphisms $f:A\ii C\x D$
and $g:B\x C\ii F\x E$. But now there are also 0-cells called green,
red, yellow, and blue. In such diagrams, each object has a {\em
source}, which is the 0-cell just above it, and a {\em target},
which is the 0-cell just below it. For example, we have
$A:\mbox{green}\ii\mbox{yellow}$, $B:\mbox{yellow}\ii\mbox{blue}$, and
so on. It is now clear that, to be consistently colored, such diagrams
have to satisfy some coloring constraints. The constraints are:
\begin{itemize}
\item The tensor $B\x A$ of two objects may only be formed if the
target of $A$ is equal to the source of $B$. In symbols, for any
0-cells $\alpha,\beta,\gamma$, if $A:\alpha\ii\beta$ and
$B:\beta\ii\gamma$, then $B\x A:\alpha\ii\gamma$.
\item If $f:A\ii B$ is a morphism, then $A$ and $B$ must have a common
source and a common target. In symbols, if $f:A\ii B$ and
$A:\alpha\ii\beta$, then $B:\alpha\ii\beta$.
\item One also requires a unit object $I_{\alpha}:\alpha\ii \alpha$
for every color $\alpha$.
\end{itemize}
As an illustration of the second property, consider $f:A\ii C\x D$ in
the above example, where $A:\mbox{green}\ii\mbox{yellow}$ and $C\x
D:\mbox{green}\ii\mbox{yellow}$. Subject to the above coloring
constraints, a bicategory is then required to satisfy exactly the same
axioms as a monoidal category. Notice, for example, that if $f:A\ii
B$ and $g:B\ii C$ and $f,g$ are well-colored, then so is $g\cp f:A\ii
C$. Also, the identity maps $\id_A:A\ii A$, the associativity map
$\alpha_{A,B,C}:(A\x B)\x C\catarrow{\iso} A\x(B\x C)$, and the other
structural maps are well-colored. In particular, a monoidal category
is the same thing as a bicategory with just one 0-cell.
To give a detailed account of bicategories and their graphical
languages is beyond the scope of this paper. We have already discussed
over 30 different flavors of monoidal categories, and the reader can
well imagine how many possible variations of bicategories there are,
with 2-, 3-, and 4-dimensional graphical languages, once one considers
bicategorical versions of braids, twists, adjoints, and traces. There
are even more variations if one considers tricategories and beyond.
We refer the reader to {\cite{Ben67}} for the definition and basic
properties of bicategories, and to {\cite{Str95}},
{\cite[Sec.~7]{BD95}} for a taste of their graphical languages.
\section{Beyond a single tensor product}\label{sec-star-autonomous}
All the categorical notions that we have considered in this paper have
just a single tensor product, which we represented as juxtaposition in
the graphical languages. For notions of categories with more than one
tensor product, the graphical languages get much more complicated.
The details are beyond the scope of this paper, so we just outline the
basics and give some references.
Examples of categories with more than one tensor are linearly
distributive categories {\cite{CS97}} and *-autonomous categories
{\cite{Bar79}}. Both of these notions are models of multiplicative
linear logic {\cite{Gir87a}}. These categories have two tensors, often
called ``tensor'' and ``par'', and written
\[ A\x B
\sep\mbox{and}\sep
A\llamp B.
\]
The two tensors are related by some morphisms, such as $A\x(B\llamp
C)\ii (A\x B)\llamp C$, while other similar morphisms, such as $(A\x
B)\llamp C\ii A\x(B\llamp C)$, are not present.
To make a graphical language for more than one tensor product, one
must label the wires by object terms, rather than object
variables. One must also introduce special tensor and par nodes as
shown here:
\[ \mnew{\xymatrix@R-4ex{
\ar[dr]^<>(.3){B} \\
& \circbox{\x} \ar[r]^{A\x B} &\\
\ar[ur]_<>(.3){A}
}}
\ssep
\mnew{\xymatrix@R-4ex{
&&\\
\ar[r]^{A\x B} & \circbox{\x} \ar[ur]^{B} \ar[dr]_{A}\\
&&
}}
\ssep
\mnew{\xymatrix@R-4ex{
\ar[dr]^<>(.3){B} \\
& \circbox{\llamp} \ar[r]^{A\llamp B} &\\
\ar[ur]_<>(.3){A}
}}
\ssep
\mnew{\xymatrix@R-4ex{
&&\\
\ar[r]^{A\llamp B} & \circbox{\llamp} \ar[ur]^{B} \ar[dr]_{A}\\
&&
}},
\]
along with similar nodes for the units. Equivalence of diagrams must
be taken up to axiomatic manipulations, such as the following, which
is called {\em cut elimination} in logic:
\[ \mnew{\xymatrix@R-4ex{
\ar[dr]^<>(.3){B} &&&\\
& \circbox{\x} \ar[r]^{A\x B} & \circbox{\x} \ar[ur]^{B} \ar[dr]_{A}\\
\ar[ur]_<>(.3){A}&&&
}}
\sep=\sep
\mnew{\xymatrix@R-4ex{
\ar[rr]^{B}&&\\
\ar[rr]_{A}&&
}}.
\]
Finally, one must state a {\em correctness criterion}, to explain why
certain diagrams, such as the left one following, are well-formed,
while others, such as the right one, are not well-formed.
\[ \xymatrix@R-4ex{
&&\\
\ar[r]^{A\x B} & \circbox{\x} \ar@/^3ex/[rr]^{B} \ar@/_3ex/[rr]_{A}&&
\circbox{\x} \ar[r]^{A\x B} &\\
&&
}
\sep
\xymatrix@R-4ex{
&&\\
\ar[r]^{A\llamp B} & \circbox{\llamp} \ar@/^3ex/[rr]^{B} \ar@/_3ex/[rr]_{A}&&
\circbox{\x} \ar[r]^{A\x B} &\\
&&
}
\]
The resulting theory is called the theory of {\em proof nets}, and was
first given by Girard for unit-free multiplicative linear logic
{\cite{Gir87a}}. It was later extended to include the tensor units by
Blute et al.~{\cite{BCST96}}.
\section{Summary}\label{sec-summary}
\begin{table}
\newlength{\mywidth}
\newcommand{\fittowidth}[2]{\settowidth{\mywidth}{#2}\ifdim\mywidth>#1\resizebox{#1}{!}{#2}\else#2\fi}
\newcommand{\doubleline}[2]{\begin{tabular}{@{}l@{}}#1\\#2\end{tabular}}
\newcommand{\chartboxaux}[6]{%
\framebox{\resizebox{.55in}{!}{%
\begin{tabular}[b]{@{}p{1in}@{}}
\fittowidth{1in}{#2} \\
\multicolumn{1}{@{}c@{}}{$\m{\rule{0mm}{#1}}{#3}$}\\
#4\hfill#5\hfill#6
\end{tabular}%
}}}
\newbox\myboxa
\newbox\myboxb
\newcommand\fittobox[3]{\setbox\myboxa\hbox{\resizebox{#1}{!}{#3}}\setbox\myboxb\hbox{\resizebox{!}{#2}{#3}}\ifdim\ht\myboxa>\ht\myboxb\resizebox{!}{#2}{#3}\else\resizebox{#1}{!}{#3}\fi}
\newcommand{\gr}[1]{\mnew{\fittobox{1in}{0.55in}{\includegraphics{#1}}}}
\newcommand{\grb}[1]{\mnew{\fittobox{1in}{0.35in}{\includegraphics{#1}}}}
\newcommand{\eq}[2]{\resizebox{1in}{!}{\mnew{\resizebox{.4in}{!}{\includegraphics{#1}}}~\mnew{=}~\mnew{\resizebox{.4in}{!}{\includegraphics{#2}}}}}
\newcommand{\eqb}[4]{\resizebox{1in}{!}{\mnew{\resizebox{#1}{!}{\includegraphics{#3}}}~\mnew{=}~\mnew{\resizebox{#2}{!}{\includegraphics{#4}}}}}
\def\chartAA{\chartboxaux{.6in}
{Category}
{\gr{chart-11}}
{d:1}{i:1}{c:\chk}}
\def\chartAB{\chartboxaux{.6in}
{Planar monoidal}
{\gr{chart-12}}
{d:2}{i:2}{c:\cite{JS88,JS91}}}
\def\chartAC{\chartboxaux{.6in}
{Spacial monoidal}
{\eq{chart-13a}{chart-13b}}
{d:2}{i:3}{c:conj}}
\def\chartAD{\chartboxaux{.6in}
{Braided monoidal}
{\gr{chart-14}}
{d:3}{i:3}{c:\cite{JS91}}}
\def\chartAE{\chartboxaux{.6in}
{Balanced monoidal}
{\gr{chart-15}}
{d:3f}{i:3f}{c:\cite{JS91}}}
\def\chartAF{\chartboxaux{.6in}
{Symmetric monoidal}
{\gr{chart-16}}
{d:3}{i:4}{c:\cite{JS91}}}
\def\chartAG{\chartboxaux{.6in}
{Product}
{\eq{chart-17a}{chart-17b}}
{d:3}{i:eqn}{c:\chk}}
\def\chartAH{\chartboxaux{.6in}
{Coproduct}
{\eq{chart-18a}{chart-18b}}
{d:3}{i:eqn}{c:\chk}}
\def\chartAI{\chartboxaux{.6in}
{Biproduct}
{\eqb{.35in}{.45in}{chart-19a}{chart-19b}}
{d:3}{i:eqn}{c:\chk}}
\def\chartBA{\chartboxaux{.6in}
{Right traced}
{\gr{chart-21}}
{d:2}{i:2}{c:conj}}
\def\chartBB{\chartboxaux{.6in}
{Planar traced}
{\gr{chart-22}}
{d:2}{i:2}{c:conj}}
\def\chartBC{\chartboxaux{.6in}
{Spacial traced}
{\eq{chart-23a}{chart-23b}}
{d:2}{i:3}{c:conj}}
\def\chartBD{\chartboxaux{.6in}
{Braided traced}
{\gr{chart-24}}
{\small d:2${}^+$}{i:reg.rot}{c:int${}^*$}}
\def\chartBE{\chartboxaux{.6in}
{Balanced traced}
{\doubleline{\eq{chart-25a}{chart-25b}}{\eqb{.55in}{.25in}{chart-25c}{chart-25d}}}
{d:3f}{i:3f}{c:int${}^*$}}
\def\chartBF{\chartboxaux{.6in}
{Symmetric traced}
{\gr{chart-26}}
{d:3}{i:4}{c:int${}^*$}}
\def\chartBG{\chartboxaux{.6in}
{Traced product}
{\gr{chart-27}}
{d:3}{i:eqn}{c:\chk}}
\def\chartBH{\chartboxaux{.6in}
{Traced coproduct}
{\gr{chart-28}}
{d:3}{i:eqn}{c:\chk}}
\def\chartBI{\chartboxaux{.6in}
{Traced biproduct}
{\gr{chart-29}}
{d:3}{i:eqn}{c:\chk}}
\def\chartCA{\chartboxaux{.43in}
{\doubleline{Planar autonomous}{(rigid)}}
{\grb{chart-31}}
{d:2}{i:2}{c:\cite{JS88}}}
\def\chartCB{\chartboxaux{.43in}
{\doubleline{Planar pivotal}{(sovereign)}}
{\grb{chart-32}}
{d:2}{i:2.rot}{c:\cite{FY92}${}^*$}}
\def\chartCC{\chartboxaux{.6in}
{Spacial pivotal}
{\eq{chart-33a}{chart-33b}}
{d:2}{i:3}{c:conj}}
\def\chartCD{\chartboxaux{.6in}
{Braided autonomous}
{\gr{chart-34}}
{d:2${}^+$}{i:reg}{c:\cite{FY92}${}^*$}}
\def\chartCE{\chartboxaux{.43in}
{\doubleline{Braided pivotal}{\fittowidth{1in}{(balanced autonomous)}}}
{\grb{chart-35}}
{\small d:2${}^+$}{i:reg.rot}{c:\cite{FY92}${}^*$}}
\def\chartCF{\chartboxaux{.6in}
{Tortile (ribbon)}
{\gr{chart-36}}
{d:3f}{i:3f}{c:\cite{Shum94}${}^*$}}
\def\chartCG{\chartboxaux{.6in}
{Compact closed}
{\gr{chart-37}}
{d:3}{i:4}{c:\cite{KL80}${}^*$}}
\[
\xymatrix@!R=.42in@C=.7in{
*\txt{\bf Progressive} &
*\txt{\bf Traced} &
*\txt{\bf Autonomous} \\
*{\chartAA} &
*{\chartBA}\ar[dl] &
*{\chartCA}\ar@/_.75in/[]!L;[dll]!UR \\
*{\chartAB}\ar[u] &
*{\chartBB}\ar[u]\ar[l] &
*{\chartCB}\ar[u]\ar[l] \\
*{\chartAC}\ar[u] &
*{\chartBC}\ar[u]\ar[l] &
*{\chartCC}\ar[u]\ar[l]
\save+<1in,0cm>*{\chartCD}="box43"\ar[uu]!RD \restore \\
*{\chartAD}\ar[u] &
\save+<.7in,0cm>*{\chartBD}="box24"\ar[l]\ar!U;[uu]!RD \restore &
\save+<1in,0cm>*{\chartCE}="box34"\ar"box24"\ar!UL;[uu]!RD\ar"box34";"box43" \restore \\
*{\chartAE}\ar[u] &
*{\chartBE}\ar[uu]\ar"box24"\ar[l] &
*{\chartCF}\ar[uu]\ar!UR;"box34"!LD\ar[l] \\
*{\chartAF}\ar[u] &
*{\chartBF}\ar[u]\ar[l] &
*{\chartCG}\ar[u]\ar[l] \\
\save+<-.2in,0in>*{\chartAG}="box17"\ar!U+<.1in,0in>;[u]!D+<-.1in,0in> \restore &
\save+<-.2in,0in>*{\chartBG}="box27"\ar!U+<.1in,0in>;[u]!D+<-.1in,0in>\ar"box17" \restore \\
\save+<.2in,0in>*{\chartAH}="box18"\ar!U+<.1in,0in>;[uu]!D+<.3in,0in> \restore &
\save+<.2in,0in>*{\chartBH}="box28"\ar!U+<.1in,0in>;[uu]!D+<.3in,0in>\ar"box18" \restore \\
*{\chartAI}\ar!U+<.1in,0in>;"box18"!D+<-.1in,0in>\ar!U+<-.3in,0in>;"box17"!D+<-.1in,0in> &
*{\chartBI}\ar!U+<.1in,0in>;"box28"!D+<-.1in,0in>\ar!U+<-.3in,0in>;"box27"!D+<-.1in,0in>\ar[l] \\
}
\]
\caption{Summary of monoidal notions and their graphical languages}
\label{tab-chart}
\end{table}
Table~\ref{tab-chart} summarizes the graphical languages from
Sections~\ref{sec-categories}--{\ref{sec-products}}. The name of each
class of categories is shown along with a typical diagram or equation.
The arrows indicate forgetful functors. We have omitted spherical
categories, because they do not possess a graphical language modulo a
natural notion of isotopy.
The letter $d$ indicates the dimension of the diagrams, and the letter
$i$ indicates the dimension of the ambient space for isotopy. If
$i>d$, then isotopy coincides with isomorphism of diagrams. Special
cases are ``3f'' for framed diagrams and framed isotopy in 3
dimensions; ``2+'' for two-dimensional diagrams with crossings (i.e.,
isotopy is taken on 2-dimensional projections, rather than on
3-dimensional diagrams); ``reg'' for regular isotopy; and ``rot'' to
indicate that isotopy includes rotation of boxes. Finally, ``eqn''
indicates that equivalence of diagrams is taken modulo equational
axioms.
The letter $c$ indicates the status of a coherence theorem. This is
usually a reference to a proof of the theorem, or ``conj'' if the
result is conjectured. A checkmark ``$\chk$'' indicates a result that
is folklore or whose proof is trivial. ``int'' indicates that the
coherence theorem follows from a version of Joyal, Street, and
Verity's Int-construction, and the corresponding coherence theorem for
pivotal categories. An asterisk ``$*$'' indicates that the result has
only been proved for simple signatures.
Dagger variants can be defined for all of the notions shown in
Table~\ref{tab-chart}, except the planar autonomous and braided
autonomous notions. Finally, bicategories require their own
(presumably much larger) table and are not included here.
\newpage
\bibliographystyle{abbrv}
| {
"timestamp": "2009-08-24T01:53:45",
"yymm": "0908",
"arxiv_id": "0908.3347",
"language": "en",
"url": "https://arxiv.org/abs/0908.3347",
"abstract": "This article is intended as a reference guide to various notions of monoidal categories and their associated string diagrams. It is hoped that this will be useful not just to mathematicians, but also to physicists, computer scientists, and others who use diagrammatic reasoning. We have opted for a somewhat informal treatment of topological notions, and have omitted most proofs. Nevertheless, the exposition is sufficiently detailed to make it clear what is presently known, and to serve as a starting place for more in-depth study. Where possible, we provide pointers to more rigorous treatments in the literature. Where we include results that have only been proved in special cases, we indicate this in the form of caveats.",
"subjects": "Category Theory (math.CT)",
"title": "A survey of graphical languages for monoidal categories",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.967899289579129,
"lm_q2_score": 0.731058584489497,
"lm_q1q2_score": 0.7075910845681078
} |
https://arxiv.org/abs/2301.03338 | Topologically Regularized Data Embeddings | Unsupervised representation learning methods are widely used for gaining insight into high-dimensional, unstructured, or structured data. In some cases, users may have prior topological knowledge about the data, such as a known cluster structure or the fact that the data is known to lie along a tree- or graph-structured topology. However, generic methods to ensure such structure is salient in the low-dimensional representations are lacking. This negatively impacts the interpretability of low-dimensional embeddings, and plausibly downstream learning tasks. To address this issue, we introduce topological regularization: a generic approach based on algebraic topology to incorporate topological prior knowledge into low-dimensional embeddings. We introduce a class of topological loss functions, and show that jointly optimizing an embedding loss with such a topological loss function as a regularizer yields embeddings that reflect not only local proximities but also the desired topological structure. We include a self-contained overview of the required foundational concepts in algebraic topology, and provide intuitive guidance on how to design topological loss functions for a variety of shapes, such as clusters, cycles, and bifurcations. We empirically evaluate the proposed approach on computational efficiency, robustness, and versatility in combination with linear and non-linear dimensionality reduction and graph embedding methods. |
\section{From Simplicial Homology to Topological Optimization}
\label{SEC::background}
The purpose of this section is to present how topological optimization works for point clouds---and thus embeddings---in an illustrative manner.
In Section \ref{SUBSEC::filtrations}, we introduce \emph{simplicial complexes} and \emph{filtrations}.
These are the fundamental combinatorial structures from which \emph{persistent homology} quantifies topological information, as will be explained in Section \ref{SUBSEC::persistence}.
In Section \ref{SUBSEC::alpha}, we introduce two popular types of simplicial complexes and filtrations, namely the \emph{Vietoris-Rips complex and filtration}, and the \emph{weak Alpha complex and filtration}, respectively, the latter of which are more convenient for performing low-dimensional topological optimization (such as in the embedding space) due to their constrained size.
Subsequently, in Section \ref{SUBSEC::toploss} we illustrate how \emph{topological loss functions} defined on the \emph{persistence diagram(s)} of a point cloud (embedding) can be used to conduct \emph{topological optimization}.
Finally, in Section \ref{SUBSEC::compcost}, we discuss the computational cost that is associated with persistent homology and topological optimization.
Although many of the definitions in this section extend to more general metric spaces, for simplicity, we will restrict to \emph{point clouds}, i.e., finite sets of distinct points in some Euclidean space $\mathbb{R}^k$.
The working example to explain the concepts of persistent homology and topological optimization will be the two-dimensional point cloud data set ${\bm{X}}$ resembling Pikachu, shown in Figure \ref{Pikachu}.
A simpler example to explain (persistent) simplicial homology computation will also be used (Figure \ref{boundarymap}).
The notation `${\bm{X}}$' is used to denote the point cloud data set, so that the background material in this section is introduced in a general setting.
Within the context of topological regularization, ${\bm{X}}$ is the embedding ${\bm{E}}$.
\begin{figure}
\centering
\begin{subfigure}[t]{.475\linewidth}
\centering
\includegraphics[width=.63\linewidth]{Images/Pikachu}
\caption{}
\label{Pikachu}
\end{subfigure}
\hfill
\begin{subfigure}[t]{.475\linewidth}
\centering
\includegraphics[width=.63\linewidth]{Images/PikachuAlpha}
\caption{}
\label{PikachuAlpha}
\end{subfigure}
\vskip\floatsep
\begin{subfigure}[t]{\linewidth}
\centering
\includegraphics[width=.85\linewidth]{Images/PikachuFiltration}
\caption{}
\label{PikachuFiltration}
\end{subfigure}
\caption{(a) A point cloud data set ${\bm{X}}$ resembling Pikachu.
(b) The Delaunay triangulation constructed from ${\bm{X}}$.
(c) Various simplicial complexes in the weak Alpha filtration---a sequence of subcomplexes from the Delaunay triangulation ordered by inclusion---of ${\bm{X}}$.
The time parameter $\alpha=\infty$ is used to informally denote any time parameter above which the weak Alpha complex equals the Delaunay triangulation.}
\label{PikachuFull}
\end{figure}
\subsection{Simplicial Complexes and Filtrations}
\label{SUBSEC::filtrations}
As mentioned above, simplicial complexes and filtrations are the fundamental combinatorial structures from which topological information is quantified through persistent homology (Section \ref{SUBSEC::persistence}).
In general, a simplicial complex can be seen as a generalization of a graph, where apart from nodes or vertices (0-simplices) and edges (1-simplices), it may also include triangles (2-simplices), tetrahedra (3-simplices), and so on.
\begin{definition}
\label{def::abstractcomplex}
(Abstract Simplicial Complex).
An \emph{abstract simplicial complex} ${\mathcal{K}}$ is a family of sets that is closed under taking subsets.
More specifically, the two defining properties are
\begin{enumerate}
\item ${\mathcal{K}}$ is a set of finite subsets of some given set ${\mathbb{S}}$;
\item if $\Delta'\subseteq\Delta \in {\mathcal{K}}$, then $\Delta'\in {\mathcal{K}}$.
\end{enumerate}
Each element $\Delta$ in ${\mathcal{K}}$ is called a \emph{face}, and its \emph{dimension} equals $|\Delta|-1$.
The \emph{dimension} of ${\mathcal{K}}$ is the maximal dimension of all faces in ${\mathcal{K}}$.
\end{definition}
Loosely speaking, a \emph{simplicial complex} is an abstract simplicial complex with associated geometry.
A simplicial complex consists of \emph{simplices}, which generalize the notions of triangle and tetrahedron to arbitrary dimensions.
\begin{definition}
\label{def::simplex}
(Simplex).
A \emph{simplex} $\Delta$ is the convex hull of $k+1$ affinely independent points ${\bm{v}}_0,\ldots,{\bm{v}}_k\in\mathbb{R}^k$, in which case $\Delta$ is also called a \emph{$k$-simplex}.
The convex hull of any nonempty subset of $\{{\bm{v}}_0,\ldots,{\bm{v}}_k\}$ is called a \emph{face} of $\Delta$.
Thus, faces are simplices themselves.
\end{definition}
In graph theory, vertices and lines are commonly glued together in a particular way to form a drawing, or thus visualization, of a graph in the plane.
Formally, this is a \emph{geometric realization} of a graph \citep{vandaele2021graph}.
In a similar way, simplices are glued together in a particular way to form a simplicial complex.
\begin{definition}
\label{def::simpcomplex}
(Simplicial Complex).
A simplicial complex ${\mathcal{K}}$ is a set of simplices that satisfies
\begin{enumerate}
\item every face of every simplex in ${\mathcal{K}}$ is also contained in ${\mathcal{K}}$;
\item the nonempty intersection of two faces $\Delta,\Delta'\in{\mathcal{K}}$ is a face of both $\Delta$ and $\Delta'$.
\end{enumerate}
\end{definition}
Thus, the first defining property in Definition \ref{def::simpcomplex} is similar to the second defining property of an abstract simplicial complex in Definition \ref{def::abstractcomplex}.
The second property in Definition \ref{def::simpcomplex} is similar to how, in a planar drawing of a graph $G$, two lines representing two edges $e,e'$ of $G$ are not allowed to intersect in a point, unless that point represents a vertex of $G$ that is incident to both $e$ and $e'$.
For the purpose of this paper, it is not an issue to mix up the terminology and notation associated with `abstract simplicial complexes' and `simplicial complexes', i.e., to identify simplicial complexes with their discrete counterparts.
The most important thing to be aware of is that \emph{(persistent) homology} (Section \ref{SUBSEC::persistence}) is concerned with topological properties of the `continuous versions' (geometric realizations) of simplicial complexes, which are computed through their `discrete' (abstract) counterparts.
Compare this to how the connectedness of a graph (as a graph) can be computed through its purely combinatorial and finite representation, but determines the connectedness of any of its geometric realizations (as a topological space containing uncountably many points).
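To make Definition \ref{def::abstractcomplex} concrete, an abstract simplicial complex can be stored as a collection of frozensets of vertices. The following Python sketch (illustrative only, and ignoring the empty face) checks downward closure and computes the dimension for the small complex used later in Figure \ref{boundarymap}.
\begin{verbatim}
from itertools import combinations

def is_abstract_simplicial_complex(K):
    """K is a collection of frozensets of vertices (the nonempty faces).
    Check that K is closed under taking nonempty subsets."""
    K = set(K)
    return all(frozenset(sub) in K
               for face in K
               for r in range(1, len(face))
               for sub in combinations(face, r))

def dimension(K):
    """Dimension of K: the maximum of |face| - 1 over all faces."""
    return max(len(face) for face in K) - 1

# Six vertices, nine edges, and three triangles (the complex of the
# boundary-map example below).
K = {frozenset(s) for s in
     [(0,), (1,), (2,), (3,), (4,), (5,),
      (0,1), (0,2), (1,2), (1,3), (1,4), (2,4), (2,5), (3,4), (4,5),
      (0,1,2), (1,3,4), (2,4,5)]}
assert is_abstract_simplicial_complex(K) and dimension(K) == 2
\end{verbatim}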
Each separate plot in Figure \ref{PikachuFull} illustrates a simplicial complex.
Furthermore, each simplicial complex in Figure \ref{PikachuFiltration} is a \emph{subcomplex} of the next, that is, all of its simplices (in this case: vertices, edges, and triangles) are also simplices of the next simplicial complex.
This is the formal defining property of a \emph{filtration}.
\begin{definition}
\label{def::filtration}
(Filtration).
A \emph{filtration} ${\mathcal{F}}$ is a sequence of simplicial complexes
$$
{\mathcal{F}}=({\mathcal{K}}_0\subseteq{\mathcal{K}}_1\subseteq\ldots\subseteq{\mathcal{K}}_n={\mathcal{K}}).
$$
\end{definition}
Although filtrations do not have to be of the finite sequence form such as in Definition \ref{def::filtration} \citep{oudot:hal-01247501}, in practice they always are.
\subsection{The Vietoris-Rips and Alpha Filtration}
\label{SUBSEC::alpha}
One of the most popular filtrations on point cloud data for practical applications is the \emph{Vietoris-Rips filtration}, formally defined as follows.
\begin{definition}
\label{def::VRfilt}
(Vietoris-Rips Complex and Filtration).
Let ${\bm{X}}$ be a point cloud and $\epsilon\in\mathbb{R}_{\geq 0}$.
The \emph{Vietoris-Rips complex} of ${\bm{X}}$ at time $\epsilon$ is defined as
$$
\mathrm{VR}_\epsilon({\bm{X}})\coloneqq\{\Delta\subseteq{\bm{X}}:\mathrm{diam}(\Delta)\leq\epsilon\},
$$
where $\mathrm{diam}(\Delta)\coloneqq\max_{{\bm{x}},{\bm{y}}\in\Delta}\|{\bm{x}}-{\bm{y}}\|$ is the diameter of $\Delta$.
The \emph{Vietoris-Rips filtration} is the parameterized sequence of simplicial complexes $(\mathrm{VR}_\epsilon)_{\epsilon\in\mathbb{R}_{\geq 0}}$.
\end{definition}
Thus, the Vietoris-Rips complex at time $\epsilon$ contains a simplex for every subset of points with diameter at most $\epsilon$, and the Vietoris-Rips filtration varies the resulting simplicial complex by increasing the parameter $\epsilon$.
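As an illustration, Definition \ref{def::VRfilt} can be implemented directly, at exponential cost, by enumerating candidate simplices and keeping those of small diameter; the Python sketch below (purely illustrative, assuming ${\bm{X}}$ is given as an $n\times k$ NumPy array) does exactly this. Since increasing $\epsilon$ only ever adds simplices, the resulting complexes are nested, as required of a filtration.
\begin{verbatim}
import itertools
import numpy as np

def vietoris_rips(X, eps, max_dim=2):
    """All simplices (frozensets of point indices) of the point cloud X
    with diameter <= eps, up to dimension max_dim."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    VR = set()
    for size in range(1, max_dim + 2):            # simplex sizes 1 .. max_dim+1
        for idx in itertools.combinations(range(n), size):
            pairs = itertools.combinations(idx, 2)
            if size == 1 or max(D[i, j] for i, j in pairs) <= eps:
                VR.add(frozenset(idx))
    return VR
\end{verbatim}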
For practical purposes, simplices with dimension larger than $k+1$ are excluded from the filtration whenever ${\bm{X}}\subseteq\mathbb{R}^k$, since---as will be discussed below---one would be unable to characterize topological information through them.
Even then the Vietoris-Rips filtration may remain challenging to fit into (local) memory.
Therefore, and especially for low-dimensional point clouds, \emph{weak Alpha complexes and filtrations} may be more convenient to use for practical applications.
The simplicial complexes in Figures \ref{PikachuAlpha} and \ref{PikachuFiltration} are examples thereof.
Formally, weak Alpha complexes are \emph{subcomplexes} of a particular simplicial complex termed the \emph{Delaunay triangulation} \citep{delaunay1934sphere}.
\begin{definition}
\label{def::delaunay}
(Delaunay Triangulation).
Let ${\bm{X}}$ be a point cloud.
A \emph{triangulation} ${\mathcal{K}}$ of ${\bm{X}}$ is a simplicial complex that covers the convex hull of ${\bm{X}}$.
A \emph{Delaunay triangulation} of ${\bm{X}}$ is a triangulation ${\mathcal{K}}$ of ${\bm{X}}$ such that no point in ${\bm{X}}$ is inside the circum-hypersphere of any $k$-simplex in ${\mathcal{K}}$.
\end{definition}
Intuitively, Delaunay triangulations maximize the minimum angle of all the triangle angles in the triangulation, i.e., they tend to avoid `skinny triangles'.
It is known that a point cloud ${\bm{X}}$ in $\mathbb{R}^k$ has a unique Delaunay triangulation if ${\bm{X}}$ is in \emph{general position} \citep{delaunay1934sphere}, that is, the affine hull of ${\bm{X}}$, i.e., the smallest affine space containing ${\bm{X}}$, is $k$-dimensional, and no $k+2$ points in ${\bm{X}}$ lie on the boundary of a ball whose interior does not intersect ${\bm{X}}$.
When one deals with finite Euclidean data derived from a continuous distribution, this condition is generally satisfied with probability 1.
Figure \ref{PikachuAlpha} illustrates the Delaunay triangulation of the data ${\bm{X}}$ in Figure \ref{Pikachu}.
Intuitively, weak Alpha filtrations can now be regarded as Vietoris-Rips filtrations defined on the Delaunay triangulation.
\begin{definition}
\label{def::WAfilt}
(Weak Alpha Complex and Filtration).
Let ${\bm{X}}$ be a point cloud with Delaunay triangulation ${\mathcal{K}}$ and $\alpha\in\mathbb{R}_{\geq 0}$.
The \emph{weak Alpha complex} of ${\bm{X}}$ at time $\alpha$ is defined as
$$
\mathrm{WA}_\alpha({\bm{X}})\coloneqq\{\Delta\in{\mathcal{K}}:\mathrm{diam}(\Delta)\leq\alpha\}.
$$
The \emph{weak Alpha filtration} is the parameterized sequence of simplicial complexes $(\mathrm{WA}_\alpha)_{\alpha\in\mathbb{R}_{\geq 0}}$.
\end{definition}
Note that although the filtrations in Definitions~\ref{def::VRfilt} and~\ref{def::WAfilt} are parameterized over an uncountable set, in practice, they are always equivalent to a filtration of finite sequence form such as in Definition~\ref{def::filtration}.
That is because for a point cloud ${\bm{X}}$, the defined filtrations can only change at finitely many time values.
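As an illustrative sketch, the weak Alpha complex of Definition \ref{def::WAfilt} can be obtained from an off-the-shelf Delaunay triangulation, here SciPy's \texttt{Delaunay}, by filtering the faces of the triangulation by diameter.
\begin{verbatim}
import itertools
import numpy as np
from scipy.spatial import Delaunay

def weak_alpha(X, alpha):
    """Faces of the Delaunay triangulation of X (an (n, k) array) with
    diameter <= alpha, returned as frozensets of point indices."""
    tri = Delaunay(X)                 # rows of tri.simplices are the top cells
    WA = set()
    for cell in tri.simplices:
        for size in range(1, len(cell) + 1):
            for face in itertools.combinations(cell, size):
                pts = X[list(face)]
                diam = max((np.linalg.norm(p - q)
                            for p, q in itertools.combinations(pts, 2)),
                           default=0.0)
                if diam <= alpha:
                    WA.add(frozenset(int(i) for i in face))
    return WA
\end{verbatim}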
\subsection{Simplicial Homology}
\label{SUBSEC::homology}
In algebraic and computational topology, \emph{homology} is concerned with studying and quantifying algebraic objects associated to (abstract) simplicial complexes.
It is defined through the \emph{boundary map} of an abstract simplicial complex.
For simplicity of the definitions in this section, we will assume that we have a fixed ordering of all vertices in the abstract simplicial complex, e.g., as induced through the ordering of data points that define the complex, and that computations are performed over a field (such as modulo 2).
\begin{definition}
\label{def::boundarymap}
(Boundary Map).
Let ${\mathcal{K}}$ be an abstract simplicial complex of which the vertices $(v_0, \ldots, v_m)$ are ordered, and $\mathbb{F}$ a field.
The ordering of vertices in ${\mathcal{K}}$ naturally identifies an ordered tuple $(v_{i_0},\ldots,v_{i_k})$, $i_0<\ldots<i_k$, with every simplex $\Delta=\{v_{i_0},\ldots,v_{i_k}\}\in {\mathcal{K}}$, and we say that $\Delta$ is \emph{oriented}.
A \emph{simplicial $k$-chain} is a finite formal sum
$$
\lambda_1\Delta_1+\ldots+\lambda_N\Delta_N,
$$
where $N\in\mathbb{N}$, $\lambda_i$ are coefficients in $\mathbb{F}$, and $\Delta_i$ are oriented $k$-simplices in ${\mathcal{K}}$ (to be interpreted as basis `vectors'), for $1\leq i\leq N$.
The set of all simplicial $k$-chains forms an $\mathbb{F}$-vector space ${\mathcal{K}}_k$ along with the conventional operations for vector spaces.
The \emph{boundary map} $\partial_k:{\mathcal{K}}_k\rightarrow{\mathcal{K}}_{k-1}$ between the $\mathbb{F}$-vector spaces ${\mathcal{K}}_{k}$ and ${\mathcal{K}}_{k-1}$ is defined by letting, for each oriented $k$-simplex $\Delta=(v_{i_0},\ldots,v_{i_k})$,
\begin{equation}
\label{eq::boundarymap}
\partial_k(\Delta)\coloneqq\sum_{j=0}^{k}(-1)^j(v_{i_0},\ldots,v_{i_{j-1}},v_{i_{j+1}},\ldots,v_{i_k}).
\end{equation}
By convention, $\partial_0\equiv 0$, and ${\mathcal{K}}_{-1}=\{0\}$.
The boundary map extends linearly to all $k$-chains in ${\mathcal{K}}_k$.
If $C$ is a $k$-chain in ${\mathcal{K}}_k$, we also refer to $\partial_k(C)$ as the \emph{boundary} of $C$.
\end{definition}
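As a concrete instance of \eqref{eq::boundarymap}, consider the oriented $2$-simplex $(x_0,x_1,x_2)$ of Figure \ref{boundarymap}:
$$
\partial_2(x_0,x_1,x_2)=(x_1,x_2)-(x_0,x_2)+(x_0,x_1),
$$
which over $\mathbb{F}=\mathbb{F}_2$ (where $-1=1$) is simply the sum $x_{12}+x_{02}+x_{01}$ of the three edges bounding the triangle; applying $\partial_1$ to this sum yields $\big((x_2)-(x_1)\big)-\big((x_2)-(x_0)\big)+\big((x_1)-(x_0)\big)=0$, an instance of the general fact stated next.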
It can be shown \citep{Hatcher2002} that $\partial_{k}\circ\partial_{k+1}\equiv 0$.
Thus, the image $\mathrm{Im}(\partial_{k+1})$ is a vector subspace of the kernel $\mathrm{Ker}(\partial_{k})$.
As a result, we can define the \emph{homology module} of an abstract simplicial complex ${\mathcal{K}}$.
The term `module' refers to the fact that these structures can be defined over more general algebraic structures, i.e., \emph{rings} \citep{Hatcher2002}.
For fields however, these are just vector spaces.
\begin{definition}
\label{def::homology}
(Homology Module).
Let ${\mathcal{K}}$ be an abstract simplicial complex and $\mathbb{F}$ a field, with associated boundary maps $\partial_k$.
The kernel $\mathrm{ker}(\partial_k)$ is called the \emph{$k$-th cycle module}, and the image $\mathrm{Im}(\partial_k)$ the \emph{$k$-th boundary module}.
The \emph{$k$-th homology module} of ${\mathcal{K}}$ is the $\mathbb{F}$-vector space
$$
H_k\coloneqq \mathrm{Ker}(\partial_{k}) \,/\, \mathrm{Im}(\partial_{k+1}).
$$
\end{definition}
The relation $\mathrm{Im}\,\partial_{k+1}\subseteq\mathrm{ker}\,\partial_k$ has a geometric interpretation.
It implies that every \emph{$(k+1)$-boundary}, which is a $k$-chain that is the image of some $(k+1)$-chain under $\partial_{k+1}$, has zero boundary under $\partial_k$, and hence, is a \emph{$k$-cycle}.
However, the reverse inclusion does not hold in general.
Thus, there may exist $k$-cycles that are not the boundary of any $(k+1)$-chain.
These correspond to `holes' in the simplicial complex, as illustrated by Figure \ref{boundarymap}.
\begin{figure}
\centering
\begin{tikzpicture}
\node[circle, fill=black, inner sep=0pt, minimum size=5pt, label=above:$x_0$] (x0) at (2, 3.5) {};
\node[circle, fill=black, inner sep=0pt, minimum size=5pt, label=left:$x_1$] (x1) at (1, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=5pt, label=right:$x_2$] (x2) at (3, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=5pt, label=below:$x_3$] (x3) at (0, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=5pt, label=below:$x_4$] (x4) at (2, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=5pt, label=below:$x_5$] (x5) at (4, 0) {};
\draw (x2) -- (x0) -- (x1) -- (x2);
\draw (x4) -- (x1) -- (x3) -- (x4);
\draw (x5) -- (x2) -- (x4) -- (x5);
\begin{pgfonlayer}{background}
\fill[green] (x0.center) -- (x1.center) -- (x2.center) -- cycle;
\fill[green] (x1.center) -- (x3.center) -- (x4.center) -- cycle;
\fill[green] (x2.center) -- (x4.center) -- (x5.center) -- cycle;
\end{pgfonlayer}
\draw[->, thick](4, 1.75) -- (6, 1.75) node[pos=0.5, above]{$\partial_2$};
\node[circle, fill=black, inner sep=0pt, minimum size=5pt, label=above:$x_0$] (y0) at (8, 3.5) {};
\node[circle, fill=black, inner sep=0pt, minimum size=5pt, label=left:$x_1$] (y1) at (7, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=5pt, label=right:$x_2$] (y2) at (9, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=5pt, label=below:$x_3$] (y3) at (6, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=5pt, label=below:$x_4$] (y4) at (8, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=5pt, label=below:$x_5$] (y5) at (10, 0) {};
\draw (y2) -- (y0) -- (y1) -- (y2);
\draw (y4) -- (y1) -- (y3) -- (y4);
\draw (y5) -- (y2) -- (y4) -- (y5);
\draw[red, opacity=0.35, line width=5pt] (y1) -- (y2) -- (y4) -- (y1);
\end{tikzpicture}
\caption{Geometric interpretation of the boundary operator.
$\partial_{k+1}$ maps a ($k+1$)-chain to a $k$-cycle.
Some $k$-cycles, such as the one marked in red, are not the boundary of any $(k+1)$-chain.
These correspond to `holes' in the simplicial complex.}
\label{boundarymap}
\vskip\floatsep
{
\makeatletter\setlength\BA@colsep{2.25pt}\makeatother
\begin{blockarray}{ccccccccccccccccccc}
& $x_0$ & $x_1$ & $x_2$ & $x_3$ & $x_4$ & $x_5$ & $x_{01}$ & $x_{02}$ & $x_{12}$ & $x_{13}$ & $x_{14}$ & $x_{24}$ & $x_{25}$ & $x_{34}$ & $x_{45}$ & $x_{012}$ & $x_{134}$ & $x_{245}$ \\
\begin{block}{c@{\hspace{10pt}}(cccccccccccccccccc)}
$x_0$ & 0 & 0 & 0 & 0 & 0 & 0 & \tikzmark{left1}{1} & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$x_1$ & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$x_2$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\
$x_3$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
$x_4$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 \\
$x_5$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & \tikzmark{right1}{1} & 0 & 0 & 0\\
$x_{01}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \tikzmark{left2}{1} & 0 & 0 \\
$x_{02}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
$x_{12}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
$x_{13}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
$x_{14}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
$x_{24}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
$x_{25}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
$x_{34}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
$x_{45}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \tikzmark{right2}{1}\\
$x_{012}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$x_{134}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$x_{245}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\end{block}
\end{blockarray}
\Highlight[first]{left1}{right1}
\Highlight[second]{left2}{right2}
\tikz[overlay,remember picture]{
\node at (.7, 2.95) (B1) {\textcolor{blue}{$\partial_1\sim B_1$}};
\node at (.7, -0.65) (B2) {\textcolor{blue}{$\partial_2 \sim B_2$}};
\draw[->,thick,blue] (first) -- (B1);
\draw[->,thick,blue] (second) -- (B2);
}
}
\caption{The full boundary matrix $B$ of the left simplicial complex in Figure \ref{boundarymap} over $\mathbb{F}=\mathbb{F}_2$, containing the matrix representations of all boundary operators $\partial_k$.
The labels $x_i$, $x_{ij}$, and $x_{ijk}$, are short for the simplices $\{x_i\}$, $\{x_i,x_j\}$, and $\{x_i,x_j,x_k\}$, respectively.}
\label{boundarymatrix}
\end{figure}
The \emph{$k$-th Betti number} $\beta_k$ quantifies the extent to which the reverse inclusion `$\mathrm{Ker}(\partial_k)\subseteq\mathrm{Im}(\partial_{k+1})$' fails.
\begin{definition}
\label{def::betti}
(Betti Number).
Let ${\mathcal{K}}$ be an abstract simplicial complex and $\mathbb{F}$ a field, with associated homology modules $H_k$.
The \emph{$k$-th Betti number} of ${\mathcal{K}}$, denoted $\beta_k$, is the rank of $H_k$, i.e., its number of generators.
\end{definition}
The $k$-th Betti number quantifies the number of $k$-dimensional holes in a (geometric realization of) an abstract simplicial complex.
For example, the simplicial complex in Figure \ref{boundarymap} (Left) has one 0-dimensional hole (one connected component), one 1-dimensional hole (one loop), and no higher-dimensional holes.
Betti numbers can vary over the choice of field however.
For example, for the triangulation ${\mathcal{K}}$ of the \emph{Klein bottle}, one may find that $\beta_1({\mathcal{K}})=2$ over $\mathbb{F}=\mathbb{F}_2$, and $\beta_1({\mathcal{K}})=1$ over $\mathbb{F}=\mathbb{F}_p$ for any prime number $p>2$ \citep{maria2014algorithms}.
This is related to the fact that the integral homology of the Klein bottle has torsion.
For more intuitive shapes in low dimensions however, which may be of particular interest during topological regularization, this is rarely an issue.
Without further specification, all (persistent) homology computations in this paper are performed over $\mathbb{F}=\mathbb{F}_2$.
\paragraph{Computing Simplicial Homology}
From a given simplicial complex ${\mathcal{K}}$, we can compute the Betti numbers by performing linear operations on the \emph{boundary matrices} of ${\mathcal{K}}$.
These are the matrix representations of the boundary operators $\partial_k$.
For $\mathbb{F}=\mathbb{F}_2$, these are just sparse incidence matrices $B_{k}$, where $(B_{k})_{ij}=1$ if the $i$-th $(k-1)$-dimensional simplex $\Delta_i$ is a face of the $j$-th $k$-dimensional simplex $\Delta_j$, and $(B_{k})_{ij}=0$ otherwise.
Figure \ref{boundarymatrix} shows the boundary matrices $B_k$ of the left simplicial complex in Figure \ref{boundarymap} collected into the `full' boundary matrix $B$, which contains a row and column for every simplex in the simplicial complex.
Ranks of the homology modules, or thus Betti numbers, can then be derived from the ranks of the matrices $B_k$ through the rank–nullity theorem.
For our running example (Figure \ref{boundarymatrix}), over $\mathbb{F}=\mathbb{F}_2$, we have
$$
\begin{cases}
\mathrm{rank}(B_0)=\mathrm{rank}(\mathrm{Im}(\partial_0))=0 \mbox{ (Definition \ref{def::boundarymap})},\\
\mathrm{rank}(B_1)=\mathrm{rank}(\mathrm{Im}(\partial_1))=5,\\
\mathrm{rank}(B_2)=\mathrm{rank}(\mathrm{Im}(\partial_2))=3.
\end{cases}
$$
Note that these ranks were computed in SageMath \citep{sagemath}.
From the rank-nullity theorem, we obtain
$$
\begin{cases}
\mathrm{rank}(\mathrm{Im}(\partial_0)) + \mathrm{rank}(\mathrm{Ker}(\partial_0)) = 6\iff \mathrm{rank}(\mathrm{Ker}(\partial_0)) = 6,\\
\mathrm{rank}(\mathrm{Im}(\partial_1)) + \mathrm{rank}(\mathrm{Ker}(\partial_1)) = 9\iff \mathrm{rank}(\mathrm{Ker}(\partial_1)) = 4.
\end{cases}
$$
We thus find that
$$
\begin{cases}
\beta_0=\mathrm{rank}(H_0) = \mathrm{rank}(\mathrm{Ker}(\partial_0)) - \mathrm{rank}(\mathrm{Im}(\partial_1)) = 6 - 5 = 1,\\
\beta_1=\mathrm{rank}(H_1) = \mathrm{rank}(\mathrm{Ker}(\partial_1)) - \mathrm{rank}(\mathrm{Im}(\partial_2)) = 4 - 3 = 1.
\end{cases}
$$
Hence, we find that the left simplicial complex in Figure \ref{boundarymap} has $\beta_0=1$ connected component, and $\beta_1=1$ loop, which is consistent with what we observe from the figure.
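The computation above is mechanical enough to script. The following self-contained Python sketch (for illustration only; any computer algebra system performs the same steps) builds the boundary matrices over $\mathbb{F}_2$, computes their ranks by Gaussian elimination, and recovers the Betti numbers through the rank--nullity theorem.
\begin{verbatim}
import itertools
import numpy as np

def boundary_matrix(k_simplices, km1_simplices):
    """Matrix of partial_k over F_2: rows index (k-1)-simplices, columns k-simplices."""
    row = {s: i for i, s in enumerate(km1_simplices)}
    B = np.zeros((len(km1_simplices), len(k_simplices)), dtype=int)
    for j, s in enumerate(k_simplices):
        for face in itertools.combinations(sorted(s), len(s) - 1):
            B[row[frozenset(face)], j] = 1
    return B

def rank_f2(B):
    """Rank of a 0/1 matrix over F_2 by Gaussian elimination."""
    A = B.copy() % 2
    rank = 0
    for col in range(A.shape[1]):
        pivot = next((r for r in range(rank, A.shape[0]) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]       # swap rows
        for r in range(A.shape[0]):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]                   # add pivot row mod 2
        rank += 1
    return rank

def betti_numbers(simplices_by_dim):
    """simplices_by_dim[k] lists the k-simplices as frozensets of vertices."""
    max_dim = max(simplices_by_dim)
    rk_im = {0: 0}                                # rank(Im partial_0) = 0 by convention
    for k in range(1, max_dim + 1):
        rk_im[k] = rank_f2(boundary_matrix(simplices_by_dim[k],
                                           simplices_by_dim[k - 1]))
    betti = {}
    for k in range(max_dim + 1):
        rk_ker = len(simplices_by_dim[k]) - rk_im[k]   # rank-nullity
        betti[k] = rk_ker - rk_im.get(k + 1, 0)
    return betti

# The left simplicial complex of the boundary-map figure above.
V = [frozenset([i]) for i in range(6)]
E = [frozenset(e) for e in [(0,1),(0,2),(1,2),(1,3),(1,4),(2,4),(2,5),(3,4),(4,5)]]
T = [frozenset(t) for t in [(0,1,2),(1,3,4),(2,4,5)]]
print(betti_numbers({0: V, 1: E, 2: T}))   # {0: 1, 1: 1, 2: 0}
\end{verbatim}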
\subsection{Persistent Homology and Persistence Diagrams}
\label{SUBSEC::persistence}
When given a filtration parameterized by a time parameter $t$, such as $t=\alpha$ in the case of the weak Alpha filtration, one may observe changes in the topological holes in the (abstract) simplicial complex when $t$ increases.
For example, as illustrated in Figure \ref{PikachuFiltration}, different connected components may become connected through edges, or cycles may either appear or disappear.
When a topological hole comes into existence in the filtration, we say that it is \emph{born}.
Vice versa, we say that a topological hole \emph{dies} when it disappears.
The filtration times at which these events occur are called the \emph{birth time} $b$ and \emph{death time} $d$, respectively.
Simply put, \emph{persistent homology tracks the birth and death times of topological holes across a filtration}.
Persistent homology is commonly quantified through a \emph{persistence diagram} (Figure \ref{PikachuDiagram}).
Formally, a $k$-dimensional persistence diagram is a set ${\mathcal{D}}_k\subseteq\mathbb{R}^2$, decomposed as
\begin{align*}
\underbrace{\left\{(a_i,\infty):1\leq i\leq N\right\}}_{\eqqcolon {\mathcal{D}}^{\mathrm{ess}}_k}\cup\underbrace{\left\{(b_j,d_j):1\leq j\leq M\wedge b_j<d_j\right\}}_{\eqqcolon {\mathcal{D}}^{\mathrm{reg}}_k}\cup\left\{(x,x):x\in\mathbb{R}\right\},
\end{align*}
where $a_i, b_j, d_j$, $1\leq i\leq N$, $1\leq j\leq M$, correspond to the birth and death times of $k$-dimensional holes across the filtration.
Points $(a_i,\infty)$ are usually displayed on top of the diagram.
They correspond to holes that never die across the filtration, and form the \emph{essential part} of the persistence diagram.
In the case of filtrations defined in Section \ref{SUBSEC::alpha}, one always has ${\mathcal{D}}^{\mathrm{ess}}_0=\{(0,\infty)\}$, and ${\mathcal{D}}^{\mathrm{ess}}_k=\emptyset$ for $k\geq 1$.
This is because eventually, the simplicial complex in the filtration will consist of one connected component that never dies, and any higher dimensional hole will be `filled in', as illustrated by Figure \ref{PikachuFiltration}.
The points $(b_j,d_j)$ in ${\mathcal{D}}_k$ with finite coordinates $b_j<d_j$ form the \emph{regular part} ${\mathcal{D}}^{\mathrm{reg}}_k$ of the persistence diagram.
Finally, the diagonal is included in a persistence diagram so as to allow for a well-defined distance metric between persistence diagrams, termed the \emph{bottleneck distance}.
\begin{figure}[t]
\centering
\includegraphics[width=.725\linewidth]{Images/PikachuDiagram}
\caption{The persistence diagrams ${\mathcal{D}}_0$ and ${\mathcal{D}}_1$ of the weak Alpha filtration in Figure \ref{PikachuFiltration} visualized on top of each other.
$H_k$ refers to homology dimension $k$.
The 13 encircled points represent characteristic shapes of Pikachu's eyes, mouth, nose, cheeks, and ears.}
\label{PikachuDiagram}
\end{figure}
\begin{definition}
(Bottleneck Distance).
Let ${\mathcal{D}}$ and ${\mathcal{D}}'$ be two persistence diagrams.
The \emph{bottleneck distance} between them is defined as
$$
d_{\mathrm{b}}\left({\mathcal{D}}, {\mathcal{D}}'\right)\coloneqq\inf_{\varphi}\sup_x\|x-\varphi(x)\|_{\infty}\in\mathbb{R}\cup\{\infty\},
$$
where $\varphi$ ranges over all bijections from ${\mathcal{D}}$ to ${\mathcal{D}}'$, and $x$ ranges over all points in ${\mathcal{D}}$.
By convention, we let $\infty-\infty=0$ when calculating the distance between two diagram points.
Since persistence diagrams include the diagonal by definition, $|{\mathcal{D}}|=|{\mathcal{D}}'|=|\mathbb{R}|$.
Thus, $d_{\mathrm{b}}\left({\mathcal{D}}, {\mathcal{D}}'\right)$ is well-defined, as unmatched points in the diagram can always be matched to the diagonal.
\end{definition}
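For very small diagrams with finite points, the bottleneck distance can be evaluated by brute force using the standard reduction to perfect matchings: each off-diagonal point is matched either to an off-diagonal point of the other diagram or to the diagonal, the latter at cost equal to half its persistence (its $L_\infty$-distance to the diagonal). The Python sketch below is purely illustrative (exponential in the number of points, and omitting the essential parts); practical implementations use dedicated matching algorithms.
\begin{verbatim}
import itertools

def bottleneck_small(D1, D2):
    """Brute-force bottleneck distance between two small diagrams given as
    lists of finite (birth, death) points."""
    def to_diag(p):                    # L_inf distance of p to the diagonal
        return (p[1] - p[0]) / 2.0
    def dist(p, q):                    # L_inf distance between two points
        return max(abs(p[0] - q[0]), abs(p[1] - q[1]))
    n, m = len(D1), len(D2)
    A = list(D1) + ["diag"] * m        # pad with diagonal slots so that a
    B = list(D2) + ["diag"] * n        # perfect matching always exists
    best = float("inf")
    for perm in itertools.permutations(range(n + m)):
        worst = 0.0
        for i, j in enumerate(perm):
            p, q = A[i], B[j]
            if p == "diag" and q == "diag":
                c = 0.0
            elif p == "diag":
                c = to_diag(q)
            elif q == "diag":
                c = to_diag(p)
            else:
                c = dist(p, q)
            worst = max(worst, c)
            if worst >= best:
                break
        best = min(best, worst)
    return best

print(bottleneck_small([(0.2, 1.0)], [(0.25, 0.9), (0.5, 0.55)]))   # -> 0.1
\end{verbatim}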
The bottleneck distance commonly justifies the use of persistent homology as a stable method for quantifying topological information in data, i.e., robust to small data perturbations \citep{oudot:hal-01247501}.
Figure \ref{PikachuDiagram} shows the 0- and 1-dimensional persistence diagram of the weak Alpha filtration in Figure \ref{PikachuFiltration}.
Prominent holes during the filtration are those that persist for a long time interval $d-b$, even indefinitely if $d=\infty$.
In the persistence diagrams, these are points that are `highly' elevated above the diagonal.
Points $(b,\infty)$ are commonly plotted on top of the diagram, such as in Figure \ref{PikachuDiagram}.
From the persistence diagrams ${\mathcal{D}}_0$ and ${\mathcal{D}}_1$ that are plotted on top of each other in Figure \ref{PikachuDiagram}, one may observe 13 prominently elevated points with an early birth-time $b$, that we encircled in Figure \ref{PikachuDiagram}.
The 5 encircled points in ${\mathcal{D}}_0$ capture the characteristic connected components of Pikachu's face: its two eyes, mouth, nose, and contour.
The 8 encircled points in ${\mathcal{D}}_1$ capture the characteristic loops of Pikachu's face: its two eyes, two cycles in its mouth, its two cheeks, and the two tops of its ears.
While birth-times of connected components naturally equal zero in a weak Alpha filtration, the 8 characteristic loops have an early but nonzero birth-time in the filtration as they occur on a small scale, i.e., they can only be defined by connecting distinct data points that are close to each other (Figure \ref{PikachuFiltration}).
Nevertheless, one may also observe prominently elevated points in ${\mathcal{D}}_1$ with larger birth-times.
These correspond to loops that occur on a larger scale and can only be defined by connecting more distant points in the point cloud data.
For example, the loop around Pikachu's forehead, present at times $\alpha=10$ and $\alpha=20$ in Figure \ref{PikachuFiltration}, is the most persistent cycle in the point cloud.
\paragraph{Computing Persistent Homology}
\begin{algorithm}[b!]
\caption{The standard algorithm for computing persistence.}
\label{alg:ph}
\hspace*{\algorithmicindent} \textbf{Input} boundary matrix $B$ of filtration ${\mathcal{F}}$ on simplicial complex ${\mathcal{K}}$.
\begin{algorithmic}[1]
\For{$j=1$ to $|{\mathcal{K}}|$}
\While{there exists $i<j$ with $\mathrm{low}(i)=\mathrm{low}(j)$}
\State add column $i$ to column $j$.
\EndWhile
\EndFor
\end{algorithmic}
\end{algorithm}
Computing persistent homology from a filtration ${\mathcal{F}}=({\mathcal{K}}_0\subseteq{\mathcal{K}}_1\subseteq\ldots\subseteq{\mathcal{K}}_n={\mathcal{K}})$ is similar to computing regular simplicial homology in that it is performed on the (now full) boundary matrix $B$ of the final simplicial complex ${\mathcal{K}}$ (Figure \ref{boundarymatrix}).
However, the rows and columns of $B$ must first be ordered by the time of appearance of their corresponding simplices in the filtration ${\mathcal{F}}$.
For example, the boundary matrix $B$ in Figure \ref{boundarymatrix} would be the matrix representation of the filtration
\begin{equation}
\label{eq:filtrationexample}
{\mathcal{F}}=(\{\{x_0\}\}\subseteq\{\{x_0\},\{x_1\}\}\subseteq\ldots\subseteq\{\{x_0\}, \{x_1\}, \ldots, \{x_2, x_4, x_5\}\}={\mathcal{K}}),
\end{equation}
where ${\mathcal{K}}$ is the left simplicial complex in Figure \ref{boundarymap}.
Thus, the simplices that appear in ${\mathcal{F}}$ are ordered by size first, then by lexicographical ordering of the indices of their included vertices, as shown in Figure \ref{examplefiltration}.
Note that we did not specify the exact times of appearance associated with these simplicial complexes or, consequently, with the simplices in ${\mathcal{K}}$.
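For concreteness, the column reduction of Algorithm \ref{alg:ph} can be written in a few lines. In the Python sketch below (illustrative only), $\mathrm{low}(j)$ denotes the largest row index carrying a $1$ in column $j$, columns are stored as sets of row indices, and adding column $i$ to column $j$ over $\mathbb{F}_2$ is a symmetric difference. A surviving lowest one $\mathrm{low}(j)=i$ pairs simplex $i$ (birth) with simplex $j$ (death); the birth and death times are then the filtration values of these paired simplices, and simplices that remain unpaired give the essential classes.
\begin{verbatim}
import itertools

def reduce_boundary(columns):
    """columns[j]: set of row indices holding a 1 in column j of the
    filtration-ordered boundary matrix.  Returns the reduced columns."""
    cols = [set(c) for c in columns]
    owner = {}           # low(i) -> index of the reduced column with lowest 1 in row i
    for j in range(len(cols)):
        while cols[j] and max(cols[j]) in owner:
            cols[j] ^= cols[owner[max(cols[j])]]   # add column i to column j over F_2
        if cols[j]:
            owner[max(cols[j])] = j
    return cols

def persistence_pairs(reduced):
    """Each nonzero reduced column j pairs simplex low(j) (birth) with simplex j (death)."""
    return [(max(c), j) for j, c in enumerate(reduced) if c]

# Example: the filtration above on the six-point complex.  Simplex order:
# x0..x5 (indices 0-5), nine edges (6-14), three triangles (15-17).
edges = [(0,1),(0,2),(1,2),(1,3),(1,4),(2,4),(2,5),(3,4),(4,5)]
tris  = [(0,1,2),(1,3,4),(2,4,5)]
simplices = ([frozenset([i]) for i in range(6)]
             + [frozenset(e) for e in edges]
             + [frozenset(t) for t in tris])
index = {s: i for i, s in enumerate(simplices)}
cols = []
for s in simplices:
    facets = itertools.combinations(sorted(s), len(s) - 1) if len(s) > 1 else []
    cols.append({index[frozenset(f)] for f in facets})
pairs = persistence_pairs(reduce_boundary(cols))
\end{verbatim}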
\begin{figure}[p]
\centering
\begin{tikzpicture}[scale=.3]
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=above:\scriptsize $x_0$] (x0) at (2, 3.5) {};
\node[circle, fill=white, inner sep=0pt, minimum size=3pt, label=left:{\color{white} \scriptsize $x_1$}] (x1) at (1, 1.75) {};
\node[circle, fill=white, inner sep=0pt, minimum size=3pt, label=right:{\color{white} \scriptsize $x_2$}] (x2) at (3, 1.75) {};
\node[circle, fill=white, inner sep=0pt, minimum size=3pt, label=below:{\color{white} \scriptsize $x_3$}] (x3) at (0, 0) {};
\node[circle, fill=white, inner sep=0pt, minimum size=3pt, label=below:{\color{white} \scriptsize $x_4$}] (x4) at (2, 0) {};
\node[circle, fill=white, inner sep=0pt, minimum size=3pt, label=below:{\color{white} \scriptsize $x_5$}] (x5) at (4, 0) {};
\end{tikzpicture}
\hfill
\begin{tikzpicture}[scale=.3]
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=above:\scriptsize $x_0$] (x0) at (2, 3.5) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=left:\scriptsize $x_1$] (x1) at (1, 1.75) {};
\node[circle, fill=white, inner sep=0pt, minimum size=3pt, label=right:{\color{white} \scriptsize $x_2$}] (x2) at (3, 1.75) {};
\node[circle, fill=white, inner sep=0pt, minimum size=3pt, label=below:{\color{white} \scriptsize $x_3$}] (x3) at (0, 0) {};
\node[circle, fill=white, inner sep=0pt, minimum size=3pt, label=below:{\color{white} \scriptsize $x_4$}] (x4) at (2, 0) {};
\node[circle, fill=white, inner sep=0pt, minimum size=3pt, label=below:{\color{white} \scriptsize $x_5$}] (x5) at (4, 0) {};
\end{tikzpicture}
\hfill
\begin{tikzpicture}[scale=.3]
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=above:\scriptsize $x_0$] (x0) at (2, 3.5) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=left:\scriptsize $x_1$] (x1) at (1, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=right:\scriptsize $x_2$] (x2) at (3, 1.75) {};
\node[circle, fill=white, inner sep=0pt, minimum size=3pt, label=below:{\color{white} \scriptsize $x_3$}] (x3) at (0, 0) {};
\node[circle, fill=white, inner sep=0pt, minimum size=3pt, label=below:{\color{white} \scriptsize $x_4$}] (x4) at (2, 0) {};
\node[circle, fill=white, inner sep=0pt, minimum size=3pt, label=below:{\color{white} \scriptsize $x_5$}] (x5) at (4, 0) {};
\end{tikzpicture}
\hfill
\begin{tikzpicture}[scale=.3]
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=above:\scriptsize $x_0$] (x0) at (2, 3.5) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=left:\scriptsize $x_1$] (x1) at (1, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=right:\scriptsize $x_2$] (x2) at (3, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_3$] (x3) at (0, 0) {};
\node[circle, fill=white, inner sep=0pt, minimum size=3pt, label=below:{\color{white} \scriptsize $x_4$}] (x4) at (2, 0) {};
\node[circle, fill=white, inner sep=0pt, minimum size=3pt, label=below:{\color{white} \scriptsize $x_5$}] (x5) at (4, 0) {};
\end{tikzpicture}
\hfill
\begin{tikzpicture}[scale=.3]
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=above:\scriptsize $x_0$] (x0) at (2, 3.5) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=left:\scriptsize $x_1$] (x1) at (1, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=right:\scriptsize $x_2$] (x2) at (3, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_3$] (x3) at (0, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_4$] (x4) at (2, 0) {};
\node[circle, fill=white, inner sep=0pt, minimum size=3pt, label=below:{\color{white} \scriptsize $x_5$}] (x5) at (4, 0) {};
\end{tikzpicture}
\hfill
\begin{tikzpicture}[scale=.3]
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=above:\scriptsize $x_0$] (x0) at (2, 3.5) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=left:\scriptsize $x_1$] (x1) at (1, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=right:\scriptsize $x_2$] (x2) at (3, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_3$] (x3) at (0, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_4$] (x4) at (2, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_5$] (x5) at (4, 0) {};
\end{tikzpicture}
\newline
\begin{tikzpicture}[scale=.3]
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=above:\scriptsize $x_0$] (x0) at (2, 3.5) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=left:\scriptsize $x_1$] (x1) at (1, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=right:\scriptsize $x_2$] (x2) at (3, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_3$] (x3) at (0, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_4$] (x4) at (2, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_5$] (x5) at (4, 0) {};
\draw (x0) -- (x1);
\end{tikzpicture}
\begin{tikzpicture}[scale=.3]
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=above:\scriptsize $x_0$] (x0) at (2, 3.5) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=left:\scriptsize $x_1$] (x1) at (1, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=right:\scriptsize $x_2$] (x2) at (3, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_3$] (x3) at (0, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_4$] (x4) at (2, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_5$] (x5) at (4, 0) {};
\draw (x2) -- (x0) -- (x1);
\end{tikzpicture}
\hfill
\begin{tikzpicture}[scale=.3]
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=above:\scriptsize $x_0$] (x0) at (2, 3.5) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=left:\scriptsize $x_1$] (x1) at (1, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=right:\scriptsize $x_2$] (x2) at (3, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_3$] (x3) at (0, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_4$] (x4) at (2, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_5$] (x5) at (4, 0) {};
\draw (x2) -- (x0) -- (x1) -- (x2);
\end{tikzpicture}
\hfill
\begin{tikzpicture}[scale=.3]
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=above:\scriptsize $x_0$] (x0) at (2, 3.5) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=left:\scriptsize $x_1$] (x1) at (1, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=right:\scriptsize $x_2$] (x2) at (3, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_3$] (x3) at (0, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_4$] (x4) at (2, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_5$] (x5) at (4, 0) {};
\draw (x2) -- (x0) -- (x1) -- (x2);
\draw (x1) -- (x3);
\end{tikzpicture}
\hfill
\begin{tikzpicture}[scale=.3]
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=above:\scriptsize $x_0$] (x0) at (2, 3.5) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=left:\scriptsize $x_1$] (x1) at (1, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=right:\scriptsize $x_2$] (x2) at (3, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_3$] (x3) at (0, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_4$] (x4) at (2, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_5$] (x5) at (4, 0) {};
\draw (x2) -- (x0) -- (x1) -- (x2);
\draw (x4) -- (x1) -- (x3);
\end{tikzpicture}
\hfill
\begin{tikzpicture}[scale=.3]
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=above:\scriptsize $x_0$] (x0) at (2, 3.5) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=left:\scriptsize $x_1$] (x1) at (1, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=right:\scriptsize $x_2$] (x2) at (3, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_3$] (x3) at (0, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_4$] (x4) at (2, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_5$] (x5) at (4, 0) {};
\draw (x2) -- (x0) -- (x1) -- (x2);
\draw (x4) -- (x1) -- (x3);
\draw (x2) -- (x4);
\end{tikzpicture}
\newline
\begin{tikzpicture}[scale=.3]
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=above:\scriptsize $x_0$] (x0) at (2, 3.5) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=left:\scriptsize $x_1$] (x1) at (1, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=right:\scriptsize $x_2$] (x2) at (3, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_3$] (x3) at (0, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_4$] (x4) at (2, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_5$] (x5) at (4, 0) {};
\draw (x2) -- (x0) -- (x1) -- (x2);
\draw (x4) -- (x1) -- (x3);
\draw (x5) -- (x2) -- (x4);
\end{tikzpicture}
\begin{tikzpicture}[scale=.3]
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=above:\scriptsize $x_0$] (x0) at (2, 3.5) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=left:\scriptsize $x_1$] (x1) at (1, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=right:\scriptsize $x_2$] (x2) at (3, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_3$] (x3) at (0, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_4$] (x4) at (2, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_5$] (x5) at (4, 0) {};
\draw (x2) -- (x0) -- (x1) -- (x2);
\draw (x4) -- (x1) -- (x3) -- (x4);
\draw (x5) -- (x2) -- (x4);
\end{tikzpicture}
\hfill
\begin{tikzpicture}[scale=.3]
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=above:\scriptsize $x_0$] (x0) at (2, 3.5) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=left:\scriptsize $x_1$] (x1) at (1, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=right:\scriptsize $x_2$] (x2) at (3, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_3$] (x3) at (0, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_4$] (x4) at (2, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_5$] (x5) at (4, 0) {};
\draw (x2) -- (x0) -- (x1) -- (x2);
\draw (x4) -- (x1) -- (x3) -- (x4);
\draw (x5) -- (x2) -- (x4) -- (x5);
\end{tikzpicture}
\hfill
\begin{tikzpicture}[scale=.3]
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=above:\scriptsize $x_0$] (x0) at (2, 3.5) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=left:\scriptsize $x_1$] (x1) at (1, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=right:\scriptsize $x_2$] (x2) at (3, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_3$] (x3) at (0, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_4$] (x4) at (2, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_5$] (x5) at (4, 0) {};
\draw (x2) -- (x0) -- (x1) -- (x2);
\draw (x4) -- (x1) -- (x3) -- (x4);
\draw (x5) -- (x2) -- (x4) -- (x5);
\begin{pgfonlayer}{background}
\fill[green] (x0.center) -- (x1.center) -- (x2.center) -- cycle;
\end{pgfonlayer}
\end{tikzpicture}
\hfill
\begin{tikzpicture}[scale=.3]
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=above:\scriptsize $x_0$] (x0) at (2, 3.5) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=left:\scriptsize $x_1$] (x1) at (1, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=right:\scriptsize $x_2$] (x2) at (3, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_3$] (x3) at (0, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_4$] (x4) at (2, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_5$] (x5) at (4, 0) {};
\draw (x2) -- (x0) -- (x1) -- (x2);
\draw (x4) -- (x1) -- (x3) -- (x4);
\draw (x5) -- (x2) -- (x4) -- (x5);
\begin{pgfonlayer}{background}
\fill[green] (x0.center) -- (x1.center) -- (x2.center) -- cycle;
\fill[green] (x1.center) -- (x3.center) -- (x4.center) -- cycle;
\end{pgfonlayer}
\end{tikzpicture}
\hfill
\begin{tikzpicture}[scale=.3]
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=above:\scriptsize $x_0$] (x0) at (2, 3.5) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=left:\scriptsize $x_1$] (x1) at (1, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=right:\scriptsize $x_2$] (x2) at (3, 1.75) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_3$] (x3) at (0, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_4$] (x4) at (2, 0) {};
\node[circle, fill=black, inner sep=0pt, minimum size=3pt, label=below:\scriptsize $x_5$] (x5) at (4, 0) {};
\draw (x2) -- (x0) -- (x1) -- (x2);
\draw (x4) -- (x1) -- (x3) -- (x4);
\draw (x5) -- (x2) -- (x4) -- (x5);
\begin{pgfonlayer}{background}
\fill[green] (x0.center) -- (x1.center) -- (x2.center) -- cycle;
\fill[green] (x1.center) -- (x3.center) -- (x4.center) -- cycle;
\fill[green] (x2.center) -- (x4.center) -- (x5.center) -- cycle;
\end{pgfonlayer}
\end{tikzpicture}
\caption{A filtration of the simplicial complex on the left in Figure \ref{boundarymap}, for which the matrix $B$ in Figure \ref{boundarymatrix} is the boundary matrix.}
\label{examplefiltration}
\vskip\floatsep
{
\makeatletter\setlength\BA@colsep{2.25pt}\makeatother
\begin{blockarray}{ccccccccccccccccccc}
& $x_0$ & $x_1$ & $x_2$ & $x_3$ & $x_4$ & $x_5$ & $x_{01}$ & $x_{02}$ & $x_{12}$ & $x_{13}$ & $x_{14}$ & $x_{24}$ & $x_{25}$ & $x_{34}$ & $x_{45}$ & $x_{012}$ & $x_{134}$ & $x_{245}$ \\
\begin{block}{c@{\hspace{10pt}}(cccccccccccccccccc)}
$x_0$ & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$x_1$ & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$x_2$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
$x_3$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$x_4$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$x_5$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
$x_{01}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
$x_{02}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
$x_{12}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
$x_{13}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
$x_{14}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
$x_{24}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
$x_{25}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
$x_{34}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
$x_{45}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\
$x_{012}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$x_{134}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$x_{245}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\end{block}
\end{blockarray}
}
\caption{The boundary matrix $B$ from Figure \ref{boundarymatrix} after reduction through Algorithm \ref{alg:ph}.}
\label{boundaryreduced}
\end{figure}
For the $j$-th column of $B$, we define $\mathrm{low}(j)$ as the largest index $i$ for which $B_{ij}\neq 0$.
The \emph{standard algorithm} for computing persistence, also sometimes called the \emph{column algorithm} \citep{Otter2017}, reduces the boundary matrix $B$ over the chosen field (e.g., modulo 2) through Algorithm \ref{alg:ph}.
One can then read off the birth-death pairs from the reduced matrix as follows \citep{Otter2017}; a small implementation sketch follows the list.
\begin{itemize}
\item If $\mathrm{low}(j) = i$, then the simplex $\Delta_j$ is paired with $\Delta_i$, and the entrance of $\Delta_i$ in the filtration causes the birth of a topological hole that dies with the entrance of $\Delta_j$.
\item If $\mathrm{low}(j)$ is undefined, i.e., the $j$-th column of the reduced boundary matrix only contains zeroes, then the entrance of the simplex $\Delta_j$ in the filtration causes the birth of a topological hole.
If there exists some $k$ such that $\mathrm{low}(k) = j$, then the simplex $\Delta_j$ is paired with $\Delta_k$, whose entrance in the filtration causes the death of the topological hole that was born with $\Delta_j$.
If no such $k$ exists, $\Delta_j$ is unpaired, and the topological hole born with it persists indefinitely.
\end{itemize}
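The reduction and pairing rules above admit a short implementation. The following is a minimal sketch (not the paper's actual code), assuming the mod-2 boundary matrix is given as a list of column supports, i.e., sets of row indices with a nonzero entry:
\begin{verbatim}
def low(column):
    # largest row index with a nonzero entry; None if the column is zero
    return max(column) if column else None

def reduce_boundary_matrix(columns):
    # Standard (column) algorithm over the field with two elements: as long as
    # a column shares its lowest nonzero row with an earlier reduced column,
    # add that column to it (mod-2 addition = symmetric difference of supports).
    pivots = {}  # maps low(j) -> j for the columns reduced so far
    for j, col in enumerate(columns):
        while col and low(col) in pivots:
            col ^= columns[pivots[low(col)]]
        if col:
            pivots[low(col)] = j  # the simplex low(col) is paired with simplex j
    return columns, pivots
\end{verbatim}
After the reduction, the dictionary \texttt{pivots} contains the birth-death simplex pairs, and the zero columns whose index does not occur as a key correspond to unpaired simplices.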
Figure \ref{boundaryreduced} shows the reduced form of the matrix $B$ corresponding to the filtration in (\ref{eq:filtrationexample}).
As an example, we read off some of the birth-death pairs.
\begin{itemize}
\item $\mathrm{low}(x_i)$ is undefined for every $i=0,\ldots,5$: every vertex results in the birth of a connected component.
Only for $x_0$ is there no column $x_{ij}$ of the reduced matrix with $\mathrm{low}(x_{ij})=x_0$.
Thus, only the connected component born through $x_0$ persists indefinitely.
\item $\mathrm{low}(x_{01})=x_1$: with the entrance of the edge $\{x_0,x_1\}$, the connected component that was born with $\{x_1\}$ dies.
\item $\mathrm{low}(x_{12})$ is undefined, however $\mathrm{low}(x_{012})=x_{12}$: the entrance of the edge $\{x_1,x_2\}$ resulted in the birth of a cycle that died with the entrance of the simplex $\{x_0, x_1, x_2\}$.
\item $\mathrm{low}(x_{24})$ is undefined, and there is no column $x_{ijk}$ of the reduced matrix for which $\mathrm{low}(x_{ijk})=x_{24}$: the entrance of the edge $\{x_2,x_4\}$ resulted in the birth of a cycle that persists indefinitely.
\end{itemize}
Note that all of these observations can be made in Figure \ref{examplefiltration} as well.
Furthermore, observe that the topological holes that persist indefinitely correspond to the regular simplicial homology of the final simplicial complex ${\mathcal{K}}$ in the filtration, as computed at the end of Section \ref{SUBSEC::homology}.
By mapping the paired simplices to their times of appearance---which are available in practical applications---one obtains birth-death tuples $(b, d)$, or thus persistence diagrams.
Note that the assignment of a birth-death tuple to a particular dimension of persistence diagram can be done based on the dimensions of the paired simplices.
For example, one knows that the entrance of an edge can only result in the birth of a cycle, or the death of a connected component.
\emph{What is most important to realize from this example for topological optimization, which we discuss below, is that the computation of persistent homology and persistence diagrams allows one to trace a birth-death tuple $(b, d)$ corresponding to a particular topological hole, back to the two simplices in the filtration that resulted in the birth and death of that hole}.
\subsection{Topological Loss Functions and Topological Optimization}
\label{SUBSEC::toploss}
Persistent homology allows one to quantify topological information in data at all scales, from the finest to the coarsest.
Therefore, persistence diagrams are often regarded as \emph{topological signatures} of data, which can be transformed into (topological) features to be incorporated into various machine learning models.
While methods that learn from persistent homology are now both well developed and diverse \citep{pun2018persistent}, optimizing the data representation with respect to its persistent homology has only recently gained attention \citep{gabrielsson2020topology, solomon2021fast, carriere2021optimizing}.
As presented in Sections \ref{SUBSEC::homology} \& \ref{SUBSEC::persistence}, persistent homology has a rather abstract mathematical foundation within algebraic topology, and its computation is inherently combinatorial.
This complicates working with usual derivatives for optimization.
To accommodate this, topological optimization makes use of Clarke subderivatives \citep{clarke1990optimization}, whose applicability to persistence builds on arguments from o-minimal geometry \citep{van1998tame, carriere2021optimizing}.
Fortunately, thanks to the recent work of \citet{gabrielsson2020topology} and \citet{carriere2021optimizing}, powerful tools for topological optimization have been developed for software libraries such as PyTorch and TensorFlow, allowing their usage without deeper knowledge of these subjects.
In this section, \emph{we will present topological optimization for point clouds} in particular.
Note that topological optimization can be applied to other input data structures, for example for optimizing the pixel values in images for denoising \citep{gabrielsson2020topology}.
However, point cloud embeddings are the structures we seek to topologically optimize in this paper.
In Section \ref{SUBSUBSEC::pointcloudopt}, we introduce the basic form of the topological loss functions that we use in this paper, following the approach of \citet{gabrielsson2020topology}, and provide some first examples.
In Section \ref{SUBSUBSEC::techback}, we provide an overview of the technical background behind topological optimization of point clouds, where we discuss its dependence on subgradient methods.
\subsubsection{Topological Optimization of Point Clouds: an Example}
\label{SUBSUBSEC::pointcloudopt}
Topological optimization of a point cloud ${\bm{X}}$ optimizes the placement of points in ${\bm{X}}$ with respect to the topological information summarized by one or multiple persistence diagrams ${\mathcal{D}}$ of ${\bm{X}}$.
In this paper, these will be obtained from the weak Alpha filtration of ${\bm{X}}$ (Figures \ref{PikachuAlpha} \& \ref{PikachuDiagram}).
We will use the approach of \citet{gabrielsson2020topology}, where (birth, death)-tuples $(b_1, d_1),(b_2, d_2),\ldots, (b_{|{\mathcal{D}}|}, d_{|{\mathcal{D}}|})$ in ${\mathcal{D}}$ are first ordered by decreasing persistence $d_k-b_k$.
In the case of point clouds, exactly one topological hole, namely a connected component born at time $\alpha=0$, always persists indefinitely.
Other gaps and holes will eventually be filled (Figures \ref{PikachuAlpha} \& \ref{PikachuDiagram}).
Thus, we only optimize for the regular part of the persistence diagram in this paper, i.e., for the diagram points with finite coordinates.
This is done through a \emph{topological loss function}, which for a choice of $i\leq j$, a dimension $k$ of topological hole, and a function $g:\mathbb{R}^2\rightarrow\mathbb{R}$, is defined as
\begin{equation}
\label{eq:topo_loss}
\mathcal{L}_{\mathrm{top}}({\mathcal{D}}_k)\coloneqq\sum_{l=i, d_l < \infty}^jg(b_l, d_l), \hspace{2em}\mbox{ where } d_1-b_1\geq d_2-b_2\geq\ldots,
\end{equation}
with $(b_l, d_l)\in {\mathcal{D}}_k$, $l=1,\ldots, |{\mathcal{D}}_k|$.
By first ordering the points in ${\mathcal{D}}_k$ by decreasing persistence, (\ref{eq:topo_loss}) is a \emph{function of persistence} \citep{carriere2021optimizing}, meaning that it is invariant
to permutations of the points of the persistence diagram ${\mathcal{D}}_k$.
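For concreteness, the loss in (\ref{eq:topo_loss}) can be sketched in a few lines of PyTorch. The sketch below is illustrative (the names are not part of any library); it assumes \texttt{diagram} is an $(m, 2)$ tensor of finite (birth, death) pairs of a single homology dimension and that $i\leq j\leq m$:
\begin{verbatim}
import torch

def topological_loss(diagram, i=1, j=1, g=lambda b, d: -(d - b)):
    # Sort the diagram points by decreasing persistence d - b, then sum
    # g over the i-th through j-th most persistent points (1-indexed).
    persistence = diagram[:, 1] - diagram[:, 0]
    order = torch.argsort(persistence, descending=True)
    selected = diagram[order][i - 1:j]
    return torch.stack([g(b, d) for b, d in selected]).sum()
\end{verbatim}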
\begin{figure}[p]
\centering
\includegraphics[width=\linewidth]{Images/PikaOpt}
\caption{The point cloud ${\bm{X}}$ optimized for various topological loss functions, which are all designed to increase the persistence $d_1-b_1$ of the most prominent cycle that corresponds to the contour of Pikachu's forehead.}
\label{PikaOpt}
\vskip\floatsep
\includegraphics[width=0.45\linewidth]{Images/PikaGrad}
\caption{The negative gradients of the topological loss function $\mathcal{L}_{\mathrm{top}}({\mathcal{D}}_1)=b_1-d_1$ show the direction of point movements during the first topological optimization iteration of ${\bm{X}}$. The gradients are nonzero only for four points. The connection of two of these during the weak Alpha filtration causes the birth of the cycle (blue arrows). These points move towards each other to decrease the birth-time of the cycle.
The connection of the other two points during the weak Alpha filtration causes the death of the cycle (red arrows). These points move away from each other to increase the death-time of the cycle.}
\label{PikaGrad}
\end{figure}
For example, consider the 1st-dimensional persistence diagram ${\mathcal{D}}_1$ of the point cloud representation ${\bm{X}}$ of Pikachu obtained from the weak Alpha filtration (Figures \ref{PikachuAlpha} \& \ref{PikachuDiagram}).
As discussed in Section \ref{SUBSEC::persistence}, the most persisting cycle---represented by the point $(b_1, d_1)\in {\mathcal{D}}_1$ according to the ordering in (\ref{eq:topo_loss})---captures the contour of Pikachu's forehead.
The following three functions are then examples of topological loss functions that might be used to increase the persistence $d_1-b_1$ of this cycle, with $i=j=1$ in (\ref{eq:topo_loss}); see the sketch after the list.
\begin{compactitem}
\item $\mathcal{L}_{{\mathrm{top}}_1}({\mathcal{D}}_k)=-d_1$,
\item $\mathcal{L}_{{\mathrm{top}}_2}({\mathcal{D}}_k)=b_1$,
\item $\mathcal{L}_{{\mathrm{top}}_3}({\mathcal{D}}_k)=-(d_1-b_1)$.
\end{compactitem}
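In terms of the \texttt{topological\_loss} sketch above, these three losses correspond to three choices of $g$:
\begin{verbatim}
g_1 = lambda b, d: -d        # maximize the death time d_1
g_2 = lambda b, d: b         # minimize the birth time b_1
g_3 = lambda b, d: -(d - b)  # maximize the persistence d_1 - b_1
\end{verbatim}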
Figure \ref{PikaOpt} shows the result of optimizing for each of these loss functions.
When optimizing for $\mathcal{L}_{{\mathrm{top}}_1}$, we see that the cycle enlarges.
However, gaps in the cycle are neglected or even introduced (Figure \ref{PikaOpt}, Left).
Conversely, optimizing for $\mathcal{L}_{{\mathrm{top}}_2}$ closes such gaps, and ensures that the cycle occurs on a smaller scale in the data, or more formally, for earlier filtration times (Figure \ref{PikaOpt}, Middle), but does not enlarge the cycle.
Naturally, both effects are achieved when optimizing for $\mathcal{L}_{{\mathrm{top}}_3}$ (Figure \ref{PikaOpt}, Right).
Figure \ref{PikaGrad} shows the direction of the negative gradients, or thus the direction of point movements, for the topological loss function $\mathcal{L}_{{\mathrm{top}}_3}({\mathcal{D}}_k)$, for the first topological optimization iteration of ${\bm{X}}$.
We observe that the gradients are nonzero for only four points in ${\bm{X}}$.
These correspond to either the two points that cause the birth of the most persisting cycle (blue arrows in Figure \ref{PikaGrad}) or the two points that cause the death of this cycle (red arrows in Figure \ref{PikaGrad}).
This is because the point $(b_1, d_1)\in {\mathcal{D}}_1$ is traced back to the paired simplices whose entrance cause the birth and death of this cycle (Section \ref{SUBSEC::persistence}).
The birth of the cycle at time $b_1$ is caused when the points marked with blue gradients in Figure \ref{PikaGrad} are connected by an edge.
The death of the cycle at time $d_1$ is caused when the points marked with red gradients in Figure \ref{PikaGrad} are connected by an edge, whose entrance forms a triangle with a third unmarked point that closes the cycle.
Since the filtration times of these edges, i.e., the times of their first occurrence during the weak Alpha filtration, are exactly the distances between their endpoints, the gradients are in opposite direction as to either decrease the birth time $b_1$ or increase the death time $d_1$, and thus increase the overall persistence $d_1-b_1$.
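As a toy illustration of why these gradients form opposite pairs, consider a single edge on its own: its filtration time is the distance between its endpoints, so automatic differentiation yields opposite unit vectors at the two points (the coordinates below are made up and not part of the Pikachu data):
\begin{verbatim}
import torch

p = torch.tensor([0.0, 0.0], requires_grad=True)
q = torch.tensor([3.0, 4.0], requires_grad=True)
edge_time = torch.linalg.norm(p - q)  # filtration time of the edge {p, q}: 5.0
edge_time.backward()
print(p.grad)  # tensor([-0.6000, -0.8000]): moving p towards q decreases the time
print(q.grad)  # tensor([ 0.6000,  0.8000]): moving q away from p increases the time
\end{verbatim}
For a birth edge, gradient descent on $b_1$ therefore pulls the two endpoints together, while for a death edge, gradient descent on $-d_1$ pushes them apart.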
\subsubsection{Technical Background of Topological Optimization of Point Clouds}
\label{SUBSUBSEC::techback}
Topological optimization rests on (stochastic) subgradient methods for optimizing the data representation with respect to its topological properties summarized by its persistence diagrams.
To explain this, it is easiest to start from topological optimization based on the Vietoris-Rips filtration (Definition \ref{def::VRfilt}).
For a given point cloud ${\bm{X}}\subseteq\mathbb{R}^d$, the Vietoris-Rips filtration is a filtration on the simplicial complex ${\mathcal{K}}$ that contains a simplex for every subset of points in ${\bm{X}}$, possibly restricted to simplices up to some maximal dimension.
If now ${\bm{X}}$ consists of $n$ points, we can regard the Vietoris-Rips filtration construction as a mapping $\mathrm{VR}:A=\left(\mathbb{R}^d\right)^n\rightarrow\mathbb{R}^{|{\mathcal{K}}|}$,
where an element in $A$ equals a point cloud of $n$ points in $\mathbb{R}^d$, which gets mapped by $\mathrm{VR}$ onto a (Vietoris-Rips) filtration which can be written as an assignment of a real value (filtration time) to every simplex in ${\mathcal{K}}$.
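A naive sketch of this mapping, assuming the usual convention that a simplex enters the Vietoris-Rips filtration at the largest pairwise distance among its vertices (and with no attention paid to efficiency), is:
\begin{verbatim}
import itertools
import torch

def vietoris_rips_filtration(X, max_dim=2):
    # Map an (n, d) point cloud to a filtration value for every simplex
    # (subset of point indices) of dimension at most max_dim.
    D = torch.cdist(X, X)
    filtration = {}
    for k in range(1, max_dim + 2):  # simplices with k vertices
        for simplex in itertools.combinations(range(len(X)), k):
            times = [D[a, b] for a, b in itertools.combinations(simplex, 2)]
            filtration[simplex] = max(times) if times else torch.tensor(0.0)
    return filtration
\end{verbatim}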
Persistent homology now pairs simplices that cause the birth and death of the same topological hole during a filtration of ${\mathcal{K}}$.
For simplicity of notation, we consider one full persistence diagram ${\mathcal{D}}$ that is the union of the persistence diagrams in all dimensions.
Let $p$ be the number of paired simplices and $q$ the number of unpaired simplices (Section \ref{SUBSEC::persistence}) during the filtration.
Then $|{\mathcal{K}}|=2p+q$.
Furthermore, in particular for the Vietoris-Rips filtration, the numbers $p$ and $q$ will depend only on the data size $n$ and the dimension of ${\mathcal{K}}$.
Choosing the lexicographical order on $\mathbb{R}\times(\mathbb{R}\cup\{\infty\})$, we can regard the persistence diagram ${\mathcal{D}}$ as a vector in $\mathbb{R}^{2p+q}=\mathbb{R}^{|{\mathcal{K}}|}$.
Thus, we view the persistent homology operation as a mapping $\mathrm{Pers}:\mathbb{R}^{|{\mathcal{K}}|}\rightarrow\mathbb{R}^{|{\mathcal{K}}|}$.
Since birth and death times are necessarily filtration values of simplices, the persistent homology operation $\mathrm{Pers}$ is thus simply a permutation of the coordinates of the input filtration, seen as a vector in $\mathbb{R}^{|{\mathcal{K}}|}$.
Having defined a topological loss function $\mathcal{L}_{\mathrm{top}}$ as in (\ref{eq:topo_loss}), the goal of topological optimization is thus to minimize the function
\begin{equation}
\label{VRfunction}
\mathcal{L}_{\mathrm{top}}\circ\mathrm{Pers}\circ\mathrm{VR}:A=\left(\mathbb{R}^d\right)^n\rightarrow\mathbb{R},
\end{equation}
with respect to the point cloud ${\bm{X}}\in A$.
\emph{O-minimal geometry} now provides a well-suited setting for describing parametrized families of filtrations (here by the point cloud ${\bm{X}}\in A$) and to exhibit interesting differentiability properties of their composition with the persistence mapping and a topological loss function \citep{carriere2021optimizing}.
\begin{definition}
\label{def::ominimal}
(O-minimal Structure \citep{carriere2021optimizing}).
An \emph{o-minimal structure} on the field of real numbers $\mathbb{R}$ is a collection $({\mathbb{S}}_n)_{n\in\mathbb{N}}$, where each ${\mathbb{S}}_n$ is a set of subsets of $\mathbb{R}^n$ such that:
\begin{enumerate}
\item ${\mathbb{S}}_1$ is exactly the collection of finite unions of points and intervals;
\item all algebraic subsets (0-level sets of polynomials) of $\mathbb{R}^n$ are in ${\mathbb{S}}_n$;
\item ${\mathbb{S}}_n$ is a Boolean subalgebra of the power set of $\mathbb{R}^n$ for any $n \in \mathbb{N}$;
\item if $A \in {\mathbb{S}}_n$ and $B \in {\mathbb{S}}_m$, then $A\times B \in {\mathbb{S}}_{n+m}$;
\item if $\pi: \mathbb{R}^{n+1} \rightarrow \mathbb{R}^n$ is the linear projection onto the first $n$ coordinates and $A \in {\mathbb{S}}_{n+1}$, then $\pi(A) \in {\mathbb{S}}_n$.
\end{enumerate}
An element $A \in {\mathbb{S}}_n$ for some $n \in \mathbb{N}$ is called a definable set in the o-minimal structure.
For a definable set $A \subseteq \mathbb{R}^n$, a map $f:A \rightarrow \mathbb{R}^m$ is said to be definable if its graph is a definable set in $\mathbb{R}^{n+m}$.
\end{definition}
Definable sets are stable under various geometric operations \citep{coste2000introduction, carriere2021optimizing}.
While the exact details behind this are beyond the scope of this paper, \citet{carriere2021optimizing} showed that if a function of persistence $\mathcal{L}_{\mathrm{top}}$ is locally Lipschitz and definable in an o-minimal structure, then $\mathcal{L}_{\mathrm{top}}\circ\mathrm{Pers}$ is also locally Lipschitz and the composition (\ref{VRfunction}) is definable.
As a consequence, the composition (\ref{VRfunction}) has a well-defined Clarke subdifferential \citep{clarke1990optimization}, since it is differentiable almost
everywhere \citep{carriere2021optimizing}.
Standard stochastic subgradient algorithms can then be used for optimizing (\ref{VRfunction}) with respect to the point cloud that parameterizes the (Vietoris-Rips) filtration \citep{carriere2021optimizing}.
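In code, such a loop looks like ordinary gradient-based training. The sketch below assumes a hypothetical differentiable routine \texttt{persistence\_diagram} (for instance one built with the tools of \citet{gabrielsson2020topology} or \citet{carriere2021optimizing}), together with the \texttt{topological\_loss} sketch from Section \ref{SUBSUBSEC::pointcloudopt}:
\begin{verbatim}
import torch

X = torch.randn(50, 2, requires_grad=True)   # the point cloud being optimized
optimizer = torch.optim.SGD([X], lr=0.01)

for epoch in range(100):
    optimizer.zero_grad()
    diagram = persistence_diagram(X, dim=1)  # hypothetical differentiable routine
    loss = topological_loss(diagram, i=1, j=1, g=lambda b, d: -(d - b))
    loss.backward()                          # a Clarke subgradient in general
    optimizer.step()
\end{verbatim}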
The theory behind topologically optimizing the mapping
\begin{equation}
\label{WAfunction}
\mathcal{L}_{\mathrm{top}}\circ\mathrm{Pers}\circ\mathrm{WA}:A\rightarrow\mathbb{R},
\end{equation}
where $\mathrm{WA}$ maps a point cloud to its weak Alpha filtration, is analogous to the theory for Vietoris-Rips filtrations.
However, the Delaunay triangulation (as an abstract simplicial complex) may vary over the input point cloud ${\bm{X}}\in(\mathbb{R}^d)^n$.
If the points in ${\bm{X}}$ are in general position (Section \ref{SUBSEC::alpha}), we can restrict $A$ to the connected open subset of $(\mathbb{R}^d)^n$ which has the property that the Delaunay triangulation ${\mathcal{K}}_{{\bm{Y}}}$ of every ${\bm{Y}}\in A$ is the same as the Delaunay triangulation ${\mathcal{K}}$ of ${\bm{X}}$, apart from the associated geometry \citep{carriere2021optimizing}.
This means that while the exact positioning of their simplices in $\mathbb{R}^d$ may differ, the same sets of points (e.g., marked by indices $1,\ldots,n$) define the simplices of ${\mathcal{K}}_{{\bm{Y}}}$ and of ${\mathcal{K}}$, and each mapping in the composition (\ref{WAfunction}) can again be well defined.
Note that this does not imply that the Delaunay triangulation will necessarily remain the same over the entire process of topological optimization.
\subsection{Computational Analysis}
\label{SUBSEC::compcost}
Topological optimization currently requires recomputing the persistence diagram(s) for every iteration of the optimization.
Thus, the computational analysis of topological optimization boils down to the computational analysis of persistent homology, which unfortunately remains challenging to this day.
Although persistent homology is an unparalleled tool for quantifying topological holes in data at all scales, from the finest to the coarsest, it relies on algebraic computations (Algorithm \ref{alg:ph}) which can be costly for larger simplicial complexes.
In the worst case, computing persistent homology is cubic in the number of simplices \citep{Otter2017}.
For a data set ${\bm{X}}\subseteq\mathbb{R}^d$ of $n$ points, the weak Alpha complex has size and computation complexity $\mathcal{O}\left(n^{\ceil{\frac{d}{2}}}\right)$ \citep{Otter2017, toth2017handbook, somasundaram2021benchmarking}, resulting in a computation complexity $\mathcal{O}\left(n^{3\ceil{\frac{d}{2}}}\right)$ for persistent homology.
However, this can be significantly lower in practice, for example, if the points are distributed nearly uniformly on a polyhedron \citep{amenta2007complexity}.
The Vietoris-Rips filtration has size and computation complexity $\mathcal{O}(\min(2^n,n^{k+1}))$, where $k\leq d$ is the homology dimension of interest \citep{Otter2017, somasundaram2021benchmarking}, resulting in a computation complexity $\mathcal{O}(\min(2^{3n},n^{3(k+1)}))$ for persistent homology.
In practice, the choice of $k$ will also significantly reduce the size of the weak Alpha complex and thus the complexity of the persistent homology computation.
However, the full complex, i.e., the Delaunay triangulation, still needs to be constructed first \citep{boissonnat2018geometric}.
This is often the main bottleneck when working with weak Alpha complexes of high-dimensional data.
In practice, weak Alpha complexes are favorable for computing persistent homology from low dimensional point clouds, whereas fewer points in higher dimensions will favor Vietoris-Rips complexes \citep{somasundaram2021benchmarking}.
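As a rough empirical illustration of this size gap, the following sketch (assuming the GUDHI library; its Alpha complex is built on the same Delaunay triangulation that underlies the weak Alpha complex) compares the number of simplices of both constructions on $100$ random planar points:
\begin{verbatim}
import gudhi
import numpy as np

X = np.random.default_rng(0).uniform(size=(100, 2))

alpha_st = gudhi.AlphaComplex(points=X).create_simplex_tree()
rips = gudhi.RipsComplex(points=X, max_edge_length=2.0)
rips_st = rips.create_simplex_tree(max_dimension=2)

print(alpha_st.num_simplices())  # O(n^ceil(d/2)): a few hundred simplices for d = 2
print(rips_st.num_simplices())   # O(n^(k+1)): all edges and triangles, over 100000 here
\end{verbatim}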
Within the context of topological regularization and data embeddings, we aim to achieve a low dimensionality $d$ of the point cloud embedding.
This justifies the choice for the weak Alpha filtrations.
Note that many computational improvements as well as approximation algorithms for persistent homology are already available \citep{Otter2017}.
However, their integration into topological optimization is open to further research.
\section{Conclusion}
\label{SEC::discconc}
We proposed a new approach for representation learning under the name of \emph{topological regularization}, which builds on the recently developed differentiation frameworks for topological optimization. To make topological regularization as accessible as possible, we included an extensive introduction to the minimal required theoretical background on persistent homology.
Our approach led to a versatile and effective way of embedding data according to prior expert topological knowledge, directly postulated through our newly introduced topological loss functions. The proposed topological loss functions are able to model a broad range of global and local topological structures. Moreover, by introducing a sampling approach in combination with these loss functions, we were able to overcome an important robustness challenge. The experiments show that, except in the hardest (low signal-to-noise ratio) circumstances, topological regularization is highly effective in increasing the saliency of an imposed topological structure. We also showed that including prior topological knowledge provides a promising way to improve subsequent---even non-topological---learning tasks (see also Section \ref{pseudotime}).
\section{Future Work}
\label{sec:future_work}
Our experiments showed that there are various directions for future work. First, a clear limitation of topological regularization is that prior topological knowledge is not always available. How to select the best from a list of priors is thus open to further research (see also Section \ref{sec:diffloss}). An interesting question is whether this limitation can be overcome by combining the prior knowledge with, or partially deriving it from, inferred (topological) features of the high-dimensional data.
Second, a sensible range for the trade-off parameter between the embedding and the topological loss could alleviate the problem that almost any topological prior can be optimized in embeddings of datasets with few data points as compared to the number of dimensions, or when using flexible (non-linear) embedding methods such as UMAP. Research into how to best set this parameter may open the possibility of using topological regularization also to test whether a data set contains a specified topological structure.
Third, a more robust (non-local) nonlinear optimization method or informed initialization scheme might be necessary for problems with a low signal-to-noise ratio. As we have seen in Section \ref{sec:weaksignal}, our proof-of-concept implementation of topologically regularized PCA did not reach the intended configuration, despite the lower objective value.
Fourth, the design of topological loss functions is arguably still rather complex, and it would be useful to study how to facilitate that design process for lay users. From a foundational perspective, our work provides new research opportunities into extending the developed theory for topological optimization \citep{carriere2021optimizing} to our newly introduced losses, and their integration into data embeddings.
Finally, topological optimization based on combinatorial structures other than the $\alpha$-complex may be of theoretical and practical interest.
For example, point cloud optimization based on graph-approximations such as the minimum spanning tree \citep{vandaele2021stable}, or varying the functional threshold $\tau$ in the loss (\ref{functionalloss}) alongside the filtration time \citep{chazal2009gromov}, may lead to natural topological loss functions with fewer hyperparameters.
\section{Experiments and Applications}
\label{SEC::experiments}
In this section, we illustrate the effects of topological regularization on synthetic and real-world data. In Section \ref{sec:effectiveness}, we show how to optimize embeddings with the ground truth prior and quantify the effect on subsequent prediction tasks.
In Section \ref{sec:robustness}, we explore the robustness of topological regularization with different parametrizations of the loss.
We design the experiments around six research questions that are stated at the beginning of these sections. Our code is available at \url{https://github.com/aida-ugent/TopoEmbedding}.
\subsection{Datasets}
We evaluate the effectiveness and robustness of topological regularization on one synthetic, two biological, and two network datasets. We provide their size and ground truth model in Table \ref{tab:datasets} and visualizations in Figure \ref{fig:datasets}.
\begin{description}
\item[Synthetic cycle] We sample $50$ points uniformly from the unit circle in $\mathbb{R}^2$.
We then add 500-dimensional noise, sampled uniformly from $[-0.45, 0.45]$ in each dimension (see the sketch after this list).
\item[Cell cycle] We considered a single-cell trajectory data set of 264 cells in a 6812-dimensional gene expression space \citep{robrecht_cannoodt_2018_1443566, Saelens276907}.
This data can be considered a snapshot of the cells at a fixed time.
The ground truth is a circular model connecting three cell groups through cell differentiation (see also Section \ref{pseudotime}).
\item[Cell bifurcation] We considered a second cell trajectory data set of 154 cells in a 1770-dimensional expression space \citep{robrecht_cannoodt_2018_1443566}.
The ground truth here is a bifurcating model connecting four different cell groups through cell differentiation.
\item[Karate] The Karate network \citep{Zac77} is a well known and studied network within graph mining that consists of two different communities. The communities are represented by two key figures (John A.\ and Mr.\ Hi), as shown in Figure \ref{fig:karate}.
\item[Harry Potter] The nodes of this graph (\href{https://github.com/hzjken/character-network}{https://github.com/hzjken/character-network}) are characters from the Harry Potter novel and edges mark friendly relationships between them (Figure \ref{fig:harrypotter}).
We only use the largest connected component in our experiments.
The Harry Potter graph has previously been analyzed by \citet{JMLR:v21:19-1032}, who identified a circular model therein that transitions between the `good' and `evil' characters from the novel.
\end{description}
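The synthetic cycle data can be generated along the following lines (the seed and the exact placement of the noise are assumptions on our part; the circle lives in the first two of the 500 coordinates):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 50)
X = np.zeros((50, 500))
X[:, 0], X[:, 1] = np.cos(angles), np.sin(angles)  # the circle in dims 1 and 2
X += rng.uniform(-0.45, 0.45, size=X.shape)        # uniform noise in every dimension
\end{verbatim}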
\begin{table*}
\renewcommand*{\arraystretch}{1.3}
\centering
\begin{tabular}{lrrl}
\toprule
Name & n & d & Ground truth model \\
\midrule
Synthetic Cycle & 50 & 500 & circle in dims 1 and 2 \\
Cell Cycle & 264 & 6812 & circle \\
Cell Bifurcating & 154 & 1770 & bifurcation \\
Karate & 34 & 78 & two clusters \\
Harry Potter & 58 & 217 & circle \\
\bottomrule
\end{tabular}
\caption{Summary of data sizes and their ground truth model.}
\label{tab:datasets}
\end{table*}
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{.325\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/ToyCircle/SynthCircleIdeal}
\captionsetup{width=.9\linewidth}
\caption{First two data coordinates of the 500-dimensional synthetic cycle data.}
\label{fig:synthcircle12}
\end{subfigure}\hfill
\begin{subfigure}[t]{.325\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/CellCycle/CellCirclePCA}
\captionsetup{width=.9\linewidth}
\caption{Cell cycle data with colors indicating cell state (PCA).}
\label{fig:cellcycle}
\end{subfigure}\hfill
\begin{subfigure}[t]{.325\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/CellBifurcating/CellBifUMAP}
\captionsetup{width=.9\linewidth}
\caption{Cell bifurcation data with colors indicating cell group (UMAP).}
\label{fig:cellbifurcation}
\end{subfigure}
\begin{subfigure}[t]{.325\textwidth}
\centering
\includegraphics[width=.95\linewidth]{Images/Karate/Karate}
\caption{Karate network (NetworkX spring layout).}
\label{fig:karate}
\end{subfigure}
\begin{subfigure}[t]{.65\textwidth}
\includegraphics[width=\linewidth]{Images/HarryPotter/HPgraph}
\caption{The major connected component in the Harry Potter graph (NetworkX spring layout).
Edges mark friendly relationships between characters.}
\label{fig:harrypotter}
\end{subfigure}
\caption{Visualizations of the datasets used in the experiments. The method to obtain the two-dimensional embeddings is indicated in brackets.}
\label{fig:datasets}
\end{figure*}
\subsection{Effectiveness of Topological Regularization}
\label{sec:effectiveness}
In this section we design and analyze experiments with the goal of answering the following questions about the effectiveness of topological regularization:
\begin{description}
\item[Q1.] Does topological regularization succeed in imposing the specific topological structure in the low dimensional embedding?
\item[Q2.] How does topological regularization affect downstream prediction tasks?
\item[Q3.] How scalable is topological regularization?
\end{description}
An overview of the dimension reduction methods and their optimization parameters is shown in Table \ref{tab:effectiveness_parameters} and the topological loss functions are provided in Table \ref{tab:effectiveness_loss}.
\begin{table*}
\renewcommand*{\arraystretch}{1.2}
\centering
\begin{tabular}{llrrrrr}
\toprule
Data & Method & lr & epochs & $\lambda_{\mathrm{top}}$ & $t$ w/o top & $t$ with top \\
\midrule
Synthetic Cycle & PCA & 0.01 & $\leq 1000$ & 0.0005 & $<$1s & 12s \\
Cell Cycle & PCA & 0.01 & $\leq 1000$ & 0.01 & $<$1s & 3m29s \\
Cell Bifurcating & UMAP & 0.1 & 100 & 10 & $<$1s & 4s \\
Karate & DeepWalk & 1e-2 & 50 & 50 & 19s & 20s \\
Harry Potter & InnerProd & 1e-1 & 100 & 1e-1 & 1m21s & 1m24s \\
\bottomrule
\end{tabular}
\caption{Summary of the optimization parameters and runtimes (with and without the topological loss) for each dataset.}
\label{tab:effectiveness_parameters}
\end{table*}
\begin{table*}[t]
\centering
\renewcommand\tabcolsep{5pt}
\renewcommand\arraystretch{1.2}
\begin{tabular}{lllccc}
\toprule
Data & Figure & Topological loss function & dim & $f_{\mathcal{S}}$ & $n_{\mathcal{S}}$ \\
\midrule
Synthetic Cycle & \ref{fig:synthCirclePCAopt},\ref{fig:SynthCircleTop}, \ref{fig:RandTop} & $-(d_1-b_1)$ & 1 & 0.4 & 5\\
Cell Cycle & \ref{CellCirclePCAopt}, \ref{CellCircleTop} & $-(d_1-b_1)$ & 1 & 0.25 & 10\\
Cell Bifurcating & \ref{fig:CellBifUMAP}, \ref{fig:CellBifUMAPopt}, \ref{fig:CellBifTop} & {\small $\sum_{k=2}^\infty (d_k-b_k)-\left[d_3-b_3\right]_{\mathcal{E}_{{\bm{E}}}^{-1}]-\infty,0.75]}$ } & 0 - 0 & {\color{gray} N/A} & {\color{gray} N/A} \\
Karate & \ref{KarateDWopt}, \ref{KarateTop} & $-(d_2-b_2)$ & $0$ & $0.25$ & $10$\\
Harry Potter & \ref{HarryIPopt}, \ref{HarryTop} & $-(d_1-b_1)$ & 1 & {\color{gray} N/A} & {\color{gray} N/A}\\
\bottomrule
\end{tabular}
\caption{Summary of the topological losses computed from persistence diagrams $\mathcal{D}$ with points $(b_k, d_k)$ ordered by persistence $d_k-b_k$.
Note that for 0-th dimensional homology diagrams $d_1=\infty$.}
\label{tab:effectiveness_loss}
\end{table*}
\subsubsection{Qualitative Evaluation}
\label{sec:effec_qualitative}
In this section we tackle Q1, illustrating the effect of topologically regularizing embeddings by comparing them to topologically optimized and standard embeddings for each of the five datasets. We use topological loss functions to model the ground truth structure of the data (see Table \ref{tab:datasets}).
We use the term \emph{optimized embeddings} when optimizing only the topological loss for a predefined number of epochs, initialized with a PCA, UMAP, or DeepWalk embedding. For the optimized PCA embeddings, we do not use the reconstruction loss but still optimize over the Stiefel manifold, resulting in an orthogonal projection. \emph{Regularized embeddings} are the result of minimizing a combination of the embedding and topological losses (a sketch of this combined objective is given below). We set the trade-off $\lambda_{\mathrm{top}}$ between these two losses as low as possible, but as high as needed such that the regularized embedding is notably different from the optimized embedding.
The optimized and regularized embeddings are computed using the same number of epochs (see Table \ref{tab:effectiveness_parameters}) when using DeepWalk and UMAP.
For PCA, we use a maximum of 1000 epochs but stop the optimization when the topological loss begins to stagnate (measured by the ratio between the average topological loss of the last 100 epochs and the average of the last 50 epochs).
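For reference, a minimal sketch of the regularized objective, assuming the two losses are combined additively with trade-off $\lambda_{\mathrm{top}}$ (the function names are illustrative):
\begin{verbatim}
def regularized_loss(embedding, data, lambda_top):
    # embedding_loss: the PCA reconstruction, UMAP, or DeepWalk objective
    # topo_loss: a topological loss computed from persistence diagrams of the embedding
    return embedding_loss(embedding, data) + lambda_top * topo_loss(embedding)
\end{verbatim}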
\paragraph{Synthetic Data}
\label{SEC::synthetic}
For the 500-dimensional synthetic circle, the 498 additional noisy features are irrelevant to the topological (circular) model. An ideal projection embedding would be a restriction to its first two features (see Figure \ref{fig:synthcircle12}). However, it is probabilistically unlikely that the irrelevant features will have a zero contribution to a PCA embedding of the data. Intuitively, each added feature slightly shifts the projection plane away from the plane spanned by the first two features. In Figure \ref{fig:SynthCirclePCA}, we observe that the circular hole is indeed less present in the PCA embedding of the data.
Note that \emph{we observed this to be a notable problem for `small $n$, large $p$' data sets}: as with other machine learning models, and as also recently studied by \citet{vandaele2022curse}, more data can significantly mitigate the effect of noise and result in a better embedding model on its own.
To improve the visual representation of the circle, we use the topological loss function $\mathcal{L}_{\mathrm{top}}({\mathcal{D}}_1) = -(d_1-b_1)\,$
measuring the persistence of the most prominent 1-dimensional hole in the embedding. Since we aim for all points to lie on the boundary of the circle, we evaluate this loss on subsets of the data ($f_{\mathcal{S}}=0.4$, $n_{\mathcal{S}} = 5$).
The resulting embedding is shown in Figure \ref{fig:SynthCircleTop}, which better captures the circular hole. For comparison, Figure \ref{fig:synthCirclePCAopt} shows the optimized embedding (initialized with PCA, without the reconstruction loss $\mathcal{L}_{\mathrm{emb}}$), which exhibits a more prominent hole.
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{.325\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/ToyCircle/SynthCirclePCA}
\caption{Ordinary PCA embedding.}
\label{fig:SynthCirclePCA}
\end{subfigure}
\hfill
\begin{subfigure}[t]{.325\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/ToyCircle/SynthCirclePCAopt}
\caption{Top.\ optimized embedding.}
\label{fig:synthCirclePCAopt}
\end{subfigure}
\hfill
\begin{subfigure}[t]{.325\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/ToyCircle/SynthCircleTop}
\caption{Top. regularized embedding.}
\label{fig:SynthCircleTop}
\end{subfigure}
\caption{Representations of the synthetic data, colored by ground truth coordinates.}
\label{SynthCircle}
\end{figure*}
\paragraph{Circular Cell Trajectory Data}
\label{SEC::cellcycle}
To improve the representation of the circular model in the single-cell trajectory, we use the same loss as for the synthetic data but different sampling parameters ($f_{\mathcal{S}}=0.25$, and $n_{\mathcal{S}}=10$) as the size of the single-cell dataset is larger.
From Figure \ref{CellCirclePCA}, we see that while the ordinary PCA embedding does somewhat respect the positioning of the cell groups (marked by their color), it indeed struggles to embed the data in a manner that visualizes the cycle that we know to be present in the data. By topologically regularizing the embedding as shown in Figure \ref{CellCircleTop}, we are able to embed the data in a circular manner. Compared to the optimized embedding in Figure \ref{CellCirclePCAopt}, there are more cells that lie outside of the circle.
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{.325\linewidth}
\centering
\includegraphics[width=\linewidth]{Images/CellCycle/CellCirclePCA}
\caption{Ordinary PCA embedding.}
\label{CellCirclePCA}
\end{subfigure}
\begin{subfigure}[t]{.325\linewidth}
\centering
\includegraphics[width=\linewidth]{Images/CellCycle/CellCircleTop}
\caption{Top.\ optimized embedding.}
\label{CellCirclePCAopt}
\end{subfigure}
\begin{subfigure}[t]{.325\linewidth}
\centering
\includegraphics[width=\linewidth]{Images/CellCycle/CellCirclePCAopt}
\caption{Top.\ regularized embedding.}
\label{CellCircleTop}
\end{subfigure}
\caption{Representations of the cyclic cell data with colors representing the cell group.}
\end{figure*}
\paragraph{Pseudotime inference in cell trajectory data}
\label{pseudotime}
Single-cell omics include various types of data collected on a cellular level, such as transcriptomics, proteomics and epigenomics.
Studying the topological model underlying the data may lead to better understanding of the dynamic processes of biological cells and the regulatory interactions involved therein.
Such dynamic processes can be modeled through trajectory inference (TI) methods, also called \emph{pseudotime analysis}, which order cells along a trajectory based on the similarities in their expression patterns \citep{Saelens276907}.
For example, the \emph{cell cycle} is a well-known biological differentiation model that describes a cell as it grows and divides. The cell cycle consists of different stages, namely growth (G1), DNA synthesis (S), growth and preparation for mitosis (G2), and mitosis (M). The latter two stages are often grouped in a G2M stage. Hence, by studying expression data of cells that participate in the cell cycle differentiation model, one may identify the genes involved in and between particular stages of the cell cycle \citep{liu2017reconstructing}. Pseudotime analysis allows such study by assigning to each cell a time during the differentiation process in which it occurs, and thus, the relative positioning of all cells within the cell cycle model.
\begin{figure}[t]
\centering
\begin{subfigure}[t]{.9\linewidth}
\includegraphics[height=.4cm]{Images/CellCycle/CellCircleLegend}
\end{subfigure}\\
\begin{subfigure}[t]{.32\linewidth}
\centering
\includegraphics[height=3.7cm]{Images/CellCycle/CellCircle_diode}
\caption{The representation of the most prominent cycle obtained through persistent homology.}
\label{fig:diode_cycle_pca}
\end{subfigure}\hfill%
\begin{subfigure}[t]{.3\linewidth}
\centering
\includegraphics[height=3.8cm]{Images/CellCycle/CellCircle_diodeProjected}
\caption{Orthogonal projection of the embedded data onto the cycle representation.}
\label{fig:diode_proj_pca}
\end{subfigure}\hfill%
\begin{subfigure}[t]{.34\linewidth}
\centering
\includegraphics[height=3.8cm]{Images/CellCycle/CellCircle_diodeCoordinates}
\caption{The pseudotimes inferred from the projection in (b), quantified on a continuous color scale.}
\label{fig:diode_pseudo_pca}
\end{subfigure}
\caption{Automated pseudotime inference of real cell cycle data through persistent homology, from the PCA embedding of the data.}
\end{figure}
\begin{figure}[t]
\centering
\begin{subfigure}[t]{.9\linewidth}
\includegraphics[height=.4cm]{Images/CellCycle/CellCircleLegend}
\end{subfigure}\\
\begin{subfigure}[t]{.32\linewidth}
\centering
\includegraphics[height=3.3cm]{Images/CellCycle/CellCircle_diodeTimeCircle}
\caption{The representation of the most prominent cycle obtained through persistent homology.}
\label{fig:diode_cycle_reg}
\end{subfigure}
\hfill
\begin{subfigure}[t]{.3\linewidth}
\centering
\includegraphics[height=3.3cm]{Images/CellCycle/CellCircle_diodeTimeProjection}
\caption{Orthogonal projection of the embedded data onto the cycle representation.}
\label{fig:diode_proj_reg}
\end{subfigure}
\hfill
\begin{subfigure}[t]{.34\linewidth}
\centering
\includegraphics[height=3.3cm]{Images/CellCycle/CellCircle_diodeTimeCoordinates}
\caption{The pseudotimes inferred from the projection in (b), quantified on a continuous color scale.}
\label{fig:diode_pseudo_reg}
\end{subfigure}
\caption{Automated pseudotime inference of real cell cycle data through persistent homology, from the topologically regularized PCA embedding of the data.}
\label{fig:diode_reg}
\end{figure}
Analyzing single cell cycle data is a use case where prior topological information is available. As the signal-to-noise ratio is commonly low in high-dimensional expression data \citep{libralon2009pre, zhang2021noise}, this data is usually preprocessed through a dimensionality reduction method before automated pseudotime inference \citep{Cannoodt2016, Saelens276907}. Topological regularization provides a tool to enhance the desired topological signal during the embedding procedure, and as such, facilitates automated inference that depends on this signal-to-noise ratio.
To illustrate this, we used persistent homology for an automated (cell) cycle and pseudotime inference method, with and without topological regularization during the PCA embedding of the data.
For this experiment, we use the PCA and topologically regularized embeddings of the cell cycle data which has also been analyzed by \citet{buettner2015computational}.
Our automated pseudotime inference method consists of the following steps; a sketch of the projection and pseudotime steps is given after the list.
\begin{enumerate}
\item First, a representation of the most prominent cycle in the embedding is obtained through persistent homology from the $\alpha$-filtration, using the Python software library Dionysus (\url{https://pypi.org/project/dionysus/}).
It can be seen as a circular representation---discretized in edges between data points---of the point in the 1st-dimensional persistence diagram that corresponds to the most persisting cycle (Figures \ref{fig:diode_cycle_pca} \& \ref{fig:diode_cycle_reg}).
\item An orthogonal projection of the embedded data onto the representative cycle is obtained.
This is an intermediate step to derive continuous pseudotimes from a discretized topological representation, as earlier described by \citet{Saelens276907} (Figures \ref{fig:diode_proj_pca} \& \ref{fig:diode_proj_reg}).
\item The lengths between consecutive points on the orthogonal projection are used to obtain circular pseudotimes between $0$ and $2\pi$ (Figures \ref{fig:diode_pseudo_pca} \& \ref{fig:diode_pseudo_reg}).
\end{enumerate}
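A sketch of steps 2 and 3, assuming the cycle representation is given as an ordered $(m, 2)$ array of consecutive vertices in the two-dimensional embedding:
\begin{verbatim}
import numpy as np

def circular_pseudotimes(points, cycle):
    # Project each embedded point orthogonally onto the closed polygonal cycle
    # and turn the arc length at the projection into a pseudotime in [0, 2*pi).
    starts, ends = cycle, np.roll(cycle, -1, axis=0)
    seg_vec = ends - starts
    seg_len = np.linalg.norm(seg_vec, axis=1)
    cum_len = np.concatenate([[0.0], np.cumsum(seg_len)])  # arc length at segment starts
    total = cum_len[-1]

    pseudotimes = np.empty(len(points))
    for i, p in enumerate(points):
        # nearest point of p on every segment (projection clipped to the segment)
        t = np.clip(np.einsum('ij,ij->i', p - starts, seg_vec) / seg_len**2, 0.0, 1.0)
        proj = starts + t[:, None] * seg_vec
        k = np.argmin(np.linalg.norm(proj - p, axis=1))  # the closest segment
        pseudotimes[i] = 2 * np.pi * (cum_len[k] + t[k] * seg_len[k]) / total
    return pseudotimes
\end{verbatim}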
Note that within the scope of the current paper, we do not advocate this to be a new and generally applicable automated pseudotime inference method for single-cell data following the cell cycle model.
However, we chose this particular method because it illustrates well what persistent homology identifies in the data and what we aim to optimize through topological regularization.
For example, we observe that (the representation of) the most prominent cycle in the ordinary PCA embedding is rather spurious and mostly linearly separates G1 and G2M cells.
Projecting cells onto the edges of this cycle places most of the cells onto a single edge (Figure \ref{fig:diode_proj_pca}).
The resulting pseudotimes are continuous for cells projected onto this edge, whereas they are mainly discrete for all other cells (Figure \ref{fig:diode_pseudo_pca}).
However, by incorporating prior topological knowledge into the PCA embedding, the (representation of) the most prominent cycle in the embedding now better characterizes the transition between the G1, G2M, and S stages in the cell cycle model (Figure \ref{fig:diode_cycle_reg}).
The automated procedure for pseudotime inference also reflects a more continuous transition between the cell stages (Figures \ref{fig:diode_proj_reg} and \ref{fig:diode_pseudo_reg}).
\paragraph{Bifurcating Cell Trajectory Data}
We use the UMAP loss to illustrate the effect of topological regularization on the bifurcating model of this single-cell dataset. The topological loss is $\mathcal{L}_{\mathrm{top}}\equiv\mathcal{L}_{\mathrm{conn}}+\mathcal{L}_{\mathrm{flare}}$, where $\mathcal{L}_{\mathrm{conn}}({\mathcal{D}}_0) = \sum_{k=2}^\infty (d_k-b_k)$ measures the total (sum of) finite 0-dimensional persistence in the embedding to encourage connectedness of the representation, and $\mathcal{L}_{\mathrm{flare}} = -\left[d_3-b_3\right]_{\mathcal{E}_{{\bm{E}}}^{-1}]-\infty,0.75]}$ optimizes for a `flare' with (at least) three clusters away from the embedding mean.
We observe that while the ordinary UMAP embedding is more dispersed (Figure \ref{fig:CellBifUMAP}), the topologically regularized embedding is more constrained towards a connected bifurcating shape (Figure \ref{fig:CellBifTop}).
For comparison, we conducted topological optimization for the loss $\mathcal{L}_{\mathrm{top}}$ of the initialized UMAP embedding without the UMAP embedding loss.
The resulting embedding is now more fragmented (Figure \ref{fig:CellBifUMAPopt}).
We thus see that topological optimization may also benefit from the embedding loss.
Both losses are needed: the topological loss to impose the topological prior knowledge, and the embedding loss to ensure that the learned representation remains faithful to the data.
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{.325\linewidth}
\centering
\includegraphics[width=\linewidth]{Images/CellBifurcating/CellBifUMAP}
\caption{Ordinary UMAP embedding.}
\label{fig:CellBifUMAP}
\end{subfigure}
\begin{subfigure}[t]{.325\linewidth}
\centering
\includegraphics[width=\linewidth]{Images/CellBifurcating/CellBifUMAPopt}
\caption{Top.\ optimized embedding.}
\label{fig:CellBifUMAPopt}
\end{subfigure}
\begin{subfigure}[t]{.325\linewidth}
\centering
\includegraphics[width=\linewidth]{Images/CellBifurcating/CellBifTop}
\caption{Top.\ regularized embedding.}
\label{fig:CellBifTop}
\end{subfigure}
\caption{Embeddings of the bifurcating cell data with colors representing the cell group.}
\label{fig:CellBif}
\end{figure*}
\paragraph{Karate}
The Karate network consists of two different communities represented by their key figures (John A.\ and Mr.\ Hi), as shown in Figure \ref{fig:karate}.
To embed the graph, we used a DeepWalk variant adapted from \citet{graph_nets}.
While the ordinary DeepWalk embedding (Figure \ref{KarateDW}) well respects the ordering of points according to their communities, the communities remain close to each other.
We thus regularized this embedding for (at least) two clusters using the topological loss with sampling as defined by (\ref{sampleloss}), where $\mathcal{L}_{\mathrm{top}}({\mathcal{D}}_0) = -(d_2 - b_2)$ measures the persistence of the second most prominent 0-dimensional hole, and $f_{\mathcal{S}}=0.25$, $n_{\mathcal{S}}=10$.
The resulting embedding (Figure \ref{KarateTop}) now nearly perfectly separates the two ground truth communities present in the graph.
Further optimizing the ordinary DeepWalk embedding with the same topological loss but without the DeepWalk loss results in a larger separation of the two communities (Figure \ref{KarateDWopt}). Compared to the regularized embedding, the visible structure within the two communities is reduced, again showing the importance of including both the embedding and the topological loss.
\begin{figure}[t]
\begin{subfigure}[t]{.325\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/Karate/KarateDW}
\captionsetup{width=.8\linewidth}
\caption{Ordinary DeepWalk embedding.}
\label{KarateDW}
\end{subfigure}
\begin{subfigure}[t]{.325\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/Karate/KarateDWopt}
\caption{Top.\ optimized embedding.}
\label{KarateDWopt}
\end{subfigure}
\begin{subfigure}[t]{.325\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/Karate/Karatetop}
\caption{Top.\ regularized embedding.}
\label{KarateTop}
\end{subfigure}
\caption{Various embeddings of the Karate network.}
\end{figure}
\paragraph{Harry Potter}
\label{SEC::HarryPotter}
To embed the Harry Potter graph, we used a simple graph embedding model where the sigmoid of the inner product between embedded nodes quantifies the (Bernoulli) probability of an edge occurrence \citep{rendle2020neural}.
Thus, this probability will be high for nodes close to each other in the embedding, and low for more distant nodes. These probabilities are then optimized to match the binary edge indicator vector.
Figure \ref{HarryIP} shows the result of this embedding, along with the circular model presented by \citet{JMLR:v21:19-1032}.
For clarity, character labels are only annotated for a subset of the nodes (the same as by \citet{JMLR:v21:19-1032}).
To better represent the circular model that transitions between the `good' and `evil' characters, we regularized the embedding using a topological loss function $\mathcal{L}_{\mathrm{top}}({\mathcal{D}}_1) = -(d_1-b_1)$ that measures the persistence of the most prominent 1-dimensional hole.
The resulting embedding is shown in Figure \ref{HarryTop}.
Interestingly, the topologically regularized embedding now better captures the circularity of the model identified by \citet{JMLR:v21:19-1032}, and focuses more on distributing the characters along it.
Note that although this previously identified model is included in the visualizations, it is not used to derive the embeddings, nor is it derived from them.
For comparison, Figure \ref{HarryIPopt} shows the result of optimizing the ordinary graph embedding (used as initialization) for the same topological loss, but without the graph embedding loss.
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{.30\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/HarryPotter/HarryIP}
\caption{Ordinary graph embedding.}
\label{HarryIP}
\end{subfigure}\hfill
\begin{subfigure}[t]{.325\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/HarryPotter/HarryOpt}
\caption{Topologically optimized embedding (initialized with the ordinary graph embedding).}
\label{HarryIPopt}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.35\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/HarryPotter/HarryTop}
\caption{Topologically regularized embedding.}
\label{HarryTop}
\end{subfigure}
\caption{Various embeddings of the Harry Potter graph and the circular model therein.}
\label{HPembed}
\end{figure*}
We conclude that topological regularization does successfully impose the specified structure in the embeddings.
\subsubsection{Quantitative Evaluation}
\label{quantitative}
In this section we analyze the embedding and topological losses from the previous experiments and investigate Q2 by quantifying changes in prediction performance on the different embeddings.
Table \ref{losstable} summarizes the losses we obtained for the ordinary embeddings, the topologically optimized embeddings (initialized with the ordinary embeddings, but not using the embedding loss), as well as for the topologically regularized embeddings.
As one would expect, the embedding losses of the topologically regularized embeddings lie between those of the ordinary and the topologically optimized embeddings.
We also observe that there are more significant differences in the obtained topological losses than in the embedding losses with and without regularization.
This suggests that the optimum region for the embedding loss may be somewhat flat with respect to the corresponding region for the topological loss.
Thus, slight shifts in the local embedding optimum by topological regularization may result in much better topological embedding models.
We evaluated the quality of the embedding visualizations presented in Section \ref{sec:effec_qualitative}, by assessing how informative they are for predicting ground truth data labels.
For the Synthetic Cycle data, these labels are the 2D coordinates of the noise-free data on the unit circle in $\mathbb{R}^2$, and we used a multi-output support vector regressor model.
For the cell trajectory data and Karate network, we used the ground truth cell groupings and community assignments, respectively, and a support vector machine model.
The points in the 2D embeddings were then split into 90\% points for training and 10\% for testing, with the exception of the synthetic cycle dataset, where we used an 80/20 split.
Subsequently, we used 5-fold CV on the training data to tune the regularization hyperparameter $C\in\{1\mathrm{e}-2, 1\mathrm{e}-1, 1, 1\mathrm{e}1, 1\mathrm{e}2\}$.
Other settings were the default from \textsc{scikit-learn}.
The performance of the final tuned and trained model was then evaluated on the test data, through the $r^2$ coefficient of determination for the regression problem, and the accuracy for all classification problems.
The averaged test performance metrics and their standard deviations, obtained over 100 random train-test splits, are summarized in Table \ref{quanttable}.
We observe that quantitative differences between the ordinary, optimized, and regularized embeddings are small. However, aligning the embedding with the expected topology does not seem to decrease the prediction performance.
\begin{table}[tp]
\renewcommand\arraystretch{1.2}
\centering
\begin{tabular}{lccc|rrr}
\toprule
& \multicolumn{3}{c}{Embedding loss} & \multicolumn{3}{c}{Topological loss}\\
Data & ord. & top.\ opt. & top.\ reg. & ord. & top.\ opt. & top.\ reg. \\
\midrule
Synthetic Cycle & $\bf{0.0632}$ & $0.0639$ & $0.0634$ & $-0.42$ & $\bf{-1.63}$ & $-1.38$ \\
Cell Cycle & $\bf{6.70}$ & $7.00$ & $6.78$ & $-10.18$ & $\bf{-68.34}$ & $-63.52$ \\
Cell Bifurcating & $\bf{8496}$ & $9865$ & $8896$ & $116.03$ & $\bf{24.03}$ & $63.90$ \\
Karate & $\bf{2006}$ & $3239$ & $2112$ & $-0.59$ & $\bf{-4.58}$ & $-1.82$ \\
Harry Potter & $\bf{0.20}$ & $4.25$ & $0.23$ & $-0.82$ & $\bf{-5.84}$ & $-3.05$ \\
\bottomrule
\end{tabular}
\caption{Embedding/reconstruction and topological losses of the final embeddings.
Lowest in bold. If the topological loss function was computed on random subsets of the data, we report the loss on the full dataset.}
\label{losstable}
\end{table}
\begin{table}[tp]
\renewcommand\arraystretch{1.2}
\centering
\begin{tabular}{llccc}
\toprule
Data & Metric & Ord.\ emb. & Top.\ opt. & Top.\ reg. \\
\midrule
Synthetic Cycle & $r^2$ & $0.62\pm0.15$ & $0.60\pm0.14$ & $\bf{0.67\pm0.20}$\\ %
Cell Cycle & accuracy & $\bf{0.79\pm0.07}$ & $\bf{0.79\pm0.07}$ & $0.78\pm0.07$ \\
Cell Bifurcating & accuracy & $0.78\pm0.08$ & $\bf{0.85\pm0.07}$ & $0.82\pm0.08$ \\
Karate & accuracy & $\bf{0.97\pm0.08}$ & $\bf{0.97\pm0.08}$ & $\bf{0.97\pm0.08}$\\
\bottomrule
\end{tabular}
\caption{Embedding performance evaluations for label prediction.
Highest in bold.}
\label{quanttable}
\end{table}
\subsubsection{Scalability}
To illustrate the empirical runtime of topological regularization, we optimized a random 2-dimensional dataset of size $n$ using a circle loss but no embedding loss for 100 iterations (learning rate $0.01$). A circle is computationally the most challenging topological hole to optimize in a 2D embedding.
The experiments have been carried out on a Dell Latitude 5511 laptop equipped with an Intel(R) Core(TM) i7-10850H 2.70GHz processor, with 16.0 GB of RAM.
We show the runtime without sampling in a log-log plot in Figure \ref{fig:runtime}.
Computing the topological loss using sampling with $f_{\mathcal{S}} = 0.1$ and $n_{\mathcal{S}} = 10$ takes $1.9$, $17$, $191$, and $2284$ seconds. This is comparable to the runtime for 100 iterations without sampling, but we might need fewer iterations to reach a similar result. This is because the topological loss for a one-dimensional hole has nonzero gradients only for $4$ points (see Section \ref{SUBSUBSEC::pointcloudopt}). By aggregating the loss for $n_{\mathcal{S}}$ subsets, the gradient of the topological loss can affect up to $n_\mathcal{S}\cdot 4$ points.
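As a rough way to reproduce such measurements, one can time the dominant per-iteration cost, namely the persistent homology computation on the embedding, for growing dataset sizes. The sketch below is an illustrative assumption only: it uses the \textsc{gudhi} library and the weak alpha filtration, and ignores the gradient computation and the remainder of the optimization loop.
\begin{verbatim}
# Rough benchmark of the per-iteration persistent homology cost on random
# 2D embeddings of increasing size, using the weak Alpha filtration from
# gudhi. This ignores the backward pass; it only illustrates the scaling.
import time
import numpy as np
import gudhi

for n in [1_000, 10_000, 100_000]:
    E = np.random.rand(n, 2)
    start = time.perf_counter()
    simplex_tree = gudhi.AlphaComplex(points=E).create_simplex_tree()
    simplex_tree.persistence()   # birth-death pairs of all homology dimensions
    print(f"n={n}: {time.perf_counter() - start:.2f}s per persistence computation")
\end{verbatim}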
\begin{figure}[tp]
\centering
\includegraphics[width=.4\textwidth]{Images/Runtime/runtime_log_log.png}
\caption{Log-log plot showing dataset size vs. runtime to optimize for a one-dimensional hole for 100 iterations without sampling.}
\label{fig:runtime}
\end{figure}
\subsection{Robustness of Topological Regularization}
\label{sec:robustness}
In this section we provide more intuition about the limits of topological regularization and seek to answer the following questions:
\begin{description}
\item[Q4.] How does the precise formulation of the topological loss affect the final embedding?
\item[Q5.] How does a weak signal to noise ratio affect the optimization?
\item[Q6.] What is the effect of regularizing with wrong prior information?
\end{description}
An overview of the topological loss functions that regularize the embeddings presented in this section is given in Table \ref{tab:robustness_loss}.
\begin{table*}[t]
\centering
\renewcommand\tabcolsep{5pt}
\renewcommand\arraystretch{1.5}
\begin{tabular}{p{2.8cm}lllp{1.3cm}}
\toprule
Data & Fig. & Topological loss function & Dim & $f_{\mathcal{S}}, n_{\mathcal{S}}$ \\
\midrule
Synthetic Cycle & \ref{fig:robustness_sampling} & $-(d_1-b_1)$ & 1 & $0.25, 4$\newline $0.5, 2$\newline $1, 1$\\
Random \newline Synthetic Cycle & \ref{fig:RandTop}, \ref{fig:robustness_zscore} & $-(d_1-b_1)$ & 1 & $0.25, 4$ \\
Synthetic Cycle & \ref{fig:SynthCircle_secondcircle} & $-(d_2-b_2)$ & 1 & \textcolor{gray}{NA} \\
Synthetic Cycle & \ref{fig:SynthCircle_connected} & {\small $\sum_{k=2}^\infty d_k$ } & 0 & \textcolor{gray}{NA} \\
Cell Bifurcating & \ref{fig:bifotherp} & {\small $\sum_{k=2}^\infty (d_k-b_k)^{p_{\text{CC}}}-\left[(d_3-b_3)^{p_{\text{Flare}}}\right]_{\mathcal{E}_{{\bm{E}}}^{-1}]-\infty,0.75]}$} & 0 - 0 & \textcolor{gray}{NA} \\
Cell Bifurcating & \ref{CellBifTau} & {\small $\sum_{k=2}^\infty (d_k-b_k)-\left[d_3-b_3\right]_{\mathcal{E}_{{\bm{E}}}^{-1}]-\infty,\tau]}$ } & 0 - 0 & \textcolor{gray}{NA} \\
Cell Bifurcating & \ref{fig:CellBifCircleStrong} & $-(d_1-b_1)$ & 1 & $0.25, 10$ \\
\bottomrule
\end{tabular}
\caption{Summary of the topological losses used to regularize embeddings in Section \ref{sec:robustness}. Note that for 0-th dimensional homology diagrams $d_1=\infty$.}
\label{tab:robustness_loss}
\end{table*}
\subsubsection{Varying the Parameters of Topological Priors\label{SEC:robust_params}}
In Section \ref{SEC::topreg} we introduced topological loss functions with sampling (see Equation~(\ref{sampleloss})) and for the flare shape (see Equation~(\ref{functionalloss})). Here we investigate how topological regularization reacts to different choices of the hyperparameters that are used in these loss functions.
We vary the sampling fraction $f_{\mathcal{S}}$ and number of repeats $n_{\mathcal{S}}$ in Equation~(\ref{sampleloss}), the functional threshold $\tau$ in Equation~(\ref{functionalloss}), and the function to evaluate the points in the persistence diagram.
Note that although changing these parameters may change the \emph{geometrical} characteristics that are specified, the \emph{topological} characteristics remain roughly the same with respect to the chosen topological loss function.
\paragraph{Varying $f_{\mathcal{S}}$ and $n_{\mathcal{S}}$}
Figure \ref{fig:robustness_sampling} shows the topologically regularized PCA embedding of the synthetic cycle for varying choices of the values $f_{\mathcal{S}}$ and $n_{\mathcal{S}}$.
We again optimize for a one-dimensional hole (see Table \ref{tab:robustness_loss}) and use the same parameter setting as before, stated in Table \ref{tab:effectiveness_parameters}. We chose the number of repeats $n_{\mathcal{S}}$ such that computing the embeddings requires approximately the same runtime.
We observe that by increasing $f_{\mathcal{S}}$, the hole tends to become more clearly defined, with no points remaining inside its boundary.
More specifically, one can more easily identify a subset of the data that visually represents, and lies on, a nearly perfect circle in the Euclidean plane.
Nevertheless, we also notice that without sampling ($f_{\mathcal{S}} = 1$), the embedded circle does not align perfectly with the ground truth and only part of the data lies on it.
We conclude that the sampling fraction should be small enough to avoid optimizing spurious holes but large enough to ensure the $d$-dimensional features of interest are present and stable. In this example, regularizing with $f_{\mathcal{S}} = 0.25$ is not stable, as the largest hole in the embedding will change depending on whether the two points in the center are in the sample or not.
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{.325\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/ToyCircle/cycle_sampling_fs_0.png}
\end{subfigure}\hfill
\begin{subfigure}[t]{.325\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/ToyCircle/cycle_sampling_fs_1.png}
\end{subfigure}\hfill
\begin{subfigure}[t]{.325\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/ToyCircle/cycle_sampling_fs_2.png}
\end{subfigure}
\caption{Topologically regularized PCA embeddings of the synthetic cycle dataset for varying sampling fraction $f_{\mathcal{S}}$ and number of repeats $n_{\mathcal{S}}$.}
\label{fig:robustness_sampling}
\end{figure*}
\paragraph{Different Loss Functions for the same Topological Prior (varying $\bm{g}$)}
\label{PARAGRAPH::vary_g}
For one particular type of prior topological information, one can design different topological loss functions by varying the real-valued function $g$, evaluated on the persistence diagram points, that is used in the loss function (\ref{eq:topo_loss}).
In particular, our topological loss function (\ref{eq:kdim-hole-loss}), where we let $g\vcentcolon\mspace{-1.2mu}\equiv b_k-d_k$, is inspired by the topological loss function
\begin{equation}
\label{fromloss}
\mathcal{L}_{\mathrm{top}}({\bm{E}})\coloneqq\mathcal{L}_{\mathrm{top}}(\mathcal{D})=\mu\sum_{k=i, d_k<\infty}^{|\mathcal{D}^{\mathrm{reg}}|}\left(d_k-b_k\right)^p\left(\frac{d_k+b_k}{2}\right)^q,\hspace{2em}\mbox{ where } d_1-b_1\geq d_2-b_2\geq\ldots,
\end{equation}
introduced by \citet{gabrielsson2020topology}, and more formally investigated within an algebraic setting by \citet{adcock2013ring}.
Here, $p$ and $q$ are hyperparameters that control the strength of penalization, whereas $i$ and the choice of persistence diagram (or homology dimension) are used to postulate the topological information of interest.
The choice of $\mu\in\{1,-1\}$ determines whether one wants to increase ($\mu=-1$) or decrease ($\mu=1$) the prominence of the topological information in the data for which the topological loss function is designed.
The term $(d_k-b_k)^p$ can be used to control the prominence, i.e., the persistence, of the most prominent topological holes, whereas the term $\left(\frac{d_k+b_k}{2}\right)^q$ can be used to control whether prominent topological features persist either early or later in the filtration.
Our topological loss function (\ref{eq:kdim-hole-loss}) is identical to (\ref{fromloss}) with $p=1$ and $q=0$, with the additional change that we sum up to the hyperparameter $j$ instead of $|\mathcal{D}^{\mathrm{reg}}|$, which allows for even more flexible prior topological information.
Note that this is one of the simplest and most intuitive, yet flexible, ways to postulate topological loss functions.
Indeed, $(d_k-b_k)^{p=1}$ directly measures the persistence of the $k$-th most prominent topological hole for a chosen dimension of interest.
The role of $q\neq 0$ is to be further investigated within the context of topological regularization and formulating prior topological information.
It may already be clear that the same topological information can be postulated through different choices for $p>0$.
To explore this, we considered topologically regularized UMAP embeddings of the real bifurcating single cell data, for the topological loss function as stated in Table \ref{tab:robustness_loss} where the parameter $p$ is varied, i.e.,
\begin{equation}
\label{bifloss}
\mathcal{L}_{\mathrm{top}}({\bm{E}})=\sum_{k=2}^\infty (d_k-b_k)^{p_{\text{CC}}}-\left[(d_3-b_3)^{p_{\text{Flare}}}\right]_{\mathcal{E}_{{\bm{E}}}^{-1}]-\infty,0.75]}.
\end{equation}
We only change one of the exponents, $p_{\text{CC}}$ or $p_{\text{Flare}}$, to investigate their effects independently and show the embeddings in Figure \ref{fig:bifotherp}.
We observe that a small value for $p_{\text{CC}}$ leads to many overlapping points, as even the smallest gaps contribute significantly to the loss. For $p_{\text{CC}} = 2$ there are no large gaps anymore and all points have approximately the same distance to their nearest neighbor.
With increasing values of $p_{\text{Flare}}$, points that are sufficiently far away from the center are pulled towards the leaves of the represented topological model, which increases the persistence of the three clusters. However, a few points at the center of the bifurcation keep everything connected, as enforced by the first part of the topological loss.
Nevertheless, although different values of $p$ appear to affect the overall spacing between points, we see that the embedded topological shape remains recognizable and consistent.
\begin{figure*}[tp]
\centering
\begin{subfigure}[t]{.325\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/CellBifurcating/CellBifConn_p_0_5}
\end{subfigure}\hfill
\begin{subfigure}[t]{.325\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/CellBifurcating/CellBifConn_p_1}
\end{subfigure}\hfill
\begin{subfigure}[t]{.325\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/CellBifurcating/CellBifConn_p_2}
\end{subfigure}
\begin{subfigure}[t]{.325\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/CellBifurcating/CellBifFlare_p_0_5}
\end{subfigure}\hspace{2mm}
\begin{subfigure}[t]{.325\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/CellBifurcating/CellBifFlare_p_2}
\end{subfigure}
\caption{The topologically regularized UMAP embeddings of the bifurcating real single cell data according to (\ref{bifloss}), for various values of $p$.}
\label{fig:bifotherp}
\end{figure*}
\paragraph{Varying $\tau$}
Figure \ref{CellBifTau} shows the topologically regularized UMAP embedding of the real single-cell data following a bifurcating cell differentiation model, for varying choices of the functional threshold $\tau$.
Recall that $0\leq\tau\leq1$ is a threshold on the normalized distance of the embedded points to their mean as defined in (\ref{centrality}).
Intuitively, $\tau$ specifies how far towards the embedding mean one looks: higher values of $\tau$ allow more points (those closer to the embedding mean) to be included when optimizing for three clusters away from the center.
While little differences can be observed between $\tau=0.25$ and $\tau=0.5$, for $\tau=0.75$, points near the center of bifurcation are more stretched towards the endpoints in the embedded model.
This is related to the fact that for higher values of $\tau$, more points close to the center are included when optimizing for three separated clusters.
\begin{figure*}[tp]
\centering
\begin{subfigure}[t]{.325\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/CellBifurcating/CellBifTau_0_25}
\end{subfigure}\hfill
\begin{subfigure}[t]{.325\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/CellBifurcating/CellBifTau_0_5}
\end{subfigure}\hfill
\begin{subfigure}[t]{.325\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/CellBifurcating/CellBifTau_0_75}
\end{subfigure}
\caption{The topologically regularized UMAP embeddings of the bifurcating real single cell data, for various functional thresholds $\tau$.}
\label{CellBifTau}
\end{figure*}
\subsubsection{Regularization with Weak Signal to Noise Ratio}
\label{sec:weaksignal}
In the synthetic data we used throughout the experiments, the cycle was by construction in the dimensions with highest variance. Thus, the embedding initialization with PCA is already somewhat aligned with the underlying circular structure. Even when initialized differently, the PCA (MSE) loss would drive the embedding towards this configuration.
We investigate a setup with a 10-dimensional synthetic dataset (n=100) that again contains a cycle in the first two dimensions, but standardize all dimensions to have equal variance.
The PCA projection (Figure \ref{fig:robustness_zscore_pca}) of this data does not necessarily align with the first two dimensions and does not show the circle.
Topological regularization initialized with this PCA projection results in an embedding with a spurious hole (Figure \ref{fig:robustness_zscore_pca_regularized}).
A projection on the first two dimensions and its topologically regularized optimization (Figures \ref{fig:robustness_zscore_d12} and \ref{fig:robustness_zscore_d12_regularized}) both have better objective values due to the lower topological loss.
It appears that the topologically regularized embedding using PCA initialization in \ref{fig:robustness_zscore_pca_regularized} is a local optimum of the loss that the current local optimization technique is not able to overcome.
As the total loss of the projection using only the first two dimensions is substantially lower, and that of the regularized embedding initialized with the ground truth even more so, we presume that with different optimization techniques, or a more intelligent initialization, it could still be possible to identify the correct cycle in this challenging setting. However, this also highlights that the loss is not easy to optimize well in more difficult settings.
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{.35\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/ToyCircle/robustness_zscore_pca}
\caption{PCA projection.}
\label{fig:robustness_zscore_pca}
\end{subfigure}\hspace{.1\textwidth}
\begin{subfigure}[t]{.35\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/ToyCircle/robustness_zscore_pca_regularized}
\subcaption{Topologically regularized embedding initialized with \ref{fig:robustness_zscore_pca}.}
\label{fig:robustness_zscore_pca_regularized}
\end{subfigure}
\begin{subfigure}[t]{.35\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/ToyCircle/robustness_zscore_d12}
\subcaption{Visualization of the first two data dimensions.}
\label{fig:robustness_zscore_d12}
\end{subfigure}\hspace{.1\textwidth}
\begin{subfigure}[t]{.35\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/ToyCircle/robustness_zscore_d12_regularized}
\subcaption{Topologically regularized embedding initialized with \ref{fig:robustness_zscore_d12}.}
\label{fig:robustness_zscore_d12_regularized}
\end{subfigure}
\caption{Regularized embeddings of (n=100, d=10) z-score standardized data in (b) and (d), initialized with the PCA projection (a) and the first two dimensions corresponding to the ground truth circle (c) respectively. The total objective is computed as $\mathcal{L}_{\mathrm{emb}} + 10 \cdot \mathcal{L}_{\mathrm{top}}$.}
\label{fig:robustness_zscore}
\end{figure*}
\subsubsection{Optimizing for Different Topological Priors}
\label{sec:diffloss}
As a user may not always have strong prior expert topological knowledge available, we investigate Q6 by studying how topological regularization reacts to different topological loss functions that do not model the ground truth topological structure of the data.
\paragraph{Wrong Prior on Synthetic Cycle}
We explore the effect of two different topological loss functions regularizing the embedding of the synthetic cycle. For this dataset we know that there is exactly one topological model, i.e., a circle, that generates the data. We compute the regularized PCA embedding with the same parameters as before (see Table \ref{tab:effectiveness_parameters}) and summarize the loss functions in Table \ref{tab:robustness_loss}.
In Figure \ref{fig:SynthCircle_secondcircle} we show the result of regularizing the embedding with a topological loss maximizing the persistence of the second most prominent cycle. We consider this loss to specify a partially wrong form of prior information, as it imposes that the persistence of the most prominent cycle in the embedding must be high as well. The effect is that the two cycles have similar persistence.
The embedding shown in Figure \ref{fig:SynthCircle_connected} was regularized with a topological loss to minimize all finite (0-dimensional) death times, leading to smaller distances between neighboring points.
We also show the embedding regularized with the correct circular prior (\ref{fig:SynthCircle_onecircle}) for comparison.
These experiments show that it is possible to optimize the embedding using a wrong topological prior. In this setting, the reconstruction errors of the embeddings with one circle and with two circles are very similar and do not allow a conclusion as to which prior is correct. Thus, the results must be interpreted with caution: the visual presence of the imposed topological information can in general not be taken as a confirmation that this topological structure is indeed present in the data.
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{.325\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/ToyCircle/SynthCircle_onecircle}
\caption{One circle}
\label{fig:SynthCircle_onecircle}
\end{subfigure}\hfill
\begin{subfigure}[t]{.325\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/ToyCircle/SynthCircle_secondcircle}
\caption{Two equally sized circles}
\label{fig:SynthCircle_secondcircle}
\end{subfigure}\hfill
\begin{subfigure}[t]{.325\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/ToyCircle/SynthCircle_connected}
\caption{Connected component}
\label{fig:SynthCircle_connected}
\end{subfigure}
\caption{Topologically regularized embeddings of the 500-dimensional synthetic data for which the ground truth model is a circle. For easier visual comparison, we include in (a) the embedding already shown in \ref{fig:SynthCircleTop}.}
\label{fig:SynthCircleOther}
\end{figure*}
\paragraph{Topological Models in Random High-Dimensional Data}
In the previous experiment, the synthetic data set is very small compared to its dimensionality (n=50, d=500), which gives the projection considerable flexibility: it can change the low-dimensional embedding of one point almost without affecting the others. Thus, the result of that experiment is hardly surprising, and a more interesting question is \emph{in which situations} topological regularization can yield low-dimensional embeddings that show spurious topologies.
Without aiming to be exhaustive, we investigate this question by topologically regularizing a random high-dimensional dataset of the same size and dimensionality (n=50, d=500), and with the same amount of noise as the synthetic cycle, but without the circular signal. The topological loss function for regularization also remains the same as stated in Table \ref{tab:robustness_loss}. We observe that, arguably unsurprisingly, regularizing the embedding of the random data (Figure \ref{fig:robustness_random_500}) results in a circle visually similar to the one in the embedding of the synthetic cycle. However, when reducing the dimensionality of the data to d=10, the cycle is much less pronounced (Figure \ref{fig:robustness_random_10}), even when using a higher weight for the topological loss ($\lambda_{\mathrm{top}} = 10$).
These experiments show that in data where the number of dimensions is high compared to the number of data points, the result might show the specified topological structure regardless of whether it is meaningfully present in the data. This may happen even with a small weight for the topological loss. When the number of data points is sufficiently large compared to the number of dimensions, however, topological regularization is less vulnerable to this. While these results may not be surprising to most readers, we believe it is important to stress that topologically regularized embeddings cannot and should not be used as a test to check whether a certain prior about the data is correct.
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{.32\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/ToyCircle/topReg_synthetic_circle_50samples_500dims_normal_}
\caption{500-dimensional synthetic data containing a cycle.}
\end{subfigure}\hfill
\begin{subfigure}[t]{.3\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/ToyCircle/topReg_synthetic_random_50samples_500dims_normal_}
\caption{500-dimensional point cloud without circular ground truth.}
\label{fig:robustness_random_500}
\end{subfigure}\hfill
\begin{subfigure}[t]{.33\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/ToyCircle/topReg_synthetic_random_50samples_10dims_normal_}
\captionsetup{width=.8\linewidth}
\caption{10-dimensional point cloud without circular ground truth.}
\label{fig:robustness_random_10}
\end{subfigure}
\caption{Topologically regularized embeddings of the 500-dimensional dataset containing the cycle compared to 500 and 10-dimensional point clouds regularized with the same topological loss. To embed a circular structure in the projection of the 10-dimensional data, we used a higher weight on the topological loss ($\lambda_{\mathrm{top}} = 10$).}
\label{fig:RandTop}
\end{figure*}
\paragraph{Circular Prior on Bifurcating Single Cell Data}
In the previous experiments, we showed the effects of regularizing an orthogonal projection with a wrong topological prior in different settings. Here, we consider the effect on non-linear embeddings obtained with UMAP. We assume these may be more prone to model wrong prior topological information, as the point coordinates are directly optimized in the embedding space, which offers substantial flexibility regardless of the dimensionality of the input data.
To explore this, we embed the real bifurcating single cell data with a circular prior imposed through the topological loss function $-(d_1-b_1)$ with $f_{\mathcal{S}} = 0.25$ and $n_{\mathcal{S}} = 10$, evaluated on the 1-dimensional persistence diagram.
Figure \ref{fig:CellBifCircleStrong} shows the topologically regularized UMAP embeddings for the circular prior for different topological regularization strengths $\lambda_{\mathrm{top}}$, optimized for 250 epochs.
Naturally, higher topological regularization strengths $\lambda_{\mathrm{top}}$ will more significantly impact the topological representation in the data embedding.
We observe that with $\lambda_{\mathrm{top}}=10$, the UMAP loss prevents the embedding from showing a hole.
With a regularization strength of $\lambda_{\mathrm{top}}=50$, we observe the circle in the regularized embedding. With $\lambda_{\mathrm{top}}=100$, the radius increases, but points from the endpoints of the bifurcation lie mostly outside of the circle.
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{.32\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/CellBifurcating/CellBifRegCircle_topoweight_10}
\end{subfigure}\hfill
\begin{subfigure}[t]{.32\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/CellBifurcating/CellBifRegCircle_topoweight_50}
\end{subfigure}\hfill
\begin{subfigure}[t]{.32\textwidth}
\centering
\includegraphics[width=\linewidth]{Images/CellBifurcating/CellBifRegCircle_topoweight_100}
\end{subfigure}
\caption{Topologically regularized UMAP embeddings of the bifurcating cell data, using potentially wrong prior topological (circular) information. We show embeddings optimized for 250 epochs with various topological regularization strengths $\lambda_{\mathrm{top}}$.}
\label{fig:CellBifCircleStrong}
\end{figure*}
Returning to research question Q6, we conclude that topological regularization cannot and should not be used to test whether a data set contains a certain topological structure. We showed that especially for high-dimensional data with few samples, and for flexible embedding methods like UMAP, arbitrary structures can be optimized. In addition, the suitability of a topological loss function should not be assessed by the value of $\lambda_{\mathrm{top}}$ required to have a visible effect on the embedding, as this value mainly depends on the relative magnitudes of the embedding and topological losses.
Thus, caution is warranted when no reliable topological prior information is available.
\section{Introduction\label{SEC::intro}}
Data embedding methods are commonly used in data science to convert raw input data into a form that is more suitable for learning and consecutive inference.
For example, linear dimensionality reduction methods such as Principal Component Analysis \citep[PCA;][]{jolliffe1986principal} and Linear Discriminant Analysis \citep[LDA;][]{mclachlan2005discriminant} are commonly used to preprocess data for unsupervised or supervised learning.
Non-linear dimensionality reduction methods such as Diffusion Maps \citep{coifman2005geometric}, Uniform Manifold Approximation and Projection \citep[UMAP;][]{mcinnes2018umap}, and $t$-distributed Stochastic Neighbor Embedding \citep[$t$-SNE;][]{van2008visualizing} are often used to preprocess or visualize high-dimensional data.
For relational input data, graph embedding methods such as DeepWalk \citep{perozzi2014deepwalk} allow one to represent the data in a matrix structure that is convenient to visualize or use as an input for common machine learning algorithms.
One of the main arguments for using data embeddings is that they improve the signal-to-noise ratio in the data,
making meaningful information in the data more salient. %
This facilitates robust and automated inference from the data, as well as human understanding and interaction when the embeddings are visualized in a two-dimensional (2D) or three-dimensional (3D) space.
Therefore, data embeddings are invaluable for summarizing the important high-level structure in data in a way that is useful for both automated and interactive learning in a wide variety of application areas and scientific domains.
Unfortunately, data embeddings themselves are susceptible to the noise they commonly aim to reduce \citep{vandaele2022curse}.
For example, consider a high-dimensional noisy data set containing a cycle topology as meaningful structure.
When projecting this data onto its 2D PCA plane, the noise in each dimension will likely shift the PCA plane away from its optimal noise-free orientation, in this way reducing the saliency of the cycle as the dimensionality increases, as illustrated in Figure~\ref{NoisyBalls}.
\begin{figure}[hbp]
\begin{subfigure}[t]{\linewidth}
\centering
\includegraphics[width=\linewidth]{Images/NoisyBallUniform}
\caption{\label{NoisyBallUniform}}
\end{subfigure}
\begin{subfigure}[t]{\linewidth}
\centering
\includegraphics[width=\linewidth]{Images/NoisyBallGaussian}
\caption{\label{NoisyBallGaussian}}
\end{subfigure}
\caption{Two-dimensional PCA projections of a data set of 100 points on the unit circle with added (a) uniform and (b) Gaussian iid noise in high dimensions, with mean $\mu=0$ and standard deviation $\sigma=0.25$ in each dimension.
The coloring of points marks their circular angle without noise, and the title of each plot denotes the input dimensionality.
PCA increasingly struggles to capture the circular topology for increasing dimensionalities.\label{NoisyBalls}}
\end{figure}
In the case of graphs, the relevant information may consist of a certain community structure.
Yet, random links between these communities may cause graph embeddings to fail to effectively separate them for data visualization purposes or consecutive learning tasks.
In other cases, embeddings may truthfully capture the higher-order structure in the data, but the user may be interested in finer or coarser structure that cannot be captured well in a fully automated manner, such as subclusters of a main cluster in the embedding.
\emph{These are examples where the use of topological prior knowledge is required or desired by the user}.
These examples motivate the need for a method that can integrate topological information into data embeddings.
In this paper, we thus introduce the concept of \emph{topological regularization}, and a generic approach for finding \emph{topologically regularized data embeddings} that operationalizes this concept. At a high level, the proposed approach is to find a (matrix) embedding ${\bm{E}}$ of a data set (a point cloud, a graph,\ldots) ${\mathbb{X}}$, with each row from ${\bm{E}}$ corresponding to the embedding of a corresponding element from the data set (a data point, a vertex from the graph,\ldots), that minimizes a total loss function
\begin{equation}
\label{totalloss1}
\mathcal{L}_{\mathrm{tot}}({\bm{E}}, {\mathbb{X}})\coloneqq \mathcal{L}_{\mathrm{emb}}({\bm{E}}, {\mathbb{X}})+\lambda_{\mathrm{top}} \mathcal{L}_{\mathrm{top}}({\bm{E}}),
\end{equation}
where the embedding loss function $\mathcal{L}_{\mathrm{emb}}$ typically aims to preserve a notion of proximity between data elements in the original data, and $\lambda_{\mathrm{top}}>0$ controls the strength of the \emph{topological regularization}.
As we will demonstrate, broad types of topological prior knowledge can be directly encoded through the \emph{topological loss function} $\mathcal{L}_{\mathrm{top}}$.
It is made possible by building on recent developments in the field of \emph{topological data analysis} (TDA), in particular, on \emph{persistent homology}-based optimization \citep{gabrielsson2020topology, solomon2021fast, carriere2021optimizing}.
The theory behind persistent homology---explained in detail in Section \ref{SEC::background}---builds on fundamental concepts from the field of algebraic topology, which is not a standard tool of data scientists.
This may restrict the accessibility of our work to an expert audience, which in turn may limit its adoption in practice.
For this reason, this paper contains an accessible and self-contained overview of relevant concepts of topological optimization, starting from the basic concepts in simplicial homology.
We then introduce topological regularization, and explain in an illustrative manner how to design topological loss functions.
After this, we include an extensive computational and experimental analysis of topological regularization, in which we study its computational cost, robustness, parameter sensitivity, and behavior for different regularization strengths and topological loss functions, as well as its applications and future challenges, while clearly discussing existing limitations.
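To make the general recipe in (\ref{totalloss1}) concrete before diving into these details, the following minimal sketch optimizes an embedding for a total loss of this form by gradient descent. The stress-type embedding loss and the trivial stand-in for $\mathcal{L}_{\mathrm{top}}$ are illustrative assumptions only; they do not correspond to the embedding methods or the persistent homology-based topological losses used in the remainder of this paper.
\begin{verbatim}
# Illustrative sketch of minimizing L_tot = L_emb + lambda_top * L_top by
# gradient descent on the embedding coordinates E. The stress-type L_emb and
# the placeholder L_top below are stand-ins, not the losses used in the paper.
import torch

def embedding_loss(E, X):
    # preserve pairwise distances of the input data (illustrative choice of L_emb)
    return ((torch.cdist(E, E) - torch.cdist(X, X)) ** 2).mean()

def topological_loss(E):
    # placeholder: pull the two farthest points apart; the persistent
    # homology-based topological losses are proposed later in the paper
    return -torch.pdist(E).max()

X = torch.randn(100, 10)                      # input data (hypothetical)
E = torch.nn.Parameter(X[:, :2].clone())      # initialize, e.g., with a projection
optimizer = torch.optim.Adam([E], lr=0.01)
lambda_top = 1.0
for _ in range(250):
    optimizer.zero_grad()
    loss = embedding_loss(E, X) + lambda_top * topological_loss(E)
    loss.backward()
    optimizer.step()
\end{verbatim}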
\subsection{Example of Topological Regularization}
To clarify the purpose and possible impact of topological regularization at a non-technical level, we begin by discussing a simple but illustrative example of its application to single-cell omics analysis. (It is discussed in full depth in Sec.~\ref{sec:effec_qualitative}.)
Single-cell omics include various types of data collected on the cell level, such as transcriptomics, proteomics and epigenomics.
Studying the topological model underlying data may lead to a better understanding of the dynamic processes of biological cells and the regulatory interactions involved therein.
Such dynamic processes can be modeled through trajectory inference methods, also called \emph{pseudotime analysis}, which order cells along a trajectory based on the similarities in their expression patterns \citep{Saelens276907}.
For example, the \emph{cell cycle} is a well known biological differentiation model that takes place in a cell as it grows and divides.
The cell cycle consists of different stages, namely growth (G1), DNA synthesis (S), growth and preparation for mitosis (G2), and mitosis (M).
The latter two stages are often grouped together in a G2M stage.
By studying expression data of cells that participate in the differentiation model, one may identify the genes involved in and between particular stages of the cell cycle \citep{liu2017reconstructing}.
Pseudotime analysis allows such study by assigning to each cell a time during the differentiation process in which it occurs, and thus, the relative positioning of all cells within the cell cycle model.
Thus, \emph{the analysis of single cell cycle data constitutes a problem where prior topological information is available}.
As the signal-to-noise ratio is commonly low in high-dimensional expression data \citep{libralon2009pre, zhang2021noise}, this data is usually preprocessed through a dimensionality reduction method prior to automated pseudotime inference \citep{Cannoodt2016, Saelens276907}.
\emph{Topological regularization provides a tool to enhance the expected topological signal during the embedding procedure, and as such, facilitate automated inference that depends on this signal}.
To illustrate this, we applied an automated (cell) cycle and pseudotime inference method based on persistent homology (see Sec.~\ref{sec:effec_qualitative} for full details) on the real cell cycle data presented in \citep{buettner2015computational}, without (Figure~\ref{RepCyclePCA}) and with (Figure~\ref{RepCycleTop}) our proposed topological regularization of a PCA embedding of the data.
Hereby, we designed the topological loss function to bias the embedding towards a circular model.
Similar to the embedding in Figure \ref{NoisyBalls}, the ordinary PCA embedding struggles to capture the circular topology (Figure \ref{RepCyclePCA}~a).
Therefore, the cycle that is captured in an automated manner is rather spurious, and mostly linearly separates G1 and G2M cells.
Projecting cells onto the edges of this cycle---which is an intermediate step to derive continuous pseudotimes from a discretized topological representation as earlier described in \citet{Saelens276907}---places the majority of the cells onto a single edge (Figure~\ref{RepCyclePCA}~b).
The resulting pseudotimes are mostly continuous for cells projected onto this edge, whereas they are more discretized for all other cells (Figure~\ref{RepCyclePCA}~c).
By incorporating prior topological knowledge into the PCA embedding, the cycle that is captured in an automated manner in the topologically regularized embedding better characterizes the transition between the G1, G2M, and S stages of the cell cycle model (Figure \ref{RepCycleTop}~a).
The automated procedure for pseudotime inference also reflects a more continuous transition between the cell stages (Figures~\ref{RepCycleTop}~b and \ref{RepCycleTop}~c).
\begin{figure}[t]
\centering
\begin{subfigure}[t]{.9\linewidth}
\includegraphics[height=.4cm]{Images/CellCycle/CellCircleLegend}
\end{subfigure}\\
\begin{subfigure}[t]{.32\linewidth}
\centering
\includegraphics[height=3.7cm]{Images/CellCycle/CellCircle_diode}
\caption{The representation of the most prominent cycle obtained through persistent homology.}
\end{subfigure}\hfill%
\begin{subfigure}[t]{.3\linewidth}
\centering
\includegraphics[height=3.8cm]{Images/CellCycle/CellCircle_diodeProjected}
\caption{Orthogonal projection of the embedded data onto the cycle representation.}
\end{subfigure}\hfill%
\begin{subfigure}[t]{.34\linewidth}
\centering
\includegraphics[height=3.8cm]{Images/CellCycle/CellCircle_diodeCoordinates}
\caption{The pseudotimes inferred from the projection in (b), quantified on a continuous color scale.}
\end{subfigure}
\caption{Automated pseudotime inference of real cell cycle data through persistent homology, from the PCA embedding of the data.\label{RepCyclePCA}}
\end{figure}
\begin{figure}[t]
\centering
\begin{subfigure}[t]{.9\linewidth}
\includegraphics[height=.4cm]{Images/CellCycle/CellCircleLegend}
\end{subfigure}\\
\begin{subfigure}[t]{.32\linewidth}
\centering
\includegraphics[height=3.3cm]{Images/CellCycle/CellCircle_diodeTimeCircle}
\caption{The representation of the most prominent cycle obtained through persistent homology.}
\end{subfigure}
\hfill
\begin{subfigure}[t]{.3\linewidth}
\centering
\includegraphics[height=3.3cm]{Images/CellCycle/CellCircle_diodeTimeProjection}
\caption{Orthogonal projection of the embedded data onto the cycle representation.}
\end{subfigure}
\hfill
\begin{subfigure}[t]{.34\linewidth}
\centering
\includegraphics[height=3.3cm]{Images/CellCycle/CellCircle_diodeTimeCoordinates}
\caption{The pseudotimes inferred from the projection in (b), quantified on a continuous color scale.}
\end{subfigure}
\caption{Automated pseudotime inference of real cell cycle data through persistent homology, from the topologically regularized PCA embedding of the data.}
\label{RepCycleTop}
\end{figure}
\subsection{Contributions\label{SUBSEC::contributions}}
We introduce the concept of topological regularization, to effectively bias a data embedding towards prior topological information.
\begin{compactitem}
\item In Section~\ref{SEC::background} we provide an extensive, self-contained, and illustrative introduction to topological optimization of point clouds (or thus embeddings), starting from the basic concepts in simplicial homology.
\item In Section~\ref{SEC::topreg} we propose a class of topological loss functions, explaining how they can be designed for a variety of topological priors, ranging from clusters, cycles, bifurcations, etc., to any combination of these. Our proposal also includes a novel sampling approach to ensure the topological loss functions are both robust and meaningful, while at the same time reducing computational cost.
\item In Section~\ref{SEC::experiments} we present an extensive computational and experimental analysis of topological regularization. We study its computational cost, robustness, parameter sensitivity, and behavior for different regularization strengths, topological loss functions, and topological priors, as well as its applications and future challenges. In doing so, we also discuss and empirically analyze current limitations of the proposed approach.
\item We conclude on our work in Section~\ref{SEC::discconc} and discuss possible applications as well as open challenges for topological regularization in Section~\ref{sec:future_work}.
\end{compactitem}
This paper is a significant extension of an earlier conference paper \citep{vandaele2021topologically}, including substantially more background, improvements to the method, and additional experiments and analysis.
\subsection{Related Work\label{SUBSEC::relwork}}
Here we discuss the main related research and how it relates to the current paper.
\subsubsection{Foundations of Topological Optimization}
Most directly, topological regularization is building on a series of recent papers \citep{gabrielsson2020topology, solomon2021fast, carriere2021optimizing}, that showed that topological optimization---thus optimizing for $\mathcal{L}_{\mathrm{top}}$ in (\ref{totalloss1})---is possible in various settings, and in which the mathematical foundation is developed.
Topological optimization is inherently difficult due to the combinatorial nature of how topological information is quantified from point clouds through persistent homology, as the mapping from input data to its persistence diagrams is highly non-linear without an explicit analytical representation.
To accommodate this, topological optimization makes use of Clarke subderivatives \citep{clarke1990optimization}, whose applicability to persistent homology builds on arguments from o-minimal geometry \citep{van1998tame, carriere2021optimizing}.
Thanks to this recent work \citep{gabrielsson2020topology,carriere2021optimizing}, powerful tools for topological optimization have been developed for software libraries such as PyTorch and TensorFlow.
This allows their use without deeper knowledge of the mathematical foundation of persistent homology.
\subsubsection{Incorporating Topological Optimization into Embeddings}
Topological autoencoders \citep{moor2020topological}, DIPOLE \citep[Distributed Persistence-Optimized Local Embeddings;][]{wagner2021improving}, and Interleaving Dimension Reduction \citep{nelson2022topology} have already combined topological optimization with a data embedding procedure.
The main difference to our work is that the topological information used for optimization is obtained from the original high-dimensional data, and not passed as a prior as in this paper.
While this can be useful, obtaining such topological information heavily relies on distances between observations, which are often meaningless and unstable in high dimensions \citep{vandaele2022curse,aggarwal2001surprising}.
Furthermore, certain constructions such as the \emph{weak Alpha filtration} obtained from the \emph{Delaunay triangulation}---which we will use extensively for topological regularization and are discussed in detail in Section \ref{SEC::background}---are expensive to obtain from high-dimensional data \citep{cignoni1998dewall}, and are therefore best computed from a low-dimensional embedding.
\subsubsection{Incorporating Topological Information into Embeddings}
Besides topological regularization, other methods that incorporate topological information into data embeddings have been developed as well.
For example, Deep Embedded Clustering \citep{xie2016unsupervised} simultaneously learns feature representations and cluster assignments using deep neural networks.
Constrained embeddings of Euclidean data on spheres have also been studied by \citet{bai2015constrained}, and self-organizing stars have been developed to embed data according to a star-shaped topology with a given number of branches \citep{come2010self}.
The common thread in these methods is that they require an extensive development for one particular kind of input data and one particular kind of topological model.
Contrary to this, incorporating topological optimization into representation learning provides a unifying yet versatile approach towards combining data embedding methods with topological priors, that generalizes well to any input structure as long as the output is a point cloud.
\subsubsection{Other Forms of Topological Regularization in Machine Learning}
Changing topological properties of an object to improve consecutive learning and inference has known recent applications in image segmentation \citep{Vandaele2020} and supervised machine learning \citep{chen2019topological, gabrielsson2020topology}.
For example, basic image smoothing (which can also be conducted through topological optimization; \citealp{gabrielsson2020topology}) decreases the prominence of spurious topological features and improves consecutive unsupervised segmentation of skin lesions \citep{Vandaele2020}.
For supervised learning, topological regularization has been used to prevent spurious topological components in classification boundaries \citep{chen2019topological} or obtain fewer local extrema in the weights of machine learning models \citep{gabrielsson2020topology}, to reduce overfitting.
Similar to topological regularization through (\ref{totalloss1}), they decompose the loss function into an ordinary training loss function (cross-entropy loss, quadratic loss, hinge loss, \ldots) and a topological penalty that is evaluated on the obtained model.
During topological regularization of data embeddings however, we are concerned with enhancing the topological signal for which the topological loss function $\mathcal{L}_{\mathrm{top}}$ has been designed, rather than decreasing the prominence of spurious topological components.
Since $\mathcal{L}_{\mathrm{top}}$ must be designed for the domain-specific application at hand, this approach is not generic.
However, in the current paper we show that topological priors can have a similar effect as regular regularization in machine learning, in the sense that the learned embedding becomes less vulnerable to noise.
\section{Topologically Regularized Data Embeddings}
\label{SEC::topreg}
In this section we propose a range of useful topological loss functions to optimize for topological priors in point cloud embeddings. We illustrate the effect of these losses on two-dimensional point cloud data based on persistent homology from the weak $\alpha$-filtration. In Section~\ref{SUBSEC::k-dimensional-holes} we introduce the building blocks that optimize $k$-dimensional holes. A sampling strategy presented in Section~\ref{SUBSEC::sampling} improves the runtime and the representation of particular structures. In Section~\ref{SUBSEC::flares} we describe a more involved topological loss function to optimize flare-like structures. Finally we point out how to incorporate topological regularization into dimensionality reduction methods.
\subsection{Describing Topological Structures as $k$-Dimensional Holes}
\label{SUBSEC::k-dimensional-holes}
The topological loss is computed from the persistence diagram of a filtration on the two-dimensional point cloud embedding. As described in Section~\ref{SUBSEC::persistence}, the persistence diagram keeps track of the \textit{birth} and \textit{death} times of $k$-dimensional topological holes. These features, or the differences between them, can be used to specify loss functions that bias the resulting embedding towards various topological structures when optimized. We instantiate the general loss from Equation~(\ref{eq:topo_loss}) by measuring the persistence of a $k$-dimensional hole with
\begin{equation}
\label{eq:kdim-hole-loss}
\mathcal{L}_{\mathrm{top}}({\mathcal{D}}_k)=\mu \sum_{l=i, d_l < \infty}^j \left(d_l - b_l\right)\,.
\end{equation}
We assume that the birth-death pairs are ordered such that $(b_1, d_1)$ is the pair with the largest persistence. In all our experiments we use $\mu = -1$ to maximize and $\mu = 1$ to minimize this persistence.
Since we compute low-dimensional embeddings in two dimensions, we can only define loss functions on ${\mathcal{D}}_0$ and ${\mathcal{D}}_1$. For three-dimensional embeddings, $\mathcal{L}_{\mathrm{top}}({\mathcal{D}}_2)$ could be defined as well, following a similar intuition as for ${\mathcal{D}}_0$ and ${\mathcal{D}}_1$. In Section~\ref{PARAGRAPH::vary_g} we will introduce a more flexible version of Equation~(\ref{eq:topo_loss}), but stick with this basic measure of persistence for now.
\paragraph{0-dimensional holes (i.e., connected components)}
{\def\arraystretch{2}\tabcolsep=10pt
\begin{table*}[t]
\centering
\begin{tabularx}{\textwidth}{p{0.5cm}
>{\raggedright\arraybackslash}X
>{\centering\arraybackslash}p{3.5cm}
>{\centering\arraybackslash}p{3.5cm}}
\toprule[1pt]
i,j & Description & $\mu=1$ (min) & $\mu=-1$ (max)\\
\midrule
2,2 & Distance between points with the longest edge in the minimum spanning tree (MST)
& \includegraphics[height=2.5cm, width=3.5cm, valign=t, keepaspectratio]{Images/LossFunctions/random_D0_d2_minimize}
& \includegraphics[height=2.5cm, width=3.5cm, valign=t, keepaspectratio]{Images/LossFunctions/random_D0_d2_maximize}\\
3,3 & Distance between points with the second longest edge in the MST
& \includegraphics[height=2.5cm, width=3.5cm, valign=t, keepaspectratio]{Images/LossFunctions/random_D0_d3_minimize}
& \includegraphics[height=2.5cm, width=3.5cm, valign=t, keepaspectratio]{Images/LossFunctions/random_D0_d3_maximize}\\
n,n & Distance between points with the shortest edge in the MST
& \includegraphics[height=2.5cm, width=3.5cm, valign=t, keepaspectratio]{Images/LossFunctions/random_D0_dn_minimize}
& \includegraphics[height=2.5cm, width=3.5cm, valign=t, keepaspectratio]{Images/LossFunctions/random_D0_dn_maximize}\\
2,$\infty$ & All distances of the MST
& \includegraphics[height=2.5cm, width=3.5cm, valign=t, keepaspectratio]{Images/LossFunctions/random_D0_d2-dinf_minimize}
& \includegraphics[height=2.5cm, width=3.5cm, valign=t, keepaspectratio]{Images/LossFunctions/random_D0_d2-dinf_maximize}\\
\bottomrule
\end{tabularx}
\caption{Optimizing topological loss functions using ${\mathcal{D}}_0$ on synthetic data for 500 epochs.}
\label{tab:topological_functions_D0}
\end{table*}
}
For 0-dimensional holes based on the $\alpha$-filtration, the loss function reduces to $\mathcal{L}_{\mathrm{top}}({\mathcal{D}}_0)=\mu\sum_{l=i, d_l < \infty}^j d_l\,$ as the birth times are zero for all pairs. At the start of the filtration (time $\epsilon = 0$), each point constitutes one connected component and with increasing $\epsilon$, components are merged as they become connected through an edge.
We illustrate the effect of different values for $\mu$, $i$, and $j$ in Table \ref{tab:topological_functions_D0}. The point cloud ($n=20$, $d=2$) is sampled from an isotropic Gaussian and we optimize the topological loss for 500 epochs.
The 0-dimensional homology always contains $n$ pairs for a point cloud of size $n$, including one pair with infinite lifetime ($d_1 = \infty$). We thus start with $i=j=2$, which acts upon the persistence pair with the largest finite persistence. Depending on $\mu$, this loss will either decrease or increase the distance between the points that are connected last.
With $i=j=3$, the distance between points that connect second to last is decreased or increased. Note that when $\mu = 1$, this results in a large gap between the two clusters that connect last (one of which consists of only a single point in this case), as the distance between these two clusters does not influence the loss. When $\mu = -1$, however, the point cloud splits into three clusters where the two largest distances are similar (in this case, all three clusters happen to be equidistant), because a sufficiently delayed second-to-last merge eventually becomes the last merge of the filtration, and vice versa.
Minimizing the persistence with $i=j=n$ will make the two points that are closest to each other coincide, while maximizing it will distribute all points to have equal distances to their nearest neighbor and then increase the spacing between all points.
With $i=2$ and $j=\infty$ (or $j=n$) we can either minimize or maximize all finite death times (note the scale of both embeddings). For $\mu = -1$ we observe a circular pattern with some points in the center which only increases in scale if we optimize for more epochs.
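This correspondence between finite $0$-dimensional death times and minimum spanning tree edges also indicates how gradients reach the embedding: the pairing itself is combinatorial, but the selected death times are differentiable functions of the point coordinates. The following simplified sketch of the loss (\ref{eq:kdim-hole-loss}) on ${\mathcal{D}}_0$ illustrates this subgradient construction under the assumption that death times equal MST edge lengths, rather than the weak $\alpha$-filtration values used in our actual implementation.
\begin{verbatim}
# Simplified sketch of the 0-dimensional loss in Equation (eq:kdim-hole-loss):
# the combinatorial pairing (which edges realize the finite death times) is
# computed without gradients, after which the selected edge lengths are
# re-evaluated on the tensor so that gradients flow to the point coordinates.
# Death times are taken as MST edge lengths here, a simplification of the
# weak Alpha filtration values used in the actual experiments.
import torch
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def zero_dim_loss(E, i=2, j=None, mu=1.0):
    n = E.shape[0]
    j = n if j is None else j
    dist = squareform(pdist(E.detach().numpy()))       # no gradient needed here
    mst = minimum_spanning_tree(dist).tocoo()          # n-1 edges = finite death times
    lengths = torch.stack([torch.norm(E[int(a)] - E[int(b)])
                           for a, b in zip(mst.row, mst.col)])
    lengths = torch.sort(lengths, descending=True).values
    return mu * lengths[i - 2 : j - 1].sum()           # d_1 = inf is skipped, so i starts at 2

E = torch.randn(20, 2, requires_grad=True)             # toy embedding
loss = zero_dim_loss(E, i=2, j=20, mu=-1.0)            # maximize all finite death times
loss.backward()                                        # gradients on the paired points only
\end{verbatim}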
\paragraph{1-dimensional holes (i.e. cyclic topologies)}
{\def\arraystretch{2}\tabcolsep=10pt
\begin{table*}[t]
\centering
\begin{tabularx}{\textwidth}{p{0.5cm}
>{\raggedright\arraybackslash}X
>{\centering\arraybackslash}p{3.5cm}
>{\centering\arraybackslash}p{3.5cm}}
\toprule[1pt]
i,j & Description & $\mu=1$ (min) & $\mu=-1$ (max)\\
\midrule
1,1 & Persistence of the most persistent cycle
& \includegraphics[height=2.5cm, width=3.5cm, valign=t, keepaspectratio]{Images/LossFunctions/random_D1_d1_minimize}
& \includegraphics[height=2.5cm, width=3.5cm, valign=t, keepaspectratio]{Images/LossFunctions/random_D1_d1_maximize}\\
2,2 & Persistence of the second most persistent cycle
& \includegraphics[height=2.5cm, width=3.5cm, valign=t, keepaspectratio]{Images/LossFunctions/random_D1_d2_minimize}
& \includegraphics[height=2.5cm, width=3.5cm, valign=t, keepaspectratio]{Images/LossFunctions/random_D1_d2_maximize}\\
1,n & Persistence of all cycles
& \includegraphics[height=2.5cm, width=3.5cm, valign=t, keepaspectratio]{Images/LossFunctions/random_D1_d1-dinf_minimize}
& \includegraphics[height=2.5cm, width=3.5cm, valign=t, keepaspectratio]{Images/LossFunctions/random_D1_d1-dinf_maximize}\\
\bottomrule
\end{tabularx}
\caption{Optimizing topological loss functions using ${\mathcal{D}}_1$ on synthetic data for 500 epochs.}
\label{tab:topological_functions_D1}
\end{table*}
}
With 1-dimensional holes we impose cyclic topologies on the two-dimensional embedding. In contrast to the 0-dimensional topological loss, the birth times $b_l$ in Equation~(\ref{eq:kdim-hole-loss}) are nonzero and contribute to the loss. In Table~\ref{tab:topological_functions_D1} we show the result of optimizing this loss on ${\mathcal{D}}_1$ for different $i,j$ and $\mu$. First, we point out that minimizing 1-dimensional topology ($\mu = 1$) leads to highly similar embeddings for different values of $i$ and $j$. These embeddings all lack 1-dimensional topologies in their persistence diagram ${\mathcal{D}}_1$. No cycle is \textit{born} because the radius of any potential cycle is smaller than the distance between neighboring points that would be part of that cycle.
Optimizing the persistence of the most prominent hole with $i=j=1$ and $\mu = -1$ results in a circle with points evenly distributed on the boundary. Note that points outside the circle would not change the loss measuring only the largest persistence. Maximizing with $i=j=2$ leads to two equally sized circles as the optimization will alternate between the two. Optimizing the persistence of all cycles ($i=1, j=\infty$) results in one larger and several smaller circles.
With the intuition from these basic examples, we can build linear combinations of loss functions on ${\mathcal{D}}_0$ and ${\mathcal{D}}_1$, e.g., if we want the represented model to both be connected and include a circle, or if we know there are two circles that are not connected.
\subsection{Subsampling for Runtime and Representation Improvement}
\label{SUBSEC::sampling}
The examples in Tables \ref{tab:topological_functions_D0} and \ref{tab:topological_functions_D1} showed that persistent homology effectively measures the prominence of topological holes. However, it is often ineffective for representing such holes in a natural manner. To illustrate, we optimize the most persistent cycle in a larger two-dimensional dataset with $n=200$ points sampled from an isotropic Gaussian and show the result in Figure \ref{fig:loss_function_circle_nosampling}. While some points are a crucial part of the cycle and will affect the loss when (re-)moved, others lie outside the boundary and do not influence the topological loss. Moreover, their positions will not change until they become part of the circle, as each gradient update only affects the \textit{four} points contributing to the birth and death of the cycle (see Figure \ref{PikaGrad}). The representation might improve slightly when optimizing for more epochs (Figure \ref{fig:loss_function_circle_2000epochs}), but the runtime quickly becomes prohibitive.
To represent topological holes through more points, we propose to optimize the loss
\begin{equation}
\label{sampleloss}
\widetilde{\mathcal{L}}_{\mathrm{top}}({\bm{E}})\coloneqq\mathbb{E}\left[\mathcal{L}_{\mathrm{top}}\left(\left\{{\bm{x}}\in {\mathbf{S}}:{\mathbf{S}} \mbox{ is a random sample of } {\bm{E}} \mbox{ with sampling fraction } f_{\mathcal{S}}\right\}\right)\right],
\end{equation}
where $\mathcal{L}_{\mathrm{top}}$ is defined as in (\ref{eq:kdim-hole-loss}).
In practice, during each optimization iteration, $\widetilde{\mathcal{L}}_{\mathrm{top}}$ is approximated by the mean of $\mathcal{L}_{\mathrm{top}}$ evaluated over $n_{\mathcal{S}}$ random samples of the point cloud ${\bm{E}}$.
In Figure \ref{fig:loss_function_circle_sampling} we show the result of optimizing a topological sampling loss of the most persistent cycle with $f_{\mathcal{S}} = 0.2$ and $n_{\mathcal{S}} = 1$.
Evaluating the topological loss on a subset of the data leads to a more balanced distribution of points around the circular shape.
An added benefit of the sampling-based loss in Equation~(\ref{sampleloss}) is that optimization can be conducted significantly faster, as persistent homology is evaluated on smaller samples.
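To make the estimator above concrete, the following Python sketch evaluates a sampled 1-dimensional topological loss; it is an illustrative sketch only, not our implementation: it uses a Vietoris--Rips filtration via the GUDHI library (rather than the $\alpha$-filtrations discussed below), it assumes the loss sums the persistences of the $i$-th through $j$-th most persistent holes signed by $\mu$, and it only evaluates the loss (gradient computation requires a differentiable persistence backend).
\begin{verbatim}
import numpy as np
import gudhi

def top_loss_1d(points, i=1, j=1, mu=-1.0):
    # persistence of the i-th through j-th most persistent 1-dimensional
    # holes of a (Vietoris-)Rips filtration on the point cloud
    st = gudhi.RipsComplex(points=points).create_simplex_tree(max_dimension=2)
    st.compute_persistence()
    bars = st.persistence_intervals_in_dimension(1)
    if len(bars) == 0:
        return 0.0
    pers = np.sort(bars[:, 1] - bars[:, 0])[::-1]
    return mu * float(pers[i - 1:j].sum())

def sampled_top_loss(E, f_S=0.2, n_S=1, rng=None, **kwargs):
    # Monte-Carlo approximation of the expectation in the sampling loss:
    # average the loss over n_S random subsamples of size f_S * |E|
    rng = np.random.default_rng() if rng is None else rng
    m = max(int(f_S * len(E)), 4)
    values = [top_loss_1d(E[rng.choice(len(E), m, replace=False)], **kwargs)
              for _ in range(n_S)]
    return float(np.mean(values))

E = np.random.default_rng(0).normal(size=(200, 2))
print(sampled_top_loss(E, f_S=0.2, n_S=4))
\end{verbatim}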
\begin{figure*}[t]
\centering
\begin{subfigure}{.3\linewidth}
\centering
\includegraphics[width=\linewidth, height=\linewidth, keepaspectratio]{Images/LossFunctions/random_D1_d1_gaussian_noSampling.png}
\caption{Optimization for 500 epochs (17s)}
\label{fig:loss_function_circle_nosampling}
\end{subfigure}\hfill
\begin{subfigure}{.3\linewidth}
\centering
\includegraphics[width=\linewidth, height=\linewidth, keepaspectratio]{Images/LossFunctions/random_D1_d1_gaussian_2000epochs.png}
\caption{Optimization for 2k epochs (1min 7s)}
\label{fig:loss_function_circle_2000epochs}
\end{subfigure}\hfill
\begin{subfigure}{.3\linewidth}
\centering
\includegraphics[width=\linewidth, height=\linewidth, keepaspectratio]{Images/LossFunctions/random_D1_d1_gaussian_sampling.png}
\caption{Optimization with sampling loss for 500 epochs (5s) with $f_{\mathcal{S}} = 0.2$}
\label{fig:loss_function_circle_sampling}
\end{subfigure}
\caption{Topological optimization of a two-dimensional dataset with $n=200$ points on ${\mathcal{D}}_1$ with $i=j=1$ and $\mu=-1$.}
\label{fig:loss_function_circle_sampling_comparison}
\end{figure*}
\paragraph{Computational Cost of Topological Loss Functions with Sampling}
When one makes use of a sampling fraction $0<f_{\mathcal{S}}\leq 1$ along with $n_{\mathcal{S}}$ repeats, the computational complexity of the topological loss function reduces to $\mathcal{O}\left(n_{\mathcal{S}}\left((f_{\mathcal{S}}n)^{3\ceil{\frac{d}{2}}}\right)\right)$ when using weak $\alpha$-filtrations, where $d$ is the embedding dimensionality.
Nevertheless, as discussed in Section \ref{SUBSEC::compcost}, the computational complexity may often be significantly lower in practice, due to constraining the dimension of the homology for which we optimize, and because common distributions across natural shapes admit reduced size and time complexities of the Delaunay triangulations \citep{dwyer1991higher, amenta2007complexity, devillers2019poisson}.
\subsection{Optimization of Flare Structures}
\label{SUBSEC::flares}
Persistent homology is invariant to certain topological changes.
For example, both a linear `I'-structured model and a bifurcating `Y'-structured model consist of one connected component and contain no higher-dimensional holes.
These models are indistinguishable based on the (persistent) homology thereof, even though they are topologically different in terms of their singular points.
Capturing such additional topological phenomena is possible through a refinement of persistent homology known as \emph{functional persistence}, discussed and illustrated in detail by \citet{carlsson_2014}.
Instead of evaluating persistent homology on a data matrix ${\bm{E}}$, we evaluate it on a subset $\{{\bm{x}}\in{\bm{E}}:f({\bm{x}})\leq \tau\}$ for a well-chosen function $f:{\bm{E}}\rightarrow\mathbb{R}$ and hyperparameter $\tau$. Inspired by this approach, for a diagram $\mathcal{D}$ of a point cloud ${\bm{E}}$, we propose the topological loss
\begin{equation}
\label{functionalloss}
\widetilde{\mathcal{L}}_{\mathrm{top}}({\bm{E}})\coloneqq\mathcal{L}_{\mathrm{top}}\left(\{{\bm{x}}\in{\bm{E}}:f_{{\bm{E}}}({\bm{x}})\leq \tau\}\right),\mbox{ informally denoted } \left[\mathcal{L}_{\mathrm{top}}(\mathcal{D})\right]_{f_{{\bm{E}}}^{-1}]-\infty,\tau]},
\end{equation}
where $f_{{\bm{E}}}$ is a real-valued function on ${\bm{E}}$ (possibly depending on ${\bm{E}}$ itself, which changes during optimization), $\tau$ is a hyperparameter, and $\mathcal{L}_{\mathrm{top}}$ is an ordinary topological loss as defined by Equation~(\ref{eq:kdim-hole-loss}).
In the experiments we will optimize \emph{flares}, i.e., star-shaped structures, using the idea of functional persistence. In particular, we will focus on the case where $f$ equals the scaled centrality measure on ${\bm{E}}$:
\begin{equation}
\label{centrality}
f_{{\bm{E}}}\equiv\mathcal{E}_{{\bm{E}}}\vcentcolon\mspace{-1.2mu}\equiv 1-\frac{g_{{\bm{E}}}}{\max g_{{\bm{E}}}},\mbox{ where }g_{{\bm{E}}}({\bm{x}})\coloneqq \left\|{\bm{x}}-\frac{1}{|{\bm{E}}|}\sum_{{\bm{y}}\in{\bm{E}}}{\bm{y}}\right\|\,.
\end{equation}
With $\tau\geq 1$ we have $\widetilde{\mathcal{L}}_{\mathrm{top}}({\bm{E}})=\mathcal{L}_{\mathrm{top}}({\bm{E}})$.
For sufficiently small $\tau > 0$, $\widetilde{\mathcal{L}}_{\mathrm{top}}$ evaluates $\mathcal{L}_{\mathrm{top}}$ on the points `far away' from the mean in the center of ${\bm{E}}$. In Section \ref{SEC::experiments} we will combine this idea with 0-dimensional persistence to optimize the flares.
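As a small illustration of the two equations above, the following Python sketch computes the scaled centrality measure and the subset of points on which the topological loss would then be evaluated; the threshold value used here is an arbitrary choice for illustration.
\begin{verbatim}
import numpy as np

def centrality(E):
    # scaled centrality: 1 at the centroid of E, 0 at the point farthest from it
    g = np.linalg.norm(E - E.mean(axis=0), axis=1)
    return 1.0 - g / g.max()

def functional_subset(E, tau):
    # points retained for the functional-persistence loss: {x in E : f_E(x) <= tau},
    # i.e. for small tau, only points far away from the center of E
    return E[centrality(E) <= tau]

E = np.random.default_rng(1).normal(size=(300, 2))
outer = functional_subset(E, tau=0.4)   # candidate flare points
\end{verbatim}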
\subsection{Incorporating Topological Regularization in Dimensionality Reduction Methods}
\label{SUBSEC::topo_DR}
To regularize point cloud embeddings ${\bm{E}}$, we combine an embedding loss with the topological loss that represents the prior knowledge on the data.
The goal is to find an embedding that minimizes a total loss function
\begin{equation}
\label{totalloss}
\mathcal{L}_{\mathrm{tot}}({\bm{E}}, \mathbb{X})\coloneqq \mathcal{L}_{\mathrm{emb}}({\bm{E}}, \mathbb{X})+\lambda_{\mathrm{top}} \mathcal{L}_{\mathrm{top}}({\bm{E}}),
\end{equation}
where $\mathcal{L}_{\mathrm{emb}}$ aims to preserve structural attributes of the original data, and $\lambda_{\mathrm{top}}>0$ controls the strength of \emph{topological regularization}.
Note that $\mathbb{X}$ itself is not required to be a point cloud, or reside in the same space as ${\bm{E}}$.
This is especially useful for representation learning of graphs.
We can thus use topological regularization for embedding a graph $G$, to learn a representation of the nodes of $G$ in $\mathbb{R}^d$ that well respects properties of $G$. To embed the graph, we use a DeepWalk variant adapted from \citet{graph_nets}.
To regularize graph embeddings by DeepWalk or embeddings of tabular data by UMAP, we simply combine their objectives with the topological loss $\mathcal{L}_{\mathrm{top}}$ as in Equation~(\ref{totalloss}).
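The following PyTorch sketch shows the generic structure of this joint optimization. Both losses are illustrative stand-ins rather than the objectives used in our experiments: the embedding loss is a simple distance-preservation (stress) term, and the topological term penalizes the total 0-dimensional persistence, computed here from the minimum spanning tree of the embedding so that it remains differentiable.
\begin{verbatim}
import torch

def pdist(A):
    # pairwise Euclidean distances; the small epsilon keeps gradients finite
    sq = ((A.unsqueeze(0) - A.unsqueeze(1)) ** 2).sum(-1)
    return torch.sqrt(sq + 1e-12)

def embedding_loss(E, X):
    # illustrative stand-in: preserve the pairwise distances of X (MDS-style stress)
    return ((pdist(E) - pdist(X)) ** 2).mean()

def topological_loss_0d(E, mu=1.0):
    # total 0-dimensional persistence of a Rips filtration equals the sum of the
    # minimum-spanning-tree edge lengths; a simple Prim construction keeps the
    # selected distances differentiable with respect to E
    n = E.shape[0]
    D = pdist(E)
    in_tree = torch.zeros(n, dtype=torch.bool)
    in_tree[0] = True
    total = E.new_zeros(())
    for _ in range(n - 1):
        sub = D[in_tree][:, ~in_tree]
        total = total + sub.min()
        col = sub.min(dim=0).values.argmin()
        in_tree[torch.where(~in_tree)[0][col]] = True
    return mu * total

torch.manual_seed(0)
X = torch.randn(100, 10)
E = torch.randn(100, 2, requires_grad=True)
optimizer = torch.optim.Adam([E], lr=1e-2)
for epoch in range(200):
    optimizer.zero_grad()
    loss = embedding_loss(E, X) + 0.5 * topological_loss_0d(E)   # lambda_top = 0.5
    loss.backward()
    optimizer.step()
\end{verbatim}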
In addition to DeepWalk and UMAP, we also present experiments on \emph{regularized PCA projections}.
Topologically regularizing a PCA projection is more involved, as using $$\mathcal{L}_{\mathrm{emb}}({\bm{W}}, {\bm{X}})\coloneqq \mathrm{MSE}\left({\bm{X}}{\bm{W}}\mW^T, {\bm{X}}\right)$$
does not ensure orthonormality of the matrix ${\bm{W}}$. To address this we use Pymanopt \citep{pymanopt} to optimize over the Stiefel manifold of orthonormal matrices. The line-search algorithms to compute the optimal step size for every iteration of the optimization, however, do not work with our topological sampling loss. We therefore removed the backtracking part from the line-search.
Note that in the experiments of the conference version of this paper \citep{vandaele2021topologically} we used an orthogonality loss $\|{\bm{W}}^T{\bm{W}} - {\bm{I}}\|_F$ on the weights ${\bm{W}} \in \mathbb{R}^{n \times 2}$, which had unwanted effects on the optimization.\footnote{In the experiments on the synthetic cycle, it was the orthogonality loss, rather than the regularization with the topological loss, that shifted the weights of ${\bm{W}}$ toward the first two data dimensions. We suspect this was a numerical artifact of the orthogonality loss combined with a too high learning rate.}
| {
"timestamp": "2023-01-10T02:22:12",
"yymm": "2301",
"arxiv_id": "2301.03338",
"language": "en",
"url": "https://arxiv.org/abs/2301.03338",
"abstract": "Unsupervised representation learning methods are widely used for gaining insight into high-dimensional, unstructured, or structured data. In some cases, users may have prior topological knowledge about the data, such as a known cluster structure or the fact that the data is known to lie along a tree- or graph-structured topology. However, generic methods to ensure such structure is salient in the low-dimensional representations are lacking. This negatively impacts the interpretability of low-dimensional embeddings, and plausibly downstream learning tasks. To address this issue, we introduce topological regularization: a generic approach based on algebraic topology to incorporate topological prior knowledge into low-dimensional embeddings. We introduce a class of topological loss functions, and show that jointly optimizing an embedding loss with such a topological loss function as a regularizer yields embeddings that reflect not only local proximities but also the desired topological structure. We include a self-contained overview of the required foundational concepts in algebraic topology, and provide intuitive guidance on how to design topological loss functions for a variety of shapes, such as clusters, cycles, and bifurcations. We empirically evaluate the proposed approach on computational efficiency, robustness, and versatility in combination with linear and non-linear dimensionality reduction and graph embedding methods.",
"subjects": "Machine Learning (cs.LG)",
"title": "Topologically Regularized Data Embeddings",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9678992932829918,
"lm_q2_score": 0.7310585786300049,
"lm_q1q2_score": 0.7075910816044503
} |
https://arxiv.org/abs/0712.1837 | Approximated profiles for discrete solitons in DNLS lattices | We study four different approximations for finding the profile of discrete solitons in the one-dimensional Discrete Nonlinear Schrödinger (DNLS) Equation. Three of them are discrete approximations (namely, a variational approach, an approximation to homoclinic orbits and a Green-function approach), and the other one is a quasi-continuum approximation. All the results are compared with numerical computations. | \section{Introduction}
Since the 1960's, a large number of works have focused on the
properties of solitons in the Nonlinear Schr\"odinger (NLS)
Equation \cite{Sulem}. As is well known, the one-dimensional NLS
equation is integrable. Two of the most important discretizations of
this equation admit discrete solitons. One of these discretizations
is known as the Ablowitz-Ladik equation \cite{AL}, which is also
integrable. On the contrary, the other important discretization,
known as the Discrete Nonlinear Schr\"{o}dinger (DNLS) equation, is
not integrable, and discrete soliton solutions must be calculated
numerically. The DNLS equation has many interesting mathematical
properties and physical applications \cite{Panos}. The DNLS equation
models, among others, an array of nonlinear-optical waveguides
\cite{Demetri}, that was originally implemented in an experiment as
a set of parallel ribs made of a semiconductor material (AlGaAs) and
mounted on a common substrate \cite{Silberberg}. It was predicted
\cite{BEC} that the DNLS equation may also serve as a model for
Bose-Einstein condensates (BECs) trapped in a strong optical
lattice, which was confirmed by experiments \cite{BECexperiment}.
In addition to the direct physical realizations in terms of
nonlinear optics and BECs, the DNLS equation appears as an envelope
equation for a large class of nonlinear lattices (for references,
see \cite{Aubry}, Section 2.4). Accordingly, the
solitons known in the DNLS equation represent intrinsic localized
modes investigated in such chains experimentally \cite{Sievers}
and theoretically \cite{MacKay,PhysToday}. In this context,
previous formal derivations of the
DNLS equation have been mathematically justified for small amplitude
time-periodic solutions in references \cite{James}.
In this paper we will consider fundamental solitons, which are of
two types: Sievers-Takeno (ST) modes, which are site-centered
\cite{ST}, and Page (P) modes, which are bond-centered \cite{Page}
(see also Fig. \ref{fig:profiles}). They can also be seen,
respectively, as discrete solitons with a single excited site, or
two adjacent excited sites with the same amplitude. The DNLS equation
is given by
\begin{equation}
i\dot{u}_{n}+\varepsilon \left(
u_{n+1}+u_{n-1}-2u_{n}\right)+\gamma|u_{n}|^{2}u_{n}=0, \label{beq1}
\end{equation}
where $u_{n}(t)$ are the lattice dynamical variables, the overdot
stands for the time derivative, $\varepsilon>0$ is the lattice coupling
constant and $\gamma$ a nonlinear parameter.
We look for solutions of frequency $\Lambda$ having the form
$u_{n}(t)=e^{i\Lambda t}v_{n}$.
Their envelope $v_n$ satisfies
\begin{equation}
-\Lambda v_{n}+\varepsilon \left(
v_{n+1}+v_{n-1}-2v_{n}\right)+\gamma|v_{n}|^{2}v_{n}=0. \label{beq2}
\end{equation}
Throughout this paper, we assume $\gamma\varepsilon>0$ and choose
$\gamma=\varepsilon=1$ without loss of generality, as Eq.
(\ref{beq2}) can be rescaled. We also look for unstaggered
solutions, for which, $\Lambda>0$ (staggered solutions with
$\Lambda<0$ can be mapped to the former upon a suitable staggering
transformation $\tilde{v}_n=(-1)^n v_n$). Furthermore, we restrict
to real solutions of (\ref{beq2}), which yield (up to multiplication
by $e^{i\theta}$) all the homoclinic solutions of (\ref{beq2})
\cite{QX07}. Homoclinic solutions of (\ref{beq2}) can be found
numerically using methods based on the anti-continuous limit
\cite{MacKay} and have been studied in detail (first of all, in
one-dimensional models, but many results have been also obtained for
two- and three-dimensional DNLS lattices) \cite{Panos}.
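For reference, the following short Python (NumPy/SciPy) script sketches this standard numerical procedure for the ST-mode with $\gamma=1$: a Newton-type solver applied to Eq. (\ref{beq2}), seeded at the anti-continuous limit and continued in the coupling up to $\varepsilon=1$. The lattice size and the number of continuation steps are illustrative choices, not necessarily those used for the figures below.
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

def stationary_eq(v, lam, eps):
    # F_n(v) = -lam*v_n + eps*(v_{n+1} + v_{n-1} - 2 v_n) + v_n^3, zero boundary values
    lap = np.roll(v, -1) + np.roll(v, 1) - 2.0 * v
    lap[0] = v[1] - 2.0 * v[0]      # remove the wrap-around terms at the edges
    lap[-1] = v[-2] - 2.0 * v[-1]
    return -lam * v + eps * lap + v ** 3

def st_mode(lam, N=41, steps=20):
    v = np.zeros(N)
    v[N // 2] = np.sqrt(lam)        # anti-continuous limit: a single excited site
    for eps in np.linspace(0.0, 1.0, steps + 1)[1:]:
        v = fsolve(stationary_eq, v, args=(lam, eps))
    return v

v = st_mode(lam=1.0)
print(v[v.argmax():v.argmax() + 2])  # v_0 and v_1 of the ST-mode
\end{verbatim}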
The aim of this paper is to compare four different analytical
approximations of the profiles of ST- and P-modes together with the
exact numerical solutions. These analytical approximations are of
four types: one of variational kind, another one based on a
polynomial approximation of stable and unstable manifolds for the DNLS map,
another one based on a Green-function method, and, finally, a
quasi-continuum approach.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=.4\textwidth]{STprof.eps} &
\includegraphics[width=.4\textwidth]{Pageprof.eps}
\end{tabular}%
\end{center}
\caption{Discrete soliton profiles with
$\Lambda=\varepsilon=\gamma=1$. Left panel corresponds to a ST-mode,
and right panel, to a P-mode.} \label{fig:profiles}
\end{figure}
\section{Discrete approximations}
\subsection{The variational approximation}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=.4\textwidth]{varst0.eps} &
\includegraphics[width=.4\textwidth]{varst1.eps}
\end{tabular}%
\end{center}
\caption{Dependence, for ST-modes, of $v_0$ (left panel) and $v_1$
(right panel) with respect to $\Lambda$. Full lines correspond to
the exact numerical solution and dashed lines to the
variational approximation.} \label{fig:profvar1}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=.4\textwidth]{varpage0.eps} &
\includegraphics[width=.4\textwidth]{varpage1.eps}
\end{tabular}%
\end{center}
\caption{Dependence, for P-modes, of $v_0$ (left panel) and $v_1$
(right panel) with respect to $\Lambda$. Full lines correspond to
the exact numerical solution while dashed lines correspond to the
variational approximation.} \label{fig:profvar2}
\end{figure}
Equation (\ref{beq2}) can be derived as the Euler-Lagrange equation for the
Lagrangian
\begin{equation}
L_{\mathrm{eff}}=\sum_{n=-\infty }^{+\infty }\left[
(v_{n+1}+v_{n-1})v_{n}-(\Lambda +2)v_{n}^{2}+\frac{1}{2}%
v_{n}^4\right] . \label{beq4}
\end{equation}
The variational approximation (VA) for fundamental discrete solitons, elaborated in Ref. \cite{Weinstein} (see also Ref. \cite{Ricardo}), was based on the simple exponential ansatz,
\begin{equation}
v^{ST}_{n}=A_1e^{-a_1|n|}, \qquad v^{P}_{n}=A_2e^{-a_2|n+1/2|},
\label{beq3}
\end{equation}
where $v^{ST}_{n}$ denotes ST-modes, while $v^{P}_{n}$ is for P-modes,
with variational parameters $A_1$, $A_2$, $a_1$ and $a_2$ (which
determine the amplitude and inverse size of the soliton).
Then, substituting the ansatz in the Lagrangian, one can perform the
summation explicitly, which yields the \textit{effective
Lagrangian},
\begin{equation}
L^{ST}_{\mathrm{eff}}=\mathcal{N}_1(2\mathrm{sech}\
a_1-\Lambda-2)+\frac{\mathcal{N}_1^2\tanh^2 a_1}{2\tanh 2a_1}, \quad
L^{P}_{\mathrm{eff}}=\mathcal{N}_2\left(\frac{2(1-\cosh a_2)}{\sinh
a_2+\cosh a_2}-\Lambda\right)+\frac{\mathcal{N}_2^2}{4}\tanh
a_2\label{beq5}
\end{equation}
The norm of the ansatz (\ref{beq3}), which appears in Eq.
(\ref{beq5}), is given by $\mathcal{N}\equiv \sum_{n=-\infty
}^{+\infty }v_{n}^{2}$. In particular, for the ST- and P-modes,
\begin{equation}
\mathcal{N}_1=A_1^{2}\coth a_1, \qquad \mathcal{N}_2=A_2^2/\sinh a_2 .
\label{beq7}
\end{equation}
The Lagrangian (\ref{beq5}) gives rise to the variational equations,
$\partial L^{ST}_{\mathrm{eff}}/\partial \mathcal{N}_1=\partial
L^{ST}_{\mathrm{eff}}/\partial a_1=0$, and $\partial
L^{P}_{\mathrm{eff}}/\partial \mathcal{N}_2=\partial
L^{P}_{\mathrm{eff}}/\partial a_2=0$, which constitute the basis of
the VA \cite{Progress}. These predict relations between the norm,
frequency, and width of the discrete solitons within the framework
of the VA, namely
\begin{gather} \label{beq9}
\mathcal{N}_1=\frac{4\cosh a_1\sinh^22a_1}{\sinh4a_1-\sinh2a_1},
\quad \mathcal{N}_2=\frac{8(1-\cosh a_2+\sinh a_2)\cosh^2 a_2}
{\sinh a_2+\cosh a_2}\\ \Lambda=2(\mathrm{sech}\
a_1-1)+\mathcal{N}_1\frac{\tanh^2 a_1}{\tanh2 a_1}, \quad
\Lambda=\frac{2(1-\cosh a_2)}{\sinh a_2+\cosh a_2}+
\frac{1}{2}\mathcal{N}_2\tanh a_2 .
\end{gather}
These analytical predictions, implicitly relating $\mathcal{N}$ and
$\Lambda$ through their parametric dependence on the inverse width
parameter $a$, will be compared with numerical findings below. In
Figs. \ref{fig:profvar1} and \ref{fig:profvar2}, we compare the
approximate and exact values of the highest amplitude site and the
second-highest amplitude sites (i.e. $v_0$ and $v_1$, which can be
easily calculated from (\ref{beq9}) once $\mathcal{N}$ and $a$ are
known) with respect to $\Lambda$ for both ST- and P-modes.
We can observe that the variational approach captures the exact
asymptotic behavior as $\Lambda\rightarrow+\infty$. Indeed as $a_1
\rightarrow+\infty$ in approximation (\ref{beq3}) one obtains
$\Lambda\sim \mathcal{N}_1\sim e^{a_1}$ and $A_1\sim
\sqrt{\mathcal{N}_1}\sim \sqrt{\Lambda}$. Thus $v^{ST}_0 \sim
\sqrt{\Lambda}$ as $\Lambda\rightarrow+\infty$ which is indeed the
asymptotic behavior of the exact ST-mode. On the contrary, the
variational approximation errs by a small multiplicative factor
($\frac{2}{\sqrt{3}}\sim 1.1$) as $\Lambda\rightarrow 0$ (i.e.,
effectively approaching the continuum limit). This can be seen
taking the limit $a_1 \rightarrow 0$ in approximation (\ref{beq3}).
One has $\mathcal{N}_1\sim 8 a_1$, $\Lambda \sim
-a_1^2+\frac{a_1}{2}\mathcal{N}_1\sim 3a_1^2$ and $A_1\sim
2\sqrt{2}a_1\sim \frac{2}{\sqrt{3}}\sqrt{2\Lambda}$, while the
amplitude of the continuum hyperbolic secant soliton of the
integrable NLS is $A=\sqrt{2 \Lambda}$ [see also below]. Notice that
the P-mode also has the same $\Lambda \rightarrow 0$ limit (and
therefore errs by the same factor).
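For completeness, the VA predictions for the ST-mode can be evaluated as follows (Python/NumPy/SciPy): given $\Lambda$, solve the implicit relation $\Lambda(a_1)$ of Eq. (\ref{beq9}) for the inverse width $a_1$, then recover $\mathcal{N}_1$, the amplitude $A_1=\sqrt{\mathcal{N}_1\tanh a_1}$ and the values $v_0^{ST}=A_1$, $v_1^{ST}=A_1 e^{-a_1}$ shown in Fig. \ref{fig:profvar1}. The bracketing interval used for the root search is an illustrative choice.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def N1(a):
    return 4.0 * np.cosh(a) * np.sinh(2 * a) ** 2 / (np.sinh(4 * a) - np.sinh(2 * a))

def Lambda_of_a(a):
    return 2.0 * (1.0 / np.cosh(a) - 1.0) + N1(a) * np.tanh(a) ** 2 / np.tanh(2 * a)

def va_st_profile(lam):
    a = brentq(lambda s: Lambda_of_a(s) - lam, 1e-6, 20.0)
    A = np.sqrt(N1(a) * np.tanh(a))
    return A, A * np.exp(-a)         # (v_0, v_1) of the variational ST-mode

print(va_st_profile(1.0))
\end{verbatim}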
\subsection{The homoclinic orbit approximation}
\subsubsection{The DNLS map}
The difference equation (\ref{beq2}) can be recast as a
two-dimensional real map by defining $y_n=v_n$ and $x_n=v_{n-1}$
\cite{Dirk,Bountis,Alfimov,Ricardo,QX07}:
\begin{equation}\label{map}
\left\{\begin{array}{l}
x_{n+1}=y_n \\
y_{n+1}=-y_n^3+(\Lambda+2)y_n-x_n .\\
\end{array}\right.
\end{equation}
For $\Lambda>0$, the origin $x_n=y_n=0$ is hyperbolic and a saddle
point, which is checked upon linearization of the map around this
point. Consequently, there exist a one-dimensional stable and a one-dimensional unstable manifold emanating from the origin in the two directions given by
$y=\lambda_{\pm}x$, with
\begin{equation}\label{eigen}
\lambda_{\pm}=\frac{(2+\Lambda)\pm\sqrt{\Lambda(\Lambda+4)}}{2} .
\end{equation}
The eigenvalues $\lambda_{\pm}$ satisfy
$\lambda^2-(\Lambda+2)\lambda+1=0$ and $\lambda_+=\lambda_-^{-1}>1$.
The stable and unstable manifolds are invariant under inversion, as is the map (\ref{map}) itself. Moreover, they are exchanged by the symmetry $(x,y)\mapsto(y,x)$ (this is due to the fact
that the map (\ref{map}) is reversible; see e.g. \cite{QX07} for
more details). Due to the non-integrability of the DNLS equation,
these manifolds intersect in general transversally, yielding the
existence of an infinity of homoclinic orbits (see Figs.
\ref{fig:tangle1} and \ref{fig:tangle2}). Each of their
intersections corresponds to a localized solution, which can be a
fundamental soliton or a multi-peaked one. Fundamental solitons, the
solutions we are interested in, correspond to the primary
intersections points, i.e. those emanating from the first homoclinic
windings. Each intersection point defines an initial condition
$(x_0,y_0)$, that is, $(v_{-1},v_0)$, and the rest of the points
composing the soliton are determined by application of the map.
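For illustration, homoclinic tangles such as those of Figs. \ref{fig:tangle1} and \ref{fig:tangle2} can be reproduced numerically by seeding a small fundamental domain of the local unstable manifold along the eigendirection $y=\lambda_+x$ and pushing it forward with the map (\ref{map}); the following Python (NumPy) sketch does so (the seed size and number of iterations are illustrative choices).
\begin{verbatim}
import numpy as np

def dnls_map(x, y, lam):
    # one step of the map: (x_{n+1}, y_{n+1}) = (y_n, -y_n^3 + (lam+2) y_n - x_n)
    return y, -y ** 3 + (lam + 2.0) * y - x

def unstable_manifold(lam, n_seed=2000, n_iter=12, delta=1e-3):
    lam_plus = ((2.0 + lam) + np.sqrt(lam * (lam + 4.0))) / 2.0
    # seed a fundamental domain [delta, lam_plus*delta] along y = lam_plus*x (both signs)
    s = np.linspace(delta, lam_plus * delta, n_seed)
    x = np.concatenate([s, -s])
    y = lam_plus * x
    branches = [np.column_stack([x, y])]
    for _ in range(n_iter):
        x, y = dnls_map(x, y, lam)
        branches.append(np.column_stack([x, y]))
    return np.vstack(branches)

W_u = unstable_manifold(lam=1.0)   # points approximating the unstable manifold
\end{verbatim}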
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=.4\textwidth]{tangle1.eps} &
\includegraphics[width=.4\textwidth]{tangle2.eps} \\
\end{tabular}%
\end{center}
\caption{Homoclinic tangles for $\Lambda=0.4$ and $\Lambda=0.6$.}\label{fig:tangle1}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=.4\textwidth]{tangle3.eps} &
\includegraphics[width=.4\textwidth]{tangle4.eps}
\end{tabular}%
\end{center}
\caption{Homoclinic tangles for $\Lambda=1$ and $\Lambda=3$.}
\label{fig:tangle2}
\end{figure}
\subsubsection{The polynomial approximation to the unstable manifold}
The first windings of the stable and unstable manifolds can be
approximated by third-order polynomials. In fact, only one of them needs to be determined explicitly, since the other follows from the symmetry $x\leftrightarrow y$. We therefore proceed to
approximate the local unstable manifold $W^u_{\mathrm{loc}}(0)$.
Taking into account its invariance under inversion, it can be
locally written as a graph $y=f(x)=\lambda x-\alpha x^3+O(|x|^5)$
with $\lambda\equiv\lambda_+$ given by (\ref{eigen}). For
$x\approx0$, the image of $(x,f(x))$ under the map (\ref{map}) also
belongs to $W^u_{\mathrm{loc}}(0)$, thus $
-f(x)^3+(\Lambda+2)f(x)-x=f(f(x))\ \forall x\approx0$. This yields
$[\lambda^3+\alpha(\Lambda+2-\lambda-\lambda^3)]x^3+O(|x|^5)=0,\
\forall x\approx0$. Hence
$\alpha=-\lambda^3/(\Lambda+2-\lambda-\lambda^3)=\lambda^4/(\lambda^4-1)$.
The local unstable manifold is approximated at order 3 by
\begin{equation}\label{unstable}
W^u: y=\lambda x-\frac{\lambda^4}{\lambda^4-1} x^3,
\end{equation}
and, by symmetry, the stable manifold is approximated by:
\begin{equation}\label{stable}
W^s: x=\lambda y-\frac{\lambda^4}{\lambda^4-1} y^3.
\end{equation}
In Fig. \ref{fig:tangleappr}, the numerical and approximated
unstable manifolds for $\Lambda=1$ and $\Lambda=3$ are compared. It
can be observed that the fit is better when $\Lambda$ increases. The
approximation breaks down for small $\Lambda$ because the origin is
not a hyperbolic fixed point for $\Lambda=0$.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=.4\textwidth]{tangleappr1.eps} &
\includegraphics[width=.4\textwidth]{tangleappr2.eps}
\end{tabular}%
\end{center}
\caption{Numerical exact unstable manifold
(full line) and its approximation by Eq. (\ref{unstable}) (dashed line)
for $\Lambda=1$ (left panel) and $\Lambda=3$ (right panel). The fit is so accurate in the
latter that both curves are superimposed.} \label{fig:tangleappr}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=.4\textwidth]{intersections.eps} &
\includegraphics[width=.4\textwidth]{pitchfork.eps}
\end{tabular}%
\end{center}
\caption{(Left panel) Approximated stable and unstable manifolds for
$\Lambda=2$ showing the main intersections. (Right panel) Pitchfork
bifurcation arising in the homoclinic approximation when $\Lambda$
is varied. ST-modes (full lines) bifurcate with the P-mode (dashed
line) at $\Lambda=0.5$.} \label{fig:intersections}
\end{figure}
\subsubsection{Approximate solutions via approximate invariant
manifolds}
Once an analytical form of the unstable and stable manifold is
found, discrete soliton profiles (or, concretely, $v_0$ and $v_{-1}$) can be determined from the intersections of both manifolds. The polynomial form of (\ref{unstable}) is not sufficient in practice to obtain good approximations of the whole soliton profile, due to the sensitivity to initial conditions. However, it provides a
good approximation near the soliton center. Some intersections of
$W^s$ and $W^u$ can be approximated by:
\begin{equation}\label{bigeq}
x=\lambda\left(\lambda x-\frac{\lambda^4}{\lambda^4-1} x^3\right)-
\frac{\lambda^4}{\lambda^4-1}\left(\lambda x-\frac{\lambda^4}{\lambda^4-1} x^3\right)^3.
\end{equation}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=.4\textwidth]{homost0.eps} &
\includegraphics[width=.4\textwidth]{homost1.eps}
\end{tabular}%
\end{center}
\caption{Same as Fig. \ref{fig:profvar1} but with dashed lines
corresponding to approximation (\ref{xST}).}
\label{fig:profhomo1}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=.4\textwidth]{homopage0.eps} &
\includegraphics[width=.4\textwidth]{homopage1.eps}
\end{tabular}%
\end{center}
\caption{Same as Fig. \ref{fig:profvar2} but with dashed lines
corresponding to approximation (\ref{xPb}).}
\label{fig:profhomo2}
\end{figure}
This equation has nine solutions (see Fig.
\ref{fig:intersections}a). One of them, $x=0$, corresponds to the origin. Once this solution is eliminated, the remaining equation is a bi-quartic one (i.e. quartic in $x^2$). Thus, if $x=\xi$ is a solution of (\ref{bigeq}),
$x=-\xi$ is also a solution: this is due to the fact that $\pm v_n$
is a solution of (\ref{beq2}). Solutions $x=\xi_1$, $x=\xi_2$,
$x=\xi_0$ and $x=\xi_3$ in Fig. \ref{fig:intersections} correspond
to the positive solutions of (\ref{bigeq}). The point $x=\xi_0$ is
in the bisectrix of the first quadrant and corresponds to the P-mode
(i.e. $v_0^P=\xi_0$), and the point $x=\xi_3$ lies in the bisectrix
of the fourth quadrant and corresponds to a twisted mode (i.e. a
discrete soliton with two adjacent excited sites with the same
amplitude and opposite sign). Setting $y(\xi_0)=\xi_0$ and
$y(\xi_3)=-\xi_3$ in (\ref{unstable}), one obtains
$\xi_0=\lambda^{-2}\sqrt{(\lambda-1)(\lambda^4-1)}$,
$\xi_3=\lambda^{-2}\sqrt{(\lambda+1)(\lambda^4-1)}$.
Upon elimination of the roots $x=\xi_0$ and $x=\xi_3$ from
(\ref{bigeq}), $\xi_1$ and $\xi_2$ can be calculated as solutions of
a quadratic equation. Thus,
\begin{equation}\label{xST}
\xi_1=\lambda^{-2}\sqrt{(\lambda^4-1)(\lambda-\sqrt{\lambda^2-4})/2}, \quad
\xi_2=\lambda^{-2}\sqrt{(\lambda^4-1)(\lambda+\sqrt{\lambda^2-4})/2}.
\end{equation}
These solutions are related to the ST-mode as $v_0^{ST}=\xi_2$ and
$v_1^{ST}=\xi_1$. On the other hand, for the P-mode,
$v_0^{P}=\xi_0$, and, $v_1^{P}$ should be determined by application
of the map (\ref{map}). This yields
\begin{equation}\label{xPb}
v_0^{P}=\lambda^{-2}\sqrt{(\lambda-1)(\lambda^4-1)},\quad
v_1^P=\lambda^{-6}(\lambda^3+\lambda-1)\sqrt{(\lambda-1)(\lambda^4-1)}.
\end{equation}
In Figs. \ref{fig:profhomo1} and \ref{fig:profhomo2}, the values of
$v_0$ and $v_1$ obtained through the homoclinic approximation are
represented versus $\Lambda$ and compared with the exact numerical
results. It can be observed that, for ST-modes, no approximate
solutions exist for $\Lambda<0.5$. For $\Lambda=1/2$ (i.e.
$\lambda=2$), the points $(\xi_1,\xi_2)$ and $(\xi_2,\xi_1)$
disappear via a pitchfork bifurcation at $(\xi_0,\xi_0)$ (see Fig.
\ref{fig:intersections}b). This artifact is a by-product of
the decreasing accuracy of our approximations as $\Lambda \rightarrow 0$;
as discussed before, the ST-mode should exist for all values of $\Lambda > 0$.
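The dashed curves of Figs. \ref{fig:profhomo1} and \ref{fig:profhomo2} follow directly from the closed-form expressions (\ref{xST}) and (\ref{xPb}); for instance, in Python (NumPy):
\begin{verbatim}
import numpy as np

def lam_plus(Lam):
    return ((2.0 + Lam) + np.sqrt(Lam * (Lam + 4.0))) / 2.0

def st_homoclinic(Lam):
    # Eq. (xST); meaningful only for lambda >= 2, i.e. Lambda >= 1/2
    l = lam_plus(Lam)
    root = np.sqrt(l ** 2 - 4.0)
    xi1 = np.sqrt((l ** 4 - 1.0) * (l - root) / 2.0) / l ** 2
    xi2 = np.sqrt((l ** 4 - 1.0) * (l + root) / 2.0) / l ** 2
    return xi2, xi1                 # (v_0^{ST}, v_1^{ST})

def p_homoclinic(Lam):
    # Eq. (xPb)
    l = lam_plus(Lam)
    v0 = np.sqrt((l - 1.0) * (l ** 4 - 1.0)) / l ** 2
    v1 = (l ** 3 + l - 1.0) * np.sqrt((l - 1.0) * (l ** 4 - 1.0)) / l ** 6
    return v0, v1

print(st_homoclinic(1.0), p_homoclinic(1.0))
\end{verbatim}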
\subsection{The Sievers--Takeno approximation}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=.4\textwidth]{Takenost0.eps} &
\includegraphics[width=.4\textwidth]{Takenost1.eps}
\end{tabular}%
\end{center}
\caption{Same as Fig. \ref{fig:profvar1} but with dashed lines
corresponding to approximation (\ref{appv}).}
\label{fig:Takeno}
\end{figure}
A method to approximate solutions of (\ref{beq2}) has been
introduced by Sievers and Takeno, for a recurrence relation similar
to it but with slightly different nonlinear terms \cite{ST}. This
approach has been generalized to the $d$-dimensional DNLS equation
in reference \cite{Tak89}. In what follows we briefly describe the
method, incorporating some clarifications and simplifications. Setting
$v_n = v_0 \eta_n$, equation (\ref{beq2}) becomes
\begin{equation}
\label{rec2} \eta_{n+1}-2 \eta_n +\eta_{n-1}=\Lambda \eta_n -v_0^2
\eta_n^3 ,
\end{equation}
with $\eta_{-n}=\eta_n$, $\eta_0 =1$. Setting $n=0$ in (\ref{rec2})
we obtain in particular
\begin{equation}
\label{site1} v_0^2 = \Lambda + 2(1-\eta_1 ).
\end{equation}
Equation (\ref{rec2}) can be rewritten as a suitable nonlocal
equation using a lattice Green function in conjunction with the
reflectional symmetry of $\eta_n$ and equation (\ref{site1}). This
yields for all $n\geq 1$
\begin{equation}
\label{nonloc} \eta_n = [ \, \Lambda + 2(1-\eta_1)\, ]
\frac{\lambda^{-n}}{\lambda - \lambda^{-1}} +\sum_{k\geq
1}{\eta_k^3\, (\lambda^{-|n-k|}+\lambda^{-n-k})},
\end{equation}
where $\lambda\equiv\lambda_+$ is given by (\ref{eigen}). Problem
(\ref{nonloc}) can be seen as a fixed point equation $\{ \eta \} =
F_{\Lambda}(\{ \eta \})$ in $\ell_\infty (\mathbb{N}^\ast)$. Denoting by
$B_\epsilon$ the ball $\| \{ \eta \} \|_{\ell_\infty
(\mathbb{N}^\ast)} \leq \epsilon$, the map $F_{\Lambda}$ is a
contraction on $B_\epsilon$ provided $\epsilon$ is sufficiently
small and $\Lambda$ is greater than some constant $\Lambda_0
(\epsilon )$. In that case, the solution of (\ref{nonloc}) is unique
in $B_\epsilon$ by virtue of the contraction mapping theorem and it
can be computed iteratively. Choosing $\{ \eta \}=0$ as an initial
condition, we obtain the approximate solution
\begin{equation}
\label{appet} \eta_n \approx (\, F_{\Lambda}(0)\, )_n =
\frac{\Lambda + 2}{\lambda - \lambda^{-1}}\, \lambda^{-n} , \ \ \
n\geq 1.
\end{equation}
Obviously, the quality of the approximation would improve with further iterations of $F_{\Lambda}$.
Using (\ref{appet}) and (\ref{site1})
in the limit when $\Lambda$ is large, we obtain
\begin{equation}
\label{appv} v_n \approx (\Lambda + 2)^{1/2}\, \lambda^{-|n|}
\end{equation}
since $\lambda \sim \Lambda$ as $\Lambda \rightarrow +\infty$. The
values of $v_0$ and $v_1$ in this approximation are compared with
the exact numerical results in Fig. \ref{fig:Takeno}. We observe
that the approximation captures the asymptotic behaviour of $v_0$
and $v_1$ for $\Lambda\rightarrow\infty$.
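In practice the estimate (\ref{appv}) is a one-line formula; for instance, in Python (NumPy):
\begin{verbatim}
import numpy as np

def takeno_profile(Lam, n):
    # v_n ~ (Lambda + 2)^{1/2} * lambda_+^{-|n|}
    lam_plus = ((2.0 + Lam) + np.sqrt(Lam * (Lam + 4.0))) / 2.0
    return np.sqrt(Lam + 2.0) * lam_plus ** (-np.abs(n))

print(takeno_profile(3.0, np.array([0, 1, 2])))
\end{verbatim}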
\section{The quasi-continuum approximation}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=.3\textwidth]{contst0.eps} &
\includegraphics[width=.3\textwidth]{contst1.eps}
\end{tabular}%
\end{center}
\caption{Same as Fig. \ref{fig:profvar1} but with dashed lines
corresponding to approximation (\ref{ceq1}).}
\label{fig:profcont1}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=.3\textwidth]{contpage0.eps} &
\includegraphics[width=.3\textwidth]{contpage1.eps}
\end{tabular}%
\end{center}
\caption{Same as Fig. \ref{fig:profvar2} but with dashed lines
corresponding to approximation (\ref{ceq1}).}
\label{fig:profcont2}
\end{figure}
As can be concluded from the previous sections, none of the approximations established so far performs well for $\Lambda$ close to zero (although the VA is notably more accurate than the invariant-manifold and Sievers--Takeno approximations). A quasi-continuum
approximation could be used to fill this gap. To this end, we follow
Eqs. (13) and (14) of Ref. \cite{PRB}. Then the ST- and P-modes can
be approximated by the continuum soliton based expressions:
\begin{equation}
v^{ST}_{n}=\sqrt{2\Lambda}\mathrm{sech}\ (n\sqrt{\Lambda}), \qquad
v^{P}_{n}=\sqrt{2\Lambda}\mathrm{sech}\ [(|n+1/2|-1/2)\sqrt{\Lambda}].
\label{ceq1}
\end{equation}
These expressions lead to the results shown in Figs.
\ref{fig:profcont1} and \ref{fig:profcont2}. Naturally, this
approach captures the asymptotic limit $v_0\sim\sqrt{2\Lambda}$ when
$\Lambda\rightarrow0$, but fails increasingly as $\Lambda$ grows.
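These expressions are equally direct to evaluate; for completeness, in Python (NumPy):
\begin{verbatim}
import numpy as np

def continuum_st(Lam, n):
    # quasi-continuum ST-mode of Eq. (ceq1)
    return np.sqrt(2.0 * Lam) / np.cosh(n * np.sqrt(Lam))

def continuum_p(Lam, n):
    # quasi-continuum P-mode of Eq. (ceq1)
    return np.sqrt(2.0 * Lam) / np.cosh((np.abs(n + 0.5) - 0.5) * np.sqrt(Lam))

print(continuum_st(0.2, np.arange(-3, 4)))
\end{verbatim}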
\section{Summary and conclusions}
In Figs. \ref{fig:compare1} and \ref{fig:compare2} the results of
the paper are summarized. To this end, a variable, giving the
relative error at site $n$, is defined as:
\begin{equation}\label{R}
R_n=\log_{10}\left|(v_n^{\mathrm{approx}}-v_n^{\mathrm{exact}})
/v_n^{\mathrm{exact}}\right|.
\end{equation}
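In code, this diagnostic is a one-liner, with the approximate and exact profiles obtained, e.g., from the sketches in the previous sections (the numbers below are illustrative placeholders only):
\begin{verbatim}
import numpy as np

def relative_error(v_approx, v_exact):
    # R_n = log10 of the relative error at each site
    v_approx, v_exact = np.asarray(v_approx), np.asarray(v_exact)
    return np.log10(np.abs((v_approx - v_exact) / v_exact))

print(relative_error([1.45, 0.46], [1.44, 0.47]))
\end{verbatim}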
We can generally conclude that the variational approximation offers the most accurate representation of the amplitude of the Page mode $v_n^P$ at the two sites $n=0$ and $n=1$, with some small
exceptions. These involve some particular intervals of $\Lambda$
where the homoclinic approximation may be better and also the
interval sufficiently close to the continuum limit, where the best
approximation is given by the discretization of the continuum
solution. Similar features are observed for the approximation of
the Sievers--Takeno mode $v_n^{ST}$ at site $n=0$. However, a
different scenario occurs for this mode at site $n=1$, since the
homoclinic approximation gives the best result for $\Lambda > 1.5$.
As $\Lambda$ goes to $0$, the Sievers-Takeno, variational and
quasi-continuum approximations give successively the best results in
small windows of the parameter $\Lambda$. Notice that in the
interval $\Lambda\in(0,0.5]$ neither the variational nor the homoclinic approximation is entirely satisfactory. The latter
suffers, among other things, the serious problem of producing a
spurious bifurcation of two ST modes with a P-mode. On the other
hand, for larger values of $\Lambda$ (i.e., for $\Lambda > 0.5$),
the quasi-continuum approach is the one that fails increasingly
becoming rather unsatisfactory, while the discrete approaches are
considerably more accurate, especially for $\Lambda > 2$, when their
relative error drops below $1 \%$ (with the exception of the
Sievers-Takeno approximation of $v_0^{ST}$, which only reaches this
precision for $\Lambda > 10$).
We hope that these results can be used as a guide for developing
sufficiently accurate analytical predictions in different parametric
regimes for such systems. It would naturally be of interest to
extend the present considerations to higher dimensions. However, it
should be acknowledged that in the latter setting the variational
approach would extend rather straightforwardly, while the homoclinic
approximation is restricted to one space dimension and the other
approximations would become more technical.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=.4\textwidth]{comparest0.eps} &
\includegraphics[width=.4\textwidth]{comparest1.eps}
\end{tabular}%
\end{center}
\caption{Representation of variable $R$ defined in (\ref{R}) versus
$\Lambda$ for ST-modes. Full lines correspond to the variational
approach; the dashed line corresponds to the homoclinic
approximation; the dash-dotted lines to the continuum approximation;
and the dotted line to the Sievers--Takeno approximation.}
\label{fig:compare1}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=.4\textwidth]{comparepage0.eps} &
\includegraphics[width=.4\textwidth]{comparepage1.eps}
\end{tabular}%
\end{center}
\caption{Representation of variable $R$ defined in (\ref{R}) versus
$\Lambda$ for P-modes. Full lines correspond to the variational
approach; dashed line, to the homoclinic approximation; and
dash-dotted lines, to the continuum approximation.}
\label{fig:compare2}
\end{figure}
\begin{acknowledgments}
JC and BSR acknowledge financial support from the MECD project
FIS2004-01183. PGK gratefully acknowledges support from NSF-CAREER,
NSF-DMS-0505663 and NSF-DMS-0619492. We acknowledge F Palmero for
his useful comments.
\end{acknowledgments}
| {
"timestamp": "2007-12-11T23:23:42",
"yymm": "0712",
"arxiv_id": "0712.1837",
"language": "en",
"url": "https://arxiv.org/abs/0712.1837",
"abstract": "We study four different approximations for finding the profile of discrete solitons in the one-dimensional Discrete Nonlinear Schrödinger (DNLS) Equation. Three of them are discrete approximations (namely, a variational approach, an approximation to homoclinic orbits and a Green-function approach), and the other one is a quasi-continuum approximation. All the results are compared with numerical computations.",
"subjects": "Pattern Formation and Solitons (nlin.PS)",
"title": "Approximated profiles for discrete solitons in DNLS lattices",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9678992923570262,
"lm_q2_score": 0.7310585786300049,
"lm_q1q2_score": 0.7075910809275151
} |
https://arxiv.org/abs/2003.02561 | A Multi-Way Correlation Coefficient | Pearson's correlation is an important summary measure of the amount of dependence between two variables. It is natural to want to generalise the concept of correlation as a single number that measures the inter-relatedness of three or more variables e.g. how `correlated' are a collection of variables in which non are specifically to be treated as an `outcome'? In this short article, we introduce such a measure, and show that it reduces to the modulus of Pearson's $r$ in the two dimensional case. | \section{Introduction and Literature Search}
Pearson's correlation coefficient, $r$, is applied in all areas of quantitative scientific endeavour, see \cite{stigler1989} for a review. For two samples $\{x_i\}_{i=1}^n$ and $\{y_i\}_{i=1}^n$, it is defined as:
\[r = \frac{\sum_{i=1}^n(x_i-\bar x)(y_i - \bar y)}{\sqrt{\sum_{i=1}^n(x_i-\bar x)^2}\sqrt{\sum_{i=1}^n(y_i-\bar y)^2}}.\]
In some instances, it is of interest to understand how related three or more variables are and there are a number of straightforward options in this case. Probably the most commonly applied of these would be:
\begin{enumerate}
\item to examine the entries of the sample correlation matrix which summarise pairwise dependence, or
\item to examine the coefficient of determination (a.k.a. the coefficient of multiple correlation), $R^2$, from a linear model assuming one variable as the outcome \citep{wright1921}.
\end{enumerate}
While both of these methods provide very useful summaries of the data, with the former method, the collection of plots can be difficult to interpret for even a moderate number of variables and the latter yields a different $R^2$ depending on the chosen outcome variable: which of these values should we choose to represent the correlation in our data? A secondary philosophical consideration with $R^2$ is that in some sense, one of the variables is being treated as an `outcome', which may in practice not be the case: there may be no justification for treating one of the variables in this way.
Why would this be useful? The motivation for this research occurred during a conversation with a Psychologist in which I was asked whether there was a way to measure dependence between three different variables capturing the properties of a certain type of brain signal: none of which was to be considered as an outcome. As another example, in a recent analysis of Tuberculosis incidence in two distinct areas in Portugal (in progress), we had occasion to examine the correlation between the covariates in our model to decide whether there was collinearity:
\begin{equation*}
\left(\begin{array}{cccccc}
1 & -0.13 & 0.18 & -0.27 & 0.19 & 0.36 \\
-0.13 & 1 & 0.15 & 0.38 & 1\times10^{-1} & 0.11 \\
0.18 & 0.15 & 1 & 0.15 & 8\times10^{-2} & 4\times10^{-2} \\
-0.27 & 0.38 & 0.15 & 1 & -0.16 & -4\times10^{-2} \\
0.19 & 1\times10^{-1} & 8\times10^{-2} & -0.16 & 1 & 0.14 \\
0.36 & 0.11 & 4\times10^{-2} & -4\times10^{-2} & 0.14 & 1 \\
\end{array}\right)
\end{equation*}
\begin{equation*}
\left(\begin{array}{cccccc}
1 & 0.23 & 0.51 & -7\times10^{-2} & 3\times10^{-1} & 0.58 \\
0.23 & 1 & 0.12 & -3\times10^{-1} & 0.14 & 0.28 \\
0.51 & 0.12 & 1 & -0.11 & 0.24 & 0.31 \\
-7\times10^{-2} & -3\times10^{-1} & -0.11 & 1 & 2\times10^{-2} & 3\times10^{-2} \\
3\times10^{-1} & 0.14 & 0.24 & 2\times10^{-2} & 1 & 0.33 \\
0.58 & 0.28 & 0.31 & 3\times10^{-2} & 0.33 & 1 \\
\end{array}\right)
\end{equation*}
As well as the specific 2-way correlations between variables in each of these areas, it is also of interest to ask the question: `are the measured covariates in one area more correlated than the other?' Our proposed measure gives an answer to that question.
There are many correlation coefficients in the literature. We used Web of Science to search titles for the words `correlation' and `coefficient' and restricted to the categories `statistics probability', `chemistry physical', `engineering electrical electronic', `engineering chemical', or `mathematics interdisciplinary applications'; we did not place any restriction on the type of articles to search. This process yielded 1585 articles. We then searched each of these titles for the phrase `correlation coefficient', which left 700 articles. These were then manually categorised, either using the name of the correlation coefficient considered if it was explicitly stated in the title of the article, or classifying as `unnamed'; the results are in Table \ref{tab:corr_coef}.
None of these identified correlation coefficients is designed for the specific task we have in mind i.e. to describe, using a single number, the correlation between multiple realisations of a single $d$-dimensional random variable $d\geq2$.
\section{The Multi-Way Correlation Coefficient}
In order to circumvent the issues detailed above, we here propose a simple one-dimensional summary of linear inter-dependence in a $d$-dimensional real-valued random variable. We call this the \textbf{multi-way correlation coefficient} and define it as:
\[\text{mcor}\left[\left(v_1,\cdots, v_n\right)\right] = \frac1{\sqrt{d}}\text{s.d.}\left\{\mathrm{eigenvalues}\left[\mathrm{cor}\left(v_1,\cdots, v_n\right)^T\right]\right\},\]
where $v_i$ are column vectors containing the $d$-dimensional variables of interest for individual $i$ ($i=1,\ldots,n$), $\text{s.d.}$ is the standard deviation and $\mathrm{cor}$ is the empirical correlation matrix.
It is straightforward to see that when $d=2$, the multi-way correlation coefficient reduces to the modulus of Pearson's $r$. To see this, recall that $r$ appears in the off-diagonal elements of the correlation matrix, hence
\[\left|\begin{array}{cc}1-\lambda & r\\r & 1-\lambda\end{array}\right| = (1-\lambda)^2 - r^2.\]
Solving for zero yields eigenvalues $(1-r)$ and $(1+r)$, whose empirical standard deviation is easily shown to be $\sqrt{2}|r|$. When $r$ is replaced by $-r$ in the correlation matrix, note the characteristic polynomial is still $(1-\lambda)^2 - r^2$, so the same argument follows for negative correlations, hence $\text{mcor}\{x,y\} = |\mathrm{cor}(x,y)|$ as claimed.
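This reduction is also easy to verify numerically; a brief Python (NumPy) check (the R reference implementation is given in the `R Code' section below):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 0.7 * x + rng.normal(size=500)

def mcor(V):
    # multi-way correlation: scaled standard deviation (with denominator d-1)
    # of the eigenvalues of the empirical correlation matrix of the columns of V
    lam = np.linalg.eigvalsh(np.corrcoef(V, rowvar=False))
    return lam.std(ddof=1) / np.sqrt(V.shape[1])

print(mcor(np.column_stack([x, y])), abs(np.corrcoef(x, y)[0, 1]))  # equal up to rounding
\end{verbatim}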
The multi-way correlation coefficient takes values on $[0,1]$. Let $\lambda_1,\ldots,\lambda_d$ be the eigenvalues of the correlation matrix. Then
\begin{eqnarray}
0 \leq \text{mcor}\left[\left(v_1,\cdots, v_n\right)^T\right] &=& \frac{1}{\sqrt{d}}\sqrt{\frac{1}{d-1}\sum_{i=1}^d(\lambda_i - \bar\lambda)^2} \\
&=& \frac{1}{\sqrt{d}}\sqrt{\frac{1}{d-1}\left[\sum_{i=1}^d\lambda_i^2 -2\sum_{i=1}^d\lambda_i + d\right]}\\
&=& \frac{1}{\sqrt{d}}\sqrt{\frac{1}{d-1}\left[\sum_{i=1}^d\lambda_i^2 - d\right]}\\
&\leq& \frac{1}{\sqrt{d}}\sqrt{\frac{1}{d-1}\left[\left(\sum_{i=1}^d\lambda_i\right)^2 - d\right]} = 1
\end{eqnarray}
Where in the above, since the diagonal elements of a correlation matrix are all equal to $1$, we have used $\bar\lambda=\frac1d\sum_{i=1}^d\lambda_i=d/d=1$. The upper bound is attained when one eigenvalue is equal to $d$ and the rest equal to zero and the lower bound is attained when all eigenvalues are equal to 1 (corresponding to the identity matrix).
As is the case for Pearson's correlation coefficient, if all variables are mutually independent, then the correlation matrix is the identity and the multi-way correlation coefficient will be equal to zero. A similar argument to that above shows that for an $n$-dimensional random variable, if $k$ of the components are independent from the rest then,
\[0 \leq \text{mcor}\left[\left(v_1,\cdots, v_n\right)^T\right]\leq\left(\frac{(n-k)(n-k-1)}{n(n-1)}\right)^{1/2}.\]
The intuition behind the multi-way correlation stems from the eigendecomposition of the correlation matrix, $R$, say. Since $R$ is symmetric and positive definite, the eigenvectors form an orthonormal basis, which in some sense describe the axes of an ellipsoid that represents the multi-dimensional direction of dependence in the data. The eigenvalues scale these axes: a large eigenvalue (compared to the others) means the data is more `stretched' in the direction of the associated eigenvector, whereas if all eigenvalues are similar, then there is a similar amount of stretch in each direction. Hence a measure of spread of the eigenvalues gives a measure of how close to linear the data are. The particular choice of dispersion made in this article is due to the fact that in two dimensions it (nearly) reduces to Pearson's $r$; the concept of a `negative' correlation in $\mathbb{R}^d$, for $d\geq3$, is not meaningful, in any case.
As kindly pointed out by an anonymous reviewer of an early version of the present article, there is similarity between our proposed measure of multi-way linearity and the measure of covariance sphericity of \cite{john1972}. For eigenvalues of a covariance matrix, $\lambda_1,\ldots,\lambda_d$, the covariance sphericity is defined as $\sum\lambda_i^2 / (\sum\lambda_i)^2$. Note that the denominator is equal to $d^2$ when this measure is applied to a correlation matrix. This measure takes values in $[1/d,1]$, so can be rescaled onto $[0,1]$ and used in a similar way to the measure we propose in the present article. Compared to this alternative, our proposed measure has the advantage of being in some sense equivalent to Pearson's $r$ in the two dimensional case.
\section{Examples}
Below are some examples of the multi-way correlation coefficient applied to 1000 random points in $\mathbb{R}^3$. Further examples are available in the web supplement.
\begin{figure}[H]
\begin{minipage}{0.333\textwidth}
\centering
Two variables a linear function of the third.
\begin{eqnarray*}
x&\sim&\text{unif}(0,1)\\
y&=&2x\\
z&=&x
\end{eqnarray*}
\begin{tabular}{cccc}
& x & y & z \\ \hline
x & 1 & 1 & 1 \\
y & 1 & 1 & 1 \\
z & 1 & 1 & 1 \\
\end{tabular}
Eigenvalues: 3, 0, 0
$\text{mcor}\{x,y,z\}=1$
\end{minipage}\begin{minipage}{0.333\textwidth}
\includegraphics[width=\textwidth]{pairs_1.pdf}
\end{minipage}\begin{minipage}{0.333\textwidth}
\includegraphics[width=\textwidth]{scatter_1.pdf}
\end{minipage}
\end{figure}
\begin{figure}[H]
\begin{minipage}{0.333\textwidth}
\centering
One variable a linear combination of other two.
\begin{eqnarray*}
x&\sim&\text{unif}(0,1)\\
y&\sim&\text{unif}(0,1)\\
z&=& x + 2y
\end{eqnarray*}
\begin{tabular}{cccc}
& x & y & z \\ \hline
x & 1 & -0.036 & 0.428 \\
y & -0.036 & 1 & 0.888 \\
z & 0.428 & 0.888 & 1 \\
\end{tabular}
Eigenvalues: 1.972, 1.028, 0
$\text{mcor}\{x,y,z\}=0.569$
\end{minipage}\begin{minipage}{0.333\textwidth}
\includegraphics[width=\textwidth]{pairs_2.pdf}
\end{minipage}\begin{minipage}{0.333\textwidth}
\includegraphics[width=\textwidth]{scatter_2.pdf}
\end{minipage}
\end{figure}
\begin{figure}[H]
\begin{minipage}{0.333\textwidth}
\centering
Three independent variables.
\begin{eqnarray*}
x&\sim&\text{unif}(0,1)\\
y&\sim&\text{unif}(0,1)\\
z&\sim&\text{unif}(0,1)
\end{eqnarray*}
\begin{tabular}{cccc}
& x & y & z \\ \hline
x & 1 & -0.026 & 0.024 \\
y & -0.026 & 1 & 0.012 \\
z & 0.024 & 0.012 & 1 \\
\end{tabular}
Eigenvalues: 1.03, 1.012, 0.958
$\text{mcor}\{x,y,z\}=0.021$
\end{minipage}\begin{minipage}{0.333\textwidth}
\includegraphics[width=\textwidth]{pairs_3.pdf}
\end{minipage}\begin{minipage}{0.333\textwidth}
\includegraphics[width=\textwidth]{scatter_3.pdf}
\end{minipage}
\end{figure}
\begin{figure}[H]
\begin{minipage}{0.333\textwidth}
\centering
Correlated variables.
\begin{eqnarray*}
x&\sim&\text{unif}(0,1)\\
y&\sim&\text{unif}(0,1)\\
z&=& x + 2y + \text{N}(0,1)
\end{eqnarray*}
\begin{tabular}{cccc}
& x & y & z \\ \hline
x & 1 & -0.030 & 0.24 \\
y & -0.030 & 1 & 0.46 \\
z & 0.24 & 0.46 & 1 \\
\end{tabular}
Eigenvalues: 1.507, 1.024, 0.469
$\text{mcor}\{x,y,z\}=0.3$
\end{minipage}\begin{minipage}{0.333\textwidth}
\includegraphics[width=\textwidth]{pairs_4.pdf}
\end{minipage}\begin{minipage}{0.333\textwidth}
\includegraphics[width=\textwidth]{scatter_4.pdf}
\end{minipage}
\end{figure}
\begin{figure}[H]
\begin{minipage}{0.333\textwidth}
\centering
Even more correlated variables.
\begin{eqnarray*}
x&\sim&\text{unif}(0,1)\\
y&=& 5x + \text{N}(0,1)\\
z&=& x + 2y + \text{N}(0,1)
\end{eqnarray*}
\begin{tabular}{cccc}
& x & y & z \\ \hline
x & 1 & 0.812 & 0.814 \\
y & 0.812 & 1 & 0.966 \\
z & 0.814 & 0.966 & 1 \\
\end{tabular}
Eigenvalues: 2.73, 0.236, 0.034
$\text{mcor}\{x,y,z\}=0.867$
\end{minipage}\begin{minipage}{0.333\textwidth}
\includegraphics[width=\textwidth]{pairs_5.pdf}
\end{minipage}\begin{minipage}{0.333\textwidth}
\includegraphics[width=\textwidth]{scatter_5.pdf}
\end{minipage}
\end{figure}
\section{R Code}
R code for the multi-way correlation coefficient is straightforward:
\begin{verbatim}
cor_taylor <- function(X){
  # multi-way correlation: scaled standard deviation of the eigenvalues
  # of the empirical correlation matrix of the columns of X
  if(!inherits(X,"matrix")){
    stop("Input must inherit from the 'matrix' class.")
  }
  n <- ncol(X)
  return((1/sqrt(n))*sd(eigen(cor(X))$values))
}
\end{verbatim}
\section{Conclusion}
We have introduced a new correlation coefficient for $d$-dimensional random variables ($d\geq2$) that expresses, in a sense similar to Pearson's correlation coefficient, the amount of linear dependence between a set of variables. Our new measure is straightforward to compute and yields a one-dimensional, easily interpretable summary of dependence.
\bibliographystyle{chicago}
| {
"timestamp": "2020-03-06T02:12:12",
"yymm": "2003",
"arxiv_id": "2003.02561",
"language": "en",
"url": "https://arxiv.org/abs/2003.02561",
"abstract": "Pearson's correlation is an important summary measure of the amount of dependence between two variables. It is natural to want to generalise the concept of correlation as a single number that measures the inter-relatedness of three or more variables e.g. how `correlated' are a collection of variables in which non are specifically to be treated as an `outcome'? In this short article, we introduce such a measure, and show that it reduces to the modulus of Pearson's $r$ in the two dimensional case.",
"subjects": "Methodology (stat.ME)",
"title": "A Multi-Way Correlation Coefficient",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9678992960608888,
"lm_q2_score": 0.7310585727705126,
"lm_q1q2_score": 0.7075910779638571
} |
https://arxiv.org/abs/1608.06712 | Double groupoid cohomology and extensions | We study extensions of double groupoids in the sense of \cite{AN2} and show some classical results of group theory extensions in the case of double groupoids. For it, given a double groupoid $(\mathcal{B}; \mathcal{V},\mathcal{H}; \mathcal{P})$ \emph{acting} on an abelian group bundle $\mathbf{K} \to \mathcal{P}$, we introduce a cohomology double complex, in a similar way as was done in \cite{AN2} and we show that the extensions of $\mathcal{F}$ by $\mathbf{K}$ are classified by the total first cohomology group of the associated total complex.With the aim to extend the above results to the topological setting, following ideas of Deligne \cite{D} and Tu \cite{tu}, by means of simplicial methods, we introduce a \emph{sheaf cohomology} for topological double groupoids, generalizing the double groupoid cohomological in the discrete case, and we carry out in the topological setting the results obtained for discrete double groupoids. | \section*{Introduction}
The notion of double groupoids was introduced by Ehresmann \cite{ehr},
and later studied in \cite{brown, bj, BM, bs} and references therein.
A double groupoid is a set ${\mathcal B}$ endowed with two different
but compatible groupoid structures.
It is useful to represent the elements of ${\mathcal B}$ as boxes that merge
horizontally or vertically according to the
groupoid multiplication under consideration.
The vertical (respectively horizontal) sides of a box belong
to another groupoid ${\mathcal V}$ (resp. ${\mathcal H}$). Later, the notion of double Lie groupoid
was defined and investigated by K. Mackenzie \cite{mk1, mk2}; see also \cite{p, mk3, wl}
for applications to differential and Poisson geometry.
In particular the question of the classification of double
Lie groupoids was raised in \cite{mk1}, see also \cite{BM}.
Since then, several classes of double groupoids have been classified;
in \cite{BM}, a complete answer was given in the restricted
case of locally trivial double Lie groupoids. More recently, a description
of discrete double groupoids in two stages was given in \cite{AN3}.
Before stating it, we recall that a diagram
of groupoids over a pair of groupoids ${\mathcal V}$ and ${\mathcal H}$ is a triple $({\mathcal D}, j, i)$
where ${\mathcal D}$ is a groupoid and $i: {\mathcal H} \to {\mathcal D}$, $j: {\mathcal V}
\to {\mathcal D}$ are morphisms of groupoids (over a fixed set of points).
In \cite{AN3} the authors showed that:
\begin{itemize}
\item {\bf Stage 1. \cite[Thm. 1.9]{AN3}. } \label{stage_1}
Any double groupoid is an extension of a slim double groupoid (its \emph{frame}) by an abelian group bundle.
\end{itemize}
In the same paper, the authors also prove the following theorem.
\begin{itemize}
\item {\bf Stage 2. \cite[Thm. 2.8]{AN3}. } \label{stage_2}
The category of slim double groupoids, with fixed vertical
and horizontal groupoids ${\mathcal V}$ and ${\mathcal H}$, satisfying the filling condition,
is equivalent to the category of diagrams over ${\mathcal V}$ and ${\mathcal H}$.
\end{itemize}
In \cite{AOT}, we extend theorem \ref{stage_1} to the setting of double Lie
groupoids. In this context, the usual filling condition is replaced
by the requirement that the \emph{double source map} is a surjective submersion \cite{mk1}.
As is expected, theorem \ref{stage_1} does not have an identical
statement in this case, and there are some topological and
geometrical ingredients to be taken into account. The main result of that work was the following.
\begin{itemize}
\item \cite[Thm. 3.7]{AOT}
\emph{The category of slim double Lie groupoids, with fixed vertical
and horizontal Lie groupoids ${\mathcal V}$ and ${\mathcal H}$, and proper core action,
is equivalent to the category of diagrams of Lie groupoids $({\mathcal D}, j, i)$
such that the maps $j$ and $i$ are transversal at the identities.}
\end{itemize}
In this work we extend an equivalent formulation of {\bf Stage 2} to the setting of double topological groupoids. The paper is divided into two parts, and in what follows we describe the contents of each of them.
The first part, comprising sections \ref{prels} to \ref{seccion4}, develops the cohomology of discrete double groupoids. In section \ref{prels} we recall some basic facts about double groupoids (discrete and topological). In section \ref{discrete_cohomology_section}, we introduce the cohomology of double groupoids by means of the total complex of a double complex attached to any double groupoid. In section \ref{seccion4}
we show that, given a double groupoid $({\mathcal F}; {\mathcal V}, {\mathcal H}; {\mathcal P})$ acting on an abelian group bundle ${\mathbf K} \to {\mathcal P}$, all the extensions of ${\mathcal F}$ by ${\mathbf K}$ are classified by the first total cohomology group of this total complex (Thm. \ref{corresp-cocycles-opext}), i.e. there is a bijection between $\mathcal{O}pext({\mathcal F},{\mathbf K})$
and the cohomology group $\operatorname{H}^1_{\Tot}({\mathcal F}, {\mathbf K})$. This result is basically an extension of Stage 2.
The second part of this work, sections 4 and 5, is concerned with a geometric construction of the bisimplicial set associated to a double groupoid and with the simplicial cohomology of topological double groupoids.
Section 4 contains basic facts about simplicial and bisimplicial sets. The main result of this section is Thm. \ref{double_skeleton_core_groupoid}, which provides a construction of the double skeleton of a double groupoid as a homogeneous space of the core groupoid attached to the double groupoid. This result enables us to realize the double skeleton as a bisimplicial set (or space, or manifold, depending on the context) in a simple way.
In section 5 we extend the cohomology theory developed in the first part to the setting of topological double groupoids. For the discrete case, the proof of proposition \ref{any.ext.is.smash} (a reformulation of Stage 2) depends strongly on the existence of a section of the frame map that maps a double groupoid onto its frame. In the continuous (differentiable) setting we can no longer guarantee the existence of a continuous (resp.\ smooth) global section of this map. If the frame of a double topological (resp.\ Lie) groupoid is a topological space (resp.\ smooth manifold) and the frame map is an open surjective map (resp.\ a surjective submersion), we can ensure the existence of local sections, and we can then localize the extension process to open sets where such local sections exist. To do this, in this section we develop a \v{C}ech double groupoid cohomology that allows us to classify the extensions of topological double groupoids by abelian topological group bundles in a way similar to the discrete case (Thm. \ref{class_ext_simplicial_version}).
Roughly speaking, given a \emph{bisimplicial sheaf} over a \emph{bisimplicial set}, we can define a double complex associated to it, and then define the cohomology of the bisimplicial set with values in the bisimplicial sheaf as the cohomology of the associated total complex: let $({\mathcal F}; {\mathcal V}, {\mathcal H}; {\mathcal P})$ be a topological double groupoid acting on a bundle ${\mathbf K} \to {\mathcal P}$ of topological abelian groups. If $\mathcal{U} = \{ \mathcal{U}_i \}_{i \in I}$ is an open cover of ${\mathcal P}$, the double groupoid $({\mathcal F}[\mathcal{U}]; {\mathcal V}[\mathcal{U}], {\mathcal H}[\mathcal{U}]; {\mathcal P}[\mathcal{U}])$
is the \textit{\v{C}ech double groupoid associated to $({\mathcal F}, \mathcal{U})$} (see defs. \ref{cech_groupoid} and \ref{open_cover}), and $\Opext({\mathcal F}[\mathcal{U}], {\mathbf K}[\mathcal{U}])$ denotes the set of all extensions of ${\mathcal F}[{\mathcal U}]$ by ${\mathbf K}[{\mathcal U}]$. Now, we can construct on the \emph{double skeleton} $\ubs{{\mathcal F}} = \{ {\mathcal F}^{(m,n)} \}_{(m,n) \in \mathbb{N}^2}$ of the double groupoid a bisimplicial sheaf $\ubs{{\mathcal A}}$ associated with the action on ${\mathbf K} \to {\mathcal P}$, and define
\begin{equation}\label{extension_form_0}
\operatorname{Ext}({\mathcal F}, {\mathbf K}) := \underset{\mathcal{U}} {\underset{\longrightarrow} \lim } \; \Opext({\mathcal F}[\mathcal{U}], {\mathbf K}[\mathcal{U}]),
\end{equation}
where $\mathcal{U} = \{ \mathcal{U}_i \}_{i \in I}$ runs over open
covers of ${\mathcal P}$. Then the main result of the paper can be stated as
\begin{equation}
\operatorname{Ext}({\mathcal F}, {\mathbf K}) \cong \check{\operatorname{H}}^{1}_{\Tot}(\dbs{{\mathcal U}}; \ubs{{\mathcal A}}).
\end{equation}
\renewcommand{\baselinestretch}{1.2}
\renewcommand{\thefootnote}{}
\thispagestyle{empty}
\begin{section}{Preliminaries on groupoids and double groupoids}\label{prels}
We denote a groupoid in the form $ \xymatrix{{\mathcal G} \ar@<2pt>[r]^{s}
\ar@<-2pt>[r]_{e} &{\mathcal P}}$, where $s$ and $e$ stand for \emph{source} and
\emph{end} respectively; and the identity map will be denoted by $\id: {\mathcal P} \to {\mathcal G}$.
Recall that a groupoid $ \xymatrix{{\mathcal G} \ar@<2pt>[r]^{s}
\ar@<-2pt>[r]_{e} & {\mathcal P}} $ is a \textit{topological groupoid} \cite{renault} if
${\mathcal P}$ and ${\mathcal G}$ are topological spaces and all the structural maps are continuous.
The {\em anchor} of ${\mathcal G}$ is the map $\chi:{\mathcal G} \to {\mathcal P} \times {\mathcal P}$ given by
$\chi(g) = (s(g),e(g) )$.
We recall the following well known definition.
\begin{definition}
A {\em left action} of a groupoid $ \xymatrix{{\mathcal G} \ar@<2pt>[r]^{s}
\ar@<-2pt>[r]_{e} &{\mathcal P}}$ {\em along} a map
$\epsilon:{\mathcal E} \to {\mathcal P}$ is given by a map from the pullback ${\mathcal G} \pfibrado{e}{\epsilon}{\mathcal E}$ to ${\mathcal E}$, denoted by $(g,y) \mapsto gy$, such that:
$$\epsilon(hy) = s(h),\quad \id(\epsilon(y))\; y = y, \quad (gh)y = g(hy),$$
for all $g,h \in {\mathcal G}$ and $y \in {\mathcal E}$ such that $e(g) = s(h)$ and $e(h)
= \epsilon(y)$. We shall simply say that the groupoid ${\mathcal G}$ acts on ${\mathcal E}$.
\end{definition}
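A basic example, standard and recorded here only as an illustration: every groupoid acts on itself on the left along its own source map. Indeed, taking ${\mathcal E} = {\mathcal G}$ and $\epsilon = s$, the groupoid product
\begin{equation*}
g \cdot y := gy, \qquad \text{whenever } e(g) = s(y),
\end{equation*}
defines a left action; the three conditions above reduce to $s(gy) = s(g)$, the unit law and the associativity of ${\mathcal G}$.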
\begin{definition}[Ehresmann]
A \textit{double groupoid} is a groupoid object internal to the
category of groupoids. In other words, a \textit{double groupoid} consists
of a set ${\mathcal B}$ with two groupoid structures with \textit{bases} ${\mathcal H}$
and ${\mathcal V}$, which are themselves groupoids over a common base ${\mathcal P}$,
all subject to the compatibility condition that the structure maps
of each structure are morphisms with respect to the other.
\end{definition}
It is usual to represent a double groupoid $({\mathcal B}; {\mathcal V}, {\mathcal H}; {\mathcal P})$
in the form of a diagram of four related groupoids
$$
\xymatrix{ {\mathcal B} \ar@<2pt>[rr]^{l} \ar@<-2pt>[rr]_{r} \ar@<2pt>[d]^{b}
\ar@<-2pt>[d]_{t}
& & {\mathcal V} \ar@<2pt>[d]^{b} \ar@<-2pt>[d]_{t}\\
{\mathcal H} \ar@<2pt>[rr]^{l} \ar@<-2pt>[rr]_{r} & & {\mathcal P} }
$$
where $t$, $b$, $l$, $r$ mean \emph{``top'', ``bottom'', ``left''} and \emph{``right''},
respectively. We sketch the main axioms that these groupoids should
satisfy and refer \emph{e.~g.} to \cite[Sec. 2]{AN1} and
\cite[Sec. 1]{AN2} for a detailed exposition and other
conventions.
The elements of ${\mathcal B}$ are called \emph{``boxes''} and will be denoted by
$$
A = \quad\caja{$A$}{$t(A)$}{$r(A)$}{$b(A)$}{$l(A)$}\quad \in{\mathcal B}.
$$
Here $t(A),\;b(A) \in {\mathcal H}$ and $l(A),\;r(A) \in {\mathcal V}$. The identity
maps will be denoted $\idd: {\mathcal V} \to {\mathcal B}$ and $\idd: {\mathcal H} \to {\mathcal B}$. The
product in the groupoid ${\mathcal B}$ with base ${\mathcal V}$ is called {\em
horizontal product} and is denoted by $AB$ or $\{AB\}$, for $A,B \in {\mathcal B}$ with
$r(A) = l(B)$. The product in the groupoid ${\mathcal B}$ with base ${\mathcal H}$ is
called {\em vertical product} and is denoted by $\begin{matrix}
A\\ B\end{matrix}$ or $\left\{\begin{matrix}
A\\ B\end{matrix}\right\}$, for $A,B \in {\mathcal B}$ with $b(A) =t(B)$. This
pictorial notation is useful to understand the products in the
double structure. For instance, compatibility axioms between the
horizontal and vertical products with respect to source and target maps
of the horizontal and vertical groupoid structures on ${\mathcal B}$, are described by
$$
\cajaMedium{$A$}{$t$}{$r$}{$b$}{$l$} \;
\cajaMedium{$B$}{$t'$}{$r'$}{$b'$}{$r$} =
\cajaMedium{$\{AB\}$}{$tt'$}{$r'$}{$bb'$}{$l$}\quad \text{ and }
\quad
\begin{matrix}
\cajaMedium{$A$}{$t$}{$r$}{$b$}{$l$} \\
\cajaMedium{$B$}{$b$}{$r'$}{$b'$}{$l'$}
\end{matrix} =\;
\cajaMedium{$\scriptstyle{\left\{\begin{matrix} A
\\ B\end{matrix}\right\}}$}{$t$}{$rr'$}{$b'$}{$ll'$}\quad.
$$
We omit the letter inside the box if no confusion arises. We also
write $A^h$ and $A^v$ to denote the inverse of $A\in {\mathcal B}$ with
respect to the horizontal and vertical structures of groupoid over
${\mathcal B}$ respectively. When one of the sides of a box is an identity, we
draw this side as a double edge. For example, if
$t(A) = \id_p$, we draw \begin{tabular}{|p{0,1cm}|} \hhline{|=|} \\
\hline\end{tabular} and say that $t(A) \in {\mathcal P}$.
\noindent \textbf{The interchange law:} The most important axiom of double groupoids is undoubtedly the \emph{interchange law},
it states that
\begin{equation}\label{interchange_law}
\begin{matrix}
\left\{ K L \right\} \vspace{-2pt}\\ \left\{ M N \right\}
\end{matrix}
=
\left\{
\begin{matrix} K \vspace{-4pt} \\ M
\end{matrix}
\right\}
\left\{
\begin{matrix}
L \vspace{-4pt}\\ N
\end{matrix}
\right\},
\end{equation}
when the four boxes are compatible, i.e., when all the compositions involved
are well defined.
\begin{definition}
A double groupoid is a \textit{topological double groupoid} if all the
four groupoids involved are topological groupoids and the \textit{top-right
corner map}
\begin{equation}\label{t-r corner map}
\urcorner: {\mathcal B} \to {\mathcal H} \pfibrado{l}{t} {\mathcal V}, \qquad A \mapsto
\urcorner(A) = (t(A), r(A)),
\end{equation}
is a continuous surjective map.
\end{definition}
We shall say that a double groupoid is \emph{discrete} if no differentiable or topological structure is present.
A discrete double groupoid satisfies the \textit{filling condition}
if the \textit{top-right corner map} defined in \eqref{t-r corner map}
is surjective. We refer the reader to \cite{AN3} for more details on discrete
double groupoids and the filling condition.
Let $({\mathcal B};{\mathcal V},{\mathcal H};{\mathcal P})$ be a double groupoid, if $P \in {\mathcal P}$, we denote $ \Theta_P := \idd \circ \id(P)$.
\begin{definition}[Brown and Mackenzie] Let $({\mathcal B};{\mathcal V},{\mathcal H};{\mathcal P})$ be a double groupoid. The core
groupoid ${\mathbf E}({\mathcal B})$ of ${\mathcal B}$ is the set
$${\mathbf E}({\mathcal B})=\{E \in {\mathcal B} : \; t(E), \;
r(E) \in {\mathcal P}\}$$
with source and target projections
$s_{{}_{\mathbf E}}$, $e_{{}_{\mathbf E}}: {\mathbf E}({\mathcal B}) \to {\mathcal P}$, given by
$s_{{}_{\mathbf E}}(E)=bl(E)$ and $e_{{}_{\mathbf E}}(E)= tr(E)$ respectively;
identity map given by $\Theta_P$; multiplication and inverse given by
\begin{equation}\label{productcore} E \circ F : =
\left\{\begin{matrix} {\scriptstyle \idd l(F)} & F\\
E & {\scriptstyle \idd(b(F)) \vspace{-1pt}} \end{matrix} \right\},
\qquad E^{(-1)}: = (E \iddv b(E)^{-1})^v
= \left\{\begin{matrix}\iddv l(E)^{-1} \vspace{-4pt}\\
E^h\end{matrix} \right\},
\end{equation}
for every compatible $E , F\in {\mathbf E}({\mathcal B})$.
We observe that
the elements of ${\mathbf E}({\mathcal B})$ are of the form $E =
\begin{tabular}{|p{0,1cm}||} \hhline{|=||} \\ \hline \end{tabular}
\;$; the source gives the bottom-left vertex and the target gives the
top-right vertex of the box.
\end{definition}
\begin{remark}
It is clear that if $({\mathcal B};{\mathcal V},{\mathcal H};{\mathcal P})$ is a topological double groupoid then so is its core groupoid. Moreover, in the case of double Lie groupoids, the definition requests that the top-right corner map be a surjective submersion; in this case the
core groupoid ${\mathbf E}({\mathcal B})$ is a closed embedded submanifold of ${\mathcal B}$ and clearly $s_{{}_{\mathbf E}}$ and $e_{{}_{\mathbf E}}$ are
surjective submersions. With these structural maps
${\mathbf E}({\mathcal B})$ becomes a Lie groupoid with base ${\mathcal P}$ (differentiability conditions being easily verified because ${\mathbf E}({\mathcal B})$
is an embedded submanifold of ${\mathcal B}$).
\end{remark}
Another important invariant of a double groupoid is the intersection
${\mathbf K}({\mathcal B})$ of all four core groupoids:
\begin{align*} {\mathbf K}({\mathcal B}) & : = \{K \in
{\mathcal B}: \; t(K), b(K),l(K), r(K) \in {\mathcal P} \}.\end{align*}
Thus a box is in ${\mathbf K}({\mathcal B})$ if and only if it is of the form \begin{tabular}{||p{0,1cm}||} \hhline{|=|} \\
\hhline{|=|}\end{tabular}.
Let $p: {\mathbf K}({\mathcal B}) \to {\mathcal P}$ be the `common
vertex' function, say $p(K) = lb(K)$. For any $P\in {\mathcal P}$, let
${\mathbf K}({\mathcal B})_P$ be the fiber at $P$; ${\mathbf K}({\mathcal B})_P$ is an abelian group under
vertical composition, that coincides with horizontal composition. This
is just the well-known fact: ``a double group is the same as an
abelian group". Indeed, apply the interchange law
\begin{equation*}
\begin{matrix} (K L) \vspace{-2pt}\\ (M N) \end{matrix} =
\left(\begin{matrix} K \vspace{-4pt}\\ M \end{matrix}\right)
\left(\begin{matrix} L \vspace{-4pt}\\ N \end{matrix}\right)
\end{equation*}
to four boxes $K,L,M,N \in {\mathbf K}_P$: if $L = M = \Theta_P$, this
says that $\begin{matrix} K \vspace{-4pt}\\ N \end{matrix} = KN$
and the two operations coincide. If, instead, $K = N = \Theta_P$,
this says that $\begin{matrix} L \vspace{-4pt}\\ M \end{matrix}
= ML$, hence the composition is abelian. Note that this operation
in ${\mathbf K}({\mathcal B})$ coincides also with the core multiplication
\eqref{productcore}. In short, ${\mathbf K}({\mathcal B})$ is an abelian group bundle
over ${\mathcal P}$.
Once an algebraic structure is introduced, a natural task
is to study the class of objects on which it acts. There are
several notions of \emph{modules} for double groupoids \cite{aa, BM};
here we introduce a new one, implicit in \cite{AN3}.
\begin{definition}\label{double groupoid left action}
Let ${\mathcal T} = ({\mathcal F}; {\mathcal V}, {\mathcal H}; {\mathcal P})$ be a double groupoid and let $\epsilon: {\mathcal E} \to {\mathcal P}$
be a map. We say that {\em $\mathcal{T}$ acts on the left along $\epsilon$}
if the following conditions hold:
\begin{enumerate}
\item The groupoid $\xymatrix{ {\mathcal V} \ar@<-2pt>[r]_r \ar@<2pt>[r]^l & {\mathcal P}}$
acts on ${\mathcal E} \to {\mathcal P} $ on the left,
\item The groupoid $\xymatrix{ {\mathcal H} \ar@<-2pt>[r]_b \ar@<2pt>[r]^t & {\mathcal P}}$
acts on ${\mathcal E} \to {\mathcal P} $ on the left,
\item for any box $A \in {\mathcal F}$ the following relation holds
\begin{equation}\label{acciondoble}
l(A)^{-1}\cdot(t(A)\cdot X) = b(A)\cdot (r(A)^{-1}\cdot X)
\end{equation}
for any $X$ where the action is defined.
\end{enumerate}
\end{definition}
\begin{remark}
From the above definition it is easy to see that
any module over a double groupoid is, at the same time,
a module over the core groupoid associated to it.
\noindent The notion of action introduced here
differs substantially from the previous ones introduced by Brown and Mackenzie \cite{BM}
and by Andruskiewitsch and Aguiar \cite{aa}. Nevertheless it is good enough to obtain
meaningful information about a double groupoid from
its \textit{representation category}. We will study this concept
of representation systematically in a forthcoming paper.
\end{remark}
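Before turning to the example that is actually used in this paper, we record a trivial instance of Definition \ref{double groupoid left action}, included only as a sanity check of the compatibility condition \eqref{acciondoble}: let ${\mathbf K} = K_0 \times {\mathcal P}$ be a trivial bundle with fibre a fixed abelian group $K_0$, and let every arrow of ${\mathcal V}$ and of ${\mathcal H}$ act by transporting the $K_0$-component unchanged between the corresponding points of ${\mathcal P}$. Then conditions $(1)$ and $(2)$ hold trivially and, with the same conventions as in the Example that follows, for every box $A \in {\mathcal F}$ both sides of \eqref{acciondoble} leave the $K_0$-component untouched and move the base point from the top-right to the bottom-left vertex of $A$:
\begin{equation*}
l(A)^{-1}\cdot\big(t(A)\cdot (k, tr(A))\big) = (k,\, bl(A)) = b(A)\cdot\big(r(A)^{-1}\cdot (k, tr(A))\big),
\end{equation*}
so the relation holds.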
\begin{exa}
Given a double groupoid $({\mathcal F}; {\mathcal V}, {\mathcal H};{\mathcal P})$, the vertical and the horizontal groupoids
${\mathcal V}$ and ${\mathcal H}$ act on ${\mathbf K}({\mathcal F})$ by vertical, respectively horizontal, conjugation:
\begin{align}
\label{accionver} \text{If } g\in {\mathcal V}(Q,P) \text{ and } A\in {\mathbf K}_P \text{ then }
g\cdot A &:= \begin{matrix} \id g \vspace{-1pt}\\ A \vspace{-4pt}\\ \id g^{-1} \end{matrix}
\in {\mathbf K}_Q;&
\\
\label{accionhor} \text{if } x\in {\mathcal H}(Q,P) \text{ and } A\in {\mathbf K}_P \text{ then }
x\cdot A &:= \id x\, A\, \id x^{-1}\in {\mathbf K}_Q;&
\end{align}
we can note that both actions are by group bundle automorphisms. These maps define an
action of the double groupoid ${\mathcal F}$ on the abelian group bundle ${\mathbf K}({\mathcal F})$ associated to it.
\end{exa}
\begin{remark}
In a way analogous to Definition \ref{double groupoid left action} we can define a right action of a double groupoid
along a map $\epsilon: {\mathcal E} \to {\mathcal P}$. Conditions $(1)$ and $(2)$ in
definition \ref{double groupoid left action} undergo the obvious modifications,
and condition $(3)$ is replaced by
\begin{equation}
(X \cdot l(A)) \cdot t(A)^{-1} = (X \cdot b(A)^{-1})\cdot r(A),
\end{equation}
whenever $\epsilon(X) = bl(A) = lb(A)$.
\end{remark}
\end{section}
\begin{section}{Cohomology of double groupoids}\label{discrete_cohomology_section}
Let ${\mathcal T} = ({\mathcal F}; {\mathcal V}, {\mathcal H}; {\mathcal P})$ be a double groupoid and $p: {\mathbf K}({\mathcal F}) \to {\mathcal P}$ be the associated abelian group bundle. We also denote ${\mathbf K} = {\mathbf K}({\mathcal F})$.
\vskip .3cm
\subsection{The double complex associated to a double groupoid}
Let $\{ \mathcal{F}^{(m,n)} \}$ be the family of sets defined by
\begin{align}\label{bsp-set-db-gpd}
{\mathcal F}^{(0,0)} &:= {\mathcal P}, \nonumber\\
{\mathcal F}^{(0,s)} &:= \left\{\left(x_1, \dots, x_s\right) \in {\mathcal H}^{\, s}:
x_1\vert x_2 \dots \vert x_s \right\} = {\mathcal H}^{(s)}, \quad s> 0, \nonumber\\
{\mathcal F}^{(r,0)} &:= \left\{\left(g_1, \dots, g_r\right) \in{\mathcal V}^{\, r}:
g_1\vert g_2 \dots \vert g_r\right\} = {\mathcal V}^{(r)}, \quad r> 0, \nonumber\\
{\mathcal F}^{(r,s)} &:= \left\{\left(\begin{tabular}{p{0,8cm} p{0,8cm} p{0,8cm} p{0,8cm}}
$A_{11}$ & $ A_{12}$ & \dots & $A_{1s}$ \\
$A_{21}$ & $ A_{22}$ & \dots & $A_{2s}$ \\
\dots & \dots & \dots & \dots \\
$A_{r1}$ & $ A_{r2}$ & \dots & $A_{rs}$ \end{tabular}\right) \in
{\mathcal F}^{r\times s}: \quad
\begin{tabular}{p{0,8cm}|p{0,8cm}|p{0,8cm}|p{0,8cm}}
$A_{11}$ & $ A_{12}$ & \dots & $A_{1s}$ \\ \hline
$A_{21}$ & $ A_{22}$ & \dots & $A_{2s}$ \\ \hline
\dots & \dots & \dots & \dots \\ \hline
$A_{r1}$ & $ A_{r2}$ & \dots & $A_{rs}$ \end{tabular} \right\}
\; r,s> 0;
\end{align}
that is, $\mathcal{F}^{(m,n)}$ is the set of matrices of
composable boxes of ${\mathcal F}$ of size $m \times n$.
We define $D^{\cdot, \cdot} = D^{\cdot, \cdot}({\mathcal T}, {\mathbf K})$ in the following way: let $r,s>0$, then
\begin{align*}
D^{0,0}&:= \left\{ \alpha: {\mathcal P} \to {\mathbf K} \; : \; p \circ \alpha = Id_{{\mathcal P}} \right\} \\
D^{r,0}&:= \left\{ \alpha: {\mathcal V}^{(r)} \to {\mathbf K}\; : \; p \circ \alpha (f) = b (f_r)\text{, }p \circ \alpha (r(A_{11}),\cdots,r(A_{r1})) = br(A_{11})\text{ and }\alpha\text{ is normalized}\right\}, \\
D^{0,s}&:= \left\{ \alpha: {\mathcal H}^{(s)} \to {\mathbf K}\; : \; p \circ \alpha (x) = l (x_1)\text{, }p \circ \alpha (t(A_{11}),\cdots,t(A_{1s})) = tl(A_{11})\text{ and }\alpha\text{ is normalized}\right\}, \\
D^{r,s}&:= \left\{ \alpha: {\mathcal F}^{(r,s)}\to {\mathbf K}\; : \; p \circ \alpha(A) = bl(A_{r1})\text{ and } \alpha \text{ is normalized} \right\}.
\end{align*}
\begin{remark}
Recall that if $r,s \ge 0$, $\alpha$ is normalized if $\alpha(A) = 0$ (i.e.\ $=\Theta_P$, for some $P\in {\mathcal P}$) when
\begin{align*}
&r > 1, s>0 \text{ and } A_{ij} \in {\mathcal V}, \text{ some } i,j \text{, or }\\
&r > 1, s = 0 \text{ and } A_{i0} \in {\mathcal P}, \text{ some } i \text{, or } \\
\ &r>0, s > 1 \text{ and } A_{ij} \in {\mathcal H}, \text{ some } i,j \text{, or }\\
&r =0, s > 1 \text{ and } A_{0j} \in {\mathcal P}, \text{ some } j.
\end{align*}
\end{remark}
In particular we have
\begin{equation*}
D^{1, 2} = \left\{\alpha: {\mathcal F}^{(1,2)} \to {\mathbf K} | \; (p \circ \alpha)(A_{11},A_{12})
= bl(A_{11}), \; \alpha(A_{11},A_{12}) = 0, \text{ if } A_{11} \text{ or } A_{12} \in {\mathcal H}
\right\}.
\end{equation*}
and
\begin{equation*}
D^{2, 1} = \left\{\alpha: {\mathcal F}^{(2,1)} \to {\mathbf K} | \;
(p \circ \alpha)\left(\begin{matrix}A_{11}\\ A_{21}\end{matrix}\right)
= bl(A_{21}), \; \alpha \left(\begin{matrix}A_{11}\\ A_{21}\end{matrix}\right) = 0,
\text{ if } A_{11} \text{ or } A_{21} \in {\mathcal V} \right\}.
\end{equation*}
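It will also be convenient to have $D^{1,1}$ explicitly at hand, since maps in this set play the role of $0$-cochains (the $\lambda$'s) in Section \ref{seccion4}. Spelling out the case $r = s = 1$ of the definition above, where the normalization conditions are vacuous and ${\mathcal F}^{(1,1)}$ identifies with ${\mathcal F}$, we simply get
\begin{equation*}
D^{1,1} = \left\{ \lambda: {\mathcal F} \to {\mathbf K}\; : \; (p \circ \lambda)(A) = bl(A) \;\text{ for all } A \in {\mathcal F} \right\}.
\end{equation*}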
Let $d_H = d_H^{r,s}: D^{r, s} \to D^{r, s+1}$ and
$d_V = d_V^{r,s}: D^{r, s} \to D^{r+1, s}$
be, respectively, the horizontal and vertical coboundary
maps, defined as follows (the case $r = s = 1$, which is used repeatedly later on, is spelled out right after this list):
\begin{itemize}
\item If $r =0$, $d_H$ is
\begin{equation}\label{cob1}
\begin{aligned}d_H^{0,0} \alpha(x) &= x \cdot \alpha(r(x)) - \alpha(l(x)), \\
d_H^{0,s} \alpha(x_1, \dots, x_{s+1}) &= x_1 \cdot \alpha(x_2, \dots, x_{s+1})
+ \sum_{1\le i \le s}(-1)^{i} \alpha(x_1, \dots, x_ix_{i+1}, \dots, x_{s+1})
\\ & \qquad + (-1)^{s+1} \alpha(x_1, \dots, x_{s}).
\end{aligned}
\end{equation}
\item if $s =0$, $d_V$ is
\begin{equation}\label{cob2}
\begin{aligned}d_V^{0,0} \alpha(x) &= \alpha(b(x)) - x^{-1}\cdot \alpha(t(x)), \\
d_V^{r,0} \alpha(x_1, \dots, x_{r+1}) &= \alpha(x_2, \dots, x_{r+1})
+ \sum_{1\le i \le r}(-1)^{i} \alpha(x_1, \dots, x_ix_{i+1}, \dots, x_{r+1})
\\ & \qquad + (-1)^{r+1} x_{r+1}^{-1} \cdot \alpha(x_1, \dots, x_{r})
\end{aligned}
\end{equation}
\item if $r =0$, $s > 0$,
$$d_V^{0,s} \alpha(A_{11}, \dots, A_{1s}) =
\alpha(b(A_{11}), \dots, b(A_{1s})) - l(A_{11})^{-1} \cdot \alpha(t(A_{11}), \dots, t(A_{1s}));$$
\item if $r >0$, $s = 0$,
$$d_H^{r,0} \alpha\begin{pmatrix}
A_{11} \\
\vdots \\
A_{r1} \end{pmatrix} =
b(A_{r1}) \cdot \alpha(r(A_{11}), \dots, r(A_{r1})) - \alpha(l(A_{11}), \dots, l(A_{r1}));$$
\item if $r > 0$ and $s = 1$,
\begin{equation*}
d_V^{r,1} f\begin{pmatrix}
A_{11} \\
\vdots \\
A_{r1} \\
A_{r+1,1}
\end{pmatrix} =
f\begin{pmatrix}
A_{21} \\
\vdots \\
A_{r1} \\
A_{r+1,1}
\end{pmatrix}
+ \sum_{1\le i \le r}(-1)^{i} f\begin{pmatrix}
A_{11} \\
\vdots \\
\left\{\begin{matrix}A_{i1} \\ A_{i+1,1}\end{matrix}\right\} \\
\vdots \\
A_{r+1,1}
\end{pmatrix}
+ (-1)^{r+1} l(A_{r+1,1})^{-1} \cdot f\begin{pmatrix}
A_{11} \\
\vdots \\
A_{r1} \end{pmatrix};
\end{equation*}
\item if $r = 1$ and $s > 0$,
\begin{align*}
d_H^{1,s} f\begin{pmatrix}
A_{11} & \dots & A_{1,s+1}\end{pmatrix} &=
b(A_{11}) \cdot f\begin{pmatrix}
A_{12} & \dots & A_{1,s+1}
\end{pmatrix}
\\ & \qquad + \sum_{1\le j \le s}(-1)^{j}
f\begin{pmatrix}
A_{11} & \dots & \left\{A_{1,j} A_{1,j+1} \right\}& \dots & A_{1,s+1}
\end{pmatrix}
\\ & \qquad + (-1)^{s+1} f\begin{pmatrix}
A_{11} & \dots & A_{1s} \end{pmatrix}.
\end{align*}
\item More generally if $r > 0$ and $s > 0$,
\begin{align*}
d_V^{r,s} f\begin{pmatrix}
A_{11} & \dots & A_{1s} \\
\dots & \dots & \dots \\
A_{r1} & \dots & A_{rs} \\
A_{r+1,1} & \dots & A_{r+1,s}
\end{pmatrix} &=
f\begin{pmatrix}
A_{21} & \dots & A_{2s} \\
\dots & \dots & \dots \\
A_{r1} & \dots & A_{rs} \\
A_{r+1,1} & \dots & A_{r+1,s}
\end{pmatrix}
\\ & \qquad + \sum_{1\le i \le r}(-1)^{i} f\begin{pmatrix}
A_{11} & \dots & A_{1s} \\
\dots & \dots & \dots \\
\left\{\begin{matrix}A_{i1} \\ A_{i+1,1}\end{matrix}\right\}
& \dots & \left\{\begin{matrix}A_{is} \\ A_{i+1,s}\end{matrix}\right\} \\
\dots & \dots & \dots \\
A_{r+1,1} & \dots & A_{r+1,s}
\end{pmatrix}
\\ & \qquad + (-1)^{r+1} l(A_{r+1,1})^{-1} \cdot f\begin{pmatrix}
A_{11} & \dots & A_{1s} \\
\dots & \dots & \dots \\
A_{r1} & \dots & A_{rs} \end{pmatrix};
\end{align*}
\begin{align*}
d_H^{r,s} f\begin{pmatrix}
A_{11} & \dots & A_{1,s+1} \\
\dots & \dots & \dots \\
A_{r1} & \dots & A_{r,s+1} \end{pmatrix} &=
b(A_{r1}) \cdot f\begin{pmatrix}
A_{12} & \dots & A_{1,s+1} \\
\dots & \dots & \dots \\
A_{r2} & \dots & A_{r,s+1}
\end{pmatrix}
\\ & \qquad + \sum_{1\le j \le s}(-1)^{j}
f\begin{pmatrix}
A_{11} & \dots & \left\{A_{1,j} A_{1,j+1} \right\}& \dots & A_{1,s+1} \\
\dots & \dots& \dots& \dots & \dots \\
A_{r,1} & \dots& \left\{A_{r,j}A_{r,j+1} \right\}& \dots & A_{r,s+1}
\end{pmatrix}
\\ & \qquad + (-1)^{s+1} f\begin{pmatrix}
A_{11} & \dots & A_{1s} \\
\dots & \dots & \dots \\
A_{r1} & \dots & A_{rs} \end{pmatrix}.
\end{align*}
\end{itemize}
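For later reference, we spell out the case $r = s = 1$ of the last two formulas, which is the one used in Section \ref{seccion4}: for $\lambda \in D^{1,1}$, that is, a map $\lambda: {\mathcal F} \to {\mathbf K}$ with $(p\circ \lambda)(A) = bl(A)$, we obtain
\begin{align*}
d_V^{1,1}\lambda \left(\begin{matrix} A_{11} \\ A_{21} \end{matrix}\right)
&= \lambda(A_{21}) - \lambda\left(\left\{\begin{matrix} A_{11} \\ A_{21} \end{matrix}\right\}\right)
+ l(A_{21})^{-1} \cdot \lambda(A_{11}),\\
d_H^{1,1}\lambda \left( A_{11}\; A_{12}\right)
&= b(A_{11}) \cdot \lambda(A_{12}) - \lambda\left(\left\{ A_{11} A_{12}\right\}\right) + \lambda(A_{11}).
\end{align*}
These are exactly the expressions that appear in the proofs of Lemma \ref{eq.ext.cohom.cocycles} and Theorem \ref{corresp-cocycles-opext} below.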
\vskip .5cm
A lengthy but straightforward computation shows that the following diagram
commutes
$$
\begin{CD}
D^{r+1, s} @>{d_H}>> D^{r+1, s+1}
\\
@AA{d_V}A @AA{d_V}A \\
D^{r, s} @>{d_H}>> D^{r, s+1}
\end{CD}
$$
Thus, using the usual ``sign trick'', we have constructed a double cochain complex
\begin{center}
\begin{equation}\label{totalcomplex}
D^{\cdot\cdot}({\mathcal T}) = \qquad
\begin{CD}
\vdots \\
@AAA \\
D^{2, 0} @>{d_H}>> \hspace{18pt}\raisebox{1.0ex}{\vdots}\cdots \\
@AA{d_V}A @AA{-d_V}A \\
D^{1, 0} @>{d_H}>> D^{1, 1} @>{d_H}>>
\hspace{18pt}\raisebox{1.0ex}{\vdots}\cdots \\
@AA{d_V}A @AA{-d_V}A @AA{d_V}A \vspace{9pt}\\
D^{0, 0} @>{d_H}>> D^{0, 1} @>{d_H}>>
D^{0, 2} @>>> \cdots\ \ .
\end{CD}
\end{equation}
\end{center}
\vskip .5cm
If we remove the edges of this double complex we obtain another
complex by setting ${\mathcal A}^{r, s}({\mathcal T}):= {\mathcal A}^{r,s} := D^{r+1,s+1}$ and defining
$$
d_v^{r,s} := (-1)^{s} d_V^{r+1,s+1} \quad \text{and} \quad d_h^{r,s} := d_H^{r+1,s+1},
$$
so that, with this ``sign trick'', we have $d_v\circ d_h + d_h\circ d_v =0$.
\begin{center}
\begin{equation}\label{complexdoublecohomology}
\xymatrix{
&\vdots&\\
&A^{2,0} \ar[u]^{d_v^{2,0}} \ar[r]_{d_h^{2,0}}& A^{2,1} \\
{\mathcal A}^{\cdot \cdot}({\mathcal T}) := &A^{1,0} \ar[u]^{d_v^{1,0}} \ar[r]_{d_h^{1,0}} & A^{1,1}\ar[r]_{d_h^{1,1}} \ar[u]^{d_v^{1,1}}& A^{1,2}\\
&A^{0,0} \ar[u]^{d_v^{0,0}}\ar[r]_{d_h^{0,0}} & A^{0,1}\ar[u]^{d_v^{0,1}}
\ar[r]_{d_h^{0,1}} & A^{0,2} \ar[r]_{d_h^{0,2}}\ar[u]^{d_v^{0,2}}& \cdots
}
\end{equation}
\end{center}
Finally, we denote by $E^{\cdot \cdot}({\mathcal T})$ the double complex consisting only of the edges of $D^{\cdot \cdot}({\mathcal T})$.
\end{section}
\vskip 1cm
\begin{section}{Extensions of double groupoids by abelian group bundles}\label{seccion4}
\begin{definition}
Let $({\mathcal B}; {\mathcal V}, {\mathcal H}; {\mathcal P})$ and $({\mathcal F}; {\mathcal V}, {\mathcal H}; {\mathcal P})$ be
two double groupoids and let $\Pi: {\mathcal B} \to {\mathcal F}$ be a
morphism of double groupoids. The \emph{double kernel}
of $\Pi$ is the set
$$Ker(\Pi) = \{ B \in {\mathcal B}\;:\; \Pi(B) = \Theta_p,\;\text{for some }\; p \in {\mathcal P} \}$$
\end{definition}
\begin{lemma}
Let $({\mathcal B}; {\mathcal V}, {\mathcal H}; {\mathcal P})$ and $({\mathcal F}; {\mathcal V}, {\mathcal H}; {\mathcal P})$ be
two double groupoids and let $\Pi: {\mathcal B} \to {\mathcal F}$ be a
morphism of double groupoids. Then the \emph{double kernel}
of $\Pi$ is contained in the abelian group bundle associated
to ${\mathcal B}$ and is, therefore, an abelian group bundle itself.
\end{lemma}
\begin{proof}
Straightforward.
\end{proof}
\begin{definition}
Let $({\mathcal F}; {\mathcal V}, {\mathcal H}; {\mathcal P})$ be a double groupoid and
let $p: {\mathbf K} \to {\mathcal P}$ be any abelian group bundle.
We say that a double groupoid $({\mathcal B}; {\mathcal V}, {\mathcal H}; {\mathcal P})$ is
an extension of ${\mathcal F}$ by ${\mathbf K}$ if there is an epimorphism of
double groupoids $\Pi: {\mathcal B} \to {\mathcal F}$ such that
$Ker(\Pi) = {\mathbf K}$.
If $({\mathcal B}; {\mathcal V}, {\mathcal H}; {\mathcal P})$ is an extension of
$({\mathcal F}; {\mathcal V}, {\mathcal H}; {\mathcal P})$ by $p: {\mathbf K} \to {\mathcal P}$ then we write
$$1 \to {\mathbf K} \hookrightarrow {\mathcal B} \twoheadrightarrow {\mathcal F} \to 1$$
\end{definition}
As usual, once the concept of extensions of some
algebraic structure is defined, we introduce a series
of definitions related to it. From now on we will work with double
groupoids over fixed lateral groupoids ${\mathcal V}$ and ${\mathcal H}$.
\begin{definition}
Let ${\mathcal B}_1$ and ${\mathcal B}_2$ be two double groupoid extensions of
the double groupoid ${\mathcal F}$ by an abelian group bundle $p: {\mathbf K} \to {\mathcal P}$.
We say that ${\mathcal B}_1$ and ${\mathcal B}_2$ are \emph{isomorphic} if there is a triple
$(\phi, \Phi, \Psi)$ where $\phi: {\mathbf K} \to {\mathbf K}$ is a morphism of
abelian group bundles and, $\Phi: {\mathcal B}_1 \to {\mathcal B}_2$ and $\Psi: {\mathcal F} \to {\mathcal F}$
are isomorphisms of double groupoids such that the following diagram
is commutative
\begin{equation}\label{isomorphic extensions}
\xymatrix{
1 \ar[r] & \ar @{} [dr] |{\circlearrowleft} {\mathbf K} \ar@{^{(}->}[r]^\iota \ar[d]_\phi &
\ar @{} [dr] |{\circlearrowleft} {\mathcal B}_1 \ar@{>>}[r]^{\Pi_1} \ar[d]_\Phi
& {\mathcal F} \ar[r] \ar[d]_\Psi & 1 \\
1 \ar[r] & {\mathbf K} \ar@{^{(}->}[r]_\iota & {\mathcal B}_2 \ar@{>>}[r]_{\Pi_2} & {\mathcal F} \ar[r] & 1
}
\end{equation}
We say that the two extensions ${\mathcal B}_1$ and ${\mathcal B}_2$ are
\emph{equivalent} if in the diagram \eqref{isomorphic extensions}
the morphisms $\phi$ and $\Psi$ are identities. That is,
if the following diagram is commutative
\begin{equation}
\xymatrix{
& & {\mathcal B}_1 \ar@{}[ddr] |{\circlearrowleft} \ar@{}[ddl] |{\circlearrowright} \ar[dd]_\Phi
\ar@{>>}[dr]^{\Pi_1}&&\\
1 \ar[r] & {\mathbf K} \ar@{^{(}->}[dr]_\iota \ar@{^{(}->}[ur]^\iota & &
{\mathcal F} \ar[r] & 1\\
& & {\mathcal B}_2 \ar@{>>}[ur]_{\Pi_2}&&
}
\end{equation}
\end{definition}
\begin{definition}
Let $({\mathcal F}; {\mathcal V}, {\mathcal H}; {\mathcal P})$ be a double groupoid and
let $p: {\mathbf K} \to {\mathcal P}$ be any abelian group bundle.
The set of equivalence classes of extensions of
double groupoids of ${\mathcal F}$ by ${\mathbf K}$ will be
denoted by $\mathcal{O}pext({\mathcal F},{\mathbf K})$.
\end{definition}
In the rest of the section we give a cohomological
description of $\mathcal{O}pext({\mathcal F},{\mathbf K})$.
\subsection{The total complex associated to a double groupoid}
We recall the definition of the total complex
$\Tot(A^{\cdot\cdot})$ associated to the double complex
$A^{\cdot\cdot}$ defined in the previous section.
Let
$$
\Tot( A^{\cdot\cdot})^n = \bigoplus_{p+q=n} A^{p,q}.
$$
Then
$$
d^n: \Tot( A^{\cdot\cdot})^n \to \Tot( A^{\cdot\cdot})^{n+1} \quad
\text{given by}\quad d = d_h + d_v ,
$$
defines a map such that $d \circ d =0$.
Then $(\Tot( A^{\cdot\cdot}),d)$ is a cochain complex.
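In the low degrees used below, this unwinds as follows (we record it here for the reader's convenience): $\Tot(A^{\cdot\cdot})^0 = A^{0,0} = D^{1,1}$ and $\Tot(A^{\cdot\cdot})^1 = A^{1,0} \oplus A^{0,1} = D^{2,1} \oplus D^{1,2}$, with differentials
\begin{equation*}
d^0 \lambda = \big(d_v^{0,0}\lambda,\; d_h^{0,0}\lambda\big) = \big(d_V^{1,1}\lambda,\; d_H^{1,1}\lambda\big),
\qquad
d^1(\sigma, \tau) = \big(d_v^{1,0}\sigma,\; d_h^{1,0}\sigma + d_v^{0,1}\tau,\; d_h^{0,1}\tau\big),
\end{equation*}
where $\sigma \in A^{1,0}$, $\tau \in A^{0,1}$, and the target of $d^1$ is $A^{2,0} \oplus A^{1,1} \oplus A^{0,2}$; compare with the proof of Lemma \ref{1cociclos} below.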
\begin{definition}\label{discrete_double_groupoid_cohomology}
We define the {\em total $n$-cohomology of the double groupoid
${\mathcal T}$ with coefficients in ${\mathbf K}$} as the $n$-th cohomology group
\begin{equation}
\operatorname{H}^n_{\Tot} ({\mathcal T},{\mathbf K}) := \operatorname{H}^n(\Tot( {\mathcal A}^{\cdot\cdot}))
\end{equation}
of the total complex of ${\mathcal A}^{\cdot \cdot}({\mathcal T},{\mathbf K})$.
\end{definition}
\begin{lemma}
The group $\operatorname{H}^0_{\Tot}({\mathcal T}, {\mathbf K})$, which coincides with the set of total $0$-cocycles, is the set of maps
$$
\left\lbrace \alpha: {\mathcal F} \to {\mathbf K}\; : \;\alpha\left\lbrace
\begin{matrix}
A_{11}\\ A_{21}
\end{matrix}
\right\rbrace = \alpha (A_{21}) + l(A_{21})^{-1} \cdot \alpha(A_{11})
\;\text{and}
\;\alpha(\{ A_{11} A_{12} \}) = b(A_{11})\cdot \alpha(A_{12})
+\alpha(A_{11})
\right\rbrace
$$
\end{lemma}
\begin{proof}
Straightforward.
\end{proof}
\begin{remark}
If the actions of ${\mathcal H}$ and ${\mathcal V}$ on ${\mathbf K}$ are trivial, then
the \emph{$0$-cohomology} of the double groupoid ${\mathcal T}$ is the set
of functions from ${\mathcal F}$ to ${\mathbf K}$ that are \textit{morphisms of
groupoids} with respect to the horizontal and vertical groupoid
structures on ${\mathcal F}$.
\end{remark}
\begin{proposition}
Let $p: {\mathbf K} \to {\mathcal P}$ be a ${\mathcal T}$ module. There is an exact sequence
\begin{equation}\label{kes-formulagral}
\begin{aligned}
0 &\to \operatorname{H}^1(\Tot( D^{\cdot\cdot}))\to \operatorname{H}^1({\mathcal H},{\mathbf K}) \oplus \operatorname{H}^1({\mathcal V},{\mathbf K}) \to
\operatorname{H}^0_{\Tot} ({\mathcal T},{\mathbf K}) \\
&\to \operatorname{H}^2(\Tot( D^{\cdot\cdot}))\to \operatorname{H}^2({\mathcal H}, {\mathbf K}) \oplus \operatorname{H}^2({\mathcal V}, {\mathbf K}) \to
\operatorname{H}^1_{\Tot} ({\mathcal T},{\mathbf K})\\
&\to \operatorname{H}^3(\Tot( D^{\cdot\cdot}))\to \operatorname{H}^3({\mathcal H}, {\mathbf K}) \oplus \operatorname{H}^3({\mathcal V}, {\mathbf K}) \to \cdots \end{aligned}
\end{equation}
\end{proposition}
\begin{proof}
Let $A^{(0)} = 0$, $A^{(1)} = 0$, $A^{(n)} = \Tot(A^{\cdot\cdot})^{n-2}$, for $n\ge 2$. We call $A^{(\cdot)}$ the cochain complex induced by the cochain complex $\Tot(A^{\cdot\cdot})$. Then, we have the short exact sequence
$$
\begin{CD} 0@>>> A^{(n)} @>>> \Tot(D^{\cdot\cdot})^{n}
@>>> \Tot(E^{\cdot\cdot})^{n}@>>> 0\end{CD}$$
that induces a short exact sequence of cochain complexes
$$
\begin{CD} 0@>>> A^{(\cdot)} @>>> \Tot(D^{\cdot\cdot})
@>>> \Tot(E^{\cdot\cdot})@>>> 0\end{CD}.$$
Thus we have a long exact sequence
\begin{align*}
0 \to \operatorname{H}^0(A^{(\cdot)}) \to \operatorname{H}^0&(\Tot(D^{\cdot\cdot})) \to \operatorname{H}^0(\Tot(E^{\cdot\cdot})) \\
&\to \operatorname{H}^1(A^{(\cdot)}) \to \operatorname{H}^1(\Tot(D^{\cdot\cdot})) \to \operatorname{H}^1(\Tot(E^{\cdot\cdot})) \to \operatorname{H}^2(A^{(\cdot)}) \to \cdots.
\end{align*}
As $\operatorname{H}^0(A^{(\cdot)}) = \operatorname{H}^1(A^{(\cdot)}) = 0$ and $ \operatorname{H}^{n}(\Tot(A^{\cdot\cdot})) = \operatorname{H}^{n+2}(A^{(\cdot)})$, we have
\begin{align*}
0 \to \operatorname{H}^1(\Tot(D^{\cdot\cdot})) \to \operatorname{H}^1(\Tot(E^{\cdot\cdot})) \to \operatorname{H}^0(\Tot(A^{\cdot\cdot})) \to \operatorname{H}^2(\Tot( D^{\cdot\cdot})) \to \cdots
\end{align*}
and it is clear that
\begin{equation*}
\operatorname{H}^n(\Tot( E^{\cdot\cdot})) = \operatorname{H}^n({\mathcal H}, {\mathbf K})
\oplus \operatorname{H}^n({\mathcal V}, {\mathbf K}), \quad n> 0.
\end{equation*}
Thus, we get the result.
\end{proof}
\vskip .3cm
\vskip .3cm
\begin{subsection}{The total first cohomology group $\operatorname{H}^1_{\Tot} ({\mathcal T},{\mathbf K})$}
Here we show some properties of \emph{total $1$-cocycles} that are well known in the case of group cohomology. Throughout this subsection $p: {\mathbf K} \to {\mathcal P}$ denotes an abelian group bundle and ${\mathcal T} = ({\mathcal F}; {\mathcal V}, {\mathcal H};{\mathcal P})$ is a double groupoid acting along $p$ on the left. We also denote by ${\mathcal A}^{\cdot \cdot}$ the double complex defined in \eqref{complexdoublecohomology}.
\begin{lemma}\label{1cociclos} Let $(\tau,\sigma) \in A^{0,1} \oplus A^{1,0}$. Then $d^1(\tau,\sigma) = 0$ if and only if the following three conditions hold:
\begin{center}
\begin{equation}\label{taucocyclo}
\tau(F\; G)+\tau(\{ F G \} \; H) =\tau(F\; \{ G H \}) +
\big( b(F) \cdot \tau(G\, H)\big),\;\text{for all}\; F,G,H\in {\mathcal F}\;
\text{such that}\; \quad F\vert G \vert H;
\end{equation}
\begin{equation}\label{sigmacocyclo}
\sigma \left( \begin{matrix} G \\ H \end{matrix} \right) +
\sigma\left(\begin{matrix} F \\ \left\{\begin{matrix} G \vspace{-4pt}\\
H \end{matrix} \right\} \end{matrix} \right)
= \big( l(H)^{-1} \cdot\sigma\left(\begin{matrix}F \\ G\end{matrix}\right) \big) +
\sigma \left( \begin{matrix} \left\{ \begin{matrix}
F \vspace{-4pt} \\ G \end{matrix} \right\} \\ H \end{matrix} \right),
\;\text{for all}\; F,G,H\in {\mathcal F}\;
\text{such that}\; \quad \begin{tabular}{p{0,4cm}}$F$ \\ \hline $G$\\
\hline $H$\end{tabular};
\end{equation}
\end{center}
and
\begin{center}
\begin{equation}\label{sigma-tau-compatibles}
\big(l(H)^{-1}\cdot \tau(F\,G)\big) +
\tau(H\,J) + \sigma\left(\begin{matrix}\left\{FG\right\} \\
\left\{HJ\right\}\end{matrix}\right) = \big(b(H)\cdot
\sigma\left(\begin{matrix}G\\J\end{matrix}\right)\big) +
\sigma\left(\begin{matrix}F\\H\end{matrix}\right) +
\tau\left(\left\{\begin{matrix} F \vspace{-4pt}\\ H
\end{matrix}\right\}\, \left\{\begin{matrix} G\vspace{-4pt} \\
J \end{matrix}\right\} \right),
\end{equation}
\end{center}
for all
\begin{tabular}
{p{0,4cm}|p{0,4cm}} $F$ & $G$ \\ \hline $H$ & $J$
\end{tabular}.
\end{lemma}
\begin{proof}
Consider a pair $(\tau, \sigma) \in A^{0,1} \oplus A^{1,0}$. Since
$d^1 : A^{1,0} \oplus A^{0,1} \to A^{2,0} \oplus A^{1,1} \oplus A^{0,2}$
is defined by $d^1 = (d_v^{1,0}, d_h^{1,0} + d_v^{0,1}, d_h^{0,1})$,
and since
\begin{align*}
d_v^{1,0} := d_V^{2,1} &: D^{2,1} \to D^{3,1},\\
d_v^{0,1} := -d_V^{1,2} &: D^{1,2} \to D^{2,2},\\
d_h^{0,1} := d_H^{1,2} &: D^{1,2} \to D^{1,3},\\
d_h^{1,0} := d_H^{2,1} &: D^{2,1} \to D^{2,2}.
\end{align*}
we have that $d^1(\tau,\sigma) = 0$ if and only if
$d_V^{2,1}(\sigma) = 0$, $d_H^{2,1}(\sigma) - d_V^{1,2}(\tau) = 0$
and $d_H^{1,2}(\tau) =0$. These equations can be seen to be equivalent
to \eqref{sigmacocyclo}, \eqref{taucocyclo} and \eqref{sigma-tau-compatibles}.
\end{proof}
\begin{lemma}
Any total $1$-cocycle is cohomologous to a normalized total $1$-cocycle.
\end{lemma}
\begin{proof}
Let $(\sigma, \tau)$ be a total $1$-cocycle. We define
$\lambda,\;\delta:{\mathcal F} \to {\mathbf K} $ by
$\lambda(B) = \sigma \left( \begin{matrix} Id\; b(B)\\Id\; b(B)\end{matrix} \right)$
and
$\delta ( B ) = \tau ( Id\;l(B) \; Id\;l(B))$.
By means of equations \eqref{sigmacocyclo} and \eqref{taucocyclo},
it is easy to show that the pair $(\sigma', \tau')$, where
$\sigma'= \sigma - d_v^{0,0}\lambda$ and $\tau' = \tau - d_h^{0,0}\delta$,
is a normalized total $1$-cocycle.
\end{proof}
\begin{remark}
In the proof of the above lemma we have actually proved that any
\emph{vertical and horizontal $2$-cocycles} are \emph{equivalent} to
normalized ones.
\end{remark}
\end{subsection}
\begin{subsection}{Extensions of double groupoids by abelian group bundles and the 1-cohomology group}
Recall that $\gamma: {\mathcal F} \to {\mathcal P}$ is the `left-bottom' vertex map, i.e. $\gamma(A) =
lb(A)$.
\begin{proposition}\cite[prop. 1.5]{AN3}\label{construct-double-groupoid}
Let $p: {\mathbf K} \to {\mathcal P}$ be an abelian group bundle, and let ${\mathcal T} = ({\mathcal F}; {\mathcal V}, {\mathcal H};{\mathcal P})$ be a double groupoid acting along $p$ on the left.
Let $(\sigma, \tau)$ be a normalized total $1$-cocycle of ${\mathcal F}$
with values in ${\mathbf K}$. Define on ${\mathbf K} \Times{p}{\gamma} {\mathcal F}$
the following maps
\begin{itemize}
\item Four maps $t,b,l,r$ on ${\mathbf K}\Times{p}{\gamma} {\mathcal F}$
defined by those in ${\mathcal F}$: $t(K, F) = t(F)$ and so on.
\item Two composition laws, vertical and horizontal, in ${\mathbf K}\Times{p}{\gamma} {\mathcal F}$
defined by
\begin{align}\label{prodhorreconstrbis}
\{(K,F)(L,G)\} &= \big(K +(b(F)\cdot L)+\tau(F\, G), \{FG\}\big),
\qquad \text{if }\; F\;\vert\; G, \\
\label{prodverreconstrbis}
\left\{\begin{matrix}(K,F) \\(L,G) \end{matrix}\right\} &=
\left( (l(G)^{-1}\cdot K)+ L+
\sigma \left(\begin{matrix}F \\ G \end{matrix}\right),
\left\{\begin{matrix}F \\ G\end{matrix}\right\}\right), \qquad \text{if }\; \dfrac FG\;.
\end{align}
\item Two identity maps $\id: {\mathcal V}\to{\mathbf K}\Times{p}{\gamma} {\mathcal F}$ and
$\id: {\mathcal H}\to{\mathbf K}\Times{p}{\gamma} {\mathcal F}$, given by
$\id g = (\Theta_{b (g)}, \id g)$ and $\id\; x = (\Theta_{l (x)}, \id\; x)$
if $g\in {\mathcal V}$ and $x\in {\mathcal H}$ respectively.
\item The inverse of $(K, F)$ with respect to the horizontal and
vertical products are given by
\begin{align}\label{inversa-hor}
(K,F)^h &= \left(b(F)^{-1}\cdot\left( -K - \tau(F\;, F^h)\right), F^h\right)
\quad \text{and} \\
\label{inversa-ver} (K,F)^v &= \left(-\left(l(F)\cdot
K\right)-\sigma\left(\begin{matrix}F \\ F^v\end{matrix}\right), F^v\right),
\end{align}
respectively.
\end{itemize}
With these structural maps, the arrangement
$$\begin{matrix}
{\mathbf K}\Times{p}{\gamma} {\mathcal F} &\rightrightarrows &{\mathcal H}
\\\downdownarrows &&\downdownarrows \\ {\mathcal V} &\rightrightarrows &{\mathcal P}
\end{matrix}$$
is a double groupoid which will be denoted by ${\mathbf K}\; \sharp_{\sigma, \tau}\;{\mathcal F}$.
\end{proposition}
\begin{proof} It is easy to see that the explicit hypotheses of the original formulation of this proposition are equivalent to asking for an action of ${\mathcal F}$ along $p$ and that $(\sigma, \tau)$ is a total $1$-cocycle (see Lemma \ref{1cociclos}).
The proof is long but straightforward: one checks each axiom
of the definition of double groupoid; see \cite[prop. 1.5]{AN3}
for further details.
\end{proof}
\begin{remark}
Proposition \ref{construct-double-groupoid} is, in fact, an if and only if result, as one can easily check. That is, ${\mathbf K}\Times{p}{\gamma} {\mathcal F}$ is a double groupoid if and only if $d^1(\sigma, \tau) = 0$.
\end{remark}
\begin{definition} Let $p: {\mathbf K} \to {\mathcal P}$ be an abelian group bundle, and let ${\mathcal T} = ({\mathcal F}; {\mathcal V}, {\mathcal H};{\mathcal P})$ be a double groupoid acting along $p$ on the left.
Let $(\sigma, \tau)$ be a normalized total $1$-cocycle of ${\mathcal F}$ with values in ${\mathbf K}$. The double groupoid structure defined on ${\mathbf K}\Times{p}{\gamma} {\mathcal F}$ as in proposition \ref{construct-double-groupoid} is called the {\em double groupoid extension of ${\mathcal F}$ by $(\sigma, \tau)$} and is denoted ${\mathbf K}\; \sharp_{\sigma, \tau}\;{\mathcal F}$.
\end{definition}
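As an illustration, immediate from \eqref{prodhorreconstrbis} and \eqref{prodverreconstrbis}: the zero cocycle $(\sigma, \tau) = (0, 0)$ produces the \emph{split} extension ${\mathbf K}\;\sharp_{0,0}\;{\mathcal F}$, whose products are simply
\begin{equation*}
\{(K,F)(L,G)\} = \big(K + (b(F)\cdot L),\, \{FG\}\big),
\qquad
\left\{\begin{matrix}(K,F)\\ (L,G)\end{matrix}\right\}
= \left((l(G)^{-1}\cdot K) + L,\, \left\{\begin{matrix}F\\ G\end{matrix}\right\}\right);
\end{equation*}
in this case the map $F \mapsto (\Theta_{\gamma(F)}, F)$ is a morphism of double groupoids splitting the canonical projection onto ${\mathcal F}$.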
Now, we are going to state the main result of
this section.
\begin{theorem}\label{corresp-cocycles-opext}
Let $({\mathcal F}; {\mathcal V}, {\mathcal H}; {\mathcal P})$ be a double groupoid and
let $p: {\mathbf K} \to {\mathcal P}$ be any abelian group bundle.
There is a bijection between $\mathcal{O}pext({\mathcal F},{\mathbf K})$
and the cohomology group $\operatorname{H}^1_{\Tot}({\mathcal F}, {\mathbf K})$.
\end{theorem}
We divide the proof of this theorem into three steps. The first two are the contents of Lemma \ref{eq.ext.cohom.cocycles} and Proposition \ref{any.ext.is.smash}.
\begin{lemma}\label{eq.ext.cohom.cocycles}
Let $({\mathcal F}; {\mathcal V}, {\mathcal H}; {\mathcal P})$ be a double groupoid,
let $p: {\mathbf K} \to {\mathcal P}$ be any abelian group bundle and
let $(\sigma, \tau)$ and $(\sigma', \tau')$ be normalized total $1$-cocycles of ${\mathcal F}$
with values in ${\mathbf K}$. If the double groupoid extensions
${\mathbf K}\; \sharp_{\sigma, \tau}\;{\mathcal F}$ and ${\mathbf K}\; \sharp_{\sigma', \tau'}\;{\mathcal F}$
are equivalent, then $(\sigma, \tau)$ and $(\sigma', \tau')$
are in the same cohomology class.
\end{lemma}
\begin{proof}
Let $\Phi: {\mathbf K}\; \sharp_{\sigma, \tau}\;{\mathcal F} \to {\mathbf K}\; \sharp_{\sigma', \tau'}\;{\mathcal F} $
be an equivalence of extensions of double groupoids
\begin{equation}\label{eq.ext.cocycles}
\xymatrix{
& & {\mathbf K}\; \sharp_{\sigma, \tau}\;{\mathcal F} \ar@{}[ddr] |{\circlearrowleft} \ar@{}[ddl] |{\circlearrowright} \ar[dd]_\Phi
\ar@{>>}[dr]^{\Pi_1}&&\\
1 \ar[r] & {\mathbf K} \ar@{^{(}->}[dr]_\iota \ar@{^{(}->}[ur]^\iota & &
{\mathcal F} \ar[r] & 1\\
& & {\mathbf K}\; \sharp_{\sigma', \tau'}\;{\mathcal F} \ar@{>>}[ur]_{\Pi_2}&&
}
\end{equation}
and let $\lambda: {\mathcal F} \to {\mathbf K}$ be defined by $\lambda(F) = (\rho_1 \circ \Phi)(\Theta, F)$,
where $\rho_1: {\mathbf K}\; \sharp_{\sigma', \tau'}\;{\mathcal F} \to {\mathbf K}$ denote the projection onto ${\mathbf K}$.
The following identities are easily obtained
from the operations on ${\mathbf K}\;\sharp_{\sigma, \tau}\;{\mathcal F}$
and the commutativity of diagram \eqref{eq.ext.cocycles}
\begin{equation}\label{eq.prod.ext}
(K, F) = \left\{\begin{matrix} (\Theta, F) \\ (K, \Theta) \end{matrix}\right\},
\quad
\left\{ \begin{matrix} (\Theta, F) \\ (\Theta, G) \end{matrix} \right\}
=\left( \sigma\left(\begin{matrix} F \\ G \end{matrix}\right),
\left\{ \begin{matrix} F \\ G \end{matrix} \right\} \right)
\quad \text{and} \quad \Phi(K , F) = ((\rho_1 \circ \Phi)(K, F), F).
\end{equation}
From the above equations we obtain that
$$\sigma\left( \begin{matrix} F \\ G \end{matrix} \right)
= (\rho_1 \circ \Phi)(\Theta, G) - (\rho_1 \circ \Phi)\left(\Theta,
\left\{\begin{matrix} F \\ G \end{matrix}\right\}
\right) + l(G)^{-1} \cdot (\rho_1 \circ \Phi)(\Theta, F)
+ \sigma'\left( \begin{matrix} F \\ G \end{matrix} \right)
\quad\text{for any} \quad F, G \in {\mathcal F};
$$
that is $\sigma = \sigma' + d_V^{1,1}\lambda$. In the same way we
show that $\tau = \tau' + d_H^{1,1}\lambda$.
\end{proof}
\begin{proposition}\cite[Prop. 1.9]{AN3}\label{any.ext.is.smash}
Any double groupoid extension of ${\mathcal F}$ by ${\mathbf K}$ is
equivalent to ${\mathbf K} \; \sharp_{\sigma, \tau} {\mathcal F}$ for
some total $1$-cocycle $(\sigma, \tau)$.
\end{proposition}
\begin{proof}
See \textit{loc.\ cit.}\ for further details.
\end{proof}
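Although we refer to \emph{loc.\ cit.}\ for the proof, let us briefly indicate, without details, where the cocycle comes from; what follows is only a sketch. In the model extension ${\mathbf K}\;\sharp_{\sigma, \tau}\;{\mathcal F}$, the canonical section $j(F) = (\Theta, F)$ of the projection satisfies, by \eqref{prodhorreconstrbis} and \eqref{prodverreconstrbis},
\begin{equation*}
\{j(F)\, j(G)\} = \big(\tau(F\, G),\, \{FG\}\big)
\qquad\text{and}\qquad
\left\{\begin{matrix} j(F)\\ j(G)\end{matrix}\right\}
= \left(\sigma\left(\begin{matrix} F\\ G\end{matrix}\right),\, \left\{\begin{matrix} F\\ G\end{matrix}\right\}\right).
\end{equation*}
Conversely, given an arbitrary extension $\Pi: {\mathcal B} \to {\mathcal F}$, a suitably normalized set-theoretic section $j$ of $\Pi$ determines, through the failure of $j$ to preserve the horizontal and vertical products, a pair $(\sigma, \tau)$ with values in $Ker(\Pi) = {\mathbf K}$; this is the mechanism whose topological counterpart (local sections of the frame map) is developed in the second part of the paper.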
Now we are going to provide a proof of Theorem \ref{corresp-cocycles-opext}.
\begin{proof}[Proof of theorem \ref{corresp-cocycles-opext}]
Let $(\sigma, \tau )$ and $(\sigma', \tau')$ be cohomologous
total $1$-cocycles. Then there is a map $\lambda: {\mathcal F} \to {\mathbf K}$
such that $(\sigma', \tau') = (\sigma, \tau) + d^0 \lambda$;
that is, $\sigma' = \sigma + d^{1,1}_V\lambda$ and $\tau'= \tau + d^{1,1}_H\lambda$.
Denote by ${\mathcal B}= {\mathbf K}\; \sharp_{\sigma, \tau}\;{\mathcal F}$ and by
${\mathcal B}' = {\mathbf K}\; \sharp_{\sigma', \tau'}\;{\mathcal F}$ the extensions of ${\mathcal F}$ by
${\mathbf K}$ determined in Proposition \ref{construct-double-groupoid} by
$(\sigma, \tau)$ and $(\sigma', \tau')$, respectively.
Define the map
$$ \Phi: {\mathcal B}' \to {\mathcal B} \quad \text{such that} \quad (K, F) \mapsto (K + \lambda(F), F).$$
We claim that $\Phi$ is an isomorphism of double groupoids.
Indeed, it is clear that it is one to one and onto, so
we only need to show that it preserves both groupoid structures.
For all $(K, F), (L, G) \in {\mathcal B}'$ we have
\begin{align}
\Phi \left(
\begin{matrix}
(K, F) \\
(L, G)
\end{matrix}
\right)
&=\Phi \left((l(G)^{-1} \cdot K)+ L + \sigma'\left(\begin{matrix}F \\ G \end{matrix}\right),
\left\{ \begin{matrix}F \\ G \end{matrix}\right\}\right)\nonumber\\
&=\left( (l(G)^{-1} \cdot K)+ L + \sigma'\left(\begin{matrix}F \\ G \end{matrix}\right) +
\lambda\left( \left\{ \begin{matrix}F \\ G \end{matrix}\right\}\right),
\left\{\begin{matrix}F \\ G \end{matrix}\right\}\right)\nonumber\\
&= \left( (l(G)^{-1} \cdot K) + L + (\sigma + d_V^{1,1}\lambda)
\left( \begin{matrix} F \\ G \end{matrix}\right),
\left\{ \begin{matrix}F \\ G \end{matrix}\right\} \right)\nonumber\\
&= \left( (l(G)^{-1} \cdot K) + L +
\sigma\left( \begin{matrix} F \\ G \end{matrix}\right)
+ \lambda(G) - \lambda \left( \left\{ \begin{matrix} F \\ G \end{matrix}\right\} \right)
+l(G)^{-1}\cdot \lambda(F) + \lambda \left(\left\{ \begin{matrix}
F \\ G \end{matrix}\right\}\right),
\left\{ \begin{matrix}F \\ G \end{matrix}\right\} \right)\nonumber\\
\label{expr.1}
&= \left( (l(G)^{-1} \cdot K) + L +
\sigma\left( \begin{matrix} F \\ G \end{matrix}\right)
+ \lambda(G) +l(G)^{-1}\cdot \lambda(F),
\left\{ \begin{matrix}F \\ G \end{matrix}\right\} \right).
\end{align}
On the other hand,
\begin{align}\label{expr.2}
\left\{\begin{matrix}
\Phi(K, F) \\ \Phi(L, G)
\end{matrix}
\right\}
&= \left\{\begin{matrix}
(K + \lambda(F), F) \\ (L + \lambda(G), G)
\end{matrix}
\right\} = \left( l(G)^{-1} \cdot (K + \lambda(F)) + L + \lambda(G)
+ \sigma\left( \begin{matrix} F \\ G \end{matrix} \right),
\left\{ \begin{matrix} F \\ G \end{matrix} \right\} \right);
\end{align}
and since the action of ${\mathcal V}$ distributes over the sum, the
expressions \eqref{expr.1} and \eqref{expr.2} coincide and
the map $\Phi$ preserves the vertical composition. In the same
way we can show that $\Phi$ preserves horizontal composition,
and thus we have an equivalence of the extensions
${\mathcal B}$ and ${\mathcal B}'$.
The above reasoning permits us to introduce a well-defined
map
$$\Psi: \operatorname{H}^1_{\Tot}({\mathcal F}, {\mathbf K}) \to \mathcal{O}pext({\mathcal F}, {\mathbf K}),
\quad \text{such that}\quad [\sigma, \tau] \mapsto {\mathbf K}\; \sharp_{\sigma, \tau}\;{\mathcal F},$$
where $[\sigma, \tau]$ denotes the cohomology class
of a total $1$-cocycle $(\sigma, \tau)$; we now show that
$\Psi$ is a bijection.
Clearly $\Psi$ is one to one by Lemma \ref{eq.ext.cohom.cocycles}
and it is onto by Proposition \ref{any.ext.is.smash}. This
finishes the proof.
\end{proof}
\end{subsection}
\end{section}
\begin{section}{Bisimplicial spaces and double groupoids}
We are going to introduce some notation to be used
in the rest of the work. Let $\Delta$ be the simplicial category, that is,
the category whose objects are $[n] = \{ 0, 1, 2 ,\cdots, n \}$,
for every $n \in \mathbb{N}$, and whose morphisms
are the order preserving maps.
The following proposition gives us a combinatorial description of the simplicial category.
\begin{proposition}[Prop. VII.5.2 \cite{ml}]
The category $\Delta$, with objects all finite ordinals,
is generated by the arrows $\epsilon_i^n: [n-1] \to [n]$ and
$\eta_i^n: [n+1] \to [n]$, where $\epsilon_i^n$ and
$\eta_i^n$ are the unique increasing map that avoids $i$,
and the unique non-decreasing surjective map
such that $i$ is reached twice ($0 \leq i \leq n$), respectively.
These maps are subject to the following relations
\begin{itemize}
\item $\epsilon_i^{n-1} \epsilon_j^n = \epsilon_{j-1}^{n-1} \epsilon_i^n$
if $i < j$,
\item $\eta_i^{n+1}\eta_j^n = \eta_{j+1}^{n+1} \eta_i^n$
if $i \leq j$,
\item $\epsilon_i^{n+1}\eta_j^n = \eta_{j-1}^{n-1}\epsilon_i^n$ if $i < j$,
\item $\epsilon_i^{n+1} \eta_j^n = \eta_j^{n-1} \epsilon_{i-1}^n$ if
$i > j + 1$,
\item $\epsilon_j^{n+1} \eta_j^n = \epsilon_{j+1}^{n+1} \eta_j^n = Id$.
\end{itemize}
\end{proposition}
\begin{proof}
See \cite[Prop. VII.5.2]{ml}.
\end{proof}
For a complete treatment
of the simplicial category and its properties see \cite{ml},
or \cite{gj} for a deeper one.
In the following diagram we show from left to right all the $\epsilon$ maps,
and from right to left all the $\eta$ maps of the above proposition,
\begin{equation}\label{simplicial-cat-diagram}
\xymatrix
{
[0] \ar@<4pt>[r] \ar@<-4pt>[r] & [1] \ar@<7pt>[r]\ar[r]\ar@<-7pt>[r] \ar[l] &
[2] \ar@<10pt>[r]\ar@<4pt>[r]\ar@<-4pt>[r]\ar@<-10pt>[r] \ar@<4pt>[l] \ar@<-4pt>[l] & [3] \ar[l] \ar@<7pt>[l] \ar@<-7pt>[l]
\cdots.
}
\end{equation}
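As a small illustration in the lowest dimension (a standard computation, included only for convenience): the maps $\epsilon_0^1, \epsilon_1^1 : [0] \to [1]$ and $\eta_0^0: [1] \to [0]$ are given by
\begin{equation*}
\epsilon_0^1(0) = 1, \qquad \epsilon_1^1(0) = 0, \qquad \eta_0^0(0) = \eta_0^0(1) = 0,
\end{equation*}
so applying $\epsilon_0^1$ (or $\epsilon_1^1$) and then $\eta_0^0$ gives $Id_{[0]}$, an instance of the last relation of the proposition with $n = j = 0$ (compositions being read from left to right, as in the relations above).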
\subsection{Simplicial sets}
\begin{definition}
A simplicial set is a functor $X: \Delta^{op} \to \mathcal{S}ets$,
where $\mathcal{S}ets$ denotes the category of sets. In the same way we define \emph{simplicial topological spaces} (or just \emph{simplicial spaces})
and \emph{simplicial manifolds}, just replacing $\mathcal{S}ets$ by $\mathcal{T}op$ or $\mathcal{M}an$,
the categories of \emph{topological spaces} or \emph{smooth manifolds}, respectively.
\end{definition}
\begin{remark}
Given a simplicial set $X$ we will denote it by $X_\bullet = \{ X_n \}_{n \in \mathbb{N}}$.
We also write $\epsilon_i^n$ and $\eta_i^n$ for $X(\epsilon_i^n)$ and $X(\eta_i^n)$,
respectively, if no confusion arises.
\end{remark}
Any topological (Lie) groupoid $\xymatrix{ \mathcal{G} \ar@<-2pt>[r] \ar@<2pt>[r] & {\mathcal P}}$
canonically gives rise to a simplicial space (manifold)
as follows \cite{tu}: Let
$$G_n = \{ (g_1, \ldots, g_n) \in {\mathcal G}^n\; | \; s(g_i) = t(g_{i+1}) \; \forall i \}$$
be the set of all composable $n$-tuples of arrows in ${\mathcal G}$
and define the \emph{face} and \emph{degeneracy maps} as follows
\begin{itemize}
\item $\tilde{{\epsilon}}_0^1(g) = r(g)$ and $\tilde{{\epsilon}}_1^1(g) = s(g)$, for $n = 1$;
\item $\tilde{{\epsilon}}_0^n(g_1, \ldots, g_n) = (g_2, \ldots, g_n)$ for $n >1$;
\item $\tilde{{\epsilon}}_n^n(g_1, \ldots, g_n) = (g_1, \ldots, g_{n-1})$ for $n >1$;
\item $\tilde{{\epsilon}}_i^n(g_1, \ldots, g_n) = (g_1, \ldots, g_ig_{i+1}, \ldots, g_n)$ for $1 \leq i \leq n-1$;
\item $\tilde{\eta}_0^0: {\mathcal G}_0 \to {\mathcal G}_1$ the unit map of the groupoid;
\item $\tilde{\eta}_0^n(g_1,\ldots, g_n) = (s(g_1), g_1, \ldots, g_n)$;
\item $\tilde{\eta}_i^n(g_1,\ldots, g_n) = (g_1, \ldots, g_i, t(g_i), g_{i+1}, \ldots, g_n )$
for $1 \leq i \leq n$.
\end{itemize}
We refer to \emph{loc.\ cit.}\ for another way to see the simplicial structure of ${\mathcal G}_\bullet$.
\begin{remark}
We would like to note here that we are reading the arrows of
a groupoid from left to right.
\end{remark}
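A standard special case, stated here only for orientation: when ${\mathcal P}$ is a single point, that is, when ${\mathcal G}$ is a group $G$, every tuple is composable, so $G_n = G^n$ and the maps above reduce to the familiar ones of the bar construction,
\begin{equation*}
\tilde{\epsilon}_i^{\,n}(g_1, \ldots, g_n) =
\begin{cases}
(g_2, \ldots, g_n) & \text{if } i = 0,\\
(g_1, \ldots, g_i g_{i+1}, \ldots, g_n) & \text{if } 0 < i < n,\\
(g_1, \ldots, g_{n-1}) & \text{if } i = n,
\end{cases}
\end{equation*}
while the degeneracies insert the unit of $G$; the resulting simplicial set is the classical nerve of the group $G$.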
\begin{definition}
Let $X$ and $Y$ be simplicial sets. A map of simplicial sets $f : X \to Y$ is
a natural transformation of contravariant set valued functors.
\end{definition}
We will denote by ${\mathbb S}$ the resulting category of
simplicial sets, that is, ${\mathbb S}$ is the functor category $\mathcal{S}ets^{\Delta^{op}}$.
\subsection{Bisimplicial sets}
\begin{definition}
A bisimplicial set is a simplicial object in ${\mathbb S}$. That is,
a bisimplicial set $X$ is a functor
$$X : \Delta^{op} \times \Delta^{op} \to \mathcal{S}ets,$$
or, equivalently, a functor $X: \Delta^{op} \to \mathbb{S}$.
We will denote a bisimplicial set $X$ by $X_{\bullet \bullet}$.
\end{definition}
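A basic example, independent of double groupoids and recorded only for concreteness: given two simplicial sets $X_\bullet$ and $Y_\bullet$, the assignment
\begin{equation*}
([m], [n]) \mapsto X_m \times Y_n,
\end{equation*}
with the structure maps of $X$ acting on the first factor and those of $Y$ on the second, is a bisimplicial set. The bisimplicial set relevant for this paper is the nerve of a double groupoid, constructed in Subsection \ref{bisimplicial set of a double groupoid} below.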
In the following diagram we show, for the category $\Delta^2$,
a two-dimensional analogue of diagram \eqref{simplicial-cat-diagram}:
\begin{equation}
\xymatrix
{
\vdots & \vdots &\vdots & \\
([2], [0]) \ar@<4pt>[r] \ar@<-4pt>[r] \ar@<10pt>[u]\ar@<4pt>[u]\ar@<-4pt>[u]\ar@<-10pt>[u] \ar@<4pt>[d] \ar@<-4pt>[d] & ([2], [1]) \ar@<7pt>[r]\ar[r]\ar@<-7pt>[r] \ar[l] \ar@<10pt>[u]\ar@<4pt>[u]\ar@<-4pt>[u]\ar@<-10pt>[u] \ar@<4pt>[d] \ar@<-4pt>[d] & ([2], [2]) \ar@<10pt>[u]\ar@<4pt>[u]\ar@<-4pt>[u]\ar@<-10pt>[u] \ar@<4pt>[d] \ar@<-4pt>[d] \ar@<10pt>[r]\ar@<4pt>[r]\ar@<-4pt>[r]\ar@<-10pt>[r] \ar@<4pt>[l] \ar@<-4pt>[l] & \cdots \\
([1], [0]) \ar@<4pt>[r] \ar@<-4pt>[r] \ar@<7pt>[u]\ar[u]\ar@<-7pt>[u] \ar[d] & ([1], [1]) \ar@<7pt>[r]\ar[r]\ar@<-7pt>[r] \ar[l] \ar@<7pt>[u]\ar[u]\ar@<-7pt>[u] \ar[d]& ([1], [2]) \ar@<7pt>[u]\ar[u]\ar@<-7pt>[u] \ar[d] \ar@<10pt>[r]\ar@<4pt>[r]\ar@<-4pt>[r]\ar@<-10pt>[r] \ar@<4pt>[l] \ar@<-4pt>[l] & \cdots \\
([0], [0]) \ar@<4pt>[r] \ar@<-4pt>[r] \ar@<4pt>[u] \ar@<-4pt>[u] & ([0], [1]) \ar@<4pt>[u] \ar@<-4pt>[u] \ar@<7pt>[r]\ar[r]\ar@<-7pt>[r] \ar[l] & ([0], [2]) \ar@<4pt>[u] \ar@<-4pt>[u] \ar@<10pt>[r]\ar@<4pt>[r]\ar@<-4pt>[r]\ar@<-10pt>[r] \ar@<4pt>[l] \ar@<-4pt>[l] & \cdots.
}
\end{equation}
where the depicted maps are as follows:
\begin{itemize}
\item the \emph{vertical face maps} $([m], [n]) \to ([m+1], [n])$ are $\epsilon^{m+1,n}_{i,v} = (\epsilon_i^{m+1}, Id)$,
\item the \emph{vertical degeneracy maps} $([m+1], [n]) \to ([m], [n])$ are $\eta^{m,n}_{i ,v}= (\eta_i^m, Id)$,
\item the \emph{horizontal face maps} $([m], [n]) \to ([m], [n+1])$ are $\epsilon_{i,h}^{m, n+1} = (Id, \epsilon^{n+1}_i)$, and
\item the \emph{horizontal degeneracy maps} $([m], [n+1]) \to ([m], [n])$ are $\eta^{m, n}_{i,h} = (Id, \eta^{n}_i)$.
\end{itemize}
Thus, a bisimplicial set $\dbs{X}$ can be depicted as an array
\begin{equation}
\xymatrix
{
\vdots \ar@<10pt>[d]\ar@<4pt>[d]\ar@<-4pt>[d]\ar@<-10pt>[d] & \vdots \ar@<10pt>[d]\ar@<4pt>[d]\ar@<-4pt>[d]\ar@<-10pt>[d] &\vdots \ar@<10pt>[d]\ar@<4pt>[d]\ar@<-4pt>[d]\ar@<-10pt>[d] & \\
X_{2, 0} \ar[r] \ar@<7pt>[d]\ar[d]\ar@<-7pt>[d] & X_{2, 1} \ar@<4pt>[r] \ar@<-4pt>[r] \ar@<4pt>[l] \ar@<-4pt>[l] \ar@<7pt>[d]\ar[d]\ar@<-7pt>[d] & X_{2, 2} \ar@<7pt>[l]\ar[l]\ar@<-7pt>[l] \ar@<7pt>[d]\ar[d]\ar@<-7pt>[d] & \cdots \ar@<10pt>[l]\ar@<4pt>[l]\ar@<-4pt>[l]\ar@<-10pt>[l] \\
X_{1, 0} \ar[r] \ar@<-4pt>[d] \ar@<4pt>[d] \ar@<4pt>[u] \ar@<-4pt>[u] & X_{1, 1} \ar@<4pt>[r] \ar@<-4pt>[r] \ar@<4pt>[l] \ar@<-4pt>[l] \ar@<-4pt>[d] \ar@<4pt>[d] \ar@<4pt>[u] \ar@<-4pt>[u] & X_{1, 2} \ar@<7pt>[l]\ar[l]\ar@<-7pt>[l] \ar@<-4pt>[d] \ar@<4pt>[d] \ar@<4pt>[u] \ar@<-4pt>[u] & \cdots \ar@<10pt>[l]\ar@<4pt>[l]\ar@<-4pt>[l]\ar@<-10pt>[l] \\
X_{0, 0} \ar[r] \ar[u] & X_{0, 1} \ar@<4pt>[r] \ar@<-4pt>[r] \ar@<4pt>[l] \ar@<-4pt>[l] \ar[u] & X_{0, 2} \ar@<7pt>[l]\ar[l]\ar@<-7pt>[l] \ar[u] & \cdots \ar@<10pt>[l]\ar@<4pt>[l]\ar@<-4pt>[l]\ar@<-10pt>[l] .
}
\end{equation}
The \emph{face and degeneracy maps} are denoted by $\tilde{\epsilon}^{m+1,n}_{i,v}, \tilde{\eta}^{m,n}_{i ,v},
\tilde{\epsilon}_{i,h}^{m, n+1}$ and $\tilde{\eta}^{m, n}_{i,h}$.
\begin{definition}
Let $X$ be a bisimplicial set. The \emph{diagonal simplicial set $d(X)$ associated
to $X$} is the simplicial set defined as $d(X)_n = X_{n,n}$. It
also can be viewed as the composition functor
$$ \Delta^{op} \overset{\Delta}\to \Delta^{op} \times \Delta^{op}
\overset{X} \to \mathcal{S}ets,$$
where $\Delta$ is the diagonal functor.
\end{definition}
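Concretely, and omitting the bidegrees from the notation (this is only the standard description, which uses the fact that horizontal and vertical structure maps commute): the face and degeneracy maps of $d(X)$ are the diagonal composites
\begin{equation*}
\tilde{\epsilon}_{i,v}\,\tilde{\epsilon}_{i,h} = \tilde{\epsilon}_{i,h}\,\tilde{\epsilon}_{i,v} : X_{n,n} \to X_{n-1,n-1},
\qquad
\tilde{\eta}_{i,v}\,\tilde{\eta}_{i,h} = \tilde{\eta}_{i,h}\,\tilde{\eta}_{i,v} : X_{n,n} \to X_{n+1,n+1}.
\end{equation*}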
\subsection{A geometric construction of the bisimplicial set
associated to a double groupoid}
\label{bisimplicial set of a double groupoid}
It is well known that to any category (in particular to any groupoid)
we can associate a simplicial set, called its categorical nerve.
In the same way, with any double category or double groupoid,
we can associate a \emph{bisimplicial set} \cite[Sect. 3.5]{AN2}. In the case
of double groupoids, the existence of inverses for both operations permits us
to construct the \emph{nerve} of the double groupoid in
a \emph{geometrical} way, just from the core action on itself.
We introduce some notation in order to construct the nerve of a double groupoid. Let $({\mathcal F}; {\mathcal V}, {\mathcal H}; {\mathcal P})$ be a double groupoid. If $x,x'\in {\mathcal H}$, then $x\vert x'$ denotes that $r(x) =l(x')$. If $g,g' \in {\mathcal V}$, then $g\vert g'$ denotes that $b(g) = t(g')$. In an analogous way, if $A,A' \in {\mathcal F}$, then $A\vert A'$ denotes that $r(A) = l(A')$ and \,
$\begin{matrix}
A \\ \hline
A'
\end{matrix}$
\, denotes that $b(A) = t(A')$.
In section \ref{discrete_cohomology_section} we built a bigraded family of sets
$\{ \mathcal{F}^{(m,n)} \}_{(m,n) \in \mathbb{N} \times \mathbb{N}}$ which allowed us to define the cohomology of discrete double groupoids. This family of sets can be endowed with the structure of a bisimplicial set by defining the face and degeneracy maps as follows:
\begin{itemize}
\item For any $n \in \mathbb{N}$, let $\widetilde{\epsilon^{1,n}_{0,v}}[F_{1j}] = [b(F_{1j})]$;
\item For any $m >1$ and $n \in \mathbb{N}$, let $\widetilde{\epsilon^{m+1,n}_{0,v}}[F]$ be the array obtained from $F$ deleting the first row;
\item For any $m >1$ and $n \in \mathbb{N}$, let $\widetilde{\epsilon^{m+1,n}_{m+1,v}}[F]$ be the array obtained from $F$ deleting the last row;
\item For any $m >1$, $n \in \mathbb{N}$ and $1 \leq i \leq m$, let $\widetilde{\epsilon^{m+1,n}_{i,v}}[F]$ be the array obtained from $F$ by vertically composing the elements of rows $i$ and $i + 1$;
\item the \emph{vertical degeneracy maps} $([m+1], [n]) \to ([m], [n])$ are $\eta^{m,n}_{i ,v}= (\eta_i^m, Id)$,
\item the \emph{horizontal degeneracy maps} $([m], [n+1]) \to ([m], [n])$ are $\eta^{m, n}_{i,h} = (Id, \eta^{n}_i)$.
\end{itemize}
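As an illustration (a particular case worked out here only as a guide; it is the shape that reappears in the cocycle computations of the last section), consider a one-column array $F$ with three rows $A$, $B$, $C$. Its four vertical faces are obtained by deleting the first row, composing rows one and two, composing rows two and three, and deleting the last row, giving respectively
\begin{equation*}
\left( \begin{matrix} B \\ C \end{matrix} \right), \qquad
\left( \begin{matrix} \left\{ \begin{matrix} A \\ B \end{matrix} \right\} \\ C \end{matrix} \right), \qquad
\left( \begin{matrix} A \\ \left\{ \begin{matrix} B \\ C \end{matrix} \right\} \end{matrix} \right), \qquad
\left( \begin{matrix} A \\ B \end{matrix} \right).
\end{equation*}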
This bisimplicial set can be constructed in a more geometric way as a \emph{homogeneous space} of the core groupoid. In fact,
let ${\mathbf E}({\mathcal F})$ be the
core groupoid of ${\mathcal F}$ and let us denote by ${\mathcal F}^{m \times n}$ the set of $m \times n$ matrices
with entries in ${\mathcal F}$. Let
\begin{equation}\label{pre-double-complex}
{\mathcal F}_{(m,n)} = \{ F = [F_{i,j}]_{0 \leq i \leq m , \; 0 \leq j \leq n} \in {\mathcal F}^{m \times n}\;|\;
l(F_{i,j}) = l(F_{i, j+1}) \;\text{and }\; b(F_{i, j}) = b(F_{i+1, j})\},
\end{equation}
be the set of matrices of boxes in ${\mathcal F}$ such that
the boxes in a fixed row have the same left side and the boxes in
a fixed column have the same bottom side.
Note that all the entries of a matrix in this set
have the same left-bottom corner.
In \cite[Prop. 1.1]{AN3} the authors define the map
$\gamma: {\mathcal F} \to {\mathcal P}$ as the ``left-bottom'' vertex
$\gamma(B) = lb(B)$, and an action of the core groupoid ${\mathbf E}({\mathcal F})$
over the set of boxes ${\mathcal F}$ given by
\begin{equation}\label{actioncore} E \rightharpoondown A : =
\left\{\begin{matrix}\iddv l(A)& A \vspace{-4pt}\\ E &\iddv
b(A)\end{matrix} \right\}, \quad A\in {\mathcal F}, E \in {\mathbf E}.
\end{equation}
We can extend these definitions to the sets of matrices of boxes ${\mathcal F}_{(m,n)}$:
let us define, for every $m,n \in \mathbb{N}$, the maps
$\gamma_{(m,n)}: {\mathcal F}_{(m,n)} \to {\mathcal P}$ by
$\gamma_{(m,n)}([F_{i,j}]) = bl(F_{m,1})$ and
let $\rightharpoondown : {\mathbf E}({\mathcal F}) \pfib{e_{{\mathbf E}}}{\gamma_{(m,n)}}{\mathcal F}_{(m,n)}$ be
the action given by
\begin{center}
$E \rightharpoondown [F_{i,j}] = [E \rightharpoondown F_{i,j}]$ for any $E \in {\mathbf E}({\mathcal F})$
and $[F_{i,j}] \in {\mathcal F}_{(m,n)}$ such that $\gamma([F_{i,j}]) = e_{{\mathbf E}({\mathcal F})}(E)$.
\end{center}
Denote by $\widetilde{{\mathcal F}}^{m,n}$ the set of orbits ${\mathcal F}_{m,n} / {\mathbf E}({\mathcal F})$ and
by $\langle F_{i,j} \rangle$ the equivalence class
in $\widetilde{{\mathcal F}}^{m,n}$ of a matrix of boxes $[F_{i,j}]$.
From now on we will denote the maps $\gamma_{(m,n)}$
by the same letter $\gamma$.
\begin{proposition}\label{double_skeleton_core_groupoid}
Let $({\mathcal F}; {\mathcal V}, {\mathcal H}; {\mathcal P})$ be a double groupoid, let ${\mathbf E}({\mathcal F})$ be the
core groupoid of ${\mathcal F}$ and let $\widetilde{{\mathcal F}}^{m,n}$ be as defined above. Then
the map $\Phi: {\mathcal F}^{(m,n)} \to \widetilde{{\mathcal F}}^{m,n}$ given by $[F_{i,j}] \mapsto \langle \overline{F}_{k,l} \rangle$ ,
where
$$ \overline{F}_{k,l} =
\begin{cases}
\left\{
\begin{matrix}
F_{k, 1} & F_{k,2}& \cdots & F_{k,l}\\
\vdots & \vdots & & \vdots \\
F_{m,1} & F_{m,2} & \cdots & F_{m,l}\\
\end{matrix}
\right\} & \text{if} \;\; l\neq 0 \;\text{and}\; k \neq m,\\
\quad\quad\quad \iddv & \text{if}\;\; k = m \;\text{and}\; l \neq 0,\\
\quad\quad\quad \iddv & \text{if}\;\; l = 0 \;\text{and}\; k \neq m,\\
\quad\quad\quad \Theta & \text{if}\;\; k = m\;\text{and} \;l = 0,
\end{cases}
$$
is bijective, with inverse $\Psi : \widetilde{{\mathcal F}}^{m,n} \to {\mathcal F}^{(m,n)}$, given by
$$
\Psi(\langle F_{i,j} \rangle) =
\left[
\begin{matrix}
\left\{
\begin{matrix}
F_{0,0}^h & F_{0,1}\\
F_{1,0}^{hv} & F_{1,1}^v
\end{matrix}
\right\}
& \cdots &
\left\{
\begin{matrix}
F_{0,n-1}^h & F_{0,n}\\
F_{1,n-1}^{hv} & F_{1,n}^v
\end{matrix}
\right\} \\
\vdots & & \vdots \\
\left\{
\begin{matrix}
F_{m-1,0}^h & F_{m-1,1}\\
F_{m,0}^{hv} & F_{m,1}^v
\end{matrix}
\right\}
& \cdots &
\left\{
\begin{matrix}
F_{m-1,n-1}^h & F_{m-1,n}\\
F_{m,n-1}^{hv} & F_{m,n}^v
\end{matrix}
\right\}
\end{matrix}
\right]
$$
\end{proposition}
\begin{proof}
First we need to check that the map $\Psi$ is well defined. In fact,
if $\langle F_{i,j} \rangle = \langle L_{i,j} \rangle $ in $\widetilde{{\mathcal F}}^{m,n}$,
then there is $E \in {\mathbf E}({\mathcal F})$ such that
$L_{i,j} = E \rightharpoondown F_{i,j}$, for every $1\leq i\leq m$ and $1 \leq j \leq n$.
Then for $i=0, \ldots, m-1$ and $j= 0, \ldots, n-1$
we have
\begin{align*}
\left\{
\begin{matrix}
L_{i,j}^h & L_{i,j+1} \\
L_{i+1,j}^{hv} & L_{i+1,j+1}^v
\end{matrix}
\right\}
& = \left\{
\begin{matrix}
(E \rightharpoondown F_{i,j} )^{h} & (E \rightharpoondown F_{i,j+1}) \\
(E \rightharpoondown F_{i+1,j})^{hv} & (E \rightharpoondown F_{i+1,j+1})^{v}
\end{matrix}
\right\} \\
&= \left\{
\begin{matrix}
\left\{
\begin{matrix}
\iddv & F_{ij} \\
E & \iddv
\end{matrix}
\right\}^h
&
\left\{
\begin{matrix}
\iddv & F_{i,j+1}\\
E & \iddv
\end{matrix}
\right\}
\\
\left\{
\begin{matrix}
\iddv & F_{i+1,j}\\
E & \iddv
\end{matrix}
\right\}^{hv} &
\left\{
\begin{matrix}
\iddv & F_{i+1,j+1}\\
E & \iddv
\end{matrix}
\right\}^{v}
\end{matrix}
\right\} \\
& =
\left\{
\begin{matrix}
\left\{
\begin{matrix}
F_{ij}^h & \iddv^h \\
\iddv^h & E^h
\end{matrix}
\right\}
&
\left\{
\begin{matrix}
\iddv & F_{i,j+1} \\
E & \iddv
\end{matrix}
\right\} \\
\left\{
\begin{matrix}
\iddv^{hv} & E^{hv} \\
F_{i+1,j}^{hv} & \iddv^{hv}
\end{matrix}
\right\}
&
\left\{
\begin{matrix}
E^v & \iddv^v \\
\iddv^v & F_{i+1,j+1}^v
\end{matrix}
\right\}
\end{matrix}
\right\}\\
&=
\left\{
\begin{matrix}
F_{ij}^h &
\left\{
\begin{matrix}
\iddv^h & \iddv
\end{matrix}
\right\} &
F_{i,j+1} \\
\left\{
\begin{matrix}
\iddv^h \\
\iddv^{hv}
\end{matrix}
\right\} &
\left\{
\begin{matrix}
E^h & E \\
E^{hv} & E^v
\end{matrix}
\right\}
&
\left\{
\begin{matrix}
\iddv \\
\iddv^v
\end{matrix}
\right\} \\
F_{i+1,j}^{hv} &
\left\{
\begin{matrix}
\iddv^{hv} & \iddv^v
\end{matrix}
\right\}
&
F_{i+1,j+1}^v
\end{matrix}
\right\}\\
&=
\left\{
\begin{matrix}
F_{ij}^h & F_{i,j+1} \\
F_{i+1,j}^{hv} & F_{i+1,j+1}^v
\end{matrix}
\right\}.
\end{align*}
The above shows that $\Psi$ is well defined.
A direct computation shows that $\Phi$ and $\Psi$ are inverse to each other.
\end{proof}
\begin{remark}
With the identification introduced in the above result,
it is clear that the construction carried out in \ref{bsp-set-db-gpd}
associates a bisimplicial set to every double groupoid. In fact, if we have a morphism $f: ([m], [n]) \to ([k], [l])$ in $\Delta^2$ we can associate to it the map
$$\tilde{f}: {\mathcal F}^{(k,l)} \to {\mathcal F}^{(m,n)} \quad \text{given by} \quad
\tilde{f}(\langle A_{i,j}\rangle_{0 \leq i \leq k , \; 0 \leq j \leq l} ) = \langle A_{f(i,j)} \rangle_{1 \leq i \leq m , \; 1 \leq j \leq n}.$$
It is not difficult to show that these maps define a contravariant functor
from $\Delta^2$ to $\mathcal{S}ets$.
\end{remark}
\end{section}
\section{Bisimplicial cohomology of topological double groupoids}
The classification results obtained here (theorem \ref{corresp-cocycles-opext} and proposition \ref{any.ext.is.smash}) were proved for discrete double groupoids. The next natural step is to study to what extent such a decomposition of double groupoids can be carried out in the topological and/or differentiable setting.
A first attempt is to study continuous (differentiable) cohomology, requiring that all the maps that define the double groupoid cohomology be continuous (smooth). However, in the discrete case, the proof of proposition \ref{any.ext.is.smash} \cite[Prop. 1.9]{AN3}, which shows that any double groupoid extension can be obtained as the \textit{smash product} of a slim double groupoid by an abelian group bundle, depends strongly on the existence of a section of the map that sends a double groupoid onto its frame. In the continuous or differentiable setting we can no longer guarantee the existence of a global section of such a map.
If the \textit{frame} of a topological (Lie) double groupoid is a quotient space (respectively, a smooth manifold) and the \textit{frame map} is an open map (respectively, a surjective submersion), then we can ensure the existence of local sections, and we can try to localize the decomposition of the double groupoid to the open sets where such local sections exist.
In this section we develop a \v{C}ech double groupoid cohomology that permits us to classify extensions of topological double groupoids by topological abelian group bundles, as in the discrete case.
\subsection{\v{C}ech double groupoid}
\begin{definition}\label{cech_groupoid}
Let $\xymatrix{ \mathcal{G} \ar@<-2pt>[r] \ar@<2pt>[r] & {\mathcal P}}$
be a topological groupoid. Let $\{ U_i \}_{i \in I}$
be an open cover of ${\mathcal P}$, and define the \textit{cover groupoid},
\textit{\v{C}ech groupoid} or \textit{localization groupoid} as
\begin{equation}
\mathcal{G}[U] = \{ (i, g, j) \in I \times \mathcal{G} \times I\;:\;
g \in r^{-1}(U_i) \cap s^{-1}(U_j) \},
\end{equation}
with unit space $\mathcal{P}[U]=\{ (i, x) \in I \times \mathcal{P}\;:\; x \in U_i \}$,
source and target maps $s(i, g, j) = (i, s(g))$ and $r(i, g, j) = (j, r(g))$
and product $(i, g, j)(j, h, k)=(i, gh, k)$, when $r(g) = s(h)$.
\end{definition}
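Two extreme cases, which are not discussed in the text but follow immediately from the definition, may help to fix ideas.
\begin{exa}
(i) If the cover consists of the single open set $U_0 = {\mathcal P}$, then $\mathcal{G}[U]$ is canonically isomorphic to $\mathcal{G}$.
(ii) If $\mathcal{G} = {\mathcal P}$ is a topological space, regarded as a groupoid with only identity arrows, then $\mathcal{G}[U]$ is the usual \v{C}ech groupoid of the cover, with arrow space $\bigsqcup_{i,j \in I} U_i \cap U_j$ and unit space $\bigsqcup_{i \in I} U_i$.
\end{exa}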
A relevant fact of the \textit{\v{C}ech groupoid}
is that the canonical map $\mathcal{G}[U] \to \mathcal{G}$
is a \emph{Morita equivalence} of groupoids.
In the case of double groupoids, given an open covering of
the total base, we can construct a \emph{localization}
double groupoid in a similar fashion, although the notion of \emph{Morita equivalence} is more subtle and we don't study it here.
\begin{definition}
Let $({\mathcal B}; {\mathcal V}, {\mathcal H}; {\mathcal P})$ be a topological double groupoid and
let $\mathcal{U} = \{ U_i \}_{i \in I}$ be an open
cover of ${\mathcal P}$. We define the \textit{double cover groupoid of $({\mathcal B}, \mathcal{U})$},
the \textit{\v{C}ech double groupoid associated to $({\mathcal B}, \mathcal{U})$} or the
\textit{localization double groupoid associated to $({\mathcal B}, \mathcal{U})$} as follows.
\begin{itemize}
\item Let ${\mathcal B}[\mathcal{U}]$ be the set
\begin{equation}
\left\{ \left( \begin{matrix}
i & & j\\
& B & \\
l & & k
\end{matrix} \right) \;:\; B \in {\mathcal B}, \;
t(B) \in l^{-1}(U_i) \cap r^{-1}(U_j)\;
\text{and}\;
b(B) \in l^{-1}(U_l) \cap r^{-1}(U_k)
\right\}
\end{equation}
\item Let $\tau,\;\beta,\;\lambda$ and $\rho$
be the maps defined by
\begin{align}
&\tau,\beta: {\mathcal B}[\mathcal{U}] \to {\mathcal H}[U]\quad \text{such that}\quad
\tau\left( \begin{matrix}
i & & j\\
& B &\\
l & & k
\end{matrix} \right) = (i,t(B),j)
\quad \text{and}\quad
\beta\left( \begin{matrix}
i & & j\\
& B &\\
l & & k
\end{matrix} \right)
= (l, b(B), k)
\\
&\lambda,\rho: {\mathcal B}[\mathcal{U}] \to {\mathcal V}[U]\quad \text{such that}\quad
\lambda\left( \begin{matrix}
i & & j\\
& B &\\
l & & k
\end{matrix} \right) = (i,l(B),l)
\quad \text{and}\quad
\rho \left( \begin{matrix}
i & & j\\
& B &\\
l & & k
\end{matrix} \right)
= (j, r(B), k)
\end{align}
\item We define horizontal and vertical composition laws
by the following rules:
$$
\begin{matrix}
\left( \begin{matrix}
i & & j\\
& A &\\
l & & k
\end{matrix} \right)
\\
\left( \begin{matrix}
i' & & j'\\
& B &\\
l' & & k'
\end{matrix} \right)
\end{matrix}
=
\left( \begin{matrix}
i & & j\\
&
\left\{\begin{matrix}
A\\
B
\end{matrix}\right\}
&\\
l' & & k'
\end{matrix} \right)
\quad
\text{if}
\quad
(l, b(A), k) = (i', t(B), j')
$$
and
$$
\left( \begin{matrix}
i & & j\\
& A &\\
l & & k
\end{matrix} \right)
\left( \begin{matrix}
i' & & j'\\
& B &\\
l' & & k'
\end{matrix} \right)
=
\left( \begin{matrix}
i & & j\\
& \{ A \; B \} &\\
l & & k
\end{matrix} \right)
\quad
\text{if}
\quad
(j, r(A), k) = (i', l(B), l')
$$
\item The identity maps of the horizontal and vertical
groupoid structure of ${\mathcal B}[U]$ are defined as follows
\begin{align}
&Id_h: {\mathcal V}[U] \to {\mathcal B}[U] \quad \text{such that} \quad Id_h(j, f, k)
=
\left( \begin{matrix}
j & & j\\
& Id_h f &\\
k & & k
\end{matrix} \right)
\\
&Id_v:{\mathcal H}[U] \to {\mathcal B}[U] \quad \text{such that} \quad
Id_v (i, x, j)
=
\left( \begin{matrix}
i & & j\\
& Id_v x &\\
i & & j
\end{matrix} \right)
\end{align}
\item The horizontal and vertical inverses of a box in ${\mathcal B}[U]$
are defined respectively by
\begin{align}
&(\;-\;)^h:{\mathcal B}[U] \to {\mathcal B}[U] \quad
\text{such that} \quad
\left( \begin{matrix}
i & & j\\
& B & \\
l & & k
\end{matrix} \right)^h
=
\left( \begin{matrix}
j & & i\\
& B^h & \\
k & & l
\end{matrix} \right)\\
&(\;-\;)^v:{\mathcal B}[U] \to {\mathcal B}[U]
\quad
\text{such that}
\quad
\left( \begin{matrix}
i & & j\\
& B & \\
l & & k
\end{matrix} \right)^v
=
\left( \begin{matrix}
l & & k\\
& B^v & \\
i & & j
\end{matrix} \right)
\end{align}
\end{itemize}
With all the above maps and operations it is easy to check
that
$$
\xymatrix{
{\mathcal B}[U] \ar@<-2pt>[r]_{\lambda} \ar@<2pt>[r]^{\rho}
\ar@<-2pt>[d]_{\tau} \ar@<2pt>[d]^{\beta}
&
{\mathcal V}[U] \ar@<-2pt>[d] \ar@<2pt>[d]
\\
{\mathcal H}[U] \ar@<-2pt>[r] \ar@<2pt>[r]&
{\mathcal P}[U]
}
$$
is a double groupoid.
\end{definition}
\begin{remark}
Roughly speaking, if we denote by
$${\mathcal B}_{(i, j, k, l)} = t^{-1}(l^{-1}(U_i) \cap r^{-1}(U_j))\;
\bigcap \; b^{-1}(r^{-1}(U_k) \cap l^{-1}(U_l)),$$
then the \v{C}ech double groupoid associated with $({\mathcal B}, \mathcal{U})$
is the \textit{disjoint union double groupoid}
\begin{equation}
\xymatrix{
\bigsqcup_{(i, j, k, l) \in I^4}{\mathcal B}_{(i, j, k, l)}
\ar@<-2pt>[r] \ar@<2pt>[r] \ar@<-2pt>[d] \ar@<2pt>[d] &
\bigsqcup_{(i, j) \in I^2} {\mathcal V}_{(i, j)} \ar@<-2pt>[d] \ar@<2pt>[d]\\
\bigsqcup_{(l,k) \in I^2} {\mathcal H}_{(l,k)} \ar@<-2pt>[r]\ar@<2pt>[r] & \bigsqcup_{i \in I} {\mathcal P}_{i}.
}
\end{equation}
\end{remark}
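As for ordinary groupoids, the simplest instance is worth making explicit; it is immediate from the definition and is recorded here only for orientation.
\begin{exa}
If $\mathcal{U}$ consists of the single open set ${\mathcal P}$, then all the index sets above are singletons, every disjoint union reduces to a single term, and ${\mathcal B}[\mathcal{U}]$ is canonically isomorphic to the double groupoid $({\mathcal B}; {\mathcal V}, {\mathcal H}; {\mathcal P})$ we started with.
\end{exa}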
\subsection{Sheaves and coverings on bisimplicial spaces}
\begin{definition}\cite[6.4.2]{D}\label{bisim-sheaf}
A sheaf $\ubs{\mathcal{S}}$ on a bisimplicial space $\dbs{M}$
is a collection $\{ \mathcal{S}^{m,n} \}_{(m,n) \in \mathbb{N}^2}$ such that
\begin{enumerate}
\item $\mathcal{S}^{m,n}$ is a sheaf on $M_{m,n}\;$;
\item for all morphisms $f \in Hom_{\Delta^2}((k,l), (m,n))$ we
are given an $\tilde{f}$-morphism
$$\tilde{f}^{\ast}:\mathcal{S}^{k,l} \to \mathcal{S}^{m,n},$$
\end{enumerate}
such that $\widetilde{f \circ g}^{\ast} = \tilde{f}^{\ast} \circ \tilde{g}^{\ast}$ whenever the composition is defined.
\end{definition}
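A basic example, which is not taken from \cite{D} but is immediate from the definition, is the following.
\begin{exa}
Let $A$ be a topological abelian group and, for every $(m,n)$, let $\mathcal{S}^{m,n}$ be the sheaf of continuous $A$-valued functions on $M_{m,n}$. For $f \in Hom_{\Delta^2}((k,l), (m,n))$, with induced continuous map $\tilde{f}: M_{m,n} \to M_{k,l}$, precomposition with $\tilde{f}$ sends a section $s$ of $\mathcal{S}^{k,l}$ over an open set $U$ to the section $s \circ \tilde{f}$ of $\mathcal{S}^{m,n}$ over $\tilde{f}^{-1}(U)$. These maps satisfy the compatibility condition above, so $\{ \mathcal{S}^{m,n} \}_{(m,n) \in \mathbb{N}^2}$ is a bisimplicial sheaf on $\dbs{M}$.
\end{exa}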
The next definition introduces the open coverings that
behave well for the study of bisimplicial sheaves.
\begin{definition}\label{open_cover}
An open cover of a bisimplicial space $X$ is a family
$\mathcal{U}_{\bullet \bullet} = \{ \mathcal{U}_{(m,n)} \}_{m, n \in \mathbb{N}}$
such that $\mathcal{U}_{(m, n)} = \{ U^{m,n}_i \}_{i \in I_{(m,n)}}$
is an open cover of the space $X_{(m,n)}$.
The cover is said to be bisimplicial if $I_{\bullet \bullet} = \{ I_{(m,n)} \}$
is a bisimplicial set such that for all $f \in Hom_{\Delta^2}((k,l),(m,n))$
and for all $i \in I_{(m,n)}$ we have $\tilde{f}(U^{(m,n)}_i) \subseteq U^{(k,l)}_{\tilde{f}(i)}$.
\end{definition}
It is obvious that a randomly chosen open cover of $\dbs{M}$ could be far from being bisimplicial. Nevertheless, as the following lemma shows, given an open cover of a bisimplicial space we can form a bisimplicial cover $bs(\mathcal{U}_{\bullet\bullet})$ that
\emph{refines} the original one.
\begin{lemma}\label{bscover}
To any open cover $\mathcal{U}^{\bullet \bullet}$ of a bisimplicial space,
there is a \emph{naturally} associated bisimplicial open cover
$bs(\mathcal{U}^{\bullet \bullet})$.
\end{lemma}
\begin{proof}
Let us consider $\mathbb{N}^2$ ordered by the lexicographic order.
Let ${\mathcal P}_{m,n}^{k,l}: = \hom_{\Delta'^2}(([k],[l]),([m],[n]))$ and put
${\mathcal P}_{m,n} = \bigcup_{(k,l) \leq (m,n)} {\mathcal P}_{m,n}^{k,l}$. As in \cite[Sec. 4.1]{tu},
the set ${\mathcal P}_{m,n}$ can be identified with the set of pairs of
nonempty subsets of $[m]$ and $[n]$, respectively; hence the cardinality of
${\mathcal P}_{m,n}$ is $(2^{m+1}-1)(2^{n+1}-1)$.
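For instance (a quick count that may help to keep track of the indices), for $(m,n) = (1,1)$ this gives
\begin{equation*}
|{\mathcal P}_{1,1}| = (2^{2}-1)(2^{2}-1) = 9,
\end{equation*}
the nine elements corresponding to the pairs $(S,T)$ of nonempty subsets $S, T \subseteq [1] = \{0,1\}$; these are exactly the entries of the $3 \times 3$ arrays displayed in the example below.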
Let
$$\Lambda_{m,n} =
\left\{\lambda: {\mathcal P}_{m,n} \to \bigcup_{k,l} I_{k, l} \; : \;
\lambda({\mathcal P}_{m,n}^{k,l}) \subseteq I_{k,l} \; ,\; \forall \; (k,l) \leq (m,n) \right\},
$$
and for $\lambda \in \Lambda_{m,n}$ define
$$V_{\lambda}^{m,n} = \bigcap_{(k,l) \leq (m,n)} \bigcap_{f \in {\mathcal P}_{(m,n)}^{(k,l)}}
\tilde{f}^{-1}(U_{\lambda(f)}^{(k,l)}).$$
It is clear that $(V_{\lambda}^{m,n})_{\lambda \in \Lambda_{m,n}}$
is an open cover of $M_{m,n}$. In fact, let $x \in M_{m,n}$ and let
$f \in {\mathcal P}_{m,n}$, say $f \in {\mathcal P}_{m,n}^{k,l}$. Since $\tilde{f}: M_{m,n} \to M_{k,l}$ and $\{ U_{i}^{k,l} \}_{i \in I_{k,l}}$
is a covering, $\tilde{f}(x) \in U_i^{k,l}$ for some (not necessarily unique) $i \in I_{k,l}$.
If we call this $i$ $\lambda(f)$, then we obtain a function
$\lambda \in \Lambda_{m,n}$ and, by the very definition,
$x \in V_{\lambda}^{m,n}$.
Now we are going to show that the family $\{ V_{\lambda}^{m,n} \}_{\lambda \in \Lambda_{m,n}}$
is an open bisimplicial covering of $M_{\bullet \bullet}$.
To do this we need to define a bisimplicial structure on
$\Lambda_{\bullet \bullet}$.
Let $g \in Hom_{\Delta'^{2}}(([m],[n]),([m'],[n']))$; then we have
a continuous map $\tilde{g}: M_{m',n'} \to M_{m,n}$ and a set map
$\tilde{g}:I_{m',n'} \to I_{m,n}$ (the use of the same notation
will be clear from the context).
Let $\tilde{g}: \Lambda_{m', n'} \to \Lambda_{m,n}$ be defined as follows:
given $\lambda' \in \Lambda_{m', n'}$,
take $\tilde{g}(\lambda') \in \Lambda_{m,n}$ to be the map defined by
$\tilde{g}(\lambda')(f) = \lambda'(g \circ f)$.
\noindent Let $x \in V_{\lambda'}^{m',n'}$, $(k,l) \leq (m,n)$
and $f \in {\mathcal P}_{m,n}^{k,l}$.
Since $g \circ f : ([k],[l]) \to ([m'],[n'])$
then $\widetilde{(g \circ f)}(x) \in U_{\lambda'(g \circ f)}^{k,l}$,
that is, $\tilde{f}(\tilde{g}(x)) \in U_{\tilde{g}(\lambda')(f)}^{k,l}$.
This means that $\tilde{g}(x) \in \tilde{f}^{-1}(U^{k,l}_{\tilde{g}(\lambda')(f)})$
and therefore, since $(k,l)$ and $f$ were arbitrary, $\tilde{g}(x) \in V_{\tilde{g}(\lambda')}^{m,n}$.
With the above we have proved that $\tilde{g}(V^{m',n'}_{\lambda'}) \subseteq V_{\tilde{g}(\lambda')}^{m,n}$,
so the cover $bs(\mathcal{U}^{\bullet \bullet})$ is indeed bisimplicial.
\end{proof}
\begin{obs}{\bf Notation}\\
Let $\dbs{M}$ be a bisimplicial space and let $\ubs{\mathcal{U}}$
be an open cover of $\dbs{M}$. In the setting of the proof
of lemma \ref{bscover}, given a pair of non-negative integers $(m,n)$ we will denote the element $([m],[n]) \in \Delta^2$ simply by $(m,n)$. If $\lambda \in \Lambda_{m,n}$ then $\lambda$ satisfies the following conditions:
\begin{enumerate}
\item $\lambda$ is a map from ${\mathcal P}_{m,n}$ to $\bigcup_{k,l} I_{k,l}\quad$,
\item for every pair $(k,l)$ of non-negative integers with $(k,l) \leq (m,n)$, we have $\lambda({\mathcal P}_{m,n}^{k,l}) \subseteq I_{k,l}\quad$.
\end{enumerate}
By definition of the category $\Delta'$, if $f \in {\mathcal P}_{m,n}^{k,l}$ then $f = (f_1, f_2)$, where $f_1: [k] \to [m]$ and $f_2: [l] \to [n]$ are strictly increasing. It follows that they are one to one and hence $f_{1}([k])$ and $f_{2}([l])$ are subsets of $[m]$ and $[n]$, respectively, of cardinality $k+1$ and $l+1$, respectively.
The above reasoning permits us to identify any map $f \in {\mathcal P}_{m,n}$ with a pair of nonempty subsets of $[m]$ and $[n]$, and, therefore, any $\lambda \in \Lambda_{m,n}$ with a matrix array
$[\lambda_{S,T}]$ of size $(2^{m+1}-1) \times (2^{n+1}-1)$, with $S$ and $T$ varying over ${\mathcal P}([m])-\{\emptyset\}$ and ${\mathcal P}([n])-\{\emptyset\}$, respectively, where $\lambda_{S,T}$ stands for $\lambda(S,T)$. Here we order the pairs of subsets $(S, T)$ by a lexicographic-like order in the following way:
\begin{equation}\label{lexy_order}
(S_1, T_1) \preccurlyeq (S_2, T_2)\quad \text{if and only if}\quad
\begin{cases}
|S_1| < |S_2|, & \text{or} \\
|S_1| = |S_2| \quad \text{and} \quad S_1 \prec_L S_2, & \text{or} \\
S_1 = S_2 \quad \text{and} \quad |T_1| < |T_2|, & \text{or} \\
S_1 = S_2, \quad |T_1|=|T_2| \quad \text{and} \quad T_1 \preccurlyeq_L T_2, &
\end{cases}
\end{equation}
where $\preccurlyeq_L$ (and its strict version $\prec_L$) stands for the lexicographic order obtained when the elements of $S$ and $T$ are listed in increasing order.
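To illustrate the order just defined (a small worked check, not taken from the text), note that
\begin{equation*}
(\{0\}, \{0\}) \;\preccurlyeq\; (\{0\}, \{1\}) \;\preccurlyeq\; (\{0\}, \{0,1\}) \;\preccurlyeq\; (\{0,1\}, \{0\}):
\end{equation*}
the first two comparisons are decided by the second components (equal first components, then $|T|$ and $\preccurlyeq_L$), and the last one by the cardinalities of the first components. This is the order in which the rows and columns of the matrix arrays below are listed.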
\end{obs}
\begin{exa}
If $\lambda \in \Lambda_{1,1}$ then $\lambda = [\lambda(S, T)]_{3,3}$
and can be displayed as the matrix array
$$
\left [
\begin{matrix}
\lambda_{0,0}&\lambda_{0,1}&\lambda_{0,01}\\
\lambda_{1,0}&\lambda_{1,1}&\lambda_{1, 01}\\
\lambda_{01,0}&\lambda_{01,1}&\lambda_{01,01}\\
\end{matrix}
\right ],
$$
where, for any pair of nonempty subsets $i \subseteq [1]$ and $j \subseteq [1]$, we have written $\lambda_{i,j} = \lambda(i,j)$.
\end{exa}
The family of all covers of a bisimplicial space $\dbs{M}$
is endowed with a partial preorder.
\begin{definition}\label{order_of_coverings}
Let $\dbs{M}$ be a bisimplicial topological space and suppose that $\dbs{{\mathcal U}}$ and $\dbs{{\mathcal V}}$ are open covers of $\dbs{M}$, with ${\mathcal U}_{(m,n)} = \{ U^{m,n}_i \}_{i \in I_{m,n}}$ and ${\mathcal V}_{(m,n)} = \{ V^{m,n}_j\}_{j \in J_{m,n}} $. We say that $\dbs{{\mathcal V}}$ is finer than $\dbs{{\mathcal U}}$ if there is a family of maps $\theta_{m,n} : J_{m,n} \to I_{m,n}$ such that $V^{m,n}_j \subseteq U^{m,n}_{\theta_{m,n}(j)}$,
for all $j \in J_{m,n}$.
If the covers are bisimplicial, we require the maps $\theta_{\bullet \bullet}$ to be bisimplicial.
\end{definition}
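In this terminology, the bisimplicial cover produced in lemma \ref{bscover} is finer than the cover one starts with. This is implicit in the discussion preceding that lemma; to make it explicit, note that the identity morphism of $([m],[n])$ belongs to ${\mathcal P}_{m,n}$, so one may take
\begin{equation*}
\theta_{m,n}(\lambda) \;=\; \lambda\big(\mathrm{id}_{([m],[n])}\big), \qquad \lambda \in \Lambda_{m,n},
\end{equation*}
and then $V^{m,n}_{\lambda} \subseteq U^{m,n}_{\theta_{m,n}(\lambda)}$ by the very definition of $V^{m,n}_{\lambda}$.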
\subsection{\v{C}ech cohomology of double groupoids}
Let $\dbs{M}$ be a bisimplicial space, $\dbs{{\mathcal U}}$ be a
bisimplicial open cover of $\dbs{M}$ and $\ubs{{\mathcal F}}$ be
a bisimplicial abelian sheaf. Let us define
\begin{equation}
C^{m,n}_{bs}(\dbs{{\mathcal U}}, \ubs{{\mathcal F}}) = \prod_{i \in I_{m,n}}{{\mathcal F}^{m,n}(U^{m,n}_i)},
\end{equation}
and horizontal and vertical differentials given by
\begin{align}
d_h^{m,n}: C^{m,n}_{bs}(\dbs{{\mathcal U}}, \ubs{{\mathcal F}}) \to C^{m,n+1}_{bs}(\dbs{{\mathcal U}}, \ubs{{\mathcal F}})
\quad \text{with} \quad (d_h^{m,n}c)_i =
\sum_{k = 0}^{n+1} (-1)^k \widetilde{\epsilon^{m,n+1}_{k,h}}^\ast c_{\widetilde{\epsilon^{m,n+1}_{k,h}}(i)},\\
d_v^{m,n}: C^{m,n}_{bs}(\dbs{{\mathcal U}}, \ubs{{\mathcal F}}) \to C^{m+1,n}_{bs}(\dbs{{\mathcal U}}, \ubs{{\mathcal F}})
\quad \text{with} \quad (d_v^{m,n}c)_i =
\sum_{k = 0}^{m+1} (-1)^k \widetilde{\epsilon^{m+1,n}_{k,v}}^\ast c_{\widetilde{\epsilon^{m+1,n}_{k,v}}(i)}.
\end{align}
The $\ast$ as a superscript is explained in definition \ref{bisim-sheaf} and
$\widetilde{\epsilon^{m,n+1}_{k,h}}^\ast c_{\widetilde{\epsilon^{m,n+1}_{k,h}}(i)}$ is the \textit{restriction}
of the section $c_{\widetilde{\epsilon^{m,n+1}_{k,h}}(i)} \in {\mathcal F}^{m,n}(U^{m,n}_{\widetilde{\epsilon^{m,n+1}_{k,h}}(i)})$ to a section in ${\mathcal F}^{m, n+1}(U^{m,n+1}_i)$, likewise
$\widetilde{\epsilon^{m+1,n}_{k,v}}^\ast c_{\widetilde{\epsilon^{m+1,n}_{k,v}}(i)}$ is the restriction
of the section $c_{\widetilde{\epsilon^{m+1,n}_{k,v}}(i)} \in {\mathcal F}^{m,n}(U^{m,n}_{\widetilde{\epsilon^{m+1,n}_{k,v}}(i)})$ to a section in ${\mathcal F}^{m+1, n}(U^{m+1,n}_i)$.
It is clear that $d_h^{m,n+1} \circ d_h^{m,n} = 0$ and $d_v^{m+1,n} \circ d_v^{m,n} = 0$, and that the horizontal and vertical differentials commute; hence, with the usual sign trick (replacing $d_v^{m,n}$ by $(-1)^n d_v^{m,n}$), they anticommute.
From this remark the collection $\{C_{bs}^{m,n}\}_{m,n \in \mathbb{N}}$ is a well defined double complex and we may consider the associated total complex $\Tot (\dbs{\mathcal{U}}, \ubs{\mathcal{F}})$, which allows us to define the total cohomology groups
$\operatorname{H}_{\Tot}^{m}(\dbs{\mathcal{U}}, \ubs{\mathcal{F}})$.
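Explicitly, and with the convention used later in the proof of theorem \ref{class_ext_simplicial_version} (where the double complex is taken to start in bidegree $(1,1)$), the total complex can be written as
\begin{equation*}
\Tot^{k}(\dbs{\mathcal{U}}, \ubs{\mathcal{F}}) \;=\; \bigoplus_{\substack{m+n = k+1 \\ m,n \geq 1}} C^{m,n}_{bs}(\dbs{\mathcal{U}}, \ubs{\mathcal{F}}),
\qquad
d_{\Tot}\big|_{C^{m,n}_{bs}} \;=\; d_h^{m,n} + (-1)^{n}\, d_v^{m,n},
\end{equation*}
where the sign $(-1)^n$ is one possible choice of the sign trick mentioned above; we record this formula only as a guide for the computations in that proof.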
\begin{definition}\label{bisimplicial_cohomology_groups}
Let $\dbs{M}$ be a bisimplicial space, $\dbs{{\mathcal U}}$ be a
bisimplicial open cover of $\dbs{M}$ and $\ubs{{\mathcal F}}$ be
a bisimplicial abelian sheaf. The $n$-th \emph{bisimplicial cohomology group}
of $\dbs{M}$ with coefficients in $\ubs{{\mathcal F}}$ is defined as the direct limit
\begin{equation}
\check{\operatorname{H}}^n(\dbs{M}; \ubs{{\mathcal F}}) : = \underset{\longrightarrow}{\lim} \operatorname{H}^n_{\Tot} (\dbs{\mathcal{U}}; \ubs{{\mathcal F}}),
\end{equation}
where $\dbs{\mathcal{U}}$ runs over all open covers of $\dbs{M}$ whose $N$-skeleton
admits an $N$-truncated simplicial structure for some $N \geq n+1$.
\end{definition}
The most important bisimplicial sheaf for our purposes
is the one constructed from a double groupoid acting along a map
into the total base of the double groupoid.
\begin{definition}
Let ${\mathcal T} = ({\mathcal B}; {\mathcal V}, {\mathcal H}; {\mathcal P} )$ be a topological double groupoid.
Then a ${\mathcal T}$-module is a bundle of topological abelian groups
$p: {\mathbf K} \to {\mathcal P}$ such that
\begin{enumerate}
\item ${\mathbf K}$ is endowed with a left ${\mathcal T}$-action (see \ref{double groupoid left action}),
\item ${\mathcal V}$ and ${\mathcal H}$ act on ${\mathbf K}$ by group bundle automorphisms.
\end{enumerate}
\end{definition}
\begin{remark}\label{associated_simplicial_to_action}
Let ${\mathcal T} = ({\mathcal F}; {\mathcal V}, {\mathcal H}; {\mathcal P} )$ be a topological double groupoid,
and let $\gamma: {\mathbf K} \to {\mathcal P}$ be a ${\mathcal T}$-module. Let us denote
by $p_{m,n}: {\mathcal F}^{m,n} \to {\mathcal P}$ the map $(F_{ij}) \mapsto p_{m,n}(F_{ij}) = bl(F_{m,1})$
and let ${\mathcal F}_{m,n} = {\mathbf K} \pfib{\gamma}{p_{m,n}} {\mathcal F}^{m,n}$.
This construction generates a new bisimplicial set;
indeed, to any $f: ([m], [n]) \to ([k], [l])$ in $\Delta^2$
we associate a map $\tilde{f}: {\mathcal F}_{k,l} \to {\mathcal F}_{m,n}$ given by
\begin{equation*}
\tilde{f}(K,\langle F_{i,j}\rangle_{0 \leq i \leq k , \; 0 \leq j \leq l} ) =
(K, \langle F_{f(i,j)} \rangle_{1 \leq i \leq m , \; 1 \leq j \leq n}).
\end{equation*}
It is not difficult to show that these maps define a contravariant functor
from $\Delta^2$ to $\mathcal{S}ets$.
\end{remark}
\begin{obs}\label{associated_sheaf}
In the discrete case we can recover the double groupoid cohomology, defined in \ref{discrete_double_groupoid_cohomology}, from the bisimplicial cohomology \ref{bisimplicial_cohomology_groups} just defined. In fact, let us consider a topological double groupoid ${\mathcal T}= ({\mathcal B}; {\mathcal V}, {\mathcal H}; {\mathcal P})$ and suppose that $\ubs{{\mathcal F}}$ is the bisimplicial sheaf associated to a ${\mathcal T}$-module.
From \ref{associated_simplicial_to_action}, we can construct
a bisimplicial set $\{ {\mathbf K} \pfibrado{p}{\gamma} {\mathcal F}^{m,n} \}$ and then, the projection map
\begin{equation}
\Pi^{m,n}:{\mathbf K} \pfibrado{p}{\gamma} {\mathcal F}^{m,n} \to {\mathcal F}^{m,n},
\end{equation}
is a bisimplicial map. If we denote by ${\mathcal A}^{m,n}$ the sheaf of germs of local continuous sections of $\Pi^{m,n}$, then $\ubs{{\mathcal A}} = \{{\mathcal A}^{m,n}\}_{m,n}$ becomes a bisimplicial sheaf and any section of $\Pi^{m,n}$ defined over $U \subseteq {\mathcal F}^{m,n}$, an open set, can be identified with a continuous map
$\varphi:U \to {\mathbf K}$ such that $p(\varphi(\langle F_{i,j} \rangle)) = \gamma(\langle F_{i,j} \rangle)$
for any $\langle F_{i,j} \rangle \in U$. Under this bisimplicial structure, the maps $\epsilon_h^{m,n+1}:{\mathcal A}^{m,n} \to {\mathcal A}^{m,n+1}$ and $\epsilon_v^{m+1,n}:{\mathcal A}^{m,n} \to {\mathcal A}^{m+1,n}$ can be described as follows.
Given a local section $\varphi:U \to {\mathbf K}$ of $\Pi^{m,n}$
we can write $\varphi$ as
\begin{equation}
\varphi([F_{i,j}]) =l(F_{m1})^{-1} \cdots l(F_{21})^{-1} l(F_{11})^{-1} \varphi_v([F_{i,j}])
\end{equation}
or as
\begin{equation}
\varphi([F_{i,j}]) =b(F_{m1}) b(F_{m2}) \cdots b(F_{mn}) \varphi_h([F_{i,j}])
\end{equation}
for unique $\varphi_v([F_{ij}]) \in {\mathbf K}_{tl(F_{m1})}$ and $\varphi_h([F_{ij}]) \in {\mathbf K}_{rb(F_{mn})}$. Then $\epsilon_v^{m+1,n}\varphi$ is a germ of a local section of $\Pi ^{m+1,n}$ such that, for any $[F_{ij}] \in {\mathcal F}^{m+1,n}$ we have
\begin{equation}
(\epsilon_{0,v}^{m,n}\varphi)[F_{i,j}]=
l(F_{m+1,1})^{-1}l(F_{m1})^{-1} \cdots l(F_{31})^{-1} l(F_{21})^{-1} \varphi_v\begin{pmatrix}
F_{21} & \dots & F_{2n} \\
\dots & \dots & \dots \\
F_{m1} & \dots & F_{mn} \\
F_{m+1,1} & \dots & F_{m+1,n}
\end{pmatrix},
\end{equation}
if $0 < k < m+1$ then
\begin{equation}
(\epsilon_{k,v}^{m,n}\varphi)[F_{i,j}]=
l(F_{m+1,1})^{-1}l(F_{m1})^{-1} \cdots l(F_{21})^{-1} l(F_{11})^{-1} \varphi_v \begin{pmatrix}
F_{11} & \dots & F_{1n} \\
\dots & \dots & \dots \\
\left\{\begin{matrix}F_{k1} \\ F_{k+1,1}\end{matrix}\right\}
& \dots & \left\{\begin{matrix}F_{kn} \\ F_{k+1,n}\end{matrix}\right\} \\
\dots & \dots & \dots \\
F_{m+1,1} & \dots & F_{m+1,n}
\end{pmatrix}
\end{equation}
and
\begin{equation}
(\epsilon_{m+1,v}^{m,n}\varphi)[F_{i,j}]=
l(F_{m+1,1})^{-1}l(F_{m1})^{-1} \cdots l(F_{21})^{-1} l(F_{11})^{-1} \varphi_v\begin{pmatrix}
F_{11} & \dots & F_{1n} \\
\dots & \dots & \dots \\
F_{m1} & \dots & F_{mn} \end{pmatrix};
\end{equation}
in the same way, for any $[F_{ij}] \in {\mathcal F}^{m,n+1}$, the sections corresponding to the horizontal maps
$\epsilon_{k,h}^{m,n}$ are given by
\begin{equation}
(\epsilon_{0,h}^{m,n}\varphi)[F_{i,j}]=
b(F_{m1})b(F_{m2}) \cdots b(F_{mn})b(F_{m,n+1}) \varphi_h\begin{pmatrix}
F_{12} & \dots & F_{1,n+1} \\
\dots & \dots & \dots \\
F_{m2} & \dots & F_{m,n+1}
\end{pmatrix},
\end{equation}
if $0 < k < n+1$ then
\begin{equation}
(\epsilon_{k,h}^{m,n}\varphi)[F_{i,j}]=
b(F_{m1})b(F_{m2}) \cdots b(F_{mn})b(F_{m,n+1}) \varphi_h \begin{pmatrix}
F_{11} & \dots & \left\{F_{1k} F_{1,k+1} \right\}& \dots & F_{1,n+1} \\
\dots & \dots& \dots& \dots & \dots \\
F_{m,1} & \dots& \left\{F_{mk}F_{m,k+1} \right\}& \dots & F_{m,n+1}
\end{pmatrix},
\end{equation}
and
\begin{equation}
(\epsilon_{n+1,h}^{m,n}\varphi)[F_{i,j}]=
b(F_{m1})b(F_{m2}) \cdots b(F_{mn}) \varphi_h\begin{pmatrix}
F_{11} & \dots & F_{1n} \\
\dots & \dots & \dots \\
F_{m1} & \dots & F_{mn} \end{pmatrix}.
\end{equation}
If we use the above expressions, together with the formulas for the horizontal and vertical co\-boun\-da\-ry maps, and if we consider the largest open bisimplicial covering
$\{ \mathcal{U}^{m,n}_{F}\}_{F \in {\mathcal F}^{m,n}}$ of $ {\mathcal F}^{m,n}$, where $ \mathcal{U}^{m,n}_{F}=\{F\}$ for every $F \in {\mathcal F}^{m,n}$, then we can easily see that the \v{C}ech cohomology groups coincide with the cohomology groups of the double groupoid cohomology introduced in \ref{discrete_double_groupoid_cohomology}.
\end{obs}
\subsection{Low dimensional cohomology}
\begin{definition}\label{extensions}
Let $({\mathcal B}; {\mathcal V}, {\mathcal H}; {\mathcal P})$ be a topological double groupoid and ${\mathbf K} \to {\mathcal P}$ an abelian group bundle. We define $Ext({\mathcal B}, {\mathbf K})$ to be the set
\begin{equation}\label{extension_form}
Ext({\mathcal B}, {\mathbf K}) = \underset{\mathcal{U}} {\underset{\longrightarrow} \lim } \; \Opext({\mathcal B}[\mathcal{U}], {\mathbf K}[U])
\end{equation}
where $\mathcal{U} = \{ \mathcal{U}_i \}_{i \in I}$ runs over the open
covers of ${\mathcal P}$ and $({\mathcal B}[\mathcal{U}]; {\mathcal V}[\mathcal{U}], {\mathcal H}[\mathcal{U}]; {\mathcal P}[\mathcal{U}])$
is the \textit{\v{C}ech double groupoid associated to $({\mathcal B}, \mathcal{U})$}.
\end{definition}
\begin{theorem}\label{class_ext_simplicial_version}
Let $({\mathcal F}; {\mathcal V}, {\mathcal H}; {\mathcal P})$ be a topological double groupoid and $p: {\mathbf K} \to {\mathcal P}$
be an abelian group bundle, and let $\ubs{\mathcal{A}}$ be the bisimplicial sheaf associated to the action of ${\mathcal F}$ on ${\mathbf K}$.
\begin{itemize}
\item For each open cover $\dbs{\mathcal{U}}$ of $\ubs{{\mathcal F}}$, there is a canonical isomorphism
\begin{equation}\label{extCech}
\Opext_{{\mathcal U}} ({\mathcal F}[{\mathcal U}_{00}], {\mathbf K}[{\mathcal U}_{00}]) \cong \operatorname{H}^{1}_{\Tot}(\dbs{{\mathcal U}};\ubs{{\mathcal A}}),
\end{equation}
where $\Opext_{{\mathcal U}} ({\mathcal F}[{\mathcal U}_{00}], {\mathbf K}[{\mathcal U}_{00}])$ denotes the subgroup of elements of $\Opext ({\mathcal F}[{\mathcal U}_{00}], {\mathbf K}[{\mathcal U}_{00}])$
consisting of extensions $1 \rightarrow {\mathbf K}[{\mathcal U}_{00}] \overset{\iota}\hookrightarrow {\mathcal B} \overset{\Pi}\twoheadrightarrow {\mathcal F}[{\mathcal U}_{00}] \to 1$
such that $\Pi$ admits a continuous lifting over each open set $U_i^{11} \subseteq {\mathcal F}$ ($i \in I_{11}$).
\item The isomorphisms \ref{extCech} induce an isomorphism
\begin{equation}
\operatorname{Ext}({\mathcal F}, {\mathbf K}) \cong \check{\operatorname{H}}^{1}_{\Tot}(\dbs{{\mathcal U}}; \ubs{{\mathcal A}})
\end{equation}
\end{itemize}
\end{theorem}
\begin{proof}
Since we need a detailed study of simplicial covers and an explicit description of the two-cocycles, the proof is divided into several stages.
{\bf Step 1.} \emph{Description of total one cocycles.}\\
If $\dbs{\mathcal{U}}$ is an open cover of $\dbs{{\mathcal F}}$, we know by definition that the double complex which gives rise to the \v{C}ech cohomology of the double groupoid is
\begin{equation}\label{complex_simplicial_double_cohomology}
\xymatrix{
&\vdots&\\
&{\mathcal C}^{3,1}(\dbs{bs({\mathcal U})},\ubs{{\mathcal A}}) \ar[u]^{d_v^{3,1}} \ar[r]_{d_h^{3,1}}& {\mathcal C}^{3,2}(\dbs{bs({\mathcal U})},\ubs{{\mathcal A}}) \\
&{\mathcal C}^{2,1}(\dbs{bs({\mathcal U})},\ubs{{\mathcal A}}) \ar[u]^{d_v^{2,1}} \ar[r]_{d_h^{2,1}} & {\mathcal C}^{2,2}(\dbs{bs({\mathcal U})},\ubs{{\mathcal A}}) \ar[r]_{d_h^{2,2}} \ar[u]^{d_v^{2,2}}& {\mathcal C}^{2,3}(\dbs{bs({\mathcal U})},\ubs{{\mathcal A}})\\
&{\mathcal C}^{1,1}(\dbs{bs({\mathcal U})},\ubs{{\mathcal A}}) \ar[u]^{d_v^{1,1}}\ar[r]_{d_h^{1,1}} & {\mathcal C}^{1,2}(\dbs{bs({\mathcal U})},\ubs{{\mathcal A}}) \ar[u]^{d_v^{1,2}}
\ar[r]_{d_h^{1,2}} & {\mathcal C}^{1,3}(\dbs{bs({\mathcal U})},\ubs{{\mathcal A}}) \ar[r]_{\quad \quad\quad d_h^{1,3}}\ar[u]^{d_v^{1,3}}& \cdots
}
\end{equation}
Hence, since the open cover $bs(\dbs{{\mathcal U}})$ is the bisimplicial refinement of $\dbs{{\mathcal U}}$ defined in \ref{bscover}, the first terms of the total chain complex are
\begin{align}
0 \to \Tot^1(bs(\dbs{{\mathcal U}}), \ubs{{\mathcal A}}) \to \Tot^2(bs(\dbs{{\mathcal U}}), \ubs{{\mathcal A}})\to \Tot^3(bs(\dbs{{\mathcal U}}), \ubs{{\mathcal A}}) \to \ldots
\end{align}
where
\begin{align}
&\Tot^1(bs(\dbs{{\mathcal U}}), \ubs{{\mathcal A}})=\prod\limits_{\lambda \in \Lambda_{1,1}} {\mathcal A}^{1,1}(U^{1,1}_{\lambda}),\\
&\Tot^2(bs(\dbs{{\mathcal U}}), \ubs{{\mathcal A}})= \prod\limits_{\lambda \in \Lambda_{2,1}} {\mathcal A}^{2,1}(U^{2,1}_{\lambda}) \oplus \prod\limits_{\lambda \in \Lambda_{1,2}} {\mathcal A}^{1,2}(U^{1,2}_{\lambda}),\\
&\Tot^3(bs(\dbs{{\mathcal U}}), \ubs{{\mathcal A}})= \prod\limits_{\lambda \in \Lambda_{3,1}} {\mathcal A}^{3,1}(U^{3,1}_{\lambda}) \oplus \prod\limits_{\lambda \in \Lambda_{2,2}} {\mathcal A}^{2,2}(U^{2,2}_{\lambda}) \oplus \prod\limits_{\lambda \in \Lambda_{1,3}} {\mathcal A}^{1,3}(U^{1,3}_{\lambda}).
\end{align}
A two cocycle in $Z^2(\dbs{bs({\mathcal U})},\ubs{{\mathcal A}})$ is a pair
$(\sigma, \tau)$ where $\sigma$ and $\tau$ are families of the form $\sigma=(\sigma_\lambda)_{\lambda \in \Lambda_{2,1}}$ and $\tau = (\tau_{\lambda})_{\lambda \in \Lambda_{1,2}}$ such that $d_{\Tot}^2(\sigma, \tau) = 0$. This equation is equivalent to
\begin{align*}
d_v^{2,1}(\sigma) &= 0,\\
d_h^{2,1}(\sigma) + d_v^{1,2}(\tau) &= 0,\\
d_h^{1,2}(\tau) &= 0;
\end{align*}
which amounts to the following three equations:
\begin{equation}\label{two_cocycle_1}
\sum_{k=0}^3 (-1)^k \widetilde{\epsilon_{k,v}^{3,1}}^{*}(\sigma_{\widetilde{\epsilon_{k,v}^{3,1}}(\lambda)}) = 0, \quad\text{for any} \quad \lambda \in \Lambda_{3,1};
\end{equation}
\begin{equation}\label{two_cocycle_2}
\sum_{k=0}^{2}(-1)^k\widetilde{\epsilon_{k,h}^{2,2}}^{*} (\sigma_{\widetilde{\epsilon^{2,2}_{k,h}}(\lambda)}) + \sum_{k=0}^{2}(-1)^k\widetilde{\epsilon_{k,v}^{2,2}}^{*}(\tau_{\widetilde{\epsilon_{k,v}^{2,2}}(\lambda)}) =0, \quad \text{for any} \quad \lambda \in \Lambda_{2,2}, \quad \text{and}
\end{equation}
\begin{equation}\label{two_cocycle_3}
\sum_{k=0}^{3}(-1)^{k}\widetilde{\epsilon_{k,h}^{1,3}}^{*}(\tau_{\widetilde{\epsilon_{k,h}^{1,3}}(\lambda)}) = 0, \quad \text{for any} \quad \lambda \in \Lambda_{1,3}.
\end{equation}
In order to obtain more concrete information about the two-cocycles, we will now analyze each of these equations.
Equation \ref{two_cocycle_1} is valid on the open set
$$U_{\lambda}^{3,1}=\bigcap_{(k,l)\leq (3,1)} \bigcap_{f \in {\mathcal P}_{3,1}^{k,l}} \tilde{f}^{-1}(U_{\lambda(f)}^{k,l}),$$
of ${\mathcal F}^{(3,1)}$, and if we apply $\widetilde{\eta^{3,1}_{0,v}}^{\ast}$ to both sides of it, we obtain
\begin{equation}
\sigma_{\lambda_{[3]\setminus 0,[1]}}= \widetilde{\eta^{3,1}_{0,v}\epsilon^{3,1}_{1,v}}^{*}(\sigma_{\lambda_{[3]\setminus 1,[1]}}) -
\widetilde{\eta^{3,1}_{0,v}\epsilon^{3,1}_{2,v}}^{*}(\sigma_{\lambda_{[3]\setminus 2,[1]}}) +
\widetilde{\eta^{3,1}_{0,v}\epsilon^{3,1}_{3,v}}^{*}(\sigma_{\lambda_{[3]\setminus 3,[1]}}).
\end{equation}
If we write down $\lambda_{[3]\setminus 0,[1]}, \lambda_{[3]\setminus 1,[1]}, \lambda_{[3]\setminus 2,[1]}$ and $\lambda_{[3]\setminus 3,[1]}$ explicitly, it follows from the above equation that the section $\sigma$ is independent of the last row of the index $\lambda$. Then, since $\ubs{{\mathcal A}}$ is a family of sheaves, there is a section
$$
\sigma_{\lambda_{S, T}} \in
{\mathcal A}^{2,1}\left(\bigcap_{(k,l)\leq (2,1)} \bigcap_{f \in {\mathcal P}_{3,1}^{k,l}} \tilde{f}^{-1}(U_{\lambda(f)}^{k,l})\right),
$$
where $S \subseteq [2], T \subseteq [1]$ with $|S| \leq 2$, and such that $\sigma_{\lambda}$ is the restriction to $U^{3,1}_\lambda$ of
$\sigma_{\lambda_{S, T}}$.
Now, given a tuple $\left( \begin{matrix} A \\ B \\ C \end{matrix} \right) \in \bigcap_{(k,l)\leq (2,1)} \bigcap_{f \in {\mathcal P}_{3,1}^{k,l}} \tilde{f}^{-1}(U_{\lambda(f)}^{k,l})$, and according to \ref{associated_sheaf}, we have
\begin{equation*}
\widetilde{\epsilon^{3,1}_{0,v}}^{*}(\sigma_{\widetilde{\epsilon^{3,1}_{0,v}}(\lambda)})\left( \begin{matrix} A \\ B \\ C \end{matrix} \right) =\sigma_{\lambda_{[3]\setminus 0,[1]}}\left( \begin{matrix} A \\ B \end{matrix} \right), \quad
\widetilde{\epsilon^{3,1}_{1,v}}^{*}(\sigma_{\widetilde{\epsilon^{3,1}_{1,v}}(\lambda)})\left( \begin{matrix} A \\ B \\ C \end{matrix} \right) = \sigma_{\lambda_{[3]\setminus 1,[1]}}\left( \begin{matrix} \left\{ \begin{matrix} A \\ B \end{matrix} \right\} \\ C \end{matrix} \right),
\end{equation*}
\begin{equation*}
\widetilde{\epsilon^{3,1}_{2,v}}^{*}(\sigma_{\widetilde{\epsilon^{3,1}_{2,v}}(\lambda)})\left( \begin{matrix} A \\ B \\ C \end{matrix} \right) = \sigma_{\lambda_{[3]\setminus 2,[1]}}\left( \begin{matrix} A \\ \left\{ \begin{matrix} B \\ C \end{matrix} \right\} \end{matrix} \right)\quad \text{and} \quad
\widetilde{\epsilon^{3,1}_{3,v}}^{*}(\sigma_{\widetilde{\epsilon^{3,1}_{3,v}}(\lambda)})\left( \begin{matrix} A \\ B \\ C \end{matrix} \right) = l(C)^{-1} \cdot \sigma_{\lambda_{[3]\setminus 3,[1]}}\left( \begin{matrix} B \\ C \end{matrix} \right),
\end{equation*}
and the equation \ref{two_cocycle_1} can be rewritten as
\begin{equation}\label{re_two_cocycle_1}
\sigma_{\lambda_{[3]\setminus 0,[1]}}\left( \begin{matrix} A \\ B \end{matrix} \right) - \sigma_{\lambda_{[3]\setminus 1,[1]}}\left( \begin{matrix} \left\{ \begin{matrix} A \\ B \end{matrix} \right\} \\ C \end{matrix} \right) + \sigma_{\lambda_{[3]\setminus 2,[1]}}\left( \begin{matrix} A \\ \left\{ \begin{matrix} B \\ C \end{matrix} \right\} \end{matrix} \right)- l(C)^{-1} \cdot \sigma_{\lambda_{[3]\setminus 3,[1]}}\left( \begin{matrix} B \\ C \end{matrix} \right) =0.
\end{equation}
In the same way we can deduce similar expressions for equations \eqref{two_cocycle_2} and \eqref{two_cocycle_3}.
\vspace{0.5 cm}
{\bf Step 2.} \textit{Passing to a coarser covering of the double groupoid.}\\
Let us consider an open cover $\dbs{{\mathcal W}}$ of $\ubs{{\mathcal F}[{\mathcal U}]}$ defined in the following way:
\begin{itemize}
\item The indexing family $J_{m,n}$ is defined by
$J_{0,0} = \{ \ast \}$\;,\;
$J_{1,0} = I_{0,0}^2 \times I_{1,0}$\;,\;
$J_{0,1}=I_{0,0}^2 \times I_{0,1}$\;,\;
$J_{1,1} = I_{0,0}^2 \times I_{0,0}^2 \times I_{1,1}$\;, \; $J_{2,0} = I_{0,0}^3 \times I_{2,0}$\;, \;$J_{0,2}=I_{0,0}^3 \times I_{0,2}$\;,\; and $J_{m,n}$ is any indexing set in the other cases.
\item The collection ${\mathcal W}_{m,n}=\{ W^{m,n}_j \}_{j \in J_{m,n}}$ is defined by
\begin{align*}
&W^{0,0}\quad \text{consists of only one open set,} \quad \coprod_{i \in I_{0,0}} U^{0,0}_i \quad \text{(disjoint union);}\\
&W^{1,0}_{ijk} =
\left\{
\left(
\begin{matrix}
i\\
g\\
j
\end{matrix}
\right)
\mid
g \in U^{10}_{k},\; t(g) \in U^{00}_{i} \;,\; b(g) \in U^{00}_{j}
\right\} \; \text{for all $i,j \in I_{00}$ and $k \in I_{10}$;}\\
&W^{01}_{ijk} =
\left\{
\left(
\begin{matrix}
i & x & j
\end{matrix}
\right)
\mid
x \in U^{01}_{k},\; l(x) \in U^{00}_{i} \;,\; r(x) \in U^{00}_{j}
\right\} \; \text{for all $i,j \in I_{00}$ and $k \in I_{01}$;}\\
&W_{i_{11}i_{12}i_{21}i_{22}j}^{1,1} =
\left\{
\left(
\begin{matrix}
i_{11} & & i_{12} \\
& B & \\
i_{21} & & i_{22}
\end{matrix}
\right)
\mid
B \in U^{1,1}_{j},\; tl(B) \in U^{0,0}_{i_{11}},\; tr(B) \in U^{0,0}_{i_{12}},\; bl(B) \in U^{0,0}_{i_{21}},\; br(B) \in U^{0,0}_{i_{22}}
\right\}
\end{align*}
for all $i_{11},i_{12},i_{21},i_{22} \in I_{00}$ and $j \in I_{11}$; and let ${\mathcal W}_{m,n}$ be any open covering indexed by $J_{m,n}$, for any other pair $(m,n) \in \mathbb{N}^2$.
\end{itemize}
We will calculate the first total cohomology group of the pullback sheaf $\ubs{\mathcal{S}}$ of $\ubs{{\mathcal A}}$
along the natural projection map $\ubs{P}: \ubs{{\mathcal F}[{\mathcal U}]} \to \ubs{{\mathcal F}}$.
More exactly, we are going to show that $\operatorname{H}_{\Tot}^1(\dbs{{\mathcal W}},\ubs{\mathcal{S}})$ is isomorphic
to $\operatorname{H}_{\Tot}^1(\dbs{{\mathcal U}}, \ubs{{\mathcal A}})$.
Let us denote by $\Gamma=\{\Gamma_{m,n}\}$ the indexing family obtained by the process of the proof of lemma \ref{bscover}, applied to the cover ${\mathcal W}$ indexed by the family $J$. Here, every $\gamma \in \Gamma_{2,1}$ is a map $\gamma: {\mathcal P}_{2,1} \to \bigcup_{k,l} J_{k,l}$
such that $\gamma({\mathcal P}_{2,1}^{k,l}) \subseteq J_{k,l}$ for every $(k,l) \leq (2,1)$,
and it can be represented as a matrix array of size $7 \times 3$, where the rows are indexed by non empty subsets of $[2]$ and the columns are indexed by non empty subsets of $[1]$. Moreover, by definition of $J$, the value $\gamma(S,T) = \ast$ if $S = 0,1, 2$ and $T = 0, 1$.
That is, the block
\begin{equation*}
\left[
\begin{matrix}
\gamma_{0,0} & \gamma_{0,1} \\
\gamma_{1,0} & \gamma_{1,1} \\
\gamma_{2,0} & \gamma_{2,1} \\
\end{matrix}
\right]
=
\left[
\begin{matrix}
\ast & \ast \\
\ast & \ast \\
\ast & \ast
\end{matrix}
\right].
\end{equation*}
In the same way, every $\gamma \in \Gamma_{1,2}$ can be represented by a matrix array $\gamma=[\gamma(S,T)]$, of size $3 \times 7$, where the rows are indexed by non empty subsets $S \subseteq [1]$ and the columns are indexed by non empty subsets $T \subseteq [2]$. As for $\Gamma_{2,1}$, we have that
\begin{equation*}
\left[
\begin{matrix}
\gamma_{0,0} & \gamma_{0,1} & \gamma_{0,2} \\
\gamma_{1,0} & \gamma_{1,1} & \gamma_{1,2} \\
\end{matrix}
\right]
=
\left[
\begin{matrix}
\ast & \ast & \ast \\
\ast & \ast & \ast \\
\end{matrix}
\right].
\end{equation*}
Let $\gamma:\Lambda_{21} \to \Gamma_{21} $ be the map defined as follows. Given $\lambda \in \Lambda_{21}$ we define $\gamma(\lambda) \in \Gamma_{21}$ by the rules
\begin{itemize}
\item $\gamma(\lambda)_{ST} = \ast$ if $S = 0,1$ or $2$, and $T = 0$ or $1$;
\item $\gamma(\lambda)_{ST} = (\lambda_{ik},\lambda_{jk},\lambda_{ST})$ if $S=\{ i, j\}$ and $T= k$;
\item $\gamma(\lambda)_{S,01} = (\lambda_{S0}, \lambda_{S1}, \lambda_{S,01})$ for $S=0,1$ or $2$;
\item $\gamma(\lambda)_{S,01}=(\lambda_{i0},\lambda_{i1},\lambda_{j0},\lambda_{j1},\lambda_{S,01})$ if $S=\{i,j\}$;
\item $\gamma(\lambda)_{012,T}=(\lambda_{0T},\lambda_{1T},\lambda_{2T},\lambda_{012,T})$ if $T=0$ or $1$;
\item $\gamma(\lambda)_{012,01}=(\lambda_{00},\lambda_{01},\lambda_{10},\lambda_{11},\lambda_{20},\lambda_{21},\lambda_{012,01})$.
\end{itemize}
From the definition of the indexing family $J$ it is clear that $\gamma$ is a bijective map with inverse denoted by $\lambda$. In the same way we can define a bijective map $\Lambda_{12} \to \Gamma_{12}$ which we also denote by $\gamma$ (and inverse $\lambda$) and from the context it will be clear which of them we are using.
Let $\Xi: Z^1(\dbs{{\mathcal W}},\ubs{\mathcal{S}}) \to Z^1(\dbs{{\mathcal U}},\ubs{{\mathcal A}})$ be the map defined by
$$(\varphi,\psi)=(\{\varphi_\gamma\}_{\gamma \in \Gamma_{21}},\{\psi_{\gamma}\}_{\gamma \in \Gamma_{12}}) \mapsto (\Xi_1(\varphi),\Xi_2(\psi)):= (\sigma, \tau),
$$
with $\sigma=\{\sigma_{\lambda}\}_{\lambda \in \Lambda_{21}}$ and $\tau= \{\tau_{\lambda}\}_{\lambda \in \Lambda_{12}}$, where $\sigma_{\lambda}:= \varphi_{\gamma(\lambda)}$ and $\tau_{\lambda} := \psi_{\gamma(\lambda)}$ for all $\lambda$ in $\Lambda_{21}$ or in $\Lambda_{12}$, respectively.
Given a pair $(\varphi, \psi) \in Z^1(\dbs{{\mathcal W}}, \ubs{\mathcal{S}})$,
$\varphi$ is a family $(\varphi_\gamma)_{\gamma \in \Gamma_{2,1}}$ such that, for any $\omega \in \Gamma_{3,1}$,
the following equation holds
\begin{equation}\label{coarser_two_cocycle_1}
l(C)^{-1} \varphi_{\omega_{[3]\setminus 0,[1]}}\left( \begin{matrix} A \\ B \end{matrix} \right)
- \varphi_{\omega_{[3]\setminus 1,[1]}}\left( \begin{matrix} \left\{ \begin{matrix} A \\ B \end{matrix} \right\} \\ C \end{matrix} \right)
+ \varphi_{\omega_{[3]\setminus 2,[1]}}\left( \begin{matrix} A \\ \left\{ \begin{matrix} B \\ C \end{matrix} \right\} \end{matrix} \right)
- \varphi_{\omega_{[3]\setminus 3,[1]}}\left( \begin{matrix} B \\ C \end{matrix} \right) =0.
\end{equation}
Since $(\varphi, \psi)$ satisfies the cocycle conditions, it is clear that the map $\Xi$ is well defined; moreover, it is an isomorphism of abelian groups. The relations
\begin{equation}
(\Xi_1 d^1 \varphi)_\lambda = (d^1 \varphi)_{\gamma(\lambda)} \quad \text{and} \quad
(\Xi_2 d^1 \psi)_\lambda = (d^1 \psi)_{\gamma(\lambda)},
\end{equation}
satisfied by the map $\Xi$, allow us to induce an isomorphism between the total cohomology groups
$\operatorname{H}_{\Tot}^1(\dbs{{\mathcal W}},\ubs{\mathcal{S}})$ and $\operatorname{H}_{\Tot}^1(\dbs{{\mathcal U}}, \ubs{{\mathcal A}})$.
{\bf Step 3.} \textit{From double groupoid extensions to cohomology.}\\
The above result allows us to consider $\dbs{{\mathcal U}}$ as an open covering
with ${\mathcal U}_{00} = {\mathcal P}$.
Let us consider an extension
$$1 \to {\mathbf K} \overset{\iota}{\hookrightarrow} {\mathcal B} \overset{\Pi}{\twoheadrightarrow} {\mathcal F} \to 1$$
in $\Opext_{{\mathcal U}} ({\mathcal F}[{\mathcal U}_{00}], {\mathbf K}[{\mathcal U}_{00}])$. For any $\lambda \in \Lambda_{21}$, if
$$
\left(\begin{matrix}
A \\ B
\end{matrix}
\right) \in U^{21}_{\lambda} = \bigcap_{(k,l) \leq (2,1)} \bigcap_{f \in {\mathcal P}^{k,l}_{21}} \tilde{f}^{-1}(U^{kl}_{\lambda(f)}),
$$
then, by considering the cases $f = \epsilon^{21}_{2,v}$, $\epsilon_{0,v}^{21}$ and $\epsilon_{1,v}^{21}$, we can assert that $A \in U^{11}_{\lambda_{01,01}}$, $B \in U_{\lambda_{12,01}}^{11}$ and
$
\left\{
\begin{matrix} A \\ B
\end{matrix}
\right\} \in U^{11}_{\lambda_{02,01}}$.
Since there are local sections $\mu_{\lambda_{01,01}}: U^{11}_{\lambda_{01,01}} \to {\mathcal B}$, $\mu_{\lambda_{02,01}}: U^{11}_{\lambda_{02,01}} \to {\mathcal B}$ and $\mu_{\lambda_{12,01}}: U^{11}_{\lambda_{12,01}} \to {\mathcal B}$ of $\Pi$, we can define $\sigma_{\lambda}: U^{21}_{\lambda} \to {\mathbf K}$ by the equation
\begin{equation}\label{action_abelian_grup_bundle_1}
\left\{
\begin{matrix}
\mu_{\lambda_{01,01}}(A) \\
\mu_{\lambda_{12,01}}(B)
\end{matrix}
\right\}
= \sigma_{\lambda}
\left(
\begin{matrix}
A \\
B
\end{matrix}
\right)
\rightharpoonup \mu_{\lambda_{02,01}}
\left\{
\begin{matrix}
A \\ B
\end{matrix}
\right\}
\end{equation}
To show that the above equation defines a simplicial 2-cocycle for the cohomology of the
double groupoid, we come back to the associativity of horizontal and vertical composition, and to the exchange law between them. In fact, if $\lambda \in \Lambda_{31}$ and
$$
\left(
\begin{matrix}
A \\ B \\ C
\end{matrix}
\right)
\in U^{31}_{\lambda} = \bigcap_{(k,l) \leq (3,1)} \bigcap_{f \in {\mathcal P}^{(k,l)}_{(3,1)}} \tilde{f}^{-1}(U^{kl}_{\lambda(f)}),
$$
with $f = \epsilon_{i,v}^{31}$ for $i=0,1,2$ and $3$, we have
\begin{equation*}
\left(
\begin{matrix}
B \\ C
\end{matrix}
\right)
\in U^{21}_{\lambda_{[3] \setminus 0}}\;,\;
\left(
\begin{matrix}
\left\{
\begin{matrix}
A \\
B
\end{matrix}
\right\} \\
C
\end{matrix}
\right)
\in U^{21}_{\lambda_{[3] \setminus 1}}\;,\;
\left(
\begin{matrix}
A \\
\left\{
\begin{matrix}
B \\ C
\end{matrix}
\right\}
\end{matrix}
\right)
\in
U^{21}_{\lambda_{[3] \setminus 2}}\;,\quad \text{and} \quad
\left(
\begin{matrix}
A \\ B
\end{matrix}
\right) \in U^{21}_{\lambda_{[3] \setminus 3}}.
\end{equation*}
Since the vertical composition law is associative, by modifying in each case equation \eqref{action_abelian_grup_bundle_1}, we obtain that $\sigma = \{\sigma_{\lambda}\}_{\lambda \in \Lambda_{21}}$ satisfies the cocycle equation \eqref{re_two_cocycle_1}. In a similar way, by using the horizontal composition law instead of the vertical one, we can define $\{ \tau_{\lambda}\}_{\lambda \in \Lambda_{12}} \in \prod_{\lambda \in \Lambda_{12}} \mathcal{A}^{12}(U^{12}_{\lambda})$, and in the same way we can show that the other equations that define a total $2$-cocycle are satisfied.
To see that the cohomology class $[(\sigma, \tau)]$ is independent of the local sections initially taken, we consider $\lambda \in \Lambda_{11}$ and another local section $\mu_{\lambda_{01,01}}': U^{11}_{\lambda_{01,01}} \to {\mathcal B}$ of $\Pi$. Since $\mu_{\lambda_{01,01}}$ and $\mu_{\lambda_{01,01}}'$ have the same sides, there is a continuous map $\alpha_{\lambda}: U^{11}_{\lambda} \to {\mathbf K}$, such that for all $A \in U^{11}_{\lambda}$ the equation
\begin{equation}\label{cohomologous_cocycles_1}
\mu_{\lambda_{01,01}}(A) = \alpha_{\lambda}(A) \rightharpoonup \mu_{\lambda_{01,01}}'(A)
\end{equation}
holds. Similar equations hold for the other two local sections. Replacing \eqref{cohomologous_cocycles_1} in \eqref{action_abelian_grup_bundle_1} and comparing with the respective equation for $\mu_{\lambda_{01,01}}'$ we find
\begin{equation}
(\sigma -\sigma')_{\lambda}
\left(
\begin{matrix}
A \\ B
\end{matrix}
\right)
=
\alpha_{12,01}(B) - \alpha_{02,01}
\left\{
\begin{matrix}
A \\ B
\end{matrix}
\right\}
+ l(B)^{-1} \cdot \alpha_{01,01}(A) = (d_{v}^{11} \alpha)_{\lambda}
\left(
\begin{matrix}
A \\ B
\end{matrix}
\right).
\end{equation}
In the same way we show that $\tau$ satisfies a similar equation, but with the horizontal composition instead of the vertical one, and we can conclude that $(\sigma, \tau)$ and $(\sigma', \tau')$ are cohomologous.
{\bf Step 4.} \textit{From cohomology to double groupoid extensions.}\\
Let $(\sigma, \tau) \in \prod\limits_{\lambda \in \Lambda_{2,1}} {\mathcal A}^{2,1}(U^{2,1}_{\lambda}) \oplus \prod\limits_{\lambda \in \Lambda_{1,2}} {\mathcal A}^{1,2}(U^{1,2}_{\lambda})$ be a $1$-cocycle of the double groupoid cohomology.
We need to construct a double groupoid ${\mathcal B}$ from $(\sigma, \tau)$ that
fits in an extension
$$1 \to {\mathbf K} \overset{\iota}{\hookrightarrow} {\mathcal B} \overset{\Pi}{\twoheadrightarrow} {\mathcal F} \to 1$$
of ${\mathcal F}$ by ${\mathbf K}$ such that $\Pi$ has sections over each open set $U^{11}_i$ in the cover ${\mathcal U}_{11}$.
Thus define
$${\mathcal B}' = \coprod_{i \in I_{11}} \left\{ (K, F, i) \mid K \in {\mathbf K},\; F \in U^{11}_i \quad
\text{and} \quad p(K) = lb(F) \right\},$$
and let $\sim$ be the relation on ${\mathcal B}'$ defined by
\begin{equation}\label{eq_rel_1}
(K, F, k) \sim (-\tau_{\lambda(i)}(\id_{l(F)},\id_{l(F)}) + K + \tau_{\lambda(ijk)}(\id_{l(F)}, F),F , j),
\end{equation}
where $\lambda(ijk)$ stands for an element $\lambda \in \Lambda_{1,2}$ with $\lambda_{01,01}= i$, $\lambda_{01,02}=j$
and $\lambda_{01,12}=k$, and $(\id_{l(F)}, \id_{l(F)}) \in U^{12}_{\lambda(i)}$
along with $(\id_{l(F)}, F) \in U^{12}_{\lambda(ijk)}$; and
\begin{equation}\label{eq_rel_2}
(K, F, k) \sim \left(-\sigma_{\lambda(i)}\left(\begin{matrix}\id_{t(F)} \\ \id_{t(F)}\end{matrix}\right) + K
+ \sigma_{\lambda(ijk)}\left(\begin{matrix} \id_{t(F)} \\ F \end{matrix} \right), F, j \right),
\end{equation}
where $\lambda(ijk)$ stands for an element $\lambda \in \Lambda_{2,1}$ with $\lambda_{01,01}= i$, $\lambda_{02,01}=j$
and $\lambda_{12,01}=k$, and $\left( \begin{matrix} \id_{t(F)} \\ \id_{t(F)}\end{matrix}\right) \in U^{21}_{\lambda(i)}$
along with $\left( \begin{matrix}\id_{t(F)} \\ F \end{matrix} \right) \in U^{21}_{\lambda(ijk)}$.
To show that $\sim$ defines an equivalence relation on ${\mathcal B}'$, let us define
\begin{align}
\psi_{ikj}^h(F) = -\tau_{\lambda(i)}(\id_{l(F)},\id_{l(F)}) + \tau_{\lambda(ijk)}(\id_{l(F)}, F), \label{eq_rel_3}\\
\psi_{ikj}^v(F) = -\sigma_{\lambda(i)}\left(\begin{matrix}\id_{t(F)} \\ \id_{t(F)}\end{matrix}\right) \label{eq_rel_4}
+ \sigma_{\lambda(ijk)}\left(\begin{matrix} \id_{t(F)} \\ F \end{matrix} \right).
\end{align}
By using the cocycle conditions it is not difficult to show that $\psi_{ikj}^v$ and $\psi_{ikj}^h$
are independent of the value of $i$ and that
\begin{center}
\begin{tabular}{lll}
$\psi_{jj}^v = 0$, & & $\psi_{jj}^h = 0$; \\
$\psi_{kj}^v = -\psi_{jk}^v$, & & $\psi_{kj}^h = -\psi_{jk}^h$;\\
$\psi_{jm}^v = \psi_{jk}^v + \psi_{km}^v$, & & $\psi_{jm}^h = \psi_{jk}^h + \psi_{km}^h$;
\end{tabular}
\end{center}
and therefore $\sim$ is an equivalence relation.
Let us denote ${\mathcal B} = {\mathcal B}' / \sim$ and define the following partial composition laws
on ${\mathcal B}$
\begin{align*}
\left\{
\begin{matrix}
[K, F, \lambda_{01, 01}] & [L, G, \lambda_{01, 12}]
\end{matrix}
\right\}
&= [K + b(F)\cdot L + \tau_{\lambda}(F, G), \left\{ F \; G \right\}, \lambda_{01, 02}], \\
\left\{
\begin{matrix}
[K, F, \lambda_{01, 01}] \\
[L, G, \lambda_{01, 12}]
\end{matrix}
\right\}
&= \left[ K + l(F)^{-1}\cdot L + \sigma_{\lambda}
\left(
\begin{matrix}
F \\
G
\end{matrix}
\right), \left\{
\begin{matrix} F \\
G
\end{matrix}
\right\}, \lambda_{01, 02} \right].
\end{align*}
It is easy to check that these operations, together with the quotient topology, endow ${\mathcal B}$
with the structure of a topological double groupoid which is an extension of ${\mathcal F}$ by ${\mathbf K}$,
and that the canonical projection of ${\mathcal B}$ onto ${\mathcal F}$ admits a section over each open set $U^{11}_i$ in the open cover ${\mathcal U}_{11}$ of ${\mathcal F}$, \emph{i.e.}, this extension is an element of
$\Opext_{{\mathcal U}} ({\mathcal F}, {\mathbf K})$. Finally, it is not difficult to prove that this correspondence defines an isomorphism between $\operatorname{H}^{1}_{\Tot}({\mathcal F}, {\mathbf K})$ and $\Opext_{{\mathcal U}} ({\mathcal F}, {\mathbf K})$ and then, passing to the limit, it follows that
$$\operatorname{Ext}({\mathcal F}, {\mathbf K}) \cong \check{\operatorname{H}}^{1}_{\Tot}(\dbs{{\mathcal U}}; \ubs{{\mathcal A}}).$$
This last step finishes the proof.
\end{proof}
\begin{section}{Acknowledgements}
The first named author was supported by the research project ``Groupoid extensions and cohomology'', ID-PRJ: 00006538 of the Faculty of Sciences of Pontificia Universidad Javeriana, Bogotá, Colombia.
The second named author was partially supported by CONICET, ANPCyT and Secyt (UNC).
\end{section}
| {
"timestamp": "2016-08-25T02:01:36",
"yymm": "1608",
"arxiv_id": "1608.06712",
"language": "en",
"url": "https://arxiv.org/abs/1608.06712",
"abstract": "We study extensions of double groupoids in the sense of \\cite{AN2} and show some classical results of group theory extensions in the case of double groupoids. For it, given a double groupoid $(\\mathcal{B}; \\mathcal{V},\\mathcal{H}; \\mathcal{P})$ \\emph{acting} on an abelian group bundle $\\mathbf{K} \\to \\mathcal{P}$, we introduce a cohomology double complex, in a similar way as was done in \\cite{AN2} and we show that the extensions of $\\mathcal{F}$ by $\\mathbf{K}$ are classified by the total first cohomology group of the associated total complex.With the aim to extend the above results to the topological setting, following ideas of Deligne \\cite{D} and Tu \\cite{tu}, by means of simplicial methods, we introduce a \\emph{sheaf cohomology} for topological double groupoids, generalizing the double groupoid cohomological in the discrete case, and we carry out in the topological setting the results obtained for discrete double groupoids.",
"subjects": "K-Theory and Homology (math.KT); Category Theory (math.CT)",
"title": "Double groupoid cohomology and extensions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.967899295134923,
"lm_q2_score": 0.7310585727705127,
"lm_q1q2_score": 0.707591077286922
} |
https://arxiv.org/abs/1702.06385 | Causal Inference on Multivariate and Mixed-Type Data | Given data over the joint distribution of two random variables $X$ and $Y$, we consider the problem of inferring the most likely causal direction between $X$ and $Y$. In particular, we consider the general case where both $X$ and $Y$ may be univariate or multivariate, and of the same or mixed data types. We take an information theoretic approach, based on Kolmogorov complexity, from which it follows that first describing the data over cause and then that of effect given cause is shorter than the reverse direction.The ideal score is not computable, but can be approximated through the Minimum Description Length (MDL) principle. Based on MDL, we propose two scores, one for when both $X$ and $Y$ are of the same single data type, and one for when they are mixed-type. We model dependencies between $X$ and $Y$ using classification and regression trees. As inferring the optimal model is NP-hard, we propose Crack, a fast greedy algorithm to determine the most likely causal direction directly from the data.Empirical evaluation on a wide range of data shows that Crack reliably, and with high accuracy, infers the correct causal direction on both univariate and multivariate cause-effect pairs over both single and mixed-type data. |
\section{Introduction}
\label{sec:intro}
Telling cause from effect is one of the core problems in science. It is often difficult, expensive, or impossible to obtain data through randomized trials, and hence we often have to infer causality from, what is called, observational data~\cite{pearl:09:book}. We consider the setting where, given data over the joint distribution of two random variables $X$ and $Y$, we have to infer the causal direction between $X$ and $Y$. In other words, our task is to identify whether it is more likely that $X$ causes $Y$, or vice versa, that $Y$ causes $X$, or that the two are merely correlated.
In practice, $X$ and $Y$ do not have to be of the same type. The altitude of a location (real-valued), for example, determines whether it is a good habitat (binary) for a mountain hare. In fact, neither $X$ nor $Y$ has to be univariate. Whether or not a location is a good habitat for an animal is not caused by a single aspect, but by a \emph{combination} of conditions, which are not necessarily of the same type. We are therefore interested in the general case where $X$ and $Y$ may be of any cardinality, and may be of single or mixed type.
To the best of our knowledge there exists no method for this general setting. Causal inference based on conditional independence tests, for example, requires three variables, and cannot decide between ${X \rightarrow Y}\xspace$ and ${Y \rightarrow X}\xspace$~\cite{pearl:09:book}.
All existing methods that consider two variables are only defined for single-type pairs. Additive Noise Models (ANMs), for example, have only been proposed for univariate pairs of real-valued~\cite{peters:14:continuousanm} or discrete variables~\cite{peters:11:dr}, and similarly so for methods based on the independence of $P(X)$ and $P(Y\mid X)$~\cite{sgouritsa:15:cure,liu:16:dc}.
Trace-based methods require both $X$ and $Y$ to be strictly multivariate real-valued~\cite{janzing:10:ltr,chen:13:ktr}, and whereas \textsc{Ergo}\xspace~\cite{vreeken:15:ergo} also works for univariate pairs, these again have to be real-valued.
We refer the reader to Sec.~\ref{sec:rel} for a more detailed overview of related work.
Our approach is based on algorithmic information theory. That is, we follow the postulate that if ${X \rightarrow Y}\xspace$, it will be easier---in terms of Kolmogorov complexity---to first describe $X$, and then describe $Y$ given $X$, than vice-versa~\cite{janzing:10:algomarkov,vreeken:15:ergo,budhathoki:16:origo}.
Kolmogorov complexity is not computable, but can be approximated through the Minimum Description Length (MDL) principle~\cite{rissanen:78:mdl,grunwald:07:book}, which we use to instantiate this framework.
In addition, we develop a causal indicator that is able to handle multivariate and mixed-type data.
To this end, we define an MDL score for coding forests, a model class where a model consists of classification and regression trees. By allowing dependencies from $X$ to $Y$, or vice versa, we can measure the difference in complexity between ${X \rightarrow Y}\xspace$ and ${Y \rightarrow X}\xspace$. Discovering a single optimal decision tree is already NP-hard~\cite{murthy:97:decision-trees}, and hence we cannot efficiently discover the coding forest that describes the data most succinctly. We therefore propose \textsc{Crack}\xspace, an efficient greedy algorithm for discovering good models directly from data.
Through extensive empirical evaluation on synthetic, benchmark, and real-world data, we show that \textsc{Crack}\xspace performs very well in practice.
It performs on par with existing methods for univariate single-type pairs, is the first to handle pairs of mixed data type, and outperforms the state of the art on multivariate pairs by a large margin.
It is also very fast, taking less than 4 seconds over any pair in our experiments.
\section{Preliminaries}
\label{sec:prelim}
First, we introduce notation and give brief primers to Kolmogorov complexity and the MDL principle.
\subsection{Notation}
In this work we consider data $D$ over the joint distribution of random variables $X$ and $Y$. Such data $D$ contains $n$ records over a set $A$ of $|A| = |X| + |Y| = m$ attributes, $a_1, \dots, a_m \in A$. An attribute $a$ has a type $\textit{type}(a)$ where $\textit{type}(a) \in \{ \text{\textit{binary}, \textit{categorical}, \textit{numeric}} \}$. We will refer to binary and categorical attributes as \emph{nominal} attributes. The size of the domain of an attribute $a$ is defined as
\begin{equation}
|\ensuremath{\mathbb{D}}(a)| = \begin{cases}
\#\textit{values} &\text{if \textit{type}$(a)$ is nominal}\\
\frac{\max(a) - \min(a)}{\ensuremath{\mathit{res}}(a)} + 1&\text{if \textit{type}$(a)$ is numeric} \; ,
\end{cases}
\end{equation}
where $\ensuremath{\mathit{res}}(a)$ is the resolution at which the data over attribute $a$ was recorded. For example, a resolution of $1$ means that we consider integers, while a resolution of $0.01$ means that $a$ was recorded with a precision of up to a hundredth.
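For concreteness, the following minimal Python sketch shows one way to compute this domain size; the function name and the attribute representation are our own illustrative choices, not part of the implementation described in this paper.
\begin{verbatim}
def domain_size(values, attr_type, res=None):
    # |D(a)| for one attribute: the number of distinct values for
    # nominal data, the number of resolution steps for numeric data.
    if attr_type in ("binary", "categorical"):
        return len(set(values))
    return int(round((max(values) - min(values)) / res)) + 1

# e.g. domain_size([1.0, 1.5, 3.0], "numeric", res=0.5) == 5
\end{verbatim}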
We will consider decision and regression trees. In general, a tree $T$ consists of $|T|$ nodes. We identify internal nodes as $\ensuremath{\mathit{v}} \in \ensuremath{\mathit{int}}(T)$, and leaf nodes as $\ensuremath{\mathit{l}} \in \ensuremath{\mathit{lvs}}(T)$. A leaf node $l$ contains $|l|$ data points.
All logarithms are to base 2, and we use $0 \log 0 = 0$.
\subsection{Kolmogorov Complexity, a brief primer}
The Kolmogorov complexity of a finite binary string $x$ is the length of the shortest binary program $p^*$ for a universal Turing machine $\mathcal{U}$ that generates $x$, and then halts~\cite{kolmogorov:65:information, vitanyi:93:book}. Formally, we have
\[
K(x) = \min \{ |p| \mid p \in \{0,1\}^*, \mathcal{U}(p) = x \} \; .
\]
Simply put, $p^*$ is the most succinct \emph{algorithmic} description of $x$, and the Kolmogorov complexity of $x$ is the length of its ultimate lossless compression. Conditional Kolmogorov complexity, $K(x \mid y) \leq K(x)$, is then the length of the shortest binary program $p^*$ that generates $x$, and halts, given $y$ as input. For more details see~\cite{vitanyi:93:book}.
\subsection{MDL, a brief primer}
The Minimum Description Length (MDL) principle~\cite{rissanen:78:mdl, grunwald:07:book} is a practical variant of Kolmogorov Complexity. Intuitively, instead of all programs, it considers only those programs that we know output $x$ and halt. Formally, given a model class $\models$, MDL identifies the best model $M \in \models$ for data $\ensuremath{D}$ as the one minimizing
\[
L(\ensuremath{D}, M) = L(M) + L(\ensuremath{D} \mid M) \; ,
\]
where $L(M)$ is the length in bits of the description of $M$, and $L(\ensuremath{D}\mid\ensuremath{M})$ is the length in bits of the description of data $\ensuremath{D}$ given $M$. This is known as two-part MDL. There also exists one-part, or \emph{refined} MDL, where we encode data and model together. Refined MDL is superior in that it avoids arbitrary choices in the description language $L$, but is in practice only computable for certain model classes. Note that in either case we are only concerned with code \emph{lengths} --- our goal is to measure the \emph{complexity} of a dataset under a model class, not to actually compress it~\cite{grunwald:07:book}.
\section{Causal Inference by Compression}
\label{sec:causal}
We pursue the goal of causal inference by compression. Below we give a short introduction to the key concepts.
\subsection{Causal Inference by Complexity}
The problem we consider is to infer, given data over two correlated variables $X$ and $Y$, whether $X$ caused $Y$, whether $Y$ caused $X$, or whether $X$ and $Y$ are only correlated. As is common, we assume causal sufficiency. That is, we assume there exists no hidden confounding variable $Z$ that is the common cause of both $X$ and $Y$.
The Algorithmic Markov condition, as recently postulated by Janzing and Sch\"{o}lkopf~\cite{janzing:10:algomarkov}, states that factorizing the joint distribution over \ensuremath{\mathit{cause}}\xspace and \ensuremath{\mathit{effect}}\xspace into $P(\ensuremath{\mathit{cause}}\xspace)$ and $P(\ensuremath{\mathit{effect}}\xspace \mid \ensuremath{\mathit{cause}}\xspace)$, will lead to simpler---in terms of Kolmogorov complexity---models than factorizing it into $P(\ensuremath{\mathit{effect}}\xspace)$ and $P(\ensuremath{\mathit{cause}}\xspace \mid \ensuremath{\mathit{effect}}\xspace)$. Formally, they postulate that if $X$ causes $Y$,
\begin{equation}
K(P(X)) + K(P(Y \mid X)) \le K(P(Y)) + K(P(X \mid Y)) \; . \label{eq:janzing}
\end{equation}
While in general the symmetry of information, $K(x)+K(y\mid x) = K(y) + K(x \mid y)$, holds up to an additive constant~\cite{vitanyi:93:book}, Janzing and Sch\"{o}lkopf~\cite{janzing:10:algomarkov} showed it does \emph{not} hold when $X$ causes $Y$, or vice versa. Based on this, Budhathoki \& Vreeken~\cite{budhathoki:16:origo} proposed
\begin{equation}
\Delta_{\XtoY}\xspace^{*} = \frac{K(P(\ensuremath{X})) + K(P(\ensuremath{Y} \mid \ensuremath{X}))}{K(P(\ensuremath{X})) + K(P(\ensuremath{Y}))} \; , \label{eq:origo}
\end{equation}
as a causal indicator that uses this asymmetry to infer that $X \rightarrow Y$ as the most likely causal direction if $\Delta_{\XtoY}\xspace^* < \Delta_{\YtoX}\xspace^*$, and vice versa. The normalisation has no function during inference, but does help to interpret the confidence of the indicator.
Both scores assume access to the true distribution $P(\cdot)$, whereas in practice we only have access to empirical data. Moreover, following from the halting problem, Kolmogorov complexity is not computable. We can approximate it, however, via MDL~\cite{vitanyi:93:book,grunwald:07:book}, which also allows us to directly work with empirical distributions.
\subsection{Causal Inference by MDL}
For causal inference by MDL, we will need to approximate both $K(P(\ensuremath{X}))$ and $K(P(\ensuremath{Y} \mid \ensuremath{X}))$. For the former, we need to consider the model classes $\models_{X}$ and $\models_{Y}$, while for the latter we need to consider the class $\models_{Y\mid X}$ of models $\ensuremath{M}_{Y \mid X}$ that describe the data of $Y$ dependent on the data of $X$.
That is, we are after the \emph{causal} model $\ensuremath{M}_{{X \rightarrow Y}\xspace}= (\ensuremath{M}_{X}, \ensuremath{M}_{Y\mid X})$
from the class $\models_{X \rightarrow Y}\xspace = \models_{X} \times \models_{Y \mid X}$ that best describes the data $Y$ by exploiting as much as possible structure of $X$ to save bits. By MDL, we identify the optimal model $\ensuremath{M}_{X \rightarrow Y}\xspace \in \models_{X \rightarrow Y}\xspace$ for data $\ensuremath{D}$ over $X$ and $Y$ as the one minimizing
\[
L(\ensuremath{D},\ensuremath{M}_{X \rightarrow Y}\xspace) = L(\ensuremath{X}, \ensuremath{M}_{X}) + L(\ensuremath{Y}, M_{Y \mid X} \mid \ensuremath{X}) \; ,
\]
where the data of $X$ under a given model is encoded using two-part MDL; we proceed analogously for $Y$ when we consider the inverse direction.
To identify the most likely causal direction between $X$ and $Y$ by MDL we can now simply rewrite Eq.~\eqref{eq:origo}
\[
\Delta_{\XtoY}\xspace = \frac{L(\ensuremath{X}, \ensuremath{M}_{X}) + L(\ensuremath{Y}, M_{Y\mid X} \mid X)}{L(\ensuremath{X}, \ensuremath{M}_{X}) + L(\ensuremath{Y}, \ensuremath{M}_{Y})} \; .
\]
Similar to the original score, we infer that $X$ is a likely cause of $Y$ if $\Delta_{\XtoY}\xspace < \Delta_{\YtoX}\xspace$, $Y$ is a likely cause of $X$ if $\Delta_{\YtoX}\xspace < \Delta_{\XtoY}\xspace$, and that $X$ and $Y$ are only correlated or might have a common cause if $\Delta_{\XtoY}\xspace = \Delta_{\YtoX}\xspace$.
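As a purely hypothetical illustration of how this decision rule is applied once the four encoded lengths are available, consider the following Python sketch; the function and argument names are our own, and the computation of the code lengths themselves is the subject of the following sections.
\begin{verbatim}
def delta_indicator(L_X, L_Y, L_Y_given_X, L_X_given_Y):
    # Delta scores computed from the four (two-part) encoded lengths.
    d_xy = (L_X + L_Y_given_X) / (L_X + L_Y)
    d_yx = (L_Y + L_X_given_Y) / (L_X + L_Y)
    if d_xy < d_yx:
        return "X -> Y"
    if d_yx < d_xy:
        return "Y -> X"
    return "undecided (only correlated, or common cause)"
\end{verbatim}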
\subsection{Normalized Causal Indicator}
Although $\Delta$ has nice theoretical properties, it has a major drawback. It assumes that gaining a bit in the description of the data over one attribute is as important as gaining a bit in the description of the data over another attribute. This does not hold if these attributes have different intrinsic complexities, for instance when their domain sizes differ strongly. A continuous-valued attribute, for example, is likely to have a much higher intrinsic complexity than a binary attribute, and gaining $k$ bits on an attribute with a large domain is not comparable to gaining $k$ bits on an attribute with a small domain. Since the $\Delta$ indicator compares absolute differences in bits, it does not account for these differences in intrinsic complexity. Hence, $\Delta$ is likely to be a poor choice when $X$ and $Y$ are of different, or of mixed-type data.
We therefore propose an alternative indicator for causal inference on mixed-type data. Instead of taking the absolute difference between the conditioned and unconditioned score, we instead consider relative differences w.r.t. the marginal. We can derive the \textit{Normalized Causal Indicator} (\ensuremath{\mathit{NCI}}\xspace) starting from the numerator of the $\Delta$ indicator. By subtracting the conditional costs on both sides, we have
\[
L(\ensuremath{X}, \ensuremath{M}_{X}) - L(\ensuremath{X}, M_{X| Y} | Y) < L(\ensuremath{Y}, \ensuremath{M}_{Y}) - L(\ensuremath{Y}, M_{Y| X} | X).
\]
Since the aim of the \ensuremath{\mathit{NCI}}\xspace is to measure the relative gain, we divide by the costs of the unconditioned data
\[
\frac{L(\ensuremath{X}, \ensuremath{M}_{X}) - L(\ensuremath{X}, M_{X| Y} | Y)}{L(\ensuremath{X}, \ensuremath{M}_{X})} = 1 - \frac{L(\ensuremath{X}, M_{X| Y} | Y)}{L(\ensuremath{X}, \ensuremath{M}_{X})} \; .
\]
From this, we can conclude that if ${X \rightarrow Y}\xspace$, it holds that
\[
\frac{L(\ensuremath{X}, M_{X\mid Y} \mid Y)}{L(\ensuremath{X}, \ensuremath{M}_{X})} > \frac{L(\ensuremath{Y}, M_{Y\mid X} \mid X)}{L(\ensuremath{Y}, \ensuremath{M}_{Y})} \; .
\]
This score can be understood as an instantiation of the \textsc{Ergo}\xspace indicator proposed by Vreeken~\cite{vreeken:15:ergo}. From the derivation, we can easily see that the difference between the two indicators lies only in the normalization factor, and hence both are based on the Algorithmic Markov condition. It turns out, however, that the \textsc{Ergo}\xspace indicator is also biased. Although it balances the gain between $X$ and $Y$, we need a score that does not impose prior assumptions on the individual attributes of $X$ and $Y$. With the \textsc{Ergo}\xspace indicator, a single $X_i \in X$ could dominate the whole score for $X$. To account for this, we assume independence among the variables within $X$ and $Y$, meaning that the domains of individual attributes within $X$ or $Y$ are allowed to differ. Hence, we formulate the \ensuremath{\mathit{NCI}}\xspace, which we from now on denote by $\delta$, from $X$ to $Y$ as
\[
\delta_{{X \rightarrow Y}\xspace} = \frac{1}{|Y|} \sum_{Y_i \in Y} \frac{L(Y_i, M_{Y_i \mid X} \mid X)}{L(Y_i, \ensuremath{M}_{Y_i})}\;
\]
and analogously $\delta_{{Y \rightarrow X}\xspace}$. To avoid bias towards dimensionality, we normalize by the number of attributes. As above, we infer ${X \rightarrow Y}\xspace$ if $\delta_{{X \rightarrow Y}\xspace} < \delta_{{Y \rightarrow X}\xspace}$ and vice versa.
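The following Python sketch spells out this computation; the interface, with per-attribute code lengths passed in as lists, is an illustrative choice of ours and not part of the paper.
\begin{verbatim}
def nci(L_Yi, L_Yi_given_X):
    # delta_{X -> Y}: average, over the attributes Y_i of Y, of the
    # conditional code length divided by the marginal code length.
    ratios = [c / m for c, m in zip(L_Yi_given_X, L_Yi)]
    return sum(ratios) / len(ratios)

# infer X -> Y if nci(L_Yi, L_Yi_given_X) < nci(L_Xi, L_Xi_given_Y)
\end{verbatim}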
In practice, we expect that $\Delta$ performs well on data where $X$ and $Y$ are of the same type, especially when $|X|=|Y|$ and the domain sizes of their attributes are balanced. For unbalanced domains, dimensionality, and especially for mixed-type data, we expect $\delta$ to perform much better. The experiments indeed confirm this.
\section{MDL for Tree Models}
\label{sec:score}
To use the above defined causal indicators in practice, we need to define a causal model class $\models_{X \rightarrow Y}\xspace$, how to encode a model $\ensuremath{M} \in \models$ in bits, and how to encode a dataset $\ensuremath{D}$ using a model $\ensuremath{M}$. As models we consider tree models, or, \emph{coding forests}.
A coding forest $\ensuremath{M}$ contains per attribute $a_i \in A$ one coding tree $T_i$. A coding tree $T_i$ encodes the values of $a_i$ in its leaves, splitting or regressing the data of $a_i$ on attribute $a_j$ ($i \neq j$) in its internal nodes to encode the data of $a_i$ more succinctly.
We encode the data over attribute $a_i$ with the corresponding coding tree $T_i$. The encoded length of data $D$ and $\ensuremath{M}$ then is $L(D, \ensuremath{M}) = \sum_{a_i \in \ensuremath{A}} L(\ensuremath{T}_i)$, which corresponds to the sum of costs of the individual trees.
To ensure lossless decoding, there needs to exist an order on the trees $T \in \ensuremath{M}$ such that we can transmit these one by one. In other words, in a \emph{valid} tree model there are no cyclic dependencies between the trees $\ensuremath{T} \in \ensuremath{M}$, and a valid model can hence be represented by a DAG. Let $\models(D)$ be the set of all valid tree models for $D$, that is, $\ensuremath{M} \in \models(D)$ is a set of $|A|$ trees such that the data type of the leaves in $\ensuremath{T}_i$ corresponds to the data type of attribute $a_i$, and its dependency graph is acyclic.
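Checking validity thus amounts to testing that the dependency graph is acyclic; a minimal Python sketch of such a test (our own formulation, via depth-first search, and not tied to the implementation used here) is:
\begin{verbatim}
def is_valid(deps):
    # deps maps each attribute to the attributes its coding tree splits
    # or regresses on; the model is valid iff this graph is acyclic.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {i: WHITE for i in deps}

    def dfs(i):
        color[i] = GRAY
        for j in deps.get(i, ()):
            if color.get(j, WHITE) == GRAY:      # back edge: cycle
                return False
            if color.get(j, WHITE) == WHITE and not dfs(j):
                return False
        color[i] = BLACK
        return True

    return all(dfs(i) for i in deps if color[i] == WHITE)
\end{verbatim}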
\begin{figure}[t]
\begin{minipage}[t]{.5\linewidth}
\centering
\includegraphics[]{lepus_ytox_large.pdf}
\subcaption{DAG}\label{toy:dag}
\end{minipage}%
\begin{minipage}[t]{.5\linewidth}
\centering
\includegraphics[]{lepus_xtoy_tree.pdf}
\subcaption{Tree for $Y_2$}\label{toy:tree}
\end{minipage}%
\caption{Toy data set with ground truth ${X \rightarrow Y}\xspace$. Left: the dependency DAG; more dependencies go from $X$ to $Y$ than vice versa. Right: an example coding tree for $Y_2$. $X_1$ splits the values of $Y_2$ into two subsets; in addition, the subset belonging to the left child can be further compressed by regressing on $X_2$.}
\label{fig:toy:example}
\end{figure}
We write $\models_{X}(X)$ and $\models_{Y}(Y)$ to denote the subset of valid coding forests for $X$ and $Y$, where we do not allow dependencies. To describe the possible set of models where we allow attributes of $X$ to only depend on attributes of $Y$ we write $\models_{X \mid Y}(X)$ and do so accordingly for $Y$ depending only on $X$. If an attribute does not have any incoming dependencies, its tree is a stump. Fig.~\ref{fig:toy:example} shows the DAG for a toy data set, and an example tree for $Y_2$. From the DAG, the set of purple edges would be a valid model in $\models_{Y \mid X}(Y)$, whereas the orange edges are a valid model from $\models_{X \mid Y}(X)$.
\subsubsection*{Cost of a Tree}
The encoded cost of a tree consists of two parts. First, we transmit the topology of the tree. From the root node on we indicate with one bit per node whether it is a leaf or an internal node, and if the latter, one further bit to identify whether it is a split or regression node. Formally we have that
\[
L(\ensuremath{T}) = |\ensuremath{T}| + \sum_{\ensuremath{\mathit{v}} \in \ensuremath{\mathit{int}}(\ensuremath{T})} (1+L(\ensuremath{\mathit{v}})) + \sum_{\ensuremath{\mathit{l}} \in \ensuremath{\mathit{lvs}}(\ensuremath{T})} L(\ensuremath{\mathit{l}}) \; .
\]
Next, we explain how we encode internal nodes and then specify the encoding for leaf nodes.
\subsubsection*{Cost of a Single Split}
The length of a split node $\ensuremath{\mathit{v}}$ is
\[
L_{1\ensuremath{\mathit{split}}\xspace}(\ensuremath{\mathit{v}}) = 1 + \log |\ensuremath{A}| + \begin{cases}
\log |\ensuremath{\mathbb{D}}(a_j)| &\text{if $a_j$ is categorical,}\\
\log \left( |\ensuremath{\mathbb{D}}(a_j)| - 1 \right) &\text{else,}
\end{cases}
\]
where we first
need one bit to indicate that this is a single-split node, then
identify in $\log |\ensuremath{A}|$ bits on which attribute $a_j$ we split,
and finally encode the split condition.
The split condition can be any value in the domain for a categorical attribute, and can lie between two consecutive values of a numeric attribute ($|\ensuremath{\mathbb{D}}(a_j)| - 1$ choices). For a binary attribute there is only one option, resulting in zero cost.
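As a hypothetical sketch, this single-split cost can be computed as follows in Python; note that binary attributes fall into the second case and cost zero extra bits, and that the attribute representation is our own choice.
\begin{verbatim}
from math import log2

def L_single_split(num_attributes, domain_size_aj, aj_type):
    # One bit to mark a single split, log|A| bits to identify the
    # attribute a_j we split on, plus the cost of the split condition.
    cost = 1 + log2(num_attributes)
    if aj_type == "categorical":
        cost += log2(domain_size_aj)         # any value of the domain
    else:                                    # numeric or binary
        cost += log2(domain_size_aj - 1)     # 0 bits for binary
    return cost
\end{verbatim}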
\subsubsection*{Cost of a Multiway Split}
A multiway split is only possible for categorical and real-valued data. As there are exponentially many multiway splits, we consider only a subset. The costs for a multiway split are
\[
L_{\text{k}\ensuremath{\mathit{split}}\xspace}(\ensuremath{\mathit{v}}) = 1 + \log |\ensuremath{A}| + \begin{cases}
0 &\text{if $a_j$ is categorical,}\\
L_{\mathbb{N}}(k) &\text{if $a_j$ is numeric,}
\end{cases}
\]
where the first two terms are as above. For categorical data, we only consider splitting on all values, and hence have no further cost. For numeric data, we only split non-deterministic cases, i.e.\ when there exist duplicate values. To do so, we split on every value that occurs at least $k$ times, and add one residual split for all remaining data points.
To encode such a split, we transmit $k$ using $L_{\mathbb{N}}(k)$ bits, where $L_{\mathbb{N}}$ is the MDL optimal encoding for integers $z \geq 1$~\cite{rissanen:83:integers}.
\subsubsection*{Cost of Regressing}
For a regression node we also first encode the target attribute, and then the parameters of the regression, i.e.
\[
L_{\ensuremath{\mathit{reg}}\xspace}(\ensuremath{\mathit{v}}) = \log |\ensuremath{A}| + \sum_{\phi \in \Phi(\ensuremath{\mathit{v}})} \left( \, 1 + \ensuremath{L_\mathbb{N}}(s) + \ensuremath{L_\mathbb{N}}(\lfloor \phi \cdot 10^{s}\rfloor) \, \right) ,
\]
where $\Phi(\ensuremath{\mathit{v}})$ denotes the set of parameters of the regression. For linear regression, it consists of $\alpha$ and $\beta$, while for quadratic regression it further contains $\gamma$. To describe each parameter $\phi \in \Phi$ up to a user-defined precision, e.g.\ $0.001$, we first encode the corresponding number of significant digits $s$, e.g.\ $3$, and then the shifted parameter value $\lfloor \phi \cdot 10^{s}\rfloor$.
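The text does not spell out $L_{\mathbb{N}}$ beyond the reference above; the following Python sketch therefore uses the usual form of Rissanen's universal integer code, and its interface, including the clamping of very small parameters, is our own illustrative choice.
\begin{verbatim}
import math

def L_N(z):
    # Universal MDL code for an integer z >= 1 (Rissanen 1983):
    # log*(z) plus the normalizing constant log2(c0), c0 ~ 2.865064.
    cost, term = math.log2(2.865064), math.log2(z)
    while term > 0:
        cost += term
        term = math.log2(term)
    return cost

def L_reg(num_attributes, params, precision=0.001):
    # Cost of a regression node: log|A| bits for the target attribute,
    # then per parameter one indicator bit, the number of significant
    # digits s, and the shifted absolute parameter value.
    s = round(-math.log10(precision))              # e.g. 3 for 0.001
    cost = math.log2(num_attributes)
    for phi in params:
        shifted = max(1, abs(int(phi * 10 ** s)))  # clamp to >= 1
        cost += 1 + L_N(s) + L_N(shifted)
    return cost
\end{verbatim}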
Next, we describe how to encode the data in a leaf $l$. As we consider both nominal and numeric attributes, we need to define $L_\ensuremath{\mathit{nom}}\xspace(l)$ for nominal and $L_\ensuremath{\mathit{num}}\xspace(l)$ for numeric data.
\subsubsection*{Cost of a Nominal Leaf}
To encode the data in a leaf of a nominal attribute, we can use refined MDL~\cite{kontkanen:07:histo}. That is, we encode minimax optimally, without having to make design choices~\cite{grunwald:07:book}. In particular, we encode the data of a nominal leaf using the normalized maximum likelihood (NML) distribution as
\begin{align}
L_\ensuremath{\mathit{nom}}\xspace(\ensuremath{\mathit{l}}) =& \log \left( \sum_{\substack{h_1 + \cdots + h_{k} = |\ensuremath{\mathit{l}}|}} \frac{|\ensuremath{\mathit{l}}|!}{h_1! h_2! \cdots h_{k}!} \prod_{j=1}^{k} \left( \frac{h_j}{|\ensuremath{\mathit{l}}|} \right)^{h_j} \right)\\
& - |l| \sum_{c \in \ensuremath{\mathbb{D}}(a_i)} \Pr(a_i = c \mid \ensuremath{\mathit{l}}) \log \Pr(a_i = c \mid \ensuremath{\mathit{l}}) \; .
\end{align}
Kontkanen \& Myllym\"{a}ki~\cite{kontkanen:07:histo} derived a recursive formula to calculate this in linear time.
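As a hypothetical illustration, the following Python sketch computes this code length, using the linear-time recursion of Kontkanen \& Myllym\"{a}ki for the normalizing sum; the interface (a list of observed symbols plus the domain size $k$) is our own choice.
\begin{verbatim}
import math
from collections import Counter

def multinomial_complexity(n, k):
    # Normalizing sum C(n, k) of the multinomial NML code, computed
    # with the recursion C(n,k) = C(n,k-1) + n/(k-2) * C(n,k-2).
    if k == 1 or n == 0:
        return 1.0
    c2 = sum(math.comb(n, h) * (h / n) ** h * ((n - h) / n) ** (n - h)
             for h in range(n + 1))
    if k == 2:
        return c2
    prev, curr = 1.0, c2                    # C(n,1), C(n,2)
    for j in range(3, k + 1):
        prev, curr = curr, curr + (n / (j - 2)) * prev
    return curr

def L_nom(values, k):
    # NML code length of a nominal leaf: log of the normalizing sum
    # plus the empirical (maximum-likelihood) code length of the data.
    n = len(values)
    loglik = -sum(h * math.log2(h / n) for h in Counter(values).values())
    return math.log2(multinomial_complexity(n, k)) + loglik
\end{verbatim}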
\subsubsection*{Cost of a Numerical Leaf}
For numeric data existing refined MDL encodings have high computational complexity \cite{kontkanen:07:histo}. Hence, we encode the data in numeric leaves using two-part MDL, using point models with Gaussian or uniform noise. A split or a regression on an attribute aims to reduce the variance or the domain in the leaf. We encode the costs of a numeric leaf as
\begin{align}
L_\ensuremath{\mathit{num}}\xspace(\ensuremath{\mathit{l}} \mid \sigma, \mu) =& \frac{|l|}{2} \left( \frac{1}{\ln 2} + \log 2 \pi \sigma^2 \right) - |l| \log \ensuremath{\mathit{res}}(a_i) ,
\end{align}
given the empirical mean $\mu$ and standard deviation $\sigma$, or as uniform given $\min$ and $\max$ as
\begin{align}
L_\ensuremath{\mathit{num}}\xspace(\ensuremath{\mathit{l}} \mid \min, \max) =& |l| \cdot \log \left( \frac{\max - \min}{\ensuremath{\mathit{res}}(a_i)} + 1 \right) \; .
\end{align}
We encode the data as Gaussian if this costs fewer bits than encoding it as uniform. To indicate this decision, we spend one bit, and then pay the minimum of the two costs plus the cost of the corresponding parameters.
As we consider empirical data, we can safely assume that all parameters lie in the domain of the given attribute. Since we do not have any preference on the parameter values, the encoded costs of a numeric leaf $\ensuremath{\mathit{l}}$ are
\begin{align}
L_\ensuremath{\mathit{num}}\xspace(l) &= 1 + 2 \log |\ensuremath{\mathbb{D}}(a_i)| \\
&+ \min \{ L_\ensuremath{\mathit{num}}\xspace(l \mid \sigma, \mu), L_\ensuremath{\mathit{num}}\xspace(\ensuremath{\mathit{l}} \mid \min, \max) \} \; .
\end{align}
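A compact Python sketch of this two-part leaf cost, assuming the domain size is computed from the resolution as before and using our own naming, is:
\begin{verbatim}
from math import log, log2, pi

def L_num_leaf(values, res):
    # Two-part cost of a numeric leaf: one bit for the model choice,
    # 2 log|D(a_i)| bits for its two parameters, plus the cheaper of
    # the Gaussian and the uniform encodings of the data itself.
    n = len(values)
    dom = (max(values) - min(values)) / res + 1
    mu = sum(values) / n
    var = sum((v - mu) ** 2 for v in values) / n
    if var > 0:
        gauss = (n / 2) * (1 / log(2) + log2(2 * pi * var)) - n * log2(res)
    else:
        gauss = float("inf")     # degenerate leaf: use the uniform code
    uniform = n * log2(dom)
    return 1 + 2 * log2(dom) + min(gauss, uniform)
\end{verbatim}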
Putting it all together, we now know how to compute $L(D,\ensuremath{M})$, by which we can formally define the Minimal Coding Forest problem.
\vspace{0.5em}
\noindent\textbf{Minimal Coding Forest Problem}
\emph{
Given a data set $\ensuremath{D}$ over a set of attributes $A = \{a_1, \ldots, a_m\}$ and a valid model class $\models$ for $A$, find the model $\ensuremath{M} \in \models$ for which $L(\ensuremath{D},\ensuremath{M})$ is minimal.
}
\vspace{0.5em}
From the fact that both inferring optimal decision trees and structure learning of Bayesian networks---to which our tree-models reduce for nominal-only data and splitting on all values---are NP-hard~\cite{murthy:97:decision-trees}, it trivially follows that the Minimal Coding Forest problem is also NP-hard. Hence, we resort to heuristics.
\section{The \textsc{Crack}\xspace Algorithm}
\label{sec:algo}
Knowing the score $L(\ensuremath{D},\ensuremath{M})$ and the problem, we can now introduce the \textsc{Crack}\xspace algorithm, which stands for \textbf{c}lassification and \textbf{r}egression based p\textbf{ack}ing of data. \textsc{Crack}\xspace is an efficient greedy heuristic for discovering a coding forest $\ensuremath{M}$ from a given model class $\models$ with low $L(\ensuremath{D},\ensuremath{M})$. It builds upon the well-known ID3 algorithm~\cite{quinlan:86:id3}. In the following, we explain the main aspects of the algorithm.
\subsubsection*{Greedy algorithm}
We give the pseudocode of \textsc{Crack}\xspace as Algorithm \ref{alg:crack}. Before running the algorithm, we set the resolution per attribute, which is $1$ for nominal data (line~\ref{alg:crack:res}). For numeric data, we calculate the differences between adjacent values, and to reduce sensitivity to outliers take the $k^{\mathit{th}}$ smallest difference as resolution. In general, setting $k$ to $0.1n$ works well in practice.
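A hypothetical Python sketch of this resolution heuristic (\textsc{RobustMinDiff}) is given below; using sorted distinct values is our own reading of ``adjacent values''.
\begin{verbatim}
def robust_min_diff(values, k=None):
    # Resolution of a numeric attribute: the k-th smallest difference
    # between adjacent distinct sorted values, which is less sensitive
    # to outliers than the minimum difference; k defaults to 0.1 * n.
    xs = sorted(set(values))
    diffs = sorted(b - a for a, b in zip(xs, xs[1:]))
    if not diffs:
        return 1.0                       # constant attribute (fallback)
    if k is None:
        k = max(1, int(0.1 * len(values)))
    return diffs[min(k, len(diffs)) - 1]
\end{verbatim}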
\textsc{Crack}\xspace starts with an empty model consisting of only trivial trees, i.e. leaf nodes containing all records, per attribute (line~\ref{alg:crack:trivialtrees}). The given model class $\models$ implicitly defines a graph $\ensuremath{\mathcal{G}}$ of dependencies between attributes that we are allowed to consider (line~\ref{alg:crack:depgraph}).
To make sure the returned model is valid, we need to maintain a graph representing its dependencies (lines~\ref{alg:crack:checkgraph1}--\ref{alg:crack:checkgraph2}).
We iteratively discover the refinement of the current model that maximizes compression. To find the best refinement, we consider every attribute (line~\ref{alg:crack:everyattrib}), and every legal additional split or regression of its corresponding tree (line~\ref{alg:crack:splitregress}). A refinement is only legal when the dependency is allowed by the model family (line~\ref{alg:crack:depcheck}), the dependency graph remains acyclic, and we do not split or regress twice on the same attribute (line~\ref{alg:crack:validcheck}). We keep track of the best found refinement.
The key subroutine of \textsc{Crack}\xspace is \textsc{RefineLeaf}\xspace, in which we discover the optimal refinement of a leaf $l$ in tree $T_i$. That is, it finds the optimal split of $l$ over all candidate attributes $a_j$ such that we minimize the encoded length. In case both $a_i$ and $a_j$ are numeric, \textsc{RefineLeaf}\xspace also considers the best linear and quadratic regression and decides for the variant with the best compression---choosing to split in case of a tie. In the interest of efficiency, we do not allow splitting or regressing multiple times on the same candidate.
\begin{algorithm}[tb!]
\caption{$\textsc{Crack}\xspace(\ensuremath{D}, \models)$}
\label{alg:crack}
\Input{ data $\ensuremath{D}$ over attributes $A$, model class $\models$}
\Output{ tree model $\ensuremath{M} \in \models$ with low $L(\ensuremath{D}, \ensuremath{M})$}
$\ensuremath{\mathit{res}}(a_i) \leftarrow \textsc{RobustMinDiff}\xspace(a_i)$\; \label{alg:crack:res}
$\ensuremath{T}_i \leftarrow \textsc{TrivialTree}\xspace(a_i)$ for all $a_i \in A$\; \label{alg:crack:trivialtrees}
$\ensuremath{\mathcal{G}} \leftarrow$ dependency graph for $\models$\; \label{alg:crack:depgraph}
$V \leftarrow \{ v_i \mid i \in A \}, \; E \leftarrow \emptyset$\; \label{alg:crack:checkgraph1}
$\ensuremath{\mathcal{G}} \leftarrow (V,E)$\; \label{alg:crack:checkgraph2}
\While{$L(\ensuremath{D},\ensuremath{M})$ decreases}{
\For {$a_i \in A$}{\label{alg:crack:everyattrib}
$O_i \leftarrow \ensuremath{T}_i$\;
\For{$l \in \ensuremath{\mathit{lvs}}(\ensuremath{T}_i), (i,j) \in \ensuremath{\mathcal{G}}$}{ \label{alg:crack:depcheck}
\If{$E \cup (v_i, v_j) \text{ is acyclic}$ \AND $j \notin \text{path}(l)$}{ \label{alg:crack:validcheck}
$\ensuremath{T}'_i \leftarrow \textsc{RefineLeaf}\xspace(\ensuremath{T}_i, l, j)$\;\label{alg:crack:splitregress}
\If{$L(\ensuremath{T}'_i) < L(O_i)$}{\label{alg:crack:bettercheck}
$O_i \leftarrow \ensuremath{T}'_i, \; e_i \leftarrow j$\;\label{alg:crack:betterstore}
}
}
}
}
$k \leftarrow \arg\min_i \{ L(O_i) - L(\ensuremath{T}_i) \}$\;\label{alg:crack:bestk}
\If{$L(O_k) < L(\ensuremath{T}_k)$}{\label{alg:crack:bestcheck}
$\ensuremath{T}_k \leftarrow O_k$\;\label{alg:crack:beststore}
$E \leftarrow E \cup (v_k, v_{e_k})$\label{alg:crack:bestedge}
}
}
\Return{$\ensuremath{M} \leftarrow \bigcup_i \ensuremath{T}_i $} \label{alg:crack:return}
\end{algorithm}
Since we use a greedy heuristic to construct the coding trees, we have a worst case runtime of $O(2^{m}n)$, where $m$ is the number of attributes and $n$ is the number of rows. Although the worst case runtime is exponential, in practice, \textsc{Crack}\xspace takes only a few seconds.
\subsubsection*{Causal Inference with \textsc{Crack}\xspace}
To compute our causal indicators we have to run \textsc{Crack}\xspace twice on $D$: first with model class $\models_{X \mid Y}(X)$ to obtain $\ensuremath{M}_{X \mid Y}$, and second with $\models_{Y\mid X}(Y)$ to obtain $\ensuremath{M}_{Y\mid X}$. To estimate $L(\ensuremath{X}, \ensuremath{M}_{X})$, we assume a uniform prior, $L(X \mid \ensuremath{M}_{X}) = -n\sum_{a_i \in X} \log \ensuremath{\mathit{res}}(a_i)$, and similarly for $L(\ensuremath{Y}, \ensuremath{M}_{Y})$. We can use these scores to calculate both the $\delta$ score and the $\Delta$ score. We will refer to \textsc{Crack}\xspace using the $\delta$ indicator as \ensuremath{\textsc{Crack}_{\delta}}\xspace, and to \textsc{Crack}\xspace with the $\Delta$ indicator as \ensuremath{\textsc{Crack}_{\Delta}}\xspace.
\section{Related Work}
\label{sec:rel}
Causal inference on observational data is a challenging problem, and has recently attracted a lot of attention~\cite{pearl:09:book, janzing:10:algomarkov, shimizu:06:anm, budhathoki:16:origo}. Most existing proposals, however, are highly specific in the type of causal dependencies and type of variables they can consider.
Classical constraint-based approaches, such as conditional independence tests, require three observed random variables~\cite{spirtes:00:book,pearl:09:book}, cannot distinguish Markov equivalent causal DAGs~\cite{verma:90:markov-equiv} and hence cannot decide between ${X \rightarrow Y}\xspace$ and ${Y \rightarrow X}\xspace$. Recent approaches use properties of the joint distribution to break the symmetry.
Additive Noise Models (ANMs)~\cite{shimizu:06:anm}, for example, assume that the effect is a function of the cause and cause-independent additive noise. ANMs exist for univariate real-valued~\cite{shimizu:06:anm,hoyer:09:nonlinear,zhang:09:ipcm,peters:14:continuousanm} and discrete data~\cite{peters:10:discreteanm}.
A related approach considers the asymmetry in the joint distribution of $\ensuremath{\mathit{cause}}\xspace$ and $\ensuremath{\mathit{effect}}\xspace$ for causal inference. The linear trace method (\textsc{LTR}\xspace)~\cite{janzing:10:ltr} and the kernelized trace method (\textsc{KTR}\xspace)~\cite{chen:13:ktr} aim to find a structure matrix $A$ and the covariance matrix $\Sigma_X$ to express $Y$ as $AX$. Both methods are restricted to multivariate continuous data.
Sgouritsa et al.~\cite{sgouritsa:15:cure} show that the marginal distribution $P(\ensuremath{\mathit{cause}}\xspace)$ of the cause is independent of the conditional distribution $P(\ensuremath{\mathit{effect}}\xspace \mid \ensuremath{\mathit{cause}}\xspace)$ of the effect.
They propose \textsc{Cure}\xspace, which uses unsupervised reverse regression on univariate continuous pairs. Liu et al.~\cite{liu:16:dc} use distance correlation to identify the weakest dependency between univariate pairs of discrete data.
The algorithmic information-theoretic approach views causality in terms of Kolmogorov complexity. The key idea is that if $X$ causes $Y$, the shortest description of the joint distribution $P(X, Y)$ is given by the separate descriptions of the distributions $P(X)$ and $P(Y \mid X)$~\cite{janzing:10:algomarkov}; this postulate also justifies causal inference based on additive noise models~\cite{janzing:10:justifyanm}.
However, as Kolmogorov complexity is not computable~\cite{vitanyi:93:book}, causal inference using algorithmic information theory requires practical implementations, or notions of independence. For instance, the information-geometric approach~\cite{janzing:12:igci} defines independence via orthogonality in information space for univariate continuous pairs.
Vreeken~\cite{vreeken:15:ergo} instantiates this framework with the cumulative entropy to infer the causal direction in continuous univariate and multivariate data. Mooij~\cite{mooij:10:mml} instantiates the first practical compression-based approach using the Minimum Message Length principle. Budhathoki and Vreeken approximate $K(X)$ and $K(Y \mid X)$ through MDL, and propose \textsc{Origo}\xspace, a decision tree based approach for causal inference on multivariate binary data~\cite{budhathoki:16:origo}. Marx and Vreeken~\cite{marx:17:slope} propose \textsc{Slope}\xspace, an MDL-based method employing local and global regression for univariate numeric data.
In contrast to all methods above, \textsc{Crack}\xspace can consider pairs of any cardinality, univariate or multivariate, and of same, different, or even mixed-type data.
\section{Experiments}
\label{sec:exps}
In this section, we evaluate \textsc{Crack}\xspace empirically. We implemented \textsc{Crack}\xspace in C++, and provide the source code including the synthetic data generator along with the tested datasets for research purposes.\!\footnote{\oururl} The experiments concerning \textsc{Crack}\xspace were executed single-threaded.
All tested data sets could be processed within seconds; over all pairs the longest runtime for \textsc{Crack}\xspace was $3.8$ seconds.
We compare \textsc{Crack}\xspace to \textsc{Cure}\xspace~\cite{sgouritsa:15:cure}, \textsc{IGCI}\xspace~\cite{janzing:12:igci}, \textsc{LTR}\xspace~\cite{janzing:10:ltr}, \textsc{Origo}\xspace~\cite{budhathoki:16:origo}, \textsc{Ergo}\xspace~\cite{vreeken:15:ergo} and \textsc{Slope}\xspace~\cite{marx:17:slope} using their publicly available implementations and recommended parameter settings.
\subsection{Synthetic data}
The aim of our experiments on synthetic data is to show the advantages of either score. In particular, we expect \ensuremath{\textsc{Crack}_{\Delta}}\xspace to perform well on nominal data and numeric data with balanced domain sizes and dimensions. On the other hand, \ensuremath{\textsc{Crack}_{\delta}}\xspace should have an advantage when it comes to numeric data with varying domain sizes and mixed-type data.
We generate synthetic data with assumed ground truth ${X \rightarrow Y}\xspace$ with $|X| = k$ and $|Y| = l$, each having $n=5 \, 000$ rows, in the following way. First, we randomly assign the type for each attribute in $X$. For nominal data, we randomly draw the number of classes between two (binary) and five and distribute the classes uniformly. Numeric data is generated by taking a normal distribution to the power of $q$ while keeping the sign, leading to a sub-Gaussian ($q < 1.0$) or super-Gaussian ($q > 1.0$) distribution.\!\footnote{We use super- and sub-Gaussians to ensure identifiability.}
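For illustration, this marginal generation step could be sketched in Python as follows; the parameter ranges follow the description above, while the function names are ours.
\begin{verbatim}
import math
import random

def sample_numeric(n, q):
    # Sign-preserving power transform of standard normal draws:
    # sub-Gaussian for q < 1.0, super-Gaussian for q > 1.0.
    zs = [random.gauss(0.0, 1.0) for _ in range(n)]
    return [math.copysign(abs(z) ** q, z) for z in zs]

def sample_nominal(n):
    # Uniform over a randomly chosen number of classes in {2,...,5}.
    k = random.randint(2, 5)
    return [random.randrange(k) for _ in range(n)]
\end{verbatim}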
To create data with the true causal direction ${X \rightarrow Y}\xspace$, we introduce dependencies from $X$ to $Y$, where we distinguish between splits and refinements. We denote by $\varphi \in [0,1]$ the probability of creating a dependency. For each $j \in \{ 1, \dots, l \}$ and each $X_i \in X$, we throw a biased coin based on $\varphi$ to determine whether we model a dependency from $X_i$ to $Y_j$. A split means that we find a category (nominal) or a split-point (numeric) on $X_i$ to split $Y_j$ into two groups, for which we model its distribution independently. As refinement, we either do a multiway split or model $Y_j$ as a linear or quadratic function of $X_i$ plus independent Gaussian noise.
\paragraph{Accuracy}
First, we compare the accuracies of \ensuremath{\textsc{Crack}_{\delta}}\xspace and \ensuremath{\textsc{Crack}_{\Delta}}\xspace with regard to single-type and mixed-type data. To do so, we generate $200$ synthetic data sets with $|X| = |Y| = 3$ for each dependency level where $\varphi \in \{0.0,0.1,\dots 1.0 \}$. Figure~\ref{fig:dependency} shows the results for numeric, nominal and mixed-type data.
At $\varphi = 1.0$, both approaches reach nearly $100\%$ accuracy on single-type data, and for single-type data the accuracy of both methods increases with the dependency level. At $\varphi = 0$, both approaches correctly abstain rather than making wrong decisions.
As expected \ensuremath{\textsc{Crack}_{\delta}}\xspace strongly outperforms \ensuremath{\textsc{Crack}_{\Delta}}\xspace on mixed-type data, reaching near $100\%$ accuracy, whereas \ensuremath{\textsc{Crack}_{\Delta}}\xspace reaches only $72\%$. On nominal data, \ensuremath{\textsc{Crack}_{\Delta}}\xspace picks up the correct signal faster than \ensuremath{\textsc{Crack}_{\delta}}\xspace.
\begin{figure}[t]%
\begin{minipage}[t]{.42\linewidth}
\includegraphics[]{dependency_nom.pdf}
\subcaption{nominal}\label{fig:dep:nom}
\end{minipage}%
\begin{minipage}[t]{.29\linewidth}
\includegraphics[]{dependency_num.pdf}
\subcaption{numeric}\label{fig:dep:num}
\end{minipage}%
\begin{minipage}[t]{.29\linewidth}
\includegraphics[]{dependency_mixed.pdf}
\subcaption{mixed}\label{fig:dep:mixed}
\end{minipage}%
\caption{Accuracy for $\Delta$ and $\delta$ on nominal, numeric and mixed-type data based on the dependency.}
\label{fig:dependency}
\end{figure}
\paragraph{Dimensionality}
Next, we check how sensitive both scores are to dimensionality, where we distinguish between asymmetric ($k \neq l$) and symmetric ($k=l$) pairs. We evaluated $200$ data sets per dimensionality. For the symmetric case, both methods are close to $100\%$ on single-type data, whereas only \ensuremath{\textsc{Crack}_{\delta}}\xspace also reaches this target on mixed-type data, as can be seen in the appendix.\!\footnotemark[1] We now discuss the more interesting case of asymmetric pairs in detail.
To test asymmetric pairs, we keep the dimension of one variable at three, $k = 3$, while we increase the dimension of the second variable $l$ from one to eleven. To avoid bias, we assign the dimension $k$ to $X$ and $l$ to $Y$, and swap the dimensions in every other test. We show the results in Figure~\ref{fig:asymmetric}. As expected, we observe that \ensuremath{\textsc{Crack}_{\delta}}\xspace has much fewer difficulties with the asymmetric data sets than \ensuremath{\textsc{Crack}_{\Delta}}\xspace. From $l = 3$ onwards, \ensuremath{\textsc{Crack}_{\delta}}\xspace is close to $100\%$. On nominal data, \ensuremath{\textsc{Crack}_{\Delta}}\xspace performs nearly perfectly and also has the clear advantage for $l=1$.
\begin{figure}[t]
\hfill
\includegraphics[]{asymmetric_origo.pdf}
\hfill
\includegraphics[]{asymmetric_nci.pdf}
\hfill
\caption{Accuracy of $\Delta$ (left) and $\delta$ (right) on asymmetric dimensions $k\in \{1,3,5,7,11\}$ and $3$ for nominal, numeric and mixed-type data.}
\label{fig:asymmetric}
\end{figure}
\subsection{Real world data}
Based on the evaluation on synthetic data, we test our approach on univariate benchmark data and on multivariate data consisting of known test sets as well as new causal pairs with known ground truth that we introduce in this paper.
\paragraph{Univariate benchmark}
To evaluate \textsc{Crack}\xspace on univariate data, we apply it to the well-known Tuebingen benchmark data set that consists of $100$ univariate pairs.\!\footnote{https://webdav.tuebingen.mpg.de/cause-effect/} The pairs mainly consist of numeric data with a few categorical instances. Therefore, we apply \ensuremath{\textsc{Crack}_{\Delta}}\xspace. We compare to the state-of-the-art methods that are applicable to multivariate and univariate data, \textsc{Origo}\xspace~\cite{budhathoki:16:origo} and \textsc{Ergo}\xspace~\cite{vreeken:15:ergo}, and to methods specialized for univariate pairs, \textsc{Cure}\xspace~\cite{sgouritsa:15:cure}, \textsc{IGCI}\xspace~\cite{janzing:12:igci} and \textsc{Slope}\xspace~\cite{marx:17:slope}. For each approach, we sort the results by their confidence. According to this order, we calculate for each position $k$ the percentage of correct inferences up to this point, called the decision rate. We weigh the decisions as specified by the benchmark, plot the results in Fig.~\ref{fig:decision_rate} and show the $95\%$ confidence interval of a fair coin flip as a grey area. Except for \textsc{Crack}\xspace and \textsc{Slope}\xspace, all methods do not perform significantly better than a fair coin flip. In particular, \textsc{Crack}\xspace has an accuracy of over $90\%$ for the first $41\%$ of its decisions and reaches $77.2\%$ overall.
Regarding the whole decision rate, \textsc{Crack}\xspace is nearly on par with \textsc{Slope}\xspace, which is, as far as we know, the current state of the art for univariate continuous data.
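The unweighted version of this decision-rate curve can be sketched in Python as follows; the weighting specified by the benchmark is omitted here for simplicity.
\begin{verbatim}
def decision_rate(results):
    # results: one (confidence, correct) pair per cause-effect pair,
    # with correct being True/False; returns the fraction of correct
    # inferences among the k most confident decisions, for each k.
    ordered = sorted(results, key=lambda r: r[0], reverse=True)
    rates, hits = [], 0
    for k, (_, ok) in enumerate(ordered, start=1):
        hits += ok
        rates.append(hits / k)
    return rates
\end{verbatim}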
\begin{figure}[t]
\centering
\includegraphics[]{decision_rate_weighted.pdf}
\caption{[Higher is better] Decision rates of \textsc{Crack}\xspace, \textsc{Origo}\xspace, \textsc{IGCI}\xspace, \textsc{Cure}\xspace, \textsc{Ergo}\xspace and \textsc{Slope}\xspace on univariate Tuebingen pairs (100) weighted as defined. Approaches that are only applicable to univariate data are drawn with dotted lines.}
\label{fig:decision_rate}
\end{figure}
\paragraph{Multivariate data}
To test \ensuremath{\textsc{Crack}_{\delta}}\xspace on multivariate mixed-type and single-type data, we collected 17 data sets. The dimensionality of each data set is listed in Table~\ref{tab:mv_comparison}. The first six data sets belong to the Tuebingen benchmark data set~\cite{mooij:16:pairs} and the next four were published by Janzing et al.~\cite{janzing:10:ltr}. Further, we extracted cause-effect pairs from the \textit{Haberman}~\cite{haberman:76:dataset}, \textit{Iris}~\cite{fisher:36:dataset}, \textit{Mammals}~\cite{heikinheimo:07:mammals} and \textit{Octet}~\cite{ghiringhelli:15:octet, vechten:69:quantum} data sets. Those are described in more detail in the appendix.
We compare \ensuremath{\textsc{Crack}_{\delta}}\xspace with \textsc{LTR}\xspace, \textsc{Ergo}\xspace and \textsc{Origo}\xspace. \textsc{Ergo}\xspace and \textsc{LTR}\xspace do not consider categorical data, and are hence not applicable to all data sets. In addition, \textsc{LTR}\xspace is only applicable to strictly multivariate data sets. \ensuremath{\textsc{Crack}_{\delta}}\xspace is applicable to all data sets and infers $15$ of the $17$ causal directions correctly, giving an overall accuracy of $88.2\%$. Importantly, the two wrong decisions have low confidences compared to the correct inferences.
\begin{table}[t]
\centering
\newcommand{\tiny{(n/a)}}{\tiny{(n/a)}}
\begin{tabular}{l@{\hspace{0.6em}} r@{\hspace{0.5em}} r@{\hspace{0.4em}} r c@{\hspace{0.35em}} c@{\hspace{0.35em}} c@{\hspace{0.35em}} c}
\toprule
& & & & & \multicolumn{3}{l}{\textbf{Decisions}} \\
\cmidrule{5-8}
\textbf{Dataset}&$m$&$k$&$l$&\footnotesize\textsc{LTR}\xspace&\footnotesize\textsc{Ergo}\xspace&\footnotesize\textsc{Origo}\xspace&\footnotesize\textsc{Crack}\xspace \\
\midrule
Climate&$10\,226$&4&4&\ding{51}&\ding{51}&--&-- \\
Ozone&$989$&1&3&\tiny{(n/a)}&\ding{51}&\ding{51}&\ding{51} \\
Car&$392$&3&2&--&\ding{51}&\ding{51}&\ding{51} \\
Radiation&$72$&16&16&--&--&--&\ding{51} \\
Symptoms&$120$&6&2&\ding{51}&\ding{51}&--&\ding{51} \\
Brightness&$1\,000$&9&1&\tiny{(n/a)}&\tiny{(n/a)}&--&\ding{51} \\
Chemnitz&$1\,440$&3&7&\ding{51}&\ding{51}&\ding{51}&\ding{51} \\
Precip.&$4\,748$&3&12&\ding{51}&--&--&\ding{51} \\
Stock 7&$2\,394$&4&3&--&\ding{51}&--&\ding{51} \\
Stock 9&$2\,394$&4&5&--&\ding{51}&--&\ding{51} \\
Haberman&$306$&3&1&\ding{51}&\ding{51}&--&--\\
Iris flower&$150$&4&1&\tiny{(n/a)}&\tiny{(n/a)}&--&\ding{51} \\
Canis&$2\,183$&4&2&\tiny{(n/a)}&\tiny{(n/a)}&\ding{51}&\ding{51} \\
Lepus&$2\,183$&4&3&\tiny{(n/a)}&\tiny{(n/a)}&\ding{51}&\ding{51} \\
Martes&$2\,183$&4&2&\tiny{(n/a)}&\tiny{(n/a)}&\ding{51}&\ding{51} \\
Mammals&$2\,183$&4&7&\tiny{(n/a)}&\tiny{(n/a)}&\ding{51}&\ding{51} \\
Octet&$82$&1&10&\tiny{(n/a)}&\ding{51}&\ding{51}&\ding{51} \\
\midrule
\multicolumn{2}{l}{\textbf{Accuracy}} & & & $0.56$ & $0.82$ & $0.47$ & $0.88$ \\
\bottomrule
\end{tabular}
\caption{Comparison of \textsc{LTR}\xspace, \textsc{Ergo}\xspace, \textsc{Origo}\xspace and \textsc{Crack}\xspace on seventeen multivariate data sets. We write {\tiny{(n/a)}} whenever a method is not applicable to the pair.}
\label{tab:mv_comparison}
\end{table}
\section{Conclusion}\label{sec:conc}
We considered the problem of inferring the causal direction from the joint distribution of two univariate or multivariate random variables $X$ and $Y$ consisting of single- or mixed-type data. We pointed out weaknesses of known causal indicators and proposed the Normalized Causal Indicator for mixed-type data and data with highly unbalanced domains. Further, we proposed a practical encoding based on classification and regression trees to instantiate these causal indicators, and provided a fast greedy heuristic to compute good solutions.
In the experiments we evaluate the advantages of the NCI and the common indicator, and give advice on when to use them. On real-world benchmark data, we are on par with the state of the art for univariate continuous data and beat the state of the art on multivariate data by a wide margin.
For future work, we aim to investigate the application of \textsc{Crack}\xspace to causal discovery, that is, inferring causal networks. In addition, we only selected a subset of possible refinements to exploit dependencies between candidates. This choice could be expanded by considering more complex functions, or by finding combinations of categories for splitting. However, unless specific care is taken, many such extensions will likely have repercussions on the runtime of our algorithm, which is why, besides being out of scope here, we leave this for future work.
\section*{Acknowledgements}
The authors wish to thank Kailash Budhathoki for insightful discussions.
Alexander Marx is supported by the International Max Planck Research School for Computer Science (IMPRS-CS).
Both authors are supported by the Cluster of Excellence ``Multimodal Computing and Interaction'' within the Excellence Initiative of the German Federal Government.
\balance
\bibliographystyle{abbrv} %
| {
"timestamp": "2017-10-17T02:15:18",
"yymm": "1702",
"arxiv_id": "1702.06385",
"language": "en",
"url": "https://arxiv.org/abs/1702.06385",
"abstract": "Given data over the joint distribution of two random variables $X$ and $Y$, we consider the problem of inferring the most likely causal direction between $X$ and $Y$. In particular, we consider the general case where both $X$ and $Y$ may be univariate or multivariate, and of the same or mixed data types. We take an information theoretic approach, based on Kolmogorov complexity, from which it follows that first describing the data over cause and then that of effect given cause is shorter than the reverse direction.The ideal score is not computable, but can be approximated through the Minimum Description Length (MDL) principle. Based on MDL, we propose two scores, one for when both $X$ and $Y$ are of the same single data type, and one for when they are mixed-type. We model dependencies between $X$ and $Y$ using classification and regression trees. As inferring the optimal model is NP-hard, we propose Crack, a fast greedy algorithm to determine the most likely causal direction directly from the data.Empirical evaluation on a wide range of data shows that Crack reliably, and with high accuracy, infers the correct causal direction on both univariate and multivariate cause-effect pairs over both single and mixed-type data.",
"subjects": "Machine Learning (stat.ML); Machine Learning (cs.LG)",
"title": "Causal Inference on Multivariate and Mixed-Type Data",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9678992969868542,
"lm_q2_score": 0.7310585669110203,
"lm_q1q2_score": 0.7075910729693937
} |
https://arxiv.org/abs/2111.10298 | An Asymptotic Equivalence between the Mean-Shift Algorithm and the Cluster Tree | Two important nonparametric approaches to clustering emerged in the 1970's: clustering by level sets or cluster tree as proposed by Hartigan, and clustering by gradient lines or gradient flow as proposed by Fukunaga and Hosteler. In a recent paper, we argue the thesis that these two approaches are fundamentally the same by showing that the gradient flow provides a way to move along the cluster tree. In making a stronger case, we are confronted with the fact the cluster tree does not define a partition of the entire support of the underlying density, while the gradient flow does. In the present paper, we resolve this conundrum by proposing two ways of obtaining a partition from the cluster tree -- each one of them very natural in its own right -- and showing that both of them reduce to the partition given by the gradient flow under standard assumptions on the sampling density. |
\section{Introduction}
\label{sec:introduction}
In the 1970's, two distinctively nonparametric approaches to clustering were proposed, and we argue in this paper that they are fundamentally the same.
\subsection{The cluster tree}
One of these approaches was the one that Hartigan proposed in his pioneering book on the topic \cite{hartigan1975clustering}, which was published in 1975. In this perspective, the connected components of the upper-level sets of the population density are the candidate clusters, and are seen as ``regions of high density separated from other such regions by regions of low density".
(Similar takes on clustering emerged around the same time, including in \cite{koontz1972nonparametric}, but historically this perspective has been attributed to Hartigan.)
This view of clustering is not complete in the sense that it depends on a choice of level, which is not at all obvious. Adopting an exploratory stance, Hartigan recommended looking at the entire tree structure, which he called the ``density-contour tree'' and is now better known as the {\em cluster tree}. This structure is the partial ordering between clusters that comes with the set inclusion operation: indeed, as defined, two clusters are either disjoint or one of them contains the other.
It is important to note that {\em the cluster tree does not provide a complete partitioning of the space}. It is in fact common to see the region outside the identified clusters as noise, even though in the typical situation this distinction or labeling does not appear natural.
See Figures~\ref{fig:level_sets_1d} and~\ref{fig:level_sets_2d} for illustrations in dimension~1 and~2, respectively.
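As a purely numerical illustration of this level-set view (not taken from the paper), one can approximate the clusters at a given level on a one-dimensional grid; the density used below is an equal-weight mixture chosen for simplicity, not the exact density of the figures.
\begin{verbatim}
import numpy as np

def clusters_at_level(f, xs, t):
    # Connected components of the upper level set {x : f(x) >= t},
    # approximated on a 1-d grid xs as maximal runs of grid points
    # whose density value is at least t.
    above = f(xs) >= t
    comps, current = [], []
    for x, a in zip(xs, above):
        if a:
            current.append(x)
        elif current:
            comps.append((current[0], current[-1]))
            current = []
    if current:
        comps.append((current[0], current[-1]))
    return comps

# Equal-weight mixture of N(0,1) and N(3,1): one component at low
# levels, two components at levels between the value at the local
# minimum and the value at the modes.
f = lambda x: 0.5 * (np.exp(-0.5 * x ** 2)
                     + np.exp(-0.5 * (x - 3) ** 2)) / np.sqrt(2 * np.pi)
xs = np.linspace(-5.0, 8.0, 2000)
\end{verbatim}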
\begin{figure}[!htpb]
\centering
\includegraphics[scale=0.7]{figures/level_sets_1d_2gaussians}
\caption{A sample of upper level sets of a density in dimension $d = 1$ with two modes (which happens to be the mixture of two normal distributions). At any level $0 < t \le t_0$, where $t_0 \approx 0.0348$ is the value of the density at the local minimum, $\mathcal{U}_t$ is connected, and thus corresponds to the cluster at the level. At any level $t_0 < t \le t_1$, where $t_1 \approx 0.2792$ is the value of the density at its local maximum near $x=0$, $\mathcal{U}_t$ has exactly two connected components, and these are the clusters at that level. (At $t = t_1$, one of the clusters is a singleton.) Finally, at $t_1 < t \le t_2$, where $t_2 \approx 0.4021$ is the value of the density at its global maximum (near $x=3$), $\mathcal{U}_t$ is again connected, and is thus the cluster at that level.
(At $t = t_2$, the cluster is a singleton.)}
\label{fig:level_sets_1d}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.8]{figures/level_sets_2d_2gaussians}
\caption{A sample of upper level sets of a density in dimension $d = 2$ with two modes (which happens to be the mixture of two normal distributions, one with a non-scalar covariance matrix). The situation is similar to that of \figref{level_sets_1d}, where the number of connected components of the upper level set at $t$ is $=1$ when $0 < t \le t_0$, where $t_0$ is the value of the density at the saddle point; $=2$ when $t_0 < t \le t_1$, where $t_1$ is the value of the density at the local (but not global) maximum; and $=1$ again when $t_1 < t \le t_2$, where $t_2$ is the maximum value of the density.}
\label{fig:level_sets_2d}
\end{figure}
Despite not providing an actual clustering of the population (or sample, in practice), the perspective offered in the form of the cluster tree has been influential in statistical research: the estimation of the cluster tree is studied, for example, in \citep{stuetzle2003estimating, stuetzle2010generalized, rinaldo2012stability, chaudhuri2014consistent, wang2019dbscan, eldridge2015beyond}, and the choice of the threshold level is considered, for example, in \citep{sriperumbudur2012consistency, steinwart2015fully, steinwart2011adaptive}.
The estimation of level sets has itself received a lot of attention in the literature \citep{polonik1995measuring, tsybakov1997nonparametric,rigollet2009optimal,chen2017density,mason2009asymptotic,walther1997granulometric,rinaldo2010generalized,singh2009adaptive}.
\subsection{The gradient flow}
\label{sec:intro gradient flow}
The other approach is the one that Fukunaga and Hostetler proposed in an article \cite{fukunaga1975}, incidentally also published in 1975.
In that article, they proposed to use the gradient lines of the population density to move an arbitrary point upward along the curve of steepest ascent in the topography given by the density. Under some regularity conditions related to Morse theory \citep{chacon2015population}, almost all points in the support of the density (i.e., the population) are in this way moved to a mode (i.e., a local maximum) of the density, and a partition of the population is obtained by grouping together the points ending at the same mode.
See \figref{gradient_lines_2d} for an illustration.
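As a purely numerical illustration (not part of the paper), one can approximate the gradient flow by discrete gradient ascent; the step size, stopping rule, and example mixture below are our own illustrative choices.
\begin{verbatim}
import numpy as np

def ascend(x0, grad, step=0.01, tol=1e-8, max_iter=100000):
    # Follow the direction of steepest ascent from x0; grad is the
    # gradient of the density. A crude stand-in for the gradient flow.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:      # (numerically) critical point
            break
        x = x + step * g
    return x

def grad_two_gaussians(x, mus=(np.array([0.0, 0.0]),
                               np.array([3.0, 1.0]))):
    # Gradient of an equal-weight mixture of two isotropic Gaussians.
    g = np.zeros(2)
    for mu in mus:
        w = 0.5 * np.exp(-0.5 * np.sum((x - mu) ** 2)) / (2 * np.pi)
        g += w * (mu - x)
    return g

# Points for which ascend(...) returns (numerically) the same mode are
# grouped into the same cluster.
\end{verbatim}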
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.8]{figures/gradient_lines_2d_2gaussians}
\caption{A sample of gradient ascent lines for the density of \figref{level_sets_2d}. The square points are the starting points, while the round points are the end points, which are not only critical points but also local maxima (modes) in this example.}
\label{fig:gradient_lines_2d}
\end{figure}
This approach to clustering has been rediscovered over the years \citep{cheng2004estimating, li2007nonparametric, roberts1997parametric}.
It has also generated a lot of activity on the methodology side. This was started by Fukunaga and Hostetler themselves as they proposed a method in their original paper which is today known as the {\em blurring MeanShift} algorithm.
Interestingly, this method is not strictly plug-in, and in fact behaves quite differently. Plug-in methods came later, when \citet{cheng1995} proposed what is now known as the {\em MeanShift} algorithm.
Since then, other methods have been proposed, including {\em MedoidShift} \cite{sheikh2007mode}, {\em QuickShift} \cite{vedaldi2008quick}, {\em MedianShift} \cite{shapira2009mode}, and more \cite{carreira2008generalised, carreira2006fast, chacon2019mixture}. The maximum slope approach of \citet{koontz1976graph} is also in that family.
\citet{carreira2015clustering} provides a fairly recent review of this work.
The MeanShift algorithm, and the twin problem of estimating the gradient lines of a density, are now well-understood \citep{cheng1995, comaniciu2002mean, cheng2004estimating, arias2016estimation, carreira2000mode}.
The behavior of the blurring MeanShift algorithm is not as well understood, although some results do exist \citep{cheng1995, carreira2008generalised, carreira2006fast}.
In this gradient flow approach to clustering, the density modes represent clusters, and this may have also contributed to motivate research work on the estimation of density modes \citep{hartigan1985dip, silverman1981using, minnotte1997nonparametric, dumbgen2008multiscale, burman2009multivariate, genovese2016non}, some of the methods being based on the gradient flow \cite{tsybakov1990recursive, NIPS2017_f457c545}. (Note that modes have been recognized as important features of a density for much longer, at least since \citet{pearson1895}.)
\subsection{Contribution}
The cluster tree proposed by \citet{hartigan1975clustering} and the gradient flow proposed by \citet{fukunaga1975} are clearly related by the central role that modes play in both of them: in the cluster tree, the modes correspond to the leaves; in the gradient flow, the modes correspond to the end points.
And, indeed, recent survey papers in the area by \citet{menardi2016review, chacon2020modal} discuss them together under the umbrella name of {\em modal clustering}.
In a recent paper \cite{arias2021level}, we go a step further and highlight even stronger ties. We do so by noting that the gradient flow is in a sense compatible with the cluster tree, in that a gradient line does not cross from one cluster to another at the same level and thus respects the partial ordering given by the cluster tree; and by showing how the gradient flow provides a way to move up and down the cluster tree, in the sense that between critical levels, the gradient flow yields a homeomorphism between a cluster and any of its descendants.
In the present paper, we draw more concrete, and arguably even closer, ties, which we believe establish these two approaches as being essentially equivalent. By necessity, we work under standard regularity assumptions on the density that guarantee that the gradient flow is well-defined and behaves appropriately. And we do so by suggesting a couple of ways --- which we believe to be very natural --- to obtain a partition from the cluster tree, and then prove that this partition coincides with the partition given by the gradient flow.
The remainder of the paper is organized as follows.
\secref{setting} provides the framework, introduces some concepts and the related notation.
\secref{cluster tree} is the heart of the paper and is where we describe the two algorithms for obtaining a partition from the cluster tree (\algref{1} and \algref{2}), and where we state our main results (\thmref{alg1} and \thmref{alg2}).
\secref{discussion} is a discussion section.
The technical proofs are gathered in \secref{proofs}.
\section{Setting and preliminaries}
\label{sec:setting}
Since both approaches to clustering that we discuss are defined in terms of the sampling density, we discuss and compare them in that context: we have full access to the underlying density, denoted $f$ throughout. All the features that we mention in the paper, in particular (upper) level sets and the cluster tree, gradient lines and the gradient flow, and modes, are always understood in reference to that density.
The density $f$ is with respect to the Lebesgue measure on $\mathbb{R}^d$ and is assumed to satisfy the following conditions:
\begin{itemize}
\item {\em Zero at infinity.}
$f$ converges to zero at infinity, meaning, $f(x) \to 0$ as $\|x\| \to \infty$.
\item {\em Twice differentiable.}
$f$ is twice continuously differentiable everywhere with bounded zeroth, first, and second derivatives.
\item
{\em Non-degenerate critical points.}
The Hessian is non-singular at every critical point of $f$.
\end{itemize}
The first condition is equivalent to $f$ having bounded positive (upper) level sets --- see below.
A function that satisfies the last condition is sometimes referred to as a {\em Morse function} \citep{milnor1963morse}.
Similar conditions are standard in the literature cited in the \hyperref[sec:introduction]{Introduction}.
We discuss possible extensions in \secref{extensions}.
\begin{remark}
The reader will notice that the dimension of the observation space does not play an important role. Of course, in practice, the related methodology has to contend with a standard curse of dimensionality.
\end{remark}
\subsection{Level sets and cluster tree}
For a level $t > 0$, the $t$-level set of $f$ is given by
\begin{equation}
\label{level}
\mathcal{L}_t := \{x : f(x) = t\},
\end{equation}
while the $t$-upper level set of $f$ is given by
\begin{equation}
\label{up}
\mathcal{U}_t := \{x : f(x) \ge t\}.
\end{equation}
Throughout, whether specified or not, we will only consider levels that are in $(0, \max f)$. Note that, because $f$ converges to zero at infinity and is continuous, its (upper) level sets are compact.
In the level set view of clustering, any connected component of any upper level set is considered a (potential) cluster. The cluster tree, as we consider it here, is then the partial ordering that results from the fact that two clusters are either disjoint or one of them includes the other. (The reader is invited to verify this statement, which comes from the fact that $\mathcal{U}_t \subset \mathcal{U}_s$ when $s \le t$, as is obvious from the definition in \eqref{up}.)
We say that $\mathcal{C}$ is a {\em cluster} if it is a connected component of an upper level set. We say it is a {\em leaf cluster} if the cluster tree does not branch out past $\mathcal{C}$, or said differently, if $\mathcal{C}$ and all of its descendants have at most one child.
The last descendant of a leaf cluster is a singleton defined by a mode, and we will use that mode to represent the corresponding lineage.
For a point $x$ and $t \le f(x)$, let $\mathcal{C}_t(x)$ denote the connected component of $\mathcal{U}_t$ that contains $x$.
\subsection{Gradient lines and gradient flow}
\citet{fukunaga1975} suggest to ``assign each [point] to the nearest mode along the direction of the gradient''. Formally, the gradient ascent line starting at a point $x$ is the curve given by the image of $\gamma_x$, the parameterized curve defined by the following ordinary differential equation (ODE)
\begin{equation} \label{gradient_flow}
\gamma_x(0) = x; \quad
\dot\gamma_x(t) = \nabla f(\gamma_x(t)).
\end{equation}
Under our conditions on $f$, $\nabla f$ is Lipschitz, and this is enough for standard theory for ODEs \citep[Sec 17.1]{hirsch2012differential} to justify this definition. This, and the fact that this is a gradient flow \citep[Sec 9.3]{hirsch2012differential}, gives the following.
\begin{lemma}
\label{lem:gradient flow}
For any $x$, the function $\gamma_x$ defined in \eqref{gradient_flow} is well-defined on $[0,\infty)$, with $\gamma_x(t)$ converging to a critical point of $f$ as $t \to \infty$.
\end{lemma}
That $s \mapsto f(\gamma_x(s))$ is non-decreasing follows simply from
\begin{equation}
\label{density_increase}
\frac{{\rm d} }{{\rm d} s} f(\gamma_x(s)) = \|\nabla f(\gamma_x(s))\|^2 \ge 0.
\end{equation}
The {\em basin of attraction} of a point $x_0$ is defined as $\{x: \gamma_x(\infty) = x_0\}$. Note that this set is empty unless $x_0$ is a critical point (i.e., $\nabla f(x_0) = 0$).
In the gradient line view of clustering, we call a cluster any basin of attraction of a mode.
It turns out that, if $f$ is a Morse function \citep{milnor1963morse}, then all these basins of attraction, sometimes called stable manifolds, provide a partition of the support up to a set of zero measure.
\begin{lemma}
\label{lem:basins}
Under the assumed regularity conditions, the basins of attraction of the local maxima, by themselves, cover the population, except for a set of zero measure.
\end{lemma}
Indeed, by \lemref{gradient flow}, the basins of attraction partition the entire population. In addition, the set of critical points is discrete \citep[Cor~3.3]{banyaga2013lectures}, the basin of attraction of each critical point that is not a local maximum is a (differentiable) submanifold of co-dimension at least one\footnote{ The requirement in that theorem that the function be compactly supported is clearly not essential.} \cite[Th~4.2]{banyaga2013lectures}, and therefore has zero Lebesgue measure.
For more background on Morse functions and their use in statistics, see the recent articles of \citet{chacon2015population} and \citet{chen2017statistical}.
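To make this description of clustering by the gradient flow concrete, the following is a minimal numerical sketch (in Python). The two-component Gaussian mixture density, the forward Euler step size used to discretize \eqref{gradient_flow}, and the tolerances are all illustrative choices, not part of the population-level definition.
\begin{verbatim}
import numpy as np

# A minimal sketch: an illustrative two-component Gaussian mixture density on R^2,
# its gradient, and a crude forward Euler discretization of the gradient flow.
mu = np.array([3.0, 0.0])

def f(x):
    return (np.exp(-0.5 * np.sum(x**2)) + np.exp(-0.5 * np.sum((x - mu)**2))) / (4 * np.pi)

def grad_f(x):
    return (-x * np.exp(-0.5 * np.sum(x**2))
            - (x - mu) * np.exp(-0.5 * np.sum((x - mu)**2))) / (4 * np.pi)

def ascend(x, step=0.5, tol=1e-9, max_iter=10_000):
    # Move x along its gradient ascent line (x <- x + step * grad f(x)) until the
    # gradient is negligible, i.e., until a critical point is (approximately) reached.
    x = np.asarray(x, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        x = x + step * g
    return x

# Group a few points according to the mode their gradient ascent line converges to.
points = np.array([[-1.0, 0.5], [0.5, -0.2], [2.5, 0.3], [4.0, -1.0]])
modes = np.array([ascend(p) for p in points])
print(np.round(modes, 2))  # the first two points end near (0, 0), the last two near (3, 0)
\end{verbatim}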
In their paper, \citet{fukunaga1975} propose to discretize the following variant of the gradient ascent flow
\begin{equation} \label{gradient_flow_ms}
\xi_x(0) = x; \quad
\dot\xi_x(t) = \frac{\nabla f(\xi_x(t))}{f(\xi_x(t))}.
\end{equation}
Note that this is the gradient flow given by $\log f$, while the flow defined in \eqref{gradient_flow} is the gradient flow given by $f$ itself. The rationale given in \cite{fukunaga1975} is that the normalization by $f$ helps move points in low-density regions upward more quickly, but note that, regardless, the gradient lines defined by these two flows coincide --- only the speed at which a line is traveled changes.
Later in the paper, we use another variant: the gradient line parameterized by the level, which takes the form
\begin{equation} \label{gradient_flow_norm}
\zeta_x(0) = x; \quad
\dot\zeta_x(t) =
\frac{\nabla f(\zeta_x(t))}{\|\nabla f(\zeta_x(t))\|^2}.
\end{equation}
Indeed, $\dot\zeta_x(t) \propto \nabla f(\zeta_x(t))$, so that $\zeta_x$ traces the same gradient line as $\gamma_x$ defined in \eqref{gradient_flow}, and elementary derivations show that $f(\zeta_x(t)) = f(x) + t$ for all applicable $t$, knowing that here the flow is only defined for $t \in [0, f(x_*)-f(x))$, where $x_*$ denotes the critical point to which $x$ is attracted.
In \lemref{converge mode}, we show that $\zeta_x(t)$ converges to $x_*$ as $t \nearrow f(x_*)-f(x)$, as expected.
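To see why $f(\zeta_x(t)) = f(x) + t$, note that, by the chain rule and the definition in \eqref{gradient_flow_norm},
\begin{equation*}
\frac{{\rm d}}{{\rm d} t} f(\zeta_x(t))
= \nabla f(\zeta_x(t))^\top \dot\zeta_x(t)
= \frac{\|\nabla f(\zeta_x(t))\|^2}{\|\nabla f(\zeta_x(t))\|^2}
= 1,
\end{equation*}
so that integrating over $[0, t]$ and using $\zeta_x(0) = x$ gives the claim.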
\begin{remark}
The difference between the MeanShift algorithm and the blurring MeanShift algorithm does not arise at the population level, but at the sample level, and has to do, roughly, with whether the data points remain fixed or are updated with each iteration of an Euler discretization of \eqref{gradient_flow_ms}.
\end{remark}
\subsection{Metric projection and reach}
The concepts of metric projection and reach, the latter introduced by \citet{federer1959curvature}, will play an important role. For a point $x$ and $\delta > 0$, $B(x,\delta)$ is the open ball centered at $x$ of radius $\delta$.
For $\mathcal{A} \subset \mathbb{R}^d$ and $\delta > 0$, define its $\delta$-neighborhood as
\begin{equation}
B(\mathcal{A}, \delta)
:= \bigcup_{a \in \mathcal{A}} B(a, \delta)
= \{x : \dist(x, \mathcal{A}) < \delta\},
\end{equation}
where $\dist(x, \mathcal{A}) := \inf\{\|x-a\| : a \in \mathcal{A}\}$.
The projection (aka metric projection) of a point $x$ onto a closed set $\mathcal{A}$ is the subset of points in $\mathcal{A}$ that are closest to $x$.
That subset is non-empty if $\mathcal{A}$ is non-empty.
We say that $x$ has a unique projection onto $\mathcal{A}$ if its projection is a singleton.
The reach of $\mathcal{A} \subset \mathbb{R}^d$, denoted $\rho(\mathcal{A})$, is defined as the supremum over $\delta>0$ such that every point in $B(\mathcal{A}, \delta)$ has a unique projection onto $\mathcal{A}$.
Thus the projection on a set $\mathcal{A}$ is well-defined on $B(\mathcal{A}, \rho(\mathcal{A}))$.
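As a simple illustration of these notions, take $\mathcal{A}$ to be the sphere of radius $r$ centered at the origin in $\mathbb{R}^d$. Any point $x \ne 0$ has the unique projection $r x/\|x\|$ onto $\mathcal{A}$, while the origin, at distance exactly $r$ from $\mathcal{A}$, has the entire sphere as its projection; hence every point of $B(\mathcal{A}, \delta)$ has a unique projection when $\delta \le r$ but not when $\delta > r$, and $\rho(\mathcal{A}) = r$.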
\section{Cluster tree: A successive projection algorithm}
\label{sec:cluster tree}
The cluster tree was presented by \citet{hartigan1975clustering} as a partial order structure on the clusters defined by the level sets. To this day, there is no clear proposal for a partition of the entire population based on the cluster tree.
(When we say `entire population', we mean the support of the density understood as being up to a set of measure zero.)
We present below what we see as a natural definition. The core idea is intuitive enough: each point in the population is moved up the cluster tree to a leaf mode. The implementation of this idea is not as straightforward, although it remains natural: discretize the levels and project on successively higher and higher level sets. It turns out that, as the level discretization becomes finer and finer, this algorithm is well-defined for a larger and larger proportion of the population, eventually covering almost all of the population.
And this behavior does not depend on the particular discretization.
Moreover, the limiting sequence obtained in this fashion converges in a proper sense to the gradient ascent line with origin the starting point, implying, therefore, that clustering according to the cluster tree --- if understood as doing the above --- coincides with clustering according to the gradient flow.
And this is our central message, announced in the \hyperref[sec:introduction]{Introduction}.
\subsection{Moving up the cluster tree: Algorithm~1}
\label{sec:alg1}
We define a way --- which we believe to be very natural --- of moving up the cluster tree, starting at an arbitrary point in the population and ending at a leaf cluster. This process will result in a partitioning of the entire population.
In this first version, we discretize the levels. Let $\eta$ denote the size of the discretization step.
\begin{algorithm}
\label{alg:1}
Given any point $x$ in the support of $f$, define the sequence of levels: set $t_0 = f(x)$, and for $k \ge 1$, let $t_k = t_0 + k \eta$.
Then build the following sequence of points: set $q_0 = x$, and for $k \ge 1$, proceed as follows: if $\mathcal{L}_{t_k} = \emptyset$, stop; otherwise, let $q_k$ denote a metric projection of $q_{k-1}$ onto $\mathcal{L}_{t_k}$, and if $q_k \notin \mathcal{C}_{t_{k-1}}(q_{k-1})$, stop; in either case, upon stopping, return $\argmax_{\,\mathcal{C}_{t_{k-1}}(q_{k-1})} f$.
\end{algorithm}
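For concreteness, the following is a minimal numerical sketch of \algref{1} (in Python). The density, the level step $\eta$, and all tolerances are placeholder choices; the metric projection is computed with a generic constrained optimizer (SciPy's SLSQP); the membership check $q_k \in \mathcal{C}_{t_{k-1}}(q_{k-1})$ is only approximated by checking that the segment $[q_{k-1}, q_k]$ stays in the upper level set; and the final $\argmax$ is approximated by the last point of the sequence.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# A minimal sketch, with a placeholder Gaussian mixture density on R^2; the level
# step eta and all tolerances are illustrative only.
def f(x):
    x = np.asarray(x, dtype=float)
    return (np.exp(-0.5 * np.sum(x**2))
            + np.exp(-0.5 * np.sum((x - [3.0, 0.0])**2))) / (4 * np.pi)

def project_to_level(q, t, f):
    # Metric projection of q onto the level set {f = t}, computed numerically as
    # the point closest to q satisfying the equality constraint f(y) = t.
    res = minimize(lambda y: np.sum((y - q)**2), x0=q + 1e-3,
                   constraints=[{'type': 'eq', 'fun': lambda y: f(y) - t}],
                   method='SLSQP')
    return res.x if res.success else None

def same_cluster(q_old, q_new, t_old, f, n_check=20):
    # Crude surrogate for the check that q_new lies in the cluster of q_old at level
    # t_old: verify that the segment [q_old, q_new] stays in the upper level set.
    return all(f((1 - s) * q_old + s * q_new) >= t_old - 1e-8
               for s in np.linspace(0.0, 1.0, n_check))

def algorithm_1(x, f, eta=1e-3, max_steps=10_000):
    q, t = np.asarray(x, dtype=float), f(x)
    for _ in range(max_steps):
        q_new = project_to_level(q, t + eta, f)
        if q_new is None or not same_cluster(q, q_new, t, f):
            break  # return q as a stand-in for the argmax of f over the current cluster
        q, t = q_new, t + eta
    return q

print(algorithm_1(np.array([0.5, -0.2]), f))  # ends near the mode close to the origin
\end{verbatim}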
Note that the algorithm must end in a finite number of steps since the level increases by $\eta$ at every step.
The condition that the new projection point remains inside the cluster associated with the current point is there to ensure that the sequence respects the partial ordering that is the cluster tree, and in particular prevents long jumps across the cluster tree.
We also remark that other options include stopping when a projection is not unique; stopping when reaching a leaf cluster; and pursuing all possible paths when a projection is not unique and returning all the leaf clusters reached in that way. It turns out that these options are essentially equivalent for almost all the points in the population under the regularity assumed of the density.
\begin{theorem}
\label{thm:alg1}
Suppose that $x$ is in the basin of attraction of a mode $x_*$.
Then, for $\eta$ small enough, \algref{1} given $x$ as starting point returns the mode $x_*$.
Furthermore, as $\eta \to 0$, the polygonal line connecting the sequence $(q_k)$ computed by the algorithm converges to the gradient ascent line originating at $x$.
\end{theorem}
The first part of the theorem is established by showing that the sequence that the algorithm computes enters a leaf cluster containing $x_*$, and once this is established, the sequence must remain there by construction. This is done by comparing the sequence with the following sequence sampled from the gradient line: $z_0 = x$, and for $k \ge 1$, $z_k = \zeta_x(t_k - t_0)$.
In the second part, convergence is with respect to the Hausdorff metric for subsets in $\mathbb{R}^d$ --- see \eqref{d_H}.
By \lemref{basins}, the result applies to almost all points in the support of $f$. The theorem thus shows that the way of partitioning the population according to the cluster tree provided by \algref{1} in the infinitesimal limit $\eta\to0$ --- which again we find rather natural --- is equivalent to partitioning the population according to the gradient flow.
\subsection{Moving up the cluster tree: Algorithm~2}
\label{sec:alg2}
We look at another natural way of moving up the cluster tree. In this second variant, we control the spatial step size as opposed to the level step size in the previous algorithm.
Let the spatial step size be $\varepsilon > 0$.
\begin{algorithm}
\label{alg:2}
Given a point $x$ in the population, successively project onto the highest level set within distance $\varepsilon$: set $t_0 = f(x)$ and $q_0 = x$, and for $k \ge 1$, let $t_k = \max\{f(y) : \|y - q_{k-1}\| \le \varepsilon\}$, and let $q_k$ denote a projection of $q_{k-1}$ onto $\mathcal{L}_{t_k}$; if $q_k = q_{k-1}$, stop and return that point.
\end{algorithm}
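Analogously, here is a minimal numerical sketch of \algref{2} (in Python, with the same placeholder density and with illustrative values of $\varepsilon$ and of the tolerances). Anticipating an observation made in \secref{discussion}, the two sub-steps (computing $t_k$ and projecting onto $\mathcal{L}_{t_k}$) are collapsed into a single maximization of $f$ over the closed ball of radius $\varepsilon$, which agrees with the algorithm except possibly at the very last step; the stopping rule $q_k = q_{k-1}$ is replaced by a numerical surrogate.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# A minimal sketch, with the same placeholder density as in the sketch of
# Algorithm 1; eps and the tolerances are illustrative only.
def f(x):
    x = np.asarray(x, dtype=float)
    return (np.exp(-0.5 * np.sum(x**2))
            + np.exp(-0.5 * np.sum((x - [3.0, 0.0])**2))) / (4 * np.pi)

def algorithm_2(x, f, eps=0.05, max_steps=10_000):
    q = np.asarray(x, dtype=float)
    for _ in range(max_steps):
        # Maximize f over the closed ball of radius eps around q. Except possibly at
        # the very last step, the maximizer coincides with the projection of q onto
        # the level set at level t_k = max{f(y) : |y - q| <= eps} (see the discussion
        # section), so the two sub-steps of the algorithm are collapsed into one.
        res = minimize(lambda y: -f(y), x0=q,
                       constraints=[{'type': 'ineq',
                                     'fun': lambda y: eps**2 - np.sum((y - q)**2)}],
                       method='SLSQP')
        q_new = res.x
        if f(q_new) <= f(q) + 1e-12:  # numerical surrogate for the rule q_k = q_{k-1}
            return q
        q = q_new
    return q

print(algorithm_2(np.array([0.5, -0.2]), f))  # ends near the mode close to the origin
\end{verbatim}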
Unlike the previous algorithm, it is not obvious that \algref{2} will end in a finite number of steps.
We note that, as before, other options are possible, but again, they would be equivalent for almost all points in the population.
\begin{theorem}
\label{thm:alg2}
Suppose that $x$ is in the basin of attraction of a mode $x_*$.
Then, for $\varepsilon$ small enough, \algref{2} given $x$ as starting point returns the mode $x_*$.
Furthermore, as $\varepsilon \to 0$, the polygonal line connecting the sequence $(q_k)$ computed by the algorithm converges to the gradient ascent line originating at $x$.
\end{theorem}
The first part of the theorem is established by showing that the sequence that the algorithm computes converges to the mode.
We note that such a convergence, if it happens, must happen in a finite number of steps since, the moment $x_*$ is within distance $\varepsilon$ of $q_k$, we have $q_{k+1} = x_*$ and the process stops.
The technical arguments are otherwise similar to those underlying \thmref{alg1}.
As before, by \lemref{basins}, the result above applies to almost all points in the population, thus establishing that the partition given by \algref{2} in the infinitesimal limit $\varepsilon\to0$ --- another natural way of dividing the support according to the cluster tree --- is equivalent to partitioning the population according to the gradient flow.
\section{Discussion}
\label{sec:discussion}
\subsection{Comparison with Euler's methods}
The forward and the backward Euler methods are two well-known numerical methods for solving ODEs. \citet{fukunaga1975} suggested to use the forward Euler method to approximate the gradient flow of \eqref{gradient_flow_ms}: for a fixed small $\varepsilon>0$, starting from a given point $x_0=x$, successively compute
\begin{equation}
x_{i+1} = x_i + \varepsilon \frac{\nabla f(x_i)}{f(x_i)},\quad i=0,1,2, \dots,
\end{equation}
until convergence, where the gradient of $\log f$ at the current location is explicitly used to generate the next point in the sequence.
As remarked in \secref{intro gradient flow}, the method that they proposed --- now known as the blurring MeanShift algorithm --- is not a plug-in implementation of this. Such a method came later from \citet{cheng1995} --- a method now known as the MeanShift algorithm.
A more detailed discussion of the connection between the MeanShift algorithm and the forward Euler method is given in \citep{arias2016estimation}.
In contrast to the forward Euler method, the backward Euler method is an implicit approach: for a fixed small $\varepsilon>0$, starting from a given point $x_0=x$, the backward Euler method computes
\begin{equation}
\label{backward_Euler}
x_{i+1} = x_i + \varepsilon \nabla f(x_{i+1}),\quad i=0,1, 2, \dots,
\end{equation}
until convergence. Below we argue that our \algref{1} and \algref{2} can be viewed as two variants of the backward Euler method. The backward Euler method is known to be stable and the generated sequence has a monotonic property, which is also shared by our two algorithms.
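To spell out the implicit nature of the update in \eqref{backward_Euler}, the following is a minimal sketch (in Python) of a single backward Euler step, solved by a simple fixed-point iteration; the density (a standard Gaussian), the step size, and the tolerances are illustrative choices, and this is only meant to illustrate the scheme, not to serve as a practical solver.
\begin{verbatim}
import numpy as np

# A minimal sketch of one backward Euler step y = x + eps * grad f(y), solved by a
# simple fixed-point iteration; placeholder density: a standard Gaussian on R^2.
f = lambda x: np.exp(-0.5 * np.sum(np.asarray(x)**2)) / (2 * np.pi)
grad_f = lambda x: -np.asarray(x, dtype=float) * f(x)

def backward_euler_step(x, grad_f, eps=0.1, n_iter=100, tol=1e-12):
    y = np.asarray(x, dtype=float)
    for _ in range(n_iter):
        y_new = x + eps * grad_f(y)   # fixed-point update for the implicit equation
        if np.linalg.norm(y_new - y) < tol:
            break
        y = y_new
    return y

print(backward_euler_step(np.array([1.0, -0.5]), grad_f))  # moves slightly toward the mode at 0
\end{verbatim}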
\algref{1}: recall that before the algorithm stops, $q_k$ is a metric projection of $q_{k-1}$ onto $\mathcal{L}_{t_k}$, where $t_k = t_0 + k\eta$. When $\eta$ is small enough, $q_{k-1} - q_k$ is normal to $\mathcal{L}_{t_k}$ at $q_k$, so that we can write
\begin{equation}
\label{Alg1_equation}
q_k = q_{k-1} + \varepsilon_k\nabla f(q_k),
\end{equation}
where, this time, $\varepsilon_k$ depends on $k$ and is determined as the smallest value such that $q_k\in \mathcal{L}_{t_k}$. The dynamical systems \eqref{backward_Euler} and \eqref{Alg1_equation} are otherwise the same. It is in this sense that we view \algref{1} as being a variant of the backward Euler method.
\algref{2}: similarly to \algref{1}, $q_k$ is a projection of $q_{k-1}$ onto $\mathcal{L}_{t_k}$, but with a different definition of the level, namely $t_k = \max\{f(y) : \|y - q_{k-1}\| \le \varepsilon\}$. We can, of course, use the expression in \eqref{Alg1_equation} again to interpret \algref{2} as a variant of the backward Euler method. There is also a different perspective: in \algref{2}, the metric projection of $q_{k-1}$ onto $\mathcal{L}_{t_k}$ and the maximization of $f$ over $\partial B(q_{k-1}, \varepsilon)$ are in fact the same thing except for the very last step. (This is not completely obvious, but not hard to anticipate, and we omit further details.) That is, the projection point $q_k$, unless it is the final output, is the solution to the following constrained maximization problem:
\begin{align}
\label{Lagrange}
\text{maximize } f(y), \text{ subject to } \|y - q_{k-1}\|^2 = \varepsilon^2.
\end{align}
With a Lagrange multiplier $\lambda$, the corresponding Lagrange function of the above problem is
\begin{equation}
L(y,\lambda) := f(y) - \lambda (\|y - q_{k-1}\|^2 - \varepsilon^2).
\end{equation}
Taking the gradient of $L$ with respect to $y$ and equating it to 0, we obtain
\begin{equation}
\label{Lagrange_gradient}
\nabla f(y) = 2\lambda (y-q_{k-1}).
\end{equation}
Since we know $q_k$ is the solution, we can write
\begin{equation}
\label{Alg2_equation}
q_k = q_{k-1} + \frac{1}{2\lambda} \nabla f(q_k ),
\end{equation}
where $\lambda$ has to be solved using \eqref{Lagrange_gradient} and the constraint in \eqref{Lagrange}.
We have thus argued that, compared with the original backward Euler method in \eqref{backward_Euler}, \eqref{Alg2_equation} can also be considered as a variant with a varying step size, which here needs to be determined to guarantee that $\|q_k - q_{k-1}\| = \varepsilon$ before the very last step.
\subsection{Milder assumptions on the density}
\label{sec:extensions}
The assumptions that we made on the density are very standard in this literature, and they are mostly driven by the necessity to have gradient lines be well-defined and well-behaved, and to have Morse theory apply.
But if this can be guaranteed in other ways, then it seems possible to extend the results to other densities.
A case in point --- one that is relatively easy to address --- is the assumption that the modes are non-degenerate. It is not hard to extend the first part of \thmref{alg1} and the first part of \thmref{alg2} --- which are the main parts since they imply the equivalence that is the thesis of this paper --- to a setting where some modes are degenerate. We do assume that the modes are isolated, however, and this is now a separate assumption. (Non-degenerate modes are automatically isolated.)
Indeed, take \algref{1}, and notice that the first part of \thmref{alg1} is stated for fixed $\eta$. Then the idea is very simple: modify the density within the very last level set containing the mode (or the second-to-last if the last level corresponds to the mode's level) in such a way that the resulting function has a non-degenerate mode at the same location and no other mode within the region of modification. Then, since the algorithm builds the sequence without having to access the values of the density inside that region, the sequence it builds is identical up to the second-to-last step, and also at the very last step, since that step corresponds to jumping directly to the mode.
Formalizing this is rather straightforward, but tedious, and we do not provide further details.
Another similar extension is to situations in which the density has a flat top. This is a situation in which a level set has non-empty interior, in which case each connected component of such a level set that has non-empty interior is considered a flat top and corresponds to a leaf in the cluster tree. The algorithms that we have considered continue to work.
From the gradient flow perspective, points whose gradient ascent lines end at the same flat top are clustered together.
And it is not hard to see that the equivalence remains the same in that the first part of \thmref{alg1} and first part of \thmref{alg2} continue to apply.
This can be argued in the same way, by modifying the density a little bit within each flat top to make it of Morse-type.
Again, making this rigorous is not hard to do, but laborious.
Where the results do not apply, and in fact the equivalence becomes rather questionable, is when the density has discontinuities. In that case, clustering by gradient flow seems less appropriate (even if the density is piecewise smooth), as the discontinuities imply a saliency of a larger magnitude that is better captured by a level set approach to clustering.
\subsection{Bound on the Hausdorff distance}
\label{sec:Hausdorff bound}
We designed \algref{1} and \algref{2} as natural ways to obtain a partition from the cluster tree, and the analysis we provide for them in \thmref{alg1} and \thmref{alg2} establishes that both are equivalent to the partition defined by the gradient flow when the step size converges to zero --- which really is also natural given the context. As this was our main goal, this qualitative analysis is enough for our purposes.
However, it turns out that our proof arguments can be easily augmented to yield a convergence rate for the polygonal line generated by any one of these two algorithms and the corresponding gradient line. A relatively small elaboration of these arguments yields that {\em the Hausdorff distance between these two lines is $O(\sqrt{\eta})$ for \algref{1} and $O(\varepsilon)$ for \algref{2}.}
We provide more formal arguments establishing these rates in \secref{proof1 Hausdorff bound} and \secref{proof2 Hausdorff bound}.
We do not know if these rates are sharp, but this is very tangential to our main purpose here.
\subsection{Uniform consistency of \algref{1} and \algref{2}}
\label{sec:uniform}
\thmref{alg1} and \thmref{alg2} only give pointwise (i.e., for a fixed $x$ as the starting point) convergence results for the two algorithms.
It turns out that this convergence behavior is also uniform in the following sense. {\em For \algref{1}, there is a measurable set $\Omega_\eta$ with probability at least $1-g(\eta)$ for some $g(\eta) \to 0$ as $\eta \to 0$ such that \algref{1} applied to any $x \in \Omega_\eta$ returns the associated mode, meaning, $\lim_{t\to\infty}\gamma_x(t)$. And an analogous result applies for \algref{2}.}
We give more formal arguments in \secref{proof1 uniform} and \secref{proof2 uniform}.
A rate of convergence of the order of $g(\eta)=O(\sqrt{\eta})$ can also be obtained if one adopts the framework of \citet{chen2017statistical}, and assumes the same conditions on the density, including the assumption (D) there, which requires ``the gradient flow to move away from the boundaries of clusters'', where a ``cluster'' refers to the basin of attraction of a mode. Other conditions include that $f$ is a Morse function with bounded support and is thrice differentiable with bounded third derivatives. Details are omitted.
\subsection{Methodology inspired by \algref{1} and \algref{2}}
\label{sec:methods}
\algref{1} and \algref{2} were proposed as ways to obtain a partition of the population which, being based on successive projections onto higher and higher level sets, may merit being considered `natural'. And we showed that both ways lead to the partition given by the gradient flow. This equivalence between level set or cluster tree clustering and gradient lines or gradient flow is exactly what we were aiming for.
At the same time, these algorithms can be easily turned into methods by simply applying them on an estimate of the density instead of the density itself when, in the usual framework where only a sample is available, the latter is not available. This plug-in approach yields the following two clustering methods.
\begin{method}
\label{method:1}
Given a sample, $x_1, \dots, x_n \in \mathbb{R}^d$, estimate the density to obtain $\hat f$.
Given some $\eta>0$, apply \algref{1} to each data point with $\hat f$ in place of $f$: let $c_i$ denote what the algorithm returns when applied to $x_i$.
Group the data points according to the distinct elements in $\{c_1, \dots, c_n\}$.
\end{method}
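As an illustration of this plug-in principle, here is a minimal sketch of \methodref{1} (in Python), where $\hat f$ is a kernel density estimate and where \texttt{algorithm\_1} refers to the numerical sketch of \algref{1} given in \secref{alg1}, assumed to be in scope; the simulated sample, the bandwidth (Scott's rule, the SciPy default), the level step, and the grouping tolerance are all placeholder choices.
\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde

# A minimal sketch of Method 1. The routine algorithm_1 is assumed to be the one
# from the sketch given after Algorithm 1 (taking a starting point, a density, and
# a level step eta); the sample and all tuning choices below are placeholders.
rng = np.random.default_rng(0)
sample = np.vstack([rng.normal(loc=[0.0, 0.0], scale=1.0, size=(50, 2)),
                    rng.normal(loc=[3.0, 0.0], scale=1.0, size=(50, 2))])

kde = gaussian_kde(sample.T)                  # \hat f, bandwidth by Scott's rule
f_hat = lambda x: kde(np.atleast_2d(x).T)[0]  # evaluate \hat f at a single point

# Apply Algorithm 1 to each data point with \hat f in place of f, then group data
# points whose returned points (approximate modes) are close to one another.
c = [algorithm_1(x, f_hat, eta=1e-3) for x in sample]
keys = [tuple(np.round(ci, 1)) for ci in c]
labels = [sorted(set(keys)).index(k) for k in keys]
print(len(set(labels)), "clusters found")
\end{verbatim}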
\methodref{1} can be related to the {\em QuickShift} algorithm proposed by \citet{vedaldi2008quick}.
QuickShift works as follows: based on a sample and an estimate of the density, starting at an arbitrary point, the algorithm iteratively moves to the closest data point whose estimated density is strictly larger than the current value, as long as this data point is within a given distance.
\citet{NIPS2017_f457c545} and \citet{jiang2018quickshift++} examined the theoretical properties of this method and a variant.
In \methodref{1}, the points on a trajectory are forced to be on certain level sets. This guarantees that the level increases by $\eta$ with every step except possibly for the very last step.
Note that, in our context, this was motivated from a need to discretize the process of moving up the cluster tree and not for methodological reasons.
\begin{method}
\label{method:2}
Same as \methodref{1} but with \algref{2} (provided with $\varepsilon>0$) in place of \algref{1}.
\end{method}
\methodref{2} can be related to the {\em MaxShift} algorithm that we are proposing and are in the process of studying (work in progress at the moment of writing).
MaxShift is a kind of dual to QuickShift, and works as follows: based on a sample and an estimate of the density, starting at an arbitrary point, the algorithm iteratively moves to the data point whose estimated density is largest, as long as this data point is within a given distance.
In \methodref{2}, the process is, at least in appearance, a bit more careful in that it has the point move to the closest point at the highest level achieved within a certain range.
Note that, again, this design choice is motivated not by methodological considerations, but by the main goal of this paper, which was to propose natural ways of moving up the cluster tree --- and then showing that these are in a sense equivalent to following the gradient lines upward.
\medskip
In any case, we are not proposing these methods as competitors of these or other MeanShift variants, as their implementation appears to be cumbersome and computationally intensive because of having to estimate very many (as $\eta$ or $\varepsilon$ are chosen small) level sets and having to project onto them.
The estimation of the density can be done, for example, by kernel density estimation with bandwidth chosen by leave-one-out cross-validation. Even after having chosen the bandwidth automatically, there remains the choice of $\eta$ or $\varepsilon$, which is analogous to the choice of step size in a forward Euler implementation of the gradient flow --- the MeanShift method of \citet{cheng1995}. This choice is nontrivial, as is often the case for the choice of tuning parameter(s) in a clustering method.
All that being said, these methods can be shown to be consistent.
This can be developed in three steps in a typical way as, for example, detailed in the proof of \citep[Th~2]{arias2016estimation}.
Below we give a brief sketch of these three steps with the intention of convincing the reader that a formal proof is not far out of reach.
\begin{itemize}
\item {\em Step 1:} By estimating $f$ by kernel density estimation with an appropriate kernel, for example, we obtain an estimate $\hat f$ that satisfies all the same assumptions we have required for $f$. Therefore, \thmref{alg1} and \thmref{alg2} apply to $\hat f$, that is, the polygonal lines connecting the sequences generated in \methodref{1} and \methodref{2} are close approximations to the gradient flow lines induced by $\hat f$.
\item {\em Step 2:} When the sample size is large enough, $\hat f$ and $f$, and their derivatives up to the second order, are close to each other, which further implies, by some classical stability result for ODEs \citep[Sec 17.5]{hirsch2012differential}, that the gradient flow lines they induce are close to each other.
\item {\em Step 3:} Combining the results in Step 1 and Step 2, we can then immediately say that the polygonal lines connecting the sequences in \methodref{1} and \methodref{2} are consistent estimators of the gradient flow line of $f$, if their starting points are the same, and the gradient flow line ends at a mode.
\end{itemize}
In fact, the rates discussed in \secref{Hausdorff bound} in the population setting can be shown --- again following standard arguments --- to transfer to the sample setting.
\section{Technical details}
\label{sec:proofs}
Recall the assumptions placed on the density $f$ in \secref{setting}. In particular, it is differentiable and its gradient is bounded and Lipschitz.
We let $\kappa_1$ denote the sup norm of the gradient and let $\kappa_2$ denote its Lipschitz constant. Then $f$ is $\kappa_1$-Lipschitz, meaning
\begin{align}
\label{kappa1}
|f(y) - f(x)| \le \kappa_1 \|y-x\|, \quad \forall x, y,
\end{align}
and $\nabla f$ is $\kappa_2$-Lipschitz, meaning
\begin{align}
\label{kappa2}
\|\nabla f(y) - \nabla f(x)\| \le \kappa_2 \|y-x\|, \quad \forall x, y,
\end{align}
and the following Taylor expansion holds
\begin{align}
\label{taylor2}
\big|f(y) - f(x) - \nabla f(x)^\top (y-x)\big|
\le \tfrac12 \kappa_2 \|y-x\|^2, \quad \forall x, y.
\end{align}
We will denote $N(x) := \nabla f(x)/\|\nabla f(x)\|$, which gives the inward-pointing unit normal of $\mathcal{L}_t = \{f = t\}$ at $x$ whenever $\nabla f(x) \ne 0$.
Throughout, we let
\begin{align}
\label{F}
F(x) := \frac{\nabla f(x)}{\|\nabla f(x)\|^2},
\end{align}
which is the function that defines the ODE given in \eqref{gradient_flow_norm}.
When we speak of distance between sets we mean the Hausdorff distance, which for $\mathcal{A}, \mathcal{B} \subset \mathbb{R}^d$ is defined as
\begin{align} \label{d_H}
d_H(\mathcal{A}, \mathcal{B})
:= \max\big\{d_H(\mathcal{A} \mid \mathcal{B}), d_H(\mathcal{B} \mid \mathcal{A})\big\}, &&
d_H(\mathcal{A} \mid \mathcal{B})
:= \sup_{a\in \mathcal{A}} \inf_{b \in \mathcal{B}} \|a-b\|.
\end{align}
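In numerical work, the Hausdorff distance between two curves can be approximated by discretizing each curve into a finite set of points; the following is a minimal sketch (in Python, using SciPy's directed Hausdorff routine), with a placeholder pair of curves.
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(A, B):
    # A, B: arrays of shape (n, d) and (m, d) listing points sampled along two curves.
    return max(directed_hausdorff(A, B)[0], directed_hausdorff(B, A)[0])

# Illustration on two discretized curves in R^2 (placeholder example).
s = np.linspace(0.0, 1.0, 200)
curve1 = np.column_stack([s, s**2])
curve2 = np.column_stack([s, s**2 + 0.05])
print(hausdorff(curve1, curve2))  # approximately 0.05
\end{verbatim}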
\subsection{Preliminaries}
Following \cite{federer1959curvature}, for a set $\mathcal{A}$ and $x \in \mathcal{A}$, $\rho(\mathcal{A}, x)$ denotes the supremum over $r > 0$ such that the metric projection onto $\mathcal{A}$ is well-defined in the whole of $B(x, r)$. By definition, $\rho(\mathcal{A})$ is the infimum of $\rho(\mathcal{A}, x)$ over $x \in \mathcal{A}$.
The following is an immediate consequence of \citep[Lem 4.8(2)]{federer1959curvature}.
\begin{lem} \label{lem:federer4.8(2)}
Take a set $\mathcal{A}$ and $x \in \mathcal{A}$ such that $r := \rho(\mathcal{A}, x) > 0$. Then $\mathcal{A}$ admits a tangent space at $x$.
Also, the metric projection onto $\mathcal{A}$, denoted $P$, is well-defined on $B(x, r)$, and for any $y \in B(x,r)$ such that $P(y) = x$, the vector $x-y$ is orthogonal to the tangent space of $\mathcal{A}$ at $x$.
\end{lem}
The following is a quantitative version of \citep[Lem 4.11]{federer1959curvature}, distilled from its own proof.
\begin{lem} \label{lem:federer4.11}
For any level $t>0$ and any $x \in \mathcal{L}_t$, $\rho(\mathcal{L}_t, x) \ge (1/4\kappa_2) \|\nabla f(x)\|$.
\end{lem}
\begin{lemma}
\label{lem:dist}
Take $x$ such that $\nabla f(x) \ne 0$ and let $t := f(x)$. Then, if $\eta > 0$ is small enough that $\eta \le \|\nabla f(x)\|^2/2\kappa_2$, it holds that $\dist(x, \mathcal{L}_{t+\eta}) \le 2 \eta/\|\nabla f(x)\|$.
\end{lemma}
\begin{proof}
By \eqref{taylor2}, we have, for $u > 0$,
\begin{align}
\label{taylor_N}
f(x+ uN(x)) \ge f(x) + u \|\nabla f(x)\| - \tfrac12 \kappa_2 u^2.
\end{align}
Applying this with $u_0 = \|\nabla f(x)\|/\kappa_2$, and letting $x_0 := x+ u_0 N(x)$, gives $f(x_0) \ge t + \|\nabla f(x)\|^2/2\kappa_2$.
Because $f$ is continuous on the half-line $\{x+ \lambda N(x) : \lambda > 0\}$, if $\eta \le \|\nabla f(x)\|^2/2\kappa_2$, then necessarily, there is $u_1 \le u_0$ such that $f(x+u_1 N(x)) = t+\eta$, which then forces $\dist(x, \mathcal{L}_{t+\eta}) \le \|x - (x+u_1 N(x))\| = u_1$. And because $f(x+u_1 N(x)) \ge t + u_1 \|\nabla f(x)\|/2$, we also have that $u_1 \le 2 \eta/\|\nabla f(x)\|$.
\end{proof}
Let $P_t$ denote the metric projection onto $\mathcal{L}_t$, which is well-defined on $B(\mathcal{L}_t, \rho(\mathcal{L}_t))$.
Combining the last two lemmas, we get the following.
\begin{lemma}
\label{lem:proj}
Take $x$ such that $\nabla f(x) \ne 0$ and let $t := f(x)$. Then, if $\eta > 0$ is small enough that $\eta \le \|\nabla f(x)\|^2/16\kappa_2$, it holds that $P_{t+\eta}(x)$ is well-defined and satisfies $\|P_{t+\eta}(x) - x\| \le 2 \eta/\|\nabla f(x)\|$.
\end{lemma}
\begin{proof}
The conditions of \lemref{dist} are met, so if $y \in \mathcal{L}_{t+\eta}$ is closest to $x$, then $\|y-x\| \le 2 \eta/\|\nabla f(x)\|$. Now, by \eqref{kappa2},
\begin{align}
\|\nabla f(y)\|
&\ge \|\nabla f(x)\| - \kappa_2 \|y-x\| \\
&\ge \|\nabla f(x)\| - \kappa_2 (2 \eta/\|\nabla f(x)\|) \\
&> \tfrac12 \|\nabla f(x)\|.
\end{align}
Calling in \lemref{federer4.11}, and using the assumed bound on $\eta$, we derive
\[\rho(\mathcal{L}_{t+\eta}, y)
> (1/4\kappa_2) \|\nabla f(y)\|
\ge (1/8\kappa_2) \|\nabla f(x)\|
\ge 2 \eta/\|\nabla f(x)\|
\ge \|y - x\|,\]
implying, by definition, that $y$ is the unique projection of $x$ onto $\mathcal{L}_{t+\eta}$.
\end{proof}
Elementary derivations yield the following.
\begin{lemma}
\label{lem:F}
When $f$ is twice differentiable, the function $F$ defined in \eqref{F} is once differentiable at any $x$ where $\nabla f(x) \ne 0$, where its derivative is equal to
\begin{align}
D F(x) = \frac{H f(x)}{\|\nabla f(x)\|^2} - \frac{2 \nabla f(x) \nabla f(x)^\top H f(x)}{\|\nabla f(x)\|^4}.
\end{align}
In particular, $\|D F(x)\| \le 3 \|H f(x)\|/\|\nabla f(x)\|^2$ at any such $x$.
\end{lemma}
\begin{lem}
\label{lem:P}
There is a constant $C > 0$ that depends on $f$ such that the following holds.
Consider a level $t > 0$ and $x \in \mathcal{L}_t$ such that $\nabla f(x) \ne 0$.
If $\eta > 0$ is small enough that $\eta \le (1/C) \|\nabla f(x)\|^2$, then $P_{t+\eta}(x)$ is well-defined and satisfies
\begin{equation}
\label{P}
P_{t+\eta}(x) = x + \eta F(x) \pm \eta^2 \frac{C}{\|\nabla f(x)\|^3}.
\end{equation}
\end{lem}
\begin{proof}
Below, $x$ and $t$ are considered fixed.
Let $C$ be large enough that $\eta \le \|\nabla f(x)\|^2/16\kappa_2$, so that \lemref{proj} applies to give us that $x_\eta := P_{t+\eta}(x)$ is well-defined and satisfies
\begin{align}
\label{P_proof1}
\|x_\eta - x\|
\le 2\eta/\|\nabla f(x)\|
\le (2/C) \|\nabla f(x)\|.
\end{align}
By \lemref{federer4.8(2)}, $x_\eta -x$ is parallel to $\nabla f(x_\eta)$, so we can write
\begin{align}\label{Peta1}
x_\eta -x = c_\eta \nabla f(x_\eta),
\end{align}
for some scalar $c_\eta > 0$.
Using a Taylor expansion, we have for some $s_\eta\in(0,1)$,
\begin{align}\label{Peta2}
\eta
&= f(x_\eta) - f(x) \\
&= [x_\eta - x]^\top \nabla f(s_\eta x_\eta + (1-s_\eta) x) \nonumber\\
& = c_\eta [\nabla f(x_\eta)]^\top \nabla f(s_\eta x_\eta + (1-s_\eta) x).
\end{align}
Extracting the expression of $c_\eta$ from \eqref{Peta2} and plugging it into \eqref{Peta1}, we get
\begin{align}
\frac{x_\eta - x}\eta = \frac{\nabla f(x_\eta)}{[\nabla f(x_\eta)]^\top \nabla f(s_\eta x_\eta + (1-s_\eta) x)}.
\end{align}
We then develop the denominator using \eqref{kappa2}, to get
\begin{align}
& [\nabla f(x_\eta)]^\top \nabla f(s_\eta x_\eta + (1-s_\eta) x) \\
&= [\nabla f(x_\eta)]^\top \big(\nabla f(x_\eta) \pm \kappa_2 \|x_\eta -x\|\big) \\
&= \|\nabla f(x_\eta)\|^2 \pm \kappa_2 \|x_\eta -x\| \|\nabla f(x_\eta)\|.
\end{align}
Hence,
\begin{align}
\frac{x_\eta - x}\eta
&= \frac{\nabla f(x_\eta)}{\|\nabla f(x_\eta)\|^2 \pm \kappa_2 \|x_\eta -x\| \|\nabla f(x_\eta)\|} \\
&= \frac{F(x_\eta)}{1 \pm \kappa_2 \|x_\eta -x\|/\|\nabla f(x_\eta)\|}.
\end{align}
By \eqref{kappa2} and \eqref{P_proof1},
\begin{align}
\|\nabla f(x_\eta)\|
&\ge \|\nabla f(x)\| - \kappa_2 \|x_\eta -x\| \\
&\ge (1 - \kappa_2 (2/C)) \|\nabla f(x)\| \\
&\ge \tfrac12 \|\nabla f(x)\|,
\end{align}
for $C$ large enough, which then implies
\begin{align}
\|x_\eta -x\|/\|\nabla f(x_\eta)\|
\le 4/C,
\end{align}
so that, taking $C$ as large as needed, we have
\begin{align}
\label{P_proof2}
\frac{x_\eta - x}\eta = F(x_\eta) \left(1 \pm 2 \kappa_2 \|x_\eta -x\|/\|\nabla f(x_\eta)\|\right).
\end{align}
Continuing,
\begin{align}
F(x_\eta)
&= F(x) + \int_0^1 D F(s x_\eta + (1-s) x) (x_\eta-x) {\rm d} s \\
&= F(x) \pm C_1 \|x_\eta-x\|/\|\nabla f(x)\|^2,
\end{align}
based on \lemref{F}.
Plugging this into \eqref{P_proof2} above, and also using the fact $\|F(x)\| = 1/\|\nabla f(x)\|$, gives
\begin{align}
x_\eta
&= x + \eta F(x) \pm C_2 \eta \|x_\eta - x\|/\|\nabla f(x)\|^2 \\
&= x + \eta F(x) \pm 2 C_2 \eta^2/\|\nabla f(x)\|^3,
\end{align}
using the first inequality in \eqref{P_proof1}.
\end{proof}
\begin{lemma}
\label{lem:cluster modes}
Take any level $0 < t < \max f$, and any cluster at level $t$, meaning, any connected component of $\mathcal{U}_t$. Then that cluster contains at least one mode. Moreover, take any gradient ascent line that ends at a mode: if it intersects that cluster, then it must remain in that cluster and, therefore, end at one of the modes the cluster contains.
\end{lemma}
\begin{proof}
Take any connected component of $\mathcal{U}_t$ and denote it by $\mathcal{C}_t$.
Since $\mathcal{C}_t$ is a compact set, $\mathcal{M} := \argmax_{x\in\mathcal{C}_t}f(x)$ is a non-empty subset of $\mathcal{C}_t$. Moreover, we claim that any point in $\mathcal{M}$ is a mode. This is obvious if $\mathcal{C}_t$ is a singleton, therefore, assume this is not the case. Then $\mathcal{C}_t^\circ$ is a connected component of $\mathcal{U}_t^\circ = \{f > t\}$. And a point in $\mathcal{M}$ must be in that interior, by definition, since any point on the boundary of $\mathcal{C}_t$ is at level $t$. Therefore, $\mathcal{M}$ is the set of maximizers in $\mathcal{C}_t^\circ$ and from that we deduce that any point in $\mathcal{M}$ is a local maximum, i.e., a mode.
Hence, we have established that $\mathcal{C}_t$ contains at least one mode.
Take any gradient ascent line, say $\gamma_x$ as defined in \eqref{gradient_flow}. Suppose that this line intersects $\mathcal{C}_t$, so that there is $s \ge 0$ such that $\gamma_x(s) \in \mathcal{C}_t$. Then for $r \ge s$, $f(\gamma_x(r)) \ge f(\gamma_x(s)) = t$, and therefore $\{\gamma_x(r): r \ge s\} \subset \mathcal{U}_t$. And since this is a continuous piece of curve, it must be entirely included in a connected component of $\mathcal{U}_t$, and that component must be $\mathcal{C}_t$ because of the existing intersection at $\gamma_x(s)$.
Finally, by assumption, $\gamma_x$ converges to a mode, so that if it remains in $\mathcal{C}_t$ past a certain point, the mode it converges to must belong to $\mathcal{C}_t$.
\end{proof}
\begin{lemma}
\label{lem:converge mode}
Let $x_*$ be a mode of $f$ and let $x$ be in the basin of $x_*$ but $x\neq x_*$. Then
\begin{equation}
\zeta_x(t) \longrightarrow x_* \quad \text{as} \quad t\nearrow\,f(x_*)-f(x).
\end{equation}
\end{lemma}
\begin{proof}
Let $\gamma_x$ be the integral curve induced by the vector field $\nabla f$ originating from $x$, as defined in \eqref{gradient_flow}.
Notice that for any $\tau\in[0,\infty)$,
\begin{align}
\label{tau_def}
f(\gamma_x(\tau)) - f(x) = \int_0^\tau \nabla f(\gamma_x(s) )^\top \dot \gamma_x(s) {\rm d} s = \int_0^\tau \| \nabla f(\gamma_x(s) ) \|^2 {\rm d} s =: t(\tau).
\end{align}
Since $\| \nabla f(\gamma_x(s) ) \|>0$ for any $s\in[0,\infty)$, $t$ is a strictly increasing function of $\tau$, and it has an inverse, denoted by $\tau(t)$, satisfying $\tau(0)=0$. The definition in \eqref{tau_def} and relation between $t$ and $\tau$ give that
\begin{equation}
\lim_{\tau\to\infty} t(\tau) = f(x_*) - f(x) \quad \text{and} \quad \lim_{t\nearrow [f(x_*)-f(x)]} \tau(t) = \infty.
\end{equation}
Notice that $t$ is differentiable as a function of $\tau$, and $\dot t(\tau) = \| \nabla f(\gamma_x(\tau) ) \|^2$. Hence $\tau$ is also differentiable as a function of $t$, and
\begin{equation}
\dot \tau(t) = \| \nabla f(\gamma_x(\tau(t)) ) \|^{-2}.
\end{equation}
This then leads to
\begin{align}
\frac{\partial \gamma_x(\tau(t))}{\partial t} = \dot \gamma_x(\tau(t)) \, \dot \tau(t) = \frac{\nabla f(\gamma_x(\tau(t)))}{\|\nabla f(\gamma_x(\tau(t)))\|^2},
\quad \text{with} \quad
\gamma_x(\tau(0)) = \gamma_x(0) = x,
\end{align}
which implies that $\zeta_x(t) = \gamma_x(\tau(t))$ for all $t\in[0,f(x_*)-f(x))$. Therefore
\begin{equation*}
\lim_{t\nearrow\, f(x_*)-f(x)} \zeta_x(t) = \lim_{t\nearrow\, f(x_*)-f(x)} \gamma_x(\tau(t))= \lim_{\tau\to\infty} \gamma_x(\tau)= x_*.
\qedhere
\end{equation*}
\end{proof}
\begin{lemma}
\label{lem:cluster balls}
Let $x_*$ be a mode of $f$ and let $t_* = f(x_*)$. For $0 < t < t_*$, let $\mathcal{C}_t$ denote the connected component of $\mathcal{U}_t$ that contains $x_*$. Then there are non-increasing functions $\delta_-$ and $\delta_+$ defined on $(0, t_*)$ such that $0 < \delta_- < \delta_+$ and $\delta_+(t) \to 0$ as $t \to t_*$, and
\begin{align}
\label{Ct_bounds}
\barB(x_*, \delta_-(t)) \subset \mathcal{C}_t \subset B(x_*, \delta_+(t)), \quad \text{for all } 0 < t < t_*.
\end{align}
In particular, one may take $\delta_-$ and $\delta_+$ such that $\delta_-(t) \asymp \delta_+(t) \asymp \sqrt{t_* - t}$ as $t \nearrow t_*$.
\end{lemma}
\begin{proof}
We first focus on \eqref{Ct_bounds}. Notice that this part of the proof only needs the modes to be isolated, not necessarily non-degenerate, in line with our comments in \secref{extensions}. For a fixed constant $a>1$, define
\begin{align}
\delta_-(t) = \sup\{\delta\ge0: B(x_*,\delta)\subset \mathcal{C}_t\}\quad \text{and} \quad \delta_+(t) = a\sup_{x\in \mathcal{C}_t} \|x-x_*\|.
\end{align}
Due to the fact that $\mathcal{C}_t \subset \mathcal{C}_{t^\prime}$ for any $0<t^\prime \le t<t_*$, $\delta_-$ and $\delta_+$ are non-increasing on $(0,t_*)$. It is also clear that $\delta_- < \delta_+$ and \eqref{Ct_bounds} is satisfied, using the compactness of $\mathcal{C}_{t}$.
Next we show that $\delta_-(t)>0$ for all $t\in(0,t_*)$. Let $\mathcal{B}_t = \mathcal{L}_t \cap \mathcal{C}_t$. Denote $\tilde \mathcal{U}_t := \{x : f(x) > t\}$. Then note that $\mathcal{U}_t = \tilde \mathcal{U}_t \cup \mathcal{L}_t$, and $\mathcal{C}_t = (\mathcal{C}_t\cap \tilde \mathcal{U}_t)\cup \mathcal{B}_t$. It is also clear that $\tilde \mathcal{U}_t \cap \mathcal{C}_t \subset \mathcal{C}_t^\circ$. Since $\mathcal{C}_t$ is a compact set, we have $\partial \mathcal{C}_t = \mathcal{C}_t \setminus \mathcal{C}_t^\circ \subset \mathcal{B}_t$, that is, $f(x)=t$ for all $x\in\partial \mathcal{C}_t$. Define $\tilde\delta_-(t) : = \inf_{x\in \mathcal{B}_t} \|x-x_*\|$. For any $t\in(0,t_*)$, suppose that there exists $x\in \barB(x_*,\tilde\delta_-(t))$ such that $f(x)<t$. By the continuity of $f$, there exists an interior point $\tilde x$ of the line segment $[x,x_*]$ such that $\tilde x\in \partial\mathcal{C}_t\subset \mathcal{B}_t.$ This would lead to a contradictory result $\|\tilde x-x_*\| < \tilde\delta_-(t) $. Hence we must have $f(x)\ge t$ for all $x\in \barB(x_*,\tilde\delta_-(t))$, which yields $\delta_-(t) \ge \tilde \delta_-(t)$ by definition. If $ \tilde \delta_-(t)=0$, then $x_*\in \mathcal{B}_t$ implying that $f(x_*) = t$, which is not true. Therefore we have $0<\tilde \delta_-(t) \le \delta_-(t) .$
Then we show that $\delta_+(t) \to 0$ as $t \to t_*$. Since $x_*$ is an isolated mode of $f$, there exists $\delta_0>0$ such that $x_*$ is the only mode in $\barB(x_*,\delta_0)$. For any $\delta\in(0,\delta_0)$, let $t_\diamond(\delta)=\sup_{x\in\partial B(x_*,\delta)} f(x)$ and $t_\dagger(\delta) = \frac{1}{2}(t_\diamond(\delta) + t_*)$. Note that $t_\diamond(\delta)<t_\dagger(\delta) < t_*$, and $\mathcal{C}_{t_\dagger(\delta)}\subset \barB(x_*,\delta)$, with the latter being a result of the following argument: if there exists $x\in\mathcal{C}_{t_\dagger(\delta)}$ such that $\|x-x_*\|>\delta$, then there exists a path $p[x,x_*]$ connecting $x$ and $x_*$ such that $p[x,x_*]\subset \mathcal{C}_{t_\dagger(\delta)}$; however, this path $p[x,x_*]$ must intersect $\partial B(x_*,\delta)$, resulting in $\sup_{x\in\partial B(x_*,\delta)} f(x) \geq t_\dagger(\delta) > t_\diamond(\delta)$, which is not consistent with the definition of $t_\diamond$. Therefore
\begin{equation}
\label{deltaplug_bound}
\delta_+(t) \le \delta_+(t_\dagger(\delta)) \le a\delta, \quad \text{for all } t \in [t_\dagger(\delta), t_*).
\end{equation}
Since $\delta_+$ is non-increasing and non-negative, there exists $\delta_*\ge 0$ such that $\delta_+(t) \to \delta_*$ as $t \to t_*$. By observing that $\delta$ in \eqref{deltaplug_bound} can be chosen arbitrarily small, we conclude that $\delta_*=0$.
Under the assumption that $x_*$ is non-degenerate and the second derivatives of $f$ are continuous, there exist $\lambda_1\ge \lambda_0>0$ and $\delta_1>0$ such that for all $x\in\barB(x_*,\delta_1)$, all the eigenvalues of $Hf(x)$ are within $[-\lambda_1,-\lambda_0]$. Using a Taylor expansion, we can write for all $x\in\barB(x_*,\delta_1)$,
\begin{equation}
-\frac{1}{2}\lambda_1\|x-x_*\|^2 \le f(x) - f(x_*) \leq -\frac{1}{2}\lambda_0\|x-x_*\|^2.
\end{equation}
It then follows that for any $t\in(t_* - \frac{1}{2}\delta_1\lambda_0,t_*)$,
\begin{equation}
\label{Ct_inclusion}
\barB\Big(x_*, \sqrt{2\lambda_1^{-1}(t_*-t)}\,\Big) \subset \mathcal{C}_t \subset \barB\Big(x_*, \sqrt{2\lambda_0^{-1}(t_*-t)}\,\Big),
\end{equation}
which establishes the last claim of the lemma, namely that one may take $\delta_-(t) \asymp \delta_+(t) \asymp \sqrt{t_* - t}$ as $t \nearrow t_*$.
\end{proof}
\subsection{Proof of \thmref{alg1}}
\label{sec:proof1}
Let $\zeta(t) := \zeta_x(t-t_0)$, where $\zeta_x$ is the flow defined in \eqref{gradient_flow_norm}, that is, the gradient ascent line originating with $x$ parameterized by the level. In what follows, we only consider $t$ in $[t_0, t_*]$, where $t_* := f(x_*)$, and note that $f(\zeta(t)) = t$ for any such $t$. For $t$ in that range, we let $\mathcal{Z}_t := \zeta([t_0,t])$, which is the gradient line as a subset of the ambient Euclidean space starting at $x$ and ending when reaching level $t$.
For $k$ such that $t_k < t_*$, define $z_k := \zeta(t_k)$, and note that $f(z_k) = t_k$, so that $z_k \in \mathcal{L}_{t_k}$, just like $q_k$.
Of course, as the level grid becomes finer and finer, the sequence $(z_k)$ is closer and closer to the gradient ascent line, and the basic idea is to compare the sequence $(q_k)$ to the sequence $(z_k)$. Note that $z_0 = x = q_0$ --- which is a good start.
\subsubsection{Algorithm returns $x_*$}
\label{sec:proof1 return}
It suffices to show that the sequence reaches a leaf cluster. After that, it has to remain there by construction, which forces it to end at a leaf cluster, since all descendants of a leaf cluster are also leaf clusters, by definition.
Take ${s_*} \in (0, t_*)$ close enough to $t_*$ that $\mathcal{C}_* := \mathcal{C}_{s_*}(x_*)$ is a leaf cluster. That such a level ${s_*}$ exists comes from \lemref{cluster balls} and the fact that the modes are isolated.
Suppose $\eta$ is small enough that $\eta \le \frac12 (t_* - {s_*})$, and let $t_\# := \frac12 (t_*+{s_*})$.
Define
$
\nu := \frac12 \min\{\|\nabla f(z)\| : z \in \mathcal{Z}_{t_\#}\},
$
and note that $\nu > 0$ by the fact that $t \mapsto \|\nabla f(\zeta(t))\|$ is continuous and positive on $[t_0,t_\#]$ because the gradient line traced by $\zeta$ does not contain a critical point other than $x_*$ at its very end (at $t = t_* > t_\#$).
By an application of \eqref{kappa2}, we have that $\|\nabla f(y)\| \ge \nu$ for all $y$ in the `tube' $\mathcal{T} := B(\mathcal{Z}_{t_\#}, \delta_{\rm tube})$, where $\delta_{\rm tube} := \nu/\kappa_2$.
Let $k_\# := \max\{k : t_k \le t_\#\}$ and note that $t_\# -\eta < t_{k_\#} \le t_\#$ and $(t_\# - t_0)/\eta -1 < k_\# \le (t_\# - t_0)/\eta$.
We show below that, when $\eta$ is small enough, the sequence $(q_k)$ is uniquely defined up to $k = k_\#$ at which step it is inside $\mathcal{C}_*$, which is certainly enough to establish that the algorithm returns $x_*$.
\paragraph{The sequence $(z_k)$}
We first note that $\zeta$ is twice differentiable, with $\dot\zeta(t) = F(\zeta(t))$ and $\ddot\zeta(t) = D F(\zeta(t)) \dot\zeta(t) = D F(\zeta(t)) F(\zeta(t))$. In view of \lemref{F} and the fact that $\|\nabla f(\zeta(t))\| \ge 2\nu$ for $t \in [t_0,t_\#]$, for such a $t$ it holds that $\|\ddot\zeta(t)\| \le (3 \kappa_2/(2 \nu)^2) (1/2\nu)$.
Therefore, a Taylor expansion gives
\begin{align}
z_k - z_{k-1}
&= \zeta(t_k) - \zeta(t_{k-1}) \\
&= (t_k - t_{k-1}) \dot\zeta(t_{k-1}) \pm C_1 (t_k-t_{k-1})^2 \\
&= \eta F(z_{k-1}) \pm C_1 \eta^2, \label{z diff}
\end{align}
for any $k \le k_\#$.
In addition, the sequence $(z_k)$ does not come `close' to any cluster at ${s_*}$ other than $\mathcal{C}_*$. Indeed, $\mathcal{Z}_{t_*}$ does not intersect any of these clusters, for otherwise it would end at one of the corresponding modes and not at $x_*$ --- see \lemref{cluster modes}.
Therefore, by compactness of the leaf clusters and of $\mathcal{Z}_{t_*}$, there is $\delta_* > 0$ such that $\mathcal{Z}_{t_*}$ is at least $\delta_*$ away from any cluster at ${s_*}$ other than $\mathcal{C}_*$, and this then applies to the sequence $(z_k)$ since $(z_k) \subset \mathcal{Z}_{t_*}$.
\paragraph{The sequence $(q_k)$}
Suppose for now that, for $k \le k_\#$, $d_{k-1} := \|q_{k-1} - z_{k-1}\| \le \delta_{\rm tube}$ --- so that $\|\nabla f(q_{k-1})\| \ge \nu$ since $z_{k-1} = \zeta(t_{k-1})$ with $t_{k-1} \le t_\#$ --- and consider bounding $d_k = \|q_k - z_k\|$.
Assume $\eta$ is small enough that
\begin{align}
\label{eta_bound}
\eta \le \nu^2/C_2, \text{ where $C_2$ is the constant of \lemref{P}.}
\end{align}
The same lemma then tells us that $q_k$ is well-defined and gives
\begin{align}
q_k
&= q_{k-1} + (t_k - t_{k-1}) F(q_{k-1}) \pm (t_k - t_{k-1})^2 C_2/\|\nabla f(q_{k-1})\|^3 \\
&= q_{k-1} + \eta F(q_{k-1}) \pm C_3 \eta^2. \label{q diff}
\end{align}
The bound \eqref{q diff} combined with \eqref{z diff} gives
\begin{align}
q_k - z_k
&= q_k - q_{k-1} + q_{k-1} - z_{k-1} + z_{k-1} - z_k \\
&= \eta F(q_{k-1}) \pm C_3 \eta^2 + q_{k-1} - z_{k-1} - \eta F(z_{k-1}) \pm C_1 \eta^2.
\end{align}
Applying the triangle inequality, we further derive
\begin{align}
d_k
&\le d_{k-1} + \eta \|F(q_{k-1}) - F(z_{k-1})\| + (C_1+C_3) \eta^2 \\
&\le d_{k-1} + \eta (3\kappa_2/\nu^2) \|q_{k-1} - z_{k-1}\| + C_4 \eta^2 \\
&= (1+ C_5 \eta) d_{k-1} + C_4 \eta^2, \label{d_bound}
\end{align}
where the second inequality is due to the fact that $\|D F(y)\| \le 3\kappa_2/\nu^2$ inside $\mathcal{T}$ and the segment $[z_{k-1},q_{k-1}]$ is in that set since that set contains $B(z_{k-1}, \delta_{\rm tube})$ and $\|z_{k-1} - q_{k-1}\| = d_{k-1} \le \delta_{\rm tube}$, by assumption.
Define the sequence
\begin{align}
\text{$a_0 = 0$ and $a_k = (1+C_5 \eta) a_{k-1} + C_4 \eta^2$ for $k \ge 1$.}
\end{align}
As is well-known,
\begin{align}
\label{a_bound}
a_k
\le C_4 \eta^2 \frac{\exp[C_5 \eta k] - 1}{C_5 \eta}, \quad \forall k \ge 0.
\end{align}
If we only consider $0 \le k \le k_\#$, it holds that $a_k \le C_6 \eta$, since over that range $\eta k \le \eta k_\# \le t_\# - t_0 \le t_*$.
We choose $\eta$ small enough as to guarantee that $C_6 \eta \le \delta_{\rm tube}$. This is in addition to the main bound on $\eta$ assumed in \eqref{eta_bound}.
For a recursion, we start at $d_0 = \|q_0 - z_0\| = 0$ and assume that $d_{k-1} \le a_{k-1}$. Then \eqref{d_bound} gives
\begin{align}
d_k
&\le (1+ C_5 \eta) d_{k-1} + C_4 \eta^2 \\
&\le (1+ C_5 \eta) a_{k-1} + C_4 \eta^2 \\
&= a_k,
\end{align}
so the induction can proceed all the way to $k = k_\#$, establishing that
\begin{align}
\label{d_final}
d_k = \|q_k - z_k\| \le C_6 \eta, \quad \forall k = 0, \dots, k_\#.
\end{align}
\paragraph{Conclusion}
We have thus shown that, if $\eta$ is small enough, the sequence $(q_k : k = 0, \dots, k_\#)$ is well-defined and satisfies \eqref{d_final}.
In particular, because $\|q_{k_\#} - z_{k_\#}\| \le C_6 \eta$, by \eqref{kappa1} we have
\begin{align}
\label{alg1_proof1}
f(q_{k_\#})
\ge f(z_{k_\#}) - C_7 \eta
= t_{k_\#} - C_7 \eta
\ge t_\# - C_8 \eta.
\end{align}
When $\eta$ is so small that $t_\# - C_8 \eta \ge {s_*}$, we thus have that $q_{k_\#} \in \mathcal{U}_{{s_*}}$.
And when $\eta$ is so small that $\delta_* - C_6 \eta > 0$, the connected component of $\mathcal{U}_{{s_*}}$ that $q_{k_\#}$ belongs to must be $\mathcal{C}_*$, since $z_{k_\#}$ is at least $\delta_*$ away from any other component of $\mathcal{U}_{{s_*}}$.
Thus, the sequence $(q_k)$ enters $\mathcal{C}_*$.
\subsubsection{Convergence}
\label{sec:proof1 convergence}
Given $\delta > 0$, we show that when $\eta$ is small enough, the Hausdorff distance between the polygonal line $\mathcal{Q} := \bigcup_k [q_{k-1}, q_k]$ and $\mathcal{Z}_{t_*}$ is bounded from above by $\delta$.
We showed that, when ${s_*}$ is below but close enough to $t_*$, and when $\eta$ is sufficiently small, we have $q_k \in \mathcal{C}_* = \mathcal{C}_{s_*}(x_*)$ for all $k \ge k_\#$. Hence, if $\delta_+$ is as in \lemref{cluster balls}, then any such $q_k$ belongs to $B(x_*, \delta_+({s_*}))$. By convexity of the ball, we must have that past $q_{k_\#}$, $\mathcal{Q}$ is inside the same ball.
We also saw that $z_{k_\#} \in \mathcal{C}_*$, and therefore past that point, $\mathcal{Z}_{t_*}$ must be inside $\mathcal{C}_{s_*}(x_*)$ (since it's the gradient ascent line we are dealing with), and thus inside that same ball.
Therefore, past step $k_\#$, both the polygonal line and the gradient line are inside a ball of radius $\delta_+({s_*})$, and must therefore be within Hausdorff distance $2\delta_+({s_*})$.
We now choose ${s_*}$ close enough to $t_*$ that $2\delta_+({s_*}) \le \delta$.
Now, take any $k = 1, \dots, k_\#$ and any point $q \in [q_{k-1}, q_k]$.
We have
\begin{align}
\|q-z_{k-1}\|
&\le \|q-q_{k-1}\| + \|q_{k-1} - z_{k-1}\| \\
&\le \|q_k-q_{k-1}\| + d_{k-1},
\end{align}
with, based on \eqref{q diff},
\begin{align}
\label{q diff convergence}
\|q_k-q_{k-1}\|
\le \eta \|F(q_{k-1})\| + C_3 \eta^2
\le \eta/\nu + C_3 \eta^2
\le C_9 \eta,
\end{align}
and with \eqref{d_final} giving $d_{k-1} \le C_6 \eta$.
Hence, $\|q-z_{k-1}\| \le C_{10} \eta$. This is true for any point $q$ on $\mathcal{Q}$ up to step $k_\#$. And because $(z_k) \subset \mathcal{Z}_{t_*}$, this implies that this part of $\mathcal{Q}$ is within $C_{10} \eta$ of $\mathcal{Z}_{t_*}$.
And the same is true the other way around. To be sure, we walk the reader through the same arguments.
Take any $k = 1, \dots, k_\#$ and any point $z = \zeta(t)$ with $t \in [t_{k-1}, t_k]$.
We have
\begin{align}
\|z-q_{k-1}\|
&\le \|z-z_{k-1}\| + \|z_{k-1} - q_{k-1}\|,
\end{align}
with, exactly as in \eqref{z diff},
\begin{align}
\label{z diff convergence}
\|z-z_{k-1}\|
&\le (t-t_{k-1}) \|F(z_{k-1})\| + C_1 (t-t_{k-1})^2 \\
&\le \eta \|F(z_{k-1})\| + C_1 \eta^2
\le \eta/\nu + C_1 \eta^2
\le C'_9 \eta,
\end{align}
and with \eqref{d_final} giving $\|z_{k-1} - q_{k-1}\| = d_{k-1} \le C_6 \eta$, as before.
Hence, $\|z-q_{k-1}\| \le C'_{10} \eta$. This is true for any point $z$ on $\mathcal{Z}_{t_*}$ up to $z_{k_\#}$. And because $(q_k) \subset \mathcal{Q}$, this implies that this part of $\mathcal{Z}_{t_*}$ is within $C'_{10} \eta$ of $\mathcal{Q}$.
By choosing $\eta$ even smaller if needed, so that $C_{10} \eta \le \delta$ and $C'_{10} \eta \le \delta$, we get that $\mathcal{Q}$ and $\mathcal{Z}_{t_*}$, in their entirety, are within Hausdorff distance $\delta$.
\subsubsection{Bound on the Hausdorff distance}
\label{sec:proof1 Hausdorff bound}
We assume here that $x_*$ is non-degenerate. In that case, by the assumption that the density is twice continuously differentiable, there is $\delta_0$ such that, for some $\lambda_0 > 0$, $H f$ has all its eigenvalues bounded from above by $- \lambda_0$ inside the ball $B_0 :=
B(x_*, \delta_0)$.
Now, consider $g(t) := \|\nabla f(\zeta(t))\|^2$. It is differentiable, with
\begin{align}
g'(t)
&= \frac{2 \nabla f(\zeta(t))^\top Hf(\zeta(t)) \nabla f(\zeta(t))}{\|\nabla f(\zeta(t))\|^2}
\end{align}
so that $g'(t) \le -2\lambda_0$ whenever $t \ge s_0 := \sup\{s: \zeta(s) \notin B_0\}$.
From this, and the fact that $g \ge 0$, we deduce that $g(t) \ge 2\lambda_0 (t_*-t)$ for $t \in [s_0, t_*]$, i.e.,
\begin{equation}
\label{gr_norm_bound}
\|\nabla f(\zeta(t))\| \ge \sqrt{2\lambda_0 (t_*-t)}.
\end{equation}
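In more detail: for $s_0 \le t < \tau < t_*$,
\begin{align}
g(t)
= g(\tau) - \int_t^{\tau} g'(s) \, {\rm d}s
\ge g(\tau) + 2\lambda_0 (\tau - t)
\ge 2\lambda_0 (\tau - t),
\end{align}
and letting $\tau \to t_*$ gives the claimed bound on $g$; taking square roots yields \eqref{gr_norm_bound}.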
Define $\nu_0 := \frac12 \min\{\|\nabla f(\zeta(s))\| : 0 \le s \le s_0\}$. Note that $\nu_0 > 0$.
In what follows, we take ${s_*} = t_* - 2a_* \eta$, where $a_* > 0$ will be chosen large enough below but fixed.
Note that $t_\# = t_* - a_* \eta$.
For $\eta$ small enough that $\mathcal{C}_{t_\#} \subset B(x_*, \delta_0)$, we also have
\begin{align}
2 \nu
&= \min\{\|\nabla f(\zeta(s))\| : 0 \le s \le t_\#\} \\
&= \min\{\|\nabla f(\zeta(s))\| : 0 \le s \le s_0\} \wedge \min\{\|\nabla f(\zeta(s))\| : s_0 \le s \le t_\#\} \\
&\ge 2 \nu_0 \wedge \sqrt{2\lambda_0 (t_*-t_\#)}.
\end{align}
Therefore, for $\eta$ small enough, $\nu \ge \sqrt{\lambda_0 a_* \eta}$.
Now, tracing back the necessary conditions on $\eta$ for the arguments in \secref{proof1 return} to work, we see that the more stringent requirement, at least when $\nu$ is small enough, is \eqref{eta_bound}. Because of the lower bound on $\nu$ above, this requirement is satisfied as soon as $\lambda_0 a_* \ge C_2$, in effect, when $a_*$ is large enough.
Taking $a_*$ as such, we now follow the arguments in \secref{proof1 convergence}. Note that, here, $\delta_+(t) \asymp \sqrt{t_*-t}$, so that $2 \delta_+(s_*) \asymp \sqrt{\eta}$ for the choice of $s_*$ we made above.
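Indeed, by a second-order Taylor expansion at the non-degenerate mode $x_*$,
\begin{align}
f(x) = t_* - \tfrac12 (x - x_*)^\top \big(-Hf(x_*)\big)(x - x_*) + o(\|x - x_*\|^2),
\end{align}
so, for $t$ close enough to $t_*$, the cluster $\mathcal{C}_t(x_*)$ is sandwiched between two balls centered at $x_*$ with radii proportional to $\sqrt{t_*-t}$; this is the source of the rate $\delta_+(t) \asymp \sqrt{t_*-t}$.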
Up to step $k_\#$, we have that $\mathcal{Q}$ is within $C_{10} \eta$ of $\mathcal{Z}_{t_*}$. Therefore, $\mathcal{Q}$ is within distance $O(\sqrt{\eta} + \eta) = O(\sqrt{\eta})$ of $\mathcal{Z}_{t_*}$.
And vice-versa, $\mathcal{Z}_{t_*}$ is within Hausdorff distance $O(\sqrt{\eta})$ of $\mathcal{Q}$.
\subsubsection{Uniform convergence}
\label{sec:proof1 uniform}
We continue to use the same notation as in this whole section, except that we make the dependence on the starting point $x$ explicit whenever needed as in, e.g., $\nu(x)$ denoting $\nu$ when associated with $x$. Let $\mathcal{A}_*$ be the basin of attraction associated with $x_*$, that is, $\mathcal{A}_* = \{x\in\mathbb{R}^d: \lim_{t\to\infty} \gamma_x(t) = x_*\}$. Note that $\mathcal{A}_*$ is an open set. We fix $s_*\in(0,t_*)$ close enough to $t_*$ that $\mathcal{C}_*$ is a leaf cluster. Denote $\mathcal{C}_\# := \mathcal{C}_{t_\#}(x_*)$.
First we show that $\nu$ is continuous on $\mathcal{A}_*\setminus \mathcal{C}_\#$. Let $x$ be any point in $\mathcal{A}_*\setminus \mathcal{C}_\#$. Recall that $\|\nabla f(y)\|\ge \nu(x)$ for all $y\in\mathcal{T}(x) :=B(\mathcal{Z}_{t_\#}(x), \delta_{\rm tube})$. Notice that $\|F(y)\|\le 1/\nu(x)$ and $\|D F(y)\| \le 3\kappa_2/\nu(x)^2$ for all $y\in\mathcal{T}(x)$, using \lemref{F}. Define $\mathcal{V}(x):=B(\mathcal{Z}_{t_\#}(x), \delta_{\rm tube}/2)$. Let $y_1,\dots,y_m$ be a $\delta_{\rm tube}/2$-packing of $\mathcal{V}(x)$, meaning that $\min_{i \ne j} \|y_i - y_j\| > \delta_{\rm tube}/2$ and also that $\max_{y \in \mathcal{V}(x)} \min_i \|y - y_i\| \le \delta_{\rm tube}/2$. And define $\mathcal{W}(x) := \bigcup_i B(y_i, \delta_{\rm tube}/2)$.
Note that $\mathcal{W}(x)$ is open with $\mathcal{V}(x) \subset \mathcal{W}(x) \subset \mathcal{T}(x)$, so that $\|F(y)\|\le 1/\nu(x)$ and $\|D F(y)\| \le 3\kappa_2/\nu(x)^2$ for all $y \in \mathcal{W}(x)$. If $y,z \in \mathcal{W}(x)$ are such that $\|y-z\| \le \delta_{\rm tube}/4$, there must be $i$ such that $y,z \in B(y_i, \delta_{\rm tube}/2)$, and because that ball is convex, we have
\begin{equation}
\|F(y) - F(z)\|
\le (3\kappa_2/\nu(x)^2) \|y-z\|.
\end{equation}
If, on the other hand, $\|y-z\| > \delta_{\rm tube}/4$, then we can simply write
\begin{equation}
\|F(y) - F(z)\|
\le 2/\nu(x)
= \frac{2/\nu(x)}{\delta_{\rm tube}/4} \delta_{\rm tube}/4
\le \frac{8}{\nu(x) \delta_{\rm tube}} \|y-z\|.
\end{equation}
Hence, $F$ is Lipschitz on $\mathcal{W}(x)$ with corresponding constant $C := \max\{3\kappa_2/\nu(x)^2, 8/(\nu(x)\delta_{\rm tube})\}$.
Take a positive constant $\delta_\diamond \le \frac{1}{2}\exp\{-C (t_\# - t_0)\}\delta_{\rm tube}.$
Suppose that
\[
\mathcal{H}(y) := \big\{\zeta_y(\tau): \tau \in [0,t_\# - t_0], y \in B(x,\delta_\diamond)\big\} \not\subset \mathcal{V}(x).
\]
Then there exist $y\in B(x,\delta_\diamond)$ and an escaping time $t_\square\in (0,t_\# - t_0)$ such that $\zeta_y(t_\square)\in\partial \mathcal{V}(x)$ and $\{\zeta_y(\tau): \tau \in [0,t_\square)\} \subset \mathcal{V}(x)$. This is impossible because, applying a standard result on the dependence of the gradient flow on the initial condition, for example, the main theorem in \citep[Sec 17.3]{hirsch2012differential}, we have
\begin{equation}
\|\zeta_x(t_\square) - \zeta_y(t_\square)\|
\le \|x-y\| \exp(C t_\square) < \frac{1}{2} \delta_{\rm tube},
\end{equation}
which would give $\zeta_y(t_\square) \in \mathcal{V}(x)$, contradicting the definition of $t_\square$ since $\mathcal{V}(x)$ is an open set.
Therefore we must have $\mathcal{H}(y) \subset \mathcal{V}(x)$, and
\begin{equation}
\label{initial_continuous}
\|\zeta_x(\tau) - \zeta_y(\tau)\|
\le \|x-y\| \exp(C \tau) \le \frac{1}{2} \delta_{\rm tube}, \quad \forall \tau \in [0,t_\# - t_0], \quad \forall y \in B(x,\delta_\diamond).
\end{equation}
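For completeness, the cited estimate is the standard Gr\"onwall bound: writing $u(\tau) := \|\zeta_x(\tau) - \zeta_y(\tau)\|$ and using that $F$ is Lipschitz with constant $C$ on $\mathcal{W}(x)$, which contains both trajectories over the range of times considered,
\begin{align}
u(\tau)
\le \|x-y\| + \int_0^{\tau} \|F(\zeta_x(s)) - F(\zeta_y(s))\| \, {\rm d}s
\le \|x-y\| + C \int_0^{\tau} u(s) \, {\rm d}s,
\end{align}
so that $u(\tau) \le \|x-y\| \, e^{C\tau}$ by Gr\"onwall's inequality.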
For $y \in B(x,\delta_\diamond)$, without loss of generality, suppose that $t_0 = f(x)\ge f(y)$. Notice that
\begin{equation}
\mathcal{Z}_{t_\#}(y) = \mathcal{Z}_{t_\#-(t_0 - f(y))}(y) \,\bigcup\, \big\{\zeta_{y}(t): \; t\in[t_\# - t_0,t_\# - f(y)]\big\}.
\end{equation}
With notation $R = \sup_{t\in[t_\# - t_0,t_\# - f(y)]} \|\zeta_{y}(t) - \zeta_{y}(t_\# - t_0)\|$, we can write
\begin{align}
\label{haus_dist}
d_H(\mathcal{Z}_{t_\#}(x), \mathcal{Z}_{t_\#}(y))
&\le d_H(\mathcal{Z}_{t_\#}(x), \mathcal{Z}_{t_\#-[t_0 - f(y)]}(y) ) + R \\
%
&\le \sup_{t\in[0,t_\# - t_0]}\|\zeta_x(t) - \zeta_{y}(t)\| + R\\
&\le \|x-y\| \exp\{C (t_\#-t_0)\} + R, \label{haus_dist_last}
\end{align}
where the last inequality follows from \eqref{initial_continuous}.
Below we further require that $\delta_\diamond \le \frac{1}{2\kappa_1}\delta_{\rm tube}\nu(x)$. We will show that
\begin{equation}
\label{Gy_subset}
\mathcal{G}_y: = \big\{\zeta_y(t): t\in [t_\# - t_0,t_\# - f(y)]\big\} \subset \mathcal{T}(x).
\end{equation}
If this is true, then the length of $\mathcal{G}_y$ is
\begin{align}
{\rm length}(\mathcal{G}_y)&= \int_{t_\# - t_0}^{t_\# - f(y)} \|\dot\zeta_y(t)\| {\rm d} t \\
&= \int_{t_\# - t_0}^{t_\# - f(y)} \|\nabla f(\zeta_y(t))\|^{-1} {\rm d} t \\
&\le (f(x) - f(y))/\nu(x) \\
&\le \kappa_1 \|x-y\|/\nu(x) \\ \label{G_y bound}
&\le \kappa_1 \delta_\diamond/\nu(x) \le \tfrac{1}{2} \delta_{\rm tube},
\end{align}
where in the third line we used the fact that $t_0 = f(x)$ together with $\|\nabla f\| \ge \nu(x)$ on $\mathcal{T}(x)$, and in the fourth line we used \eqref{kappa1}.
The above calculation confirms that indeed (\ref{Gy_subset}) must be true, because otherwise there exists an escaping time $t_\triangle \in (t_\# - t_0,t_\# - f(y))$ such that $\zeta_y(t_\triangle) \in \partial \mathcal{T}$ and $\widetilde \mathcal{G}_y:=\{\zeta_y(t): t\in [t_\# - t_0,t_\triangle)\} \subset \mathcal{T}$, which will lead to a contradiction with the fact that $\zeta_y(t_\# - t_0) \in \mathcal{V}(x)$ and ${\rm length}(\widetilde \mathcal{G}_y) < \frac{1}{2}\delta_{\rm tube}$, with the latter following from a similar calculation as above after replacing $t_\# - f(y)$ by $t_\triangle$.
Now, using \eqref{G_y bound} along the way,
\begin{align}
R
%
&= \sup_{t\in[t_\# - t_0,t_\# - f(y)]} \Big\|\int_{t_\# - t_0}^t \dot\zeta_y(s) {\rm d} s \Big\| \\
&\le {\rm length}(\mathcal{G}_y) \\
&\le \frac{\kappa_1}{\nu(x)}\|x-y\|.
\end{align}
Combining this with \eqref{haus_dist_last}, we obtain
\begin{equation}
d_H(\mathcal{Z}_{t_\#}(x), \mathcal{Z}_{t_\#}(y)) \le C_\dagger\|x-y\|,
\end{equation}
where $C_\dagger = \exp\{C (t_\#-t_0)\} + \kappa_1/\nu(x)$. Hence
\begin{align}
&|\nu(x) - \nu(y)| \\
&\le \max\Big\{\sup_{v\in \mathcal{Z}_{t_\#}(x)} \; \inf_{w\in \mathcal{Z}_{t_\#}(y)} | \|\nabla f(v)\| - \|\nabla f(w)\| |,\; \sup_{v\in \mathcal{Z}_{t_\#}(y)} \; \inf_{w\in \mathcal{Z}_{t_\#}(x)} | \|\nabla f(v)\| - \|\nabla f(w)\| | \Big\} \\
&\le \kappa_2 d_H(\mathcal{Z}_{t_\#}(x), \mathcal{Z}_{t_\#}(y)) \\
&\le \kappa_2C_\dagger\|x-y\|.
\end{align}
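The first inequality above is an instance of the following elementary fact, recorded here for completeness: writing $h := \|\nabla f\|$ and letting $w^*$ minimize $h$ over $\mathcal{Z}_{t_\#}(y)$,
\begin{align}
\min_{v\in \mathcal{Z}_{t_\#}(x)} h(v) - \min_{w\in \mathcal{Z}_{t_\#}(y)} h(w)
= \inf_{v\in \mathcal{Z}_{t_\#}(x)} \big(h(v) - h(w^*)\big)
\le \sup_{w\in \mathcal{Z}_{t_\#}(y)} \; \inf_{v\in \mathcal{Z}_{t_\#}(x)} |h(v) - h(w)|,
\end{align}
and symmetrically with the roles of $x$ and $y$ exchanged; since $\nu(x)$ is (up to the factor $\tfrac12$) the minimum of $h$ along $\mathcal{Z}_{t_\#}(x)$, the stated bound follows. The second inequality above then uses that $v \mapsto \|\nabla f(v)\|$ is $\kappa_2$-Lipschitz.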
We have shown that $\nu$ is a continuous function on $\mathcal{A}_*\setminus \mathcal{C}_\#$.
Let $\mathcal{A}$ be the union of the basins of attraction of all the local modes, and let $\mathcal{B} = \mathcal{A}^{\complement}$, which is also the boundary of $\mathcal{A}$. By \lemref{basins}, $\mathcal{B}$ has zero Lebesgue measure. Let $\mathcal{C}$ be the union of all the leaf clusters. For $t,\delta> 0$, define $\Gamma_{\delta,t} = \mathcal{U}_t \bigcap B(\mathcal{B},\delta)^{\complement} \setminus \mathcal{C}^\circ$, which is a compact set. Define $\beta(\delta,t) := \inf_{x\in \Gamma_{\delta,t}} \nu(x)$, which is positive for any $t>0$ and any $\delta>0$ small enough that $\Gamma_{\delta,t}$ is not empty, since $\Gamma_{\delta,t}$ is compact and $\nu$ is continuous on an open set containing $\Gamma_{\delta,t}$, as shown above.

Based on the proof of \thmref{alg1}, in order to guarantee that Algorithm 1 returns the correct mode for any $x\in \Gamma_{\delta,t}$, we only need to choose $\eta>0$ small enough that $\eta\le C_0 \beta(\delta,t)^2$ for some constant $C_0>0$. For an arbitrarily small but fixed $\epsilon>0$, let $t_\epsilon$ be the largest $t>0$ such that the probability measure of $\mathcal{U}_t^{\complement}$ is not larger than $\epsilon$, let $\delta_\eta$ be the smallest $\delta>0$ such that $\eta\le C_0 \beta(\delta,t_\epsilon)^2$, and define $\Omega_\eta = \Gamma_{\delta_\eta,t_\epsilon}\bigcup \mathcal{C}^\circ=\mathcal{U}_{t_\epsilon} \bigcap B(\mathcal{B},\delta_\eta)^{\complement}$. Using the parameter $\eta$, for any starting point $x\in\Omega_\eta$, we always obtain the correct clustering result using Algorithm 1. Since $\beta_\epsilon(\delta):=\beta(\delta,t_\epsilon)$ is non-decreasing, $\beta_\epsilon(\delta) \to 0$ as $\delta\to 0$, and $\beta_\epsilon(\delta) >0$ if $\delta>0$, it is clear that $\delta_\eta \to 0$ as $\eta\to0$.

Note that any point in $\mathcal{B}$ is in the basin of attraction of some critical point that is not a local mode. Under the assumption that $f(x)\to0$ as $\|x\| \to\infty$, there exists $r<\infty$ large enough that all the points in $\mathcal{U}_{t_\epsilon}\bigcap\mathcal{B}$ are in the basins of attraction of the critical points in $B(\underline{0},r)$, where $\underline{0}$ is the origin. Using a standard approach we can build a Morse function $\hbar$ that has bounded support and coincides with $f$ on $B(\underline{0},r)$. This way we may regard $\mathcal{U}_{t_\epsilon}\bigcap\mathcal{B}$ as a subset of $\mathcal{B}_{\hbar}$, which denotes the boundary of the basins of attraction of all local modes of $\hbar$. Based on \citep[Th 4.2]{banyaga2013lectures}, $\mathcal{B}_{\hbar}$ consists of finitely many $k$-dimensional manifolds associated with the critical points of $\hbar$ that are not local modes, where $k=0,\cdots,d-1$. Hence the Lebesgue measure of $\mathcal{U}_{t_\epsilon}\bigcap B(\mathcal{B},\delta_\eta)$ is of order $O(\delta_\eta)=o(1)$ as $\delta_\eta \to 0$. Using the boundedness of $f$, the probability measure of $\mathcal{U}_{t_\epsilon}\bigcap B(\mathcal{B},\delta_\eta)$ is also of order $o(1)$, so that the probability measure of $\Omega_\eta$ is lower bounded by $1-\epsilon+o(1)$ as $\eta\to0$. Since $\epsilon$ is arbitrarily small, we conclude the proof.
\subsection{Proof of \thmref{alg2}}
\label{sec:proof2}
The proof of \thmref{alg2} is similar to that of \thmref{alg1} given in \secref{proof1}. As much as we can, we reuse the same notation. Of course, here $(q_k)$ denote the sequence generated by \algref{2} instead, and we define $t_k$ as the level of $q_k$ so that $f(q_k) = t_k$. Otherwise, $\zeta$ has the same meaning, and so does $z_k = \zeta(t_k)$, which again has level $t_k$ and is compared with $q_k$ in a recursive manner.
A natural choice for an arbitrary neighborhood is a ball, but to keep the parallel with \secref{proof1}, we use a cluster, and this is justified in view of \lemref{cluster balls}.
Therefore, take ${s_*} \in (t_0, t_*)$ arbitrarily close to $t_*$, and let $\mathcal{C}_*$ be as before. We want to show that the sequence enters $\mathcal{C}_*$, eventually.
Define $t_\#$, $\nu$, and $\mathcal{T}$ as before, and note that it remains true that $\|\nabla f(y)\| \ge \nu$ for all $y \in \mathcal{T}$.
We also define $k_\#$ in exactly the same way, and keep the same notation --- even though things here are `parameterized' by $\varepsilon$; in view of the definition of $\eta$ below in \eqref{eta def}, this is just fine. Note that we do not yet have good control over $k_\#$.
Here the levels are not regularly discretized, and so we denote $\eta_k := t_k - t_{k-1}$. By definition and \eqref{kappa1},
\begin{align}
\label{eta def}
0
\le t_k-t_{k-1}
= f(q_k) - f(q_{k-1})
\le \kappa_1 \|q_k-q_{k-1}\|
\le \kappa_1 \varepsilon
=: \eta.
\end{align}
Therefore, $\eta_k \le \eta$ for all applicable $k$, so that $\eta$ plays the same role as before. We note that $\eta$ is simply proportional to $\varepsilon$, and so saying `when $\eta$ is small enough' is completely equivalent to saying `when $\varepsilon$ is small enough'.
Furthermore, by how the sequence $(q_k)$ is built and by \eqref{taylor2}, if it is the case that $q_{k-1} \in \mathcal{T}$, then we have
\begin{align}
f(q_k)
&\ge f(q_{k-1} + \varepsilon N(q_{k-1})) \\
&\ge f(q_{k-1}) + \varepsilon \|\nabla f(q_{k-1})\| - \tfrac12 \kappa_2 \varepsilon^2 \\
&\ge f(q_{k-1}) + \varepsilon \nu - \tfrac12 \kappa_2 \varepsilon^2 \\
&\ge f(q_{k-1}) + \tfrac12 \varepsilon \nu,
\end{align}
assuming $\varepsilon$ is small enough (specifically, $\varepsilon \le \nu/\kappa_2$ suffices for the last inequality).
We thus also have the lower bound $\eta_k \ge (\nu/2) \varepsilon$ whenever $q_{k-1} \in \mathcal{T}$.
In what follows, not all the numbered constants have the same exact meaning as in \secref{proof1}. We only keep the numbering for the reader's orientation.
Following the proof in \secref{alg1}, instead of \eqref{z diff}, here we have
\begin{align}
z_k - z_{k-1}
&= \eta_k F(z_{k-1}) \pm C_1 \eta^2, \label{z diff2}
\end{align}
valid for any $k \le k_\#$.
And instead of \eqref{q diff}, we get
\begin{align}
q_k - q_{k-1}
&= \eta_k F(q_{k-1}) \pm C_3 \eta^2, \label{q diff2}
\end{align}
We then simply follow the same arguments, augmented by the following. Since in the recursion we assume that $d_{k-1} \le \delta_{\rm tube}$, as part of the recursion we also have $q_{k-1} \in \mathcal{T}$, and so $\eta_k \ge (\nu/2) \varepsilon$. And since $t_k = \eta_k + \cdots + \eta_1 + t_0$, this implies that $k_\# \le (t_\#-t_0)/\big((\nu/2) \varepsilon\big) = (t_\#-t_0)/\big((\nu/(2\kappa_1)) \eta\big)$.
Thus, the bound $a_{k_\#} \le C_6 \eta$ resulting from \eqref{a_bound} holds in a similar way, because $\eta k_\#$ is upper-bounded by the constant $2\kappa_1(t_\#-t_0)/\nu$.
We thus arrive at \eqref{d_final}.
The rest of the proof is identical.
We thus conclude that, when $\varepsilon$ is small enough, the sequence $(q_k)$ enters the cluster $\mathcal{C}_*$.
In addition, we can show as we did in \secref{alg1} that the polygonal line connecting the sequence $(q_k : k \le k_\#)$ is within distance $O(\eta) = O(\varepsilon)$ of $\mathcal{Z}_{t_*}$.
It remains to show that the sequence ends at $x_*$ when $\varepsilon$ is small enough; and then to show that the polygonal line converges to the gradient line as $\varepsilon$ approaches 0.
\subsubsection{Algorithm returns $x_*$}
Take ${s_*} < t_*$ close enough to $t_*$ that $\mathcal{C}_*$ is a leaf cluster.
Assume $\varepsilon$ is small enough that $(q_k)$ enters $\mathcal{C}_*$, and take it even smaller if needed to have $\varepsilon < \delta_*$, where, as before, $\delta_*$ is the minimum separation between two clusters at level ${s_*}$.
Then, because $(f(q_k))$ is non-decreasing and $\|q_k - q_{k-1}\| \le \varepsilon < \delta_*$, once $(q_k)$ enters $\mathcal{C}_*$, it must remain there.
It therefore suffices to show that $f(q_k) \to t_*$, since $x_*$ is the only point in $\mathcal{C}_*$ with value $t_*$ and all other points have a smaller value. Let $t_\infty := \sup_k f(q_k) = \lim_k f(q_k)$ and suppose for contradiction that $t_\infty < t_*$. Consider the set $\mathcal{A} := \{y \in \mathcal{C}_* : {s_*} \le f(y) \le t_\infty\}$. Since $x_*$ is the only critical point in the leaf cluster $\mathcal{C}_*$ and $x_* \notin \mathcal{A}$, the set $\mathcal{A}$ has no critical point, and because it is compact, $\nu_\ddag := \inf\{\|\nabla f(y)\| : y \in \mathcal{A}\} > 0$.
Therefore, if $\varepsilon_\ddag := \min\{\nu_\ddag/\kappa_2, \varepsilon\}$, by \eqref{taylor_N}, for any $y \in \mathcal{A}$, $f(\tilde y) \ge f(y) + \eta_\ddag$ with $\tilde y := y+ \varepsilon_\ddag N(y)$ and $\eta_\ddag := \nu_\ddag \varepsilon_\ddag - (\kappa_2/2) \varepsilon_\ddag^2 > 0$.
Now, take $k$ large enough that $f(q_{k-1}) > t_\infty - \eta_\ddag/2$.
We then have $f(\tilde q_{k-1}) \ge f(q_{k-1}) + \eta_\ddag > t_\infty + \eta_\ddag/2$ and $\|\tilde q_{k-1} - q_{k-1}\| = \varepsilon_\ddag \le \varepsilon$. But then the sequence moves to $q_k$ with $f(q_k) \le t_\infty$ when it could have moved instead to $\tilde q_{k-1}$ with $f(\tilde q_{k-1}) > t_\infty$, contradicting the rules governing \algref{2}.
\subsubsection{Convergence}
The convergence of the polygonal line to the gradient line is proved in essentially the same way as in \secref{proof1 convergence}, and we omit further details.
\subsubsection{Bound on the Hausdorff distance}
\label{sec:proof2 Hausdorff bound}
A bound of order $O(\varepsilon)$ on the Hausdorff distance between the polygonal line and the gradient line can be obtained in essentially the same way as in \secref{proof1 Hausdorff bound}, and we omit further details.
\subsubsection{Uniform convergence}
\label{sec:proof2 uniform}
The arguments are entirely analogous to those given in \secref{proof1 uniform}, and details are thus omitted.
\subsection*{Acknowledgments}
We are grateful to José Chacón for stimulating discussions.
\small
\bibliographystyle{chicago}
| {
"timestamp": "2021-11-22T02:19:12",
"yymm": "2111",
"arxiv_id": "2111.10298",
"language": "en",
"url": "https://arxiv.org/abs/2111.10298",
"abstract": "Two important nonparametric approaches to clustering emerged in the 1970's: clustering by level sets or cluster tree as proposed by Hartigan, and clustering by gradient lines or gradient flow as proposed by Fukunaga and Hosteler. In a recent paper, we argue the thesis that these two approaches are fundamentally the same by showing that the gradient flow provides a way to move along the cluster tree. In making a stronger case, we are confronted with the fact the cluster tree does not define a partition of the entire support of the underlying density, while the gradient flow does. In the present paper, we resolve this conundrum by proposing two ways of obtaining a partition from the cluster tree -- each one of them very natural in its own right -- and showing that both of them reduce to the partition given by the gradient flow under standard assumptions on the sampling density.",
"subjects": "Statistics Theory (math.ST); Machine Learning (cs.LG)",
"title": "An Asymptotic Equivalence between the Mean-Shift Algorithm and the Cluster Tree",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9678992932829918,
"lm_q2_score": 0.7310585669110202,
"lm_q1q2_score": 0.7075910702616532
} |
https://arxiv.org/abs/1209.6591 | The short time asymptotics of Nash entropy | Let $(M^n, g)$ be a complete Riemannian manifold with $Rc\geq -Kg$, $H(x, y, t)$ is the heat kernel on $M^n$, and $H= (4\pi t)^{-\frac{n}{2}}e^{-f}$. Nash entropy is defined as $N(H, t)= \int_{M^n} (fH) d\mu(x)- \frac{n}{2}$. We studied the asymptotic behavior of $N(H, t)$ and $\frac{\partial}{\partial t}\Big[N(H, t)\Big]$ as $t\rightarrow 0^{+}$, and got the asymptotic formulas at $t= 0$. In the Appendix, we got Hamilton-type upper bound for Laplacian of positive solution of the heat equation on such manifolds, which has its own independent interest. | \section{Introduction}
On a complete manifold $(M^n, g)$ with $Rc\geq -Kg$, where $K> 0$ is a constant, for fixed $y\in M^n$ it is well-known that the heat kernel $H(x, y, t)$ on $(M^n, g)$ is unique. We write $H= (4\pi t)^{-\frac{n}{2}}e^{-f}$. As in \cite{Niadd}, the Nash entropy is defined as:
\begin{definition}\label{def Nash}
{\begin{equation}\label{Nash}
N(H, t)= \int_{M^n} (fH) d\mu(x)- \frac{n}{2}
\end{equation}
}
\end{definition}
The Nash entropy is closely related to the $\mathscr{W}$-entropy for the linear heat equation, and the large time asymptotics of this entropy reflect the volume growth rate of the manifold (see \cite{Ni}, \cite{Niadd} and \cite{Nilarge}).
In this paper, we study the asymptotic behavior of $N(H, t)$ and $\frac{\partial}{\partial t}N(H, t)$ as $t\rightarrow 0^{+}$, and solve a problem proposed in \cite{RFTA} (Problem $23.36$ there). More precisely, we prove the following theorem:
\begin{theorem}\label{thm 3.2}
{Let $(M^n, g)$ be a complete Riemannian manifold with $Rc\geq -Kg$, where $K> 0$ is a constant. Then
\begin{equation}\label{3.6}
{N(H, t)= -\frac{1}{2}R(y)\cdot t+ O(t^{\frac{3}{2}})
}
\end{equation}
and
\begin{equation}\label{3.6.1}
{\frac{\partial}{\partial t}\Big[N(H, t) \Big]= -\frac{1}{2}R(y)+ o(1)
}
\end{equation}
where $O(t^{\frac{3}{2}})$ denotes a quantity with $\limsup_{t\rightarrow 0^{+}}|O(t^{\frac{3}{2}})|\, t^{-\frac{3}{2}}$ finite, $o(1)$ denotes a quantity with $\lim_{t\rightarrow 0^{+}}o(1)= 0$, and $t$ is small enough.
}
\end{theorem}
One motivation to study the short time asymptotics of the Nash entropy is the Li-Yau-Perelman type estimate for the heat equation on manifolds with Ricci curvature bounded from below. Motivated by Perelman's differential Harnack estimate for Ricci flow, in \cite{Ni}, on a closed manifold $(M^n, g)$ with $Rc\geq 0$, Ni proved the following Li-Yau-Perelman type estimate for the heat equation when $t> 0$:
\begin{equation}\label{0.2.0}
{2\Delta f(x, y, t)- |\nabla f(x, y, t)|^2+ \frac{f(x, y, t)- n}{t}\leq 0
}
\end{equation}
where $H(x, y, t)= (4\pi t)^{-\frac{n}{2}}e^{-f}$ is the heat kernel. In fact, (\ref{0.2.0}) also holds for the heat kernel on a complete manifold $(M^n, g)$ with $Rc\geq 0$ (see \cite{RFAA}).
In the well-known paper \cite{Pere}, Perelman made the following claim (see remark $9.6$ there):
\begin{claim}\label{claim}
{If $(M^n, g)$ is a compact Riemannian manifold, $g_{ij}(x, t)$ evolves according to $\Big(g_{ij}\Big)_t= A_{ij}(t)$ and $g_{ij}(x, 0)= g_{ij}(x)$, $t\in (-T, 0]$. Define $\square = \frac{\partial}{\partial t}- \Delta$ and its conjugate $\square^{*}= -\frac{\partial}{\partial t}- \Delta- \frac{1}{2}A$ (where $A= g^{ij}A_{ij}$). Consider the fundamental solution $u= (-4\pi t)^{-\frac{n}{2}} e^{-f}$ for $\square^{*}$, starting as $\delta$-function at some point $(p, 0)$. Then for general $A_{ij}$ the function $\Big(\square \bar{f}+ \frac{\bar{f}}{t}\Big)(q, t)$, where $\bar{f}= f- \int_{M^n} fu$, is of order $O(1)$ for $(q, t)$ near $(p, 0)$.
}
\end{claim}
We will focus on the special case where the evolving metric is a static metric. From Theorem \ref{thm 3.2}, it is easy to show that Perelman's claim in the static metric case is equivalent to the following claim on compact manifolds:
\begin{equation}\label{0.2}
{2\Delta f(x, y, t)- |\nabla f(x, y, t)|^2+ \frac{f(x, y, t)- n}{t}= -R(y)+ O(t+ d^2(x, y))
}
\end{equation}
If (\ref{0.2}) were true, it would be an improvement of (\ref{0.2.0}) when $t+ d^2(x, y)$ is small enough and $R(y)> 0$. But using the following explicit formula for the heat kernel on the hyperbolic space $\mathbb{H}^3$ (cf. section $9.2$ in \cite{Gri}):
\begin{equation}\nonumber
{H= (4\pi t)^{-\frac{3}{2}}\frac{d}{\sinh d}\exp{\big(-\frac{d^2}{4t}- t\big)}
}
\end{equation}
it is easy to check that (\ref{0.2}) does not hold in general. Hence Claim \ref{claim} is not true in general in the static metric case on complete manifolds.
As observed in \cite{Niadd}, the integrand of $\frac{\partial}{\partial t}\Big[N(H, t)\Big]$ is nothing but the expression in Li-Yau's gradient estimate for the heat kernel multiplied by the heat kernel, namely $-(\Delta \ln H+ \frac{n}{2t})H$. Because so far there is no sharp Li-Yau-type gradient estimate for the heat kernel or for solutions to the heat equation on complete manifolds with Ricci curvature bounded from below by a negative constant, we hope that (\ref{3.6.1}) will be helpful for understanding this estimate better.
On the other hand, when $(M^n, g)$ is a compact Riemannian manifold, the short time behavior of the logarithm of the heat kernel has been studied by many probabilists. Although the heat kernel $H(x, y, t)$ has an asymptotic expansion at $t= 0$, generally there is no such expansion of $\ln H$ at $t= 0$, and the singularity of $\ln H$ at $t= 0$ can be quite complicated. However, in \cite{Va}, Varadhan proved
\begin{equation}\label{0.1}
{\lim_{t\rightarrow 0} t\ln H(x, y, t)= -\frac{d^2(x, y)}{4}
}
\end{equation}
Moreover, using stochastic process methods, Malliavin and Stroock proved that the above equation is preserved when taking the first and second spatial derivatives on the domain outside of the cut locus (see \cite{MS}). Using analytic methods, Cheng, Li and Yau proved (\ref{0.1}) for complete Riemannian manifolds in \cite{CLY}. We hope that Theorem \ref{thm 3.2} will be useful for studying the short time behavior of the logarithm of the heat kernel on complete manifolds by analytic methods.
The strategy to prove (\ref{3.6}) is to use the expansion $H_{N}(x, y, t)$ of $H(x, y, t)$ at $t= 0$, although generally $\ln H_{N}$ does not converge to $\ln H$ uniformly near $t= 0$. In the integral sense of (\ref{Nash}), we show that there is uniform convergence in Lemma \ref{lem 3.1}, using an improved estimate of $H- H_{N}$ obtained in Theorem \ref{thm 2.2}. The remaining calculation involving the integral of $H_{N}$ is standard; for completeness we give the details in full.
To prove (\ref{3.6.1}), because the manifold $M^n$ can be non-compact, we need to be more careful about switching the order of differentiation and integration. The detailed proof of the validity of this switch is given at the beginning of section $4$. We need an upper bound on $\frac{H_{t}}{H}$ to verify the switch. This type of bound is known for closed manifolds from \cite{Ham}, and in \cite{RFAA} (see also \cite{LYH}) the proof is sketched for complete manifolds with $Rc\geq 0$, following a strategy similar to that of Kotschwar in \cite{Kots}. The detailed proof of this Hamilton-type upper bound for complete manifolds with $Rc\geq -Kg$ is included in the Appendix for completeness.
\begin{note}\label{note 1.1}
After the paper was circulated and posted on the arXiv, Jia-yong Wu kindly informed us that he had independently proved the Hamilton-type upper bound for complete manifolds with $Rc\geq 0$ in detail; see \cite{Wu}.
\end{note}
The paper is organized as follows. In section $2$, we state some preliminary results about the heat kernel and obtain improved estimates on $H-H_{N}$. In section $3$, we prove (\ref{3.6}). Using (\ref{3.6}) and results in the Appendix, (\ref{3.6.1}) is proved in section $4$. In the Appendix, on complete manifolds with Ricci curvature bounded from below, the Hamilton-type upper bound on $\frac{H_{t}}{H}$ is proved.
Acknowledgement: The author would like to thank Zhiqin Lu, Brett Kotschwar for interest and suggestions, and Peter Li, Jiaping Wang for their interest.
\section{Preliminary}
We first define some notation and functions. In the rest of the paper, we fix $y\in M^n$ and define \[\Omega_y=\{x\in M^n: d(x, y)< inj_{g}(y)\}\]
where $inj_{g}(y)$ denotes the injectivity radius of metric $g$ at $y$. Define
\[B(\rho)= \{x|\ d(x, y)\leq \rho\} \quad \text{and} \quad B_{z}(\rho)= \{x| \ d(x, z)\leq \rho \}\]
Hence $B(\rho)= B_y(\rho)$. We use $V(B_z(\rho))$ to denote the volume of $B_z(\rho)$, and $V_{-K}(\rho)$ to denote the volume of the geodesic ball of radius $\rho$ in the space form of constant sectional curvature $-\frac{K}{n -1}$.
Choose $r\in (0, \frac{1}{4}inj_{g}(y))$, fix it, and let $N_{0}= \frac{n}{2}+ 3$. Define $E= (4\pi t)^{-\frac{n}{2}}\exp \Big(-\frac{d^2(x, y)}{4t}\Big)$ and $\tilde{E}= (4\pi t)^{-\frac{n}{2}}\exp \Big(-\frac{d^2(x, y)}{5t}\Big)$. Sometimes we will write $B$ for $B(\frac{r}{2})$ and $d$ for $d(x, y)$; the meaning will be clear from the context.
Assume $\eta: [0, \infty)\rightarrow [0, 1]$ is a $C^{\infty}$ cut-off function with
\begin{equation}\label{2.1}
\eta(s)=\left\{
\begin{array}{rl}
&1\quad \text{if}\ s\leq r \\
&0\quad \text{if}\ s\geq 2r
\end{array} \right.
\end{equation}
The following theorem collects some known results about heat kernel on complete manifolds (see \cite{RFTA}, \cite{GL}, \cite{Li} etc.).
\begin{theorem}\label{thm 2.1}
{$(M^n, g)$ is a complete Riemannian manifold with $Rc\geq -Kg$, where $K> 0$ is a constant. Then there exists a unique positive fundamental solution $H(x, y, t)$ to the heat equation, which is called the heat kernel. Moreover $H(x, y, t)\in C^{\infty}(M^n\times M^n\times (0, \infty))$ is symmetric in $x$ and $y$, and
$(i)$ \begin{equation}\label{2.3}
{\int_{M^n} H(x, y, t)d\mu(x)\equiv 1
}
\end{equation}
$(ii)$ \begin{equation}\label{2.4}
{H(x, y, t)= P_{N_0}(x, y, t)+ F_{N_0}(x, y, t)
}
\end{equation}
\begin{equation}\label{2.5}
{P_{N_0}(x, y, t)= \eta(d(x, y)) H_{N_0}(x, y, t)
}
\end{equation}
and
\begin{equation}\label{2.6}
{H_{N_0}(x, y, t)= (4\pi t)^{-\frac{n}{2}}\exp \Big(-\frac{d^2(x, y)}{4t}\Big)\cdot \sum_{k= 0}^{N_0} \varphi_{k}(x, y)t^k
}
\end{equation}
$\varphi_{k}(x,y )\in C^{\infty}(\Omega_{y})$, $k= 0, 1, \cdots, N_0$. Also $H_{N_0}$ satisfies the following:
\begin{equation}\label{2.7}
{(\Delta- \frac{\partial}{\partial t})H_{N_0}(x, y, t)= E\Delta \varphi_{N_0} t^{N_0}
}
\end{equation}
$(iii)$ Let $\{x^k\}_{k=1}^n$ be exponential normal coordinates centered at $y\in M^n$, then $\varphi_{0}$ and $\varphi_{1}$ have the following asymptotic expansion:
\begin{equation}\label{2.9}
{\varphi_{0}(x, y)= 1+ \frac{1}{12}R_{pq}(y)x^px^q+ O(d^3(x, y))
}
\end{equation}
\begin{equation}\label{2.10}
{\varphi_{1}(x, y)= \frac{R(y)}{6}+ O(d(x, y))
}
\end{equation}
}
\end{theorem}
We will prove an estimate for $F_{N_0}$; this estimate improves on the usual estimate of $F_{N_0}$, which only gives a bound of order $t^{N_0+ 1- \frac{n}{2}}$. The improved estimate (\ref{2.11}) is the key to the proof of Lemma \ref{lem 3.1}.
\begin{theorem}\label{thm 2.2}
{For $F_{N_0}(x, y, t)$ in Theorem \ref{thm 2.1}, we have the following estimates:
\begin{equation}\label{2.11}
{|F_{N_0}(x, y, t)|\leq Ct^4\exp{(-\frac{d^2(x, y)}{5t})}
}
\end{equation}
and
\begin{equation}\label{2.11.0}
{\Big|\frac{\partial}{\partial t}F_{N_0}(x, y, t)\Big|\leq Ct^2\exp{(-\frac{d^2(x, y)}{5t})}
}
\end{equation}
where $t$ is small enough and $C$ is a positive constant independent of $x$, $t$.
}
\end{theorem}
\begin{remark}\label{remark 2.1}
{(\ref{2.11}) was proved in \cite{GL} for uniformly parabolic operators. Our proof of (\ref{2.11}) and (\ref{2.11.0}) is motivated by an argument in \cite{Li}, and it is different from the proof in \cite{GL}.
}
\end{remark}
{\it Proof:}~
{($\mathnormal{1}$). We first prove (\ref{2.11}). From the definition of $P_{N_0}(x, y, t)$, it is easy to see that $\lim_{t\rightarrow 0}P_{N_0}(x, y, t)= \delta_{y}(x)$. Hence,
\begin{align}
F_{N_0}(x, y, t)&= H(x, y, t)- P_{N_0}(x, y, t) \nonumber \\
&= -\int_{0}^{t}\frac{\partial}{\partial s}\int_{M^n} H(x, z, t-s)P_{N_0}(z, y, s)d\mu(z)ds \nonumber\\
&= -\int_{0}^{t}\int_{M^n}\Big(\frac{\partial}{\partial s}- \Delta_{z}\Big)P_{N_0}(z, y, s)\cdot H(x, z, t- s)d\mu(z) ds\nonumber
\end{align}
where $\Delta_z$ is the Laplacian with respect to the $z$-variable.
From (\ref{2.7}) and the definition of $\eta$, when $z\in B(r)$,
\begin{equation}\label{2.10a}
{\Big|\Big(\frac{\partial}{\partial s}- \Delta_{z}\Big)P_{N_0}(z, y, s)\Big|\leq C_{1}s^3\exp{(-\frac{d^2(z, y)}{4s})}
}
\end{equation}
and when $z\in B(2r)\backslash B(r)$,
\begin{equation}\label{2.10b}
{\Big|\Big(\frac{\partial}{\partial s}- \Delta_{z}\Big)P_{N_0}(z, y, s)\Big|\leq C_{2}s^{-\frac{n}{2}-1}\exp{(-\frac{d^2(z, y)}{4s})}
}
\end{equation}
Hence
\begin{align}
|F_{N_0}(x, y, t)|&\leq C_1\int_{0}^{t} s^3\int_{B(r)} H(x, z, t- s)\exp{\Big(-\frac{d^2(z, y)}{4s}\Big)} d\mu(z) ds\nonumber \\
& \quad +C_2\int_{0}^{t} s^{-\frac{n}{2}-1}\int_{B(2r)\backslash B(r)} H(x, z, t- s)\exp{\Big(-\frac{d^2(z, y)}{4s}\Big)} d\mu(z) ds \nonumber \\
&\leq (\mathit{a})+ (\mathit{b})
\end{align}
We can find $0< t_1\leq 1$ and $k_0> 0$, such that if $s\in (0, t_1)$, then
\[V(B_{p}(\sqrt{s}))\geq k_0 s^{\frac{n}{2}} \quad \text{for any } p\in B_{y}(3r) \]
In the rest of the proof, assume $t\in (0, t_1]$; we consider two cases.
Case (I): If $x\in B_{y}(3r)$ and $z\in B_{y}(2r)$, then from \cite{LY} and the above volume lower bound,
\begin{align}
H(x, z, t- s)&\leq C V^{-\frac{1}{2}}\Big(B_x(\sqrt{t- s})\Big) V^{-\frac{1}{2}} \Big(B_z(\sqrt{t- s})\Big)\cdot \exp{\Big[CK(t- s)- \frac{6d^2(z, x)}{25(t- s)}\Big]}\label{2.12.0} \\
& \leq C(K, k_0, n)(t- s)^{-\frac{n}{2}} \exp{\Big(- \frac{6d^2(z, x)}{25(t- s)}\Big)}\label{2.12}
\end{align}
Case (II): If $x\notin B_y(3r)$ and $z\in B_y(2r)$, using (\ref{2.12.0}), $d(x, z)\geq r$ and volume comparison theorem,
\begin{align}
H(x, z, t- s)&\leq C V^{-1}\Big(B_z(\sqrt{t- s})\Big) \cdot \Big[ \frac{V_{-K}\Big(\sqrt{t- s}+ d(x, z)\Big)}{V_{-K}\Big(\sqrt{t- s}\Big)} \Big]^{\frac{1}{2}} \nonumber \\
& \quad \cdot \exp{\Big[CK(t- s)- \frac{6d^2(z, x)}{25(t- s)}\Big]}\nonumber \\
& \leq C(K, k_0, n, r) \exp{\Big(- \frac{23d^2(z, x)}{100(t- s)}\Big)}\label{2.13}
\end{align}
Note in Case (I), $inj_{g}(x)$ has a uniform lower bound, hence it is easy to get
\begin{equation}\label{2.14}
{\int_{B_y(r)} s^{-\frac{n}{2}} \exp{\Big(-\frac{d^2(z, x)}{100s}\Big)} d\mu(z)\leq C
}
\end{equation}
for any $s\in (0, t_1]$.
Now using (\ref{2.12}), (\ref{2.13}), (\ref{2.14}) and the classical inequality
\[\frac{d^2(x, z)}{t- s}+ \frac{d^2(y, z)}{s}\geq \frac{d^2(x, y)}{t}\]
we can get
\begin{equation}\nonumber
{\int_{B_y(r)} H(x, z, t-s)\exp{\Big(-\frac{d^2(z, y)}{4s}\Big)} d\mu(z)\leq C\exp{\Big(-\frac{23d^2(x, y)}{100t}\Big)}
}
\end{equation}
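For the reader's convenience, the classical inequality just used follows from the triangle inequality together with the Cauchy--Schwarz type bound $\frac{a^2}{u}+ \frac{b^2}{v}\geq \frac{(a+ b)^2}{u+ v}$ for $a, b\geq 0$ and $u, v> 0$:
\begin{equation}\nonumber
{\frac{d^2(x, z)}{t- s}+ \frac{d^2(z, y)}{s}\geq \frac{\big(d(x, z)+ d(z, y)\big)^2}{t}\geq \frac{d^2(x, y)}{t}
}
\end{equation}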
Hence
\begin{equation}\label{2.15}
{(\mathit{a})\leq C t^4\exp{\Big(-\frac{23d^2(x, y)}{100t}\Big)}
}
\end{equation}
Similarly,
\begin{equation}\nonumber
{\int_{B_y(2r)\backslash B_y(r)} H(x, z, t-s)\exp{\Big(-\frac{d^2(z, y)}{4s}\Big)} d\mu(z)\leq C \exp{\Big(-\frac{3r^2}{100s}\Big)} \exp{\Big(-\frac{d^2(x, y)}{5t}\Big)}
}
\end{equation}
Hence
\begin{align}
(\mathit{b})&\leq C_2\Big[\int_0^t s^{-\frac{n}{2}- 1} \exp{\Big(-\frac{3r^2}{100s}\Big)} ds\Big] \exp{\Big(-\frac{d^2(x, y)}{5t}\Big)} \nonumber\\
& \leq Ct^4 \exp{\Big(-\frac{d^2(x, y)}{5t}\Big)} \label{2.16}
\end{align}
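The last step uses the elementary inequality $\exp (-\frac{c}{s})\leq m!\, (\frac{s}{c})^{m}$, valid for $s, c> 0$ and any integer $m\geq 1$ since $e^{x}\geq \frac{x^m}{m!}$ for $x\geq 0$; taking $c= \frac{3r^2}{100}$ and $m\geq \frac{n}{2}+ 5$, for $t\leq 1$ we get
\begin{equation}\nonumber
{\int_0^t s^{-\frac{n}{2}- 1} \exp{\Big(-\frac{3r^2}{100s}\Big)} ds\leq m!\Big(\frac{100}{3r^2}\Big)^{m}\int_0^t s^{m- \frac{n}{2}- 1} ds\leq C(n, r) t^{4}
}
\end{equation}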
By (\ref{2.15}) and (\ref{2.16}), (\ref{2.11}) is proved.
($\mathnormal{2}$). The strategy to prove (\ref{2.11.0}) is similar.
\begin{align}
\frac{\partial}{\partial t}F_{N_0}(x, y, t)&= \frac{\partial}{\partial t} \Big[-\int_{0}^{t}\int_{M^n}\Big(\frac{\partial}{\partial s}- \Delta_{z}\Big)P_{N_0}(z, y, s)\cdot H(x, z, t- s)d\mu(z) ds \Big]\nonumber \\
&= -\int_{0}^{t}\int_{M^n}\Big(\frac{\partial}{\partial s}- \Delta_{z}\Big)P_{N_0}(z, y, s)\cdot \Big(\frac{\partial }{\partial t}H(x, z, t- s) \Big) d\mu(z) ds \nonumber \\
&\quad + \Big(\Delta_x- \frac{\partial}{\partial t}\Big)P_{N_0}(x, y, t) \nonumber \\
&= (\mathit{\gamma})+ (\mathit{\tau}) \nonumber
\end{align}
From (\ref{2.10a}), (\ref{2.10b}) and $P_{N_0}(x, y, t)= 0$ when $x\notin B(2r)$,
\begin{equation}\label{2.17}
{(\mathit{\tau})\leq Ct^4\exp{\Big(-\frac{d^2(x, y)}{5t}\Big)}
}
\end{equation}
Now we estimate $(\mathit{\gamma})$.
\begin{align}
(\mathit{\gamma})&= -\int_{0}^{t}\int_{M^n}\Big(\frac{\partial}{\partial s}- \Delta_{z}\Big)P_{N_0}(z, y, s)\cdot \Big(\Delta_{z}H(x, z, t- s) \Big) d\mu(z) ds \nonumber \\
&= -\int_{0}^{t}\int_{M^n}\Big[\Delta_z \Big(\frac{\partial}{\partial s}- \Delta_{z}\Big)P_{N_0}(z, y, s)\Big] \cdot H(x, z, t- s)d\mu(z) ds \nonumber
\end{align}
Similarly to (\ref{2.10a}) and (\ref{2.10b}), from (\ref{2.7}), when $z\in B(r)$,
\begin{align}
\Big|\Delta_z \Big(\frac{\partial}{\partial s}- \Delta_{z}\Big)P_{N_0}(z, y, s)\Big|\leq C_3s\exp{\Big(-\frac{d^2(z, y)}{4s}\Big)} \label{2.10c}
\end{align}
and when $z\in B(2r)\backslash B(r)$,
\begin{align}
\Big|\Delta_z \Big(\frac{\partial}{\partial s}- \Delta_{z}\Big)P_{N_0}(z, y, s)\Big|\leq C_4s^{-\frac{n}{2}- 3}\exp{\Big(-\frac{d^2(z, y)}{4s}\Big)} \label{2.10d}
\end{align}
Following an argument similar to that in the proof of (\ref{2.11}), using (\ref{2.10c}) and (\ref{2.10d}) instead of (\ref{2.10a}) and (\ref{2.10b}),
\begin{equation}\label{2.18}
{(\mathit{\gamma})\leq Ct^{2}\exp{\Big(-\frac{d^2(x, y)}{5t}\Big)}
}
\end{equation}
From (\ref{2.17}) and (\ref{2.18}),
\begin{equation}\nonumber
{\Big|\frac{\partial}{\partial t}F_{N_0}(x, y, t)\Big|\leq (\mathit{\gamma})+ (\mathit{\tau})\leq Ct^2\exp{(-\frac{d^2(x, y)}{5t})}
}
\end{equation}
}
\qed
\section{The short time asymptotics of $N(H, t)$}
From (\ref{2.6}) and (\ref{2.9}) in Theorem \ref{thm 2.1}, there exists $0< t_0\leq 1$ such that
\begin{equation}\label{3.0}
{\frac{1}{2}\leq (4\pi t)^{\frac{n}{2}}\exp \Big(\frac{d^2(x, y)}{4t}\Big) H_{N_0}(x, y, t)\leq 2
}
\end{equation}
holds when $x\in B(\frac{r}{2})$ and $0< t\leq t_0$. In section $3$ and section $4$, we assume that $t\in (0, t_0]$ and $(M^n, g)$, $H$ are from Theorem \ref{thm 2.1}.
\begin{lemma}\label{lem 3.1}
{\begin{equation}\label{3.1}
{\int_{B(\frac{r}{2})} \Big[\ln \frac{H(x, y, t)}{H_{N_0}(x, y, t)}\Big] \cdot H(x, y, t) d\mu(x)= O(t^{2})
}
\end{equation}
}
\end{lemma}
{\it Proof:}~
{Assume $x\in B(\frac{r}{2})$, $t\leq t_0$, then $P_{N_0}(x, y, t)= H_{N_0}(x, y, t)$. Hence
\[F_{N_0}(x, y, t)= H(x, y, t)- H_{N_0}(x, y, t)\]
From (\ref{2.11}),
\begin{equation}\label{3.2}
{|F_{N_0}(x, y, t)|\leq Ct^{N_0+1- \frac{n}{2}} \exp \Big(-\frac{d^2(x, y)}{5t}\Big)
}
\end{equation}
If $F_{N_0}(x, y, t)> 0$, then
\begin{align}
|\ln \frac{H}{H_{N_0}} \cdot H|(x, y, t) &= \ln \Big(1+ \frac{F_{N_0}}{H_{N_0}} \Big)\cdot H\leq \frac{F_{N_0}}{H_{N_0}}\cdot H \nonumber\\
& \leq Ct^{N_0+1} \exp \Big(\frac{d^2(x, y)}{20t}\Big)\cdot H(x, y, t) \nonumber
\end{align}
If $F_{N_0}(x, y, t)\leq 0$, then $H(x, y, t)\leq H_{N_0}(x, y, t)$,
\begin{align}
|\ln \frac{H}{H_{N_0}} \cdot H|(x, y, t) &= |\ln H(x, y, t)- \ln H_{N_0}(x, y, t)|\cdot H(x, y, t)\nonumber\\
&= |\frac{1}{\xi}[H(x, y, t)- H_{N_0}(x, y, t)]|\cdot H(x, y, t) \nonumber
\end{align}
where $H(x, y, t)\leq \xi\leq H_{N_0}(x, y, t)$. Hence
\begin{equation}\nonumber
{|\ln \frac{H}{H_{N_0}} \cdot H|(x, y, t) \leq |\frac{F_{N_0}}{H}|\cdot H= F_{N_0}\leq Ct^{N_0+1- \frac{n}{2}} \exp \Big(-\frac{d^2(x, y)}{5t}\Big)
}
\end{equation}
By the above,
\begin{equation}\label{3.3}
{|\ln \frac{H}{H_{N_0}} \cdot H|(x, y, t) \leq Ct^{4} \Big[t^{\frac{n}{2}}\exp \Big(\frac{d^2(x, y)}{20t}\Big)\cdot H+ \exp \Big(-\frac{d^2(x, y)}{5t}\Big) \Big]
}
\end{equation}
From (\ref{3.0}) and (\ref{3.2}),
\begin{equation}\label{3.4}
{H(x, y, t)\leq |H_{N_0}|+ |F_{N_0}|\leq 2(4\pi t)^{-\frac{n}{2}}\exp \Big(-\frac{d^2(x, y)}{4t}\Big)+ Ct^{4}\cdot \exp \Big(-\frac{d^2(x, y)}{5t}\Big)
}
\end{equation}
By (\ref{3.3}) and (\ref{3.4}),
\begin{equation}\label{3.5}
{\Big|\ln \frac{H}{H_{N_0}} \cdot H\Big|(x, y, t) \leq Ct^{4}
}
\end{equation}
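Indeed, inserting (\ref{3.4}) into the first term inside the brackets in (\ref{3.3}) gives, for $t\leq t_0\leq 1$,
\begin{equation}\nonumber
{t^{\frac{n}{2}}\exp \Big(\frac{d^2(x, y)}{20t}\Big)\cdot H\leq 2(4\pi)^{-\frac{n}{2}}\exp \Big(-\frac{d^2(x, y)}{5t}\Big)+ Ct^{4+ \frac{n}{2}}\exp \Big(-\frac{3d^2(x, y)}{20t}\Big)\leq C
}
\end{equation}
so the quantity in brackets in (\ref{3.3}) is bounded by a constant, which yields (\ref{3.5}).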
Hence $\int_{B(\frac{r}{2})} \Big[\ln \frac{H(x, y, t)}{H_{N_0}(x, y, t)}\Big] \cdot H(x, y, t) d\mu(x)= O(t^{2})$.
}
\qed
\bigskip
{\it \textbf{Proof of (\ref{3.6})}:}~
{\begin{align}
-\int_{M^n} fH d\mu = \int_{M^n\backslash B(\frac{r}{2})} (-f H)d\mu + \int_{B(\frac{r}{2})} (-f H) d\mu = (I)+ (II) \nonumber
\end{align}
Firstly, we estimate $(I)$. From \cite{LY}, we have
\begin{equation}\nonumber
{H(x, y, t)\leq C V^{-\frac{1}{2}}\Big(B_x(\sqrt{t})\Big) V^{-\frac{1}{2}} \Big(B_y(\sqrt{t})\Big)\cdot \exp{\Big[CKt- \frac{d^2(x, y)}{5t}\Big]}
}
\end{equation}
If $x\in M^n\backslash B(\frac{r}{2})$ and $t$ is small enough, using volume comparison theorem,
\begin{equation}\label{3.5.1}
{H(x, y, t)\leq C V^{-1}\Big(B_y(\sqrt{t})\Big) \cdot \frac{V_{-K}\Big(\sqrt{t}+ d\Big)}{V_{-K}\Big(\sqrt{t}\Big)}\cdot \exp{\Big[CKt- \frac{d^2}{5t}\Big]}\leq Ct^2\exp{\Big(-\frac{d^2}{6t}\Big)}
}
\end{equation}
where $C$ depends on $n$, $K$, $r$ and the metric $g$ near $y$. Choose $t$ small enough such that $H\leq Ct^2\leq e^{-1}$, then by the monotonicity of $h(x)= \ln x \cdot x$ on $(0, e^{-1}]$,
\begin{equation}\nonumber
{\Big|\ln H(x, y, t)\cdot H(x, y, t) \Big| \leq \Big|\ln \Big[Ct^2\exp{\Big(-\frac{d^2}{6t}\Big)}\Big]\cdot \Big[Ct^2\exp{\Big(-\frac{d^2}{6t}\Big)}\Big]\Big|
}
\end{equation}
hence
\begin{align}
|(I)|&= |\int_{M^n\backslash B(\frac{r}{2})} [\ln H+ \frac{n}{2}\ln (4\pi t)]\cdot H d\mu(x) | \nonumber \\
&\leq \int_{M^n\backslash B(\frac{r}{2})} \Big|\ln \Big[Ct^2\exp{\Big(-\frac{d^2}{6t}\Big)}\Big]\cdot \Big[Ct^2\exp{\Big(-\frac{d^2}{6t}\Big)}\Big]\Big| d\mu(x) \nonumber \\
&\quad + \frac{n}{2} \int_{M^n\backslash B(\frac{r}{2})} |\ln (4\pi t)\cdot \Big[Ct^2\exp{\Big(-\frac{d^2}{6t}\Big)}\Big]| d\mu(x) \nonumber \\
&\leq O(t^{\frac{3}{2}}) \label{use}
\end{align}
where in the last inequality we used $Rc\geq -Kg$ and the volume comparison theorem.
\begin{align}
|(II)|&= \int_{B(\frac{r}{2})} [\ln H+ \frac{n}{2}\ln (4\pi t)]\cdot H d\mu\nonumber \\
&= \int_{B(\frac{r}{2})} \ln \frac{H}{H_{N_0}}\cdot H d\mu(x)+ \int_{B(\frac{r}{2})} \Big[\ln H_{N_0}+ \frac{n}{2}\ln (4\pi t)\Big]\cdot H d\mu(x) \nonumber \\
&= (III)+ (IV) \nonumber
\end{align}
By Lemma \ref{lem 3.1}, $(III)= O(t^2)$. From Lemma \ref{lem 3.3} in the following,
\[(IV)= -\frac{n}{2}+ \frac{1}{2}R(y)\cdot t + O(t^{\frac{3}{2}})\]
By all the above, we get our conclusion.
}
\qed
\begin{lemma}\label{lem 3.3}
{\begin{equation}\label{3.9}
{\int_{B(\frac{r}{2})} \Big[\ln H_{N_0}+ \frac{n}{2}\ln (4\pi t)\Big]\cdot H d\mu(x)= -\frac{n}{2}+ \frac{1}{2}R(y)\cdot t + O(t^{\frac{3}{2}})
}
\end{equation}
}
\end{lemma}
{\it Proof:}~
{In the following proof, write $(I)\doteqdot \int_{B(\frac{r}{2})} \Big[\ln H_{N_0}+ \frac{n}{2}\ln (4\pi t)\Big]\cdot H d\mu(x)$. From Theorem \ref{thm 2.1},
\begin{equation}\nonumber
{\ln H_{N_0}= -\frac{n}{2}\ln (4\pi t)- \frac{d^2(x, y)}{4t}+ \ln \Big(\sum_{k= 0}^{N_0}\varphi_k t^k\Big)
}
\end{equation}
and
\begin{equation}\nonumber
{\ln \Big(\sum_{k= 0}^{N_0}\varphi_k t^k\Big)= \ln \varphi_0+ \frac{\varphi_1}{\varphi_0}\cdot t+ O(t^2)
}
\end{equation}
Hence
\begin{equation}\nonumber
{(I)= \int_{B(\frac{r}{2})}\Big[-\frac{d^2(x, y)}{4t}+ \ln \varphi_0+ \frac{\varphi_1}{\varphi_0}\cdot t+ O(t^2)\Big]\cdot H d\mu(x)
}
\end{equation}
Now using $(iii)$ of Theorem \ref{thm 2.1},
\begin{align}
(I)&= \int_{B(\frac{r}{2})} \Big[-\frac{d^2}{4t}+ \frac{1}{12}R_{pq}(y)x^px^q+ O(d^3)+ \Big(\frac{R(y)}{6}+ O(d)\Big)t\Big]\cdot H d\mu(x)+ O(t^2) \nonumber \\
&= (II)+ (III)+ (IV)+ (V)+ (VI)+ O(t^2) \nonumber
\end{align}
where
\begin{align}
(II)&= \int_{B(\frac{r}{2})} \Big(-\frac{d^2(x, y)}{4t}\Big)\cdot H d\mu(x), \quad (III)= \frac{1}{12}\int_{B(\frac{r}{2})} \Big(R_{pq}(y)x^px^q)\cdot H d\mu(x), \nonumber \\
(IV)&= C\int_{B(\frac{r}{2})} d^3(x, y)\cdot H(x, y, t) d\mu(x), \quad (V)= \frac{R(y)}{6}t\cdot \int_{B(\frac{r}{2})} H(x, y, t) d\mu(x), \nonumber \\
(VI)&= Ct\cdot \int_{B(\frac{r}{2})} d(x, y)\cdot H(x, y, t) d\mu(x) \nonumber
\end{align}
From (\ref{3.5.1}),
\begin{equation}\nonumber
{\int_{B(\frac{r}{2})} H= \int_{M^n} H- \int_{M^n\backslash B(\frac{r}{2})} H= 1+ O(t^2)
}
\end{equation}
Hence
\begin{equation}\nonumber
{(V)= \frac{1}{6}R(y)\cdot t+ O(t^2)
}
\end{equation}
Using (\ref{3.4}) and the fact that
\begin{equation}\nonumber
{\int_{R^n}O(|x|^{k})(4\pi t)^{-\frac{n}{2}} \exp (-\frac{|x|^2}{4t}) dx= O(t^{\frac{k}{2}})
}
\end{equation}
where $k$ is any nonnegative integer, we can get $(IV)= O(t^{\frac{3}{2}})$ and $(VI)= O(t^{\frac{3}{2}})$. Similarly,
\begin{equation}\nonumber
{(III)= \frac{1}{6}R(y)\cdot t+ O(t^2)
}
\end{equation}
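The Gaussian scaling fact invoked above follows from the substitution $x= 2\sqrt{t}\, u$:
\begin{equation}\nonumber
{\int_{\mathbb{R}^n}|x|^{k}(4\pi t)^{-\frac{n}{2}} \exp (-\frac{|x|^2}{4t}) dx= (2\sqrt{t})^{k}\int_{\mathbb{R}^n}|u|^{k}\pi^{-\frac{n}{2}} e^{-|u|^2} du= O(t^{\frac{k}{2}})
}
\end{equation}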
Finally, from the following Lemma \ref{lem 3.4},
\begin{equation}\nonumber
{(II)= -\frac{n}{2}+ \frac{1}{6}R(y)\cdot t+ O(t^{\frac{3}{2}})
}
\end{equation}
By all the above, the conclusion is proved.
}
\qed
\begin{lemma}\label{lem 3.4}
{\begin{equation}\label{3.10}
{-\frac{1}{4t}\int_{B(\frac{r}{2})} d^2(x, y)\cdot H d\mu(x)= -\frac{n}{2}+ \frac{1}{6}R(y)\cdot t+ O(t^{\frac{3}{2}})
}
\end{equation}
}
\end{lemma}
{\it Proof:}~
{Write $(II)\doteqdot -\frac{1}{4t}\int_{B(\frac{r}{2})} d^2(x, y)\cdot H d\mu(x)$; then
\begin{equation}\label{3.11}
{(II)= -\frac{1}{4t}\int_{B(\frac{r}{2})} d^2(x, y)\cdot (H_{N_0}+ F_{N_0})\cdot \alpha dx
}
\end{equation}
where $dx$ in the integral of (\ref{3.11}) is the volume element of Euclidean space $\mathbb{R}^n$, and
\begin{equation}\nonumber
{\alpha= \sqrt{\det(g)}= 1- \frac{1}{6}R_{pq}(y)x^px^q+ O(d^3(x, y))
}
\end{equation}
Then
\begin{align}
(II)&= -\frac{1}{4t}\int_{B(\frac{r}{2})} d^2(x, y)\cdot (4\pi t)^{-\frac{n}{2}} \exp{\Big(-\frac{d^2(x, y)}{4t}\Big)} (\varphi_0+ \varphi_1 t)\cdot \alpha dx+ O(t^2) \nonumber \\
&= \Big[-\frac{1}{4t}- \frac{1}{24}R(y) \Big]\cdot \int_{B(\frac{r}{2})}d^2\cdot (4\pi t)^{-\frac{n}{2}}\exp (-\frac{d^2}{4t}) dx \nonumber \\
&\quad + \frac{1}{48t}\int_{B(\frac{r}{2})} (4\pi t)^{-\frac{n}{2}} (R_{pq}(y) x^px^q) d^2\cdot \exp (-\frac{d^2}{4t}) dx+ O(t^{\frac{3}{2}}) \nonumber \\
&= \Big[-\frac{1}{4t}- \frac{1}{24}R(y)\Big]\cdot 2nt+ \frac{1}{48t}I_n+ O(t^{\frac{3}{2}}) \nonumber
\end{align}
where
\begin{equation}\nonumber
{I_n= \int_{\mathbb{R}^n} (4\pi t)^{-\frac{n}{2}}\Big(\sum_{k=1}^{n}\lambda_k x_k^2\Big)\cdot \Big(\sum_{i= 1}^{n}x_i^2\Big) \exp \Big(-\frac{1}{4t}\cdot \sum_{j= 1}^n x_j^2\Big) dx
}
\end{equation}
where above we diagonalized $R_{pq}(y)$ and set $\lambda_k= R_{kk}(y)$.
We can get $I_1= 12\lambda_1 t^2$, and the induction formula
\[I_n= I_{n- 1}+ 4(\sum_{i=1}^{n}\lambda_i)t^2+ 4(n+ 1)\lambda_n t^2 \]
Then it is easy to get
\begin{equation}\label{I_n}
{I_n= 4(n+ 2)(\sum_{i=1}^{n}\lambda_i)t^2= 4(n+ 2)R(y)t^2
}
\end{equation}
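As a cross-check on (\ref{I_n}): the weight $(4\pi t)^{-\frac{n}{2}}\exp (-\frac{|x|^2}{4t})$ is the density of a centered Gaussian with covariance $2t$ times the identity, so $\int x_k^2= 2t$, $\int x_k^4= 12t^2$ and $\int x_k^2 x_i^2= 4t^2$ for $i\neq k$ (this also explains the value $2nt$ used above, up to an exponentially small error from restricting the integral to $B(\frac{r}{2})$). Hence
\begin{equation}\nonumber
{I_n= \sum_{k= 1}^{n}\lambda_k\Big(12t^2+ 4(n- 1)t^2\Big)= 4(n+ 2)\Big(\sum_{k= 1}^{n}\lambda_k\Big) t^2= 4(n+ 2)R(y)t^2
}
\end{equation}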
By all the above $(II)= -\frac{n}{2}+ \frac{R(y)}{6}t+ O(t^{\frac{3}{2}})$, the lemma is proved.
}
\qed
\section{The short time asymptotics of $\frac{\partial}{\partial t}\Big[N(H, t) \Big]$}
To study $\frac{\partial}{\partial t}\Big[N(H, t) \Big]$, we first need to interchange differentiation and integration. Because the manifold $M^n$ can be non-compact, we need to be careful about this interchange. The next lemma justifies it in our case.
\begin{lemma}\label{switch}
{\begin{align}
\frac{\partial}{\partial t} \Big[\int_{M^n} H(-f) d\mu(x)\Big]= \int_{M^n} \Big[H(-f)\Big]_{t} d\mu(x)
\end{align}
}
\end{lemma}
{\it Proof:}~
{Define $\varphi_{\rho}(x)= \phi\Big(\frac{d(x, y)}{\rho}\Big)$, where $\phi$ is defined in the Appendix and $\rho> 1$ is a constant. Fix $t> 0$ and define $G(x, t)= [H(-f)](x, y, t)$. For any $\epsilon> 0$, assume $1> l> 0$ (if $l< 0$, a similar argument works). Then
\begin{align}
&\Big|\int_{M^n} \frac{G(x, t+ l)- G(x, t)}{l} d\mu(x)- \int_{M^n} G_{t}\varphi_{\rho} d\mu(x)\Big| \nonumber \\
&\leq \int_{B(\rho)} |G_{t}(x, t+ \xi_{x}l)- G_{t}(x, t)| d\mu(x)+ 2\int_{M^n\backslash B(\rho)} \sup_{s\in [t, t+ l]} |G_{t}(x, s)| d\mu(x) \nonumber \\
&\leq \int_{B(\rho)} \Big|\frac{\partial^2}{\partial^2 t}G(x, t+ \zeta_{x}l) \Big| d\mu(x)\cdot l+ 2\int_{M^n\backslash B(\rho)} \Big(\sup_{s\in [t, t+ l]} |G_{t}(x, s)| \Big)d\mu(x) \nonumber \\
&\leq (I)+ (II)
\end{align}
We firstly estimate $(II)$. From \cite{LY}, for $s\in [t, t+ l]$,
\begin{align}
\frac{H_{t}}{H}(x, y, s)\geq \frac{1}{2}\Big[\frac{|\nabla H|^2}{H^2}- \frac{2n}{s}- CK\Big]\geq -\frac{C}{s} \label{4.0.2}
\end{align}
where $C= C(K, n)$. From Corollary \ref{cor 2.5},
\begin{align}
\frac{H_{t}}{H}(x, s)&\leq \frac{2}{s}\Big\{n+ (4+ Ks)\ln \Big[\frac{C(K, t+ 1)}{H(x, y, s) V^{\frac{1}{2}}(B_{x}(\sqrt{\frac{s}{2}})) V^{\frac{1}{2}}(B_{y}(\sqrt{\frac{s}{2}}))}\Big]\Big\} \nonumber \\
&\leq \frac{C}{s}\Big(1+ |\ln H|+ \Big|\ln \Big[V\Big(B_{x}\Big(\sqrt{\frac{s}{2}}\Big)\Big)\cdot V\Big(B_{y}\Big(\sqrt{\frac{s}{2}}\Big)\Big)\Big]\Big|\Big) \label{4.0.3}
\end{align}
When $x\in M^n\backslash B(\rho)$, using volume comparison theorem,
\begin{align}
\Big|\ln \Big[V\Big(B_{x}\Big(\sqrt{\frac{s}{2}}\Big)\Big)\cdot V\Big(B_{y}\Big(\sqrt{\frac{s}{2}}\Big)\Big)\Big]\Big| &\leq 2|\ln V\Big(B_{y}\Big(\sqrt{\frac{s}{2}}\Big)\Big) | \nonumber \\
& \quad + \Big|\ln V_{-K}\Big(\sqrt{\frac{s}{2}}\Big) \Big| + |\ln V_{-K}(s+ d(x, y))| \nonumber \\
&\leq C(|\ln s|+ s+ d) \label{4.0.4}
\end{align}
where $C$ is independent of $\rho$. From (\ref{4.0.2}), (\ref{4.0.3}) and (\ref{4.0.4}),
\begin{align}
\Big|\frac{H_t}{H}\Big|(x, s)\leq \frac{C}{s} (|\ln H|+ |\ln s|+ s+ d) \label{4.0.5}
\end{align}
when $x\in M^n\backslash B(\rho)$.
From (\ref{4.0.5}), on $M^n\backslash B(\rho)$,
\begin{align}
|(-f)H_{t}|(x, s) &\leq \Big[|\ln H|+ \frac{n}{2}|\ln (4\pi s)|\Big] \cdot |H_{t}|(x, s) \nonumber \\
& \leq \Big[|\ln H|+ \frac{n}{2}|\ln (4\pi s)|\Big] \cdot C|H|\cdot s^{-1}(|\ln H|+ |\ln s|+ s+ d) \nonumber \\
&\leq \frac{C}{s} \cdot H\Big[|\ln H|^2+ |\ln s|^2+ s^2+ d^2\Big] \label{4.0.5.1}
\end{align}
From (\ref{4.0.5}) and (\ref{4.0.5.1}), if $s\in [t, t+ l]$ and $x\in M^n\backslash B(\rho)$,
\begin{align}
|G_{t}(x, s)|&\leq \Big[|H_{t}|+ \frac{n}{2s}|H|+ |(-f)H_{t}| \Big](x, s) \nonumber \\
&\leq \frac{C}{s}H\cdot \Big(|\ln H|+ |\ln s|+ s+ d\Big)\nonumber \\
&\quad + \frac{n}{2s}|H|+ \frac{C}{s}H\cdot \Big(|\ln H|^2+ |\ln s|^2+ s^2+ d^2\Big) \nonumber \\
&\leq \frac{C}{s}H\cdot \Big(|\ln H|^2+ |\ln s|^2+ s^2+ d^2\Big) \label{s2}
\end{align}
where $C$ is independent of $\rho$. We can choose $l$ small enough such that $(t+ l)\leq 2t$; then using (\ref{3.5.1}) and (\ref{s2}), for $x\in M^n\backslash B(\rho)$,
\begin{align}
|G_{t}(x, s)|&\leq Cs\exp{\Big(-\frac{d^2}{6s}\Big)}\cdot \Big[\Big|C+ 2\ln s- \frac{d^2}{6s}\Big|^2+ |\ln s|^2+ s^2+ d^2\Big] \nonumber \\
&\leq C(t+ l)\exp{\Big(-\frac{d^2}{6(t+ l)}\Big)}\cdot \Big[t^2+ d^2+ |\ln t|^2+ \Big(\frac{d^2}{t}\Big)^2\Big] \nonumber \\
&\leq Ct\exp{\Big(-\frac{d^2}{12t}\Big)}\cdot \Big[t^2+ |\ln t|^2+ \Big(\frac{d^2}{t}\Big)^2\Big]
\end{align}
Hence for any $\epsilon> 0$, we can find $\rho_0> 1$, such that if $\rho\geq \rho_0$,
\begin{align}
\int_{M^n\backslash B(\rho)} \Big(\sup_{s\in [t, t+ l]} |G_{t}(x, s)| \Big)d\mu(x)< \frac{\epsilon}{4} \label{s3}
\end{align}
On the other hand, because $0< l< 1$,
\begin{align}
\int_{B(\rho)} |G_{tt}(x, t+ \zeta_x l)|d\mu(x) &\leq \int_{B(\rho)} \sup_{s\in [t, t+ 1]}|G_{tt}(x, s)|d\mu(x) \nonumber \\
&\leq C(\rho) \label{s4}
\end{align}
Choose $l\leq \frac{\epsilon}{4C(\rho)}$, from (\ref{s3}) and (\ref{s4}), if $\rho> \rho_0$,
\begin{align}
\Big|\int_{M^n} \frac{G(x, t+ l)- G(x, t)}{l} d\mu(x)- \int_{M^n} G_{t}\varphi_{\rho} d\mu(x)\Big|< \epsilon \label{s5}
\end{align}
It is easy to see from Lemma \ref{lem 4.0} and its proof that $\lim_{\rho\rightarrow \infty}\int_{M^n} G_{t}\varphi_{\rho}$ exists and
\begin{align}
\lim_{\rho\rightarrow \infty}\int_{M^n} G_{t}\varphi_{\rho}= \int_{M^n} G_{t}\label{s6}
\end{align}
From (\ref{s5}) and (\ref{s6}), we get our conclusion.
}
\qed
By Cheng-Li-Yau's result (see \cite{CLY}), $\lim_{t\rightarrow 0}t\ln H= -\frac{d^2}{4}$ and the limit is uniform for any $x$ in $B(r)$. Hence we can assume
\begin{equation}\nonumber
{t\ln H(x, y, t)= -\frac{d^2(x, y)}{4}+ \epsilon (t, x, y)
}
\end{equation}
Sometimes $\epsilon(t, x, y)$ will be simplified as $\epsilon$, then
\begin{equation}\label{4.02}
{t(-f)= \frac{n}{2}t\ln (4\pi t)- \frac{d^2}{4}+ \epsilon
}
\end{equation}
where $\lim_{t\rightarrow 0}\epsilon(t, x, y)= 0$, and the limit is uniform for $x \in B(r)$. Without loss of generality, we can assume that $\varphi_{0}(x, y)\geq \frac{1}{2}$ when $x\in B(\frac{r}{2})$.
\begin{lemma}\label{lem 4.1}
{\begin{equation}\label{4.1}
{\int_{B(\frac{r}{2})} E(-f) d\mu(x)= -\frac{n}{2}+ \frac{1}{3}R(y)\cdot t+ o(t)
}
\end{equation}
and
\begin{equation}\label{4.2}
{\int_{B(\frac{r}{2})} E(-f) O(d(x, y)) d\mu(x)= o(1)
}
\end{equation}
where $\lim_{t\rightarrow 0}\frac{o(t)}{t}= 0$.
}
\end{lemma}
{\it Proof:}~
{\begin{align}
&\int_{B} E(-f)d\mu(x)= \int_{B} \frac{H_{N_0}}{\sum_{k= 0}^{N_{0}}\varphi_k t^k}\cdot (-f) d \mu(x) \nonumber \\
&\quad = \int_B \Big(\frac{1}{\varphi_0}- \frac{\varphi_1}{\varphi_0^2}t\Big)H(-f) d\mu(x) + o(t) \nonumber \\
&\quad = \int_{B} \Big(1+ \frac{1}{12}R_{pq}(y)x^px^q- \frac{R(y)}{6}t\Big)H(-f)+ o(t) \nonumber \\
&\quad = -\frac{n}{2}+ \Big(\frac{1}{2}+ \frac{n}{12}\Big)R(y)t+ \frac{1}{12}\int_{B} R_{pq}(y)x^px^q\cdot H(-f) d\mu(x)+ o(t) \label{4.1.0}
\end{align}
In the last equality above, we used (\ref{3.6}).
We estimate the third term on the right side of (\ref{4.1.0}).
\begin{align}
(I)&\doteqdot \frac{1}{12}\int_{B} R_{pq}(y)x^px^q\cdot H(-f) d\mu(x) \nonumber \\
& = \frac{1}{12} \int_{B} R_{pq}(y)x^px^q\cdot H\Big[\ln H_{N_0}+ \frac{n}{2}\ln (4\pi t) \Big] d\mu(x) \nonumber \\
& = \frac{1}{12} \int_{B} R_{pq}(y) x^px^q\cdot H_{N_0} \Big[-\frac{d^2}{4t}+ \ln \varphi_0 \Big]\cdot \alpha dx+ o(t) \nonumber \\
& = -\frac{1}{48t} \int_{B} E\cdot d^2\cdot R_{pq}(y) x^px^q dx+ o(t) \nonumber \\
& = -\frac{n+ 2}{12}R(y)t+ o(t) \label{4.1.1}
\end{align}
In the last equation above, we used (\ref{I_n}). From (\ref{4.1.0}) and (\ref{4.1.1}), we get (\ref{4.1}). To prove (\ref{4.2}), we will follow a similar strategy.
\begin{align}
&\int_{B} E(-f) O(d) d\mu(x)= \int_B \Big(\frac{1}{\varphi_0}- \frac{\varphi_1}{\varphi_0^2}t\Big)H(-f) O(d) d\mu(x) + o(1) \nonumber \\
&\quad = \int_{B} H_{N_0}\Big[\ln H_{N_0}+ \frac{n}{2}\ln (4\pi t)\Big] O(d) d\mu(x)+ o(1) \nonumber \\
&\quad = \int_{B} E\Big(-\frac{d^2}{4t}+ \ln \varphi_0 \Big) O(d) d\mu(x)+ o(1)= o(1) \nonumber
\end{align}
(\ref{4.2}) is proved.
}
\qed
\begin{lemma}\label{lem 4.0}
{\begin{align}
\int_{M^n\backslash B} |(-f)H_{t}| d\mu(x)= O(t^{\frac{1}{2}}) \nonumber
\end{align}
where $t\ll 1$ is small enough.
}
\end{lemma}
{\it Proof:}~
{Similarly as (\ref{4.0.5.1}), on $M^n\backslash B$,
\begin{align}
|(-f)H_{t}|\leq \frac{C}{t} \cdot H\Big[|\ln H|^2+ |\ln t|^2+ t^2+ d^2\Big] \nonumber
\end{align}
Hence
\begin{align}
\int_{M^n\backslash B} |(-f)H_{t}|&\leq \frac{C}{t} \int_{M^n\backslash B} H\cdot |\ln H|^2+ \frac{C}{t} \int_{M^n\backslash B} H (|\ln t|^2+ t^2+ d^2) \nonumber \\
& =(I)+ (II) \nonumber
\end{align}
Using (\ref{3.5.1}), the volume comparison theorem, and the monotonicity of $h(x)= x(\ln x)^2$ on $(0, e^{-2}]$, similarly to the proof of (\ref{use}),
\begin{align}
(I)\leq O(t^{\frac{1}{2}}) \nonumber
\end{align}
Using (\ref{2.11}), when $x\in M^n\backslash B$,
\begin{equation}\label{4.2.1}
{H\leq |\eta H_{N_0}|+ |F_{N_0}|\leq C\Big[t^{-\frac{n}{2}}\exp \Big(-\frac{d^2}{4t}\Big)+ t^{4}\cdot \exp \Big(-\frac{d^2}{5t}\Big) \Big]= O(t^2)\tilde{E}
}
\end{equation}
From (\ref{4.2.1}), it is easy to get
\begin{align}
(II)\leq O(t) \nonumber
\end{align}
By all the above, we get our conclusion.
}
\qed
\bigskip
{\it \textbf{Proof of (\ref{3.6.1})}:}~
{\begin{align}
\frac{\partial}{\partial t}\Big[\int_{M^n} H(-f) d\mu(x)\Big]&= \int_{M^n} \Big[H_t+ \frac{n}{2t}H+ (-f)H_t\Big]d\mu(x) \nonumber \\
&= \frac{n}{2t}+ \int_{M^n\backslash B(\frac{r}{2})} (-f)H_t d\mu(x)+ \int_{B(\frac{r}{2})} (-f)H_t d\mu(x) \nonumber \\
&= \frac{n}{2t}+ (I)+ (II) \nonumber
\end{align}
From Lemma \ref{lem 4.0} in the above, we have
\begin{align}
(I)= O(t^{\frac{1}{2}}) \nonumber
\end{align}
From Lemma \ref{lem 4.3} in the following, we get
\begin{align}
(II)= -\frac{n}{2t}+ \frac{1}{2}R(y)+ o(1) \nonumber
\end{align}
From all the above, we get (\ref{3.6.1}).
}
\qed
\begin{lemma}\label{lem 4.3}
{\begin{equation}\nonumber
{\int_{B} (-f)H_t d\mu(x)= -\frac{n}{2t}+ \frac{1}{2}R(y)+ o(1)
}
\end{equation}
}
\end{lemma}
{\it Proof:}~
{From (\ref{2.11.0}) and (\ref{4.02}),
\begin{equation}\nonumber
{\int_{B} (-f)H_t d\mu(x)= \int_{B} (-f)\cdot (H_{N_0})_t+ O(t)
}
\end{equation}
\begin{align}
&\int_{B} (-f)(H_{N_0})_t d\mu(x)= \int_{B} \Big(\frac{d^2}{4t^2}- \frac{n}{2t}\Big) H_{N_0}\cdot (-f)d\mu(x)+ \int_{B} E\varphi_1 (-f) d\mu(x)+ o(1) \nonumber \\
&\quad = \frac{1}{4t^2}\int_{B} H_{N_0}(-f)d^2 d\mu(x)- \frac{n}{2t}\int_{B}H_{N_0}(-f)d\mu(x)+ \int_{B} E\varphi_1(-f) d\mu(x) + o(1)\nonumber \\
&\quad = (I)+ (II)+ (III)+ o(1) \nonumber
\end{align}
Using Lemma \ref{lem 4.1},
\begin{align}
(III)&= \int_{B}E\varphi_1 (-f) d\mu(x)= \frac{1}{6}R(y)\int_{B} E(-f) d\mu(x)+ \int_{B} E(-f)\cdot O(d)\nonumber \\
&= -\frac{n}{12}R(y)+ o(1) \nonumber
\end{align}
From (\ref{2.11}) and (\ref{3.6}),
\begin{align}
(II)&= -\frac{n}{2t}\int_{B} H_{N_0}(-f) d\mu(x) = -\frac{n}{2t}\int_{B} H(-f)d\mu(x) -\frac{n}{2t}\int_{B} O(t^{N_0+ 1})\tilde{E}(-f) d\mu(x) \nonumber \\
&= \frac{n^2}{4t}- \frac{n}{4}R(y)+ o(1) \nonumber
\end{align}
Similarly, by the following Lemma \ref{lem 4.4},
\begin{align}
(I)&= \frac{1}{4t^2} \int_{B} (H+ O(t^{N_0+ 1})\tilde{E})(-f)\cdot d^2 d\mu(x)= \frac{1}{4t^2} \int_{B} H(-f)\cdot d^2 d\mu(x)+ o(1)\nonumber \\
&= -\frac{n(n+ 2)}{4t}+ \Big(\frac{n}{3}+ \frac{1}{2}\Big)R(y)+ o(1) \nonumber
\end{align}
From all the above,
\begin{equation}\nonumber
{\int_{B} (-f)H_t d\mu(x)= -\frac{n}{2t}+ \frac{1}{2}R(y)+ o(1)
}
\end{equation}
}
\qed
\begin{lemma}\label{lem 4.4}
{\begin{equation}\nonumber
{\frac{1}{4t^2} \int_{B} H(-f)\cdot d^2 d\mu(x)= -\frac{n(n+ 2)}{4t}+ \Big(\frac{n}{3}+ \frac{1}{2}\Big) R(y)+ o(1)
}
\end{equation}
}
\end{lemma}
{\it Proof:}~
{We will use a similar strategy as in the proof of (\ref{3.6}).
\begin{align}
\frac{1}{4t^2} \int_{B} H(-f)\cdot d^2 d\mu(x)&= \frac{1}{4t^2} \int_{B} \Big[\ln H_{N_0}+ \frac{n}{2}\ln (4\pi t)\Big]Hd^2 d\mu(x) \nonumber \\
&\quad + \frac{1}{4t^2} \int_{B} \Big[\ln \frac{H}{H_{N_0}}\Big]Hd^2 d\mu(x) \nonumber
\end{align}
From (\ref{3.5}),
\begin{equation}\nonumber
{\Big[\ln \frac{H}{H_{N_0}}\Big]H= O(t^4)
}
\end{equation}
Hence,
\begin{align}
&\frac{1}{4t^2} \int_{B} H(-f)\cdot d^2 d\mu(x)= \frac{1}{4t^2} \int_{B} \Big[-\frac{d^2}{4t}+ \ln \varphi_0+ \frac{\varphi_1}{\varphi_0}t+ O(t^2)\Big]Hd^2\cdot \alpha dx+ o(1) \nonumber\\
&= \frac{1}{4t^2}\int_{B} \Big(-\frac{d^2}{4t}+ \frac{1}{12}R_{pq}(y)x^px^q+ \frac{1}{6}R(y)t- \frac{R(y)}{24}d^2+ \frac{1}{48t}R_{pq}(y)x^px^q\cdot d^2\Big) \nonumber \\
&\quad \quad \quad\cdot E d^2 dx+ o(1) \nonumber \\
&\quad = -\frac{n(n+ 2)}{4t}+ \frac{-n^2+ 2n+ 4}{24}R(y)+ \frac{1}{192t^3}\int_{\mathbb{R}^n} ER_{pq}(y)x^px^q \cdot d^4 dx+ o(1) \nonumber
\end{align}
Define
\begin{equation}\nonumber
{Q_{n}= \int_{\mathbb{R}^n} ER_{pq}(y)x^px^q \cdot d^4 dx= \int_{\mathbb{R}^n} E\cdot \Big(\sum_{i= 1}^n \lambda_i x_i^2\Big)\cdot (\sum_{j= 1}^n x_j^2) dx
}
\end{equation}
where we diagonalize $R_{pq}(y)$ and let $\lambda_i= R_{ii}(y)$. We can get $Q_1= 120\lambda_1 t^3$ and the induction formula:
\begin{equation} \nonumber
{Q_n= Q_{n- 1}+ 8(2n+ 5)\Big(\sum_{i= 1}^n \lambda_i \Big)t^3+ 8(n^2+ 4n+ 3)\lambda_n \cdot t^3
}
\end{equation}
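As a brief check of the closed form claimed below, write $S_j= \sum_{i= 1}^j \lambda_i$, so that $S_n= R(y)$. For $n= 1$ the formula gives $8(1+ 6+ 8)\lambda_1 t^3= 120\lambda_1 t^3= Q_1$, and if $Q_{n- 1}= 8\big((n- 1)^2+ 6(n- 1)+ 8\big)S_{n- 1}t^3= 8(n^2+ 4n+ 3)S_{n- 1}t^3$, then the induction formula gives
\begin{equation}\nonumber
{Q_n= 8(n^2+ 4n+ 3)S_{n- 1}t^3+ 8(2n+ 5)S_n t^3+ 8(n^2+ 4n+ 3)\lambda_n t^3= 8(n^2+ 6n+ 8)S_n t^3,
}
\end{equation}
since $(n^2+ 4n+ 3)+ (2n+ 5)= n^2+ 6n+ 8$.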
Then it is easy to get $Q_n= 8(n^2+ 6n+ 8)R(y)\cdot t^3$, hence
\begin{equation}\nonumber
{\frac{1}{4t^2} \int_{B} H(-f)\cdot d^2 d\mu(x)= -\frac{n(n+ 2)}{4t}+ \Big(\frac{n}{3}+ \frac{1}{2}\Big) R(y)+ o(1)
}
\end{equation}
}
\qed
| {
"timestamp": "2013-04-09T02:03:01",
"yymm": "1209",
"arxiv_id": "1209.6591",
"language": "en",
"url": "https://arxiv.org/abs/1209.6591",
"abstract": "Let $(M^n, g)$ be a complete Riemannian manifold with $Rc\\geq -Kg$, $H(x, y, t)$ is the heat kernel on $M^n$, and $H= (4\\pi t)^{-\\frac{n}{2}}e^{-f}$. Nash entropy is defined as $N(H, t)= \\int_{M^n} (fH) d\\mu(x)- \\frac{n}{2}$. We studied the asymptotic behavior of $N(H, t)$ and $\\frac{\\partial}{\\partial t}\\Big[N(H, t)\\Big]$ as $t\\rightarrow 0^{+}$, and got the asymptotic formulas at $t= 0$. In the Appendix, we got Hamilton-type upper bound for Laplacian of positive solution of the heat equation on such manifolds, which has its own independent interest.",
"subjects": "Differential Geometry (math.DG); Analysis of PDEs (math.AP)",
"title": "The short time asymptotics of Nash entropy",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9678992914310605,
"lm_q2_score": 0.7310585669110202,
"lm_q1q2_score": 0.707591068907783
} |
https://arxiv.org/abs/1902.02260 | GMRES with Singular Vector Approximations | This paper has proposed the GMRES that augments Krylov subspaces with a set of approximate right singular vectors. The proposed method suppresses the error norms of a linear system of equations. Numerical experiments comparing the proposed method with the Standard GMRES and GMRES with eigenvectors methods[3] have been reported for benchmark matrices. | \section{Introduction}
The
GMRES method is well-known for solving $Ax=b,$ a large sparse system of linear equations, especially when an approximate solution is sufficient \cite{saad}.
Nonetheless, the error norms of the approximate solutions in GMRES need not decrease; see \cite[Example-1]{Gmrerr}.
Prompted by this, Weiss proposed an algorithm that minimizes error norms, the Generalized Minimal Error method (GMERR) \cite{errm}. Ehrig and Deuflhard studied convergence properties of GMERR and developed an algorithm that has an implementation similar to GMRES \cite{Gmrerr}.
Later, a stable variant of GMERR was proposed using Householder transformations; it was observed that the full version of GMERR may be effective in reducing the error, but its performance is not competitive with GMRES \cite{stab}. Although CGNR minimizes both error and residual norms, its convergence depends on the square of the condition number of $A$ \cite{CGNR}.
This paper develops a tool that can be combined with standard restarted GMRES to reduce both error and residual norms. The tool augments the Krylov subspaces in restarted GMRES with a set of approximate right singular vectors.
The outline of this paper is as follows: In Section-2, we present the GMRES method. Section-3 develops the said tool and analyzes the convergence properties of the GMRES with approximate singular vectors method. Section-4 gives implementation details of the algorithm proposed in Section-3. Section-5 reports the results of numerical experiments on some benchmark matrices. Section-6 concludes the paper.
\section{GMRES}\label{gmres}
Consider the following system of linear equations:
$$Ax =b, A \in \mathcal{C}^{~n \times n}, b~\in~\mathcal{C}^{~n}, x \in \mathcal{C}^{~n}.$$
Let $x0 \in \mathcal{C}^n$ be an arbitrarily chosen initial
approximation to the solution of the above problem. Without loss of
generality, it is assumed throughout the paper that $x0=0$ so that
$r_0 =b.$ Then, at the $i^{th}$ iteration of GMRES, the approximate
solution belongs to the Krylov subspace:
$$\mathcal{K}_i(A,b) ~=~span\{b,Ab,\cdots, A^{i-1} b\},$$
and is of minimal residual norm:
\begin{equation}\label{eq1}\relax
\|r_i\| = \min_{x \in \mathcal{K}_i(A,b)} \|b-Ax\|.
\end{equation}
The GMRES method solves this minimization problem by generating the matrix $V_i=\begin{bmatrix} v_1~ v_2~ \cdots~ v_i \end{bmatrix}$ in successive iterations, using the following recurrence relation:
\begin{equation}\label{eq1a}\relax
AV_i=V_iH_i+h_{i+1,i}v_{i+1}e_i^\ast, ~~\mbox{where}~~v_1 =
\frac{b}{\|b\|},
\end{equation}
and $H_i$ is an unreduced upper Hessenberg matrix of order
$i.$ Here, a vector $v_{i+1} \perp v_j$ for $j = 1,2,\cdots i,$ and
$\|v_{i+1}\|=1.$ Thus, vectors $v_j$ for $j=1,2,\cdots i$ form an orthonormal basis for the Krylov subspace $\mathcal{K}_i(A,b).$
Next, by using the equation (\ref{eq1a}), GMRES recasts the minimization problem in (\ref{eq1}) into the following:
$$z_i = \arg\min_{x \in \mathcal{C}^i} \|b-AV_ix\| = \arg\min_{x \in \mathcal{C}^i} \|\beta V_{i+1}e_1-V_{i+1}\tilde{H_i} x\|,$$
where $\beta = \|b\|,$ and $\tilde{H_i}$ is an upper Hessenberg matrix
obtained by appending the row $[0~0~\cdots~h_{i+1,i}]$ at the bottom of the matrix $H_i.$ As columns of the matrix $V_{i+1}$ are orthonormal, the above least squares problem is equivalent to the following problem:
$$z_i =\arg\min_{x \in \mathcal{C}^i} \|\beta e_1-\tilde{H_i} x\|.$$
GMRES solves this problem for the vector $z_i$ by using the $QR$ decomposition of the matrix $\tilde{H_i}.$ Note that a vector $V_iz_i$ minimizes the associated residual norm due to the equation~(\ref{eq1}). Thus, the sequence of residual norms $\{\|b-AV_iz_i\|_2\}$ in GMRES is monotonically decreasing. However, the corresponding sequence of error norms may not decrease monotonically. To tackle this, in the following sections, we augment the Krylov subspace in the GMRES method with a vector space containing approximate singular vectors.
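As a small numerical illustration of the equivalences above (an illustrative check, not part of the original exposition), the following NumPy sketch builds the Arnoldi relation (\ref{eq1a}) for a random matrix and verifies that $\|b-AV_iz\|= \|\beta e_1-\tilde{H_i}z\|$ for an arbitrary coefficient vector $z$.
\begin{verbatim}
import numpy as np

# Illustrative check of the Arnoldi relation and the GMRES least-squares
# reformulation; the matrix, sizes, and vector z are arbitrary.
rng = np.random.default_rng(0)
n, i = 50, 10
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
beta = np.linalg.norm(b)

V = np.zeros((n, i + 1))
Ht = np.zeros((i + 1, i))               # the (i+1) x i matrix \tilde{H_i}
V[:, 0] = b / beta
for j in range(i):
    w = A @ V[:, j]
    for p in range(j + 1):              # modified Gram-Schmidt
        Ht[p, j] = V[:, p] @ w
        w -= Ht[p, j] * V[:, p]
    Ht[j + 1, j] = np.linalg.norm(w)
    V[:, j + 1] = w / Ht[j + 1, j]

print(np.allclose(A @ V[:, :i], V @ Ht))        # A V_i = V_{i+1} \tilde{H_i}
z = rng.standard_normal(i)
e1 = np.zeros(i + 1); e1[0] = beta
print(np.isclose(np.linalg.norm(b - A @ V[:, :i] @ z),
                 np.linalg.norm(e1 - Ht @ z)))  # equal residual norms
\end{verbatim}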
\section{Motivation}
In this section, we address the augmentation of a Krylov subspace in GMRES with singular vectors. The following lemma explains the motivation behind this.
\begin{lem}
Let $z$ be a right singular vector of a matrix $A$ corresponding to the
singular value $\sigma,$ and $x0$ be an approximate solution of a
linear system of equations $Ax=b.$ Then, a solution of the following minimization problem
\begin{equation}\label{mp1}\relax
\alpha = \arg \min_{k \in \cal{C}}\|b-Ax0-kAz\|,
\end{equation}
is the solution of the minimization problem
\begin{equation}\label{mp2}\relax
\alpha = \arg \min_{k \in \cal{C}}\|x-x0-kz\|.
\end{equation}
\end{lem}
\begin{proof}
Let $\alpha$ be a
solution of the minimization problem (\ref{mp1}). By using $Ax=b,$
this implies $\alpha = \frac{\langle Ax-Ax0,Az \rangle}{\|Az\|^2}.$ Further, by using $\|z\|_2=1$ and $A^\ast A z =\sigma^2 z$ from the hypothesis, we have $\alpha =
\frac{\langle x-x0,z \rangle}{\|z\|^2}.$
Equivalently, this gives
$$\langle x-x0-\alpha z,z \rangle = \langle x-x0,z \rangle- \alpha
\|z\|^ 2=0.$$ Thus, a vector
$x-x0-\alpha z$ is orthogonal to the vector space spanned by the vector $z.$ Therefore, $\alpha$ is a
solution of the minimization problem (\ref{mp2}).
\end{proof}
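The following small NumPy check (an illustrative example of the lemma, with data chosen by us) uses a diagonal matrix, whose right singular vectors are the coordinate vectors, and confirms that the residual-minimizing step length along $Az$ coincides with the error-minimizing step length along $z$.
\begin{verbatim}
import numpy as np

# Diagonal example: the right singular vectors of A are the coordinate vectors.
A = np.diag([3.0, 1.0])
x = np.array([1.0, 2.0])            # exact solution, chosen for the check
b = A @ x
x0 = np.array([0.5, 0.5])           # an approximate solution
z = np.array([0.0, 1.0])            # unit right singular vector, sigma = 1

Az = A @ z
alpha_res = (b - A @ x0) @ Az / (Az @ Az)   # minimizes ||b - A x0 - k A z||
alpha_err = (x - x0) @ z / (z @ z)          # minimizes ||x - x0 - k z||
print(alpha_res, alpha_err)                 # both print 1.5
\end{verbatim}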
The above lemma shows an advantage of singular vectors: a step along an exact right singular vector reduces both the residual and the error norms.
Two questions arise when augmenting the Krylov subspace in GMRES with a vector space spanned by singular vectors.
The first is that computing a singular vector of a sparse matrix generally requires more computation than solving a sparse linear system of equations. The second is: singular vectors corresponding to which singular values are better suited to augment Krylov subspaces in GMRES?
Using an approximate singular vector in the augmentation process instead of an exact singular vector resolves the first question. The following theorem discusses the effect of this usage.
\begin{thm}\label{mg}\relax
Let a matrix $V_m$ have orthonormal columns and $x0$ be an
approximate solution of a linear system of equations $Ax =
b.$ Assume that $x-x0$ is in the range space of $V_m$ and $z$ is a right
singular vector of the matrix $AV_m$ corresponding to its singular value
$\sigma.$ Then a solution of the following minimization
problem
\begin{equation}\label{mp3}\relax
\alpha = \arg \min_{k \in \cal{C}}\|b-Ax0-kAV_mz\|
\end{equation}
is the solution of the minimization problem:
\begin{equation}\label{mp4}\relax
\alpha = \arg \min_{k \in \cal{C}}\|x-x0-kV_mz\|.
\end{equation}
\end{thm}
\begin{proof}
Let $x-x0=V_my.$ Note that if $y$ is a
zero vector then $x0$ is an exact solution of the linear system $Ax=b.$ Assume that $y$ is a non-zero vector. Now, $\alpha,$ a solution of the minimization problem (\ref{mp3}) is
\begin{equation}\label{nt}\relax
\alpha
= \frac{\langle Ax-Ax0,AV_mz\rangle}{\|AV_mz\|^2}=\langle y,z\rangle=\langle x-x0,V_mz
\rangle.
\end{equation}
The above equation has used the facts that $V_m^\ast A^\ast A V_m z= \sigma ^2 z,$ and $V_m^\ast V_m$ is an Identity matrix. Therefore, by using the
same lines of proof as in the previous lemma, $\alpha$ is a solution of the error minimization problem (\ref{mp4}).
\end{proof}
Theorem-\ref{mg} says that if $x-x0$ is in the Krylov subspace spanned by the columns of $V_m$, then a singular vector of $AV_m$ serves the purpose of a singular vector of $A$ in the augmentation process. However, in general, the error vector $x-x0$ may not lie entirely in this subspace. In this case, the following theorem establishes a relationship between the solutions of the minimization problems (\ref{mp3}) and (\ref{mp4}).
\begin{thm}\label{lem2}\relax
Let $x0$ be an approximate solution of the linear system of equations $Ax=b$, and let $\sigma, z$ be as in the previous theorem. Assume that $\alpha_1,$ $\alpha_2$ are the solutions of the minimization problems (\ref{mp3}) and (\ref{mp4}), respectively. Then,
\begin{equation}\label{a12}\relax
\|x-x0-\alpha_1 V_mz\|^2-\|x-x0-\alpha_2V_mz\|^2= \frac{|\langle x-x0, (A^\ast
A-\sigma^2 I)V_mz \rangle|^2}{\sigma^4}.
\end{equation}
\end{thm}
\begin{proof}
Note that $x-x0=V_mV_m^\ast(x-x0)+(I-V_mV_m^\ast)(x-x0).$ By using $Ax=b$ and this, the solution $\alpha_1$ of the minimization problem (\ref{mp3}) can be
written as
$$\alpha_1 = \frac{\langle AV_mV_m^\ast (x-x0)+A(I-V_mV_m^\ast )(x-x0), AV_mz
\rangle}{\|AV_mz\|^2}. $$
On substituting the equation (\ref{nt}) this yields
$$\alpha_1= \langle x-x0,V_mz \rangle +\frac{\langle
x-x0, (I-V_mV_m^\ast)A^\ast AV_mz \rangle}{\|AV_mz\|^2}.$$
Further, by using
$V_m^\ast A^\ast AV_mz = \sigma^2 z,$ this gives
\begin{equation}\label{a1}\relax
\alpha_1 = \langle x-x0,V_mz \rangle +\frac{\langle x-x0, (A^\ast A- \sigma^2 I) V_m z \rangle}{\|AV_mz\|^2}=\big\langle x-x0,\frac{A^\ast AV_mz}{\sigma^2} \big\rangle.
\end{equation}
The above equation used the fact that $\|AV_mz\|^2 = \sigma^2.$
As $\|V_mz\|_2=1$ and $\alpha_2$ is the solution of an error minimization problem (\ref{mp4}), we have $\alpha_2= \langle x-x0,V_mz \rangle,$ and
$$\|x-x0-\alpha_1 V_mz\|^2-\|x-x0-\alpha_2V_mz\|^2 =
|\alpha_1-\alpha_2|^2\|V_mz\|^2 = |\alpha_1-\alpha_2|^2.$$ Now, observe from the equation (\ref{a1}) that
\begin{equation}\label{a1m2}\relax
\alpha_1-\alpha_2 = \big\langle x-x0,\frac{(A^\ast A-\sigma^2 I)
V_mz}{\sigma^2} \big \rangle,
\end{equation}
and substitute it in the previous equation. This gives equation (\ref{a12}), which proves the theorem.
\end{proof}
From $V_m^\ast A^\ast AV_mz=\sigma^2z$, note that $(A^\ast A-\sigma^2 I)
V_mz=(I-V_mV_m^\ast)(A^\ast A-\sigma^2 I)
V_mz.$ On substituting this in the right-hand side of equation (\ref{a12}), it is easy to see that
the difference between $\|x-x0-\alpha_1 V_mz\|^2$ and $\|x-x0-\alpha_2 V_mz\|^2$ is due only to the component of $x-x0$ orthogonal to the column space of $V_m$; in other words,
the components from the column space of $V_m$ are optimally balanced in the error vector $x-x0-\alpha_1V_mz.$ Thus, augmenting a search subspace in GMRES with a singular vector approximation will accelerate the convergence of approximate solutions.
This fact motivates us to augment the Krylov subspace at each run in the restarting GMRES with the singular vector approximations as explained in the following paragraph.
\vspace{0.2cm}
Let $e_m^{(i)}$ denote the error vector, and let the columns of $V_m^{(i)}$ span the search subspace at the $i^{th}$ run of restarted GMRES.
Suppose $z$ is a right singular vector of $AV_m^{(i)}$ and it augments the column space of $V_m^{(i+1)}$ at the $(i+1)^{th}$ run.
Then, Theorem-\ref{lem2} and the previous paragraph say that
the components of the column space of $V_m^{(i)}$ are optimally balanced in the error vector that corresponds to the vector minimizing the residual norm
over Range$(V_m^{(i+1)},V_m^{(i)} z).$
Next, the following theorem is required to answer the second question raised before: approximate singular vectors corresponding to which singular values are better suited to augment the Krylov subspace in GMRES.
The proof follows the lines of Sections 3 and 4 in
\cite{zitko}.
\begin{thm}\label{conv}\relax
Let the range of $Y_k$ be the subspace augmenting the Krylov subspace
$\mathcal{K}_m(A,r_0),$ where $Y_k \in C^{n \times k}$ and $m+k < n.$ Assume that $z \in$ Range$(Y_k),$ and that $q(A)$ is a polynomial in $A$ of degree $m$ such that $q(0)=0.$ Let $r_s:=r_0-y$ be an optimal residual over the space Range$(AV_m, AY_k),$ where $v= \frac{r_0}{\|r_0\|}$ and $y := \|r_0\|q(A)v+Az.$ Then
\begin{equation}\label{zit}\relax
\frac{\|r_s\|^2}{\|r_0\|^2} \leq 1-\min_{w \in S_n} \frac{|w^\ast
q(A) w|^2+\|w^\ast AY_k\|^2}{\|q(A)\|^2+\|AY_k\|_F^2},
\end{equation}
where $ S_n$ denotes the unit sphere in $C^n,$ and $\|~.~\|_F$ is the Frobenius norm.
\end{thm}
\begin{proof}
Let $U:= (q(A)v, AY_k)$, and assume that it is a matrix of full rank. Then
$P=U(U^\ast U)^{-1}U^\ast$ defines the orthogonal projection onto the
space $\textbf{Range}(q(A)v, AY_k).$ Thus, $$Pr_0 \in \textbf{Range}(q(A)v, AY_k).$$ Since $q(A)v \in \textbf{Range}(AV_m),$ and $r_s:=r_0-y$ is an optimal residual over $\textbf{Range} (AV_m, AY_k), $
this implies
$$\|r_s\|^2 \leq \|r_0-Pr_0\|^2 = \|(I-P)r_0\|^2.$$
This gives
$$\frac{\|r_s\|^2}{\|r_0\|^2} \leq \|(I-P)\frac{r_0}{\|r_0\|}\|^2=\|(I-P)v\|^2 = 1-v^\ast Pv.$$
By using $P=U(U^\ast U)^{-1}U^\ast$, the above equation gives the following:
$$\frac{\|r_s\|^2}{\|r_0\|^2} \leq 1-\|U^\ast v\|^2 \lambda_{min}(U^\ast U)^{-1} \leq 1-\frac{\|U^\ast v\|^2}{\lambda_{max}(U^\ast U) }\leq 1-\frac{\|U^\ast v\|^2}{Trace(U^\ast U)}.$$
Since $U= (q(A)v, AY_k),$ we have $\|U^\ast v\|^2 =|v^\ast
q(A) v|^2+\|v^\ast AY_k\|^2$ and $Trace(U^\ast U)=\|q(A)v\|^2+\|AY_k\|_F^2.$
Substituting these inequalities in the above equation gives the following:
$$\frac{\|r_s\|^2}{\|r_0\|^2} \leq 1-\frac{|v^\ast
q(A) v|^2+\|v^\ast AY_k\|^2}{\|q(A)v\|^2+\|AY_k\|_F^2} \leq 1-\frac{|v^\ast
q(A) v|^2+\|v^\ast AY_k\|^2}{\|q(A)\|^2+\|AY_k\|_F^2}.$$
Here, the second inequality used the facts that $\|v\|_2=1$ and $\|q(A)v\|_2 \leq \|q(A)\|_2.$ Now minimizing the numerator of the second term over $S_n,$ the unit sphere in $C^n,$ gives the equation (\ref{zit}).
Hence, the theorem is proved.
\end{proof}
From Theorem-\ref{conv}, note that the norm of the updated residual
$\|r_s\|$ deviates more from $\|r_0\|$ when
$\|AY_k\|_F$ is smaller. It is well known that $\|AY_k\|_F$ is small when the columns of $Y_k$ are right singular vectors corresponding to the smallest singular
values of $A$; indeed, in that case $\|AY_k\|_F^2$ equals the sum of the squares of those singular values. This answers the second question raised before.
The next section devises the GMRES with approximate singular vectors method. The new algorithm augments a Krylov subspace at each run with an approximate right singular vector from the previous run.
\section{Implementation}
Let $x0$ be an initial approximate solution of a linear system of equations $Ax =b,$ and let $r_0=b-Ax0.$ Let $m$ be the dimension of the search subspace, which consists of the $(m-k)$-dimensional Krylov subspace ${\cal K}_{m-k}(A,r_0)$ and $k<m$ approximate singular vectors. Let $W$ be a matrix of order $n \times m.$ Assume that the first $m-k$ columns of $W$ form an orthonormal basis for the Krylov subspace ${\cal K}_{m-k}(A,r_0)$ and
its last $k$ columns are the approximate singular vectors $y_i,$ for $i=1,2,\cdots, k.$
The new algorithm recursively constructs the first $m-k$ columns of $W$ and a matrix $Q$ with orthonormal columns. The matrices $W$ and $Q$ satisfy the following relation:
$$AW = Q\tilde{H},$$
where $\tilde{H}$ is an upper Hessenberg matrix of order $(m+1) \times m.$ Note that $Q$ is a matrix of order $n \times (m+1)$ and its first $m-k+1$ columns are formed using the Arnoldi recurrence relation. Its last $k$ columns are formed by successively orthogonalizing the vectors $Ay_i$ for $i=1,2,\ldots,k$ against its previous columns. Further, notice that $Q^\ast r_0$ is a multiple of the first coordinate vector.
Similar to the GMRES algorithm, the new algorithm computes an orthogonal matrix $P$ of order $(m+1)$ and an upper
triangular matrix $R$ of order $(m+1) \times m$ such that
$$P\tilde{H} = R.$$ Then, it finds a vector $d$ such that $\|r\|= \|b-A(x0+Wd)\|$ is minimal, and updates the approximate solution to $\hat{x}:= x0+Wd.$ Note that
$$\|r\| = \|b-A(x0+Wd)\| = \|r_0-AWd\| = \|r_0-Q\tilde{H}d\|$$
$$ = \|Q^\ast r_0- \tilde{H}d\| = \|PQ^\ast r_0- Rd\|.$$
As $R$ is an upper triangular matrix of order $(m+1) \times m$ and $\|PQ^\ast r_0- Rd\|$ is to be minimized, the new method obtains the minimizer by solving for the $d$ that makes the first $m$ entries of $PQ^\ast r_0- Rd$ zero. Hence, $\|r\|$ is equal to the magnitude of the last entry of
$PQ^\ast r_0.$ Therefore, in the new method, $\|r\|$ is a
byproduct and does not require any extra computation.
Next, we wish to find approximate right singular vectors of $A$ from the
subspace spanned by the columns of $W$ to augment the search subspace in the next run. For this, we find eigenvectors corresponding to the $k$ smallest
eigenvalues of the matrix $W^\ast A^\ast AW.$ Little extra computation is required to form this matrix,
because
$$G:=W^\ast A^\ast AW = \tilde{H}^\ast Q^\ast Q \tilde{H}= \tilde{H}^\ast \tilde{H}= R^\ast R.$$ We used the Matlab command "eigs" to
solve the eigenvalue problem for $G.$
The implementation of our new method is as follows.
For simplicity, a listing of the algorithm is given for the second
and subsequent runs.
\begin{center}
\large One restarted run of GMRES with singular vectors
\end{center}
1. Initial definitions and calculations: The Krylov subspace has
dimension $m-k$, and $k$ is the number of approximate singular vectors. Let
$q_1= \frac{ r_0}{\|r_0\|}$ and $w_1=q_1.$ Let $y_1, y_2,..., y_k$ be
the approximate singular vectors, and let $w_{m-k+i}= y_i,$ for $i=
1,2,...,k.$ \\
2. Generation of Arnoldi vectors: For $j= 1, 2,..., m-k$ do:
\begin{center}
$h_{i,j}= \langle Aq_j, q_i \rangle , ~i= 1, 2,..., j$, \\
$\hat{q}_{j+1} = Aq_j-\displaystyle \sum\limits _{i=1}^{j}
h_{i,j}q_i,$\\ \vspace{0.2cm}$h_{j+1,j} = \| \hat{q}_{j+1} \|,$ and \\
\vspace{0.2cm} $q_{j+1} = \hat{q}_{j+1} / h_{j+1,j}.$ \\\vspace{0.2cm} If $j
< m-k,$ let $w_{j+1}= q_{j+1}.$
\end{center}
3. Addition of approximate singular vectors: For $j=
m-k+1, m-k+2,..., m$ do:
\begin{center}
$h_{i,j} = \langle Aw_j, q_i\rangle,$ ~$i= 1, 2,..., j$, \\
$\hat{q}_{j+1} = Aw_j-\displaystyle \sum\limits _{i=1}^{j}
h_{i,j}q_i,$\\
$h_{j+1,j} = \|{\hat{q}_{j+1}}\|,$ and\\\vspace{0.2cm}
$q_{j+1} = \frac{\hat{q}_{j+1}}{h_{j+1,j}}.$\\
\end{center}
4. Form the approximate solution: Let $\beta = \|r_0\|.$ Find the $d$
that minimizes $\|\beta e_1-\tilde{H}d\|$ over all $d \in {\cal R}^{m}
$. The orthogonal factorization $P\tilde{H}= R$, for $R$ upper
triangular, is used. Then $\hat{x}= x0 + Wd.$ \\
5. Form the new approximate singular vectors: Calculate $G= R^\ast
R.$ Solve $Gg_i=\sigma_i^2 g_i$ for the appropriate (smallest) eigenvalues $\sigma_i^2$ and eigenvectors $g_i.$ Form $y_i=
Wg_i$ and $Ay_i= Q \tilde{H}g_i$.\\
6. Restart: Compute $r= b- A\hat{x}$; if satisfied with the residual
norm then stop, else let $x0=\hat{x}$ and go to 2.
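To make the listing above concrete, the following NumPy sketch implements one restarted cycle along the lines of Steps 1--6 (a minimal illustration with function and variable names of our own choosing; it is not the author's code). It deviates from the listing in two harmless ways: the small least-squares problem of Step 4 is solved directly with a library routine instead of the factorization $P\tilde{H}= R$, and $G$ is formed as $\tilde{H}^\ast \tilde{H}$, which equals $R^\ast R$.
\begin{verbatim}
import numpy as np

def gmres_sv_cycle(A, b, x0, m, k, Y=None):
    # One restarted cycle of GMRES augmented with approximate right singular
    # vectors (illustrative helper; no breakdown handling, for brevity).
    n = A.shape[0]
    kk = 0 if Y is None else Y.shape[1]     # vectors actually appended this cycle
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    Q = np.zeros((n, m + 1))                # orthonormal columns with A W = Q H
    W = np.zeros((n, m))                    # basis of the search subspace
    H = np.zeros((m + 1, m))
    Q[:, 0] = r0 / beta
    for j in range(m):
        W[:, j] = Q[:, j] if j < m - kk else Y[:, j - (m - kk)]
        v = A @ W[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt (Steps 2-3)
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        Q[:, j + 1] = v / H[j + 1, j]
    # Step 4: min ||r0 - A W d|| = min ||beta e_1 - H d||, since Q^* r0 = beta e_1.
    e1 = np.zeros(m + 1); e1[0] = beta
    d = np.linalg.lstsq(H, e1, rcond=None)[0]
    x = x0 + W @ d
    # Step 5: approximate right singular vectors of A W from G = H^* H (= R^* R).
    G = H.T @ H
    vals, vecs = np.linalg.eigh(G)          # eigenvalues in ascending order
    Ynew = W @ vecs[:, :k]                  # k smallest singular values of A W
    Ynew = Ynew / np.linalg.norm(Ynew, axis=0)  # normalisation added as a safeguard
    return x, Ynew

# Illustrative driver (Step 6, the restarting loop) on an arbitrary test problem.
rng = np.random.default_rng(1)
n, m, k = 400, 20, 4
A = np.eye(n) + 0.2 * rng.standard_normal((n, n)) / np.sqrt(n)
b = np.ones(n)
x, Y = np.zeros(n), None
for run in range(50):
    x, Y = gmres_sv_cycle(A, b, x, m, k, Y)
    if np.linalg.norm(b - A @ x) / np.linalg.norm(b) < 1e-8:
        break
print(run + 1, np.linalg.norm(b - A @ x) / np.linalg.norm(b))
\end{verbatim}
On the first call $Y$ is empty, so the cycle reduces to standard GMRES($m$); each subsequent cycle uses an $(m-k)$-dimensional Krylov subspace augmented with the $k$ returned vectors.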
Only Step-5 in the above listing is different from the GMRES with eigenvectors method. The GMRES with eigenvectors method requires the
computation of both $F=W^\ast A^\ast W$ and $G$
\cite{aug}, whereas Step-5 in the above listing computes only $G=W^\ast A^\ast AW.$ Hence, the above algorithm requires less computation
and storage compared to the GMRES with eigenvectors method.
Next, we compare the GMRES with singular vectors and standard GMRES methods. For this, we follow the procedure used in \cite{aug} to compare the standard GMRES and GMRES with eigenvectors methods; it compares only the significant expenses.
Suppose the search subspace currently at hand is a Krylov subspace of dimension $j.$ If the search subspace is expanded with one more Arnoldi vector, then one matrix-vector product is required, and the orthogonalization requires about $2jn$ multiplications. If instead the search subspace is expanded with a singular vector approximation, no matrix-vector product is required; the other costs are approximately $4jn$ multiplications, namely $2jn$ multiplications for orthogonalization and $2jn$ for computing $y_i$ and $Ay_i.$
Hence, GMRES with singular vectors requires $2jn$ extra multiplications compared to standard GMRES, but it avoids a matrix-vector product, which in GMRES requires $n^2$ multiplications. In general, $2j \ll n.$ Therefore, the GMRES with singular vectors method requires less overall computation than standard GMRES.
The GMRES with $k$ singular vector approximations method requires the storage of $m+2k+2$ vectors. This includes the storage of the $2k$ vectors $y_i$ and $Ay_i$ for $i=1,2,\cdots,k.$ However, the storage requirement for standard GMRES with an $(m+k)$-dimensional Krylov subspace is $m+k+2$ vectors. Thus, the GMRES with singular vectors method requires extra storage compared to standard GMRES. Since $k \ll m$, this extra storage is often not a problem.
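For instance, with $m=20$ and $k=4$ (as in GMRES-SV(20,4) used below), the proposed method stores $m+2k+2=30$ vectors per run, while GMRES(24) stores $m+k+2=26$ vectors.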
\section{Examples}\vspace{-0.2cm}
In the following, GMRES-SV(m,k) indicates that at each run $k$ approximate singular vectors from the previous run augment a Krylov subspace of dimension $m-k.$ Similarly, GMRES-HR(m,k) indicates the augmentation of a Krylov subspace with $k$ approximate eigenvectors obtained using the harmonic Rayleigh-Ritz process. Moreover, in the first run of both methods the search subspace is a Krylov subspace of dimension $m$.
In each of the examples, we compare GMRES-SV(m,k) with GMRES-HR(m,k), GMRES(m), and GMRES(m+k). Here, for GMRES, the number in parentheses represents the dimension of the Krylov subspace at each run. Note that the dimensions of the search subspaces in GMRES(m), GMRES-SV(m,k), and GMRES-HR(m,k) are the same, while the search subspace of GMRES(m+k) at each run requires nearly the same storage as that of GMRES-SV(m,k) and GMRES-HR(m,k).
In all numerical examples, the right-hand sides have all entries $1.0,$ unless mentioned otherwise. The initial guesses $x0$ are zero vectors. Further, we stopped each algorithm when $\|r\|/\|b\|$ dropped below the fixed tolerance $10^{-8}.$ All experiments were carried out using MATLAB R2016b on an Intel Core i7 system running at $3.40$ GHz.
\begin{eg}\label{eg1}\relax
This example is the same as Example-1 in \cite{errm}. The linear system results from the discretization of the one-dimensional Laplace equation. The coefficient matrix $A$ is a symmetric tridiagonal matrix of dimension $1000,$ and the right-hand side vector is $(1~ 0~ 0~ \cdots~ 0~ 1)'$. The entries on the main diagonal of $A$ are $2$, whereas $-1$ is on the sub-diagonals of $A$. The matrix has eigenvalues $\lambda_k= 2(1-\cos \frac{k\pi}{1001})$ for $1 \leq k \leq 1000$ and its condition number is $(1+\cos \frac{\pi}{1001})/(1-\cos \frac{\pi}{1001}).$
\end{eg}
The Figure-\ref{fig1-B} depicts the convergence of residual norms and corresponding error norms in the GMRES-SV(20,4), GMRES-HR(20,4), GMRES(20), and standard GMRES(24) methods.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=5.5in, height=2.5in]
{eg1-ErrM-kashivishwanathagmresnlog.eps}
\vspace{-0.5cm}
\caption{ Magnitudes of $\frac{\|r\|}{ \|b\|}$ with GMRES(20,4) singular vectors/eigenvectors, GMRES(24), and GMRES(20) (left). Absolute errors with GMRES(20,4) singular vectors/eigenvectors, GMRES(24), and GMRES(20) (right). }
\label{fig1-B}
\end{center}
\vspace{-0.5cm}
\end{figure}
In GMRES-SV(20,4), $\|r\|/\|b\|$ drops below the tolerance $10^{-8}$ in the $148^{th}$ run, requiring $2365$ matrix-vector products. In the remaining three methods, $\|r\|/\|b\|$ did not even reach $10^{-4}$ after $5000$ matrix-vector products. Here, the total number of matrix-vector products in all methods is counted in a similar way as in \cite{aug}.
Observe from the right part of Figure-\ref{fig1-B} that GMRES-SV(20,4) also reduces error norms to a far better extent than the remaining three methods. Here, the error norm is the norm of the error vector, the difference between the solution obtained using the "backslash" command in Matlab and the approximate solution of the iterative method.
From Figure-\ref{fig1-B}, we observed that when the residual norm drops below the tolerance $10^{-8},$ the $\log_{10}$ of the error norm in GMRES-SV(20,4) is $-4.763.$ In the other three methods, at the $5000^{th}$ matrix-vector product it is still near $1.244$. Therefore, this example illustrates the fact that the augmentation of a Krylov subspace with singular vectors reduces the error norms as well as the residual norms.
\begin{eg}\label{sherman}\relax
Consider the matrix $SHERMAN4$, which comes from oil reservoir modeling. It is a real unsymmetric matrix of order $1104.$ The Matrix Market provides the right-hand side vector. We compare GMRES-SV(20,4)
(16 Krylov
vectors and 4 approximate right singular vectors) with GMRES-HR(20,4), GMRES(20), and GMRES(24).
\end{eg}\vspace{-0.2cm}
See the left part of Figure \ref{fig1} for the convergence of the $\log_{10}$ of the residual norms in all the methods.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=5.5in,height=2.5in]
{example2-sherman4-kashivishwanathagmresnlog.eps}
\vspace{-0.5cm}
\caption{ Magnitudes of $\frac{\|r\|}{ \|b\|}$ with GMRES(20,4) singular vectors/eigenvectors, GMRES(24), and GMRES(20) (left). Absolute errors with GMRES(20,4) singular vectors/eigenvectors, GMRES(24), and GMRES(20) (right). }
\label{fig1}
\end{center}
\vspace{-0.5cm}
\end{figure}
In GMRES-SV(20,4), the quantity $\|r\|/\|b\|$ reduced to below $10^{-8}$ at the $12^{th}$ run. The total number of matrix-vector products it required is $190.$ GMRES-HR(20,4) required $562$ matrix-vector products to drop $\|r\|/\|b\|$ below the tolerance $10^{-8}.$
Thus, GMRES-HR required nearly three times the computation of the GMRES-SV method. Further, observe from Figure-\ref{fig1} that GMRES-SV(20,4) is far better than GMRES(24) even though it used smaller search subspaces.
The right part of Figure-\ref{fig1} compares error norms.
When the residual norm reached the tolerance, the $\log_{10}$ of the error norm in GMRES-SV(20,4) is $-6.063,$ whereas it is $-5.063$ in GMRES-HR(20,4), and is equal to $-4.813$ and $-4.802$ in the GMRES(24) and GMRES(20) methods, respectively. Therefore, for this example, the GMRES-SV method significantly reduced the error norm compared to the remaining three methods.
\begin{eg}\label{watt}\relax
Consider the matrix $WATT1$, which comes from petroleum engineering. It is a real unsymmetric matrix of order $1856$ with $11360$ non-zero entries. The right-hand side is the one provided by the Matrix Market. To see the performance of GMRES-SV with fewer approximate singular vectors, we have chosen $m=20$ and $k=2.$ Thus, we have used only two approximate singular vectors, which is fewer than in the previous examples.
\end{eg}
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=5.5in,height=2.5in]
{example3-watt1-kashivishwanathagmresnlog.eps}
\vspace{-0.5cm}
\caption{ Magnitudes of $\frac{\|r\|}{ \|b\|}$ with GMRES(20,2) singular vectors/eigenvectors, GMRES(22), and GMRES(20) (left). Absolute errors with GMRES(20,2) singular vectors/eigenvectors, GMRES(22), and GMRES(20) (right). }
\label{fig2}
\end{center}
\vspace{-0.9cm}
\end{figure}
Using GMRES-SV(20,2), the ratio $\|r\|/\|b\|$ reached the required tolerance in the $30^{th}$ run, whereas in GMRES-HR(20,2), GMRES(22), and GMRES(20) it happened in the $82^{nd}$, $109^{th},$ and $143^{rd}$ runs, respectively. See Figure-\ref{fig2}(left) for the comparison of the $\log_{10}$ of the residual norms in all four methods.
Figure-\ref{fig2}(right) compares the convergence of the error norms in the four methods. Observe from it that GMRES-SV(20,2) reduced the error norm to a better extent than the other three methods, even though it took fewer iterations for the convergence of $\|r\|/\|b\|.$ It reduced the residual norms as well.
\begin{eg}\label{mor}\relax
This example is taken from \cite{aug}. The coefficient matrix is a bidiagonal matrix of order $1000.$ The diagonal elements are $1,2, \cdots, 1000$ in order, and the super-diagonal elements are all equal to $0.1$.
We have chosen $m= 20$ and $k=2$ for the GMRES with singular vectors method. We used only two eigenvector approximations in GMRES-HR. We compare these two methods with GMRES(20) and GMRES(22).
\end{eg}
\begin{figure}[!htb]\relax
\begin{center}
\includegraphics[width=5.5in,height=2.5in]
{example4-morgan1-kashivishwanathagmresnlog.eps}
\caption{ Magnitudes of $\frac{\|r\|}{ \|b\|}$ with GMRES(20,2) singular vectors/eigenvectors, GMRES(22), and GMRES(20)(left). Absolute errors with GMRES(20,2) singular vectors/eigenvectors, GMRES(22), and GMRES(20) (right). }
\label{fig3}
\end{center}
\vspace{-0.5cm}
\end{figure}
Figure-\ref{fig3}(left) compares the ratio $\|r\|/\|b\|$ in these four methods. Using GMRES-SV(20,2), $\|r\|/\|b\|$ reached the desired tolerance in the $15^{th}$ run, whereas in GMRES-HR(20,2), GMRES(22), and GMRES(20), this ratio reduced below the tolerance in the $27^{th}$, $20^{th},$ and $24^{th}$ runs, respectively.
In Figure-\ref{fig3}(right), we compare the error norms in the four methods. Though the error is reduced to the same order in all methods, GMRES-SV(20,2) has taken fewer matrix-vector products. Further, observe that in GMRES-SV the smaller error norm at each iteration accelerates the convergence of the residual norms.
The above examples have shown that our new method is effective in accelerating the convergence of GMRES. They also show that we can use singular vector approximations instead of eigenvector approximations to augment the search subspace. Further, Example-1 has shown the superiority of the GMRES-SV method even in the case of near stagnation of the error norms in standard GMRES.
We reported four typical examples in detail, though computations were carried out on several matrices available in the Matrix Market. Table-1 reports a summary of results on eight other matrices with various base sizes.
It is apparent from the table that the GMRES with singular vectors method performs better in reducing the error norms compared to standard GMRES and GMRES-HR.
\begin{table}[!htb]
\begin{tiny}
\begin{center}
\caption{Summary results on other matrices}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Matrix& Method &rhs&Initial vector &MVP&$\|r\|/\|b\|$&error \\ \hline
Add20 & GMRES-SV(30,4) & NIST & Zeros(2395,1) & 17*26 & 8.903030927585648e-09 & 6.249677395458858e-15\\
&GMRES(30) & && 27*30+23& 9.951915432072370e-09& 1.269588940989828e-14\\
&GMRES-HR(30,4) & &&24*26+9 &9.859393321942817e-09 & 1.326769652866634e-14
\\&& && & & \\
Bcsstm12& GMRES-SV(30,4)& Ones(1473,1)& Zeros(1473,1)& 7*26+21& 9.735591826540923e-09& 4.010655907680425e-05
\\
&GMRES(30) &&& 7*30+18& 8.504590396065243e-09& 3.371368967798073e-05\\
&GMRES-HR(30,4) & &&9*26+4 &9.359680271393492e-09 & 3.454242536809857e-05 \\&& && & & \\
Cavity05 &GMRES-SV(30,4) &NIST &Zeros(1182,1)& 58*26+3 &9.979063739097918e-09 & 8.864891034548036e-17\\
&GMRES(30)* &&& 300*30&9.083509763407927e-06 &4.366773990806636e-12\\
&GMRES-HR(30,4)* & && 300*30 &4.188013345206927e-04 & 2.098667819011905e-10 \\&& && & & \\
Cavity10& GMRES-SV(30,4)& NIST &Zeros(2597,1)& 107*26+2& 9.748878041774344e-09&1.082459456375754e-14\\
&GMRES(30)* &&& 300*30& 1.480273732125253e-05 &1.745590861026809e-09\\
&GMRES-HR(30,4)* & && 300*30 & 4.412649085564300e-05 & 5.202816341776465e-09 \\&& && & & \\
Cdde1& GMRES-SV(30,4)& Ones(961,1) &Zeros(961,1)& 7*26+4 & 9.375985059553094e-09 &9.904876436183557e-06\\
&GMRES(30) &&& 28*30+24& 9.903484254958585e-09
& 8.057216911709931e-05\\
&GMRES-HR(30,4) & &&42*26 & 9.784366641015672e-09
&5.754305524376241e-05 \\&& && & & \\
$Orsreg_1$ &GMRES-SV(30,4)& Ones(2205,1)& Zeros(2205,1)& 9*26+18& 9.636073229113061e-09 & 3.069738625094621e-08 \\
&GMRES(30)&&& 13*30+18& 9.671973942415371e-09& 8.655424850905282e-08\\
&GMRES-HR(30,4) & && 15*26+24 & 9.858474091767057e-09 &7.314829956508281e-08 \\&& && & & \\
Sherman1 &GMRES-SV(30,4) &NIST& Zeros(1000,1)& 34*26+16 &9.988703017482971e-09& 3.073177766807258e-05 \\
&GMRES(30) &&& 103*30+21& 9.987479947720702e-09& 1.160914298556447e-04\\
&GMRES-HR(30,4) & && 35*26+1 &8.658352412777792e-09 & 5.119874131512304e-05 \\&& && & & \\
$Watt_2$ &GMRES-SV(30,4)& Ones(1856,1)& Zeros(1856,1)& 40*26 &9.956725136180967e-09& 1.955980343013359e+03\\
&GMRES(30) &&& 168*30+6 &9.897399520154960e-09
& 6.004185029785801e+03\\
&GMRES-HR(30,2) & &&254*26 &9.996375252736734e-09
& 7.072952057904888e+03 \\
\hline
\end{tabular}
\end{center}
\end{tiny}
\end{table}
In Table-1, NIST refers to the right-hand side vector provided by the Matrix Market website, and GMRES* represents non-convergence of the GMRES method even after 300 iterations. Moreover, for counting the number of matrix-vector products (MVP) we followed the same procedure as in \cite{aug}. In the above table, $x*y+z$ means that in each of the first $x$ iterations the method used $y$ MVPs, and in the $(x+1)^{th}$ iteration it used $z$ MVPs.
\section{Conclusions}
In this paper, a new augmentation procedure for GMRES has been proposed using approximate right singular vectors of the coefficient matrix. The proposed method has the advantage that it requires less computation than the GMRES with harmonic Ritz vectors method. Unlike the augmentation method in \cite{aug}, the proposed method also reduces the error norms to a better extent.
Further, for matrices and right-hand side vectors over the real numbers, the proposed method involves computation in real arithmetic only. Numerical experiments have been carried out on benchmark matrices. The results have shown the superiority of the proposed method over the standard GMRES and GMRES with harmonic Ritz vectors methods.
\section*{Acknowledgements}
The author thanks the National Board of Higher Mathematics, India, for supporting this work under the Grant number 2/40(3)/2016/R\&D-II/9602.
\section*{References}
| {
"timestamp": "2019-02-07T02:17:37",
"yymm": "1902",
"arxiv_id": "1902.02260",
"language": "en",
"url": "https://arxiv.org/abs/1902.02260",
"abstract": "This paper has proposed the GMRES that augments Krylov subspaces with a set of approximate right singular vectors. The proposed method suppresses the error norms of a linear system of equations. Numerical experiments comparing the proposed method with the Standard GMRES and GMRES with eigenvectors methods[3] have been reported for benchmark matrices.",
"subjects": "Numerical Analysis (math.NA)",
"title": "GMRES with Singular Vector Approximations",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9719924818279464,
"lm_q2_score": 0.7279754607093178,
"lm_q1q2_score": 0.7075866747646925
} |
https://arxiv.org/abs/2002.12652 | Hyperbolic Knot Theory | This book is an introduction to hyperbolic geometry in dimension three, and its applications to knot theory and to geometric problems arising in knot theory. It has three parts. The first part covers basic tools in hyperbolic geometry and geometric structures on 3-manifolds. The second part focuses on families of knots and links that have been amenable to study via hyperbolic geometry, particularly twist knots, 2-bridge knots, and alternating knots. It also develops geometric techniques used to study these families, such as angle structures and normal surfaces. The third part gives more detail on three important knot invariants that come directly from hyperbolic geometry, namely volume, canonical polyhedra, and the A-polynomial. | \chapter{A Brief Introduction to Hyperbolic Knots}\label{Chap:KnotIntro}
\blfootnote{Jessica S. Purcell, Hyperbolic Knot Theory}
This book gives an introduction to knots, links, and hyperbolic geometry. Before we begin, we need to define carefully what we mean by knots and links, and that is done in this chapter. We also introduce classical problems in knot theory, and problems motivated by geometry, especially hyperbolic geometry. This chapter is meant to motivate future chapters, and it has many references to content covered in more detail later in the book, where we address some of these problems. Many of the questions described in this chapter have partial answers, and many are still wide open.
\section{An introduction to knot theory}\label{Sec:KnotIntro}
The earliest study of knots seems to be by Gauss, Listing, and especially Tait, who published several papers on knot theory in the years 1876 through 1885. In a preface to his work on knot theory, republished in his 1898 Scientific papers~\cite{Tait:SciPapers}, Tait writes:
\begin{quote}
``The subject [knot theory] is a very much more difficult and intricate one than at first sight one is inclined to think, and I feel that I have not succeeded in catching the key-note.''
\end{quote}
Since Tait's work, advances in knot theory have come through applications of topology, algebra, and invariants arising in quantum field theory, but no single mathematical field has led to simple tools that apply to all knots. In other words, perhaps mathematicians still have not succeeded in catching the ``key-note.'' Perhaps there is no ``key-note'' in knot theory. However, there are definitely mathematical techniques that work well when applied to particular problems or particular families. This book introduces techniques arising from geometry.
\subsection{Basic terminology}
To begin, we need careful definitions of the objects involved.
\begin{definition}\label{Def:knot}
A \emph{knot}\index{knot, definition} $K \subset S^3$ is a subset of points homeomorphic to a circle $S^1$ under a piecewise linear (PL) homeomorphism. We may also
think of a knot as a PL embedding $K\from S^1 \to S^3$. We will use the same symbol $K$ to refer to the map and its image $K(S^1)$.
More generally, a \emph{link}\index{link, definition} is a subset of $S^3$ PL homeomorphic to a disjoint union of copies of $S^1$. Alternatively, we may think of a link as a PL embedding of a disjoint union of copies of $S^1$ into $S^3$.
\end{definition}
A PL homeomorphism of $S^1$ is one that takes $S^1$ to a finite number of linear segments. Restricting to such homeomorphisms allows us to assume that a knot $K\subset S^3$ has a regular tubular neighborhood,\index{tubular neighborhood} that is, there is an embedding of a solid torus $S^1\times D^2$ into $S^3$ such that $S^1\times\{0\}$ maps to $K$. An embedding of $S^1$ into $S^3$ that cannot be made piecewise linear defines an object called a \emph{wild knot}.\index{wild knot} Wild knots may have very interesting geometry, but we will only be concerned with the classical knots of \refdef{knot} here.
In fact, rather than working with PL embeddings and homeomorphisms, we obtain the same results working with smooth ones. That is, we could require instead in \refdef{knot} that a knot be a smooth embedding of $S^1$ into $S^3$, and we obtain an equivalent theory. We will assume this fact throughout the book, working with both PL and smooth maps, with very little mention of this fact.
\begin{definition}\label{Def:knotequiv}
We will say that two knots (or links) $K_1$ and $K_2$ are equivalent if they are \emph{ambient isotopic}\index{ambient isotopic}, that is, if there is a (PL or smooth) homotopy $h\from S^3 \times [0,1] \to S^3$ such that $h(*, t) = h_t\from S^3 \to S^3$ is a homeomorphism for each $t$, and
\[ h(K_1, 0) = h_0(K_1) = K_1 \quad \mbox{ and } \quad
h(K_1, 1) = h_1(K_1) = K_2. \]
Such a map $h$ is called an \emph{ambient isotopy}.\index{ambient isotopy}
\end{definition}
A PL (or smooth) embedding of $S^1$ into $S^3$ defines two 3-manifolds, one open and one compact, as in the following definition.
\begin{definition}\label{Def:knotcomplement}
For a knot $K$, let $N(K)$ denote an open regular neighborhood of $K$ in $S^3$. The \emph{knot exterior}\index{knot exterior, definition} is the manifold $S^3-N(K)$. Notice that it is a compact 3-manifold with boundary homeomorphic to a torus.
The \emph{knot complement}\index{knot complement, definition} is the open manifold $S^3-K$, homeomorphic to the interior of $S^3-N(K)$.
Similarly, if $L$ is a link the \emph{link exterior}\index{link exterior, definition} is $S^3-N(L)$, and the \emph{link complement}\index{link complement, definition} is $S^3-L$.
\end{definition}
It was an open question for many years as to whether two knots with homeomorphic complements must be equivalent (up to reflection). This was proved in the affirmative by Gordon and Luecke in 1989~\cite{gordon-luecke}.
\begin{theorem}[Gordon--Luecke Theorem]\label{Thm:GordonLuecke}\index{Gordon--Luecke knot complement theorem}
If two knots have complements that are homeomorphic by an orientation-preserving homeomorphism, then the knots are equivalent.
\end{theorem}
The complement of a knot and the complement of its reflection are homeomorphic, by the orientation-reversing reflection homeomorphism. However, the knot itself may not be equivalent to its reflection. In fact, hyperbolic geometry tools do not distinguish knots and their reflections, and so we often only consider knots up to reflection in this book. If we disregard reflections, the Gordon--Luecke theorem states that knots are determined by their complements.
The same is not true for links. There are infinitely many inequivalent links whose complements are homeomorphic. However, the ways in which such links can be constructed are relatively well-understood; see, for example~\cite{gordon:links}.
\begin{definition}\label{Def:knot-diagram}
A \emph{knot diagram}\index{knot diagram, definition} (or \emph{link diagram})\index{link diagram, definition} is a 4-valent graph with over/under crossing information at each vertex. The diagram is embedded in a plane $S^2\subset S^3$ called the \emph{projection plane},\index{projection plane, definition} or \emph{plane of projection}.\index{plane of projection, definition}
\end{definition}
\begin{figure}[h]
\includegraphics{Figures/Ch00_KnotTheory/F0-01-KnotTable}
\caption{Knots with at most six crossings.}
\label{Fig:KnotTable}
\end{figure}
\Reffig{KnotTable} shows diagrams of the eight knots with at most six crossings.
Classically, a knot has been described by a diagram.
Tait's works give many diagrams. In modern work, knots also appear without diagrams, for example when they arise as periodic orbits of a dynamical system~\cite{BirmanWilliams:Lorenz}, or from a gluing of polyhedra~\cite{CallahanDeanWeeks, ckp, ckm}.
However, many open problems in knot theory still concern knot diagrams.
One goal of \refchap{Fig8Decomp}, and then the next few chapters, is to give a method to pass from a knot or link \emph{diagram} to a topological and then geometric description of the knot or link \emph{complement}. That is, we start with a 4-valent graph describing a knot or link $K$, and obtain a mathematically rigorous decomposition of the 3-manifold $S^3-K$ into simple 3-dimensional pieces, which will be useful for applying tools from geometry and 3-manifold topology.
\section{Problems in knot theory}
There are many open problems in knot theory, and as new mathematical fields are brought to bear upon these problems, new questions and problems arise. This section gives a few highlights of the most classical problems, and also problems that seem most amenable to geometric techniques. Probably the most long-standing problem, and also one of the most broad, is the following.
\subsection{The classification problem}
When do two different descriptions of knots yield equivalent knots? When do they have homeomorphic complements?
When the description of a knot is given by a diagram, this is the problem that Tait encountered while trying to list all knots with a fixed number of crossings. See \reffig{Tait}, which is modified from the 1884 paper~\cite{Tait:OnKnotsII}.
\begin{figure}
\includegraphics{Figures/Ch00_KnotTheory/F0-02-Tait}
\caption{A very small portion of P.~Tait's 1884 tables of knot diagrams, from~\cite{Tait:OnKnotsII}. The original contains a full page with such diagrams, with additional pages of diagrams in~\cite{Tait:OnKnotsIII}.}
\label{Fig:Tait}
\end{figure}
There are a few moves that can be performed on a diagram that do not change the equivalence class of the underlying knot. For example, if the diagram contains a single crossing that forms a loop, as shown on the left of \reffig{Nugatory}, that loop can be untwisted to simplify the diagram.
\begin{figure}[h]
\import{Figures/Ch00_KnotTheory/}{F0-03-Nuga.eps_tex}
\caption{On the left, a nugatory crossing. On the right, a more general reducible crossing.}
\label{Fig:Nugatory}
\end{figure}
\begin{definition}\label{Def:Nugatory}
A single crossing forming a loop, as on the left of \reffig{Nugatory}, is called a \emph{nugatory crossing}\index{nugatory crossing}.
More generally, a \emph{reducible crossing}\index{reducible crossing} is a crossing through which we may draw a circle $\gamma$ on the plane of projection such that $\gamma$ meets the diagram only at one point, at the crossing. See \reffig{Nugatory}, right.
A diagram is \emph{reduced}\index{reduced diagram} if it contains no reducible crossings.
\end{definition}
Note that reducible crossings can be removed by isotoping the diagram. We typically will assume that our knot diagrams are reduced.
There are other well-known moves to change a diagram into an equivalent diagram. These include the three moves shown in \reffig{Reidemeister}, called Reidemeister moves.\index{Reidemeister moves}
\begin{figure}[h]
\includegraphics{Figures/Ch00_KnotTheory/F0-04-Reide}
\caption{Three Reidemeister moves do not change knot equivalence.}
\label{Fig:Reidemeister}
\end{figure}
The Reidemeister moves appear in work of Maxwell in the 1800s (see, for example,~\cite{Epple}). In the 1920s, Reidemeister~\cite{Reidemeister} and Alexander and Briggs~\cite{AlexanderBriggs} independently gave rigorous proofs that two equivalent diagrams can always be related by a sequence of such moves.
The \emph{crossing number}\index{crossing number} of a knot is the minimal number of crossings in all diagrams of the knot. A minimal crossing diagram will necessarily be reduced. However, a reduced diagram is not necessarily a minimal crossing diagram. For example, \reffig{Goeritz} shows the reduced diagram of a knot that can, with a little work, be simplified to the \emph{unknot}\index{unknot}, i.e.\ the simple circle with no crossings. This diagram was discovered by Goeritz in 1934~\cite{Goeritz}.
\begin{figure}[h]
\includegraphics{Figures/Ch00_KnotTheory/F0-05-Goer}
\caption{This diagram of the unknot was discovered in 1934 by Goeritz.}
\label{Fig:Goeritz}
\end{figure}
In fact, the diagram of \reffig{Goeritz} is an example of a knot diagram that cannot be simplified by Reidemeister moves without first increasing the number of crossings of the diagram.
In addition to attempting to remove crossings, other moves can be performed on diagrams to simplify the classification problem. For example, there is a way of joining two simple diagrams into one more complicated diagram, shown in \reffig{KnotSum}.
\begin{figure}[h]
\includegraphics{Figures/Ch00_KnotTheory/F0-06-KSum}
\caption{The knot sum of two knots.\index{knot sum}}
\label{Fig:KnotSum}
\end{figure}
Starting with two diagrams side-by-side, take a rectangle embedded in the plane of projection that has one side on one diagram, avoiding crossings, an opposite side on the other diagram, again avoiding crossings, and the final two sides disjoint from the two diagrams. Form the new diagram by removing the two edges of the rectangle that lie on the knots, and joining the knots along the two opposite sides of the rectangle. The resulting knot is called the \emph{knot sum}\index{knot sum}. It is also sometimes called the \emph{connected sum of the knots}\index{connected sum}.
Given a knot sum of two knot diagrams, consider the embedded curve $\gamma$ in the plane of projection of the diagram that encircles exactly one of the original diagrams, cutting through the rectangle in the definition of the knot sum. This curve $\gamma$ meets the diagram of the knot sum in exactly two points, and it bounds disks on both sides (thinking of the projection plane as $S^2\subset S^3$), and both disks contain crossings.
We say that a diagram is \emph{prime}\index{prime}\index{prime!diagram} if no such curve $\gamma$ exists. That is, a knot or link diagram is \emph{prime} if, for every simple closed curve $\gamma$ in the plane of projection, if $\gamma$ meets the knot exactly twice transversely away from crossings, then $\gamma$ bounds a region of the diagram with no crossings.
Curves such as $\gamma$ above detect knot sums. When knots are classified by diagram, listed according to crossing number, typically only prime diagrams are included.
The problem of listing all knots by crossing number, without duplicates, is a difficult one. There are 1,701,936 prime knots with at most 16 crossings, classified by Hoste, Thistlethwaite, and Weeks in 1998~\cite{htw}. More recently, Burton classified 352,152,252 prime knots up to 19 crossings~\cite{Burton:KnotEnum}. These knots can be downloaded with the 3-manifold software Regina~\cite{regina}. In both instances, the knots are only classified up to reflection in the plane of projection.
\begin{definition}\label{Def:knotinvt}
A \emph{knot invariant}\index{knot invariant, definition} is a function from the set of knots to some other set whose value depends only on the equivalence class of the knot. A \emph{link invariant}\index{link invariant} is defined similarly.
\end{definition}
The crossing number of a knot is an example of a knot invariant.
Knot and link invariants are used to prove that two knots or links are distinct, or to measure the complexity of the link in various ways. We will revisit examples of knot invariants below, particularly geometric ones.
Notice that the number of knots with a given crossing number grows very rapidly. There does not seem to be a natural way of enumerating knots within a fixed class of crossing number. And while the crossing number was one of the first knot invariants to be studied by knot theorists, it does not seem to relate well to other knot invariants, particularly those that arise in geometry. For these reasons and others, other ways of classifying knots have arisen over the years, which we will discuss further below.
In this book we will apply geometry to the problem of the classification of knots. It has been known since the early 1980s, due to work of Thurston~\cite{thurston:bulletin}, that the complement of a knot decomposes into pieces, each admitting a 3-dimensional geometry. By using geometric properties of knot complements, we can often distinguish knots. This brings us to the second problem in knot theory that we discuss here.
\subsection{The problem of determining geometry of the complement}
Briefly, the complement of a knot is \emph{hyperbolic}\index{hyperbolic knot or link} if and only if it admits a complete metric with all sectional curvatures equal to $-1$. We will give other equivalent definitions of hyperbolic knots in later chapters, which will often be more useful for calculations, computations, and examples.
For now, it is known that when a knot complement is hyperbolic, its hyperbolic metric is unique. That is, if two hyperbolic knot complements are homeomorphic, then they are isometric with respect to their hyperbolic metrics.
Moreover, a large number of knots are hyperbolic, and many that are not hyperbolic decompose into hyperbolic pieces.
More precisely, consider the following families of knots.
\begin{definition}\label{Def:TorusKnot0}
A \emph{torus knot}\index{torus knot} is a knot that can be embedded (without crossings) on the surface of an unknotted torus in $S^3$. See \reffig{TorusKnot0}.
\end{definition}
\begin{figure}[h]
\includegraphics{Figures/Ch00_KnotTheory/F0-07-Torus}
\caption{A torus knot}
\label{Fig:TorusKnot0}
\end{figure}
By an unknotted torus, we mean the torus bounding a regular neighborhood of an unknot in $S^3$.
\begin{definition}\label{Def:Satellite0}
A \emph{satellite knot}\index{satellite knot} is a knot that can be embedded in a regular neighborhood of another knot in $S^3$. See \reffig{Satellite0}.
\end{definition}
\begin{figure}[h]
\includegraphics{Figures/Ch00_KnotTheory/F0-08-Sat}
\caption{An example of a satellite knot. The dotted line forms the boundary of a neighborhood of a different knot, and the satellite lives inside that neighborhood.}
\label{Fig:Satellite0}
\end{figure}
The complement of a torus knot admits a 3-dimensional geometry that is not hyperbolic, due to work of Thurston~\cite{thurston:bulletin}. He also showed that the complement of a satellite knot cannot be hyperbolic, but can be cut along a torus to decompose into pieces that admit 3-dimensional geometry, which could possibly be hyperbolic. For example, the knot complement in \reffig{Satellite0} can be cut along the torus indicated by the dotted line into two hyperbolic pieces, as we will see later in this book.
Thurston showed that every knot in $S^3$ that is neither a torus knot nor a satellite knot must have hyperbolic complement~\cite{thurston:bulletin}.
Thus hyperbolic geometry can be a useful tool in the classification problem of knots --- in theory.
In practice, we need tools and techniques to determine when a knot complement is hyperbolic. For example, if a knot is given by a messy diagram, how does one determine whether or not it is equivalent to a torus or satellite knot? How can we determine whether its complement is hyperbolic? And if it is hyperbolic, how can we find a hyperbolic metric?
Thurston outlined a procedure for finding a hyperbolic metric using the diagram of the figure-8 knot in his 1979 lecture notes~\cite{thurston}. This process was generalized by others, for example~\cite{menasco:links}, and even made algorithmic, in Weeks' 1985 PhD thesis~\cite{Weeks:Thesis}. There is now software that determines, given a knot diagram, whether or not the knot complement is hyperbolic. This is the computer program SnapPy, which is freely available~\cite{SnapPy}.\index{SnapPy}
Indeed, using computational tools, Burton has determined that of all prime knots with up to 19 crossings, $352,151,858$ are hyperbolic, and only $394$ are not hyperbolic~\cite{Burton:KnotEnum}. These are split into $14$ torus knots and $380$ satellite knots.
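For readers who would like to experiment right away, the following short Python session sketches how such a computation might look. It assumes SnapPy has been installed as a Python package; the names \texttt{4\_1} (the figure-8 knot) and \texttt{3\_1} (the trefoil) refer to the standard knot tables, and the exact strings reported by \texttt{solution\_type()} may vary between versions.
\begin{verbatim}
# A minimal SnapPy session (assuming SnapPy is installed as a Python package).
import snappy

fig8 = snappy.Manifold('4_1')      # figure-8 knot complement
trefoil = snappy.Manifold('3_1')   # trefoil knot complement (a torus knot)

# solution_type() reports whether SnapPy found a geometric solution to its
# gluing equations; a geometric solution indicates a hyperbolic structure.
print(fig8.solution_type())
print(fig8.volume())               # approximately 2.0298832...

print(trefoil.solution_type())     # not geometric: the trefoil is not hyperbolic
\end{verbatim}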
The next four chapters of this book concern the problem of determining a hyperbolic metric on a knot complement. We will step carefully through the necessary definitions and procedures, using Thurston's decomposition of the figure-8 knot complement as an example. This will give our first potential method to find a hyperbolic metric.
Chapters~\ref{Chap:Margulis} and~\ref{Chap:CompletionDehnFilling} give additional methods and tools from hyperbolic geometry to find or deform a hyperbolic metric. These first six chapters form the foundation required to discuss hyperbolic geometry and knots in more detail.
Of course, these chapters require some work. The fact that software exists that can compute hyperbolic geometry of knots raises the question: why work through such computations by hand at all? Why not just work with the computer? There are many reasons, related to additional open problems. One reason is the next problem.
\subsection{The problem of determining geometry for families of knots.}
A computer program computes hyperbolic geometry for one knot at a time, or for a finite number of knots. But what can be said about infinite families of knots? For example, how does one determine the hyperbolic geometry of knots with descriptions given by infinite classes of diagrams? If two knots in a family are ``similar'' is their geometry also similar?
Potential answers to such questions seem to depend very heavily on the family of knots given. For example, for fixed $c$, it does not seem to be the case that the (finite) family of knots with crossing number $c$ has very similar hyperbolic geometry.
On the other hand, certain infinite families of knots do exist with very similar hyperbolic geometry, and others at least seem to have geometry that reflects properties of the diagrams. We will discuss such knots and their properties, for example in chapters~\ref{Chap:TwistKnots}, \ref{Chap:TwoBridge}, and~\ref{Chap:Alternating}, with careful proofs. For now, we will present a definition of one such family.
\begin{definition}\label{Def:Bigon}
A \emph{bigon}\index{bigon} is a region of a graph bounded by exactly two edges and exactly two vertices.
\end{definition}
For example, \reffig{TwistRegion} shows several bigons connected end-to-end in a portion of a diagram graph of a knot.
\begin{figure}[h]
\includegraphics{Figures/Ch00_KnotTheory/F0-09-TwReg}
\caption{A twist region of a diagram}
\label{Fig:TwistRegion}
\end{figure}
\begin{definition}\label{Def:TwistRegion}
A \emph{twist region}\index{twist region} of a diagram of a knot is a maximal portion of the knot diagram where two strands twist around each other, as in \reffig{TwistRegion}.
More precisely, recall that a diagram of a knot is a 4-valent graph with over/under crossing information at each vertex. A twist region is a string of bigon\index{bigon} regions in the diagram graph, arranged end-to-end at their vertices, which is maximal in the sense that there are no additional bigon regions meeting the vertices on either end. A single crossing adjacent to no bigons is also a twist region. We will further restrict so that all twist regions are alternating, meaning crossings alternate over and under while following a strand of the twist region. If not, the second Reidemeister move\index{Reidemeister move} applied to the diagram removes two crossings from the twist region.
\end{definition}
The condition that twist regions be maximal ensures that there is only one way to put together exactly two twist regions in a diagram.
\begin{definition}\label{Def:TwistKnot}
The \emph{twist knot} $J(2,n)$\index{twist knot} is the knot with a diagram consisting of exactly two twist regions, one of which contains two crossings. The other twist region contains $n\in{\mathbb{Z}}$ crossings. The direction of crossing depends on the sign of $n$.
\end{definition}
Twist knots $J(2,2)$, $J(2,3)$, $J(2,4)$, and $J(2,5)$ are shown in \reffig{TwistKnots}.
\begin{figure}[h]
\includegraphics{Figures/Ch00_KnotTheory/F0-10-TwKnot}
\caption{Twist knots $J(2,2)$ (the figure-8 knot), $J(2,3)$ (the $5_2$ knot), $J(2,4)$ (the $6_1$ or Stevedore knot), and $J(2,5)$}
\label{Fig:TwistKnots}
\end{figure}
The family of twist knots $J(2,n)$ has very nice hyperbolic geometry, which we discuss in \refchap{TwistKnots}. In particular, as $n$ approaches infinity, we will see that the hyperbolic geometry of twist knot complements limits, in a precise sense, to the hyperbolic geometry of the Whitehead link complement; the Whitehead link is shown in \reffig{Whitehead0}.\index{Whitehead link}
\begin{figure}[h]
\includegraphics{Figures/Ch00_KnotTheory/F0-11-Whiteh}
\caption{Two diagrams of the Whitehead link.\index{Whitehead link}}
\label{Fig:Whitehead0}
\end{figure}
More generally, any family of knots containing higher and higher numbers of crossings in a twist region will have complements converging to a link with a simple circle encircling that twist region. Knots with high numbers of crossings in twist regions are called \emph{highly twisted}.\index{highly twisted} Again these are discussed in \refchap{TwistKnots}.
Given a diagram of a link, we can combine twist regions by performing a sequence of moves on the diagram called \emph{flypes}.
\begin{definition}\label{Def:Flype}
Let $\gamma$ be a simple closed curve meeting the diagram of $K$ transversely exactly four times away from crossings, with two intersections adjacent to a crossing on the outside of $\gamma$. A \emph{flype}\index{flype} is a move on the diagram that rotates the region inside $\gamma$ by $180^\circ$, moving the crossing adjacent to $\gamma$ to become a crossing adjacent to $\gamma$ but between the opposite two strands.
See \reffig{Flype0}.
\end{definition}
\begin{figure}[h]
\includegraphics{Figures/Ch00_KnotTheory/F0-12-Flype}
\caption{A flype.}
\label{Fig:Flype0}
\end{figure}
Now, suppose a simple closed curve $\gamma$ in the plane of projection meets a diagram transversely exactly four times away from crossings, and suppose also that the curve is adjacent to crossings on both sides. Then we can perform a flype to move one of the crossings to the opposite side of the curve, to form a bigon.\index{bigon} If the bigon is not alternating, remove both crossings, producing a diagram with fewer crossings. Otherwise, there are two cases. Either the curve $\gamma$ encloses only bigons on one side to begin with, and the flype produces a diagram that is unchanged, or the flype has moved a crossing out of one twist region, on one side of $\gamma$, into a distinct twist region on the other side of $\gamma$. Performing the same flype a finite number of times will move all crossings in the twist region on one side of $\gamma$ into the twist region on the other side, thus reducing the number of twist regions of the diagram. Thus by performing a finite number of flypes, we obtain a diagram with a minimal number of twist regions. Such a diagram is called \emph{twist-reduced}.\index{twist-reduced}
Every knot has a twist-reduced diagram with some number of twist regions. On the other hand, for a fixed positive integer $T$, there are only finitely many ways of combining twist regions to form a twist-reduced diagram with $T$ twist regions. The collection of twist-reduced diagrams with $T$ twist regions forms an infinite family of diagrams. Two highly twisted\index{highly twisted} diagrams with the same pattern of twist regions will have similar hyperbolic geometry, in ways that can be quantified. Thus rather than classifying knots by crossing number, from a geometric perspective it may make more sense to classify knots by number of twist regions in a twist-reduced diagram, or \emph{twist-number}.\index{twist-number} This brings us to another (broad and vaguely-worded) problem.
\subsection{The problem of enumerating knots by geometry}
Enumerating knots by twist region may make more geometric sense than enumerating by crossing number, because highly twisted\index{highly twisted} knots have diagrams that relate well to their geometry, in a sense that will be made precise in \refchap{TwistKnots}. Given any knot, is there always a diagram that encodes hyperbolic geometry?
Schubert considered a family of knots in 1956~\cite{Schubert}. He called the knots \emph{2-bridge knots}\index{2-bridge knot or link}. They can be described diagrammatically by taking four parallel strands, and twisting pairs of the strands into sequences of twist regions, then capping off either end with two ``bridges.'' A general form of such a diagram is shown in \reffig{2BridgeDiagram0}; see also \refchap{TwoBridge}.
\begin{figure}[h]
\import{Figures/Ch00_KnotTheory/}{F0-13-2BDia.eps_tex}
\caption{A general form of a 2-bridge knot.\index{2-bridge knot or link}}
\label{Fig:2BridgeDiagram0}
\end{figure}
Although Schubert's work pre-dates the first work on the hyperbolic geometry of knots by nearly two decades, his 2-bridge knots\index{2-bridge knot or link} turn out to be very amenable to hyperbolic geometry techniques. We will see early on in this book that any knot exterior $S^3-N(K)$ can be decomposed into a collection of truncated tetrahedra.\index{truncated tetrahedron} Equivalently, $S^3-K$ is formed by gluing tetrahedra whose vertices have been removed. This is called an \emph{ideal triangulation}\index{ideal triangulation, definition} of the knot exterior, or sometimes simply a \emph{triangulation}.\index{triangulation, definition}
In the case of 2-bridge knots,\index{2-bridge knot or link} we will see that a triangulation of the knot complement can be read easily off the diagram. Not only that, we will see in \refchap{TwoBridge} that the edges and faces of the triangulation can be made totally geodesic under the hyperbolic metric, and the tetrahedra can be straightened simultaneously to be convex, with piecewise geodesic boundaries. Thus the combinatorics of the diagram of a 2-bridge knot gives a combinatorial method of describing the geometry of the 2-bridge knot.\index{2-bridge knot or link} This is very powerful.
It would be great to be able to extend these techniques to all knots, and some progress has been made with applications to other families, such as $n$-bridge knots for higher $n$. However, few families seem to be quite as nice as 2-bridge knots.\index{2-bridge knot or link}
There is still much ongoing work on triangulating knot exteriors and determining geometric properties of triangulations. We will discuss some of the techniques and applications in \refchap{AngleStruct}.
We have mentioned above that any knot exterior can be triangulated. In fact, any 3-manifold with torus boundary components can be decomposed into truncated tetrahedra. When the tetrahedra are convex hyperbolic tetrahedra, we say the triangulation is \emph{geometric}.\index{geometric triangulation} The software SnapPy has a census of orientable manifolds built up of at most nine geometric tetrahedra~\cite{SnapPy}. Some of these are knot complements.
This leads to a new way of classifying hyperbolic knots: by the number of geometric tetrahedra required to triangulate their exterior. This method of enumerating knots has been employed in~\cite{CallahanDeanWeeks, ckp, ckm}.
\begin{figure}[h]
\includegraphics{Figures/Ch00_KnotTheory/F0-14-CDW}
\caption{The seven simplest hyperbolic knots, built of at most four geometric tetrahedra.}
\label{Fig:SimplestHyp}
\end{figure}
To date, 502 hyperbolic knots, built of at most eight geometric tetrahedra, have been classified. The diagrams of these knots often have large numbers of crossings. The knots built of at most four tetrahedra are shown in \reffig{SimplestHyp}.
Classifying knots by triangulations of their exteriors seems to be more difficult than classifying them by diagrams. This is because, given a triangulation of a 3-manifold with torus boundary, it is not obvious that the underlying space is a knot complement for a knot $K$ in $S^3$. We will discuss some techniques to detect whether such a manifold is a knot complement in \refchap{Essential}.
\subsection{The problem of finding geometric diagrams}
Twist knots and 2-bridge knots\index{2-bridge knot or link} have standard diagrams that encode a great deal of information about the geometry of the knot. Does every knot have such a diagram? (Probably not.) Does every knot have a diagram from which we may read some geometric information?
Alternating knots are another family of knots that seem to be amenable to hyperbolic geometric techniques.
\begin{definition}\label{Def:Alternating}
An \emph{alternating diagram}\index{alternating diagram!definition}\index{alternating diagram} is a diagram of a knot or link that has an orientation such that, when following the knot in the direction of the orientation, the crossings alternate between over and under. An \emph{alternating knot or link}\index{alternating knot or link}\index{alternating knot or link!definition} is a knot or link that has an alternating diagram.
\end{definition}
We will see in the exercises in \refchap{Fig8Decomp} that alternating knot complements decompose into pieces with the same combinatorics of the diagram. In chapters~\ref{Chap:Alternating}, \ref{Chap:Quasifuchsian}, and~\ref{Chap:Volume} we will use this decomposition to determine some geometric information on the knot complement.
How useful is this work broadly? All knots with at most seven crossings have alternating diagrams.\index{alternating diagram} Tait began his work~\cite{Tait:SciPapers} by assuming diagrams were alternating (although he did publish diagrams of eight- and ten-crossing non-alternating examples in 1877).
However, the proportion of alternating knots in diagrams enumerated by crossing number rapidly drops to zero~\cite{SundbergThistlethwaite, Thistlethwaite:Tangles}. As for knots enumerated by geometric triangulations,\index{geometric triangulation} non-alternating examples seem to be even more common; a non-alternating example appears as the second knot on the list in \reffig{SimplestHyp}. Thus unfortunately, alternating knots\index{alternating knot or link} and links are not very common.
An open research question is, how many of the techniques presented in these chapters for determining geometry of alternating links generalize to other knots and links? There has been much work in recent years in extending this work to other families of knots, and some success. We are far from using such techniques to find hyperbolic geometry of all knots, though.
\subsection{The problem of determining geometric invariants}
One way of distinguishing knots is to compute invariants for each of them. If the invariants disagree, then the knots cannot be equivalent.
Several knot invariants arise classically, such as the crossing number that we encountered above. Many additional knot invariants arise through geometry. One aim of this book is to discuss such invariants, and give tools to calculate them.
One of the most straightforward knot invariants that arises in geometry is the volume of a knot. We will show in \refchap{Margulis} that any knot complement that admits a hyperbolic structure has finite volume. Thus volumes of knots give knot invariants.
For those knots whose diagrams are particularly amenable to geometric techniques, such as twist knots, 2-bridge knots,\index{2-bridge knot or link} and alternating knots,\index{alternating knot or link} there are known methods to estimate volume using the combinatorics of the diagram. This is discussed along the way, but especially in \refchap{Volume}, where we bring to bear several tools in geometry to give two-sided bounds on volumes.
How powerful is volume as a knot invariant? It can be easy to calculate numerically, using the software SnapPy~\cite{SnapPy}, for example. Such computations can be rigorously verified to lie in a fixed error range using interval arithmetic, as in~\cite{HIKMOT}. Thus computing volume is a useful tool for distinguishing knots with distinct volume. However, there are many distinct knots that cannot be distinguished by volume; they have the same volume. We give some methods of constructing such knots and links in \refchap{Quasifuchsian}.
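To make the comparison concrete, the following sketch shows one way two knots might be compared by volume, again assuming SnapPy is installed; the names \texttt{5\_2} and \texttt{6\_1} are standard table names, and the tolerance used is an ad hoc choice. A rigorous comparison would use verified interval arithmetic as in~\cite{HIKMOT}.
\begin{verbatim}
# Comparing two knots by (numerically computed) volume, assuming SnapPy
# is installed.  The names '5_2' and '6_1' are standard table names.
import snappy

v1 = snappy.Manifold('5_2').volume()
v2 = snappy.Manifold('6_1').volume()
print(v1, v2)

# An ad hoc tolerance; a rigorous version would use interval arithmetic.
if abs(v1 - v2) > 1e-6:
    print("The volumes differ, so the knots are distinct.")
else:
    print("Numerically equal volumes do not by themselves prove equivalence.")
\end{verbatim}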
Then, is there a better geometric knot invariant than volume to distinguish knots? In \refchap{Canonical}, we describe the \emph{canonical decomposition}\index{canonical decomposition} of a hyperbolic knot complement. This is a decomposition consisting of convex polyhedra. We will show that when two knots have the same canonical decomposition, they must necessarily have homeomorphic complements, and thus by the Gordon--Luecke theorem, they must be equivalent (up to reflection). Thus the canonical decomposition is a complete invariant for hyperbolic knots. Unfortunately, it is not easy to compute in general, and provable forms of canonical decompositions are only known for a few infinite families of knots, including 2-bridge knots~\cite{Gueritaud:thesis}.\index{2-bridge knot or link} Canonical decompositions of alternating knots\index{alternating knot or link} are still unknown in general, for example.
Finally, we discuss very briefly one polynomial invariant.
In most standard books on knot theory, there will be chapters on polynomial invariants, particularly the Alexander polynomial and the Jones polynomial. We will not treat such polynomials here; they arise from techniques that do not use hyperbolic geometry. There is one polynomial invariant of knots that depends heavily on hyperbolic geometry, however. This is the $A$-polynomial.\index{$A$-polynomial} We devote \refchap{Character} to a discussion of the $A$-polynomial, its definition and computation for a few examples. We will see that it relates to hyperbolic structures on a knot complement and the deformations of such structures.
\subsection{The problem of relating geometric invariants to other invariants}
What of the invariants that are being omitted from this book? We mentioned above Alexander and Jones polynomials. There are also more modern algebraic knot invariants, such as Khovanov homology and Floer homologies, and quantum invariants such as colored Jones polynomials.
Many open problems in knot theory, driving much of the ongoing research in the field, concern relating invariants of knots arising from other fields of mathematics to hyperbolic geometry and hyperbolic knot invariants. We will not discuss in detail these open problems, because defining non-hyperbolic invariants will take us too far afield. However, one motivating factor for writing this book was to help mathematicians, particularly students, get up to speed with their hyperbolic geometry, in order to investigate the relations of geometry to other invariants in knot theory.
\section{Exercises}
\begin{exercise}
Find a sequence of isotopy moves of the diagram of the Goeritz knot, \reffig{Goeritz}, that reduces it to the standard diagram of the unknot with no crossings.
\end{exercise}
\begin{exercise}
Download and install the software SnapPy~\cite{SnapPy}. Use it to sketch diagrams of a few knots, and determine whether the knot is hyperbolic. Do this for at least one hyperbolic knot and at least one non-hyperbolic knot.
\end{exercise}
\begin{exercise}
Convince yourself by drawing several examples that every 4-valent planar graph can be assigned over/under crossing information at each vertex to obtain an alternating knot.\index{alternating knot or link} Now try to prove this fact. (This may require some graph theory.)
\end{exercise}
\begin{exercise}
Show that a connected sum\index{connected sum} of two knots is always a satellite knot.\index{satellite knot}
\end{exercise}
\part{Foundations of Hyperbolic Structures}\label{Part:Foundations}
\chapter{Decomposition of the Figure-8 Knot}\label{Chap:Fig8Decomp}
\blfootnote{Jessica S. Purcell, Hyperbolic Knot Theory}
In this chapter, we begin developing tools to work with knots and links and the 3-manifolds they define. We give a geometric method, explained carefully by example, to decompose a knot or link complement into simple pieces. The methods here are an introduction to topological techniques in 3-manifold geometry and topology, and an introduction to some of the tools used in the field.
One goal of this chapter is to present a method that will allow us to pass from a knot or link \emph{diagram} to a description of the knot or link \emph{complement}. That is, we start with a 4-valent graph describing a knot or link $K$, and obtain a mathematically rigorous decomposition of the 3-manifold $S^3-K$ into simple 3-dimensional pieces that will be useful for applying tools from geometry and 3-manifold topology.
\section{Polyhedra}
Sometimes it is easier to study manifolds, including knot complements, if we split them into smaller, simpler pieces, for example 3-balls. We are going to decompose the figure-8 knot complement into two carefully marked 3-balls, namely ideal polyhedra. The diagram of the figure-8 knot that we use is shown in \reffig{Fig8Diagram}. The decomposition we describe appears in Thurston's notes~\cite{thurston}, and with a little more explanation in~\cite{thurston:book}. The procedure has been generalized to all link complements, for example in~\cite{menasco:links}. This work is essentially what we present below in the text and in exercises.
\begin{figure}[h]
\includegraphics{Figures/Ch01_Fig8Decomp/F1-01-Fig8Di}
\caption{A diagram of the figure-8 knot.}
\label{Fig:Fig8Diagram}
\end{figure}
\begin{definition}\label{Def:polyhedron}
A \emph{polyhedron}\index{polyhedron, definition} is a closed 3-ball whose boundary is labeled with a finite graph, containing a finite number of vertices and edges, so that complementary regions, which are called \emph{faces},\index{face} are simply connected.
An \emph{ideal polyhedron}\index{ideal polyhedron, definition} is a polyhedron with all vertices removed. That is, to form an ideal polyhedron, start with a regular polyhedron and remove the points corresponding to vertices.
\end{definition}
We will cut $S^3 - K$ into two ideal polyhedra. We will then have a description of $S^3 - K$ as a gluing of two ideal polyhedra. That is, given a description of the polyhedra, and gluing information on the faces of the polyhedra, we may reconstruct the knot complement $S^3- K$. Although we use the example of the figure-8 knot, in the exercises, you will walk through the techniques below to determine decompositions of other knot complements into ideal polyhedra, and to generalize to all knots.
\subsection{Overview}
Start with a diagram of the knot. There will be two polyhedra in our decomposition. These can be visualized as two balloons: One balloon expands above the diagram, and one balloon expands below the diagram. As the balloons continue expanding, they will bump into each other in the regions cut out by the graph of the diagram. Label these regions. In \reffig{Faces}, the regions are labeled $A$, $B$, $C$, $D$, $E$, and $F$. These will correspond to faces of the polyhedra.
\begin{figure}[h]
\import{Figures/Ch01_Fig8Decomp/}{F1-02-F8Face.eps_tex}
\caption{Faces for the figure-8 knot complement.}
\label{Fig:Faces}
\end{figure}
The faces meet up in edges. There is one edge for each crossing. It runs vertically from the knot at the top of the crossing to the knot at the bottom (or the other way around). The balloon expands until faces meet at edges. \Reffig{3dFolded} shows how the top balloon would expand at a crossing. The edge is drawn as an arrow from the top of the crossing to the bottom. Faces labeled $T$ and $U$ meet across the edge. Rotating the picture $180^\circ$ about the edge, we would see an identical picture with $S$ meeting $V$.
\begin{figure}[h]
\import{Figures/Ch01_Fig8Decomp/}{F1-03-CrFold.eps_tex}
\caption{The knot runs along diagonals. Faces labeled $U$ and $T$ meet
at the edge shown, marked by an arrow.}
\label{Fig:3dFolded}
\end{figure}
It may be helpful to examine the meeting of faces at an edge using a 3-dimensional model. Henry Segerman has come up with a paper model to illustrate the phenomenon of \reffig{3dFolded}. Start with a sheet of paper labeled as in \reffig{3dEdge}. Cut out the shaded square in the middle. Now fold the paper until it looks like that in \reffig{3dFolded}. By rotating the paper model,
we can see how faces meet up.
\begin{figure}
\import{Figures/Ch01_Fig8Decomp/}{F1-04-CModel.eps_tex}
\caption{Cut out the shaded square. Start with a pair of parallel lines. Fold the thick part of the line in a direction opposite that of the dashed part of the line. Fold parallel thick and dashed lines in opposite directions. Correct folding results in a model that looks like \reffig{3dFolded}.}
\label{Fig:3dEdge}
\end{figure}
Stringing crossings such as this one together, we obtain the complete polyhedral decomposition of the knot complement. This is the geometric intuition behind the polyhedral expansion. We now explain a combinatorial method to describe the polyhedra.
\subsection{Step 1} Sketch faces and edges into the diagram.
Recall a diagram is a 4-valent graph lying on a plane, the plane of projection. The regions on the plane of projection that are cut out by the graph will be the faces, including the outermost unbounded region of the plane of projection. We start by labeling these, as in \reffig{Faces}.
Edges come from arcs that connect the two strands of the diagram at a crossing. These are called \emph{crossing arcs}.\index{crossing arc} For ease of explanation, we are going to draw each edge four times, as follows. Shown on the left of \reffig{Edges} is a single edge corresponding to a crossing arc. Note that the edge is ambient isotopic\index{ambient isotopic} in $S^3$ to the three additional edges shown on the right in \reffig{Edges}.
\begin{figure}
\includegraphics{Figures/Ch01_Fig8Decomp/F1-05-Edge}
\caption{A single edge.}
\label{Fig:Edges}
\end{figure}
The reason for sketching each edge four times is that it allows us to visualize easily which edges bound the faces we have already labeled. In \reffig{Fig8Edges}, we have drawn four copies of each of the four edges we get from crossing arcs of the diagram of the figure-8 knot. Note that the face labeled $A$, for example, will be bordered by three edges, one with two tick marks, one with a single tick mark, and one with no tick marks.
\begin{figure}
\import{Figures/Ch01_Fig8Decomp/}{F1-06-8Edge.eps_tex}
\caption{Edges of the figure-8 knot complement.}
\label{Fig:Fig8Edges}
\end{figure}
\begin{remark}
Orientations on the edges can be chosen to run in either direction; that is, arrows on the edges can run from overcrossing to undercrossing or vice versa, as long as we are consistent with orientations corresponding to the same edge. We have chosen the orientations in \reffig{Fig8Edges} to simplify a later step, and to match a figure in \refchap{GluingCompleteness}. The opposite choice for any edge is also fine.
\end{remark}
\subsection{Step 2} Shrink the knot to ideal vertices on the top polyhedron.
Now we come to the reason for using \emph{ideal} polyhedra, rather than regular polyhedra. Notice that the edges stretch from a part of the knot to a part of the knot. However, the manifold we are trying to model is the knot complement, $S^3 - K$. Therefore, the knot $K$ does not exist in the manifold. An edge with its two vertices on $K$ must necessarily be an ideal edge; that is, its vertices are not contained in the manifold $S^3- K$.
Since the knot is \emph{not} part of the manifold, we will shrink strands of the knot to ideal vertices. That is, retract each knot strand to a single point.
This may cause some confusion at first, because the strand of the knot is not homeomorphic to a single point. However, we are considering the \emph{complement} of the strand. The complement of the strand on the boundary of the ball is homeomorphic to the complement of a single point on the boundary of the ball, so we replace strands by ideal vertices (single removed points).
Focus first on the polyhedron on top. Each component of the knot we ``see'' from inside the top polyhedron will be shrunk to a single ideal vertex. These visible knot components correspond to sequences of overcrossings of the diagram. Compare to \reffig{3dFolded} --- note that at an undercrossing, the component of the knot ends in an edge, but at an overcrossing the knot continues on. Moreover, note that at an overcrossing, the knot passes the same edge twice, once on each side.
In terms of the four copies of the edge in \reffig{Edges}, when we consider the polyhedron on top, we may identify the two edges which are isotopic along an overstrand, but not those isotopic along understrands. See \reffig{8Top}.
\begin{figure}[h]
\import{Figures/Ch01_Fig8Decomp/}{F1-07-8Top.eps_tex}
\caption{Isotopic edges in top polyhedron identified.}
\label{Fig:8Top}
\end{figure}
Shrink each overstrand to a single ideal vertex. The result is a pattern of faces, edges, and ideal vertices for the top polyhedron, shown in \reffig{8TopPoly}. Notice that the face $D$ is a disk, containing the point at infinity.
\begin{figure}[h]
\import{Figures/Ch01_Fig8Decomp/}{F1-08-8TopP.eps_tex}
\caption{Top polyhedron, viewed from the inside.}
\label{Fig:8TopPoly}
\end{figure}
\subsection{Step 3} Shrink the knot to ideal vertices for the bottom polyhedron.
\vspace{.1in}
Notice that underneath the knot, the picture of faces, edges, and vertices will be slightly different. In particular, when finding the top polyhedron, we collapsed overstrands to a single ideal vertex. When you put your head underneath the knot, what appear as overstrands from below will appear as understrands on the usual knot diagram.
One way to see this difference is to take the 3-dimensional
model constructed in \reffig{3dEdge}. \Reffig{3dFolded} shows the view of the faces meeting at an edge from the top. If you turn the model over to the opposite side, you will see how the faces meet underneath. \Reffig{3dBottom}
illustrates this. Note $U$ now meets $V$, and $S$ meets $T$.
\begin{figure}[h]
\import{Figures/Ch01_Fig8Decomp/}{F1-09-CrFldB.eps_tex}
\caption{3-dimensional model, opposite side as in \reffig{3dFolded}. Now faces $V$ and $U$ meet along an edge.}
\label{Fig:3dBottom}
\end{figure}
In terms of the combinatorics, edges of \reffig{Edges} that are isotopic by sliding an endpoint along an understrand are identified to each other on the bottom polyhedron, but edges only isotopic by sliding an endpoint along an overstrand are not identified.
As above, collapse each knot strand corresponding to an understrand to a single ideal vertex. The result is \reffig{8BotPoly}.
\begin{figure}[h]
\import{Figures/Ch01_Fig8Decomp/}{F1-10-8BotP.eps_tex}
\caption{Bottom polyhedron, from the outside.}
\label{Fig:8BotPoly}
\end{figure}
\vspace{.1in}
One thing to notice: we sketched the top polyhedron with our heads inside the ball on top, looking out. If we move the face $D$ away from the point at infinity, then it wraps \emph{above} the other faces shown in \reffig{8TopPoly}.
On the other hand, we sketched the bottom polyhedron with our heads outside the ball on the bottom. If we move the face $D$ away from the point at infinity, it wraps \emph{below} the other faces shown in \reffig{8BotPoly}.
\subsection{Rebuilding the knot complement from the polyhedra} Figures~\ref{Fig:8TopPoly} and~\ref{Fig:8BotPoly} show two ideal polyhedra that we obtained by studying the figure-8 knot complement. We claim that they glue to give the figure-8 knot complement. That is, attach face $A$ on the bottom polyhedron to the face labeled $A$ on the top polyhedron, ensuring that the edges bordering face $A$ match up. Similarly for the other faces.
This process of gluing faces and edges gives exactly the complement of the knot. By construction, faces glue to give the faces illustrated in \reffig{Fig8Edges}, and edges glue to give the edges there, except that when we have finished, all four isotopic copies of each edge shown in that figure have been glued together.
\section{Generalizing: Exercises}
This polyhedral decomposition works for any knot or link diagram, to give a polyhedral decomposition of its complement.
\begin{exercise}\label{Ex:WarmUp}
As a warm-up exercise, determine the polyhedral decomposition for one (or more) of the knots shown in \reffig{SimpleKnots}. Sketch both top and bottom polyhedra.
Your solution should consist of two ideal polyhedra, i.e.\ marked graphs on the surface of a ball, with faces and edges marked according to the gluing pattern. For example, the complete diagrams in Figures~\ref{Fig:8TopPoly} and~\ref{Fig:8BotPoly} form the solution for the figure-8 knot.
\end{exercise}
\begin{figure}[h]
\begin{tabular}{ccccc}
\includegraphics{Figures/Ch01_Fig8Decomp/F1-11a-Tref} & \hspace{.2in} &
\includegraphics{Figures/Ch01_Fig8Decomp/F1-11b-K52} & \hspace{.2in} &
\includegraphics{Figures/Ch01_Fig8Decomp/F1-11c-K63} \\
(a) Trefoil. & \hspace{.2in} & (b) The $5_2$ knot. &
\hspace{.2in} & (c) The $6_3$ knot. \\
\end{tabular}
\caption{Three examples of knots.}
\label{Fig:SimpleKnots}
\end{figure}
\begin{exercise}
The examples of knots we have encountered so far are all alternating,\index{alternating knot or link} as in \refdef{Alternating}. The diagram of the knot $8_{19}$ in \reffig{8-19} is not alternating. In fact, the knot $8_{19}$ has no alternating diagram.\index{alternating diagram!example with no alternating diagram}
\begin{figure}[h]
\includegraphics{Figures/Ch01_Fig8Decomp/F1-12-K819}
\caption{The knot $8_{19}$, which has no alternating diagram.\index{alternating diagram}\index{alternating diagram!example with no alternating diagram}}
\label{Fig:8-19}
\end{figure}
Determine the polyhedral decomposition for the given diagram of the knot $8_{19}$. Note: as above, many ideal vertices are obtained by shrinking overstrands to a point. However, you will have to use, for example, \reffig{3dFolded} to determine what happens between two understrands.
\end{exercise}
\begin{exercise}
Recall that the \emph{valence}\index{valence} of a vertex in a graph is the number of edges that meet that vertex. The valence of an ideal vertex is defined similarly.
\begin{enumerate}
\item[(a)] If a knot diagram is alternating,\index{alternating knot or link}\index{alternating knot or link!polyhedral decomposition} we obtain a very special ideal polyhedron. In particular, all ideal vertices will have the same valence. What is it? Show that the ideal vertices for an alternating knot all have this valence.
\item[(b)] What are the possible valences of ideal vertices in general, i.e.\ for non-alternating knots? For which integers $n\geq 0$ is there a knot diagram whose polyhedral decomposition yields an ideal vertex of valence $n$? Explain your answer, with (portions of) knot diagrams.
\end{enumerate}
\end{exercise}
\begin{exercise}
In the polyhedral decomposition for alternating knots,\index{alternating knot or link!polyhedral decomposition} the polyhedra are given by simply labeling each ball with the projection graph of the knot and declaring each vertex to be ideal.
\begin{enumerate}
\item Prove this statement for any alternating knot. That is, prove that the decomposition gives polyhedra whose edges match the projection graph of the diagram.
\item Show that for non-alternating knots, this is false. That is, the decomposition does not give polyhedra whose edges match the projection graph of the diagram.
\end{enumerate}
\end{exercise}
\begin{exercise}
A graph admits a \emph{checkerboard coloring}\index{checkerboard coloring} if all the complementary regions can be colored either white or shaded, with white faces meeting shaded faces across the edges. Any 4-valent graph can be checkerboard colored, particularly projection graphs of knot diagrams.
In the case of an alternating knot,\index{alternating knot or link!polyhedral decomposition} faces are identified from the top polyhedron to the identical face on the bottom polyhedron, and the identification is by a \emph{gear rotation}:\index{gear rotation} white faces on the top are rotated once counter-clockwise and then glued to the corresponding face on the bottom; shaded faces on the top are rotated once clockwise and then glued. This is shown for the figure-8 knot in \reffig{Gears}. Prove that for the decomposition of any alternating knot, faces are identified by a gear rotation.
\end{exercise}
\begin{figure}[h]
\includegraphics{Figures/Ch01_Fig8Decomp/F1-13-8Check}
\caption{Checkerboard coloring and ``gear rotation'' for the figure-8 knot.}
\label{Fig:Gears}
\end{figure}
\begin{exercise}
The diagrams we have encountered so far are all reduced, as in \refdef{Nugatory}, but we can follow the above procedure for non-reduced diagrams. For example, we can obtain a polyhedral decomposition for diagrams which contain a nugatory crossing\index{nugatory crossing}.
Show that the polyhedral decomposition of a knot diagram will contain a monogon, i.e.\ a face whose boundary is a single edge and a single vertex, if and only if the diagram has a simple nugatory crossing.
\end{exercise}
\begin{exercise}\label{Ex:CollapseBigons}
Recall that a bigon\index{bigon} is a region of a graph bounded by exactly two edges and exactly two vertices. Note that when a bigon appears in our polyhedral decomposition, the two edges of the bigon must be isotopic to each other. Hence, we sometimes will remove bigon faces from the polyhedral decomposition, identifying their two edges.
\begin{quote}
Let bigons\index{bigon} be bygone. --- William Menasco
\end{quote}
For the figure-8 knot, sketch the two polyhedra we get when bigon faces are removed. How many edges are there in this new, bigon-free decomposition? The resulting polyhedra are well known solids in this case. What are they?
For each of the polyhedra obtained in \refex{WarmUp}, sketch the resulting polyhedra with bigons removed.
\end{exercise}
\begin{exercise}
Suppose we start with an alternating knot\index{alternating knot or link} diagram with at least two crossings, and do the polyhedral decomposition above, collapsing bigons at the last step. What are possible valences of vertices? Sketch the diagram of a single alternating knot that has all possible valences of ideal vertices in its polyhedral decomposition.
What valences of vertices can you get if you don't require the diagram to be alternating but collapse bigons? Can you find 1-valent vertices? For any integer $n>4$, can you find $n$-valent vertices?
\end{exercise}
\chapter{Calculating in Hyperbolic Space}\label{Chap:IntroHyp}
\blfootnote{Jessica S. Purcell, Hyperbolic Knot Theory}
We will need to manipulate objects in 2- and 3-dimensional hyperbolic space. This chapter provides a very brief introduction to the tools that will be needed in the future, the objects that will be studied (lines, triangles, tetrahedra, metric properties), and examples of calculations that will appear.
We will use terminology and calculations from standard elementary Riemannian geometry. The reader who is not as comfortable with Riemannian geometry might find it helpful to follow along in the first few chapters of an introductory Riemannian geometry text, such as do~Carmo \cite[Chapter~1]{docarmo}. We will not provide all the details to all the statements given. The idea is that we want to begin calculating on knot complements and other 3-manifolds immediately, without getting lost early in details. Thus our aim is to provide just enough information here to start calculating in future chapters. Many more details and results can be found in other books, including full books on hyperbolic geometry. Anderson gives a very nice introduction to 2-dimensional hyperbolic geometry~\cite{anderson}. More details in all dimensions appear in Ratcliffe~\cite{ratcliffe}. The book by Marden includes more on groups of isometries of hyperbolic space, including results on infinite volume hyperbolic 3-manifolds~\cite{marden}.
An introduction to hyperbolic geometry that includes a discussion of its visualization is also given by Thurston~\cite{thurston:book}.
\section{Hyperbolic geometry in dimension two}
We start with hyperbolic 2-space, ${\mathbb{H}}^2$.
There are several models of hyperbolic space. Here, we will work with the upper half plane model. In this model, hyperbolic 2-space ${\mathbb{H}}^2$ is defined to be the set of points in the upper half plane:
\[
{\mathbb{H}}^2 = \{ x+ i\,y \in {\mathbb{C}} \mid y>0 \},
\]
equipped with the metric whose first fundamental form is given by
\[
ds^2 = \frac{dx^2 + dy^2}{y^2}.
\]
That is, start with the usual Euclidean metric on ${\mathbb{R}}^2$, whose first fundamental form is $dx^2+dy^2$. To obtain the metric on the hyperbolic plane, rescale the usual Euclidean metric by $1/y$, where $y$ is height in the plane.
Note that a point in ${\mathbb{H}}^2$ can either be thought of as a complex number $x+i\,y \in {\mathbb{C}}$ or as a point $(x,y)\in{\mathbb{R}}^2$. Both perspectives are useful: ${\mathbb{R}}^2$ leads more easily to coordinates and calculations, and ${\mathbb{C}}$ works seamlessly with our definition of isometries below. Changing perspectives does not affect our results, so we will regularly switch between the two without comment.
Our first task is to explore the meaning of the hyperbolic metric, and how it affects measurements.
\subsection{Hyperbolic 2-space and Riemannian geometry}
In this subsection, we briefly review how the metric and the space ${\mathbb{H}}^2$ described above fit into a more general picture of Riemannian geometry. We also describe tools from Riemannian geometry we will use to do calculations. If you are not yet familiar with Riemannian geometry, feel free to skim this section, noting equations \eqref{Eqn:ArcLength}, \eqref{Eqn:VolForm}, and \eqref{Eqn:Area}. This section was primarily written for a student who has seen some Riemannian geometry, but may have difficulty applying abstract concepts of that field to the specific metric of hyperbolic geometry. In the author's experience, a few key equations will be enough to get started.
In Riemannian geometry, a \emph{Riemannian metric}\index{Riemannian metric} on a manifold $M$ is defined to be a correspondence associating to each point $p\in M$ an inner product $\langle\cdot, \cdot\rangle_p$ on the tangent space $T_pM$. This inner product gives us a way of measuring the lengths of vectors tangent to $M$ at $p$, as well as computing areas, angles between curves, etc. The \emph{first fundamental form}\index{first fundamental form} is defined by $\langle v, v\rangle_p$ for $v\in T_pM$.
In our case, the Riemannian manifold we consider is ${\mathbb{H}}^2$, and we have natural local coordinates on the manifold given by $x+i\,y \in {\mathbb{C}}$, or $(x,y)\in{\mathbb{R}}^2$, for $y>0$. We may use these coordinates to describe the Riemannian metric. In particular, at the point $(x,y)\in{\mathbb{H}}^2$, a tangent vector $v\in T_{(x,y)}{\mathbb{H}}^2$ can also be described by coordinates $v = v_x \frac{\partial}{\partial x} + v_y\frac{\partial}{\partial y}$, and we write it as a vector
\[ v = \left( \begin{array}{c} v_x \\ v_y \end{array} \right). \]
Then the metric on ${\mathbb{H}}^2$ is given by a matrix
\[ \langle v, w \rangle = (v_x, v_y) \mat{1/y^2 & 0 \\ 0 & 1/y^2} \left(\begin{array}{c} w_x \\ w_y \end{array} \right). \]
One of the simplest geometric measurements we can compute using the definition of the metric is the arc length of a curve. If $\gamma(t)$ is a (differentiable) curve in ${\mathbb{H}}^2$, for $t \in [a,b]$, then we obtain a tangent vector $\gamma'(t)$ at each point of $\gamma(t)$ in ${\mathbb{H}}^2$, called the velocity vector. The \emph{arc length}\index{arc length} of $\gamma$ for $t\in[a,b]$ is defined to be
\[ |\gamma| = \int_a^b \sqrt{\langle \gamma'(s), \gamma'(s) \rangle}\, ds. \]
When considering ${\mathbb{H}}^2$, we will have coordinates $\gamma(t)=(\gamma_x(t), \gamma_y(t))$, and $\gamma'(t) = (\gamma'_x(t), \gamma'_y(t))^T$. Thus the arc length will be
\begin{equation}\label{Eqn:ArcLength}
|\gamma| = \int_a^b \sqrt{ (\gamma'_x(s))^2 + (\gamma_y'(s))^2 }\, \frac{1}{\gamma_y(s)}\,ds.
\end{equation}
We will use this formula to compute examples in the next subsection.
Another piece of geometric information we can compute with a metric is the volume of a region, which we typically call ``area'' in two dimensions. In the most general setting, if $R\subset M$ is contained in a coordinate neighborhood of the Riemannian manifold $M$, with coordinates $(x_1, \dots, x_n)$ and metric given by the matrix $g_{ij}$ in these coordinates, then we can compute the volume of $R$ to be
\begin{equation}\label{Eqn:VolForm}
\operatorname{vol}(R) = \int_R d\operatorname{vol} = \int_R \sqrt{\det(g_{ij})}\,dx_1 \dots dx_n.
\end{equation}
The form $d\operatorname{vol}$ is the \emph{volume form}\index{volume form}.
Thus in our setting, with $M={\mathbb{H}}^2$ and metric as above,
\begin{equation}\label{Eqn:Area}
\operatorname{area}(R) = \int_R \,\frac{1}{y^2}\, dx\,dy.
\end{equation}
\subsection{Computing arc lengths and areas}
Now we will use the formulas obtained above to do calculations, in order to better understand the hyperbolic space ${\mathbb{H}}^2$.
\begin{example}\label{Example:HorizontalLine}
Fix a height $h>0$, and consider first a horizontal line segment between points $(0,h) = i\,h$ and $(1,h) = 1+i\,h$ in ${\mathbb{H}}^2$. We may parameterize the line segment by $\gamma(t) = (t, h)$, for $t\in[0,1]$. Using \refeqn{ArcLength}, we find the arc length\index{arc length} of $\gamma$ is $|\gamma| = 1/h$. Note that because $h$ is fixed, the arc length of $\gamma$ is just its usual Euclidean length rescaled by $1/h$. Thus when $h=1$, the length of $\gamma$ is $1$. When $h$ becomes large, the arc length becomes very small. In other words, points with the same height become very close together as their heights increase. On the other hand, as $h$ approaches $0$, the length of $\gamma$ approaches infinity. In fact, points near the real line ${\mathbb{R}}=\{(x,0)\in{\mathbb{R}}^2\}$ can be very far apart.
\end{example}
\begin{example}\label{Example:VerticalLine}
Consider now a vertical line between points $(x,a)$ and $(x,b)$, for $x,a,b$ fixed in ${\mathbb{R}}$, $0<a<b$. Such a line can be parameterized by $\zeta(t) = (x,t)$ for $t\in [a,b]$. So $\zeta'(t) = (0,1)$. Thus its arc length\index{arc length} is given by
\[
|\zeta| = \int_a^b \sqrt{0+1}\,\frac{1}{s}\, ds = \log\left(\frac{b}{a}\right).
\]
If we set $b=1$ and let $a$ approach $0$, note that the arc length\index{arc length} of $\zeta$ gets arbitrarily large, approaching infinity. Similarly setting $a=1$ and letting $b$ approach infinity gives arbitrarily long lengths.
\end{example}
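The two computations above are simple enough to check numerically using \refeqn{ArcLength}. The following sketch requires \texttt{numpy} and \texttt{scipy}; the helper \texttt{hyp\_arc\_length} is our own ad hoc function, not part of any library.
\begin{verbatim}
# Numerical check of the two arc length examples above, using the
# hyperbolic arc length formula.  Requires numpy and scipy.
import numpy as np
from scipy.integrate import quad

def hyp_arc_length(gamma, dgamma, a, b):
    """Arc length in H^2 of gamma(t) = (x(t), y(t)) for t in [a, b]."""
    integrand = lambda t: np.sqrt(dgamma(t)[0]**2 + dgamma(t)[1]**2) / gamma(t)[1]
    return quad(integrand, a, b)[0]

# Horizontal segment at height h from (0, h) to (1, h): expect 1/h.
h = 3.0
print(hyp_arc_length(lambda t: (t, h), lambda t: (1.0, 0.0), 0.0, 1.0))

# Vertical segment from (x, a) to (x, b): expect log(b/a).
a, b = 1.0, 5.0
print(hyp_arc_length(lambda t: (0.0, t), lambda t: (0.0, 1.0), a, b))
print(np.log(b / a))
\end{verbatim}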
The real line ${\mathbb{R}} = \{(x,0)\in {\mathbb{R}}^2\}$ along with the point at infinity $\infty$ play an important role in the geometry of ${\mathbb{H}}^2$, although these points are not contained in ${\mathbb{H}}^2$.
\begin{definition}\label{Def:BdryatInfty}
We call ${\mathbb{R}}\cup\{\infty\}$ the \emph{boundary at infinity}\index{boundary at infinity} for ${\mathbb{H}}^2$. Note it is homeomorphic to a circle $S^1$, and hence is sometimes called the \emph{circle at infinity}\index{circle at infinity}. It is denoted by $S^1_\infty$, $\partial{\mathbb{H}}^2$, and sometimes $\partial_\infty{\mathbb{H}}^2$.
\end{definition}
Areas behave quite differently in hyperbolic space than in Euclidean space.
\begin{example}\label{Example:AreaHorocycle}
In this example, we will compute the area of the region $R$ of ${\mathbb{H}}^2$ that is the intersection of the half-plane lying to the left of the line $x=1$, the half-plane to the right of the line $x=0$, and the plane lying above $y=1$. The region $R$ is shown in \reffig{example-region}.
\begin{figure}
\includegraphics{Figures/Ch02_IntroHyp/F2-01-ExReg}
\caption{The region of \refexamp{AreaHorocycle}}
\label{Fig:example-region}
\end{figure}
Using \refeqn{Area}, we see that the area of the region is given by
\begin{align*}
\operatorname{area}(R) & = \int_R \frac{1}{y^2}\,dx\,dy \\
& = \int_0^1 \int_1^\infty \frac{1}{y^2}\, dy\, dx \\
& = \int_0^1 1\, dx = 1
\end{align*}
This example shows that regions with infinite Euclidean area may have finite hyperbolic area.
\end{example}
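This area can also be checked numerically from \refeqn{Area}. The following sketch (requiring \texttt{numpy} and \texttt{scipy}) evaluates the iterated integral, with the inner integral taken over the unbounded range $1\leq y<\infty$.
\begin{verbatim}
# Numerical check that the region {0 < x < 1, y > 1} has hyperbolic area 1.
# Requires numpy and scipy.
import numpy as np
from scipy.integrate import quad

# Inner integral over y (it equals 1 for every x), then outer integral over x.
inner = lambda x: quad(lambda y: 1.0 / y**2, 1.0, np.inf)[0]
area = quad(inner, 0.0, 1.0)[0]
print(area)   # approximately 1.0
\end{verbatim}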
\subsection{Geodesics and isometries}
Recall that a \emph{geodesic}\index{geodesic} between points $p$ and $q$ is a length minimizing curve between those points. An \emph{infinite geodesic}\index{geodesic!infinite} is a curve $\gamma$ from ${\mathbb{R}}$ to a Riemannian manifold such that for any $s< t\in {\mathbb{R}}$, the curve $\gamma([s,t])$ minimizes the distance between $\gamma(s)$ and $\gamma(t)$.
\begin{theorem}\label{Thm:GeodesicsH2}
The infinite geodesics in ${\mathbb{H}}^2$ consist of vertical straight lines and semi-circles with center on the real line.\qed
\end{theorem}
Note these are exactly the circles and lines in the upper half plane that meet $S^1_\infty$ at right angles. See \reffig{HypGeodesics}. Observe that any two points in the upper half plane lie on a unique such vertical line or semi-circle. Thus a geodesic between points $p$ and $q$ in ${\mathbb{H}}^2$ is a segment of a semi-circle or a vertical straight line. An infinite geodesic can also be viewed as the unique semi-circle or vertical straight line between two points on the boundary at infinity\index{boundary at infinity} of ${\mathbb{H}}^2$. We will typically drop the word ``infinite'' to describe geodesics between points on the boundary at infinity. Thus we use the same word ``geodesic'' to describe both infinite geodesics and bounded arcs, depending on context.
\begin{figure}
\import{Figures/Ch02_IntroHyp/}{F2-02-HypGeo.eps_tex}
\caption{Some geodesics and points in ${\mathbb{H}}^2$.}
\label{Fig:HypGeodesics}
\end{figure}
The proof of \refthm{GeodesicsH2} is left as an exercise in Riemannian geometry. The simplest way to prove the theorem uses coordinates and a bit more Riemannian geometry than we have reviewed so far. The interested reader can work through the details. The fact that these are the geodesics of ${\mathbb{H}}^2$ is all we will need going forward.
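As a quick numerical illustration of \refthm{GeodesicsH2} (a sketch, again relying on \texttt{numpy} and \texttt{scipy}): the points $(-1,1)$ and $(1,1)$ are joined by the horizontal Euclidean segment of hyperbolic length $2$, as in \refexamp{HorizontalLine}, while the semi-circle of radius $\sqrt{2}$ centered at $0$ through the same two points is strictly shorter.
\begin{verbatim}
# Compare the horizontal segment from (-1,1) to (1,1), of hyperbolic length 2,
# with the semi-circle of radius sqrt(2) centered at 0 through the same points.
import numpy as np
from scipy.integrate import quad

# Parameterize the semi-circle as (sqrt(2) cos t, sqrt(2) sin t), t in [pi/4, 3pi/4].
# The speed is sqrt(2) and the height is sqrt(2) sin t, so the integrand is 1/sin t.
length = quad(lambda t: 1.0 / np.sin(t), np.pi / 4, 3 * np.pi / 4)[0]
print(length)   # approximately 1.7627, shorter than 2
\end{verbatim}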
An \emph{isometry}\index{isometry, definition} between Riemannian manifolds $M$ and $N$ is a diffeomorphism $f\from M\to N$ such that
\[
\langle v, w \rangle_p = \langle df_p(v), df_p(w) \rangle_{f(p)} \quad \mbox{ for all } p\in M, v, w \in T_pM.
\]
Isometries preserve lengths, angles, and other geometric information. We are most interested in orientation preserving isometries from hyperbolic space to itself, i.e.\ diffeomorphisms $\phi\from{\mathbb{H}}^2\to{\mathbb{H}}^2$ that preserve the metric and orientation on ${\mathbb{H}}^2$. All such isometries form a group acting on ${\mathbb{H}}^2$. We will assume the following theorem; see also \refex{Reflection}.
\begin{theorem}\label{Thm:IsomH2}
The full group of isometries of ${\mathbb{H}}^2$ is generated by reflections in geodesics in ${\mathbb{H}}^2$.
The group of orientation preserving isometries of ${\mathbb{H}}^2$ is the group of \emph{linear fractional transformations}\index{linear fractional transformation}
\[ z\mapsto \frac{az + b}{cz+d}, \]
where $a,b,c,d\in{\mathbb{R}}$, and $ad-bc > 0$.\qed
\end{theorem}
By dividing $a$, $b$, $c$, and $d$ by $\sqrt{ad-bc}$, the linear fractional transformation is equivalent to an element of $\operatorname{PSL}(2,{\mathbb{R}})$, the group of projective 2 by 2 matrices with real coefficients and determinant $1$. That is, we may view $A\in\operatorname{PSL}(2,{\mathbb{R}})$ as given by a matrix
\[ A = \pm\mat{a&b\\c&d}, \]
where $a$, $b$, $c$, $d \in {\mathbb{R}}$ and $ad-bc=1$. The sign in front reflects the fact that it is \emph{projective}; it is well-defined only up to multiplication by $\pm {\mathrm{Id}}$. On the other hand, $A$ acts on ${\mathbb{H}}^2$ via
\[ Az = \frac{az + b}{cz + d}. \]
Note the action is unaffected when we multiply $a$, $b$, $c$, and $d$ by the same nonzero real constant; thus it is necessary to take projective matrices.
Linear fractional transformations take circles and lines to circles
and lines, so they map geodesics to geodesics. For more information on these transformations, see for example \cite[pp~76--89]{Ahlfors}.
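One can also check numerically that such a transformation preserves hyperbolic arc length. The following sketch (requiring \texttt{numpy} and \texttt{scipy}) applies an arbitrarily chosen transformation with $ad-bc>0$ to an arbitrarily chosen curve in ${\mathbb{H}}^2$ and compares lengths using \refeqn{ArcLength}; the helper \texttt{hyp\_length} and the specific curve are our own choices for illustration.
\begin{verbatim}
# Check numerically that z -> (az+b)/(cz+d) with ad - bc > 0 preserves
# hyperbolic arc length.  Requires numpy and scipy.
import numpy as np
from scipy.integrate import quad

a, b, c, d = 2.0, 1.0, 1.0, 1.0                       # ad - bc = 1 > 0
phi = lambda z: (a * z + b) / (c * z + d)
dphi = lambda z: (a * d - b * c) / (c * z + d) ** 2   # derivative of phi

gamma = lambda t: (1.0 + t) + 1j * (2.0 + np.sin(t))  # a sample curve in H^2
dgamma = lambda t: 1.0 + 1j * np.cos(t)

def hyp_length(curve, dcurve, t0, t1):
    """Hyperbolic arc length of a complex-valued curve(t), t in [t0, t1]."""
    return quad(lambda t: abs(dcurve(t)) / curve(t).imag, t0, t1)[0]

print(hyp_length(gamma, dgamma, 0.0, 1.0))
# The image curve is phi(gamma(t)); its velocity is dphi(gamma(t)) * dgamma(t).
print(hyp_length(lambda t: phi(gamma(t)),
                 lambda t: dphi(gamma(t)) * dgamma(t), 0.0, 1.0))
\end{verbatim}
The two printed lengths agree up to numerical error.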
The following lemma is very useful.
\begin{lemma}\label{Lem:3points}
Given any three distinct points $z_1$, $z_2$, and $z_3$ in $\partial{\mathbb{H}}^2$, there exists an orientation preserving isometry of ${\mathbb{H}}^2$ taking $z_3$ to $\infty$, and taking $\{z_1,z_2\}$ to $\{0,1\}$. It follows that there exists an isometry of ${\mathbb{H}}^2$ taking any three distinct points on $\partial{\mathbb{H}}^2$ to any other three distinct points, with appropriate orientation.
\end{lemma}
\begin{proof}
This is a standard fact of linear fractional transformations. We need to take some care to preserve orientation. If necessary, switch $z_1$ and $z_2$ so that the sequence $z_1,z_2,z_3$ runs in counterclockwise order around $\partial{\mathbb{H}}^2 \cong S^1$.
If none of $z_1$, $z_2$, and $z_3$ are infinity, then a linear fractional transformation sending $z_1$ to $1$, $z_2$ to $0$, and $z_3$ to $\infty$ is given by
\[ z\mapsto \frac{z-z_2}{z-z_3}\frac{z_1-z_3}{z_1-z_2}.\]
Note that the determinant of this transformation is
\[(z_1-z_3)(z_1-z_2)(z_2-z_3).\]
Because the sequence $z_1, z_2, z_3$ is in counterclockwise order, this is positive. Thus it gives the desired orientation preserving isometry.
If $z_1=\infty$, $z_2=\infty$, or $z_3=\infty$, then the isometry is given by
\[
z\mapsto \frac{z-z_2}{z-z_3}, \quad z\mapsto\frac{z_1-z_3}{z-z_3}, \quad z\mapsto\frac{z-z_2}{z_1-z_2}
\]
respectively. One can check that again, because we ensured the sequence $z_1,z_2,z_3$ is in counterclockwise order, the determinant of each transformation is positive.
\end{proof}
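As a quick check of the formula, with points chosen just for the computation, take $z_1 = 1$, $z_2 = 0$, $z_3 = -1$. The transformation above becomes
\[ z \mapsto \frac{z-0}{z-(-1)}\cdot\frac{1-(-1)}{1-0} = \frac{2z}{z+1}, \]
with determinant $(z_1-z_3)(z_1-z_2)(z_2-z_3) = 2 > 0$; it sends $1\mapsto 1$, $0\mapsto 0$, $-1\mapsto\infty$, and $i\mapsto 1+i$, so it does preserve the upper half plane.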
Many metric calculations in ${\mathbb{H}}^2$ can be simplified greatly by applying an appropriate isometry, including the use of \reflem{3points}. For example, the following lemma is easily proved using an isometry.
\begin{lemma}\label{Lem:IntGeodesics}
Two distinct geodesics $\ell_1$ and $\ell_2$ in ${\mathbb{H}}^2$ either
\begin{enumerate}
\item intersect in a single point in the interior of ${\mathbb{H}}^2$,
\item intersect in a single point on the boundary $\partial {\mathbb{H}}^2$, or
\item are completely disjoint in ${\mathbb{H}}^2 \cup \partial {\mathbb{H}}^2$.
\end{enumerate}
In the third case, there is a unique geodesic $\ell_3$ that is perpendicular to both $\ell_1$ and $\ell_2$.
\end{lemma}
\begin{proof}
We may apply an isometry $g$ of ${\mathbb{H}}^2$, taking endpoints of $\ell_1$ to $0$ and $\infty$, and taking one of the endpoints of $\ell_2$ to $1$. The image of the second endpoint of $\ell_2$ under $g$ is then some point $w$ in $\partial{\mathbb{H}}^2 = {\mathbb{R}}\cup\{\infty\}$. Note that $g(\ell_1)$ is the vertical line from $0$ to $\infty$ in ${\mathbb{H}}^2$. The point $w$ determines the image of $g(\ell_2)$.
If $w=0$ or if $w=\infty$, then we are in the second case, and $g(\ell_2)$ is a semi-circle with endpoints $0$ and $1$, or a vertical line from $1$ to $\infty$.
If $w \in {\mathbb{R}}$ is less than zero, then we are in the first case. The two endpoints of $g(\ell_2)$ are separated by the line $g(\ell_1)$, so the geodesics must meet.
Finally, if $w\in{\mathbb{R}}$ is greater than zero, then we are in the third case, and the geodesics are disjoint. One way to see that there is a unique geodesic perpendicular to both is to apply another isometry $h$, taking $\sqrt{w}$ to $0$ and $-\sqrt{w}$ to $\infty$. That is, let $h\from {\mathbb{H}}^2\to {\mathbb{H}}^2$ be given by
\[ h(z) = \frac{z-\sqrt{w}}{z+\sqrt{w}}. \]
Note that $h(0) = -1$, $h(\infty) = 1$, so $h(g(\ell_1))$ is the geodesic that is a semi-circle with endpoints at $-1$ and $1$. Also,
\[ h(1) = \frac{1-\sqrt{w}}{1+\sqrt{w}}\quad \mbox{and} \quad h(w)=\frac{w-\sqrt{w}}{w+\sqrt{w}}=-\frac{1-\sqrt{w}}{1+\sqrt{w}}. \]
So $h(g(\ell_2))$ is the geodesic that is a semi-circle with endpoints $h(1)$ and $-h(1)$. Thus images of both geodesics are semi-circles with center at $0$. The geodesic from $0$ to $\infty$ is therefore perpendicular to both, and it is the unique such geodesic. Set $\ell_3$ to be the image of the line from $0$ to $\infty$ under the composition $g^{-1}\circ h^{-1}$.
\end{proof}
In the previous proof, knowing which isometry $h$ to apply in the last step required a calculation. However, once that isometry was applied, the existence and uniqueness of the geodesic $\ell_3$ was clear.
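For instance, in the notation of the proof of \reflem{IntGeodesics}: once $\ell_1$ has endpoints $0$ and $\infty$ and $\ell_2$ has endpoints $1$ and $w$ with $w>1$, the common perpendicular $\ell_3$ is the semi-circle $|z|=\sqrt{w}$. Indeed, any semi-circle centered at $0$ meets the vertical line $\ell_1$ at a right angle, and $|z|=\sqrt{w}$ meets $\ell_2$, the semi-circle with center $(1+w)/2$ and radius $(w-1)/2$, at a right angle because
\[ \left(\frac{1+w}{2}\right)^2 = \left(\sqrt{w}\right)^2 + \left(\frac{w-1}{2}\right)^2. \]
This matches the proof: the common perpendicular found there is the preimage under $h$ of the vertical line, and $h^{-1}(0)=\sqrt{w}$, $h^{-1}(\infty)=-\sqrt{w}$.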
Computing lengths of geodesics is also simplified by applying isometries.
\begin{example}[Length computation]\label{Example:length}
Suppose you wish to compute the length of a segment, or the distance between two points in ${\mathbb{H}}^2$. One strategy for doing this is to apply an isometry taking the two points to a simpler picture. For example, in \reffig{HypGeodesics}, we may find an isometry taking the geodesic containing points $a$ and $b$ to the vertical geodesic from $0$ to $\infty$. Then under this isometry, the points $a$ and $b$ map to points of the form $(0,t_1)$ and $(0, t_2)$.
In \refexamp{VerticalLine}, we already computed the length of the vertical segment between $(0, t_1)$ and $(0, t_2)$; its length is $\log(t_1/t_2)$ (assuming here that $t_2<t_1$, otherwise take the negative of the log). This gives the distance between $a$ and $b$.
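Concretely, recalling that along a vertical line the hyperbolic line element is $dt/t$, the length of the vertical segment from $(0,t_2)$ to $(0,t_1)$ with $t_2 < t_1$ is
\[ \int_{t_2}^{t_1}\frac{dt}{t} = \log\frac{t_1}{t_2}; \]
for example, the distance from $(0,1)$ to $(0,e^2)$ is $2$.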
\end{example}
\subsection{Triangles and horocycles}
\begin{definition}\label{Def:ideal-triangle}
An \emph{ideal triangle}\index{ideal triangle} in ${\mathbb{H}}^2$ is a triangle with three geodesic edges, with all three vertices on $\partial{\mathbb{H}}^2$.
\end{definition}
There is an isometry of ${\mathbb{H}}^2$ taking any ideal triangle\index{ideal triangle} to the ideal triangle with vertices $0$, $1$, and $\infty$, by \reflem{3points}. Hence all ideal triangles in ${\mathbb{H}}^2$ are isometric. In fact, we will see that they all have finite area. Thus all ideal triangles have the same area!
\begin{definition}\label{Def:horocycle}
A \emph{horocycle}\index{horocycle} centered at an ideal point $p \in \partial{\mathbb{H}}^2$ is defined as a curve perpendicular to all geodesics through $p$. When $p$ is a point on ${\mathbb{R}} \subset \partial{\mathbb{H}}^2 = {\mathbb{R}} \cup \{\infty\}$, a horocycle is a Euclidean circle tangent to ${\mathbb{R}}$ at $p$, as in \reffig{horocycle}. When $p$ is the point $\infty$, a horocycle at $p$ is a line parallel to ${\mathbb{R}}$. That is, in this case the horocycle consists of points of the form $\{ (x,y) \mid y=c \}$ where $c>0$ is constant.
\end{definition}
\begin{figure}
\includegraphics{Figures/Ch02_IntroHyp/F2-03-Horoc}
\caption{A horocycle}
\label{Fig:horocycle}
\end{figure}
\begin{definition}\label{Def:horoball}
A \emph{horoball}\index{horoball} is the region of ${\mathbb{H}}^2$ interior to a horocycle.
\end{definition}
Note a horoball will either be a Euclidean disk tangent to ${\mathbb{R}}\subset\partial{\mathbb{H}}^2$ or a region consisting of points of the form $\{(x,y) \mid y>c \}$.
In \refexamp{AreaHorocycle}, we computed the area of a portion of a horoball, and we observed it was finite. Using this, we can show that the area of an ideal triangle is finite.
\begin{lemma}\label{Lem:AreaTriangleFinite}
The area of an ideal triangle is finite.\index{ideal triangle}
\end{lemma}
\begin{proof}
Given any ideal triangle in ${\mathbb{H}}^2$, we may apply an isometry taking its vertices to $0$, $1$, and $\infty$. Let $T$ denote this ideal triangle. Consider the intersection of $T$ with the horoball about infinity of height $1$. This is the region $R$ of \refexamp{AreaHorocycle}.
Note that the isometries
\[ z \mapsto \frac{z-1}{z} \quad \mbox{and} \quad z\mapsto\frac{-1}{z-1} \]
take the horoball about infinity to horoballs of Euclidean diameter $1$ centered at $1$ and at $0$, respectively, and take $T$ to $T$. Thus the intersections of $T$ with these horoballs each have area $1$ as well.
Finally, note that the complement of these horoballs in $T$ is a closed and bounded region $B$, lying below the line $y=1$, above the horocycle of Euclidean diameter $1$ centered at $0$, and above the horocycle of Euclidean diameter $1$ centered at $1$. The region $B$ lies in the rectangle $[0,1]\times[{\frac{1}{2}},1]$. It follows that the area of $B$ is at most the area of the rectangle, which is finite.
Thus the area of $T$ is $3$ plus the area of $B$, which is finite.
\end{proof}
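For reference, here is the computation behind the area of the region $R$, using the hyperbolic area form $dx\,dy/y^2$. Above height $1$, the triangle $T$ is bounded only by the vertical lines $x=0$ and $x=1$, since its bottom edge, the semi-circle from $0$ to $1$, never rises above height $1/2$. Hence
\[ \operatorname{area}(R) = \int_0^1\int_1^\infty \frac{dy\,dx}{y^2} = 1. \]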
From the proof of the lemma, we see that the area of an ideal triangle is larger than~$3$.
The exercises lead you through a calculation showing that the area of an ideal triangle is in fact $\pi$.
\section{Hyperbolic geometry in dimension three}
Hyperbolic 3-space is defined as follows:
\[
{\mathbb{H}}^3 = \{ (x+iy, t) \in {\mathbb{C}} \times {\mathbb{R}} \mid t>0\}, \]
under the metric with first fundamental form
\begin{equation}\label{Eqn:3dHypMetric}
ds^2 = \frac{dx^2 + dy^2 + dt^2}{t^2}.
\end{equation}
We have the following theorems, which we will assume. Their proofs can be found in texts on hyperbolic geometry.
\begin{theorem}\label{Thm:GeodesicsH3}
The geodesics in ${\mathbb{H}}^3$ consist of vertical lines and semicircles orthogonal to the boundary $\partial{\mathbb{H}}^3 = {\mathbb{C}} \cup \{\infty\}$.\index{boundary at infinity} Totally geodesic planes are vertical planes and hemispheres centered on ${\mathbb{C}}$.\qed
\end{theorem}
\begin{theorem}\label{Thm:IsomH3}
The full group of isometries of ${\mathbb{H}}^3$ is generated by reflections in geodesic planes.
The group of orientation preserving isometries of ${\mathbb{H}}^3$ is $\operatorname{PSL}(2,{\mathbb{C}})$. Its action on the boundary\index{boundary at infinity} $\partial {\mathbb{H}}^3 = {\mathbb{C}} \cup \{\infty\}$ is the usual action of $\operatorname{PSL}(2,{\mathbb{C}})$ on ${\mathbb{C}} \cup \{\infty\}$, via M\"obius transformation.\index{M\"obius transformation}\qed
\end{theorem}
An element $A\in\operatorname{PSL}(2,{\mathbb{C}})$ can be represented by a matrix, up to multiplication by $\pm {\mathrm{Id}}$. \Refthm{IsomH3} states that if
\[ A=\pm\mat{a&b\\c&d} \in \operatorname{PSL}(2,{\mathbb{C}}),\]
then the action of $A$ on $\partial{\mathbb{H}}^3$ is given by
\[A(z) = \frac{az+b}{cz+d}, \mbox{ for } z\in\partial{\mathbb{H}}^3.\]
The action of an element of $\operatorname{PSL}(2,{\mathbb{C}})$ extends to the interior of hyperbolic 3-space, and there is a unique way to extend. Marden works through it carefully in \cite[Chapter~1]{marden}. However, we will not need the formula, and it is complicated, so we omit it here.
\begin{theorem}\label{Thm:ClassifyPSL(2,C)}
Apart from the identity, any element of $\operatorname{PSL}(2,{\mathbb{C}})$ is exactly one of the following:
\begin{enumerate}
\item \emph{elliptic},\index{elliptic} which has two fixed points on $\partial{\mathbb{H}}^3$ and rotates about the geodesic axis between them in ${\mathbb{H}}^3$, fixing the axis pointwise,
\item \emph{parabolic},\index{parabolic} which has a single fixed point on $\partial{\mathbb{H}}^3$,
\item \emph{loxodromic},\index{loxodromic} which has two fixed points on $\partial{\mathbb{H}}^3$, and dilates and rotates about the axis between them.
\end{enumerate}
\end{theorem}
For example, the element $\mat{\exp(i\theta)& 0 \\ 0& \exp(-i\theta)}\in \operatorname{PSL}(2,{\mathbb{C}})$ is elliptic:\index{elliptic} it fixes the points $0$ and $\infty$, and the axis between them, and rotates about that axis by angle $2\theta$.
The element $\mat{1&1\\0&1} \in \operatorname{PSL}(2,{\mathbb{C}})$ is parabolic.\index{parabolic} It fixes the point $\infty$ only. Its action on $\partial{\mathbb{H}}^3$ is $z\mapsto z+1$, which extends to Euclidean translation by~$1$ in the interior of hyperbolic 3-space.
Finally, the element $\mat{\rho & 0\\0& \rho^{-1}}$ is loxodromic\index{loxodromic} whenever $\rho$ is a complex number with $|\rho|>1$. It fixes the points $0$ and $\infty$ in $\partial {\mathbb{H}}^3$, but translates along the axis between them, and rotates and translates points in the interior of ${\mathbb{H}}^3$ that do not lie on the axis.
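As a quick check of these three examples, one can compute their actions on the boundary directly: the elliptic element above acts on $\partial{\mathbb{H}}^3$ by $z\mapsto e^{2i\theta}z$, rotation by $2\theta$ about the fixed points $0$ and $\infty$; the parabolic element acts by $z\mapsto z+1$; and the loxodromic element acts by $z\mapsto \rho^2 z$, which scales by $|\rho|^2>1$ and rotates by twice the argument of $\rho$.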
In fact, after conjugating by an appropriate element of $\operatorname{PSL}(2,{\mathbb{C}})$, any element of $\operatorname{PSL}(2,{\mathbb{C}})$ actually becomes one of these three examples. This is stated as \reflem{MoreClassifyPSL} in \refchap{Margulis}. As a warm up for that theorem and \refthm{ClassifyPSL(2,C)}, \refex{IsomH2} works through a similar classification for isometries of ${\mathbb{H}}^2$.
\begin{definition}\label{Def:ideal-tetr}
An \emph{ideal tetrahedron}\index{ideal tetrahedron, definition} is a tetrahedron in ${\mathbb{H}}^3$ with all four vertices on $\partial{\mathbb{H}}^3$, and with geodesic edges and faces.
\end{definition}
Since there exists a M\"obius transformation\index{M\"obius transformation} taking any three points to $0$, $1$, and $\infty$ in ${\mathbb{C}} \cup \{\infty\}$, we may assume our tetrahedron has vertices at $0$, $1$, and $\infty$, and at some point $z \in {\mathbb{C}} \setminus \{0,1\}$. So any ideal tetrahedron is parameterized by $z$. See \reffig{ideal-tet}.
\begin{figure}
\import{Figures/Ch02_IntroHyp/}{F2-04-IdealT.eps_tex}
\caption{Ideal tetrahedron}
\label{Fig:ideal-tet}
\end{figure}
The value of $z$ tells us about the geometry of the ideal tetrahedron. For example, the argument of $z$ is the dihedral angle between the vertical planes through $0,1,\infty$ and through $0, z,\infty$.
The modulus of $z$ also has geometric meaning. Consider the hyperbolic geodesic through $z \in {\mathbb{C}}$ that meets the vertical line from $0$ to $\infty$ in a right angle at a point $p_1$. Consider also the geodesic through $1 \in {\mathbb{C}}$ that meets the vertical line from $0$ to $\infty$ at a right angle at point $p_2$. The hyperbolic distance between $p_1$ and $p_2$ is exactly $|\log|z||$
(\refex{CrossRatio}). Hence
\[
\log z = (\mbox{signed dist between altitudes}) + i(\mbox{dihedral angle}).
\]
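For example, when $z = e^{i\pi/3} = \frac{1}{2}+\frac{\sqrt{3}}{2}\,i$, we have $|z|=1$ and $\arg z = \pi/3$, so $\log z = i\,\pi/3$: the two altitudes meet the edge from $0$ to $\infty$ at the same point, and the dihedral angle along that edge is $\pi/3$. This is the regular ideal tetrahedron, all of whose dihedral angles equal $\pi/3$.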
\begin{definition}\label{Def:horosphere}
A \emph{horosphere}\index{horosphere} about the point $\infty \in \partial{\mathbb{H}}^3$ is a plane parallel to ${\mathbb{C}}$, consisting of points $\{(x+iy, c) \in {\mathbb{C}} \times {\mathbb{R}}\}$ where $c>0$ is constant. Note for any $c>0$, this plane is perpendicular to all geodesics through $\infty$. When we apply an isometry that takes $\infty$ to some $p \in {\mathbb{C}}$, note a horosphere is taken to a Euclidean sphere tangent to ${\mathbb{C}}$ at $p$. By definition, this is a horosphere about $p$. A \emph{horoball}\index{horoball} is the region interior to a horosphere.
\end{definition}
\begin{figure}
\includegraphics{Figures/Ch02_IntroHyp/F2-05-Horos}
\caption{Horosphere}
\label{Fig:horosphere}
\end{figure}
The metric on ${\mathbb{H}}^3$ induces a metric on a horosphere. For a horosphere $\{(x+iy,c)\in{\mathbb{C}}\times{\mathbb{R}}\}$ about $\infty$, the metric is just the Euclidean metric, rescaled by $1/c$. We may apply an isometry to any horosphere, taking it to one about $\infty$. Thus the induced metric on any horosphere will always be Euclidean. Hence when we intersect horospheres about $0$, $1$, $\infty$ and $z$ with an ideal tetrahedron through those points, we obtain four Euclidean triangles. These four triangles are similar (\refex{TetLabels1}).
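To quantify the rescaling just described: on the horosphere at height $c$, a path of Euclidean length $L$ has induced hyperbolic length $L/c$, and a region of Euclidean area $A$ has induced area $A/c^2$. For example, a unit square in the horosphere at height $c=2$ has induced side length $1/2$ and induced area $1/4$.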
\section{Exercises}
\begin{exercise}[Requires geometry]
Prove \refthm{GeodesicsH2}, that is, show that vertical lines and semi-circles are geodesics, without using isometries of ${\mathbb{H}}^2$. One way to solve this problem is to use Riemannian geometry, such as calculations in coordinates on ${\mathbb{H}}^2$. Break the problem into two steps.
\begin{enumerate}
\item Prove that vertical lines $L(t) = (x, t)$, $t>0$, are geodesics in ${\mathbb{H}}^2$.
\item Prove that semi-circles $C(t) = (x+ r\,\cos(t), r\,\sin(t))$, $t\in (0, \pi)$ are geodesics in ${\mathbb{H}}^2$.
\end{enumerate}
\end{exercise}
\begin{exercise}[Requires some geometry]\label{Ex:Reflection}
Suppose $C$ is a geodesic in ${\mathbb{H}}^2$ that is a Euclidean semi-circle with center $a\in{\mathbb{R}}$ and radius $R$. Then the \emph{reflection through $C$} \index{reflection through a geodesic} takes $z$ to $R^2/(\overline{z}-\overline{a}) +a$, where $\overline{z}$ denotes complex conjugation.
Prove the reflection through $C$ is an isometry of ${\mathbb{H}}^2$ that fixes $C$ pointwise. Note this is an orientation reversing isometry.
A similar result holds for reflection through a vertical line. Find a description for the reflection through a vertical line, and prove it is an isometry.
\end{exercise}
\begin{exercise}[Requires geometry]
Prove any isometry of ${\mathbb{H}}^2$ is the product of reflections in hyperbolic geodesics.
\end{exercise}
\begin{exercise}\label{Ex:IsomH2}
Work through the classification of isometries of ${\mathbb{H}}^2$ as elliptic,\index{elliptic} parabolic,\index{parabolic} or loxodromic.\index{loxodromic} (E.g.\ Thurston \cite[page 67]{thurston}).
\end{exercise}
\begin{exercise}\label{Ex:IsomH3}
\Reflem{3points} shows there exists an orientation preserving isometry of ${\mathbb{H}}^2$ taking any three points of $\partial{\mathbb{H}}^2$ to any other three points, provided we are careful with orientation. Prove a similar statement for ${\mathbb{H}}^3$: Given distinct $b$, $c$ and $d$ in ${\mathbb{C}} \cup \{\infty\}$, prove there exists an orientation preserving isometry of ${\mathbb{H}}^3$ taking $b$ to $1$, $c$ to $0$, and $d$ to $\infty$. Write it down as a matrix in $\operatorname{PSL}(2,{\mathbb{C}})$. Note in ${\mathbb{H}}^3$ we no longer have to worry about orientation.
\end{exercise}
\begin{exercise}
Prove the following analogue of \reflem{IntGeodesics} in ${\mathbb{H}}^3$. Show two distinct geodesics $\ell_1$ and $\ell_2$ either intersect in a single point in the interior of ${\mathbb{H}}^3$, intersect in a single point on $\partial{\mathbb{H}}^3$, or are completely disjoint in ${\mathbb{H}}^3\cup\partial{\mathbb{H}}^3$. In the third case, show there exists a unique geodesic that is perpendicular to both $\ell_1$ and $\ell_2$.
\end{exercise}
\begin{exercise}[Cross ratios] \label{Ex:CrossRatio}
Given $a\in {\mathbb{C}}$, the image of $a$ under the isometry of \refex{IsomH3} is said to be the \emph{cross ratio}\index{cross ratio} of $a,b,c,d$, and is denoted $\lambda(a,b;c,d)$.
Let $x$ be the point on the geodesic in ${\mathbb{H}}^3$ between $c$ and $d$ such that the geodesic from $a$ to $x$ is perpendicular to that between $c$ and $d$. Let $y$ be the point on the geodesic between $c$ and $d$ such that the geodesic from $b$ to $y$ is perpendicular to that between $c$ and $d$. Prove the hyperbolic distance between $x$ and $y$ is equal to $|\log|\lambda(a,b;c,d)||$.
\end{exercise}
\begin{exercise}[Areas of ideal triangles]\label{Ex:IdealTriangleArea}
Prove that the area of an ideal hyperbolic triangle is $\pi$. (E.g.\ use calculus.)\index{ideal triangle}
\end{exercise}
\begin{exercise}[Areas of 2/3-ideal triangles]\label{Ex:2/3IdealTriangle}
A \emph{$2/3$-ideal triangle}\index{$2/3$-ideal triangle} is a triangle with two vertices on the boundary at infinity\index{boundary at infinity} $\partial{\mathbb{H}}^2$, and the third in the interior of ${\mathbb{H}}^2$ such that the interior angle at the third vertex is $\theta$.
\begin{figure}
\import{Figures/Ch02_IntroHyp/}{F2-06-23Tri.eps_tex}
\caption{$2/3$-ideal triangle.\index{$2/3$-ideal triangle}}
\label{Fig:two-thirds}
\end{figure}
\begin{enumerate}
\item[(a)] Show that all $2/3$-ideal triangles of angle $\theta$ are congruent to the triangle shown in \reffig{two-thirds}, with one ideal vertex at infinity, one at $-1 \in \partial{\mathbb{H}}^2 = {\mathbb{R}}\cup\{\infty\}$, and the third in the interior of ${\mathbb{H}}^2$ with edges making angle $\theta$.
\item[(b)] Define a function $A\from (0,\pi) \to {\mathbb{R}}$ by: $A(\theta)$ is the area of the $2/3$-ideal triangle with interior angle $\pi-\theta$. Show that
\[ A(\theta_1 + \theta_2) = A(\theta_1) + A(\theta_2), \]
when this is defined. (Hint: Figure \ref{Fig:area} may be useful.)
\begin{figure}[h!]
\import{Figures/Ch02_IntroHyp/}{F2-07-Area23.eps_tex}
\caption{Areas of triangles.}
\label{Fig:area}
\end{figure}
\item[(c)] It follows that $A$ is ${\mathbb{Q}}$-linear. Since $A$ is continuous, it must be ${\mathbb{R}}$-linear. Show $A(\theta) = \theta$.
\end{enumerate}
\end{exercise}
\begin{exercise}[Areas of general triangles]\label{Ex:TriangleAreas}
Using the previous two problems, show that the area of a triangle with interior angles $\alpha$, $\beta$, and $\gamma$ is equal to $\pi -\alpha -\beta -\gamma$. Note an ideal vertex has interior angle $0$.
\end{exercise}
\begin{exercise}[Ideal tetrahedra and dihedral angles]\label{Ex:TetLabels1}
The dihedral angles on a tetrahedron are labeled $A$, $B$, $C$, $D$, $E$, and $F$ in \reffig{labeled-tet}. Using linear algebra, prove that opposite dihedral angles agree. That is, show $A=E$, $B=F$, and $C=D$.
\end{exercise}
\begin{figure}
\import{Figures/Ch02_IntroHyp/}{F2-08-LabelT.eps_tex}
\caption{Dihedral angles of an ideal tetrahedron.}
\label{Fig:labeled-tet}
\end{figure}
\begin{exercise}[Ideal tetrahedra and cross ratios]\label{Ex:tet-labels}
Orient an ideal tetrahedron with vertices $a,b,c,d$. When we apply a M\"obius transformation\index{M\"obius transformation} taking $b,c,d$ to $1,0,\infty$, respectively, the point $a$ goes to the cross ratio\index{cross ratio} $\lambda(a,b;c,d)$. Label the edge from $c$ to $d$ by the complex number $\lambda = \lambda(a,b;c,d)$. We may do this for each edge of the tetrahedron, labeling by a different cross ratio. (Notice you need to keep track of orientation.) Find all labels on the edges of the tetrahedra in terms of $\lambda$.
\end{exercise}
\begin{exercise}[Volume of a region in a horoball]\label{Ex:CuspVolume}
Let $R$ be the region in ${\mathbb{H}}^3$ given by $A\times[1,\infty)$, where $A$ is some region contained in the horosphere about $\infty$ of height $1$, i.e.\ $A\subset \{(x+i\,y, 1)\}$. Prove that $\operatorname{vol}(R) = \operatorname{area}(A)/2$.
\end{exercise}
\chapter{Geometric Structures on Manifolds}\label{Chap:Geometric}
In this chapter, we give our first examples of hyperbolic manifolds, combining ideas from the previous two chapters.
\section{Geometric structures}
\subsection{Introductory example: The torus}
A geometric structure you are likely familiar with is a 2-dimensional Euclidean structure\index{Euclidean structure} on a torus. Given any parallelogram, we obtain a torus by gluing the top and bottom sides of the parallelogram, and the right and left sides, as shown in \reffig{TorusGluing}.
\begin{figure}[h]
\includegraphics{Figures/Ch03_Geometric/F3-01-TorGlu}
\caption{A parallelogram glued to a torus}
\label{Fig:TorusGluing}
\end{figure}
The universal cover of the torus is obtained by gluing copies of the parallelogram to itself in ${\mathbb{R}}^2$. We may glue infinitely many copies in two directions, and we obtain a tiling of the plane ${\mathbb{R}}^2$ by parallelograms, as in \reffig{EuclideanTorus}. These parallelograms define a lattice in ${\mathbb{R}}^2$, and covering transformations of the universal cover ${\mathbb{R}}^2$ of the torus are given by Euclidean translations by points of the lattice. That is, if the parallelogram is determined by vectors $\overrightarrow{v}$ and $\overrightarrow{w}$ along its sides, then any covering transformation is of the form $a\overrightarrow{v}+b\overrightarrow{w}$ for $a, b \in {\mathbb{Z}}$. This construction works for any choice of parallelogram.
\begin{figure}
\includegraphics{Figures/Ch03_Geometric/F3-02-EuclT}
\caption{The universal cover of a Euclidean torus.}
\label{Fig:EuclideanTorus}
\end{figure}
Now modify this construction by choosing a more general quadrilateral instead of a parallelogram. We can still identify opposite sides in an orientation preserving manner, so when we glue we still get an object homeomorphic to a torus. However, the quadrilateral no longer determines a tiling of ${\mathbb{R}}^2$, nor a lattice. Indeed, when we glue copies of the quadrilateral to itself, as we did when constructing the universal cover above, we have to shrink, expand, and rotate the quadrilateral to glue copies, and the result is not a tiling of the plane. See \reffig{AffineTorus}.
\begin{figure}
\includegraphics{Figures/Ch03_Geometric/F3-03-AfTDev}
\caption{When we construct a torus from a quadrilateral that is not a parallelogram, generally a single point is omitted from the plane.}
\label{Fig:AffineTorus}
\end{figure}
These examples of the torus can be generalized to different surfaces and manifolds. The torus was created by gluing quadrilaterals. More generally, we
will glue different types of polygons, including ideal polygons, and
in 3-dimensions, polyhedra.
\begin{definition}\label{Def:TopPolygon}
Let $M$ be a 2-manifold. A \emph{topological polygonal decomposition}\index{topological polygonal decomposition} of $M$ is a combinatorial way of gluing polygons so that the result is homeomorphic to $M$.
We allow ideal polygons, i.e.\ those with one or more ideal vertex. Additionally, by \emph{gluing}\index{gluing} we mean an identification that takes faces to faces, edges to edges, and vertices to vertices.
\end{definition}
Both constructions of the torus above give examples of topological polygonal decompositions of the torus.
\begin{definition}\label{Def:GeomPoly}
A \emph{geometric polygonal decomposition}\index{geometric polygonal decomposition} of $M$ is a topological polygonal decomposition along with a metric on each polygon such that gluing is by isometry and the result of the gluing is a smooth manifold with a complete metric.
\end{definition}
Recall that a metric space is \emph{complete}\index{complete metric space} if every Cauchy sequence converges; and recall that a \emph{Cauchy sequence}\index{Cauchy sequence} is a sequence $\{x_i\}_{i=1}^\infty$ such that for each $\epsilon>0$, there exists a positive integer $N$ such that $d(x_i,x_j)<\epsilon$ if $i,j\geq N$.
The first construction of the torus gives a complete Euclidean metric on the torus, by pulling back the Euclidean metric on the parallelogram. Because gluings of the sides of the parallelogram are by Euclidean isometries, this will be well-defined. The second construction of the torus does not give a complete Euclidean metric, or any Euclidean metric: gluings of the quadrilaterals are by affine transformations (rotation, translation, scale),\index{affine transformation} not isometries of the Euclidean plane, so we cannot pull back a well-defined metric. Note also that toward the center of \reffig{AffineTorus}, the quadrilaterals are becoming arbitrarily small. In fact, there is a point in the figure that is disjoint from all quadrilaterals (see \refex{DevelopingImageMissesPoint}).
We will also be studying polygonal decompositions of manifolds and their generalization to three dimensions: polyhedral decompositions. More generally, we can discuss geometric structures on manifolds.
\subsection{Geometric structures on manifolds}
\begin{definition}\label{Def:GXStructure}
Let $X$ be a manifold, and $G$ a group acting on $X$. We say a manifold $M$ has a $(G,X)$-structure\index{$(G,X)$-structure} if for every point $x \in M$, there exists a \emph{chart}\index{chart} $(U,\phi)$, that is, a neighborhood $U \subset M$ of $x$ and a homeomorphism $\phi\from U \to \phi(U)\subset X$. We also sometimes refer to the map $\phi$ as a chart when $U$ is understood. Charts satisfy the following: if two charts $(U, \phi)$ and $(V, \psi)$ overlap, then the \emph{transition map}\index{transition map} or \emph{coordinate change map}\index{coordinate change map}
\[
\gamma = \phi \circ \psi^{-1}\from \psi(U\cap V) \to \phi(U\cap V)
\]
is an element of $G$.
\end{definition}
In the examples we encounter here, $X$ will be simply connected, and $G$ a group of \emph{real analytic diffeomorphisms}\index{real analytic diffeomorphism}\index{diffeomorphism!real analytic} acting transitively on $X$.
The reason we need real analytic diffeomorphisms is that they are uniquely
determined by their restriction to any open set. This is true, for example, of isometries of Euclidean space, and isometries of hyperbolic space. While we present the results in this full generality, the reader who is unfamiliar with real analytic diffeomorphisms can read with Euclidean or hyperbolic isometries in mind.
Our manifold $X$ will typically admit a known metric as well, and $G$ will be the group of isometries of $X$. It will follow that $M$ inherits a metric from $X$ (\refex{InheritMetric}).
We will say that $M$ has a \emph{geometric structure}\index{geometric structure}.
\begin{example}[Euclidean torus]
\label{Example:EuclTorus}
Let $X$ be 2-dimensional Euclidean space, ${\mathbb{E}}^2$. Let $G$ be the group of isometries of Euclidean space, ${\operatorname{Isom}}({\mathbb{E}}^2)$. The torus admits an $({\operatorname{Isom}}({\mathbb{E}}^2),{\mathbb{E}}^2)$-structure, also called a \emph{Euclidean structure}.\index{Euclidean structure}
To help us understand the definition, let's look at some charts\index{chart} and
transition maps for this example.
We know the universal cover of the torus is given by tiling the plane ${\mathbb{R}}^2$ with parallelograms. For simplicity, we will work with the example in which each parallelogram is a square, and one square has vertices $(0,0)$, $(1,0)$, $(1,1)$, and $(0,1)$ in ${\mathbb{R}}^2$. Call this square the \emph{basic} square.
Now pick any point $p$ on the torus. This will lift to a collection of points on ${\mathbb{R}}^2$, one for each copy of the unit square. Take a disk of radius $1/4$ around each lift. These all project under the covering map to an open neighborhood $U$ of $p$ in the torus. Therefore we have the following charts: $(U,\phi)$ is a chart,\index{chart} where $\phi$ maps $U$ into the disk of radius $1/4$ centered around the lift $\widehat{p}_0$ of $p$ in the basic square. Another chart is $(U, \psi)$, where $\psi$ maps $U$ into the disk of radius $1/4$ about the lift $\widehat{p}_1$ of $p$ in some other square. Such a lift is given by a translation of $\widehat{p}_0$ by a vector $(m,n)\in{\mathbb{Z}}\times{\mathbb{Z}}$, in the lattice determined by the basic square. Thus $\phi \circ \psi^{-1}$ will be a Euclidean translation by integral values in the $x$ and $y$ direction. These are Euclidean isometries.
More generally, let $q$ be a point such that a lift $\widehat{q}_0$ of $q$ has distance less than $1/2$ to $\widehat{p}_0$ in the basic square. Thus a disk of radius $1/4$ about $\widehat{p}_0$ overlaps with a disk of radius $1/4$ about $\widehat{q}_0$. These disks project to give open neighborhoods $U$ and $V$ of $p$ and $q$ respectively in the torus. Since these neighborhoods overlap, we need to ensure that any corresponding charts\index{chart} differ by a Euclidean isometry in the region of overlap. Obtain charts by mapping $U$ to your favorite disk of radius $1/4$ about a lift of $p$ in ${\mathbb{R}}^2$. Map $V$ to your favorite disk of radius $1/4$ about a lift of $q$ in ${\mathbb{R}}^2$; see \reffig{GXTorus} for an example. Again, regardless of the choice of $\phi$ and $\psi$, the overlap
\[
\phi \circ \psi^{-1}\from \psi(U\cap V)\to \phi(U\cap V)
\]
will be a Euclidean translation of the intersection of the two disks by some $(n,m)\in{\mathbb{Z}}\times{\mathbb{Z}}$ corresponding to the choice of lifts. Again see \reffig{GXTorus}.
\begin{figure}
\begin{center}
\includegraphics{Figures/Ch03_Geometric/F3-04-GXTor}
\end{center}
\caption{Euclidean structure\index{Euclidean structure} on a torus: Transition maps are Euclidean translations.}
\label{Fig:GXTorus}
\end{figure}
This idea extends to arbitrary neighborhoods $U$ and $V$: transition maps will always be translations by $(n,m)\in {\mathbb{Z}}\times{\mathbb{Z}}$.
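Concretely, in the notation above: if the lifts used to define the two charts differ by the lattice vector $(m,n)\in{\mathbb{Z}}\times{\mathbb{Z}}$, so that $\psi(u) = \phi(u)+(m,n)$ on the relevant component of $U\cap V$, then
\[ \phi\circ\psi^{-1}(p) = p - (m,n), \]
a Euclidean translation, hence an element of ${\operatorname{Isom}}({\mathbb{E}}^2)$.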
Therefore, we conclude that the torus obtained by gluing sides of the square with vertices $(0,0)$, $(0,1)$, $(1,0)$ and $(1,1)$ admits an $({\operatorname{Isom}}({\mathbb{E}}^2),{\mathbb{E}}^2)$-structure, where ${\mathbb{E}}^2$ denotes ${\mathbb{R}}^2$ with the standard Euclidean metric.
\end{example}
\begin{example}[The affine torus]
\label{Example:AffineTorus}\index{affine torus}
Again let $X={\mathbb{R}}^2$, but this time let $G$ be the affine group acting on ${\mathbb{R}}^2$. That is, $G$ consists of invertible affine transformations,\index{affine transformation} i.e.\ linear transformations followed by a translation:
\[
x\mapsto Ax+b.
\]
The torus of \reffig{AffineTorus} admits a $(G, {\mathbb{R}}^2)$-structure. This can be seen in a manner similar to that in the previous example. Charts\index{chart} will differ by a scaling, rotation, then translation.
\end{example}
\smallskip
In practice, we rarely use charts\index{chart} to show manifolds have a particular $(G,X)$-structure.\index{$(G,X)$-structure} Instead, as in the two previous examples, we build manifolds by starting with an existing manifold $X$ and taking the quotient by the action of a group, or by gluing together polygons.
\subsection{Hyperbolic surfaces}
Let $X = {\mathbb{H}}^2$, and let $G = {\operatorname{Isom}}({\mathbb{H}}^2)$, the group of isometries of ${\mathbb{H}}^2$. When a 2-manifold admits an $({\operatorname{Isom}}({\mathbb{H}}^2), {\mathbb{H}}^2)$-structure, we say the manifold admits a \emph{hyperbolic structure}\index{hyperbolic structure, definition}, or is hyperbolic. More generally, an $n$-manifold that admits an $({\operatorname{Isom}}({\mathbb{H}}^n), {\mathbb{H}}^n)$-structure admits a \emph{hyperbolic structure}, or is hyperbolic.
We will look at some examples of hyperbolic 2-manifolds obtained from geometric polygonal decompositions. To do so, we start with a collection of hyperbolic polygons in ${\mathbb{H}}^2$, for example, a collection of triangles. We allow vertices to either be finite or ideal, i.e.\ in the interior of ${\mathbb{H}}^2$ or on $\partial_\infty{\mathbb{H}}^2$, respectively. In any case, we will always assume each polygon is convex, and edges are segments of geodesics in ${\mathbb{H}}^2$. Now, to each edge, associate exactly one other edge. Just as in the case of the torus, glue polygons along associated edges by an isometry of ${\mathbb{H}}^2$.
When does the result of this gluing give a manifold that admits a hyperbolic structure? We obtain a hyperbolic structure exactly when each point in the result has a neighborhood $U$ and a homeomorphism into ${\mathbb{H}}^2$ so that transition maps are in ${\operatorname{Isom}}({\mathbb{H}}^2)$. The following lemma gives a condition that will guarantee this.
\begin{lemma}\label{Lem:IsometricNbhd}
A gluing of hyperbolic polygons yields a 2-manifold with a hyperbolic structure, with structure agreeing with that in the interior of the polygons, if and only if each point in the gluing has a neighborhood (in the quotient topology) isometric to a disk in ${\mathbb{H}}^2$.
More generally, a gluing of $n$-dimensional hyperbolic polyhedra yields a hyperbolic $n$-manifold, with hyperbolic structure agreeing with that in the interior of the polyhedra, if and only if each point has a neighborhood (in the quotient topology) isometric to a ball in ${\mathbb{H}}^n$, with the isometry the identity in the interior of polyhedra.
\end{lemma}
Here by a \emph{gluing}\index{gluing} of hyperbolic polyhedra, we mean a collection of geodesic polyhedra embedded in ${\mathbb{H}}^n$, along with identifications on faces, called gluing maps or face-pairings,\index{face-pairing isometry} which are given by an isometry on each face. The quotient space of the polyhedra with identifications given by the gluing maps is the gluing.
Additionally, we say that the hyperbolic structure agrees with the structure in the interior of the polyhedra if, for any point in the interior of the polyhedron, a ball $U$ containing that point, lying in the interior of the polyhedron, along with the identity map from $U$ to $U\subset{\mathbb{H}}^n$, provides a chart\index{chart} in the hyperbolic structure.
\begin{proof}[Proof of \reflem{IsometricNbhd}]
We will prove the more general statement. Suppose first that a gluing of hyperbolic polyhedra $M$ yields an $n$-manifold with hyperbolic structure, agreeing with the hyperbolic structure in the interior of the polyhedra. Then every point $x$ in $M$ has a neighborhood $U$ and a chart\index{chart} $\phi\from U\to \phi(U)\subset {\mathbb{H}}^n$ such that transition maps are isometries of ${\mathbb{H}}^n$. By restricting $\phi$ to a subset of $U$, we may assume $\phi(U)$ is a ball in ${\mathbb{H}}^n$. The neighborhood $U$ is open in the quotient topology on the gluing. Thus it is made up of portions of open neighborhoods meeting the polyhedra in ${\mathbb{H}}^n$, identified by gluing isometries. In the interior of a polyhedron $P$, $\phi$ composed with the identity map on $U\cap P$ is an isometry of ${\mathbb{H}}^n$. Thus we may view $U\cap P$ as the intersection of a hyperbolic ball with $P$. Since gluing maps are isometries, they identify faces of $U\cap P$ into a hyperbolic ball, and $\phi$ must be an isometry of $U$ into a ball in ${\mathbb{H}}^n$.
Now suppose that under the quotient topology, every point of $M$ has a neighborhood isometric to a ball of ${\mathbb{H}}^n$, with isometry the identity for points in the interior of a polyhedron. Then this isometry gives a chart\index{chart} $\phi\from U\to\phi(U)\subset{\mathbb{H}}^n$. If $(U,\phi)$ and $(V,\psi)$ are charts\index{chart} and $U\cap V\neq\emptyset$, then $\phi\circ\psi^{-1}\from \psi(U\cap V)\to \phi(U\cap V)$ is the composition of isometries, hence an isometry, so $M$ has an $({\operatorname{Isom}}({\mathbb{H}}^n),{\mathbb{H}}^n)$-structure. Because charts\index{chart} in the interior of polyhedra are identity maps, the hyperbolic structure agrees with that on the polyhedra.
\end{proof}
When does each point in a gluing of hyperbolic polygons have a neighborhood isometric to a disk in the hyperbolic plane? Let $x$ be a point in the gluing, and consider its lifts to the polygons. There are three cases.
\begin{enumerate}
\item If $x$ lifts to a point $\widehat{x}$ in the interior of one of the polygons, then that lift is unique. In this case, for small enough $\epsilon>0$, there is a disk about $\widehat{x}$ of radius $\epsilon$ embedded in the interior of the polygon in ${\mathbb{H}}^2$. This projects under the quotient map to a disk about $x$ isometric to a disk in ${\mathbb{H}}^2$.
\item If $x$ lifts to a point on an edge of a polygon, then it has two lifts, $\widehat{x}_0$ and $\widehat{x}_1$, on two different edges that are glued to each other by the gluing map. A neighborhood of $x$ in the quotient topology lifts to give a ``half-neighborhood'' of $\widehat{x}_0$ glued to a corresponding ``half-neighborhood'' of $\widehat{x}_1$. Each contains a half-disk in ${\mathbb{H}}^2$, and we may scale the disks so that they glue to a disk under the gluing map. Thus in this case as well, $x$ has a neighborhood isometric to a disk in ${\mathbb{H}}^2$.
\item If $x$ lifts to a finite vertex of a polygon, then it may have several lifts, possibly including several vertices of the collection of polygons. In this case, we need to be more careful. The following lemma gives a condition that will guarantee we have an isometry to a hyperbolic disk in this case as well.
\end{enumerate}
\begin{lemma}\label{Lem:VertexSum}
A gluing of hyperbolic polygons gives a 2-manifold with a hyperbolic structure if and only if for each finite vertex $v$ of the polygons, the interior angles at the vertices glued to $v$ sum to $2\pi$.
\end{lemma}
\begin{proof}
This is an immediate consequence of \reflem{IsometricNbhd} and the observation that around a vertex, portions of the polygons meet in a cycle, with total angle around the finite vertex equal to the sum of the interior angles of the polygons at the vertices in that cycle. We need to check that each finite vertex has a neighborhood isometric to a neighborhood in ${\mathbb{H}}^2$. This will hold if and only if the sum of interior angles is $2\pi$.
\end{proof}
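A standard illustration of this criterion, beyond the examples in this chapter: one can check that there is a regular hyperbolic octagon with all interior angles equal to $\pi/4$. Gluing its sides in the familiar pattern $aba^{-1}b^{-1}cdc^{-1}d^{-1}$ identifies all eight vertices to a single point, where the angle sum is $8\cdot\pi/4 = 2\pi$; by \reflem{VertexSum}, the resulting closed genus two surface admits a hyperbolic structure.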
\section{Complete structures}
Given a gluing of hyperbolic polygons, suppose the angle sum at each finite vertex is $2\pi$, so that we have a hyperbolic structure by \reflem{VertexSum}. Does it necessarily follow that we have a geometric polygonal decomposition?
Recall from \refdef{GeomPoly} that for a geometric polygonal decomposition, we need a geometric structure on each polygon so that the result of the gluing is a smooth manifold with a complete metric.\index{complete metric space} Our hyperbolic structure gives a smooth manifold with a metric. However, in the presence of ideal vertices, the metric may not be complete.
It will be easier to discuss criteria for completeness using the language of \emph{developing maps} and \emph{holonomy}. Our exposition of these terms is based on that of Thurston \cite{thurston:book}.
\subsection{Developing map and holonomy}
The developing map, which we define in this subsection, encodes information on the $(G,X)$-structure of a manifold.\index{$(G,X)$-structure} It is a local homeomorphism into $X$. When a manifold has a polygonal decomposition, say by polygons in $X={\mathbb{R}}^2$ or ${\mathbb{H}}^2$, the developing map ``develops'' the gluing information on the polygons by attaching copies of the polygons along edges in the space $X$, as we did for the torus in Figures~\ref{Fig:EuclideanTorus} and~\ref{Fig:AffineTorus}.
More generally, a developing map can be defined for any manifold $M$ with a $(G,X)$-structure,\index{$(G,X)$-structure} assuming as before that $X$ is a manifold and $G$ is a group of real analytic diffeomorphisms acting transitively on $X$. Any chart\index{chart} $(U,\phi)$ gives a homeomorphism of $U$ onto $\phi(U)\subset X$. To define the developing map, we wish to extend this map.
Suppose $(V,\psi)$ is another chart,\index{chart} and $y\in U\cap V$. Then
\[ \gamma = \phi\circ\psi^{-1}\from \psi(U\cap V)\to \phi(U\cap V) \]
is an element of $G$ acting on $\psi(U\cap V)$. By setting $y\mapsto \gamma$, we obtain a map from $U\cap V$ to $G$.
Because $G$ is a group of real analytic diffeomorphisms, the element $\gamma$ is uniquely determined in a neighborhood of $\psi(y)$. This implies that the map $y\mapsto \gamma$ is locally constant: we obtain the same element $\gamma$ for all $x$ in a neighborhood of $y$ in $U\cap V$. We let $\gamma(y)$ denote this element of $G$. Then we may define a map $\Phi\from U\cup V \to X$ by
\[
\Phi(x) = \begin{cases} \phi(x) & \mbox{ if } x\in U \\
\gamma(y)\cdot\psi(x) \quad & \mbox{ if } x\in V
\end{cases}
\]
Note that if $U\cap V$ is connected, then $\Phi$ is a well-defined homeomorphism, since for $x\in U\cap V$, we have $\phi(x)= \gamma(y)\cdot\psi(x)$. Thus in this case, $\Phi$ is an extension of $\phi$. However, note that we may run into trouble when $U\cap V$ is not connected, as follows. If $x$ is in a component disjoint from that containing $y$, then $\phi(x)$ may not equal $\gamma(y)\cdot\psi(x)$. This is illustrated in the following example.
\begin{example}\label{Example:TorusDeveloping}
Consider the Euclidean torus obtained by gluing sides of a square with vertices $(0,0)$, $(1,0)$, $(0,1)$, and $(1,1)$ in ${\mathbb{R}}^2$. Suppose the union of two simply connected neighborhoods $U$ and $V$ forms a neighborhood of a longitude for the torus, as in \reffig{TorusDeveloping}, such that $U\cap V$ has two components. Suppose $y$ lies in one component of $U\cap V$. There exist charts\index{chart} $(U,\phi)$ and $(V,\psi)$ sending $y$ to the interior of the basic square in ${\mathbb{R}}^2$. Then the transition map $\gamma(y)$ is the identity element of $G$ in this case, since $\phi$ and $\psi$ agree on the component of $U\cap V$ containing $y$. However, the map $\Phi$ defined above is not well-defined, for if $x$ lies in the other component of $U\cap V$, $\phi(x)$ lies in the basic square, but $\gamma(y)\cdot\psi(x)= \psi(x)$ lies in the square with vertices $(1,0)$, $(2,0)$, $(2,1)$, and $(1,1)$.
\begin{figure}
\import{Figures/Ch03_Geometric/}{F3-05-TorDev.eps_tex}
\caption{Neighborhoods $U$ and $V$ on the torus have two components of intersection, one containing $x$ and one containing $y$. The map $\phi$ cannot be extended over $V$ because it will not be well-defined at $x$}
\label{Fig:TorusDeveloping}
\end{figure}
\end{example}
Similarly, as we attempt to extend $\Phi$ by considering other coordinate neighborhoods overlapping $U$ and $V$, the natural extensions using transition maps such as $\gamma(y)$ again may not be well-defined.
To overcome this problem, we use the universal cover of $M$. Recall from algebraic topology that the universal cover $\widetilde{M}$ of $M$ can be defined to be the space of homotopy classes of paths in $M$ that start at a fixed basepoint $x_0$. See, for example \cite[Theorem~82.1]{Munkres:Topology} or \cite[page~64]{Hatcher:AlgebraicTopology}. Let $\alpha\from[0,1]\to M$ be a path representing a point $[\alpha]\in\widetilde{M}$, and let the chart\index{chart} $(U_0,\phi_0)$ contain the basepoint $x_0$.
Now find $0=t_0<t_1<\dots<t_n=1$ and charts\index{chart} $(U_i,\phi_i)$ such that $\alpha([t_i,t_{i+1}])$ is contained in $U_i$ for $i=0, 1, \dots, n-1$. Denote the points $\alpha(t_i)$ by $x_i\in M$. We extend $\phi_0$ to all of $\alpha$ as follows. First, note that each $x_i$, for $i=1, \dots, n-1$, is contained in a connected component of the intersection of two charts,\index{chart} $x_i \in U_{i-1}\cap U_i$. Then the transition map $\gamma_{i-1,i} = \phi_{i-1}\circ\phi_i^{-1}$ gives an element $\gamma_{i-1,i}(x_i)$ in $G$ that is well-defined on the entire connected component. Thus at the first step, we may extend $\phi_0$ to a function from $[0,t_2]$ to $X$ by defining $\Phi_1\from [0,t_2] \to X$ to be the function:
\[
\Phi_1(t) = \begin{cases}
\phi_0(\alpha(t)) & \mbox{ if } t\in[0,t_1] \\
\gamma_{0,1}(x_1)\cdot\phi_1(\alpha(t)) & \mbox{ if } t\in [t_1,t_2]
\end{cases}
\]
This will be well-defined on all of $[0,t_2]$, since $\phi_0(\alpha(t_1)) = \gamma_{0,1}(x_1)\cdot\phi_1(\alpha(t_1))$.
Extend inductively to $\Phi_i\from[0,t_{i+1}]\to X$ by setting:
\[
\Phi_i(t) = \begin{cases}
\Phi_{i-1}(t) & \mbox{ if } t\in [0,t_i] \\
\gamma_{0,1}(x_1)\gamma_{1,2}(x_2)\dots\gamma_{(i-1),i}(x_i)\cdot\phi_i(\alpha(t)) & \mbox{ if } t\in[t_i,t_{i+1}]
\end{cases}
\]
Again this is well-defined, for we know $\phi_{i-1}(\alpha(t_i)) = \gamma_{(i-1),i}(x_i)\cdot\phi_i(\alpha(t_i))$. Thus by induction, at the point $t_i$,
\begin{align*}
\Phi_{i-1}(t_i) = & \:\gamma_{0,1}(x_1)\gamma_{1,2}(x_2)\dots\gamma_{(i-2),(i-1)}(x_{i-1})\cdot\phi_{i-1}(\alpha(t_i)) \\
= & \:\gamma_{0,1}(x_1)\gamma_{1,2}(x_2)\dots\gamma_{(i-1),i}(x_i)\cdot\phi_i(\alpha(t_i)).
\end{align*}
After the $(n-1)$-st step, we have a map $\Phi_{n-1} \from [0,1]\to X$. In fact, note that the definition of $\Phi_{n-1}$ actually provides a map $\Phi_{[\alpha]}\from U\to X$, for some small neighborhood $U$ of $\alpha(1)$, defined by
\[
\Phi_{[\alpha]}(x) = \gamma_{0,1}(x_1)\gamma_{1,2}(x_2)\dots\gamma_{(n-2),(n-1)}(x_{n-1})\cdot\phi_{n-1}(x).
\]
The function $\Phi_{[\alpha]}$ defined in this manner, with fixed initial chart\index{chart} $(U_0,\phi_0)$ and fixed basepoint $x_0$, is an example of a function defined by \emph{analytic continuation}\index{analytic continuation}. It is well known that analytic continuation gives a well-defined function, independent of choice of the charts\index{chart} $(U_1,\phi_1)$, $\dots$, $(U_{n-1}, \phi_{n-1})$, independent of the choice of points $t_1, \dots, t_{n-1}$, and independent of the choice of path $\alpha$ in the homotopy class $[\alpha]\in\widetilde{M}$. For our particular application, we will leave this as an exercise (\refex{AnalyticContinuation}).
\begin{definition}\label{Def:DevelopingMap}
The \emph{developing map}\index{developing map} $D\from\widetilde{M}\to X$ is the map
\[
D([\alpha]) = \Phi_{n-1}(1) =
\gamma_{0,1}(x_1)\gamma_{1,2}(x_2)\dots\gamma_{(n-2),(n-1)}(x_{n-1})\cdot\phi_{n-1}(\alpha(1)),
\]
with notation given above.
\end{definition}
\begin{proposition}\label{Prop:DevelopingMapProperties}
The developing map $D\from \widetilde{M}\to X$ satisfies the following properties.
\begin{enumerate}
\item\label{Itm:DevMapWellDefined} For fixed basepoint $x_0$ and initial chart\index{chart} $(U_0,\phi_0)$, with $x_0\in U_0$, the map $D$ is well-defined, independent of all other choices used to define it, including charts,\index{chart} points in the intersection of chart neighborhoods, and independent of choice of $\alpha$ in the homotopy class of $[\alpha]$.
\item\label{Itm:DevMapLocalHomeo} $D$ is a local diffeomorphism.
\item\label{Itm:DevMapBaseptChange} If we define a new map in the same way as $D$, except beginning with a new choice of basepoint and initial chart,\index{chart} the resulting map is equal to the composition of $D$ with an element of $G$.
\end{enumerate}
\end{proposition}
\begin{proof}
Showing the map is well-defined, part \eqref{Itm:DevMapWellDefined}, is a standard exercise in analytic continuation,\index{analytic continuation} and uses heavily the fact that $G$ is analytic. We leave it as \refex{AnalyticContinuation}. We also leave part \eqref{Itm:DevMapBaseptChange} as an exercise. Part \eqref{Itm:DevMapLocalHomeo} follows from part \eqref{Itm:DevMapWellDefined}, the fact that each $\gamma_{(i-1),i}(x_i)$ is a diffeomorphism and each chart map $\phi_i$ is a local diffeomorphism, and the topology on $\widetilde{M}$.
\end{proof}
Now consider the case that $[\alpha]\in\widetilde{M}$ is an element of the fundamental group of $M$. That is, $[\alpha]$ is a homotopy class of loops starting and ending at $x_0$. Analytic continuation\index{analytic continuation} along a loop gives a function $\Phi_{[\alpha]}$ whose domain is a neighborhood of the basepoint of the loop; this is a new chart\index{chart} defined in a neighborhood of the basepoint. Since $\phi_0$ and $\Phi_{[\alpha]}$ are both charts\index{chart} defined in a neighborhood of the basepoint, these maps must differ by an element of $G$. Let $g_{[\alpha]} \in G$ be the element such that $\Phi_{[\alpha]} = g_{[\alpha]} \phi_0$.
Let $T_{[\alpha]}$ denote the covering transformation of $\widetilde{M}$ that corresponds to $[\alpha]$. It follows that
\[
D\circ T_{[\alpha]} = g_{[\alpha]} \circ D.\]
Note also that for $[\alpha], [\beta] \in \pi_1(M)$,
\[
D\circ T_{[\alpha]}\circ T_{[\beta]} = (g_{[\alpha]}\circ D) \circ T_{[\beta]} = g_{[\alpha]}\circ g_{[\beta]} \circ D.
\]
It follows that the map $\rho\from \pi_1(M) \to G$ defined by $\rho([\alpha]) = g_{[\alpha]}$ is a group homomorphism.
\begin{definition}\label{Def:holonomy}
The element $g_{[\alpha]}$ is the \emph{holonomy}\index{holonomy} of $[\alpha]$. The group homomorphism $\rho$ is called the \emph{holonomy} of $M$. Its image is the \emph{holonomy group}\index{holonomy group} of $M$.
\end{definition}
Note that $\rho$ depends on the choices from the construction of $D$. When $D$ changes, $\rho$ changes by conjugation in $G$ (exercise).
\begin{example}\label{Example:DevelopTorus}
Pick a point $x$ on the torus, say $x$ lies at the intersection of a choice of meridian and longitude curves for the torus, and consider a nontrivial curve $\gamma$ based at $x$. An example of a nontrivial curve $\gamma$ on the torus is shown in \reffig{TorusCurve}.
\begin{figure}
\includegraphics{Figures/Ch03_Geometric/F3-06-TorCrv}
\caption{A nontrivial curve $\gamma$ (gray) on the torus. Meridian
and longitude curves are shown in black.}
\label{Fig:TorusCurve}
\end{figure}
Now consider a Euclidean structure\index{Euclidean structure} on the torus. There exists a chart\index{chart} mapping $x$ onto the Euclidean plane. We can take our chart to be an open parallelogram about $x$, where boundaries of the parallelogram glue in the usual way to form the torus. As the curve $\gamma$ passes over a meridian or longitude, in the image of the developing map we must glue a new parallelogram to the appropriate side of the parallelogram we just left. See \reffig{EuclidTorusDevel}, left, for an example. The tiling of the plane by parallelograms is the image of the developing map, or the developing image of the Euclidean torus.
\begin{figure}
\includegraphics{Figures/Ch03_Geometric/F3-07-TDevIm}
\caption{Left to right: developing a Euclidean torus, developing an
affine torus.}
\label{Fig:EuclidTorusDevel}
\end{figure}
As for the affine torus, \refexamp{AffineTorus},\index{affine torus} each time a curve crosses a meridian or longitude we attach a rescaled, rotated, translated copy of our quadrilateral to the appropriate edge. \Reffig{EuclidTorusDevel} right shows an example. \Reffig{AffineTorus} shows (part of) the developing image of the affine torus.
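In the Euclidean case the holonomy can be written down explicitly: for the square torus of \refexamp{EuclTorus}, the holonomy elements of the meridian and longitude are (in some order) the unit translations $(x,y)\mapsto(x+1,y)$ and $(x,y)\mapsto(x,y+1)$, so the holonomy group is the lattice ${\mathbb{Z}}\times{\mathbb{Z}}$ acting by translations. For the affine torus, the corresponding holonomy elements are affine maps involving nontrivial scaling, which is one way to see why the quadrilaterals in \reffig{AffineTorus} shrink toward a single omitted point.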
\end{example}
\subsection{Completeness of polygonal gluings}
Now we return to the question of determining when a gluing of hyperbolic polygons gives a complete hyperbolic structure.\index{complete metric space} We know there will be a hyperbolic structure provided the angle sum around finite vertices is $2\pi$ (\reflem{VertexSum}). The question of whether the structure is complete or not depends on what happens near ideal vertices.
Let $M$ be an oriented hyperbolic surface obtained by gluing \emph{ideal} hyperbolic polygons. An \emph{ideal vertex}\index{ideal vertex} of $M$ is
an equivalence class of ideal vertices of the polygons, identified by the gluing.
Let $v$ be an ideal vertex of $M$. Then $v$ is identified to some ideal vertex $v_0$ of a polygon $P_0$. Let $h_0$ be a horocycle centered at $v_0$ on $P_0$, and extend $h_0$ counterclockwise around $v_0$. The horocycle $h_0$ will meet an edge $e_0$ of $P_0$, which is glued to an edge of some polygon $P_1$ meeting ideal vertex $v_1$ identified to $v$. Note $h_0$ meets $e_0$ at a right angle. It extends to a unique horocycle $h_1$ about $v_1$ in $P_1$. Continue extending the horocycle in this manner, obtaining horocycles $h_2, h_3,\dots$. Since we only have a finite number of polygons with a finite number of vertices, eventually we return to the vertex $v_0$ of $P_0$, obtaining a horocycle $h_n$ about that ideal vertex. Note $h_n$ may not agree with the initial horocycle $h_0$. See \reffig{distd}.
\begin{figure}
\begin{center}
\import{Figures/Ch03_Geometric/}{F3-08-DistD.eps_tex}
\end{center}
\caption{Extending a horocycle: view inside the manifold.}
\label{Fig:distd}
\end{figure}
\begin{definition}\label{Def:SignedDistHorocycle}
Let $d(v)$\index{$d(v)$} denote the signed hyperbolic distance between $h_0$ and $h_n$ on $P_0$. See \reffig{distd}. The sign is taken such that if $h_n$ is closer to $v_0$ than $h_0$, then $d(v)$ is positive. This is the direction shown in the figure.
\end{definition}
\begin{lemma}\label{Lem:dvInd-h0}
The value $d(v)$\index{$d(v)$} does not depend on the initial choice of horocycle $h_0$, nor on the initial choice of $v_0$ in the equivalence class of $v$.
\end{lemma}
\begin{proof}
Exercise.
\end{proof}
It may be easier to compute $d(v)$\index{$d(v)$} if we look at polygons in ${\mathbb{H}}^2$, using terminology of developing map and holonomy.\index{holonomy}
Fix an ideal vertex $v$ on one of the polygons $P$. Put $P$ in ${\mathbb{H}}^2$ with $v$ at infinity. Now take $h_0$ to be a horocycle centered at infinity intersected with $P$. Follow $h_0$ to the right. When it meets the edge of $P$, a new polygon is glued. The developing map instructs us how to embed that new polygon as a polygon in ${\mathbb{H}}^2$, with one edge the vertical geodesic which is the edge of $P$. Continue along this horocycle, placing polygons in ${\mathbb{H}}^2$ according to their developing image. Eventually the horocycle will meet $P$ again with $v$ at infinity. When this happens, the developing map will instruct us to glue a copy of $P$ to the given edge. This copy of $P$ will be isometric to the original copy of $P$, where the isometry is the holonomy\index{holonomy} of the closed path which encircles the ideal vertex $v$ once in the counterclockwise direction. This holonomy isometry, call it $T$, takes the horocycle $h_0$ on our original copy of $P$ to a horocycle $T(h_0)$, and $T(h_0)$ will be of distance $d(v)$\index{$d(v)$} from the extended horocycle that began with $h_0$. See \reffig{Extend-h}.
\begin{figure}
\begin{center}
\import{Figures/Ch03_Geometric/}{F3-09-ExtH.eps_tex}
\end{center}
\caption{Extending a horocycle.}
\label{Fig:Extend-h}
\end{figure}
\begin{proposition}\label{Prop:Complete}
Let $S$ be a surface with hyperbolic structure obtained by gluing
hyperbolic polygons. Then the metric on $S$ is complete\index{complete metric space} if and only
if $d(v)=0$ for each ideal vertex $v$.
\end{proposition}
Before we prove this proposition, let's look at an example.
\begin{example}[Complete 3-punctured sphere]
\label{Example:3-punct-sphere}
A topological polygonal decomposition for the 3-punctured sphere consists of two ideal triangles.\index{ideal triangle} See \reffig{3-punct1}.
\begin{figure}
\import{Figures/Ch03_Geometric/}{F3-10-3PTri.eps_tex}
\caption{Topological polygonal decomposition for the 3-punctured sphere.}
\label{Fig:3-punct1}
\end{figure}
Let's try to construct a geometric polygonal decomposition by building the developing image. We can put one of the ideal triangles in ${\mathbb{H}}^2$ as the triangle with vertices at $0$, $1$, $\infty$. If we glue the other triangle immediately to the right, we have two vertices at $1$ and at $\infty$, but the third can go to any point $x$, where $x>1$. See \reffig{3punct2}. These two triangles on the left, labeled $A$ and $B$, give a fundamental region for the 3-punctured sphere. The developing image will be created by gluing additional copies of these two triangles to edges in the figure by holonomy\index{holonomy} isometries.
\begin{figure}
\import{Figures/Ch03_Geometric/}{F3-11-3PDev.eps_tex}
\caption{We may choose any $x>1$, $y>x$ when finding a hyperbolic
structure.}
\label{Fig:3punct2}
\end{figure}
We may choose the position of the next copy of the triangle $A$ glued to the right, putting its vertex at the point $y$ as in \reffig{3punct2}. After this choice, notice we cannot choose where the next vertex of $B$ to the right will go. This is because the choice $y$ determines an isometry of ${\mathbb{H}}^2$ taking the triangle $A$ on the left to the triangle labeled $A$ on the right. This isometry is exactly the holonomy\index{holonomy} element corresponding to the closed curve running once around the vertex at infinity. The same isometry, which has been determined with the choice of $y$, must take $B$ in the middle to the next triangle glued to the right in our figure. In fact, now that we know this holonomy\index{holonomy} element, we may apply it and its inverse successively to the triangles of \reffig{3punct2}, and we obtain the entire developing image of all triangles adjacent to infinity.
Recall that we want our hyperbolic structure to be complete.\index{complete metric space} By \refprop{Complete}, we need to look at horocycles. Pick a collection of horocycles about the vertices $0$, $1$, and $\infty$. Each of these horocycles extends to give a new horocycle about another copy of $A$. Each copy of $A$ is obtained by applying a holonomy isometry to the original triangle with vertices at $0$, $1$, and $\infty$. We want the horocycles obtained under these holonomy
isometries to agree with the horocycles obtained by extending the original horocycles. This is the condition for completeness.
Here is one way to determine complete structures.\index{complete metric space} Let $\ell_1$ denote the distance in ${\mathbb{H}}^2$ between the horocycle at infinity and the horocycle at $0$. See \reffig{3punct3}. The holonomy\index{holonomy} element $\psi$, corresponding to the group element fixing the ideal vertex at infinity, is an isometry of ${\mathbb{H}}^2$, hence it preserves distances. Thus, under this isometry, the distance between the image of the horocycle at infinity and the horocycle at $\psi(0)=x$ must also be $\ell_1$. If the structure is complete, then the horocycle about infinity is preserved by $\psi$. Thus the horocycle at $x$ must have the same (Euclidean) diameter as the horocycle at $0$.
Now consider the length of the edge between horocycles at $0$ and $1$, labeled $\ell_3$ in \reffig{3punct3}. There is another holonomy\index{holonomy} isometry $\phi$ mapping the geodesic edge between $0$ and $1$ to one between $x$ and $1$, corresponding to the group element encircling the ideal vertex at $1$.
Again completeness implies that the horocycle at $1$ is fixed by $\phi$. The horocycle at $0$ maps to the horocycle centered at $x$, and again because $\phi$ is an isometry, the distance between horocycles centered at $1$ and $x$ must still be $\ell_3$. We already determined the fact that the horocycle at $x$ has the same (Euclidean) diameter as the one at $0$. The only possible way that the distance $\ell_3$ will also be preserved is if $x=2$, and the picture is symmetric across the edge from $1$ to infinity.
\begin{figure}
\import{Figures/Ch03_Geometric/}{F3-12-3PLen.eps_tex}
\caption{Lengths between horocycles}
\label{Fig:3punct3}
\end{figure}
Note at this point that the holonomy\index{holonomy} $\psi$ is completely determined: It fixes $\infty$, takes $0$ to $2$, and maps a point $i\,h$ on a horocycle about infinity to the point $2+i\,h$. This is the translation $\psi(z)=z+2$. Similarly, the holonomy $\phi$ is also completely determined, as it fixes $1$, maps $0$ to $2$ and takes a point on a horocycle on the edge of length $\ell_3$ to a determined point on the edge from $x$ to $1$.
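For concreteness, here is one way to write these isometries down explicitly (a sketch of the computation, using the value $x = 2$ found above). The translation $\psi$ and the parabolic isometry $\phi$ fixing $1$ with $\phi(0) = 2$ are given by
\[ \psi(z) = z + 2 \quad \mbox{and} \quad \phi(z) = \frac{3z-2}{2z-1}, \]
represented in $\operatorname{PSL}(2,{\mathbb{R}})$ by the matrices
\[ \mat{1&2\\0&1} \quad \mbox{and} \quad \mat{3&-2\\2&-1}, \]
respectively. One checks directly that $\phi(1) = 1$, $\phi(0) = 2$, and that both matrices have trace $2$, so both isometries are parabolic.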
Because the fundamental group of the 3-punctured sphere is generated by the two loops corresponding to $\psi$ and $\phi$, this determines the complete structure. We have therefore shown:
\begin{proposition}\label{Prop:CompleteStruct3punct}
There is a unique complete hyperbolic structure\index{complete metric space} on the 3-punctured sphere. A fundamental region for the structure is given by two ideal triangles\index{ideal triangle} with vertices $0$, $1$, and $\infty$ and $1$, $2$, and $\infty$, respectively.\qed
\end{proposition}
\end{example}
\begin{example}[Incomplete structure on 3-punctured sphere]
\label{Example:Incomplete3PunctSphere}\index{incomplete 3-punctured sphere}
What if we choose a different value for $x$ besides $x=2$? Say we let $x=3/2$. To simplify things, let's keep the length of the edge between horocycles at $0$ and $1$ constant as we extend horocycles. Choose horocycles at $0$ and $1$ of (Euclidean) radius $1/2$, so that these horocycles are tangent along the edge between $0$ and $1$, hence the distance between horocycles is $0$. This distance will remain equal to $0$ under each holonomy\index{holonomy} element, so there will be a horocycle at $x=3/2$ tangent to the horocycle about $1$, to preserve distance $0$. This determines where the image of the triangle $A$ must go under the holonomy fixing infinity: its third vertex (called $y$ in \reffig{3punct2}) must have a horocycle about it of the same (Euclidean) size as the horocycle at $3/2$. This determines the holonomy isometry about the vertex at infinity. Apply this holonomy isometry successively, and we obtain a pattern of triangles as in \reffig{3punct-incomplete}.
\begin{figure}
\begin{center}
\import{Figures/Ch03_Geometric/}{F3-13-3PInc.eps_tex}
\end{center}
\caption{Part of developing image of an incomplete structure on a
3-punctured sphere.}
\label{Fig:3punct-incomplete}
\end{figure}
Notice that the edges of the triangles approach a limit --- the thick line shown on the far right of the figure. Notice also that this line is not part of the developing image of the 3-punctured sphere.
This hyperbolic structure is incomplete: for any horocycle about infinity in ${\mathbb{H}}^2$, the sequence of points at the intersection of the horocycle and the edges of the developing images of ideal triangles\index{ideal triangle} projects to a Cauchy sequence that does not converge. Alternately, the value $d(v)$\index{$d(v)$} is nonzero for $v$ the ideal vertex lifting to the point at infinity.
An incomplete metric space\index{complete metric space} may be completed by adjoining points corresponding to limits of Cauchy sequences, and giving the resulting space the metric topology. In our case, the completion of this incomplete 3-punctured sphere is obtained by attaching a geodesic segment --- the projection of the thick line in \reffig{3punct-incomplete}. Each point of the thick geodesic on the right of \reffig{3punct-incomplete} corresponds to the limiting point of the Cauchy sequence given by a horocycle about infinity at the appropriate height. Note that in the quotient, however, we attach a closed curve of length $d(v)$,\index{$d(v)$} since points on that thick geodesic lying on horocycles of distance $d(v)$ apart will be identified.
Note that horocycles about infinity run straight into this thick geodesic, meeting it at right angles. On the other hand, these horocycles meet infinitely many edges of ideal triangles\index{ideal triangle} on their way into the geodesic, and none of these ideal edges meets the geodesic. It follows that the ideal edges of the two triangles $A$ and $B$ become arbitrarily close to the geodesic attached in the completion, without ever meeting it. Geometrically, it appears that the edges of the ideal triangles spin around the geodesic infinitely many times, while horocycles run directly into it. See \reffig{3punctCompletion}.
\begin{figure}
\begin{center}
\includegraphics{Figures/Ch03_Geometric/F3-14-3PCom}
\end{center}
\caption{The completion of an incomplete structure on a 3-punctured sphere.\index{complete metric space} Attach a geodesic of length $d(v)$.\index{$d(v)$} Ideal edges spin arbitrarily close to the attached geodesic without meeting it. Horocycles (dashed) run directly into the geodesic. (This example has two complete cusped ends and one incomplete end.)}
\label{Fig:3punctCompletion}
\end{figure}
\end{example}
\begin{proof}[Proof of \refprop{Complete}]
Let $S$ be a surface obtained by gluing hyperbolic polygons.
Suppose first that $d(v)$\index{$d(v)$} is nonzero. Then take a sequence of points on a horocycle about $v$, one point for each intersection of the horocycle with an ideal edge. This gives a Cauchy sequence that does not converge. Therefore, the metric is not complete.\index{complete metric space}
Now suppose $d(v)=0$ for each ideal vertex $v$. Then some horocycle closes up around each ideal vertex, so we may remove the interior horoball from each polygon. After this removal, the remainder is a compact manifold with boundary. For any $t>0$, let $S_t$ be the compact manifold obtained by removing interiors of horocycles of distance $t$ from our original choice of horocycle. Then the compact subsets $S_t$ of $S$ satisfy $\bigcup_{t\in{\mathbb{R}}^+} S_t= S$ and $S_{t+a}$ contains a neighborhood of radius $a$ about $S_t$. Any Cauchy sequence must be contained in some $S_t$ for sufficiently large $t$. Hence by compactness of $S_t$, the Cauchy sequence must converge.
\end{proof}
\section{Developing map and completeness}
Here is a better condition for completeness that works in all dimensions and all geometries.
\begin{theorem}\label{Thm:Developing}
Let $M$ be an $n$-manifold with a $(G,X)$-structure,\index{$(G,X)$-structure} where $G$ acts transitively on $X$, and $X$ admits a complete $G$-invariant metric. Then the metric on $M$ inherited from $X$ is complete\index{complete metric space} if and only if the developing map $D\from \widetilde{M} \to X$ is a covering map.
\end{theorem}
\begin{proof}
Suppose first that the developing map $D\from \widetilde{M}\to X$ is a covering map. Let $\{x_n\}_{n=1}^\infty$ be a Cauchy sequence in $M$. For $n$ large enough, $x_n$ will be contained in an $\epsilon$-ball in $M$ that is evenly covered in $\widetilde{M}$. Thus the sequence lifts to a Cauchy sequence $\{\widetilde{x}_n\}$ in $\widetilde{M}$. Since $D$ is a local isometry, $\{D(\widetilde{x}_n)\}$ is a Cauchy sequence in $X$. Finally, since $X$ is complete,\index{complete metric space} $\{D(\widetilde{x}_n)\}$ converges to $y\in X$. Now, because $D$ is a covering map, there is a neighborhood $U$ of $y$ that is evenly covered by $D$. Lift this to a neighborhood $\widetilde{U}$ in $\widetilde{M}$ containing infinitely many points of the sequence $\{\widetilde{x}_n\}$. The lift of $y$ in this neighborhood, call it $\widetilde{y}$, must be a limit point of $\{\widetilde{x}_n\}$. Then the projection of $\widetilde{y}$ to $M$ is a limit point of the sequence $\{x_n\}$, so $M$ is complete.\index{complete metric space}
For the converse, we appeal to a proof by Thurston~\cite[Proposition 3.4.15]{thurston:book}. Suppose $M$ is complete.\index{complete metric space} To show $D\from \widetilde{M} \to X$ is a covering map, we show that any path $\alpha_t$ in $X$ lifts to a path $\widetilde{\alpha_t}$ in $\widetilde{M}$. Since $D$ is a local homeomorphism, this implies that $D$ is a covering map.
First, if $M$ is complete,\index{complete metric space} then $\widetilde{M}$ must also be complete, where the metric on $\widetilde{M}$ is the lift of the metric on $M$, as follows. The projection to $M$ of any Cauchy sequence in $\widetilde{M}$ gives a Cauchy sequence in $M$, with limit point $x$. Then $x$ has a compact neighborhood which is evenly covered in $\widetilde{M}$, hence there is a compact neighborhood in $\widetilde{M}$ containing all but finitely many points of the Cauchy sequence and also containing a lift of $x$. Thus the sequence converges in $\widetilde{M}$.
Let $\alpha_t$ be a path in $X$. Because $D$ is a local homeomorphism, we may lift $\alpha_t$ to a path $\widetilde{\alpha_t}$ in $\widetilde{M}$ for $t \in [0, t_0)$, some $t_0>0$. By completeness of $\widetilde{M}$, the lifting extends to $[0, t_0]$. But because $D$ is a local homeomorphism, a lifting to $[0, t_0]$ extends to $[0, t_0+\epsilon)$. Hence the lifting extends to all of $\alpha_t$ and $D$ is a covering map.
\end{proof}
\begin{corollary}\label{Cor:DevelopingCover}
If $X$ is simply connected, and $M$ is a manifold with a $(G,X)$-structure\index{$(G,X)$-structure} as in \refthm{Developing}, then $M$ is complete\index{complete metric space} if and only if the developing map $D\from \widetilde{M}\to X$ is an isometry.
\end{corollary}
\begin{proof}
The developing map is a local isometry by construction. \Refthm{Developing} shows that $M$ is complete if and only if the developing map is a covering map. Since $X$ and $\widetilde{M}$ are simply connected, the developing map is a covering map if and only if it is a covering isomorphism. A covering isomorphism that is a local isometry must be an isometry.
\end{proof}
\section{Exercises}
\begin{exercise} We have seen that Euclidean structures\index{Euclidean structure} on a torus are determined by a parallelogram.
\begin{enumerate}
\item[(a)] Show that by applying translation and rotation isometries of ${\mathbb{E}}^2$, we may assume that the parallelogram has vertices $(0,0)$, $(x_1,0)$, $(x_2, y)$, and $(x_1+x_2,y)$ where $x_1>0$ and $y>0$.
\item[(b)] Show that up to rescaling, a parallelogram has vertices $(0,0)$, $(1,0)$, $(x,y)$, and $(x+1,y)$ for some $(x,y)\in {\mathbb{R}}^2$ with $y>0$.
\end{enumerate}
\end{exercise}
\begin{exercise}\label{Ex:InheritMetric}
If $X$ is a metric space, and $G$ is a group of isometries acting transitively on $X$, and $M$ is a manifold admitting a $(G,X)$-structure,\index{$(G,X)$-structure} show that $M$ inherits a metric from $X$. That is, explain how to define a metric on $M$ from that on $X$, and show that the metric is well-defined.
\end{exercise}
\begin{exercise} (Induced structures \cite[Exercise 3.1.5]{thurston:book}).
Let $N$ be a topological space and $M$ a manifold with a $(G,X)$-structure,\index{$(G,X)$-structure} and suppose $\pi \from N \to M$ is a local homeomorphism. Prove $N$ has a $(G,X)$-structure that is preserved by $\pi$. As a corollary, show that any covering space of $M$ admits a $(G,X)$-structure.
\end{exercise}
\begin{exercise}[Analytic continuation\index{analytic continuation}]\label{Ex:AnalyticContinuation}
Prove item \eqref{Itm:DevMapWellDefined} of \refprop{DevelopingMapProperties}. That is, prove the following.
\begin{enumerate}
\item[(a)] Suppose $\alpha\from [0,1]\to M$ is a path. Let $(U_i,\phi_i)$ and $(V_j,\psi_j)$ be two choices of charts\index{chart} that cover $\alpha([0,1])$. Let
\[ 0=t_0<t_1<\dots<t_n=1 \mbox{ and }
0=s_0<s_1<\dots<s_m=1\]
be points in $[0,1]$ such that $\alpha([t_i,t_{i+1}])\subset U_i$ and $\alpha([s_j,s_{j+1}])\subset V_j$. Define inductively extensions $\Phi_i(t)$ and $\Psi_j(t)$ as in the definition of the developing map. Finally, let $X\subset [0,1]$ be the set of points on which $\Phi_i(t) = \Psi_j(t)$. Prove that $X=[0,1]$.
(Hint: You will need to use the fact that $G$ is analytic. One reference for analytic continuation\index{analytic continuation} is \cite[Chapter~IX]{Conway:Complex}.)
\item[(b)] Suppose $\alpha$ and $\beta$ are homotopic paths with the same endpoints. By part (a), there are well-defined functions $D(\alpha)$ and $D(\beta)$, defined separately on $\alpha$ and on $\beta$ as in \refdef{DevelopingMap}. Prove that $D(\alpha)=D(\beta)$.
This proves that the definition of $D$ is independent of choice $\alpha$ in the homotopy class of $[\alpha]\in\widetilde{M}$.
(Again you will use the fact that $G$ is analytic; see for example \cite[Chapter~IX]{Conway:Complex}.)
\end{enumerate}
\end{exercise}
\begin{exercise} Prove item \eqref{Itm:DevMapBaseptChange} of \refprop{DevelopingMapProperties}. Show that if we define a new map in the same way as $D$, except we change the basepoint $x_0$ or the initial chart\index{chart} $(U_0,\phi_0)$, then the resulting map is equal to the composition of $D$ with an element of $G$.
\end{exercise}
\begin{exercise}\label{Ex:AffineTorusExplicit}
Let $T$ be the affine torus\index{affine torus} obtained by identifying the sides of the trapezoid with vertices $(0,0)$, $(1,0)$, $(0,1)$, and $(1/2, 1)$.
\begin{enumerate}
\item[(a)] Compute the holonomy\index{holonomy} elements of $T$ corresponding to meridian and longitude (i.e.\ the loop running along the horizontal edge of the trapezoid and the loop running along the vertical edge of the trapezoid). What is the holonomy group\index{holonomy group} of $T$?
\item[(b)] For basepoint $(0,0)$ and initial chart\index{chart} chosen so that the trapezoid is mapped by the identity into ${\mathbb{R}}^2$, compute explicitly the developing images of various curves, including the following:
\begin{itemize}
\item The curve running twice along the meridian (based at $(0,0)$).
\item The curve running twice along the longitude.
\item The curve running twice along the meridian and three times along the longitude.
\end{itemize}
\end{enumerate}
\end{exercise}
\begin{exercise}\label{Ex:DevelopingImageMissesPointExplicit}
Let $T$ be the affine torus\index{affine torus} of \refex{AffineTorusExplicit}, obtained by identifying sides of the trapezoid with vertices $(0,0)$, $(1,0)$, $(0,1)$, and $(1/2,1)$, and let $\widetilde{T}$ denote its universal cover. Prove that the developing image $D(\widetilde{T})\subset {\mathbb{R}}^2$ misses exactly one point.
\end{exercise}
\begin{exercise}\label{Ex:DevelopingImageMissesPoint}
Generalize \refex{DevelopingImageMissesPointExplicit}: Let $T$ be any affine torus.\index{affine torus} Prove that either the developing map $D\from \widetilde{T}\to {\mathbb{R}}^2$ is a covering map, and $T$ is a Euclidean torus, or the image of the developing map misses a single point in ${\mathbb{R}}^2$.
\end{exercise}
\begin{exercise}
Fix an example of your favorite quadrilateral that is not a parallelogram, and let $T$ be the torus obtained by identifying sides. Use a computer to create a picture such as \reffig{AffineTorus} for your quadrilateral.
\end{exercise}
\begin{exercise} Prove \reflem{dvInd-h0}, that $d(v)$\index{$d(v)$} is independent of initial choice of horocycle, and independent of choice of $v_0$ in the equivalence class of $v$.
\end{exercise}
\begin{exercise}
Prove the holonomy\index{holonomy group} group of the complete structure\index{complete metric space} on a 3-punctured sphere is generated by
\[ \mat{1&2\\0&1} \quad \mbox{and} \quad \mat{1&0\\2&1}. \]
\end{exercise}
\begin{exercise} How many incomplete hyperbolic structures are there on a 3-punctured sphere? How can they be parameterized? Give a geometric interpretation of this parameterization. That is, relate the parameterization to the developing image of the associated hyperbolic structure.
\end{exercise}
\begin{exercise}\label{Ex:1punctTorus} A torus with 1 puncture has a topological polygonal decomposition consisting of two triangles.
\begin{enumerate}
\item[(a)] Find a complete hyperbolic structure on the 1-punctured torus and prove your structure is complete.
\item[(b)] Find all complete hyperbolic structures on the 1-punctured torus. How are they parameterized?
\end{enumerate}
\end{exercise}
\begin{exercise}\label{Ex:4PunctSphere} A sphere with 4 punctures has a topological polygonal decomposition consisting of four triangles. Repeat \refex{1punctTorus} for the 4-punctured sphere.
\end{exercise}
\chapter{Hyperbolic Structures and Triangulations}\label{Chap:GluingCompleteness}
\blfootnote{Jessica S. Purcell, Hyperbolic Knot Theory}
In \refchap{Geometric}, we learned that hyperbolic structures lead to developing maps and holonomy,\index{holonomy} and that the developing map is a covering map if and only if the hyperbolic structure is complete.\index{complete metric space}
In this chapter, we wish to compute explicit complete hyperbolic structures on 3-manifolds, again with our primary examples being knot complements. One of the most straightforward ways to find a hyperbolic structure is to first triangulate the manifold, or subdivide it into tetrahedra, and then to put a hyperbolic structure on each tetrahedron, ensuring the tetrahedra glue to give a $(\operatorname{PSL}(2,{\mathbb{C}}), {\mathbb{H}}^3)$-structure whose underlying metric is complete. This method of computing hyperbolic structures has been studied by many, and in particular was implemented on the computer by J.~Weeks as part of his 1985 PhD thesis \cite{Weeks:Thesis}. Here we will describe the conditions required to obtain a complete hyperbolic structure via triangulations, and as usual, work through examples.
\section{Geometric triangulations}
In \refchap{Geometric}, we defined topological and geometric polygonal decompositions of 2-manifolds. We can extend these notions to 3-manifolds by considering decompositions into ideal polyhedra. In \refchap{Fig8Decomp}, we obtained topological ideal polyhedral decompositions for knot complements. For many applications, including those later in this chapter, it simplifies matters greatly to consider decompositions into ideal tetrahedra.
\begin{definition}\label{Def:TopIdealTriang}
Let $M$ be a 3-manifold. A \emph{topological ideal triangulation}\index{topological ideal triangulation} of $M$ is a combinatorial way of gluing truncated tetrahedra (ideal tetrahedra) so that the result is homeomorphic to $M$. Truncated parts will correspond to the boundary of $M$. As before, a gluing should take faces to faces, edges to edges, etc.
\end{definition}
\begin{example}\label{Example:Fig8TopTriang}
The figure-8 knot has a topological ideal triangulation consisting of two ideal tetrahedra, as we saw in \refex{CollapseBigons} in \refchap{Fig8Decomp}.
\end{example}
For a given knot complement, it is relatively easy to find topological ideal triangulations. For example, starting with any polyhedral decomposition, choose an ideal vertex $v$ and cone to that vertex: i.e.\ add edges between $v$ and all other ideal vertices, between any two edges meeting $v$ add an ideal triangle\index{ideal triangle} (adding an additional edge opposite $v$ if necessary), and between three triangles meeting $v$ add an ideal tetrahedron. Split off the resulting tetrahedra. This reduces the collection of polyhedra to a collection with at least one fewer ideal vertex. Hence after repeating a finite number of times, we are left with a collection of topological tetrahedra.
\subsection{An extended example: the $6_1$ knot}\label{Example:61TopTriang}
We work out an example for the $6_1$ knot carefully. We will see how to decompose the complement into five tetrahedra. (In fact, the complement of the $6_1$ knot can be decomposed into four tetrahedra, but we won't bother simplifying further here.)
We start with a polyhedral decomposition of the $6_1$ knot. We use the decomposition obtained using the methods of \refchap{Fig8Decomp}. The result is shown in \reffig{61Poly}, with the knot on the left, the top polyhedron in the center, and the bottom polyhedron on the right. Recall all polyhedra are viewed from the outside; that is, the ball of the polyhedron lies behind the projection plane in each figure. In this example, oriented edges are labeled $1$ through $6$.
\begin{figure}[h]
\import{Figures/Ch04_Gluing/}{F4-01-61Poly.eps_tex}
\caption{Left to right: The $6_1$ knot, the top polyhedron, the bottom polyhedron}
\label{Fig:61Poly}
\end{figure}
Collapse all bigons,\index{bigon} identifying edges $1$ and $2$, and $3$ through $6$. New edges and orientations are shown in \reffig{61NoBigons}.
\begin{figure}[h]
\import{Figures/Ch04_Gluing/}{F4-02-61BiCo.eps_tex}
\caption{Polyhedra for $6_1$ knot with bigons\index{bigon} collapsed}
\label{Fig:61NoBigons}
\end{figure}
We cone the top polyhedron to the vertex in the center. This subdivides faces $C$ and $D$ into triangles, shown in \reffig{61Subdivide} in both top and bottom polyhedra.
\begin{figure}[h]
\import{Figures/Ch04_Gluing/}{F4-03-SplitP.eps_tex}
\caption{A subdivision of faces $C$ and $D$ in the top
polyhedron (left) leads to a subdivision of the bottom (right)}
\label{Fig:61Subdivide}
\end{figure}
Continuing the subdivision in the top polyhedron, two edges meeting in the center vertex bound an ideal triangle;\index{ideal triangle} three triangles bound a tetrahedron. Thus edges labeled $1$, $7$, $9$ bound an ideal triangle $E_1$; edges labeled $1$, $8$, $0$ bound an ideal triangle $E_2$. Triangles $A$, $C_1$, $D_1$, and $E_1$ bound an ideal tetrahedron, as do triangles $B$, $C_3$, $D_3$, and $E_2$. When we split off these tetrahedra a single tetrahedron remains. All tetrahedra making up the top polyhedron are shown in \reffig{61TopTetr}.
\begin{figure}[h]
\import{Figures/Ch04_Gluing/}{F4-04-TopTet.eps_tex}
\caption{The top polyhedron splits into the three tetrahedra shown}
\label{Fig:61TopTetr}
\end{figure}
Now we split the bottom polyhedron into tetrahedra. However, first, observe in \reffig{61Subdivide} that edges labeled $7$ and $0$ in the bottom polyhedron run between the same two ideal vertices. Thus these two edges should be flattened and identified in the bottom polyhedron. While we could do that now in one step, we believe it is more geometrically clear how to flatten and identify if we first cut off ideal tetrahedra from the bottom polyhedron.
So first, note there will be an ideal triangle\index{ideal triangle} $E_3$ with edges labeled $4$, $7$, and $1$, and this cuts off an ideal tetrahedron with sides $A$, $B$, $C_1$, $E_3$. Similarly there is an ideal triangle $E_4$ with edges $7$, $9$, and $4$, cutting off an ideal tetrahedron with sides $C_2$, $C_3$, $E_4$, and $D_1$. These two tetrahedra, as well as the remnant of the bottom polyhedron, are shown in \reffig{61BottomSubdivide}.
\begin{figure}[h]
\import{Figures/Ch04_Gluing/}{F4-05-BotTet.eps_tex}
\caption{Splitting off two tetrahedra in the bottom polyhedron}
\label{Fig:61BottomSubdivide}
\end{figure}
Notice that the object on the right of \reffig{61BottomSubdivide} is not a tetrahedron: edges labeled $7$ and $0$ in that polyhedron form a bigon,\index{bigon} which collapses to a single edge which we label $7$. When we do the collapse, the faces $E_4$ and $D_2$ collapse to a single triangle, which we will label $D_2$. The faces $E_3$ and $D_3$ also collapse to a single triangle, which we
will label $D_3$.
When we have finished, we have five tetrahedra that glue to give the complement of the $6_1$ knot. All five tetrahedra with their edges and faces labeled are shown in \reffig{61Tetr}.
\begin{figure}
\import{Figures/Ch04_Gluing/}{F4-06-61Tet.eps_tex}
\caption{Five tetrahedra which glue to give the complement of the
$6_1$ knot}
\label{Fig:61Tetr}
\end{figure}
\subsection{Geometric ideal triangulations}
\begin{definition}\label{Def:GeomTriang}
A \emph{geometric ideal triangulation}\index{geometric ideal triangulation} of $M$ is a topological ideal triangulation such that each tetrahedron has a (positively oriented)\index{positively oriented tetrahedron}\index{tetrahedron!positively oriented} hyperbolic structure, and the result of gluing is a smooth manifold with a complete metric.\index{complete metric space} We also call such a triangulation a \emph{geometric triangulation}\index{geometric triangulation} for short.
\end{definition}
As of the writing of this book, it is still an open question as to whether every 3-manifold that admits a complete hyperbolic structure actually admits a geometric ideal triangulation.
It is known that every cusped hyperbolic 3-manifold can be decomposed into convex ideal polyhedra \cite{EpsteinPenner}: we will go through this in \refchap{Canonical}.
However, subdividing this decomposition into tetrahedra may create degenerate tetrahedra --- actual topological tetrahedra (as opposed to the object on the right of \reffig{61BottomSubdivide}), but tetrahedra that are flat in the hyperbolic structure on $M$.
There are known examples of generalized spaces with singularities that do not admit geometric triangulations\index{geometric triangulation} \cite{Choi:Triangulations}.
\section{Edge gluing equations}
In \refchap{Geometric}, we saw that a gluing of hyperbolic polygons has a hyperbolic structure if and only if the angle sum around each finite vertex is $2\pi$ (\reflem{VertexSum}).
There are similar conditions for a gluing of hyperbolic tetrahedra. We now need to consider gluing around an edge.
Let $T$ be an ideal tetrahedron embedded in ${\mathbb{H}}^3$. Any ideal tetrahedron has six edges. If we select any one, say $e$, we may choose an isometry of ${\mathbb{H}}^3$ taking the endpoints of $e$ to $0$ and $\infty$, and sending a third vertex to $1 \in {\mathbb{C}}\subset \partial_\infty{\mathbb{H}}^3$. This choice uniquely determines the isometry. The fourth vertex of $T$ will be mapped to some $z'\in{\mathbb{C}}$. We may assume that $z'$ has positive imaginary part, for if not, apply an isometry of ${\mathbb{H}}^3$ rotating around the geodesic from $0$ to $\infty$ and rescaling so that $z'$ maps to $1$. In this case, the image of $1$ under this isometry will be a complex number with positive imaginary part.
\begin{definition}\label{Def:EdgeInvariant}
For an ideal tetrahedron $T$ embedded in ${\mathbb{H}}^3$, and edge $e$ of that tetrahedron, define the number $z(e)$ in ${\mathbb{C}}$ to be the complex number with positive imaginary part obtained by applying the unique isometry of ${\mathbb{H}}^3$ that takes the vertices of $e$ to $0$ and $\infty$, takes another vertex to $1$, and takes the final vertex of $T$ to $z(e)$. This is called the \emph{edge invariant}\index{edge invariant} of $e$.
\end{definition}
\begin{remark}\label{Rem:Degenerate}
Note that it is possible to map an ideal tetrahedron to ${\mathbb{H}}^3$ so that three vertices map to $0$, $\infty$, and $1$, and the fourth maps to a point on the real line. In this case, the tetrahedron produced does not have a hyperbolic structure. If the fourth vertex is not $0$ or $1$, it is said to be \emph{flat}.\index{flat tetrahedron}\index{tetrahedron!flat} If the fourth vertex is $0$ or $1$, it is \emph{degenerate}.\index{degenerate tetrahedron}\index{tetrahedron!degenerate} Similarly, a fourth vertex mapped to infinity is a degenerate tetrahedron. An ideal triangulation of a hyperbolic 3-manifold with flat or degenerate tetrahedra is not a geometric ideal triangulation. When looking for geometric triangulations,\index{geometric triangulation} we must rule out such tetrahedra. Similarly, for geometric triangulations, all edge invariants of all tetrahedra must have positive imaginary part. This ensures the tetrahedra are \emph{positively oriented}.\index{positively oriented tetrahedron}\index{tetrahedron!positively oriented} Finally, the procedure above always chooses an edge invariant with positive imaginary part. However, when we glue many tetrahedra together, at times it is impossible to simultaneously choose all edge invariants to have positive imaginary part; some may have negative imaginary part. Such a tetrahedron is a \emph{negatively oriented} tetrahedron.\index{negatively oriented tetrahedron}\index{tetrahedron!negatively oriented}
\end{remark}
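In computations, the edge invariant can be expressed as a cross-ratio of the four ideal vertices. The following is a minimal sketch in Python; the function name is ours, and we assume all four ideal points are given as finite complex numbers, with $v_0$ and $v_1$ the endpoints of the chosen edge.
\begin{verbatim}
def edge_invariant(v0, v1, v2, v3):
    # Image of v3 under the Mobius transformation sending
    # v0 -> 0, v1 -> infinity, v2 -> 1.
    z = (v3 - v0) * (v2 - v1) / ((v3 - v1) * (v2 - v0))
    # The definition of the edge invariant picks the representative with
    # positive imaginary part; swapping the roles of v2 and v3 replaces
    # z by 1/z.  If z is real, the tetrahedron is flat or degenerate.
    return z if z.imag > 0 else 1 / z
\end{verbatim}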
Edge invariants of an ideal tetrahedron determine each other, in the following way.
\begin{lemma}\label{Lem:EdgeInvariants}
Let $T$ be an ideal tetrahedron with edge $e_1$, mapped so that vertices of $T$ lie at $\infty$, $0$, $1$, and $z(e_1)$ (so endpoints of $e_1$ lie at $0$ and $\infty$). Then $T$ has the following additional edge invariants.
\begin{itemize}
\item The edge $e_1'$ opposite $e_1$, with vertices $1$ and $z(e_1)$, has edge invariant $z(e_1')=z(e_1)$.
\item The edge $e_2$ with vertices $\infty$ and $1$ has edge invariant
\[ z(e_2) = \frac{1}{1-z(e_1)}. \]
\item The edge $e_3$ with vertices $\infty$ and $z(e_1)$ has edge invariant
\[ z(e_3) = \frac{z(e_1)-1}{z(e_1)}. \]
\end{itemize}
Thus we have the following relationships for these edge invariants.
\[ z(e_1)z(e_2)z(e_3) = -1, \quad \mbox{ and } \quad
1 - z(e_1) + z(e_1)z(e_3) = 0 \]
\end{lemma}
\begin{proof}
The proof is obtained by considering isometries of ${\mathbb{H}}^3$ that move the different edges of $T$ onto the geodesic from $0$ to $\infty$. For ease of notation, we set $z=z(e_1)$.
For the first part, we label one more edge. Let $e_3'$ be the edge of $T$ opposite $e_3$. So $e_3'$ has endpoints $0$ and $1$. Note there is a geodesic $\gamma$ in ${\mathbb{H}}^3$ that meets the edges $e_3$ and $e_3'$ orthogonally. An elliptic\index{elliptic} isometry rotating about $\gamma$ by angle $\pi$ maps $0$ to $1$ and $1$ to $0$, and maps $\infty$ to $z$ and $z$ to $\infty$, thus it preserves $T$. It takes the edge $e_1'$ with endpoints $1$ and $z$ to an edge with endpoints $0$ and $\infty$. Hence $z(e_1') = z$.
To determine $z(e_2)$, we apply a M\"obius transformation\index{M\"obius transformation} fixing $\infty$, taking $1$ to $0$, and taking $z$ to $1$. This transformation is given by
\[ w \mapsto \frac{w-1}{z-1}.\]
It sends $0$ to $-1/(z-1)$. Thus $z(e_2) = 1/(1-z)$.
As for the edge $e_3$ running from $z$ to $\infty$, to determine its edge invariant we apply a M\"obius transformation\index{M\"obius transformation} fixing $\infty$, sending $z$ to $0$, and sending $0$ to $1$. This is given by
\[ w \mapsto \frac{w-z}{-z}.\]
It sends $1$ to $(1-z)/(-z)$. Thus $z(e_3) = (z-1)/z$.
\end{proof}
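The relations of \reflem{EdgeInvariants} are also easy to confirm symbolically. Here is a minimal sketch in Python, assuming the SymPy package is available:
\begin{verbatim}
import sympy as sp

z = sp.symbols('z')
z1, z2, z3 = z, 1/(1 - z), (z - 1)/z    # edge invariants from the lemma

print(sp.simplify(z1 * z2 * z3))        # prints -1
print(sp.simplify(1 - z1 + z1 * z3))    # prints 0
\end{verbatim}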
The three edge invariants of a tetrahedron are shown in \reffig{EdgeInvt}.
\begin{figure}
\import{Figures/Ch04_Gluing/}{F4-07-EInvt.eps_tex}
\caption{Edge invariants}
\label{Fig:EdgeInvt}
\end{figure}
Now consider a gluing of ideal tetrahedra. Fix an edge $e$ of the gluing, and let $T_1$ be a tetrahedron which has edge $e_1$ glued to $e$. Put $T_1$ in ${\mathbb{H}}^3$ with the edge $e_1$ running from $0$ to $\infty$, with a third vertex at $1$, and the fourth vertex at $z(e_1)$, where $z(e_1)$ has positive imaginary part. The gluing identifies each face of $T_1$ with another face. Let $F_1$ denote the face of $T_1$ with vertices $0$, $z(e_1)$, and $\infty$. This is glued to a face $F_1'$ in some tetrahedron $T_2$, where the edge $e_2$
in $T_2$ glues to $e$.
Now, we could put $T_2$ in ${\mathbb{H}}^3$ with vertices at $0$, $\infty$, $1$, and $z(e_2)$, but since we're gluing to $T_1$, we want the face $F_1'$ to have vertices $0$, $\infty$, and $z(e_1)$ rather than vertices $0$, $\infty$, and $1$. Thus to do the gluing, we apply an isometry of ${\mathbb{H}}^3$ fixing $0$ and $\infty$, mapping $1$ to $z(e_1)$. This takes the fourth vertex of $T_2$ to $z(e_1)z(e_2)$.
Continue attaching tetrahedra counterclockwise around $e$. The next tetrahedron attached will have vertices $0$, $\infty$, $z(e_1) z(e_2)$, and $z(e_1) z(e_2) z(e_3)$ in ${\mathbb{C}}$. See \reffig{TriGlue}. Eventually one of the tetrahedra will be glued to $T_1$ again. The fourth vertex of the final tetrahedron will be at the point $z(e_1) z(e_2) \cdots z(e_n)$.
\begin{figure}
\import{Figures/Ch04_Gluing/}{F4-08-TriGlu.eps_tex}
\caption{Vertices of attached triangles.}
\label{Fig:TriGlue}
\end{figure}
\begin{theorem}[Edge gluing equations]
\label{Thm:Gluing}\index{edge gluing equations}\index{gluing equations!edge equations}
Let $M^3$ admit a topological ideal triangulation such that each ideal tetrahedron has a hyperbolic structure. The hyperbolic structures on the ideal tetrahedra induce a hyperbolic structure on the gluing, $M$, if and only if for each edge $e$,
\[ \prod z(e_i) = 1 \quad \mbox{ and } \quad \sum {\rm arg}(z(e_i)) = 2\pi,\]
where the product and sum are over all edges that glue to $e$.
\end{theorem}
\begin{proof}
The hyperbolic structure on the tetrahedra induces a hyperbolic structure on $M$ if and only if every point in $M$ has a neighborhood isometric to a ball in ${\mathbb{H}}^3$, by \reflem{IsometricNbhd}. Consider a point on an edge. If it has a neighborhood isometric to a ball in ${\mathbb{H}}^3$ then the sum of the dihedral angles around the edge must be $2\pi$. See \reffig{AngleSum2pi}. This sum of dihedral angles is $\sum\arg(z(e_i))$. Moreover there must be no nontrivial translation as we move around the edge. Since the last face of the last triangle glues to the triangle with vertices $0$, $1$, and $\infty$, this condition requires that $\prod z(e_i)=1$.
Conversely, if we have $\prod z(e_i)=1$ and $\sum \arg(z(e_i))=2\pi$, then any point on the edge under the gluing has a ball neighborhood isometric to a ball in ${\mathbb{H}}^3$.
\end{proof}
\begin{figure}
\includegraphics{Figures/Ch04_Gluing/F4-09-AngSum}
\caption{Left: Angle sum must be $2\pi$. Right: An example of why this condition is important.}
\label{Fig:AngleSum2pi}
\end{figure}
The equations $\prod z(e_i) =1$ (and restrictions $\sum \arg(z(e_i))=2\pi$) are called the \emph{edge gluing equations}\index{edge gluing equations}. We have one for each edge. However, since by \reflem{EdgeInvariants} the three edge invariants of a tetrahedron are all determined by a single edge invariant, one ideal tetrahedron contributes at most one unknown to the gluing equations.
\begin{example}[Edge gluing equations for the figure-8 knot]
The figure-8 knot decomposes into two ideal tetrahedra. Choose the two tetrahedra to be \emph{regular}. That is, all dihedral angles are $\pi/3$. We claim that this gives a hyperbolic structure on the figure-8 knot complement.
We wish to find all such structures.
Thurston worked through this example in detail in his notes; we recall his work here \cite[pages~50--52]{thurston}.
\Reffig{Fig8tet} shows the two tetrahedra in the decomposition of the figure-8 knot complement, which we obtained in \refchap{Fig8Decomp}.
These tetrahedra come from the two ideal polyhedra that glue to give the figure-8 knot complement that we discussed in detail in \refchap{Fig8Decomp}; see \reffig{8TopPoly} and \reffig{8BotPoly}. The tetrahedra differ from those in \refchap{Fig8Decomp} in the following ways. First, we have collapsed the bigons.\index{bigon} This gives two remaining edge classes, which we label with one tick mark and with two tick marks. Second, in \reffig{8TopPoly}, we viewed the top ideal polyhedron from the inside; that is, the ball of the polyhedron lay above the plane of projection. To be more consistent in viewing both top and bottom polyhedron, we have rotated our perspective such that now both tetrahedra are viewed from the outside.
\begin{figure}
\import{Figures/Ch04_Gluing/}{F4-10-Fg8Tet.eps_tex}
\caption{The ideal tetrahedra of the figure-8 knot complement.}
\label{Fig:Fig8tet}
\end{figure}
For each tetrahedron, we label each edge with a complex number $z_i$ or $w_i$, to denote the edge invariant associated with that edge. Note that opposite edges in a tetrahedron have the same edge invariant. We also have relationships between $z_1$, $z_2$, and $z_3$ as in \reflem{EdgeInvariants}, and similarly for $w_1$, $w_2$, and $w_3$.
There are two edge classes in the tetrahedra in \reffig{Fig8tet}, labeled with one or two tick marks on the edge. We obtain the edge gluing equations by taking the product of edge invariants for all edges identified with each edge class.
For the edge with one tick mark, we obtain the edge gluing equation
\[ z_1^2\,z_3\,w_1^2\,w_3=1. \]
For the edge with two tick marks,
\[ z_2^2\,z_3\,w_2^2\,w_3=1. \]
We set $z_1 = z$ and $w_1 = w$. From \reflem{EdgeInvariants}, the first edge gluing equation gives
\[ z^2\left(\frac{z-1}{z}\right)w^2\left(\frac{w-1}{w}\right)=1, \]
or
\begin{equation}\label{Eqn:Fig8Gluing}
z\,(z-1)\,w\,(w-1) =1.
\end{equation}
Solving for $z$ in terms of $w$:
\[
z = \frac{1 \pm \sqrt{1+4/(w(w-1))}}{2}.
\]
We need the imaginary parts of $z$ and $w$ to be strictly greater than $0$. For each value of $w$, there is at most one solution for $z$ with positive imaginary part. The solution exists provided that the discriminant $1+4/(w(w-1))$ is not a nonnegative real number. Thus solutions are parameterized by the region of ${\mathbb{C}}$ shown in \reffig{ThurstonRegion} (see also \refex{ThurstonRegion}).
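Numerically, given a value of $w$ in this region, the corresponding $z$ can be computed directly from the quadratic formula above. The following is a minimal sketch in Python; the function name is ours.
\begin{verbatim}
import cmath

def shape_z(w):
    # Solve z(z-1)w(w-1) = 1 for z; return the root with positive
    # imaginary part, or None if neither root is positively oriented.
    disc = cmath.sqrt(1 + 4 / (w * (w - 1)))
    for z in ((1 + disc) / 2, (1 - disc) / 2):
        if z.imag > 0:
            return z
    return None
\end{verbatim}
For example, \verb|shape_z(complex(0.5, 3**0.5/2))| returns approximately $0.5 + 0.866\,i$, the solution noted below.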
\begin{figure}
\import{Figures/Ch04_Gluing/}{F4-11-ThuReg.eps_tex}
\caption{Solutions to edge gluing equations for the figure-8 knot complement are parameterized by the above region.}
\label{Fig:ThurstonRegion}
\end{figure}
Notice that
\[ z = w = \sqrt[3]{-1} = \frac{1}{2} + \frac{\sqrt{3}}{2}i \]
is one solution to the equations. We will see that this gives a complete hyperbolic structure on the complement of the figure-8 knot.\index{complete metric space}
\end{example}
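For readers who like to double-check numerically, here is a short verification in Python (the tolerance is arbitrary) that $z = w = e^{i\pi/3}$ satisfies both edge gluing equations, including the conditions on angle sums:
\begin{verbatim}
import cmath, math

z = w = cmath.exp(1j * math.pi / 3)
z1, z2, z3 = z, 1/(1 - z), (z - 1)/z    # edge invariants, first tetrahedron
w1, w2, w3 = w, 1/(1 - w), (w - 1)/w    # edge invariants, second tetrahedron

eq1 = z1**2 * z3 * w1**2 * w3           # edge with one tick mark
eq2 = z2**2 * z3 * w2**2 * w3           # edge with two tick marks
ang = 2*cmath.phase(z1) + cmath.phase(z3) + 2*cmath.phase(w1) + cmath.phase(w3)

print(abs(eq1 - 1) < 1e-12, abs(eq2 - 1) < 1e-12, abs(ang - 2*math.pi) < 1e-12)
\end{verbatim}
All three tests print \texttt{True}.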
\section{Completeness equations}
Suppose now that $M$ is a 3-manifold with torus boundary. In much of this section, we will assume that $M$ admits a topological ideal triangulation, and moreover we have a solution to the edge gluing equations for this triangulation, thus $M$ admits a hyperbolic structure. We need to consider cusps of the manifold to determine whether this is a complete structure or not.
\begin{definition}\label{Def:CuspNbhd}
Let $M$ be a 3-manifold with torus boundary. Define a \emph{cusp}\index{cusp},
or \emph{cusp neighborhood}\index{cusp neighborhood} of $M$ to be a neighborhood of $\partial M$ homeomorphic to the product of a torus and an interval, $T^2\times I$. Define a \emph{cusp torus}\index{cusp torus} to be a torus component of $\partial M$, or the boundary of a cusp.
\end{definition}
A hyperbolic structure on $M$ induces an affine structure on the boundary of any cusp of $M$.
\begin{theorem}\label{Thm:EuclidCusp}
Let $M$ be a 3-manifold with torus boundary and hyperbolic structure, i.e.\ with $({\operatorname{Isom}}({\mathbb{H}}^3), {\mathbb{H}}^3)$-structure. Then the structure on $M$ is complete\index{complete metric space} if and only if for each cusp of $M$, the induced structure on the boundary of the cusp is a Euclidean structure\index{Euclidean structure} on the torus.
\end{theorem}
\begin{proof}
\Refex{EuclidCusp}. Hint: the proof is very similar to that of the analogous result in two dimensions, \refprop{Complete}.
\end{proof}
\begin{definition}
\label{Def:CuspTriangulation}
Let $M$ have a topological ideal triangulation. If we truncate the vertices of each ideal tetrahedron, we obtain a collection of triangles, each of which lies on the boundary of a cusp. Edges of each triangle inherit a gluing from the gluing of faces of the ideal tetrahedra. This gives a triangulation of each boundary torus, which we call a \emph{cusp triangulation}\index{cusp triangulation}.
\end{definition}
An example for the figure-8 knot is shown in \reffig{Fig8CuspTriang}. The truncated ideal vertices give eight triangles, with labels $a$ through $h$. These glue together on the boundary of the cusp to give a triangulation of the torus as shown. Note that the corner of each triangle is labeled with the edge invariant of the tetrahedron corresponding to the edge meeting that corner.
\begin{figure}
\import{Figures/Ch04_Gluing/}{F4-12-F8Cusp.eps_tex}
\caption{Finding the cusp triangulation of the figure-8 knot complement}
\label{Fig:Fig8CuspTriang}
\end{figure}
\Reffig{Fig8CuspTriang} shows a fundamental region of the cusp triangulation. By tracing through gluings of cusp triangles, we may obtain the full developing image of the cusp torus. \Refthm{EuclidCusp} states that the original manifold is complete\index{complete metric space} if and only if the cusp tori are Euclidean, which will hold if and only if the holonomy\index{holonomy} maps for each element of $\pi_1(T)$ on each cusp torus $T$ are pure Euclidean translations, without rotation or scale.
We can determine whether holonomy\index{holonomy} maps are Euclidean translations directly from the cusp triangulation. Start with a triangle $\Delta$ whose vertices we may assume lie at $0$, $1$, and $z(e_1)$ in the complex plane ${\mathbb{C}}$. Let $\alpha\in\pi_1(T)$. Then the holonomy $\rho(\alpha)$ takes $\Delta$ to a new triangle, which appears in the developing image. The holonomy $\rho(\alpha)$ will be a Euclidean translation if and only if the triangle side from $0$ to $1$ of $\Delta$ is mapped to the side of a triangle of length $1$ pointing in the same direction, without rotation (or scale). To determine whether this holds, we may follow the side of the triangle in the developing image, and obtain exactly its rotation and scale by considering the edge invariants that adjust its length and direction as it moves through the cusp triangulation, as in \reffig{TriGlue}. This can be described efficiently in the following way.
\begin{definition}\label{Def:CompletenessEquations}
Suppose $M$ has a topological ideal triangulation, and let $T$ be the boundary torus of a cusp of $M$. Let $[\alpha]\in\pi_1(T)$, so $\alpha$ is a loop on $T$ in the homotopy class of $[\alpha]$. We associate a complex number $H(\alpha)$ to $\alpha$ as follows.
First, orient the loop $\alpha$ on $T$. The loop $\alpha$ can be homotoped to run through any triangle of the cusp triangulation of $T$ monotonically, i.e.\ in such a way that it cuts off a single corner of each triangle it enters. Denote the edge invariants of the corners cut off by $\alpha$ by $z_1$, $z_2$, \dots, $z_n$. Further associate to each corner a value $\epsilon_i=\pm 1$: if the $i$-th corner cut off by $\alpha$ lies to the left of $\alpha$, set $\epsilon_i=+1$. If the corner lies to the right of $\alpha$, set $\epsilon_i=-1$. Finally, set the value of $H(\alpha)$ to be
\begin{equation}\label{Eqn:CompletenessEquations}
H(\alpha) = \prod_{i=1}^n z_i^{\epsilon_i}
\end{equation}
\end{definition}
\begin{figure}
\import{Figures/Ch04_Gluing/}{F4-13-CuspEx.eps_tex}
\caption{Example for determining $H([\alpha])$}
\label{Fig:ExampleCusp}
\end{figure}
\begin{example}\label{Example:ExampleCusp}
An example cusp is shown in \reffig{ExampleCusp}. For this example, the value of $H(\alpha)$ is given by
\[ H(\alpha) = z_1\,z_2^{-1}\,z_3\,z_4\,z_5^{-1}\,z_6^{-1}\,z_7\,z_8^{-1}.\]
\end{example}
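Computationally, $H(\alpha)$ is simply a signed product over the corners cut off by $\alpha$. A minimal sketch in Python, where the function name and the input format are ours:
\begin{verbatim}
def H(corners):
    # corners: list of pairs (z_i, eps_i), where eps_i is +1 if the i-th
    # corner cut off by alpha lies to its left, and -1 if to its right.
    result = 1
    for z, eps in corners:
        result *= z**eps
    return result
\end{verbatim}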
We will see that $H$ is independent of homotopy class of $\alpha$ (\refex{HomotopyInvarianceH}). For this reason, we sometimes denote the complex number by $H([\alpha])$, or evaluate it on a homotopy class rather than a curve.
\begin{example}\label{Example:Fig8EdgeSequence}
For the figure-8 knot, there is a closed curve on the cusp torus running from the left side of the triangle labeled $a$ on the left of \reffig{Fig8CuspTriang} to the left side of the triangle labeled $a$ on the right of that figure. Call this curve $\alpha$. Then we can compute:
\[ H(\alpha) = z_3\,w_2^{-1}\,z_2\,w_3^{-1}\,z_3\,w_2^{-1}\,z_2\,w_3^{-1} =
\left(\frac{z_2\,z_3}{w_2\,w_3}\right)^2 \]
Another closed curve runs from the base of the triangle labeled $a$ on the left of \reffig{Fig8CuspTriang} to the top of the triangle labeled $h$, also on the left of that figure. Call this curve $\beta$. Then we have:
\[ H(\beta) = z_2^{-1}\,w_1 = \frac{w_1}{z_2}. \]
\end{example}
\begin{proposition}[Completeness equations]\label{Prop:CompletenessEqns}
Let $T$ be the torus boundary of a cusp neighborhood of $M$, where $M$ admits a topological ideal triangulation, and the ideal tetrahedra admit hyperbolic structures that satisfy the edge gluing equations (\refthm{Gluing}). Let $\alpha$ and $\beta$ generate $\pi_1(T)$. If $H(\alpha) = H(\beta)=1$, then the ideal triangulation is a geometric ideal triangulation, i.e.\ the hyperbolic structure on $M$ induced by the hyperbolic structure on the tetrahedra will be a complete structure.\index{complete metric space}
\end{proposition}
The equations $H(\alpha)=1$ and $H(\beta)=1$ are called the \emph{completeness equations}.\index{completeness equations}
\begin{proof}[Proof of \refprop{CompletenessEqns}]
By \refthm{EuclidCusp}, it suffices to show that the induced structure on $T$ is Euclidean. To do so, it suffices to show that the holonomy\index{holonomy} elements $\rho(\alpha)$ and $\rho(\beta)$ are pure translations, with no rotation and scale. Thus we will show $\rho(\alpha)$ and $\rho(\beta)$ do not rotate or scale.
To show this, let $\Delta$ be a triangle met by the curve $\alpha$ used in defining the complex number $H(\alpha)$, and suppose $\alpha$ meets a side $e_1$ of $\Delta$. Let $v$ be a vector with length equal to the length of $e_1$, pointing in the direction of $e_1$ such that the oriented curve $\alpha$ and the vector $v$ are oriented according to the right hand rule. This is true of the vector $v$ shown on the far left of \reffig{CompletenessEqnsProof}.
\begin{figure}
\import{Figures/Ch04_Gluing/}{F4-14-ComEqn.eps_tex}
\caption{A path of vectors in the proof of \refprop{CompletenessEqns}}
\label{Fig:CompletenessEqnsProof}
\end{figure}
The holonomy\index{holonomy} $\rho(\alpha)$ is Euclidean if and only if the image of $v$ under $\rho(\alpha)$ still has the same length as $v$, and points in the same direction as $v$. We determine the effect of holonomy by considering what happens to $v$ in each triangle of the cusp triangulation.
We may rotate $v$ around a vertex of the triangle $\Delta$ meeting $e_1$, and scale, so that the result lines up with a second edge $e_2$ of the triangle, having the same length and direction as $e_2$. We know exactly how the rotation and scale are determined when the vertex of the triangle is labeled with edge invariant $z_1$: if we rotate in a counterclockwise direction, $v$ is adjusted by multiplication by $z_1$, as in \reffig{TriGlue}. If we rotate in a clockwise direction, $v$ is adjusted by multiplication by $1/z_1$.
Now, our path $\alpha$ cuts off exactly one corner of each triangle it meets. This defines a path of edges of triangles, namely, starting with $v$, at each step we have a vector lying on the side of a triangle where $\alpha$ enters that triangle. In this triangle, rotate through the corner cut off by $\alpha$ to produce a new vector pointing in the direction of the side where $\alpha$ exits. An example path of such vectors is shown in \reffig{CompletenessEqnsProof}. When $\alpha$ returns to the initial triangle $\Delta$, the final vector of this path will be parallel to the image of $v$ under $\rho(\alpha)$. Then $\rho(\alpha)$ will be a Euclidean transformation if and only if the final vector in the path has length and direction identical to that of $v$.
On the other hand, the final length and direction of the vector $\rho(\alpha)v$ is given by the product of edge invariants at the corners of each triangle in the path of edges, with edge invariant either multiplied or divided depending on whether the rotation is in the counterclockwise or clockwise direction, respectively. This is exactly the complex number $H(\alpha)$. Thus $\rho(\alpha)$ is Euclidean if and only if $H(\alpha)=1$.
The same argument applies to $H(\beta)$ and $\rho(\beta)$. Since the holonomy group\index{holonomy group} of the cusp is generated by $\rho(\alpha)$ and $\rho(\beta)$, the cusp will be Euclidean if and only if $H(\alpha)=H(\beta)=1$.
\end{proof}
\begin{example}\label{Example:Fig8Completeness}
Returning to the example of the figure-8 knot, in \refexamp{Fig8EdgeSequence}, we found that completeness equations\index{completeness equations} are given by
\[ H(\alpha) = \left(\frac{z_2\,z_3}{w_2\,w_3}\right)^2 \quad \mbox{ and } \quad H(\beta) = \frac{w_1}{z_2}. \]
\Reflem{EdgeInvariants} implies that these can be rewritten in terms of variables $z$ and $w$ alone, as
\[
H(\alpha) = \left( \frac{1}{1-z} \cdot \frac{z-1}{z} \cdot \frac{1-w}{1} \cdot \frac{w}{w-1}\right)^2 = \left( \frac{w}{z} \right)^2 \]
and
\begin{equation}\label{Eqn:2ndFig8Complete}
H(\beta) = w\,(1-z)
\end{equation}
If the hyperbolic structure is complete,\index{complete metric space} then by
\refprop{CompletenessEqns}, $H(\alpha)=H(\beta)=1$, so $z=w$.
From \refeqn{2ndFig8Complete}, $z(z-1)=-1$. Hence the only possibility is $z=w= \frac{1}{2}+i\frac{\sqrt{3}}{2}$.
\end{example}
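The completeness equations\index{completeness equations} can be verified numerically in the same spirit as the edge gluing equations above (again the tolerance is arbitrary):
\begin{verbatim}
import math

z = w = complex(0.5, math.sqrt(3) / 2)
H_alpha = (w / z)**2        # completeness equation for alpha
H_beta = w * (1 - z)        # completeness equation for beta
print(abs(H_alpha - 1) < 1e-12, abs(H_beta - 1) < 1e-12)
\end{verbatim}
Both tests print \texttt{True}.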
\section{Computing hyperbolic structures}
Given a triangulation of a 3-manifold $M$ with torus boundary, we may determine a complete hyperbolic structure on $M$ by solving the edge gluing and completeness equations.\index{completeness equations}\index{complete metric space} However, note this amounts to solving a complicated system of nonlinear equations. Consequently, it is difficult to use these to find exact hyperbolic structures on infinite families of manifolds.
However, in practice, topological triangulations, edge gluing equations, and completeness equations\index{completeness equations} can be found very efficiently by computer for specific, finite examples. The resulting nonlinear system of equations can then be solved numerically. The first software to find hyperbolic structures on knots and 3-manifolds was the program SnapPea,\index{SnapPea} written by Weeks \cite{Weeks:Thesis} (see also \cite{weeks:computation}). This program has allowed researchers to run experiments on large classes of hyperbolic 3-manifolds, making observations and testing conjectures, and has been influential in a great number of results on hyperbolic structures on knots and 3-manifolds.
The SnapPea kernel is now part of a program maintained by Culler, Dunfield, Goerner, and others, reincarnated as SnapPy, and available for free download \cite{SnapPy}.\index{SnapPy} This new program includes much additional functionality, and still remains an excellent tool for research in hyperbolic knot theory.
One issue in the past with finding a hyperbolic structure via SnapPea (SnapPy) was that it only gave a numerical approximation to a hyperbolic structure, with no guarantee that the manifold was actually hyperbolic. This has been addressed in a few ways. The program Snap \cite{Snap} deduces exact solutions from the numerical approximations, which can be used to prove hyperbolicity. In another direction, Moser used analytic techniques to prove that a solution to edge gluing and completeness equations\index{completeness equations} exists in a small neighborhood of an approximate solution \cite{Moser:ProvingHyp}. In \cite{HIKMOT}, interval arithmetic is used to prove hyperbolic structures exist when a structure is computed numerically. Thus using these tools, we can often prove that if SnapPy computes a hyperbolic structure on a knot complement, then the knot is indeed hyperbolic.
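For instance, a minimal SnapPy session along these lines might look as follows, assuming SnapPy is installed; here \texttt{4\_1} denotes the figure-8 knot in SnapPy's tables, and the comments indicate the expected output.
\begin{verbatim}
import snappy

M = snappy.Manifold('4_1')           # figure-8 knot complement
print(M.solution_type())             # 'all tetrahedra positively oriented'
print(M.volume())                    # approximately 2.02988...
print(M.tetrahedra_shapes('rect'))   # shapes of the two ideal tetrahedra
\end{verbatim}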
\section{Exercises}
\begin{exercise}
Write down the edge gluing equations (not completeness equations) for the $6_1$ knot, using the ideal tetrahedra of \refexamp{61TopTriang}. Make appropriate substitutions such that your equations contain exactly one variable per tetrahedron.
\end{exercise}
\begin{exercise}\label{Ex:edge=tet}
Notice that for both the figure-8 knot complement and for the $6_1$ knot, we had exactly the same number of edges as tetrahedra in the ideal triangulation.
\begin{enumerate}
\item[(a)] Prove that this will always be true. That is, prove that if $M$ is any 3-manifold with (possibly empty) boundary consisting of tori, then for any topological ideal triangulation of $M$, the number of edges of the triangulation will always equal the number of tetrahedra.
\item[(b)] Since we have one unknown per ideal tetrahedron, part (a) implies that the number of gluing equations will equal the number of unknowns. However, in fact the gluing equations are always redundant. Prove this fact.
\end{enumerate}
\end{exercise}
\begin{exercise}\label{Ex:52KnotTriang}
In \refchap{Fig8Decomp}, we found a polyhedral decomposition of the $5_2$ knot complement (without bigons).\index{bigon}
Split this into a topological ideal triangulation of the knot complement.
\end{exercise}
\begin{exercise} Using the ideal tetrahedra of \refex{52KnotTriang}, or otherwise,
write down all edge invariants and all edge gluing equations, one variable per tetrahedron.
\end{exercise}
\begin{exercise}
Find a topological ideal triangulation of the $6_3$ knot, edge invariants, and edge gluing equations.
\end{exercise}
\begin{exercise}\label{Ex:ThurstonRegion}
Check that \reffig{ThurstonRegion} does indeed parameterize the space of hyperbolic structures on the figure-8 knot complement. What is the equation of the vertical ray shown in that picture?
\end{exercise}
\begin{exercise}\label{Ex:EuclidCusp}
Prove \refthm{EuclidCusp}: the hyperbolic structure on $M$ is complete\index{complete metric space} if and only if for each cusp of $M$, the induced structure on the boundary of the cusp is a Euclidean structure\index{Euclidean structure} on the torus.
\end{exercise}
\begin{exercise}
For the topological triangulation of the $5_2$ knot of \refex{52KnotTriang}:
\begin{enumerate}
\item[(a)] Find the triangulation of the cusp. Label a fundamental domain, and meridian and longitude.
\item[(b)] Write down completeness equations.\index{completeness equations}
\end{enumerate}
\label{Ex:52CuspTri}
\end{exercise}
\begin{exercise}
Find the cusp triangulation for the complement of the $6_1$ knot from \refexamp{61TopTriang}.
\end{exercise}
\begin{exercise}
Find completeness equations\index{completeness equations} for the $6_1$ or $6_3$ knot.
\end{exercise}
\begin{exercise}\label{Ex:HomotopyInvarianceH}
Suppose $M$ admits an ideal triangulation that satisfies the edge gluing equations.
\begin{enumerate}
\item [(a)] In \refdef{CompletenessEquations}, we claimed that for any closed curve $\alpha$ in a torus boundary component of $\partial M$, we could homotope $\alpha$ in such a way that it cuts off a single corner of each triangle that it meets. Prove this.
\item[(b)] Show that $H([\alpha])$ is independent of the choice of $\alpha$ in the homotopy class of $[\alpha]$. In particular, if $\alpha$ is homotoped to run through different triangles, the value of $H([\alpha])$ is unchanged.
\end{enumerate}
\end{exercise}
\begin{figure}[h]
\import{Figures/Ch04_Gluing/}{F4-15-Thurst.eps_tex}
\caption{Path of vectors going from $e_1$, the oriented edge from $0$ to $1$, to $\rho(-\alpha)(e_1)$}
\label{Fig:ThurstonFig8}
\end{figure}
\begin{exercise}
In Thurston's 1979 notes \cite{thurston}, he computed completeness equations\index{completeness equations} for the figure-8 knot using a method similar to our proof of \refprop{CompletenessEqns}. Namely, he found a path of vectors from an edge on a triangle $\Delta$ to the same edge on $\rho(-\alpha)(\Delta)$ and $\rho(\beta)(\Delta)$. His path of vectors for $\rho(\beta)$ agrees with ours. His path of vectors for $\rho(-\alpha)$ is different from our path for $\rho(\alpha)$, and is shown in \reffig{ThurstonFig8}.
\begin{enumerate}
\item[(a)] Prove that the completeness equation obtained from Thurston's path of vectors is equivalent to our completeness equation.
\item[(b)] More generally, prove that if we replace our path of vectors used to construct the complex number $H([\alpha])$ by any other path of vectors obtained by rotating around vertices of the cusp triangulation, with the same starting and ending vectors, then the equation obtained by multiplying (and dividing) by the edge invariants corresponding to the path of vectors is a completeness equation equivalent to $H([\alpha])=1$.
\end{enumerate}
\end{exercise}
\begin{exercise}
What breaks down when you try to find triangulations and edge gluing equations for non-hyperbolic knots and links, such as the trefoil or the $(2,4)$-torus link?
\end{exercise}
\begin{exercise}
Use the computer program SnapPy to determine which of the knots with seven or fewer crossings admit a hyperbolic structure \cite{SnapPy}. For those that do admit a hyperbolic structure, use SnapPy to find the cusp triangulation of the knot. Obtain a screen shot of this information, which should include cusp triangles as well as a fundamental parallelogram for the cusp.
\end{exercise}
\chapter{Discrete Groups and the Thick--Thin Decomposition}\label{Chap:Margulis}
\blfootnote{Jessica S. Purcell, Hyperbolic Knot Theory}
Suppose we have a complete hyperbolic structure on an orientable 3-manifold $M$. Then the developing map $D\from \widetilde{M}\to{\mathbb{H}}^3$ is a covering map, by \refthm{Developing}. Since $\widetilde{M}$ and ${\mathbb{H}}^3$ are both simply connected, the developing map is a homeomorphism, and since it is a local isometry, it is an isometry. Thus we may view ${\mathbb{H}}^3$ as the universal cover of $M$. The covering transformations are then the elements of the holonomy group\index{holonomy group} $\rho(\pi_1(M)) = \Gamma \leq \operatorname{PSL}(2,{\mathbb{C}})$. Hence $M$ is isometric to the quotient ${\mathbb{H}}^3/\Gamma$.
Subgroups $\Gamma$ of $\operatorname{PSL}(2,{\mathbb{C}})$ can have very nice properties, and have been investigated for many decades. In this chapter, we discuss some classical results in the area and their consequences for hyperbolic 3-manifolds. Some of our discussion follows closely work of J{\o}rgensen and Marden; we recommend the book~\cite{marden} for more details, generalizations, and consequences.
\section{Discrete subgroups of hyperbolic isometries}
\subsection{Isometries and subgroups}
In \refthm{ClassifyPSL(2,C)} we classified elements of $\operatorname{PSL}(2,{\mathbb{C}})$ as elliptic,\index{elliptic} parabolic,\index{parabolic} or loxodromic\index{loxodromic} depending on their fixed points. One of the first things we need is an extension of that theorem.
Before we give the extension, recall that we can view an element of $\operatorname{PSL}(2,{\mathbb{C}})$ as a matrix
\[ A = \mat{a&b\\ c&d}, \mbox{ with } a,b,c,d\in {\mathbb{C}} \mbox{ and } ad-bc=1,\]
and the matrix is well-defined up to multiplication by $\pm {\mathrm{Id}}$. In this chapter, we will frequently write an isometry of ${\mathbb{H}}^3$ as a 2 by 2 matrix with determinant $1$, omitting and ignoring the $\pm$ sign. The sign very rarely affects our arguments, but the reader should be aware that we are suppressing it, for example in the following definition.
\begin{definition}\label{Def:ConjugateTrace}
We say $A\in\operatorname{PSL}(2,{\mathbb{C}})$ is \emph{conjugate}\index{conjugate} to $B\in\operatorname{PSL}(2,{\mathbb{C}})$ if there exists $U\in\operatorname{PSL}(2,{\mathbb{C}})$ such that $A=UBU^{-1}$. The \emph{trace}\index{trace} of $A$ is the trace of its normalized matrix:
\[ \operatorname{tr}\mat{a&b\\c&d} = a+d. \]
\end{definition}
Note there is a sign ambiguity in our definition of trace; again this will not affect our arguments. Note also that conjugate elements have the same trace.
\begin{lemma}\label{Lem:MoreClassifyPSL}
For $A\in\operatorname{PSL}(2,{\mathbb{C}})$,
\begin{itemize}
\item $A$ is \emph{parabolic}\index{parabolic} if and only if $\operatorname{tr}(A)=\pm 2$, and if and only if $A$ is conjugate to \[ z\mapsto z+1.\]
\item $A$ is \emph{elliptic}\index{elliptic} if and only if $\operatorname{tr}(A)\in(-2,2)\subset{\mathbb{R}}\subset{\mathbb{C}}$, and if and only if $A$ is conjugate to
\[ z\mapsto e^{2i\theta}z, \quad \mbox{ with } 2\theta\neq 2\pi n \mbox{ for any } n\in{\mathbb{Z}}. \]
\item $A$ is \emph{loxodromic}\index{loxodromic} if and only if $\operatorname{tr}(A)\in{\mathbb{C}}-[-2,2]$, and if and only if $A$ is conjugate to
\[ z\mapsto \zeta^2z, \quad \mbox{with } |\zeta|>1.\]
\end{itemize}
\end{lemma}
\begin{proof}
\Refex{MoreClassifyPSL}
\end{proof}
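As an informal illustration of \reflem{MoreClassifyPSL} (not part of the text's development), the following Python sketch classifies a matrix, normalized to have determinant $1$, by its trace. The numerical tolerance is ad hoc and purely for demonstration.
\begin{verbatim}
# Illustrative sketch: classify an element of PSL(2,C), given as a 2x2
# complex matrix with determinant 1, by its trace.
import numpy as np

def classify(A, tol=1e-9):
    assert abs(np.linalg.det(A) - 1) < tol, "normalize to determinant 1 first"
    t = np.trace(A)
    if abs(t.imag) < tol and abs(abs(t.real) - 2) < tol:
        return "parabolic (or the identity)"
    if abs(t.imag) < tol and -2 < t.real < 2:
        return "elliptic"
    return "loxodromic"

# Example: z -> 2z corresponds to diag(sqrt(2), 1/sqrt(2)), a loxodromic.
A = np.array([[np.sqrt(2), 0], [0, 1/np.sqrt(2)]], dtype=complex)
print(classify(A))    # prints "loxodromic"
\end{verbatim}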
\begin{definition}\label{Def:DiscreteGroup}
A subgroup of $\operatorname{PSL}(2,{\mathbb{C}})$ is said to be \emph{discrete}\index{discrete group} if it contains no sequence of distinct elements converging to the identity element.
A discrete subgroup of $\operatorname{PSL}(2,{\mathbb{C}})$ is often called a \emph{Kleinian group}\index{Kleinian group}.
\end{definition}
An example of a discrete group is a subgroup generated by a single loxodromic\index{loxodromic} element, or a single parabolic\index{parabolic} element. These are the simplest such groups. They are so simple that they are examples of what are called \emph{elementary groups}; see \refdef{Elementary}. Examples of discrete groups in general can be quite complicated. In \refprop{FreePropDisc}, we will prove that the holonomy group\index{holonomy group} of any complete\index{complete metric space} hyperbolic 3-manifold is always a discrete group. Meanwhile, consider the example of the figure-8 knot complement.
\begin{example}\label{Example:Fig8GroupDiscrete}
Let $K$ be the figure-8 knot, and give $S^3-K$ its complete hyperbolic structure by gluing two regular ideal tetrahedra, with face-pairings\index{face-pairing isometry} as in \reffig{Fig8tet}. We will find generators of the holonomy group,\index{holonomy group} which is a discrete subgroup of $\operatorname{PSL}(2,{\mathbb{C}})$, as we will see in \refprop{FreePropDisc}. These are obtained by face-pairing isometries,\index{face-pairing isometry} as follows.
Place the two ideal tetrahedra in ${\mathbb{H}}^3$, putting ideal vertices for one tetrahedron at $0$, $1$, $\omega$, and $\infty$, where $\omega = {\frac{1}{2}} + i\frac{\sqrt{3}}{2}$, and putting the ideal vertices of the other tetrahedron at $1$, $\omega$, $\omega+1$, and $\infty$. This glues the faces labeled $A$ along the ideal triangle\index{ideal triangle} with vertices $1$, $\omega$, and $\infty$, to obtain one connected fundamental region for the knot complement, shown in \reffig{Fig8FundRegion}.
\begin{figure}
\import{Figures/Ch05_Margulis/}{F5-01-FunReg.eps_tex}
\caption{A connected fundamental region for the figure-8 knot complement}
\label{Fig:Fig8FundRegion}
\end{figure}
The manifold $S^3-K$ is obtained by gluing the remaining faces labeled $B$, $C$, and $D$. These gluings, or face-pairings,\index{face-pairing isometry} correspond to holonomy\index{holonomy} isometries, which we will denote by $T_B$, $T_C$, and $T_D$, respectively.
A calculation (\refex{Fig8Gluings}) shows that the gluing isometries are given by:
\begin{equation}\label{Eqn:Fig8Gluings}
T_B = \frac{i}{\sqrt{\omega}} \mat{1&1\\1&-\omega^2}, \quad T_C=\mat{1&\omega\\0&1}, \quad T_D = \mat{2&-1\\1&0}.
\end{equation}
These three gluing isometries generate the holonomy group\index{holonomy group} for $S^3-K$. In fact, $T_B$ can be written as a (somewhat complicated) product involving $T_C$ and $T_D$ and their inverses.
Riley was the first to prove that $S^3-K$ has a hyperbolic structure \cite{Riley:Fig8}. He did so by taking a presentation of the fundamental group of $S^3-K$ with two generators, and finding an explicit representation of the fundamental group into $\operatorname{PSL}(2,{\mathbb{C}})$. Exercises~\ref{Ex:Riley},
\ref{Ex:Riley2},
and~\ref{Ex:Riley3} explore this work a little further.
\end{example}
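As a quick numerical sanity check (not part of the text), one can verify that the matrices in \refeqn{Fig8Gluings} are normalized to have determinant $1$, so that they represent elements of $\operatorname{PSL}(2,{\mathbb{C}})$. The following sketch assumes NumPy is available.
\begin{verbatim}
# Check that the face-pairing matrices T_B, T_C, T_D have determinant 1.
import numpy as np

omega = 0.5 + 1j * np.sqrt(3) / 2           # omega = exp(i pi / 3)
T_B = (1j / np.sqrt(omega)) * np.array([[1, 1], [1, -omega**2]])
T_C = np.array([[1, omega], [0, 1]])
T_D = np.array([[2, -1], [1, 0]], dtype=complex)

for name, T in [("T_B", T_B), ("T_C", T_C), ("T_D", T_D)]:
    print(name, "det =", np.round(np.linalg.det(T), 10))   # each should be 1
\end{verbatim}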
We finish this section with one quick condition equivalent to a group being discrete.
\begin{lemma}\label{Lem:EquivDiscrete}
A subgroup $G\leq \operatorname{PSL}(2,{\mathbb{C}})$ is discrete if and only if it does not contain an infinite sequence of distinct elements that converges to some element $A \in \operatorname{PSL}(2,{\mathbb{C}})$.
\end{lemma}
\begin{proof}
One implication is trivial: If $G$ is not discrete, by definition it contains an infinite sequence of distinct elements converging to the identity in $\operatorname{PSL}(2,{\mathbb{C}})$.
For the other direction, suppose $\{A_n\} \subset G$ is an infinite sequence of distinct elements of $G$ converging to $A\in\operatorname{PSL}(2,{\mathbb{C}})$. Consider $\{A_{n+1}A_n^{-1}\}\subset G$. Note the sequence converges to the identity. To show $G$ is not discrete, it remains to show that $\{A_{n+1}A_n^{-1}\}$ contains infinitely many distinct elements. Suppose not, so that this sequence takes only finitely many values. Then there is a fixed $C\in G$ and a subsequence along which $A_{n+1}A_n^{-1}=C$, i.e.\ $A_{n+1}=CA_n$. Since $A_{n+1}A_n^{-1}\to{\mathrm{Id}}$, we must have $C={\mathrm{Id}}$, and thus $A_{n+1}=A_n$ along this subsequence. This contradicts the fact that $\{A_n\}$ is a sequence of distinct elements. Thus $G$ is not discrete.
\end{proof}
\subsection{Sequences of isometries}
We can learn a lot about subgroups of $\operatorname{PSL}(2,{\mathbb{C}})$ by considering sequences of group elements. For example, note that the definition of a discrete group involves sequences. We also have the following result, which will be used later in the chapter.
\begin{lemma}\label{Lem:Sequences}
Let $\{A_n\}$ be a sequence of elements of $\operatorname{PSL}(2,{\mathbb{C}})$. Then either a subsequence of $\{A_n\}$ converges to some $A\in\operatorname{PSL}(2,{\mathbb{C}})$, or there exists a point $q\in\partial{\mathbb{H}}^3$ such that for all $x\in{\mathbb{H}}^3$, the sequence $\{A_n(x)\}$ has a subsequence converging to $q$.
\end{lemma}
\begin{proof}
Let $p_n$, $q_n$ denote the fixed points of $A_n$; note we could have $p_n=q_n$. Then $\{p_n\}$ and $\{q_n\}$ are sequences in $\partial{\mathbb{H}}^3 \cong S^2$, which is compact, so they have convergent subsequences. Replace $A_n$, $p_n$, $q_n$ by a subsequence such that $p_n\to p$ and $q_n\to q$. Again note that $p$ could equal $q$.
\smallskip
\textbf{Case 1.} Suppose $p\neq q$. Then for large enough $n$, $p_n\neq q_n$.
Consider an isometry $R_n$ of ${\mathbb{H}}^3$ mapping $p_n$ to $0$ and $q_n$ to $\infty$.
Furthermore, for concreteness, fix a point $y\in \partial{\mathbb{H}}^3$, independent of $n$, that is distinct from all points of the sequences $\{p_n\}$ and $\{q_n\}$ and from $p$ and $q$. We may take $R_n$ to map $y$ to $1$.
If we view $R_n$ as a sequence of matrices, for example, we see that $R_n$ converges to the isometry $R\in\operatorname{PSL}(2,{\mathbb{C}})$ taking $p$ to $0$, $q$ to $\infty$, and $y$ to $1$. Consider $B_n = R_nA_nR_n^{-1}$. This is an isometry in $\operatorname{PSL}(2,{\mathbb{C}})$ fixing $0$ and $\infty$, hence of the form $B_n(z) = a_nz$ for some nonzero $a_n\in{\mathbb{C}}$. If $\{|a_n|\}$ has a subsequence bounded away from $0$ and $\infty$, then some subsequence of $\{a_n\}$ converges to $a\in{\mathbb{C}}-\{0\}$. Hence there is a subsequence of $\{B_n\}$ with $B_n \to B$, where $B$ is the isometry $B(z)=az$. This is an element of $\operatorname{PSL}(2,{\mathbb{C}})$. It follows that $A_n = R_n^{-1}B_nR_n$ converges to $A=R^{-1}B R \in \operatorname{PSL}(2,{\mathbb{C}})$.
Otherwise, after passing to a subsequence, either $|a_n|\to\infty$ or $|a_n|\to 0$. If $|a_n|\to\infty$, then $B_n(x)\to\infty$ for every point $x\in{\mathbb{H}}^3$, and so $A_n(x) = R_n^{-1}B_nR_n(x)$ converges to $q\in\partial{\mathbb{H}}^3$ for all $x\in{\mathbb{H}}^3$. Similarly, if $|a_n|\to 0$, then $B_n(x)\to 0$, and $A_n(x)$ converges to $p\in\partial{\mathbb{H}}^3$ for all $x\in{\mathbb{H}}^3$. In either case the conclusion of the lemma holds.
\smallskip
\textbf{Case 2.} Now suppose $p=q$. Then again we will conjugate $A_n$ by an isometry $R_n$ taking $q_n$ to infinity. For concreteness, choose distinct points $y_1$ and $y_2$ in $\partial{\mathbb{H}}^3$, not among the points of $\{p_n\}$ or $\{q_n\}$ and not equal to $q$. Let $R_n$ be the isometry taking $y_1$ to $1$, $y_2$ to $0$, and $q_n$ to $\infty$. Then $R_n$ converges to the isometry $R$ taking $y_1$, $y_2$, and $q$ to $1$, $0$, and $\infty$, respectively. Finally let $B_n = R_nA_nR_n^{-1}$. Note $B_n$ fixes $\infty$, hence it is of the form $B_n(z)=a_nz+b_n$ for some $a_n$, $b_n\in{\mathbb{C}}$ with $a_n\neq 0$. If $a_n=1$, $B_n$ is parabolic\index{parabolic} and has unique fixed point $\infty$. Otherwise, the other fixed point of $B_n$ is $b_n/(1-a_n)$.
If $\{|b_n|\}$ has a bounded subsequence, then some subsequence $b_n\to b$. In that case, either $a_n=1$ for large $n$, or since $p_n, q_n$ converge to $p=q$, the fixed point $b_n/(1-a_n)$ converges to $\infty$. Thus $a_n$ converges to $1$. In any case, $B_n(z)$ converges to $B(z)=z+b$. This is an element of $\operatorname{PSL}(2,{\mathbb{C}})$. It follows that $A_n=R_n^{-1}B_nR_n$ converges to $R^{-1}BR\in\operatorname{PSL}(2,{\mathbb{C}})$.
If $\{|b_n|\}$ has no bounded subsequence, then $|b_n|\to\infty$. The fixed points of $B_n$ are $R_n(q_n)=\infty$ and, when $a_n\neq 1$, $R_n(p_n)=b_n/(1-a_n)$. Since $R_n(p_n)$ converges to $R(p)=\infty$, and $(1-a_n)/b_n=0$ whenever $a_n=1$, in either case $(1-a_n)/b_n\to 0$. Rewrite $B_n$ to have the form
\[ B_n(z) = b_n\left(\frac{(a_n-1)z}{b_n}+1\right)+z.\]
Then as $n\to\infty$, $B_n(z) \to \infty$ for all $z\in \partial{\mathbb{H}}^3$. Thus $B_n(x)\to\infty$ for all $x\in{\mathbb{H}}^3$. It follows that $A_n(x) = R_n^{-1}B_nR_n(x)$ converges to $q$ for all $x\in{\mathbb{H}}^3$.
\end{proof}
\subsection{Action of groups of isometries}
We return to the problem of showing that holonomy groups\index{holonomy group} of complete hyperbolic 3-manifolds are discrete. We will show this by considering the action of these groups on ${\mathbb{H}}^3$.
\begin{definition}\label{Def:ProperlyDiscont}
The action of a group $G\leq \operatorname{PSL}(2,{\mathbb{C}})$ on ${\mathbb{H}}^3$ is \emph{properly discontinuous}\index{properly discontinuous}\index{group action!properly discontinuous} if for every closed ball $B\subset {\mathbb{H}}^3$, the set $\{\gamma\in G \mid \gamma(B)\cap B\neq \emptyset\}$ is a finite set.
\end{definition}
\begin{definition}\label{Def:FreeAction}
The action of a group $G\leq\operatorname{PSL}(2,{\mathbb{C}})$ is \emph{free}\index{free group action}\index{group action!free} if the identity element of $G$ is the only element to have a fixed point in ${\mathbb{H}}^3$.
\end{definition}
Note that parabolics\index{parabolic} and loxodromics\index{loxodromic} have fixed points on $\partial {\mathbb{H}}^3$, but not in the interior of ${\mathbb{H}}^3$. However, elliptics\index{elliptic} have fixed points in the interior of ${\mathbb{H}}^3$. Thus the action of $G$ is free\index{free group action}\index{group action!free} if and only if $G$ contains no elliptics.
\begin{lemma}\label{Lem:DiscretePropDisc}
A subgroup of $\operatorname{PSL}(2,{\mathbb{C}})$ is discrete if and only if its action on ${\mathbb{H}}^3$ is properly discontinuous.\index{properly discontinuous}\index{group action!properly discontinuous}
\end{lemma}
\begin{proof}
Suppose $G$ is a subgroup of $\operatorname{PSL}(2,{\mathbb{C}})$ that is not discrete, so there exists a sequence $\{A_n\}$ in $G$ with $A_n\to{\mathrm{Id}}$. Then for all $x\in {\mathbb{H}}^3$, the hyperbolic distance $d(x,A_nx)\to 0$. Let $B$ be any closed ball about $x$ with radius $R>0$. For $n$ such that $d(x,A_nx)<R$, the set
\[ \{A\in G \mid A(B)\cap B\neq \emptyset\} \] contains $A_n$. Since this is true for infinitely many $A_n$, the action is not properly discontinuous.\index{properly discontinuous}\index{group action!properly discontinuous}
Now suppose that for $G\leq\operatorname{PSL}(2,{\mathbb{C}})$, there exists a closed ball $B$ of radius $R$ such that the set $\{A\in G\mid A(B)\cap B\neq\emptyset\}$ is infinite. Let $\{A_n\}$ be a sequence of distinct elements in this set. Note that for $x\in B$, the hyperbolic distance $d(x,A_nx)$ is bounded by $4R$, for all $n$. Thus $\{A_nx\}$ has no subsequence converging to a point on $\partial{\mathbb{H}}^3$. \Reflem{Sequences} implies that $\{A_n\}$ has a subsequence converging to $A\in\operatorname{PSL}(2,{\mathbb{C}})$. Then \reflem{EquivDiscrete} implies $G$ is not discrete.
\end{proof}
We are now ready to prove the main result in this section, namely that a complete hyperbolic 3-manifold has a discrete holonomy group,\index{holonomy group} and conversely a discrete subgroup of $\operatorname{PSL}(2,{\mathbb{C}})$ that acts freely\index{free group action}\index{group action!free} gives rise to a complete hyperbolic 3-manifold.
\begin{proposition}\label{Prop:FreePropDisc}
The action of a group $G\leq\operatorname{PSL}(2,{\mathbb{C}})$ on ${\mathbb{H}}^3$ is free\index{free group action}\index{group action!free} and properly discontinuous\index{properly discontinuous}\index{group action!properly discontinuous} if and only if ${\mathbb{H}}^3/G$ is a 3-manifold with a complete hyperbolic structure and with covering projection ${\mathbb{H}}^3\to{\mathbb{H}}^3/G$.
\end{proposition}
\begin{proof}
Suppose the action of $G$ on ${\mathbb{H}}^3$ is free\index{free group action}\index{group action!free} and properly discontinuous.\index{properly discontinuous}\index{group action!properly discontinuous} Let $x\in{\mathbb{H}}^3/G$, and let $\widetilde{x}\in{\mathbb{H}}^3$ be a point that projects to $x$ under the map ${\mathbb{H}}^3\to{\mathbb{H}}^3/G$. Because the action of $G$ is properly discontinuous, there is a closed ball $B_x$ that intersects only finitely many of its translates. Because the action is free,\index{free group action}\index{group action!free} we may shrink $B_x$ until all its translates are disjoint. Then the interior of $B_x$ maps isometrically to a neighborhood of $x$ in ${\mathbb{H}}^3/G$, so ${\mathbb{H}}^3/G$ is a hyperbolic manifold. Moreover, this neighborhood is evenly covered (by translates of the interior of $B_x$), and so the quotient map is a covering projection.
Conversely, suppose ${\mathbb{H}}^3/G$ is a hyperbolic manifold and $p\from {\mathbb{H}}^3\to {\mathbb{H}}^3/G$ is a covering projection. For any $x\in{\mathbb{H}}^3$, the action of $G$ permutes the preimages $\{p^{-1}p(x)\}$. Only the identity of $G$ fixes $x$, so the action is free.\index{free group action}\index{group action!free}
Let $B\subset{\mathbb{H}}^3$ be a closed ball. Consider the compact set $B\times B$. For any $(x,y)\in B\times B$, we claim there exist neighborhoods $U_{xy}$ of $x$ and $V_{xy}$ of $y$ such that $g(U_{xy})\cap V_{xy}\neq \emptyset$ for at most one $g\in G$. To see this, if $y$ is not in the orbit of $x$, then $p(x)$ and $p(y)$ have disjoint neighborhoods in ${\mathbb{H}}^3/G$. Shrink these neighborhoods to be evenly covered, and let $U_{xy}$ and $V_{xy}$ be neighborhoods of $x$ and $y$ respectively homeomorphic to the disjoint neighborhoods of $p(x)$ and $p(y)$. For any $g\in G$, $g(U_{xy})\cap V_{xy} = \emptyset$ in this case. On the other hand, if $y=g_1(x)$ for some $g_1\in G$, then take $U_{xy}$ to be homeomorphic to an evenly covered neighborhood of $p(x)=p(y)$ in ${\mathbb{H}}^3/G$, and let $V_{xy}=g_1(U_{xy})$. Then $g(U_{xy})\cap V_{xy}\neq\emptyset$ only when $g=g_1$.
Now $B\times B$ is compact, and the set $\{ U_{xy}\times V_{xy}\}_{(x,y)\in B\times B}$ forms an open cover. Thus there is a finite subcover $\{U_1\times V_1, \dots, U_n\times V_n\}$, where $U_i\times V_i$ has the property that $g(U_i)\cap V_i\neq \emptyset$ only when $g=g_i\in G$.
If $\gamma\in G$ is a group element such that there exists $x \in \gamma(B)\cap B$, then consider $(\gamma^{-1}(x),x) \in B\times B$. There must be some $U_i\times V_i$ containing $(\gamma^{-1}(x),x)$. Since $x\in \gamma(U_i)\cap V_i$, it follows that $\gamma=g_i$. Thus $\gamma$ must be one of the elements $g_1, \dots, g_n$ associated to the finite covering. It follows that the action is properly discontinuous.\index{properly discontinuous}\index{group action!properly discontinuous}
\end{proof}
\Refprop{FreePropDisc} implies that if ${\mathbb{H}}^3/G$ is a hyperbolic 3-manifold, then $G$ contains no elliptics\index{elliptic}. For this reason, we will exclude elliptic elements from discrete groups $G$ whenever possible to simplify our proofs in the rest of the chapter. In fact, many results below also hold for discrete groups that contain elliptics. Details can be found, for example, in Marden \cite{marden}.
\section{Elementary groups}
\begin{definition}\label{Def:Elementary}
A subgroup $G\leq\operatorname{PSL}(2,{\mathbb{C}})$ is \emph{elementary}\index{elementary group} if one of the following holds.
\begin{enumerate}
\item The union of all fixed points on $\partial{\mathbb{H}}^3$ of all nontrivial elements of $G$ is a single point on $\partial{\mathbb{H}}^3$.
\item The union of all fixed points on $\partial{\mathbb{H}}^3$ of all nontrivial elements of $G$ consists of exactly two points on $\partial{\mathbb{H}}^3$.
\item\label{Itm:FixedPtInt} There exists $x\in {\mathbb{H}}^3$ such that for all $g\in G$, $g(x)=x$.
\end{enumerate}
The group is \emph{nonelementary}\index{nonelementary group} if it is not elementary.
\end{definition}
Elementary groups will be important subgroups of the discrete groups we study. Because of that, we will need to know more about their form.
\begin{proposition}\label{Prop:ClassInfElementary}
Let $G$ be a discrete nontrivial elementary subgroup\index{elementary group} of $\operatorname{PSL}(2,{\mathbb{C}})$ without elliptics. Then either
\begin{enumerate}
\item the union of fixed points of nontrivial elements of $G$ is a single point on $\partial{\mathbb{H}}^3$, $G$ is isomorphic to ${\mathbb{Z}}$ or ${\mathbb{Z}}\times{\mathbb{Z}}$, and $G$ is generated by parabolics\index{parabolic} (fixing the same point on $\partial{\mathbb{H}}^3$), or
\item the union of fixed points of nontrivial elements of $G$ consists of two points on $\partial {\mathbb{H}}^3$, $G$ is isomorphic to ${\mathbb{Z}}$, and $G$ is generated by a single loxodromic\index{loxodromic} leaving invariant the line between the fixed points.
\end{enumerate}
\end{proposition}
\begin{proof}
If the union of all fixed points of nontrivial elements of $G$ consists of a single point on $\partial{\mathbb{H}}^3$, then $G$ must contain only parabolics\index{parabolic} fixing that point. Conjugate so that the fixed point is $\infty\in\partial{\mathbb{H}}^3$. Then $G$ preserves each horosphere about $\infty$; such a horosphere is isometric to the Euclidean plane $P$. The group $G$ acts on $P$ by Euclidean translations. Since $G$ is discrete, $G$ must be generated by either one translation, in which case $G\cong{\mathbb{Z}}$, or two linearly independent translations, in which case $G\cong{\mathbb{Z}}\times{\mathbb{Z}}$.
If the union of all fixed points of nontrivial elements of $G$ consists of two points, then $G$ contains only loxodromics\index{loxodromic} fixing the axis between them. The group $G$ acts on the axis by translations; the fact that the group is discrete and contains no elliptics means that there is a positive minimal translation distance $\tau$ under this group action, realized by some element. Let $A\in G$ realize this minimal translation distance, i.e.\ $d(x,Ax)=\tau$ for $x$ on the axis. We claim $G = \langle A \rangle$.
First, we show that every $C\in G$ translates along the axis by a distance $n\tau$ for some $n\in{\mathbb{Z}}$. For if some $C\in G$ has translation distance that is not a multiple of $\tau$, then for $x$ on the axis, $C(x)$ lies strictly between $A^n(x)$ and $A^{n+1}(x)$ for some integer $n$. But then $CA^{-n}\in G$ moves $A^n(x)$ a nonzero distance strictly less than $\tau$, contradicting the minimality of $\tau$. Now suppose $C\in G$ translates by $n\tau$ for some integer $n$. Then $CA^{-n}$ fixes the axis pointwise, so it is either elliptic or the identity. Because $G$ contains no elliptics, $CA^{-n}={\mathrm{Id}}$, and so $C=A^n$. Thus $G$ is cyclic, generated by $A$.
\end{proof}
Consider the first case of \refprop{ClassInfElementary}.
\begin{figure}
\includegraphics{Figures/Ch05_Margulis/F5-02-Cusp.eps}
\caption{Left: The quotient of horosphere $\partial C$ under the group ${\mathbb{Z}}$ generated by a single parabolic\index{parabolic} gives a cylinder, or annulus. Right: The quotient of $\partial C$ under ${\mathbb{Z}}\times{\mathbb{Z}}$ is a torus}
\label{Fig:Rank1Rank2Cusp}
\end{figure}
\begin{definition}\label{Def:RankOneRankTwoCusp}
Suppose $G$ is an infinite elementary\index{elementary group} discrete group in $\operatorname{PSL}(2,{\mathbb{C}})$ fixing a single point on $\partial{\mathbb{H}}^3$. We may conjugate $G$ so that the fixed point is the point at infinity. Let $H$ be the closed horoball of height $1$:
\[ H = \{ (x,y,z) \mid z\geq 1\}. \]
\Refprop{ClassInfElementary} tells us that $G$ is isomorphic to ${\mathbb{Z}}$ or ${\mathbb{Z}}\times{\mathbb{Z}}$.
If $G\cong{\mathbb{Z}}$, the quotient of the horoball $H/G$ is homeomorphic to the space $A\times [1,\infty)$, where $A$ is an annulus, or cylinder; see \reffig{Rank1Rank2Cusp}. We say that $H/G$ is a \emph{rank-1 cusp}\index{rank-1 cusp}\index{cusp!rank-1}.
If $G\cong{\mathbb{Z}}\times{\mathbb{Z}}$, the quotient of the horoball $H/G$ is homeomorphic to $T\times[1,\infty)$, where $T$ is a Euclidean torus; see \reffig{Rank1Rank2Cusp}, right. We say that $H/G$ is a \emph{rank-2 cusp}\index{rank-2 cusp}\index{cusp!rank-2}.
\end{definition}
\Refprop{ClassInfElementary} has an immediate corollary giving information about ${\mathbb{Z}}\times{\mathbb{Z}}$ subgroups of discrete groups, which will be used in later chapters.
\begin{corollary}[${\mathbb{Z}}\times{\mathbb{Z}}$ subgroups]\label{Cor:ZxZSubgroup}
Suppose a discrete group $G$ without elliptics has a subgroup isomorphic to ${\mathbb{Z}}\times{\mathbb{Z}}$. Then the subgroup is generated by two parabolic\index{parabolic} elements fixing the same point on the boundary at infinity\index{boundary at infinity} $\partial{\mathbb{H}}^3$ of ${\mathbb{H}}^3$.
\end{corollary}
\begin{proof}
Let $A$ and $B$ denote the generators of the subgroup of $G$ isomorphic to ${\mathbb{Z}}\times{\mathbb{Z}}$. Since $A$ and $B$ commute, they must have the same fixed points on the boundary at infinity\index{boundary at infinity} $\partial{\mathbb{H}}^3$ of ${\mathbb{H}}^3$ (\refex{PSL(2,C)Commute}). Thus $H = \langle A,B\rangle$ is an elementary discrete group\index{elementary group} isomorphic to ${\mathbb{Z}}\times{\mathbb{Z}}$. By \refprop{ClassInfElementary}, $H$ must be generated by parabolics\index{parabolic} fixing the same point on $\partial {\mathbb{H}}^3$.
\end{proof}
Discrete elementary groups\index{elementary group} are often defined in terms of the set of accumulation points of the group on $\partial {\mathbb{H}}^3$; for example this is the definition in \cite{thurston}. We review that definition here as well.
\begin{definition}\label{Def:LimitSet}
Let $G\leq\operatorname{PSL}(2,{\mathbb{C}})$ be a discrete group, and let $x\in {\mathbb{H}}^3$ be any point. The \emph{limit set $\Lambda(G)$}\index{limit set} is defined to be the set of accumulation points on $\partial{\mathbb{H}}^3$ of the orbit $G(x)$.
\end{definition}
\begin{lemma}\label{Lem:LimitSetIndX}
The limit set $\Lambda(G)$ is well-defined, independent of choice of $x$ in \refdef{LimitSet}.
\end{lemma}
\begin{proof}
Suppose $\{A_n\}\subset G$ is a sequence such that $A_n(x)$ converges to a point $p\in \Lambda(G)\subset \partial {\mathbb{H}}^3$. Let $y\in {\mathbb{H}}^3$. Then the distance between $x$ and $y$ is a constant, equal to the distance between $A_n(x)$ and $A_n(y)$ for all $n$. Thus as $n\to\infty$, $A_n(x)$ and $A_n(y)$ lie a bounded distance apart, but $A_n(x)$ approaches $p$. This is possible only if $A_n(y)$ approaches the same point $p$ on $\partial{\mathbb{H}}^3$.
\end{proof}
Consider a few examples of groups $G$ and limit sets $\Lambda(G)$. If $G$ is generated by a single loxodromic\index{loxodromic} element $g$, then its limit set $\Lambda(G)$ consists of the two fixed points of $g$ on $\partial{\mathbb{H}}^3$: one is an accumulation point for $g^n(x)$, and the other for $g^{-n}(x)$. If $G$ is generated by a single parabolic\index{parabolic} element, then $\Lambda(G)$ consists of a single point. If $G$ contains both a loxodromic element $g$ and a parabolic element $h$, then $\Lambda(G)$ contains the fixed points of $g$ on $\partial{\mathbb{H}}^3$, as well as the fixed points of $h^n\circ g$ for all $n$; this is a countably infinite set. Finally, if $G$ is the identity group, consisting only of the identity element, then $\Lambda(G)$ is empty.
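To make the first of these examples concrete, the following sketch (an illustration only, with an arbitrarily chosen matrix and starting point) iterates a loxodromic M\"obius transformation forwards and backwards, and shows the orbit accumulating at the two fixed points.
\begin{verbatim}
# The orbit of a point under powers of a loxodromic accumulates at its
# two fixed points, one attracting and one repelling.
import numpy as np

A = np.array([[3, 1], [1, 1]], dtype=complex)
A = A / np.sqrt(np.linalg.det(A))     # normalize to determinant 1; trace > 2

def mobius(M, z):
    a, b, c, d = M.ravel()
    return (a * z + b) / (c * z + d)

a, b, c, d = A.ravel()
print("fixed points:", np.roots([c, d - a, -b]))  # roots of c z^2 + (d-a) z - b

forward = backward = 5.0 + 2.0j       # an arbitrary starting point
for _ in range(30):
    forward = mobius(A, forward)                    # -> attracting fixed point
    backward = mobius(np.linalg.inv(A), backward)   # -> repelling fixed point
print(forward, backward)   # approximately 1 + sqrt(2) and 1 - sqrt(2)
\end{verbatim}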
The following is often given as the definition of an elementary discrete subgroup of $\operatorname{PSL}(2,{\mathbb{C}})$.
\begin{lemma}\label{Lem:Elementary2}
A discrete subgroup $G\leq\operatorname{PSL}(2,{\mathbb{C}})$ with no elliptics is elementary\index{elementary group} if and only if $\Lambda(G)$ consists of $0$, $1$, or $2$ points. \qed
\end{lemma}
\section{Thick and thin parts}
We are now ready to put together facts about elementary\index{elementary group} and nonelementary\index{nonelementary group} discrete groups to prove a remarkable result on the geometry and topology of hyperbolic 3-manifolds, namely that any such manifold decomposes into a thick part and completely classified thin parts. To state the result precisely, we give a few definitions.
\begin{definition}\label{Def:InjRad}
Suppose $M$ is a complete hyperbolic 3-manifold and $x\in M$. The \emph{injectivity radius}\index{injectivity radius} of $x$, denoted ${\operatorname{injrad}}(x)$, is defined to be the supremal radius $r$ such that a metric $r$-ball around $x$ is embedded.
\end{definition}
\begin{definition}\label{Def:ThickThin}
Let $M$ be a complete hyperbolic 3-manifold, and let $\epsilon>0$. Define the \emph{$\epsilon$-thin part}\index{thin part} of $M$, denoted $M^{<\epsilon}$, to be
\[ M^{<\epsilon} = \{ x\in M \mid {\operatorname{injrad}}(x)<\epsilon/2 \}. \]
Similarly, the \emph{$\epsilon$-thick part}\index{thick part}, denoted $M^{>\epsilon}$, is defined to be
\[ M^{>\epsilon} = \{ x\in M \mid {\operatorname{injrad}}(x)>\epsilon/2 \}. \]
We also have closed versions $M^{\geq\epsilon}$ and $M^{\leq\epsilon}$ defined in the obvious way.
\end{definition}
\begin{theorem}[Structure of thin part]\label{Thm:ThinPart}\index{thin part!structure of thin part}
There exists a universal constant $\epsilon_3>0$ such that for $0<\epsilon \leq \epsilon_3$, the $\epsilon$-thin part of any complete, orientable, hyperbolic 3-manifold $M$ consists of tubes around short geodesics, rank-1 cusps, and/or rank-2 cusps.
\end{theorem}
\begin{figure}
\import{Figures/Ch05_Margulis/}{F5-03-Thin.eps_tex}
\caption{A schematic picture of a hyperbolic 3-manifold $M$, with $M^{<\epsilon}$ a collection of cusps and tubes}
\label{Fig:SchematicMargulis}
\end{figure}
A cartoon illustrating \refthm{ThinPart} is given in \reffig{SchematicMargulis}.
\begin{definition}\label{Def:MargulisConstant}
The supremum of all constants $\epsilon_3$ satisfying \refthm{ThinPart} is called the \emph{Margulis constant}\index{Margulis constant}. More generally, given a complete hyperbolic 3-manifold $M$, a number $\epsilon>0$ is said to be a \emph{Margulis number for $M$}\index{Margulis number for $M$} if $M^{<\epsilon}$ satisfies the conclusions of \refthm{ThinPart}, i.e.\ $M^{<\epsilon}$ consists of tubes around short geodesics, rank-1, and/or rank-2 cusps. The Margulis constant is therefore the infimum over all complete hyperbolic 3-manifolds $M$ of the supremum of all Margulis numbers for $M$.
\end{definition}
As of the writing of this book, the optimal Margulis constant is still unknown, although there are bounds on its value.
R.~Meyerhoff gave what is currently the best lower bound on $\epsilon_3$ in \cite[Section~9]{meyerhoff}, that it is at least $0.104$. As for an upper bound, M.~Culler has discovered a closed hyperbolic 3-manifold with Margulis number less than $0.616$ using SnapPea \cite{weeks:computation}.
We save the proof of \refthm{ThinPart} until the end of this section. We will see that it is a consequence of a well-known theorem concerning the structure of discrete groups of isometries, commonly called the Margulis lemma,\index{Margulis lemma} which appears in a paper of Ka{\v{z}}dan and Margulis \cite{KazhdanMargulis}. The actual Margulis lemma is very general, concerning discrete groups acting on symmetric spaces. We restrict to the case of discrete subgroups of $\operatorname{PSL}(2,{\mathbb{C}})$ acting freely\index{free group action}\index{group action!free} on hyperbolic space. The consequence we will need is the following.
\begin{theorem}[Universal Elementary Neighborhoods]\label{Thm:Margulis}
There is a universal constant $\epsilon_3>0$ such that for all $x\in{\mathbb{H}}^3$, and for any discrete group $G\leq \operatorname{PSL}(2,{\mathbb{C}})$ without elliptics, if $H$ denotes the subgroup of $G$ generated by all elements of $G$ that translate $x$ distance less than $\epsilon_3$, then $H$ is elementary.\index{elementary group}
\end{theorem}
We will give a proof of \refthm{Margulis} in \refsec{Margulis}. Before that, a few remarks are in order. First, the Margulis lemma holds when we allow elliptics;\index{elliptic} this appears in Wang \cite{Wang1969} in the full generality of the theorem of Ka{\v{z}}dan and Margulis. Second, the form of \refthm{Margulis} above, concerning discrete subgroups of $\operatorname{PSL}(2,{\mathbb{C}})$, is due to J{\o}rgensen and Marden, although their result is more general in that it also allows elliptics. Their proof appears in \cite{marden}, and is the basis for the proof that we include below in \refsubsec{Technical}.
However, before we discuss the proof, we show how \refthm{Margulis} implies \refthm{ThinPart} (Structure of thin part).
First, we need to relate translation distance to injectivity radius.\index{injectivity radius}
\begin{lemma}\label{Lem:InjRad}
Let $M$ be a complete, orientable, hyperbolic 3-manifold with $M\cong {\mathbb{H}}^3/\Gamma$ for a discrete group $\Gamma\leq\operatorname{PSL}(2,{\mathbb{C}})$. For any $x\in M$ with lift $\tilde{x}\in {\mathbb{H}}^3$,
\[ {\operatorname{injrad}}(x) = {\frac{1}{2}}\inf_{A\neq{\mathrm{Id}} \in \Gamma} \{ d(\tilde{x}, A\tilde{x})\}. \]
Moreover, this is realized. That is, there exists nontrivial $A\in \Gamma$ such that $2\,{\operatorname{injrad}}(x) = d(\tilde{x},A\tilde{x})$.
\end{lemma}
\begin{proof}
A metric $r$-ball is embedded at $x$ if and only if for all $A\neq {\mathrm{Id}}\in \Gamma$, the metric $r$-ball $B(r,\tilde{x})$ is disjoint from the metric $r$-ball $A(B(r,\tilde{x})) = B(r,A\tilde{x})$. This holds if and only if the translation distance $d(\tilde{x}, A\tilde{x})$ is at least $2r$ for all $A$.
Now suppose ${\operatorname{injrad}}(x)=b$. Then a metric $b$-ball is embedded, but for any $\epsilon>0$, a metric $(b+\epsilon)$-ball is not embedded. Thus for each $\epsilon>0$, there is $A_\epsilon\in\Gamma$ such that $d(\tilde{x},A_\epsilon\tilde{x})<2(b+\epsilon)$. If the set $\{A_\epsilon\}$ contains infinitely many distinct elements, then we obtain a sequence $\{A_n\}$ of distinct elements such that $A_n(\tilde{x})$ stays a bounded distance from $\tilde{x}$. By \reflem{Sequences}, a subsequence of $\{A_n\}$ converges to some $A\in \operatorname{PSL}(2,{\mathbb{C}})$, implying $\Gamma$ is not discrete by \reflem{EquivDiscrete}. This is a contradiction. Thus $\{A_\epsilon\}$ is a finite set. Let $A$ be an element of this finite set with $d(\tilde{x},A\tilde{x})$ minimal; then $d(\tilde{x},A\tilde{x})<2(b+\epsilon)$ for every $\epsilon>0$, so $d(\tilde{x},A\tilde{x})\leq 2b$. Since the $b$-ball is embedded, $d(\tilde{x},A\tilde{x})\geq 2b$. This $A$ satisfies the conclusion of the lemma.
\end{proof}
We are now ready to complete the proof of \refthm{ThinPart}, assuming \refthm{Margulis}.
\begin{proof}[Proof of \refthm{ThinPart} (Structure of thin part)]
Take $\epsilon_3>0$ as in \refthm{Margulis}. Let $M \cong {\mathbb{H}}^3/\Gamma$ be a complete, orientable, hyperbolic 3-manifold, so $\Gamma\leq \operatorname{PSL}(2,{\mathbb{C}})$ is a discrete subgroup with no elliptics.
For $\epsilon\leq\epsilon_3$, if $x\in M^{<\epsilon}$, then by definition ${\operatorname{injrad}}(x)<\epsilon/2$.\index{injectivity radius} By \reflem{InjRad}, it follows that there exists $A\neq{\mathrm{Id}}\in \Gamma$ such that $d(\tilde{x},A\tilde{x}) < \epsilon$ for any lift $\tilde{x}$ of $x$. But \refthm{Margulis} implies that the subgroup $\Gamma_\epsilon$ of $\Gamma$ generated by all $A \in \Gamma$ such that $d(\tilde{x}, A\tilde{x})<\epsilon$ is elementary.\index{elementary group} Since $\Gamma_\epsilon$ contains $A\neq{\mathrm{Id}}$, \refprop{ClassInfElementary} implies that $\Gamma_\epsilon$ either fixes a single point $\zeta\in\partial{\mathbb{H}}^3$ and is generated by parabolics\index{parabolic} fixing $\zeta$, or $\Gamma_\epsilon$ is generated by a single loxodromic\index{loxodromic} preserving an axis $\ell \subset {\mathbb{H}}^3$.
Suppose first that $\Gamma_\epsilon$ fixes a single point $\zeta\in\partial{\mathbb{H}}^3$. Then $\Gamma_\epsilon$ is generated by one or two parabolics\index{parabolic} (\refprop{ClassInfElementary}), and $\tilde{x}$ lies on a horosphere $H$ about $\zeta$ that is preserved by $\Gamma_\epsilon$. Conjugate so that $\zeta=\infty$; then $H$ is a horizontal plane at some height $C>0$, and the horoball bounded by $H$ consists of the points of height at least $C$.
Suppose $\tilde{y}$ lies in this horoball, so that $\tilde{y}$ has coordinates $(a+b\,i,t)$ with $t\geq C$. A generator $A$ of $\Gamma_\epsilon$ takes $\tilde{y}$ to a point with the same height $t$. A calculation in this case (\refex{CalculateHypDist}) shows that
\[ \epsilon > d(\tilde{x},A\tilde{x}) \geq d(\tilde{y},A\tilde{y}),\]
and it follows that in the quotient ${\mathbb{H}}^3/\Gamma$, the point $\tilde{y}$ maps to $M^{<\epsilon}$. Since this is true for every point in the horoball bounded by $H$, $M^{<\epsilon}$ contains the quotient of a horoball under the elementary group\index{elementary group} $\Gamma_\epsilon$; this is a rank-1 or rank-2 cusp.
Now suppose that $\Gamma_\epsilon$ is generated by a single loxodromic\index{loxodromic} $A$ preserving the axis $\ell$. Let $R$ denote the distance from $\tilde{x}$ to the axis $\ell$, and let $T_R$ denote the set of points in ${\mathbb{H}}^3$ of distance $R$ from the axis $\ell$. Then $T_R$ bounds a tube consisting of all points in ${\mathbb{H}}^3$ of distance at most $R$ from $\ell$. If $\tilde{y}$ is any point within this tube, then one can calculate (\refex{CalculateDistTube}) that $d(\tilde{y},A\tilde{y}) \leq d(\tilde{x},A\tilde{x})<\epsilon$, so $M^{<\epsilon}$ contains the quotient of a tube about $\ell$ under the elementary group\index{elementary group} $\langle A\rangle$. This is a tube around a short geodesic.
\end{proof}
\section{Hyperbolic manifolds with finite volume}
In \refchap{GluingCompleteness} we gave a method that will allow us to compute (complete) hyperbolic structures on many 3-manifolds, including many knot complements. Once we have a hyperbolic structure on a 3-manifold, we have equipped the manifold with a Riemannian metric with very nice properties, for example the metric can be described in local coordinates by \refeqn{3dHypMetric}.
One of the simplest invariants we can compute from a hyperbolic metric is the volume of the underlying manifold. This gives a good measure of the ``size'' of the manifold. In \refchap{Volume}, we will discuss volumes in some detail, including how to compute volumes of hyperbolic 3-manifolds including knot and link complements. Meanwhile, we give an application of the thick--thin decomposition of hyperbolic 3-manifolds to classifying those with finite volume.
\begin{theorem}\label{Thm:FteVolIffTorusBdy}
A hyperbolic 3-manifold $M$ has finite volume if and only if $M$ is closed (compact without boundary), or $M$ is homeomorphic to the interior of a compact manifold $\overline{M}$ with torus boundary components.
\end{theorem}
\begin{proof}
If $M$ is closed then a fundamental domain for $M$ in its universal cover ${\mathbb{H}}^3$ is a compact set, hence has finite volume. If $M$ is the interior of a manifold with torus boundary, then each such boundary component will be realized as a cusp in the complete hyperbolic structure on $M$. The complement of the cusps of $M$ in $M$ is compact, hence has finite volume. We now show that each cusp has finite volume.
Consider the universal cover ${\mathbb{H}}^3$. For any cusp $C$, we may apply an isometry to ${\mathbb{H}}^3$ so that the point at infinity projects to that cusp, and a horoball of height $1$ projects to an embedded horoball neighborhood of the cusp. On the horosphere of height $1$, some parallelogram $A$ will be a fundamental region for the torus of the cusp, since the structure is complete (\refthm{EuclidCusp}). Then the volume of the cusp is given by
\[ \int_C d\operatorname{vol} = \int_{t=1}^{\infty}\int_A d\operatorname{vol} = \int_{t=1}^\infty\int_A \frac{dx\,dy\,dt}{t^3} = {\frac{1}{2}}\operatorname{area}(A). \]
(See \refex{CuspVolume}.) Thus every cusp has finite volume. Since the volume of $M$ is the sum of the volumes of the compact region with cusps removed, as well as a finite number of finite-volume cusps, the manifold $M$ has finite volume.
To prove the converse, we use \refthm{ThinPart}. Suppose $M$ is a complete hyperbolic manifold with finite volume. Fix $\epsilon>0$ less than the universal constant $\epsilon_3$ of \refthm{ThinPart}, and consider $M^{<\epsilon}$ and $M^{\geq\epsilon}$. By \refthm{ThinPart}, $M^{<\epsilon}$ consists of cusps and tubes. Note that a rank-1 cusp has infinite volume, hence since $M$ has finite volume, $M^{<\epsilon}$ consists of rank-2 cusps and tubes, each of which has finite volume. On the other hand, $M^{\geq\epsilon}$ has finite volume. Moreover, any point in $M^{\geq\epsilon}$ is contained in an embedded ball of radius at least ${\frac{1}{2}}\epsilon$. If two points in $M^{\geq\epsilon}$ have distance at least $\epsilon$, then the balls of radius ${\frac{1}{2}}\epsilon$ about each are disjointly embedded in $M$. Thus a collection of points with pairwise distance at least $\epsilon$ in $M^{\geq\epsilon}$ leads to a pairwise disjoint collection of $\epsilon/2$-balls. Because $M$ has finite volume there can only be finitely many of these. Starting with any such collection of points, we may complete the collection to a maximal collection of points of $M^{\geq\epsilon}$ of pairwise distance at least $\epsilon$; there are finitely many of these and the $\epsilon/2$-balls around each are embedded. Then the closed $\epsilon$-balls about the collection must contain $M^{\geq\epsilon}$. The union of these balls is a compact set, and $M^{\geq\epsilon}$ is a closed subset. Hence $M^{\geq\epsilon}$ is compact.
Now the union of $M^{\geq\epsilon}$ and any tubes of $M^{<\epsilon}$ is the union of compact sets, hence compact. This is a manifold with boundary homeomorphic to a finite collection of tori corresponding to the finite number of cusps of $M^{<\epsilon}$. Attach a closed collar neighborhood of each torus boundary component, and call the result $N$; each collar neighborhood is homeomorphic to $T^2\times[0,1]$, where $T^2$ is a torus. Then by construction, the manifold $M$ is homeomorphic to the interior of $N$.
\end{proof}
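As a quick symbolic check of the cusp volume computation in the proof above (purely illustrative, assuming SymPy is available), the integral of the hyperbolic volume form over $A\times[1,\infty)$ evaluates to half the Euclidean area of $A$:
\begin{verbatim}
# Check: the integral over t in [1, infinity) of 1/t^3 equals 1/2, so the
# cusp volume is area(A)/2.
import sympy as sp

t = sp.symbols('t', positive=True)
area_A = sp.symbols('area_A', positive=True)   # Euclidean area of A

cusp_volume = area_A * sp.integrate(1 / t**3, (t, 1, sp.oo))
print(cusp_volume)                             # prints area_A/2
\end{verbatim}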
By \refthm{FteVolIffTorusBdy}, the complement of any knot or link in $S^3$ with a hyperbolic structure must have finite hyperbolic volume.
\section{Universal elementary neighborhoods}\label{Sec:Margulis}
In this section, we give a proof of \refthm{Margulis}, on the existence of universal elementary neighborhoods.
In fact, we split this section into two subsections. The first gives a proof of \refthm{Margulis} that is elementary, in the sense that it uses only the machinery of subgroups of $\operatorname{PSL}(2,{\mathbb{C}})$ developed in this chapter. However, it is also quite technical, requiring calculations that, upon first glance, may seem mysterious and arbitrary. Nevertheless, by the end of \refsubsec{Technical}, the proof of \refthm{Margulis} is complete.
The second subsection is an attempt to put \refthm{Margulis} into a wider mathematical context. Although we have presented a proof that uses only the tools of $\operatorname{PSL}(2,{\mathbb{C}})$, related theorems hold for much more general Lie groups. The technical calculations of \refsubsec{Technical} can be seen as instances of more general, and in some sense simpler, mathematical phenomena, put into a broader context.
\subsection{A technical proof in $\operatorname{PSL}(2,{\mathbb{C}})$}\label{Subsec:Technical}
We now give a complete proof of \refthm{Margulis}, restricting to the setting of $\operatorname{PSL}(2,{\mathbb{C}})$.
We need a few more tools before we begin. Namely, \refprop{ClassInfElementary} classifies elementary discrete groups without elliptics. We also need the following result giving more information on \emph{nonelementary} discrete groups without elliptics.
\begin{lemma}\label{Lem:InfiniteElementary}
If $G$ is a nonelementary discrete subgroup\index{nonelementary group} of $\operatorname{PSL}(2,{\mathbb{C}})$ that contains no elliptics, then the following hold.
\begin{enumerate}
\item\label{Itm:Gfinite} $G$ is infinite.
\item\label{Itm:LoxNoFixed} For any nontrivial $A\in G$, there exists a loxodromic\index{loxodromic} $B\in G$ that has no common fixed points with $A$.
\item\label{Itm:LoxOneCommonIndiscrete} If $B\in G$ is loxodromic, then there is no nontrivial $C\in G$ that has exactly one fixed point in common with $B$.
\item\label{Itm:2Lox} $G$ contains two loxodromic elements with no fixed points in common.
\end{enumerate}
\end{lemma}
\begin{proof}
The group $G$ must be nontrivial; since it contains no elliptics it must contain a loxodromic\index{loxodromic} or parabolic.\index{parabolic} Such an element has infinite order, so $G$ is infinite, proving \refitm{Gfinite}.
Next we show \refitm{LoxOneCommonIndiscrete}. Suppose $B$ is loxodromic,\index{loxodromic} and $C$ has exactly one fixed point in common with $B$; we will show that the group generated by $B$ and $C$ is indiscrete, contradicting the fact that $G$ is discrete. Conjugate the group: \reflem{MoreClassifyPSL} implies we may assume $B=\mat{\rho&0\\0&1/\rho}$, with fixed points $0$ and $\infty$, and after further conjugating by $z\mapsto 1/z$ if necessary, we may assume the common fixed point of $B$ and $C$ is $\infty$. Since $C$ fixes $\infty$ but not $0$, it has the form $C=\mat{a&b\\0&1/a}$ where $b\neq 0$. Then
\[ B^nCB^{-n}C^{-1} = \mat{1& ab\;(\rho^{2n}-1) \\ 0& 1}. \]
If $|\rho|<1$, let $n\to\infty$. If $|\rho|>1$, let $n\to-\infty$. In either case, $B^nCB^{-n}C^{-1}$ approaches the parabolic\index{parabolic} $\mat{1&-ab\\0&1}$. \Reflem{EquivDiscrete} now implies that the subgroup generated by $B$ and $C$ is not discrete, therefore $G$ is not discrete.
Now we show \refitm{LoxNoFixed}. There are two cases depending on whether $A$ is parabolic\index{parabolic} or loxodromic.\index{loxodromic} Note that if we can show the result for a conjugate group $UGU^{-1}$ for $U\in\operatorname{PSL}(2,{\mathbb{C}})$, then the result holds for $G$, so in both cases we will replace $G$ by a conjugate group at the first step.
\textbf{Case 1.} Suppose $A$ is parabolic.\index{parabolic} Then by \reflem{MoreClassifyPSL}, $A$ is conjugate to $z\mapsto z+1$, so we may assume $A=\mat{1&1\\0&1}$ and $A$ fixes $\infty$. Because $G$ is nonelementary, there exists $C\in G$ that does not fix $\infty$. If $C$ is loxodromic,\index{loxodromic} we are done. If not, $C$ must be parabolic,\index{parabolic} and $C=\mat{a&b\\c&d}$ with $c\neq 0$. Note that $A^nC$ cannot fix $\infty$ for any integer $n$, and $\operatorname{tr}(A^nC) = a+nc+d = nc\pm 2$. For $|n|$ sufficiently large, this cannot be in $[-2,2]$, so $A^nC$ is the desired loxodromic by \reflem{MoreClassifyPSL}.
\textbf{Case 2.} Suppose $A$ is loxodromic.\index{loxodromic} Then after conjugating, \reflem{MoreClassifyPSL} implies we may assume $A=\mat{\rho&0\\0&\rho^{-1}}$ with $|\rho|>1$, so $A$ fixes $0$ and $\infty$. Because $G$ is nonelementary and discrete, \refitm{LoxOneCommonIndiscrete} implies there is $C=\mat{a&b\\c&d} \in G$ that does not fix either $0$ or $\infty$ (so $b, c\neq 0$). If $C$ happens to be loxodromic,\index{loxodromic} we are done. If not, $C$ is parabolic,\index{parabolic} so $a+d=\pm 2$. Then $A^nC$ also has distinct fixed points from those of $A$ for any integer $n$, and $\operatorname{tr}(A^nC)=a\rho^n+d\rho^{-n}$. Since $a+d=\pm 2$, at least one of $a$ and $d$ is nonzero, so letting $n\to\infty$ if $a\neq 0$, and $n\to-\infty$ otherwise, the trace eventually lies outside $[-2,2]$; hence for such $n$, $A^nC$ is loxodromic\index{loxodromic} by \reflem{MoreClassifyPSL}. This concludes the proof of \refitm{LoxNoFixed}.
Finally, to prove part \refitm{2Lox}, we use part \refitm{LoxNoFixed}. Suppose $A\in G$ is not the identity. Then \refitm{LoxNoFixed} implies there is a loxodromic\index{loxodromic} $B\in G$ with distinct fixed points from $A$. If $A$ is also loxodromic, we are done. Otherwise, apply \refitm{LoxNoFixed} to $B$, to obtain a loxodromic $C$ with no fixed points in common with $B$. Then $B$ and $C$ are the desired loxodromics.
\end{proof}
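The key matrix identity used in the proof of \refitm{LoxOneCommonIndiscrete} above can also be checked symbolically. The following sketch (illustration only, assuming SymPy is available) verifies that the commutator $B^nCB^{-n}C^{-1}$ is upper triangular with upper-right entry $ab(\rho^{2n}-1)$.
\begin{verbatim}
# Symbolic check of the identity used above:
# B^n C B^{-n} C^{-1} = [[1, a*b*(rho^(2n) - 1)], [0, 1]].
import sympy as sp

rho, a, b, n = sp.symbols('rho a b n', nonzero=True)
C     = sp.Matrix([[a, b], [0, 1/a]])
Cinv  = sp.Matrix([[1/a, -b], [0, a]])             # inverse of C, since det C = 1
Bn    = sp.Matrix([[rho**n, 0], [0, rho**(-n)]])   # B^n for a symbolic exponent n
Bninv = sp.Matrix([[rho**(-n), 0], [0, rho**n]])   # B^{-n}

commutator = (Bn * C * Bninv * Cinv).applyfunc(lambda e: sp.factor(sp.simplify(e)))
print(commutator)   # Matrix([[1, a*b*(rho**(2*n) - 1)], [0, 1]]), up to rearrangement
\end{verbatim}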
The following theorem, on convergence of nonelementary discrete groups, is due to J{\o}rgensen and Klein \cite{jorgensen-klein}, using previous work of J{\o}rgensen \cite{jorgensen}.
\begin{theorem}[J{\o}rgensen and Klein, 1982]\label{Thm:JorgensenKlein}
Let
\[ G_n= \langle A_{1,n}, A_{2,n}, \dots, A_{r,n}\rangle\] be a sequence of $r$-generator, nonelementary,\index{nonelementary group} discrete subgroups of $\operatorname{PSL}(2,{\mathbb{C}})$ such that $A_k = \lim_{n\to\infty} A_{k,n}$ exists and is an element of $\operatorname{PSL}(2,{\mathbb{C}})$ for each $k$. Then $G=\langle A_1,A_2, \dots, A_r\rangle$ is also nonelementary and discrete. Moreover, for sufficiently large $n$, the map $A_k\to A_{k,n}$ for each $k$ extends to a homomorphism from $G$ to $G_n$.
\end{theorem}
The proof of \refthm{JorgensenKlein} follows from an analysis of various properties of elements of $\operatorname{PSL}(2,{\mathbb{C}})$ and of discrete subgroups, and it is not unlike the proofs of many of the other results in this chapter. However, it would lead us a little further afield than we wish to go, into technicalities of $\operatorname{PSL}(2,{\mathbb{C}})$. The full proof can be found in the original papers; Marden also gives an exposition closely following the original proof in \cite{marden}. We refer the interested reader to those references.
Meanwhile, we don't actually need the full strength of \refthm{JorgensenKlein}; we only need the following immediate consequence.
\begin{corollary}\label{Cor:LimitNonelementary}
Suppose $\{\langle A_n,B_n\rangle\}$ is a sequence of nonelementary\index{nonelementary group} discrete subgroups of $\operatorname{PSL}(2,{\mathbb{C}})$ such that $\lim A_n=A$ and $\lim B_n=B$ in $\operatorname{PSL}(2,{\mathbb{C}})$. Then $\langle A, B\rangle$ is a nonelementary discrete subgroup of $\operatorname{PSL}(2,{\mathbb{C}})$.\qed
\end{corollary}
\begin{proof}[Proof of \refthm{Margulis}]
First we establish some notation. For fixed $x\in{\mathbb{H}}^3$ and $A\in\operatorname{PSL}(2,{\mathbb{C}})$, let $d(x,Ax)$ denote the distance in ${\mathbb{H}}^3$ between $x$ and $Ax$. For fixed $r>0$, let $G(r,x)$ denote the set
\[ G(r,x) = \{A\in G \mid d(x, Ax) < r\}.
\]
The group generated by $G(r,x)$ will be denoted by $\langle G(r,x) \rangle$.
Our goal is to show that there exists $r>0$ such that for all discrete $G$ without elliptics and for all $x$, the group $\langle G(r,x)\rangle$ is elementary.\index{elementary group}
As a first step, we show that if we fix a discrete group $G$ with no elliptics and fix $x$, then there exists $r>0$ such that the group $\langle G(r,x)\rangle$ is elementary.
For suppose this is not the case. Then for a sequence $r_n\to 0$, each $\langle G(r_n,x)\rangle$ is nonelementary. It follows that there exists a sequence of distinct $A_n\in G(r_n,x)$ with $d(x,A_nx)<r_n$. But then \reflem{Sequences} implies that $A_n$ must converge to some $A\in\operatorname{PSL}(2,{\mathbb{C}})$. Using \reflem{EquivDiscrete}, we see that this contradicts the fact that $G$ is a discrete group. So for $r>0$ sufficiently small, $\langle G(r,x)\rangle$ is elementary, and it follows that $G(r,x)$ contains finitely many elements. By choosing $r>0$ smaller than the translation distance of each of these elements, we find that $G(r,x)$ contains only the identity element. Note that the identity group is elementary.
Now we will prove the more general result, that there is a universal $r>0$, independent of $G$ and $x$, such that $\langle G(r,x)\rangle$ is always elementary. Again suppose not. Then there is a sequence $r_n\to 0$, a sequence of discrete groups $G_n\leq\operatorname{PSL}(2,{\mathbb{C}})$ without elliptics, and a sequence of points $x_n\in{\mathbb{H}}^3$ such that $\langle G_n(r_n,x_n)\rangle$ is not elementary.
We will simplify the argument by replacing $x_n$ with a fixed $x$ for all $n$: choose any $x\in {\mathbb{H}}^3$, and let $R_n \in \operatorname{PSL}(2,{\mathbb{C}})$ be an isometry mapping $x_n$ to $x$. Consider the group $R_n G_n R_n^{-1}$. Note that $A\in G_n(r_n,x_n)$ if and only if $R_n A R_n^{-1}$ is in $R_nG_nR_n^{-1}(r_n,x)$, and so $\langle R_nG_nR_n^{-1}(r_n,x)\rangle$ is nonelementary. Thus if we replace $G_n$ by $R_nG_nR_n^{-1}$, we may work with a single fixed value of $x$. So we assume there is a fixed $x$ and sequences $r_n\to 0$ and $G_n$ so that $\langle G_n(r_n,x)\rangle$ is nonelementary.
Now fix $n$. Our next goal is to find $A_n$ and $B_n$ in $G_n(r_n,x)$ such that $\langle A_n,B_n\rangle$ is nonelementary. Since $\langle G_n(r_n,x)\rangle$ is nonelementary, \reflem{InfiniteElementary} implies that there exist loxodromics\index{loxodromic} $S_n$ and $T_n$ with no common fixed points in $\langle G_n(r_n,x)\rangle$, and certainly they generate a nonelementary group. However, we need to take some care to ensure that $A_n$ and $B_n$ are actually in $G_n(r_n,x)$. To do this, we use the first part of this proof: consider the groups $\langle G_n(\rho,x)\rangle$ as $\rho$ ranges between $0$ and $r_n$. We have observed that for some $\rho_n<r_n$, the group $\langle G_n(\rho_n,x)\rangle$ will consist only of the identity element. As $\rho$ increases, the sets $G_n(\rho,x)$ will be nested. There will be some value $0<\mu_n\leq r_n$ such that $\langle G_n(\rho,x)\rangle$ is elementary for $\rho<\mu_n$ but $\langle G_n(\mu_n,x)\rangle$ is nonelementary. We may assume $\mu_n=r_n$.
Moreover, there is some $\tau_n<r_n$ such that for $\tau_n\leq \rho < r_n$, the groups $\langle G_n(\rho,x)\rangle$ are all elementary and all equal to the group $\langle G_n(\tau_n,x)\rangle$.
Suppose that the elementary group $\langle G_n(\tau_n,x)\rangle$ is infinite with two fixed points on $\partial {\mathbb{H}}^3$. Then \refprop{ClassInfElementary} implies that it contains a loxodromic\index{loxodromic} $A_n\in G_n(\tau_n,x)$ fixing a line $\ell$. Since $\langle G_n(r_n,x)\rangle$ is not elementary, $G_n(r_n,x)$ must contain an element $B_n$ that does not fix both endpoints of $\ell$; by \refitm{LoxOneCommonIndiscrete}, $B_n$ then has no fixed points in common with $A_n$. So $A_n$ and $B_n$ are elements of $G_n(r_n,x)$ with no common fixed points, and $\langle A_n, B_n\rangle$ is not elementary.
Now suppose that the elementary group $\langle G_n(\tau_n,x)\rangle$ fixes a single point $\zeta \in \partial{\mathbb{H}}^3$. Then $G_n(\tau_n,x)$ contains a parabolic\index{parabolic} $A_n$. Since $\langle G_n(r_n,x)\rangle$ is not elementary, $G_n(r_n,x)$ contains some $B_n$ that does not fix $\zeta$. So again $A_n$ and $B_n$ are elements of $G_n(r_n,x)$ with no common fixed points, and $\langle A_n, B_n\rangle$ is not elementary.
Finally suppose that the elementary group $\langle G_n(\tau_n,x)\rangle$ consists only of the identity element. Since $\langle G_n(r_n,x)\rangle$ is nonelementary with no elliptics, the generating set $G_n(r_n,x)$ must contain two elements $A_n$ and $B_n$ with no common fixed point. Thus $\langle A_n, B_n \rangle$ is not elementary.
In all cases, we have a nonelementary subgroup with two generators, $\langle A_n,B_n\rangle$, and $A_n$, $B_n \in G_n(r_n,x)$. Note that $A_n(x)\to x$ and $B_n(x)\to x$, so \reflem{Sequences} implies there are subsequences of $\{A_n\}$ and $\{B_n\}$ converging to $A\in\operatorname{PSL}(2,{\mathbb{C}})$ and $B\in\operatorname{PSL}(2,{\mathbb{C}})$, respectively. Then \refcor{LimitNonelementary} implies that $\langle A,B \rangle$ is nonelementary.
On the other hand, $A_n, B_n \in G_n(r_n,x)$, so as $n\to\infty$, $A_n$ and $B_n$ must converge to elements of $\operatorname{PSL}(2,{\mathbb{C}})$ fixing $x$. Thus $\langle A, B\rangle$ fixes $x$, hence it is elementary by definition. This contradiction finishes the proof.
\end{proof}
\subsection{A sketch of a broader result in Lie groups}
The Universal Elementary Neighborhoods theorem, \refthm{Margulis}, which we proved using properties of $\operatorname{PSL}(2,{\mathbb{C}})$ in the previous subsection, actually follows quickly from a broader result in Lie groups due to Ka{\v{z}}dan and Margulis~\cite{KazhdanMargulis}. We will not go into many details on Lie groups here, but we do include a sketch of some of the ideas.
\begin{definition}\label{Def:Commutator}
Let $G$ be a group with subgroups $H$ and $K$. The group $[H,K]$ is defined to be the subgroup of $G$ generated by elements $[h,k] = hkh^{-1}k^{-1}$ for all $h\in H$ and $k\in K$.
The $m$-th \emph{commutator}\index{commutator} $G^m$ of $G$ is defined recursively by $G^1 = [G,G]$, and $G^{m+1} = [G,G^m]$, for $m\geq 1$.
A group is \emph{nilpotent}\index{nilpotent group} if for some integer $m$, $G^m=\{1\}$.
\end{definition}
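For example, any abelian group $G$ is nilpotent, since $G^1 = [G,G] = \{1\}$. A standard nonabelian example is the integer Heisenberg group, consisting of upper triangular $3\times 3$ integer matrices with $1$'s on the diagonal: its commutator subgroup $G^1$ is the (infinite cyclic) center, so $G^2 = [G,G^1] = \{1\}$ and the group is nilpotent of class two.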
The following is due to Zassenhaus, proved in 1937~\cite{Zassenhaus}.
\begin{theorem}[Zassenhaus Theorem]\label{Thm:Zassenhaus}\index{Zassenhaus Theorem}
Let $G$ be a Lie group. Then there is a neighborhood of the identity $U_Z \subset G$ such that for each discrete subgroup $\Gamma\leq G$, the group generated by $\Gamma \cap U_Z$ is nilpotent.
\end{theorem}
\begin{proof}[Proof sketch]
The derivative of the commutator map $[\cdot,\cdot]\from G\times G\to G$ at $(1,1)$ can be shown to be identically $0$, so $[\cdot,\cdot]$ is a strict contraction in a neighborhood $U$ of the identity, in both variables. Thus for $\gamma_1, \dots, \gamma_m\in \Gamma\cap U$, the iterated commutator
\[ y_m = [\gamma_1, [\gamma_2, [\dots [\gamma_{m-1},\gamma_m]\dots]]] \]
must lie in $U$ and must satisfy $\lim_{m\to\infty} y_m = 1$. Because $\Gamma$ is a discrete group, there exists an integer $N$ such that for $m\geq N$, $y_m =1$. Then the group $\langle \Gamma\cap U\rangle$ is nilpotent.
\end{proof}
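To get a concrete feel for the contraction used in this sketch, the following short numerical experiment is included as an illustration only; it is not part of any proof. It picks two arbitrary matrices in $\operatorname{SL}(2,{\mathbb{C}})$ close to the identity and computes iterated commutators, printing their distance from the identity, which shrinks rapidly.
\begin{verbatim}
# Numerical illustration only: iterated commutators of matrices near the
# identity in SL(2,C) contract toward the identity.  The sample matrices
# below are arbitrary choices, not taken from any particular discrete group.
import numpy as np

def comm(g, h):
    # group commutator [g,h] = g h g^{-1} h^{-1}
    return g @ h @ np.linalg.inv(g) @ np.linalg.inv(h)

I2 = np.eye(2, dtype=complex)
g1 = np.array([[1, 0.1], [0, 1]], dtype=complex)            # near the identity
g2 = np.array([[1, 0], [0.1 + 0.05j, 1]], dtype=complex)    # near the identity

y = comm(g1, g2)                      # y_2 = [g1, g2]
for m in range(2, 8):
    print(m, np.linalg.norm(y - I2))  # distance of y_m from the identity
    y = comm(g1, y)                   # y_{m+1} = [g1, y_m]
\end{verbatim}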
This in turn implies a more general result on Lie groups, proved in \cite{KazhdanMargulis}.
Before we state the theorem, we say a few words about the general setting in which the theorem applies.
Let $G$ be a Lie group, and $K$ a maximal compact subgroup of $G$. We may give $G$ a left-invariant Riemannian metric that is also right-invariant under $K$. Then the space $X = G/K$ becomes a Riemannian manifold, with $G$ acting on $X$ on the left by isometries. We say $X = G/K$ is the \emph{homogeneous space associated with $G$}.\index{homogeneous space}
For example, in the setting of ${\mathbb{H}}^3$, we may take $G$ to be the group of isometries of ${\mathbb{H}}^3$, and $K$ the subgroup fixing a point $x\in{\mathbb{H}}^3$. This is isomorphic to the compact Lie group $O(3)$. Then the quotient $G/K$ is ${\mathbb{H}}^3$, with its usual metric and action of $G$ by isometries. We will apply the theorem in this setting.
\begin{theorem}[Kazhdan--Margulis Theorem]\label{Thm:MargulisLemmaGeneral}\index{Ka{\v{z}}dan--Margulis Theorem}
Let $X$ be the homogeneous space associated with a Lie group $G$. There exists a constant $\eta = \eta(X)$ satisfying the following.
Let $x\in X$, and let $\Gamma$ be any discrete group generated by elements $\{ g_1, \dots, g_\ell\}\subset G$ such that $d(x,g_j(x))\leq \eta$ for all $j$. Then there exists a subgroup $\Gamma'$ of $\Gamma$ of finite index such that $\Gamma'$ is nilpotent.
\end{theorem}
A proof of this version of the Margulis Lemma can be found in~\cite{kapovich}. See also~\cite{BallmannGromovSchroeder} for a version that applies to Riemannian manifolds with negative sectional curvature, or~\cite{benedetti-petronio} for another proof when $X={\mathbb{H}}^n$.
\begin{proof}[Proof sketch]
The proof begins by taking a Zassenhaus neighborhood $U_Z$ of $1\in G$ from \refthm{Zassenhaus}.
There exists $\epsilon>0$ depending only on $X$ such that the ball of radius $\epsilon$ around $1\in G$ is contained in $U_Z$: $B_\epsilon(1)\subset U_Z$.
Next, because $X$ is homogeneous, we may assume $x$ is the projection of $1\in G$ to $X=G/K$, removing the dependence of the argument upon $x$.
The value of $\eta$ is determined from $\epsilon$ as follows. Because $K$ is compact, there is an $\epsilon/10$-dense subset of $K$ consisting of a finite number of elements; say $N$ elements. Choose $\eta$ such that whenever $\{g_1, \dots, g_\ell\}$ satisfy $d(x,g_j(x))\leq \eta$, any word $w=w(g_1, \dots, g_\ell)$ in the $g_j$ of length at most $N$ satisfies $d(x,wx)\leq \epsilon/5$.
For this value of $\eta$, whenever such $\{g_1, \dots, g_\ell\}$ generate a discrete group $\Gamma$, the group $\Gamma\cap B_\epsilon(1)=\Gamma'$ is nilpotent, by \refthm{Zassenhaus}. The choice of $\eta$ allows one to show that $\Gamma'$ also has finite index in $\Gamma$.
\end{proof}
Assuming the Ka{\v{z}}dan--Margulis theorem, \refthm{MargulisLemmaGeneral}, we obtain a quick proof of \refthm{Margulis}, the Universal Elementary Neighborhoods theorem, which we now explain.
Recall that the \emph{center}\index{center} of a group is the subgroup of all elements that commute with every other element.
\begin{lemma}\label{Lem:NilpotentCenter}
A non-trivial nilpotent group has non-trivial center.
\end{lemma}
\begin{proof}
Suppose $G$ is nilpotent, with $G^n=[G,G^{n-1}] =1$ but $G^{n-1}\neq 1$. Then
$[G,G^{n-1}]=1$ if and only if for every $x\in G^{n-1}$ and every $g\in G$, the product $x^{-1}g^{-1}xg=1$, which holds if and only if $xg=gx$. Thus $G^{n-1}$ lies in the center of $G$, and is nontrivial.
\end{proof}
\begin{corollary}\label{Cor:Nilpotent}
A nilpotent subgroup $G$ of $\operatorname{PSL}(2,{\mathbb{C}})$ without elliptics must satisfy one of the following:
\begin{itemize}
\item $G = \{1\}$
\item $G-\{1\}$ consists of loxodromic\index{loxodromic} elements with the same fixed points at infinity.
\item $G-\{1\}$ consists of parabolic\index{parabolic} elements with the same fixed point at infinity.
\end{itemize}
\end{corollary}
\begin{proof}
By \reflem{NilpotentCenter}, if $G$ is nontrivial then there is a nontrivial element $g\in G$ that commutes with every other element of $G$. Since $G$ contains no elliptics, it has no elements of order $2$, so by \refex{PSL(2,C)Commute}, every element in $G$ must have the same fixed points as $g$. The cases follow depending on whether $g$ is loxodromic\index{loxodromic} or parabolic.\index{parabolic}
\end{proof}
\begin{proof}[Proof of \refthm{Margulis} assuming \refthm{MargulisLemmaGeneral}]
Let $G$ be a discrete subgroup of $\operatorname{PSL}(2,{\mathbb{C}})$ without elliptics, let $x\in{\mathbb{H}}^3$, and let $\eta$ be the constant from the Ka{\v{z}}dan--Margulis Theorem, \refthm{MargulisLemmaGeneral}. If $H$ denotes the subgroup of $G$ generated by elements of $G$ that translate $x$ distance less than $\eta$, then there exists a nilpotent subgroup $H'$ of $H$ such that $H/H'$ is finite. By \refcor{Nilpotent}, $H'$ has one of three forms.
If $H'$ is trivial, then $H$ is a finite group. Since there are no elliptics in $G$, $H$ must also be trivial, and so $H$ is elementary.\index{elementary group}
Suppose now that $H'$ is nontrivial. Since $H'$ is a finite index subgroup of $H$, for any $h\in H$ there exists a positive integer $m$ such that $h^m\in H'$; note that $h^m\neq 1$ when $h\neq 1$, since $H$ contains no elliptics and hence no nontrivial elements of finite order. Then $h^m$ has the same fixed points as the nontrivial elements of $H'$, and hence so does $h$. Thus either $H'-\{1\}$ consists of loxodromic\index{loxodromic} elements with two common fixed points at infinity, and all elements of $H$ have those same fixed points at infinity, or $H'-\{1\}$ consists of parabolics\index{parabolic} with one common fixed point at infinity, and all elements of $H$ have that same fixed point at infinity. In either case, $H$ is elementary.\index{elementary group}
\end{proof}
\section{Exercises}
\begin{exercise} Is a subgroup of $\operatorname{PSL}(2,{\mathbb{C}})$ generated by a single elliptic\index{elliptic} element always discrete? Prove it is discrete, or give a counterexample.
\end{exercise}
\begin{exercise}\label{Ex:MoreClassifyPSL}
Prove \reflem{MoreClassifyPSL}, giving more properties of parabolic,\index{parabolic} elliptic,\index{elliptic} and loxodromics\index{loxodromic} in $\operatorname{PSL}(2,{\mathbb{C}})$.
\end{exercise}
\begin{exercise}\label{Ex:Fig8Gluings}
Prove that the gluing isometries for the figure-8 knot complement are the elements of $\operatorname{PSL}(2,{\mathbb{C}})$ given in \refeqn{Fig8Gluings}.
\end{exercise}
\begin{exercise}\label{Ex:Riley}
R.~Riley gave a presentation of the fundamental group of the figure-8 knot complement in \cite{Riley:Fig8}:
\[ \pi_1(S^3-K) = \langle a,b \mid yay^{-1}=b\rangle, \]
where $y=a^{-1}bab^{-1}$. He let
\[ A=\mat{1&1\\0&1}, \quad B=\mat{1&0\\-\sigma&1},\]
where $\sigma$ is a primitive cube root of unity,
and let
\[\rho\from \pi_1(S^3-K)\to \langle A,B\rangle \leq \operatorname{PSL}(2,{\mathbb{C}})\]
be the representation $\rho(a)=A$, $\rho(b)=B$.
Prove the representation $\rho$ gives an isomorphism of groups.
\end{exercise}
\begin{exercise}\label{Ex:Riley2}
Let $A$ and $B$ in $\operatorname{PSL}(2,{\mathbb{C}})$ be as in \refex{Riley}.
Find an explicit element $U=\mat{a&b\\c&d}$ of $\operatorname{PSL}(2,{\mathbb{C}})$ such that Riley's $A$ and $B$ are conjugate via $U$ to our isometries $T_C$ and $T_D^{-1}$, respectively. That is, find $U$ such that
\[ A = UT_CU^{-1}, \quad B=UT_D^{-1}U^{-1}. \]
Even better: $U$ can be written as a composition of a parabolic\index{parabolic} fixing infinity $T$, followed by a rotation $R$: $U=RT$. Find $T$ and $R$.
\end{exercise}
\begin{exercise}\label{Ex:Riley3}
Note that Riley's isometries $A$ and $B$ of \refex{Riley} do not give face-pairings\index{face-pairing isometry} of the fundamental domain in \reffig{Fig8FundRegion}. Find a fundamental domain for the figure-8 knot such that $A$ and $B$ are face-pairing isometries.
Hint: \refex{Riley2} might be helpful.
\end{exercise}
\begin{exercise} If a group $G$ acts on Euclidean space ${\mathbb{R}}^n$ or hyperbolic space ${\mathbb{H}}^n$, extend the definitions of properly discontinuous\index{properly discontinuous}\index{group action!properly discontinuous} and free actions\index{free group action}\index{group action!free} in the obvious way.
Show directly by definitions that each of the following groups $G$ acts freely\index{free group action}\index{group action!free} and properly discontinuously\index{properly discontinuous}\index{group action!properly discontinuous} on the given space $X$.
\begin{enumerate}
\item $X={\mathbb{R}}^2$, $G$ is generated by two translations $\phi\from {\mathbb{R}}^2\to{\mathbb{R}}^2$ and $\psi\from {\mathbb{R}}^2\to{\mathbb{R}}^2$ given by $\phi(x,y)=(x+t,y)$ and $\psi(x,y)=(x,y+s)$ for $s,t\in{\mathbb{R}}$.
\item $X={\mathbb{H}}^2$, $G$ is the holonomy group\index{holonomy group} of the (complete) 3-punctured sphere.
\item $X={\mathbb{H}}^3$, $G$ is generated by face-pairing isometries\index{face-pairing isometry} of an ideal polyhedron such that the face identifications give a complete hyperbolic 3-manifold.
\end{enumerate}
\end{exercise}
\begin{exercise}\label{Ex:FiniteElementary}
Show that the following give finite elementary groups.\index{elementary group}
\begin{enumerate}
\item Cyclic groups fixing an axis in ${\mathbb{H}}^3$.
\item Orientation preserving symmetries of an ideal Platonic solid (tetrahedron, octahedron/cube, icosahedron/dodecahedron).
\item Dihedral groups preserving an ideal polygon with $n$ sides inscribed in a plane in ${\mathbb{H}}^3$.
\end{enumerate}
\end{exercise}
\begin{exercise}
Show that the finite groups in \refex{FiniteElementary} are the only finite elementary groups.\index{elementary group}
\end{exercise}
\begin{exercise}
Let $G$ be a subgroup of $\operatorname{PSL}(2,{\mathbb{C}})$. Show that the following are equivalent.
\begin{enumerate}
\item $G$ is discrete.
\item $G$ has no limit points in the interior of ${\mathbb{H}}^3$. That is, for any $x\in{\mathbb{H}}^3$, there is no $y\in{\mathbb{H}}^3$ and no sequence of distinct elements $\{A_n\}$ in $G$ such that $A_n(y)\to x$.
\end{enumerate}
\end{exercise}
\begin{exercise}\label{Ex:PSL(2,C)Commute}
Let $A$ and $B$ in $\operatorname{PSL}(2,{\mathbb{C}})$ be distinct from the identity. Prove that the following are equivalent.
\begin{enumerate}
\item[(a)] $A$ and $B$ commute.
\item[(b)] Either $A$ and $B$ have the same fixed points, or $A$ and $B$ have order $2$ and each interchanges the fixed points of the other.
\item[(c)] Either $A$ and $B$ are parabolic\index{parabolic} with the same fixed point at infinity, or the axes of $A$ and $B$ coincide, or $A$ and $B$ have order $2$ and their axes intersect orthogonally in ${\mathbb{H}}^3$.
\end{enumerate}
\end{exercise}
\begin{exercise}
Suppose that $A$ and $B$ in $\operatorname{PSL}(2,{\mathbb{C}})$ are loxodromics\index{loxodromic} with exactly one fixed point in common. Show that $\langle A, B\rangle$ is not discrete.
\end{exercise}
\begin{exercise}
State and prove a version of \refthm{ThinPart}, the structure of the thin part, for hyperbolic 2-manifolds.
\end{exercise}
\begin{exercise}\label{Ex:CalculateHypDist}
Suppose $A$ is a parabolic\index{parabolic} fixing the point $\zeta$ and $p$ is a point in ${\mathbb{H}}^3$ such that $d(p, A(p)) < \epsilon$. After applying an isometry, we may assume that $\zeta=\infty$, that $A=\mat{1&\alpha\\0&1}$ for some $\alpha\in{\mathbb{C}}$, and $p$ lies on a horosphere $H_C$ that is a Euclidean plane of constant height $t=C$ for some $C>0$:
\[ H_C = \{(x+y\,i,\,C) \mid x, y \in {\mathbb{R}}\}. \]
\begin{enumerate}
\item[(a)] Prove that if a point $q$ lies inside the horoball bounded by $H_C$ on a horosphere $H_t$ of height $t\geq C$, then the Euclidean distance from $q$ to $A(q)$ measured along $H_t$ is at most the Euclidean distance from $p$ to $A(p)$ measured along $H_C$.
\item[(b)] Prove that the hyperbolic distances, measured in ${\mathbb{H}}^3$, satisfy
\[ \epsilon > d(p, A(p)) \geq d(q, A(q)). \]
\end{enumerate}
\end{exercise}
\begin{exercise}\label{Ex:CalculateDistTube}
Suppose $A$ is a loxodromic\index{loxodromic} fixing an axis $\ell$, and $p$ is a point in ${\mathbb{H}}^3$ such that $d(p,A(p))<\epsilon$.
\begin{enumerate}
\item Prove that the distance from any $q\in{\mathbb{H}}^3$ to $\ell$ is the same as the distance from $A(q)$ to $\ell$.
\item We can use cylindrical coordinates in ${\mathbb{H}}^3$ about the geodesic $\ell$. Let $r$ denote the distance from $\ell$, $\theta$ the rotation about $\ell$ (measured modulo $2\pi$), and $\zeta$ the translation distance along $\ell$. Finally, let $\widehat{{\mathbb{H}}}^3$ denote the cover of ${\mathbb{H}}^3$ in which $\theta$ is no longer measured modulo $2\pi$, but is a real number.
Using these coordinates, it can be shown that the distance $d$ between points $p_1$ and $p_2$ in $\widehat{{\mathbb{H}}}^3$ with cylindrical coordinates $(r_1,\theta_1, \zeta_1)$ and $(r_2,\theta_2,\zeta_2)$ with $|\theta_1-\theta_2|<\pi$ is given by
\[\cosh d = \cosh(\zeta_1-\zeta_2)\cosh r_1 \cosh r_2 - \cos(\theta_1-\theta_2)\sinh r_1 \sinh r_2. \]
(See \cite[Lemma~2.1]{GabaiMeyerMilley:Tubes}. A quick sanity check of this formula appears in the remark following this exercise.)
Using this formula, prove that if $x,y\in{\mathbb{H}}^3$ are points such that $d(y,\ell)\leq d(x,\ell)$, then
\[ d(y,A(y)) \leq d(x, A(x)). \]
\end{enumerate}
\end{exercise}
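As a quick sanity check of the distance formula displayed in \refex{CalculateDistTube}: if both points lie on the axis, so $r_1=r_2=0$, the formula reduces to $\cosh d = \cosh(\zeta_1-\zeta_2)$, giving $d = |\zeta_1-\zeta_2|$, the distance along $\ell$. If instead $\zeta_1=\zeta_2$ and $\theta_1=\theta_2$, it reduces to $\cosh d = \cosh r_1\cosh r_2 - \sinh r_1 \sinh r_2 = \cosh(r_1-r_2)$, so $d=|r_1-r_2|$, as expected for two points lying on a common geodesic ray perpendicular to $\ell$.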
\chapter{Completion and Dehn Filling}\label{Chap:CompletionDehnFilling}
\blfootnote{Jessica S. Purcell, Hyperbolic Knot Theory}
In \refchap{Geometric} we considered some incomplete structures on hyperbolic 2-manifolds, particularly the 3-punctured sphere, \refexamp{Incomplete3PunctSphere}.
In this chapter, we examine incomplete hyperbolic structures on 3-manifolds with torus boundary, and their completions.
\section{Mostow--Prasad rigidity}
We begin by stating a few important results on complete hyperbolic structures on manifolds to set up some context for the rest of the chapter.
Many surfaces admit infinitely many complete hyperbolic structures. For example, in exercises~\ref{Ex:1punctTorus} and~\ref{Ex:4PunctSphere} you found 2-parameter families of complete hyperbolic structures on the 1-punctured torus and 4-punctured sphere. This flexibility is only possible in two dimensions. In higher dimensions, there is only \emph{one} complete structure\index{complete metric space} on a finite volume hyperbolic manifold, up to isometry. This result was proved in the case $M$ is a closed manifold by Mostow \cite{mostow}, and extended to the case of open manifolds with finite volume by Prasad \cite{prasad}. Recall that by \refthm{FteVolIffTorusBdy}, an open hyperbolic 3-manifold has finite volume if and only if it is the interior of a manifold with torus boundary components.
\begin{theorem}[Mostow--Prasad rigidity]
\label{Thm:MostowGeom}\index{Mostow--Prasad rigidity}
If $M_1^n$ and $M_2^n$ are complete\index{complete metric space} hyperbolic $n$-manifolds with finite volume and $n\geq 3$, then any isomorphism of fundamental groups $\phi\from \pi_1(M_1) \to \pi_1(M_2)$ is realized by a unique isometry.
\end{theorem}
We will not include the proof in this book, as it leads us a little further away from knots and links than we wish to stray. However, the proof of the theorem can be found in the original papers, or in books on hyperbolic geometry including \cite{benedetti-petronio} and \cite{ratcliffe}.
Recall also Gordon and Luecke's knot complement theorem, \refthm{GordonLuecke} from \refchap{KnotIntro}, which states that knots with homeomorphic complement are equivalent.\index{Gordon--Luecke knot complement theorem}
Knots with homeomorphic complements have isomorphic fundamental group. By \refthm{MostowGeom}, Mostow--Prasad rigidity,\index{Mostow--Prasad rigidity} any complete hyperbolic structure on the knot complement is the \emph{only} complete hyperbolic structure.\index{complete metric space} So the complete hyperbolic structure on a knot complement distinguishes any two knots. This is one reason hyperbolic geometry gives many very nice knot invariants!
\section{Completion of incomplete structures}
What about incomplete structures on a manifold $M$ with torus boundary? There are many of these. For the figure-8 knot complement, for example, we found a 1-complex parameter family of incomplete structures, parameterized by $w \in {\mathbb{C}}$ as in \reffig{ThurstonRegion}. If we take the \emph{completion} of a hyperbolic structure on a 3-manifold, we obtain surprising topological results.
As a warm-up, recall completions of incomplete structures on 2-manifolds. In \refchap{Geometric}, we saw an example of an incomplete structure on a hyperbolic 3-punctured sphere. Recall that in the developing map for an incomplete structure, ideal polygons approached a limiting line. By selecting a point on a horocycle about infinity, approaching this line, we obtained a Cauchy sequence that did not converge. See \reffig{3punct-incomplete}. Adjoining a point where each horocycle met the limiting line, we obtained the completion. The completion was given by attaching a geodesic of length $d(v)$,\index{$d(v)$} as in \reffig{3punctCompletion}.
Now consider an incomplete structure on a 3-manifold $M$ such that $M$ is the interior of a compact manifold with torus boundary. Let $C$ be a cusp torus of $M$. Then the torus $C$ inherits an affine structure from the hyperbolic structure on $M$, and because the structure on $M$ is not complete, the affine structure is not Euclidean (\refthm{EuclidCusp}).\index{Euclidean structure}
Let $\alpha$ and $\beta$ generate $\pi_1(C) \cong {\mathbb{Z}} \times{\mathbb{Z}}$. Corresponding to $\alpha$ and $\beta$ are two holonomy\index{holonomy} isometries $\rho(\alpha)$ and $\rho(\beta)$. To simplify notation, we will drop the $\rho$, abusing notation slightly, and simply refer to these isometries as $\alpha$ and $\beta$. Assume the action of $\alpha$ and $\beta$ does not induce a Euclidean structure\index{Euclidean structure} on $C$, so the hyperbolic structure on $M$ is not complete. To form its completion, we remove a small neighborhood $N(C)$ of $C$, take the completion of $N(C)$, and then reattach this neighborhood to $M$. Thus to analyze the completion of $M$, we analyze the completion of neighborhoods of cusp tori.
\begin{proposition}\label{Prop:CompletionAttachGeodesic}
The completion of $N(C)$ is obtained by adjoining some portion of a geodesic to $N(C)$.
\end{proposition}
\begin{proof}
Consider the developing map for the affine torus $C$.\index{affine torus} The image will miss a single point (\refex{DevelopingImageMissesPoint}), for example as in \reffig{AffineTorus}. This image is obtained by considering the action of $\alpha$ and $\beta$ restricted to a horosphere. More precisely, if $C$ has a fundamental domain that is a quadrilateral, then we build its developing image by starting with a copy of that quadrilateral on ${\mathbb{C}}$, which we identify with a horosphere about infinity, and attaching copies of the quadrilateral according to instructions given by the holonomy\index{holonomy} isometries corresponding to $\alpha$ and $\beta$, acting on the fixed horosphere.
If we shift the original choice of horosphere up, we will see the same image of the developing map. In particular, the developing map will still miss a single point, with the same complex value for each choice of horosphere. These missed points form a vertical geodesic in ${\mathbb{H}}^3$. We may apply an isometry so that this vertical geodesic runs from $0$ to $\infty$ in ${\mathbb{H}}^3$. Notice that the developing image of the neighborhood $N(C)$ is obtained by taking developing images of $C$ on all horospheres about $\infty$ above some fixed initial height. Thus the developing image of $N(C)$ misses the single geodesic from $0$ to $\infty$ in ${\mathbb{H}}^3$. Hence the completion of $N(C)$ is obtained by adjoining some portion of this geodesic to $N(C)$.
\end{proof}
As in the case of incomplete 2-manifolds, the length of the portion of adjoined geodesic of \refprop{CompletionAttachGeodesic} will be determined by considering the action of the holonomy.\index{holonomy} Considering this action leads to the following result on the topology of the completion.
\begin{proposition}\label{Prop:CompletionTop}
Let $N(C)$ be the neighborhood of a cusp torus $C$ of an incomplete hyperbolic manifold, so $N(C)$ is homeomorphic to $C\times(0,1)$. Then the completion of $N(C)$ is either homeomorphic to the 1-point compactification of $N(C)$ obtained by crushing $C\times\{1\}$ to a point, or it is homeomorphic to the solid torus obtained by attaching a solid torus to $C\times\{1\}$.
\end{proposition}
\begin{proof}
As in the proof of \refprop{CompletionAttachGeodesic}, consider the developing image of $N(C)$ and assume it misses the geodesic from $0$ to $\infty$. Note the group $\langle\alpha,\beta\rangle$ acts on the geodesic from $0$ to $\infty$. Since points in our completion should be identified to their images under the holonomy\index{holonomy} action, we should identify each point $z$ on the geodesic from $0$ to $\infty$ with $\langle \alpha, \beta\rangle \cdot z$. There are two cases.
\smallskip
\textbf{Case 1.} The image of $z$ under the action of $\alpha$ and $\beta$ is dense in the line from $0$ to $\infty$. In this case, the completion is the 1-point compactification. It is not a manifold (\refex{1PtCmpt}).
\smallskip
\textbf{Case 2.} The image of $z$ is a discrete set of points on the line, each of some distance $d(C)$ apart. In this case the completion is obtained by adjoining a geodesic circle of length $d(C)$ to $N(C)$. Denote the completion by $\overline{N(C)}$. We wish to understand the topology of $\overline{N(C)}$.
We may obtain a manifold homeomorphic to $N(C)$ by removing a small, closed tubular neighborhood\index{tubular neighborhood} of the geodesic circle adjoined to form $\overline{N(C)}$. Notice that a tubular neighborhood of a circle is a solid torus, with the geodesic at its core. Thus we obtain a manifold homeomorphic to $\overline{N(C)}$ by attaching a solid torus to the torus $C\times\{1\}$ of $N(C)$.
\end{proof}
\begin{definition}\label{Def:DehnFilling}
Let $M$ be a manifold with torus boundary component $T$. Let $s$ be an isotopy class of simple closed curves on $T$; $s$ is called a \emph{slope}.\index{slope} The manifold obtained from $M$ by attaching a solid torus to $T$ so that $s$ bounds a disk in the resulting manifold is called the \emph{Dehn filling of $M$ along $s$}\index{Dehn filling} and is denoted $M(s)$.
\end{definition}
A cartoon describing Dehn filling is shown in \reffig{DehnFilling}.
\begin{figure}
\import{Figures/Ch06_Completion/}{F6-01-Dehnfl.eps_tex}
\caption{A cartoon describing Dehn filling.\index{Dehn filling} After filling, the curve shown on the torus boundary component of $M$ will bound a disk.}
\label{Fig:DehnFilling}
\end{figure}
By \refprop{CompletionTop}, the space obtained by taking the completion of an incomplete hyperbolic structure on $M$ either fails to be a manifold, or is homeomorphic to a Dehn filling\index{Dehn filling} of $M$.
Dehn filling\index{Dehn filling} is a very important topological procedure in 3-manifold topology, due to work of Wallace and Lickorish in the 1960s. Independently, they showed the following theorem \cite{wallace, lickorish}. A nice, highly readable proof can be found in the book \cite{rolfsen}.
\begin{theorem}[Fundamental theorem of Wallace and Lickorish]\label{Thm:WallaceLickorish}\index{Dehn filling!fundamental theorem of Wallace and Lickorish}\index{Wallace, fundamental Dehn filling theorem}\index{Lickorish, fundamental Dehn filling theorem}
Let $M$ be a closed, orientable 3-manifold. Then $M$ is obtained by Dehn filling\index{Dehn filling} the complement of a link in $S^3$. \qed
\end{theorem}
\Refthm{WallaceLickorish} gives a topological result on manifolds. By considering completions of hyperbolic 3-manifolds, we can make Dehn filling a geometric procedure.
\begin{definition}\label{Def:ConeManifold}
Consider the geodesic running from $0$ to $\infty$ in ${\mathbb{H}}^3$. We may write points in ${\mathbb{H}}^3$ in cylindrical coordinates $(r, \theta,\zeta)$ where $r$ is the distance from this geodesic, $\theta$ is a rotation angle around the geodesic, measured modulo $2\pi$, and $\zeta$, the height, is translation distance in the direction of the geodesic. In these coordinates, the metric is given by
\[ dr^2 + \sinh^2 r\, d\theta^2 + \cosh^2 r\, d\zeta^2, \]
with $\theta$ measured modulo $2\pi$.
Now fix $\alpha>0$. Adjust the metric so $\theta$ is measured modulo $\alpha$. Then a neighborhood of a point on the geodesic from $0$ to $\infty$ is called a \emph{hyperbolic cone}\index{hyperbolic cone} with \emph{cone angle}\index{cone angle} $\alpha$. Note the definition makes sense when $\alpha>2\pi$. A cross section perpendicular to the geodesic is a 2-dimensional cone with cone angle $\alpha$.
A 3-dimensional \emph{hyperbolic cone manifold}\index{hyperbolic cone manifold}\index{cone manifold} is a manifold $M$ in which each point $x$ either has a neighborhood isometric to a ball in ${\mathbb{H}}^3$, or has a neighborhood isometric to a hyperbolic cone.
In a hyperbolic cone manifold, the set of points that only have neighborhoods of the second kind form a geodesic link in $M$ called the \emph{singular locus}\index{singular locus}. The hyperbolic metric on $M$ is smooth everywhere except at points on the singular locus.
\end{definition}
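To see why $\alpha$ is called the cone angle, note that in the metric above, a circle of radius $r$ about the singular geodesic (with $r$ and $\zeta$ held constant and $\theta$ running from $0$ to $\alpha$) has length $\int_0^\alpha \sinh r\,d\theta = \alpha\sinh r$. As $r\to 0$ this is asymptotic to $\alpha r$ rather than $2\pi r$, exactly as for a circle about the cone point in a 2-dimensional cone of cone angle $\alpha$.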
\begin{proposition}\label{Prop:CompletionConeMfld}
When the completion $\overline{M}$ of $M$ is obtained topologically by attaching a solid torus, that is, by Dehn filling,\index{Dehn filling} it has the structure of a cone manifold.\index{hyperbolic cone manifold}\index{cone manifold} The singular locus $\Sigma$ is the geodesic (link) attached in the completion.
\end{proposition}
\begin{proof}
As before, let $C$ be a cusp torus of $M$ with neighborhood $N(C)$, whose developing image misses the geodesic from $0$ to $\infty$ in ${\mathbb{H}}^3$. Let $\zeta \in \pi_1(C)$ generate the kernel of the action of $\pi_1(C) \cong \langle \alpha, \beta \rangle$ on the line from $0$ to $\infty$. The isometry $\zeta$ will be a rotation about this line by some angle $\theta$. Then a perpendicular cross section of the circle added to $\overline{N(C)}$ to form the completion will be a 2-dimensional hyperbolic cone, of cone angle\index{cone angle} $\theta$. Thus a neighborhood of a point on the added circle is isometric to a hyperbolic cone.
Thus when we attach $\overline{N(C)}$ to $M$, the result $\overline{M}$ is a hyperbolic cone manifold\index{hyperbolic cone manifold}\index{cone manifold} with singular locus along the attached geodesic.
\end{proof}
There is one very important case of \refprop{CompletionConeMfld}. When the cone angle\index{cone angle} at the singular locus of $\overline{M}$ is actually $2\pi$, then the hyperbolic structure on $\overline{M}$ is smooth everywhere. Thus $\overline{M}$ is a hyperbolic manifold. We conclude:
\begin{corollary}\label{Cor:HypDehnFilling}
When the holonomy\index{holonomy} $\rho(\pi_1(C))$ acts on the geodesic omitted from the developing image of $N(C)$ with discrete orbits, and when the generator $\zeta \in \pi_1(C)$ of the kernel of this action has holonomy\index{holonomy} a rotation by $2\pi$, then the completion of $M$ is a complete hyperbolic manifold, homeomorphic to the Dehn filled manifold $M(\zeta)$.\qed
\end{corollary}
\section{Hyperbolic Dehn filling space}
We re-interpret the above section in the language of complex lengths of isometries of ${\mathbb{H}}^3$.
Anytime $M$ admits a hyperbolic structure, consider a cusp torus $C$ for $M$.
The fundamental group of the torus is isomorphic to ${\mathbb{Z}}\times {\mathbb{Z}}$, generated by some $\alpha$ and $\beta$.
\begin{remark}\label{Rem:MeridLongitude}
When $M$ is a knot complement, $M\cong S^3\setminus N(K)$, we often choose $\alpha$ to be the \emph{meridian},\index{meridian} i.e.\ the curve on $\partial N(K)$ bounding a disk in $N(K)\subset S^3$, and $\beta$ to be the \emph{standard longitude},\index{longitude!standard} i.e.\ the curve on $\partial N(K)$ that is homologous to $0$ in $S^3\setminus N(K)$.
\end{remark}
Consider the holonomy\index{holonomy} elements of $\alpha$ and $\beta$. These are some isometries of ${\mathbb{H}}^3$. As above, we will continue to abuse notation and denote the holonomy isometries corresponding to $\alpha$ and $\beta$ by $\alpha$ and $\beta$.
Recall the classification of isometries of ${\mathbb{H}}^3$, from \reflem{MoreClassifyPSL}. Any isometry is one of three types: parabolic,\index{parabolic} elliptic,\index{elliptic} or loxodromic.\index{loxodromic} Since $\alpha$ and $\beta$ generate ${\mathbb{Z}}\times {\mathbb{Z}}$, they must commute. This is possible only if $\alpha$ and $\beta$ are parabolic,\index{parabolic} fixing the same point on the boundary at infinity,\index{boundary at infinity} or if $\alpha$ and $\beta$ share the same axis (\refex{PSL(2,C)Commute}).
If $\alpha$ and $\beta$ are parabolic,\index{parabolic} fixing a point at infinity, then they preserve every horosphere about that point. Conjugating to put their fixed point at $\infty$ in $\partial{\mathbb{H}}^3$, they are of the form $\alpha(z) = z+a$, $\beta(z) = z+b$. Hence they restrict to Euclidean isometries on each such horosphere, and the hyperbolic structure is complete.
Now suppose $\alpha$ and $\beta$ are not parabolic.\index{parabolic}
In this case, because $\alpha$ and $\beta$ commute but are not parabolic, they share an axis, and are both given by rotation and/or dilation along this axis. The hyperbolic structure is not complete, and the axis must be exactly the geodesic whose points are omitted from the developing image of $C$ for each horosphere.
\begin{definition}
Suppose the interior of $M$ has a hyperbolic structure, and $C$ is a cusp torus of $M$, with $N(C)$, homeomorphic to $T^2\times I$, a neighborhood of $C$. Let $\alpha, \beta \in \pi_1(C)$ be generators. Suppose the holonomy\index{holonomy} elements corresponding to $\alpha$ and $\beta$ are not parabolic, so they share an axis.
Fix a direction on the axis of $\alpha$ and $\beta$. Any element $\gamma$ of $\pi_1(C)$ translates some signed distance $d$ along the axis, and rotates by total angle $\theta \in {\mathbb{R}}$, where the sign of $\theta$ is given by the right hand rule. Let $\mathcal{L}(\gamma) = d + i\theta$. The value $\mathcal{L}(\gamma)$ is called the \emph{complex length}\index{complex length} of $\gamma$. This defines a function $\mathcal{L}$ from $\pi_1(C) = H_1(C; {\mathbb{Z}})$ to ${\mathbb{C}}$.
\label{Def:ComplexLength}
\end{definition}
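For a concrete example, orient the geodesic from $0$ to $\infty$ toward $\infty$ and consider the loxodromic
\[ A = \mat{e^{(d+i\theta)/2} & 0 \\ 0 & e^{-(d+i\theta)/2}}, \]
which acts on $\partial{\mathbb{H}}^3$ by $z\mapsto e^{d+i\theta}z$. It fixes the geodesic from $0$ to $\infty$, translates distance $d$ along it, and rotates about it by angle $\theta$, so an element of $\pi_1(C)$ with this holonomy has complex length $d+i\theta$.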
Notice that if $\gamma = p\alpha + q\beta$, then $\mathcal{L}(\gamma) = p\mathcal{L}(\alpha) + q\mathcal{L}(\beta)$, so $\mathcal{L}$ is a linear map. We may extend it canonically to a linear map $\mathcal{L}\from H_1(C; {\mathbb{R}}) \to {\mathbb{C}}$. The value $\mathcal{L}(c)$ for any $c\in H_1(C; {\mathbb{R}}) \cong {\mathbb{R}}^2$ will be called the complex length of $c$.
Suppose that the complex length of a simple closed curve $\gamma$ on $C$ equals $2\pi i$. Then in the completion of $M$, $\gamma$ will bound a smooth hyperbolic disk. This implies that the completion of $M$ is a manifold homeomorphic to the Dehn filled manifold $M(\gamma)$, and that $M(\gamma)$ admits a complete hyperbolic structure.
Suppose instead that the complex length of a closed curve $\gamma$ on $C$ equals $\theta i \neq 2\pi i$. Then in the completion of $M$, $\gamma$ will bound a hyperbolic cone, with cone angle\index{cone angle} $\theta$. The completion of $M$ is still homeomorphic to the Dehn filled manifold $M(\gamma)$. However, the metric on $M(\gamma)$ inherited from the completion of $M$ is not smooth. The core of the added solid torus is the singular locus, with cone angle $\theta$.
For an incomplete structure, there will be a unique element $c \in H_1(C;{\mathbb{R}})$ so that $\mathcal{L}(c) = 2\pi i$.
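Concretely, write $\mathcal{L}(\alpha) = a_1 + i a_2$ and $\mathcal{L}(\beta) = b_1 + i b_2$. An element $c = p\alpha + q\beta \in H_1(C;{\mathbb{R}})$ satisfies $\mathcal{L}(c) = 2\pi i$ exactly when
\[ p\,a_1 + q\,b_1 = 0 \quad\mbox{and}\quad p\,a_2 + q\,b_2 = 2\pi, \]
a real linear system in $(p,q)$. The uniqueness asserted above corresponds to this $2\times 2$ system being invertible, that is, to $\mathcal{L}(\alpha)$ and $\mathcal{L}(\beta)$ being linearly independent over ${\mathbb{R}}$.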
\begin{definition}\label{Def:DehnFillingCoeff}
We say $c \in H_1(C;{\mathbb{R}})$ such that $\mathcal{L}(c) = 2\pi i$ is the \emph{Dehn filling coefficient}\index{Dehn filling coefficient} of the boundary component $C$.
\end{definition}
When $c$ is of the form $(p,q)$, with $p$ and $q$ relatively prime integers, it corresponds to a simple closed curve and the completion is smooth.
We have been looking at a fixed incomplete hyperbolic structure on $M$, and examining possible completions for this fixed structure. Now we turn our attention to a topological manifold $X$, homeomorphic to $M$, and consider all possible hyperbolic structures on $X$.
\begin{definition}\label{Def:DehnFillingSpace}
Let $X$ be a 3-manifold with cusp torus $C$. The subset of $H_1(C;{\mathbb{R}})$ consisting of Dehn filling coefficients of hyperbolic structures on $X$ is called the \emph{hyperbolic Dehn filling space for $X$}.\index{hyperbolic Dehn filling space}\index{Dehn filling space}
If $X$ admits a complete hyperbolic structure, then we let $\infty$ correspond to the complete hyperbolic structure on $X$.
\end{definition}
\begin{theorem}[Thurston's hyperbolic Dehn filling theorem]
\label{Thm:HypDehnSurgery}\index{Thurston's hyperbolic Dehn filling theorem}
\index{hyperbolic Dehn filling theorem}
Let $X$ be a 3-manifold homeomorphic to the interior of a compact manifold with boundary a single torus $T$, such that $X$ admits a complete hyperbolic structure. Then hyperbolic Dehn filling space\index{hyperbolic Dehn filling space}\index{Dehn filling space} for $X$ always contains an open neighborhood of $\infty$ in ${\mathbb{R}}^2\cup \{\infty\} \cong H_1(T;{\mathbb{R}})\cup\{\infty\}$.
More generally, if $X$ is the interior of a compact manifold with torus boundary components $T_1, \dots, T_n$, and $X$ admits a complete hyperbolic structure, then the hyperbolic Dehn filling space for $X$ contains an open neighborhood of $\infty$ for each $T_i$.
\end{theorem}
\Refthm{HypDehnSurgery} is an important result; the theorem, its proofs, and its extensions continue to have useful consequences.
The first proof of \refthm{HypDehnSurgery} was sketched in Thurston's 1979 notes \cite{thurston}, and uses results on holonomy\index{holonomy} representations. A proof in the case that $X$ admits a geometric triangulation was given in \cite{NeumannZagier}, presented with expanded details in \cite{benedetti-petronio}. This proof was extended to the case of more general hyperbolic 3-manifolds by Petronio and Porti \cite{PetronioPorti}. Martelli puts these proofs together to give a complete exposition in his recent book~\cite{Martelli}. Additionally, precise universal bounds on the size of the open neighborhood of infinity provided by the theorem were given by Hodgson and Kerckhoff \cite{hk:univ}, about 25 years after \refthm{HypDehnSurgery} was proved.
All the proofs require work.
In chapters~\ref{Chap:Essential} and~\ref{Chap:Volume} we will give full proofs of related results that are weaker than what is claimed in \refthm{HypDehnSurgery}. Here, we provide only a short sketch of the argument that goes into the proof of \refthm{HypDehnSurgery}, and then focus on applications.
\begin{proof}[Proof sketch of \refthm{HypDehnSurgery}]
Suppose first that $X$ is homeomorphic to the interior of a compact manifold with a single torus boundary component $T$, and $X$ admits a complete hyperbolic structure.
Because $X$ is hyperbolic, there is a holonomy\index{holonomy} representation
\[ \rho\from \pi_1(X) \to \operatorname{PSL}(2,{\mathbb{C}}) \]
whose image is a discrete group. The fundamental group of the cusp torus $\pi_1(T)$ has image generated by two parabolics\index{parabolic} $\rho(\alpha)$ and $\rho(\beta)$, which we may assume fix the point at infinity in ${\mathbb{H}}^3$.
Now Thurston shows that there exists a one-complex parameter family of deformations of the holonomy\index{holonomy} representation \cite[Theorem~5.6]{thurston}.
Each small deformation of the complete hyperbolic structure taking $\rho(\alpha)$ to a loxodromic\index{loxodromic} must take $\rho(\beta)$ to a loxodromic with the same fixed points. As in the discussion above, this extends to an incomplete hyperbolic structure, with Dehn filling coefficient some complex number $d+i\theta = z$, and where $z=\infty$ corresponds to the complete hyperbolic structure on $X$.
To complete the proof, one shows that $z$ varies continuously over a neighborhood of infinity.
When there are $k>1$ cusps, the proof is similar. In this case, there is a $k$-complex parameter family of deformations, with completions giving Dehn filling coefficients $d_1+i\theta_1, \dots, d_k+i\theta_k$. Again one shows that these vary in a neighborhood of $(\infty, \dots, \infty)$.
\end{proof}
\begin{corollary}\label{Cor:FiniteFillings}
Let $X$ be a manifold with a single torus boundary component such that the interior of $X$ admits a complete hyperbolic metric. Then there are at most finitely many Dehn fillings\index{Dehn filling} of $X$ which do not admit a complete hyperbolic metric. \qed
\end{corollary}
\begin{corollary}\label{Cor:FiniteFillings2}
Let $X$ be a manifold with $n$ torus boundary components $T_1, \dots, T_n$, such that the interior of $X$ admits a complete hyperbolic metric. For each $T_i$, exclude finitely many Dehn fillings.\index{Dehn filling} The remaining Dehn fillings yield a manifold with a complete hyperbolic structure. \qed
\end{corollary}
Corollaries~\ref{Cor:FiniteFillings} and~\ref{Cor:FiniteFillings2} follow immediately from \refthm{HypDehnSurgery}.
Notice that \refcor{FiniteFillings2} does not rule out the possibility that a manifold with more than one torus boundary component may have infinitely many non-hyperbolic Dehn fillings, as in the following example.
\begin{example}\label{Example:WLInfExFillings}
The Whitehead link is the link shown in \reffig{WhiteheadDehn}.\index{Whitehead link} We will see that it admits a complete hyperbolic structure (\refprop{WhiteheadGeom}).
\begin{figure}
\includegraphics{Figures/Ch06_Completion/F6-02-Whiteh.eps}
\caption{Two diagrams of the Whitehead link.\index{Whitehead link}}
\label{Fig:WhiteheadDehn}
\end{figure}
If we erase one of the link components, that action can be seen as attaching a solid torus to the link complement in a trivial way. This is called \emph{trivial Dehn filling}.\index{trivial Dehn filling}\index{Dehn filling!trivial} For this example, perform trivial Dehn filling on the component that clasps itself in \reffig{WhiteheadDehn}, leaving a single unknotted component, a trivial knot in $S^3$. Its complement is a solid torus.
\begin{definition}\label{Def:LensSpace}
A \emph{lens space}\index{lens space} is a 3-manifold obtained by gluing together two solid tori along their boundary tori.
\end{definition}
Thus any Dehn filling\index{Dehn filling} of a trivial knot in $S^3$ is a lens space.\index{lens space}
\begin{theorem}\label{Thm:LensSpace}
A lens space\index{lens space} cannot admit a hyperbolic structure.
\end{theorem}
\begin{proof}
\Refex{LensSpaceNotHyp}.
\end{proof}
There are infinitely many Dehn fillings\index{Dehn filling} on the trivial knot in $S^3$ that produce lens spaces.\index{lens space} Thus there are infinitely many non-hyperbolic Dehn fillings of the Whitehead link complement.\index{Whitehead link}
\end{example}
The fundamental theorem of Wallace and Lickorish, \refthm{WallaceLickorish}, implies that any closed orientable 3-manifold is obtained by Dehn filling a link complement in $S^3$. In fact we may take that link complement to be hyperbolic, due to work of Myers \cite{Myers}. Thus the hyperbolic Dehn filling theorem implies that in some sense, ``almost all'' 3-manifolds are hyperbolic.
There are still many unanswered questions about hyperbolic Dehn filling space.\index{hyperbolic Dehn filling space}\index{Dehn filling space} As of the writing of this book, the following questions are all open.
\begin{question}\label{Question:HypDehnSurgery}
What is the topology of hyperbolic Dehn filling space?\index{hyperbolic Dehn filling space}\index{Dehn filling space} For example, is it connected? Is it path connected? That is, if a finite volume manifold $M(s)$ admits a complete hyperbolic structure, and if $M$ also admits a complete hyperbolic structure, is there necessarily a deformation of the hyperbolic structure running from the complete structure on $M$ to the complete structure on $M(s)$?
Stronger: If $M(s)$ admits a complete hyperbolic structure, and $M$ admits a complete hyperbolic structure, can we deform the hyperbolic structure on $M$ through cone manifolds\index{hyperbolic cone manifold}\index{cone manifold} with cone angles\index{cone angle} increasing monotonically from $0$ (at the complete structure on $M$) to $2\pi$ (at the complete structure on $M(s)$)?
\end{question}
As of the writing of this book, we do not even know if hyperbolic Dehn filling space is connected for the simplest of examples --- the figure-8 knot complement. The following example is discussed in \cite{Cooper-HK:orbifolds}.
\begin{example}[Dehn filling space for the figure-8 knot]\label{Example:HypDehnFillingFig8}
Thurston identified part of the boundary of the neighborhood about infinity separating hyperbolic Dehn fillings from non-hyperbolic ones. This is done on pages 58 through 61 of his notes \cite{thurston}. To determine these boundaries, he considers what is happening to the two hyperbolic structures on the tetrahedra as the values of their edge invariants approach the boundaries given by the gluing equations (the boundaries of the region in \reffig{ThurstonRegion}). When both tetrahedra degenerate, the hyperbolic structure collapses and the limiting manifold is not hyperbolic.
However, when only one tetrahedron degenerates, we still have a hyperbolic structure for a little while. In this case, we will be gluing a positively oriented tetrahedron\index{positively oriented tetrahedron}\index{tetrahedron!positively oriented} to a negatively oriented one.\index{negatively oriented tetrahedron}\index{tetrahedron!negatively oriented} We can make sense of this by cutting the negatively oriented tetrahedron into pieces and subtracting them from the positively oriented one, leaving a polyhedron $P$. Faces of $P$ may then be identified to give a hyperbolic structure. No one knows exactly where
this stops working, although Hodgson's 1986 PhD thesis \cite{hodgson:thesis} gives evidence that the boundary should be as shown in \reffig{Fig8Boundary}.
\begin{figure}
\begin{center}
\import{Figures/Ch06_Completion/}{F6-03-Fig8DF.eps_tex}
\end{center}
\caption{Hyperbolic Dehn filling space\index{hyperbolic Dehn filling space}\index{Dehn filling space} for the figure-8 knot complement is known to include the unshaded region exterior to the dark curve shown, is conjectured to contain the two shaded regions, and is conjectured to contain no other points. Figure modified from \cite{Cooper-HK:orbifolds}.}
\label{Fig:Fig8Boundary}
\end{figure}
In \refex{fig8}, you are asked to study how tetrahedra
degenerate in the figure-8 knot complement.
\end{example}
\begin{question}
What is the hyperbolic Dehn filling space\index{hyperbolic Dehn filling space}\index{Dehn filling space} for the figure-8 knot complement?
\end{question}
\begin{definition}\label{Def:Exceptional}
Dehn fillings that do not yield a hyperbolic manifold are called \emph{exceptional}.\index{exceptional Dehn filling}\index{Dehn filling!exceptional}
\end{definition}
There are many interesting problems on exceptional Dehn fillings. We include one example that, as of the writing of this book, remains open.
It is known (and you can prove as an exercise) that no hyperbolic manifold can contain an embedded 2-sphere that does not bound a 3-ball. A manifold that contains such a 2-sphere is called \emph{reducible}\index{reducible 3-manifold}. If you start with a hyperbolic 3-manifold, perform Dehn filling,\index{Dehn filling} and obtain a reducible manifold, the Dehn filling is called \emph{reducible}.\index{reducible Dehn filling}
\begin{conjecture}[The cabling conjecture]\label{Conj:Cabling}
No hyperbolic knot complement admits a reducible\index{reducible 3-manifold} Dehn filling.\index{Dehn filling}\index{Cabling conjecture}
\end{conjecture}
The original wording of the cabling conjecture is that only cables of knots admit reducible\index{reducible 3-manifold} Dehn fillings.\index{Dehn filling} The conjecture listed as \refconj{Cabling} is the remaining case to prove.
\subsection{Triangulations and Dehn filling}
When $X$ is a hyperbolic 3-manifold that admits an ideal triangulation, then Dehn filling\index{Dehn filling} of $X$ can frequently be performed by adjusting the edge invariants, as in \refdef{EdgeInvariant}, of the ideal tetrahedra making up $X$. That is, given a triangulation of a 3-manifold $X$ with torus boundary, we may solve a non-linear system of equations in the tetrahedra's edge parameters to find a hyperbolic structure on a Dehn filling of $X$. To do so, use the edge gluing equations of \refchap{GluingCompleteness}, but not the completeness equations.
More precisely, let $\mu$ and $\lambda$ be generators of $H_1(\partial X)$, with associated completeness equations $H([\mu]) = H([\lambda])=1$, with $H([\mu]) = \prod_j z_{i_j}$ as in \refdef{CompletenessEquations}, and similarly for $H([\lambda])$.
Let $s=p\mu + q\lambda \in H_1(\partial X)$ be the slope of the Dehn filling.\index{Dehn filling} To find a complete hyperbolic structure on $X(s)$, we solve the system of equations consisting of edge gluing equations and the \emph{Dehn filling equation}\index{Dehn filling equation}
\begin{equation}\label{Eqn:DehnFillingEquation}
p \log H([\mu]) + q \log H([\lambda]) = 2\pi i.
\end{equation}
Note a solution to these equations will produce an incomplete hyperbolic structure on $X$, with Dehn filling coefficient $(p,q) \in H_1(\partial X;{\mathbb{R}})$. In fact, this process is valid for any $(p,q)\in {\mathbb{R}}\oplus{\mathbb{R}} \cong H_1(\partial X;{\mathbb{R}})$, not just relatively prime integers.
The system of edge gluing equations along with \refeqn{DehnFillingEquation} may not have a solution. If it does have a solution, it may not be the case that all tetrahedron parameters have positive imaginary part. For such a solution, the corresponding tetrahedra are not all positively oriented; some are negatively oriented as well.\index{negatively oriented tetrahedron}\index{tetrahedron!negatively oriented}
In practice, it is possible to implement this process by computer, to find solutions to gluing and Dehn filling equations numerically, and this has been implemented in SnapPy \cite{SnapPy}. Indeed, in 2009, Schleimer and Segerman investigated hyperbolic Dehn filling space\index{hyperbolic Dehn filling space}\index{Dehn filling space} for thousands of manifolds, using SnapPy \cite{SchleimerSegerman}. Graphically, they identified regions of hyperbolic Dehn filling space for which SnapPy computed positively oriented tetrahedra,\index{positively oriented tetrahedron}\index{tetrahedron!positively oriented} negatively oriented tetrahedra,\index{negatively oriented tetrahedron}\index{tetrahedron!negatively oriented} and degenerate tetrahedra, as well as regions for which no solution was found.
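As a small illustration of how this is used in practice, the following sketch uses SnapPy's Python interface (assuming SnapPy is installed; the manifold name and the slopes below are chosen only as examples). Loading a census manifold solves the gluing and completeness equations for the complete structure; setting Dehn filling coefficients in effect replaces the completeness equations by the Dehn filling equation \refeqn{DehnFillingEquation} for the chosen slope.
\begin{verbatim}
# Sketch using SnapPy's Python interface (https://snappy.computop.org).
# The manifold name and the filling slopes are illustrative choices.
import snappy

M = snappy.Manifold('4_1')    # figure-8 knot complement
print(M.solution_type())      # solution type for the complete structure
print(M.volume())             # hyperbolic volume of the complete structure

M.dehn_fill((1, 2))           # impose the Dehn filling equation for slope mu + 2 lambda
print(M.solution_type())
print(M.volume())             # volume of the Dehn filled manifold

M.dehn_fill((1, 0))           # the meridian slope; this filling gives S^3, so no
print(M.solution_type())      # hyperbolic structure should be found
\end{verbatim}
Here the pair passed to \texttt{dehn\_fill} plays the role of $(p,q)$ in \refeqn{DehnFillingEquation}, with respect to SnapPy's chosen basis of $H_1(\partial X)$.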
Schleimer and Segerman's computed space for the figure-8 knot is shown in \reffig{SchleimerSegerman}.
\begin{figure}
\includegraphics{Figures/Ch06_Completion/F6-04-m004DF.eps}
\caption{Computer generated picture of hyperbolic Dehn filling space\index{hyperbolic Dehn filling space}\index{Dehn filling space} for the figure-8 knot complement, generated by Schleimer and Segerman. Compare with the conjectural picture of the space, \reffig{Fig8Boundary}.}
\label{Fig:SchleimerSegerman}
\end{figure}
Green regions are those for which the computer found a solution with all positively oriented tetrahedra.\index{positively oriented tetrahedron}\index{tetrahedron!positively oriented} Blue regions are those for which the computer only found solutions with some negatively oriented tetrahedra.\index{negatively oriented tetrahedron}\index{tetrahedron!negatively oriented} In the white regions, the computer failed to recognize a solution. The gray region around the origin is where no solution was found. The shading in green and blue regions corresponds to volume; lines in those regions are level sets of volume. It is difficult to see in the printed version, but there is also a thin red line between green and white regions. Red indicates that all tetrahedra are flat\index{flat tetrahedron}\index{tetrahedron!flat} and non-degenerate,\index{degenerate tetrahedron}\index{tetrahedron!degenerate} i.e.\ the cross-ratio of the four ideal points for each tetrahedron is real, but bounded away from $0$, $1$, and $\infty$. Any point where at least one tetrahedron is degenerate, i.e.\ its cross-ratio is near $0$, $1$, or $\infty$, would be shaded purple.
Note that the green and blue regions away from the origin in \reffig{SchleimerSegerman} match the conjectured picture for Dehn filling space in \reffig{Fig8Boundary}. The blue and green regions in the interior are conjectured to be noise, and not to correspond to actual hyperbolic structures.
In addition, we include Schleimer and Segerman's images of hyperbolic Dehn filling space\index{hyperbolic Dehn filling space}\index{Dehn filling space} for the $5_2$ knot, and for the $6_3$ knot, in \reffig{52SchleimerSegerman} and \reffig{63SchleimerSegerman}, respectively.
\begin{figure}
\includegraphics{Figures/Ch06_Completion/F6-05-m015DF.eps}
\caption{Computer generated picture of hyperbolic Dehn filling space\index{hyperbolic Dehn filling space}\index{Dehn filling space} for the $5_2$ knot complement, generated by Schleimer and Segerman.}
\label{Fig:52SchleimerSegerman}
\end{figure}
The color scheme is the same as for the figure-8 knot, above. The triangulation used in these cases is known as the \emph{canonical triangulation}\index{canonical triangulation}, which will be defined in \refchap{TwoBridge}. Using different triangulations can lead to different regions of negatively oriented tetrahedra. However, the boundary between hyperbolic and non-hyperbolic structures (green or blue versus white regions) seems to be independent of the choice of triangulation.
These figures and many more can be found on Segerman's website \cite{SchleimerSegerman}.
\begin{figure}
\includegraphics{Figures/Ch06_Completion/F6-06-s912DF.eps}
\caption{Computer generated picture of hyperbolic Dehn filling space\index{hyperbolic Dehn filling space}\index{Dehn filling space} for the $6_3$ knot complement, generated by Schleimer and Segerman.}
\label{Fig:63SchleimerSegerman}
\end{figure}
\section{A brief summary of geometric convergence}
\Refthm{HypDehnSurgery}, the hyperbolic Dehn filling theorem, actually gives information on convergence of geometry of spaces. The 3-manifolds obtained by hyperbolic Dehn filling on a complete hyperbolic manifold $M$ are ``close'' geometrically to $M$. This statement can be made precise, and often explicit, which is very useful: if we can bound geometric quantities for $M$, then the fact that (certain) Dehn fillings are geometrically close often translates into a bound on the same geometric quantities for Dehn fillings.
In this section, we define convergence of spaces and state a stronger version of the hyperbolic Dehn filling theorem that includes such convergence. We also survey briefly a few results and consequences of these results.
\subsection{Convergence of spaces}
Given two abstract metric spaces, we need a way to measure distance between them, and to describe when a sequence of spaces converges to another space.
Convergence of metric spaces has been studied by Gromov \cite{Gromov}. In the case of hyperbolic spaces, \cite{CanaryEpsteinGreen:Notes}, \cite[Chapter~E]{benedetti-petronio}, \cite[Chapter~8]{kapovich}, and \cite[Chapter~6]{Cooper-HK:orbifolds} give further details and examples.
There are actually several different definitions of geometric convergence of metric spaces in the literature on hyperbolic 3-manifolds and cone manifolds,\index{hyperbolic cone manifold}\index{cone manifold} which can be confusing. However, many are equivalent; see for example~\cite[Theorem~3.2.9]{CanaryEpsteinGreen:Notes} and~\cite[Theorem~8.11]{kapovich}. We give one definition here, as well as some examples that motivate other equivalent definitions.
One way of measuring ``distance'' between spaces is via quasi-isometries.
\begin{definition}\label{Def:QuasiIsometry}
Let $X$ and $Y$ be metric spaces with distance functions $d_X$ and $d_Y$, respectively. For $K>1$ and $c>0$, a bijection $f\from X \to Y$ is a \emph{$(K,c)$-quasi-isometric embedding}\index{quasi-isometric embedding} if for all distinct points $x,y\in X$,
\[ \frac{1}{K} d_X(x,y) - c \leq d_Y(f(x),f(y)) \leq K d_X(x,y) + c.\]
Let $f\from X\to Y$ be a $(K,c)$-quasi-isometric embedding. We say $f$ is a \emph{$(K,c)$-quasi-isometry}\index{quasi-isometry} if there also exists a map $\overline{f}\from Y \to X$ that is a $(K,c)$-quasi-isometric embedding as well as an \emph{approximate inverse}:\index{approximate inverse}
\[ \mbox{for all $x\in X$ and $y\in Y$, } d_X(\overline{f}\circ f(x),x)\leq c \mbox{ and } d_Y(f\circ\overline{f}(y),y)\leq c. \]
\end{definition}
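For a simple example, take $X = Y = {\mathbb{R}}$ with the usual metric and $f(x) = 2x$. For any $c>0$ we have $\frac{1}{2}|x-y| - c \leq |f(x)-f(y)| = 2|x-y| \leq 2|x-y| + c$, so $f$ is a $(2,c)$-quasi-isometric embedding, and $\overline{f}(y) = y/2$ is an exact inverse, so $f$ is a $(2,c)$-quasi-isometry. On the other hand, the bijection $f(x)=x^2$ of $[0,\infty)$ to itself is not a quasi-isometric embedding for any $(K,c)$: the points $x$ and $x+1$ are at distance $1$, while $|f(x+1)-f(x)| = 2x+1$ is unbounded.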
\begin{definition}\label{Def:GeomConvergence}
Let $\{X_n\}$ be a sequence of metric spaces, each with a basepoint $x_n\in X_n$. Let $X$ be a metric space with basepoint $x\in X$. Let $B_r(x_n)$ denote the set of points in $X_n$ with distance at most $r$ from $x_n$. Similarly, let $B_r(x)$ denote the set of points in $X$ with distance at most $r$ from $x$.
We say that the sequence $(X_n,x_n)$ converges to the metric space $(X,x)$ in the \emph{quasi-isometric topology}\index{quasi-isometric topology} (or \emph{Gromov--Hausdorff topology}\index{Gromov--Hausdorff topology} or \emph{geometric topology}\index{geometric topology}) if the following holds.
For all $\epsilon>0$ and all $r>0$, there exists an integer $N$ such that if $n>N$, then there exists a $(1+1/n, \epsilon)$-quasi-isometry \[f_n\from B_r(x_n) \to B_r(x).\]
In this case, we also say that the space $X$ is a \emph{geometric limit}\index{geometric limit} of the sequence $X_n$. The spaces $X_n$ \emph{converge geometrically}\index{geometric convergence} to $X$.
\end{definition}
In other words, the spaces $X_n$ with basepoints $x_n$ converge in the quasi-isometric topology, or converge geometrically, if there are better and better quasi-isometries between larger and larger closed and bounded sets about the basepoints. The maps $f_n$ are becoming closer and closer to actual isometries on larger and larger compact sets.
Notice that a basepoint, and compact sets around basepoints, feature prominently in the definition. The choice of basepoint does affect geometric limits, as the following example shows.
\begin{example}[A 2-dimensional geometric limit]
Suppose $S$ is a compact surface with genus three. Let $\gamma \subset S$ be a simple closed curve such that cutting $S$ along $\gamma$ yields two components: a genus one surface with one boundary component and a genus two surface with one boundary component.
Let $X_n$ be the metric space obtained by giving $S$ a hyperbolic metric in which $\gamma$ has length $1/n$. Let $x_n$ be a basepoint that lies on the genus one side of $\gamma$ in $X_n$ and let $y_n$ be a basepoint that lies on the genus two side. See \reffig{GeomLimExample}.
\begin{figure}
\import{Figures/Ch06_Completion/}{F6-07-BasePt.eps_tex}
\caption{Changing the basepoint from $x_n$ to $y_n$ can change the homeomorphism type of a geometric limit.}
\label{Fig:GeomLimExample}
\end{figure}
Then $(X_n, x_n)$ has a geometric limit $(X,x)$ such that $X$ is homeomorphic to a torus with one cusp, as shown on the left of \reffig{GeomLimExample}. However, $(X_n,y_n)$ has a geometric limit $(Y,y)$ where $Y$ is homeomorphic to a genus two surface with one cusp, as shown on the right of \reffig{GeomLimExample}. Thus changing the basepoint can change the homeomorphism type of a geometric limit.
\end{example}
The following gives a 3-dimensional example of geometric convergence.
\begin{example}[Geometric convergence of ideal tetrahedra]\label{Example:TetrahedraConvergence}
Consider a sequence of ideal tetrahedra $T_n$ in ${\mathbb{H}}^3$ with vertices at $0$, $1$, $\infty$, and $z_n$, where $z_n$ is converging to some $z_\infty\in{\mathbb{C}}$ with $\Im(z_\infty)>0$. For $n$ sufficiently large, there will be a point $p\in{\mathbb{H}}^3$ in the interior of all tetrahedra $T_n$ and in the ideal tetrahedron $T_\infty$ with vertices at $0$, $1$, $\infty$, and $z_\infty$. For fixed $R>0$, consider the compact set given by taking the intersection of $T_n$ with a closed ball $B_R(p) \subset {\mathbb{H}}^3$.
For large $n$, there will be better and better quasi-isometries from the balls $B_R(p)\cap T_n$ to $B_R(p)\cap T_\infty$. It follows that the ideal tetrahedra $(T_n,p)$ converge geometrically to the ideal tetrahedron $(T_\infty,p)$.
\end{example}
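The convergence in this example can be made concrete with the following standard computation, which we record here only as an illustration. For the ideal tetrahedron with vertices $0$, $1$, $\infty$, and $z$, where $\Im(z)>0$, the dihedral angles along the three edges meeting the vertex $\infty$ are
\[ \arg(z), \qquad \arg\Bigl(\frac{1}{1-z}\Bigr), \qquad \arg\Bigl(\frac{z-1}{z}\Bigr), \]
which sum to $\pi$; opposite edges have equal dihedral angles, so these three angles determine the ideal tetrahedron up to isometry. Each angle is a continuous function of $z$, so as $z_n\to z_\infty$ the angles of $T_n$ converge to those of $T_\infty$, reflecting the geometric convergence described above.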
\begin{example}[Polyhedral convergence]\label{Example:PolyhedralConvergence}
More generally, suppose $M$ is a complete hyperbolic 3-manifold obtained by gluing faces of an ideal polyhedron (possibly with infinite volume) in ${\mathbb{H}}^3$, and suppose $M_n$ is another complete hyperbolic 3-manifold obtained by face-pairings\index{face-pairing isometry} of slightly deformed polyhedra, with face-pairings of $M_n$ pairing the same (combinatorial) faces as those of $M$. Suppose also that the polyhedra making up $M_n$ converge geometrically to those making up $M$ as $n\to\infty$, with appropriate basepoints. Then with the same basepoints, $(M_n, x_n)$ converges to $(M,x)$ in the quasi-isometric topology. This is made precise in \cite{marden}; convergence of spaces in this manner is called \emph{polyhedral convergence}.\index{polyhedral convergence}
Finally, taking the example one step farther, note that face-pairings\index{face-pairing isometry} are isometries in $\operatorname{PSL}(2,{\mathbb{C}})$, generating a discrete group $G_n$ when $M_n$ is hyperbolic. If $M$ is also hyperbolic, with face-pairings generating the discrete group $G$, we say that the sequence of groups $G_n$ converges to $G$ \emph{geometrically}.\index{geometric convergence of groups} These notions of convergence can be shown to be equivalent to convergence in the quasi-isometric topology; for example see \cite[Theorem~8.11]{kapovich}.
\end{example}
Our main reason for defining geometric convergence is that it allows us to restate a much stronger version of Thurston's hyperbolic Dehn filling theorem, \refthm{HypDehnSurgery}, as follows.
\begin{theorem}[Hyperbolic Dehn filling with geometric convergence]\label{Thm:GeomConvDehnFilling}\index{Thurston's hyperbolic Dehn filling theorem!geometric convergence}\index{hyperbolic Dehn filling theorem!geometric convergence}
Let $M$ admit a complete hyperbolic structure with fixed horoball neighborhood of a cusp $C$. Let $s_n$ be a sequence of slopes on $\partial C$ such that the length of a geodesic representative of $s_n$, measured in the induced Euclidean metric on $\partial C$, approaches infinity. Then for large enough $n$, the Dehn filled manifolds $M(s_n)$ are hyperbolic and approach $M$ as a geometric limit.
Similarly, if $M$ has multiple cusps, then Dehn filled manifolds along slopes with lengths approaching infinity approach $M$ as a geometric limit.
\end{theorem}
\begin{proof}[Proof idea]
In the proof sketch of Thurston's hyperbolic Dehn filling theorem, \refthm{HypDehnSurgery}, we noted that incomplete hyperbolic structures on $M$ can be obtained by one-complex-parameter families of deformations of the complete hyperbolic structure, and that the hyperbolic structure on a Dehn filling $M(s_n)$ is the completion of one of these incomplete structures. These deformations vary continuously in the quasi-isometric topology, and as the lengths of the slopes $s_n$ go to infinity, the corresponding incomplete structures approach the complete structure on $M$. Hence the filled manifolds $M(s_n)$, with suitable basepoints, converge geometrically to $M$.
\end{proof}
\subsection{Some consequences of geometric convergence}
The fact that $M$ is a geometric limit in \refthm{GeomConvDehnFilling} implies that geometric properties of Dehn fillings of $M$ converge to those of $M$. For example, the thick parts of a cusped finite-volume manifold and of its Dehn fillings along sufficiently long slopes will be quasi-isometric. A geodesic in the cusped manifold will map to curves that will eventually be isotopic to geodesics in the filled manifolds, with lengths approaching the length of the original. Unfortunately, \refthm{GeomConvDehnFilling} does not give any information on how long the filling slopes must be in order to guarantee concrete bounds on the change in geometry. However, since the theorem appeared, there has been progress in making it more concrete.
For example, the volume of a finite volume hyperbolic 3-manifold $M$ is one of its most useful geometric properties. If 3-manifolds $M_n$ converge to $M$ as a geometric limit, then their volumes converge:
\[ \lim_{n\to\infty}\operatorname{vol}(M_n) = \operatorname{vol}(M). \]
Much more can be said about volumes and Dehn filling.\index{Dehn filling} As a first step, the following theorem is also due to Thurston, and appears in the same notes in which he outlined the proof of \refthm{GeomConvDehnFilling}, as \cite[Theorem~6.5.6]{thurston}.
\begin{theorem}[Volume under Dehn filling]\label{Thm:VolumeDF}\index{volume bound!upper bound on Dehn filling}
If $M$ is hyperbolic with cusp $C$, and $s$ is a slope on $\partial C$ such that $M(s)$ is hyperbolic, then
\[ \operatorname{vol}(M) > \operatorname{vol}(M(s)). \]
Similarly if $M$ has multiple cusps $C_1,\dots, C_n$ and slopes $s_1, \dots, s_n$, one on each $C_j$, such that $M(s_1, \dots, s_n)$ is hyperbolic, then
\[ \operatorname{vol}(M)>\operatorname{vol}(M(s_1, \dots, s_n)). \]
\end{theorem}
While \refthm{GeomConvDehnFilling} implies that the volumes of the Dehn fillings $M(s_n)$ approach the volume of $M$, \refthm{VolumeDF} implies that volume strictly decreases under Dehn filling for any slope giving a hyperbolic manifold. The slope need not be in the neighborhood of infinity provided by \refthm{HypDehnSurgery}; the volume decreases regardless.
The full proof of \refthm{VolumeDF} can be found in \cite{thurston}. In this book, we will give a full proof of a slightly weaker result in \refchap{AngleStruct}, so we will delay the discussion of the proof ideas until then.
For volumes, even more can be said, and there have been concrete results bounding the change in volume under Dehn filling by Neumann and Zagier~\cite{NeumannZagier} and by Hodgson and Kerckhoff~\cite{hk:univ}, among others. We state one additional result along these lines here.
Note that if $M$ has a complete hyperbolic structure with cusp $C$, then $\partial C$ has a Euclidean structure,\index{Euclidean structure} and any slope $s\subset \partial C$ is isotopic to a geodesic with well-defined Euclidean length $\ell_{\partial C}(s)$. Provided the length of $s$ is at least $2\pi$, a lower bound on volume under Dehn filling\index{Dehn filling} can also be obtained. We will prove the following theorem in \refchap{Volume}.
\begin{theorem}[\cite{fkp:dfvjp}]\label{Thm:FKP}\index{volume bound!lower bound on Dehn filling}
Suppose $M$ is a hyperbolic manifold with cusps $C_1, \dots, C_n$ and slopes $s_1, \dots, s_n$, one on each $\partial C_i$, such that the minimal length slope $\ell_{\min} = \min\{\ell_{\partial C_j}(s_j)\}$ has length at least $2\pi$. Then the Dehn filled manifold $M(s_1, \dots, s_n)$ is hyperbolic with volume satisfying
\[ \operatorname{vol}(M(s_1, \dots, s_n)) \geq \left( 1 - \left(\frac{2\pi}{\ell_{\min}}\right)^2 \right)^{3/2} \operatorname{vol}(M). \]
\end{theorem}
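To get a feel for the bound, here is the arithmetic in one sample case. If every filling slope has length at least $4\pi$, then $2\pi/\ell_{\min}\leq 1/2$, and
\[ \operatorname{vol}(M(s_1, \dots, s_n)) \;\geq\; \left(1-\tfrac{1}{4}\right)^{3/2}\operatorname{vol}(M) \;=\; \tfrac{3\sqrt{3}}{8}\operatorname{vol}(M) \;\approx\; 0.65\,\operatorname{vol}(M). \]
As $\ell_{\min}\to\infty$ the factor tends to $1$, consistent with the volume convergence coming from \refthm{GeomConvDehnFilling}.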
\section{Exercises}
\begin{exercise}\label{Ex:fig8} (Incomplete structures on the figure-8 knot)
Thurston's notes contain a figure showing all parameterizations of hyperbolic structures on the figure-8 knot complement \cite[page 52]{thurston}. For any $w$ in this region, formula 4.3.2 in the notes gives us a corresponding $z$ so that if two tetrahedra with edge invariants $z$ and $w$ are glued, we obtain a (possibly incomplete) hyperbolic structure on the figure-8 knot complement.
Analyze what happens to the tetrahedra corresponding to $z$ and to $w$ as $w$ approaches a point on the boundary of this region.
More specifically, if $w$ approaches certain points on the boundary of this region, tetrahedra corresponding to both $z$ and $w$ start to become degenerate. Which points are these? Prove that the two tetrahedra are becoming degenerate in this case.
As $w$ approaches other values on the boundary, only one of the tetrahedra degenerates. Which points are these? Prove that only one tetrahedron is degenerating in this case.
\end{exercise}
\begin{exercise}
We have seen that the completion of an incomplete hyperbolic 3-manifold is no longer homeomorphic to the original hyperbolic 3-manifold. Is this true for completions of incomplete structures on the 3-punctured sphere? What surface do we obtain when we complete an incomplete hyperbolic structure on a 3-punctured sphere? Prove it.
\end{exercise}
\begin{exercise}\label{Ex:zcrossz}
Suppose $M$ is a closed manifold with a complete hyperbolic structure. Prove that $\pi_1(M)$ cannot contain a ${\mathbb{Z}} \times {\mathbb{Z}}$ subgroup. Conclude that $M$ cannot contain an embedded torus $T$ such that $\pi_1(T)$ injects into $\pi_1(M)$. [Such a torus is called \emph{incompressible}\index{incompressible torus}. A Dehn filling\index{Dehn filling} resulting in a closed manifold with an embedded incompressible torus is another example of an exceptional filling.]\index{exceptional Dehn filling}\index{Dehn filling!exceptional}
\end{exercise}
\begin{exercise}
Let $M$ be an orientable 3-manifold with a decomposition into ideal polyhedra, each with a hyperbolic structure, such that the polyhedra induce a hyperbolic structure on $M$. Let $v$ be an ideal vertex of $M$, i.e.\ an equivalence class of ideal vertices of the polyhedra, where vertices are equivalent if and only if they are identified under the gluing of the polyhedra.
Recall that ${\rm link}(v)$ is defined to be the boundary of a neighborhood of $v$ in $M$.
\begin{enumerate}
\item[(a)] Prove $\operatorname{link}(v)$ always inherits a similarity structure from the hyperbolic structure on $M$. Here a similarity structure is a $({\rm Sim}({\mathbb{E}}^2), {\mathbb{E}}^2)$-structure, where ${\rm Sim}({\mathbb{E}}^2)$ is a subgroup of the group of affine transformations consisting of elements of the form $x\mapsto Ax+b$, where $A$ is a linear map that rotates and/or scales only. Thus ${\rm Sim}({\mathbb{E}}^2)$ is formed by rotations, scalings, and translations.
\item[(b)] Prove that the only closed, orientable surface which admits a similarity structure is a torus. It follows that $\operatorname{link}(v)$ is always homeomorphic to a torus when $M$ is an orientable manifold with hyperbolic structure (even incomplete).
\end{enumerate}
\label{Ex:link}
\end{exercise}
\begin{exercise}\label{Ex:1PtCmpt}
Let $M$ be an orientable 3-manifold that admits an incomplete hyperbolic structure with completion given by attaching the one-point compactification of a cusp neighborhood $N(C)$. Prove that the completion is not a manifold.
\end{exercise}
\begin{exercise}
Prove that a reducible\index{reducible 3-manifold} manifold cannot be hyperbolic. That is, it admits no complete hyperbolic structure.
\end{exercise}
\begin{exercise}\label{Ex:LensSpaceNotHyp}
Prove that a lens space\index{lens space} cannot admit a complete hyperbolic structure.
\end{exercise}
\begin{exercise}
(On complex length of $A$ in $\operatorname{PSL}(2,{\mathbb{C}})$)
\begin{enumerate}
\item[(a)] Suppose $A \in \operatorname{PSL}(2,{\mathbb{C}})$ has axis the geodesic from $0$ to $\infty$. Then the matrix of $A$ may be parameterized by a single complex number $\lambda$. What is the form of this matrix?
\item[(b)] Denote the trace of a matrix $A$ by ${\rm tr}(A)$, and its complex length\index{complex length} by $\mathcal{L}(A)$. Prove that ${\rm tr}(A) = 2\cosh(\mathcal{L}(A)/2)$.
\end{enumerate}
\end{exercise}
\begin{exercise}
By computer, investigate the hyperbolic Dehn filling space\index{hyperbolic Dehn filling space}\index{Dehn filling space} obtained by filling a single component of the Whitehead link.\index{Whitehead link} Identify regions for which the result has a decomposition into positively oriented tetrahedra, negatively oriented tetrahedra, etc.
\end{exercise}
\begin{exercise}
(Algebraic versus geometric convergence) The purpose of this exercise is to work through a basic example of Thurston \cite{thurston} showing a difference between algebraic and geometric convergence of discrete groups. Let $A_n \in \operatorname{PSL}(2,{\mathbb{C}})$ have matrix representation
\[ A_n = \mat{\exp(w_n) & n\sinh(w_n)\\ 0& \exp(-w_n)} \quad \mbox{where} \quad
w_n = \frac{1}{n^2} + i\frac{\pi}{n}.\]
\begin{enumerate}
\item Show that the matrices $A_n$ converge to the matrix $A= \mat{1&i\pi\\0&1}$; thus the group $\langle A_n \rangle$ converges algebraically to the group $\langle A \rangle$.
\item Show that $\langle A_n \rangle$ does not converge geometrically to the group $\langle A \rangle$, by finding a subsequence $A_{n_j}$ converging to an element of $\operatorname{PSL}(2,{\mathbb{C}})$ that does not lie in $\langle A \rangle$.
\end{enumerate}
\end{exercise}
\part{Tools, Techniques, and Families of Examples}\label{Part:Examples}
\chapter{Twist Knots and Augmented Links}\label{Chap:TwistKnots}
In this chapter, \blfootnote{Jessica S. Purcell, Hyperbolic Knot Theory}
we study a class of knots that have some of the simplest hyperbolic geometry, namely twist knots. This class includes the figure-8 knot, the $5_2$ knot, and the $6_1$ knot that we have encountered so far. We also generalize to give examples of knots and links whose geometry is relatively explicit. This will equip us with many examples.
From now on, we say a knot or link in $S^3$ is \emph{hyperbolic}\index{hyperbolic knot or link} if its complement $S^3-K$ admits a complete hyperbolic structure. Similarly, a \emph{hyperbolic 3-manifold}\index{hyperbolic 3-manifold} is a 3-manifold that admits a complete hyperbolic structure. Note that the completeness of the hyperbolic structure is implied in this terminology.
\section{Twist knots and Dehn fillings}
Recall the definition of twist knots from \refchap{KnotIntro}.
\begin{figure}[h]
\includegraphics{Figures/Ch07_TwistKnots/F7-01-TwReg.eps}
\caption{A twist region of a diagram}
\label{Fig:TwistRegion2}
\end{figure}
A twist region\index{twist region} is a string of bigon\index{bigon} regions in the diagram graph of a knot diagram, with the bigons arranged end-to-end at their vertices, as in \reffig{TwistRegion2}. Recall also that a twist region is maximal in the sense that there are no additional bigon regions meeting the vertices on either end. A single crossing adjacent to no bigons is also a twist region. Recall also that twist regions are required to be alternating.
The \emph{twist knot} $J(2,n)$\index{twist knot}, defined in \refdef{TwistKnot}, is the knot with a diagram consisting of exactly two twist regions, one of which contains two crossings, and the other containing $n\in{\mathbb{Z}}$ crossings. The direction of crossing depends on the sign of $n$.
Twist knots $J(2,2)$, $J(2,3)$, $J(2,4)$, and $J(2,5)$ are shown again in \reffig{TwistKnots2}.
\begin{figure}
\includegraphics{Figures/Ch07_TwistKnots/F7-02-TwKnot.eps}
\caption{Twist knots $J(2,2)$ (the figure-8 knot), $J(2,3)$ (the $5_2$ knot), $J(2,4)$ (the $6_1$ or Stevedore knot), and $J(2,5)$}
\label{Fig:TwistKnots2}
\end{figure}
\begin{definition}\label{Def:WhiteheadLink}
The \emph{Whitehead link}\index{Whitehead link} is the link shown in \reffig{Whitehead}. Note the two links shown are isotopic.
\end{definition}
\begin{figure}
\includegraphics{Figures/Ch07_TwistKnots/F7-03-Whiteh.eps}
\caption{Two diagrams of the Whitehead link.\index{Whitehead link}}
\label{Fig:Whitehead}
\end{figure}
We will show in \refprop{WhiteheadGeom} that the complement of the Whitehead link\index{Whitehead link} is hyperbolic.
\begin{proposition}\label{Prop:TwistKnotWhitehead}
The complement of the twist knot $J(2,n)$ is obtained by Dehn filling\index{Dehn filling} the hyperbolic manifold isometric to the complement of the Whitehead link.\index{Whitehead link}
\end{proposition}
\begin{proof}
The proof uses topological properties of the sphere $S^3$ and the solid torus. Recall first that the sphere $S^3$ is the union of two solid tori whose cores are linked exactly once, but each core alone is unknotted.
The diagram of the Whitehead link\index{Whitehead link} on the left of \reffig{Whitehead} has a component at the bottom that is unknotted and does not cross itself. The complement of this component in $S^3$ is a solid torus. Note then that the other component is a knot in a solid torus, as shown on the left of \reffig{TKWProof}.
\begin{figure}
\import{Figures/Ch07_TwistKnots/}{F7-04-TKWpf.eps_tex}
\caption{The Whitehead link\index{Whitehead link} complement is homeomorphic to a knot in a solid torus, which we cut, twist, and reglue. The result is homeomorphic to the complement of $J(2,2)\cup U$}
\label{Fig:TKWProof}
\end{figure}
Now we apply a homeomorphism to the solid torus, which we view as $S^1\times D^2$. There is a homeomorphism given by slicing along a disk $\{x\}\times D^2$ of the solid torus, rotating one full time, then gluing back together. This homeomorphism is shown in the center of \reffig{TKWProof}.
The homeomorphism replaces the original link in the solid torus by a link with two additional crossings. By applying the homeomorphism repeatedly, we see that the complement of the Whitehead link\index{Whitehead link} is homeomorphic to the complement of the link with any even number of crossings encircled by the unknotted component. In particular, it is homeomorphic to the complement of the link $J(2,2k)\cup U$, where $U$ is a single unknotted component. By the Mostow--Prasad rigidity\index{Mostow--Prasad rigidity} theorem (\refthm{MostowGeom}), these link complements have isometric hyperbolic structures.
To obtain the complement of the knot $J(2,2k)$, attach a solid torus to $S^3-(J(2,2k)\cup U)$, filling in $U$ in a trivial way to give $S^3-J(2,2k)$. Thus $S^3-J(2,2k)$ is obtained from a manifold isometric to the complement of the Whitehead link\index{Whitehead link} by Dehn filling.\index{Dehn filling}
So far our proof only works for $J(2,n)$ with $n$ even. Now we consider the case of the knot $J(2,2k+1)$, with odd second component. We may isotope the Whitehead link,\index{Whitehead link} starting with the diagram on the left of \reffig{Whitehead}, to reverse the two crossings at the top, and insert a crossing encircled by the unknotted component at the bottom. This is shown in \reffig{TKWOdd}, left.
\begin{figure}
\import{Figures/Ch07_TwistKnots/}{F7-05-TKWOdd.eps_tex}
\caption{A sequence of homeomorphisms of the Whitehead link\index{Whitehead link} complement}
\label{Fig:TKWOdd}
\end{figure}
Following that figure, we may then reflect the diagram in the plane of projection, reversing all the crossings. This is a homeomorphism of the knot complement, hence an isometry. Now just as in the even case, we may insert any even number of crossings into the two strands encircled by the unknotted component. To obtain $J(2,2k+1)$, simply Dehn fill\index{Dehn filling} the unknotted component in the obvious way.
\end{proof}
\begin{corollary}\label{Cor:TwistKnotWhitehead}
The complement of the Whitehead link\index{Whitehead link} is a geometric limit of $S^3-J(2,n)$.
\end{corollary}
\begin{proof}
Because they are obtained by Dehn filling\index{Dehn filling} the complement of the Whitehead link,\index{Whitehead link} all but finitely many of the knot complements $S^3-J(2,n)$ lie in any given neighborhood of infinity in the Dehn surgery space for a cusp of the complement of the Whitehead link. \Refthm{GeomConvDehnFilling} implies that the Whitehead link complement is therefore a geometric limit of these manifolds.
\end{proof}
In order to study the geometry of twist knots, we study the geometry of the geometric limit, the Whitehead link complement.\index{Whitehead link}
\begin{proposition}\label{Prop:WhiteheadGeom}
The complete hyperbolic structure on the complement of the Whitehead link\index{Whitehead link} is obtained by gluing faces of a regular ideal octahedron,\index{regular ideal octahedron} with the face pairings as shown in \reffig{Octahedron}.
\end{proposition}
\begin{figure}
\import{Figures/Ch07_TwistKnots/}{F7-06-Oct.eps_tex}
\caption{Shown is the boundary of an ideal octahedron (one vertex at infinity). Pairing faces as shown gives the complement of the Whitehead link.\index{Whitehead link}}
\label{Fig:Octahedron}
\end{figure}
A \emph{regular ideal octahedron}\index{ideal octahedron, regular}\index{regular ideal octahedron} is the ideal octahedron in ${\mathbb{H}}^3$ with all dihedral angles equal to $\pi/2$.
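For concreteness, here is one explicit model, stated without proof (the verification is a pleasant exercise in the upper half-space model): the ideal octahedron with ideal vertices
\[ 0, \quad 1, \quad i, \quad -1, \quad -i, \quad \infty \]
has four faces lying on the vertical planes over the sides of the square with vertices $\pm 1$, $\pm i$, and four faces lying on the hemispheres of radius $\sqrt{2}/2$ centered at $(\pm 1 \pm i)/2$. All of its dihedral angles are $\pi/2$, so it is a regular ideal octahedron.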
\begin{proof}
The fact that the Whitehead link\index{Whitehead link} complement is obtained by face pairings of an ideal octahedron can be readily seen by applying the methods of \refchap{Fig8Decomp} to the diagram of the Whitehead link on the right of \reffig{Whitehead}. After collapsing bigons,\index{bigon} we obtain two ideal polyhedra with four triangular faces and one quadrilateral face. Glue the quadrilaterals to obtain an ideal octahedron.\index{ideal octahedron, regular}\index{regular ideal octahedron} The form is shown in \reffig{Octahedron}. We leave the details for \refex{WhiteheadOctahedron}.
In a regular ideal octahedron, all dihedral angles are $\pi/2$, so horospheres intersect a neighborhood of each ideal vertex in a square. We need to check that the face pairings give a hyperbolic structure in this case. Note first that every point in the interior of an octahedron and in the interior of a face of the octahedron has a neighborhood isometric to a ball in ${\mathbb{H}}^3$. We need to show that each point on an edge also has such a neighborhood, and then \reflem{IsometricNbhd} will imply that the gluing is a manifold with a (possibly incomplete) hyperbolic structure.
Note first that each of the edges (there are three) is glued four times. Thus the total angle around each edge will be $4\pi/2 =2\pi$. This is not quite enough to show that each point on an edge has a neighborhood isometric to a ball in ${\mathbb{H}}^3$, because composing the gluings around an edge may introduce nontrivial translation or scale. To show that this does not happen, consider each end of an ideal edge within a cusp. Any horosphere intersects a neighborhood of an ideal vertex of the regular ideal octahedron\index{ideal octahedron, regular}\index{regular ideal octahedron} in a Euclidean square. Under the developing map, squares can only patch together in squares to give a tiling of the universal cover of each cusp by Euclidean squares. There are four squares meeting around a vertex in the cusp corresponding to one of our ideal edges. Note that the squares cannot be scaled or sheared. It follows that edges glue up without shearing singularities, and the structure is hyperbolic.
To show that the structure is complete, we use \refthm{EuclidCusp}: the structure is complete if and only if for each cusp, the induced structure on the boundary is Euclidean. But as already noted, each cusp is tiled by Euclidean squares corresponding to intersections of a horosphere with an ideal vertex of the regular ideal octahedron.\index{ideal octahedron, regular}\index{regular ideal octahedron} Under the developing map, squares can only patch together to give a Euclidean structure:\index{Euclidean structure} there will be no rotation or scale. Thus the hyperbolic structure must be complete.
\end{proof}
In \refchap{AngleStruct}, we will obtain a formula to calculate the volume of a regular hyperbolic ideal octahedron. For now, we state that the volume is a constant ${v_{\rm{oct}}} = 3.66...$.\index{ideal octahedron, regular}\index{regular ideal octahedron}\index{regular ideal octahedron! volume}
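For reference, a standard closed form for this constant is
\[ {v_{\rm{oct}}} = 8\,\Lambda(\pi/4) = 3.66386\ldots, \quad \mbox{where} \quad \Lambda(\theta) = -\int_0^\theta \log|2\sin t|\,dt \]
is the Lobachevsky function; we quote this value here without derivation, only for comparison with the numerical estimates below.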
\begin{corollary}\label{Cor:TwistVolume}
The volume of a hyperbolic twist knot complement is universally bounded
\[ \operatorname{vol}(S^3-J(2,n)) < {v_{\rm{oct}}}, \]
and as $n\to \infty$, $\operatorname{vol}(S^3-J(2,n))\to {v_{\rm{oct}}}$.
\end{corollary}
\begin{proof}
The Dehn filling\index{Dehn filling} bound follows immediately from Thurston's theorem on volume change under Dehn filling, \refthm{VolumeDF}. The convergence follows from \refthm{GeomConvDehnFilling}.
\end{proof}
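As a numerical illustration of \refcor{TwistVolume}: the complete hyperbolic structure on the complement of the figure-8 knot $J(2,2)$ is built from two regular ideal tetrahedra, so
\[ \operatorname{vol}(S^3 - J(2,2)) = 2\,v_{\rm tet} \approx 2.0299 < 3.6639 \approx {v_{\rm{oct}}}, \]
where $v_{\rm tet}\approx 1.0149$ denotes the volume of a regular ideal tetrahedron, a value we quote here without derivation.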
We have not yet discussed which twist knots are hyperbolic. We have seen that the figure-8 knot is hyperbolic, and similar methods can be used to show that each of the knots in \reffig{TwistKnots} is hyperbolic. More generally, we will see in \refchap{Alternating} (or by other methods in \refchap{TwoBridge}) that all twist knots $J(2,n)$ with $n\geq 2$ or $n\leq -3$ are hyperbolic. When $n=1$ or $n=-2$, the standard diagram of $J(2,n)$ can easily be reduced to a diagram with only a single twist region, and such a knot is not hyperbolic; when $n=-1$, its diagram can easily be reduced to that of the unknot, which is also not hyperbolic. All other twist knots are hyperbolic.
\section{Double twist knots and the Borromean rings}
The results of the previous section generalize immediately to knots and links with exactly two twist regions, but with any number of crossings in either twist region.
\begin{definition}\label{Def:DoubleTwistKnot}
The \emph{double twist knot or link} $J(k,\ell)$ is the knot or link with a diagram consisting of exactly two twist regions, one of which contains $k$ crossings, and the other contains $\ell$ crossings, for $k, \ell\in{\mathbb{Z}}$. See \reffig{DoubleTwist}. Note that $J(k,\ell)$ is a knot if and only if at least one of $k, \ell$ is even; otherwise it is a link with two components.
\end{definition}
\begin{figure}
\import{Figures/Ch07_TwistKnots/}{F7-07-DoubTw.eps_tex}
\caption{A double twist knot or link has two twist regions, one with $k$ crossings and one with $\ell$ crossings}
\label{Fig:DoubleTwist}
\end{figure}
Just as for twist knots, double twist knots are obtained by Dehn filling\index{Dehn filling} a simple link complement.
\begin{proposition}\label{Prop:JklBorromean}
The complement of the link $J(k,\ell)$ is obtained by Dehn filling\index{Dehn filling} the complement of one of the four links shown in \reffig{Borromean}, depending on the parity of $k$ and $\ell$.
\end{proposition}
\begin{figure}
\includegraphics{Figures/Ch07_TwistKnots/F7-08-Borrom.eps}
\caption{Complements of $J(k,\ell)$ are obtained by Dehn filling\index{Dehn filling} one of these four links. The link on the left is known as the Borromean rings.\index{Borromean rings}}
\label{Fig:Borromean}
\end{figure}
\begin{proof}
The proof is nearly identical to that of \refprop{TwistKnotWhitehead}, except now it is done in two steps, since there are two unknotted components. Apply a homeomorphism of a solid torus, as in \reffig{TKWOdd}, twice, once for each unknotted component. The details are left to the reader.
\end{proof}
The link on the left of \reffig{Borromean} is equivalent to a link more famously known as the \emph{Borromean rings}\index{Borromean rings}; its more common diagram is shown in \reffig{2BorromeanRings}.
We will call the other links of \reffig{Borromean} the \emph{Borromean twisted sisters}\index{Borromean twisted sisters}, and say the links are in the \emph{Borromean family}\index{Borromean family}. In fact, the middle two links are equivalent.
\begin{proposition}\label{Prop:GeomBorromean}
The complements of the Borromean rings\index{Borromean rings} and the Borromean twisted sisters\index{Borromean twisted sisters} all admit complete hyperbolic structures obtained by gluing two regular ideal octahedra.
\end{proposition}
\begin{proof}
Because the Borromean rings has a diagram that is alternating,\index{alternating knot or link} its complement can be split into ideal polyhedra using the methods of \refchap{Fig8Decomp}. However, we present a new way to decompose the complements of links of the Borromean family that we will generalize below.
View the diagrams of \reffig{Borromean} in three dimensions. The two link components in each diagram that will be Dehn filled\index{Dehn filling} to produce $J(k,\ell)$ should be viewed as lying perpendicular to the plane of the paper, which is the plane of projection $S^2\subset S^3$. The other link component(s) should be viewed as lying in the plane of projection except at crossings; when the component crosses itself it dips briefly above or below the plane of projection, then returns to the plane.
The components lying perpendicular to the plane of projection are unknotted, and each bounds a 2-punctured disk, shown as shaded in \reffig{Shaded2PunctDisk}.
\begin{figure}
\includegraphics{Figures/Ch07_TwistKnots/F7-09-2PunD.eps}
\caption{Shaded 2-punctured disks.}
\label{Fig:Shaded2PunctDisk}
\end{figure}
As the first step of the decomposition, slice each of these disks up the middle, replacing a single 2-punctured disk with two parallel copies of the 2-punctured disk. This move is shown on the left of \reffig{BorromeanDecomp}.
\begin{figure}
\includegraphics{Figures/Ch07_TwistKnots/F7-10-BorDmp.eps}
\caption{Left: slice 2-punctured disks up the middle (obtain parallel 2-punctured disks, shown here pulled apart). Middle left: Untwist single crossings. Middle right: Cut along plane of projection. Right: collapse remnants of the link to ideal vertices.}
\label{Fig:BorromeanDecomp}
\end{figure}
Now if a 2-punctured disk is adjacent to a crossing in the plane of projection, the next step is to rotate that 2-punctured disk $180^\circ$ to unwind the crossing, as in the middle left of \reffig{BorromeanDecomp}. Note this rotation pulls the diagram along with it on one side, but the rotation is only performed on the 2-punctured disk adjacent to the crossing, not on the parallel 2-punctured disk. After this step, all crossings in the plane of projection have been removed.
Next, cut along the plane of projection, splitting the complement into two identical pieces as in the middle right of \reffig{BorromeanDecomp}.
Finally, for each piece, collapse remnants of the link to ideal vertices, as on the right of \reffig{BorromeanDecomp}. We claim the result in that figure is topologically an octahedron. To see this, note it has two ideal vertices colored white, coming from crossing circles, and four ideal vertices colored black, coming from the component of the link on the plane of projection. There are four shaded faces that all have three edges, hence all shaded faces are triangles. There are four white faces, including the one running through the point at infinity in the plane of projection, and each of these white faces also has three edges, so each is a triangle. Thus the result is an ideal octahedron. Recall that there is actually another octahedron coming from our decomposition: the other octahedron comes from the region below the plane of projection after slicing along that plane. So the link complements in the Borromean family\index{Borromean family} all decompose into two ideal octahedra.
Note that the face pairings of the two ideal octahedra that give back the original link complement will be different for the different links; they can be found by tracing backwards through the decomposition process above. To undo the step of cutting along the plane of projection, we glue matching white faces of the opposite octahedra together in pairs. To undo the step of slicing along 2-punctured disks, we glue remaining shaded triangles in pairs; however there are two options depending on whether or not we untwisted a crossing. If both parallel 2-punctured disks were adjacent to no crossings, then corresponding shaded triangles on the same ideal octahedron are glued across an ideal vertex (one of the white vertices of \reffig{BorromeanDecomp}). If there was an adjacent crossing, then a shaded triangle on one octahedron is glued to the opposite shaded triangle on the other octahedron across the (white) ideal vertex.
Finally, to see that these link complements all admit a complete hyperbolic structure, we give each of the two octahedra the geometry of a hyperbolic regular ideal octahedron,\index{ideal octahedron, regular}\index{regular ideal octahedron} then argue as in the proof of \refprop{WhiteheadGeom}. We check: each edge of the decomposition comes from the intersection of a 2-punctured disk with the plane of projection, and each edge class in the manifold is obtained by gluing four such edges. Thus the total angle around each edge will be $4(\pi/2) = 2\pi$. Again horospheres meet ideal vertices in Euclidean squares, and so the developing image cannot scale, shear, or rotate these squares. Thus edges glue without shearing singularities, and cusps are Euclidean. Hence the result is a complete hyperbolic structure.
\end{proof}
\begin{corollary}\label{Cor:VolJnm}
The volume of a hyperbolic double twist knot or link complement satisfies
\[ \operatorname{vol}(S^3 - J(k,\ell)) < 2\,{v_{\rm{oct}}}, \]
where ${v_{\rm{oct}}} = 3.66\dots$ is the volume of a regular ideal octahedron.\index{ideal octahedron, regular}\index{regular ideal octahedron}
\end{corollary}
\begin{proof}
By \refprop{JklBorromean} and \refprop{GeomBorromean}, when $S^3-J(k,\ell)$ is hyperbolic it is obtained by Dehn filling\index{Dehn filling} a manifold of volume $2\,{v_{\rm{oct}}}$, so the bound follows from \refthm{VolumeDF}.
\end{proof}
\section{Augmenting and highly twisted knots}
The above procedure can be generalized.
\begin{definition}\label{Def:Augmenting}
For any twist region of any link diagram, a new link is obtained by adding a single unknotted link component to the diagram, encircling the two strands of the link component. The link is said to be \emph{augmented}.\index{augmenting link diagram}\index{augmented link}\index{augmented link!definition} The added link component is called a \emph{crossing circle}\index{crossing circle}. We will refer to the original link components as \emph{knot strands}.\index{knot strand}
When a crossing circle is added to each twist region of the diagram, the link is said to be \emph{fully augmented}.\index{fully augmented link}\index{augmented link!fully augmented}
\end{definition}
The complement of an augmented link is homeomorphic to the complement of the link with any even number of crossings added to or removed from the twist region, by the same argument illustrated in \reffig{TKWProof}. Thus the complement of a fully augmented link is homeomorphic to the complement of the fully augmented link\index{augmented link} with one or zero crossings adjacent to each crossing circle. When there is one crossing adjacent to a crossing circle, we say the crossing circle is adjacent to a \emph{half-twist}.\index{half-twist}
The four links of \reffig{Borromean} are examples of fully augmented links.\index{augmented link} The decomposition of \refprop{GeomBorromean} goes through more generally for all fully augmented links.
This decomposition appears in the appendix to \cite{lackenby:alt-volume} by Agol and D.~Thurston. These links have very beautiful geometric properties, explored further in \cite{futer-purcell}, \cite{purcell:volume}, \cite{purcell:cusps}, and in the survey article \cite{Purcell:FullyAugmented}. Some of these results are included below, modeled off the exposition in \cite{Purcell:FullyAugmented}.
\begin{theorem}\label{Thm:FullyAugmentedDecomp}
Let $L$ be any fully augmented link,\index{fully augmented link!polyhedral decomposition} and assume we have applied a homeomorphism to $S^3-L$ so that $L$ has one or zero crossings adjacent to each crossing circle. Then the link complement $S^3-L$ decomposes into two identical ideal polyhedra with the following properties.
\begin{enumerate}
\item Faces of the polyhedra can be checkerboard colored. White faces correspond to regions of the plane of projection. Shaded faces are all triangles, and come from 2-punctured disks bounded by crossing circles, which we call \emph{crossing disks}.\index{crossing disk}
\item Ideal vertices are all 4-valent (before gluing).
\item Gluing the polyhedra identifies exactly four edges to a single edge class in the link complement.
\end{enumerate}
\end{theorem}
\begin{proof}
The decomposition is obtained very similarly to that in the proof of \refprop{GeomBorromean}. First, each crossing circle bounds a 2-punctured disk, which we shade. After applying a homeomorphism removing all pairs of crossings in each twist region, we may assume that the shaded disk is either adjacent to no crossings, or adjacent to a single crossing (half-twist).
Slice along the 2-punctured disks, splitting each into two parallel 2-punctured disks. Next, apply a $180^\circ$ rotation to those 2-punctured disks adjacent to a crossing, unwinding the crossing. Then slice along the plane of projection, splitting the complement into two identical pieces. Finally, shrink remnants of the link to ideal vertices. We check that each item of the theorem holds.
First, note faces are already checkerboard colored, with shaded faces coming from 2-punctured disks and white faces coming from the plane of projection. Note that edges of the decomposition come from intersections of white and shaded faces. There are exactly three edges bordering each shaded face, so each shaded face is a triangle.
Ideal vertices of the polyhedra come from remnants of the link. For those ideal vertices coming from a component of the link in the plane of projection, the ideal vertex will be adjacent to two edges coming from the 2-punctured disk on one of its ends, and two edges coming from the 2-punctured disk on its other end. Thus it is 4-valent. An ideal vertex coming from a crossing circle is also adjacent to four edges: two from each point where the link component meets the plane of projection.
Finally, note that each edge class contains four edges: two in each polyhedron lying on the parallel copies of the 2-punctured disk.
\end{proof}
Just as with the family of Borromean rings,\index{Borromean rings}\index{Borromean family} we can show that many of these links are hyperbolic, but not all. For example, if a fully augmented link has only one crossing circle, then the polyhedral decomposition of \refthm{FullyAugmentedDecomp} will have white bigon\index{bigon} regions, and collapsing these will collapse the entire polyhedron to a triangle (exercise). Similarly, if there are parallel crossing circles then there will be white bigon faces. We wish to rule these out.
\begin{definition}
A fully augmented link is called \emph{reduced}\index{reduced fully augmented link}\index{fully augmented link!reduced}\index{fully augmented link} if the following hold.
\begin{enumerate}
\item Its diagram is connected.
\item Its diagram is \emph{prime},\index{prime!diagram} i.e.\ any closed curve meeting the diagram twice bounds a region on one side with no crossings.
\item None of its crossing circles are parallel. That is, there are no closed curves in the diagram running over exactly two crossing circles and meeting exactly two white faces on either side of the two crossing circles. See \reffig{UnreducedAug}.
\end{enumerate}
\end{definition}
\begin{figure}
\import{Figures/Ch07_TwistKnots/}{F7-11-UnredA.eps_tex}
\caption{Left: a fully augmented link\index{fully augmented link} with a diagram that is not prime, with dotted lines indicating the closed curve contradicting the definition of prime. Middle: a fully augmented link that is not reduced, with parallel crossing circles indicated by the dotted lines. Removing one of the parallel crossing circles will give a reduced link. Right: The diagram is twist-reduced if one of the regions $A$ or $B$ consists only of bigons\index{bigon} in a twist region.}
\label{Fig:UnreducedAug}
\end{figure}
Reduced fully augmented links come from adding crossing circles to links with twist-reduced diagrams, in the sense of the following definition.
\begin{definition}\label{Def:TwReduced}
A diagram is \emph{twist-reduced}\index{twist-reduced} if whenever a simple closed curve $\gamma$ meets the diagram exactly twice in two crossings, running from one side of each crossing to the opposite side, then the curve $\gamma$ bounds a portion of the diagram containing a string of bigons\index{bigon} arranged end-to-end. See \reffig{UnreducedAug}, right.
\end{definition}
Note that if a diagram is not twist-reduced, then there exists a curve $\gamma$ meeting the diagram in exactly two crossings, with those crossings not separated by a string of bigons.\index{bigon} Because the two crossings are not separated by bigons, they lie in different twist regions. Augmenting the two twist regions results in two parallel crossing circles. Thus a diagram that is not twist-reduced has an associated fully augmented link that is not reduced.
We will encounter twist-reduced diagrams again, for example in \refdef{TwistReduced}. Meanwhile, the following gives a way of building large numbers of reduced fully augmented links.
\begin{lemma}\label{Lem:TwReducedGivesReducedAug}
Let $K$ be a link with a connected, prime, twist-reduced diagram. Then the fully augmented link obtained from $K$ by adding crossing circles to each twist region gives a reduced fully augmented link.\index{fully augmented link}\index{fully augmented link!reduced}\index{reduced fully augmented link}
\end{lemma}
\begin{proof}
Adding crossing circles to twist regions of a diagram does not change whether it is prime or connected. If the resulting fully augmented link is not reduced, there must be two parallel crossing circles. Thus in the original $K$, there are two distinct twist regions in the diagram with the property that when crossing circles are added around them, the crossing circles are parallel. Then an isotopy of one of the crossing circles to the other traces out two arcs on the plane of projection disjoint from $K$. Straightening these arcs, and drawing arcs across $K$ at the crossing circles, defines a closed curve that meets the diagram exactly twice, once in each of the two distinct twist regions. Isotope slightly to give a curve in the diagram of $K$ contradicting the definition of a twist-reduced diagram.
\end{proof}
\begin{lemma}\label{Lem:ExistsRightAngled}
Suppose a fully augmented link is reduced and contains at least two crossing circles. Then the polyhedra in the decomposition of \refthm{FullyAugmentedDecomp} admit a hyperbolic structure in which all dihedral angles are $\pi/2$. \index{fully augmented link}\index{fully augmented link}\index{augmented link!fully}\index{fully augmented link!polyhedral decomposition}
\end{lemma}
The proof of the lemma uses circle packings.
\begin{definition}\label{Def:CirclePacking}
A \emph{circle packing}\index{circle packing} is a connected collection of circles with disjoint interiors. The \emph{intersection graph}\index{intersection graph}\index{circle packing!intersection graph} of a circle packing is the graph with a vertex at the center of each circle, and an edge between vertices whenever the corresponding circles are tangent.
\end{definition}
\Reffig{CirclePacking} shows an example of a circle packing and most of its intersection graph on the left --- the vertex of the intersection graph in the unbounded region has been omitted.
\begin{figure}
\includegraphics{Figures/Ch07_TwistKnots/F7-12-CircPk.eps}
\caption{Left: A circle packing\index{circle packing} and its intersection graph.\index{circle packing!intersection graph} Right: Gray circles meeting white circles of a circle packing}
\label{Fig:CirclePacking}
\end{figure}
\begin{theorem}[Circle packing theorem]\label{Thm:Andreev}\index{circle packing theorem}\index{Koebe--Andreev--Thurston theorem}
Let $G$ be a finite planar graph that is simple, meaning $G$ has no loops and no multiple edges between a pair of vertices. Then $G$ is (isotopic to) the intersection graph of a circle packing\index{circle packing} on $S^2$. If $G$ is a triangulation of $S^2$, then the circle packing is unique up to M\"obius transformation.\index{M\"obius transformation}
\end{theorem}
\Refthm{Andreev} is also known as the Koebe--Andreev--Thurston theorem. It was first proved by Koebe \cite{Koebe}. We will use it here without giving its proof, as the proof is somewhat unrelated to the topic at hand.
\begin{proof}[Proof of \reflem{ExistsRightAngled}]
Consider a polyhedron $P$ from \refthm{FullyAugmentedDecomp} for a reduced fully augmented link with at least two crossing circles. Edges and vertices of the polyhedron form a graph $\Gamma$ on $S^2$. Form a new graph $G$ on $S^2$ by taking a vertex for each white face of $\Gamma$, and an edge between vertices of $G$ if two white faces are adjacent across an ideal vertex of $\Gamma$.
If we superimpose $G$ on $P$, then notice that each region of $G$ will contain exactly one shaded triangular face of $P$. Thus $G$ is a triangulation of $S^2$. We show that $G$ has no loops and no multiple edges.
Suppose first that $G$ has a loop. Then the edge of $G$ forming the loop can be superimposed on $P$ to run from a white face, through an ideal vertex of $P$, then back to the same white face. White faces correspond to regions of the diagram, and ideal vertices correspond to remnants of the link. Thus there is a closed curve $\gamma$ on the link diagram that runs from a region back to itself crossing over a single component of the link diagram. Because the link diagram consists of closed curves, this is possible only if the curve $\gamma$ runs along a crossing circle from one white region back to the same white region. Pushing $\gamma$ off the crossing circle slightly gives a curve contradicting the fact that the diagram is prime.
Now suppose that the graph $G$ has a multi-edge. Then there is a pair of white faces $W_1$ and $W_2$ of $P$ and a pair of ideal vertices $v_1$ and $v_2$ such that $v_1$ and $v_2$ are both adjacent to $W_1$ and $W_2$. Form a loop in $P$ running from $W_1$ through $v_1$ to $W_2$, then back through $v_2$ to $W_1$. This loop corresponds to a loop $\gamma$ in the diagram meeting the regions on the plane of projection corresponding to $W_1$ and $W_2$ and meeting two distinct link components between those regions. If the link components came from components on the plane of projection, then this contradicts the fact that the diagram is prime. If the link components came from crossing circles, then it contradicts the fact that the diagram is reduced. If one link component lies in the plane of projection and the other is a crossing circle, then we may slide slightly off the crossing circle to obtain a loop meeting exactly three components in the plane of projection. This is impossible for a closed curve and closed link components.
It follows that $G$ is a finite, simple, planar graph that is a triangulation of $S^2$. The circle packing\index{circle packing} theorem, \refthm{Andreev},\index{circle packing theorem}\index{Koebe--Andreev--Thurston theorem} implies that there is a unique circle packing of $S^2$ with $G$ as its intersection graph. View $S^2$ as the boundary at infinity\index{boundary at infinity} of ${\mathbb{H}}^3$. The circle packing of $G$ is then a circle packing on $\partial {\mathbb{H}}^3$. Each Euclidean circle on $\partial{\mathbb{H}}^3$ is the boundary of a plane in ${\mathbb{H}}^3$. Color these planes \emph{white}.
Because the intersection graph of the circle packing\index{circle packing!intersection graph} is a triangulation, regions complementary to the circle packing meet exactly three circles from the packing. There is a unique Euclidean circle running through the three points of tangency of the circle packing. Again this defines a geodesic plane in ${\mathbb{H}}^3$. This plane will intersect the white planes at right angles. Color this plane \emph{gray}. See \reffig{CirclePacking}.
For each white plane, remove from ${\mathbb{H}}^3$ the region bounded by that plane that is disjoint from the other white planes. Similarly for each gray plane. The result is a right-angled hyperbolic ideal polyhedron that is isomorphic to $P$, proving the lemma.
\end{proof}
\begin{theorem}\label{Thm:FullyAugCompleteHyp}
The complement of a reduced fully augmented link\index{fully augmented link!reduced}\index{fully augmented link!hyperbolic} with at least two crossing circles admits a complete hyperbolic structure, which is obtained by putting a right-angled structure on each of the polyhedra of \refthm{FullyAugmentedDecomp}.
\end{theorem}
\begin{proof}
By \reflem{ExistsRightAngled}, there exists a right-angled ideal hyperbolic polyhedron with the combinatorics of one of the polyhedra of \refthm{FullyAugmentedDecomp}. We give each of the polyhedra of \refthm{FullyAugmentedDecomp} the hyperbolic structure of this right-angled hyperbolic polyhedron, and glue by corresponding face-pairing isometries\index{face-pairing isometry} to obtain the complement of the fully augmented link.
To show this admits a complete hyperbolic structure, we need to show the angle around each edge is $2\pi$, that there is no shearing around edges, and that the cusps are all Euclidean. Because each edge class contains four edges, and each edge has dihedral angle $\pi/2$, the angle sum around each edge is $2\pi$.
Now consider cusps. Any horosphere meets an ideal vertex of the right-angled polyhedron in a rectangle. The developing image of a cusp is obtained by gluing these rectangles according to the gluing isometries on the faces. Note that a white face is glued by a reflection to the identical white face on the opposite polyhedron, so gluing across white sides of a rectangle does not scale or rotate. But then the gluing across shaded faces cannot scale or rotate either. Hence the developing image of each cusp is a tiling of the plane by Euclidean rectangles. Thus around each vertex there cannot be shearing, and the structure on the cusp must be Euclidean. So this gives the complete hyperbolic structure on the complement of the fully augmented link.
\end{proof}
\begin{corollary}\label{Cor:AugmentedGeodSfces}
In a reduced fully augmented link\index{fully augmented link} with at least two crossing circles, each shaded 2-punctured disk bounded by a crossing circle is a totally geodesic surface embedded in the link complement. The \emph{white surface}, obtained by gluing together regions corresponding to regions on the plane of projection (white faces), is also a totally geodesic surface embedded in the hyperbolic link complement. Moreover, these shaded 2-punctured disks and white surfaces meet at right angles whenever they intersect.
\end{corollary}
\begin{proof}
In the polyhedral decomposition, these surfaces become white and shaded faces, which are straightened to portions of geodesic planes to obtain the hyperbolic structure. Thus we know that these surfaces are \emph{pleated},\index{pleated surface} i.e.\ they decompose into ideal polygons, each of which is totally geodesic. In general pleated surfaces are bent along the edges bounding each polygon, so they are not necessarily totally geodesic. However, in this case, white faces meet shaded faces at angle $\pi/2$, thus in the gluing, white faces glue to white faces with angle $\pi$, i.e.\ no bending, and similarly for shaded faces. It follows that these surfaces are totally geodesic.
\end{proof}
\section{Cusps of fully augmented links}
For many applications in later chapters, it will be useful to know more explicit information about the geometry of fully augmented links,\index{fully augmented link} particularly the geometry of their \emph{cusps}.\index{cusp} Recall from \refthm{EuclidCusp} that each cusp admits a Euclidean structure.\index{Euclidean structure} In this section, we will determine properties of that Euclidean structure for fully augmented links. The exposition is similar to that in \cite{futer-purcell}.
Consider the universal cover of the complement of a hyperbolic fully augmented link. By \refcor{AugmentedGeodSfces}, the universal cover will contain the lift of embedded totally geodesic white surfaces, which will be a collection of disjoint totally geodesic planes that we color white in ${\mathbb{H}}^3$. It will also contain the lifts of embedded totally geodesic shaded 2-punctured disks bounded by crossing circles. These will also be totally geodesic planes in ${\mathbb{H}}^3$ and we call them \emph{shaded}. The white planes and shaded planes meet at right angles in ${\mathbb{H}}^3$. They cut out all the translates of the two ideal polyhedra of \refthm{FullyAugmentedDecomp} under the developing map.
Apply an isometry so that the boundary $\widetilde{T}$ of a neighborhood of the point at infinity in ${\mathbb{H}}^3$ projects under the covering map to a cusp torus $T$ of the fully augmented link. Because each link component meets both white and shaded surfaces, in the universal cover we will see vertical planes corresponding to white and shaded surfaces running into the point at infinity, meeting $\widetilde{T}$ in a rectangular lattice.
If we forget the fact that the edges of the lattice have lengths, but consider each rectangle on $\widetilde{T}$ as a topological object with two opposite shaded sides and two opposite white sides, then we obtain the following.
\begin{lemma}\label{Lem:FullyAugCusps}
Let $T$ be a cusp torus of a fully augmented link,\index{fully augmented link}\index{fully augmented link!cusp} with universal cover $\widetilde{T}$ tiled by rectangles coming from white and shaded surfaces. Let $s$ denote a step along a shaded surface between two white surfaces, and let $w$ denote a step along a white surface between two shaded ones. Then a fundamental domain for $T$ is given as follows.
\begin{itemize}
\item If $T$ comes from a crossing circle without a half-twist, then it has meridian $w$ and longitude $2s$.
\item If $T$ comes from a crossing circle with a half-twist, it has meridian $w\pm s$ (depending on the direction of the twist) and longitude $2s$.
\item If $T$ comes from a knot strand, i.e.\ a component that is not a crossing circle, then it has meridian $2s$ and longitude $nw+ks$, where $n$ is the number of twist regions met by the strand, with multiplicity, and $k$ is some integer.
\end{itemize}
\end{lemma}
\begin{proof}
From the construction of the polyhedral decomposition of $S^3-L$, each crossing circle gives rise to an ideal vertex of each polyhedron. Thus a fundamental domain for a crossing circle consists of two rectangles, given by neighborhoods of the corresponding 4-valent ideal vertices.
In the case that there are no half-twists, the shaded faces adjacent to the ideal vertex are glued to each other. Thus an arc running along a white face has its endpoints glued to form a meridian, so the meridian in this case is $w$. As for the longitude, a white face on one polyhedron is glued to a white face on the other. Thus a longitude steps along two shaded sides, one on one polyhedron and one on the other, before closing up. See \reffig{CrossingCircleCusp}.
\begin{figure}
\import{Figures/Ch07_TwistKnots/}{F7-13-CCCusp.eps_tex}
\caption{A fundamental region for a crossing circle.}
\label{Fig:CrossingCircleCusp}
\end{figure}
For a knot strand $K$ meeting no half-twists, there will be one ideal vertex of one polyhedron, hence one rectangular vertex neighborhood, for each portion of $K$ between adjacent crossing circles. These rectangles are glued end to end along shaded faces coming from the crossing disks to complete a longitude. Thus there will be $n$ such rectangles, and a longitude is given by $n$ steps along white faces, or $nw$. There will be $n$ identical rectangles glued end to end in the other polyhedron. These two blocks of $n$ rectangles will be glued along their white faces to form a $2\times n$ block, making up the fundamental domain of $K$. A meridian is given by two steps along shaded faces. See \reffig{KnotStrandCusp}.
\begin{figure}
\import{Figures/Ch07_TwistKnots/}{F7-14-KSCusp.eps_tex}
\caption{A fundamental region for a knot strand with no half twists.}
\label{Fig:KnotStrandCusp}
\end{figure}
If there are half-twists, then the gluing changes along shaded faces at half-twists. A shaded triangle on one polyhedron will be glued to the opposite shaded triangle on the other polyhedron. This introduces shearing into the fundamental domain, as in \reffig{AugShearing}.
\begin{figure}
\includegraphics{Figures/Ch07_TwistKnots/F7-15-Shear.eps}
\caption{Adding a half twist shifts the gluing along the shaded faces, shearing the fundamental domain.}
\label{Fig:AugShearing}
\end{figure}
Since the shearing only occurs as shaded faces are glued, it does not affect the longitude of a crossing circle or the meridian of a knot strand: these are both $2s$. However, it will adjust a meridian of a crossing circle by adding $\pm s$, and it will adjust the longitude of a knot strand by adding $\pm s$ for each half-twist. Thus the longitude of a knot strand becomes $nw + ks$ for some integer $k$.
\end{proof}
\Reflem{FullyAugCusps} is purely topological. We now wish to give geometric information on the rectangles forming the cusps. To do so, we need to find more explicit embedded cusp neighborhoods of the cusps of a fully augmented link. An embedded cusp neighborhood lifts to a disjoint collection of horoballs in the universal cover ${\mathbb{H}}^3$, one for each ideal vertex of each translate of the ideal polyhedra under the developing map. We will find an embedded cusp neighborhood by finding a collection of embedded horoballs about ideal vertices of the polyhedra forming a fully augmented link.
\begin{definition}\label{Def:MidpointIdealTriangle}
Let $T\subset {\mathbb{H}}^3$ be an ideal triangle.\index{ideal triangle} For each edge $e$ of $T$, define the \emph{midpoint}\index{midpoint} $m$ of $e$ to be the point such that the geodesic from $m$ to the opposite ideal vertex is perpendicular to $e$. Note this point is unique. See \reffig{Midpoint}.
For each edge $e$ of the ideal polyhedral decomposition of a fully augmented link, define its \emph{midpoint} to be the midpoint of that edge on one of the two ideal triangles\index{ideal triangle} adjacent to the edge. Note that since the two polyhedra are symmetric by a reflection in the white faces, both triangles adjacent to $e$ have the same midpoint, so the midpoint of each edge is well-defined.
\end{definition}
\begin{figure}
\import{Figures/Ch07_TwistKnots/}{F7-16-Midpt.eps_tex}
\caption{When an ideal triangle\index{ideal triangle} in ${\mathbb{H}}^2$ has vertices at $0$, $1$, and $\infty$, one of its midpoints\index{midpoint} will lie in ${\mathbb{H}}^2$ at height $1$.}
\label{Fig:Midpoint}
\end{figure}
\begin{lemma}\label{Lem:EmbeddedHoroballs}
Let $L$ be a hyperbolic fully augmented link,\index{fully augmented link}\index{fully augmented link!cusp} with decomposition into ideal polyhedra $P_1$ and $P_2$. For each ideal vertex of $P_i$, $i=1,2$, there is a unique horoball meeting the midpoint\index{midpoint} of each edge through that ideal vertex. The collection of all such horoballs, intersected with $P_1$ and $P_2$, glues to give an embedded cusp neighborhood of all the cusps of $S^3-L$.
\end{lemma}
\begin{proof}
Place $P_i$ in ${\mathbb{H}}^3$ so that the ideal vertex of interest lies at infinity, and so that one of the two shaded faces meeting the ideal vertex has its ideal vertices at $0$, $1$, and $\infty$ in ${\mathbb{H}}^3$. Note that the edges of that shaded face have midpoints\index{midpoint} at height $1$, as in \reffig{Midpoint}. Because the polyhedron is right-angled, the other shaded face will have ideal vertices at $ci$, $1+ci$, and $\infty$ for some $c\in {\mathbb{R}}$. Thus again the midpoints of these edges lie at height $1$. Then the horoball of height $1$ about infinity meets the midpoint of each edge through the ideal vertex.
The above discussion applies to any vertex of any polyhedron, and so this proves the first statement of the lemma. However, to show that these horoballs glue to give an embedded cusp neighborhood, we need to show that under the developing map, these horoballs have disjoint embedded interiors in ${\mathbb{H}}^3$.
Develop in a neighborhood of infinity. Since white faces are glued by reflection, the developing map takes $P_i$ to a reflected copy of $P_i$, where the reflection is through the vertical plane determined by a white face meeting infinity. Note that the reflection isometry takes points at height $1$ to points at height $1$. Similarly, because the polyhedra are right angled, developing by gluing shaded faces produces shaded faces of the same width, and thus midpoints\index{midpoint} are height $1$. Thus a horoball of height $1$ through infinity will meet the midpoints of all edges through infinity under the developing image.
We claim that this horoball cannot meet any white faces besides those that have an ideal vertex at infinity. Consider a white face that does not meet infinity. It lies in a hemisphere with boundary a circle $C$ on ${\mathbb{C}}$. Because the white surface is embedded in the fully augmented link complement, the lifts of this surface are disjointly embedded in ${\mathbb{H}}^3$. Thus the boundary circles of all lifts of white faces meet only at points of tangency corresponding to ideal vertices. Thus the circle $C$ meets the boundaries of the vertical planes containing white faces meeting $\infty$ only in points of tangency. The vertical planes have boundary on ${\mathbb{C}}$ a collection of parallel vertical lines, and these lines must be exactly distance $1$ apart. Then the diameter of $C$ can be at most $1$. It follows that the height of the hemisphere containing a white face that does not run through infinity must be at most $1/2$; therefore the horoball about infinity at height $1$ cannot meet it.
By an isometry, the previous argument applies to any ideal vertex. Thus we have proved that the horoballs through the midpoints\index{midpoint} of ideal vertices of $P_i$ only meet white faces that run through the center of the horoball at infinity.
Suppose, by way of contradiction, that under the developing map one of these horoballs $H$ centered at a point $p$ in ${\mathbb{C}}$ has diameter strictly greater than $1$, so that the collection of interiors of horoballs will not be embedded. Then $p$ must lie on the boundary of one of the vertical white planes, else $H$ intersects a vertical white plane in a compact region, giving a contradiction.
So $p$ is an ideal vertex of a polyhedron meeting a white face on a vertical plane $V$, and some other white plane $W$. The boundary $\partial W$ is a circle on ${\mathbb{C}}$ of diameter at most $1$, as we have seen above. The vertex $p$ also meets two shaded faces, and at least one of these, call it $S$, is not a vertical plane. Then $S\cap V$ is an ideal edge of a polyhedron, and it must have a midpoint.\index{midpoint} The midpoint is obtained by taking a perpendicular from a point on $\partial W$ to the semicircle $S\cap V$ on the vertical plane $V$. The set of all points obtained by dropping a perpendicular from $\partial W$ to $V$ is a circle of diameter equal to the diameter of $\partial W$ on the plane $V$; see \reffig{Diameter}. But $H$ has diameter greater than $1$, so this entire circle lies inside of $H$. This contradicts the fact that $H$ does not contain any of the midpoints of edges through $p$.
\begin{figure}
\includegraphics{Figures/Ch07_TwistKnots/F7-17-Diam.eps}
\caption{If $H$ is centered at a point $p$ on a vertical white plane, and has diameter greater than $1$, then it must contain the midpoint\index{midpoint} of an edge through $p$.}
\label{Fig:Diameter}
\end{figure}
Thus when we expand all horoballs to the midpoints\index{midpoint} of their adjacent edges, all those centered at points on ${\mathbb{C}}$ have diameter at most $1$, while that at infinity has height exactly $1$, so their interiors are embedded. Since the above discussion applies to any ideal vertex of any polyhedron, we conclude that under the developing map, interiors of all such horoballs are embedded, and thus the quotient under the covering map gives an embedded horoball neighborhood of each cusp of $S^3-L$.
\end{proof}
\begin{corollary}\label{Cor:AugSideLength}
Let $L$ be a hyperbolic fully augmented link.\index{fully augmented link}\index{fully augmented link!cusp} There exists an embedded horoball neighborhood of the cusps of $S^3-L$ such that, when measured in the induced Euclidean metric on the boundary of each cusp, the steps $s$ and $w$ of \reflem{FullyAugCusps} have lengths $\ell(s)=1$ and $\ell(w)\geq 1$.
\end{corollary}
\begin{proof}
If we place the ideal vertices of a shaded triangle at $0$, $1$, and $\infty$, then the midpoints\index{midpoint} of the edges from $0$ to $\infty$ and from $1$ to $\infty$ are of height $1$. Thus the horoball neighborhood through these points is at height $1$, and distance along the boundary of this horoball is just Euclidean distance. Since the shaded triangle meets this plane in a line segment from $0$ to $1$, the length of the step $s$ is $\ell(s)=1$.
To find $w$, we note that there will be horoballs of diameter $1$ centered at all the corners of the rectangle containing the step $w$. Two of these will be centered at $0$ and $1$, the other two at some $ci$ and $1+ci$ in ${\mathbb{C}}$, for some $c=\ell(w)$. Because the four horoballs are disjoint, we must have $\ell(w)\geq 1$.
\end{proof}
The above results have consequences for slope lengths of Dehn fillings.\index{Dehn filling}
\begin{theorem}\label{Thm:AugSlopeLengths}
Let $L$ be a hyperbolic fully augmented link.\index{fully augmented link}\index{fully augmented link!cusp} Let $C_1, \dots, C_k$ be crossing circles of $L$. Let $s_j$ be a slope on $N(C_j)$ such that Dehn filling\index{Dehn filling} along $s_j$ replaces the crossing circle $C_j$ by a twist region with $n_j$ crossings (with $n_j$ even if and only if $C_j$ is not adjacent to a half-twist). Then there is an embedded horoball neighborhood of all cusps of $S^3-L$ such that on the boundary of each cusp, the length of $s_j$ satisfies $\ell(s_j) \geq \sqrt{n_j^2+1}$.
\end{theorem}
\begin{proof}
The Dehn filling that replaces a crossing circle by a twist region, adding $2a_j$ crossings, has slope running over one meridian and $a_j$ longitudes.
If $n_j=2a_j$ is even, then $C_j$ meets no half-twist. Then \reflem{FullyAugCusps} implies that the slope $s_j$ will have the form $w+2a_j s$ or $w-2a_j s$. Because $w$ and $s$ run in orthogonal directions, and each has length at least one by \refcor{AugSideLength}, the length of $w\pm 2a_js$ is at least $\sqrt{1+(2a_j)^2} =\sqrt{1+n_j^2}$, as claimed.
If $n_j=2a_j+1$ is odd, then $C_j$ meets a half-twist, and \reflem{FullyAugCusps} implies that $s_j$ has the form $w\pm s +2a_j s$ or $w\pm s-2a_j s$, with the signs the same: $s_j = w\pm (2a_j+1)s$. Again because $w$ and $s$ are orthogonal, \refcor{AugSideLength} implies the length of $s_j$ is at least $\sqrt{1 + (2a_j+1)^2} = \sqrt{1+n_j^2}$.
\end{proof}
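For example, if Dehn filling a crossing circle produces a twist region with $n_j=6$ crossings, then \refthm{AugSlopeLengths} gives
\[ \ell(s_j) \;\geq\; \sqrt{n_j^2+1} \;=\; \sqrt{37} \;\approx\; 6.08, \]
so even a modest number of crossings in a twist region forces the corresponding filling slope to be long.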
\section{Exercises}
\begin{exercise}\label{Ex:Whitehead}
In this exercise, you investigate the two diagrams of the Whitehead link\index{Whitehead link} shown in \reffig{Whitehead}.
\begin{enumerate}
\item Show by a sequence of diagrams that the two links in that figure are isotopic.
\item Use SnapPy \cite{SnapPy} to show that the two link complements are isometric. The check using SnapPy is not mathematically rigorous, but in this case the link has a special property: it is \emph{arithmetic}\index{arithmetic link}. We will not define an arithmetic link here (we won't use the definition elsewhere), but a consequence of arithmeticity is that the program Snap \cite{Snap} can be used to give a mathematically rigorous certification that the two link complements shown are isometric.
\end{enumerate}
\end{exercise}
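For readers new to SnapPy, the following is a minimal sketch of the kind of session one might run for part (2). It assumes the Whitehead link complement is loaded by its standard table names, as indicated in the comments; a diagram drawn by hand can instead be entered through SnapPy's link editor.
\begin{verbatim}
import snappy

M = snappy.Manifold('L5a1')    # Whitehead link, Thistlethwaite table name
N = snappy.Manifold('5^2_1')   # Whitehead link, Rolfsen name
print(M.volume())              # approximately 3.6638...
print(M.is_isometric_to(N))    # True, via a numerical (non-rigorous) check
\end{verbatim}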
\begin{exercise}\label{Ex:WhiteheadOctahedron}
Use the methods of \refchap{Fig8Decomp} to prove that the complement of a Whitehead link\index{Whitehead link} can be decomposed into two ideal pyramids with a square base, which in turn can be glued to an ideal octahedron.
\end{exercise}
\begin{exercise}
Shown on the left of \reffig{2BorromeanRings} is the diagram of a link which we claim is the Borromean rings.\index{Borromean rings} Shown on the right is a more familiar diagram of the Borromean rings. There are several different ways to prove that the complements of these links are isometric hyperbolic manifolds.
\begin{enumerate}
\item Show by a sequence of diagram moves that the links are isotopic. Why does this suffice to show the complements are isometric?
\item Find a hyperbolic structure on each by hand, and show by hand that the manifolds are isometric. This will take some work, and sounds tedious. The exercise here is to think about why this will be tedious: list the steps involved.
\item Use computational tools. Use SnapPy \cite{SnapPy} to show they are isometric. As in \refex{Whitehead}, these links are arithmetic,\index{arithmetic link} so you can check using Snap \cite{Snap} that the link complements are isometric, which gives a mathematically rigorous certification.
\end{enumerate}
\begin{figure}
\includegraphics{Figures/Ch07_TwistKnots/F7-18-Borro.eps}
\caption{Two different diagrams of the Borromean rings.\index{Borromean rings}}
\label{Fig:2BorromeanRings}
\end{figure}
\end{exercise}
\begin{exercise}
Consider the $(p,q,r)$-pretzel link. As $p$, $q$, and $r$ go to infinity, find geometric limits of these pretzel links, and find a universal upper bound on their volumes.
\end{exercise}
\begin{exercise} (Topology of the solid torus)
A solid torus $V$ is homeomorphic to $S^1\times D^2$, where a specified homeomorphism $h\from S^1\times D^2 \to V$ is called a \emph{framing}\index{framing of solid torus}.
\begin{enumerate}
\item[(a)] A non-trivial simple closed curve in $\partial V$ is called a \emph{meridian}\index{meridian} if it bounds a disk in $V$. Prove that if $\mu$ is a meridian, then for some framing $h\from S^1\times D^2\to V$, $\mu = h(\{1\}\times \partial D^2)$.
\item[(b)] A non-trivial simple closed curve $\lambda$ in $\partial V$ is called a \emph{longitude}\index{longitude} if it represents a generator of $\pi_1(V)\cong {\mathbb{Z}}$. Prove that if $\lambda$ is a longitude, then for some framing $h\from S^1\times D^2\to V$, $\lambda = h(S^1\times\{1\})$.
\item[(c)] Prove that there are infinitely many ambient isotopy\index{ambient isotopy} classes of longitudes in a solid torus.
\end{enumerate}
\end{exercise}
\begin{exercise}
For the Whitehead link,\index{Whitehead link} find slopes of Dehn filling\index{Dehn filling} giving the twist knot $J(2,n)$ for $n$ even. Write them as $p\mu + q\lambda$, for relatively prime integers $p$ and $q$, where $\mu$ is a meridian and $\lambda$ is the longitude that bounds a disk in $S^3$. This is called the \emph{standard longitude}.\index{standard longitude!Whitehead link}
Repeat for $n$ odd, using the isometric link.
\end{exercise}
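As a computational aid (and not a solution to the exercise), candidate slopes can be tested in SnapPy. In the sketch below the slope $(1,2)$ is purely illustrative, and we assume that SnapPy's default peripheral basis on this census link agrees with the meridian and standard longitude described above.
\begin{verbatim}
import snappy

M = snappy.Manifold('L5a1')   # Whitehead link complement
M.dehn_fill((1, 2), 0)        # fill cusp 0 along 1*mu + 2*lambda (illustrative)
print(M.volume())             # volume of the resulting filled manifold
\end{verbatim}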
\begin{exercise}
Using the meridian and standard longitude as a basis for two boundary components of the exterior of the Borromean rings\index{Borromean rings} (i.e.\ take a longitude on each component that bounds a disk in $S^3$), find the slopes of the Dehn fillings\index{Dehn filling} of the Borromean rings that give $J(2k,2\ell)$.
Repeat for $J(2k,2\ell+1)$ and $J(2k+1,2\ell+1)$.
\end{exercise}
\begin{exercise}
The simplest fully augmented link\index{fully augmented link} has a single crossing circle; it comes from augmenting a knot with only one twist region. Show that when we apply the decomposition of this chapter to the fully augmented link with only one crossing circle, the result is not a decomposition into two ideal polyhedra. What does the decomposition give?
\end{exercise}
\begin{exercise}
Prove a result analogous to \refthm{AugSlopeLengths} for knot strand cusps. If $K_i$ is a knot strand cusp of a hyperbolic fully augmented link,\index{fully augmented link} and $s_i$ is a slope on $K_i$ that represents a nontrivial filling (i.e.\ $s_i$ is not a meridian), then the length of $s_i$ is at least $m_i$, where $m_i$ denotes the number of crossing disks that $K_i$ intersects, counted with multiplicity.
\end{exercise}
\begin{exercise}\label{Ex:FullyAug2Bridge}
In \refchap{TwoBridge} we will consider a class of links called \emph{two-bridge links}\index{two-bridge links} which have twist regions arranged in two rows, illustrated in \reffig{2BridgeDiagram}. Show that the complement of the fully augmented link\index{fully augmented link} coming from a 2-bridge\index{2-bridge knot or link} link can be obtained by gluing a collection of regular ideal octahedra. How many regular ideal octahedra?
\end{exercise}
\chapter{Essential Surfaces}\label{Chap:Essential}
\blfootnote{Jessica S. Purcell, Hyperbolic Knot Theory}
We have already encountered hyperbolic surfaces embedded in hyperbolic 3-manifolds, for example the 3-punctured spheres that bound ``shaded surfaces'' in fully augmented links.\index{fully augmented link} In this chapter, we explore surfaces more carefully. We will see that many results can be deduced about the geometry of 3-manifolds from the topology of the surfaces they contain.
\section{Incompressible surfaces}
This section gives many of the definitions needed to describe surfaces in 3-manifolds. These are topological in nature, and are standard in 3-manifold topology. Good references are \cite{hempel:3mflds}, \cite{Hatcher:3MfldNotes}, and \cite{Schultens:3Mfld}.
\begin{remark}\label{Rem:SmoothCategory}
Whenever we step from geometric arguments to topological ones, we typically need to take some care to discuss whether our objects will be merely continuous, or piecewise linear, or smooth, because the topology of manifolds, maps, etc.\ can behave very differently under different assumptions. We will assume throughout that our manifolds and maps are smooth, i.e.\ in the $C^\infty$ category,\index{smooth category} unless otherwise stated. This allows us to assume basic results on differentiable manifolds:
\begin{itemize}
\item Submanifolds have \emph{tubular neighborhoods}.\index{tubular neighborhood} That is, for any submanifold $S$ embedded in a manifold $M$ of codimension $k$, there exists an open neighborhood of $S$ embedded in $M$ diffeomorphic to $S\times D^k$. Thus a link $L$ embedded in $S^3$ lies in a tube $L\times D^2$ embedded in $S^3$ with $L$ at its core. A surface $S$ lies in a thickened surface $S\times D^1 = S\times I$. A tubular neighborhood is also sometimes called a \emph{regular neighborhood}\index{regular neighborhood}.
\item An isotopy of a submanifold can be extended to an isotopy of the ambient manifold, i.e.\ to an ambient isotopy.\index{ambient isotopy}
\item Submanifolds can be perturbed to intersect transversely.\index{transverse intersection}
\end{itemize}
More information on these results can be found in a standard text on differential topology.
\end{remark}
Our submanifolds will typically be surfaces, and throughout, these will almost always be \emph{properly} embedded in the ambient 3-manifold,\index{proper embedding, definition} where a \emph{proper} embedding is one in which the boundary of the surface is mapped by the embedding to the boundary of the 3-manifold: $S\cap \partial M=\partial S$ where the intersection is transverse. Additionally, we will assume throughout that the ambient 3-manifold is orientable; this is the case for knot complements in $S^3$, for example.
\begin{definition}\label{Def:Compressible}
Let $F$ be a connected surface properly embedded in a 3-manifold $M$. An embedded disk $D\subset M$ with $\partial D \subset F$ is said to be a \emph{compression disk}\index{compression disk} for $F$ if $\partial D$ does not bound a disk on $F$. A surface that admits a compression disk is \emph{compressible}.\index{compressible} If the surface contains no compression disk, and is not the sphere $S^2$, projective plane $P^2$, or disk $D^2$, then we say it is \emph{incompressible}.\index{incompressible}
\end{definition}
Compressible surfaces can be simplified, as follows. Suppose $S$ is a surface properly embedded in a 3-manifold $M$, and $D$ is a disk embedded in $M$ with $\partial D$ contained in $S$. Let $\nu(D)$ be a thickened $D$: i.e.\ $\nu(D)$ is homeomorphic to $D\times I$, with $(\partial D)\times I$ a regular neighborhood of $\partial D$ in $S$, and $\nu(D)$ embedded in a tubular neighborhood\index{tubular neighborhood} of $D$ in $M$.
We may form a new (possibly disconnected) properly embedded surface from $S$ and $D$ by the following procedure: remove $\partial \nu(D) \cap S$ from $S$ and attach the two parallel disks $\partial \nu(D) - (\partial \nu(D)\cap S)$ to $S$. See \reffig{Surgery}. (Technically, as a final step we need to smooth corners, to remain in the $C^\infty$ category. Such a smoothing is easily done, and we will assume it is done without comment for related constructions.)
\begin{figure}
\includegraphics{Figures/Ch08_Essential/F8-01-Surg.eps}
\caption{A compressible\index{compressible} surface can be surgered along a disk and replaced with a simpler surface or surfaces}
\label{Fig:Surgery}
\end{figure}
\begin{definition}\label{Def:Surgery}
The process of replacing $S$ by the surface obtained by attaching the two disks $\partial\nu(D)-(\partial\nu(D)\cap S)$ to $S-(\partial\nu(D)\cap S)$ along their boundary curves is called \emph{surgery} of $S$ along $D$. Usually, we use the verb to describe the procedure: \emph{surger}\index{surger} $S$ along $D$.
\end{definition}
An incompressible surface\index{incompressible} admits no compression disk. That means that if $D$ is a disk embedded in the ambient 3-manifold with $\partial D \subset S$, then $\partial D$ also bounds a disk $E\subset S$. Thus in this case, if we surger $S$ along $D$ we obtain two surfaces: one diffeomorphic to $S$, and one diffeomorphic to a 2-sphere ($D\cup E$). Often in our applications the 2-sphere bounds a ball and contracts to a point. Thus surgery on an incompressible surface does nothing to simplify the surface.
\begin{example}\label{Example:Satellite}
As an example of an incompressible\index{incompressible} surface, consider the torus $T$ marked by dashed lines embedded in the knot complement shown in \reffig{Satellite}. On its outside, this torus bounds a manifold homeomorphic to the figure-8 knot complement.
We claim there cannot be a compression disk\index{compression disk} on the outside, in the complement of the figure-8 knot. For suppose $D$ is such a disk. Surger the torus $T$ along $D$. The result is a sphere $S$ embedded in $S^3$. Any sphere in $S^3$ bounds two balls, one on either side. One ball bounded by $S$ must contain the figure-8 knot and the compression disk\index{compression disk} $D$. The other is a ball contained in the figure-8 knot complement, disjoint from $D\times I$. If we undo the surgery along $D$, we glue this ball along two disks, $D\times\{0\}$ and $D\times \{1\}$, yielding a solid torus. This implies that the figure-8 knot complement is homeomorphic to a solid torus, which is the unknot complement. But this is a contradiction, for example because the figure-8 knot is hyperbolic.
\begin{figure}
\includegraphics{Figures/Ch08_Essential/F8-02-Satell.eps}
\caption{The torus shown (dotted lines) bounds the figure-8 knot complement on one side and the Whitehead link complement on the other, hence is incompressible.\index{incompressible}}
\label{Fig:Satellite}
\end{figure}
So if the torus $T$ of \reffig{Satellite} is compressible,\index{compressible} then a compression disk\index{compression disk} must lie on the inside of the torus. But the inside of the torus is homeomorphic to the complement of a knot in a solid torus in $S^3$. In fact, the inside is homeomorphic to the complement of the Whitehead link: the complement in the solid torus of the knot shown in \reffig{TKWProof}. Again if there were an embedded compression disk\index{compression disk} for the torus on this side, surgering would give a sphere embedded in the Whitehead link complement, bounding two balls in $S^3$. A similar argument to that above would imply that one of the components of the Whitehead link lies in a ball in a solid torus in the link complement. But then the two link components are unlinked, which is a contradiction: the Whitehead link is nontrivially linked.
Thus the torus in \reffig{Satellite} is incompressible.\index{incompressible}
\end{example}
\begin{definition}\label{Def:BoundaryParallel}
An embedded surface $F$ in a 3-manifold $M$ is said to be \emph{boundary parallel}\index{boundary parallel} if it can be isotoped into the boundary of $M$.
\end{definition}
\begin{definition}\label{Def:Satellite}
A \emph{satellite knot}\index{satellite knot} is a knot whose complement contains an incompressible\index{incompressible} torus\index{incompressible torus} that is not boundary parallel.\index{boundary parallel}
Equivalently, a satellite knot can be formed as follows. Start with a knot $K'$ in a solid torus $V$, with $K'$ chosen so that it is not contained in a ball in $V$, and $K'$ not isotopic to the core of the solid torus. Let $K''$ be a nontrivial knot in $S^3$. Form the satellite knot (complement) by removing a tubular neighborhood\index{tubular neighborhood} of $K''$ and replacing it with $V-K'$ in a trivial way (that is, attach $V$ so that the meridian curve of $K''$ still bounds a disk in $V$). The knot $K''$ is called the \emph{companion knot}.\index{companion knot} The satellite knot lies in a regular neighborhood of the companion.
\end{definition}
For orientable surfaces in orientable 3-manifolds, incompressibility is equivalent to the fundamental group of the surface injecting into the fundamental group of the 3-manifold.
\begin{lemma}\label{Lem:IncompressVsPi1}
An orientable surface in an orientable 3-manifold is incompressible\index{incompressible} if and only if it is $\pi_1$-injective,\index{$\pi_1$-injective} i.e.\ if the fundamental group of the surface injects into the fundamental group of the 3-manifold under the homomorphism induced by inclusion.
A nonorientable surface $S$ is $\pi_1$-injective if and only if the boundary of a regular neighborhood of $S$ is an orientable incompressible surface.
\end{lemma}
\begin{proof}
\Refex{ProveIncompressVsPi1} and \refex{NonorIncompressVsPi1}.
\end{proof}
There is an additional notion of incompressibility for properly embedded surfaces with boundary.
\begin{definition}\label{Def:BoundaryIncompressible}
Let $F$ be a surface with boundary properly embedded in a 3-manifold $M$. A \emph{boundary compression disk}\index{boundary compression disk} for $F$ is a disk $D$ with $\partial D$ consisting of two arcs, $\partial D=\alpha\cup\beta$, such that $\alpha=D\cap F\subset F$ and $\beta=D\cap \partial M\subset \partial M$, and such that there is no arc $\gamma$ of $\partial F$ such that $\gamma\cup\alpha$ bounds a disk on $F$.
If $F$ admits a boundary compression disk it is \emph{boundary compressible}\index{boundary compressible}, otherwise it is \emph{boundary incompressible}\index{boundary incompressible}.
\end{definition}
We will give an example of a class of knots that always contains a boundary incompressible surface. First we define the class of knots.
\begin{definition}\label{Def:TorusKnot}
Let $(p,q)$ be relatively prime integers. Then the pair $(p,q)\in{\mathbb{Z}}\times{\mathbb{Z}}\cong H_1(T^2;{\mathbb{Z}})$ defines a nontrivial simple closed curve on a torus. View the torus $T$ as the boundary of a neighborhood of an unknot in $S^3$. Notice that there is one compression disk\index{compression disk} to the inside of $T$; its boundary is a meridian $m$ for the unknot. There is also a compression disk to the outside of $T$; its boundary is a longitude $\ell$ of the unknot. We choose a basis so that the curve $(p,q)$ has minimal representative intersecting $m$ a total of $|p|$ times, and $\ell$ a total of $|q|$ times, with the signs of $p,q$ determining the direction (right or left handed screw motion).
A \emph{torus knot}\index{torus knot}, or \emph{$(p,q)$-torus knot}, is the knot in $S^3$ given by the $(p,q)$ curve on the unknotted torus $T$ in $S^3$. It is frequently denoted by $T(p,q)$. \Reffig{TorusKnot} shows an example.
\end{definition}
\begin{figure}
\includegraphics{Figures/Ch08_Essential/F8-03-TorusK.eps}
\caption{The torus knot $T(2,3)$}
\label{Fig:TorusKnot}
\end{figure}
\begin{example}\label{Example:TorusKnot}
Suppose $T(p,q)=K$ is a torus knot with $|p|,|q| \geq 2$. The surface $T-K$ is an annulus; we will denote the annulus by $F$. We show that $F$ is boundary incompressible\index{boundary incompressible} in $S^3-K$.
For suppose there is a boundary compression disk\index{boundary compression disk} $D$ for $F$. Since $F\cup K$ is the torus $T$, the disk $D$ lies to one side of $T$. Because $T$ is an unknotted torus in $S^3$, the disk $D$ is either (freely) homotopic to a disk with boundary $m$ or to one with boundary $\ell$. By definition, $T(p,q)$ has intersection number $|p|$ with $m$ and $|q|$ with $\ell$. On the other hand, a boundary compression disk intersects $F$ in exactly one arc, and $K$ in exactly one arc. This is impossible when $|p|,|q|\geq 2$.
\end{example}
We now fold all our definitions into one.
\begin{definition}\label{Def:EssentialSurface}
A surface $F$ properly embedded in a 3-manifold $M$ is \emph{essential}\index{essential} if one of the following holds.
\begin{enumerate}
\item $F$ is a 2-sphere that does not bound a 3-ball.
\item $F$ is a disk and either $\partial F \subset \partial M$ does not bound a disk on $\partial M$, or $\partial F\subset \partial M$ does bound a disk $E$ on $\partial M$, but $E\cup F$ does not bound a 3-ball.
\item $F$ is not a disk or sphere, and is incompressible,\index{incompressible} boundary incompressible,\index{boundary incompressible} and not boundary parallel.\index{boundary parallel}
\end{enumerate}
\end{definition}
\begin{definition}\label{Def:NoBadEssential}
A 3-manifold is said to be:
\begin{itemize}
\item \emph{irreducible}\index{irreducible} if it contains no essential\index{essential} 2-sphere,
\item \emph{boundary irreducible}\index{boundary irreducible} if it contains no essential disk,
\item \emph{atoroidal}\index{atoroidal} if it contains no essential torus, and
\item \emph{anannular}\index{anannular} if it contains no essential annulus.
\end{itemize}
\end{definition}
\begin{theorem}\label{Thm:Atoroidal}
A manifold that contains an embedded essential\index{essential} torus cannot be hyperbolic.
\end{theorem}
The proof of this theorem was part of an exercise in \refchap{CompletionDehnFilling}, but we will go through the argument here.
\begin{proof}
Suppose $M$ is hyperbolic and contains an essential torus. By \reflem{IncompressVsPi1}, the fundamental group of $M$ contains a ${\mathbb{Z}}\times{\mathbb{Z}}$ subgroup. \Refcor{ZxZSubgroup} implies that the subgroup is generated by two parabolic elements\index{parabolic} fixing the same point on the boundary of ${\mathbb{H}}^3$ at infinity. But then the thick--thin decomposition (\refthm{ThinPart}, structure of the thin part) implies that the torus is parallel to a cusp torus, hence it is boundary parallel.\index{boundary parallel} This contradicts the definition of essential.
\end{proof}
\begin{corollary}\label{Cor:Satellite}
A satellite knot\index{satellite knot} complement does not admit a hyperbolic structure. \qed
\end{corollary}
\begin{theorem}\label{Thm:Anannular}
Suppose $M$ is a 3-manifold with torus boundary components whose interior has a complete finite volume hyperbolic metric. Then $M$ cannot contain an essential\index{essential} annulus.\index{anannular}
\end{theorem}
\begin{proof}
Suppose not; that is, suppose $M$ is hyperbolic and $A$ is an essential annulus in $M$. Consider the core curve $\gamma$ of $A$. Note $\gamma$ is isotopic to each component of $\partial A$, hence $\gamma$ is isotopic into $\partial M$. Under the hyperbolic structure on the interior of $M$, torus boundary components become cusps. Thus in the hyperbolic structure, $\gamma$ is isotopic into a cusp, so is isotopic to closed curves of arbitrarily small length. Then the loop $\gamma$ corresponds to a covering transformation of ${\mathbb{H}}^3\to M$ that is parabolic.\index{parabolic} But then both components of $\partial A$ correspond to the same parabolic element. Thus the annulus has boundary components given by the same curve in the same cusp. This is possible only if the annulus is boundary parallel,\index{boundary parallel} contradicting the assumption that $A$ is essential.
\end{proof}
\begin{corollary}\label{Cor:TorusKnot}
A torus knot\index{torus knot} complement does not admit a hyperbolic structure.
\end{corollary}
\begin{proof}
Consider the curve $T(p,q)$ embedded on an unknotted torus $T\subset S^3$. If $(p,q)\in\{(\pm 1,q)\mid q\in{\mathbb{Z}}\}\cup \{(p,\pm 1)\mid p\in{\mathbb{Z}}\}$ then $T(p,q)$ is the unknot in $S^3$, which does not have hyperbolic complement. Otherwise
the annulus $T-T(p,q)$ is incompressible\index{incompressible} (\refex{TorusKnot}). We showed in \refexamp{TorusKnot} that it is boundary incompressible.\index{boundary incompressible} A similar argument shows it cannot be boundary parallel.\index{boundary parallel} So it is essential, and \refthm{Anannular} implies the complement cannot be hyperbolic. Thus a torus knot is not hyperbolic.
\end{proof}
Theorems~\ref{Thm:Atoroidal} and~\ref{Thm:Anannular} give surfaces whose presence precludes a knot complement from being hyperbolic. In fact, an even stronger result is known.
\begin{theorem}[Thurston, Hyperbolization]\label{Thm:SfcesHyperbolic}
A knot complement admits a complete hyperbolic structure if and only if the knot is not a satellite knot or a torus knot.
More generally, a compact 3-manifold with nonempty torus boundary has interior admitting a complete hyperbolic structure if and only if it is irreducible,\index{irreducible} boundary irreducible,\index{boundary irreducible} atoroidal,\index{atoroidal} and anannular.\index{anannular} \qed
\end{theorem}
\Refthm{SfcesHyperbolic} follows from the geometrization theorem for Haken manifolds, which is a very deep result; see \cite{thurston:bulletin}. The proof requires a book of its own (e.g.\ \cite{kapovich}), and we will not include it here. However we will use the theorem to show many knot and link complements are hyperbolic.
Note that \refthm{SfcesHyperbolic} turns the geometric problem of determining whether a manifold admits a hyperbolic structure into a topological problem of finding surfaces in 3-manifolds, or proving such surfaces cannot exist. It has been applied to show many knots and 3-manifolds admit a hyperbolic structure, although unfortunately it does not give much information on such a structure, beyond the fact that it exists. We will see some results along these lines in the rest of this chapter.
\section{Torus decomposition, Seifert fibering, and geometrization}
While we are most interested in hyperbolic spaces and hyperbolic geometry, we will also need to identify non-hyperbolic spaces as we encounter them in knot theory. \Refthm{SfcesHyperbolic} gives us a characterization of hyperbolic knots, but we also have the tools now to study non-hyperbolic knots. This section gives a brief overview of the terminology and results that we will need.
\begin{definition}\label{Def:SeifertFiberedSolidTorus}
Let $p$ and $q$ be relatively prime integers.
A \emph{Seifert fibered solid torus}\index{Seifert fibered solid torus}\index{solid torus!Seifert fibered} of type $(p,q)$ is a solid torus $S^1\times D^2$ constructed as the union of disjoint circles, as follows. Begin with a solid cylinder $D^2\times [0,1]$, fibered by intervals $\{x\}\times [0,1]$. Glue the disk $D^2\times\{0\}$ to $D^2\times\{1\}$ by a $2\pi p/q$ rotation. The fiber $\{0\}\times [0,1]$ in $D^2\times[0,1]$ becomes a circle; this is called the \emph{exceptional fiber}\index{exceptional fiber}\index{Seifert fibered solid torus!exceptional fiber}. Every other fiber $\{x\}\times [0,1]$ is glued to $q-1$ other such intervals, so that $q$ of these segments together form a circle. These circles are called \emph{normal fibers}\index{normal fiber}\index{Seifert fibered solid torus!normal fiber}. If $q=1$, the Seifert fibered solid torus is called a \emph{regularly fibered solid torus}.\index{regularly fibered solid torus}\index{Seifert fibered solid torus!regularly fibered solid torus}
\end{definition}
\begin{definition}\label{Def:SeifertFiberedSpace}
A \emph{Seifert fibered space}\index{Seifert fibered space} is an orientable 3-manifold $M$ that is the union of pairwise disjoint circles, called \emph{fibers}\index{fiber}, such that every fiber has a neighborhood diffeomorphic to a Seifert fibered solid torus via a diffeomorphism preserving fibers.
\end{definition}
\begin{example}\label{Example:S3SeifertFibered}
The 3-sphere $S^3$ is the union of two solid tori $V$ and $W$. For relatively prime integers $(p,q)$, give $V$ the fibering of a Seifert fibered solid torus of type $(p,q)$, and give $W$ the fibering of a Seifert fibered solid torus of type $(q,p)$. Then when we glue $\partial V$ to $\partial W$ to form $S^3$, the fibers on the boundaries are identified. Thus $S^3$ is Seifert fibered.
\end{example}
\begin{example}[Torus knot complements]\label{Example:TorusKnotSeifertFibered}
The complement of a $(p,q)$-torus knot (\refdef{TorusKnot}) is Seifert fibered, as follows. Take the Seifert fibering of $S^3$ of the previous example. A regular fiber on $\partial V$ is a $(p,q)$-torus knot. Thus when we remove a fibered solid torus neighborhood of this regular fiber, the result is a Seifert fibered space homeomorphic to the exterior of a torus knot, $S^3-N(T(p,q))$.
\end{example}
A Seifert fibered space is never hyperbolic. The following theorem follows from work of many people, including work of Casson and Jungreis \cite{CassonJungreis} and Gabai \cite{Gabai:Convergence}.
\begin{theorem}[Characterization of Seifert fibered spaces]\label{Thm:SeifertFibered}
A compact orientable irreducible\index{irreducible} 3-manifold $M$ with infinite fundamental group is a Seifert fibered space if and only if $\pi_1(M)$ contains a normal infinite cyclic subgroup. \qed
\end{theorem}
For further information on Seifert fibered spaces, see \cite{Schultens:3Mfld}, \cite{Hatcher:3MfldNotes}, or \cite{scott:geometries}.
The following theorem applies to manifolds that admit an embedded essential torus. It was proved by Jaco, Shalen \cite{jaco-shalen}, and Johannson \cite{johannson}.
\begin{theorem}[JSJ decomposition]\label{Thm:JSJ}
For any compact irreducible,\index{irreducible} boundary irreducible\index{boundary irreducible} 3-manifold $M$, there exists a (possibly empty) finite collection $\mathcal{T}$ of disjoint essential\index{essential} tori such that each component of the 3-manifold obtained by cutting $M$ along $\mathcal{T}$ is either atoroidal\index{atoroidal} or Seifert fibered. Moreover, a minimal such collection $\mathcal{T}$ is unique up to isotopy. \qed
\end{theorem}
\begin{definition}\label{Def:JSJ}
The minimal collection of tori $\mathcal{T}$ as in \refthm{JSJ} is called the \emph{JSJ-decomposition}\index{JSJ-decomposition} of $M$, or sometimes the \emph{torus decomposition}\index{torus decomposition} of $M$. The union of the Seifert fibered pieces of $M$ cut along $\mathcal{T}$ is called the \emph{characteristic submanifold}\index{characteristic submanifold} of $M$.
\end{definition}
In \refexamp{Satellite}, the torus decomposition consists of the single essential\index{essential} torus shown in \reffig{Satellite}. Cutting along it splits the 3-manifold into two hyperbolic pieces, hence the characteristic submanifold is empty.
Thurston's hyperbolization theorem implies that if the JSJ-decomposition of $M$ is nontrivial, then the atoroidal components of $M$ cut along $\mathcal{T}$ are hyperbolic. More generally, even in the closed case we now know the following theorem.
\begin{theorem}[Geometrization of closed 3-manifolds] \label{Thm:Geometrization}\index{Geometrization theorem for closed 3-manifolds}
Let $M$ be a closed, orientable, irreducible\index{irreducible} 3-manifold.
\begin{enumerate}
\item If $\pi_1(M)$ is finite, then $M$ is spherical; i.e.\ $M$ is homeomorphic to $S^3/\Gamma$ where $\Gamma$ is a finite subgroup of $O(4)$ acting on $S^3$ without fixed points.
\item If $\pi_1(M)$ is infinite and contains a ${\mathbb{Z}}\times{\mathbb{Z}}$ subgroup, then $M$ is either Seifert fibered or contains an incompressible\index{incompressible} torus\index{incompressible torus} (so is not hyperbolic).
\item If $\pi_1(M)$ is infinite and contains no ${\mathbb{Z}}\times{\mathbb{Z}}$ subgroup, then $M$ is hyperbolic. \qed
\end{enumerate}
\end{theorem}
The second part of the theorem follows from work of Casson and Jungreis \cite{CassonJungreis} and Gabai \cite{Gabai:Convergence}. The first and third parts were proved by Perelman \cite{perelman02}, \cite{perelman03}.
\section{Normal surfaces, angled polyhedra, and hyperbolicity}
In this section, we will use \refthm{SfcesHyperbolic} to prove that many manifolds are hyperbolic. We will be considering 3-manifolds
that admit an ideal polyhedral decomposition, for example a decomposition into ideal tetrahedra, but also more general ideal polyhedra as in \refchap{Fig8Decomp} and \refchap{TwistKnots}. We will see that we need only consider surfaces that intersect the polyhedra in simple ways: in disks with well-behaved boundaries.
\subsection{Normal surfaces}
To describe nice positions of embedded surfaces in polyhedra, we give the following definition.
\begin{definition}\label{Def:NormalDisk}
Let $P$ be an ideal polyhedron. Truncate the ideal vertices of $P$, so they become \emph{boundary faces}, and denote the truncated polyhedron by $\overline{P}$. The edges between (regular) faces and boundary faces are called \emph{boundary edges}. See \reffig{NormalDisk}, left.
Let $D$ be a disk embedded in $\overline{P}$ with $\partial D \subset \partial \overline{P}$. We say that $D$ is \emph{normal}\index{normal!disk} if it satisfies the following conditions.
\begin{enumerate}
\item $\partial D$ meets the faces, boundary faces, edges, and boundary edges of $\overline{P}$ transversely.
\item $\partial D$ does not lie entirely on a single face or boundary face of $\overline{P}$.
\item Any arc of intersection of $\partial D$ with a face of $\overline{P}$ does not have both endpoints on the same edge, or on the same boundary edge, or on an adjacent edge and boundary edge. Similarly, any arc of intersection of $\partial D$ with a boundary face does not have both endpoints on the same boundary edge.
\item $\partial D$ meets any edge at most once.
\item $\partial D$ meets any boundary face at most once.
\end{enumerate}
\end{definition}
\Reffig{NormalDisk} illustrates some of these conditions.
\begin{figure}
\import{Figures/Ch08_Essential/}{F8-04-Normal.eps_tex}
\caption{Left: truncating an ideal polyhedron. Boundary faces are shaded, boundary edges dashed, (regular) faces are white, and (regular) edges are solid black. Right: examples of curves that cannot be the boundary of a normal disk, along with the number of the property of \refdef{NormalDisk} that they violate}
\label{Fig:NormalDisk}
\end{figure}
\begin{definition}\label{Def:NormalForm}
A surface is in \emph{normal form},\index{normal form}\index{normal} with respect to a polyhedral decomposition, or is \emph{normal}, for short, if it intersects the (truncated) polyhedra in a collection of normal disks.
\end{definition}
Normal surfaces in 3-manifolds are well-studied objects, and the following theorem is classical, dating back to work of Kneser in the late 1920s \cite{kneser}, and Haken and Schubert in the 1960s \cite{Haken,Schubert:NormalSfces}, and many others since then.
\begin{theorem}\label{Thm:NormalForm}
Suppose $M$ admits an ideal polyhedral decomposition.
If $M$ contains an essential\index{essential} 2-sphere, then it contains one in normal form.\index{normal}
If $M$ is irreducible\index{irreducible} and $M$ contains an essential disk, then it contains one in normal form.
If $M$ is irreducible\index{irreducible} and boundary irreducible,\index{boundary irreducible} and contains an essential surface, then that surface can be isotoped in $M$ to meet the polyhedra in normal form.\index{normal}
\end{theorem}
\begin{proof}
Let $S$ be an essential surface in $M$. We may isotope $S$ so that it intersects faces, boundary faces, edges, and boundary edges of the truncated polyhedra of $M$ transversely. Let $f$ denote the number of times $S$ meets a face or boundary face, and let $e$ denote the number of times $S$ meets an edge or boundary edge. The pair $(f,e)$ is called the \emph{complexity}\index{complexity of essential surface} of $S$ in $M$, and we order it lexicographically. We will adjust $S$ to remove intersections with the polyhedra that violate normality while reducing its complexity. Since the complexity is finite, it follows that a finite number of adjustments give the result.
First we claim we can adjust $S$ so that it meets the polyhedra only in disks, lowering the complexity. If $S$ is a sphere, this is done by replacing $S$. If $M$ is irreducible,\index{irreducible} then this is done by isotopy of $S$. The argument is similar to arguments below and so we leave it as \refex{EssentialSfcesMeetPolyinDisks}.
We now assume that the components of intersection of $S$ with a truncated polyhedron $P$ are all disks. Suppose that $\partial(S\cap P)$ contains a simple closed curve of intersection contained entirely in a face or boundary face. Then there must be an innermost such curve $\gamma$, and $\gamma$ bounds a disk $E$ inside that face disjoint from $S$.
If $S$ is a 2-sphere, then surger\index{surger} $S$ along $E$, obtaining two 2-spheres, $S'$ and $S''$, which we push slightly to be disjoint from a neighborhood of $E$. Thus $S'$ and $S''$ have strictly fewer intersections with $\partial P$. Either $S'$ or $S''$ must still be essential, else $S$ could not be essential, so say $S''$ is essential. Replace $S$ with $S''$. Then $S''$ has strictly smaller complexity than $S$. Repeating a finite number of times, we may assume $S$ does not meet faces or boundary faces of $P$ in closed curves in this case.
If $M$ is irreducible\index{irreducible} and $S$ is a disk, then as before surger along $E$. This gives two surfaces, a disk $S'$ and a sphere $S''$, both of strictly smaller complexity than $S$. Because $M$ is irreducible, the sphere $S''$ bounds a ball, and we may isotope $S$ through that ball to the disk $S'$. Repeat for each closed curve of intersection of $S$ with faces, removing all such intersections.
If $M$ is irreducible\index{irreducible} and boundary irreducible,\index{boundary irreducible} then $S$ is not a sphere or disk. Because $S$ is essential, $\gamma$ bounds a disk $E'$ in $S$. Then the sphere $E\cup E'$ must bound a ball by irreducibility. Isotope $S$ through this ball, removing the intersection $\gamma$ and reducing complexity. Repeating finitely many times eliminates all closed curves of intersection of $S$ with faces of $P$.
Now suppose an arc of intersection of $S$ with a face or boundary face of $P$ has both its endpoints on the same edge or boundary edge, or on an edge and adjacent boundary edge. Then there must be an outermost such arc $\alpha$, bounding a disk $E$ on that face or boundary face, with $E$ disjoint from $S$. In the case that the face is a boundary face, or the endpoints of $\alpha$ do not both lie on a boundary edge, we may slide $S$ through a neighborhood of $E$ to isotope across the edge, decreasing complexity.
When the arc $\alpha$ lies on a regular face, and both endpoints of $\alpha$ lie on boundary edges, then we have to take more care since we cannot isotope the surface $S$ past the boundary edge without changing its topology. In this case, we know $S$ is a surface with boundary, so not a sphere, so we are in the case that $M$ is irreducible.\index{irreducible} If $S$ is a disk, surger along $E$ and push off the face slightly, obtaining two disks with lower complexity. One of them must be essential, since $S$ is essential. Replace $S$ with this essential disk.
If $S$ is not a disk, then $M$ is irreducible\index{irreducible} and boundary irreducible.\index{boundary irreducible} Since $S$ is essential, $E$ is not a boundary compression disk\index{boundary compression disk} for $S$, thus $\alpha$, together with an arc of $\partial S$, bounds a disk $E'$ on $S$. Then $E\cup E'$ is a disk with boundary on $\partial M$. Because $M$ is boundary irreducible, it must be parallel to a disk $E''$ on $\partial M$. Then $E\cup E'\cup E''$ is a sphere, so bounds a ball, and we may isotope $S$ through this ball to remove the intersection $\alpha$, strictly decreasing complexity.
Finally, we need to show for each polyhedron $P$, any disk of $S\cap P$ has boundary meeting each edge of $P$ at most once, and meeting each boundary face of $P$ at most once. We leave these as exercises~\ref{Ex:NormalDisk} and~\ref{Ex:NormalBdry}.
\end{proof}
\subsection{Angle structures and combinatorial area}
We are interested in 3-manifolds that are hyperbolic. If a 3-manifold admits a decomposition into ideal tetrahedra,
recall from \refthm{Gluing} (edge gluing equations) and \refdef{CompletenessEquations} (completeness equations) that a complete hyperbolic structure satisfies a system of nonlinear equations. If we take the log of the edge gluing equations, the complex product becomes a sum of real and imaginary parts:
\[ \log\left(\prod z(e_j)\right) = \sum ( \log|z(e_j)| + i\,{\mathrm{Arg}}\, z(e_j) ) = 2\pi\,i. \]
The imaginary parts encode relations on the dihedral angles of the tetrahedra. If we ignore the real part, then solving the imaginary part amounts to finding dihedral angles that satisfy a system of linear equations. Thus by considering dihedral angles alone, we reduce a complicated nonlinear problem to a linear problem. This can significantly simplify computations.
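Concretely, separating the equation above into its real and imaginary parts, the edge gluing equation around an edge class becomes the pair
\[ \sum_j \log|z(e_j)| = 0 \qquad\text{and}\qquad \sum_j {\mathrm{Arg}}\, z(e_j) = 2\pi, \]
and the second equation is linear in the dihedral angles ${\mathrm{Arg}}\, z(e_j)$.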
\begin{definition}\label{Def:AngleStructures}
An \emph{angle structure}\index{angle structure} on an ideal triangulation $T$ of a manifold $M$ is a collection of (interior) dihedral angles, one for each edge of each tetrahedron, satisfying the following conditions.
\begin{enumerate}
\item[(0)]\label{Itm:OppEdges} Opposite edges of the tetrahedron have the same angle.
\item[(1)]\label{Itm:Range} Dihedral angles lie in $(0,\pi)$.
\item[(2)]\label{Itm:VertexSum} The sum of angles around any ideal vertex of any tetrahedron is $\pi$.
\item[(3)]\label{Itm:EdgeSum} The sum of angles around any edge class of $M$ is $2\pi$.
\end{enumerate}
The set of all angle structures for triangulation $T$ is denoted by $\mathcal{A}(T)$.
\end{definition}
Conditions~(0) and~(1) are required for nonsingular tetrahedra. Condition~(2) ensures that a triangular cross-section of any ideal vertex of any tetrahedron is actually a Euclidean triangle. Finally, condition~(3) is the imaginary part of the edge gluing equations. Note we have not included the completeness equations. For many results, we don't need them!
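For example, the figure-8 knot complement decomposes into two ideal tetrahedra (\refchap{Fig8Decomp}), with two edge classes, and each edge class has six tetrahedron edges identified to it. Assigning every dihedral angle the value $\pi/3$ gives an angle structure: conditions~(0), (1), and~(2) hold immediately (each ideal vertex of each tetrahedron has angle sum $3\cdot\pi/3 = \pi$), and condition~(3) holds since the angles around each edge class sum to
\[ 6\cdot\frac{\pi}{3} = 2\pi. \]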
An angle structure\index{angle structure} on an ideal tetrahedron uniquely determines the shape of that tetrahedron. If it has assigned dihedral angles $\alpha$, $\beta$, $\gamma$ in clockwise order, then there is a unique hyperbolic tetrahedron with those dihedral angles, and its edge invariant corresponding to the edge with angle $\alpha$ can be shown to be
\begin{equation}\label{Eqn:AngleEdgeInvariant}
z(\alpha) = \frac{\sin\gamma}{\sin\beta}e^{i\alpha}.
\end{equation}
Thus if a triangulation has an angle structure (not all do), we can think of the manifold as being built of hyperbolic tetrahedra.
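For instance, if a tetrahedron is assigned all three dihedral angles equal to $\pi/3$, as in the angle structure on the figure-8 knot complement described above, the formula above gives
\[ z(\alpha) = \frac{\sin(\pi/3)}{\sin(\pi/3)}\,e^{i\pi/3} = e^{i\pi/3} = \frac{1}{2}+\frac{\sqrt{3}}{2}i, \]
the edge invariant of the regular ideal tetrahedron.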
Note that because we have discarded the completeness equations, an angle structure\index{angle structure} typically will not give a complete hyperbolic structure. In fact, because we have discarded the nonlinear part of the edge gluing equations, an angle structure typically won't even give a hyperbolic structure. There will likely be shearing singularities\index{shearing} around each edge, as in \reffig{Shearing}.
\begin{figure}
\includegraphics{Figures/Ch08_Essential/F8-05-Shear.eps}
\caption{Angle structures typically have shearing singularities.\index{angle structure}}
\label{Fig:Shearing}
\end{figure}
Even so, much useful information can be extracted from angle structures,\index{angle structure} which we will see in later chapters. In this chapter, we will show that if an angle structure exists on a triangulation of a manifold, then the manifold admits (some) hyperbolic structure.
The idea of an angled triangulation can be generalized to ideal polyhedra as well. First, we need to assign to each edge a dihedral angle, which is a number lying in the range $(0,\pi)$. Once that has been done, we can measure a combinatorial area of normal disks embedded in the polyhedra, as follows.
\begin{definition}\label{Def:CombinatorialArea}
Let $D$ be a normal disk\index{normal} in a (truncated) ideal polyhedral decomposition of $M$, such that each ideal edge of $M$ has been assigned an interior dihedral angle in the range $(0,\pi)$. Let $\alpha_1, \dots, \alpha_n$ be the angles assigned to the ideal edges met by $\partial D$. Then the \emph{combinatorial area} of $D$\index{combinatorial area}\index{combinatorial area!disk} is defined as:
\[ a(D) = \sum_{i=1}^n (\pi-\alpha_i) - 2\pi + \pi|\partial D \cap \partial M|. \]
Here $|\partial D \cap \partial M|$ indicates the number of components of intersection of $\partial D$ with boundary faces.
If $S$ is a surface in normal form,\index{normal} the \emph{combinatorial area}\index{combinatorial area}\index{combinatorial area!surface} of $S$ is defined to be the sum of combinatorial areas of the normal disks making up $S$.
\end{definition}
Note in the case $D$ is contained in a hyperbolic plane, meeting each edge of the polyhedron orthogonally, the combinatorial area\index{combinatorial area} of $D$ agrees with the actual hyperbolic area (\refex{AreaPolygon}).
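Since the combinatorial area of \refdef{CombinatorialArea} is a finite sum, it is easy to evaluate directly. The following minimal Python sketch (the function name and interface are ours, purely for illustration) computes $a(D)$ from the dihedral angles met by $\partial D$ and the number of boundary-face arcs, and checks the vertex triangle and boundary bigon computations appearing in the example below.
\begin{verbatim}
import math

def combinatorial_area(edge_angles, boundary_face_arcs):
    # edge_angles: dihedral angles (radians) at the ideal edges met by
    # the boundary of the normal disk.
    # boundary_face_arcs: number of arcs of intersection with boundary faces.
    return (sum(math.pi - a for a in edge_angles)
            - 2 * math.pi
            + math.pi * boundary_face_arcs)

# vertex triangle in an angle structure: three angles summing to pi
print(combinatorial_area([math.pi / 3] * 3, 0))   # 0.0 up to rounding error
# boundary bigon: no ideal edges, two boundary-face arcs
print(combinatorial_area([], 2))                  # 0.0
\end{verbatim}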
We can now generalize the idea of an angle structure\index{angle structure} on a triangulation to an angle structure on an ideal polyhedral decomposition.
\begin{definition}\label{Def:AnglePolyhedra}
An \emph{angled polyhedral structure}\index{angled polyhedral structure} on a 3-manifold $M$ is a decomposition of $M$ into ideal polyhedra, along with a collection of (interior) dihedral angles, one for each edge of each polyhedron, that satisfy the following conditions.
\begin{enumerate}
\item Each dihedral angle lies in the range $(0,\pi)$.
\item Every normal\index{normal} disk has non-negative combinatorial area.\index{combinatorial area}
\item Interior angles around an edge sum to $2\pi$.
\end{enumerate}
\end{definition}
\begin{example}
An angle structure\index{angle structure} on an ideal triangulation of $M$ is an example of an angled polyhedral structure.\index{angled polyhedral structure} To show this, suppose we have an angle structure on a triangulation of $M$. Then by \refdef{AngleStructures}, the dihedral angles are in the correct range, and interior angles around edges must sum to $2\pi$ to satisfy the definition of an angle structure. So we need only consider normal disks,\index{normal} and show that each has non-negative combinatorial area.\index{combinatorial area} Two examples of normal disks are shown in \reffig{BdyTriangleBdyBigon}.
\begin{figure}
\includegraphics{Figures/Ch08_Essential/F8-06-BdyTri.eps}
\caption{Normal disks:\index{normal} a vertex triangle\index{vertex triangle} and a boundary bigon\index{boundary bigon}}
\label{Fig:BdyTriangleBdyBigon}
\end{figure}
The triangle in the figure has combinatorial area\index{combinatorial area}
\[ a(D) = (\pi-\alpha) + (\pi-\beta) + (\pi-\gamma) -2\pi = \pi-(\alpha+\beta+\gamma) = 0,\]
since $\alpha$, $\beta$, $\gamma$ encircle an ideal vertex of a tetrahedron. We call this normal disk a \emph{vertex triangle}\index{vertex triangle}.
The other normal disk shown also has zero combinatorial area:\index{combinatorial area}
\[ a(D) = 0 - 2\pi + \pi\cdot 2 = 0.\]
This disk is called a \emph{boundary bigon}\index{boundary bigon}.
\begin{lemma}\label{Lem:CombAreaTriangulation}
Let $M$ be a triangulated 3-manifold with an angle structure.\index{angle structure} Then the combinatorial area\index{combinatorial area} of any normal disk\index{normal} $D$ in an ideal tetrahedron of $M$ is non-negative. It is zero if and only if $D$ is a vertex triangle or a boundary bigon.
\end{lemma}
\begin{proof}
As we have seen above, the combinatorial areas of boundary bigons and vertex triangles are zero.
If $D$ is a normal disk\index{normal} that meets at least two boundary faces, its combinatorial area is at least the sum $\sum (\pi-\alpha_i)$, and any term $(\pi-\alpha_i)$ is positive, so $a(D)\geq 0$ in this case.
If $D$ meets exactly one boundary face, then because $\partial D$ cannot meet edges adjacent to a boundary edge, it must meet an opposite edge in each of the triangles on either side of the boundary face. These cannot be opposite edges in the tetrahedron. Hence the combinatorial area is
\[ a(D) \geq \pi-\alpha + \pi-\beta -2\pi + \pi = \pi-\alpha-\beta = \gamma>0,\]
where here we let $\alpha$, $\beta$, and $\gamma$ denote the angles of an ideal tetrahedron with $\alpha+\beta+\gamma=\pi$. Thus if $D$ meets just one boundary face, $a(D)$ is strictly positive.
If $D$ meets no boundary faces, then it is either a vertex triangle or a quad separating two opposite edges. In the first case, the combinatorial area is zero. In the second, the combinatorial area is
\[ a(D) = 2(\pi-\alpha)+2(\pi-\beta) -2\pi = 2(\pi-\alpha-\beta) = 2\gamma>0.\]
In all cases, the combinatorial area is non-negative.
\end{proof}
Thus we have shown:
\begin{theorem}\label{Thm:AngleStructIsAngledPoly}
An angle structure\index{angle structure} on an ideal triangulation of $M$ is an angled polyhedral structure.\index{angled polyhedral structure}\qed
\end{theorem}
\end{example}
\begin{lemma}[Gauss--Bonnet]\label{Lem:GaussBonnet}
A normal surface\index{normal} $S$ in an angled polyhedral structure\index{angled polyhedral structure} satisfies
\[ a(S) = -2\pi\chi(S).\]
\end{lemma}
\begin{proof}
Recall that $\chi(S)$, the Euler characteristic of $S$,\index{Euler characteristic} is given by $\chi(S)=v-e+f$, where $v$ is the number of vertices in a polygonal decomposition of $S$, $e$ is the number of edges, and $f$ is the number of faces. In our case, the intersection of $S$ with the polyhedra determines a polygonal decomposition. Then $f$ is the number of normal disks,\index{normal} or intersections of $S$ with interiors of the polyhedra. The value $e$ is the number of intersections of $S$ with faces of the polyhedra, and $v$ is the number of intersections of $S$ with ideal edges of the polyhedra. Intersections of $S$ with boundary edges and boundary faces do not affect Euler characteristic at all.
By definition,
\begin{align*}
a(S) & = \sum_{D} a(D) = \sum_{D} \left(\sum_i(\pi-\alpha_i)+\pi|\partial D\cap\partial M| -2\pi\right) \\
& = \pi \sum_{D} \left( \left(\sum_i 1\right)+|\partial D\cap\partial M|\right) - \sum_{D}\sum_i\alpha_i - \sum_{D} 2\pi,
\end{align*}
where the sum is over normal disks\index{normal} $D\subset S$.
Note that the last term in the sum is $-2\pi f$, since we add $-2\pi$ for each normal disk of intersection in $S\cap P$.
The term $\sum_D\sum_i \alpha_i$ gives the sum of all interior angles met by the surface $S$. This is $2\pi v$.
Finally, we claim that $(\sum_i 1+|\partial D\cap\partial M|)$ counts the number of edges of the polygonal decomposition that lie in the boundary of the normal disk\index{normal} $D$ and in faces (not boundary faces) of the polyhedra. To see this, orient $\partial D$ and give each edge the corresponding direction. Its initial endpoint is either on an ideal edge of the polyhedron or a boundary edge. The sum counts all the initial endpoints of edges of $\partial D$ on faces, without counting initial endpoints of edges on boundary faces. Denote this by
\[ \left(\sum_i 1 + |\partial D\cap\partial M|\right) = e(D).\]
Now take the sum $\sum_D e(D)$. The sum over all normal disks\index{normal} counts each edge exactly twice, so its value is $2 e$. Thus $\pi \sum_D (\sum_i 1 + |\partial D \cap \partial M|) = 2\pi e$.
Putting it together, we find
\[ a(S) = 2\pi e - 2\pi v - 2\pi f = -2\pi\chi(S).\qedhere\]
\end{proof}
\subsection{Hyperbolicity}
\begin{theorem}\label{Thm:HypAngleStruct}
Let $M$ be a manifold admitting an angled polyhedral structure.\index{angled polyhedral structure} Then $M$ is irreducible\index{irreducible} and boundary irreducible,\index{boundary irreducible} and its boundary consists of tori.
Moreover, if the angled polyhedral structure is actually an angle structure on a triangulation of $M$, then $M$ is atoroidal\index{atoroidal} and anannular.\index{anannular} Hence any manifold admitting an angle structure\index{angle structure} is hyperbolic.
\end{theorem}
\begin{proof}
Suppose $S$ is an essential\index{essential} sphere in $M$. By \refthm{NormalForm}, we can put $S$ into normal form\index{normal form} with respect to the polyhedral decomposition of $M$, and obtain a combinatorial area\index{combinatorial area} for $S$. By definition of an angled polyhedral structure,\index{angled polyhedral structure} each normal disk\index{normal} of $S$ has non-negative combinatorial area, so $a(S)$ is non-negative. But \reflem{GaussBonnet} implies $a(S) = -4\pi$. This contradiction proves that $M$ is irreducible.\index{irreducible} A similar argument shows that $S$ cannot be an essential\index{essential} disk, so $M$ is boundary irreducible.\index{boundary irreducible}
Now consider the boundary components of $M$. These are obtained by gluing boundary faces. Pushing in slightly, we find that $\partial M$ is parallel to a normal surface\index{normal} made up of boundary parallel\index{boundary parallel} disks. By the definition of an angle structure,\index{angle structure} \refdef{AngleStructures}, and the definition of combinatorial area, \refdef{CombinatorialArea}, it follows that each such disk has combinatorial area zero. So $\partial M$ has combinatorial area\index{combinatorial area} zero, and hence, by \reflem{GaussBonnet}, Euler characteristic zero. Because $\partial M$ consists of closed surfaces in an orientable manifold, it must be a disjoint union of tori.
Now suppose the angled polyhedral structure\index{angled polyhedral structure} is an angle structure\index{angle structure} on a triangulation of $M$, and suppose $S$ is an essential torus. Then $S$ can be put into normal form\index{normal} by \refthm{NormalForm}, and \reflem{GaussBonnet} implies that $a(S)=0$, so each normal disk of $S$ has zero combinatorial area.\index{combinatorial area} Then \reflem{CombAreaTriangulation} implies that each normal disk is a vertex triangle\index{vertex triangle} or a boundary bigon.\index{boundary bigon} Since $S$ is a closed surface embedded in $M$, it does not meet boundary faces of $M$, hence each normal disk\index{normal} is a vertex triangle. But vertex triangles glue up to form surfaces parallel to the boundary of $M$, hence $S$ is parallel to a component of $\partial M$, which is a torus. This contradicts the fact that $S$ is essential.
Finally, suppose $S$ is an essential annulus in the manifold $M$ with a triangulation and angle structure.\index{angle structure} Then again $a(S)=0$, so $S$ is made up of vertex triangles\index{vertex triangle} and boundary bigons.\index{boundary bigon} There must be at least one boundary bigon. This must be glued to another boundary bigon, since the edges on the face of the tetrahedron run between boundary edges. Then $S$ is made up entirely of boundary bigons. The only possibility is that the boundary bigons encircle a single edge of the triangulation of $M$. Such an annulus is compressible:\index{incompressible} its core curve bounds a disk transverse to that edge. This contradicts the assumption that $S$ is essential, so $M$ is anannular.\index{anannular}
The fact that $M$ is hyperbolic now follows from \refthm{SfcesHyperbolic}.
\end{proof}
\section{Pleated surfaces and a 6-theorem}
When a 3-manifold admits a hyperbolic structure, then that structure can often be used to induce a hyperbolic structure on a properly embedded essential\index{essential} surface with punctures. If the surface is totally geodesic inside the 3-manifold, such as for the white and shaded surfaces in a fully augmented link,\index{fully augmented link} \refcor{AugmentedGeodSfces}, then the induced hyperbolic structure on the surface is unique. But usually a properly embedded surface is not totally geodesic. In this case, frequently we may still straighten the surface.
\begin{definition}\label{Def:HomotopicBdyIncompr}
An embedded surface $S$ in a 3-manifold $M$ is \emph{homotopically boundary incompressible},\index{homotopically boundary incompressible} or \emph{homotopically $\partial$-incompressible}\index{homotopically $\partial$-incompressible}, if for any properly embedded arc $\alpha$ in $S$ that is not homotopic rel endpoints into $\partial S$, the arc $\alpha$ in $M$ is not homotopic rel endpoints into $\partial M$. That is, a nontrivial arc in $S$ remains nontrivial in $M$. (This is also sometimes called \emph{algebraically $\partial$-incompressible}.)\index{algebraically $\partial$-incompressible}
\end{definition}
Notice that a homotopically $\partial$-incompressible surface is boundary incompressible.\index{boundary incompressible} The homotopical condition is stronger: arcs are allowed to be immersed, and a homotopy provides only a singular disk rather than an embedded compression disk.
\begin{lemma}\label{Lem:PrePleating}
Let $S$ be a surface with non-empty boundary properly embedded in a 3-manifold $M$ whose interior admits a complete
hyperbolic structure. Suppose $S$ is homotopically $\partial$-incompressible. Then the ideal edges of any ideal triangulation of $S$ can be homotoped to be geodesics in $M$. Similarly, each ideal triangle\index{ideal triangle} can be homotoped to be totally geodesic in $M$.
\end{lemma}
\begin{proof}
Any edge of an ideal triangulation on $S$ is homotopically non-trivial on $S$. Because $S$ is homotopically $\partial$-incompressible, each edge must also be homotopically non-trivial in $M$. Thus it lifts to an arc with distinct endpoints in the universal cover ${\mathbb{H}}^3$ of $M$. Such an arc is homotopic to a unique geodesic in ${\mathbb{H}}^3$. The image of this geodesic (and the homotopy) under the covering map gives the desired geodesic (and homotopy) in $M$.
Now for any ideal triangle of $S$, the edges of the triangle are homotopic to geodesics in $M$. Lift one edge to be a geodesic in ${\mathbb{H}}^3$. Because the interior of the triangle is homotopically trivial in $S$ and in $M$, it lifts to the interior of a triangle in ${\mathbb{H}}^3$, and thus we may choose lifts of the other two edges of the triangle such that the three bound a unique totally geodesic triangle, homotopic to a lift of $S$, in ${\mathbb{H}}^3$. The image of this triangle (and homotopy) under the covering map gives the desired totally geodesic triangle in $M$.
\end{proof}
We often refer to the homotopy of \reflem{PrePleating} as \emph{straightening}\index{straightening edges and triangles}.
After straightening edges and triangles in \reflem{PrePleating}, note that the surface will typically be bent along the geodesic ideal edges. In addition, note that the homotopy is not at all guaranteed to leave the surface embedded. Thus after such a straightening the surface is frequently only immersed, not embedded. However, the process still can give significant geometric information.
\begin{definition}\label{Def:PleatedSurface}
A \emph{pleated surface}\index{pleated surface} in a hyperbolic 3-manifold $M$ is a pair $(S, \varphi)$ consisting of a surface $S$ with complete hyperbolic structure, and a local isometry $\varphi\from S\to \varphi(S)\subset M$ such that each point in $S$ lies in a geodesic mapped by $\varphi$ to a geodesic. When $(S,\varphi)$ is a pleated surface, we will also sometimes say that the image $\varphi(S) \subset M$ is pleated.
\end{definition}
\begin{proposition}\label{Prop:Pleating}
A homotopically $\partial$-incompressible surface (with non-empty boundary) properly embedded in a hyperbolic 3-manifold can be pleated. That is, it is homotopic to the image of a local isometry $\varphi\from S\to\varphi(S)$ coming from a pleated surface.\index{pleated surface}
\end{proposition}
\begin{proof}
This follows almost immediately from \reflem{PrePleating}; we only need to describe the hyperbolic structure on $S$. The straightening process of \reflem{PrePleating} maps each ideal triangle\index{ideal triangle} of $S$ to a hyperbolic ideal triangle. We define a hyperbolic structure on $S$ by taking isometric ideal triangles in $S$ and attaching them along edges such that the map from $S$ into the homotopic surface in $M$ is an isometry. Thus the hyperbolic structure on $S$ can be viewed as pulling the triangles of $S$ out of $M$ and lining them up, without bending, in ${\mathbb{H}}^2$. This gives a fundamental domain for $S$. The surface is obtained by applying isometric gluing maps on edges.
\end{proof}
Let $M$ be a compact 3-manifold with a torus boundary component $T$ such that the interior of $M$ admits a complete hyperbolic structure. Then recall that in the complete hyperbolic structure, the boundary of any embedded horoball neighborhood of the cusp corresponding to $T$ inherits a Euclidean structure\index{Euclidean structure} (\refthm{EuclidCusp}).
\begin{definition}\label{Def:SlopeLength}
Recall that an isotopy class of simple closed curves on the torus $T$ is a \emph{slope}.\index{slope} In a Euclidean metric on $T$, any slope $s$ can be isotoped to a geodesic. The \emph{slope length} of $s$\index{slope length} is defined to be the length of such a geodesic, denoted $\ell(s)$.
For $M$ a compact 3-manifold with torus boundary components and hyperbolic interior, a fixed choice of cusp neighborhoods gives a fixed Euclidean structure\index{Euclidean structure} on each torus boundary component. The slope length of $s$ is measured in this fixed Euclidean structure.
\end{definition}
Hyperbolic geometry and pleated surfaces\index{pleated surface} can be used to give a proof of the following theorem.
\begin{theorem}[A 6-theorem]\label{Thm:6Theorem}\index{6-theorem, weaker}
Suppose $M$ is a compact manifold with torus boundary components, such that the interior of $M$ admits a complete hyperbolic structure. Let $s_1, \dots, s_n$ be slopes on distinct boundary components of $M$ such that each slope length\index{slope length} $\ell(s_i)$ is strictly larger than $6$ on a collection of disjoint embedded horospherical tori for $M$. Then the manifold $M(s_1, \dots, s_n)$ obtained by Dehn filling\index{Dehn filling} $M$ along slopes $s_1, \dots,s_n$ is irreducible,\index{irreducible} boundary irreducible,\index{boundary irreducible} anannular,\index{anannular} and atoroidal.\index{atoroidal}
\end{theorem}
\Refthm{6Theorem} is \emph{a} 6-theorem, but not exactly \emph{the} 6-theorem.\index{6-theorem} The 6-theorem, proved independently and simultaneously by Agol \cite{agol:bounds} and Lackenby \cite{lackenby:word}, is stronger. Their theorem states that if slope lengths are greater than six, then the Dehn filled manifold cannot be reducible,\index{reducible 3-manifold} toroidal, Seifert fibered, or have finite fundamental group. The geometrization theorem implies such a manifold must be hyperbolic. When the Dehn filling gives a closed manifold, our 6-theorem, in \refthm{6Theorem}, does not rule out Seifert fibered or finite fillings. However, in the case that the manifold we obtain after Dehn filling\index{Dehn filling} still has boundary, it will be hyperbolic by Thurston's hyperbolization theorem, \refthm{SfcesHyperbolic}. Also, our proof uses a little less machinery, while still giving a nice introduction to the geometric arguments involved. We highly recommend reading the original papers \cite{agol:bounds} and \cite{lackenby:word}.
Our proof of \refthm{6Theorem} will follow three simple steps. First, assuming that $M(s)$ is reducible,\index{reducible 3-manifold} boundary reducible, annular, or toroidal, we show that there is a punctured 2-sphere or punctured torus $S$ embedded in $M$ that is essential,\index{essential} with boundary components on $\partial M$ tracing out slopes $s_i$. Second, because $S$ is essential, it can be pleated,\index{pleated surface} and it inherits a hyperbolic metric and cusp neighborhoods from the metric on $M$ and its embedded horocusps. Third, arguments in hyperbolic geometry show that the slope lengths\index{slope length} are at most six.
\begin{lemma}\label{Lem:EssentialPunctSurface}
Let $M$, $s_1, \dots, s_n$ be as in the statement of \refthm{6Theorem}.
Suppose $M(s_1, \dots, s_n)$ contains an embedded essential\index{essential} sphere, disk, annulus, or torus. Then $M$ contains an essential, homotopically $\partial$-incompressible punctured sphere or torus $S$, with $\partial S$ some subset of the slopes $s_1, \dots, s_n$. Moreover, if $S$ is a punctured sphere then it has at least three punctures.
\end{lemma}
\begin{proof}
Let $T$ be an embedded essential sphere, disk, annulus, or torus in $M(s_1, \dots, s_n)$. Note that $M \subset M(s_1, \dots, s_n)$. If $T$ is embedded in $M$, then it cannot be essential in $M$ because $M$ is hyperbolic. But if $T$ is compressible\index{compressible} or boundary compressible\index{boundary compressible} in $M$, then a compression disk\index{compression disk} for $T$ is embedded in $M\subset M(s_1, \dots, s_n)$, hence is a compression disk for $T$ in $M(s_1, \dots, s_n)$, contradicting the fact that $T$ is essential. Similarly, if $T$ is boundary parallel\index{boundary parallel} in $M$ then it is compressible\index{compressible} in $M(s_1, \dots, s_n)$. So $T$ cannot be embedded in $M$; it must meet the solid tori attached to $M$ to form $M(s_1, \dots, s_n)$. We may assume it meets the cores of the added solid tori transversely, else we could isotope the surface to lie in $M$, which would be a contradiction. Thus when we drill these cores from $M(s_1, \dots, s_n)$, the surface $S$ given by removing neighborhoods of the cores from $T$ is a surface with boundary whose boundary components come from the set of slopes $\{s_1, \dots, s_n\}$. Note $S$ is a punctured sphere or punctured torus.
Now, $S$ cannot be compressible\index{compressible} in $M$, else a compression disk\index{compression disk} is a compression disk for $T$ in $M(s_1, \dots, s_n)$. Any boundary compression disk\index{boundary compression disk} $D$ must have boundary consisting of an arc $\alpha$ in $S$ and an arc $\beta$ running along a boundary component of $M$. Using $D$, we may isotope $T$ through $D$ to the core of a solid torus in $M(s_1, \dots, s_n)$, and then slightly past, removing two intersections of $T$ with cores of filled solid tori. So after repeating this move finitely many times, we may assume $S$ is boundary incompressible.\index{boundary incompressible} Note also that $S$ cannot be boundary parallel,\index{boundary parallel} or $T$ is compressible.\index{compressible} So $S$ is essential.
To show $S$ is homotopically $\partial$-incompressible, apply a proof similar to that of \reflem{IncompressVsPi1} to show that if $S$ is not homotopically $\partial$-incompressible, then the boundary of a regular neighborhood of $S$ is boundary compressible.\index{boundary compressible} The same argument as above implies that intersections of $S$ with cores of solid tori can be removed in this case.
Finally, if $S$ is a punctured sphere, then it must have at least three punctures else it will be an essential disk or annulus in $M$, but the hyperbolicity of $M$ rules out such surfaces.
\end{proof}
The following lemma is from \cite{FuterSchleimer}, and uses arguments of \cite{agol:bounds}.
\begin{lemma}\label{Lem:PleatedSurfaceLength}
Suppose $M$ is an orientable hyperbolic 3-manifold with a cusp, and horoball neighborhood $C$ about the cusp. Suppose $f\from S\to M$ is a pleating of a punctured surface $S$,\index{pleated surface} with $n$ punctures of $S$ mapping to $C$. Suppose finally that for each puncture of $S$, a loop about the puncture is represented by a geodesic of length $\lambda$ on $\partial C$ in $M$. Then in the hyperbolic metric on $S$ given by the pleating, the preimage $f^{-1}(C)\subset S$ contains horospherical cusp neighborhoods $R_1, \dots, R_n$ of the $n$ punctures of $S$, with disjoint interiors, such that
\[ \ell(\partial R_i) = \operatorname{area}(R_i) \geq \lambda \quad \mbox{for each } i.\]
\end{lemma}
\begin{proof}
The pleating of $S$ gives $S$ an ideal triangulation. Start with a cusp neighborhood $C_0\subset C$ such that $f$ maps all ideal edges of the triangulation to geodesic rays running into the cusp. That is, $C_0$ does not intersect any edge of the triangulation in a compact arc. See \reffig{Pleating}, left. Then $f(S)\cap C_0$ consists of tips of triangles, and $f^{-1}(C_0)$ is a collection of embedded cusps $R_1^0, \dots, R_n^0$ in $S$.
\begin{figure}
\import{Figures/Ch08_Essential/}{F8-07-Pleat.eps_tex}
\caption{Left: Choose $C_0$ to meet only tips of triangles. Right: Distances between horoballs}
\label{Fig:Pleating}
\end{figure}
Lift $M$ to the universal cover ${\mathbb{H}}^3$. The cusps $C$ and $C_0$ both lift to collections of disjoint horoballs. Because $C_0$ is contained in $C$, for each horoball lift $H$ of $C$, there is a horoball lift $H_0$ of $C_0$ contained in $H$. Let $d$ denote the hyperbolic distance between the horospheres $\partial H_0$ and $\partial H$. Note that since $C$ is embedded, the distance from $H_0$ to any other lift of $C_0$ must be at least $2d$. See \reffig{Pleating}, right.
Projecting back to $M$, a geodesic between $H_0$ and any other lift of $C_0$ projects to a geodesic from $C_0$ to $C_0$ of length at least $2d$. Pulling back to $S$, the distance from $R_i^0$ to any other $R_j^0$ is at least $2d$. Let $R_1, \dots, R_n$ be the cusp neighborhoods in $S$ obtained by expanding each $R_i^0$ a distance $d$. Since the $R_i^0$ are at distance at least $2d$ from each other, the $R_i$ are embedded and have disjoint interiors.
We now show that the lengths $\ell(\partial R_i)$ are at least $\lambda$ for all $i$. Let $\gamma_0$ be a Euclidean geodesic on $\partial C_0$ representing $f(\partial R_i^0)$. Since pleating may decrease distance, $\ell(\gamma_0)\leq \ell(\partial R_i^0)$. Moreover, letting $\gamma$ be the loop on $\partial C$ homotopic to $\gamma_0$, we have $\lambda \leq \ell(\gamma) = e^{d}\ell(\gamma_0)$, because $\gamma$ and $\gamma_0$ lie on cusp boundaries of hyperbolic distance $d$ apart, with $\partial C_0$ the boundary deeper in the cusp. Similarly, $\ell(\partial R_i) = e^{d}\ell(\partial R_i^0)$, since $R_i$ is obtained from $R_i^0$ by expanding a distance $d$. Putting this together,
\[ \lambda \leq \ell(\gamma) = e^{d}\ell(\gamma_0) \leq e^{d}\ell(\partial R_i^0) = \ell(\partial R_i). \]
Finally, we need to show that $f(R_i)$ is contained in $C$. We know $f(R_i^0)$ is contained in $C_0$. By construction, $f(R_i)$ is contained in a $d$-neighborhood of $C_0$. But a $d$-neighborhood of $C_0$ is the cusp $C$. Thus $f(R_i)$ lies in $C$.
\end{proof}
Now we present a result from \cite{boroczky}.
\begin{theorem}[B{\"o}r{\"o}czky cusp density theorem]\label{Thm:Boroczky}\index{B{\"o}r{\"o}czky cusp density theorem!2-dimensional}\index{cusp density theorem!2-dimensional}
Let $S$ be a hyperbolic surface with cusps, and let $H$ be an embedded horoball neighborhood for the cusps of $S$. Then
\[ \operatorname{area}(H) \leq \frac{3}{\pi}\operatorname{area}(S). \]
\end{theorem}
\begin{proof}
Given $S$ and $H$, we claim there exists an ideal triangulation of $S$ such that for any triangle $T$, $H$ meets $T$ only in connected neighborhoods of its ideal vertices; in particular, each component of $H\cap T$ is noncompact. For example, this will hold for a subdivision of the canonical decomposition\index{canonical decomposition} of $S$ with respect to $H$, which is defined in \refchap{Canonical}. We will assume such a decomposition exists.
Map $T$ isometrically to the triangle $T' \subset {\mathbb{H}}^2$ with ideal vertices at $0$, $1$, and $\infty$. The image of $H$ determines horoballs $H_0$, $H_1$, and $H_\infty$ about $0$, $1$, $\infty$, respectively. The area of $H\cap T$ in $S$ is given by the sum
\begin{equation}\label{Eqn:AreaSum}
\operatorname{area}(H\cap T) = \operatorname{area}(H_0\cap T') + \operatorname{area}(H_1\cap T') + \operatorname{area}(H_\infty\cap T'),
\end{equation}
and the area of $H$ is given by the sum of all such areas over all triangles $T$. Since the area of $S$ is just $\pi$ times the number of triangles, to maximize the cusp density $\operatorname{area}(H)/\operatorname{area}(S)$ we need to maximize the cusp density within ideal triangles,\index{ideal triangle} or maximize the sum of \refeqn{AreaSum} within each triangle.
So consider the triangle $T'\subset {\mathbb{H}}^2$ with vertices at $0$, $1$, and $\infty$, and with horoballs $H_\infty$ about $\infty$ of height $h_\infty$, $H_0$ about $0$ of diameter $h_0$, and $H_1$ about $1$ of diameter $h_1$. By the observation that (open) horoballs do not meet edges of the triangulation in intervals with compact closure, we know that $h_0$ and $h_1$ are at most $2$, and $h_\infty$ is at least $1/2$. These give constraints on $h_0$, $h_1$, and $h_\infty$. We also have constraints coming from the fact that $H_0$, $H_1$, and $H_\infty$ are disjoint.
If one of $H_0$, $H_1$, or $H_\infty$ is not tangent to one of the other two, then we may expand it, increasing cusp area, until either it is as large as possible given our constraints, or it is tangent to one of the other horoballs. If one of the $H_i$ is as large as possible, but still not tangent to the other horoballs, then the other two horoballs are much smaller than our constraints, and we may expand them until they are tangent to $H_i$. In any case, we may assume that the cusp area is maximized when each horoball is tangent to one other horoball; because there are three, it follows that one horoball, without loss of generality $H_\infty$, is tangent to the other two.
Now we may compute the cusp area directly. Since $H_0$ and $H_1$ are tangent to $H_\infty$, they have diameters $h_0=h_1=h_\infty$, where $h_\infty$ is the height of $H_\infty$. Then the areas of the cusps satisfy (\refex{CuspAreaTriangle}):
\[ \operatorname{area}(H_\infty\cap T') = \frac{1}{h_\infty}, \quad
\operatorname{area}(H_0\cap T') = \operatorname{area}(H_1\cap T') = h_\infty, \]
so
\[ \operatorname{area}(H\cap T') = \frac{1}{h_\infty} + 2 h_\infty. \]
We maximize this equation for $h_\infty$ subject to constraints: $h_\infty$ is at least $1/2$, and $H_1$ and $H_0$ are disjoint, so $h_\infty$ is at most $1$.
We find that the function has a single critical point, a minimum, at $h_\infty = 1/\sqrt{2}$, so it attains its maximum value on the interval at the endpoints $h_\infty=1/2$ and $h_\infty=1$; the maximum is $3$.
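In detail, the derivative of $f(h_\infty) = 1/h_\infty + 2h_\infty$ is
\[ f'(h_\infty) = -\frac{1}{h_\infty^2} + 2, \]
which vanishes only at $h_\infty = 1/\sqrt{2}$, where $f(1/\sqrt{2}) = 2\sqrt{2} \approx 2.83$; since $f$ is convex, its maximum on $[1/2,1]$ occurs at the endpoints, where $f(1/2) = f(1) = 3$.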
Now let $n$ be the number of triangles in $S$. Then
\[
\frac{\operatorname{area}(H)}{\operatorname{area}(S)} \leq \frac{ 3\cdot n}{\pi \cdot n} = \frac{3}{\pi}.\qedhere
\]
\end{proof}
\begin{proof}[Proof of \refthm{6Theorem}, a 6-theorem]\index{6-theorem, weaker}
Suppose by way of contradiction that $M(s_1, \dots, s_n)$ is reducible,\index{reducible 3-manifold} boundary reducible, annular, or toroidal. Then \reflem{EssentialPunctSurface} implies $M$ contains an embedded essential,\index{essential} homotopically $\partial$-incompressible punctured 2-sphere or torus $S$, whose boundary components on $\partial M$ are parallel to a subset of the slopes $s_1, \dots, s_n$.
By \refprop{Pleating}, $S$ may be pleated.\index{pleated surface} By \reflem{PleatedSurfaceLength}, the pleating induces horoball neighborhoods $R_1, \dots, R_m$ of cusps of $S$ for which
\[ \ell(\partial R_i) = \operatorname{area}(R_i) \geq \ell(s_{j_i}),\]
where $f(\partial R_i)$ is the slope $s_{j_i}$. Let $H$ denote the union of the horoball neighborhoods $R_i$.
Now \refthm{Boroczky} and the Gauss--Bonnet theorem imply
\begin{equation}\label{Eqn:6Thm}
\sum_i \ell(s_{j_i}) \leq \sum_i \ell(\partial R_i) = \operatorname{area}(H) \leq \frac{3}{\pi}\operatorname{area}(S) = \frac{3}{\pi}\cdot 2\pi|\chi(S)| = 6|\chi(S)|.
\end{equation}
On the other hand, each $\ell(s_{j_i})>6$, and there are $m$ of these, where $m$ is the number of boundary components of $S$. If $S$ is a punctured sphere, $|\chi(S)| = m-2$ and \refeqn{6Thm} implies $6m < 6(m-2)$, which is a contradiction. If $S$ is a punctured torus, \refeqn{6Thm} implies $6m < 6m$, again a contradiction.
\end{proof}
Our 6-theorem\index{6-theorem, weaker} has immediate consequences for determining when certain knots and links are hyperbolic.
\begin{definition}\label{Def:HighlyTwisted}
For an integer $c>0$, we say a knot or link is \emph{$c$-highly twisted} if it admits a diagram in which every twist region has at least $c$ crossings. If $c$ is understood from the context, we also say such a knot or link is \emph{highly twisted}.\index{highly twisted}
\end{definition}
The following theorem was first proved in \cite{futer-purcell}.
\begin{theorem}[Hyperbolicity of highly twisted links]\label{Thm:FuterPurcellFilling}\index{highly twisted!hyperbolicity}
Let $K\subset S^3$ be a link with a prime, twist-reduced diagram, as in \refdef{TwReduced}. Assume that $K$ has at least two twist regions. If every twist region of the diagram contains at least six crossings, then the complement of $K$ is hyperbolic.
\end{theorem}
\begin{proof}
By \reflem{TwReducedGivesReducedAug}, when we add crossing circles to every twist region of $K$, we obtain a fully augmented link\index{fully augmented link} $L$; $K$ now forms the knot strands of this link. Remove all crossings of the knot strands except possibly single crossings in twist regions. Then the complement of $K$ is obtained from the complement of $L$ by performing Dehn filling along slopes $s_j$ on the crossing circles $C_j$; the filling replaces each crossing circle with a twist region with at least six crossings.
By \refthm{AugSlopeLengths}, each slope $s_j$ has length at least $\sqrt{6^2+1} =\sqrt{37}>6$. Thus by our 6-theorem, \refthm{6Theorem},\index{6-theorem, weaker} the manifold obtained from the complement of $L$ by this Dehn filling, namely the complement of $K$, is irreducible,\index{irreducible} boundary irreducible,\index{boundary irreducible} anannular,\index{anannular} and atoroidal.\index{atoroidal} By Thurston's hyperbolization theorem, \refthm{SfcesHyperbolic}, it is hyperbolic.
\end{proof}
The full 6-theorem can be used to identify knots in $S^3$.
\begin{corollary}\label{Cor:KnotsInS3}
Suppose $M$ is a hyperbolic 3-manifold with a single cusp. Then $M$ is the complement of a knot in $S^3$ if and only if there exists a slope $s$ on the cusp of $M$ of length at most six such that $M(s)$ is homeomorphic to $S^3$.
\end{corollary}
\begin{proof}
A compact 3-manifold with a single torus boundary component is the complement of a knot in $S^3$ if and only if some Dehn filling along that component gives $S^3$. The full 6-theorem\index{6-theorem} of Agol and Lackenby implies such a slope on a horocusp of a hyperbolic 3-manifold must have length at most six.
\end{proof}
Since only finitely many slopes on a fixed hyperbolic 3-manifold have length at most six, to determine whether a hyperbolic 3-manifold is the complement of a knot in $S^3$, it suffices to check finitely many Dehn fillings,\index{Dehn filling} and then to identify whether or not the filled manifold is $S^3$.
\section{Exercises}
\begin{exercise}
Prove that the unknot is the only knot $K$ in $S^3$ such that $S^3-N(K)$ admits a properly embedded compression disk\index{compression disk} for $\partial N(K)$.
\end{exercise}
\begin{exercise}
Prove that the paragraph starting with ``Equivalently'' in \refdef{Satellite} is indeed an equivalent definition of a satellite knot. You may assume Alexander's theorem from 3-manifold topology that states that an embedded torus in $S^3$ bounds a solid torus on at least one side.
\end{exercise}
\begin{exercise}\label{Ex:ProveIncompressVsPi1}
Prove \reflem{IncompressVsPi1}. For one direction, you may use the \emph{loop theorem}, which is a classical result in 3-manifold topology:
\begin{theorem}[Loop theorem \cite{papa}]\label{Thm:LoopTheorem}\index{loop theorem}
If $N$ is a 3-manifold with boundary, and there is a map $f\from D^2\to N$ such that the loop $f(\partial D^2)\subset \partial N$ is homotopically nontrivial in $\partial N$, then there is an embedding with the same property.
\end{theorem}
\end{exercise}
\begin{exercise}\label{Ex:NonorIncompressVsPi1}
Prove that a nonorientable surface $S$ properly embedded in a 3-manifold $M$ is $\pi_1$-injective\index{$\pi_1$-injective} if and only if $\widetilde{S}$, the boundary of a regular neighborhood of $S$ in $M$, is an orientable incompressible\index{incompressible} surface.
\end{exercise}
\begin{exercise}\label{Ex:TorusKnot}
\begin{enumerate}
\item Prove that a $(p,q)$-torus knot $T(p,q)$ is nontrivial if $|p|, |q|\geq 2$.
\item Let $T$ denote the torus in $S^3$ on which the torus knot $T(p,q)$ lies, and let $A$ denote the annulus $T-T(p,q)$. Prove $A$ is incompressible\index{incompressible} if $|p|, |q|\geq 2$.
\end{enumerate}
Hint for both parts: Seifert--Van Kampen theorem.
\end{exercise}
\begin{exercise}\label{Ex:EssentialSfcesMeetPolyinDisks}
Suppose $M$ is a 3-manifold with an ideal polyhedral decomposition and $S$ a properly embedded essential\index{essential} surface in $M$ with complexity as in the proof of \refthm{NormalForm}.
\begin{enumerate}
\item Prove that if $S$ is an embedded essential sphere, then we may replace $S$ with an embedded essential sphere $S'$ meeting each polyhedron in disks such that the complexity of $S'$ is at most that of $S$.
\item Prove that if $M$ is irreducible\index{irreducible} and $S$ is an essential surface properly embedded in $M$, then $S$ can be isotoped to meet polyhedra only in disks, reducing (or at worst fixing) complexity.
\end{enumerate}
\end{exercise}
\begin{exercise}\label{Ex:NormalDisk}
Suppose $M$ is a 3-manifold with an ideal polyhedral decomposition, and $S$ is an essential\index{essential} surface properly embedded in $M$. Suppose there is a disk $D$ of intersection of $S$ with a polyhedron $P$ such that $\partial D$ meets an edge of $P$ more than once. Prove $S$ can be isotoped to remove at least two intersections with that edge.
\end{exercise}
\begin{exercise}\label{Ex:NormalBdry}
Suppose $M$ is a 3-manifold with an ideal polyhedral decomposition, and $S$ is an essential\index{essential} surface with boundary properly embedded in $M$. Suppose there is a disk $D$ of intersection of $S$ with a polyhedron such that $\partial D$ meets a boundary face more than once. Then if $S$ is a disk, prove it can be replaced by an essential disk meeting the boundary face fewer times. If $S$ is not a disk and $M$ is irreducible\index{irreducible} and boundary irreducible,\index{boundary irreducible} prove $S$ can be isotoped to meet the boundary face fewer times.
\end{exercise}
\begin{exercise}\label{Ex:AreaPolygon}
Prove that if $D$ is a hyperbolic polygon, then its hyperbolic area is
\[ \operatorname{area}(D) = \sum(\pi-\alpha_i) - 2\pi + \pi \, v,\]
where $\alpha_i$ is the angle of the $i$-th finite vertex, and $v$ is the number of ideal vertices of $D$.
\end{exercise}
\begin{exercise}\label{Ex:CuspAreaTriangle}
Consider the ideal triangle\index{ideal triangle} $\Delta$ with vertices at $0$, $1$, and $\infty$, and horoballs $H_\infty$ about $\infty$ of height $h_\infty$, $H_0$ about $0$ of diameter $h_0$, and $H_1$ about $1$ of diameter $h_1$. Prove that the areas of $H_i\cap \Delta$ satisfy
\[ \operatorname{area}(H_\infty\cap \Delta) = \frac{1}{h_\infty}, \quad \operatorname{area}(H_0\cap\Delta)=h_0, \quad \mbox{and } \operatorname{area}(H_1\cap\Delta)=h_1, \]
so the total cusp area of $\Delta$ is $1/h_\infty + h_0 + h_1$.
\end{exercise}
\begin{exercise}
Let $S$ be a hyperbolic surface with a cusp, with horoball cusp neighborhood $C$. Show that the length of the boundary of $C$ is equal to the area of $C$.
\end{exercise}
\chapter{Volume and Angle Structures}\label{Chap:AngleStruct}
\blfootnote{Jessica S. Purcell, Hyperbolic Knot Theory}
Those hyperbolic 3-manifolds that admit a triangulation by positively oriented geometric tetrahedra\index{positively oriented tetrahedron}\index{tetrahedron!positively oriented} exhibit many additional nice properties. The existence of such a triangulation often gives a simpler way to prove many results in hyperbolic geometry. We present some of the techniques and consequences in this chapter.
In the theory of knots and links, these tools have been applied to great effect to an infinite class of knots and links called 2-bridge links,\index{2-bridge knot or link} which we will describe (and triangulate) in the next chapter.
\section{Hyperbolic volume of ideal tetrahedra}
Ideal tetrahedra are building blocks of many complete hyperbolic manifolds. In this section, we will calculate volumes of ideal tetrahedra.
Recall that a hyperbolic ideal tetrahedron is completely determined by $z\in{\mathbb{C}}$ with positive imaginary part, as in \refdef{EdgeInvariant}. It is also determined by three dihedral angles, as the following lemma shows.
\begin{lemma}\label{Lem:AnglesDetermineTet}
Let $\alpha$, $\beta$, $\gamma$ be angles in $(0,\pi)$ such that $\alpha+\beta+\gamma=\pi$. Then $\alpha$, $\beta$, and $\gamma$ determine a unique hyperbolic ideal tetrahedron up to isometry of ${\mathbb{H}}^3$. Conversely, any hyperbolic ideal tetrahedron determines a unique triple $\{\alpha, \beta, \gamma\} \subset (0,\pi)$ with $\alpha+\beta+\gamma=\pi$.
\end{lemma}
\begin{proof}
First we prove the converse. Given an ideal tetrahedron with ideal vertices on $\partial {\mathbb{H}}^3$ at $0$, $1$, $\infty$, and $z$, note that a horosphere about $\infty$ intersects the tetrahedron in a Euclidean triangle. Let $\alpha$, $\beta$, $\gamma$ denote the interior angles of the triangle; these are dihedral angles of the tetrahedron. Each angle $\alpha$, $\beta$, $\gamma$ lies in $(0,\pi)$, and the sum $\alpha+\beta+\gamma=\pi$, as desired. \Refex{TetLabels1} shows that taking a different collection of vertices to $0$, $1$, and $\infty$ will give the same dihedral angles $\alpha$, $\beta$, $\gamma$, so these three angles are uniquely determined by the tetrahedron.
Now, suppose $\alpha$, $\beta$, and $\gamma$ in $(0,\pi)$ are given, with $\alpha+\beta+\gamma=\pi$. Then these three numbers determine a Euclidean triangle, uniquely up to scale, with interior angles $\alpha$, $\beta$, $\gamma$. View the triangle as lying in ${\mathbb{C}}$; we may adjust such a triangle so that it has vertices at $0$, $1$, and some $z\in{\mathbb{C}}$ with positive imaginary part. This determines a tetrahedron with edge parameter $z$. If we rotate and scale the triangle so that different vertices map to $0$ and $1$, this corresponds to mapping different ideal vertices of the tetrahedron to $0$ and $1$. The parameter $z$ will be adjusted as in \reflem{EdgeInvariants}, but the tetrahedron will be the same up to isometry.
\end{proof}
Lemmas~\ref{Lem:AnglesDetermineTet} and~\ref{Lem:EdgeInvariants} give two different ways of uniquely describing an ideal tetrahedron, either by a single complex number $z$ or by a triple of angles $\alpha$, $\beta$, $\gamma$ with $\alpha+\beta+\gamma=\pi$. We will compute the volume of an ideal tetrahedron, and we choose to parameterize by angles rather than by the edge parameter, although the computation can be done either way. (See the exercises.)
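To illustrate the correspondence between the two descriptions, here is a short Python sketch. It assumes the standard convention that if the tetrahedron has edge invariant $z$ (with positive imaginary part), then the invariants of the three edge classes are $z$, $1/(1-z)$, and $(z-1)/z$, and the dihedral angles are their arguments; the function name is ours.
\begin{verbatim}
import cmath, math

def dihedral_angles(z):
    """Dihedral angles of the ideal tetrahedron with edge invariant z, Im z > 0.
    Assumes the three edge invariants are z, 1/(1-z), (z-1)/z."""
    return [cmath.phase(w) for w in (z, 1 / (1 - z), (z - 1) / z)]

angles = dihedral_angles(complex(0.5, math.sqrt(3) / 2))  # regular ideal tetrahedron
print(angles, sum(angles))  # approximately [pi/3, pi/3, pi/3], summing to pi
\end{verbatim}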
\begin{definition}\label{Def:LobachevskyFunction}
The \emph{Lobachevsky function}\index{Lobachevsky function} $\Lambda(\theta)$ is the function defined by
\[
\Lambda(\theta) = - \int_0^\theta \log|2\sin u|\, du.
\]
\end{definition}
\begin{theorem}\label{Thm:VolTet}
Suppose $\alpha$, $\beta$, and $\gamma$ are angle measures strictly between $0$ and $\pi$, and suppose $\alpha+\beta+\gamma=\pi$, so they determine a hyperbolic ideal tetrahedron $\Delta(\alpha, \beta, \gamma)$. Then the volume $\operatorname{vol}(\Delta(\alpha, \beta, \gamma))$ is equal to
\[
\operatorname{vol}(\Delta(\alpha, \beta, \gamma)) = \Lambda(\alpha)+\Lambda(\beta)+\Lambda(\gamma),
\]
where $\Lambda$ is the Lobachevsky function of \refdef{LobachevskyFunction}.
\end{theorem}
\begin{example}\label{Example:VolFig8}
The figure-8 knot complement has complete hyperbolic structure built of two regular ideal tetrahedra, each with dihedral angles $\pi/3$ and hence volume $3\Lambda(\pi/3)$ by \refthm{VolTet}. Therefore the volume of the figure-8 knot complement is $6\Lambda(\pi/3)$, which can be numerically calculated to be approximately $2.0299$.
\end{example}
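This value is easy to check numerically. The following short Python sketch uses SciPy's \texttt{quad} to evaluate the defining integral of the Lobachevsky function; the name \texttt{lob} is ours, not part of the text.
\begin{verbatim}
import math
from scipy.integrate import quad

def lob(theta):
    """Lobachevsky function, evaluated by numerical integration."""
    return quad(lambda u: -math.log(abs(2 * math.sin(u))), 0, theta)[0]

print(lob(math.pi / 3))      # approximately 0.3383
print(6 * lob(math.pi / 3))  # approximately 2.0299
\end{verbatim}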
Our proof of \refthm{VolTet} follows that given by Milnor in \cite{Milnor:HypGeom} and also in \cite[Chapter~7]{thurston}. Milnor, in turn, credits Lobachevsky for several of his calculations.
First, we need a lemma concerning the Lobachevsky function.
\begin{lemma}\label{Lem:Lobachevsky}
The Lobachevsky function $\Lambda(u)$ satisfies:
\begin{enumerate}
\item It is well-defined and continuous on ${\mathbb{R}}$ (even though the defining integral is improper).
\item $\Lambda(-\theta)=-\Lambda(\theta)$, i.e.\ $\Lambda(\theta)$ is odd.
\item $\Lambda(\theta)$ is periodic of period $\pi$.
\item\label{Itm:Kubert2} It satisfies the expression $\Lambda(2\theta) = 2\Lambda(\theta) + 2\Lambda(\theta+\pi/2).$
\end{enumerate}
\end{lemma}
\begin{proof}
To prove the lemma, we will relate the Lobachevsky function to the well-known \emph{dilogarithm function}\index{dilogarithm function}
\begin{equation}\label{Eqn:Dilogarithm}
\psi(z) = \sum_{n=1}^\infty z^n/n^2 \quad \mbox{for } |z|\leq 1.
\end{equation}
For more information on the dilogarithm, see for example \cite{Zagier:Dilogarithm}. Note that for $|z|<1$, the derivative of $\psi(z)$ satisfies
\[
\psi'(z) = \sum_{n=1}^\infty \frac{z^{n-1}}{n} = \frac{1}{z}\left( \sum_{n=1}^\infty \frac{z^n}{n} \right).
\]
The sum on the right hand side is a well-known Taylor series:
\[ -\log(1-z) = \sum_{n=1}^\infty \frac{z^n}{n} \quad \mbox{for } |z|<1. \]
Thus the analytic continuation\index{analytic continuation} of $\psi(z)$ is given by
\begin{equation}\label{Eqn:DilogIntegral}
\psi(z) = -\int_0^z \frac{\log(1-u)}{u}\,du \quad \mbox{ for } z\in {\mathbb{C}}-[1,\infty).
\end{equation}
For $0 < u < \pi$, consider $\psi(e^{2iu}) - \psi(1)$. Although the integral formula \refeqn{DilogIntegral} above is not defined at $z=1$, the summation of \refeqn{Dilogarithm} is defined and continuous at $z=1$ (in fact, $\psi(1)=\pi^2/6$), so we may write
\[
\psi(e^{2iu})-\psi(1) = -\int_1^{\displaystyle{e^{2iu}}} \frac{\log(1-w)}{w}\,dw.
\]
Substitute $w=e^{2i\theta}$ into this expression to obtain
\begin{align*}
\psi(e^{2iu})-\psi(1) &= - \int_{\theta=0}^u \log(1-e^{2i\theta})\,(2i)\,d\theta \\
&= - \int_0^u\log\left(-2ie^{i\theta}\left(\frac{e^{i\theta}-e^{-i\theta}}{2i}\right) \right) (2i)\,d\theta \\
&= - \int_0^u 2i(\log(-i)+\log(e^{i\theta}) + \log(2\sin\theta))\,d\theta \\
&= - \int_0^u (\pi - 2\theta + 2i\log(2\sin\theta))\,d\theta.\\
\end{align*}
Take the imaginary parts of both sides of the above equation. Note $\psi(1)$ is real, hence
\[
\Im(\psi(e^{2iu})-\psi(1))=\Im(\psi(e^{2iu})) = \Im\left(\sum_{n=1}^\infty \frac{e^{2inu}}{n^2}\right) = \sum_{n=1}^\infty \frac{\sin(2nu)}{n^2}.\]
On the other side, this equals
\[ \Im(\psi(e^{2iu})-\psi(1)) = 2\int_0^u-\log(2\sin\theta)\,d\theta = 2\Lambda(u).\]
Thus for $0\leq u \leq \pi$, we have the uniformly convergent Fourier series for $\Lambda(u)$ given by
\begin{equation}\label{Eqn:LobachevskiDilog}
\Lambda(u) = {\frac{1}{2}}\sum_{n=1}^\infty \frac{\sin(2nu)}{n^2} \quad \mbox{for } 0\leq u\leq\pi.
\end{equation}
This shows $\Lambda(u)$ is well-defined and continuous for $0\leq u\leq \pi$. It also shows that $\Lambda(u)$ can be defined on $-\pi\leq u\leq 0$, and it is an odd function on this range. Finally, it shows that $\Lambda(0)=\Lambda(\pi) =0$.
Notice now that the derivative $d\Lambda(\theta)/d\theta = -\log|2\sin\theta|$ is periodic of period $\pi$. Then for $\theta>\pi$,
\begin{align*}
\Lambda(\theta) & = \int_0^\theta \Lambda'(u)\,du = \int_0^\pi\Lambda'(u)\,du + \int_\pi^\theta \Lambda'(u)\,du \\
& = \Lambda(\pi) + \int_0^{\theta-\pi}\Lambda'(u)\,du = \Lambda(\theta-\pi),
\end{align*}
by the periodicity of $\Lambda'$, and the fact that $\Lambda(\pi)=0$. This shows that $\Lambda$ is well-defined and continuous for $\theta\geq 0$; a similar result implies it is well-defined and continuous for $\theta\leq 0$, and it will be odd everywhere.
It only remains to show the last item of the lemma. To do so, begin with the identity
\[
2\sin(2\theta) = 4\sin\theta\cos\theta = (2\sin\theta)(2\sin(\theta+\pi/2)).
\]
Then note that
\begin{align*}
\Lambda(2\theta) &= \int_0^{2\theta} - \log|2\sin u| \, du \\
&= 2\int_0^\theta - \log|2\sin(2w)|\,dw \quad (\mbox{letting } w=u/2) \\
&= 2\int_0^\theta -\log|2\sin w|\, dw + 2\int_0^\theta-\log|2\sin(w+\pi/2)|\,dw \\
&= 2\Lambda(\theta) + 2\int_{\pi/2}^{\theta+\pi/2} -\log|2\sin v|\, dv \\
&= 2\Lambda(\theta) +2\Lambda(\theta+\pi/2) - 2\Lambda(\pi/2).
\end{align*}
Finally, note that if we substitute $u=\pi/2$ into \refeqn{LobachevskiDilog}, we obtain $\Lambda(\pi/2)=0$. This finishes the proof of the lemma.
\end{proof}
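The Fourier series \refeqn{LobachevskiDilog} also gives a convenient way to evaluate $\Lambda$ numerically and to check the identities above. The following is a small self-contained Python sketch; the truncation level $N$ is an arbitrary choice of ours.
\begin{verbatim}
import math

def lob_series(u, N=20000):
    """Truncated Fourier series (1/2) sum_{n<=N} sin(2nu)/n^2, valid for 0 <= u <= pi."""
    return 0.5 * sum(math.sin(2 * n * u) / n ** 2 for n in range(1, N + 1))

theta = 0.7
print(lob_series(math.pi / 2))   # approximately 0, as used in the proof
print(lob_series(2 * theta),
      2 * lob_series(theta) + 2 * lob_series(theta + math.pi / 2))
# the last two values agree, illustrating Lambda(2t) = 2 Lambda(t) + 2 Lambda(t + pi/2)
\end{verbatim}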
\begin{remark}\label{Rem:Kubert}
Item~\eqref{Itm:Kubert2} of \reflem{Lobachevsky} is a special case of more general identities known as the \emph{Kubert identities}\index{Kubert identities}, which have the following form. For any positive integer $n$,
\[ \Lambda(n\theta) = \sum_{k=0}^{n-1} n\Lambda(\theta + k\pi/n). \]
You are asked to prove these identities in the exercises.
\end{remark}
To prove \refthm{VolTet}, we will subdivide our ideal tetrahedron into six 3-dimensional simplices, each simplex with some finite and some infinite vertices. Such a simplex will be described by a region in ${\mathbb{H}}^3$. To obtain the volume, we integrate the hyperbolic volume form $d\operatorname{vol} = dx\,dy\,dz/z^3$ over the region describing the simplex, and then sum the six results.
More carefully, given an ideal tetrahedron in ${\mathbb{H}}^3$, we have been viewing the tetrahedron as having vertices $0$, $1$, $\infty$, and $z$. The three points $0$, $1$, and $z$ determine a Euclidean circle on ${\mathbb{C}}$, which is the boundary of a Euclidean hemisphere, giving a hyperbolic plane in ${\mathbb{H}}^3$. To this picture, apply a hyperbolic isometry that takes the circle on ${\mathbb{C}}$ through $0$, $1$, $z$ to the unit circle in ${\mathbb{C}}$, taking $0$, $1$, $z$ to some points $p$, $q$, $r$ on $S^1\subset {\mathbb{C}}$.
Now, drop a perpendicular from $\infty$ to the hemisphere; this will be a vertical ray from $(0,0,1)\in {\mathbb{H}}^3$ to $\infty$. There will be two cases to consider: the case that the point $(0,0)\in {\mathbb{C}}$ is interior to the triangle determined by $p$, $q$, $r$, and the case that the point $(0,0)$ is exterior to that triangle. The cases are shown in \reffig{TrianglesVolume}.
\begin{figure}
\includegraphics{Figures/Ch09_AngleStruct/F9-01-SubTet.eps}
\caption{Left is a tetrahedron for which the point $(0,0)$ lies in the interior of the triangle on ${\mathbb{C}}$, right is one for which it is exterior. Both show subdivisions into six triangles. }
\label{Fig:TrianglesVolume}
\end{figure}
Consider first the case that the point $(0,0)$ is interior to the triangle determined by $p$, $q$, and $r$. Then the ray from $(0,0,1)$ to $\infty$ lies interior to the tetrahedron. Now, on the hemisphere whose boundary is the unit circle, draw perpendicular arcs from $(0,0,1)$ to each edge of the tetrahedron lying on that hemisphere. Also draw arcs from $(0,0,1)$ to the vertices of the tetrahedron, as shown in \reffig{TrianglesVolume}. Now cone to $\infty$. This divides the original tetrahedron up into six simplices. Similarly, if $(0,0)$ is not interior to the triangle determined by $p$, $q$, and $r$, it still makes sense to draw the same arcs and rays, as in \reffig{TrianglesVolume}, right. However, in this case the six simplices obtained overlap each other. In either case, we have the following result.
\begin{lemma}\label{Lem:SixSimplices}
Each of the six simplices obtained as above has the following properties, illustrated in \reffig{SixSimplices}.
\begin{enumerate}
\item It has two finite vertices and two ideal vertices.
\item Three of its dihedral angles are $\pi/2$, the other dihedral angles are $\zeta$, $\zeta$, and $\pi/2-\zeta$ for some $\zeta\in(0,\pi/2)$.
\end{enumerate}
\end{lemma}
\begin{proof}
Note that by construction, the two ideal vertices are at $\infty$ and one of $p$, $q$, $r$, i.e.\ one of the vertices of the original ideal tetrahedron. The other vertices are at $(0,0,1)$, and some point on the unit hemisphere where an arc from $(0,0,1)$ meets an edge of the original tetrahedron in a right angle.
Consider the dihedral angles of the faces meeting infinity. Each of these is a cone (to $\infty$) over an edge on the unit hemisphere. The dihedral angles agree with the dihedral angles of the vertical projection of the simplex to ${\mathbb{C}}$, which is the triangle $T$ shown in \reffig{TrianglesVolume}; these angles are $\pi/2$, $\zeta$, and $\pi/2-\zeta$ for some $\zeta\in (0,\pi/2)$. The fourth face of the tetrahedron lies on the hemisphere. It meets both vertical faces through $(0,0,1)$ in right angles. The final face is a subset of a vertical plane whose boundary on ${\mathbb{C}}$ is a line $L$ containing a side of the projection triangle $T$. The angle at which this vertical plane meets the unit hemisphere equals the angle between the line $L$ and the tangent to the unit circle at their point of intersection. Notice this angle is complementary to $\pi/2-\zeta$, hence is $\zeta$.
\end{proof}
A simplex with the form of \reflem{SixSimplices} is called an \emph{orthoscheme},\index{orthoscheme} named by Schl\"afli in the 1850s \cite{SchlafliI, SchlafliII}. Around that time, he computed volumes of orthoschemes.
\begin{figure}
\import{Figures/Ch09_AngleStruct/}{F9-02-Simplx.eps_tex}
\caption{One of the six simplices obtained from subdividing an ideal tetrahedron}
\label{Fig:SixSimplices}
\end{figure}
\begin{lemma}\label{Lem:VolSubSimplex}
Let $S(\zeta)$ denote a simplex obtained as above, with properties of \reflem{SixSimplices}. That is, $S(\zeta)$ has two finite vertices and two ideal vertices, three dihedral angles of $\pi/2$, and other dihedral angles $\zeta$, $\zeta$, and $\pi/2-\zeta$ for $\zeta\in(0, \pi/2)$. Then the volume of $S(\zeta)$ is
\[ \operatorname{vol}(S(\zeta)) = \frac{1}{2}\Lambda(\zeta). \]
\end{lemma}
\begin{proof}
The proof is a computation.
Apply an isometry to ${\mathbb{H}}^3$ so that one ideal vertex of $S(\zeta)$ lies at $\infty$, the other on the unit circle, with one of the finite vertices at $(0,0,1)$; this is the same position of the simplex in the proof of \reflem{SixSimplices} above. When we project vertically to ${\mathbb{C}}$, we obtain a triangle $T$ with one vertex at $0$, one on the unit circle, and the last some $v\in{\mathbb{C}}$. The angle at $v$ is $\pi/2$, and the other two angles are $\zeta$ and $\pi/2-\zeta$. By applying a M\"obius transformation\index{M\"obius transformation} that rotates and reflects (but does not affect volume), we may assume $v$ is the point $\cos(\zeta) \in {\mathbb{R}}\subset{\mathbb{C}}$, and the third point, on the unit circle, is the point $\cos(\zeta) + i\, \sin(\zeta)$.
Now the triangle $T$ is described by the region
\[ 0\leq x \leq \cos(\zeta) \quad \mbox{and} \quad 0\leq y\leq x\tan(\zeta).\]
Then $\operatorname{vol}(S(\zeta))$ is given by
\[ \operatorname{vol}(S(\zeta)) = \int_T\int_{z\geq\sqrt{1-x^2-y^2}} d\operatorname{vol}
= \int_{0}^{\cos(\zeta)}\int_0^{x\tan(\zeta)}\int_{\sqrt{1-x^2-y^2}}^{\infty} \frac{dz\,dy\,dx}{z^3}. \]
Integrating with respect to $z$, we obtain
\[
\operatorname{vol}(S(\zeta)) = \int_0^{\cos(\zeta)}\int_0^{x\tan(\zeta)}\frac{dy\,dx}{2(1-x^2-y^2)},
\]
which we rewrite
\[
\operatorname{vol}(S(\zeta)) = \int_0^{\cos(\zeta)} \int_0^{x\tan(\zeta)} \frac{dy\,dx}{2((\sqrt{1-x^2})^2-y^2)},
\]
and integrate with respect to $y$:
\begin{align*}
\operatorname{vol}(S(\zeta))
& = \int_0^{\cos(\zeta)} \frac{1}{4\sqrt{1-x^2}} \log \left(\frac{\sqrt{1-x^2}+x\tan\zeta}{\sqrt{1-x^2}-x\tan\zeta}\right)\,dx \\
& = \int_0^{\cos(\zeta)} \frac{1}{4\sqrt{1-x^2}}\log \left(\frac{\sqrt{1-x^2}\cos(\zeta) + x\sin(\zeta)}{\sqrt{1-x^2}\cos(\zeta)-x\sin(\zeta)}\right) \, dx.
\end{align*}
Using the substitution $x=\cos(\theta)$, the integral becomes
\begin{align*}
\operatorname{vol}(S(\zeta)) &= \int_{\pi/2}^{\zeta} \frac{1}{4}\log\left(\frac{\sin\theta\cos\zeta+\cos\theta\sin\zeta}{\sin\theta\cos\zeta-\cos\theta\sin\zeta}\right)\,(-d\theta) \\
&= -\frac{1}{4}\left( \int_{\pi/2}^{\zeta}\log\left(\frac{2\sin(\theta+\zeta)}{2\sin(\theta-\zeta)}\right)\,d\theta \right)\\
&= \frac{1}{4}\left( \int_{\pi/2}^{\zeta} -\log(2\sin(\theta+\zeta))\,d\theta - \int_{\pi/2}^{\zeta} -\log(2\sin(\theta-\zeta))\,d\theta \right) \\
&= \frac{1}{4}\left( \int_{\pi/2+\zeta}^{2\zeta} -\log(2\sin(u))\,du - \int_{\pi/2-\zeta}^0-\log(2\sin(u))\,du \right) \\
&= \frac{1}{4}(\Lambda(2\zeta)-\Lambda(\pi/2+\zeta)+\Lambda(\pi/2-\zeta)).
\end{align*}
To finish, we use \reflem{Lobachevsky}. Since $\Lambda(\theta)$ is periodic of period $\pi$, note that $\Lambda(\pi/2-\zeta) = \Lambda(-\pi/2-\zeta)$. Since $\Lambda$ is an odd function, $\Lambda(-\pi/2-\zeta)=-\Lambda(\pi/2+\zeta)$. Finally, since $\Lambda(2\zeta)= 2\Lambda(\zeta)+2\Lambda(\zeta+\pi/2)$, the above becomes
\[ \operatorname{vol}(S(\zeta)) = \frac{1}{4}( 2\Lambda(\zeta) +2\Lambda(\zeta+\pi/2) - 2\Lambda(\pi/2+\zeta)) = \frac{1}{2}\Lambda(\zeta).\qedhere
\]
\end{proof}
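The final simplification can also be checked numerically. The sketch below evaluates $\Lambda$ by quadrature, as in the earlier numerical example (the name \texttt{lob} is again ours), and compares the two sides for a few values of $\zeta\in(0,\pi/2)$.
\begin{verbatim}
import math
from scipy.integrate import quad

def lob(t):
    """Lobachevsky function, by numerical integration."""
    return quad(lambda u: -math.log(abs(2 * math.sin(u))), 0, t)[0]

for zeta in (0.3, math.pi / 4, 1.2):
    lhs = 0.25 * (lob(2 * zeta) - lob(math.pi / 2 + zeta) + lob(math.pi / 2 - zeta))
    print(lhs, 0.5 * lob(zeta))   # the two columns agree
\end{verbatim}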
\begin{proof}[Proof of \refthm{VolTet}]
For an ideal tetrahedron with dihedral angles $\alpha$, $\beta$, and $\gamma$, place the tetrahedron in ${\mathbb{H}}^3$ with vertices at $\infty$, and at $p$, $q$, $r$ all on the unit circle in ${\mathbb{C}}$. As above, drop a perpendicular ray to the unit hemisphere.
\underline{Case 1.} Suppose first that the ray lies in the interior of the ideal tetrahedron. Then subdivide the tetrahedron into six simplices as before. Each of the simplices has the properties of \reflem{SixSimplices}, and is determined by some $\zeta\in(0,\pi/2)$. By \reflem{VolSubSimplex}, its volume is determined by $\zeta$ as well, so it remains to calculate $\zeta$ for each of the six simplices making up the ideal tetrahedron. Project vertically to the complex plane ${\mathbb{C}}$; the angles determining the simplex can then be easily computed using Euclidean geometry. In particular, there are two with angle $\alpha$, two with angle $\beta$, and two with angle $\gamma$. See the left of \reffig{SubSimplices}.
\begin{figure}
\import{Figures/Ch09_AngleStruct/}{F9-03-SubSpx.eps_tex}
\caption{Left: angles of subsimplicies when perpendicular ray lies interior to the tetrahedron. Right: angles when it is exterior}
\label{Fig:SubSimplices}
\end{figure}
Then the volume of the tetrahedron $\Delta(\alpha, \beta,\gamma)$ is
\begin{align*}
\operatorname{vol}(\Delta(\alpha, \beta, \gamma)) &= 2\operatorname{vol}(S(\alpha)) + 2\operatorname{vol}(S(\beta)) + 2\operatorname{vol}(S(\gamma)) \\
&= \Lambda(\alpha)+\Lambda(\beta)+\Lambda(\gamma).
\end{align*}
\underline{Case 2.} Now suppose that the ray from $\infty$ to the point $(0,0,1)$ lies outside of the ideal tetrahedron. We may still draw perpendicular lines from $(0,0,1)$ to the edges of the ideal tetrahedron on the unit hemisphere, and lines from $(0,0,1)$ to vertices of the ideal tetrahedron; the right of \reffig{SubSimplices} shows the projection to ${\mathbb{C}}$ and the corresponding angles. Note that we may still cone to $\infty$, obtaining six simplices with the properties of \reflem{SixSimplices}, only now they overlap. However, by adding and subtracting volumes of overlapping simplices, we still will obtain the volume of the ideal tetrahedron. In particular, we have the following.
\begin{align*}
\operatorname{vol}(\Delta(\alpha, \beta, \gamma)) & = 2\operatorname{vol}(S(\gamma)) + 2\operatorname{vol}(S(\beta)) - 2\operatorname{vol}(S(\pi-\alpha)) \\
&= \Lambda(\gamma) + \Lambda(\beta) - \Lambda(\pi-\alpha).
\end{align*}
Since $\Lambda$ is an odd function and has period $\pi$, $-\Lambda(\pi-\alpha) = \Lambda(\alpha)$. Hence $\operatorname{vol}(\Delta(\alpha,\beta,\gamma)) = \Lambda(\alpha)+\Lambda(\beta)+\Lambda(\gamma)$ in this case as well.
\end{proof}
The formula for volume of a tetrahedron has the following useful consequences.
\begin{theorem}\label{Thm:VolConcaveDown}
Let $\mathcal{A}$ be the set of possible angles on a tetrahedron:
\[ \mathcal{A} = \{(\alpha, \beta, \gamma)\in (0,\pi)^3 \mid \alpha+\beta+\gamma=\pi\}.\]
Then the function $\operatorname{vol}\from \mathcal{A}\to {\mathbb{R}}$ given by \[ \operatorname{vol}(\alpha,\beta,\gamma)=\Lambda(\alpha)+\Lambda(\beta) +\Lambda(\gamma)\]
is strictly concave down on $\mathcal{A}$. Moreover, we can compute its first two derivatives. For $a=(a_1, a_2, a_3)\in\mathcal{A}$ a point and $w=(w_1,w_2,w_3) \in T_a\mathcal{A}$ a nonzero tangent vector, the first two derivatives of $\operatorname{vol}$ in the direction of $w$ satisfy
\[ \frac{\partial\operatorname{vol}}{\partial w} = \sum_{i=1}^3 -w_i\log\sin a_i, \quad
\frac{\partial^2\operatorname{vol}}{\partial w^2} < 0.\]
\end{theorem}
\begin{proof}
First, note that since $w$ is a tangent vector to $\mathcal{A}$, and the sum of the three coordinates of each point in $\mathcal{A}$ is $\pi$, it follows that $w_1+w_2+w_3=0$.
Next, by \refthm{VolTet}, the directional derivative of $\operatorname{vol}$ at $a$ in the direction of $w$ is given by
\begin{align*}
\frac{\partial \operatorname{vol}}{\partial w} &= \sum_{i=1}^3 -w_i\log|2\sin a_i| \\
& = \sum_{i=1}^3 w_i(-\log 2) + \sum_{i=1}^3 -w_i\log|\sin a_i| \\
& = 0 + \sum_{i=1}^3 -w_i\log \sin a_i.
\end{align*}
The last line holds since $w_1+w_2+w_3=0$ and since $a_i\in(0,\pi)$, hence $\sin a_i >0$.
For the second derivative, we know $a_1+a_2+a_3=\pi$, so at least two of $a_1, a_2, a_3$ are strictly less than $\pi/2$. Without loss of generality, say $a_1$ and $a_2$ are less than $\pi/2$.
Then the second derivative is
\[ \frac{\partial^2\operatorname{vol}}{\partial w^2} = \sum_{i=1}^3 -w_i^2\cot a_i. \]
Since $a_3 = \pi-a_1-a_2$ and $w_3=-w_1-w_2$, we may write
\[ w_3^2\cot a_3 = (w_1+w_2)^2\cot(\pi-a_1-a_2) = -(w_1+w_2)^2\frac{\cot a_1\cot a_2 -1}{\cot a_1 + \cot a_2}, \]
where the last equality is an exercise in trig identities.
Then we obtain
\begin{align*}
-\frac{\partial^2\operatorname{vol}}{\partial w^2} & = w_1^2\cot a_1 +w_2^2\cot a_2 -(w_1+w_2)^2\frac{\cot a_1 \cot a_2 -1}{\cot a_1 + \cot a_2} \\
& = \frac{(w_1+w_2)^2 + (w_1\cot a_1 - w_2\cot a_2)^2}{\cot a_1 + \cot a_2}.
\end{align*}
The denominator of the last fraction is positive, because $a_1, a_2\in(0,\pi/2)$. The numerator is a sum of squares, hence at least zero. In fact, if the numerator equals zero, then $w_1=-w_2$ and $w_1\cot a_1 = w_2\cot a_2$. If $w_1 = 0$, this forces $w_2=0$ and hence $w_3=0$, contradicting the assumption that $w$ is nonzero; if $w_1\neq 0$, it forces $\cot a_1 = -\cot a_2$, which is impossible because $a_1,a_2\in(0,\pi/2)$. Thus numerator and denominator are strictly positive, so $\partial^2\operatorname{vol}/\partial w^2$ is strictly negative, and hence $\operatorname{vol}$ is strictly concave down.
\end{proof}
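The ``exercise in trig identities'' used in the proof above can also be checked numerically. The following Python sketch evaluates both sides of that identity, together with the resulting expression for $-\partial^2\operatorname{vol}/\partial w^2$, at randomly chosen points with $a_1, a_2\in(0,\pi/2)$; the variable names are ours, and the check is of course not a proof.
\begin{verbatim}
import math, random

random.seed(0)
for _ in range(5):
    # random point with a1, a2 in (0, pi/2), and a random tangent direction (w1, w2, -w1-w2)
    a1, a2 = random.uniform(0.1, 1.4), random.uniform(0.1, 1.4)
    a3 = math.pi - a1 - a2
    w1, w2 = random.uniform(-1, 1), random.uniform(-1, 1)
    w3 = -w1 - w2

    lhs = w1**2 / math.tan(a1) + w2**2 / math.tan(a2) + w3**2 / math.tan(a3)
    rhs = ((w1 + w2)**2 + (w1 / math.tan(a1) - w2 / math.tan(a2))**2) \
          / (1 / math.tan(a1) + 1 / math.tan(a2))
    # lhs equals -d^2 vol / dw^2; it should match rhs to rounding error and be positive
    print(lhs - rhs, lhs > 0)
\end{verbatim}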
\begin{theorem}\label{Thm:MaxVolTet}
The regular ideal tetrahedron, with dihedral angles $\alpha=\beta=\gamma=\pi/3$, maximizes volume over all ideal tetrahedra.
\end{theorem}
\begin{proof}
Because $\operatorname{vol}$ extends continuously to the compact set $\overline{\mathcal{A}} = \{(\alpha, \beta, \gamma)\in [0,\pi]^3 \mid \alpha+\beta+\gamma=\pi\}$, it attains a maximum there. First we consider the boundary of $\overline{\mathcal{A}}$, where at least one angle is $0$ or $\pi$, and we show the maximum cannot occur there. If any angle is $\pi$, then $\alpha+\beta+\gamma=\pi$ implies the other two angles are $0$. Thus to show the maximum does not occur on the boundary, it suffices to show the maximum does not occur when one of the angles is zero. So suppose $\alpha=0$.
Since $\Lambda(0)=\Lambda(\pi)=0$ by \refeqn{LobachevskiDilog}, and since $\Lambda(\beta)+\Lambda(\pi-\beta) = \Lambda(\beta)+\Lambda(-\beta)=0$ by \reflem{Lobachevsky}, the volume in this case will be $0$. So the maximum does not occur on the boundary.
Thus we seek a maximum in the interior. We maximize
$\operatorname{vol}(\alpha,\beta, \gamma) = \Lambda(\alpha)+\Lambda(\beta)+\Lambda(\gamma)$ subject to the constraint $\pi=\alpha+\beta+\gamma =: f(\alpha,\beta,\gamma)$. The theory of Lagrange multipliers tells us that at the maximum, there is a scalar $\lambda$ such that
\[ \nabla \operatorname{vol} = \lambda \nabla f, \quad \mbox{or}\]
\[ -\log(2\sin\alpha) = -\log(2\sin\beta) = -\log(2\sin\gamma) = \lambda. \]
This will be satisfied when $\sin\alpha=\sin\beta=\sin\gamma$. Since $\alpha, \beta, \gamma \in (0,\pi)$, and $\alpha+\beta+\gamma=\pi$, it follows that $\alpha=\beta=\gamma=\pi/3$, and the tetrahedron is regular.
\end{proof}
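\Refthm{MaxVolTet} can also be illustrated numerically: a crude grid search over the angle space $\mathcal{A}$, using the truncated series for $\Lambda$ as in the earlier sketch, locates the maximum at $\alpha=\beta=\gamma=\pi/3$. This is only an illustration, not part of the proof, and the code below is our own sketch.
\begin{verbatim}
import math

def lob(theta, terms=20000):
    # truncated Fourier series for the Lobachevsky function (sanity-check accuracy only)
    return 0.5 * sum(math.sin(2 * n * theta) / n**2 for n in range(1, terms + 1))

grid = [k * math.pi / 12 for k in range(1, 12)]
best = max(((lob(a) + lob(b) + lob(math.pi - a - b), a, b)
            for a in grid for b in grid if a + b < math.pi),
           key=lambda t: t[0])
print(best)   # maximum about 1.0149, attained at a = b = pi/3 (hence gamma = pi/3)
\end{verbatim}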
\section{Angle structures and the volume functional}
Note that in \refthm{VolTet}, we showed that the volume of an ideal tetrahedron can be computed given only its dihedral angles. A dihedral angle can be obtained by taking the imaginary part of the log of a tetrahedron's edge invariant. Thus the imaginary parts alone of the edge invariants allow us to assign a volume to the structure. These are exactly the angles of an angle structure.\index{angle structure}
Recall from \refdef{AngleStructures} that we defined an angle structure\index{angle structure} on an ideal triangulation $\mathcal{T}$ of a manifold $M$ to be a collection of (interior) dihedral angles satisfying:
\begin{enumerate}
\item[(0)] Opposite edges of a tetrahedron have the same angle.
\item[(1)] Dihedral angles lie in $(0,\pi)$.
\item[(2)] The sum of angles around any ideal vertex of any tetrahedron is $\pi$.
\item[(3)] The sum of angles around any edge class of $M$ is $2\pi$.
\end{enumerate}
The set of all angle structures\index{angle structure} for a triangulation $\mathcal{T}$ is denoted by $\mathcal{A}(\mathcal{T})$. For $M$ an orientable 3-manifold with boundary consisting of tori, and $\mathcal{T}$ an ideal triangulation of $M$, we will study the set of angle structures $\mathcal{A}(\mathcal{T})$.
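Conditions (0)--(3) are straightforward to verify for a candidate assignment of angles. The following Python sketch is one possible way to organize such a check; the data format (one triple of angles per tetrahedron, plus, for each edge class of $M$, the list of tetrahedron corners glued around it) is our own bookkeeping choice and must be filled in from the triangulation at hand.
\begin{verbatim}
import math

def is_angle_structure(angles, edge_classes, tol=1e-9):
    # angles: one triple per tetrahedron, one entry per pair of opposite edges,
    #         so condition (0) is built into the data format.
    # edge_classes: for each edge of M, a list of pairs (tetrahedron index, corner index 0/1/2)
    #         recording which dihedral angles are glued around that edge.
    flat = [x for triple in angles for x in triple]
    if not all(0 < x < math.pi for x in flat):                          # condition (1)
        return False
    if not all(abs(sum(t) - math.pi) < tol for t in angles):            # condition (2)
        return False
    return all(abs(sum(angles[i][j] for i, j in e) - 2 * math.pi) < tol
               for e in edge_classes)                                    # condition (3)
\end{verbatim}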
\begin{proposition}\label{Prop:AngleStructsPoly}
Let $\mathcal{T}$ be an ideal triangulation of a 3-manifold $M$ consisting of $n$ tetrahedra, and as usual denote the set of angle structures\index{angle structure} by $\mathcal{A}(\mathcal{T})$. If $\mathcal{A}(\mathcal{T})$ is nonempty, then it is a convex, finite-sided, bounded polytope in $(0,\pi)^{3n}\subset {\mathbb{R}}^{3n}$.
\end{proposition}
\begin{proof}
For each tetrahedron of $\mathcal{T}$, an angle structure\index{angle structure} selects three dihedral angles lying in $(0,\pi)$. Thus $\mathcal{A}(\mathcal{T})$ is a subset of $(0,\pi)^{3n}$. The equations coming from conditions~(2) and~(3) are linear equations whose solution set is an affine subspace of ${\mathbb{R}}^{3n}$. When we intersect the solution space with the cube $(0,\pi)^{3n}$, we obtain a bounded, convex, finite-sided polytope.
\end{proof}
There is no guarantee that $\mathcal{A}(\mathcal{T})$ is nonempty. However, \refprop{AngleStructsPoly} implies that if it is nonempty, then we may view a point of $\mathcal{A}(\mathcal{T})$ as a point in $(0,\pi)^{3n}$. We write $a\in\mathcal{A}(\mathcal{T})$ as $a=(a_1, \dots, a_{3n})$.
\begin{definition}\label{Def:VolFunctional}
The \emph{volume functional}\index{volume functional}
$\mathcal{V}\from \mathcal{A}(\mathcal{T})\to{\mathbb{R}}$ is defined by
\[ \mathcal{V}(a_1, \dots, a_{3n}) = \sum_{i=1}^{3n} \Lambda(a_i). \]
\end{definition}
Thus $\mathcal{V}(a)$ is the sum of volumes of hyperbolic tetrahedra associated with the angle structure\index{angle structure} $a$.
A reason angle structures are so useful comes from the following two theorems.
\begin{theorem}[Volume and angle structures]\label{Thm:VolAngleStructs}
Let $M$ be an orientable 3-manifold with boundary consisting of tori, with ideal triangulation $\mathcal{T}$. If a point $A\in\mathcal{A}(\mathcal{T})$ is a critical point for the volume functional\index{volume functional} $\mathcal{V}$, then the ideal hyperbolic tetrahedra obtained from the angle structure\index{angle structure} $A$ give $M$ a complete hyperbolic structure.
\end{theorem}
The converse is also true:
\begin{theorem}\label{Thm:VolAngleStructsConverse}
If $M$ is a finite-volume hyperbolic 3-manifold with boundary consisting of tori such that $M$ admits a positively oriented hyperbolic ideal triangulation\index{positively oriented tetrahedron}\index{tetrahedron!positively oriented} $\mathcal{T}$, then the angle structure\index{angle structure} $A\in\mathcal{A}(\mathcal{T})$ giving the angles of $\mathcal{T}$ for the complete hyperbolic structure is the unique global maximum of the volume functional\index{volume functional} $\mathcal{V}$ on $\mathcal{A}(\mathcal{T})$.
\end{theorem}
The two theorems are attributed to Casson and Rivin, and follow from proofs in \cite{Rivin:EuclidStructs}. The first direct proof of the results is written in Chan's honors thesis \cite{Chan:Honours}, using work of Neumann and Zagier \cite{NeumannZagier}. A very nice self-contained exposition and proof of both theorems is given in \cite{FuterGueritaud:Survey}. We will follow the ideas of Futer and Gu{\'e}ritaud to show \refthm{VolAngleStructs} in \refsec{LeadingTrailing}.
To prove the converse, we will follow a simple proof of Chan using the Schl\"afli formula for the variation of volumes of ideal tetrahedra. Chan credits his proof to unpublished ideas of Schlenker.
\section{Leading--trailing deformations}\label{Sec:LeadingTrailing}
\begin{lemma}\label{Lem:DerivativesVol}
Let $M$ be an orientable 3-manifold with boundary consisting of tori, with ideal triangulation $\mathcal{T}$ consisting of $n$ tetrahedra. Then the volume functional\index{volume functional} $\mathcal{V}\from \mathcal{A}(\mathcal{T})\to {\mathbb{R}}$ is strictly concave down on $\mathcal{A}(\mathcal{T})$. For $a=(a_1, \dots, a_{3n})\in\mathcal{A}(\mathcal{T})$ and $w=(w_1, \dots, w_{3n})\in T_a\mathcal{A}(\mathcal{T})$ a non-zero tangent vector, the first two directional derivatives of $\mathcal{V}$ satisfy
\[ \frac{\partial \mathcal{V}}{\partial w} = \sum_{i=1}^{3n} -w_i\log\sin a_i \quad \mbox{and} \quad \frac{\partial^2 \mathcal{V}}{\partial w^2} < 0. \]
\end{lemma}
\begin{proof}
Because the volume functional\index{volume functional} $\mathcal{V}$ is the sum of volumes of ideal tetrahedra, the formulas for derivatives follow by linearity from \refthm{VolConcaveDown}. Because the second derivative is strictly negative, the volume functional is strictly concave down.
\end{proof}
We will need to take derivatives in carefully specified directions. To that end, we now define a vector $w=(w_1, \dots, w_{3n})\in {\mathbb{R}}^{3n}$ and show that $w$ lies in $T_a\mathcal{A}(\mathcal{T})$. Again the ideas follow from \cite{FuterGueritaud:Survey}.
\begin{definition}\label{Def:LeadingTrailing}
Let $C$ be a cusp of $M$ with a cusp triangulation corresponding to the ideal tetrahedra of $\mathcal{T}$. Let $\zeta$ be an oriented closed curve on $C$, isotoped to run monotonically through the cusp triangulation, as in \refdef{CompletenessEquations}. Let $\zeta_1, \dots, \zeta_k$ be the oriented segments of $\zeta$ in distinct triangles. For the segment $\zeta_i$ in triangle $t_i$, define the \emph{leading corner} of $t_i$ to be the corner of the triangle that is opposite the edge where $\zeta_i$ enters $t_i$, and define the \emph{trailing corner} to be the corner opposite the edge where $\zeta_i$ exits.
Each corner of the triangle $t_i$ is given a dihedral angle $a_j$ in an angle structure,\index{angle structure} thus corresponds to a coordinate of $\mathcal{A}(\mathcal{T}) \subset {\mathbb{R}}^{3n}$. Similarly for any $a\in \mathcal{A}(\mathcal{T})$, each corner of $t_i$ corresponds to a coordinate of the tangent space $T_a\mathcal{A}(\mathcal{T}) \subset {\mathbb{R}}^{3n}$.
We define a vector $w(\zeta_i) \in {\mathbb{R}}^{3n}$ by setting the coordinate corresponding to the leading corner of $t_i$ equal to $+1$, and the coordinate corresponding to the trailing corner of $t_i$ equal to $-1$. Set all other coordinates equal to zero. The \emph{leading--trailing deformation}\index{leading--trailing deformation} corresponding to $\zeta$ is defined to be the vector $w(\zeta)=\sum_i w(\zeta_i)$.
\end{definition}
An example is shown in \reffig{LeadingTrailing}.
\begin{figure}
\import{Figures/Ch09_AngleStruct/}{F9-04-LeadTr.eps_tex}
\caption{For the oriented curve $\zeta$ shown, leading corners are marked with $+1$ and trailing corners with $-1$}
\label{Fig:LeadingTrailing}
\end{figure}
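For computations it is convenient to assemble $w(\zeta)$ directly from the list of segments. The sketch below does this in Python; the input format (for each segment, the indices of the angle coordinates at its leading and trailing corners) is our own convention, and the example data at the end is hypothetical.
\begin{verbatim}
def leading_trailing_vector(segments, num_coords):
    # segments: list of pairs (leading, trailing), each an index in {0, ..., num_coords-1}
    # naming the angle coordinate at that corner of the cusp triangle the segment crosses.
    w = [0] * num_coords
    for leading, trailing in segments:
        w[leading] += 1      # +1 at the leading corner of this triangle
        w[trailing] -= 1     # -1 at the trailing corner
    return w

# hypothetical example: a curve with three segments, 6 angle coordinates in total
print(leading_trailing_vector([(0, 2), (4, 1), (2, 5)], 6))   # [1, -1, 0, 0, 1, -1]
\end{verbatim}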
\begin{lemma}\label{Lem:TantoAngleStruct}
Let $\sigma$ be a curve encircling a vertex of the cusp triangulation on cusp $C$. Let $\mu$ be an embedded curve isotopic to a generator of the holonomy group\index{holonomy group} of the cusp torus. Then the corresponding leading--trailing deformation vectors $w(\sigma)$ and $w(\mu)$ both lie in the tangent space $T_a\mathcal{A}(\mathcal{T})$, for any $a\in\mathcal{A}(\mathcal{T})$.
\end{lemma}
\begin{proof}
The space $\mathcal{A}(\mathcal{T})$ is a submanifold of ${\mathbb{R}}^{3n}$ cut out by linear equations corresponding to (2) and (3) of \refdef{AngleStructures}, namely that angles at each ideal vertex of a tetrahedron sum to $\pi$, and angles about an edge of $M$ sum to $2\pi$. Let $f_i(a) = a_i+a_{i+1}+a_{i+2}$ be the sum of angles of the $i$-th tetrahedron, and let $g_e(a) = \sum a_{e_i}$ be the sum of angles about the edge $e$. So $\mathcal{A}(\mathcal{T}) \subset (0,\pi)^{3n}$ is the space cut out by all equations $f_i=\pi$ and $g_e = 2\pi$. Thus to see that $w(\zeta)$ is a tangent vector to $\mathcal{A}(\mathcal{T})$ at a point $a$, we need to show that the vector is orthogonal to the gradient vectors $\nabla f_i$ and $\nabla g_e$ at $a$, for all $i$ and all $e$.
Note $\nabla f_i$ is the vector $(0,\dots,0,1,1,1,0,\dots,0)$, with $0$s away from the $i^{\rm{th}}$ tetrahedron $t_i$ and $1$s in the three positions corresponding to the angles of $t_i$. There are four cusp triangles coming from this tetrahedron, corresponding to its four ideal vertices. Suppose $\zeta$ is a curve in the cusp triangulation of $C$. If no segment of $\zeta$ runs through a triangle of $t_i$, then $w(\zeta)$ has only $0$s in the position corresponding to the $1$s of $\nabla f_i$, hence $\nabla f_i \cdot w(\zeta)=0$ in this case.
So suppose that some segment $\zeta_j$ of $\zeta$ runs through a triangle of $t_i$. Then one corner of the triangle is a leading corner for $\zeta_j$, and one is a trailing corner, so $w(\zeta_j)$ has one $0$, one $+1$, and one $-1$ in the three positions corresponding to angles of $t_i$. Hence $\nabla f_i \cdot w(\zeta_j)=0$. By linearity, $\nabla f_i\cdot w(\zeta)=0$.
So it remains to show that for each edge $e$, $\nabla g_e \cdot w(\zeta) = 0$, where $\zeta$ is one of the curves $\sigma$ or $\mu$ in the hypothesis of the lemma. Note that $\nabla g_e$ is a vector $(\epsilon_1, \dots, \epsilon_{3n})$, where $\epsilon_j$ is one of the integers $0$, $1$, or $2$, counting the number of times a dihedral angle of a tetrahedron occurs in the gluing equation $g_e$. We will consider the segments of $\zeta$ one at a time. Note that any segment $\zeta_j$ of $\zeta$ contributes $0$, $+1$, and $-1$ to opposite edges of exactly one tetrahedron $t_j$, as illustrated in \reffig{LeadingTrailingVertex}, left.
\begin{figure}
\import{Figures/Ch09_AngleStruct/}{F9-05-LTTet.eps_tex}
\caption{Left: Effect of $w(\sigma_j)$ on the edges of a tetrahedron. Right:
If the lower edge is identified to $e$, then the contribution of $+1$ from $w(\zeta_j)$ to $e$ cancels with the $-1$ contribution from $w(\zeta_{j-1})$.}
\label{Fig:LeadingTrailingVertex}
\end{figure}
If the edge $e$ is not identified to any of the edges of the tetrahedron $t_j$, then $\nabla g_e\cdot w(\zeta_j)=0$. Similarly, if $e$ is identified only to one or both of the edges for which $w(\zeta_j)$ contributes a $0$, then although the corresponding coordinate of $\nabla g_e$ will be $1$ or $2$, the dot product $\nabla g_e \cdot w(\zeta_j)$ will still be $0$.
If $e$ is identified to one or both of the edges labeled with a $+1$ by $w(\zeta_j)$, then there will be a contribution of $+1$ or $+2$ (respectively) to $\nabla g_e \cdot w(\zeta_j)$ coming from these labels. We will show that in this case, there exists one or two (respectively) segments of $\zeta$ each contributing $-1$, so that the positive contributions cancel.
Suppose first that $e$ is identified to the non-vertical edge of $t_j$ labeled $+1$. Then consider the segment $\zeta_{j-1}$. This lies in a tetrahedron $t_{j-1}$ glued to $t_j$ along a face containing the edge identified to $e$. In the cusp triangulation, $\zeta_{j-1}$ exits its cusp triangle at this face. Thus the opposite corner of the cusp triangle is a trailing corner, and is assigned a $-1$. This trailing corner corresponds to an edge opposite $e$. So $e$ picks up a $-1$ from $w(\zeta_{j-1})$. See \reffig{LeadingTrailingVertex}, right. Hence the $+1$ contribution of $w(\zeta_j)$ is canceled in this case with this $-1$ from $w(\zeta_{j-1})$.
Now suppose $e$ is identified to the vertical edge of $t_j$ labeled $+1$. Let $\zeta_j, \zeta_{j+1}, \dots, \zeta_{j+r}$ be a maximal collection of segments in cusp triangles adjacent to $e$. Note if $\zeta=\sigma$ encircles a vertex, that vertex will not correspond to the endpoint of $e$. Then $r=1$, i.e.\ there are just two segments of $\zeta$ adjacent to $e$. If $\zeta=\mu$ is a generator of cusp homology, then $r\geq 1$. Because we are assuming $\zeta$ is embedded and meets each edge of the cusp triangulation at most once, we know $\zeta_j, \dots, \zeta_{j+r}$ do not encircle $e$ completely in this case. See \reffig{LeadingTrailingTan}.
\begin{figure}
\import{Figures/Ch09_AngleStruct/}{F9-06-LTMu.eps_tex}
\caption{If $w(\zeta_j)$ contributes $+1$ to a vertical edge meeting $e$, there is a maximal collection of segments running through cusp triangles adjacent to $e$.}
\label{Fig:LeadingTrailingTan}
\end{figure}
In both cases $\zeta=\sigma$ and $\zeta=\mu$, for segments $\zeta_{j+k}$ with $0<k<r$, note $w(\zeta_{j+k})$ contributes only $0$s to the edge $e$. Since the segment after $\zeta_{j+r}$ is no longer adjacent to the vertical edge $e$, it follows that $w(\zeta_{j+r})$ contributes $-1$ to $e$. Then the $+1$ contribution from $w(\zeta_j)$ cancels with the $-1$ contribution from $w(\zeta_{j+r})$.
Finally, it could be the case that $e$ is identified to both edges labeled $+1$ by $w(\zeta_j)$, so that $\nabla g_e$ has a $2$ in that coordinate and $\nabla g_e\cdot w(\zeta_j)$ picks up a $+2$ from these two edges. But in this case, combining both arguments above implies that one of the $+1$ contributions is canceled by a $-1$ coming from $w(\zeta_{j-1})$ and one by a $-1$ coming from $w(\zeta_{j+r})$ for appropriate $r$. Thus both are canceled.
We have shown that for each $j$, each $+1$ contribution of $w(\zeta_j)$ to $\nabla g_e \cdot w(\zeta_j)$ is canceled by a $-1$ contribution from some $w(\zeta_k)$. Provided none of the $-1$ contributions from $w(\zeta_k)$ are repeated for distinct $j$, this shows that $\nabla g_e \cdot w(\zeta) \leq 0$. The fact that these contributions are not repeated follows from the uniqueness of the choice of $\zeta_{j-1}$ and $\zeta_{j+r}$.
A similar argument implies $\nabla g_e \cdot w(\zeta) \geq 0$. Thus $\nabla g_e\cdot w(\zeta)=0$, as desired.
\end{proof}
\begin{lemma}\label{Lem:ReH}
Let $\zeta$ be one of the curves $\sigma$ or $\mu$ of \reflem{TantoAngleStruct}, and let $w(\zeta)\in T_a\mathcal{A}(\mathcal{T})$ be the corresponding leading--trailing deformation vector. Let $H(\zeta)$ be the complex number associated to the curve $\zeta$ given in \refdef{CompletenessEquations} (completeness equations). Then
\[ \frac{\partial \mathcal{V}}{\partial w(\zeta)} = \Re(\log H(\zeta)). \]
\end{lemma}
\begin{proof}
Let $\zeta_1, \dots, \zeta_k$ denote segments of $\zeta$ in cusp triangles $t_1, \dots, t_k$, respectively. Label the dihedral angles of triangle $t_i$ by $\alpha_i$, $\beta_i$, $\gamma_i$, in clockwise order, so that $\alpha_i$ is the angle cut off by $\zeta_i$. By \refdef{CompletenessEquations},
\[ \Re(\log H(\zeta)) = \sum_i \epsilon_i \Re (\log |z(\alpha_i)|), \]
where $z(\alpha_i)$ is the edge invariant associated with the edge labeled $\alpha_i$, and $\epsilon_i =+1$ if $\alpha_i$ is to the left of $\zeta_i$ and $\epsilon_i=-1$ if $\alpha_i$ is to the right of $\zeta_i$.
On the other hand, comparing \reffig{ExampleCusp} and \reffig{LeadingTrailing}, we see that when $\alpha_i$ is to the left of $\zeta_i$, the vector $w(\zeta_i)$ has a $+1$ in the position corresponding to $\beta_i$ and a $-1$ in the position corresponding to $\gamma_i$, and when $\alpha_i$ is to the right of $\zeta_i$, the vector $w(\zeta_i)$ has a $-1$ in the position corresponding to $\beta_i$ and a $+1$ in the position corresponding to $\gamma_i$. Then \reflem{DerivativesVol} implies that
\begin{align*}
\frac{\partial \mathcal{V}}{\partial w} &= \sum_{j=1}^{3n} -w_j\log\sin a_j \\
& = \sum_i \left( -\epsilon_i \log\sin\beta_i + \epsilon_i\log\sin\gamma_i \right) \\
&= \sum_i \epsilon_i \log \left( \frac{\sin\gamma_i}{\sin\beta_i} \right)\\
& = \sum_i \epsilon_i \Re(\log |z(\alpha_i)|), \quad \mbox{by \refeqn{AngleEdgeInvariant}}
\end{align*}
This is what we needed to show.
\end{proof}
We now have the tools we need to prove \refthm{VolAngleStructs}, to show that a critical point $a\in\mathcal{A}(\mathcal{T})$ of the volume functional\index{volume functional} corresponds to a complete hyperbolic structure on the manifold $M$.
\begin{proof}[Proof of \refthm{VolAngleStructs}]
Suppose $a\in\mathcal{A}(\mathcal{T})$ is a critical point of the volume functional\index{volume functional} $\mathcal{V}$. Then $a$ assigns a dihedral angle to each tetrahedron of $\mathcal{T}$, giving each ideal tetrahedron a unique hyperbolic structure. By \refthm{Gluing}, gluing these tetrahedra will give a hyperbolic structure on $M$ if and only if the edge gluing equations are satisfied for each edge. By \refthm{EuclidCusp}, the hyperbolic structure will be complete if and only if the induced geometric structure on each cusp torus is a Euclidean structure,\index{Euclidean structure} and we obtain a Euclidean structure when the completeness equations are satisfied by \refprop{CompletenessEqns}.
Consider first the edge gluing equations. Notice that any angle structure\index{angle structure} gives hyperbolic ideal tetrahedra satisfying the imaginary part of the gluing equations, so we need to show that our tetrahedra satisfy the real part. Fix an edge of the triangulation, and let $\sigma$ be a curve on a cusp torus encircling an endpoint of that edge. The real part of the gluing equation corresponding to this edge will be satisfied if and only if $\Re (\log H(\sigma)) = 0$. But \reflem{ReH} implies that $\Re(\log H(\sigma)) = \frac{\partial\mathcal{V}}{\partial w(\sigma)}$, and this is zero because our angle structure\index{angle structure} is a critical point of the volume functional.\index{volume functional} So the gluing equations hold.
As for the completeness equations, for any cusp torus $C$, and $\mu_1$ and $\mu_2$ generators of the first homology group of $C$, the completeness equations require that $H(\mu_1)=H(\mu_2)=1$. By \reflem{ReH}, we know
\[ \Re\log(H(\mu_i)) = \frac{\partial \mathcal{V}}{\partial w(\mu_i)} = 0,\]
since $a$ is a critical point. Thus the real part of each of these completeness equations is satisfied.
Consider the developing image of a fundamental domain for the cusp torus $C$. Because we know the angles given by $a$ satisfy the gluing equations, the structure on $C$ is at least an affine structure on the torus. Therefore the developing image of the fundamental domain is a quadrilateral in ${\mathbb{C}}$. Since the real parts of the completeness equations for $\mu_1$ and $\mu_2$ are both satisfied, it follows that the holonomy\index{holonomy} elements corresponding to $\mu_1$ and $\mu_2$ do not scale either side of the fundamental domain. But then the holonomy elements cannot effect a non-trivial rotation either; a quadrilateral in ${\mathbb{C}}$ whose opposite sides are the same length is a parallelogram. Thus the developing image of a fundamental domain is a parallelogram, the holonomy elements corresponding to $\mu_1$ and $\mu_2$ must be pure translations, and the cusp torus admits a Euclidean structure.\index{Euclidean structure} So the completeness equations hold.
\end{proof}
\Refthm{VolAngleStructs} gives us a way of proving not only that a 3-manifold $M$ is hyperbolic, but also that it admits a positively oriented\index{positively oriented tetrahedron}\index{tetrahedron!positively oriented} geometric triangulation.\index{geometric triangulation} To use the theorem, first, fix a triangulation $\mathcal{T}$. Then show the space of angle structures\index{angle structure} $\mathcal{A}(\mathcal{T})$ is nonempty. Finally, show that the volume functional\index{volume functional} achieves its maximum in the interior of $\overline{\mathcal{A}(\mathcal{T})}$. The last step can often be accomplished by considering angle structures\index{angle structure} on the boundary $\overline{\mathcal{A}(\mathcal{T})}-\mathcal{A}(\mathcal{T})$ and proving such structures cannot maximize volume. We will follow exactly this procedure for 2-bridge knots\index{2-bridge knot or link} in \refchap{TwoBridge}.
The following proposition is a useful tool for examining the maximum of the volume functional\index{volume functional} on the boundary $\overline{\mathcal{A}(\mathcal{T})}-\mathcal{A}(\mathcal{T})$.
\begin{proposition}\label{Prop:NoSingleDegenerate}
Suppose an angle structure\index{angle structure} $a \in \overline{\mathcal{A}(\mathcal{T})}$ maximizes the volume functional\index{volume functional} $\mathcal{V}$. Suppose that for some tetrahedron $\Delta_i$, one of the three angles of $\Delta_i$ in the angle structure $a$ is $0$. Then two of the angles are $0$ and the third is $\pi$.
\end{proposition}
\begin{proof}
Suppose instead that one angle, say $a_{i}$, is $0$, but the other two angles of $\Delta_i$ are nonzero: $a_{i+1} \neq 0$ and $a_{i+2}\neq 0$. We will find a path through $\mathcal{A}(\mathcal{T})$ with endpoint the angle structure\index{angle structure} $a$, and we will show that the derivative of the volume functional along this path is positive, and in fact unbounded, as the path approaches the endpoint $a$. It will follow that $a$ cannot be a maximum, which contradicts our assumption on $a$.
The space of angle structures\index{angle structure} $\mathcal{A}(\mathcal{T})$ is a bounded open convex subset of ${\mathbb{R}}^{3n}$, and its tangent space can be extended to its boundary. We may choose a tangent vector $w$ in $T_a\overline{\mathcal{A}(\mathcal{T})}$ pointing into the interior of $\mathcal{A}(\mathcal{T})$, for example pointing from $a$ toward a point of $\mathcal{A}(\mathcal{T})$; in particular $w_{i}>0$, since points of $\mathcal{A}(\mathcal{T})$ have $i$-th coordinate strictly positive while $a_{i}=0$. Take the path corresponding to geodesic flow in the direction of this tangent vector. \Refthm{VolConcaveDown} implies that the derivative of the volume functional\index{volume functional} along this path is $\sum_{i=1}^{3n} -w_{i}\log\sin(a_{i})$, evaluated at the angles of the current point of the path.
Consider the contribution from $\Delta_i$. The terms
\[ w_{i+1}\log\sin(a_{i+1}) \mbox{ and } w_{i+2}\log\sin(a_{i+2}) \]
are bounded, since $a_{i+1}$ and $a_{i+2}$ are bounded away from zero. But as the path approaches the angle structure\index{angle structure} $a$, the term coming from $-w_{i}\log\sin(a_{i})$ approaches positive infinity. Thus such a point cannot be a maximum.
\end{proof}
\section{The Schl{\"a}fli formula}\label{Sec:NeumannZagier}
In this short section, we prove \refthm{VolAngleStructsConverse}, the converse to \refthm{VolAngleStructs}. Our proof uses the Schl{\"a}fli formula for ideal tetrahedra, which can be stated as follows.
\begin{theorem}[Schl{\"a}fli's formula for ideal tetrahedra]\label{Thm:Schlafli}\index{Schl{\"a}fli's formula}
Let $P$ be an ideal tetrahedron. Let $H_1, \dots, H_4$ be a collection of horospheres centered on the ideal vertices of $P$. For each edge $e_{ij}$, running from the $i$-th to the $j$-th ideal vertex of $P$, let $\ell(e_{ij})$ denote the signed distance between $H_i$ and $H_j$ (that is, $\ell(e_{ij})$ is defined to be negative if $H_i\cap H_j \neq \emptyset$). Finally, let $\theta_{ij}$ denote the dihedral angle along the edge $e_{ij}$. Then the variation in the volume of $P$ satisfies
\begin{equation}\label{Eqn:Schlafli}
d\operatorname{vol}(P) = -\frac{1}{2}\sum_{i<j} \ell(e_{ij})\, d\theta_{ij}.
\end{equation}
\end{theorem}
Schl{\"a}fli's formula was originally proved for finite spherical simplices by Schl{\"a}fli in the 1850s. It has been extended in many directions, including to finite and ideal polyhedra in spaces of constant curvature. A proof of a formula that contains the result in \refthm{Schlafli} can be found in \cite{Milnor:Schlafli}; see also \cite{Rivin:EuclidStructs}. These sources note that the right hand side of \refeqn{Schlafli} is independent of the choice of horospheres.
Using this, we can finish the proof.
\begin{proof}[Proof of \refthm{VolAngleStructsConverse}]
Choose a horosphere about each cusp in the complete hyperbolic structure on $M$. Because the hyperbolic structure is complete, this choice gives a well-defined horosphere about each ideal vertex of each ideal tetrahedron in the positively oriented\index{positively oriented tetrahedron}\index{tetrahedron!positively oriented} hyperbolic ideal triangulation $\mathcal{T}$. Thus for each tetrahedron, we may use this choice to define the edge lengths $\ell(e_{ij})$ of \refthm{Schlafli}.
Now consider any variation of the angles through angle structures, starting at the complete structure, and apply \refeqn{Schlafli} to each ideal tetrahedron. Group the resulting terms by the edge classes of $M$: since the horospheres were chosen consistently, the length $\ell$ appearing in each term for a fixed edge class $e$ is the same, and because the total angle around $e$ remains the constant $2\pi$, the angle variations around $e$ sum to zero. Thus the contributions to the variation of the volume add to zero for each edge, and the right hand side of \refeqn{Schlafli}, summed over all tetrahedra, is zero. It follows that the complete structure is a critical point for the volume functional.\index{volume functional}
On the other hand, since the volume functional\index{volume functional} is strictly concave down on $\mathcal{A}(\mathcal{T})$ by \reflem{DerivativesVol}, it must follow that the complete structure is the unique global maximum.
\end{proof}
\section{Consequences}
\Refthm{VolAngleStructs} and its converse have a number of important immediate consequences. We leave many proofs as exercises.
\begin{corollary}[Lower volume bounds, angle structures]\label{Cor:VolumeBound}\index{volume bound!lower bound from angle structures}
Suppose $M$ has an ideal triangulation $\mathcal{T}$ such that the volume functional\index{volume functional} $\mathcal{V}\from \mathcal{A}(\mathcal{T})\to{\mathbb{R}}$ has a critical point $p \in \mathcal{A}(\mathcal{T})$.\index{angle structure} Then for any point $q\in\overline{\mathcal{A}(\mathcal{T})}$, the volume functional satisfies
\[ \mathcal{V}(q) \leq \operatorname{vol}(M), \]
with equality if and only if $q=p$, i.e.\ $q$ also gives the complete hyperbolic metric on $M$.
\end{corollary}
\begin{proof}
By \reflem{DerivativesVol}, the volume functional\index{volume functional} is strictly concave down on $\overline{\mathcal{A}(\mathcal{T})}$, so the critical point $p$ is its unique global maximum. Hence for any point $q\in \overline{\mathcal{A}(\mathcal{T})}$ we have $\mathcal{V}(q) \leq \mathcal{V}(p)$, with equality if and only if $q=p$. By \refthm{VolAngleStructs}, $\operatorname{vol}(M)=\mathcal{V}(p)$.
\end{proof}
More is conjectured to be true. \Refcor{VolumeBound} only gives a bound when there is a known critical point of the volume functional\index{volume functional} in the interior of the space of angle structures.\index{angle structure} If the maximum of the volume functional occurs on the boundary, it still seems to be the case in practice that the maximum is bounded by the volume of the complete hyperbolic structure. However, the following conjecture is currently still open.
\begin{conjecture}[Casson's conjecture]\label{Conj:Casson}\index{Casson's conjecture}
Let $M$ be a cusped hyperbolic 3-manifold, and let $\mathcal{T}$ be any ideal triangulation of $M$. If the space of angle structures\index{angle structure} $\mathcal{A}(\mathcal{T})$ is nonempty, then the maximum value for the volume functional\index{volume functional} on $\overline{\mathcal{A}(\mathcal{T})}$ is at most the volume of the complete hyperbolic structure on $M$.
\end{conjecture}
We may use angle structures to find hyperbolic Dehn fillings\index{Dehn filling} of triangulated 3-manifolds as well. Recall from \refchap{CompletionDehnFilling} that the $(p,q)$ Dehn filling on a triangulated manifold satisfies equation \refeqn{DehnFillingEquation}:
\[
p \log H(\mu) + q\log H(\lambda) = 2\pi i.
\]
\begin{theorem}[Angle structures and Dehn fillings]\label{Thm:DehnFillingTriangulated}
Let $M$ be a manifold with torus boundary components $T_1, \dots, T_n$, with generators $\mu_j, \lambda_j$ of $\pi_1(T_j)$ for each $j$. For each $j$, let $(p_j, q_j)$ denote a pair of relatively prime integers. Let $\mathcal{A}_{(p_1, q_1),\dots,(p_n,q_n)} \subset \mathcal{A}$ be the set of all angle structures\index{angle structure} that satisfy the imaginary part of the Dehn filling\index{Dehn filling} equations:
\[ \Im( p_j\log H(\mu_j) + q_j\log H(\lambda_j) ) = 2\pi. \]
Then a critical point of the volume functional\index{volume functional} $\mathcal{V}$ on $\mathcal{A}_{(p_1, q_1),\dots,(p_n,q_n)}$ gives the complete hyperbolic structure on the Dehn filling $M((p_1, q_1), \dots, (p_n,q_n))$ of $M$.
\end{theorem}
\begin{proof}
As in the proof of \refthm{VolAngleStructs}, we will have a complete hyperbolic structure on the Dehn filling\index{Dehn filling} if and only if each edge gluing equation is satisfied and additionally each Dehn filling equation
\[ p_j \log H(\mu_j) + q_j\log H(\lambda_j) = 2\pi i \]
is satisfied.
The proof that edge gluing equations are satisfied follows exactly as in the proof of \refthm{VolAngleStructs}. As for the Dehn filling equations, the imaginary part of each equation is satisfied by the given constraint on the space of angle structures.\index{angle structure} By \reflem{ReH}, the real part satisfies
\[ \Re ( p_j\log(H(\mu_j)) + q_j\log(H(\lambda_j))) = p_j \frac{\partial \mathcal{V}}{\partial w(\mu_j)} + q_j\frac{\partial \mathcal{V}}{\partial w(\lambda_j)} = 0,\]
because this is a critical point for the volume functional.\index{volume functional} Thus each Dehn filling equation is satisfied.
\end{proof}
\Refcor{VolumeBound} and \refthm{HypDehnSurgery} imply the following (weaker) version of Thurston's theorem on volume change under Dehn filling,\index{Dehn filling} \refthm{VolumeDF}.
\begin{corollary}\label{Cor:VolDecreaseDehnFilling}
Let $M$ be as in \refthm{DehnFillingTriangulated}.
If $s$ is a slope in the neighborhood of $\infty$ provided by Thurston's hyperbolic Dehn filling theorem, \refthm{HypDehnSurgery}, then the volume of $M(s)$ is strictly smaller than the volume of $M$.
\end{corollary}
\begin{proof}
\Refex{VolDecreaseDehnFilling}.
\end{proof}
We also have the tools to prove a rigidity theorem originally due to Weil. The following follows from Mostow--Prasad rigidity,\index{Mostow--Prasad rigidity} \refthm{MostowGeom}, but was first proved over a decade before that theorem, and can now be proved easily using angle structures.\index{angle structure}
\begin{corollary}[Weil rigidity theorem]\label{Cor:WeilRigidity}\index{Weil rigidity theorem}
Suppose $M$ is a 3-manifold with boundary consisting of tori, and suppose the interior of $M$ admits a complete hyperbolic metric. Then the metric is locally rigid, i.e.\ there is no local deformation of the metric through complete hyperbolic structures.
\end{corollary}
\begin{proof}
\Refex{WeilRigidity}.
\end{proof}
\section{Exercises}
\begin{exercise}
Find a formula for the volume of an ideal tetrahedron with vertices $0$, $1$, $\infty$, and $z$ in terms of $z$ alone.
\end{exercise}
\begin{exercise}
Give a proof that the figures of \reffig{SubSimplices} are correct. That is, given a Euclidean triangle with vertices on the unit circle, and angles $\alpha$, $\beta$, and $\gamma$, prove that the angles around the origin are given as shown in the figure.
\end{exercise}
\begin{exercise}
Prove the Kubert identities for the Lobachevsky function:\index{Kubert identities}
\[ \Lambda(n\theta) = n\sum_{k=0}^{n-1} \Lambda\left(\theta + \frac{k\pi}{n}\right). \]
Hint: use the cyclotomic identity
\[ 2\sin(n\theta) = \prod_{k=0}^{n-1} 2\sin\left( \theta + \frac{k\pi}{n}\right). \]
\end{exercise}
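Before attempting a proof, the identity can be checked numerically, for instance as in the following Python sketch; this is a sanity check only, using the truncated Fourier series for $\Lambda$, accurate to a few decimal places.
\begin{verbatim}
import math

def lob(theta, terms=200000):
    # truncated Fourier series for the Lobachevsky function
    return 0.5 * sum(math.sin(2 * k * theta) / k**2 for k in range(1, terms + 1))

theta, n = 0.7, 3
lhs = lob(n * theta)
rhs = sum(n * lob(theta + k * math.pi / n) for k in range(n))
print(lhs, rhs)   # the two values should agree to several decimal places
\end{verbatim}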
\begin{exercise}
Find an explicit convex polytope describing the set of angle structures\index{angle structure} on the complement of the figure-8 knot.
\end{exercise}
\begin{exercise}
Find an explicit convex polytope describing the set of angle structures\index{angle structure} on the complement of the $5_2$ knot.
\end{exercise}
\begin{exercise}
Suppose an ideal tetrahedron has dihedral angles $\alpha$, $\beta$, $\gamma$ in clockwise order. Prove equation \refeqn{AngleEdgeInvariant}: that the edge invariant of the tetrahedron assigned to the edge with angle $\alpha$ is
\[ z(\alpha) = \frac{\sin(\gamma)}{\sin(\beta)}\,e^{i\alpha}. \]
\end{exercise}
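Again, a numerical check may be reassuring before the proof. Recall that for the ideal tetrahedron with vertices $0$, $1$, $\infty$, and $z$, the three edge invariants at the edges meeting $\infty$ are $z$, $1/(1-z)$, and $(z-1)/z$. In the Python sketch below we take $\alpha = \arg z$, $\beta = \arg\left((z-1)/z\right)$, and $\gamma = \arg\left(1/(1-z)\right)$; this particular assignment of the names $\beta$ and $\gamma$ is our reading of the clockwise convention, and the check is not a proof.
\begin{verbatim}
import cmath, math, random

random.seed(1)
for _ in range(5):
    z = complex(random.uniform(-1, 2), random.uniform(0.2, 2))  # z in the upper half plane
    alpha = cmath.phase(z)                  # angle at the edge with invariant z
    beta  = cmath.phase((z - 1) / z)        # angle at the edge with invariant (z-1)/z
    gamma = cmath.phase(1 / (1 - z))        # angle at the edge with invariant 1/(1-z)
    claimed = (math.sin(gamma) / math.sin(beta)) * cmath.exp(1j * alpha)
    print(abs(claimed - z))                 # should be numerically zero
\end{verbatim}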
\begin{exercise}\label{Ex:RegIdealOctahedron}
An ideal octahedron can be obtained by gluing two identical ideal pyramids over an ideal quadrilateral base along the ideal quadrilateral. We triangulate this by running an edge from the ideal point opposite the quadrilateral on one pyramid, through the quadrilateral, to the ideal point opposite the quadrilateral on the other pyramid, and then we stellar subdivide. Using angle structures\index{angle structure} on this collection of ideal tetrahedra, prove that the maximal volume ideal hyperbolic octahedron is the regular one: the one for which the quadrilateral base is a square.\index{ideal octahedron, regular}\index{regular ideal octahedron}
\end{exercise}
\begin{exercise}\label{Ex:DoubleIdealPyramids}
Generalize \refex{RegIdealOctahedron} to the ideal object obtained by gluing two ideal pyramids over an ideal $n$-gon base. Using stellar subdivision and angle structures,\index{angle structure} prove that the volume of the ideal double pyramid with base an $n$-gon is maximized when the $n$-gon is regular.
\end{exercise}
\begin{exercise}\label{Ex:VolDecreaseDehnFilling}
Prove that volume decreases locally under Dehn filling,\index{Dehn filling} \refcor{VolDecreaseDehnFilling}.
\end{exercise}
\begin{exercise}\label{Ex:WeilRigidity}
Prove the Weil rigidity theorem, \refcor{WeilRigidity}. First prove it for manifolds admitting an angle structure.\index{angle structure} Extend to all manifolds using the following theorem of Luo, Schleimer, and Tillmann, which can be found in \cite{LuoSchleimerTillmann}. (You may assume the theorem for the exercise.)
\begin{theorem}[Geometric triangulations exist virtually]\label{Thm:VirtualTriangulation}\index{geometric triangulation!exist virtually}
Let $M$ be a 3-manifold with boundary consisting of tori, such that the interior of $M$ admits a complete hyperbolic structure. Then $M$ has a finite cover $N$ such that $N$ decomposes into positively oriented ideal tetrahedra.\index{positively oriented tetrahedron}\index{tetrahedron!positively oriented}
\end{theorem}
\end{exercise}
\chapter{Two-Bridge Knots and Links}\label{Chap:TwoBridge}
\blfootnote{Jessica S. Purcell, Hyperbolic Knot Theory}
In this chapter we will study in detail a class of knots and links that has particularly nice geometry, namely the class of 2-bridge knots and links.\index{2-bridge knot or link} The key property of these links that we will explore is the fact that they admit a geometric triangulation\index{geometric triangulation} that can be read off of a diagram, or an algebraic description of the link. In this chapter, we will define 2-bridge knots and links, describe their triangulations, and mention some of their geometric properties and consequences.
\section{Rational tangles and 2-bridge links}
In 1956, H.~Schubert showed that the class of 2-bridge knots and links is classified by a rational number, and any continued fraction expansion of this number gives a diagram of the knot \cite{Schubert}. In this section, we work through the description of 2-bridge knots and links via rational numbers. Additional references are \cite{BurdeZieschang} and \cite{murasugi}.
First, we define tangles.
\begin{definition}\label{Def:Tangle}
A \emph{tangle}\index{tangle} is a 1-manifold properly embedded in a 3-ball $B$. That is, it is a collection of arcs with endpoints on $\partial B$ and interiors disjointly embedded in the interior of $B$, possibly along with a collection of simple closed curves. For our purposes, we will consider only tangles consisting of two arcs, thus with four endpoints embedded on the boundary of a ball.
\end{definition}
The simplest tangle is a rational tangle.
\begin{definition}\label{Def:RationalTangle}
A \emph{rational tangle}\index{rational tangle}\index{tangle!rational tangle} is a tangle obtained by embedding two disjoint arcs on the surface of a 4-punctured sphere (pillowcase), and then pushing the interiors slightly into the $3$-ball bounded by the 4-punctured sphere.
\end{definition}
The tangles are called rational because they can be defined by a rational number, as follows. Recall that a rational number can be described by a continued fraction:\index{continued fraction}
\[
\frac{p}{q} = [a_n, a_{n-1}, \hdots, a_1] = a_n + \cfrac{1}{a_{n-1} + \cfrac{1}{\ddots \, + \cfrac{1}{a_1}}}.
\]
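Continued fractions in this notation are easy to evaluate by machine. The following Python sketch (the function name is ours) computes the rational number determined by a list of coefficients, and confirms that the continued fraction $[4,-2,-2,3]$ of \reffig{RatTangle1} evaluates to $47/13$.
\begin{verbatim}
from fractions import Fraction

def continued_fraction_value(coeffs):
    # coeffs = [a_n, a_{n-1}, ..., a_1], read left to right as in the display above.
    # (The evaluation raises a division-by-zero error for tangles such as [0, 0] = infinity.)
    value = Fraction(coeffs[-1])
    for a in reversed(coeffs[:-1]):
        value = a + Fraction(1) / value
    return value

print(continued_fraction_value([4, -2, -2, 3]))   # 47/13
\end{verbatim}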
Now, given a continued fraction $[a_n, \dots, a_1]$, we will form a rational tangle. Start by labeling the four points on the pillowcase NW, NE, SW, and SE.
If $n$ is even, connect NE to SE and NW to SW by attaching two arcs as in \reffig{RatTangle1}(a).
Perform a homeomorphism of $B^3$ that rotates the points NW and NE $|a_1|$ times, creating a vertical band of $|a_1|$ crossings in the two arcs. If $a_1>0$, rotate in a counterclockwise direction, so that the overcrossings of the result have positive slope. This is called a \emph{positive crossing}.\index{positive crossing}\index{crossing!positive} If $a_1<0$ rotate in a clockwise direction, so that overcrossings have negative slope, forming a \emph{negative crossing}.\index{negative crossing}\index{crossing!negative} In \reffig{RatTangle1}(b), three positive crossings have been added. After twisting, relabel the points NW, NE, SW, and SE to match their original orientation. Next, apply a homeomorphism of $B^3$ that rotates NE and SE $|a_2|$ times, adding crossings in a horizontal band. Again these crossings will be positive if $a_2>0$, and negative if $a_2<0$. Repeat this process for each $a_i$. When finished, we obtain a rational tangle. An example is shown in \reffig{RatTangle1}.
\begin{figure}[h]
\import{Figures/Ch10_TwoBridge/}{F10-01-Tangle.eps_tex}
\caption{Building a rational tangle from the continued fraction $[4, -2, -2, 3]$.}
\label{Fig:RatTangle1}
\end{figure}
If $n$ is odd, start with two arcs connecting NW to NE and SW to SE. In this case we add a horizontal band of crossings first, and then continue as before, alternating between horizontal and vertical bands for each $a_i$.
Any rational tangle may be built by this process. As a convention, we require that the left-most term $a_n$ in the continued fraction expansion corresponds to a horizontal band of crossings. If we build a rational tangle ending with a vertical band, as in \reffig{RatTangle1}(b), then we insert a $0$ into the corresponding continued fraction, representing a horizontal band of $0$ crossings. For example, the continued fraction corresponding to the tangle in \reffig{RatTangle1}(b) is $[0, 3]$. This convention ensures that any continued fraction completely specifies a single rational tangle. There are two trivial rational tangles, namely $0 = [0]$, with untwisted strands connecting NW to NE and SW to SE, and $\infty = [0,0] = 0+\frac{1}{0}$, with untwisted strands connecting NW to SW and NE to SE. The tangle $\infty$ is shown in \reffig{RatTangle1}(a).
\begin{proposition}[\cite{Conway}]\label{Prop:Conway}
Equivalence classes of rational tangles are in one-to-one correspondence with the set ${\mathbb{Q}} \cup \infty$. In particular, tangles $T(a_n, \dots, a_1)$ and $T(b_m,\dots, b_1)$ are equivalent if and only if the continued fractions $[a_n, \dots, a_1]$ and $[b_m, \dots, b_1]$ are equal. \qed
\end{proposition}
\Refprop{Conway} allows us to put all our tangles into nice form.
\begin{corollary}\label{Cor:Conway}
For any rational tangle, there exists an equivalent tangle $T(a_n, \dots, a_1)$ for which $a_i\neq 0$ for $1\leq i<n$, and either all $a_i\geq 0$ or all $a_i\leq 0$.
\end{corollary}
\begin{proof}
This follows immediately from \refex{ContinuedFractions}.
\end{proof}
Thus we will assume that positive tangles have only positive crossings, and negative tangles have only negative crossings. In either case, this will make the tangle diagram alternating.\index{alternating diagram!tangle}
\begin{definition}
The \emph{numerator closure}\index{numerator closure} ${\rm{num}}(T)$ of a rational tangle $T$ is formed by connecting NW to NE and SW to SE by simple arcs with no crossings. The \emph{denominator closure}\index{denominator closure} ${\rm{denom}}(T)$ is formed by connecting NW to SW and NE to SE by simple arcs with no crossings.
\end{definition}
\begin{definition}\label{Def:2BridgeKnot}
A \emph{2-bridge knot or link}\index{2-bridge knot or link}\index{2-bridge knot or link!definition} is the denominator closure of a rational tangle.
\end{definition}
Notice that the denominator closure of the tangle $T(a_n, a_{n-1},\dots, a_1)$ is always equivalent to the denominator closure of the tangle $T(0,a_{n-1}, \dots, a_1)$, since $a_n$ corresponds to horizontal crossings that can simply be unwound after forming the denominator closure. Thus when we consider 2-bridge knots, we may assume that in our rational tangle, $a_n=0$.
\begin{definition}\label{Def:2BridgeNotation}
The 2-bridge knot or link\index{2-bridge knot or link} that is the denominator closure of the tangle $T(a_n, a_{n-1}, \dots, a_1)$ (and $T(0,a_{n-1}, \dots, a_1)$) is denoted by $K[a_{n-1}, \dots, a_1]$.
\end{definition}
The above discussion of twisting and taking denominator closure gives a nice correspondence between diagrams of 2-bridge knots and continued fraction expansions of rational numbers $p/q$ with $|p/q|\leq 1$. This is summarized in the following lemma.
\begin{lemma}\label{Lem:DiagramCF}
Suppose $[0,a_{n-1},\dots, a_1]$ is a continued fraction with either $a_i>0$ for all $i$ or $a_i<0$ for all $i$. Then the diagram of $K[a_{n-1}, \dots, a_1]$ contains $n-1$ twist regions, arranged left to right, with the $i$-th twist region from the left containing $|a_i|$ crossings. The sign of these crossings agrees with the sign of $a_i$ if $i$ and $n$ are both even or both odd, and is opposite to the sign of $a_i$ if one of $i$ and $n$ is even and the other odd. Twist regions connect as illustrated in \reffig{2BridgeDiagram}.
\end{lemma}
\begin{figure}[h]
\import{Figures/Ch10_TwoBridge/}{F10-02-2BDiagr.eps_tex}
\caption{The diagram of $K[a_{n-1}, \dots, a_1]$. Top: $n$ odd; bottom: $n$ even. Box labeled $\pm a_i$ denotes a (horizontal) twist region with $|a_i|$ crossings, with sign of the crossings equal to that of $\pm a_i$.}
\label{Fig:2BridgeDiagram}
\end{figure}
\begin{proof}
The tangle $T(0,a_{n-1}, \dots, a_1)$ is obtained by forming horizontal or vertical bands of $|a_i|$ crossings, for $i=1,\dots, n-1$. Thus the diagram of the denominator closure has $n-1$ twist regions with the numbers of crossings as claimed. To put it into the form of \reffig{2BridgeDiagram}, isotope the diagram by rotating vertical twist regions to be horizontal. Note that the rotation changes the sign of the crossing. Thus when $n$ is even, this rotates twist regions with odd index $i$ to be horizontal, and thus sign becomes opposite that of $a_i$, for even $n$ and odd $i$. When $n$ is odd, this rotates twist regions with even index to be horizontal, again with sign opposite that of $a_i$, for odd $n$ and even $i$.
\end{proof}
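The sign bookkeeping in \reflem{DiagramCF} is easily mechanized. The following Python sketch (our own notation) returns the signed crossing count of each twist region, read left to right, from the list $a_1, \dots, a_{n-1}$.
\begin{verbatim}
def twist_regions(a):
    # a = [a_1, ..., a_{n-1}]; the link is K[a_{n-1}, ..., a_1], with n = len(a) + 1.
    # Returns the signed crossing count of each twist region, left to right, following
    # the parity rule of Lemma DiagramCF.
    n = len(a) + 1
    return [ai if (i - n) % 2 == 0 else -ai for i, ai in enumerate(a, start=1)]

print(twist_regions([2, 2, 4]))   # K[4,2,2]: [-2, 2, -4]
\end{verbatim}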
\begin{lemma}\label{Lem:a1an}
For a 2-bridge knot or link\index{2-bridge knot or link} $K[a_{n-1}, \dots, a_1]$, we may always assume $|a_1|\geq 2$ and $|a_{n-1}|\geq 2$.
\end{lemma}
\begin{proof}
Exercise. One way to see this is to consider the form of a 2-bridge knot or link with $|a_1|=1$ or $|a_{n-1}|=1$, and show that the corresponding twist region can be subsumed into another twist region.
\end{proof}
For the rest of this chapter, we will assume the conclusions of \refcor{Conway} and \reflem{a1an}, namely that if $K[a_{n-1}, \dots, a_1]$ is a 2-bridge knot or link, then either $a_i>0$ for all $i$ or $a_i<0$ for all $i$, and $|a_{n-1}|\geq 2$ and $|a_1|\geq 2$.
\section{Triangulations of 2-bridge links}
We now describe a way to triangulate 2-bridge link complements that was first observed by Sakuma and Weeks \cite{SakumaWeeks}. A description was also given by Futer in the appendix of \cite{GueritaudFuter:2bridge}; we base our exposition here off of the latter paper.
Consider again our construction of a rational tangle. We started with two strands in a 4-punctured sphere, or pillowcase. To form each crossing, we either rotate the points NE and NW or the points NE and SE. In the former case, we call the crossing a \emph{vertical} crossing,\index{vertical crossing}\index{crossing!vertical} and in the latter a \emph{horizontal} crossing.\index{horizontal crossing}\index{crossing!horizontal} For all but the first crossing in a tangle, adding a crossing can be seen as stacking a region $S^2\times I$ to the outside of a pre-existing tangle, where $S^2\times I$ contains four strands, two of them forming a crossing. Positive vertical and horizontal crossings in $S^2\times I$ are shown in \reffig{4PunctSphereBlocks}; negative ones are twisted in the opposite direction. If we drill the four strands from $S^2\times I$, the region becomes $S\times I$, where $S$ is a 4-punctured sphere.
\begin{figure}[h]
\includegraphics{Figures/Ch10_TwoBridge/F10-03-4PSBlk.eps}
\caption{Vertical (left) and horizontal (right) blocks of the form $S\times I$. The 4-punctured spheres on the outside and inside correspond to $S\times\{1\}$ and $S\times\{0\}$, respectively}
\label{Fig:4PunctSphereBlocks}
\end{figure}
\begin{lemma}\label{Lem:StackingSpheres}
Let $K:=K[a_{n-1}, \dots, a_1]$ be a 2-bridge link\index{2-bridge knot or link} and let $C$ denote the number of crossings of $K$; so $C=|a_1|+\dots+|a_{n-1}|$. Assume either $a_i<0$ for all $i$ or $a_i>0$ for all $i$, and $|a_1|\geq 2$ and $|a_{n-1}|\geq 2$. Let $N$ be the manifold obtained from the complement $S^3-K$ by removing a ball neighborhood of the first and last crossings, and let $S$ denote the 4-punctured sphere. Then $N$ is homeomorphic to $S\times [a,b]$, obtained from stacking $C-2$ copies of $S\times I$ end to end, with each copy of $S\times I$ corresponding to either a horizontal or vertical crossing.
\begin{itemize}
\item If $n$ is even, the first $a_1-1$ copies of $S\times I$ are vertical, followed by $a_2$ horizontal copies, $a_3$ vertical, etc, finishing with $a_{n-1}-1$ vertical copies of $S\times I$.
\item If $n$ is odd, the first $a_1-1$ copies of $S\times I$ are horizontal, followed by $a_2$ vertical copies, $a_3$ horizontal, etc, finishing with $a_{n-1}-1$ vertical copies.
\end{itemize}
The $i$-th copy of $S\times I$ is glued along $S\times\{1\}$ to $S\times\{0\}$ on the $(i+1)$-st copy, $i=2, 3, \dots, C-2$. \qed
\end{lemma}
An example of \reflem{StackingSpheres} is shown in \reffig{TangleExample}.
\begin{figure}[h]
\includegraphics{Figures/Ch10_TwoBridge/F10-04-TangEx.eps}
\caption{On the left is $K[4,2,2]$. On the right, remove neighborhoods of inside and outside crossings to obtain a manifold homeomorphic to $S\times[a,b]$. Each crossing is contained in a block $S\times I$ of the form of \reffig{4PunctSphereBlocks}}
\label{Fig:TangleExample}
\end{figure}
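The pattern of blocks in \reflem{StackingSpheres} can likewise be generated mechanically. The Python sketch below (our own notation, assuming all $a_i>0$) lists the blocks for the example $K[4,2,2]$ of \reffig{TangleExample}.
\begin{verbatim}
def block_sequence(a):
    # a = [a_1, ..., a_{n-1}], all positive, with n = len(a) + 1; returns the C - 2 blocks
    # of Lemma StackingSpheres in order, 'V' for vertical and 'H' for horizontal crossings.
    n = len(a) + 1
    blocks = []
    for i, ai in enumerate(a, start=1):
        count = ai - 1 if i in (1, n - 1) else ai    # first and last bands lose one crossing
        kind = 'V' if (i + n) % 2 == 1 else 'H'
        blocks.extend([kind] * count)
    return blocks

print(block_sequence([2, 2, 4]))   # K[4,2,2]: ['V', 'H', 'H', 'V', 'V', 'V']
\end{verbatim}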
We will obtain a triangulation of a 2-bridge link complement by first finding a triangulation of the manifold $N$ in \reflem{StackingSpheres}. To do so, we will consider each of the blocks $S\times I$ separately, and then consider how they fit together.
Denote the blocks of \reflem{StackingSpheres} by $S_2\times I, S_3\times I, \dots, S_{C-1}\times I$, where $S_i\times I$ corresponds to the $i$-th crossing of the tangle. In the description below, we will consider the case that all crossings are positive, i.e.\ $a_j>0$ for all $j$, so that if the $i$-th crossing is vertical, then $S_i\times I$ has the form of the left of \reffig{4PunctSphereBlocks}, and if it is horizontal, then $S_i\times I$ has the form of the right. The case of all negative crossings will be similar.
In $S_i\times I$, the 4-punctured sphere $S_i$ is embedded at any level $S_i\times\{t\}$. We will focus in particular on $S_i\times\{0\}$ and $S_i\times \{1\}$.
\begin{lemma}\label{Lem:TriangulateSi}
There is an ideal triangulation of $S_i$ such that when we isotope the triangulation to $S_i\times\{1\}$, edges are horizontal (from NE to NW and from SE to SW), vertical (from SW to NW and from SE to NE), and diagonal, and when we isotope the triangulation to $S_i\times\{0\}$, edges are still horizontal, vertical, and diagonal, but the diagonals are opposite those of $S_i\times\{1\}$.
\end{lemma}
\begin{proof}
First consider the outside, $S_i\times\{1\}$.
Draw vertical and horizontal ideal edges on $S_i\times\{1\}$; that is, draw horizontal edges from NE to NW and from SE to SW, and draw vertical edges from SE to NE and from SW to NW. Now isotope from $S_i\times\{1\}$ through $S_i\times\{t\}$ inside to $S_i\times\{0\}$, and track these ideal edges through the isotopy.
\begin{figure}[h]
\includegraphics{Figures/Ch10_TwoBridge/F10-05-4PTri.eps}
\caption{Effect on ideal edges of isotopies between $S_i\times\{1\}$ and $S_i\times\{0\}$.}
\label{Fig:4PunctTriangles}
\end{figure}
In the case that $S_i\times I$ has a vertical crossing, as on the left of \reffig{4PunctSphereBlocks}, notice that the isotopy takes the horizontal edges in $S_i\times\{1\}$ to horizontal edges in $S_i\times\{0\}$, but it takes vertical edges in $S_i\times\{1\}$ to diagonal edges in $S_i\times\{0\}$, as shown on the left of \reffig{4PunctTriangles}. Now consider the vertical edges in $S_i\times\{0\}$, i.e.\ the ideal edges running from SE to NE and SW to NW on the inside of the block. When we isotope $S_i\times\{0\}$ to $S_i\times\{1\}$, notice that these edges become diagonal edges on $S_i\times\{1\}$, as shown on the right of \reffig{4PunctTriangles}. Notice that these diagonals are exactly opposite the diagonals on the inside on the left of the figure; see also \reffig{4PunctFullTriang}.
When $S_i\times I$ has a horizontal crossing, as on the right of \reffig{4PunctSphereBlocks}, the vertical ideal edges in $S_i\times\{1\}$ are isotopic to vertical edges in $S_i\times\{0\}$. Horizontal edges on $S_i\times\{1\}$ isotope to diagonal edges on $S_i\times\{0\}$, and horizontal edges on $S_i\times\{0\}$ isotope to diagonal edges on $S_i\times\{1\}$. Again the diagonal edges have opposite slopes on the inside and outside.
In either case, add all horizontal and vertical edges to $S_i$, as seen on both $S_i\times\{1\}$ and $S_i\times\{0\}$. Since one of the two families (horizontal or vertical) is isotopic from $S_i\times\{1\}$ to $S_i\times\{0\}$ and is therefore only counted once, we add six ideal edges total. This is the triangulation claimed in the lemma.
\end{proof}
\begin{figure}[h]
\import{Figures/Ch10_TwoBridge/}{F10-06-4PFull.eps_tex}
\caption{Triangulation of $S_i\times\{1\}$ and $S_i\times\{0\}$, shown for both horizontal and vertical (positive) crossings.}
\label{Fig:4PunctFullTriang}
\end{figure}
The triangulation of $S_i\times\{1\}$ and $S_i\times\{0\}$ for both vertical and horizontal crossings is shown in \reffig{4PunctFullTriang}. (We have removed the arrows from the edges as we will not need to work with directed edges, and keeping track of direction will unnecessarily complicate the discussion.) Note the ideal edges cut $S_i\times\{1\}$ into four ideal triangles:\index{ideal triangle} two on the front and two on the back. Similarly, these ideal triangles can be isotoped to the inside $S_i\times\{0\}$, giving two triangles on the front and two on the back, although notice that the isotopy does not take both triangles in the front of $S_i\times\{1\}$ to triangles in the front of $S_i\times\{0\}$.
So far we only have ideal triangles on surfaces $S_i$, and no ideal tetrahedra. The ideal tetrahedra are obtained when we put blocks $S_{i-1}\times I$ and $S_i\times I$ together, and we now describe how this works.
\begin{lemma}\label{Lem:StackingTetrahedra}
With $S_{i-1}$ and $S_i$ triangulated as in \reflem{TriangulateSi}, gluing $S_{i-1}\times I$ to $S_i\times I$ by identifying $S_{i-1}\times\{1\}$ and $S_i\times\{0\}$ gives rise to two ideal tetrahedra, each with two faces on $S_{i-1}$ and two on $S_i$.
\end{lemma}
\begin{proof}
Consider the triangulations of $S_{i-1}\times\{1\}$ and $S_i\times\{0\}$, shown in \reffig{4PunctFullTriang}. Notice that the diagonal edges of $S_{i-1}\times\{1\}$ are exactly opposite the diagonal edges of $S_i\times\{0\}$, and so these edges do not match up. The horizontal and vertical edges on $S_{i-1}\times\{1\}$ and $S_i\times\{0\}$ can be identified, but the diagonal edges cannot. To keep these edges embedded, we view the diagonals of $S_{i-1}\times\{1\}$ as inside of the diagonals of $S_i\times\{0\}$, as shown in \reffig{Pillowcases}.
\begin{figure}[h]
\includegraphics{Figures/Ch10_TwoBridge/F10-07-Pillow.eps}
\caption{When blocks are glued, diagonals of the triangulated surfaces $S_i$ and $S_{i-1}$ are as shown.}
\label{Fig:Pillowcases}
\end{figure}
With horizontal and vertical edges identified, notice that the interior of the region between $S_{i-1}\times\{1\}$ and $S_i\times\{0\}$ lies in two components: one on the front of the figure, and one on the back. Each of these components is bounded by four triangular faces, six ideal edges, and four ideal vertices; each is an ideal tetrahedron as desired.
\end{proof}
By \reflem{StackingTetrahedra}, when we glue $S_i\times I$ to $S_{i+1}\times I$, we obtain two additional tetrahedra, and these will be attached along $S_i$ to the two tetrahedra from $S_i\times I$ and $S_{i-1}\times I$. Thus as we run from $S_2\times I$ out to $S_{C-1}\times I$, we obtain pairs of tetrahedra for each gluing of blocks, and these are glued inside to outside to form a triangulation of $N\cong S\times[a,b]$.
Now, at this stage, we have a triangulation of $N$, but there will be four triangular faces on the very inside corresponding to $S_2$ that are unglued, and four triangular faces on the very outside corresponding to $S_{C-1}$ that are unglued. To complete the description of the triangulation of the 2-bridge link complement, we need to describe what happens at the outermost and innermost crossings, e.g.\ on the left of \reffig{TangleExample}.
\begin{proposition}\label{Prop:2BridgeTriang}
Let $K:=K[a_{n-1}, \dots, a_1]$ be a 2-bridge link\index{2-bridge knot or link!triangulation}\index{2-bridge knot or link} with at least two twist regions, with either $a_i>0$ for all $1\leq i\leq n-1$, or $a_i < 0$ for all $i$. Assume $|a_1|\geq 2$ and $|a_{n-1}|\geq 2$. Let $C=|a_1|+\dots+|a_{n-1}|$ denote the number of crossings of $K$.
Then $S^3-K$ has a decomposition into $2(C-3)$ ideal tetrahedra denoted by $T_i^1, T_i^2$, for $i=2, \dots, C-2$.
\begin{itemize}
\item For $2\leq i \leq C-2$, the tetrahedra $T_i^1$ and $T_i^2$ each have two faces on $S_i$ and two on $S_{i+1}$.
\item The two faces of $T_2^1$ on $S_2$ glue to the two faces of $T_2^2$ on $S_2$.
\item Similarly, the two faces of $T_{C-2}^1$ on $S_{C-1}$ glue to the two faces of $T_{C-2}^2$ on $S_{C-1}$.
\end{itemize}
\end{proposition}
\begin{proof}
The tetrahedra come from the triangulation of $N$. By the previous lemmas, there are two tetrahedra for each pair of adjacent crossings, omitting the first and last, thus $2(C-3)$ tetrahedra. By \reflem{StackingTetrahedra}, each tetrahedron in each pair has two faces on $S_i$ and two on $S_{i+1}$, where $S_i\times I$ and $S_{i+1}\times I$ are blocks corresponding to the two adjacent crossings. We label the tetrahedra $T_2^1$, $T_2^2$, $T_3^1$, $T_3^2$, $\dots$, $T_{C-2}^1$, $T_{C-2}^2$, so that the first item is satisfied.
It remains to show that the innermost and outermost tetrahedra, $T_2^1$, $T_2^2$ and $T_{C-2}^1$, $T_{C-2}^2$, glue as claimed.
We will focus on the outermost tetrahedra here. The innermost case is similar, but we leave its description to the reader.
For the outermost crossing, recall that we may assume our 2-bridge knot is the denominator closure of a tangle $T(a_n,a_{n-1}, \dots, a_1)$ with $a_n=0$. Thus the outermost crossing will be vertical, not horizontal. Hence we restrict to pictures with a single vertical crossing on the outside.
The outermost 4-punctured sphere $S_{C-1}$ will be triangulated as shown on the left of \reffig{OutsideTriangle}. Notice that when we add the outside crossing as shown, vertical edges and horizontal edges all become isotopic and hence are identified, by isotopies swinging the endpoints around the strand of the crossing. One such isotopy is indicated by the small arrow on the left of \reffig{OutsideTriangle}.
The diagonal edges are not identified to horizontal or vertical edges. When we follow the isotopy of \reffig{OutsideTriangle}, the diagonal edge in the front wraps once around a strand of the knot, as shown on the right of \reffig{OutsideTriangle}. Thus the triangle in the upper left corner of $S_{C-1}$ maps under the isotopy to a triangle with two of its edges identified, looping around a strand of the 2-bridge knot.
\begin{figure}[h]
\includegraphics{Figures/Ch10_TwoBridge/F10-08-OutTri.eps}
\caption{Identifying triangles of the outermost 4-punctured sphere}
\label{Fig:OutsideTriangle}
\end{figure}
Now consider the triangle in front in the lower right corner. We will isotope the triangle by dragging its vertex at the SE corner around the strand of the knot to the NW corner. If we perform this isotopy while holding the diagonal fixed, note that the lower right triangle flips around backwards to be identified with the upper left triangle in the front. Thus the two triangles on the front of $S_{C-1}\times \{1\}$ will be identified under isotopy. Similarly for the two back triangles.
Thus inserting the outermost crossing identifies the four outside triangular faces of the outermost tetrahedra in pairs.
The tetrahedra $T_{C-2}^1$ and $T_{C-2}^2$ have triangular faces on $S_{C-1}$, shown in \reffig{Pillowcases}. One of these, say $T_{C-2}^1$, will lie in front in that figure and one will lie in back.
However, note that in \reffig{Pillowcases}, we have isotoped the surface $S_{C-1}$ to be in the position of $S_{C-1}\times\{0\}$, while in \reffig{OutsideTriangle}, when we glue faces of $S_{C-1}$, we have isotoped $S_{C-1}$ to be in the position of $S_{C-1}\times\{1\}$. Isotoping from $S_{C-1}\times\{0\}$ to $S_{C-1}\times\{1\}$ will move the faces of $T_{C-2}^1$ and $T_{C-2}^2$. In particular, the face of $T_{C-2}^1$ lying on the upper right of \reffig{Pillowcases} will be moved by isotopy to lie in the back on the upper right, and the face of $T_{C-2}^1$ lying in the lower left will be moved by isotopy to lie in front, in the lower right. See \reffig{TetrFaces}.
\begin{figure}[h]
\import{Figures/Ch10_TwoBridge/}{F10-09-TetFac.eps_tex}
\caption{Locations of faces of $T_{C-2}^1$ under isotopy from $S\times\{0\}$ to $S\times\{1\}$}
\label{Fig:TetrFaces}
\end{figure}
Thus the identification of triangles on the outside identifies faces of $T_{C-2}^1$ to faces of $T_{C-2}^2$.
\end{proof}
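As a quick check of the count in \refprop{2BridgeTriang}, consider the figure-eight knot, which is the 2-bridge knot $K[2,2]$: here $C=4$, so the proposition gives $2(C-3)=2$ ideal tetrahedra $T_2^1$ and $T_2^2$, and the hairpin identifications glue every face of $T_2^1$ to a face of $T_2^2$ (two along $S_2$ and two along $S_3$). This is consistent with the familiar two-tetrahedron decomposition of the figure-eight knot complement.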
\subsection{The cusp triangulation}
Now we consider the view of the tetrahedra from a cusp. Consider first the manifold $N$ with ball neighborhoods of the first and last crossings removed. The manifold $N$ is homeomorphic to the product of a 4-punctured sphere and a closed interval. Note that in $N$, there are four distinct cusps, corresponding to the product of $I$ and the four distinct punctures of the 4-punctured sphere. Note that each cusp meets each 4-punctured sphere $S_i$, and that a curve on $S_i$ running around the puncture forms a meridian.
Finally, note that between each $S_i$ and $S_{i+1}$ lie two tetrahedra, as in \reffig{Pillowcases}. Each tetrahedron has exactly one ideal vertex on each of the four cusps. Thus the cusp triangulation of $N$ consists of four disjoint cusp neighborhoods. Each cusp neighborhood meets each $S_i$ in the same order, and each cusp neighborhood meets each tetrahedron in the decomposition in the same order. Thus the four cusp triangulations look identical at this stage, at least combinatorially. We will describe one of these four cusps.
In order to see the pattern of tetrahedra in one of these cusps, note that there will be a stack of triangles in each cusp, each triangle corresponding to the tip of a tetrahedron. By \refprop{2BridgeTriang}, the triangles will be sandwiched between 4-punctured spheres $S_i$ and $S_{i+1}$, with the bottom of the stack of triangles bounded by $S_2$ and the top by $S_{C-1}$. (Recall that the 4-punctured sphere $S_i$ actually lies in a block $S_i\times I$, so when we refer to $S_i$ in the following it may be helpful to recall that we are referring to a surface isotopic to $S_i\times\{t\}$ for appropriate $t$.)
When we run along a meridian of the cusp on $S_i$, we stay on edges of the cusp triangulation. Moreover, note that we pass over exactly three ideal edges; see the left of \reffig{CuspSpheres}. Thus in the cusp triangulation, running along such a meridian on the surface $S_i$ will correspond to running over three edges of triangles. This is shown in \reffig{CuspSpheres} for $S_i$. The vertical dotted lines indicate boundaries of a fundamental region for the cusp torus; in this case running from one dotted line to the other corresponds to a meridian.
\begin{figure}[h]
\import{Figures/Ch10_TwoBridge/}{F10-10-CuSphe.eps_tex}
\caption{Form of two meridians running over $S_i$ and $S_{i+1}$}
\label{Fig:CuspSpheres}
\end{figure}
When the $i$-th, $(i+1)$-st, and $(i+2)$-nd blocks all contain vertical crossings, note that the surfaces $S_i$, $S_{i+1}$, and $S_{i+2}$ will all share an edge, namely a horizontal edge in \reffig{4PunctFullTriang}. Similarly, adjacent horizontal crossings also lead to surfaces sharing an edge.
\begin{notation}\label{Not:RLNotation}
In the cusp triangulation we see 4-punctured spheres $S_2, \dots, S_{C-1}$. Give the label $R$ to each 4-punctured sphere $S_i$ corresponding to a block $S_i\times I$ containing a horizontal crossing. Give the label $L$ to each corresponding to a block containing a vertical crossing. A tetrahedron that lies between layers labeled $R$ and $L$ is called a \emph{hinge tetrahedron}\index{hinge tetrahedron}.
\end{notation}
The labels $R$ and $L$ are given for historical reasons; they refer to moves to the right and left for a path in the Farey graph given by the rational number of our tangle. We won't delve into the history of this notation here, but we will use this notation for ease of reference with other literature. For more information see \cite{GueritaudFuter:2bridge}.
\begin{example}
A cusp triangulation for an example $N$ is shown in \reffig{CuspExample}. That figure follows some standard conventions. Because we have many surfaces $S_i$, we connect edges of $S_i$ to form a single connected jagged line, identifiable as one surface in the cusp, and put a little space between multiple such surfaces at a vertex they share. We also put vertices of the triangles in two columns (in a fundamental domain). Finally, we shade the hinge layers.
\begin{figure}[h]
\import{Figures/Ch10_TwoBridge/}{F10-11-Examp.eps_tex}
\caption{On the right is shown the cusp triangulation of one of the four cusps of $N$ on the left}
\label{Fig:CuspExample}
\end{figure}
Note in the figure that as we move from inside to outside, we move from the bottom of the cusp triangulation to the top. Tetrahedron $T_2^1$ lies between the surface $S_2$ associated with the second innermost crossing (vertical, $L$) and the surface $S_3$ associated with the third innermost crossing (horizontal, $R$). It is a hinge tetrahedron. Tetrahedron $T_3^1$ lies between $S_3$ and $S_4$, both of which are associated with horizontal crossings, $R$. Note there is an ideal edge shared by all three surfaces $S_2$, $S_3$, and $S_4$, and this corresponds to a shared vertex of $T_2^1$ and $T_3^1$ in the cusp triangulation (in the center).
\end{example}
Now we determine what happens to cusps when we put in the innermost and outermost crossings. At the outermost crossing, note that the cusp corresponding to the vertex SE becomes identified with the cusp corresponding to the vertex NW, and similarly for SW and NE. Thus the four identical cusp triangulations we have obtained so far will be glued. Recall that the gluing is along triangle faces of $S_{C-1}$ in the case of the outermost crossing. The faces of $T_{C-2}^1$ are glued to faces of $T_{C-2}^2$. The result is a ``folding'' of triangles. See \reffig{HairpinTurn}. We call this a \emph{hairpin turn}\index{hairpin turn}.
\begin{figure}[h]
\import{Figures/Ch10_TwoBridge/}{F10-12-Hairpn.eps_tex}
\caption{Gluing tetrahedra across $S_{C-1}$ yields a hairpin turn}
\label{Fig:HairpinTurn}
\end{figure}
If $K$ is a knot and we follow a longitude of the cusp, starting at one of the corners of $S_2$, we will see 4-punctured spheres $S_2, S_3, \dots, S_{C-2}$, then a hairpin turn on $S_{C-1}$ corresponding to the outside crossing as in \reffig{HairpinTurn}. Continuing, we will pass $S_{C-2}, S_{C-3}, \dots, S_3$, then another hairpin turn on $S_2$ corresponding to the inside crossing, then $S_3, \dots, S_{C-1}$ and a hairpin turn, and finally $S_{C-2}, \dots, S_3$ and the original $S_2$ with a hairpin turn. A hairpin turn appears in the cusp triangulation as a single edge stretching across a meridian, adjacent to two triangles whose third vertex is 3-valent.
We summarize:
\begin{proposition}\label{Prop:2BridgeCuspTriang}
Let $K:=K[a_{n-1}, \dots, a_1]$ be a 2-bridge knot\index{2-bridge knot or link}\index{2-bridge knot or link!cusp triangulation} with at least two twist regions, such that either $a_i>0$ for all $i$, or $a_i<0$ for all $i$, and $|a_1|\geq 2$ and $|a_{n-1}|\geq 2$. Let $C=|a_1|+\dots+|a_{n-1}|$ denote the number of crossings of $K$. The cusp triangulation of $K$ has the following properties.
\begin{itemize}
\item It is made of four pieces, each piece bookended by hairpin turns corresponding to 4-punctured spheres $S_2$ and $S_{C-1}$. Between lies a sequence of 4-punctured spheres $S_3, \dots, S_{C-2}$. We call the 4-punctured spheres \emph{zig-zags}.\index{zig-zag}
\item The first and third pieces are identical; the second and fourth are also identical, and are given by rotating the first piece $180^\circ$ about a point in the center of the edge of the final hairpin turn (and swapping some labels $T_i^1$ and $T_i^2$). Thus the second and fourth pieces follow the first in reverse.
\item When running in a longitudinal direction, the first piece begins with $|a_{n-1}|-1$ zig-zags labeled $L$; the first of these is $S_{C-1}$, the hairpin turn corresponding to the outside crossing of the knot. These zig-zags are followed by $|a_{n-2}|$ zig-zags labeled $R$, then $|a_{n-3}|$ labeled $L$, and so on. If $n$ is even, finish with $|a_1|-1$ zig-zags labeled $L$, the last of which is the final hairpin turn, corresponding to $S_2$ at the inside crossing. If $n$ is odd, the final $|a_1|-1$ zig-zags are labeled $R$.
\item A meridian follows a single segment of the zig-zag in a hairpin turn, or three segments of any other zig-zag.
\end{itemize}
Note we see each $S_i$ exactly four times, including seeing $S_2$ twice for each of the two hairpin turns in the cusp triangulation corresponding to the inside crossing, and seeing $S_{C-1}$ twice for each hairpin turn corresponding to the outside crossing.
\end{proposition}
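For instance, for $K[3,2]$ we have $n=3$ and $C=5$, and \refprop{2BridgeCuspTriang} says that each of the four pieces consists of three zig-zags: a hairpin turn at $S_4$ labeled $L$, then $S_3$ labeled $L$, then a hairpin turn at $S_2$ labeled $R$. The only hinge tetrahedra are $T_2^1$ and $T_2^2$, lying between the $R$ layer $S_2$ and the $L$ layer $S_3$.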
An example sketched by SnapPy (\cite{SnapPy}) is shown in \reffig{SnapPyExample}.
\begin{figure}[h]
\includegraphics{Figures/Ch10_TwoBridge/F10-13a-SnapEx.eps}
\hspace{.1in}
\includegraphics{Figures/Ch10_TwoBridge/F10-13b-SnapEx.eps}
\caption{An example of a 2-bridge knot\index{2-bridge knot or link!cusp triangulation} and its cusp triangulation from SnapPy. The shaded region shows a fundamental domain for the cusp torus, stretching from one hairpin turn through three others back to the same hairpin turn.}
\label{Fig:SnapPyExample}
\end{figure}
\section{Positively oriented tetrahedra}
The triangulation described in the last section has nice geometry. In particular, when the 2-bridge link\index{2-bridge knot or link} has at least two twist regions, we can find angle structures\index{angle structure} on the triangulation. These can be used to prove that the 2-bridge link is hyperbolic (\refcor{2BridgeHyperbolic}), and to show that in the complete hyperbolic structure on the link complement, the tetrahedra are all geometric. Thus we will obtain our first infinite class of knots and links with known geometric triangulations.\index{geometric triangulation}
The main theorem of the next two sections is \refthm{2BridgeGeometric}, below. It was originally proved by Futer in the appendix to \cite{GueritaudFuter:2bridge}.
\begin{theorem}\label{Thm:2BridgeGeometric}
Let $K$ be a 2-bridge knot or link\index{2-bridge knot or link}\index{2-bridge knot or link!triangulation} with a reduced alternating diagram\index{alternating knot or link}\index{alternating diagram} with at least two twist regions. Let $\mathcal{T}$ be the triangulation of $S^3-K$ as described above.
Then $S^3-K$ is hyperbolic, and in the complete hyperbolic structure on $S^3-K$, all tetrahedra of $\mathcal{T}$ are positively oriented.\index{positively oriented tetrahedron}\index{tetrahedron!positively oriented}
\end{theorem}
The proof of \refthm{2BridgeGeometric} uses angle structures\index{angle structure} on $\mathcal{T}$, as in \refdef{AngleStructures}, and is done in two steps. First, the space of angle structures\index{angle structure} $\mathcal{A}(\mathcal{T})$ is shown to be non-empty. In \refthm{HypAngleStruct}, we showed that the existence of an angle structure\index{angle structure} is enough to conclude that the manifold admits a hyperbolic structure. We conclude that these 2-bridge link complements are hyperbolic.
Second, in the next section, we show that the volume functional\index{volume functional} cannot achieve its maximum on the boundary of $\mathcal{A}(\mathcal{T})$. This is all that is needed: because the volume functional is strictly concave down on $\mathcal{A}(\mathcal{T})$ (\reflem{DerivativesVol}), it achieves a maximum in the interior of $\mathcal{A}(\mathcal{T})$. By \refthm{VolAngleStructs}, the maximum corresponds to the complete hyperbolic structure, and at that structure, all angles are strictly positive, meaning all tetrahedra are geometric --- positively oriented.\index{positively oriented tetrahedron}\index{tetrahedron!positively oriented}
\begin{proposition}\label{Prop:2BridgeAngleNonempty}
Let $\mathcal{T}$ be the triangulation of a 2-bridge knot or link\index{2-bridge knot or link!angle structure} complement with at least two twist regions, as described above. Then the space of angle structures\index{angle structure} $\mathcal{A}(\mathcal{T})$ is nonempty.
\end{proposition}
The proof of the proposition is not hard, but requires additional notation. First, we need to label the angles of each of the tetrahedra constructed in the previous section. Remember that the tetrahedra were constructed in pairs, and the pairs of tetrahedra lie between two 4-punctured spheres of the manifold $N=S\times[a,b]$, as in \reffig{Pillowcases}. In order to show some angle structure\index{angle structure} exists, we will first assume that the angles on each of these pairs of tetrahedra agree. Let $z_i$ denote the angle on the outside diagonal edges of tetrahedra $T_i^1$ and $T_i^2$. Because opposite edges have the same angle, $z_i$ is also the angle on the inside diagonal edge. Denote the angle at the horizontal edges by $x_i$ and the angle at vertical edges by $y_i$. We may add these angles to the cusp triangulation. The cusp triangulation was obtained by adding layers of zig-zagging 4-punctured spheres. Each 4-punctured sphere shares two edges with the previous 4-punctured sphere, and has one new edge. In the cusp triangulation, this forms a sequence of triangles in which two vertices are shared, but one new vertex is added. The new vertex corresponds to the diagonal edge, so is labeled $z_i$. Note that angles labeled $x_i$ are glued together, as are angles labeled $y_i$. Finally, we have oriented tetrahedra so that angles read $x_i$, $y_i$, $z_i$ in clockwise order around a cusp triangle. This completely determines the labeling on all the cusp triangles of $N=S\times[a,b]$. An example is shown in \reffig{CuspLabels}.
\begin{figure}[h]
\import{Figures/Ch10_TwoBridge/}{F10-14-Label.eps_tex}
\caption{Labels on the cusp triangulation of $N = S\times [a,b]$ for an example}
\label{Fig:CuspLabels}
\end{figure}
In the example, a 4-punctured sphere corresponding to a horizontal crossing is labeled $R$, and one corresponding to a vertical crossing is labeled $L$, as in \refnot{RLNotation}.
Now $(x_i,y_i,z_i)$ give us labels for the angles of all the tetrahedra in the 2-bridge link complement. We will need $x_i+y_i+z_i=\pi$ for each $i$ to satisfy condition \refitm{VertexSum} of the definition of an angle structure,\index{angle structure} \refdef{AngleStructures}. We also need sums of angles around edge classes to be $2\pi$.
Away from hairpin turns, the edge gluings of $S^3-K$ agree with those of $N=S\times[a,b]$, so we will first consider angle sums around edges of $N$, simplifying these conditions to a system of equations in terms of the $z_i$ alone, then deal with hairpin turns later.
There will be four cases depending on whether the $i$-th tetrahedron lies between two horizontal crossings, two vertical crossings, a horizontal followed by a vertical crossing, or a vertical followed by a horizontal crossing. These cases are denoted by $RR$, $LL$, $RL$, and $LR$, respectively.
The labels for two consecutive $LL$ 4-punctured spheres are shown in \reffig{LLCusp}. Note in this case, there is a 4-valent vertex in the cusp triangulation (or 4-valent ideal edge in the decomposition into tetrahedra). In order for the angle sum around this edge to be $2\pi$, we need $2x_i + z_{i+1}+z_{i-1}=2\pi$, or $x_i = {\frac{1}{2}}(2\pi-z_{i-1} - z_{i+1})$. Then in order for $x_i+y_i+z_i=\pi$, we need $y_i= {\frac{1}{2}}(z_{i-1} - 2z_i + z_{i+1})$.
\begin{figure}[h]
\import{Figures/Ch10_TwoBridge/}{F10-15-LL.eps_tex}
\caption{Labels in the $LL$ case}
\label{Fig:LLCusp}
\end{figure}
A similar picture occurs in the $RR$ case. Again there is a 4-valent vertex, and reading the labels around that vertex we find that we need the formulas $y_i = {\frac{1}{2}}(2\pi-z_{i-1}-z_{i+1})$, and $x_i = {\frac{1}{2}}(z_{i-1}-2z_i+z_{i+1})$.
In the $LR$ and $RL$ cases, there is not a single edge all of whose labels we can read off the diagram. In these cases, we find restrictions by considering \emph{pleating angles}\index{pleating angle}. Pleating angles $\alpha_1$, $\alpha_2$, and $\alpha_3$ are the angles determining the bending of the pleated\index{pleated surface} 4-punctured sphere. They are shown for 4-punctured spheres labeled $L$ and $R$ in \reffig{PleatingAngles}.
\begin{figure}[h]
\import{Figures/Ch10_TwoBridge/}{F10-16-PlAng.eps_tex}
\caption{Pleating angles for 4-punctured spheres}
\label{Fig:PleatingAngles}
\end{figure}
\begin{lemma}\label{Lem:PleatingAngles}
If the angle structure\index{angle structure} gives a Euclidean structure\index{Euclidean structure} on the cusp, then it will be the case that pleating angles as in \reffig{PleatingAngles} satisfy $\alpha_1+\alpha_2-\alpha_3=0$.
\end{lemma}
\begin{proof}
Exercise.
\end{proof}
To find an angle structure,\index{angle structure} we will assume this pleating condition holds in the $LR$ and $RL$ case.
The $LR$ labels are shown in \reffig{LRCusp}. Note that the pleating angles for the 4-punctured sphere at the bottom of the diagram are $\alpha_1=\pi-z_i$, $\alpha_2 = \pi-(2y_i+z_{i+1})$, and $\alpha_3=\pi-z_{i-1}$. Thus the condition $\alpha_1+\alpha_2-\alpha_3=0$ implies $y_i={\frac{1}{2}}(\pi+z_{i-1}-z_i-z_{i+1})$. The pleating angles on the 4-punctured sphere on the top of the diagram in \reffig{LRCusp} are $\alpha_1=\pi-(2x_i+z_{i-1})$, $\alpha_2=\pi-z_i$, and $\alpha_3=\pi-z_{i+1}$. Thus the pleating condition for this 4-punctured sphere gives $x_i={\frac{1}{2}}(\pi-z_{i-1}-z_i+z_{i+1})$. Conditions can be obtained in a similar manner in the $RL$ case.
\begin{figure}[h]
\import{Figures/Ch10_TwoBridge/}{F10-17-LR.eps_tex}
\caption{Labels in the $LR$ case}
\label{Fig:LRCusp}
\end{figure}
In summary, away from hairpin turns, labels must satisfy the conditions given in \reftable{LabelConditions}.
\begin{table}
\begin{tabular}{ccc}
& $LL$ & $RR$ \\
\hline\noalign{\smallskip}
$x_i$ & ${{\frac{1}{2}}(2\pi-z_{i-1} - z_{i+1})}$ & ${{\frac{1}{2}}(z_{i-1}-2z_i+z_{i+1})}$ \vspace{.1in} \\
$y_i$ & ${{\frac{1}{2}}(z_{i-1} - 2z_i + z_{i+1})}$ & ${{\frac{1}{2}}(2\pi-z_{i-1}-z_{i+1})}$ \\
$z_i$ & $z_i$ & $z_i$ \\
\multicolumn{3}{ c } {\vspace{.1in}} \\
& $LR$ & $RL$ \\
\hline\noalign{\smallskip}
$x_i$ & ${{\frac{1}{2}}(\pi-z_{i-1}-z_i+z_{i+1})}$ & ${{\frac{1}{2}}(\pi+z_{i-1}-z_i-z_{i+1})}$ \vspace{.1in} \\
$y_i$ & ${{\frac{1}{2}}(\pi+z_{i-1}-z_i-z_{i+1})}$ & ${{\frac{1}{2}}(\pi-z_{i-1}-z_i+z_{i+1})}$ \\
$z_i$ & $z_i$ & $z_i$ \smallskip
\end{tabular}
\caption{Label conditions in terms of the $z_i$}
\label{Table:LabelConditions}
\end{table}
Notice this allows us to express $x_i$ and $y_i$ in terms of $z_{i-1}, z_i,$ and $z_{i+1}$ alone. Note also that the sum of the angles $x_i+y_i+z_i=\pi$ in each case.
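For instance, in the $LR$ column, $x_i+y_i+z_i = {\frac{1}{2}}(\pi-z_{i-1}-z_i+z_{i+1}) + {\frac{1}{2}}(\pi+z_{i-1}-z_i-z_{i+1}) + z_i = \pi$, and the other three columns are just as quick to check.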
Finally, we claim that with the conditions in \reftable{LabelConditions}, the angle sum around each edge in $N$ is $2\pi$. To see this, note first that we have constructed the angles so that the sum is $2\pi$ around 4-valent edges. We now check the remaining edges. The angle sum around one such edge will be
\[ z_{j-1} + 2x_j + \sum_{i=j+1}^{k-1} 2x_i + 2x_k + z_{k+1},\]
where $j$ and $k$ are indices of hinge tetrahedra, with $\Delta_j$ of type $LR$ and $\Delta_k$ of type $RL$, $j<k$, and all 4-punctured spheres between them labeled $R$; refer to \reffig{CuspLabels}.
By the formulas in the tables, this is
\[ z_{j-1} + \pi-z_{j-1}-z_j+z_{j+1} + \sum_{i=j+1}^{k-1} (z_{i-1}-2z_i+z_{i+1}) + \pi+z_{k-1}-z_k-z_{k+1} + z_{k+1}. \]
This is a telescoping sum; all terms cancel except $2\pi$, as desired.
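Explicitly, the middle sum telescopes:
\[ \sum_{i=j+1}^{k-1} (z_{i-1}-2z_i+z_{i+1}) = \sum_{i=j+1}^{k-1} \big[ (z_{i+1}-z_i) - (z_i - z_{i-1}) \big] = (z_k - z_{k-1}) - (z_{j+1} - z_j), \]
and substituting this into the previous expression leaves exactly $2\pi$.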
The angle sum around another such edge will be
\[ z_{j-1} + 2y_j + \sum_{i=j+1}^{k-1} 2y_i + 2y_k + z_{k+1},\]
where $j$ and $k$ are hinge indices, with $\Delta_j$ of type $RL$ and $\Delta_k$ of type $LR$, $j<k$, and all 4-punctured spheres between them labeled $L$. Again one checks that everything cancels except $2\pi$.
We still need to consider the hairpin turns. With the gluing that comes from a hairpin turn, labels are as shown in \reffig{HairpinLabels}, for the $LL$ case. The cases $RR$, $LR$, $RL$ are similar (exercise).
\begin{figure}
\import{Figures/Ch10_TwoBridge/}{F10-18-HairLa.eps_tex}
\caption{Labels from a hairpin turn}
\label{Fig:HairpinLabels}
\end{figure}
If we set $z_1=0$, where $z_1$ denotes the interior angle at which the 4-punctured sphere $S_2$ is bent at the hairpin turn, then all the equations in \reftable{LabelConditions} hold, depending on whether the hairpin turn occurs in the case $LL$, $RR$, $LR$, or $RL$. It remains only to check the edge equations. For the edge at the sharp bend, the equation will be identical to one of the previous equations, only now with the angle $z_1=0$ included. The sum is still $2\pi$. As for the final edges, in the case that $S_2$ is labeled $R$, these contribute $2z_2 + 4x_2 + \dots$, where the remaining terms depend on whether the hairpin turn occurs at a hinge or not. In either case, the sum is $2\pi$. Similarly when $S_2$ is labeled $L$, and similarly for the outside hairpin turn that occurs at the 4-punctured sphere $S_{C-1}$.
We are now ready to show the space of angle structures is nonempty.
\begin{proof}[Proof of \refprop{2BridgeAngleNonempty}]
We show the space of angle structures\index{angle structure} is nonempty by showing there is a choice of $(z_1, z_2, \dots, z_{C-2}, z_{C-1})$ with $z_1=z_{C-1}=0$, all other $z_i\in (0,\pi)$, and $x_i, y_i \in (0,\pi)$. For this to hold, the equations in \reftable{LabelConditions} tell us that:
\begin{equation}\label{Eqn:2BridgeZConditions}
\begin{cases}
2z_i < z_{i-1}+z_{i+1} & \mbox{ if } i \mbox{ is not a hinge ($LL$ or $RR$)} \\
|z_{i+1}-z_{i-1}| < \pi-z_i & \mbox{ if } i \mbox{ is a hinge index ($LR$ or $RL$)} \\
\end{cases}
\end{equation}
The first equation is called the \emph{convexity equation}.\index{convexity equation} The second is the \emph{hinge equation}.\index{hinge equation} Both come directly from \reftable{LabelConditions}: at a non-hinge index, the convexity equation says exactly that the angle ${\frac{1}{2}}(z_{i-1}-2z_i+z_{i+1})$ is positive, while at a hinge index, the hinge equation says exactly that both angles ${\frac{1}{2}}(\pi\pm(z_{i+1}-z_{i-1})-z_i)$ are positive.
We now exhibit such a point. Let $z_1=z_{C-1}=0$. For each hinge index $i$, let $z_i=\pi/3$. Between hinge indices, choose a sequence satisfying the convexity equations. For example, if $j$ and $k$ are consecutive hinge indices with $j<k$, then for all $j\leq i\leq k$, take
\[ z_i = \frac{\pi}{3} - \frac{2(i-j)(k-i)}{(k-j)^2}. \]
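This choice does satisfy the required inequalities. For $j<i<k$, a direct computation gives
\[ z_{i-1} + z_{i+1} - 2z_i = \frac{4}{(k-j)^2} > 0, \]
so the convexity equations hold between hinge indices, and each such $z_i$ is positive since the subtracted term is at most $\frac{1}{2} < \frac{\pi}{3}$. Moreover, every $z_i$ lies in $[0,\pi/3]$, so at each hinge index $|z_{i+1}-z_{i-1}| \leq \pi/3 < 2\pi/3 = \pi - z_i$, and the hinge equations hold as well.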
If the first or last twist region has more than two crossings, there are also indices between $z_1$ (respectively $z_{C-1}$) and the nearest hinge index; for these one can take, for example, the convex quadratic interpolation $z_i = \frac{\pi}{3}\left(\frac{i-1}{h-1}\right)^2$ for $1\leq i\leq h$, where $h$ is the first hinge index (and similarly at the other end), and the convexity and hinge equations again hold. Then the sequence $(z_1, z_2, \dots, z_{C-2}, z_{C-1})$ satisfies all required conditions. Letting $x_i$ and $y_i$ be as in the tables, this gives an angle structure.
\end{proof}
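The conditions \refeqn{2BridgeZConditions} are also easy to check by computer. The short Python sketch below is an illustration only (the script and its variable names are ours, not part of the construction); it verifies the conditions for the choice of $z_i$ above in the case of $K[2,3,2]$, where $C=7$, the twist regions contribute crossings $\{1,2\}$, $\{3,4,5\}$, and $\{6,7\}$, and the hinge tetrahedra are $\Delta_2$ and $\Delta_5$.
\begin{verbatim}
from math import pi

C = 7                            # crossings of K[2,3,2]
hinges = {2, 5}                  # hinge tetrahedron indices

z = {1: 0.0, C - 1: 0.0}         # bookkeeping values at the hairpin turns
for i in hinges:                 # z = pi/3 at each hinge index
    z[i] = pi / 3
j, k = 2, 5                      # consecutive hinge indices
for i in range(j + 1, k):        # quadratic interpolation between hinges
    z[i] = pi / 3 - 2 * (i - j) * (k - i) / (k - j) ** 2

for i in range(2, C - 1):        # check convexity and hinge equations
    if i in hinges:
        assert abs(z[i + 1] - z[i - 1]) < pi - z[i]
    else:
        assert 2 * z[i] < z[i - 1] + z[i + 1]
    assert 0 < z[i] < pi

print("conditions hold for K[2,3,2]")
\end{verbatim}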
\begin{corollary}\label{Cor:2BridgeHyperbolic}
Let $K[a_{n-1}, \dots, a_1]$ be a 2-bridge knot or link\index{2-bridge knot or link}\index{2-bridge knot or link!hyperbolic} with $a_i>0$ for all $i$, or $a_i<0$ for all $i$, and $|a_1|\geq 2$ and $|a_{n-1}|\geq 2$. Assume also that $n\geq 3$, so there are at least two twist regions in the diagram of $K$ given by the denominator closure of the rational tangle $T(0,a_{n-1}, \dots, a_1)$. Then $S^3-K$ is hyperbolic.
\end{corollary}
\begin{proof}
The link complement $S^3-K$ admits a triangulation as in \refprop{2BridgeTriang}. Then \refprop{2BridgeAngleNonempty} implies the set of angle structures\index{angle structure} on this triangulation is nonempty. By \refthm{HypAngleStruct} in \refchap{Essential}, any manifold admitting an angle structure must also admit a hyperbolic structure.
\end{proof}
\Refcor{2BridgeHyperbolic} is a special case of a stronger theorem due to Menasco determining when any alternating knot or link is hyperbolic \cite{menasco:alt}. We will return to that theorem in \refchap{Alternating}.
\begin{remark}
\Refcor{2BridgeHyperbolic} will also follow from \refthm{2BridgeGeometric}, which we will finish proving in the next section, by an appeal to \refthm{VolAngleStructs} (volume and angle structures).\index{angle structure} While the proof of \refcor{2BridgeHyperbolic} given above appears short, in fact recall that the proof of \refthm{HypAngleStruct} requires the difficult hyperbolization theorem of Thurston, \refthm{SfcesHyperbolic}, whose proof is beyond the scope of this book. By contrast, finishing the proof of \refthm{2BridgeGeometric} requires only calculus and some calculations, and we go through it in the next section. Moreover, when finished, we will additionally know that the hyperbolic structure on 2-bridge links arises from a geometric triangulation\index{geometric triangulation} of the link complements, and that triangulation can be explicitly described. Thus a proof of \refcor{2BridgeHyperbolic} using the calculations in the next section is in many ways a ``better'' proof, worth finishing.
\end{remark}
\section{Maximum in interior}
In this section we conclude the proof of \refthm{2BridgeGeometric}, by proving the following.
\begin{proposition}\label{Prop:MaxInInterior}
For the 2-bridge links\index{2-bridge knot or link}\index{2-bridge knot or link!angle structure} of \refprop{2BridgeAngleNonempty}, the volume functional\index{volume functional} $\mathcal{V}\from \mathcal{A}(\mathcal{T}) \to {\mathbb{R}}$ cannot have a maximum on the boundary of the space of angle structures.\index{angle structure}
\end{proposition}
\begin{remark}[Summary of proof]
The proof is given by a series of lemmas and calculations, and is quite technical. However, the idea of the proof is straightforward. First, we show that we can use the conditions on angle structures\index{angle structure} obtained in \reftable{LabelConditions}; this is done in \reflem{VolMaxAtSymmPt}. We then assume the maximum occurs on the boundary. Using the conditions of \reftable{LabelConditions}, we find restrictions on the tetrahedra that arise; this is done in \reflem{FlatTetrConsequences}. Finally, we show that in all cases that remain there is a path from the purported maximum on the boundary of the space of angle structures\index{angle structure} to the interior for which the directional derivative of $\mathcal{V}$ is strictly increasing. This contradicts the fact that the boundary point is a maximum.
\end{remark}
We note that the technical arguments required for the proof of \refprop{MaxInInterior} are used only in this section, and are not required for other chapters of the book.
Let $K$ be a 2-bridge knot or link as in \refprop{2BridgeAngleNonempty}. To obtain angle structures\index{angle structure} on $S^3-K$, we made some simplifying assumptions in the proof of \refprop{2BridgeAngleNonempty}. Namely, when constructing the triangulation $\mathcal{T}$, we had two tetrahedra $T_i^1$ and $T_i^2$ at each level, and we assumed that the angles on the two tetrahedra agreed. This led to the calculations of the previous section.
\begin{lemma}\label{Lem:VolMaxAtSymmPt}
The maximum of the volume functional\index{volume functional} $\mathcal{V}\from \mathcal{A}(\mathcal{T}) \to {\mathbb{R}}$ must occur at a point for which the angles $(x_i^1,y_i^1,z_i^1)$ of $T_i^1$ agree with those $(x_i^2,y_i^2,z_i^2)$ of $T_i^2$, for all $i$, where $T_i^1$ and $T_i^2$ are the two tetrahedra constructed at the $i$-th level.
\end{lemma}
\begin{proof}
Suppose the volume is maximized at an angle structure\index{angle structure} $A$ for which the angles of $T_i^1$ and $T_i^2$ do not agree for some $i$. Because of the symmetry of the construction of $\mathcal{T}$, note that we obtain a new angle structure $A'$ by swapping the angles of $T_i^1$ with the corresponding angles of $T_i^2$, for all tetrahedra of $A$. Note that since $A$ and $A'$ contain isometric ideal tetrahedra, $\mathcal{V}(A)=\mathcal{V}(A')$. Then $A$ and $A'$ are distinct angle structures, and the volume is maximized on both.
By \refthm{VolConcaveDown}, the volume functional\index{volume functional} is strictly concave down on $\mathcal{A}(\mathcal{T})$. Thus if the volume obtains its maximum in the interior, then that maximum is unique, and the fact that $\mathcal{V}(A)=\mathcal{V}(A')$ gives an immediate contradiction in this case. If $A$ lies on the boundary, then $A'$ also lies on the boundary. Because $\mathcal{A}(\mathcal{T})$ is convex (\refprop{AngleStructsPoly}), the line between $A$ and $A'$ lies in $\mathcal{A}(\mathcal{T})$. But then along this line, the second directional derivative in the direction of the line is strictly negative, which implies the maximum cannot occur at the endpoints. This is a contradiction.
\end{proof}
By \reflem{VolMaxAtSymmPt}, we may assume angles of $T_i^1$ and $T_i^2$ agree. Thus we may use the conditions on angles in \reftable{LabelConditions} that we calculated in the previous section to prove \refprop{MaxInInterior}.
Now assume that the maximum of $\mathcal{V}$ does occur on the boundary of the space of angle structures\index{angle structure} $\mathcal{A}(\mathcal{T})$. Then there will be a flat tetrahedron (or more accurately, a pair of flat tetrahedra). We will slowly narrow in on what type of tetrahedron it is, and where it occurs in the triangulation.
Note we will switch notation slightly. Rather than referring to the two tetrahedra between $S_i$ and $S_{i+1}$ as $T_i^1$ and $T_i^2$, we will simply refer to such a tetrahedron by $\Delta_i$. Because the angles of $T_i^1$ and $T_i^2$ can be assumed to agree by \reflem{VolMaxAtSymmPt}, this will simplify our notation.
\begin{lemma}\label{Lem:FlatTetrConsequences}
Suppose the maximum of the volume functional\index{volume functional} occurs on the boundary of $\mathcal{A}(\mathcal{T})$.
\begin{enumerate}
\item[(1)] Then there exists a flat tetrahedron in the triangulation of the 2-bridge link.
\item[(2)] The flat tetrahedron is not adjacent to any other flat tetrahedra.
\item[(3)] The flat tetrahedron is not adjacent to a hairpin turn.
\item[(4)] The flat tetrahedron occurs at a hinge, and satisfies $z_i=\pi$, and for the two adjacent tetrahedra, $z_{i-1}=z_{i+1}$.
\item[(5)] If some tetrahedron $\Delta_i$ is type LL or RR, then $\Delta_{i-1}$ and $\Delta_{i+1}$ cannot both be flat.
\end{enumerate}
\end{lemma}
\begin{proof}
By \refprop{NoSingleDegenerate}, if the volume takes its maximum at an angle structure\index{angle structure} for which a tetrahedron has an angle equal to $0$, then it must have two angles equal to $0$ and one equal to $\pi$. This is a flat tetrahedron. Because we are assuming the maximum is on the boundary, there must be a flat tetrahedron in the triangulation, say tetrahedron $\Delta_i$ is flat, where $2\leq i\leq C-2$. This proves item (1).
There are three cases for the angles, namely $(x_i, y_i, z_i)$ can equal $(0,0,\pi)$, $(0,\pi,0)$, or $(\pi, 0,0)$. There are also four possibilities for the tetrahedron: type $LL$, $RR$, $LR$, or $RL$. The equations of \reftable{LabelConditions} give us angles of adjacent tetrahedra in all cases, and an analysis of these will lead to the conclusions of the lemma.
\bigskip
\subsubsection*{Case $(x_i,y_i,z_i) = (0,0,\pi)$}
\begin{enumerate}
\item[$LL$,] $RR$: Equations of \reftable{LabelConditions} imply
\[0 = {\frac{1}{2}}(2\pi-z_{i-1}-z_{i+1}), \mbox{ which implies } z_{i-1}=z_{i+1}=\pi.\]
In this case, both adjacent tetrahedra must be flat.
\item[$LR$,] $RL$: Equations of \reftable{LabelConditions} imply
\[0 = {\frac{1}{2}}(z_{i+1}-z_{i-1}), \mbox{ or } z_{i-1}=z_{i+1}. \]
Note in this case, it is not necessarily true that both adjacent tetrahedra are flat, but if one is flat then so is the other.
\end{enumerate}
\subsubsection*{Case $(x_i,y_i,z_i)=(0,\pi,0)$}
\begin{enumerate}
\item[$LL$:] $0={\frac{1}{2}}(2\pi-z_{i-1}-z_{i+1})$ implies $z_{i-1}=z_{i+1}=\pi$.
\item[$RR$:] $0={\frac{1}{2}}(z_{i-1}+z_{i+1})$ implies $z_{i-1}=z_{i+1}=0$.
\item[$LR$:] $0={\frac{1}{2}}(\pi-z_{i-1}+z_{i+1})$ implies $\pi+z_{i+1}=z_{i-1}$. Since angles lie in $[0,\pi]$, it follows that $z_{i+1}=0$ and $z_{i-1}=\pi$.
\item[$RL$:] Similar to the last case, $z_{i+1}=\pi$ and $z_{i-1}=0$.\\
For all types of tetrahedra in this case, the two tetrahedra adjacent to $\Delta_i$ are flat.
\end{enumerate}
\subsubsection*{Case $(x_i,y_i,z_i)=(\pi,0,0)$}
\begin{enumerate}
\item[$LL$:] Equations of \reftable{LabelConditions} imply $z_{i-1}=z_{i+1}=0$.
\item[$RR$:] $z_{i-1}=z_{i+1}=\pi$.
\item[$LR$:] $z_{i-1}=0$, $z_{i+1}=\pi$.
\item[$RL$:] $z_{i+1}=0$, $z_{i-1}=\pi$. \\
Again this shows that the two adjacent tetrahedra are both flat in this case.
\end{enumerate}
In all cases, if two adjacent tetrahedra are flat, then the next adjacent tetrahedron is also flat. It follows that if there are two adjacent flat tetrahedra, then all tetrahedra are flat, and the structure has zero volume, which cannot be a maximum for the volume. Thus we cannot have two adjacent flat tetrahedra. This proves item (2).
Moreover, the only case that does not immediately force multiple adjacent flat tetrahedra is the first case, with $z_i=\pi$, for the hinge tetrahedra $RL$ or $LR$, and the calculation above gives the relationship $z_{i-1}=z_{i+1}$, proving item (4).
If the tetrahedron is adjacent to a hairpin turn, then $i=2$ or $i=C-2$, and $z_i=\pi$. We also have $z_1=0$ and $z_{C-1}=0$, hence in either case the equations above imply that a next adjacent tetrahedron, corresponding to $z_3$ or $z_{C-3}$, is flat, and thus all tetrahedra are flat, contradicting item (2). This proves (3).
Now suppose $\Delta_{i-1}$ and $\Delta_{i+1}$ are flat. By the previous work, we know $z_{i-1}=z_{i+1}=\pi$. If $\Delta_i$ is type $LL$, the equations of \reftable{LabelConditions} imply $x_i={\frac{1}{2}}(2\pi-\pi-\pi)=0$, so $\Delta_i$ is flat. Similarly if $\Delta_i$ is of type $RR$, then $y_i=0$ and $\Delta_i$ is flat. But then we have three adjacent flat tetrahedra, contradicting item (2). This proves item (5).
\end{proof}
We now know that any flat tetrahedron occurring in a maximum for $\mathcal{V}$ on the boundary has a very particular form. To finish the proof of \refprop{MaxInInterior}, we will show that the maximum cannot occur in the remaining cases. For the argument, we will find a path through the space of angle structures\index{angle structure} starting at the purported maximum for $\mathcal{V}$ on the boundary, and then show that the derivative at time $0$ in the direction of this path is strictly positive. This will contradict the fact that the point is a maximum.
The paths we consider adjust the angles of the flat tetrahedron $\Delta_i$ by
\[ (x_i(\epsilon),y_i(\epsilon),z_i(\epsilon)) = ((1+\lambda)\epsilon, (1-\lambda)\epsilon, \pi -2\epsilon),\]
where $\epsilon\to 0$ and $\lambda$ will be a carefully chosen constant.
In such a path, we will leave as many angles unchanged away from the $i$-th tetrahedron as possible. However, the equations in \reftable{LabelConditions} imply that many angles of adjacent tetrahedra must change with $\epsilon$ as well.
Recall from \reflem{DerivativesVol} that the derivative of the volume functional\index{volume functional} in the direction of a vector $w=(w_1, \dots, w_n)$ at a point $a=(a_1, \dots, a_n)$ is
\[ \frac{\partial \mathcal{V}}{\partial w} = \sum_{i=1}^{3n} -w_i\log\sin a_i. \]
The terms of the sum are grouped into threes, with each group corresponding to a single tetrahedron, with derivative coming from \refthm{VolConcaveDown}.
\begin{lemma}\label{Lem:FrancoisPath}
Let $\gamma(t)$ be a path through $\overline{\mathcal{A}(\mathcal{T})}$ with the angles of the $i$-th tetrahedron $\Delta_i$ in $\gamma(t)$ satisfying $(x_i,y_i,z_i) = ((1+\lambda)t, (1-\lambda)t,\pi-2t)$. Then the derivative of the volume of $\Delta_i$ along this path at $t=0$ satisfies
\[ \left. \frac{d\operatorname{vol}(\Delta_i)}{dt} \right\rvert_{t=0} = \log \left( \frac{4}{1-\lambda^2}\left(\frac{1-\lambda}{1+\lambda}\right)^\lambda\right).\]
\end{lemma}
\begin{proof}
By \refthm{VolConcaveDown}, the derivative in the direction of $\gamma'(0) = w= ((1+\lambda), (1-\lambda), -2)$ is
\begin{align*}
\frac{\partial \operatorname{vol}}{\partial w} & = \lim_{t\to 0} \big[ -(1+\lambda)\log\sin((1+\lambda)t) -(1-\lambda)\log \sin((1-\lambda)t)\\
& \hspace{.5in} +2\log\sin(\pi-2t) \big].
\end{align*}
Using the first-order approximation $\sin(At)\approx At$ near $t=0$, and noting that the coefficients of $\log t$ cancel because $-(1+\lambda)-(1-\lambda)+2=0$, this becomes
\begin{align*}
\frac{\partial \operatorname{vol}}{\partial w} & =
\lim_{t\to 0} \big[-(1+\lambda)\log((1+\lambda) t) - (1-\lambda)\log((1-\lambda)t) + 2\log(2t)\big] \\
& = \log \left( \frac{4}{(1+\lambda)^{1+\lambda}(1-\lambda)^{1-\lambda}} \right) \\
& = \log \left( \frac{4}{(1+\lambda)(1-\lambda)}\left(\frac{1-\lambda}{1+\lambda}\right)^\lambda \right)\qedhere
\end{align*}
\end{proof}
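As a numerical sanity check of this limit (not needed for the argument), one can evaluate the bracketed expression at a small value of $t$ and compare it with the closed form. In the Python snippet below, $\lambda=0.3$ and $t=10^{-7}$ are arbitrary test values.
\begin{verbatim}
from math import log, sin, pi

lam, t = 0.3, 1e-7

# the expression inside the limit, before the small-angle approximation
bracket = (-(1 + lam) * log(sin((1 + lam) * t))
           - (1 - lam) * log(sin((1 - lam) * t))
           + 2 * log(sin(pi - 2 * t)))

# the closed form from the statement of the lemma
closed = log(4 / (1 - lam ** 2) * ((1 - lam) / (1 + lam)) ** lam)

print(bracket, closed)   # the two values agree to roughly eight decimal places
\end{verbatim}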
We denote the location of a flat tetrahedron by a vertical line:
\[\dots LL|RR\dots.\]
By \reflem{FlatTetrConsequences}, a vertical line can only appear at a hinge: $L|R$ or $R|L$; at least two letters lie between consecutive vertical lines; and patterns $L|RR|L$ and $R|LL|R$ cannot occur. The remaining cases are $LR|LR$ and $RL|RL$, which we deal with simultaneously; $RR|LR$ and $LL|RL$ and their reversals $RL|RR$ and $LR|LL$; and $RR|LL$ and $LL|RR$.
In all cases, we find a path $\gamma(t)$ through $\mathcal{A}(\mathcal{T})$ with $\gamma(0)$ a point on the boundary with the flat tetrahedron specified in the given case.
\subsection*{Case $LR|LR$ and $RL|RL$:}
Begin with the $LR|LR$ case.
Let $\Delta_i$ denote the flat tetrahedron, with $(x_i,y_i,z_i)=(0,0,\pi)$. We take a path $\gamma(t)$ to satisfy $(x_i(t), y_i(t), z_i(t)) = (t,t,\pi-2t)$, i.e.\ $\lambda=0$ in \reflem{FrancoisPath}, and we will keep as many other angles constant as possible. The formulas in \reftable{LabelConditions} imply that angles of tetrahedra $\Delta_{i-1}$ and $\Delta_{i+1}$ must also vary, as in the following table. In the table, we let $z_{i-1}=z_{i+1}=w$ (required by \reflem{FlatTetrConsequences}(4)), and we let $u=z_{i-2}$, $v=z_{i+2}$.
\bigskip
\begin{tabular}{c|c|c|c}
Angle & $\Delta_{i-1}$ & $\Delta_i$ & $\Delta_{i+1}$ \\
\hline\noalign{\smallskip}
$x$ & ${\frac{1}{2}}(2\pi - u -w - 2t)$ & $t$ & ${\frac{1}{2}}(2t - w + v)$ \\
$y$ & ${\frac{1}{2}}(u-w+2t)$ & $t$ & ${\frac{1}{2}}(2\pi - 2t - w - v)$ \\
$z$ & $w$ & $\pi-2t$ & $w$
\end{tabular}
\bigskip
Thus the derivative vector of the path at time $t=0$ is
\[ \gamma'(0) = (0, \dots, 0, \underbrace{-1, 1, 0,}_{\Delta_{i-1}} \underbrace{1,1,-2,}_{\Delta_i} \underbrace{1,-1,0,}_{\Delta_{i+1}} 0, \dots, 0). \]
Hence the derivative of the volume functional\index{volume functional} in the direction of the path is given by
\begin{align*}
\left.\frac{d\mathcal{V}}{dt}\right\rvert_{t=0} & =
\log\sin\left({\frac{1}{2}}(2\pi-u-w)\right) - \log\sin\left({\frac{1}{2}}(u-w)\right) + \log 4 \\
& \hspace{.5in} -\log\sin\left({\frac{1}{2}}(v-w)\right) + \log\sin\left({\frac{1}{2}}(2\pi-v-w)\right) \\
& = \log \left( \frac{ 4 \sin(u/2+w/2)\sin(v/2+w/2)}{\sin(u/2-w/2)\sin(v/2-w/2)} \right) > 0.
\end{align*}
Note this is strictly positive: since $u$, $v$, and $w$ are angles in $[0,\pi]$ with $u\geq w$ and $v\geq w$, we have $\sin(u/2+w/2)\geq\sin(u/2-w/2)$ and $\sin(v/2+w/2)\geq\sin(v/2-w/2)$, so the argument of the logarithm is at least $4$. Hence the volume functional\index{volume functional} cannot have a maximum at this boundary point. The calculation is similar for $RL|RL$.
\subsection*{Remaining cases:}
We will first take care of cases $RR|LR$ and $RR|LL$.
As in the previous case, we will take a path such that the flat tetrahedron $\Delta_i$ changes. This time, we will find a fixed $\lambda$ such that the angles of $\Delta_i$ satisfy $(x_i,y_i,z_i)=((1+\lambda)t,(1-\lambda)t, \pi-2t)$ for $t\in [0, \epsilon]$, for some $\epsilon>0$. At time $t=0$, we require $z_{i-1}=z_{i+1}=w$, a constant. Set $z_{i-2}=u$ and $z_{i+2}=v$, also constant. Additionally, adjust $z_{i-1}$ so that at time $t$, $z_{i-1}=w-2\lambda t$. In the argument below, we will assume that $i-2\neq 1$, so there is a tetrahedron $\Delta_{i-2}$. We also need to consider the case $i-2=1$; we will do this at the very end of the proof.
Assuming $i-2\neq 1$, the angles that are modified are shown in the tables below for the cases $RR|LR$ and $RR|LL$.
\bigskip
\noindent\begin{tabular}{c|cccccccc}
& & $R$ && $R$ & $|$ & $L$ && $R$ \\
Angle & $\Delta_{i-2}$ && $\Delta_{i-1}$ && $\Delta_i$ && $\Delta_{i+1}$ &\\
\hline\noalign{\smallskip}
$x$ & $A+{\frac{1}{2}} w-\lambda t$ && $x_{i-1}(t,\lambda)$ && $(1+\lambda)t$ && $x_{i+1}(t,\lambda)$ &\\
$y$ & $A'-{\frac{1}{2}} w+\lambda t$ && ${\frac{1}{2}}(\pi-u+2t)$ && $(1-\lambda)t$ && ${\frac{1}{2}}(2t - w + v)$ &\\
$z$ & $u$ && $w-2\lambda t$ && $\pi-2t$ && $w$ &
\end{tabular}
\bigskip
\noindent\begin{tabular}{c|cccccccc}
& & $R$ && $R$ & $|$ & $L$ && $L$ \\
Angle & $\Delta_{i-2}$ && $\Delta_{i-1}$ && $\Delta_i$ && $\Delta_{i+1}$ &\\
\hline\noalign{\smallskip}
$x$ & $A+{\frac{1}{2}} w-\lambda t$ && $x_{i-1}(t,\lambda)$ && $(1+\lambda)t$ && ${\frac{1}{2}}(\pi+2t-v)$ &\\
$y$ & $A'-{\frac{1}{2}} w+\lambda t$ && ${\frac{1}{2}}(\pi-u+2t)$ && $(1-\lambda)t$ && $y'_{i+1}(t,\lambda)$ &\\
$z$ & $u$ && $w-2\lambda t$ && $\pi-2t$ && $w$ &
\end{tabular}
\bigskip
Here $A$ and $A'$ are constants, $x_{i-1}(t,\lambda) = {\frac{1}{2}}(u-2w+4\lambda t + \pi-2t)$, $x_{i+1}(t,\lambda) = {\frac{1}{2}}(2\pi-2t - w - v)$, and $y'_{i+1}(t,\lambda) = {\frac{1}{2}}(\pi-2t-2w+v)$.
If $i>3$, we may use the table to compute the derivative in the direction of the path, and find in the case $RR|LR$, $d\mathcal{V}/dt \rvert_{t=0}$ equals:
\begin{equation}\label{Eqn:RRLRDeriv}
\left.\frac{d\mathcal{V}}{dt}\right\rvert_{t=0} = \log \left( \frac{4}{1-\lambda^2}\frac{\sin(\frac{v}{2}+\frac{w}{2})}{\sin(\frac{v}{2}-\frac{w}{2})}
\frac{\sin x_{i-1}}{\sin y_{i-1}} \left( \frac{1-\lambda}{1+\lambda} \cdot
\frac{\sin x_{i-2}}{\sin{y_{i-2}}} \frac{\sin^2 z_{i-1}}{\sin^2 x_{i-1}} \right)^\lambda\right).
\end{equation}
And in the case $RR|LL$, $d\mathcal{V}/dt \rvert_{t=0}$ equals:
\begin{equation}\label{Eqn:RRLLDeriv}
\left.\frac{d\mathcal{V}}{dt}\right\rvert_{t=0} =
\log \left( \frac{4}{1-\lambda^2} \frac{\sin x_{i-1} }{\sin y_{i-1}}
\frac{\sin y_{i+1}}{\sin x_{i+1}}
\left( \frac{1-\lambda}{1+\lambda} \cdot
\frac{\sin x_{i-2}}{\sin y_{i-2}} \frac{\sin^2 z_{i-1}}{\sin^2 x_{i-1}}
\right)^\lambda\right).
\end{equation}
\begin{lemma}\label{Lem:LambdaFact}
Let $X$, $Y$ be positive constants, and let
\[ f(\lambda) = \log \left( \frac{4}{1-\lambda^2}\, X \left(\frac{1-\lambda}{1+\lambda} \, Y \right)^{\lambda} \right). \]
Then $f$ has a critical point at $\lambda = (Y-1)/(Y+1)$, and $f$ takes the value $\log (X (Y+1)^2/Y)$ at this point.
\end{lemma}
\begin{proof}
Calculus.
\end{proof}
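For the reader who wants the details: writing
\[ f(\lambda) = \log 4 - \log(1-\lambda^2) + \log X + \lambda \log\left( \frac{1-\lambda}{1+\lambda}\, Y \right) \]
and differentiating, the terms $\frac{2\lambda}{1-\lambda^2}$ coming from the first logarithm and from the product rule cancel, leaving
\[ f'(\lambda) = \log\left( \frac{1-\lambda}{1+\lambda}\, Y \right), \]
which vanishes exactly when $\frac{1-\lambda}{1+\lambda}\,Y = 1$, that is, when $\lambda = (Y-1)/(Y+1)$. At this value, $1-\lambda^2 = 4Y/(Y+1)^2$ and the factor raised to the power $\lambda$ equals $1$, so $f(\lambda) = \log\left( X(Y+1)^2/Y \right)$.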
Now apply \reflem{LambdaFact} to \refeqn{RRLRDeriv} and \refeqn{RRLLDeriv}, choosing $\lambda$ to be the value given by that lemma at time $t=0$.
For this value of $\lambda$, we obtain the following:
The derivative $d\mathcal{V}/dt \rvert_{t=0}$ in the case $RR|LR$ equals:
\begin{align}
\nonumber & \log
\left( \frac{\sin(\frac{v}{2}+\frac{w}{2})}{\sin(\frac{v}{2}-\frac{w}{2})}
\frac{\sin x_{i-1}}{\sin y_{i-1}} \left( 1 +
\frac{\sin x_{i-2}}{\sin{y_{i-2}}} \frac{\sin^2 z_{i-1}}{\sin^2 x_{i-1}} \right)^2
\frac{\sin y_{i-2}}{\sin{x_{i-2}}} \frac{\sin^2 x_{i-1}}{\sin^2 z_{i-1}}
\right) \\
\label{Eqn:RRLRDerivSimp} & \geq \log \left(
\frac{\sin x_{i-1}}{\sin y_{i-1}} \left( 1 +
\frac{\sin x_{i-2}}{\sin{y_{i-2}}} \frac{\sin^2 z_{i-1}}{\sin^2 x_{i-1}} \right)^2
\frac{\sin y_{i-2}}{\sin{x_{i-2}}} \frac{\sin^2 x_{i-1}}{\sin^2 z_{i-1}}
\right).
\end{align}
The derivative $d\mathcal{V}/dt \rvert_{t=0}$ in the case $RR|LL$ equals:
\begin{equation}\label{Eqn:RRLLDerivSimp}
\log \left( \frac{\sin x_{i-1} }{\sin y_{i-1}}
\frac{\sin y_{i+1}}{\sin x_{i+1}}
\left( 1 + \frac{\sin x_{i-2}}{\sin y_{i-2}} \frac{\sin^2 z_{i-1}}{\sin^2 x_{i-1}}
\right)^2
\frac{\sin y_{i-2}}{\sin{x_{i-2}}} \frac{\sin^2 x_{i-1}}{\sin^2 z_{i-1}}
\right).
\end{equation}
The remaining quantities $\sin a / \sin b$ are geometric: by the law of sines, they give ratios of side lengths of triangles, and the triangles are those from our cusp triangulation, as in \reffig{CuspLabels}.
\begin{figure}[h]
\import{Figures/Ch10_TwoBridge/}{F10-19-Seg.eps_tex}
\caption{Segments of length $P$, $Q$, and $T$ shown on the zigzag corresponding to the first $R$ after a flat hinge tetrahedron. For the $RR|LL$ case, segments of length $P'$, $Q'$, and $T'$ are also shown on the first $L$ zigzag after the flat hinge tetrahedron.}
\label{Fig:Segments}
\end{figure}
\begin{lemma}\label{Lem:SineRelations}
In the case $RR|L$, let $P$, $Q$, $T$ be the lengths of segments on the middle zigzag $R$, with $P$ opposite the angle $x_{i-1}$, $Q$ opposite the angle $y_{i-1}$ and $T$ opposite the angle $z_{i-1}$, as in \reffig{Segments}. Then the following hold:
\[
\frac{\sin x_{i-2}}{\sin y_{i-2}} \frac{\sin^2 z_{i-1}}{\sin^2 x_{i-1}} = \frac{T}{P}, \qquad
\frac{\sin x_{i-1}}{\sin y_{i-1}} = \frac{P}{Q}.\]
\end{lemma}
\begin{proof}
The equations follow from the law of sines.
\end{proof}
\begin{lemma}\label{Lem:Geometrical}
Suppose there is a subword $L|R^k L$ with $k\geq 2$. Let $Q$, $P$, and $T$ be lengths of segments of the zigzag corresponding to the first $R$, with $P$ and $T$ adjacent to the angle labeled $z=\pi$ on the hinge tetrahedron $L|R$. Then $P+T > Q$.
Similarly, if there is a subword $R|L^k R$ with $k\geq 2$, and $Q'$, $P'$, and $T'$ denote the lengths of the segments of the zigzag corresponding to the first $L$, with $P'$ and $T'$ adjacent to the angle $z=\pi$ on the hinge tetrahedron $R|L$, then $P'+T'>Q'$.
\end{lemma}
The labels $P$, $Q$, and $T$ are illustrated in \reffig{Segments}. In the case that there are at least two $L$'s at the top of the figure, $P'$, $Q'$, and $T'$ are labeled as shown there as well.
\begin{proof}
\cite[Lemma~8.2]{GueritaudFuter:2bridge}.
\end{proof}
Now we can show in the $RR|LR$ case the derivative $d\mathcal{V}/dt\rvert_{t=0}$ is positive. From \refeqn{RRLRDerivSimp}, we obtain
\[
\left.\frac{d\mathcal{V}}{dt}\right\rvert_{t=0} \geq
\log \left( \frac{P}{Q} \left(1 + \frac{T}{P} \right)^2 \frac{P}{T} \right) =
\log \left( \frac{P+T}{T} \cdot \frac{P+T}{Q}\right) > \log(1)=0. \]
Here $(P+T)/T>1$, and $(P+T)/Q>1$ by \reflem{Geometrical}, so the logarithm is positive. A similar calculation holds in the $LL|RL$ case. By swapping the indices $i-1$, $i+1$, and $i-2$, $i+2$, the same argument shows the derivative is strictly positive in the $RL|RR$ and $LR|LL$ cases, provided $i+2$ is not the index of a hairpin turn.
We now finish the $RR|LL$ case.
\begin{lemma}\label{Lem:Geom2}
In the $RR|LL$ case, with $P'$, $Q'$, and $T'$ as in \reffig{Segments}, $P'/T' = T/P$, and $\sin(y_{i+1})/\sin(x_{i+1}) = P'/Q'$.
\end{lemma}
\begin{proof}
By \reflem{FlatTetrConsequences}, item (5), there is no flat tetrahedron either directly before or directly after the sequence $RR|LL$, so the angles of $\Delta_{i-2}$, $\Delta_{i-1}$, $\Delta_{i+1}$, and $\Delta_{i+2}$ are all positive. Thus the parameter $w$ can vary freely in an open interval when $t=0$. Since the volume is maximized, the derivative with respect to $w$ satisfies
\[ \left.\frac{d\mathcal{V}}{dw}\right\rvert_{t=0} =
\log \left( \sqrt{\frac{\sin x_{i-2}}{\sin y_{i-2}}} \cdot \frac{\sin z_{i-1}}{\sin x_{i-1}} \cdot \frac{\sin z_{i+1}}{\sin y_{i+1}} \cdot \sqrt{\frac{\sin y_{i+2}}{\sin x_{i+2}}} \right) = 0. \]
Thus
\[ \frac{\sin y_{i-2} \sin^2 x_{i-1}}{\sin x_{i-2} \sin^2 z_{i-1}} \cdot \frac{\sin^2 y_{i+1} \sin x_{i+2}}{\sin^2 z_{i+1} \sin y_{i+2}} =1. \]
Using an expanded version of \reffig{Segments}, one can check (exercise) that
\[ \frac{\sin y_{i-2} \sin^2 x_{i-1}}{\sin x_{i-2} \sin^2 z_{i-1}} = \frac{P}{T}, \quad \mbox{ and } \quad
\frac{\sin^2 y_{i+1} \sin x_{i+2}}{\sin^2 z_{i+1} \sin y_{i+2}} = \frac{P'}{T'}. \]
This shows $P'/T'=T/P$.
Similarly using \reffig{Segments}, one can check that $\sin(y_{i+1})/\sin(x_{i+1}) = P'/Q'$.
\end{proof}
By \reflem{Geom2} and \refeqn{RRLLDerivSimp}, we find that in the $RR|LL$ case,
\begin{align*}
\left.\frac{d\mathcal{V}}{dt}\right\rvert_{t=0} & =
\log \left( \frac{P}{Q}\frac{P'}{Q'}\left( 1+\frac{T}{P}\right)^2\frac{P}{T}\right) \\
& = \log \left( \frac{P}{Q}\left(1+\frac{T}{P}\right) \frac{P'}{Q'}\left(1+\frac{P'}{T'}\right) \frac{T'}{P'}\right) \\
& = \log \left( \frac{P+T}{Q} \cdot \frac{P'+T'}{Q'} \right) \\
& > \log (1) = 0.
\end{align*}
A similar calculation takes care of the $LL|RR$ case.
So far, we have argued only for $i>3$.
It remains to consider what happens when $i=3$. In this case, $i-2=1$ is the index of a hairpin turn, and the terms $\sin y_1/\sin x_1$ disappear from the computations of $d\mathcal{V}/dt$ in \refeqn{RRLRDeriv} and \refeqn{RRLLDeriv}. We have a result similar to \reflem{Geometrical}: $R^aL$ is a tessellated Euclidean triangle, and lengths still behave as in \reflem{Geometrical} to give the same result; see \cite[Lemma~1.5]{GueritaudFuter:2bridge}.
This concludes the proof of \refprop{MaxInInterior}. \qed
\bigskip
We now assemble the pieces to obtain the stronger result, \refthm{2BridgeGeometric}.
\begin{proof}[Proof of \refthm{2BridgeGeometric}]
Let $K$ be a 2-bridge knot or link\index{2-bridge knot or link} with a reduced alternating diagram with at least two twist regions. Let $\mathcal{T}$ be the triangulation of $S^3-K$ described in this chapter. By \refprop{2BridgeAngleNonempty}, the space of angle structures\index{angle structure} $\mathcal{A}(\mathcal{T})$ is nonempty. By \refprop{MaxInInterior}, the volume functional\index{volume functional} $\mathcal{V}\from \mathcal{A}(\mathcal{T}) \to {\mathbb{R}}$ cannot have a maximum on the boundary of the space of angle structures. It follows that the maximum of $\mathcal{V}$ is on the interior of the space of angle structures.\index{angle structure} Let $A\in\mathcal{A}(\mathcal{T})$ denote this critical point. Thus by \refthm{VolAngleStructs} (volume and angle structures), the ideal hyperbolic tetrahedra obtained from the angle structure $A$ give $S^3-K$ a complete hyperbolic structure. Note since $A$ lies in the interior, the ideal hyperbolic tetrahedra it determines are all positively oriented,\index{positively oriented tetrahedron}\index{tetrahedron!positively oriented} as claimed.
\end{proof}
\section{Exercises}
\begin{exercise}
Sketch rational tangles and diagrams of 2-bridge links\index{2-bridge knot or link} associated to the following continued fractions: $[3,2]$, $[0,3,2]$, $[1,3,2]$.
\end{exercise}
\begin{exercise}\label{Ex:ContinuedFractions}
Continued fractions. Show every rational number has a continued fraction expansion $p/q=[a_n, a_{n-1}, \dots, a_1]$ such that if $i<n$, then $a_i\neq 0$, and such that if $p/q>0$, then each $a_i\geq 0$, while if $p/q<0$, then each $a_i\leq 0$.
\end{exercise}
\begin{exercise}\label{Ex:a1an}
Prove \reflem{a1an}. That is, show that if $K[a_{n-1}, \dots, a_1]$ is a 2-bridge knot or link,\index{2-bridge knot or link} then we may assume that $|a_{n-1}|\geq 2$ and $|a_1|\geq 2$.
\end{exercise}
\begin{exercise}
Work through the identification of tetrahedra at the innermost crossing. Prove that faces of the innermost tetrahedra are glued in pairs, two triangles of one tetrahedron glued to triangles of the opposite tetrahedron. Why is there no need to consider both horizontal and vertical crossings for the innermost crossing?
\end{exercise}
\begin{exercise}\label{Ex:2TwistRegions}
In \refprop{2BridgeTriang}, we require at least two twist regions. Show that this requirement is necessary by showing that the construction fails to give a triangulation of a knot or link with just one twist region. What breaks down?
\end{exercise}
\begin{exercise}
This exercise asks you to consider hairpin turns.
\begin{enumerate}
\item Prove if $|a_{n-1}|\geq 3$, there is a 3-valent vertex of the cusp triangulation.
\item Prove all vertices aside from possibly a single vertex in a hairpin turn must have valence at least four.
\item If $|a_{n-1}|=2$, prove the vertex corresponding to the outside hairpin turn may have arbitrarily high valence.
\end{enumerate}
\end{exercise}
\begin{exercise}
Use the methods of this chapter to find the form of the cusp triangulation for the twist knot $J(2,n)$. How many tetrahedra are in its decomposition?
\end{exercise}
\begin{exercise}
Find the form of the cusp triangulation for $J(k, \ell)$, where one of $k$, $\ell$ is even. How many tetrahedra are in its decomposition?
\end{exercise}
\begin{exercise}
Find the form of the cusp triangulation of a 2-bridge knot\index{2-bridge knot or link}\index{2-bridge knot or link!cusp triangulation} with exactly three twist regions.
\end{exercise}
\begin{exercise}
Prove \reflem{PleatingAngles}: that pleating angles as in \reffig{PleatingAngles} satisfy $\alpha_1+\alpha_2-\alpha_3=0$ when the cusp is Euclidean.
\end{exercise}
\begin{exercise}
Determine the labels of the hairpin turns of the form $RR$, $LR$, $RL$, similar to \reffig{HairpinLabels}.
\end{exercise}
\begin{exercise}
In the cases $RR|LR$ and $RR|LL$, compute the derivative $d\mathcal{V}/dt \rvert_{t=0}$ and check that it agrees with the formulas given in \refeqn{RRLRDeriv} or \refeqn{RRLLDeriv}.
\end{exercise}
\begin{exercise}
Give the proof of \reflem{Geometrical}.
\end{exercise}
\begin{exercise}
Work through the geometric details of \reflem{Geom2}. First, sketch the zigzag labeled $L$ at the top of \reffig{Segments}, along with angles at its corners, and show that:
\[ \frac{\sin y_{i-2} \sin^2 x_{i-1}}{\sin x_{i-2} \sin^2 z_{i-1}} = \frac{P}{T}, \quad \mbox{ and } \quad
\frac{\sin^2 y_{i+1} \sin x_{i+2}}{\sin^2 z_{i+1} \sin y_{i+2}} = \frac{P'}{T'}. \]
Also show
$\sin(y_{i+1})/\sin(x_{i+1}) = P'/Q'$.
\end{exercise}
\begin{exercise}
Go carefully through the proof of cases $RR|LR$ and $RR|LL$ when the index of the flat tetrahedron is $i=3$.
\end{exercise}
\chapter{Alternating Knots and Links}\label{Chap:Alternating}
\blfootnote{Jessica S. Purcell, Hyperbolic Knot Theory}
Alternating knots and links\index{alternating knot or link} need their own chapter, because there is a wealth of geometric information coming from them. Of all knots, alternating knots seem to have hyperbolic geometry most closely related to their diagrams. As of the writing of this book, there are many open conjectures concerning how the geometry and diagrams interact.
An \emph{alternating knot or link}\index{alternating knot or link} has a diagram with an orientation such that when following the knot in the direction of the orientation, the crossings alternate between over and under, all the way along the diagram.\index{alternating diagram} Alternating knots account for large numbers of knots with small crossing numbers, but they are less prevalent among knots with higher crossing numbers. Indeed, the proportion of alternating knots and links among all prime $n$-crossing knots and links is known to approach zero exponentially as $n$ approaches infinity \cite{SundbergThistlethwaite, Thistlethwaite:Tangles}. The first non-alternating knot in the knot tables has eight crossings. Note that it takes some work to prove that a knot is non-alternating: one must show that among all possible knot diagrams, there is no alternating diagram.\index{alternating diagram} In this chapter, we won't consider the question of whether a knot with a non-alternating diagram actually is alternating. Instead we will assume we have an alternating knot diagram, and consider what this implies for the geometry of the knot complement.
One main result of the chapter is a proof of a theorem originally due to Menasco that identifies when alternating knots and links are hyperbolic \cite{menasco:alt}. We also define checkerboard surfaces of these links, and show they are essential.\index{essential}
\section{Alternating diagrams and hyperbolicity}
Since we are interested in hyperbolic knots and links, we will consider only \emph{connected diagrams} of knots and links throughout; that is, the underlying 4-valent diagram graph is connected.\index{connected diagram} For if a diagram is not connected, then the link complement contains an obvious essential\index{essential} 2-sphere, namely one separating two diagram components. Since a hyperbolic knot or link complement can contain no essential 2-sphere, we restrict to connected diagrams.
We also wish to work with diagrams that have been simplified in obvious ways. For example, we wish to untwist all reducible crossings,\index{reducible crossing} like those shown in \reffig{Nugatory} in \refchap{Fig8Decomp}.
We also wish to work with prime diagrams, also defined in \refchap{Fig8Decomp}. We recall the definition again.
\begin{definition}\label{Def:Prime}
A diagram is \emph{prime}\index{prime}\index{prime!diagram} if every simple closed curve $\gamma$ in the plane of projection that meets the knot exactly twice, transversely and away from crossings, bounds a region of the diagram with no crossings. See \reffig{Prime}, left.
\end{definition}
\begin{figure}
\begin{center}
\import{Figures/Ch11_Alternating/}{F11-01-Prime.eps_tex}
\end{center}
\caption{Left: a prime diagram. Middle: a diagram that is not prime. Right: a swallow--follow torus}
\label{Fig:Prime}
\end{figure}
\Reffig{Prime}, middle, shows an example of a diagram that is not prime: a curve running through the center of the diagram meets the knot exactly twice, with crossings on both sides. It is constructed of two simpler knots via the following procedure, which we also defined in \refchap{KnotIntro}; see \reffig{KnotSum}.
\begin{definition}\label{Def:ConnectedSum}
Given two knots $K_1$ and $K_2$ in $S^3$, form their \emph{knot sum}\index{knot sum} or \emph{connected sum}\index{connected sum} as follows. For each knot $K_i\subset S^3$, take a ball $B_i$ in $S^3$ such that $B_i\cap K_i$ is a single unknotted arc, with $K_i$ meeting $\partial B_i$ transversely in two points. That is, $(B_i,B_i\cap K_i)$ is homeomorphic to the product of an interval and a disk with a single marked point.
Now, remove $B_i$ from $S^3-K_i$. The result is homeomorphic to the complement of an arc in a ball. Glue $(S^3-K_1)-B_1$ to $(S^3-K_2)-B_2$ via a homeomorphism taking $(\partial B_1, \partial B_1\cap K_1)$ to $(\partial B_2, \partial B_2\cap K_2)$. (Here the notation means that $\partial B_1$ is mapped to $\partial B_2$ in such a way that the two points $\partial B_1\cap K_1$ are mapped to the two points $\partial B_2\cap K_2$.)
\end{definition}
\begin{definition}\label{Def:PrimeKnot}
A knot or link is said to be \emph{prime}\index{prime!knot or link} if it cannot be expressed as a connected sum of knots.\index{connected sum}\index{knot sum}
\end{definition}
The knot in the middle of \reffig{Prime} is a connected sum. Again from the point of view of hyperbolic geometry, knots that are not prime are not interesting, since their complements always contain an incompressible\index{incompressible} torus\index{incompressible torus} called a \emph{swallow--follow torus}\index{swallow--follow torus}. The torus is built by taking the annulus $\partial B_1-N(K_1)$, the boundary of one of the balls in the construction of the connected sum with a neighborhood of the knot removed, and then attaching a tube from one boundary component of this annulus to the other, following $K_2$. This forms a torus which ``swallows'' $K_1$, then ``follows'' $K_2$. The right side of \reffig{Prime} shows a swallow--follow torus for the given example.
Notice that prime diagrams and prime knots are not the same thing in general. Every knot, whether or not it is prime, admits a diagram that is not prime: simply insert a nugatory crossing. In general, if a knot admits a prime diagram, it still may not be a prime knot.
Recall the definitions of meridian and longitude of a knot or link.
\begin{definition}\label{Def:MeridianLongitude}
A curve on the boundary of a neighborhood of a knot or link in $S^3$ that bounds a disk inside the neighborhood of the link is called a \emph{meridian}\index{meridian}.
For a knot, a \emph{longitude}\index{longitude} is a curve on the boundary of a neighborhood of the knot that intersects a meridian exactly once. The \emph{standard longitude}\index{longitude!standard} is the longitude that is homologous to zero in $H_1(S^3-K)$.
More generally, a \emph{standard longitude} of a component $K_1$ of a link is the longitude that is trivial in $H_1(S^3-K_1)$.
\end{definition}
\begin{lemma}\label{Lem:PrimeEquiv}
Let $K$ be a knot in $S^3$. Then $K$ is a connected sum\index{connected sum} of nontrivial knots if and only if $S^3-N(K)$ contains an essential\index{essential} annulus that meets the boundary of the neighborhood $N(K)$ in two simple meridians.
\end{lemma}
We call such an annulus an \emph{essential meridional annulus}\index{essential meridional annulus}.
\begin{proof}
Suppose first that $S^3-N(K)$ contains an essential annulus with boundary two meridians of $N(K)$. Cut $S^3$ along the sphere obtained from the union of this annulus and two disks bounded by the meridians. This separates $S^3$ into two balls $B_1$ and $B_2$, each containing an arc of $K$. Form new knots $K_1$ and $K_2$ by attaching to each pair $(B_i, B_i \cap K)$ a ball containing an unknotted arc, glued along $(\partial B_i, \partial B_i \cap K)$. By construction, $K$ is the connected sum\index{connected sum} of $K_1$ and $K_2$. Moreover, neither $K_i$ is trivial: if the arc $B_i\cap K$ were unknotted in $B_i$, then the annulus would be parallel into $\partial N(K)$ through $B_i$, contradicting the fact that it is essential.
Now suppose that $K$ is a connected sum\index{connected sum} of nontrivial knots. Then $S^3-N(K)$ is obtained from nontrivial knots $K_1$ and $K_2$ by removing 3-balls $B_1$ and $B_2$ from $S^3-N(K_1)$ and $S^3-N(K_2)$, respectively, each with boundary meeting $\partial N(K_i)$ transversely in two simple meridians. The result has boundary an annulus $A\cong \partial B_i-N(K_i)$, and these annuli are glued to form $S^3-N(K)$. We claim $A$ is the essential annulus required. It meets $N(K)$ in meridians, as required. If it is compressible,\index{compressible} then a disk $D$ with boundary isotopic to the essential core curve of $A$ lies inside $(S^3-K_i)-B_i$ for one of $i=1,2$. Slice $\partial B_i$ along $\partial D$ to obtain a disk $E_i$ meeting $K_i$ exactly once. Attach to $E_i$ the disk $D$. This is a sphere meeting $K_i$ exactly once. But $K_i$ is a closed curve in $S^3$, hence it meets any sphere an even number of times. This contradiction proves that $A$ is incompressible.\index{incompressible}
Now suppose that there is a boundary compression disk\index{boundary compression disk} $D$ for $A$. An arc of $\partial D$ must run from one (meridian) boundary component of $A$ to the other along $A$. The other arc of $\partial D$ must run along $\partial N(K)$, either along the portion coming from $K_1$ or from $K_2$, say $K_1$. But then $D$ can be used to isotope the arc of $K_1$ lying outside $B_1$ into the sphere $\partial B_1$, showing that this arc is unknotted and hence that $K_1$ is trivial, contradicting the fact that $K_1$ is nontrivial.
Finally, $A$ cannot be boundary parallel,\index{boundary parallel} else one side $(S^3-K_i)-B_i$ is homeomorphic to an unknotted arc in the ball $B_i$, again contradicting the fact that $K_i$ is nontrivial. This concludes the proof that a connected sum\index{connected sum} of knots contains an essential annulus with boundary two meridians of $N(K)$.
\end{proof}
\subsection{Polyhedral decomposition, revisited}
We know from the above discussion that a link with a disconnected diagram\index{alternating diagram} cannot have hyperbolic complement, and neither can a link that is not prime. More generally, we need to determine which alternating diagrams lead to essential spheres, disks, tori, and annuli in the complement, since these are the obstructions to hyperbolicity. Our main tool will be a polyhedral decomposition of the link complement.
Recall that in \refchap{Fig8Decomp} we worked through a decomposition of the figure-8 knot complement into two ideal polyhedra. This was extended in the exercises. In particular, following the methods of that chapter, the exercises outline a proof of the following theorem.
\begin{theorem}\label{Thm:PolyAltKnot}
Let $L$ be an alternating link.\index{alternating knot or link}\index{alternating knot or link!polyhedral decomposition} Then the complement of $L$ can be obtained by gluing two ideal polyhedra satisfying the following:
\begin{enumerate}
\item The polyhedra are obtained by labeling the boundary of two balls with the projection graph of the alternating diagram\index{alternating diagram} of $L$, and declaring each vertex to be ideal. On one ball, the outside boundary is labeled with the diagram, on the other the inside.
\item Ideal vertices are 4-valent, corresponding to overcrossings in one polyhedron, undercrossings in the other.
\item Ideal edges correspond to crossing arcs\index{crossing arc} in the diagram, and each edge class contains four ideal edges of the two polyhedra.
\item Faces correspond to regions of the diagram, and are checkerboard colored, white and shaded.
\item Each face on one polyhedron is glued to the identical face on the opposite polyhedron. The gluing rotates the face by one edge in the clockwise direction for white faces, and rotates by one edge in the counterclockwise direction for shaded faces. \qed
\end{enumerate}
\end{theorem}
The theorem is illustrated in \reffig{PolyAltKnot}.
\begin{figure}
\includegraphics{Figures/Ch11_Alternating/F11-02-Chkbd.eps}
\caption{Ideal polyhedral decomposition of an alternating link (the figure-8 knot). Shown is one ideal polyhedron. The other is identical (with head on the opposite side) and gluing of faces is by a rotation in each face as shown}
\label{Fig:PolyAltKnot}
\end{figure}
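As a quick check on the combinatorics in \refthm{PolyAltKnot}: if the alternating diagram has $c$ crossings, its 4-valent diagram graph has $c$ vertices, $2c$ edges, and $c+2$ regions, so each of the two polyhedra has $c$ ideal vertices, $2c$ edges, and $c+2$ faces, consistent with
\[ c - 2c + (c+2) = 2 = \chi(S^2) \]
for the boundary sphere of each ball. The $4c$ edges of the two polyhedra are identified in classes of four, giving $c$ edge classes, one for each crossing arc.\index{crossing arc}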
In the exercises of \refchap{Fig8Decomp}, we collapsed bigons\index{bigon} to a single edge. We will actually keep bigons around in this chapter, as they make certain arguments simpler.
\subsection{Angled polyhedra and alternating links}
\begin{proposition}\label{Prop:AltAngleStruct}
Let $K$ be a knot or link with a connected, prime, alternating diagram.\index{alternating diagram}\index{alternating knot or link}\index{alternating knot or link!angled polyhedral structure}
If we assign a dihedral angle of $\pi/2$ to each ideal edge of the polyhedral decomposition of $S^3-K$ of \refthm{PolyAltKnot}, then we obtain an angled polyhedral structure,\index{angled polyhedral structure} as in \refdef{AnglePolyhedra}.
\end{proposition}
\begin{proof}
We need to check that the polyhedra with interior dihedral angles $\pi/2$ satisfy the three conditions of \refdef{AnglePolyhedra}. The first condition is immediate: $\pi/2$ lies in $(0,\pi)$. The third condition is also straightforward: each ideal edge of the ideal polyhedral decomposition appears exactly four times in the decomposition, hence interior angles sum to $2\pi$.
The second condition takes the most work. We need to show that every normal disk\index{normal} has non-negative combinatorial area.\index{combinatorial area} Recall the combinatorial area of a normal disk $D$ is defined to be
\[ a(D) = \sum_{i=1}^n (\pi-\alpha_i) - 2\pi + \pi|\partial D \cap \partial M|, \]
where $\alpha_1, \dots, \alpha_n$ are the dihedral angles met by $\partial D$, and $|\partial D\cap \partial M|$ is the number of times $\partial D$ meets a boundary face. In our case, each $\alpha_i=\pi/2$, so the sum is
\[ a(D) = \frac{\pi}{2}|\partial D \cap e(M)| - 2\pi + \pi|\partial D \cap \partial M|,\]
where $|\partial D\cap e(M)|$ is the number of times $\partial D$ meets an ideal edge (not a boundary edge).
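As a quick check of this formula: a normal disk meeting four ideal edges and no boundary faces has
\[ a(D) = 4\cdot \frac{\pi}{2} - 2\pi = 0, \]
while a normal disk meeting two ideal edges and no boundary faces would have $a(D) = \pi - 2\pi = -\pi$.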
Notice that if $\partial D$ meets at least four ideal edges, or at least two boundary faces, then the combinatorial area\index{combinatorial area} of $D$ is non-negative. The only ways the area could be negative are if $\partial D$ meets three or fewer edges and no boundary faces, or if it meets one boundary face and at most one edge. We rule these out.
First, suppose $\partial D$ meets exactly one boundary face. The endpoints of the arc of $\partial D$ on the boundary face must be on distinct boundary edges, by definition of a normal disk.\index{normal} So $\partial D$ runs through at least two distinct regular faces of the polyhedron, and so $\partial D$ must meet an edge of the polyhedron to connect into a closed curve. If $\partial D$ meets only one edge, then it cannot meet an edge adjacent to the boundary face, by definition of normal.\index{normal} So $\partial D$ encloses boundary faces on both sides. See \reffig{AltAngleStructProof}.
\begin{figure}[h]
\import{Figures/Ch11_Alternating/}{F11-03-1Ed1Bd.eps_tex}
\caption{If $\partial D$ meets just one edge and one boundary face, it determines a simple closed curve in the diagram of $K$ as shown}
\label{Fig:AltAngleStructProof}
\end{figure}
But recall that the graph of the polyhedron is exactly the diagram graph of $K$, and boundary faces correspond to crossings of $K$. Then $\partial D$ gives a simple closed curve in the diagram of $K$ meeting a single crossing and a single strand of the link. We may slide $\partial D$ off the crossing slightly so that it meets one more strand of the link near this crossing. Then $\partial D$ is a closed curve in the link diagram meeting the diagram exactly twice transversely away from crossings, enclosing crossings on either side. This contradicts the fact that the diagram is prime.
Now suppose $\partial D$ meets no boundary faces, but has negative combinatorial area.\index{combinatorial area} Then $\partial D$ meets fewer than four edges, but more than zero edges by definition of normal.\index{normal} Because edges correspond to strands of the link, and the link consists of closed curves, it follows that $\partial D$ meets exactly two ideal edges. Transferring $\partial D$ to the diagram, it becomes a closed curve meeting the diagram exactly twice. But then because $K$ has a prime diagram, there are no crossings on one side of $\partial D$. Transferring back to the polyhedron, this means an arc of $\partial D$ meets the same edge of the polyhedron two times. This contradicts the definition of normal.\index{normal}
\end{proof}
\begin{corollary}\label{Cor:AltIrreducible}
Let $K$ be a knot or link with a connected, prime, alternating diagram.\index{alternating diagram} Then $S^3-K$ is irreducible\index{irreducible} and boundary irreducible.\index{boundary irreducible}\index{alternating knot or link}
\end{corollary}
\begin{proof}
For such a knot or link, $S^3-K$ admits an angled polyhedral structure\index{angled polyhedral structure} by \refprop{AltAngleStruct}. Then the result follows from the first part of \refthm{HypAngleStruct}.
\end{proof}
In fact, we may say more. The following is proved in \cite{menasco:alt}, using Thurston's \refthm{SfcesHyperbolic}; we also give a proof here.
\begin{theorem}\label{Thm:AltHyperbolic}
A knot with a connected prime alternating diagram\index{alternating diagram} is either a $(2,q)$-torus knot or it is hyperbolic. \index{alternating knot or link}\index{alternating knot or link!hyperbolic}
\end{theorem}
We have already shown that knots and links with connected, prime, alternating diagrams have complements that are irreducible\index{irreducible} and boundary irreducible.\index{boundary irreducible} To prove \refthm{AltHyperbolic} we need to consider essential annuli and tori, and we do so in the next subsections.
\subsection{Alternating knots and essential annuli}
There are alternating knots that contain essential\index{essential} annuli, namely the $(2,q)$-torus knots. However, all other alternating knots are anannular. In this section, we prove that fact. In Menasco's original proof classifying hyperbolic alternating knots, he proves knots are anannular\index{anannular} by appealing to an algebraic result of Simon \cite{Simon:AlgKnot}. We take a more direct approach here, giving a geometric proof of this fact using the angled polyhedral structure\index{angled polyhedral structure} of the previous subsection.
First, we need more terminology to describe $(2,q)$-torus knots. The following definition is \refdef{TwReduced}, repeated here for convenience.
\begin{definition}\label{Def:TwistReduced}
A diagram is \emph{twist-reduced}\index{twist-reduced} if, whenever $\gamma$ is a simple closed curve on the plane of projection meeting the diagram exactly twice in two crossings, running from one side of each crossing to the opposite side, the curve $\gamma$ bounds a string of bigons\index{bigon}
on one side. See \reffig{Flype}, left.
\end{definition}
\begin{figure}[h]
\import{Figures/Ch11_Alternating/}{F11-04-TwRed.eps_tex}
\caption{Left: A twist-reduced diagram. Right: A flype}
\label{Fig:Flype}
\end{figure}
\begin{definition}
Let $\gamma$ be a simple closed curve meeting the diagram of $K$ transversely exactly four times in knot strands, with two intersections adjacent to a crossing on the outside of $\gamma$. A \emph{flype}\index{flype} is a move on the diagram that rotates the region inside $\gamma$ by $180^\circ$, moving the crossing outside $\gamma$ to lie between the opposite two strands.
See \reffig{Flype}, right.
\end{definition}
\begin{lemma}\label{Lem:TwistReduced}
Every knot or link $K$ has a twist-reduced diagram.
Moreover, if a diagram of $K$ is connected, prime, and alternating, then there is a twist-reduced diagram of $K$ that is connected, prime, and alternating.\index{alternating knot or link}
\end{lemma}
\begin{proof}
Start with a diagram of a knot or link. It has a finite number of twist regions, and a finite number of crossings in each twist region. Suppose the diagram is not twist reduced. Then there is a curve meeting the diagram exactly four times adjacent to two distinct twist regions. Slide the curve so that all crossings of both twist regions are on the outside of the curve, say with one twist region on the left and one on the right. Perform a sequence of flypes. Each flype will remove a crossing in the twist region on the left, and either add or remove a crossing in the twist region on the right (depending on the direction of crossings on the right and the direction of the flype).
Continue until there are no crossings on the left. When finished, the diagram has one fewer twist region and at most the same number of crossings as before. Repeat, strictly reducing the number of twist regions. Since the number of twist regions is finite, the process will terminate in a twist-reduced diagram.
Finally, note that the process of flyping takes a connected diagram to a connected diagram. It also takes a prime diagram to a prime diagram and an alternating diagram\index{alternating diagram} to an alternating diagram (\refex{FlypePrimeAlt}). Thus if the original diagram of a link is connected, prime, and alternating, then the twist-reduced diagram, obtained by performing flypes, is also connected, prime, and alternating.
\end{proof}
\begin{definition}\label{Def:TwistNumber}
The \emph{twist-number}\index{twist-number} of a knot or link is the number of twist regions in a twist-reduced diagram of that knot or link.
\end{definition}
\begin{example}\label{Example:2qTorusKnot}
A $(2,q)$-torus knot\index{torus knot!$(2,q)$-torus knot} has twist-number $1$. Any knot with a prime alternating diagram\index{alternating diagram}\index{alternating knot or link}\index{alternating knot or link!twist-number} that is not a $(2,q)$-torus knot has twist number at least $2$. In particular, the figure-8 knot shown in \reffig{Fig8Diagram} has twist number $2$. More generally, any twist knot $J(k,\ell)$ has twist number $2$.
\end{example}
We are now ready to consider essential annuli.
\begin{lemma}\label{Lem:Squares}
Suppose $K$ is a knot or link with a connected prime alternating diagram, with $S^3-N(K)$ given its (truncated ideal) polyhedral decomposition. Suppose $S$ is an essential\index{essential} annulus embedded in $S^3-N(K)$. Then when $S$ is isotoped into normal form,\index{normal} it contains at least one normal disk $D$ meeting a boundary face, and $\partial D$ either meets exactly two boundary faces and no edges, or $\partial D$ meets exactly one boundary face and exactly two edges.
\end{lemma}
\begin{proof}
When we put $S$ into normal form,\index{normal form}\index{normal} \reflem{GaussBonnet} implies the combinatorial area\index{combinatorial area} of $S$ is $-2\pi\chi(S)=0$, since $S$ is an annulus. Because each normal disk of $S$ has non-negative combinatorial area,\index{combinatorial area} in fact each normal disk of $S$ must have combinatorial area\index{combinatorial area} $0$. Because $S$ is a surface with boundary, there is at least one normal disk of $S$ that meets a boundary face; call it $D$. Now, considering the formula for the combinatorial area\index{combinatorial area} of $D$, there are only two possibilities: $\partial D$ either meets exactly two boundary faces and no edges, or $\partial D$ meets one boundary face and exactly two edges.
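Concretely, if $\partial D$ meets $e$ ideal edges and $b \geq 1$ boundary faces, then setting the area formula (with all angles $\pi/2$) to zero gives
\[ \frac{\pi}{2}\,e - 2\pi + \pi b = 0, \quad \mbox{ that is, } \quad e + 2b = 4, \]
and the only solutions with $b\geq 1$ are $(b,e)=(2,0)$ and $(b,e)=(1,2)$, giving the two cases above.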
\end{proof}
\begin{lemma}\label{Lem:21Annulus}
If $K$ has a prime alternating diagram, and $S$ is a normal annulus\index{normal} properly embedded in the truncated polyhedral decomposition of $S^3-N(K)$, containing at least one normal disk $D_2$ whose boundary meets exactly one boundary face and exactly two edges of the polyhedra, then:
\begin{enumerate}
\item $S$ contains a subannulus $S'$ for which all normal disks\index{normal} meet exactly one boundary face and two edges.
\item $K$ is a $(2,q)$-torus link with two components, and there is an annulus $\Sigma$ bounded by the two components of the link that is obtained by gluing bigon\index{bigon} faces of the polyhedral decomposition.
\item A component of $\partial S$ and $\partial S'$ runs along at least one longitude of the link, so $\partial S$ is not a meridian.
\item The other component of $\partial S'$ runs along the core of the annulus $\Sigma$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $S$, $K$, and $D_2$ be as in the statement of the lemma. The disk $D_2$ is glued to normal disks\index{normal} $D_1$ and $D_3$ in the opposite polyhedron. The gluing maps a side of $D_2$ in a face to a side of $D_1$, and the gluing map on a face rotates the side either clockwise or counterclockwise. Without loss of generality, say clockwise. Thus a side of $D_2$ running from a boundary face to an edge is glued to a side of $D_1$ running from a boundary face to an edge, although rotated. Similarly for the side of $D_2$ that is glued to $D_3$. See \reffig{21Curve}, left.
Since $D_1$ and $D_3$ also have combinatorial area\index{combinatorial area} $0$, they must each meet one boundary face and exactly two edges. Repeat for disks meeting $D_1$ and $D_3$. Eventually this string of disks will glue up. Thus if there is one normal disk\index{normal} meeting a single boundary face and two edges, then there is a cycle of normal disks, each meeting a single boundary face and two edges, gluing to form a subannulus $S'$ of $S$ as claimed.
\begin{figure}[h]
\import{Figures/Ch11_Alternating/}{F11-05-21Crv.eps_tex}
\caption{Curve $\partial D_2$ must be glued to arcs of $\partial D_1$ and $\partial D_3$ as shown on the left. Since $\partial D_1$ and $\partial D_3$ are disjoint, the only possibility for $\partial D_1$ is that shown on the right. Then there is a curve $\gamma$ meeting the diagram exactly twice; it must bound an unknotted strand, forming a bigon.\index{bigon}}
\label{Fig:21Curve}
\end{figure}
Sketch $\partial D_2$ onto the boundary of the polyhedron, which has the combinatorics of the diagram graph. We will add to this picture $\partial D_1$ and $\partial D_3$ by superimposing, as in \reffig{21Curve}. By what we know of the gluing maps, an arc of $\partial D_1$ must have its endpoints rotated once clockwise from an arc of $\partial D_2$, as shown on the left of \reffig{21Curve}, and similarly for an arc of $\partial D_3$.
Because $D_1$ and $D_3$ are disjoint, $\partial D_1$ must lie to one side of the arc of $\partial D_3$ shown in \reffig{21Curve}, and thus $\partial D_1$ and $\partial D_3$ close up as shown on the right of that figure. Now inside of $\partial D_1$, we may draw a curve $\gamma_1$ running through the shaded face between boundary faces met by $\partial D_1$ and $\partial D_2$, and through a single white face as in \reffig{21Curve}. This gives a curve meeting the diagram exactly twice. Because the diagram is prime, $\gamma_1$ bounds an arc of the diagram with no crossings on one side. Thus that shaded face is a simple bigon.\index{bigon} (Similar arguments show that other dotted lines in \reffig{21Curve}, right, are also single edges, but we will not use this.)
Repeating this argument with $D_1$ replacing $D_2$, and so on, we find that $S'$ is made up of disks bounding a closed chain of bigons. Thus the diagram of $K$ contains a single twist region, and $K$ is a $(2,q)$-torus knot or link. To see it is a link, note that disks $D_i$ for $i$ odd must all be disjoint, and disks $D_i$ for $i$ even must also be disjoint. If there are an odd number of bigons in the chain, this will be impossible. So there are an even number of bigons, $K$ is a 2-component link, and the surface $\Sigma$ made up of the bigons lies between the strands of $K$ and is an annulus.
Now note that $\partial D_1$ and $\partial D_2$ together meet both ideal vertices on either side of a bigon face in the polyhedral decomposition. One of $D_1$, $D_2$ lies in one polyhedron, and one in the other. But then some $\partial D_i$ will meet each ideal vertex in the diagram graph. It follows that $\partial S$ meets each ideal vertex in each polyhedron at least once. This implies that $\partial S$ runs along at least one longitude.
Finally, the arc of $\partial D_i$ lying in a shaded face is a simple arc through the bigon. Thus it runs from one crossing arc\index{crossing arc} bounding the bigon to the other. When we glue all the disks $D_i$, the boundary of $S'$ traces the core of the annulus $\Sigma$.
\end{proof}
\begin{lemma}\label{Lem:22Annulus}
Suppose $K$ is a knot or link with a connected, twist-reduced, prime, alternating diagram. Suppose $S$ is an essential\index{essential} annulus in normal form\index{normal form} in the polyhedral decomposition of $S^3-K$ such that $S$ contains a normal disk\index{normal} whose boundary meets exactly two boundary faces and no ideal edges of the polyhedra. Then all normal disks of $S$ meet exactly two boundary faces, and the diagram of $K$ is that of a $(2,q)$-torus knot or link. Further, $\partial S$ runs along at least one longitude of the knot or link, so $\partial S$ is not a meridian.
\end{lemma}
\begin{proof}
Suppose $D_2$ is a normal disk\index{normal} of $S$ such that $\partial D_2$ meets exactly two boundary faces and no edges. Then the fact that the diagram is prime and twist-reduced implies that $D_2$ is either a boundary bigon\index{boundary bigon} or $\partial D_2$ encircles a portion of a twist region in the diagram.
Suppose first that all normal disks\index{normal} of $S$ that meet boundary faces are boundary bigons. Then by considering how such disks must glue, note that there can only be four disks, and they must encircle a single edge of the polyhedral decomposition. Thus $S$ is an annulus encircling a crossing circle. This contradicts the fact that $S$ is essential.
Now suppose all normal disks\index{normal} encircle portions of twist regions. Because these match up to form an annulus, the diagram of $K$ must consist only of a single twist region, and the knot is a $(2,q)$-torus knot or link. As in the proof of \reflem{21Annulus}, trace the boundary of $S$ in this case. Superimpose onto a single polyhedron to obtain a string of boundaries of normal squares, with every other normal square coming from the same polyhedron. Because normal squares in a polyhedron are disjoint, this forces each square to bound either a single bigon, or a pair of adjacent bigons;\index{bigon} see \refex{22Bigons}. Then $\partial S$ meets every ideal vertex at least once, so $\partial S$ is not a meridian.
So finally suppose normal disks\index{normal} of $S$ consist both of curves encircling twist regions and boundary bigons.\index{boundary bigon} There must be a disk $D_2$ that is a boundary bigon adjacent to a disk $D_1$ encircling a portion of a twist region. Superimpose $\partial D_1$ and $\partial D_2$ on a single polyhedron. By following the gluing maps, we find $\partial D_1$ bounds a single bigon face of the polyhedron, and $\partial D_2$ bounds an edge sharing the same boundary face (ideal vertex) with the bigon. See \reffig{TwistBigon}, left.
\begin{figure}[h]
\import{Figures/Ch11_Alternating/}{F11-06-TwBign.eps_tex}
\caption{Left: Boundary bigon\index{boundary bigon} adjacent to a disk encircling a portion of twist region must have the form shown on the left. Middle: two next disks must have the form shown. Right: sketch the disks in the 3-dimensional knot complement}
\label{Fig:TwistBigon}
\end{figure}
There is another normal disk\index{normal} $D_3$ attached to $D_2$, in the opposite polyhedron from that containing $D_2$. Because $D_1$ and $D_3$ are disjoint, $D_3$ must be a boundary bigon\index{boundary bigon} of the form shown in \reffig{TwistBigon}. Then $D_4$ cannot be a boundary bigon: if it were, then $D_4$ would not be glued to $D_1$ (its side is in the wrong region), so $D_4$ would be glued to another disk $D_5$. Because $D_5$ and $D_1$ are disjoint, $D_5$ would be a boundary bigon, and then some $D_6$ would also be a boundary bigon contained inside $D_2$, and so on, giving infinitely many boundary bigons spiraling around the same edge class. This is impossible. So $D_4$ bounds a portion of twist region, and $\partial D_4$ is parallel to $\partial D_1$ when superimposed (although recall that the disk $D_1$ lies in the opposite polyhedron from $D_4$). The disks $D_1$ through $D_4$ are shown superimposed on the same polyhedron in \reffig{TwistBigon}, middle, and in the link complement in \reffig{TwistBigon}, right.
We claim the annulus $S$ can be isotoped so that these four disks become two normal boundary bigons, and all other normal disks\index{normal} of $S$ are unchanged. The isotopy is by sliding past a crossing of the twist region where the boundary bigons\index{boundary bigon} cause the annulus to double back on itself. The isotopy is shown in \reffig{RemoveTwist}.
\begin{figure}[h]
\includegraphics{Figures/Ch11_Alternating/F11-07-TwBnIs.eps}
\caption{An isotopy of $S$ removes normal disks\index{normal} bounding portions of twist region. Shown on the left is the effect of the isotopy in the diagram graph. Shown on the right is the result of the isotopy in the link complement}
\label{Fig:RemoveTwist}
\end{figure}
Repeating this move a finite number of times, we remove all disks bounding twist regions, and $S$ is made up only of boundary bigons.\index{boundary bigon} This is a contradiction.
\end{proof}
\begin{proposition}\label{Prop:AltAnannular}
If $K$ is a knot or link with a connected, twist-reduced, prime, alternating diagram,\index{alternating diagram} and $K$ is not a $(2,q)$-torus knot or link, then $K$ is anannular.\index{anannular}\index{alternating knot or link}\index{alternating knot or link!anannular}
If $K$ is a $(2,q)$-torus knot or link, then any essential annulus in $S^3-K$ has boundary tracing out at least one longitude. Thus there is no essential\index{essential} meridional annulus.\index{essential meridional annulus}
\end{proposition}
\begin{proof}
Suppose $S^3-N(K)$ contains an embedded essential annulus $S$. We may isotope it into normal form,\index{normal} and by \reflem{Squares} each normal disk making up $S$ either meets one boundary face and two ideal edges, or two boundary faces and no ideal edges. In the former case, \reflem{21Annulus} implies $K$ is a $(2,q)$-torus link and $S$ is not meridional. In the latter case, \reflem{22Annulus} implies $K$ is a $(2,q)$-torus knot or link and $S$ is not meridional.
\end{proof}
\begin{corollary}\label{Cor:PrimeDiagram}
If $K$ has a connected prime alternating diagram,\index{alternating diagram}\index{alternating knot or link} then $K$ is a prime link.
\end{corollary}
\begin{proof}
By \reflem{PrimeEquiv}, the link $K$ is not prime if and only if $S^3-N(K)$ contains an essential meridional annulus.\index{essential meridional annulus} By \reflem{TwistReduced}, $K$ has a diagram that is connected, prime, alternating, and twist-reduced. Then \refprop{AltAnannular} implies that $S^3-N(K)$ cannot contain an essential meridional annulus. So $K$ is prime.
\end{proof}
\subsection{Closed surfaces and alternating knots}
Our goal is still to prove \refthm{AltHyperbolic}, that a knot with a connected, prime, alternating diagram\index{alternating diagram} is either a $(2,q)$-torus knot or is hyperbolic. We now consider closed essential surfaces embedded in $S^3-K$.
\begin{lemma}\label{Lem:Meridional}
Suppose $S$ is a closed essential\index{essential} surface embedded in the complement of a knot or link $K$ with a prime, connected, alternating diagram.\index{alternating diagram} Then $S$ contains a closed curve that encircles a meridian of $K$ at a crossing.
\end{lemma}
\begin{proof}
Put $S$ into normal form\index{normal form} with respect to the polyhedral decomposition of $S^3-K$. Let $D$ be an innermost normal disk\index{normal} in the polyhedron; that is, $D$ cuts off a portion of a polyhedron that contains no other normal disks of $S$. Now, because $S$ is a closed surface, $D$ must meet a regular (i.e.\ not boundary) face $F$ of the polyhedron. Moreover, normality implies an arc of $\partial D$ meets $F$ on two distinct edges $e_1$ and $e_2$ bordering $F$. These edges correspond to crossing arcs.\index{crossing arc} Recall from the construction of the polyhedra (e.g.\ in \refchap{Fig8Decomp}) that each such edge is identified to an edge on the opposite side of an ideal vertex in the polyhedron. Because the diagram is alternating, the two edges that are identified to $e_1$ and $e_2$ must lie on opposite sides of $D$; see \reffig{OppositeSides}.
\begin{figure}[h]
\import{Figures/Ch11_Alternating/}{F11-08-OppSid.eps_tex}
\caption{Crossing arcs\index{crossing arc} identified to edges meeting $\partial D$ lie on opposite sides of $D$. This is shown on the left in the polyhedron, and on the right in the diagram of the knot.}
\label{Fig:OppositeSides}
\end{figure}
Because $D$ meets edges $e_1$ and $e_2$, another normal disk\index{normal} of $S$ in the same polyhedron must meet the opposite edges identified to $e_1$ and $e_2$, and thus there is an arc of a normal disk of $S$ on either side of $D$. In \reffig{OppositeSides}, these are shown as dashed arcs. But $D$ was chosen to be innermost, so one of those arcs must also belong to $D$. Then $D$ contains an arc running from one crossing arc on one side of an ideal vertex back to the identified crossing arc\index{crossing arc} on the other side of the ideal vertex. This arc glues up in $S$ to be a closed curve encircling a meridian at the crossing.
\end{proof}
\begin{corollary}\label{Cor:AltAtoroidal}
If $K$ is a knot or link with a prime alternating diagram,\index{alternating diagram}\index{alternating knot or link}\index{alternating knot or link!atoroidal} then its complement is atoroidal.\index{atoroidal}
\end{corollary}
\begin{proof}
Suppose $S$ is an essential torus in $S^3-K$. Then $S$ contains a closed curve encircling a meridian at a crossing, by \reflem{Meridional}. This closed curve bounds a disk in $S^3$ that meets the knot exactly once, at the crossing. Surgering $S$ along this disk and pushing both ends away from the crossing, we obtain a sphere $S'$ that meets the knot exactly twice in two meridians, with a crossing on the outside. Then $A := S'-N(K)$ is a meridional annulus. Because the link is prime by \refcor{PrimeDiagram}, \reflem{PrimeEquiv} implies the annulus $A$ cannot be essential. It follows that $A$ is boundary parallel,\index{boundary parallel} and thus the original torus $S$ is boundary parallel,\index{boundary parallel} not essential.
\end{proof}
\begin{proof}[Proof of \refthm{AltHyperbolic}]
\Refcor{AltIrreducible} implies that a knot or link $K$ with a prime alternating diagram\index{alternating diagram}\index{alternating knot or link!hyperbolic} is irreducible\index{irreducible} and boundary irreducible.\index{boundary irreducible} \Refcor{AltAtoroidal} implies that it is atoroidal.\index{atoroidal} By \refprop{AltAnannular}, if it is not a $(2,q)$-torus knot, then it is also anannular.\index{anannular} The fact that $S^3-K$ is hyperbolic then follows from Thurston's \refthm{SfcesHyperbolic}.
\end{proof}
\section{Checkerboard surfaces}\label{Sec:Checkerboard}
Recall that the diagram graph of a knot or link is a 4-valent graph embedded in the plane of projection. We may checkerboard color the regions of the graph, white and shaded. This checkerboard coloring\index{checkerboard coloring} may be used to define two surfaces embedded in any link complement.
\begin{definition}\label{Def:CheckerboardSurfaces}\index{checkerboard surface}\index{surface!checkerboard}
Let $K$ be a knot or link. Consider all the shaded regions in the checkerboard coloring\index{checkerboard coloring} of the complement of the diagram graph of $K$. By removing a neighborhood of each vertex, these can be embedded as disks in the link complement, with boundary lying on the knot and along crossing arcs.\index{crossing arc} At each crossing, attach a \emph{twisted band}\index{twisted band} between crossing arcs on opposite sides of the crossing; see \reffig{TwistedBand}. Note the twisting is in the same direction as the crossing. The result is a surface embedded in $S^3-N(K)$, with boundary on $N(K)$. This is called the \emph{shaded checkerboard surface}. The \emph{white checkerboard surface} is obtained similarly, using the opposite regions of the checkerboard coloring.\index{checkerboard coloring}
\end{definition}
\begin{figure}
\includegraphics{Figures/Ch11_Alternating/F11-09-TwBand.eps}
\caption{A twisted band is shown on the left, and a checkerboard surface on the right.\index{checkerboard surface}}
\label{Fig:TwistedBand}
\end{figure}
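One consequence of the construction worth recording: writing $\Sigma$ for the shaded checkerboard surface, $\Sigma$ is assembled from one disk for each shaded region and one twisted band for each crossing, and attaching a band along two arcs lowers Euler characteristic by one. So if the diagram has $s$ shaded regions and $c$ crossings, then
\[ \chi(\Sigma) = s - c, \]
and similarly for the white checkerboard surface, with $s$ replaced by the number of white regions.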
The main result of this section will be to show that if $K$ is alternating, then checkerboard surfaces are essential. Our main tool again will be the checkerboard polyhedral decomposition of an alternating link.
For an alternating knot or link, the checkerboard surfaces are closely related to the polyhedral decomposition of \refthm{PolyAltKnot}. In that theorem, we obtained two polyhedra with checkerboard colored faces that glue to give the link complement. The shaded checkerboard surface is obtained from the shaded faces of the polyhedra, the white checkerboard surface from the white faces.
\begin{definition}\label{Def:Cut}
Let $\Sigma$ be a properly embedded surface in a compact manifold $M$ with torus boundary components. Let $N(\Sigma)$ be a regular neighborhood of $\Sigma$. The manifold \emph{cut along $\Sigma$}\index{cut along surface $\Sigma$} is the manifold
\[ M{\backslash \backslash}\Sigma := M - N(\Sigma). \]
The boundary of $M{\backslash \backslash}\Sigma$ is a union of two subsurfaces. One of these is the surface $\partial(N(\Sigma))\subset\partial(M{\backslash \backslash}\Sigma)$; it is homeomorphic to the double cover $\widetilde{\Sigma}$ of $\Sigma$. The other is the remnant of $\partial M$, consisting of $\partial M-(\partial M\cap N(\Sigma))$, containing annuli and tori. The latter surface is called the \emph{parabolic locus}\index{parabolic locus} of $M{\backslash \backslash}\Sigma$.
\end{definition}
\begin{definition}\label{Def:BoundedPolyDecomp}
A \emph{bounded polyhedral decomposition} of a manifold $M{\backslash \backslash}\Sigma$ is a decomposition of $M{\backslash \backslash}\Sigma$ into truncated ideal polyhedra with interior and boundary faces, as well as \emph{surface faces},\index{surface faces} which are unglued, and which come from $\widetilde{\Sigma} \subset \partial(M{\backslash \backslash}\Sigma)$. As in \refdef{NormalDisk}, boundary edges are still defined to lie between boundary faces and other faces, \emph{interior edges}\index{interior edge} lie between pairs of interior faces, and \emph{surface edges}\index{surface edge} lie between surface faces and interior faces. We do not allow two surface faces to be adjacent along an edge. Moreover, under the gluing, each edge class either contains no surface edges, or it contains exactly two surface edges.
\end{definition}
Our main example of a bounded polyhedral decomposition comes from checkerboard surfaces and alternating knots.
\begin{lemma}\label{Lem:BoundedPolyAlt}
Suppose $M=S^3-N(K)$ is the exterior of an alternating knot or link $K$,\index{alternating knot or link!checkerboard surface}\index{alternating knot or link!bounded polyhedral decomposition} and suppose $\Sigma$ is the shaded checkerboard surface. Then the cut manifold $M{\backslash \backslash}\Sigma$ has a bounded polyhedral decomposition into the two checkerboard colored polyhedra of \refthm{PolyAltKnot}. Surface faces are shaded faces; boundary faces glue to form the parabolic locus of $M{\backslash \backslash}\Sigma$.
A similar statement holds for the white checkerboard surface.\index{checkerboard surface}
\end{lemma}
\begin{proof}
The decomposition is just as before, except that in the gluing of the two polyhedra we leave the shaded faces unglued.
\end{proof}
There is a theorem for normal surfaces in bounded polyhedral decompositions that is completely analogous to \refthm{NormalForm}, for normal surfaces\index{normal} in ideal polyhedral decompositions.
\begin{theorem}\label{Thm:BddNormalForm}
Let $M{\backslash \backslash}\Sigma$ have a bounded polyhedral decomposition.
\begin{enumerate}
\item If $M$ is reducible,\index{reducible 3-manifold} then $M$ contains a normal 2-sphere.\index{normal}
\item If $M$ is irreducible\index{irreducible} and boundary reducible, then $M$ contains a normal disk.\index{normal}
\item If $M$ is irreducible and boundary irreducible,\index{boundary irreducible} then any essential\index{essential} surface in $M$ can be isotoped into normal form.\index{normal}
\end{enumerate}
\end{theorem}
\begin{proof}
The proof is nearly identical to that of \refthm{NormalForm}, except we can no longer isotope an essential surface $S$ through surface faces, as they are now part of the boundary of $M{\backslash \backslash}\Sigma$. We modify the proof of \refthm{NormalForm} where required to avoid such moves. Note that the proofs of the first two parts of the theorem require surgering, not isotoping, and so their arguments go through unchanged. So suppose $M$ is irreducible\index{irreducible} and boundary irreducible,\index{boundary irreducible} and $S$ is essential.
First, if a component of $\partial S$ lies entirely in a surface face and bounds a disk in that face, then since $S$ is incompressible,\index{incompressible} that curve bounds a disk in $S$ as well, hence $S$ has a disk component, parallel into a surface face, contradicting the fact that it is essential.
If an arc of intersection of $S$ with a face has both its endpoints on the same surface edge, and the arc lies in an interior face, then the arc and the edge bound a disk $D$ with one arc of $\partial D$ on $S$ and one arc on a surface face. Because $S$ is essential, it is boundary incompressible;\index{boundary incompressible} it follows that the arc of intersection can be pushed off. A similar argument implies that an arc of intersection of $S$ with an interior face that has one endpoint on a boundary edge and one on a surface edge can be pushed off. For all other arcs of intersection with endpoints on one edge, or an edge and adjacent boundary edges, the argument follows just as before.
\end{proof}
As in the case of polyhedral decompositions, we may put angled structures on bounded polyhedral decomposition and assign to normal disks\index{normal} and normal surfaces a combinatorial area,\index{combinatorial area} exactly as in \refdef{CombinatorialArea}.
\begin{definition}\label{Def:BoundedAngledPolyhedra}
A \emph{bounded angled polyhedral structure}\index{bounded angled polyhedral structure}\index{angled polyhedral structure!bounded} is a decomposition of $M{\backslash \backslash}\Sigma$ into ideal polyhedra, glued along interior faces, along with a collection of dihedral angles, one for each (surface or interior) edge, that satisfy the following.
\begin{enumerate}
\item Each dihedral angle lies in the range $(0,\pi)$.
\item Each normal disk\index{normal} in a polyhedron has nonnegative combinatorial area.\index{combinatorial area}
\item Under the gluing, dihedral angles sum to $2\pi$ around an edge class meeting no surface edges. They sum to $\pi$ if they meet surface edges.
\end{enumerate}
\end{definition}
\begin{proposition}\label{Prop:BddAngledPolyAlt}
Suppose $M=S^3-N(K)$ is the exterior of an alternating knot or link\index{alternating knot or link} $K$, and $\Sigma$ is the shaded checkerboard surface. Then the cut manifold $M{\backslash \backslash}\Sigma$ has a bounded angled polyhedral structure.\index{angled polyhedral structure!bounded}\index{bounded angled polyhedral structure}\index{checkerboard surface}
\end{proposition}
\begin{proof}
As in \refprop{AltAngleStruct}, label each ideal edge of the checkerboard polyhedra by $\pi/2\in(0,\pi)$. Then the proof of \refprop{AltAngleStruct} carries through to show that every normal disk\index{normal} has non-negative combinatorial area.\index{combinatorial area} We only need to check that dihedral angles sum to $\pi$ at surface edges. Note that because each ideal edge is adjacent to both white and shaded faces, in fact each ideal edge is a surface edge. Because we no longer glue shaded faces, each edge class contains exactly two surface edges. Thus the sum of dihedral angles at each edge is $\pi/2+\pi/2 =\pi$, as required.
\end{proof}
\begin{lemma}[Bounded Gauss--Bonnet]\label{Lem:BddGaussBonnet}
Let $S$ be a surface properly embedded in $M{\backslash \backslash}\Sigma$, in normal form\index{normal form}\index{normal} with respect to a bounded angled polyhedral structure\index{angled polyhedral structure!bounded}\index{bounded angled polyhedral structure} on $M{\backslash \backslash}\Sigma$. Let $p$ denote the number of times $\partial S$ intersects a boundary edge adjacent to a surface face. Then
\[ a(S) = -2\pi\chi(S) + \frac{\pi}{2}\,p. \]
\end{lemma}
\begin{proof}
Exercise.
\end{proof}
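For example, if $D$ is a disk in normal form whose boundary meets no boundary edges adjacent to a surface face, then $\chi(D)=1$ and $p=0$, so
\[ a(D) = -2\pi; \]
this computation reappears in the proof of \refthm{CheckerboardEssential} below.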
\begin{definition}\label{Def:BdryPi1Injective}
Let $S$ be a surface properly embedded in a compact 3-manifold $M$ with boundary. We say $S$ is \emph{boundary $\pi_1$-injective}\index{boundary $\pi_1$-injective} if whenever $\alpha \subset S$ is an arc properly embedded in $S$ that is not homotopic rel endpoints into $\partial S$ within $S$, then $\alpha$ is not homotopic rel endpoints into $\partial M$ inside $M$.
We say the surface $S$ is \emph{$\pi_1$-essential}\index{$\pi_1$-essential} if
it is $\pi_1$-injective,\index{$\pi_1$-injective} boundary $\pi_1$-injective, and not parallel into $\partial M$.
\end{definition}
Note that boundary $\pi_1$-injective is stronger than boundary incompressible,\index{boundary incompressible} and $\pi_1$-essential is stronger than essential.\index{essential} For checkerboard surfaces, we have this stronger result.
\begin{theorem}\label{Thm:CheckerboardEssential}
Let $K$ be a link with a connected, prime, reduced alternating diagram,\index{alternating diagram} and let $\Sigma$ be one of its checkerboard surfaces. Then $\Sigma$ is $\pi_1$-injective\index{$\pi_1$-injective} and boundary $\pi_1$-injective,\index{boundary $\pi_1$-injective} hence it is $\pi_1$-essential.\index{alternating knot or link!checkerboard surface}\index{checkerboard surface!$\pi_1$-essential}
\end{theorem}
\begin{proof}
We claim first that $\Sigma$ is $\pi_1$-injective and boundary $\pi_1$-injective if and only if the surface $\widetilde{\Sigma} = \partial N(\Sigma)$ is incompressible\index{incompressible} and boundary incompressible.\index{boundary incompressible} The proof uses the loop theorem; we leave it as an exercise.
Now we claim that if $D$ is a compression disk\index{compression disk} for $\widetilde{\Sigma}$ in $S^3-N(K)$, then we may assume $D$ is properly embedded in $(S^3-N(K)){\backslash \backslash}\Sigma$. This is because $\widetilde{\Sigma}$ separates $S^3-N(K)$ into $N(\Sigma)$ and $(S^3-N(K)){\backslash \backslash}\Sigma$. An innermost disk argument implies that $D$ can be isotoped to be disjoint from $\widetilde{\Sigma}$ in its interior, so $D$ either lies in $(S^3-N(K)){\backslash \backslash}\Sigma$, as desired, or in the product $N(\Sigma)$. If $D$ is in the product, then an isotopy mapping $N(\Sigma)$ to $\Sigma$ takes the disk $D$ to a disk parallel to $\Sigma$, hence parallel to $\widetilde{\Sigma}$, contradicting the fact that it is a compression disk\index{compression disk} for $\widetilde{\Sigma}$.
Thus $D$ is an essential disk in $(S^3-N(K)){\backslash \backslash}\Sigma$ with boundary completely contained in $\widetilde{\Sigma}$. Put $D$ into normal form\index{normal form} with respect to the bounded polyhedral decomposition of $(S^3-N(K)){\backslash \backslash}\Sigma$. Because $\partial D$ meets no boundary faces, \reflem{BddGaussBonnet}, the bounded Gauss--Bonnet lemma, implies that $a(D)=-2\pi$. But each normal disk making up $D$ has nonnegative combinatorial area,\index{combinatorial area} by \refprop{BddAngledPolyAlt}. This is a contradiction.
Now suppose that $D$ is a boundary compression disk\index{boundary compression disk} for $\widetilde{\Sigma}$. An innermost disk and outermost arc argument implies that $D$ is isotopic to a disk with interior disjoint from $\widetilde{\Sigma}$, and again this disk must lie in $(S^3-N(K)){\backslash \backslash}\Sigma$. Put the disk into normal form.\index{normal form} One arc of $\partial D$ lies on $\widetilde{\Sigma}$ and one arc lies on boundary faces. Note this arc begins and ends on boundary edges adjacent to a surface face, but if it meets any other boundary edges in its interior they must be adjacent to interior faces. Then the bounded Gauss--Bonnet lemma, \reflem{BddGaussBonnet}, implies that $a(D) = -2\pi + \pi = -\pi$. Again this contradicts the fact that normal disks\index{normal} have nonnegative combinatorial area.\index{combinatorial area}
So $\Sigma$ is $\pi_1$-injective\index{$\pi_1$-injective} and boundary $\pi_1$-injective.\index{boundary $\pi_1$-injective} Because it has boundary on $N(K)$ it cannot be boundary parallel.\index{boundary parallel} So it is $\pi_1$-essential.\index{$\pi_1$-essential}
\end{proof}
By \refthm{CheckerboardEssential}, every alternating knot contains a pair of $\pi_1$-essential checkerboard surfaces.\index{alternating knot or link!checkerboard surface} The converse is also true: independently, Howie \cite{Howie:Alternating} and Greene \cite{Greene:Alternating} showed that if a 3-manifold contains a pair of essential surfaces satisfying certain conditions required of checkerboard surfaces, then the 3-manifold is the complement of an alternating knot in $S^3$ and the surfaces are isotopic to checkerboard surfaces. \index{checkerboard surface!$\pi_1$-essential}
In addition to being $\pi_1$-essential,\index{$\pi_1$-essential} checkerboard surfaces of a hyperbolic alternating knot also exhibit other nice geometric properties. We discuss these in the next chapter.
\section{Exercises}
\begin{exercise}
Give a proof that a knot or link with a connected, prime, alternating diagram\index{alternating diagram} is a prime link.
One way to prove this is to note that essential meridional annuli\index{essential meridional annulus} can be put into normal form\index{normal form} while ensuring boundary components of the annuli stay well-behaved, and then analyzing the number and form of normal disks that could possibly arise.
\end{exercise}
\begin{exercise}\label{Ex:FlypePrimeAlt}
(Flypes and alternating diagrams)
\begin{enumerate}
\item Prove that a flype takes a prime diagram to a prime diagram.
\item Prove that a flype takes an alternating diagram\index{alternating diagram} to an alternating diagram.
\end{enumerate}
\end{exercise}
\begin{exercise}\label{Ex:NoNormalBigons}
Prove the following result, which will be used in \refchap{Quasifuchsian}.
\begin{proposition}\label{Prop:NoNormalBigons}
Let $K$ be a knot or link with a connected, prime, alternating diagram.\index{alternating diagram} Then in the polyhedral decomposition of the link complement, there can be no bigon\index{bigon} in normal form.\index{normal form} That is, there is no normal disk\index{normal} embedded in a polyhedron that meets exactly two interior edges.
\end{proposition}
\end{exercise}
\begin{exercise}\label{Ex:22Bigons}
Work through the details of the proof of \reflem{22Annulus}: Suppose $K$ is a knot or link with a connected, twist-reduced, prime, alternating diagram. Suppose $S$ is an essential\index{essential} annulus in normal form with respect to the polyhedral decomposition. Suppose some normal disk\index{normal} $D_2$ making up $S$ meets exactly two boundary faces, but runs through opposite sides of those boundary faces.
\begin{enumerate}
\item[(a)] Show that $\partial D_2$ encircles a twist region of the diagram.
\item[(b)] Assume that $\partial D_2$ runs through exactly two boundary faces and exactly two white faces. One arc of $\partial D_2$ in a white face is glued to the side of a normal disk\index{normal} $D_1$, and the other arc of $\partial D_2$ in a white face is glued to the side of a normal disk $D_3$. Following the example of \reffig{21Curve}, left, sketch the images of these arcs of $\partial D_1$ and $\partial D_3$ superimposed on the same polyhedron containing $\partial D_2$.
\item[(c)] Prove that $\partial D_1$ and $\partial D_3$ must each encircle a string of adjacent bigons.\index{bigon}
\item[(d)] Prove that in fact, $\partial D_1$, $\partial D_2$, and $\partial D_3$ either all encircle a single bigon each, or they all encircle a pair of bigons\index{bigon} each. Sketch these curves superimposed on the same polyhedron.
\end{enumerate}
\end{exercise}
\begin{exercise}
Prove \reflem{BddGaussBonnet}, the bounded Gauss--Bonnet theorem.
\end{exercise}
\begin{exercise}
Prove that a properly embedded surface $S$ in a compact 3-manifold $M$ is $\pi_1$-injective\index{$\pi_1$-injective} if and only if the surface $\widetilde{S} = \partial N(S)$ is incompressible.\index{incompressible}
\end{exercise}
\begin{exercise}
Prove that a properly embedded surface $S$ in a compact 3-manifold $M$ is boundary $\pi_1$-injective\index{boundary $\pi_1$-injective} if and only if the surface $\widetilde{S} = \partial N(S)$ is boundary incompressible.\index{boundary incompressible}
\end{exercise}
\chapter{The Geometry of Embedded Surfaces}\label{Chap:Quasifuchsian}
\blfootnote{Jessica S. Purcell, Hyperbolic Knot Theory}
In this chapter, we discuss the geometry of essential surfaces\index{essential} embedded in hyperbolic 3-manifolds.
In the first section, we show that specific surfaces embedded in hyperbolic 3-manifolds always admit isometries. This allows us to cut along such surfaces and reglue, obtaining new manifolds whose geometry can be understood from the geometry of the original. The most straightforward instance of this uses the 3-punctured sphere, and was discovered by Adams \cite{adams:3-punct}, building on work of Wielenberg \cite{Wielenberg}. Ruberman discovered a similar result for 4-punctured spheres and related surfaces \cite{Ruberman:mutation}. Both techniques are still frequently used to build examples of hyperbolic knots and links with particular geometric properties (for example volume: \cite{SBurton}, \cite{adamsetal:VolDet}, short geodesics: \cite{Millichap}, cusp shapes: \cite{DangPurcell}).
We then return to more general essential\index{essential} surfaces, and discuss a geometric classification of such surfaces as quasifuchsian (or Fuchsian), accidental, or virtual fibers. We illustrate the behavior of such surfaces using examples from knot complements, especially alternating knots.\index{alternating knot or link} We show that the checkerboard surfaces of hyperbolic alternating links are always quasifuchsian.
\section{Belted sums and mutations}
This section describes two techniques for building distinct links with related hyperbolic structures; for example, the resulting links have the same volume. Both techniques arose in the 1980s, and both work by cutting along an embedded surface in a hyperbolic 3-manifold and regluing via an isometry.
\subsection{3-punctured spheres and belted sums}
Suppose $M$ is a hyperbolic 3-manifold that contains an embedded incompressible\index{incompressible} 3-punctured sphere $S$. We have seen examples of this: in \refchap{TwistKnots}, each crossing circle of a reduced fully augmented link\index{fully augmented link} bounds an embedded essential\index{essential} 3-punctured sphere. In \refcor{AugmentedGeodSfces} we noted that these 3-punctured spheres are always totally geodesic in fully augmented links.\index{fully augmented link}
We now generalize this.
\begin{theorem}\label{Thm:3PunctTotallyGeodesic}
Let $M$ be a 3-manifold admitting a complete, finite volume hyperbolic structure, so $M$ is the interior of a compact manifold $\overline{M}$ with torus boundary. Let $S$ be a $\pi_1$-injective (equivalently, incompressible)\index{incompressible}
3-punctured sphere properly embedded in $\overline{M}$. Then $S$ is isotopic to a properly embedded 3-punctured sphere that is totally geodesic in the hyperbolic structure on $M$.
\end{theorem}
\begin{proof}
Let $\alpha$, $\beta$, and $\gamma = \alpha\cdot \beta$ be generators of $\pi_1(S)$ that encircle the three punctures of $S$. Because $M$ admits a complete hyperbolic structure, there is a discrete, faithful representation $\rho\from \pi_1(M)\to \operatorname{PSL}(2,{\mathbb{C}})$ taking $\alpha$, $\beta$, and $\gamma$ to parabolic elements.\index{parabolic}
We may conjugate to adjust the images of three points at infinity; we conjugate so that the fixed point of $\rho(\alpha)$ is $\infty$, so that $\rho(\alpha)$ translates $0\in{\mathbb{C}}$ to $2\in{\mathbb{C}}$, and so that the fixed point of $\rho(\beta)$ is $0$. Then the three parabolics have the form
\[ \rho(\alpha) = \mat{1 & 2 \\ 0 & 1}, \quad \rho(\beta) = \mat{1 & 0 \\ z & 1}, \quad \mbox{and } \rho(\alpha\cdot\beta) = \mat{ 1+2z & 2 \\ z & 1}. \]
Because $\rho(\alpha\cdot\beta)$ is parabolic, its trace is $2+2z = \pm 2$, so $z=0$ or $z=-2$. If $z=0$, $\rho(\beta)$ is the identity, contradicting the fact that $S$ is $\pi_1$-injective. Thus $z=-2$.
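Explicitly, with $z=-2$ the generators become
\[ \rho(\alpha) = \mat{1 & 2 \\ 0 & 1}, \quad \rho(\beta) = \mat{1 & 0 \\ -2 & 1}, \quad \mbox{and } \rho(\alpha\cdot\beta) = \mat{-3 & 2 \\ -2 & 1}, \]
all with real entries and trace $\pm 2$.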
Now note that both $\rho(\alpha)$ and $\rho(\beta)$ (and hence $\rho(\alpha\cdot\beta)$) preserve the real line ${\mathbb{R}} \subset {\mathbb{C}}\subset \partial_\infty {\mathbb{H}}^3$. Hence $\rho(\pi_1(S))$ preserves the vertical plane $P$ in ${\mathbb{H}}^3$ whose boundary is the real line. Thus under the covering map $p\from {\mathbb{H}}^3\to {\mathbb{H}}^3/\rho(\pi_1(M)) = M$, the plane $P$ maps to a totally geodesic surface homeomorphic to $S$ in $M$.
It remains to show that $p(P)$ is embedded in $M$, and $S$ is isotopic to the embedded totally geodesic surface $p(P)$.
To do so, consider $p^{-1}(S)$. This is a disjoint union of embedded, possibly non-geodesic planes in ${\mathbb{H}}^3$. One lift $\widetilde{S}$ is preserved by $\rho(\alpha)$, $\rho(\beta)$, and $\rho(\gamma)$ above. It follows that $\widetilde{S}$ and $P$ have the same limit set;\index{limit set} recall that the limit set is defined in \refdef{LimitSet}. Then for any $g\in\pi_1(M)$, $\rho(g)(P)$ has the same limit set as $\rho(g)(\widetilde{S})$. Because $S$ is embedded, translates of $\widetilde{S}$ are disjoint, and it follows that translates of $P$ are disjoint embedded planes in ${\mathbb{H}}^3$. Then $p(P)$ is an embedded surface in $M$, an isotopy from $\widetilde{S}$ to $P$ projects to an isotopy from $S$ to $p(P)$ in $M$, and so $S$ is isotopic to a properly embedded totally geodesic 3-punctured sphere.
\end{proof}
\begin{corollary}\label{Cor:Gluing3PunctSpheres}
Let $M$ and $M'$ be hyperbolic 3-manifolds containing essential\index{essential} embedded 3-punctured spheres $S$ and $S'$, respectively. Then $M{\backslash \backslash} S$ and $M'{\backslash \backslash} S'$ are hyperbolic 3-manifolds, each with two totally geodesic 3-punctured sphere boundary components. Moreover:
\begin{enumerate}
\item Any manifold $M''$ obtained by identifying 3-punctured sphere boundary components of $M{\backslash \backslash} S$ to those of $M'{\backslash \backslash} S'$ will be hyperbolic, containing embedded essential 3-punctured spheres, and $M{\backslash \backslash} S$ and $M'{\backslash \backslash} S'$ embed isometrically in $M''$. In particular, $\operatorname{vol}(M'') = \operatorname{vol}(M)+\operatorname{vol}(M')$.
\item Any manifold $M'''$ obtained by identifying the 3-punctured sphere boundary components of $M{\backslash \backslash} S$ via homeomorphism will be hyperbolic, containing an embedded essential 3-punctured sphere, and $\operatorname{vol}(M''')=\operatorname{vol}(M)$.
\end{enumerate}
\end{corollary}
\begin{proof}
Because $S$ and $S'$ are isotopic to totally geodesic hyperbolic surfaces, we obtain $M{\backslash \backslash} S$ and $M'{\backslash \backslash} S'$, respectively, by removing a collection of half spaces from ${\mathbb{H}}^3$ corresponding to lifts of $S$ and $S'$, and then taking the quotient. Note that the geometry of $M{\backslash \backslash} S$ and $M'{\backslash \backslash} S'$ therefore agrees with the geometry of $M$ and $M'$, respectively, away from $S$ and $S'$. In particular, $\operatorname{vol}(M{\backslash \backslash} S) = \operatorname{vol}(M)$ and $\operatorname{vol}(M'{\backslash \backslash} S')=\operatorname{vol}(M')$.
Moreover, $M{\backslash \backslash} S$ and $M'{\backslash \backslash} S'$ have totally geodesic 3-punctured sphere boundary.
By \refprop{CompleteStruct3punct}, there is a unique hyperbolic structure on a 3-punctured sphere. Therefore, any gluing of 3-punctured spheres can be obtained by isometry. So $M''$ is obtained by gluing $M{\backslash \backslash} S$ to $M'{\backslash \backslash} S'$ by isometry. Thus $M{\backslash \backslash} S$ and $M'{\backslash \backslash} S'$ isometrically embed in $M''$, and
the volume of $M''$ is equal to $\operatorname{vol}(M) + \operatorname{vol}(M')$.
Similarly, $M{\backslash \backslash} S$ isometrically embeds in $M'''$ and $\operatorname{vol}(M''')=\operatorname{vol}(M)$.
\end{proof}
\Refthm{3PunctTotallyGeodesic} and \refcor{Gluing3PunctSpheres} were used by Adams to construct explicit examples of links in $S^3$ with additive volumes.
\begin{definition}\label{Def:BeltSum}
A \emph{belted tangle}\index{belted tangle} is a link in $S^3$ with one link component unknotted in $S^3$, bounding a disk meeting other components of the link exactly two times; see \reffig{BeltedTangle}, left.
The \emph{belted sum}\index{belted sum} of two belted tangles is the belted tangle obtained by tangle addition as in \reffig{BeltedTangle}, right.
\end{definition}
\begin{figure}[h]
\import{Figures/Ch12_Quasifuch/}{F12-01-BeltSu.eps_tex}
\caption{A belted tangle\index{belted tangle} is a link with an unknotted component bounding an embedded 2-punctured disk; two are shown on the left. On the right, a belted sum\index{belted sum} is obtained from two belted tangles via the tangle addition shown.}
\label{Fig:BeltedTangle}
\end{figure}
\begin{corollary}\label{Cor:BeltedSum}
If $L_1$ and $L_2$ are belted tangles\index{belted tangle} that are hyperbolic, then their belted sum\index{belted sum} $L$ is a hyperbolic link with volume satisfying $\operatorname{vol}(L) = \operatorname{vol}(L_1)+\operatorname{vol}(L_2)$.
\end{corollary}
\begin{proof}
Note that $S^3-L_1$ and $S^3-L_2$ each contain an embedded 3-punctured sphere, namely the 2-punctured disk whose boundary is on the unknotted component of the link. Because these two link complements are hyperbolic, the 3-punctured sphere must be incompressible\index{incompressible} in both cases (\refex{BeltTangleIncompress}). The result then follows from \refcor{Gluing3PunctSpheres}.
\end{proof}
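For example, the Whitehead link is a belted tangle: either component is unknotted and bounds a disk meeting the other component exactly twice. Its complement decomposes into a single regular ideal octahedron, so its volume is approximately $3.6639$; by \refcor{BeltedSum}, the belted sum of two copies of the Whitehead link is therefore a hyperbolic link with volume approximately $2(3.6639) = 7.3277$.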
\subsection{4-punctured spheres and mutation}
Note that to prove \refcor{BeltedSum}, we glued isometric 3-punctured spheres. Unfortunately, the 3-punctured sphere is the only hyperbolic surface with a unique hyperbolic structure. All others have infinitely many hyperbolic structures, and so a gluing homeomorphism will not necessarily give an isometry, and volume will not necessarily be additive. However, in certain cases we may still cut and glue along an essential surface and still ensure that geometry is well behaved. One way to do this is a process called \emph{mutation}, which applies to 4-punctured spheres.
This section gives a condition on the diagrams of two knots or links that guarantees that the geometries of their complements are similar; in particular, they will have the same hyperbolic volume. This was first discovered by Ruberman~\cite{Ruberman:mutation}, who proved the result using minimal surfaces. Because the full proof requires more background on minimal surfaces than we wish to include here, we will refer to his paper for the complete result. However, we will provide full details in the special case that an embedded essential 4-punctured sphere is isotopic to an embedded pleated\index{pleated surface} 4-punctured sphere isometric to the boundary of an ideal tetrahedron.
\begin{definition}\label{Def:ConwaySphere}
A \emph{Conway sphere}\index{Conway sphere} is a 4-punctured sphere obtained from the diagram of a knot or link $K$ as follows. Let $\gamma$ be a simple closed curve in the plane of projection of the diagram of $K$ that meets the diagram exactly four times, transversely in edges of the diagram. Let $\overline{S}$ be the sphere embedded in $S^3-K$ obtained by attaching two disks to $\gamma$, one on either side of the plane of projection. Let $S$ denote the corresponding 4-punctured sphere in $S^3-K$. In the case that $S$ is essential,\index{essential} we say that it is a \emph{Conway sphere}\index{Conway sphere} for $K$.
\end{definition}
We will put geometric structures on Conway spheres, and cut and reglue via isometry of the spheres. Note that a hyperbolic ideal tetrahedron has boundary a pleated\index{pleated surface} 4-punctured sphere. This gives us a special case of a hyperbolic structure on a 4-punctured sphere and an isometry preserving it.
\begin{lemma}\label{Lem:IsomTetr}
Let $T$ be a hyperbolic ideal tetrahedron. For each pair of opposite edges, there is an axis in ${\mathbb{H}}^3$ meeting the two edges orthogonally. Rotation by $\pi$ about such an axis is an isometry of the ideal tetrahedron.
\end{lemma}
\begin{proof}
Because the tetrahedron is nondegenerate, its four ideal vertices do not lie on a single circle in $\partial_\infty{\mathbb{H}}^3$, so a pair of opposite edges consists of disjoint geodesics that are not asymptotic. Such geodesics have a unique common perpendicular; this is the axis of the lemma. Rotation by $\pi$ about this axis is an isometry of ${\mathbb{H}}^3$. Because the axis meets each of the two opposite edges orthogonally, the rotation maps each of these edges to itself, interchanging its ideal endpoints. Thus the rotation permutes the four ideal vertices of $T$, and therefore maps the tetrahedron back to itself.
\end{proof}
\begin{corollary}\label{Cor:Isom4PunctSphere}
Let $S$ be a pleated\index{pleated surface} 4-punctured sphere with hyperbolic structure identical to the boundary of an ideal hyperbolic tetrahedron. Then any of the three rotations of \reflem{IsomTetr} gives an isometry of $S$.
\end{corollary}
\begin{definition}\label{Def:Mutation}
A \emph{mutation}\index{mutation} of a knot or link is obtained by cutting along a Conway sphere,\index{Conway sphere} rotating by $\pi$ about one of the three axes shown in \reffig{Mutation}, and then regluing.
\end{definition}
\begin{figure}
\includegraphics{Figures/Ch12_Quasifuch/F12-02-Mutate.eps}
\caption{Mutation\index{mutation} cuts along a Conway sphere,\index{Conway sphere} performs one of the involutions shown on the left, and then reglues. Shown on the right is an example of two distinct knots related by mutation.}
\label{Fig:Mutation}
\end{figure}
\begin{theorem}\label{Thm:Mutation}
Let $K$ be a hyperbolic knot or link admitting an embedded essential Conway sphere.\index{Conway sphere} Let $K^\mu$ be any mutation\index{mutation} of $K$. Then $K^\mu$ is hyperbolic, and $\operatorname{vol}(K) = \operatorname{vol}(K^\mu)$.
\end{theorem}
\begin{proof}
Let $S$ be the essential Conway sphere,\index{Conway sphere} and pleat $S$. If the pleating is embedded, isometric to the boundary of an ideal tetrahedron, then we may cut along the pleated\index{pleated surface} surface to obtain two hyperbolic manifolds whose boundaries are isometric pleated 4-punctured spheres, and isometric to the boundary of a hyperbolic ideal tetrahedron. By \refcor{Isom4PunctSphere}, rotation by $\pi$ through an axis orthogonal to opposite edges of the tetrahedron gives an isometry of the pleated surface $S$. Thus we may apply this isometry to one of the pieces and reglue, to obtain a complete hyperbolic manifold with an embedded essential Conway sphere,\index{Conway sphere} and volume equal to the volume of $S^3-K$. Note this is exactly a mutation.\index{mutation}
In the case that the pleating is not embedded, then Ruberman shows that $S$ is still isotopic to an embedded 4-punctured sphere that is a minimal surface with respect to the hyperbolic metric, and that mutation\index{mutation} is an isometry of this minimal surface \cite{Ruberman:mutation}. Thus the same argument applies to show the volumes agree.
\end{proof}
\section{Fuchsian, quasifuchsian, and accidental surfaces}
We now return to more general essential surfaces embedded in a hyperbolic 3-manifold.
Let $M$ be hyperbolic, such that $M$ is the interior of a compact 3-manifold $\overline{M}$ with boundary.
Since $M$ is hyperbolic, we know there is a discrete, faithful representation $\rho\from \pi_1(M)\to \operatorname{PSL}(2,{\mathbb{C}})$ (\refprop{FreePropDisc}). If $S$ is a surface properly embedded in $M$, then the image of $\pi_1(S)$ under $\rho$ will be a discrete subgroup of $\operatorname{PSL}(2,{\mathbb{C}})$. We will consider properties of this subgroup.
Let $\Gamma\leq\operatorname{PSL}(2,{\mathbb{C}})$ be a discrete group. Recall from \refdef{LimitSet} that the limit set\index{limit set} of $\Gamma$ is the set of accumulation points on $\partial{\mathbb{H}}^3$ of the orbit $\Gamma(x)$ for any point $x\in{\mathbb{H}}^3$.
\begin{definition}\label{Def:QFuchsian}
A discrete group $\Gamma\leq\operatorname{PSL}(2,{\mathbb{C}})$ is said to be \emph{Fuchsian}\index{Fuchsian} if its limit set is a geometric circle on $\partial{\mathbb{H}}^3$. If its limit set is a Jordan curve and no element of $\Gamma$ interchanges the complementary components of the limit set, then $\Gamma$ is said to be \emph{quasifuchsian}.\index{quasifuchsian group}
\end{definition}
\begin{example}\label{Example:Quasifuchsian}
Let $\Gamma \leq \operatorname{PSL}(2,{\mathbb{R}})$ be the image of a discrete faithful representation of the fundamental group of a hyperbolic surface $S$ that is either closed or punctured, without boundary. Then the limit set of $\Gamma$ acting on ${\mathbb{H}}^2$ is all of $\partial{\mathbb{H}}^2$. Now view $\Gamma\leq\operatorname{PSL}(2,{\mathbb{R}})\leq\operatorname{PSL}(2,{\mathbb{C}})$ as acting on a hyperplane $H$ in ${\mathbb{H}}^3$. When we extend the action of $\Gamma$ to all of ${\mathbb{H}}^3$, the limit set is the geometric circle $\partial H$ bounding the hyperplane $H$. Thus $\Gamma$ is Fuchsian.\index{Fuchsian}
Now adjust the representation very slightly, obtaining a discrete group $\Gamma_\epsilon \leq \operatorname{PSL}(2,{\mathbb{C}})$. The limit set also adjusts slightly. If $\Gamma_\epsilon$ no longer preserves a geometric circle on $\partial_\infty{\mathbb{H}}^3$, then its limit set is no longer a geometric circle. However, it will be a topological circle. Thus $\Gamma_\epsilon$ is quasifuchsian. An example is shown in \reffig{Fractal}; this figure first appeared in \cite{thurston:bulletin}.
\begin{figure}
\begin{center}
\includegraphics{Figures/Ch12_Quasifuch/F12-03-ThurLS.eps}
\end{center}
\caption{The limit set of a Fuchsian group, and various limit sets of quasifuchsian groups obtained by deforming the Fuchsian\index{Fuchsian} group slightly. Figures are from \cite{thurston:bulletin}.}
\label{Fig:Fractal}
\end{figure}
The examples of \reffig{Fractal} were created by computer. Adjusting deformations of a Fuchsian group by computer leads to beautiful fractal images. See, for example, \cite{IndrasPearls}. Software to visualize limit sets has also been developed by Wada \cite{Wada:Opti}. Yamashita has written a note to help users create their own software \cite{Yamashita:Software}. As a first step for the interested reader, we suggest working through Yamashita's example in \refex{Software}; a minimal illustration in a similar spirit appears just after this example. For further work, the book \cite{IndrasPearls} includes directions for creating and exploring limit sets by computer.
\end{example}
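One naive way to draw approximate pictures of limit sets is to plot fixed points of all short words in the group: for a non-elementary discrete group, the fixed points on $\partial{\mathbb{H}}^3$ of its loxodromic and parabolic elements lie in the limit set and are dense in it. The minimal Python sketch below is only an illustration (it is not the method of \cite{Wada:Opti} or \cite{Yamashita:Software}, and the word-length cutoff and numerical tolerance are arbitrary choices). Its generators are the parabolic matrices from the proof of \refthm{3PunctTotallyGeodesic}, which generate a Fuchsian group, so the plotted points accumulate along the real axis; substituting generators of a quasifuchsian group instead produces pictures like those of \reffig{Fractal}.
\begin{verbatim}
import itertools
import numpy as np
import matplotlib.pyplot as plt

# Parabolic generators rho(alpha), rho(beta) from the proof of the
# 3-punctured sphere theorem; they generate a Fuchsian group.
A = np.array([[1, 2], [0, 1]], dtype=complex)
B = np.array([[1, 0], [-2, 1]], dtype=complex)
gens = {'a': A, 'A': np.linalg.inv(A), 'b': B, 'B': np.linalg.inv(B)}

def fixed_point(M):
    # A fixed point of z -> (az+b)/(cz+d): solve c z^2 + (d-a) z - b = 0.
    (a, b), (c, d) = M
    if abs(c) < 1e-12:          # element fixes infinity; skip it
        return None
    return ((a - d) + np.sqrt((d - a)**2 + 4*b*c)) / (2*c)

points = []
for length in range(1, 8):      # all words of length at most 7
    for word in itertools.product('aAbB', repeat=length):
        M = np.eye(2, dtype=complex)
        for letter in word:
            M = M @ gens[letter]
        z = fixed_point(M)
        if z is not None:
            points.append(z)

pts = np.array(points)
plt.scatter(pts.real, pts.imag, s=1)
plt.gca().set_aspect('equal')
plt.show()
\end{verbatim}
Longer words give denser pictures, at the cost of exponentially many matrix products.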
\begin{definition}\label{Def:QFuchsianSfce}
Let $M$ be a hyperbolic 3-manifold and $S\subset M$ a properly embedded essential\index{essential} surface. Let $\rho\from \pi_1(M)\to\operatorname{PSL}(2,{\mathbb{C}})$ be a discrete, faithful representation. The surface $S$ is \emph{totally geodesic},\index{totally geodesic surface}\index{surface!totally geodesic} if, under the induced representation, the image $\rho(\pi_1(S))\leq\operatorname{PSL}(2,{\mathbb{C}})$ is Fuchsian. Sometimes a totally geodesic surface is also called \emph{Fuchsian}.\index{Fuchsian}\index{surface!Fuchsian} The surface $S$ is \emph{quasifuchsian}\index{quasifuchsian surface}\index{surface!quasifuchsian} if $\rho(\pi_1(S))\leq \operatorname{PSL}(2,{\mathbb{C}})$ is quasifuchsian.
\end{definition}
\begin{definition}\label{Def:Accidental}
Let $S$ be a surface properly embedded in $M$. A nontrivial loop $\gamma$ in $S$ that is not freely homotopic into $\partial S$ is called an \emph{accidental parabolic}\index{accidental parabolic} if $\rho(\gamma)$ is parabolic\index{parabolic} in $\operatorname{PSL}(2,{\mathbb{C}})$. The surface $S$ is said to be \emph{accidental}\index{accidental surface} if it contains an accidental parabolic.
\end{definition}
\begin{theorem}\label{Thm:NotAccidentalIfGeodesic}
If $S$ is a totally geodesic or quasifuchsian surface\index{quasifuchsian surface}\index{surface!quasifuchsian} properly embedded in the hyperbolic 3-manifold $M$, then $S$ is not accidental.\index{accidental surface}
\end{theorem}
\begin{proof}
If $S$ is a totally geodesic surface, then any closed curve $\gamma$ in $S$ that is not freely homotopic into $\partial S$ must be freely homotopic to a closed geodesic in $S$. In turn, this closed geodesic in $S$ is a closed geodesic in $M$. Thus $\rho(\gamma)$ has a geodesic axis, and cannot be parabolic.\index{parabolic} Thus $S$ has no accidental parabolics in this case.
If $S$ is quasifuchsian, the same argument does not immediately apply. However, it is known that a quasifuchsian group\index{quasifuchsian group} cannot contain an accidental parabolic\index{accidental parabolic} element; see Chapter IX, Proposition D.17 of \cite{Maskit:KleinianGroups}.
Thus $S$ is not accidental.
\end{proof}
\begin{corollary}\label{Cor:AltNoGeodesicSfce}
Let $K$ be a knot or link with a prime, connected, alternating diagram,\index{alternating diagram}\index{alternating knot or link} and suppose $K$ is not a $(2,q)$-torus knot. Then the hyperbolic manifold $S^3-K$ contains no closed embedded totally geodesic surface.
\end{corollary}
\begin{proof}
Suppose $S$ is a closed, embedded, totally geodesic surface in the link complement $S^3-K$. By \reflem{Meridional}, any closed essential\index{essential} surface contains a closed curve that encircles a meridian of $K$. In particular, $S$ must contain a closed curve $\gamma$ encircling a meridian. But then $\gamma$ is freely homotopic to a meridian of $K$, meaning $\gamma$ is an accidental parabolic.\index{accidental parabolic} This contradicts \refthm{NotAccidentalIfGeodesic}.
\end{proof}
In \cite{MenascoReid}, Menasco and Reid were the first to observe \refcor{AltNoGeodesicSfce}. Based on their observation, they made the following conjecture, which is still open at the time of writing this book.
\begin{conjecture}[Menasco--Reid conjecture]\label{Conj:MenascoReid}\index{Menasco--Reid conjecture}
Let $K$ be a knot in $S^3$ such that $S^3-K$ is hyperbolic. Then $S^3-K$ admits no closed, embedded, totally geodesic surface.
\end{conjecture}
Since the conjecture was proposed in the 1990s, evidence has developed both for and against the Menasco--Reid conjecture. As evidence for the conjecture, Menasco and Reid showed that, in addition to alternating knots,\index{alternating knot or link} other classes of hyperbolic knots, namely closed 3-braids and tunnel number one knots, have complements containing no closed embedded totally geodesic surface \cite{MenascoReid}. Since then, even more classes of knots and links have been shown to have complements containing no closed, embedded, totally geodesic surfaces; a summary of such results can be found in the survey \cite{Adams:HyperbolicKnots}.
On the other hand, \refconj{MenascoReid} is known to be false for link complements, shown first in \cite{MenascoReid}. Leininger showed that there exists a sequence of hyperbolic knots whose complements contain closed embedded essential surfaces with principal curvatures converging to zero \cite{leininger}; if the principal curvatures were known to be zero the surfaces would be totally geodesic. DeBlois showed that \refconj{MenascoReid} does not hold for knots in rational homology spheres \cite{deblois-surfaces}. And Adams and Schoenfeld showed that \refconj{MenascoReid} is false if surfaces are allowed to have punctures \cite{AdamsSchoenfeld}. For example, they showed that the checkerboard surface\index{checkerboard surface} of certain pretzel knots, such as the surface shown in \reffig{TwistedBand}, is totally geodesic.
Most of the evidence in support of \refconj{MenascoReid} is obtained by showing that any closed surface properly embedded in a particular type of knot complement must contain an accidental parabolic,\index{accidental parabolic} similar to \reflem{Meridional}. Thus there is interest in finding examples of essential surfaces without accidental parabolics.
If we consider surfaces with (parabolic) boundary, we already have most of the tools in place to prove the following.
\begin{theorem}\label{Thm:CheckerboardNotAccidental}
Let $K$ be a link with a connected, prime, reduced alternating diagram,\index{alternating diagram}\index{alternating knot or link!checkerboard surface} and let $\Sigma$ be one of its checkerboard surfaces. Then $\Sigma$ is not accidental.\index{accidental surface}\index{checkerboard surface!not accidental}
\end{theorem}
Before we prove the theorem, we need a definition and a lemma.
\begin{definition}\label{Def:ParabolicLocus}
Let $M$ be a hyperbolic 3-manifold, such that $M$ is the interior of a compact 3-manifold $\overline{M}$ with boundary.
The \emph{parabolic locus}\index{parabolic locus} $P$ of $M$ consists of the tori and annuli in $\partial\overline{M}$ such that each essential simple closed curve in $P$ represents a parabolic element\index{parabolic} of $\pi_1(M)\leq\operatorname{PSL}(2,{\mathbb{C}})$.
\end{definition}
\begin{lemma}\label{Lem:AccidentalAnnulus}
Suppose $S$ is a $\pi_1$-essential\index{$\pi_1$-essential} surface properly embedded in an irreducible,\index{irreducible} boundary irreducible\index{boundary irreducible} 3-manifold $M$, and suppose $S$ is accidental.\index{accidental surface} Then there is an essential annulus $A$ embedded in $M{\backslash \backslash} S$ with one boundary component on the parabolic locus\index{parabolic locus} $P$ of $M$ and one boundary component an essential closed curve on $\widetilde{S}$.
\end{lemma}
The proof of the lemma uses the annulus theorem of Jaco \cite[Theorem~VIII.13]{Jaco:3Manifold} stated below. Briefly, it ensures we can replace an immersion of an annulus into a compact 3-manifold with an embedding. For a proof of the annulus theorem, see \cite{Jaco:3Manifold}. Compare to \refthm{LoopTheorem}, the loop theorem.
\begin{theorem}[Annulus theorem]\label{Thm:AnnulusTheorem}\index{annulus theorem}\index{Jaco annulus theorem}
Let $M$ be a compact, irreducible\index{irreducible} 3-manifold with incompressible boundary. Suppose $f\from (A, \partial A) \to (M,\partial M)$ is a proper map, i.e.\ $f$ takes $\partial A$ to $\partial M$. Suppose also that $f$ is nondegenerate, i.e.\ that $f$ cannot be homotoped to the boundary of $M$. Then there exists an embedding $g\from (A, \partial A) \to (M,\partial M)$ that is nondegenerate. Furthermore, if the restriction of $f$ to $\partial A$ is an embedding, then $g$ may be chosen so that its restriction to $\partial A$ is the same embedding.
\end{theorem}
\begin{proof}[Proof of \reflem{AccidentalAnnulus}]
If $S$ is accidental,\index{accidental surface} then there exists a nontrivial closed curve on $S$ that is freely homotopic into $\partial M$ through $M$. Note if $S$ is nonorientable, then $\widetilde{S}$, the boundary of a regular neighborhood of $S$, is also accidental,\index{accidental surface} with accidental parabolic\index{accidental parabolic} a double cover of the curve on $S$.
So we may assume there is a nontrivial closed curve $\gamma$ on $\widetilde{S}$ that is freely homotopic into $\partial M$ through $M$. The free homotopy defines a map of an annulus $A'$ into $M$; one boundary component of $A'$ maps to $\gamma$ and the other maps to $\partial M$. Adjust $A'$ so that all intersections of its interior with $\widetilde{S}$ are transverse, and isotope $A'$ near $\gamma$, using a bicollar of $S$, so that a neighborhood of $\gamma$ in $A'$ meets $\widetilde{S}$ only in $\gamma$.
Now consider intersections of the interior of $A'$ with $\widetilde{S}$. Consider first a closed curve of intersection that bounds a disk on $A'$. Since $\widetilde{S}$ is incompressible\index{incompressible} (because $S$ is $\pi_1$-essential),\index{$\pi_1$-essential} an innermost such curve also bounds a disk in $\widetilde{S}$. Since $M$ is irreducible,\index{irreducible} the union of the disk on $A'$ and that on $\widetilde{S}$ bounds a ball in $M$, and we may isotope $A'$ through the ball to remove the intersection. Thus we may assume there are no closed curves of intersection that bound disks on $A'$. Suppose there is an arc of intersection $A'\cap\widetilde{S}$ with both endpoints on $\partial M$. This arc co-bounds a disk on $A'$ along with an arc on $\partial A'\subset \partial M$. Because $\widetilde{S}$ is boundary incompressible\index{boundary incompressible} and $M$ is boundary irreducible,\index{boundary irreducible} an innermost such arc may be isotoped away. So we assume there are no such arcs of intersection. Finally, because a neighborhood of $\gamma$ in $A'$ meets $\widetilde{S}$ only in $\gamma$, there are no arcs of intersection of $A'\cap\widetilde{S}$ with an endpoint on $\gamma$. Thus there are no arcs of intersection of $A'\cap\widetilde{S}$. The only remaining possibility is that $A'\cap \widetilde{S}$ is a collection of essential closed curves on $A'$.
Apply a homotopy to minimize the number of closed curves of intersection. There is a sub-annulus $A''\subset A'$ that is outermost: it has one boundary component on $\partial M$ and one on $\widetilde{S}$ and interior disjoint from $\widetilde{S}$. Thus we may consider $A''$ as an immersion of an annulus into $M{\backslash \backslash}\widetilde{S}$. It is nondegenerate, else we could have reduced the number of closed curves of intersection of $A'$. Now we apply \refthm{AnnulusTheorem}, the annulus theorem.\index{annulus theorem}\index{Jaco annulus theorem} There exists a nondegenerate embedding of an annulus $A$ into $M{\backslash \backslash} \widetilde{S}$ with one boundary component on $\widetilde{S}$ and one on the parabolic locus\index{parabolic locus} $\partial M$. To finish the proof, we need to show that the embedding lies in $M{\backslash \backslash} S$.
Note that $M{\backslash \backslash} \widetilde{S}$ consists of two components, one homeomorphic to $M{\backslash \backslash} S$ and one to a regular neighborhood of $S$. The regular neighborhood of $S$ only meets $\partial M$ in a neighborhood of $\partial S$. Since $\partial A$ has a component on $\partial M$, if $A$ is embedded in the neighborhood of $S$, it has a boundary component running parallel to $\partial S$ on $\partial M$. But then the retraction of $A$ to $S$ defines a free homotopy of the closed curve $\partial A\cap \widetilde{S}$ into $\partial S$, contradicting the definition of accidental.\index{accidental surface} Thus $A$ is embedded in the component of $M{\backslash \backslash} \widetilde{S}$ that is homeomorphic to $M{\backslash \backslash} S$.
\end{proof}
\begin{proof}[Proof of \refthm{CheckerboardNotAccidental}]
Let $\Sigma$ be a checkerboard surface, and suppose by way of contradiction that it is accidental.\index{accidental surface} Then by \reflem{AccidentalAnnulus}, there exists an essential annulus $A$ embedded in $(S^3-N(K)){\backslash \backslash}\Sigma$ with one boundary component on $\widetilde{\Sigma}$ and one on $\partial N(K)$.
Consider the bounded polyhedral decomposition of $(S^3-N(K)){\backslash \backslash}\Sigma$ (\reflem{BoundedPolyAlt}).
By \refthm{BddNormalForm}, we may take $A$ to be in normal form\index{normal form}\index{normal} with respect to the polyhedra. Note that because components of $\partial A$ lie entirely in $\widetilde{\Sigma}$ and in $\partial N(K)$, respectively, there are no arcs of $\partial A$ intersecting a boundary edge adjacent to a surface face. Thus by \reflem{BddGaussBonnet}, the combinatorial area\index{combinatorial area} of $A$ is $0$. It follows that each normal disk\index{normal} of $A$ has combinatorial area $0$. This is possible only if the disk has one of three forms: each normal disk either meets exactly two boundary faces and no edges, or it meets exactly one boundary face and exactly two edges, or it meets exactly four edges. Because one component of $\partial A$ lies on $\partial N(K)$, there must be a normal disk meeting a boundary face. If the normal disk meets two boundary faces, then there is an arc of intersection of $A$ with a white face that runs from the boundary component of $\partial A$ on $\partial N(K)$ back to the same boundary component, cutting off a disk in $A$. Because the white checkerboard surface is boundary incompressible,\index{boundary incompressible} such an arc bounds a disk on the white checkerboard surface. By normality, the disk cannot be contained in a single white face: the arc would run from one boundary edge back to the same edge. But an innermost intersection with a shaded face would give a crossing arc\index{crossing arc} cutting off a boundary compression disk\index{boundary compression disk} for the link. This is also impossible in a reduced diagram.
So we may assume that there is a normal disk of $A$ meeting exactly one boundary face and exactly two edges of the polyhedron. On $A$, such a disk runs from the component $\partial A\cap \partial N(K)$ of $\partial A$ to $\partial A\cap\widetilde{\Sigma}$, which is the other component of $\partial A$.
Then \reflem{21Annulus} implies that all normal disks\index{normal} of $A$ have this form, and that $K$ is a $(2,q)$-torus link. The surface $\Sigma$ is the annulus or M\"obius band lying between the two strands of the link. Moreover, the boundary component $\gamma$ of $\partial A$ on $\widetilde{\Sigma}$ must run parallel to the core of this surface, and hence is boundary parallel in $\widetilde{\Sigma}$. But then $\gamma$ is not an accidental parabolic.\index{accidental parabolic} This is a contradiction.
\end{proof}
\section{Fibers and semifibers}
Consider again the limit set of a group $\rho(\pi_1(S))$, where $S$ is a surface embedded in a hyperbolic 3-manifold $M$ and $\rho\from \pi_1(M)\to\operatorname{PSL}(2,{\mathbb{C}})$ is the holonomy\index{holonomy} representation. \Reffig{Fractal} shows limit sets of Fuchsian and quasifuchsian examples.\index{quasifuchsian surface}\index{surface!quasifuchsian} There is an additional option: the limit set of a discrete group isomorphic to $\pi_1(S)$ might be a space-filling curve. In this section, we will analyze surfaces with this property. First we present some topological definitions.
\begin{definition}\label{Def:Fiber}
Let $S$ be a surface properly embedded in a 3-manifold $M$. We say $S$ is a \emph{fiber}\index{fiber}\index{surface!fiber} if $M$ can be written as a fiber bundle over $S^1$, with fiber the surface $S$. Equivalently, there is a homeomorphism $f\from S\to S$ such that $M$ is the mapping torus
\[ M \cong (S\times [0,1])/ \big( (x,0)\sim(f(x),1) \big). \]
\end{definition}
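For example, the complement of the figure-8 knot fibers over $S^1$ with fiber a once-punctured torus; the monodromy may be taken to act on the first homology of the fiber by the matrix $\mat{2 & 1 \\ 1 & 1}$. This fiber appears again in the example below.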
\begin{definition}\label{Def:IBundle}
An \emph{$I$-bundle}\index{$I$-bundle} is a 3-manifold that is an interval bundle over a surface $S$, possibly with boundary; all the $I$-bundles we consider have orientable total space. The \emph{vertical boundary}\index{vertical boundary}\index{$I$-bundle!vertical boundary} is the part of the boundary lying over $\partial S$; note it is a collection of annuli. The \emph{horizontal boundary}\index{horizontal boundary}\index{$I$-bundle!horizontal boundary} is the rest of the boundary, meeting each interval fiber in its endpoints. If $S$ is orientable, the bundle is the product $S\times I$, and the horizontal boundary consists of the disjoint union of $S\times\{0\}$ and $S\times\{1\}$. If $S$ is nonorientable, we say that the $I$-bundle is \emph{twisted},\index{twisted $I$-bundle}\index{$I$-bundle!twisted} and the horizontal boundary is connected, homeomorphic to the oriented double cover of $S$. We often denote a twisted $I$-bundle by $S\widetilde{\times} I$. See \refex{TwistedIBundle}.
\end{definition}
\begin{definition}\label{Def:Semifiber}
A surface $S$ properly embedded in a 3-manifold $M$ is a \emph{semifiber}\index{semifiber}\index{surface!semifiber} if it is either a fiber, or if $S$ is the horizontal boundary of a twisted $I$-bundle $S'\widetilde{\times} I$ over a nonorientable surface $S'$, and $M$ is obtained by gluing two copies of this $I$-bundle via the identity map on $S$. In the latter case, sometimes $S$ is called a \emph{strict semifiber}.\index{strict semifiber}\index{surface!strict semifiber}
\end{definition}
A strict semifiber is an example of a \emph{virtual fiber}, defined below. See \refex{SemifiberVirtual}.
\begin{definition}\label{Def:VirtualFiber}
A surface $S$ properly embedded in a 3-manifold $M$ is called a \emph{virtual fiber}\index{virtual fiber}\index{fiber!virtual} if there is a finite index cover of $M$ in which $S$ lifts to a fiber.
\end{definition}
The following is due to Thurston \cite{thurston} and Bonahon \cite{Bonahon:Ends}. See also \cite{CanaryEpsteinGreen:Notes}.
\begin{theorem}\label{Thm:Trichotomy}
Let $S$ be an essential\index{essential} surface in a hyperbolic 3-manifold $M$. Then $S$ has exactly one of three forms:
\begin{enumerate}
\item $S$ is Fuchsian\index{Fuchsian} or quasifuchsian,\index{quasifuchsian surface}\index{surface!quasifuchsian}
\item $S$ is accidental,\index{accidental surface} or
\item $S$ is a virtual fiber.
\end{enumerate}
\end{theorem}
The proof of the theorem is obtained by analyzing surfaces that are not accidental and whose limit set is neither a geometric circle nor a topological circle.
\begin{example}\label{Example:Fig8Fiber}
The figure-8 knot complement contains a surface that is a fiber, namely the punctured torus shown in \reffig{Fig8Seifert}.
\begin{figure}
\includegraphics{Figures/Ch12_Quasifuch/F12-04-Seifrt.eps}
\caption{The Seifert surface of the figure-8 knot.}
\label{Fig:Fig8Seifert}
\end{figure}
A portion of the limit set of this surface was computed by S.~Schleimer, following W.~Thurston, and is shown in \reffig{Fig8Fiber2D}.
\begin{figure}
\includegraphics{Figures/Ch12_Quasifuch/F12-05-LimitS.eps}
\caption{The limit set of the Seifert surface of the figure-8 knot, created by S.~Schleimer.}
\label{Fig:Fig8Fiber2D}
\end{figure}
Its lift to the universal cover, given a pleating, is shown in \reffig{Fig8Fiber3D}, due to S.~Schleimer and H.~Segerman.
\begin{figure}
\includegraphics{Figures/Ch12_Quasifuch/F12-06-3DPl.eps}
\caption{The lift of the figure-8 knot Seifert surface to the universal cover ${\mathbb{H}}^3$, with a pleating. Created by S.~Schleimer and H.~Segerman.}
\label{Fig:Fig8Fiber3D}
\end{figure}
Another view of the surface is shown in \reffig{Fig8Fiber3D2}, due to D.~Bachman, S.~Schleimer, and H.~Segerman. Note that in \reffig{Fig8Fiber3D}, the cusps of the surface have been cut off to show a larger view of the pleating. On the other hand, \reffig{Fig8Fiber3D2} gives a better view of the surface near infinity, without cusps cut off.
\begin{figure}
\includegraphics{Figures/Ch12_Quasifuch/F12-07-PlSmall.eps}
\caption{More of the lift of the figure-8 knot Seifert surface to ${\mathbb{H}}^3$, without cusps cut off. Created by D.~Bachman, S.~Schleimer and H.~Segerman.}
\label{Fig:Fig8Fiber3D2}
\end{figure}
\end{example}
We will be dealing only with embedded surfaces. In the case that a surface is embedded, the virtual fiber case of the trichotomy reduces to a simpler situation.
\begin{lemma}\label{Lem:TrichotomyEmbedded}
Suppose $S$ is a properly embedded surface in a 3-manifold $M$. Then $S$ is a virtual fiber if and only if $S$ is a semifiber.
\end{lemma}
\begin{proof}
\Refex{TrichotomyEmbedded}.
\end{proof}
As an additional example of a fibered surface in a link complement, consider the checkerboard surfaces of the $(2,q)$-torus knot or link.
\begin{lemma}\label{Lem:2qTorusFiber}
One of the checkerboard surfaces of a standard diagram of a $(2,q)$-torus knot or link is a fiber.\index{checkerboard surface}\index{checkerboard surface!fiber}
\end{lemma}
\begin{proof}
One of the checkerboard surfaces is an annulus or M\"obius band running between the two strands of the link. Let that be the white checkerboard surface. We will show the shaded surface $\Sigma$ is a fiber. Note that $\Sigma$ is orientable, as it is built of two disks with a sequence of (singly) twisted bands between them. Thus the cut manifold $(S^3-N(K)){\backslash \backslash}\Sigma$ has boundary consisting of parabolic locus\index{parabolic locus} and $\widetilde{\Sigma}$, which is two copies of $\Sigma$ in the orientable case.
The surface $\Sigma$ is a semifiber if and only if the cut manifold $(S^3-N(K)){\backslash \backslash} \Sigma$ is an $I$-bundle:\index{$I$-bundle}
\[ (S^3-N(K)){\backslash \backslash} \Sigma \cong \Sigma \times I. \]
Consider the polyhedral decomposition of the cut manifold in this case. Each of the two polyhedra consists of a chain of adjacent white bigons\index{bigon} along with two shaded disks, one inside and one outside the chain of bigons. Note each of these polyhedra is an $I$-bundle of the form $D\times I$, where $D$ is a shaded disk and $D\times\{0\}$ and $D\times\{1\}$ are the two shaded faces. White bigon faces are of the form $\alpha_i\times I$, where $\alpha_i$ is an arc with endpoints on edges of the polyhedron (shaded faces). The parabolic locus\index{parabolic locus} consists of boundary squares, which are also products $\mbox{arc}\times I$, parallel to $\alpha_i\times I$ on their sides meeting white faces, with endpoints of arcs on shaded faces.
To obtain $(S^3-N(K)){\backslash \backslash} \Sigma$, glue white faces. The gluing takes each bigon\index{bigon} face $\alpha_i\times I$ to another bigon face $\alpha_j\times I$, matching the $I$-bundle structure. Thus $(S^3-N(K)){\backslash \backslash} \Sigma$ is an $I$-bundle. So $\Sigma$ is a semifiber.\index{semifiber}
To see that $\Sigma$ is actually a fiber, note that the gluing of the two polyhedra along white faces matches $D_1\times\{0\}$ in one polyhedron to $D_2\times\{0\}$ in the other, and $D_1\times\{1\}$ to $D_2\times\{1\}$. Thus the horizontal boundary of the $I$-bundle has two components, so it is not an $I$-bundle\index{$I$-bundle} over a nonorientable surface, and $\Sigma$ cannot be a strict semifiber.
\end{proof}
\begin{theorem}\label{Thm:AltFiber}
Let $K$ be a knot or link with a connected, twist-reduced, prime, alternating diagram,\index{alternating diagram}\index{alternating knot or link} and let $\Sigma$ be an associated checkerboard surface. Then $\Sigma$ is a semifiber if and only if $K$ is a $(2,q)$-torus link and $\Sigma$ is the checkerboard surface of \reflem{2qTorusFiber} that is a fiber. \index{checkerboard surface}\index{checkerboard surface!fiber}
\end{theorem}
Before proving the theorem, we give a lemma. Its proof is very similar to that of Lemma~4.17 of \cite{fkp:guts}; see also \cite{HowiePurcell}.
\begin{lemma}\label{Lem:ProductRectangles}
Let $K$ be a knot or link as in the statement of \refthm{AltFiber}, and let $\Sigma$ be its shaded checkerboard surface.
Let $B$ be an $I$-bundle\index{$I$-bundle} embedded in $M_\Sigma = (S^3-N(K)){\backslash \backslash} \Sigma$, with horizontal boundary on $\widetilde{\Sigma}$, and suppose the vertical boundary of $B$ is essential. Let $W$ be a white face of the polyhedral decomposition of the cut manifold. Then $B\cap W$ is isotopic in $M_\Sigma$ to a collection of product rectangles $\alpha\times I$, where $\alpha\times\{0\}$ and $\alpha\times\{1\}$ are arcs of ideal edges on the boundary of $W$.
\end{lemma}
\begin{proof}
First suppose $B=Q\times I$ is a product $I$-bundle\index{$I$-bundle} over an orientable base.
Consider a component of $\partial(B\cap W)$. If it lies entirely in the interior of $W$, then it lies in the vertical boundary $V=\partial Q\times I$.
The intersection $V\cap W$ then contains a closed curve component; an innermost one bounds a disk in $W$. Since the vertical boundary is essential, we may isotope $B$ to remove such intersections. So assume each component of $\partial(B\cap W)$ meets $\widetilde{\Sigma}$. Note that it follows that each component of $B\cap W$ is a disk.
Note $W\cap \widetilde{\Sigma}$ consists of ideal edges on the boundary of the face $W$. It follows that the boundary of each component of $B\cap W$ consists of arcs $\alpha_1$, $\beta_1$, $\dots$, $\alpha_n$, $\beta_n$ with $\alpha_i$ an arc in an ideal edge of $W\cap \widetilde{\Sigma}$ and $\beta_i$ in the vertical boundary of $B$, in the interior of $W$.
We may assume that each arc $\beta_i$ runs between distinct ideal edges, else isotope $B$ through the disk bounded by $\beta_i$ and an ideal edge to remove $\beta_i$, and merge $\alpha_i$ and $\alpha_{i+1}$.
We may assume that $\beta_i$ runs from $Q\times\{0\}$ to $Q\times\{1\}$, for if not, then $\beta_i \subset W$ is an arc from $\partial Q\times\{1\}$ to $\partial Q\times\{1\}$, say, in an annulus component of $\partial Q \times I$. Such an arc bounds a disk in $\partial Q\times I$. This disk has boundary consisting of the arc $\beta_i$ in $W$ and an arc on $\partial Q\times\{1\} \subset\widetilde{\Sigma}$. If the disk were essential, it would give a contradiction to \refprop{NoNormalBigons}. So it is inessential, and we may isotope $B$ to remove $\beta_i$, merging $\alpha_i$ and $\alpha_{i+1}$.
Finally we show that $n=2$, i.e.\ that each component of $B\cap W$ is a quadrilateral with arcs $\alpha_1, \beta_1, \alpha_2, \beta_2$. For if not, there is an arc $\gamma\subset W$ with endpoints on $\alpha_1$ and $\alpha_3$. By sliding along the disk $W$, we may isotope $B$ so $\gamma$ lies in $B\cap W$. Then note that $\gamma$ lies in $Q\times I$ with endpoints on $Q\times\{1\}$. It must be parallel vertically to an arc $\delta\subset Q\times\{1\} \subset \widetilde{\Sigma}$. This gives another disk with boundary consisting of an arc on $W$ and an arc on $\widetilde{\Sigma}$. Either the disk contradicts \refprop{NoNormalBigons}, or $\alpha_1$ and $\alpha_3$ lie on the same ideal edge in the boundary of $W$. But then $\beta_1$ and $\beta_2$ are arcs running from $\alpha_2$ on one ideal edge on the boundary of $W$ to the same ideal edge on the boundary of $W$ containing $\alpha_1$ and $\alpha_3$. The only way that the boundary of this component of $B\cap W$ bounds a disk in $W$ is if $n=3$, and $\beta_3$ runs from an endpoint of $\alpha_1$ to an endpoint of $\alpha_3$. But then $\beta_3$ runs from $Q\times\{1\}$ to $Q\times\{1\}$, which we ruled out in the previous paragraph. So $n=2$ and $B\cap W$ is a product rectangle $\alpha_1\times I$.
Next suppose $B$ is a twisted $I$-bundle\index{$I$-bundle!twisted} $B=Q \widetilde{\times} I$ where $Q$ is non-orientable. Let $\gamma_1, \dots, \gamma_m$ be a maximal collection of disjoint orientation reversing closed curves on $Q$. Let $A_i\subset B$ be the $I$-bundle over $\gamma_i$. Each $A_i$ is a M\"obius band. The bundle $B_0 = B\setminus (\cup_i A_i)$ is then a product bundle $B_0 = Q_0\times I$, where $Q_0=Q\setminus (\cup_i \gamma_i)$ is an orientable surface. Our work above then implies that $B_0\cap W$ is a collection of product rectangles for each white face $W$. To obtain $B\cap W$, we attach the vertical boundary of such a product rectangle to the vertical boundary of a product rectangle of $A_i$. This procedure respects the product structure of all rectangles, hence the result is a product rectangle.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{Thm:AltFiber}]
One direction is \reflem{2qTorusFiber}: If the link diagram is the standard diagram of a $(2,q)$-torus link, then a checkerboard surface is a fiber.
Conversely, if the checkerboard surface $\Sigma$ is a semifiber, then $M_\Sigma = (S^3-N(K)){\backslash \backslash} \Sigma$ is an $I$-bundle.\index{$I$-bundle} In this case, \reflem{ProductRectangles} implies that this $I$-bundle intersects each white face $W$ in a collection of product rectangles of the form $\alpha\times I$, where $\alpha\times\{0\}$ and $\alpha\times\{1\}$ lie on ideal edges of $W$. Since $W \subset M_\Sigma$, the entire face $W$ is a product rectangle, with exactly two ideal edges $\alpha\times\{0\}$ and $\alpha\times\{1\}$. Thus $W$ is a bigon.\index{bigon} So every white face is a bigon, and the diagram of $K$ is a chain of bigons lined up end to end; that is, $K$ is a $(2,q)$-torus link. The white checkerboard surface is obtained by gluing sides of those bigons, and so forms the annulus or M\"obius band between the two strands of the link. The shaded checkerboard surface must therefore be the fiber of \reflem{2qTorusFiber}.
\end{proof}
\begin{corollary}\label{Cor:Quasifuchsian}
Let $K$ be a knot or link with a connected, twist-reduced, prime, alternating diagram,\index{alternating diagram}\index{alternating knot or link}\index{alternating knot or link!checkerboard surface} and let $\Sigma$ be an associated checkerboard surface. If $K$ is hyperbolic, then $\Sigma$ is quasifuchsian.\index{quasifuchsian surface}\index{surface!quasifuchsian}\index{checkerboard surface!quasifuchsian}
\end{corollary}
\begin{proof}
If $K$ is hyperbolic, it cannot be a $(2,q)$-torus link. Then \refthm{AltFiber} implies that $\Sigma$ cannot be a semifiber. Because $\Sigma$ is an embedded surface, \reflem{TrichotomyEmbedded} implies that $\Sigma$ is not a virtual fiber. \Refthm{CheckerboardNotAccidental} implies that $\Sigma$ is not accidental.\index{accidental surface} By \refthm{Trichotomy}, it must be quasifuchsian.
\end{proof}
\section{Exercises}
\begin{exercise}\label{Ex:BeltTangleIncompress}
(Easy) Show that the 3-punctured sphere bounded by the crossing circle in a hyperbolic belted tangle\index{belted tangle} must be incompressible.\index{incompressible}
\end{exercise}
\begin{exercise}
A chain link is a link that has the form of a circular chain, as in \reffig{ChainLink}, left.\index{chain link} Note that the link components of the chain can be twisted. We define the minimally twisted chain link with an even number of components to be the chain link with every other link component lying flat in the plane of projection, and the remaining link components perpendicular to the plane of projection.
A minimally twisted chain link may be augmented by adding a crossing circle encircling the circular chain, as in \reffig{ChainLink}, right.
\begin{figure}
\includegraphics{Figures/Ch12_Quasifuch/F12-08-Chain.eps}
\caption{Left: The minimally twisted chain link with eight link components. Right: the augmented minimally twisted chain link.}
\label{Fig:ChainLink}
\end{figure}
\begin{enumerate}
\item Using belted sums\index{belted sum} and the volume of the Whitehead link, find the volume of any augmented minimally twisted chain link with an even number of chain components.
\item Find a belted tangle\index{belted tangle} $T$ such that repeatedly taking belted sums\index{belted sum} of $T$ with the Whitehead link gives the augmented minimally twisted chain link with an odd number of chain components. What is the volume of the augmented minimally twisted chain link with an odd number of chain components?
\end{enumerate}
\end{exercise}
\begin{exercise}\label{Ex:Software}
Following Yamashita's instructions, create a Python program that visualizes the limit set as the hyperbolic structure on a punctured torus is varied. Print an example with a Fuchsian limit set, and three examples of quasifuchsian limit sets. See \cite{Yamashita:Software}.
\end{exercise}
\begin{exercise}\label{Ex:TwistedIBundle}
Suppose $S'$ is a closed nonorientable surface. Consider the twisted $I$-bundle over $S'$, denoted $S'\widetilde{\times}I$. Prove that its boundary is a closed orientable surface $S$ homeomorphic to the oriented double cover of $S'$.
\end{exercise}
\begin{exercise}\label{Ex:SemifiberVirtual}
Prove that a strict semifiber is a virtual fiber.
\end{exercise}
\begin{exercise}
Prove that a nonorientable surface can never be a fiber in a link complement $S^3-L$. That is, there are no strict semifibers for links in $S^3$.
\end{exercise}
\begin{exercise}\label{Ex:TrichotomyEmbedded}
Prove \reflem{TrichotomyEmbedded}: that a properly embedded surface that is a virtual fiber in a 3-manifold must be a semifiber.
\end{exercise}
\part{Hyperbolic Knot Invariants}\label{Part:Invariants}
\chapter{Estimating Volume}\label{Chap:Volume}
\blfootnote{Jessica S. Purcell, Hyperbolic Knot Theory}
We have seen that hyperbolic 3-manifolds have finite volume if and only if they are compact or the interior of a compact manifold with finitely many torus boundary components (\refthm{FteVolIffTorusBdy}).
However, it is not completely straightforward to estimate volumes of large classes of manifolds, including knot complements. There are many open questions concerning the relationship of volume of a hyperbolic manifold to other invariants, such as knot invariants.
In this chapter, we discuss different ways to estimate volumes of hyperbolic 3-manifolds that are defined topologically or combinatorially, such as knot complements.
\section{Summary of bounds encountered so far}
\subsection{Upper bounds}
It is usually an easier problem to give upper bounds on the volume of a hyperbolic 3-manifold than lower bounds, although there are exceptions, especially when sharp upper bounds are needed. Here we review two methods we have already encountered that can give upper bounds on volume.
\subsubsection{Volume bounds from polyhedra}
Recall from \refthm{MaxVolTet} that the maximal volume tetrahedron is the regular ideal tetrahedron. Its volume is the value ${v_{\rm tet}} := 3\Lambda(\pi/3) = 1.0149\dots$. In various chapters, we have found decompositions of several different knot and link complements into ideal tetrahedra. The volume of such a knot or link complement is therefore bounded above by ${v_{\rm tet}}$ times the number of tetrahedra in the decomposition.
For example, this can be used to show the following theorem, originally proved by Agol and D.~Thurston in the appendix to \cite{lackenby:alt-volume}.
\begin{theorem}\label{Thm:FullyAugUpperBound}
A fully augmented link\index{fully augmented link} $L$ with $t(L)$ crossing circles has volume at most $10{v_{\rm tet}}(t(L)-1)$.\index{fully augmented link!volume bound}
\end{theorem}
\begin{proof}
In \refchap{TwistKnots}, we saw that a fully augmented link has a decomposition into two right angled ideal polyhedra $P_1$ and $P_2$, with white and shaded faces, where shaded faces are ideal triangles\index{ideal triangle} coming from 2-punctured disks bounded by crossing circles, and white faces come from the plane of projection.
For the polyhedron $P_1$, add a finite vertex $v_1$ in the interior and cone to the faces of the polyhedra. Do the same for $P_2$, adding vertex $v_2$ and coning. Each shaded triangle in $\partial P_1$ gives rise to a tetrahedron. There are two shaded triangles per crossing circle in each of the two polyhedra, so $4t(L)$ tetrahedra arise in this way.
The white faces are coned to pyramids. Glue a pair of pyramids in $P_1$ and $P_2$ together across a matching white face, and perform stellar subdivision. That is, add an edge running from the finite vertex in one polyhedron through the center of the face to the finite vertex in the other polyhedron, then add triangles around the edge to divide the pyramids into tetrahedra. If the face has $d$ edges, it is subdivided into $d$ tetrahedra. Note each crossing circle contributes six edges to each polyhedron. Thus the total number of edges of the white faces will be $6t(L)$, and thus the white faces contribute $6t(L)$ tetrahedra to the decomposition.
This gives us $10t(L)$ tetrahedra, but these have finite vertices. We can improve the bound by choosing an ideal vertex $w_1$ in $P_1$, and collapsing the edge from $w_1$ to $v_1$. Similarly, choose the corresponding vertex $w_2$ in $P_2$, and collapse the edge from $w_2$ to $v_2$. Now simplify the triangulation by collapsing monogons to vertices, bigons\index{bigon} to a single edge, and parallel triangles to a single triangle. Note that all tetrahedra adjacent to $w_1$ and $w_2$ are collapsed to triangles under this procedure. We count the number of these.
The ideal vertex $w_1$ is adjacent to two shaded triangles and two white faces. The white faces each have at least three edges, and so give rise to at least three tetrahedra to be collapsed, running between the two polyhedra. Thus there are at least six such tetrahedra arising from white faces. Each shaded face gives rise to one tetrahedron to be collapsed in each $P_i$, or four total. Thus there is an ideal triangulation with at most $10t(L)-10$ tetrahedra.
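In other words, the count of tetrahedra is at most
\[ 4t(L) + 6t(L) - (6+4) = 10\,t(L) - 10 = 10\big(t(L)-1\big), \]
matching the bound in the statement of the theorem.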
Now apply \refthm{MaxVolTet}. The volume of each of the $10t(L)-10$ tetrahedra is at most ${v_{\rm tet}}$. Thus the volume of the fully augmented link is at most $10{v_{\rm tet}}(t(L)-1)$.
\end{proof}
The bound of \refthm{FullyAugUpperBound} is \emph{asymptotically sharp},\index{asymptotically sharp volume bound} in the sense that there is a sequence of fully augmented links whose volumes approach the upper bound; this is proved in the first part of \refex{AsympSharpFullyAug}.
\subsubsection{Dehn filling}
Recall Thurston's theorem on volume change under Dehn filling,\index{Dehn filling} \refthm{VolumeDF}: If $M$ is hyperbolic with cusps $C_1, \dots, C_n$, and $s_1, \dots, s_n$ are slopes, one on each $\partial C_j$, such that $M(s_1, \dots, s_n)$ is hyperbolic, then
\[ \operatorname{vol}(M)>\operatorname{vol}(M(s_1, \dots, s_n)). \]
This result can be combined with the previous to give an upper bound on the volume of knots in terms of the twist number, first observed in \cite{lackenby:alt-volume}.
\begin{theorem}\label{Thm:VolUpperTwistRegions}
Suppose $K$ is a knot or link with a prime, twist-reduced diagram with twist number ${\operatorname{tw}}(K)\geq 2$. Then $S^3-K$ is hyperbolic, and the volume of $S^3-K$ satisfies
\[ \operatorname{vol}(S^3-K) < 10{v_{\rm tet}}({\operatorname{tw}}(K)-1). \]
\end{theorem}
Again the bound of \refthm{VolUpperTwistRegions} is asymptotically sharp;\index{asymptotically sharp volume bound} this is proved in the second part of \refex{AsympSharpFullyAug}.
\begin{proof}[Proof of \refthm{VolUpperTwistRegions}]
Because the diagram of $K$ is prime and twist-reduced, when we fully augment $K$ by adding a crossing circle to each twist region, and then remove pairs of crossings to form a fully augmented link,\index{fully augmented link} the resulting link $L$ is hyperbolic; see \reflem{TwReducedGivesReducedAug} and \reflem{ExistsRightAngled}. It will have $t(L)={\operatorname{tw}}(K)$ crossing circles. By \refthm{FullyAugUpperBound}, the volume of the fully augmented link is at most $10{v_{\rm tet}}({\operatorname{tw}}(K)-1)$.
Now, we obtain $S^3-K$ from $S^3-L$ by Dehn filling the crossing circles, filling the $i$-th one along a slope $1/n_i$ where $n_i$ is an integer such that $2n_i$ crossings were removed at that twist region to go from the diagram of $K$ to that of $L$. By Thurston's theorem on volume change under Dehn filling, \refthm{VolumeDF},
\[ \operatorname{vol}(S^3-K) < \operatorname{vol}(S^3-L) \leq 10{v_{\rm tet}}({\operatorname{tw}}(K)-1). \qedhere \]
\end{proof}
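For example, the standard diagram of the figure-8 knot is prime and twist-reduced with ${\operatorname{tw}}(K)=2$, and the figure-8 knot complement decomposes into two regular ideal tetrahedra, so its volume is $2{v_{\rm tet}} \approx 2.03$, well below the bound $10{v_{\rm tet}}({\operatorname{tw}}(K)-1) = 10{v_{\rm tet}} \approx 10.15$.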
\subsection{Lower bounds via angle structures}
In addition to the results above that lead to upper bounds on volume, we have also built tools that give lower bounds on hyperbolic volume in special cases, in particular when the manifold admits an angle structure.\index{angle structure} Recall \refthm{VolAngleStructs}: if the maximum of the volume functional\index{volume functional} over the set of all angle structures on a manifold $M$ occurs in the interior of the set of angle structures, then that angle structure gives the unique complete hyperbolic metric on $M$. We have the following corollary.
\begin{corollary}\label{Cor:VolAngleStructs}
Suppose $M$ is an orientable 3-manifold with boundary consisting of tori, with an ideal triangulation $\mathcal{T}$. Suppose that the set of angle structures\index{angle structure} $\mathcal{A}(\mathcal{T})$ is nonempty, and that the volume functional\index{volume functional} takes its maximum on the interior of the set of all angle structures $\mathcal{A}(\mathcal{T})$. Let $A$ be any structure in the closure of $\mathcal{A}(\mathcal{T})$. Then $M$ is hyperbolic, and $\operatorname{vol}(M) \geq \mathcal{V}(A)$.
\end{corollary}
\begin{proof}
The fact that $M$ is hyperbolic is a consequence of \refthm{HypAngleStruct}.
The volume functional\index{volume functional} is strictly concave down by \refthm{VolConcaveDown}, and is uniquely maximized at the complete hyperbolic structure in the interior by \refthm{VolAngleStructs} and \refthm{VolAngleStructsConverse}. Thus any structure in the closure of $\mathcal{A}(\mathcal{T})$ gives a volume at most that of $M$.
\end{proof}
\Refcor{VolAngleStructs} was used by Futer and Gu{\'e}ritaud to obtain bounds on the volumes of 2-bridge knots in terms of their continued fraction decompositions \cite{GueritaudFuter:2bridge}. They proved the following.
\begin{theorem}\label{Thm:Vol2Bridge}
Let $K$ be a hyperbolic 2-bridge knot or link\index{2-bridge knot or link}\index{2-bridge knot or link!volume bound} with a reduced alternating diagram\index{alternating diagram} with ${\operatorname{tw}}(K)$ twist regions. Then
\[ \operatorname{vol}(S^3-K) > 2{v_{\rm tet}} {\operatorname{tw}}(K) - 2.7066. \]
Moreover, the lower bound is asymptotically sharp.\index{asymptotically sharp volume bound}
\end{theorem}
\begin{proof}
We may suppose that the diagram of $K$ is determined by a continued fraction $[0,a_{n-1}, \dots, a_1]$, where ${\operatorname{tw}}(K)=n-1$, as in \refdef{2BridgeNotation}. The link complement has a geometric triangulation\index{geometric triangulation} discussed in \refchap{TwoBridge}, determined by real numbers $(z_1, z_2, \dots, z_{C-2},z_{C-1})$, where $C$ is the crossing number of the diagram of $K$; see the proof of \refprop{2BridgeAngleNonempty}. We will choose explicit values of the $z_i$ that give a structure in the boundary of the space of angle structures.\index{angle structure} By \refcor{VolAngleStructs} the volume of the result gives a lower bound on the actual hyperbolic volume.
First assume that ${\operatorname{tw}}(K)\geq 3$, so $n\geq 4$.
We let $z_1 = z_{C-1}=0$, and we will choose $z_i=\pi/3$ for indices $i$ such that $a_1\leq i \leq C-a_{n-1}$.
These choices satisfy the hinge equation of \refeqn{2BridgeZConditions}: $|z_{i+1}-z_{i-1}| = 0<\pi-z_i=2\pi/3$, for appropriate $i$. They do not satisfy the strict inequality of the convexity equation of \refeqn{2BridgeZConditions}, but only satisfy the weak inequality: $2z_i \leq z_{i-1}+z_{i+1}$. When $a_1<i<C-a_{n-1}$, these will assign values to $x_i$ and $y_i$ using \reftable{LabelConditions}. Note the angles will take values of $\pi/3$ in the hinge case, but $2\pi/3$ and $0$ in the non-hinge case, and thus there will be flat tetrahedra. However, our choices so far give rise to a structure on the boundary of $\mathcal{A}(\mathcal{T})$, which will be sufficient for our purposes. Each hinge index contributes volume ${v_{\rm tet}}$ to the structure, while non-hinges contribute nothing to volume. Note hinges occur between twist regions; there are ${\operatorname{tw}}(K)-3$ hinge indices between $a_1$ and $C-a_{n-1}$.
We cannot choose $z_i=\pi/3$ for all the indices in the first and last fans, else even the weak inequality of the convexity equations \eqref{Eqn:2BridgeZConditions} will not be satisfied near $i=1$ or $i=C-1$. Instead, in the first and last fans, interpolate between $0$ and $\pi/3$ in a way that satisfies the weak versions of \refeqn{2BridgeZConditions}. Then again angles $x_i$ and $y_i$ will be determined by \reftable{LabelConditions}. At the hinge indices $i=a_1$ or $i=C-a_{n-1}$, the angles will be:
\[ \frac{\pi}{2} - z_{i-1}, \quad \frac{\pi}{6}+z_{i-1}, \quad \frac{\pi}{3}. \]
The volume defined by these angles is smallest when $z_{i-1}=0$, which occurs when the three angles are $\pi/2$, $\pi/6$, $\pi/3$, and the volume is $0.84578\dots$. Thus the four tetrahedra $T_{a_1}^1$, $T_{a_1}^2$, $T_{C-a_{n-1}}^1$, and $T_{C-a_{n-1}}^2$ each have volume at least $0.84578$, and the volume of this structure satisfies
\[ \mathcal{V} > 2{v_{\rm tet}} ({\operatorname{tw}}(K)-3) + 4(0.84578) > 2{v_{\rm tet}} {\operatorname{tw}}(K) - 2.7066. \]
Finally, check that when ${\operatorname{tw}}(K) = 2$, the bound $\mathcal{V} > 2(0.84578)$ still satisfies the theorem: indeed $2(0.84578) \approx 1.69$ is larger than $2{v_{\rm tet}}\cdot 2 - 2.7066 \approx 1.35$.
\end{proof}
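For readers who want to double check the arithmetic in the proof, here is a short Python sketch (illustrative only; the values ${v_{\rm tet}}=1.0149\ldots$ and $0.84578$ are those quoted above).
\begin{verbatim}
# Illustrative check of the constants in the proof above.
v_tet = 1.0149416   # volume of the regular ideal tetrahedron (from the text)
v_bdy = 0.84578     # minimal volume of the four boundary tetrahedra (from the text)

# 2*v_tet*(tw-3) + 4*v_bdy > 2*v_tet*tw - 2.7066  iff  4*v_bdy - 6*v_tet > -2.7066
print(4 * v_bdy - 6 * v_tet)                    # ~ -2.7065 > -2.7066

# The case tw(K) = 2:
print(2 * v_bdy, 2 * v_tet * 2 - 2.7066)        # ~1.6916 > ~1.3533
\end{verbatim}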
\section{Negatively curved metrics and Dehn filling}
We now turn our attention to a new technique for bounding volume from below that has not arisen in previous chapters. This bound comes from differential geometric methods, in particular from finding volumes of hyperbolic manifolds under families of metrics, and showing that the hyperbolic metric maximizes volume among such a family.
A hyperbolic manifold has constant sectional curvature equal to $-1$. The metrics we will consider will have negative sectional curvature, not necessarily constant. If a 3-manifold admits such a metric, it actually follows from the Geometrization theorem that the manifold also admits a hyperbolic metric; see \refthm{NegCurvedIsHyp}. However, the proof of that fact gives no information on how the hyperbolic metric relates to the negatively curved one. Often we can build an explicit negatively curved metric, and we will use this metric to make conclusions about the hyperbolic geometry of the manifold. This section presents a number of results along these lines, particularly relating to volume.
\begin{theorem}\label{Thm:NegCurvedIsHyp}
Suppose $M$ is a compact orientable 3-manifold whose interior admits a Riemannian metric with negative sectional curvature. Then $M$ admits a hyperbolic metric.
\end{theorem}
\begin{proof}
Because sectional curvature is negative, the Cartan--Hadamard theorem implies that the universal cover of $M$ is homeomorphic to ${\mathbb{R}}^3$ (see for example \cite[Theorem~3.87]{GallotHulinLafontaine}). It follows that the fundamental group of $M$ is infinite. It also follows that $M$ is irreducible,\index{irreducible} for any sphere in $M$ lifts to a sphere in ${\mathbb{R}}^3$, which bounds a ball. Then the image of the sphere in $M$ bounds the image of that ball in $M$, which is a ball.
In the case that $M$ is closed, because $M$ has strictly negative curvature, it is known that every abelian subgroup of its fundamental group is cyclic~\cite{Preissmann}. Thus there is no ${\mathbb{Z}}\times{\mathbb{Z}}$ subgroup of its fundamental group. So in this case, $M$ is irreducible,\index{irreducible} with $\pi_1(M)$ infinite, containing no ${\mathbb{Z}}\times{\mathbb{Z}}$ subgroup. By the Geometrization Theorem, \refthm{Geometrization}, $M$ admits a hyperbolic structure.
In the case that $M$ has boundary, there may be a ${\mathbb{Z}}\times{\mathbb{Z}}$ subgroup of $\pi_1(M)$, but the fact that the curvature is strictly negative in the interior implies that the subgroup is peripheral, hence $M$ is atoroidal~\cite{BallmannGromovSchroeder}.\index{atoroidal} If $M$ is a Seifert fibered space with infinite fundamental group, then $\pi_1(M)$ contains a cyclic normal subgroup (\refthm{SeifertFibered}). But again, a complete negatively curved finite volume Riemannian manifold cannot have a cyclic normal subgroup~\cite{BallmannGromovSchroeder}. It follows that $M$ is hyperbolic.
\end{proof}
Notice that the proof of \refthm{NegCurvedIsHyp} gives no information on the relationship between the negatively curved metric and the hyperbolic metric. For example, we can make no conclusions about the difference in volumes of the manifolds. In the closed case, work of Besson, Courtois, and Gallot \cite{BessonCourtoisGallot} can be used to bound the volume of a manifold under one negatively curved metric in terms of the volume of another. This was extended to the finite volume case by Boland, Connell, and Souto \cite{BolandConnellSouto}. In dimension 3, a special case of their work is the following theorem.
\begin{theorem}\label{Thm:BolandConnellSouto}
Let $\sigma$ and $\sigma'$ be two complete, finite volume Riemannian metrics on the same 3-manifold $N$. Suppose the Ricci curvature of $\sigma$ satisfies ${\mathrm{Ric}_\sigma} \geq -2\sigma$, and suppose the sectional curvatures of $\sigma'$ lie in the interval $[-a,-1]$ for some constant $a\geq 1$. Then
\[ \operatorname{vol}(N,\sigma) \geq \operatorname{vol}(N,\sigma'), \]
with equality if and only if both metrics are hyperbolic. \qed
\end{theorem}
Our goal in this section is to bound the change in volume of a hyperbolic manifold under Dehn filling by constructing a negatively curved metric on the Dehn filling of a hyperbolic manifold, and then applying \refthm{BolandConnellSouto}.
This volume estimate was first obtained in \cite{fkp:dfvjp}. Along the way we will obtain additional important consequences, for example the $2\pi$-theorem \cite{bleiler-hodgson}.
\subsection{Negatively curved metrics on a solid torus}
We will construct metrics in this subsection, and use them to bound volume. The arguments here require a little more familiarity with Riemannian geometry than the rest of the book so far. However, these arguments are only needed in this section and will not be required elsewhere in the book. Thus a reader disinclined to work carefully through the calculations in Riemannian geometry at this time may accept the statements of the main results here and skip ahead to their applications in \refsubsec{ApplicationsNegCurved}.
The metrics we construct will have constant sectional curvatures away from a collection of solid tori, namely those we glue to perform Dehn filling. Within a solid torus, we will use cylindrical coordinates.
\begin{definition}\label{Def:CylindricalCoords}
Let $V$ be a solid torus, and let $\widetilde{V}$ be its universal cover. The \emph{cylindrical coordinates}\index{cylindrical coordinates} on $\widetilde{V}$ are given by $(r,\mu,\lambda)$, where $r\leq 0$ is the radial distance measured outward from $\partial V$, $0\leq \mu\leq 1$ is measured around each meridional circle, and $-\infty<\lambda<\infty$ is measured in the longitudinal direction, orthogonal to $\mu$. Normalize the coordinates so that the generator of the deck transformation group on $\widetilde{V}$ changes the $\lambda$ coordinate by $1$. These coordinates descend to \emph{cylindrical coordinates} on $V$, where $r\leq 0$ is radial distance measured outward from $\partial V$, $0\leq \mu \leq 1$ is in the meridional direction, and $0\leq \lambda \leq 1$ is measured orthogonal to $\mu$.
\end{definition}
\begin{lemma}\label{Lem:BleilerHodgson}
Let $(r,\mu,\lambda)$ be cylindrical coordinates on a solid torus or its universal cover. Then a metric of the form
\begin{equation}\label{Eqn:NegCurvedSolidTorus}
ds^2 = dr^2 + f(r)^2d\mu^2 + g(r)^2d\lambda^2,
\end{equation}
where $f\from {\mathbb{R}}\to {\mathbb{R}}$ and $g\from {\mathbb{R}} \to {\mathbb{R}}$ are smooth functions of $r$,
satisfies the property that all sectional curvatures are convex combinations of
\[ -\frac{f''}{f}, \quad -\frac{g''}{g}, \quad -\frac{f'g'}{fg}. \]
Moreover, the metric is nonsingular if $f'(r_0)=2\pi$, where $r_0<0$ is the root of $f$ nearest $0$ (if it exists).
\end{lemma}
\begin{proof}
The proof will be a standard calculation from Riemannian geometry, following \cite{bleiler-hodgson}.
For notational convenience, set $r=x_1$, $\mu=x_2$, $\lambda=x_3$. Our Riemannian metric can be written in coordinates as
\[ (g_{ij}) = \left(\begin{array}{ccc}
1& 0 & 0 \\
0 & f(r)^2 & 0 \\
0 & 0 & g(r)^2
\end{array}\right) \quad \mbox{and} \quad
(g^{ij}) = \left(\begin{array}{ccc}
1&0&0\\
0&f(r)^{-2}& 0\\
0&0& g(r)^{-2}
\end{array}\right)
\]
The Christoffel symbols $\Gamma_{ij}^k = \sum_\ell \Gamma_{ij\ell}g^{\ell k}$ can be computed using
\[ \Gamma_{ijk}= \frac{1}{2}\left(\frac{\partial g_{jk}}{\partial x_i}+ \frac{\partial g_{ik}}{\partial x_j}- \frac{\partial g_{ij}}{\partial x_k}\right).\]
Most of the 27 $\Gamma_{ijk}$ are zero; the non-zero ones are $\Gamma_{122} = f\cdot f'$, $\Gamma_{133} = g\cdot g'$, $\Gamma_{212}=f\cdot f'$, $\Gamma_{221}=-f\cdot f'$, $\Gamma_{313} = g\cdot g'$, and $\Gamma_{331}= -g\cdot g'$.
We then obtain the connection
$\nabla_{\partial/\partial x_i}(\partial/\partial x_j) = \sum_k \Gamma_{ij}^k\cdot \partial/\partial x_k$ as follows.
\begin{tabular}{cc|c cc}
& && $j$ & \\
& $\nabla_{\partial/\partial x_i}(\partial/\partial x_j)$ & 1& 2& 3\\
\hline
& 1 &0& $f'/f\cdot\partial/\partial x_2$ & $g'/g\cdot\partial/\partial x_3$\\
$i$& 2& $f'/f\cdot\partial/\partial x_2$ & $-f\cdot f'\cdot\partial/\partial x_1$ & 0 \\
& 3& $g'/g\cdot\partial/\partial x_3$ & 0 & $-g\cdot g'\cdot \partial/\partial x_1$
\end{tabular}
The Riemannian curvature tensor is given by
\[R(X,Y,Z) = \nabla_X\nabla_YZ-\nabla_Y\nabla_XZ-\nabla_{[X,Y]}Z,\]
and the sectional curvatures
\[ K(X,Y) = -\frac{\langle R(X,Y,X),Y\rangle}{|X|^2|Y|^2-\langle X,Y\rangle^2} \]
are all convex combinations of the three sectional curvatures
\[ K_{ij}=K(\partial/\partial x_i, \partial/\partial x_j) \]
for $\{i,j\}\subset\{1,2,3\}$.
We compute
\begin{align*}
K_{12} &= -\frac{\langle R(\partial/\partial x_1, \partial/\partial x_2, \partial/\partial x_1), \partial/\partial x_2\rangle}{\langle\partial/\partial x_1, \partial/\partial x_1\rangle\langle\partial/\partial x_2, \partial/\partial x_2\rangle-\langle\partial/\partial x_1, \partial/\partial x_2\rangle^2}
\\[2mm]
&= -\frac{\langle\nabla_{\partial/\partial x_1}\cdot\nabla_{\partial/\partial x_2}(\partial/\partial x_1) - \nabla_{\partial/\partial x_2}\cdot\nabla_{\partial/\partial x_1}(\partial/\partial x_1),\partial/\partial x_2\rangle}{1\cdot f^2 - 0^2}\\[2mm]
&= -\frac{\langle\nabla_{\partial/\partial x_1}(f'/f\cdot\partial/\partial x_2),\partial/\partial x_2\rangle}{f^2} \\[2mm]
&=-\frac{f''/f\cdot\langle\partial/\partial x_2, \partial/\partial x_2\rangle}{f^2} \\[2mm]
&=-f''/f.
\end{align*}
A symmetric calculation shows $K_{13}= -g''/g$.
Finally
\begin{align*}
K_{23} &= -\frac{\langle\nabla_{\partial/\partial x_2}\cdot\nabla_{\partial/\partial x_3}(\partial/\partial x_2) - \nabla_{\partial/\partial x_3}\cdot\nabla_{\partial/\partial x_2}(\partial/\partial x_2),\partial/\partial x_3\rangle}{f^2 \cdot g^2}\\[2mm]
&= -\frac{\langle-\nabla_{\partial/\partial x_3}(-f'\cdot f\cdot\partial/\partial x_1),\partial/\partial x_3\rangle}{f^2\cdot g^2} \\[2mm]
&=-\frac{f'\cdot f\cdot g'/g\,\langle\partial/\partial x_3, \partial/\partial x_3\rangle}{f^2\cdot g^2} \\[2mm]
&=-\frac{f'\cdot g'}{f\cdot g}.
\end{align*}
Finally, to ensure the metric is nonsingular, it must have a cone angle\index{cone angle} of $2\pi$ along the core, i.e.\ at the point $r=r_0<0$ nearest $0$ such that $f(r_0)=0$. The cone angle\index{cone angle} along the core circle of the solid torus is the limiting ratio of meridional circumference to radial distance from the core, namely
\[ \lim_{r\to r_0} \frac{1}{r-r_0} \int_0^1 f(r)\,d\mu = \lim_{r\to r_0} \frac{f(r)}{r-r_0} = f'(r_0). \]
Thus we must ensure that $f'(r_0)=2\pi$.
\end{proof}
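For readers who prefer to check the curvature computation by machine, the following SymPy sketch (illustrative only; the routines below are ad hoc helpers written for this check, not library functions) recomputes the three sectional curvatures of the metric \eqref{Eqn:NegCurvedSolidTorus} directly from the Christoffel symbols.
\begin{verbatim}
# Illustrative SymPy check of the sectional curvatures of
# ds^2 = dr^2 + f(r)^2 dmu^2 + g(r)^2 dlambda^2.
import sympy as sp

r, mu, lam = sp.symbols('r mu lam')
f, g = sp.Function('f')(r), sp.Function('g')(r)
x = [r, mu, lam]
G = sp.diag(1, f**2, g**2)
Ginv = G.inv()

def Gamma(k, i, j):    # Christoffel symbol Gamma^k_{ij}
    return sum(Ginv[k, l] * (sp.diff(G[l, i], x[j]) + sp.diff(G[l, j], x[i])
               - sp.diff(G[i, j], x[l])) for l in range(3)) / 2

def Riem(l, k, i, j):  # component of R(e_i, e_j) e_k along e_l
    expr = sp.diff(Gamma(l, j, k), x[i]) - sp.diff(Gamma(l, i, k), x[j])
    expr += sum(Gamma(l, i, m) * Gamma(m, j, k)
                - Gamma(l, j, m) * Gamma(m, i, k) for m in range(3))
    return expr

def K(i, j):           # sectional curvature of the coordinate plane (i, j)
    num = sum(G[i, l] * Riem(l, j, i, j) for l in range(3))
    return sp.simplify(num / (G[i, i] * G[j, j]))

print(K(0, 1))   # -f''/f
print(K(0, 2))   # -g''/g
print(K(1, 2))   # -f'*g'/(f*g)
\end{verbatim}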
\begin{lemma}\label{Lem:SolidTorusMetric}
Suppose $V$ is a solid torus with a prescribed Euclidean metric on $\partial V$ such that a Euclidean geodesic representing a meridian has length $\ell_1>2\pi$. Then there exists a smooth Riemannian metric on $V$ that is hyperbolic on a collar neighborhood of $\partial V$, has negative sectional curvature elsewhere, and the restriction of the metric to $\partial V$ gives the prescribed Euclidean metric.
\end{lemma}
\begin{proof}
Let $\widetilde{V}$ denote the universal cover of $V$. We will assign a metric to $\widetilde{V}$ that has the form of \refeqn{NegCurvedSolidTorus}. The functions $f$ and $g$ must satisfy a number of properties.
To obtain the prescribed Euclidean metric, we must have $f(0)=\ell_1$ and $g(0)=\ell_2$ where $\ell_2 = \operatorname{area}(\partial V)/\ell_1$. Then the deck transformation group on $\widetilde{V}$ is generated by
\[ (r,\mu,\lambda)\mapsto (r,\mu+\theta,\lambda+1), \]
where $\theta\in[0,1)$ is appropriately chosen so that the fundamental domain of $\partial V$ has the correct shape. The metric on $\widetilde{V}$ descends to give a smooth metric on $V$.
In order for the metric to be hyperbolic near $\partial V$, $f$ and $g$ must give sectional curvatures equal to $-1$ near $r=0$. This will hold if $f(r)=\ell_1e^r$ and $g(r)=\ell_2e^r$ near $r=0$. To ensure it is nonsingular, we need to ensure $f'(r_0)=2\pi$ where $r_0$ is the negative root of $f$ nearest $0$.
To finish the proof of the lemma, we need to show that there exist functions $f$ and $g$ satisfying the above properties. For purposes of this lemma, it suffices to choose $r_0$ such that $-\ell_1/2\pi < r_0 < -1$ and define $f$ and $g$ near $r=r_0$ by $f(r)=2\pi\sinh(r-r_0)$ and $g(r)=b\cosh(r-r_0)$, for $0<b<\ell_2$. Note that $f$ and $g$ give a metric of constant curvature $-1$ near $r_0$, and that $f'(r_0)=2\pi$, so the metric will be nonsingular. To see that definitions of $f$ and $g$ can be extended, note that the tangent line to $f$ at $r=r_0$ runs through the points $(r_0,0)$ and $(0,-2\pi r_0)$, and the tangent line to $f$ at $r=0$ runs through points $(0, \ell_1)$ and $(-1,0)$. Because $-2\pi r_0<\ell_1$ and $r_0<-1$, the function $f$ can be extended to be strictly convex, increasing, positive and smooth on $r_0<r<0$; see \reffig{2PiGraph}.
\begin{figure}
\import{Figures/Ch13_Volume/}{F13-01-2PiThm.eps_tex}
\caption{Extending $f$ and $g$ to be strictly convex, increasing, positive, smooth functions on $r_0<r<0$}
\label{Fig:2PiGraph}
\end{figure}
Similarly, $g'(r_0)=0<\ell_2=g'(0)$, and $0<b=g(r_0)<\ell_2=g(0)$, so $g$ can be extended to be strictly convex, increasing, positive and smooth on $r_0<r<0$. This gives the desired negatively curved metric.
\end{proof}
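The constraint on $r_0$ in the proof is only a smallness condition on $2\pi/\ell_1$; the following sketch (illustrative only, with arbitrary sample values of $\ell_1$) just checks that the interval $(-\ell_1/2\pi,\,-1)$ of allowed values of $r_0$ is nonempty exactly when $\ell_1>2\pi$.
\begin{verbatim}
# Illustrative only: a valid gluing radius r0 in (-l1/(2*pi), -1) exists iff l1 > 2*pi.
import numpy as np

for l1 in (6.0, 6.5, 7.0, 10.0):
    lo = -l1 / (2 * np.pi)
    print(l1, round(lo, 4), lo < -1)   # True exactly when l1 > 2*pi
\end{verbatim}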
\begin{theorem}[$2\pi$-Theorem]\label{Thm:2PiThm}\index{$2\pi$-theorem}
Suppose $M$ is a hyperbolic 3-manifold with disjoint embedded cusps $C_1, \dots, C_n$ and slopes $s_j$ on $C_j$ such that a geodesic representative of each $s_j$ on $\partial C_j$ has length strictly greater than $2\pi$ in the induced Euclidean metric. Then the Dehn filled\index{Dehn filling} manifold $M(s_1, \dots, s_n)$ admits a metric of negative curvature. Thus it is hyperbolic.
\end{theorem}
\begin{proof}
Remove the cusps $C_1, \dots, C_n$. By \reflem{SolidTorusMetric}, there exists a negatively curved metric on a solid torus $V_j$ such that the Euclidean metric on $\partial V_j$ agrees with that of $\partial C_j$, and such that the metric is hyperbolic on a collar neighborhood of $\partial V_j$. Then put a metric on $M(s_1, \dots, s_n)$ by taking the hyperbolic metric on $M-(\bigcup_{i=1}^n C_i)$, and gluing in solid tori with the metric from \reflem{SolidTorusMetric}.
\end{proof}
Note that there is flexibility in choosing the metric of \reflem{SolidTorusMetric}. For example, the value of $b$ in the proof can be anything in a range of values. If we take a little more care to determine the metric, we can obtain bounds on additional geometric information. For example, we can determine the curvature more explicitly, using the following lemma, and bound the volume of the negatively curved solid torus, as in \reflem{VolDiffEqn}.
\begin{lemma}\label{Lem:CurvatureDiffEqn}
Let $k(r)$ be a smooth, increasing function that lies in $[0,1]$ for all $r$.
Define $f$ and $g$ to be solutions to the differential equations $f''/f=k$ and $(f'g')/(fg) = k$, subject to initial conditions $f(0)= f'(0)=\ell_1$ and $g(0)=\ell_2$. Then the function $g$ satisfies $g''/g = k + (f/f')k'$.
\end{lemma}
\begin{proof}
To check the formula for $g''/g$, note $g'/g = k f/f'$, and differentiate both sides of this equation.
\[ \frac{g''}{g} - \left(\frac{g'}{g}\right)^2 = k\left(1 - \frac{f''}{f} \left(\frac{f}{f'}\right)^2\right) + \frac{f}{f'}k'.\]
Using the fact that $g'/g = k f/f'$ and $f''/f=k$, this simplifies to the desired equation:
\[ \frac{g''}{g} = k + \frac{f}{f'}k'. \qedhere \]
\end{proof}
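The differentiation above can also be verified symbolically; the following sketch (illustrative only) imposes $f''=kf$ and checks that $g''/g - \big(k + (f/f')k'\big)$ vanishes.
\begin{verbatim}
# Illustrative SymPy check of Lemma CurvatureDiffEqn.
import sympy as sp

r = sp.symbols('r')
f, k = sp.Function('f')(r), sp.Function('k')(r)

gp_over_g = k * f / sp.diff(f, r)                      # g'/g = k f/f'
gpp_over_g = sp.diff(gp_over_g, r) + gp_over_g**2      # g''/g
expected = k + (f / sp.diff(f, r)) * sp.diff(k, r)     # k + (f/f') k'

diff = (gpp_over_g - expected).subs(sp.Derivative(f, (r, 2)), k * f)
print(sp.simplify(diff))                               # 0
\end{verbatim}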
\begin{lemma}\label{Lem:VolDiffEqn}
Let $\ell_1>2\pi$, let $k$ be a constant function $k(r)=t\in(0,1)$, and let $f$ and $g$ be defined by the differential equations in \reflem{CurvatureDiffEqn}. Let $V$ be a solid torus with metric of \refeqn{NegCurvedSolidTorus}. Then:
\begin{enumerate}
\item Letting $r_0 = -\arctanh(\sqrt{t})/\sqrt{t}$, $f$ and $g$ have the form:
\begin{align*}
f(r) & = \frac{\ell_1\sqrt{1-t}}{\sqrt{t}}\sinh(\sqrt{t}(r-r_0)) \\
g(r) & = \ell_2\sqrt{1-t}\cosh(\sqrt{t}(r-r_0))
\end{align*}
\item At $r_0$, $f(r_0)=0$ and $f'(r_0) = \ell_1\sqrt{1-t}$. Thus the solid torus $V$ has a nonsingular metric of negative curvature $-t$ when $t=1-(2\pi/\ell_1)^2$.
\item For any $t\in(0,1)$, the volume of the (possibly singular) solid torus $V$ with metric $ds^2=dr^2+f(r)^2d\mu^2+g(r)^2d\lambda^2$ is given by
\[ \operatorname{vol}(V) = \int_{r_0}^0 f(r)g(r)dr = \frac{\ell_1\ell_2}{2}. \]
\end{enumerate}
\end{lemma}
\begin{proof}
By \reflem{CurvatureDiffEqn} and \reflem{BleilerHodgson}, the metric will have negative sectional curvature. We need to show the additional properties.
The proof is a series of calculations. First, solving the differential equation $f''/f=t$, the function $f$ has the form
\[ f(r) = c_1 e^{\sqrt{t} r} + c_2 e^{-\sqrt{t} r}. \]
Given initial conditions $f(0)=f'(0)=\ell_1$, we find
\begin{align*}
f(r) &= \frac{\ell_1}{2}\left(1+\frac{1}{\sqrt{t}}\right) e^{\sqrt{t}r} + \frac{\ell_1}{2}\left(1-\frac{1}{\sqrt{t}}\right) e^{-\sqrt{t} r} \\
&= \ell_1\left(\cosh(r\sqrt{t})+\frac{1}{\sqrt{t}}\sinh(r\sqrt{t})\right) \\
&= \frac{\ell_1\sqrt{1-t}}{\sqrt{t}}\sinh(\sqrt{t}(r-r_0)),
\end{align*}
where $r_0=-\arctanh(\sqrt{t})/\sqrt{t}$, as claimed. Note that
\[ f'(r) = \ell_1\sqrt{1-t}\cosh(\sqrt{t}(r-r_0)),\]
so when $r=r_0$, we have $f(r_0)=0$ and $f'(r_0)=\ell_1\sqrt{1-t}$. Thus $f'(r_0)=2\pi$ if $t= 1-(2\pi/\ell_1)^2$, and the metric will be nonsingular in this case.
As for $g$, we may solve $g'/g = t\, f/f' = \sqrt{t}\tanh(\sqrt{t}(r-r_0))$ by integration, to obtain
\[ g(r) = c_3\cosh(\sqrt{t}(r-r_0)) = \ell_2\sqrt{1-t}\cosh(\sqrt{t}(r-r_0)), \]
using the initial condition $g(0)=\ell_2$ to determine the constant $c_3$.
Finally we compute the volume of a solid torus $V$ with metric as in \refeqn{NegCurvedSolidTorus}.
\begin{align*}
\operatorname{vol}(V) &= \int_{r_0}^0 f(r)g(r)\,dr \\
&= \int_{r_0}^0\frac{\ell_1\ell_2(1-t)}{\sqrt{t}}\sinh(\sqrt{t}(r-r_0))\cosh(\sqrt{t}(r-r_0))\,dr \\
&=\left[ \frac{\ell_1\ell_2(1-t)}{2t}\sinh^2(\sqrt{t}(r-r_0)) \right]_{r=r_0}^0 \\
&=\frac{\ell_1\ell_2(1-t)}{2t}\sinh^2(\arctanh(\sqrt{t})) \\
&= \frac{\ell_1\ell_2(1-t)}{2t}\cdot\frac{t}{1-t} \\
&= \frac{\ell_1\ell_2}{2}. \qedhere
\end{align*}
\end{proof}
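These formulas are easy to sanity check numerically. The following sketch (illustrative only; the sample values $\ell_1=7$ and $\ell_2=3$ are arbitrary, with $t=1-(2\pi/\ell_1)^2$ as in the lemma) verifies the initial conditions, the cone angle, and the volume.
\begin{verbatim}
# Illustrative numerical check of Lemma VolDiffEqn (sample values only).
import numpy as np
from scipy.integrate import quad

l1, l2 = 7.0, 3.0                        # requires l1 > 2*pi
t = 1 - (2 * np.pi / l1) ** 2
r0 = -np.arctanh(np.sqrt(t)) / np.sqrt(t)

f  = lambda r: l1 * np.sqrt(1 - t) / np.sqrt(t) * np.sinh(np.sqrt(t) * (r - r0))
fp = lambda r: l1 * np.sqrt(1 - t) * np.cosh(np.sqrt(t) * (r - r0))
g  = lambda r: l2 * np.sqrt(1 - t) * np.cosh(np.sqrt(t) * (r - r0))

print(f(0.0), fp(0.0), g(0.0))           # ~7, 7, 3: the initial conditions
print(fp(r0), 2 * np.pi)                 # f'(r0) = l1*sqrt(1-t) = 2*pi: nonsingular
print(quad(lambda r: f(r) * g(r), r0, 0.0)[0], l1 * l2 / 2)   # both ~10.5
\end{verbatim}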
Now we would like to use the metric on the solid torus $V$ obtained from \reflem{VolDiffEqn}, along with \refthm{BolandConnellSouto}, to bound the volume of Dehn filled manifolds. However, at this point we have a problem. Although we have constructed a nonsingular Riemannian metric on the solid torus with nice curvature and volume, note that the metric does not give a hyperbolic metric, with sectional curvatures $-1$, on a collar neighborhood of the boundary of $V$. Thus we cannot glue the metric of \reflem{VolDiffEqn} to the metric of the cusped manifold with horoball neighborhoods removed to obtain a negatively curved metric on the Dehn filled manifold, as we did in \refthm{2PiThm}. The way to fix this problem is to do a little deeper analysis, which is done in the following lemma.
\begin{lemma}\label{Lem:FKPVolFilling}
Suppose $V$ is a solid torus with a prescribed Euclidean metric on $\partial V$, such that a Euclidean geodesic representing a meridian has length $\ell_1>2\pi$. Let $\zeta\in (0,1)$ be a constant. Then there exists a smooth, negatively curved Riemannian metric on $V$ that satisfies the following properties.
\begin{enumerate}
\item The metric is hyperbolic on a collar neighborhood of $\partial V$, and its restriction to $\partial V$ gives the prescribed Euclidean metric.
\item The sectional curvatures are bounded above by $-\zeta(1-(2\pi/\ell_1)^2)$.
\item The volume of $V$ in this metric is at least $\frac{1}{2}\zeta\operatorname{area}(\partial V)$.
\end{enumerate}
\end{lemma}
\begin{proof}
We use the ideas of \reflem{CurvatureDiffEqn} and \reflem{VolDiffEqn} to define $f$ and $g$ by differential equations. However, we do not choose $k(r)$ to be constant. We need $k(r)=1$ near $r=0$ to obtain the appropriate curvature estimates on the boundary of $V$. We have seen in \reflem{VolDiffEqn} that we may obtain nice volume and curvature results when $k(r)=t$ for some $t\in(0,1)$ for $r<0$. So we define $k$ to be a smooth bump function, depending on $r$, $t$, and $\epsilon>0$, as follows. If $r\leq -\epsilon$, set $k_{t,\epsilon}(r)=t$. If $r\geq -\epsilon/2$, set $k_{t,\epsilon}(r)=1$. For $r$ between $-\epsilon$ and $-\epsilon/2$, the function $k_{t,\epsilon}(r)$ is smooth and strictly increasing. See \reffig{BumpFunction} for a typical graph.
\begin{figure}
\import{Figures/Ch13_Volume/}{F13-02-kter.eps_tex}
\caption{Graph of $k_{t,\epsilon}(r)$.}
\label{Fig:BumpFunction}
\end{figure}
Then $k$ is continuous in the three variables $t, \epsilon,r$.
We also define $k_{t,0}(r)$ to be the step function
\[ k_{t,0}(r) = \lim_{\epsilon\to 0^+} k_{t,\epsilon}(r) =
\begin{cases}
t & \mbox{ if } r<0, \\ 1 & \mbox{ if } r\geq 0.
\end{cases}
\]
Now for $\epsilon\geq 0$ and $t\in (0,1)$, define $f_{t,\epsilon}$ and $g_{t,\epsilon}$ by the differential equations
\[ \frac{f''_{t,\epsilon}(r)}{f_{t,\epsilon}(r)} = k_{t,\epsilon}(r), \quad
\frac{g'_{t,\epsilon}(r)}{g_{t,\epsilon}(r)} = k_{t,\epsilon}(r)\frac{f_{t,\epsilon}(r)}{f'_{t,\epsilon}(r)}. \]
The family of functions $f_{t,\epsilon}(r)$ and $g_{t,\epsilon}(r)$ can be shown to have a number of nice properties. Away from $\epsilon=0$, these mostly follow from standard facts about differential equations. As $\epsilon\to 0^+$, a little more analysis is required, which we will omit here. For full details see \cite{fkp:dfvjp}. In particular, the following hold.
\textbf{Nonsingularity:}
For all $t\in(0,1)$ and $\epsilon\geq 0$, $f_{t,\epsilon}(r)$ has a unique root $r_0(t,\epsilon)$. The function $f'_{t,\epsilon}(r_0(t,\epsilon))$ is continuous in $t$ and $\epsilon$, and strictly decreasing in both variables. For every $t$ between $0$ and $1-(2\pi/\ell_1)^2<1$, there is a unique value $\epsilon(t)>0$ such that $f'_{t, \epsilon(t)}(r_0(t, \epsilon(t))) = 2\pi$. This gives a nonsingular metric for every such $t$. Moreover, as $t\to 1-(2\pi/\ell_1)^2$, $\epsilon(t)\to 0$.
Now let $t\in (0,1)$ and define $\tau(t)$ to be the nonsingular Riemannian metric given by the functions $f_t(r) = f_{t, \epsilon(t)}(r)$ and $g_t(r)=g_{t,\epsilon(t)}(r)$.
\textbf{Sectional curvatures:}
The metric $\tau(t)$ has all sectional curvatures bounded above by $-t$.
This follows from \reflem{CurvatureDiffEqn}, along with the fact that
the function $f_t(r)$ is positive and increasing. Thus $f_t(r)/f'_t(r)$ is positive. Moreover, $k'_{t,\epsilon(t)}(r)$ is nonnegative, since $k_{t,\epsilon(t)}(r)$ is nondecreasing in $r$. Thus \reflem{CurvatureDiffEqn} implies that $g''_t(r)/g_t(r)$ is at least $k_{t,\epsilon(t)}(r)\geq t$. By definition, $f''_t(r)/f_t(r)$ and $(f'_t(r)g'_t(r))/(f_t(r)g_t(r))$ are equal to $k_{t,\epsilon(t)}(r)\geq t$. So all sectional curvatures are bounded above by $-t$.
\textbf{Volumes:}
Recall we have fixed $\zeta>0$. For notational purposes, define $t_0$ to be $t_0=1-(2\pi/\ell_1)^2$. Let $t$ lie in the interval $(\zeta t_0,t_0)$. Then for the metric $\tau(t)$, we have
\[\lim_{t\to t_0} \operatorname{vol}(V, \tau(t)) = \frac{\ell_1\ell_2}{2} = \frac{1}{2}\operatorname{area}(\partial V). \]
This follows from the fact that $f_t$ and $g_t$ converge uniformly to $f_{t_0,0}$ and $g_{t_0,0}$ as $t\to t_0$. Moreover, $r_0(t,\epsilon(t))$ converges to $r_0(t_0,0)$. Then the limit must be the limit of the differential equation in the case $k$ is constant, which we computed in \reflem{VolDiffEqn}. In particular, we have
\[ \lim_{t\to t_0} \operatorname{vol}(V, \tau(t)) = \operatorname{vol}(V, t_0) = \frac{\ell_1\ell_2}{2}.\]
To finish the proof of the lemma, select $t \in (\zeta t_0, t_0)$ near enough to $t_0$ so that $\operatorname{vol}(V, \tau(t))\geq \frac{1}{2}\zeta\operatorname{area}(\partial V)$. For this metric, sectional curvatures are bounded above by $-t \leq -\zeta t_0 = -\zeta(1-(2\pi/\ell_1)^2)$. Finally, the metric is nonsingular, and by choice of bump function and initial conditions, on a collar neighborhood of $\partial V$ it is hyperbolic, with metric agreeing with the prescribed metric on $\partial V$.
\end{proof}
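The bump function $k_{t,\epsilon}$ used above can be realized concretely. The sketch below (illustrative only; this is one possible choice built from the standard smooth step $e^{-1/x}$, not a construction taken from the text) produces a smooth function equal to $t$ for $r\leq-\epsilon$, equal to $1$ for $r\geq-\epsilon/2$, and increasing in between.
\begin{verbatim}
# Illustrative only: one possible smooth bump function k_{t,eps}(r).
import numpy as np

def smooth_step(x):              # 0 for x<=0, 1 for x>=1, smooth and increasing
    def h(u):
        return np.where(u > 0, np.exp(-1.0 / np.maximum(u, 1e-300)), 0.0)
    return h(x) / (h(x) + h(1 - x))

def k(r, t, eps):
    s = smooth_step((r + eps) / (eps / 2.0))   # ramps up between r=-eps and r=-eps/2
    return t + (1 - t) * s

print(k(-1.0, 0.5, 0.2), k(-0.05, 0.5, 0.2))   # 0.5 and 1.0
\end{verbatim}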
We are now ready to prove the main result of this section.
\begin{theorem}[Volume change under Dehn filling]\label{Thm:VolumeChangeUnderFilling}
Let $M$ be a complete, finite volume hyperbolic manifold with cusps. Suppose $C_1, \dots, C_n$ are disjoint embedded cusps with slopes $s_j$ on $C_j$ such that a geodesic representative of $s_j$ on $\partial C_j$ has length strictly greater than $2\pi$. Denote the minimal slope length by $\ell_{\min}$. Then the Dehn filled\index{Dehn filling}\index{volume bound!lower bound on Dehn filling} manifold $M(s_1, \dots, s_n)$ is a hyperbolic manifold with
\[ \operatorname{vol}(M(s_1, \dots, s_n)) \geq \left( 1 - \left(\frac{2\pi}{\ell_{\min}}\right)^2\right)^{3/2} \operatorname{vol}(M). \]
\end{theorem}
\begin{proof}
Fix an arbitrary constant $\zeta \in (0,1)$. Replace each cusp $C_j$ by a solid torus $V_j$ whose meridian is $s_j$. By \reflem{FKPVolFilling}, there is a smooth, negatively curved Riemannian metric $\tau_j$ on $V_j$ whose restriction to $\partial V_j$ agrees with the Euclidean metric on $\partial C_j$, so this gives a smooth Riemannian metric $\tau$ on $M(s_1, \dots, s_n)$. Additionally, for each $j$, sectional curvatures on $V_j$ are at most $-\zeta(1-(2\pi/\ell_{\min})^2)$, and the volume of $V_j$ is at least $\zeta\operatorname{area}(\partial V_j)/2 = \zeta\operatorname{vol}(C_j)$ where $\operatorname{vol}(C_j)$ is the cusp volume in the hyperbolic metric. Note that sectional curvatures in $V_j$ are also bounded below by some constant, since $V_j$ is compact.
Thus the metric $\tau$ on $M(s_1,\dots, s_n)$ has sectional curvatures bounded above by $-\zeta(1-(2\pi/\ell_{\min})^2)$ and bounded below by some constant. Moreover
\begin{align*}
\operatorname{vol}(M(s_1, \dots, s_n))& \geq\operatorname{vol}(M-\cup_{j=1}^n C_j) + \zeta\sum\operatorname{vol}(C_j) \\
&\geq \zeta\operatorname{vol}(M).
\end{align*}
Rescale the metric to obtain a metric with sectional curvatures bounded above by $-1$. To do this, replace $\tau$ by $\sigma = \zeta (1-(2\pi/\ell_{\min})^2)\,\tau$; that is, rescale all lengths by the factor $\sqrt{\zeta (1-(2\pi/\ell_{\min})^2)}$. Note this multiplies all sectional curvatures by $(\zeta(1-(2\pi/\ell_{\min})^2))^{-1}$. The volume is rescaled by a factor of $(\zeta(1-(2\pi/\ell_{\min})^2))^{3/2}$. Thus under the metric $\sigma$, sectional curvatures of $M(s_1, \dots, s_n)$ lie in $[-a, -1]$ for some $a\geq 1$, and $\operatorname{vol}(M(s_1, \dots, s_n), \sigma) \geq \zeta^{5/2}(1-(2\pi/\ell_{\min})^2)^{3/2}\operatorname{vol}(M)$.
Now let $S$ denote the set of all metrics on $M(s_1, \dots, s_n)$ whose sectional curvatures lie in the interval $[-a,-1]$. Since $\zeta$ is arbitrary, by the above work the supremum of volumes of $M(s_1, \dots, s_n)$ over all metrics in $S$ satisfies:
\[ \sup_{\sigma \in S} \operatorname{vol}(M(s_1, \dots, s_n), \sigma) \geq \left(1-\left(\frac{2\pi}{\ell_{\min}}\right)^2\right)^{3/2}\operatorname{vol}(M). \]
Here $\operatorname{vol}(M(s_1, \dots, s_n),\sigma)$ denotes volume under the metric $\sigma$, and $\operatorname{vol}(M)$ denotes the volume of $M$ under its given hyperbolic metric.
Now, \refthm{BolandConnellSouto} implies that the hyperbolic metric $\sigma_{\mathrm{hyp}}$ on the Dehn filled manifold $M(s_1, \dots, s_n)$ uniquely maximizes volume over the set $S$ of all metrics whose sectional curvatures lie in the interval $[-a,-1]$. Thus:
\begin{align*}
\operatorname{vol}(M(s_1, \dots, s_n), \sigma_{\mathrm{hyp}}) & \geq
\sup_{\sigma \in S} \operatorname{vol}(M(s_1, \dots, s_n), \sigma) \\
& \geq \left(1-\left(\frac{2\pi}{\ell_{\min}}\right)^2\right)^{3/2}\operatorname{vol}(M). \qedhere
\end{align*}
\end{proof}
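To get a feel for the bound, the following sketch (illustrative only) evaluates the factor $(1-(2\pi/\ell_{\min})^2)^{3/2}$ for a few slope lengths; the factor tends to $0$ as $\ell_{\min}\to 2\pi$ and to $1$ as $\ell_{\min}\to\infty$, so long filling slopes change the volume very little.
\begin{verbatim}
# Illustrative only: the volume-change factor as a function of slope length.
import numpy as np

def factor(ell_min):
    # (1 - (2*pi/ell_min)^2)^(3/2), valid for ell_min > 2*pi
    return (1 - (2 * np.pi / ell_min) ** 2) ** 1.5

for ell in (6.5, np.sqrt(50.0), 8.0, 15.0, 30.0):
    print(round(ell, 3), round(factor(ell), 4))
\end{verbatim}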
\subsection{Applications to knots}\label{Subsec:ApplicationsNegCurved}
Recall the definitions of \emph{twist-reduced}\index{twist-reduced}, \refdef{TwReduced} or \refdef{TwistReduced}, and \emph{twist-number}\index{twist-number}, \refdef{TwistNumber}. We will denote the twist-number of a twist-reduced diagram $K$ by ${\operatorname{tw}}(K)$. An application of \refthm{VolumeChangeUnderFilling} is the following result, which first appeared in \cite{fkp:dfvjp}.
\begin{theorem}[Volume bounds for highly twisted links]\label{Thm:VolBoundHighlyTwisted}\index{highly twisted!volume bounds}\index{highly twisted}
Let $K\subset S^3$ be a link with a prime, twist-reduced diagram. Assume the diagram has ${\operatorname{tw}}(K)\geq 2$ twist regions, and that each twist region contains at least seven crossings. Then $K$ is a hyperbolic link satisfying
\[ 0.70734 ({\operatorname{tw}}(K) -1) \leq \operatorname{vol}(S^3-K) < 10 {v_{\rm tet}} ({\operatorname{tw}}(K)-1), \]
where ${v_{\rm tet}} = 1.0149\dots$ is the volume of a regular ideal tetrahedron.
\end{theorem}
The proof of \refthm{VolBoundHighlyTwisted} uses a theorem due to Miyamoto, which in full generality gives a lower bound on the volume of an $n$-dimensional hyperbolic manifold with geodesic boundary \cite{Miyamoto}. Here, we state only the 3-dimensional case, which is the case we will use.
\begin{theorem}[Miyamoto]\label{Thm:Miyamoto}\index{Miyamoto's theorem}
If $N$ is a hyperbolic 3-manifold with totally geodesic boundary, then $\operatorname{vol}(N)\geq -{v_{\rm{oct}}}\chi(N)$,
where ${v_{\rm{oct}}} =3.66\dots$ is the volume of a regular ideal octahedron,\index{ideal octahedron, regular}\index{regular ideal octahedron}\index{regular ideal octahedron!volume} with equality if and only if $N$ decomposes into $-\chi(N)$ ideal octahedra.
\end{theorem}
\begin{proof}[Proof sketch]
Suppose $N$ has totally geodesic boundary. Then the lift of $N$ to the universal cover ${\mathbb{H}}^3$ is a subspace $\widetilde{N}$ of ${\mathbb{H}}^3$ bounded by disjoint hyperplanes, where each hyperplane is a lift of the geodesic surface $\partial N$.
Pick one such hyperplane $O$, and consider the set $D_O$ consisting of points in $\widetilde{N}$ closer to $O$ than to any other hyperplane in the lift of $\partial N$. The set $D_O$ is convex, with boundary consisting of faces $F$ made up of points equidistant from the hyperplane $O$ and some other hyperplane. Define the truncated cone $C_F$ to be all points in $D_O$ that lie on a line running from $F$ to meet $O$ orthogonally. Ranging over all such hyperplanes $O$ and all faces $F$, we may decompose all of $\widetilde{N}$ into truncated cones.
Now project to $N$. Because $\partial \widetilde{N}$ is invariant under the action of the covering transformations, and distances along geodesics are preserved, the decomposition projects to a decomposition of $N$. The volume of $N$ is obtained by summing the volumes of all the truncated cones decomposing $N$.
The main step of the proof is to bound the ratio $\operatorname{vol}(C_F)/\operatorname{area}(O\cap C_F)$, with notation as above. To do so, \emph{truncated tetrahedra} are introduced.
Consider a combinatorial polyhedron $P$ obtained by removing from a tetrahedron a small open neighborhood of each vertex. The faces of the tetrahedron become hexagonal faces of $P$, and four new triangular faces are added. A truncated tetrahedron,\index{truncated tetrahedron} also called a \emph{hyperideal tetrahedron}\index{hyperideal tetrahedron} in the literature, is a compact polyhedron in hyperbolic space that realizes $P$, such that the triangle faces and hexagonal faces are totally geodesic, and such that hexagons meet triangles at right angles. See \reffig{TruncTet}, left.
\begin{figure}
\includegraphics{Figures/Ch13_Volume/F13-03-TruncT.eps}
\caption{Left: a truncated tetrahedron, or hyperideal tetrahedron. Right: when lengths of edges between triangles go to zero, the truncated tetrahedron becomes a regular ideal octahedron.\index{ideal octahedron, regular}\index{regular ideal octahedron}}
\label{Fig:TruncTet}
\end{figure}
It can be shown that a truncated tetrahedron is determined by the lengths of the six edges between its triangular faces. When each of these is the same length, we say the truncated tetrahedron is regular, and we denote the regular truncated tetrahedron of edge length $r$ by $T_r$. Denote its four triangular faces by $\tau_1, \dots, \tau_4$. We will be considering the ratio
\[ \rho(r) = \frac{\operatorname{vol}(T_{2r})}{\operatorname{area}(\tau_1\cup\tau_2\cup\tau_3\cup\tau_4)}. \]
Observe that $r\geq 0$, and when $r$ approaches $0$, the triangles $\tau_j$ each become ideal triangles,\index{ideal triangle} and the truncated tetrahedron $T_0$ becomes a regular ideal octahedron.\index{ideal octahedron, regular}\index{regular ideal octahedron} See \reffig{TruncTet}, right. Miyamoto shows that $\rho(r)$ increases with $r$~\cite[Lemma~2.1]{Miyamoto}. Thus $\rho(r)\geq\rho(0)$.
Now consider again the truncated cone $C_F$ and its truncation face $O$. Consider geodesics in $N$ with both endpoints on $\partial N$ and orthogonal to $\partial N$. Such a geodesic is called a \emph{return path}. Let $\ell$ be the length of the shortest return path in $N$; note $\ell\geq 0$. The return path lifts to a collection of geodesics in $\widetilde{N}$, each running between hyperplane lifts of $\partial N$, each perpendicular to the hyperplane, and each of length $\ell$. If such a geodesic meets the cone $C_F$, its intersection with $C_F$ has length $\ell/2$.
The main technical result in \cite{Miyamoto} is a proof that
\begin{equation}\label{Eqn:Miyamoto}
\frac{\operatorname{vol}(C_F)}{\operatorname{area}(O\cap C_F)} \geq \frac{\operatorname{vol}(T_{\ell})}{\operatorname{area}(\tau_1\cup\tau_2\cup\tau_3\cup\tau_4)} = \rho(\ell/2).
\end{equation}
The proof is obtained by subdividing $C_F$ into pieces, matching pieces making up $T_{\ell}$, and observing relationships between volume and edge lengths for such pieces.
Assuming \refeqn{Miyamoto}, we complete the proof. When $N$ has shortest return path of length at least $\ell$,
\[
\operatorname{vol}(N) = \sum_C \operatorname{vol}(C) \geq \rho\left(\frac{\ell}{2}\right)\sum_C \operatorname{area}(C\cap O) = \rho\left(\frac{\ell}{2}\right)\operatorname{area}(\partial N),\]
where the sum is over all truncated cones $C$ in the decomposition, and $C\cap O\subset \partial N$ denotes the portion of $C$ on $\partial N$.
Finally, note that the shortest return path always has length at least $\ell=0$. Using the fact that $\rho$ is increasing, the above equation becomes
\[ \operatorname{vol}(N) \geq \rho(0)\operatorname{area}(\partial N) = \frac{{v_{\rm{oct}}}}{4\pi}\cdot (-2\pi\chi(\partial N)) = -{v_{\rm{oct}}}\chi(N).
\]
Here we are using the fact that $\operatorname{vol}(T_0)={v_{\rm{oct}}}$, that the area of an ideal triangle\index{ideal triangle} is $\pi$, the Gauss--Bonnet formula $\operatorname{area}(\partial N) = -2\pi\chi(\partial N)$, and the fact that for a 3-manifold $N$ with boundary, $\chi(\partial N) =2\chi(N)$.
\end{proof}
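The final arithmetic step can be recorded symbolically; the sketch below (illustrative only) simply checks that $\rho(0)\operatorname{area}(\partial N) = -{v_{\rm{oct}}}\chi(N)$ given $\operatorname{area}(\partial N)=-2\pi\chi(\partial N)$ and $\chi(\partial N)=2\chi(N)$.
\begin{verbatim}
# Illustrative symbolic check of the last step in the proof sketch.
import sympy as sp

v_oct, chi_N = sp.symbols('v_oct chi_N')
chi_bdy = 2 * chi_N                       # chi(dN) = 2*chi(N)
area_bdy = -2 * sp.pi * chi_bdy           # Gauss--Bonnet
rho0 = v_oct / (4 * sp.pi)                # vol(T_0) / area of its four ideal triangles
print(sp.simplify(rho0 * area_bdy))       # -chi_N*v_oct
\end{verbatim}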
Given Miyamoto's theorem, we prove \refthm{VolBoundHighlyTwisted}.
\begin{proof}[Proof of \refthm{VolBoundHighlyTwisted}]
The fact that the link $K$ is hyperbolic follows from \refthm{FuterPurcellFilling}. The upper bound on volume comes from \refthm{VolUpperTwistRegions}.
To obtain the lower bound, we will consider fully augmented links.\index{fully augmented link} Let $L$ be the fully augmented link obtained by adding a crossing circle encircling each twist region of $K$. By \refthm{AugSlopeLengths}, $S^3-K$ is obtained from $S^3-L$ by performing Dehn fillings on crossing circles, along slopes of length at least $\sqrt{7^2+1}=\sqrt{50} > 2\pi$.
We will find a lower bound on the volume of $S^3-L$. To do so, first remove all half-twists from the diagram of $L$. That is, recall $L$ may have single crossings at twist regions. Replace $L$ with a new fully augmented link $L'$ that has no crossing at twist regions. Note the complement of $L'$ is obtained from that of $L$ by cutting along 2-punctured disks bounded by crossing circles and regluing, thus it follows from \refcor{Gluing3PunctSpheres} that the volume of $S^3-L'$ is identical to the volume of $S^3-L$.
Now cut $S^3-L'$ along the plane of projection, separating it into two identical pieces, each with totally geodesic boundary coming from the white surface. Call one of these $M$. By Miyamoto's theorem, $\operatorname{vol}(M) \geq -{v_{\rm{oct}}}\chi(M)$. Note that $M$ is homeomorphic to a ball in $S^3$ with a tube drilled out for each crossing circle, and there are ${\operatorname{tw}}(K)$ crossing circles.
Thus the Euler characteristic\index{Euler characteristic} of $M$ is $\chi(M) = (1-{\operatorname{tw}}(K))$. Because we form $S^3-L'$ by taking two copies of $M$, the volume satisfies
\[ \operatorname{vol}(S^3-L) = \operatorname{vol}(S^3-L') \geq -2{v_{\rm{oct}}}\chi(M) = 2{v_{\rm{oct}}}({\operatorname{tw}}(K)-1). \]
Now by \refthm{VolumeChangeUnderFilling} (volume change under Dehn filling), the volume of $S^3-K$ satisfies:
\begin{align*}
\operatorname{vol}(S^3-K) &\geq \left( 1-\left(\frac{2\pi}{\sqrt{50}}\right)^2\right)^{3/2} \operatorname{vol}(S^3-L) \\
& \geq \left(1-\frac{2\pi^2}{25}\right)^{3/2} 2{v_{\rm{oct}}}({\operatorname{tw}}(K)-1) \\
& \geq 0.70735({\operatorname{tw}}(K)-1). \qedhere
\end{align*}
\end{proof}
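The constant $0.70734$ in the statement can be recovered numerically; the following sketch (illustrative only, with ${v_{\rm{oct}}}$ taken from the text) traces the chain of inequalities in the proof.
\begin{verbatim}
# Illustrative check of the constant in the lower bound.
import numpy as np

v_oct = 3.663862                       # volume of the regular ideal octahedron
slope = np.sqrt(7 ** 2 + 1)            # shortest slope length with >= 7 crossings
print(slope, 2 * np.pi)                # 7.071 > 6.283, so the filling theorem applies

factor = (1 - (2 * np.pi / slope) ** 2) ** 1.5
print(2 * v_oct * factor)              # ~0.7073, as in the statement
\end{verbatim}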
\section{Volume, guts, and essential surfaces}
\Refthm{VolBoundHighlyTwisted} gives volume bounds for highly twisted\index{highly twisted} knots and links, but only those with at least seven crossings per twist region. A similar result holds for alternating knots and links,\index{alternating knot or link} without any restriction on the number of crossings per twist region, originally due to Lackenby \cite{lackenby:alt-volume}. The method of proof is different, but illustrates another tool for bounding hyperbolic volume from below, developed by Agol, Storm, and Thurston \cite{ast:volumes}. In this section, we explain the tool, and use it to bound volumes of alternating links.
The main theorem of the section is the following.
\begin{theorem}[Volume bounds for alternating links]\label{Thm:VolAlt}\index{alternating knot or link!volume bounds}
Let $K$ be a hyperbolic knot or link with a twist-reduced alternating diagram\index{alternating diagram} with twist number ${\operatorname{tw}}(K)$. Then
\[ \operatorname{vol}(S^3-K) \geq \frac{{v_{\rm{oct}}}}{2}({\operatorname{tw}}(K) - 2). \]
\end{theorem}
We will prove \refthm{VolAlt} by considering again the checkerboard surfaces of the alternating link, and the bounded polyhedral decomposition of the link complement cut along those surfaces, \refthm{PolyAltKnot} and \reflem{BoundedPolyAlt}. To describe our main tool, we need additional terminology.
First, recall the JSJ-decomposition of a 3-manifold, \refthm{JSJ} and \refdef{JSJ}. We will apply a special form of this decomposition to a 3-manifold $M$ cut along an essential\index{essential} surface $S$. Recall that $M{\backslash \backslash} S$ is the closure of the manifold obtained by removing a regular neighborhood of $S$ (\refdef{Cut}). Its boundary consists of components of the parabolic locus,\index{parabolic locus} which are remnants of the torus boundary components of $M$, and $\widetilde{S}$.
\begin{definition}\label{Def:Double}
Let $M$ be a compact 3-manifold with torus boundary components, and let $S$ be an essential\index{essential} surface properly embedded in $M$. The \emph{double of $M{\backslash \backslash} S$},\index{double of $M{\backslash \backslash} S$} denoted $D(M{\backslash \backslash} S)$, is the manifold obtained by taking two copies of $M{\backslash \backslash} S$ and gluing them by the identity map on $\widetilde{S}$.
\end{definition}
Note the double of $M{\backslash \backslash} S$ will have torus boundary components coming from the parabolic locus of the boundary of $M{\backslash \backslash} S$. We will consider the JSJ-decomposition of the double, as in \refdef{JSJ}.\index{JSJ-decomposition}
\begin{lemma}\label{Lem:AnnulusDecomposition}
Let $M$ be a hyperbolic 3-manifold, homeomorphic to the interior of a compact manifold with torus boundary. Let $S$ be a properly embedded essential\index{essential} surface in $M$. Consider the double $D(M{\backslash \backslash} S)$, and let $\mathcal{T}$ denote the JSJ-decomposition of $D(M{\backslash \backslash} S)$. Finally, slice $\mathcal{T}$ and $D(M{\backslash \backslash} S)$ along $\widetilde{S}$, obtaining two copies of $M{\backslash \backslash} S$. The following hold.
\begin{enumerate}
\item The tori in the collection $\mathcal{T}$ can be isotoped to be preserved by the reflection of $D(M{\backslash \backslash} S)$ in the surface $\widetilde{S}$; thus cutting $D(M{\backslash \backslash} S)$ along $\widetilde{S}$ cuts $\mathcal{T}$ into two identical pieces.
\item Each essential torus $T\in \mathcal{T}$ is sliced into essential annuli in $M{\backslash \backslash} S$ with boundary on $\widetilde{S}$.
\item The characteristic submanifold\index{characteristic submanifold} of $D(M{\backslash \backslash} S)$ intersects $M{\backslash \backslash} S$ in components that are either $I$-bundles\index{$I$-bundle} over a subsurface of $\widetilde{S}$, or Seifert fibered solid tori.
\end{enumerate}
\end{lemma}
\begin{proof}
The first item follows from the equivariant torus theorem, due to Holzmann \cite{holzmann}. It is an exercise to prove the remaining two items; \refex{AnnularJSJ}.
\end{proof}
\begin{definition}\label{Def:Guts}
Let $M$, $S$, $\mathcal{T}$ be as in \reflem{AnnulusDecomposition}. We say that the intersection of the characteristic submanifold\index{characteristic submanifold} of $D(M{\backslash \backslash} S)$ with $M{\backslash \backslash} S$ is the \emph{characteristic submanifold of $M{\backslash \backslash} S$}. Its complement in $M{\backslash \backslash} S$ is the \emph{guts}\index{guts} of $S$, denoted $\operatorname{guts}(M{\backslash \backslash} S)$ or sometimes simply $\operatorname{guts}(S)$.
\end{definition}
\begin{example}
Recall from \refexamp{Fig8Fiber} that the figure-8 knot complement $M$ contains a surface $S$ that is a fiber, shown in \reffig{Fig8Seifert}. This surface $S$ is essential\index{essential} and properly embedded. The manifold $M{\backslash \backslash} S$ is homeomorphic to $S\times I$. Thus in this case, all of $M{\backslash \backslash} S$ is an $I$-bundle.\index{$I$-bundle} Thus $\operatorname{guts}(M{\backslash \backslash} S)$ is empty.
\end{example}
By contrast, later in this section we will find examples of alternating knot complements $M$ and surfaces $S$ such that $\operatorname{guts}(M{\backslash \backslash} S) = M{\backslash \backslash} S$, that is the characteristic submanifold is empty.
\begin{lemma}\label{Lem:GutsHyperbolic}
Let $M$, $S$, and $\mathcal{T}$ be as in \reflem{AnnulusDecomposition}. Then the manifold $\operatorname{guts}(M{\backslash \backslash} S)$ admits a hyperbolic metric with totally geodesic boundary.\index{guts}
\end{lemma}
\begin{proof}
If we double $\operatorname{guts}(M{\backslash \backslash} S)$ along the portion of the boundary on $\widetilde S$, i.e. take two copies of $\operatorname{guts}(M{\backslash \backslash} S)$ and glue by the identity along their common boundary on $\widetilde{S}$, we obtain the complement of the characteristic submanifold of the manifold $D(M{\backslash \backslash} S)$. This admits a finite volume hyperbolic metric. It also admits an involution fixing the surface $\operatorname{guts}(M{\backslash \backslash} S)\cap \widetilde{S}$ pointwise. It follows from the proof of the Mostow--Prasad rigidity theorem that an embedded surface fixed pointwise by an involution of a finite volume hyperbolic 3-manifold must be totally geodesic. Hence cutting along it yields a hyperbolic structure on $\operatorname{guts}(M{\backslash \backslash} S)$ with totally geodesic boundary.
\end{proof}
The following theorem, from \cite{ast:volumes}, gives us a tool to bound volumes from below using the guts of surfaces.
\begin{theorem}[Agol, Storm, and Thurston]\label{Thm:astguts}
Let $S$ be a $\pi_1$-essential\index{$\pi_1$-essential} surface properly embedded in an orientable hyperbolic $3$-manifold $M$. Then
\[\operatorname{vol}(M)\geq-{v_{\rm{oct}}}\chi(\operatorname{guts}(M{\backslash \backslash} S)),\]
where ${v_{\rm{oct}}} =3.66\dots$ is the volume of a regular ideal octahedron.\index{ideal octahedron, regular}\index{regular ideal octahedron}\index{regular ideal octahedron!volume}\index{guts}
\end{theorem}
\begin{proof}[Proof sketch.]
The essential surface $S$ can be isotoped to be minimal in $M$. Cut along the minimal surface isotopic to $S$, and denote $M{\backslash \backslash} S$ by $N$. Note that $N$ inherits from $M$ a Riemannian metric for which the mean curvature on its boundary $\widetilde{S}$ is $0$. Denote this metric by $g$. Then $\operatorname{vol}(M)= \operatorname{vol}(N,g)$.
Let $D(N)$ denote the double of $N$, i.e.\ the manifold obtained by taking two copies of $N$ and identifying them along their common boundary. By \reflem{GutsHyperbolic}, $D(\operatorname{guts}(N)) \subset D(N)$ inherits a complete hyperbolic metric with $\widetilde{S}\cap \operatorname{guts}(N)$ a totally geodesic surface embedded in $D(\operatorname{guts}(N))$; denote this metric by $h$. On the other hand, $D(N)$ inherits a singular Riemannian metric with singularities on $S$ obtained from the metric $g$; denote this metric by $g$ as well. Agol, Storm, and Thurston show that this singular metric $g$ can be approximated by smooth Riemannian metrics $\{g_i\}$ with restricted curvature properties. The volumes $\operatorname{vol}(D(N),g_i)$ under the smooth metrics $g_i$ converge to the volume $\operatorname{vol}(D(N),g)$ under its singular metric, with $\operatorname{vol}(D(N),g) = 2\operatorname{vol}(N,g) = 2\operatorname{vol}(M)$.
First suppose $M$ is closed. Use Ricci flow with surgery to evolve the metric, as in Perelman's proof of the geometrization theorem \cite{perelman02,perelman03}. The evolution will give $D(\operatorname{guts}(N))$ the hyperbolic metric $h$. Perelman's techniques imply a monotonicity result, in particular that
\[ \operatorname{vol}(D(N),g) \geq \operatorname{vol}(D(\operatorname{guts}(N)), h), \]
with equality if and only if $S$ is totally geodesic in $(D(N),g)$.
Then
\[ \operatorname{vol}(M) = \frac{1}{2}\operatorname{vol}(D(N),g) \geq \operatorname{vol}(\operatorname{guts}(M{\backslash \backslash} S)), \]
with equality if and only if $S$ is totally geodesic in $M$.
If $M$ has torus boundary, then similar techniques can be used to approximate the metric on compact sets, giving the same result.
Finally, $(\operatorname{guts}(M{\backslash \backslash} S), h)$ is a hyperbolic manifold with totally geodesic boundary. Thus by Miyamoto's theorem, \refthm{Miyamoto},
\[ \operatorname{vol}(\operatorname{guts}(M{\backslash \backslash} S)) \geq -{v_{\rm{oct}}}\chi(\operatorname{guts}(M{\backslash \backslash} S)).\qedhere \]
\end{proof}
We will be considering the guts of the checkerboard surfaces in an alternating link. By \refthm{CheckerboardEssential}, the checkerboard surfaces are essential,\index{essential} and thus satisfy the hypotheses of \refthm{astguts}.
\begin{lemma}\label{Lem:BigonCharacteristic}
Let $S$ be one of the checkerboard surfaces of a link with a twist-reduced alternating diagram\index{alternating diagram}\index{alternating knot or link!checkerboard surface}\index{checkerboard surface} $K$, whose hyperbolic complement we denote by $S^3-K=M$. Without loss of generality say $S$ is shaded, with $W$ the other checkerboard surface, colored white. Suppose in the polyhedral decomposition of $M$ that there are white bigon\index{bigon} faces. Then a neighborhood of each white bigon\index{bigon} face is part of an $I$-bundle\index{$I$-bundle} in $M{\backslash \backslash} S$, and thus any bigon face of $W$ does not belong to $\operatorname{guts}(M{\backslash \backslash} S)$. \index{guts}
\end{lemma}
\begin{proof}
The $I$-bundle components\index{$I$-bundle} of $M{\backslash \backslash} S$ have the form $Y\times I$, with $Y\times \{0\}$ and $Y\times\{1\}$ subsets of $\widetilde{S} \subset \partial(M{\backslash \backslash} S)$. Recall that $Y\times\{0\}$ and $Y\times\{1\}$ form the \emph{horizontal boundary}\index{horizontal boundary}\index{$I$-bundle!horizontal boundary} of the $I$-bundle. The subset $\partial Y\times I$ forms the \emph{vertical boundary}.\index{vertical boundary}\index{$I$-bundle!vertical boundary}
A white bigon\index{bigon} region of the polyhedral decomposition of $M$ is bounded by two edges and two vertices; recall that edges are crossing arcs,\index{crossing arc} and lie in the intersection $W\cap S$, and the vertices are ideal, forming portions of the parabolic locus.\index{parabolic locus} Thus a regular neighborhood of a white bigon face can be visualized as a thickened square, with two sides of its boundary on $S$ and two sides of its boundary on the parabolic locus. Note such a thickened square is a portion of an $I$-bundle, with horizontal boundary a neighborhood in $S$ of the two crossing arcs of $W\cap S$ that form the boundary of the bigon, and vertical boundary on the parabolic locus. We can complete this thickened square to an $I$-bundle over a subsurface of $S$ with boundary by attaching a neighborhood of the annulus that forms the parabolic locus. This annulus has boundary components on $S$, and is parallel to a link component. Its neighborhood can be given the structure of an $I$-bundle with fibers $I$ parallel to those of the bigon.\index{bigon} Thus the union of the neighborhood of the bigon and the neighborhood of this annulus (or possibly two annuli in the case of a link) forms an $I$-bundle\index{$I$-bundle} in $M{\backslash \backslash} S$.
\end{proof}
\begin{corollary}\label{Cor:RemoveCrossingsTwist}
Let $K$ be a twist-reduced diagram of a hyperbolic alternating link.\index{alternating knot or link!checkerboard surface}\index{checkerboard surface} Let $K'$ be the diagram obtained from $K$ by removing all crossings but one in each twist region of $K$. Let $S$ denote a checkerboard surface of $K$, and let $S'$ denote the corresponding checkerboard surface of $K'$. Then\index{guts}
\[ \operatorname{guts}((S^3-K){\backslash \backslash} S) = \operatorname{guts}((S^3-K'){\backslash \backslash} S'). \qed\]
\end{corollary}
\subsection{Essential annuli}
Our method of proving the volume bound on alternating links, \refthm{VolAlt}, is to apply the volume bound via guts, \refthm{astguts}, to the modified diagram $K'$ of $K$, as in \refcor{RemoveCrossingsTwist}. We will determine the Euler characteristic\index{Euler characteristic} of the guts of checkerboard surfaces of $K'$. Recall that to identify guts,\index{guts} we must first cut along essential annuli. Thus the next step in the proof is to find essential\index{essential} annuli in the cut manifold.
Suppose there is an essential annulus. Then the proof most easily breaks into two cases, depending on whether the annulus is \emph{parabolically compressible} or not, in the sense of the following definition.
\begin{definition}\label{Def:ParabolicCompress}
Let $M$ be a hyperbolic 3-manifold with a properly embedded essential\index{essential} surface $S$. Let $P$ denote the parabolic locus of $M{\backslash \backslash} S$, as in \refdef{Cut}.\index{parabolic locus} An annulus $A$, properly embedded in $M{\backslash \backslash} S$ with $\partial A\subset \widetilde{S}$, is \emph{parabolically compressible}\index{parabolically compressible}\index{compressible!parabolically compressible} if there exists a disk $D$ with interior disjoint from $A$, with $\partial D$ meeting $A$ in an essential arc $\alpha$ on $A$, and with $\beta = \partial D - \alpha$ lying on $\widetilde{S} \cup P$, with $\beta$ meeting $P$ transversely exactly once.
We may surger along such a disk; this is called a \emph{parabolic compression},\index{parabolic compression} and it turns the annulus $A$ into a disk meeting $P$ transversely exactly twice, with boundary otherwise on $\widetilde{S}$. See \reffig{ParabolicCompression}.
\end{definition}
\begin{figure}
\import{Figures/Ch13_Volume/}{F13-04-ParCom.eps_tex}
\caption{A portion of a parabolically compressible\index{parabolically compressible} annulus on the left, and a parabolic compression\index{parabolic compression} on the right.}
\label{Fig:ParabolicCompression}
\end{figure}
\begin{definition}\label{Def:EPD}
Let $D$ be a disk properly embedded in $M{\backslash \backslash} S$ with boundary consisting of two arcs on $\widetilde{S}$ and two arcs on the parabolic locus\index{parabolic locus} $P$. We say $D$ is an \emph{essential product disk (EPD)}.\index{essential product disk (EPD)}\index{EPD}
\end{definition}
A proof nearly identical to that of \reflem{BigonCharacteristic} shows that EPDs belong to the $I$-bundle of $M{\backslash \backslash} S$ (\refex{EPD}).
\begin{lemma}\label{Lem:NoParabComprAnnulus}
Let $S$ be the shaded checkerboard surface of a link with a twist-reduced alternating diagram\index{alternating diagram!checkerboard surface}\index{checkerboard surface} $K$, whose hyperbolic complement we denote by $S^3-K=M$.
Suppose there are no white bigon\index{bigon} regions, and suppose $A$ is an essential annulus properly embedded in $M{\backslash \backslash} S$, disjoint from the parabolic locus\index{parabolic locus} and not parallel to the parabolic locus, with $\partial A\subset\widetilde{S}$. Then $A$ is not parabolically compressible.\index{parabolically compressible}
\end{lemma}
The proof of \reflem{NoParabComprAnnulus} is completed by considering how a parabolically compressible annulus intersects the polyhedra in the decomposition of an alternating link, similar to several proofs in \refchap{Alternating}. The following lemma will be useful.
\begin{lemma}\label{Lem:MarcLemma7}
Let $K$ be a link with a prime, twist-reduced alternating diagram,\index{alternating diagram} with corresponding ideal polyhedral decomposition. Let $D_1$ and $D_2$ be normal disks\index{normal} in the polyhedra such that $\partial D_1$ and $\partial D_2$ meet exactly four interior edges. Isotope $\partial D_1$ and $\partial D_2$ to minimize intersections $\partial D_1\cap\partial D_2$ in faces. If $\partial D_1$ intersects $\partial D_2$, then $\partial D_1$ intersects $\partial D_2$ exactly twice, in two faces of the same color.
\end{lemma}
\begin{proof}
The boundaries $\partial D_1$ and $\partial D_2$ are quadrilaterals, with sides of $\partial D_i$ between intersections with interior edges. Note that $\partial D_1$ can intersect $\partial D_2$ at most once in any of its sides by the requirement that the number of intersections be minimal (else isotope through a face).
Thus there are at most four intersections of $\partial D_1$ and $\partial D_2$. If $\partial D_1$ meets $\partial D_2$ four times, then the two quads run through the same faces, both bounding disks, and can be isotoped off each other using the fact that the diagram is prime. Since the quads intersect an even number of times, there are either zero or two intersections. If zero intersections, we are done.
So suppose there are exactly two intersections, and suppose for contradiction that they lie in faces of opposite colors. Then an arc $\alpha_1\subset \partial D_1$ has both endpoints on $\partial D_1\cap \partial D_2$ and meets only one intersection of $\partial D_1$ with an interior edge of the polyhedron. Similarly, an arc $\alpha_2\subset\partial D_2$ has both endpoints on $\partial D_1\cap \partial D_2$ and meets only one intersection of $\partial D_2$ with an interior edge of the polyhedral decomposition. Then $\alpha_1\cup \alpha_2$ is a closed curve on the boundary of the polyhedron meeting exactly two interior edges. This gives a curve in the diagram of $K$ meeting $K$ exactly twice. Because the diagram is prime, there must be no crossings on one side of the curve. In the polyhedron, this means the arcs $\alpha_1$ and $\alpha_2$ are parallel, and we can isotope them to remove the intersections of $D_1$ and $D_2$, contradicting minimality. Thus the two intersection points lie in faces of the same color.
\end{proof}
\begin{proof}[Proof of \reflem{NoParabComprAnnulus}]
Suppose by way of contradiction that $A$ is an essential, parabolically compressible annulus properly embedded in $M{\backslash \backslash} S$ with $\partial A\subset\widetilde{S}$. Perform a parabolic compression\index{parabolic compression} to obtain an EPD\index{essential product disk (EPD)}\index{EPD} $E$. Put $E$ into normal form\index{normal form}\index{normal} with respect to the polyhedral decomposition of $M{\backslash \backslash} S$.
Suppose $E$ intersects a white face $V$ of $W$. Consider the arcs $E\cap V$; such an arc has both endpoints on $\widetilde{S}$. If one cuts off a disk on $E$ that does not meet the parabolic locus,\index{parabolic locus} then there will be an innermost such disk. Its boundary consists of an arc in $W$ and an arc in $S$. We may sketch the boundary of the disk on the diagram of $K$, since the graph on the polyhedra is identical to the projection graph of the diagram (see \refthm{PolyAltKnot}). Thus this innermost disk has boundary intersecting the link diagram exactly twice. Because the diagram of $K$ is prime, this disk bounds a region containing no crossings. But then the original innermost disk in the polyhedron is not normal:\index{normal} its boundary runs from a single edge back to that edge. This is a contradiction.
So $E\cap W$ consists of arcs running from $\widetilde{S}$ to $\widetilde{S}$, cutting off disks on either side meeting the parabolic locus.\index{parabolic locus} Thus the white surface cuts $E$ into normal quadrilaterals\index{normal} $\{E_1, \dots, E_n\}$, with $n\geq 2$ by the assumption that $E$ intersects a white face. At one end of $E$, the quadrilateral $E_1$ has one side on $W$, two sides on $\widetilde{S}$, and the final side on the parabolic locus (a boundary face). Isotope this side slightly off the boundary face into the adjacent white face so that $E_1$ remains normal. Do the same for $E_n$. Then all quadrilaterals $E_1, \dots, E_n$ have two sides on $\widetilde{S}$ and two sides on $W$.
Superimpose $E_1$ and $E_2$ onto the boundary of one of the (identical) polyhedra. An edge of $E_1$ in a white face $V$ is glued to an edge of $E_2$ in the same white face, but by a rotation in the face $V$. Thus when we superimpose, $\partial E_2\cap V$ is obtained from $\partial E_1\cap V$ by a rotation in $V$.
If $E_1\cap V$ is not parallel to a single boundary edge, then $\partial E_1\cap V$ must intersect $\partial E_2\cap V$; see \reffig{NoEPD}. Then \reflem{MarcLemma7} implies that $\partial E_1$ and $\partial E_2$ also intersect in another white face. But $\partial E_1$ is parallel to a single boundary edge in its second white face, so $\partial E_2$ cannot intersect it. This is a contradiction.
\begin{figure}
\import{Figures/Ch13_Volume/}{F13-05-NoEPD.eps_tex}
\caption{Left: $\partial E_1$ is not parallel to a boundary edge in $U$, hence $\partial E_1$ meets $\partial E_2$ in $U$. Right: $\partial E_1$ is parallel to a boundary edge.}
\label{Fig:NoEPD}
\end{figure}
So $E_1\cap V$ is parallel to a single boundary edge (and hence so is $E_2\cap V$). But then $E_1$ meets both white faces in arcs parallel to boundary edges. Isotope $\partial E_1$ slightly into these boundary faces and transfer the curve to the diagram of the link. This gives a closed curve in the link diagram meeting the projection graph of $K$ in exactly two crossings, running to opposite sides of the crossings. If the crossings are distinct, then because the diagram is twist-reduced, the two crossings must bound white bigons\index{bigon} between them, contradicting the fact that there are no white bigon regions in the diagram. So the crossings are not distinct. Returning to the polyhedron, $\partial E_1$ encircles a single ideal vertex of the polyhedron. Repeating the argument with $E_2$ and $E_3$, and so on, we find that each $\partial E_i$ encircles a single ideal vertex. Gluing these together, the original annulus $A$ is parallel to the parabolic locus.\index{parabolic locus} This contradicts our assumption on $A$.
So if there is an EPD\index{essential product disk (EPD)}\index{EPD} $E$, it cannot meet $W$. Then it lies completely in a single polyhedron of the decomposition. Its boundary runs through two shaded faces and two boundary faces. Transfer to the link diagram; its boundary defines a curve meeting the link diagram in exactly two crossings, running to opposite sides of the crossings. Because the diagram is twist-reduced, the curve $\partial E$ encloses a string of white bigons.\index{bigon} But there are no white bigons, so $\partial E$ must run in and out of the same boundary face. This contradicts the fact that it was normal.\index{normal}
\end{proof}
\begin{lemma}\label{Lem:ParabIncomp}
Let $K$ be a hyperbolic alternating link with a prime, twist-reduced diagram and corresponding polyhedral decomposition. Let $M$ denote $S^3-K$ and let $S$ denote the shaded checkerboard surface.\index{checkerboard surface} Suppose that there are no white bigons\index{bigon} in the polyhedra. Suppose $A$ is an essential\index{essential} annulus embedded in $M{\backslash \backslash} S$, disjoint from the parabolic locus\index{parabolic locus} and not parallel to it, with $\partial A\subset\widetilde{S}$. Then $A$ bounds a Seifert fibered solid torus.
\end{lemma}
\begin{proof}
By \reflem{NoParabComprAnnulus} we may assume that $A$ is not parabolically compressible.\index{parabolically compressible} Put it into normal form\index{normal form} with respect to the polyhedral decomposition. Because the Euler characteristic\index{Euler characteristic} of an annulus is $0$, each normal disk\index{normal} making up $A$ must have combinatorial area\index{combinatorial area} $0$ by the Gauss--Bonnet lemma, \reflem{GaussBonnet}. Because $A$ does not meet the parabolic locus,\index{parabolic locus} each such disk must meet exactly four interior edges; see \refdef{CombinatorialArea}. Thus the white surface $W$ cuts $A$ into squares $E_1, \dots, E_n$. Note that if a component of intersection of $E_i\cap W$ is parallel to a boundary edge, then the disk of $W$ bounded by $E_i\cap W$, the boundary edge, and portions of edges of $\widetilde{S}\cap W$ defines a parabolic compression\index{parabolic compression} disk for $A$, contradicting the fact that $A$ cannot be parabolically compressible. So no component of $E_i\cap W$ is parallel to a boundary edge.
Again superimpose all squares $E_1, \dots, E_n$ on one of the polyhedra. The squares are glued in white faces, and cut off more than a single boundary edge in each white face, so $\partial E_i$ must intersect $\partial E_{i+1}$ in a white face; see again \reffig{NoEPD}. Then \reflem{MarcLemma7} implies $\partial E_i$ intersects $\partial E_{i+1}$ in both of the white faces it meets. Similarly, $\partial E_i$ intersects $\partial E_{i-1}$ in both its white faces. Because $E_{i-1}$ and $E_{i+1}$ lie in the same polyhedron, they are disjoint (or $E_{i-1} = E_{i+1}$, but this makes $A$ a M\"obius band rather than an annulus; see \refex{Mobius}).
This is possible only if $E_{i-1}$, $E_i$, and $E_{i+1}$ line up as in \reffig{FusedUnits} left, bounding portions of the polyhedron as shown. These transfer to the link diagram to bound tangles; Lackenby calls such tangles \emph{units} in \cite{lackenby:alt-volume}. Then all $E_j$ form a cycle of such tangles, as in \reffig{FusedUnits} right.
\begin{figure}
\import{Figures/Ch13_Volume/}{F13-06-FusUnt.eps_tex}
\caption{Left: $E_{i-1}$, $E_i$, and $E_{i+1}$ must intersect as shown. Right: cycle of three such tangles.}
\label{Fig:FusedUnits}
\end{figure}
Observe from \reffig{FusedUnits} that each disk $E_i$ encircles two units, with a band of shaded surface between $E_i$ and $E_{i+2}$ in the same polyhedron.
Then in each polyhedron, these disks of $A$ bound a solid cylinder (a ball) with top and bottom on white faces --- one the central region of \reffig{FusedUnits}, right, and one the unbounded region --- and sides along disks $E_j$ and shaded faces. The two solid cylinders glue across white faces with a twist, to form a solid torus. As each cylinder can be written as $D^2\times I$, with $D^2\times\{0\}$ and $D^2\times\{1\}$ on white faces, the gluing by a twist in the white face gives the solid torus a Seifert fibering. Thus $A$ bounds a Seifert fibered solid torus.
\end{proof}
\begin{theorem}[Lackenby, \cite{lackenby:alt-volume}]\label{Thm:ChiGuts}
Let $K$ be a link with a prime, twist-reduced alternating diagram,\index{alternating diagram}\index{alternating knot or link!checkerboard surface}\index{checkerboard surface} and corresponding polyhedral decomposition. Let $M$ denote the complement of $K$, let $S$ and $W$ denote the checkerboard surfaces, and let $r_S$ and $r_W$ denote the number of non-bigon\index{bigon} regions of $S$ and $W$ respectively. Then\index{guts}
\[\chi(\operatorname{guts}(M{\backslash \backslash} S))=2-r_W,\quad \chi(\operatorname{guts}(M{\backslash \backslash} W))=2-r_S.\]
\end{theorem}
\begin{proof}
Suppose first that the diagram has no white bigon\index{bigon} regions. Then \reflem{NoParabComprAnnulus} implies there is no embedded essential annulus that is parabolically compressible,\index{parabolically compressible} and \reflem{ParabIncomp} implies any parabolically incompressible annulus bounds a Seifert fibered solid torus. Thus
\[ \chi(\operatorname{guts}(M{\backslash \backslash} S)) = \chi(M{\backslash \backslash} S).\]
Since $M{\backslash \backslash} S$ is obtained by gluing two balls along the $r_W$ white faces, each of which is a disk of Euler characteristic $1$, we have $\chi(M{\backslash \backslash} S) = 1 + 1 - r_W = 2-r_W$.
If the diagram contains white bigon regions, then replace each string of white bigons\index{bigon} in the diagram by a single crossing, obtaining a new link $K'$. Let $M'$ denote $S^3-K'$ and let $S'$ be the checkerboard surface coming from the same shaded regions as $S$ in $K$. By \refcor{RemoveCrossingsTwist}, $\operatorname{guts}(M{\backslash \backslash} S) = \operatorname{guts}(M'{\backslash \backslash} S')$. Hence $\chi(\operatorname{guts}(M{\backslash \backslash} S)) = \chi(\operatorname{guts}(M'{\backslash \backslash} S')) = 2-r_W$.
An identical argument applies to $M{\backslash \backslash} W$, replacing $S$ with $W$.
\end{proof}
\Refthm{VolAlt} is now almost an immediate consequence of \refthm{ChiGuts} and \refthm{astguts}.
\begin{proof}[Proof of \refthm{VolAlt}]
Let $\Gamma$ be the $4$-regular diagram graph associated to $K$ by replacing each twist-region with a vertex.
Let $|v(\Gamma)|$ denote the number of vertices of $\Gamma$, and $|f(\Gamma)|$ the number of regions. Because $\Gamma$ is 4-valent, the number of edges is $2|v(\Gamma)|$, so
\[\chi(S^2) = 2 = -|v(\Gamma)|+|f(\Gamma)| = -{\operatorname{tw}}(K)+r_S+r_W.\]
Then applying Theorems~\ref{Thm:astguts} and~\ref{Thm:ChiGuts} gives
\begin{align*}
\operatorname{vol}(S^3-K) \ &\geq \
-\frac{1}{2}{v_{\rm{oct}}}\chi(\operatorname{guts}(M{\backslash \backslash} S))-\frac{1}{2}{v_{\rm{oct}}}\chi(\operatorname{guts}(M{\backslash \backslash} W)) \\
\ &= \ -\frac{1}{2}{v_{\rm{oct}}}(2-r_S-r_W) \\
\ &= \ \frac{1}{2}{v_{\rm{oct}}}({\operatorname{tw}}(K)-2). \qedhere
\end{align*}
\end{proof}
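As an aside, the constants and the lower bound above are easy to evaluate numerically. The following Python sketch (an illustration only, assuming the \texttt{scipy} library is available; the function name \texttt{lobachevsky} is ours) computes $v_{\rm tet}$ and $v_{\rm{oct}}$ via the Lobachevsky function and prints the bound $\frac{1}{2}{v_{\rm{oct}}}({\operatorname{tw}}(K)-2)$ for a few twist numbers.
\begin{verbatim}
# Numerical illustration only.  Requires scipy; the function name
# "lobachevsky" is ours.
from math import pi, log, sin
from scipy.integrate import quad

def lobachevsky(theta):
    # Lobachevsky function: Lambda(theta) = -int_0^theta log|2 sin t| dt
    value, _ = quad(lambda t: -log(abs(2.0 * sin(t))), 0.0, theta)
    return value

v_tet = 3 * lobachevsky(pi / 3)   # regular ideal tetrahedron, ~1.0149
v_oct = 8 * lobachevsky(pi / 4)   # regular ideal octahedron,  ~3.6639

for tw in (2, 3, 5, 10):
    bound = 0.5 * v_oct * (tw - 2)
    print("tw =", tw, " lower bound on volume =", round(bound, 4))
\end{verbatim}
For the standard diagram of the figure-8 knot, which has two twist regions, the bound degenerates to $0$; the estimate only becomes informative for diagrams with at least three twist regions.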
\section{Exercises}
\begin{exercise}\label{Ex:AsympSharpFullyAug}
Show the upper bound of \refthm{VolUpperTwistRegions} is asymptotically sharp, in two steps.\index{asymptotically sharp volume bound} First, show there is a sequence of fully augmented links\index{fully augmented link} $L_i$ with $t(L_i)$ crossing circles such that $\operatorname{vol}(S^3-L_i)/t(L_i)$ approaches $10{v_{\rm tet}}$ as $i$ goes to infinity.\index{fully augmented link!volume bound} (Hint: take white faces to be regular hexagons.) Then show that there is a sequence of links $K_i$ with twist number $t(K_i)$ such that $\operatorname{vol}(S^3-K_i)/t(K_i)$ approaches $10{v_{\rm tet}}$ as $i$ goes to infinity.
\end{exercise}
\begin{exercise}
Use \refthm{MaxVolTet} to give an upper bound on the volume of a 2-bridge knot with continued fraction expansion $[0, a_{n-1}, \dots, a_1]$.
Find an example of a 2-bridge knot such that your upper bound becomes $2{v_{\rm tet}}({\operatorname{tw}}(K)-1)$. Use this to show that the lower bound of \refthm{Vol2Bridge} is asymptotically sharp:\index{asymptotically sharp volume bound} The ratio of upper and lower bounds goes to $1$ as ${\operatorname{tw}}(K)\to\infty$.
\end{exercise}
\begin{exercise}
The volume of a regular ideal octahedron\index{ideal octahedron, regular}\index{regular ideal octahedron}\index{regular ideal octahedron!volume} is denoted by ${v_{\rm{oct}}}$. In \refex{FullyAug2Bridge}, it was shown that a fully augmented 2-bridge link decomposes into regular ideal octahedra. Use this to prove that the volume of a 2-bridge link with twist number ${\operatorname{tw}}(K)$ is at most $2{v_{\rm{oct}}}({\operatorname{tw}}(K)-1)$.
\end{exercise}
\begin{exercise}
Suppose $K$ is a link whose complement admits a rotational symmetry of order $p$ about an axis. That is, suppose there is a curve $\gamma$ in $S^3$ such that a rotation of order $p$ about $\gamma$ preserves $K$. Show that if $p\geq 7$,
\[ \operatorname{vol}(S^3-K) \geq \left( 1 - \frac{4\pi^2}{49} \right)^{3/2}\operatorname{vol}(S^3-(K\cup\gamma)). \]
\end{exercise}
\begin{exercise}\label{Ex:AnnularJSJ}
Prove \reflem{AnnulusDecomposition}.
\end{exercise}
\begin{exercise}\label{Ex:EPD}(EPDs lie in the characteristic submanifold)
Prove that an essential product disk\index{essential product disk (EPD)}\index{EPD} in $M{\backslash \backslash} S$ is a subset of the $I$-bundle\index{$I$-bundle} of $M{\backslash \backslash} S$, thus cannot be part of the guts.\index{guts}\index{characteristic submanifold}
\end{exercise}
\begin{exercise}\label{Ex:Mobius}
Let $K$ be a hyperbolic alternating knot\index{alternating knot or link} with a prime twist-reduced diagram, and corresponding polyhedral decomposition. Let $S$ denote the shaded checkerboard surface, and suppose that $A$ is an essential\index{essential} surface with boundary on $\widetilde{S}$ such that $A$ is the union of exactly two normal squares\index{normal} $E_1$ and $E_2$, and such that the sides of $\partial E_1$ and $\partial E_2$ in white faces are not parallel to boundary edges of the polyhedra. Then prove that $A$ is a M\"obius band.
\end{exercise}
\begin{exercise}
We obtain lower bounds on volumes of highly twisted\index{highly twisted} 2-bridge knots with at least seven crossings per twist region from three theorems in this chapter, namely \refthm{Vol2Bridge}, \refthm{VolBoundHighlyTwisted}, and \refthm{VolAlt}. Compare the bounds coming from each theorem. Which gives the best volume estimate?
\end{exercise}
\begin{exercise}
How sharp are \refthm{VolBoundHighlyTwisted} and \refthm{VolAlt}? By tracing through the proofs, find conditions that must be satisfied for the lower bound on volume to be sharp.
\end{exercise}
\chapter{Ford Domains and Canonical Polyhedra}\label{Chap:Canonical}
\blfootnote{Jessica S. Purcell, Hyperbolic Knot Theory}
We have noted that there is (currently) no guarantee that every finite volume cusped hyperbolic 3-manifold admits a decomposition into positively oriented ideal tetrahedra. However, we can guarantee that every cusped hyperbolic 3-manifold admits a decomposition into convex ideal polyhedra. This is the canonical decomposition, first studied by Epstein and Penner \cite{EpsteinPenner}, which we describe in this chapter.
The canonical decomposition is dual to another decomposition, the Ford domain (sometimes called the Ford--Voronoi domain)\index{Ford--Voronoi domain}, which we will describe first. Our exposition is similar to that of \cite{LackenbyPurcell}, also \cite{aswy}, and \cite{Bonahon:LowDimGeom}.
Before we begin, we give a few words motivating the canonical decomposition. In the case of a hyperbolic knot, if two knot complements have the same canonical decomposition, then they must necessarily be isometric, and hence equivalent by \refthm{GordonLuecke}, the Gordon--Luecke theorem. This result follows from \refthm{CanonicalUnique}, below. Thus for hyperbolic knots, the canonical decomposition is a \emph{complete} knot invariant. Unfortunately, it is not easy to compute in general. However, in this chapter we will explain how it is defined, and give a few examples.
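As an aside, the software SnapPy \cite{SnapPy} computes a triangulated version of the canonical decomposition, and this gives a practical way to compare cusped manifolds in examples. The sketch below is an illustration only, assuming a recent version of SnapPy; the method names, and the use of the isomorphism signature of the canonical retriangulation as a complete invariant, should be checked against the SnapPy documentation.
\begin{verbatim}
# Sketch only; assumes a recent version of SnapPy.
import snappy

M = snappy.Manifold('4_1')   # figure-8 knot complement
N = snappy.Manifold('5_2')   # another knot complement

# SnapPy encodes the canonical decomposition by a canonical
# retriangulation; comparing isomorphism signatures compares
# the decompositions.
sig_M = M.canonical_retriangulation().triangulation_isosig()
sig_N = N.canonical_retriangulation().triangulation_isosig()
print(sig_M == sig_N)        # False: different canonical decompositions
print(M.is_isometric_to(N))  # False: the complements are not isometric
\end{verbatim}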
Both Ford domains and canonical polyhedra arise from natural geometric ideas. However, they are somewhat difficult to describe in words because of various choices that must be made. If a hyperbolic 3-manifold has more than one cusp, they depend on a choice of horoball neighborhood of the cusp. For this reason, we begin this chapter by describing choices of horoball neighborhoods and the sets equidistant from horoballs, in \refsec{Horoballs}.
Moreover, the definition of a Ford domain differs in the literature, although all definitions are closely related. Perhaps the simplest definition is the one given by Gu{\'e}ritaud and Schleimer~\cite{GueritaudSchleimer}: they define a Ford domain (or Ford--Voronoi domain)\index{Ford domain}\index{Ford--Voronoi domain} to be the set $S$ of points in $M$ that have a unique shortest path to the fixed horoball neighborhood. The drawback to this definition is that the resulting set $S$ is not a fundamental domain\index{fundamental domain} for the manifold in the sense that not every point of $M$ has a preimage in $S$. Additionally, the components of $S$ are not simply connected, because each component admits a deformation retraction to a horoball neighborhood, which is homeomorphic to the thickened torus. We will give a closely related definition of the Ford domain in \refsec{FordDomain} that overcomes these difficulties, but at the cost of being slightly more complicated and dependent upon an additional choice (of fundamental domain for the horoball neighborhood). Still, throughout the discussion it is useful to keep Gu{\'e}ritaud and Schleimer's definition in mind.
However, all the different definitions of Ford domain in the literature still have the same geometric dual: the canonical polyhedral decomposition.\index{canonical decomposition} This convex cell decomposition does depend on choice of horoball neighborhood, but it is independent of all other choices involved in defining the Ford domain. We describe the canonical polyhedral decomposition in \refsec{Canonical}.
\section{Horoballs and isometric spheres}\label{Sec:Horoballs}
Throughout, our setup is the following. We let $M$ be an orientable 3-manifold admitting a complete hyperbolic structure with at least one cusp. The universal cover of $M$ is then ${\mathbb{H}}^3$. We may apply an isometry so that the point at infinity $\infty \in \partial_\infty {\mathbb{H}}^3$ maps to a cusp of $M$ under the covering map. Then $M \cong {\mathbb{H}}^3/\Gamma$, where $\Gamma \leq \operatorname{PSL}(2,{\mathbb{C}})$ is a discrete group of isometries isomorphic to $\pi_1(M)$ via the holonomy\index{holonomy} representation $\rho\from \pi_1(M)\to \operatorname{PSL}(2,{\mathbb{C}})$.
Because the point at infinity projects to a cusp of $M$, there will be a parabolic\index{parabolic} subgroup $\Gamma_\infty$ of $\Gamma$ fixing the point at infinity. If the cusp of $M$ is a rank-1 cusp, then $\Gamma_\infty$ will be isomorphic to ${\mathbb{Z}}$. If it is a rank-2 cusp, $\Gamma_\infty$ will be isomorphic to ${\mathbb{Z}}\times{\mathbb{Z}}$; see \refdef{RankOneRankTwoCusp}. Only a rank-2 cusp can occur in a finite volume hyperbolic manifold such as a knot complement, and so this is the case we will consider in this chapter. However, much of the discussion here generalizes to the infinite volume case.
\begin{proposition}\label{Prop:EmbeddedHoroballNbhd}
A complete hyperbolic 3-manifold contains an embedded horoball neighborhood. That is, there is an embedded neighborhood $N$ of the cusps of $M$ such that $N$ lifts to a disjoint collection of embedded horoballs in ${\mathbb{H}}^3$.
\end{proposition}
\begin{proof}
This result follows immediately from the structure of the thin part, \refthm{ThinPart}. By that theorem, for any $0<\epsilon\leq\epsilon_3$, where $\epsilon_3$ is a universal constant, the $\epsilon$-thin part of $M$ consists of tubes around short geodesics and rank-1 and rank-2 cusps. Ignore the tubes; the cusps are embedded. Their lift to ${\mathbb{H}}^3$ consists of disjoint embedded horoballs as required.
\end{proof}
\begin{lemma}\label{Lem:CountableHoroballs}
Suppose $N$ is an embedded horoball neighborhood of a cusp of $M$ that lifts to the horoball about $\infty\in\partial_\infty{\mathbb{H}}^3$. Then all the lifts of $N$ to ${\mathbb{H}}^3$ give countably many horoballs in ${\mathbb{H}}^3$, with centers at the points
\[ \{ g(\infty) \mid g \in \Gamma\}. \]
\end{lemma}
\begin{proof}
Let $H_\infty$ denote the horoball about $\infty$ that projects to $N$. Note that for all $g\in\Gamma$, the horoball $g(H_\infty)$ must also project to $N$, and its center is $g(\infty)$. On the other hand, if $H$ is a horoball in ${\mathbb{H}}^3$ that projects to $N$, then there must exist $h\in\Gamma$ such that $h(H) = H_\infty$, and so $H$ has center $h^{-1}(\infty)$. Thus the set $\{g(H_\infty) \mid g\in \Gamma\}$ is exactly the set of horoballs projecting to $N$. Because $\Gamma$ is a discrete group, it has countably many elements (\refex{CountableDiscrete}). Thus the lifts of $N$ to ${\mathbb{H}}^3$ form a countable collection of horoballs.
\end{proof}
\begin{corollary}\label{Cor:CountableCuspNbhd}
Let $M$ be finite volume. Any embedded horoball neighborhood about all cusps of $M$ lifts to countably many disjoint horoballs in ${\mathbb{H}}^3$.
\end{corollary}
\begin{proof}
\Reflem{CountableHoroballs} shows that an embedded horoball neighborhood of one cusp of $M$ lifts to countably many disjoint horoballs. \Refprop{EmbeddedHoroballNbhd} implies that all horoball lifts from all cusps are embedded. Since $M$ has finite volume, it has only finitely many cusps. Hence the collection of all lifts of horoball neighborhoods is countable.
\end{proof}
The software SnapPy \cite{SnapPy} has a feature `Cusp Neighborhoods' that shows the horoballs making up the lift of an embedded horoball neighborhood of $M$ --- or at least those that have Euclidean diameter larger than some specified lower bound. For example, the pattern arising from the figure-8 knot is shown in \reffig{Fig8KnotSnappyCusp}.
\begin{figure}
\includegraphics{Figures/Ch14_Canonical/F14-01-Fig8Cu.eps}
\caption{The horoballs in the lift of an embedded cusp of the figure-8 knot complement, from SnapPy \cite{SnapPy}.}
\label{Fig:Fig8KnotSnappyCusp}
\end{figure}
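For readers who wish to experiment, the following sketch produces the data behind a picture like \reffig{Fig8KnotSnappyCusp}. It is an illustration only, assuming a recent version of SnapPy \cite{SnapPy}; the exact method names, argument conventions, and dictionary keys should be checked against the SnapPy documentation.
\begin{verbatim}
# Sketch only; assumes a recent version of SnapPy.
import snappy

M = snappy.Manifold('4_1')
C = M.cusp_neighborhood()

# Expand the single cusp as far as possible while it remains embedded.
C.set_displacement(C.stopping_displacement(0), 0)
print(C.volume(0))          # volume of the resulting maximal cusp

# Horoballs above the size cutoff 0.2, viewed from infinity.
# The largest horoballs in the list are those tangent to the
# horoball at infinity.
for ball in C.horoballs(0.2):
    print(ball['center'], ball['radius'])
\end{verbatim}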
Now consider an embedded cusp neighborhood of a manifold with a single cusp, for example the figure-8 knot complement. \Refprop{EmbeddedHoroballNbhd} and \reflem{CountableHoroballs} imply that the cusp lifts to a countable collection of embedded horoballs in ${\mathbb{H}}^3$. If we adjust the size of the initial cusp neighborhood of $M$, the sizes of the horoball lifts will also be adjusted. For example, if we shrink the cusp of $M$, the horoball about infinity $H_\infty$ will also shrink, which we see as an increase in the Euclidean height of its boundary horosphere. All its translates $\gamma(H_\infty)$ will also shrink, which we see as a shrinking of the Euclidean diameters of the horoballs with centers away from $\infty$.
On the other hand, if we increase the size of the cusp of $M$, the horoballs will grow. We can increase their sizes, keeping the cusp embedded, up until the point where the cusp neighborhood becomes tangent to itself. In the lift of the horoball neighborhood, at this point two horoballs are tangent.
If $M$ has multiple cusps, then we can grow and shrink the sizes of cusps independently. However, we can still increase sizes of embedded cusps only until each is tangent either to itself or to another cusp.
\begin{definition}\label{Def:MaximalCusp}
A \emph{maximal cusp neighborhood}\index{maximal cusp neighborhood}\index{cusp!maximal cusp neighborhood} is an (open) embedded cusp neighborhood for $M$ that is maximal in the sense that no cusp can be expanded while keeping the set of cusps embedded and disjoint.
\end{definition}
\begin{definition}\label{Def:FullSized}
Consider the lift of an embedded maximal cusp neighborhood to ${\mathbb{H}}^3$, with one cusp lifting to a horoball at infinity. A \emph{full-sized horoball}\index{full-sized horoball}\index{horoball!full-sized} is a horoball in this pattern that is tangent to the horoball at infinity. Viewed from infinity, it has maximal Euclidean diameter.
\end{definition}
\begin{example}\label{Example:Fig8FullSized}
The complement of the figure-8 knot admits four full-sized horoballs, distinct up to translation by $\Gamma_\infty$; see again \reffig{Fig8KnotSnappyCusp}.
For this manifold, each full-sized horoball is tangent to the other three, in a pattern that is known to be the densest possible horoball packing. (This follows from a theorem of B{\"o}r{\"o}czky \cite{boroczky}: the 3-dimensional analogue of \refthm{Boroczky}.)\index{B{\"o}r{\"o}czky cusp density theorem!3-dimensional}\index{cusp density theorem!3-dimensional}
\end{example}
Recall that a \emph{fundamental domain}\index{fundamental domain} for the action of a group on a space is a subset of the space that contains a point from each orbit, whose interior contains at most one point from each orbit. In this chapter, we will restrict to complete $(G,X)$-structures\index{$(G,X)$-structure} on a manifold $M$, where $X$ is the metric space ${\mathbb{E}}^2$ or ${\mathbb{H}}^3$ and $G$ acts by isometries. In this case, we require a fundamental domain $R$ to be cut out by geodesic planes. Distinct points in the interior of $R$ project to distinct points in the manifold. The boundary of $R$ is made up of faces intersecting in edges and vertices, and the interior of each face is paired by an isometry of $G$ to exactly one other face; this is called a \emph{face-pairing isometry}\index{face-pairing isometry}. Finally, the quotient of the fundamental domain under the action of face-pairing isometries, which agrees with the restriction of the covering map $X\to M$, is all of $M$.
For example, a Euclidean structure\index{Euclidean structure} on a torus has fundamental domain a single (closed) parallelogram. It follows that the boundary of a cusp of a hyperbolic 3-manifold has a fundamental domain that is a parallelogram on a horosphere in ${\mathbb{H}}^3$.
\begin{lemma}\label{Lem:FullSized}
Let $M \cong {\mathbb{H}}^3/\Gamma$ be a hyperbolic 3-manifold with at least one cusp. In a horoball pattern in ${\mathbb{H}}^3$ given by lifting an embedded maximal cusp neighborhood\index{maximal cusp neighborhood}\index{cusp!maximal cusp neighborhood} for $M$, apply any isometry taking a desired horoball to the one at infinity. Then in the new pattern obtained by applying this isometry, there is at least one full-sized horoball meeting a fundamental domain for the boundary of the horoball about infinity.
Moreover, if $M$ has only one cusp, then there are at least two full-sized horoballs in a fundamental domain. The second is often called the \emph{Adams horoball}.\index{Adams horoball}\index{horoball!Adams}
\end{lemma}
\begin{proof}
By definition, an embedded maximal cusp neighborhood\index{maximal cusp neighborhood}\index{cusp!maximal cusp neighborhood} cannot be expanded or the cusp will no longer be embedded. Thus its lift to ${\mathbb{H}}^3$ will no longer consist of disjoint horoballs. That means that for each cusp, one horoball in its fundamental domain must be tangent to another. Hence when we apply an isometry taking a horoball projecting to that cusp to a horoball about infinity, another horoball becomes tangent to the one at infinity, hence full-sized.
In the case that $M$ has exactly one cusp, let $H_\infty$ denote the horoball at infinity and let $H_f$ denote the full-sized horoball. Because there is only one cusp of $M$, the two horoballs must project to the same cusp of $M$. Thus there must be a covering transformation, i.e.\ an isometry $g \in \Gamma$, taking $H_f$ to $H_\infty$. Consider the image $g(H_\infty)$. This is a horoball tangent to $g(H_f)=H_\infty$, hence it must be a full-sized horoball. Apply an isometry $w\in \Gamma_\infty\leq \Gamma$ fixing $\infty$, if necessary, so that $wg(H_\infty)$ lies in the same fundamental domain of the cusp as $H_f$, and replace $g$ with $wg$. Now either $H_f$ and $g(H_\infty)$ are disjoint full-sized horoballs, as desired, or possibly $H_f=g(H_\infty)$. We now rule out the latter case.
Suppose $g(H_\infty)= H_f$. Consider the effect of $g$ on the geodesic from the center of $H_f$ to $\infty$. Since $g$ takes $\infty$ to the center of $H_f$ and the center of $H_f$ to $\infty$, this geodesic is mapped to itself with its endpoints exchanged, and the point of tangency between $H_f$ and $H_\infty$ is mapped to itself; hence $g$ has a fixed point in the interior of ${\mathbb{H}}^3$, so it is elliptic.\index{elliptic} But $M$ is a manifold, hence \refprop{FreePropDisc} implies the action of $\Gamma$ is fixed point free, so $g$ cannot have a fixed point. Thus $H_f$ and $g(H_\infty)$ are disjoint full-sized horoballs.
\end{proof}
The Adams horoball is so-called because it appears prominently in work of Adams, e.g.\ \cite{adams:waist}.\index{Adams horoball}\index{horoball!Adams}
\begin{corollary}\label{Cor:CuspVolume}
The volume of any cusp component in a maximal cusp neighborhood\index{maximal cusp neighborhood}\index{cusp!maximal cusp neighborhood} of $M$ is at least $\sqrt{3}/4$. If $M$ has only one cusp, the volume of a maximal cusp neighborhood is at least $\sqrt{3}/2$.
\end{corollary}
\begin{proof}
\Refex{CuspVolumeMaxCusp}.
\end{proof}
We learn a great deal about a 3-manifold by considering its cusp neighborhoods, and lifts of maximal cusp neighborhoods\index{maximal cusp neighborhood}\index{cusp!maximal cusp neighborhood} to ${\mathbb{H}}^3$. However, because there are countably infinitely many horoballs in such a pattern, it is difficult to compute this pattern and it can be difficult to work with. We can reduce the difficulty of the problem by considering the sets of points closer to one lift than to another, and the boundaries between such sets in ${\mathbb{H}}^3$.
\begin{lemma}\label{Lem:DistHoroballs}
The set of points equidistant from two horoballs in ${\mathbb{H}}^3$ forms a geodesic plane in ${\mathbb{H}}^3$.
\end{lemma}
\begin{proof}
Let $H_1$ and $H_2$ be the two horoballs, and consider the geodesic $\gamma$ between their centers. There exists a unique point $p$ on $\gamma$ equidistant from the boundaries of $H_1$ and $H_2$; see \reffig{Equidistant}.
\begin{figure}
\import{Figures/Ch14_Canonical/}{F14-02-Equid.eps_tex}
\caption{Map horoballs $H_1$ and $H_2$ to lie over $-1$ and $1$, respectively, with the point equidistant from them on the geodesic between their centers mapped to the point over $0$ with height $1$.}
\label{Fig:Equidistant}
\end{figure}
Apply the hyperbolic isometry $\phi$ taking the center of $H_1$ to $-1 \in {\mathbb{C}}$, taking the center of $H_2$ to $1\in {\mathbb{C}}$, and taking $p$ to the point lying over $0\in{\mathbb{C}}$ of height $1$. Under this isometry, $H_1$ and $H_2$ are mapped to horoballs of the same Euclidean diameter, centered at $-1$ and $1$. We claim that the set of points equidistant from two horoballs of the same Euclidean diameter and centers $-1$ and $1$ is the vertical plane $P$ that meets ${\mathbb{C}}$ in the imaginary axis. This can be seen as follows. There is a reflection isometry fixing $P$ pointwise and exchanging the two horoballs. Thus the shortest path from a point $q\in P$ to one of the two horoballs will be mapped under the reflection to the shortest path from $q$ to the other horoball. Thus every point of $P$ is equidistant from the two horoballs.
Now apply $\phi^{-1}$ to this picture. The geodesic plane $P$ is mapped to a geodesic plane in ${\mathbb{H}}^3$ that is equidistant to the original horoballs.
\end{proof}
In the case that the two horoballs $H_1$ and $H_2$ project to the same cusp, the totally geodesic plane is known as an isometric sphere, as in the following definition.
\begin{definition}\label{Def:IsoSphere}
Let $g\in \operatorname{PSL}(2,{\mathbb{C}})$ be an element that does not fix $\infty$. Let $H$ denote a horosphere about $\infty$ in ${\mathbb{H}}^3$. Then $g^{-1}(H)$ is a horosphere centered at a point of ${\mathbb{C}} \subset ({\mathbb{C}}\cup\{\infty\}) = \partial{\mathbb{H}}^3$. Define the set $I(g)$\index{$I(g)$} to be the set of points in ${\mathbb{H}}^3$ equidistant from $H$ and $g^{-1}(H)$:
\[ I(g) = \{x\in{\mathbb{H}}^3\mid d(x,H) = d(x,g^{-1}(H))\} \]
The set $I(g)$ is the \emph{isometric sphere}\index{isometric sphere} of $g$.
\end{definition}
Note that $I(g)$ is well-defined, independent of $H$, even if $H$ and $g^{-1}(H)$ overlap (\refex{DefIsoSphereDoesNotDependOnH}).
\begin{lemma}\label{Lem:IsoSphereImage}
For $g\in\Gamma-\Gamma_\infty$, $g$ maps $I(g)$\index{$I(g)$} isometrically to $I(g^{-1})$, taking the half ball bounded by $I(g)$ to the exterior of the half ball bounded by $I(g^{-1})$.
\end{lemma}
See \reffig{IsoSphere}.
\begin{figure}
\import{Figures/Ch14_Canonical/}{F14-03-IsoSph.eps_tex}
\caption{The horoballs $H$ and $g^{-1}(H)$ are shown, along with the isometric sphere\index{isometric sphere} $I(g)$.\index{$I(g)$} The effect of applying $g$ to this picture is shown on the right: $g\circ g^{-1}(H)$ maps to $H$, $H$ to $g(H)$, and $I(g)$ maps to $I(g^{-1})$. }
\label{Fig:IsoSphere}
\end{figure}
\begin{proof}[Proof of \reflem{IsoSphereImage}]
Let $H$ denote a horosphere about $\infty$ in ${\mathbb{H}}^3$.
Note that $g$ takes the horoball $g^{-1}(H)$ to $H$, and takes $H$ to $g(H)$. Thus $g$ maps $I(g)$ isometrically to the set of points equidistant from $H$ and $g(H)$. This is $I(g^{-1})$. The half-space bounded by $I(g)$,\index{$I(g)$} which contains $g^{-1}(H)$, is mapped to the exterior of the half-space bounded by $I(g^{-1})$, which contains $H$.
\end{proof}
\begin{lemma}\label{Lem:IsoSphereRadius}
As a set, $I(g^{-1})$ (and hence $I(g)$)\index{$I(g)$} is a Euclidean hemisphere orthogonal to ${\mathbb{C}}$. If $ g=\mat{a&b\\c&d}\in {\rm PSL}(2,{\mathbb{C}}),$ then the center of the Euclidean hemisphere $I({g^{-1}})$ is $g(\infty) = a/c$. Its Euclidean radius is $1/|c|$.
\end{lemma}
\begin{proof}
The fact that $I(g^{-1})$ is a Euclidean hemisphere follows immediately from \reflem{DistHoroballs}: it must be a geodesic plane in ${\mathbb{H}}^3$. Moreover, it cannot meet the point $\infty$, since $H$ is centered at that point. Thus it is a Euclidean hemisphere.
As for the center and radius of the hemisphere, note that $g(\infty)=a/c$, and this must be the center. Consider the geodesic running from $\infty$ to $g(\infty)$. It consists of points of the form $(a/c, t)$ in ${\mathbb{C}} \times {\mathbb{R}}^+ \cong {\mathbb{H}}^3$. It will meet the horosphere $H$ about infinity at some height $t=h_1$, and the horosphere $g(H)$ at some height $t=h_0$. The radius of the isometric sphere\index{isometric sphere} $I({g^{-1}})$ is the height of the point equidistant from points $(a/c, h_0)$ and $(a/c, h_1)$.
Note that $g^{-1}( g(H)) = H$, and hence $h_1$ is given by the height of $g^{-1}(a/c, h_0)$, which can be computed to be $(-d/c, 1/(|c|^2h_0))$. Thus $h_1 =1/(|c|^2h_0)$. Since hyperbolic distance along a vertical geodesic between heights $t_1<t_2$ is $\log(t_2/t_1)$, the point equidistant from $(a/c, h_0)$ and $(a/c,1/(|c|^2h_0))$ lies at the geometric mean of the two heights, namely at height $h = \sqrt{h_0\cdot 1/(|c|^2h_0)} = 1/|c|$.
\end{proof}
\begin{lemma}\label{Lem:IsoSphereHeight}
If $p=(x+iy, t)\in{\mathbb{H}}^3$ lies in $I(g)$, then $g(p)$ has third coordinate $t$. That is, $g$ preserves the heights of points on $I(g)$.\index{$I(g)$}
\end{lemma}
\begin{proof}
Let $p\in I(g)$. If $p\in I(g)$ lies on the geodesic from $\infty$ to $g^{-1}(\infty)$, then the third coordinates of $p$ and $g(p)$ can both be determined to be $1/|c|$ from \reflem{IsoSphereRadius}, and so they agree. If $p$ is another point on $I(g)$,\index{$I(g)$} construct a 2/3-ideal triangle\index{$2/3$-ideal triangle} with vertices $\infty$, $g^{-1}(\infty)$, and $p$. Then $g$ maps this 2/3-ideal triangle to a triangle with the same area, hence the same angle at its finite vertex (see \refex{2/3IdealTriangle}). Since $p$ and $g(p)$ also both lie on Euclidean hemispheres of the same radius (again by \reflem{IsoSphereRadius}), it follows that $p$ and $g(p)$ have the same third coordinate.
\end{proof}
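The last three lemmas are easy to check numerically. The following Python sketch is an illustration only; the matrix $g$ below is an arbitrary choice with determinant $1$ and $c\neq 0$, and the formula used for the action of an element of $\operatorname{PSL}(2,{\mathbb{C}})$ on upper half-space ${\mathbb{H}}^3 = {\mathbb{C}}\times{\mathbb{R}}^+$ is the standard one. It takes random points of $I(g)$ and confirms that they are carried to points of $I(g^{-1})$ at the same height.
\begin{verbatim}
# Numerical illustration only.
import cmath, math, random

def act(g, z, t):
    # Apply g = ((a, b), (c, d)) in SL(2,C) to the point (z, t) in H^3.
    (a, b), (c, d) = g
    denom = abs(c * z + d) ** 2 + abs(c) ** 2 * t ** 2
    z_new = ((a * z + b) * (c * z + d).conjugate()
             + a * c.conjugate() * t ** 2) / denom
    return z_new, t / denom

g = ((1 + 1j, 1j), (2 - 1j, 2))   # determinant 1, c = 2 - i
(a, b), (c, d) = g
radius = 1 / abs(c)               # radius of I(g) and of I(g^{-1})

random.seed(0)
for _ in range(3):
    theta = random.uniform(0, 2 * math.pi)
    s = random.uniform(0, 0.99)
    z = -d / c + radius * s * cmath.exp(1j * theta)   # point on I(g),
    t = radius * math.sqrt(1 - s ** 2)                # hemisphere about -d/c
    z_new, t_new = act(g, z, t)
    print(abs(t_new - t),                                 # ~0: same height
          abs(abs(z_new - a / c) ** 2 + t_new ** 2
              - radius ** 2))                              # ~0: lands on I(g^{-1})
\end{verbatim}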
\begin{lemma}\label{Lem:LocallyFinite}
Let $\Gamma\leq \operatorname{PSL}(2,{\mathbb{C}})$ be a nonelementary discrete group with a parabolic\index{parabolic} subgroup $\Gamma_\infty$ fixing the point at infinity. Then the set of all isometric spheres\index{isometric sphere} $\{ I(g) \mid g\in \Gamma-\Gamma_\infty \}$ is \emph{locally finite},\index{locally finite} meaning that for any $x\in{\mathbb{H}}^3$, there exists $\epsilon>0$ such that the ball of radius $\epsilon$ centered at $x$ meets only finitely many isometric spheres\index{isometric sphere} $I(g)$\index{$I(g)$} for $g\in\Gamma-\Gamma_\infty$.
\end{lemma}
In fact we show that for all $x\in{\mathbb{H}}^3$, and all $\epsilon>0$, the ball $B_\epsilon(x)$ of radius $\epsilon$ centered at $x$ meets only finitely many $I(g)$.\index{$I(g)$}
\begin{proof}[Proof of \reflem{LocallyFinite}]
Suppose there exists $x\in{\mathbb{H}}^3$ and $\epsilon>0$ so that $B_\epsilon(x)$ meets infinitely many distinct isometric spheres $I(g_n)$, for elements $g_n\in \Gamma-\Gamma_\infty$. Let $q_n\in B_\epsilon(x)\cap I(g_n)$, and let $H$ be a horoball about infinity. By definition of $I(g_n)$, the point $q_n$ is equidistant from $H$ and $g_n^{-1}(H)$. Then $g_n(q_n)$ is equidistant from $g_n(H)$ and $H$, and by \reflem{IsoSphereHeight} $g_n(q_n)$ has the same third coordinate as $q_n$, hence its third coordinate lies in an interval of length at most $2\epsilon$ centered at the third coordinate of $x$.
Consider next the first and second coordinates of $g_n(q_n)$. The group $\Gamma_\infty$ is isomorphic to ${\mathbb{Z}}\times{\mathbb{Z}}$, generated by two parabolics\index{parabolic} translating along the Euclidean plane $\partial H$. Choose a (closed) parallelogram on $\partial H$ that forms a fundamental domain for the action of $\Gamma_\infty$. There exists some $w_n\in\Gamma_\infty$ taking $g_n(q_n)$ to lie in this parallelogram. Then the points $w_n g_n (q_n)$ have first and second coordinates lying within this parallelogram. Since $w_n$ does not affect height, the height of $w_n g_n (q_n)$ agrees with that of $q_n$, and thus also lies in a bounded region. So all points $\{ w_n g_n (q_n) \}$ lie within a bounded parallelepiped in ${\mathbb{H}}^3$. Thus they all lie within some bounded distance of our original point $x$, say $d(x, w_n g_n (q_n)) \leq R$ for some $R>0$.
Now consider the points $\{ (w_n g_n)^{-1} (x)\}$.
We have
\begin{align*}
d(x, (w_n g_n)^{-1} x) & \leq d(x, q_n) + d(q_n, (w_n g_n)^{-1} x) \\
& = d(x, q_n) + d(w_n g_n(q_n), x) \\
& \leq \epsilon + R.
\end{align*}
Let $B$ denote the closed ball of radius $R+\epsilon$ centered at $x$. The above calculation shows that each point $(w_n g_n)^{-1}(x)$ lies within this ball.
Then for each $n$, $(w_n g_n)^{-1}(B) \cap B$ contains $(w_n g_n)^{-1}(x)$, so is nonempty. Because the isometric spheres $I(g_n)$ are distinct, and $I(wg)=I(g)$ for any $w\in\Gamma_\infty$, the elements $(w_n g_n)^{-1}$ must be distinct, and therefore we have found an infinite set of elements of $\Gamma$ which take $B$ to a ball intersecting $B$. It follows that $\Gamma$ is not properly discontinuous, as in \refdef{ProperlyDiscont}. But $\Gamma$ is a discrete group, contradicting \reflem{DiscretePropDisc}.
\end{proof}
\section{Ford domain}\label{Sec:FordDomain}
In this section, we define a special fundamental domain for a hyperbolic 3-manifold, called a Ford domain. We will build this fundamental domain for a hyperbolic 3-manifold with at least one cusp. It will not be unique or canonical, although in the case that $M$ has only one cusp, a closely related region, the equivariant Ford domain defined below, will be unique and canonical. Because of the non-uniqueness of domains in the case of multiple cusps, and the additional bookkeeping required in that case, we will first treat the case that $M$ has exactly one cusp.
\subsection{The case of one cusp}
When $M$ has a unique cusp, we define a fundamental domain in terms of isometric spheres\index{isometric sphere} of $M$.
\begin{definition}\label{Def:EquivariantFordDomain}
Define $B(g)$ to be the open half ball bounded by $I(g)$\index{$I(g)$} in ${\mathbb{H}}^3$, and let $\mathcal{F}(\Gamma)$ be the set
\[ \mathcal{F}(\Gamma) = {\mathbb{H}}^3 - \bigcup_{g\in\Gamma-\Gamma_\infty} B(g) = \bigcap_{g\in\Gamma-\Gamma_\infty} ({\mathbb{H}}^3 - B(g)). \]
We call $\mathcal{F}(\Gamma)$ the \emph{equivariant Ford domain}.\index{equivariant Ford domain}\index{Ford domain!equivariant Ford domain}
Notice that $\mathcal{F}(\Gamma)$ is invariant under the action of $\Gamma_\infty$.
\end{definition}
An example is shown in \reffig{EqFordDomainCrossSection}.
\begin{figure}
\includegraphics{Figures/Ch14_Canonical/F14-04-Ford.eps}
\caption{The shaded region is a 2-dimensional cross section of the equivariant Ford domain corresponding to the maximal cusp neighborhood\index{maximal cusp neighborhood}\index{cusp!maximal cusp neighborhood} of the figure-8 knot complement.}
\label{Fig:EqFordDomainCrossSection}
\end{figure}
\begin{lemma}\label{Lem:EquivFordDomain}
Fix a maximal cusp neighborhood\index{maximal cusp neighborhood}\index{cusp!maximal cusp neighborhood} of $M$, and let $H$ be the lift of the maximal cusp neighborhood to ${\mathbb{H}}^3$ that is the horoball centered at infinity. Then
\[ B(g) = \{ x\in {\mathbb{H}}^3 \mid d(x,g^{-1}(H))< d(x,H) \}, \quad \mbox{and} \]
\[ \mathcal{F}(\Gamma) = \{ x\in {\mathbb{H}}^3 \mid d(x,H) \leq d(x,g(H)) \mbox{ for all } g\in \Gamma-\Gamma_\infty \}. \]
\end{lemma}
\begin{proof}
This follows from the definitions. By \refdef{IsoSphere}, $I(g)$\index{$I(g)$} is the set of points equidistant from $H$ and $g^{-1}(H)$. Thus $B(g)$ consists of points strictly closer to $g^{-1}(H)$ than to $H$. Then $\mathcal{F}(\Gamma)$ consists of points at least as close to $H$ as to any of its translates under $\Gamma-\Gamma_\infty$.
\end{proof}
\begin{lemma}\label{Lem:EquFordDomainProperties}
The equivariant Ford domain $\mathcal{F}(\Gamma)$ satisfies the following.
\begin{enumerate}
\item $\mathcal{F}(\Gamma)$ is a convex subset of ${\mathbb{H}}^3$.
\item $\partial \mathcal{F}(\Gamma)$ consists of points on $I(g)$\index{$I(g)$} for at least one $g \in\Gamma-\Gamma_\infty$, and admits a decomposition into convex faces, edges, and vertices.
\item $\mathcal{F}(\Gamma)$ is invariant under the action of $\Gamma_\infty$. In particular, $\Gamma_\infty$ takes faces, edges, and vertices of $\mathcal{F}(\Gamma)$ to faces, edges, and vertices, respectively.
\end{enumerate}
\end{lemma}
\begin{proof}
For convexity, note that $\mathcal{F}(\Gamma)$ is the intersection of half-spaces in ${\mathbb{H}}^3$, each of which is convex; thus $\mathcal{F}(\Gamma)$ is convex.
Any point on the boundary of $\mathcal{F}(\Gamma)$ must lie on the boundary of $B(g)$ for some $g\in \Gamma-\Gamma_\infty$. But $\partial B(g) = I(g)$. Thus the decomposition into faces, edges, and vertices is via isometric spheres\index{isometric sphere} and their intersections: The faces of $\partial\mathcal{F}(\Gamma)$ are those points that lie on $I(g)$\index{$I(g)$} for a fixed $g$. The interior of the face consists of points that do not lie on any other $I(h)$ for $h\in\Gamma-\Gamma_\infty$. Edges are points in the intersection of $I(g_1)$ and $I(g_2)$, for some $g_1, g_2\in\Gamma-\Gamma_\infty$. Vertices lie in the intersection of three or more isometric spheres.\index{isometric sphere} Because faces are subsets of Euclidean hemispheres cut out by other Euclidean hemispheres, they are convex.
Finally, let $w\in \Gamma_\infty$. A point $x\in{\mathbb{H}}^3$ lies in $\mathcal{F}(\Gamma)$ if and only if $x$ lies in the exterior of all open half balls $B(h)$ for $h\in \Gamma-\Gamma_\infty$. This holds if and only if $w(x)$ lies in the exterior of all open half balls $B(hw^{-1})$ for $hw^{-1} \in \Gamma-\Gamma_\infty$. This is the same set of open half balls. Thus $w(x)$ lies in $\mathcal{F}(\Gamma)$ if and only if $x$ does, and so $\mathcal{F}(\Gamma)$ is invariant under $\Gamma_\infty$.
Suppose that $x$ lies on a face of $\mathcal{F}(\Gamma)$, which is a subset of $I(g)$\index{$I(g)$} for some $g\in\Gamma-\Gamma_\infty$. Then $x$ is equidistant from a horoball $H$ at infinity and its translate $g^{-1}(H)$. Thus $w(x)$ is equidistant from $w(H)=H$ and $w(g^{-1}(H)) = (gw^{-1})^{-1}(H)$. It follows that $w(x)$ lies on the isometric sphere\index{isometric sphere} $I(gw^{-1})$, and so $w$ takes isometric spheres to isometric spheres. Because $w$ preserves $\mathcal{F}(\Gamma)$, $w$ takes the face to a subset of the isometric sphere $wI(g)$, which must be a face. Similarly, if $x$ lies on an edge or vertex, then it lies on the intersection of isometric spheres, and so does $w(x)$. Since $w$ preserves $\mathcal{F}(\Gamma)$, $w(x)$ lies on an edge or vertex.
\end{proof}
The faces of $\mathcal{F}(\Gamma)$, which are contained in isometric spheres\index{isometric sphere} $I(g)$,\index{$I(g)$} can be glued in pairs using the group elements $g$, as in the following lemma.
\begin{lemma}\label{Lem:FaceToFace}
Suppose a subset $f_g$ of $I(g)$\index{$I(g)$} is a face of $\mathcal{F}(\Gamma)$. Then $g(f_g)$ is a face of $\mathcal{F}(\Gamma)$.
\end{lemma}
\begin{proof}
Any point $x$ in the interior of $f_g$ is equidistant from $H$ and $g^{-1}(H)$, and because $x$ is in the interior, $x$ lies further away from $h(H)$ for any $h\neq g^{-1}$ in $\Gamma-\Gamma_\infty$. By \reflem{IsoSphereImage}, $g$ maps $x$ to $g(x) \in I(g^{-1})$. Then $g(x)$ is equidistant from $H$ and $g(H)$, but further away from $gh(H)$ for any $h\neq g^{-1}$ in $\Gamma-\Gamma_\infty$. Equivalently, $g(x)$ is equidistant from $H$ and $g(H)$ but further from $k(H)$ for any $k\neq g$ in $\Gamma-\Gamma_\infty$. It follows that $g(x)$ lies in the interior of a face of $\mathcal{F}(\Gamma)$.
\end{proof}
\Reflem{FaceToFace} implies that if $I(g)\cap \mathcal{F}(\Gamma)$ is a face for some $g$, then $g$ is a face-pairing isometry\index{face-pairing isometry} of $\mathcal{F}(\Gamma)$ in the sense that it maps a face isometrically to a face. At this point, we could take the quotient of $\mathcal{F}(\Gamma)$ by its face pairing isometries and obtain a manifold that is a covering space of $M$. However, we really want a fundamental domain of $M$, so we restrict $\mathcal{F}(\Gamma)$ further.
\begin{definition}\label{Def:VerticalFundDomain}
A \emph{vertical fundamental domain}\index{vertical fundamental domain} for $\Gamma_\infty$ is a connected convex fundamental domain for the action of $\Gamma_\infty$ on ${\mathbb{H}}^3$ that is cut out by finitely many vertical geodesic planes in ${\mathbb{H}}^3$.
\end{definition}
For example, a vertical fundamental domain for the figure-8 knot is cut out by four vertical planes whose boundary on ${\mathbb{C}}$ at infinity is the parallelogram bounding the triangles shown in \reffig{Fig8CuspTriang}.
\begin{definition}\label{Def:FordDomain}
A \emph{Ford domain}\index{Ford domain} for $M$ is the intersection of $\mathcal{F}(\Gamma)$ and a vertical fundamental domain for $\Gamma_\infty$.
\end{definition}
A Ford domain is not canonical; that is, it is not uniquely defined for the manifold, because the choice of vertical fundamental domain is not unique. However, the equivariant Ford domain is canonical. For this reason, sometimes in the literature the Ford domain is actually defined to be $\mathcal{F}(\Gamma)$; this is the definition in \cite{Bonahon:LowDimGeom}, for example. However, $\mathcal{F}(\Gamma)$ is not a finite sided region, and its interior maps to $M$ in an infinite-to-one manner rather than a one-to-one manner, meaning it is not a fundamental domain for $M$. Thus we have chosen to define the Ford domain as in \refdef{FordDomain}.
\begin{proposition}\label{Prop:FordFiniteFaces}
Let $\overline{M}$ be a compact orientable 3-manifold with a single torus boundary component whose interior $M$ admits a complete, finite-volume hyperbolic structure. Then any Ford domain $F$ for $M$ is a convex finite-sided polyhedron, cut out by finitely many geodesic planes in ${\mathbb{H}}^3$. Moreover, it is a fundamental domain for $M$, in the sense that $M$ is obtained as a quotient of the Ford domain by face-pairing isometries.\index{face-pairing isometry}
\end{proposition}
\begin{proof}
Both the equivariant Ford domain and a vertical fundamental domain are convex, and thus their intersection is convex.
To see that $F$ is a fundamental domain for $M$, we must prove that when we restrict the covering map ${\mathbb{H}}^3 \to {\mathbb{H}}^3/\Gamma \cong M$ to $F\subset{\mathbb{H}}^3$, the projection surjects onto $M$, and that no two points in the interior of $F$ project to the same point of $M$.
First, note that if $x$ is in the interior of $F$, then $x\in\mathcal{F}(\Gamma)$, so $g(x) \notin \mathcal{F}(\Gamma)$ for all $g\in\Gamma-\Gamma_\infty$. Since $x$ also lies in the interior of a vertical fundamental domain $V$ for the action of $\Gamma_\infty$, we have $g(x) \notin V$ for all nontrivial $g\in\Gamma_\infty$. Thus $g(x)$ lies in $F$ only if $g$ is the identity; therefore no two points in the interior of $F$ project to the same point under the covering projection ${\mathbb{H}}^3\to M$.
Now we show that the image of $F$ under the covering map surjects onto $M$. Choose a maximal cusp neighborhood\index{maximal cusp neighborhood}\index{cusp!maximal cusp neighborhood} for $M$, and let $H$ be a horoball about infinity that projects onto the cusp. Let $x\in M$. Let $\delta$ be the minimal distance from $x$ to the maximal cusp neighborhood in $M$. Then there exists a lift $\widetilde{x}$ of $x$ in ${\mathbb{H}}^3$ that is distance $\delta$ from $H$. Since this distance is minimal, it follows that $\widetilde{x}$ lies in $\mathcal{F}(\Gamma)$. There exists $w\in\Gamma_\infty$ such that $w\widetilde{x}$ lies in $V$. Thus $w\widetilde{x}$ lies in $\mathcal{F}(\Gamma)\cap V= F$, and $w\widetilde{x}$ projects to $x$. So $F$ is a fundamental domain for $M$.
Next we show that the Ford domain is a finite-sided polyhedron. First, remove a small embedded horoball neighborhood from the cusp of $M$ to obtain a compact 3-manifold $\overline{M}$. The horoball neighborhood lifts to a collection of horoballs in ${\mathbb{H}}^3$; remove these from $F$, and call the result $\overline{F}$.
By \reflem{LocallyFinite}, for any $x$ in ${\mathbb{H}}^3$, there is a ball $B_x$ centered at $x$ meeting only finitely many isometric spheres.\index{isometric sphere} The set of all such balls for $x\in \overline{F}$ cover $\overline{F}$. Since $F$ is a fundamental domain for $M$, these balls map to a set of balls covering the compact manifold $\overline{M}$. Thus there is a finite subcollection of balls covering $\overline{M}$, which lift to give a finite cover of $\overline{F}$. Then the total collection of these balls meet only finitely many faces of $F$. Thus $F$ is finite sided.
Finally consider face-pairings.\index{face-pairing isometry} By definition, a face of $V$ is paired to another face of $V$ by an isometry $w\in\Gamma_\infty$. Any $x$ in the interior of the intersection of that face with $\mathcal{F}(\Gamma)$ is mapped by $w$ to the point $w(x)$ in $V\cap \mathcal{F}(\Gamma)$. So $w$ is a face-pairing isometry of $F$.
If $x\in F$ lies in the interior of a face $I(g)\cap F$, then $x$ is glued to $g(x)$ in $I(g^{-1})$ in the interior of a face in $\mathcal{F}(\Gamma)$. The face may be disjoint from $V$, but because $V$ is a fundamental domain for the action of $\Gamma_\infty$, there exists some $w\in\Gamma_\infty$ such that $wg(x)\in V$. Thus $wg(x)\in I(g^{-1}w^{-1})\cap F$. By continuity, the same $w$ maps $g(y)$ to $F$ for any $y$ in a small neighborhood of $x$. Thus $wg$ is a face-pairing isometry.\index{face-pairing isometry}
\end{proof}
\begin{example}\label{Example:Fig8FordDomain}
We can compute explicitly a Ford domain for $M$ the figure-8 knot complement. From \refexamp{Fig8GroupDiscrete} of \refchap{Margulis}, we have a description of three generators of the holonomy group\index{holonomy group} $\Gamma$ of the figure-8 knot complement, namely
\[
T_B = \frac{i}{\sqrt{\omega}}\mat{1&1\\1&-\omega^2}, \quad T_C=\mat{1&\omega\\0&1}, \quad T_D = \mat{2&-1\\1&0},
\]
where recall $\omega = {\frac{1}{2}} + i\frac{\sqrt{3}}{2}$ is a primitive sixth root of unity. The transformation $T_C$ fixes the point at infinity, so it plays a part in defining a vertical fundamental domain $V$, but it does not give isometric spheres.\index{isometric sphere} Note that the isometric spheres corresponding to $T_B^{\pm 1}$ and $T_D^{\pm 1}$ all have radius $1$.
The center of $I(T_B^{-1})$ is $1$, that of $I(T_B)$ is $\omega^2$, that of $I(T_D)$ is $2$, and that of $I(T_D^{-1})$ is $0$.
These are equidistant to the full-sized horoballs of \refexamp{Fig8FullSized}, up to translation in $\Gamma_\infty$. Take a vertical fundamental domain $V$ for $M$ cut out by planes meeting ${\mathbb{C}}$ as shown in \reffig{Fig8CuspTriang}. Then $V$ has face-pairing isometries\index{face-pairing isometry} $T_C$ and $T=\mat{1&4\\0&1}$. The ten isometric spheres $I(T_D^{-1})$, $I(T_B^{-1})$, $I(T_D)$, $I(T_C^{-1}TT_B)$, $I(T(T_D^{-1}))$, and their translates under $T_C$ all intersect $V$. In fact, along with $V$ they cut out a Ford domain. See \reffig{Fig8FordDomain}.
\end{example}
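The centers and radii quoted in this example can be recovered directly from \reflem{IsoSphereRadius}. The short Python sketch below is an illustration only (the variable names are ours); it simply does the arithmetic for $T_B$ and $T_D$.
\begin{verbatim}
# Numerical illustration only: centers and radii of isometric spheres
# via Lemma IsoSphereRadius (center of I(g^{-1}) is a/c, center of I(g)
# is -d/c, radius is 1/|c|).
import cmath

omega = 0.5 + 1j * (3 ** 0.5) / 2
s = 1j / cmath.sqrt(omega)
T_B = ((s * 1, s * 1), (s * 1, s * (-omega ** 2)))
T_D = ((2, -1), (1, 0))

for name, ((a, b), (c, d)) in (('T_B', T_B), ('T_D', T_D)):
    print(name,
          'center of I(g^{-1}):', a / c,   # T_B: 1,       T_D: 2
          'center of I(g):', -d / c,       # T_B: omega^2, T_D: 0
          'radius:', 1 / abs(c))           # both: 1
\end{verbatim}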
\begin{figure}
\import{Figures/Ch14_Canonical/}{F14-05-FoMark.eps_tex}
\caption{A Ford domain for the figure-8 knot complement. The vertical fundamental domain $V$ meets ${\mathbb{C}}$ in the parallelogram shown with vertices $0$, $4$, $\omega$ and $4+\omega$. The isometric spheres\index{isometric sphere} intersect to form hexagonal faces of the equivariant Ford domain $\mathcal{F}(\Gamma)$. The Ford domain is the intersection $\mathcal{F}(\Gamma)\cap V$.}
\label{Fig:Fig8FordDomain}
\end{figure}
\begin{definition}\label{Def:CombinatoriallyEquivalent}
Let $M \cong {\mathbb{H}}^3/\Gamma$ and $M'\cong{\mathbb{H}}^3/\Gamma'$ be one-cusped hyperbolic 3-manifolds, and let $\mathcal{F}(\Gamma)$ and $\mathcal{F}(\Gamma')$ be their respective equivariant Ford domains. Suppose there is a bijection between faces, edges, and vertices of $\mathcal{F}(\Gamma)$ and faces, edges, and vertices of $\mathcal{F}(\Gamma')$ such that:
\begin{enumerate}
\item An edge of $\mathcal{F}(\Gamma)$ is contained in a given face if and only if the corresponding edge of $\mathcal{F}(\Gamma')$ is contained in the corresponding face, and a vertex of $\mathcal{F}(\Gamma)$ is contained in a given edge if and only if the corresponding vertex of $\mathcal{F}(\Gamma')$ is contained in the corresponding edge.
\item A face $f_1$ of $\mathcal{F}(\Gamma)$ is mapped to a face $f_2$ by a parabolic\index{parabolic} translation in $\Gamma_\infty\leq \Gamma$ if and only if the face of $\mathcal{F}(\Gamma')$ corresponding to $f_1$ is mapped by a parabolic translation in $\Gamma_\infty'\leq\Gamma'$ to the face corresponding to $f_2$.
\item A face pairing isometry of $\mathcal{F}(\Gamma)$ matches faces, edges, and vertices if and only if a face pairing isometry of $\mathcal{F}(\Gamma')$ matches corresponding faces, edges, and vertices.
\end{enumerate}
Then the equivariant Ford domains $\mathcal{F}(\Gamma)$ and $\mathcal{F}(\Gamma')$ are said to be \emph{combinatorially equivalent}.\index{combinatorially equivalent}\index{Ford domain!equivariant!combinatorially equivalent}\index{equivariant Ford domain!combinatorially equivalent}
\end{definition}
\begin{theorem}\label{Thm:CombinatoriallyEquivalent}
Suppose $M\cong {\mathbb{H}}^3/\Gamma$ and $M'\cong {\mathbb{H}}^3/\Gamma'$ are one-cusped hyperbolic 3-manifolds with combinatorially equivalent Ford domains $\mathcal{F}(\Gamma)$ and $\mathcal{F}(\Gamma')$. Then $M$ and $M'$ are isometric.
In particular, if $M\cong S^3-K$ and $M'\cong S^3-K'$ are knot complements, then $K$ and $K'$ are equivalent knots, up to reflection.
\end{theorem}
\begin{proof}
Because $\mathcal{F}(\Gamma)$ and $\mathcal{F}(\Gamma')$ are combinatorially equivalent, the quotients $\mathcal{F}(\Gamma)/\Gamma$ and $\mathcal{F}(\Gamma')/\Gamma'$ are homeomorphic as 3-manifolds. Mostow--Prasad rigidity,\index{Mostow--Prasad rigidity} \refthm{MostowGeom}, then implies that the quotients are actually isometric.
If $M$ and $M'$ are knot complements, then the fact that they come from isomorphic knots follows from Gordon and Luecke's knot complement theorem, \refthm{GordonLuecke}~\cite{gordon-luecke}.\index{Gordon--Luecke knot complement theorem}\index{knot complement theorem}
\end{proof}
\subsection{The case of multiple cusps}
When there are multiple cusps, we still build a Ford domain by considering points closer to one cusp than another. However, there will be a choice involved. We will first give the definitions, then explain how the choices affect the domain.
Let $M\cong {\mathbb{H}}^3/\Gamma$ be a complete hyperbolic 3-manifold, and let $C_0, \dots, C_k$ denote its cusps.
Start by choosing a horoball neighborhood of all the cusps of $M$. The neighborhood does not necessarily need to be embedded for the definitions to work. In practice, however, we often consider a choice of maximal cusp neighborhood.\index{maximal cusp neighborhood}\index{cusp!maximal cusp neighborhood}
Lift the horoball neighborhood to the universal cover ${\mathbb{H}}^3$. This gives a countable collection of horoballs in ${\mathbb{H}}^3$, which will be disjoint if and only if the choice of horoball neighborhood is embedded in $M$. Apply an isometry of ${\mathbb{H}}^3$ so that the horoball $H_0$ about infinity projects to the cusp $C_0$ under the covering map.
Let $\Gamma_\infty\leq \Gamma$ denote the subgroup fixing the cusp at infinity. We may choose a vertical fundamental domain $V_0$ for the action of $\Gamma_\infty$, and we still have isometric spheres\index{isometric sphere} $I(g)$\index{$I(g)$} for $g\in \Gamma-\Gamma_\infty$. These will form some of the faces of the Ford domain, but not all. In particular, isometric spheres only give points equidistant from lifts of the cusp $C_0$. We also need to consider points equidistant from the horoball $H_0$ and lifts of cusps $C_1, \dots, C_k$. These are not isometric spheres, so must be defined separately.
For each $C_j$, $j=1, \dots, k$, choose a horoball $H_j$ such that the distance from $H_0$ to $H_j$ is the distance from the cusp $C_0$ to $C_j$ in $M$. For convenience, we may choose $H_j$ such that its center lies inside the vertical fundamental domain $V_0$. For a horoball $H$ with center on ${\mathbb{C}}$, let $P(H)$ denote the set of points equidistant from $H_0$ and $H$. (Thus for $g\in\Gamma-\Gamma_\infty$, the isometric sphere\index{isometric sphere} $I(g)$\index{$I(g)$} is the plane $P(g^{-1}(H_0))$.) Let $B(H)$ denote the open half ball bounded by $P(H)$, so $B(g^{-1}(H_0)) = B(g)$ in the notation of the previous subsection.
Define $\mathcal{F}_0$ to be the set
\[
\mathcal{F}_0 = {\mathbb{H}}^3 - \left( \bigcup_{g\in \Gamma-\Gamma_\infty} B(g) \cup \bigcup_{j=1}^k\bigcup_{g\in\Gamma} B(g(H_j)) \right).
\]
Define $F_0$ to be the set $\mathcal{F}_0 \cap V_0$.
Now repeat the entire construction above, only replacing $C_0$ with $C_j$. Thus we start by applying an isometry so that $H_j$ is a horoball about infinity projecting to $C_j$. Define a vertical fundamental domain $V_j$, and obtain sets $\mathcal{F}_j$ and $F_j = \mathcal{F}_j \cap V_j$. Note that under the isometry taking $C_j$ to the cusp at infinity, the sets $\mathcal{F}_0$ and $F_0$ created before are mapped to some other region of ${\mathbb{H}}^3$ whose interior will be disjoint from $\mathcal{F}_j$ and $F_j$.
\begin{definition}\label{Def:FordDomainMultiple}
Let $M$ be a complete hyperbolic 3-manifold with cusps $C_0, \dots, C_k$. For each cusp, construct subsets $\mathcal{F}_j$ and $F_j$ of ${\mathbb{H}}^3$ as above.
The (disjoint) union of the sets $\mathcal{F}_j$ is the \emph{equivariant Ford domain} $\mathcal{F}$.\index{equivariant Ford domain!multiple cusps}
The (disjoint) union of the sets $F_j$ is the \emph{Ford domain}\index{Ford domain!multiple cusps} of $M$.
\end{definition}
Observe that each $F_j$ is a convex polyhedron. Faces of $\mathcal{F}$ are paired by isometries of ${\mathbb{H}}^3$ that take the pair of horoballs equidistant from one face to the pair of horoballs equidistant from the paired face.
\begin{figure}
\begin{center}
\includegraphics{Figures/Ch14_Canonical/F14-06-Small.eps}
\end{center}
\caption{Three different Ford domains for the Borromean rings.\index{Borromean rings} The boundary of a vertical fundamental domain $V_0$ is in magenta. Faces of $\mathcal{F}_0$ have boundaries shown in black.}
\label{Fig:BorroRingsFord}
\end{figure}
Also note that the polyhedra $F_j$ will depend on the choice of expansion of horoballs. An example is shown in \reffig{BorroRingsFord} for the complement of the Borromean rings.\index{Borromean rings} The figures are adapted from SnapPy \cite{SnapPy}. Three different choices of maximal cusp neighborhood\index{maximal cusp neighborhood}\index{cusp!maximal cusp neighborhood} are given, and their effect on the combinatorics of the corresponding subset $\mathcal{F}_0$ of the equivariant Ford domain is shown. At the top, the cusp $C_0$ has been chosen to be as large as possible. That is, first $C_0$ was expanded until it bumped itself, then $C_1$ and $C_2$ were expanded to meet $C_0$. For this choice of maximal cusp neighborhood,\index{maximal cusp neighborhood}\index{cusp!maximal cusp neighborhood} the faces of $\mathcal{F}_0$ are squares oriented horizontally. In the middle, $C_0$ still has larger volume than that of $C_1$ and $C_2$, but is not as large as possible. The faces of $\mathcal{F}_0$ are now octagons and squares. At the bottom, all three cusps have been chosen to have the same volume. The faces of $\mathcal{F}_0$ are squares again, but oriented on a diagonal.
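Readers who wish to experiment with such pictures can vary cusp neighborhoods directly in SnapPy's Python interface. The following is a minimal sketch, not a recipe from this text: the census name {\tt L6a4} for the Borromean rings and the exact names and argument orders of the {\tt CuspNeighborhood} methods are assumptions that should be checked against the installed SnapPy documentation.
\begin{quote}
\begin{verbatim}
# Minimal sketch; method names follow recent SnapPy versions and
# should be checked with help(N) if they differ.
import snappy

M = snappy.Manifold('L6a4')   # assumed to be the Borromean rings
N = M.cusp_neighborhood()

# Inspect the current cusp neighborhoods.
for i in range(M.num_cusps()):
    print(i, N.get_displacement(i), N.volume(i))

# Enlarging or shrinking individual cusps, e.g. with
#   N.set_displacement(new_value, which_cusp)
# changes which horoballs meet, and hence the combinatorics of the
# Ford domain; the horoball viewer in M.browse() shows the result.
\end{verbatim}
\end{quote}
Changing the displacements and re-examining the picture gives different combinatorics, as in the three panels of \reffig{BorroRingsFord}.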
Akiyoshi has shown there are at most finitely many combinatorially inequivalent Ford domains for any given manifold \cite{Akiyoshi:Finiteness}.
\section{Canonical polyhedra}\label{Sec:Canonical}
We now describe how to construct the canonical polyhedral decomposition of a finite volume 3-manifold.
Throughout this section, let $M \cong {\mathbb{H}}^3/\Gamma$ be a finite volume hyperbolic 3-manifold with a choice of maximal cusp neighborhood,\index{maximal cusp neighborhood}\index{cusp!maximal cusp neighborhood} and corresponding equivariant Ford domain $\mathcal{F}$.
We begin by describing the 1-cells, or edges of the polyhedral decomposition. Take a component $\mathcal{F}_j$ of the equivariant Ford domain, embedded in ${\mathbb{H}}^3$ such that it contains a horoball $H$ about infinity. Consider a face $f$ of $\mathcal{F}_j$ with nonempty interior. Points in the interior of $f$ are exactly those points in ${\mathbb{H}}^3$ that are equidistant from $H$ and from another horoball lift $H'$ of a cusp of $M$. For each such face, take the geodesic from the center of $H'$ on $\partial_\infty{\mathbb{H}}^3$ to $\infty$. This geodesic is the \emph{geometric dual} to the face $f$.\index{geometric dual} For each face $f$ of each $\mathcal{F}_j$, the geometric dual will be identified to a 1-cell of the canonical polyhedral decomposition. Two such geodesics are identified to the same 1-cell if they are identified by an element of $\Gamma$.
\begin{example}\label{Example:Fig8DualEdges}
For the figure-8 knot complement, with equivariant Ford domain $\mathcal{F}(\Gamma)$ shown in \reffig{Fig8FordDomain}, the geometric dual edges are those edges running from the points on ${\mathbb{C}}$ at the centers of the hexagon faces to the point at infinity, intersecting the faces of the Ford domain in their centers.
\end{example}
\begin{remark}\label{Rmk:DualIntersection}
Note: A geometric dual edge does not necessarily intersect the face of $\mathcal{F}$ that it is dual to, although the dual edges in the case of the figure-8 knot intersect their corresponding faces in \refexample{Fig8DualEdges}.
For example, a geodesic may be the geometric dual to a face $f_1$ of the Ford domain, but the highest point of the Euclidean hemisphere containing $f_1$ might be covered by another face $f_2$. Then the geometric dual to $f_1$, which runs through this highest point, would not intersect $f_1$. This phenomenon is not common, especially in the small examples that one sees using SnapPy, but it does occur in practice.
\end{remark}
We now construct the 2-cells.
Consider an edge $e$ of $\mathcal{F}_j$ with nonempty interior. The interior points on such an edge lie on faces $f'$ and $f''$ of $\mathcal{F}_j$, where $f'$ consists of points equidistant from $H$ and a horoball $H'$, and where $f''$ consists of points equidistant from $H$ and a horoball $H''$. Denote the geometric dual of the face $f'$ by $\gamma'$, and the geometric dual of the face $f''$ by $\gamma''$. Thus $\gamma'$ and $\gamma''$ are infinite geodesics running from $H'$ to $H$ and from $H''$ to $H$, respectively. Consider the vertical plane containing $\gamma'$ and $\gamma''$. Form a portion $P$ of a 2-cell by taking the region of this plane lying between $\gamma'$ and $\gamma''$ and its intersection with the exterior of the faces $f'$ and $f''$. For example, in \reffig{DualFaces}, the lightly shaded region lying above two isometric spheres\index{isometric sphere} and running into infinity forms the portion $P$ of the 2-cell. Two such portions of planes are identified if they are identified by a parabolic\index{parabolic} translation of $\Gamma$ fixing the point at infinity.
The region $P$ and its translates under $\Gamma_\infty$ do not form the entire 2-cell; the 2-cell is formed by gluing portions of such regions $P$ along their intersections with isometric spheres.\index{isometric sphere} Note that each $P$ meets the faces $f'$ and $f''$ at right angles. Form a 2-cell by gluing portions of planes via the face-pairing isometries\index{face-pairing isometry} of $\mathcal{F}_j$. That is, if $f'$ is glued to $\bar{f}'$ by a face-pairing isometry, then $P\cap f'$ is glued to $\bar{f}'$ by the same isometry. Similarly for $P\cap f''$. In \reffig{DualFaces}, the darker shaded regions form additional portions of the 2-cell; they are obtained by such gluings.
\begin{lemma}\label{Lem:Dual2Cell}
The 2-cells constructed as above are totally geodesic ideal polygons with $n\geq 3$ sides, where $n$ is the number of cusps equidistant from the image of the edge $e$ in the component of the Ford domain $\mathcal{F}_j$ of $M$.
\end{lemma}
\begin{proof}
\Refex{Dual2Cell}.
\end{proof}
The totally geodesic ideal polygon thus constructed is the \emph{geometric dual} of the edge $e$.\index{geometric dual}
\begin{example}\label{Example:Fig8DualFaces}
For the figure-8 knot complement, with equivariant Ford domain shown in \reffig{Fig8FordDomain}, each portion $P$ of a 2-cell lies over two faces of the Ford domain, running between their centers. A cross sectional view is shown on the left of \reffig{DualFaces}. In this example, three distinct portions $P$ glue to form an ideal triangle.\index{ideal triangle} Thus the 2-cells in this case are all ideal triangles.
\end{example}
An example for an edge equidistant from more than three horoballs is shown on the right in \reffig{DualFaces}.
\begin{figure}
\includegraphics{Figures/Ch14_Canonical/F14-07-Dual.eps}
\caption{Left: Cross-sectional view of a 2-cell dual to an edge of the equivariant Ford domain for the figure-8 knot complement. Right: In different examples, the dual 2-cell may have more sides.}
\label{Fig:DualFaces}
\end{figure}
As in the case of the 1-cells, two 2-cells are identified if they differ by an element of $\Gamma$.
Finally we construct the 3-cells. For each vertex of $\mathcal{F}_j$, there are finitely many adjacent edges of $\mathcal{F}_j$. The intersection of $\mathcal{F}_j$ with the region bounded by the corresponding 2-cells forms a portion of a 3-cell $C$. The full 3-cell is formed by gluing $C\cap \partial\mathcal{F}_j$ by face-pairing isometries.\index{face-pairing isometry} The 3-cell is the \emph{geometric dual} to the vertex.\index{geometric dual} Again 3-cells are identified if they differ by an element of $\Gamma$. Note each 3-cell is bounded by a finite number of ideal polygons, thus it is an ideal polyhedron.
\begin{definition}\label{Def:CanonicalDecomp}
The \emph{canonical polyhedral decomposition} of $M$, or simply the \emph{canonical decomposition}\index{canonical decomposition} of $M$ is the disjoint union of the 3-cells dual to the vertices of each $\mathcal{F}_j$, along with their ideal faces and ideal edges. The faces are paired by isometries of $\Gamma$.
\end{definition}
\begin{theorem}\label{Thm:CanonicalConvex}
If $M$ is a finite volume, orientable, cusped 3-manifold, with fixed embedded horoball neighborhood $H$ of all cusps, then the canonical polyhedral decomposition associated with $H$ decomposes $M$ uniquely into a finite number of convex ideal polyhedra. That is, $M$ is the quotient of the polyhedra with faces identified via face-pairing isometries,\index{face-pairing isometry} and the interiors of the polyhedra are mapped in a one-to-one manner to a subset of $M$.
\end{theorem}
\begin{proof}[Proof sketch]
The fact that there are finitely many such polyhedra follows from the fact that Ford domains are finite sided. Convexity follows from the fact that the polyhedra are cut out by finitely many ideal polygons dual to the convex Ford domain. All translates of the polyhedra under $\Gamma$ cover each $\mathcal{F}_j$, by their definition, thus the polyhedra project surjectively to $M$. If $x$ lies in the interior of a polyhedron, projecting to some $y\in M$, then any other point $\tilde{x}$ in a 3-cell projecting to $y$ differs from $x$ by an element of $\Gamma$. By our definition of the 3-cells, it follows that the two 3-cells must agree. Thus $x$ is the only point in the polyhedra that projects to $y$.
\end{proof}
\begin{example}
The 3-cells in the canonical polyhedral decomposition of the figure-8 knot complement are finite-sided polyhedra whose faces are ideal triangles,\index{ideal triangle} by \refexample{Fig8DualFaces}. In fact, they are two regular ideal tetrahedra. The canonical polyhedral decomposition of the figure-8 knot complement is exactly the decomposition we obtained in \refchap{Fig8Decomp} and \refchap{GluingCompleteness}.
\end{example}
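This count can be checked by computer. The following hedged SnapPy snippet asks for the canonical decomposition of the figure-8 knot complement; the method {\tt canonical\_retriangulation} is assumed to behave as in recent SnapPy versions, returning the canonical decomposition as a triangulation when all canonical cells are tetrahedra.
\begin{quote}
\begin{verbatim}
import snappy

M = snappy.Manifold('4_1')          # figure-8 knot complement
C = M.canonical_retriangulation()   # assumed SnapPy method
print(C.num_tetrahedra())           # expect 2: two regular ideal tetrahedra
\end{verbatim}
\end{quote}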
\begin{theorem}\label{Thm:CanonicalUnique}
Suppose $M$ and $M'$ are hyperbolic 3-manifolds, each with a single cusp, and suppose $M$ and $M'$ have combinatorially equivalent canonical decompositions. Then $M$ and $M'$ are isometric.
In particular, if $M\cong S^3-K$ and $M'\cong S^3-K'$ are knot complements, then $K$ and $K'$ are equivalent knots, up to reflection.
\end{theorem}
\begin{proof}
The canonical decomposition is geometrically dual to the Ford domain, and so this result follows from \refthm{CombinatoriallyEquivalent}.
\end{proof}
In his thesis, Gu{\'e}ritaud showed that the canonical polyhedral decomposition of a 2-bridge knot complement consists of the tetrahedra in the triangulation we described in \refchap{TwoBridge}~\cite{Gueritaud:thesis}; see also \cite{aswy}. There are a few other families of 3-manifolds whose canonical polyhedral decompositions are now known; for example, Sakuma and Weeks found canonical polyhedra for some families of link complements using symmetry~\cite{SakumaWeeks}. SnapPy computes canonical polyhedral decompositions of most reasonably sized 3-manifolds~\cite{SnapPy}, but as of the writing of this book, such decompositions were not rigorously verified. In general it seems to be a difficult problem to determine canonical polyhedral decompositions of important families of link complements. For example, as of the writing of this book, the following is unknown, asked as a question in \cite{SakumaWeeks}.
\begin{conjecture}\label{Conj:CrossingArcGeodesic}
For any hyperbolic alternating knot\index{alternating knot or link} and any alternating diagram\index{alternating diagram} of the knot, each crossing arc\index{crossing arc} is isotopic to an edge of the canonical polyhedral decomposition of the knot complement.
\end{conjecture}
In fact, to date it is not even known if crossing arcs of alternating knots are isotopic to geodesics in general.
\section{Exercises}
\begin{exercise}\label{Ex:CountableDiscrete}
Prove that a discrete subgroup of $\operatorname{PSL}(2,{\mathbb{C}})$ is countable.
\end{exercise}
\begin{exercise}
Prove that a complete hyperbolic surface contains an embedded neighborhood of its cusps that lifts to countably many horodisks in ${\mathbb{H}}^2$.
\end{exercise}
\begin{exercise}\label{Ex:DefIsoSphereDoesNotDependOnH}
Prove that the definition of the set $I(g)$\index{$I(g)$} is independent of choice of horosphere $H$.
\end{exercise}
\begin{exercise}\label{Ex:CuspVolumeMaxCusp}
Prove \refcor{CuspVolume}, that the volume of a cusp component in a maximal cusp neighborhood\index{maximal cusp neighborhood}\index{cusp!maximal cusp neighborhood} of $M$ is at least $\sqrt{3}/4$, and that if $M$ has only one cusp, then the volume of a maximal cusp neighborhood is at least $\sqrt{3}/2$.
Hints: If desired, you may use \refex{CuspVolume}. You may also apply the following theorem of B\"or\"oczky \cite{boroczky}, generalizing \refthm{Boroczky}.
\begin{theorem}[B\"or\"oczky density of disks in the torus]\label{BoroczkyTorus}\index{B{\"o}r{\"o}czky cusp density theorem!3-dimensional}\index{cusp density theorem!2-dimensional}
Let $D$ be a collection of disks of the same radius, embedded disjointly in the torus $T$. Then
\[ \frac{\operatorname{area}(T\cap D) }{\operatorname{area}(T)} \leq \frac{\pi}{2\sqrt{3}}. \]
\end{theorem}
\end{exercise}
\begin{exercise}
Extend the definition of a Ford domain to complete hyperbolic surfaces, first for those with one cusp, then generalize to finitely many cusps. Using the natural extension of the definition of a fundamental domain to hyperbolic surfaces, prove that the object you have defined is a convex fundamental domain for the surface.
\end{exercise}
\begin{exercise}
This exercise uses SnapPy \cite{SnapPy} to investigate the combinatorics of Ford domains associated with different choices of maximal cusp neighborhoods.\index{maximal cusp neighborhood}\index{cusp!maximal cusp neighborhood} The manifold {\tt{m125}} in the SnapPy census is a hyperbolic manifold with two cusps, isometric to the complement of the $(-2,3,8)$-pretzel link.
\begin{enumerate}
\item Using SnapPy, find at least five different combinatorially inequivalent Ford domains for {\tt{m125}}.
\item Find a Ford domain for {\tt{m125}} where the dual canonical decomposition is a triangulation. Find a Ford domain where it is not a triangulation.
\end{enumerate}
\end{exercise}
\begin{exercise}\label{Ex:Dual2Cell}
Prove \reflem{Dual2Cell}: that the 2-cell dual to an edge of $\mathcal{F}$ is a totally geodesic ideal polygon with $n\geq 3$ sides, where $n$ is the number of cusps equidistant from the projection of the edge to $M$.
\end{exercise}
\chapter{Algebraic Sets and the $A$-Polynomial}\label{Chap:Character}
\blfootnote{Jessica S. Purcell, Hyperbolic Knot Theory}
In this chapter, we introduce a polynomial invariant of a knot that is obtained from hyperbolic geometry. There are three closely related polynomials that appear in the literature and in calculations. We introduce all three in this chapter and discuss the relationships between them. We need to introduce a small amount of algebraic geometry, to consider how hyperbolic structures deform. M.~Culler and P.~Shalen were the first to investigate this material \cite{CullerShalen:Varieties}. The $A$-polynomial was originally introduced in \cite{APoly}. However, we start with a slightly different perspective.
\section{The gluing variety}
Suppose $M$ is a 3-manifold with a topological ideal triangulation,\index{topological ideal triangulation} as in \refdef{TopIdealTriang}. We have seen that associated with each ideal tetrahedron is an edge invariant,\index{edge invariant} namely a complex number $z(e)$ corresponding to an edge of the tetrahedron, as in \refdef{EdgeInvariant}, with all edge invariants of a tetrahedron satisfying the relations of \reflem{EdgeInvariants}. Finally, we have seen in \refthm{Gluing} that a choice of edge invariants gives a hyperbolic structure on $M$ if and only if the $z(e)$ satisfy the edge gluing equations.\index{gluing equations!edge equations}\index{edge gluing equations}
Suppose that $n$ ideal tetrahedra form the topological ideal triangulation of $M$. Then we obtain $3n$ edge invariants: $z_1, \dots, z_n$, as well as $z_i'=1/(1-z_i)$ and $z_i''=(z_i-1)/z_i$ for $i=1, \dots, n$. The edge gluing equations tell us that the product of those edge invariants that glue to the same edge of $M$ must be $1$. Clearing denominators, each edge gluing equation becomes a polynomial equation in $z_1, \dots, z_n$.
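SnapPy will produce these equations for a triangulated manifold, along with the edge invariants solving them for the complete structure. A brief hedged sketch follows; the exact encoding of the output of {\tt gluing\_equations} is not described here and should be read from the SnapPy documentation.
\begin{quote}
\begin{verbatim}
import snappy

M = snappy.Manifold('4_1')
# Rows encode the exponents appearing in the edge (and cusp) equations;
# see the SnapPy documentation for the precise format.
print(M.gluing_equations())
# Edge invariants z_i for the complete hyperbolic structure:
print(M.tetrahedra_shapes('rect'))
\end{verbatim}
\end{quote}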
\begin{definition}\label{Def:AlgebraicSet}
An \emph{affine algebraic set}\index{affine algebraic set} is a subset of ${\mathbb{C}}^N$ that is defined as the set of zeros of a system of polynomial equations with coefficients in ${\mathbb{C}}$ and $N$ variables.
\end{definition}
The union of two affine algebraic sets is an affine algebraic set, and the intersection of arbitrarily many affine algebraic sets is an affine algebraic set (\refex{Zariski}). We define a topology on ${\mathbb{C}}^N$, called the \emph{Zariski topology},\index{Zariski topology} by taking affine algebraic sets to be closed sets.
As a very simple preliminary example, the subset of ${\mathbb{C}}^2$ defined by $xy=0$ is an affine algebraic set.
\begin{definition}\label{Def:ReducibleAlgebraicSet}
An affine algebraic set is \emph{reducible}\index{reducible affine algebraic set}\index{affine algebraic set!reducible, irreducible} if it can be expressed as the union of two proper affine algebraic sets. It is \emph{irreducible}\index{irreducible affine algebraic set} if not.
It is a corollary of the Hilbert basis theorem (see Shalen \cite{shalen:survey}) that any affine algebraic set is a finite union of irreducible affine algebraic sets; we call these the \emph{irreducible components}\index{irreducible components}.
An irreducible affine algebraic set is called an \emph{affine variety}.\index{affine variety}
\end{definition}
The affine algebraic set defined by $xy=0$ is not an affine variety;\index{affine variety} it is reducible, with irreducible affine algebraic sets defined by polynomials $x=0$ and $y=0$. These form irreducible components. Note they are not disjoint. In general, irreducible components\index{irreducible components} of affine algebraic sets are not necessarily disjoint.
\begin{corollary}\label{Cor:GluingAlgebraic}
Let $\mathcal{T}$ be a topological ideal triangulation of a 3-manifold $M$, with $n$ ideal tetrahedra. Then the set of points in ${\mathbb{C}}^n$ satisfying the edge gluing equations associated with $\mathcal{T}$ forms an affine algebraic set. \qed
\end{corollary}
\begin{definition}\label{Def:GluingVariety}
Suppose $M$ has ideal triangulation $\mathcal{T}$ made up of $n$ ideal tetrahedra (so $n$ gluing equations by \refex{edge=tet}).
The \emph{gluing variety}\index{gluing variety} associated with $\mathcal{T}$, denoted $\mathcal{D}(\mathcal{T})$, is the affine algebraic subset of ${\mathbb{C}}^n\times {\mathbb{C}}$ consisting of points $(z_1, \dots, z_n, t)$ satisfying the edge gluing equations as in \refcor{GluingAlgebraic}, as well as the equation
\[ t\prod_{i=1}^n z_i(1-z_i)=1. \]
This last equation is called the \emph{degeneracy equation},\index{degeneracy equation} and ensures that the parameters $z_i$ do not degenerate to $0$ or $1$.
\end{definition}
\begin{remark}
Degeneracy is handled differently by different authors in the literature. For example, rather than including one degeneracy equation, some authors require that for each $i$, the equations
\[ z_i(1-z_i'')=1, \quad z_i'(1-z_i)=1, \quad z_i''(1-z_i')=1 \]
hold, in addition to gluing equations. This also rules out $z_i=0,1$.
Similarly, some authors require that for each $i$, the following equations hold in addition to the gluing equations:
\[ z_iz_i'z_i''=-1 \quad \mbox{and} \quad z_i + (z_i')^{-1} -1 = 0. \]
Encoded in these equations are the relationships between $z_i$, $z_i'$, and $z_i''$ from \reflem{EdgeInvariants}. Again a solution does not allow $z_i=0,1$.
\end{remark}
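The relations quoted in this remark can be confirmed symbolically; none of this is specific to a particular manifold. A minimal check in Python with SymPy:
\begin{quote}
\begin{verbatim}
from sympy import symbols, simplify

z = symbols('z')
zp = 1/(1 - z)        # z_i'
zpp = (z - 1)/z       # z_i''

# First set of non-degeneracy equations:
assert simplify(z*(1 - zpp) - 1) == 0
assert simplify(zp*(1 - z) - 1) == 0
assert simplify(zpp*(1 - zp) - 1) == 0
# Second set:
assert simplify(z*zp*zpp + 1) == 0
assert simplify(z + 1/zp - 1) == 0
\end{verbatim}
\end{quote}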
Note that the gluing variety is an affine algebraic set, but it is not necessarily irreducible.\index{irreducible affine algebraic set} In particular, the gluing variety may not satisfy the definition of an affine variety,\index{affine variety} so the terminology is somewhat misleading.
If $M$ has a complete hyperbolic structure, and $\mathcal{T}$ can be given a collection of edge invariants to form a geometric ideal triangulation for this structure, as in \refdef{GeomTriang}, then the gluing variety is nonempty. That is, there is an irreducible component\index{irreducible component}\index{irreducible affine algebraic set} of the gluing variety that contains this complete hyperbolic structure. The irreducible component containing the complete hyperbolic structure (when it exists) really is an affine variety.\index{affine variety}
\begin{definition}\label{Def:CanonicalComponent}
Suppose $\mathcal{T}$ is an ideal triangulation of $M$ that can be assigned edge invariants to form a geometric ideal triangulation, as in \refdef{GeomTriang}. Then the irreducible component\index{irreducible component} of $\mathcal{D}(\mathcal{T})$ containing the geometric triangulation is called the \emph{canonical component}\index{canonical component}\index{canonical component!gluing variety} of the gluing variety, and it is denoted $\mathcal{D}_0(\mathcal{T})$.
\end{definition}
Some examples are in order.
\begin{example}[Figure-8 knot]
Take the figure-8 knot complement with the ideal triangulation of \refchap{Fig8Decomp}. This has two ideal tetrahedra, with edge invariants determined by $z$ and $w$ in ${\mathbb{C}}$. The two edge gluing equations of the figure-8 knot actually both give a single polynomial equation, which we calculated in \refeqn{Fig8Gluing}:
\begin{equation}\label{Eqn:Fig8GluingPolynomial}
z(z-1)w(w-1)=1, \quad \mbox{ or } \quad z^2w^2 - z^2w - zw^2 + zw -1 = 0.
\end{equation}
Note that in this case, the degeneracy equation\index{degeneracy equation} $t z (1-z)w(1-w)=1$ becomes simply $t=1$, and so the gluing variety\index{gluing variety} is the set of points $(z, w, 1)$ in ${\mathbb{C}}^2\times {\mathbb{C}}$ satisfying \refeqn{Fig8GluingPolynomial}.
Note that the complete hyperbolic structure occurs when
\[z=w=(1+i\sqrt{3})/2,\]
as shown in \refexamp{Fig8Completeness}. The triple
\[ \left( (1+i\sqrt{3})/2, (1+i\sqrt{3})/2, 1 \right) \in {\mathbb{C}}^2\times{\mathbb{C}} \] satisfies \refeqn{Fig8GluingPolynomial}.
\end{example}
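As a quick numerical sanity check, the triple above does satisfy the gluing polynomial; the following short self-contained Python computation verifies it to floating-point accuracy.
\begin{quote}
\begin{verbatim}
z = w = (1 + 3**0.5*1j)/2   # shape of both tetrahedra, complete structure
print(abs(z*(z - 1)*w*(w - 1) - 1))                 # approximately 0
print(abs(z**2*w**2 - z**2*w - z*w**2 + z*w - 1))   # approximately 0
\end{verbatim}
\end{quote}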
\begin{example}[$6_1$ knot]
In \refchap{GluingCompleteness}, we found a triangulation $\mathcal{T}_1$ of the $6_1$ knot with five tetrahedra. There is also a triangulation $\mathcal{T}_2$ of the $6_1$ knot with only four tetrahedra; for example, this triangulation is stored in the SnapPy census of manifolds \cite{SnapPy}. Note that the gluing variety\index{gluing variety} corresponding to $\mathcal{T}_1$ is a subset of ${\mathbb{C}}^6$, while that corresponding to $\mathcal{T}_2$ is a subset of ${\mathbb{C}}^5$. Thus even though the triangulations give the same manifold, the gluing varieties are different, living in different spaces.
\end{example}
\begin{example}[Triangulation with inessential edge]
Dunfield notes that the figure-8 knot complement admits a topological ideal triangulation $\mathcal{T}$ consisting of five tetrahedra where one of the edges $E$ is 1-valent: it meets only one of the tetrahedra \cite{Dunfield:Appendix}. Moreover, this edge $E$ is \emph{inessential}\index{inessential edge}, meaning for any horoball neighborhood $H$ of the knot, the arc $E\cap (S^3-H)$ is homotopic rel endpoints $E\cap H$ into $\partial H$.
For this triangulation $\mathcal{T}$, note that the edge gluing equation corresponding to the inessential edge is the equation $z_1=1$. Because it is impossible for this equation to hold simultaneously with the degeneracy equation,\index{degeneracy equation} for this triangulation the gluing variety\index{gluing variety} $\mathcal{D}(\mathcal{T})$ is empty.
\end{example}
\begin{remark}
Suppose $M$ is the interior of a compact manifold with torus boundary. Segerman and Tillmann showed that for any ideal triangulation $\mathcal{T}$ of $M$, the gluing variety\index{gluing variety} associated with $\mathcal{T}$ is nonempty if and only if all edges in the triangulation are essential \cite{SegermanTillmann}.
\end{remark}
\subsection{The ${A_{\rm Hyp}}$-polynomial}
We now describe a two-variable polynomial ${A_{\rm Hyp}}(\ell,m)$ associated with any triangulated knot complement, or indeed any triangulated 3-manifold that is the interior of a compact manifold with a single torus boundary component. It was introduced by Champanerkar in \cite{Champanerkar:Thesis}, where it is related to a more common polynomial that we will describe later in this chapter. For now, we have the tools to understand the ${A_{\rm Hyp}}$-polynomial and compute examples, so we do this first.
For a fixed triangulation $\mathcal{T}$, the gluing variety $\mathcal{D}(\mathcal{T})$ is defined using edge gluing equations only. Recall that we also have two completeness equations for each cusp, by \refprop{CompletenessEqns}. That is, given a knot complement $S^3-K$ with meridian $[\mu]$ and longitude $[\lambda]$ in $\pi_1(\partial N(K))$, there are associated functions $H(\mu)$ and $H(\lambda)$, which are rational functions in the $z_i$. Define a map $\mathcal{H}\from \mathcal{D}(\mathcal{T}) \to {\mathbb{C}}\times{\mathbb{C}}$ by
\[ \mathcal{H}(z_1, \dots, z_n) = (H(\lambda)(z_1, \dots, z_n), H(\mu)(z_1, \dots, z_n)) = (\ell, m). \]
\begin{definition}
Let $Y_i$ be an irreducible component\index{irreducible component} of $\mathcal{D}(\mathcal{T})$ for which the closure of $\mathcal{H}(Y_i)$ in ${\mathbb{C}}\times{\mathbb{C}}$ (in the Zariski topology) is a $1$-dimensional affine algebraic set, so that it has a defining polynomial, and let $Z$ be the union of all such components.
The image of $Z$ under $\mathcal{H}$ is called the \emph{holonomy variety}\index{holonomy variety} with respect to the triangulation $\mathcal{T}$. The defining polynomial of the closure of the image is denoted by ${A^{\calT}_{\rm Hyp}}(\ell, m)$. We occasionally omit the notation $\mathcal{T}$ when the triangulation is understood. The polynomial is called the \emph{${A_{\rm Hyp}}$-polynomial}\index{${A_{\rm Hyp}}$-polynomial} or \emph{hyperbolic $A$-polynomial}.\index{hyperbolic $A$-polynomial}\index{$A$-polynomial!hyperbolic, ${A_{\rm Hyp}}$}
\end{definition}
\begin{example}[Figure-8 knot]\label{Ex:HPolynomialFig8}
For $\mathcal{T}$ the usual two-tetrahedron triangulation of the figure-8 knot $K$, the ${A^{\calT}_{\rm Hyp}}$-polynomial is computed from \refeqn{Fig8GluingPolynomial} together with two rational equations, coming from the completeness equations, that express $m$ and $\ell$ in terms of the edge invariants. We computed completeness equations in \refexample{Fig8EdgeSequence}, for curves $\alpha$ and $\beta$. The curve $\beta$ is the meridian of the figure-8 knot. The curve $\alpha$ generates $\pi_1(\partial N(K))$ along with $\beta$, but it is not the standard longitude as defined in \refrem{MeridLongitude}. Still, we may use it to compute a polynomial. That is, we require
\begin{align*}
m &= z_2^{-1}w_1 = (1-z) w, \quad \mbox{and} \\
\ell & = \left(\frac{z_2z_3}{w_2 w_3}\right)^2 = \frac{1}{(1-z)^2}\frac{(z-1)^2}{z^2}\frac{(1-w)^2}{1}\frac{w^2}{(w-1)^2} = \frac{w^2}{z^2}
\end{align*}
Finding the polynomial ${A_{\rm Hyp}}$, as an equation in $m$ and $\ell$ only, is now a problem in elimination theory. One way to solve it is to form an ideal generated by the above equations in the ring ${\mathbb{Z}}[z,w,\ell,m]$ and compute a Groebner basis that eliminates $z$ and $w$, using tools from computational algebraic geometry. For our examples, we did this using Mathematica. The following command returns a polynomial in $m$ and $\ell$.
\begin{quote}
\begin{verbatim}
(* GroebnerBasis expects polynomials, implicitly set equal to zero, so each
   equation is written in the form lhs - rhs; M and L stand for m and ell. *)
system = {z*(z-1)*w*(w-1) - 1, w*(1-z) - M, w^2 - L*z^2};
elim = {z, w}; keep = {M, L};
GB = GroebnerBasis[system, keep, elim];
Print[GB];
\end{verbatim}
\end{quote}
This computation yields the following 2-variable polynomial:\index{${A_{\rm Hyp}}$-polynomial}\index{$A$-polynomial!hyperbolic, ${A_{\rm Hyp}}$}
\begin{multline*}
{A_{\rm Hyp}}(\ell, m) = \ell - 2\ell m - 3 \ell m^2 - \ell^2 m^2 + 2 \ell m^3 + 6 \ell m^4 \\
+ 2 \ell m^5 - m^6-3 \ell m^6 - 2 \ell m^7 + \ell m^8.
\end{multline*}
\end{example}
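The same elimination can be cross-checked in Python with SymPy. The sketch below assumes that a lexicographic Groebner basis with $z$ and $w$ listed first is used for elimination; the basis element involving only {\tt M} and {\tt L} should then agree with the polynomial displayed above, up to sign and a monomial factor.
\begin{quote}
\begin{verbatim}
from sympy import symbols, groebner

z, w, M, L = symbols('z w M L')
polys = [z*(z - 1)*w*(w - 1) - 1,   # gluing equation
         w*(1 - z) - M,             # m = (1 - z) w
         w**2 - L*z**2]             # ell z^2 = w^2

# With lex order, basis elements free of z and w generate the
# elimination ideal in M and L.
G = groebner(polys, z, w, M, L, order='lex')
for p in G.exprs:
    if not (p.free_symbols & {z, w}):
        print(p)
\end{verbatim}
\end{quote}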
There were several choices made in the last example, particularly concerning representatives for the curves that give the completeness equations. We could change these in several different ways.
\begin{figure}
\import{Figures/Ch15_APoly/}{F15-01-CuspTr.eps_tex}
\caption{The cusp triangulation of the figure-8 knot complement, along with isotopic meridians $\beta$ and $\beta'$, and a longitude $\alpha'$.}
\label{Fig:Fig8CuspTriangAgain}
\end{figure}
In the cusp triangulation of the figure-8 knot, reproduced in \reffig{Fig8CuspTriangAgain},
a representative $\beta$ for the meridian was chosen to run from the bottom of the triangle labeled $a$, across the triangle labeled $h$, back to the triangle labeled $a$, giving $H(\beta) = w_1/z_2$. We could have chosen a parallel curve, for example the curve $\beta'$ running from the base of $b$, across $g$ to the base of $b$, giving $H(\beta') = w_2/z_1$.
Recall that in \refchap{GluingCompleteness}, to define $H([\gamma])$ we require that a curve $\gamma$ on a cusp triangulation cuts off a single corner of each triangle it enters. Such a curve is called a \emph{normal curve}.\index{normal curve}
\begin{proposition}\label{Prop:AhypRepChoice}
Suppose $\alpha, \alpha' \in [\lambda]$ are distinct normal curves in the homotopy class of $\lambda$. Then replacing $H(\alpha)$ by $H(\alpha')$ in the definition leaves the polynomial ${A_{\rm Hyp}}$ unchanged. Similarly for $[\mu]$.
\index{$A$-polynomial!hyperbolic, ${A_{\rm Hyp}}$}\index{${A_{\rm Hyp}}$-polynomial}
\end{proposition}
\begin{proof}
This follows from \refex{HomotopyInvarianceH}: $H([\alpha])$ is independent of choice of $\alpha$. The reason why is that if normal curves $\alpha$ and $\alpha'$ are ambient isotopic\index{ambient isotopic} on a triangulated torus, then they differ by sliding past edges of the triangulation. Because the edge gluing equations are satisfied, the product of all tetrahedron edge parameters $z_i$ meeting at an edge is $1$, so each such slide multiplies the function $H$ by a factor of $1$. It follows that $H(\alpha)$ and $H(\alpha')$ agree on the gluing variety, and the resulting polynomial ${A_{\rm Hyp}}$ is unchanged.
\end{proof}
We also could have chosen a different basis for $H_1(\partial N(K))$. For example, if $\alpha'$ denotes the standard longitude of the figure-8 knot, shown in \reffig{Fig8CuspTriangAgain}, then it can be shown that
\[
H(\alpha') = z_1^{-1}\cdot w_3 \cdot z_2 \cdot w_3^{-1}\cdot z_3 \cdot w_2^{-1} \cdot z_3^{-1}\cdot w_1.
\]
Changing the basis for $H_1(\partial N(K))$ does affect ${A_{\rm Hyp}}$, but in a well-understood way.
\begin{proposition}\label{Prop:AhypChangeBasis}
Suppose $p, q, r, s$ are integers satisfying $ps-rq=1$, so that $\langle \ell^p m^q,\ell^r m^s \rangle$ is another basis for $H_1(\partial N(K))$. Then there exist integers $a, b$ such that the polynomial ${A_{\rm Hyp}}'$ computed with respect to the new basis satisfies \index{$A$-polynomial!hyperbolic, ${A_{\rm Hyp}}$}\index{${A_{\rm Hyp}}$-polynomial}
\[ {A_{\rm Hyp}}(\ell^p m^q, \ell^r m^s) = \pm\ell^a m^b\, {A_{\rm Hyp}}'(\ell, m).
\]
\end{proposition}
\begin{example}\label{Example:Fig8_AHypPoly}
The standard longitude of the figure-8 knot can be shown to be represented by the curve $\alpha'$ of \reffig{Fig8CuspTriangAgain}, which is isotopic to $\alpha\beta^2$. Replacing $\alpha$ by $\alpha'$ yields the polynomial:
\begin{multline*}
{A_{\rm Hyp}}'(\ell,m) := \ell - 2 \ell m - 3 \ell m^2 + 2 \ell m^3 - m^4 + 6 \ell m^4 \\ - \ell^2 m^4 + 2 \ell m^5 - 3 \ell m^6 - 2 \ell m^7 + \ell m^8.
\end{multline*}
Note that this new polynomial satisfies ${A_{\rm Hyp}}'(\ell,m) = m^{-2}{A_{\rm Hyp}}(\ell m^2, m)$. \index{$A$-polynomial!hyperbolic, ${A_{\rm Hyp}}$}\index{${A_{\rm Hyp}}$-polynomial}
\end{example}
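The relation just stated between the two polynomials can be confirmed by direct expansion. A short SymPy check, with both polynomials copied from the examples above:
\begin{quote}
\begin{verbatim}
from sympy import symbols, expand

l, m = symbols('l m')
A = (l - 2*l*m - 3*l*m**2 - l**2*m**2 + 2*l*m**3 + 6*l*m**4
     + 2*l*m**5 - m**6 - 3*l*m**6 - 2*l*m**7 + l*m**8)
Aprime = (l - 2*l*m - 3*l*m**2 + 2*l*m**3 - m**4 + 6*l*m**4
          - l**2*m**4 + 2*l*m**5 - 3*l*m**6 - 2*l*m**7 + l*m**8)

# A_Hyp'(l, m) should equal m**(-2) * A_Hyp(l*m**2, m):
assert expand(Aprime - A.subs(l, l*m**2)/m**2) == 0
print("change of basis relation verified")
\end{verbatim}
\end{quote}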
\section{Representations of knots}
The original $A$-polynomial was defined in \cite{APoly} using representations of the fundamental group of a knot complement into the group $\operatorname{SL}(2,{\mathbb{C}})$. In this section, we work our way up to the original definition of the $A$-polynomial. Our goals are first, to give a taste of the interesting mathematics along the way, but more importantly, to relate the polynomial ${A_{\rm Hyp}}$ to the original $A$-polynomial.
\subsection{Wirtinger presentation}
We take a bit of a detour in this subsection, to recall a few important results on presentations of knot groups. The tools we need follow almost immediately from the \emph{Wirtinger presentation},\index{Wirtinger presentation} which we review now. See also \cite{rolfsen}.
To obtain the presentation, start with a diagram of the knot, which we view as a collection of arcs, with each arc starting and ending at undercrossings and running along only overcrossings (if any) between them. To simplify the discussion, and to fix notation, choose an orientation of the knot (either orientation is fine). We say that a crossing is \emph{positive}\index{crossing!positive} or \emph{negative}\index{crossing!negative} based on the orientation of the arcs at the crossing, as in \reffig{PosNegWirtinger}.
\begin{figure}
\includegraphics{Figures/Ch15_APoly/F15-02-Cross.eps}
\caption{A positive\index{crossing!positive} crossing (left), and a negative\index{crossing!negative} crossing (right).}
\label{Fig:PosNegWirtinger}
\end{figure}
Now each arc $a_i$ of the diagram is oriented. There will be one generator $g_i$ of the fundamental group for each arc $a_i$; the generator can be viewed as a loop beginning at a basepoint that lies high above the plane of projection, looping around $a_i$ in a positive direction, then running back to the basepoint. Relators for the group presentation come from crossings, as follows. Suppose arc $a_i$ runs along an overcrossing, meeting endpoints of arcs $a_j$ and $a_k$ at the crossing. If it is a positive crossing, then we have the relation $g_j g_i g_k^{-1} g_i^{-1} = 1$. If it is a negative crossing, we have instead $g_j g_i^{-1} g_k^{-1} g_i=1$. See \reffig{WirtingerRelators}.
\begin{figure}
\import{Figures/Ch15_APoly/}{F15-03-Wirt.eps_tex}
\caption{Relators for positive (left) and negative (right) crossings}
\label{Fig:WirtingerRelators}
\end{figure}
\begin{theorem}[Wirtinger presentation]\label{Thm:Wirtinger}
If $K$ has a diagram with $n$ crossings, then $\langle g_1, \dots, g_n \mid r_1, \dots, r_n \rangle$, obtained as above, is a presentation of the fundamental group of the knot complement. Moreover, one of the relators is redundant and can be removed from the presentation.
\end{theorem}
\begin{proof}
The proof is a standard exercise in algebraic topology, using the Seifert--Van Kampen theorem. We include it as \refex{SVK}.
\end{proof}
We give a few immediate corollaries.
\begin{corollary}\label{Cor:Homology}
The first homology group of a knot complement is always isomorphic to ${\mathbb{Z}}$, generated by (the class of) the meridian.
\end{corollary}
\begin{proof}
The first homology group $H_1(S^3-K)$ is the abelianization of the fundamental group $\pi_1(S^3-K) \cong \langle g_1, \dots, g_n \mid r_1, \dots, r_n\rangle$. Under the abelianization, each relator becomes $g_j=g_k$. Thus the presentation reduces to $\langle g_1,\dots,g_n \mid g_1=\dots=g_n\rangle \cong {\mathbb{Z}}$, generated by the class of the meridian $g_1$.
\end{proof}
Note that each generator of the Wirtinger presentation is a curve that bounds a disk in $S^3$, and can be isotoped to an embedded curve on the boundary of a neighborhood of the knot.
Generators of the Wirtinger presentation are meridians (\refdef{MeridianLongitude}).
This leads to another corollary.
\begin{corollary}\label{Cor:ParabolicGenerator}
Let $K$ be a knot whose complement admits a hyperbolic structure. Then the holonomy group\index{holonomy group} of the complement of $K$ has a presentation in which all generators are parabolic.\index{parabolic}
\end{corollary}
Recall from \refdef{holonomy} that the holonomy group is the image of the holonomy\index{holonomy} map $\rho\from \pi_1(S^3-K)\to\operatorname{PSL}(2,{\mathbb{C}})$, encoding the hyperbolic structure.
\begin{proof}[Proof of \refcor{ParabolicGenerator}]
Each generator of the Wirtinger presentation is a meridian. Each meridian is mapped by the holonomy\index{holonomy} map to an element of $\operatorname{PSL}(2,{\mathbb{C}})$ that commutes with a longitude. Then \refcor{ZxZSubgroup} (${\mathbb{Z}}\times{\mathbb{Z}}$ subgroups) implies that meridian and longitude are mapped to parabolic\index{parabolic} elements fixing the same point on $\partial{\mathbb{H}}^3$.
\end{proof}
\begin{lemma}
Given a Wirtinger presentation of the fundamental group of a knot complement, we may compute the longitude as a product of generators as follows.
\begin{enumerate}
\item Beginning on an arc $a_k$ of the knot, travel along the knot in the direction of its orientation until returning to the starting point.
\item Write $g_i$ when traversing an undercrossing under arc $a_i$ in the positive direction, and write $g_i^{-1}$ when traversing in the negative direction.
\item Finally, multiply by $g_k^p$ where the power $p$ is chosen such that the total exponent sum is zero.
\end{enumerate}
\end{lemma}
\begin{proof}
The loop obtained as the product of generators corresponding to undercrossings as above will be homotopic to some longitude. We want to ensure it is homologically trivial. Consider its image in the abelianization of the fundamental group $H_1(S^3-K)\cong {\mathbb{Z}}$. Each $g_i$ maps to the generator $g$ of ${\mathbb{Z}}$ in homology. Therefore if the sum of the exponents in the product is zero, the image of the element in homology is trivial.
\end{proof}
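Step (3) is pure bookkeeping. As a small illustration (purely illustrative code, with words stored as lists of generator index and exponent pairs), the word that will appear in the next example already has exponent sum zero:
\begin{quote}
\begin{verbatim}
def exponent_sum(word):
    # word is a list of (generator index, exponent) pairs
    return sum(exp for (_, exp) in word)

word = [(3, 1), (2, -1), (1, 1), (4, -1)]   # g3 * g2^-1 * g1 * g4^-1
print(exponent_sum(word))   # 0, so no extra power of g_k is needed
\end{verbatim}
\end{quote}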
\begin{example}\label{Example:Fig8Wirtinger}
We use the Wirtinger presentation to compute the fundamental group of the figure-8 knot complement. See \reffig{Fig8Wirtinger}. \index{Wirtinger presentation}
\begin{figure}
\import{Figures/Ch15_APoly/}{F15-04-WirtF8.eps_tex}
\caption{Generators of the fundamental group of the figure-8 knot complement}
\label{Fig:Fig8Wirtinger}
\end{figure}
The generators are $g_1$, $g_2$, $g_3$, $g_4$. The relators are
\[ g_2 g_1^{-1}g_3^{-1}g_1=1 \quad g_4g_3^{-1}g_1^{-1}g_3=1 \quad
g_3g_2g_4^{-1}g_2^{-1}=1 \quad g_1g_4g_2^{-1}g_4^{-1}=1.\]
Use the first and third equations to eliminate $g_3$ and $g_4$, respectively. Substitute into the fourth equation to obtain the single relation
\[ g_1g_2^{-1}g_1g_2g_1^{-1}g_2 = g_2^{-1}g_1g_2g_1^{-1}g_2g_2. \]
Choose the meridian $M$ to be $g_1$.
Now starting on the arc of $g_1$ (right side of the diagram), traverse the knot by running up the arc, obtaining a longitude of the form
\begin{align*} L & = g_3g_2^{-1} g_1 g_4^{-1} \\
& = (g_1g_2g_1^{-1})g_2^{-1}g_1(g_2^{-1}g_1g_2^{-1}g_1^{-1}g_2).
\end{align*}
Note that the sum of exponents in the example is zero, so this is the homologically trivial longitude.
\end{example}
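SnapPy will also return a presentation of a knot group, typically shorter than the Wirtinger presentation, together with words for the meridian and longitude and matrices for the holonomy of each generator. A hedged sketch follows; the method names are as in recent SnapPy versions and should be checked against the documentation.
\begin{quote}
\begin{verbatim}
import snappy

M = snappy.Manifold('4_1')
G = M.fundamental_group()
print(G.generators(), G.relators())
print(G.meridian(), G.longitude())   # peripheral words in the generators
# For the discrete faithful representation, the meridian maps to a
# parabolic matrix (trace +2 or -2), cf. the corollary above:
print(G.SL2C(G.meridian()))
\end{verbatim}
\end{quote}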
\subsection{Representation space and character varieties}
In this subsection, we discuss representations of fundamental groups of 3-manifolds, particularly knot complements. We will only touch upon some of the details here; see \cite{shalen:survey} for a more comprehensive survey.
Suppose $M$ is a 3-manifold that is the interior of the compact manifold $\overline{M}$ with a single torus boundary component, and suppose that $M$ admits an ideal triangulation $\mathcal{T}$. Then associated to $M$ is a gluing variety.
Any point in the gluing variety gives a representation of $\pi_1(M)$ into $\operatorname{PSL}(2,{\mathbb{C}})$, as follows. A point in the gluing variety gives a collection of edge parameters for the tetrahedra in $\mathcal{T}$ that satisfy the edge gluing equations. These lead to a $(G,X)$-structure on $M$,\index{$(G,X)$-structure} where $G=\operatorname{PSL}(2,{\mathbb{C}})$ and $X={\mathbb{H}}^3$, by setting each tetrahedron in $\mathcal{T}$ to be the ideal hyperbolic tetrahedron with given edge parameters. Then there is an associated developing map\index{developing map} as in \refdef{DevelopingMap}. A group element $\alpha\in\pi_1(M)$ determines a holonomy\index{holonomy} element $g_\alpha\in\operatorname{PSL}(2,{\mathbb{C}})$, as in \refdef{holonomy}. We have seen that the holonomy gives a homomorphism $\rho\from \pi_1(M)\to \operatorname{PSL}(2,{\mathbb{C}})$.
Thus holonomy is a representation
\[ \rho\from\pi_1(M)\to\operatorname{PSL}(2,{\mathbb{C}}) \cong \operatorname{SL}(2,{\mathbb{C}})/\pm{\mathrm{Id}}. \]
In fact, the original study of $A$-polynomials in \cite{APoly} considered representations to $\operatorname{SL}(2,{\mathbb{C}})$ rather than $\operatorname{PSL}(2,{\mathbb{C}})$, to avoid certain difficulties that arise. We begin with this perspective as well.
As usual, we focus on 3-manifolds that arise as knot complements. The following ensures that we have interesting representations to work with.
\begin{proposition}\label{Prop:HolonomyLiftstoSL}
Any representation $\rho\from\pi_1(S^3-K)\to\operatorname{PSL}(2,{\mathbb{C}})$ lifts to two representations $\widetilde{\rho}, \tilde{\rho}'\from \pi_1(S^3-K)\to\operatorname{SL}(2,{\mathbb{C}})$.
\end{proposition}
\begin{proof}
Let $\langle g_1, \dots, g_n \mid r_1, \dots, r_n \rangle$ be a Wirtinger presentation\index{Wirtinger presentation} for $\pi_1(S^3-K)$. Starting with $g_1$, the representation $\rho$ assigns to $g_1$ an element $\rho(g_1)$ of $\operatorname{PSL}(2,{\mathbb{C}})$. This lifts to two matrices, say $A_1$ and $-A_1$ in $\operatorname{SL}(2,{\mathbb{C}})$. Choose $\widetilde{\rho}(g_1)=A_1$, and $\tilde{\rho}'(g_1)=-A_1$. Now consider the arc $a_1$ of the knot diagram corresponding to the generator $g_1$. Follow that arc to its endpoint. This will be an undercrossing, with an associated relator $g_1=g_j^{\pm 1}g_kg_j^{\mp 1}$, so $g_k$ is conjugate to $g_1$ by a power of $g_j$. Since conjugation is independent of the choice of lift of $\rho(g_j)$, because $(-A)X(-A)^{-1}=AXA^{-1}$, there is a unique choice for $\widetilde{\rho}(g_k)$ and $\tilde{\rho}'(g_k)$. Continue along the arc associated with $g_k$, obtaining a unique choice for the representations at its endpoint. Continuing in this manner, we traverse the entire knot, and obtain a unique choice for each generator, obtaining two well-defined lifts $\widetilde{\rho}$ and $\tilde{\rho}'$.
\end{proof}
In fact, it can be shown that for any complete hyperbolic manifold $M$ the holonomy\index{holonomy} representation always lifts to a representation into $\operatorname{SL}(2,{\mathbb{C}})$ \cite{CullerShalen:Varieties}, but we only present the result above on knot complements.
We now consider representations of a group $G$ into $\operatorname{SL}(2,{\mathbb{C}})$.
Suppose $G$ has finite presentation $\langle g_1, \dots, g_n\mid r_1, \dots , r_m\rangle$. Then a representation $\rho\from G \to \operatorname{SL}(2,{\mathbb{C}})$ is determined by $(\rho(g_1), \dots, \rho(g_n))$, each of which is a matrix in $\operatorname{SL}(2,{\mathbb{C}})$:
\[ \rho(g_i) = \mat{a_i& b_i \\ c_i & d_i}.\]
We therefore may view the space of representations $R_{\operatorname{SL}}(G)$ as a subset of the complex space ${\mathbb{C}}^{4n}$.
Any representation of $G$ must satisfy the relators. Each relator $r_i$ is a word in the generators of $G$. Let $r_i(\rho(g_1), \dots, \rho(g_n))$ be the word with $\rho(g_j)^{\pm 1}$ substituted for each $g_j^{\pm 1}$ in $r_i$. Then \[(a_1, b_1, c_1, d_1, \dots, a_n, b_n, c_n, d_n)\in{\mathbb{C}}^{4n}\]
lies in $R_{\operatorname{SL}}(G)$ if and only if the following hold:
\begin{equation}\label{Eqn:Determinant}
a_id_i-b_ic_i=1 \quad \mbox{ for } i=1,\dots,n ,
\end{equation}
\begin{equation}\label{Eqn:Relator}
r_j\left( \mat{a_1&b_1\\c_1&d_1}, \dots, \mat{a_n&b_n\\c_n&d_n} \right) = \mat{1&0\\0&1} \quad \mbox{ for }j=1,\dots,m.
\end{equation}
The equations in \eqref{Eqn:Determinant} come from the determinant condition, to ensure each matrix $\mat{a_i&b_i\\c_i&d_i}$ lies in $\operatorname{SL}(2,{\mathbb{C}})$.
The equations in \eqref{Eqn:Relator} come from the relators; we claim that each such equation gives four polynomial equations in the variables $\{a_i, b_i, c_i, d_i\}_{i=1}^n$. To see this, in the word on the left of \eqref{Eqn:Relator}, replace each instance of $\mat{a_i&b_i\\c_i&d_i}^{-1}$ with $\mat{d_i&-b_i\\-c_i&a_i}$. Then matrix multiplication, followed by setting each position in the matrix on the left of \eqref{Eqn:Relator} equal to the corresponding position in the identity matrix on the right of \eqref{Eqn:Relator} gives four polynomial equations for each relator.
\begin{proposition}\label{Prop:RepSpaceisAlgSet}
The space $R_{\operatorname{SL}}(G)$ consisting of representations of $G$ into $\operatorname{SL}(2,{\mathbb{C}})$ is an affine algebraic set.\index{affine algebraic set}
\end{proposition}
\begin{proof}
By the above discussion, a point in ${\mathbb{C}}^{4n}$ lies in $R_{\operatorname{SL}}(G)$ if and only if it satisfies the polynomial equations coming from \eqref{Eqn:Determinant}, and the polynomial equations coming from each entry of each matrix equation \eqref{Eqn:Relator}.
\end{proof}
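To make this concrete, here is a small SymPy computation that writes down the defining equations of $R_{\operatorname{SL}}(G)$ for the trefoil group with its standard presentation $\langle a, b \mid aba = bab\rangle$; that presentation is not derived in this text and is used here only as an illustration.
\begin{quote}
\begin{verbatim}
from sympy import symbols, Matrix, Eq

a1, b1, c1, d1, a2, b2, c2, d2 = symbols('a1 b1 c1 d1 a2 b2 c2 d2')
A = Matrix([[a1, b1], [c1, d1]])
B = Matrix([[a2, b2], [c2, d2]])

# Determinant conditions: both matrices lie in SL(2,C).
det_eqs = [Eq(A.det(), 1), Eq(B.det(), 1)]

# Relator aba = bab, written as the four entries of A*B*A - B*A*B
# (equivalently, the relator word equals the identity matrix).
R = A*B*A - B*A*B
relator_eqs = [Eq(R[i, j].expand(), 0) for i in range(2) for j in range(2)]

for eq in det_eqs + relator_eqs:
    print(eq)
\end{verbatim}
\end{quote}
The zero set of these six polynomial equations in ${\mathbb{C}}^8$ is the representation variety of the trefoil group.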
\begin{definition}\label{Def:RepresentationVariety}
For $G$ a finitely presented group, its \emph{representation variety}\index{representation variety} is the affine algebraic set $R_{\operatorname{SL}}(G)$.
\end{definition}
Notice that a different presentation of $G$ will have different relators, and hence different polynomial equations. However, we will see that the corresponding affine algebraic sets are not very different in this case, which we formalize with the following definition.
\begin{definition}\label{Def:Morphism}
A \emph{polynomial map}\index{polynomial map} is a map completely defined by polynomials. A \emph{morphism}\index{affine algebraic set!morphism}\index{morphism of affine algebraic sets} between affine algebraic sets is a polynomial map from one to the other. It is an isomorphism if it is bijective and its inverse function is also a morphism.
\end{definition}
\begin{proposition}\label{Prop:DifferentPresentation}
Suppose $G$ has presentations
\[ G = \langle g_1, \dots, g_n \mid r_1, \dots, r_m \rangle =
\langle h_1, \dots, h_k \mid s_1, \dots, s_\ell\rangle. \]
Then the affine algebraic sets $R_{\operatorname{SL}}^1(G)$ and $R_{\operatorname{SL}}^2(G)$ corresponding to the two presentations are isomorphic.
\end{proposition}
\begin{proof}
We may write each $h_i$ as a word in the generating set $g_1, \dots, g_n$; denote this by $h_i=w_i(g_1, \dots, g_n)$. Define a map $\phi\from R_{\operatorname{SL}}^1(G)\to R_{\operatorname{SL}}^2(G)$ by setting $\phi(\rho)$ to be the representation defined on the generators $h_1,\dots,h_k$ by $\phi(\rho)(h_i) = w_i(\rho(g_1), \dots, \rho(g_n))$. This map only involves multiplication and addition of matrix coordinates, hence it is completely defined by polynomials, and hence the map is a morphism\index{affine algebraic set!morphism} of affine algebraic sets. By symmetry, we also have a polynomial map $\psi\from R_{\operatorname{SL}}^2(G)\to R_{\operatorname{SL}}^1(G)$ defined similarly. Then $\phi$ and $\psi$ are inverses; these are isomorphisms of affine algebraic sets.
\end{proof}
\begin{example}\label{Example:Abelian}
The space $R_{\operatorname{SL}}(\pi_1(S^3-K))$ for any knot $K$ will always contain a simple irreducible affine algebraic set,\index{irreducible affine algebraic set}\index{affine algebraic set} which we now describe.
Recall that the abelianization of the fundamental group of any knot complement is ${\mathbb{Z}} = H_1(S^3-K)$, generated by the (homology class of the) meridian (\refcor{Homology}). Thus any representation of ${\mathbb{Z}}$ into $\operatorname{SL}(2,{\mathbb{C}})$ gives an induced representation of $\pi_1(S^3-K)$ into $\operatorname{SL}(2,{\mathbb{C}})$ via
\[ \pi_1(S^3-K) \to {\mathbb{Z}} \to \operatorname{SL}(2,{\mathbb{C}}).\]
Any such representation is called an \emph{abelian representation}\index{abelian representation}.
Note that the standard longitude $\lambda$ is trivial in $H_1(S^3-K)$. Thus for any abelian representation $\rho$, we will have $\rho(\lambda)={\mathrm{Id}}$. On the other hand, the meridian $\mu$ can be mapped to any element of $\operatorname{SL}(2,{\mathbb{C}})$.
We claim that abelian representations form an affine algebraic set as a subset of $R_{\operatorname{SL}}(\pi_1(S^3-K))$, and that this irreducible component\index{irreducible component} is isomorphic to $\operatorname{SL}(2,{\mathbb{C}})$. We leave this as an exercise (\refex{AbelianReps}).
\end{example}
We only wish to consider representations into $\operatorname{SL}(2,{\mathbb{C}})$ up to conjugation. Changing the basepoint of $G$ will change the fundamental group by conjugation, and hence will change any representation by conjugation.
Thus we really want to consider two representations to be the same if they are conjugate.
A first idea would be to declare representations
$\rho_1, \rho_2\from G\to\operatorname{SL}(2,{\mathbb{C}})$ to be equivalent if they are conjugate.
That is, $\operatorname{SL}(2,{\mathbb{C}})$ acts on $R_{\operatorname{SL}}(G)$ via conjugation as follows. If $\rho\in R_{\operatorname{SL}}(G)$ and $A\in\operatorname{SL}(2,{\mathbb{C}})$, then $A\cdot\rho = i_A\circ\rho$ where $i_A$ is defined by $i_A(X)=AXA^{-1}$. We could consider the orbits of this action, which is a polynomial map from $\operatorname{SL}(2,{\mathbb{C}})\times R_{\operatorname{SL}}(G)$ to $R_{\operatorname{SL}}(G)$, and take a quotient.
However, a point may be in the closure of several orbits, so this can lead to a non-Hausdorff space. The way to fix this is to take what is sometimes called the algebro-geometric quotient, or the algebraic quotient of invariant theory. We will not go into the details of the construction here. Instead, we give a definition of the algebro-geometric quotient that is known to be equivalent.
\begin{definition}\label{Def:Character}
The \emph{character}\index{character!representation} of a representation $\rho\from G\to\operatorname{SL}(2,{\mathbb{C}})$ is the function $\chi_\rho\from G\to{\mathbb{C}}$ given by $\chi_\rho(g) = \operatorname{tr}(\rho(g))$, where $\operatorname{tr}$ denotes the function that takes the trace of a matrix.
\end{definition}
Note the character is the same for two conjugate representations. It is not quite true that representations with the same character are necessarily conjugate.
\begin{definition}\label{Def:CharacterVariety}
The \emph{character variety}\index{character variety} $X_{\operatorname{SL}}(G)$ is the space of characters of all representations in $R_{\operatorname{SL}}(G)$.
\end{definition}
It is shown in \cite{CullerShalen:Varieties} that the character variety is an affine algebraic set; a more elementary proof of this fact is also given in \cite{GonzalezAcuna-Montesinos}. We refer you to those papers for the details. We note again that the standard terminology is a little misleading. In \refdef{ReducibleAlgebraicSet}, we noted that an affine variety\index{affine variety} is an irreducible affine algebraic set.\index{irreducible affine algebraic set} But the character variety\index{character variety} is typically reducible, hence not an affine variety at all. One true variety that is frequently studied is the irreducible component\index{irreducible component} of the character variety corresponding to a discrete faithful representation, i.e.\ the representation coming from a complete hyperbolic structure. This is called the \emph{canonical component}\index{canonical component}\index{character variety!canonical component}.
The canonical component of a knot complement contains a great deal of information about the manifold, including information on Dehn filling and surfaces embedded in the 3-manifold. However, it can be very difficult to compute the character variety.\index{character variety} At the time of writing this book, character varieties have been computed only for simple families of 3-manifolds and knot complements, including double twist knots and links, and some 2-bridge links \cite{MacasiebPetersenvanLuijk}, \cite{PetersenTran:TwistLinks}.
\subsection{The case of $\operatorname{PSL}$}
We spent the last subsection considering representations to $\operatorname{SL}(2,{\mathbb{C}})$. Much of that work can be extended to $\operatorname{PSL}(2,{\mathbb{C}})$ with a little extra effort. The extension to $\operatorname{PSL}(2,{\mathbb{C}})$ has been done carefully by \cite{BoyerZhang}. See also \cite{HeusenerPorti} for a nice exposition.
Let $G$ be a finitely presented group:
\[G = \langle g_1, \dots, g_n \mid r_1, \dots, r_m\rangle. \]
Let $R_{\operatorname{PSL}}(G)$ denote the space of representations from $G$ to $\operatorname{PSL}(2,{\mathbb{C}})$.
The first thing to check is that $R_{\operatorname{PSL}}$ forms an algebraic set. As before, relators give polynomial equations, but now these are only defined up to multiplication by $\pm 1$. An easy way to avoid sign issues is to use the fact that the group $\operatorname{PSL}(2,{\mathbb{C}})$ is isomorphic to $\operatorname{SO}(3,{\mathbb{C}})$; this can be shown using standard tools from Lie groups, and we leave this to the reader (or see \cite[Lemma~2.1]{HeusenerPorti}).
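For orientation, here is a sketch of one standard way to obtain this isomorphism (we will not use the details): $\operatorname{SL}(2,{\mathbb{C}})$ acts by conjugation, $A\cdot X=AXA^{-1}$, on the $3$-dimensional complex vector space of trace-zero $2\times2$ matrices, preserving the nondegenerate quadratic form $X\mapsto-\det X$. The kernel of this action is $\{\pm{\mathrm{Id}}\}$, and one checks that its image is exactly $\operatorname{SO}(3,{\mathbb{C}})$, so it induces an isomorphism $\operatorname{PSL}(2,{\mathbb{C}})\cong\operatorname{SO}(3,{\mathbb{C}})$.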
\begin{proposition}\label{Prop:PSLRepSpaceisAlg}
The space $R_{\operatorname{PSL}}$ consisting of representations of $G$ into $\operatorname{PSL}(2,{\mathbb{C}})$ is an affine algebraic set.\index{affine algebraic set}
\end{proposition}
\begin{proof}
Using the fact that $\operatorname{PSL}(2,{\mathbb{C}})\cong \operatorname{SO}(3,{\mathbb{C}})$, this follows as in the proof of \refprop{RepSpaceisAlgSet}.
\end{proof}
\begin{definition}\label{Def:PSLRepVariety}
Let $G$ be a finitely presented group. Define the \emph{$\operatorname{PSL}$-representation variety}\index{representation variety!$\operatorname{PSL}$}\index{$\operatorname{PSL}$-representation variety} of $G$ to be the algebraic set $R_{\operatorname{PSL}}(G)$.
\end{definition}
Again we only wish to consider representations into $\operatorname{PSL}(2,{\mathbb{C}})$ up to conjugation. In the $\operatorname{PSL}$ case, there is a natural geometric reason: conjugate holonomy\index{holonomy} representations give isometric hyperbolic structures on a 3-manifold. We want an algebraic theory that views these structures as the same.
As in the $\operatorname{SL}$ case, we may take the algebro-geometric quotient, or alternatively, take the set of $\operatorname{PSL}$-characters, defined below. A proof that the two are equivalent is given in \cite{HeusenerPorti}.
\begin{definition}\label{Def:PSLCharacter}
Let $\rho\from G\to\operatorname{PSL}(2,{\mathbb{C}})$ be a representation.
The \emph{$\operatorname{PSL}$-character}\index{character!$\operatorname{PSL}$} of $\rho$ is the function $\xi_\rho\from G\to {\mathbb{C}}$ given by $\xi_{\rho}(g) = \operatorname{tr}(\rho(g))^2$, where $\operatorname{tr}$ denotes the trace of the matrix. Note that the trace of an element of $\operatorname{PSL}(2,{\mathbb{C}})$ is only defined up to sign, but the square of the trace is well-defined.
\end{definition}
\begin{definition}\label{Def:PSLCharacterVariety}
The \emph{$\operatorname{PSL}$-character variety}\index{character variety!$\operatorname{PSL}$}\index{$\operatorname{PSL}$-character variety} $X_{\operatorname{PSL}}(G)$ is the space of $\operatorname{PSL}$-characters of all representations in $R_{\operatorname{PSL}}(G)$.
\end{definition}
\section{The $A$-polynomial}
The $A$-polynomial was first defined by Cooper, Culler, Gillet, Long, and Shalen in \cite{APoly}. The authors noticed that for manifolds with a single torus boundary component, the difficult problem of computing the $\operatorname{SL}(2,{\mathbb{C}})$ character variety could be replaced by a slightly simpler problem. Rather than considering the entire fundamental group of the knot complement, we focus on the subgroup coming from the boundary torus $\partial N(K)$ of a regular neighborhood of the knot. This reduces a complicated algebraic set to a single polynomial, called the $A$-polynomial, which still encodes a great deal of geometric information.
Our exposition in this section is based on that of Cooper and Long in \cite{CooperLong:Apoly}, with thanks to Mathews \cite{Mathews:Honours}.
\subsection{Classical definition}
Denote the meridian and longitude of a knot by $\mu$ and $\lambda$, respectively. We will be considering representations of the fundamental group $\pi_1(\partial N(K))$ into $\operatorname{SL}(2,{\mathbb{C}})$ up to conjugation.
Since $\mu$ and $\lambda$ commute, for any representation $\rho\from \pi_1(\partial N(K))\to\operatorname{SL}(2,{\mathbb{C}})$, the matrices $\rho(\mu)$ and $\rho(\lambda)$ must have the same fixed points on $\partial {\mathbb{H}}^3$ (\refex{PSL(2,C)Commute}). Hence they can be conjugated to fix either the point at infinity or the geodesic from $0$ to $\infty$ in $\partial{\mathbb{H}}^3$. In the former case, the two matrices have the form
$\pm\mat{1 & * \\ 0 & 1}$, and in the latter, the matrices are diagonal. Thus there will be no loss of generality in restricting to representations for which $\rho(\mu)$ and $\rho(\lambda)$ are upper triangular.
Define $R^U_{\operatorname{SL}}(\pi_1(S^3-K))\subset R_{\operatorname{SL}}(\pi_1(S^3-K))$ to be those representations $\rho$ for which $\rho(\mu)$ and $\rho(\lambda)$ are both upper triangular. When the context is clear, we abbreviate to $R^U$.
\begin{lemma}
The set $R^U_{\operatorname{SL}}(\pi_1(S^3-K))$ forms an affine algebraic set.\index{affine algebraic set}
\end{lemma}
\begin{proof}
We need to show that $R^U_{\operatorname{SL}}$, just like $R_{\operatorname{SL}}$, is defined as the set of zeros of a system of polynomial equations.
Recall that $\rho \in R_{\operatorname{SL}}(\pi_1(S^3-K))$ is determined by $\rho(g_1), \dots, \rho(g_n)$, where the $g_i$ are generators, and we view each $\rho(g_i)$ as a matrix with coordinates $a_i, b_i, c_i, d_i\in{\mathbb{C}}$ satisfying \refeqn{Determinant} and \refeqn{Relator}. Then $\rho(\mu)$ and $\rho(\lambda)$ will be matrices whose coefficients are polynomials in the $\{a_i, b_i, c_i, d_i\}$. In particular, the lower left entries of $\rho(\mu)$ and $\rho(\lambda)$ are polynomials $q_\mu$ and $q_\lambda$, respectively. Thus $R^U_{\operatorname{SL}}(\pi_1(S^3-K))$ is obtained by adjoining $q_\mu=0$ and $q_\lambda=0$ to the polynomials defining $R_{\operatorname{SL}}(\pi_1(S^3-K))$. Hence $R^U_{\operatorname{SL}}$ is an algebraic set.
\end{proof}
Next, because the determinants of $\rho(\mu)$ and $\rho(\lambda)$ are equal to $1$, it must be the case that for some $m,\ell\in{\mathbb{C}}^*$, the matrices $\rho(\mu)$ and $\rho(\lambda)$ have the form
\[ \rho(\mu) = \mat{m&* \\0&m^{-1}}, \quad \rho(\lambda) = \mat{\ell&*\\0&\ell^{-1}}.\]
Define functions $\xi_\mu, \xi_\lambda\from R^U_{\operatorname{SL}} \to {\mathbb{C}}$ by letting $\xi_\mu(\rho)$ (respectively $\xi_\lambda(\rho)$) be the upper left entry of the matrix $\rho(\mu)$ (respectively $\rho(\lambda)$). That is, $\xi_\mu(\rho)=m$ and $\xi_\lambda(\rho)=\ell$. Each of these entries can be written as a polynomial in the $a_i, b_i, c_i, d_i$, so each map $\xi_\mu$ and $\xi_\lambda$ is a polynomial map, and thus the map $\xi\from R^U_{\operatorname{SL}}\to {\mathbb{C}}^2$ given by $\xi(\rho)= (\xi_\lambda(\rho), \xi_\mu(\rho)) = (\ell, m)$ is a morphism. \index{affine algebraic set!morphism}
Consider the image $\xi(R^U_{\operatorname{SL}})\subset {\mathbb{C}}^2$. The image of an algebraic set under a morphism\index{affine algebraic set!morphism} need not itself be an algebraic set, but its Zariski closure is, and that closure may have several irreducible components.\index{irreducible component} Let $C$ be an irreducible component of $R^U_{\operatorname{SL}}$ and let $\overline{\xi(C)}$ denote the closure of its image $\xi(C)$ in the Zariski topology.
The closure $\overline{\xi(C)}$ is the zero set of a family of polynomials. If it is the zero set of a single polynomial, denote that polynomial by $F_C$.
\begin{example}\label{Example:AbelianApoly}
Let $C \subset R^U_{\operatorname{SL}}$ be the component of abelian representations,\index{abelian representation} as in \refexamp{Abelian}. Then $\rho(\mu)$ is an upper triangular matrix in $\operatorname{SL}(2,{\mathbb{C}})$, but $\rho(\lambda)={\mathrm{Id}}$ always. Thus $\xi_\mu(\rho)$ can be arbitrary, but $\xi_\lambda(\rho)=1$. Thus the closure of $\xi(C)$ for $C$ the component of abelian representations gives the polynomial $F_C = \ell-1$.
\end{example}
Now consider all polynomials $F_C$ arising from irreducible components\index{irreducible component} of $R^U_{\operatorname{SL}}$, aside from the component of abelian representations.
Our preliminary definition of the $A$-polynomial is the polynomial obtained by multiplying all of these polynomials, or $1$ if there are no such polynomials. Observe that since $F_C$ is determined only by its zero set, it is defined only up to multiplication by a nonzero scalar and by powers of $m$ and $\ell$. It was shown in \cite{APoly} that a scalar multiple can be chosen so that all coefficients are integers. We require integer coefficients with no common factors, and multiply by $\ell^am^b$ for $a,b$ integers such that the total degree is minimal. This defines the $A$-polynomial up to sign. We summarize:
\begin{definition}\label{Def:Apoly}
Let $K$ be a knot. For every $C$ an irreducible component\index{irreducible component} of $R^U_{\operatorname{SL}}(\pi_1(S^3-K))$ that is not abelian, and such that $\overline{\xi(C)}$ is the set of zeros of a single polynomial, let $F_C$ denote this polynomial.
The \emph{$A$-polynomial}\index{$A$-polynomial} (or $\operatorname{SL}(2,{\mathbb{C}})$ $A$-polynomial)\index{$\operatorname{SL}(2,{\mathbb{C}})$ $A$-polynomial}\index{$A$-polynomial!$\operatorname{SL}(2,{\mathbb{C}})$} of $K$ is defined to be the product of all the polynomials $F_C$ as above (or $1$ if there are no such polynomials), rescaled so that all coefficients are integers with no common factors. We normalize by multiplying by $\ell^am^b$ so that the total degree is minimized, and denote the result by $A_{\operatorname{SL}}(\ell,m)$.
\end{definition}
It is easiest to make sense of the definitions if we work with examples. The simplest example is given by the following.
\begin{proposition}\label{Prop:ApolyTrivial}
The $A$-polynomial of the unknot is $1$.\index{$A$-polynomial}
\end{proposition}
\begin{proof}
The fundamental group $G$ of the unknot complement is ${\mathbb{Z}}$, hence any representation of $G$ into $\operatorname{SL}(2,{\mathbb{C}})$ is an abelian representation.\index{abelian representation} Thus in \refdef{Apoly}, there are no polynomials $F_C$ and the $A$-polynomial is $1$.
\end{proof}
\begin{example}[Figure-8 knot]\label{Example:Fig8_APoly}
We use the Wirtinger presentation of \refexamp{Fig8Wirtinger} to compute the polynomial $A_{\operatorname{SL}}$ for the figure-8 knot.\index{$A$-polynomial}
There are two generators, $g_1$ and $g_2$; write
\[ \rho(g_1) = \mat{a_1 & b_1 \\ c_1 & d_1}, \quad \rho(g_2) = \mat{a_2 & b_2 \\ c_2 & d_2}. \]
This initially gives the eight unknowns $a_i, b_i, c_i, d_i$ for $i=1, 2$. These must satisfy the following equations:
\begin{enumerate}
\item Two determinant equations $a_id_i-b_ic_i=1$.
\item Four equations coming from the four matrix entries of the relation
\[ \rho(g_1g_2^{-1}g_1g_2g_1^{-1}g_2) = \rho(g_2^{-1}g_1g_2g_1^{-1}g_2g_2). \]
\item Equations coming from the matrix entries of $\rho(M)$ and $\rho(L)$:
\[ \rho(L)= \rho(g_1^{-1}g_2 g_1g_2^{-1}g_1^{-1} g_2^{-1}g_1g_2g_1^{-1}g_2) \quad \mbox{ and } \quad \rho(M)=\rho(g_1). \]
There are two equations to ensure $\rho(M)$ and $\rho(L)$ are upper triangular, by setting their bottom left entry to be $0$. In particular, this requires the simple equation $c_1=0$, coming from $\rho(M)$. A more complicated equation will come from the bottom left entry of $\rho(L)$, since $L$ is a more complicated product of $g_1^{\pm 1}, g_2^{\pm 1}$.
There is one equation ensuring that the top left entry of $\rho(L)$ equals $\ell$, and one equation ensuring that the top left entry of $\rho(M)$ equals $m$. In particular, again in the simpler $\rho(M)$ case this gives $a_1=m$. Using one of the determinant equations, we also conclude $d_1=m^{-1}$.
\end{enumerate}
In all, after substituting $c_1=0$, $a_1=m$, and $d_1=m^{-1}$, this gives seven equations in the unknowns $\ell$, $m$, $b_1$, $a_2$, $b_2$, $c_2$, $d_2$.
Using computer software such as Mathematica to eliminate the variables other than $\ell$ and $m$, we find a polynomial describing the resulting curve in the $(\ell,m)$-plane:
\[ -\ell + \ell^2 +\ell m^2 - \ell^2m^2 + m^4 + \ell m^4 - \ell^2 m^4 - \ell^3m^4 + \ell m^6 - \ell^2m^6 - \ell m^8 + \ell^2m^8. \]
Note that $(\ell-1)$ divides this polynomial, corresponding to the abelian representation.\index{abelian representation} We divide out by $(\ell-1)$, to obtain\index{$A$-polynomial}\index{$\operatorname{SL}(2,{\mathbb{C}})$ $A$-polynomial}\index{$A$-polynomial!$\operatorname{SL}(2,{\mathbb{C}})$}
\[ A_{\operatorname{SL}}(\ell, m) = \ell - \ell m^2 - m^4 - 2\ell m^4 - \ell^2 m^4 - \ell m^6 + \ell m^8. \]
\end{example}
\subsection{The ${A_{\rm PSL}}$-polynomial}
The work in the previous subsection can also be applied to representations of $\pi_1(S^3-K)$ into $\operatorname{PSL}(2,{\mathbb{C}})$ rather than $\operatorname{SL}(2,{\mathbb{C}})$. That is, for fixed generators $\mu$ and $\lambda$ of $\pi_1(\partial N(K))$, we restrict to $R^U_{\operatorname{PSL}} \subset R_{\operatorname{PSL}}$, the subset of representations $\rho$ for which $\rho(\mu)$ and $\rho(\lambda)$ are upper triangular in $\operatorname{PSL}(2,{\mathbb{C}})$. Define $\xi_{\operatorname{PSL}}\from R^U_{\operatorname{PSL}}(\pi_1(S^3-K))\to{\mathbb{C}}^2$ by
\[ \xi_{\operatorname{PSL}}(\rho) = (\xi_\lambda(\rho), \xi_\mu(\rho)) = (\ell^2, m^2), \]
where $\xi_\lambda$ gives the square of the top left entry of $\rho(\lambda)$, and similarly for $\xi_\mu$. Consider each irreducible component\index{irreducible component} $C$ of $R^U_{\operatorname{PSL}}$ such that the closure $\overline{\xi_{\operatorname{PSL}}(C)}$ is defined by a single polynomial $F_{\operatorname{PSL}}(C)$, and $C$ is not abelian.
\begin{definition}
The \emph{${A_{\rm PSL}}$-polynomial}\index{${A_{\rm PSL}}$-polynomial}\index{$A$-polynomial!${A_{\rm PSL}}$} is defined to be the product of the polynomials $F_{\operatorname{PSL}}(C)$ as above, or $1$ if there are no such polynomials.
\end{definition}
The ${A_{\rm PSL}}$-polynomial and the $A$-polynomial are related, and we describe the relationship in the hyperbolic case.
When $K$ is hyperbolic, with $G=\pi_1(S^3-K)$, let $X_{\operatorname{PSL}}^0(G)$ be the irreducible component\index{irreducible component} of $X_{\operatorname{PSL}}(G)$ containing the complete hyperbolic structure. Similarly, let $X_{\operatorname{SL}}^0(G)$ be the irreducible component of $X_{\operatorname{SL}}(G)$ containing the lift of the complete hyperbolic structure. Then we may define polynomials $A^0_{\operatorname{PSL}}$ and $A^0_{\operatorname{SL}}$ by only considering $C$ coming from these irreducible components. Note $A^0_{\operatorname{PSL}}$ is a factor of the ${A_{\rm PSL}}$-polynomial, and $A^0_{\operatorname{SL}}$ is a factor of the $A$-polynomial.
\begin{proposition}
The polynomial $A_{\operatorname{PSL}}^0(\ell^2, m^2)$ divides the polynomial\index{$A$-polynomial}
\[ A^0_{\operatorname{SL}}(\ell, m)A^0_{\operatorname{SL}}(\ell, -m)A^0_{\operatorname{SL}}(-\ell, m)A^0_{\operatorname{SL}}(-\ell, -m). \]
\end{proposition}
\begin{proof}
The projection $\pi\from \operatorname{SL}(2,{\mathbb{C}}) \to \operatorname{PSL}(2,{\mathbb{C}})$ induces a map on character varieties\index{character variety}
$\pi\from X_{\operatorname{SL}}^0(G) \to X_{\operatorname{PSL}}^0(G)$, which is surjective by \cite{Culler:Lifting}. Define $h\from {\mathbb{C}}^2 \to {\mathbb{C}}^2$ by $h(x,y) = (x^2, y^2)$. Then the following diagram commutes.
\[
\begin{tikzcd}[column sep=50pt]
X_{\operatorname{SL}}^0(\pi_1(S^3-N(K)))
\arrow[r, "r"]
\arrow[d, "\pi"]
&
X_{\operatorname{SL}}^0(\pi_1(\partial N(K)))
\arrow[d, "\pi"]
\arrow[r, "\xi^{-1} \, \circ \, \operatorname{tr}", leftarrow]
& ({\mathbb{C}}^*)^2
\arrow[d, "h"] \\
X_{\operatorname{PSL}}^0(\pi_1(S^3-N(K)))
\arrow[r, "r"]
&
X^0_{\operatorname{PSL}}(\pi_1(\partial N(K)))
\arrow[r, "\xi_{\operatorname{PSL}}^{-1} \circ \operatorname{tr}", leftarrow]
& ({\mathbb{C}}^*)^2
\end{tikzcd}
\]
Here $r$ is a map induced by the inclusion of $\pi_1(\partial N(K))$ into $\pi_1(S^3-N(K))$.
If $D^0_{\operatorname{SL}}$ and $D^0_{\operatorname{PSL}}$ are curves defined by $A^0_{\operatorname{SL}}(\ell, m)$ and $A^0_{\operatorname{PSL}}(\ell, m)$, respectively, then $h(D^0_{\operatorname{SL}}) = D^0_{\operatorname{PSL}}$. So $h^{-1}(D^0_{\operatorname{PSL}})$ is the union of the curves given by $A^0_{\operatorname{SL}}(\pm\ell, \pm m)$. Thus $A^0_{\operatorname{PSL}}(\ell^2, m^2)$ divides
\[ A^0_{\operatorname{SL}}(\ell, m)A^0_{\operatorname{SL}}(\ell, -m)A^0_{\operatorname{SL}}(-\ell, m)A^0_{\operatorname{SL}}(-\ell, -m).\qedhere
\]
\end{proof}
\subsection{Relation to the ${A_{\rm Hyp}}$-polynomial}
Let $\{z_1, \dots, z_n\}$ be parameters for a triangulation coming from a point in the gluing variety. Then we obtain a well-defined developing map by attaching tetrahedra in ${\mathbb{H}}^3$ via a face-pairing isometry. This determines a holonomy\index{holonomy} representation, hence we obtain a map $D\from \mathcal{D}{\mathcal{T}} \to X_{\operatorname{PSL}}(G)$. Champanerkar showed that this map is algebraic, i.e.\ a morphism,\index{affine algebraic set!morphism} and thus ${A_{\rm Hyp}}(\ell, m)$ divides ${A_{\rm PSL}}(\ell,m)$ \cite{Champanerkar:Thesis}.
It follows that $A^0_{\mathrm{Hyp}}(\ell,m) = A_{\operatorname{PSL}}^0(\ell,m)$.
\begin{example}
We computed above in \refexamp{Fig8_AHypPoly} that for the figure-8 knot complement,
\begin{multline*}
{A_{\rm Hyp}}(\ell,m) = \ell - 2 \ell m - 3 \ell m^2 + 2 \ell m^3 - m^4 + 6 \ell m^4 \\ - \ell^2 m^4 + 2 \ell m^5 - 3 \ell m^6 - 2 \ell m^7 + \ell m^8.
\end{multline*}
And in \refexamp{Fig8_APoly}, its $A$-polynomial is\index{$A$-polynomial}
\[ A_{\operatorname{SL}}(\ell, m) = \ell - \ell m^2 - m^4 - 2\ell m^4 - \ell^2 m^4 - \ell m^6 + \ell m^8. \]
In this case, ${A_{\rm Hyp}}(\ell^2,m^2) = -A_{\operatorname{SL}}(\ell,m)A_{\operatorname{SL}}(-\ell,m)$.\index{${A_{\rm Hyp}}$-polynomial}\index{$\operatorname{SL}(2,{\mathbb{C}})$ $A$-polynomial}\index{$A$-polynomial!hyperbolic, ${A_{\rm Hyp}}$}\index{$A$-polynomial!$\operatorname{SL}(2,{\mathbb{C}})$}
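As a quick sanity check, this last relation can be verified symbolically from the two displayed polynomials; the following is a minimal sketch using the Python library sympy (the variable names are our own, and this is not how the polynomials were originally computed).
\begin{verbatim}
# Check A_Hyp(l^2, m^2) == -A_SL(l, m) * A_SL(-l, m) for the figure-8 knot,
# using the two polynomials displayed above.
import sympy as sp

l, m = sp.symbols('l m')

A_hyp = (l - 2*l*m - 3*l*m**2 + 2*l*m**3 - m**4 + 6*l*m**4
         - l**2*m**4 + 2*l*m**5 - 3*l*m**6 - 2*l*m**7 + l*m**8)
A_sl = l - l*m**2 - m**4 - 2*l*m**4 - l**2*m**4 - l*m**6 + l*m**8

lhs = A_hyp.subs({l: l**2, m: m**2})   # A_Hyp(l^2, m^2)
rhs = -A_sl * A_sl.subs(l, -l)         # -A_SL(l, m) * A_SL(-l, m)
print(sp.expand(lhs - rhs) == 0)       # prints True
\end{verbatim}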
\end{example}
\section{Exercises}
\begin{exercise}[Zariski topology]\label{Ex:Zariski}
\begin{enumerate}
\item Prove that the union of two affine algebraic sets\index{affine algebraic set} and the intersection of arbitrarily many affine algebraic sets are both affine algebraic sets.
\item Describe the following sets as affine algebraic sets: ${\mathbb{C}}^N$, $\emptyset$, a single point $(a_1, \dots, a_N)$.
\item
The \emph{Zariski topology}\index{Zariski topology} is the topology on ${\mathbb{C}}^N$ formed by taking affine algebraic sets to be closed sets.
Prove that any Zariski-closed set is closed in the standard Euclidean topology. Find an example of a set that is Euclidean-closed but not Zariski-closed.
\end{enumerate}
\end{exercise}
\begin{exercise}
SnapPy finds triangulations of knot complements and allows you to print out gluing and completeness equations.
\begin{enumerate}
\item Using SnapPy, find a set of polynomial equations that leads to the ${A_{\rm Hyp}}$ polynomials of the knots $5_2$, $6_1$, and $6_2$.
\item Using Mathematica (or other software), compute the ${A_{\rm Hyp}}$ polynomials of these knots.
\item Use SnapPy to randomize the triangulations of the knots. What happens to the ${A_{\rm Hyp}}$ polynomial?
\item Use SnapPy to change the generators of $\pi_1(\partial N(K))$, i.e.\ the curves giving the completeness equations. What happens to the ${A_{\rm Hyp}}$ polynomial?
\end{enumerate}
\end{exercise}
\begin{exercise}\label{Ex:SVK}
Using the Seifert--Van Kampen theorem, or otherwise, prove that the group given by the Wirtinger presentation\index{Wirtinger presentation} is isomorphic to the fundamental group of the knot complement.
\end{exercise}
\begin{exercise}
Find the Wirtinger presentation\index{Wirtinger presentation} of the fundamental group of the $5_2$, $6_1$, and $6_2$ knots. Also find expressions for their longitudes in terms of the Wirtinger generators.
\end{exercise}
\begin{exercise}
Let $r_1, r_2, \dots, r_n$ denote the relators of the Wirtinger presentation\index{Wirtinger presentation} coming from the $n$ distinct crossings of a knot. Show that $r_n$ is always redundant.
\end{exercise}
\begin{exercise}[Abelian representations]\label{Ex:AbelianReps}
Show that abelian representations\index{abelian representation} form an affine algebraic set\index{affine algebraic set} isomorphic to $\operatorname{SL}(2,{\mathbb{C}})$. (Ensure your isomorphism is an isomorphism of affine algebraic sets,\index{affine algebraic set!morphism} i.e.\ defined by polynomial maps.)
\end{exercise}
\begin{exercise}
Compute $A_{\operatorname{SL}}(\ell, m)$ for the $5_2$, $6_1$, or $6_2$ knot.
\end{exercise}
\begin{exercise}
The \emph{support} of a polynomial $F(x,y)$ is the set of pairs $(a,b)\in {\mathbb{Z}}^2$ such that the coefficient of the term $x^ay^b$ in $F(x,y)$ is nonzero. The convex hull of the support is the \emph{Newton polygon}\index{Newton polygon} of $F$. The Newton polygon of the $A$-polynomial has a remarkable relationship with essential surfaces embedded in the knot complement:
\begin{theorem}[\cite{APoly}]
Let $K$ be a knot in $S^3$ with $A$-polynomial $A_{\operatorname{SL}}(\ell,m)$.
Suppose the Newton polygon of $A_{\operatorname{SL}}(\ell,m)$ has a side of slope $p/q$. Then there is an essential surface\index{essential} $S$ in $S^3-N(K)$ with boundary that is a curve on the torus $\partial N(K)$ with slope $p/q \in H_1(\partial N(K))$.
\end{theorem}
Champanerkar proved that the corresponding result holds for ${A_{\rm Hyp}}$.
\begin{enumerate}
\item Compute the Newton polygon for $A_{\operatorname{SL}}(\ell,m)$ for the figure-8 knot.
\item Compute the Newton polygon for ${A_{\rm Hyp}}(\ell,m)$ for the figure-8 knot.
\end{enumerate}
\end{exercise}
\begin{exercise}
Repeat the previous exercise for the $5_2$, $6_1$, or $6_2$ knot.
\end{exercise}
\chapter*{Introduction}
Knots appear in scientific literature as early as 1771, in work of Vandermonde. In approximately 1833, Gauss developed the linking number of two knots, and his student Listing published work on alternating knots in 1847. Tait was one of the first to try to classify knots up to equivalence, creating the first knot tables in the 1870s and 1880s.
For more on the history of knots, see for example the detailed article by Epple~\cite{Epple}, or the survey articles by Przytycki~\cite{przytycki:knots} and Silver~\cite{silver:knothistory}.
Since the early work of Tait, knot theory has been influenced by and influential in the mathematical fields of topology, algebra, quantum field theory, and in geometry. There are several books that investigate knots from topological, algebraic, and quantum perspectives; some of my favorites are those of Rolfsen~\cite{rolfsen}, Burde and Zieschang~\cite{BurdeZieschang}, Murasugi~\cite{murasugi}, and Lickorish~\cite{lickorish:KnotTheory}. This book focuses on knots from a geometric perspective, particularly hyperbolic geometry, and overlaps more with books on hyperbolic geometry than knot theory, particularly in the early chapters that develop prerequisites in hyperbolic geometry. See, for example, \cite{benedetti-petronio, ratcliffe, thurston:book}.
The study of the geometry of knots, particularly hyperbolic geometry, began with work of Robert Riley in the 1970s~\cite{Riley:Fig8},
and developed further in the late 1970s and early 1980s, with work of William Thurston~\cite{thurston}.
By Thurston's work, every knot is of one of three types: either it is a \emph{torus knot}, which can be drawn as an embedded curve on a Heegaard torus in the 3-sphere, or a \emph{satellite knot}, which lies in a tubular neighborhood of another knot, or it is \emph{hyperbolic}, meaning its complement admits a complete hyperbolic structure~\cite{thurston:bulletin}. Torus knots are relatively well-understood, and satellite knots are often studied by considering other knots.
Hyperbolic knots, however, are not well-understood in general, and yet they are extremely common.
For example, of all prime knots up to 16 crossings,
classified by Hoste, Thistlethwaite, and Weeks,
13 are torus knots, 20 are satellite knots, and the remaining 1,701,903 are hyperbolic~\cite{htw}. Of all prime knots up to 19 crossings, 15 are torus knots, 380 are satellite knots, and the remaining 352,151,858 are hyperbolic~\cite{Burton:KnotEnum}.
Moreover,
if a knot complement admits a hyperbolic structure, then that structure is unique, by work of Mostow and Prasad in the 1970s~\cite{mostow, prasad}.
More carefully, Mostow showed that if there is an isomorphism between the fundamental groups of two closed hyperbolic 3-manifolds, then there is an isometry taking one to the other. Prasad extended this work to 3-manifolds with torus boundary, including knot complements. Thus if two hyperbolic knot complements have isomorphic fundamental groups, then they have exactly the same
hyperbolic structure. Finally, Gordon and Luecke showed that two knot complements with the same fundamental group are equivalent~\cite{gordon-luecke}
(up to mirror reflection).
Thus a hyperbolic structure on a knot complement is a complete invariant of the knot. If we could completely understand hyperbolic structures on knot complements, we could completely classify hyperbolic knots. This book is an introduction to the mathematics involved.
\chapter*{Preface}
\section*{Why I wrote this book}
This book is an introduction to hyperbolic geometry in three dimensions, with motivations and examples coming from the field of knots. It is also an introduction to knot theory, with tools, techniques, and topics coming from geometry. As I write, I believe it is the only book that attempts to be both.
To be clear, there are dozens of excellent books on knot theory, available from undergraduate to graduate levels, many of them classics that I learned from and continue to learn from. There are also several excellent books on hyperbolic geometry, particularly from the three-dimensional viewpoint. The aim of this book is to fill in a gap between them: to feature the contributions of hyperbolic geometry to knot theory, and the contributions of knot theory to hyperbolic geometry. It also aims to put techniques and tools from both fields into one place.
In recent years, the field of hyperbolic 3-manifolds has matured, with many open conjectures resolved in the early 2000s. The result is that we now have better insight than ever into the structure of hyperbolic manifolds. This insight can be applied to broad classes of 3-manifolds, including many knot and link complements. On the other hand, the area of knot theory has also ballooned in recent years, with new tools arising from algebra, homology theory, quantum topology, representation theory, as well as geometry. As new knot and link invariants arise, and new applications of knot theory to other fields develop, it is natural to ask how such invariants interact. In particular, how do these invariants interact with hyperbolic geometry, which contains some of the strongest information on knots and links? There are many open questions and conjectures about the interaction of hyperbolic geometry with other knot invariants, and many mathematicians are interested in learning hyperbolic geometry specifically as it applies to knot theory. This book is a more direct introduction to the hyperbolic geometry of knots.
Hyperbolic geometry was first applied to the study of knots and their complements in the 1970s.
Since then, hyperbolic geometry has played an important role in the classification of knots, with invariants such as volume and canonical decomposition developing directly from geometry.
However, the contribution of knot theory to hyperbolic geometry should not be understated. Complements of knots and links have been the playground of the 3-dimensional hyperbolic geometer for decades, aided by diagrams and topology, and by computational software such as SnapPea by Weeks, to find hyperbolic structures on knots. Many conjectures in hyperbolic geometry are based upon geometric properties that were first observed in knots. Many results in hyperbolic geometry have been proved first by restricting to families of knots, especially twist knots, two-bridge knots, and alternating knots, all of which feature prominently in this text.
This book is a hands-on introduction to this mixing of fields, geometry and knots.
\section*{How I structured the book}
The book starts with an introductory chapter giving basic definitions required from knot theory, and motivating some of the problems discussed in this book.
The first example of a hyperbolic knot, identified by Riley, is the unique prime knot with crossing number four, known as the figure-8 knot. In \refchap{Fig8Decomp}, we give an introduction to the complement
of the figure-8 knot, and describe how to decompose it into two polyhedra. The exercises outline a generalization of this decomposition to all knots, and lead the reader through complications that arise when generalizing. This decomposition, particularly for the figure-8 knot, will then serve as a running example for later chapters.
In chapters two through six, we develop the basics of geometric structures on manifolds, particularly in dimensions two and three. Much of this material overlaps with other texts on hyperbolic geometry. Here, we try to keep our presentation heavily illustrated by examples, especially examples from knot theory. More specifically, \refchap{IntroHyp} gives an introduction to the hyperbolic plane and hyperbolic 3-space, and gives properties and examples of calculations that we will need in the text. It is purposely brief, as it is not meant to be a comprehensive introduction to these spaces, but only to equip the reader with the tools required to calculate and compute in hyperbolic geometry.
\Refchap{Geometric} introduces geometric structures on manifolds, and works through examples in two dimensions, including careful examples of the torus and the 3-punctured sphere. \Refchap{GluingCompleteness} returns to 3-manifolds and knots, building the first examples of hyperbolic structures on knot complements by way of triangulations. The chapter covers Thurston's gluing and completeness equations, again using the figure-8 knot as a running example. \Refchap{Margulis} delves a little more deeply into properties of hyperbolic isometries, with a main goal of proving the thick-thin decomposition of hyperbolic 3-manifolds. This decomposition implies that thin parts of hyperbolic 3-manifolds can always be identified with knots or links in some 3-dimensional space. Finally, in \refchap{CompletionDehnFilling}, incomplete structures on hyperbolic 3-manifolds are investigated carefully. The main result is that such structures can often be viewed as Dehn fillings of hyperbolic manifolds.
Chapters seven through twelve focus on families of knots and links that have been particularly amenable to study through hyperbolic geometry, and to tools used to study these knots and links, including tools coming from more general 3-manifold topology. \Refchap{TwistKnots}, just after the chapter on hyperbolic Dehn filling, discusses knots described by Dehn filling links in the 3-sphere; many of these links have very explicit hyperbolic geometry. This chapter explores consequences of Dehn fillings for these families. \Refchap{Essential} then provides an interlude, with results from 3-manifold topology, defining essential surfaces, normal surfaces, and returning to hyperbolic geometry via angle structures. \Refchap{AngleStruct} develops the powerful tool of angle structures and volumes of 3-manifolds. The main result in the chapter is a proof of the theorem of Casson and Rivin relating volumes of angle structures to hyperbolic geometry.
Angle structures have had great success as applied to the family of two-bridge knots, and this is the subject of \refchap{TwoBridge}. The chapter develops topological descriptions of the knots as gluings of tetrahedra, and works through a proof that these tetrahedra are geometric using the theorems of \refchap{AngleStruct}.
In \refchap{Alternating}, we study alternating links. This chapter gives a proof, using properties of these knots, of the theorem of Menasco that a prime alternating knot with more than one twist region is hyperbolic. \Refchap{Quasifuchsian} discusses the geometry of surfaces embedded in knot and link complements, including three and four punctured spheres, and checkerboard surfaces.
The final chapters, chapters thirteen through fifteen, explore some of the more important knot and link invariants arising from hyperbolic geometry. One of the most important geometric invariants of a hyperbolic knot is its volume, and \refchap{Volume} is devoted to volumes of knots and links. It contains methods to bound the volume of a knot.
\Refchap{Canonical} discusses the Ford domain and canonical polyhedral decomposition, also called the Epstein--Penner decomposition of a manifold. This decomposition provides a tool that can be used to identify when two 3-manifolds are isometric; for example it is used by the software SnapPea (and SnapPy). \Refchap{Character} gives a brief introduction to the overlap of hyperbolic geometry and algebraic geometry, introducing gluing and character varieties of knots, and the $A$-polynomial, which is a polynomial invariant directly related to the hyperbolic geometry of a knot.
\section*{Prerequisites and notes to students}
I have tried to keep prerequisites to a minimum. A basic course in topology is required, as well as some knowledge of basic algebraic topology, particularly the fundamental group and covering spaces. Occasionally, experience with Riemannian geometry will be helpful, but it is not required, with one exception: we assume standard results from a first course in Riemannian geometry in parts of \refchap{Volume}. We also occasionally assume basic results in differential topology, such as the fact that smooth manifolds admit tubular neighborhoods, and that submanifolds can be isotoped to meet transversely.
Also, this book is written to be interactive, with examples and exercises. I hope you work through the examples as they are presented, and generalize them in exercises. Many important results are saved for exercises.
\section*{Acknowledgments}
The first form of this book appeared as lecture notes for a unit at Brigham Young University (BYU). The subject was inspired by my participation in a workshop on interactions between hyperbolic geometry, quantum topology, and number theory held at Columbia University in 2009. I have also given related graduate student workshops at Iowa in 2014, at Melbourne in 2016, and at Luminy in 2018. I thank the organizers of these workshops for inviting me, and various agencies for supporting the workshops, and for supporting fundamental research in mathematics.
I learned much of the material in the first part of this book as a graduate student under the direction of Steve Kerckhoff, reading notes of William Thurston from the 1970s that were ghost-written by Kerckhoff and Bill Floyd~\cite{thurston}. Learning along with me were fellow graduate students David Futer and Henry Segerman. Many of their insights and elucidations helped me develop my own understanding; those insights are contained in this book, and I thank Steve, David, and Henry for them. I also owe thanks to Henry Segerman and Saul Schleimer for figures, particularly figures~\ref{Fig:SchleimerSegerman}, \ref{Fig:52SchleimerSegerman}, \ref{Fig:63SchleimerSegerman}, and~\ref{Fig:Fig8Fiber3D}.
Thanks to Saul Schleimer for figures~\ref{Fig:Fig8Fiber2D}, and to David Bachman, Saul Schleimer, and Henry Segerman for figure~\ref{Fig:Fig8Fiber3D2}.
Discussions with David Futer about various parts of this book, especially two-bridge knots, have been invaluable. I also thank Jim Cannon, who attended my course on this material, and provided ideas for helping students get involved with exercises, and I thank Kenneth Perko, Abhijit Champanerkar, Ilya Kofman, Yi Wang, and Yair Minsky for feedback on drafts of the book.
I owe the most thanks to the many people who have worked through various drafts and incarnations of this book, especially Emma Turner and Mark Meilstrup, who gave great feedback during the original 2010 BYU course, and Sophie Ham, Max Jolley, Josh Howie, Emily Thompson, John Stewart, and Ensil Kang, who read through many chapters carefully and helped me fix exposition and errors.
I also thank students who have worked through drafts of this book with others, including students at the University of Warwick, at Michigan State University, Temple University, and at Oklahoma State University.
Remaining errors are, of course, my own. Please tell me about them.
\aufm{Jessica S. Purcell}
| {
"timestamp": "2020-03-02T02:09:58",
"yymm": "2002",
"arxiv_id": "2002.12652",
"language": "en",
"url": "https://arxiv.org/abs/2002.12652",
"abstract": "This book is an introduction to hyperbolic geometry in dimension three, and its applications to knot theory and to geometric problems arising in knot theory. It has three parts. The first part covers basic tools in hyperbolic geometry and geometric structures on 3-manifolds. The second part focuses on families of knots and links that have been amenable to study via hyperbolic geometry, particularly twist knots, 2-bridge knots, and alternating knots. It also develops geometric techniques used to study these families, such as angle structures and normal surfaces. The third part gives more detail on three important knot invariants that come directly from hyperbolic geometry, namely volume, canonical polyhedra, and the A-polynomial.",
"subjects": "Geometric Topology (math.GT)",
"title": "Hyperbolic Knot Theory",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9719924785827002,
"lm_q2_score": 0.7279754607093178,
"lm_q1q2_score": 0.7075866724022329
} |
https://arxiv.org/abs/0706.0624 | On compositions of d.c. functions and mappings | A d.c. (delta-convex) function on a normed linear space is a function representable as a difference of two continuous convex functions. We show that an infinite dimensional analogue of Hartman's theorem on stability of d.c. functions under compositions does not hold in general. However, we prove that it holds in some interesting particular cases. Our main results about compositions are proved in the more general context of d.c. mappings between normed linear spaces. | \section*{Introduction}
Let $C$ be a convex set in a (real) normed linear space $X$. A
function $f\colon C\to\mathbb{R}$ is called {\em d.c.}\ or {\em delta-convex}
if it can be represented as a difference of two continuous convex
functions on $C$. We say that $f$ is locally d.c.\ on $C$, if each $c\in C$ has a convex neighbourhood $U$ such that
$f$ is d.c.\ on $U \cap C$. A mapping $F\colon C\to\mathbb{R}^n$ is a {\em d.c.\ mapping}
if each of its $n$ components is a d.c.\ function. There exist many articles
which work with d.c.\ functions (see, e.g., the references in \cite{Hi} and \cite{DuVeZa}).
In 1959, P.~Hartman \cite{H} proved the following interesting and well-known results.
\medskip
\noindent
(I)\ \ {\em Let $A \subset \mathbb{R}^m$ be a convex set which is either open or closed.
Let $f\colon A \to \mathbb{R}$ be locally d.c.\ on $A$. Then $f$ is d.c.\ on $A$.}
\medskip
\noindent
(II)\ \ {\em Let $X$ be a normed linear space, $A \subset X$ a convex set which is either open or closed,
and $B \subset \mathbb{R}^n$ an open convex set. If $F\colon A \to B$ is a d.c.\ mapping and $g\colon B\to\mathbb{R}$
is a d.c.\ function, then the function
$g\circ F$ is locally d.c.\ on $A$.}
\medskip
In fact, Hartman \cite{H} formulated (II) only for the case $X = \mathbb{R}^m$, but he mentioned
(see the end of p.707) that his proof
clearly works also in more general settings (we could even suppose that $X$ is a topological linear space and $A$
is an arbitrary convex set). For a generalization of (II), proved in a
quite different way, see Proposition~\ref{lokdc}.
Hartman also remarked that his proof of (I) does not work for
infinite dimensional spaces. A corresponding counterexample was provided
by E.~Ko\-peck\'a and J.~Mal\'y \cite{KM}: given a nonempty open convex
set $A\subset\ell_2$, there exists a locally d.c.\ function on $A$ which
is not d.c.\ on $A$. (They also remark without proof that a similar
example can be constructed in each infinite dimensional normed linear space;
we prove this claim in Corollary~\ref{jldc}.)
The results (I) and (II) immediately imply the following superposition theorem.
\begin{theoremH}\label{T:H}
Let $A\subset \mathbb{R}^m$ and $B\subset\mathbb{R}^n$ be convex sets. Let $A$ be either open or closed, and let $B$ be open. If
$F\colon A\to B$ and $g\colon B\to\mathbb{R}$ are d.c., then the function
$g\circ F$ is d.c.
\end{theoremH}
\noindent
Note that Hartman did not mention Theorem~H explicitly, but he formulated its corollary
(obtained by putting $F := (f_1,f_2)$ and $g(x,y):= xy$ or $g(x,y):= x/y$):
\begin{corollaryH}\label{sopo}
Let $A \subset \mathbb{R}^m$ be either an open or a closed convex set. Let $f_1$, $f_2$ be d.c.\ on $A$. Then the
product $f_1\!\cdot\! f_2$ and, if $f_2(x) \neq 0$ for $x \in A$, the quotient $f_1/f_2$ are d.c.\ functions
on $A$.
\end{corollaryH}
Note that the case of the product can be proved in a more elementary way
(see \cite{Hi}), but the
stability with respect to quotients probably cannot be proved more easily.
Though (I) cannot be used to generalize Theorem H
to infinite dimensions, it remained open whether such a generalization is possible.
The present paper concerns this question.
We show that an infinite dimensional analogue of Theorem H does not hold
(see Corollary~\ref{jldc}):
\medskip
\noindent
{\em For each infinite dimensional normed linear space $X$, there exists a positive d.c.\ function
$f$ on $X$ such that $1/f$ is not d.c.}
\medskip
However, using a modification of Hartman's methods, we prove (Theorem
\ref{cely}) the following variant of Theorem~H (for other variants see Theorem~\ref{spec}).
\medskip
\noindent {\it
Let $X$ be a normed linear space.
Let $A\subset X$ be an open convex set, and
$F\colon A\to \mathbb{R}^n$ and $g\colon \mathbb{R}^n\to\mathbb{R}$ be d.c. Then the function
$g\circ F$ is d.c.}
\smallskip
\noindent
Consequently, if $f$, $h$ are d.c.\ on $A$, then, for instance,
$\exp (f)$ and $\frac{fh}{1+f^2 + h^2}$
are d.c.\ on $A$ (see the text after Theorem~\ref{cely}).
Another positive result in which $F$ is a real continuous convex (or concave)
function is Proposition~\ref{reflex}. It implies (see Remark~\ref{remreflex}(i))
the following:
\medskip\noindent
{\em Let $X$ be a reflexive Banach space and $f_1$, $f_2$ be continuous convex functions on
$X$. If the quotient $f_1/f_2$ is defined on $X$, then it is d.c.}
\medskip\noindent
(Note that the above statement is true only in reflexive spaces, see \cite{HKVZ}.)
We prove our results in a more general context of d.c.\ mappings between
normed linear spaces. In particular, we prove (see Corollary~\ref{bilin})
that, in some interesting cases, the inner product (and even a general ``product''
given by a bilinear mapping) of two d.c.\
mappings is d.c.\ as well.
\section{Preliminaries}\label{S:prelim}
We consider only normed linear spaces over the
reals $\mathbb{R}$. If $X$ is a normed linear space, we denote by $B_X$ its closed unit ball.
By $B(x,r)$ we denote the open ball with center
$x$ and radius $r$. We say that a Lipschitz mapping
$F$ is $L$-Lipschitz, if $\mathrm{Lip} F \leq L$, where $\mathrm{Lip} F$ is the (least) Lipschitz constant of $F$.
\begin{definition}[\cite{VeZa}]\label{D:dc}
Let $X,Y$ be normed linear spaces, $C\subset X$ be a convex set, and
$F\colon C\to Y$ be a continuous mapping. We say that $F$ is {\em d.c.}\
(or {\em delta-convex}) if there exists a continuous (necessarily convex)
function $f\colon C\to\mathbb{R}$ such that $y^*\circ F+f$ is convex on $C$
whenever $y^*\in Y^*$, $\|y^*\|\le1$. In this case we say that $f$
controls $F$, or that $f$ is a {\em control function} for $F$.
\end{definition}
\begin{remark}\label{R:dc}
The following facts are easy to prove (cf.~\cite{VeZa}).
\begin{enumerate}
\item[(a)] For $Y=\mathbb{R}^n$, the above definition of a d.c.\ mapping
coincides with the one in
the beginning of Introduction. Moreover, if $F=(F_1,\dots,F_n)$ and $f_i$ controls $F_i$, then
$f:= f_1+\dots+f_n$ controls $F$.
\item[(b)] If $g=f_1-f_2$, where $f_1,f_2$ are continuous convex
functions on a convex subset of a normed linear space, then $f_1+f_2$
controls $g$ (a direct verification is given after this remark).
\item[(c)] The notion of delta-convexity does not depend on the choice
of equivalent norms on $X$ and $Y$.
\end{enumerate}
\end{remark}
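For instance, the claim in (b) can be verified directly. Here $Y=\mathbb{R}$, so a functional $y^*\in Y^*$ with $\|y^*\|\le1$ is multiplication by a scalar $t\in[-1,1]$, and
\[
y^*\circ g+(f_1+f_2)=t(f_1-f_2)+(f_1+f_2)=(1+t)f_1+(1-t)f_2,
\]
which is convex, being a combination of $f_1$ and $f_2$ with nonnegative coefficients.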
A theory of d.c.\ mappings on open convex sets was developed in \cite{VeZa}. Some further
results, together with a survey of main results from \cite{VeZa}, can be
found in \cite{DuVeZa}.
We shall need the following two propositions.
\begin{proposition}[\cite{VeZa}]\label{P:veza}
Let $X,Y,Z$ be normed linear spaces, and let $A\subset X$ and $B\subset Y$ be
convex sets. Let $F\colon A\to B$ and $G\colon B\to Z$ be d.c.\ mappings
with control functions $f\colon A\to\mathbb{R}$ and $g\colon B\to \mathbb{R}$, respectively. If $G$ and
$g$ are Lipschitz on $B$,
then $G\circ F$ is d.c.\ on $A$ with a
control function $h=g\circ F +(\mathrm{Lip} G+ \mathrm{Lip} g)f$.
\end{proposition}
\begin{proof}
This was proved in \cite[Proposition~4.1]{VeZa} assuming that the sets
$A,B$ are also open, since this was the context the authors were interested in.
However, it is easy to see that the proof does not need this additional
assumption. Indeed, the proof is based on the equivalence of (i) and (iii) in
\cite[Proposition 1.13]{VeZa}, whose
proof does not use the openness of $A$.
\end{proof}
\begin{proposition}\label{P:lip}
Let $X,Y$ be normed linear spaces, $C\subset X$ a bounded open convex
set, and $F\colon C\to Y$ a d.c.\ mapping with a Lipschitz control
function. Then $F$ is Lipschitz.
\end{proposition}
\begin{proof}
This was stated in \cite[Theorem~18(i)]{DuVeZa} for $X$ and $Y$ Banach
spaces, but the proof therein works for normed linear spaces as well.
(Note that the question for which open convex
sets $C$ the proposition holds was answered in \cite{CsNa}.)
\end{proof}
\begin{notation}
Let $A,B,A_n,B_n$ ($n\in\mathbb{N}$) be subsets of a normed linear space $X$. We
shall use the notation:
\begin{itemize}
\item $A\subset\subset B$ whenever there exists $\varepsilon>0$
such that $A+ B(0,\varepsilon) \subset B$;
\item $A_n\nearrow A$ whenever $A_n\subset A_{n+1}$ for each
$n\in\mathbb{N}$, and $\bigcup_{n\in\mathbb{N}}A_n=A$;
\item $A_n\nearrow\!\!\!\nearrow A$ whenever $A_n\subset\subset A_{n+1}$ for each
$n\in\mathbb{N}$, and $\bigcup_{n\in\mathbb{N}}A_n=A$.
\end{itemize}
\end{notation}
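For instance, in $X=\mathbb{R}$ we have $(-n,n)\nearrow\!\!\!\nearrow\mathbb{R}$, while $\bigl(0,1-\tfrac1n\bigr)\nearrow(0,1)$ but not $\bigl(0,1-\tfrac1n\bigr)\nearrow\!\!\!\nearrow(0,1)$: for every $\varepsilon>0$ the set $\bigl(0,1-\tfrac1n\bigr)+B(0,\varepsilon)$ contains negative numbers, and hence is not contained in $\bigl(0,1-\tfrac1{n+1}\bigr)$.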
\begin{fact}\label{F:1}
Let $X$ be a normed linear space, $C\subset X$ a nonempty convex set, and $f\colon C\to\mathbb{R}$ a
convex function.
\begin{enumerate}
\item[(a)] If $C$ is open and bounded, and $f$ is continuous,
then $f$ is bounded below on $C$.
\item[(b)] If $f$ is bounded on $C$, then $f$ is Lipschitz on each $D\subset\subset C$.
\item[(c)] If $f$ is $L$-Lipschitz on $C$, then $f$ admits a convex $L$-Lipschitz
extension to the whole $X$.
\end{enumerate}
\end{fact}
\begin{proof}
(a) follows from the fact that $f$ is minorized by a continuous affine
function (by the Hahn-Banach theorem).
\\
(b) can be proved in the same way as local
Lipschitz continuity of
continuous, convex functions. For the sake of completeness, we give a
sketch of proof. Let $|f|\le M$ on $C$, $r>0$ be such that
$D+ B(0,2r) \subset C$, and $x,y\in D$. Then $z:=y+\frac{r(y-x)}{\|y-x\|}\in
C$, and $y=\frac{r}{\|y-x\|+r}x+\frac{\|y-x\|}{\|y-x\|+r}z$. By convexity,
$f(y)\le\frac{r}{\|y-x\|+r}f(x)+\frac{\|y-x\|}{\|y-x\|+r}f(z)$. It
easily follows that
$f(y)-f(x)\le\frac{\|y-x\|\bigl(f(z)-f(y)\bigr)}{r}\le\frac{2M}{r}\|y-x\|$.
The rest follows by interchanging $x$ and $y$.
\\
(c) It is well known (and easy to prove)
that the function $\widehat{f}\colon X\to\mathbb{R}$, given by
$\widehat{f}(x)=\inf\bigl\{f(c)+L\|x-c\|:c\in C\bigr\}$,
is a convex, $L$-Lipschitz extension of $f$ (cf.\ \cite{CoMu}).
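Indeed, for $x,y\in X$ and any $c\in C$ we have $f(c)+L\|x-c\|\le f(c)+L\|y-c\|+L\|x-y\|$; taking the infimum over $c\in C$ yields $\widehat{f}(x)\le\widehat{f}(y)+L\|x-y\|$, so $\widehat{f}$ is $L$-Lipschitz. Moreover $\widehat{f}=f$ on $C$: the choice $c=x$ gives $\widehat{f}(x)\le f(x)$, while $f(c)+L\|x-c\|\ge f(x)$ for each $c\in C$ because $f$ is $L$-Lipschitz on $C$. Convexity of $\widehat{f}$ follows, e.g., from the joint convexity of $(x,c)\mapsto f(c)+L\|x-c\|$ on $X\times C$.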
\end{proof}
We shall need the following well-known and very easy fact.
\begin{fact}\label{obalky}
Let $C$ be a convex set in a normed linear space $X$, and $r>0$.
Then the sets (called ``inner parallel set'' and ``outer parallel set'' of $C$)
$$
D:=\{x\in C:\mathrm{dist}(x,X\setminus C)>r\},\ \
E:=\{x\in X: \mathrm{dist}(x,C)<r\}
$$
are convex.
\end{fact}
\begin{observation}\label{O:lip}
Let $X,Y$ be normed linear spaces, $C\subset X$ a convex set,
and $F\colon C\to Y$ a d.c.\ mapping with a bounded above
control function $f$. Then both $F$ and $f$ are Lipschitz on each
bounded convex set $B\subset\subset C$.
\end{observation}
\begin{proof}
By Fact~\ref{obalky},
there exist open, bounded, convex sets
$D$ and $E$ such that $B\subset D\subset\subset E\subset C$. By Fact~\ref{F:1}(a),
$f$ is bounded on $E$. Hence $f$ is Lipschitz on $D$ by
Fact~\ref{F:1}(b), and $F$ is Lipschitz on $D$ by
Proposition~\ref{P:lip}.
\end{proof}
\begin{definition}
A normed linear space $X$ is said to have {\em modulus of convexity of power
type~2} if there exists $a>0$ such that $\delta_X(\epsilon)\ge
a\epsilon^2$ for each $\epsilon\in(0,2]$ (where $\delta_X$ denotes the
classical modulus of convexity of $X$; see, e.g., \cite{day} for the
definition).
\end{definition}
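For instance, every Hilbert space $H$ has modulus of convexity of power type~2: if $\|x\|=\|y\|=1$ and $\|x-y\|\ge\epsilon$, then the parallelogram law gives
\[
\Bigl\|\tfrac{x+y}{2}\Bigr\|^2=1-\tfrac14\|x-y\|^2\le1-\tfrac{\epsilon^2}{4},
\]
and hence $\delta_H(\epsilon)\ge1-\sqrt{1-\epsilon^2/4}\ge\epsilon^2/8$.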
\begin{fact}\label{F:powertype}\
\begin{enumerate}
\item[(a)] The $\ell_2$-direct sum $(X\oplus Y)_{\ell_2}$ has modulus of convexity of power
type 2 whenever both $X$ and $Y$ do.
\item[(b)] All $L_p(\mu)$ spaces with
$1<p\le2$ ($\mu$ arbitrary nonnegative measure) have modulus of
convexity of power type 2 (in their canonical norms).
\end{enumerate}
\end{fact}
\begin{proof}
(a) follows immediately from the following result by Bynum
\cite{bynum}: $X$ has modulus of convexity of power
type 2 if and only if there exists $b>0$ such that
$2\|x\|^2+2\|y\|^2\ge\|x+y\|^2+b\|x-y\|^2$ for each $x,y\in X$.\\
(b) is due to Hanner \cite{hanner}.
\end{proof}
Let $X,Y$ be normed linear spaces, and $A\subset X$
an open set. Recall that a mapping $F\colon A\to Y$ is said to be
$C^{1,1}$ on $A$ if its
Fr\'echet
derivative $F'(x)$ exists at each point $x\in A$ and
$F'\colon A\to\mathcal{L}(X,Y)$ is
Lipschitz.
The next proposition follows from the proof of the implication $(i)\!\Rightarrow\!(ii)$
in \cite[Theorem~11]{DuVeZa}.
\begin{proposition}\label{C11}
Let $X,Y$ be normed linear spaces,
$A\subset X$ an open convex set, $F\colon A\to Y$ a $C^{1,1}$
mapping. If $X$ admits an equivalent norm $|\cdot|$ with modulus of convexity
of power type 2, then $F$ is
d.c.\ on $A$ with a control function of the form
$f(x)=c|x|^2$ for some $c>0$.
\end{proposition}
\section{A consequence of Hartman's construction}
Hartman's construction \cite{H}, which gives the proof that
locally d.c.\ functions
in $\mathbb{R}^n$ are d.c., has some consequences also in infinite dimensional
spaces. It was observed (independently) already in \cite{PB} and
\cite{KM} (cf.\ Remark~\ref{obec}). The main new observation of the
present article is that Hartman's construction gives even a
characterization of d.c.\ mappings on open sets (Proposition~\ref{P:HKM}) which
(together with Proposition~\ref{P:veza}) implies some
infinite dimensional versions of Hartman's superposition theorem.
First we formulate a lemma which describes Hartman's construction in a general setting.
\begin{lemma}\label{L:hartman}
Let $X,Y$ be normed linear spaces, $C\subset X$ a nonempty
convex set, and $F\colon C\to Y$ a mapping. Let $\emptyset \neq D_n \subset C$
($n\in\mathbb{N}$)
be convex sets such that $D_n\nearrow C$ and, for each $n$,
$\mathrm{dist}(D_n,C\setminus D_{n+1})>0$,
$D_n$ is relatively open in $C$,
and
$F|_{D_n}$ is d.c.\ with a
control function $\gamma_n\colon D_n\to \mathbb{R}$
which is either bounded or Lipschitz. Then $F$ is d.c.\ on $C$.
\end{lemma}
\begin{proof}
First, fix $a\in D_1$, and observe
that the bounded sets $\tilde{D}_n:=D_n\cap B(a,n)$ satisfy the same
assumptions as the sets $D_n$. Thus we can (and do) suppose that each
$D_n$ is bounded, and hence each $\gamma _n$ is bounded on $D_n$.
Adding a constant to $\gamma_n$ if necessary, we can suppose that $0< \gamma_n(x) < b_n < \infty$ for
each $n \in \mathbb{N}$ and $x \in D_n$.
For each $n \in \mathbb{N}$, choose $0 < d_n < \mathrm{dist}(D_n,C\setminus D_{n+1})$, and
consider the Lipschitz convex functions
$\varphi_n(x) := \frac{b_n+1}{d_n} \, \mathrm{dist}(x,D_n)$ on $C$. Define
$$ h_n(x):= \max\{\gamma_{n+1}(x), \varphi_n(x)\},\ x \in D_{n+1},\ \ \text{and}\ \
h_n(x):= \varphi_n(x),\ x\in C \setminus D_{n+1}.$$
If $z \in D_{n+1}$, then there exists $\varepsilon>0$ such that $h_n(x)= \max\{\gamma_{n+1}(x), \varphi_n(x)\}$ for
$x \in C \cap B(z,\varepsilon)$, since $D_{n+1}$ is open in $C$. If $z \in C \setminus D_{n+1}$, then
$\mathrm{dist}(z,D_n)\ge d_n$ and therefore
there exists
$\varepsilon>0$ such that $\varphi_n(x) > b_n$, and thus $h_n(x)= \varphi_n(x)$, for each
$x \in C \cap B(z,\varepsilon)$.
Therefore, $h_n$ is continuous and convex on $C$. Moreover, clearly
\begin{itemize}
\item $h_n \geq 0$, and $h_n$ is bounded on each bounded subset of $C$.
\end{itemize}
Since $\varphi_n(x) = 0 $ for $x \in D_n$, we see that
$h_n(x) = \gamma_{n+1}(x)$ for $x \in D_n$. So,
\begin{itemize}
\item $h_n$ is a control function of $F$ on $D_n$.
\end{itemize}
Let us define, by induction, a sequence $\{f_n\}$
of continuous convex functions on
$C$ such that:
\begin{enumerate}
\item[(a)] $f_n$ is bounded on bounded subsets of $C$,
\item[(b)] $f_n\ge0$,
\item[(c)] $f_n$ controls $F$ on $D_{n+1}$, and
\item[(d)] $f_{n+1}=f_n$ on $D_n$.
\end{enumerate}
Put $f_1:=h_2$. Suppose we already have $f_1,\ldots,f_n$. Set
\[
s:=\sup f_n(D_{n+2})\,,\qquad \sigma:=\sup h_{n+2}(D_n)\, \ \ \text{and}
\]
\[
g_n(x):=h_{n+2}(x)-\sigma + \frac{\sigma+ s+1}{d_n}\mathrm{dist}(x,D_n)
\ \ \ \text{for}\ \ x\in C.
\]
Then clearly $g_n$ is continuous and convex on $C$, and it controls $F$ on $D_{n+2}$.
Define $f_{n+1}=\max\{f_n,g_n\}$. Clearly $f_{n+1}$ is continuous convex, $f_{n+1}\ge f_{n}\ge0$,
and $f_{n+1}$ is bounded on bounded subsets of $C$. If $x \in D_n$, then $g_n(x)\le 0\le f_n(x)$, consequently
$f_{n+1}=f_n$ on $D_n$.
Let us show that $f_{n+1}$ controls $F$ on
$D_{n+2}$; i.e., that the function
$\varphi_{y^*}:=y^*\circ F+f_{n+1}=\max\{y^*\circ F+f_n,y^*\circ F+g_n\}$ is
continuous and convex on $D_{n+2}$ for each $y^*\in B_{Y^*}$.
To this end, fix $y^*\in B_{Y^*}$ and $z \in D_{n+2}$.
If $z \in D_{n+1}$, then there is $\varepsilon>0$
such that $\varphi_{y^*}$ is continuous and convex on $B(z,\varepsilon) \cap C$ (since $D_{n+1}$ is open in $C$ and both
$f_n$ and $g_n$ control $F$ on $D_{n+1}$). If $z \in D_{n+2} \setminus D_{n+1}$, then $\mathrm{dist}(z, D_n) \geq d_n$, and
consequently $g_n(z)\geq 0-\sigma+(\sigma+s+1) > f_n(z)$. Therefore there exists $\varepsilon>0$
such that $U:=B(z,\varepsilon) \cap C\subset D_{n+2}$ and $\varphi_{y^*}$ is equal to the continuous convex function
$y^*\circ F+g_n$ on $U$. Hence we can conclude that $\varphi_{y^*}$ is
continuous and convex on $D_{n+2}$.
Now, for each $x\in C$, the sequence $\bigl\{f_n(x)\bigr\}$ is constant
for large $n$'s, hence
$f(x):=\lim_{n\to\infty} f_n(x)$ is well defined on $C$.
Since $f=f_n$ on $D_n$, (c) easily implies that
$f$ is a continuous convex function, which controls $F$ on $C$.
\end{proof}
\begin{remark}\label{zahl}
The assumptions of Lemma \ref{L:hartman} allow the possibility
that $D_n = D_{n+1}= \dots=C$ for some $n$.
\end{remark}
\begin{lemma}\label{L:dn}
Let $X$ be a normed linear space and
let $C\subset X$ be nonempty, open and convex. Let $\{C_n\}$ be a sequence of
convex sets with nonempty interior, such that $C_n\nearrow C$.
Then there exists a sequence $\{D_n\}$ of nonempty
bounded open
convex sets such that $D_n\nearrow\!\!\!\nearrow C$, and $D_n\subset\subset C_n$ for each $n$.
\end{lemma}
\begin{proof}
We can (and do) suppose that each $C_n$ is bounded. (If this is not the case,
replace, for each $n$, the set $C_n$ with the set $C_n\cap B(x_0,n)$
where $x_0$ is an arbitrary interior point of $C_1$.)
\\
First we claim that $C=\bigcup_n\mathrm{int}\, C_n$. Indeed, let $x\in C$
be any point. Then $x\in C_n$ for some $n$. If $x\notin\mathrm{int}\, C_n$,
choose any $y\in\mathrm{int}\, C_n$. There exists $z\in C$ such that
$x\in(y,z)$ (i.e., $x$ is a relative interior point of the segment
$[y,z]$). There exists $k>n$ such that $z\in C_k$. Then
$x\in\mathrm{int}\, C_k$, since $y\in\mathrm{int}\, C_k$.
\\
Now, fix $\delta>0$ such that $C_1$ contains an open ball of radius $2\delta$, and
define
$$D_n:=\{x\in C_n : \mathrm{dist}(x, X\setminus C_n)>\delta/n\}.$$
Obviously $D_n\subset\subset C_n$ for each $n$, and the sets $D_n$ are
nonempty, open and (by Fact~\ref{obalky}) convex. Moreover
\[
D_n\subset\subset
\{x\in C_n: \mathrm{dist}(x, X\setminus C_n)>\delta/(n+1)\}\subset
D_{n+1}.
\]
To finish the proof, fix $x\in C$. Then $x\in\mathrm{int}\, C_n$ for some $n$.
Fix $k>n$ such that
$\mathrm{dist}(x,X\setminus C_n)>\delta/k$. Then
$\mathrm{dist}(x,X\setminus C_k)\ge\mathrm{dist}(x,X\setminus C_n)>\delta/k$
which means that $x$ belongs to $D_k$.
\end{proof}
Now, we are ready to state the main result of this section.
\begin{proposition}\label{P:HKM}
Let $X,Y$ be normed linear spaces, $C\subset X$ a nonempty open
convex set, and $F\colon C\to Y$ a mapping. Then the following
assertions are equivalent:
\begin{enumerate}
\item[(i)]
$F$ is d.c.\ on $C$;
\item[(ii)]
there exists a sequence $\{C_n\}$ of convex sets with nonempty interior
such that $C_n\nearrow C$ and, for each $n$, $F|_{C_n}$ is d.c.\ with a control
function that is bounded from above on $C_n$;
\item[(iii)]
there exists a sequence $\{D_n\}$ of bounded open convex sets such that
$D_n\nearrow\!\!\!\nearrow C$ and, for each $n$, $F|_{D_n}$ is Lipschitz and
d.c.\ with a Lipschitz control function
on $D_n$.
\end{enumerate}
\end{proposition}
\begin{proof}
$(i)\Rightarrow(ii).$
Let $f\colon C\to\mathbb{R}$ be a control function for
$F$. Fix $x_0\in C$ and consider the sets
$C_n=\{x\in C: f(x)< f(x_0)+n\}$ ($n\in\mathbb{N}$). They are nonempty, open and
convex, and they obviously satisfy (ii).
\smallskip\noindent
$(ii)\Rightarrow(iii).$
Let $\{C_n\}$ be as in (ii). Let $D_n$ ($n\in\mathbb{N}$) be the bounded, open,
convex sets constructed in Lemma~\ref{L:dn} from the sets $C_n$.
Then (iii) follows immediately from Observation~\ref{O:lip}.
\smallskip\noindent
$(iii)\Rightarrow(i)$ follows from Lemma~\ref{L:hartman}.
\end{proof}
Proposition~\ref{P:HKM} easily implies the following
generalization of Hartman's result (I) from Introduction,
which was stated (for open $A$) already in \cite[Theorem~1.20]{VeZa}
with only a hint for the proof.
\begin{corollary}\label{C:loc}
Let $A\subset\mathbb{R}^d$ be a convex set which is either open or closed, and let $Y$ be a normed linear
space. Then each locally d.c.\ mapping $F\colon A\to Y$ is d.c.\ on $A$.
\end{corollary}
\begin{proof}
First we will show that $F$ is d.c.\ on each compact convex
set $C \subset A$. Using compactness of $C$ and \cite[Lemma 1]{H},
we easily see that there exist continuous convex functions $f_i$ on $A$, $x_i \in C$, and $r_i>0$, $i=1,\dots,k$, such that
$C \subset \bigcup_{i=1}^k B(x_i,r_i)$ and $f_i$ controls $F$ on $C \cap B(x_i,r_i)$. Consequently, $f_C
= f_1+\dots+f_k$ controls $F$ on $C$.
Now, distinguish two cases. First suppose that $A$ is open.
Then choose compact convex sets $C_n$ with nonempty interior such that
$C_n\nearrow A$. Since $f_{C_n}$ is bounded on $C_n$, Proposition \ref{P:HKM} implies that $F$ is d.c.
If $A$ is closed, choose $z \in A$ and put $D_n:= A \cap B(z,n)$.
Since $\overline{D_n}\subset A$ is compact and convex, $F$ is d.c.\ on $D_n$ (with a bounded control function), and we can apply
Lemma \ref{L:hartman}.
\end{proof}
\begin{remark}\label{obec}
It is known (see \cite{BFV}) that, on each infinite dimensional
Banach space, there exists
a continuous convex function which is unbounded on a ball. This implies
(via Fact~\ref{F:1}(b) and Proposition~\ref{P:lip}) that the implication
$(ii)\Rightarrow(i)$ in Proposition~\ref{P:HKM} is a {\it strict} generalization of
both \cite[Theorem~2.3]{PB} and \cite[Corollary~18]{KM}, where
delta-convexity of $F$ was proved under the following stronger
assumption: $F$ is d.c.\ on each bounded closed convex $B\subset C$
with a Lipschitz (\cite{PB}) or bounded (\cite{KM}) control function on
$B$.
\end{remark}
\section{Global delta-convexity of composed mappings}
Let us start with the following generalization of (II) (see
Introduction) which is essentially proved in \cite[Theorem~4.2]{VeZa}.
\begin{proposition}\label{lokdc}
Let $X,Y,Z$ be normed linear spaces, $A\subset X$ a convex set, and
$B\subset Y$ an open set. Let $F\colon A\to B$ and $G\colon B\to Z$ be
locally d.c.\ mappings. Then $G\circ F$ is locally d.c.
\end{proposition}
\begin{proof}
Fix $a\in A$. Since $G$ is locally d.c.\ and each d.c.\ mapping on an
open convex subset of $Y$ is locally Lipschitz (see
\cite[Proposition~1.10]{VeZa}), there exists an open convex neighborhood
$B_0\subset B$ of $F(a)$ on which $G$ is Lipschitz and d.c.\ with a
Lipschitz control function. Find $\delta>0$ such that, for
$A_0:=B(a,\delta)\cap A$, we have that $F(A_0)\subset B_0$ and $F|_{A_0}$
is d.c. Then $G\circ F|_{A_0}=(G|_{B_0})\circ (F|_{A_0})$ is d.c.\ by
Proposition~\ref{P:veza}.
\end{proof}
Our results on global delta-convexity of composed mappings will follow from the
next basic lemma.
\begin{lemma}\label{L:main}
Let $X,Y,Z$ be normed linear spaces, let $A\subset X$
and $B\subset Y$ be convex sets, and let $F\colon A\to B$
and $G\colon B\to Z$ be mappings.
Suppose
there exist sequences of convex sets $A_n \subset A$, $B_n \subset B$
such that $F(A_n) \subset B_n$, $G|_{B_n}$ is Lipschitz and d.c.\ with a
Lipschitz control function,
and at least one of the following conditions holds:
\begin{enumerate}
\item
$A_n$ is relatively open in $A$, $A_n\nearrow A$, $\mathrm{dist}(A_n, A \setminus A_{n+1}) >0$,
$F|_{A_n}$ is either bounded or Lipschitz and it is d.c.\ with a control function
which is either bounded or Lipschitz.
\item
$A$ is open, $F$ is d.c., $\mathrm{int} A_n \neq \emptyset$, and $A_n\nearrow A$.
\end{enumerate}
Then $G\circ F$ is d.c.\ on $A$.
\end{lemma}
\begin{proof}
Let (i) hold. As in the proof of Lemma~\ref{L:hartman}, we can (and do)
suppose that the sets $A_n$ are bounded. Then, on each $A_n$, $F$ is
bounded and admits a bounded control function.
Proposition~\ref{P:veza} implies that the mapping
$G \circ F|_{A_n}= (G|_{B_n})
\circ (F|_{A_n})$ is d.c.\ with a bounded control function. By
Lemma~\ref{L:hartman}, $G\circ F$ is d.c.
Now, suppose that (ii) holds. By Lemma~\ref{L:dn}, we can (and do)
suppose that $A_n\nearrow\!\!\!\nearrow A$ and each $A_n$ is open.
By Proposition~\ref{P:HKM},
there exists a sequence $\{D_n\}$ of bounded, open, convex sets such that
$D_n\nearrow\!\!\!\nearrow A$ and, for each $n$, $F|_{D_n}$ is Lipschitz and d.c.\
with a Lipschitz control function. Then the sets $\tilde{A}_n:=A_n\cap D_n$ are open and convex,
$F(\tilde{A}_n) \subset B_n$, and $\tilde{A}_n\nearrow\!\!\!\nearrow A$. Thus the condition (i) holds with
$A_n$ replaced by $\tilde{A}_n$. So $G \circ F$ is d.c.\ by the first part of the proof.
\end{proof}
As a simpler but still rather general consequence we obtain:
\begin{proposition}\label{nove}
Let $X,Y,Z$ be normed linear spaces, let $A\subset X$
and $B\subset Y$ be convex sets, and let $F\colon A\to B$
and $G\colon B\to Z$ be mappings. Suppose that
the restriction of $G$ to each bounded convex subset of $B$
is Lipschitz and
d.c.\ with a Lipschitz control function,
and at least one of the following conditions holds.
\begin{enumerate}
\item
The restriction of $F$ to each bounded convex subset of $A$
is bounded and d.c.\ with a bounded control function.
\item
$A$ is open and $F$ is d.c.
\end{enumerate}
Then $G \circ F$ is d.c.
\end{proposition}
\begin{proof}
To prove (i), choose an arbitrary $a\in A$ and,
for each $n\in\mathbb{N}$, set $A_n:=B(a,n)\cap A$,
$B_n:=\mathrm{conv}\,F(A_n)$. It is easy to see that
$\mathrm{dist}(A_n,A\setminus A_{n+1})>0$ and
$B_n\subset B$ is bounded for each $n$. Thus
$G \circ F$ is d.c.\ by Lemma \ref{L:main}.
To prove (ii), use Proposition~\ref{P:HKM} to choose a sequence $\{A_n\}$ of bounded open convex sets such that
$A_n\nearrow\!\!\!\nearrow A$ and, for each $n$, $F|_{A_n}$ is Lipschitz and d.c.\
with a Lipschitz control function. Then $B_n := \mathrm{conv} F(A_n)$ is clearly bounded and convex, and
thus $G|_{B_n}$ is Lipschitz and d.c.\
with a Lipschitz control function. Apply Lemma \ref{L:main}.
\end{proof}
Most of the next results are corollaries of Proposition~\ref{nove}.
One of the
exceptions is the following
interesting proposition.
\begin{proposition}\label{reflex}
Let $C$ be an open convex subset of a reflexive Banach space $X$,
and $f\colon C\to \mathbb{R}$ be a continuous convex function. Let $I\subset\mathbb{R}$ be an open interval containing $f(C)$.
Then, for every normed
linear space $Z$ and every d.c.\ mapping $G\colon I\to Z$, the
composed map $G\circ f$ is d.c.\ on $C$.
\end{proposition}
\begin{proof}
Let $\{b_n\}\subset\bigl(\inf f(C),\sup I\bigr)$ be an increasing
sequence tending to $\sup I$. Then clearly the sets
$C_n:=\{ x\in C: f(x)< b_n\}$ are nonempty, open and convex, and
$C_n\nearrow C$. By Lemma~\ref{L:dn}, there exist nonempty
bounded open convex sets $D_n\subset\subset C_n$ ($n\in\mathbb{N}$) with
$D_n\nearrow\!\!\!\nearrow C$.
Since $f$ attains its infimum on
the weakly compact set $\overline{D_n}$
(see e.g.\ \cite[Theorem~25.1(b)]{Deimling}),
we have
$a_n:=\min f\bigl(\overline{D_n}\bigr)>\inf{I}$ and hence
$f(D_n)\subset[a_n, b_n]\subset I$ ($n\in\mathbb{N}$). Since $G$ and its control function
are locally Lipschitz on $I$ (cf.\ \cite[Proposition~1.10]{VeZa}), they are Lipschitz on each
$[a_n, b_n]$. Apply Lemma~\ref{L:main} with $A:= C$, $A_n:=D_n$, and $B_n:= [a_n,b_n]$.
\end{proof}
\begin{remark}\label{remreflex}
\begin{enumerate}
\item
Proposition~\ref{reflex} implies that $1/f$ is d.c.\ whenever $f$ is a positive continuous
convex function on an open convex subset of a reflexive Banach space.
\item
It is easy to see that
Proposition~\ref{reflex} holds for concave (instead of convex) $f$ as
well. However it is not true for {\em all} d.c.\ functions $f$ (see Corollary~\ref{jldc}).
\item
Proposition~\ref{reflex} fails in any nonreflexive Banach space $X$: by \cite{HKVZ},
a Banach space
$X$ is reflexive
if and only if $1/f$ is d.c.\ for each positive continuous convex function $f$ on $X$.
\end{enumerate}
\end{remark}
\begin{theorem}\label{T:best}
Let $X,Y,Z$ be normed linear spaces, let $A\subset X$ and $B\subset Y$ be
open convex sets, and let $F\colon A\to B$ and $G\colon B\to Z$ be d.c.\
mappings. Then $G\circ F$ is d.c.\ on $A$, provided
at least one of the following
conditions is satisfied:
\begin{enumerate}
\item[(a)] $B=Y$ and $G$ admits a control function $g$ that is bounded on
bounded sets;
\item[(b)] $Y$ is finite-dimensional and $\overline{F(A)}\subset B$;
\item[(c)] $Y$ admits a renorming with modulus of convexity of power
type 2, and $G$ is $C^{1,1}$ on bounded open subsets of $B$.
\end{enumerate}
\end{theorem}
\begin{proof}
Let (a) hold. Let $E \subset Y$ be an arbitrary bounded convex set. Choose a bounded convex set $C$ such that
$E \subset\subset C$. Since $g$ is bounded on $C$, Observation~\ref{O:lip} implies
that both $G$ and $g$ are Lipschitz on $E$. Thus $G \circ F$
is d.c.\ by Proposition~\ref{nove}.
Now, suppose (b) holds. By Proposition~\ref{P:HKM}, there exists a sequence
$\{A_n\}$ of nonempty bounded open convex sets such that
$A_n\nearrow A$ and $F$ is Lipschitz on each $A_n$.
Since each
$\overline{F(A_n)}$
is a compact subset of $B$ ($Y$ is finite-dimensional!),
$B_n:=\mathrm{conv}\,\overline{F(A_n)}\subset\subset B$ is a compact convex subset of $B$.
Let $\tilde g$ be a control function of $G$. We can clearly find $\varepsilon>0$ such that $\tilde g$ is bounded
on $C:= B_n + B(0,\varepsilon) \subset B$.
Observation~\ref{O:lip} implies
that both $G$ and $\tilde g$ are Lipschitz on $B_n$.
Now, Lemma~\ref{L:main} shows that $G\circ F$ is d.c.
Finally, let (c) hold. For each bounded convex set $E\subset B$, let
$B_0\subset B$ be a bounded convex open set
containing $E$. Since $G$ is $C^{1,1}$ on $B_0$, it
is also Lipschitz on $B_0$. Moreover,
Proposition~\ref{C11} easily implies that $G$ admits a Lipschitz control function on $B_0$, and
hence also on $E$. Thus, we can apply Proposition~\ref{nove}.
\end{proof}
Let $X,Y$ be vector spaces.
Recall that a mapping $Q\colon X\to Y$ is {\em quadratic} if there
exists a bilinear mapping $B\colon X\times X\to Y$ such that
$Q(x)=B(x,x)$ for each $x\in X$. In this case, we say that $Q$ {\em is generated} by $B$.
\begin{definition}[\cite{KoVe}]
A normed linear space $X$ is said to have the {\em property~(D)} if every
continuous quadratic form on $X$ can be represented as a difference of
two nonnegative continuous quadratic forms.
\end{definition}
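For instance, every Hilbert space $H$ has property~(D): a continuous quadratic form on $H$ can be written as $Q(x)=\langle Tx,x\rangle$ with $T$ a bounded self-adjoint operator, and the decomposition $T=T^{+}-T^{-}$ into its positive and negative parts exhibits $Q$ as a difference of two nonnegative continuous quadratic forms.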
\begin{proposition}\label{quadr}
Let $X,Y,Z$ be normed linear spaces, $C\subset X$ an open convex set,
$F\colon C\to Y$ a d.c.\ mapping, and $Q\colon Y\to Z$ a
continuous quadratic mapping. Then $Q\circ F$ is d.c.\ on $C$, provided
at least one of the following conditions is satisfied:
\begin{enumerate}
\item[(a)] $Y$ admits a renorming with modulus of convexity
of power type 2;
\item[(b)] $Z$ is finite-dimensional and $Y$ has the property (D).
\end{enumerate}
\end{proposition}
\begin{proof}
The case (a) follows immediately from Theorem~\ref{T:best}(c),
since each
continuous quadratic mapping is $C^{1,1}$.
Suppose (b) holds. We can suppose that $Z=\mathbb{R}^d$ for some $d\in\mathbb{N}$. Then
the components $Q_j$ ($j=1,\ldots,d$) of the quadratic mapping $Q$ are
continuous quadratic forms. Since $Y$ has (D), we can write
$Q_j=p_j-q_j$ where $p_j,q_j$ are nonnegative continuous quadratic
forms, in particular, they are convex continuous functions that are
bounded on bounded sets.
By Remark \ref{R:dc}(a) and (b),
$Q$ is d.c.\ with a
control function which is bounded on bounded subsets of $Y$. Apply
Theorem~\ref{T:best}(a).
\end{proof}
The following Corollary~\ref{bilin} improves \cite[Corollary~4.3]{VeZa}, which
states only that $B\circ(F,G)$ is locally d.c.\ whenever $Y$ and $V$ are Hilbert spaces.
\begin{corollary}\label{bilin}
Let $X,Y,V,Z$ be normed linear spaces, $C\subset X$ an open convex set,
$F\colon C\to Y$ and $G\colon C\to V$ d.c.\ mappings, and
$B\colon Y\times V\to Z$ a continuous bilinear mapping. Then the
mapping $B\circ(F,G)\colon x\mapsto B\bigl(F(x), G(x)\bigr)$ is d.c.\ on
$C$, provided at least one of the following conditions is satisfied:
\begin{enumerate}
\item[(a)] both $Y$ and $V$ admit renormings with modulus of convexity
of power type 2;
\item[(b)] $Z$ is finite-dimensional and $Y\times V$ has the property
(D).
\end{enumerate}
\end{corollary}
\begin{proof}
Observe that $B$ is also a quadratic mapping on $Y\times V$; indeed, it is generated by
the bilinear mapping
$\tilde{B}\bigl((y,v),(y',v')\bigr)=B(y,v')$ on $(Y\times
V)\times(Y\times V)$. Moreover, by \cite[Lemma~1.7]{VeZa},
the mapping $x\mapsto\bigl(F(x),G(x)\bigr)$ is d.c.\ on $C$.
Apply Fact~\ref{F:powertype}(a) and Proposition~\ref{quadr}.
\end{proof}
\begin{remark}
\begin{enumerate}
\item[(a)] By Fact~\ref{F:powertype}(b), the assumptions in
Proposition~\ref{quadr}(a) and Corollary~\ref{bilin}(a) are satisfied, for instance, if each of
$Y,V$ is isomorphic to a subspace of some $L_p(\mu)$ with $1< p\le 2$
(not necessarily with the same $p$ and $\mu$).
\item[(b)] By \cite[Theorem~1.6 and Observation~3.9]{KoVe}, the
assumptions in
Proposition~\ref{quadr}(b) and Corollary~\ref{bilin}(b) are satisfied, for instance, if each of
$Y,V$ is isomorphic to one (not necessarily the same) of the spaces
$C(K)$, $c_0(\Gamma)$, $L_p(\mu)$ with $2\le p\le\infty$.
\end{enumerate}
\end{remark}
\section{Global delta-convexity of composed functions}
Here we present positive results which are formulated without using the notion of d.c.\ operators, i.e., those
which directly concern Hartman's results.
Probably most interesting is the following immediate consequence of
Theorem~\ref{T:best}(a).
\begin{theorem}\label{cely}
Let $X$ be a normed linear space. Let $A\subset X$ be an open convex set, and
$F\colon A\to \mathbb{R}^n$ and $g: \mathbb{R}^n \to \mathbb{R}$ be d.c. Then the composed function $g\circ F$ is d.c.
\end{theorem}
Since each $C^2$ function $g: \mathbb{R}^n \to \mathbb{R}$ is d.c.\ by Proposition~\ref{C11} and (I) from Introduction,
applying Theorem \ref{cely} to $F=(f,h)$ and $g(x,y)=xy$, we obtain
that $f\!\cdot\! h$ is d.c.\
on $A$, whenever $f$ and $h$ are real d.c.\ functions on $A$.
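(For this particular $g$ one does not even need Proposition~\ref{C11}: the elementary identity
$$xy=\tfrac{1}{4}\bigl((x+y)^2-(x-y)^2\bigr)$$
exhibits $g(x,y)=xy$ directly as a difference of two convex functions on $\mathbb{R}^2$.)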
However, this fact is well-known (cf.~\cite{Hi}) and
can be proved in a quite elementary way.
But the fact that, for instance, $\exp(f)$ and $\frac{fh}{1+f^2 + h^2}$ are d.c.\
on $A$ seems to be new. (Hartman's results only imply that these functions are locally d.c.)
For compositions of special d.c.\ functions, we obtain the following.
\begin{theorem}\label{spec}
Let $X$ be a normed linear space and $A\subset X$, $B\subset \mathbb{R}^n$ convex sets. Let
$F = (F_1,\dots,F_n)\colon A \to B$ be a d.c.\ mapping and $g\colon B \to \mathbb{R}$ a d.c.\ function.
Then $g\circ F$ is d.c.\ on $A$, provided
at least one of the following
conditions is satisfied:
\begin{enumerate}
\item[(a)] $A$ is open, $F$ is d.c., and $g$ is a difference of two Lipschitz convex functions;
\item[(b)] each $F_i$ is a difference of two continuous convex functions, which are
bounded on bounded subsets of $A$, and
the restriction of $g$ to each bounded convex subset of $B$
is a difference of two Lipschitz convex functions;
\item[(c)] $X = \mathbb{R}^k$, $A$ is open or closed, $F$ is d.c., and, for each $a \in A$, there exists $\varepsilon>0$ such that $g$ is a difference of two Lipschitz convex functions on $B \cap B(F(a),\varepsilon)$.
\end{enumerate}
\end{theorem}
\begin{proof}
To prove (a), observe that, by Fact \ref{F:1} (c), we can suppose that $g$ is a difference of two Lipschitz convex functions
on the whole $\mathbb{R}^n$. Hence $g\circ F$ is d.c.\ on $A$ by Theorem \ref{cely}.
The part (b) follows from Remark~\ref{R:dc}(a) and (b), and
Proposition \ref{nove}.
Let (c) hold. By Corollary \ref{C:loc} (or (I)), it is sufficient to show that
$g\circ F$ is locally d.c.\ on $A$. To this end, choose an $a \in A$ and find $\varepsilon>0$ such that $g$ is a difference of two Lipschitz convex functions on $B \cap B(F(a),\varepsilon)$. Since $F$ is continuous, we can find $\delta>0$ such that
$F(B(a,\delta) \cap A) \subset B \cap B(F(a),\varepsilon)$. Using Proposition \ref{P:veza} (and Remark~\ref{R:dc}(b)), we obtain
that $g\circ F$ is d.c.\ on $B(a,\delta) \cap A$.
\end{proof}
Note that the case (c) follows also from proofs in \cite{H}.
However, a claim of
P.~Hartman (see \cite{H}, p.708, lines 12-17), which would imply (via (I)
from Introduction)
that, in (c), it is sufficient to write ``$g$ is d.c.\ and Lipschitz''
instead of ``$g$ is a difference of two Lipschitz convex functions'', is
false (presumably due to a misprint).
This is shown by the following example.
\begin{example}\label{chyba}
Let $d\colon \mathbb{R} \to \mathbb{R}$ be the characteristic function of the set $S: = \bigcup_{n \in \mathbb{N}} [-2^{-2n+2},-2^{-2n+1})$
and put $g(x): = \int_{-1}^x d$ for $x\in [-1,0]$. First we will show that $g$ is a Lipschitz d.c.\ function which
is not a difference of two Lipschitz convex functions on $[-1/2,0]$. Since $d$ is bounded, $g$ is clearly Lipschitz.
Clearly $g'_+(x) = d(x)$, $x \in [-1,0)$, since $d$ is right continuous.
For $x\in[-1,0)$,
let $v(x)$ be the total
variation of $d$ on the interval $[-1,x]$. It is easy to check that $v(x)= n-1$ for $x \in [-2^{1-n}, -2^{-n})$,
and consequently $\int_{-1}^0 v = \sum_{n=1}^{\infty} (n-1) 2^{-n} < \infty$. Thus both $v$ and
$w:= v -d$ are nondecreasing and (Lebesgue) integrable on $[-1,0)$. So, $c_1(x):= \int_{-1}^x v$ and
$c_2(x):= \int_{-1}^x w$ are continuous convex functions on $[-1,0]$, and
$$ g(x) = \int_{-1}^x d = \int_{-1}^x (v-w) = c_1(x) - c_2(x),\ \ \ \ x \in [-1,0].$$
Therefore, $g$ is d.c.\ on $[-1,0]$.
Now, suppose to the contrary that $g=p-q$ on $[-1/2,0]$, where $p$, $q$ are convex Lipschitz functions
on $[-1/2,0]$. It is well-known that then the right derivatives $p'_+$, $q'_+$ are finite, bounded
and nondecreasing
functions on $[-1/2,0)$. Further, $d = g'_+ = p'_+ - q'_+$ on $[-1/2,0)$.
Let $V_a^b\phi$ denote the total variation of $\phi$ on $[a,b]$.
Then, for $ x \in [-1/2,0)$,
\begin{multline*}
v(x)- v(-1/2)=V_{-1/2}^x ( p'_+ - q'_+) \leq
V_{-1/2}^x \, p'_+ + V_{-1/2}^x \, q'_+\\
= \bigl(p'_+(x)- p'_+(-1/2)\bigr) + \bigl(q'_+(x)- q'_+(-1/2)\bigr)=: z(x),
\end{multline*}
which is a contradiction, since $\lim_{x\to 0-} v(x) = \infty$ and $z$ is a bounded function.
Now, set $F(x):= -|x|$ for $x \in [-1,1]$. Then $g\circ F$ is not d.c.\ even on $(-1,1)$. Indeed, otherwise $g\circ F$ would
be a difference of two Lipschitz convex functions on $[-1/2,0]$, which is not true, since $g\circ F = g$ on $[-1/2,0]$.
\end{example}
\section{The main counterexample}
The main result of this section (Theorem~\ref{ex}) provides
a general construction of non-d.c.\ composed
mappings. Its proof uses some ideas from \cite{KM}.
The following lemma, implicitly contained in
\cite{KM}, is useful
for showing that certain functions or mappings are not d.c.
\begin{lemma}\label{ndc}
Let $X,Y$ be normed linear spaces, let $A\subset X$ be an open convex
set with $0\in A$, and let $F\colon A\to Y$ be a mapping. Suppose there exist
$\lambda\in(0,1)$ and a sequence of balls $B(x_n,\delta_n)\subset A$ such
that $\{x_n\}\subset\lambda A$, $\delta_n\to0$ and $F$ is unbounded on
each $B(x_n,\delta_n)$. Then $F$ is not d.c.\ on $A$.
\end{lemma}
\begin{proof}
Suppose the contrary. Let $f$ be a control function for $F$
on $A$. We can suppose $f\geq 0$ (otherwise choose an affine function $g$ such that $g \leq f$ on $A$, and consider
$f-g$ instead of $f$). For each $n$, let $z_n\in A$ be such that $x_n=\lambda z_n$.
Observe that $\|h\|<\delta_n$ implies
$x_n+h=\lambda z_n+(1-\lambda)\frac{h}{1-\lambda}$ and
$\|\frac{h}{1-\lambda}\|<\frac{\delta_n}{1-\lambda}$. Now, fix $m\in\mathbb{N}$ so
large that $B(0,\frac{\delta_m}{1-\lambda})\subset A$ and both $F$ and $f$ are
bounded on $B(0,\frac{\delta_m}{1-\lambda})$. Then we have
\begin{align}
{\textstyle
\|\lambda F(z_m)+(1-\lambda)F(\frac{h}{1-\lambda})}&{\textstyle -F(x_m+h)\|}
\label{uno}\\
&{\textstyle\le \lambda f(z_m)+(1-\lambda)f(\frac{h}{1-\lambda})-f(x_m+h)}
\notag\\
&\le {\textstyle\lambda f(z_m)+(1-\lambda)f(\frac{h}{1-\lambda})}
\label{due}
\end{align}
whenever $\|h\|<\delta_m$. But this is a contradiction since the
expression \eqref{due} is bounded on $\{h: \|h\| < \delta_m\}$ while
\eqref{uno} is not (because $F$ is
unbounded on $B(x_m,\delta_m)$).
\end{proof}
\begin{lemma}\label{strexp}
Let $X$ be a normed linear space. Let $e\in S_X$, $e^*\in S_{X^*}$ and $c>0$
be such that $e^*(e)=1$ and the implication
\begin{equation}\label{hypo}
e^*(u)>1-\epsilon\text{ and } \|u\|\le1\ \ \Rightarrow\ \
\|u-e\|\le c\,\epsilon
\end{equation}
holds for $u\in X$ and $\epsilon>0$.
Then the following implication holds for $x\in X$ and $0<\delta<\frac{1}{2}\,$:
\begin{equation}\label{thesis}
{\textstyle\frac{1}{2}}\|x\|^2 <
{\textstyle\frac{1}{2}}\|e\|^2 + e^*(x-e) +\delta\ \ \Rightarrow\ \
\|x-e\|<(1+2c)\sqrt{2\delta}\,.
\end{equation}
\end{lemma}
\begin{proof}
Let $x\in X$ and $0<\delta<\frac{1}{2}$ satisfy the left-hand side of
\eqref{thesis}.
Then
\[
\textstyle\frac{1}{2}\|x\|^2<e^*(x)-\frac{1}{2}+\delta\le\|x\|-
\frac{1}{2}+\delta
\]
which implies $\frac{1}{2}(1-\|x\|)^2<\delta$. Thus
$0<1-\sqrt{2\delta}<\|x\|<1+\sqrt{2\delta}.$
If $\|x\|\le1$, then $e^*(x)>\frac{1}{2}\|x\|^2+\frac{1}{2}-\delta>
\frac{1}{2}(1-\sqrt{2\delta})^2+\frac{1}{2}-\delta=1-\sqrt{2\delta}$. By
the assumption \eqref{hypo},
$\|x-e\|\le c\,\sqrt{2\delta}<(1+2c)\sqrt{2\delta}$.
If $\|x\|>1$, then (as above)
$e^*(\frac{x}{\|x\|})>\frac{1}{\|x\|}\left(\frac{1}{2}\|x\|^2+\frac{1}{2}-\delta\right)
>\frac{1-\sqrt{2\delta}}{\|x\|}>\frac{1-\sqrt{2\delta}}{1+\sqrt{2\delta}}=
1-\frac{2\sqrt{2\delta}}{1+\sqrt{2\delta}}.$ By \eqref{hypo}, we have
$\|\frac{x}{\|x\|}-e\|\le c\,\frac{2\sqrt{2\delta}}{1+\sqrt{2\delta}}$.
Consequently,
$\|x-e\| \le \|x-\frac{x}{\|x\|}\| + \|\frac{x}{\|x\|}-e\| \le
(\|x\|-1) + \frac{2c\sqrt{2\delta}}{1+\sqrt{2\delta}}<
\sqrt{2\delta}\bigl(1+\frac{2c}{1+\sqrt{2\delta}}\bigr)<(1+2c)\sqrt{2\delta}$.
\end{proof}
\begin{lemma}\label{baze}
For each infinite dimensional normed linear space $X$, there exists a
countable biorthogonal system $\{e_n,e^*_n\}\subset X\times X^*$ such that:
\[
\textstyle
\|e_n\|=1\ (n\in\mathbb{N}),\ \
R:=\sup_{n}\|e^*_n\|<\infty,\ \
r:=\inf_{m\ne n}\|e_m-e_n\|>0.
\]
\end{lemma}
\begin{proof}
The completion of $X$ contains a normalized basic sequence
$\{e_n\}$ (see \cite[Theorem~6.14]{FHHMPZ}). By the ``small perturbation lemma''
\cite[Theorem~6.18]{FHHMPZ}, we may assume that $\{e_n\}\subset X$. Let $e^*_n$
($n\in\mathbb{N}$)
be Hahn-Banach extensions of the corresponding coefficient functionals; it is
well-known that they are equi-bounded (cf.\ \cite[p.164]{FHHMPZ}). Moreover, for
$m\ne n$, we have $\|e_n-e_m\|\ge 1/R$, since $1 = e^*_n(e_n-e_m) \leq R\, \|e_n-e_m\|$.
\end{proof}
\begin{lemma}\label{construction}
Let $X,Y$ be normed linear spaces, $X$ infinite dimensional.
Then, for
each bounded sequence $\{y_n\}\subset Y$, there exists a d.c.\ mapping
$\Phi\colon X\to Y$ such that:
\begin{enumerate}
\item[(a)] $\Phi=0$ outside $B_X$;
\item[(b)] $\Phi$ admits a control function that is Lipschitz on bounded
sets;
\item[(c)] $\{y_n\}\subset \Phi(B_X)$ and
$\Phi(X)\subset\mathrm{conv}\bigl[\{0\}\cup\{y_n\}_{n\in\mathbb{N}}\bigr]$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $\{e_n\}$, $\{e^*_n\}$, $R$ and $r$ be as in Lemma~\ref{baze}.
Observe that $R\ge 1$ since $e_1^*(e_1)=1$.
Fix an arbitrary $\rho\in(0,\frac{1}{R})$.
The symmetric closed convex set
\[
C:=\overline{\mathrm{conv}}\bigl(
\rho B_X \cup \{\pm e_n\}_{n\in\mathbb{N}}
\bigr)
\]
is the unit ball of an equivalent norm $|\!|\!|\cdot|\!|\!|$ on $X$
since $\rho B_X\subset C\subset B_X$.
Fix an arbitrary $n\in\mathbb{N}$. It is easy to see that
$|\!|\!|e^*_n|\!|\!|=\max e^*_n(C)=e^*_n(e_n)=1$, which implies that also
$|\!|\!|e_n|\!|\!|=1$. Let $\epsilon>0$ and $u\in C$ be such that
\[
e^*_n(u)>1-\epsilon.
\]
Observe that $C=\mathrm{conv}\left(\{e_n\}\cup C_n\right)$ where
\[
C_n=\overline{\mathrm{conv}}\bigl(
\rho B_X \cup \{- e_k\}_{k\in\mathbb{N}} \cup \{e_k\}_{k\in\mathbb{N}\setminus\{n\}}
\bigr).
\]
Thus we can write $u=(1-\lambda)e_n+\lambda v$ where $v\in C_n$ and
$0\le\lambda\le1$. Since
$$1-\epsilon<e^*_n(u)\le 1-\lambda +\lambda\sup e^*_n(C_n)\le 1-\lambda
+\lambda R\rho,$$
we easily get $\lambda<\frac{\epsilon}{1-R\rho}$. Consequently,
$$
|\!|\!|u-e_n|\!|\!|=
\lambda|\!|\!|v-e_n|\!|\!|\le\frac{2\epsilon}{1-R\rho}\,.
$$
Denote $g(x)=\frac{1}{2}|\!|\!|x|\!|\!|^2$. By Lemma~\ref{strexp}, for
$n\in\mathbb{N}$, $x\in X$ and $0<\delta<\frac{1}{2}$ the following implication holds:
\[
g(x)< g(e_n)+e^*_n(x-e_n)+\delta\ \ \Rightarrow\ \
\|x-e_n\|\le|\!|\!|x-e_n|\!|\!|<(1+{\textstyle \frac{4}{1-R\rho}})\sqrt{2\delta}.
\]
Since the sequence $\{e_n\}$ is uniformly discrete, it is
possible to fix a
$\delta\in(0,\frac{1}{2})$ so small that the open convex sets
\[
D_n=\{x\in X: g(x)< g(e_n)+e^*_n(x-e_n)+\delta\}
\]
satisfy $\mathrm{dist}_{\|\cdot\|}(D_m,D_n)>\delta$ whenever $m\ne n$.
We have $e_n\in D_n$ for each $n$.
Define $H\colon X\to Y$ by
\[
H(x)=\begin{cases}
{\textstyle\frac{1}{\delta}}\bigl[
g(e_n)+e^*_n(x-e_n)+\delta-g(x)
\bigr]y_n &\text{if $x\in D_n$;}\\
0 &\text{for $x\notin\bigcup_{n\in\mathbb{N}}D_n$.}
\end{cases}
\]
It is easy to see that $H$ is continuous since we have
\begin{equation}\label{nadn}
\textstyle
H(x)=\frac{1}{\delta}\bigl[
\max\{g(x), g(e_n)+e^*_n(x-e_n)+\delta\} -g(x)
\bigr]y_n\,,\ \
x\in D_n+\delta B_X.
\end{equation}
Put $s:=\sup_{n\in\mathbb{N}}\|y_n\|$.
We claim that the formula
\begin{equation}\label{kontr}
\textstyle
h(x)=\frac{s}{\delta}\,\sup_{n\in\mathbb{N}}
\bigl(\max\{g(x),g(e_n)+e^*_n(x-e_n)+\delta\}\bigr)\,
+ \frac{s}{\delta}\,g(x)
\end{equation}
defines a control function for $H$, which is Lipschitz on bounded sets.
First, observe that $h(0)=\frac{s}{\delta}\max\{0,\frac{1}{2}-
1+\delta\}=0$. Moreover, since $g$ is Lipschitz on bounded sets and the
functionals $e^*_n$ ($n\in\mathbb{N}$) are equi-Lipschitz, \eqref{kontr} defines
a real convex function that is Lipschitz on bounded sets.
Fix $y^*\in B_{Y^*}$. To prove that the function $\psi:=y^*\circ H +h$
is convex, it is sufficient to show that it is locally convex.
For
$x\notin\bigcup_n\overline{D_n}=\overline{\bigcup_n D_n}$, we have
$\psi(x)=h(x)$.
For $x\in D_n+\delta B_X$, we have $g(x)\ge g(e_k)+e^*_k(x-e_k)+\delta$
whenever $k\ne n$, and hence
\[
\textstyle
h(x)=\frac{s}{\delta}\,
\max\{g(x),g(e_n)+e^*_n(x-e_n)+\delta\}\,
+ \frac{s}{\delta}\,g(x)\,,\ \ x\in D_n+\delta B_X.
\]
Consequently, \eqref{nadn} implies that, on the set $D_n+\delta B_X$, the function
\[
\textstyle
\psi(x)=
\frac{s+y^*(y_n)}{\delta}\,\max\{g(x),g(e_n)+e^*_n(x-e_n)+\delta\}\,
+ \frac{s-y^*(y_n)}{\delta}\,g(x)
\]
is convex (since it is a sum of convex functions).
Observe that $H(e_n)=y_n$. Moreover, for each $x\in D_n$,
\begin{multline*}
\textstyle
0<g(e_n)+e^*_n(x-e_n)+\delta-g(x)\le
\frac{1}{2}+|\!|\!|x|\!|\!|-1+\delta-\frac{1}{2}|\!|\!|x|\!|\!|^2
\\
\textstyle =
\delta-\frac{1}{2}\bigl(|\!|\!|x|\!|\!|-1\bigr)^2\le\delta\,.\ \ \
\end{multline*}
Thus, for each $n$, the image $H(D_n)$ is contained in the
segment $[0,y_n]$. Since the support of $H$ is contained in $2B_X$, the
mapping $\Phi(x):=H(2x)$ has all the required properties (note that $\varphi(x):= h(2x)$ clearly controls
$\Phi$, cf. \cite[Lemma 1.5]{VeZa}).
\end{proof}
\begin{theorem}\label{ex}
Let $X,Y,Z$ be normed linear spaces, $X$ infinite dimensional. Let
$A\subset X$ be an open convex set, let $B\subset Y$ be a convex set, and let
$G\colon B\to Z$ be a mapping which is unbounded on a bounded subset of
$B$. Then there exists a d.c.\ mapping $F\colon A\to B$ such that
$G\circ F$ is not d.c.\ on $A$.
\end{theorem}
\begin{proof}
We can (and do) suppose that $0\in A$.
Fix $r\in(0,1)$ such that $B(0,2r)\subset A$. By \cite{BFV}, there exists a continuous convex function $h$ on $X$ such that
$h(0) =0$ and $\sup_{x \in B(0,r)} \, h(x) = \infty$. For $k \in \mathbb{N}$, set
$$A_k := \{x \in A:\ h(x) < k,\; \|x\|<k\}.$$
Clearly
each $A_k$ contains $0$, is open and convex; moreover, $A_k \nearrow A$.
It is easy to see that, for each $k \in \mathbb{N}$, we can choose
$v_k \in B(0,r)$ and $0 < \delta_k < 1/k$ such that $B(v_k, 2\delta_k) \subset A_{k+1}\setminus A_k$.
We can (and do) suppose that $0\in B$.
Let $\{y_n\}\subset B$ be a bounded sequence such that
$\|G(y_n)\|\to\infty$, and let $\Phi$ be the corresponding mapping from
Lemma~\ref{construction}.
For each $k\in\mathbb{N}$, define $F_k\colon X\to Y$ by
\[
F_k(x)=\Phi\left(\frac{x-v_k}{\delta_k}\right).
\]
Since the supports of these mappings are pairwise disjoint and each
$A_k$ intersects only finitely many of them, the mapping
\[
F\colon A\to Y\,,\qquad
F(x):=\sum_{k\in\mathbb{N}} F_k(x)
\]
is well-defined and continuous. Observing that $\varphi_k(x):= \varphi(\frac{x-v_k}{\delta_k})$ controls $F_k$
if $\varphi$ controls $\Phi$ (cf. \cite[Lemma 1.5]{VeZa}), we obtain that $F$ is d.c.\ on each $A_k$ with a
Lipschitz (hence bounded) control function. By Proposition~\ref{P:HKM}, $F$ is d.c.\ on $A$.
Moreover,
$F(A)\subset \bigcup_k F_k(X)\subset B$ by Lemma~\ref{construction}(c).
Since $G\circ F$ is unbounded on each $B(v_k,\delta_k)$ and
$v_k\in\frac{1}{2}A$, Lemma~\ref{ndc} implies that
$G\circ F$ is not d.c.\ on $A$.
\end{proof}
\begin{corollary}\label{jldc}
Let $X$ be an infinite dimensional normed linear space, and $A\subset X$ a nonempty open convex set.
\begin{enumerate}
\item[(a)] There exists a positive d.c.\ function $f$ on $A$ such that $1/f$ is not d.c.
\item[(b)] There exists a locally d.c.\ function $g$ on $A$, which is not d.c.
\end{enumerate}
\end{corollary}
\begin{proof}
Applying Theorem~\ref{ex} with $B=(0,\infty)$ and $G(y)=1/y$, we obtain (a). Now, (b)
follows from (a), since $g :=1/f$ is locally d.c.\ by Proposition~\ref{lokdc}
(or (II) in Introduction).
\end{proof}
\bigskip
\bigskip
\subsection*{Acknowledgment}
The research of the first author was partially supported by the Ministero
dell'Universit\`a e della Ricerca of Italy.
The research of the second author was partially supported by the grant
GA\v CR 201/06/0198 from the Grant Agency of
the Czech Republic and partially supported by the grant MSM 0021620839 from
the Czech Ministry of Education.
| {
"timestamp": "2007-06-05T13:28:21",
"yymm": "0706",
"arxiv_id": "0706.0624",
"language": "en",
"url": "https://arxiv.org/abs/0706.0624",
"abstract": "A d.c. (delta-convex) function on a normed linear space is a function representable as a difference of two continuous convex functions. We show that an infinite dimensional analogue of Hartman's theorem on stability of d.c. functions under compositions does not hold in general. However, we prove that it holds in some interesting particular cases. Our main results about compositions are proved in the more general context of d.c. mappings between normed linear spaces.",
"subjects": "Functional Analysis (math.FA); Classical Analysis and ODEs (math.CA)",
"title": "On compositions of d.c. functions and mappings",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.971992482639258,
"lm_q2_score": 0.7279754548076477,
"lm_q1q2_score": 0.7075866696189285
} |
https://arxiv.org/abs/2205.15717 | Estimating a density near an unknown manifold: a Bayesian nonparametric approach | We study the Bayesian density estimation of data living in the offset of an unknown submanifold of the Euclidean space. In this perspective, we introduce a new notion of anisotropic Hölder for the underlying density and obtain posterior rates that are minimax optimal and adaptive to the regularity of the density, to the intrinsic dimension of the manifold, and to the size of the offset, provided that the latter is not too small -- while still allowed to go to zero. Our Bayesian procedure, based on location-scale mixtures of Gaussians, appears to be convenient to implement and yields good practical results, even for quite singular data. | \section{Introduction}
\subsection{Motivation}
In many high dimensional statistical problems it is common to consider that the data has a low dimensional structure. More precisely, statistics and computer science have seen a growing interest in the so-called \emph{manifold hypothesis}, where the data is believed to be supported (or nearly supported) on a low dimensional submanifold $M$ of an ambient space (see \cite{ma2012manifold} for an introduction). There are good intuitive reasons to believe that real world data (such as natural images, sounds or even texts) indeed have this property, as uniform noise in the ambient space almost never seems to produce an interesting structured output, and as local transformations around points (e.g.\ rotation/translation for images or change of pitch for sounds) produce possible local low dimensional paths in the data space. More empirical works have also supported the manifold hypothesis: for instance, low dimensional structures have been found in texts \citep{belkin2001laplacian}, sounds \citep{klein1970vowel,belkin2001laplacian}, images and videos \citep{weinberger2006unsupervised,kingma2013auto} or more recently in Covid data \citep{varghese2022intrinsic}. The case of an affine subspace has been extensively studied, from the celebrated principal component analysis (PCA), which dates back to \citep{pearson1901liii,hotelling1933analysis}, to its more recent extension to functional data (see \cite[Chap 8]{ferraty2011oxford} and \cite{belhakem2021minimax}). When the latent structure is nonlinear, however, a more involved geometric approach is needed.
\\
With the growth of deep learning during the past decade, a lot of principles and architectures have emerged in order to perform nonlinear dimensionality reduction. Among them, autoencoders (see \cite[Sec 14.6]{goodfellow2016deep} or \cite{kingma2013auto}) are specifically designed to produce a mapping of a high dimensional input into a low dimensional space (encoding into a latent space), followed by an approximate inverse mapping from the latent space into the original data space. Generative adversarial networks \citep{goodfellow2014generative} are another type of popular deep learning architecture that have been pushing the state of the art in computer vision \citep{goodfellow2014generative,arjovsky2017towards,arjovsky2017wasserstein} and have also been used in order to learn a meaningful low dimensional latent space (see for instance \cite{radford2015unsupervised, wu2016learning}), in particular when the latter is a submanifold, see \citep{tang2022minimax}. Another deep generative method is the popular Normalizing Flows \citep{rezende2015variational}, with recent advances in abstract manifold models \citep{mathieu2020riemannian} or submanifold models \citep{horvat2021density, horvat2021denoising}.
\\
Before the deep learning revolution, the focus was mostly on unsupervised manifold estimation techniques such as Isomap \citep{tenenbaum2000global}, Locally Linear Embeddings \citep{roweis2000nonlinear}, manifold charting \citep{brand2002charting}, kernel Principal Component Analysis \citep{scholkopf1998nonlinear} or Laplacian eigenmaps \citep{belkin2003laplacian}. Our method, a Bayesian mixture of Gaussians, relies on the idea of tiling the manifold by low-rank \emph{Gaussian pancakes}, an approach that is similar to mixtures of factor analyzers \citep{ghahramani1996algorithm,chen2010compressive} or manifold Parzen windows \citep{vincent2002manifold}.
\\
In the field of nonparametric statistics, the problem of estimating a manifold (usually under the Hausdorff loss) has been addressed for instance in \cite{genovese2012manifold}, \cite{aamari2019nonasymptotic} or \cite{divol2021minimax}. Another point of view is to consider the estimation of a probability density with a singular support with respect to the ambient Lebesgue measure, as has been done using kernel density estimation for instance in \cite{ozakin2009submanifold}, \cite{berenfeld2021density} or \cite{divol2021reconstructing}. Another approach is to consider that the underlying probability measure is non-singular but very concentrated in a neighborhood of a submanifold, for instance by adding a very small noise to the data like in \cite{horvat2021density} or \cite{chae2021likelihood}, making the model dominated by the Lebesgue measure of the ambient space. This is the path we take in this paper. In such a setting, having accurate estimators of the density could come as a valuable building block in ridge estimation \citep{genovese2014nonparametric, chen2015asymptotic}.
\\
Nonparametric location mixtures of Gaussians are known to be flexible models for densities, and adaptive minimax rates of convergence on H\"older-type spaces have been obtained using Bayesian or frequentist estimation procedures based on location mixtures of normals, see \cite{kruijer2010adaptive}, \cite{shen2013adaptive}, and \cite{ghosal2007posterior} for Bayesian methods and \cite{maugis2013adaptive} for a penalized likelihood approach. However, location mixtures are not versatile enough since the covariance matrix remains fixed across the components, so we instead take advantage of the flexibility of location-scale mixtures of Gaussians, as studied in \cite{canale2017posterior}, and more particularly of its hybrid version, for which optimal posterior concentration rates have been derived in the Euclidean case \citep{naulet2017posterior}.
\subsection{Our contribution}
Following the impulse of \cite{mukhopadhyay2020estimating}, who obtain weak posterior consistencies as well as good empirical results by modelling singular densities with Fisher-Gaussian mixtures, we argue here that even simple Gaussian mixtures can lead to good theoretical and empirical results if the prior is carefully chosen to adapt to the local geometry of the submanifold. Our contribution is two-fold:
\begin{itemize}\setlength{\itemsep}{.05in}
\item From a methodological point of view, we provide a family of versatile priors (\secref{model}) that are shown empirically and theoretically to perform very well in modelling data that are singularly supported near submanifolds (\secref{num});
\item From a theoretical point of view, we introduce a new notion of H\"older smoothness along a submanifold (\subsecref{manihold}) which proves to be adequate for the study of such almost-degenerate densities, and we derive posterior concentration rates for this new model (\subsecref{main}). The rates are optimal if the data do not collapse too quickly towards the manifold. These results rely on an intermediate result in approximation theory which is of interest in its own right.
\end{itemize}
\subsection{Organisation of the paper} In \secref{model}, we define the nonparametric priors we use for inference, and introduce novel smoothness assumptions for densities supported near a submanifold. \secref{theoretical} contains the main results, namely the posterior concentration rates together with the approximation theory which is the key step in their derivation. In \secref{num}, we implement some of the priors of \secref{model} with one approach based on Gibbs sampling and the other one on stochastic variational inference. \secref{proofs} details the proofs of the main results, and we end the paper with a discussion on our results and possible future research directions. Further details, proofs and results are given in Appendices \ref{app:A} to \ref{app:num}.
\subsection{Notations} For a multi-index $k = (k_1,\dots,k_D) \in \mathbb{N}^D$, we set $|k| = k_1 + \dots + k_D$ and $k! = k_1! \dots k_D!$. For $x \in \mathbb{R}^D$, we write $x^k = x_1^{k_1} \dots x_D^{k_D} \in \mathbb{R}$ and $x_{\max}$ (resp. $x_{\min}$) to be the maximal (resp. minimal) value of its entries. For any two indices $i,j \in \{1,\dots,D\}$ with $i \leq j$, we set $x_{i:j} = (x_i,\dots,x_j) \in \mathbb{R}^{j-i+1}$. Finally, for a sufficiently regular function $f : \mathbb{R}^D \to \mathbb{R}$, we define its $k$-th partial derivative as
$$
\Diff^k f(x) = \frac{\partial^{|k|}f}{\partial x_1^{k_1}\dots \partial x_D^{k_D}} (x).
$$
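For instance, with $D=2$ and $k=(2,1)$, one has $|k|=3$, $k!=2$, $x^k=x_1^2x_2$ and $\Diff^k f = \partial^3 f/(\partial x_1^2\,\partial x_2)$.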
If $M \subset \mathbb{R}^D$ is a measurable subset with Hausdorff dimension $d$, we denote by $\mu_M$ the Borel measure $
\mu_M = \mathcal{H}^d(\cdot \cap M)$ where $\mathcal{H}^d$ is the $d$-dimensional Hausdorff measure on $\mathbb{R}^D$. For $r > 0$ and $x \in \mathbb{R}^D$, we write $\ball_M(x,r) = \ball(x,r)\cap M$ where $\ball(x,r)$ is the usual Euclidean ball of $\mathbb{R}^D$. If $M$ is closed, then $\pr_M$ denotes the (possibly multi-valued) orthogonal projection from $\mathbb{R}^D$ onto $M$.
\\
We will denote by $\|\cdot\|$ the usual Euclidean norm of $\mathbb{R}^k$ for any $k \in \mathbb{N}^*$. When $\mathcal{L}$ is a linear map between such spaces, we write $\|\mathcal{L}\|_{\op}$ for the operator norm associated with the Euclidean norms. The notation $\|\cdot\|_1$ (resp. $\|\cdot\|_\infty$) will refer to both the $L^1$-norm (resp.\ $\sup$-norm) for vectors of $\mathbb{R}^k$ for any $k \in \mathbb{N}^*$, and to the $L^1$-norm (resp.\ $\sup$-norm) for measurable functions from $\mathbb{R}^k$ to $\mathbb{R}$ for any $k \in \mathbb{N}^*$. The brackets $\inner{\cdot}{\cdot}$ will be used to denote the usual Euclidean inner product in $\mathbb{R}^k$ for any $k\in\mathbb{N}^*$. For any matrix $A \in \mathbb{R}^{k\times k}$, the notation $\|\cdot\|_A$ will refer to the quadratic form over $\mathbb{R}^k$ defined by $x\mapsto \inner{Ax}{x}$, which is a norm if $A$ is positive. The set of orthogonal transformations of $\mathbb{R}^D$ will be denoted by $\mathcal{O}(D,\mathbb{R})$, or sometimes simply $\mathcal{O}(D)$.
\\
For two positive functions $f,g : \mathbb{R}^D \to \mathbb{R}$ we write the Hellinger distance as
$$
\hel(f,g) = \{\int_{\mathbb{R}^D} (\sqrt{f(x)} - \sqrt{g(x)} )^2 dx \}^{1/2}.
$$
In this paper, $M$ will designate a closed submanifold of $\mathbb{R}^D$ of dimension $1 \leq d \leq D-1$. For any point $x \in M$, the tangent and normal spaces of $M$ at $x$ will be denoted $T_x M$ and $N_x M$, and the corresponding bundles $TM$ and $NM$. We write $\exp_{x} : (T_x M, 0) \to (M,x)$ for the exponential map of $M$ at point $x$. We let $d_{M}(x,y)$ denote the intrinsic distance between $x$ and $y$ in $M$.
\section{Model : distributions concentrated near manifolds} \label{sec:model}
We assume that we observe $X_1, \dots, X_n$ independent and identically distributed from $P_0$ on $\mathbb R^D$ with density $f_0$ with respect to Lebesgue measure. We assume that there is a low dimensional structure underlying our observations, i.e. that $f_0$ has support concentrated near a low dimensional manifold $M$ which is unknown. More precisely there exists $\delta>0$ unknown and typically small such that $P_0(M^\delta)=1$, where $M^\delta$ is the $\delta$-offset of $M$: it is the set of points that are at distance less than $\delta$ from $M$,
$$
M^\delta := \bigcup_{x\in M} \ball(x,\delta).
$$
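For instance, if $M = \mathbb{S}^{D-1}$ is the unit sphere of $\mathbb{R}^D$, then $\mathrm{dist}(y,M)=\bigl|\,\|y\|-1\,\bigr|$, so that $M^\delta$ is the annulus of width $2\delta$ around the sphere and the assumption $P_0(M^\delta)=1$ simply means that $\bigl|\,\|X_i\|-1\,\bigr|\le\delta$ almost surely.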
A typical example is when the observations are noisy versions of data whose support is $M$: $X=Y + Z$ with $Y \in M$ and $|Z|\leq \delta$ almost surely. When the noise $Z$ has a density smoother than the density of $Y$ on $M$ (with respect to the Hausdorff measure), the density of $X$ is anisotropic with a smoothness along the manifold $M$ smaller than that along the normal directions. In this paper we thus aim at constructing priors which are flexible enough to lead to \textit{good} estimation of $f_0$ in situations where the density has a complex anisotropic structure in that it has an unknown smoothness $\beta_0$ along an unknown manifold $M$ and a different (larger) smoothness $\beta_\perp$, also unknown, along the normal spaces of the manifold. In this context, since the anisotropy varies spatially, it is therefore important to consider priors which adapt spatially to such \textit{non linear smoothness}. In Section \ref{sec:prior} below we consider certain families of location-scale mixtures with a careful modelling of the prior on the variance of the components and we show in Section \ref{sec:theoretical} that these priors allow for manifold driven smoothness.
We first define more formally what we mean by a density concentrated near a manifold and by manifold driven smoothness.
\subsection{Manifold-anisotropic H\"older densities} \label{sec:manifoldsmoothness}
To begin with, we define what we think is a new notion of anisotropic H\"older spaces on the Euclidean space $\mathbb{R}^D$, which happens to be a natural extension of the usual notion of (isotropic) H\"older smoothness. We are aware that there exist various notions of anisotropic smoothness, see for instance \cite{kerkyacharian2001nonlinear, hoffman2002random, comte2013anisotropic, goldenshluger2011bandwidth, goldenshluger2014adaptive}, with most of them stemming from the anisotropic smoothness as defined in \cite{nikol2012approximation}. In all the aforementioned references, the anisotropy was consistently defined as a control of the variations of the partial derivatives along each axis separately, with no control of the cross-derivatives (and no guarantee that they, in fact, exist). While this is enough in a Euclidean framework, we argue that, despite our best efforts, we could not make such assumptions sufficient in our non-linear setting, as the proofs presented in \secref{proofs} or in the Appendices might highlight. Instead, we come up with a new notion of H\"older anisotropy, in the footsteps of what \cite{shen2013adaptive} already sketched in their paper, that handles cross-derivatives in the same way that the usual notion of (isotropic) H\"older smoothness does, and which in fact coincides with the latter when the anisotropy vector is isotropic. This new class is defined in the subsection below, and its main properties are reviewed in \appref{hold}.
\subsubsection{Anisotropic H\"older spaces} \label{subsec:anis}
An anisotropic H\"older functions $f : \mathbb{R}^D \to \mathbb{R}$ is, informally, a function whose smoothness is different along each axis of $\mathbb{R}^D$. Letting $\mathbf{m}{\beta} = (\beta_1,\dots,\beta_D) \in (\mathbb{R}_+^*)^D$, we define $\beta$ to be the harmonic mean of $\mathbf{m}\beta$ and
$$
\mathbf{m}\alpha = (\alpha_1,\dots,\alpha_D)~~~\text{where}~~~\alpha_i = \beta/\beta_i \in [0,D].
$$
Notice that $\alpha_1 + \dots + \alpha_D = D$. In this section, we define the spaces of anisotropic functions over bounded open subsets of $\mathbb{R}^D$. We defer to \appref{hold} the introduction of the same class over general open subsets. We let $\mathcal{U} \subset \mathbb{R}^D$ be a bounded open subset and $L : \mathcal{U} \to \mathbb{R}_+$ be any non-negative function.
\begin{dfn}\label{dfn:anihold}
The anisotropic H\"older spaces $\mathcal{H}_{\an}^{\mathbf{m}\beta}(\mathcal{U},L)$ is the set of all functions $f : \mathcal{U} \to \mathbb{R}^D$ satisfying:
\begin{itemize}\setlength{\itemsep}{.05in}
\item[i)] For any multi-index $k \in \mathbb{N}^D$ such that $\inner{k}{\mathbf{m}\alpha} < \beta$, the partial derivative $\Diff^k f$ is well defined on $\mathcal{U}$ and
$|\Diff^k f(x)| \leq L(x)$ for all $x \in \mathcal{U}$;
\item[ii)] For any multi-index $k \in \mathbb{N}^D$ such that $\beta - \alpha_{\max} \leq \inner{k}{\mathbf{m}\alpha} < \beta$, there holds
\begin{equation}
\label{eq:hold}
|\Diff^k f(y) - \Diff^k f(x)| \leq L(x) \sum_{i=1}^D |y_i - x_i|^{\frac{\beta- \inner{k}{\mathbf{m}\alpha}}{\alpha_i} \wedge 1}~~~ \forall x,y \in \mathcal{U}.
\end{equation}
\end{itemize}
\end{dfn}
See \figref{beta} for a graphical representation of the quantities at stake. The function $L$ acts as an upper-bound for the localized and anisotropic version of the usual H\"older-norm:
$$
\{\max_{\inner{k}{\mathbf{m}\alpha} < \beta} |\Diff^k f(x)|\} \vee \max_{\beta - \alpha_{\max} \leq\inner{k}{\mathbf{m}\alpha} < \beta} \sup_{y \in \mathcal{U}} \frac{|\Diff^k f(y) - \Diff^k f(x)|}{\displaystyle\sum_{i=1}^D |y_i - x_i|^{\frac{\beta- \inner{k}{\mathbf{m}\alpha}}{\alpha_i}\wedge 1}} .
$$
\begin{figure}[!h]
\centering
\includegraphics[scale=.12]{beta_1} \hspace{20pt}
\includegraphics[scale=.12]{beta_2}
\caption{An example in dimension $D = 2$. The vector $\mathbf{m}\alpha$ is the only vector of $1$-norm $D$ which has positive coordinates and which is orthogonal to the simplex of vertices $\{\beta_i e_i\}_{1 \leq i \leq D}$. In black are the points $k$ of $\mathbb{N}^2$ such that $\inner{k}{\mathbf{m}\alpha} < \beta$.}
\label{fig:beta}
\end{figure}
Note that the constraint on the intermediate derivatives $\Diff^k f(x)$ for $0 < \inner{k}{\mathbf{m}\alpha} < \beta$ may seem superfluous since some Kolmogorov-Landau type inequalities would yield some bounds on these derivatives, but we add them nonetheless to our functional class to make the computations simpler. We list and prove in \appref{hold} various useful properties of functions in the anisotropic H\"older class.
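To fix ideas, take $D=2$ and $\mathbf{m}\beta=(1,2)$: then $\beta=4/3$ and $\mathbf{m}\alpha=(4/3,2/3)$, and the only multi-indices $k$ with $\inner{k}{\mathbf{m}\alpha}<\beta$ are $k=(0,0)$ and $k=(0,1)$. A function $f\in\mathcal{H}^{\mathbf{m}\beta}_{\an}(\mathcal{U},L)$ thus only needs to admit the partial derivative $\partial f/\partial x_2$, with $|f|\le L$ and $|\partial f/\partial x_2|\le L$ on $\mathcal{U}$, and \eqref{eq:hold} reduces to
$$
|f(y)-f(x)|\le L(x)\bigl(|y_1-x_1|+|y_2-x_2|\bigr), \qquad \Bigl|\tfrac{\partial f}{\partial x_2}(y)-\tfrac{\partial f}{\partial x_2}(x)\Bigr|\le L(x)\bigl(|y_1-x_1|^{1/2}+|y_2-x_2|\bigr),
$$
reflecting the $\beta_1=1$ regularity in the first variable and the $\beta_2=2$ regularity in the second.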
\paragraph{Isotropic H\"older spaces} If the regularity vector is isotropic, meaning that $\mathbf{m}\beta = (\beta,\dots,\beta)$ with $\beta > 0$, then $\mathcal{H}^{\mathbf{m}\beta}_{\an}(\mathcal{U},L)$ coincides with the usual space of $\beta$-H\"older functions. We will write
$$
\mathcal{H}^{\beta}_{\iso}(\mathcal{U},L) := \mathcal{H}^{\mathbf{m}\beta}_{\an}(\mathcal{U},L)~~~\text{for}~~~\mathbf{m}\beta = (\beta,\dots,\beta).
$$
We will allow the function $L$ in the above definitions to be replaced by some positive constant $C > 0$, giving rise to the respective functional spaces $\mathcal{H}^{\mathbf{m}\beta}_{\an}(\mathcal{U},C)$ and $\mathcal{H}^{\beta}_{\iso}(\mathcal{U},C)$.
As a final remark, we will use the same notations for the spaces of vector-valued functions when their coordinate functions are all in the corresponding space. For instance, if $\Psi : \mathcal{U} \to \mathbb{R}^D$, then
$$
\Psi = \left(\Psi_1,\dots,\Psi_D\right) \in \mathcal{H}^{\mathbf{m}\beta}_{\an}(\mathcal{U},L) ~~\underset{\text{def}}{\ \Leftrightarrow \ } ~~ \Psi_i \in \mathcal{H}^{\mathbf{m} \beta}_{\an}(\mathcal{U},L)~\text{for all}~i \in \{1,\dots,D\},
$$
and the same holds for the other spaces defined in this subsection.
\subsubsection{M-anisotropic H\"older spaces} \label{subsec:manihold}
We now would like to consider functions whose smoothness directions at point $x \in \mathbb{R}^D$ are dependent on the position of $x$ with respect to a given submanifold $M \subset \mathbb{R}^D$ of dimension $1 \leq d \leq D-1$. More specifically, we will ask the functions to be of a certain regularity in the tangential directions of $M$, and of another regularity in the normal directions of $M$. Such a function will be said to be \emph{manifold-anisotropic H\"older}, or sometimes simply \emph{M-anisotropic}. To define such a class of functions, we assume that $M$ is a closed submanifold with reach bounded from below by $\tau > 0$ (see \appref{A} for definition and properties of the reach) and we consider local parametrizations at any $x_0 \in M$
$$
\Psi_{x_0} : \mathcal{V}_{x_0} \to M,
$$
where $\mathcal{V}_{x_0}$ is a neighborhood of $0$ in $T_{x_0} M$. The maps $\Psi_{x_0}$ can be taken in a wide class of parametrizations of $M$. For instance, one could consider taking $\Psi_{x_0}$ to be (close to) the inverse of the projection $M \to T_{x_0} M$, where $T_{x_0} M$ is seen as an affine subspace of $\mathbb{R}^D$ going through $x_0$, see for instance \cite{aamari2019nonasymptotic} or \cite{divol2021reconstructing}. For purely practical reasons, we choose $\Psi_{x_0}$ to be the exponential map $\exp_{x_0}$, although the entire analysis in this paper could be carried out with other well-behaved parametrizations, such as the one mentioned above.
In particular, in the case of the exponential maps, we can always define the domain of $\Psi_{x_0}$ to be $\ball_{T_{x_0} M}(0,\pi\tau)$, see \appref{A}. In the rest of this paper, we set
$$
\mathcal{V}_{x_0} := \ball_{T_{x_0} M}(0,\tau/8),
$$
with factor $1/8$ being there for technical reasons. If all the maps $\Psi_{x_0}$ are of regularity $\beta_M > 1$, meaning that there exists $C_M > 0$ such that
\begin{equation} \label{betam}
\Psi_{x_0} \in \mathcal{H}_{\iso}^{\beta_M}(\mathcal{V}_{x_0}, C_M),~~\forall x_0 \in M
\end{equation}
(in particular $M$ is at least $\mathcal{C}^{k}$ with $k =\ceil{\beta_M-1}$), then one can construct a map
$$
\bar \Psi_{x_0} : \begin{cases} \mathcal{V}_{x_0} \times N_{x_0} M &\to \mathbb{R}^D \\
~~~~(v,\eta) &\mapsto \Psi_{x_0}(v) + N_{x_0}(v,\eta).
\end{cases}
$$
where $N_{x_0}(v,\cdot)$ is an isometry from $N_{x_0} M$ to $N_{\Psi_{x_0}(v)} M$ and where
$
v \mapsto N_{x_0}(v,\cdot) \in \mathcal{H}_{\iso}^{\beta_M -1}(\mathcal{V}_{x_0}, C_M^\perp)
$
for some other constant $C_M^\perp$ depending on $C_M$, $\tau$ and $\beta_M$. We refer to \appref{A} for further details concerning the construction of $\bar\Psi_{x_0}$ and the proof of its regularity. When restricting the latter map, one gets a local parametrization of the offset $M^{\tau/2}$ around $x_0$
$$
\bar\Psi_{x_0} : \mathcal{V}_{x_0} \times \ball_{N_{x_0} M}(0,\tau/2) \to M^{\tau/2}
$$
as shown in \lemref{detpsi}. This parametrization is such that $\pr_M ( \bar\Psi_{x_0}(v,\eta)) = \Psi_{x_0}(v)$ for any $(v,\eta) \in \mathcal{V}_{x_0} \times \ball_{N_{x_0} M}(0,\tau/2)$. All in all, $\bar\Psi_{x_0}$ is thus a diffeomorphism from $\mathcal{V}_{x_0} \times \ball_{N_{x_0} M}(0,\tau/2) $ to its image which is such that
\begin{equation} \label{barpsi}
\bar\Psi_{x_0} \in \mathcal{H}^{\beta_M-1}_{\iso}( \mathcal{V}_{x_0} \times \ball_{N_{x_0} M}(0,\tau/2), C^*_M)
\end{equation}
for some $C^*_M > 0$ depending on $C_M$, $\tau$ and $\beta_M$. See \figref{psix0} for a visual interpretation of this parametrization.
\\
\begin{figure}[h!]
\centering
\includegraphics[scale=.1]{psix0}
\caption{A visual interpretation of the parametrization $\bar\Psi_{x_0}$.}
\label{fig:psix0}
\end{figure}
For any $\delta > 0$, we define $\bar\Psi_{x_0,\delta}(v,\eta) := \bar\Psi_{x_0}(v,\delta \eta)$ to be the rescaled version of $\bar\Psi_{x_0}$ in the normal directions. It is a well-defined parametrization of $M^{\tau/2}$ on the set $\mathcal{W}_{x_0,\delta} := \mathcal{V}_{x_0} \times \ball_{N_{x_0} M}(0,\tau/(2\delta))$. We let $\beta_0, \beta_\bot$ be two positive real numbers, and define the vector
$$
\mathbf{m}\beta = (\underbrace{\beta_0,\dots,\beta_0}_{d},\underbrace{\beta_\bot,\dots,\beta_\bot}_{D-d}) \in \mathbb{R}^D.
$$
Now for any function $L : \mathbb{R}^D \to \mathbb{R}_+$, we define:
\begin{dfn} The class
$
\mathcal{H}^{\beta_0,\beta_\bot}_\delta(M,L)
$
is the set of all functions $f : \mathbb{R}^D \to \mathbb{R}$ that satisfy:
\begin{enumerate}\setlength{\itemsep}{.05in}
\item[i)] $f$ is supported on $M^\delta$;
\item[ii)] For any $x_0 \in M$,
\begin{equation} \label{eq:mholder}
\bar f_{x_0,\delta} \in \mathcal{H}^{\mathbf{m}\beta}_{\an}(\mathcal{W}_{x_0,\delta}, L_{x_0,\delta}),
\end{equation}
where $ \bar f_{x_0,\delta} := \delta^{D-d} f \circ \bar\Psi_{x_0,\delta}$ and $L_{x_0,\delta} := \delta^{D-d} L \circ \bar\Psi_{x_0,\delta}$.
\end{enumerate}
\end{dfn}
Such a function will be, informally, $\beta_0$-H\"older along the manifold $M$, and $\beta_\perp$-H\"older normal to the manifold $M$. The normalization $\delta^{D-d}$ accounts for the scaling $\eta \mapsto \delta \eta$ along the normal spaces (which are of dimension $D-d$) in the definition of $\bar\Psi_{x_0,\delta}$. Its presence should not be too surprising and can be understood as follows: when $f$ is a density supported on $M^\delta$, the typical magnitude of its values is of order $1/\delta^{D-d}$, and the absence of normalization would hence make the above functional class irrelevant for describing the regularity of such densities.
M-anisotropic functions happen to be a convenient way to describe the regularity of a number of densities that are naturally supported around $M$. To illustrate this, take $f_\ast : M \to \mathbb{R}$ to be a $\beta_0$-H\"older density, meaning that there exists $L_0 : M \to \mathbb{R}$ such that for any $x_0 \in M$,
$$
f_\ast \circ\Psi_{x_0} \in \mathcal{H}^{\beta_0}_{\iso}(\mathcal{V}_{x_0}, L_0\circ\Psi_{x_0}).
$$
Now take $K : \mathbb{R}^D \to \mathbb{R}$ to be a normalized positive smooth isotropic kernel supported on $\ball(0,1)$. We introduce
$
c_\bot^{-1} = \int K(\varepsilon) \diff \mu_E(\varepsilon)
$
where $E$ is any (by isotropy) $(D-d)$-dimensional subspace of $\mathbb{R}^D$. We also assume that
$K \in \mathcal{H}^{\beta_\perp}_{\iso}(\mathbb{R}^D,L_\perp) $
for some function $L_\perp$ which is also rotationally invariant.
\begin{prp} \label{prp:model}
Let $f$ be the density of a random variable $Z = X + \delta \mathcal{E}$ where $X \sim f_*(x) \mu_M(\diff x)$ and $0 < \delta < \tau$. Then,
\begin{enumerate}\setlength{\itemsep}{.05in}
\item \emph{(Orthonormal noise model)} If $\beta_\perp \leq \beta_M - 1$, and if
$$\mathcal{E} | X \sim c_{\bot} K(\varepsilon) \mu_{N_X M}(\diff \varepsilon),$$
then $f \in \mathcal{H}^{\beta_0,\beta_\bot}_\delta(M,L)$ with
$$L (x) := C \delta^{-(D-d)} L_0(\pr_M x) \times L_\perp(x-\pr_M x)$$
and $C$ depending on $C_M$, $\tau$, $\beta_M$, $\beta_{0}$, $\beta_{\bot}$.
\item \emph{(Isotropic noise model)} If $\delta < \tau/32$ and $\beta_0 \leq \beta_\perp \leq \beta_M - 1$, and if
$$\mathcal{E} \sim K(\varepsilon) \diff \varepsilon ~~~\text{independently of}~X,$$
then $f \in \mathcal{H}^{\beta_0,\beta_\bot}_\delta(M,L)$ with
$$L(x) := C \delta^{-D} \int_{M} L_\perp\(\frac{x-y}{\delta}\right) L_0(y) \mu_M(\diff y),
$$
where $C$ depends on $C_M$, $\tau$, $\beta_M$, $\beta_{0}$, $\beta_{\bot}$.
\end{enumerate}
\end{prp}
See \appref{manihold} for a proof of this result.
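To fix ideas, here is a minimal {\tt python} sketch of the isotropic noise model of \prpref{model} when $M$ is the unit circle in $\mathbb{R}^2$ ($d=1$, $D=2$), $f_*$ is the uniform density on $M$ and $K$ is a compactly supported smooth bump; the specific kernel, sample size and value of $\delta$ are illustrative assumptions, not choices made in the paper.
\begin{verbatim}
# Minimal sketch (assumed example): isotropic noise model Z = X + delta * E,
# with M the unit circle in R^2, f_* uniform on M, and a smooth compactly
# supported kernel K on the unit ball.
import numpy as np

rng = np.random.default_rng(0)
n, delta = 500, 0.05

# X ~ f_*(x) mu_M(dx): uniform on the unit circle
theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# E ~ K(eps) d eps on ball(0, 1), via rejection sampling from a smooth bump
def sample_bump(size):
    out = np.empty((0, 2))
    while out.shape[0] < size:
        cand = rng.uniform(-1.0, 1.0, size=(4 * size, 2))
        r2 = np.sum(cand ** 2, axis=1)
        # unnormalized K(eps) = exp(-1 / (1 - |eps|^2)) on the open unit ball
        dens = np.where(r2 < 1.0, np.exp(-1.0 / np.maximum(1.0 - r2, 1e-12)), 0.0)
        keep = rng.uniform(0.0, np.exp(-1.0), size=cand.shape[0]) < dens
        out = np.vstack([out, cand[keep]])
    return out[:size]

E = sample_bump(n)
Z = X + delta * E   # observations concentrated on the offset M^delta
\end{verbatim}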
\begin{rem} Anticipating the statistical part of the paper, we could argue that instead of the strong condition $\supp f \subset M^\delta$, one could simply ask that $\mathbb{R}^D \setminus M^\delta$ has probability of order $o(1/n)$ under $f(x)\diff x$, where $n$ will be the size of the sample. One would get the same results as the ones stated in \secref{theoretical}. This is for instance fulfilled in the additive noise model (see \prpref{model}) with $Z = X + \frac{ \delta }{ \sqrt{ C\log n } } \mathcal{E}$ and the Gaussian kernel $K(x) := (2\pi)^{-D/2} \exp(-\|x\|^2/2)$.
\end{rem}
In the following section we describe the family of priors which we use to estimate the above family of densities.
\subsection{Location-scale mixtures of normal priors} \label{sec:prior}
We model the manifold-anisotropic H\"older densities using location-scale mixtures of normals. We parametrize the covariances of the components by $\Sigma = O^T\Lambda O$ where $O$ is an orthogonal matrix and $\Lambda = \diag(\lambda_1, \cdots, \lambda_D)$ is diagonal. Location-scale mixtures can then be written as:
\begin{equation} \label{loc-scale-mix}
f_{P}(x) = \int_{\mathbb{R}^D} \varphi_{O^T\Lambda O}(x-\mu)\diff P(\mu, O, \Lambda),\quad P = \sum_{k=1}^K p_k \delta_{(\mu_k, O_k, \Lambda_k)}, \quad K \in \mathbb N \cup \{ +\infty\},
\end{equation}
where, for any positive definite matrix $\Sigma$,
$$
\varphi_\Sigma(z) := \frac{1}{\det^{1/2} \(2\pi\Sigma\)} \exp\{-\frac{1}{2} \|z\|^2_{\Sigma^{-1}}\},
$$
is the density of a centered Gaussian with covariance matrix $\Sigma$. The two most well-known families of priors on $P$ are Dirichlet process priors and mixtures with a random number of components, also known as mixtures of finite mixtures. Recall that if $P$ follows a Dirichlet process prior with parameters $A$ and $H$, where $A > 0$ and $H$ is a probability measure on some measurable space $\Theta$, then
\begin{equation*}
P = \sum_{k=1}^\infty p_k \delta_{ \theta_k}~~~\text{with}~~~ p_k = V_k\prod_{i<k}(1-V_i), \quad V_i\stackrel{iid}{\sim} \Beta(1, A) \quad \text{and} \quad \theta_k \stackrel{ iid}{ \sim} H .
\end{equation*}
If $P$ follows a mixture of finite mixtures prior of parameters $\alpha_K$ and $\pi_K$ where $\alpha_K > 0$ and $\pi_K$ is a probability measure on $\mathbb{N}$, then
$$P = \sum_{k=1}^K p_k \delta_{ \theta_k}~~~\text{with}~~~K \sim \pi_K,\quad (p_1, \cdots, p_K) ~| K \sim \mathcal{D}( \alpha_K, \dots , \alpha_K) \quad \text{and} \quad \theta_k \stackrel{iid}{\sim } H.$$
In both cases (Dirichlet process and mixture of finite mixtures) we call $H$ the base probability measure.
Of course, in the case of mixtures of finite mixtures, the conditional prior on $(p_1, \dots, p_K)$ and $\theta_1, \dots, \theta_K$ could be different, but we will consider this setup for the sake of simplicity.
As shown empirically by \cite{mukhopadhyay2020estimating}, location-scale Dirichlet process mixtures with a base measure constructed from the conjugate prior of the Gaussian model are not well adapted to the problem at hand. We show however that if particular care is given to the choice of $H$, the posterior on manifold-anisotropic densities is well behaved. In particular we consider the following two types of location-scale mixtures:
\begin{itemize}
\item \emph{Partial location-scale mixtures}: The eigenvalues $\Lambda$ of the covariance of the Gaussians are common across components,
\begin{equation}\label{partialprior}
f_{\Lambda,P}(x) = \int_{\mathbb{R}^D} \varphi_{O^T\Lambda O}(x-\mu)\diff P(\mu, O) ,\quad P = \sum_{k=1}^K p_k \delta_{(\mu_k, O_k)}, \quad K \in \mathbb{N} \cup \{ +\infty\}
\end{equation}
where $P$ is a probability distribution on $\mathbb R^D \times \mathcal O(D)$ (where $\mathcal O(D)$ is the set of orthogonal matrices of $\mathbb R^D$) and follows either a Dirichlet process prior or a mixture of finite mixtures prior.
\item \emph{Hybrid location-scale mixtures}:
The density $f_P$ is written as \eqref{loc-scale-mix} where $P$ conditionally on a probability $Q_2$ on $\mathbb R_+^D$ follows a Dirichlet process mixture or a mixture of finite mixtures with base measure $H_0 (\diff\mu, \diff O, \diff\lambda)= H_1(\diff\mu, \diff O) \otimes Q_2(\diff\lambda)$, and $Q_2$ follows a distribution $\wt{\Pi}_\Lambda$.
\end{itemize}
We denote by $\Pi$ the prior on the parameters and we consider the following assumptions on $\Pi$. These conditions differ depending on whether $\Pi$ is assumed to come from a partial location-scale mixture prior or a hybrid location-scale mixture prior.
\paragraph{Conditions on the partial location-scale mixtures} $f$ is modelled as in \eqref{partialprior} and
we assume that $(\mu_i, O_i)$ are iid with distribution $H(\diff\mu, \diff O) = h(\mu, O)\diff\mu \diff O$ where $\diff\mu$ designates the Lebesgue measure on $\mathbb R^D$ and $\diff O$ the Haar measure on $\mathcal O(D)$. We further assume that there exist $c_1,b'_1 > 0$ and $b_1 > 2D-1$ such that
\begin{equation} \label{condH:partial}
e^{-c_1\|\mu\|^{b_1'}} \lesssim h(\mu,O) \lesssim (1 + \|\mu\|)^{-b_1} \quad \text{for all}~\mu, O.
\end{equation}
We also assume that $\Lambda$ is drawn from a probability measure $\Pi_\Lambda$ that has a density $\pi_\Lambda$ with respect to Lebesgue measure on $\mathbb R^D$, and that this density satisfies: there exist $c_2, c_2', b_2>0$ and $b_3 > D(D-1)/2$ such that
\begin{equation} \label{cond:piL}
\begin{split}
&e^{-c_2 \sum_{i=1}^D \lambda_i^{-d/2}} \lesssim \pi_\Lambda(\lambda_1, \cdots , \lambda_D ) \quad \text{for small}~\lambda_1,\dots,\lambda_D \in (\mathbb{R}_+^*)^D, \\
&\Pi_\Lambda\( \min_{1\leq i\leq D} \lambda_i < x \) \lesssim e^{- c_2' x^{-b_2}}\quad~~ \text{for small}~x > 0,\\ \text{and}~~~~~~~~~~&\Pi_\Lambda\( \max_{1\leq i\leq D} \lambda_i > x \) \lesssim x^{-b_3} \quad \quad \quad \text{for large}~x > 0. \\
\end{split}
\end{equation}
Condition \eqref{condH:partial} is weak and is for instance satisfied as soon as $\mu$ and $O$ are independent under $H$ with positive and continuous density for $O$ and positive density for $\mu$ with weak tail assumptions. Condition \eqref{cond:piL} is also weak and common in the case of location Gaussian mixtures and is verified in particular if the $\sqrt{\lambda_i}$'s are independent inverse Gammas under $\Pi_\Lambda$.
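For concreteness, here is a minimal sketch of a (truncated) draw from the partial location-scale prior \eqref{partialprior}, with $H$ the product of a standard Gaussian for $\mu$ and the Haar measure for $O$, and with the $\sqrt{\lambda_i}$'s independent inverse Gammas, in line with the discussion above; the truncation level and hyperparameter values are assumptions of the sketch.
\begin{verbatim}
# Minimal sketch (assumed hyperparameters): truncated draw from the
# partial location-scale mixture prior of eq. (partialprior).
import numpy as np
from scipy.stats import ortho_group, invgamma, multivariate_normal

rng = np.random.default_rng(1)
D, A, K_trunc = 2, 1.0, 50          # ambient dim, DP mass, truncation level
a_ig, b_ig = 2.0, 1.0               # inverse-Gamma hyperparameters for sqrt(lambda_i)

# Shared eigenvalues: sqrt(lambda_i) ~ InvGamma(a_ig, b_ig) independently
lam = invgamma.rvs(a_ig, scale=b_ig, size=D, random_state=rng) ** 2

# Stick-breaking weights p_k = V_k prod_{i<k} (1 - V_i), V_i ~ Beta(1, A)
V = rng.beta(1.0, A, size=K_trunc)
p = V * np.concatenate([[1.0], np.cumprod(1.0 - V[:-1])])

# Atoms theta_k = (mu_k, O_k) ~ H = N(0, I_D) x Haar(O(D))
mu = rng.normal(size=(K_trunc, D))
O = np.stack([ortho_group.rvs(D, random_state=rng) for _ in range(K_trunc)])

def f_partial(x):
    """Density f_{Lambda,P}(x) of the truncated partial location-scale mixture."""
    val = 0.0
    for k in range(K_trunc):
        Sigma = O[k].T @ np.diag(lam) @ O[k]
        val += p[k] * multivariate_normal.pdf(x, mean=mu[k], cov=Sigma)
    return val

print(f_partial(np.zeros(D)))
\end{verbatim}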
\paragraph{Conditions on the hybrid location-scale mixtures}
$H_1$ satisfies \eqref{condH:partial} and $Q_2$ is random with distribution $\tilde \Pi_\Lambda$, which satisfies: there exist $B_0, c_2>0$ such that, for $x_1\leq x_2$ both small,
\begin{equation}\label{cond:piH:LB}
\begin{split}
\tilde\Pi_\Lambda\left[Q_2\left( [x_1,2x_1]^d \times [x_2,2x_2]^{D-d} \right) \geq x_1^{B_0} \right] &\gtrsim e^{-c_2 x_1^{-d/2}}.
\end{split}
\end{equation}
Moreover we assume that there exist positive constants $c_2', b_2, b_3>0$ such that
\begin{equation}\label{cond:piH:UB}
\begin{split}
\mathbb{E}_{\tilde \Pi_\Lambda}\left[Q_2\left(\min_{1\leq i\leq D} \lambda_i \leq x \right) \right] &\lesssim e^{-c_2' x^{-b_2}}\quad \text{for small}~x,\\
\mathbb{E}_{\tilde \Pi_\Lambda}\left[Q_2\left(\max_{1\leq i\leq D} \lambda_i > x \right) \right] &\lesssim x^{-b_3} \quad\quad\quad \text{for large}~x.\\
\end{split}
\end{equation}
\begin{rem}
One can view the partial location-scale mixture as a special instance of the hybrid location-scale mixture defined above: take $Q_2 $ to be the Dirac mass at a value $\Lambda$ where $\Lambda \sim \Pi_\Lambda$.
\end{rem}
Conditions \eqref{cond:piH:LB} and \eqref{cond:piH:UB} are in particular satisfied if
$ Q_2$ comes from a Dirichlet process. More precisely, if $Q_2$ is of the form
\begin{align*}
Q_2(\diff\lambda) = \prod_{i=1}^D Q_0(\diff\lambda_i) ~~~\text{with}~~~Q_0 \sim \DP( BH_\lambda),
\end{align*}
with $B > 0$, then \eqref{cond:piH:LB} and \eqref{cond:piH:UB} are satisfied for reasonable choices of the probability distribution $H_\lambda$ on $\mathbb{R}_+$. We show in the next proposition that this is in particular true when $H_\lambda$ is a square-root inverse Gamma, that is, when $\sqrt{\lambda}$ follows an inverse Gamma under $H_\lambda$.
\begin{prp}\label{prp:hybridDP}
Assume that $ Q_2 = Q_0^{\otimes D}$ where $Q_0 \sim \DP( B H_\lambda),$ where $B>0$ and under $H_\lambda$, $\sqrt{\lambda_i}$ follow an inverse Gamma with parameters $a,b>0$. Then conditions \eqref{cond:piH:LB} and \eqref{cond:piH:UB} are satisfied.
\end{prp}
A proof of \prpref{hybridDP} can be found in \appref{prior}.
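As an illustration of \prpref{hybridDP}, here is a minimal sketch of a (truncated) draw of $Q_2 = Q_0^{\otimes D}$ with $Q_0 \sim \DP(B H_\lambda)$; the hyperparameter values and truncation level are assumptions of the sketch.
\begin{verbatim}
# Minimal sketch (assumed hyperparameters): truncated draw of Q_2 = Q_0^{otimes D}
# with Q_0 ~ DP(B H_lambda) and sqrt(lambda) ~ InvGamma(a, b) under H_lambda.
import numpy as np
from scipy.stats import invgamma

rng = np.random.default_rng(2)
D, B, a, b, K_trunc = 3, 1.0, 2.0, 1.0, 100

# Stick-breaking representation of Q_0
V = rng.beta(1.0, B, size=K_trunc)
w = V * np.concatenate([[1.0], np.cumprod(1.0 - V[:-1])])
atoms = invgamma.rvs(a, scale=b, size=K_trunc, random_state=rng) ** 2  # lambda values

def sample_lambda():
    """One draw of (lambda_1, ..., lambda_D) ~ Q_0^{otimes D} (truncated)."""
    idx = rng.choice(K_trunc, size=D, p=w / w.sum())
    return atoms[idx]

print(sample_lambda())
\end{verbatim}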
\begin{rem}\label{rem:priord}
Although the conditions \eqref{cond:piL} and \eqref{cond:piH:LB} on the prior depend on $d$, they are satisfied for all $d$ by setting $d=1$; in particular, they are verified for all $d\geq 1$ if $\sqrt{\lambda_i}$ follows an inverse Gamma under the base measure, which is agnostic to $d$.
\end{rem}
\section{Main results} \label{sec:theoretical}
\subsection{Posterior contraction rates} \label{subsec:main}
Throughout the rest of this paper, we assume that we observe an i.i.d. sample $X_1,\dots,X_n$ drawn from a density $f_0$. This density is concentrated around a submanifold $M$, with a given smoothness $\beta_0$ along the manifold and a typically much larger smoothness $\beta_\perp$ along the normal spaces. More precisely, we will assume:
\begin{itemize}\setlength{\itemsep}{.05in}
\item \emph{Conditions on $M$}: the submanifold $M$ is of dimension $d$ and has a reach greater than $\tau > 0$. Furthermore, there exists $\beta_M > 2$ and $C_M > 0$ such that $\Psi_{x_0} \in \mathcal{H}^{\beta_M}_{\iso}(\mathcal{V}_{x_0},C_M),~\forall x_0 \in M$. In particular, $M$ also satisfies \eqref{barpsi}.
\item \emph{Conditions on $f_0$}: the density $f_0$ is in $\mathcal{H}^{\beta_0,\beta_\bot}_\delta(M,L)$. Furthermore, there exist
$C_1,C_2 > 0$ and $\kappa > 0$ such that
\begin{equation} \label{expdec}
f_0(x) \leq C_1 e^{- C_2 \|x \|^\kappa}~~~~\forall x\in \mathbb{R}^D,
\end{equation}
and for some $\omega > 6 \beta$ and $ C_0<\infty$,
\begin{equation} \label{assint}
\int_{\mathcal{W}_{x_0,\delta}} \left|\frac{\Diff^k \bar f_{x_0,\delta}}{\bar f_{x_0,\delta}}\right|^{\omega/\inner{k}{\mathbf{m}\alpha}} \bar f_{\delta,x_0} \leq C_0 ~~~\text{and}~~~ \int_{\mathcal{W}_{x_0,\delta}} \left|\frac{L_{x_0,\delta}}{\bar f_{x_0,\delta}}\right|^{\omega/\beta} \bar f_{\delta,x_0} \leq C_0.
\end{equation}
for all $\delta$ small, all $x_0 \in M$ and all $0 \leq \inner{k}{\mathbf{m}\alpha} < \beta$.
\item \emph{Conditions on $\Pi$}: the prior $\Pi$ is originating from a partial location-scale Dirichlet mixture satisfying \eqref{condH:partial} and \eqref{cond:piL}, or from a hybrid location-scale Dirichlet mixture satisfying \eqref{condH:partial} and \eqref{cond:piH:LB}-\eqref{cond:piH:UB}.
\end{itemize}
\begin{rem} Note that the conditions regarding the prior $\Pi$ do not involve $M$, $\delta$, $L$, $\beta$, or $\tau$: they are regarded as unknown in this framework. In fact, the only feature of $M$ or $f_0$ on which $\Pi$ seems to depend is the intrinsic dimension $d$, through \eqref{cond:piL} or \eqref{cond:piH:LB}. However, as noted in Remark \ref{rem:priord}, we can choose priors which do not depend on $d$ and such that these assumptions are verified for all $d\geq 1$.
\end{rem}
In the rest of this paper, we will use throughout the symbols $\simeq$, $\lesssim$ and $\gtrsim$ to denote equalities or inequalities up to a constant depending on $D$, $d$, $\tau$, $\beta_M$, $C_M$, $\beta_0$, $\beta_\perp$ and all the other parameters appearing in conditions \eqref{condH:partial} to \eqref{assint}.
\begin{thm} \label{thm:main}
Let $X_1, \dots, X_n$ be a $n$-sample from $f_0$. We assume that $\omega>6 \beta$ is large enough so that $n^{-\frac{\omega-6\beta}{2\beta+D}} = o(\delta^{D-d})$
and that $\beta_0 \leq \beta_\perp \leq \beta_M - 3$. Then, under the conditions stated above,
$$
\Pi\left(\hel(f_P,f_0) \geq \varepsilon_n~|~\mathcal{X}_n\right) \xrightarrow[n \to \infty]{} 0~~~\text{in}~~ \mathbb{P}^{\otimes n}_0\text{-probability}
$$
where
$$
\varepsilon_n \simeq \log^p n \times \{\frac{1}{\sqrt{n \delta^{\frac{D}{\alpha_0-\alpha_\perp}}}} \vee n^{-\frac{\beta}{2\beta + D}}\},
$$
with $p > 0$ depending on $D$, $\kappa$ and $\beta$.
\end{thm}
\begin{rem}
The case where $\beta_\perp$ is infinite is of particular interest. Then, $\beta \to \beta_0$, $\alpha_0 \to D/d$, $\alpha_\perp\to 0$ and the rate $\varepsilon_n$ becomes
$$
\varepsilon_n^\infty =\log^p (n) \times \{\frac{ 1}{\sqrt{n \delta^{d}}} \vee n^{-\frac{\beta_0}{2\beta_0 + d}}\},
$$
which is, when $\delta$ is not too small (i.e.\ $\delta \gtrsim n^{-1/(2\beta_0+d)}$), the minimax rate for estimating a $\beta_0$-H\"older density in $\mathbb{R}^d$, up to a log term. Here the strength of our result lies in the fact that the manifold (and thus the support of $f_0$) is unknown and that the prior depends neither on $\beta$ nor on $\delta$, $d$ or $M$.
\end{rem}
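For instance, ignoring logarithmic factors, one can numerically check that the two terms of $\varepsilon_n^\infty$ balance at the threshold $\delta \simeq n^{-1/(2\beta_0+d)}$ mentioned above; a minimal sketch, with arbitrary illustrative values of $n$, $\beta_0$ and $d$:
\begin{verbatim}
# Minimal sketch: the two terms of eps_n^infty and the threshold
# delta ~ n^{-1/(2 beta_0 + d)} at which they balance (log factors ignored).
import numpy as np

def eps_inf(n, delta, beta0, d):
    return max(1.0 / np.sqrt(n * delta ** d), n ** (-beta0 / (2 * beta0 + d)))

n, beta0, d = 10_000, 2.0, 1
delta_star = n ** (-1.0 / (2 * beta0 + d))
for delta in [10 * delta_star, delta_star, 0.1 * delta_star]:
    print(delta, eps_inf(n, delta, beta0, d))
\end{verbatim}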
\begin{rem}
Since the class of densities contains the case where $M$ is a $d$-dimensional subspace of $\mathbb{R}^D$, when $\delta \gtrsim n^{-(\alpha_0-\alpha_\perp)/(2\beta+D)}$ the rate $\varepsilon_n$ is the minimax estimation rate (up to a $\log n$ term). Although no proof of a minimax bound exists in our framework, a careful look at the proof of \cite[Thm 4 (ii)]{goldenshluger2014adaptive} for $p=1$, $r = \left(\infty,\dots,\infty\right)$, $s = \infty$ and $\theta = 1$ (tail dominance from \eqref{expdec}) shows that the lower bound translates to our context. Indeed, the densities used to derive the lower bound are obtained from a smooth density with additive perturbations of the form $x \mapsto h^\beta \prod g(x_i/h^{\alpha_i})$, where $h > 0$ and $g$ is a smooth and compactly supported function with zero mean. Such perturbations belong to the anisotropic H\"older classes defined in Section \ref{subsec:anis}.
\end{rem}
\begin{rem} \label{rem:union}Because the approximation results of \subsecref{approx} are stable under finite mixtures, so are the results of \thmref{main}. In particular, the support of $f_0$ can be a finite union of submanifolds $M_i$ with nontrivial intersections, with each $M_i$ fulfilling \eqref{betam}. See \figref{Mi} for a diagram of such a situation. Hence the assumption on the lower bound on the reach can be significantly weakened. We consider such an example in the simulations of Section \ref{sec:num}.
\end{rem}
\begin{figure}[h!]
\centering
\includegraphics[width = 7cm]{M}~~~~
\includegraphics[width = 7cm]{Mi}
\caption{(Left) An example of smooth submanifolds with a reach constrained to be greater than $\tau$ and (Right) a finite union of such manifolds. Both subsets are admissible as the (near) support for a density $f_0$ satisfying the contraction rates displayed in \thmref{main}, as explained in Remark \ref{rem:union}.}
\label{fig:Mi}
\end{figure}
\begin{rem} Assumptions \eqref{expdec} and \eqref{assint} are common assumptions in density estimation based on mixtures of Gaussians, see for instance \cite{kruijer2010adaptive} or \cite{shen2013adaptive} for the Bayesian approaches and \cite{maugis2013adaptive} for the frequentist approaches. They are rather weak assumptions. The difficulty with \eqref{assint} is that it is expressed on $\bar f_{x_0,\delta}$, which is natural in our context since the smoothness assumption on $f_0$ is expressed in terms of $\bar f_{x_0,\delta}$, but is not so intuitive. However, a careful examination of \eqref{assint} shows that this assumption is for instance implied by the stronger, but chart-independent, assumption:
$$
\int_{\mathbb{R}^D} \{\frac{L(x)}{f_0(x)}\}^{\omega_*} f_0(x) \diff x \leq C_0,
$$
for some large $\omega_*$. To better understand what this means in terms of $f_0$, we go back to our archetypal model where $f_0$ is the density of $Z = X +\delta \mathcal E$ as in Proposition \ref{prp:model}. We then have the following result:
\end{rem}
\begin{lem}\label{lem:moments} Going back to the two models of \prpref{model}, assume that
$$
\int_{M} \{\frac{L_0(x)}{f_*(x)} \}^{\omega_* } f_*(x) \diff\mu_{M}(x) < \infty ~~~\text{and}~~~ \int_{\ball(0,1)} \{\frac{L_\perp(\eta)}{K(\eta)} \}^{\omega_*}K(\eta) \diff \eta < \infty,
$$
for some $\omega_*$ large. Then \eqref{assint} is satisfied in both the orthonormal and the isotropic noise models.
\end{lem}
Note that the conditions in \lemref{moments} are fulfilled by a number of natural kernels or densities. For instance, one could take $K(\eta) \approx (1-\|\eta\|^2)_+^p$ and $L_\perp \approx (1-\|\eta\|^2)_+^{p-\beta_\perp}$ with large $p$. Another example could be $K(\eta) \approx \exp(-(1-\|\eta\|^2)^{-1}_+)$ and $L_\perp(\eta) \approx (1-\|\eta\|^2)^{-\beta_M} K(\eta)$; see \lemref{chixj} for further details regarding the computations.
\subsection{Approximating M-anisotropic densities} \label{subsec:approx}
Theorem \ref{thm:main} is proved using the approach of \cite{ghosal2007posterior}, with a control on the prior mass of Kullback-Leibler neighbourhoods of $f_0$ and on the entropy of the support of the prior. The difficulty in our setup is in proving the Kullback-Leibler prior mass condition. To do so, we need to construct an \textit{efficient} approximation of $f_0$ by mixtures of Gaussians. This construction is of interest in its own right as it sheds light on the behaviour of such mixtures and on the geometry of M-anisotropic densities.
To explain the construction, we denote, for any $x \in M^\tau$, $T_x = T_{\pr_M(x)} M$ and $N_x = N_{\pr_M(x)} M$. We also write $\Sigma(x) = O_x^\top \Delta_{\sigma,\delta}^2 O_x$, where $O_x$ is the matrix, in the canonical basis of $\mathbb{R}^D$ and arbitrary orthonormal bases of $T_x$ and $N_x$, of the linear map $z \mapsto (\pr_{T_x} z, \pr_{N_x} z)$, and where
$$
\Delta_{\sigma,\delta} = \begin{pmatrix} \sigma^{\alpha_0} \Id_{d} & 0 \\ 0 & \delta \sigma^{\alpha_\perp} \Id_{D-d} \end{pmatrix}.
$$
Note that $\Sigma(x)$ does not depend on the choice of orthonormal bases of $T_x$ and $N_x$, since for any orthonormal change of basis $P$ that preserves $T_x$, one has $P^\top \Delta_{\sigma,\delta} P = \Delta_{\sigma,\delta}$.
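As an illustration, when $M$ is the unit circle in $\mathbb{R}^2$ (so $d = 1$), the map $x \mapsto \Sigma(x)$ can be written down explicitly; a minimal sketch, where the values passed for $\alpha_0$ and $\alpha_\perp$ are placeholders (we take the definitions of these exponents as given):
\begin{verbatim}
# Minimal sketch (assumed example): the location-dependent covariance
# Sigma(x) = O_x^T Delta_{sigma,delta}^2 O_x when M is the unit circle in R^2.
import numpy as np

def Sigma_circle(x, sigma, delta, alpha0, alpha_perp):
    x = np.asarray(x, dtype=float)
    p = x / np.linalg.norm(x)               # pr_M(x) for the unit circle
    t = np.array([-p[1], p[0]])             # unit tangent of M at pr_M(x)
    n = p                                   # unit normal of M at pr_M(x)
    O_x = np.stack([t, n])                  # rows: tangential then normal coordinates
    Delta2 = np.diag([sigma ** (2 * alpha0), (delta * sigma ** alpha_perp) ** 2])
    return O_x.T @ Delta2 @ O_x

print(Sigma_circle([2.0, 0.0], sigma=0.1, delta=0.05, alpha0=1.0, alpha_perp=0.5))
\end{verbatim}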
For any function $f : \mathbb{R}^D \to \mathbb{R}$, we define
\begin{align*}
K_\Sigma f(x) &:= \int_{M^\tau} \varphi_{\Sigma(y)} (x-y) f(y) \diff y,
\end{align*}
where we recall that $\varphi_{\Sigma(y)}$ is the density of a centered Gaussian with covariance $\Sigma(y)$. The idea behind the construction of the approximation of $f_0$ by a mixture of Gaussians is to show that $K_\Sigma f_0(x)$ is close to $f_0(x)$ and then to define a perturbation $f_1$ of $f_0$ such that $K_\Sigma f_1(x) - f_0(x) = O(L(x) \sigma^{\beta})$. Compared to the construction proposed by \cite{kruijer2010adaptive} in the univariate case or \cite{shen2013adaptive} in the multivariate case, where $K_\Sigma f = \varphi_\Sigma \ast f$, in our construction $\Sigma$ varies with the location $y$. Note in particular that $\int \varphi_{\Sigma(y)}(x-y) \diff y$ may differ from 1. This dependence on $y$ is crucial to adapt to the geometry of the manifold. We first show that $f_0$ can be efficiently approximated in sup-norm.
\begin{thm}\label{thm:approx} Assume that $\sigma,\delta \leq1$, that $f_0 \in \mathcal{H}^{\beta_0,\beta_\bot}_\delta(M,L)$ satisfies \eqref{expdec}, that $\sigma^{\alpha_0-\alpha_\perp} \leq\delta$ and that the manifold satisfies \eqref{betam} with $\beta_0 \leq \beta_\perp \leq \beta_M-3$.
Then there exists a function $g : \mathbb{R}^D \to \mathbb{R}$ such that, for any $H > 0$,
$$
| K_\Sigma g(x) - f_0(x) | \lesssim \sigma^\beta L(x)\mathbbm{1}_{M^\tau} + (H\log(1/\sigma))^{D/\kappa} \sigma^H \| L \|_\infty ~~~\forall x \in \mathbb{R}^D.
$$
The function $g$ has the form
$$g(x) = f_0(x) + \frac{ 1}{ \delta^{D-d} }\sum_{0 < \inner{k}{\mathbf{m}\alpha} < \beta} \sigma^{\inner{k}{\mathbf{m}\alpha}}\sum_{j=1}^J d_{j,k}(x,\sigma,\delta) \Diff_z^{k} \overline{(\chi_j f_0)}_{x_j,\delta}(z_{j,x})~~\text{where}~~z_{j,x} := \Delta_{1,\delta}^{-1} \bar\Psi_{x_j,\delta}^{-1}(x),$$
where $(\chi_j)_{j\leq J}$ is a partition of unity of $M^\tau \cap \ball( 0, R_0(\log (1/\sigma))^{1/\kappa})$ defined in Appendix \ref{app:theoretical} associated with a $\tau/64$-packing $(x_j)_{j\leq J}$ of $M^\tau \cap \ball( 0, R_0(\log (1/\sigma))^{1/\kappa})$ and $d_{j,k}(x,\sigma,\delta) $ are smooth and bounded functions depending on $\chi_j$ and $M$.
\end{thm}
We then establish that the previous bound translates to a control in terms of Hellinger distance.
\begin{cor} \label{cor:approx}
In the context of \thmref{approx}, if $f_0$ also satisfies \eqref{assint} and if $\sigma^{\omega - 6\beta} = o(\delta^{D-d})$
and $\delta \sigma^{\alpha_\perp} = o(|\log \sigma|^{-1/2})$,
the probability density $\tilde h \propto g \mathbbm{1}_{g\geq f_0/2} + f_0/2 \mathbbm{1}_{g<f_0/2}$ verifies
\begin{equation}\label{eq:approx}
\hel(K_\Sigma \tilde h,g)^2 \lesssim \sigma^{2\beta} |\log \sigma |^{16\beta + D/\kappa}.
\end{equation}
\end{cor}
Theorem \ref{thm:approx} and \corref{approx} are proved in Appendices \ref{app:thmapprox1} and \ref{app:thmapprox2}.
\section{Numerical results} \label{sec:num}
In this section we present two implementations of the partial location-scale prior, using either Gibbs sampling on the model parameters (for instance \cite[Algo 8]{neal2000markov}) or stochastic variational inference with the {\tt python} package {\tt pyro} (\cite{bingham2019pyro}). The goal of this section is not to check that the theoretical results of this paper hold numerically, but rather to give some visual examples of how suitable the location-scale mixtures of Subsection \ref{sec:prior} can be to describe, in a relevant way, data that can be very singular.
\subsection{Simulations with Gibbs sampling} \label{sec:gibbssampl}
This first section presents the implementation of the partial location-scale prior \eqref{partialprior} with Gibbs sampling for a particular choice of measure $H$. We will use the notations (see \cite{hoff2009simulation} for more details) $p_{\ML}(O|M) \propto \exp(\tr(M^\top O))$ and $p_{\BMF}(O|A,B,C) \propto \exp(\tr(C^\top O + B O^\top A O))$ ($M,C \in \mathcal{M}_D(\mathbb{R}), O \in \mathcal{O}_D(\mathbb{R}), A \in \mathcal{S}_D(\mathbb{R})$ and $B$ diagonal). $p_{\ML}$ ("Matrix-Langevin") and $p_{\BMF}$ ("Bingham-von Mises-Fisher") are parametric probability densities on $\mathcal{O}_D(\mathbb{R})$ with respect to the uniform Haar measure.
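For reference, the unnormalized log-densities of these two families are straightforward to code; a minimal sketch (normalizing constants omitted, $B$ diagonal as above):
\begin{verbatim}
# Minimal sketch: unnormalized log-densities of the Matrix-Langevin and
# Bingham-von Mises-Fisher distributions on O_D(R) used below.
import numpy as np

def log_p_ML_unnorm(O, M):
    # log p_ML(O | M) = tr(M^T O) + const
    return np.trace(M.T @ O)

def log_p_BMF_unnorm(O, A, B, C):
    # log p_BMF(O | A, B, C) = tr(C^T O + B O^T A O) + const
    return np.trace(C.T @ O + B @ O.T @ A @ O)
\end{verbatim}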
More precisely, we consider the hierarchical model :
\begin{align} \label{prior:1}
\begin{split}
(y_i)_{i=1}^n ~|~ (\mu_i,O_i)_{i=1}^n, P,\Lambda,(b_j)_{j=1}^D &\sim \bigotimes_{i=1}^n \mathcal{N}(\cdot|\mu_i,O_i \diag(\Lambda) O_i^T)\\
(\mu_i,O_i) ~|~ P,\Lambda,(b_j)_{j=1}^D &\sim P^{\otimes n} \\
(P,\Lambda) ~|~ (b_j)_{j=1}^D &\sim \DP(\alpha P_0)\otimes G_0 \\
(b_j)_{j=1}^D &\sim \bigotimes_{j=1}^D \Exp(\kappa_j)
\end{split}
\end{align}
with
$$
G_0 = \bigotimes_{j=1}^D \Invg(a_j,b_j)~~\text{and}~~P_0 = \mathcal{N}(\cdot|\mu_0,\Sigma_0) \otimes p_{\ML}(\cdot|M_0),
$$
and where $\alpha,a_j,\kappa_j > 0, \mu_0 \in \mathbb{R}^D, \Sigma_0 \in \mathcal{S}_D^{++}(\mathbb{R}), M_0 \in \mathcal{M}_D(\mathbb{R})$ are fixed hyperparameters. Notice that the variances are constrained to be the same across all components, but the orientations of the covariance matrices vary across components. Even though we did not investigate the effect of the additional prior on the $b_j$'s on the posterior contraction rate, from a methodological point of view this leads to better empirical results, as discussed in Remark \ref{rem:1}.
In order to simplify the notations we will write indifferently $\Lambda$ as a $D$-dimensional diagonal matrix or a $D$-dimensional vector.
In model \eqref{prior:1}, due to the discrete nature of the Dirichlet process, many ties arise among the $(\mu_i,O_i) \overset{i.i.d.}{\sim} P$, which induces a nontrivial clustering of the sample $(y_i)_{i=1}^n$ described by the Chinese restaurant process. We will write the resulting partition as
\[
c_i = c_j \ \Leftrightarrow \ \text{i and j are in the same cluster}
\]
with
\[
N = \text{number of clusters}, 1 \leq c_i \leq N
\]
We write $\phi_c = (\mu_c^*,O_c^*)_{1 \leq c \leq N}$ for the unique values of the cluster parameters. The conditional posterior densities are in fact explicit, as shown in the next proposition:
\begin{prp} \label{thm:prior1}
In the model \eqref{prior:1}, we have:
\begin{align*}
p(\Lambda | Y, (\mu_i,O_i),b) = & \bigotimes_{j=1}^D \Invg\left(a_j+n/2, b_j + \frac{1}{2} \sum_{i=1}^n \langle O_i^j | y_i - \mu_i \rangle^2\right) \\
p(b|Y,(\mu_i,O_i),\Lambda) = & \bigotimes_{j=1}^D \Exp(\kappa_j + \lambda_j^{-1})
\end{align*}
and :
\begin{align*}
p(\mu_c^*|O_c^*,Y,(c_i),\Lambda) &= \mathcal{N}(\mu_c^*|A^{-1}b,A^{-1}) \\
p(O_c^*|\mu_c^*,Y,(c_i),\Lambda) &= p_{\BMF}(O_c^*|S,- \frac{1}{2}\Lambda^{-1},M_0)
\end{align*}
where
\begin{align*}
A &= \Sigma_0^{-1} + n_c O_c^* \Lambda^{-1} (O_c^*)^T,~~~~~~
n_c = \Card \{i: c_i = c \}, \\
b &= \Sigma_0^{-1}\mu_0 + O_c^*\Lambda^{-1}(O_c^*)^T \sum_{c_i = c}y_i
~~\text{and}~~ S = \sum_{c_i = c} (y_i - \mu_c)(y_i - \mu_c)^T.
\end{align*}
\end{prp}
A proof can be found in \appref{gibbssampl}. Proposition \ref{thm:prior1} is the key to implementing a Gibbs algorithm (cf.\ \cite{neal2000markov}).
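As an illustration, here is a minimal sketch of the two conjugate updates of Proposition \ref{thm:prior1} (the updates of the $\mu_c^*$'s and $O_c^*$'s are omitted); we take $O_i^j$ to be the $j$-th row of $O_i$, consistently with the parametrization $\Sigma = O^\top \Lambda O$, and the array shapes are assumptions of the sketch.
\begin{verbatim}
# Minimal sketch (assumed conventions): Gibbs updates of Lambda and b
# from Proposition (thm:prior1).
import numpy as np

def gibbs_update_lambda_b(Y, mu, O, b, a, kappa, rng):
    """One Gibbs scan over (lambda_1,...,lambda_D) and (b_1,...,b_D).

    Y  : (n, D) observations      mu : (n, D) per-observation means
    O  : (n, D, D) rotations      b, a, kappa : (D,) hyperparameters
    """
    n, D = Y.shape
    resid = Y - mu
    # s_j = sum_i <O_i^j | y_i - mu_i>^2, with O_i^j the j-th row of O_i
    proj = np.einsum('ijk,ik->ij', O, resid)
    s = np.sum(proj ** 2, axis=0)
    # lambda_j | ... ~ InvGamma(a_j + n/2, b_j + s_j/2)
    lam = 1.0 / rng.gamma(shape=a + n / 2.0, scale=1.0 / (b + s / 2.0))
    # b_j | ... ~ Exp(kappa_j + 1/lambda_j)   (rate parametrization)
    b_new = rng.exponential(scale=1.0 / (kappa + 1.0 / lam))
    return lam, b_new
\end{verbatim}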
\begin{rem}\label{rem:1}
Here we can see that when $\sum_{i=1}^n \langle O_i^j | y_i - \mu_i \rangle^2 = o(n^{3/2})$, $\Invg(a+n/2, b_j + \frac{1}{2} \sum_{i=1}^n \langle O_i^j | y_i - \mu_i \rangle^2)$ is tightly concentrated around its mean
$$\frac{b_j + \frac{1}{2} \sum_{i=1}^n \langle O_i^j | y_i - \mu_i \rangle^2}{a+n/2-1} \geq \frac{b_j}{a+n/2-1};$$
and if we fix $b_j$ this may induce a rather strong penalization on small values of $\lambda_j$ for finite $n$.
\end{rem}
Even though, to the best of our knowledge, no direct samplers are available for $p_{\BMF}$, it is still possible to perform a Gibbs sampling update over the columns of $O_c^*$ (cf.\ \cite{hoff2009simulation} and the associated package {\tt rstiefel}). Another more involved (but more efficient) option would be to perform Hamiltonian Monte Carlo via polar expansion, as suggested in \cite{jauch2021monte}. With a slight abuse of notation we will write
$$
O_c^* \sim p_{\BMF,\Gibbs}(\cdot|A,B,C,O_c^*)
$$
for a Gibbs sampling scan over the columns of $O_c^*$ starting from $O_c^*$. All in all, this leads us to \algoref{1} presented in \appref{gibbssampl}.
We present below a visual comparison between the partial location-scale prior with an additional prior on the $b_j$'s, the partial location-scale prior with fixed values of the $b_j$'s, and the Normal inverse-Wishart prior. The training data is a 2-dimensional sample of size 300, defined either by $X \sim \mathcal{U}_{[0,4\pi]}, J(X) = (X\cos(X)/2, X\sin(X)/2), \epsilon \sim \mathcal{N}(0,I_2), Y = J(X) + 0.02 \epsilon$ or by $X \sim \mathcal{U}_{[-1,1]}, J(X) = (X,X^2), \epsilon \sim \mathcal{N}(0,I_2), Y = J(X) + 0.01 \epsilon$ (thus, the underlying submanifold is 1-dimensional). The number of auxiliary parameters used for \cite[Algo 8]{neal2000markov} is $m=5$ (see \algoref{1} in \appref{gibbssampl} for a definition of $m$) and we perform $10000$ Gibbs sampling iterations in each case.
\begin{figure}[!h]
\centering
\includegraphics[width = 16cm]{x_2_prior_hyperprior_comparison_new_labels.png}
\includegraphics[width = 16cm]{2d_spiral_prior_hyperprior_comparison_new_labels.png}
\caption{The blue points are the training samples and the red points are the predicted data from the estimator of the posterior mean.}
\label{fig:priorcomparison}
\end{figure}
Even though it leads to a consistent estimate of the posterior distribution of interest, Gibbs sampling in its vanilla version is typically quite slow.
In order to make this procedure scalable, both for the size of the sample and the dimensionality of the data, one solution could be to use a variant based on parallel computing for Dirichlet process mixtures, see for instance \cite{meguelati2019dirichlet}. Another solution is to replace MCMC simulation by variational inference, as explained in the next subsection.
\subsection{Simulations with stochastic variational inference} \label{subsec:pyro}
In this section, we provide a few numerical experiments using stochastic variational inference (SVI) \citep{hoffman2013stochastic} to approximate the posterior distribution in a fast and efficient fashion, as implemented in the {\tt python} package {\tt pyro} \citep{bingham2019pyro}. We restrict our numerical study to the ambient dimensions $D=2$ and $D=3$, and use the partial location-scale mixture described in \secref{model} with exponential hyperpriors on the common eigenvalues $\lambda_1$ and $\lambda_2$ of the scales, and with the eigenvalue matrix $\Lambda$ having the form
$$
\Lambda = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}~~\text{for}~~D=2,~~~~\text{or}~~~~\Lambda = \begin{pmatrix} \lambda_1 & 0 & 0\\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_2 \end{pmatrix}~~\text{for}~~D=3.
$$
We refer to \appref{pyro} for further details regarding the design of the prior. The synthetic datasets we use are supported near four geometric shapes: the 2D spiral ($D=2$, $d=1$), the two circles ($D=2$, $d=1$), the 3D spiral ($D=3$, $d=1$) and the torus ($D=3$, $d=2$). We also refer to \appref{pyro} for the exact parametric equations of these sets.
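Before turning to the results, here is a heavily simplified {\tt pyro} sketch of the structure of such a partial location-scale mixture for $D=2$, with rotations parametrized by a single angle, a fixed number of components and exponential hyperpriors on $(\lambda_1,\lambda_2)$; this is only meant to convey the structure of the model and is not the exact prior used in our experiments (see \appref{pyro} for those details).
\begin{verbatim}
# Heavily simplified sketch (assumptions: D = 2, fixed number of components,
# illustrative hyperparameters); not the exact prior used in the experiments.
import math
import torch
import pyro
import pyro.distributions as dist

def model(data, K=20):
    # shared eigenvalues of the covariances, with exponential hyperpriors
    lam = pyro.sample("lam", dist.Exponential(torch.ones(2)).to_event(1))
    weights = pyro.sample("weights", dist.Dirichlet(torch.ones(K)))
    with pyro.plate("components", K):
        mu = pyro.sample("mu", dist.Normal(torch.zeros(2), 5.0).to_event(1))
        theta = pyro.sample("theta", dist.Uniform(0.0, 2.0 * math.pi))
    c, s = torch.cos(theta), torch.sin(theta)
    O = torch.stack([torch.stack([c, -s], -1), torch.stack([s, c], -1)], -2)
    cov = O.transpose(-1, -2) @ torch.diag_embed(lam) @ O   # Sigma = O^T Lambda O
    comp = dist.MultivariateNormal(mu, covariance_matrix=cov)
    mix = dist.MixtureSameFamily(dist.Categorical(weights), comp)
    with pyro.plate("data", data.shape[0]):
        pyro.sample("obs", mix, obs=data)
\end{verbatim}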
For each shape, we generate between 500 and 10000 points for various values of $\delta$, and generate the same amount of data from the estimated posterior obtained with SVI by posterior predictive sampling. The results are presented in Figures \ref{fig:spiral2D} to \ref{fig:torus}. As expected, the posterior does visually concentrate around the true distribution, even when $\delta$ is very small (i.e.\ the true probability measure is very singular) or when the support of the density is a union of two crossing manifolds, as is the case for the two circles; see also Remark \ref{rem:union}.
\begin{figure}[h!]
\centering
\begin{subfigure}{0.22\textwidth}
\includegraphics[width = 4cm]{data_spiral_2D_N_500_delta_01} \caption{} \label{fig:1a}\end{subfigure}
\begin{subfigure}{0.22\textwidth}
\includegraphics[width = 4cm]{pred_spiral_2D_N_500_delta_01}\caption{} \label{fig:1b} \end{subfigure}
\begin{subfigure}{0.22\textwidth}
\includegraphics[width = 4cm]{data_spiral_2D_N_500_delta_001}\caption{} \label{fig:1c} \end{subfigure}
\begin{subfigure}{0.22\textwidth}
\includegraphics[width = 4cm]{pred_spiral_2D_N_500_delta_001} \caption{} \label{fig:1d}\end{subfigure}
\caption{The 2D spiral: observed data in blue (a)-(c) versus predicted data in orange (b)-(d) for $n = 500$ observations and $\delta = 0.1$ (a)-(b) or $\delta = 0.01$ (c)-(d).}
\label{fig:spiral2D}
\end{figure}
\begin{figure}[h!]
\centering
\begin{subfigure}{0.22\textwidth}
\includegraphics[width = 4cm]{data_circles_N_500_delta_01}\caption{} \label{fig:2a}\end{subfigure}
\begin{subfigure}{0.22\textwidth}
\includegraphics[width = 4cm]{pred_circles_N_500_delta_01}\caption{} \label{fig:2b}\end{subfigure}
\begin{subfigure}{0.22\textwidth}
\includegraphics[width = 4cm]{data_circles_N_500_delta_001}\caption{} \label{fig:2c}\end{subfigure}
\begin{subfigure}{0.22\textwidth}
\includegraphics[width = 4cm]{pred_circles_N_500_delta_001}\caption{} \label{fig:2d}\end{subfigure}
\caption{The circles: observed data in blue (a)-(c) versus predicted data in orange (b)-(d) for $n = 500$ observations and $\delta = 0.1$ (a)-(b) or $\delta = 0.01$ (c)-(d).}
\label{fig:circles}
\end{figure}
\begin{figure}[h!]
\centering
\begin{subfigure}{0.22\textwidth}
\includegraphics[width = 4cm]{data_spiral_3D_N_1000_delta_01}\caption{} \label{fig:3a}\end{subfigure}
\begin{subfigure}{0.22\textwidth}
\includegraphics[width = 4cm]{pred_spiral_3D_N_1000_delta_01}\caption{} \label{fig:3b}\end{subfigure}
\begin{subfigure}{0.22\textwidth}
\includegraphics[width = 4cm]{data_spiral_3D_N_1000_delta_001}\caption{} \label{fig:3c}\end{subfigure}
\begin{subfigure}{0.22\textwidth}
\includegraphics[width = 4cm]{pred_spiral_3D_N_1000_delta_001}\caption{} \label{fig:3d}\end{subfigure}
\caption{The 3D spiral: observed data in blue (a)-(c) versus predicted data in orange (b)-(d) for $n = 1000$ observations and $\delta = 0.1$ (a)-(b) or $\delta = 0.01$ (c)-(d).}
\label{fig:spiral3D}
\end{figure}
\begin{figure}[h!]
\centering
\begin{subfigure}{0.22\textwidth}
\includegraphics[width = 4cm]{data_torus_N_10000_delta_05}\caption{} \label{fig:4a}\end{subfigure}
\begin{subfigure}{0.22\textwidth}
\includegraphics[width = 4cm]{pred_torus_N_10000_delta_05}\caption{} \label{fig:4b}\end{subfigure}
\begin{subfigure}{0.22\textwidth}
\includegraphics[width = 4cm]{data_torus_N_10000_delta_005}\caption{} \label{fig:4c}\end{subfigure}
\begin{subfigure}{0.22\textwidth}
\includegraphics[width = 4cm]{pred_torus_N_10000_delta_005}\caption{} \label{fig:4d}\end{subfigure}
\caption{The torus: observed data in blue (a)-(c) versus predicted data in orange (b)-(d) for $n = 10000$ observations and $\delta = 0.5$ (a)-(b) or $\delta = 0.05$ (c)-(d).}
\label{fig:torus}
\end{figure}
One interesting observation is the fact that the prior naturally adapts to the intrinsic dimension $d$ of the support: for the 3D spiral, one notices that the value of $\lambda_1$ (which corresponds to the $1$-dimensional side of the covariance matrix) is predominant compared to $\lambda_2$, while it is automatically the other way around for the torus --- see \figref{lambda} for a visualisation of this fact.
\\
We conclude this series of numerical experiments with an inspection of the decay of the contraction rate for the 2D spiral and the torus, with $n$ ranging from $100$ to $10\,000$.
As in \cite{mukhopadhyay2020estimating}, we evaluate the risk using the histogram metric, which is computationally much less expensive than the $L_1$ metric. It is computed as follows: take a new test sample $\mathcal{X}_N = \{x_1,\dots,x_N\}$ and, for any predictive $n$-sample $\mathcal{Z}_n := \{z_1,\dots,z_n\}$ built on a training sample, compute
\begin{equation} \label{eq:hist}
\mathrm{hist}_\varepsilon(\mathcal{X}_N,\mathcal{Z}_n) := \frac1N \sum_{i=1}^N \left| \mathbb{P}_{\mathcal{X}_N}(\ball(x_i,\varepsilon))-\mathbb{P}_{\mathcal{Z}_n}(\ball(x_i,\varepsilon)) \right|
\end{equation}
where $\mathbb{P}_{\mathcal{X}_N}$ (resp. $\mathbb{P}_{\mathcal{Z}_n}$) is the empirical distribution associated with $\mathcal{X}_N$ (resp. $\mathcal{Z}_n$). We present the results in Figures \ref{fig:spiralloss} and \ref{fig:torusloss}.
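For completeness, a minimal sketch of how \eqref{eq:hist} can be computed, using a KD-tree to count the points of each sample falling in the balls $\ball(x_i,\varepsilon)$:
\begin{verbatim}
# Minimal sketch: the histogram metric of eq. (eq:hist) between a test
# sample X_N and a predictive sample Z_n.
import numpy as np
from scipy.spatial import cKDTree

def hist_metric(X, Z, eps):
    """hist_eps(X_N, Z_n) = (1/N) sum_i |P_X(B(x_i,eps)) - P_Z(B(x_i,eps))|."""
    tree_X, tree_Z = cKDTree(X), cKDTree(Z)
    counts_X = np.array([len(idx) for idx in tree_X.query_ball_point(X, eps)])
    counts_Z = np.array([len(idx) for idx in tree_Z.query_ball_point(X, eps)])
    return np.mean(np.abs(counts_X / len(X) - counts_Z / len(Z)))
\end{verbatim}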
\begin{figure}[h!]
\centering
\includegraphics[width = 6cm]{lambda_spiral_3D_N_1000_delta_001}
\includegraphics[width = 6cm]{lambda_torus_N_10000_delta_005}
\caption{The values of $\lambda_1$ and $\lambda_2$ as functions of the number of iterations in the SVI process for the 3D spiral with $n=1000$ observations and $\delta = 0.05$ (Left) and the torus with $n=10000$ observations and $\delta = 0.05$ (Right). Both values are initialized in the same way but separate as soon as the first iteration.}
\label{fig:lambda}
\end{figure}
\begin{figure}[h!]
\centering
\centering
\includegraphics[width = 5.5cm]{pred_spiral_2D_Nmax_1000_Nrep_30_delta_005_nobs_100}
\includegraphics[width = 5.5cm]{pred_spiral_2D_Nmax_1000_Nrep_30_delta_005_nobs_999}
\includegraphics[width = 5.5cm]{hists_spiral_2D_Nmax_1000_Nrep_30_delta_005_fig}
\caption{For the 2D spiral with $\delta = 0.05$ we sampled 1000 points from the predictive distribution after observing (Left) 100 points and (Middle) 999 points. (Right) We computed the histogram metric \eqref{eq:hist} for a new test sample of size $N = 10000$ and a predictive $n = 1000$ sample, for $\varepsilon = 0.05$. The experiment was repeated 30 times for a training sample of size ranging from 100 to 999 on a log-regular grid of length 10. We show the median value of the metric in orange and its 10th and 90th percentiles in gray. Results are in double $\log_{10}$-scale.}
\label{fig:spiralloss}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width = 5.5cm]{pred_torus_Nmax_10000_Nrep_10_delta_005_nobs_100}
\includegraphics[width = 5.5cm]{pred_torus_Nmax_10000_Nrep_10_delta_005_nobs_10000}
\includegraphics[width = 5.5cm]{hists_torus_Nmax_10000_Nrep_10_delta_005_fig}
\caption{Same experiment as in \figref{spiralloss} but for the torus with $\delta = 0.05$. Prediction is made using (Left) 100 observed points and (Middle) 10000 points. (Right) We computed the histogram metric \eqref{eq:hist} for a new test sample of size $N = 10000$ and a predictive $n = 1000$ sample, for $\varepsilon = 0.5$. The experiment was repeated 10 times for a training sample of size ranging from 100 to 10000 on a log-regular grid of length 5.}
\label{fig:torusloss}
\end{figure}
\section{Outline of the proofs} \label{sec:proofs}
Theorem \ref{thm:main} is proven using \cite[Thm 5]{ghosal2007posterior}, which relies on two things: making sure that the prior probability distribution puts enough mass around the true density $f_0$, and ensuring that most of its probability mass is concentrated on a subset of manageable entropy. We let
$$
\ball(f_0,\varepsilon) = \{f : \mathbb{R}^D \to \mathbb{R}~\middle|~\mathbb{P}_0 \left(\log \frac{f_0}{f} \right) \leq \varepsilon^2~~\text{and}~~\mathbb{P}_0 \left(\log^2 \frac{f_0}{f}\right) \leq \varepsilon^2\}
$$
and show, under the conditions stated in \secref{model}, that the prior puts enough mass on the neighborhood $\ball(f_0,\varepsilon)$ of $f_0$ for suitably small values of $\varepsilon$.
\begin{lem} \label{lem:thickness} Let $\varepsilon > 0$ and assume that it is small enough so that $\varepsilon^{\alpha_0} \leq \delta^\beta \varepsilon^{\alpha_\perp}$ and that $\varepsilon^{\alpha_\perp} \ll \log^{-1}(1/\varepsilon)$. Then, in the context of \thmref{main}, there holds
$$
\Pi\left(f_{P} \in \ball(f_0,\tilde\varepsilon)\right) \gtrsim \exp\{- C \varepsilon^{-D/\beta} \log^{t}\left(1/\varepsilon\right)\},
$$
where the constant $C$ depends on the parameters, where $\tilde\varepsilon \simeq \varepsilon \log^s (1/\varepsilon)$ and with $s,t > 0$ depending on $D$, $\beta$ and $\kappa$.
\end{lem}
The proof of \lemref{thickness} is to be found in \appref{thickness} and makes use of the fact that the probability measure $K_\Sigma \tilde h$ exhibited in \corref{approx} can be discretized as follows:
\begin{lem} \label{lem:disc} Let $\varepsilon > 0$ be such that $\delta\sigma^{\alpha_\perp}\log(1/\varepsilon) \ll 1$. For any density $g$ on $M^\delta$ satisfying \eqref{expdec}, there exists a discrete probability measure $G$ on $\mathbb{R}^D$ such that
\begin{itemize}\setlength{\itemsep}{.05in}
\item[i)] The atoms of $G$ are in $M^\delta$ and are $\sigma^{2\alpha_0}\varepsilon$-apart;
\item[ii)] $G$ has at most $N$ atoms, with $N \simeq \sigma^{-D} \log^{D}(1/\varepsilon)$;
\item[iii)] There holds
$$\|K_\Sigma G - K_\Sigma g \|_\infty \lesssim \frac{\varepsilon}{\sigma^D \delta^{D-d}}~~\text{and}~~\|K_\Sigma G - K_\Sigma g \|_1 \lesssim \varepsilon \log^{D/2}(1/\varepsilon).
$$
\end{itemize}
\end{lem}
The proof of \lemref{disc} can be found in \appref{disc}. We finally check that our prior fulfils the entropic condition of \cite[Thm 5]{ghosal2007posterior}, as in \cite{canale2017posterior}. To do so we define
$\mathcal F_n$ to be the set of all probability density functions $f_P$ with $P = \sum_{h \geq 1} \pi_h \delta_{\mu_h,U_h,\Lambda_h}$ such that
\begin{align*}
\sum_{h > H_n} \pi_h \leq \varepsilon_n,~~~~\forall h \leq H_n, \mu_h \in B(0,R_n)~~~~\text{and}~~~~ \forall h \leq H_n, \Lambda_h \in \mathcal Q_n= [\underline \sigma_n^2, \bar \sigma_n^2]^D,
\end{align*}
where $R_n = \exp(R_0 n\epsilon_n^2)$, $H_n = \lfloor H_0 (n\epsilon_n^2)/\log n \rfloor$, $\underline{\sigma}_n^2 = \sigma_0^2 (n\epsilon_n^2)^{-1/b_2}$ for some constants $R_0$, $H_0$ and $\sigma_0$, and where $\bar \sigma_n^2$ goes to infinity with
\begin{itemize}
\item $\bar \sigma_n^2 = \exp( \sigma_1^2 (n\epsilon_n^2)^{1/b_3} ) $ in the case of the Partial location-scale mixture
\item $\bar \sigma_n^2 = \sigma_1^2 (n\epsilon_n^2) $ in the case of the hybrid location-scale mixture.
\end{itemize}
In addition we consider the following partition of $\mathcal F_n$. For $\mathbf j = (j_h, h\leq H_n) \in \mathbb{N}^{H_n}$,
$$ \mathcal F_{n, \mathbf j} := \{f_P \in \mathcal{F}_n~|~ \forall h\leq H_n,~ j_h \sqrt{n} < \|\mu_h\| \leq ( j_h+1)\sqrt{n}\} $$
and for the partial location-scale prior
$$\mathcal F_{n, \mathbf j, 0} := \{f_P \in \mathcal F_{n, \mathbf j} ~\middle|~ \frac{ \max_i\lambda_{i} }{ \min_i\lambda_{i}} \leq n \}~~~\text{and}~~~\forall \ell \geq 1,~ \mathcal F_{n, \mathbf j, \ell} := \{f_P \in \mathcal F_{n, \mathbf j} ~\middle|~ \, n^{ 2^{\ell-1} } < \frac{ \max_i\lambda_{i} }{ \min_i\lambda_{i}} \leq n^{ 2^{\ell}} \}. $$
We then have
\begin{lem}\label{lem:sieve} Under assumptions \emph{(\ref{condH:partial}-\ref{cond:piH:UB})}, for any sequence $\varepsilon_n \to 0$ such that $n\varepsilon^2_n \geq n^\tau$ for some $\tau >0$, and for all $c_1>0$, if $R_0, H_0, \sigma_1^2$ are large enough and $\sigma_0^2$ is small enough, then there exists $M_0>0$ such that
\begin{itemize}\setlength{\itemsep}{.05in}
\item[i)] $\Pi(\mathcal{F}_n^c) \lesssim \exp\left(- c_1n\varepsilon_n^2\right)$; \\
\item[ii)] In the case of the partial location-scale prior
$$\sum_{\mathbf j, l} \sqrt{\Pi(\mathcal F_{n, \mathbf j, \ell} )N(\varepsilon_n, \mathcal F_{n, \mathbf j, \ell}, \| \cdot \|_1 ) } e^{-M_0 n \varepsilon_n^2 } = o(1) .$$
\item[iii)] In the case of the hybrid location-scale prior
$$\sum_{\mathbf j} \sqrt{\Pi(\mathcal F_{n, \mathbf j} )N( \varepsilon_n, \mathcal F_{n, \mathbf j}, \| \cdot \|_1 ) } e^{-M_0 n \varepsilon_n^2 } = o(1) .$$
\end{itemize}
\end{lem}
The proof of \lemref{sieve} can be found in \appref{sieve}. Using the two Lemmata \ref{lem:thickness} and \ref{lem:sieve}, we are in a position to prove \thmref{main}.
\begin{proof}[Proof of \thmref{main}] Using the notations of \lemref{thickness}, we define $p = s\vee t/2+1/2$ and
\begin{align*}
\tilde\varepsilon_n := \delta^{\frac{\beta}{\alpha_0 - \alpha_\perp}} \wedge n^{-\frac{\beta}{2\beta+D}} ~~~~\text{and}~~~~
\varepsilon_n := \{\frac{C^{1/2}}{\sqrt{n}} \tilde\varepsilon_n^{-D/2\beta} \log^{t/2}(1/\tilde\varepsilon_n)\} \vee \{\tilde\varepsilon_n \log^s(1/\tilde\varepsilon_n)\}.
\end{align*}
The sequence $\tilde \varepsilon_n$ goes to $0$ and is such that $\tilde \varepsilon_n^{\alpha_0} \leq \delta^\beta \tilde \varepsilon_n^{\alpha_\perp}$ so that \lemref{thickness} applies and
$$
\Pi\left(f_{P} \in \ball(f_0,\varepsilon_n)\right) \geq \Pi\left(f_{P} \in \ball(f_0,\tilde\varepsilon_n \log^{s}(1/\tilde \varepsilon_n)\right) \gtrsim \exp\(- C \tilde\varepsilon_n^{-D/\beta} \log^{t}(1/\tilde\varepsilon_n)\right) \gtrsim \exp(-n\varepsilon_n^2).
$$
We now distinguish three cases:
\begin{itemize}\setlength{\itemsep}{.05in}
\item[1.] If $\delta^{\frac{D}{\alpha_0-\alpha_\perp}} \leq n^{-1} \log^{2p-t} n$, then the result of \thmref{main} is trivial and there is nothing to show (because the contraction rate goes to $\infty$ instead of $0$);
\item[2.] If $\delta^{\frac{D}{\alpha_0-\alpha_\perp}} \geq n^{-\frac{D}{2\beta+D}}$, then $\tilde\varepsilon_n = n^{-\frac{\beta}{2\beta+D}}$ and one easily gets that $\varepsilon_n \simeq n^{-\frac{\beta}{2\beta+D}} \log^p n$. In particular, $n\varepsilon_n^2 \gg n^\tau$ for $\tau = D/(2\beta+D)$ and \lemref{sieve} applies, so that the conclusion of \thmref{main} follows from \cite[Thm 5]{ghosal2007posterior}.
\item[3.] If finally $n^{-1} \log^{2p-t} n < \delta^{\frac{D}{\alpha_0-\alpha_\perp}} < n^{-\frac{D}{2\beta+D}}$, then $\tilde\varepsilon_n = \delta^{\frac{\beta}{\alpha_0 - \alpha_\perp}}$ and $\log(1/\delta) \gtrsim \log n$ so that
$$n \varepsilon_n^2 \geq C \tilde\varepsilon_n^{-D/\beta} \log^{t}(1/\tilde\varepsilon_n) \gtrsim \delta^{-\frac{D}{\alpha_0 - \alpha_\perp}} \log^t(n) \gg n^{\frac{D}{2\beta+D}}.
$$
In particular, \lemref{sieve} applies and the conclusion follows again from \cite[Thm 5]{ghosal2007posterior}, up to the last detail that we need to understand how $\varepsilon_n$ depends on $\delta$. By assumption, there holds $\log(1/\tilde\varepsilon_n) \lesssim \log(1/\delta) \lesssim \log(n)$ and $\tilde\varepsilon_n < n^{-\beta/(2\beta+D)}$, and thus
$$
\varepsilon_n \lesssim \log^p n \times \{\frac{\tilde\varepsilon_n^{-D/2\beta}}{\sqrt{n}} \vee \tilde\varepsilon_n\} = \log^p n \times \frac1{\sqrt{n \delta^{D/(\alpha_0 - \alpha_\perp)}}}.
$$
\end{itemize}
This concludes the proof.
\end{proof}
\section{Discussion} \label{sec:discussion}
With the aim of developing Bayesian procedures in the framework of manifold learning, we exhibited a new family of priors based on location-scale Dirichlet mixtures of Gaussians, and described a general setting for studying densities supported near a submanifold. The latter relies on two things: first, a parametrization of the offset of the manifold and, second, an anisotropic class of H\"older functions. In this model, we obtained concentration rates in \thmref{main} for the associated posterior distribution that are adaptive to the regularity of the underlying density while being totally agnostic to the underlying submanifolds and their main features.
\\
Using mixtures of Gaussians has many advantages. The good behaviour of the normal density leads to an elegant mathematical analysis, which is actually not an insignificant feature in a manifold framework where the computations can be lengthy and technical. Most importantly, the simplicity of Gaussian mixtures makes them easy to implement, while still being rich enough to capture complex geometrical situations, as highlighted in \thmref{approx} and in the numerical simulations displayed in this paper.
\\
However, they also suffer from some limitations. In particular, they do not yield optimal rates when the data are extremely concentrated around the underlying submanifolds (i.e.\ $\delta$ small). Obtaining optimal procedures in the regime of small $\delta$, or even in the singular regime $\delta = 0$, could be the object of future work. One solution could be to resort to more intricate mixtures, using for instance Fisher-Gaussian kernels as in \cite{mukhopadhyay2020estimating}, which could capture finer details of the geometry of the underlying submanifolds in such situations.
\\
Also, because the usual losses such as the Hellinger or the $L_1$ loss are particularly inadequate in a singular framework, it would be nice to obtain concentration rates for weaker metrics such as the Wasserstein metrics, in the spirit of what has been done on the frequentist side in \cite{divol2021reconstructing} or \cite{tang2022minimax}, or in \cite{camerlenghi2022wasserstein} in a Bayesian setting.
\\
Finally, our framework only allows densities which converge to 0 smoothly at the boundary of their support. It is not clear that Gaussian mixtures are well suited to estimating densities which do not vanish near the boundary, as considered for instance in \cite{berry2017density} in the frequentist case, and extending the results obtained here to this setup would be interesting.
\subsection*{Acknowledgements}
We are very grateful to Marc Hoffmann for helpful discussions.
This work has received funding from the European Research Council
(ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 834175).
\bibliographystyle{chicago}
| {
"timestamp": "2022-06-01T02:18:04",
"yymm": "2205",
"arxiv_id": "2205.15717",
"language": "en",
"url": "https://arxiv.org/abs/2205.15717",
"abstract": "We study the Bayesian density estimation of data living in the offset of an unknown submanifold of the Euclidean space. In this perspective, we introduce a new notion of anisotropic Hölder for the underlying density and obtain posterior rates that are minimax optimal and adaptive to the regularity of the density, to the intrinsic dimension of the manifold, and to the size of the offset, provided that the latter is not too small -- while still allowed to go to zero. Our Bayesian procedure, based on location-scale mixtures of Gaussians, appears to be convenient to implement and yields good practical results, even for quite singular data.",
"subjects": "Statistics Theory (math.ST)",
"title": "Estimating a density near an unknown manifold: a Bayesian nonparametric approach",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9719924793940118,
"lm_q2_score": 0.7279754548076478,
"lm_q1q2_score": 0.707586667256469
} |
https://arxiv.org/abs/2009.09829 | Generalized Leverage Score Sampling for Neural Networks | Leverage score sampling is a powerful technique that originates from theoretical computer science, which can be used to speed up a large number of fundamental questions, e.g. linear regression, linear programming, semi-definite programming, cutting plane method, graph sparsification, maximum matching and max-flow. Recently, it has been shown that leverage score sampling helps to accelerate kernel methods [Avron, Kapralov, Musco, Musco, Velingker and Zandieh 17].In this work, we generalize the results in [Avron, Kapralov, Musco, Musco, Velingker and Zandieh 17] to a broader class of kernels. We further bring the leverage score sampling into the field of deep learning theory.$\bullet$ We show the connection between the initialization for neural network training and approximating the neural tangent kernel with random features.$\bullet$ We prove the equivalence between regularized neural network and neural tangent kernel ridge regression under the initialization of both classical random Gaussian and leverage score sampling. |
\section{Conclusion}
In this paper, we generalize the leverage score sampling theory for kernel approximation. We discuss the interesting application of connecting leverage score sampling and training regularized neural networks. We present two theoretical results: 1) the equivalence between the regularized neural nets and kernel ridge regression problems under the classical random Gaussian initialization for both training and test predictors; 2) the new equivalence under the leverage score initialization. We believe this work can be the starting point of future study on the use of leverage score sampling in neural network training.
\paragraph{Roadmap}
In the appendix, we present our complete results and rigorous proofs. Section~\ref{sec:pre} presents some well-known mathematical results that will be used in our proofs. Section~\ref{sec:equiv} discusses our first equivalence result between training a regularized neural network and kernel ridge regression. Section~\ref{sec:ger_lev} discusses our generalization of the leverage score sampling theory. Section~\ref{sec:eq_lev} discusses our second equivalence result under leverage score initialization and potential benefits compared to the Gaussian initialization. Section~\ref{sec:gen_dnn} discusses how to extend our results to a broader class of neural network models.
\section{Equivalence between sufficiently wide neural net and kernel ridge regression}\label{sec:equiv}
In this section, we extend the equivalence result in \cite{jgh18, adhlsw19} to the case with a regularization term, where they showed the equivalence between a fully-trained infinitely wide/sufficiently wide neural net and the kernel regression solution using the neural tangent kernel (NTK). Specifically, we prove Theorem~\ref{thm:main_train_equivalence_intro} and Theorem~\ref{thm:main_test_equivalence_intro} in this section.
Section~\ref{sec:equiv_preli} introduces key notations and standard data assumptions. Section~\ref{sec:equiv_definition} restates and supplements the definitions introduced in the paper. Section~\ref{sec:equiv_gradient_flow} presents several key lemmas about the gradient flow and linear convergence of neural network and kernel ridge regression predictors, which are crucial to the final proof. Section~\ref{sec:equiv_proof_sketch} provides a brief proof sketch. Section~\ref{sec:test} restates the main equivalence Theorem~\ref{thm:main_test_equivalence_intro} and provides a complete proof following the proof sketch. Section~\ref{sec:train} restates and proves Theorem~\ref{thm:main_train_equivalence_intro} by showing it as a by-product of previous proof.
\subsection{Preliminaries}\label{sec:equiv_preli}
Let's define the following notations:
\begin{itemize}
\item $X\in\R^{n\times d}$ be the training data
\item $x_{\test}\in\R^d$ be the test data
\item $u_{\ntk}(t)=\kappa f_{\ntk}(\beta(t),X) = \kappa \Phi(X)\beta(t) \in \R^n$ be the prediction of the kernel ridge regression for the training data at time $t$. (See Definition~\ref{def:krr_ntk})
\item $u^* = \lim_{t \rightarrow \infty} u_{\ntk}(t)$ (See Eq.~\eqref{eq:def_u_*})
\item $u_{\ntk,\test}(t) = \kappa f_{\ntk}(\beta(t),x_{\test}) = \kappa \Phi(x_{\test}) \beta(t) \in \R$ be the prediction of the kernel ridge regression for the test data at time $t$. (See Definition~\ref{def:krr_ntk})
\item $u_{\test}^* = \lim_{t\rightarrow \infty} u_{\ntk,\test}( t ) $ (See Eq.~\eqref{eq:def_u_test_*})
\item $\k_{\ntk}(x, y) = \E [\left\langle \frac{\partial f_{\nn}(W,x)}{\partial W},\frac{\partial f_{\nn}(W,y)}{\partial W} \right\rangle]$ (See Definition~\ref{def:krr_ntk})
\item $\k_t(x_{\test},X) \in \R^n$ be the induced kernel between the training data and test data at time $t$, where
\begin{align*}
[\k_t(x_{\test},X)]_i
= \k_t(x_{\test},x_i)
= \left\langle \frac{\partial f(W(t),x_{\test})}{\partial W(t)},\frac{\partial f(W(t),x_i)}{\partial W(t)} \right\rangle
\end{align*}
(see Definition~\ref{def:dynamic_kernel})
\item $u_{\nn}(t) = \kappa f_{\nn}(W(t),X) \in \R^n$ be the prediction of the neural network for the training data at time $t$. (See Definition~\ref{def:nn})
\item $u_{\nn,\test}(t) = \kappa f_{\nn}(W(t),x_{\test}) \in \R$ be the prediction of the neural network for the test data at time $t$ (See Definition~\ref{def:nn})
\end{itemize}
\begin{assumption}[data assumption]\label{ass:data_assumption}
We make the following assumptions: \\
1. For each $i \in [n]$, we assume $|y_i| = O(1)$.\\
2. $H^{\cts}$ is positive definite, i.e., $\Lambda_0:=\lambda_{\min}(H^{\cts})>0$.\\
3. All the training data and test data have Euclidean norm equal to 1.
\end{assumption}
\subsection{Definitions}\label{sec:equiv_definition}
To establish the equivalence between the neural network and kernel ridge regression, we prove the similarity of their gradient flows and initial predictors. Note that kernel ridge regression starts from the zero initialization, so we want the initialization of the neural network to also be close to zero. Therefore, using the same technique as in~\cite{adhlsw19}, we apply a small multiplier $\kappa>0$ to both predictors to bound the difference at initialization.
\begin{definition}[Neural network function]\label{def:f_nn}
We define a two-layer neural network with rectified linear unit (ReLU) activation of the following form:
\begin{align*}
f_{\nn} (W, a, x) = \frac{1}{\sqrt{m}} \sum_{r=1}^m a_r \sigma (w_r^\top x) \in \R,
\end{align*}
where $x \in \R^d$ is the input, $w_r \in \R^d,~r\in[m]$ is the weight vector of the first layer, $W = [w_1, \cdots, w_m]\in\R^{d \times m}$, $a_r \in \R,~r\in[m]$ is the output weight, $a = [a_1, \cdots, a_m]^\top$ and $\sigma(\cdot)$ is the ReLU activation function: $\sigma(z) = \max\{0,z\}$. In this paper, we consider training only the first layer $W$ while keeping $a$ fixed, so we also write $f_{\nn}(W,x) = f_{\nn}(W, a, x)$. We denote $f_{\nn}(W, X) = [f_{\nn}(W, x_1),\cdots, f_{\nn}(W, x_n)]^\top\in\R^n$.
\end{definition}
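As a purely illustrative aid, the following NumPy sketch implements the two-layer ReLU network of Definition~\ref{def:f_nn}; it is not used in the analysis, and the sizes $d$, $m$ and the random seed are assumptions made only for this example.
\begin{verbatim}
# Minimal sketch of f_nn(W, a, x) = (1/sqrt(m)) * sum_r a_r * relu(w_r^T x).
import numpy as np

def f_nn(W, a, x):
    m = W.shape[1]                       # W has shape (d, m); columns are w_1, ..., w_m
    pre_act = W.T @ x                    # shape (m,): entries w_r^T x
    return (a * np.maximum(pre_act, 0.0)).sum() / np.sqrt(m)

rng = np.random.default_rng(0)
d, m = 5, 1000                           # illustrative sizes
W = rng.standard_normal((d, m))          # w_r ~ N(0, I_d), stored as columns
a = rng.choice([-1.0, 1.0], size=m)      # a_r ~ Unif{-1, +1}
x = rng.standard_normal(d)
x /= np.linalg.norm(x)                   # unit-norm input, as in Assumption ass:data_assumption
print(f_nn(W, a, x))
\end{verbatim}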
\begin{definition}[Training neural network with regularization, restatement of Definition~\ref{def:nn_intro}]\label{def:nn}
Given training data matrix $X\in\R^{n\times d}$ and corresponding label vector $Y\in\R^n$. Let $f_{\nn}$ be defined as in Definition~\ref{def:f_nn}. Let $\kappa\in(0,1)$ be a small multiplier. Let $\lambda\in(0,1)$ be the regularization parameter. We initialize the network as $a_r\overset{i.i.d.}{\sim} \unif[\{-1,1\}]$ and $w_r(0)\overset{i.i.d.}{\sim} \N(0,I_d)$. Then we consider solving the following optimization problem using gradient descent:
\begin{align}\label{eq:nn}
\min_{W} \frac{1}{2}\| Y - \kappa f_{\nn}(W,X) \|_2^2 + \frac{1}{2}\lambda\|W\|_F^2.
\end{align}
We denote $w_r(t),r\in[m]$ as the variable at iteration $t$. We denote
\begin{align}\label{eq:nn_predict_train}
u_{\nn}(t) = \kappa f_{\nn}(W(t),X) = \frac{\kappa}{\sqrt{m}} \sum_{r=1}^m a_r \sigma (w_r(t)^\top X) \in \R^n
\end{align}
as the training data predictor at iteration $t$. Given any test data $x_{\test}\in\R^d$, we denote
\begin{align}\label{eq:nn_predict_test}
u_{\nn,\test}(t) = \kappa f_{\nn}(W(t),x_{\test}) = \frac{\kappa}{\sqrt{m}} \sum_{r=1}^m a_r \sigma (w_r(t)^\top x_{\test}) \in \R
\end{align}
as the test data predictor at iteration $t$.
\end{definition}
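For intuition only, the following sketch discretizes the gradient flow on the regularized objective in Eq.~\eqref{eq:nn} with a plain gradient descent step; the step size \texttt{eta}, the problem sizes, and the helper name \texttt{train\_regularized\_nn} are assumptions made for this illustration, while the analysis below works with the continuous-time flow.
\begin{verbatim}
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def train_regularized_nn(X, Y, m=1024, kappa=0.1, lam=1e-3, eta=0.5, steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.standard_normal((d, m))             # w_r(0) ~ N(0, I_d)
    a = rng.choice([-1.0, 1.0], size=m)         # a_r stays fixed during training
    for _ in range(steps):
        pre = X @ W                             # (n, m), entries w_r^T x_i
        u = kappa * relu(pre) @ a / np.sqrt(m)  # u_nn = kappa * f_nn(W, X)
        resid = Y - u
        # dL/dW = -(kappa/sqrt(m)) * X^T [ (resid a^T) * 1{pre > 0} ] + lam * W
        grad_W = -(kappa / np.sqrt(m)) * X.T @ ((pre > 0) * np.outer(resid, a)) + lam * W
        W -= eta * grad_W
    return W, a

# Hypothetical toy data with unit-norm rows, matching Assumption ass:data_assumption.
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 5)); X /= np.linalg.norm(X, axis=1, keepdims=True)
Y = rng.choice([-1.0, 1.0], size=20)
W, a = train_regularized_nn(X, Y)
\end{verbatim}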
\begin{definition}[Neural tangent kernel and feature function]\label{def:ntk_phi}
We define the neural tangent kernel (NTK) and the feature function corresponding to the neural network $f_{\nn}$ defined in Definition~\ref{def:f_nn} as follows:
\begin{align*}
\k_{\ntk}(x, z) = \E \left[\left\langle \frac{\partial f_{\nn}(W,x)}{\partial W},\frac{\partial f_{\nn}(W,z)}{\partial W} \right\rangle \right]
\end{align*}
where $x,z \in \R^d$ are any input data, and the expectation is taken over $w_r\overset{i.i.d.}{\sim} \N(0,I),~r=1, \cdots, m$. Given the training data matrix $X=[x_1,\cdots,x_n]^\top\in\R^{n\times d}$, we define the kernel matrix $H^{\cts}\in\R^{n\times n}$ between training data as
\begin{align*}
[H^{\cts}]_{i,j} = \k_{\ntk}(x_i, x_j) \in \R.
\end{align*}
We denote the smallest eigenvalue of $H^{\cts}$ as $\Lambda_0 > 0$, where we assume $H^{\cts}$ is positive definite. Further, given any data $z \in \R^d$, we write the kernel between test and training data $\k_{\ntk}(z, X) \in \R^n$ as
\begin{align*}
\k_{\ntk}(z, X) = [\k_{\ntk}(z, x_1),\cdots,\k_{\ntk}(z,x_n)]^\top \in \R^n.
\end{align*}
We denote the feature function corresponding to the kernel $\k_{\ntk}$ defined above by $\Phi:\R^{d} \rightarrow \mathcal{F}$, which satisfies
\begin{align*}
\langle\Phi(x),\Phi(z)\rangle_\mathcal{F} = \k_{\ntk}(x,z),
\end{align*}
for any data $x$, $z\in\R^d$. We write $\Phi(X)=[\Phi(x_1),\cdots,\Phi(x_n)]^\top$.
\end{definition}
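Since for the network in Definition~\ref{def:f_nn} (with only $W$ trained) the per-neuron gradient is $\frac{1}{\sqrt{m}} a_r \mathbf{1}[w_r^\top x > 0]\, x$, the expectation defining $\k_{\ntk}$ reduces to $x^\top z \cdot \Pr_{w \sim \N(0,I_d)}[w^\top x > 0, w^\top z > 0]$. The following NumPy sketch estimates $\k_{\ntk}$ and $H^{\cts}$ by Monte Carlo; the sample count and seed are assumptions chosen only for illustration.
\begin{verbatim}
import numpy as np

def ntk_mc(x, z, num_samples=200_000, seed=0):
    """Monte Carlo estimate of k_ntk(x, z) = x^T z * Pr[w^T x > 0, w^T z > 0]."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((num_samples, x.shape[0]))   # rows are i.i.d. w ~ N(0, I_d)
    both_active = (W @ x > 0) & (W @ z > 0)
    return float(x @ z) * both_active.mean()

def ntk_gram_mc(X, num_samples=200_000, seed=0):
    """Monte Carlo estimate of H^cts for training data X (rows are the x_i)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((num_samples, X.shape[1]))
    A = (W @ X.T > 0).astype(float)                      # A[s, i] = 1{w_s^T x_i > 0}
    return (X @ X.T) * (A.T @ A) / num_samples
\end{verbatim}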
\begin{definition}[Neural tangent kernel ridge regression]\label{def:krr_ntk}
Given training data matrix $X=[x_1,\cdots,x_n]^\top$ $\in \R^{n\times d}$ and corresponding label vector $Y\in\R^n$. Let $\k_{\ntk}$, $H^{\cts}\in\R^{n\times n}$ and $\Phi$ be the neural tangent kernel and corresponding feature functions defined as in Definition~\ref{def:ntk_phi}. Let $\kappa\in(0,1)$ be a small multiplier. Let $\lambda\in(0,1)$ be the regularization parameter. Then we consider the following neural tangent kernel ridge regression problem:
\begin{align}\label{eq:krr}
\min_{\beta} \frac{1}{2}\| Y - \kappa f_{\ntk}(\beta,X) \|_2^2 + \frac{1}{2}\lambda\|\beta\|_2^2.
\end{align}
where $f_{\ntk}(\beta,x) = \Phi(x)^\top \beta \in \R$ denotes the prediction function in the corresponding RKHS and $f_{\ntk}(\beta,X) = [f_{\ntk}(\beta,x_1),\cdots,f_{\ntk}(\beta,x_n)]^\top\in\R^{n}$. Consider the gradient flow for solving problem~\eqref{eq:krr} with initialization $\beta(0) = 0$.
We denote $\beta(t)$ as the variable at iteration $t$. We denote
\begin{align}\label{eq:ntk_predict_train}
u_{\ntk}(t) = \kappa\Phi(X)\beta(t) \in \R^n
\end{align}
as the training data predictor at iteration $t$. Given any test data $x_{\test}\in\R^d$, we denote
\begin{align}\label{eq:ntk_predict_test}
u_{\ntk,\test}(t) = \kappa\Phi(x_{\test})^\top\beta(t) \in \R
\end{align}
as the test data predictor at iteration $t$. Note that the gradient flow converges to the optimal solution of problem~\eqref{eq:krr} due to the strong convexity of the problem. We denote
\begin{align}\label{eq:def_beta_*}
\beta^* = \lim_{t\to\infty} \beta(t) = \kappa (\kappa^2 \Phi(X)^\top \Phi(X) + \lambda I)^{-1} \Phi(X)^\top Y
\end{align}
and the optimal training data predictor
\begin{align}\label{eq:def_u_*}
u^* = \lim_{t\to\infty} u_{\ntk}(t) = \kappa \Phi(X)\beta^* = \kappa^2 H^{\cts}(\kappa^2 H^{\cts}+\lambda I)^{-1}Y \in \R^n
\end{align}
and the optimal test data predictor
\begin{align}\label{eq:def_u_test_*}
u_{\test}^* = \lim_{t\to\infty} u_{\ntk,\test}(t) = \kappa \Phi(x_{\test})^\top \beta^* = \kappa^2 \k_{\ntk}(x_{\test}, X)^\top(\kappa^2 H^{\cts}+\lambda I)^{-1}Y \in \R.
\end{align}
\end{definition}
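The limiting predictors admit the closed forms above. As a small illustrative sketch (assuming the kernel quantities $H^{\cts}$ and $\k_{\ntk}(x_{\test},X)$ have already been computed, e.g.\ by the Monte Carlo sketch following Definition~\ref{def:ntk_phi}), they can be evaluated directly:
\begin{verbatim}
import numpy as np

def krr_limits(H_cts, k_test, Y, kappa, lam):
    """Return (u_star, u_test_star) from Eq. (eq:def_u_*) and Eq. (eq:def_u_test_*)."""
    n = H_cts.shape[0]
    solve = np.linalg.solve(kappa**2 * H_cts + lam * np.eye(n), Y)
    u_star = kappa**2 * H_cts @ solve        # kappa^2 H (kappa^2 H + lam I)^{-1} Y
    u_test_star = kappa**2 * k_test @ solve  # kappa^2 k(x_test, X)^T (kappa^2 H + lam I)^{-1} Y
    return u_star, u_test_star
\end{verbatim}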
\begin{definition}[Dynamic kernel]\label{def:dynamic_kernel}
Given $W(t) \in \R^{d \times m}$ as the parameters of the neural network at training time $t$ as defined in Definition~\ref{def:nn}. For any data $x,z\in\R^d$, we define $\k_t(x,z)\in\R$ as
\begin{align*}
\k_t(x,z)
= \left\langle \frac{\d f_{\nn}(W(t),x)}{\d W(t)},\frac{\d f_{\nn}(W(t),z)}{\d W(t)} \right\rangle
\end{align*}
Given training data matrix $X=[x_1,\cdots,x_n]^\top\in\R^{n\times d}$, we define $H(t)\in\R^{n\times n}$ as
\begin{align*}
[H(t)]_{i,j} = \k_{t}(x_i, x_j)\in\R.
\end{align*}
Further, given a test data $x_{\test}\in\R^d$, we define $\k_t(x_{\test},X)\in\R^n$ as
\begin{align*}
\k_t(x_{\test},X) = [\k_t(x_{\test},x_1), \cdots, \k_t(x_{\test},x_n)]^\top\in\R^n.
\end{align*}
\end{definition}
\subsection{Gradient, gradient flow, and linear convergence}\label{sec:equiv_gradient_flow}
\begin{lemma}[Gradient flow of kernel ridge regression]\label{lem:gradient_flow_of_krr}
Given training data matrix $X\in\R^{n\times d}$ and corresponding label vector $Y\in\R^n$. Let $f_{\ntk}$ be defined as in Definition~\ref{def:krr_ntk}. Let $\beta(t)$, $\kappa\in(0,1)$ and $u_{\ntk}(t)\in\R^n$ be defined as in Definition~\ref{def:krr_ntk}. Let $\k_{\ntk}: \R^d \times \R^{n\times d} \to\R^n$ be defined as in Definition~\ref{def:ntk_phi}. Then for any data $z\in\R^d$, we have
\begin{align*}
\frac{\d f_{\ntk}(\beta(t), z)}{\d t} = \kappa \cdot \k_{\ntk}(z, X)^\top ( Y - u_{\ntk}(t) ) - \lambda \cdot f_{\ntk}(\beta(t), z).
\end{align*}
\end{lemma}
\begin{proof}
Denote $L(t)= \frac{1}{2}\|Y-u_{\ntk}(t)\|_2^2+\frac{1}{2}\lambda\|\beta(t)\|_2^2$. By the gradient flow dynamics, we have
\begin{align*}
\frac{\d \beta(t)}{\d t}=-\frac{\d L}{\d \beta}=\kappa \Phi(X)^\top(Y-u_{\ntk}(t))-\lambda\beta(t),
\end{align*}
where $\Phi$ is defined in Definition~\ref{def:ntk_phi}.
Thus we have
\begin{align*}
\frac{\d f_{\ntk}(\beta(t), z)}{\d t}
= & ~ \frac{\d f_{\ntk}(\beta(t), z)}{\d \beta(t)}\frac{\d \beta(t)}{\d t} \\
= & ~ \Phi(z)^\top (\kappa\Phi(X)^\top(Y-u_{\ntk}(t))-\lambda\beta(t)) \\
= & ~ \kappa\k_{\ntk}(z, X)^\top (Y-u_{\ntk}(t))-\lambda\Phi(z)^\top\beta(t) \\
= & ~ \kappa\k_{\ntk}(z, X)^\top (Y-u_{\ntk}(t))-\lambda f_{\ntk}(\beta(t), z),
\end{align*}
where the first step is due to the chain rule, the second step follows from the fact that $ \d f_{\ntk}(\beta, z)/ \d \beta=\Phi(z)$, the third step is due to the definition of the kernel $\k_{\ntk}(z, X)=\Phi(X)\Phi(z) \in \R^{n}$, and the last step is due to the definition of $f_{\ntk}(\beta(t), z)\in\R$.
\end{proof}
\begin{corollary}[Gradient of prediction of kernel ridge regression]\label{cor:ntk_gradient}
Given training data matrix $X=[x_1,\cdots,x_n]^\top \in \R^{n\times d}$ and corresponding label vector $Y \in \R^n$. Given a test data $x_{\test}\in\R^d$. Let $f_{\ntk}$ be defined as in Definition~\ref{def:krr_ntk}. Let $\beta(t)$, $\kappa\in(0,1)$ and $u_{\ntk}(t)\in\R^n$ be defined as in Definition~\ref{def:krr_ntk}. Let $\k_{\ntk}: \R^d \times \R^{n\times d} \rightarrow \R^n,~H^{\cts}\in\R^{n\times n}$ be defined as in Definition~\ref{def:ntk_phi}. Then we have
\begin{align*}
\frac{\d u_{\ntk}(t)}{\d t} & = \kappa^2 H^{\cts} ( Y - u_{\ntk}(t) ) - \lambda \cdot u_{\ntk}(t)\\
\frac{\d u_{\ntk, \test}(t)}{\d t} & = \kappa^2 \k_{\ntk}(x_{\test}, X)^\top ( Y - u_{\ntk}(t) ) - \lambda \cdot u_{\ntk, \test}(t).
\end{align*}
\end{corollary}
\begin{proof}
Plugging in $z = x_i\in\R^d$ in Lemma~\ref{lem:gradient_flow_of_krr}, we have
\begin{align*}
\frac{\d f_{\ntk}(\beta(t), x_i)}{\d t} = \kappa \k_{\ntk}(x_i, X)^\top ( Y - u_{\ntk}(t) ) - \lambda \cdot f_{\ntk}(\beta(t), x_i).
\end{align*}
Note $[u_{\ntk}(t)]_i = \kappa f_{\ntk}(\beta(t), x_i)$ and $[H^{\cts}]_{:,i} = \k_{\ntk}(x_i, X)$, so writing all the data in a compact form, we have
\begin{align*}
\frac{\d u_{\ntk}(t)}{\d t} = \kappa^2 H^{\cts} ( Y - u_{\ntk}(t) ) - \lambda \cdot u_{\ntk}(t).
\end{align*}
Plugging in data $z = x_{\test}\in\R^d$ in Lemma~\ref{lem:gradient_flow_of_krr}, we have
\begin{align*}
\frac{\d f_{\ntk}(\beta(t), x_{\test})}{\d t} = \kappa \k_{\ntk}(x_{\test}, X)^\top ( Y - u_{\ntk}(t) ) - \lambda \cdot f_{\ntk}(\beta(t), x_{\test}).
\end{align*}
Note by definition, $u_{\ntk,\test}(t) = \kappa f_{\ntk}(\beta(t), x_{\test}) \in \R$, so we have
\begin{align*}
\frac{\d u_{\ntk, \test}(t)}{\d t} = \kappa^2 \k_{\ntk}(x_{\test}, X)^\top ( Y - u_{\ntk}(t) ) - \lambda \cdot u_{\ntk, \test}(t).
\end{align*}
\end{proof}
\begin{lemma}[Linear convergence of kernel ridge regression]\label{lem:linear_converge_krr}
Given training data matrix $X=[x_1,\cdots,x_n]^\top \in \R^{n\times d}$ and corresponding label vector $Y\in\R^n$. Let $\kappa\in(0,1)$ and $u_{\ntk}(t) \in \R^n$ be defined as in Definition~\ref{def:krr_ntk}. Let $u^* \in \R^n$ be defined in Definition~\ref{def:krr_ntk}. Let $\Lambda_0 > 0$ be defined as in Definition~\ref{def:ntk_phi}. Let $\lambda > 0$ be the regularization parameter. Then we have
\begin{align*}
\frac{\d \|u_{\ntk}(t)-u^*\|_2^2}{\d t} \le - 2(\kappa^2 \Lambda_0+\lambda) \|u_{\ntk}(t)-u^*\|_2^2.
\end{align*}
Further, we have
\begin{align*}
\|u_{\ntk}(t)-u^*\|_2 \leq e^{-(\kappa^2 \Lambda_0+\lambda)t} \|u_{\ntk}(0)-u^*\|_2.
\end{align*}
\end{lemma}
\begin{proof}
Let $H^{\cts} \in \R^{n \times n}$ be defined as in Definition~\ref{def:ntk_phi}. Then
\begin{align}\label{eq:322_1}
\kappa^2 H^{\cts}(Y-u^*)
= & ~ \kappa^2 H^{\cts}(Y-\kappa^2 H^{\cts}(\kappa^2 H^{\cts}+\lambda I_n)^{-1}Y) \notag \\
= & ~ \kappa^2 H^{\cts}(I_n-\kappa^2 H^{\cts}(\kappa^2 H^{\cts}+\lambda I)^{-1})Y \notag \\
= & ~ \kappa^2 H^{\cts}(\kappa^2 H^{\cts}+\lambda I_n - \kappa^2 H^{\cts})(\kappa^2 H^{\cts}+\lambda I_n)^{-1} Y \notag \\
= & ~ \kappa^2 \lambda H^{\cts}(\kappa^2 H^{\cts}+\lambda I_n)^{-1} Y \notag \\
= & ~ \lambda u^*,
\end{align}
where the first step follows from the definition of $u^* \in \R^n$, the second to fourth steps simplify the formula, and the last step uses the definition of $u^* \in \R^n$ again.
So we have
\begin{align}\label{eq:322_2}
\frac{\d \|u_{\ntk}(t)-u^*\|_2^2}{\d t}
= & ~ 2(u_{\ntk}(t)-u^*)^\top \frac{\d u_{\ntk}(t)}{\d t} \notag\\
= & ~ -2\kappa^2 (u_{\ntk}(t)-u^*)^\top H^{\cts} (u_{\ntk}(t) - Y) -2\lambda(u_{\ntk}(t)-u^*)^\top u_{\ntk}(t) \notag\\
= & ~ -2\kappa^2 (u_{\ntk}(t)-u^*)^\top H^{\cts} (u_{\ntk}(t) - u^*) + 2\kappa^2 (u_{\ntk}(t)-u^*)^\top H^{\cts} (Y-u^*) \notag\\
& ~ -2\lambda(u_{\ntk}(t)-u^*)^\top u_{\ntk}(t)\notag\\
= & ~ -2\kappa^2 (u_{\ntk}(t)-u^*)^\top H^{\cts} (u_{\ntk}(t) - u^*) + 2\lambda(u_{\ntk}(t)-u^*)^\top u^* \notag\\
& ~ -2\lambda(u_{\ntk}(t)-u^*)^\top u_{\ntk}(t) \notag\\
= & ~ -2(u_{\ntk}(t)-u^*)^\top (\kappa^2 H^{\cts}+\lambda I) (u_{\ntk}(t) - u^*) \notag\\
\leq & ~ -2(\kappa^2 \Lambda_0 + \lambda)\|u_{\ntk}(t)-u^*\|_2^2,
\end{align}
where the first step follows the chain rule, the second step follows Corollary~\ref{cor:ntk_gradient}, the third step uses basic linear algebra, the fourth step follows Eq.~\eqref{eq:322_1}, the fifth step simplifies the expression, and the last step follows the definition of $\Lambda_0$.
Further, since
\begin{align*}
& ~ \frac{\d (e^{2(\kappa^2 \Lambda_0+\lambda)t}\|u_{\ntk}(t)-u^*\|_2^2)}{\d t} \\
= & ~ 2(\kappa^2 \Lambda_0+\lambda)e^{2(\kappa^2 \Lambda_0+\lambda)t}\|u_{\ntk}(t)-u^*\|_2^2 + e^{2(\kappa^2 \Lambda_0+\lambda)t}\cdot\frac{\d \|u_{\ntk}(t)-u^*\|_2^2}{\d t} \\
\leq & ~ 0,
\end{align*}
where the first step calculates the gradient, and the second step follows from Eq.~\eqref{eq:322_2}. Thus, $e^{2(\kappa^2 \Lambda_0+\lambda)t}\|u_{\ntk}(t)-u^*\|_2^2$ is non-increasing, which implies
\begin{align*}
\|u_{\ntk}(t)-u^*\|_2 \leq e^{-(\kappa^2 \Lambda_0+\lambda)t} \|u_{\ntk}(0)-u^*\|_2.
\end{align*}
\end{proof}
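As a numerical sanity check (not part of the proof), one can integrate the predictor dynamics from Corollary~\ref{cor:ntk_gradient} with forward Euler and verify the decay rate of Lemma~\ref{lem:linear_converge_krr}; the problem size, the stand-in kernel matrix, and the step size below are assumptions made only for this illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, kappa, lam = 10, 0.5, 0.1
M = rng.standard_normal((n, n))
H = M @ M.T / n + 0.1 * np.eye(n)          # positive definite stand-in for H^cts
Y = rng.standard_normal(n)
Lambda0 = np.linalg.eigvalsh(H)[0]          # smallest eigenvalue of H
u_star = kappa**2 * H @ np.linalg.solve(kappa**2 * H + lam * np.eye(n), Y)

dt, T = 1e-3, 20.0
u = np.zeros(n)                             # u_ntk(0) = 0 since beta(0) = 0
for step in range(int(T / dt)):
    u += dt * (kappa**2 * H @ (Y - u) - lam * u)   # du/dt = kappa^2 H (Y - u) - lam u
    t = (step + 1) * dt
    bound = np.exp(-(kappa**2 * Lambda0 + lam) * t) * np.linalg.norm(u_star)
    assert np.linalg.norm(u - u_star) <= bound + 1e-9
print("final distance to u*:", np.linalg.norm(u - u_star))
\end{verbatim}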
\begin{lemma}[Gradient flow of neural network training]\label{lem:gradient_flow_of_nn}
Given training data matrix $X \in \R^{n\times d}$ and corresponding label vector $Y\in\R^n$. Let $f_{\nn}: \R^{d\times m} \times \R^{d} \rightarrow \R$ be defined as in Definition~\ref{def:f_nn}. Let $W(t) \in \R^{d \times m}$, $\kappa\in(0,1)$ and $u_{\nn}(t)\in\R^n$ be defined as in Definition~\ref{def:nn}. Let $\k_{t}: \R^d \times \R^{n\times d} \rightarrow \R^n$ be defined as in Definition~\ref{def:dynamic_kernel}. Then for any data $z \in \R^d$, we have
\begin{align*}
\frac{\d f_{\nn}(W(t),z)}{\d t} = \kappa \k_{t}(z,X)^\top ( Y - u_{\nn}(t) )- \lambda \cdot f_{\nn}(W(t),z).
\end{align*}
\end{lemma}
\begin{proof}
Denote $L(t)=\frac{1}{2}\|Y-u_{\nn}(t)\|_2^2+\frac{1}{2}\lambda\|W(t)\|_F^2$. By the gradient flow dynamics, we have
\begin{align}\label{eq:323_1}
\frac{\d w_r}{\d t} = -\frac{\partial L}{\partial w_r}=(\frac{\partial u_{\nn}}{\partial w_r})^\top(Y-u_{\nn})-\lambda w_r.
\end{align}
Also note for ReLU activation $\sigma$, we have
\begin{align}\label{eq:323_2}
\Big\langle \frac{\d f_{\nn}(W(t),z)}{\d W(t)},\lambda W(t) \Big\rangle = & ~ \sum_{r=1}^m \Big(\frac{1}{\sqrt{m}}a_r z \sigma'(w_r(t)^\top z)\Big)^\top (\lambda w_r(t)) \notag \\
= & ~ \frac{\lambda}{\sqrt{m}}\sum_{r=1}^m a_r w_r(t)^\top z \sigma'(w_r(t)^\top z) \notag \\
= & ~ \frac{\lambda}{\sqrt{m}}\sum_{r=1}^m a_r \sigma(w_r(t)^\top z) \notag \\
= & ~ \lambda f_{\nn}(W(t),z),
\end{align}
where the first step calculates the derivatives, the second step follows basic linear algebra, the third step follows the property of ReLU activation: $\sigma(l) = l\sigma'(l)$, and the last step follows from the definition of $f_{\nn}$.
Thus, we have
\begin{align*}
& ~ \frac{\d f_{\nn}(W(t),z)}{\d t} \\
= & ~ \Big\langle \frac{\d f_{\nn}(W(t),z)}{\d W(t)}, \frac{\d W(t)}{\d t} \Big\rangle \notag \\
= & ~ \sum_{j=1}^{n}(y_j - \kappa f_{\nn}(W(t),x_j)) \Big\langle \frac{\d f_{\nn}(W(t),z)}{\d W(t)},\frac{\d \kappa f_{\nn}(W(t),x_j)}{\d W(t)} \Big\rangle - \Big\langle \frac{\d f_{\nn}(W(t),z)}{\d W(t)},\lambda W(t) \Big\rangle \notag\\
= & ~ \kappa \sum_{j=1}^{n}(y_j- \kappa f_{\nn}(W(t),x_j)) \k_{t}(z,x_j)-\lambda \cdot f_{\nn}(W(t),z)\notag\\
= & ~ \kappa \k_{t}(z,X)^\top ( Y - u_{\nn}(t) )- \lambda \cdot f_{\nn}(W(t),z),
\end{align*}
where the first step follows from chain rule, the second step follows from Eq.~\eqref{eq:323_1}, the third step follows from the definition of $\k_{t}$ and Eq.~\eqref{eq:323_2}, and the last step rewrites the formula in a compact form.
\end{proof}
\begin{corollary}[Gradient of prediction of neural network]\label{cor:nn_gradient}
Given training data matrix $X = [x_1,\cdots,x_n]^\top\in\R^{n\times d}$ and corresponding label vector $Y \in \R^n$. Given a test data $x_{\test} \in \R^d$. Let $f_{\nn}: \R^{d\times m} \times \R^d \rightarrow \R$ be defined as in Definition~\ref{def:f_nn}. Let $W(t) \in \R^{d\times m}$, $\kappa\in(0,1)$ and $u_{\nn}(t) \in \R^n$ be defined as in Definition~\ref{def:nn}. Let $\k_{t} : \R^d \times \R^{n \times d} \rightarrow \R^n,~H(t) \in \R^{n \times n}$ be defined as in Definition~\ref{def:dynamic_kernel}. Then we have
\begin{align*}
\frac{\d u_{\nn}(t)}{\d t} = & ~ \kappa^2 H(t) ( Y - u_{\nn}(t) ) - \lambda \cdot u_{\nn}(t)\\
\frac{\d u_{\nn,\test}(t)}{\d t} = & ~ \kappa^2 \k_{t}(x_{\test}, X)^\top ( Y - u_{\nn}(t) ) - \lambda \cdot u_{\nn,\test}(t).
\end{align*}
\end{corollary}
\begin{proof}
Plugging in $z = x_i\in\R^d$ in Lemma~\ref{lem:gradient_flow_of_nn}, we have
\begin{align*}
\frac{\d f_{\nn}(W(t), x_i)}{\d t} = \kappa \k_{t}(x_i, X)^\top ( Y - u_{\nn}(t) ) - \lambda \cdot f_{\nn}(W(t), x_i).
\end{align*}
Note $[u_{\nn}(t)]_i = \kappa f_{\nn}(W(t), x_i)$ and $[H(t)]_{:,i} = \k_{t}(x_i, X)$, so writing all the data in a compact form, we have
\begin{align*}
\frac{\d u_{\nn}(t)}{\d t} = \kappa^2 H(t) ( Y - u_{\nn}(t) ) - \lambda \cdot u_{\nn}(t).
\end{align*}
Plugging in data $z = x_{\test}\in\R^d$ in Lemma~\ref{lem:gradient_flow_of_nn}, we have
\begin{align*}
\frac{\d f_{\nn}(W(t), x_{\test})}{\d t} = \kappa \k_{t}(x_{\test}, X)^\top ( Y - u_{\nn}(t) ) - \lambda \cdot f_{\nn}(W(t), x_{\test}).
\end{align*}
Note by definition, $u_{\nn,\test}(t) = \kappa f_{\nn}(W(t), x_{\test}) $, so we have
\begin{align*}
\frac{\d u_{\nn, \test}(t)}{\d t} = \kappa^2 \k_{t}(x_{\test}, X)^\top ( Y - u_{\nn}(t) ) - \lambda \cdot u_{\nn, \test}(t).
\end{align*}
\end{proof}
\begin{lemma}[Linear convergence of neural network training]\label{lem:linear_converge_nn}
Given training data matrix $X=[x_1,\cdots,x_n]^\top\in\R^{n\times d}$ and corresponding label vector $Y \in \R^n$. Fix the total number of iterations $T>0$. Let $\kappa\in(0,1)$ and $u_{\nn}(t) \in \R^{n}$ be defined as in Definition~\ref{def:nn}. Let $u^* \in \R^n$ be defined in Eq.~\eqref{eq:def_u_*}. Let $H^{\cts} \in \R^{n \times n}$ and $\Lambda_0 > 0$ be defined as in Definition~\ref{def:ntk_phi}. Let $H(t) \in \R^{n \times n}$ be defined as in Definition~\ref{def:dynamic_kernel}. Let $\lambda > 0$ be the regularization parameter. Assume $\|H(t) - H^{\cts}\| \leq \Lambda_0/2$ holds for all $t\in[0,T]$. Then we have
\begin{align*}
\frac{\d \|u_{\nn}(t)-u^*\|_2^2}{\d t} \le - ( \kappa^2 \Lambda_0+\lambda) \|u_{\nn}(t)-u^*\|_2^2+ 2 \kappa^2 \| H(t) - H^{\cts} \| \cdot \| u_{\nn}(t) - u^* \|_2 \cdot \| Y - u^* \|_2.
\end{align*}
\end{lemma}
\begin{proof}
Note same as in Lemma~\ref{lem:linear_converge_krr}, we have
\begin{align}\label{eq:325_1}
\kappa^2 H^{\cts}(Y-u^*)
= & ~ \kappa^2 H^{\cts}(Y-\kappa^2 H^{\cts}(\kappa^2 H^{\cts}+\lambda I_n)^{-1}Y) \notag \\
= & ~ \kappa^2 H^{\cts}(I_n-\kappa^2 H^{\cts}(\kappa^2 H^{\cts}+\lambda I)^{-1})Y \notag \\
= & ~ \kappa^2 H^{\cts}(\kappa^2 H^{\cts}+\lambda I_n - \kappa^2 H^{\cts})(\kappa^2 H^{\cts}+\lambda I_n)^{-1} Y \notag \\
= & ~ \kappa^2 \lambda H^{\cts}(\kappa^2 H^{\cts}+\lambda I_n)^{-1} Y \notag \\
= & ~ \lambda u^*,
\end{align}
where the first step follows from the definition of $u^* \in \R^n$, the second to fourth steps simplify the formula, and the last step uses the definition of $u^* \in \R^n$ again.
Thus, we have
\begin{align*}
& ~ \frac{\d \|u_{\nn}(t)-u^*\|_2^2}{\d t} \\
= & ~ 2(u_{\nn}(t)-u^*)^\top \frac{\d u_{\nn}(t)}{\d t}\\
= & ~ -2 \kappa^2 (u_{\nn}(t)-u^*)^\top H(t) (u_{\nn}(t) - Y) -2\lambda(u_{\nn}(t)-u^*)^\top u_{\nn}(t)\\
= & ~ -2 \kappa^2 (u_{\nn}(t)-u^*)^\top H(t) (u_{\nn}(t) - u^*) + 2 \kappa^2 (u_{\nn}(t)-u^*)^\top H^{\cts} (Y-u^*)\\
& ~ + 2 \kappa^2 (u_{\nn}(t)-u^*)^\top (H(t) - H^{\cts}) (Y-u^*) -2\lambda(u_{\nn}(t)-u^*)^\top u_{\nn}(t)\\
= & ~ -2 \kappa^2 (u_{\nn}(t)-u^*)^\top H(t) (u_{\nn}(t) - u^*) + 2\lambda(u_{\nn}(t)-u^*)^\top u^*\\
& ~ +2 \kappa^2 (u_{\nn}(t)-u^*)^\top (H(t) - H^{\cts}) (Y-u^*) -2\lambda(u_{\nn}(t)-u^*)^\top u_{\nn}(t)\\
= & ~ -2(u_{\nn}(t)-u^*)^\top ( \kappa^2 H(t)+\lambda I) (u_{\nn}(t) - u^*) +2 \kappa^2 (u_{\nn}(t)-u^*)^\top (H(t) - H^{\cts}) (Y-u^*)\\
\leq & ~ - ( \kappa^2 \Lambda_0 + \lambda) \| u_{\nn}(t) - u^* \|_2^2 + 2 \kappa^2 \| H(t) - H^{\cts} \| \| u_{\nn}(t) - u^* \|_2 \| Y - u^* \|_2
\end{align*}
where the first step follows from the chain rule, the second step follows from Corollary~\ref{cor:nn_gradient}, the third step uses basic linear algebra, the fourth step follows from Eq.~\eqref{eq:325_1}, the fifth step simplifies the expression, and the last step follows from the assumption $\|H(t) - H^{\cts}\| \leq \Lambda_0/2$ together with the Cauchy-Schwarz inequality.
\end{proof}
\subsection{Proof sketch}\label{sec:equiv_proof_sketch}
Our goal is to show that, with an appropriate network width and a sufficient number of training iterations, the neural network predictor will be sufficiently close to the neural tangent kernel ridge regression predictor for any test data. We follow a proof framework similar to that of Theorem 3.2 in \cite{adhlsw19}. Given any accuracy $\epsilon\in(0,1)$, we divide the proof into the following steps:
\begin{enumerate}
\item First, according to the linear convergence property of kernel ridge regression shown in Lemma~\ref{lem:linear_converge_krr}, we can choose a sufficiently large number of training iterations $T>0$ so that $|u_{\test}^* - u_{\ntk,\test}(T) | \leq \epsilon/2$, as shown in Lemma~\ref{lem:u_ntk_test_T_minus_u_test_*}.
\item Once the number of training iterations $T$ is fixed as in step 1, we bound $|u_{\nn,\test}(T) - u_{\ntk,\test}(T)| \leq \epsilon/2$ by showing the following:
\begin{enumerate}
\item Due to the similarity of the gradient flows of neural network training and neural tangent kernel ridge regression, we can reduce the task of bounding the prediction perturbation at time $T$, i.e., $|u_{\nn,\test}(T) - u_{\ntk,\test}(T)|$, to bounding
\begin{enumerate}
\item the initialization perturbation $|u_{\nn,\test}(0) - u_{\ntk,\test}(0)| $ and
\item kernel perturbation $ \|H(t)-H^{\cts}\|$,~$\|\k_{\ntk}(x_{\test}, X) - \k_{t}(x_{\test}, X)\|_2$, as shown in Lemma~\ref{lem:more_concreate_bound}.
\end{enumerate}
\item According to concentration results, we can make the initialization perturbation $ |u_{\nn,\test}(0) - u_{\ntk,\test}(0) | $ small enough by choosing a sufficiently small $\kappa\in(0,1)$, as shown in Lemma~\ref{lem:epsilon_init}.
\item We characterize the over-parametrization property of the neural network by inductively showing that we can make the kernel perturbations $ \|H(t)-H^{\cts}\|$ and $\|\k_{\ntk}(x_{\test}, X) - \k_{t}(x_{\test}, X)\|_2$ small enough by choosing the network width $m>0$ large enough, as shown in Lemma~\ref{lem:induction}.
\end{enumerate}
\item Lastly, we combine the results of steps 1 and 2 using the triangle inequality to show the equivalence between training a neural network with regularization and neural tangent kernel ridge regression, i.e., $| u_{\nn,\test}(T) - u_{\test}^* | \leq \epsilon$, as shown in Theorem~\ref{thm:main_test_equivalence}.
\end{enumerate}
\subsection{Equivalence between training net with regularization and kernel ridge regression for test data prediction}\label{sec:test}
In this section, we prove Theorem~\ref{thm:main_test_equivalence_intro} following the proof sketch in Section~\ref{sec:equiv_proof_sketch}.
\subsubsection{Upper bounding $| u_{\ntk,\test}(T) - u_{\test}^* |$}\label{sec:equiv_bound_ntk_test_T_and_test_*}
In this section, we give an upper bound for $| u_{\ntk,\test}(T) - u_{\test}^* |$.
\begin{lemma}\label{lem:u_ntk_test_T_minus_u_test_*}
Let $u_{\ntk,\test}(T) \in \R$ and $u_{\test}^* \in \R$ be defined as Definition~\ref{def:krr_ntk}.
Given any accuracy $\epsilon>0$, if $\kappa\in(0,1)$, then by picking $T = \wt{O}(\frac{1}{\kappa^2 \Lambda_0})$, we have
\begin{align*}
| u_{\ntk,\test}(T) - u_{\test}^* | \leq \epsilon/2.
\end{align*}
Here $\wt{O}(\cdot)$ hides $\poly\log( n/(\epsilon \Lambda_0) )$ factors.
\end{lemma}
\begin{proof}
The kernel ridge regression iterate $\beta(t)$ also converges linearly: since $\beta(0)=0$, the gradient flow keeps $\beta(t)$ (and $\beta^*$) in the row space of $\Phi(X)$, on which $\kappa^2 \Phi(X)^\top \Phi(X)$ has eigenvalues at least $\kappa^2\Lambda_0$, so
\begin{align*}
\frac{ \d \| \beta(t) - \beta^* \|_2^2 }{ \d t } \leq - 2(\kappa^2 \Lambda_0 +\lambda) \| \beta(t) - \beta^* \|_2^2
\end{align*}
Thus,
\begin{align*}
| u_{\ntk,\test}(T) - u_{\test}^* |
= & ~ | \kappa \Phi(x_{\test})^\top \beta(T) - \kappa \Phi(x_{\test})^\top \beta^* | \\
\leq & ~ \kappa \| \Phi(x_{\test}) \|_2 \| \beta(T) - \beta^* \|_2\\
\leq & ~ \kappa e^{-(\kappa^2 \Lambda_0+\lambda)T}\| \beta(0) - \beta^* \|_2 \\
\leq & ~ e^{-(\kappa^2 \Lambda_0+\lambda)T} \cdot \poly(\kappa,n,1/\Lambda_0)
\end{align*}
where the second step follows from the Cauchy-Schwarz inequality, the third step uses $\|\Phi(x_{\test})\|_2 = \k_{\ntk}(x_{\test},x_{\test})^{1/2} \leq 1$ and the linear convergence above, and the last step follows from $\beta(0) = 0$ and $\|\beta^*\|_2 = \poly(\kappa,n, 1/\Lambda_0)$.
Note $\kappa\in(0,1)$. Thus, by picking $ T = \wt{O}(\frac{1}{\kappa^2\Lambda_0})$, we have
\begin{align*}
| u_{\ntk,\test}(T) - u_{\test}^* | \leq \epsilon/2,
\end{align*}
where $\wt{O}(\cdot)$ hides $\poly\log( n/ (\epsilon \Lambda_0) )$ factors.
\end{proof}
\subsubsection{Upper bounding $| u_{\nn,\test}(T) - u_{\ntk,\test}(T) |$ by bounding initialization and kernel perturbation}\label{sec:equiv_splitting_nn_test_T_and_ntk_test_T_into_three}
The goal of this section is to prove Lemma~\ref{lem:more_concreate_bound}, which reduces the problem of bounding prediction perturbation to the problem of bounding initialization perturbation and kernel perturbation.
\begin{lemma}[Bounding the prediction perturbation by the initialization and kernel perturbations]\label{lem:more_concreate_bound}
Given training data matrix $X \in \R^{n \times d}$ and corresponding label vector $Y \in \R^n$. Fix the total number of iterations $T > 0$. Given arbitrary test data $x_{\test} \in \R^d$. Let $u_{\nn,\test}(t) \in \R$ and $u_{\ntk,\test}(t) \in \R$ be the test data predictors defined in Definition~\ref{def:nn} and Definition~\ref{def:krr_ntk} respectively. Let $\kappa\in(0,1)$ be the corresponding multiplier. Let $\k_{\ntk}(x_{\test},X) \in \R^n,~\k_{t}(x_{\test},X) \in \R^n,~H^{\cts} \in \R^{n \times n},~H(t) \in \R^{n \times n},~\Lambda_0 > 0$ be defined in Definition~\ref{def:ntk_phi} and Definition~\ref{def:dynamic_kernel}. Let $u^* \in \R^n$ be defined as in Eq.~\eqref{eq:def_u_*}. Let $\lambda > 0$ be the regularization parameter.
Let $\epsilon_{K} \in (0,1)$, $\epsilon_{\init} \in (0,1)$ and $\epsilon_H \in (0,1)$ denote parameters that are independent of $t$, and suppose the following conditions hold for all $t\in[0,T]$:
\begin{itemize}
\item $\|u_{\nn}(0)\|_2 \leq \sqrt{n}\epsilon_{\init}$ and $|u_{\nn,\test}(0)| \leq \epsilon_{\init}$
\item $\|\k_{\ntk}(x_{\test},X)-\k_{t}(x_{\test},X)\|_2\le \epsilon_{K}$
\item $\| H(t) - H^{\cts} \| \le \epsilon_H$
\end{itemize}
then we have
\begin{align*}
|u_{\nn,\test}(T)-u_{\ntk,\test}(T)| \leq & ~ (1+\kappa^2 nT)\epsilon_{\init} + \kappa^2 \epsilon_K \cdot \Big( \frac{ \| u^* \|_2 }{ \kappa^2 \Lambda_0 + \lambda } + \|u^*-Y\|_2 T\Big)\\
& ~ + \sqrt{n}T^2\kappa^4 \epsilon_H ( \| u^* \|_2 + \| u^* - Y \|_2 )
\end{align*}
\end{lemma}
\begin{proof}
Combining the results from Lemma~\ref{lem:very_rough_bound} and Claims~\ref{cla:A}, \ref{cla:B} and~\ref{cla:C}, we complete the proof.
We have
\begin{align*}
|u_{\nn,\test}(T)-u_{\ntk,\test}(T)|
\leq & ~ |u_{\nn,\test}(0)-u_{\ntk,\test}(0)|\\
& ~ + \kappa^2 \Big| \int_{0}^T (\k_{\ntk}(x_{\test},X)-\k_{t}(x_{\test},X))^\top (u_{\ntk}(t)-Y) \d t \Big|\\
& ~ + \kappa^2 \Big| \int_{0}^T \k_{t}(x_{\test},X)^\top(u_{\ntk}(t)-u_{\nn}(t)) \d t \Big|\\
\leq & ~ \epsilon_{\init} + \kappa^2 \epsilon_K \cdot \Big( \frac{ \|u^*\|_2 }{ \kappa^2 \Lambda_0 + \lambda } + \|u^*-Y\|_2 T\Big)\\
& ~ + \kappa^2 n\epsilon_{\init}T + \sqrt{n}T^2 \cdot \kappa^4 \epsilon_H \cdot ( \| u^* \|_2 + \| u^* - Y \|_2 )\\
\leq & ~ (1+ \kappa^2 nT)\epsilon_{\init} + \kappa^2 \epsilon_K \cdot \Big( \frac{ \| u^* \|_2 }{ \kappa^2 \Lambda_0 + \lambda } + \|u^*-Y\|_2 T\Big)\\
& ~ + \sqrt{n}T^2\kappa^4 \epsilon_H ( \| u^* \|_2 + \| u^* - Y \|_2 )
\end{align*}
where the first step follows from Lemma~\ref{lem:very_rough_bound}, the second step follows from Claim~\ref{cla:A},~\ref{cla:B} and \ref{cla:C}, and the last step simplifies the expression.
\end{proof}
To prove Lemma~\ref{lem:more_concreate_bound}, we first bound $| u_{\nn,\test}(T) - u_{\ntk,\test}(T) |$ by three terms in Lemma~\ref{lem:very_rough_bound}, then we bound each term individually in Claim~\ref{cla:A}, Claim~\ref{cla:B}, and Claim~\ref{cla:C}.
\begin{lemma}\label{lem:very_rough_bound}
Following the same notation as Lemma~\ref{lem:more_concreate_bound}, we have
\begin{align*}
| u_{\nn,\test}(T) - u_{\ntk,\test}(T) | \leq A + B + C,
\end{align*}
where
\begin{align*}
A = & ~ |u_{\nn,\test}(0)-u_{\ntk,\test}(0)|\\
B = & ~ \kappa^2 \Big| \int_{0}^T (\k_{\ntk}(x_{\test},X)-\k_{t}(x_{\test},X))^\top (u_{\ntk}(t)-Y) \d t \Big|\\
C = & ~ \kappa^2 \Big| \int_{0}^T \k_{t}(x_{\test},X)^\top(u_{\ntk}(t)-u_{\nn}(t)) \d t \Big|
\end{align*}
\end{lemma}
\begin{proof}
\begin{align}\label{eq:320_3}
& ~|u_{\nn,\test}(T)-u_{\ntk,\test}(T)|\notag\\
= & ~ \Big| u_{\nn,\test}(0)-u_{\ntk,\test}(0)+\int_{0}^T(\frac{\d u_{\nn,\test}(t)}{\d t}-\frac{\d u_{\ntk,\test}(t)}{\d t})\d t \Big |\notag\\
\leq & ~ | u_{\nn,\test}(0)-u_{\ntk,\test}(0) |+ \Big| \int_{0}^T(\frac{\d u_{\nn,\test}(t)}{\d t}-\frac{\d u_{\ntk,\test}(t)}{\d t}) \d t \Big|,
\end{align}
where the first step writes the difference as the integral of its derivative, and the second step follows from the triangle inequality.
Note that by Corollaries~\ref{cor:ntk_gradient} and~\ref{cor:nn_gradient}, the gradient flows of the two test predictors are given by
\begin{align}
\frac{\d u_{\ntk,\test}(t)}{\d t}&= - \kappa^2 \k_{\ntk}(x_{\test},X)^\top(u_{\ntk}(t)-Y)-\lambda u_{\ntk,\test}(t)\label{eq:320_1}\\
\frac{\d u_{\nn,\test}(t)}{\d t}&= - \kappa^2 \k_{t}(x_{\test},X)^\top(u_{\nn}(t)-Y)-\lambda u_{\nn,\test}(t)\label{eq:320_2}
\end{align}
where $u_{\ntk}(t) \in \R^n$ and $u_{\nn}(t) \in \R^n$ are the predictors for training data defined in Definition~\ref{def:krr_ntk} and Definition~\ref{def:nn}. Thus, we have
\begin{align}\label{eq:320_4}
& ~ \frac{\d u_{\nn,\test}(t)}{\d t}-\frac{\d u_{\ntk,\test}(t)}{\d t} \notag\\
= & ~ - \kappa^2 \k_{t}(x_{\test},X)^\top(u_{\nn}(t)-Y)+ \kappa^2 \k_{\ntk}(x_{\test},X)^\top(u_{\ntk}(t)-Y) -\lambda (u_{\nn,\test}(t)-u_{\ntk,\test}(t)) \notag\\
= & ~ \kappa^2 (\k_{\ntk}(x_{\test},X)- \k_{t}(x_{\test},X))^\top (u_{\ntk}(t)-Y)- \kappa^2 \k_{t}(x_{\test},X)^\top(u_{\ntk}(t)-u_{\nn}(t)) \notag\\
& ~ -\lambda (u_{\nn,\test}(t)-u_{\ntk,\test}(t)),
\end{align}
where the first step follows from Eq.~\eqref{eq:320_1} and Eq.~\eqref{eq:320_2}, the second step rewrites the formula.
Note that the term $-\lambda (u_{\nn,\test}(t)-u_{\ntk,\test}(t))$ can only make
\begin{align*}
\Big| \int_{0}^T(\frac{\d u_{\nn,\test}(t)}{\d t}-\frac{\d u_{\ntk,\test}(t)}{\d t})\d t \Big|
\end{align*}
smaller, so we have
\begin{align}\label{eq:320_5}
& ~ \Big|\int_{0}^T (\frac{\d u_{\nn,\test}(t)}{\d t}-\frac{\d u_{\ntk,\test}(t)}{\d t}) \d t\Big| \notag\\
\leq & ~ \Big|\int_{0}^T \kappa^2 ((\k_{\ntk}(x_{\test},X)-\k_{t}(x_{\test},X))^\top (u_{\ntk}(t)-Y)- \kappa^2 \k_{t}(x_{\test},X)^\top(u_{\ntk}(t)-u_{\nn}(t))) \d t\Big|
\end{align}
Thus,
\begin{align*}
& ~ |u_{\nn,\test}(T)-u_{\ntk,\test}(T)|\\
\leq & ~ |u_{\nn,\test}(0)-u_{\ntk,\test}(0)| + \Big| \int_{0}^T(\frac{\d u_{\nn,\test}(t)}{\d t}-\frac{\d u_{\ntk,\test}(t)}{\d t}) \d t \Big|\\
\leq & ~ |u_{\nn,\test}(0)-u_{\ntk,\test}(0)| + \Big|\int_{0}^T \kappa^2 ((\k_{\ntk}(x_{\test},X)-\k_{t}(x_{\test},X))^\top (u_{\ntk}(t)-Y) \\
& ~ - \kappa^2 \k_{t}(x_{\test},X)^\top(u_{\ntk}(t)-u_{\nn}(t))) \d t\Big|\\
\leq & ~ |u_{\nn,\test}(0)-u_{\ntk,\test}(0)| + \Big| \int_{0}^T \kappa^2 (\k_{\ntk}(x_{\test},X)-\k_{t}(x_{\test},X))^\top (u_{\ntk}(t)-Y) \d t \Big|\\
& ~ + \Big| \int_{0}^T \kappa^2 \k_{t}(x_{\test},X)^\top(u_{\ntk}(t)-u_{\nn}(t)) \d t \Big|\\
= & ~ A + B + C,
\end{align*}
where the first step follows from Eq.~\eqref{eq:320_3}, the second step follows from Eq.~\eqref{eq:320_5}, the third step follows from triangle inequality, and the last step follows from the definition of $A,~B,~C$.
\end{proof}
Now let us bound the three terms $A$, $B$ and $C$ one by one.
\begin{claim}[Bounding the term $A$]\label{cla:A}
We have
\begin{align*}
A \leq \epsilon_{\init}.
\end{align*}
\end{claim}
\begin{proof}
Note $u_{\ntk,\test}(0) = 0$, so by assumption we have
\begin{align*}
A=|u_{\nn,\test}(0)| \leq \epsilon_{\init}.
\end{align*}
\end{proof}
\begin{claim}[Bounding the term $B$]\label{cla:B}
We have
\begin{align*}
B \leq \kappa^2 \epsilon_K \cdot \Big( \frac{ \|u^*\|_2 }{ \kappa^2 \Lambda_0 + \lambda } + \|u^*-Y\|_2 T\Big) .
\end{align*}
\end{claim}
\begin{proof}
Note
\begin{align*}
B = & ~ \kappa^2 \Big| \int_{0}^T (\k_{\ntk}(x_{\test},X)-\k_{t}(x_{\test},X))^\top (u_{\ntk}(t)-Y) \d t \Big|\\
\le & ~ \kappa^2 \max_{ t \in [ 0 , T ] }\|\k_{\ntk}(x_{\test},X)-\k_{t}(x_{\test},X)\|_2 \int_{0}^T \|u_{\ntk}(t)-Y\|_2 \d t ,
\end{align*}
where the first step follows from the definition of $B$, and the second step follows from the Cauchy-Schwarz inequality.
Note by Lemma~\ref{lem:linear_converge_krr}, the kernel ridge regression predictor $u_{\ntk}(t) \in \R^n$ converges linearly to the optimal predictor $u^*=\kappa^2 H^{\cts}(\kappa^2 H^{\cts}+\lambda I)^{-1}Y \in \R^n$, i.e.,
\begin{align}\label{eq:320_6}
\|u_{\ntk}(t) - u^*\|_2 \leq e^{-(\kappa^2 \Lambda_0+\lambda)t} \|u_{\ntk}(0) - u^*\|_2.
\end{align}
Thus, we have
\begin{align}\label{eq:upper_bound_int_0_T_u_ntk_t_minus_Y}
\int_{0}^T \|u_{\ntk}(t)-Y\|_2 \d t
\leq & ~ \int_0^T \| u_{\ntk}(t) - u^* \|_2 \d t + \int_0^T \|u^*-Y\|_2 \d t \notag \\
\le & ~ \int_{0}^T e^{-(\kappa^2 \Lambda_0+\lambda)t} \|u_{\ntk}(0)-u^*\|_2 \d t + \int_{0}^T \|u^*-Y\|_2 \d t \notag \\
\leq & ~ \frac{\|u_{\ntk}(0)-u^*\|_2}{\kappa^2 \Lambda_0+\lambda} + \|u^*-Y\|_2 T \notag \\
= & ~ \frac{ \|u^*\|_2 }{\kappa^2 \Lambda_0 + \lambda } + \|u^*-Y\|_2 T,
\end{align}
where the first step follows from the triangle inequality, the second step follows from Eq.~\eqref{eq:320_6}, the third step calculates the integration, and the last step follows from the fact $u_{\ntk}(0) = 0$.
Thus, we have
\begin{align*}
B \le & ~ \kappa^2 \max_{t \in [0, T]}\|\k_{\ntk}(x_{\test},X)-\k_{t}(x_{\test},X)\|_2 \cdot \int_0^T \| u_{\ntk}(t) - Y \|_2 \d t \\
\leq & ~ \kappa^2 \epsilon_K \cdot \Big( \frac{ \|u^*\|_2 }{ \kappa^2 \Lambda_0 + \lambda } + \|u^*-Y\|_2 T \Big) .
\end{align*}
where the first step restates the Cauchy-Schwarz bound above, and the second step follows from Eq.~\eqref{eq:upper_bound_int_0_T_u_ntk_t_minus_Y} and the definition of $\epsilon_K$.
\end{proof}
\begin{claim}[Bounding the term $C$]\label{cla:C}
We have
\begin{align*}
C \leq \kappa^2 n\epsilon_{\init}T + \sqrt{n}T^2 \cdot \kappa^4 \epsilon_H \cdot ( \| u^* \|_2 + \| u^* - Y \|_2 )
\end{align*}
\end{claim}
\begin{proof}
Note
\begin{align}\label{eq:320_10}
C = & ~ \kappa^2 \Big| \int_{0}^T \k_{t}(x_{\test},X)^\top(u_{\ntk}(t)-u_{\nn}(t)) \d t \Big| \notag\\
\le & ~ \kappa^2 \max_{ t \in [0,T] }\|\k_t(x_{\test},X)\|_2 \max_{ t \in [0,T] } \|u_{\ntk}(t)-u_{\nn}(t)\|_2\cdot T
\end{align}
where the first step follows from the definition of $C$, and the second step follows from the Cauchy-Schwarz inequality.
To bound term $\max_{ t \in [0,T] } \|u_{\ntk}(t)-u_{\nn}(t)\|_2$, notice that for any $t\in[0,T]$, we have
\begin{align}\label{eq:320_7}
\|u_{\ntk}(t)-u_{\nn}(t)\|_2 \leq & ~\|u_{\ntk}(0)-u_{\nn}(0)\|_2 + \Big\|\int_{0}^{t} \frac{\d (u_{\ntk}(\tau)-u_{\nn}(\tau))}{\d \tau} \d \tau \Big\|_2 \notag\\
\leq & ~ \sqrt{n} \epsilon_{\init} + \Big\|\int_{0}^{t} \frac{\d (u_{\ntk}(\tau)-u_{\nn}(\tau))}{\d \tau} \d \tau \Big\|_2,
\end{align}
where the first step follows from the triangle inequality, and the second step follows from $u_{\ntk}(0) = 0$ and the assumption $\|u_{\nn}(0)\|_2 \leq \sqrt{n}\epsilon_{\init}$.
Further,
\begin{align*}
\frac{\d (u_{\ntk}(\tau)-u_{\nn}(\tau))}{\d \tau} &= - \kappa^2 H^{\cts}(u_{\ntk}(\tau)-Y)-\lambda u_{\ntk}(\tau) + \kappa^2 H(\tau)(u_{\nn}(\tau)-Y) +\lambda u_{\nn}(\tau)\\
&=-( \kappa^2 H(\tau)+\lambda I)(u_{\ntk}(\tau)-u_{\nn}(\tau)) + \kappa^2 (H(\tau)-H^{\cts})(u_{\ntk}(\tau)-Y),
\end{align*}
where the first step follows from Corollaries~\ref{cor:ntk_gradient} and~\ref{cor:nn_gradient}, and the second step rewrites the formula.
Since the term $-( \kappa^2 H(\tau)+\lambda I)(u_{\ntk}(\tau)-u_{\nn}(\tau))$ only makes $\|\int_{0}^t \frac{\d (u_{\ntk}(\tau)-u_{\nn}(\tau))}{\d \tau} \d \tau\|_2$ smaller, taking the integral and applying the $\ell_2$ norm gives
\begin{align}\label{eq:320_8}
\Big\| \int_{0}^t \frac{\d (u_{\ntk}(\tau)-u_{\nn}(\tau))}{\d \tau} \d \tau \Big\|_2 \leq \Big\| \int_0^t \kappa^2 (H(\tau)-H^{\cts})(u_{\ntk}(\tau)-Y) \d \tau \Big\|_2.
\end{align}
Thus,
\begin{align}\label{eq:320_9}
\max_{t\in[0,T]} \|u_{\ntk}(t)-u_{\nn}(t)\|_2
\leq & ~ \sqrt{n} \epsilon_{\init} + \max_{t\in[0,T]} \Big\| \int_{0}^t \frac{\d (u_{\ntk}(\tau)-u_{\nn}(\tau))}{\d \tau} \d \tau \Big\|_2 \notag \\
\leq & ~ \sqrt{n} \epsilon_{\init} + \max_{t\in[0,T]} \Big\| \int_0^t \kappa^2 (H(\tau)-H^{\cts})(u_{\ntk}(\tau)-Y) \d \tau \Big\|_2 \notag \\
\leq & ~ \sqrt{n} \epsilon_{\init} + \max_{t\in[0,T]} \int_0^t \kappa^2 \| H(\tau) - H^{\cts} \| \cdot \| u_{\ntk}(\tau) - Y \|_2 \d \tau \notag \\
\leq & ~ \sqrt{n} \epsilon_{\init} + \max_{t\in[0,T]} \kappa^2 \epsilon_H \Big( \int_{0}^t \| u_{\ntk}(\tau) - u^* \|_2 \d \tau +\int_{0}^t \| u^* - Y \|_2 \d \tau \Big) \notag\\
\leq & ~ \sqrt{n} \epsilon_{\init} + \max_{t\in[0,T]} \kappa^2 \epsilon_H \Big( \int_{0}^t \| u_{\ntk}(0) - u^* \|_2 \d \tau +\int_{0}^t \| u^* - Y \|_2 \d \tau \Big) \notag\\
\leq & ~ \sqrt{n} \epsilon_{\init} + \max_{t\in[0,T]} t \cdot \kappa^2 \epsilon_H \cdot (\|u^*\|_2 + \|u^*-Y\|_2) \notag \\
\leq & ~ \sqrt{n} \epsilon_{\init} + T \cdot \kappa^2 \epsilon_H \cdot (\|u^*\|_2 + \|u^*-Y\|_2)
\end{align}
where the first step follows from Eq.~\eqref{eq:320_7}, the second step follows from Eq.~\eqref{eq:320_8}, the third step follows from triangle inequality, the fourth step follows from the condition $\| H(\tau) - H^{\cts} \| \leq \epsilon_H$ for all $\tau \leq T$ and the triangle inequality, the fifth step follows from the linear convergence of $\| u_{\ntk}(\tau) - u^* \|_2$ as in Lemma~\ref{lem:linear_converge_krr}, the sixth step follows the fact $u_{\ntk}(0) = 0$, and the last step calculates the maximum.
Therefore,
\begin{align*}
C \leq & ~ \kappa^2 \max_{ t \in [0,T] }\|\k_t(x_{\test},X)\|_2 \max_{ t \in [0,T] } \|u_{\ntk}(t)-u_{\nn}(t)\|_2\cdot T \\
\leq & ~ \kappa^2 \max_{ t \in [0,T] } \| \k_t (x_{\test} , X) \|_2 \cdot (\sqrt{n} \epsilon_{\init}T + T^2 \cdot \kappa^2 \epsilon_H \cdot ( \| u^* \|_2 + \| u^* - Y \|_2 )) \\
\leq & ~ \kappa^2 n\epsilon_{\init}T + \sqrt{n}T^2 \cdot \kappa^4 \epsilon_H \cdot ( \| u^* \|_2 + \| u^* - Y \|_2 )
\end{align*}
where the first step follows from Eq.~\eqref{eq:320_10}, and the second step follows from Eq.~\eqref{eq:320_9}, and the last step follows from the fact that $\k_{t}(x,z) \leq 1$ holds for any $\|x\|_2,~\|z\|_2 \leq 1$.
\end{proof}
\begin{remark}
Given final accuracy $\epsilon$, to ensure $|u_{\nn,\test}(T)-u_{\ntk,\test}(T)| \leq \epsilon$, we need to choose $\kappa>0$ small enough to make $\epsilon_{\init}=O(\epsilon)$ and choose the width $m>0$ large enough to make $\epsilon_H$ and $\epsilon_{K}$ both $O(\epsilon)$. We discuss these two tasks one by one in the following sections.
\end{remark}
\subsubsection{Upper bounding initialization perturbation}\label{sec:equiv_intialization}
In this section, we bound the initialization perturbation to the desired accuracy $\epsilon$ by picking $\kappa$ small enough. We prove Lemma~\ref{lem:epsilon_init}.
\begin{lemma}[Bounding initialization perturbation]\label{lem:epsilon_init}
Let $f_{\nn}$ be as defined in Definition~\ref{def:f_nn}. Assume the initial weights of the network $w_r(0)\in\R^d,~r=1,\cdots,m$ as defined in Definition~\ref{def:nn} are drawn independently from the standard Gaussian distribution $\mathcal{N}(0,I_d)$, and $a_r\in\R,~r=1,\cdots,m$ as defined in Definition~\ref{def:f_nn} are drawn independently from $\unif[\{-1,+1\}]$. Let $\kappa\in(0,1)$, $u_{\nn}(t)$ and $u_{\nn,\test}(t)\in\R$ be defined as in Definition~\ref{def:nn}. Then for any data $x\in\R^d$ with $\|x\|_2\leq 1$, we have with probability $1-\delta$,
\begin{align*}
|f_{\nn}(W(0),x)| \leq 2 \log(2m/\delta).
\end{align*}
Further, given any accuracy $\epsilon\in(0,1)$, if $\kappa = \wt{O}(\epsilon(\Lambda_0+\lambda)/n)$ and we let $\epsilon_{\init} = \epsilon(\Lambda_0+\lambda)/n$, then
\begin{align*}
\|u_{\nn}(0)\|_2 \leq \sqrt{n}\epsilon_{\init}~\text{and}~|u_{\nn,\test}(0)| \leq \epsilon_{\init}
\end{align*}
hold with probability $1-\delta$, where $\wt{O}(\cdot)$ hides the $\poly\log( n / ( \epsilon \delta \Lambda_0 ) )$.
\end{lemma}
\begin{proof}
Note by definition,
\begin{align*}
f_{\nn}(W(0), x) = \frac{1}{\sqrt{m}}\sum_{r=1}^m a_r\sigma(w_r(0)^\top x).
\end{align*}
Since $w_r(0)\sim\mathcal{N}(0,I_d)$, we have $w_r(0)^\top x\sim \mathcal{N}(0,\|x\|_2^2)$. Note $\|x\|_2 \leq 1$, so by the Gaussian tail bound (Lemma~\ref{lem:gaussian_tail}), we have with probability $1-\delta / (2m)$:
\begin{align}\label{eq:guassian_tail}
|w_r(0)^\top x| \leq \sqrt{ 2 \log( 2 m / \delta ) }.
\end{align}
Conditioning on Eq.~\eqref{eq:guassian_tail} holding for all $r\in[m]$, denote $Z_r = a_r\sigma(w_r(0)^\top x)$; then we have $\E[Z_r] = 0$ and $|Z_r| \leq \sqrt{2\log(2m/\delta)}$. By Lemma~\ref{lem:hoeffding}, with probability $1-\delta/2$:
\begin{align}\label{eq:heoffding}
\Big| \sum_{r=1}^m Z_r \Big| \leq 2 \sqrt{m}\log{ ( 2 m / \delta ) }.
\end{align}
Since $f_{\nn}(W(0), x) = \frac{1}{\sqrt{m}}\sum_{r=1}^m Z_r $, by combining Eq.~\eqref{eq:guassian_tail}, Eq.~\eqref{eq:heoffding} and a union bound over all $r\in[m]$, we have with probability $1-\delta$:
\begin{align*}
|f_{\nn}(W(0), x)| \leq 2\log( 2 m / \delta ).
\end{align*}
Further, note that $[u_{\nn}(0)]_i = \kappa f_{\nn}(W(0),x_i)$ and $u_{\nn,\test}(0) = \kappa f_{\nn}(W(0),x_{\test})$. Thus, by choosing $\kappa = \wt{O}(\epsilon(\Lambda_0+\lambda)/n)$ and taking a union bound over all training and test data, we have
\begin{align*}
\|u_{\nn}(0)\|_2 \leq \sqrt{n}\epsilon_{\init}~\text{and}~|u_{\nn,\test}(0)| \leq \epsilon_{\init}
\end{align*}
hold with probability $1-\delta$, where $\wt{O}(\cdot)$ hides the $\poly\log( n / ( \epsilon \delta \Lambda_0 ) )$.
\end{proof}
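A quick simulation (an illustration, not a proof) of the scale at initialization: $|f_{\nn}(W(0),x)|$ stays well below the $2\log(2m/\delta)$ bound of Lemma~\ref{lem:epsilon_init}, so multiplying by a small $\kappa$ makes the initial predictors as small as needed. All sizes below are assumptions made only for this example.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, m, trials, delta = 10, 4096, 200, 0.01
x = rng.standard_normal(d); x /= np.linalg.norm(x)

vals = []
for _ in range(trials):
    W0 = rng.standard_normal((d, m))
    a = rng.choice([-1.0, 1.0], size=m)
    vals.append(abs((a * np.maximum(W0.T @ x, 0.0)).sum()) / np.sqrt(m))

print("max |f_nn(W(0), x)| over trials:", max(vals))
print("bound 2*log(2m/delta)          :", 2 * np.log(2 * m / delta))
\end{verbatim}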
\subsubsection{Upper bounding kernel perturbation}\label{sec:equiv_kernel_perturbation}
In this section, we bound the kernel perturbation by induction. We prove Lemma~\ref{lem:induction}, which also helps establish the equivalence for training data prediction in Section~\ref{sec:train}.
\begin{lemma}[Bounding kernel perturbation]\label{lem:induction}
Given training data $X\in\R^{n\times d}$, $Y\in\R^n$ and a test data $x_{\test}\in\R^d$. Let $T > 0$ denote the total number of iterations, $m >0$ the width of the network, $\epsilon_{\train}$ a fixed training error threshold, and $\delta > 0$ the failure probability. Let $u_{\nn}(t) \in \R^n$ and $u_{\ntk}(t) \in \R^n$ be the training data predictors defined in Definition~\ref{def:nn} and Definition~\ref{def:krr_ntk} respectively. Let $\kappa\in(0,1)$ be the corresponding multiplier. Let $\k_{\ntk}(x_{\test},X) \in \R^n,~\k_{t}(x_{\test},X) \in \R^n,~H(t) \in \R^{n \times n},~\Lambda_0 > 0$ be the kernel related quantities defined in Definition~\ref{def:ntk_phi} and Definition~\ref{def:dynamic_kernel}. Let $u^* \in \R^n$ be defined as in Eq.~\eqref{eq:def_u_*}. Let $\lambda > 0$ be the regularization parameter. Let $W(t) = [w_1(t),\cdots,w_m(t)]\in\R^{d\times m}$ be the parameters of the neural network defined in Definition~\ref{def:nn}.
For any accuracy $\epsilon\in(0,1/10)$, if $\kappa=\wt{O}(\frac{\epsilon\Lambda_0}{n})$, $T=\wt{O}(\frac{1}{\kappa^2(\Lambda_0+\lambda)})$, $\epsilon_{\train} = \wt{O}(\|u_{\nn}(0)-u^*\|_2)$, $m \geq\wt{O}(\frac{n^{10} d}{\epsilon^6 \Lambda_0^{10}})$ and $\lambda=\wt{O}(\frac{1}{\sqrt{m}})$, then with probability $1-\delta$ there exist $\epsilon_W,~\epsilon_H',~\epsilon_K'>0$ that are independent of $t$ such that the following hold for all $0 \leq t \le T$:
\begin{itemize}
\item 1. $\| w_r(0) - w_r(t) \|_2 \leq \epsilon_W $, $\forall r \in [m]$
\item 2. $\| H(0) - H(t) \|_2 \leq \epsilon_H'$
\item 3. $\| u_{\nn}(t) - u^* \|_2^2 \leq \max\{\exp(-(\kappa^2\Lambda_0 + \lambda) t/2) \cdot \| u_{\nn}(0) - u^* \|_2^2, ~ \epsilon_{\train}^2\}$
\item 4. $\| \k_0( x_{\test} , X )- \k_{t} (x_{\test}, X) \|_2 \leq \epsilon_K'$
\end{itemize}
Further, $\epsilon_W \leq \wt{O}(\frac{\epsilon \Lambda_0^2}{n^2})$, $\epsilon_H' \leq \wt{O}(\frac{\epsilon \Lambda_0^2}{n})$ and $\epsilon_K' \leq \wt{O}(\frac{\epsilon \Lambda_0^2}{n^{1.5}})$. Here $\wt{O}(\cdot)$ hides $\poly\log( n / ( \epsilon \delta \Lambda_0) )$ factors.
\end{lemma}
We first state some concentration results for the random initialization that can help us prove the lemma.
\begin{lemma}[Random initialization result]\label{lem:random_init}
Assume the initial values $w_r(0) \in \R^d ,~r=1,\cdots,m$ are drawn independently from the standard Gaussian distribution $\mathcal{N}(0,I_d)$. Then with probability at least $1-3\delta$ we have
\begin{align}
\|w_r(0)\|_2 \leq & ~ 2\sqrt{d} + 2\sqrt{\log{(m/\delta)}}~\text{for all}~r\in[m]\label{eq:3322_1}\\
\|H(0)- H^{\cts} \| \leq & ~ 4n ( \log(n/\delta) / m )^{1/2}\label{eq:3322_2}\\
\|\k_{0}( x_{\test} , X ) - \k_{\ntk} ( x_{\test} , X )\|_2 \leq & ~ ( 2n \log{(2n/\delta)} / m )^{1/2}\label{eq:3322_3}
\end{align}
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:chi_square_tail}, with probability at least $1-\delta$,
\begin{align*}
\| w_r(0) \|_2 \leq \sqrt{d} + \sqrt{\log(m/\delta)}
\end{align*}
holds for all $r\in[m]$.\\
Using Lemma~\ref{lem:lemma_4.1_in_sy19} in \cite{sy19}, we have
\begin{align*}
\| H(0) - H^{\cts} \| \leq \epsilon_H'' = 4 n ( \log{(n/\delta)} / m )^{1/2}
\end{align*}
holds with probability at least $1-\delta$.\\
Note by definition,
\begin{align*}
\E[\k_0 ( x_{\test}, x_i )] = \k_{\ntk} ( x_{\test}, x_i )
\end{align*}
holds for any training data $x_i$. By Hoeffding inequality, we have for any $t>0$,
\begin{align*}
\Pr[|\k_0 ( x_{\test}, x_i ) - \k_{\ntk} ( x_{\test}, x_i )|\ge t] \le 2\exp{(-mt^2/2)}.
\end{align*}
Setting $t=(\frac{2}{m}\log{(2n/\delta)})^{1/2}$, we can apply union bound on all training data $x_i$ to get with probability at least $1-\delta$, for all $i\in[n]$,
\begin{align*}
|\k_0 ( x_{\test}, x_i ) - \k_{\ntk} ( x_{\test}, x_i )| \le (2 \log(2n/\delta) / m)^{1/2}.
\end{align*}
Thus, we have
\begin{align}
\| \k_0 ( x_{\test}, X ) - \k_{\ntk} ( x_{\test}, X ) \|_2 \le ( 2n\log(2n/\delta) / m )^{1/2}
\end{align}
holds with probability at least $1-\delta$.\\
Using union bound over above three events, we finish the proof.
\end{proof}
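The concentration in Eq.~\eqref{eq:3322_2} can also be observed empirically: the finite-width kernel $H(0)$ approaches $H^{\cts}$ as the width grows. In the sketch below, $H^{\cts}$ is approximated by the same average taken over a much larger number of sampled weights; all sizes and seeds are assumptions made only for this illustration.
\begin{verbatim}
import numpy as np

def finite_width_gram(X, m, seed):
    """[H(0)]_{ij} = (1/m) sum_r x_i^T x_j 1{w_r^T x_i > 0} 1{w_r^T x_j > 0}."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((m, X.shape[1]))   # rows play the role of w_r(0)
    A = (W @ X.T > 0).astype(float)            # A[r, i] = 1{w_r^T x_i > 0}
    return (X @ X.T) * (A.T @ A) / m

rng = np.random.default_rng(0)
n, d = 8, 5
X = rng.standard_normal((n, d)); X /= np.linalg.norm(X, axis=1, keepdims=True)
H_ref = finite_width_gram(X, m=500_000, seed=1)     # large-sample proxy for H^cts
for m in [100, 1_000, 10_000, 100_000]:
    H0 = finite_width_gram(X, m=m, seed=2)
    print(m, np.linalg.norm(H0 - H_ref, 2))         # spectral deviation shrinks with m
\end{verbatim}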
Now, conditioning on Eq.~\eqref{eq:3322_1}, Eq.~\eqref{eq:3322_2} and Eq.~\eqref{eq:3322_3} holding, we show that all four conclusions in Lemma~\ref{lem:induction} hold using induction.
We define the following quantity:
\begin{align}
\epsilon_W := & ~ \frac{ \sqrt{n} }{ \sqrt{m} } \max\{4\| u_{\nn}(0) - u^* \|_2/(\kappa^2\Lambda_0+\lambda), \epsilon_{\train} \cdot T\} \notag \\
& ~ + \Big( \frac{ \sqrt{n} }{ \sqrt{m} } \| Y-u^* \|_2 + \lambda (2\sqrt{d} + 2\sqrt{\log(m/\delta)}) \Big) \cdot T\label{eq:def_epsilon_W}\\
\epsilon_H' := & ~ 2n\epsilon_W\notag \\
\epsilon_K' := & ~ 2\sqrt{n}\epsilon_W\notag
\end{align}
which are independent of $t$.
Note that the base case $t=0$ trivially holds. Now, assuming the conclusions of Lemma~\ref{lem:induction} hold before time $t\in[0,T]$, we argue that they also hold at time $t$. To do so, Lemmas~\ref{lem:hypothesis_1}, \ref{lem:hypothesis_2}, \ref{lem:hypothesis_3} and~\ref{lem:hypothesis_4} establish these conclusions one by one.
\begin{lemma}[Conclusion 1]\label{lem:hypothesis_1}
If for any $\tau < t$, we have
\begin{align*}
\| u_{\nn}(\tau) - u^* \|_2^2 \leq ~ \max\{\exp(-(\kappa^2 \Lambda_0 + \lambda) \tau/2) \cdot \| u_{\nn}(0) - u^* \|_2^2,~\epsilon_{\train}^2\}
\end{align*}
and
\begin{align*}
\| w_r(0) - w_r(\tau) \|_2 \leq \epsilon_W \leq 1
\end{align*}
and
\begin{align*}
\| w_r(0) \|_2 \leq \sqrt{d} + \sqrt{\log(m/\delta)} \quad \text{for all}~r\in[m]
\end{align*}
hold,
then
\begin{align*}
\| w_r(0) - w_r(t) \|_2 \leq \epsilon_W
\end{align*}
\end{lemma}
\begin{proof}
Recall the gradient flow as Eq.~\eqref{eq:323_1}
\begin{align}\label{eq:332_2}
\frac{ \d w_r( \tau ) }{ \d \tau } = ~ \sum_{i=1}^n \frac{\kappa}{\sqrt{m}} a_r ( y_i - u_{\nn}(\tau)_i ) x_i \sigma'( w_r(\tau)^\top x_i ) - \lambda w_r(\tau)
\end{align}
So we have
\begin{align}\label{eq:332_1}
\Big\| \frac{ \d w_r( \tau ) }{ \d \tau } \Big\|_2
= & ~ \left\| \sum_{i=1}^n \frac{\kappa}{\sqrt{m}} a_r ( y_i - u_{\nn}(\tau)_i ) x_i \sigma'( w_r(\tau)^\top x_i ) - \lambda w_r(\tau) \right\|_2 \notag \\
\leq & ~ \frac{1}{\sqrt{m}} \sum_{i=1}^n |y_i-u_{\nn}(\tau)_i| + \lambda \| w_r(\tau) \|_2 \notag\\
\leq & ~ \frac{ \sqrt{n} }{ \sqrt{m} } \| Y-u_{\nn}(\tau) \|_2 + \lambda \| w_r (\tau) \|_2 \notag \\
\leq & ~ \frac{ \sqrt{n} }{ \sqrt{m} } \| Y-u_{\nn}(\tau) \|_2 + \lambda (\| w_r(0) \|_2 + \| w_r(\tau) - w_r(0) \|_2) \notag\\
\leq & ~ \frac{ \sqrt{n} }{ \sqrt{m} } \| Y-u_{\nn}(\tau) \|_2 + \lambda (\sqrt{d} + \sqrt{\log(m/\delta)} + 1) \notag\\
\leq & ~ \frac{ \sqrt{n} }{ \sqrt{m} } \| Y-u_{\nn}(\tau) \|_2 + \lambda (2 \sqrt{d} +2 \sqrt{\log(m/\delta)} ) \notag\\
\leq & ~ \frac{ \sqrt{n} }{ \sqrt{m} } (\| Y-u^*\|_2 + \| u_{\nn}(\tau) - u^*\|_2) + \lambda (2\sqrt{d} + 2\sqrt{\log(m/\delta)} ) \notag\\
= & ~ \frac{ \sqrt{n} }{ \sqrt{m} } \| u_{\nn}(\tau) - u^*\|_2 \notag \\
& ~ + \frac{ \sqrt{n} }{ \sqrt{m} } \| Y-u^* \|_2 + \lambda ( 2 \sqrt{d} + 2 \sqrt{\log(m/\delta)} ) \notag \\
\leq & ~ \frac{ \sqrt{n} }{ \sqrt{m} } \max\{e^{-(\kappa^2 \Lambda_0+\lambda)\tau/4} \| u_{\nn}(0) - u^* \|_2, \epsilon_{\train}\} \notag\\
& ~ + \frac{ \sqrt{n} }{ \sqrt{m} } \| Y-u^* \|_2 + \lambda ( 2 \sqrt{d} + 2 \sqrt{\log(m/\delta)} ),
\end{align}
where the first step follows from Eq.~\eqref{eq:332_2}, the second step follows from the triangle inequality together with $|a_r|=1$, $\|x_i\|_2 = 1$, $|\sigma'|\leq 1$ and $\kappa \leq 1$, the third step follows from the Cauchy-Schwarz inequality, the fourth step follows from the triangle inequality, the fifth step follows from the conditions $\| w_r(0) - w_r(\tau) \|_2 \leq 1$ and $\| w_r(0) \|_2 \leq \sqrt{d} + \sqrt{\log(m/\delta)}$, the seventh step follows from the triangle inequality, and the last step follows from $\| u_{\nn}(\tau) - u^* \|_2^2 \leq \max\{\exp(-(\kappa^2\Lambda_0 + \lambda) \tau/2) \cdot \| u_{\nn}(0) - u^* \|_2^2,~\epsilon_{\train}^2\}$.
Thus, for any $t \le T$,
\begin{align*}
\| w_r(0) - w_r(t) \|_2 \leq & ~ \int_0^t \Big\| \frac{ \d w_r( \tau ) }{ \d \tau } \Big\|_2 d\tau \\
\leq & ~ \frac{ \sqrt{n} }{ \sqrt{m} } \max\{4\| u_{\nn}(0) - u^* \|_2/(\kappa^2\Lambda_0+\lambda), \epsilon_{\train}\cdot T \} \\
& ~ + \Big( \frac{ \sqrt{n} }{ \sqrt{m} } \| Y-u^* \|_2 + \lambda ( 2 \sqrt{d} + 2 \sqrt{\log(m/\delta)} ) \Big) \cdot T\\
= & ~ \epsilon_W
\end{align*}
where the first step follows from writing $w_r(t) - w_r(0)$ as an integral of its derivative and applying the triangle inequality, the second step follows from Eq.~\eqref{eq:332_1} and computing the integral, and the last step follows from the definition of $\epsilon_W$ in Eq.~\eqref{eq:def_epsilon_W}.
\end{proof}
\begin{lemma}[Conclusion 2]\label{lem:hypothesis_2}
If $\forall r \in [m]$,
\begin{align*}
\| w_r(0) - w_r(t) \|_2 \leq \epsilon_W < 1,
\end{align*}
then
\begin{align*}
\| H(0) - H(t) \|_F \leq 2n \epsilon_W
\end{align*}
holds with probability $1-n^2 \cdot \exp{(-m\epsilon_W/10)}$.
\end{lemma}
\begin{proof}
Directly applying Lemma~\ref{lem:lemma_4.2_in_sy19}, we finish the proof.
\end{proof}
\begin{lemma}[Conclusion 3]\label{lem:hypothesis_3}
Fix $\epsilon_H'>0$ independent of $t$. If for all $\tau < t$
\begin{align*}
\| H(0) - H^{\cts} \| \leq 4n ( \log{(n/\delta)} / m )^{1/2} \leq \Lambda_0/4
\end{align*}
and
\begin{align*}
\| H(0) - H(\tau) \| \leq \epsilon_H' \leq \Lambda_0/4
\end{align*}
and
\begin{align}\label{eq:335_2}
4 n ( \log ( n / \delta ) / m )^{1/2} \leq \frac{\epsilon_{\train}}{8\kappa^2\|Y-u^*\|_2}(\kappa^2\Lambda_0+\lambda)
\end{align}
and
\begin{align}\label{eq:335_3}
\epsilon_H' \leq \frac{\epsilon_{\train}}{8\kappa^2\|Y-u^*\|_2}(\kappa^2\Lambda_0+\lambda)
\end{align}
then we have
\begin{align*}
\| u_{\nn}(t) - u^* \|_2^2 \leq \max\{\exp(-(\kappa^2\Lambda_0 + \lambda)t/2 ) \cdot \| u_{\nn}(0) - u^* \|_2^2, ~ \epsilon_{\train}^2\}.
\end{align*}
\end{lemma}
\begin{proof}
By triangle inequality we have
\begin{align}\label{eq:335_1}
\| H(\tau) - H^{\cts} \|
\leq & ~ \| H(0) - H(\tau) \| + \| H(0) - H^{\cts} \| \notag \\
\leq & ~ \epsilon_H' + 4n( \log{(n/\delta)} / m)^{1/2} \notag\\
\leq & ~ \Lambda_0/2
\end{align}
holds for all $\tau < t$. Denote $ \epsilon_H = \epsilon_H' + 4n( \log{(n/\delta)} / m )^{1/2}$, we have $\| H(\tau) - H^{\cts} \| \leq\epsilon_H \leq \Lambda_0/2$, which satisfies the condition of Lemma~\ref{lem:linear_converge_nn}. Thus, for any $\tau < t$, we have
\begin{align}\label{eq:induction_linear_convergence}
\frac{ \d \| u_{\nn}(\tau) - u^* \|_2^2 }{ \d \tau }
\leq & ~ - ( \kappa^2 \Lambda_0 + \lambda ) \cdot \| u_{\nn} (\tau) - u^* \|_2^2 + 2 \kappa^2 \| H(\tau) - H^{\cts} \| \cdot \| u_{\nn} (\tau) - u^* \|_2 \cdot \| Y - u^* \|_2 \notag\\
\leq & ~ - ( \kappa^2 \Lambda_0 + \lambda ) \cdot \| u_{\nn} (\tau) - u^* \|_2^2 + 2 \kappa^2 \epsilon_H \cdot \| u_{\nn} (\tau) - u^* \|_2 \cdot \|Y-u^*\|_2
\end{align}
where the first step follows from Lemma~\ref{lem:linear_converge_nn}, the second step follows from Eq.~\eqref{eq:335_1}. Now let us discuss two cases:
{\bf Case 1.} If for all $\tau < t$, $\|u_{\nn}(\tau) - u^*\|_2 \geq \epsilon_{\train}$ always holds, we want to argue that
\begin{align*}
\| u_{\nn}(t) - u^* \|_2^2 \leq \exp(-(\kappa^2 \Lambda_0 + \lambda) t/2) \cdot \| u_{\nn}(0) - u^* \|_2^2.
\end{align*}
Note by assumption~\eqref{eq:335_2} and~\eqref{eq:335_3}, we have
\begin{align*}
\epsilon_H \leq \frac{\epsilon_{\train}}{4 \kappa^2 \|Y-u^*\|_2}(\kappa^2 \Lambda_0+\lambda)
\end{align*}
which, together with the Case 1 hypothesis $\|u_{\nn}(\tau)-u^*\|_2 \geq \epsilon_{\train}$, implies that
\begin{align*}
2 \kappa^2 \epsilon_H \cdot \|Y-u^*\|_2 \leq (\kappa^2 \Lambda_0 + \lambda)/2\cdot \|u_{\nn}(\tau)-u^*\|_2
\end{align*}
holds for any $\tau < t$. Thus, plugging into~\eqref{eq:induction_linear_convergence},
\begin{align*}
\frac{ \d \| u_{\nn}(\tau) - u^* \|_2^2 }{ \d \tau }
\leq ~ - ( \kappa^2 \Lambda_0 + \lambda )/2 \cdot \| u_{\nn} (\tau) - u^* \|_2^2,
\end{align*}
holds for all $\tau < t$, which implies
\begin{align*}
\| u_{\nn}(t) - u^* \|_2^2 \leq \exp{(-(\kappa^2 \Lambda_0 + \lambda)t/2)} \cdot \| u_{\nn}(0) - u^* \|_2^2.
\end{align*}
{\bf Case 2.} If there exists $\bar{\tau} < t$ such that $\|u_{\nn}(\bar{\tau}) - u^*\|_2 < \epsilon_{\train}$, we want to argue that $\|u_{\nn}(t) - u^*\|_2 < \epsilon_{\train}$.
Note by assumption~\eqref{eq:335_2} and~\eqref{eq:335_3}, we have
\begin{align*}
\epsilon_H \leq \frac{\epsilon_{\train}}{4\kappa^2\|Y-u^*\|_2}(\kappa^2\Lambda_0+\lambda)
\end{align*}
which, together with $\|u_{\nn}(\bar{\tau}) - u^*\|_2 < \epsilon_{\train}$, implies that
\begin{align*}
2\kappa^2 \epsilon_H \cdot \|u_{\nn}(\bar{\tau}) - u^*\|_2 \cdot \|Y-u^*\|_2 \leq (\kappa^2 \Lambda_0 + \lambda) \cdot \epsilon_{\train}^2.
\end{align*}
Thus, plugging into~\eqref{eq:induction_linear_convergence},
\begin{align*}
\frac{ \d (\| u_{\nn}(\tau) - u^* \|_2^2 -\epsilon_{\train}^2)}{ \d \tau }
\leq ~ - ( \kappa^2 \Lambda_0 + \lambda ) \cdot (\| u_{\nn} (\tau) - u^* \|_2^2 - \epsilon_{\train}^2)
\end{align*}
holds at $\tau = \bar{\tau}$, which implies that $ e^{( \kappa^2\Lambda_0 + \lambda )\tau} (\| u_{\nn}(\tau) - u^* \|_2^2 -\epsilon_{\train}^2) $ is non-increasing at $\tau = \bar{\tau}$. Since $\| u_{\nn}(\bar{\tau}) - u^* \|_2^2 - \epsilon_{\train}^2 < 0$, by induction the quantity $ e^{( \kappa^2\Lambda_0 + \lambda )\tau} (\| u_{\nn}(\tau) - u^* \|_2^2 -\epsilon_{\train}^2) $ remains non-increasing and $\| u_{\nn}(\tau) - u^* \|_2^2 - \epsilon_{\train}^2 < 0$ for all $\bar{\tau} \leq \tau \leq t$, which implies
\begin{align*}
\|u_{\nn}(t) - u^*\|_2 < \epsilon_{\train}.
\end{align*}
Combining the above two cases, we conclude
\begin{align*}
\| u_{\nn}(t) - u^* \|_2^2 \leq \max\{\exp(-(\kappa^2\Lambda_0 + \lambda)t/2 ) \cdot \| u_{\nn}(0) - u^* \|_2^2, ~ \epsilon_{\train}^2\}.
\end{align*}
\end{proof}
\begin{lemma}[Conclusion 4]\label{lem:hypothesis_4}
Fix $\epsilon_W\in(0,1)$ independent of $t$. If $\forall r \in [m]$, we have
\begin{align*}
\| w_r(t) - w_r(0) \|_2 \leq \epsilon_W
\end{align*}
then
\begin{align*}
\| \k_t( x_{\test} , X ) - \k_{0} ( x_{\test} , X ) \|_2 \leq \epsilon_K' = 2\sqrt{n}\epsilon_W
\end{align*}
holds with probability at least $1- n\cdot\exp{(-m\epsilon_W/10)}$.
\end{lemma}
\begin{proof}
Recall the definitions of $\k_0$ and $\k_t$:
\begin{align*}
\k_0 (x_{\test},x_i) = & ~ \frac{1}{m} \sum_{r=1}^m x_{\test}^\top x_i \sigma'( x_{\test}^\top w_r(0) ) \sigma'( x_i^\top w_r(0) ) \\
\k_t (x_{\test},x_i) = & ~ \frac{1}{m} \sum_{r=1}^m x_{\test}^\top x_i \sigma'( x_{\test}^\top w_r(t) ) \sigma'( x_i^\top w_r(t) )
\end{align*}
By direct calculation (using $|x_{\test}^\top x_i| \leq 1$) we have
\begin{align*}
\| \k_0 ( x_{\test}, X ) - \k_t ( x_{\test}, X ) \|_2^2\le \sum_{i=1}^n \Big( \frac{1}{m}\sum_{r=1}^m |s_{r,i}| \Big)^2,
\end{align*}
where
\begin{align*}
s_{r,i} = {\bf 1}[w_r(0)^\top x_{\test} \ge 0, w_r(0)^\top x_i \ge 0] - {\bf 1}[w_r(t)^\top x_{\test} \ge 0, w_r(t)^\top x_i \ge 0], \forall r \in [m], i \in [n].
\end{align*}
Fix $i\in [n]$. By Bernstein's inequality (Lemma~\ref{lem:bernstein}), we have
\begin{align*}
\Pr \Big[ \frac{1}{m}\sum_{r=1}^m |s_{r,i}| \ge 2\epsilon_W \Big] \le \exp ( - m \epsilon_W / 10 ) .
\end{align*}
Thus, applying a union bound over all training data $x_i,~i\in[n]$, we conclude that
\begin{align*}
\Pr[\| \k_0 ( x_{\test}, X ) - \k_t ( x_{\test}, X ) \|_2 \le 2\sqrt{n}\epsilon_W] \ge 1-n\cdot\exp{(-m\epsilon_W/10)}.
\end{align*}
Since by definition $\epsilon_K' = 2\sqrt{n} \epsilon_W$, this finishes the proof.
\end{proof}
Now we summarize in Table~\ref{tab:condition_only} all the conditions that need to be satisfied for the induction to go through.
\begin{table}[htb]\caption{Summary of conditions for induction}\label{tab:condition_only}
\centering
{
\begin{tabular}{| l | l | l |}
\hline
{\bf No.} & {\bf Condition} & {\bf Place} \\ \hline
1 & $\epsilon_W \leq 1$ & Lem.~\ref{lem:hypothesis_1} \\ \hline
2 & $ \epsilon_H' \leq \Lambda_0/4$ & Lem.~\ref{lem:hypothesis_3} \\ \hline
3 & $4n(\frac{\log{(n/\delta)}}{m})^{1/2} \leq \Lambda_0/4$ & Lem.~\ref{lem:hypothesis_3} \\ \hline
4 & $4n(\frac{\log{(n/\delta)}}{m})^{1/2} \leq \frac{\epsilon_{\train}}{8\kappa^2\|Y-u^*\|_2}(\kappa^2\Lambda_0+\lambda)$ & Lem.~\ref{lem:hypothesis_3} \\ \hline
5 & $\epsilon_H' \leq \frac{\epsilon_{\train}}{8\kappa^2\|Y-u^*\|_2}(\kappa^2\Lambda_0+\lambda)$ & Lem.~\ref{lem:hypothesis_3} \\
\hline
\end{tabular}
}
\end{table}
Note that by choosing $\kappa=\wt{O}(\frac{\epsilon\Lambda_0}{n})$, $T=\wt{O}(\frac{1}{\kappa^2(\Lambda_0+\lambda)})$, $\epsilon_{\train} = \wt{O}(\|u_{\nn}(0)-u^*\|_2)$, $m \geq\wt{O}(\frac{n^{10}d}{\epsilon^6 \Lambda_0^{10}})$ and $\lambda=\wt{O}(\frac{1}{\sqrt{m}})$, we have
\begin{align}\label{eq:y-u*}
\|Y-u^*\|_2 = & ~ \| \lambda ( \kappa^2 H^{\cts}+\lambda I_n)^{-1}Y\|_2 \notag\\
\leq & ~ \lambda\|(\kappa^2 H^{\cts}+\lambda I_n)^{-1}\|\|Y\|_2\notag\\
\leq & ~ \wt{O}(\frac{\lambda\sqrt{n}}{\kappa^2\Lambda_0}) \notag\\
\leq & ~ \wt{O}(\frac{1}{n^{2.5}})
\end{align}
where the first step follows from the definition of $u^*$, the second step follows from the operator norm bound $\|Az\|_2 \leq \|A\|\cdot\|z\|_2$, the third step follows from $\|Y\|_2=O(\sqrt{n})$, and the last step follows from the choice of the parameters.
Further, with probability $1-\delta$, we have
\begin{align*}
\|u_{\nn}(0) - u^*\|_2 \leq & ~ \|u_{\nn}(0)\|_2 + \|Y-u^*\|_2 + \|Y\|_2 \\
\leq & ~ \wt{O}(\frac{\epsilon\Lambda_0}{\sqrt{n}}) + \wt{O}(\frac{1}{n^{2.5}}) + \wt{O}(\sqrt{n}) \\
\leq & ~ \wt{O}(\sqrt{n})
\end{align*}
where the first step follows from the triangle inequality, the second step follows from Lemma~\ref{lem:epsilon_init}, Eq.~\eqref{eq:y-u*} and $\|Y\|_2=O(\sqrt{n})$, and the last step follows from $\epsilon,~\Lambda_0<1$. By the same reasoning,
\begin{align*}
\|u_{\nn}(0) - u^*\|_2 \geq & ~ -\|u_{\nn}(0)\|_2 - \|Y-u^*\|_2 + \|Y\|_2 \\
\geq & ~ -\wt{O}(\frac{\epsilon\Lambda_0}{\sqrt{n}}) - \wt{O}(\frac{1}{n^{2.5}}) + \wt{O}(\sqrt{n}) \\
\geq & ~ \wt{\Omega}(\sqrt{n}).
\end{align*}
Thus, we have $\|u_{\nn}(0) - u^*\|_2=\wt{\Theta}(\sqrt{n})$, where $\wt{\Omega}(\cdot)$ and $\wt{\Theta}(\cdot)$ hide the same $\poly\log$ factors.
Now, by direct calculation, all the induction conditions are satisfied with high probability. Note that the failure probability only comes from Lemmas~\ref{lem:epsilon_init},~\ref{lem:random_init},~\ref{lem:hypothesis_2} and~\ref{lem:hypothesis_4}, which only depend on the initialization. By a union bound over these failure events, all four conclusions in Lemma~\ref{lem:induction} hold with high probability, which completes the proof.
\subsubsection{Final result for upper bounding $| u_{\nn,\test}(T) - u_{\ntk,\test}(T) |$}\label{sec:equiv_bound_nn_test_T_and_ntk_test_T}
In this section, we prove Lemma~\ref{lem:equivalence_at_T}.
\begin{lemma}[Upper bounding test error]\label{lem:equivalence_at_T}
Given training data matrix $X \in \R^{n \times d}$ and corresponding label vector $Y \in \R^n$. Fix the total number of iterations $T > 0$. Given arbitrary test data $x_{\test} \in \R^d$. Let $u_{\nn,\test}(t) \in \R$ and $u_{\ntk,\test}(t) \in \R$ be the test data predictors defined in Definition~\ref{def:nn} and Definition~\ref{def:krr_ntk} respectively. Let $\kappa\in(0,1)$ be the corresponding multiplier. Given accuracy $\epsilon>0$, suppose $\kappa = \wt{O}(\frac{\epsilon\Lambda_0}{n})$, $T=\wt{O}(\frac{1}{\kappa^2\Lambda_0})$, $m \geq \wt{O}(\frac{n^{10}d}{\epsilon^6\Lambda_0^{10}})$ and $\lambda \leq \wt{O}(\frac{1}{\sqrt{m}})$. Then for any $x_{\test}\in\R^d$, with probability at least $1-\delta$ over the random initialization, we have
\begin{align*}
\| u_{\nn,\test}(T) - u_{\ntk,\test}(T) \|_2 \leq \epsilon/2,
\end{align*}
where $\wt{O}(\cdot)$ hides $\poly\log( n / (\epsilon\delta\Lambda_0) )$ factors.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:more_concreate_bound}, we have
\begin{align}\label{eq:b44_1}
|u_{\nn,\test}(T)-u_{\ntk,\test}(T)| \leq & ~ (1+\kappa^2 nT)\epsilon_{\init} + \kappa^2 \epsilon_K \cdot \Big( \frac{ \| u^* \|_2 }{ \kappa^2 \Lambda_0 + \lambda } + \|u^*-Y\|_2 T\Big) \notag \\
& ~ + \sqrt{n}T^2\kappa^4 \epsilon_H ( \| u^* \|_2 + \| u^* - Y \|_2 )
\end{align}
By Lemma~\ref{lem:epsilon_init}, we can choose $\epsilon_{\init} = \epsilon\Lambda_0/n$.
Further, note
\begin{align*}
\|\k_{\ntk}(x_{\test},X)-\k_{t}(x_{\test},X)\|_2 \leq & ~ \|\k_{\ntk}(x_{\test},X)-\k_0(x_{\test},X)\|_2 + \|\k_0(x_{\test},X)-\k_{t}(x_{\test},X)\|_2 \\
\leq & ~ ( 2n \log{(2n/\delta)} / m )^{1/2} + \|\k_0(x_{\test},X)-\k_{t}(x_{\test},X)\|_2 \\
\leq & ~ \wt{O}(\frac{\epsilon \Lambda_0^2}{n^{1.5}})
\end{align*}
where the first step follows from triangle inequality, the second step follows from Lemma~\ref{lem:random_init}, and the last step follows from Lemma~\ref{lem:induction}. Thus, we can choose $\epsilon_K = \frac{\epsilon \Lambda_0^2}{n^{1.5}}$.
Also,
\begin{align*}
\|H^{\cts}-H(t)\| \leq & ~ \|H^{\cts}-H(0)\| + \|H(0)-H(t)\|_2 \\
\leq & ~ 4n ( \log(n/\delta) / m )^{1/2} + \|H(0)-H(t)\|_2 \\
\leq & ~ \wt{O}(\frac{\epsilon \Lambda_0^2}{n})
\end{align*}
where the first step follows from triangle inequality, the second step follows from Lemma~\ref{lem:random_init}, and the last step follows from Lemma~\ref{lem:induction}. Thus, we can choose $\epsilon_H = \frac{\epsilon \Lambda_0^2}{n}$.
Note that $\|u^*\|_2 \leq \sqrt{n}$ and $\|u^*-Y\|_2\leq\sqrt{n}$; plugging the values of $\epsilon_{\init},~\epsilon_K,~\epsilon_H$ into Eq.~\eqref{eq:b44_1}, we have
\begin{align*}
|u_{\nn,\test}(T)-u_{\ntk,\test}(T)| \leq \epsilon/2.
\end{align*}
\end{proof}
\subsubsection{Main result for test data prediction equivalence}\label{sec:equiv_main_test_equivalence}
In this section, we restate and prove Theorem~\ref{thm:main_test_equivalence_intro}.
\begin{theorem}[Equivalence between training net with regularization and kernel ridge regression for test data prediction, restatement of Theorem~\ref{thm:main_test_equivalence_intro}]\label{thm:main_test_equivalence}
Given training data matrix $X \in \R^{n \times d}$ and corresponding label vector $Y \in \R^n$. Let $T > 0$ be the total number of iterations. Given arbitrary test data $x_{\test} \in \R^d$. Let $u_{\nn,\test}(t) \in \R$ and $u_{\test}^* \in \R$ be the test data predictors defined in Definition~\ref{def:nn} and Definition~\ref{def:krr_ntk} respectively.
For any accuracy $\epsilon \in (0,1/10)$ and failure probability $\delta \in (0,1/10)$, suppose $\kappa = \wt{O}(\frac{\epsilon\Lambda_0}{n})$, $T=\wt{O}(\frac{1}{\kappa^2\Lambda_0})$, $m \geq \wt{O}(\frac{n^{10}d}{\epsilon^6\Lambda_0^{10}})$ and $\lambda \leq \wt{O}(\frac{1}{\sqrt{m}})$. Then for any $x_{\test}\in\R^d$, with probability at least $1-\delta$ over the random initialization, we have
\begin{align*}
\| u_{\nn,\test}(T) - u_{\test}^* \|_2 \leq \epsilon.
\end{align*}
Here $\wt{O}(\cdot)$ hides $\poly\log(n/(\epsilon \delta \Lambda_0 ))$ factors.
\end{theorem}
\begin{proof}
It follows from combining the bound $\| u_{\nn,\test}(T) - u_{\ntk,\test}(T) \|_2 \leq \epsilon/2$ from Lemma~\ref{lem:equivalence_at_T} with the bound $\| u_{\ntk,\test}(T) - u_{\test}^* \|_2 \leq \epsilon/2$ from Lemma~\ref{lem:u_ntk_test_T_minus_u_test_*}, using the triangle inequality.
\end{proof}
\subsection{Equivalence between training net with regularization and kernel ridge regression for training data prediction}\label{sec:train}
In this section, we restate and prove Theorem~\ref{thm:main_train_equivalence_intro}.
Note that the proof of the equivalence result for the test data in the previous sections automatically gives an equivalence result for the training data predictions. Specifically, the third conclusion in Lemma~\ref{lem:induction} characterizes the training prediction $u_{\nn}(t)$ throughout the training process. Thus, we have the following theorem characterizing the equivalence between training a net with regularization and kernel ridge regression for the training data.
\begin{theorem}[Equivalence between training net with regularization and kernel ridge regression for training data prediction, restatement of Theorem~\ref{thm:main_train_equivalence_intro}]\label{thm:main_train_equivalence}
Given training data matrix $X \in \R^{n \times d}$ and corresponding label vector $Y \in \R^n$. Let $T > 0$ be the total number of iterations. Let $u_{\nn}(t) \in \R^n$ and $u^* \in \R^n$ be the training data predictors defined in Definition~\ref{def:nn} and Definition~\ref{def:krr_ntk} respectively. Let $\kappa=1$ be the corresponding multiplier.
Given any accuracy $\epsilon \in ( 0 , 1/10 )$ and failure probability $\delta \in (0,1/10)$, if $\kappa = 1$, $T=\wt{O}(\frac{1}{\Lambda_0})$, network width $m \geq \wt{O}(\frac{n^4d}{\Lambda_0^4\epsilon})$ and regularization parameter $\lambda \leq \wt{O}(\frac{1}{\sqrt{m}})$, then with probability at least $1-\delta$ over the random initialization, we have
\begin{align*}
\|u_{\nn}(T) - u^*\|_2 \leq \epsilon.
\end{align*}
Here $\wt{O}(\cdot)$ hides $\poly\log(n/(\epsilon \delta \Lambda_0 ))$.
\end{theorem}
\begin{proof}
Let $\kappa=1$, $T=\wt{O}(\frac{1}{\Lambda_0})$, $\lambda \leq \wt{O}(\frac{1}{\sqrt{m}})$, $m \geq \wt{O}(\frac{n^4d}{\Lambda_0^4\epsilon})$ and $\epsilon_{\train} = \epsilon$ in Lemma~\ref{lem:induction}. Then all the conditions in Table~\ref{tab:condition_only} hold, so the third conclusion in Lemma~\ref{lem:induction} holds. Thus, with probability $1-\delta$, we have
\begin{align*}
\|u_{\nn}(T) - u^*\|_2^2 \leq & ~ \max\{\exp(-(\kappa^2\Lambda_0 + \lambda) T/2) \cdot \| u_{\nn}(0) - u^* \|_2^2, ~ \epsilon_{\train}^2\}\\
\leq & ~ \max\{\exp(-(\Lambda_0 + \lambda) T/2) \cdot \| u_{\nn}(0) - u^* \|_2^2, ~ \epsilon^2\}\\
\leq & ~ \epsilon^2
\end{align*}
where the first step follows from Lemma~\ref{lem:induction}, the second step follows from $\kappa =1$ and $\epsilon_{\train} = \epsilon$, the last step follows from $T=\wt{O}(\frac{1}{\Lambda_0})$ and $\| u_{\nn}(0) - u^* \|_2^2 \leq n$ with high probability.
\end{proof}
\begin{table}[!t]\caption{Summary of parameters of main results in Section~\ref{sec:equiv}}\label{tab:xxx}
\centering
{
\begin{tabular}{ | l | l | l | l | l | l |}
\hline
{\bf Statement} & $\kappa$ & $T$ & $m$ & $\lambda$ & {\bf Comment} \\ \hline
Theorem~\ref{thm:main_test_equivalence} & $\epsilon \Lambda_0 / n$ & $1/(\kappa^2 \Lambda_0)$ & $\Lambda_0^{-10} \epsilon^{-6} n^{10} d$ & $1/\sqrt{m}$ & test \\ \hline
Theorem~\ref{thm:main_train_equivalence} & $1$ & $1/\Lambda_0$ & $\Lambda_0^{-4} \epsilon^{-1} n^4 d $ & $1/\sqrt{m}$ & train \\ \hline
\end{tabular}
}
\end{table}
\section{Extension to other neural network models}\label{sec:gen_dnn}
In the previous sections, we discussed a simple neural network model: a two-layer ReLU neural network with only the first layer trained. We remark that our results can be naturally extended to multi-layer ReLU deep neural networks with all parameters trained together.
Note that the core of the connection between regularized NNs and KRR is the similarity between their gradient flows, as shown in Corollary~\ref{cor:ntk_gradient} and Corollary~\ref{cor:nn_gradient}: the gradient flows are given by
\begin{align*}
\frac{\d u_{\ntk,\test}(t)}{\d t}&= - \kappa^2 \k_{\ntk}(x_{\test},X)^\top(u_{\ntk}(t)-Y)-\lambda u_{\ntk,\test}(t)\\%\label{eq:320_1_ex}\\
\frac{\d u_{\nn,\test}(t)}{\d t}&= - \kappa^2 \k_{t}(x_{\test},X)^\top(u_{\nn}(t)-Y)-\lambda u_{\nn,\test}(t)
\end{align*}
Note that these gradient flows consist of two terms: the first term $- \kappa^2 \k(x_{\test},X)^\top(u(t)-Y)$ comes from standard neural network training without $\ell_2$ regularization, and the second term $-\lambda u_{\test}(t)$ comes from the regularizer and can be derived directly using the piece-wise linearity of the two-layer ReLU NN with respect to the parameters of the first layer.
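To make the comparison concrete, the following minimal sketch (Python/NumPy; the kernel matrix, labels, step size and horizon are illustrative placeholders rather than quantities from our analysis) Euler-integrates the training-prediction analogue of the flows above, $\frac{\d u_{\ntk}(t)}{\d t} = -\kappa^2 H^{\cts}(u_{\ntk}(t)-Y) - \lambda u_{\ntk}(t)$, and checks that it converges to the closed-form optimum $u^*=\kappa^2 H^{\cts}(\kappa^2 H^{\cts}+\lambda I_n)^{-1}Y$; the neural network flow differs only in that $H^{\cts}$ is replaced by the time-varying kernel $H(t)$.
\begin{verbatim}
# Minimal numerical sketch (synthetic PSD kernel H and labels Y as
# placeholders; this is an illustration, not the paper's experiment).
import numpy as np

n, kappa, lam, dt, steps = 10, 0.5, 0.1, 1e-3, 100000
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
H = A @ A.T / n + np.eye(n)          # a PSD stand-in for H^{cts}
Y = rng.standard_normal(n)

# Closed-form KRR optimum: u* = kappa^2 H (kappa^2 H + lam I)^{-1} Y
u_star = kappa**2 * H @ np.linalg.solve(kappa**2 * H + lam * np.eye(n), Y)

# Euler integration of du/dt = -kappa^2 H (u - Y) - lam u
u = np.zeros(n)
for _ in range(steps):
    u = u + dt * (-kappa**2 * H @ (u - Y) - lam * u)

print(np.linalg.norm(u - u_star))    # close to 0 for small dt and large steps
\end{verbatim}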
Now consider training a multi-layer ReLU neural network with regularization. We claim that the above similarity between the gradient flows of the NN and KRR still holds: 1) the similarity of the first term $- \kappa^2 \k(x_{\test},X)^\top(u(t)-Y)$ has already been shown in previous literature \cite{adhlsw19,als19b}, and 2) the similarity of the second term $-\lambda u_{\test}(t)$ comes from the piece-wise linearity of the deep ReLU neural network with respect to all trained parameters. In the common case where all the parameters are trained together, the equivalence still holds as long as we scale up the network width by the number of layers, as shown in the following theorem:
\begin{theorem}\label{thm:dnn}
Consider training an $L$-layer ReLU neural network with $\ell_2$ regularization. Let $u_{\nn,\test}(t)$ denote the neural network predictor at time $t$, and $u_{\test}^*$ denote the kernel ridge regression predictor. Given any accuracy $\epsilon\in(0,1/10)$ and failure probability $\delta\in(0,1/10)$,
let the multiplier be $\kappa = \poly(\epsilon,\Lambda_0,1/n,1/L)$, the number of iterations $T=\poly(1/\epsilon,1/\Lambda_0,n,L)$, the network width $m \geq \poly(n,d,1/\epsilon,1/\Lambda_0, L)$ and the regularization parameter $\lambda \leq \poly(1/n,1/d,\epsilon,\Lambda_0,1/L)$. Then with probability at least $1-\delta$ over random initialization, we have
\begin{align*}
\| u_{\nn,\test}(T) - u_{\test}^* \|_2 \leq \epsilon.
\end{align*}
Here we omit $\poly\log(n/(\epsilon \delta \Lambda_0 ))$ factors.
\end{theorem}
The results under leverage score sampling can be argued in the same way.
We also remark that it is possible to further extend our results to convolutional neural networks (CNNs), by making use of the convolutional neural tangent kernel (CNTK) discussed in \cite{adhlsw19}, and to training with stochastic gradient descent rather than gradient descent. However, these extensions require more detailed proofs and are out of the scope of this work.
\section{Introduction}
Kernel methods are among the most common techniques in various machine learning problems. One classical application is kernel ridge regression (KRR). Given training data $X = [x_1,\cdots,x_n]^\top \in \R^{n \times d}$, corresponding labels $Y=[y_1, \cdots, y_n]^\top \in \R^n$ and regularization parameter $\lambda>0$, the output estimate of KRR for any given input $z$ can be written as:
\begin{align}\label{eq:intro_krr}
f(z) = \k( z , X )^\top ( K + \lambda I_n )^{-1} Y,
\end{align}
where $\k(\cdot,\cdot)$ denotes the kernel function and $K \in \R^{n\times n}$ denotes the kernel matrix.
Despite being powerful and well understood, kernel ridge regression suffers from costly computation when dealing with large datasets, since a direct implementation of Eq.~\eqref{eq:intro_krr} requires $O(n^3)$ running time. Therefore, intensive research has been dedicated to scalable methods for KRR \cite{b13,am15,zdw15,acw17,mm17,znvkr20}. One of the most popular approaches is random Fourier feature sampling, originally proposed by \cite{rr08} for shift-invariant kernels. They construct a finite dimensional random feature vector $\phi : \R^d \to \C^s$ through sampling that approximates the kernel function, i.e., $\k(x,z) \approx \phi(x)^* \phi(z)$ for data $x,z\in\R^d$. The random features help solve KRR approximately in $O(ns^2+n^2)$ running time, which improves the computational cost whenever $s \ll n$. The work \cite{akmmvz17} advanced this result by introducing leverage score sampling to take the regularization term into consideration.
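For concreteness, the following sketch (Python/NumPy; the Gaussian kernel, its unit bandwidth, and the toy sizes are illustrative assumptions, not part of our results) compares the exact KRR prediction of Eq.~\eqref{eq:intro_krr} with its random Fourier feature approximation in the spirit of \cite{rr08}, where the $n\times n$ kernel solve is replaced by an $s\times s$ solve in feature space.
\begin{verbatim}
# Illustrative sketch: exact KRR vs. random Fourier features (RFF) for the
# Gaussian kernel k(x,z) = exp(-||x-z||^2 / 2).  All sizes are toy values.
import numpy as np

rng = np.random.default_rng(0)
n, d, s, lam = 200, 5, 1000, 0.1
X = rng.standard_normal((n, d)); Y = rng.standard_normal(n)
z = rng.standard_normal(d)                       # a test point

sq = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
K = np.exp(-sq / 2)                              # exact kernel matrix
kz = np.exp(-((X - z)**2).sum(-1) / 2)           # k(z, X)
f_exact = kz @ np.linalg.solve(K + lam*np.eye(n), Y)

W = rng.standard_normal((s, d)); b = rng.uniform(0, 2*np.pi, s)
phi = lambda A: np.sqrt(2.0/s) * np.cos(A @ W.T + b)    # RFF feature map
Phi = phi(X)                                     # n x s feature matrix
f_rff = (phi(z[None]) @ np.linalg.solve(Phi.T@Phi + lam*np.eye(s), Phi.T@Y))[0]

print(f_exact, f_rff)    # close, up to the random feature approximation error
\end{verbatim}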
In this work, we follow the approach in \cite{akmmvz17} and naturally generalize the result to a broader class of kernels, which is of the form
\begin{align*}
\k(x,z) = \E_{w \sim p}[\phi(x,w)^\top\phi(z,w)],
\end{align*}
where $\phi : \R^{d} \times \R^{d_1} \to \R^{d_2}$ is a finite dimensional vector and $p : \R^{d_1} \to \R_{\geq 0}$ is a probability distribution. We apply the leverage score sampling technique in this generalized case to obtain a tighter upper bound on the dimension of the random features.
Further, we discuss the application of our theory to neural network training. Over the last two years, there has been a long line of over-parametrization theory works on the convergence of deep neural networks \cite{ll18,dzps19,als19a,als19b,dllwz19,adhlw19,adhlsw19,sy19,bpsw20}, all of which either explicitly or implicitly use properties of the neural tangent kernel \cite{jgh18}. However, most of those results focus on neural network training without regularization, while in practice regularization (which originated in classical machine learning) is widely used when training deep neural networks. Therefore, in this work we rigorously build the equivalence between training a ReLU deep neural network with $\ell_2$ regularization and neural tangent kernel ridge regression.
We observe that the initialization of neural network training corresponds to approximating the neural tangent kernel with random features, whose dimension is proportional to the width of the network. This motivates us to bring the leverage score sampling theory into neural network training. We present a new equivalence between neural nets and kernel ridge regression under leverage score sampling initialization, which potentially improves upon the network width required by previous equivalence results.
We summarize our main results and contributions as follows:
\begin{itemize}
\item Generalize the leverage score sampling theory for kernel ridge regression to a broader class of kernels.
\item Connect the leverage score sampling theory with neural network training.
\item Theoretically prove the equivalence between training regularized neural network and kernel ridge regression under both random Gaussian initialization and leverage score sampling initialization.
\end{itemize}
\section{Generalization result of leverage score sampling for approximating kernels}\label{sec:ger_lev}
In this section, we generalize the result of Lemma 8 in \cite{akmmvz17} to a broader class of kernels and feature vectors. Specifically, we prove Theorem~\ref{thm:leverage_score_intro}.
Section~\ref{sec:pre_lev} introduces the relevant kernels and random features; there we also restate Definitions~\ref{def:leverage_score_intro} and~\ref{def:modify_random_feature_lev} for leverage score sampling and random features. Section~\ref{sec:result_lev} restates and proves our main result, Theorem~\ref{thm:leverage_score_intro}.
\subsection{Preliminaries}\label{sec:pre_lev}
\begin{definition}[Kernel]\label{def:kernel}
Consider a kernel function $\k: \R^d \times \R^d \to \R$ which can be written as
\begin{align*}
\k(x,z) = \E_{w \sim p}[\phi(x,w)^\top\phi(z,w)],
\end{align*}
for any data $x,z\in\R^d$, where $\phi:\R^{d} \times \R^{d_1}\to \R^{d_2}$ denotes a finite dimensional vector and $p:\R^{d_1}\to\R_{\geq 0}$ denotes a probability density function. Given data $x_1,\cdots,x_n\in\R^d$, we define the corresponding kernel matrix $K \in \R^{n\times n}$ as
\begin{align*}
K_{i,j} = \k(x_i,x_j) = \E_{w\sim p}[\phi(x_i,w)^\top\phi(x_j,w)]
\end{align*}
\end{definition}
\begin{definition}[Random features]\label{def:random_feature}
Given $m$ weight vectors $w_1, \cdots, w_m \in \R^{d_1}$, let $\varphi : \R^d \rightarrow \R^{md_2}$ be defined as
\begin{align*}
\varphi(x) = \Big[ \frac{1}{ \sqrt{m} }\phi(x,w_1)^\top , \cdots, \frac{1}{ \sqrt{m} }\phi(x,w_m)^\top \Big]^\top
\end{align*}
If $w_1, \cdots, w_m$ are drawn according to $p(\cdot)$, then
\begin{align*}
\k(x,z) = \E_{p} [ \varphi(x)^\top \varphi(z) ]
\end{align*}
Given data matrix $X=[x_1,\cdots,x_n]^\top \in \R^{n \times d}$, define $\Phi : \R^{d_1} \to \R^{n\times d_2}$ as
\begin{align*}
\Phi(w) = [\phi(x_1,w)^\top, \cdots, \phi(x_n,w)^\top]^\top
\end{align*}
If $w$ is drawn according to $p(\cdot)$, then
\begin{align}\label{eq:ls_1}
K = \E_{p} [ \Phi(w)\Phi(w)^\top ]
\end{align}
Further, define $\Psi\in\R^{n\times md_2}$ as
\begin{align*}
\Psi = [\varphi(x_1),\cdots,\varphi(x_n)]^\top.
\end{align*}
Then we have
\begin{align*}
\Psi\Psi^\top = \frac{1}{m}\sum_{r=1}^m \Phi(w_r)\Phi(w_r)^\top
\end{align*}
If $w_1, \cdots, w_m$ are drawn according to $p(\cdot)$, then
\begin{align*}
K = \E_{p} [ \Psi\Psi^\top ]
\end{align*}
\end{definition}
\begin{definition}[Modified random features, restatement of Definition~\ref{def:modify_random_feature_lev}]\label{def:modify_random_feature}
Given any probability density function $q(\cdot)$ whose support includes that of $p(\cdot)$. Given $m$ weight vectors $w_1, \cdots, w_m \in \R^{d_1}$. Let $\bar{\varphi} : \R^d \rightarrow \R^{m d_2}$ be defined as
\begin{align*}
\bar{\varphi}(x) = \frac{1}{\sqrt{m}} \Big[ \frac{\sqrt{p(w_1)}}{ \sqrt{q(w_1)} }\phi(x,w_1)^\top , \cdots, \frac{\sqrt{p(w_m)}}{ \sqrt{q(w_m)} }\phi(x,w_m)^\top \Big]^\top
\end{align*}
If $w_1, \cdots, w_m$ are drawn according to $q(\cdot)$, then
\begin{align*}
\k(x,z) = \E_{q} [ \bar{\varphi}(x)^\top \bar{\varphi}(z) ]
\end{align*}
Given data matrix $X=[x_1,\cdots,x_n]^\top \in \R^{n \times d}$, define $\bar{\Phi}:\R^{d_1}\to\R^{n\times d_2}$ as
\begin{align*}
\bar{\Phi}(w) = \frac{\sqrt{p(w)}}{ \sqrt{q(w)} }[\phi(x_1,w)^\top, \cdots, \phi(x_n,w)^\top]^\top = \frac{\sqrt{p(w)}}{ \sqrt{q(w)} }\Phi(w)
\end{align*}
If $w$ is drawn according to $q(\cdot)$, then
\begin{align*}
K = \E_{q} [ \bar{\Phi}(w)\bar{\Phi}(w)^\top ]
\end{align*}
Further, define $\bar{\Psi}\in\R^{n\times md_2}$ as
\begin{align*}
\bar{\Psi} = [\bar{\varphi}(x_1),\cdots,\bar{\varphi}(x_n)]^\top.
\end{align*}
Then we have
\begin{align}\label{eq:ls_2}
\bar{\Psi}\bar{\Psi}^\top = \frac{1}{m}\sum_{r=1}^m \bar{\Phi}(w_r)\bar{\Phi}(w_r)^\top = \frac{1}{m}\sum_{r=1}^m \frac{p(w_r)}{q(w_r)}\Phi(w_r)\Phi(w_r)^\top
\end{align}
If $w_1, \cdots, w_m$ are drawn according to $q(\cdot)$, then
\begin{align*}
K = \E_{q} [ \bar{\Psi}\bar{\Psi}^\top ]
\end{align*}
\end{definition}
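As a sanity check of the reweighting above, the following sketch (with illustrative choices that are not part of the paper: the ReLU-type feature $\phi(x,w)=x\cdot{\bf 1}[w^\top x\ge 0]$, $p=\N(0,I_d)$, a wider Gaussian proposal $q=\N(0,4I_d)$, and unit-norm toy data) verifies numerically that $\frac{1}{m}\sum_{r=1}^m \frac{p(w_r)}{q(w_r)}\Phi(w_r)\Phi(w_r)^\top$ approaches $K$ when $w_1,\cdots,w_m$ are drawn from $q$, in accordance with Eq.~\eqref{eq:ls_2}.
\begin{verbatim}
# Sketch (illustrative setting): feature phi(x, w) = x * 1[w^T x >= 0],
# p = N(0, I_d), proposal q = N(0, 4 I_d).  We check that
# (1/m) sum_r (p(w_r)/q(w_r)) Phi(w_r) Phi(w_r)^T ~ K for w_r ~ q.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 8, 3, 200000
X = rng.standard_normal((n, d)); X /= np.linalg.norm(X, axis=1, keepdims=True)

# For this phi and p, the kernel has the closed form
# K_ij = x_i^T x_j * (pi - arccos(x_i^T x_j)) / (2 pi)  (unit-norm data).
G = X @ X.T
K = G * (np.pi - np.arccos(np.clip(G, -1.0, 1.0))) / (2*np.pi)

W = 2.0 * rng.standard_normal((m, d))              # w_r ~ q = N(0, 4 I_d)
ratio = (2.0**d) * np.exp(-3.0*(W**2).sum(1)/8.0)  # p(w_r) / q(w_r)

S = (X @ W.T >= 0).astype(float)                   # 1[w_r^T x_i >= 0], n x m
K_hat = G * ((S * ratio) @ S.T) / m                # importance-weighted average
print(np.abs(K - K_hat).max())                     # small for large m
\end{verbatim}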
\begin{definition}[Leverage score, restatement of Definition~\ref{def:leverage_score_intro}]\label{def:leverage_score}
Let $p : \R^{d_1} \rightarrow \R_{\geq 0}$ denote the probability density function defined in Definition~\ref{def:kernel}. Let $\Phi: \R^{d_1} \rightarrow \R^{n\times d_2}$ be defined as Definition~\ref{def:random_feature}. For parameter $\lambda > 0$, we define the ridge leverage score as
\begin{align*}
q_\lambda(w) = p(w) \Tr[\Phi(w)^\top ( K + \lambda I_n )^{-1} \Phi(w)].
\end{align*}
\end{definition}
\begin{definition}[Statistical dimension]\label{def:stat_dim}
Given kernel matrix $K\in\R^{n\times n}$ and parameter $\lambda>0$, we define statistical dimension $s_{\lambda}(K)$ as:
\begin{align*}
s_{\lambda}(K) = \Tr[(K+\lambda I_n)^{-1} K].
\end{align*}
\end{definition}
Note we have $\int_{\R^{d_1}} q_\lambda(w) \d w = s_{\lambda}(K)$. Thus we can define the leverage score sampling distribution as follows.
\begin{definition}[Leverage score sampling distribution]\label{def:lev_distribution}
Let $q_\lambda(w)$ denote the leverage score defined in Definition~\ref{def:leverage_score}. Let $s_{\lambda}(K)$ denote the statistical dimension defined in Definition~\ref{def:stat_dim}. We define the leverage score sampling distribution as
\begin{align*}
q(w) = \frac{q_{\lambda}(w)}{s_{\lambda}(K)}.
\end{align*}
\end{definition}
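The following sketch (same illustrative ReLU-type feature and unit-norm toy data as above; again not the paper's experiment) checks numerically that averaging $q_\lambda(w)/p(w)=\Tr[\Phi(w)^\top(K+\lambda I_n)^{-1}\Phi(w)]$ over $w\sim p$ recovers the statistical dimension, i.e., that $\int q_\lambda(w)\d w = s_\lambda(K)$.
\begin{verbatim}
# Sketch: Monte-Carlo check that the ridge leverage function integrates
# to the statistical dimension (illustrative feature and toy data).
import numpy as np

rng = np.random.default_rng(0)
n, d, m, lam = 8, 3, 100000, 0.1
X = rng.standard_normal((n, d)); X /= np.linalg.norm(X, axis=1, keepdims=True)

G = X @ X.T
K = G * (np.pi - np.arccos(np.clip(G, -1.0, 1.0))) / (2*np.pi)
Kinv = np.linalg.inv(K + lam*np.eye(n))
s_lam = np.trace(Kinv @ K)                         # statistical dimension

W = rng.standard_normal((m, d))                    # w_r ~ p = N(0, I_d)
S = (X @ W.T >= 0).astype(float)                   # indicators, n x m
# Tr[Phi(w)^T (K + lam I)^{-1} Phi(w)]
#   = sum_{i,j} Kinv_{ij} (x_i^T x_j) 1[w^T x_i >= 0] 1[w^T x_j >= 0]
vals = np.einsum('im,ij,jm->m', S, Kinv * G, S)
print(vals.mean(), s_lam)                          # the two should be close
\end{verbatim}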
\subsection{Main result}\label{sec:result_lev}
\begin{theorem}[Restatement of Theorem~\ref{thm:leverage_score_intro}, generalization of Lemma 8 in \cite{akmmvz17}]\label{thm:leverage_score}
Given $n$ data points $x_1, x_2, \cdots, x_n \in \R^d$. Let $\k:\R^d\times\R^d\to\R$, $K\in\R^{n\times n}$ be the kernel defined in Definition~\ref{def:kernel}, with corresponding vector $\phi:\R^{d} \times \R^{d_1}\to \R^{d_2}$ and probability density function $p:\R^{d_1}\to\R_{\geq 0}$. Given parameter $\lambda \in (0,\|K\|)$, let $q_\lambda:\R^{d_1}\to\R_{\geq 0}$ be the leverage score defined in Definition~\ref{def:leverage_score}. Let $\tilde{q}_{\lambda}:\R^{d_1}\rightarrow \R$ be any measurable function such that $\tilde{q}_{\lambda}(w) \geq q_\lambda(w)$ holds for all $w\in \mathbb{R}^{d_1}$. Assume $ s_{\tilde{q}_\lambda} = \int_{\mathbb{R}^{d_1}} \tilde{q}_{\lambda}(w)\d w$ is finite. Denote $\bar{q}_{\lambda}(w)=\tilde{q}_{\lambda}(w)/s_{\tilde{q}_\lambda}$. For any accuracy parameter $\epsilon \in (0,1/2)$ and failure probability $\delta \in ( 0 , 1)$, let $w_1,\cdots,w_m \in \R^{d_1}$ denote $m$ samples drawn independently from the distribution associated with the density $\bar{q}_{\lambda}(\cdot)$, and construct the matrix $\bar{\Psi} \in \R^{n \times md_2}$ according to Definition~\ref{def:modify_random_feature} with $q=\bar{q}_{\lambda}$. Let $s_\lambda(K)$ be defined as in Definition~\ref{def:stat_dim}.
If $m \geq 3 \epsilon^{-2} s_{\tilde{q}_\lambda} \ln(16s_{\tilde{q}_\lambda}\cdot s_\lambda(K) / \delta)$, then we have
\begin{align} \label{eq:leverage_score_thm}
(1-\epsilon) \cdot (K + \lambda I_n) \preceq \bar{\Psi} \bar{\Psi}^\top + \lambda I_n \preceq (1+\epsilon) \cdot ( K + \lambda I_n )
\end{align}
holds with probability at least $1-\delta$.
\end{theorem}
To prove the theorem, we follow the same proof framework as Lemma 8 in \cite{akmmvz17}.
\begin{proof}
Let $K+\lambda I_n = V^\top \Sigma^2 V$ be an eigenvalue decomposition of $K+\lambda I_n$. Note that Eq.~\eqref{eq:leverage_score_thm} is equivalent to
\begin{align}\label{eq:57_1}
K-\epsilon(K+\lambda I_n) \preceq \bar{\Psi}\bar{\Psi}^\top \preceq K+ \epsilon(K+\lambda I_n).
\end{align}
Multiplying both sides of Eq.~\eqref{eq:57_1} by $\Sigma^{-1}V$ on the left and $V^\top \Sigma^{-1}$ on the right, it suffices to show that
\begin{align}\label{eq:57_2}
\| \Sigma^{-1}V\bar{\Psi}\bar{\Psi}^\top V^\top \Sigma^{-1} - \Sigma^{-1}VKV^\top \Sigma^{-1}\| \leq \epsilon
\end{align}
holds with probability at least $1-\delta$. Let
\begin{align*}
Y_r = \frac{p(w_r)}{\bar{q}_{\lambda}(w_r)} \Sigma^{-1}V{\Phi}(w_r){\Phi}(w_r)^\top V^\top \Sigma^{-1}.
\end{align*}
We have
\begin{align*}
\E_{\bar{q}_{\lambda}}[Y_r] = & ~ \E_{\bar{q}_{\lambda}}[\frac{p(w_r)}{\bar{q}_{\lambda}(w_r)}\Sigma^{-1}V{\Phi}(w_r){\Phi}(w_r)^\top V^\top \Sigma^{-1}]\\
= & ~ \Sigma^{-1}V\E_{\bar{q}_{\lambda}}[\frac{p(w_r)}{\bar{q}_{\lambda}(w_r)}{\Phi}(w_r){\Phi}(w_r)^\top] V^\top \Sigma^{-1}\\
= & ~ \Sigma^{-1}V \E_{p}[{\Phi}(w_r){\Phi}(w_r)^\top] V^\top \Sigma^{-1}\\
= & ~ \Sigma^{-1}V K V^\top \Sigma^{-1}
\end{align*}
where the first step follows from the definition of $Y_r$, the second step follows from the linearity of expectation, the third step changes the sampling density from $\bar{q}_{\lambda}$ to $p$, and the last step follows from Eq.~\eqref{eq:ls_1}.
Also we have
\begin{align*}
\frac{1}{m}\sum_{r=1}^m Y_r = & ~ \frac{1}{m}\sum_{r=1}^m \frac{p(w_r)}{\bar{q}_{\lambda}(w_r)}\Sigma^{-1}V{\Phi}(w_r){\Phi}(w_r)^\top V^\top \Sigma^{-1}\\
= & ~ \Sigma^{-1}V\Big(\frac{1}{m}\sum_{r=1}^m \frac{p(w_r)}{\bar{q}_{\lambda}(w_r)}{\Phi}(w_r){\Phi}(w_r)^\top\Big) V^\top \Sigma^{-1}\\
= & ~ \Sigma^{-1}V \bar{\Psi}\bar{\Psi}^\top V^\top \Sigma^{-1}
\end{align*}
where the first step follows from the definition of $Y_r$, the second step follows from basic linear algebra, and the last step follows from Eq.~\eqref{eq:ls_2}.
Thus, it suffices to show that
\begin{align}\label{eq:57_3}
\| \frac{1}{m}\sum_{r=1}^m Y_r -\E_{\bar{q}_{\lambda}}[Y_r] \| \leq \epsilon
\end{align}
holds with probability at least $1-\delta$.
We can apply the matrix concentration inequality of Lemma~\ref{lem:con_lev} to prove Eq.~\eqref{eq:57_3}, which requires us to bound $\|Y_r\|$ and $\E[Y_r^2]$. Note that
\begin{align*}
\|Y_r\| \leq & ~ \frac{p(w_r)}{\bar{q}_{\lambda}(w_r)} \Tr[\Sigma^{-1}V{\Phi}(w_r){\Phi}(w_r)^\top V^\top \Sigma^{-1}]\\
= & ~ \frac{p(w_r)}{\bar{q}_{\lambda}(w_r)} \Tr[{\Phi}(w_r)^\top V^\top \Sigma^{-1} \Sigma^{-1}V{\Phi}(w_r)]\\
= & ~ \frac{p(w_r)}{\bar{q}_{\lambda}(w_r)} \Tr[{\Phi}(w_r)^\top (K+\lambda I_n)^{-1}{\Phi}(w_r)]\\
= & ~ \frac{q_\lambda(w_r) s_{\tilde{q}_{\lambda}}}{\tilde{q}_{\lambda}(w_r)}\\
\leq & ~ s_{\tilde{q}_{\lambda}}.
\end{align*}
where the first step follows from $\|A\| \leq \Tr[A]$ for any positive semidefinite matrix, the second step follows from $\Tr[AB]=\Tr[BA]$, the third step follows from the definition of $V,\Sigma$, the fourth step follows from the definition of the leverage score $q_\lambda(\cdot)$ (Definition~\ref{def:leverage_score}) and of $\bar{q}_{\lambda} = \tilde{q}_{\lambda}/s_{\tilde{q}_{\lambda}}$, and the last step follows from the condition $\tilde{q}_{\lambda}(w) \geq q_\lambda(w)$.
Further, we have
\begin{align*}
Y_r^2 = & ~ \frac{p(w_r)^2}{\bar{q}_{\lambda}(w_r)^2} \Sigma^{-1}V{\Phi}(w_r){\Phi}(w_r)^\top V^\top \Sigma^{-1} \Sigma^{-1}V{\Phi}(w_r){\Phi}(w_r)^\top V^\top \Sigma^{-1} \\
= & ~ \frac{p(w_r)^2}{\bar{q}_{\lambda}(w_r)^2} \Sigma^{-1}V{\Phi}(w_r){\Phi}(w_r)^\top (K+\lambda I_n)^{-1} {\Phi}(w_r){\Phi}(w_r)^\top V^\top \Sigma^{-1} \\
\preceq & ~ \frac{p(w_r)^2}{\bar{q}_{\lambda}(w_r)^2} \Tr[{\Phi}(w_r)^\top (K+\lambda I_n)^{-1} {\Phi}(w_r)] \Sigma^{-1}V{\Phi}(w_r){\Phi}(w_r)^\top V^\top \Sigma^{-1} \\
= & ~ \frac{p(w_r) q_\lambda(w_r)}{\bar{q}_{\lambda}(w_r)^2} \Sigma^{-1}V{\Phi}(w_r) {\Phi}(w_r)^\top V^\top \Sigma^{-1} \\
= & ~ \frac{q_\lambda(w_r)}{\bar{q}_{\lambda}(w_r)} Y_r \\
= & ~ \frac{q_\lambda(w_r) s_{\tilde{q}_{\lambda}}}{\tilde{q}_{\lambda}(w_r)} Y_r \\
\preceq & ~ s_{\tilde{q}_{\lambda}} Y_r.
\end{align*}
where the first step follows from the definition of $Y_r$, the second step follows from the definition of $V,\Sigma$, the third step follows from $\|A\| \leq \Tr[A]$ for any positive semidefinite matrix, the fourth step follows from the definition of leverage score $q_\lambda(\cdot)$ as defined in Definition~\ref{def:leverage_score}, the fifth step follows from the definition of $Y_r$, the sixth step follows from the definition of $\bar{q}_{\lambda}(\cdot)$, and the last step follows from the condition $\tilde{q}_{\lambda}(w) \geq q_\lambda(w)$.
Thus, letting $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$ denote the eigenvalues of $K$, we have
\begin{align*}
\E_{\bar{q}_{\lambda}}[Y_r^2] \preceq & ~ \E_{\bar{q}_{\lambda}}[s_{\tilde{q}_{\lambda}} Y_r] \\
= & ~ s_{\tilde{q}_{\lambda}} \Sigma^{-1}VKV^\top\Sigma^{-1} \\
= & ~ s_{\tilde{q}_{\lambda}} (I_n - \lambda\Sigma^{-2}) \\
= & ~ s_{\tilde{q}_{\lambda}} \cdot \diag\{\lambda_1/(\lambda_1 +\lambda),\cdots, \lambda_n/(\lambda_n +\lambda)\}:= D.
\end{align*}
So by applying Lemma~\ref{lem:con_lev}, we have
\begin{align*}
\Pr\Big[\Big\| \frac{1}{m}\sum_{r=1}^m Y_r -\E[Y_r] \Big\| \geq \epsilon\Big] \leq ~ & \frac{8\Tr[D]}{\|D\|}\exp\Big( \frac{-m\epsilon^2/2}{\|D\|+2s_{\tilde{q}_{\lambda}} \epsilon/3} \Big) \\
\leq & ~ \frac{8 s_{\tilde{q}_{\lambda}}\cdot s_{\lambda}(K)}{\lambda_1/(\lambda_1 + \lambda)} \exp\Big( \frac{-m\epsilon^2 }{ 2s_{\tilde{q}_{\lambda}}(1 + 2\epsilon/3) } \Big) \\
\leq & ~ 16s_{\tilde{q}_{\lambda}}\cdot s_{\lambda}(K) \exp\Big( \frac{-m\epsilon^2 }{ 2s_{\tilde{q}_{\lambda}}(1 + 2\epsilon/3)} \Big) \\
\leq & ~ 16s_{\tilde{q}_{\lambda}}\cdot s_{\lambda}(K) \exp\Big( \frac{-3m\epsilon^2 }{ 8s_{\tilde{q}_{\lambda}}} \Big) \\
\leq & ~ \delta
\end{align*}
where the first step follows from Lemma~\ref{lem:con_lev}, the second step follows from the definition of $D$ and $s_\lambda(K) = \Tr[(K+\lambda I_n)^{-1}K] = \sum_{i=1}^n\lambda_i/(\lambda_i+\lambda)$, the third step follows from the condition $\lambda\in(0,\|K\|)$, the fourth step follows from the condition $\epsilon\in(0,1/2)$, and the last step follows from the bound on $m$.
\end{proof}
\begin{remark}
The above results can be generalized to the complex domain $\C$. Note in the random Fourier feature case, we have $d_1 = d$, $d_2 = 1$, $\phi(x,w) = e^{ -2 \pi \i w^\top x } \in \C$ and $p(\cdot)$ denotes the Fourier transform density distribution. In the Neural Tangent Kernel case, we have $d_1 = d_2 = d $, $\phi(x,w) = x\sigma'(w^\top x)$ and $p(\cdot)$ denotes the probability density function of the standard Gaussian distribution $\N(0,I_d)$. So they are both special cases of our framework.
\end{remark}
\section*{Appendix}
\section{Preliminaries}\label{sec:pre}
\subsection{Probability tools}
In this section we introduce the probability tools we use in the proof.
We first state the Chernoff, Hoeffding and Bernstein inequalities.
\begin{lemma}[Chernoff bound \cite{c52}]\label{lem:chernoff}
Let $X = \sum_{i=1}^n X_i$, where $X_i=1$ with probability $p_i$ and $X_i = 0$ with probability $1-p_i$, and all $X_i$ are independent. Let $\mu = \E[X] = \sum_{i=1}^n p_i$. Then \\
1. $ \Pr[ X \geq (1+\delta) \mu ] \leq \exp ( - \delta^2 \mu / 3 ) $, $\forall \delta > 0$ ; \\
2. $ \Pr[ X \leq (1-\delta) \mu ] \leq \exp ( - \delta^2 \mu / 2 ) $, $\forall 0 < \delta < 1$.
\end{lemma}
\begin{lemma}[Hoeffding bound \cite{h63}]\label{lem:hoeffding}
Let $X_1, \cdots, X_n$ denote $n$ independent bounded variables in $[a_i,b_i]$. Let $X= \sum_{i=1}^n X_i$, then we have
\begin{align*}
\Pr[ | X - \E[X] | \geq t ] \leq 2\exp \left( - \frac{2t^2}{ \sum_{i=1}^n (b_i - a_i)^2 } \right).
\end{align*}
\end{lemma}
\begin{lemma}[Bernstein inequality \cite{b24}]\label{lem:bernstein}
Let $X_1, \cdots, X_n$ be independent zero-mean random variables. Suppose that $|X_i| \leq M$ almost surely, for all $i$. Then, for all positive $t$,
\begin{align*}
\Pr \left[ \sum_{i=1}^n X_i > t \right] \leq \exp \left( - \frac{ t^2/2 }{ \sum_{j=1}^n \E[X_j^2] + M t /3 } \right).
\end{align*}
\end{lemma}
We state three inequalities for Gaussian and chi-square random variables.
\begin{lemma}[Anti-concentration of Gaussian distribution]\label{lem:anti_gaussian}
Let $X\sim N(0,\sigma^2)$,
that is,
the probability density function of $X$ is given by $\phi(x)=\frac 1 {\sqrt{2\pi\sigma^2}}e^{-\frac {x^2} {2\sigma^2} }$.
Then
\begin{align*}
\Pr[|X|\leq t]\in \left( \frac 2 3\frac t \sigma, \frac 4 5\frac t \sigma \right).
\end{align*}
\end{lemma}
\begin{lemma}[Gaussian tail bounds]\label{lem:gaussian_tail}
Let $X\sim\N(\mu,\sigma^2)$ be a Gaussian random variable with mean $\mu$ and variance $\sigma^2$. Then for all $t \geq 0$, we have
\begin{align*}
\Pr[|X-\mu|\geq t]\leq 2e^{-\frac{t^2}{2\sigma^2}}.
\end{align*}
\end{lemma}
\begin{lemma}[Lemma 1 on page 1325 of Laurent and Massart \cite{lm00}]\label{lem:chi_square_tail}
Let $X \sim {\cal X}_k^2$ be a chi-square random variable with $k$ degrees of freedom, i.e., $X = \sum_{i=1}^k Z_i^2$ where each $Z_i$ is an independent Gaussian with zero mean and variance $\sigma^2$. Then
\begin{align*}
\Pr[ X - k \sigma^2 \geq ( 2 \sqrt{kt} + 2t ) \sigma^2 ] \leq \exp (-t), \\
\Pr[ k \sigma^2 - X \geq 2 \sqrt{k t} \sigma^2 ] \leq \exp(-t).
\end{align*}
\end{lemma}
We state two inequalities for random matrices.
\begin{lemma}[Matrix Bernstein, Theorem 6.1.1 in \cite{t15}]\label{lem:matrix_bernstein}
Consider a finite sequence $\{ X_1, \cdots, X_m \} \subset \R^{n_1 \times n_2}$ of independent, random matrices with common dimension $n_1 \times n_2$. Assume that
\begin{align*}
\E[ X_i ] = 0, \forall i \in [m] ~~~ \mathrm{and}~~~ \| X_i \| \leq M, \forall i \in [m] .
\end{align*}
Let $Z = \sum_{i=1}^m X_i$. Let $\mathrm{Var}[Z]$ be the matrix variance statistic of the sum:
\begin{align*}
\mathrm{Var} [Z] = \max \left\{ \Big\| \sum_{i=1}^m \E[ X_i X_i^\top ] \Big\| , \Big\| \sum_{i=1}^m \E [ X_i^\top X_i ] \Big\| \right\}.
\end{align*}
Then
{\small
\begin{align*}
\E[ \| Z \| ] \leq ( 2 \mathrm{Var} [Z] \cdot \log (n_1 + n_2) )^{1/2} + M \cdot \log (n_1 + n_2) / 3.
\end{align*}
}
Furthermore, for all $t \geq 0$,
\begin{align*}
\Pr[ \| Z \| \geq t ] \leq (n_1 + n_2) \cdot \exp \left( - \frac{t^2/2}{ \mathrm{Var} [Z] + M t /3 } \right) .
\end{align*}
\end{lemma}
\begin{lemma}[Matrix Bernstein, Lemma 7 in \cite{akmmvz17}]\label{lem:con_lev}
Let $B\in\R^{d_1\times d_2}$ be a fixed matrix. Construct a random matrix $R\in\R^{d_1\times d_2}$ that satisfies
\begin{align*}
\E[R] = B~\text{and}~\|R\| \leq L.
\end{align*}
Let $M_1$ and $M_2$ be semidefinite upper bounds for the expected squares:
\begin{align*}
\E[RR^\top] \preceq M_1~\text{and}~\E[R^\top R] \preceq M_2.
\end{align*}
Define the quantities
\begin{align*}
m = \max\{\|M_1\|, \|M_2\|\}~\text{and}~d=(\Tr[M_1]+\Tr[M_2])/m.
\end{align*}
Form the matrix sampling estimator
\begin{align*}
\bar{R}_n = \frac{1}{n} \sum_{k=1}^n R_k
\end{align*}
where each $R_k$ is an independent copy of $R$. Then, for all $t\geq \sqrt{m/n}+2L/(3n)$,
\begin{align*}
\Pr[\|\bar{R}_n - B\|_2 \geq t] \leq 4d\exp{\Big(\frac{-nt^2}{m+2Lt/3}\Big)}
\end{align*}
\end{lemma}
\subsection{Neural tangent kernel and its properties}
\begin{lemma}[Lemma 4.1 in \cite{sy19}]\label{lem:lemma_4.1_in_sy19}
We define $H^{\cts}$, $H^{\dis} \in \R^{n \times n}$ as follows
\begin{align*}
H^{\cts}_{i,j} = & ~ \E_{w \sim {\cal N} (0,I) } [ x_i^\top x_j {\bf 1}_{ w^\top x_i \geq 0 , w^\top x_j \geq 0 } ] , \\
H^{\dis}_{i,j} = & ~ \frac{1}{m} \sum_{r=1}^m [ x_i^\top x_j {\bf 1}_{w_r^\top x_i \geq 0, w_r^\top x_j \geq 0 } ] .
\end{align*}
Let $\lambda = \lambda_{\min} (H^{\cts})$. If $m = \Omega( \lambda^{-2} n^2 \log ( n / \delta ) )$, we have
\begin{align*}
\| H^{\dis} - H^{\cts} \|_F \leq \frac{ \lambda }{ 4 }, \text{~and~} \lambda_{\min} ( H^{\dis} ) \geq \frac{3}{4} \lambda
\end{align*}
hold with probability at least $1-\delta$.
\end{lemma}
\begin{lemma}[Lemma 4.2 in \cite{sy19}]\label{lem:lemma_4.2_in_sy19}
Let $R \in (0,1)$. Suppose $\wt{w}_1, \cdots, \wt{w}_m$ are i.i.d.\ generated from ${\cal N}(0,I)$, and let $w_1, \cdots, w_m \in \R^d$ be any weight vectors satisfying $\| \wt{w}_r - w_r \|_2 \leq R$ for all $r \in [m]$. Define $H : \R^{ d \times m} \rightarrow \R^{n \times n}$ entrywise by
\begin{align*}
H(W)_{i,j} = \frac{1}{m} x_i^\top x_j \sum_{r=1}^m {\bf 1}_{ w_r^\top x_i \geq 0, w_r^\top x_j \geq 0 } .
\end{align*}
Then we have
\begin{align*}
\| H(W) - H(\wt{W}) \|_F < 2 n R
\end{align*}
holds with probability at least $1-n^2 \cdot \exp(-m R /10)$.
\end{lemma}
\section{Related work}
\paragraph{Leverage scores}
Given an $m \times n$ matrix $A$, let $a_i^\top$ be the $i$-th row of $A$; the leverage score of the $i$-th row of $A$ is $\sigma_i(A) = a_i^\top (A^\top A)^{\dagger} a_i$. A row's leverage score measures how important it is in composing the row space of $A$. If a row has a component orthogonal to all other rows, its leverage score is $1$, and removing it would decrease the rank of $A$, completely changing its row space. The coherence of $A$ is $\| \sigma(A) \|_{\infty}$. If $A$ has low coherence, no particular row is especially important. If $A$ has high coherence, it contains at least one row whose removal would significantly affect the composition of $A$'s row space.
Leverage scores are a fundamental concept in graph problems and numerical linear algebra. There are many works on how to approximate leverage scores \cite{ss11,dmmw12,cw13,nn13} or more general versions of them, e.g.\ Lewis weights \cite{l78,blm89,cp15}. From the graph perspective, they have been applied to maximum matching \cite{bln+20,lsz20}, max-flow \cite{ds08,m13_flow,m16,ls20_stoc,ls20_focs}, generating random spanning trees \cite{s18}, and sparsifying graphs \cite{ss11}. From the matrix perspective, they have been used for matrix CUR decomposition \cite{bw14,swz17,swz19} and tensor CURT decomposition \cite{swz19}. From the optimization perspective, they have been used to approximate the John Ellipsoid \cite{ccly19}, and in linear programming \cite{ls14,blss20,jswz20}, semi-definite programming \cite{jklps20}, and cutting plane methods \cite{v89,lsw15,jlsw20}.
\paragraph{Kernel methods}
Kernel methods can be thought of as instance-based learners: rather than learning some fixed set of parameters corresponding to the features of their inputs, they instead ``remember'' the $i$-th training example $(x_i,y_i)$ and learn a corresponding weight $w_i$ for it. Prediction for an unlabeled input, i.e., one not in the training set, is handled by applying a similarity function $\k$, called a kernel, between the unlabeled input $x'$ and each of the training inputs $x_i$.
There are three lines of works that are closely related to ours. First, our work is highly related to the recent discoveries of the connection between deep learning and kernels \cite{dfs16,d17,jgh18,cb18}. Second, our work is closely related to the development of connections between leverage scores and kernels \cite{rr08,cw17,cmm17,mw17_focs,mw17_nips,ltos18,akmmvz17,akmmvz19,acss20}. Third, our work is related to kernel ridge regression \cite{b13,am15,zdw15,acw17,mm17,znvkr20}.
\paragraph{Convergence of neural network}
There is a long line of work studying the convergence of neural networks under random input assumptions \cite{bg17,t17,zsjbd17,s17,ly17,zsd17,dltps18,glm18,brw19}. For quite a while, it was not known how to remove the randomness assumption on the input data points. Recently, a large number of works have studied the convergence of neural networks in the over-parametrization regime \cite{ll18,dzps19,als19a,als19b,dllwz19,adhlw19,adhlsw19,sy19,bpsw20}. These results do not need to assume that the input data points are random, and only require a much weaker assumption, usually called ``data separability'': for any two input data points $x_i$ and $x_j$, we have $\| x_i - x_j \|_2 \geq \delta$. A sufficiently wide neural network then requires the width $m$ to be at least $\poly(n,d,L,1/\delta)$, where $n$ is the number of input data points, $d$ is the dimension of an input data point, and $L$ is the number of layers.
\paragraph{Continuous Fourier transform}
The continuous Fourier transform problem \cite{jls20} asks to take samples $f(t_1), \cdots , f(t_m)$ from the time domain of $f(t) := \sum_{j=1}^n v_j e^{2\pi \i \langle x_j , t \rangle }$ and to reconstruct the function $f : \R^d \rightarrow \C$, or even to recover $\{(v_j,x_j)\} \in \C \times \R^d$. Data separation connects to the sparse Fourier transform in the continuous domain: we can view the $n$ input data points \cite{ll18,als19a,als19b} as $n$ frequencies in the Fourier transform \cite{m15,ps15}, and the separation of the data set is equivalent to the gap of the frequency set ($\min_{i \neq j} \| x_i - x_j \|_2 \geq \delta$). In the continuous Fourier transform, there are two families of algorithms: one requires knowledge of the frequency gap \cite{m15,ps15,cm20,jls20} and the other does not \cite{ckps16}. However, in over-parameterized neural network training, all existing works require a gap for the data points.
\section{Main results}
In this section, we state our results. In Section~\ref{subsubsec:leverage_sampling}, we consider the large-scale kernel ridge regression (KRR) problem. We generalize the Fourier transform result of \cite{akmmvz17}, which accelerates the running time of solving KRR using leverage score sampling, to a broader class of kernels. In Section~\ref{subsec:app}, we discuss an interesting application of leverage score sampling to training deep learning models, due to the connection between regularized neural nets and kernel ridge regression.
\subsection{Kernel approximation with leverage score sampling}
\label{subsubsec:leverage_sampling}
In this section, we generalize the leverage score theory in \cite{akmmvz17}, which analyzes the number of random features needed to approximate the kernel matrix under the leverage score sampling regime for the kernel ridge regression task. In the next few paragraphs, we briefly review the setting of classical kernel ridge regression.
Given a training data matrix $X=[x_1, \cdots, x_n]^\top \in \R^{n \times d}$, corresponding labels $Y=[y_1,\cdots,y_n]^\top \in \R^n$ and a feature map $\phi : \R^d \to \mathcal{F}$, a classical kernel ridge regression problem can be written as\footnote{Strictly speaking, the optimization problem should be considered in a hypothesis space defined by the reproducing kernel Hilbert space associated with the feature/kernel. Here, we use the notation in finite dimensional space for simplicity.}
\begin{align*}
\min_{\beta} \frac{1}{2}\| Y - \phi(X)^\top \beta \|_2^2 + \frac{1}{2}\lambda\|\beta\|_2^2
\end{align*}
where $\lambda>0$ is the regularization parameter. By introducing the corresponding kernel function $\k(x,z) = \langle \phi(x), \phi(z) \rangle$ for any data $x , z \in \R^d$, the output estimate of the kernel ridge regression for any data $x \in \R^d$ can be denoted as $f^*(x) = \k(x,X)^\top \alpha$,
where $\alpha \in \R^n$ is the solution to
\begin{align*}
(K + \lambda I_n) \alpha = Y.
\end{align*}
Here $K \in \R^{n \times n}$ is the kernel matrix with $K_{i,j} = \k(x_i,x_j)$, $\forall i, j \in [n] \times [n]$.
Note that a direct computation involves $(K+\lambda I_n)^{-1}$, whose $O(n^3)$ running time can be fairly large in tasks like neural network training due to the large number of training data. Therefore, we hope to construct a feature map $\phi : \R^d \to \R^{s}$ such that the new features approximate the kernel matrix well, in the sense that
\begin{align}\label{eq:leverage_goal}
(1-\epsilon) \cdot (K + \lambda I_n) \preceq \Phi \Phi^\top + \lambda I_n \preceq (1+\epsilon) \cdot ( K + \lambda I_n ),
\end{align}
where $\epsilon\in(0,1)$ is small and $\Phi = [ \phi(x_1),\cdots,\phi(x_n)]^\top \in\R^{n\times s}$. Then by the Woodbury matrix identity, we can approximate the solution by $u^*(z)= \phi (z)^\top ( \Phi^\top \Phi +\lambda I_s)^{-1} \Phi^\top Y $, which can be computed in $O(ns^2+n^2)$ time. In the case $s= o(n)$, this saves computational cost.
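The Woodbury step can be checked directly: for any feature matrix $\Phi\in\R^{n\times s}$ one has $\Phi^\top(\Phi\Phi^\top+\lambda I_n)^{-1} = (\Phi^\top\Phi+\lambda I_s)^{-1}\Phi^\top$, so the predictor built from the approximate kernel $\Phi\Phi^\top$ only requires solving an $s\times s$ system. The sketch below (random placeholder features, labels and test feature vector) verifies this numerically.
\begin{verbatim}
# Sketch: with approximate kernel Phi Phi^T, the n x n solve and the s x s
# solve give the same predictor.  Phi, Y, phi_z are arbitrary placeholders.
import numpy as np

rng = np.random.default_rng(0)
n, s, lam = 50, 20, 0.3
Phi = rng.standard_normal((n, s))      # random feature matrix, n x s
Y = rng.standard_normal(n)
phi_z = rng.standard_normal(s)         # feature vector of a test point z

# n x n formulation: k_hat(z, X) = Phi phi_z and K_hat = Phi Phi^T
lhs = (Phi @ phi_z) @ np.linalg.solve(Phi @ Phi.T + lam*np.eye(n), Y)
# s x s formulation via the Woodbury identity
rhs = phi_z @ np.linalg.solve(Phi.T @ Phi + lam*np.eye(s), Phi.T @ Y)

print(abs(lhs - rhs))                  # agree up to round-off error
\end{verbatim}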
In this work, we consider a generalized setting of \cite{akmmvz17}: a kernel ridge regression problem with a positive definite kernel function $\k: \R^d \times \R^d \to \R$ of the form
\begin{align}\label{eq:lev_kernel_intro}
\k(x,z) = \E_{w \sim p}[\phi(x,w)^\top\phi(z,w)],
\end{align}
where $\phi:\R^{d} \times \R^{d_1}\to \R^{d_2}$ denotes a finite dimensional vector and $p:\R^{d_1}\to\R_{\geq 0}$ denotes a probability density function.
Due to the regularization $\lambda>0$ in this setting, instead of constructing the feature map directly by sampling from the distribution $p$, we consider the following ridge leverage distribution:
\begin{definition}[Ridge leverage function]\label{def:leverage_score_intro}
Given data $x_1,\cdots,x_n\in\R^d$ and parameter $\lambda >0$, we define the ridge leverage function as
\begin{align*}
q_\lambda(w) = p(w) \cdot \Tr[\Phi(w)^\top ( K + \lambda I_n )^{-1} \Phi(w)],
\end{align*}
where $p(\cdot)$,~$\phi$ are defined in Eq.~\eqref{eq:lev_kernel_intro}, and $\Phi(w) = [\phi(x_1,w)^\top, \cdots, \phi(x_n,w)^\top]^\top\in\R^{n\times d_2}$. Further, we define statistical dimension $s_{\lambda}(K)$ as
\begin{align}\label{eq:stat_dim_lev}
s_{\lambda}(K) = \int q_{\lambda}(w) \d w = \Tr[(K+\lambda I_n)^{-1} K].
\end{align}
\end{definition}
The leverage score sampling distribution $q_{\lambda}/s_{\lambda}(K)$ takes the regularization term into consideration and achieves Eq.~\eqref{eq:leverage_goal} using the following modified random feature vector:
\begin{definition}[Modified random features]\label{def:modify_random_feature_lev}
Given any probability density function $q(\cdot)$ whose support includes that of $p(\cdot)$. Given $m$ vectors $w_1, \cdots, w_m \in \R^{d_1}$, we define modified random features $\bar{\Psi}\in\R^{n\times md_2}$ as
$
\bar{\Psi} := [\bar{\varphi}(x_1),\cdots,\bar{\varphi}(x_n)]^\top,
$
where
\begin{align*}
\bar{\varphi}(x) = \frac{1}{\sqrt{m}} \left[ \frac{\sqrt{p(w_1)}}{ \sqrt{q(w_1)} }\phi(x,w_1)^\top , \cdots, \frac{\sqrt{p(w_m)}}{ \sqrt{q(w_m)} }\phi(x,w_m)^\top \right]^\top.
\end{align*}
\end{definition}
Now we are ready to present our result.
\begin{theorem}[Kernel approximation with leverage score sampling, generalization of Lemma 8 in \cite{akmmvz17}]\label{thm:leverage_score_intro}
Given parameter $\lambda \in (0,\|K\|)$. Let $q_\lambda:\R^{d_1}\to\R_{\geq 0}$ be the leverage score defined in Definition~\ref{def:leverage_score_intro}. Let $\tilde{q}_{\lambda}:\R^{d_1}\rightarrow \R$ be any measurable function such that $\tilde{q}_{\lambda}(w) \geq q_\lambda(w)$ holds for all $w\in \mathbb{R}^{d_1}$. Assume $ s_{\tilde{q}_\lambda} = \int_{\mathbb{R}^{d_1}} \tilde{q}_{\lambda}(w)\d w$ is finite. Let $\bar{q}_{\lambda}(w)=\tilde{q}_{\lambda}(w)/s_{\tilde{q}_\lambda}$. Given any accuracy parameter $\epsilon \in (0,1/2)$ and failure probability $\delta \in ( 0 , 1)$, let $w_1,\cdots,w_m \in \R^{d_1}$ denote $m$ samples drawn independently from the distribution associated with the density $\bar{q}_{\lambda}(\cdot)$, and construct the modified random features $\bar{\Psi} \in \R^{n \times md_2}$ as in Definition~\ref{def:modify_random_feature_lev} with $q=\bar{q}_{\lambda}$. Let $s_\lambda(K)$ be the statistical dimension defined in~\eqref{eq:stat_dim_lev}.
If $m \geq 3 \epsilon^{-2} s_{\tilde{q}_\lambda} \ln(16s_{\tilde{q}_\lambda}\cdot s_\lambda(K) / \delta)$, then we have
{\small
\begin{align}\label{eq:leverage_score_thm_intro}
(1-\epsilon) \cdot (K + \lambda I_n) \preceq \bar{\Psi} \bar{\Psi}^\top + \lambda I_n \preceq (1+\epsilon) \cdot ( K + \lambda I_n )
\end{align}}
holds with probability at least $1-\delta$.
\end{theorem}
\begin{remark}
The above results can be generalized to the complex domain $\C$. Note that for the random Fourier feature case discussed in \cite{akmmvz17}, we have $d_1 = d$, $d_2 = 1$, $\phi(x,w) = e^{ -2 \pi \i w^\top x } \in \C$ and $p(\cdot)$ denotes the Fourier transform density distribution, which is a special case of our setting.
\end{remark}
\subsection{Application in training regularized neural network}\label{subsec:app}
In this section, we consider the application of leverage score sampling in training $\ell_2$ regularized neural networks.
Past literature such as \cite{dzps19,adhlsw19} has already established the equivalence between training a neural network and solving a kernel regression problem for a broad class of network models. In this work, we first generalize this result to the regularized case, where we connect regularized neural networks with kernel ridge regression. Then we apply the leverage score sampling theory for KRR discussed above to the task of training neural nets.
\subsubsection{Equivalence I, training with random Gaussian initialization}
\label{subsubsec:equivalence1}
To illustrate the idea, we consider a simple model: a two-layer neural network with ReLU activation function, as in \cite{dzps19,sy19}\footnote{Our results directly extend to multi-layer deep neural networks with all layers trained together.}:
\begin{align*}
f_{\nn} (W, a, x) = \frac{1}{\sqrt{m}} \sum_{r=1}^m a_r \sigma (w_r^\top x) \in \R,
\end{align*}
where $x \in \R^d$ is the input, $w_r \in \R^d,~r\in[m]$ is the weight vector of the first layer, $W = [w_1, \cdots, w_m]\in\R^{d \times m}$, $a_r \in \R,~r\in[m]$ is the output weight, $a = [a_1, \cdots, a_m]^\top$ and $\sigma(\cdot)$ is the ReLU activation function: $\sigma(z) = \max\{0,z\}$.
Here we consider only training the first layer $W$ with fixed $a$, so we also write $f_{\nn}(W,x) = f_{\nn}(W, a, x)$. Again, given training data matrix $X=[x_1,\cdots,x_n]^\top\in\R^{n\times d}$ and labels $Y=[y_1,\cdots,y_n]^\top\in\R^n$, we denote $f_{\nn}(W, X) = [f_{\nn}(W, x_1),\cdots, f_{\nn}(W, x_n)]^\top\in\R^n$. We formally define training a neural network with $\ell_2$ regularization as follows:
\begin{definition}[Training neural network with regularization]\label{def:nn_intro}
Let $\kappa\in(0,1]$ be a small multiplier\footnote{For the training equivalence result, we set $\kappa = 1$, recovering the standard setting. For the test equivalence result, we pick $\kappa>0$ to be a small multiplier only to shrink the initial output of the neural network. This is the same as what is used in \cite{akmmvz17}.}. Let $\lambda\in(0,1)$ be the regularization parameter. We initialize the network as $a_r\overset{i.i.d.}{\sim} \unif[\{-1,1\}]$ and $w_r(0)\overset{i.i.d.}{\sim} \N(0,I_d)$. Then we consider solving the following optimization problem using gradient descent:
\begin{align}\label{eq:nn_intro}
\min_{W} \frac{1}{2}\| Y - \kappa f_{\nn}(W,X) \|_2^2 + \frac{1}{2}\lambda\|W\|_F^2.
\end{align}
Let $w_r(t),r\in[m]$ be the network weight at iteration $t$. We denote the training data predictor at iteration $t$ as
$u_{\nn}(t) = \kappa f_{\nn}(W(t),X) \in\R^n$.
Further, given any test data $x_{\test}\in\R^d$, we denote
$u_{\nn,\test}(t) = \kappa f_{\nn}(W(t),x_{\test}) \in \R$
as the test data predictor at iteration $t$.
\end{definition}
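A minimal sketch of this model and objective is given below (Python/NumPy; the data, width and parameter values are toy placeholders, and the gradient descent loop itself is omitted).
\begin{verbatim}
# Sketch of the two-layer ReLU model with fixed outer weights a and the
# regularized objective of Eq. (nn_intro); all values are toy placeholders.
import numpy as np

rng = np.random.default_rng(0)
n, d, m, kappa, lam = 4, 3, 1000, 0.5, 0.01
X = rng.standard_normal((n, d)); X /= np.linalg.norm(X, axis=1, keepdims=True)
Y = rng.standard_normal(n)
a = rng.choice([-1.0, 1.0], size=m)       # a_r ~ Unif{-1, +1}, kept fixed
W = rng.standard_normal((d, m))           # w_r(0) ~ N(0, I_d), trainable

def f_nn(W, X):
    # f_nn(W, a, x) = (1/sqrt(m)) sum_r a_r relu(w_r^T x), row-wise over X
    return np.maximum(X @ W, 0.0) @ a / np.sqrt(m)

def objective(W):
    # (1/2) ||Y - kappa f_nn(W, X)||_2^2 + (lambda/2) ||W||_F^2
    res = Y - kappa * f_nn(W, X)
    return 0.5 * res @ res + 0.5 * lam * np.sum(W**2)

print(f_nn(W, X), objective(W))
\end{verbatim}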
On the other hand, we consider the following neural tangent kernel ridge regression problem:
\begin{align}\label{eq:krr_intro}
\min_{\beta} \frac{1}{2}\| Y - \kappa f_{\ntk}(\beta,X) \|_2^2 + \frac{1}{2}\lambda\|\beta\|_2^2,
\end{align}
where $\kappa,\lambda$ are the same parameters as in Eq.~\eqref{eq:nn_intro}, $f_{\ntk}(\beta,x) = \Phi(x)^\top \beta \in \R$ is the predictor for an input $x$, and $f_{\ntk}(\beta,X) = [f_{\ntk}(\beta,x_1),\cdots,f_{\ntk}(\beta,x_n)]^\top\in\R^{n}$ collects the predictions on the training data. Here, $\Phi$ is the feature map corresponding to the neural tangent kernel (NTK):
\begin{align}\label{eq:ntk_lev}
\k_{\ntk}(x, z) = \E \left[\left\langle \frac{\partial f_{\nn}(W,x)}{\partial W},\frac{\partial f_{\nn}(W,z)}{\partial W} \right\rangle \right]
\end{align}
where $x,z \in \R^d$ are any input data, and the expectation is taken over $w_r\overset{i.i.d.}{\sim} \N(0,I),~r=1, \cdots, m$.
Under the standard assumption that $\k_{\ntk}$ is positive definite, problem Eq.~\eqref{eq:krr_intro} is a strongly convex optimization problem with optimal training-data predictor $u^*=\kappa^2 H^{\cts}(\kappa^2 H^{\cts}+\lambda I)^{-1}Y$ and corresponding predictor $u_{\test}^* = \kappa^2 \k_{\ntk}(x_{\test}, X)^\top(\kappa^2 H^{\cts}+\lambda I)^{-1}Y$ for the test data $x_{\test}$, where $H^{\cts}\in\R^{n\times n}$ is the kernel matrix with $[H^{\cts}]_{i,j} = \k_{\ntk}(x_i, x_j)$.
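For this architecture (only the first layer is trained and the inputs are unit-norm), a standard computation gives the closed form $\k_{\ntk}(x,z)=x^\top z\cdot\frac{\pi-\arccos(x^\top z)}{2\pi}$, so $u^*$ and $u_{\test}^*$ can be evaluated directly. The sketch below (toy data and illustrative parameter values) does exactly that.
\begin{verbatim}
# Sketch (toy unit-norm data): closed-form NTK matrix H^{cts} for the
# first-layer-trained ReLU model, and the KRR predictors u* and u*_test.
import numpy as np

rng = np.random.default_rng(0)
n, d, kappa, lam = 6, 3, 0.1, 0.01
X = rng.standard_normal((n, d)); X /= np.linalg.norm(X, axis=1, keepdims=True)
Y = rng.standard_normal(n)
x_test = rng.standard_normal(d); x_test /= np.linalg.norm(x_test)

def ntk(A, B):
    # k_ntk(a, b) = a^T b * (pi - arccos(a^T b)) / (2 pi) for unit-norm a, b
    G = A @ B.T
    return G * (np.pi - np.arccos(np.clip(G, -1.0, 1.0))) / (2*np.pi)

H = ntk(X, X)                                  # H^{cts}
alpha = np.linalg.solve(kappa**2 * H + lam*np.eye(n), Y)
u_star = kappa**2 * H @ alpha                  # training-data predictor
u_star_test = kappa**2 * ntk(x_test[None], X)[0] @ alpha   # test predictor
print(u_star, u_star_test)
\end{verbatim}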
We connect problems Eq.~\eqref{eq:nn_intro} and Eq.~\eqref{eq:krr_intro} by establishing the following equivalences between their training and test predictors for polynomial network widths:
\begin{theorem}[Equivalence between training neural net with regularization and kernel ridge regression for training data prediction]\label{thm:main_train_equivalence_intro}
Given any accuracy $\epsilon \in ( 0 , 1/10 )$ and failure probability $\delta \in (0,1/10)$. Let multiplier $\kappa = 1$, number of iterations $T=\wt{O}(\frac{1}{\Lambda_0})$, network width $m \geq \wt{O}(\frac{n^4d}{\Lambda_0^4\epsilon})$ and the regularization parameter $\lambda \leq \wt{O}(\frac{1}{\sqrt{m}})$. Then with probability at least $1-\delta$ over the Gaussian random initialization, we have
\begin{align*}
\|u_{\nn}(T) - u^*\|_2 \leq \epsilon.
\end{align*}
Here $\wt{O}(\cdot)$ hides $\poly\log(n/(\epsilon \delta \Lambda_0 ))$.
\end{theorem}
We can further show the equivalence between the test data predictors with the help of the multiplier $\kappa$.
\begin{theorem}[Equivalence between training neural net with regularization and kernel ridge regression for test data prediction]\label{thm:main_test_equivalence_intro}
Given any accuracy $\epsilon \in (0,1/10)$ and failure probability $\delta \in (0,1/10)$. Let multiplier $\kappa = \wt{O}(\frac{\epsilon\Lambda_0}{n})$, number of iterations $T=\wt{O}(\frac{1}{\kappa^2\Lambda_0})$, network width $m \geq \wt{O}(\frac{n^{10}d}{\epsilon^6\Lambda_0^{10}})$ and regularization parameter $\lambda \leq \wt{O}(\frac{1}{\sqrt{m}})$. Then with probability at least $1-\delta$ over the Gaussian random initialization, we have
\begin{align*}
\| u_{\nn,\test}(T) - u_{\test}^* \|_2 \leq \epsilon.
\end{align*}
Here $\wt{O}(\cdot)$ hides $\poly\log(n/(\epsilon \delta \Lambda_0 ))$.
\end{theorem}
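To make Theorem~\ref{thm:main_train_equivalence_intro} concrete, the toy sketch below (ours, not the paper's experiments) trains only the first layer of a width-$m$ ReLU network by gradient descent on the regularized objective of Definition~\ref{def:nn_intro} and compares the resulting training predictor with the kernel ridge regression optimum; it reuses the variables \texttt{X}, \texttt{Y} and \texttt{u\_star} from the earlier sketch, and the width, step size and iteration count are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

def train_first_layer(X, Y, m=4096, kappa=1.0, lam=1e-3, lr=0.1, steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    a = rng.choice([-1.0, 1.0], size=m)       # fixed second layer a_r
    W = rng.normal(size=(m, d))               # w_r(0) ~ N(0, I_d)
    for _ in range(steps):
        pre = X @ W.T                         # entries w_r^T x_i, shape (n, m)
        u = kappa * np.maximum(pre, 0.0) @ a / np.sqrt(m)   # kappa * f_nn(W, X)
        resid = u - Y
        # dL/dw_r = sum_i (u_i - y_i) * kappa/sqrt(m) * a_r * 1{w_r^T x_i >= 0} * x_i + lam * w_r
        grad = kappa / np.sqrt(m) * ((resid[:, None] * (pre >= 0)) * a).T @ X + lam * W
        W -= lr * grad
    return kappa * np.maximum(X @ W.T, 0.0) @ a / np.sqrt(m)

# u_T = train_first_layer(X, Y)   # expected to be close to u_star when m is large
\end{verbatim}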
\subsubsection{Equivalence II, training with leverage scores}
\label{subsubsec:equivalence2}
To apply the leverage score theory discussed in Section~\ref{subsubsec:leverage_sampling}, note that the definition of the neural tangent kernel is exactly of the form:
\begin{align*}
\k_{\ntk}(x, z) = \E \left[\left\langle \frac{\partial f_{\nn}(W,x)}{\partial W},\frac{\partial f_{\nn}(W,z)}{\partial W} \right\rangle \right]
= \E_{w \sim p}[\phi(x,w)^\top\phi(z,w)]
\end{align*}
where $\phi(x,w) = x\sigma'(w^\top x)\in\R^{d}$ and $p(\cdot)$ denotes the probability density function of the standard Gaussian distribution $\N(0,I_d)$. Therefore, we connect the theory of training regularized neural networks with leverage score sampling. Note that the width of the network corresponds to the size of the feature vector used to approximate the kernel. Thus, the smaller feature size given by the leverage score sampling theory yields a smaller upper bound on the width of the neural nets.
Specifically, given regularization parameter $\lambda>0$, we can define the ridge leverage function with respect to neural tangent kernel $H^{\cts}$ defined in Definition~\ref{def:leverage_score_intro} as
\begin{align*}
q_\lambda(w) = p(w) \Tr[\Phi(w)^\top ( H^{\cts} + \lambda I_n )^{-1} \Phi(w)]
\end{align*}
and corresponding probability density function
\begin{align}\label{eq:lev_dis_intro}
q(w)=\frac{q_{\lambda}(w)}{s_{\lambda}(H^{\cts})}
\end{align}
where $\Phi(w) = [\phi(x_1,w)^\top, \cdots, \phi(x_n,w)^\top]^\top\in\R^{n\times d_2}$ (in our setting the feature dimension is $d_2 = d$).
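The ridge leverage function above can be evaluated directly once $H^{\cts}$ (or an estimate of it) is available. The sketch below (ours) computes the unnormalized $q_\lambda(w)$ and the normalizer $s_\lambda(H^{\cts})$ for the ReLU feature map $\phi(x,w)=x\sigma'(w^\top x)$; the arguments \texttt{X} and \texttt{H} are assumed to be the training data matrix and a kernel matrix such as the one built in the earlier sketch.
\begin{verbatim}
import numpy as np

def q_lambda(w, X, H, lam):
    # unnormalized ridge leverage function q_lambda(w) = p(w) * Tr[Phi(w)^T (H + lam I)^{-1} Phi(w)]
    d = w.shape[0]
    p_w = np.exp(-0.5 * w @ w) / (2 * np.pi) ** (d / 2)     # N(0, I_d) density p(w)
    Phi_w = X * (X @ w >= 0).astype(float)[:, None]         # rows phi(x_i, w) = x_i * sigma'(w^T x_i)
    Hinv = np.linalg.inv(H + lam * np.eye(H.shape[0]))
    return p_w * np.trace(Phi_w.T @ Hinv @ Phi_w)

def s_lambda(H, lam):
    # normalizer s_lambda(H) = Tr[(H + lam I)^{-1} H]; the density is q(w) = q_lambda(w) / s_lambda(H)
    return np.trace(np.linalg.solve(H + lam * np.eye(H.shape[0]), H))
\end{verbatim}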
We consider training the following reweighed neural network using leverage score initialization:
\begin{definition}[Training reweighed neural network with regularization]\label{def:f_nn_lev_intro}
Let $\kappa\in(0,1]$ be a small multiplier. Let $\lambda\in(0,1)$ be the regularization parameter. Let $q(\cdot):\R^d\to\R_{> 0}$ be the density defined in~\eqref{eq:lev_dis_intro}. Let $p(\cdot)$ denote the probability density function of the Gaussian distribution $\N(0,I_d)$. We initialize the network as $a_r\overset{i.i.d.}{\sim} \unif[\{-1,1\}]$ and $w_r(0)\overset{i.i.d.}{\sim} q$. Then we consider solving the following optimization problem using gradient descent:
\begin{align}\label{problem:nn_lev_intro}
\min_{W} \frac{1}{2}\| Y - \kappa \bar{f}_{\nn}(W,X) \|_2^2 + \frac{1}{2}\lambda\|W\|_F^2.
\end{align}
where
\begin{align*}
\bar{f}_{\nn}(W,x) = \frac{1}{\sqrt{m}} \sum_{r=1}^m a_r \sigma (w_r^\top x) \sqrt{\frac{p(w_r(0))}{q(w_r(0))}} \text{~and~}\bar{f}_{\nn}(W,X) = [\bar{f}_{\nn}(W,x_1),\cdots,\bar{f}_{\nn}(W,x_n)]^\top .
\end{align*}
We denote $w_r(t),r\in[m]$ as the weight estimates at iteration $t$. We denote $\bar{u}_{\nn}(t) = \kappa \bar{f}_{\nn}(W(t),X)$ as the training data predictor at iteration $t$. Given any test data $x_{\test}\in\R^d$, we denote $\bar{u}_{\nn,\test}(t) = \kappa \bar{f}_{\nn}(W(t),x_{\test})$ as the test data predictor at iteration $t$.
\end{definition}
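A sketch of the reweighed forward pass $\bar{f}_{\nn}$ from the definition above is given below (ours, not the paper's implementation). The callable \texttt{q\_density} is a placeholder for the leverage score density $q$; any positive density on $\R^d$ can be substituted when experimenting.
\begin{verbatim}
import numpy as np

def gaussian_pdf(W):
    # standard Gaussian density p(w) evaluated row-wise on an (m, d) array
    d = W.shape[-1]
    return np.exp(-0.5 * np.sum(W * W, axis=-1)) / (2 * np.pi) ** (d / 2)

def reweighed_forward(W, W0, a, x, q_density):
    # W, W0: (m, d) current and initial weights; a: (m,) signs; x: (d,) input
    m = W.shape[0]
    scale = np.sqrt(gaussian_pdf(W0) / q_density(W0))   # sqrt(p(w_r(0)) / q(w_r(0)))
    pre = W @ x                                          # w_r^T x
    return float((a * np.maximum(pre, 0.0) * scale).sum() / np.sqrt(m))
\end{verbatim}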
We show that training this reweighed neural net with leverage score initialization is still equivalent to the neural tangent kernel ridge regression problem~\eqref{eq:krr_intro}, as stated in the following theorem:
\begin{theorem}[Equivalence between training reweighed neural net with regularization and kernel ridge regression for training data prediction]\label{thm:equivalence_train_lev_intro}
Given any accuracy $\epsilon\in(0,1)$ and failure probability $\delta\in(0,1/10)$. Let multiplier $\kappa=1$, number of iterations $T=O(\frac{1}{\Lambda_0}\log(\frac{1}{\epsilon}))$, network width $m = \poly(\frac{1}{\Lambda_0},n,d,\frac{1}{\epsilon},\log(\frac{1}{\delta}))$ and regularization parameter $\lambda = \wt{O}(\frac{1}{\sqrt{m}})$. Then with probability at least $1-\delta$ over the random leverage score initialization, we have
\begin{align*}
\|\bar{u}_{\nn}(T) - u^*\|_2 \leq \epsilon.
\end{align*}
Here $\wt{O}(\cdot)$ hides $\poly\log(n/(\epsilon \delta \Lambda_0 ))$.
\end{theorem}
\section{Overview of techniques}
\paragraph{Generalization of leverage score theory}
To prove Theorem~\ref{thm:leverage_score_intro}, we follow a proof framework similar to that of Lemma 8 in \cite{akmmvz17}.
Let $K+\lambda I_n = V^\top \Sigma^2 V$ be an eigenvalue decomposition of $K+\lambda I_n$. Then conclusion~\eqref{eq:leverage_score_thm_intro} is equivalent to
\begin{align}\label{eq:57_2_intro}
\| \Sigma^{-1}V\bar{\Psi}\bar{\Psi}^\top V^\top \Sigma^{-1} - \Sigma^{-1}VKV^\top \Sigma^{-1}\| \leq \epsilon
\end{align}
Let the random matrix $Y_r\in\R^{n\times n}$ be defined as
\begin{align*}
Y_r := \frac{p(w_r)}{\bar{q}_{\lambda}(w_r)} \Sigma^{-1}V{\Phi}(w_r){\Phi}(w_r)^\top V^\top \Sigma^{-1}.
\end{align*}
where $\Phi(w)=[\phi(x_1,w),\cdots,\phi(x_n,w)]^\top\in\R^{n\times d_2}$.
Then we have
\begin{align*}
\E_{\bar{q}_{\lambda}}[Y_r] = \E_{\bar{q}_{\lambda}} \left[ \frac{p(w_r)}{\bar{q}_{\lambda}(w_r)}\Sigma^{-1}V{\Phi}(w_r){\Phi}(w_r)^\top V^\top \Sigma^{-1} \right]
= \Sigma^{-1}V K V^\top \Sigma^{-1},
\end{align*}
and
\begin{align*}
\frac{1}{m}\sum_{r=1}^m Y_r = \frac{1}{m}\sum_{r=1}^m \frac{p(w_r)}{\bar{q}_{\lambda}(w_r)}\Sigma^{-1}V{\Phi}(w_r){\Phi}(w_r)^\top V^\top \Sigma^{-1}
= \Sigma^{-1}V \bar{\Psi}\bar{\Psi}^\top V^\top \Sigma^{-1}.
\end{align*}
Thus, it suffices to show that
\begin{align}\label{eq:57_3}
\left\| \frac{1}{m}\sum_{r=1}^m Y_r -\E_{\bar{q}_{\lambda}}[Y_r] \right\| \leq \epsilon
\end{align}
holds with probability at least $1-\delta$, which can be shown by applying matrix concentration results. Note
\begin{align*}
\|Y_r\| \leq s_{\bar{q}_{\lambda}}~~\text{and}~
\E_{\bar{q}_{\lambda}}[Y_r^2] \preceq s_{\bar{q}_{\lambda}} \cdot \diag\{\lambda_1/(\lambda_1 +\lambda),\cdots, \lambda_n/(\lambda_n +\lambda)\} .
\end{align*}
Applying the matrix concentration result (Lemma 7 in \cite{akmmvz17}), we complete the proof.
\paragraph{Equivalence between regularized neural network and kernel ridge regression}
To establish the equivalence between training neural network with regularization and neural tangent kernel ridge regression, the key observation is that the dynamic kernel during the training is always close to the neural tangent kernel.
Specifically, given training data $x_1,\cdots,x_n\in\R^d$, we define the dynamic kernel matrix $H(t)\in\R^{n\times n}$ along training process as
\begin{align*}
[H(t)]_{i,j}
= \left\langle \frac{\d f_{\nn}(W(t),x_i)}{\d W(t)},\frac{\d f_{\nn}(W(t),x_j)}{\d W(t)} \right\rangle
\end{align*}
Then we can show the gradient flow of training regularized neural net satisfies
\begin{align}
\frac{\d \|u^* - u_{\nn}(t)\|_2^2}{\d t}
= & -2(u^* - u_{\nn}(t))^\top ( H(t)+\lambda I) (u^* - u_{\nn}(t))\label{eq:linear_1_lev}\\
& + 2 (u_{\nn}(t)-u^*)^\top (H(t) - H^{\cts}) (Y-u^*)\label{eq:linear_2_lev}
\end{align}
where term~\eqref{eq:linear_1_lev} is the primary term characterizing the linear convergence of $u_{\nn}(t)$ to $u^*$, and term~\eqref{eq:linear_2_lev} is the additive term that can be well controlled if $H(t)$ is sufficiently close to $H^{\cts}$. We argue that $H(t)\approx H^{\cts}$ as a consequence of the following two observations:
\begin{itemize}
\item{Initialization phase:} At the beginning of training, $H(0)$ can be viewed as an approximation of the neural tangent kernel $H^{\cts}$ using finite-dimensional random features. Note that the size of these random features corresponds to the width of the neural network (scaled by the data dimension $d$). Therefore, when the neural network is sufficiently wide, this amounts to approximating the neural tangent kernel with sufficiently high-dimensional feature vectors, which ensures $H(0)$ is sufficiently close to $H^{\cts}$.
In the case of leverage score initialization, we further take the regularization into consideration. We use the leverage score machinery to modify the initialization distribution and the corresponding network parametrization, which gives a smaller upper bound on the required network width.
\item{Training phase:} If the net is sufficiently wide, we observe the over-parametrization phenomenon: the weight estimate $W(t)$ at time $t$ stays sufficiently close to its initialization $W(0)$, which implies that the dynamic kernel $H(t)$ stays sufficiently close to $H(0)$. Combined with the fact $H(0)\approx H^{\cts}$ argued in the initialization phase, we have $H(t)\approx H^{\cts}$ throughout the algorithm.
\end{itemize}
Combining both observations, we are able to iteratively show the (nearly) linear convergence property of training the regularized neural net, as in the following lemma:
\begin{lemma}[Bounding kernel perturbation, informal]\label{lem:induction_lev_informal}
For any accuracy $\Delta \in (0,1/10)$, if the network width $m = \poly(1/\Delta, 1/T, 1/\epsilon_{\train}, n, d, 1/\kappa, 1/\Lambda_0, \log(1/\delta))$ and $\lambda = O(\frac{1}{\sqrt{m}})$, then with probability $1-\delta$, there exist $\epsilon_W,~\epsilon_H',~\epsilon_K'\in(0,\Delta)$ that are independent of $t$, such that the following hold for all $0 \leq t \le T$:
\begin{enumerate}
\item $\| w_r(0) - w_r(t) \|_2 \leq \epsilon_W $, $\forall r \in [m]$
\item $\| H(0) - H(t) \|_2 \leq \epsilon_H'$
\item $\| u_{\nn}(t) - u^* \|_2^2 \leq \max\{\epsilon_{\train}^2, e^{-(\kappa^2\Lambda_0 + \lambda) t/2}\| u_{\nn}(0) - u^* \|_2^2\}$
\end{enumerate}
\end{lemma}
Given arbitrary accuracy $\epsilon\in(0,1)$, if we choose $\epsilon_{\train} = \epsilon$, $T=\wt{O}(\frac{1}{\kappa^2\Lambda_0})$ and $m$ sufficiently large in Lemma~\ref{lem:induction_lev_informal}, then we have $\| u_{\nn}(t) - u^* \|_2 \leq \epsilon$, indicating the equivalence between training the neural network with regularization and neural tangent kernel ridge regression for the training data predictions.
To further argue the equivalence for any given test data $x_{\test}$, we observe the similarity between the gradient flows of the neural tangent kernel ridge regression predictor $u_{\ntk, \test}(t)$ and the regularized neural network predictor $u_{\nn,\test}(t)$, as follows:
\begin{align}
\frac{\d u_{\ntk, \test}(t)}{\d t} = & ~ \kappa^2 \k_{\ntk}(x_{\test}, X)^\top ( Y - u_{\ntk}(t) ) - \lambda \cdot u_{\ntk, \test}(t).\label{eq:gradient_nn_intro}\\
\frac{\d u_{\nn,\test}(t)}{\d t} = & ~ \kappa^2 \k_{t}(x_{\test}, X)^\top ( Y - u_{\nn}(t) ) - \lambda \cdot u_{\nn,\test}(t).\label{eq:gradient_ntk_intro}
\end{align}
By choosing the multiplier $\kappa>0$ small enough, we can bound the initial difference between these two predictors. Combining with the above similarity between the gradient flows, we are able to show $|u_{\nn,\test}(T)-u_{\ntk,\test}(T)|\leq \epsilon/2$ for appropriate $T>0$. Finally, using the linear convergence of the kernel ridge regression gradient flow, we can prove $|u_{\nn,\test}(T)-u^*_{\ntk,\test}|\leq \epsilon$.
Using a similar idea, we can also show the equivalence for test data predictors and for the case of leverage score initialization. We refer to the Appendix for a detailed proof sketch and rigorous proofs.
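To make the kernel-side dynamics concrete, the sketch below (ours) integrates the kernel ridge regression flows $\frac{\d u_{\ntk}}{\d t} = \kappa^2 H^{\cts}(Y - u_{\ntk}) - \lambda u_{\ntk}$ and $\frac{\d u_{\ntk,\test}}{\d t} = \kappa^2 \k_{\ntk}(x_{\test},X)^\top (Y - u_{\ntk}) - \lambda u_{\ntk,\test}$ by forward Euler, starting from the zero predictions induced by $\beta(0)=0$; the iterates should approach the closed-form optima $u^*$ and $u_{\test}^*$. The variables \texttt{H}, \texttt{k\_test}, \texttt{Y}, \texttt{kappa}, \texttt{lam}, \texttt{u\_star} refer to the earlier toy sketch and are assumptions of this illustration, not quantities from the paper.
\begin{verbatim}
import numpy as np

def integrate_krr_flow(H, k_test, Y, kappa, lam, T=50.0, dt=1e-3):
    n = len(Y)
    u, u_test = np.zeros(n), 0.0          # beta(0) = 0 gives zero initial predictions
    for _ in range(int(T / dt)):
        resid = Y - u
        u = u + dt * (kappa**2 * H @ resid - lam * u)
        u_test = u_test + dt * (kappa**2 * k_test @ resid - lam * u_test)
    return u, u_test

# u_T, u_test_T = integrate_krr_flow(H, k_test, Y, kappa, lam)
# np.allclose(u_T, u_star, atol=1e-3)     # should hold once T is large enough
\end{verbatim}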
\begin{remark}
Our results can be naturally extended to multi-layer ReLU deep neural networks with all parameters training together. Note the core of the connection between regularized NNs and KRR is to show the similarity between their gradient flows, as in Eq.~\eqref{eq:gradient_nn_intro},~\eqref{eq:gradient_ntk_intro}. The gradient flows consist of two terms: the first term is from normal NN training without regularizer, whose similarity has been shown in broader settings, e.g. \cite{dzps19,sy19,adhlsw19,als19a,als19b}; the second term is from the $\ell_2$ regularizer, whose similarity is true for multi-layer ReLU DNNs if the regularization parameter is divided by the number of layers of parameters trained, due to the piecewise linearity of the output with respect to the training parameters.
\end{remark}
\section{Equivalence between training neural network with regularization and kernel ridge regression under leverage score sampling}\label{sec:eq_lev}
In this section, we connect neural network theory with leverage score sampling theory by showing a new equivalence result between training the reweighed neural network with regularization under leverage score initialization and the corresponding neural tangent kernel ridge regression. Specifically, we prove Theorem~\ref{thm:equivalence_train_lev}. Due to the similarity of the results to Section~\ref{sec:equiv}, we present this section in the same framework.
Section~\ref{sec:equiv_preli_D} introduces new notations and states the standard data assumptions again. Section~\ref{sec:def_D} restates and supplements the definitions in the paper. Section~\ref{sec:lemma_D} presents the key lemmas about the leverage score initialization and related properties, which are crucial to the proof. Section~\ref{sec:proof_sketch_D} provides a brief proof sketch. Section~\ref{sec:main_D} restates and proves the main result Theorem~\ref{thm:equivalence_train_lev} following the proof sketch.
Here, we list the locations where definitions and theorems in the paper are restated. Definition~\ref{def:f_nn_lev_intro} is restated in Definition~\ref{def:f_nn_lev}. Theorem~\ref{thm:equivalence_train_lev_intro} is restated in Theorem~\ref{thm:equivalence_train_lev}.
\subsection{Preliminaries}\label{sec:equiv_preli_D}
Let us define the following notations:
\begin{itemize}
\item $\bar{u}_{\ntk}(t)=\kappa \bar{f}_{\ntk}(\beta(t),X) = \kappa\bar{\Phi}(X)\beta(t) \in \R^n$ be the prediction of the kernel ridge regression for the training data with respect to $\bar{H}(0)$ at time $t$. (See Definition~\ref{def:krr_ntk_lev})
\item $\bar{u}^* = \lim_{t \rightarrow \infty} \bar{u}_{\ntk}(t)$ (See Eq.~\eqref{eq:def_u_*_lev})
\item $\bar{u}_{\ntk,\test}(t) = \kappa \bar{f}_{\ntk}(\beta(t),x_{\test}) = \kappa\bar{\Phi}(x_{\test}) \beta(t) \in \R$ be the prediction of the kernel ridge regression for the test data with respect to $\bar{H}(0)$ at time $t$. (See Definition~\ref{def:krr_ntk_lev})
\item $\bar{u}_{\test}^* = \lim_{t\rightarrow \infty} \bar{u}_{\ntk,\test}( t ) $ (See Eq.~\eqref{eq:def_u_test_*_lev})
\item $\bar{\k}_t(x_{\test},X) \in \R^n$ be the induced kernel between the training data and test data at time $t$, where
\begin{align*}
[\bar{\k}_t(x_{\test},X)]_i
= \bar{\k}_t(x_{\test},x_i)
= \left\langle \frac{\partial \bar{f}(W(t),x_{\test})}{\partial W(t)},\frac{\partial \bar{f}(W(t),x_i)}{\partial W(t)} \right\rangle
\end{align*}
(see Definition~\ref{def:dynamic_kernel_lev})
\item $\bar{u}_{\nn}(t) = \kappa \bar{f}_{\nn}(W(t),X) \in \R^n$ be the prediction of the reweighed neural network with leverage score initialization for the training data at time $t$. (See Definition~\ref{def:f_nn_lev})
\item $\bar{u}_{\nn,\test}(t) = \kappa \bar{f}_{\nn}(W(t),x_{\test}) \in \R$ be the prediction of the reweighed neural network with leverage score initialization for the test data at time $t$ (See Definition~\ref{def:f_nn_lev})
\end{itemize}
\begin{assumption}[data assumption]\label{ass:data_assumption_D}
We make the following assumptions: \\
1. For each $i \in [n]$, we assume $|y_i| = O(1)$.\\
2. $H^{\cts}$ is positive definite, i.e., $\Lambda_0:=\lambda_{\min}(H^{\cts})>0$.\\
3. All the training data and test data have Euclidean norm equal to 1.
\end{assumption}
\subsection{Definitions}\label{sec:def_D}
\begin{definition}[Training reweighed neural network with regularization, restatement of Definition~\ref{def:f_nn_lev_intro}]\label{def:f_nn_lev}
Given training data matrix $X\in\R^{n\times d}$ and corresponding label vector $Y\in\R^n$. Let $\kappa\in(0,1)$ be a small multiplier. Let $\lambda\in(0,1)$ be the regularization parameter. Given any probability density function $q(\cdot):\R^d\to\R_{> 0}$. Let $p(\cdot)$ denote the probability density function of the Gaussian distribution $\N(0,I_d)$. We initialize the network as $a_r\overset{i.i.d.}{\sim} \unif[\{-1,1\}]$ and $w_r(0)\overset{i.i.d.}{\sim} q$. Then we consider solving the following optimization problem using gradient descent:
\begin{align}\label{problem:nn_lev}
\min_{W} \frac{1}{2}\| Y - \kappa \bar{f}_{\nn}(W,X) \|_2^2 + \frac{1}{2}\lambda\|W\|_F^2.
\end{align}
where $\bar{f}_{\nn}(W,x) = \frac{1}{\sqrt{m}} \sum_{r=1}^m a_r \sigma (w_r^\top x) \sqrt{\frac{p(w_r(0))}{q(w_r(0))}}$ and $\bar{f}_{\nn}(W,X) = [\bar{f}_{\nn}(W,x_1),\cdots,\bar{f}_{\nn}(W,x_n)]^\top\in\R^n$. We denote $w_r(t),r\in[m]$ as the weight estimates at iteration $t$. We denote
\begin{align}\label{eq:nn_predict_train_lev}
\bar{u}_{\nn}(t) = \kappa \bar{f}_{\nn}(W(t),X) = \frac{\kappa}{\sqrt{m}} \sum_{r=1}^m a_r \sigma (w_r(t)^\top X) \sqrt{\frac{p(w_r(0))}{q(w_r(0))}}
\end{align}
as the training data predictor at iteration $t$. Given any test data $x_{\test}\in\R^d$, we denote
\begin{align}\label{eq:nn_predict_test_lev}
\bar{u}_{\nn,\test}(t) = \kappa \bar{f}_{\nn}(W(t),x_{\test}) = \frac{\kappa}{\sqrt{m}} \sum_{r=1}^m a_r \sigma (w_r(t)^\top x_{\test}) \sqrt{\frac{p(w_r(0))}{q(w_r(0))}}
\end{align}
as the test data predictor at iteration $t$.
\end{definition}
\begin{definition}[Reweighed dynamic kernel]\label{def:dynamic_kernel_lev}
Given $W(t) \in \R^{d \times m}$ as the parameters of the neural network at training time $t$ as defined in Definition~\ref{def:f_nn_lev}. For any data $x,z\in\R^d$, we define $\bar{\k}_t(x,z)\in\R$ as
\begin{align*}
\bar{\k}_t(x,z)
= \left\langle \frac{\d \bar{f}_{\nn}(W(t),x)}{\d W(t)},\frac{\d \bar{f}_{\nn}(W(t),z)}{\d W(t)} \right\rangle
\end{align*}
Given training data matrix $X=[x_1,\cdots,x_n]^\top\in\R^{n\times d}$, we define $\bar{H}(t)\in\R^{n\times n}$ as
\begin{align*}
[\bar{H}(t)]_{i,j} = \bar{\k}_{t}(x_i, x_j)\in\R.
\end{align*}
We denote $\bar{\Phi}(x) = \frac{1}{\sqrt{m}}[x^\top\sigma'(w_1(0)^\top x)\sqrt{\frac{p(w_1(0))}{q(w_1(0))}},\cdots,x^\top\sigma'(w_m(0)^\top x)\sqrt{\frac{p(w_m(0))}{q(w_m(0))}}]^\top\in\R^{md}$ as the feature vector corresponding to $\bar{H}(0)$, which satisfies
\begin{align*}
[\bar{H}(0)]_{i,j} = \bar{\Phi}(x_i)^\top \bar{\Phi}(x_j)
\end{align*}
for all $i,j\in[n]$. We denote $\bar{\Phi}(X)=[\bar{\Phi}(x_1),\cdots,\bar{\Phi}(x_n)]^\top\in\R^{n\times md}$. Further, given a test data $x_{\test}\in\R^d$, we define $\bar{\k}_t(x_{\test},X)\in\R^n$ as
\begin{align*}
\bar{\k}_t(x_{\test},X) = [\bar{\k}_t(x_{\test},x_1), \cdots, \bar{\k}_t(x_{\test},x_n)]^\top\in\R^n.
\end{align*}
\end{definition}
\begin{definition}[Kernel ridge regression with $\bar{H}(0)$]\label{def:krr_ntk_lev}
Given training data matrix $X=[x_1,\cdots,x_n]^\top\in\R^{n\times d}$ and corresponding label vector $Y\in\R^n$. Let $\bar{\k}_{0}$, $\bar{H}(0)\in\R^{n\times n}$ and $\bar{\Phi}$ be the neural tangent kernel and corresponding feature functions defined as in Definition~\ref{def:dynamic_kernel_lev}. Let $\kappa\in(0,1)$ be a small multiplier. Let $\lambda\in(0,1)$ be the regularization parameter. Then we consider the following neural tangent kernel ridge regression problem:
\begin{align}\label{eq:krr_ntk_lev}
\min_{\beta} \frac{1}{2}\| Y - \kappa \bar{f}_{\ntk}(\beta,X) \|_2^2 + \frac{1}{2}\lambda\|\beta\|_2^2.
\end{align}
where $\bar{f}_{\ntk}(\beta,x) = \bar{\Phi}(x)^\top \beta$ denotes the prediction function in the corresponding RKHS and $\bar{f}_{\ntk}(\beta,X) = [\bar{f}_{\ntk}(\beta,x_1),\cdots,\bar{f}_{\ntk}(\beta,x_n)]^\top\in\R^{n}$. Consider the gradient flow of solving problem~\eqref{eq:krr_ntk_lev} with initialization $\beta(0) = 0$.
We denote $\beta(t)\in\R^{md}$ as the variable at iteration $t$. We denote
\begin{align}\label{eq:ntk_predict_train_lev}
\bar{u}_{\ntk}(t) = \kappa\bar{\Phi}(X)\beta(t)
\end{align}
as the training data predictor at iteration $t$. Given any test data $x_{\test}\in\R^d$, we denote
\begin{align}\label{eq:ntk_predict_test_lev}
\bar{u}_{\ntk,\test}(t) = \kappa\bar{\Phi}(x_{\test})^\top\beta(t)
\end{align}
as the test data predictor at iteration $t$. Note that the gradient flow converges to the optimal solution of problem~\eqref{eq:krr_ntk_lev} due to the strong convexity of the problem. We denote
\begin{align}\label{eq:def_beta_*_lev}
\bar{\beta}^* = \lim_{t\to\infty} \beta(t) = \kappa (\kappa^2 \bar{\Phi}(X)^\top \bar{\Phi}(X) + \lambda I)^{-1} \bar{\Phi}(X)^\top Y
\end{align}
and the optimal training data predictor
\begin{align}\label{eq:def_u_*_lev}
\bar{u}^* = \lim_{t\to\infty} \bar{u}_{\ntk}(t) = \kappa \bar{\Phi}(X)\bar{\beta}^* = \kappa^2 \bar{H}(0)(\kappa^2 \bar{H}(0)+\lambda I)^{-1}Y
\end{align}
and the optimal test data predictor
\begin{align}\label{eq:def_u_test_*_lev}
\bar{u}_{\test}^* = \lim_{t\to\infty} \bar{u}_{\ntk,\test}(t) = \kappa \bar{\Phi}(x_{\test})^\top \bar{\beta}^* = \kappa^2\bar{\k}_{0}(x_{\test}, X)^\top (\kappa^2 \bar{H}(0)+\lambda I)^{-1}Y.
\end{align}
\end{definition}
\subsection{Leverage score sampling, gradient flow, and linear convergence}\label{sec:lemma_D}
Recall that in the main body we connect the leverage score sampling theory and the convergence theory of neural network training by observing
\begin{align*}
\k_{\ntk}(x, z) & = \E \left[\left\langle \frac{\partial f_{\nn}(W,x)}{\partial W},\frac{\partial f_{\nn}(W,z)}{\partial W} \right\rangle \right] \\
& = \E_{w \sim p}[\phi(x,w)^\top\phi(z,w)]
\end{align*}
where $\phi(x,w) = x\sigma'(w^\top x)\in\R^{d}$ and $p(\cdot)$ denotes the probability density function of standard Gaussian distribution $\N(0,I_d)$.
Thus, given regularization parameter $\lambda>0$, we can define the ridge leverage function with respect to $H^{\cts}$ defined in Definition~\ref{def:ntk_phi} as
\begin{align*}
q_\lambda(w) = p(w) \Tr[\Phi(w)^\top ( H^{\cts} + \lambda I_n )^{-1} \Phi(w)]
\end{align*}
and corresponding probability density function
\begin{align}\label{eq:lev_dis_lev}
q(w)=\frac{q_{\lambda}(w)}{s_{\lambda}(H^{\cts})}
\end{align}
where $\Phi(w) = [\phi(x_1,w)^\top, \cdots, \phi(x_n,w)^\top]^\top\in\R^{n\times d_2}$.
\begin{lemma}[property of leverage score sampling distribution]\label{lem:property_lev}
Let $p(\cdot)$ denote the probability density function of the standard Gaussian distribution $\N(0,I_d)$. Let $q(\cdot)$ be defined as in \eqref{eq:lev_dis_lev}. Assume $\Tr[\Phi(w)\Phi(w)^\top]=O(n)$ and $\lambda \leq \Lambda_0/2$. Then for all $w\in\R^d$ we have
\begin{align*}
c_1p(w) \leq q(w) \leq c_2 p(w)
\end{align*}
where $c_1=O(\frac{1}{n})$ and $c_2=O(\frac{1}{\Lambda_0})$.
\end{lemma}
\begin{proof}
Note by assumption $s_{\lambda}(H^{\cts}) = \Tr[(H^{\cts} + \lambda I_n)^{-1}H^{\cts}] = O(n)$. Further, note for any $w\in\R^d$,
\begin{align*}
q_\lambda(w) = & ~ p(w) \Tr[\Phi(w)^\top ( H^{\cts} + \lambda I_n )^{-1} \Phi(w)]\\
\leq & ~ p(w) \Tr[\Phi(w)^\top \Phi(w)]\cdot\frac{1}{\Lambda_0+\lambda}\\
= & ~ p(w) \Tr[\sum_{i=1}^n \phi(x_i,w) \phi(x_i, w)^\top]\cdot\frac{1}{\Lambda_0+\lambda}\\
= & ~ p(w) \sum_{i=1}^n \Tr[ \phi(x_i,w) \phi(x_i, w)^\top]\cdot\frac{1}{\Lambda_0+\lambda}\\
= & ~ p(w) \sum_{i=1}^n \Tr[ \phi(x_i,w)^\top \phi(x_i, w)]\cdot\frac{1}{\Lambda_0+\lambda}\\
= & ~ p(w) \sum_{i=1}^n \|x_i\|_2^2\sigma'(w^\top x_i)^2 \cdot\frac{1}{\Lambda_0+\lambda}\\
\leq & ~ p(w)\frac{n}{\Lambda_0+\lambda}
\end{align*}
where the first step follows from $H^{\cts} \succeq \Lambda_0 I_n$, the second step follows from the definition of $\Phi$, the third step follows from the linearity of trace operator, the fourth step follows from $\Tr(AB)=\Tr(BA)$, the fifth step follows from the definition of $\phi$, and the last step follows from $\|x_i\|_2 = 1$ and $\sigma'(\cdot)\leq 1$.
Thus, combining the above facts, we have
\begin{align*}
q(w) = \frac{q_{\lambda}(w)}{s_{\lambda}(H^{\cts})} \leq p(w)\frac{n}{(\Lambda_0+\lambda)s_{\lambda}(H^{\cts})} = c_2p(w)
\end{align*}
holds for all $w\in\R^d$, where $c_2 = O(\frac{1}{\Lambda_0})$.
Similarly, noting $H^{\cts}\preceq n I_n$, we have
\begin{align*}
q_\lambda(w) \geq p(w)\Tr[\Phi(w)^\top \Phi(w)]\cdot \frac{1}{n+\lambda}
\end{align*}
which implies
\begin{align*}
q(w) = \frac{q_{\lambda}(w)}{s_{\lambda}(H^{\cts})} \geq p(w)\frac{\Tr[\Phi(w)^\top \Phi(w)]}{(n+\lambda)s_{\lambda}(H^{\cts})} = c_1 p(w)
\end{align*}
holds for all $w\in\R^d$, where $c_1 = O(\frac{1}{n})$. Combining the above results, we complete the proof.
\end{proof}
\begin{lemma}[leverage score sampling]\label{lem:leverage_score_sampling}
Let $\bar{H}(t)$, $H^{\cts}$ be the kernels defined in Definition~\ref{def:dynamic_kernel_lev} and Definition~\ref{def:ntk_phi}. Let $\Lambda_0 > 0$ be defined as in Definition~\ref{def:ntk_phi}. Let $p(\cdot)$ denote the probability density function of the Gaussian $\N(0,I_d)$. Let $q(\cdot)$ denote the leverage score sampling distribution with respect to $p(\cdot)$, $\bar{H}(0)$ and $\lambda$ defined in Definition~\ref{def:lev_distribution}. Let $\Delta\in(0,1/4)$. Then we have
\begin{align*}
\E_{q}[\bar{H}(0)] = H^{\cts}.
\end{align*}
By choosing $m \geq \wt{O}(\Delta^{-2} s_{\lambda}(H^{\cts}))$, with probability at least $1-\delta$,
\begin{align*}
(1-\Delta)(H^{\cts} + \lambda I) \preceq \bar{H}(0) + \lambda I \preceq (1+\Delta)(H^{\cts} + \lambda I)
\end{align*}
Further, if $\lambda \leq \Lambda_0$, we have with probability at least $1-\delta$,
\begin{align*}
\bar{H}(0) \succeq \frac{\Lambda_0}{2} I_n.
\end{align*}
Here $\wt{O}(\cdot)$ hides $\poly\log(s_{\lambda}(\bar{H}(0))/\delta)$.
\end{lemma}
\begin{proof}
Note for any $i,j\in[n]$,
\begin{align*}
[\E_{q}[\bar{H}(0)]]_{i,j} = & ~ \E_{q} [\bar{\Phi}(x_i)^\top \bar{\Phi}(x_j)]\\
= & ~ \E_{q}[\langle \frac{\d \bar{f}_{\nn}(W(0),x_i)}{\d W},\frac{\d \bar{f}_{\nn}(W(0),x_j)}{\d W} \rangle] \\
= & ~ \E_{p}[\langle \frac{\d {f}_{\nn}(W(0),x_i)}{\d W},\frac{\d {f}_{\nn}(W(0),x_j)}{\d W} \rangle] \\
= & ~ [H^{\cts}]_{i,j}
\end{align*}
where the first step follows from the definition of $\bar{H}(0)$, the second step follows the definition of $\bar{\Phi}$, the third step calculates the expectation, and the last step follows the definition of $H^{\cts}$.
Also, by applying Theorem~\ref{thm:leverage_score} directly, we have with probability at least $1-\delta$,
\begin{align*}
(1-\Delta)(H^{\cts} + \lambda I) \preceq \bar{H}(0) + \lambda I \preceq (1+\Delta)(H^{\cts} + \lambda I)
\end{align*}
if $m \geq \wt{O}(\Delta^{-2} s_{\lambda}(H^{\cts}))$.
Since
$$\bar{H}(0) + \lambda I \succeq (1-\Delta)(H^{\cts} + \lambda I) \succeq (1-\Delta)(\Lambda_0 + \lambda) I_n ,$$
we have
$$\bar{H}(0) \succeq [(1-\Delta)(\Lambda_0 + \lambda) - \lambda] I_n \succeq \frac{\Lambda_0}{2} I_n,$$
which completes the proof.
\end{proof}
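The first claim of the lemma is an unbiasedness statement: because of the $\sqrt{p/q}$ reweighting in $\bar{f}_{\nn}$, the identity $\E_q[\bar{H}(0)] = H^{\cts}$ in fact holds for any positive proposal density $q$, not only the leverage score density. The sketch below (ours) checks this numerically; since sampling the true leverage score density is nontrivial, it uses a wider Gaussian $\N(0,s^2 I_d)$ purely as a stand-in proposal, which is \emph{not} the distribution of Definition~\ref{def:lev_distribution}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, d, m, s = 6, 4, 200000, 1.5
X = rng.normal(size=(n, d)); X /= np.linalg.norm(X, axis=1, keepdims=True)

W_p = rng.normal(size=(m, d))            # w ~ p = N(0, I_d)
W_q = s * rng.normal(size=(m, d))        # w ~ q = N(0, s^2 I_d), stand-in proposal
ratio = s**d * np.exp(-0.5 * np.sum(W_q**2, axis=1) * (1.0 - 1.0 / s**2))  # p(w)/q(w)

def kernel_estimate(W, weights):
    # (1/m) sum_r weight_r * (x_i^T x_j) * 1{w_r^T x_i >= 0} * 1{w_r^T x_j >= 0}
    S = (X @ W.T >= 0).astype(float)
    return (X @ X.T) * ((S * weights) @ S.T) / W.shape[0]

H_p = kernel_estimate(W_p, np.ones(m))   # plain Monte Carlo estimate of H^cts
H_q = kernel_estimate(W_q, ratio)        # reweighted estimate under the proposal
print(np.max(np.abs(H_p - H_q)))         # small for large m
\end{verbatim}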
\begin{lemma}[Gradient flow of kernel ridge regression, parallel to Lemma~\ref{lem:gradient_flow_of_krr}]\label{lem:gradient_flow_of_krr_lev}
Given training data matrix $X\in\R^{n\times d}$ and corresponding label vector $Y\in\R^n$. Let $\bar{f}_{\ntk}$, $\beta(t)\in\R^{md}$, $\kappa\in(0,1)$ and $\bar{u}_{\ntk}(t)\in\R^n$ be defined as in Definition~\ref{def:krr_ntk_lev}. Let $\bar{\Phi},~\bar{\k}_t$ be defined as in Definition~\ref{def:dynamic_kernel_lev}. Then for any data $z\in\R^d$, we have
\begin{align*}
\frac{\d \bar{f}_{\ntk}(\beta(t), z)}{\d t} = \kappa \cdot \bar{\k}_{0}(z, X)^\top ( Y - \bar{u}_{\ntk}(t) ) - \lambda \cdot \bar{f}_{\ntk}(\beta(t), z).
\end{align*}
\end{lemma}
\begin{proof}
Denote $L(t)= \frac{1}{2}\|Y-\bar{u}_{\ntk}(t)\|_2^2+\frac{1}{2}\lambda\|\beta(t)\|_2^2$. By the rule of gradient descent, we have
\begin{align*}
\frac{\d \beta(t)}{\d t}=-\frac{\d L}{\d \beta}=\kappa \bar{\Phi}(X)^\top(Y-\bar{u}_{\ntk}(t))-\lambda\beta(t).
\end{align*}
Thus we have
\begin{align*}
\frac{\d \bar{f}_{\ntk}(\beta(t), z)}{\d t}
= & ~ \frac{\d \bar{f}_{\ntk}(\beta(t), z)}{\d \beta(t)}\frac{\d \beta(t)}{\d t} \\
= & ~ \bar{\Phi}(z)^\top (\kappa\bar{\Phi}(X)^\top(Y-\bar{u}_{\ntk}(t))-\lambda\beta(t)) \\
= & ~ \kappa\bar{\k}_{0}(z, X)^\top (Y-\bar{u}_{\ntk}(t))-\lambda\bar{\Phi}(z)^\top\beta(t) \\
= & ~ \kappa\bar{\k}_{0}(z, X)^\top (Y-\bar{u}_{\ntk}(t))-\lambda \bar{f}_{\ntk}(\beta(t), z),
\end{align*}
where the first step is due to chain rule, the second step follows from the fact $ \d \bar{f}_{\ntk}(\beta, z)/ \d \beta=\bar{\Phi}(z)$, the third step is due to the definition of the kernel $\bar{\k}_{0}(z, X)=\bar{\Phi}(X)\bar{\Phi}(z) \in \R^{n}$, and the last step is due to the definition of $\bar{f}_{\ntk}(\beta(t), z)\in\R$.
\end{proof}
\begin{corollary}[Gradient of prediction of kernel ridge regression, parallel to Corollary~\ref{cor:ntk_gradient}]\label{cor:ntk_gradient_lev}
Given training data matrix $X=[x_1,\cdots,x_n]^\top \in \R^{n\times d}$ and corresponding label vector $Y \in \R^n$. Given a test data $x_{\test}\in\R^d$. Let $\bar{f}_{\ntk}$, $\beta(t)\in\R^{md}$, $\kappa\in(0,1)$, $\bar{u}_{\ntk}(t)\in\R^n$ and $\bar{u}_{\ntk,\test}(t)\in\R$ be defined as in Definition~\ref{def:krr_ntk_lev}. Let $\bar{\k}_{t},~\bar{H}(0)\in\R^{n\times n}$ be defined as in Definition~\ref{def:dynamic_kernel_lev}. Then we have
\begin{align*}
\frac{\d \bar{u}_{\ntk}(t)}{\d t} & = \kappa^2 \bar{H}(0) ( Y - \bar{u}_{\ntk}(t) ) - \lambda \cdot \bar{u}_{\ntk}(t)\\
\frac{\d \bar{u}_{\ntk, \test}(t)}{\d t} & = \kappa^2 \bar{\k}_{0}(x_{\test}, X)^\top ( Y - \bar{u}_{\ntk}(t) ) - \lambda \cdot \bar{u}_{\ntk, \test}(t).
\end{align*}
\end{corollary}
\begin{proof}
Plugging in $z = x_i\in\R^d$ in Lemma~\ref{lem:gradient_flow_of_krr_lev}, we have
\begin{align*}
\frac{\d \bar{f}_{\ntk}(\beta(t), x_i)}{\d t} = \kappa \bar{\k}_{0}(x_i, X)^\top ( Y - \bar{u}_{\ntk}(t) ) - \lambda \cdot \bar{f}_{\ntk}(\beta(t), x_i).
\end{align*}
Note $[\bar{u}_{\ntk}(t)]_i = \kappa \bar{f}_{\ntk}(\beta(t), x_i)$ and $[\bar{H}(0)]_{:,i} = \bar{\k}_{0}(x_i, X)$, so writing all the data in a compact form, we have
\begin{align*}
\frac{\d \bar{u}_{\ntk}(t)}{\d t} = \kappa^2 \bar{H}(0) ( Y - \bar{u}_{\ntk}(t) ) - \lambda \cdot \bar{u}_{\ntk}(t).
\end{align*}
Plugging in data $z = x_{\test}\in\R^d$ in Lemma~\ref{lem:gradient_flow_of_krr_lev}, we have
\begin{align*}
\frac{\d \bar{f}_{\ntk}(\beta(t), x_{\test})}{\d t} = \kappa \bar{\k}_{0}(x_{\test}, X)^\top ( Y - \bar{u}_{\ntk}(t) ) - \lambda \cdot \bar{f}_{\ntk}(\beta(t), x_{\test}).
\end{align*}
Note by definition, $\bar{u}_{\ntk,\test}(t) = \kappa \bar{f}_{\ntk}(\beta(t), x_{\test}) \in \R$, so we have
\begin{align*}
\frac{\d \bar{u}_{\ntk, \test}(t)}{\d t} = \kappa^2 \bar{\k}_{0}(x_{\test}, X)^\top ( Y - \bar{u}_{\ntk}(t) ) - \lambda \cdot \bar{u}_{\ntk, \test}(t).
\end{align*}
\end{proof}
\begin{lemma}[Linear convergence of kernel ridge regression, parallel to Lemma~\ref{lem:linear_converge_krr}]\label{lem:linear_converge_krr_lev}
Given training data matrix $X=[x_1,\cdots,x_n]^\top \in \R^{n\times d}$ and corresponding label vector $Y\in\R^n$. Let $\kappa\in(0,1)$, $\bar{u}_{\ntk}(t) \in \R^n$ and $\bar{u}^* \in \R^n$ be defined as in Definition~\ref{def:krr_ntk_lev}. Let $\Lambda_0 > 0$ be defined as in Definition~\ref{def:ntk_phi}. Let $\lambda > 0$ be the regularization parameter. Then we have
\begin{align*}
\frac{\d \|\bar{u}_{\ntk}(t)-\bar{u}^*\|_2^2}{\d t} \le - (\kappa^2 \Lambda_0+\lambda) \|\bar{u}_{\ntk}(t)-\bar{u}^*\|_2^2.
\end{align*}
Further, we have
\begin{align*}
\|\bar{u}_{\ntk}(t)-\bar{u}^*\|_2 \leq e^{-(\kappa^2 \Lambda_0+\lambda)t/2} \|\bar{u}_{\ntk}(0)-\bar{u}^*\|_2.
\end{align*}
\end{lemma}
\begin{proof}
Let $\bar{H}(0) \in \R^{n \times n}$ be defined as in Definition~\ref{def:dynamic_kernel_lev}. Then
\begin{align}\label{eq:322_1_lev}
\kappa^2 \bar{H}(0)(Y-\bar{u}^*)
= & ~ \kappa^2 \bar{H}(0)(Y-\kappa^2 \bar{H}(0)(\kappa^2 \bar{H}(0)+\lambda I_n)^{-1}Y) \notag \\
= & ~ \kappa^2 \bar{H}(0)(I_n-\kappa^2 \bar{H}(0)(\kappa^2 \bar{H}(0)+\lambda I_n)^{-1})Y \notag \\
= & ~ \kappa^2 \bar{H}(0)(\kappa^2 \bar{H}(0)+\lambda I_n - \kappa^2 \bar{H}(0))(\kappa^2 \bar{H}(0)+\lambda I_n)^{-1} Y \notag \\
= & ~ \kappa^2 \lambda \bar{H}(0)(\kappa^2 \bar{H}(0)+\lambda I_n)^{-1} Y \notag \\
= & ~ \lambda \bar{u}^*,
\end{align}
where the first step follows from the definition of $\bar{u}^* \in \R^n$, the second to fourth steps simplify the formula, and the last step uses the definition of $\bar{u}^* \in \R^n$ again.
So we have
\begin{align}\label{eq:322_2_lev}
\frac{\d \|\bar{u}_{\ntk}(t)-\bar{u}^*\|_2^2}{\d t}
= & ~ 2(\bar{u}_{\ntk}(t)-\bar{u}^*)^\top \frac{\d \bar{u}_{\ntk}(t)}{\d t} \notag\\
= & ~ -2\kappa^2 (\bar{u}_{\ntk}(t)-\bar{u}^*)^\top \bar{H}(0) (\bar{u}_{\ntk}(t) - Y) -2\lambda(\bar{u}_{\ntk}(t)-\bar{u}^*)^\top \bar{u}_{\ntk}(t) \notag\\
= & ~ -2\kappa^2 (\bar{u}_{\ntk}(t)-\bar{u}^*)^\top \bar{H}(0) (\bar{u}_{\ntk}(t) - \bar{u}^*) + 2\kappa^2 (\bar{u}_{\ntk}(t)-\bar{u}^*)^\top \bar{H}(0) (Y-\bar{u}^*) \notag\\
& ~ -2\lambda(\bar{u}_{\ntk}(t)-\bar{u}^*)^\top \bar{u}_{\ntk}(t)\notag\\
= & ~ -2\kappa^2 (\bar{u}_{\ntk}(t)-\bar{u}^*)^\top \bar{H}(0) (\bar{u}_{\ntk}(t) -\bar{u}^*) + 2\lambda(\bar{u}_{\ntk}(t)-\bar{u}^*)^\top \bar{u}^* \notag\\
& ~ -2\lambda(\bar{u}_{\ntk}(t)-\bar{u}^*)^\top \bar{u}_{\ntk}(t) \notag\\
= & ~ -2(\bar{u}_{\ntk}(t)-\bar{u}^*)^\top (\kappa^2 \bar{H}(0)+\lambda I) (\bar{u}_{\ntk}(t) - \bar{u}^*) \notag\\
\leq & ~ -(\kappa^2 \Lambda_0 + \lambda)\|\bar{u}_{\ntk}(t)-\bar{u}^*\|_2^2,
\end{align}
where the first step follows the chain rule, the second step follows Corollary~\ref{cor:ntk_gradient_lev}, the third step uses basic linear algebra, the fourth step follows Eq.~\eqref{eq:322_1_lev}, the fifth step simplifies the expression, and the last step follows from Lemma~\ref{lem:leverage_score_sampling}.
Further, note that
\begin{align*}
& ~ \frac{\d (e^{(\kappa^2 \Lambda_0+\lambda)t}\|\bar{u}_{\ntk}(t)-\bar{u}^*\|_2^2)}{\d t} \\
= & ~ (\kappa^2 \Lambda_0+\lambda)e^{(\kappa^2 \Lambda_0+\lambda)t}\|\bar{u}_{\ntk}(t)-\bar{u}^*\|_2^2 + e^{(\kappa^2 \Lambda_0+\lambda)t}\cdot\frac{\d \|\bar{u}_{\ntk}(t)-\bar{u}^*\|_2^2}{\d t} \\
\leq & ~ 0,
\end{align*}
where the first step calculates the gradient, and the second step follows from Eq.~\eqref{eq:322_2_lev}. Thus, $e^{(\kappa^2 \Lambda_0+\lambda)t}\|\bar{u}_{\ntk}(t)-\bar{u}^*\|_2^2$ is non-increasing, which implies
\begin{align*}
\|\bar{u}_{\ntk}(t)-\bar{u}^*\|_2 \leq e^{-(\kappa^2 \Lambda_0+\lambda)t/2} \|\bar{u}_{\ntk}(0)-\bar{u}^*\|_2.
\end{align*}
\end{proof}
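As a standard fact about linear ODEs (recorded here only for intuition, not as part of the proof), the flow in Corollary~\ref{cor:ntk_gradient_lev} admits the explicit solution
\begin{align*}
\bar{u}_{\ntk}(t) = \bar{u}^* + e^{-(\kappa^2 \bar{H}(0) + \lambda I_n) t}\big(\bar{u}_{\ntk}(0) - \bar{u}^*\big),
\end{align*}
so that $\|\bar{u}_{\ntk}(t)-\bar{u}^*\|_2 \leq e^{-(\kappa^2\lambda_{\min}(\bar{H}(0))+\lambda)t}\|\bar{u}_{\ntk}(0)-\bar{u}^*\|_2$, which is consistent with the rate in Lemma~\ref{lem:linear_converge_krr_lev} once $\bar{H}(0)\succeq \frac{\Lambda_0}{2} I_n$ from Lemma~\ref{lem:leverage_score_sampling} is taken into account.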
\begin{lemma}[Gradient flow of neural network training, Parallel to Lemma~\ref{lem:gradient_flow_of_nn}]\label{lem:gradient_flow_of_nn_lev}
Given training data matrix $X \in \R^{n\times d}$ and corresponding label vector $Y\in\R^n$. Let $\bar{f}_{\nn}: \R^{d\times m} \times \R^{d} \rightarrow \R$, $W(t) \in \R^{d \times m}$, $\kappa\in(0,1)$ and $\bar{u}_{\nn}(t)\in\R^n$ be defined as in Definition~\ref{def:f_nn_lev}. Let $\bar{\k}_{t}: \R^d \times \R^{n\times d} \rightarrow \R^n$ be defined as in Definition~\ref{def:dynamic_kernel_lev}. Then for any data $z \in \R^d$, we have
\begin{align*}
\frac{\d \bar{f}_{\nn}(W(t),z)}{\d t} = \kappa \bar{\k}_{t}(z,X)^\top ( Y - \bar{u}_{\nn}(t) )- \lambda \cdot \bar{f}_{\nn}(W(t),z).
\end{align*}
\end{lemma}
\begin{proof}
Denote $L(t)=\frac{1}{2}\|Y-\bar{u}_{\nn}(t)\|_2^2+\frac{1}{2}\lambda\|W(t)\|_F^2$. By the rule of gradient descent, we have
\begin{align}\label{eq:323_1_lev}
\frac{\d w_r}{\d t} = -\frac{\partial L}{\partial w_r}=(\frac{\partial \bar{u}_{\nn}}{\partial w_r})^\top(Y-\bar{u}_{\nn})-\lambda w_r.
\end{align}
Also note for ReLU activation $\sigma$, we have
\begin{align}\label{eq:323_2_lev}
\langle \frac{\d \bar{f}_{\nn}(W(t),z)}{\d W(t)},\lambda W(t)\rangle = & ~ \sum_{r=1}^m \Big(\frac{1}{\sqrt{m}}a_r z \sigma'(w_r(t)^\top z)\sqrt{\frac{p(w_r(0))}{q(w_r(0))}}\Big)^\top (\lambda w_r(t)) \notag \\
= & ~ \frac{\lambda}{\sqrt{m}}\sum_{r=1}^m a_r w_r(t)^\top z \sigma'(w_r(t)^\top z)\sqrt{\frac{p(w_r(0))}{q(w_r(0))}} \notag \\
= & ~ \frac{\lambda}{\sqrt{m}}\sum_{r=1}^m a_r \sigma(w_r(t)^\top z)\sqrt{\frac{p(w_r(0))}{q(w_r(0))}}\notag \\
= & ~ \lambda \bar{f}_{\nn}(W(t),z),
\end{align}
where the first step calculates the derivatives, the second step follows basic linear algebra, the third step follows the property of ReLU activation: $\sigma(l) = l\sigma'(l)$, and the last step follows from the definition of $\bar{f}_{\nn}$.
Thus, we have
\begin{align*}
& ~ \frac{\d \bar{f}_{\nn}(W(t),z)}{\d t} \\
= & ~ \langle\frac{\d \bar{f}_{\nn}(W(t),z)}{\d W(t)}, \frac{\d W(t)}{\d t}\rangle \notag \\
= & ~ \sum_{j=1}^{n}(y_j - \kappa \bar{f}_{\nn}(W(t),x_j)) \langle \frac{\d \bar{f}_{\nn}(W(t),z)}{\d W(t)},\frac{\d \kappa \bar{f}_{\nn}(W(t),x_j)}{\d W(t)} \rangle-\langle \frac{\d \bar{f}_{\nn}(W(t),z)}{\d W(t)},\lambda W(t)\rangle\notag\\
= & ~ \kappa \sum_{j=1}^{n}(y_j- \kappa \bar{f}_{\nn}(W(t),x_j)) \bar{\k}_{t}(z,x_j)-\lambda \cdot \bar{f}_{\nn}(W(t),z)\notag\\
= & ~ \kappa \bar{\k}_{t}(z,X)^\top ( Y - \bar{u}_{\nn}(t) )- \lambda \cdot \bar{f}_{\nn}(W(t),z),
\end{align*}
where the first step follows from chain rule, the second step follows from Eq.~\eqref{eq:323_1_lev}, the third step follows from the definition of $\bar{\k}_{t}$ and Eq.~\eqref{eq:323_2_lev}, and the last step rewrites the formula in a compact form.
\end{proof}
\begin{corollary}[Gradient of prediction of neural network, Parallel to Lemma~\ref{cor:nn_gradient}]\label{cor:nn_gradient_lev}
Given training data matrix $X = [x_1,\cdots,x_n]^\top\in\R^{n\times d}$ and corresponding label vector $Y \in \R^n$. Given a test data $x_{\test} \in \R^d$. Let $\bar{f}_{\nn}: \R^{d\times m} \times \R^d \rightarrow \R$, $W(t) \in \R^{d\times m}$, $\kappa\in(0,1)$ and $\bar{u}_{\nn}(t) \in \R^n$ be defined as in Definition~\ref{def:f_nn_lev}. Let $\bar{\k}_{t} : \R^d \times \R^{n \times d} \rightarrow \R^n,~\bar{H}(t) \in \R^{n \times n}$ be defined as in Definition~\ref{def:dynamic_kernel_lev}. Then we have
\begin{align*}
\frac{\d \bar{u}_{\nn}(t)}{\d t} = & ~ \kappa^2 \bar{H}(t) ( Y - \bar{u}_{\nn}(t) ) - \lambda \cdot \bar{u}_{\nn}(t)\\
\frac{\d \bar{u}_{\nn,\test}(t)}{\d t} = & ~ \kappa^2 \bar{\k}_{t}(x_{\test}, X)^\top ( Y - \bar{u}_{\nn}(t) ) - \lambda \cdot \bar{u}_{\nn,\test}(t).
\end{align*}
\end{corollary}
\begin{proof}
Plugging in $z = x_i\in\R^d$ in Lemma~\ref{lem:gradient_flow_of_nn_lev}, we have
\begin{align*}
\frac{\d \bar{f}_{\nn}(W(t), x_i)}{\d t} = \kappa \bar{\k}_{t}(x_i, X)^\top ( Y - \bar{u}_{\nn}(t) ) - \lambda \cdot \bar{f}_{\nn}(W(t), x_i).
\end{align*}
Note $[\bar{u}_{\nn}(t)]_i = \kappa \bar{f}_{\nn}(W(t), x_i)$ and $[\bar{H}(t)]_{:,i} = \bar{\k}_{t}(x_i, X)$, so writing all the data in a compact form, we have
\begin{align*}
\frac{\d \bar{u}_{\nn}(t)}{\d t} = \kappa^2 \bar{H}(t) ( Y - \bar{u}_{\nn}(t) ) - \lambda \cdot \bar{u}_{\nn}(t).
\end{align*}
Plugging in data $z = x_{\test}\in\R^d$ in Lemma~\ref{lem:gradient_flow_of_nn_lev}, we have
\begin{align*}
\frac{\d \bar{f}_{\nn}(W(t), x_{\test})}{\d t} = \kappa \bar{\k}_{t}(x_{\test}, X)^\top ( Y - \bar{u}_{\nn}(t) ) - \lambda \cdot \bar{f}_{\nn}(W(t), x_{\test}).
\end{align*}
Note by definition, $\bar{u}_{\nn,\test}(t) = \kappa \bar{f}_{\nn}(W(t), x_{\test}) $, so we have
\begin{align*}
\frac{\d \bar{u}_{\nn, \test}(t)}{\d t} = \kappa^2 \bar{\k}_{t}(x_{\test}, X)^\top ( Y - \bar{u}_{\nn}(t) ) - \lambda \cdot \bar{u}_{\nn, \test}(t).
\end{align*}
\end{proof}
\begin{lemma}[Linear convergence of neural network training, Parallel to Lemma~\ref{lem:linear_converge_nn}]\label{lem:linear_converge_nn_lev}
Given training data matrix $X=[x_1,\cdots,x_n]^\top\in\R^{n\times d}$ and corresponding label vector $Y \in \R^n$. Let $\kappa\in(0,1)$ and $\bar{u}_{\nn}(t) \in \R^{n}$ be defined as in Definition~\ref{def:f_nn_lev}. Let $\bar{u}^* \in \R^n$ be defined in Eq.~\eqref{eq:def_u_*_lev}. Let $\bar{H}(t) \in \R^{n \times n}$ be defined as in Definition~\ref{def:dynamic_kernel_lev}. Let $\lambda \in(0, \Lambda_0)$ be the regularization parameter. Assume $\|\bar{H}(t) - \bar{H}(0)\| \leq \Lambda_0/4$ holds for all $t\in[0,T]$. Then we have
\begin{align*}
\frac{\d \|\bar{u}_{\nn}(t)-\bar{u}^*\|_2^2}{\d t} \le - \frac{1}{2}( \kappa^2 \Lambda_0+\lambda) \|\bar{u}_{\nn}(t)-\bar{u}^*\|_2^2+ 2 \kappa^2 \| \bar{H}(t) - \bar{H}(0) \| \cdot \| \bar{u}_{\nn}(t) - \bar{u}^* \|_2 \cdot \| Y - \bar{u}^* \|_2.
\end{align*}
\end{lemma}
\begin{proof}
Note same as in Lemma~\ref{lem:linear_converge_krr_lev}, we have
\begin{align}\label{eq:325_1_lev}
\kappa^2 \bar{H}(0)(Y-\bar{u}^*)
= & ~ \kappa^2 \bar{H}(0)(Y-\kappa^2 \bar{H}(0)(\kappa^2 \bar{H}(0)+\lambda I_n)^{-1}Y) \notag \\
= & ~ \kappa^2 \bar{H}(0)(I_n-\kappa^2 \bar{H}(0)(\kappa^2 \bar{H}(0)+\lambda I_n)^{-1})Y \notag \\
= & ~ \kappa^2 \bar{H}(0)(\kappa^2 \bar{H}(0)+\lambda I_n - \kappa^2 \bar{H}(0))(\kappa^2 \bar{H}(0)+\lambda I_n)^{-1} Y \notag \\
= & ~ \kappa^2 \lambda \bar{H}(0)(\kappa^2 \bar{H}(0)+\lambda I_n)^{-1} Y \notag \\
= & ~ \lambda \bar{u}^*,
\end{align}
where the first step follows from the definition of $\bar{u}^* \in \R^n$, the second to fourth steps simplify the formula, and the last step uses the definition of $\bar{u}^* \in \R^n$ again.
Thus, we have
\begin{align*}
& ~ \frac{\d \|\bar{u}_{\nn}(t)-\bar{u}^*\|_2^2}{\d t} \\
= & ~ 2(\bar{u}_{\nn}(t)-\bar{u}^*)^\top \frac{\d \bar{u}_{\nn}(t)}{\d t}\\
= & ~ -2 \kappa^2 (\bar{u}_{\nn}(t)-\bar{u}^*)^\top \bar{H}(t) (\bar{u}_{\nn}(t) - Y) -2\lambda(\bar{u}_{\nn}(t)-\bar{u}^*)^\top \bar{u}_{\nn}(t)\\
= & ~ -2 \kappa^2 (\bar{u}_{\nn}(t)-\bar{u}^*)^\top \bar{H}(t) (\bar{u}_{\nn}(t) - \bar{u}^*) + 2 \kappa^2 (\bar{u}_{\nn}(t)-\bar{u}^*)^\top \bar{H}(0) (Y-\bar{u}^*)\\
& ~ + 2 \kappa^2 (\bar{u}_{\nn}(t)-\bar{u}^*)^\top (\bar{H}(t) - \bar{H}(0)) (Y-\bar{u}^*) -2\lambda(\bar{u}_{\nn}(t)-\bar{u}^*)^\top \bar{u}_{\nn}(t)\\
= & ~ -2 \kappa^2 (\bar{u}_{\nn}(t)-\bar{u}^*)^\top \bar{H}(t) (\bar{u}_{\nn}(t) - \bar{u}^*) + 2\lambda(\bar{u}_{\nn}(t)-\bar{u}^*)^\top \bar{u}^*\\
& ~ +2 \kappa^2 (\bar{u}_{\nn}(t)-\bar{u}^*)^\top (\bar{H}(t) - \bar{H}(0)) (Y-\bar{u}^*) -2\lambda(\bar{u}_{\nn}(t)-\bar{u}^*)^\top \bar{u}_{\nn}(t)\\
= & ~ -2(\bar{u}_{\nn}(t)-\bar{u}^*)^\top ( \kappa^2 \bar{H}(t)+\lambda I) (\bar{u}_{\nn}(t) - \bar{u}^*) +2 \kappa^2 (\bar{u}_{\nn}(t)-\bar{u}^*)^\top (\bar{H}(t) - \bar{H}(0)) (Y-\bar{u}^*)\\
\leq & ~ - \frac{1}{2}( \kappa^2 \Lambda_0 + \lambda) \| \bar{u}_{\nn}(t) - \bar{u}^* \|_2^2 + 2 \kappa^2 \| \bar{H}(t) - \bar{H}(0) \| \| \bar{u}_{\nn}(t) - \bar{u}^* \|_2 \| Y - \bar{u}^* \|_2
\end{align*}
where the first step follows the chain rule, the second step follows Corollary~\ref{cor:nn_gradient_lev}, the third step uses basic linear algebra, the fourth step follows Eq.~\eqref{eq:325_1_lev}, the fifth step simplifies the expression, and the last step follows from the assumption $\| \bar{H}(t) - \bar{H}(0) \| \leq \Lambda_0/4$ and the fact $\bar{H}(0) \succeq \frac{\Lambda_0}{2} I_n$ from Lemma~\ref{lem:leverage_score_sampling}.
\end{proof}
\subsection{Proof sketch}\label{sec:proof_sketch_D}
We introduce a new kernel ridge regression problem with respect to $\bar{H}(0)$ to decouple the prediction perturbations resulting from the initialization phase and the training phase. Specifically, given any accuracy $\epsilon\in(0,1)$, we divide the proof into the following steps:
\begin{enumerate}
\item Firstly, we bound the prediction perturbation resulting from the initialization phase, $\|u^*-\bar{u}^*\|_2 \leq \epsilon/2$, by applying the leverage score sampling theory, as shown in Lemma~\ref{lem:u*_minus_bar_u*}.
\item Then we use a similar idea as in Section~B to bound the prediction perturbation resulting from the training phase, $\|\bar{u}_{\nn}(T)-\bar{u}^*\|_2 \leq \epsilon/2$, by inductively showing the over-parametrization and convergence properties of the neural network, as shown in Lemma~\ref{lem:induction_lev} and Corollary~\ref{cor:train_lev}.
\item Lastly, we combine the results of step 1 and 2 using triangle inequality to show $\|\bar{u}_{\nn}(T) - u^*\|_2 \leq \epsilon$, as shown in Theorem~\ref{thm:equivalence_train_lev}.
\end{enumerate}
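Explicitly, the combination in the last step is the triangle inequality
\begin{align*}
\|\bar{u}_{\nn}(T) - u^*\|_2 \leq \|\bar{u}_{\nn}(T) - \bar{u}^*\|_2 + \|\bar{u}^* - u^*\|_2 \leq \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon.
\end{align*}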
\subsection{Main result}\label{sec:main_D}
In this section, we prove Theorem~\ref{thm:equivalence_train_lev} following the above proof sketch.
\subsubsection{Upper bounding $\|u^*-\bar{u}^*\|_2$}
\begin{lemma}\label{lem:u*_minus_bar_u*}
Let $u^*\in\R^n$ and $\bar{u}^*\in\R^n$ be the optimal training data predictors defined in Definition~\ref{def:krr_ntk} and Definition~\ref{def:krr_ntk_lev}. Let $\bar{H}(0)\in\R^{n \times n}$ be defined in Definition~\ref{def:dynamic_kernel_lev}. Let $p(\cdot)$ denote the probability density function of the Gaussian $\N(0,I_d)$. Let $q(\cdot)$ denote the leverage score sampling distribution with respect to $p(\cdot)$, $\bar{H}(0)$ and $\lambda$ defined in Definition~\ref{def:lev_distribution}. Let $\Delta\in(0,1/2)$. If $m\geq \wt{O}(\Delta^{-2} s_{\lambda}(H^{\cts}))$, then we have
\begin{align*}
\| \bar{u}^* - u^* \|_2 \leq \frac{\lambda\Delta\sqrt{n}}{\Lambda_0+\lambda}
\end{align*}
with probability at least $1-\delta$. Particularly, given arbitrary $\epsilon\in(0,1)$, if $m \geq \wt{O}(\frac{n}{\epsilon\Lambda_0})$ and $\lambda \leq \wt{O}(\frac{1}{\sqrt{m}})$, we have
\begin{align*}
\| \bar{u}^* - u^* \|_2 \leq \epsilon/2.
\end{align*}
Here $\wt{O}(\cdot)$ hides $\poly\log(s_{\lambda}(H^{\cts})/\delta)$.
\end{lemma}
\begin{proof}
Note
\begin{align*}
Y - u^* = \lambda(H^{\cts} + \lambda I_n)^{-1} Y
\end{align*}
and
\begin{align*}
Y - \bar{u}^* = \lambda(\bar{H}(0) + \lambda I_n)^{-1} Y
\end{align*}
So
\begin{align*}
\bar{u}^* - u^* = \lambda[(H^{\cts} + \lambda I_n)^{-1} -(\bar{H}(0) + \lambda I_n)^{-1}]Y
\end{align*}
By Lemma~\ref{lem:leverage_score_sampling}, if $m \geq \wt{O}(\Delta^{-2} s_{\lambda}(H^{\cts}))$, we have
\begin{align*}
(1-\Delta)(H^{\cts} + \lambda I) \preceq \bar{H}(0) + \lambda I \preceq (1+\Delta)(H^{\cts} + \lambda I)
\end{align*}
which implies
\begin{align*}
\frac{1}{1+\Delta}(H^{\cts} + \lambda I)^{-1} \preceq (\bar{H}(0) + \lambda I)^{-1} \preceq \frac{1}{1-\Delta}(H^{\cts} + \lambda I)^{-1}
\end{align*}
i.e.,
\begin{align*}
-\frac{\Delta}{1+\Delta}(H^{\cts} + \lambda I)^{-1} \preceq (\bar{H}(0) + \lambda I)^{-1} -(H^{\cts} + \lambda I)^{-1} \preceq \frac{\Delta}{1-\Delta}(H^{\cts} + \lambda I)^{-1}
\end{align*}
Since $\Delta\in(0,1/2)$, we have
\begin{align}\label{eq:kmn_lev}
-{\Delta}(H^{\cts} + \lambda I)^{-1} \preceq (\bar{H}(0) + \lambda I)^{-1} -(H^{\cts} + \lambda I)^{-1} \preceq 2{\Delta}(H^{\cts} + \lambda I)^{-1}
\end{align}
Thus,
\begin{align*}
\| \bar{u}^* - u^* \|_2 \leq & ~ \lambda \|(H^{\cts} + \lambda I_n)^{-1} -(\bar{H}(0) + \lambda I_n)^{-1}\| \|Y\|_2 \\
\leq & ~ 2\lambda \Delta \|(H^{\cts} + \lambda I)^{-1} \| \|Y\|_2\\
\leq & ~ O(\frac{\lambda\Delta\sqrt{n}}{\Lambda_0+\lambda})
\end{align*}
where the first step follows from Cauchy-Schwartz inequality, the second step follows from Eq.~\eqref{eq:kmn_lev}, and the last step follows from the definition of $\Lambda_0$ and $\|Y\|_2=O(\sqrt{n})$.
\end{proof}
\subsubsection{Upper bounding $\|\bar{u}_{\nn}(T)-\bar{u}^*\|_2$}
\begin{lemma}[Bounding kernel perturbation, Parallel to Lemma~\ref{lem:induction}]\label{lem:induction_lev}
Given training data $X\in\R^{n\times d}$, $Y\in\R^n$ and a test data $x_{\test}\in\R^d$. Let $T > 0$ denotes the total number of iterations, $m >0 $ denotes the width of the network, $\epsilon_{\train}$ denotes a fixed training error threshold, $\delta > 0$ denotes the failure probability. Let $\bar{u}_{\nn}(t) \in \R^n$ be the training data predictors defined in Definition~\ref{def:f_nn_lev}. Let $\kappa\in(0,1)$ be the corresponding multiplier. Let $\bar{\k}_{t}(x_{\test},X) \in \R^n,~\bar{H}(t) \in \R^{n \times n},~\Lambda_0 > 0$ be the kernel related quantities defined in Definition~\ref{def:dynamic_kernel_lev}. Let $\bar{u}^* \in \R^n$ be defined as in Eq.~\eqref{eq:def_u_*_lev}. Let $\lambda > 0$ be the regularization parameter. Let $W(t) = [w_1(t),\cdots,w_m(t)]\in\R^{d\times m}$ be the parameters of the neural network defined in Definition~\ref{def:f_nn_lev}.
Given any accuracy $\epsilon\in(0,1/10)$ and failure probability $\delta \in (0,1/10)$. If $\kappa = 1$, $T=\wt{O}(\frac{1}{\Lambda_0})$, $\epsilon_{\train} = \epsilon/2$, network width $m \geq \wt{O}(\frac{n^4d}{\Lambda_0^4\epsilon})$ and regularization parameter $\lambda \leq \wt{O}(\frac{1}{\sqrt{m}})$,
then there exist $\epsilon_W,~\epsilon_H',~\epsilon_K'>0$ that are independent of $t$, such that the following hold for all $0 \leq t \le T$:
\begin{enumerate}
\item $\| w_r(0) - w_r(t) \|_2 \leq \epsilon_W $, $\forall r \in [m]$
\item $\| \bar{H}(0) - \bar{H}(t) \|_2 \leq \epsilon_H'$
\item $\| \bar{u}_{\nn}(t) - \bar{u}^* \|_2^2 \leq \max\{\exp(-(\kappa^2\Lambda_0 + \lambda) t/4) \cdot \| \bar{u}_{\nn}(0) - \bar{u}^* \|_2^2, ~ \epsilon_{\train}^2\}$
\end{enumerate}
Here $\wt{O}(\cdot)$ hides the $\poly\log( n / ( \epsilon \delta \Lambda_0 ) )$.
\end{lemma}
We first state the following concentration result for the random initialization that can help us prove the lemma.
\begin{lemma}[Random initialization result]\label{lem:random_init_lev}
Assume the initial values $w_r(0) \in \R^d ,~r=1,\cdots,m$, are drawn independently according to the leverage score sampling distribution $q(\cdot)$ defined in \eqref{eq:lev_dis_lev}. Then with probability $1-\delta$ we have
\begin{align}
\|w_r(0)\|_2 \leq & ~ 2\sqrt{d} + 2\sqrt{\log{(mc_2/\delta)}}:=\alpha_{w,0}\label{eq:3322_1_lev}
\end{align}
holds for all $r\in[m]$, where $c_2=O(1/\Lambda_0)$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:chi_square_tail}, if $w_r(0)\sim\N(0,I_d)$, then with probability at least $1-\delta$,
\begin{align*}
\| w_r(0) \|_2 \leq 2\sqrt{d} + 2\sqrt{\log(m/\delta)}
\end{align*}
holds for all $r\in[m]$. By Lemma~\ref{lem:property_lev}, we have $q(w) \leq c_2 p(w)$ holds for all $w\in\R^d$, where $c_2 = O(1/\Lambda_0)$ and $p(\cdot)$ is the probability density function of $\N(0,I_d)$. Thus, if $w_r(0)\sim q$, we have with probability at least $1-\delta$,
\begin{align*}
\| w_r(0) \|_2 \leq 2\sqrt{d} + 2\sqrt{\log(mc_2/\delta)}
\end{align*}
holds for all $r\in[m]$.
\end{proof}
Now, conditioning on Eq.~\eqref{eq:3322_1_lev} holding, we show all three conclusions in Lemma~\ref{lem:induction_lev} hold using induction.
We define the following quantity:
\begin{align}
\epsilon_W := & ~ \frac{ \sqrt{n} }{ \sqrt{m} } \max\{4\| \bar{u}_{\nn}(0) - \bar{u}^* \|_2/(\kappa^2\Lambda_0+\lambda), \epsilon_{\train} \cdot T\} \notag \\
& ~ + \Big( \frac{ \sqrt{n} }{ \sqrt{m} } \| Y-\bar{u}^* \|_2 + 2\lambda \alpha_{w,0} \Big) \cdot T\label{eq:def_epsilon_W_lev}\\
\epsilon_H' := & ~ 2n\epsilon_W\notag \\
\epsilon_K' := & ~ 2\sqrt{n}\epsilon_W\notag
\end{align}
which are independent of $t$. Here $\alpha_{w,0}$ is defined in Eq.~\eqref{eq:3322_1_lev}.
Note the base case when $t=0$ trivially holds. Now assuming Lemma~\ref{lem:induction_lev} holds before time $t\in[0,T]$, we argue that it also holds at time $t$. To do so, Lemmas~\ref{lem:general_hypothesis_1_lev},~\ref{lem:general_hypothesis_2_lev},~\ref{lem:general_hypothesis_3_lev} argue these conclusions one by one.
\begin{lemma}[Conclusion 1]\label{lem:general_hypothesis_1_lev}
If for any $\tau < t$, we have
\begin{align*}
\| \bar{u}_{\nn}(\tau) - \bar{u}^* \|_2^2 \leq ~ \max\{\exp(-(\kappa^2 \Lambda_0 + \lambda) \tau/4) \cdot \| \bar{u}_{\nn}(0) - \bar{u}^* \|_2^2,~\epsilon_{\train}^2\}
\end{align*}
and
\begin{align*}
\| w_r(0) - w_r(\tau) \|_2 \leq \epsilon_W \leq 1
\end{align*}
and
\begin{align*}
\| w_r(0) \|_2 \leq ~ \alpha_{w,0} ~ \text{for all}~r\in[m]
\end{align*}
hold,
then
\begin{align*}
\| w_r(0) - w_r(t) \|_2 \leq \epsilon_W
\end{align*}
\end{lemma}
\begin{proof}
Recall the gradient flow as Eq.~\eqref{eq:323_1_lev}
\begin{align}\label{eq:332_2_lev}
\frac{ \d w_r( \tau ) }{ \d \tau } = ~ \sum_{i=1}^n \frac{1}{\sqrt{m}} a_r ( y_i - \bar{u}_{\nn}(\tau)_i ) x_i \sigma'( w_r(\tau)^\top x_i ) - \lambda w_r(\tau)
\end{align}
So we have
\begin{align}\label{eq:332_1_lev}
\Big\| \frac{ \d w_r( \tau ) }{ \d \tau } \Big\|_2
= & ~ \left\| \sum_{i=1}^n \frac{1}{\sqrt{m}} a_r ( y_i - \bar{u}_{\nn}(\tau)_i ) x_i \sigma'( w_r(\tau)^\top x_i ) - \lambda w_r(\tau) \right\|_2 \notag \\
\leq & ~ \frac{1}{\sqrt{m}} \sum_{i=1}^n |y_i-\bar{u}_{\nn}(\tau)_i| + \lambda \| w_r(\tau) \|_2 \notag\\
\leq & ~ \frac{ \sqrt{n} }{ \sqrt{m} } \| Y-\bar{u}_{\nn}(\tau) \|_2 + \lambda \| w_r (\tau) \|_2 \notag \\
\leq & ~ \frac{ \sqrt{n} }{ \sqrt{m} } \| Y-\bar{u}_{\nn}(\tau) \|_2 + \lambda (\| w_r(0) \|_2 + \| w_r(\tau) - w_r(0) \|_2) \notag\\
\leq & ~ \frac{ \sqrt{n} }{ \sqrt{m} } \| Y-\bar{u}_{\nn}(\tau) \|_2 + \lambda (\alpha_{w,0} + 1) \notag\\
\leq & ~ \frac{ \sqrt{n} }{ \sqrt{m} } \| Y-\bar{u}_{\nn}(\tau) \|_2 + 2\lambda \alpha_{w,0} \notag\\
\leq & ~ \frac{ \sqrt{n} }{ \sqrt{m} } (\| Y-\bar{u}^*\|_2 + \| \bar{u}_{\nn}(\tau) - \bar{u}^*\|_2) + 2\lambda\alpha_{w,0} \notag\\
= & ~ \frac{ \sqrt{n} }{ \sqrt{m} } \| \bar{u}_{\nn}(\tau) - \bar{u}^*\|_2 \notag \\
& ~ + \frac{ \sqrt{n} }{ \sqrt{m} } \| Y-\bar{u}^* \|_2 + 2\lambda \alpha_{w,0} \notag \\
\leq & ~ \frac{ \sqrt{n} }{ \sqrt{m} } \max\{e^{-(\kappa^2 \Lambda_0+\lambda)\tau/8} \| \bar{u}_{\nn}(0) - \bar{u}^* \|_2, \epsilon_{\train}\} \notag \\
& ~ + \frac{ \sqrt{n} }{ \sqrt{m} } \| Y-\bar{u}^* \|_2 + 2\lambda\alpha_{w,0} ,
\end{align}
where the first step follows from Eq.~\eqref{eq:332_2_lev}, the second step follows from the triangle inequality, the third step follows from the Cauchy-Schwarz inequality, the fourth step follows from the triangle inequality, the fifth step follows from the conditions $\| w_r(0) - w_r(\tau) \|_2 \leq 1$ and $\| w_r(0) \|_2 \leq \alpha_{w,0}$, the sixth step uses $\alpha_{w,0} \geq 1$, the seventh step follows from the triangle inequality, and the last step follows from $\| \bar{u}_{\nn}(\tau) - \bar{u}^* \|_2^2 \leq \max\{\exp(-(\kappa^2\Lambda_0 + \lambda) \tau/4) \cdot \| \bar{u}_{\nn}(0) - \bar{u}^* \|_2^2,~\epsilon_{\train}^2\}$.
Thus, for any $t \le T$,
\begin{align*}
\| w_r(0) - w_r(t) \|_2 \leq & ~ \int_0^t \Big\| \frac{ \d w_r( \tau ) }{ \d \tau } \Big\|_2 d\tau \\
\leq & ~ \frac{ \sqrt{n} }{ \sqrt{m} } \max\{4\| \bar{u}_{\nn}(0) - \bar{u}^* \|_2/(\kappa^2\Lambda_0+\lambda), \epsilon_{\train}\cdot T \} \\
& ~ + \Big( \frac{ \sqrt{n} }{ \sqrt{m} } \| Y-\bar{u}^* \|_2 + 2\lambda \alpha_{w,0} \Big) \cdot T\\
= & ~ \epsilon_W
\end{align*}
where the first step follows triangle inequality, the second step follows Eq.~\eqref{eq:332_1_lev}, and the last step follows the definition of $\epsilon_W$ as Eq.~\eqref{eq:def_epsilon_W_lev}.
\end{proof}
\begin{lemma}[Conclusion 2]\label{lem:general_hypothesis_2_lev}
If $\forall r \in [m]$,
\begin{align*}
\| w_r(0) - w_r(t) \|_2 \leq \epsilon_W < 1,
\end{align*}
then
\begin{align*}
\| \bar{H}(0) - \bar{H}(t) \|_F \leq 2n \epsilon_W
\end{align*}
holds with probability $1-n^2 \cdot \exp{(-m\epsilon_Wc_1/10)}$, where $c_1=O(1/n)$.
\end{lemma}
\begin{proof}
Directly applying Lemma~\ref{lem:lemma_4.2_in_sy19_lev}, we finish the proof.
\end{proof}
\begin{lemma}[perturbed $w$]\label{lem:lemma_4.2_in_sy19_lev}
Let $R \in (0,1)$. Suppose $\wt{w}_1, \cdots, \wt{w}_m$ are i.i.d. samples from the leverage score sampling distribution $q(\cdot)$ defined in \eqref{eq:lev_dis_lev}, and let $p(\cdot)$ denote the probability density function of the standard Gaussian distribution $\N(0,I_d)$. For any set of weight vectors $w_1, \cdots, w_m \in \R^d$ satisfying $\| \wt{w}_r - w_r \|_2 \leq R$ for all $r\in [m]$, define the map $\bar{H} : \R^{m \times d} \rightarrow \R^{n \times n}$ by
\begin{align*}
\bar{H}(w)_{i,j} = \frac{1}{m} x_i^\top x_j \sum_{r=1}^m {\bf 1}_{ w_r^\top x_i \geq 0, w_r^\top x_j \geq 0 } \frac{p(\wt{w}_r)}{q(\wt{w}_r)}.
\end{align*}
Then we have
\begin{align*}
\| \bar{H} (w) - \bar{H}(\wt{w}) \|_F < 2 n R,
\end{align*}
holds with probability at least $1-n^2 \cdot \exp(-m R c_1 /10)$, where $c_1 = O(1/n)$.
\end{lemma}
\begin{proof}
The random variable we care about is
\begin{align*}
& ~ \sum_{i=1}^n \sum_{j=1}^n | \bar{H}(\wt{w})_{i,j} - \bar{H}(w)_{i,j} |^2 \\
\leq & ~ \frac{1}{m^2} \sum_{i=1}^n \sum_{j=1}^n \left( \sum_{r=1}^m \left({\bf 1}_{ \wt{w}_r^\top x_i \geq 0, \wt{w}_r^\top x_j \geq 0} - {\bf 1}_{ w_r^\top x_i \geq 0 , w_r^\top x_j \geq 0 }\right) \frac{p(\wt{w}_r)}{q(\wt{w}_r)}\right)^2 \\
= & ~ \frac{1}{m^2} \sum_{i=1}^n \sum_{j=1}^n \Big( \sum_{r=1}^m s_{r,i,j} \Big)^2 ,
\end{align*}
where the last step follows from defining, for each $r,i,j$,
\begin{align*}
s_{r,i,j} := ({\bf 1}_{ \wt{w}_r^\top x_i \geq 0, \wt{w}_r^\top x_j \geq 0} - {\bf 1}_{ w_r^\top x_i \geq 0 , w_r^\top x_j \geq 0 })\frac{p(\wt{w}_r)}{q(\wt{w}_r)}.
\end{align*}
We consider $i,j$ are fixed. We simplify $s_{r,i,j}$ to $s_r$.
Then $s_r$ is a random variable that only depends on $\wt{w}_r$.
Since $\{\wt{w}_r\}_{r=1}^m$ are independent,
$\{s_r\}_{r=1}^m$ are also mutually independent.
Now we define the event
\begin{align*}
A_{i,r} = \left\{ \exists u : \| u - \wt{w}_r \|_2 \leq R, {\bf 1}_{ x_i^\top \wt{w}_r \geq 0 } \neq {\bf 1}_{ x_i^\top u \geq 0 } \right\}.
\end{align*}
Then we have
\begin{align}\label{eq:Air_bound}
\Pr_{ \wt{w}_r \sim {\cal N}(0,I) }[ A_{i,r} ] = \Pr_{ z \sim \N(0,1) } [ | z | < R ] \leq \frac{ 2 R }{ \sqrt{2\pi} }.
\end{align}
where the last step follows from the anti-concentration inequality for the Gaussian distribution (Lemma~\ref{lem:anti_gaussian}).
If $\neg A_{i,r}$ and $\neg A_{j,r}$ happen,
then
\begin{align*}
\left| {\bf 1}_{ \wt{w}_r^\top x_i \geq 0, \wt{w}_r^\top x_j \geq 0} - {\bf 1}_{ w_r^\top x_i \geq 0 , w_r^\top x_j \geq 0 } \right|=0.
\end{align*}
If $A_{i,r}$ or $A_{j,r}$ happens,
then
\begin{align*}
\left| {\bf 1}_{ \wt{w}_r^\top x_i \geq 0, \wt{w}_r^\top x_j \geq 0} - {\bf 1}_{ w_r^\top x_i \geq 0 , w_r^\top x_j \geq 0 } \right|\leq 1.
\end{align*}
So we have
\begin{align*}
\E_{\wt{w}_r\sim q}[s_r] \leq & ~ \E_{\wt{w}_r\sim q} \left[ {\bf 1}_{A_{i,r}\vee A_{j,r}}\frac{p(\wt{w}_r)}{q(\wt{w}_r)} \right] \\
= & ~ \E_{\wt{w}_r\sim p} \left[ {\bf 1}_{A_{i,r}\vee A_{j,r}} \right]\\
\leq & ~ \Pr_{ \wt{w}_r \sim \N(0,I_d) }[A_{i,r}]+\Pr_{ \wt{w}_r \sim \N(0,I_d) }[A_{j,r}] \\
\leq & ~ \frac {4 R }{\sqrt{2\pi}}\\
\leq & ~ 2R,
\end{align*}
and
\begin{align*}
\E_{\wt{w}_r\sim q} \left[ \left(s_r-\E_{\wt{w}_r\sim q}[s_r] \right)^2 \right]
= & ~ \E_{\wt{w}_r\sim q}[s_r^2]-\E_{\wt{w}_r\sim q}[s_r]^2 \\
\leq & ~ \E_{\wt{w}_r\sim q}[s_r^2]\\
\leq & ~\E_{\wt{w}_r\sim q} \left[ \left( {\bf 1}_{A_{i,r}\vee A_{j,r}} \frac{p(\wt{w}_r)}{q(\wt{w}_r)}\right)^2 \right] \\
= & ~ \E_{\wt{w}_r\sim p} \left[ {\bf 1}_{A_{i,r}\vee A_{j,r}} \frac{p(\wt{w}_r)}{q(\wt{w}_r)}\right] \\
\leq & ~ \frac{2R}{c_1}
\end{align*}
where the last step follows from Lemma~\ref{lem:property_lev} and $c_1= O(1/n)$. We also have $|s_r|\leq 1/c_1$.
So we can apply the Bernstein inequality (Lemma~\ref{lem:bernstein}) to get, for all $t>0$,
\begin{align*}
\Pr \left[\sum_{r=1}^m s_r\geq 2m R +mt \right]
\leq & ~ \Pr \left[\sum_{r=1}^m (s_r-\E[s_r])\geq mt \right]\\
\leq & ~ \exp \left( - \frac{ m^2t^2/2 }{ 2m R/c_1 + mt/(3c_1) } \right).
\end{align*}
Choosing $t = R$, we get
\begin{align*}
\Pr \left[\sum_{r=1}^m s_r\geq 3mR \right]
\leq & ~ \exp \left( -\frac{ m^2 R^2 /2 }{ 2 m R/c_1 + m R /(3c_1) } \right) \\
\leq & ~ \exp \left( - m R c_1 / 10 \right) .
\end{align*}
Plugging back, we complete the proof.
\end{proof}
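To make the object in Lemma~\ref{lem:lemma_4.2_in_sy19_lev} concrete, the following minimal numerical sketch (not part of the original argument; the proposal density $q$ is left abstract, the leverage score distribution of \eqref{eq:lev_dis_lev} is not reproduced, and all helper names are ours) assembles the reweighted kernel $\bar{H}(w)$ from data points, sampled weights and the importance ratios $p(\wt{w}_r)/q(\wt{w}_r)$:
\begin{verbatim}
import numpy as np

def gaussian_logpdf(w):
    # log density of N(0, I_d), evaluated row-wise; w has shape (m, d)
    d = w.shape[-1]
    return -0.5 * np.sum(w ** 2, axis=-1) - 0.5 * d * np.log(2.0 * np.pi)

def reweighted_kernel(X, W, log_q):
    # X: (n, d) unit-norm data, W: (m, d) sampled weights,
    # log_q: (m,) log proposal density at each sampled weight.
    # Entry (i, j) equals
    #   (1/m) x_i^T x_j sum_r 1[w_r^T x_i >= 0, w_r^T x_j >= 0] p(w_r)/q(w_r).
    m = W.shape[0]
    act = (W @ X.T >= 0).astype(float)          # (m, n) activation pattern
    ratio = np.exp(gaussian_logpdf(W) - log_q)  # (m,) importance weights p/q
    S = (act * ratio[:, None]).T @ act          # (n, n) weighted co-activation sums
    return (X @ X.T) * S / m

# Toy usage with q = p (plain Gaussian initialization, all ratios equal to 1).
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)
W = rng.normal(size=(2000, 3))
H = reweighted_kernel(X, W, gaussian_logpdf(W))
print(H.shape, np.allclose(H, H.T))
\end{verbatim}
Perturbing each row of \texttt{W} by at most $R$ and recomputing the kernel gives a direct numerical check of the $2nR$ bound in the lemma.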
\begin{lemma}[Conclusion 3]\label{lem:general_hypothesis_3_lev}
Fix $\epsilon_H'>0$ independent of $t$. If for all $\tau < t$
\begin{align*}
\| \bar{H}(0) - \bar{H}(\tau) \| \leq \epsilon_H' \leq \Lambda_0/4
\end{align*}
and
\begin{align}\label{eq:335_3_lev}
\epsilon_H' \leq \frac{\epsilon_{\train}}{8\kappa^2\|Y-\bar{u}^*\|_2}(\kappa^2\Lambda_0+\lambda)
\end{align}
then we have
\begin{align*}
\| \bar{u}_{\nn}(t) - \bar{u}^* \|_2^2 \leq \max\{\exp(-(\kappa^2\Lambda_0 + \lambda)t/4 ) \cdot \| \bar{u}_{\nn}(0) - \bar{u}^* \|_2^2, ~ \epsilon_{\train}^2\}.
\end{align*}
\end{lemma}
\begin{proof}
Note $\| \bar{H}(\tau) - \bar{H}(0) \| \leq \epsilon_H' \leq \Lambda_0/4$.
By Lemma~\ref{lem:linear_converge_nn_lev}, for any $\tau < t$, we have
\begin{align}\label{eq:induction_linear_convergence_lev}
\frac{ \d \| \bar{u}_{\nn}(\tau) - \bar{u}^* \|_2^2 }{ \d \tau }
\leq & ~ - \frac{1}{2}( \kappa^2 \Lambda_0 + \lambda ) \cdot \| \bar{u}_{\nn} (\tau) - \bar{u}^* \|_2^2 + 2 \kappa^2 \| \bar{H}(\tau) - \bar{H}(0) \| \cdot \| \bar{u}_{\nn} (\tau) - \bar{u}^* \|_2 \cdot \| Y - \bar{u}^* \|_2 \notag\\
\leq & ~ - \frac{1}{2}( \kappa^2 \Lambda_0 + \lambda ) \cdot \| \bar{u}_{\nn} (\tau) - \bar{u}^* \|_2^2 + 2 \kappa^2 \epsilon_H' \cdot \| \bar{u}_{\nn} (\tau) - \bar{u}^* \|_2 \cdot \|Y-\bar{u}^*\|_2
\end{align}
where the first step follows from Lemma~\ref{lem:linear_converge_nn_lev}, and the second step follows from the bound $\| \bar{H}(\tau) - \bar{H}(0) \| \leq \epsilon_H'$.
Now let us discuss two cases:
{\bf Case 1.} If for all $\tau < t$, $\|\bar{u}_{\nn}(\tau) - \bar{u}^*\|_2 \geq \epsilon_{\train}$ always holds, we want to argue that
\begin{align*}
\| \bar{u}_{\nn}(t) - \bar{u}^* \|_2^2 \leq \exp(-(\kappa^2 \Lambda_0 + \lambda) t/4) \cdot \| \bar{u}_{\nn}(0) - \bar{u}^* \|_2^2.
\end{align*}
Note by assumption~\eqref{eq:335_3_lev}, we have
\begin{align*}
\epsilon_H' \leq \frac{\epsilon_{\train}}{8 \kappa^2 \|Y-\bar{u}^*\|_2}(\kappa^2 \Lambda_0+\lambda)
\end{align*}
implies
\begin{align*}
2 \kappa^2 \epsilon_H' \cdot \|Y-\bar{u}^*\|_2 \leq (\kappa^2 \Lambda_0 + \lambda)/4\cdot \|\bar{u}_{\nn}(\tau)-\bar{u}^*\|_2
\end{align*}
holds for any $\tau < t$. Thus, plugging into~\eqref{eq:induction_linear_convergence_lev},
\begin{align*}
\frac{ \d \| \bar{u}_{\nn}(\tau) - \bar{u}^* \|_2^2 }{ \d \tau }
\leq ~ - ( \kappa^2 \Lambda_0 + \lambda )/4 \cdot \| \bar{u}_{\nn} (\tau) - \bar{u}^* \|_2^2,
\end{align*}
holds for all $\tau < t$, which implies
\begin{align*}
\| \bar{u}_{\nn}(t) - \bar{u}^* \|_2^2 \leq \exp{(-(\kappa^2 \Lambda_0 + \lambda)t/4)} \cdot \| \bar{u}_{\nn}(0) - \bar{u}^* \|_2^2.
\end{align*}
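For completeness, the last implication is the standard Gr\"onwall-type step: writing $g(\tau) := \| \bar{u}_{\nn}(\tau) - \bar{u}^* \|_2^2$ and $c := (\kappa^2 \Lambda_0 + \lambda)/4$, the differential inequality $g'(\tau) \leq -c\, g(\tau)$ for all $\tau < t$ yields
\begin{align*}
\frac{ \d }{ \d \tau } \big( e^{c \tau} g(\tau) \big) = e^{c \tau} \big( g'(\tau) + c\, g(\tau) \big) \leq 0 ,
\end{align*}
so $e^{c t} g(t) \leq g(0)$, which is exactly the claimed bound.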
{\bf Case 2.} If there exists $\bar{\tau} < t$ such that $\|\bar{u}_{\nn}(\bar{\tau}) - \bar{u}^*\|_2 < \epsilon_{\train}$, we want to argue that $\|\bar{u}_{\nn}(t) - \bar{u}^*\|_2 < \epsilon_{\train}$.
Note by assumption~\eqref{eq:335_3_lev}, we have
\begin{align*}
\epsilon_H' \leq \frac{\epsilon_{\train}}{8\kappa^2\|Y-\bar{u}^*\|_2}(\kappa^2\Lambda_0+\lambda)
\end{align*}
implies
\begin{align*}
4\kappa^2 \epsilon_H' \cdot \|\bar{u}_{\nn}(\bar{\tau}) - \bar{u}^*\|_2 \cdot \|Y-\bar{u}^*\|_2 \leq (\kappa^2 \Lambda_0 + \lambda) \cdot \epsilon_{\train}^2.
\end{align*}
Thus, plugging into~\eqref{eq:induction_linear_convergence_lev},
\begin{align*}
\frac{ \d (\| \bar{u}_{\nn}(\tau) - \bar{u}^* \|_2^2 -\epsilon_{\train}^2)}{ \d \tau }
\leq ~ - ( \kappa^2 \Lambda_0 + \lambda )/2 \cdot (\| \bar{u}_{\nn} (\tau) - \bar{u}^* \|_2^2 - \epsilon_{\train}^2)
\end{align*}
holds for $\tau = \bar{\tau}$, which implies that $ e^{( \kappa^2\Lambda_0 + \lambda )\tau/2} (\| \bar{u}_{\nn}(\tau) - \bar{u}^* \|_2^2 -\epsilon_{\train}^2) $ is non-increasing at $\tau = \bar{\tau}$. Since $\| \bar{u}_{\nn}(\bar{\tau}) - \bar{u}^* \|_2^2 - \epsilon_{\train}^2 < 0$, an induction over $\tau$ shows that $ e^{( \kappa^2\Lambda_0 + \lambda )\tau/2} (\| \bar{u}_{\nn}(\tau) - \bar{u}^* \|_2^2 -\epsilon_{\train}^2) $ remains non-increasing and $\| \bar{u}_{\nn}(\tau) - \bar{u}^* \|_2^2 - \epsilon_{\train}^2 < 0$ for all $\bar{\tau} \leq \tau \leq t$, which implies
\begin{align*}
\|\bar{u}_{\nn}(t) - \bar{u}^*\|_2 < \epsilon_{\train}.
\end{align*}
Combining the above two cases, we conclude
\begin{align*}
\| \bar{u}_{\nn}(t) - \bar{u}^* \|_2^2 \leq \max\{\exp(-(\kappa^2\Lambda_0 + \lambda)t/4 ) \cdot \| \bar{u}_{\nn}(0) - \bar{u}^* \|_2^2, ~ \epsilon_{\train}^2\}.
\end{align*}
\end{proof}
Now we summarize, in Table~\ref{tab:condition_only_lev}, all the conditions that need to be satisfied for the induction to go through.
\begin{table}[htb]\caption{Summary of conditions for induction}\label{tab:condition_only_lev}
\centering
{
\begin{tabular}{| l | l | l |}
\hline
{\bf No.} & {\bf Condition} & {\bf Place} \\ \hline
1 & $\epsilon_W \leq 1$ & Lem.~\ref{lem:general_hypothesis_1_lev} \\ \hline
2 & $ \epsilon_H' \leq \Lambda_0/4$ & Lem.~\ref{lem:general_hypothesis_3_lev} \\ \hline
3 & $\epsilon_H' \leq \frac{\epsilon_{\train}}{8\kappa^2\|Y-\bar{u}^*\|_2}(\kappa^2\Lambda_0+\lambda)$ & Lem.~\ref{lem:general_hypothesis_3_lev} \\
\hline
\end{tabular}
}
\end{table}
Comparing Table~\ref{tab:condition_only} and Table~\ref{tab:condition_only_lev}, we see that by picking the same values for the parameters as in Theorem~\ref{thm:main_train_equivalence}, the induction goes through, which completes the proof.
As a direct corollary, we have
\begin{corollary}\label{cor:train_lev}
Given any accuracy $\epsilon\in(0,1/10)$ and failure probability $\delta \in (0,1/10)$, if $\kappa = 1$, $T=\wt{O}(\frac{1}{\Lambda_0})$, the network width satisfies $m \geq \wt{O}(\frac{n^4d}{\lambda_0^4\epsilon})$ and the regularization parameter satisfies $\lambda \leq \wt{O}(\frac{1}{\sqrt{m}})$, then with probability at least $1-\delta$,
\begin{align*}
\|\bar{u}_{\nn}(T) - \bar{u}^*\|_2 \leq \epsilon/2.
\end{align*}
Here $\wt{O}(\cdot)$ hides $\poly\log( n / ( \epsilon \delta \Lambda_0 ) )$ factors.
\end{corollary}
\begin{proof}
By choosing $\epsilon_{\train} = \epsilon/2$ in Lemma~\ref{lem:induction_lev}, the induction shows
\begin{align*}
\| \bar{u}_{\nn}(t) - \bar{u}^* \|_2^2 \leq \max\{\exp(-(\kappa^2\Lambda_0 + \lambda) t/4) \cdot \| \bar{u}_{\nn}(0) - \bar{u}^* \|_2^2, ~ \epsilon^2/4\}
\end{align*}
holds for all $t\leq T$. By picking $T=\wt{O}(\frac{1}{\Lambda_0})$, we have
\begin{align*}
\exp(-(\kappa^2\Lambda_0 + \lambda) T/4) \cdot \| \bar{u}_{\nn}(0) - \bar{u}^* \|_2^2 \leq \epsilon^2/4
\end{align*}
which implies $\|\bar{u}_{\nn}(T) - \bar{u}^*\|_2^2 \leq \max\{\epsilon^2/4, \epsilon^2/4\} = \epsilon^2/4$.
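Concretely (this explicit choice is ours and only meant to illustrate the logarithmic factor hidden in $\wt{O}$), any
\begin{align*}
T \geq \frac{4}{\kappa^2 \Lambda_0 + \lambda} \log \Big( \frac{ 4 \| \bar{u}_{\nn}(0) - \bar{u}^* \|_2^2 }{ \epsilon^2 } \Big)
\end{align*}
makes the exponential term at most $\epsilon^2/4$, and such a $T$ is indeed $\wt{O}(\frac{1}{\Lambda_0})$.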
\end{proof}
\subsubsection{Main result for equivalence with leverage score sampling initialization}
\begin{theorem}[Equivalence between training a reweighted neural net with regularization under leverage score initialization and kernel ridge regression for training data prediction, restatement of Theorem~\ref{thm:equivalence_train_lev_intro}]\label{thm:equivalence_train_lev}
Given a training data matrix $X \in \R^{n \times d}$ and a corresponding label vector $Y \in \R^n$, let $T > 0$ be the total training time. Let $\bar{u}_{\nn}(t) \in \R^n$ and $u^* \in \R^n$ be the training data predictors defined in Definition~\ref{def:f_nn_lev} and Definition~\ref{def:krr_ntk} respectively, and let $\kappa=1$ be the corresponding multiplier. Given any accuracy $\epsilon\in(0,1)$ and failure probability $\delta \in (0,1/10)$, if $T=\wt{O}(\frac{1}{\Lambda_0})$, the network width satisfies $m \geq \wt{O}(\frac{n^4d}{\lambda_0^4\epsilon})$ and the regularization parameter satisfies $\lambda \leq \wt{O}(\frac{1}{\sqrt{m}})$, then with probability at least $1-\delta$ over the random initialization, we have
\begin{align*}
\|\bar{u}_{\nn}(T) - u^*\|_2 \leq \epsilon.
\end{align*}
Here $\wt{O}(\cdot)$ hides $\poly\log(n/(\epsilon \delta \Lambda_0 ))$.
\end{theorem}
\begin{proof}
Combining the results of Lemma~\ref{lem:u*_minus_bar_u*} and Corollary~\ref{cor:train_lev} via the triangle inequality, we finish the proof.
\end{proof}
\begin{remark}
Although the upper bound on the network width that we obtain under leverage score sampling is asymptotically the same as under Gaussian initialization, we point out the potential benefits of introducing leverage score sampling to the training of regularized neural networks.
Note that the bound on the width consists of two parts: 1) initialization and 2) training. Part 1 requires the width to be large enough so that the initialized dynamic kernels $H(0)$ and $\ov{H}(0)$ are close enough to the NTK by concentration; see Lemmas~\ref{lem:epsilon_init} and~\ref{lem:u*_minus_bar_u*}. Part 2 requires the width to be large enough so that the dynamic kernels $H(t)$ and $\ov{H}(t)$ stay close enough to the NTK during training, by the over-parameterization property; see Lemmas~\ref{lem:induction} and~\ref{lem:induction_lev}. Leverage score sampling improves the bound for part 1 while keeping the bound for part 2 the same. The current state-of-the-art analysis gives a tighter bound in part 2, so the final bound on the width is the same in both cases. If the analysis for part 2 can be improved so that part 1 dominates, then initializing with leverage score sampling will be beneficial in terms of the required width.
\end{remark}
| {
"timestamp": "2020-09-22T02:30:24",
"yymm": "2009",
"arxiv_id": "2009.09829",
"language": "en",
"url": "https://arxiv.org/abs/2009.09829",
"abstract": "Leverage score sampling is a powerful technique that originates from theoretical computer science, which can be used to speed up a large number of fundamental questions, e.g. linear regression, linear programming, semi-definite programming, cutting plane method, graph sparsification, maximum matching and max-flow. Recently, it has been shown that leverage score sampling helps to accelerate kernel methods [Avron, Kapralov, Musco, Musco, Velingker and Zandieh 17].In this work, we generalize the results in [Avron, Kapralov, Musco, Musco, Velingker and Zandieh 17] to a broader class of kernels. We further bring the leverage score sampling into the field of deep learning theory.$\\bullet$ We show the connection between the initialization for neural network training and approximating the neural tangent kernel with random features.$\\bullet$ We prove the equivalence between regularized neural network and neural tangent kernel ridge regression under the initialization of both classical random Gaussian and leverage score sampling.",
"subjects": "Machine Learning (cs.LG); Machine Learning (stat.ML)",
"title": "Generalized Leverage Score Sampling for Neural Networks",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9719924785827002,
"lm_q2_score": 0.7279754548076478,
"lm_q1q2_score": 0.707586666665854
} |
https://arxiv.org/abs/2110.03231 | Compact and weakly compact Lipschitz operators | Any Lipschitz map $f : M \to N$ between two pointed metric spaces may be extended in a unique way to a bounded linear operator $\widehat{f} : \mathcal F(M) \to \mathcal F(N)$ between their corresponding Lipschitz-free spaces. In this paper, we give a necessary and sufficient condition for $\widehat{f}$ to be compact in terms of metric conditions on $f$. This extends a result by A. Jiménez-Vargas and M. Villegas-Vallecillos in the case of non-separable and unbounded metric spaces. After studying the behavior of weakly convergent sequences made of finitely supported elements in Lipschitz-free spaces, we also deduce that $\widehat{f}$ is compact if and only if it is weakly compact. | \section{Introduction}
Let $(M,d)$ be a metric space equipped with a distinguished point denoted by $0_M \in M$. We let $\Lip_0(M)$ be the Banach space of Lipschitz maps from $M$ to $\K$ ($\K = \R$ or $\C$), vanishing at $0_M$, equipped with the norm
$$\displaystyle
\mathrm{Lip}(f) := \sup_{x \neq y \in M} \frac{|f(x)-f(y)|}{d(x,y)}.$$
For $x\in M$, we denote by $\delta(x)$ the bounded linear functional on $\Lip_0(M)$ defined by $\<f,\delta(x)\> = f(x), \ f\in \Lip_0(M).$ The Lipschitz-free space over $M$, denoted by $\F(M)$, is the Banach space
$$\F(M) := \overline{ \mbox{span}}^{\| \cdot \|}\left \{ \delta(x) \, : \, x \in M \right \} \subset \Lip_0(M)^*.$$
We refer the reader to \cite{GoKa_2003} or \cite{Weaver2} (where they are called Arens--Eells spaces) for more information on these spaces, including a proof of the next fundamental ``linearization'' property which will be the cornerstone of our study.
\begin{proposition} \label{diagramfree}
Let $M$ and $N$ be two pointed metric spaces. Let $f \colon M \to N$ be a Lipschitz map such that $f(0_M) = 0_N$. Then, there exists a unique bounded linear operator $\widehat{f} \colon \F(M) \to \F(N)$ with $\|\widehat{f}\|=\mathrm{Lip}(f)$ and such that the following diagram commutes:
$$\xymatrix{
M \ar[r]^f \ar[d]_{\delta_{M}} & N \ar[d]^{\delta_{N}} \\
\F(M) \ar[r]_{\widehat{f}} & \F(N)
}.$$
More precisely, for every $\gamma=\sum_{i=1}^n a_i\delta(x_i)\in \F(M)$,
$\widehat{f}(\gamma)=\sum_{i=1}^n a_i\delta(f(x_i))$.
\end{proposition}
In this paper, operators of the kind $\widehat{f} \colon \F(M) \to \F(N)$ will be called Lipschitz operators.
The above linearization property carries some metric information about $f$ and the metric spaces $M,N$ themselves. Of course, passing from a Lipschitz map to a linear map has a price and the difficulty is to analyse the structure of the associated Lipschitz-free spaces. A very natural yet widely unexplored topic consists in the study of how metric properties of $f$ are transferred to linear properties of $\widehat{f}$, and vice-versa (see e.g. \cite{ACP20}).
\smallskip
In this paper, we investigate the compactness properties of $\widehat{f}$ and characterize them in terms of metric conditions on $f$. Recall that an operator $T : X \to Y$ between Banach spaces is compact if the image by $T$ of the unit ball of $X$, denoted by $B_X$, is relatively compact in $Y$. Similarly, we say that $T$ is weakly compact if $T(B_X)$ is relatively weakly compact in $Y$. It is obvious that any compact operator is also weakly compact, while the converse is not true in general.
A disguised study of compact Lipschitz operators has probably been initiated by Kamowitz and Scheinberg in \cite{Kamo} and then pursued by Jim\'{e}nez-Vargas and Villegas-Vallecillos in \cite{Vargas1} (see also \cite{JSV14} where vector-valued Lipschitz functions are considered). Indeed, in the last mentioned papers, the authors consider composition operators on Lipschitz spaces which appear naturally as the adjoints of our Lipschitz operators $\widehat{f}$. To be more specific, noting that
$$f\in \Lip_0(M) \mapsto \left[\sum_i a_i \delta(x_i) \mapsto \sum_i a_if(x_i) \right] \in \F(M)^*$$
is an isometric isomorphism, we get that $\big(\widehat{f}\,\big)^\ast=C_f$, where $C_f : \Lip_0(M) \to \Lip_0(N)$ is the composition operator given by $C_f(g) = g \circ f, \ g\in \Lip_0(M)$. Of course, by Schauder's theorem, $\widehat{f}$ is compact if and only if $\big(\widehat{f}\,\big)^*$ is compact, so one can tackle the problem either working with $C_f$ or working with $\widehat{f}$. In \cite{Vargas1}, the authors proved the next characterization.
\smallskip
\noindent\textbf{Theorem (\cite[Theorem 1.2]{Vargas1}).}
\textit{
Let $M$ be a pointed metric space and let $f: M \to M$ be a Lipschitz map vanishing at $0_M$. Assume that $M$ is bounded and separable. Then the composition operator $C_f : g \in \Lip_0(M) \mapsto g \circ f \in \Lip_0(M)$ is compact if and only if
\begin{enumerate}
\item[(i)] $f(M)$ is totally bounded in $M$.
\item[(ii)] $f$ is uniformly locally flat, that is, for each
$\varepsilon > 0$, there exists $\delta > 0$ such that $d(f(x), f(y)) \leq \varepsilon d(x,y)$ whenever $d ( x , y ) \leq \delta$.
\end{enumerate}
}
A few comments about the above statement are necessary. First, as it is proved in \cite[Theorem~8.7.8]{LipschitzBook}, the very same result holds for Lipschitz maps $f: M \to N$ where $N$ is any pointed metric space. Notice also that the separability assumption is absent in \cite[Theorem 1.2]{Vargas1}, but, as is written in \cite{LipschitzBook}, the method of proof needs $M$ to be separable. Finally, the above condition $(ii)$ is called ``supercontractive'' in \cite{Vargas1}, but it is also sometimes referred to as ``the little Lipschitz condition'' (since the space of uniformly locally flat Lipschitz functions is often called the little Lipschitz space, see \cite{Weaver2}).
\smallskip
Our first main result extends the previous theorem in the case of any metric spaces $M$ and $N$ (in particular not separable and unbounded).
In fact, when $M$ is unbounded, one needs an additional assumption to take into account the behavior of the function $f$ at infinity. To prove our result, we are dealing directly with $\widehat{f}$ instead of its adjoint $C_f$.
Hence, even when $M$ is bounded, our proof is different from that of \cite{Vargas1}.
\begin{ThmA}
Let $M,N$ be complete pointed metric spaces, and let $f : M \to N$ be a base point-preserving Lipschitz mapping. Then $\widehat{f} : \F(M) \to \F(N)$ is compact if and only if the next assertions are satisfied:
\begin{enumerate}
\item[$(P_1)$] For every bounded subset $S \subset M$, $f(S)$ is totally bounded in $N$;
\item[$(P_2)$] $f$ is uniformly locally flat, that is,
$$ \lim\limits_{d(x,y) \to 0} \dfrac{d(f(x),f(y))}{d(x,y)} =0;$$
\item[$(P_3)$] For every $(x_n,y_n)_n \subset \widetilde{M} : = \{(x,y) \in M \times M \; | \; x \neq y\}$ such that \\
$\lim\limits_{n \to \infty} d(x_n,0) = \lim\limits_{n \to \infty} d(y_n,0) = \infty$, either
\smallskip
\begin{itemize}
\item $(f(x_n), f(y_n))_n$ has an accumulation point in $N \times N$, or
\item $\underset{n\to+\infty}{\liminf}\,\dfrac{d(f(x_n),f(y_n))}{d(x_n,y_n)}=0$.
\end{itemize}
\end{enumerate}
\end{ThmA}
It turns out that in the proof of ``$\implies$'' in Theorem~\ref{thmA}, which will be provided in Section~\ref{section2}, most of the time we only use the weaker assumption that $\widehat{f}$ is weakly compact. This suggests that there should be a close relationship between compact Lipschitz operators and weakly compact Lipschitz operators. Another clue is contained in \cite{JimenezWCompact}. Let us denote by $\lip_0(M)$ the subspace of $\Lip_0(M)$ made of uniformly locally flat functions. Then we say that $\lip_0(M)$ separates the points (of $M$) uniformly if there exists $C >0$ such that, for every $x \neq y$, there exists a $C$-Lipschitz map $f \in \lip_0(M)$ with $|f(x) - f(y)| = d(x,y)$. Now
\cite[Corollary~2.4]{JimenezWCompact} states that if $M$ is a compact metric space such that $\lip_0(M)$ separates the points uniformly, then the composition operator $C_f : g \in \Lip_0(M) \mapsto g \circ f \in \Lip_0(M)$ is weakly compact if and only if it is compact. Let us point out that for a compact metric space $M$, $\lip_0(M)$ separates points uniformly if and only if $M$ is purely 1-unrectifiable (that is, does not contain any bi-Lipschitz image of a subset of $\R$ with positive Lebesgue measure; see \cite[Theorem~A]{AGPP21}). This recent characterization underlines the fact that the assumptions in \cite[Corollary~2.4]{JimenezWCompact} are rather restrictive. We shall prove in Section~\ref{section3} that this result is actually true for every metric space $M$.
\begin{ThmB}
Let $M,N$ be complete pointed metric spaces, and let $f : M \to N$ be a base point-preserving Lipschitz mapping. Then the next conditions are equivalent:
\begin{enumerate}
\item $\widehat{f} : \F(M) \to \F(N)$ is compact;
\item $\widehat{f} : \F(M) \to \F(N)$ is weakly compact;
\item $C_f : \Lip_0(N) \to \Lip_0(M)$ is compact;
\item $C_f : \Lip_0(N) \to \Lip_0(M)$ is weakly compact;
\item $ C_f : \Lip_0(N) \to \Lip_0(M) \ \text{is weak}^*\text{-to-weak continuous}$.
\end{enumerate}
\end{ThmB}
The key ingredient for proving Theorem~\ref{thmB} will be a structural result concerning weakly convergent sequences of finitely supported elements in Lipschitz-free spaces. We recall that $\gamma \in \F(M)$ is said to be finitely supported if $\gamma \in \lspan \left \{ \delta(x) \, : \, x \in M \right \} $ and then the support of $\gamma$, denoted by $\supp (\gamma)$, is the smallest subset $S \subset M$ such that $\gamma \in \F(S)$. In what follows, for every $k \in \N$, $\mathcal{FS}_k(M)$ stands for the set of all $\gamma \in \F(M)$ such that $\supp(\gamma)$ contains at most $k$ points of $M$.
\begin{ThmC}
Let $M$ be a complete metric space. If a sequence $(\gamma_n)_n \subset \mathcal{FS}_k(M)$ weakly converges to some $\gamma \in \F(M)$, then $\gamma \in \mathcal{FS}_k(M)$ and $(\gamma_n)_n$ actually converges to $\gamma$ in the norm topology.
\end{ThmC}
The previous theorem can be deduced as a direct consequence of the deep result \cite[Theorem 5.2]{AlbiacKalton} and \cite[Lemma 2.10]{ACP20}. Since the proof from \cite{AlbiacKalton} is rather involved, for the convenience of the reader, we shall provide a different proof which is based on some recent developments in the theory of Lipschitz-free spaces.
\smallskip
\medskip
\noindent \textbf{Notation and background.} If $X$ is a Banach space, then we let $X^*$ be its topological dual, $B_X$ be its unit ball and $S_X$ be its unit sphere.
\smallskip
Throughout the paper, $M,N$ are complete pointed metric spaces and the distinguished points will be denoted by $0_M$ and $0_N$ respectively, or simply $0$ if there is no ambiguity. We will write
$$\widetilde{M} = \{(x,y) \in M \times M \; | \; x \neq y \}.$$
We will use the notation
\begin{align*}
B(p,r) &= \{x \in M \; | \; d(x,p) \leq r \}\\
\rad(S) &= \sup\{d(x,0) \; | \; x \in S\}
\end{align*}
where $p \in M$ and $S \subset M$. Next, if $(x_n)_n$ is a sequence of elements of $M$, we will say that \textit{$(x_n)_n$ goes to infinity} if $\lim_n d(x_n , 0_M) = \infty$. For convenience, let us recall the vector spaces
\begin{align*}
\Lip(M) &= \{f \in \K^M \; | \; f \text{ is Lipschitz}\}\\
\Lip_0(M) &= \{f \in \Lip(M) \; | \; f(0) = 0\} .
\end{align*}
We also wish to recall some important features of the Lipschitz-free space over $M$,
$$\F(M) := \overline{ \mbox{span}}^{\| \cdot \|}\left \{ \delta(x) \, : \, x \in M \right \} \subset \Lip_0(M)^*.$$
First, $\F(M)$ is actually an isometric predual of $\Lip_0(M)$, that is $\F(M)^* \equiv \Lip_0(M)$. Moreover, if $0_M \in K \subset M$, then $\F(K)$ is isomorphic to a subspace of $\F(M)$ in the following way
$$\F(K) \simeq \overline{\text{span}} \{ \delta_M(x) \; | \; x \in K\} \subset \F(M).$$
According to this identification, the \textit{support of $\gamma \in \F(M)$} is the smallest closed subset $K \subset M$ such that $\gamma \in \F(K)$. It is denoted by $\text{supp}(\gamma)$. In particular and according to the terminology introduced before, $\mathcal{FS}_k(M)$ is the set of elements $\gamma \in \F(M)$ such that $\text{supp}(\gamma)$ is finite and $|\text{supp}(\gamma)| \leq k$ (where $|A|$ denote the cardinal of a subset $A \subset M$).
We refer to \cite{AP20, APPP2019} for more background information on the support. We mention here a very simple particular case of Theorem~\ref{thmC} in the case of some sequences in $\mathcal{FS}_1(M)$. We will use this fact in Section~$\ref{Section2}$ without mention, and it can be easily proved by considering the Lipschitz function $y \in M \mapsto d(x,y) - d(x,0_M)$.
\medskip
\noindent \textbf{Fact:} \textit{If $(x_n)_n \subset M$ is such that $\delta(x_n) \to \delta(x)$ weakly, then $\delta(x_n) \to \delta(x)$ in the norm topology (which is equivalent to saying that $x_n \to x$ in $M$).}
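Indeed (a one-line verification of the fact above), testing against the $1$-Lipschitz function $f_x := d(x,\cdot) - d(x,0_M) \in \Lip_0(M)$ gives $\<f_x , \delta(x_n)\> = d(x,x_n) - d(x,0_M) \to \<f_x , \delta(x)\> = -d(x,0_M)$, that is, $d(x_n,x) \to 0$; since $\| \delta(x_n) - \delta(x) \| = d(x_n,x)$, weak convergence already forces norm convergence in this particular case.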
\medskip
We also wish to mention that the Lipschitz-free space over $M$ is isometrically isomorphic to the Lipschitz-free space over its completion $\overline{M}$ in a very natural way. Indeed, it is readily checked that $f \in \Lip_0(\overline{M}) \mapsto f\mathord{\upharpoonright}_M \in \Lip_0(M)$ is a weak$^*$-to-weak$^*$ continuous isometry. Hence, if $\overline{f} : \overline{M} \to \overline{N}$ is the unique extension of $f : M \to N$, then
$\widehat{f}: \F(M) \to \F(N)$ and $\widehat{\overline{f}} : \F(\overline{M}) \to \F(\overline{N})$ are conjugate to one another, so one of them is compact if and only if the other one is.
The only place where we need completeness is in Theorem \ref{thmA}. Indeed, in the proof, we use the fact that if $N$ is complete, then a totally bounded subset of $N$ is relatively compact. However, one could restate this theorem by replacing $N$ by its completion.
So one can deduce the general statements (without completeness) from our statements (with completeness).
Since there is no real loss of generality, we will assume that $M$ and $N$ are always complete.
\smallskip
To conclude this introduction, let us state the next particular case of Urysohn's lemma that we shall use several times throughout the paper. It allows us to separate two or more points of $M$ by an element of $\text{Lip}_0(M)$. Since we are dealing with metric spaces, a concrete simple formula can be given for the Lipschitz map, but it can also be easily deduced from McShane's extension theorem, see e.g. \cite[Theorem 1.33 and Corollary 1.34]{Weaver2}.
\begin{lemma}
\label{LemmaConstrctionLipF}
Let $M$ be a pointed metric space, let $p\in M, p\neq 0_M$ and let $\varepsilon \in (0, d(p,0_M)/4)$. Then there exists $f \in \text{Lip}_0(M)$ such that $f = 1$ on $B(p, \varepsilon)$ and $f=0$ on $M \setminus B(p, 2 \varepsilon).$
\end{lemma}
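For instance (one possible instance of the ``concrete simple formula'' alluded to above), one may take
$$ f(z) = \min\Big\{1, \max\Big\{0, \, 2 - \frac{d(z,p)}{\varepsilon}\Big\}\Big\}, \qquad z \in M,$$
which is $\frac{1}{\varepsilon}$-Lipschitz, equals $1$ on $B(p,\varepsilon)$, vanishes outside $B(p,2\varepsilon)$, and vanishes at $0_M$ since $d(0_M,p) > 4\varepsilon > 2\varepsilon$.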
\section{A metric characterisation of compact Lipschitz operators}\label{Section2}
\label{section2}
The main objective of this section is to prove Theorem~\ref{thmA}. The proof will be based on the next easy but smart observation from \cite{Vargas2} (see Theorem~2.3 therein). This result concerns not only compact operators but also weakly compact operators, and so it will be useful in Section~$\ref{section3}$ as well. We shall provide its short proof for completeness.
\begin{proposition}[\cite{Vargas2}] \label{caracCompact}
Let $M,N$ be pointed metric spaces and let $f : M \to N$ be a base point-preserving Lipschitz mapping. Then $\widehat{f} : \F(M) \to \F(N)$ is (weakly) compact if and only if
$$ \left\{ \dfrac{\delta(f(x)) - \delta(f(y))}{d(x,y)} \; | \; x \neq y \in M \right\} $$
is relatively (weakly) compact in $\F(N)$.
\end{proposition}
\begin{proof} We will only prove the statement for compact operators, the proof being verbatim the same in the case of weakly compact operators.
Notice that
$$ \left\{ \dfrac{\delta(f(x)) - \delta(f(y))}{d(x,y)} \; | \; x \neq y \in M \right\} = \widehat{f}(\mathcal{M}),$$
where $\mathcal{M} = \left\{ d(x,y)^{-1}(\delta(x) - \delta(y)) \; | \; x \neq y \in M \right\}$. Since $\mathcal{M} \subset B_{\F(M)}$, if $\widehat{f}$ is compact then $\widehat{f}(\mathcal{M})$ must be relatively compact. Conversely, it follows from the Hahn--Banach separation theorem that $B_{\F(M)} = \overline{\conv} \mathcal{M}$, the closure being taken for the norm topology. Now observe that
$$ \widehat{f}(B_{\F(M)}) \subset \widehat{f}(\overline{\conv}\mathcal{M}) \subset \overline{\conv} (\widehat{f}(\mathcal{M})) \subset \overline{\conv} \left(\overline{\widehat{f}(\mathcal{M})}\right).$$
So, if $\widehat{f}(\mathcal{M})$ is relatively compact, then $ \overline{\conv} \left(\overline{\widehat{f}(\mathcal{M})}\right)$ is compact (see e.g. \cite[Theorem~5.35]{IDBook}), and therefore $\widehat{f}(B_{\F(M)})$ is relatively compact.
\end{proof}
In the proof of Theorem~$\ref{thmA}$, we will use Proposition~$\ref{caracCompact}$ repeatedly and hence, we will work with sequences of finitely supported elements in Lipschitz-free spaces. By \cite[Lemma 2.10]{ACP20}, the set $\mathcal{FS}_k(M)$ of elements of $\F(M)$ whose support contains at most $k $ elements is weakly closed in $\F(M)$ (in particular, it is norm closed). We will use this fact in various places.
\begin{lemma}\label{Prepmaintheorem}
Let $k\in \mathbb{N}$ and $(\gamma_n)_n = \big(\sum_{i=1}^k a_{i}(n) \delta(x_{i}(n))\big)_n \subset \mathcal{FS}_k(M)$ be a sequence converging weakly to an element $\gamma \in \mathcal{FS}_k(M)$. Then, for every $p\in \supp(\gamma)$, there exists $1\leq m \leq k$ such that $\underset{n\to+\infty}{\liminf}\,d(x_m(n),p)=0.$
\end{lemma}
\begin{proof}
Let us write $\gamma = \sum_{i=1}^l a_i \delta(p_i)$ where $1\leq l \leq k$, $a_i \neq 0$ and $p_1, \ldots, p_l $ are pairwise distinct elements of $M \setminus \{0_M\}$.
Aiming for a contradiction, assume that there exists $1\leq j \leq l$ such that none of the sequences $(x_{i}(n))_n$, $1\leq i \leq k$, has a subsequence converging to $p_j$. Then, there exists $\varepsilon > 0$ and a strictly increasing sequence $(n_m)_m \subset \mathbb{N}$ such that, for every $m$ and every $1\leq i\leq k$, $d(x_{i}(n_m), p_j) \geq \varepsilon$. Hence, by Lemma~\ref{LemmaConstrctionLipF} we can find $h \in \Lip_0(M)$ such that $h(p_j) = 1, h(p_i) = 0$ if $i\neq j$ and $h = 0$ outside of $B(p_j, \varepsilon/2)$.
Now, simply notice that since $\gamma_{n_m} \to \gamma$ in the weak topology we have
$$
0 = \< h , \gamma_{n_m} \> \to \< h , \gamma \> = a_j,
$$
which is a contradiction.
\end{proof}
\begin{lemma}\label{Prep2maintheorem} Let $f : M \to N$ be a Lipschitz map such that $f(0_M)=0_N$. Let $(x_n , y_n)_n \subset \widetilde{M}$ and let $(m_n)_n \subset \F(N)$ be defined by $$m_n=\dfrac{\delta(f(x_n)) - \delta(f(y_n))}{d(x_n,y_n)}.$$
Assume that $(m_n)_n$ weakly converges to $\gamma \in \F(N)$.
\begin{enumerate}
\item If $d(x_n,y_n) \to 0$ then $\gamma = 0$.
\item If $d(x_n,y_n) \to + \infty$ then $\gamma = 0$.
\item If there exists $\alpha>0$ such that $d(x_n,y_n) \geq \alpha$ and $\gamma \neq 0$ then $(d(x_n,y_n))_n$ is bounded and $(f(x_n),f(y_n))_n$ has an accumulation point in $N\times N$.
\end{enumerate}
\end{lemma}
\begin{proof}
Notice that $(m_n)_n \subset \mathcal{FS}_2(N)$ which is weakly closed so $\gamma = a\delta(p)+b\delta(q) \in \mathcal{FS}_2(N)$ where either $p\neq q$ or $p=q=0$.
\smallskip
Let us prove $(1)$. If $\gamma \neq 0$ then we can assume that $a\neq 0$, $p \neq 0_N$ and, according to Lemma $\ref{Prepmaintheorem}$, that $(f(x_n))_n$ or $(f(y_n))_n$ has a subsequence converging to $p$. Since $d(x_n,y_n) \to 0$, both subsequences converge, that is, there exists an increasing sequence $(n_k)_k \subset \N$ such that $(f(x_{n_k}))_k$ and $(f(y_{n_k}))_k$ are converging to $p$. The same lemma ensures that $b=0$ so that $m_n \to a\delta(p)$ weakly.
Now, let $h \in \Lip_0(N)$ be such that $h$ takes the value 1 on a ball around $p$. Then, for $k$ large enough, $\langle h , m_{n_k} \rangle = 0 $ while the limit over $k$ of this term is $\langle h , a \delta(p) \rangle = a$, therefore $a=0$. This is a contradiction so we must have $\gamma = 0$.
\smallskip
We now prove $(2)$. We aim for a contradiction. If $\gamma \neq 0$, we may assume that $a\neq 0$ and $p\neq 0_N$ and by Lemma~\ref{Prepmaintheorem}, up to extracting a subsequence, that $(f(x_n))_n$ converges to $p$. Since $(f(x_n))_n$ converges and $d(x_n,y_n) \to + \infty$, we have
$$\|\cdot\| - \lim\limits_{n \to \infty} \dfrac{\delta(f(x_n))}{d(x_n,y_n)} = 0.$$
Therefore $( d(x_n,y_n)^{-1}\delta( f(y_n) ) )_n \subset \mathcal{FS}_1(N)$ must converge weakly to an element $\gamma'=c\delta(r)$. We then distinguish two cases:
\begin{itemize}
\item[$\bullet$] If for some subsequence, $(f(y_{n_k}))_k$ is bounded, then $m_{n_k} \to 0$ and this is a contradiction.
\item[$\bullet$] If for some subsequence, $d(f(y_{n_k}),0) \to +\infty$, then $(f(y_{n_k}))_k$ is eventually far from $r$. As in $(1)$, one can show that $c$ must be equal to 0 by using a Lipschitz map taking the value 1 at $r$ and 0 outside of a ball centered at $r$. So $m_{n_k} \to 0$, yet another contradiction.
\end{itemize}
\smallskip
Let us finish with the proof of $(3)$. As above, since $\gamma \neq 0$ we can assume that $a\neq 0$, $p\neq 0_N$ and $(f(x_n))_n$ converges to $p$. We only need to show that $(f(y_n))_n$ has a convergent subsequence. If $b\neq 0$ and $q$ is not equal to $0_N$ or $p$, then Lemma~\ref{Prepmaintheorem} ensures that $(f(y_n))_n$ has a subsequence converging to $q$. So assume that $b=0$ or $q=0_N$, that is, $m_n \to a\delta(p)$ weakly.
Up to extracting another subsequence, we may assume that $d(x_n,y_n)$ converges to $\rho \in (0 , +\infty]$. If $\rho = + \infty$ then $\gamma = 0$ by $(2)$, so we actually have that $\rho \in (0 , +\infty)$. Therefore
$$
\delta(f(x_n)) - \delta(f(y_n)) \to a' \delta(p) \ \ \text{weakly}
$$
where $a'=a\rho$. Since $\delta(f(x_n)) \to \delta(p)$, we have
$$\delta(f(y_n)) \to a'' \delta(p) \ \ \text{weakly}$$
where $a''=1-a'$. If $a'' \neq 0$ then by Lemma $\ref{Prepmaintheorem}$, $f(y_n)$ has a subsequence converging to $p$, and if $a''=0$ then $f(y_n) \to 0_N$.
\end{proof}
We need one last lemma before the proof of Theorem~\ref{thmA}. For convenience, let us recall what was called $(P_3)$ in that statement.
\begin{enumerate}
\item[$(P_3)$] For every $(x_n,y_n)_n \subset \widetilde{M} : = \{(x,y) \in M \times M \; | \; x \neq y\}$ such that $\lim\limits_{n \to \infty} d(x_n,0) = \lim\limits_{n \to \infty} d(y_n,0) = \infty$, either
\smallskip
\begin{itemize}
\item $(f(x_n), f(y_n))_n$ has an accumulation point in $N \times N$, or
\item $\underset{n\to+\infty}{\liminf}\,\dfrac{d(f(x_n),f(y_n))}{d(x_n,y_n)}=0$.
\end{itemize}
\end{enumerate}
\begin{lemma}\label{3implies5}
Let $M$ be an unbounded metric space, $N$ be any metric space and $f : M \to N$ be any map. If $f$ satisfies $(P_3)$ then $f$ is radially flat, that is
$$\ \ \lim\limits_{d(x,0) \to \infty} \dfrac{d(f(x),0)}{d(x,0)} =0. $$
\end{lemma}
\begin{proof}
Assume that $f$ satisfies $(P_3)$. Let $(x_n)_n \subset M$ be such that $d(x_n,0) \to +\infty$. We will show that there exists a subsequence $(x_{n_k})_k$ such that $$\dfrac{d(f(x_{n_k}),0)}{d(x_{n_k},0)} \underset{k\to +\infty}{\longrightarrow} 0.$$
In view of applying Property $(P_3)$, we first construct by induction an increasing sequence $(n_k)_k \subset \mathbb{N}$ and a sequence $(y_{n_k})_k \subset M$ such that for every $k \in \N$
\begin{itemize}
\item[(i)] $d(y_{n_k},0) \geq k$;
\item[(ii)] $d(x_{n_k},y_{n_k}) \geq k$;
\item[(iii)] $\dfrac{d(x_{n_k}, y_{n_k})}{d(x_{n_k}, y_{n_k}) - d(y_{n_k}, 0)} \leq 2$;
\item[(iv)] $\dfrac{d(f(y_{n_k}), 0)}{d(x_{n_k}, y_{n_k}) - d(y_{n_k}, 0)} \leq \dfrac{1}{k}$.
\end{itemize}
We proceed by induction and start with the base case $k=1$. Since $M$ is unbounded, there is an element $y\in M$ such that $d(y,0) \geq 1$. We fix such $y$. The inequality
$$
d(x_n,y) \geq d(x_n,0)-d(y,0)
$$
yields $d(x_n,y) \underset{n\to +\infty}{\longrightarrow} +\infty$ so that
$$
\dfrac{d(x_n, y)}{d(x_n, y) - d(y, 0)} \underset{n\to +\infty}{\longrightarrow} 1 \ \ \ \text{and} \ \ \ \dfrac{d(f(y), 0)}{d(x_n, y) - d(y, 0)} \underset{n\to +\infty}{\longrightarrow} 0.
$$
Hence, we can find $n_1 \in \mathbb{N}$ large enough so that
$$
d(x_{n_1},y) \geq 1, \ \dfrac{d(x_{n_1}, y)}{d(x_{n_1}, y) - d(y, 0)} \leq 2 \ \ \ \text{and} \ \ \ \dfrac{d(f(y), 0)}{d(x_{n_1}, y) - d(y, 0)} \leq 1.
$$
We then set $y_{n_1}=y$.
Assume now that $y_{n_1}, \ldots, y_{n_k} \in M$ are constructed with $n_1 < n_2 < \cdots < n_k$. We can find $y\in M$ such that $d(y,0) \geq k+1$. We now proceed as above, and we find $n_{k+1} \in \mathbb{N}$ such that $n_k < n_{k+1}$ and
$$
d(x_{n_{k+1}},y) \geq k+1, \ \dfrac{d(x_{n_{k+1}}, y)}{d(x_{n_{k+1}}, y) - d(y, 0)} \leq 2 \ \ \ \text{and} \ \ \ \dfrac{d(f(y), 0)}{d(x_{n_{k+1}}, y) - d(y, 0)} \leq \dfrac{1}{k+1}.
$$
We can now set $y_{n_{k+1}} = y$ and by construction, the sequence $(y_{n_k})_k \subset M$ satisfies the desired properties.\\
In particular, $d(y_{n_k},0) \to +\infty$, so we can apply $(P_3)$ to $(x_{n_k})_k$ and $(y_{n_k})_k$ and we keep denoting by $(x_{n_k})_k$ and $(y_{n_k})_k$ the subsequences that we obtain. Hence, we either have $f(x_{n_k}) \to p$ and $f(y_{n_k}) \to q$ for some $p,q \in N$ or $\frac{d(f(x_{n_k}) , f(y_{n_k}))}{d(x_{n_k},y_{n_k})} \to 0$. Note that if we are in the first case, then we also have
\begin{equation}\label{CV0}
\dfrac{d(f(x_{n_k}) , f(y_{n_k}))}{d(x_{n_k},y_{n_k})} \to 0
\end{equation}
because $d(x_{n_k},y_{n_k}) \to +\infty$,
and that is the property we will need. Indeed, by the triangle inequality
\begin{align*}
\dfrac{d(f(x_{n_k}), 0)}{d(x_{n_k},0)} & \leq \dfrac{d(f(x_{n_k}), f(y_{n_k}))}{d(x_{n_k},y_{n_k}) - d(y_{n_k},0)} + \dfrac{d(f(y_{n_k}), 0)}{d(x_{n_k},y_{n_k}) - d(y_{n_k},0)} \\
& = \dfrac{d(f(x_{n_k}), f(y_{n_k}))}{d(x_{n_k}, y_{n_k})} \dfrac{d(x_{n_k},y_{n_k})}{d(x_{n_k},y_{n_k}) - d(y_{n_k},0)} + \dfrac{d(f(y_{n_k}), 0)}{d(x_{n_k},y_{n_k}) - d(y_{n_k},0)}
\end{align*}
and the right hand side converges to $0$ by $(iii)$, $(iv)$ and \eqref{CV0}.
\end{proof}
\begin{maintheorem} \label{thmA}
Let $M,N$ be complete pointed metric spaces, and let $f : M \to N$ be a base point-preserving Lipschitz mapping. Then $\widehat{f} : \F(M) \to \F(N)$ is compact if and only if the next assertions are satisfied:
\begin{enumerate}
\item[$(P_1)$] For every bounded subset $S \subset M$, $f(S)$ is totally bounded in $N$;
\item[$(P_2)$] $f$ is uniformly locally flat, that is,
$$ \lim\limits_{d(x,y) \to 0} \dfrac{d(f(x),f(y))}{d(x,y)} =0;$$
\item[$(P_3)$] For every $(x_n,y_n)_n \subset \widetilde{M} : = \{(x,y) \in M \times M \; | \; x \neq y\}$ such that\\
$\lim\limits_{n \to \infty} d(x_n,0) = \lim\limits_{n \to \infty} d(y_n,0) = \infty$, either
\smallskip
\begin{itemize}
\item $(f(x_n), f(y_n))_n$ has an accumulation point in $N \times N$, or
\item $\underset{n\to+\infty}{\liminf}\,\dfrac{d(f(x_n),f(y_n))}{d(x_n,y_n)}=0$.
\end{itemize}
\end{enumerate}
\end{maintheorem}
\begin{remark}
Assume that the condition $(P_3)$ is satisfied. Then, if $(x_n,y_n)_n \subset \widetilde{M}$ is such that $(x_n)_n$ and $(y_n)_n$ go to infinity and $\dfrac{d(f(x_n),f(y_n))}{d(x_n,y_n)}$ does not converge to $0$, there is a subsequence $(x_{n_k},y_{n_k})_k$ such that $\underset{k\to+\infty}{\liminf}\,\dfrac{d(f(x_{n_k}),f(y_{n_k}))}{d(x_{n_k},y_{n_k})}>0$. This implies that $(f(x_{n_k}), f(y_{n_k}))_k$, and hence $(f(x_n), f(y_n))_n$, has an accumulation point in $N \times N$. This tells us that we can reformulate condition $(P_3)$ as follows:
\begin{enumerate}
\item[$(P_3')$] For every $(x_n,y_n)_n \subset \widetilde{M} : = \{(x,y) \in M \times M \; | \; x \neq y\}$ such that
$\lim\limits_{n \to \infty} d(x_n,0) = \lim\limits_{n \to \infty} d(y_n,0) = \infty$, either
\smallskip
\begin{itemize}
\item $(f(x_n), f(y_n))_n$ has an accumulation point in $N \times N$, or
\item $\underset{n\to+\infty}{\lim}\,\dfrac{d(f(x_n),f(y_n))}{d(x_n,y_n)}=0$.
\end{itemize}
\end{enumerate}
\end{remark}
\smallskip
\begin{proof}[Proof of Theorem $\ref{thmA}$.]
We first prove the ``$\implies$" direction.
\medskip
We start with $\widehat{f}$ compact implies $(P_1)$. Let $S$ be a bounded subset of $M$ and let $(x_n)_n$ be a sequence in $S$. By assumption (and Proposition \ref{caracCompact}), the sequence
$$ (m_n)_n := \left(\widehat{f}\left(\frac{\delta(x_n)}{d(x_n,0)} \right)\right)_n = \left(\frac{\delta(f(x_n))}{d(x_n,0)}\right)_n$$
has a convergent subsequence $(m_{n_k})_k$.
Denote by $\gamma$ the limit of $(m_{n_k})_k$. If $\gamma = 0$, then
$$
d(f(x_{n_k}), 0_N) =\| m_{n_k} \| d(x_{n_k}, 0_M) \underset{k \to \infty}{\longrightarrow} 0
$$
because $(x_{n_k})_k$ is bounded. In that case, $f(x_{n_k}) \to 0_N$ and we are done. Hence, it only remains to consider the case when $\gamma \neq 0$. By Lemma $\ref{Prep2maintheorem}$, this can only happen if $d(x_{n_k},0_M)$ does not tend to $0$. But then, we can find a subsequence, still denoted by $(n_k)_k$ for convenience, such that $d(x_{n_k},0_M) \geq \alpha > 0$ for every $k$. By the same lemma, $(f(x_{n_k}))_k$ must then have a convergent subsequence, and this finishes the proof of $(P_1)$.
\medskip
We now show that $\widehat{f}$ compact implies $(P_2)$. Let $(x_n)_n$, $(y_n)_n$ be two sequences in $M$ such that $d(x_n, y_n) \to 0$. By Proposition \ref{caracCompact}, the sequence
$$ \left(\frac{\delta(f(x_n)) - \delta(f(y_n))}{d(x_n,y_n)}\right)_n$$
has a converging subsequence. However it follows immediately from Lemma~\ref{Prep2maintheorem}~(1) that the limit is $0$.
\medskip
It remains to prove that $\widehat{f}$ compact implies $(P_3)$. We already know that if $\widehat{f}$ is compact then $f$ satisfies $(P_2)$, which will be of use. Let $(x_n)_n, (y_n)_n \subset M$ be sequences going to infinity with $x_n \neq y_n$. Again, we let
$$m_n := \dfrac{\delta(f(x_n)) - \delta(f(y_n))}{d(x_n,y_n)}$$
and $(m_n)_n$ has a convergent subsequence, which we keep denoting by $(m_n)_n$, for simplicity. Let $\gamma$ be the limit of $(m_n)_n$. Notice that
$$ \|m_n \| = \dfrac{d(f(x_n) , f(y_n))}{d(x_n,y_n)}.$$
We distinguish two cases: up to extracting a further subsequence, we will need to consider the cases when $d(x_n,y_n)$ converges to $0$ and when there exists $\alpha >0$ such that $d(x_n,y_n) \geq \alpha$ for every $n$. In the first case, we get by $(P_2)$ that $m_n \to 0$ so that $\|m_n\| \to 0$. In the second case, if $\gamma \neq 0$, we have by Lemma~\ref{Prep2maintheorem}~(3) that there exist $p,q\in N$ and an increasing sequence $(n_k)_k \subset \N$ such that $f(x_{n_k}) \to p$ and $f(y_{n_k}) \to q$. Finally if $\gamma = 0$ then again $\|m_n\| \to 0$. In all cases, $f$ satisfies $(P_3)$.
\bigskip
Let us now prove the ``$\impliedby$'' direction. We keep using the notation
$$(m_n)_n :=\left(\frac{\delta(f(x_n)) - \delta(f(y_n))}{d(x_n,y_n)}\right)_n$$
where $x_n \neq y_n \in M$ for every $n \in \N$. By Proposition $\ref{caracCompact}$, we have to show that this sequence admits a convergent subsequence in $\F(N)$. Up to extracting a subsequence, we only have to distinguish three cases: when both $(x_n)_n$ and $(y_n)_n$ are bounded, when one of them is bounded while the other one goes to $+\infty$, and when both go to $+\infty$.
\begin{enumerate}[(i), leftmargin=*,itemsep=5pt]
\item If $(x_n)_n$ and $(y_n)_n$ are bounded, by $(P_1)$ there exists an increasing sequence $(n_k)_k \subset \N$ such that $(f(x_{n_k}))_k$ converges to a point $p \in N$ and $(f(y_{n_k}))_k$ converges to some $q\in N$. Since the sequence $(d(x_{n_k},y_{n_k}))_k$ is bounded, up to a further extraction, we may assume that it converges to some $\rho \geq 0$. Since $f$ is uniformly locally flat, if $\rho = 0$ then $(m_{n_k})_k$ converges to 0. If $\rho>0$, then it is readily seen that $(m_{n_k})_k$ converges to $\rho^{-1}(\delta(p) - \delta(q))$.
\item If $(x_n)_n$ is bounded while $d(y_n,0) \to \infty$, thanks to $(P_1)$ there exists an increasing sequence $(n_k)_k \subset \N$ such that
$(f(x_{n_k}))_k$ converges to a point $p \in N$. Therefore we may write for every $k \in \N$:
\begin{align*}
m_{n_k} &= \frac{\delta(f(x_{n_k})) - \delta(f(y_{n_k}))}{d(x_{n_k},y_{n_k})}\\
&= \frac{\delta(f(x_{n_k})) - \delta(0)}{d(x_{n_k},y_{n_k})} + \frac{\delta(0) - \delta(f(y_{n_k}))}{d(0,y_{n_k})} \frac{d(0,y_{n_k})}{d(x_{n_k},y_{n_k})}.
\end{align*}
On the one hand,
$$ \left\|\frac{\delta(f(x_{n_k})) - \delta(0)}{d(x_{n_k},y_{n_k})}\right\| = \frac{d(f(x_{n_k}) , 0)}{d(x_{n_k},y_{n_k})} \underset{k \to \infty}{\longrightarrow} 0.$$
On the other hand, $f$ is radially flat thanks to Lemma~$\ref{3implies5}$ so that
$$ \left\|\frac{\delta(0) - \delta(f(y_{n_k}))}{d(0,y_{n_k})}\right\| = \frac{d(f(y_{n_k}) , 0)}{d(y_{n_k},0)} \underset{k \to \infty}{\longrightarrow} 0. $$
Since the triangle inequality implies that
$\lim\limits_{k \to \infty} d(0 , y_{n_k})^{-1} d(x_{n_k},y_{n_k}) = 1$, we obtain that $(m_{n_k})_k$ converges to 0.
\item If $d(x_n,0) \to +\infty$ and $d(y_n,0)\to +\infty$, then by $(P_3)$ there exists $(n_k)_k \subset \N$ such that, either $\|m_{n_k}\| \to 0$ or $f(x_{n_k}) \to p$ and $f(y_{n_k}) \to q$ for some $p,q \in N$. In the first case we are done since $(m_{n_k})_k$ converges to 0. In the second case, up to further extraction, we may assume that $d(x_{n_k},y_{n_k}) \to \rho \in [0,+\infty]$. Hence, $m_{n_k}$ converges to $0$ if $\rho = 0$ or $\rho = +\infty$ and converges to $\rho^{-1}(\delta(p) - \delta(q))$ otherwise.
\end{enumerate}
In all cases, the sequence $(m_n)_n$ admits a convergent subsequence.
\end{proof}
Of course, condition $(P_3)$ is always satisfied if the metric space $M$ is bounded. Similarly, condition $(P_2)$ is always satisfied if the space is uniformly discrete, that is, $\inf_{x\neq y} d(x,y) >0$. On the other hand, if $M=\R=N$ with the usual metric $|\cdot|$, condition $(P_2)$ forces every difference quotient of $f$, and hence $f'$, to vanish, so that $f=0$ because $f(0)=0$. In particular, according to Theorem~\ref{thmA}, the only compact Lipschitz operator $\widehat{f} : \F(\R) \to \F(\R)$ is $0$.
Furthermore, $(P_3)$ may not be easy to check directly. The next result shows that we may replace this property by a stronger yet simpler condition. Nonetheless, Example~$\ref{P4notnecessary}$ will show that this condition is not necessary.
\begin{corollary}
Let $M,N$ be complete pointed metric spaces, and let $f : M \to N$ be a base point-preserving Lipschitz mapping. If $f$ satisfies
\begin{enumerate}
\item[$(P_1)$] For every bounded subset $S \subset M$, $f(S)$ is totally bounded in $N$;
\item[$(P_2)$] $f$ is uniformly locally flat, that is,
$$ \lim\limits_{d(x,y) \to 0} \dfrac{d(f(x),f(y))}{d(x,y)} =0;$$
\item[$(P_4)$] $f$ is flat at infinity, that is,
$$ \underset{d(y,0) \to \infty}{\lim\limits_{d(x,0) \to \infty}} \dfrac{d(f(x),f(y))}{d(x,y)} =0,$$
\end{enumerate}
then $\widehat{f} : \F(M) \to \F(N)$ is compact.
\end{corollary}
\begin{proof}
It is readily seen that $(P_4)$ implies $(P_3)$.
\end{proof}
\begin{remark}
Assume that $\widehat{f}$ is compact. It follows from Proposition $\ref{caracCompact}$ (or the proof of Theorem $\ref{thmA}$) and Lemma $\ref{Prep2maintheorem}$ that $f$ satisfies the following property
$$
\underset{d(x,y)\to +\infty}{\lim} \ \dfrac{d(f(x),f(y))}{d(x,y)} = 0.
$$
This property is stronger than the condition ``radially flat'' from Lemma $\ref{3implies5}$, but weaker than the condition ``flat at infinity'' from the previous corollary.
\end{remark}
\begin{example}[Property $(P_4)$ is not necessary]\label{P4notnecessary}
Consider $(M,d) =(\N \cup \{0\}, |\cdot|)$ and $f : M \to M$ obtained by $f(2n) = 0$ and $f(2n+1) = 1$. Then $f$ is clearly Lipschitz and $\widehat{f} : \F(M) \to \F(M)$ is compact because its range is finite dimensional. Even so, if we let $x_n = 2n+1$ and $y_n=2n$ then $d(x_n,0), d(y_n,0) \to +\infty$ while $\frac{d(f(x_n), f(y_n))}{d(x_n,y_n)} = 1$ for every $n$. Consequently $f$ does not satisfy $(P_4)$.
\end{example}
In fact, in the previous example, $f$ satisfies a much stronger property: $f(M)$ is totally bounded.
\begin{corollary}
Let $M,N$ be complete pointed metric spaces, and let $f : M \to N$ be a base point-preserving Lipschitz mapping. If $f(M)$ is totally bounded in $N$ and $f$ is uniformly locally flat, then $\widehat{f} : \F(M) \to \F(N)$ is compact.
\end{corollary}
\begin{proof}
If $f(M)$ is totally bounded then clearly $f$ satisfies $(P_1)$. Moreover if $(x_n)_n$ and $(y_n)_n$ are two sequences in $M$ going to infinity, then the sequences $(f(x_n))_n$ and $(f(y_n))_n$ have a common convergent subsequence and so $f$ readily satisfies $(P_3)$ in Theorem~$\ref{thmA}$.
\end{proof}
\begin{example}[$f(M)$ totally bounded is not necessary]\label{P4notnecessary2}
Take $M= \N \cup \{0\}$ equipped with the metric given by $d(n,0)=n!$ and $d(n,m)=n!+m!$ if $n\neq m$. Define $f : M \to M$ by $f(0)=0$ and $f(n) = n-1$ if $n\geq 1$. Then $f(M)=M$ is clearly not totally bounded while $\widehat{f}$ is compact as it satisfies $(P_1), (P_2)$ and $(P_4)$.
\end{example}
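For the reader's convenience, here is a quick verification of the claims in the example above: bounded subsets of this $M$ are finite, so $(P_1)$ is clear; $M$ is uniformly discrete ($d(x,y) \geq 1$ whenever $x \neq y$), so $(P_2)$ holds vacuously; and for $n \neq m$ in $\N$,
$$ \frac{d(f(n),f(m))}{d(n,m)} \leq \frac{(n-1)! + (m-1)!}{n! + m!} \leq \frac{1}{\min\{n,m\}}, $$
which tends to $0$ as $d(n,0)$ and $d(m,0)$ go to infinity, giving $(P_4)$.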
\section{Weak compactness of Lipschitz operators}\label{section3}
As we already mentioned in the introduction, Theorem~\ref{thmB} is an easy consequence of Theorem~\ref{thmC}, which states that norm-convergence and weak-convergence are equivalent for sequences in $\mathcal{FS}_k(M)$, plus some other classical results concerning (weakly) compact operators. We postpone the proof of Theorem~\ref{thmC} in order to first discuss its use in the proof of Theorem~\ref{thmB}.
\begin{maintheorem} \label{thmB} Let $M,N$ be complete pointed metric spaces, and let $f : M \to N$ be a base point-preserving Lipschitz mapping. Then the next conditions are equivalent:
\begin{enumerate}
\item $\widehat{f} : \F(M) \to \F(N)$ is compact;
\item $\widehat{f} : \F(M) \to \F(N)$ is weakly compact;
\item $C_f : \Lip_0(N) \to \Lip_0(M)$ is compact;
\item $C_f : \Lip_0(N) \to \Lip_0(M)$ is weakly compact;
\item $ C_f : \Lip_0(N) \to \Lip_0(M) \ \text{is weak}^*\text{-to-weak continuous}$.
\end{enumerate}
\end{maintheorem}
\begin{proof}
The implication $(1) \implies (2)$ is obvious. Next, $(2) \implies (1)$ follows from Theorem~\ref{thmC} and Proposition~\ref{caracCompact}. Indeed, thanks to the Eberlein--\v{S}mulian theorem (see \cite[Theorem~1.6.3]{TopicsBanachSpaceTheory} e.g.), a subset $S$ of a Banach space $X$ is (relatively) weakly compact if and only if it is (relatively) weakly sequentially compact. So, Theorem~\ref{thmC} implies that a subset $S \subset \mathcal{FS}_k(M)$ is weakly compact if and only if it is compact in the norm topology. Now observe that the set appearing in Proposition~\ref{caracCompact} is a subset of $\mathcal{FS}_2(M)$ so that compactness and weak compactness are indeed equivalent. To conclude, $(1) \iff (3)$ follows from Schauder's theorem (see e.g. \cite[Theorem~3.4.15]{Megg}), $(2) \iff (4)$ follows from Gantmacher's theorem (see e.g. \cite[Theorem~3.5.13]{Megg}), and $(2) \iff (5)$ follows from a classical result \cite[Theorem~3.5.14]{Megg} due to Gantmacher in the separable case and Nakamura in the general case.
\end{proof}
Theorem~\ref{thmC} is essentially contained in the very deep result \cite[Theorem~5.2]{AlbiacKalton}, even if one really needs to use the weak closedness of $\mathcal{FS}_k(M)$ \cite[Lemma 2.10]{ACP20} in order to obtain the statement we give.
For the sake of completeness, we will take advantage of some recent developments in the study of Lipschitz-free spaces in order to provide a new direct proof of this result. First, we recall two useful facts.
The first one shows that the pointwise multiplication with a Lipschitz function of bounded support always results in a Lipschitz function and, in fact, defines a continuous operator between Lipschitz spaces.
\begin{lemma}[Lemma~2.3 in \cite{APPP2019}]
\label{lm:multiplication_operator}
Let $M$ be a pointed metric space and let $h\in\Lip(M)$ have bounded support. Let $K\subset M$ contain the base point and the support of $h$. For $f\in\Lip_0(K)$, let $T_h(f)$ be the function given by
\begin{equation}
\label{eq:T_h}
T_h(f)(x)=\begin{cases}
f(x)h(x) & \text{if } x\in K \\
0 & \text{if } x\notin K
\end{cases} \,.
\end{equation}
Then $T_h$ defines a weak$^*$-to-weak$^*$ continuous linear operator from $\Lip_0(K)$ into $\Lip_0(M)$, and $\norm{T_h}\leq\norm{h}_\infty+\rad(\supp(h))\lipnorm{h}$.
\end{lemma}
The function $T_h(f)$ does not depend on the choice of $K$, as long as it contains the support of $h$. Thus the requirement that $0\in K$ is not really a restriction, as one may always use the set $K\cup\set{0}$ instead.
Since $T_h$ is weak$^*$-to-weak$^*$ continuous, there is an associated bounded linear operator $W_h\colon\lipfree{M}\rightarrow\lipfree{K}$ such that $\dual{W_h}=T_h$.
\smallskip
The second fact is the following, whose proof can be deduced from that of \cite[Lemma~4.5]{Kalton04}.
\begin{lemma} \label{lemmaKalton}
Let $M$ be a bounded metric space. If $(\gamma_n)_n \subset \F(M)$ is a weakly null sequence such that
$$ \exists \varepsilon >0, \forall n \neq m , \quad d(\supp(\gamma_n) , \supp(\gamma_m))> \varepsilon,$$
then $(\gamma_n)_n$ converges to 0 in the norm topology.
\end{lemma}
We are now ready to prove the desired structural result about finitely supported sequences in Lipschitz-free spaces.
\begin{maintheorem} \label{thmC}
Let $M$ be a complete metric space. If a sequence $(\gamma_n)_n \subset \mathcal{FS}_k(M)$ weakly converges to some $\gamma \in \F(M)$, then $\gamma \in \mathcal{FS}_k(M)$ and $(\gamma_n)_n$ converges to $\gamma$ in the norm topology.
\end{maintheorem}
\begin{proof}
Since $\mathcal{FS}_k(M)$ is weakly closed by \cite[Lemma~2.10]{ACP20}, if a sequence $(\gamma_n)_n \subset \mathcal{FS}_k(M)$ weakly converges to some $\gamma \in \F(M)$, then $\gamma \in \mathcal{FS}_k(M)$. Therefore, for every $n \in \N$, $\gamma - \gamma_n \in \mathcal{FS}_{2k}(M)$. Consequently, to prove the result it is enough to show that for every complete metric space $M$ and for every $k \in \N$, any weakly null sequence in $\FS_k(M)$ is actually norm null. We will proceed by induction on $k \in \N$. Thanks to \cite[Theorem~A]{AACD20}, there exists a bounded metric space $B(M)$ such that $\F(M)$ is linearly isomorphic to $\F(B(M))$. Moreover the isomorphism $T : \F(M) \to \F(B(M))$ preserves finitely supported elements in the sense that $\gamma \in \FS_k(M)$ if and only if $T(\gamma) \in \FS_k(B(M))$. So, without loss of generality, we may assume that $M$ is a bounded metric space.
If $k=1$ and $(\gamma_n)_n \subset \mathcal{FS}_1(M)$ is a weakly null sequence, we can write $\gamma_n = a_n \delta(x_n)$ where $a_n \in \K$ and $x_n \in M$. Let us denote $f := d(\cdot,0) \in \Lip_0(M)$. Since $(\gamma_n)_n$ is weakly null, it is readily seen that
$$\|\gamma_n\| = |a_n| d(x_n,0) = |\langle f , \gamma_n \rangle | \underset{n \to \infty}{\longrightarrow} 0.$$
Let us fix $k \in \N$. Assume we have shown that, for every $j \leq k$, every weakly null sequence in $\mathcal{FS}_j(M)$ is in fact norm null. Let us consider a weakly null sequence $(\gamma_n)_n \subset \mathcal{FS}_{k+1}(M)$. For every $n \in \N$, we will write
$$\gamma_n = \sum_{i=1}^{k+1} a_i(n) \delta(x_i(n)),$$
where $a_i(n) \in \K$ and $x_i(n) \in M$ for every $1 \leq i \leq k+1$. We will distinguish two cases:
\begin{itemize}[leftmargin=0pt, itemsep=4pt]
\item There exists $i \in \{1, \ldots , k+1\}$ such that $(x_i(n))_n$ has a convergent subsequence to some $x \in M$. For simplicity, we still denote the subsequence by $(x_i(n))_n$. Notice that $i \in \{1, \ldots , k+1\}$ as above might not be unique. So, up to a further extraction, we may assume that there exists $\varepsilon >0$ and $i_1, \ldots , i_j$ such that $(x_{i}(n))_n$ converges to $x$ for every $i \in I:=\{i_1, \ldots , i_j\}$, while $(x_{i}(n))_n \subset M \setminus B(x,\varepsilon)$ whenever $i \in \{1 , \ldots, k+1\} \setminus I$.
If $j=k+1$, that is $I = \{1 , \ldots, k+1\}$, then the set $K:=\{x_i(n) \; | \; n \in \N \text{ and } 1 \leq i \leq k+1 \} \cup \{x\} \cup \{0\}$ is a countable compact metric space such that $(\gamma_n)_n \subset \F(K)$. Thanks to \cite[Theorem~3.1]{HLP} (see also \cite{AGPP21}), $\F(K)$ has the Schur property so that $(\gamma_n)$ is actually norm null, which is what we wanted to prove.
If $j<k+1$, we let $h$ be the map defined by $h(z)=1$ if $z \in B(x, \varepsilon/2)$ and $h(z) = 0$ if $z \in M \setminus B(x, \varepsilon)$. It is easy to prove that $h$ is Lipschitz on $B(x,\varepsilon/2) \cup (M \setminus B(x, \varepsilon))$ and, using McShane's extension theorem (see e.g. \cite[Theorem~1.33 and Corollary 1.34]{Weaver2}), we can extend $h$ to all of $M$. Clearly, $\supp(h) \subset K:= B(x,\varepsilon)\cup \{0\}$. Now let $T_h$ be as in Lemma~\ref{lm:multiplication_operator} and $W_h : \F(M) \to \F(K)$ be its pre-adjoint operator. It is a routine check to see that if $\mu \in \F(B(x,\varepsilon/2) \cup \{0\} )$ then
$W_h(\mu) = \mu$. Furthermore, there exists $N_0 \in \N$ such that for every $n \geq N_0$ and every $i \in \{i_1, \ldots , i_j\}$, $x_{i}(n) \in B(x,\varepsilon/2)$. Thus, by construction, we have:
$$ \forall n \geq N_0, \quad W_h\gamma_n = \underset{i \in I}{\sum_{i=1}^{k+1}} a_{i}(n) \delta(x_{i}(n)). $$
Since $W_h$ is continuous and since $(\gamma_n)_n$ is weakly null, the sequence $(W_h\gamma_n)_n \subset \F(K)$ is weakly null as well.
As $j<k+1$, we may use the induction hypothesis to deduce that $(W_h\gamma_n)_n$ is norm null in $\F(K)$. Recall that $\F(K)$ is a closed subspace of $\F(M)$, so that $(W_h\gamma_n)_n$ can be seen as a norm null (in particular weakly null) sequence in $\F(M)$. Since $\gamma_n = W_h\gamma_n + \mu_n$ for every $n \geq N_0$, where
$$ \mu_n := \underset{i \not\in I}{\sum_{i=1}^{k+1}} a_i(n) \delta(x_i(n)), $$
the sequence $(\mu_n)_n$ has to be weakly null as the difference of two weakly null sequences. Moreover $\mu_n \in \mathcal{FS}_{k+1-j}(M)$ with $k+1-j \leq k$, so we may use once more our induction hypothesis to get that $(\mu_n)_n$ is norm null, and finally
$$(\gamma_n)_n = (W_h\gamma_n)_n + (\mu_n)_n$$
is norm convergent to 0 as the sum of two such sequences.
\item There is no $i \in \{1, \ldots , k+1\}$ such that $(x_i(n))_n$ has a convergent subsequence. Then each set $\{x_i(n) \; | \; n \in \N \}$, $1 \leq i \leq k+1$, is not totally bounded. Hence there exist $\varepsilon >0$ and an infinite subset $\mathbb{M}$ of $\N$ such that for every $i$ and every $n \neq m \in \mathbb{M}$: $d(x_i(n) , x_i(m))> \varepsilon$. We now claim that we can extract an infinite subset $\mathbb{M}_1$ of $\mathbb{M}$ such that for every $i\neq j$ and every $n \neq m \in \mathbb{M}_1$: $d(x_i(n) , x_j(m))> \varepsilon/2$. Let us briefly sketch this extraction. We write $\M= \{n_1 , n_2 , \ldots\}$ and we let $m_1:=n_1$. Since the sequences $(x_i(n_\ell))_\ell$, $1\leq i \leq k+1$, are $\varepsilon$-separated, by the triangle inequality they must ``escape" the balls $B(x_j(m_1), \varepsilon/2)$, $1\leq j \leq k+1$, eventually. In other words, there exists $m_2 \in \M$ such that $m_1<m_2$ and for every $n \in \M$ and $1 \leq i,j \leq k+1$, $n \geq m_2 \implies d(x_i(n) , x_j(m_1)) \geq \varepsilon/2$. By the same argument, there exists $m_3 \in \M$ such that $m_3 > m_2$ and for every $n \in \M$ and $1 \leq i,j \leq k+1$, $n \geq m_3 \implies d(x_i(n) , x_j(m_1)) \geq \varepsilon/2$ and $d(x_i(n) , x_j(m_2)) \geq \varepsilon/2$. Continuing this construction by induction provides the required $\M_1 = \{m_1, m_2 , \ldots\}$. To conclude, notice that for every $n \neq m \in \mathbb{M}_1$, $d(\supp(\gamma_n) , \supp(\gamma_m)) > \varepsilon/2$. Since $(\gamma_n)_{n \in \M_1}$ is weakly null, we may apply Lemma~\ref{lemmaKalton} to conclude that $(\gamma_n)_{n \in \M_1}$ is norm null.
\end{itemize}
\end{proof}
| {
"timestamp": "2021-10-08T02:10:53",
"yymm": "2110",
"arxiv_id": "2110.03231",
"language": "en",
"url": "https://arxiv.org/abs/2110.03231",
"abstract": "Any Lipschitz map $f : M \\to N$ between two pointed metric spaces may be extended in a unique way to a bounded linear operator $\\widehat{f} : \\mathcal F(M) \\to \\mathcal F(N)$ between their corresponding Lipschitz-free spaces. In this paper, we give a necessary and sufficient condition for $\\widehat{f}$ to be compact in terms of metric conditions on $f$. This extends a result by A. Jiménez-Vargas and M. Villegas-Vallecillos in the case of non-separable and unbounded metric spaces. After studying the behavior of weakly convergent sequences made of finitely supported elements in Lipschitz-free spaces, we also deduce that $\\widehat{f}$ is compact if and only if it is weakly compact.",
"subjects": "Functional Analysis (math.FA)",
"title": "Compact and weakly compact Lipschitz operators",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9719924785827003,
"lm_q2_score": 0.7279754548076477,
"lm_q1q2_score": 0.707586666665854
} |
https://arxiv.org/abs/2005.00337 | State aggregations in Markov chains and block models of networks | We consider state-aggregation schemes for Markov chains from an information-theoretic perspective. Specifically, we consider aggregating the states of a Markov chain such that the mutual information of the aggregated states separated by T time steps is maximized. We show that for T = 1 this approach recovers the maximum-likelihood estimator of the degree-corrected stochastic block model as a particular case, thereby enabling us to explain certain features of the likelihood landscape of this popular generative network model from a dynamical lens. We further highlight how we can uncover coherent, long-range dynamical modules for which considering a time-scale T >> 1 is essential, using synthetic flows and real-world ocean currents, where we are able to recover the fundamental features of the surface currents of the oceans. | \section{Autoinformation and its properties}
In this section we describe a number of properties of the autoinformation in more detail.
\subsection{Relationship to DCSBM for one-step random walk dynamics}%
Here we show that the log-likelihood for a given partition of a symmetric, binary network is (up to factors that are non-essential for its optimization) equivalent to the one-step autoinformation $\mathcal I_1(h)$ of the corresponding aggregated dynamics of a simple random walk on the network.
We recall that the maximization of the log-likelihood for the DCSBM~\cite{newman2015generalized} for a partition into $K$ blocks corresponds to the minimization of:
\begin{equation*}
\mathcal S = E - \sum_{ij} e_{ij} \log \frac{e_{ij}}{e_i e_j},
\end{equation*}
where $e_{ij}$ is the sum of the adjacency matrix entries connecting nodes in block $i$ to nodes in block $j$, $e_i = \sum_j e_{ij}$ is the sum of links attached to nodes in class $i$, and $E$ is a constant (see, e.g., ~\cite{peixotoPRL2013parsimonious}).
By expanding the logarithm, this can be rewritten as:
\begin{equation*}
\mathcal S =
E - \sum_{ij} e_{ij} \log e_{ij} + 2 \sum_i e_i \log e_i.
\end{equation*}
Now, since $\sum_{ij} e_{ij} = \sum_i e_i = 2m$ is twice the number of edges in the network we can rewrite the above quantities as:
\begin{align*}
\sum_{ij} e_{ij} \log e_{ij} = &
2m \sum_{ij} \frac{e_{ij}}{2m} \log \frac{e_{ij}}{2m} + 2m \log(2m), \\
\sum_i e_i \log e_i = &
2m \sum_i \frac{e_i}{2m} \log \frac{e_i}{2m} + 2m \log(2m),
\end{align*}
Plugging these equations into the above expression gives:
\begin{align*}
\mathcal S =&\;
E - 2m \sum_{ij} \frac{e_{ij}}{2m} \log \frac{e_{ij}}{2m} \\ &+ 4m \sum_i \frac{e_i}{2m} \log \frac{e_i}{2m}+ 2m \log(2m).
\end{align*}
Finally observe that for a stationary random walk on a symmetric, binary network we will have $H(y_t) = H(y_{t+1})$ (by stationarity).
Further, the occupation probabilities of the blocks are $p(y_t = i) = \frac{e_{i}}{2m}$, and the joint probabilities of successive blocks are given by $p(y_t = i, y_{t+1} = j) = \frac{e_{ij}}{2m}$.
We can thus assert by direct computation that:
\begin{align*}
\mathcal S &=
E + 2m \left[H(y_t, y_{t+1}) - 2 H(y_t) +\log(2m) \right]\\
&=
E - 2m \left[H(y_t) - H(y_{t+1}| y_t) -\log(2m) \right]\\
&=
E - 2m \left[\mathcal I_1(h) - \log(2m) \right].
\end{align*}
Hence, minimizing $\mathcal S$, i.e., maximizing the likelihood of the DCSBM with parameters given by their maximum likelihood estimates, corresponds to maximizing the autoinformation for $T=1$ if the network is symmetric and binary.
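To make this correspondence concrete, the following sketch computes the block sums $e_{ij}$ and, from them, the one-step autoinformation $\mathcal{I}_1(h)$ of the aggregated random walk. It is a minimal illustration in \texttt{Python}, assuming only a symmetric binary adjacency matrix \texttt{A} and a class-label vector \texttt{labels}; the function name is ours and does not refer to any particular implementation.
\begin{verbatim}
import numpy as np

def one_step_autoinformation(A, labels):
    """One-step autoinformation I_1(h) of a stationary random walk on a
    symmetric, binary network, aggregated according to `labels`."""
    A = np.asarray(A, dtype=float)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    # indicator matrix Z with Z[x, y] = 1 iff node x belongs to class y
    Z = (labels[:, None] == classes[None, :]).astype(float)
    E = Z.T @ A @ Z                 # block sums e_ij
    two_m = E.sum()                 # equals 2m (twice the number of edges)
    joint = E / two_m               # p(y_t = i, y_{t+1} = j) = e_ij / 2m
    occupation = joint.sum(axis=1)  # p(y_t = i) = e_i / 2m

    def entropy(q):
        q = q[q > 0]
        return -np.sum(q * np.log(q))

    # I_1 = H(y_t) + H(y_{t+1}) - H(y_t, y_{t+1}), with H(y_t) = H(y_{t+1})
    return 2 * entropy(occupation) - entropy(joint.ravel())
\end{verbatim}
By the computation above, minimizing $\mathcal S$ over partitions then amounts to maximizing the value returned by this function.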
A number of points of the above result are worth emphasizing.
While the DCSBM is a generative network model, maximising the autoinformation does not impose a generative process on the data and can be applied to any dynamical process with a discrete state space.
For instance, the autoinformation can be computed without modification for a Markov process defined by a random walk on a \emph{weighted} network, or a set of trajectory data without an explicitly defined network.
In contrast, the DCSBM is a priori specified only for unweighted networks.
This corresponds to the fact that edges are all independent in the DCSBM and, hence, only paths of length one (i.e., edges) are essential to its likelihood function.
Note that autoinformation beyond $T=1$ or beyond simple random walks on unweighted graphs is not, to the best of our knowledge, the likelihood function of a generative model, and thus cannot be maximised in general by a statistical inference technique.
\subsubsection{Over-fitting of non-block structures using a DCSBM.}
The aim of algorithms based on stochastic block models or their variants is to decompose the adjacency matrix into groups which are `simple' (e.g.\ with density that is approximately constant or proportional to a degree distribution).
This offers a `dictionary' of patterns that is universal, in that it can eventually fit any network.
Nevertheless, such patterns, applied to cycle-like graphs such as the one depicted in Fig.~\ref{fig:plot_examples_betafix}, will generate a large number of blocks to fit the \emph{banded shape} of the adjacency matrix.
Thus the dictionary of the DCSBM is not adapted for an efficient description of the cyclic structures, which are `simple' in another fashion.
The situation is similar in some regard to the approximation theorems in numerical analysis: we know that any continuous real-valued function on the interval can be approximated arbitrarily well by polynomials (Weierstrass theorem) or by sines and cosines (Fourier decomposition), or by many other basis functions, but some functions are more efficiently approximated by polynomials and some others by a truncated Fourier decomposition.
Here, with similar arguments, one can assert that in some cases (i.e., those considered in Fig.~\ref{fig:plot_examples_betafix}) a block model is an inefficient basis to describe structures such as a banded adjacency matrix, while the `dictionary' offered by higher time scales $T>1$ is more appropriate.
\subsection{Extreme values of (regularized) autoinformation and data processing inequalities}
In this section we consider the state aggregation mappings with maximal (regularized) autoinformation.
Observe that for any two random variables $X$ and $Y$ and deterministic maps $f$
and $g$, it is well known that $I(f(X);g(Y)) \leq I(X;Y)$.
This is a form of the data-processing inequality~\cite{cover2012elements}.
We apply this data-processing inequality to the autoinformation of a Markov chain ($I(x_{t+T};x_t)$), and its aggregation, $I(y_{t+T};y_t)$, through the aggregation map $y_t=h(x_t)$, to obtain:
\begin{equation*}
I(y_{t+T};y_t) \leq I(x_{t+T};x_t).
\end{equation*}
Thus the aggregation maximizing the autoinformation is the trivial aggregation $y_t=x_t$, with $h$ being the identity map.
For the regularized autoinformation
\begin{equation*}
\mathcal{I}_{\beta,T}(h)=I(y_t;y_{t+T}) - \beta H(y_t)
\end{equation*}
with $\beta=1$, we see that it reduces to $-H(y_{t+T}|y_t)$, which takes its maximal value of zero for the trivial constant aggregation $h$ (all states of $\mathcal X$ being mapped to the single element set $\mathcal{Y}=\{y\}$).
\subsection{Analysis of the (regularized) autoinformation for limiting cases}
In the following we analyze how the autoinformation maximization behaves when considering two aggregated states, short or long time-scales.
To remove the effect of regularization, when comparing two different partitions, we compare aggregations of same complexity $H(y_t)$.
\subsubsection*{Two aggregated states}
We consider the ideal case of $K=2$ aggregated states denoted by $y=1$ and $y=2$ of same occupation probability $p(y_t=1)=p(y_t=2)=1/2$ (equivalently, $H(y_t) = 1$).
The joint probabilities on successive states fulfill the following set of equalities:
\begin{align}
p(y_t=y_{t+T}=1)+p(y_t=1,y_{t+T}=2)&=1/2, \label{eq:2state-1} \\
p(y_t=y_{t+T}=1)+p(y_t=2,y_{t+T}=1)&=1/2, \label{eq:2state-2} \\
p(y_t=2,y_{t+T}=1)+p(y_t=y_{t+T}=2)&=1/2, \label{eq:2state-3} \\
p(y_t=1,y_{t+T}=2)+p(y_t=y_{t+T}=2)&=1/2. \label{eq:2state-4}
\end{align}
We define the quantity $p_\text{leak,$T$}=p(y_t \neq y_{t+T})$, which, in the simple case of two classes, is $p(y_t=1,y_{t+T}=2)+p(y_t=2,y_{t+T}=1)$.
The above equalities, subtracting Eq.~(\ref{eq:2state-4}) from Eq.~(\ref{eq:2state-3}) and Eq.~(\ref{eq:2state-2}) from Eq.~(\ref{eq:2state-1}), enable us to write:
\begin{align*}
p(y_t=1,y_{t+T}=2)&=p(y_t=2,y_{t+T}=1)=\frac{p_\text{leak,$T$}}{2}, \\
p(y_t=1,y_{t+T}=1)&=p(y_t=2,y_{t+T}=2)=\frac{1-p_\text{leak,$T$}}{2}.
\end{align*}
Note that this calculation implies, in particular, that in this case the aggregated Markov chain is reversible even when the original Markov process on $\mathcal{X}$ is not.
From the above calculations we conclude that
\begin{equation*}
H(y_{t+T}|y_t)=H(\mathds{1}_{y_t \neq y_{t+T}})=\mathcal{H}(p_\text{leak,$T$}),
\end{equation*}
where $\mathcal{H}(x)$ denotes the Shannon entropy function of a probability $x$, $\mathcal{H}(x)=-x \log x - (1-x) \log (1-x)$; and the notation $\mathds{1}_{S}$ stands for the indicator variable of the event $S$, taking value $1$ if the event $S$ is realized and $0$ otherwise.
Therefore the autoinformation for an aggregation into two classes with same complexity can be written as:
\begin{equation*}
I(y_{t+T};y_t)= H(y_{t+T}) - H(y_{t+T}|y_t)=1-\mathcal{H}(p_\text{leak,$T$}).
\end{equation*}
In other words, the autoinformation is in this case determined by the leak probability $p_\text{leak,$T$}$.
It is maximized when $p_\text{leak,$T$}$ is either as low as possible or as large as possible.
The former case can be identified as an `assortative' partition of the original Markov chain (relatively to time scale $T$) and the latter, as a `disassortative' partition of the original Markov chain (relatively to time scale $T$).
This terminology generalizes the usual notion of assortativity coefficient in the following way.
Considering the random walk on a binary symmetric network with two classes of nodes, Newman's binary assortativity coefficient~\cite{newman2002assortative} is also in one-to-one relationship with $p_\text{leak,$1$}$, for $T=1$ step, and takes high values (close to $+1$) for $p_\text{leak,$1$}$ close to $0$, and low values (close to $-1$) for $p_\text{leak,$1$}$ close to $1$.
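The relation $I(y_{t+T};y_t)=1-\mathcal{H}(p_\text{leak,$T$})$ can also be checked numerically with the following small sketch, which assumes nothing beyond the two-by-two joint distribution written above (the function names are ours):
\begin{verbatim}
import numpy as np

def entropy_bits(q):
    # Shannon entropy (in bits) of a probability vector, ignoring zero entries
    q = np.asarray(q, dtype=float).ravel()
    q = q[q > 0]
    return -np.sum(q * np.log2(q))

def two_state_autoinformation(p_leak):
    """Autoinformation of a two-state aggregation with equal occupation
    probabilities (so that H(y_t) = 1 bit) and leak probability p_leak."""
    joint = np.array([[(1 - p_leak) / 2, p_leak / 2],
                      [p_leak / 2, (1 - p_leak) / 2]])
    marginal = joint.sum(axis=1)       # both entries equal 1/2
    return 2 * entropy_bits(marginal) - entropy_bits(joint)

# agreement with the closed form 1 - H(p_leak)
for p_leak in (0.05, 0.5, 0.95):
    assert np.isclose(two_state_autoinformation(p_leak),
                      1 - entropy_bits([p_leak, 1 - p_leak]))
\end{verbatim}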
\subsubsection*{Short time scales}
Let us characterize the autoinformation~\eqref{eq:autoinfo-bis} of an arbitrary aggregated Markov chain for short time-scales.
To make our analysis more meaningful, we will here switch our focus to a continuous-time Markov chain, as it will enable us to consider the limit of the step size (respectively the transition rate) going to zero.
A continuous-time Markov chain on state space $\mathcal{X}$ is in a state $i$ at the real time instant $t$, and makes a transition in the infinitesimal interval $[t,t+dt]$ to another state $j$ with probability $L_{ij}dt$, for some rate $L_{ij} \geq 0$.
Introducing the quantity $L_{ii}=-\sum_{j \neq i} L_{ij} \leq 0$, the Markov chain remains in state $i$ throughout $[t,t+dt]$ with probability $1+L_{ii}dt$.
Arranging the coefficients $L_{ij}$ into a Laplacian-like matrix $L$ (with rows summing to zero), we can describe the evolution of state probabilities of each state with the following master equation:
\begin{equation*}
\dot{p}(t)=p(t) L
\end{equation*}
where $p_i(t)$ is the probability of the Markov chain to be in state $i$ at time $t$, and $p(t) = [p_1(t),\ldots p_N(t)]$ is the row-vector collecting all probabilities $p_i(t)$.
The above dynamics lead to a state-transition equation of the form:
\begin{equation*}
p(t+T) = p(t)\text{exp}(LT),
\end{equation*}
where $\text{exp}(\cdot)$ is the matrix exponential function.
We now consider the aggregated process on $\mathcal{Y}$. Let us consider the indicator variable $\mathds{1}_{y_t \neq y_{t+T}}$, taking value 1 if a change of aggregated state occurs and 0 otherwise. The probability that this indicator equals 1 is again the \emph{leak probability} or \emph{escape probability} $p_\text{leak,$T$}$ discussed in the previous section.
For short time-scales $T\rightarrow 0$, we have $\exp(LT) \approx I + LT$, and therefore
$$p_\text{leak,$T$} \approx \sum_{i,j \in \mathcal{X}:h(i)\neq h(j)} p(x_t=i) L_{ij} T=\mathcal{O}(T).$$
For the derivations below it is useful to remember that the Shannon binary entropy function $\mathcal{H}(x)=-x \log x - (1-x) \log (1-x)$ scales as $-x \log x$ for $x \rightarrow 0$.
For instance $H(\mathds{1}_{y_t \neq y_{t+T}})=\mathcal{H}(p_\text{leak,$T$})$ scales as $p_\text{leak,$T$} |\log T|$ for small $T$, since $\log \sum_{i,j \in \mathcal{X}:h(i)\neq h(j)} p(x_t=i) L_{ij}$ is a constant, while $|\log T| \to +\infty$.
We can now write
\begin{align}
H(y_{t+T}|y_t) & =H(y_{t+T},\mathds{1}_{y_t \neq y_{t+T}}|y_t) \nonumber \\
& =H(\mathds{1}_{y_t \neq y_{t+T}}|y_t)+H(y_{t+T}|y_t,\mathds{1}_{y_t \neq y_{t+T}}) \label{eq:si1jump} \\
&\approx H(\mathds{1}_{y_t \neq y_{t+T}}|y_t) \label{eq:si2jump} \\
& =\sum_{k \in \mathcal{Y}} p(y_t=k)\, \mathcal{H}\bigl(p(y_{t+T} \neq k \mid y_t=k)\bigr) \nonumber \\
&\approx \sum_{k \in \mathcal{Y}} p(y_t=k)\, p(y_{t+T} \neq k \mid y_t=k)\, |\log T| \nonumber \\
& = p_\text{leak,$T$} |\log T|. \nonumber
\end{align}
We derive Eq.~(\ref{eq:si1jump}) from the chain rule for joint entropy $H(X,Y|Z)=H(X|Z)+H(Y|X,Z)$ (for arbitrary random variables $X,Y,Z$). We derive Eq.~(\ref{eq:si2jump}) by observing that the first term in Eq.~(\ref{eq:si1jump}) turns out to scale as $T|\log T|$, whereas the second term scales as $p_\text{leak,$T$}$, thus as $T$, which is dominated by $T|\log T|$ for $T \rightarrow 0$.
In conclusion, in the short time limit, the dominant term of the conditional entropy is the \emph{leak probability} from the aggregated states, up to a factor $\log T$.
Accordingly, the autoinformation $I(y_t;y_{t+T})= H(y_t)-H(y_{t+T}|y_t)$ in a continuous-time Markov process with a fixed $H(y_t)$ is maximized for $T\rightarrow 0$ by choosing a state aggregation such that the flow of the process gets trapped within each block.
In the case of a continuous-time random walk on a symmetric graph, $L$ is the usual Laplacian, and the leak probability from the aggregated states is essentially given by the `cut size' between the blocks (`communities') of nodes in the graph, i.e.\ the total weight of edges standing between the aggregation classes.
A number of `node partitioning' or `community detection' methods aim at minimizing this cut size, regularized with a constraint or entropy-like cost promoting a nontrivial number of equal-size blocks.
This strategy underlies most edge-counting methods such as conductance, normalized cuts, ratio cuts, modularity, Potts model and linearized Markov stability.
See~\cite{delvenne2013stability} for references, discussion and detailed arguments.
Maximizing the short-term autoinformation is essentially identical to what all these methods implement, up to the choice of regularization strategy, and will accordingly yield an `assortative' aggregation, minimizing the leak from the aggregated states.
These type of `assortative' partitions are also the foundation for most time-scale separation techniques on Markov chains.
As a concrete example, assume the Laplacian $L$ is written as $L_0+\epsilon L_1$, where $L_0$ is a block-diagonal Laplacian matrix describing the union of decoupled Markov chains, and the rows of $L_1$ sum to zero.
Then it is a classic result~\cite{simon1961aggregation} that for $\epsilon\rightarrow 0$, the Markov chain aggregated along the diagonal blocks of $L_0$ is indeed approximately Markovian, and can therefore be safely replaced by a Markovian approximation.
Note that this trade-off between the Markov property and the dynamical predictability is precisely what is captured by the autoinformation, see Eq.~(\ref{eq:mutual_info}).
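As an illustration of this short-time behaviour, the sketch below computes the leak probability $p_\text{leak,$T$}$ of a continuous-time chain, both exactly through the matrix exponential and via the first-order approximation $\exp(LT)\approx I+LT$. It assumes a rate matrix \texttt{L} with rows summing to zero, its stationary distribution \texttt{pi}, and a class-label vector \texttt{labels}; the function names are ours.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def leak_probability(L, pi, labels, T):
    """p_leak,T = p(y_t != y_{t+T}) for a stationary continuous-time chain
    with rate matrix L, aggregated according to `labels`."""
    L, pi, labels = np.asarray(L, float), np.asarray(pi, float), np.asarray(labels)
    P_T = expm(L * T)                            # exact transition kernel
    joint = pi[:, None] * P_T                    # p(x_t = i, x_{t+T} = j)
    same_class = np.equal.outer(labels, labels)  # True where h(i) == h(j)
    return joint[~same_class].sum()

def leak_probability_short_time(L, pi, labels, T):
    """First-order approximation exp(LT) ~ I + LT, valid for T -> 0."""
    L, pi, labels = np.asarray(L, float), np.asarray(pi, float), np.asarray(labels)
    off_class_rates = np.where(~np.equal.outer(labels, labels), L, 0.0)
    return T * np.sum(pi[:, None] * off_class_rates)
\end{verbatim}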
\subsubsection*{Long time scales}
Let us now consider the autoinformation for long time-scales, for which we revert to a discrete-time formulation.
Note that we can rewrite the autoinformation from the form in Eq.~\eqref{eq:autoinfo-bis} as:
\begin{equation}
I(y_{t+T};y_t)=\left\langle \log \frac{p(y_{t+T},y_t)}{p(y_t)p(y_{t+T})} \right\rangle,
\end{equation}
where $\langle\cdot\rangle$ denotes the expectation, taken over the joint distribution $p(y_{t+T},y_t)$.
Since a mixing Markov chain on a symmetric graph converges towards its stationary distribution, the autoinformation will be close to zero for large $T$.
We can therefore approximate the above expression using a Taylor expansion for the natural logarithm as:
\begin{align*}
I(y_t;y_{t+T}) &\approx \left\langle \frac{p(y_t,y_{t+T})}{p(y_t)p(y_{t+T})} -1\right\rangle\\
&= \left\langle \frac{p(y_t,y_{t+T})}{p(y_t)p(y_{t+T})} \right\rangle -1\\
&=\sum_{y_t,y_{t+T}} \frac{p^2(y_t,y_{t+T})}{p(y_t)p(y_{t+T})}-1.
\end{align*}
This can be further developed by estimating $p^2(y_t,y_{t+T})$ for $T \rightarrow \infty$. The Markov chain on the state space $\mathcal{X}$ (which we assume to be ergodic and mixing) is described by the one-step transition matrix $P$, which appears in the master equation:
$$
p(t+1)=p(t)P
$$
The transition matrix $P$ can be decomposed spectrally as $P=\bm{1}p + \lambda v u + \ldots$, where $p$ is the stationary row-vector of occupation probabilities on $\mathcal{X}$, $\lambda$ is the eigenvalue of second-largest magnitude, $v$ is the corresponding (column) right eigenvector ($Pv=\lambda v$) and $u$ is the corresponding (row) left eigenvector ($uP=\lambda u$).
These eigenvectors satisfy $u \bm 1 = 0 = p v$, and $uv=1$.
Let us now suppose for convenience that $\lambda$ is real and unique.
For large $T$, we find that $P^T\approx\bm{1}p + \lambda^T v u$ as the powers of all other eigenvalues decay faster than $\lambda^T$.
Accordingly we can write
$$p(x_t,x_{t+T}) \approx p(x_t)p(x_{t+T}) + \lambda^T p(x_t)\, v(x_t)\, u(x_{t+T}).$$
Passing to the aggregated states, we obtain:
$$p(y_t,y_{t+T}) \approx p(y_t)p(y_{t+T}) + \lambda^T pv(y_t)\, u(y_{t+T}).$$
Here $pv(y_t)$ denotes the sum of all entries $p(x_t)v(x_t)$ for all states $x_t$ aggregated to $y_t$, and $u(y_{t+T})$ is the sum of $u(x_{t+T})$ over all $x_{t+T}$ aggregated to $y_{t+T}$.
Thus we obtain the following approximation:
\begin{align*}
I(y_t;y_{t+T}) &\approx \sum_{y_t,y_{t+T}} \frac{p^2(y_t,y_{t+T})}{p(y_t)p(y_{t+T})} -1 \\
&\approx \sum_{y_t,y_{t+T}} \lambda^{2T} \frac{pv(y_t)^2 u(y_{t+T})^2}{p(y_t)p(y_{t+T})}\\
&= \lambda^{2T} \sum_{y_t} \frac{pv(y_t)^2}{p(y_t)} \sum_{y_{t+T}} \frac{ u(y_{t+T})^2}{p(y_{t+T})}
\end{align*}
In conclusion, the aggregation with highest autoinformation (in combination with a regularization criterion) in the large $T$ limit can be determined from the second left and right eigenvectors of $P$.
Thus the optimal aggregation can be determined exactly by a spectral algorithm (i.e.\ an algorithm exploiting the dominant eigenvectors $u$, $v$).
A complete description of such a spectral algorithm, whose details depend on the chosen regularization, is beyond the scope of this article.
To build the intuition for such an algorithm, we observe that in the case of a reversible Markov chain (for instance a simple random walk on a symmetric network), $p(x_t)v(x_t)=u(x_t)$ and the usual idea of splitting between positive and negative values of $u$ typically offers a good aggregation into two states, with a high sum-of-squares $\sum_{y_t} \frac{u{(y_t)}^2}{p(y_t)}$.
This split will be an `assortative' splitting of $\mathcal{X}$ (if $\lambda >0$), or `disassortative', almost-bipartite splitting (if $\lambda < 0$).
This agrees with the general result for two-state aggregation above.
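A sketch of the corresponding spectral computation is given below. It assumes a row-stochastic transition matrix \texttt{P} whose second-largest eigenvalue (in magnitude) is real and simple, together with a class-label vector; it evaluates the large-$T$ approximation above and returns the sign-split of the second left eigenvector as a candidate two-class aggregation. The function names are ours, and this is an illustration rather than a complete spectral aggregation algorithm.
\begin{verbatim}
import numpy as np
from scipy.linalg import eig

def second_mode(P):
    """Stationary distribution p and left/right eigenvectors u, v of the
    second-largest-magnitude eigenvalue lambda of a row-stochastic P."""
    w, vl, vr = eig(P, left=True, right=True)
    order = np.argsort(-np.abs(w))
    p = np.real(vl[:, order[0]])
    p = p / p.sum()                    # stationary distribution
    lam = np.real(w[order[1]])
    v = np.real(vr[:, order[1]])       # right eigenvector, P v = lam v
    u = np.real(vl[:, order[1]])       # left eigenvector,  u P = lam u
    u = u / (u @ v)                    # normalization u v = 1
    return p, lam, u, v

def long_time_autoinformation(P, labels, T):
    """Large-T approximation lam^(2T) sum_y pv(y)^2/p(y) sum_y u(y)^2/p(y)."""
    labels = np.asarray(labels)
    p, lam, u, v = second_mode(P)
    classes = np.unique(labels)
    pv = np.array([np.sum(p[labels == c] * v[labels == c]) for c in classes])
    u_agg = np.array([np.sum(u[labels == c]) for c in classes])
    p_agg = np.array([np.sum(p[labels == c]) for c in classes])
    return lam ** (2 * T) * np.sum(pv ** 2 / p_agg) * np.sum(u_agg ** 2 / p_agg)

def sign_split(P):
    """Two-class aggregation from the sign of the second left eigenvector."""
    _, _, u, _ = second_mode(P)
    return (u > 0).astype(int)
\end{verbatim}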
\subsection{Differences between time-scale \texorpdfstring{$T$}{T} and regularization parameter \texorpdfstring{$\beta$}{beta}}
Here we expand on the different roles of the time-scale parameter $T$ and the regularization parameter $\beta$.
The definition of the (unregularized) autoinformation in Eq.~\eqref{eq:autoinfo} contains only a time-scale $T$.
While this parameter changes the transition properties of the Markov process, unless a regularization term is introduced, optimizing the autoinformation in the space of all partitions will always lead to the trivial partition in which all nodes are in their own group.
We add this regularization term, which is here chosen to be the entropy of the aggregated state space, with a scalar multiplier $\beta$.
The parameter $\beta$ thus regulates the influence of the regularization akin to a Lagrange multiplier.
Changing $T$ or $\beta$ has a markedly different effect on the optimization landscape of the regularized autoinformation Eq.~\ref{eq:autoinfo_regularization}.
The parameter $\beta$ provides a \emph{linear} scaling of the entropic cost term which is independent of the detailed graph structure but merely depends on the size of the aggregated states in terms of the aggregated degrees.
In contrast, the temporal parameter $T$ acts in a \emph{non-linear} way and directly changes the time-scale of the dynamics.
In particular, the effect of $T$ depends on the details of the path structure of the underlying graph and cannot be understood by some local statistics such as the degree sequence.
For a related discussion in the context of dynamics-based graph embeddings, see also~\cite{Schaub2019}.
\begin{figure}[htb!]
\centering
\includegraphics[width=0.5\linewidth]{plot_parameter_space.pdf}
\caption{%
The effect of $\beta$ and $T$ for optimal state-aggregation for a network with two alternative aggregations, as plotted in Fig.~\ref{fig:sameB} of the main text.
The plot shows the parameter regime in which the preferred state aggregation is either the ``assortative'' split into two cycles or the ``disassortative'' almost bipartite split (see Fig.~\ref{fig:sameB}).}%
\label{fig:parameter_space}
\end{figure}
Nonetheless, the parameters $\beta$ and $T$ may in some cases have a similar effect on the granularity of partitions obtained from optimizing Eq.~\eqref{eq:autoinfo_regularization}.
For instance, in Fig.~\ref{fig:parameter_space} we revisit the example network of Fig.\ref{fig:sameB}, in which both an almost bipartite split (for short time-scales) and a split into two cyclic structures (for long time-scales) provide a good aggregated description of the dynamics.
Accordingly, at low values of $T$, the bipartite partition is selected when optimizing the regularized autoinformation, but for large values of $T$, the split into two rings is obtained.
The same effect can here be obtained by fixing $T$ and regulating $\beta$: for a small $\beta$ the almost bipartite split is preferred and the split into two cycles is preferred for large $\beta$.
However, as our next example illustrates, the effect of $\beta$ and $T$ on the chosen partition is indeed different, in general.
In Fig.~\ref{fig:parameter_space_cpass}, we consider a random walk on a network that may be partitioned in terms of a core-periphery structure, as well as a block-diagonal (``assortative'') partition.
As Fig.~\ref{fig:parameter_space_cpass} illustrates, the effect of $T$ and $\beta$ is clearly different in this case.
For small $T$ there is no setting of $\beta$ under which the core-periphery structure would lead to a more accurate description according to the regularized autoinformation.
For large $T$, however, the core-periphery split is always preferred.
This illustrates that the time-scale parameter $T$ may in some cases be necessary to find certain dynamically relevant structures.
\begin{figure}[htb!]
\centering
\includegraphics[width=\linewidth]{plot_param_space_cp-ass.pdf}
\caption{%
The effect of $\beta$ and $T$ for optimal state-aggregation for a network with two alternative aggregations: a core-periphery and an assortative block-structure (see also Fig.~\ref{fig:plotexamples_hier}).
The plot shows the parameter regime in which each of these partitions corresponds to the optimal state aggregation.
}\label{fig:parameter_space_cpass}
\end{figure}
\section{Details on the ocean drifters experiment}
\begin{figure}[th]
\centering
\includegraphics[width=\linewidth]{drifters-short.pdf}
\caption{%
Partitioning the oceans according to the found aggregation classes of the ocean drifters for short time-scales.
For any value of the regularization parameter $\beta$ and $T=1$ the resulting partition comprises only small/local aggregation classes.
}\label{fig:drifters-short}
\end{figure}
We divide the Earth surface in a grid of equal-area cells.
The Equator is divided into 100 equally spaced intervals of $3.6^\circ$ and the meridians into 50 intervals with varying length such that all cells have the same area under the assumption that the Earth is a perfect sphere.
Each cell represents a node of the graph and an edge is added between cell $i$ and cell $j$ if a drifter visited the cell $i$ at any time $t$ and the cell $j$ at time $t+T$, where each time step is represented by a window of 16 days.
The weight of each edge represents the number of drifters following that path.
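For completeness, we sketch how such a grid and transition graph could be assembled in \texttt{Python}. The cell indexing follows the equal-area construction above (uniform in longitude, uniform in the sine of the latitude); the data format (a list of drifter trajectories, each a sequence of latitude/longitude positions sampled every 16 days) and the function names are assumptions of this illustration rather than a description of our processing pipeline.
\begin{verbatim}
import numpy as np
from collections import Counter

N_LON, N_LAT = 100, 50            # 100 x 50 equal-area cells

def cell_index(lat_deg, lon_deg):
    """Map a (latitude, longitude) position to one of the equal-area cells:
    longitude is divided uniformly, latitude bands are uniform in sin(lat)
    so that all cells have the same area on a perfect sphere."""
    i_lon = int((lon_deg % 360.0) / 360.0 * N_LON) % N_LON
    i_lat = min(int((np.sin(np.radians(lat_deg)) + 1.0) / 2.0 * N_LAT),
                N_LAT - 1)
    return i_lat * N_LON + i_lon

def transition_counts(trajectories, T=1):
    """Weighted edges: number of recorded moves from cell i to cell j over
    T time steps (one step = one 16-day window)."""
    counts = Counter()
    for trajectory in trajectories:
        cells = [cell_index(lat, lon) for lat, lon in trajectory]
        for a, b in zip(cells, cells[T:]):
            counts[(a, b)] += 1
    return counts
\end{verbatim}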
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{drifters-long.pdf}
\caption{%
Partitioning the oceans according to the found aggregation classes of the ocean drifters for long time-scales.
For $T=10$ (about 160 days) the partitions encompass large geographically coherent patches of the ocean surfaces and eventually only subdivide the ocean into well known macro areas.
}\label{fig:drifters-long}
\end{figure}
In Figures~\ref{fig:drifters-short} and~\ref{fig:drifters-long} we show the partition of the surface currents for a number of values of the regularization parameter $\beta$ and temporal parameter $T$.
Note that for $T=1$, even for high values of the regularization parameter $\beta$, the partitions remain small compared to the ocean gyres.
We estimate the time spent by a drifter to cross the ocean on one of the major currents to be around 160 days (this is an approximation, since drifter velocities and ocean perimeter are heterogeneous).
For longer time scales ($T=10$, i.e.\ around 160 days) the size of the aggregation classes becomes closer to the size of the major attractors, corresponding to the location of the Garbage Patches (Eastern and Western Great Pacific Garbage Patches, Northern Atlantic Garbage Patch, Indian Ocean Garbage Patch).
The cells visited by fewer drifters are hard to classify: we noticed that in many cases singletons and small classes coincide with cells with very low visiting probability.
Earlier works focusing on the Mediterranean Sea~\cite{berline2014mediterranean_clustering,rossi2014hydrodynamic} or the Great Barrier Reef~\cite{thomas2014numerical} use an approach similar to~\cite{sonnewald2019ocean_Kmeans}, and use clustering or community detection to identify certain regions of the oceans based on a simulated dynamical model.
Our partition also shows strong similarities with empirical data of the phase in the annual oscillation of the elevation of the surface of the ocean (Figure 7b in~\cite{wunsch1998satellite}), another proxy for the dynamical behavior.
Other approaches include measuring and thresholding dynamical similarities from a set of climatological or oceanographic measures~\cite{donges2009complex, molkenthin2014network_from_flows, tupikina2016correlation_net_flows} to construct the topological structure of a network model.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{drifters-algs.pdf}
\caption{%
Partitioning the oceans according to other algorithms.
In particular, we show the outcome of the Louvain algorithm (above) and of the DCSBM inference provided by the \texttt{graph-tool} implementation.
The partitioning of the ocean appears to be overfitted in both cases.
}\label{fig:drifters-algos}
\end{figure}
\section{Perfect Markovian aggregation: lumpable Markov chains and equitable partitions}
A Markov chain is said to be \emph{lumpable}~\cite{buchholz1994exact,tian2006lumpability} if we can aggregate its states $x_t$ such that the dynamics of the aggregated states $y_t$ is again a Markov chain.
Let us define the partition induced by the classes of the state aggregation by the indicator matrix $Z \in {\{0,1\}}^{N\times K}$ with entries $Z_{xy} = 1$ if state $x$ is mapped to the aggregated state $y= h(x)$ and zero otherwise.
We can then write the condition of a Markov chain to be lumpable in terms of the following algebraic relation:
\begin{equation}\label{eq:lumpable}
PZ = ZP^\pi
\end{equation}
where we have denoted the transition matrix of a Markov chain by $P$, and we have defined the aggregated state transition matrix as $P^\pi = {({Z}^{T}Z)}^{-1}{Z}^T P Z$.
In other words the above equation asserts that if two states $x$ and $x'$ are mapped to the same aggregated state $h(x) = h(x') = y$ (they belong to the same class), then the probability to transition from $x$ or $x'$ to the whole aggregation class $h^{-1}(y')$ defined by the aggregated state $y'$ will only depend on $y$ and $y'$, but not on $x$ or $x'$.
The condition in Eq.~\eqref{eq:lumpable} implies that when a transition $y \to y'$ is observed in the aggregated process, knowing the exact state $x$ inside the aggregation class $h^{-1}(y)$ observed in the original process is of no help to predict the future of the process, since all other states in the same aggregation class lead to the same statistics for the future trajectory.
Note that if a Markov chain is lumpable we can effectively use the smaller transition matrix $P^{\pi}$ to simulate the full chain exactly within the projected subspace.
Hence, if we can find a lumpable partition, we can significantly simplify the description of the system dynamics.
Accordingly, finding such lumpable partitions is of high-interest from a dynamical perspective.
There are a number of important consequences of the above algebraic relationship (cf.~\cite{Schaub2016,o2013observability}): (i) it implies that the transition matrix $P$ will have a set of eigenvectors, such that each eigenvector is piecewise constant on every aggregation class; (ii) these eigenvectors correspond to appropriately rescaled eigenvectors of the aggregated transition matrix $P^{\pi}$; (iii) the eigenvalues associated to these (scaled) eigenvectors are the same for both $P$ and $P^\pi$, i.e., the eigenvalues of $P^\pi$ are a subset of the eigenvalues of $P$.
Importantly, however, those eigenvalues do not have to correspond to dominant modes of $P$.
Interestingly, the algebraic condition for a Markov chain to be lumpable is closely related to so-called equitable partitions of (directed or undirected) graphs~\cite{Schaub2016,godsil2013algebraic}.
An equitable partition splits the graph into classes of nodes $\{\mathcal C_i\}$ such that the number of connections from any node $v\in\mathcal C_i$ to a class $\mathcal C_j$ is only dependent on $\mathcal C_i$ and $\mathcal C_j$, but not on $v$.
Similar to above, let us define the indicator matrix $\mathcal Z$ with entries $\mathcal Z_{ij}=1$ if node $i$ is in class $j$.
Then the algebraic characterization of an equitable partition reads as follows:
\begin{equation}
A\mathcal Z = \mathcal ZA^\pi,
\end{equation}
where $A^\pi = (\mathcal Z^{T}\mathcal Z)^{-1}\mathcal Z^T A \mathcal Z$ can be interpreted as the adjacency matrix of the quotient graph associated to partition $\mathcal Z$.
The quotient graph is defined as follows: the nodes are the aggregation classes $\{\mathcal C_i\}$, and the number of edges between $\mathcal C_i$ and $\mathcal C_j$ is the number of edges in the graph from each node of $\mathcal C_i$ to each node of class $\mathcal C_j$.
As we can see, the condition for a lumpable Markov chain is identical to that of an equitable partition, apart from the fact that in one case we use the transition matrix $P$ and in the other the adjacency matrix $A$.
This implies (as can be verified by direct computation) that if we perform an unbiased random walk on the graph, then any equitable partition is lumpable for the random walk process.
Note that equitability is in fact a stronger requirement: equitability regards the absolute number of links going from a node to any class, while lumpability for the random walk is formulated in terms of the (relative) fraction of links going from a node to any aggregation class.
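Both algebraic conditions are straightforward to verify numerically for a candidate partition. The sketch below assumes a row-stochastic matrix \texttt{P} (respectively an adjacency matrix \texttt{A}) and a class-label vector; the function names are ours.
\begin{verbatim}
import numpy as np

def indicator_matrix(labels):
    """N x K indicator matrix Z with Z[x, y] = 1 iff state x lies in class y."""
    labels = np.asarray(labels)
    classes = np.unique(labels)
    return (labels[:, None] == classes[None, :]).astype(float)

def is_lumpable(P, labels, tol=1e-10):
    """Check the lumpability condition P Z = Z P_pi for a transition matrix."""
    Z = indicator_matrix(labels)
    P_pi = np.linalg.inv(Z.T @ Z) @ Z.T @ P @ Z
    return np.allclose(P @ Z, Z @ P_pi, atol=tol)

def is_equitable(A, labels, tol=1e-10):
    """Check the analogous condition A Z = Z A_pi for an adjacency matrix."""
    Z = indicator_matrix(labels)
    A_pi = np.linalg.inv(Z.T @ Z) @ Z.T @ A @ Z
    return np.allclose(A @ Z, Z @ A_pi, atol=tol)
\end{verbatim}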
In the following we will use the above relationship between lumpable Markov chains and equitable partitions to construct lumpable Markov chains.
\begin{figure}[htpb]
\centering
\includegraphics[width=\linewidth]{plot_bowtie.pdf}
\caption{%
A family of graphs with two dynamically meaningful partitions (assortative and equitable) inspired by the small depicted bow-tie graph.
Most of the tested algorithms select the assortative partition as the best description for such a system.
By tuning the connectivity parameter $\kappa$ the separation of timescales in the dynamics can be controlled.
Depending on the mutual relation of the timescales of the dynamics on assortative and equitable partitions, either one or the other can display higher autoinformation.
(Numerical values computed with $T=1$ and $\beta=0$).
}\label{fig:bowtie}
\end{figure}
\section{Detecting equitable partitions via autoinformation}
In Fig.~\ref{fig:bowtie} we show a family of graphs with two dynamically meaningful partitions: an assortative partition with low number of edges between aggregated classes; and an equitable partition.
At low values of the number of links between the assortative classes ($\kappa$), the assortative partition has higher autoinformation due to the high predictability of the projected dynamics together with its \emph{almost} Markovianity.
Accordingly, a spectral analysis of the graph Laplacian would split the nodes in the same assortative partition.
In fact, using GenPCCA, fitting a DCSBM, and maximizing modularity all prefer the assortative partition associated to the highest eigenvalue of the system if we specify that a partition into two classes is to be found, regardless of the value of $\kappa$.
However, projecting Markovian dynamics to an equitable partition leads to exactly Markovian dynamics on the class space.
When we increase $\kappa$ the characteristic timescales of the dynamics on the assortative and equitable partitions become more similar but the projected dynamics associated to the assortative partition get \emph{less} Markovian.
By construction of the autoinformation, the fact that an equitable partition induces a Markov dynamics eventually leads to a higher autoinformation for large $\kappa$ in such partition.
Interestingly, we never observe an eigenvalue crossing for the values of $\kappa$ considered here, i.e., the dominant (slowest) modes in terms of eigenvalues are always those associated to the assortative split.
This split, however, induces a non-Markovian projected dynamics.
Thus, if we were to concentrate purely on \emph{dominant (i.e.\ slow) timescales, rather than Markovianity, the assortative partition would always be favoured}: this is exactly the case for spectral algorithms such as GenPCCA, which focus on dominant eigenvectors and are thus unable to detect the equitable structure in this case.
\begin{figure}[htpb]
\centering
\includegraphics[width=\linewidth]{part-plot.pdf}
\caption{%
Range-dependent network with non-negligible connectivity probability within classes.
We apply statistical inference of a DCSBM, GenPCCA as well as autoinformation maximization to the graph described by the adjacency matrix on the top.
The statistical inference of a DCSBM overfits the graph with a high number of small classes along the building cycles.
GenPCCA and autoinformation maximization recover the planted partition (if we choose a temporal or model selection parameter that is high enough).
Building parameters are: $N=180$, $\alpha = 0.9$, $\gamma=0.8$.
}%
\label{fig:stoch}
\end{figure}
\begin{figure}[htpb]
\centering
\includegraphics[width=\linewidth]{part-plot-r90.pdf}
\caption{%
Range-dependent network with non-negligible connectivity probability within and between classes (same parameters as in Fig.~\ref{fig:stoch}).
We apply statistical inference of a DCSBM, GenPCCA as well as autoinformation maximization to the graph described by the adjacency matrix on the top, the first two classes have links mostly between them.
The statistical inference of a DCSBM overfits the graph with a high number of small classes along the building cycles.
In this case GenPCCA underfits the original partition recognising only the structure highlighted by the spectral analysis.
Autoinformation maximization recovers the planted partition (if we choose a temporal parameter $T$ that is high enough).
}%
\label{fig:stoch90}
\end{figure}
\section{Comparisons with other partitioning algorithms}
The previous section allowed us to compare a spectral method and autoinformation maximization on a toy example. In this section we further compare partitioning methods on synthetic and real-life examples.
The network in Fig.~\ref{fig:sameB} in the main text represents the state-transition graph of a Markov chain, for which two dynamically relevant partitions can be defined: an assortative partition consisting of the two ring-like structures, and an almost bipartite partition.
Note that the graph can be represented by a conjunction of two \emph{banded adjacency matrices}.
For any banded adjacency matrix, we can reorder the nodes such that most non-zero entries are close to the main diagonal in a staircase-like pattern.
Importantly, for graphs described by banded adjacency matrices most nodes are not directly linked, but are only connected through a chain of connections (a ring like backbone of the graph).
Accordingly, algorithms relying only on information encoded in paths of length one (edges), are likely to be unable to capture this structure.
In this specific case shown in Fig.~\ref{fig:sameB}, using statistical inference to infer a DCSBM will lead to an over-partitioning of the network into many classes.
For instance, the algorithm described in~\cite{peixotoPRL2013parsimonious} finds 18 classes.
We emphasize this is a generic result that can be observed with other techniques as well: Modularity optimization, e.g., via the Louvain algorithm finds 7 classes.
Likewise, statistical inference based on the DCSBM as provided by the implementation~\cite{peixotoPRL2013parsimonious} as well as Modularity maximization via the Louvain algorithm will partition the ocean surface with a high number of classes (see Fig.~\ref{fig:drifters-algos}).
Spectral algorithms such as GenPCCA~\cite{fackeldey2018spectral} that aim to identify dominant subspaces of a (transition) matrix, i.e., focus on slow time-scales, can typically resolve banded adjacency structures, as these often induce slow dynamical modes.
Specifically, in the case of Fig.~\ref{fig:sameB} GenPCCA indeed finds the split into the two cyclic structures.
For the ocean example, the runtime of the GenPCCA algorithm was however too long to be compared, so we cannot report results here.
Optimizing regularized autoinformation can be seen as a way to interpolate between structures describing different dynamical modes with possibly different time-scales.
Indeed the autoinformation identifies both dynamically meaningful partitions for a corresponding time-scale parameter $T$ in Fig.~\ref{fig:sameB}.
\subsection{Range dependent networks}
\label{ssec:rangedependent}
Here, we introduce graph ensembles for which effects similar to those in Fig.~\ref{fig:sameB} of the main text can be observed when applying graph partitioning algorithms.
The construction of these graphs is closely related to the so-called range-dependent graphs~\cite{grindrod2002rangedependent}, and the $\mathbb{S}_1$ model~\cite{Serrano2008}.
Specifically, we consider networks composed of $N$ nodes endowed with a class label $c_i$, just like in a stochastic block model.
In addition, in each class every node is equipped with an angular coordinate $\theta_i$, corresponding to a point on the unit circle.
We denote by $d_\theta(i,j)$ the (shorter) angular distance between the two coordinates on the (possibly different) circles.
For our construction below we assume that, within each class the nodes have angular coordinates uniformly spaced on the circle.
We define a distance between two nodes $i$ and $j$ as:
\begin{equation*}
d_{ij} = \frac{d_\theta(i, j) \cdot \sqrt{N_{c_i} N_{c_j}}}{2\pi}
\end{equation*}
where $N_c$ is the size of class $c$.
We now connect two nodes $i$ and $j$ with a probability $p_{ij}=f(d_{ij},c_i,c_j)$ that depends on the above defined distance along the circle between the two nodes and the class labels:
\begin{equation*}
p_{ij} = \alpha_{c_i,c_j} \cdot (\gamma_{c_i,c_j})^{d_{ij}}
\end{equation*}
where $\alpha_{c_i,c_j}\in[0,1]$ defines a purely class specific connectivity, and $\gamma_{c_i,c_j}\in[0,1]$ is a class dependent parameter that modulates the influence of the distance.
We set $p_{ii}=0$ to ensure that there are no self-loops.
Note that if $\gamma_{c_i,c_j}=1$ for all class labels, we recover the SBM where the probability to link between nodes of two classes is simply given by $\alpha_{c_i,c_j}$.
For $\gamma_{c_i,c_j}\neq 1$, the link probability depends on the angular distance between the two nodes: the larger the distance, the smaller the connection probability.
Hence if $\gamma_{c_i,c_i} <1$ the link probability within each $c_i$ block will be akin to a stochastic cycle.
If $\gamma_{c_i,c_j} <1$ with $c_i \neq c_j$, the link probability will be high between corresponding nodes of the two stochastic cycles.
For simplicity we consider the distance parameter to assume a fixed value $\gamma_{c_i, c_j} = \gamma$.
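A minimal sampler for this ensemble may look as follows (a sketch; the function name and argument conventions are ours, and we only assume the class sizes and the $K\times K$ parameter arrays $\alpha$ and $\gamma$ defined above):
\begin{verbatim}
import numpy as np

def range_dependent_graph(sizes, alpha, gamma, seed=None):
    """Sample an adjacency matrix from the range-dependent ensemble: `sizes`
    lists the class sizes, `alpha` and `gamma` are K x K arrays of the
    class-level connectivity and range parameters."""
    rng = np.random.default_rng(seed)
    labels = np.concatenate([np.full(n, c) for c, n in enumerate(sizes)])
    # angular coordinates, uniformly spaced on the circle within each class
    theta = np.concatenate([np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
                            for n in sizes])
    N = len(labels)
    A = np.zeros((N, N), dtype=int)
    for i in range(N):
        for j in range(i + 1, N):      # j > i, hence no self-loops
            ci, cj = labels[i], labels[j]
            gap = abs(theta[i] - theta[j])
            d_theta = min(gap, 2.0 * np.pi - gap)        # shorter arc
            d = d_theta * np.sqrt(sizes[ci] * sizes[cj]) / (2.0 * np.pi)
            if rng.random() < alpha[ci, cj] * gamma[ci, cj] ** d:
                A[i, j] = A[j, i] = 1
    return A, labels
\end{verbatim}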
We now consider two types of networks generated according to the above outlined constructions, as exemplified by the adjacency matrices in Figures~\ref{fig:stoch} and \ref{fig:stoch90}.
In the first scenario (Fig.~\ref{fig:stoch}) we consider an assortative setup in which the class-dependent connectivity is $\alpha_{c_i, c_j} = \alpha$ and the range parameter is $\gamma_{c_i, c_j}=\gamma=0.8$ if $c_i = c_j$, while the connectivity is very small ($\alpha_{c_i, c_j}=\varepsilon \ll 1$ with $\gamma=1$) otherwise.
The resulting adjacency matrix thus consists of a set of banded matrices along the diagonal blocks (see Fig.~\ref{fig:stoch}, top).
For a typical graph drawn from this ensemble Fig.~\ref{fig:stoch} also shows a comparison of the partitioning results obtained from the autoinformation maximisation, the statistical inference of a DCSBM, and partitioning with GenPCCA.
The statistical inference of a DCSBM appears to overfit the graph and tends to find many small blocks within each of the stochastic cycles; on the other hand, spectral approaches as well as our dynamics-based approach detect the planted partition into three classes.
In Fig.~\ref{fig:stoch90}, we consider a similar scenario, where $\alpha_{c_i, c_j} = \varepsilon$ apart from the specific interclass link probability $\alpha_{1,2}=\alpha_{2,1}$ and the within class link probability $\alpha_{3,3}$.
For the statistical inference of the DCSBM we find again similar results as before.
However, in this case GenPCCA lumps together the first two planted classes and cannot resolve both the assortative and disassortative cyclic blocks, which is in agreement with the shape of the dominant eigenvectors.
In contrast, the maximization of autoinformation resolves the planted partition structure correctly and unveils both assortative and disassortative features.
\section{How many aggregation classes? Practical recommendations}
In many unsupervised data mining methods, one is confronted with a trade-off between the complexity of a representation of the original data and its accuracy in reproducing some features of the data.
In clustering or partition methods, this amounts to choosing the number of clusters or communities or (in our case) aggregation classes $K$.
In some applications, the number of classes $K$ is known to the user and may be directly imposed to the algorithm.
In this case one seeks the partition of the Markov chain states into $K$ classes that results into the highest autoinformation of the resulting aggregated process.
In other cases however, the number of classes is not known to the user, or only approximately so. In this case, various methods for selecting $K$ are used, some tailored for a particular algorithm, some relatively generic as they can be adapted to a wide diversity of partitioning algorithms.
Here we discuss two generic strategies for choosing $K$, which we recommend for their simplicity and versatility. The reader is of course free to combine our autoinformation criterion with any other heuristics of their preference.
The first generic heuristic is the \emph{elbow criterion}~\cite{kodinariya2013review}, which in this case works as follows:
\begin{enumerate}
\item Choose an interval of interest for $K$ (which can be as large as $[1,N]$).
\item For each $K$ in this interval, find the partition into $K$ aggregation classes with highest autoinformation, with the heuristic in Algorithm~\ref{alg} (with $c_{\min} = c_{\max} = K$ and $\beta=0$).
\item Plot the highest autoinformation found for $K$ classes as an (increasing) function of $K$.
\item Look for elbows in the plot, i.e. values of $K$ that mark a break from a steady increase of autoinformation (for $\leq K$ classes) to a slower increase (for $\geq K$ classes).
\item Each elbow value for $K$ is deemed to represent a \emph{natural} value for $K$, beyond which the \emph{quality} of the reduced model increases at slower pace.
\end{enumerate}
This simple and intuitive method is effective in some circumstances but may prove too crude in others, and leave no clear conclusions.
One of the reasons is that $K$ alone is not always a good representation of the \emph{complexity} or \emph{size} of the aggregated model.
For example a split of 100 states into three classes of 50, 45 and 5 nodes may be interpreted as \emph{essentially} a split into two classes with a small ``correction'' in terms of a small third class.
A metric that takes into account such heterogeneity of classes is the entropy $H$ of the partition.
Optimising the autoinformation with a constraint on the maximal allowed partition entropy $H$ (in lieu of $K$) is equivalent to the regularised autoinformation, with $\beta$ as a Lagrange multiplier.
The choice of the number of classes is now replaced with the choice of a parameter $\beta$, controlling the trade-off between high autoinformation and low entropy of the aggregated model.
In this framework, a recommended heuristic is the \emph{plateau} or \emph{robustness} criterion~\cite{lambiotte2010multi,delmotte2011protein}, which checks for a plateau, i.e.\ in our case a large interval of $\beta$ over which the solution is robust, in the sense that it keeps the same number of clusters:
\begin{enumerate}
\item Choose a set $B=\{\beta_i\}$ of values of interest for the regularization parameter $\beta$ (which can be a discrete sampling of $[0,1]$).
\item For each value $\beta_i \in B$, find the partition with highest regularised autoinformation, with the heuristic in Algorithm~\ref{alg} (setting $c_{\min} = 1, c_{\max} = N$ and $\beta=\beta_i$).
\item Plot the number of classes $K$ found as a (decreasing) function of $\beta$. Typically this plot presents plateaux.
\item Look for plateaux in the plot, i.e.\ intervals of values of $\beta$ for which the algorithm finds the same partition, and thus a constant number of classes.
\item Each large plateau is deemed to represent a \emph{natural partition} of the graph, robust to the choice of $\beta$.
\end{enumerate}
Ultimately, the problem of selecting $K$ is ill-defined, as different users, faced with different applications, may wish for different trade-offs between the quality of the description and the complexity of the model.
These, and more advanced methodologies, are simply tools to help the user make an informed decision.
Fully automated model selection procedures also exist in some cases (e.g.\ modularity maximisation or Bayesian DCSBM inference), which implicitly internalise a certain choice strategy for the user.
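Both scans are easy to organise around any routine that maximizes the (regularized) autoinformation for fixed settings. In the sketch below, \texttt{maximize\_autoinformation(graph, T, beta, c\_min, c\_max)} is a placeholder for the heuristic of Algorithm~\ref{alg}, assumed to return the best label vector found together with its objective value; the placeholder and its signature are ours and are not part of the reference implementation.
\begin{verbatim}
import numpy as np

def elbow_scan(graph, T, k_values, maximize_autoinformation):
    """Best (unregularized) autoinformation as a function of the fixed number
    of classes K; elbows in this curve suggest natural values of K."""
    return {k: maximize_autoinformation(graph, T=T, beta=0.0,
                                        c_min=k, c_max=k)[1]
            for k in k_values}

def plateau_scan(graph, T, betas, n_states, maximize_autoinformation):
    """Number of classes of the best partition as a function of beta; wide
    plateaux indicate partitions that are robust to the choice of beta."""
    plateaus = {}
    for beta in betas:
        labels, _ = maximize_autoinformation(graph, T=T, beta=beta,
                                             c_min=1, c_max=n_states)
        plateaus[beta] = len(np.unique(labels))
    return plateaus
\end{verbatim}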
We now illustrate the elbow criterion and the plateau criterion on an example where a \emph{ground truth} structure is planted, and we show that both criteria are able to recover the planted structure.
\begin{figure}[htpb]
\centering
\includegraphics[width=0.8\linewidth]{plot_stoch_model_sel}
\caption{Model selection on range dependent networks.
Maximization of autoinformation is performed on the range dependent network composed by three planted partitions and with adjacency matrix depicted in (A).
Autoinformation maximized at fixed number of classes $k$ (B) shows an elbow when the number of classes corresponds to the planted partition.
Maximization of autoinformation at different values of $\beta$ (C) shows a plateau at the number of classes of the planted partition.
The graph is built with three classes of 80, 60 and 60 nodes respectively.
Diagonal blocks have $\gamma=0.8$ and $\alpha=0.9$; out of diagonal blocks have $\gamma=1$ and $\alpha=0.001$.
}%
\label{fig:modelselection}
\end{figure}
In Fig.~\ref{fig:modelselection} we maximize autoinformation in a range dependent network (see above for a description of the generative model).
The network is built with 200 nodes divided into one class of 80 nodes and two classes of 60.
We set $\gamma_{c_i, c_j}=\gamma=0.8$ and $\alpha_{c_i, c_j}=\alpha=0.9$ within classes ($c_i=c_j$), while $\gamma=1$ and $\alpha=\varepsilon \ll 1$ between classes.
Maximization of autoinformation for different fixed numbers $k$ of classes (see Fig.~\ref{fig:modelselection}B) shows an elbow in the value of autoinformation.
The elbow corresponds to the number of classes in the planted partition and becomes more prominent as $T$ increases.
Fitting a DCSBM with the fully automated criterion proposed in~\cite{peixotoPRL2013parsimonious} to the same graph results in a partition with 9 to 11 classes (consistent with our previous examples).
The results of using the complexity parameter $\beta$ to select the number of classes are shown in Fig.~\ref{fig:modelselection}C.
In this case a wide plateau is found when the partition that maximizes autoinformation has the planted number of classes, revealing a partitioning that is robust with respect to the choice of $\beta$.
Increasing the value of $T$ corresponds to a growth of the robustness plateaux and a shift to lower values of $\beta$.
\section{Algorithm}
We briefly describe in Algorithm~\ref{alg} the AutoInformation State-Aggregation algorithm (AISA) used to maximize the autoinformation within this paper.
The algorithm uses an approach inspired by the simulated annealing technique to sample the state space of all possible partitions.
A code with a reference implementation in \texttt{Python} is available as a \texttt{git} repository at the following hyperlink: \href{https://maurofaccin.github.io/aisa}{AISA (https://maurofaccin.github.io/aisa)}.
\begin{algorithm}[ht!]
\SetAlgoLined
\KwResult{Partition of the nodes}
Optional initialization with initial partition\;
$T \leftarrow$ time-scale parameter\\
$\beta \leftarrow$ regularization parameter (default is $0$)\\
$\mathcal T \leftarrow$ initial pseudo-temperature\\
$c_{\min}, c_{\max} \leftarrow$ class number bound (default is $[1,N]$)\\
\While{no accepted move for a max duration or max steps reached}{
$n_i \leftarrow \textrm{random node}$\\
$c \leftarrow \textrm{class of node $n_i$}$\\
$\hat c \leftarrow \textrm{random class}$\\
$\delta \leftarrow \mathcal I_{\beta, T}(n_i \in \hat c) - \mathcal I_{\beta, T}(n_i \in c)$
\eIf{$\delta > 0$}{
node $i$ moved to class $\hat c$
}{
$p_{\hat c \rightarrow c} \leftarrow$ probability of moving $n_i$ from $\hat c$ to $c$\\
$p_{c \rightarrow \hat c} \leftarrow$ probability of moving $n_i$ from $c$ to $\hat c$\\
$\textrm{prob} \leftarrow p_{\hat c \rightarrow c}/p_{c \rightarrow \hat c}$\\
$t \leftarrow e^\frac{\delta}{\mathcal T\cdot\textrm{prob}}$\\
$r \leftarrow \textrm{random number} \in [0, 1]$\\
\eIf{$r < t$}{
node $i$ moved to class $\hat c$
}{\textrm{nothing is done}}
}
$\mathcal T$ is decreased
}
\caption{Pseudocode used to optimize autoinformation}
\label{alg}
\end{algorithm}
With Algorithm~\ref{alg}, one can maximize the autoinformation while fixing the number of classes.
This is achieved by fixing the value of $\beta$ (defaults to $0$) and assigning the same value to the class number bounds ($c_{\min} = c_{\max} =k$).
In this way the moves that change the total number of classes are forbidden.
Similarly one may prefer to set the regularization parameter $\beta$ to some value and let the algorithm maximize the regularized autoinformation over a wide interval of values of $k$.
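For orientation, a stripped-down version of the annealing loop can be written as follows. We assume an \texttt{objective(labels)} function returning the regularized autoinformation $\mathcal I_{\beta,T}$ of a candidate label vector; for brevity this sketch proposes moves only into already existing classes and omits the proposal-probability correction of Algorithm~\ref{alg}, so it is an illustration rather than a substitute for the reference implementation.
\begin{verbatim}
import numpy as np

def anneal(labels, objective, n_steps=10000, temperature=1.0,
           cooling=0.999, seed=None):
    """Simplified single-node-move annealing for maximizing a partition
    objective such as the regularized autoinformation."""
    rng = np.random.default_rng(seed)
    labels = np.array(labels)
    current = objective(labels)
    for _ in range(n_steps):
        node = rng.integers(len(labels))
        old_class = labels[node]
        new_class = rng.choice(np.unique(labels))  # move into an existing class
        if new_class == old_class:
            continue
        labels[node] = new_class
        delta = objective(labels) - current
        if delta > 0 or rng.random() < np.exp(delta / temperature):
            current += delta                       # accept the move
        else:
            labels[node] = old_class               # reject and roll back
        temperature *= cooling                     # cool down
    return labels
\end{verbatim}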
\section{Related literature}\label{connection_to_previous_works}
In this section, we comment on similarities and differences to previous works that appeared in the literature, complementing the discussion on limiting cases above.
A number of information theoretic methods have been proposed for the analysis and compression of dynamical data generated by Markov processes.
Computational mechanics~\cite{Crutchfield1989,shalizi2001computational,CrutchfieldFeldman2003,kelly2012new} provides a framework to construct a minimal dynamical description of an observed stationary process in terms of an $\epsilon$-machine, i.e.\ the smallest system description that is still commensurate with an accurate description of the process.
This focus on predictability is similar to the approach presented here.
However, our goal is not to find an optimal state space representation of an arbitrary process in terms of predictability.
Instead, we are interested in the opposite direction: we start with a given Markov chain (and its state-space representation) and want to find an approximate description (a ``lossy compression'') of the dynamics.
The information bottleneck method~\cite{Tishby2000} is another information-theoretic approach; it finds a compressed (or quantized) representation of a random signal via a variational problem.
In contrast to our method, however, the information bottleneck was not designed with a random process in mind and requires choosing a relevance variable that captures the features one wants the compressed description to preserve.
As the states of a Markov process can be interpreted as nodes in a graph, with state transitions encoded by edges, any Markov process can be mapped to a network and vice versa.
Accordingly, such methods have also been employed in the analysis of complex networks~\cite{masuda2017random}.
The map equation framework~\cite{Rosvall29012008} by Rosvall et al.\ proposes to compress the one-step transition properties of random walks on networks under a specific coding scheme.
Similar to our work, finding the optimal compression within the map equation framework, in terms of the assignment of nodes to codewords, is associated with finding a partition of the nodes. However, the coding scheme used in the map equation effectively amounts to a mean-field description of the transition properties of the associated Markov chain~\cite{Schaub2012}.
As a consequence, only densely connected groups of nodes (assortative community structure) are identified by this scheme.
Another approach related to our work is the study by Peixoto and Rosvall~\cite{Peixoto2017}, which proposes a model for ``Markov chains with community structure'' and uses a Bayesian framework to fit the model.
While~\cite{Peixoto2017} also recovers a DCSBM as a special case of their method, their approach \emph{imposes} that the observed dynamics have a transition matrix of a block-diagonal form, i.e.\ their generative model posits \emph{a priori} a certain transition structure.
In contrast, we do not make any \emph{a priori} assumption on the transition matrix of the observed Markov chain, but show that the autoinformation on symmetric networks for $T=1$ is equivalent, up to trivial transformations, to the log-likelihood of the DCSBM\@.
Stated differently, under the specific assumption of reversible dynamics and a time horizon of $T=1$, both approaches lead to equivalent optimization problems; hence, for this special case, we can give alternative interpretations of the corresponding methods (cf.\ Fig.~\ref{fig:sameB}).
Finally, we remark that our framework exhibits certain parallels to the so-called Markov stability framework.
Just like Modularity~\cite{newman2004finding} can be dynamically interpreted as a sum of covariances between successive dynamical states of a random walker~\cite{Delvenne20072010,Schaub2014,Schaub2019}, here we have shown how the use of information theoretic measures can provide us with a dynamic interpretation of the DCSBM\@.
Interestingly, Newman has recently shown~\cite{Newman2016} that the objective function of Modularity can, with a specifically chosen resolution parameter, also be interpreted as the likelihood of a planted partition model, a particular type of assortative block model.
In contrast, the equivalence we showed here between the autoinformation for $T=1$ and the likelihood function of the DCSBM holds for general block models and is not limited to any specific structure.
See also the discussion on the short and long scale limits above.
\end{document}
| {
"timestamp": "2021-08-23T02:14:50",
"yymm": "2005",
"arxiv_id": "2005.00337",
"language": "en",
"url": "https://arxiv.org/abs/2005.00337",
"abstract": "We consider state-aggregation schemes for Markov chains from an information-theoretic perspective. Specifically, we consider aggregating the states of a Markov chain such that the mutual information of the aggregated states separated by T time steps is maximized. We show that for T = 1 this approach recovers the maximum-likelihood estimator of the degree-corrected stochastic block model as a particular case, thereby enabling us to explain certain features of the likelihood landscape of this popular generative network model from a dynamical lens. We further highlight how we can uncover coherent, long-range dynamical modules for which considering a time-scale T >> 1 is essential, using synthetic flows and real-world ocean currents, where we are able to recover the fundamental features of the surface currents of the oceans.",
"subjects": "Physics and Society (physics.soc-ph); Applied Physics (physics.app-ph); Data Analysis, Statistics and Probability (physics.data-an)",
"title": "State aggregations in Markov chains and block models of networks",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9719924777713886,
"lm_q2_score": 0.7279754548076477,
"lm_q1q2_score": 0.707586666075239
} |
https://arxiv.org/abs/1809.05118 | Rigidity of weighted composition operators on $H^p$ | We show that every non-compact weighted composition operator $f \mapsto u\cdot (f\circ\phi)$ acting on a Hardy space $H^p$ for $1 \leq p < \infty$ fixes an isomorphic copy of the sequence space $\ell^p$ and therefore fails to be strictly singular. We also characterize those weighted composition operators on $H^p$ which fix a copy of the Hilbert space $\ell^2$. These results extend earlier ones obtained for unweighted composition operators. | \section{Introduction and main results}
Let $\mathbb D$ be the open unit disc in the complex plane $\mathbb C$ and fix analytic
maps $u\colon \mathbb D \to \mathbb C$ and $\phi\colon \mathbb D \to \mathbb D$. The
weighted composition operator $uC_\phi$ is defined by
\[
(uC_\phi) f = u \cdot (f\circ\phi)
\]
for $f\colon \mathbb D \to \mathbb C$ analytic. Boundedness and compactness
properties of such operators acting on the classical Hardy spaces
$H^p$ were characterized in terms of Carleson measures in
\cite{CHD01,CHD03} (see also \cite{CZ}). An obvious necessary
condition for the boundedness of $uC_\phi\colon H^p \to H^q$
is that $u = (uC_\phi)1 \in H^q$.
The purpose of this work is to study
the qualitative properties of non-compact weighted composition
operators on the Hardy spaces
$H^p$, extending the results obtained in \cite{LNST} for unweighted
composition operators. It turns out that the weighted composition
operators exhibit the exact same rigidity phenomena as the unweighted
ones. We also refer the reader to the recent parallel work \cite{MNST}
in the context of Volterra-type integral operators, where some of
our ideas originate from.
Recall that if $X$ is a Banach space
and $T\colon X \to X$ is a linear operator, then $T$ is called
\emph{strictly singular} if the restriction of $T$ to any
infinite-dimensional subspace of $X$ is not an isomorphism
(equivalently, it is not bounded below).
Our first result is a generalization of \cite[Thm~1.2]{LNST}
and shows, in particular, that the notions of compactness and strict
singularity
coincide for weighted composition operators on $H^p$. Here we employ
the usual test functions
\[
g_a(z) = \frac{(1-|a|^2)^{1/p}}{(1-\bar{a}z)^{2/p}},
\qquad z \in \mathbb D,
\]
where $a \in \mathbb D$. They always satisfy $\|g_a\|_{H^p} = 1$.
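For completeness, the normalization can be checked directly: $|g_a|^p$ on $\mathbb T$ is the Poisson kernel at $a$, whose integral against $m$ equals one, so that
\[
\|g_a\|_{H^p}^p = \int_{\mathbb T} \frac{1-|a|^2}{|1-\bar{a}\zeta|^2}\,dm(\zeta) = 1 .
\]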
\begin{theorem} \label{thm:ellp}
Let $1 \leq p < \infty$ and suppose that $uC_\phi$ is bounded
and non-compact $H^p \to H^p$. Then $uC_\phi$ fixes an isomorphic copy
of $\ell^p$ in $H^p$. More precisely, there exists a sequence
$(a_n)$ in $\mathbb D$ such that $(g_{a_n})$ is equivalent to the natural
basis of $\ell^p$ and $uC_\phi$ is bounded below on the closed
linear span of $(g_{a_n})$.
\end{theorem}
We next determine under which conditions a weighted composition
operator on $H^p$ with $p\neq 2$ fixes a copy of the Hilbert space
$\ell^2$. In the unweighted case (see \cite[Thm~1.4]{LNST}) this
is the case precisely when the boundary contact set
\[
E_\phi = \{\zeta \in\mathbb T : |\phi(\zeta)|=1\}
\]
has positive measure. It turns out that this result holds in the
weighted case as well. The first half is established in the
following theorem, where we also allow for the possibility that
the target space of the operator is a larger Hardy space than the
domain.
\begin{theorem}\label{thm:ell2}
Let $1 \leq q \leq p < \infty$ and suppose that $uC_\phi$ is bounded
$H^p \to H^q$. If $u \neq 0$ and $m(E_\phi) > 0$, then $uC_\phi$
fixes an isomorphic copy of $\ell^2$ in $H^p$.
\end{theorem}
In the converse direction we have the following result.
\begin{theorem}\label{thm:ell2conv}
Let $1 \leq p < \infty$ and suppose that $uC_\phi$ is bounded
$H^p \to H^p$ with $m(E_\phi) = 0$. If $uC_\phi$ is bounded below on
an infinite-dimensional subspace $M \subset H^p$, then $M$ contains
an isomorphic copy of $\ell^p$. In particular, if $p \neq 2$,
then $uC_\phi$ does not fix any isomorphic copies of $\ell^2$ in
$H^p$.
\end{theorem}
The last statement of the theorem is due to the fact that
$\ell^p$ and $\ell^2$ are totally incomparable spaces for $p \neq 2$.
\section{Proofs}
Towards the proof of Theorem~\ref{thm:ellp} we first state the
following lemma.
\begin{lemma}\label{le:Hump}
Let $u \in H^p$ and $\phi\colon \mathbb D \to \mathbb D$ be analytic. For
$\epsilon > 0$, define
$$F_\epsilon = \{\zeta\in\mathbb T: |\phi(\zeta)-1| < \epsilon\}.$$ Then
\begin{gather*}
\lim_{a\to 1} \int_{\mathbb T\setminus F_\epsilon}
|(uC_\phi)g_a|^p\,dm = 0 \quad\text{for each $\epsilon > 0$,} \\
\lim_{\epsilon\to 0} \int_{F_\epsilon}
|(uC_\phi)g_a|^p\,dm = 0 \quad\text{for each $a \in \mathbb D$.} \\
\end{gather*}
\end{lemma}
\begin{proof}
Let $\epsilon > 0$ be fixed and consider
$\zeta \in \mathbb T\setminus F_\epsilon$ for which the radial limit
$\phi(\zeta)$ exists. Then, if $|a-1| < \epsilon/2$, we have
\[
|1-\bar{a}\phi(\zeta)| \geq |1-\phi(\zeta)| - |1-a| > \epsilon/2,
\]
and so
\[
\int_{\mathbb T\setminus F_\epsilon} |(uC_\phi)g_a|^p\,dm
\leq (1-|a|^2) \int_{\mathbb T\setminus F_\epsilon}
\frac{|u|^p}{|1-\bar{a}\phi|^2}\,dm
\leq \frac{4(1-|a|^2)}{\epsilon^2} \|u\|_{H^p}^p.
\]
Since this tends to $0$ as $a \to 1$, we obtain the first part of the
lemma.
The second part follows from the absolute continuity of the measure
$F \mapsto \int_F |(uC_\phi)g_a|^p\,dm$ and the fact that
$m(F_\epsilon) \to m(\{\zeta\in\mathbb T: \phi(\zeta)=1\}) = 0$ as
$\epsilon \to 0$. Note that $g_a \in H^\infty$ and hence
$(uC_\phi)g_a \in L^p(\mathbb T,m)$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:ellp}]
Since $uC_\phi$ is non-compact, we may find a sequence $(a_n)$ in
$\mathbb D$ such that $|a_n| \to 1$ and
$\|(uC_\phi)g_{a_n}\|_{H^p} \geq c > 0$ for all $n$. This is a
consequence of the compactness characterization of $uC_\phi$ in
terms of vanishing Carleson measures; see \cite[Theorem~3.5]{CHD01}.
By passing to a convergent subsequence of $(a_n)$ and utilizing a
suitable rotation, we may assume that $a_n \to 1$.
We now proceed exactly as in the unweighted case (see the proof of
Theorem~1.2 in \cite{LNST} for the details of the following
argument). First, by invoking
Lemma~\ref{le:Hump} repeatedly, we may extract a subsequence of
$(a_n)$, still denoted by $(a_n)$, such that the image sequence
$((uC_\phi)g_{a_n})$ in $H^p$ is equivalent to the standard basis
of $\ell^p$, that is,
\[
\biggl\|\sum_{n=1}^\infty \alpha_n (uC_\phi)g_{a_n} \biggr\|_{H^p}
\sim \|(\alpha_n)\|_p
\quad\text{for $(\alpha_n) \in \ell^p$.}
\]
Then a second application of Lemma~\ref{le:Hump} to the functions
$g_{a_n}$ (taking $u = 1$ and $\phi(z) = z$) produces a further
subsequence of $(a_n)$, which we continue to denote by $(a_n)$,
such that also
\[
\biggl\|\sum_{n=1}^\infty \alpha_n g_{a_n} \biggr\|_{H^p}
\sim \|(\alpha_n)\|_p
\quad\text{for $(\alpha_n) \in \ell^p$.}
\]
By combining the preceding two norm estimates we see that
$uC_\phi$ restricts to a linear isomorphism on the closed linear
span of $(g_{a_n})$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:ell2}]
Since $m(E_\phi) > 0$, \cite[Prop.~3.2]{LNST} shows that there exists
a sequence of integers
$(n_k)$ satisfying $\inf_k (n_{k+1}/n_k) > 1$
and a constant $c_1 > 0$ such that
$\bigl\|\sum_k\alpha_k\phi^{n_k}\bigr\|_{H^1}
\geq c_1\|(\alpha_k)\|_2$ for all $(\alpha_k) \in \ell^2$.
Our goal is to prove a weighted version of this estimate,
that is, for some constant $c > 0$,
\begin{equation}\label{eq:weightedH1}
\biggl\| u \sum_k \alpha_k \phi^{n_k} \biggr\|_{H^1}
\geq c \|(\alpha_k)\|_2.
\end{equation}
Since Paley's theorem (see e.g.\ \cite[p.~104]{Duren}) implies that
the closed linear
span $M = [z^{n_k}: k \geq 1]$ in $H^p$ is isomorphic to
$\ell^2$, inequality \eqref{eq:weightedH1} implies that
$uC_\phi$ is an isomorphism from $M$ into $H^1$. This yields the
theorem because $\|f\|_{H^q} \geq \|f\|_{H^1}$ for all $f \in H^q$.
To establish \eqref{eq:weightedH1}, we first note that
since $u \neq 0$, we have $|u| > 0$ a.e.\ on $\mathbb T$. Thus, for
a given $\epsilon > 0$ there
exists a set $F \subset \mathbb T$ with
$m(\mathbb T\setminus F) < \epsilon$ such that
$|u| > c_2$ on $F$ for some constant $c_2 = c_2(\epsilon) > 0$.
Then, using Hölder's inequality and the boundedness of $C_\phi$
on $H^2$, we get
\[
\int_{\mathbb T\setminus F} \biggl| \sum_k \alpha_k\phi^{n_k} \biggr|\,dm
\leq \sqrt{m(\mathbb T\setminus F)}
\biggl\|\sum_k \alpha_k\phi^{n_k} \biggr\|_2
\leq \sqrt{\epsilon} \cdot c_3\|(\alpha_k)\|_2
\]
for some constant $c_3 > 0$. On combining these estimates we obtain
\[
\biggl\| u \sum_k \alpha_k \phi^{n_k} \biggr\|_{H^1}
\geq c_2 \int_F \biggl|\sum_k \alpha_k \phi^{n_k} \biggr|\,dm
\geq c_2 (c_1 - \sqrt{\epsilon}\cdot c_3) \|(\alpha_k)\|_2.
\]
In particular, choosing
$\epsilon = (c_1/2c_3)^2$ here proves \eqref{eq:weightedH1} with
$c = \frac{1}{2} c_2c_1$. This completes the proof of the theorem.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:ell2conv}]
Since $M$ is infinite-dimensional, there exists a sequence $(f_n)$
in $M$ such that $\|f_n\|_{H^p} = 1$ and $f_n \to 0$ uniformly on
compact subsets of $\mathbb D$; for instance, we can choose
$f_n$ so that $f_n(0) = f_n'(0) = \cdots = f_n^{(n)}(0) = 0$ for all $n$.
For each $k \geq 1$, define
$E_k = \{\zeta \in \mathbb T: |\phi(\zeta)| > 1-\tfrac{1}{k}\}$. We have
$m(E_k) \to m(E_\phi) = 0$ as $k \to \infty$ and therefore
\begin{equation}\label{eq:limk}
\lim_{k\to\infty} \int_{E_k} |(uC_\phi)f_n|^p\,dm = 0.
\end{equation}
On the other hand, since $f_n\circ\phi$ converges to zero uniformly
on $\mathbb T\setminus E_k$ as $n\to\infty$ and $u \in H^p$,
\begin{equation}\label{eq:limn}
\lim_{n\to\infty} \int_{\mathbb T\setminus E_k} |(uC_\phi)f_n|^p\,dm = 0.
\end{equation}
Since $uC_\phi$ is bounded below on $M$, we also have
$\|(uC_\phi)f_n\|_{H^p} \geq c$ for all $n$ and some constant
$c > 0$. Using a gliding hump argument based on a repeated
application of \eqref{eq:limk} and \eqref{eq:limn} (akin to
the proof of \cite[Prop.~3.3]{LNST} in the unweighted case),
we may extract a subsequence $(f_{n_j})$ such that the
sequence $\bigl((uC_\phi)f_{n_j}\bigr)$ is equivalent to the
standard basis of $\ell^p$. Since $uC_\phi$ is bounded below
on the closed linear span $[f_{n_j}: j \geq 1] \subset M$, we
conclude that $uC_\phi$ fixes a copy of $\ell^p$ in $M$.
\end{proof}
| {
"timestamp": "2018-09-17T02:00:46",
"yymm": "1809",
"arxiv_id": "1809.05118",
"language": "en",
"url": "https://arxiv.org/abs/1809.05118",
"abstract": "We show that every non-compact weighted composition operator $f \\mapsto u\\cdot (f\\circ\\phi)$ acting on a Hardy space $H^p$ for $1 \\leq p < \\infty$ fixes an isomorphic copy of the sequence space $\\ell^p$ and therefore fails to be strictly singular. We also characterize those weighted composition operators on $H^p$ which fix a copy of the Hilbert space $\\ell^2$. These results extend earlier ones obtained for unweighted composition operators.",
"subjects": "Functional Analysis (math.FA); Complex Variables (math.CV)",
"title": "Rigidity of weighted composition operators on $H^p$",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9719924769600771,
"lm_q2_score": 0.7279754548076478,
"lm_q1q2_score": 0.7075866654846242
} |
https://arxiv.org/abs/1703.00093 | Accurate gradient computations at interfaces using finite element methods | New finite element methods are proposed for elliptic interface problems in one and two dimensions. The main motivation is not only to get an accurate solution but also an accurate first order derivative at the interface (from each side). The key in 1D is to use the idea from \cite{wheeler1974galerkin}. For 2D interface problems, the idea is to introduce a small tube near the interface and introduce the gradient as part of unknowns, which is similar to a mixed finite element method, except only at the interface. Thus the computational cost is just slightly higher than the standard finite element method. We present rigorous one dimensional analysis, which show second order convergence order for both of the solution and the gradient in 1D. For two dimensional problems, we present numerical results and observe second order convergence for the solution, and super-convergence for the gradient at the interface. | \section{Introduction}
In this paper, we consider the following interface problems
\begin{equation} \label{nabla-pde ch2}
- \nabla \cdot (\beta(\mathbf{x}) \nabla u(\mathbf{x})) + q(\mathbf{x}) u(\mathbf{x}) = f(\mathbf{x}), \qquad \mathbf{x}\in \Omega\setminus \Gamma,
\end{equation}
in one and two dimensions.
We assume that there is a closed interface $\Gamma$ in the solution domain across
which the coefficient $\beta$ has a finite jump (discontinuity)
\begin{equation}\label{beta ch2}
\beta(\mathbf{x}) =
\begin{cases}
\beta_1 & \mbox{if $\mathbf{x} \in \Omega_1,$}\\
\beta_2 & \mbox{if $\mathbf{x} \in \Omega_2.$}
\end{cases}
\end{equation}
Because of the discontinuity,
the natural jump conditions should be satisfied,
that is, both the solution and the flux should be continuous across the interface $\Gamma$
\begin{equation}\label{flux ch2}
[u]_{\Gamma}=0, \qquad \left[\beta \frac{\partial u}{\partial n} \right]_{\Gamma} = 0,
\end{equation}
where the jump at a point $\mathbf{X}=(X,Y)$ on the interface $\Gamma$ is defined as
\begin{small}
\begin{equation*}
\left[\beta \frac{\partial u}{\partial n} \right]_{\mathbf{X}}
= \lim_{\mathbf{x} \rightarrow\mathbf{X}, \mathbf{x} \in \Omega_2} \beta(\mathbf{x}) \frac{\partial u(\mathbf{x})}{\partial n} - \lim_{\mathbf{x}\rightarrow \mathbf{X}, \mathbf{x}\in \Omega_1} \beta(\mathbf{x}) \frac{\partial u(\mathbf{x})}{\partial n},
\end{equation*}
\end{small}
and $u_n = \mathbf{n} \cdot \nabla u=\frac{\partial u}{\partial n}$ is the normal derivative of solution $u(\mathbf{X})$,
and $\mathbf{n}$ is the unit normal direction pointing to $\Omega_2$ side,
see Fig.~\ref{fig:domain ch2} for an illustration.
Since a finite element discretization is used,
we assume that $f(\mathbf{x})\in L^2(\Omega_i)$, $q(\mathbf{x})\in L^{\infty}(\Omega_i)$ excluding $\Gamma$.
For the regularity requirement of the problem,
we also assume that $\beta(\mathbf{x})\ge \beta_0 >0$ and $q(\mathbf{x})\ge 0$; $\Gamma\in C^1$.
From these assumptions,
we know that the solution $u(\mathbf{x})\in H^2(\Omega_i)$ for $i=1,2$.
\begin{figure}[!b]
\centering
\includegraphics[width=0.45\textwidth]{ch4_1.eps}
\caption{A diagram of domain $\Omega_1$, $\Omega_2$, an interface $\Gamma$, and the boundary $\partial\Omega$. }\label{fig:domain ch2}
\end{figure}
There are many applications of such an interface problem, see for example,
\cite{sutton1995interfaces, zienkiewicz2000finite, li2006immersed}
and the references therein.
Many numerical methods have been developed for solving such an important problem.
For the elliptic interface problem (\ref{nabla-pde ch2})-(\ref{flux ch2}),
the solution has a low global regularity.
Thus, a direct conforming finite element method based on polynomial basis functions over a mesh likely works poorly
if the mesh is not aligned with the interface, since the FE solution is smooth within each element
and cannot capture the discontinuity in the derivative at the interface.
Nevertheless, it is reasonable to assume that the solution is piecewise smooth excluding the interface.
For example, if the coefficient is a piecewise constant in each sub-domain,
then the solution in each sub-domain is an analytic function in the interior,
but has a jump in the normal derivative of the solution across the interface, by PDE theory \cite{kevorkian1990partial}.
The gradient used in this paper is defined as the \emph{limiting gradient} from each side of the interface.
Naturally, finite element methods can be and have been applied to solve interface problems.
It is well known that a second order accurate approximation to the solution
of an interface problem can be generated by the Galerkin finite element method
with the standard linear basis functions
if the triangulation is aligned with the interface,
that is, a body fitted mesh is used, see for example,
\cite{babuska1970finite, bramble1996finite, chen1998finite, xu1982error}.
Other state-of-the-art methods include the IGA-FEM
and the DPG (discontinuous Petrov-Galerkin) finite element method \cite{carstensena2014low}.
Some kind of post-processing technique, or at least quadratic elements, is needed
in order to get a second order accurate gradient from each side of the interface.
The cost of mesh generation, coupled with the need for an unstructured linear solver,
makes the body-fitted mesh approach less competitive.
Alternatively, we can use a structured mesh (non-body fitted) to solve such an interface problem.
There are also quite a few finite element methods using Cartesian meshes.
The immersed finite element method (IFEM) was developed for 1D and 2D interface problems
in \cite{li1998immersed} and \cite{li2003new}, respectively.
Since then, many IFEM methods and analysis have appeared in the literature,
see for example, \cite{chou2010optimal, he2010immersed},
with applications in \cite{lin2012linear, yang2003immersed}.
Often they provide a second order accurate solution in the $L^2$ norm but only a first order accurate flux.
Nevertheless, in many applications the primary interest
is focused on flux values at interfaces
in addition to solutions of governing differential equations,
see for example, \cite{chou2012immersed, douglas1974galerkin, wheeler1974galerkin}.
Most numerical methods for interface problems based on structured meshes
are between first and second order accurate for the solution,
but the accuracy of the gradient is usually one order lower.
Note that gradient recovery techniques, for example \cite{wahlbin1995superconvergence, zhang2005new},
hardly work for structured meshes because of the arbitrary relative position of the interface and the underlying mesh.
The mixed finite element approach, and a few other methods
that can find accurate solutions and gradients simultaneously
in the entire domain, often lead to saddle-point problems
and are computationally expensive,
so they are not ideal choices
if we are only interested in an accurate gradient near an interface or a boundary.
In this paper, we develop two new finite element methods, one in 1D and one in 2D,
for obtaining accurate approximations of the flux values at interfaces.
The rest of the paper is organized as follows.
In Section 2,
we explain the one dimensional algorithm and provide the theoretical analysis.
We explain how to construct approximations to the flux values at the left and the right of the interface,
and approximations to the flux values at the boundary of the domain.
The numerical algorithm for two dimensional problems is explained in Section 3 along with some numerical experiments.
We conclude and acknowledge in Section 4.
\section{One-dimensional algorithm and analysis}
The one dimensional model problem has the following form,
\begin{equation}\label{sol ch2}
\begin{split}
& -\left(\beta\left(x\right) u^{'}(x)\right)^{'}+q\left(x\right) u\left(x\right)=f\left(x\right),\\
&\qquad \qquad u\left(0\right)=0,\qquad u\left(1\right)=0,
\end{split}
\end{equation}
where $0<x<1$,
$\beta$ is a piecewise constant with a finite jump at an interface $0<\alpha<1$,
and homogeneous boundary conditions are used for simplicity of the discussion.
Across the interface, the natural jump conditions are:
\begin{equation} \label{jumpcond ch2}
\left[ u\right]_{\alpha} =0, \qquad \left[\beta u^{'}(x)\right]_{\alpha}=0.
\end{equation}
We define the standard bilinear form,
\begin{equation*}
a\left(u,v\right)=\int_{0}^{1}
\left(\beta\left(x\right)u^{'}\left(x\right)v^{'}\left(x\right)+q\left(x\right)u\left(x\right)v\left(x\right)\right) dx,\quad\forall~ u\left(x\right), v\left(x\right) \in H^{1}_{0}\left(0,1\right),
\end{equation*}
where $H^{1}_{0}\left(0,1\right)$ is the Sobolev space
\cite{brenner2007mathematical, adams2003sobolev, tartar2007introduction}.
\begin{equation*}
H^{1}_{0}\left(0,1\right)=\{v\left(x\right) \in H^{1} \left(0,1\right)~~\mbox{and}~~v(0)=v(1)=0 \}.
\end{equation*}
The solution of the differential equation $u\left(x\right)\in H^{1}_{0}\left(0,1\right)$
is also the solution of the following variational problem:
\begin{equation}\label{varform ch2}
a\left(u,v\right)=\left(f,v\right)
=\int_{0}^{1} f\left(x\right)v\left(x\right) dx, \qquad \forall~ v\left(x\right) \in H^{1}_{0}\left(0,1\right).
\end{equation}
Integration by parts over the separated intervals $ \left(0,\alpha\right)$ and $\left(\alpha,1\right)$ yields,
\begin{equation*}\label{}
0=\int_{0}^{\alpha}\left\{-\left(\beta u^{'}\right)^{'}+qu-f\right\}vdx+\beta_{1}u^{-}_{x}v^{-}+\int_{\alpha}^{1}\left\{-\left(\beta u^{'}\right)^{'}+qu-f\right\}vdx-\beta_{2}u^{+}_{x}v^{+}.
\end{equation*}
The superscripts $-$ and $+$ indicate the limiting value as $x$ approaches $\alpha$ from the left and right, respectively, and $u_{x}=u^{'}$. Recalling that $v^{-}=v^{+}$ for any $v$ in $H^{1}_{0}$, it follows that the differential equation holds in each interval and that
\begin{equation*}\label{}
\left[u\right]=u^{+}-u^{-}=0,\quad\left[\beta u_{x}\right]=\beta^{+}u^{+}_{x}-\beta^{-}u^{-}_{x}=0,
\end{equation*}
where we have dropped the subscript $\alpha$ in the jumps for simplicity.
These relations are the same as in $\left(\ref{sol ch2}\right)$,
which indicates that the discontinuity in the coefficient $\beta\left(x\right)$
does not cause any additional difficulty for the theoretical analysis of the finite element method.
The weak solution will satisfy the jump conditions (\ref{jumpcond ch2}).
\subsection{The immersed finite element method in 1D}
Now we briefly explain the immersed finite element method (IFEM) in 1D introduced in \cite{li1998immersed}.
As in the IFEM, we use a uniform grid, i.e., $x_{i}=ih, i=0, 1,\cdots, n, $
and assume that $\alpha \in \left(x_{j}, x_{j+1}\right)$.
Since it is a one-dimensional problem,
we use $\beta^+$ for $\beta_2$, $\beta^-$ for $\beta_1$ and so on.
If the interface does not cut an interval $(x_i,x_{i+1})$,
then we use the standard piecewise linear basis functions, \emph{i.e.},
the hat functions $\varphi_i(x)$, $i=1,2, \cdots$, for $i\neq j$ and $i\neq j+1$.
For $x_{j}$ and $x_{j+1}$, the associated piecewise linear basis functions $\phi_j(x)$, $\phi_{j+1}(x)$ are modified.
For example, $\phi_j(x)$ is defined as a piecewise linear function that satisfies
\begin{equation*}\label{}
\varphi_j(x_j) = 1, \quad \varphi_j(x_i)=0 ~(i\neq j), \quad \left[\varphi_{j}\right]=0,\quad\left[\beta \varphi^{'}_{j}\right]=0.
\end{equation*}
It has been derived in \cite{li2006immersed} that
\begin{equation*}\label{}
\varphi_{j}\left(x\right)=
\begin{cases}
~0 ,\qquad \qquad \qquad~~~~~0\leq x <x_{j-1},\\
~\frac{x-x_{j-1}}{h} ,\qquad \quad~~~~~~x_{j-1}\leq x < x_{j},\\
~ \frac{x_{j}-x}{D}+1 ,\qquad ~~~~~~~x_{j}\leq x < \alpha,\\
~ \frac{\rho \left(x_{j+1}-x\right)}{D} ,\qquad ~~~~~~\alpha\leq x < x_{j+1},\\
~0 ,\qquad \qquad \qquad~~~~~x_{j+1}\leq x \leq 1,
\end{cases}
\end{equation*}
where
\begin{equation*}\label{}
\rho =\frac{\beta_{1}}{\beta_{2}},\quad D=h-\frac{\beta_{2}-\beta_{1}}{\beta_{2}}\left(x_{j+1}-\alpha\right),
\end{equation*}
and
\begin{equation*}\label{}
\varphi_{j+1}\left(x\right)=
\begin{cases}
~0 ,\qquad \qquad \qquad~~~~~0\leq x <x_{j},\\
~\frac{x-x_{j}}{D} ,\qquad \quad\quad~~~~~~x_{j}\leq x < \alpha,\\
~ \frac{\rho \left(x-x_{j+1}\right)}{D}+1 , ~~~~~~\alpha\leq x < x_{j+1},\\
~ \frac{x_{j+2}-x}{h} ,\qquad \quad~~~~~~x_{j+1}\leq x < x_{j+2},\\
~0 ,\qquad \qquad \qquad~~~~~x_{j+2}\leq x \leq 1.
\end{cases}
\end{equation*}
Let $V_{h, \left(0, 1\right)}\triangleq$ Span $\left\{\varphi_{i}\right\}^{n-1}_{i=1}$
be the immersed finite element space for approximating $u$.
We propose the following finite element formulation for problem
$\left(\ref{sol ch2}\right)$:
find $u_{h} \in V_{h, \left(0, 1\right)} \subset H^{1}_{0}\left(0,1\right)$ such that
\begin{equation}\label{c ch2}
a\left(u_{h},v_{h}\right)=\left(f,v_{h}\right),\qquad\forall~ v_{h} \in V_{h, \left(0, 1\right)}.
\end{equation}
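For illustration only (this is not the authors' code), a minimal Python sketch of the 1D IFEM assembly and solve for the case $q\equiv 0$ with homogeneous Dirichlet data is given below; regular elements use the standard hat functions, the interface element uses the modified basis above, and a simple quadrature is used for the load.
\begin{verbatim}
import numpy as np

def solve_ifem_1d(n, alpha, beta1, beta2, f):
    """1D immersed FEM for -(beta u')' = f on (0,1), u(0) = u(1) = 0,
    with beta = beta1 on (0, alpha) and beta = beta2 on (alpha, 1)."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    j = int(np.floor(alpha / h))        # interface element is [x_j, x_{j+1}]
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    for e in range(n):
        xl, xr = x[e], x[e + 1]
        if e != j:                      # regular element: standard hat functions
            beta = beta1 if xr <= alpha else beta2
            Ke = (beta / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
            be = 0.5 * h * np.array([f(xl), f(xr)])           # trapezoidal rule
        else:                           # interface element: modified IFE basis
            rho = beta1 / beta2
            D = h - (beta2 - beta1) / beta2 * (xr - alpha)
            sL = np.array([-1.0 / D, 1.0 / D])                # slopes on [x_j, alpha]
            sR = np.array([-rho / D, rho / D])                # slopes on [alpha, x_{j+1}]
            Ke = (beta1 * (alpha - xl) * np.outer(sL, sL)
                  + beta2 * (xr - alpha) * np.outer(sR, sR))
            mL, mR = 0.5 * (xl + alpha), 0.5 * (alpha + xr)   # midpoints of the two pieces
            phiL = np.array([(xl - mL) / D + 1.0, (mL - xl) / D])
            phiR = np.array([rho * (xr - mR) / D, rho * (mR - xr) / D + 1.0])
            be = (alpha - xl) * f(mL) * phiL + (xr - alpha) * f(mR) * phiR
        A[e:e + 2, e:e + 2] += Ke
        b[e:e + 2] += be
    A, b = A[1:-1, 1:-1], b[1:-1]       # homogeneous Dirichlet boundary conditions
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, b)
    return x, u
\end{verbatim}
Note that the 1D numerical example later in this section uses a nonzero Dirichlet value at $x=1$, which would require the usual lifting of the boundary data in this sketch.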
\subsection{Error analysis for 1D IFEM}
Some error analysis of 1D IFEM has been given in \cite{li1998immersed, li2006immersed}.
Here we provide a somewhat different and more traditional analysis.
As usual, we study the approximation property of the IFE space $V_{h, \left(0, 1\right)}$
so that we can bound the error of the finite element solution using that of the interpolation function.
Assuming that $x_{j}\leq \alpha <x_{j+1}$,
we define the interpolation operator $\pi_{h}: H^{1}_{0}\left(0,1\right)\longrightarrow V_{h, \left(0, 1\right)}$ as follows:
\begin{equation}\label{int ch2}
\pi_{h} u\left(x\right)=
\begin{cases}
~\frac{x_{i+1}-x}{h}u\left(x_{i}\right)+\frac{x-x_{i}}{h}u\left(x_{i+1}\right), \quad x_{i}< x < x_{i+1},~i\neq j,\\
~\kappa\left(x-x_{j}\right)+u\left(x_{j}\right),\qquad\qquad ~x_{j}< x\leq \alpha,\\
~\kappa \frac{\beta_{1}}{\beta_{2}}\left(x-x_{j+1}\right)+u\left(x_{j+1}\right), \quad \alpha\leq x <x_{j+1},
\end{cases}
\end{equation}
where
\begin{equation*}\label{}
\kappa=\frac{\beta_{2}\left(u\left(x_{j+1}\right)-u\left(x_{j}\right)\right)}{\beta_{2}
\left(\alpha-x_{j}\right)-\beta_{1}\left(\alpha-x_{j+1}\right)}.
\end{equation*}
It is easy to verify that $\pi_{h} u\left(x_{i}\right)=u\left(x_{i}\right), i=0, 1,\cdots, n$, $\left[\pi_{h} u\right]=0$, and $\left[\beta\pi_{h} u^{'}\right]=0$.
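For instance, the flux continuity follows immediately from the one-sided derivatives of $\pi_h u$ at $\alpha$:
\begin{equation*}
\beta_{1}\,\left(\pi_{h}u\right)^{'}\left(\alpha^{-}\right)=\beta_{1}\kappa
=\beta_{2}\cdot\kappa\frac{\beta_{1}}{\beta_{2}}
=\beta_{2}\,\left(\pi_{h}u\right)^{'}\left(\alpha^{+}\right),
\qquad\mbox{so that }\left[\beta \left(\pi_{h} u\right)^{'}\right]=0 .
\end{equation*}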
Now we turn to the estimation of $\left\|u-\pi_{h} u\right\|_{1}$.
Here, we define $E_{i}=\left[x_{i}, x_{i+1}\right]$, $i\neq j$, and $I=\left[x_{j}, x_{j+1}\right]$.
For a regular element, the classical finite element interpolation estimate gives
\begin{equation}\label{pp ch2}
\left\|u-\pi_{h} u\right\|_{1, E_{i}}\leq ch\left\|u\right\|_{2, E_{i}}.
\end{equation}
We are going to focus on the error analysis for the element which contains the interface.
We first define
\begin{small}
\begin{equation*}
\widetilde{H}^{2}\left(0, 1\right)=\left\{v\in H^{1}_{0}\left(0, 1\right):
~v\in H^{2}\left(0, \alpha\right),~v\in H^{2}\left(\alpha, 1\right)\right\},
\end{equation*}
\end{small}
equipped with the norm and the semi-norm,
\begin{equation*}
\|u\|^{2}_{\widetilde{H}^{2}\left(0, 1\right)} \triangleq \|u\|^{2}_{H^{2}\left(0, \alpha\right)}+\|u\|^{2}_{H^{2}\left(\alpha, 1\right)} \end{equation*}
and
\begin{equation*}
|u|^{2}_{\widetilde{H}^{2}\left(0, 1\right)} \triangleq |u|^{2}_{H^{2}\left(0, \alpha\right)}+|u|^{2}_{H^{2}\left(\alpha, 1\right)}.
\end{equation*}
Then we have the following error estimate for the one-sided derivative approximation $\kappa\approx u_{x}^{-}(\alpha)$ from the left.
\begin{lemma}{}
If $u\left(x\right)$ is the solution of $\left(\ref{sol ch2}\right)$, the following inequality holds:
\begin{equation}\label{lem ch2}
\begin{split}
&\left|u_{x}^{-}\left(\alpha\right)-\kappa\right|\leq ch^{\frac{1}{2}}\left\|u\right\|_{2,I},\\
&\left|u_{x}^{+}\left(\alpha\right)-\kappa\rho\right|\leq ch^{\frac{1}{2}}\left\|u\right\|_{2,I}.
\end{split}
\end{equation}
\end{lemma}
\begin{proof}
Using the Taylor expansion at $\alpha$ and the jump conditions $\left(i.e., \left(\ref{jumpcond ch2}\right)\right)$, we have
\begin{eqnarray*}
&&\left|u_{x}^{-}\left(\alpha\right)-\kappa\right|=\left|u_{x}^{-}\left(\alpha\right)-\frac{\beta_{2}\left(u\left(x_{j+1}\right)-u\left(x_{j}\right)\right)}{\beta_{2}\left(\alpha-x_{j}\right)-\beta_{1}\left(\alpha-x_{j+1}\right)}\right|\\
&&~~~~\quad~\qquad~=\bigg|u_{x}^{-}\left(\alpha\right)-\frac{\beta_{2}\left\{u^{+}\left(\alpha\right)+u^{+}_{x}\left(\alpha\right)\left(x_{j+1}-\alpha\right)+\int_\alpha^{x_{j+1}} u^{''}\left(t\right)\left(x_{j+1}-t\right)dt\right\}}{\beta_{2}\left(\alpha-x_{j}\right)-\beta_{1}\left(\alpha-x_{j+1}\right)}\\
&&~~\qquad\qquad\quad~~~+\frac{\beta_{2}\left\{u^{-}\left(\alpha\right)+u^{-}_{x}\left(\alpha\right)\left(x_{j}-\alpha\right)+\int_\alpha^{x_{j}}u^{''}\left(t\right)\left(x_{j}-t\right)dt\right\}}{\beta_{2}\left(\alpha-x_{j}\right)-\beta_{1}\left(\alpha-x_{j+1}\right)}\bigg|\\
&&~~~\quad~\qquad~~=\left|\frac{\beta_{2}\left[\int_\alpha^{x_{j+1}} u^{''}\left(t\right)\left(x_{j+1}-t\right)dt-\int_\alpha^{x_{j}} u^{''}\left(t\right)\left(x_{j}-t\right)dt\right]}{\beta_{2}\left(\alpha-x_{j}\right)-\beta_{1}\left(\alpha-x_{j+1}\right)}\right|
\\
&&~~~\quad~\qquad~~\leq ch^{\frac{1}{2}}\left\|u\right\|_{2,I}, \qquad\qquad\qquad~~~~\qquad(\rm{Cauchy-Schwarz ~~ Inequality})
\end{eqnarray*}
where $c$ is a positive constant depending only on the coefficients $\beta$, $q(x)$.
This completes the proof of the lemma.
\end{proof}
The lemma gives rough estimates of the one-sided first order derivatives of the interpolation function
at the interface, with an $O(\sqrt{h})$ convergence order,
compared with the $O(h)$ error of the interpolation function in the $H^1(0,1)$ norm.
Later on, we will explain our method to get a second order accurate derivative
from each side of the interface.
In a similar way, we can prove that
\begin{equation*}
\left|u_{x}^{+}\left(\alpha\right)-\kappa\rho\right|\leq ch^{\frac{1}{2}}\left\|u\right\|_{2,I}.
\end{equation*}
\subsection{Convergence analysis of 1D IFEM}
Although some error analysis is given in \cite{li1998immersed},
below we provide some different,
more traditional finite element analysis with some results
that are useful for accurate gradient computations at the interface.
First we prove the following theorem on the accuracy of the interpolating function~$\pi_{h} u$.
\begin{theorem}{}
If $u\left(x\right)$ is the solution of $\left(\ref{sol ch2}\right)$,
and $\pi_{h}u\left(x\right)$ is the interpolation function defined in $\left(\ref{int ch2}\right)$, then
\begin{equation}\label{theo ch2}
\left\|u-\pi_{h} u\right\|_{1, I}\leq ch\left\|u\right\|_{2,I},
\end{equation}
where $c$ is a positive constant depending on the interface location,
the coefficients $\beta$, and $q(x)$.
\end{theorem}
\begin{proof}
If $x_{j}\leq x\leq \alpha$,
then using the Taylor expansion, we have
\begin{eqnarray*}
&&\qquad\qquad~~u(x)=u\left(x_{j}\right)+u^{'}\left(x_{j}\right)\left(x-x_{j}\right)+\int_{x_{j}}^x u^{''}\left(t\right)\left(x-t\right)dt\\
&&~~~\qquad\qquad\quad~~=u\left(x_{j}\right)+\left[u^{-}_{x}\left(\alpha\right)+\int_{\alpha}^{x_{j}} u^{''}\left(x\right)dx\right]\left(x-x_{j}\right)+\int_{x_{j}}^x u^{''}\left(t\right)\left(x-t\right)dt\\
&&~~~\qquad\qquad\quad~~=u\left(x_{j}\right)+u^{-}_{x}\left(\alpha\right)\left(x-x_{j}\right)+\int_{\alpha}^{x_{j}} u^{''}\left(x\right)dx\left(x-x_{j}\right)+\int_{x_{j}}^x u^{''}\left(t\right)\left(x-t\right)dt.
\end{eqnarray*}
By $\left(\ref{lem ch2}\right)$ and the Cauchy-Schwarz inequality, we get
\begin{eqnarray*}
&&\left|u(x)-\pi_{h} u\left(x\right)\right|=\left|\left(u_{x}^{-}\left(\alpha\right)-\kappa\right)\left(x-x_{j}\right)+\int_{\alpha}^{x_{j}} u^{''}\left(x\right)dx\left(x-x_{j}\right)+\int_{x_{j}}^{x} u^{''}\left(t\right)\left(x-t\right)dt\right|\\
&&~~~\qquad\qquad\quad~~\leq c h^{\frac{3}{2}}\left\|u\right\|_{2,I}
\end{eqnarray*}
and furthermore
\begin{eqnarray*}
&&\left|\left(u\left(x\right)-\pi_{h} u\left(x\right)\right)^{'}\right|=\left|u_{x}^{-}\left(\alpha\right)-\kappa+\int_{\alpha}^{x_{j}}u^{''}\left(x\right)dx+\int_{x_{j}}^{x}u^{''}\left(t\right)dt\right|\\
&&\quad~~~~\qquad\qquad\quad~~\leq ch^{\frac{1}{2}}\left\|u\right\|_{2,I}.
\end{eqnarray*}
If $\alpha \leq x\leq x_{j+1}$, the proof is similar. Thus we also have,
\begin{equation*}
\left|u(x)-\pi_{h} u\left(x\right)\right|\leq c h^{\frac{3}{2}}\left\|u\right\|_{2,I}, \qquad\left|\left(u\left(x\right)-\pi_{h} u\left(x\right)\right)^{'}\right|\leq ch^{\frac{1}{2}}\left\|u\right\|_{2,I}.
\end{equation*}
We proceed with the remaining proof below
\begin{eqnarray*}
&&\left\|u\left(x\right)-\pi_{h} u\left(x\right)\right\|_{0, I}=\left(\int_{x_{j}}^{\alpha}\left(u\left(x\right)-\pi_{h} u\left(x\right)\right)^{2}dx+\int_{\alpha}^{x_{j+1}}\left(u\left(x\right)-\pi_{h} u\left(x\right)\right)^{2}dx\right)^{\frac{1}{2}}\\
&&~~\quad~~~\qquad\qquad\quad~~\leq c\left(\int_{x_{j}}^{\alpha}h^{3}\left\|u\right\|_{2,I}^{2}dx+\int_{\alpha}^{x_{j+1}}h^{3}\left\|u\right\|_{2,I}^{2}dx\right)^{\frac{1}{2}} \\
&&~~\quad~~~\qquad\qquad\quad~~\leq ch^{2}\left\|u\right\|_{2,I},
\end{eqnarray*}
in $L^2$ and we continue to $H^1$,
\begin{eqnarray*}
&&\left|u\left(x\right)-\pi_{h} u\left(x\right)\right|_{1, I}=\left(\int_{x_{j}}^{\alpha}\left[\left(u\left(x\right)-\pi_{h} u\left(x\right)\right)^{'}\right]^{2}dx+\int_{\alpha}^{x_{j+1}}\left[\left(u\left(x\right)-\pi_{h} u\left(x\right)\right)^{'}\right]^{2}dx\right)^{\frac{1}{2}}\\
&&~~~\qquad~~~~\qquad\quad~~\leq c h\left\|u\right\|_{2,I}.
\end{eqnarray*}
Combining all above to get,
\begin{eqnarray*}
&&\left\|u\left(x\right)-\pi_{h} u\left(x\right)\right\|_{1, I}=\left(\left\|u\left(x\right)-\pi_{h} u\left(x\right)\right\|_{0, I}^{2}+\left|u(x)-\pi_{h} u(x)\right|_{1, I}^{2}\right)^{\frac{1}{2}}\\
&&~~~\quad\qquad\qquad\quad~~~~\leq c h\left\|u\right\|_{2, I},
\end{eqnarray*}
which completes the proof.
\end{proof}
The following theorem states that the IFEM in 1D provides the same optimal convergence as the FEM for regular problems.
\begin{theorem}{}
If $u\left(x\right)$ is the solution of $\left(\ref{sol ch2}\right)$,
and $\pi_{h}u\left(x\right)$ is the interpolating function defined in $\left(\ref{int ch2}\right)$, then
\begin{equation}\label{error ch2}
\left\|u-\pi_{h} u\right\|_{1}\leq ch\left\|u\right\|_{2}.
\end{equation}
\end{theorem}
\begin{proof}
We use $\left(\ref{pp ch2}\right)$ and $\left(\ref{theo ch2}\right)$ to get
\begin{eqnarray*}
&&\left\|u\left(x\right)-\pi_{h} u\left(x\right)\right\|_{1}=\left(\int_{0}^{1}\left(u\left(x\right)-\pi_{h} u\left(x\right)\right)^{2}+\left(\left(u\left(x\right)-\pi_{h} u\left(x\right)\right)^{'}\right)^{2}dx\right)^{\frac{1}{2}}\\
&&\qquad\qquad\qquad~~~~~=\left(\sum_{E_{i}}\left\|u-\pi_{h} u\right\|_{1, E_{i}}^{2}+\left\|u-\pi_{h} u\right\|_{1, I}^{2}\right)^{\frac{1}{2}}\\
&&\qquad\qquad\qquad~~~~~\leq \left(\sum_{E_{i}} ch^{2}\left\|u\right\|_{2, E_{i}}^{2}+ch^{2}\left\|u\right\|_{2, I}^{2}\right)^{\frac{1}{2}}\\
&&\qquad\qquad\qquad~~~~~\leq ch\left(\sum_{E_{i}}\left\|u\right\|_{2, E_{i}}^{2}+\left\|u\right\|_{2, I}^{2}\right)^{\frac{1}{2}} \\
&&~~~\qquad\qquad\qquad~~\leq ch\left\|u\right\|_{2}.
\end{eqnarray*}
This completes the proof of the theorem.
\end{proof}
\subsection{An accurate flux computation at the left of the interface}
In this sub-section, we explain how to get an accurate flux, or
first order derivative of the solution, at the interface from the left side.
The method is based on the approach proposed in \cite{wheeler1974galerkin}
for flux computations at boundaries.
The method is based on the use of the Galerkin solution of the problem,
and it is different from a posteriori post-processing or recovery techniques.
We define the following $\Gamma_{\alpha}^{-}$
\begin{equation}\label{f ch2}
\Gamma_{\alpha}^{-}\triangleq\frac{1}{\alpha}\{(\beta u_{h}^{'}, 1)_{(0,\alpha)}
+(qu_{h}-f, x)_{(0,\alpha)}\},
\end{equation}
as an approximation to the exact flux $\beta_{1}u_{x}^{-}\left(\alpha\right)$.
Below we show that this is a second order approximation,
which improves the accuracy of the flux by one order compared with the estimate in (\ref{error ch2}).
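As an illustration of how $\Gamma_{\alpha}^{-}$ can be evaluated in practice, the following Python sketch computes the quantity in (\ref{f ch2}) by a composite midpoint rule from the nodal values of $u_{h}$ on a uniform mesh; it is a sketch only, with the interface element handled by keeping its left piece and using the slope of the IFE basis there.
\begin{verbatim}
def flux_left(x, uh, alpha, beta1, beta2, f, q=lambda t: 0.0):
    """Gamma_alpha^- = (1/alpha) * [ (beta uh', 1)_(0,alpha) + (q uh - f, x)_(0,alpha) ]
    for a piecewise linear (immersed) FE solution with nodal values uh[i]."""
    h = x[1] - x[0]
    total = 0.0
    for i in range(len(x) - 1):
        xl, xr = x[i], x[i + 1]
        if xl >= alpha:
            break
        if xr <= alpha:                           # element entirely inside (0, alpha)
            a, b = xl, xr
            slope = (uh[i + 1] - uh[i]) / h
        else:                                     # interface element: keep only [x_j, alpha]
            a, b = xl, alpha
            D = h - (beta2 - beta1) / beta2 * (xr - alpha)
            slope = (uh[i + 1] - uh[i]) / D       # slope of the IFE solution on [x_j, alpha]
        mid = 0.5 * (a + b)
        u_mid = uh[i] + slope * (mid - xl)
        total += beta1 * slope * (b - a)                      # (beta uh', 1) on [a, b]
        total += (q(mid) * u_mid - f(mid)) * mid * (b - a)    # midpoint rule for (q uh - f, x)
    return total / alpha
\end{verbatim}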
\begin{theorem}{}
If $u\left(x\right)$ is the solution of $\left(\ref{sol ch2}\right)$,
$u_{h}\left(x\right)$ is the Galerkin approximation of the solution of $u\left(x\right)$,
$\Gamma_{\alpha}^{-}$ is as defined above, then
\begin{equation}\label{a ch2}
\left|\Gamma_{\alpha}^{-}-\beta_{1}u_{x}^{-}\left(\alpha\right)\right|\leq ch^{2}\left\|u\right\|_{2}.
\end{equation}
\end{theorem}
\begin{proof}
We define $Y \in V_{h}$ as a function that satisfies $Y(0)=0$ and
\begin{equation}\label{g ch2}
\left(\beta Y^{'} , v_{h}^{'}\right)_{\left(0,\alpha\right)}+\left(qY-f , v_{h}\right)_{\left(0,\alpha\right)}=\beta_{1}u_{x}^{-}\left(\alpha\right)v_{h}\left(\alpha\right), \qquad \forall~ v_{h}\in V_{h} ~\mbox{with}~ v_{h}\left(0\right)=0.
\end{equation}
Taking $v_{h}=x/\alpha$ in (\ref{g ch2}) and subtracting it from (\ref{f ch2}), we have
\begin{eqnarray*}
&&\left|\Gamma_{\alpha}^{-}-\beta_{1}u_{x}^{-}\left(\alpha\right)\right|=\frac{1}{\alpha}\left|\left(\beta\left(u_{h}-Y\right)^{'}, 1 \right)_{\left(0,\alpha\right)}+\left(q\left(u_{h}-Y\right), x\right)_{\left(0,\alpha\right)}\right|\\
&&~~~\qquad\qquad\quad~~~\leq c\left\{\left\|u_{h}-Y\right\|_{0}+\left|\left(u_{h}-Y\right)\left(\alpha\right)\right|\right\}.
\end{eqnarray*}
From (\ref{c ch2}) and (\ref{g ch2}) we can see that
\begin{equation}\label{}
\left(\beta \left(u_{h}-Y\right)^{'} , v_{h}^{'}\right)_{\left(0,\alpha\right)}+\left(q\left(u_{h}-Y\right) , v_{h} \right)_{\left(0,\alpha\right)}=0 , \qquad \forall v_{h} \in V_{h,\left(0,\alpha\right)};
\end{equation}
Setting $w=u_{h}-Y$ and taking $v_{h}=w-xw(\alpha)$, we get the following equation
\begin{equation*}\label{}
\left(\beta w^{'} , w^{'}\right)_{\left(0,\alpha\right)}+\left(qw , w\right)_{\left(0,\alpha\right)}=\left(\beta w^{'} , w\left(\alpha\right)\right)_{\left(0,\alpha\right)}+\left(qw , xw\left(\alpha\right)\right)_{\left(0,\alpha\right)},
\end{equation*}
and thus we have
\begin{equation}\label{h ch2}
\left\|w\right\|_{1}\leq c\left\{\left\|w\right\|_{0}+\left|w\left(\alpha\right)\right|\right\}.
\end{equation}
Next we construct the following auxiliary problem.
Let $\varphi \in H^{2}(0, \alpha)\cap H^{1}_{0}\left(0, \alpha\right)$ be the solution of
the following
\begin{equation*}\label{}
\begin{cases}
L^{*}\varphi =-w ,\quad x\in \left(0 , \alpha\right),\\
\varphi\left(0\right)=\varphi\left(\alpha\right)=0.
\end{cases}
\end{equation*}
We also assume that
\begin{equation*}\label{}
\left\|\varphi\right\|_{2}\leq c \left\|w\right\|_{0}.
\end{equation*}
Then, for an appropriately chosen $\pi\varphi \in V_{h,\left(0,\alpha\right)}$, we proceed to get the following,
\begin{eqnarray*}
&&\left(w , w\right)_{\left(0,\alpha\right)}=\left|-\left(w , L^{*}\varphi\right)_{\left(0,\alpha\right)}\right|=\left|\left(w , \left(\beta\varphi^{'}\right)^{'}-q\varphi\right)_{\left(0,\alpha\right)}\right|\\
&&\qquad~~~~~~~~=\left|-\left(\beta w^{'} , \varphi^{'}\right)_{\left(0,\alpha\right)}-\left(qw , \varphi \right)_{\left(0,\alpha\right)}+\left(\beta ^{-}\varphi^{'}w\right)\left(\alpha\right)\right|\\
&&\qquad\quad~~~~~\leq\left|\left(\beta w^{'} , \varphi^{'}-\left(\pi\varphi\right)^{'}\right)_{\left(0,\alpha\right)}\right|+\left|\left(qw , \varphi-\pi\varphi\right)_{\left(0,\alpha\right)}\right|+\left|\left(\beta\varphi^{'} w\right)\left(\alpha\right)\right|\\
&&\qquad\quad~~~~~\leq c \left\{\left\|w\right\|_{1}\left\|\varphi-\pi\varphi\right\|_{1}+\left|\varphi^{'}\left(\alpha\right)\right|\left|w\left(\alpha\right)\right|\right\}\\
&&\qquad\quad~~~~~\leq c \left\{h\left\|w\right\|_{1}\left\|\varphi\right\|_{2}+\left\|w\right\|_{0}\left|w\left(\alpha\right)\right|\right\}\\
&&\qquad\quad~~~~~\leq c\left\|w\right\|_{0}\left\{h\left\|w\right\|_{1}+\left|w\left(\alpha\right)\right|\right\}.
\end{eqnarray*}
The above yields,
\begin{equation}\label{i ch2}
\left\|w\right\|_{0}\leq c \left\{h\left\|w\right\|_{1}+\left|w\left(\alpha\right)\right|\right\}.
\end{equation}
For $h$ sufficiently small, (\ref{h ch2}) and (\ref{i ch2}) imply that
\begin{equation}\label{}
\left\|w\right\|_{0}\leq c \left|w\left(\alpha\right)\right|,
\end{equation}
where $c$ is a positive constant depending only on the coefficients $\beta$, $q(x)$.
We now estimate $\left|w\left(\alpha\right)\right|$, using a second auxiliary problem
\begin{equation*}\label{}
\begin{cases}
L^{*}\xi =0 ,\quad \xi\in \left(0 , \alpha\right),\\
\xi\left(0\right)=0 ,\quad ~\beta_{1}\,\xi^{'}\left(\alpha\right)=1 .
\end{cases}
\end{equation*}
Let $\eta=u-Y$; then we have
\begin{equation}\label{}
\left(\beta\eta^{'} , v_{h}^{'}\right)_{\left(0,\alpha\right)}+\left(q\eta , v_{h}\right)_{\left(0,\alpha\right)}=0 , \quad~\forall~ v_{h}\in V_{h} ~\mbox{with}~ v_{h}\left(0\right)=0.
\end{equation}
Furthermore, using
\begin{eqnarray*}
&&0=-\left(\eta , L^{*}\xi\right)_{\left(0,\alpha\right)}=-\left(\eta , -\left(\beta\xi^{'}\right)^{'}+q\xi\right)_{\left(0,\alpha\right)}\\
&&\qquad\quad\qquad\qquad~~~=\left(\eta , \left(\beta\xi^{'}\right)^{'}-q\xi\right)_{\left(0,\alpha\right)}\\
&&\qquad\quad\qquad\qquad~~~=\left(-\beta\eta^{'} , \xi^{'}\right)_{\left(0,\alpha\right)}+\left(-q\eta , \xi \right)_{\left(0,\alpha\right)}+\eta\left(\alpha\right),
\end{eqnarray*}
we have
\begin{eqnarray*}
&&\left|\eta\left(\alpha\right)\right|=\left|\left(\beta\eta^{'} , \xi^{'}\right)_{\left(0,\alpha\right)}+\left(q\eta , \xi\right)_{\left(0,\alpha\right)}\right|\\
&&\quad~~~~~\leq c\left\|\eta\right\|_{1}\left\|\xi-\pi_{h}\xi\right\|_{1}\\
&&\quad~~~~~\leq ch\left\|\eta\right\|_{1}\\
&&\quad~~~~~\leq ch^{2}\left\|u\right\|_{2}.
\end{eqnarray*}
Finally, we get
\begin{eqnarray*}
&&\left|\Gamma_{\alpha}^{-}-\beta_{1}u_{x}^{-}\left(\alpha\right)\right|\leq c\left|\left(u_{h}-Y\right)\left(\alpha\right)\right|\\
&&~~~~\quad\qquad\quad~~~~\leq c\left\{\left|\left(u-u_{h}\right)\left(\alpha\right)\right|+\left|\left(u-Y\right)\left(\alpha\right)\right|\right\}\\
&&~~~~\quad\qquad\quad~~~~\leq ch^{2}\left\|u\right\|_{2}.
\end{eqnarray*}
This completes the proof of the theorem.
\end{proof}
\subsubsection*{Approximation of flux from the right side of the interface}
In a similar way, we can get a second order accurate approximation to the flux $-\beta^+ u_x^+(\alpha)$ from the right side of the interface:
\begin{equation*}
\Gamma_{\alpha}^{+}\triangleq\frac{1}{1-\alpha}\{(\beta u_{h}^{'}, -1)_{(\alpha,1)}+(qu_{h}-f, 1-x)_{(\alpha,1)}\}.
\end{equation*}
We also have the following error bound.
\begin{equation*}\label{}
\left|\Gamma_{\alpha}^{+}+\beta^+ u_{x}^{+}\left(\alpha\right)\right|\leq ch^{2}\left\|u\right\|_{2}.
\end{equation*}
\subsubsection*{Approximation of fluxes at the boundary}
The approach for accurate flux computations at the interface
can be applied to the flux computation from the left and right boundaries as expressed below.
We define approximations $\Gamma_{0}$ and $\Gamma_{1}$ to the fluxes
$-\beta_{1}u^{'}\left(0\right)$ and $\beta_{2}u^{'}\left(1\right)$, respectively:
\begin{equation*}\label{}
\Gamma_{0} \triangleq\left(\beta u_{h}^{'}, -1\right)+\left(qu_{h}-f, 1-x\right),
\end{equation*}
\begin{equation*}\label{}
\Gamma_{1}\triangleq\left(\beta u_{h}^{'}, 1\right)+\left(qu_{h}-f, x\right).
\end{equation*}
Then $\Gamma_{0}$ and $\Gamma_{1}$ are second order approximations to the flux
from the left and right boundaries as stated in the following theorem.
We skip the proof since it is similar to that for the flux from the left side of the interface.
\begin{theorem}{}
If $u\left(x\right)$ is the solution of $\left(\ref{sol ch2}\right)$,
$u_{h}\left(x\right)$ is the Galerkin approximation of the solution of $u\left(x\right)$,
$\Gamma_{0}$ and $\Gamma_{1}$ are as defined above,
then
\begin{equation*}\label{}
\left|\Gamma_{0}+\beta_{1}u^{'}\left(0\right)\right|+\left|\Gamma_{1}-\beta_{2}u^{'}\left(1\right)\right|\leq c h^{2}\left\|u\right\|_{2}.
\end{equation*}
\end{theorem}
\subsection{Numerical experiments in 1D}
We present one example below that is taken from \cite{li2006immersed}.
The exact solution is
\begin{equation*}
u(x)=
\begin{cases}
x^4/\beta^-, \qquad \quad \textrm{if}~~ 0<x < \alpha,\\
x^4/\beta^+ +(1/\beta^--1/\beta^+)\alpha^4, \; \;\textrm{if}~~ \; \alpha < x <1,
\end{cases}
\end{equation*}
where $0<\alpha<1$ is an interface.
The solution satisfies the ODE $-(\beta u')'=f(x)$ where $f(x)=-12 x^2$.
In this example, $q(x)=0$, $[u]=0$, and $[\beta u']=0$, but $[u']\neq 0$.
In Table~\ref{result1D ch2},
we show a grid refinement analysis for the proposed method with $\alpha=1/3$, $\beta^-=2$, $\beta^+=10$.
Thus the interface $\alpha$ is not a nodal point.
We measure the error of the solution $u(x)$ in the entire domain $(0,1)$
in the second column using the $L^{\infty}$ norm.
We estimate the convergence order using $p = \log (E_N/E_{2N})/\log 2$ in the third column.
As usual, since the relative location between the underlying grid and the interface $\alpha$ is not fixed,
the convergence order fluctuates.
The average convergence order is $1.983$.
In the fourth column, we list the error for $u_x^-=\lim_{x\rightarrow \alpha, x<\alpha} u'(x)$,
that is, the first order derivative from the left side of the interface;
we observe clear second order convergence, as shown in the fifth column.
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
N & $\|u-u_h\|_{L^{\infty}}$ & Order & Error in $u_x^-$ & Order \\
\hline
16 & 3.395E-05 & & 3.870E-03 & \\
\hline
32 & 1.547E-05 & 1.134 & 7.980E-04 & 2.278 \\
\hline
64 & 2.191E-06 & 2.820 & 1.562E-04 & 2.353 \\
\hline
128 & 9.732E-07 & 1.171 & 3.892E-05 & 2.005 \\
\hline
256 & 1.413E-07 & 2.784 & 8.475E-06 & 2.199 \\
\hline
512 & 6.088E-08 & 1.215 & 2.263E-06 & 1.905 \\
\hline
1024& 8.900E-09 & 2.774 & 5.098E-07 & 2.150 \\
\hline
\end{tabular}
\caption{ A grid refinement analysis of the proposed method with $\alpha=1/3$, $\beta^-=2$, $\beta^+=10$. The second column is the $L^{\infty}$ error of the solution in the entire domain $(0,1)$. The fourth column is the error in the first order derivative $u_x^-$, that is, from the left side of the interface. The average convergence order for the solution and $u_x^-$ are $1.983$ and $2.148$, respectively.} \label{result1D ch2}
\end{table}
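As a concrete instance of the order computation, the last two entries of the fourth column in Table~\ref{result1D ch2} give
\begin{equation*}
p=\frac{\log\left(2.263\times 10^{-6}/5.098\times 10^{-7}\right)}{\log 2}\approx 2.15,
\end{equation*}
consistent with the reported average order of $2.148$.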
\section{The numerical method and experiments for the 2D interface problem}
The results in the previous section are optimal: both the solution and the flux (at the interface or boundaries)
are second order accurate using a piecewise linear finite element space.
However, it is still an open question how to apply the approach to 2D problems with a curved interface.
In this section, we provide an alternative approach that is similar to a mixed finite element method
but only in a small tube around the interface.
In this section, the elliptic interface problem is
\begin{equation}\label{2Dpde ch2}
-\nabla \cdot \left(\beta \nabla u(\mathbf{x}) \right) + q(\mathbf{x}) u(\mathbf{x}) = f(\mathbf{x}), \quad \mathbf{x}\in \Omega_i, \quad \mbox{$i=1,2$},
\end{equation}
where $q(\mathbf{x})\in L^{\infty}(\Omega)$ with $q \ge 0$, $\Omega=\Omega_1\cup \Omega_2$,
$\beta(\mathbf{x})$ is a piecewise positive constant as in (\ref{beta ch2})
and has a finite jump discontinuity across a closed interface $\Gamma\in C^1$ in the solution domain.
In our new method, we introduce a tube of width $2 \epsilon$ containing the interface, defined as
\begin{equation*}
\Omega_{\epsilon} = \left \{ \mathbf{x}\in \Omega_1 \cup \Omega_2, \quad d(\mathbf{x},\Gamma ) \le \epsilon \right\},
\end{equation*}
where $d(\mathbf{x},\Gamma )$ is the distance between $\mathbf{x}$ and the interface~$\Gamma$.
In the tube~$\Omega_{\epsilon}$,
we introduce the flux as a separate variable vector $\mathbf{v}$
that can be considered as an augmented variable.
Thus, in addition to the PDE (\ref{2Dpde ch2}) in the entire domain,
we also have the following equations
\begin{equation*}
\begin{array}{c}
-\beta \, \nabla u = \mathbf{v}, \\
\nabla \cdot \mathbf{v} + q u = f ,
\end{array}
\qquad \mathbf{x} \in \Omega_{\epsilon}.
\end{equation*}
Next we define the following functional spaces
\begin{eqnarray*}
H_0^1 &=& \left\{\phi \in H^1(\Omega), \quad \phi = 0 \text{ on } \partial\Omega \right\},\\
W & =& \left\{ w\in L^2(\Omega_{\epsilon}) \right\}, \\
L_g &=& \left \{\mathbf{g} \in (L^2(\Omega_{\epsilon}))^2 \right\},
\end{eqnarray*}
assuming a homogeneous boundary condition along $\partial\Omega_2$.
We can easily get the following weak form for $u$ (in the entire domain) and $\mathbf{v}$ (in $\Omega_{\epsilon}$):
\begin{eqnarray}
& \left(\beta_i \nabla u, \, \nabla \phi \right) + \left(q u, \phi\right) = \left(f, \phi\right) \text{ in } \Omega_i, \; i=1, 2, \\
& -\left(\beta_i \nabla u, \, \mathbf{g}\right) = \left(\mathbf{v}, \mathbf{g}\right) \; \text{in} \; \Omega_{\epsilon}\cap \Omega_{i}, \; i=1,2, \label{poi2}\\
& \left(\nabla \cdot \mathbf{v},\, w\right) + (q u, w) = \left(f, w \right) \quad \text{ in } \Omega_{\epsilon}, \label{poi3}
\end{eqnarray}
where the inner products are taken in the usual $L^2$ sense and the test functions $\phi$, $\mathbf{g}$, and $w$ are from the spaces defined above.
There are two intuitive reasons behind the new algorithm.
First, a mixed formulation is known to improve the gradient computation,
and if we are only interested in the gradient from each side of the interface,
then a small tube around the interface suffices for that part of the computation.
Second, if we introduce the flux $\mathbf{v}=-\beta \nabla u$ near the interface
as an unknown in addition to the solution $u$,
and discretize the whole system with a second order discretization,
then we expect the error in the flux $\mathbf{v}$ to have the same order of accuracy as the discretization.
In the discretization, we use piecewise linear functions for $\phi$ and $\mathbf{g}$ as usual,
and piecewise constant functions for $w$.
The new augmented method enlarges the system by (\ref{poi2}) and (\ref{poi3}).
In terms of the stiffness matrix,
an additional $n_v$ columns are added to the stiffness matrix,
where $n_v$ is the number of extra unknowns in $\mathbf{v}$ within the tube $\Omega_{\epsilon}$.
As a result, the stiffness matrix becomes rectangular instead of square.
We use the singular value decomposition (SVD) to solve the resulting rectangular system in the least-squares sense.
Since the extra unknowns $\mathbf{v}$ live only in a co-dimension one neighborhood of the interface,
the additional cost is negligible compared with that of the elliptic solver on the entire domain.
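As a small illustration (not the authors' solver), the rectangular system can be solved in the least-squares sense with an SVD-based routine such as NumPy's:
\begin{verbatim}
import numpy as np

# A: rectangular augmented stiffness matrix, b: load vector.
# Random placeholders of compatible shape are used here for illustration only.
m, n = 120, 100                      # more equations than unknowns
A = np.random.rand(m, n)
b = np.random.rand(m)

# np.linalg.lstsq returns the minimum-norm least-squares solution (SVD-based).
sol, residuals, rank, sing_vals = np.linalg.lstsq(A, b, rcond=None)
\end{verbatim}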
\subsection{Numerical experiments in 2D}
Let $\Omega_2$ be a unit circle centered at $(0,0)$ with radius $R=1$.
In our numerical test, we take $q=0$.
Let $\Gamma$ be a circular interface of radius $0.9$ inside the unit circle.
The tube parameter is taken as $\epsilon=3h$, that is, three mesh layers on each side of the interface.
The exact solution is
\begin{eqnarray}
u(x,y) = \sin x \cos y,
\end{eqnarray}
in the entire domain so that the solution is continuous,
but the flux $\beta \nabla u$ is discontinuous for the test problem.
The coefficient is taken as $\beta_1=100$ and $\beta_2=1$.
The source terms and boundary condition are determined accordingly.
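For this choice the source term can be written down explicitly: since $\Delta(\sin x\cos y)=-2\sin x\cos y$ and $q=0$, we have
\begin{equation*}
f(x,y)=-\nabla\cdot\left(\beta_i\nabla u\right)=-\beta_i\,\Delta u=2\,\beta_i\sin x\cos y,
\qquad (x,y)\in\Omega_i,\quad i=1,2.
\end{equation*}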
In Table $\ref{table41 ch2}$,
the $L^2$ norm errors of $u$, $\mathbf{v}$,
and the $H^1$ norm error of $u$ are reported.
The $L^2(\Omega)$ and $H^1(\Omega)$ norms are used for the solution $u$,
that is, in the entire domain, while the $L^2(\Gamma)$ norm is used for the flux along the interface.
The first column $N$ is the number of mesh lines in each coordinate direction.
The results indicate that the new augmented method works as expected.
The convergence rates are shown in Table $\ref{table42 ch2}$.
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
$N$ & $L^2$ error of ${u}$ & $H^1$ error of ${u}$ & $L^2$ error of $\mathbf{v}$ \\
\hline
8 & 9.96e-3 & 5.67e-1 & 5.67e-1 \\
\hline
16 & 2.48e-3 & 1.75e-1 & 1.75e-1 \\
\hline
32 & 7.77e-4 & 4.14e-2 & 4.14e-2 \\
\hline
64 & 1.81e-4 & 1.04e-2 & 1.04e-2 \\
\hline
128 & 4.71e-5 & 5.95e-3 & 5.95e-3 \\
\hline
256 & 1.15e-5 &1.54e-3 & 1.54e-3 \\
\hline
512 & 2.92e-6 &3.80e-4 & 3.80e-4 \\
\hline
\end{tabular}
\caption{A grid refinement analysis of the proposed method. The $H^1$ error of $u$ coincides with the $L^2$ error of $\mathbf{v}$ because the gradient error dominates the error in $u$.} \label{table41 ch2}
\end{table}
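For completeness, the observed orders can be reproduced directly from the errors in Table~\ref{table41 ch2} via $\log_2(e_N/e_{2N})$; the short Python sketch below does this for the $L^2$ and $H^1$ columns.
\begin{verbatim}
import numpy as np

# Errors from the grid refinement table above, for N = 8, 16, ..., 512.
eL2 = np.array([9.96e-3, 2.48e-3, 7.77e-4, 1.81e-4, 4.71e-5, 1.15e-5, 2.92e-6])
eH1 = np.array([5.67e-1, 1.75e-1, 4.14e-2, 1.04e-2, 5.95e-3, 1.54e-3, 3.80e-4])

# Observed order between consecutive refinements: log2(e_N / e_{2N}).
pL2 = np.log2(eL2[:-1] / eL2[1:])
pH1 = np.log2(eH1[:-1] / eH1[1:])
print("L2 orders:", np.round(pL2, 2), "average:", round(pL2.mean(), 2))
print("H1 orders:", np.round(pH1, 2), "average:", round(pH1.mean(), 2))
\end{verbatim}
The resulting averages are close to the orders reported in Table~\ref{table42 ch2}.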
In Table~\ref{table42 ch2},
a comparison of the convergence order between the standard finite element method and the new augmented approach is presented.
We observe that the new approach provides much better accuracy for the flux (gradient).
We now have super-convergence for the gradient;
by super-convergence we mean a convergence rate that is faster than that of the original method.
For the finite element method with the piecewise linear function space,
it is well known that the flux is first order accurate in the $L^2$ norm.
In this manuscript, we propose a method to reconstruct the flux at the interface
and show that the reconstructed flux has second order convergence.
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Quantity &$u$ & $u$ &$\mathbf{v}$ \\
\hline
Norm & $L^2$ & $H^1$ &$L^2$ \\
\hline
Order (standard FEM) & 1.98 & 1.03 & 1.03\\
\hline
Order (new method) & 1.94 & 1.72 & 1.72\\
\hline
\end{tabular}
\caption{A comparison of the convergence order between the standard FEM and the new augmented approach.\label{table42 ch2}}
\end{table}
Next, in Table~\ref{table43 ch2} we present the results for different interface locations,
where $r_i$ is the radius of the interface,
including the case ($r_i=0$) in which the computation covers the entire domain
so that we obtain the gradient in the entire domain as well.
Of course, the computational cost also increases in that case.
As expected, we have second order convergence in the $L^2$ norm for the solution,
and roughly order $1.70$ for the flux (gradient).
In this case, the accuracy of the gradient is improved by about $70$ percent.
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|}
\hline
$r_i$ & Order in $L^2$ of $u$ & Order in $H^1$ of $u$ \\
\hline
$0.9$ & 1.94 & 1.72\\
\hline
$0.99$ & 1.94 & 1.70\\
\hline
0 & 1.93 & 1.70\\
\hline
\end{tabular}
\caption{A comparison of the convergence orders for various locations of the interface, where $r_i$ is the radius of the interface.\label{table43 ch2}}
\end{table}
Now we show the results with a non-zero $q(x,y)$.
We use the same solution as above with $q(x,y)=1$.
The source term is modified accordingly.
In Table~\ref{table41_new_q},
we show the grid refinement analysis of the errors in the $L^2$ and $H^1$ norms.
We observe the same behavior with similar convergence orders.
The average convergence orders are $1.92$ in the $L^2$ norm and $1.71$ in the $H^1$ norm.
Note that the $L^2$ error of $\mathbf{v}$ is the same as the $H^1$ error of $u$, as explained earlier.
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|}
\hline
$N$ & $L^2$ error of ${u}$ & $H^1$ error of ${u}$ \\
\hline
8 & 9.96e-3 & 5.67e-1 \\
\hline
16 & 3.51e-3 & 2.61e-1 \\
\hline
32 & 1.17e-4 & 6.54e-2 \\
\hline
64 & 2.81e-4 & 1.66e-2 \\
\hline
128 & 7.24e-5 & 9.13e-3 \\
\hline
256 & 1.85e-5 &2.31e-3 \\
\hline
512 & 4.62e-6 &5.79e-4 \\
\hline
\end{tabular}
\caption{A grid refinement analysis of the proposed method when $q(x,y)=1$. The average convergence orders are $1.92$ in the $L^2$ norm and $1.71$ in the $H^1$ norm.} \label{table41_new_q}
\end{table}
In the previous example, the solution has the same expression in the entire domain even though the flux is discontinuous.
Below we present another example in which the solution has different expressions in the two subdomains.
The outer boundary is $R=2$.
\begin{equation}
u(x,y) =
\begin{cases}
(x^2 + y^2)^2 & \mbox{if $ r > 1, $}\\
(x^2 + y^2) & \mbox{if $ r \leq 1, $}
\end{cases}
\end{equation}
where $r=\sqrt{x^2 + y^2}$.
The source term $f(x,y)$,
and the Dirichlet boundary condition are determined from the true solution.
In this example, the solution is continuous, that is, $[u]=0$, but the flux jump is non-homogeneous.
We tested our new method with large jump ratios $\beta_2 : \beta_1=1000:1$ and $\beta_2 : \beta_1=1:1000$.
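To make the interface conditions explicit, the following sketch checks symbolically that the solution jump $[u]$ vanishes at $r=1$ while the flux jump $[\beta\, \partial u/\partial r]$ does not; the assignment of $\beta_1$ to the inside and $\beta_2$ to the outside is an assumption made only for this illustration.
\begin{verbatim}
import sympy as sp

r = sp.symbols('r', positive=True)
u_in, u_out = r**2, r**4          # exact solution for r <= 1 and r > 1, written in terms of r
beta_in, beta_out = 1, 1000       # illustrative 1:1000 ratio; side assignment is assumed

jump_u = sp.simplify((u_out - u_in).subs(r, 1))
jump_flux = sp.simplify((beta_out*sp.diff(u_out, r) - beta_in*sp.diff(u_in, r)).subs(r, 1))

print(jump_u)     # 0    -> [u] = 0, the solution is continuous
print(jump_flux)  # 3998 -> non-zero (non-homogeneous) flux jump across r = 1
\end{verbatim}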
In Table~\ref{table53 ch2},
we present the results for different widths of the tube,
including the case in which the tube covers the entire domain
so that we obtain the gradient in the entire domain as well.
As expected,
we have second order convergence in the $L^2$ norm for the solution,
and roughly order $1.54$ for the flux (gradient), as in the thin tube case.
Compared with the standard finite element method, the accuracy of the computed gradient is improved by more than $50$ percent.
Note that the results are almost the same as those of the new gradient recovery technique using an a posteriori approach \cite{guo-yang16},
in which the rate of the recovered gradient is around $1.5$.
Note also that the convergence order for the gradient is about $1.54$,
which is lower than in the previous case, possibly due to the non-homogeneous flux jump.
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|}
\hline
width ($\epsilon$) & Order in $L^2$ of $u$ & Order in $H^1$ of $u$ \\
\hline
$3h$ & 1.96 & 1.53\\
\hline
$10h$ & 1.96 & 1.56\\
\hline
2 (entire domain) & 1.96 & 1.56\\
\hline
\end{tabular}
\caption{A comparison of the convergence order for various widths of the tube.\label{table53 ch2}}
\end{table}
\section{Conclusions}
In this paper, we discussed two methods to enhance the accuracy of the computed flux
at the interface for the elliptic interface problem.
One is for one-dimensional problems
in which we use a simple weak form to get second order accurate fluxes at the interface from each side.
We also provide a rigorous analysis of this approach.
The other is an augmented approach for two-dimensional interface problems.
Numerical examples show super-convergence (order about $1.50\sim 1.70$, better than the standard first order)
for the computed fluxes at the interface from each side.
For the two-dimensional algorithm, a theoretical analysis is still an open challenge.
\section*{Acknowledgment}
The authors gratefully acknowledge the following support.
F. Qin is partially supported by China NSF Grants 11691209, 11691210
and the NSF of Jiangsu Province, China (Grant No. BK20160880).
Z. Ma is supported by China National Special Fund: 2012DFA60830,
and
Z. Li is partially supported by the US NSF grant DMS-1522768 and China NSF grants 11371199 and 11471166.
| {
"timestamp": "2017-03-02T02:01:52",
"yymm": "1703",
"arxiv_id": "1703.00093",
"language": "en",
"url": "https://arxiv.org/abs/1703.00093",
"abstract": "New finite element methods are proposed for elliptic interface problems in one and two dimensions. The main motivation is not only to get an accurate solution but also an accurate first order derivative at the interface (from each side). The key in 1D is to use the idea from \\cite{wheeler1974galerkin}. For 2D interface problems, the idea is to introduce a small tube near the interface and introduce the gradient as part of unknowns, which is similar to a mixed finite element method, except only at the interface. Thus the computational cost is just slightly higher than the standard finite element method. We present rigorous one dimensional analysis, which show second order convergence order for both of the solution and the gradient in 1D. For two dimensional problems, we present numerical results and observe second order convergence for the solution, and super-convergence for the gradient at the interface.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Accurate gradient computations at interfaces using finite element methods",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9719924761487654,
"lm_q2_score": 0.7279754548076478,
"lm_q1q2_score": 0.7075866648940092
} |
https://arxiv.org/abs/2211.16960 | BASiS: Batch Aligned Spectral Embedding Space | Graph is a highly generic and diverse representation, suitable for almost any data processing problem. Spectral graph theory has been shown to provide powerful algorithms, backed by solid linear algebra theory. It thus can be extremely instrumental to design deep network building blocks with spectral graph characteristics. For instance, such a network allows the design of optimal graphs for certain tasks or obtaining a canonical orthogonal low-dimensional embedding of the data. Recent attempts to solve this problem were based on minimizing Rayleigh-quotient type losses. We propose a different approach of directly learning the eigensapce. A severe problem of the direct approach, applied in batch-learning, is the inconsistent mapping of features to eigenspace coordinates in different batches. We analyze the degrees of freedom of learning this task using batches and propose a stable alignment mechanism that can work both with batch changes and with graph-metric changes. We show that our learnt spectral embedding is better in terms of NMI, ACC, Grassman distance, orthogonality and classification accuracy, compared to SOTA. In addition, the learning is more stable. | \section{Introduction}
\label{sec:intro}
Representing information by using graphs and analyzing their spectral properties has been shown to be an effective classical solution in a wide range of problems including clustering \cite{bresson2013multiclass, ng2001spectral, zelnik2004self}, classification \cite{garcia2014multiclass}, segmentation \cite{shi2000normalized}, dimensionality reduction \cite{belkin2003laplacian, coifman2006diffusion, roweis2000nonlinear} and more. In this setting, data is represented by nodes of a graph, which are embedded into the eigenspace of the graph-Laplacian, a canonical linear operator measuring local smoothness.
Incorporating analytic data structures and methods within a deep learning framework has many advantages. It yields better transparency and understanding of the network, allows the use of classical ideas, which were thoroughly investigated and can lead to the design of new architectures, grounded in solid theory.
Spectral graph algorithms, however, are hard to incorporate directly in neural networks since they require eigenvalue computations which cannot be integrated in back-propagation training algorithms. Another major drawback of spectral graph tools is their low scalability. It is not feasible to hold a large graph containing millions of nodes and to compute its graph-Laplacian eigenvectors. Moreover, updating the graph with additional nodes is cumbersome and one usually resorts to graph-interpolation techniques, referred to as Out Of Sample Extension (OOSE) methods.
An approach to solve the above problems using deep neural networks (DNNs), first suggested in \cite{shaham2018spectralnet} and recently also in \cite{chen2022specnet2}, is to train a network that approximates the eigenspace by minimizing Rayleigh quotient type losses. The core idea is that the Rayleigh quotient of a sum of $n$ vectors is minimized by the $n$ eigenvectors with the corresponding $n$ smallest eigenvalues. As a result, given the features of a data instance (node) as input, these networks generate the respective coordinate in the spectral embedding space. This space should be equivalent in some sense to the analytically calculated graph-Laplacian eigenvector space. A common way to measure the equivalence of these spaces is using the Grassmann distance. Unfortunately, applying this indirect approach does not guarantee convergence to the desired eigenspace and therefore the captured space might not be faithful.
An alternative approach, suggested in \cite{mishne2019diffusion} for computing the diffusion map embedding, is a direct supervised method. The idea is to compute the embedding analytically, use it as ground-truth and train the network to map features to eigenspace coordinates in a supervised manner. In order to compute the ground truth embedding, the authors used the entire training set. This operation is very demanding computationally in terms of both memory and time and is not scalable when the training set is very large.
\begin{figure*}[htbp!]
\centering
\includegraphics[trim = 2mm 80mm 2mm 50mm, clip=true,width=1\textwidth]{figures/output_model_all4.pdf}
\caption{{\bf Toy examples}. An Illustration of trained BASiS models over common spectral-clustering toy examples. Each figure describes the embedding values given by the network to each point in the space and the clustering results over selected points. BASiS performs successful clustering and is able to interpolate and extrapolate the training data smoothly.}
\label{fig:model_output_all}
\end{figure*}
Our proposed method is to learn directly the eigenspace in batches. We treat each batch as sampling of the full graph and learn the eigenvector values in a supervised manner. A major problem of this kind of scheme is the inconsistency in the embedding coordinates. Thus, two instances in different batches with the same features can be mapped to very different eigenspace coordinates. Our solution is to use affine registration techniques to align the batches. Further, we use this alignment strategy to also allow changes in the graph affinity metric.
Our proposed method has the following main qualities: 1) {\bf Scalability.} Data is learnt in batches, allowing training based on large and complex input sets. 2) {\bf OOSE.} Out of sample extension is immediate. 3) {\bf High quality approximation of the eigenspace.} Since our learning method is direct and fully supervised, an excellent approximation of the graph eigenspace is obtained. 4) {\bf Robustness to feature change.} We can train the model also when the features and affinities between nodes change.
All the above properties yield a spectral building block which can be highly instrumental in various deep learning algorithms, containing an inherent orthogonal low dimensional embedding of the data, based on linear algebraic theory.
\section{Settings and Notations}
Let $\{x_i\}_{i=1}^n$ be a set of data instances denoted as $X$ which is a finite set in $\mathbb{R}^d$. These samples are assumed to lie on a lower dimensional manifold $\mathcal{M}$.
These instances are represented as nodes on an undirected weighted graph $G = (V, E, W)$, where $V$ and $E$ are sets of the vertices and edges, respectively, and $W$ is the adjacency matrix. This matrix is symmetric and defined by a distance measure between the nodes. For example, a common choice is a Gaussian kernel and Euclidean distance,
\begin{equation}\label{Gaussian kernel}
W_{ij}=\exp\left(-\frac{||x_i-x_j||_2^2}{2\sigma^2}\right),
\end{equation}
where $\sigma$ is a soft-threshold parameter.
The degree matrix $D$ is a diagonal matrix where $D_{ii}$ is the degree of the $i$-th vertex, i.e., $D_{ii}=\sum_{j}{W_{ij}}$.
The graph-Laplacian operator is defined by,
\begin{equation}\label{Laplacian_def}
L := D-W .
\end{equation}
The graph-Laplacian is a symmetric, positive semi-definite matrix, its eigenvalues are real, and its eigenvectors form an orthogonal basis.
The eigenvalues of $L$ are sorted in ascending order $\lambda_1 \leq \lambda_2 \leq ... \leq \lambda_n$, where the corresponding eigenvectors are denoted by $u_1,u_2...,u_n$. The sample $x_i$ is represented in the spectral embedding space as the $i$th row of the matrix $U=\begin{bmatrix}
u_1&\cdots&u_K
\end{bmatrix} \in \mathbb{R}^{n \times K}$, denoted as $\varphi_i$. Thus, more formally, the dimensionality reduction process can be formulated as
\begin{equation}\label{embedding_def}
x_i \longmapsto \varphi_i = [u_1(i), u_2(i),..., u_K(i)]\in \mathbb{R}^K,
\end{equation}
where $K\ll d$.
This representation preserves essential data information well
\cite{coifman2006diffusion, katz2019alternating, lederman2018learning, ortega2018graph}.
Alternatively, one can replace the Laplacian definition \eqref{Laplacian_def} with
\begin{equation}\label{normalized laplacian}
L_N := D^{-\frac{1}{2}}LD^{-\frac{1}{2}}=I-D^{-\frac{1}{2}}WD^{-\frac{1}{2}}.
\end{equation}
This matrix may yield better performance for certain tasks and datasets
\cite{shi1998motion, shi2000normalized, tatiraju2008image}.
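For concreteness, a minimal sketch of this embedding pipeline (Gaussian affinities, graph Laplacian or its normalized version, and the first $K$ eigenvector coordinates) might look as follows; the toy data, $\sigma$ and $K$ are placeholders.
\begin{verbatim}
import numpy as np

def spectral_embedding(X, K=2, sigma=1.0, normalized=True):
    """Map the rows of X to their first K graph-Laplacian eigenvector coordinates."""
    sq = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
    W = np.exp(-sq / (2.0 * sigma**2))       # Gaussian affinities
    np.fill_diagonal(W, 0.0)                 # self-loops removed (a common choice)
    D = W.sum(axis=1)
    L = np.diag(D) - W                       # L = D - W
    if normalized:
        L = L / np.sqrt(D)[:, None] / np.sqrt(D)[None, :]   # L_N = D^{-1/2} L D^{-1/2}
    vals, vecs = np.linalg.eigh(L)           # eigenvalues in ascending order
    return vecs[:, :K]                       # row i is the embedding of x_i

# Toy usage: three Gaussian blobs in the plane.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.1, (50, 2)) for c in ([0, 0], [3, 0], [0, 3])])
print(spectral_embedding(X, K=3, sigma=0.5).shape)   # (150, 3)
\end{verbatim}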
\section{Related Work}
\label{sec:related_work}
OOSE and scalability of graph-based learning methods are ongoing research topics. Mathematical analyses and analytical solutions to these problems can be found, for example, in \cite{alzate2008multiway, belabbas2009landmark, bengio2003out, fowlkes2004spectral, williams2000using}. However, neural networks learning the latent space of the data usually yield an efficient, robust and reliable solution for these problems.
Moreover, neural network modules can be easily integrated in larger networks, employing this embedding. For a recent use of learnable graphs in semi-supervised learning and data visualization see \cite{aviles2022graphxcovid}.
The effectiveness of modeling PDEs and certain eigenproblems in a grid-free, mesh-free manner was shown in \cite{bar2021strong,weinan2021algorithms,ben2020solving}.
We review below the main advances in eigenspace embedding.
{\bf Diffusion Nets \cite{mishne2019diffusion}.}
Diffusion Maps (DM) is a spectral embedding, resulting from the eigendecomposition of
\begin{equation}\label{eq:random_walk_matrix}
P := WD^{-1},
\end{equation}
known as the random-walk matrix \cite{coifman2006diffusion}. More formally, similarly to \cref{embedding_def}, the diffusion maps embedding is defined by
\begin{equation}\label{dm_embedding_def}
x_i \longmapsto \varphi_i = [\gamma_1^t\Phi_1(i), \gamma_2^t\Phi_2(i),...,\gamma_K^t\Phi_K(i)]\in \mathbb{R}^K,
\end{equation}
where $\{\Phi_j\}_{j=1}^K$ are the first non-trivial eigenvectors of $P$, $\{\gamma_j\}_{j=1}^K$ are the corresponding eigenvalues and $t>0$ is the diffusion time. Note, that $P$ and $L_N$ have the same eigenvectors, in reverse order with respect to their eigenvalues.
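A minimal numerical sketch of \cref{dm_embedding_def}, with a toy symmetric affinity matrix and diffusion time $t=1$ assumed for illustration, is given below.
\begin{verbatim}
import numpy as np

def diffusion_maps(W, K=2, t=1):
    """Diffusion-map coordinates from an affinity matrix W."""
    D = W.sum(axis=1)
    P = W / D[None, :]                    # P = W D^{-1}
    gamma, Phi = np.linalg.eig(P)         # real spectrum (P is similar to a symmetric matrix)
    gamma, Phi = gamma.real, Phi.real
    order = np.argsort(-gamma)            # sort eigenvalues in descending order
    gamma, Phi = gamma[order], Phi[:, order]
    return (gamma[1:K+1]**t) * Phi[:, 1:K+1]   # skip the trivial leading eigenvector

rng = np.random.default_rng(0)
A = rng.random((30, 30))
W = (A + A.T) / 2                         # toy symmetric affinity matrix
print(diffusion_maps(W, K=2, t=1).shape)  # (30, 2)
\end{verbatim}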
Diffusion Net (DN) is an autoencoder trained to map between the data and the DM.
The loss function of the encoder is defined by,
\begin{equation}\label{diffusion_net_encoder_loss}
\mathcal{L}_{DN}^e(\theta^e)=\frac{1}{2n}\sum_{i=1}^n\norm{f_{\theta^e}^e(x_i)-\phi_i}^2+ F(\theta^e, X),
\end{equation}
where $\theta^e$ denotes the encoder's parameters,
$f_{\theta^e}^e(x_i)$ is the encoder output and $F(\theta^e, X) = \frac{\mu}{2}\sum_{l=1}^{L-1}{\norm{\theta^e{_l}}_F^2}+\frac{\eta}{2m}\sum_{j=1}^d{||(P-\gamma_jI)(o^e_j)^T||^2}$ is a regularization term such that $\theta^e{_l}$ are the weights of the $l$-th layer, $o^e_j$ is the $j$-th column of the output matrix, $\mu$ and $\eta$ are regularization parameters.
Note that Diffusion Net requires
computing the embedding of the entire training set in advance, meaning \emph{it cannot be trained with mini-batches} and therefore has difficulty dealing with large datasets.
{\bf SpectralNet1 \cite{shaham2018spectralnet} (SpecNet1).}
This DNN learns the embedding corresponding to $L$ by minimizing the \emph{ratio-cut} loss of Ng \etal \cite{ng2001spectral}, without adding an orthogonality constraint on the solution:
\begin{equation}\label{spectralnet_loss}
\mathcal{L}_{SN1}(\theta) = \frac{1}{m^2}\sum_{i,j=1}^m{W_{i,j}||y_i-y_j||^2} = \frac{2}{m^2}\textrm{tr}(Y^TLY),
\end{equation}
where $y_i=f_{\theta}(x_i)$ is the network output, $m$ is the batch size, and tr is the trace operator. In order to calculate the eigenvectors of $L_N$, one should normalize $y_i, y_j$ with the corresponding node degrees. In SpectralNet1, orthogonality of the output is obtained by defining the last layer of the network as a linear layer that orthogonalizes the output. The last layer's weights are calculated during training with a QR decomposition of the DNN's outputs. The authors point out that in order to get good generalization and approximately orthogonal output at inference, large batches are required.
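The equality of the pairwise and trace forms in \cref{spectralnet_loss} is easy to verify numerically; the following sketch uses a random toy batch and affinity matrix as stand-ins.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m, K = 64, 4
A = rng.random((m, m))
W = (A + A.T) / 2; np.fill_diagonal(W, 0.0)   # toy symmetric affinity matrix
Y = rng.standard_normal((m, K))               # stand-in for the outputs y_i

diff = Y[:, None, :] - Y[None, :, :]
loss_pairwise = (W * np.sum(diff**2, axis=-1)).sum() / m**2

L = np.diag(W.sum(axis=1)) - W                # L = D - W
loss_trace = 2.0 * np.trace(Y.T @ L @ Y) / m**2

print(np.isclose(loss_pairwise, loss_trace))  # True
\end{verbatim}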
{\bf SpectralNet2 \cite{chen2022specnet2} (SpecNet2).}
In this recent work the authors suggested to solve the eigenpair problem of the matrix pencil $(W, D)$.
The loss function is defined by,
\begin{equation}\label{specnet2_loss}
\mathcal{L}_{SN2}(\theta) = \frac{1}{m^2}\textrm{tr}\left(-2Y^TW Y+ \frac{1}{m^2}Y^TDYY^TDY\right),
\end{equation}
where $Y$ is the network's output.
Given the output $Y$, an approximation to the eigenvectors of $P$, \cref{eq:random_walk_matrix}, can be calculated as $\hat{U}=YO$ where $O \in \mathbb{R}^{K \times K}$ satisfies
\begin{equation}\label{specnet2_ev_mat_calc}
\begin{aligned}
Y^TWYO = Y^TDYO\Lambda,
\end{aligned}
\end{equation}
where $\Lambda$ is a refined approximation of the eigenvalue matrix of $(W, D)$.
Note that \cref{specnet2_ev_mat_calc} requires a batch for its computation, which may be problematic at inference.
The authors
show qualitatively a successful approximation to the analytical embedding.
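The post-processing of \cref{specnet2_ev_mat_calc} amounts to a small $K\times K$ generalized eigenproblem; a sketch using scipy, with random stand-ins for the network output and the batch affinities, is shown below.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
m, K = 256, 5
A = rng.random((m, m)); W = (A + A.T) / 2   # toy symmetric affinity matrix
D = np.diag(W.sum(axis=1))
Y = rng.standard_normal((m, K))             # stand-in for the network output

# Solve (Y^T W Y) O = (Y^T D Y) O Lambda, a K x K generalized eigenproblem.
lam, O = eigh(Y.T @ W @ Y, Y.T @ D @ Y)     # ascending generalized eigenvalues
lam, O = lam[::-1], O[:, ::-1]              # keep the largest first
U_hat = Y @ O                               # approximate eigenvectors of P
print(U_hat.shape)                          # (256, 5)
\end{verbatim}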
\begin{figure}[htbp!]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.235\textwidth}
\includegraphics[trim = 10mm 50mm 30mm 60mm, clip,width=1\textwidth,valign=t]{figures/IdosFigs/toyExmple_data.pdf}
\caption{ }
\label{subfig:toyExmple_data}
\end{subfigure}
\begin{subfigure}[t]{0.235\textwidth}
\includegraphics[trim = 10mm 50mm 30mm 60mm, clip,width=1\textwidth,valign=t]{figures/IdosFigs/toyExmple_LS1.pdf}
\caption{}
\label{fig:toyExmple_LS1}
\end{subfigure}\\
\begin{subfigure}[t]{0.235\textwidth}
\includegraphics[valign=t, trim = 10mm 50mm 30mm 60mm, clip,width=1\textwidth]{figures/IdosFigs/toyExmple_LS2.pdf}
\caption{}
\label{fig:toyExmple_LS2}
\end{subfigure}
\begin{subfigure}[t]{0.235\textwidth}
\includegraphics[valign=t,trim = 10mm 50mm 30mm 60mm, clip,width=1\textwidth]{figures/IdosFigs/toyExmple_LS2wthLS1.pdf}
\caption{}
\label{fig:toyExmple_LS2wthLS1}
\end{subfigure}
\caption{{\bf Illustration}. The full dataset of three separated clusters, divided into two subsets and anchors is shown in \cref{subfig:toyExmple_data}. Figs. \ref{fig:toyExmple_LS1}-\ref{fig:toyExmple_LS2} show the embedding spaces of subset $\#1$ and subset $\#2$, respectively. \cref{fig:toyExmple_LS2wthLS1} shows the embedding space of the entire data after aligning the embedding of subset $\#1$ to that of subset $\#2$.
}
\label{idosfigs:toyExample}
\end{figure}
\section{Our Method} \label{sec:our_method}
\subsection{Motivation}
Our goal is to learn a model $f: \mathcal{M} \rightarrow \mathbb{R}^K$ which, given a sample $x \in \mathcal{M}$, approximates well the corresponding $\varphi$ of \cref{embedding_def}.
As is common with DNN learning, given a large training set, we would like to train the model by splitting the dataset into batches. A batch can be viewed as sampling the graph of the training set.
A straightforward approach would be to compute the eigenspace of each batch and to learn a mapping from $x$ to $ \varphi$, using a data loss similar to Diffusion Nets. The problem is that different samples of the training set most often lead to different embeddings. Specifically, the same instance $x_i$ can be mapped very differently in each batch.
This can be demonstrated in a very simple toy example, shown in \cref{idosfigs:toyExample}, which illustrates the core problem and our proposed solution. Three distinct clusters in Euclidean space are sampled in two trials (batches) and the eigenspace embedding is computed analytically. Three samples appear in both subsets, one for each cluster (red color). We refer to the common samples as \emph{anchors}. The plots of the instances in the embedding space for the two subsets are shown in Figs. \ref{fig:toyExmple_LS1}-\ref{fig:toyExmple_LS2}. One can observe the embeddings are different. Specifically, all anchors, which appear in both samplings, are mapped differently in a substantial way. It is well known that eigenvector embedding has a degree of freedom of rotation (as shown for example in \cite{zelnik2004self}). However, in the case of uneven sampling of clusters there may be also some scale changes and slight translation (caused by the orthonormality constraints). We thus approximate these degrees of freedom in the general case as an affine rigid transformation according to the anchors. Aligning one embedding space to the other one, using this transformation, yields a consistent embedding, as can be seen in \cref{fig:toyExmple_LS2wthLS1}. Following the alignment process, the embedding can be learnt well using batches.
In the toy example of the Three Moons, see \cref{fig:3moons}, we show the mapping of $9$ anchor-points, as shown in \cref{fig:3_moons_data_set_anchors}. In Figs. \ref{fig:3_moons_ev1_not_affined_final}-\ref{fig:3_moons_ev2_not_affined_final} the analytic computation of the first two non-trivial eigenvectors are plotted for $20$ different batch samples of size $256$ (out of $9000$ nodes in the entire dataset), all of which contain the $9$ points. In this simple example anchors located on the same moon receive approximately the same value. However, in different batches the embedding of the anchors is clearly inconsistent. Surely, a network cannot be expected to generalize such a mapping. After learning the transformation and performing alignment, the embedding values are consistent. In Figs. \ref{fig:3_moons_ev1_affined_final}-\ref{fig:3_moons_ev2_affined_final} the values are shown after our correction procedure.
This consistency allows to train DNN to learn the desired embedding, by dividing the data into batches.
The result of the trained DNN model for the Three Moons dataset appears in \cref{fig:model_output_all} (second from left).
These toy examples lead us to the detailed description of our algorithm.
\begin{figure}[htbp!]
\centering
\begin{subfigure}[H]{0.23\textwidth}
\centering
\includegraphics[trim = 1mm 1mm 1mm 1mm, clip=true,width=1\textwidth]{figures/3-moons/3_moons_original_data_set_new.png}
\caption{}
\label{fig:3_moons_data_set_final}
\end{subfigure}
\hfill
\begin{subfigure}[H]{0.23\textwidth}
\centering
\includegraphics[trim = 1mm 1mm 1mm 1mm, clip=true,width=1\textwidth]{figures/3-moons/3_moons_anchors_new.png}
\caption{}
\label{fig:3_moons_data_set_anchors}
\end{subfigure}
\hfill
\begin{subfigure}[H]{0.23\textwidth}
\centering
\includegraphics[trim = 1mm 1mm 1mm 1mm, clip=true,width=1\textwidth]{figures/3-moons/classic/3_moons_ev1_not_affined.png}
\caption{}
\label{fig:3_moons_ev1_not_affined_final}
\end{subfigure}
\hfill
\begin{subfigure}[H]{0.23\textwidth}
\centering
\includegraphics[trim = 1mm 1mm 1mm 1mm, clip=true,width=1\textwidth]{figures/3-moons/classic/3_moons_ev2_not_affined.png}
\caption{}
\label{fig:3_moons_ev2_not_affined_final}
\end{subfigure}
\hfill
\begin{subfigure}[H]{0.23\textwidth}
\centering
\includegraphics[trim = 1mm 1mm 1mm 1mm, clip=true,width=1\textwidth]{figures/3-moons/classic/3_moons_ev1_affined.png}
\caption{}
\label{fig:3_moons_ev1_affined_final}
\end{subfigure}
\hfill
\begin{subfigure}[H]{0.23\textwidth}
\centering
\includegraphics[trim = 1mm 1mm 1mm 1mm, clip=true,width=1\textwidth]{figures/3-moons/classic/3_moons_ev2_affined.png}
\caption{}
\label{fig:3_moons_ev2_affined_final}
\end{subfigure}
\caption{{\bf Three-Moons toy example}. The full dataset is shown in \cref{fig:3_moons_data_set_final} and the chosen anchor-nodes in \cref{fig:3_moons_data_set_anchors}. Figs. \ref{fig:3_moons_ev1_not_affined_final}- \ref{fig:3_moons_ev2_not_affined_final} show the values of the two leading eigenvectors for the anchors, for 20 different graph-samples. Figs. \ref{fig:3_moons_ev1_affined_final}- \ref{fig:3_moons_ev2_affined_final} show those values after our proposed alignment.}
\label{fig:3moons}
\end{figure}
\subsection{{BASiS Algorithm}}
We propose to calculate the embedding space with batches. To obtain consistency in this representation,
we calculate the first-order approximation of the distortion obtained in the eigenvector values between different samples of the data. The main steps of our algorithm are as follows:
First we perform two preliminary initialization steps.
{\bf Defining an anchor set.} Draw $l$ samples from the data. This subset is denoted as $V^a$ and will be added to any batch in the learning process.
{\bf Defining the reference embedding space.} We would like to define the embedding space of the anchor set as a reference space. However, to get more information about the manifold $\mathcal{M}$, we add $m-l$ samples (randomly) and term the resulting set the reference set $V^{ref}$. After calculating the embedding $V^{ref} \to \varphi^{ref}$ (as in \cref{embedding_def}), one can extract the coordinates of the anchor samples,
\begin{equation}\label{eq:stepEmbeddingRef}
V^a\to \{\varphi^{a, ref}_i\}_{i=1}^l.
\end{equation}
Following this initialization, the main steps of the training are as follows:
{\bf Calculate the embedding space over a new batch.} Draw $m-l$ new samples and add them to the anchor set. Let us denote the union set as $V^{b}$. We calculate $\{\varphi_i\}_{i=1}^m$, the embedding of $V^{b}$ and extract the embedding $\{\varphi^{a}_i\}_{i=1}^l$ corresponding to the anchors.
{\bf Calculate the alignment transformation.} Now, we calculate the alignment transformation mapping $\{\varphi^{a}_i\}_{i=1}^l$ to $\{\varphi^{a, ref}_i\}_{i=1}^l$. More formally, for $\varphi^a, \varphi^{a, ref} \in \mathbb{R}^K$ we find $A \in \mathbb{R}^{K \times K}$ and $b \in \mathbb{R}^K$ which minimize
\begin{equation}\label{eq:affie_transformation_least_squares1}
\min_{A, b}\sum_{i=1}^l{\norm{\varphi^{a, ref}_i - (A\varphi^{a}_i +b)}^2}.
\end{equation}
Alternatively, one can define $\hat{\varphi}^{a} = [\varphi^{a}, 1]$ and find the transformation $T \in \mathbb{R}^{K\times(K+1)}$ such that \begin{equation}\label{eq:stepAffie_transformation_least_squares2}
\min_{T}\sum_{i=1}^l{\norm{\varphi^{a, ref}_i - T\hat{\varphi}^{a}_i}^2}.
\end{equation}
In this case there are $K\times(K+1)$ degrees of freedom. Each anchor provides $K$ constraints, which means at least $K+1$ anchors are needed in order to solve this least squares problem. Since in many real-world problems there is noise in the measurements, it is customary to solve such problems in an overdetermined setting, using a higher number of anchors. In addition, given a large number of anchors, the transformation can be calculated using best matches, for example by using the RANdom SAmple Consensus (RANSAC) algorithm \cite{fischler1981random}.
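The minimizer of \cref{eq:stepAffie_transformation_least_squares2} is a standard linear least squares problem; a minimal sketch (plain least squares, without the RANSAC refinement, and with random anchors as placeholders) is given below.
\begin{verbatim}
import numpy as np

def fit_alignment(phi_batch, phi_ref):
    """Least squares fit of T such that phi_ref ~ T [phi_batch; 1]."""
    aug = np.hstack([phi_batch, np.ones((len(phi_batch), 1))])
    T, *_ = np.linalg.lstsq(aug, phi_ref, rcond=None)   # solves aug @ T ~ phi_ref
    return T.T                                          # shape (K, K+1)

def apply_alignment(T, phi):
    return np.hstack([phi, np.ones((len(phi), 1))]) @ T.T

# Toy usage: recover a known affine map from noisy anchors.
rng = np.random.default_rng(0)
K, l = 3, 30
A_true, b_true = rng.standard_normal((K, K)), rng.standard_normal(K)
phi_a = rng.standard_normal((l, K))
phi_a_ref = phi_a @ A_true.T + b_true + 0.01 * rng.standard_normal((l, K))

T = fit_alignment(phi_a, phi_a_ref)
print(np.allclose(apply_alignment(T, phi_a), phi_a_ref, atol=0.1))   # True
\end{verbatim}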
{\bf Batch Alignment.} Given the transformation $T$, we can align the embedding $\{\varphi_i\}_{i=1}^m$ of all the instances of $V^{b}$. We define $\hat{\varphi} = [\varphi, 1]$ and update
\begin{equation}\label{eq:stepAlignment}
\varphi \leftarrow T\hat{\varphi}.
\end{equation}
{\bf Gradient Step.} Now that we have a mechanism that allows us to obtain a consistent embedding, we can train the DNN by dividing the data into batches and using a simple MSE loss function
\begin{equation}\label{ltsnet_loss}
\mathcal{L}_{BASiS}(\theta)=\frac{1}{m}\sum_{i=1}^m\norm{y_i-\varphi_i}^2,
\end{equation}
where $y_i = f_{\theta}(x_i)$ is the DNN's output and $\varphi_i$ is the embedding of $x_i$ after alignment.
The full training scheme is detailed in \Cref{ltsnet_training_scheme}.
\begin{algorithm}[htbp!]
\caption{BASiS Training Scheme}
\label{ltsnet_training_scheme}
\begin{algorithmic}[1]
\Inputs{data features $\{x_i \}_{i=1}^n$, number of eigenvectors $K$, batch size $m$, number of anchors $l$.}
\Outputs{Spectral embedding model $f: \mathcal{M} \rightarrow \mathbb{R}^K $}
\Initialize{
Define anchor set $V^a$.\\
Extract $\{\varphi^{a, ref}_i\}_{i=1}^l$, the reference embedding of $V^a$ using \cref{eq:stepEmbeddingRef}
}
\While {$\mathcal{L}_{BASiS}(\theta)$ not converged}
\State Draw $m-l$ new samples.
\State Define node set $V^b$ as union of the anchors with the new sampled nodes.
\State Calculate the embedding $\{\varphi_i\}_{i=1}^m$ of $V^b$.
\State Calculate the optimal transformation $T$, \cref{eq:stepAffie_transformation_least_squares2}.
\State Align $\{\varphi_i\}_{i=1}^m$ with $T$ , \cref{eq:stepAlignment}.
\State Do a gradient step of $\mathcal{L}_{BASiS}$, \cref{ltsnet_loss}.
\EndWhile
\end{algorithmic}
\label{algo:TrainingScheme}
\end{algorithm}
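A compact end-to-end sketch of \Cref{algo:TrainingScheme} on toy data follows; for brevity a linear model trained by plain gradient descent stands in for the DNN, and the data, anchor and batch sizes are illustrative only.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def embed(X, K, sigma=0.5):
    """Analytic spectral embedding (normalized-Laplacian eigenvectors) of a node set."""
    sq = np.sum((X[:, None] - X[None, :])**2, axis=-1)
    W = np.exp(-sq / (2 * sigma**2)); np.fill_diagonal(W, 0)
    d = W.sum(1)
    L = np.eye(len(X)) - (W / np.sqrt(d)[:, None]) / np.sqrt(d)[None, :]
    return np.linalg.eigh(L)[1][:, :K]

def align(phi, phi_anchor, phi_anchor_ref):
    """Affine alignment of a batch embedding to the reference anchor embedding."""
    aug = lambda P: np.hstack([P, np.ones((len(P), 1))])
    T, *_ = np.linalg.lstsq(aug(phi_anchor), phi_anchor_ref, rcond=None)
    return aug(phi) @ T

# Toy data (three blobs), K eigenvectors, l anchors, batch size m.
X = np.vstack([rng.normal(c, 0.1, (300, 2)) for c in ([0, 0], [3, 0], [0, 3])])
K, l, m = 2, 15, 128
anchors = rng.choice(len(X), l, replace=False)
rest = np.setdiff1d(np.arange(len(X)), anchors)
ref = np.concatenate([anchors, rng.choice(rest, m - l, replace=False)])
phi_a_ref = embed(X[ref], K)[:l]                        # reference anchor embedding

feat = lambda Z: np.hstack([Z, np.ones((len(Z), 1))])   # linear model stands in for the DNN
Wmat = np.zeros((3, K))
for it in range(300):
    idx = np.concatenate([anchors, rng.choice(rest, m - l, replace=False)])
    phi = embed(X[idx], K)
    phi = align(phi, phi[:l], phi_a_ref)                # batch alignment step
    F = feat(X[idx])
    Wmat -= 0.02 * 2 * F.T @ (F @ Wmat - phi) / m       # gradient step on the MSE loss
print(np.mean((feat(X[idx]) @ Wmat - phi)**2))          # final batch MSE
\end{verbatim}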
\subsection{BASiS for feature perturbation}
In the process of network training, the features are often not fixed and change slightly at each iteration.
In this case the adjacency values change and hence, naturally, so does the embedding space. We suggest an algorithm that allows iterative changes in the feature space (inducing a different graph metric). This algorithm is also based on an alignment technique.
Similar to \Cref{algo:TrainingScheme}, we define anchor nodes. Given the current features, we extract $\{\varphi^{a, prev}_i\}_{i=1}^l$, the current spectral embedding of the anchors. When the features are updated, we find the new embedding of the anchors $\{\varphi^{a, update}_i\}_{i=1}^l$. In order to maintain consistency in the learning process, we find a transformation $T_G$, as in \cref{eq:stepAffie_transformation_least_squares2}, that aligns the updated anchor embedding to the previous one. Then we align the entire embedding space according to the calculated transformation. \Cref{algo:global_transformation_algo}
summarizes the proposed method.
\begin{algorithm}[htb]
\caption{BASiS under iterative feature change}
\begin{algorithmic}[1]
\Inputs{ $\{\varphi^{a, prev}_i\}_{i=1}^l$ the anchors embedding over previous features $\{x_i\}_{i=1}^n$, updated features $\{ \hat{x}_i \}_{i=1}^n$.}
\Outputs{$\{\varphi^{update}_i\}_{i=1}^n$ the spectral embedding over the updated features, aligned to $\{\varphi^{a, prev}_i\}_{i=1}^l$.}
\State Calculate the embedding $\{\varphi^{update}_i\}_{i=1}^n$ over the updated features.
\State Extract $\{\varphi^{a, update}_i\}_{i=1}^l$ the embedding correspond to the anchors.
\State Calculate the transformation $T_G$ between $\{\varphi^{a, prev}_i\}_{i=1}^l$ and $\{\varphi^{a, update}_i\}_{i=1}^l$.
\State Align $\{\varphi^{update}_i\}_{i=1}^n$ with $T_G$.
\end{algorithmic}
\label{algo:global_transformation_algo}
\end{algorithm}
\begin{table*}[htb]
\centering
\begin{tabular}{P{2.0cm}||P{3.0cm}|| P{2.4cm} P{2.4cm} P{2.4cm} P{2.4cm}}
\toprule
Measures & Networks & MNIST & Fashion-MNIST & SVHN & CIFAR-10 \\
\midrule
\multirow{3}{*}{$d_G$↓} & Diffusion-Net & $0.204 \pm 0.058$ & $0.488 \pm 0.238$ & $1.909 \pm 0.238$ & $1.022 \pm 0.250$ \\
& SpecNet1 & $0.386 \pm 0.074$ & $0.375 \pm 0.132$ & $3.526 \pm 0.529$ & $2.256 \pm 0.471$ \\
& SpecNet2 & $1.388 \pm 0.262$ & $1.976 \pm 0.210$ & $1.903 \pm 0.242$ & $2.970 \pm 0.682$ \\
& BASiS (Ours) & $\bf{0.107 \pm 0.038}$ & $\bf{0.284 \pm 0.073}$ & $\bf{1.656 \pm 0.170}$ & $\bf{0.803 \pm 0.085}$ \\
\midrule
\multirow{3}{*}{$d_{\perp}$↓} & Diffusion-Net & $0.535 \pm 0.365$ & $0.823 \pm 0.664$ & $1.532 \pm 0.354$ & $2.957 \pm 1.837$ \\
& SpecNet1 & $6.296 \pm 0.922$ & $6.384 \pm 0.899$ & $4.507 \pm 0.821$ & $5.169 \pm 0.775$ \\
& SpecNet2 & $9.486 \pm 0.001$ & $8.561 \pm 1.397$ & $4.104 \pm 0.269$ & $4.922 \pm 0.102$ \\
& BASiS (Ours) & $\bf{0.247 \pm 0.076}$ & $\bf{0.590 \pm 0.144}$ & $\bf{0.488 \pm 0.098}$ & $\bf{0.407 \pm 0.095}$ \\
\midrule
\multirow{3}{*}{NMI↑} & Diffusion-Net & $0.944 \pm 0.041$ & $0.759 \pm 0.085$ & $0.645 \pm 0.016$ & $0.466 \pm 0.034$ \\
& SpecNet1 & $0.911 \pm 0.008$ & $0.761 \pm 0.011$ & $0.665 \pm 0.018$ & $0.443 \pm 0.012$ \\
& SpecNet2 & $0.925 \pm 0.012$ & $0.759 \pm 0.010$ & $0.701 \pm 0.009$ & $0.466 \pm 0.013$ \\
& BASiS (Ours) & $\bf{0.961 \pm 0.001}$ & $\bf{0.798 \pm 0.001}$ & $\bf{0.736 \pm 0.001}$ & $\bf{0.501 \pm 0.001}$ \\
\midrule
\multirow{3}{*}{ACC↑} & Diffusion-Net & $0.944 \pm 0.030$ & $0.781 \pm 0.179$ & $0.687 \pm 0.303$ & $0.620 \pm 0.062$ \\
& SpecNet1 & $0.963 \pm 0.005$ & $0.815 \pm 0.029$ & $0.811 \pm 0.039$ & $0.637 \pm 0.029$ \\
& SpecNet2 & $0.966 \pm 0.007$ & $0.801 \pm 0.023$ & $0.813 \pm 0.015$ & $0.606 \pm 0.039$ \\
& BASiS (Ours) & $\bf{0.986 \pm 0.001}$ & $\bf{0.865 \pm 0.003}$ & $\bf{0.880 \pm 0.001}$ & $\bf{0.688 \pm 0.001}$ \\
\midrule
\multirow{3}{*}{Accuracy(\%)↑} & Diffusion-Net & $95.508 \pm 1.449$ & $86.207 \pm 0.196$ & $86.850 \pm 1.386$ & $67.316 \pm 2.112$ \\
& SpecNet1 & $92.278 \pm 4.776$ & $84.123 \pm 1.229$ & $85.154 \pm 0.377$ & $65.336 \pm 0.626$ \\
& SpecNet2 & $97.026 \pm 0.546$ & $85.953 \pm 0.240$ & $87.469 \pm 0.130$ & $67.093 \pm 0.644$ \\
& BASiS (Ours) & $\bf{98.522 \pm 0.065}$ & $\bf{87.202 \pm 0.187}$ & $\bf{88.021 \pm 0.064}$ & $\bf{68.887 \pm 0.128}$ \\
\bottomrule
\end{tabular}
\caption{{\bf Spectral embedding performance comparison.} Average performance obtained over 10 different initializations of each of the four methods. In each experiment we learn an embedding space of dimension 10 for 1000 training iterations using batches of size 512.}
\label{table:comparison_10ev_final}
\end{table*}
\section{Experimental Results}
\label{sec:experimental_results}
In this section we examine the ability of BASiS to learn the graph-spectral embedding over different datasets quantifying its success using several performance measures. Our method is able to learn any desired eigen embedding (since it is supervised by analytic calculations). To fairly compare our method to the ones mentioned in \cref{sec:related_work} we calculate the eigenspace of $L_N$ (\cref{normalized laplacian}). For all methods the DNN's architecture includes $5$ fully connected (FC) layers with ReLU activation in between (see full details in the supplementary).
\subsection{Evaluation Metrics}
We evaluate our results using several measures.
We calculate the Grassmann distance (projection metric) \cite{hamm2008grassmann} between the network output and the analytically calculated eigenvectors.
The squared Grassmann distance between two orthonormal matrices $Y_1, Y_2 \in \mathbb{R}^{n \times K} $ is defined as:
\begin{equation}\label{grassmann_distance_defenition}
d_G(Y_1,Y_2) = K-\sum_{i=1}^K{\cos^2\theta_i},
\end{equation}
where $0 \leq \theta_1 \leq ... \leq \theta_K \leq \frac{\pi}{2}$ are the principal angles between the two subspaces $span(Y_1)$ and $span(Y_2)$.
The distance is in $[0, K]$ where lower values indicate greater proximity between the subspaces.
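For reference, \cref{grassmann_distance_defenition} can be computed directly from the principal angles returned by scipy; a sketch with toy orthonormal bases follows.
\begin{verbatim}
import numpy as np
from scipy.linalg import qr, subspace_angles

def grassmann_distance_sq(Y1, Y2):
    """Squared Grassmann (projection) distance between span(Y1) and span(Y2)."""
    theta = subspace_angles(Y1, Y2)              # principal angles
    return Y1.shape[1] - np.sum(np.cos(theta)**2)

rng = np.random.default_rng(0)
n, K = 100, 5
Y1 = qr(rng.standard_normal((n, K)), mode='economic')[0]
Y2 = qr(rng.standard_normal((n, K)), mode='economic')[0]
print(grassmann_distance_sq(Y1, Y1))   # ~0 for identical subspaces
print(grassmann_distance_sq(Y1, Y2))   # a value in [0, K]
\end{verbatim}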
A second measure is the degree of orthogonality of the DNN's output $Y$.
We use the following orthogonality measure:
\begin{equation}\label{orthogonality_measure_defenition}
d_{\perp}(Y)=||Y^TY-I||_F^2,
\end{equation}
where $I$ is an identity matrix and $||\cdot||_F$ is the Frobenius norm. For $Y$ containing columns of orthonormal vectors we get $d_{\perp}(Y)=0$. In general, we expect this measure to be close to zero in proper embeddings.
To evaluate the potential clustering performance we examined two common metrics: Normalized mutual information (NMI) and unsupervised clustering accuracy (ACC). The clustering result is achieved by performing the K-Means algorithm on the spectral embedding. Both indicators are in the range $[0,1]$, where high values indicate a better correspondence between the clustering result and the true labels.
NMI is defined as,
\begin{equation}\label{NMI_defenition}
NMI(c, \hat{c}) = \frac{I(c, \hat{c})}{\max\{H(c), H(\hat{c})\}},
\end{equation}
where $I(c, \hat{c})$ is the mutual information between the true labels $c$ and the clustering result $\hat{c}$ and $H(\cdot)$ denotes entropy.
ACC is defined as,
\begin{equation}\label{ACC_defenition}
ACC(c, \hat{c}) = \frac{1}{n}\max_{\pi \in \Pi}{\sum_{i=1}^n{\mathbbm{1}\{c_i=\pi(\hat{c}_i)\}}},
\end{equation}
where $\Pi$ is the set of possible permutations of the clustering results. To choose the optimal permutation $\pi$ we followed \cite{shaham2018spectralnet} and used the Kuhn-Munkres algorithm \cite{munkres1957algorithms}.
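Both measures are available off the shelf; the sketch below evaluates \cref{ACC_defenition} with scipy's Hungarian (Kuhn-Munkres) solver and \cref{NMI_defenition} with scikit-learn, on toy labels.
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(c, c_hat):
    """ACC: best one-to-one label matching found by the Hungarian algorithm."""
    c, c_hat = np.asarray(c), np.asarray(c_hat)
    k = max(c.max(), c_hat.max()) + 1
    cont = np.zeros((k, k), dtype=int)            # contingency counts
    for ci, chi in zip(c, c_hat):
        cont[chi, ci] += 1
    row, col = linear_sum_assignment(-cont)       # maximize matched samples
    return cont[row, col].sum() / len(c)

c     = np.array([0, 0, 1, 1, 2, 2])
c_hat = np.array([1, 1, 0, 0, 2, 2])              # same clustering, permuted labels
print(clustering_accuracy(c, c_hat))                                 # 1.0
print(normalized_mutual_info_score(c, c_hat, average_method='max'))  # 1.0
\end{verbatim}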
Finally, we examine how suitable the embedding is for classification. We trained (with Cross-Entropy loss) a supervised linear regression model containing a single fully connected layer without activation, and examined its accuracy:
\begin{equation}\label{accuracy_defenition}
Accuracy(c, \hat{c}) = \frac{1}{n}{\sum_{i=1}^n{\mathbbm{1}\{c_i=\hat{c}_i\}}}.
\end{equation}
\subsection{Spectral Clustering} \label{subsec:spectral_clustring_experiments}
In this section we examine the ability of our method to learn the spectral embedding for clustering of different datasets.
First, we examine the performance for well-known spectral clustering toy examples, appearing in \cref{fig:model_output_all}. In these examples the dataset includes $9000$ nodes and the model is trained by calculating the first non-trivial eigenvectors (sampling $256$ nodes, using $1000$ iterations). In all the experiments NMI and ACC of 1.0 were obtained over the test set (i.e., perfect clustering results). In addition, as demonstrated in \cref{fig:model_output_all}, the learnt model is able to generalize the clustering result and performs smooth interpolation and extrapolation of the space mapping. We note that no explicit regularization loss is used in our method; generalization and smoothness are obtained naturally through the neural training process.
Next we compare the performance of BASiS to those of the models mentioned in \cref{sec:related_work}.
We examine the results over 4 well-known image datasets: MNIST, Fashion-MNIST \cite{xiao2017fashion}, SVHN \cite{netzer2011reading} and CIFAR-10 \cite{krizhevsky2009learning}. For each dataset we first learn an initial low-dimensional representation, found to be successful for graph-based learning, by a Siamese network, a convolutional neural network (CNN) trained in a supervised manner using Contrastive loss
\begin{equation}\label{contrastive_loss}
\begin{aligned}
\mathcal{L}_{Cont}(x_i, x_j, \theta) =
\mathbbm{1}\{y_i=y_j\}\norm{f^{rep}_{\theta}(x_i)-f^{rep}_{\theta}(x_j)}_2^2\\ + \mathbbm{1}\{y_i \neq y_j\}\max(0, \epsilon-\norm{f^{rep}_{\theta}(x_i)-f^{rep}_{\theta}(x_j)}_2)^2,
\end{aligned}
\end{equation}
where $f^{rep}_{\theta}(x_i)$ is the Siamese network's output for input image $x_i$ labeled as $y_i$, and $\epsilon \in \mathbb{R}^+$ is a margin parameter.
More details on the architecture of the Siamese network are in the supplementary material.
We use these representations as inputs to the spectral embedding models.
In all the experiments the graph affinity matrix $W$ is defined by \cref{Gaussian kernel}, using the $50$ nearest neighbors of each node.
We use $50$ neighbors so that all methods can perform well (see the sensitivity analysis below).
The model is trained to learn the first $K$ eigenvectors, where $K$ is equal to the number of clusters.
The batch size is set to $m=512$. For our method, we randomly select $25$ anchor-nodes from each cluster and use RANSAC to find the best transformation.
Comparison between the methods is summarized in \Cref{table:comparison_10ev_final}. The numbers are obtained by showing the average and empirical standard deviation of each measure, as computed based on $10$ experiments.
Since the training process in Diffusion Net is not scalable, in each initialization we randomly sampled a single batch from the training set and trained the model with the analytically calculated spectral embedding.
As for SpecNet2, as indicated in \cref{sec:related_work}, it requires a post-processing stage over the network's output to obtain an approximation of the spectral space. In order to maintain consistency and obtain reasonable performance for all measures, the post-processing is performed over the entire test set (this naturally limits the method and increases the complexity at inference).
More implementation details are in the supplementary. \Cref{table:comparison_10ev_final} shows that our method is superior and more stable in approximating the analytical embedding and in clustering.
We further examined the robustness of the methods to changes in the number of neighbors for each node. This parameter defines the connectivity of the graph in the affinity matrix. \cref{fig:ms_change} shows the average and the empirical standard deviation of the performance measures, for $50$ training procedures over the MNIST dataset.
It is shown that our method is highly robust and consistent. We note the high sensitivity of Diffusion Net to this meta-parameter.
\begin{figure}[htbp!]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[trim = 1mm 1mm 1mm 1mm, clip=true,width=1\textwidth]{figures/MNIST/ms-change/ms_nmi.png}
\caption{}
\label{fig:ms_nmi}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[trim = 1mm 1mm 1mm 1mm, clip=true,width=1\textwidth]{figures/MNIST/ms-change/ms_acc.png}
\caption{}
\label{fig:ms_acc}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[trim = 1mm 1mm 1mm 1mm, clip=true,width=1\textwidth]{figures/MNIST/ms-change/ms_grassmann.png}
\caption{}
\label{fig:ms_grassmann}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[trim = 1mm 1mm 1mm 1mm, clip=true,width=1\textwidth]{figures/MNIST/ms-change/ms_orth.png}
\caption{}
\label{fig:ms_orth}
\end{subfigure}
\caption{{\bf Robustness to the node neighborhood.} The average and standard deviation of $50$ different training experiments over the MNIST dataset, for different number of neighbors per node.}
\label{fig:ms_change}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.11\textwidth}
\centering
\includegraphics[trim = 1mm 1mm 1mm 1mm, clip=true,width=1\textwidth]{figures/Bunny_DM/bunny_array.png}
\caption{}
\label{fig:bunny_image}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.11\textwidth}
\centering
\includegraphics[trim = 1mm 1mm 1mm 1mm, clip=true,width=1\textwidth]{figures/Bunny_DM/data_set_dm_embd.png}
\caption{}
\label{fig:full_data_dm}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.11\textwidth}
\centering
\includegraphics[trim = 1mm 1mm 1mm 1mm, clip=true,width=1\textwidth]{figures/Bunny_DM/test_set_embd_original.png}
\caption{}
\label{fig:test_set_dm}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.11\textwidth}
\centering
\includegraphics[trim = 1mm 1mm 1mm 1mm, clip=true,width=1\textwidth]{figures/Bunny_DM/test_set_embd_net.png}
\caption{}
\label{fig:test_set_net_output}
\end{subfigure}
\caption{{\bf Diffusion Maps encoding}. Dataset of 2000 snapshots of a bunny on a rotating display \cite{lederman2018learning}. \cref{fig:bunny_image} shows snapshot examples. \cref{fig:full_data_dm} presents the analytically calculated DM embedding of the full dataset. The test set analytical embedding is shown in \cref{fig:test_set_dm} and the network output for the test set in \cref{fig:test_set_net_output}.}
\label{fig:bunny_dm}
\end{figure}
\subsection{Diffusion Maps Encoding}
We examine the ability of our model to learn the DM embedding (\ref{dm_embedding_def}). For this purpose we use the dataset from \cite{lederman2018learning} which includes $2000$ snapshots of a toy bunny placed on a rotating surface. Six examples of such frames are shown in \cref{fig:bunny_image}. We define a graph using the $20$ nearest neighbors of each node, and calculate the random-walk matrix $P$ (\ref{eq:random_walk_matrix}). Raw pixels are used as features (dimension $288,000$). Dimension reduction is performed with DM to $\mathbb{R}^2$. In \cref{fig:full_data_dm} the analytically calculated embedding obtained based on the entire dataset is shown. Our model was trained to approximate this embedding using $1500$ images. Testing is performed on $500$ images.
\cref{fig:test_set_dm} shows the analytically calculated embedding over the test snapshots. \cref{fig:test_set_net_output} shows the embedding obtained by our trained model. Our method approximates the analytic calculation well.
\subsection{Iterative Change of Features} \label{subsec:globalT}
In this section we illustrate the performance of \Cref{algo:global_transformation_algo} for aligning the spectral embedding space under changing features.
We define two DNN models. The first one is for feature extraction, trained to minimize the Contrastive loss (\ref{contrastive_loss}). The second is trained for calculating the spectral embedding, using \Cref{algo:TrainingScheme}, based on the output of the features model.
Both models are trained simultaneously. The feature model is trained for $1500$ iterations, where every tenth iteration we perform an update step for the spectral embedding model.
To maintain consistency under the feature change, we align the spectral space using \Cref{algo:global_transformation_algo} before performing an update step for the spectral model.
\cref{fig:tg_mnist_training} shows the results of the training process over the MNIST dataset where the learnt features are of dimension $16$ and the eigenvectors are of dimension $9$. \cref{fig:tg_mnist_embedding} shows a visualization (TSNE \cite{van2008visualizing})
of the test set embedding at the beginning and the end of the training process. It can be seen that the two modules were able to converge and reach good clustering results. In addition, we can observe that when the loss of the spectral module is sufficiently low (around iteration 800, marked with a red line in \cref{fig:tg_mnist_training}) the clustering performance of the spectral module is comparable to the analytic embedding, calculated with the current features (the orange and the green plots tightly follow the blue plot).
To illustrate the role of the transformation $T_G$, we show in \cref{fig:tg_mnist749_embedding_new} the results of another experiment using a similar setting. For better visualization, the training is only for three digits of MNIST: 4,7 and 9. The embedding (and visualization) is two dimensional. It can be seen that $T_G$ imposes consistency of the resulting embedding under feature change, allowing convergence and good approximation of the eigenvector space.
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[trim = 1mm 1mm 1mm 1mm, clip=true,width=1\textwidth]{figures/Global-transformation/MNIST/contrastive_loss.png}
\caption{}
\label{fig:tg_contrastive_loss}
\end{subfigure}
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[trim = 1mm 1mm 1mm 1mm, clip=true,width=1\textwidth]{figures/Global-transformation/MNIST/mse_loss.png}
\caption{}
\label{fig:tg_mse}
\end{subfigure}
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[trim = 1mm 1mm 1mm 1mm, clip=true,width=1\textwidth]{figures/Global-transformation/MNIST/NMI.png}
\caption{}
\label{fig:tg_nmi}
\end{subfigure}
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[trim = 1mm 1mm 1mm 1mm, clip=true,width=1\textwidth]{figures/Global-transformation/MNIST/ACC.png}
\caption{}
\label{fig:tg_acc}
\end{subfigure}
\caption{{\bf Training under feature change}. Evolution of measures during training (MNIST).
Figs. \ref{fig:tg_contrastive_loss}-\ref{fig:tg_mse} show losses of the features module and the spectral embedding module, respectively. Figs. \ref{fig:tg_nmi}-\ref{fig:tg_acc} show the clustering measures. Blue, analytic calculation of the eigenvectors based on the current features. Orange and green, output of spectral embedding module for the training and validation sets, respectively.
}
\label{fig:tg_mnist_training}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[trim = 40mm 15mm 70mm 20mm, clip=true,width=1\textwidth]{figures/Global-transformation/MNIST/ev_valid_iter1_tsne.png}
\caption{}
\label{fig:ev_valid_iter1_tsne}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[trim = 40mm 15mm 70mm 20mm, clip=true,width=1\textwidth]{figures/Global-transformation/MNIST/ev_valid_iter1500_tsne.png}
\caption{}
\label{fig:ev_valid_iter1500_tsne}
\end{subfigure}
\caption{{\bf Test set embedding}. Visualization (TSNE) of the spectral embedding, MNIST test set.
\cref{fig:ev_valid_iter1_tsne} shows the spectral embedding at the beginning of training, \cref{fig:ev_valid_iter1500_tsne} at the end.}
\label{fig:tg_mnist_embedding}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.15\textwidth}
\centering
\includegraphics[trim = 40mm 15mm 70mm 20mm, clip=true,width=1\textwidth]{figures/Global-transformation/MNIST479_new/U_sampled_base_before_Tg_iter10.png}
\label{fig:tg479_U_sampled_base_before_Tg_iter10}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.15\textwidth}
\centering
\includegraphics[trim = 40mm 15mm 70mm 20mm, clip=true,width=1\textwidth]{figures/Global-transformation/MNIST479_new/U_sampled_base_after_Tg_iter10.png}
\label{fig:tg479_U_sampled_base_after_Tg_iter10}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.15\textwidth}
\centering
\includegraphics[trim = 40mm 15mm 70mm 20mm, clip=true,width=1\textwidth]{figures/Global-transformation/MNIST479_new/ev_valid_iter10.png}
\label{fig:tg479_ev_valid_iter10}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.15\textwidth}
\centering
\includegraphics[trim = 40mm 15mm 70mm 20mm, clip=true,width=1\textwidth]{figures/Global-transformation/MNIST479_new/U_sampled_base_before_Tg_iter100.png}
\label{fig:tg479_U_sampled_base_before_Tg_iter100}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.15\textwidth}
\centering
\includegraphics[trim = 40mm 15mm 70mm 20mm, clip=true,width=1\textwidth]{figures/Global-transformation/MNIST479_new/U_sampled_base_after_Tg_iter100.png}
\label{fig:tg479_U_sampled_base_after_Tg_iter100}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.15\textwidth}
\centering
\includegraphics[trim = 40mm 15mm 70mm 20mm, clip=true,width=1\textwidth]{figures/Global-transformation/MNIST479_new/ev_valid_iter100.png}
\label{fig:tg479_ev_valid_iter100}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.15\textwidth}
\centering
\includegraphics[trim = 40mm 15mm 70mm 20mm, clip=true,width=1\textwidth]{figures/Global-transformation/MNIST479_new/U_sampled_base_before_Tg_iter500.png}
\label{fig:tg479_U_sampled_base_before_Tg_iter500}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.15\textwidth}
\centering
\includegraphics[trim = 40mm 15mm 70mm 20mm, clip=true,width=1\textwidth]{figures/Global-transformation/MNIST479_new/U_sampled_base_after_Tg_iter500.png}
\label{fig:tg479_U_sampled_base_after_Tg_iter500}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.15\textwidth}
\centering
\includegraphics[trim = 40mm 15mm 70mm 20mm, clip=true,width=1\textwidth]{figures/Global-transformation/MNIST479_new/ev_valid_iter500.png}
\label{fig:tg479_ev_valid_iter500}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.15\textwidth}
\centering
\includegraphics[trim = 40mm 15mm 70mm 20mm, clip=true,width=1\textwidth]{figures/Global-transformation/MNIST479_new/U_sampled_base_before_Tg_iter1500.png}
\label{fig:tg479_U_sampled_base_before_Tg_iter1500}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.15\textwidth}
\centering
\includegraphics[trim = 40mm 15mm 70mm 20mm, clip=true,width=1\textwidth]{figures/Global-transformation/MNIST479_new/U_sampled_base_after_Tg_iter1500.png}
\label{fig:tg479_U_sampled_base_after_Tg_iter1500}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.15\textwidth}
\centering
\includegraphics[trim = 40mm 15mm 70mm 20mm, clip=true,width=1\textwidth]{figures/Global-transformation/MNIST479_new/ev_valid_iter1500.png}
\label{fig:tg479_ev_valid_iter1500}
\end{subfigure}
\hfill
\caption{{\bf Features change demonstration} Left column: the analytically calculated spectral embedding of $V^{ref}$ obtained over the features module output. Distortion is a consequence of feature change. Middle column: Spectral embedding of $V^{ref}$ after alignment with $T_G$ . Right column: Network spectral embedding of the test set. Each row represents the embeddings for the 10th, 100th, 500th and 1500th iteration, respectively.
}
\label{fig:tg_mnist749_embedding_new}
\end{figure}
\section{Conclusion}
In this paper we introduced BASiS, a new method for learning the eigenspace of a graph in a supervised manner, allowing the use of batch training. Our proposed method has been shown to be highly robust and accurate in approximating the analytic spectral space, surpassing all other methods with respect to Grassmann distance, orthogonality, NMI, ACC and classification accuracy over various benchmarks. In addition, we proposed an adaptation of our procedure for learning the eigenspace during iterative changes in the graph metric (as is common in neural training). Our method can be viewed as a useful building block for integrating analytical spectral methods in deep learning algorithms. This enables effective use of the extensive theory and practice available for classical spectral embedding.
{\small
\bibliographystyle{ieee_fullname}
| {
"timestamp": "2022-12-01T02:14:38",
"yymm": "2211",
"arxiv_id": "2211.16960",
"language": "en",
"url": "https://arxiv.org/abs/2211.16960",
"abstract": "Graph is a highly generic and diverse representation, suitable for almost any data processing problem. Spectral graph theory has been shown to provide powerful algorithms, backed by solid linear algebra theory. It thus can be extremely instrumental to design deep network building blocks with spectral graph characteristics. For instance, such a network allows the design of optimal graphs for certain tasks or obtaining a canonical orthogonal low-dimensional embedding of the data. Recent attempts to solve this problem were based on minimizing Rayleigh-quotient type losses. We propose a different approach of directly learning the eigensapce. A severe problem of the direct approach, applied in batch-learning, is the inconsistent mapping of features to eigenspace coordinates in different batches. We analyze the degrees of freedom of learning this task using batches and propose a stable alignment mechanism that can work both with batch changes and with graph-metric changes. We show that our learnt spectral embedding is better in terms of NMI, ACC, Grassman distance, orthogonality and classification accuracy, compared to SOTA. In addition, the learning is more stable.",
"subjects": "Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Signal Processing (eess.SP)",
"title": "BASiS: Batch Aligned Spectral Embedding Space",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9719924761487653,
"lm_q2_score": 0.7279754548076477,
"lm_q1q2_score": 0.707586664894009
} |
https://arxiv.org/abs/1103.3300 | A Nonparametric Frequency Domain EM Algorithm for Time Series Classification with Applications to Spike Sorting and Macro-Economics | I propose a frequency domain adaptation of the Expectation Maximization (EM) algorithm to group a family of time series in classes of similar dynamic structure. It does this by viewing the magnitude of the discrete Fourier transform (DFT) of each signal (or power spectrum) as a probability density/mass function (pdf/pmf) on the unit circle: signals with similar dynamics have similar pdfs; distinct patterns have distinct pdfs. An advantage of this approach is that it does not rely on any parametric form of the dynamic structure, but can be used for non-parametric, robust and model-free classification. This new method works for non-stationary signals of similar shape as well as stationary signals with similar auto-correlation structure. Applications to neural spike sorting (non-stationary) and pattern-recognition in socio-economic time series (stationary) demonstrate the usefulness and wide applicability of the proposed method. |
\section{KL divergence and MLE}
The Kullback-Leibler (KL) divergence between two discrete probability distributions $p_1, \ldots, p_K$ and $q_1, \ldots, q_K$ is defined as
\begin{equation}
\KL{p}{q} := \sum_{i=1}^{K} p_i \log_{2} \frac{p_i}{q_i} = \mathbb{E}_p \log_{2} \frac{p_i}{q_i}.
\end{equation}
For the continuous case
\begin{equation}
\KL{p}{q} := \int p(x) \log \frac{p(x)}{q(x)} dx,
\end{equation}
where $p(x)$ and $q(x)$ are probability density functions.
KL divergence measures how far $p$ is from $q$. In particular, note that if $p = q$ then $\KL{p}{q} = 0$, and $\KL{p}{q} = \infty$ if $p$ puts positive mass where $q$ does not. Note however, that in general KL divergence is not symmetric: $\KL{p}{q} \neq \KL{q}{p}$. From the definition it is intuitive that the second argument is typically a reference/``true'' distribution, whereas the first argument is the one whose distance from that reference we want to assess.
Let $\tilde{p}(x)$ be the empirical distribution of a sequence $(x_1, \ldots, x_N)$
\begin{equation}
\label{eq:empirical_distribution}
\tilde{p}(x) := \frac{1}{N} \sum_{n=1}^{N} \delta(x - x_n),
\end{equation}
and $p(x | \theta)$ be a model distribution.
Typically if we want to estimate the parameter $\theta$ for a given data set we maximize the log-likelihood of the data
\begin{equation}
\ell(x | \theta) := \sum_{n=1}^{N} \log p(x_n | \theta).
\end{equation}
Using KL divergence it is intuitive to select a $\theta$ that minimizes the distance between the model distribution and the empirical distribution. In fact, it turns out that both are equivalent:
\begin{eqnarray}
\KL{\tilde{p}(x)}{p(x | \theta)} &=& \int \tilde{p}(x) \log \frac{\tilde{p}(x)}{p(x | \theta)} dx \\
&=& - H(\tilde{p}(x)) - \int \tilde{p}(x) \log p(x | \theta) dx,
\end{eqnarray}
where $H(\tilde{p}(x)) = - \int \tilde{p}(x) \log \tilde{p}(x)\, dx$ is the entropy of $\tilde{p}(x)$. Note that the entropy is independent of the choice of $\theta$. Thus
\begin{equation}
\arg \min_{\theta} \KL{\tilde{p}(x)}{p(x | \theta)} = \arg \max_{\theta} \mathbb{E}_{\tilde{p}} \log p(x | \theta).
\end{equation}
Plugging \eqref{eq:empirical_distribution} into the right hand side gives
\begin{eqnarray}
\mathbb{E}_{\tilde{p}} \log p(x | \theta) &=& \int \frac{1}{N} \sum_{n=1}^{N} \delta(x - x_n) \log p(x | \theta) dx \\
&=& \frac{1}{N} \sum_{n=1}^{N} \log p(x_n | \theta) \\
&=& \frac{1}{N} \ell(x | \theta)
\end{eqnarray}
Given this relation we can now evaluate the log-likelihood of $x$ given the KL divergence and entropy of the empirical distribution as
\begin{eqnarray}
\KL{\tilde{p}(x)}{p(x | \theta)} &=& - H(\tilde{p}) - \frac{1}{N} \ell(x | \theta) \\
\Leftrightarrow \ell(x | \theta) &=& -N \cdot \left(\KL{\tilde{p}(x)}{p(x | \theta)} + H(\tilde{p}) \right)
\end{eqnarray}
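As a quick numerical sanity check of the identity $\ell(x \mid \theta) = -N \left( \KL{\tilde{p}}{p(\cdot \mid \theta)} + H(\tilde{p}) \right)$, the following sketch (not part of the original analysis; natural logarithms throughout, arbitrary model pmf and sample) verifies it for a discrete sample:
\begin{verbatim}
# Sketch (illustrative; natural logs): check  loglik = -N * (KL + H)
import numpy as np

rng = np.random.default_rng(0)
K = 6
p_theta = rng.dirichlet(np.ones(K))        # some model pmf (hypothetical)
x = rng.integers(0, K, size=500)           # observed sample
N = len(x)

p_tilde = np.bincount(x, minlength=K) / N  # empirical pmf
loglik = np.sum(np.log(p_theta[x]))        # sum_n log p(x_n | theta)

mask = p_tilde > 0                         # convention: 0 * log 0 = 0
H = -np.sum(p_tilde[mask] * np.log(p_tilde[mask]))
KL = np.sum(p_tilde[mask] * np.log(p_tilde[mask] / p_theta[mask]))

assert np.isclose(loglik, -N * (KL + H))
\end{verbatim}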
Since we can view the empirical spectrum as a multinomial probability mass function on the unit circle, the Kullback-Leibler divergence seems to be a natural candidate. The KL divergence between two discrete probability distributions $p_1, \ldots, p_K$ and $q_1, \ldots, q_K$ is defined as
\begin{equation}
\KL{p}{q} := \sum_{i=1}^{K} p_i \log_{2} \frac{p_i}{q_i} = \mathbb{E}_p \log_{2} \frac{p_i}{q_i}.
\end{equation}
Note however, that in general KL divergence is not symmetric: $\KL{p}{q} \neq \KL{q}{p}$. This violates the symmetry required for a proper application of graph Laplacians and diffusion maps. A typical ``trick'' is to symmetrize the KL divergence as
\begin{equation}
\symKL{p}{q} := \frac{\KL{p}{q} + \KL{q}{p}}{2}.
\end{equation}
Hence for $\mathbf{X} = \left( s_{1}, \ldots, s_{n} \right) \in \mathbb{R}^{T \times n}$ the pairwise dissimilarity between spikes $s_{i}$ and $s_{j}$ can be collected in a matrix $\mathbf{S}$ with entries
\begin{equation}
\mathbf{S}_{ij} = \symKL{I(s_i)}{I(s_j)},
\end{equation}
where $I(s_k)$ is the periodogram estimate of spike $s_{k,t}$.
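As an illustration, a minimal sketch (assuming NumPy; the function names and the normalization of the periodogram to a pmf are illustrative choices, not taken from the text) of building this symmetrized-KL matrix between extracted spikes:
\begin{verbatim}
# Minimal sketch: pairwise symmetrized KL divergences between periodograms.
import numpy as np

def periodogram_pmf(x):
    """Power spectrum of a signal, normalized so it sums to one."""
    I = np.abs(np.fft.fft(x) / np.sqrt(len(x))) ** 2
    return I / I.sum()

def kl(p, q, eps=1e-12):
    p, q = np.clip(p, eps, None), np.clip(q, eps, None)
    return np.sum(p * np.log(p / q))

def sym_kl_matrix(X):
    """X: T x n matrix, one extracted spike per column."""
    P = [periodogram_pmf(X[:, j]) for j in range(X.shape[1])]
    n = len(P)
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            S[i, j] = S[j, i] = 0.5 * (kl(P[i], P[j]) + kl(P[j], P[i]))
    return S   # entry (i, j) is symKL(I(s_i), I(s_j)); 0 on the diagonal
\end{verbatim}
Note that $\mathbf{S}$ holds divergences (small values mean similar spikes); a graph-based method would typically convert it to affinities, e.g.\ with a Gaussian kernel.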
\begin{comment}
\section{Progress report}
For the project proposal I set the following ``milestones'':
\begin{itemize}
\item have an automated procedure that can successfully extract the actual spikes (only noise remains)
\item characterize spikes theoretically and empirically so well that I can get useful features as a basis for classification.
\item Have a good idea about what a good (parametric) distribution fits the data/features; the raw data is obviously non-Gaussian; it might be possible to approximate the features by (mixtures of) Gaussians.
\item Present preliminary results based on the models I have by then.
\end{itemize}
I think I have accomplished that mostly; maybe in the second point I don't have any highly theoretical characterizations, but the slowness measure seems to be a very reasonable feature that approaches the problem at its heart.
\end{comment}
\section{Clustering spikes}
\section{Clustering in the complex plane}
Taking the FFT of the signal, the different wave forms should be visible as ``cones'' in the complex plane, pointing in some particular direction (the important frequency); due to the way the data is collected these cones are very likely pointing towards low frequencies (something like angles greater than $\pi / 16$).
The noise in the measurements will be apparent by a circular bivariate distributed point cloud around $(0,0)$: this is the white noise sequence.
Picking out these cones will give the clusters.
\begin{remark}
The logarithm of the Fourier coefficients converts the ``cones'' to nice horizontal lines, which are much easier to cluster. Look into the complex logarithm.
\end{remark}
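A small sketch of this remark (illustrative only, using a synthetic waveform that is not from the data): scaled copies of the same waveform have FFT coefficients $c \cdot a_j$ along a common ray, and $\log(c\, a_j) = \log c + \log|a_j| + i \arg(a_j)$, so the phase (imaginary part) is essentially unchanged across copies while only the real part shifts.
\begin{verbatim}
# Sketch: complex log of FFT coefficients of scaled copies of one waveform.
import numpy as np

rng = np.random.default_rng(1)
T = 46
t = np.arange(T)
waveform = np.exp(-0.5 * ((t - 15) / 4.0) ** 2)        # synthetic spike shape

scales = rng.uniform(0.5, 2.0, size=10)                 # firing amplitudes
signals = np.outer(waveform, scales)                    # T x 10 scaled copies
signals += 0.01 * rng.standard_normal(signals.shape)    # small additive noise

log_coeffs = np.log(np.fft.fft(signals, axis=0))        # complex logarithm
# phases (imaginary parts) of the low-frequency coefficients barely vary
print(np.std(log_coeffs.imag[:5], axis=1))
\end{verbatim}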
\begin{figure}[!t]
\centering
\makebox{\includegraphics[width=\textwidth]{images/simulated_spike_trains.png}}
\caption{\label{fig:simulated_spike_trains} (top left) the $5$ true signals $S_i(t)$ of $5$ neurons; (top right) $N$ measured signals plus noise $s_{i,t}$, $t = 1, \ldots, 46$; (bottom left) Fourier coefficients in the complex plane of all $N$ signals; (bottom right) logarithm of the Fourier coefficients.}
\end{figure}
\begin{figure}[!t]
\centering
\makebox{\includegraphics[width=\textwidth]{images/true_and_noisy.png}}
\caption{\label{fig:true_and_noisy} (top left) the $5$ true signals $S_i(t)$ of $5$ neurons; (top right) $N$ measured signals plus noise $s_{i,t}$, $t = 1, \ldots, 46$; (bottom left) Fourier coefficients in the complex plane of all $N$ signals; (bottom right) logarithm of the Fourier coefficients.}
\end{figure}
\begin{figure}[!t]
\centering
\makebox{\includegraphics[width=\textwidth]{images/joint_density_plot_sim.png}}
\caption{\label{fig:joint_density_sim} Joint density of a mixture of $10$ Gamma - von Mises distributions}
\end{figure}
\subsection{Prior knowledge/ distributions}
The higher the variance of the noise, the wider the cones in the plane ($\rightarrow$ linear models theory: $\mathbb{V} \widehat{\beta} = \sigma^2 (X'X)^{-1}$. For a Fourier basis $X'X = I$ (orthonormal), so the width of the cone should grow linearly with the variance of the noise.)
For an iid sequence all frequencies are equally important (uniform distribution) and the distribution of the radii of the coefficients is exponential (see Brockwell and Davis).
For the signals a von Mises distribution on the frequency and a Gamma distribution for the coefficients for a given frequency would make sense, i.e.\ Fourier coefficients $a_{j} = r \exp(i \varphi)$ are random variables with the frequency $\varphi$ having a von Mises distribution\footnote{\url{https://secure.wikimedia.org/wikipedia/en/wiki/Von_Mises_distribution}}
\begin{equation}
\varphi \sim f(\varphi \mid \mu, \kappa) = \frac{e^{\kappa\cos(\varphi-\mu)}}{2\pi I_0(\kappa)}
\end{equation}
Then for a given frequency, the radius of the Fourier coefficients should follow a distribution that has the exponential distribution as a special case, since for an iid sequence the distribution is exponential (Brockwell and Davis). For example the Gamma distribution would be useful (also because it's from a conjugate family)
\begin{equation}
r | \varphi \sim \Gamma(k, \theta)
\end{equation}
For a simple model just assume that radius and phase are independent. However, for this data they presumably are not: since we measure the signals only over a certain time frame (that tries to exactly match one complete waveform), very high frequencies will not be important for these series. Thus if $\varphi$ is very small, then the radius is presumably also small. Or, in terms of distributions, for frequencies where the signal has little energy I expect a more exponential distribution than for the important frequencies (that is where the signals' Fourier coefficients will show up).
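A small sketch (arbitrary parameter values; radius and phase drawn independently as in the simple model above) of simulating Fourier coefficients from this Gamma--von Mises prior:
\begin{verbatim}
# Sketch: simulate Fourier coefficients r * exp(i * phi) with
# phi ~ von Mises(mu, kappa) and r ~ Gamma(k, theta), drawn independently.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
mu, kappa = 0.3, 8.0       # phase concentration around the "cone" direction
k_shape, theta = 2.0, 1.5  # Gamma shape/scale; k_shape = 1 gives exponential

phi = rng.vonmises(mu, kappa, size=n)
r = rng.gamma(k_shape, theta, size=n)
alpha = r * np.exp(1j * phi)   # simulated coefficients in the complex plane
\end{verbatim}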
\section{Discussion and outlook}
I introduce a novel technique to detect and classify similar dynamics in signals, where similar dynamics can either mean similar shape for non-stationary signals, or similar auto-correlation for stationary signals. It is an adaptation of the EM algorithm to the power spectra of the signals, and thus future research can benefit from the extensive literature on both the EM algorithm and spectral analysis. Applications to neural spike sorting and pattern recognition in macro-economic time series demonstrate the usefulness of the presented method.
I also used the recently introduced slowness feature for the classification of neuron spikes. The slowness of signals can separate signals from noise and also distinguish differently shaped signals. Compared to multivariate methods in the literature it is very fast and easily computable, and more robust to outliers than for example the standard approach of a simple threshold method.
\section{Non-parametric frequency domain EM algorithm}
\label{sec:EM_spectral_density}
The fundamental idea of this novel approach to clustering dynamic structures in time series is to identify each time series with the distribution it induces on the unit circle - and thus on the interval $[-\pi, \pi]$ - by its Fourier transform. These distributions can then be used in a mixture density setting, and an adaptation of the EM algorithm yields the classification algorithm.
\begin{definition}[Spectral Density]
The spectral density of a stationary, zero-mean time series $x_t$ with auto-covariance function $\gamma_{x}(\ell) = \mathbb{E} x_t x_{t-\ell}$ is defined as
\begin{equation}
\label{eq:spectral_density}
f_x(\lambda) := \frac{1}{2 \pi} \sum_{\ell=-\infty}^{\infty} \gamma_{x}(\ell) e^{i \lambda \ell}, \quad \lambda \in [-\pi, \pi],
\end{equation}
where the limit is understood point-wise if $\lbrace \gamma_{x}(\ell) \rbrace_{\ell=-\infty}^{\infty}$ is absolutely summable, and in the mean-square sense if $\lbrace \gamma_{x}(\ell) \rbrace_{\ell=-\infty}^{\infty}$ is square summable.
\end{definition}
For real-valued processes $\gamma_{x}(\ell) = \gamma_{x}(-\ell)$, so $f_x(\lambda)$ is real and even; moreover, $f_x(\lambda) \geq 0$ for all $\lambda$ since the auto-covariance function is positive semi-definite. Furthermore,
\begin{equation}
\label{eq:spectral_decomposition}
\int_{-\pi}^{\pi} f_x(\lambda) \, d\lambda = \sigma_x^2,
\end{equation}
since $\int_{-\pi}^{\pi} e^{i \lambda \ell} d \lambda = 0$ if $\ell \neq 0$ and $\gamma_x(0) = \sigma_x^2$. Identity \eqref{eq:spectral_decomposition} is also known as the spectral decomposition of the variance of a time series. Hence, the spectral density is a non-negative function on the interval $[-\pi,\pi]$, and peaks at $\lambda_0$ indicate that this frequency is important for the overall variance of the process, since those peaks contribute a lot to the integral in \eqref{eq:spectral_decomposition}.
An estimate of the spectral density is the power spectrum or \emph{periodogram}.
\begin{definition}[Periodogram]
The periodogram (or power spectrum) of $x_t$ is defined as
\begin{equation}
\label{eq:periodogram}
I_{T,x}(\omega_j) := \left| X(\omega_j) \right|^2 = \Big| \frac{1}{\sqrt{T}} \sum_{t=0}^{T-1} x_t e^{-2 \pi i \omega_j t}\Big|^2, \quad \omega_j= j/T, \quad j=0,1,\ldots, T-1
\end{equation}
where $\omega_j$ are the Fourier frequencies (scaled by $2 \pi$ for easier interpretation).
\end{definition}
For large $T$ a frequent model for $I_{T,x}(\omega_j)$ is \citep[see][]{BrockwellDavis91}
\begin{align}
I_{T,x}(\omega_j) =
\begin{cases}
f_x(\omega_j) \, \chi_1^2, & \text{if $j = 0$ or $T/2$,}
\\
f_x(\omega_j) \, \eta, & \text{otherwise.}
\end{cases}
\end{align}
where $\eta$ is a standard (rate $=$ 1) exponential RV. At each frequency $\omega_j$ (except $0$ and $\pi$) the periodogram is an exponential RV with mean equal to the true spectral density $f_x(\omega_j)$. Therefore $I_{T,x}(\omega_j)$ is asymptotically unbiased ($\mathbb{E} I_{T,x}(\omega_j) = f_x(\omega_j)$), but not consistent ($\mathbb{V} I_{T,x}(\omega_j) = f_x(\omega_j)^2 \stackrel{T \rightarrow \infty}{\nrightarrow} 0$). This is especially harmful for large values of the true spectral density, as they exactly correspond to those frequencies which are particularly important for the overall variation.\\
There are two main ways to reduce the variance of the raw periodogram. If only one time series is available, then one can reduce the variance of $I_{T,x}(\omega_j)$ by smoothing over neighboring frequencies \citep{OppenheimSchafer89_Discrete-TimeSignalProcessing} - just as in non-parametric density estimation. This works well for series with many observations, but for small samples such as the neuron spikes or many economic time series averaging over neighboring frequencies is not a practical option as it quickly introduces too much bias at each $\omega_j$.
For $M_k$ independent time series $\lbrace x_{m,t} \rbrace_{m=1}^{M_k} $ of the same type (all from sub-system $\mathcal{S}_k$) a variance-reduced estimate of the true $f_{\mathcal{S}_k}(\lambda)$ can be obtained by averaging over all $M_k$ periodograms at each frequency
\begin{equation}
\label{eq:M_periodograms}
\widehat{f}_{\mathcal{S}_k}(\lambda) \mid_{\lambda = \omega_j} = \frac{1}{M_k} \sum_{m = 1}^{M_k} I_{T, x_{m,t}}(\omega_j), \quad j=0, \ldots, T-1.
\end{equation}
Since by assumption all $x_{m,t} \in \mathcal{S}_k$ have the same dynamic structure, $\widehat{f}_{\mathcal{S}_k}(\lambda)$ is also a good estimate of $f_{x_{m,t}}(\lambda)$ for all $m = 1, \ldots, M_k$.
If the sub-series $x_{m,t}$ are far enough apart in a signal $y_t$, then periodograms $I_{T, x_{m,t}}(\omega_j)$ can be considered as independent estimates of the same underlying true spectral density $f_{\mathcal{S}_k}(\lambda)$. Thus \eqref{eq:M_periodograms} is still unbiased but has a much lower variance
\begin{equation}
\mathbb{E} \widehat{f}_{\mathcal{S}_k}(\omega_j) = f_{\mathcal{S}_k}(\omega_j), \quad \mathbb{V} \widehat{f}_{\mathcal{S}_k}(\omega_j) = \frac{ f^2_{\mathcal{S}_k}(\omega_j)}{M_k}.
\end{equation}
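The variance reduction in \eqref{eq:M_periodograms} is easy to check numerically; the following sketch (white-noise series, so the true spectrum is flat; all names are illustrative) averages $M_k$ raw periodograms frequency by frequency:
\begin{verbatim}
# Sketch: averaging raw periodograms of M_k series of the same type.
import numpy as np

def raw_periodogram(x):
    return np.abs(np.fft.fft(x) / np.sqrt(len(x))) ** 2

rng = np.random.default_rng(3)
T, M_k = 128, 50
series = rng.standard_normal((M_k, T))      # M_k iid series, flat true spectrum

I = np.array([raw_periodogram(x) for x in series])   # M_k x T raw periodograms
f_hat = I.mean(axis=0)                                # averaged estimate

# relative variance of the raw periodogram is about 1 (exponential model) ...
print((I.var(axis=0) / I.mean(axis=0) ** 2)[1:4])
# ... while the averaged estimate fluctuates around the flat truth far less
print(f_hat.std() / f_hat.mean())
\end{verbatim}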
\subsection{From spectral density estimation to the EM algorithm}
Equation \eqref{eq:M_periodograms} looks very similar to the M step of an EM algorithm \citep{McLachlanetal08_EM_book}. By averaging over periodograms in \eqref{eq:M_periodograms} we assume we \emph{know} that series $\mathbf{x}_{i,t}$ came from system $\mathcal{S}_k$. This can be a reasonable assumption when repeatedly measuring time series in controlled physical experiments. In many applications, however, it is not \emph{known} where the signal came from. Thus I introduce a non-parametric frequency domain EM algorithm to classify time series. As a general idea, this shift from averaging periodograms deterministically to probabilistically is analogous to the shift from hard assignments in k-means to soft assignments in the EM algorithm.\\
Formally, let $\mathbf{z}_{i}$ be a $K$-dimensional vector indicating from which system series $\mathbf{x}_{i,t}$ comes from; i.e.\ $z_{i k} = 1$ if $\mathbf{x}_{i,t}$ is from $\mathcal{S}_k$, $0$ otherwise. By averaging over periodograms as in \eqref{eq:M_periodograms}, $\mathbf{z}_i$ is treated as a deterministic, known variable. Thus \eqref{eq:M_periodograms} can be rewritten to
\begin{align}
\widehat{f}_{\mathcal{S}_k}(\lambda) \mid_{\lambda = \omega_j} &= \frac{1}{M_k} \sum_{m = 1}^{M_k} I_{x_{m,t}}(\omega_j) \\
\label{eq:M_periodogram_z}
&= \frac{1}{\sum_{i = 1}^{N} z_{ik} }\sum_{i = 1}^{N} z_{ik} I_{\mathbf{x}_{i,t}}(\omega_j), \quad j=0, \ldots, T-1.
\end{align}
For the EM framework we treat $\mathbf{z}_i$ as random variable with marginal distribution $\mathbb{P} \left( z_{ik} = 1 \right) = \pi_k$, also commonly referred to as \emph{mixing weights}. Rather than weighing periodograms with binary weights in \eqref{eq:M_periodogram_z}, the non-parametric frequency domain EM estimator for $\widehat{f}_{\mathcal{S}_k}(\lambda)$ weighs the periodogram of series $\mathbf{x}_{i,t}$ with the probability of coming from system $\mathcal{S}_k$, that is
\begin{align}
\label{eq:gamma_periodogram}
\widehat{f}_{\mathcal{S}_k}(\omega_j) &= \frac{1}{N_k}\sum_{i = 1}^{N} \gamma_{ik} I_{\mathbf{x}_{i,t}}(\omega_j),
\end{align}
where
\begin{equation}
\label{eq:gamma_ik}
\gamma_{ik} := \mathbb{P} \left( z_{ik} = 1 \mid \mathbf{x}_{i,t} \right),
\end{equation}
and $N_k = \sum_{i = 1}^{N} \gamma_{ik}$ is the effective number of time series from sub-system $\mathcal{S}_k$. As a by-result this new method also gives improved spectral density estimates.\\
For the frequency domain EM algorithm we treat the spectral density/periodogram of $\mathbf{x}_{i,t}$ as a pdf/pmf on the Fourier frequencies $\mathbf{\lambda}_i = \left( \lambda_{i,0}, \ldots, \lambda_{i,T-1} \right)$. Thus we compute \eqref{eq:gamma_ik} by the probability that ``model'' density $f_{\mathcal{S}_k}(\lambda)$ assigns to the empirical distribution function (edf) of the Fourier frequencies of $\mathbf{x}_{i,t}$ (= periodogram of $\mathbf{x}_{i,t}$), i.e.\
\begin{align}
\mathbb{P}\left( z_{ik} = 1 \mid \mathbf{x}_{i,t} \right) := \mathbb{P}\left( I_{\mathbf{x}_{i,t}}(\lambda) \text{ from } f_{\mathcal{S}_k}(\lambda) \right)
\end{align}
As we do not observe the Fourier frequencies $\mathbf{\lambda}_i$ we cannot compute likelihoods and probabilities such as $ \mathbb{P}\left( I_{\mathbf{x}_{i,t}}(\lambda) \text{ from } f_{\mathcal{S}_k}(\lambda) \right)$ directly. However, eq.\ \eqref{eq:loglik_equals_KL_ent} in the Appendix \ref{sec:KL_MLE} shows how to compute the log-likelihood of $\mathbf{\lambda}_i$ as a linear combination of the Kullback-Leibler (KL) divergence between $I_{\mathbf{x}_{i,t}}(\lambda)$ and $f_{\mathcal{S}_k}(\lambda)$, and the entropy of $I_{\mathbf{x}_{i,t}}(\lambda)$.
Thus the EM algorithm can be implemented as follows (a small code sketch is given after the enumeration):
\begin{enumerate}
\setcounter{enumi}{-1}
\item Initialization: set $\tau = 0$ and randomly assign $\mathbf{x}_{i,t}$ to one of the $K$ sub-systems; set class probabilities $\gamma_{ik}^{(\tau)} := 1$ if $\mathbf{x}_{i,t} \in \mathcal{S}_k$; $0$ otherwise. Compute effective number of time series per cluster $N_k^{(\tau)} = \sum_{i=1}^{N} \gamma_{ik}^{(\tau)}$ and estimate mixing weights by $\widehat{\pi}_k^{(\tau)} = \frac{N_k^{(\tau)}}{N}$.
\item \label{item:start_iteration} Estimate $f_{\mathcal{S}_k}(\lambda)$ by a weighted average of the periodograms of $\mathbf{x}_{i,t}$:
\begin{equation}
\label{eq:est_f_j2}
\widehat{f}_{\mathcal{S}_k}^{(\tau)}(\omega_{j}) = \frac{1}{N_k^{(\tau)}} \sum_{i=1}^{N} \gamma_{i{k}}^{(\tau)} I_{\mathbf{x}_{i,t}}(\omega_{j}), \text{ for each } k =1, \ldots, K.
\end{equation}
This gives $K$ spectral densities $\lbrace \widehat{f}_{\mathcal{S}_1}^{(\tau)}, \ldots, \widehat{f}_{\mathcal{S}_K}^{(\tau)} \rbrace =: \mathcal{F}^{(\tau)}$ at iteration $\tau$. Note that for each $k$, $\mathbb{E} \widehat{f}_{\mathcal{S}_k}^{(\tau)}(\omega_{j}) = f_{\mathcal{S}_k}(\omega_{j})$ and $\mathbb{V} \widehat{f}_{\mathcal{S}_k}^{(\tau)}(\omega_{j})\approx \frac{ f^2_{\mathcal{S}_k}(\omega_{j}) }{N_k^{(\tau)}} \ll f^2_{\mathcal{S}_k}(\omega_{j})$.
\item \label{item:end_iteration} Compute KL divergence between each $I_{\mathbf{x}_{i,t}}(\omega_{j})$ and all $\widehat{f}_{\mathcal{S}_k}^{(\tau)} \in \mathcal{F}^{(\tau)}$:
\begin{equation}
\label{eq:KL_div_periodograms}
\KL{I_{\mathbf{x}_{i,t}}}{\widehat{f}_{\mathcal{S}_k}^{(\tau)}} = \sum_{j=0}^{T-1} I_{\mathbf{x}_{i,t}}(\omega_{j}) \log \frac{I_{\mathbf{x}_{i,t}}(\omega_{j})}{\widehat{f}_{\mathcal{S}_k}^{(\tau)}(\omega_{j})}, \quad \forall i, \forall k,
\end{equation}
and update conditional probability that series $\mathbf{x}_{i,t}$ comes from system $\mathcal{S}_k$
\begin{align}
\gamma_{ik}^{(\tau+1)} &=
\mathbb{P}\left( I_{\mathbf{x}_{i,t}}(\lambda) \text{ from } \widehat{f}_{\mathcal{S}_k}^{(\tau)} \right) \quad \forall i, \forall k
\end{align}
using \eqref{eq:KL_div_periodograms} and \eqref{eq:probability_from_loglik}. Update mixing weights
\begin{equation}
\widehat{\pi}_k^{(\tau+1)} = \frac{N_k^{(\tau+1)}}{N} \text{, where } N_k^{(\tau+1)} = \sum_{i=1}^{N} \gamma_{ik}^{(\tau+1)}.
\end{equation}
Set $\tau = \tau +1$.
\item Repeat steps \ref{item:start_iteration} and \ref{item:end_iteration} until convergence of the overall spectral likelihood
\begin{equation}
\label{eq:loglik_total}
\ell(\mathcal{S}_1, \ldots, \mathcal{S}_K; \pi_1, \ldots, \pi_K \mid \mathbf{x}_{1,t}, \ldots \mathbf{x}_{N,t}) = \sum_{i=1}^{N} \log \left( \sum_{k=1}^{K} \widehat{\pi}_k e^{\ell \left( I_{\mathbf{x}_{i,t}}(\omega) \mid \widehat{f}_{\mathcal{S}_k}(\omega) \right) } \right).
\end{equation}
\end{enumerate}
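As a rough illustration of the steps above, here is a minimal sketch of the whole loop (assuming NumPy; natural logarithms; the mixing weights enter the E step as in \eqref{eq:loglik_total}; initialization and the stopping rule are simplified, and all names are illustrative rather than part of the original implementation):
\begin{verbatim}
# Rough sketch of the non-parametric frequency domain EM algorithm.
import numpy as np

def periodogram_pmf(x):
    """Periodogram of a (unit-variance) signal, normalized to a pmf."""
    I = np.abs(np.fft.fft(x) / np.sqrt(len(x))) ** 2
    return I / I.sum()

def spectral_em(X, K, n_iter=100, eps=1e-12, seed=0):
    """X: N x T array of time series.  Returns (gamma, f_hat, pi_hat)."""
    rng = np.random.default_rng(seed)
    N, T = X.shape
    I = np.array([periodogram_pmf(x) for x in X])     # N x T periodogram pmfs

    # step 0: random hard assignment
    gamma = np.zeros((N, K))
    gamma[np.arange(N), rng.integers(0, K, size=N)] = 1.0

    for _ in range(n_iter):
        Nk = gamma.sum(axis=0) + eps                   # effective cluster sizes
        pi = Nk / N                                    # mixing weights
        # step 1: weighted average of the periodograms, cf. eq. (est_f_j2)
        f = (gamma.T @ I) / Nk[:, None]                # K x T
        # step 2: log-likelihoods; by the KL/entropy identity in the appendix,
        # -(KL(I_i || f_k) + H(I_i)) = sum_j I_{ij} log f_{kj}
        loglik = I @ np.log(f + eps).T                 # N x K
        logpost = np.log(pi + eps) + loglik            # weights as in total likelihood
        logpost -= logpost.max(axis=1, keepdims=True)  # numerical stabilization
        gamma = np.exp(logpost)
        gamma /= gamma.sum(axis=1, keepdims=True)
    return gamma, f, pi

# toy usage: two groups with clearly different auto-correlation structure
rng = np.random.default_rng(1)
def ar1(phi, T=64):
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return (x - x.mean()) / x.std()

X = np.array([ar1(0.8) for _ in range(30)] + [ar1(-0.8) for _ in range(30)])
gamma, f_hat, pi_hat = spectral_em(X, K=2)
print(gamma.argmax(axis=1))        # should (mostly) recover the two groups
\end{verbatim}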
Since for unit-variance input $x_t$ the spectral density/periodogram are well-defined continuous/discrete probability distributions, this EM algorithm can be applied to both stationary as well as non-stationary signals: in the first case, the spectral density $f_x(\lambda)$ exists as a non-negative, square integrable function and a large part of the time series and econometrics literature is devoted to the spectral analysis of stationary time series \citep{Iacobucci03_spectralanalysiseconomics, Priestly81_SpectralAnalysis,Mathias04_Irregularspaced}; in the second case, the periodogram \eqref{eq:periodogram}, viewed as a purely data-driven method, represents a valid discrete pmf on $\lbrace \omega_j \rbrace_{j=0}^{T-1}$.
Since frequency domain analysis plays a very prominent and successful role in statistics, time series analysis, and signal processing, this frequency domain EM algorithm to detect similar dynamics or shape can be easily implemented and applied to a great variety of problems where the data has a spectral representation. For example, the method can be used for image clustering ($2D$ Fourier transform) as well as classification of a family of positive semi-definite random matrices $\lbrace A_i \rbrace_{i=1}^{N}$, $A_i \in \mathbb{R}^{T \times T}$ considering their normalized eigenvalues $\lbrace \lbrace \tilde{\lambda}_{j} \rbrace_{j=1}^{T} \rbrace_i$ as a discrete distribution on $j = 1, \ldots, T$.\\
It must be noted, though, that the method comes with all the pros and cons of the basic EM algorithm (the likelihood never decreases, but it may converge to a local optimum). For a detailed account of convergence results and many other properties see \citet{McLachlanetal08_EM_book} or \citet{Bishop07_ML_book}.
\subsection{Choosing the number of clusters}
So far the number of clusters was fixed a-priori and the algorithm gives the (locally) best $K$-cluster solution. However, since this number is rarely known in practice, we must have a rule to select a good $K$. In some cases there is a ``true'' $K$. For example, the micro-electrode in the brain recorded a certain number of neurons. Thus there is an underlying truth which we try to estimate. In other cases, such as the economic time series example, there may not be a true number of sub-systems but choosing the number of clusters is based on convenience and ease of interpretation. One may choose only two clusters to show vastly contrary situations, and then compare this to a more refined structure by allowing more clusters.\\
While the EM algorithm achieves a (locally) optimal classification by maximizing the expected likelihood function, this criterion cannot be used to choose the optimal number of clusters: the likelihood is non-decreasing in $K$, thus maximizing the likelihood with respect to $K$ will always give $K \equiv N$; that is each time series is its own class. For parametric models one can use the BIC to choose $K$ \citep{BiernackiGovaert98_choosingmodels}, but for non-parametric settings this is not directly applicable. A common heuristic is the ``elbow rule'', where the number of clusters is determined by looking where the likelihood does not show a substantial increase anymore.
\citet{CeleuxSoromenho96_EntropyCriterion} propose an entropy based criterion to assess the optimal number of clusters. The \emph{normalized entropy criterion} (NEC) chooses that $K$ which minimizes
\begin{equation}
\label{eq:NEC}
NEC(K) = \frac{E(K)}{\ell^*(K) - \ell(1)}, K \geq 2,
\end{equation}
where $\ell^*(K)$ is the log-likelihood of the best $K$ cluster solution, and
\begin{equation}
\label{eq:classification_entropy}
E(K) = - \sum_{k=1}^{K} \sum_{i=1}^{N} \gamma_{ik} \log \gamma_{ik} \geq 0,
\end{equation}
is the entropy of the soft classification matrix. Since it is only based on the class probabilities and the log-likelihood, it can be easily computed even for non-parametric classification methods.
The entropy in \eqref{eq:classification_entropy} measures how well the best $K$ cluster partition can separate the data. In the case of perfectly separable classification, $\gamma_{ik} = 1$ for one $k$ and $0$ otherwise (for each $i$); in this case $E(K) = 0$. In practice, classification is not perfect, thus in general $E(K) > 0$. Hence it makes sense to choose that $K$ which minimizes $E(K)$ as this is as close as possible to a perfect separation. Since the baseline value of the likelihood changes for each $K$, the entropy is normalized by the optimal log-likelihood for each $K$. The optimal number of clusters is the one that minimizes \eqref{eq:NEC}. See \citet{Biernacki99_ImprovementNEC, CeleuxSoromenho96_EntropyCriterion} for details and simulation results.
It must be noted though that rather than looking at the global minimum, it is more useful to consider all local minima as possible candidates. Only focusing on the global minimum can lead to an under-estimation of the true order $K$. For example, sometimes a $K = 2$ cluster solution gives binary weights to each class - and thus $E(K) = 0$ - but can be far from representing the true number of clusters, as it averages over several clusters in one region of the space. Thus for simulations and applications I use the NEC rule combined with the ``elbow'' rule in the log-likelihood to choose an appropriate $K$.
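For concreteness, a small sketch (illustrative names; the candidate solutions and their optimal log-likelihoods are assumed to come from the EM fits described above) of computing $E(K)$ and $NEC(K)$:
\begin{verbatim}
# Sketch: normalized entropy criterion from the soft assignments gamma (N x K)
# and the optimal log-likelihoods of the K-cluster and 1-cluster fits.
import numpy as np

def classification_entropy(gamma, eps=1e-12):
    return -np.sum(gamma * np.log(gamma + eps))

def nec(gamma, loglik_K, loglik_1):
    return classification_entropy(gamma) / (loglik_K - loglik_1)

# choose K by scanning candidates K >= 2 (inspect all local minima, not only
# the global one), e.g. with hypothetical EM outputs stored in a dict `fits`:
# nec_values = {K: nec(g, lK, loglik_1) for K, (g, lK) in fits.items()}
\end{verbatim}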
\section{Fourier Analysis}
As every bounded, continuous function can be approximated arbitrarily well by Fourier polynomials of sufficiently high order (in the mean square sense), we can write each spike wave as
\begin{equation}
S(t) = \sum_{\ell=-\infty}^{\infty} \alpha_{\ell} e^{i \ell t}, \text{ with } \alpha_{-\ell} = \overline{\alpha}_{\ell},
\end{equation}
where $\overline{z} = a - ib$ is the complex conjugate of the complex number $z = a+ib$. This must hold as $S_i(t)$ is a real-valued function; thus the coefficients at $\ell$ and $-\ell$ must be complex conjugates of each other.
The Fourier coefficients can be computed by
\begin{equation}
\alpha_{\ell} = \frac{1}{2\pi}\int_{-\pi}^{\pi} S(t) e^{-i \ell t}\, dt.
\end{equation}
Since $S_i(t)$ and $\lbrace \alpha_{\ell}^{(i)} \rbrace$ are also in a one-to-one relation we have
\begin{equation}
\text{ Neuron } n_i \leftrightarrow S_i(t) \leftrightarrow \boldsymbol{\alpha}^{(i)}
\end{equation}
Thus if we want to classify which $n_i$ the measured signals are coming from, we can either classify the wave forms $S_i(t)$ or their Fourier coefficients. The advantage of Fourier coefficients is that in general there will be only very few important coefficients (since the wave forms are already ``nice'' harmonics, and not random signals).
In general the function is not defined on $[-\pi, \pi]$: for a periodic function $f(x)$ on $[a, a+ \tau]$ the formulas become
\begin{equation}
f(x)=\sum_{\ell =-\infty}^\infty \alpha_{\ell} \cdot e^{i 2\pi \frac{\ell}{\tau} x}.
\end{equation}
and the coefficients are
\begin{equation}
\alpha_{\ell} = \frac{1}{\tau}\int_a^{a+\tau} f(x)\cdot e^{-i 2\pi \frac{\ell}{\tau} x}\, dx.
\end{equation}
For measured data and for ``nice'' wave forms, the infinite sum for spike-signal $S(t)$ can be split into the important frequencies\footnote{Note that the important frequencies are not necessarily the first frequencies in $j$.} $\mathcal{K}_i$ and the noise
\begin{eqnarray}
S_i(t) &=& \sum_{\ell \in \mathcal{K}_i} \alpha_{\ell} \cdot e^{i 2\pi \frac{\ell}{\tau} t} + \sum_{\ell \notin \mathcal{K}_i} \alpha_{\ell} \cdot e^{i 2\pi \frac{\ell}{\tau} t} \\
&=& \sum_{\ell \in \mathcal{K}_i} \alpha_{\ell} \cdot e^{i 2\pi \frac{\ell}{\tau} t} + \varepsilon(t)
\end{eqnarray}
This decomposition transfers directly to the statistical model for measured signals $s_{j,t}$
\begin{equation}
s_{j,t} = \sum_{\ell \in \mathcal{K}_i} a_{\ell} \cdot e^{i 2\pi \frac{\ell}{\tau} t} + \varepsilon_t.
\end{equation}
\subsection{Estimating the spectral density}
Clustering the waves in the frequency domain also allows us to use the well-developed theory of spectral densities of time series. For example, for a white noise sequence each frequency is equally important ($\rightarrow$ uniform distribution); for the wave signal only very few frequencies are important. Furthermore, the important frequencies will be the low frequencies, not the high ones. This makes it easy to separate between signal and noise.
In practice the data of a wave form is a time series of length $T$, $(s_{i,1}, \ldots, s_{i,T})$; the Fourier coefficients at the Fourier frequencies $\omega_j = \frac{2 \pi j}{T}$ for $j = 0, \ldots, T-1$ are
\begin{equation}
X_j = \sum_{n=0}^{T-1} S_i(n) e^{-\frac{2 \pi i}{T} j n} \quad \quad j = 0, \dots, T-1
\end{equation}
The fast Fourier transform (FFT) can achieve that computation in $\mathcal{O}(T \log T)$ operations.
\begin{table}[!b]
\caption{\label{tab:notation} Notation used in this analysis.}
\begin{center}
\begin{tabular}{ | l | l | l | }
\hline
$n_i$ & Neuron $i$ & $i = 1, \ldots, K$; $K$ unknown\\
$\mathcal{N}$ & collection of all neurons $\mathcal{N} = \lbrace n_1, \ldots, n_K \rbrace$ & \\
\hline
$S_i(t)$ & spike-signal from $n_i$ & $i=1,\ldots, K$, $t \in \mathbb{R}$ \\
$\mathcal{S}$ & collection of all spike-signal wave forms $\mathcal{S} =\lbrace S_1(t), \ldots, S_K(t) \rbrace$ & \\
\hline
$\boldsymbol{\alpha}^{(i)} = \lbrace \alpha_{\ell}^{(i)} \rbrace \in \mathbb{C}$ & Fourier coefficients of $S_i(t)$ & $\ell = 0, \pm 1,\pm2 , \ldots$ \\
$\mathcal{A}$ & collection of all sequences $\mathcal{A} =\lbrace \boldsymbol{\alpha}^{(1)}, \ldots, \boldsymbol{\alpha}^{(K)} \rbrace$ & \\
\hline
$s_{j,t}$ & $N > K$ measured signals & $j = 1, \ldots, N$, $t = 1,\ldots, T$ \\
$\mathbf{a}^{(j)} = \lbrace a_{\ell}^{(j)} \rbrace$ & Fourier coefficients of $s_{j,t}$ & $\ell = 0, 1, \ldots, T-1$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{definition}[Discrete Fourier Transform]
For a sequence $(x_1, \ldots, x_T)$, define the discrete Fourier transform (DFT) as $(X(\omega_0), X(\omega_1), \ldots, X(\omega_{T-1}))$, where
\begin{equation}
X(\omega_k) = \frac{1}{\sqrt{T}} \sum_{t=1}^{T} x_t e^{-2 \pi i \omega_k t},
\end{equation}
and $\omega_k = \frac{k}{T}$ for $k=0, 1, \ldots, T-1$ are the Fourier frequencies.
\end{definition}
Since the Fourier basis $\lbrace e_k(t) := \frac{1}{\sqrt{T}} e^{2 \pi i \omega_k t} \rbrace$ is an orthonormal sequence, the coordinates of $x_t$ in the Fourier basis are $X(\omega_k)$, i.e.\
\begin{equation}
x_t = \sum_{k=0}^{T-1} \innerprod{x_t, e_k} e_k(t) = \frac{1}{\sqrt{T}} \sum_{k=0}^{T-1} X(\omega_k) e^{2 \pi i \omega_k t}.
\end{equation}
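With the $1/\sqrt{T}$ scaling the DFT is an orthonormal change of basis, which can be checked directly (a sketch; NumPy's \texttt{norm="ortho"} option corresponds to this normalization):
\begin{verbatim}
# Sketch: the DFT with 1/sqrt(T) scaling is an orthonormal transform.
import numpy as np

rng = np.random.default_rng(4)
T = 64
x = rng.standard_normal(T)

X = np.fft.fft(x, norm="ortho")        # X(omega_k), includes the 1/sqrt(T)
x_back = np.fft.ifft(X, norm="ortho")  # (1/sqrt(T)) sum_k X(omega_k) e^{+2 pi i omega_k t}

assert np.allclose(x, x_back.real)                          # exact reconstruction
assert np.isclose(np.sum(np.abs(X) ** 2), np.sum(x ** 2))   # Parseval / energy
\end{verbatim}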
\section{Introduction}
Classification of similar signals is a widespread task in signal processing, where similar can either mean similar shape (for non-stationary signals) or similar dynamics (for stationary\footnote{A sequence of random variables (RVs) $\lbrace x_t \rbrace_{t \in \mathbb{Z}}$ is stationary if i) $\mathbb{E} x_t = \mu < \infty$, ii) $\mathbb{V} x_t := \mathbb{E} (x_t - \mu)^2 < \infty$ and iii) the auto-covariance function $\gamma(k):=\mathbb{E} (x_t - \mu) (x_{t-k} - \mu)$ is independent of $t$.} signals). Non-stationary examples are recordings of brain activity (see Section \ref{sec:spike_sorting}) or speech signals; stationary signals can be found in many economic or physical time series. In both cases, researchers want to detect similar dynamics:
\begin{description}
\item Neuro-scientists study the signal shape sent by neurons in order to understand how fast neurons send information across the brain. As a recording can contain signals from many different neurons, it is necessary to cluster them into signals of similar shape \citep{Quiroga04}, which were presumably sent by the same neuron.
\item In economics and public policy one is often interested in similar dynamics of the market/society to characterize, for example, how fast a country recovers from a recession, and how it compares to other countries in the region; or which countries have similar dynamics in their labor market.
\end{description}
Formally, let $\mathcal{X} = \lbrace \mathbf{x}_{1,t}, \ldots, \mathbf{x}_{N,t} \rbrace$ be a family of sequential observations from a dynamical system $\mathcal{S}$, where $\mathbf{x}_{i,t} = (x_{i,1}, \ldots, x_{i,T})$ is the individual time series of entity $i$. For example, $\mathcal{S}$ can be a particular area in the brain or the economic rules in the labor market. Here we consider systems which can be naturally divided into $K$ homogeneous sub-systems $\mathcal{S}_1, \ldots, \mathcal{S}_K$, each one with its own characteristic dynamics. In the neurology context these sub-systems $\mathcal{S}_k$ represent different neurons sending a signal; in economics $\mathcal{S}_k$ could correspond to different dynamics in the market, e.g.\ countries that recover fast from a recession ($\mathcal{S}_1$) versus countries that need more time to catch up again to global economy ($\mathcal{S}_2$).
Many clustering and dimension reduction techniques such as principal component analysis (PCA) \citep{Jolliffe02_PCAbook} focus on the mean and/or variance to make a reduction/classification in similar blocks of data. Yet these two statistics are irrelevant for the correlation over time; \citet{Keogh03_MeaninglessTSclustering} even claim that time series clustering is entirely meaningless. \citet{Simon05_UnfoldingTScluster} show that time series clustering is not meaningless \emph{per se}, but that the similarity measure must be chosen carefully. They embed each time series in a higher dimensional space of lagged variables, $x_t \rightarrow \tp{\left(x_{t-\tau_1}, x_{t-\tau_2} \ldots, x_{t-\tau_s} \right)} \in \mathbb{R}^{s}$, $0 \leq \tau_1 < \tau_2 < \ldots < \tau_s < T$, such that signals with different dynamics can be easily distinguished in the higher dimensional $\mathbb{R}^s$. This method works particularly well for long time series even with non-linear dynamics. If only few observations per series are available ($T \approx 100$ or even only $50$), then time-embeddings are extremely sparse in $\mathbb{R}^{s}$ and thus clustering becomes impractical.
For few observations per series it can be useful to first fit a parametric model $\mathcal{M}_{\theta_j}$ to every series $\mathbf{x}_{j,t}$, and then cluster in the lower-dimensional parameter space $\lbrace \widehat{\theta}_{1}, \ldots, \widehat{\theta}_{n} \rbrace$. For example, for the broad class of auto-regressive integrated moving average (ARIMA) models several approaches have been studied: \citet{Dhiral01_ClusteringARIMA} cluster ARIMA models based on the distance between their estimated coefficients; \citet{Piccolo90_ARIMAdistance} uses the Euclidean distance between their auto-regressive extensions as a metric on the invertible ARIMA model space; \citet{Maharaj00} present a hypothesis test to distinguish between two - not necessarily independent - stationary time series by comparing auto-regressive fits to the data. See \citet{Liao05} for a detailed survey. Although this works well for small $T$, it suffers from a model selection bias: if we pick the wrong model for just some of the series, then the clustering cannot be accurate anymore. Furthermore, if the models are not nested in some sense, then it is hard to compare the parameters of $\mathbf{x}_{j,t}$ to those of $\mathbf{x}_{i,t}$.\\
Here I propose a novel approach to clustering similar dynamics using frequency domain properties of the signals, which avoids the model selection bias and at the same time works even with few observations. Existing frequency domain classification methods are mostly based on defining a metric on the spectrum and then using a clustering algorithm based on the so-obtained distance (or similarity) matrix. \citet{CaiadoNunoPena06} use hierarchical clustering algorithm on the Euclidean distance between the log-spectra; \citet{Savvides08} use a distance measure on cepstral coefficients obtained from the log-spectra. The method proposed here differs from existing techniques as it treats the magnitude of the discrete Fourier transform (DFT) of signal $\mathbf{x}_{j,t}$ as a probability mass function (pmf) on the unit circle and thus leads to a natural classification by an adaptation of the well-known Expectation Maximization (EM) algorithm \citep{Dempster77_EM, Bishop07_ML_book}. Section \ref{sec:EM_spectral_density} describes a non-parametric version which avoids the model selection bias, but it can also be easily adapted to a parametric framework, e.g.\ to cluster time series within the ARIMA model class.
\subsection{Similar dynamics in socio-economic time series}
In macro-economics and public policy researchers are often interested in comparing economies/societies with each other. For example, annual unemployment rates over the course of several decades can show law changes or adaptations of economic interdependencies within a country as well as with the rest of the world.
Here I will consider the annual per-capita income growth rate of the ``lower 48'' in the US from $1958$ to $2008$ compared to the overall US growth rate
\begin{equation}
g_{j,t} := r_{j,t} - r_{US,t}, \quad j \in \lbrace \text{Alabama, $\ldots$, Wyoming} \rbrace,
\end{equation}
where $r_{j,t}$ is the annual growth rate of region $j$ (see Appendix \ref{sec:Data} for details). Clustering states according to similar economic dynamics can help to decide where to provide economic support to overcome a recession faster. For example, if certain states do not show any important dynamics on a 7-8 year period - which is typically considered the ``business cycle'' \citep{Halletetal08_EuropeanBusinessCycle, Iacobucci03_spectralanalysiseconomics} - then it might be more useful to invest available money in those states that are heavily affected by these global economy swings.
This dataset has also been analyzed in \citet{Dhiral01_ClusteringARIMA}, who fit auto-regressive models of order $1$ ($AR(1)$) to the non-adjusted growth rates $r_{j,t}$ for pre-selected $25$ states, and then cluster them based on the different fits. Although this procedure gives useful results, it is very unlikely that different dynamics for each of the $48$ states only manifest themselves in a different $AR(1)$ coefficient. In particular, simple $AR(1)$ models cannot capture the business cycle dynamics which are clearly visible in the power spectra of the growth rates (even in the adjusted rates) - see Section \ref{sec:marco_econ_results}, Fig.\ \ref{fig:EM_spikes_periodogram}.
The non-parametric EM algorithm introduced in Section \ref{sec:EM_spectral_density} does not face this model selection bias, but can capture different cyclic components in all $48$ time series.
\subsection{Neuron identification - ``spike sorting''}
\label{sec:spike_sorting}
The human brain can be seen as a big information-processing and -storing unit. For example, the information we get from watching our environment must be carried from the eye to the visual cortex. As the visual cortex resides in the back of the brain, neurons have to transmit information from the front to the back of the head just for us to be able to make sense of what we see, setting aside the neurons involved in executing our reaction to it. Every time a neuron transmits information it emits an electrochemical signal, which can be measured by an electrode put in the brain area of interest. Figure \ref{fig:data_recording_40_50} (top) shows a recorded signal $y_t$ with $73,500$ observations.\footnote{For a detailed description see Appendix \ref{sec:Data}.}
\begin{figure}[!t]
\centering
\makebox{\includegraphics[width=0.8\textwidth]{images/data_recording_1_50.png}}
\caption{\label{fig:data_recording_40_50} Brain signal recording: (top) entire $49$ seconds; (bottom left) zoom to $[43.2,43.6]$ - the transition between extremely large spikes ($\sim 43.32$) and smaller spikes ($\sim 43.57$); (bottom right) zoom to $[43.26, 43.29]$ where two spikes become visible}
\end{figure}
On a macro-level these measurements help to identify active areas of the brain which are involved in performing a particular task; e.g.\ for visual tasks the back of the brain shows up as an active area. Micro-level properties of neuron activity are also important: for example \citet{KassVentura01} analyze how fast neurons can send information - this is characterized by a neuron's ``firing rate''. To do this non-trivial task, however, one implicitly makes an important assumption: it is known which neuron sent which signal. Figure \ref{fig:data_recording_40_50} clearly shows that micro-electrodes cannot single out one neuron, but record a concatenation - and sometimes a superposition - of an unknown number of neurons $n_1, \ldots, n_K$ transmitting information plus a lot of background noise. Hence to successfully analyze the firing rates, it is necessary to
\begin{enumerate}
\item[i)] distinguish actual spikes from background noise, and
\item[ii)] identify and assign each signal to one particular neuron $n_k$, $k = 1, \ldots, K$ where the number of neurons $K$ is unknown: an electrode records as many neurons as there are in its local neighborhood.
\end{enumerate}
Part i) constitutes one of the core problems in signal processing \citep{ZhengmingXiaojun00, DaviesJames07}. Consequently there is an immense literature on signal/noise separation, especially in audio and speech processing \citep{Jangetal03_SingleChannel, Barry05}. For sub-problem ii) we can classify the observed spikes into classes of similarly shaped wave-forms. Whether these shapes actually correspond to one single neuron $n_k$ or still to a collection of neurons depends on whether each neuron has a unique wave form. Only if such a one-to-one relation exists can we determine the firing rate of each single neuron. Biochemical and physiological findings suggest that each neuron has its own unique wave-form, which can only vary slightly based on the state of the neuron. Thus it should be possible to classify neuron activity according to the form of the signal - the ``spike''. This classification task is commonly known as ``spike sorting'' \citep{Lewicki98,Kim06,Natakanietal01}.
A common and simple approach is performing PCA on the spikes, and then cluster the signals according to the PCA coefficients \citep{Wood04_Automaticspike}. Although generally there are far more spikes than observations per spike ($N \gg T$), still the first $2$-$3$ eigen-vectors of the low-rank correlation matrix capture most of the variation in the data. However, since PCA selects sources by the direction of maximum variance, it will classify low power firings from the same neuron as different neurons.
The frequency domain classification algorithm introduced here builds on the relation between the shape of the signal and its Fourier coefficients. Similar shapes have similar Fourier coefficients, and thus clustering in the frequency domain should reveal these sub-classes.
\section{KL divergence and maximum likelihood}
\label{sec:KL_MLE}
Let $p_k := \mathbb{P}(X = a_k)$ define a probability distribution for the RV $X$ taking values in the finite alphabet $\mathcal{A} := \lbrace a_1, \ldots, a_K \rbrace$. The Kullback-Leibler (KL) divergence between two discrete probability distributions $p = \lbrace p_1, \ldots, p_K \rbrace$ and $q = \lbrace q_1, \ldots, q_K \rbrace$
\begin{equation}
\KL{p}{q} := \sum_{i=1}^{K} p_i \log_{2} \frac{p_i}{q_i} = \mathbb{E}_p \log_{2} \frac{p_i}{q_i}
\end{equation}
measures how far $p$ is from the ``truth'' $q$; in particular, if $p = q$ then $\KL{p}{q} = 0$.
Let $\tilde{p}(x)$ be the empirical distribution function (edf) of a sample $\mathbf{x} = (x_1, \ldots, x_N)$
\begin{equation}
\label{eq:empirical_distribution}
\tilde{p}(\mathbf{x}) := \frac{1}{N} \sum_{n=1}^{N} \delta(x - x_n),
\end{equation}
where $\delta(y)$ is the Dirac delta function, and let $p(\mathbf{x} \mid \theta)$ be a model (distribution) for the RV $X$. The maximum likelihood estimator (MLE) is that $\theta$ which maximizes the log-likelihood of the data (assuming iid)
\begin{equation}
\label{eq:loglik}
\ell( \theta \mid \mathbf{x}) = \sum_{n=1}^{N} \log p(x_n \mid \theta).
\end{equation}
In terms of the KL divergence it is intuitive to select that $\theta$ which minimizes the distance between the empirical distribution of the data, $\tilde{p}(\mathbf{x})$, and the model $p(\mathbf{x} \mid \theta)$. In fact, they are equivalent since
\begin{eqnarray}
\label{eq:KL_entropy_likelihood}
\KL{\tilde{p}(\mathbf{x})}{p(\mathbf{x} \mid \theta)} = \int \tilde{p}(\mathbf{x}) \log \frac{\tilde{p}(\mathbf{x})}{p(\mathbf{x} \mid \theta)} d \mathbf{x} = - H(\tilde{p}(\mathbf{x})) - \int \tilde{p}(\mathbf{x}) \log p(\mathbf{x} \mid \theta) d \mathbf{x},
\end{eqnarray}
where $H(\tilde{p}(\mathbf{x})) = - \int \tilde{p}(\mathbf{x}) \log \tilde{p}(\mathbf{x}) d \mathbf{x} $ is the entropy of $\tilde{p}(\mathbf{x})$, which is independent of $\theta$. Thus
\begin{equation}
\label{eq:KL_equiv_loglik}
\arg \min_{\theta} \KL{\tilde{p}(\mathbf{x})}{p(\mathbf{x} \mid \theta)} = \arg \max_{\theta} \mathbb{E}_{\tilde{p}} \log p(\mathbf{x} \mid \theta).
\end{equation}
Plugging \eqref{eq:empirical_distribution} in the right hand side of \eqref{eq:KL_equiv_loglik} shows the equivalence of KL divergence minimization and log-likelihood maximization as
\begin{eqnarray}
\mathbb{E}_{\tilde{p}} \log p(\mathbf{x} \mid \theta) &=& \frac{1}{N} \int \sum_{n=1}^{N} \delta(x - x_n) \log p(x \mid \theta) dx = \frac{1}{N} \sum_{n=1}^{N} \log p(x_n \mid \theta) \\
\label{eq:1_N_loglik}
&=& \frac{1}{N} \ell( \theta \mid \mathbf{x}).
\end{eqnarray}
Conversely the log-likelihood of $\mathbf{x}$ can be computed by
\begin{eqnarray}
\label{eq:loglik_equals_KL_ent}
\ell( \theta \mid \mathbf{x}) = -N \cdot \left[ \KL{\tilde{p}(\mathbf{x})}{p(\mathbf{x} \mid \theta)} + H(\tilde{p}(\mathbf{x})) \right],
\end{eqnarray}
and consequently
\begin{equation}
\label{eq:probability_from_loglik}
\mathbb{P} \left( \mathbf{x} \mid \theta \right) = e^{\ell( \theta \mid \mathbf{x})}.
\end{equation}
Equations \eqref{eq:loglik_equals_KL_ent} and \eqref{eq:probability_from_loglik} play a key role in the non-parametric EM algorithm defined on the power spectra, as they allow us to compute $\ell( \theta \mid \mathbf{x})$ even though $\mathbf{x}$ has not been observed directly, but just its edf $\tilde{p}(\mathbf{x})$ and a model $p(\mathbf{x} \mid \theta)$. In this frequency domain framework, the data $\mathbf{x}$ are the unobserved Fourier frequencies $\omega_0, \ldots, \omega_{T-1}$, the edf $\tilde{p}(\mathbf{x})$ is the periodogram $I_{T,x}(\omega_k)$, and the ``true'' model $p(\mathbf{x} \mid \theta)$ is the EM estimate $\widehat{f}_{\mathcal{S}_k}(\lambda) \mid_{\lambda = \omega_j}$ of the spectral density of sub-system $\mathcal{S}_k$ - see \eqref{eq:est_f_j2}. Thus the conditional probability $\gamma_{ik} = \mathbb{P}(z_{ik} = 1 \mid \mathbf{x}_{i,t})$ can be computed by
\begin{equation}
\gamma_{ik} = \frac{e^{\ell \left( I_{\mathbf{x}_{i,t}}(\lambda) \mid \widehat{f}_{\mathcal{S}_k}(\omega) \right) }}{\sum_{k'=1}^{K} e^{\ell \left( I_{\mathbf{x}_{i,t}}(\lambda) \mid \widehat{f}_{\mathcal{S}_{k'}}(\omega) \right) }}, \quad i =1, \ldots, N \text{ and } k = 1, \ldots, K.
\end{equation}
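A minimal sketch of this computation (assuming NumPy; natural logarithms; the sample-size factor $N$ in \eqref{eq:loglik_equals_KL_ent} only changes how soft the assignments are and is dropped here; names are illustrative):
\begin{verbatim}
# Sketch: gamma_{ik} from KL divergences and entropies of the periodograms.
import numpy as np

def gamma_from_spectra(I, f_hat, eps=1e-12):
    """I: N x T periodogram pmfs; f_hat: K x T current spectral estimates."""
    H = -np.sum(I * np.log(I + eps), axis=1, keepdims=True)          # N x 1
    KL = np.sum(I[:, None, :] * (np.log(I[:, None, :] + eps)
                                 - np.log(f_hat[None, :, :] + eps)), axis=2)
    loglik = -(KL + H)                                               # N x K
    loglik -= loglik.max(axis=1, keepdims=True)    # exponentiate safely
    gamma = np.exp(loglik)
    return gamma / gamma.sum(axis=1, keepdims=True)
\end{verbatim}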
\section{``Spike sorting'' in the time domain}
Let $\mathcal{N}$ be the set of all neurons and assume that each neuron $n_i \in \mathcal{N}$ has a unique characteristic spike $S_i(t) \in \mathcal{C}[a,b]$, where $ \mathcal{C}[a,b]$ is the set of all continuous functions
on $[a,b]$. The spike is unique in the sense that $n_i = n_j$ if and only if $S_i(t) = S_j(t)$; put in words, if we see two different spikes then we know that two different neurons were active, and vice versa.
The micro-electrode only records a subset of neurons $n_k$, $k = 1, \ldots, K$, where $K$ is unknown. In Section \ref{sec:spike_detection} I use a slowness measure to distinguish between signal and noise, and in Section \ref{sec:features} I fit a Gaussian mixture model (GMM) to the slowness to detect different neurons, based on the assumption that every different spike shape has its own characteristic slowness.
\subsection{Spike detection}
\label{sec:spike_detection}
Given the recorded signal $y_t$ it is necessary to extract windows of size $T$ containing a spike.\footnote{Since these extracted windows containing a spike will later on be used as the $N$ time series $\lbrace \mathbf{x}_{1,t}, \ldots, \mathbf{x}_{N,t} \rbrace$ of length $T$ to the classification algorithm, I also use $T$ here to denote the window size.} These signals $s_{j, t}$ of length $T$ represent the family of sequential observations $\mathcal{X} = \lbrace s_{1, t}, \ldots, s_{N, t} \rbrace \in \mathbb{R}^{T \times N}$, where $N$ is the number of detected spikes. As the entire micro-electrode recording is much longer than the length of one single spike there are far more extracted spikes than number of observations per signal ($N \gg T$). Since the electrode only records signals in its local neighborhood, we can also expect a small number of sub-systems (neurons) $\mathcal{S}_1, \ldots, \mathcal{S}_K$ of similar shape ($K \ll N$). The size of the window must match the length of a typical spike: the lower right panel of Fig.\ \ref{fig:data_recording_40_50} suggests that a typical spike lasts for about $0.0035$ seconds $\approx$ $55$ time steps (vertical red lines). Thus for the rest of this section I set $T = 55$.
Since we do not know a-priori where a spike occurs we need a rule that tells us where to look for it. Whereas characterizing spikes visually is easy, designing a quantitative automated rule that can describe spikes is much more difficult. A common approach \citep{Quiroga04} is to set a threshold value $tol$ and a spike is detected if the signal exceeds this threshold. This threshold rule will not only be very sensitive to outliers, but also bias the selected spikes in favor of spikes with large variance (power). Furthermore neurons sometimes fire with lower power than usual, and thus may not exceed such a threshold. Although missing these spikes would not affect the spike sorting algorithm, it will underestimate the firing rate of neuron $n_k$.
Here I characterize ``non-spikes'', i.e.\ noise, in a way that detects spikes according to properties of the entire signal, not of one single observation (as the threshold rule does). One way to characterize noise is that it moves much faster than any spike - whatever such a spike may look like. \citet{Berkes05} introduced a measure of slowness for a signal $x_t$, defined as the variance of the differenced, unit-variance signal
\begin{equation}
\label{eq:slowness}
\Delta(x_t) = \mathbb{V} \left( x_t - x_{t-1} \right), \quad \mathbb{V} x_t = 1.
\end{equation}
For an independent identically distributed (iid) signal $\varepsilon_t$ the slowness satisfies $\Delta\left(\varepsilon_t\right) = 2$. On the other hand, if $x_t \rightarrow const$ then $\Delta(x_t) \rightarrow 0$. Therefore, the larger $\Delta(x_t)$, the faster $x_t$.
Computing the slowness of the signal in a sliding window over $y_t$ reveals noisy parts (fast) and - complementary - the spikes (slow). The red (right) histogram in Fig.\ \ref{fig:MonteCarlo_slowness} shows simulated $\Delta\left( \varepsilon_t \right)$, where $\varepsilon_t \stackrel{iid}{\sim} \mathcal{N}(0,1)$ with $t = 1, \ldots, T = 55$ for $N = 10,000$ replications. Clearly, the central limit theorem (CLT) comes into play and the simulated values are centered around their true slowness $\Delta\left(\varepsilon_t\right) = 2$.
However, there is no obvious reason to assume that the brain background noise in the neighborhood of the micro-electrode is necessarily iid. In fact, the empirical slowness (blue histogram) of the sliding windows is substantially smaller than $2$, showing that brain background noise is not iid.\footnote{Since $\Delta\left( \varepsilon_t \right) = \Delta\left( const \cdot \varepsilon_t \right)$ by definition ($\mathbb{V} x_t \equiv 1$ in \eqref{eq:slowness}), the lower slowness for the brain signal is not due to a lower variance white noise sequence, but indeed a manifestation of some dependence in the data.}
But even though we do not know how slow it is, we know - and can clearly see in Fig.\ \ref{fig:MonteCarlo_slowness} (bottom) - that noise moves much faster than any of the spikes: $\Delta(\text{background noise}) \gg \Delta\left( \text{any spike} \right)$. Hence, we can learn the boundary value that distinguishes noise and spikes from the data. At this stage we are only concerned with separating spikes from noise, thus we can choose a conservative value for the boundary. If it turns out that this still includes too much noise, then a clustering algorithm will put these falsely extracted ``spikes'' in a \emph{noise} class. On the other hand, a too small boundary will miss spikes and thus bias the analysis of firing rates towards larger firing intervals. The lower panel of Figure \ref{fig:MonteCarlo_slowness} suggests that $tol = 0.25$ provides a good separation between noise on the right and spikes on the left.
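A minimal sketch of this detection step (window length and boundary follow the text; the rule of skipping ahead after a detection is an illustrative simplification, not the exact procedure used for the data):
\begin{verbatim}
# Sketch: slowness-based spike detection with a sliding window.
import numpy as np

def slowness(x):
    """Variance of the differenced, unit-variance signal, eq. (slowness)."""
    z = (x - x.mean()) / x.std()
    return np.var(np.diff(z))

def detect_onsets(y, T=55, tol=0.25):
    """Start indices of windows whose slowness falls below the boundary."""
    onsets, t = [], 0
    while t + T <= len(y):
        if slowness(y[t:t + T]) < tol:
            onsets.append(t)
            t += T        # jump past the detected spike
        else:
            t += 1
    return onsets
\end{verbatim}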
\begin{figure}[!t]
\centering
\subfloat[(top): (red) Simulation of $\Delta \left(\varepsilon_t\right)$ with $10,000$ replications for iid $\lbrace \varepsilon_t \rbrace_{t=1}^{T=55}$; (blue) empirical slowness of the rolling window over the data $y_t$. \newline (bottom): zoom into $(0, 0.4)$ with the boundary between spikes ($ < 0.25$) and noise ($> 0.25$).]{\includegraphics[width=.45\textwidth]{images/MonteCarlo_slowness}\label{fig:MonteCarlo_slowness} }
\hspace{1cm}
\subfloat[(top) detected spikes; (below) their $\log$ slowness and a Gaussian mixture fit with $6$ components (chosen according to BIC score)]{\includegraphics[width=.45\textwidth]{images/extracted_spikes}\label{fig:extracted_spikes} }
\caption{How slowly do we think? - the slowness of brain recordings.}
\label{fig:Flatness}
\end{figure}
This rolling window approach gives the so-called onset times (the moment a neuron fires; the spike lasts for $T = 55$ units of time), which are then used to extract possible spikes $s_{j,t}$ from $y_t$. An additional alignment step takes place to avoid slight misplacements of the onset times; following the spike sorting literature, this was done by identifying the maximum of each spike and adjusting the window such that all signals have their maximum at the same position.
Figure \ref{fig:extracted_spikes} shows $n = 1,747$ extracted signals of length $T = 55$ obtained by applying the slowness measure on a sliding window using $tol = 0.25$. The rolling window spike detection could not exclude noise completely, so in the upper left panel of Fig.\ \ref{fig:extracted_spikes} one can visually distinguish $2$-$3$ spikes and some noise. Even though the low-variance signals seem to be noise, they could also be just low-power spikes. Since the slowness measure is invariant to scaling, it does not falsely ignore low power signals.
Before applying a standard classification algorithm on the extracted signals in Section \ref{sec:Results}, I first describe the main contribution of this work.
\section{Applications}
\label{sec:Results}
In this section I demonstrate the usefulness and wide applicability of the presented methods on income growth (stationary) and neuron spike train (non-stationary) data.
\subsection{States with similar income dynamics in the US}
\label{sec:marco_econ_results}
First, all $48$ (more or less) stationary series $\mathbf{x}_{i,t}$ were transformed via the DFT to get the raw periodograms $I_{\mathbf{x}_{i,t}}(\omega)$, $i=1, \ldots, 48$. Without any further smoothing all $K$-cluster models for $K = 1, \ldots, 6$ were fitted to the data and both the $NEC(K)$ and the log-likelihood suggest that $K = 3$ clusters provide a good fit. The upper row in Fig.\ \ref{fig:US_growth_1960_2008} shows the periodograms of the three classes and the estimate $\widehat{f}_{\mathcal{S}_k}(\lambda)$ (black line) using \eqref{eq:gamma_periodogram}. The x-axis represents the Fourier frequencies $\omega_j$, which have been re-scaled from $[0, \pi]$ to $[0,0.5]$ for easier interpretation. Peaks at frequency $\omega_j$ mean that periods of length $1/\omega_j$ are important for the variation in the data. For example, the blue series show two important low frequencies (long cycles): $\omega \approx 0.04$ and $\omega \approx 0.18$. They correspond to a cycle of $25$ years and $5$-$6$ years -- which represent a generation cycle and a (short) business cycle \citep{Tylecote94_Longcycles}. Note that $AR(1)$ models \citep{Dhiral01_ClusteringARIMA} may be appropriate for the red dynamics ($AR(1)$ coefficient slightly negative), but cannot capture two cycles as shown in the blue and green periodograms.\\
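As an illustration of the preprocessing in this subsection, the raw periodograms can be computed as follows (a Python sketch; the array \texttt{X} holding the $48$ series is hypothetical, and the normalization of the periodogram is one common convention).
\begin{verbatim}
import numpy as np

def raw_periodogram(x):
    # Raw (unsmoothed) periodogram of a demeaned series on the Fourier
    # frequencies, reported on the [0, 0.5] scale used in the figures.
    x = np.asarray(x, float)
    x = x - x.mean()
    T = len(x)
    I = np.abs(np.fft.rfft(x)) ** 2 / T
    freqs = np.fft.rfftfreq(T)        # cycles per observation, i.e. [0, 0.5]
    return freqs[1:], I[1:]           # drop the zero frequency

# periodograms of all state-level series (X is a hypothetical 48 x T array)
# I_all = np.array([raw_periodogram(row)[1] for row in X])
\end{verbatim}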
\begin{figure}[!t]
\centering
\makebox{\includegraphics[width=0.9\textwidth]{images/US_growth_1960_2008_3clusters}}
\caption{\label{fig:US_growth_1960_2008} Non-parametric, frequency domain EM detects $3$ dominant dynamics of per-capita income growth (top); (left) normalized entropy from \eqref{eq:NEC} as a function of $K$; (center) spectral log-likelihood \eqref{eq:loglik_total} at the optimum for each $K$; (right) color-coded US map where red/green/blue intensity equals the conditional probability $\gamma_{nk}$ (RGB = ($\widehat{\gamma}_{n1},\widehat{\gamma}_{n2}, \widehat{\gamma}_{n3}$)).}
\end{figure}
The spatial connectivity of the obtained clusters confirms the good model fit as it separates the US economy $\mathcal{S}$ into three major sub-economies/regions:\footnote{Any resemblance of the RGB color system to politics is purely coincidental.}
\begin{description}
\item[$\approx$ East \& Rockies \& CA (blue):] economy is highly persistent, changes are slow; business cycle of $\approx 5$-$6$ years is also important.
\item[$\approx$ South-West (green):] also highly persistent and affected by global business cycle of $7-8$ years (peak at $\omega \approx 0.13$).
\item[$\approx$ Mid-West (red):] almost flat spectrum, high frequencies (short cycles) are slightly more important; decoupled from global business cycle.
\end{description}
One possible explanation why the red states have a flat spectrum is that they are mostly agricultural states, and since people have to eat no matter how the global economy is doing, the red states' income is not affected much by recessions or other market fluctuations. By contrast, states whose economy - and thus income - relies heavily on industry, production, or technology are more affected by global economic swings, which typically happen every $7$-$8$ years.\\
Hence, the classification map in Fig.\ \ref{fig:US_growth_1960_2008} can provide a basis for more effective policies to boost local economies facing a recession: it might be more effective to allocate the main part of public investment to states that are actually affected by the business cycle, rather than to states which are decoupled from the global economy.
\subsection{Spike sorting}
\label{sec:features}
For the neuron classification we can either try to fit a mixture model directly on the $T$-dimensional data, or compute ``features'' for each spike that summarize its shape. A good feature selection will reduce the dimensionality of the data and thus greatly accelerate computations.
Here I will cluster both in the time and frequency domain: for the first I fit a Gaussian mixture model (GMM) on the logarithm of the slowness of each spike, $\log \Delta \left( s_{j,t} \right)$; for the second I use the frequency domain EM algorithm on the power spectra induced by each spike $s_{j,t}$.
\subsubsection{Gaussian mixture model on slowness}
The histogram in Fig.\ \ref{fig:extracted_spikes} of $\lbrace \log \Delta \left( s_{j,t} \right) \rbrace_{j=1}^{1,747}$ shows 5-6 peaks, which presumably correspond to 5-6 differently shaped spikes. Thus I fit GMMs to $\log \Delta \left( s_{j,t} \right)$ and assign each spike $s_{j,t}$ to the cluster with the highest a posteriori probability. Table \ref{tab:gaussian.mix} shows parameter estimates of the $6$ component model, which was chosen according to the highest BIC score (Fig.\ \ref{fig:extracted_spikes}) from all GMMs up to order $10$.\footnote{To avoid local maxima, I ran the EM algorithm (package \texttt{mixtools} in R) $100$ times for each $K$ and chose the solution with the largest likelihood among the runs.}
\begin{table}[!t]
\begin{center}
\caption{\label{tab:gaussian.mix} EM estimates of a $6$ component GMM for $\log \Delta(s_{j,t})$}
\begin{tabular}{rrrrrrr}
\hline
& Comp 1 & Comp 2 & Comp 3 & Comp 4 & Comp 5 & Comp 6 \\
\hline
\hline
$\pi_k$ & 0.069 & 0.218 & 0.093 & 0.511 & 0.078 & 0.031 \\
\hline
$\mu$ & -3.285 & -2.766 & -2.331 & -2.171 & -1.671 & -1.442 \\
$\sigma^2$ & 0.155 & 0.171 & 0.037 & 0.156 & 0.125 & 0.042 \\
\hline
\end{tabular}
\end{center}
\end{table}
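This time-domain clustering step can be sketched in Python as follows (the paper uses the R package \texttt{mixtools}; here \texttt{scikit-learn} is used instead, with \texttt{n\_init} playing the role of the $100$ EM restarts). Note that \texttt{sklearn}'s BIC is defined so that smaller is better, which corresponds to the ``highest BIC score'' of the text under the opposite sign convention.
\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm_by_bic(log_slowness, k_max=10, n_restarts=100, seed=0):
    # Fit 1-D GMMs to the log-slowness values for K = 1..k_max and select K by BIC.
    X = np.asarray(log_slowness, float).reshape(-1, 1)
    fits = {}
    for k in range(1, k_max + 1):
        gm = GaussianMixture(n_components=k, n_init=n_restarts,
                             random_state=seed).fit(X)
        fits[k] = (gm.bic(X), gm)
    k_best = min(fits, key=lambda k: fits[k][0])
    gm = fits[k_best][1]
    labels = gm.predict(X)   # hard assignment by highest posterior probability
    return k_best, gm, labels
\end{verbatim}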
The corresponding spikes are shown in the upper right panel of Fig.\ \ref{fig:extracted_spikes}. As $tol = 0.25$ was chosen conservatively, two of the shapes still represent noise, so the GMM effectively identifies $K=4$ different neurons.
\subsubsection{Clustering in the frequency domain}
After time-domain techniques, I use the frequency domain EM algorithm described in Section \ref{sec:EM_spectral_density}. An additional advantage of working in the frequency domain compared to the time-domain is that misalignment of the spikes does not affect the clustering.
\begin{figure}[!t]
\centering
\makebox{\includegraphics[width=0.75\textwidth]{images/EM_spikes_periodogram_6clusters_original_data}}
\caption{\label{fig:EM_spikes_periodogram} EM on periodograms of spike signals $s_{j,t}$ with $K = 6$ clusters.}
\end{figure}
Also here I fit all mixture models up to order $K = 9$. In this case $NEC(K)$ achieves a global minimum at $K = 2$ and is monotonically increasing (not shown here). However, the two cluster solution is only optimal in the sense that it separates perfectly between spikes and noise, even though there is a relevant sub-classification within the spikes (similar to the behavior of $NEC(K)$ in the simulations). Hence here I use the ``elbow'' rule in the log-likelihood to determine the number of clusters. The most prominent ``elbow'' occurs at $K = 3$ (Fig.\ \ref{fig:EM_spikes_periodogram}) followed by another level-shift at $K = 6$. Since $K = 6$ was also chosen by the BIC for the time-domain classification, I choose $K = 6$ for easier comparison. The $K = 6$ cluster solution reveals five spikes and one noise class (green shapes). Thus, compared to the time-domain technique, the frequency domain version detects one more spike.
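Since the exact form of \eqref{eq:NEC} is defined earlier in the paper, the sketch below assumes the standard normalized entropy criterion of Celeux and Soromenho, computed from the posterior class probabilities $\gamma_{nk}$ and the maximized log-likelihoods; it is meant only to illustrate how the $NEC(K)$ curve discussed above is obtained.
\begin{verbatim}
import numpy as np

def nec(gamma, loglik_K, loglik_1):
    # Normalized entropy criterion for a K-component fit:
    #   NEC(K) = entropy of the posteriors / (loglik_K - loglik_1).
    # gamma is the n x K matrix of posterior class probabilities.
    g = np.clip(gamma, 1e-12, 1.0)
    entropy = -np.sum(g * np.log(g))
    return entropy / (loglik_K - loglik_1)

# Model selection sketch: minimize NEC over K >= 2, but fall back to the
# log-likelihood "elbow" when the NEC minimum merges relevant sub-clusters.
\end{verbatim}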
\section{Simulations}
\label{sec:simulations}
This section shows how the methods perform on simulated data. In particular, I consider $K = 5$ sub-systems consisting of both stationary and non-stationary series: one white noise sequence (flat spectrum), two $AR(1)$ processes with $\phi = 0.5$ and $0.75$ respectively, and two sine waves with frequencies $\omega = 0.1$ and $0.2$ (on the $[0, 0.5]$ scale) corrupted by additive Gaussian noise. For each model I generate $n = 100$ series with $T = 50$ observations each. All series have been scaled to zero mean and unit variance.
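The simulated data can be generated along the following lines (a Python sketch; the noise level added to the sine waves is an assumption, as it is not specified in the text).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n_per_group, T = 100, 50

def standardize(x):
    return (x - x.mean()) / x.std()

def ar1(phi, T, rng):
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

def noisy_sine(freq, T, rng, noise_sd=1.0):
    # freq in cycles per observation, i.e. 0.1 and 0.2 on the [0, 0.5] scale
    t = np.arange(T)
    return np.sin(2 * np.pi * freq * t) + noise_sd * rng.standard_normal(T)

generators = [lambda: rng.standard_normal(T),      # white noise
              lambda: ar1(0.5, T, rng),            # AR(1), phi = 0.5
              lambda: ar1(0.75, T, rng),           # AR(1), phi = 0.75
              lambda: noisy_sine(0.1, T, rng),     # sine, omega = 0.1
              lambda: noisy_sine(0.2, T, rng)]     # sine, omega = 0.2

series = np.array([standardize(g()) for g in generators
                   for _ in range(n_per_group)])   # shape (500, 50)
\end{verbatim}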
\begin{figure}[!t]
\centering
\subfloat[ (left) sample time series from each of the $5$ groups: (1) white noise, (2) $AR(1)$ with $\phi = 0.5$, (3) $AR(1)$ with $\phi = 0.75$, (4) $\sin(1/10 2 \pi t / T)$, (5) $\sin(1/20 2 \pi t / T)$; (right) corresponding spectra.]{\includegraphics[width=.45\textwidth]{images/sample_series5}\label{fig:sample_series5} }
\hspace{1cm}
\subfloat[(top) log-likelihood and NEC as a function of $K$; (bottom-left) conditional class probabilities $\gamma_{nk}$; (bottom-right) logarithm of estimated cluster centers equaling optimal estimates $\widehat{f}_{\mathcal{S}_k}(\lambda)$, $k = 1, \ldots, 5$. ]{\includegraphics[width=.45\textwidth]{images/simulation_5series_results_loglik}\label{fig:simulation_5series_results} }
\caption{Simulation results for the frequency domain EM algorithm.}
\label{fig:simulation_study}
\end{figure}
Figure \ref{fig:sample_series5} shows a representative series of each sub-system. All corresponding non-smoothed periodograms have high variance (the raw periodogram is not a consistent estimator). The nonparametric EM can be directly applied to these raw periodograms to cluster the $500$ time series.
The ``elbow'' rule for the log-likelihood favors a $K = 3$ solution, because separating the signals into the two non-stationary signals plus all stationary signals in the third class provides the largest gain in likelihood. The additional likelihood gain from separating the stationary signals into their sub-systems is negligible and thus is not evident in a plot of the log-likelihood as a function of $K$. The NEC has a global minimum at $K = 2$ (one sine wave plus the rest) and a local minimum at $K = 5$. The log-likelihood clearly shows that $K = 2$ cannot be an optimal separation, thus we take the $K = 5$ local minimum. Figure \ref{fig:simulation_5series_results} shows a very good separation between all signals, except for some cross-matches between the white noise sequence and the two $AR(1)$ processes. However, as their parameters are close to each other and due to the small sample size ($T = 50$), some overlap between them cannot be avoided - even using the true model and an MLE $\widehat{\phi}_{MLE}$ to cluster them (see below).
\subsection{Comparison to model based clustering}
For comparison I also fit $AR(1)$ and $ARMA(1,1)$ models\footnote{The series $x_t$ is an auto-regressive moving average process of order $(1,1)$ if it satisfies $x_t - \phi x_{t-1} = \varepsilon_t - \theta \varepsilon_{t-1}$, where both parameters $\phi$ and $\theta$ must lie in $(-1,1)$ to guarantee stationarity and invertibility.} to each series. Figure \ref{fig:simulation_series5_model_clustering} shows the separation of the series in the parameter space $\phi \in (-1,1)$ and $(\phi, \theta) \in (-1,1) \times (-1,1)$. Using the $AR(1)$ model not only gives a large overlap between the non-stationary signals and the stationary $AR(1)$ with $\phi = 0.5$, but also completely fails to distinguish between the two harmonic signals. Even if the true signal is an $AR(1)$, model based clustering still yields many falsely classified signals. The overlap in the fitted parameters $\widehat{\phi}_{MLE}$ shows that the bad performance of the frequency domain EM for the $AR(1)$ series is not due to the algorithm, but results from the true parameters of the two distinct $AR(1)$ processes being very close ($0.5$ and $0.75$). In this case even the maximum likelihood estimator (MLE) leads to wrong conclusions.
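The model-based comparison can be reproduced roughly as follows, here with \texttt{statsmodels} in Python rather than whatever estimation routine the paper used; the plotting of the $(\widehat{\phi}, \widehat{\theta})$ pairs is omitted.
\begin{verbatim}
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def fit_parameters(series):
    # Fit an AR(1) and an ARMA(1,1) to every series and collect the MLEs,
    # mirroring the scatter plots in the model-based clustering comparison.
    phi_ar1, phi_theta = [], []
    for x in series:
        ar1_fit = ARIMA(x, order=(1, 0, 0)).fit()
        phi_ar1.append(ar1_fit.arparams[0])
        arma_fit = ARIMA(x, order=(1, 0, 1)).fit()
        phi_theta.append((arma_fit.arparams[0], arma_fit.maparams[0]))
    return np.array(phi_ar1), np.array(phi_theta)
\end{verbatim}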
Extending the $AR(1)$ to $ARMA(1,1)$ models improves the separability between the two harmonic series, but also leads to additional variation in other regions of the parameter space. In particular, the black dots around the 45-degree line show that avoiding the model bias by simply using a larger model class introduces another problem of model based clustering. Here the model class is an $ARMA(1,1)$, but the true process is white noise, which is a special case of an $ARMA(1,1)$ with $\phi = \theta = 0$. However, every $ARMA(1,1)$ with $\phi = \theta$ also describes a white noise process, so the MLE finds optimal solutions along the $\phi = \theta$ line, which adds artificial variance - and hence a performance loss for the clustering.\\
The exploratory analysis of the AR and ARMA models is an example of how model selection bias can undermine clustering algorithms. For a good classification we would need to identify the correct model for each series first, and then estimate the parameters of each tuned model. However, even if we had the time and resources to do a model check for all $N$ time series, the $AR(1)$ example shows that even if we found the true model for each series, a large overlap in $\widehat{\phi}_{MLE}$ remains (red triangles and green diamonds in the left panel of Fig. \ref{fig:simulation_series5_model_clustering}).
The non-parametric EM approach, on the other hand, does not require any modeling and subsequent checks; it has performance comparable to model based clustering when the true model is known (white noise and $AR(1)$) and performs much better when the models are wrong (sine waves versus the rest).
\begin{figure}[!t]
\centering
\makebox{\includegraphics[width=0.7\textwidth]{images/simulation_series5_model_clustering}}
\caption{\label{fig:simulation_series5_model_clustering} Model based classification: (left) $\widehat{\phi}$ (y-axis) from fitting an $AR(1)$ to all series; (right) estimate pair $(\widehat{\phi}, \widehat{\theta})$ from fitting an $ARMA(1,1)$ model to each series. Colors and shapes represent the true classes, not estimated clusters from the data.}
\end{figure}
\section{Data}
\label{sec:Data}
\paragraph{Spikes:} The \texttt{PKdata} data set can be obtained from \url{www.biomedicale.univ-paris5.fr/physcerv/C_Pouzat/Data.html}. It contains recordings of the electro-chemical signal in a cerebral slice of a rat. A band pass filter for frequencies between $300$ Hz and $5$ kHz has been applied to the signal $y_t$, which was sampled at a rate of $15$ kHz for $1$ minute.
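For completeness, a band-pass filter of this kind can be applied as follows (a Python/SciPy sketch; the filter order is an assumption, as the paper only states the pass band and the sampling rate).
\begin{verbatim}
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(y, fs=15_000, low=300.0, high=5_000.0, order=4):
    # Band-pass between 300 Hz and 5 kHz at a 15 kHz sampling rate.
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, y)
\end{verbatim}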
\paragraph{Income:} The dataset can be obtained from \url{www.bea.gov/regional/spi}. It contains yearly ($1958 - 2008$) average per-capita income of the ``lower $48$'' states and the entire US: $I_{j,t}$, $j = 1, \ldots, 49$. As $I_{j,t}$ grew exponentially over time, one typically considers income growth rates $r_{j,t} = \log I_{j,t} - \log I_{j,t-1}$ - also known as log-returns - which are (more or less) stationary. Since we are interested in the individual dynamics of a state compared to the US, I analyze the difference between each state's growth rate and that of the US, as this is a more refined indicator of the state's dynamics (it removes the overall dynamics of the US baseline).
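The preprocessing of the income data can be summarized by the following sketch (Python/pandas; the column layout, in particular a column named \texttt{US}, is an assumption about how the downloaded table is arranged).
\begin{verbatim}
import numpy as np
import pandas as pd

def state_growth_differences(income: pd.DataFrame) -> pd.DataFrame:
    # income: year x region table of per-capita income (48 states plus 'US').
    growth = np.log(income).diff().dropna()   # r_{j,t} = log I_{j,t} - log I_{j,t-1}
    # subtract the US growth rate from every state's growth rate
    return growth.drop(columns="US").sub(growth["US"], axis=0)
\end{verbatim}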
| {
"timestamp": "2011-10-04T02:02:36",
"yymm": "1103",
"arxiv_id": "1103.3300",
"language": "en",
"url": "https://arxiv.org/abs/1103.3300",
"abstract": "I propose a frequency domain adaptation of the Expectation Maximization (EM) algorithm to group a family of time series in classes of similar dynamic structure. It does this by viewing the magnitude of the discrete Fourier transform (DFT) of each signal (or power spectrum) as a probability density/mass function (pdf/pmf) on the unit circle: signals with similar dynamics have similar pdfs; distinct patterns have distinct pdfs. An advantage of this approach is that it does not rely on any parametric form of the dynamic structure, but can be used for non-parametric, robust and model-free classification. This new method works for non-stationary signals of similar shape as well as stationary signals with similar auto-correlation structure. Applications to neural spike sorting (non-stationary) and pattern-recognition in socio-economic time series (stationary) demonstrate the usefulness and wide applicability of the proposed method.",
"subjects": "Machine Learning (stat.ML); Data Analysis, Statistics and Probability (physics.data-an); Applications (stat.AP); Methodology (stat.ME)",
"title": "A Nonparametric Frequency Domain EM Algorithm for Time Series Classification with Applications to Spike Sorting and Macro-Economics",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.971992481016635,
"lm_q2_score": 0.7279754489059774,
"lm_q1q2_score": 0.7075866627013196
} |
https://arxiv.org/abs/2001.03439 | Remarks on the notion of homo-derivations | The purpose of this paper is to study the (different) notions of homo-derivations. These are additive mappings $f$ of a ring $R$ that also fulfill the identity \[ f(xy)=f(x)y+xf(y)+f(x)f(y) \qquad \left(x, y\in R\right),\] or (in case of the other notion) the system of equations \[ f(xy)=f(x)f(y)\] \[f(xy)=f(x)y+xf(y) \qquad \left(x, y\in R\right).\] Our primary aim is to investigate the above equations without additivity as well as the following Pexiderized equation \[ f(xy)=h(x)h(y)+xk(y)+k(x)y. \] The obtained results show that under rather mild assumptions homo-derivations can be fully characterized, even without the additivity assumption. | \section{Introduction and preliminaries}
The main aim of this paper is to present some characterization theorems concerning homomorphisms, derivations and also homo-derivations.
Thus, at first, we list some notions and preliminary results that will be used in the sequel.
All of these statements and definitions can be found in Kuczma \cite{Kuc09}\index{Kuczma, M.} and
in Zariski--Samuel\index{Zariski, O.}\index{Samuel, P.} \cite{ZarSam75} and also in Kharchenko\index{Kharchenko, V. L.} \cite{Kha91}.
\subsection*{Homomorphisms and derivations}
\begin{dfn}
Let $P$ and $Q$ be (not necessarily unital) rings. A function $f\colon P\to Q$ is called a
\emph{homomorphism} (of $P$ into $Q$) if it is additive, i.e.
\[
f(x+y)=f(x)+f(y)
\quad
\left(x, y\in P\right)
\]
and also
\[
f(xy)=f(x)f(y)
\qquad
\left(x, y\in P\right)
\]
holds.
If moreover, $f$ is one-to-one, then $f$ is called a \emph{monomorphism}.
If $f$ is onto, then $f$ is called an \emph{epimorphism}.
A homomorphism which is a monomorphism and an epimorphism is called an \emph{isomorphism}.
In case $P=Q$, the function $f$ is termed to be an \emph{endomorphism}.
\end{dfn}
\begin{dfn}
Let $Q$ be a (not necessarily unital) ring and let $P$ be a subring of $Q$.
A function $f\colon P\rightarrow Q$ is called a \emph{derivation} if it is additive
and also satisfies the so-called \emph{Leibniz rule}, i.e. equation
\[
f(xy)=f(x)y+xf(y)
\quad
\left(x, y\in P\right).
\]
\end{dfn}
\begin{rem}
Let $Q$ be a ring and let $P$ be a subring of $Q$.
Functions $f\colon P\rightarrow Q$ fulfilling the Leibniz rule only, will be termed \emph{Leibniz functions}.
\end{rem}
Among derivations one can single out so-called inner derivations, similarly as in the case of automorphisms.
\begin{dfn}
Let $R$ be a ring and $b\in R$, then the mapping $\mathrm{ad}_{b}\colon R\to R$ defined by
\[
\mathrm{ad}_{b}(x)=\left[x, b\right]=xb-bx
\qquad
\left(x\in R\right)
\]
is a derivation. A derivation $f\colon R\to R$ is termed to be an \emph{inner derivation} if there is a
$b\in R$ so that $f=\mathrm{ad}_{b}$. We say that a derivation is an \emph{outer derivation} if it is not inner.
\end{dfn}
Another fundamental example of a derivation is the following.
\begin{rem}
Let $\mathbb{F}$ be a field, and let in the above definition $P=Q=\mathbb{F}[x]$
be the ring of polynomials with coefficients from $\mathbb{F}$. For a polynomial
$p\in\mathbb{F}[x]$, $p(x)=\sum_{k=0}^{n}a_{k}x^{k}$, define the function
$f\colon \mathbb{F}[x]\rightarrow\mathbb{F}[x]$ as
\[
f(p)=p',
\]
where $p'(x)=\sum_{k=1}^{n}ka_{k}x^{k-1}$ is the derivative of the polynomial $p$.
Then the function $f$ clearly fulfills
\[
\begin{array}{rcl}
f(p+q)&=&f(p)+f(q)\\
f(pq)&=&pf(q)+qf(p)
\end{array}
\]
for all $p, q\in\mathbb{F}[x]$. Hence $f$ is a derivation.
\end{rem}
Clearly, commutative rings admit only \emph{trivial} inner derivations. At the same time, it is
not so evident whether commutative rings (or fields) \emph{do} or \emph{do not} admit nontrivial
outer derivations. To answer this problem partially, here we recall Theorem 14.2.1 from Kuczma \cite{Kuc09}.
\begin{thm}\label{T14.2.1}
Let $\mathbb{K}$ be a field of characteristic zero, let $\mathbb{F}$ be a subfield of $\mathbb{K}$,
let $S$ be an algebraic base of $\mathbb{K}$ over $\mathbb{F}$,
if it exists, and let $S=\emptyset$ otherwise.
Let $f\colon \mathbb{F}\to \mathbb{K}$ be a derivation.
Then, for every function $u\colon S\to \mathbb{K}$,
there exists a unique derivation $g\colon \mathbb{K}\to \mathbb{K}$
such that $g \vert_{\mathbb{F}}=f$ and $g \vert_{S}=u$.
\end{thm}
In \cite{Sof00}, El Sofy introduced the notion of homo-derivations.
After that several results appeared where the authors proved commutativity results
for the domain of these mappings, see e.~g. \cite{AlhMut19a, AlhMut19, MahAlaNaj19, RehAlRadMut19}.
It is unfortunate, however, that no attempt has been made to characterize or to compare these notions.
One of the main purposes of this work is to clarify these problems.
\begin{dfn}[El Sofy \cite{Sof00}]
Let $Q$ be a ring and let $P$ be a subring of $Q$.
A function $f\colon P\rightarrow Q$ is called a \emph{homo-derivation} if it is additive
and also satisfies the equation
\[
f(xy)=f(x)y+xf(y)+f(x)f(y)
\quad
\left(x, y\in P\right).
\]
\end{dfn}
We remark that there are other ways of introducing homo-derivations.
Here we present a further definition as it appears in \cite{MehPaj03}.
\begin{dfn}[Mehdi Ebrahimi--Pajoohesh \cite{MehPaj03}]
Let $Q$ be a ring and let $P$ be a subring of $Q$.
A function $f\colon P\rightarrow Q$ is called a \emph{homo-derivation} if it is a homomorphism
and also satisfies the Leibniz rule.
\end{dfn}
\subsection*{Polynomials and the Levi-Civit\`{a} equation}
As we will see in the second section, the notion of exponential polynomials and the
so-called Levi-Civit\`{a} functional equation will play a distinguished role while
proving our results.
In view of Theorem 10.1 of \cite{Sze91}, if $(G, \cdot)$ is an Abelian group, then
any function $f\colon G\to \mathbb{C}$ satisfying the so-called
\emph{Levi-Civit\`{a} functional equation}, that is,
\begin{equation}\label{Eq_levi}
f(x\cdot y)= \sum_{i=1}^{n}g_{i}(x)h_{i}(y)
\qquad
\left(x, y\in G\right)
\end{equation}
for some positive integer $n$ and functions
$g_{i}, h_{i}\colon G\to\mathbb{C}\; (i=1, \ldots, n)$, is an exponential polynomial of
order at most $n$.
At the same time, in equation \eqref{Eq_levi} not only the function $f$, but also the
mappings $g_{i}, h_{i}\colon G\to\mathbb{C}\; (i=1, \ldots, n)$ are assumed to be unknown.
If either the functions $g_{1}, \ldots, g_{n}$ or
the functions $h_{1}, \ldots, h_{n}$ are \emph{linearly dependent},
then the number $n$ appearing in equation \eqref{Eq_levi} can be reduced
and in this case the general solution of the equation can contain arbitrary functions,
we shall call this case \emph{degenerate}.
Alternatively, if the functions $h_{1}, \ldots, h_{n}$ are \emph{linearly independent},
then $g_{1}, \ldots, g_{n}$ are linear combinations of
the translates of $f$, hence they are exponential polynomials of
order at most $n$, too. Moreover, they are built up from the same
additive and exponential functions as the function $f$. Roughly speaking, this is
Theorem 10.4 of \cite{Sze91}, which reads as follows.
\begin{thm}\label{Szekely}
Let $G$ be an Abelian group, $n$ be a positive integer and $f, g_{i}, h_{i}\colon G\to \mathbb{C}\; (i=1, \ldots, n)$ be functions so that both the sets
$\left\{g_{1}, \ldots, g_{n}\right\}$ and $\left\{h_{1}, \ldots, h_{n}\right\}$ are linearly independent.
The functions $f, g_{i}, h_{i}\colon G\to \mathbb{C}\; (i=1, \ldots, n)$ form a \emph{non-degenerate} solution of equation \eqref{Eq_levi}
if and only if
\begin{enumerate}[(a)]
\item there exist positive integers $k, n_{1}, \ldots, n_{k}$ with $n_{1}+\cdots+n_{k}=n$;
\item there exist different nonzero complex exponentials $m_{1}, \ldots, m_{k}$;
\item for all $j=1, \ldots, k$ there exists a linearly independent set of complex additive functions \[\left\{a_{j, 1}, \ldots, a_{j, n_{j}-1}\right\};\]
\item there exist polynomials $P_{j}, Q_{i, j}, R_{i, j}\colon \mathbb{C}^{n_{j}-1}\to \mathbb{C}$ for all $i=1, \ldots, n; j=1, \ldots, k$ in
$n_{j}-1$ complex variables and of degree at most $n_{j}-1$;
\end{enumerate}
so that we have
\[
f(x)= \sum_{j=1}^{k}P_{j}\left(a_{j, 1}(x), \ldots, a_{j, n_{j}-1}(x)\right)m_{j}(x)
\]
\[
g_{i}(x)= \sum_{j=1}^{k}Q_{i, j}\left(a_{j, 1}(x), \ldots, a_{j, n_{j}-1}(x)\right)m_{j}(x)
\]
and
\[
h_{i}(x)= \sum_{j=1}^{k}R_{i, j}\left(a_{j, 1}(x), \ldots, a_{j, n_{j}-1}(x)\right)m_{j}(x)
\]
for all $x\in G$ and $i=1, \ldots, n$.
\end{thm}
Let $G$ be a groupoid and $\mathbb{F}$ be a field.
Given $(A_{i, j})_{i, j}\in \mathscr{M}_{n\times n}(\mathbb{F})$ and $(\Gamma^{(k)}_{i, j})_{i, j} \in \mathscr{M}_{n\times n}(\mathbb{F})$,
in \cite{McK77} McKiernan studied the following problems.
\begin{enumerate}[(1)]
\item Find all functions $h,f_{i}, g_{i}\colon G\to \mathbb{F}$ ($i= 1, \ldots, n$) satisfying the equation
\begin{equation}\label{Eq_McKiernan1}
h(xy)=\sum_{i, j=1}^{n} A_{i, j} f_{i}(x)g_{j}(y)
\qquad
\left(x, y\in G\right).
\end{equation}
\item Find all functions $g_{i}\colon G\to \mathbb{F}$ ($i=1, \ldots, n$)
satisfying the system of equations
\begin{equation}\label{Eq_McKiernan2}
g_{k}(xy)=\sum_{i, j=1}^{n}\Gamma_{i, j}^{(k)}g_{i}(x)g_{j}(y)
\qquad
\left(
x, y\in G, k=1, \ldots, n
\right)
\end{equation}
\end{enumerate}
The solutions are obtained by showing that the two problems are essentially equivalent,
then transforming to a matrix problem and applying one of his earlier results concerning a multiplicative matrix equation.
All in all, the main result of \cite{McK77} is that if
\begin{enumerate}[(a)]
\item $G$ is a (not necessarily commutative) semigroup,
\item $\mathbb{F}$ is an algebraically closed field with $\mathrm{char}(\mathbb{F})\geq (n-1)!$,
\item $\det \left(A_{i, j}\right)\neq 0$,
\item $f_{1}, \ldots, f_{n}$ are linearly independent,
\item $g_{1}, \ldots, g_{n}$ are linearly independent,
\end{enumerate}
then the solutions of equation \eqref{Eq_McKiernan1} as well as \eqref{Eq_McKiernan2} are exponential polynomials.
In what follows \cite[Lemma 4.2]{Sze91}, that is, the statement below will be utilized several times.
\begin{thm}
Let $G$ be an Abelian group, $\mathbb{K}$ a field,
$X$ a $\mathbb{K}$-linear space and $\mathscr{V}$ a translation invariant linear space
of $X$-valued functions on $G$.
Let $k_{i}$ be nonnegative integers, $n\geq 1$,
$m_{i}\colon G\to \mathbb{K}$ different nonzero exponentials,
$A_{i}\colon G^{n_{i}}\to X$ symmetric, $k_{i}$-additive functions and
$q_{i}\colon G\to X$ polynomials of degree at most $k_{i}-1$ ($i=1, \ldots, n$).
If the function
\[
\sum_{i=1}^{n}\left(\mathrm{diag}(A_{i})+q_{i}\right)m_{i}
\]
belongs to
$\mathscr{V}$, then there exist polynomials $r_{i}\colon G\to X$ of degree at most
$k_{i}-1$ such that $\left(\mathrm{diag}(A_{i})+r_{i}\right)m_{i}$ belongs to $\mathscr{V}$ for
$i=1, \ldots, n$.
\end{thm}
From this, with the choice $\mathscr{V}= \left\{0\right\}$ and $X=\mathbb{K}$ we get the following.
\begin{prop}
Let $G$ be an Abelian group, $\mathbb{K}$ be a field and $n$ be a positive integer.
Suppose that for each $x\in G$
\[
p_{1}(x)m_{1}(x)+\cdots+p_{n}(x)m_{n}(x)=0
\]
holds, where $m_{1}, \ldots, m_{n}\colon G\to \mathbb{K}$ are different exponentials and
$p_{1}, \ldots, p_{n}\colon G\to \mathbb{K}$ are (generalized) polynomials.
Then for all $i=1, \ldots, n$ the polynomial $p_{i}$ is identically zero.
\end{prop}
\section{Results}
\subsection*{The functional equation of homo-derivations}
As the theorem below shows, the notion of homo-derivations (in the sense of El Sofy \cite{Sof00}) can
be characterized even \emph{without} additivity supposition.
\begin{thm}\label{Thm4}
Let $P$ and $Q$ be rings such that $P$ is a subring of $Q$ and assume that
$\varepsilon$ is an arbitrary nonzero element of the center of $Q$.
Function $h\colon P\to Q$ fulfills the functional equation
\begin{equation}\label{Eq_homo-deriv}
h(xy)= h(x)y+xh(y)+\varepsilon h(x)h(y)
\end{equation}
for all $x, y\in P$ if and only if there exists a multiplicative function $m\colon P\to Q$ such that
\[
\varepsilon h(x)= m(x)-x
\qquad
\left(x\in P\right).
\]
\end{thm}
\begin{proof}
Multiplying equation \eqref{Eq_homo-deriv} with $\varepsilon$ leads to
\[
\varepsilon h(xy)= \varepsilon h(x)y+x \varepsilon h(y)+\varepsilon h(x)\varepsilon h(y)
\qquad
\left(x, y\in P\right).
\]
If we add $xy$ to both sides of this equation, then
\[
\varepsilon h(xy)+xy= \varepsilon h(x)y+x \varepsilon h(y)+\varepsilon h(x)\varepsilon h(y) +xy
\qquad
\left(x, y\in P\right),
\]
follows. Observe however that
\[
\varepsilon h(x)y+x \varepsilon h(y)+\varepsilon h(x)\varepsilon h(y) +xy=
(\varepsilon h(x)+x)\cdot \left(\varepsilon h(y)+y\right)
\qquad
\left(x, y\in P\right),
\]
yielding exactly that the mapping $m\colon P\to Q$ defined through
\[
m(x)= \varepsilon h(x)+x
\qquad
\left(x\in P\right)
\]
is multiplicative.
\end{proof}
\begin{rem}
In the case the ring $Q$ is unital, $\varepsilon=1$ is a nonzero central element of the ring $Q$. Hence
any solution of the above equation can be represented as
\[
h(x)= m(x)-x
\qquad
\left(x\in P\right),
\]
with an appropriate multiplicative function $m$.
\end{rem}
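For a concrete illustration of the theorem above (an example added here for the reader's convenience, not taken from \cite{Sof00}), let $R$ be any commutative unital ring, take $\varepsilon=1$ and $m(x)=x^{2}$, which is multiplicative by commutativity. Then $h(x)=x^{2}-x$ satisfies equation \eqref{Eq_homo-deriv}, since
\[
h(x)y+xh(y)+h(x)h(y)=(x^{2}-x)y+x(y^{2}-y)+(x^{2}-x)(y^{2}-y)=x^{2}y^{2}-xy=h(xy)
\qquad
\left(x, y\in R\right),
\]
while $h$ is in general not additive (over $\mathbb{Z}$, for instance, $h(x+y)-h(x)-h(y)=2xy$). Thus the theorem indeed describes solutions of \eqref{Eq_homo-deriv} beyond the additive ones.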
As an immediate corollary we get the following.
\begin{cor}
Let $P$ be a subring of the ring $Q$ and assume that $\varepsilon$ is an arbitrary nonzero element of the center of $Q$.
The additive mapping $a\colon P\to Q$ fulfills functional equation
\[
a(xy)= a(x)y+xa(y)+\varepsilon a(x)a(y)
\]
for all $x, y\in P$ if and only if there exists a homomorphism $\varphi\colon P\to Q$ such that
\[
\varepsilon a(x)=\varphi(x)-x
\qquad
\left(x\in P\right).
\]
\end{cor}
In view of the above result, homo-derivations in the sense of El Sofy \cite{Sof00} on commutative rings can be characterized.
\begin{cor}
Let $P$ be a subring of the commutative ring $Q$. A mapping $a\colon P\to Q$ is a homo-derivation in the sense of El Sofy \cite{Sof00}
if and only if
there exists a homomorphism $\varphi\colon P\to Q$ such that
\[
a(x)=\varphi(x)-x
\qquad
\left(x\in P\right).
\]
\end{cor}
The proposition below considers the other notion of homo-derivations.
\begin{prop}\label{Prop1}
Let $P$ be a subring of the ring $Q$. The
function $f\colon P\to Q$ fulfills the system of equations
\[
\begin{array}{rcl}
f(xy)&=&f(x)f(y)\\
f(xy)&=&f(x)y+xf(y)
\end{array}
\qquad
\left(x, y\in P\right)
\]
if and only if there exists a non-zero constant $\alpha\in Q$ such that $\alpha \cdot f$ and
$f\cdot \alpha$ are identically zero.
\end{prop}
\begin{proof}
Assume that the function $f\colon P\to Q$ satisfies the above system.
Then clearly, functional equation
\[
f(x)f(y)=f(x)y+xf(y)
\qquad
\left(x, y\in P\right)
\]
also holds.
After some rearrangement we arrive at
\[
f(x)\left[f(y)-y\right]= xf(y)
\qquad
\left(x, y\in P\right).
\]
Since $f$ must be a Leibniz function,
\[
f(y)-y=0
\]
cannot hold for all $y\in P$. Thus there exists $y^{*}\in P$ such that
$ f(y^{*})-y^{*}\neq 0$, from which
\[
f(x)\left[f(y^{*})-y^{*}\right]= xf(y^{*})
\qquad
\left(x\in P\right).
\]
In other words, there are constants $\alpha, \beta \in Q$ with $\alpha\neq 0$ such that
\[
f(x)\alpha = x \beta
\qquad
\left(x\in P\right).
\]
Writing this back into the Leibniz equation, $\beta =0$ follows, yielding that $f\cdot \alpha$ is identically zero.
Changing the role of $x$ and $y$ in the above argument, $\alpha \cdot f \equiv 0$ also follows.
\end{proof}
\begin{cor}
Let $P$ be a subring of the ring $Q$. A
function $a\colon P\to Q$ is a homo-derivation in the sense of
Mehdi Ebrahimi--Pajoohesh \cite{MehPaj03} if and only if there exists a nonzero constant $\alpha\in Q$ such that
$\alpha \cdot a\equiv a\cdot \alpha\equiv 0$. Especially, if $Q$ has no zero-divisors, then $a$ has to be identically zero.
\end{cor}
\subsection*{On the functional equation $f(xy)=h(x)h(y)+xk(y)+k(x)y$}
In this subsection we determine the solutions $f, h, k\colon \mathbb{F}\to \mathbb{K}$ of the functional equation
\begin{equation}\label{Eq_gen}
f(xy)=h(x)h(y)+xk(y)+k(x)y
\qquad
\left(x, y\in \mathbb{F}\right).
\end{equation}
From this, the additive solutions of the same equation will follow immediately.
Here we suppose that
\emph{$\mathbb{K}$ is an algebraically closed field with $\mathrm{char}(\mathbb{K})\neq 2$ and
$\mathbb{F}$ is a subfield of $\mathbb{K}$}.
Observe that equation \eqref{Eq_gen} is a special Levi-Civit\`{a} equation.
Therefore according to the value $\dim \mathrm{lin}\left(\mathrm{id}, h, k\right)$, we have to distinguish several cases.
Clearly, \[\dim \mathrm{lin}\left(\mathrm{id}, h, k\right)=3\] means that the mappings involved in the right hand side of
\eqref{Eq_gen} are linearly independent. Thus in the degenerate cases we have $\dim \mathrm{lin}\left(\mathrm{id}, h, k\right)<3$.
\subsubsection*{Degenerate cases}
Firstly, let us assume that $\dim \mathrm{lin}\left(\mathrm{id}, h, k\right)=1$. In this situation there exist
constants $\lambda_{1}, \lambda_{2}\in \mathbb{K}$ such that
\[
h(x)=\lambda_{1}x
\quad
\text{and}
\quad
k(x)=\lambda_{2}x
\qquad
\left(x\in \mathbb{F}\right).
\]
\begin{prop}
Let $\lambda_{1}, \lambda_{2}\in \mathbb{K}$ be arbitrarily fixed.
Function $f\colon \mathbb{F}\to \mathbb{K}$ fulfills functional equation \eqref{Eq_gen}
if and only if
\[
f(x)= \left(\lambda_{1}^{2}+2\lambda_{2}\right)x
\qquad
\left(x\in \mathbb{F}\right).
\]
\end{prop}
\begin{proof}
In case
\[
h(x)=\lambda_{1}x
\quad
\text{and}
\quad
k(x)=\lambda_{2}x
\qquad
\left(x\in \mathbb{F}\right),
\]
our equation reduces to
\[
f(xy)= \left(\lambda_{1}^{2}+2\lambda_{2}\right)xy
\qquad
\left(x, y\in \mathbb{F}\right),
\]
from which the result follows immediately.
\end{proof}
Secondly, assume that $\dim \mathrm{lin}\left(\mathrm{id}, h, k\right)=2$, which can happen in different ways.
If $\left\{ \mathrm{id}, h\right\}$ are linearly dependent, that is
\[
h(x)= \lambda x
\qquad
\left(x\in \mathbb{F}\right)
\]
holds with a certain $\lambda\in \mathbb{K}$,
then we have the following.
\begin{prop}\label{Prop4}
Let $\lambda\in \mathbb{K}$ be an arbitrary constant.
Functions $f, k\colon \mathbb{F}\to \mathbb{K}$ fulfill the functional equation
\[
f(xy)=\lambda^{2}xy+xk(y)+k(x)y
\qquad
\left(x, y\in \mathbb{F}\right)
\]
if and only if there exists a Leibniz function
$\delta \colon \mathbb{F}\to \mathbb{K}$ such that
\[
\begin{array}{rcl}
k(x)&=&k(1)x+\delta(x)\\
f(x)&=&\left(\lambda^{2}+ 2k(1)\right)x+\delta(x)
\end{array}
\qquad
\left(x\in \mathbb{F}\right).
\]
\end{prop}
\begin{proof}
Under the above assumptions, equation \eqref{Eq_gen} with $y=1$ yields that
\[
f(x)= (\lambda^{2}+k(1))x+k(x)
\qquad
\left(x\in \mathbb{F}\right).
\]
Writing this back into our equation, we get that
\[
k(xy)+(\lambda^{2}+k(1))xy= xk(y)+k(x)y +\lambda^{2}xy
\qquad
\left(x, y\in \mathbb{F}\right).
\]
This means that the function $k\colon \mathbb{F}\to \mathbb{K}$ fulfills
\[
k(xy)+k(1)xy= xk(y)+k(x)y
\qquad
\left(x, y\in \mathbb{F}\right).
\]
Thus, there exists a Leibniz function $\delta\colon \mathbb{F}\to \mathbb{K}$ such that
\[
\begin{array}{rcl}
k(x)&=&k(1)x+\delta(x)\\
f(x)&=&\left(\lambda^{2}+2k(1)\right)x+\delta(x)
\end{array}
\qquad
\left(x\in \mathbb{F}\right).
\]
\end{proof}
\begin{cor}
Let $\lambda\in \mathbb{K}$ be an arbitrary constant.
The additive functions $f, k\colon \mathbb{F}\to \mathbb{K}$ fulfill the functional equation
\[
f(xy)=\lambda^{2}xy+xk(y)+k(x)y
\qquad
\left(x, y\in \mathbb{F}\right)
\]
if and only if there exists a derivation
$d \colon \mathbb{F}\to \mathbb{K}$ such that
\[
\begin{array}{rcl}
k(x)&=&k(1)x+d(x)\\
f(x)&=&\left(\lambda^{2}+ 2k(1)\right)x+d(x)
\end{array}
\qquad
\left(x\in \mathbb{F}\right).
\]
\end{cor}
Our second case is when $\left\{ \mathrm{id}, k\right\}$ are linearly dependent, that is if
\[
k(x)= \lambda x
\qquad
\left(x\in \mathbb{F}\right)
\]
with a certain $\lambda \in \mathbb{K}$.
\begin{prop}\label{Prop3}
Let $\lambda\in \mathbb{K}$ be arbitrarily fixed.
Functions $f, h\colon \mathbb{F}\to \mathbb{K}$ fulfill the functional equation
\[
f(xy)=h(x)h(y)+2\lambda xy
\qquad
\left(x, y\in \mathbb{F}\right)
\]
if and only if there exists a multiplicative function $m\colon \mathbb{F}\to \mathbb{K}$ such that
\[
\begin{array}{rcl}
h(x)&=&h(1)m(x)\\
f(x)&=&h(1)^{2}m(x)+2\lambda x
\end{array}
\qquad
\left(x\in \mathbb{F}\right).
\]
\end{prop}
\begin{proof}
Define the function $\widetilde{f}\colon \mathbb{F}\to \mathbb{K}$ through
\[
\widetilde{f}(x)=f(x)-2\lambda x
\qquad
\left(x\in \mathbb{F}\right)
\]
to deduce that
\[
\widetilde{f}(xy)= h(x)h(y)
\qquad
\left(x, y\in \mathbb{F}\right).
\]
This identity with $y=1$ implies that
\[
\widetilde{f}(x)=h(1)h(x)
\qquad
\left(x\in \mathbb{F}\right).
\]
Therefore,
\begin{enumerate}[A)]
\item either $h(1)=0$ from which
\[
f(x)=2\lambda x
\qquad
\left(x\in \mathbb{F}\right)
\]
and $h\equiv 0$ follows;
\item or $h(1)\neq 0$ from which we get that
\[
\dfrac{h(xy)}{h(1)}= \dfrac{h(x)}{h(1)}\cdot \dfrac{h(y)}{h(1)}
\qquad
\left(x, y\in \mathbb{F}\right).
\]
\end{enumerate}
All in all, there exists a multiplicative function $m\colon \mathbb{F}\to \mathbb{K}$ such that
\[
h(x)=h(1)m(x)
\qquad
\left(x\in \mathbb{F}\right).
\]
From this we also obtain that
\[
f(x)=h(1)^{2}m(x)+2\lambda x
\qquad
\left(x\in \mathbb{F}\right).
\]
\end{proof}
\begin{rem}
Similarly, as previously, the additive solutions of the above equation are of the form
\[
\begin{array}{rcl}
h(x)&=&h(1)\varphi(x)\\
f(x)&=&h(1)^{2}\varphi(x)+2\lambda x
\end{array}
\qquad
\left(x\in \mathbb{F}\right),
\]
with a certain homomorphism $\varphi\colon \mathbb{F}\to \mathbb{K}$.
\end{rem}
Finally, the last possibility is that $\left\{h, k\right\}$ is linearly dependent. In this case
there are constants $\lambda_{1}, \lambda_{2}\in \mathbb{K}$ not vanishing simultaneously such that
\[
\lambda_{1}k(x)+\lambda_{2}h(x)=0
\]
for all $x\in \mathbb{F}$. Again we have the following alternative.
\begin{enumerate}[A)]
\item Either $\lambda_{2}=0$ and then $k\equiv 0$. In this case \eqref{Eq_gen} is
\[
f(xy)= h(x)h(y)
\qquad
\left(x, y\in \mathbb{F}\right).
\]
Using Proposition \ref{Prop3} we finally get that there exists a multiplicative function
$m\colon \mathbb{F}\to \mathbb{K}$ such that
\[
\begin{array}{rcl}
h(x)&=&h(1)m(x)\\
f(x)&=&h(1)^{2}m(x)
\end{array}
\qquad
\left(x\in \mathbb{F}\right).
\]
\item Or $\lambda_{2}\neq 0$ and then there exists a constant $\lambda\in \mathbb{K}$ such that
\[
h(x)=\lambda k(x)
\qquad
\left(x\in \mathbb{F}\right).
\]
In this case equation \eqref{Eq_gen} is of the form
\[
f(xy)=k(x)y+xk(y)+\lambda^{2}k(x)k(y)
\qquad
\left(x, y\in \mathbb{F}\right).
\]
\end{enumerate}
\begin{prop}
Let $\lambda\in \mathbb{K}$.
Functions $f, k\colon \mathbb{F}\to \mathbb{K}$ fulfill the functional equation
\[
f(xy)=xk(y)+k(x)y+\lambda^{2}k(x)k(y)
\qquad
\left(x, y\in \mathbb{F}\right)
\]
if and only if
\begin{enumerate}[A)]
\item in case $\lambda=0$, there exists a Leibniz function $\delta\colon \mathbb{F}\to \mathbb{K}$ such that
\[
\begin{array}{rcl}
k(x)&=&k(1)x+\delta(x)\\
f(x)&=&2k(1)x+\delta(x)
\end{array}
\qquad
\left(x\in \mathbb{F}\right),
\]
\item in case $\lambda\neq 0$
\begin{enumerate}[(a)]
\item either there exists a constant $\gamma \in \mathbb{K}$ such that
\[
\begin{array}{rcl}
k(x)&=&\gamma x\\
f(x)&=&\left(\lambda^{2}\gamma^{2}+2\gamma \right) x
\end{array}
\qquad
\left(x\in \mathbb{F}\right).
\]
\item or there exists a multiplicative function $m\colon \mathbb{F}\to \mathbb{K}$ and a constant
$\gamma\in \mathbb{K}$ such that
\[
\begin{array}{rcl}
k(x)&=& -\dfrac{1}{\lambda^{2}}x+\gamma m(x)\\[2mm]
f(x)&=&-\dfrac{1}{\lambda^2}x+\gamma^{2}\lambda^{2}m(x)
\end{array}
\qquad
\left(x\in \mathbb{F}\right).
\]
\end{enumerate}
\end{enumerate}
\end{prop}
\begin{proof}
Observe that our equation with $y=1$ immediately yields that
\[
f(x)= (1+\lambda^{2}k(1))k(x)+k(1)x
\qquad
\left(x\in \mathbb{F}\right).
\]
If $\lambda=0$, then from this we get that the mapping $\tilde{k}$ defined on $\mathbb{F}$ by
\[
\tilde{k}(x)=k(x)-k(1)x
\qquad
\left(x\in \mathbb{F}\right),
\]
is a Leibniz function.
If $\lambda\neq 0$, then define the function $h\colon \mathbb{F}\to \mathbb{K}$ by
\[
h(x)= x+\dfrac{\lambda^{2}}{2}k(x)
\qquad
\left(x\in \mathbb{F}\right)
\]
to derive
\[
f(xy)= k(x)h(y)+h(x)k(y)
\qquad
\left(x, y\in \mathbb{F}\right),
\]
that is the same sine-type equation as in the proof of Proposition \ref{Prop4}.
Similarly as there, a careful adaptation shows that alternative (a) corresponds to the case when
$k$ and $h$ are linearly dependent, while alternative (b) corresponds to the case when
$k$ and $h$ are linearly independent.
\end{proof}
\begin{rem}
If we would like to describe the additive solutions of the above functional equation, then the mapping
$\delta$ should be replaced by a derivation $d\colon \mathbb{F}\to \mathbb{K}$, and the mapping
$m$ should be replaced by a homomorphism $\varphi\colon \mathbb{F}\to\mathbb{K}$.
\end{rem}
\subsubsection*{The non-degenerate case}
In this subsection we investigate the so-called non-degenerate case.
More precisely, the problem to be studied is the following.
Let \emph{$\mathbb{K}$ be an algebraically closed field with $\mathrm{char}(\mathbb{K})\neq 2$ and
$\mathbb{F}$ be a subfield of $\mathbb{K}$},
and $f, h, k\colon \mathbb{F}\to \mathbb{K}$ be functions so that the system
$\left\{\mathrm{id}, h, k\right\}$ is \emph{linearly independent}.
In what follows we determine the solutions of the functional equation
\[
f(xy)=h(x)h(y)+xk(y)+k(x)y
\qquad
\left(x, y\in \mathbb{F}\right).
\]
Using the results of Székelyhidi \cite{Sze91} and McKiernan \cite{McK77} delineated in the first section,
we derive immediately that the solutions $f, h, k\colon \mathbb{F}\to \mathbb{K}$ of the above equation
are exponential polynomials of degree at most two.
Depending on this degree we have three different possibilities, see pages 89-92 of \cite{Sze91} where the description
of the functional equation
\[
f(xy)= g_{1}(x)h_{1}(y)+g_{2}(x)h_{2}(y)+g_{3}(x)h_{3}(y)
\]
can be found. This obviously covers our equation with the choice
\[
\begin{array}{rcl}
g_{1}(x)=h_{1}(x)&=&h(x)\\
g_{2}(x)&=&x\\
h_{2}(x)&=&k(x)\\
g_{3}(x)&=&k(x)\\
h_{3}(x)&=&x
\end{array}
\qquad
\left(x\in \mathbb{F}\right).
\]
The first possibility is that there exist different nonzero multiplicative functions
$m_{1}, m_{2}, m_{3} \colon \mathbb{F}\to \mathbb{K}$ and constants $\alpha_{i}, \beta_{j}^{(i)}, \gamma_{i}^{(j)}\in \mathbb{K}$,
$i, j=1, 2, 3$ such that
\[
\begin{array}{rcl}
f(x)&=&\alpha_{1}m_{1}(x)+\alpha_{2}m_{2}(x)+\alpha_{3}m_{3}(x)\\[2mm]
g_{i}(x)&=&\beta^{(i)}_{1}m_{1}(x)+\beta^{(i)}_{2}m_{2}(x)+\beta^{(i)}_{3}m_{3}(x)\\[2mm]
h_{i}(x)&=&\gamma^{(i)}_{1}m_{1}(x)+\gamma^{(i)}_{2}m_{2}(x)+\gamma^{(i)}_{3}m_{3}(x)\\
\end{array}
\qquad
\left(x\in \mathbb{F}\right).
\]
Condition $g_{1}=h_{1}$ implies that
\[
\beta_{j}^{(1)}= \gamma_{j}^{(1)}
\qquad
\left(j=1, 2, 3\right).
\]
Similarly, from $h_{2}=g_{3}$ we obtain that
\[
\beta^{(3)}_{j} =\gamma_{j}^{(2)}
\qquad
\left(j=1, 2, 3\right).
\]
Finally, from
\[
g_{2}(x)=h_{3}(x)=x
\qquad
\left(x\in \mathbb{F}\right)
\]
we derive that one of the multiplicative functions $m_{1}, m_{2}, m_{3}$ is the identity, say $m_{1}$ and
\[
\begin{array}{rcl}
\beta_{2}^{(2)}=\beta_{3}^{(2)}=0& \; & \beta_{1}^{(2)}=1\\[2mm]
\gamma_{2}^{(3)}=\gamma_{3}^{(3)}=0& \; & \gamma_{1}^{(3)}=1.
\end{array}
\]
Using this and our functional equation, we get that for the above constants
\[
\begin{pmatrix}
\beta_{1}^{(1)} & 1 & \beta_{1}^{(3)}\\
\beta_{2}^{(1)} & 0& \beta_{2}^{(3)}\\
\beta_{3}^{(1)} & 0 & \beta_{3}^{(3)}
\end{pmatrix}
\cdot
\begin{pmatrix}
\beta_{1}^{(1)} & \beta_{2}^{(1)} & \beta_{3}^{(1)}\\
\beta_{1}^{(3)} & \beta_{2}^{(3)}& \beta_{3}^{(3)}\\
1 & 0 & 0
\end{pmatrix}
=
\begin{pmatrix}
\alpha_{1} & 0& 0\\
0& \alpha_{2} & 0\\
0& 0& \alpha_{3}
\end{pmatrix}
\]
has to hold. Especially, $\beta_{2}^{(1)}\cdot\beta_{3}^{(1)}=0$, which yields that the coefficient of $m_{2}$ or that of $m_{3}$ is zero.
All in all, there exists a multiplicative function $m\colon \mathbb{F}\to \mathbb{K}$ and constants such that
\[
\begin{array}{rcl}
f(x)&=&(\beta_{1}^{2}+2\gamma_{1})x+\beta_{2}^{2}m(x)\\
h(x)&=&\beta_{1}x+\beta_{2}m(x)\\
k(x)&=&\gamma_{1}x+\gamma_{2}m(x)
\end{array}
\qquad
\left(x\in \mathbb{F}\right).
\]
In this case however the functions $\mathrm{id}, h$ and $k$ span a linear space of dimension at most two.
Notice that we are interested in the case $\dim \mathrm{lin}\left(\mathrm{id}, h, k\right)=3$. Therefore the above type of solutions
does not appear.
The second possibility is that there exist different multiplicative functions
$m_{1}, m_{2}$ and a logarithmic function $l\colon \mathbb{F}^{\times}\to \mathbb{K}$ and constants
such that
\[
\begin{array}{rcl}
f(x)&=&\left(\alpha_{1}l(x)+\alpha_{2}\right)m_{1}(x)+\alpha_{3}m_{2}(x)\\
g_{i}&=&\left(\beta_{1}^{(i)}l(x)+\beta_{2}^{(i)}\right)m_{1}(x)+\beta_{3}m_{2}(x)\\
h_{i}&=&\left(\gamma_{1}^{(i)}l(x)+\gamma_{2}^{(i)}\right)m_{1}(x)+\gamma_{3}m_{2}(x)\\
\end{array}
\qquad
\left(x\in \mathbb{F}, i=1, 2, 3\right).
\]
In our case
\[
\begin{array}{rcl}
g_{1}(x)=h_{1}(x)&=&h(x)\\
g_{2}(x)&=&x\\
h_{2}(x)&=&k(x)\\
g_{3}(x)&=&k(x)\\
h_{3}(x)&=&x
\end{array}
\qquad
\left(x\in \mathbb{F}\right),
\]
thus either $m_{1}$ or $m_{2}$ is the identity.
If we would have
\[
m_{2}(x)=x
\qquad
\left(x\in \mathbb{F}\right),
\]
then after comparing the coefficients
\[
\begin{array}{rcl}
f(x)&=&\alpha_{1}x+\alpha_{2}m(x)\\
h(x)&=&\beta_{1}x+\beta_{2}m(x)\\
k(x)&=&\gamma_{1}x+\gamma_{2}m(x)
\end{array}
\qquad
\left(x\in \mathbb{F}\right).
\]
would follow with certain constants. Similarly as above, in this case we would have
$\dim \mathrm{lin}\left(\mathrm{id}, h, k\right)\leq 2$, contrary to our assumptions.
The fact that
\[
m_{1}(x)=x
\qquad
\left(x\in \mathbb{F}\right),
\]
means that there exists a multiplicative function
$m\colon \mathbb{F}\to \mathbb{K}$ and a logarithmic function $l\colon \mathbb{F}^{\times}\to \mathbb{K}$ and constants
such that
\[
\begin{array}{rcl}
f(x)&=&\left(\alpha_{1}l(x)+\alpha_{2}\right)x+\alpha_{3}m(x)\\[2mm]
h(x)&=&\left(\beta_{1}l(x)+\beta_{2}\right)x+\beta_{3}m(x)\\[2mm]
k(x)&=&\left(\gamma_{1}l(x)+\gamma_{2}\right)x+\gamma_{3}m(x)\\
\end{array}
\qquad
\left(x\in \mathbb{F}\right).
\]
Inserting this back into our equation,
the system of equations
\[
\begin{array}{rcl}
\beta_{1}&=&0 \\[2mm]
\alpha_{1}&=&\gamma_{1}\\[2mm]
\alpha_{2}&=& \beta_{2}^{2}+2\gamma_{2}\\[2mm]
\alpha_{3}&=&\beta_{3}^{2}
\end{array}
\]
follows, that is,
\[
\begin{array}{rcl}
f(x)&=&\left(\gamma_{1}l(x)+\beta_{2}^{2}+2\gamma_{2}\right)x+\beta_{3}^{2}m(x)\\[2mm]
h(x)&=&\beta_{2}x+\beta_{3}m(x)\\[2mm]
k(x)&=&\left(\gamma_{1}l(x)+\gamma_{2}\right)x+\gamma_{3}m(x)\\
\end{array},
\]
where additionally $\gamma_{3}+\beta_{2}\beta_{3}=0$ also has to hold.
To guarantee that the system $\left\{\mathrm{id}, h, k\right\}$ is linearly independent,
$\beta_{3}\neq 0$ and $\gamma_{1}\neq 0$ also have to be supposed.
The last possibility is that all the involved functions are exponential polynomials of degree two. Since
\[
g_{2}(x)=h_{3}(x)= x
\qquad
\left(x\in \mathbb{F}\right),
\]
the corresponding multiplicative function is the identity, that is, we have
\[
\begin{array}{rcl}
f(x)=\displaystyle \sum_{p, q=1}^{2}\alpha_{p, q}l_{p}(x)l_{q}(x)x+\displaystyle \sum_{p=1}^{2}\alpha_{p}l_{p}(x)x+\alpha x\\[2mm]
h(x)=\displaystyle \sum_{p, q=1}^{2}\beta_{p, q}l_{p}(x)l_{q}(x)x+\displaystyle \sum_{p=1}^{2}\beta_{p}l_{p}(x)x+\beta x\\[2mm]
k(x)=\displaystyle \sum_{p, q=1}^{2}\gamma_{p, q}l_{p}(x)l_{q}(x)x+\displaystyle \sum_{p=1}^{2}\gamma_{p}l_{p}(x)x+\gamma x\\
\end{array}
\qquad
\left(x\in \mathbb{F}\right)
\]
with certain constants and with linearly independent logarithmic functions $l_{1}, l_{2}\colon \mathbb{F}^{\times}\to\mathbb{K}$.
Substituting these representations into our equation, firstly
\[
\alpha_{p, q}=\beta_{p, q}=\gamma_{p, q}=0
\qquad
\left(p, q\in \left\{1, 2\right\}\right)
\]
can be concluded, that is, in fact we have that
\[
\begin{array}{rcl}
f(x)=\displaystyle \sum_{p=1}^{2}\alpha_{p}l_{p}(x)x+\alpha x\\[2mm]
h(x)=\displaystyle \sum_{p=1}^{2}\beta_{p}l_{p}(x)x+\beta x\\[2mm]
k(x)=\displaystyle \sum_{p=1}^{2}\gamma_{p}l_{p}(x)x+\gamma x\\
\end{array}
\qquad
\left(x\in \mathbb{F}\right).
\]
Again, the comparison of the coefficients leads to
\[
\beta_{1}=0\qquad
\beta_{2}=0\qquad
\alpha_{1}=\gamma_{1}\qquad
\alpha_{2}=\gamma_{2}\qquad
\alpha=\beta^{2}+2\gamma,
\]
that is, there exist linearly independent logarithmic functions $l_{1}, l_{2}\colon \mathbb{F}^{\times}\to \mathbb{K}$ and
constants $\gamma_{1}, \gamma_{2}, \beta, \gamma \in \mathbb{K}$
such that
\[
\begin{array}{rcl}
f(x)&=&\gamma_{1}l_{1}(x)x+\gamma_{2}l_{2}(x)x+ (\beta^{2}+2\gamma) x\\[2mm]
h(x)&=&\beta x\\[2mm]
k(x)&=&\gamma_{1}l_{1}(x)x+\gamma_{2}l_{2}(x)x+\gamma x
\end{array}
\qquad
\left(x\in \mathbb{F}\right).
\]
Observe that in this case $\mathrm{id}$ and $h$ are linearly dependent, yielding that this possibility cannot occur in our situation.
Summing up, the following statement holds true.
\begin{thm}\label{Thm5}
Let $f, h, k\colon \mathbb{F}\to \mathbb{K}$ be functions such that
the system $\left\{\mathrm{id}, h, k\right\}$ is \emph{linearly independent}.
The functional equation
\begin{equation}\label{Eq_gen_indep}
f(xy)=h(x)h(y)+xk(y)+k(x)y
\end{equation}
is fulfilled for any $x, y\in \mathbb{F}$ if and only if
there exists a multiplicative function
$m\colon \mathbb{F}\to \mathbb{K}$ and a logarithmic function $l\colon \mathbb{F}^{\times}\to \mathbb{K}$ and constants
$\beta_{2}, \beta_{3}, \gamma_{1}, \gamma_{2}, \gamma_{3}\in \mathbb{K}$
such that
\[
\begin{array}{rcl}
f(x)&=&\left(\gamma_{1}l(x)+\beta_{2}^{2}+2\gamma_{2}\right)x+\beta_{3}^{2}m(x)\\[2mm]
h(x)&=&\beta_{2}x+\beta_{3}m(x)\\[2mm]
k(x)&=&\left(\gamma_{1}l(x)+\gamma_{2}\right)x+\gamma_{3}m(x)\\
\end{array}
\qquad
\left(x\in \mathbb{F}\right),
\]
where additionally $\gamma_{3}+\beta_{2}\beta_{3}=0$, $\beta_{3}\neq 0$ and $\gamma_{1}\neq 0$ also have to hold.
\end{thm}
\section{Further interpretations and open questions}
The primary aim of this paper was to study (different) notions of homo-derivations on fields
(with and without additivity supposition) as well as the Pexiderized version of this definition.
At the same time, the results obtained here can be restated with the aid of the notion of alien functional equations.
This concept was developed by J.~Dhombres in the paper \cite{Dho88}\index{Dhombres, J.}.
However, the interested reader should also consult the survey paper Ger--Sablik \cite{GerSab17}.
Let $X$ and $Y$ be nonempty sets and $E_{1}(f)=0$ and $E_{2}(f)=0$ be two functional equations for the function $f\colon X\to Y$.
We say that equations $E_{1}$ and $E_{2}$ are \emph{alien}, if any solution $f\colon X\to Y$ of the functional equation
\[
E_{1}(f)+E_{2}(f)=0
\]
also solves the system
\[
\begin{array}{rcl}
E_{1}(f)&=&0\\
E_{2}(f)&=&0.
\end{array}
\]
Furthermore, equations $E_{1}$ and $E_{2}$ are \emph{strongly alien}, if any pair $f, g\colon X\to Y$ of functions that solves
\[
E_{1}(f)+E_{2}(g)=0
\]
also yields a solution for
\[
\begin{array}{rcl}
E_{1}(f)&=&0\\
E_{2}(g)&=&0.
\end{array}
\]
The following statement shows that the multiplicative Cauchy equation and the Leibniz rule equation are \emph{not strongly alien};
this is an easy consequence of Theorem \ref{Thm5}.
\begin{prop}
Let $h, k\colon \mathbb{F}\to \mathbb{K}$ be functions such that the system
$\left\{\mathrm{id}, h, k\right\}$ is \emph{linearly independent}.
The functional equation
\begin{equation}\label{Eq_gen_alien}
h(xy)+k(xy)=h(x)h(y)+xk(y)+k(x)y
\end{equation}
is fulfilled for any $x, y\in \mathbb{F}$ if and only if
there exists a multiplicative function
$m\colon \mathbb{F}\to \mathbb{K}$ and a logarithmic function $l\colon \mathbb{F}^{\times}\to \mathbb{K}$ and constants
$\beta, \gamma\in \mathbb{K}$, $\beta \neq 1$, $\gamma\neq 0$
such that
\[
\begin{array}{rcl}
h(x)&=&\beta x+(1-\beta)m(x)\\[2mm]
k(x)&=&\left(\gamma l(x)+\beta-\beta^{2}\right)x+(\beta^{2}-\beta)m(x)\\
\end{array}
\qquad
\left(x\in \mathbb{F}\right).
\]
\end{prop}
As the corollary below shows, the multiplicative Cauchy equation and the Leibniz rule equation
are alien, cf. Proposition \ref{Prop1}, and take $\lambda=\mu$ in the corollary below.
\begin{cor}
Let $\lambda, \mu\in \mathbb{K}$ be arbitrarily fixed constants not vanishing simultaneously.
Function $f\colon \mathbb{F}\to\mathbb{K}$ fulfills the functional equation
\begin{equation}\label{Eq8}
\lambda \left[f(xy)-f(x)y-xf(y)\right]+\mu\left[f(xy)-f(x)f(y)\right]=0
\qquad
\left(x, y\in \mathbb{F}\right)
\end{equation}
if and only if
\begin{enumerate}[(A)]
\item either $\lambda=0$ and $f$ is multiplicative;
\item or $\mu=0$ and $f$ is a Leibniz function;
\item or none of them is zero and
\begin{enumerate}
\item $f$ is identically zero
\item or
\[
f(x)= \dfrac{\mu-\lambda}{\mu}\cdot x
\qquad
\left(x\in \mathbb{F}\right).
\]
\end{enumerate}
\end{enumerate}
\end{cor}
\begin{rem}
Under the assumptions of the previous corollary, the additive solutions of equation \eqref{Eq8} are the following.
\begin{enumerate}[(A)]
\item either $\lambda=0$ and $f$ is a homomorphism;
\item or $\mu=0$ and $f$ is a derivation;
\item or none of them is zero and
\begin{enumerate}
\item $f$ is identically zero
\item or
\[
f(x)= \dfrac{\mu-\lambda}{\mu}\cdot x
\qquad
\left(x\in \mathbb{F}\right).
\]
\end{enumerate}
\end{enumerate}
\end{rem}
\begin{opp}
In this paper equation \eqref{Eq_homo-deriv} was considered under rather mild assumptions on the domain and also on the range.
At the same time, while investigating functional equation \eqref{Eq_gen} we always assumed that the range
of the involved mappings is a commutative, algebraically closed field $\mathbb{K}$ with $\mathrm{char}(\mathbb{K})\neq 2$.
The reason for this is that our main tools were Theorem \ref{Szekely} and the related results of McKiernan \cite{McK77}.
Clearly, the above equations can be studied without these assumptions.
We remark that in case of the so-called \emph{degenerate cases} a careful adaptation of the proofs shows that the same results hold true for commutative rings
(at some places we have to assume that the range does not have any zero-divisors).
Therefore we can formulate the following open questions.
\begin{enumerate}[(A)]
\item The algebraic closedness of the field $\mathbb{K}$ is essential in our results. Nevertheless, if the field $\mathbb{K}$ is not algebraically closed, then we may
take $\mathrm{algcl}(\mathbb{K})$ as the extended range. Using our method, $\mathrm{algcl}(\mathbb{K})$-valued solutions can be described and
every $\mathbb{K}$-valued solution belongs to the above larger solution space. How can these solutions be recognized in the larger solution space?
\item To apply the results of Székelyhidi \cite{Sze91} and McKiernan \cite{McK77}, the assumption that
$\mathbb{F}\subset \mathbb{K}$ is a field is sufficient, but may not be necessary.
Observe that we only need $\mathbb{F}$ to be a (commutative) subring of $\mathbb{K}$. In case the range
$Q$ is only a (commutative) ring, then what else should be supposed about $Q$ to get the same results? \\
It is worth noting that if $Q$ has no zero divisors, then up to isomorphism there exists a unique field of fractions that we may denote by
$\mathbb{K}$. In this case $\mathbb{K}$-valued functions can be considered and if it is needed we should take the algebraic closure
of this field (c.f. part (A)).
\item In case $P\subset Q$ are (commutative) rings and we consider mappings from $P$ to $Q$, are there solutions of \eqref{Eq_gen} different from those presented here?
\end{enumerate}
\end{opp}
\vspace{0.5cm}
\noindent
\emph{Acknowledgement.}
Gergely Kiss was supported by the Hungarian National Foundation for
Scientific Research, Grant No. K124749 and the Premium Postdoctoral
Fellowship of the Hungarian Academy of Sciences.
The research of Eszter Gselmann has been carried out with the help of the project 2019-2.1.11-T\'{E}T-2019-00049,
which has been implemented with the support provided from the National Research, Development
and Innovation Fund of Hungary, financed under the T\'{E}T funding scheme.
| {
"timestamp": "2020-02-06T02:11:12",
"yymm": "2001",
"arxiv_id": "2001.03439",
"language": "en",
"url": "https://arxiv.org/abs/2001.03439",
"abstract": "The purpose of this paper is to study the (different) notions of homo-derivations. These are additive mappings $f$ of a ring $R$ that also fulfill the identity \\[ f(xy)=f(x)y+xf(y)+f(x)f(y) \\qquad \\left(x, y\\in R\\right),\\] or (in case of the other notion) the system of equations \\[ f(xy)=f(x)f(y)\\] \\[f(xy)=f(x)y+xf(y) \\qquad \\left(x, y\\in R\\right).\\] Our primary aim is to investigate the above equations without additivity as well as the following Pexiderized equation \\[ f(xy)=h(x)h(y)+xk(y)+k(x)y. \\] The obtained results show that under rather mild assumptions homo-derivations can be fully characterized, even without the additivity assumption.",
"subjects": "Rings and Algebras (math.RA)",
"title": "Remarks on the notion of homo-derivations",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9719924810166349,
"lm_q2_score": 0.7279754489059774,
"lm_q1q2_score": 0.7075866627013195
} |
https://arxiv.org/abs/1805.00208 | Some properties of Controlled Fusion Frames | Controlled frames in Hilbert spaces have been introduced by Balazs, Antoine and Grybos to improve the numerical efficiency of algorithms for inverting the frame operator. In this paper we introduce and present some new concepts and results on controlled fusion frames for Hilbert spaces. It is shown that controlled fusion frames, as a generalization of fusion frames, give a more general way to obtain a numerical advantage, in the sense of preconditioning, when checking the fusion frame condition. To this end, we introduce the notion of Q-duality for controlled fusion frames. We also study the robustness of controlled fusion frames under some perturbations. | \section{Introduction}
Frames, as a generalization of bases in Hilbert spaces, were first introduced by Duffin and Schaeffer (\cite{Duffin}) in 1952, during their study of nonharmonic Fourier series.
Nowadays, frames play a fundamental role not only in theory but also in many kinds of applications, and have been widely applied in filter bank theory, coding and communications, signal processing and system modeling; see (\cite{Musazadeh}), (\cite{Strohmer}), ({\cite{Feichtinger}}), ({\cite{cassaza}}), ({\cite{Blo}}).
One of the newest generalizations of frames is the notion of controlled frames. Controlled frames were introduced to improve the numerical efficiency of iterative algorithms for inverting the frame operator on abstract Hilbert spaces (\cite{Balazs}), (\cite{Khosravi}), (\cite{Hua}).
This manuscript is organized as follows. In Section 2, we recall some definitions and lemmas from frame and operator theory. In Section 3, we fix the notation of this paper, summarize known results and prove some new ones. In Section 4, we define Q-duality and perturbations for $CC^{\prime}$-controlled fusion frames and establish some results about them.
Throughout this paper, $H$ is a separable Hilbert space, $\mathcal{B}(H)$ is the family of all bounded linear operators on $H$ and $GL(H)$ denotes the set of all bounded linear operators which have bounded inverse. Let $GL^{+}(H)$ be the set of all positive operators in $GL(H)$.
It is easy to check that if $C, C^{\prime} \in GL(H)$, then $C^{\prime*}, C^{\prime-1}$ and $CC^{\prime}$ are in $GL(H)$.
We write $Id_{H}$ for the identity operator on $H$, and $\pi_V$ for the orthogonal projection from $H$ onto a closed subspace $V\subseteq H$.
\section{Preliminaries}
In this section, some necessary definitions and lemmas are introduced.\\
\begin{definition}
A sequence $\lbrace f_{i} \rbrace _{i\in I}$ in $H$ is a frame if there exist constants $0 <A \leqslant B <\infty$ such that for all $f\in H$
\begin{align*}
A\Vert f \Vert ^{2}\leqslant \sum _{i\in I} \vert\langle f,f_{i}\rangle\vert^{2}\leqslant B \Vert f \Vert ^{2}.
\end{align*}
\end{definition}
The constants $A,B$ are called frame bounds; $A$ is the lower bound and $B$ is the upper bound. The frame is tight if $A=B$, and it is called a Parseval frame if $A=B=1$. If only the upper bound is required, we call $\lbrace f_{i} \rbrace _{i\in I}$ a Bessel sequence. If $\lbrace f_{i} \rbrace _{i\in I}$ is a Bessel sequence, then the following operators are bounded:
\begin{align*}
T&:\ell^{2}(I)\longmapsto H\\
&T(c_{i})=\sum _{i\in I}c_{i}f_{i}
\end{align*}
\begin{align*}
T^{*}&:H\longmapsto\ell^{2}(I)\\
&T^{*}f=\lbrace\langle f,f_{i}\rangle\rbrace _{i\in I}
\end{align*}
\begin{align*}
S&:H\longmapsto H\\
Sf&=TT^{*}f=\sum _{i\in I} \langle f,f_{i}\rangle f_{i}.
\end{align*}
These operators are called the synthesis operator, the analysis operator and the frame operator, respectively. The representation space employed in this setting is:
\begin{align*}
\ell^{2}(I)=\Big\lbrace\lbrace c_{i}\rbrace_{i\in I}:\ c_{i}\in\Bbb C ,\ \sum _{i\in I}\vert c_{i} \vert ^{2}<\infty \Big\rbrace
\end{align*}
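For instance (a standard example, included here only for the reader's convenience), every orthonormal basis $\lbrace e_{i} \rbrace _{i\in I}$ of $H$ is a Parseval frame, since Parseval's identity gives
\begin{align*}
\sum _{i\in I} \vert\langle f,e_{i}\rangle\vert^{2}=\Vert f \Vert ^{2}, \qquad f\in H,
\end{align*}
and in this case $T^{*}$ is the coordinate map while $S=Id_{H}$. The union of two orthonormal bases is a tight frame with $A=B=2$.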
\begin{definition}
Let $W:=\{W_i\}_{i\in I}$ be a family of closed subspaces of $H$ and $v:=\{v_i\}_{i\in I}$ be a family of weights (i.e. $v_i>0$ for any $i\in I$). We say that $W$ is a fusion frame with respect to $v$ for $H$ if there exist constants $0<A\leq B<\infty$ such that for each $h\in H$
\begin{eqnarray*}
A\Vert h\Vert^2\leq\sum_{i\in I}v^{2}_i\Vert \pi_{W_i}(h)\Vert^2\leq B\Vert h\Vert^2.
\end{eqnarray*}
\end{definition}
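As a simple illustration (a standard example included here for the reader's convenience), if $\lbrace e_{i}\rbrace_{i\in I}$ is an orthonormal basis of $H$, then the subspaces $W_{i}=\overline{span}\lbrace e_{i}\rbrace$ with weights $v_{i}=1$ form a Parseval fusion frame, since
\begin{align*}
\sum_{i\in I}v^{2}_i\Vert \pi_{W_i}(h)\Vert^2=\sum_{i\in I}\vert\langle h,e_{i}\rangle\vert^{2}=\Vert h\Vert^2 .
\end{align*}
Likewise, the three coordinate planes of $\mathbb{R}^{3}$ with weights $v_{i}=1$ form a tight fusion frame with $A=B=2$, since each coordinate of $h$ appears in exactly two of the projections.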
\begin{lemma}(\cite{ga})\label{l1}
Let $V\subseteq H$ be a closed subspace, and $T$ be a linear bounded operator on $H$. Then
$$\pi_{V}T^*=\pi_{V}T^* \pi_{\overline{TV}}.$$
If $T$ is unitary (i.e. $T^*T=Id_{H}$), then
$$\pi_{\overline{TV}}T=T\pi_{V}.$$
\end{lemma}
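For the reader's convenience, we sketch the short argument behind the first identity (it is not needed in the sequel): if $y\perp \overline{TV}$, then for every $v\in V$
\begin{align*}
\langle T^{*}y,v\rangle=\langle y,Tv\rangle=0,
\end{align*}
so $\pi_{V}T^{*}y=0$, and therefore $\pi_{V}T^{*}=\pi_{V}T^{*}\pi_{\overline{TV}}$.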
\begin{lemma}(\cite{ch})\label{l3}
Let $u \in \mathcal{B}(K,H)$ be a bounded operator with closed range $R_{u}$. Then there exists a bounded operator $u^{\dagger} \in \mathcal{B}(H,K)$ for which\\
\begin{align*}
uu^{\dagger}x=x, \ \ x \in R_{u}
\end{align*}
and $$(u^*)^\dagger=(u^{\dagger})^*.$$
\end{lemma}
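Let us also recall (a standard fact added here for completeness) that $u^{\dagger}$ can be chosen to be the Moore--Penrose pseudo-inverse of $u$, which moreover satisfies
\begin{align*}
u^{\dagger}uu^{\dagger}=u^{\dagger}, \qquad uu^{\dagger}=\pi_{R_{u}},
\end{align*}
where $\pi_{R_{u}}$ is the orthogonal projection of $H$ onto the closed range $R_{u}$.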
\section{Controlled fusion frame}
\begin{definition}(\cite{Khosravi})
Let $\lbrace W_{i}\rbrace_ {i\in I}$ be a collection of closed subspaces of the Hilbert space $H$, let $\lbrace v_{i}\rbrace_ {i\in I}$ be a family of weights, i.e. $v_{i}>0$, $i \in I$, and let $C, C'\in GL(H)$. The sequence $W=\lbrace (W_{i},v_{i})\rbrace _ {i \in I}$ is called a fusion frame controlled by $C$ and $C^{\prime}$, or a $CC^{\prime}$-controlled fusion frame for $H$, if there exist constants $0< A \leq B< \infty$ such that for all $f \in H$
\begin{align*}
A \Vert f \Vert^{2} \leq \sum _{i\in I} v_{i}^{2} \langle \pi_{W_{i}} C^{\prime}f,\pi_{W_{i}} Cf \rangle \leq B \Vert f\Vert^{2}
\end{align*}
\end{definition}
Throughout this paper, $W$ will denote the family $\lbrace (W_{i},v_{i})\rbrace _ {i \in I}$ unless otherwise stated. $W$ is called a tight controlled fusion frame if the constants $A,B$ can be chosen such that $A=B$, and a Parseval controlled fusion frame provided $A=B=1$. We call $W$ a $C^2$-controlled fusion frame if $C=C^{\prime}$.\\
If only the second inequality is required, we call $W$ a $CC^{\prime}$-controlled Bessel fusion sequence with bound $B$.\\
We define the controlled analysis operator by\\
\begin{align*}
&T_{W}:H \rightarrow \mathcal{K}_{2,W} \\
&T_{W}(f)=(v_{i}(C^{*}\pi_{W_{i}}C^{\prime})^{\frac{1}{2}}f).
\end{align*}
where
\begin{align*}
\mathcal{K}_{2,W}:=\lbrace v_i(C^{*}\pi_{W_{i}} C^{\prime})^{\frac{1}{2}}f \ : \ f\in H\rbrace \subset (\bigoplus_{i \in I} H)_{l^{2}}.
\end{align*}
It is easy to see that $\mathcal{K}_{2,W}$ is closed and $T_{W}$ is well defined. Moreover $T_{W}$ is a bounded linear operator with the adjoint operator $T ^{*}_{W}$ defined by
\begin{align*}
&T^{*}_{W}:\mathcal{K}_{2,W} \rightarrow H \\
&T ^{*}_{W}(v_i(C^{*}\pi_{W_{i}} C^{\prime})^{\frac{1}{2}}f)=\sum _{i\in I} v_{i}^{2}C^{*}\pi_{W_{i}} C^{\prime}f.
\end{align*}
Therefore, we define the controlled fusion frame operator $S_{W}$ on $H$ by
\begin{align*}
S_{W}f=T ^{*}_{W}T_{W}(f)=\sum _{i\in I} v_{i}^{2}C^{*}\pi_{W_{i}} C^{\prime}f.
\end{align*}
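Let us record an observation for later use (it is not stated explicitly in this form in the text, but it follows directly from the definitions, since $\langle C^{*}\pi_{W_{i}}C^{\prime}f,f\rangle=\langle \pi_{W_{i}}C^{\prime}f,\pi_{W_{i}}Cf\rangle$):
\begin{align*}
\langle S_{W}f,f\rangle=\sum _{i\in I} v_{i}^{2} \langle \pi_{W_{i}} C^{\prime}f,\pi_{W_{i}} Cf \rangle, \qquad f\in H,
\end{align*}
so the $CC^{\prime}$-controlled fusion frame condition is precisely the statement that $A\Vert f\Vert^{2}\leq\langle S_{W}f,f\rangle\leq B\Vert f\Vert^{2}$. In particular, when it holds, $S_{W}$ is invertible (as used in the proof of Theorem \ref{2}) and every $f\in H$ admits the reconstruction $f=\sum_{i\in I}v_{i}^{2}S_{W}^{-1}C^{*}\pi_{W_{i}}C^{\prime}f$.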
\begin{Example}
Let $\lbrace e_{1},e_{2},e_{3}\rbrace$ be the standard orthonormal basis for $\mathbb{R}^{3}$ and let $W_{1}=\overline{span}\lbrace e_{1},e_{2} \rbrace$, $W_{2}=\overline{span}\lbrace e_{1},e_{3} \rbrace$, $W_{3}=\overline{span}\lbrace e_{2},e_{3} \rbrace$. Let
$C(x_{1},x_{2},x_{3})=(x_{1},x_{2},x_{1}+x_{3})$ and $C^{\prime}(x_{1},x_{2},x_{3})=(x_{1},x_{2},x_{2}+x_{3})$ be two operators on $\mathbb{R}^{3}$. It is easy to see that $C,C^{\prime}\in GL(\mathbb{R}^{3})$. An easy computation shows that $\lbrace(W_{i},1)\rbrace^{3} _{i=1}$ is a $CC'$-controlled fusion frame with bounds $1$ and $4$:
\begin{align*}
\Vert f \Vert^{2} \leq \sum _{i=1}^{3} \langle \pi_{W_{i}} C^{\prime}f,\pi_{W_{i}} Cf \rangle \leq 4 \Vert f\Vert^{2}.
\end{align*}
\end{Example}
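To make the stated bounds explicit (the following verification is added only for the reader's convenience), note that for $f=(x_{1},x_{2},x_{3})$ one computes
\begin{align*}
\sum _{i=1}^{3} \langle \pi_{W_{i}} C^{\prime}f,\pi_{W_{i}} Cf \rangle
&=\big(x_{1}^{2}+x_{2}^{2}\big)+\big(x_{1}^{2}+(x_{2}+x_{3})(x_{1}+x_{3})\big)+\big(x_{2}^{2}+(x_{2}+x_{3})(x_{1}+x_{3})\big)\\
&=\left\langle \begin{pmatrix} 2&1&1\\ 1&2&1\\ 1&1&2\end{pmatrix}f,\,f\right\rangle ,
\end{align*}
and the eigenvalues of this symmetric matrix are $1$, $1$ and $4$, so the bounds $1$ and $4$ are in fact optimal.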
\begin{theorem}\label{1}
$W$ is a $CC'$-controlled fusion Bessel sequence for $H$ with bound $B$ if and only if the operator
\begin{align*}
&T^{*}_{W}:\mathcal{K}_{2,W} \rightarrow H \\
&T ^{*}_{W}(v_i(C^{*}\pi_{W_{i}} C^{\prime})^{\frac{1}{2}}f)=\sum _{i\in I} v_{i}^{2}C^{*}\pi_{W_{i}} C^{\prime}f.
\end{align*}
is a well-defined and bounded operator with $\Vert T^{*}_{W} \Vert \leq \sqrt{B}$.\\
\end{theorem}
\begin{proof}
The necessary condition follows from the definition of a $CC'$-controlled fusion Bessel sequence. We only need to prove that the sufficient condition holds. Let $T^{*} _{W}$ be a well-defined and bounded operator with $\Vert T^{*}_{W} \Vert \leq \sqrt{B}$. For any $f\in H$, we have
\begin{align*}
(\sum_{i \in I} v_{i} ^{2} \langle \pi_{W_{i}}C^{\prime}f, \pi_{W_{i}} Cf \rangle )^{2}&=(\sum_{i\in I} v_{i} ^{2} \langle C^{*}\pi_{W_{i}} C^{\prime}f,f \rangle )^{2}\\
&= (\langle T^{*}_{W}\big(v_i(C^{*} \pi_{W_{i}} C')^{\frac{1}{2}}f\big), f \rangle )^2\\
&\leq \Vert T^{*}_{W}\Vert ^{2} \Vert \big(v_i(C^{*}\pi_{W_{i}} C')^{\frac{1}{2}} f\big)_{i}\Vert_{2} ^{2} \Vert f \Vert ^{2}.
\end{align*}
But
\begin{align*}
\Vert \big(v_i(C^{*} \pi_{W_{i}} C')^{\frac{1}{2}} f\big)_{i}\Vert_{2} ^{2}=\sum _{i \in I} v_{i}^{2}\langle \pi_{W_{i}} C'f,\pi _{W_{i}} C f\rangle.
\end{align*}
It follows that
\begin{align*}
\sum _{i \in I} v_{i}^{2}\langle \pi_{W_{i}} C'f,\pi _{W_{i}} C f\rangle\leq B\Vert f\Vert ^{2}.
\end{align*}
This means that $W$ is a $CC'$-controlled fusion Bessel sequence for $H$.
\end{proof}
\begin{theorem}\label{2}
$W$ is a $CC^{\prime}$-Controlled fusion frame for $H$ if and only if
\begin{align*}
&T^{*}_{W}:\mathcal{K}_{2,W} \rightarrow H \\
&T ^{*}_{W}(v_i(C^{*}\pi_{W_{i}} C^{\prime})^{\frac{1}{2}}f)=\sum _{i\in I} v_{i}^{2}C^{*}\pi_{W_{i}} C^{\prime}f.
\end{align*}
is well-defined, bounded and surjective.
\end{theorem}
\begin{proof}
If $W$ is a $CC^{\prime}$-Controlled fusion frame for $H$, the operator $S_{W}$ is invertible. Thus, $T^{*}_{W}$ is surjective.
Conversely, let $T^{*}_{W}$ be well-defined, bounded and surjective. Then, by Theorem \ref{1}, $W$ is a $CC^{\prime}$-controlled Bessel fusion sequence for $H$.\\
So, $T_{W}(f)=(v_{i}(C^{*}\pi_{W_{i}}C^{\prime})^{\frac{1}{2}}f)$ for all $f \in H$. Since $T^{*}_{W}$ is surjective, by Lemma \ref{l3} there exists a bounded operator $(T^{*}_{W})^{\dagger}:H \rightarrow \mathcal{K}_{2,W}$ such that $T^{*}_{W}(T^{*}_{W})^{\dagger}=I_{H}$; since $(T^{*}_{W})^{\dagger}=(T^{\dagger}_{W})^{*}$, taking adjoints gives $T^{\dagger}_{W} T_{W}=I_{H}$. Now, for each $f\in H$ we have
\begin{align*}
\Vert f\Vert^{2}\leq \Vert T^{\dagger}_{W}\Vert ^{2}\,\Vert T_{W}f\Vert ^{2}=\Vert T^{\dagger}_{W}\Vert ^{2}\sum _{i\in I} v_{i}^{2} \langle \pi_{W_{i}} C^{\prime}f,\pi_{W_{i}} Cf \rangle
\end{align*}
Therefore, $W$ is a $CC^{\prime}$-controlled fusion frame for $H$ with lower controlled fusion frame bound $\Vert T^{\dagger}_{W}\Vert ^{-2}$ and upper controlled fusion frame bound $\Vert T^{*}_{W}\Vert ^{2}$.
\end{proof}
\begin{theorem}
Let $W$ be a $C^{2}$-controlled fusion frame with frame bounds $A$ and $B$. If $u \in \mathcal{B}(H)$ is an invertible operator such that $u^*C=Cu^*$, then
$\lbrace(uW_{i},v_{i})\rbrace _{i \in I}$ is a $C^{2}$-controlled
fusion frame for $H$.
\end{theorem}
\begin{proof}
Let $f\in H$. From Lemma \ref{l1}, we have
\begin{align*}
\Vert\pi_{W_i}Cu^*f\Vert=\Vert\pi_{W_i}u^*Cf\Vert=\Vert\pi_{W_i}u^*\pi_{uW_i}Cf\Vert\leq
\Vert u\Vert \Vert \pi_{uW_i}Cf\Vert.
\end{align*}
Therefore,
\begin{align*}
A\Vert u^*f\Vert^2\leq\sum_{i\in I}v_i^2\Vert\pi_{W_i}Cu^*f\Vert^2\leq\Vert u\Vert^2\sum_{i\in I} v_i^2\Vert \pi_{uW_i}Cf\Vert^2.
\end{align*}
But,
\begin{align*}
\Vert f\Vert^2\leq\Vert (u^{-1})^{*}u^*f\Vert^2\leq\Vert u^{-1}\Vert^2\Vert u^*f\Vert^2.
\end{align*}
Then,
\begin{align*}
A\Vert u^{-1}\Vert^{-2}\Vert u\Vert^{-2}\Vert f\Vert^2\leq\sum_{i\in I} v_i^2\Vert \pi_{uW_i}Cf\Vert^2.
\end{align*}
On the other hand, from lemma \ref{l1}, we obtain, with $u^{-1}$ instead of $T$:
$$\pi_{uW_i}=\pi_{uW_i}(u^*)^{-1}\pi_{W_i}u^*.$$
Thus,
$$\Vert\pi_{uW_i}Cf\Vert\leq\Vert u^{-1}\Vert \Vert\pi_{W_i}u^*Cf\Vert,$$
and it follows
\begin{align*}
\sum _{i \in I} v_{i}^{2} \Vert \pi _{uW_{i}}Cf\Vert ^{2}&\leq \Vert u^{-1}\Vert^{2}\sum _{i \in I} v_{i}^{2}\Vert
\pi _{W_{i}}u^{*}Cf\Vert ^{2}\\
&=\Vert u^{-1}\Vert^2\sum_{i\in I}v_i^2\Vert \pi_{W_i}C u^{*}f\Vert^2\\
&\leq B \Vert u^{-1}\Vert^{2}\Vert u \Vert^{2} \Vert f\Vert^{2}.
\end{align*}
\end{proof}
\begin{theorem}
Let $W=\lbrace(W_{i},v_{i})\rbrace _{i \in I}$ be a $C^{2}$-controlled
fusion frame with frame bounds $A$ and $B$. If $u \in \mathcal{B}(H)$ is an
invertible and unitary operator such that $uC=Cu$, then
$\lbrace(uW_{i},v_{i})\rbrace _{i \in I}$ is a $C^{2}$-controlled
fusion frame for $H$.
\end{theorem}
\begin{proof}
Using Lemma \ref{l1}, we have, for any $f\in H$,
\begin{align*}
A\Vert f\Vert ^{2}&\leq A\Vert u\Vert^2 \Vert u^{-1}f\Vert^2\\
&\leq\Vert u\Vert^2\sum _{i \in I} v_{i}^{2}\Vert \pi _{W_{i}}
u^{-1}Cf\Vert ^{2}\\
&\leq\Vert u\Vert^{2} \sum _{i \in I} v_{i}^{2}\Vert u^{-1}\pi
_{uW_{i}}Cf\Vert ^{2}\\
&\leq\Vert u\Vert^2 \Vert u^{-1}\Vert^2\sum_{i\in I}v_i^2\Vert
\pi_{uW_i}Cf\Vert^2,
\end{align*}
and we obtain
\begin{align*}
\sum _{i \in I} v_{i}^{2}\Vert \pi _{uW_{i}}Cf\Vert ^{2}\geq\dfrac{A}{\Vert u \Vert^{2}\Vert u^{-1}\Vert^{2}}\Vert
f\Vert^{2}.
\end{align*}
On the other hand, from lemma \ref{l1}, we obtain
\begin{align*}
\sum _{i \in I} v_{i}^{2} \Vert \pi _{uW_{i}}Cf\Vert ^{2}&\leq \Vert u\Vert^{2}\sum _{i \in I} v_{i}^{2}\Vert
\pi _{W_{i}}u^{-1}Cf\Vert ^{2}\\
&=\Vert u\Vert^2\sum_{i\in I}v_i^2\Vert \pi_{W_i}C u^{-1}f\Vert^2\\
&\leq B \Vert u^{-1}\Vert^{2}\Vert u \Vert^{2} \Vert f\Vert^{2}.
\end{align*}
\end{proof}
\begin{theorem}
Let $W=\lbrace(W_{i},v_{i})\rbrace _{i \in I}$ and $Z=\lbrace(Z_{i},v_{i})\rbrace _{i \in I}$ be $CC'$-Controlled Bessel fusion sequences for $H$. Suppose that there exists $0<\epsilon< 1$ such that
\begin{align*}
\Vert f-T^{*}_{Z}T_ {W}f \Vert \leq \epsilon \Vert f \Vert
\end{align*}
for all $f\in H$. Then $W$ and $Z$ are $CC'$-controlled fusion frames for $H$.
\end{theorem}
\begin{proof}
For each $f\in H$, we have
\begin{align*}
\Vert T^{*}_{Z}T_ {W}f\Vert\geq\Vert f\Vert-\Vert f-T^{*}_{Z}T_ {W} f \Vert\geq (1- \epsilon)\Vert f \Vert.
\end{align*}
Therefore
\begin{align*}
(1- \epsilon)\Vert f \Vert\leq\Vert T^{*}_{Z}T_ {W}f\Vert &=\sup _ {\Vert g\Vert =1}\vert \langle T^{*}_{Z}T_ {W}f,g \rangle\vert\\
&=\sup _ {\Vert g\Vert =1}\vert \langle T_ {W}f,T_{Z}g \rangle\vert\\
&\leq \sup _ {\Vert g\Vert =1} \Vert T_ {W}f\Vert .\Vert T_{Z}g\Vert\\
&\leq \sqrt{B}(\sum _{i\in I} v_{i}^{2} \langle \pi_{W_{i}} C^{\prime}f,\pi_{W_{i}} Cf \rangle)^{\frac{1}{2}},
\end{align*}
where $B$ is a controlled Bessel bound for $Z$. Hence,
\begin{align*}
\dfrac{(1- \epsilon)^{2}}{B}.\Vert f\Vert^{2}\leq(\sum _{i\in I} v_{i}^{2} \langle \pi_{W_{i}} C^{\prime}f,\pi_{W_{i}} Cf \rangle).
\end{align*}
Therefore, $W$ is a $CC'$-Controlled fusion frame for $H$. Similarly, we can show that $Z$ is also a $CC'$-Controlled fusion frame for $H$.
\end{proof}
\begin{corollary}
Let $W:=\lbrace(W_{i},v_{i})\rbrace _{i \in I}$ and $Z:=\lbrace(Z_{i},z_{i})\rbrace _{i \in I}$ be two $CC'$-controlled fusion Bessel sequences for $H$ with bounds $B_{1}$ and $B_{2}$, respectively. Suppose that $T_{W}$ and $T_{Z}$ are their controlled analysis operators and that $T^{*}_{Z}T_{W}=Id_{H}$. Then both $W$ and $Z$ are $CC'$-controlled fusion frames for $H$.\\
\end{corollary}
\begin{theorem}
Let $W$ be a $CC'$-controlled fusion frame with bounds $A,B$ for $H$. Also, let $Z:=\lbrace Z_{i}\rbrace _{i \in I}$ be a family of closed subspaces in $H$ and
\begin{align*}
\Vert v_{i}(C^{*}\pi_{W_{i}} C^{\prime}-C^{*}\pi_{Z_{i}} C^{\prime})^{\frac{1}{2}}f \Vert \leq \epsilon \Vert f\Vert,
\end{align*}
for all $f\in H$ and some $0<\epsilon<\sqrt{A}$. Then $Z:=\lbrace(Z_{i},v_{i})\rbrace _{i \in I}$ is a $CC'$-controlled fusion frame with bounds $(A-\epsilon^{2})$ and $(B+\epsilon^{2})$.
\end{theorem}
\begin{proof}
For every $f \in H$, we can write
\begin{align*}
\Vert v_{i}(C^{*}\pi_{Z_{i}} C^{\prime})^{\frac{1}{2}}f \Vert^{2}&\leq\Vert v_{i}(C^{*}\pi_{W_{i}} C^{\prime})^{\frac{1}{2}}f \Vert^{2}+\Vert v_{i}(C^{*}\pi_{W_{i}} C^{\prime}-C^{*}\pi_{Z_{i}} C^{\prime})^{\frac{1}{2}}f \Vert^{2}\\
&\leq(B+\epsilon^{2})\Vert f\Vert^{2}
\end{align*}
Thus,
\begin{align*}
\sum _{i\in I} v_{i}^{2} \langle \pi_{Z_{i}} C^{\prime}f,\pi_{Z_{i}} Cf \rangle =\Vert v_{i}(C^{*}\pi_{Z_{i}} C^{\prime})^{\frac{1}{2}}f \Vert^{2}\leq(B+\epsilon^{2})\Vert f\Vert^{2}.
\end{align*}
Therefore, $Z:=\lbrace(Z_{i},v_{i})\rbrace _{i \in I}$ is a $CC'$-controlled Bessel fusion sequence. On the other hand,
\begin{align*}
\Vert v_{i}(C^{*}\pi_{Z_{i}} C^{\prime})^{\frac{1}{2}}f \Vert^{2}&\geq\Vert v_{i}(C^{*}\pi_{W_{i}} C^{\prime})^{\frac{1}{2}}f \Vert^{2}-\Vert v_{i}(C^{*}\pi_{W_{i}} C^{\prime}-C^{*}\pi_{Z_{i}} C^{\prime})^{\frac{1}{2}}f \Vert^{2}\\
&\geq(A-\epsilon^{2})\Vert f\Vert^{2}.
\end{align*}
Hence,
\begin{align*}
\sum _{i\in I} v_{i}^{2} \langle \pi_{Z_{i}} C^{\prime}f,\pi_{Z_{i}} Cf \rangle =\Vert v_{i}(C^{*}\pi_{Z_{i}} C^{\prime})^{\frac{1}{2}}f \Vert^{2}\geq(A-\epsilon^{2})\Vert f\Vert^{2}
\end{align*}
and the proof is completed.
\end{proof}
\section{Q-duality and perturbation of controlled fusion frames}
\begin{definition}
Assume that $W$ is a $CC^{\prime}$-controlled fusion frame for $H$. A $CC'$-controlled fusion Bessel sequence $\tilde{W}:=\{(\tilde{W}_i, z_i)\}_{i\in I}$ is called a Q-dual $CC'$-controlled fusion frame of $W$ if there exists a bounded linear operator $Q:\mathcal{K}_{2,\tilde{W}}\longrightarrow \mathcal{K}_{2,W}$ such that
\begin{align*}
T^{*}_{W}QT_{\tilde{W}}=I_{H}.
\end{align*}
\end{definition}
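We note in passing (a simple observation added for illustration) that if one takes $\tilde{W}=W$ and $Q=Id_{\mathcal{K}_{2,W}}$, the condition $T^{*}_{W}QT_{\tilde{W}}=I_{H}$ reduces to $S_{W}=Id_{H}$; thus $W$ is a Q-dual of itself precisely when its controlled fusion frame operator is the identity.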
\begin{theorem}
Let $\tilde{W}$ be a Q-dual $CC'$-controlled fusion frame of $W$, with associated operator $Q:\mathcal{K}_{2,\tilde{W}}\longrightarrow \mathcal{K}_{2,W}$. Then the following conditions are equivalent.
\begin{enumerate}
\item $T^{*}_{\tilde{W}}Q^{*} T_ {W}=I_{H}$;
\item $T^{*}_ {W}QT_{\tilde{W}}=I_{H}$;
\item $\langle f,g \rangle = \langle Q^{*} T_ {W}f,T_{\tilde{W}}g \rangle=\langle Q T_{\tilde{W}}f,T_ {W}g \rangle$ for all $f,g\in H$.
\end{enumerate}
\end{theorem}
\begin{proof}
Straightforward.
\end{proof}
\begin{theorem}
If $\tilde{W}$ is a Q-dual for $W$, then $\tilde{W}$ is a $CC'$-controlled fusion frame for $H$.
\end{theorem}
\begin{proof}
Let $f\in H$ and let $B$ be an upper bound for $W$. Then
\begin{align*}
\Vert f\Vert ^{4}&=\vert\langle f,f \rangle\vert^{2}\\
&=\vert\langle Q^{*} T_ {W}f, T_{\tilde{W}}f\rangle\vert^{2}\\
&=\vert\langle QT_{\tilde{W}} f,T_ {W}f\rangle\vert^{2}\\
&\leq \Vert T_ {\tilde{W}}f \Vert^{2} \Vert Q\Vert ^{2} \Vert T_ {W}f \Vert ^{2}\\
&\leq \Vert T_ {\tilde{W}}f \Vert^{2} \Vert Q \Vert^{2} B \Vert f \Vert^{2}\\
&=\Vert Q \Vert^{2} B \Vert f \Vert^{2}\sum _{i\in I} z_{i}^{2} \langle \pi_{\tilde{W}_{i}} C^{\prime}f,\pi_{\tilde{W}_{i}} Cf \rangle.
\end{align*}
Hence,
\begin{align*}
B^{-1} \Vert Q \Vert^{-2}\Vert f\Vert ^{2}\leq \sum _{i\in I} z_{i}^{2} \langle \pi_{\tilde{W}_{i}} C^{\prime}f,\pi_{\tilde{W}_{i}} Cf \rangle
\end{align*}
and this completes the proof.
\end{proof}
\begin{corollary}
If $C_{op}$ and $D_{op}$ are the optimal bounds of $\tilde{W}$, then
$$C_{op}\geq B_{op}^{-1}\Vert Q\Vert^{-2}\ \ \ and \ \ \ D_{op}\geq A_{op}^{-1}\Vert Q\Vert^{-2}$$
where $A_{op}$ and $B_{op}$ are the optimal bounds of $W$.
\end{corollary}
\begin{definition}
Let $W:=\lbrace(W_{i},v_{i})\rbrace _{i \in I}$ and $Z:=\lbrace(Z_{i},v_{i})\rbrace _{i \in I}$ be $CC'$-controlled fusion frames for $H$, where $C,C^{\prime} \in GL(H)$, and let $0\leq \lambda_{1} , \lambda_{2}<1$ be real numbers. Suppose that $\beta:=\lbrace c_i\rbrace_{i\in I}\in \ell^{2}(I)$ is a positive sequence of real numbers. If
\begin{small}
\begin{align*}
\Vert v_{i}(C^{*}\pi_{W_{i}} C^{\prime} -C^{*} \pi_{{Z}_{i}}C^{\prime})^{\frac{1}{2}}f \Vert_{2} &\leq \lambda _{1}\Vert v_{i}(C^{*}\pi_{W_{i}} C^{\prime})^{\frac{1}{2}}f \Vert_{2}+\lambda _{2}\Vert v_{i}(C^{*} \pi_{{Z}_{i}}C^{\prime})^{\frac{1}{2}}f \Vert_{2}+\\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\Vert\beta\Vert_{2}\Vert f \Vert.\\
\end{align*}
\end{small}
then we say that $Z:=\lbrace(Z_{i},v_{i})\rbrace _{i \in I}$ is a $(\lambda _{1} ,\lambda _{2},\beta ,C ,C^{\prime})$-perturbation of $W=\lbrace(W_{i},v_{i})\rbrace _{i \in I}$.
\end{definition}
\begin{theorem}
Let $W:=\lbrace(W_{i},v_{i})\rbrace _{i \in I}$ be a $CC'$-controlled fusion frame for $H$ with frame bounds $A,B$, and $Z:=\lbrace(Z_{i},v_{i})\rbrace _{i \in I}$ be a $(\lambda _{1} ,\lambda _{2},\beta ,C,C^{\prime})$-perturbation of $W:=\lbrace(W_{i},v_{i})\rbrace _{i \in I}$. Then $Z:=\lbrace(Z_{i},v_{i})\rbrace _{i \in I}$ is a $CC'$-controlled fusion frame for $H$ with bounds:
$$(\dfrac{(1-\lambda_{1})\sqrt{A}-\Vert\beta\Vert_{2}}{1+\lambda _{2}})^{2} \ \ , \ \ (\dfrac{(1+\lambda_{1})\sqrt{B} +\Vert\beta\Vert_{2} }{1-\lambda_{2}})^{2} $$
\end{theorem}
\begin{proof}
Let $f \in H$. We have
\begin{small}
\begin{align*}
\Vert v_{i}( C^{*} \pi_{{Z}_{i}}C^{\prime})^\frac{1}{2}f \Vert_{2}&=\Vert v_{i}(C^{*}\pi_{{Z}_{i}}C^{\prime}-C^{*}\pi_{W_{i}} C^{\prime})^{\frac{1}{2}}f+v_{i}(C ^{*}\pi_{W_{i}} C^{\prime})^{\frac{1}{2}}f\Vert_{2}\\
&\leq \Vert v_{i}(C^{*} \pi_{{Z}_{i}}C^{\prime}-C ^{*}\pi_{W_{i}}C^{\prime})^{\frac{1}{2}}f \Vert_{2} +\Vert v_{i}(C^{*}\pi_{W_{i}}C^{\prime})^{\frac{1}{2}}f \Vert_{2}\\
&\leq \lambda _{1}\Vert v_{i}(C^{*}\pi_{W_{i}} C^{\prime})^{\frac{1}{2}}f \Vert_{2}+\lambda _{2}\Vert v_i(C^* \pi_{Z_{i}}C^{\prime})^{\frac{1}{2}}f \Vert_{2}+ \\
&\ \ \ \ \ \ \ \ \ \ \ +\Vert\beta\Vert_{2}\Vert f \Vert+
\Vert v_{i}(C^{*}\pi_{W_{i}} C^{\prime})^{\frac{1}{2}}f \Vert_{2}.
\end{align*}
\end{small}
Hence,
\begin{align*}
(1-\lambda_{2})\Vert v_{i} (C^{*} \pi_{{Z}_{i}}C^{\prime})^{\frac{1}{2}}f \Vert_{2}\leq(1+\lambda_{1})\Vert v_{i}(C ^{*}\pi_{W_{i}} C^{\prime})^{\frac{1}{2}}f \Vert_{2}+\Vert\beta\Vert_{2}\Vert f \Vert.
\end{align*}
Since $W$ is a $CC'$-controlled fusion frame with bounds $A$ and $B$, we have
\begin{align*}
\Vert v_{i}(C^{*}\pi_{W_{i}}C^{\prime})^{\frac{1}{2}}f \Vert^{2}
=\sum _{i \in I} v_{i}^{2}\langle \pi_{W_{i}} C^{\prime}f, \pi _{W_{i}} C f\rangle
\leq B \Vert f \Vert^{2}.
\end{align*}
So,
\begin{small}
\begin{align*}
\Vert v_{i} (C^{*} \pi_{{Z}_{i}}C^{\prime})^{\frac{1}{2}}f \Vert_{2}&\leq\dfrac{(1+\lambda_{1})\Vert v_{i}(C^{*}\pi_{W_{i}} C^{\prime})^{\frac{1}{2}}f \Vert_{2}+\Vert\beta\Vert_{2}\Vert f \Vert }{1-\lambda_{2}}\\
&\leq(\dfrac{(1+\lambda_{1})\sqrt{B} +\Vert\beta\Vert_{2} }{1-\lambda_{2}}\Vert f \Vert).
\end{align*}
Thus
\begin{align*}
\sum _{i \in I} v_{i}^{2}\langle \pi_{Z_{i}} C^{\prime}f, \pi _{Z_{i}} C f\rangle=\Vert v_{i} (C^{*} \pi_{{Z}_{i}}C^{\prime})^{\frac{1}{2}}f \Vert^{2}_{2}&\leq(\dfrac{(1+\lambda_{1})\sqrt{B} +\Vert\beta\Vert_{2} }{1-\lambda_{2}}\Vert f \Vert)^{2}.
\end{align*}
\end{small}
Now, for the lower bound, we have
\begin{small}
\begin{align*}
\Vert v_{i} (C^{*} \pi_{Z_{i}}C^{\prime})^{\frac{1}{2}}f \Vert_{2}&=\Vert v_{i}(C^{*}\pi_{W_{i}} C^{\prime})^{\frac{1}{2}}f -v_{i}(C^{*}\pi_{W_{i}} C^{\prime}
- C ^{*}\pi_{{Z}_{i}}C^{\prime})^{\frac{1}{2}}f\Vert_{2}\\
&\geq \Vert v_{i}(C^{*}\pi_{W_{i}}C^{\prime})^{\frac{1}{2}}f \Vert_{2}-\Vert v_{i}(C^{*}\pi_{W_{i}} C^{\prime}- C^{*} \pi_{{Z}_{i}}C^{\prime})^{\frac{1}{2}}f\Vert_{2}\\ &\geq \Vert v_{i}(C ^{*}\pi_{W_{i}} C^{\prime})^{\frac{1}{2}}f \Vert_{2}-\lambda _{1}\Vert v_{i}(C^{*}\pi_{W_{i}} C^{\prime})^{\frac{1}{2}}f \Vert_{2}\\
&\ \ \ \ \ -\lambda _{2}\Vert v_{i}( C^{*} \pi_{{Z}_{i}}C^{\prime})^{\frac{1}{2}}f \Vert_{2} -\Vert\beta\Vert_{2}\Vert f \Vert.
\end{align*}
\end{small}
Therefore,
\begin{align*}
(1+\lambda_{2})\Vert v_{i} (C^{*} \pi_{{Z}_{i}}C^{\prime})^{\frac{1}{2}}f \Vert_{2}\geq(1-\lambda_{1})\Vert v_{i}(C^{*}\pi_{W_{i}} C^{\prime})^{\frac{1}{2}}f \Vert_{2}-\Vert\beta\Vert_{2}\Vert f \Vert,
\end{align*}
or
$$\Vert v_{i} (C^{*} \pi_{{Z}_{i}}C^{\prime})^{\frac{1}{2}}f \Vert_{2}\geq\dfrac{(1-\lambda_{1})\Vert v_{i}(C^{*}\pi_{W_{i}} C^{\prime})^{\frac{1}{2}}f \Vert_{2}-\Vert\beta\Vert_{2}\Vert f \Vert}{1+\lambda _{2}}. $$
Since $W$ is a $CC'$-controlled fusion frame with lower bound $A$, we have
\begin{align*}
\Vert v_{i}(C^{*}\pi_{W_{i}}C^{\prime})^{\frac{1}{2}}f \Vert^{2}=\sum _{i \in I} v_{i}^{2}\langle \pi_{W_{i}} C^{\prime}f, \pi _{W_{i}} C f \rangle\geq A\Vert f \Vert^{2}.
\end{align*}
So,
\begin{align*}
\Vert v_{i} (C^{*} \pi_{{Z}_{i}}C^{\prime})^{\frac{1}{2}}f \Vert_{2}\geq(\dfrac{(1-\lambda_{1})\sqrt{A}-\Vert\beta\Vert_{2}}{1+\lambda _{2}}\Vert f \Vert).
\end{align*}
Thus,
\begin{align*}
\sum _{i \in I} v_{i}^{2}\langle \pi_{Z_{i}} C^{\prime}f, \pi _{Z_{i}} C f\rangle
&=\Vert v_{i} (C^{*} \pi_{{Z}_{i}}C^{\prime})^{\frac{1}{2}}f \Vert^{2}_{2}\\
&\geq(\dfrac{(1-\lambda_{1})\sqrt{A}-\Vert\beta\Vert_{2}}{1+\lambda _{2}}\Vert f \Vert)^{2}
\end{align*}
and the proof is completed.
\end{proof}
| {
"timestamp": "2018-05-02T02:06:17",
"yymm": "1805",
"arxiv_id": "1805.00208",
"language": "en",
"url": "https://arxiv.org/abs/1805.00208",
"abstract": "Controlled frames in Hilbert spaces have been introduced by Balazs, Antoine and Grybos to improve the numerical output of in relation to algorithms for inverting the frame operator. In this paper we have introduced and displayed some new concepts and results on controlled fusion frames for Hilbert spaces. It is shown that controlled fusion frames as a generalization of fusion frames give a generalized way to obtain numerical advantage in the sense of reconditioning to check the fusion frame condition. For this end, we introduce the notion of Q-duality for Controlled fusion frames. Also, we survey the robustness of Controlled fusion frames under some perturbations.",
"subjects": "Functional Analysis (math.FA)",
"title": "Some properties of Controlled Fusion Frames",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9719924802053235,
"lm_q2_score": 0.7279754489059774,
"lm_q1q2_score": 0.7075866621107048
} |
https://arxiv.org/abs/2210.14617 | A new thin layer model for viscous flow between two nearby non-static surfaces | We propose a two-dimensional flow model of a viscous fluid between two close moving surfaces. We show, using a formal asymptotic expansion of the solution, that its asymptotic behavior, when the distance between the two surfaces tends to zero, is the same as that of the Navier-Stokes equations. The leading terms of the formal asymptotic expansions of the solutions to the new model and to the Navier-Stokes equations are solutions of the same limit problem, and the type of the limit problem depends on the boundary conditions. If slip velocity boundary conditions are imposed on the upper and lower bound surfaces, the limit is a solution of a lubrication model, but if the tractions and friction forces are known on both bound surfaces, the limit is a solution of a thin fluid layer model. The proposed model thus proves to be a valuable tool for computing viscous fluid flow between two nearby moving surfaces, without the need to decide a priori whether the flow is typical of a lubrication or a thin fluid layer problem, and without the enormous computational effort that would be required to solve the Navier-Stokes equations in such a thin domain. | \section{Introduction}\label{sec1}
In our previous work \cite{RodTabJMAA2021}, we used the asymptotic expansions technique to study the behavior of a viscous fluid that flows between two very close moving surfaces. Asymptotic analysis is a mathematical tool that has been used successfully (since the pioneering works of Dean \cite{Dean1}-\cite{Dean2}, Friedrichs and Dressler \cite{FriedrichsDressler} and Goldenveizer \cite{Goldenveizer}) to obtain and justify mathematical models, in solid mechanics \cite{Rigolot1972}-\cite{TutekAganovicNedelec} and fluid mechanics \cite{Cimatti1983}-\cite{PanasenkoPileckas2}, when at least one of the dimensions of the domain is much smaller than the others. Using the same mathematical technique, the authors have also proposed several new shallow water models \cite{RTV1}-\cite{RTV6} and curved-pipe flow models \cite{CR1}-\cite{CR2}.
We observed, in our prior article \cite{RodTabJMAA2021}, that the viscous fluid that moves between two nearby surfaces has two very different behaviors, depending on the boundary conditions of the problem. If the pressure differences are large in the open part of the domain boundary (that is, the region of the domain boundary between the two surfaces), then the fluid obeys equation \eqref{Reynolds_gen}, which resembles a lubrication problem. If the pressure differences are small in the mentioned region of the domain boundary, then the fluid obeys equation \eqref{ec_Vi0-v2}, which is a thin fluid layer problem (it can also be understood as a shallow water problem in which the depth is known).
This behavior reminds us of that observed in the works of Ciarlet et al. \cite{CiarletLodsI}-\cite{CiarletLodsIV}, where it is shown that the solution of the linearized elasticity equations in a shell converges, when the shell thickness tends to zero, to different shell models, depending on the geometry of the shell and its boundary conditions. In particular, in the work of Ciarlet and Lods \cite{CiarletLodsIII}, the authors show that the Koiter’s shell model has the same asymptotic behavior, that is, its solutions converge, when the thickness of the shell tends to zero, to the same limit problems as the linearized elasticity equations do.
In this article we intend to justify a new two-dimensional flow model of a viscous fluid between two very close moving surfaces in a similar way to what was done in the above mentioned works \cite{CiarletLodsI}-\cite{CiarletLodsIII}, and, to confirm that when the distance between the two surfaces tends to zero, its behavior is the same as that observed in \cite{RodTabJMAA2021} for the Navier-Stokes equations, justifying it rigorously in sections \ref{sec-aa} and \ref{BoundaryConditions}.
With this aim, in the first place, we will summarize, in section \ref{PreviousModels}, the results presented previously in our article \cite{RodTabJMAA2021}, which will allow us to make some assumptions about the behavior of the solutions of the Navier-Stokes equations. These hypotheses will be used in sections \ref{NewAssumptions}-\ref{BoundaryConditions} to derive the new two-dimensional model proposed in section \ref{NewModel}. Next, in section \ref{sec-aa}, we will begin the asymptotic analysis of the model, deriving in section \ref{subseccion-4-1} the limit model if the fluid velocity is known at the upper and lower bound surfaces, and obtaining, in section \ref{subseccion_sw}, the limit model when the tractions are known at the bound surfaces. Finally, we will discuss the results achieved in section \ref{sec-conclusions}.
\section{Summary of the main previous results} \label{PreviousModels}
In our prior work \cite{RodTabJMAA2021} we studied the behavior of the Navier-Stokes equations in a domain bounded by two nearby moving surfaces, when the distance between them tends to zero. We observed that the asymptotic behavior of the solutions of the Navier-Stokes equations, in this case, strongly depends on the boundary conditions. In fact, two different limit models were obtained (one similar to a lubrication model and the other similar to a thin fluid layer model), depending on the boundary conditions in the original problem.
The two models presented in the preceding article \cite{RodTabJMAA2021} were derived from Navier-Stokes equations in a three-dimensional thin domain, $\Omega^{\varepsilon}_t$,
filled by a viscous fluid, that varies with time $t \in [0, T]$, given by
\begin{eqnarray}\Omega^{\varepsilon}_t&=&\left\{ (x_1^{\varepsilon},x_2^{\varepsilon},x_3^{\varepsilon})\in\mathbb{R}^3:x_i(\xi_1,\xi_2,t)\leq x_i^{\varepsilon} \leq
x_i(\xi_1,\xi_2,t)+ h^\varepsilon(\xi_1,\xi_2,t)N_i(\xi_1,\xi_2,t), \nonumber\right.\\
&&\left. (i=1,2,3), \ (\xi_1,\xi_2)\in D\subset \mathbb{R}^2
\right\} \label{eq-o-domain} \end{eqnarray} where
$\vec{X}_t(\xi_1,\xi_2)=\vec{X}(\xi_1,\xi_2,t)=(x_1(\xi_1,\xi_2,t),x_2(\xi_1,\xi_2,t),
x_3(\xi_1,\xi_2,t))$ is the lower
bound surface parametrization, $h^\varepsilon(\xi_1,\xi_2,t)$ is the gap between the two surfaces in
motion, and $\vec{N}(\xi_1,\xi_2,t)$ is the unit normal vector.
The lower bound surface is assumed to be regular
and the gap is assumed to be small with regard to the dimensions of the bound
surfaces. We take into account that the fluid film between the
surfaces is thin by introducing a small non-dimensional parameter
$\varepsilon$, and setting that
\begin{equation}
h^\varepsilon(\xi_1,\xi_2,t) = \varepsilon h(\xi_1,\xi_2,t), \quad
h(\xi_1,\xi_2,t) \ge h_0 > 0, \quad \forall \ (\xi_1,\xi_2)\in D\subset \mathbb{R}^2, \ \forall \
t\in [0,T].
\end{equation}
We introduce a reference domain
\begin{equation}
\Omega=D \times [0,1] \label{eq-Omega}
\end{equation}
independent of
$\varepsilon$ and $t$, which is related to $\Omega^{\varepsilon}_t$ by the following change of variable:
\begin{eqnarray}
t^\varepsilon&=&t \label{eq-1-cv} \\
x_i^\varepsilon &=& x_i(\xi_1,\xi_2,t)+\varepsilon \xi_3 h(\xi_1,\xi_2,t)N_i(\xi_1,\xi_2,t) \quad (i=1,2,3) \label{eq-2-cv}
\end{eqnarray}
where $(\xi_1,\xi_2)\in D$ and $\xi_3 \in[0,1]$. Now, given any scalar function $F^\varepsilon(t^\varepsilon,x_1^\varepsilon,x_2^\varepsilon,x_3^\varepsilon)$ defined on $\Omega^\varepsilon_t$, we can introduce another scalar function $F(\varepsilon)(t,\xi_1,\xi_2,\xi_3)$ on $\Omega$, using the change of variable:
\begin{equation}
F(\varepsilon)(t,\xi_1,\xi_2,\xi_3) = F^\varepsilon(t^\varepsilon,x_1^\varepsilon,x_2^\varepsilon,x_3^\varepsilon)
\end{equation}
We also define the basis $\left\{
\vec{a}_1,\vec{a}_2,\vec{a}_3\right\}$
\begin{eqnarray}
\vec{a}_1(\xi_1,\xi_2,t)&=&\dfrac{\partial \vec{X}(\xi_1,\xi_2,t)}{\partial \xi_1} \label{base_a1} \\
\vec{a}_2(\xi_1,\xi_2,t)&=&\dfrac{\partial \vec{X}(\xi_1,\xi_2,t)}{\partial \xi_2}\\
\vec{a}_3(\xi_1,\xi_2,t)&=& \vec{N}(\xi_1,\xi_2,t) \label{base_a3}
\end{eqnarray}
so that, the velocity, $\vec{u}^{\varepsilon}$, and the external density of
volume forces, $\vec{f}^\varepsilon$, can be written in the new basis \eqref{base_a1}-\eqref{base_a3} as follows, where we adopt the convention of summing over repeated indices from 1 to 3, except where otherwise indicated:
\begin{eqnarray}\vec{u}^{\varepsilon} &=& u_i^{\varepsilon}
\vec{e}_i = u_k(\varepsilon)\vec{a}_{k}, \quad
u_i^{\varepsilon}= \left ( u_k(\varepsilon)\vec{a}_{k} \right ) \cdot \vec{e}_i = u_k(\varepsilon) a_{ki} \label{cambio_base_u}\\
\vec{f}^\varepsilon &=& f_i^{\varepsilon}\vec{e}_i =
f_k(\varepsilon)\vec{a}_{k}, \quad f_i^{\varepsilon}= \left ( f_k(\varepsilon)\vec{a}_{k} \right ) \cdot \vec{e}_i = f_k(\varepsilon){a}_{ki}
\label{cambio_base_f}
\end{eqnarray}
where ${a}_{ki} = \vec{a}_{k} \cdot \vec{e}_i$.
Taking into account \eqref{eq-1-cv}-\eqref{cambio_base_f}, Navier-Stokes equations can be written in the reference domain $\Omega$ in the following way (in the next equations, repeated indices indicate summation from 1 to 3, except for index $l$ and $n$, which take values from 1 to 2):
\begin{eqnarray}
&&\dfrac{ \partial u_k(\varepsilon)}{\partial t} {a}_{ki} +
u_k(\varepsilon)\dfrac{
\partial {a}_{ki}}{\partial t} + \left({a}_{ki} \dfrac{
\partial u_k(\varepsilon)}{\partial \xi_n} + u_k(\varepsilon)\dfrac{ \partial {a}_{ki}}{\partial \xi_n} \right)\left[ -(\alpha_n \vec{a}_1 + \beta_n \vec{a}_2)\cdot\left(
\dfrac{\partial \vec{X}}{\partial t} + \varepsilon \xi_3 h
\dfrac{\partial \vec{a}_{3}}{\partial t} \right) \right]
\nonumber\\
&&\hspace*{+0.5cm}{}+\left({a}_{ki}\dfrac{
\partial u_k(\varepsilon)}{\partial \xi_3} + u_k(\varepsilon)\dfrac{
\partial {a}_{ki}}{\partial \xi_3} \right)\left(
-\dfrac{1}{\varepsilon h} \vec{a}_3 \cdot\dfrac{\partial
\vec{X}}{\partial t}- \dfrac{\xi_3}{ h} \dfrac{\partial h}{\partial
t}
\right)\nonumber\\
&&\hspace*{+0.5cm}{}+u_k(\varepsilon){a}_{kj} \left({a}_{qi} \dfrac{
\partial u_q(\varepsilon)}{\partial \xi_l} +u_q(\varepsilon)
\dfrac{
\partial {a}_{qi}}{\partial \xi_l} \right) \left( \alpha_l
{a}_{1j} + \beta_l {a}_{2j}+ \gamma_l {a}_{3j}\right)\nonumber\\
&&\hspace*{+0.5cm}=-\dfrac{1}{\rho_0}\dfrac{
\partial p(\varepsilon)}{\partial \xi_l}\left(\alpha_l
{a}_{1i} + \beta_l {a}_{2i}+ \gamma_l {a}_{3i}\right) + \nu \left\{ \left[\dfrac{
\partial^2 (u_k(\varepsilon){a}_{ki})}{\partial \xi_l \partial \xi_m}
\left( \alpha_l {a}_{1j} + \beta_l {a}_{2j}+ \gamma_l {a}_{3j}
\right)\right.\right. \nonumber\\
&&\hspace*{+0.5cm}\left.\left.{}+ \dfrac{
\partial (u_k(\varepsilon){a}_{ki})}{\partial \xi_l}\dfrac{\partial}{\partial \xi_m}\left(
\alpha_l {a}_{1j} + \beta_l {a}_{2j}+ \gamma_l {a}_{3j} \right)
\right] \left( \alpha_m {a}_{1j} + \beta_m {a}_{2j}+ \gamma_m
{a}_{3j} \right) \right\} \nonumber\\
&&\hspace*{+0.5cm}{}+f_k(\varepsilon){a}_{ki}, \quad (i=1,2,3)\label{ec_ns_ij_alfa_beta}\\
&& \left({a}_{kj} \dfrac{
\partial u_k(\varepsilon)}{\partial \xi_l} + u_k(\varepsilon)\dfrac{
\partial {a}_{kj}}{\partial \xi_l}\right) \left(\alpha_l
{a}_{1j} + \beta_l {a}_{2j}+ \gamma_l {a}_{3j}\right) =0
\label{div_i_dr_alfa_beta}
\end{eqnarray}where $\alpha_l$, $\beta_l$ and $\gamma_l$ are defined in appendix \ref{ApendiceA} by expressions \eqref{alfaides}-\eqref{beta3n}. We denote by $p$ the pressure, by $\rho_0$ the fluid density and by $\nu$ the kinematic viscosity.
We begin by assuming that $u_i(\varepsilon)$, $f_i(\varepsilon)$
($i=1,2,3$) and $p(\varepsilon)$ can be developed in powers of
$\varepsilon$, that is:
\begin{eqnarray}
&& u_i(\varepsilon) = u_i^0 + \varepsilon u_i^1 + \varepsilon^2
u_i^2 + \cdots \quad (i=1,2,3) \label{ansatz_1}\\
&&p(\varepsilon) =\varepsilon^{-2} p^{-2} + \varepsilon^{-1} p^{-1}
+p^0 + \varepsilon p^1 + \varepsilon^2 p^2 + \cdots \label{ansatz_2}\\
&& f_i(\varepsilon) = f_i^0 + \varepsilon f_i^1 + \varepsilon^2
f_i^2 + \cdots \quad (i=1,2,3) \label{ansatz_3}
\end{eqnarray}
As mentioned above, using asymptotic analysis we are able to derive two different models depending on the boundary conditions chosen.
In the first place, if we assume that the fluid slips at the lower surface $(\xi_3=0)$ and at the upper surface $(\xi_3=1)$, but that there is continuity in the normal direction (so the tangential velocities at the lower and upper surfaces are known, and the normal velocity of each surface must match that of the fluid), we obtain
\begin{eqnarray}
&&\hspace*{-0.9cm} \dfrac{1}{\sqrt{A^0}} \textrm{div}\left(\dfrac{h^3 }{ \sqrt{A^0}} M \nabla
p^{-2} \right) =12\mu \dfrac{\partial h}{\partial t} +
12\mu\dfrac{h A^1}{A^0} \left(\dfrac{\partial \vec{X}}{\partial t}
\cdot
\vec{a}_3\right)\nonumber\\
&&\hspace*{-0.5cm}{} - 6\mu \nabla h\cdot (\vec{W}^{0}-\vec{V}^{0}) +
\dfrac{6\mu h}{\sqrt{A^0}} \textrm{div} (\sqrt{A^0}(\vec{W}^{0} + \vec{V}^{0})) \label{Reynolds_gen}
\end{eqnarray}
that can be considered a generalization of Reynolds equation. We denote by $V_1 \vec{a}_1+V_2 \vec{a}_2$
the tangential velocity
at the lower surface, and by $W_1\vec{a}_1+W_2\vec{a}_2$ the
tangential velocity at the upper surface, and we have
\begin{eqnarray}
\vec{V}(\varepsilon)&=&(V_1,V_2)=\vec{V}^0+O(\varepsilon)\\
\vec{W}(\varepsilon)&=&(W_1,W_2)=\vec{W}^0+O(\varepsilon)
\end{eqnarray}
Coefficients $A^0$, $A^1$ and matrix $M$ are defined in appendix \ref{ApendiceA} (\eqref{A0}-\eqref{M}), and $\mu = \rho_0 \nu$ is the dynamic viscosity.
Once $p^{-2}$ has been obtained from \eqref{Reynolds_gen}, the following approximation of the three components of the velocity is obtained:
\begin{eqnarray}
&&\hspace*{-0.2cm} u_i^0 =\dfrac{h^2 (\xi_3^2-\xi_3)}{2\mu} \sum_{k=1}^2J^{0,0}_{ik} \dfrac{
\partial p^{-2}}{\partial \xi_k}+\xi_3(W_i^{0}-
V_i^{0})+ V_i^{0}, \quad (i=1,2)\label{ui_0_lub}\\
&&\hspace*{-0.2cm}
u_3^0 = \dfrac{\partial \vec{X}}{\partial t} \cdot \vec{a}_3
\label{u30_lub}
\end{eqnarray}
where $J^{0,0}_{ik}$ is given by \eqref{J}.
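Let us point out (a remark added here only as a reading of formula \eqref{ui_0_lub}) that the leading-order tangential velocity has the classical Couette-Poiseuille structure across the gap:
\begin{eqnarray*}
u_i^0 =\underbrace{\dfrac{h^2 (\xi_3^2-\xi_3)}{2\mu} \sum_{k=1}^2J^{0,0}_{ik} \dfrac{
\partial p^{-2}}{\partial \xi_k}}_{\textrm{pressure-driven (Poiseuille) part}}+\underbrace{\xi_3(W_i^{0}-
V_i^{0})+ V_i^{0}}_{\textrm{shear-driven (Couette) part}}, \quad (i=1,2)
\end{eqnarray*}
that is, a parabolic profile vanishing at $\xi_3=0$ and $\xi_3=1$, driven by the leading pressure term, plus a linear interpolation between the tangential velocities of the two bound surfaces.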
If, instead of considering that the tangential and normal velocities are known on the upper and lower surfaces, we assume that the normal components of the traction on $\xi_3=0$ and on $\xi_3=1$ are known pressures (denoted by $\pi_0^\varepsilon$ and $\pi_1^\varepsilon$, respectively), and that the tangential components of the traction on these surfaces are friction forces depending on the value of the velocities on $\partial D$, then we obtain a thin fluid layer model:
\begin{eqnarray}
&& \hspace*{-0.5cm} u_i^0=W_i^{0}=V_i^{0} \quad (i=1,2) \label{u_i^0s}\\
&& \hspace*{-0.5cm} p^{-2}=p^{-1}=0\label{p-2p-1}\\
&& \hspace*{-0.5cm} p^0=\frac{2\mu}{h} \dfrac{\partial h}{\partial t} + \pi_0^0 \label{p0s-v2}
\\
&&\hspace*{-0.5cm}\dfrac{ \partial V_i^0}{\partial t} + \sum_{l=1}^2 \left( V_l^0-C^0_l \right) \dfrac{
\partial V_i^0}{\partial \xi_l} +
\sum_{k=1}^2
\left( R^0_{ik}+\sum_{l=1}^2
H^0_{ilk} V_l^0 \right) V_k^0=-\dfrac{1}{\rho_0} \sum_{l=1}^2 \dfrac{
\partial \pi_0^0 }{\partial \xi_l}J^{0,0}_{il} \nonumber \\
&&{} +\nu \left\{ \sum_{m=1}^2 \sum_{l=1}^2
\dfrac{
\partial^2 V_i^0 }{\partial \xi_m \partial \xi_l} J^{0,0}_{lm}
+ \sum_{k=1}^2 \sum_{l=1}^2 \dfrac{
\partial V_k^0 }{\partial \xi_l}\bar{L}^{0,0}_{ikl}
+ \sum_{k=1}^2 V_k^0 \bar{S}_{ik}^{0,0} + {\kappa}^0_i \right\} \nonumber \\
&&{} + {F}^0_i-
Q^0_{i3} \left ( \frac{\partial \vec{X}}{\partial t}
\cdot \vec{a}_3\right ) \quad (i=1,2)\label{ec_Vi0-v2} \end{eqnarray}
where coefficients $C^0_l$, $R^0_{ik}$, $H^0_{ilk}$, $J^{0,0}_{lm}$, $\bar{L}^{0,0}_{ikl}$, $\bar{S}_{ik}^{0,0}$, ${\kappa}^0_i$, $F^0_i$ and $Q^0_{i3}$ are defined in appendix \ref{ApendiceA} (in \eqref{C}, \eqref{R}, \eqref{H}, \eqref{J}, \eqref{Lbar}, \eqref{Sbar}, \eqref{kappa}, \eqref{F} and \eqref{Q3} respectively), and $\pi_0^0$ is the term of order zero on $\varepsilon$ of $\pi_0^\varepsilon$, that is, $\pi_0^\varepsilon = \pi_0^0 + O(\varepsilon)$.
\begin{rmk}
Equation \eqref{ec_Vi0-v2} is exactly the same as equation (168) in \cite{RodTabJMAA2021}, although some of the constants have been redefined here (see \eqref{Lbar} and \eqref{Sbar}) with respect to the definitions in \cite{RodTabJMAA2021} to simplify \eqref{ec_Vi0-v2}.
\end{rmk}
\section{New hypothesis about the dependence of the solution on $\xi_3$} \label{NewAssumptions}
If we carefully observe the steps of the proofs in the previous work \cite{RodTabJMAA2021}, we can see that $p^k\ (k=-2, -1, 0, 1)$ and $u_i^k\ (i=1,2,3; k=0,1)$ are polynomials in $\xi_3$ of degree at most three. Because of this, we are going to assume that, for $\varepsilon$ small enough, the following equalities hold:
\begin{align}
&\hspace*{-0.5cm} u_i(\varepsilon)(t,\xi_1,\xi_2,\xi_3)=\sum_{n=0}^3 \xi_3^n \bar{u}_i^n (\varepsilon) (t,\xi_1,\xi_2), \quad (i=1,2,3) \label{ui_pol_xi3}\\
&\hspace*{-0.5cm} p(\varepsilon)(t,\xi_1,\xi_2,\xi_3)=\sum_{n=0}^3 \xi_3^n \bar{p}^n(\varepsilon)(t,\xi_1,\xi_2),\label{p_pol_xi3} \\ &\hspace*{-0.5cm} f_i(\varepsilon)(t,\xi_1,\xi_2,\xi_3)=\sum_{n=0}^{\infty} \xi_3^n \bar{f}_i^n (\varepsilon) (t,\xi_1,\xi_2). \quad (i=1,2,3) \label{f_pol_xi3}\end{align}
We want to point out that the previous hypothesis is equivalent to neglecting in \eqref{ansatz_1}-\eqref{ansatz_2} the terms in $O(\varepsilon^2)$ when $\varepsilon$ is small.
Using expressions \eqref{ui_pol_xi3}-\eqref{f_pol_xi3} and \eqref{alfaides}-\eqref{gammades} (see appendix \ref{ApendiceA}), we can rewrite equations \eqref{ec_ns_ij_alfa_beta}-\eqref{div_i_dr_alfa_beta} as follows (repeated indices $k$, $j$ and $q$ indicate summation from 1 to 3, while repeated indices $l$ and $m$ indicate summation from 1 to 2):
\begin{eqnarray}
&&\hspace*{-0.5cm} \sum_{n=0}^3 \xi_3^n \dfrac{ \partial \bar{u}_k^n}{\partial t} \vec{a}_{k} +
\sum_{n=0}^3 \xi_3^n \bar{u}_k^n \dfrac{
\partial \vec{a}_{k}}{\partial t} \nonumber\\
&&{}- \left(\vec{a}_{k}\sum_{n=0}^3 \xi_3^n \dfrac{
\partial \bar{u}_k^n}{\partial \xi_l} + \sum_{n=0}^3 \xi_3^n \bar{u}_k^n\dfrac{ \partial \vec{a}_{k}}{\partial \xi_l} \right)\left[ \sum_{r=0}^{\infty} (\varepsilon \xi_3 h)^r \left(\alpha_l^r \vec{a}_1 + \beta_l^r \vec{a}_2\right) \cdot\left(
\dfrac{\partial \vec{X}}{\partial t} + \varepsilon \xi_3 h
\dfrac{\partial \vec{a}_{3}}{\partial t} \right) \right]
\nonumber\\
&&{}-\vec{a}_{k} \sum_{n=1}^3 n \xi_3^{n-1} \bar{u}_k^n\left[\dfrac{1}{\varepsilon h} \left(\vec{a}_3 \cdot\dfrac{\partial
\vec{X}}{\partial t} \right)+ \dfrac{\xi_3}{ h} \dfrac{\partial h}{\partial
t}\right.\nonumber\\
&&\left.{}+ \sum_{r=0}^{\infty} \varepsilon^{r} \xi_3^{r+1} h^{r-1} \left( \alpha_3^{r} \vec{a}_1 + \beta_3^{r} \vec{a}_2 \right) \cdot\left(
\dfrac{\partial \vec{X}}{\partial t} + \varepsilon \xi_3 h
\dfrac{\partial \vec{a}_{3}}{\partial t} \right)
\right]\nonumber\\
&&{}+\sum_{n=0}^3 \xi_3^n \bar{u}_k^n \left(\vec{a}_{q}\sum_{d=0}^3 \xi_3^{d} \dfrac{
\partial \bar{u}_q^{d}}{\partial \xi_l} + \sum_{d=0}^3 \xi_3^{d} \bar{u}_q^{d}\dfrac{ \partial \vec{a}_{q}}{\partial \xi_l} \right) \sum_{r=0}^{\infty} (\varepsilon \xi_3 h)^{r} \left(\alpha_l^{r}
(\vec{a}_k \cdot \vec{a}_{1}) + \beta_l^{r} (\vec{a}_k \cdot \vec{a}_{2})\right)\nonumber\\
&&{}+\sum_{n=0}^3 \xi_3^n \bar{u}_k^n \left(\vec{a}_{q} \sum_{d=1}^3 d \xi_3^{d-1} \bar{u}_q^{d} \right) \left( \sum_{r=0}^{\infty} \varepsilon^{r} \xi_3^{r+1} h^{r-1} \left( \alpha_3^{r}
(\vec{a}_k \cdot \vec{a}_{1}) + \beta_3^{r} (\vec{a}_k \cdot \vec{a}_{2})\right)\right.\nonumber\\
&&\left.{}+ \dfrac{1}{\varepsilon h} (\vec{a}_k \cdot \vec{a}_{3})\right)=-\dfrac{1}{\rho_0} \sum_{n=0}^3 \xi_3^n \dfrac{
\partial \bar{p}^n}{\partial \xi_l} \sum_{r=0}^{\infty} (\varepsilon \xi_3 h)^{r} \left(\alpha_l^{r} \vec{a}_1 + \beta_l^{r} \vec{a}_2\right) \nonumber\\
&&{} -\dfrac{1}{\rho_0} \sum_{n=1}^3 n \xi_3^{n-1} \bar{p}^n\left( \sum_{r=0}^{\infty} \varepsilon^{r} \xi_3^{{r}+1} h^{r-1} \left( \alpha_3^{r} \vec{a}_1 + \beta_3^{r} \vec{a}_2 \right) + \dfrac{1}{\varepsilon h} \vec{a}_{3}\right)\nonumber\\
&&{} + \nu \left[ \displaystyle\sum_{n=0}^3 \xi_3^n\dfrac{
\partial^2 (\vec{a}_{k} \bar{u}_k^n)}{\partial \xi_l \partial \xi_m}
\sum_{r=0}^{\infty} (\varepsilon \xi_3 h)^{r} \left( \alpha_l^{r} {a}_{1j} + \beta_l^{r} {a}_{2j}
\right)\right. \nonumber\\
&&\left.{}+ 2\sum_{n=1}^3 n\xi_3^{n-1}\dfrac{
\partial (\vec{a}_{k} \bar{u}_k^n)}{ \partial \xi_m}
\left(\sum_{r=0}^{\infty} \varepsilon^{r}
\xi_3^{r+1} h^{r-1} \left( \alpha_3^{r} {a}_{1j} + \beta_3^{r}
{a}_{2j}\right)+ \dfrac{1}{\varepsilon h}
{a}_{3j}
\right)\right. \nonumber\\
&&\left.{}+ \displaystyle\sum_{n=0}^3 \xi_3^n\dfrac{
\partial (\vec{a}_{k} \bar{u}_k^n)}{\partial \xi_l}\dfrac{\partial}{\partial \xi_m} \left(\sum_{r=0}^{\infty} (\varepsilon \xi_3 h)^{r} \left( \alpha_l^{r} {a}_{1j} + \beta_l^{r} {a}_{2j}
\right) \right)\right. \nonumber\\
&&\left.{}+ \vec{a}_{k}
\displaystyle\sum_{n=1}^3 n \xi_3^{n-1}
\bar{u}_k^n\dfrac{\partial}{\partial \xi_m}\left(\sum_{r=0}^{\infty} \varepsilon^{r}
\xi_3^{r+1} h^{r-1} \left( \alpha_3^{r} {a}_{1j} + \beta_3^{r}
{a}_{2j}\right)+ \dfrac{1}{\varepsilon h}
{a}_{3j} \right)
\right]\nonumber\\
&& \cdot \sum_{s=0}^{\infty} (\varepsilon \xi_3 h)^{s} \left( \alpha_m^{s} {a}_{1j} + \beta_m^{s} {a}_{2j}
\right) \nonumber\\
&&{}+ \nu \left[\vec{a}_{k} \sum_{n=2}^3 n(n-1) \xi_3^{n-2}
\bar{u}_k^n \left( \sum_{r=0}^{\infty} \varepsilon^{r}
\xi_3^{r+1} h^{r-1} \left( \alpha_3^{r} {a}_{1j} + \beta_3^{r}
{a}_{2j}\right)+ \dfrac{1}{\varepsilon h}
{a}_{3j}
\right)\right. \nonumber\\
&&\left.{}+ \displaystyle\sum_{n=0}^3 \xi_3^n \dfrac{
\partial (\vec{a}_{k} \bar{u}_k^n)}{\partial \xi_l}
\sum_{r=1}^{\infty} r \varepsilon^r \xi_3^{r-1} h^{r} \left( \alpha_l^{r} {a}_{1j} + \beta_l^{r} {a}_{2j}
\right) \right. \nonumber\\
&&\left.{}+\vec{a}_{k} \displaystyle\sum_{n=1}^3 n \xi_3^{n-1}
\bar{u}_k^n \left( \sum_{r=0}^{\infty} (r+1)\varepsilon^{r} \xi_3^{r}
h^{r-1} \left( \alpha_3^{r} {a}_{1j} + \beta_3^{r} {a}_{2j}
\right) \right)\right]\nonumber\\
&& \cdot\left( \sum_{s=0}^{\infty} \varepsilon^{s}
\xi_3^{s+1} h^{s-1} \left( \alpha_3^{s} {a}_{1j} + \beta_3^{s}
{a}_{2j}\right) + \dfrac{1}{\varepsilon h}
{a}_{3j} \right)+
\sum_{n=0}^{\infty} \xi_3^n \bar{f}_k^n \vec{a}_{k}\label{ec_ns_ij_alfa_beta_pol_xi3}
\\
&&\hspace*{-0.5cm}\sum_{n=0}^3 \xi_3^n \dfrac{
\partial \bar{u}_k^n}{\partial \xi_l} \sum_{r=0}^{\infty} (\varepsilon \xi_3 h)^{r} \left( \alpha_l^{r}
(\vec{a}_k \cdot \vec{a}_{1}) + \beta_l^{r} (\vec{a}_k \cdot \vec{a}_{2})\right) \nonumber\\
&&{} +\sum_{n=0}^3 \xi_3^n \bar{u}_k^n\dfrac{
\partial {a}_{kj}}{\partial \xi_l} \sum_{r=0}^{\infty} (\varepsilon \xi_3 h)^{r} \left( \alpha_l^{r} {a}_{1j} + \beta_l^{r} {a}_{2j}
\right) \nonumber\\
&&{}+ \sum_{n=1}^3 n \xi_3^{n-1 } \bar{u}_k^n \left( \sum_{r=0}^{\infty} \varepsilon^{r}
\xi_3^{r+1} h^{r-1} \left( \alpha_3^{r}
(\vec{a}_k \cdot \vec{a}_{1}) + \beta_3^{r} (\vec{a}_k \cdot \vec{a}_{2})\right) + \dfrac{1}{\varepsilon h} (\vec{a}_k \cdot \vec{a}_{3})\right) =0
\label{div_i_dr_alfa_beta_pol_xi3}
\end{eqnarray}
and identify the terms multiplied by $\xi_3^n$ ($n=0,1,2,3$) in \eqref{ec_ns_ij_alfa_beta_pol_xi3}-\eqref{div_i_dr_alfa_beta_pol_xi3}. In equation \eqref{ec_ns_xi3^n}, below, repeated indices $k$ and $q$ indicate, again, summation from 1 to 3, while repeated indices $l$ and $m$ indicate summation from 1 to 2.
\begin{eqnarray}
&&\hspace*{-0.5cm} \dfrac{ \partial \bar{u}_k^n}{\partial t} \vec{a}_{k} +
\bar{u}_k^n \dfrac{
\partial \vec{a}_{k}}{\partial t} - \left(\vec{a}_{k} \dfrac{
\partial \bar{u}_k^n}{\partial \xi_l} + \bar{u}_k^n\dfrac{ \partial \vec{a}_{k}}{\partial \xi_l} \right) C^0_l\nonumber\\
&&{}- \sum_{m=0}^{n-1} \left(\vec{a}_{k} \dfrac{
\partial \bar{u}_k^m}{\partial \xi_l} + \bar{u}_k^m\dfrac{ \partial \vec{a}_{k}}{\partial \xi_l} \right) (\varepsilon h)^{n-m} C^{n-m,n-m-1}_l \nonumber\\
&&{}-\dfrac{n+1}{\varepsilon h} \vec{a}_{k} \bar{u}_k^{n+1}\left(\vec{a}_3 \cdot\dfrac{\partial
\vec{X}}{\partial t} \right)- \dfrac{n}{ h} \vec{a}_{k} \bar{u}_k^n \left(\dfrac{\partial h}{\partial
t} + C^0_3\right)\nonumber\\
&&{} -\vec{a}_{k} \sum_{m=0}^{n-2} (m+1) \bar{u}_k^{m+1} \varepsilon^{n-m-1} h^{n-m-2} C_3^{n-m-1,n-m-2}
\nonumber\\
&&{}+\sum_{m=0}^n \bar{u}_k^m \sum_{j=0}^{n-m} \left(\vec{a}_{q} \dfrac{
\partial \bar{u}_q^j}{\partial \xi_l} + \bar{u}_q^j\dfrac{ \partial \vec{a}_{q}}{\partial \xi_l} \right) (\varepsilon h)^{n-m-j} B_{lk}^{n-m-j}
\nonumber\\
&&{}+ \sum_{m=0}^{n-1} \bar{u}_k^m \sum_{j=1}^{n-m} j \varepsilon^{n-m-j} h^{n-m-j-1} B_{3k}^{n-m-j} \vec{a}_{q}
\bar{u}_q^{j} +\dfrac{1}{\varepsilon h} \sum_{m=0}^n \bar{u}_3^m (n-m+1) \left(\vec{a}_{q} \bar{u}_q^{n-m+1} \right)\nonumber\\
&&{} =-\dfrac{1}{\rho_0} \sum_{m=0}^n \dfrac{
\partial \bar{p}^m}{\partial \xi_l} (\varepsilon h)^{n-m} \left(\alpha_l^{n-m} \vec{a}_1 + \beta_l^{n-m} \vec{a}_2\right) - \dfrac{n+1}{\varepsilon h \rho_0} \bar{p}^{n+1} \vec{a}_{3}\nonumber\\
&&{} -\dfrac{1}{\rho_0} \sum_{m=1}^{n} m \bar{p}^{m} \varepsilon^{n-m} h^{n-m-1} \left( \alpha_3^{n-m} \vec{a}_1 + \beta_3^{n-m} \vec{a}_2 \right) \nonumber\\
&&{} + \nu \left[ \displaystyle\sum_{r=0}^n (\varepsilon h)^{n-r} \dfrac{
\partial^2 (\vec{a}_{k} \bar{u}_k^r)}{\partial \xi_l \partial \xi_m}
\sum_{s=0}^{n-r} J_{l,m}^{s,n-r-s} + 2 \sum_{r=1}^n r \varepsilon^{n-r} h^{n-r-1} \dfrac{
\partial (\vec{a}_{k} \bar{u}_k^r)}{ \partial \xi_m}
\sum_{s=0}^{n-r}
J_{3m}^{s,n-r-s}
\right. \nonumber\\
&&\left.{}+ \displaystyle\sum_{r=0}^n (\varepsilon h)^{n-r}\dfrac{
\partial (\vec{a}_{k} \bar{u}_k^r)}{\partial \xi_l} \sum_{s=0}^{n-r} K_l^{s,n-r-s} + \displaystyle\sum_{r=0}^n \varepsilon^{n-r} h^{n-r-1} \dfrac{
\partial (\vec{a}_{k} \bar{u}_k^r)}{\partial \xi_l} \sum_{s=1}^{n-r} s \dfrac{\partial h}{\partial \xi_m} J_{lm}^{s,n-r-s}\right. \nonumber\\
&&\left.{}+ \vec{a}_{k}
\displaystyle\sum_{r=1}^n r \varepsilon^{n-r} h^{n-r-1}
\bar{u}_k^r \sum_{s=0}^{n-r} K_3^{s,n-r-s}+ \vec{a}_{k}
\displaystyle\sum_{r=1}^n r \varepsilon^{n-r} h^{n-r-2}
\bar{u}_k^r \sum_{s=0}^{n-r} (s-1) \dfrac{\partial h}{\partial \xi_m} J_{3m}^{s,n-r-s}
\right. \nonumber\\
&&\left.{}+ \vec{a}_{k}
\sum_{r=0}^n (r+1) \varepsilon^{n-r-1} h^{n-r-1}
\bar{u}_k^{r+1} H^{n-r}_{mm3}
\right]\nonumber\\
&&{}+ \nu \left[\vec{a}_{k} \sum_{r=2}^{n} r(r-1) \varepsilon^{n-r} h^{n-r-2}
\bar{u}_k^r \sum_{s=0}^{n-r}
J_{33}^{s,n-r-s}
+ \dfrac{\vec{a}_{k}}{\varepsilon^2 h^2} (n+2)(n+1) \bar{u}_k^{n+2} \right. \nonumber\\
&&\left.{}+ \displaystyle\sum_{r=0}^{n-1} \varepsilon^{n-r} h^{n-r-1}\dfrac{
\partial (\vec{a}_{k} \bar{u}_k^r)}{\partial \xi_l}
\sum_{s=1}^{n-r} s J_{l3}^{s,n-r-s}
+\vec{a}_{k} \displaystyle\sum_{r=1}^n r \varepsilon^{n-r} h^{n-r-2} \bar{u}_k^r
\sum_{s=0}^{n-r} (s+1) J_{33}^{s,n-r-s} \right]\nonumber\\
&& {} +
\bar{f}_k^n \vec{a}_{k}, \quad (n=0,1,2,3) \label{ec_ns_xi3^n}
\\
&&\hspace*{-0.5cm}\bar{u}_3^{n+1} =- \dfrac{\varepsilon h}{n+1} \sum_{m=0}^n (\varepsilon h)^{n-m} \sum_{k=1}^2 \left[ \sum_{l=1}^2 \left(\dfrac{
\partial \bar{u}_k^m}{\partial \xi_l} B_{lk}^{n-m} + \bar{u}_k^m H_{llk}^{n-m} \right)+ \bar{u}_3^m H_{kk3}^{n-m}
+ \dfrac{ m}{h} \bar{u}_k^m
B_{3k}^{n-m} \right]\nonumber\\
&& (n=0,1,2)
\label{div_u3n}\\
&&\hspace*{-0.5cm} \sum_{m=0}^3 (\varepsilon h)^m \left[
\sum_{k=1}^2 \sum_{l=1}^2 \dfrac{
\partial \bar{u}_k^{3-m}}{\partial \xi_l} B^m_{lk} + \sum_{k=1}^3 \bar{u}_k^{3-m} \sum_{l=1}^2 H^m_{llk} + \dfrac{3-m}{h} \sum_{k=1}^2 \bar{u}_k^{3-m} B^m_{3k}\right]=0
\label{ec_div_n3}
\end{eqnarray}
where we have introduced the notation
$\bar{u}_i^4=\bar{u}_i^5=0$, \ ($i=1,2,3$). The coefficients $B^j_{lk}$, $C^0_l$, $C^{i,j}_l$, $H^j_{ilk}$, $J^{i,j}_{lm}$, $K^{j,i}_l$ are given by \eqref{B}, \eqref{C}, \eqref{Cij}, \eqref{H}, \eqref{J}, \eqref{Kji}.
We multiply equations \eqref{ec_ns_xi3^n} by $\alpha^0_i \vec{a}_i$ ($i=1,2$) and sum over $i$; then we repeat the procedure, multiplying \eqref{ec_ns_xi3^n} by $\beta^0_i \vec{a}_i$ ($i=1,2$) and summing over $i$ again, to obtain the following equations:
\begin{eqnarray}
&&\hspace*{-0.5cm} \dfrac{ \partial \bar{u}_i^n}{\partial t} +
\sum_{k=1}^3 \bar{u}_k^n Q^0_{ik} - \sum_{l=1}^2 \dfrac{
\partial \bar{u}_i^n}{\partial \xi_l} C^0_l - \sum_{m=0}^{n-1} \sum_{l=1}^2 \left(\dfrac{
\partial \bar{u}_i^m}{\partial \xi_l} + \sum_{k=1}^3 \bar{u}_k^m H^0_{ilk} \right) (\varepsilon h)^{n-m} C^{n-m,n-m-1}_l \nonumber\\
&&{}-\dfrac{n+1}{\varepsilon h} \bar{u}_i^{n+1}\left(\vec{a}_3 \cdot\dfrac{\partial
\vec{X}}{\partial t} \right)- \dfrac{n}{ h} \bar{u}_i^n \left(\dfrac{\partial h}{\partial
t} + C^0_3\right)\nonumber\\
&&{} -\sum_{m=0}^{n-2} (m+1) \bar{u}_i^{m+1} \varepsilon^{n-m-1} h^{n-m-2} C_3^{n-m-1,n-m-2}
\nonumber\\
&&{}+\sum_{m=0}^n \sum_{k=1}^2 \bar{u}_k^m \sum_{j=0}^{n-m} \sum_{l=1}^2 \left( \dfrac{
\partial \bar{u}_i^j}{\partial \xi_l} + \sum_{q=1}^3 \bar{u}_q^j H^0_{ilq} \right) (\varepsilon h)^{n-m-j} B_{lk}^{n-m-j}
\nonumber\\
&&{}+ \sum_{m=0}^{n-1} \sum_{k=1}^2 \bar{u}_k^m \sum_{j=1}^{n-m} j \varepsilon^{n-m-j} h^{n-m-j-1} B_{3k}^{n-m-j} \bar{u}_i^{j} +\dfrac{1}{\varepsilon h} \sum_{m=0}^n \bar{u}_3^m (n-m+1) \bar{u}_i^{n-m+1} \nonumber\\
&&{} =-\dfrac{1}{\rho_0} \sum_{m=0}^n \dfrac{
\partial \bar{p}^m}{\partial \xi_l} (\varepsilon h)^{n-m} J_{il}^{0,n-m} -\dfrac{1}{\rho_0} \sum_{m=1}^{n} m \bar{p}^{m} \varepsilon^{n-m} h^{n-m-1} J_{i3}^{0,n-m} \nonumber\\
&&{} + \nu \left\{ \sum_{r=0}^n \varepsilon^{n-r} \sum_{m=1}^2\sum_{l=1}^2 \dfrac{
\partial^2 \bar{u}_i^r}{\partial \xi_l \partial \xi_m}
\iota_{lm}^{n-r}+ \sum_{r=0}^n \varepsilon^{n-r} \sum_{k=1}^3 \sum_{l=1}^2 \dfrac{
\partial \bar{u}_k^r}{\partial \xi_l} L_{ikl}^{n,r} + \sum_{r=0}^n \varepsilon^{n-r} \sum_{k=1}^3 \bar{u}_k^r S_{ik}^{n,r}
\right\}\nonumber\\
&&{}+
\dfrac{\nu(n+1)}{\varepsilon h}
\bar{u}_i^{n+1} \sum_{m=1}^2 H^{0}_{mm3}+ \nu
\dfrac{ (n+2)(n+1) }{\varepsilon^2 h^2}\bar{u}_i^{n+2} +
\bar{f}_i^n, \quad (i=1,2, \ n=0,1,2,3) \label{ec_ns_xi3^n_ai}\end{eqnarray} where $Q^0_{ik}$, $\iota^n_{lm}$, $L^{n,r}_{ikl}$ and $S^{n,r}_{ik}$ are given by \eqref{Q}, \eqref{Q3}, \eqref{iotalm}, \eqref{Likl_nr} and \eqref{Sik_nr} respectively.
If equations \eqref{ec_ns_xi3^n} are multiplied by $\vec{a}_3$, we obtain:
\begin{eqnarray}
&&\hspace*{-0.5cm} \dfrac{ \partial \bar{u}_3^n}{\partial t} +\sum_{k=1}^2
\bar{u}_k^n \left[\dfrac{
\partial \vec{a}_{k}}{\partial t} \cdot \vec{a}_3 -\sum_{l=1}^2 C^0_l \dfrac{ \partial \vec{a}_{k}}{\partial \xi_l} \cdot \vec{a}_3\right]- \sum_{l=1}^2 \dfrac{
\partial \bar{u}_3^n}{\partial \xi_l} C^0_l\nonumber\\
&&{}- \sum_{m=0}^{n-1} \sum_{l=1}^2 \left[ \dfrac{
\partial \bar{u}_3^m}{\partial \xi_l} + \sum_{k=1}^{2} \bar{u}_k^m \left(\dfrac{ \partial \vec{a}_{k}}{\partial \xi_l} \cdot \vec{a}_3 \right)\right] (\varepsilon h)^{n-m} C^{n-m,n-m-1}_l \nonumber\\
&&{}-\dfrac{n+1}{\varepsilon h} \bar{u}_3^{n+1}\left(\vec{a}_3 \cdot\dfrac{\partial
\vec{X}}{\partial t} \right)- \dfrac{n}{ h} \bar{u}_3^n \left(\dfrac{\partial h}{\partial
t} + C^0_3\right)\nonumber\\
&&{} - \sum_{m=0}^{n-2} (m+1) \bar{u}_3^{m+1} \varepsilon^{n-m-1} h^{n-m-2} C_3^{n-m-1,n-m-2}
\nonumber\\
&&{}+\sum_{m=0}^n \sum_{k=1}^2\bar{u}_k^m \sum_{j=0}^{n-m} \sum_{l=1}^2 \left( \dfrac{
\partial \bar{u}_3^j}{\partial \xi_l} + \bar{u}_q^j\dfrac{ \partial \vec{a}_{q}}{\partial \xi_l} \cdot \vec{a}_3 \right) (\varepsilon h)^{n-m-j} B_{lk}^{n-m-j}
\nonumber\\
&&{}+ \sum_{m=0}^{n-1} \sum_{k=1}^2 \bar{u}_k^m \sum_{j=0}^{n-m-1} (n-m-j)\varepsilon^{j} h^{j-1} B_{3k}^j
\bar{u}_3^{n-m-j} +\dfrac{1}{\varepsilon h} \sum_{m=0}^n \bar{u}_3^m (n-m+1) \bar{u}_3^{n-m+1} \nonumber\\
&&{} = - \dfrac{n+1}{\varepsilon h \rho_0} \bar{p}^{n+1} + \nu \left[ \sum_{r=0}^n \varepsilon^{n-r} \sum_{l=1}^2\sum_{m=1}^2\dfrac{
\partial^2 \bar{u}_3^r}{\partial \xi_l \partial \xi_m}
\iota_{lm}^{n-r}+\sum_{r=0}^n \varepsilon^{n-r} \sum_{k=1}^3 \sum_{l=1}^2 \dfrac{
\partial \bar{u}_k^r}{\partial \xi_l} L_{3kl}^{n,r} \right. \nonumber\\
&&\left.{}+\sum_{r=0}^n \varepsilon^{n-r} \sum_{k=1}^2\bar{u}_k^r S_{3k}^{n,r}
\right]+
\dfrac{\nu (n+1)}{\varepsilon h}
\bar{u}_3^{n+1} \sum_{m=1}^2 H^{0}_{mm3}+ \dfrac{\nu}{\varepsilon^2 h^2} (n+2)(n+1) \bar{u}_3^{n+2} \nonumber\\
&&{}+
\bar{f}_3^n, \quad (n=0,1,2,3) \label{ec_ns_xi3^n_a3}
\end{eqnarray}
where the coefficients $L^{n,r}_{3kl}$ and $S^{n,r}_{3k}$ are defined in \eqref{L3kl_nr} and \eqref{S3k_nr}.
Since we have assumed that the velocity and the pressure are polynomials of degree three in $\xi_3$ (\eqref{ui_pol_xi3}-\eqref{p_pol_xi3}), we have 16 unknowns to determine. Out of these unknowns, the terms $\bar{u}^k_3$ and $\bar{p}^k$ ($k=1,2,3$) corresponding to the third component of the velocity and the pressure, respectively, are given by \eqref{div_u3n} and \eqref{ec_ns_xi3^n_a3} using the terms $\bar{u}^k_i$ ($i=1,2, k=0,1,2$) once they have been computed. Therefore, we must actually determine 10 unknowns.
Denoting, as previously done, by $V_1 \vec{a}_1+V_2 \vec{a}_2$ and $W_1\vec{a}_1+W_2\vec{a}_2$ the tangential velocity
at the lower and upper surfaces, respectively, we have
\begin{eqnarray}
u_k^{\varepsilon} \vec{e}_k = u_k(\varepsilon)\vec{a}_{k}&=&
V_1(\varepsilon) \vec{a}_1+V_2(\varepsilon) \vec{a}_2+\left( \dfrac{\partial
\vec{X}}{\partial t}
\cdot \vec{a}_3\right)\vec{a}_3 \ \textrm{on }\xi_3=0 \label{cc_xi3_0}\\
u_k^{\varepsilon} \vec{e}_k =u_k(\varepsilon)\vec{a}_{k}&=&
W_1(\varepsilon) \vec{a}_1+ W_2(\varepsilon) \vec{a}_2+\left( \dfrac{\partial (\vec{X}+
\varepsilon h\vec{a}_3)}{\partial t} \cdot \vec{a}_3\right)\vec{a}_3
\ \textrm{on }\xi_3=1 \label{cc_xi3_1}
\end{eqnarray}
and, taking into account \eqref{ui_pol_xi3}, we obtain
\begin{eqnarray}
&&\bar{u}_i^0=V_i \quad (i=1,2) \label{ui0_Vi}\\
&&\bar{u}_3^0= \dfrac{\partial \vec{X}}{\partial t}
\cdot \vec{a}_3 \label{u30}\\
&&\sum_{k=1}^3 \bar{u}_i^k= W_i - V_i \quad (i=1,2)
\label{sum_uik_Wi_Vi}\\
&&\sum_{k=1}^3 \bar{u}_3^k= \varepsilon \dfrac{\partial
h}{\partial t}
\label{sum_u3k_h}
\end{eqnarray}
Equality \eqref{u30} gives us an expression for $\bar{u}_3^0$, so it is no longer an unknown: it is determined by the lower bound surface. At this point, 9 unknowns are left, $\bar{u}_i^k$ $(i=1,2,\ k=0,1,2,3)$ and $\bar{p}^0$, but we will see that not all of them are needed to obtain an approximation of the velocity and the pressure.
\section{New Model} \label{NewModel}
As we have just seen in section \ref{NewAssumptions}, hypotheses \eqref{ui_pol_xi3}-\eqref{f_pol_xi3} allow us to derive a two-dimensional model, which we shall call the new model from now on, formed by the following equations:
\begin{align}
&\hspace*{-0.5cm} u_i(\varepsilon)(t,\xi_1,\xi_2,\xi_3)=\sum_{n=0}^3 \xi_3^n \bar{u}_i^n (\varepsilon) (t,\xi_1,\xi_2), \quad (i=1,2,3) \label{ui_pol_xi3m}\\
&\hspace*{-0.5cm} p(\varepsilon)(t,\xi_1,\xi_2,\xi_3)=\sum_{n=0}^3 \xi_3^n \bar{p}^n(\varepsilon)(t,\xi_1,\xi_2),\label{p_pol_xi3m} \\
&\hspace*{-0.5cm} \dfrac{ \partial \bar{u}_i^n}{\partial t} +
\sum_{k=1}^3 \bar{u}_k^n Q^0_{ik} - \sum_{l=1}^2 \dfrac{
\partial \bar{u}_i^n}{\partial \xi_l} C^0_l - \dfrac{n}{ h} \bar{u}_i^n \left(\dfrac{\partial h}{\partial
t} + C^0_3\right) \nonumber\\
&{}+\sum_{m=0}^n \sum_{k=1}^2 \bar{u}_k^m \sum_{j=0}^{n-m} \sum_{l=1}^2 \left( \dfrac{
\partial \bar{u}_i^j}{\partial \xi_l} + \sum_{q=1}^3 \bar{u}_q^j H^0_{ilq} \right) (\varepsilon h)^{n-m-j} B_{lk}^{n-m-j}
\nonumber\\
&{}+ \sum_{m=0}^{n-1} \sum_{k=1}^2 \bar{u}_k^m \sum_{j=1}^{n-m} j \varepsilon^{n-m-j} h^{n-m-j-1} B_{3k}^{n-m-j}
\bar{u}_i^{j} +\dfrac{1}{\varepsilon h} \sum_{m=1}^n \bar{u}_3^m (n-m+1) \bar{u}_i^{n-m+1} \nonumber\\
&{} - \sum_{m=0}^{n-1} \sum_{l=1}^2 \left(\dfrac{
\partial \bar{u}_i^m}{\partial \xi_l} + \sum_{k=1}^3 \bar{u}_k^m H^0_{ilk} \right) (\varepsilon h)^{n-m} C^{n-m,n-m-1}_l\nonumber\\
&{} -\sum_{m=0}^{n-2} (m+1) \bar{u}_i^{m+1} \varepsilon^{n-m-1} h^{n-m-2} C_3^{n-m-1,n-m-2}
\nonumber\\
&{} =-\dfrac{1}{\rho_0} \sum_{m=0}^n \sum_{l=1}^2\dfrac{
\partial \bar{p}^m}{\partial \xi_l} (\varepsilon h)^{n-m} J_{il}^{0,n-m} -\dfrac{1}{\rho_0} \sum_{m=1}^{n} m \bar{p}^{m} \varepsilon^{n-m} h^{n-m-1} J_{i3}^{0,n-m} \nonumber\\
&{} + \nu \sum_{r=0}^n \varepsilon^{n-r} \left[ \sum_{m=1}^2\sum_{l=1}^2 \dfrac{
\partial^2 \bar{u}_i^r}{\partial \xi_l \partial \xi_m}
\iota_{lm}^{n-r}+ \sum_{k=1}^3 \sum_{l=1}^2 \dfrac{
\partial \bar{u}_k^r}{\partial \xi_l} L_{ikl}^{n,r} + \sum_{k=1}^3 \bar{u}_k^r S_{ik}^{n,r}
\right] \nonumber\\
&{}+
\dfrac{\nu(n+1)}{\varepsilon h} \dfrac{A^1}{A^0}
\bar{u}_i^{n+1} + \nu
{\dfrac{ (n+2)(n+1) }{\varepsilon^2 h^2}\bar{u}_i^{n+2}} +
\bar{f}_i^n, \quad (i=1,2, n=0,1,2,3) \label{ec_uin}
\\
&\hspace*{-0.5cm} \sum_{m=0}^3 (\varepsilon h)^m \left[
\sum_{k=1}^2 \sum_{l=1}^2 \dfrac{
\partial \bar{u}_k^{3-m}}{\partial \xi_l} B^m_{lk} + \sum_{k=1}^3 \bar{u}_k^{3-m} \sum_{l=1}^2 H^m_{llk} + \dfrac{3-m}{h} \sum_{k=1}^2 \bar{u}_k^{3-m} B^m_{3k}\right]=0
\label{ec_div_n3m}\\
&\hspace*{-0.5cm}\bar{u}_3^0= \dfrac{\partial \vec{X}}{\partial t}
\cdot \vec{a}_3 \label{u30m}\\
&\hspace*{-0.5cm}\bar{u}_3^{n+1} =- \dfrac{\varepsilon h}{n+1} \sum_{m=0}^n (\varepsilon h)^{n-m} \sum_{k=1}^2 \left[ \sum_{l=1}^2 \left(\dfrac{
\partial \bar{u}_k^m}{\partial \xi_l} B_{lk}^{n-m} + \bar{u}_k^m H_{llk}^{n-m} \right)+ \bar{u}_3^m H_{kk3}^{n-m}
+ \dfrac{ m}{h} \bar{u}_k^m
B_{3k}^{n-m} \right] \nonumber\\
&(n=0,1,2)
\label{u3n}\\
&\hspace*{-0.5cm} \bar{p}^{n+1}= {\dfrac{\mu}{\varepsilon h} (n+2)\bar{u}_3^{n+2}} +
\dfrac{\mu A^1}{A^0}
\bar{u}_3^{n+1} - \dfrac{ \rho_0}{n+1} \sum_{m=1}^n \bar{u}_3^m (n-m+1) \bar{u}_3^{n-m+1} \nonumber\\
&{} +\dfrac{\varepsilon h \rho_0}{n+1} \left\{- \dfrac{ \partial \bar{u}_3^n}{\partial t} -\sum_{k=1}^2
\bar{u}_k^n Q^0_{3k}+ \sum_{l=1}^2 \dfrac{
\partial \bar{u}_3^n}{\partial \xi_l} C^0_l + \dfrac{n}{ h} \bar{u}_3^n \left(\dfrac{\partial h}{\partial
t} + C^0_3\right) \right.\nonumber\\
&{} + \sum_{m=0}^{n-1} \sum_{l=1}^2 \left[ \dfrac{
\partial \bar{u}_3^m}{\partial \xi_l} + \sum_{k=1}^2 \bar{u}_k^m \left(\dfrac{ \partial \vec{a}_{k}}{\partial \xi_l} \cdot \vec{a}_3\right) \right] (\varepsilon h)^{n-m} C^{n-m,n-m-1}_l \nonumber\\
&{} +\sum_{m=0}^{n-2} (m+1) \bar{u}_3^{m+1} \varepsilon^{n-m-1} h^{n-m-2} C_3^{n-m-1,n-m-2}
\nonumber\\
&{}-\sum_{m=0}^n \sum_{k=1}^2\bar{u}_k^m \sum_{j=0}^{n-m} \sum_{l=1}^2 \left[ \dfrac{
\partial \bar{u}_3^j}{\partial \xi_l} + \bar{u}_q^j \left(\dfrac{ \partial \vec{a}_{q}}{\partial \xi_l} \cdot \vec{a}_3 \right)\right] (\varepsilon h)^{n-m-j} B_{lk}^{n-m-j}
\nonumber\\
&{}- \sum_{m=0}^{n-1} \sum_{k=1}^2 \bar{u}_k^m \sum_{j=0}^{n-m-1} (n-m-j)\varepsilon^{j} h^{j-1} B_{3k}^j
\bar{u}_3^{n-m-j}\nonumber\\
&\left.{} + \nu \sum_{r=0}^n \varepsilon^{n-r} \left[ \sum_{l=1}^2\sum_{m=1}^2\dfrac{
\partial^2 \bar{u}_3^r}{\partial \xi_l \partial \xi_m}
\iota_{lm}^{n-r}+ \sum_{k=1}^3 \sum_{l=1}^2 \dfrac{
\partial \bar{u}_k^r}{\partial \xi_l} L_{3kl}^{n,r} + \sum_{k=1}^3\bar{u}_k^r S_{3k}^{n,r}
\right]+
\bar{f}_3^n\right\} \nonumber\\& (n=0,1,2) \label{pn}
\end{align}
Examining the new model, we observe that the equations can be divided into two groups: a first group, including equations \eqref{ec_uin} and \eqref{ec_div_n3m}, that must be solved to obtain the terms $\bar{u}_1^0$, $\bar{u}_2^0$, $\bar{p}^0$, $\bar{u}_1^1$, $\bar{u}_2^1$, $\bar{u}_1^2$, $\bar{u}_2^2$, $\bar{u}_1^3$ and $\bar{u}_2^3$, and a second group, including equations \eqref{u30m}-\eqref{pn}, that allows us to eliminate the terms $\bar{u}_3^0$, $\bar{u}_3^1$, $\bar{u}_3^2$, $\bar{u}_3^3$, $\bar{p}^1$, $\bar{p}^2$, $\bar{p}^3$ from equations \eqref{ec_uin}-\eqref{ec_div_n3m}. Once this elimination has been carried out, we can solve the first group of equations to compute $\bar{u}_i^k$ $(k=0,1,2,3,\ i=1,2)$ and $\bar{p}^0$, and use the second group of equations to obtain $\bar{u}_3^k$ ($k=0,1,2,3$) and $\bar{p}^j$ ($j=1,2,3$). Boundary and initial conditions must be added to this system of equations.
\section{Asymptotic Analysis of the new model} \label{sec-aa}
This section is devoted to the asymptotic analysis of the model \eqref{ui_pol_xi3m}-\eqref{pn} proposed in the previous section. We want to check whether the asymptotic behavior of the new model, when $\varepsilon$ tends to zero, is the same as that of the Navier-Stokes equations, shown in \cite{RodTabJMAA2021}.
Let us start by assuming that $\bar{u}_i^n(\varepsilon)$, $\bar{f}_i^n(\varepsilon)$, $\bar{p}^n(\varepsilon)$, $W_i(\varepsilon)$ and $V_i(\varepsilon)$ ($i=1,2,3$, $n=0,1,2,3$) can be developed in powers of
$\varepsilon$, that is:
\begin{eqnarray}
&& \bar{u}_i^n(\varepsilon) = \sum_{k=0}^{\infty} \varepsilon^k \bar{u}_i^{n,k} \quad (i=1,2,3, \ n=0,1,2,3) \label{ansatz_bar_u}\\
&&\bar{p}^n(\varepsilon) = \sum_{k=-2}^{\infty}\varepsilon^{k} \bar{p}^{n,k} \quad (n=0,1,2,3) \label{ansatz_bar_p}\\
&& \bar{f}_i^n(\varepsilon) =\sum_{k=0}^{\infty}\varepsilon^{k} \bar{f}_i^{n,k} \quad (i=1,2,3, n=0,1,2,\cdots) \label{ansatz_bar_f}\\
&&V_i(\varepsilon)= \sum_{k=0}^{\infty} \varepsilon^k V_i^k \quad (i=1,2) \label{ansatz_V}\\
&& W_i(\varepsilon)= \sum_{k=0}^{\infty} \varepsilon^k W_i^k \quad (i=1,2) \label{ansatz_W}
\end{eqnarray}
The substitution of the developments \eqref{ansatz_bar_u}-\eqref{ansatz_W}
in \eqref{ec_uin}-\eqref{pn}, \eqref{ui0_Vi} and \eqref{sum_uik_Wi_Vi}-\eqref{sum_u3k_h}
and the identification of the terms multiplied by the same power of
$\varepsilon $, lead to a series of equations that will allow us to determine
$\bar{u}^{n,0}_i, \bar{p}^{n,-2}, \dots (i=1,2,3, \ n=0,1,2,3)$.
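Before listing these equations, let us illustrate the identification procedure itself on a toy expression. The following sketch (the symbols, the truncation orders and the expression are chosen only for this illustration and are not one of the model equations) substitutes truncated expansions in powers of $\varepsilon$ and extracts the coefficient of each power with a computer algebra system:
\begin{verbatim}
# Toy illustration of "identifying the terms multiplied by the same
# power of epsilon".  The expression below is NOT a model equation; it
# only mimics their structure: a 1/eps^2 viscous-like term plus the
# product of two truncated expansions.
import sympy as sp

eps = sp.Symbol('eps', positive=True)
nu, h = sp.symbols('nu h', positive=True)

# truncated expansions u(eps) = sum_k eps^k u_k,  p(eps) = sum_{k>=-2} eps^k p_k
u = sum(sp.Symbol('u%d' % k) * eps**k for k in range(0, 3))
p = sum(sp.Symbol('p%d' % k) * eps**k for k in range(-2, 1))

expr = sp.expand(nu / (eps**2 * h**2) * u + u * p)

# the coefficient of eps^k is the "order-k equation" of the procedure
for k in range(-2, 3):
    print(k, expr.coeff(eps, k))
\end{verbatim}
Each printed coefficient plays the role of one of the order-by-order equations listed below for the actual model.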
In this way, we identify the terms multiplied by
\begin{itemize}
\item $\varepsilon^{-2}$ in \eqref{ec_uin} and \eqref{pn}:
\begin{align} &\dfrac{1}{\rho_0} \sum_{l=1}^2\dfrac{
\partial \bar{p}^{0,-2}}{\partial \xi_l} J_{il}^{0,0} =
{\dfrac{ 2 \nu }{h^2}\bar{u}_i^{2,0}} ,& (i=1,2) \label{p0-2_ui20}\\
&\dfrac{1}{\rho_0} \sum_{l=1}^2\dfrac{
\partial \bar{p}^{1,-2}}{\partial \xi_l} J_{il}^{0,0} + \bar{p}^{1,-2} h^{-1} J_{i3}^{0,0} = \nu
{\dfrac{6 }{ h^2}\bar{u}_i^{3,0}}, & (i=1,2) \label{p1-2_ui30}\\
&\dfrac{1}{\rho_0} \sum_{l=1}^2\dfrac{
\partial \bar{p}^{n,-2}}{\partial \xi_l} J_{il}^{0,0} + n \bar{p}^{n,-2} h^{-1} J_{i3}^{0,0} = 0, & (n=2,3, \quad i=1,2) \label{pn-2_0}\\
&\bar{p}^{n,-2}=0, & (n=1,2,3) \label{pn-2}
\end{align}
Taking into account \eqref{pn-2}, we obtain from \eqref{p1-2_ui30}:
\begin{equation}
\bar{u}_i^{3,0}=0, \quad (i=1,2) \label{ui30}
\end{equation}
\item $\varepsilon^{-1}$ in \eqref{ec_uin} and \eqref{pn} (considering \eqref{pn-2} and \eqref{ui30}):
\begin{align}
&{}\dfrac{1}{\rho_0} \sum_{l=1}^2\dfrac{
\partial \bar{p}^{0,-1}}{\partial \xi_l} J_{il}^{0,0}=
\dfrac{\nu}{ h}
\bar{u}_i^{1,0} \dfrac{A^1}{A^0}+ \nu
{\dfrac{2}{h^2}\bar{u}_i^{2,1}}, \quad (i=1,2) \label{p0-1_ui10_ui21}\\
&{}\dfrac{1}{ h} \bar{u}_3^{1,0} \bar{u}_i^{1,0} =-\dfrac{1}{\rho_0} \sum_{l=1}^2\dfrac{
\partial \bar{p}^{1,-1}}{\partial \xi_l} J_{il}^{0,0} -\dfrac{1}{\rho_0} \sum_{l=1}^2\dfrac{
\partial \bar{p}^{0,-2}}{\partial \xi_l} h J_{il}^{0,1} -\dfrac{1}{\rho_0} \bar{p}^{1,-1} h^{-1} J_{i3}^{0,0} \nonumber\\
&{} +
\dfrac{2\nu}{ h} \dfrac{A^1}{A^0}
\bar{u}_i^{2,0} + \nu
{\dfrac{ 6}{ h^2}\bar{u}_i^{3,1}}, \quad (i=1,2) \label{ec_ns_eps-1}\\
&{}\dfrac{1}{ h} \sum_{m=1}^2 \bar{u}_3^{m,0} (3-m) \bar{u}_i^{3-m,0} =-\dfrac{1}{\rho_0} \sum_{l=1}^2\dfrac{
\partial \bar{p}^{2,-1}}{\partial \xi_l} J_{il}^{0,0} -\dfrac{2}{\rho_0} \bar{p}^{2,-1} h^{-1} J_{i3}^{0,0},\quad (i=1,2)\\
&{}\dfrac{1}{ h} \sum_{m=2}^3 \bar{u}_3^{m,0} (4-m) \bar{u}_i^{4-m,0} =-\dfrac{1}{\rho_0} \sum_{l=1}^2\dfrac{
\partial \bar{p}^{3,-1}}{\partial \xi_l} J_{il}^{0,0} -\dfrac{3}{\rho_0} \bar{p}^{3,-1} h^{-1} J_{i3}^{0,0} ,\quad (i=1,2)\\
%
& \bar{p}^{n+1,-1}= {\dfrac{\mu}{ h} (n+2)\bar{u}_3^{n+2,0}}, \quad (n=0,1) \label{pn-1_u3n+20}\\
%
& \bar{p}^{3,-1}= 0 \label{p3-1}\end{align}
\item $\varepsilon^0$ in \eqref{ec_uin}-\eqref{pn}, \eqref{ui0_Vi} and \eqref{sum_uik_Wi_Vi} (keeping in mind \eqref{pn-2} and \eqref{ui30}):
\begin{align}
&\hspace*{-0.5cm} \dfrac{ \partial \bar{u}_i^{0,0}}{\partial t} +
\sum_{k=1}^3 \bar{u}_k^{0,0} Q^0_{ik} - \sum_{l=1}^2 \dfrac{
\partial \bar{u}_i^{0,0}}{\partial \xi_l} C^0_l + \sum_{k=1}^2 \bar{u}_k^{0,0} \sum_{l=1}^2 \left( \dfrac{
\partial \bar{u}_i^{0,0}}{\partial \xi_l} + \sum_{q=1}^3 \bar{u}_q^{0,0} H^0_{ilq} \right) B_{lk}^{0}
\nonumber\\
&{} =-\dfrac{1}{\rho_0}\sum_{l=1}^2\dfrac{
\partial \bar{p}^{0,0}}{\partial \xi_l} J_{il}^{0,0} + \nu \left\{ \sum_{m=1}^2\sum_{l=1}^2 \dfrac{
\partial^2 \bar{u}_i^{0,0}}{\partial \xi_l \partial \xi_m}
\iota_{lm}^{0}+ \sum_{k=1}^3 \sum_{l=1}^2 \dfrac{
\partial \bar{u}_k^{0,0}}{\partial \xi_l} L_{ikl}^{0,0} + \sum_{k=1}^3 \bar{u}_k^{0,0} S_{ik}^{0,0}
\right\}\nonumber\\
&{}+
\dfrac{\nu}{ h} \dfrac{A^1}{A^0}
\bar{u}_i^{1,1} + \nu
{\dfrac{ 2}{ h^2}\bar{u}_i^{2,2}} +
\bar{f}_i^{0,0}, \quad (i=1,2) \label{ec_ui00}\\
&\hspace*{-0.5cm} \dfrac{ \partial \bar{u}_i^{1,0}}{\partial t} +
\sum_{k=1}^3 \bar{u}_k^{1,0} Q^0_{ik} - \sum_{l=1}^2 \dfrac{
\partial \bar{u}_i^{1,0}}{\partial \xi_l} C^0_l - \dfrac{1}{ h} \bar{u}_i^{1,0} \left(\dfrac{\partial h}{\partial
t} + C^0_3\right) \nonumber\\
&{}+\sum_{m=0}^1 \sum_{k=1}^2 \bar{u}_k^{m,0} \sum_{l=1}^2 \left( \dfrac{
\partial \bar{u}_i^{1-m,0}}{\partial \xi_l} + \sum_{q=1}^3 \bar{u}_q^{1-m,0} H^0_{ilq} \right) B_{lk}^{0}
+ \sum_{k=1}^2 \bar{u}_k^{0,0} h^{-1} B_{3k}^{0}
\bar{u}_i^{1,0} +\dfrac{2}{ h} \bar{u}_3^{1,1}\bar{u}_i^{1,0}
\nonumber\\
&{} =-\dfrac{1}{\rho_0} \sum_{m=0}^1 \sum_{l=1}^2\dfrac{
\partial \bar{p}^{m,m-1}}{\partial \xi_l} h^{1-m} J_{il}^{0,1-m}
-\dfrac{1}{\rho_0} \bar{p}^{1,0} h^{-1} J_{i3}^{0,0} \nonumber\\
&{} + \nu \left\{ \sum_{m=1}^2\sum_{l=1}^2 \dfrac{
\partial^2 \bar{u}_i^{1,0}}{\partial \xi_l \partial \xi_m}
\iota_{lm}^{0}+ \sum_{k=1}^3 \sum_{l=1}^2 \dfrac{
\partial \bar{u}_k^{1,0}}{\partial \xi_l} L_{ikl}^{1,1} + \sum_{k=1}^3 \bar{u}_k^{1,0} S_{ik}^{1,1}
\right\}\nonumber\\
&{}+
\dfrac{2\nu}{ h} \dfrac{A^1}{A^0}
\bar{u}_i^{2,1} + \nu
{\dfrac{ 6 }{h^2}\bar{u}_i^{3,2}} +
\bar{f}_i^{1,0}, \quad (i=1,2) \label{ec_ui10}\\
&\hspace*{-0.5cm} \dfrac{ \partial \bar{u}_i^{2,0}}{\partial t} +
\sum_{k=1}^3 \bar{u}_k^{2,0} Q^0_{ik} - \sum_{l=1}^2 \dfrac{
\partial \bar{u}_i^{2,0}}{\partial \xi_l} C^0_l - \dfrac{2}{ h} \bar{u}_i^{2,0} \left(\dfrac{\partial h}{\partial
t} + C^0_3\right) \nonumber\\
&{}+\sum_{m=0}^2 \sum_{k=1}^2 \bar{u}_k^{m,0} \sum_{l=1}^2 \left( \dfrac{
\partial \bar{u}_i^{2-m,0}}{\partial \xi_l} + \sum_{q=1}^3 \bar{u}_q^{2-m,0} H^0_{ilq} \right) B_{lk}^{0}
\nonumber\\
&{}+ \sum_{m=0}^{1} \sum_{k=1}^2 \bar{u}_k^{m,0} (2-m) h^{-1} B_{3k}^{0}
\bar{u}_i^{2-m,0} +\dfrac{1}{ h} \sum_{m=1}^2 (3-m) (\bar{u}_3^{m,1}\bar{u}_i^{3-m,0}
+\bar{u}_3^{m,0} \bar{u}_i^{3-m,1} )\nonumber\\
&{} =-\dfrac{1}{\rho_0} \sum_{m=1}^2 \sum_{l=1}^2\dfrac{
\partial \bar{p}^{m,m-2}}{\partial \xi_l} h^{2-m} J_{il}^{0,2-m} -\dfrac{1}{\rho_0} \sum_{m=1}^{2} m \bar{p}^{m,m-2} h^{1-m} J_{i3}^{0,2-m} \nonumber\\
&{} + \nu \left\{ \sum_{m=1}^2\sum_{l=1}^2 \dfrac{
\partial^2 \bar{u}_i^{2,0}}{\partial \xi_l \partial \xi_m}
\iota_{lm}^{0}+ \sum_{k=1}^3 \sum_{l=1}^2 \dfrac{
\partial \bar{u}_k^{2,0}}{\partial \xi_l} L_{ikl}^{2,2} + \sum_{k=1}^3 \bar{u}_k^{2,0} S_{ik}^{2,2}
\right\}\nonumber\\
&{}+
\dfrac{3\nu}{ h} \dfrac{A^1}{A^0}
\bar{u}_i^{3,1} +
\bar{f}_i^{2,0}, \quad (i=1,2)\\
&\hspace*{-0.5cm} \sum_{m=1}^2 \sum_{k=1}^2 \bar{u}_k^{m,0} \sum_{l=1}^2 \left( \dfrac{
\partial \bar{u}_i^{3-m,0}}{\partial \xi_l} + \sum_{q=1}^3 \bar{u}_q^{3-m,0} H^0_{ilq} \right) B_{lk}^{0}
\nonumber\\
&{}+ \sum_{m=1}^{2} \sum_{k=1}^2 \bar{u}_k^{m,0} (3-m) h^{-1} B_{3k}^{0}
\bar{u}_i^{3-m,0} +\dfrac{1}{ h} \sum_{m=1}^3 (4-m) (\bar{u}_3^{m,1}\bar{u}_i^{4-m,0}
+\bar{u}_3^{m,0} \bar{u}_i^{4-m,1} )\nonumber\\
&{} =-\dfrac{1}{\rho_0} \sum_{m=2}^3 \sum_{l=1}^2\dfrac{
\partial \bar{p}^{m,m-3}}{\partial \xi_l} h^{3-m} J_{il}^{0,3-m}
-\dfrac{1}{\rho_0} \sum_{m=2}^{3} m \bar{p}^{m,m-3} h^{2-m} J_{i3}^{0,3-m} +
\bar{f}_i^{3,0},\nonumber\\
&{} \quad (i=1,2)
\\
&\hspace*{-0.5cm}\bar{u}_3^{0,0}= \dfrac{\partial \vec{X}}{\partial t}
\cdot \vec{a}_3 \label{u300}\\
&\hspace*{-0.5cm}\bar{u}_3^{n+1,0}=0, \quad (n=0,1,2) \label{u3n0}\\
&\hspace*{-0.5cm} \bar{p}^{n+1,0}= {\dfrac{\mu}{ h} (n+2)\bar{u}_3^{n+2,1}}, \quad
(n=0,1) \label{pn0}\\
&\hspace*{-0.5cm} \bar{p}^{3,0}= 0 \label{p30}\\
&\hspace*{-0.5cm} \bar{u}_i^{0,0}=V_i^0 \quad (i=1,2) \label{ui00_Vi0}\\
&\hspace*{-0.5cm} \bar{u}_i^{1,0}+\bar{u}_i^{2,0}= W_i^0 - V_i^0, \quad (i=1,2)
\label{sum_uik0_Wi_Vi0}
\end{align}
Bearing in mind \eqref{u3n0}, we obtain from \eqref{pn-1_u3n+20}:
\begin{equation}
\bar{p}^{1,-1}=\bar{p}^{2,-1}=0 \label{p1-1,p2-1}
\end{equation}
\item $\varepsilon$ in \eqref{u30m}, \eqref{u3n}, \eqref{ui0_Vi} and \eqref{sum_uik_Wi_Vi}-\eqref{sum_u3k_h} (considering \eqref{u3n0}):
\begin{align}
&\hspace*{-0.5cm}\bar{u}_3^{0,1}= 0 \label{u301}\\
&\hspace*{-0.5cm}\bar{u}_3^{1,1} =- h \left[ \dfrac{1}{\sqrt{A^0}} \textrm{div}\left( \sqrt{A^0} \vec{u}^{0,0}\right) + \bar{u}_3^{0,0} \dfrac{A^1}{A^0}
\right] \label{u311}\\
&\hspace*{-0.5cm}\bar{u}_3^{n+1,1} =- \dfrac{ h}{n+1} \left[ \dfrac{1}{\sqrt{A^0}} \textrm{div}\left( \sqrt{A^0} \vec{u}^{n,0}\right)
- \dfrac{ n}{h} \nabla h \cdot \vec{u}^{n,0}
\right], \quad (n=1,2) \label{u3_231}\\
&\hspace*{-0.5cm}\bar{u}_i^{0,1}=V_i^1 \quad (i=1,2) \label{ui01_Vi1}\\
&\hspace*{-0.5cm}\sum_{k=1}^3 \bar{u}_i^{k,1}= W_i^1 - V_i^1 \quad (i=1,2)
\label{sum_uik1_Wi1_Vi1}\\
&\hspace*{-0.5cm}\sum_{k=1}^3 \bar{u}_3^{k,1}= \dfrac{\partial
h}{\partial t}
\label{sum_u3k1_h}
\end{align} where $\vec{u}^{n,k}=(\bar{u}_1^{n,k},\bar{u}_2^{n,k})$.
\item $\varepsilon^2$ in \eqref{u30m}, \eqref{u3n}, \eqref{ui0_Vi} and \eqref{sum_uik_Wi_Vi}-\eqref{sum_u3k_h} (keeping in mind \eqref{u301}):
\begin{align}
&\hspace*{-0.5cm}\bar{u}_3^{0,2}= 0 \label{u302}\\
&\hspace*{-0.5cm}\bar{u}_3^{1,2} =- \dfrac{h}{\sqrt{A^0}} \textrm{div}\left( \sqrt{A^0} \vec{u}^{0,1}\right) \label{u312} \\
&\hspace*{-0.5cm}\bar{u}_3^{2,2} =- \dfrac{ h}{2} \sum_{m=0}^1 h^{1-m} \sum_{k=1}^2 \left[ \sum_{l=1}^2 \left(\dfrac{
\partial \bar{u}_k^{m,m}}{\partial \xi_l} B_{lk}^{1-m} + \bar{u}_k^{m,m} H_{llk}^{1-m} \right)+ \bar{u}_3^{m,m} H_{kk3}^{1-m}
+ \dfrac{ m}{h} \bar{u}_k^{m,m}
B_{3k}^{1-m} \right]\\
&\hspace*{-0.5cm}\bar{u}_3^{3,2} =- \dfrac{ h}{3} \sum_{m=1}^2 h^{2-m} \sum_{k=1}^2 \left[ \sum_{l=1}^2 \left(\dfrac{
\partial \bar{u}_k^{m,m-1}}{\partial \xi_l} B_{lk}^{2-m} + \bar{u}_k^{m,m-1} H_{llk}^{2-m} \right)+ \bar{u}_3^{m,m-1} H_{kk3}^{2-m}\nonumber\right.\\
&\left.{}+ \dfrac{ m}{h} \bar{u}_k^{m,m-1}
B_{3k}^{2-m} \right] \label{u332}\\
&\hspace*{-0.5cm}\bar{u}_i^{0,2}=V_i^2 \quad (i=1,2) \label{ui02_Vi2}\\
&\hspace*{-0.5cm}\sum_{k=1}^3 \bar{u}_i^{k,2}= W_i^2 - V_i^2 \quad (i=1,2)
\label{sum_uik2_Wi2_Vi2}\\
&\hspace*{-0.5cm}\sum_{k=1}^3 \bar{u}_3^{k,2}= 0
\label{sum_u3k2_h}
\end{align}
\end{itemize}
Once the equations \eqref{p0-2_ui20}-\eqref{sum_u3k2_h} have been derived, we will proceed, in the next section, to impose the boundary conditions.
\section{Imposing boundary conditions} \label{BoundaryConditions}
\subsection{Boundary conditions leading to a lubrication problem} \label{subseccion-4-1}
Let us assume that the fluid slips at the lower
surface $(\xi _{3}=0)$ and at the upper surface $(\xi _{3}=1)$; that is,
let us assume that the tangential velocities of the fluid at
the lower and upper surfaces are known, and that the normal velocity of each surface matches the normal velocity of the fluid at that surface.
In this case, the terms $\bar{u}_i^0$ ($i=1,2$) are known (\eqref{ui0_Vi}),
$\bar{u}_i^{0,k}$ ($i=1,2$, $k=0,1,\dots$) are known (see \eqref{ui00_Vi0}, \eqref{ui01_Vi1}, \eqref{ui02_Vi2}), and from equations \eqref{p0-2_ui20} and \eqref{sum_uik0_Wi_Vi0}, we obtain
\begin{align}
&\bar{u}_i^{2,0} = \dfrac{h^2}{2 \mu} \sum_{l=1}^2\dfrac{
\partial \bar{p}^{0,-2}}{\partial \xi_l} J_{il}^{0,0}, &(i=1,2) \label{ui20}
\\
&\bar{u}_i^{1,0}= W_i^0 - V_i^0 - \dfrac{h^2}{2 \mu} \sum_{l=1}^2\dfrac{
\partial \bar{p}^{0,-2}}{\partial \xi_l} J_{il}^{0,0}, &(i=1,2) \label{ui1_WVui2}
\end{align}
We can substitute $\bar{u}^{k,1}_3$ ($k=1,2,3$) in \eqref{sum_u3k1_h} by the expressions \eqref{u311}-\eqref{u3_231}; then, using \eqref{ui00_Vi0}, \eqref{u300}, \eqref{ui20} and \eqref{ui1_WVui2}, we obtain:
\begin{align}
\dfrac{ 1}{12\mu\sqrt{A^0}} \textrm{div}\left(\sqrt{A^0} h^3\sum_{l=1}^2 \dfrac{
\partial \bar{p}^{0,-2}}{\partial \xi_l}( J^{0,0}_{1l}, J^{0,0}_{2l})\right) - h \left( \dfrac{\partial \vec{X}}{\partial t}
\cdot \vec{a}_3 \right)\dfrac{A^1}{A^0}\nonumber\\
{}- \dfrac{ h}{2\sqrt{A^0}} \textrm{div}(\sqrt{A^0}(\vec{W}^{0}+\vec{V}^{0})) + \dfrac{1 }{2} \nabla h \cdot (\vec{W}^{0}-\vec{V}^{0})= \dfrac{\partial
h}{\partial t} \label{ec_lub}
\end{align}
\begin{rmk}
Equation \eqref{ec_lub} is exactly the same as equation \eqref{Reynolds_gen} (equation (99) in \cite{RodTabJMAA2021}), although the divergence term is written in a slightly different form, using $$ \dfrac{M}{\sqrt{A^0}}= \sqrt{A^0} \begin{pmatrix}
J_{11}^{0,0}& J_{12}^{0,0}\\ J_{21}^{0,0} & J_{22}^{0,0}
\end{pmatrix} $$
As explained in \cite{RodTabJMAA2021}, \eqref{ec_lub} is a generalized version of the Reynolds equation.
\end{rmk}
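To give an idea of how \eqref{ec_lub} can be used in practice, the following sketch solves a reduced version of it by finite differences. We assume, only for this illustration, a flat reference surface (so that $A^0=1$, $A^1=0$, $J^{0,0}_{il}=\delta_{il}$ and $\partial\vec{X}/\partial t\cdot\vec{a}_3=0$), a prescribed gap $h$, prescribed tangential velocities $\vec{V}^0$ and $\vec{W}^0$, and homogeneous Dirichlet data for the pressure; under these assumptions \eqref{ec_lub} reduces to $\textrm{div}(h^3\nabla \bar{p}^{0,-2})=12\mu\big[\partial h/\partial t+\tfrac{h}{2}\,\textrm{div}(\vec{W}^0+\vec{V}^0)-\tfrac12\nabla h\cdot(\vec{W}^0-\vec{V}^0)\big]$. The grid, the data and all names below are illustrative choices, not part of the derivation.
\begin{verbatim}
# Finite-difference sketch of the reduced (flat-geometry) lubrication
# equation:  div(h^3 grad p) = 12*mu*( dh/dt + (h/2)*div(W+V)
#                                       - (1/2)*grad(h).(W-V) ),
# with p = 0 on the boundary of the unit square.  Purely illustrative.
import numpy as np
import scipy.sparse as sps
import scipy.sparse.linalg as spla

n  = 64                      # grid points per direction (assumed)
dx = 1.0 / (n - 1)
x  = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing='ij')

mu   = 1.0e-2                              # viscosity (assumed)
h    = 1.0e-3 * (1.0 + 0.5 * X)            # gap (assumed)
dhdt = np.zeros_like(h)                    # steady gap (assumed)
V = (np.ones_like(h), np.zeros_like(h))    # lower-surface velocity
W = (np.zeros_like(h), np.zeros_like(h))   # upper-surface velocity

def ddx(f): return np.gradient(f, dx, axis=0)
def ddy(f): return np.gradient(f, dx, axis=1)

rhs = 12.0 * mu * (dhdt
                   + 0.5 * h * (ddx(W[0] + V[0]) + ddy(W[1] + V[1]))
                   - 0.5 * (ddx(h) * (W[0] - V[0]) + ddy(h) * (W[1] - V[1])))

c = h**3                                   # mobility coefficient

def idx(i, j): return i * n + j

rows, cols, vals, b = [], [], [], np.zeros(n * n)
for i in range(n):
    for j in range(n):
        k = idx(i, j)
        if i in (0, n - 1) or j in (0, n - 1):
            rows.append(k); cols.append(k); vals.append(1.0)   # p = 0
            continue
        b[k] = rhs[i, j]
        diag = 0.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            cf = 0.5 * (c[i, j] + c[i + di, j + dj])           # face value
            rows.append(k); cols.append(idx(i + di, j + dj)); vals.append(cf / dx**2)
            diag -= cf / dx**2
        rows.append(k); cols.append(k); vals.append(diag)

A = sps.csc_matrix((vals, (rows, cols)), shape=(n * n, n * n))
p = spla.spsolve(A, b).reshape(n, n)       # approximation of p^{0,-2}
print('max |p| =', np.abs(p).max())
\end{verbatim}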
The following result can be proved using \eqref{cambio_base_u}, \eqref{ui_pol_xi3m}, \eqref{p_pol_xi3m}, \eqref{pn-2_0}, \eqref{ui30}, \eqref{u300}, \eqref{u3n0}, \eqref{ui00_Vi0}, \eqref{ui20} and \eqref{ui1_WVui2}:
\begin{theorem} If we assume that there exist asymptotic expansions \eqref{ansatz_bar_u}-\eqref{ansatz_W}, that \eqref{f_pol_xi3} holds
and that the tangential and normal velocities are known on the
bound surfaces, then the solution of model \eqref{ui_pol_xi3m}-\eqref{pn} verifies
\begin{align}
u_k(\varepsilon) &= V_k^0 + \xi_3 (W_k^0 - V_k^0) +\dfrac{h^2}{2 \mu} \sum_{l=1}^2\dfrac{
\partial \bar{p}^{0,-2}}{\partial \xi_l} J_{kl}^{0,0}(\xi_3^2-\xi_3) + O(\varepsilon),
\quad (k=1,2) \\
u_3(\varepsilon) &= \dfrac{\partial \vec{X}}{\partial t}
\cdot \vec{a}_3 + O(\varepsilon), \\
p(\varepsilon) &= \varepsilon^{-2} \bar{p}^{0,-2} + O(\varepsilon^{-1}),
\end{align}
where $\bar{p}^{0,-2}$ is the solution of the equation \eqref{ec_lub}. Thus,
the velocity, $\vec{u}^{\varepsilon}$, and the pressure, $p^{\varepsilon}$, defined in the original domain, satisfy
\begin{align}
u_i^{\varepsilon} &= \sum_{k=1}^2 u_k(\varepsilon) \dfrac{\partial x_i}{\partial \xi_k} + u_3(\varepsilon) N_i, \quad (i=1,2,3) \\
&=\sum_{k=1}^2 \left[V_k^0 + \xi_3 (W_k^0 - V_k^0) +\dfrac{h^2}{2 \mu} \sum_{l=1}^2\dfrac{
\partial \bar{p}^{0,-2}}{\partial \xi_l} J_{kl}^{0,0}(\xi_3^2-\xi_3) \right] \dfrac{\partial x_i}{\partial \xi_k} + \left( \dfrac{\partial \vec{X}}{\partial t}
\cdot \vec{a}_3\right) N_i + O(\varepsilon),\\
p^{\varepsilon}&=\varepsilon^{-2} \bar{p}^{0,-2} + O(\varepsilon^{-1}).
\end{align}
\end{theorem}
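The leading-order tangential velocity in the previous theorem is a Couette--Poiseuille-type profile across the gap. A minimal sketch evaluating it for given data could read as follows (all numerical values are illustrative, and a flat geometry with $J^{0,0}_{kl}=\delta_{kl}$ is assumed, so that the pressure gradient enters componentwise):
\begin{verbatim}
# Leading-order tangential velocity across the gap (cf. the theorem above):
#   u_k ~ V_k^0 + xi3*(W_k^0 - V_k^0)
#         + h^2/(2*mu) * (grad p^{0,-2})_k * (xi3^2 - xi3)
# (flat geometry assumed, so J^{0,0} reduces to the identity)
import numpy as np

mu, h = 1.0e-2, 1.0e-3              # viscosity and gap (illustrative)
V0 = np.array([1.0, 0.0])           # lower-surface tangential velocity
W0 = np.array([0.0, 0.0])           # upper-surface tangential velocity
grad_p = np.array([-2.0e5, 0.0])    # gradient of p^{0,-2} (illustrative)

xi3 = np.linspace(0.0, 1.0, 11)     # normalized coordinate across the gap
u = (V0[None, :]
     + xi3[:, None] * (W0 - V0)[None, :]
     + (h**2 / (2.0 * mu)) * grad_p[None, :] * (xi3**2 - xi3)[:, None])

for z, row in zip(xi3, u):
    print("xi3 = %4.2f   u1 = %8.4f   u2 = %8.4f" % (z, row[0], row[1]))
\end{verbatim}
In an actual computation, the gradient of $\bar{p}^{0,-2}$ would come from solving \eqref{ec_lub}, as sketched above.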
\subsection{Boundary conditions leading to a thin fluid layer problem} \label{subseccion_sw}
Now, instead
of considering that the tangential and normal velocities are known on the
upper and lower surfaces,
we assume that the normal components of the traction on $\xi _{3}=0$ and
on $\xi _{3}=1$ are known pressures, and that the tangential components
of the traction on these surfaces are friction forces depending on the
value of the velocities on $\partial D$. Therefore, we assume that
\begin{align}
&\vec{T}^{\varepsilon}\cdot
\vec{n}^{\varepsilon}_0 = (\sigma^{\varepsilon} \vec{n}^{\varepsilon}_0)\cdot
\vec{n}^{\varepsilon}_0=-\pi^{\varepsilon}_0 &\textrm{ on } \xi_3=0& \label{Tn0_xi3_0}\\
&\vec{T}^{\varepsilon}\cdot
\vec{n}^{\varepsilon}_1 = (\sigma^{\varepsilon} \vec{n}^{\varepsilon}_1)\cdot
\vec{n}^{\varepsilon}_1=-\pi^{\varepsilon}_1 &\textrm{ on } \xi_3=1&
\label{Tn1_xi3_1}
\\
&\vec{T}^{\varepsilon}\cdot
\vec{a}_i= (\sigma^{\varepsilon} \vec{n}^{\varepsilon}_0)\cdot
\vec{a}_i=-\vec{f}^{\varepsilon}_{R_0}\cdot
\vec{a}_i &\textrm{ on } \xi_3=0, &\quad (i=1,2) \label{Tai_xi3_0}\\
&\vec{T}^{\varepsilon}\cdot
\vec{v}_i^{\varepsilon}= (\sigma^{\varepsilon} \vec{n}^{\varepsilon}_1)\cdot
\vec{v}_i^{\varepsilon}=-\vec{f}^{\varepsilon}_{R_1}\cdot
\vec{v}_i^{\varepsilon} &\textrm{ on } \xi_3=1, &\quad (i=1,2)
\label{Tvi_xi3_1}
\end{align} where $\vec{T}^{\varepsilon}$ is the traction vector and $\sigma^{\varepsilon}$ is the stress tensor given by
\begin{align}
\sigma^{\varepsilon}_{ij}&= -p^{\varepsilon}\delta_{ij}+\mu \left(
\dfrac{\partial u_i^{\varepsilon}}{\partial x^{\varepsilon}_j} +
\dfrac{\partial u_j^{\varepsilon}}{\partial
x^{\varepsilon}_i}\right)\nonumber\\
&= \sum_{n=0}^3 \xi_3^n \left[ -\sum_{k=-2}^{\infty} \varepsilon^k \bar{p}^{n,k} \delta_{ij}+\mu \sum_{k=0}^{\infty} \varepsilon^k\sum_{m=1}^3 \sum_{l=1}^2 \left(
\dfrac{\partial( \bar{u}_m^{n,k} a_{mi})}{\partial \xi_l} \left(\sum_{r=0}^{\infty}(\varepsilon \xi_3 h)^r(\alpha_l^r a_{1j}+\beta_l^r a_{2j})\right) \right. \right.\nonumber\\
&+\left. \left.
\dfrac{\partial (\bar{u}_m^{n,k} a_{mj})}{\partial
\xi_l} \left(\sum_{r=0}^{\infty}(\varepsilon \xi_3 h)^r(\alpha_l^r a_{1i}+\beta_l^r a_{2i})\right) \right)\right]\nonumber\\
&+ \mu \sum_{k=0}^{\infty} \varepsilon^k \sum_{n=1}^3 \sum_{m=1}^3 n \xi_3^{n-1} \bar{u}_m^{n,k} \left[\sum_{r=0}^{\infty}\varepsilon^r \xi_3^{r+1} h^{r-1}
\left(\alpha_3^r( a_{mi} a_{1j}+ a_{mj} a_{1i}) \right.\right.\nonumber\\
&+\left.\left.\beta_3^{r}(a_{mi} a_{2j}+ a_{mj} a_{2i}) \right) + \dfrac{1}{\varepsilon h} ( a_{mi} a_{3j} + a_{mj}a_{3i}) \right] , \quad (i,j=1,2,3) \label{sigmaij_des}
\end{align}
and vectors $\vec{n}^{\varepsilon}_0$, $\vec{n}^{\varepsilon}_1$ are, respectively, the outward unit normal vectors to the lower and the upper surfaces, that is
\begin{align}
&\vec{n}^{\varepsilon}_0=s_0 \vec{a}_3 \label{n0}\\
&\vec{n}^{\varepsilon}_1=-s_0\dfrac{\vec{v}^{\varepsilon}_3}{\|\vec{v}^{\varepsilon}_3\|}
\label{n1}
\end{align}
where
\begin{equation}
s_0=-1 \textrm{ or } s_0=1
\end{equation}
is fixed ($\vec{n}^{\varepsilon}_0 = \vec{a}_3$ or $\vec{n}^{\varepsilon}_0 = - \vec{a}_3$, depending on the orientation of the parametrization $\vec{X}$), and
\begin{align}
\vec{v}^{\varepsilon}_1&=\vec{a}_1+\varepsilon
\left(\dfrac{\partial h}{\partial \xi_1} \vec{a}_3 + h
\dfrac{\partial \vec{a}_3}{\partial \xi_1}\right) \label{v1}\\
\vec{v}^{\varepsilon}_2&=\vec{a}_2+\varepsilon
\left(\dfrac{\partial h}{\partial \xi_2} \vec{a}_3 + h
\dfrac{\partial \vec{a}_3}{\partial \xi_2}\right) \label{v2}\\
\vec{v}^{\varepsilon}_3&=\vec{v}^{\varepsilon}_1 \times \vec{v}^{\varepsilon}_2 =\vec{a}_1 \times \vec{a}_2 +\varepsilon
\left[\dfrac{\partial h}{\partial \xi_2} (\vec{a}_1 \times
\vec{a}_3) + h \left(\vec{a}_1 \times \dfrac{\partial
\vec{a}_3}{\partial \xi_2} + \dfrac{\partial
\vec{a}_3}{\partial \xi_1} \times
\vec{a}_2 \right) + \dfrac{\partial h}{\partial
\xi_1} (\vec{a}_3\times \vec{a}_2) \right]\nonumber\\
&+\varepsilon^2 \left[ \left(\dfrac{\partial h}{\partial \xi_1}
\vec{a}_3 + h \dfrac{\partial \vec{a}_3}{\partial
\xi_1}\right)\times\left(\dfrac{\partial h}{\partial \xi_2}
\vec{a}_3 + h \dfrac{\partial \vec{a}_3}{\partial \xi_2}\right)
\right]\label{v3}\\
\|\vec{v}^{\varepsilon}_3\|&=\|\vec{a}_1 \times
\vec{a}_2\| + \varepsilon h \left[ \vec{a}_3 \cdot \left(\vec{a}_1
\times \dfrac{\partial \vec{a}_3}{\partial \xi_2}\right) +\vec{a}_3
\cdot \left(\dfrac{\partial \vec{a}_3}{\partial \xi_1} \times
\vec{a}_2\right)\right] + O(\varepsilon^2)\label{mod_v3}
\end{align}
Typically, the friction force is of the form
\begin{equation}
\vec{f}^{\varepsilon}_{R\alpha} = \rho_0 C_R^\varepsilon \|
\vec{{u}}^\varepsilon \| \vec{{u}}^\varepsilon \textrm{ on } \xi_3=\alpha, \quad (\alpha=0,1)\label{fR}
\end{equation}
where $C_R^\varepsilon$ is a small constant. Let us assume that it is of order $\varepsilon$ (see \cite{RTV1} or \cite{Weiyan}), that is,
\begin{equation}
C_R^{\varepsilon} =\varepsilon C^1_R\label{CR}
\end{equation}
If we assume that the pressures and the friction forces on the upper and lower surfaces also admit expansions in powers of $\varepsilon$:
\begin{align}
\pi_i(\varepsilon)&=\sum_{r=0}^{\infty} \varepsilon^r \pi_i^r, \quad (i=0,1) \label{ansatz_pi}\\
\vec{f}(\varepsilon)_{R_\alpha}&=\sum_{k=1}^{\infty} \varepsilon^k \vec{f}^k_{R_\alpha}, \quad (\alpha=0,1) \label{ansatz_fR}
\end{align}
condition \eqref{Tn0_xi3_0} can now be written (using
\eqref{sigmaij_des}, \eqref{n0}) as:
\begin{equation}
(\sigma_{ij}a_{3j})a_{3i} = -\bar{p}^0 + \dfrac{2\mu }{\varepsilon h} \bar{u}_3^1=-\pi_0 \label{Tn0_xi3_0s}
\end{equation}
Next, we identify the terms multiplied by the same power of $\varepsilon$; for $\varepsilon^{-2}$, $\varepsilon^{-1}$ and $\varepsilon^0$, taking into account \eqref{u300}, \eqref{u3n0} and \eqref{u311}, we obtain:
\begin{align}
&\bar{p}^{0,-2}=\bar{p}^{0,-1}=0 \label{p0-2-1_sw}\\
&\bar{p}^{0,0}=\pi_0^0-\dfrac{2\mu}{\sqrt{A^0}}\textrm{div}(\sqrt{A^0}\vec{V}^{0})- 2\mu \left( \dfrac{\partial \vec{X}}{\partial t} \cdot \vec{a}_3\right) \dfrac{A^1}{A^0}
\label{p00_sw}
\end{align}
From \eqref{p0-2_ui20}, \eqref{p0-2-1_sw}, \eqref{u3_231}, \eqref{pn0}, \eqref{ec_ns_eps-1}, \eqref{sum_uik0_Wi_Vi0}, \eqref{p0-1_ui10_ui21} and bearing in mind \eqref{u3n0}, \eqref{p1-1,p2-1}, \eqref{p0-2-1_sw} we get:
\begin{align}
\bar{u}_i^{2,0}&=0, \quad (i=1,2) \label{ui20_sw}
\\
\bar{u}_3^{3,1}&=0 \label{u331_sw}\\
\bar{p}^{2,0}&=0 \label{p20_sw}\\
\bar{u}_i^{3,1}&= 0 \quad (i=1,2) \label{ui31_sw}\\
\bar{u}_i^{1,0}&=W_i^0-V_i^0, \quad (i=1,2) \label{ui10_sw}\\
\bar{u}_i^{2,1}&=-
\dfrac{h}{ 2} \dfrac{A^1}{A^0}
(W_i^0-V_i^0). \quad (i=1,2)
\label{ui21_ui10}
\end{align}
Boundary condition \eqref{Tn1_xi3_1} can be written (using
\eqref{n1}) as follows:
\begin{eqnarray}
&&\left(\sigma_{ij}^{\varepsilon}
{v}^{\varepsilon}_{3j}\right)\cdot
{v}^{\varepsilon}_{3i}=-\pi^{\varepsilon}_1\|\vec{v}^{\varepsilon}_3\|^2
\textrm{ on } \xi_3=1 \label{Tn1_xi3_1s}
\end{eqnarray}
and, using
\eqref{sigmaij_des}, \eqref{v3}, \eqref{mod_v3} and \eqref{ansatz_pi} to substitute $\sigma_{ij}$, vector $\vec{v}_3$, its modulus and $\pi^{\varepsilon}_1$ into the above condition, we identify the terms multiplied by $\varepsilon^0$ (the terms multiplied by $\varepsilon^{-2}$ and $\varepsilon^{-1}$ trivially vanish):
\begin{align}
&
- \sum_{n=0}^3 \bar{p}^{n,0} \|\vec{a}_1 \times \vec{a}_2\|^2 + \dfrac{ 2\mu}{ h} \sum_{n=1}^3 n \bar{u}_3^{n,1} \|\vec{a}_1 \times \vec{a}_2\|^2\nonumber\\
&{}+ \dfrac{2 \mu}{ h} \sum_{n=1}^3 \sum_{m=1}^2 n \bar{u}_m^{n,0} \|\vec{a}_1 \times \vec{a}_2\| \vec{a}_{m} \cdot
\left[\dfrac{\partial h}{\partial \xi_2} (\vec{a}_1 \times
\vec{a}_3) + h \left(\vec{a}_1 \times \dfrac{\partial
\vec{a}_3}{\partial \xi_2} + \dfrac{\partial
\vec{a}_3}{\partial \xi_1} \times
\vec{a}_2\right)+ \dfrac{\partial h}{\partial
\xi_1} (\vec{a}_3\times \vec{a}_2)\right]\nonumber\\
&=- \pi_1^0\|\vec{a}_1 \times\vec{a}_2\|^2 \label{Tni1_xi3_1_eps0}
\end{align}
Taking into account \eqref{p20_sw}, \eqref{p30}, \eqref{ui30}, \eqref{ui20_sw}, \eqref{u331_sw}, \eqref{pn0}, \eqref{u3_231}, \eqref{p00_sw}, \eqref{ui10_sw} and that
\begin{align}
&(\vec{a}_1 \times \vec{a}_3)\cdot\vec{a}_{2} = (\vec{a}_3\times
\vec{a}_2) \cdot \vec{a}_1 =- \| \vec{a}_{1}\times \vec{a}_2\| \label{a1xa3_a2}\\
& \left(\vec{a}_1 \times \dfrac{\partial \vec{a}_3}{\partial
\xi_2}\right)\cdot\vec{a}_{2} = \left(\dfrac{\partial \vec{a}_3}{\partial \xi_1} \times
\vec{a}_2\right)\cdot \vec{a}_1 = 0 \label{a12_x_da3dxi21_a21}
\end{align}
equation \eqref{Tni1_xi3_1_eps0} can be written as follows:
\begin{align}
&{}\pi_0^0
+ \dfrac{ \mu}{ h} \left(
\dfrac{ h}{\sqrt{A^0}} \textrm{div}(\sqrt{A^0}(\vec{W}^{0}-\vec{V}^0)) + \nabla h \cdot (\vec{W}^{0}-\vec{V}^0)\right)
= \pi_1^0\label{Tni1_xi3_1_eps0s}
\end{align}
Boundary conditions \eqref{Tai_xi3_0} on $\xi_3=0$ can be rewritten using
\eqref{sigmaij_des}, \eqref{n0} and \eqref{ansatz_fR}; we then identify the terms multiplied by each power of $\varepsilon$:
\begin{itemize}
\item $\varepsilon^{-1}$:
\begin{equation}
\bar{u}_i^{1,0}=0, \quad (i=1,2) \label{ui10_sw0}
\end{equation}
and from \eqref{ui10_sw}:
\begin{equation}
W_i^0=V_i^0, \quad (i=1,2) \label{W_V_sw}
\end{equation}
\item $\varepsilon^0$ (keeping in mind \eqref{u300}, \eqref{ui00_Vi0}):
\begin{equation}
\dfrac{\partial }{\partial
\xi_i} \left( \dfrac{\partial \vec{X}}{\partial t}
\cdot \vec{a}_3\right)+ \sum_{m=1}^2 V_m^{0} \left(
\dfrac{\partial \vec{a}_{m}}{\partial
\xi_i} \cdot \vec{a}_{3}\right) + \dfrac{1}{ h}\sum_{m=1}^2 \bar{u}_m^{1,1} \vec{a}_{m}\cdot \vec{a}_{i} =0, \quad (i=1,2) \label{cc_xi3_0_fR0_eps0}
\end{equation}
\item $\varepsilon$ (considering \eqref{u301}, \eqref{ui01_Vi1}):
\begin{equation}
s_0 \mu \left[
\sum_{m=1}^2 V_m^{1} \left(
\dfrac{\partial \vec{a}_{m}}{\partial
\xi_i} \cdot \vec{a}_{3}\right)+ \dfrac{1}{ h} \sum_{m=1}^2 \bar{u}_m^{1,2} \vec{a}_{m}\cdot \vec{a}_{i} \right]=- \vec{f}^1_{R_0}\cdot \vec{a}_{i}. \quad (i=1,2) \label{cc_xi3_0_fR0_epsk}
\end{equation}
\end{itemize}
If we sum equation \eqref{cc_xi3_0_fR0_eps0} ($i=1$) multiplied by $\alpha^0_1$ and equation \eqref{cc_xi3_0_fR0_eps0} ($i=2$) multiplied by $\beta^0_1$ (and we repeat the process analogously, multiplying by $\alpha^0_2$ and $\beta^0_2$), we obtain:
\begin{equation}
\bar{u}_i^{1,1} =-h\sum_{l=1}^2\left( J^{0,0}_{il}
\dfrac{\partial }{\partial
\xi_l}\left( \dfrac{\partial \vec{X}}{\partial t}
\cdot \vec{a}_3 \right) + V_l^{0}D^0_{il} \right), \quad (i=1,2)
\label{ui11_sw}
\end{equation}
Coefficients $D^0_{il}$ are defined in \eqref{D}.
Performing the same operations on \eqref{cc_xi3_0_fR0_epsk}, we obtain:
\begin{equation}
\bar{u}_i^{1,2} =- h \sum_{m=1}^2 \left( \dfrac{s_0}{\mu}J^{0,0}_{im} (\vec{f}^1_{R_0}\cdot \vec{a}_{m}) + V_m^{1} D^0_{im}\right), \quad (i=1,2)\label{ui1k_sw}
\end{equation}
From \eqref{u3_231}, \eqref{pn0}, \eqref{ui21_ui10}, \eqref{u332}, \eqref{ec_ui10}, taking into account \eqref{ui10_sw0}, \eqref{W_V_sw}, \eqref{p0-2-1_sw}, \eqref{u3n0}, we have:
\begin{align}
\bar{u}_3^{2,1}&=0\label{u321_sw}\\
\bar{p}^{1,0}&= 0 \label{p10_sw}\\
\bar{u}_i^{2,1}&=0, \quad (i=1,2) \label{ui21_sw}\\
\bar{u}_3^{3,2}&=0 \label{u332_sw}
\\
\bar{u}_i^{3,2}& =- \dfrac{ h^2 }{6\nu}\bar{f}_i^{1,0}, \quad (i=1,2) \label{ui32_fi1}
\end{align}
Next, we obtain from \eqref{sum_u3k1_h}, \eqref{u311}, \eqref{u300}, \eqref{ui00_Vi0}, \eqref{u321_sw} and \eqref{u331_sw}:
\begin{align}
\dfrac{\partial
h}{\partial t} &=\bar{u}_3^{1,1}\label{u311_sw}\\
&=- { h \left( \dfrac{\partial \vec{X}}{\partial t}
\cdot \vec{a}_3 \right)\dfrac{A^1}{A^0}} - \dfrac{ h}{\sqrt{A^0}} \textrm{div}(\sqrt{A^0}\vec{V}^0) \label{ec_h_V_sw}
\end{align}
and, now, \eqref{p00_sw} reads (using \eqref{ec_h_V_sw})
\begin{equation}
\bar{p}^{0,0}=\pi_0^0 + \dfrac{2\mu }{ h} \dfrac{\partial h}{\partial t} \label{p00_sw_h}
\end{equation}
Boundary conditions \eqref{Tvi_xi3_1} can be rewritten taking into account \eqref{sigmaij_des}, \eqref{n1}, \eqref{v1}-\eqref{mod_v3} and \eqref{ansatz_fR}. Then we identify the terms multiplied by each power of $\varepsilon$:
\begin{itemize}
\item $\varepsilon^{0}$ (considering \eqref{ui30}, \eqref{ui20_sw}, \eqref{ui10_sw0}, \eqref{u3n0}, \eqref{ui31_sw}, \eqref{ui21_sw}) we re-obtain \eqref{cc_xi3_0_fR0_eps0}
\item $\varepsilon$ (using \eqref{a1xa3_a2}-\eqref{a12_x_da3dxi21_a21}, bearing in mind \eqref{u301}, \eqref{u331_sw}, \eqref{u321_sw}, \eqref{ui31_sw}, \eqref{u3n0}, \eqref{ui30}, \eqref{ui20_sw}, \eqref{ui10_sw0}, \eqref{ui21_sw} and dividing by $\|\vec{a}_1 \times \vec{a}_2\|$):
\begin{align}
&\hspace*{-0.5cm} \mu\left\{\dfrac{\partial \bar{u}_3^{1,1}}{\partial
\xi_i} +
\sum_{m=1}^2 \bar{u}_m^{0,1} \left(
\dfrac{\partial \vec{a}_{m}}{\partial
\xi_i} \cdot \vec{a}_{3}\right)
+ \dfrac{1}{ h}\sum_{n=1}^3 \sum_{m=1}^2 n \bar{u}_m^{n,2} (\vec{a}_{m}\cdot\vec{a}_i) + \dfrac{1}{ h}\dfrac{\partial h}{\partial \xi_i} \bar{u}_3^{1,1} \right. \nonumber\\
&\left.{}- \sum_{l=1}^2 \left(\alpha_l^0 \dfrac{\partial h}{\partial\xi_1}+\beta_l^0 \dfrac{\partial h}{\partial\xi_2}\right) \left[\sum_{m=1}^2
\dfrac{\partial \bar{u}_m^{0,0} }{\partial \xi_l}\left( \vec{a}_m \cdot \vec{a}_{i} \right)+ \sum_{m=1}^3 \bar{u}_m^{0,0} \left(\dfrac{\partial \vec{a}_{m}}{\partial \xi_l} \cdot \vec{a}_i\right) \right]\right.\nonumber\\
&{}
-\sum_{m=1}^2
\dfrac{\partial \bar{u}_m^{0,0}}{\partial
\xi_i} \dfrac{\partial h}{\partial \xi_m} +\dfrac{ h}{\sqrt{A^0}} \dfrac{\partial \bar{u}_3^{0,0}}{\partial
\xi_i} I + \dfrac{1}{\sqrt{A^0}} \sum_{m=1}^3 \bar{u}_m^{0,0}
\left( \dfrac{\partial \vec{a}_{m}}{\partial
\xi_i} \cdot \vec{\eta}(h)\right)\nonumber\\
& \left.{}+ \dfrac{1}{\sqrt{A^0}} \sum_{m=1}^2 \bar{u}_m^{1,1}( \vec{a}_{m}\cdot \vec{a}_{i})I\right\}=s_0 \left( \vec{f}^1_{R_1}\cdot
\vec{a}_i\right) \quad (i=1,2) \label{Tvi_xi3_1_epsb}
\end{align} where coefficients $I$ and $\vec{\eta}(h)$ are defined in \eqref{I} and \eqref{eta} respectively.
\end{itemize}
Now, we replace in \eqref{Tvi_xi3_1_epsb} the terms $\bar{u}_3^{1,1}$, $\bar{u}_3^{0,0}$, $\bar{u}_i^{0,0}$, $\bar{u}_i^{0,1}$, $\bar{u}_i^{1,1}$ and $\bar{u}_i^{1,2}$ ($i=1,2$) by the expressions obtained in \eqref{u311_sw}, \eqref{u300}, \eqref{ui00_Vi0}, \eqref{ui01_Vi1}, \eqref{ui11_sw} and \eqref{ui1k_sw} respectively, and taking into account \eqref{ui32_fi1}, we can write:
\begin{align}
&\hspace*{-0.5cm} \dfrac{\partial^2 h}{\partial t\partial
\xi_i} + \dfrac{1}{ h}\dfrac{\partial h}{\partial \xi_i} \dfrac{\partial h}{\partial t}
+ \dfrac{2}{ h} \sum_{m=1}^2 \bar{u}_m^{2,2} (\vec{a}_{m}\cdot\vec{a}_i) \nonumber\\
&\left.{}- \sum_{l=1}^2 \left(\alpha_l^0 \dfrac{\partial h}{\partial\xi_1}+\beta_l^0 \dfrac{\partial h}{\partial\xi_2}\right) \left[\sum_{m=1}^2
\dfrac{\partial V_m^{0} }{\partial \xi_l} \left( \vec{a}_m \cdot \vec{a}_{i} \right)+ \sum_{m=1}^3 V_m^{0}\left(\dfrac{\partial \vec{a}_{m}}{\partial \xi_l} \cdot \vec{a}_i\right) \right]\right.\nonumber\\
&{} -\sum_{m=1}^2
\dfrac{\partial V_m^{0}}{\partial
\xi_i} \dfrac{\partial h}{\partial \xi_m} - \dfrac{1}{\sqrt{A^0}} \sum_{m=1}^2 V_m^{0}\left[
\left( \dfrac{\partial \vec{a}_{m}}{\partial
\xi_i} \cdot \vec{\eta}(h)\right) - h I \left( \vec{a}_{3}\cdot \dfrac{\partial \vec{a}_m}{\partial\xi_i} \right)\right]\nonumber\\
& {}
+ \left( \dfrac{\partial \vec{X}}{\partial t}
\cdot \vec{a}_3 \right)\left[ H^0_{3i3} + \dfrac{1}{\sqrt{A^0}}
\left( \dfrac{\partial \vec{a}_{3}}{\partial
\xi_i} \cdot \vec{\eta}(h)\right)\right] \nonumber\\
&{}= \dfrac{s_0}{\mu} \left( \vec{f}^1_{R_1}+ \vec{f}^1_{R_0} \right)\cdot
\vec{a}_i +\dfrac{h}{2\nu} \sum_{m=1}^2 \bar{f}_m^{1,0} (\vec{a}_{m}\cdot\vec{a}_i) \quad (i=1,2) \label{Tvi_xi3_1_epse}
\end{align}
Next, we multiply \eqref{Tvi_xi3_1_epse} by $J^{0,0}_{ji}$ for $j=1,2$ and sum over $i=1,2$. In this way we are able to infer the terms
$\bar{u}_i^{2,2}$:
\begin{align}
&\hspace*{-0.5cm} \dfrac{2}{ h} \bar{u}_i^{2,2}
=- \sum_{l=1}^2
\dfrac{\partial V_i^{0} }{\partial \xi_l} J^{0,0}_{3l}
- \sum_{m=1}^2 V_m^{0}\left(\sum_{l=1}^2 J^{0,0}_{3l} H^0_{ilm} + \dfrac{ h I}{\sqrt{A^0}}
D^0_{im}\right) \nonumber\\
&{}-\sum_{j=1}^2 J^{0,0}_{ij}\left(\dfrac{\partial^2 h}{\partial t\partial
\xi_j} + \dfrac{1}{ h}\dfrac{\partial h}{\partial \xi_j} \dfrac{\partial h}{\partial t} -\sum_{m=1}^2
\dfrac{\partial V_m^{0}}{\partial
\xi_j} \dfrac{\partial h}{\partial \xi_m} - \dfrac{1}{\sqrt{A^0}} \sum_{m=1}^2 V_m^{0} \left( \dfrac{\partial \vec{a}_{m}}{\partial
\xi_j} \cdot \vec{\eta}(h)\right)\right) \nonumber\\
& {}
- \left( \dfrac{\partial \vec{X}}{\partial t}
\cdot \vec{a}_3 \right)\sum_{j=1}^2 J^{0,0}_{ij}\left[ H^0_{3j3} + \dfrac{1}{\sqrt{A^0}}
\left( \dfrac{\partial \vec{a}_{3}}{\partial
\xi_j} \cdot \vec{\eta}(h)\right)\right] \nonumber\\
&{}+\dfrac{s_0}{\mu} \sum_{j=1}^2 J^{0,0}_{ij}\left( \vec{f}^1_{R_1}+ \vec{f}^1_{R_0}\right) \cdot
\vec{a}_j +\dfrac{ h}{2 \nu} \bar{f}_i^{1,0}, \quad (i=1,2) \label{ui22_sw}
\end{align}
We can rewrite equations \eqref{ec_ui00} considering \eqref{ui00_Vi0}, \eqref{u300}, \eqref{p00_sw_h}, \eqref{ui11_sw} and \eqref{ui22_sw} to substitute $\bar{u}_i^{0,0}$ ($i=1,2,3$), $\bar{p}^{0,0}$, $\bar{u}_i^{1,1}$ ($i=1,2$) and $\bar{u}_i^{2,2}$ ($i=1,2$) respectively by the expressions obtained previously:
\begin{eqnarray}
&&\hspace*{-0.5cm} \dfrac{ \partial V_i^{0}}{\partial t}+\sum_{l=1}^2 \dfrac{ \partial V_i^{0}}{\partial \xi_l} (V_l^{0} -C^0_l)+ \sum_{k=1}^2
V_k^{0} \left( R^0_{ik} + \sum_{l=1}^2 V_l^{0} H^0_{ilk} \right)
\nonumber\\
&&{}
=-\dfrac{1}{\rho_0} \sum_{l=1}^2 J^{0,0}_{il}\dfrac{
\partial \pi_0^0 }{\partial \xi_l} + \nu \left[ \sum_{m=1}^2 \sum_{l=1}^2 \dfrac{
\partial^2 V_i^{0}}{\partial \xi_l \partial \xi_m} J^{0,0}_{lm}+ \sum_{k=1}^2 \sum_{l=1}^2 \dfrac{
\partial V_k^{0}}{ \partial \xi_l} \bar{L}_{ikl}^{0,0} + \sum_{k=1}^2 V^{0}_k \bar{S}^{0,0}_{ik}+\kappa^0_i\right] \nonumber\\
&&{} + \bar{F}^0_i -
\left( \dfrac{\partial \vec{X}}{\partial t}
\cdot \vec{a}_3 \right) Q^0_{i3} \quad (i=1,2) \label{ec_ui00_sw}\end{eqnarray}
where coefficients $\bar{L}^{0,0}_{ikl}$, $R^0_{ik}$, $\bar{S}^{0,0}_{ik}$, $\kappa^0_i$ and $\bar{F}^0_i$ are defined in \eqref{Lbar}, \eqref{R}, \eqref{Sbar}, \eqref{kappa} and \eqref{Fbar2} respectively.
\begin{rmk}
Equation \eqref{ec_ui00_sw} is exactly the same as equation \eqref{ec_Vi0-v2} (see \eqref{F}-\eqref{Fbar3}).
\end{rmk}
Taking into account \eqref{cambio_base_u}, \eqref{ui_pol_xi3}, \eqref{p_pol_xi3}, \eqref{pn-2_0}, \eqref{ui30}, \eqref{u300}, \eqref{u3n0}, \eqref{ui00_Vi0}, \eqref{p0-2-1_sw}, \eqref{ui20_sw}, \eqref{ui10_sw0}, \eqref{p00_sw_h}, \eqref{p10_sw}, \eqref{p20_sw} and \eqref{p30}, we can prove the following result:
\begin{theorem}
If we assume that there exist asymptotic expansions \eqref{ansatz_bar_u}-\eqref{ansatz_W} and \eqref{ansatz_pi}-\eqref{ansatz_fR}, that hypotheses \eqref{f_pol_xi3} and \eqref{CR} hold, and that the boundary conditions \eqref{Tn0_xi3_0}-\eqref{Tvi_xi3_1} hold, then the solution of model \eqref{ui_pol_xi3m}-\eqref{pn} verifies
\begin{align}
u_k(\varepsilon) &= V_k^0 + O(\varepsilon), \quad (k=1,2) \\
u_3(\varepsilon) &= \dfrac{\partial \vec{X}}{\partial t}
\cdot \vec{a}_3 + O(\varepsilon), \\
p(\varepsilon) &= \pi_0^0 + \dfrac{2\mu}{h}\dfrac{\partial h}{\partial t} + O(\varepsilon),
\end{align}
where $V_k^0$ ($k=1,2$) are the solutions of the equations \eqref{ec_ui00_sw}. Thus,
the velocity, $\vec{u}^{\varepsilon}$, and the pressure, $p^{\varepsilon}$, defined in the original domain, satisfy
\begin{align}
u_i^{\varepsilon} &=\sum_{k=1}^2 V_k^0 \dfrac{\partial x_i}{\partial \xi_k} + \left( \dfrac{\partial \vec{X}}{\partial t}
\cdot \vec{a}_3\right) N_i + O(\varepsilon), \quad (i=1,2,3) \\
p^{\varepsilon}&= \pi_0^0 + \dfrac{2\mu }{ h^{\varepsilon}} \dfrac{\partial h^{\varepsilon}}{\partial t} + O(\varepsilon).
\end{align}
\end{theorem}
\section{Conclusions}\label{sec-conclusions}
In this article, we propose a two-dimensional flow model of a viscous fluid between two very close moving surfaces and we show that its asymptotic behavior, when the distance between the two surfaces tends to zero, is the same as that previously obtained in \cite{RodTabJMAA2021} for the Navier-Stokes equations. In fact, we have justified that, under the assumptions about the boundary conditions made in subsection \ref{subseccion-4-1}, the solution of the new model approaches the solution of model \eqref{ec_lub} as $\varepsilon$ tends to zero, just as in the previous work \cite{RodTabJMAA2021}, where we showed that the solution of the Navier-Stokes equations
approaches the solution of \eqref{Reynolds_gen}. We have also seen that, under the assumptions about the boundary conditions shown in subsection \ref{subseccion_sw}, the solution of the new model tends to the solution of \eqref{ec_ui00_sw}, as happened in our prior article \cite{RodTabJMAA2021} with the solution of the Navier-Stokes equations (see \eqref{ec_Vi0-v2}).
As is well known, the numerical solution of the three-dimensional Navier-Stokes equations requires large computational resources, and solving these equations in such a thin domain presents even more numerical difficulties, while solving the new two-dimensional model presented here is much easier.
On the other hand, as we have already mentioned, the new model has the same asymptotic behavior as the Navier-Stokes equations, so, in a certain sense, it encompasses the two limit models presented in subsections \ref{subseccion-4-1} and \ref{subseccion_sw}.
For all the above reasons, the new model proposed in this article can be considered a good option for calculating viscous fluid flow between two nearby moving surfaces, without the need to decide a priori whether the flow is typical of a lubrication problem or of a thin fluid layer type, and without the enormous computational effort that would be required to solve the Navier-Stokes equations in such a thin domain.
We are currently working on performing numerical simulations that allow us to compare the accuracy and computation time required for each of the models that we have mentioned in this article. We hope to be able to present the results of these simulations very soon.
\section*{Acknowledgements}
This work has been partially supported by the European Union's Horizon 2020 Research and Innovation Programme,
under the Marie Sklodowska-Curie Grant Agreement No 823731 CONMECH.
| {
"timestamp": "2022-10-27T02:11:29",
"yymm": "2210",
"arxiv_id": "2210.14617",
"language": "en",
"url": "https://arxiv.org/abs/2210.14617",
"abstract": "We propose a two-dimensional flow model of a viscous fluid between two close moving surfaces. We show, using a formal asymptotic expansion of the solution, that its asymptotic behavior, when the distance between the two surfaces tends to zero, is the same as that of the the Navier-Stokes equations. The leading term of the formal asymptotic expansions of the solutions to the new model and Navier-Stokes equations are solution of the same limit problem, and the type of the limit problem depends on the boundary conditions. If slip velocity boundary conditions are imposed on the upper and lower bound surfaces, the limit is a solution of a lubrication model, but if the tractions and friction forces are known on both bound surfaces, the limit is a solution of a thin fluid layer model. The model proposed has been obtained to be a valuable tool for computing viscous fluid flow between two nearby moving surfaces, without the need to decide a priori whether the flow is typical of a lubrication or a thin fluid layer problem, and without the enormous computational effort that would be required to solve the Navier-Stokes equations in such a thin domain.",
"subjects": "Analysis of PDEs (math.AP)",
"title": "A new thin layer model for viscous flow between two nearby non-static surfaces",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.971992476960077,
"lm_q2_score": 0.7279754489059774,
"lm_q1q2_score": 0.707586659748245
} |
https://arxiv.org/abs/2105.12046 | Leaf multiplicity in a Bienaymé-Galton-Watson tree | This note defines a notion of multiplicity for nodes in a rooted tree and presents an asymptotic calculation of the maximum multiplicity over all leaves in a Bienaymé-Galton-Watson tree with critical offspring distribution $\xi$, conditioned on the tree being of size $n$. In particular, we show that if $S_n$ is the maximum multiplicity in a conditional Bienaymé-Galton-Watson tree, then $S_n = \Omega(\log n)$ asymptotically in probability and under the further assumption that ${\bf E}\{2^\xi\} < \infty$, we have $S_n = O(\log n)$ asymptotically in probability as well. Explicit formulas are given for the constants in both bounds. We conclude by discussing links with an alternate definition of multiplicity that arises in the root-estimation problem. |
\section{Introduction}
\label{sec:intro}
Equivalence between two distinct mathematical objects is a far-reaching concept in mathematics. When two structures are similar, one may define a relation under which they are regarded as one and the same. The term ``multiplicity'' is often used to indicate the extent to which an object is, in some sense, not structurally unique (or how often it is repeated in a suitably-defined multiset).
Towards a concept of the multiplicity of a node in a tree, consider the small example depicted in
Fig.~\ref{fig:multdef}.
\begin{figure}
\begin{center}
\subfigure{\includegraphics{fig_multdef.eps}}
\caption{A rooted tree with six nodes. The pair $x$ and $y$ are similar, but $x$ and $z$ are not.}
\label{fig:multdef}
\end{center}
\end{figure}
\subsection{Definitions and notation}
Consider a tree $T$ rooted at
a node $u$. For a node $v$ in the tree, we let $T_v$ denote the subtree rooted at $v$. Let $v$ and $w$ be nodes in the tree and let $v=v_1, v_2,\ldots, v_n=u$ and $w=w_1, w_2, \ldots, w_m=u$ be the paths from $v$ and $w$, respectively, to the root. We say that $v$ and $w$ are {\it identical} and write $v\equiv w$ if the paths have the same length and $T_{v_j}$ and $T_{w_j}$ are isomorphic as rooted ordered trees for $1\leq j\leq n$.
For example, in Fig.~\ref{fig:multdef}, the nodes $x$ and $y$ are identical in this sense, but different from node $z$.
It is clear that $\equiv$ defines an equivalence relation on the set of nodes in the tree, so we may now define the {\it multiplicity} $\sigma(v)$ of a node $v$ to be the size of the equivalence class $[v]$ under the relation. The
{\it leaf multiplicity} (or simply {\it multiplicity}, when no confusion can arise)
$S(T)$ of a rooted tree $T$ is the maximum value of $\sigma(v)$, taken over all nodes $v$ of $T$. The name ``leaf multiplicity'' is motivated by the fact that the function $\sigma$ increases monotonically away from the root, so that $S(T)$ remains the same when the maximum is only computed over the set of {\it leaves} of $T$.
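As a small illustration of these definitions, the following sketch computes $\sigma(v)$ for every node and the leaf multiplicity $S(T)$ of a rooted ordered tree. The encoding of a tree as a nested list of its subtrees, and the function names, are choices made only for this example:
\begin{verbatim}
# Compute sigma(v) for all nodes and S(T) for a rooted ordered tree.
# A tree is encoded as a nested list: a node is the list of its subtrees,
# so a leaf is [].
from collections import Counter

def canonical(tree):
    # canonical string of an ordered rooted tree (children order matters)
    return "(" + "".join(canonical(child) for child in tree) + ")"

def signatures(tree, path=()):
    # For every node v, yield the tuple of canonical codes of the subtrees
    # rooted along the path from v up to the root; two nodes are identical
    # in the sense of the text exactly when their tuples coincide.
    sig = (canonical(tree),) + path
    yield sig
    for child in tree:
        yield from signatures(child, sig)

def leaf_multiplicity(tree):
    counts = Counter(signatures(tree))
    return max(counts.values())

# small example: a root with two identical subtrees and one different one;
# the four leaves below the two identical subtrees form one class
t = [[[], []], [[], []], [[]]]
print(leaf_multiplicity(t))   # prints 4
\end{verbatim}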
Note that $\equiv$ is not the only structural equivalence relation one can define on the set of nodes in a
tree, and thus $\sigma$ is only one of many possible notions of leaf multiplicity.
Towards the end of this paper, we will explore an alternate definition $\mu$ of multiplicity which extends well
to free trees as well as rooted trees, and discuss the relationship between the two functions $\sigma$ and $\mu$.
\subsection{The Bienaym\'e--Galton--Watson model}
For a random variable $\xi$ taking values in the nonnegative integers, a {\it Bienaym\'e--Galton--Watson tree}~\cite{athreya1972branching} is a rooted ordered tree in which every node has $i$ children with probability $p_i = \mathop{\hbox{\bf P}}\nolimits\{\xi = i\}$. We say that $\xi$ is the {\it offspring distribution} of the tree.
Trees arising from this process are often called {\it Galton--Watson trees}, but we include the name of I.-J.~Bienaym\'e~\cite{bienayme1845}, since his work predates (and is more mathematically precise than) the analysis of F. Galton and H. W. Watson~\cite{galtonwatson1874}; see~\cite{jagers2009} for an extended account of the history of branching processes. We shall deal with critical Bienaym\'e--Galton--Watson trees; that is, trees whose offspring distributions satisfy $\mathop{\hbox{\bf E}}\nolimits\{\xi\} = 1$ and $\mathop{\hbox{\bf V}}\nolimits\{\xi\} \in (0,\infty)$. This restriction on the variance ensures that $p_1\neq 1$, so that the tree is finite almost surely. The Bienaym\'e--Galton--Watson trees that we shall study are {\it conditioned Bienaym\'e--Galton--Watson trees} $T_n$. Such trees are conditioned on $|T| = n$, where $|T|$ is the number of nodes in the tree.
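For numerical experiments, a conditioned Bienaym\'e--Galton--Watson tree can be sampled through its preorder sequence of numbers of children: draw $(\xi_1,\ldots,\xi_n)$ i.i.d.\ conditioned on $\sum_i\xi_i=n-1$ (by rejection, say) and rotate the sequence, via the cycle lemma, into the unique rotation that is a valid preorder sequence. The sketch below does this for the offspring distribution $p_0=p_2=1/2$; this distribution and all names are choices made for the example only:
\begin{verbatim}
# Sample the preorder child-count sequence of a conditioned BGW tree of
# size n with offspring distribution p_0 = p_2 = 1/2 (critical, variance 1).
import random

def conditioned_degree_sequence(n, rng=random):
    if n % 2 == 0:
        raise ValueError("sum xi = n-1 is impossible for even n with this law")
    while True:                              # rejection on the total sum
        xi = [rng.choice((0, 2)) for _ in range(n)]
        if sum(xi) == n - 1:
            break
    # cycle lemma (Dvoretzky-Motzkin / Dwass): the unique valid rotation
    # starts right after the first minimum of the walk S_t = sum (xi_i - 1)
    s, smin, argmin = 0, 0, -1
    for t, d in enumerate(xi):
        s += d - 1
        if s < smin:
            smin, argmin = s, t
    start = argmin + 1
    return xi[start:] + xi[:start]

seq = conditioned_degree_sequence(51)
print(len(seq), sum(seq))                    # 51 and 50
\end{verbatim}
Building the tree from the returned sequence (with a stack) and applying the multiplicity function above then produces empirical samples of $S_n$.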
\subsection{R\'enyi entropy}
It will be convenient to simplify our notation with some information-theoretic
definitions. Letting $p_i = \mathop{\hbox{\bf P}}\nolimits\{\xi = i\}$, for $\alpha>1$
we define the {\it R\'enyi entropy of order $\alpha$}~\cite{renyi1961measures} (see also~\cite{jacquet1999entropy}) to be the value
\[{\rm H}_\alpha(\xi) = {1\over \alpha - 1}\log_2 {1\over \sum_{i\geq 0} {p_i}^\alpha}.\]
As $\alpha\to 1$, this approaches the {\it binary (Shannon) entropy}~\cite{shannon1948}
\[{\rm H}(\xi) = \sum_{i\geq 0} p_i \log_2 {1\over p_i}.\]
Since $\xi$ will be fixed throughout the paper, for brevity we will let ${\rm H}_\alpha = {\rm H}_\alpha(\xi)$ and
${\rm H} = {\rm H}(\xi)$.
Fix an offspring distribution $\xi$ with mean $1$ and nonzero finite variance;
let $T_n$ be a conditioned Bienaym\'e--Galton--Watson tree of size $n$ with this offspring distribution. The leaf
multiplicity $S(T_n)$ of this tree is a random variable, and will be denoted $S_n$.
The main result of this paper gives bounds on $S_n$ that are obeyed asymptotically in probability.
\begin{thm}
\label{mainthm}
Let $\xi$ be an offspring distribution with $\mathop{\hbox{\bf E}}\nolimits\{\xi\}=1$
and $\mathop{\hbox{\bf V}}\nolimits\{\xi\}\in (0,\infty)$. If $S_n$ is the multiplicity of a conditioned Bienaym\'e--Galton--Watson tree of size $n$ with offspring distribution $\xi$, then letting
\[\gamma = \max_{k \geq 2}{p_0}^k {p_k}^{k/(k-1)},\]
we have for all $\epsilon > 0$,
\[\mathop{\hbox{\bf P}}\nolimits \bigg\{ S_n \geq (1-\epsilon) {\log_2 n \over \log_2(1/\gamma)} \bigg\} \to 1\]
as $n\to\infty$, and under the further assumption
that $\mathop{\hbox{\bf E}}\nolimits\{2^\xi\}<\infty$, we have the upper bound
\[\mathop{\hbox{\bf P}}\nolimits \bigg\{ S_n \leq (1+\epsilon) {2\log_2 n\over {\rm H}_2} \bigg\} \to 1\]
as $n\to\infty$, where ${\rm H}_2$ is the R\'enyi entropy of order $2$
of the random variable $\xi$.
\end{thm}
This theorem will be proved as two separate lemmas in the next section.
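For a concrete illustration of these constants, consider the offspring distribution $p_0 = p_2 = 1/2$, for which $\mathop{\hbox{\bf E}}\nolimits\{\xi\}=1$, $\mathop{\hbox{\bf V}}\nolimits\{\xi\}=1$ and $\mathop{\hbox{\bf E}}\nolimits\{2^\xi\}=5/2<\infty$. Only $k=2$ contributes to the maximum defining $\gamma$, so
\[\gamma = {p_0}^2\,{p_2}^{2} = {1\over 16}, \qquad {\rm H}_2 = \log_2 {1\over {p_0}^2+{p_2}^2} = 1,\]
and the theorem states that, asymptotically in probability,
\[(1-\epsilon)\,{\log_2 n\over 4} \;\leq\; S_n \;\leq\; (1+\epsilon)\,2\log_2 n.\]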
\section{Asymptotics of the leaf multiplicity}
\label{sec:asymptotics}
In this section we derive asymptotic
upper and lower bounds on $S_n$. Before we begin, we observe that if
$p_{\rm max} = \max_{i\geq 0} p_i$ and $1<\alpha < \beta < \infty$, we have the inequalities
\[2^{-{\rm H}} \leq \Big(\sum_{i\geq 0} {p_i}^\alpha\Big)^{1/(\alpha-1)} \leq \Big(\sum_{i\geq 0} {p_i}^\beta\Big)^{1/(\beta-1)} \leq p_{\rm max} \leq
\Big(\sum_{i\geq 0} {p_i}^\beta\Big)^{1/\beta} \leq \Big(\sum_{i\geq 0} {p_i}^\alpha\Big)^{1/\alpha} \leq 1,\]
and defining ${\rm H}_\infty = \log_2(1/p_{\rm max})$, we have the equivalent chain of inequalities
\[{\rm H} \geq {\rm H}_\alpha \geq {\rm H}_\beta \geq {\rm H}_\infty \geq {\beta-1 \over \beta} {\rm H}_\beta \geq {\alpha-1\over\alpha}{\rm H}_\alpha\geq 0.\]
In particular, because we have assumed that $\mathop{\hbox{\bf V}}\nolimits\{\xi\}\neq 0$, we have the strict inequality
\[\Big(\sum_{i\geq 0} {p_i}^k\Big)^{1/ k} < \Big(\sum_{i\geq 0} {p_i}^2\Big)^{1/2}\label{strictineq}\]
for all $k> 2$.
We will prove the upper bound and lower bound for $S_n$ as two separate lemmas.
\begin{lem}
\label{upperbound}
Let $\xi$ be an offspring distribution with mean 1 and nonzero finite variance $\sigma^2$. Suppose further that $\mathop{\hbox{\bf E}}\nolimits\{2^\xi\}$ is finite.
If $S_n$ is the multiplicity of a conditioned Bienaym\'e--Galton--Watson tree of size $n$ with offspring
distribution $\xi$, then
\[\mathop{\hbox{\bf P}}\nolimits\{S_n>(1+\epsilon)2\log_2 n/{\rm H}_2\} \to 0 \]
for all $\epsilon>0$, where ${\rm H}_2$ is the R\'enyi entropy of order $2$ of the random variable $\xi$.
\end{lem}
\begin{proof}
For $1\leq i\leq n$, let $\xi_i$ denote the degree of the $i$th node in preorder in the tree $T_n$.
For all $1\leq t< n$, the partial sum $\sum_{i=1}^t \xi_i > t-1$ and $\sum_{i=1}^n \xi_i = n-1$.
We will concentrate on the least common ancestor of the nodes in the largest equivalence
class of $T_n$. This node, call it $w$, has the property that the nodes in the equivalence class
belong to $k\geq 2$ different subtrees rooted at the children of $w$. The node $w$ has (random) degree $D$,
which we will deal with by summing over all possible degrees $d$.
Let ${\cal A}_{wk}$ denote the collection of all
subsets of size $k$ of the children of $w$ (naturally, this collection is empty if $w$ has fewer than $k$
children). For $x>0$, a node $w$,
integers $2\leq k\leq d$,
and a set $A\in {\cal A}_{wk}$, we let $E(x,w,k,A)$ be the event that all the nodes in $A$
are identical and their subtree sizes are at least $x/k$. Now for integers $s\geq x/k$,
we let $E'(x,k,d,s,A)$ be the event that a randomly chosen node $w$ of the tree $T_n$ has degree $d$ and
the leftmost $k$ children of $w$ are identical, with subtrees of size $s$. We have, by the union bound,
\[\mathop{\hbox{\bf P}}\nolimits\{S_n\geq x\} \leq \mathop{\hbox{\bf P}}\nolimits\!\bigg\{\bigcup_{w\in T_n} \bigcup_{k\geq 2}
\bigcup_{A\in {\cal A}_{wk}} E(x,w,k,A)\bigg\}\leq n\sum_{k\geq 2} \sum_{d\geq k} {d\choose k}\sum_{s\geq x/k}
\mathop{\hbox{\bf P}}\nolimits\!\big\{E'(x,k,d,s,A)\big\}.\label{upperSn}\]
Supposing that $w$ is the $j$th node in preorder, $E'(x,k,d,s,A)$ is the event that $\xi_j=d$,
$(\xi_{j+1},\ldots,\xi_{j+s})$ forms a tree, and $(\xi_{j+rs+1},\ldots,\xi_{j+rs+s}) = (\xi_{j+1},\ldots,
\xi_{j+s})$ for all $1\leq r < k$. Let us say that an integer $j$ is ``good'' if these conditions hold when addition on the indices is done modulo $n$.
Clearly, there are more good $j$ than $j$ satisfying the above conditions. Let $G$ be the event that an index
$j$ chosen uniformly at random from $\{1,\ldots,n\}$ is ``good'';
let $B$ be the event that $(\xi_2,\ldots,\xi_{s+1})$ forms a tree and
$(\xi_{rs+2},\ldots,\xi_{(r+1)s+1}) = (\xi_2,\ldots, \xi_{s+1})$ for all $1\leq r <k$. By a rotational argument due to Dwass~\cite{dwass1969},
\begin{align}
\mathop{\hbox{\bf P}}\nolimits\big\{G\,|\, (\xi_1,\ldots,\xi_n)\ \hbox{forms a tree}\big\} &= \mathop{\hbox{\bf P}}\nolimits\!\Big\{G \;\Big| \sum_{i=1}^n \xi_i = n-1\Big\} \cr
&= {\mathop{\hbox{\bf P}}\nolimits\!\big\{\xi_1 = d,\, B,\, \sum_{i=1}^n \xi_i = n-1\big\}\over \mathop{\hbox{\bf P}}\nolimits\!\big\{\sum_{i=1}^n \xi_i=n-1\big\}}\cr
&= {\mathop{\hbox{\bf P}}\nolimits\!\big\{\xi_1 = d,\, B,\, \sum_{i=\lfloor 1 + ks\rfloor + 1}^n\xi_i = (n-1)-d-k(s-1)\big\}\over
\mathop{\hbox{\bf P}}\nolimits\!\big\{\sum_{i=1}^n \xi_i=n-1\big\}},\cr
\end{align}
so letting
\[R={\mathop{\hbox{\bf P}}\nolimits\!\big\{\sum_{i=1}^{n-(1+ks)}\xi_i=\big(n-(1+ks)-1\big)+(k+1-d)\big\}\over\mathop{\hbox{\bf P}}\nolimits\{\sum_{i=1}^n\xi_i = n-1\}},\]
we have
\[\mathop{\hbox{\bf P}}\nolimits\big\{G\,|\, (\xi_1,\ldots,\xi_n)\ \hbox{forms a tree}\big\} = p_d\mathop{\hbox{\bf P}}\nolimits\{B\}R.\]
Letting $\lambda = \gcd\{i : i \geq 1, p_i>0\}$, a lemma of Kolchin~\cite{kolchin1986} states that uniformly in $y$,
\[\mathop{\hbox{\bf P}}\nolimits\!\bigg\{\sum_{i=1}^n\xi_i = n-y\bigg\} =\begin{cases}
\lambda/\big(\sigma\sqrt{2\pi n}\big)\exp\big(\!\!-y^2/(2n\sigma^2)\big)+{o(1)/\sqrt n},
& \hbox{if $(n-y)\bmod\lambda=0$}; \\
0,& \hbox{if $(n-y)\bmod \lambda \neq 0$.}\end{cases}
\]
As the $o(1)$ term does not depend on $y$, we find that
\[R = \sqrt{{n-1\over n-(1+ks)-1+(k+1-d)}}\exp\bigg(\!\!-{(1+ks+d-k)^2\over 2\big(n-(1+ks+d-k)\big)\sigma^2}\bigg)+o(1),\]
where the $o(1)$ term depends only on $n$. Assuming that $ks+d\leq n/2$, we have $R\leq \sqrt2+o(1)$. Hence
\[\mathop{\hbox{\bf P}}\nolimits\big\{G\,|\, (\xi_1,\ldots,\xi_n)\ \hbox{forms a tree}\big\} \leq \big(\sqrt 2 + o(1)\big)p_d\mathop{\hbox{\bf P}}\nolimits\{B\}\]
whenever $ks+d\leq n/2$. We now compute a bound on $\mathop{\hbox{\bf P}}\nolimits\{B\}$. We have
\[\mathop{\hbox{\bf P}}\nolimits\{B\,|\, \xi_2,\ldots,\xi_{1+s}\} = (p_{\xi_2}\cdots p_{\xi_{1+s}})^{k-1}\]
and therefore, by independence of the $\xi_i$,
\begin{align}
\mathop{\hbox{\bf P}}\nolimits\{B\}&=\mathop{\hbox{\bf E}}\nolimits\big\{(p_{\xi_2}\cdots p_{\xi_{1+s}})^{k-1}\mathop{\hbox{\bf 1}}\nolimits_{[(\xi_2,\ldots,\xi_{1+s})\ \text{\small{forms a tree}}]}
\!\big\}\cr
&\leq \prod_{i=2}^{1+s} \mathop{\hbox{\bf E}}\nolimits\big\{(p_{\xi_i})^{k-1}\big\} = \Big(\sum_{i\geq 0}{p_i}^k\Big)^s.\cr
\end{align}
We can now combine all of these bounds. Substituting everything into \eqref{upperSn}, we have
\begin{align}
\mathop{\hbox{\bf P}}\nolimits\{S_n\geq x\} &\leq n\sum_{k\geq 2}\sum_{d\geq k}{d\choose k}\sum_{s\geq x/k} \big(\sqrt2+o(1)\big)p_d
\Big(\sum_{i\geq 0}{p_i}^k\Big)^s\cr
&\leq\big(\sqrt2+o(1)\big)n\sum_{k\geq 2}\sum_{d\geq k} p_d{d\choose k} \Big(\sum_{i\geq 0}{p_i}^k\Big)^{x/k}
{1\over {1-\sum_{i\geq 0} {p_i}^k}}. \cr
\end{align}
Since the inequality~\eqref{strictineq} was strict, there exists $0<\theta<1$ such that
\begin{align}
\mathop{\hbox{\bf P}}\nolimits\{S_n\geq x\} &\leq {\sqrt2+o(1)\over 1-\sum_{i\geq 0}{p_i}^2} \bigg( n\sum_{d\geq 2} p_d{d\choose 2}
\Big(\sum_{i\geq 0}{p_i}^2\Big)^{x/2} + n\sum_{k\geq 3}\sum_{d\geq k} p_d{d\choose k}
\Big(\sum_{i\geq 0}{p_i}^2\Big)^{x/2}\theta^x\bigg)\cr
&\leq {\sqrt2+o(1)\over 1-\sum_{i\geq 0}{p_i}^2} n (\sigma^2 + 1)\Big(\sum_{i\geq 0}{p_i}^2\Big)^{x/2}
+ n\Big(\sum_{i\geq 0}{p_i}^2\Big)^{x/2}\theta^x\sum_{k\geq 3}\sum_{d\geq k}p_d{d\choose k}.\cr
\end{align}
Since
\[\sum_{d\geq 2}p_d\sum_{k=3}^d{d\choose k} \leq \sum_{d\geq 3}p_d2^d\leq \mathop{\hbox{\bf E}}\nolimits\{2^\xi\},\]
we have
\[\mathop{\hbox{\bf P}}\nolimits\{S_n\geq x\}\leq n{\sqrt2(\sigma^2+1)\over 1-\sum_{i\geq 0}{p_i}^2}\Big(\sum_{i\geq 0}{p_i}^2\Big)^{x/2} \big(1+o(1)\big),
\]
provided that $\mathop{\hbox{\bf E}}\nolimits\{2^\xi\}<\infty$. Setting
\[x = (1+\epsilon){2\log_2n\over \log_2 \big(1/\sum_{i\geq 0} {p_i}^2\big)}= (1+\epsilon){2\log_2 n\over {\rm H}_2},\]
we find that $\mathop{\hbox{\bf P}}\nolimits\{S_n\geq x\}\to 0$ as $n\to\infty$.
\end{proof}
\goodbreak
The next lemma presents a lower bound for $S_n$.
\begin{lem}
\label{lowerbound}
Let $\xi$ be an offspring distribution with mean 1 and nonzero finite variance $\sigma^2$.
If $S_n$ is the multiplicity of a conditioned Bienaym\'e--Galton--Watson tree of size $n$ with offspring
distribution $\xi$, then
\[\mathop{\hbox{\bf P}}\nolimits\!\bigg\{S_n<(1-\epsilon){\log_2 n \over\log_2(1/\gamma)}\bigg\} \to 0 \]
for all $0<\epsilon<1$, where
$\gamma = \max_{k\geq 2} {p_0}^k {p_k}^{k/(k-1)}$.
\end{lem}
\begin{proof}
Consider a complete $k$-ary tree of height $L$. This tree has $k^L$ leaves and $1+k+\cdots+k^{L-1} = (k^L-1)/(k-1)$ internal nodes, all of degree $k$. The probability that an unconditioned Bienaym\'e--Galton--Watson tree takes this shape is
\[{p_0}^{k^L}{p_k}^{(k^L-1)/(k-1)};\]
call this probability $q$.
For any real number $x$, the statement $S_n<x$ implies that no node in the tree can have the given $k$-ary tree as a subtree for any $k^L\geq x$, as the multiplicity of the $k$-ary tree is $k^L$. Fix $k\geq 2$ for now, let $L$ be the first integer for which $k^L\geq x$, and let $y = k^L$. Observe that $y\leq kx$. Denote the size of the $k$-ary tree by $z = y+(y-1)/(k-1)$.
\begin{figure}
\begin{center}
\subfigure{\includegraphics{fig_embedded.eps}}
\caption{The construction in the proof of Lemma~\ref{lowerbound}. In this example, both $k$ and $L$ are equal to $3$.}
\label{fig:embedded}
\end{center}
\end{figure}
We now consider the indices $1, 1+z, 1+2z, 1+3z,\ldots$ in
$\{1,\ldots,n-z\}$. Let $Y_i$ be the event (and ${Y_i}^c$ its complement) that $(\xi_i,\ldots, \xi_{i+z-1})$ defines precisely the $k$-ary tree, where $i$ is in the set of indices defined above, which has size $\lfloor(n-z)/z\rfloor$. Note that
\[\mathop{\hbox{\bf P}}\nolimits\{S_n<x\}\leq \mathop{\hbox{\bf P}}\nolimits\{S_n< y\} \leq \mathop{\hbox{\bf P}}\nolimits\bigg\{\bigcap_{i=1}^{n-z} {Y_i}^c\,\Big|\, (\xi_1,\ldots,\xi_n)\ \hbox{defines a tree}\bigg\}.\]
By Dwass' cycle lemma~\cite{dwass1969}, the probability that $(\xi_1,\ldots,\xi_n)$ defines a tree is $\Theta(n^{-3/2})$, so
\begin{align}
\mathop{\hbox{\bf P}}\nolimits\{S_n < x\} &\leq \Theta(n^{3/2}) \mathop{\hbox{\bf P}}\nolimits\bigg\{\bigcap_{i=1}^{n-z} {Y_i}^c\bigg\} \cr
&= \Theta(n^{3/2}) \mathop{\hbox{\bf P}}\nolimits\{{Y_i}^c \}^{\lfloor(n-z)/z\rfloor} \cr
&= \Theta(n^{3/2}) (1-q)^{\lfloor(n-z)/z\rfloor} \cr
&\leq \Theta(n^{3/2}) \exp\!\bigg(\!\!-\! \bigg\lfloor {n-z\over z}\bigg\rfloor {p_0}^y {p_k}^{(y-1)/(k-1)}\bigg) \cr
&\leq \Theta(n^{3/2}) \exp\!\bigg(\!\!-\! \bigg\lfloor {n-z\over z}\bigg\rfloor {p_0}^{kx} {p_k}^{(kx-1)/(k-1)}\bigg) \cr
&\leq \Theta(n^{3/2}) \exp\!\bigg(\!\!-\! \Omega(1)\bigg\lfloor {n-z\over z}\bigg\rfloor \big({p_0}^{k} {p_k}^{k/(k-1)}\big)^x\bigg) \cr
&\leq \Theta(n^{3/2}) \exp\!\bigg(\!\!-\! \Omega(1)\bigg\lfloor {n-z\over z}\bigg\rfloor \gamma^x\bigg).\cr
\end{align}
Substituting $(1-\epsilon)\log_2 n/\log_2(1/\gamma)$ for $x$, and noting that $z = \Theta(\log n)$, we observe that this bound tends to $0$.
\end{proof}
\section{The maximal leaf-degree}
\label{sec:maxleafdeg}
Let $T_n$ be a random critical Bienaym\'e--Galton--Watson tree of size $n$. We let $\xi_u$ be the degree of the node $u$ and let $\lambda_u$ be the number of children of $u$ that are leaves in $T_n$, i.e., the {\it leaf-degree} of $u$. We denote by $L_n$ the random variable $\max_{u\in T_n} \lambda_u$; it is clear that the multiplicity $S_n$ satisfies
$S_n\geq L_n$. The next lemma shows that when the tail of the offspring distribution $\xi$ decays at a rate slower than exponential,
the ratio $L_n/\log n\to \infty$ in probability along a subsequence.
So while our condition in the upper bound that $\mathop{\hbox{\bf E}}\nolimits\{2^\xi\}$ be finite might have seemed somewhat artificial at first glance, we essentially cannot do without it.
\begin{lem}
\label{maxleafdeg}
Let $\mathop{\hbox{\bf E}}\nolimits\{\xi\}=1$, $\mathop{\hbox{\bf V}}\nolimits\{\xi\} = \sigma^2\in (0,\infty)$, and suppose that $\mathop{\hbox{\bf E}}\nolimits\{\rho^\xi\}=\infty$ for every $1 < \rho<\infty$. Let $L_n$ be the maximal leaf-degree in $T_n$, the Bienaym\'e--Galton--Watson tree induced by $\xi$, of size $n$. Then
\[{L_n\over \log n}\to \infty\]
in probability along a subsequence, as $n\to \infty$.
\end{lem}
The proof of this lemma uses Kesten's limit tree~\cite{kesten1986} for the offspring distribution $\xi$, whose construction we briefly recall here (see also~\cite{lyonsperes}).
Kesten's infinite tree, denoted $T_\infty$, is obtained by iterating the following step. Let the root of $T_\infty$ be marked. A marked node has $\zeta$ children, where $\mathop{\hbox{\bf P}}\nolimits\{\zeta=i\} = ip_i$ and $p_i = \mathop{\hbox{\bf P}}\nolimits\{\xi=i\}$. Observe that $\zeta \geq 1$ and $\mathop{\hbox{\bf E}}\nolimits\{\zeta\} = \mathop{\hbox{\bf E}}\nolimits\{\xi^2\} = \sigma^2+1$. Of these $\zeta$ children, a random child is marked and all others are unmarked. The unmarked nodes are roots of independent (unconditioned) Bienaym\'e--Galton--Watson trees. The procedure is then repeated for the sole marked node.
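The construction is straightforward to simulate. The following sketch is our own illustration (the helper names are hypothetical and not taken from the sources cited above); it samples the marked spine of $T_\infty$ down to a prescribed depth and records, for each marked node, its degree $\zeta_i$ and its leaf-degree $\lambda_i$, which are exactly the quantities used in the proof below.
\begin{verbatim}
import random

def sample_spine(p, depth, rng=random):
    """Simulate the marked spine of Kesten's tree T_infinity.

    p[i] = P{xi = i}.  A marked node has zeta children with
    P{zeta = i} proportional to i * p[i] (the size-biased law; the
    weights sum to E{xi} = 1), one uniformly chosen child is marked,
    and every unmarked child starts an independent unconditioned
    Bienayme-Galton-Watson tree.  An unmarked child is a leaf exactly
    when its own offspring count is 0, which happens with probability
    p[0]; the marked child has zeta >= 1 children and is never a leaf.
    Hence lambda_i is binomial(zeta_i - 1, p[0]) given zeta_i.
    """
    size_biased = [i * q for i, q in enumerate(p)]
    zetas, lambdas = [], []
    for _ in range(depth):
        zeta = rng.choices(range(len(p)), weights=size_biased)[0]
        lam = sum(rng.random() < p[0] for _ in range(zeta - 1))
        zetas.append(zeta)
        lambdas.append(lam)
    return zetas, lambdas

# full binary offspring distribution: p0 = p2 = 1/2
print(sample_spine([0.5, 0.0, 0.5], depth=10))
\end{verbatim}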
\begin{proof}
We argue by coupling $T_n$ with $T_\infty$. Let $(T_n,k)$ and $(T_\infty,k)$ denote the truncations of $T_n$ and $T_\infty$, respectively, to the nodes at distance $\leq k$ from the root. Then, denoting the total variation distance by ${\rm TV}$, it is known that
\[{\rm TV}\big((T_n,k_n), (T_\infty, k_n)\big) = o(1)\]
if the sequence $(k_n)$ is $o(\sqrt n)$ (see, e.g.,~\cite{kersting1998} and~\cite{stufler2019}). We couple $(T_n, k_n)$ and $(T_\infty, k_n)$
such that
\[\mathop{\hbox{\bf P}}\nolimits\!\big\{(T_n,k_n) \neq (T_\infty,k_n)\big\} ={\rm TV}\big((T_n,k_n), (T_\infty,k_n)\big)\to 0.\]
To show that $L_n/\log n\to \infty$ in probability, it suffices to show
this for $L_n'$, the maximal leaf-degree among all marked nodes of $(T_\infty,k_n)$ at distance $<k_n$ from the root. Let $\zeta_0,\zeta_1,\ldots,\zeta_{k_n-1}$ be the degrees of the marked nodes in $(T_\infty, k_n)$, indexed by their distance from the root, and let $\lambda_i$ be the leaf-degree corresponding to $\zeta_i$. Now, fix a constant $c$ and let $A_i$ be the event that $\lambda_i\leq c\log n$; we have
\begin{align}
\mathop{\hbox{\bf P}}\nolimits\{L_n'\leq c\log n\} &\leq \mathop{\hbox{\bf P}}\nolimits\!\bigg\{ \bigcap_{i=0}^{k_n-1} A_i\bigg\} \cr
&= \mathop{\hbox{\bf P}}\nolimits\{A_0\}^{k_n-1} \cr
&\leq \exp\!\big(\!-\!(k_n-1) \mathop{\hbox{\bf P}}\nolimits\{\lambda_0 > c\log n\}\big). \cr
\end{align}
Setting $k_n = \lceil n^{1/3}\rceil + 1$, we have
\[\mathop{\hbox{\bf P}}\nolimits\{L_n' \leq c\log n\}
\leq \exp\!\big(\!-n^{1/3} \mathop{\hbox{\bf P}}\nolimits\{\lambda_0 > c\log n\}\big).\]
Note that $\lambda_0\sim \op{{\rm binomial}}(\zeta_0 - 1,p_0)$, so that $\mathop{\hbox{\bf P}}\nolimits\{\lambda_0 \leq p_0\zeta_0/2 \,|\, \zeta_0\}\leq 1/2$ for $\zeta_0$ large enough, by the law of large numbers. Therefore, for $n$ large enough, we have
\[\mathop{\hbox{\bf P}}\nolimits\{\lambda_0 > c\log n\}\geq \mathop{\hbox{\bf P}}\nolimits\!\big\{\lambda_0 \geq {p_0\zeta_0\over 2} > c\log n\big\} \geq {1\over 2} \mathop{\hbox{\bf P}}\nolimits\!\bigg\{\zeta_0 > {2c\over p_0}\log n\bigg\}.\]
To conclude the proof, we must show that $n^{1/3}\mathop{\hbox{\bf P}}\nolimits\{\zeta_0 > 2c\log n/p_0\}\to \infty$ along a subsequence of $n$. Note that if $\mathop{\hbox{\bf E}}\nolimits\{\rho^\xi\} = \infty$, then $\int_0^\infty \mathop{\hbox{\bf P}}\nolimits\{\rho^\xi > x\}\,d x = \infty$, and thus
\[\sum_{\ell=1}^\infty 2^\ell \mathop{\hbox{\bf P}}\nolimits\!\bigg\{\xi>{\ell\over \log_2\rho}\bigg\} \geq \sum_{\ell=1}^\infty 2^\ell \mathop{\hbox{\bf P}}\nolimits\{\rho^\xi>2^\ell\} \geq \sum_{\ell=1}^\infty \int_{2^\ell}^{2^{\ell+1}} \mathop{\hbox{\bf P}}\nolimits\{\rho^\xi > x\}\,d x = \infty,\]
and consequently, $\mathop{\hbox{\bf P}}\nolimits\{\xi > \ell/\log_2\rho\} \geq \ell^{-2}2^{-\ell}$ for infinitely many $\ell \in \N$. As
\[\mathop{\hbox{\bf P}}\nolimits\!\bigg\{\zeta > {\ell\over\log_2\rho}\bigg\} \geq {\ell\over \log_2\rho} \mathop{\hbox{\bf P}}\nolimits\!\bigg\{\xi > {\ell\over\log_2\rho}\bigg\},\]
we see that
\[\mathop{\hbox{\bf P}}\nolimits\!\bigg\{\zeta > {\ell \over \log_2\rho}\bigg\} \geq {1\over \log_2\rho\cdot\ell 2^\ell}\]
for infinitely many $\ell$. Setting $\ell = (2c/p_0)\log n \log_2 \rho$, we have,
\[n^{1/3} \mathop{\hbox{\bf P}}\nolimits\!\bigg\{\zeta > {2c\over p_0}\log n\bigg\}\geq n^{1/3}\cdot
{1\over 2^{2c\log n\log_2\rho/p_0}}\cdot {1\over \log_2\rho \cdot 2c\log n\log_2\rho/p_0}\]
for infinitely many $n$ provided that
\[{2c\over p_0} \log 2 \log_2\rho \leq {1\over 6},\]
which is possible by making $\rho > 1$ small enough. Thus, for every
$c>0$,
\[\limsup_{n\to \infty} \mathop{\hbox{\bf P}}\nolimits\{L_n' > c\log n\} = 1,\]
which is what we wanted to show.
\end{proof}
Note that if for every $\rho < 1$, $p_n > \rho^n$ for all $n$ large enough, then $L_n/\log n\to \infty$ in probability (instead of just along a subsequence).
\section{Examples}
\label{sec:examples}
There exists an important link between certain offspring distributions of conditioned Bienaym\'e--Galton--Watson trees and families of ``simply-generated trees''~\cite{meirmoon1978}. In this section we examine several important families of trees in the Bienaym\'e--Galton--Watson context, and give explicit asymptotic upper and lower bounds for the multiplicity. In each case,
the two important parameters will be
\[\gamma = \max_{k \geq 2}{p_0}^k {p_k}^{k/(k-1)}\qquad\hbox{and}\qquad{\rm H}_2 = \log_2{1\over \sum_{i\geq 0} {p_i}^2}.\]
We must also verify that $\mathop{\hbox{\bf E}}\nolimits\{2^\xi\}$ is finite, if the upper bound is to hold. In particular, this latter condition always holds if $\xi$ is bounded. A summary of this section is displayed in Table 1.
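As a sanity check that is not part of the original computations, both parameters are easy to evaluate numerically for a finitely supported offspring distribution; the short sketch below (our own, with hypothetical function names) reproduces the values derived in the subsections that follow.
\begin{verbatim}
import math

def tree_constants(p):
    """Return (gamma, H2) for a finitely supported offspring pmf,
    given as a list p with p[k] = P{xi = k}.

    gamma = max over k >= 2 of p0^k * pk^(k/(k-1))   (lower-bound constant)
    H2    = log2( 1 / sum_i p_i^2 )                  (Renyi entropy of order 2)
    """
    p0 = p[0]
    gamma = max(p0 ** k * p[k] ** (k / (k - 1))
                for k in range(2, len(p)) if p[k] > 0)
    h2 = math.log2(1.0 / sum(q * q for q in p))
    return gamma, h2

print(tree_constants([0.5, 0.0, 0.5]))   # full binary: (0.0625, 1.0)
print(tree_constants([1/3, 1/3, 1/3]))   # Motzkin: (1/81, log2(3))
print(tree_constants([1/4, 1/2, 1/4]))   # Catalan: (1/256, log2(8/3))
\end{verbatim}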
\subsection{Full binary trees}
These are trees in which every node must have exactly zero or two children, and arise from the distribution $p_0 = p_2 = 1/2$. We compute $\gamma=1/16$ and ${\rm H}_2 = 1$, so that
\[(1-\epsilon){\log_2 n\over 4} \leq S_n \leq (1+\epsilon)2\log_2 n\]
asymptotically in probability. Because the multiplicity in a full binary tree must be a power of $2$, in essence this means that there exists a sequence of integers $(a_n)$ such that
\[\mathop{\hbox{\bf P}}\nolimits\!\big\{S_n \in \{2^{a_n}, 2^{a_n+1}, 2^{a_n+2}, 2^{a_n+3}\}\big\}\to 1.\]
In other words, in general one cannot improve the ratio between the upper and lower bounds in
Theorem~\ref{mainthm} to a factor of less than $8+\epsilon$.
\subsection{Flajolet t-ary trees}
Full binary trees are a special case of a
Flajolet $t$-ary tree for $t=2$
(see~\cite{flajoletsedgewick}, p.~68). In general, these are trees whose non-leaf nodes each have $t$ children, and they arise from the finite distribution $p_0 = (t-1)/t$, $p_t = 1/t$. We have
\begin{align}
\gamma &= {p_0}^t{p_t}^{t/(t-1)} \cr
&=\bigg(1-{1\over t}\bigg)^{t} \bigg({1\over t}\bigg)^{t/(t-1)} \cr
&=\exp\!\big(\!\!-1 + o_t(1) - \log t\big),
\end{align}
so $\log_2(1/\gamma) = \log_2 e + \log_2 t + o(1)$ as $t\to \infty$. On the other hand,
\[{\rm H}_2 = \log_2 {1\over {p_0}^2 + {p_t}^2} = \log_2{1\over
1-2/t + 2/t^2},\]
so ${\rm H}_2 \sim 2\log_2 e/t$ as $t\to \infty$. This means that as $t$ gets large, the ratio between the upper and lower bound grows as $t \log t$.
\begin{figure}
\begin{equation*}\vcenter{\hspace{-20.86215pt}\vbox{
\footnotesize
\tabskip=.2em plus2em minus.6em
\halign{
\hfil$\displaystyle{#}$\hfil & \hfil$\displaystyle{#}$\hfil & \hfil$\displaystyle{#}$\hfil &
\hfil$\displaystyle{#}$\hfil & \hfil$\displaystyle{#}$\hfil \cr
\noalign{\hrule}
\noalign{\medskip}
\hbox{Family} & \gamma & {\rm H}_2 & \hbox{Lower bound} & \hbox{Upper bound} \cr
\noalign{\medskip}
\noalign{\hrule}
\noalign{\medskip}
{\hbox{Full binary} \atop (\op{{\rm uniform}}\{0,2\})} &{1\over16} & 1 & {\log_2 n \over 4} & 2\log_2 n \cr
\noalign{\medskip}
{\hbox{Flajolet $t$-ary}\atop (p_0 = 1-1/t;\;p_t= 1/t)} & e^{-1 + o_{t\to\infty}(1) - \log t} & \log_2{1\over 1-2/t+2/t^2} & {\log_2 n\over \log_2 e + \log_2 t + o_{t\to\infty}(1)} & \sim_{t\to\infty} t\log n \cr
\noalign{\medskip}
{\hbox {Cayley} \atop (\op{{\rm Poisson}}(1))} & {1\over 4e^4} & \log_2\Big({e^2 \over I_0(2)}\Big) & {\log_2 n\over 2+4\log_2 e} & {2\log_2 n\over \log_2\big(e^2 / I_0(2)\big)} \cr
\noalign{\medskip}
{\hbox{Catalan} \atop (\op{{\rm binomial}}(2,1/2))} & {1\over 256} & \log_2(8/3) & \log_{256} n & {2\log_2 n\over \log_2(8/3)}\cr
\noalign{\medskip}
{\hbox{Binomial} \atop (\op{{\rm binomial}}(d,1/d))} & {1 \over 4} {\Big(1-{1 \over d}\Big)}^{4d-2} & \log_2\Big({e^2 \over I_0(2)}\Big)+o_{d\to\infty}(1) & {\log_2 n \over 2 - \log_2({(1-1/d)}^{4d-2})} & {2 \log_2 n \over \log_2\big(e^2 / I_0(2)\big)+o_{d\to\infty}(1)} \cr
\noalign{\medskip}
{\hbox{Motzkin} \atop (\op{{\rm uniform}}\{0,1,2\})} & {1\over 81} & \log_2 3 & \log_{81} n & 2\log_3 n \cr
\noalign{\medskip}
{\hbox{Planted plane} \atop (\op{{\rm geometric}}(1/2))} & {1\over 256} & \hbox{---} & \log_{256} n & \hbox{---} \cr
\noalign{\medskip}
\noalign{\hrule}
}
}}\end{equation*}
\begin{equation*}\vcenter{\vbox{
\centerline{{\bf Table 1.}\enspace
Leaf multiplicities of certain families of trees}
}}\end{equation*}
\end{figure}
\subsection{Cayley trees}
These trees arise from a $\op{{\rm Poisson}}(1)$ distribution, where $p_i = 1/(e\cdot i!)$ for $i\geq 0$. We verify first that
\[\mathop{\hbox{\bf E}}\nolimits\{2^\xi\} = \sum_{i=0}^\infty {2^i\over e i!} = e < \infty,\]
and then work out that $\gamma = 1/(4e^4)$. Letting
\[I_0(z) = \sum_{k=0}^\infty {(z^2/4)^k\over (k!)^2} = {1\over \pi}\int_0^\pi e^{z\cos \theta} \,d\theta\]
be the modified Bessel function of the first kind (see~\cite{abrasteg1972}, p.~376), we find that
\[\sum_{i=0}^\infty {p_i}^2 = {1\over e^2} \sum_{i=0}^\infty {1\over (i!)^2} = {1\over e^2}I_0(2),\]
meaning that ${\rm H}_2 = 2\log_2 e - \log_2\!\big(I_0(2)\big)$. Putting everything together, the lower and upper bounds in probability for $S_n$ are, respectively,
\[{\log_2 n\over 2+4\log_2 e} \approx {\log_2 n\over 7.771}
\qquad\hbox{and}\qquad {2\log_2 n\over 2\log_2 e - \log_2\big((1/\pi)\int_0^\pi e^{2\cos \theta}\,d \theta\big)} \approx {\log_2 n\over 0.8483}.\]
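These numerical values are easy to confirm; the following check (ours) evaluates $I_0(2)=\sum_{k\geq0}1/(k!)^2$ directly from its series rather than calling any special-function library.
\begin{verbatim}
import math

# I_0(2) = sum_{k>=0} 1/(k!)^2; the terms decay factorially fast,
# so truncating at k = 20 is far more than enough for double precision
I0_2 = sum(1.0 / math.factorial(k) ** 2 for k in range(20))

log2_e = math.log2(math.e)
lower_denom = 2 + 4 * log2_e            # log2(1/gamma) = log2(4 e^4)
H2 = 2 * log2_e - math.log2(I0_2)       # Renyi entropy of order 2

print(lower_denom)    # approx 7.7708
print(2 / H2)         # approx 1.1788, i.e. upper bound ~ log2(n) / 0.8483
\end{verbatim}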
\subsection{Catalan trees}
When we set $p_0 = p_2 = 1/4$ and $p_1 = 1/2$, we obtain a family of trees often called Catalan trees, since the number of such trees on $n$ nodes is ${2n\choose n}/(n+1)$. There is a one-to-one correspondence between Catalan trees on $n$ nodes and full binary trees on $2n+1$ nodes, since one obtains a full binary tree from a Catalan tree by adding artificial external nodes to every empty slot, and this procedure is reversed by removing all leaves from a full binary tree. It is easy to see that the leaf multiplicity of a full binary tree is exactly double the multiplicity of its corresponding Catalan tree. By plugging $d=2$ into the binomial family below, the lower bound given by
Lemma~\ref{lowerbound} is $\log_2 n / 8$, which makes sense since the correspondence with full binary trees tells us that the lower bound on the Catalan trees should be similar to $\log_2(2n+1)/8$. We calculate
${\rm H}_2 = \log_2(8/3)$ and the upper bound
is $2\log_2 n/\log_2(8/3)$, so
the ratio between the upper and lower bounds is $16/\log_2(8/3)$.
\subsection{Binomial trees}
Catalan trees are a special case of a binomial tree. For integer parameter $d\geq 2$, nodes in these trees have $d$ ``slots'' that may or may not contain a child; so there are ${d\choose i}$ ways for a node to have $i$ children, for $0\leq i\leq d$. These trees correspond to a $\op{{\rm binomial}}(d, 1/d)$ distribution. We compute
\[\gamma = (p_0 p_2)^2 = \Bigg(\bigg({d-1 \over d}\bigg)^{d}\cdot {d(d-1) \over 2}\cdot {(d-1)^{d-2} \over d^{d}}\Bigg)^2 = {1 \over 4} \bigg(1-{1 \over d}\bigg)^{4d-2}.\]
Note that taking the limit $d \to \infty$, the $\op{{\rm binomial}}(d, 1/d)$ distributions approach a $\op{{\rm Poisson}}(1)$ distribution. Thus we see from our earlier discussion on the Cayley trees that ${\rm H}_2 = \log_2\big(e^2 / I_0(2)\big)+o_{d\to\infty}(1)$. This gives the respective lower and upper bounds
\[{\log_2 n \over 2 - (4d-2)\log_2(1-1/d)} \qquad\hbox{ and }\qquad {2 \log_2 n \over \log_2\big(e^2 / I_0(2)\big)+o_{d\to\infty}(1)}.\]
The lower bound tends to $\log_2n / (2 + 4\log_2 e)$ as $d \to \infty$, matching the lower bound we obtained for Cayley trees above.
\subsection{Motzkin trees}
Also known as unary-binary trees, these are trees in which each non-leaf node can have either one or two children. They correspond to the distribution with $p_0 = p_1 = p_2 = 1/3$. We easily compute $\gamma = 1/81$ and ${\rm H}_2 = \log_2 3$, which yields an asymptotic lower bound of $\log_2 n/(\log_2 81)= \log_{81} n$ and an asymptotic upper bound of $2\log_2 n /\log_2 3 = 2 \log_3 n$. The ratio between the upper and lower bounds is $8$.
\subsection{Planted plane trees}
These are trees with ordered children, so that each can be embedded in the plane in a unique way. They correspond to a $\op{{\rm geometric}}(1/2)$ distribution, with $p_i = 1/2^{i+1}$ for all $i$. We find that $\gamma = 1/256$, so we have the asymptotic lower bound $S_n \geq \log_2 n/ 8$.
Unfortunately, we have $\mathop{\hbox{\bf E}}\nolimits\{2^\xi\} = \sum_{i\geq 0} 1/2 = \infty$, so Lemma~\ref{upperbound} cannot be applied to give an upper bound here. We note that the maximal degree $\Delta_n$ of $T_n$ satisfies $\Delta_n/\log_2 n\to 1$ in probability (see, e.g., \cite{rootest2020}, Lemma 6). However, this does not imply that $S_n = O(\log n)$ in probability.
\section{Automorphic multiplicity}
\label{sec:automult}
The multiplicity of a tree does not have a natural extension to unrooted trees,
because whether or not two nodes are identical depends crucially on their position in relation to a
distinguished root node $u$. In this section we briefly investigate an alternate notion of multiplicity
that does extend nicely to free trees. It arises in the problem of root estimation
in Galton--Watson trees described in~\cite{rootest2020}. We briefly recall some definitions.
Let $T$ be a rooted tree. By disregarding the parent-child directions of the edges, we obtain a free tree
$T_{\rm F}$. Conversely, if we start with a free tree $T_{\rm F}$ and any node $u$, we can define a
{\it rooting of $T_{\rm F}$ at $u$}
to be the rooted tree $T_u$ obtained by fixing $u$ as the root.
This does not give rise to a unique tree in general, because the children of a given node may
be attached in an arbitrary left-to-right order, but our new notion of multiplicity will treat all
of these possible ordered trees the same.
Let ${\rm Aut}(T_{\rm F})$ be the group of all graph automorphisms of $T_{\rm F}$, that is, bijections $f$ from the
set of vertices of $T_{\rm F}$
to itself such that for vertices $u$ and $v$, $f(u)$ is adjacent to $f(v)$ whenever $u$ is adjacent to $v$.
We can then define an {\it automorphism} of $T_u$ to be a graph automorphism of $T_{\rm F}$ such that
the root $u$ stays fixed. By a slight abuse of notation,
we denote the set of these rooted-tree automorphisms by ${\rm Aut}(T_u)$;
formally this is the {\it stabilizer subgroup}
\[{\rm Stab}(u) = \{ g\in {\rm Aut}(T_{\rm F}) : g\cdot u = u\}\]
of ${\rm Aut}(T_{\rm F})$.
We will say that two nodes $v$ and $w$ in $T_u$ are {\it congruent} and write $v\sim_u w$
if $v$ and $w$ belong to the same orbit under the action of ${\rm Aut}(T_u)$. This means that there exists
an element $f$ of ${\rm Aut}(T_u)$ such that $f(v) = w$. It is clear that this gives us an equivalence
relation on the set of all nodes of $T_u$,
and the {\it automorphic multiplicity} of a node $v$, denoted $\mu(u,v)$,
is the size of the equivalence class of $v$ under this relation. Since any node can be mapped to
itself under an automorphism, $\mu(u,v)\geq 1$ for all $v$.
In fact, one can define the relation
$\sim_u$, and consequently the function $\mu$,
purely in terms of the relation $\equiv$.
We have $v\sim_u w$ if and only if one can choose, for every node in $T_u$, a permutation of that node's
children such that applying all of these permutations to the respective left-to-right orderings
results in a tree in which $v\equiv w$. The analogue of $S$ in this setting
is the {\it automorphic (leaf) multiplicity} $M(T)$ of a rooted tree $T$. If $o$ is the root of the tree $T$,
then $M(T)$ is the maximum value of $\mu(o,v)$ over all
nodes $v$ in the tree $T$.
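Although it is not needed in what follows, the leaf version of $M$ is simple to compute. Under the natural reading of the definitions above, two nodes of $T_u$ are congruent exactly when the canonical, child-order-insensitive codes of the subtrees rooted at their successive ancestors (read from the root down to the node) coincide; the sketch below is our own illustration, and that characterization is our assumption rather than a statement from the text.
\begin{verbatim}
def subtree_code(tree, v, memo):
    # canonical, child-order-insensitive code of the subtree rooted at v;
    # `tree` maps a node to the list of its children (leaves may be absent)
    if v not in memo:
        memo[v] = ("node",) + tuple(sorted(subtree_code(tree, c, memo)
                                           for c in tree.get(v, [])))
    return memo[v]

def automorphic_leaf_multiplicity(tree, root):
    # max number of leaves sharing the same root-to-leaf signature of
    # canonical subtree codes (assumed here to coincide with congruence)
    memo, groups, best = {}, {}, 0
    stack = [(root, (subtree_code(tree, root, memo),))]
    while stack:
        v, signature = stack.pop()
        children = tree.get(v, [])
        if not children:                      # v is a leaf
            groups[signature] = groups.get(signature, 0) + 1
            best = max(best, groups[signature])
        for c in children:
            stack.append((c, signature + (subtree_code(tree, c, memo),)))
    return best

# root 0 with two children, each of which has two leaf children: all four
# leaves are congruent, so the automorphic leaf multiplicity is 4
print(automorphic_leaf_multiplicity({0: [1, 2], 1: [3, 4], 2: [5, 6]}, 0))
\end{verbatim}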
\begin{figure}
\begin{center}
\subfigure{\includegraphics{fig_autodef.eps}}
\caption{Different leaf multiplicities but the same automorphic leaf multiplicity.}
\label{fig:autodef}
\end{center}
\end{figure}
Fig.~\ref{fig:autodef} illustrates the distinction between the automorphic and non-auto\-mor\-phic multiplicity.
We have $S(T_1) = M(T_1) = 4$, since the two non-leaf children of the root have identical
(and therefore congruent) subtrees. In $T_2$, on the other hand, these subtrees are congruent but not
identical, so that $M(T_2) = 4$ but the non-automorphic multiplicity of $T_2$ is only $2$.
This definition is still somewhat at odds with the notion of multiplicity that arises in the
root estimation problem from~\cite{rootest2020}.
In that setting, one considers all graph
automorphisms of the free tree, not just ones that fix the root. We will call the size of the orbit of a node
under this larger action the {\it free multiplicity} $\mu_{\rm F}$; if two nodes $u$ and $v$ are congruent under
an arbitrary graph automorphism, then we write $u\sim_{\rm F} v$ and say that the two nodes are {\it free-congruent}.
We also let $M_{\rm F}(T)$ denote the {\it free (leaf) multiplicity}, the maximum value of $\mu_{\rm F}$
over all nodes in the free tree $T_{\rm F}$.
\begin{figure}
\begin{center}
\subfigure{\includegraphics{fig_freecong.eps}}
\caption{A rooted tree $T$ with $M(T)=6$ and $M_{\rm F}(T)=9$.}
\label{fig:freecong}
\end{center}
\end{figure}
Fig.~\ref{fig:freecong} shows the relation between the automorphic multiplicity of a rooted tree and the
free multiplicity of its free-tree counterpart. Note that $M(T) \leq M_{\rm F}(T)$ for any rooted tree $T$,
since we have $\mu(u) \leq \mu_{\rm F}(u)$ for every node $u$.
We shall spend the rest of this section showing that this inequality can more or less be reversed. First,
we need three lemmas, and in their statements and proofs, we shall understand ``multiplicity'' to mean
``free multiplicity''. In the following proof, we also write $[G:H]$ to denote the {\it index} of a subgroup $H$
in a larger group $G$; that is, the cardinality of the coset space $G/H$.
\begin{lem}
\label{sakslemma}
If $u$ and $v$ are adjacent nodes in a finite free tree $T$, then
either $\mu_{\rm F}(u)$ is an integer multiple of $\mu_{\rm F}(v)$ or the other way around.
\end{lem}
\begin{proof}
We may reduce to the case where one of $u$ or $v$ is a leaf. This is because if neither is a leaf,
then neither lies in the orbit of any leaf under a graph automorphism, so we can remove all the leaves from
the tree $T$ without changing either $\mu_{\rm F}(u)$ or $\mu_{\rm F}(v)$. Since $T$ is finite and every finite tree
contains at least one leaf, repeating this finitely many times eventually makes $u$ or $v$ a leaf.
Now without loss of generality, suppose $u$ is the leaf and $v$ is its unique neighbour. By the
orbit-stabilizer theorem,
\[\bigl|{\rm Aut}(T)\bigr| = \mu_{\rm F}(u) \bigl|{\rm Stab}(u)\bigr| = \mu_{\rm F}(v)\bigl|{\rm Stab}(v)\bigr|,\]
where stabilizers are taken with respect to the group ${\rm Aut}(T)$. Every automorphism fixing $u$ must
permute its neighbours, but since $u$ only has one neighbour, we have ${\rm Stab}(u)\subseteq {\rm Stab}(v)$.
Thus
\begin{align}
\mu_{\rm F}(u)
&= {\mu_{\rm F}(v) \bigl|{\rm Stab}(v)\bigr|\over \bigl|{\rm Stab}(u)\bigr|} \cr
&= {\mu_{\rm F}(v) \bigl[{\rm Stab}(v):{\rm Stab}(u)\bigr] \bigl|{\rm Stab}(u)\bigr|\over \bigl|{\rm Stab}(u)\bigr|} \cr
&= \mu_{\rm F}(v) \bigl[{\rm Stab}(v):{\rm Stab}(u)\bigr],\cr
\end{align}
proving the lemma.
\end{proof}
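As a small concrete check of this identity (our example): let $T$ be the path $u \mathrel-\!\!\mathrel- v \mathrel-\!\!\mathrel- w$ on three nodes. Then ${\rm Aut}(T)$ consists of the identity and the reflection exchanging $u$ and $w$, so ${\rm Stab}(v)={\rm Aut}(T)$, ${\rm Stab}(u)=\{\mathrm{id}\}$, and
\[\mu_{\rm F}(u)=2=\mu_{\rm F}(v)\bigl[{\rm Stab}(v):{\rm Stab}(u)\bigr]=1\cdot 2,\]
as the proof predicts.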
The next lemma formalizes the intuitive
notion that in a free tree, the multiplicities are in some sense smaller towards the centre of the tree.
\begin{lem}
\label{middlemult}
Let $u \mathrel-\!\!\mathrel- v \mathrel-\!\!\mathrel- w$ be neighbouring nodes in a free tree $T$ with
$v$ being the central node. Then $v$ cannot have strict maximal free multiplicity among the three nodes;
that is, $\mu_{\rm F}(v) \leq \mu_{\rm F}(u)$ or $\mu_{\rm F}(v) \leq \mu_{\rm F}(w)$.
\end{lem}
\begin{proof}
Suppose for contradiction that $\mu_{\rm F}(v) > \mu_{\rm F}(u)$ and $\mu_{\rm F}(v) > \mu_{\rm F}(w)$. Then, for each of the
pairs of neighbours
$u\mathrel-\!\!\mathrel- v$ and $v\mathrel-\!\!\mathrel- w$,
the multiplicity of one of the nodes must be an integer multiple of the multiplicity of the other,
by the previous lemma.
So there must be integers $r,s > 1$ such that
\[\mu_{\rm F}(v) = r \mu_{\rm F}(w)\quad\hbox{and}\quad \mu_{\rm F}(v) = s \mu_{\rm F}(u). \]
The situation is illustrated in Fig.~\ref{fig:landscape}. Since $\mu_{\rm F}(v) = s \mu_{\rm F}(u)$, $u$ must
have $s - 1$ children in the orbit of $v$, and thus the subtree rooted at each of these
children is isomorphic to $B$. Similarly, since $\mu_{\rm F}(v) = r \mu_{\rm F}(w)$, $w$ must have $r - 1$
child subtrees isomorphic to $A$.
\begin{figure}
\begin{center}
\subfigure{\includegraphics{fig_landscape.eps}}
\caption{Three adjacent nodes and their subtrees.}
\label{fig:landscape}
\end{center}
\end{figure}
We note that in order to satisfy the $r,s>1$ requirements, we must have
\[|A| \geq (s - 1) |B| + 2 \quad\hbox{and}\quad |B| \geq (r - 1)|A| + 2,\]
where the additional $+2$ terms come respectively from nodes $u$ and $v$ (for $|A|$) or $v$ and
$w$ (for $|B|$). This implies that
\[|A| \geq (s - 1)(r - 1)|A| + 2s,\]
which is impossible if $|A|\geq 1$ and $r,s > 1$. The contradiction tells us that $v$
cannot have strict maximal multiplicity among the three nodes.
\end{proof}
We have established that if we embed a free tree into the $(x,y)$-plane and then lift the nodes
up by setting each node's $z$-coordinate to its multiplicity, then the result is a convex,
spidery bowl or valley. This is illustrated in Fig.~\ref{fig:spidery}.
\begin{figure}
\begin{center}
\subfigure{\includegraphics{fig_spidery.eps}}
\caption{Darker shades of grey indicate higher multiplicities in this free tree.}
\label{fig:spidery}
\end{center}
\end{figure}
On a path between any two endpoints, the multiplicities decrease monotonically towards the centre
of the tree before increasing monotonically towards the endpoint. There is a central connected core
of nodes of minimal multiplicity and we are able to show
that this minimal multiplicity cannot be greater than 2.
\begin{lem}
\label{minmult}
If $F = (V,E)$ is a finite free tree, then the node of minimal multiplicity
in $F$ has multiplicity 1 or 2.
\end{lem}
\begin{proof}
The proof is by contraposition. Let $u\in V(F)$ be a node of minimal multiplicity and suppose
$\mu_{\rm F}(u) >2$. Let $C_u$ be the orbit of $u$. There is a subtree
$F'$ whose endpoints are the members of $C_u$; since $\mu_{\rm F}(u)>2$ and the graph is connected, there is
necessarily at least one node $v\in F'\setminus C_u$. By Lemma~\ref{middlemult},
we have $\mu_{\rm F}(v)\leq \mu_{\rm F}(u)$ but by
minimality of $\mu_{\rm F}(u)$, we know that $\mu_{\rm F}(v) = \mu_{\rm F}(u)$.
So we can repeat the argument with $C_v$ to find that
the tree is infinite (at each step we are removing $\mu_{\rm F}(u)$ nodes from the free tree, but the process never
terminates).
Note that this argument does not work when $\mu_{\rm F}(u) = 2$ because $F'$ may simply consist of two nodes
connected by one edge.
\end{proof}
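Both values allowed by Lemma~\ref{minmult} occur (these small examples are ours): in the star with one centre and three leaves, the centre is fixed by every automorphism and the minimal multiplicity is $1$, while in the path $a\mathrel-\!\!\mathrel- b\mathrel-\!\!\mathrel- c\mathrel-\!\!\mathrel- d$ the only nontrivial automorphism is the reversal, the orbits are $\{a,d\}$ and $\{b,c\}$, and the minimal multiplicity is $2$.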
\begin{thm}
Let $T$ be a rooted tree with $n$ nodes; let $M(T)$ and $M_{\rm F}(T)$ be the
automorphic multiplicity and
free multiplicity of $T$, respectively. We have the inequality
\[M_{\rm F}(T) \leq 2M(T),\]
and this bound is the best possible.
\end{thm}
\begin{proof}
Suppose first that $n\geq 3$.
Let $v$ be a leaf of maximal free multiplicity in $T_{\rm F}$, and let $[v]$ denote the set of nodes that
are free-congruent to $v$ (so $\bigl|[v]\bigr| = M_{\rm F}(T)$).
By Lemma~\ref{minmult}, a node $s$ of minimal free multiplicity
either has $\mu_{\rm F}(s) = 1$ or $\mu_{\rm F}(s) = 2$, and since we assumed that $n\geq 3$, we can require that $s$ not be
a leaf.
If $\mu_{\rm F}(s) = 1$, then $M(T_s) = M_{\rm F}(T)$, since any automorphism of $T_{\rm F}$ already fixes $s$. The nodes
in $[v]$ all lie in some subtrees of $s$, and without loss of generality, we may assume that they do not
all lie in the same subtree, since if $s'$ is the only child of $s$ whose subtree contains nodes of $[v]$,
we can reroot the tree $T_s$ at $s'$ instead without changing the maximum automorphic multiplicity.
There are $d\geq 2$ children of the root whose subtrees contain elements of $[v]$; each one
contains an equal proportion of these nodes, so $d$ divides $M_{\rm F}(T)$. If we reroot the tree
at any node outside these subtrees, then the automorphic multiplicity of the tree does not change.
If, on the other hand,
we choose a node in one of these subtrees, then there are still $(d-1)M_{\rm F}(T)/d$ leaves that can still be shuffled
amongst themselves, so the maximum automorphic multiplicity is $(d-1)M_{\rm F}(T)/d \geq M_{\rm F}(T)/2$.
If $\mu_{\rm F}(s) = 2$, there is a node $s'$ that is free-congruent to $s$, and there is mirror symmetry in the graph.
This means that there is a way to split the graph along an edge such that the two sides
have the exact same shape, one
contains $s$, and the other contains $s'$. The side containing $s$ has $M_{\rm F}(T)/2$ members of $[v]$; call this
half $[v]_s$ and the other half $[v]_{s'}$.
When the tree is rooted at $s$, we find that $M(T_s) = M_{\rm F}(T)/2$, since any two members of $[v]_s$ can be
exchanged and any two members of $[v]_{s'}$ can be exchanged
(but exchanges cannot happen between the two subtrees).
And rerooting the tree at an arbitrary node, it is clear that the automorphic multiplicity of the
tree will not decrease.
When $n=1$ the statement is trivial, and taking $n=2$ shows that the bound is the best possible,
because if $T$ is the tree with a root and a single (leaf) child, then $M_{\rm F}(T) = 2$ and $M(T) = 1$.
\end{proof}
This theorem tells us that the asymptotics of the free multiplicity are the same as the asymptotics
of the automorphic
multiplicity, up to a fudge factor of $2$.
Because congruence of two nodes is immediately implied by their being identical under $\equiv$,
we have $S(T)\leq M(T)$ for all
rooted trees $T$.
Thus if $M_n = M(T_n)$ and $F_n = M_{\rm F}(T_n)$, where $T_n$ is a conditioned Galton--Watson tree
of size $n$, then if $\gamma$ is as defined in Lemma~\ref{lowerbound}, the inequality
\[F_n \geq M_n \geq (1-\epsilon){\log_2 n \over\log_2(1/\gamma)}\]
holds with probability tending to 1.
\acknowledgements
\label{sec:ack}
We thank the anonymous referee for numerous insightful comments that substantially improved the
readability and rigour of the paper. We also thank Jonah Saks for helping us find a clean proof of
Lemma~\ref{sakslemma}.
| {
"timestamp": "2022-03-22T02:01:04",
"yymm": "2105",
"arxiv_id": "2105.12046",
"language": "en",
"url": "https://arxiv.org/abs/2105.12046",
"abstract": "This note defines a notion of multiplicity for nodes in a rooted tree and presents an asymptotic calculation of the maximum multiplicity over all leaves in a Bienaymé-Galton-Watson tree with critical offspring distribution $\\xi$, conditioned on the tree being of size $n$. In particular, we show that if $S_n$ is the maximum multiplicity in a conditional Bienaymé-Galton-Watson tree, then $S_n = \\Omega(\\log n)$ asymptotically in probability and under the further assumption that ${\\bf E}\\{2^\\xi\\} < \\infty$, we have $S_n = O(\\log n)$ asymptotically in probability as well. Explicit formulas are given for the constants in both bounds. We conclude by discussing links with an alternate definition of multiplicity that arises in the root-estimation problem.",
"subjects": "Probability (math.PR)",
"title": "Leaf multiplicity in a Bienaymé-Galton-Watson tree",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9719924761487654,
"lm_q2_score": 0.7279754489059774,
"lm_q1q2_score": 0.7075866591576301
} |
https://arxiv.org/abs/math/0511151 | Some equations relating multiwavelets and multiscaling functions | The local trace function introduced in \cite{Dut} is used to derive equations that relate multiwavelets and multiscaling functions in the context of a generalized multiresolution analysis, without appealing to filters. A construction of normalized tight frame wavelets is given. Particular instances of the construction include normalized tight frame and orthonormal wavelet sets. | \section{\label{introduction}Introduction}
A wavelet is a function $\psi\inL^{2}\left(\mathbb{R}\right)$ such that
$$\{D^jT_k\psi\,|\,j\in\mathbb{Z},k\in\mathbb{Z}\}$$
is an orthonormal basis for $L^{2}\left(\mathbb{R}\right)$, where
$$Df(\xi)=\sqrt{2}f(2\xi),\quad T_kf(\xi)=f(\xi-k),\quad(\xi\in\mathbb{R},f\inL^{2}\left(\mathbb{R}\right),k\in\mathbb{Z}).$$
Many examples of wavelets have been produced using the concept of multiresolution analysis (MRA) (see \cite{Dau}). A MRA
is a nest of subspaces $(V_n)_{n\in\mathbb{Z}}$ of $L^{2}\left(\mathbb{R}\right)$ with the following properties:
\begin{equation}\label{eq0_1}
V_n\subset V_{n+1},\quad(n\in\mathbb{Z});
\end{equation}
\begin{equation}\label{eq0_2}
f\in V_n\mbox{ iff }Df\in V_{n+1};
\end{equation}
\begin{equation}\label{eq0_3}
\overline{\bigcup_{n\in\mathbb{Z}}V_n}=L^{2}\left(\mathbb{R}\right);
\end{equation}
\begin{equation}\label{eq0_4}
\bigcap_{n\in\mathbb{Z}}V_n=\{0\};
\end{equation}
\begin{equation}\label{eq0_5}
\mbox{ There exists }\varphi\in V_0\mbox{ such that }\{T_k\varphi\,|\, k\in\mathbb{Z}\}\mbox{ is an orthonormal basis for }V_0.
\end{equation}
$\varphi$ is called a scaling function.
\par
To construct wavelets, one has to find functions $\psi$ such that
$\{T_k\psi\,|\,k\in\mathbb{Z}\}$ is an orthonormal basis for $W_0:=V_1\ominus V_0$.
\par
Many examples, due to Journ\'e and others (\cite{DL}), show that there are wavelets which are not associated to MRAs. The theory
developed by Baggett (\cite{BMM},\cite{BM}) shows that every orthogonal wavelet is associated to a similar, more general structure
called generalized multiresolution analysis (GMRA) which satisfies the conditions (\ref{eq0_1})-(\ref{eq0_4}) while condition
(\ref{eq0_5}) is replaced by a weaker one:
\begin{equation}\label{eq0_6}
V_0\mbox{ is invariant under all integer translations }T_k.
\end{equation}
In the context of a MRA, wavelets are constructed from scaling functions via filters (\cite{Dau}). In the GMRA case, one can
still get some scaling functions in $V_0$, namely, there are functions $\phi_1,...,\phi_n,...\in V_0$ such that
$$\{T_k\phi_i\,|\,i\in\mathbb{N},k\in\mathbb{Z}\}$$
is a normalized tight frame for $V_0$.
\par
We recall that a set of vectors $\{e_i\,|\, i\in I\}$ in a Hilbert space $H$ is a frame if there are some positive constants
$A,B>0$ such that
$$A\|f\|^2\leq\sum_{i\in I}\left|\left\langle f\,|\, e_i \right\rangle\right|^2\leq B\|f\|^2,\quad (f\in H).$$
If $A=B=1$ it is called a normalized tight frame (NTF).
\par
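For instance (a standard observation which we add for illustration), if $\{e_i\}_{i\in I}$ and $\{f_j\}_{j\in J}$ are two orthonormal bases of $H$, then the combined family $\{2^{-1/2}e_i\}_{i\in I}\cup\{2^{-1/2}f_j\}_{j\in J}$ is a normalized tight frame, since for every $g\in H$
$$\sum_{i\in I}\left|\left\langle g\,|\,2^{-1/2}e_i\right\rangle\right|^2+\sum_{j\in J}\left|\left\langle g\,|\,2^{-1/2}f_j\right\rangle\right|^2=\tfrac12\|g\|^2+\tfrac12\|g\|^2=\|g\|^2,$$
although it is clearly not an orthonormal basis.
\par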
In the GMRA situation, the wavelets can be again constructed using filters but substantial complications appear because,
instead of just one filter, as it was in the case of a MRA, now one has to use a matrix of filters.
\par
In this paper we analyse the relation between scaling functions and wavelets without the use of filters. This relation is described
in three theorems:
\par
1. In theorem \ref{th1_1} we assume that the scaling functions are given and offer necessary and sufficient conditions for a set of functions
to be an associated wavelet.
\par
2. In theorem \ref{th1_2} we start with a wavelet and derive equations that characterize the associated scaling functions.
\par
3. In theorem \ref{th1_5} we show that if two sets of functions are related by equations similar to those that link scaling functions
and wavelets then one of the sets will be indeed a wavelet. (However the other set is not necessarily the corresponding scaling function.)
\par
In section \ref{some} we list some definitions and results in preparation for the main part, which is section \ref{main} where
the results are proved.
In section \ref{construction}, we describe a general procedure for constructing normalized tight frame wavelets. All wavelet sets and
normalized tight frame wavelet sets can be obtained with this procedure provided the initial data is chosen appropriately. We
end with an example of a NTF wavelet which has a piecewise linear square in the Fourier domain.
\section{\label{some} Some definitions and tools}
Throughout the paper we will work with an $n\times n$ dilation matrix $A$ which preserves the lattice $\mathbb{Z}^n$, that is:
all the eigenvalues $\lambda$ of $A$ have $|\lambda|>1$ and $A\mathbb{Z}^n\subset\mathbb{Z}^n$. Define the translation and dilation operators on
$L^{2}\left(\mathbb{R}^n\right)$:
$$T_kf(\xi)=f(\xi-k),\quad D_Af(\xi)=|\operatorname*{det}A|^{\frac{1}{2}}f(A\xi),\quad(\xi\in\mathbb{R}^n,f\inL^{2}\left(\mathbb{R}^n\right),k\in\mathbb{Z}^n).$$
For a subset $\Psi$ of $L^{2}\left(\mathbb{R}^n\right)$ define the affine system
$$X(\Psi):=\{D_A^jT_k\psi\,|\,j\in\mathbb{Z},k\in\mathbb{Z}^n,\psi\in\Psi\}.$$
$\Psi$ is called a normalized tight frame (orthogonal) multiwavelet if $X(\Psi)$ is a normalized tight frame (orthonormal basis) for
$L^{2}\left(\mathbb{R}^n\right)$.
\par
A generalized multiresolution analysis (GMRA) is a nest of closed subspaces $(V_n)_{n\in\mathbb{Z}}$ of $L^{2}\left(\mathbb{R}^n\right)$ with the following properties:
\begin{equation}\label{eq00_1}
V_n\subset V_{n+1},\quad(n\in\mathbb{Z});
\end{equation}
\begin{equation}\label{eq00_2}
f\in V_n\mbox{ iff }D_Af\in V_{n+1};
\end{equation}
\begin{equation}\label{eq00_3}
\overline{\bigcup_{n\in\mathbb{Z}}V_n}=L^{2}\left(\mathbb{R}^n\right);
\end{equation}
\begin{equation}\label{eq00_4}
\bigcap_{n\in\mathbb{Z}}V_n=\{0\};
\end{equation}
\begin{equation}\label{eq00_5}
T_kV_0=V_0,\quad(k\in\mathbb{Z}^n).
\end{equation}
\par
A multiscaling function associated to a GMRA is a subset $\Phi$ of $L^{2}\left(\mathbb{R}^n\right)$ such that
$\{T_k\varphi\,|\,k\in\mathbb{Z}^n,\varphi\in\Phi\}$ is a normalized tight frame for $V_0$.
\par
The Fourier transform is given by
$$\widehat{f}(\xi)=\int_{\mathbb{R}^n }f(x)e^{-i\left\langle x\,|\,\xi\right\rangle}\,dx,\quad(\xi\in\mathbb{R}^n ).$$
If $V$ is a closed subspace of a Hilbert space $H$ and $f\in H$ we denote by $P_V$ the projection onto $V$ and
by $P_f$ the operator defined by:
$$P_f(v)=\left\langle v\,|\,f\right\rangle f,\quad(v\in H).$$
\par
The main tool needed for our analysis will be the local trace function introduced in \cite{Dut}. For details, several properties and
the appropriate references
we refer the reader to that paper. We recall below the definition and some properties that will be used here.
The local trace function is associated to shift invariant spaces.
\begin{definition}\label{def00_1}
A closed subspace $V$ of $L^{2}\left(\mathbb{R}^n\right)$ is called shift invariant (or shortly SI) if
$$T_kV=V,\quad(k\in\mathbb{Z}^n).$$
If $\mathcal{A}$ is a subset of $L^{2}\left(\mathbb{R}^n\right)$ then we denote by $S(\mathcal{A})$ the shift invariant space generated
by $\mathcal{A}$,
$$S(\mathcal{A})=\overline{\operatorname*{span}}\{T_k\varphi\,|\,k\in\mathbb{Z}^n,\varphi\in\mathcal{A}\}.$$
\end{definition}
\par
\begin{definition}\label{def00_2}
Let $V$ be a shift invariant subspace of $L^{2}\left(\mathbb{R}^n\right)$. A subset $\Phi$ of $V$ is called a normalized tight frame
generator (or NTF generator) for $V$ if
$$\{T_k\varphi\,|\,k\in\mathbb{Z}^n,\varphi\in\Phi\}$$
is a NTF for $V$.
\end{definition}
\par
Shift invariant spaces have been studied in connection not only to wavelets but also to spline systems, Gabor systems or approximation theory.
The local trace function is constructed using some fiberization techniques introduced in \cite{H} and developed by A.Ron, Z.Shen,
M. Bownik, Z. Rzeszotnik and others (\cite{RS1}, \cite{RS2}, \cite{RS3}, \cite{Bo1}, \cite{BoRz}).
These "fiberization" tools include the range function. For more information on the range function we refer to \cite{H},\cite{Bo1} and \cite{Dut}.
The periodic range function is a measurable map from $\mathbb{R}^n$ to the projections (or subspaces) of $l^{2}\left(\mathbb{Z}^n\right)$ satisfying the periodicity:
$$J_{per}(\xi+2k\pi)=\lambda(k)^*\left(J_{per}(\xi)\right),\quad(k\in\mathbb{Z}^n,\xi\in\mathbb{R}^n),$$
where $\lambda$ denotes the shift on $l^{2}\left(\mathbb{Z}^n\right)$,
$$(\lambda(k)\alpha)(l)=\alpha(l-k),\quad(l\in\mathbb{Z}^n,k\in\mathbb{Z}^n).$$
$\mathcal{T}_{per}$ is defined on $L^{2}\left(\mathbb{R}^n\right)$ by
$$\mathcal{T}_{per} f(\xi)=(\widehat{f}(\xi+2k\pi))_{k\in\mathbb{Z}^n},\quad(\xi\in\mathbb{R}^n,f\inL^{2}\left(\mathbb{R}^n\right)).$$
Periodic range functions are associated to shift invariant subspaces in a unique way, the connection being described by the
following theorem due to Helson:
\begin{theorem}\label{th00_3}
A closed subspace $V$ of $L^{2}\left(\mathbb{R}^n\right)$ is shift invariant if and only if
$$V=\{f\inL^{2}\left(\mathbb{R}^n\right)\,|\, \mathcal{T}_{per} f(\xi)\in J_{per}(\xi)\mbox{ for a.e. }\xi\in\mathbb{R}^n\},$$
for some measurable periodic range function $J_{per}$. The correspondence between $V$ and $J_{per}$ is
bijective under the convention that range functions are identified if they are equal a.e. Furthermore, if
$V=S(\mathcal{A})$ for some countable $\mathcal{A}\subsetL^{2}\left(\mathbb{R}^n\right)$, then
$$J_{per}(\xi)=\overline{\operatorname*{span}}\{\mathcal{T}_{per}\varphi(\xi)\,|\,\varphi\in\mathcal{A}\},\quad \mbox{for
a.e. }\xi\in\mathbb{R}^n.$$
\end{theorem}
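For a simple illustration (ours, not taken from \cite{H}): take $n=1$ and let $V$ be the space of all $f\in L^{2}\left(\mathbb{R}\right)$ with $\widehat{f}=0$ a.e. outside $[-\pi,\pi]$. Then $V$ is shift invariant, and for a.e. $\xi\in[-\pi,\pi)$ only the $k=0$ entry of $\mathcal{T}_{per} f(\xi)=(\widehat{f}(\xi+2k\pi))_{k\in\mathbb{Z}}$ can be nonzero, so on $[-\pi,\pi)$ the range function $J_{per}(\xi)$ is the one-dimensional subspace of $l^{2}\left(\mathbb{Z}\right)$ consisting of the sequences supported on the coordinate $k=0$; its values elsewhere are determined by the periodicity relation above.
\par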
The local trace function is defined as follows:
\begin{definition}\label{def00_4}
Let $V$ be a SI subspace of $L^{2}\left(\mathbb{R}^n\right)$, $T$ a positive operator on $l^{2}\left(\mathbb{Z}^n\right)$ and let $J_{per}$ be the range function
associated to $V$. We define the local trace function associated to $V$ and $T$ as the map from $\mathbb{R}^n$ to
$[0,\infty]$ given by the formula
$$\tau_{V,T}(\xi)=\operatorname*{Trace}\left(TJ_{per}(\xi)\right),\quad(\xi\in\mathbb{R}^n).$$
We define the restricted local trace function associated to $V$ and a vector $f$ in $l^{2}\left(\mathbb{Z}^n\right)$ by
$$\tau_{V,f}(\xi)=\operatorname*{Trace}\left(P_fJ_{per}(\xi)\right)(=\tau_{V,P_f}(\xi)),\quad(\xi\in\mathbb{R}^n).$$
\end{definition}
\par
Theorem \ref{th00_5} gives a formula for the computation of the local trace function.
\begin{theorem}\label{th00_5}\cite{Dut}
Let $V$ be a SI subspace of $L^{2}\left(\mathbb{R}^n\right)$ and $\Phi\subset V$ a NTF generator for $V$. Then for every positive operator $T$
on $l^{2}\left(\mathbb{Z}^n\right)$ and any $f\inl^{2}\left(\mathbb{Z}^n\right)$,
\begin{equation}\label{eq00_5_1}
\tau_{V,T}(\xi)=\sum_{\varphi\in\Phi}\left\langle T\mathcal{T}_{per}\varphi(\xi)\,|\,\mathcal{T}_{per}\varphi(\xi)\right\rangle,\quad\mbox{for a.e. }\xi\in\mathbb{R}^n;
\end{equation}
\begin{equation}\label{eq00_5_2}
\tau_{V,f}(\xi)=\sum_{\varphi\in\Phi}|\left\langle f\,|\,\mathcal{T}_{per}\varphi(\xi)\right\rangle|^2,\quad \mbox{ for a.e. }\xi\in\mathbb{R}^n.
\end{equation}
\end{theorem}
We should point out that the equations (\ref{eq00_5_1}) and (\ref{eq00_5_2}) show that the local trace function can be calculated
with {\it any} NTF generator. This is a fact that we will use frequently: the local trace function can be computed in two (or more) different ways
and the resulting quantities must be equal.
\par
The next theorem characterizes the NTF generators for a SI space.
\begin{theorem}\label{th00_5_1}\cite{Dut}
Let $V$ be a SI subspace of $L^{2}\left(\mathbb{R}^n\right)$, $J_{per}$ its periodic range function and $\Phi$ a countable subset of $L^{2}\left(\mathbb{R}^n\right)$.
Then the following affirmations are
equivalent:
\begin{enumerate}
\item
$\Phi\subset V$ and $\Phi$ is a NTF generator for $V$;
\item
For every $f\inl^{2}\left(\mathbb{Z}^n\right)$
\begin{equation}\label{eq00_5_1_1}
\sum_{\varphi\in\Phi}|\left\langle f\,|\,\mathcal{T}_{per}\varphi(\xi)\right\rangle |^2=\|J_{per}(\xi)(f)\|^2(=\tau_{V,f}(\xi)),\quad\mbox{for a.e. }\xi\in\mathbb{R}^n
\end{equation}
\item
For every $0\neq l\in\mathbb{Z}^n$ and $\alpha\in\{0,1,i\}$,
\begin{equation}\label{eq00_5_1_2}
\sum_{\varphi\in\Phi}|\widehat{\varphi}(\xi)+\overline{\alpha}\widehat{\varphi}(\xi+2l\pi)|^2=\|J_{per}(\xi)(\delta_0+\alpha\delta_l)\|^2(=\tau_{V,\delta_0+\alpha\delta_l}(\xi)),\quad\mbox{for
a.e. }\xi\in\mathbb{R}^n.
\end{equation}
\end{enumerate}
\end{theorem}
\par
The local trace function contains the dimension function $\mbox{dim}_V$ and the spectral function
$\sigma_V$ introduced in \cite{BoRz}. More precisely:
$$\mbox{dim}_V=\tau_{V,I},\quad\sigma_V=\tau_{V,\delta_0}.$$
where
$$\delta_k(l)=\left\{\begin{array}{ccc}
1&\mbox{if}&k=l,\\
0& &\mbox{otherwise.}
\end{array}\right.
$$
Here are some properties of the local trace function:
\begin{proposition}\label{prop00_6}\cite{Dut}
\begin{enumerate}
\item
If $V_1,V_2$ are orthogonal shift invariant subspaces then
$$\tau_{V_1\oplus V_2,f}=\tau_{V_1,f}+\tau_{V_2,f},\quad(f\inl^{2}\left(\mathbb{Z}^n\right)).$$
\item
If $V_1\subset V_2$ are SI subspaces then
$$\tau_{V_1,f}\leq\tau_{V_2,f},\quad(f\inl^{2}\left(\mathbb{Z}^n\right)).$$
\item
If $(V_i)_{i\in\mathbb{N}}$ is an increasing set of SI subspaces and $V=\overline{\cup V_i}$, then
$$\tau_{V,f}=\lim_{i\rightarrow\infty}\tau_{V_i,f}.$$
\end{enumerate}
\end{proposition}
\par
The local trace function is well behaved with respect to dilations: the local trace function of the dilation of a SI space can be computed
in terms of the local trace function of the initial space:
\begin{proposition}\label{prop00_7}\cite{Dut}
Let $V$ be a SI subspace and $A$ an $n\times n$ integer matrix with $\operatorname*{det}A\neq 0$. Then $D_AV$ is
shift invariant and, for every vector $f\inl^{2}\left(\mathbb{Z}^n\right)$,
\begin{equation}\label{eq00_7_1}
\tau_{D_AV,f}(\xi)=\sum_{d\in\mathcal{D}}\tau_{V,D_d^*f}\left(\left(A^*\right)^{-1}(\xi+2d\pi)\right),\quad\mbox{
for
a.e. }\xi\in\mathbb{R}^n,
\end{equation}
where $\mathcal{D}$ is a complete set of representatives of the cosets $\mathbb{Z}^n/A^*\mathbb{Z}^n$
and
$D_d$ is the linear operator on $l^{2}\left(\mathbb{Z}^n\right)$ defined by
$$(D_d\alpha)(k)=\left\{\begin{array}{ccc}
\alpha(l),&\mbox{if}&k=d+A^*l\\
0,& & otherwise
\end{array}
\right.,\quad(k\in\mathbb{Z}^n,\alpha\inl^{2}\left(\mathbb{Z}^n\right)).$$
\end{proposition}
\par
We will also need the following characterization of NTF multiwavelets (see \cite{HW} or \cite{Bo2}).
\begin{theorem}\label{th00_8}
Let $\Psi$ be a finite subset of $L^{2}\left(\mathbb{R}^n\right)$. Then $\Psi$ is a NTF multiwavelet iff the following equations are satisfied for a.e.
$\xi\in\mathbb{R}^n$:
\begin{equation}\label{eq00_8_1}
\sum_{\psi\in\Psi}\sum_{j\in\mathbb{Z}}|\widehat{\psi}|^2((A^*)^j(\xi))=1;
\end{equation}
\begin{equation}\label{eq00_8_2}
\sum_{\psi\in\Psi}\sum_{j\geq0}\widehat{\psi}((A^*)^j(\xi))\overline{\widehat{\psi}}((A^*)^j(\xi+2s\pi))=0,\quad(s\in\mathbb{Z}^n\setminus A^*\mathbb{Z}^n).
\end{equation}
\end{theorem}
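For an illustration of these equations (a standard example which we add; it is not discussed in the text), take $n=1$, $A=2$ and $\Psi=\{\psi\}$ with $\widehat{\psi}=\chi_{[-2\pi,-\pi)\cup[\pi,2\pi)}$, the Shannon wavelet. The dyadic dilates $2^j\big([-2\pi,-\pi)\cup[\pi,2\pi)\big)$, $j\in\mathbb{Z}$, partition $\mathbb{R}\setminus\{0\}$, so the left-hand side of (\ref{eq00_8_1}) equals $1$ a.e.; and for $s$ odd and $j\geq0$ the points $2^j\xi$ and $2^j(\xi+2s\pi)$ differ by $2^{j+1}|s|\pi$, while a short case check shows that no two points of $[-2\pi,-\pi)\cup[\pi,2\pi)$ differ by an odd multiple of $2\pi$ or by $4\pi$ or more, so every term in (\ref{eq00_8_2}) vanishes and $\Psi$ is indeed a NTF (in fact orthonormal) multiwavelet.
\par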
And the last tool that we will need is a relation between multiscaling functions and multiwavelets which was proved in \cite{Dut}.
\begin{theorem}\label{th00_9}
Let $(V_n)_{n\in\mathbb{Z}}$ be a GMRA and $\Psi$ a NTF generator for $W_0:=V_1\ominus V_0$.
Let $\Phi$ be a countable subset
of $L^{2}\left(\mathbb{R}^n\right)$. The following affirmations are equivalent:
\begin{enumerate}
\item
$\Phi$ is contained in $V_0$ and is a NTF generator for $V_0$;
\item
The following equations hold: for every $s\in\mathbb{Z}^n$,
\begin{equation}\label{eq00_9_1}
\sum_{\psi\in\Psi}\sum_{j\geq 1}\widehat{\psi}((A^*)^j\xi)\overline{\widehat{\psi}}((A^*)^j(\xi+2s\pi))=
\sum_{\varphi\in\Phi}\widehat{\varphi}(\xi)\overline{\widehat{\varphi}}(\xi+2s\pi).
\end{equation}
for a.e. $\xi\in\mathbb{R}^n$.
\end{enumerate}
\end{theorem}
\section{\label{main}Main results}
The first theorem of the section starts with a GMRA for which the scaling functions are given. The theorem characterizes the
wavelets associated to this GMRA.
\begin{theorem}\label{th1_1}
Let $(V_n)_{n\in\mathbb{Z}}$ be a GMRA and $\Phi$ a NTF generator for $V_0$. Then $\Psi$ is a NTF generator for $W_0:=V_1\ominus V_0$ iff for a.e. $\xi\in\mathbb{R}^n$:
\begin{equation}\label{eq1_1_1}
-\sum_{\varphi\in\Phi}\widehat{\varphi}(\xi)\overline{\widehat{\varphi}}(\xi+2s\pi)=\sum_{\psi\in\Psi}\widehat{\psi}(\xi)\overline{\widehat{\psi}}(\xi+2s\pi),
\quad(s\in\mathbb{Z}^n\setminus A^*\mathbb{Z}^n);
\end{equation}
$$\sum_{\varphi\in\Phi}\widehat{\varphi}((A^*)^{-1}\xi)\overline{\widehat{\varphi}}((A^*)^{-1}(\xi+2s\pi))-
\sum_{\varphi\in\Phi}\widehat{\varphi}(\xi)\overline{\widehat{\varphi}}(\xi+2s\pi)=$$
\begin{equation}\label{eq1_1_2}
\sum_{\psi\in\Psi}\widehat{\psi}(\xi)\overline{\widehat{\psi}}(\xi+2s\pi),\quad(s\in A^*\mathbb{Z}^n).
\end{equation}
\end{theorem}
\begin{proof}
We will use theorem \ref{th00_5_1}. For this, we have to compute $\tau_{W_0,\delta_0+\lambda\delta_s}$ for $s\neq0$, $\lambda\in\{0,1,i\}$. Using the additivity, this will reduce to the computation
of the local trace function for $V_1$.
\par
Since $V_1=D_AV_0$, the dilation property given in proposition \ref{prop00_7}, yields:
$$\tau_{V_1,\delta_0+\lambda\delta_s}(\xi)=\sum_{d\in\mathcal{D}}\tau_{V_0,D_d^*(\delta_0+\lambda\delta_s)}((A^*)^{-1}(\xi+2d\pi)).$$
\par
The first case we consider is when $s$ is not in $A^*\mathbb{Z}^n$. In this case we can assume $0,s\in\mathcal{D}$ and
$$D_d^*(\delta_0+\lambda\delta_s)=\left\{
\begin{array}{ccc}
0&\mbox{if}&d\neq0\mbox{ and }d\neq s\\
\delta_0&\mbox{if}&d=0\\
\lambda\delta_0&\mbox{if}&d=s.
\end{array}\right.$$
So that
\begin{align*}
\tau_{V_1,\delta_0+\lambda\delta_s}(\xi)&=\tau_{V_0,\delta_0}((A^*)^{-1}\xi)+\tau_{V_0,\lambda\delta_0}((A^*)^{-1}(\xi+2s\pi))\\
&=\sum_{\varphi\in\Phi}|\widehat{\varphi}|^2((A^*)^{-1}\xi)+\sum_{\varphi\in\Phi}|\lambda|^2|\widehat{\varphi}|^2((A^*)^{-1}(\xi+2s\pi)).
\end{align*}
For the last equality we used theorem \ref{th00_5} for $V_0$.
\par
Also, we have
$$\tau_{V_0,\delta_0+\lambda\delta_s}(\xi)=\sum_{\varphi\in\Phi}|\widehat{\varphi}(\xi)+\overline{\lambda}\widehat{\varphi}(\xi+2s\pi)|^2,$$
$$\tau_{W_0,\delta_0+\lambda\delta_s}(\xi)=\tau_{V_1,\delta_0+\lambda\delta_s}(\xi)-\tau_{V_0,\delta_0+\lambda\delta_s}(\xi).$$
and $\Psi$ is a NTF for $W_0$ if and only if for all $s\neq0$
$$\tau_{W_0,\delta_0+\lambda\delta_s}(\xi)=
\sum_{\psi\in\Psi}|\widehat{\psi}(\xi)+\overline{\lambda}\widehat{\psi}(\xi+2s\pi)|^2,$$
Hence, if $\Psi$ is a NTF for $W_0$ then:
\\
If we take $\lambda=0$ then
\begin{equation}\label{eq1_1_3}
\sum_{\varphi\in\Phi}|\widehat{\varphi}|^2((A^*)^{-1}\xi)-\sum_{\varphi\in\Phi}|\widehat{\varphi}|^2(\xi)=\sum_{\psi\in\Psi}|\widehat{\psi}|^2(\xi),
\end{equation}
Then take $\lambda=1$ and $\lambda=i$ and subtract the equalities:
\begin{equation}\label{eq1_1_4}
-\sum_{\varphi\in\Phi}\widehat{\varphi}(\xi)\overline{\widehat{\varphi}}(\xi+2s\pi)=\sum_{\psi\in\Psi}\widehat{\psi}(\xi)\overline{\widehat{\psi}}(\xi+2s\pi)
\end{equation}
for all $s\in\mathbb{Z}^n\setminus A^*\mathbb{Z}^n$.
\par
Now take $s\in A^*\mathbb{Z}^n$. Then
$$D_d^*(\delta_0+\lambda\delta_s)=\left\{
\begin{array}{ccc}
0&\mbox{if}&d\neq0\\
\delta_0+\lambda\delta_{(A^*)^{-1}s}&\mbox{if}&d=0.
\end{array}
\right.$$
$$\tau_{V_1,\delta_0+\lambda\delta_s}(\xi)=\tau_{V_0,\delta_0+\lambda\delta_{(A^*)^{-1}s}}((A^*)^{-1}\xi)$$
$$=\sum_{\varphi\in\Phi}|\widehat{\varphi}((A^*)^{-1}\xi)+\overline{\lambda}\widehat{\varphi}((A^*)^{-1}\xi+2\pi(A^*)^{-1}s)|^2
$$
Therefore,
$$\tau_{W_0,\delta_0+\lambda\delta_s}(\xi)=\sum_{\varphi\in\Phi}|\widehat{\varphi}((A^*)^{-1}\xi)+\overline{\lambda}\widehat{\varphi}((A^*)^{-1}(\xi+2s\pi))|^2
-\sum_{\varphi\in\Phi}|\widehat{\varphi}(\xi)+\overline{\lambda}\widehat{\varphi}(\xi+2s\pi)|^2.$$
With (\ref{eq1_1_3}) it follows that
$$\sum_{\varphi\in\Phi}\widehat{\varphi}((A^*)^{-1}\xi)\overline{\widehat{\varphi}}((A^*)^{-1}(\xi+2s\pi))-
\sum_{\varphi\in\Phi}\widehat{\varphi}(\xi)\overline{\widehat{\varphi}}(\xi+2s\pi)
=\sum_{\psi\in\Psi}\widehat{\psi}(\xi)\overline{\widehat{\psi}}(\xi+2s\pi).$$
The converse follows by retracing the calculations and using theorem \ref{th00_5_1}.
\end{proof}
\par
For the next theorem we assume that the multiwavelet associated to a fixed GMRA is given, and we show that if some functions satisfy the equations
obtained in theorem \ref{th1_1}, then they are multiscaling functions associated to this GMRA.
\begin{theorem}\label{th1_2}
Let $(V_n)_{n\in\mathbb{Z}}$ be a GMRA with $\operatorname*{dim}_{V_0}(\xi)<\infty$ on a set of positive measure,
let $\Psi$ be a NTF generator for $W_0$, and let $\Phi$ be a countable subset of $L^{2}\left(\mathbb{R}^n\right)$. The following statements are equivalent:
\begin{enumerate}
\item
$\Phi$ is a NTF generator for $V_0$.
\item
The equations (\ref{eq1_1_1}) and (\ref{eq1_1_2}) hold and
\begin{equation}\label{eq1_2_1}
\lim_{j\rightarrow\infty}\sum_{\varphi\in\Phi}|\widehat{\varphi} |^2((A^*)^j\xi)=0,\mbox{ for a.e. }\xi\in\mathbb{R}^n.
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof}
We know that (i) implies (\ref{eq1_1_1}) and (\ref{eq1_1_2}). Let's check (\ref{eq1_2_1}). By theorem 4.1 in
\cite{BoRz}
$$\sum_{j=-\infty}^\infty\sigma_{W_0}((A^*)^j\xi)=1,\mbox{ for a.e. }\xi\in\mathbb{R}^n$$
and
$$\sigma_{V_0}(\xi)=\sum_{j=1}^\infty\sigma_{W_0}((A^*)^j\xi),\mbox{ for a.e. }\xi\in\mathbb{R}^n.$$
Then
$$\sigma_{V_0}((A^*)^J\xi)=\sum_{j=J+1}^{\infty}\sigma_{W_0}((A^*)^j\xi)\rightarrow 0,\mbox{ as }J\rightarrow\infty,\mbox{ for a.e. }\xi\in\mathbb{R}^n.$$
But
$$\sigma_{V_0}(\xi)=\tau_{V_0,\delta_0}(\xi)=\sum_{\varphi\in\Phi}|\widehat{\varphi}|^2(\xi),$$
and (\ref{eq1_2_1}) follows immediately.
\par
For the converse we use theorem \ref{th00_9}. For all $j\geq1$, using (\ref{eq1_1_2}) for $\xi=(A^*)^j\xi$ and $s=(A^*)^js$,
$$\sum_{\psi\in\Psi}\widehat{\psi}((A^*)^j\xi)\overline{\widehat{\psi}}((A^*)^j(\xi+2s\pi))$$
$$=\sum_{\varphi\in\Phi}\widehat{\varphi}((A^*)^{j-1}\xi)\overline{\widehat{\varphi}}((A^*)^{j-1}(\xi+2s\pi))
-\sum_{\varphi\in\Phi}\widehat{\varphi}((A^*)^j\xi)\overline{\widehat{\varphi}}((A^*)^j(\xi+2s\pi)).$$
Now sum for $j\in\{1,...,J\}$.
$$\sum_{j=1}^J\sum_{\psi\in\Psi}\widehat{\psi}((A^*)^j\xi)\overline{\widehat{\psi}}((A^*)^j(\xi+2s\pi))$$
$$=\sum_{\varphi\in\Phi}\widehat{\varphi}(\xi)\overline{\widehat{\varphi}}(\xi+2s\pi)
-\sum_{\varphi\in\Phi}\widehat{\varphi}((A^*)^J\xi)\overline{\widehat{\varphi}}((A^*)^J(\xi+2s\pi)).$$
But, by (\ref{eq1_2_1}),
$$|\sum_{\varphi\in\Phi}\widehat{\varphi}((A^*)^J\xi)\overline{\widehat{\varphi}}((A^*)^J(\xi+2s\pi))|$$
$$\leq\left(\sum_{\varphi\in\Phi}|\widehat{\varphi}|^2((A^*)^J\xi)\right)^{1/2}
\left(\sum_{\varphi\in\Phi}|\widehat{\varphi}|^2((A^*)^J(\xi+2s\pi))\right)^{1/2}\rightarrow 0,\mbox{ as }J\rightarrow\infty,\mbox{ for a.e. }\xi.$$
Hence
$$\sum_{j=1}^\infty\sum_{\psi\in\Psi}\widehat{\psi}((A^*)^j\xi)\overline{\widehat{\psi}}((A^*)^j(\xi+2s\pi))
=\sum_{\varphi\in\Phi}\widehat{\varphi}(\xi)\overline{\widehat{\varphi}}(\xi+2s\pi)$$
and, with theorem \ref{th00_9}, we can conclude that $\Phi$ is a NTF generator for $V_0$.
\end{proof}
\begin{proposition}\label{prop1_3}
Let $V_0$ be a refinable space, i.e., $V_0\subset D_AV_0$, and let $\Phi$ be a NTF generator for $V_0$. Denote by
$V_j=D_A^jV_0$, $j\in\mathbb{Z}$. The following statements are equivalent:
\begin{enumerate}
\item
$$\overline{\bigcup_{j\in\mathbb{Z}}V_j}=L^{2}\left(\mathbb{R}^n\right);$$
\item
$$\lim_{j\rightarrow\infty}\sum_{\varphi\in\Phi}|\widehat{\varphi}|^2((A^*)^{-j}\xi)=1,\mbox{ for a.e. }\xi\in\mathbb{R}^n.$$
\end{enumerate}
\end{proposition}
\begin{proof}
Let
$$V:=\overline{\bigcup_{j\in\mathbb{Z}}V_j}.$$
Then, for a.e. $\xi$,
$$\tau_{V,\delta_0}(\xi)=\lim_{j\rightarrow\infty}\tau_{V_j,\delta_0}(\xi).$$
But, according to proposition \ref{prop00_7},
$$\tau_{V_j,\delta_0}(\xi)=\tau_{V_0,\delta_0}((A^*)^{-j}\xi)=\sum_{\varphi\in\Phi}|\widehat{\varphi}|^2((A^*)^{-j}\xi).$$
If (i) holds then $\tau_{V,\delta_0}(\xi)=1$ so (ii) is immediate.
\par
If (ii) holds then $\tau_{V,\delta_0}=1$ which implies $\delta_0\in J_{per}(\xi)$ for a.e. $\xi$, $J_{per}$ being the
periodic range function associated to $V$. By periodicity,
$\delta_k=\lambda(-k)^*\delta_0\in\lambda(-k)^*J_{per}(\xi+2k\pi)=J_{per}(\xi)$ so that
$\delta_k\in J_{per}(\xi)$ for all $k$ for a.e. $\xi$. This means that $J_{per}(\xi)=l^{2}\left(\mathbb{Z}^n\right)$ almost everywhere
so $V=L^{2}\left(\mathbb{R}^n\right)$.
\end{proof}
\begin{proposition}\label{prop1_4}
Let $V_0$ be a refinable space and let $\Phi$ be a NTF generator for $V_0$. Then
$$\left(\sum_{\varphi\in\Phi}|\widehat{\varphi}|^2((A^*)^{-j}\xi)\right)_{j\geq0}$$ is an increasing sequence for a.e. $\xi$.
\end{proposition}
\begin{proof}
Indeed, $\tau_{V_j,\delta_0}(\xi)=\sum_{\varphi\in\Phi}|\widehat{\varphi}|^2((A^*)^{-j}\xi)$, the spaces $V_j$ are increasing because $V_0\subset D_AV_0$, and the claim follows from the
monotonicity of the local trace function.
\end{proof}
\begin{theorem}\label{th1_5}
Let $\Phi,\Psi$ be two subsets of $L^{2}\left(\mathbb{R}^n\right)$ with $\Psi$ finite and $\Phi$ countable. Suppose the following relations are
satisfied:
\begin{equation}\label{eq1_5_0}
\sum_{\varphi\in\Phi}|\widehat{\varphi}|^2(\xi)<\infty,\mbox{ for a.e. }\xi
\end{equation}
\begin{equation}\label{eq1_5_1}
-\sum_{\varphi\in\Phi}\widehat{\varphi}(\xi)\overline{\widehat{\varphi}}(\xi+2s\pi)=
\sum_{\psi\in\Psi}\widehat{\psi}(\xi)\overline{\widehat{\psi}}(\xi+2s\pi),(s\in\mathbb{Z}^n\setminus A^*\mathbb{Z}^n);
\end{equation}
$$\sum_{\varphi\in\Phi}\widehat{\varphi}((A^*)^{-1}\xi)\overline{\widehat{\varphi}}((A^*)^{-1}(\xi+2s\pi))
-\sum_{\varphi\in\Phi}\widehat{\varphi}(\xi)\overline{\widehat{\varphi}}(\xi+2s\pi)=$$
\begin{equation}\label{eq1_5_2}
=\sum_{\psi\in\Psi}\widehat{\psi}(\xi)\overline{\widehat{\psi}}(\xi+2s\pi),\,(s\in A^*\mathbb{Z}^n);
\end{equation}
\begin{equation}\label{eq1_5_3}
\lim_{J\rightarrow\infty}\sum_{\varphi\in\Phi}|\widehat{\varphi}|^2((A^*)^J\xi)=0,\mbox{ for a.e. }\xi;
\end{equation}
\begin{equation}\label{eq1_5_4}
\lim_{J\rightarrow\infty}\sum_{\varphi\in\Phi}|\widehat{\varphi}|^2((A^*)^{-J}\xi)=1,\mbox{ for a.e. }\xi.
\end{equation}
Then
$$\{D_A^jT_k\psi\,|\,j\in\mathbb{Z},k\in\mathbb{Z}^n,\psi\in\Psi\}$$
is a NTF for $L^{2}\left(\mathbb{R}^n\right)$.
\end{theorem}
\begin{proof}
We use the characterization from theorem \ref{th00_8}. For $j\geq 1$ and any $s\in\mathbb{Z}^n$, using (\ref{eq1_5_2}) with $\xi=(A^*)^j\xi$ and
$s=(A^*)^js$, we have
$$\sum_{\psi\in\Psi}\widehat{\psi}((A^*)^j\xi)\overline{\widehat{\psi}}((A^*)^j(\xi+2s\pi))=$$
$$=\sum_{\varphi\in\Phi}\widehat{\varphi}((A^*)^{j-1}\xi)\overline{\widehat{\varphi}}((A^*)^{j-1}(\xi+2s\pi))
-\sum_{\varphi\in\Phi}\widehat{\varphi}((A^*)^j\xi)\overline{\widehat{\varphi}}((A^*)^j(\xi+2s\pi))$$
Then, sum over $j\in\{1,...,J\}$:
$$\sum_{j=1}^J\sum_{\psi\in\Psi}\widehat{\psi}((A^*)^j\xi)\overline{\widehat{\psi}}((A^*)^j(\xi+2s\pi))=$$
$$\sum_{\varphi\in\Phi}\widehat{\varphi}(\xi)\overline{\widehat{\varphi}}(\xi+2s\pi)-
\sum_{\varphi\in\Phi}\widehat{\varphi}((A^*)^J\xi)\overline{\widehat{\varphi}}((A^*)^J(\xi+2s\pi))$$
An application of the Schwarz inequality, together with (\ref{eq1_5_0}) and (\ref{eq1_5_3}), shows that the last sum converges to 0 as $J\rightarrow\infty$; hence one can
conclude that
\begin{equation}\label{eq1_5_5}
\sum_{j=1}^\infty\sum_{\psi\in\Psi}\widehat{\psi}((A^*)^j\xi)\overline{\widehat{\psi}}((A^*)^j(\xi+2s\pi))=
\sum_{\varphi\in\Phi}\widehat{\varphi}(\xi)\overline{\widehat{\varphi}}(\xi+2s\pi),\,(s\in\mathbb{Z}^n).
\end{equation}
This, added to (\ref{eq1_5_1}), yields
$$\sum_{j=0}^\infty\sum_{\psi\in\Psi}\widehat{\psi}((A^*)^j\xi)\overline{\widehat{\psi}}((A^*)^j(\xi+2s\pi))=0,\,(s\in\mathbb{Z}^n\setminus A^*\mathbb{Z}^n).$$
Also, from (\ref{eq1_5_5}) with $s=0$ and $\xi=(A^*)^{-J}\xi$, we have
$$\sum_{j=-J+1}^\infty\sum_{\psi\in\Psi}|\widehat{\psi}|^2((A^*)^j\xi)=
\sum_{\varphi\in\Phi}|\widehat{\varphi}|^2((A^*)^{-J}\xi).$$
Then (\ref{eq1_5_4}) implies
$$\sum_{j=-\infty}^\infty\sum_{\psi\in\Psi}|\widehat{\psi}|^2((A^*)^j\xi)=1,\mbox{ for a.e. }\xi,$$
and the conclusion is proved with theorem \ref{th00_8}.
\end{proof}
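To illustrate how the hypotheses of theorem \ref{th1_5} can be checked in a concrete case, the following sketch uses the Shannon pair $\widehat{\varphi}=\chi_{[-\pi,\pi)}$, $\widehat{\psi}=\chi_{[-2\pi,-\pi)\cup[\pi,2\pi)}$ with $n=1$, $A=2$; this pair is an assumption of ours, not an example taken from the text, and the conclusion of the theorem then recovers the well-known Shannon wavelet set.
\begin{verbatim}
import numpy as np

# Shannon pair (assumed example); the functions are real, so conjugates are omitted.
def phi_hat(x):
    return ((-np.pi <= x) & (x < np.pi)).astype(float)

def psi_hat(x):
    return (((-2*np.pi <= x) & (x < -np.pi)) |
            ((np.pi <= x) & (x < 2*np.pi))).astype(float)

xi = np.linspace(-8*np.pi, 8*np.pi, 20001)

# (eq1_5_1), s odd:
for s in (1, -1, 3):
    lhs = -phi_hat(xi) * phi_hat(xi + 2*s*np.pi)
    rhs = psi_hat(xi) * psi_hat(xi + 2*s*np.pi)
    print("eq1_5_1, s =", s, ":", np.abs(lhs - rhs).max())

# (eq1_5_2), s even (including 0):
for s in (0, 2, -2):
    lhs = (phi_hat(xi/2) * phi_hat((xi + 2*s*np.pi)/2)
           - phi_hat(xi) * phi_hat(xi + 2*s*np.pi))
    rhs = psi_hat(xi) * psi_hat(xi + 2*s*np.pi)
    print("eq1_5_2, s =", s, ":", np.abs(lhs - rhs).max())

# (eq1_5_3) and (eq1_5_4), checked at a large dyadic scale J:
J = 30
print("eq1_5_3:", phi_hat(2.0**J * xi[np.abs(xi) > 1e-6]).max())   # 0.0
print("eq1_5_4:", phi_hat(2.0**(-J) * xi).min())                   # 1.0
\end{verbatim}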
\section{\label{construction}A construction of NTF wavelets}
We give a construction of NTF wavelets that starts from the spectral function $\sigma_{V_0}=\tau_{V_0,\delta_0}$. We construct
two sets of functions $\Phi$ and $\Psi$ satisfying the hypotheses of theorem \ref{th1_5} and such that
$$\sigma(\xi)=\sum_{\varphi\in\Phi}|\widehat{\varphi}|^2(\xi)=\sum_{j\geq1}\sum_{\psi\in\Psi}|\widehat{\psi}((A^*)^j\xi)|^2.$$
We will obtain a NTF multiwavelet, $\Psi$.
\par
In the sequel, we describe the construction. The starting point is a function $\sigma$ on $\mathbb{R}^n$ with the following properties:
\begin{equation}\label{eq2_1}
\sigma\in L^1(\mathbb{R}^n),\quad\sigma\geq0;
\end{equation}
\begin{equation}\label{eq2_2}
\sigma(A^*\xi)\leq\sigma(\xi),\mbox{ for a.e. }\xi\in\mathbb{R}^n;
\end{equation}
\begin{equation}\label{eq2_3}
\mbox{If }K\mbox{ is the support of }\xi\mapsto\sigma((A^*)^{-1}\xi)-\sigma(\xi)\mbox{ then }\operatorname*{Per}(\chi_{K})\mbox{ is bounded};
\end{equation}
Recall that, for $f\in L^1(\mathbb{R}^n)$,
$$\operatorname*{Per}(f)(\xi):=\sum_{k\in\mathbb{Z}^n}f(\xi+2k\pi),\quad(\xi\in\mathbb{R}^n).$$
\begin{equation}\label{eq2_4}
\lim_{J\rightarrow\infty}\sigma((A^*)^{-J}\xi)=1,\mbox{ for a.e. }\xi\in\mathbb{R}^n;
\end{equation}
\begin{equation}\label{eq2_5}
\lim_{J\rightarrow\infty}\sigma((A^*)^J\xi)=0,\mbox{ for a.e. }\xi\in\mathbb{R}^n.
\end{equation}
In view of theorem \ref{th1_2} (ii), propositions \ref{prop1_3} and \ref{prop1_4}, and since $\sigma$ should be a spectral
function, the conditions (\ref{eq2_1}), (\ref{eq2_2}), (\ref{eq2_4}) and (\ref{eq2_5}) are natural. Condition
(\ref{eq2_3}) will allow us to pick a finite number of $\psi$'s.
\par
We can restate condition (\ref{eq2_3}) as follows:
\begin{equation}\label{eq2_6}
\begin{gathered}
\mbox{There is a finite partition }K_1,...,K_p\mbox{ of }K\mbox{ such that no }K_i\\
\mbox{contains both }\xi\mbox{ and }\xi+2k\pi\mbox{ for some }k\neq 0,\ \xi\in\mathbb{R}^n.
\end{gathered}
\end{equation}
This is clear because $\operatorname*{Per}(\chi_{K})(\xi)=l$ means that there are exactly $l$ points in $K$ that are congruent to
$\xi$ modulo $2\pi$. One way to choose such a partition is by intersecting $K$ with the intervals
$[-\pi,\pi]^n+2k\pi$. A better way is to let $p$ be the maximum of $\operatorname*{Per}(\chi_{K})$. First pick a measurable subset $K_p$ of $K$ such that $K_p$ is
congruent modulo $2\pi$ to $\{\xi\in[-\pi,\pi]^n\,|\,\operatorname*{Per}(\chi_{K})(\xi)=p\}$, then pick a measurable subset $K_{p-1}$ of
$K\setminus K_p$ such that $K_{p-1}$ is congruent to
$\{\xi\in[-\pi,\pi]^n\,|\,\operatorname*{Per}(\chi_{K})(\xi)\geq p-1\}$ and so on, finally $K_1$ is a subset of $K\setminus\cup_{i=2}^pK_i$ which is
congruent modulo $2\pi$ to
$\{\xi\in[-\pi,\pi]^n\,|\,\operatorname*{Per}(\chi_{K})(\xi)\geq 1\}$.
\par
Suppose now that we have built such a partition. Define the measurable functions $\varphi_k$ for $k\in\mathbb{Z}^n$ as follows:
$$\widehat{\varphi}_k(\xi)=\left\{
\begin{array}{ccc}
\sqrt{\sigma(\xi)}&\mbox{if}&\xi\in [-\pi,\pi]^n+2k\pi\\
0& &\mbox{otherwise}.
\end{array}
\right.
$$
Clearly then $\varphi_k\in L^{2}\left(\mathbb{R}^n\right)$ and
\begin{equation}\label{eq2_7}
\sum_{k\in\mathbb{Z}^n}|\widehat{\varphi}_k|^2(\xi)=\sigma(\xi),\mbox{ for a.e. }\xi,
\end{equation}
and
\begin{equation}\label{eq2_8}
\widehat{\varphi}_k(\xi)\overline{\widehat{\varphi}}_k(\xi+2s\pi)=0,\quad(\xi\in\mathbb{R}^n,s\in\mathbb{Z}^n\setminus\{0\},k\in\mathbb{Z}^n).
\end{equation}
Now we construct the wavelets: for $i\in\{1,...,p\}$ define a measurable $\psi_i$ such that
$$|\widehat{\psi}_i|^2(\xi)=\left\{
\begin{array}{ccc}
\sigma((A^*)^{-1}(\xi))-\sigma(\xi)&\mbox{if}&\xi\in K_i\\
0& &\mbox{otherwise}.
\end{array}
\right.
$$
This is possible because (\ref{eq2_2}) holds. Discard those $\psi_i$'s which are identically 0.
Then, of course, $\psi_i\in L^{2}\left(\mathbb{R}^n\right)$,
\begin{equation}\label{eq2_9}
\sum_{i=1}^p|\widehat{\psi}_i|^2(\xi)=\sigma((A^*)^{-1}\xi)-\sigma(\xi),\mbox{ for a.e. }\xi\in\mathbb{R}^n,
\end{equation}
and
\begin{equation}\label{eq2_10}
\widehat{\psi}_i(\xi)\overline{\widehat{\psi}}_i(\xi+2s\pi)=0,\quad(\xi\in\mathbb{R}^n,s\in\mathbb{Z}^n\setminus\{0\},i\in\{1,...,p\}).
\end{equation}
Relations (\ref{eq2_7}), (\ref{eq2_8}), (\ref{eq2_9}), (\ref{eq2_10}) and theorem \ref{th1_5} imply that $\{\psi_i\,|\, i\in\{1,...,p\}\}$ is
a NTF multiwavelet.
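The construction is easy to carry out numerically. The following sketch is our own illustration; the one-dimensional setting $n=1$, $A=2$ and the particular input $\sigma=\chi_{(-1,3/2)}$ are assumptions. It builds the $\widehat{\psi}_i$'s by intersecting the support of $\sigma((A^*)^{-1}\cdot)-\sigma$ with the intervals $[-\pi,\pi)+2k\pi$, and then checks the first condition (\ref{eq00_8_1}) of theorem \ref{th00_8} on a grid.
\begin{verbatim}
import numpy as np

# Schematic 1-D version (A = 2) of the construction: sigma -> partition -> psi_hat's.
def make_psi_hats(sigma, ks):
    def diff(x):                                  # sigma(x/2) - sigma(x) >= 0 by (eq2_2)
        return np.maximum(sigma(x / 2) - sigma(x), 0.0)
    def piece(k):                                 # K_k = K intersected with [-pi,pi) + 2k pi
        def psi_hat(x):
            in_k = (-np.pi + 2*k*np.pi <= x) & (x < np.pi + 2*k*np.pi)
            return np.sqrt(diff(x)) * in_k
        return psi_hat
    return [piece(k) for k in ks]

# Assumed input: sigma = chi_{(-1, 1.5)}, which satisfies (eq2_1)-(eq2_5).
sigma = lambda x: ((-1.0 < x) & (x < 1.5)).astype(float)
psi_hats = make_psi_hats(sigma, ks=range(-2, 3))  # pieces that could meet (-2, 3)

# Check: sum over j in Z (truncated) and over i of |psi_hat_i(2^j xi)|^2 = 1 a.e.
xi = np.linspace(-20, 20, 4001)
xi = xi[np.abs(xi) > 0.005]                       # avoid the single grid point at 0
total = sum(sum(np.abs(f(2.0**j * xi))**2 for f in psi_hats) for j in range(-8, 16))
print("min =", total.min(), " max =", total.max())  # both should be 1.0
\end{verbatim}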
\begin{example}\label{ex2_1}
{\bf [NTF multi-wavelet sets]}
Any NTF multi-wavelet set can be obtained with this construction. Recall that a NTF multi-wavelet set is a
NTF multiwavelet $\Psi=\{\psi_1,...,\psi_p\}$ such that each $\widehat{\psi}_i$ is the characteristic function of some measurable
set $E_i$.
\par
First, we give a theorem which characterizes NTF multi-wavelet sets. For a different proof when $p=1$ and some related topics see
\cite{DDGH}.
\begin{theorem}\label{th2_2}
If $\Psi=\{\psi_1,...,\psi_p\}$ and $\widehat{\psi}_i=\chi_{E_i}, (i\in\{1,...,p\})$, then
$\Psi$ is a NTF multi-wavelet set if and only if the following conditions are satisfied:
\begin{equation}\label{eq2_2_1}
E_1,...,E_p\mbox{ are mutually disjoint};
\end{equation}
\begin{equation}\label{eq2_2_2}
E_i\cap(E_i+2k\pi)=\emptyset,\quad(k\neq 0,i\in\{1,...,p\});
\end{equation}
\begin{equation}\label{eq2_2_3}
\{(A^*)^j(\cup_{i=1}^pE_i)\,|\, j\in\mathbb{Z}\}\mbox{ is a partition of }\mathbb{R}^n.
\end{equation}
\end{theorem}
\begin{proof}
By theorem \ref{th00_8}, $\Psi$ is a NTF multi-wavelet iff the equations (\ref{eq00_8_1}) and (\ref{eq00_8_2}) hold. Since
$\widehat{\psi}_i=\chi_{E_i}$, (\ref{eq00_8_2}) is equivalent to
$$\chi_{E_i}((A^*)^j\xi)\chi_{E_i}((A^*)^j(\xi+2s\pi))=0,\quad(\xi\in\mathbb{R}^n,i\in\{1,...,p\},j\geq0,s\in\mathbb{Z}^n\setminus A^*\mathbb{Z}^n),$$
and, as any point of $\mathbb{R}^n$ can be written as $(A^*)^j\xi$ and any $k\in\mathbb{Z}^n\setminus\{0\}$ can be written as $(A^*)^js$ with $j\geq0$ and $s\in\mathbb{Z}^n\setminus A^*\mathbb{Z}^n$, this is true iff
(\ref{eq2_2_2}) holds.
\par
Condition (\ref{eq00_8_1}) can be rewritten as
$$\sum_{i=1}^p\sum_{j\in\mathbb{Z}}\chi_{(A^*)^jE_i}=1,$$
which is equivalent to (\ref{eq2_2_1}) and (\ref{eq2_2_3}).
\end{proof}
Now let's see how any NTF multi-wavelet set is obtained with our construction.
\par
Let $\Psi:=\{\psi_1,...,\psi_p\}$, $\widehat{\psi}_i=\chi_{E_i},(i\in\{1,...,p\})$ be the NTF multi-wavelet set. Define
$$\sigma(\xi)=\sum_{i=1}^p\sum_{j\geq1}\chi_{E_i}((A^*)^j\xi),\quad(\xi\in\mathbb{R}^n).$$
Then theorem \ref{th2_2} implies that $\sigma=\chi_E$, where
$$E=\bigcup_{i=1}^p\bigcup_{j\geq1}(A^*)^{-j}E_i.$$
Therefore,
\begin{equation}\label{eq2_1_1}
\sigma((A^*)^{-1}\xi)-\sigma(\xi)=\chi_{\cup_{i=1}^pE_i}(\xi),\quad(\xi\in\mathbb{R}^n).
\end{equation}
A simple check shows that $\sigma$ satisfies the equations (\ref{eq2_1})-(\ref{eq2_5}).
\par
Then we can choose the partition to be $K_i=E_i$ and $\widehat{\psi}_i=\chi_{E_i}$, $i\in\{1,...,p\}$, so that, after the
construction, we get back our NTF multi-wavelet set $\Psi$.
\par
But how must $\sigma$ be chosen if we want to obtain a multi-wavelet set from the construction? We saw that $\sigma$ must be
the characteristic function of some measurable set $E$:
$$\sigma=\chi_E.$$
The conditions (\ref{eq2_1})-(\ref{eq2_5}) can be reformulated as:
\begin{equation}\label{eq2_1_2}
E\mbox{ has finite measure };
\end{equation}
\begin{equation}\label{eq2_1_3}
E\subset A^*E;
\end{equation}
\begin{equation}\label{eq2_1_4}
\operatorname*{Per}(\chi_{A^*E\setminus E})\mbox{ is bounded};
\end{equation}
\begin{equation}\label{eq2_1_5}
\mbox{For a.e. }\xi\mbox{ there exists a }J_0\mbox{ such that }(A^*)^{-j}\xi\in E\mbox{ for }j\geq J_0;
\end{equation}
\begin{equation}\label{eq2_1_6}
\mbox{For a.e. }\xi\mbox{ there exists a }J_0\mbox{ such that }(A^*)^j\xi\not\in E\mbox{ for }j\geq J_0.
\end{equation}
Then proceed with the construction; the $\widehat{\psi}_i$'s can be chosen to be characteristic functions.
\par
Hence starting with $\sigma=\chi_E$ that verifies (\ref{eq2_1_2})-(\ref{eq2_1_6}), the construction yields
multi-wavelet sets.
\par
A simple example of such a function $\sigma$, for $L^{2}\left(\mathbb{R}\right)$ and $A=2$, would be $\sigma:=\chi_{(a,b)}$, where the
interval $(a,b)$ contains $0$ (so $a<0<b$). Then
$$\sigma(2^{-1}\xi)-\sigma(\xi)=\chi_{(2a,a]\cup[b,2b)}(\xi),$$
and the NTF multi-wavelet set is obtained taking
$$\widehat{\psi}_i:=\chi_{((2a,a]\cup[b,2b))\cap K_i},$$
where $K_1,...,K_p$ is a partition of $(2a,a]\cup[b,2b)$ as described in the construction.
\par
To obtain single NTF wavelet sets (i.e. $p=1$), one has to start with $\sigma=\chi_E$, where $E$ verifies (\ref{eq2_1_2}),
(\ref{eq2_1_3}),(\ref{eq2_1_5}),(\ref{eq2_1_6}), and (\ref{eq2_1_4}) is replaced by
\begin{equation}\label{eq2_1_7}
\operatorname*{Per}(\chi_{A^*E\setminus E})\leq 1.
\end{equation}
\par
If we analyse the argument before, we see that any single NTF wavelet set comes from such a construction.
\par
For single orthonormal wavelet sets, we have the same conditions, only (\ref{eq2_1_7}) must be replaced by
\begin{equation}\label{eq2_1_8}
\operatorname*{Per}(\chi_{A^*E\setminus E})=1.
\end{equation}
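As a concrete numerical illustration of the difference between (\ref{eq2_1_7}) and (\ref{eq2_1_8}), take $A=2$ on $L^{2}\left(\mathbb{R}\right)$ and $\sigma=\chi_{(a,b)}$ as above; the parameter values below are our own choices. For $a=-1$, $b=3/2$ the set $A^*E\setminus E=(2a,a]\cup[b,2b)$ has $\operatorname*{Per}\leq1$ but not $\equiv1$, so one obtains a NTF wavelet set which is not orthonormal, while for $a=-2\pi/3$, $b=4\pi/3$ one has $\operatorname*{Per}=1$ a.e.\ and the wavelet set is orthonormal.
\begin{verbatim}
import numpy as np

# Per(chi_W) on a grid, for W = (2a, a] u [b, 2b), the candidate wavelet set (A = 2, n = 1).
def per_on_grid(a, b, xi):
    in_W = lambda x: ((2*a < x) & (x <= a)) | ((b <= x) & (x < 2*b))
    return sum(in_W(xi + 2*np.pi*k) for k in range(-6, 7)).astype(int)

xi = np.linspace(-np.pi, np.pi, 20000, endpoint=False)

for a, b in [(-1.0, 1.5), (-2*np.pi/3, 4*np.pi/3)]:   # assumed parameter values
    p = per_on_grid(a, b, xi)
    print("a =", a, " b =", b,
          " values of Per:", sorted(set(p.tolist())),
          " fraction with Per = 1:", round((p == 1).mean(), 3))
# First case: Per takes the values 0 and 1 (tight frame wavelet set, not orthonormal).
# Second case: Per = 1 on the whole grid (orthonormal wavelet set).
\end{verbatim}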
\end{example}
\begin{example}\label{ex2_2}Let $a,b>0$. Define the piecewise linear function
$$\sigma(\xi)=\left\{
\begin{array}{ccc}
\frac{1}{a}\xi+1,&\mbox{if}&\xi\in(-a,0]\\
-\frac{1}{b}\xi+1,&\mbox{if}&\xi\in[0,b)\\
0,& &\mbox{otherwise.}
\end{array}\right.$$
Then,
$$\sigma(2^{-1}\xi)=\left\{
\begin{array}{ccc}
\frac{1}{2a}\xi+1,&\mbox{if}&\xi\in(-2a,0]\\
-\frac{1}{2b}\xi+1,&\mbox{if}&\xi\in[0,2b)\\
0,& &\mbox{otherwise.}
\end{array}\right.$$
and
$$\sigma(2^{-1}\xi)-\sigma(\xi)=\left\{
\begin{array}{ccc}
\frac{1}{2a}\xi+1,&\mbox{if}&\xi\in(-2a,-a]\\
-\frac{1}{2a}\xi,&\mbox{if}&\xi\in(-a,0]\\
\frac{1}{2b}\xi,&\mbox{if}&\xi\in[0,b)\\
-\frac{1}{2b}\xi+1,&\mbox{if}&\xi\in[b,2b)\\
0,& &\mbox{otherwise.}
\end{array}\right.$$
A simple check shows that $\sigma$ satisfies the conditions (\ref{eq2_1})-(\ref{eq2_5}).
Then we can construct a NTF multiwavelet as before; we only need the partition $\{K_i\}$. For example, take
$K_l=[(2l-1)\pi,(2l+1)\pi)$ and intersect with $(-2a,2b)$. Then let
$$\eta(\xi):=
\left\{
\begin{array}{ccc}
\sqrt{\frac{1}{2a}\xi+1},&\mbox{if}&\xi\in(-2a,-a]\\
\sqrt{-\frac{1}{2a}\xi},&\mbox{if}&\xi\in(-a,0]\\
\sqrt{\frac{1}{2b}\xi},&\mbox{if}&\xi\in[0,b)\\
\sqrt{-\frac{1}{2b}\xi+1},&\mbox{if}&\xi\in[b,2b)\\
0,& &\mbox{otherwise.}
\end{array}\right.$$
and $\widehat{\psi}_l=\eta\chi_{K_l}$ for $l\in\mathbb{Z}$ with the property that $[(2l-1)\pi,(2l+1)\pi)\cap(-2a,2b)\neq\emptyset$.
Then $\{\psi_l\}_l$ is a NTF multiwavelet.
\par
If $a+b\leq\pi$ then we need only one $K_i$ and we can let $K_1=(-2a,2b)$ so that in this case $\eta$ is a NTF wavelet.
\end{example}
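A quick numerical check of example \ref{ex2_2} in the case $a+b\leq\pi$ (the values $a=1$, $b=2$ below are our own choice): since $\eta^2(\xi)=\sigma(2^{-1}\xi)-\sigma(\xi)$, the sums $\sum_{j\in\mathbb{Z}}\eta^2(2^j\xi)$ telescope to $1$ for $\xi\neq0$, and the support $(-2a,2b)$ of $\eta$ has length $2(a+b)\leq2\pi$, so it meets none of its nonzero $2\pi\mathbb{Z}$-translates; this is consistent with the statement above that $\eta$ gives a single NTF wavelet.
\begin{verbatim}
import numpy as np

a, b = 1.0, 2.0                        # assumed values with a + b <= pi, so p = 1

def sigma(x):                          # the piecewise linear sigma of example ex2_2
    return np.maximum(0.0, np.where(x <= 0, 1 + x / a, 1 - x / b))

def eta_sq(x):                         # eta^2 = sigma(x/2) - sigma(x) >= 0
    return np.maximum(sigma(x / 2) - sigma(x), 0.0)

xi = np.linspace(-30, 30, 6001)
xi = xi[np.abs(xi) > 1e-9]             # the dyadic sum vanishes only at xi = 0
total = sum(eta_sq(2.0**j * xi) for j in range(-25, 12))
print("min =", total.min(), " max =", total.max())   # both very close to 1
\end{verbatim}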
\begin{remark}\label{rem2_3}
The NTF multi-wavelet sets are always semi-orthogonal, that is, if
$$W_j=\overline{\mbox{span}}\{D_A^jT_k\psi\,|\, k\in\mathbb{Z}^n,\psi\in\Psi\},\quad(j\in\mathbb{Z}),$$
then $W_j\perp W_{j'}$ for $j\neq j'$. \par
This can be seen from the fact that $\widehat{D}_A^j\widehat{T}_k\widehat{\psi}$ and
$\widehat{D}_A^{j'}\widehat{T}_{k'}\widehat{\psi}'$ are disjointly supported for $j\neq j'$ and any
$k,k'\in\mathbb{Z}^n$, $\psi,\psi'\in\Psi$ (this is guaranteed by (\ref{eq2_2_1}) and (\ref{eq2_2_3})).
\par
On the other hand, the NTF multiwavelet in example \ref{ex2_2} is not semi-orthogonal because $\eta$ and $\widehat{D}_A\eta$ are
positive functions whose supports overlap.
\end{remark}
| {
"timestamp": "2005-11-06T20:28:58",
"yymm": "0511",
"arxiv_id": "math/0511151",
"language": "en",
"url": "https://arxiv.org/abs/math/0511151",
"abstract": "The local trace function introduced in \\cite{Dut} is used to derive equations that relate multiwavelets and multiscaling functions in the context of a generalized multiresolution analysis, without appealing to filters. A construction of normalized tight frame wavelets is given. Particular instances of the construction include normalized tight frame and orthonormal wavelet sets.",
"subjects": "Functional Analysis (math.FA)",
"title": "Some equations relating multiwavelets and multiscaling functions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9719924818279465,
"lm_q2_score": 0.727975443004307,
"lm_q1q2_score": 0.7075866575555553
} |
https://arxiv.org/abs/q-alg/9504019 | Introduction to vertex operator algebras III | This is the third part of the revised versions of the notes of three consecutive expository lectures given by Chongying Dong, Haisheng Li and Yi-Zhi Huang in the conference on Monster and vertex operator algebras at the Research Institute of Mathematical Sciences, Kyoto, September 4-9, 1994. In this part, we discuss an $S_{3}$-symmetry of the Jacobi identity, construct the contragredient module for a module for a vertex operator algebra and apply these to the construction of the vertex operator map for the moonshine module. We review the notions of intertwining operator, fusion rule and Verlinde algebra. We also describe briefly the geometric interpretation of vertex operator algebras. We end the exposition with an explanation of the role of vertex operator algebras in conformal field theories. | \section{$S_{3}$-symmetry of the Jacobi identity
and contragredient modules}
The results and constructions discussed in this section are all
natural {}from the axiomatic viewpoint. But they also have practical
uses in some very concrete problems. Before going into the detailed
discussions, let us first recall one of those problems.
One of the most important examples of vertex operator algebra is the
moonshine module constructed by Frenkel, Lepowsky and Meurman
\cite{FLM1} \cite{FLM2}. (See the introduction of \cite{FLM2} for
a historical discussion, including the important role of Borcherds'
announcement \cite{B}.) The construction can be briefly described as
follows: {}From the Leech lattice $\Lambda$, one can construct an
untwisted vertex operator algebra $V_{\Lambda}$.
The automorphism
$\theta: \Lambda\to \Lambda$ defined by $\theta(x)=-x$ for any $x\in
\Lambda$ induces an automorphism of $V_{\Lambda}$
which is still denoted $\theta$. One can construct a unique irreducible
$\theta$-twisted module $V_{\Lambda}^{T}$ for $V_{\Lambda}$.
The automorphism
$\theta: \Lambda\to \Lambda$ also induces an automorphism of $V_{\Lambda}$
and is also denoted $\theta$.
Let $V_{\Lambda}^{+}$ and
$(V_{\Lambda}^{T})^{+}$ be spaces of fixed points of $\theta$ in
$V_{\Lambda}$ and $V_{\Lambda}^{T}$, respectively. Then the moonshine
module is $V^{\natural}=V_{\Lambda}^{+}\oplus (V_{\Lambda}^{T})^{+}$
as a $\Bbb{Z}$-graded vector space. In \cite{FLM2}, the vertex
operator map for the moonshine module is defined and it is shown that
$V^{\natural}$ is indeed a vertex operator algebra.
The definition of vertex operator map for $V^{\natural}$ in
\cite{FLM2} uses some special features in the construction of the
moonshine module, in particular, ``triality'' (\cite{FLM1},
\cite{FLM2}). In fact, there is a conceptual way to define the vertex
operator map which is motivated by the $S_{3}$-symmetry of the Jacobi
identity and contragredient modules and which works also in much more
general cases (see \cite{FHL} and also \cite{DGM} in physicists'
language). The hard part is to prove that the moonshine module
together with this abstractly defined vertex operator map is a vertex
operator algebra. This was first proved directly (i.e., without using
triality, as had been done in \cite{FLM2}) in \cite{DGM} using
techniques developed in string theory. Recently, this has also
been proved conceptually by the author \cite{H7} using the tensor
product theory for modules for a vertex operator algebra developed by
Lepowsky and the author \cite{HL1} \cite{HL4}--\cite{HL6} \cite{H6}
\cite{H7.5}
and some results of Dong-Mason-Zhu \cite{DMZ} and
Dong \cite{D} on modules for the vertex operator
algebra $V_{\Lambda}^{+}$. (Note that in more general cases in which
we can still define the vertex operator maps abstractly, it is not
always true that we will obtain a vertex operator algebra. In
\cite{DGM}, a vertex operator algebra is obtained in a family of cases
generalizing \cite{FLM2}.)
We now turn to the main subjects of this section. At the end of this section,
we shall apply these results to the problem above. The material below
in Section 1 is essentially taken from \cite{FHL}.
We first list some properties of the formal $\delta$-function
and
some easy consequences of the definition of vertex
operator algebra. First there is the fundamental property of the
$\delta$-function:
\begin{equation}\label{1-1}
f(x)\delta (x) = f(1)\delta (x) \;\;\mbox{for}\;\;f(x) \in
\Bbb{C}[x,x^{-1}].
\end{equation}
This property has many variants; in general, whenever an expression is
multiplied by the $\delta $-function, we may formally set the argument
appearing in the $\delta $-function equal to 1, provided the relevant algebraic
expressions make sense.
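For instance, writing $\delta (x)=\sum_{n\in \Bbb{Z}}x^{n}$ as usual, (\ref{1-1}) can be
checked directly on a monomial $f(x)=x^{k}$:
$$x^{k}\delta (x)=\sum_{n\in \Bbb{Z}}x^{n+k}=\sum_{m\in \Bbb{Z}}x^{m}=\delta (x)=f(1)\delta (x),$$
and the general case follows by linearity.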
There are two basic identities for the $\delta$-function:
\begin{eqnarray}\label{1-2}
x^{-1}_1\delta \left( {x_2+x_0\over x_1}\right) =
x^{-1}_2\delta \left( {x_1-x_0\over x_2}\right),
\end{eqnarray}
\begin{eqnarray}\label{1-3}
x^{-1}_0\delta \left( {x_1-x_2\over x_0}\right) -
x^{-1}_0\delta \left( {x_2-x_1\over -x_0}\right)
=
x^{-1}_2\delta \left( {x_1-x_0\over x_2}\right).
\end{eqnarray}
Let $(V, Y, \bold{1}, \omega)$ be a vertex operator algebra.
We have the following immediate consequences of
the definition of vertex operator algebra:
\begin{eqnarray}
{[L(-1),Y(v,x)]} &=& Y(L(-1)v,x),\label{1-4}\\
{[L(0), Y(v, x)]} &=& Y(L(0)v,x) + xY(L(-1)v,x),\label{1-5}\\
{[L(1),Y(v,x)]} &=& Y(L(1)v,x) + 2xY(L(0)v,x) + x^2Y(L(-1)v,x)\label{1-6}
\end{eqnarray}
for any $v\in V$.
{}From the $L(-1)$-derivative property and bracket
formulas (\ref{1-4}), we obtain
\begin{equation}\label{1-7}
e^{x_0L(-1)}Y(v,x)e^{-x_0L(-1)} = Y(e^{x_0L(-1)}v,x) = Y(v,x+x_0)
\end{equation}
Applying (\ref{1-7}) to $\bold{1}$ and then taking the constant term in
$x_{0}$,
we have
\begin{equation}\label{1-8}
Y(v,x)\bold{1} = e^{xL(-1)}v.
\end{equation}
Finally, one very important consequence is
the skew-symmetry, that is, for any $u, v\in V$,
\begin{equation}\label{1-9}
Y(u, x)v=e^{xL(-1)}Y(v, -x)u.
\end{equation}
We derive (\ref{1-9}) as follows: We have
\begin{eqnarray}\label{1-10}
\lefteqn{x_0^{-1}\delta \left( {x_1-x_2\over x_0}\right)
Y(u,x_1)Y(v,x_2)}\nonumber\\
&&\quad-
x_0^{-1}\delta \left( {x_2-x_1\over -x_0}\right) Y(v,x_1)Y(u,x_2)\nonumber\\
&&=(-x_0)^{-1}\delta \left( {x_2-x_1\over -x_0}\right)
Y(v,x_2)Y(u,x_1)\nonumber\\
&& \quad-
(-x_0)^{-1}\delta \left( {x_1-x_2\over -(-x_0)}\right) Y(u,x_1)Y(v,x_2).
\end{eqnarray}
By the Jacobi identity and (\ref{1-10}),
\begin{equation}
x^{-1}_2\delta \left( {x_1-x_0\over x_2}\right) Y(Y(u,x_0)v,x_2)
=x^{-1}_1\delta \left( {x_2-(-x_0)\over x_1}\right) Y(Y(v,-x_0)u,x_1).
\end{equation}
Using the fundamental property of the $\delta$-function and the
identity (\ref{1-2}),
we obtain
\begin{equation}\label{1-13}
x^{-1}_2\delta \left( {x_1-x_0\over x_2}\right) Y(Y(u,x_0)v,x_2)
= x^{-1}_2\delta \left( {x_1-x_0\over x_2}\right) Y(Y(v,-x_0)u,x_2+x_{0}).
\end{equation}
In particular (taking the coefficient of $x_{1}^{-1}$ in (\ref{1-13})),
\begin{equation}\label{1-14}
Y(Y(u,x_0)v,x_2)=Y(Y(v,-x_0)u,x_2+x_{0}).
\end{equation}
But by the second equality in (\ref{1-7}),
\begin{equation}\label{1-15}
Y(Y(v,-x_0)u,x_2+x_{0})=Y(e^{x_{0}L(-1)}Y(v, -x_{0})u, x_{2}).
\end{equation}
By the creation property, (\ref{1-14}) and (\ref{1-15}),
\begin{eqnarray}
Y(u, x_{0})v&=&\lim_{x_{2}\to 0}Y(Y(u,x_0)v,x_2)\bold{1}\nonumber\\
&=&\lim_{x_{2}\to 0}Y(e^{x_{0}L(-1)}Y(v, -x_{0})u, x_{2})\bold{1}\nonumber\\
&=&e^{x_{0}L(-1)}Y(v, -x_{0})u.
\end{eqnarray}
Now we discuss the $S_{3}$-symmetry of the Jacobi
identity. For the Jacobi identity for Lie algebras, if we call
\begin{equation}
[u, [v, w]]-[v, [u, w]]=[[u, v], w]
\end{equation}
``the Jacobi identity for the ordered triple $(u, v, w)$,''
then the Jacobi identity for $(u, v, w)$ implies the Jacobi identity for
any permutation of the ordered triple $(u, v, w)$. The $S_{3}$-symmetry for
the Jacobi identity for vertex operator algebra is an analogous statement.
(The analogy between Lie algebras and vertex operator algebras is the
reason why Frenkel, Lepowsky and Meurman called the main axiom for vertex
operator algebras the ``Jacobi identity.'' It would be more accurate
and less confusing to call this
identity the Frenkel-Lepowsky-Meurman identity or simply the FLM identity.)
Let us retain the axioms for a vertex operator algebra
except for the Jacobi identity, and let us call
\begin{eqnarray}\label{1-18}
&{\displaystyle x_0^{-1}\delta
\left( \frac{x_1-x_2}{x_0}\right) Y(u,x_1)Y(v,x_2)w -
x_0^{-1}\delta \left( \frac{x_2-x_1}{-x_0}\right)
Y(v,x_2)Y(u,x_1)w}&\nonumber\\
&={\displaystyle x^{-1}_2\delta \left( \frac{x_1-x_0}{x_2}\right) Y(Y(u,x_0)v,x_2)w}&
\end{eqnarray}
``the Jacobi identity for the ordered triple $(u,v,w)."$ We also
assume that the consequences (\ref{1-7}) and (\ref{1-9}) hold.
By skew-symmetry (\ref{1-9})
for the pair $(u,v)$ and the second equality in (\ref{1-7}) for the vector
$Y(v,-x_0)u$ we have
\begin{equation}\label{1-19}
Y(Y(u,x_0)v, x_2) =
Y(e^{x_0L(-1)}Y(v,-x_0)u,x_2) = Y(Y(v,-x_0)u,x_2+x_0).
\end{equation}
Thus {}from (\ref{1-19}) and the identity (\ref{1-2}),
the Jacobi identity (\ref{1-18}) for $(u,v,w)$ gives
\begin{eqnarray}
&{\displaystyle (-x_0)^{-1}\delta
\left( \frac{x_2-x_1}{-x_0}\right) Y(v,x_2)Y(u,x_1)w -
(-x_0)^{-1}\delta \left( \frac{x_1-x_2}{-(-x_0)}\right)
Y(u,x_1)Y(v,x_2)}w& \nonumber\\
&= {\displaystyle x^{-1}_1\delta \left(
\frac{x_2-(-x_0)}{x_1}\right) Y(Y(v,-x_0)u,x_1)w,}&
\end{eqnarray}
which is the Jacobi identity for $(v,u,w)$ (with $(x_1,x_2,x_0)$
replaced by $(x_2,x_1, -x_0)$).
On the other hand, multiplying both sides of the Jacobi identity (\ref{1-18})
for $(u,v,w)$
by $e^{-x_2L(-1)}$ and using (\ref{1-9}) for the pairs $(v,w)$,
$(v,Y(u,x_1)w)$
and $(Y(u,x_0)v,w)$ and the outer equality in (\ref{1-7}) for
the vector $u$,
we obtain
\begin{eqnarray}\label{1-21}
&{\displaystyle x^{-1}_0\delta \left(
\frac{x_1-x_2}{x_0}\right)Y(u,x_1-x_2)Y(w,-x_2)v - x^{-1}_0\delta
\left(\frac{x_2-x_1}{-x_0}\right) Y(Y(u,x_1)w,-x_2)v}&\nonumber\\
&={\displaystyle
x^{-1}_2\delta \left( \frac{x_1-x_0}{x_2}\right) Y(w,-x_2)Y(u,x_0)v.}&
\end{eqnarray}
Using the fundamental property of the $\delta$-function and (\ref{1-2}), we
can write (\ref{1-21}) as
\begin{eqnarray}
&{\displaystyle x^{-1}_1\delta \left( \frac{x_0+x_2}{x_1}\right)
Y(u,x_0)Y(w,-x_2)v + x^{-1}_2\delta \left( \frac{x_0-x_1}{-x_2}\right)
Y(Y(u,x_1)w,-x_2)v}& \nonumber\\
&={\displaystyle x^{-1}_1\delta \left(
\frac{-x_2-x_0}{-x_1}\right) Y(w,-x_2)Y(u,x_0)v,}&
\end{eqnarray}
that is,
\begin{eqnarray}
\lefteqn{x^{-1}_1\delta \left( \frac{x_0-(-x_2)}{x_1}\right)
Y(u,x_0)Y(w,-x_2)v}
\nonumber \\
&& - x^{-1}_1\delta
\left( \frac{(-x_2)-x_0}{-x_1}\right) Y(w,-x_2)Y(u,x_0)v\nonumber \\
&&= (-x_2)^{-1}\delta \left( \frac{x_0-x_1}{-x_2}\right) Y(Y(u,x_1)w,-x_2)v,
\end{eqnarray}
the Jacobi identity for $(u,w,v)$ (and $(x_0,-x_2,x_1))$. Since the two
permutations above of $(u, v, w)$ generate $S_{3}$, the permutation group
of $(u, v, w)$, we conclude:
\begin{propo}
Under the assumptions indicated in the argument
above, the Jacobi identity for an ordered triple implies the
Jacobi identity for any
permutation of this triple.
\end{propo}
We turn next to the contragredient module for a module for a
vertex operator algebra.
Let $(W,Y)$, with
\begin{equation}
W = \coprod_{n\in \Bbb{C}}W_{(n)},
\end{equation}
be a module for a vertex operator algebra $(V,Y, \bold{1},\omega),$
\begin{equation}
W' = \coprod_{n\in \Bbb{C}}W^*_{(n)}
\end{equation}
the graded dual space of $W$ and $\langle \cdot, \cdot\rangle$ the pairing
between $W'$ and $W$. We define the
{\it contragredient vertex operators} $Y'(v,x)$ ($v \in V$)
by means of the linear map
\begin{eqnarray}
V &\rightarrow & (\text{End}\ W')[[x,x^{-1}]]\nonumber\\
v &\mapsto & Y'(v,x) = \sum_{n\in
\Bbb{Z}}v'_nx^{-n-1} \;\;\;(\mbox{where} \;\;v'_n \in
\mbox{\rm End}\ W'),
\end{eqnarray}
determined by the condition
\begin{equation}\label{1-27}
\langle Y'(v,x)w',w\rangle =
\langle w',Y(e^{xL(1)}(-x^{-2})^{L(0)}v,x^{-1})w\rangle
\end{equation}
for $v \in V$, $w' \in W'$, $w \in W$. The operator
$(-x^{-2})^{L(0)}$ has the obvious meaning; it acts on a vector of weight $n
\in \Bbb{Z}$ as multiplication by $(-x^{-2})^n$. Also note that
$e^{xL(1)}(-x^{-2})^{L(0)}v$ involves only finitely many (integral) powers of
$x$, that the right-hand side of (\ref{1-27}) is a Laurent polynomial in $x$,
and
that the components $v'_n$ of the formal Laurent series
$Y'(v,x)$ defined by (\ref{1-27}) indeed preserve $W'.$
We give the space $W'$ a $\Bbb{C}$-grading by setting
\begin{equation}
W'_{(n)} = W^*_{(n)}\;\; \mbox{\rm for}\;\; n \in \Bbb{C}.
\end{equation}
The following proposition defines the $V$-{\it module contragredient to
$W$}:
\begin{theo}
The pair $(W',Y')$ carries the structure of a
$V$-module.
\end{theo}
\begin{pf}
The axioms on the
grading
are clear.
For the Virasoro algebra properties, we note that
\begin{equation}
\langle Y'(\omega ,x)w',w\rangle =
\langle w',Y(x^{-4}\omega ,x^{-1})w\rangle
\end{equation}
since
\begin{equation}
L(1)\omega = L(-1)L(-2)\bold{1}=L(-2)L(-1)\bold{1}=0.
\end{equation}
Thus, defining component operators $L'(n)$ by
\begin{equation}
Y'(\omega ,x) = \sum_{n\in
\Bbb{Z}}L'(n)x^{-n-2},
\end{equation}
we have
\begin{eqnarray}
\lefteqn{\langle \sum_{n\in
\Bbb{Z}}L'(n)x^{-n}w',w\rangle=}\nonumber \\
&& = \langle x^2Y'
(\omega ,x)w',w\rangle\nonumber \\
&&= \langle w',x^{-2}Y(\omega ,x^{-1})w\rangle\nonumber\\
&&= \langle w', \sum_{n\in \Bbb{Z}}L(-n)x^{-n}w\rangle,
\end{eqnarray}
and so
\begin{equation}
\langle L'(n)w',w\rangle = \langle w',L(-n)w\rangle \;\;
\mbox{\rm for}\;\;
n \in \Bbb{Z}.
\end{equation}
This immediately gives us the Virasoro commutator relation for
$L'(n)$, $n\in \Bbb{Z}$.
We shall give proofs of the
Jacobi identity and the $L(-1)$-derivative property.
For these two axioms, we shall use some
commutator formulas motivated by the Lie group SL$(2,\Bbb{C})$, but
formulated and proved in terms of formal series. We shall omit the
proofs of these formulas;
they can be found in \cite{FHL} and are all direct
calculations.
\begin{lemma} Let
\begin{equation}
f(x) \in x\Bbb{C}[[x]].
\end{equation}
We have the following identities, valid on any module for the Lie algebra
$\frak{s}\frak{l}(2)$ spanned by $L(-1)$, $L(0)$, $L(1):$
\begin{equation}
L(-1)e^{f(x)L(0)} = e^{f(x)L(0)}L(-1)e^{-f(x)},
\end{equation}
\begin{equation}
L(1)e^{f(x)L(0)} = e^{f(x)L(0)}L(1)e^{f(x)},
\end{equation}
\begin{eqnarray} \label{1-37}
\lefteqn{L(-1)e^{f(x)L(1)}=}\nonumber\\
&&= e^{f(x)L(1)}L(-1) - 2f(x)L(0)e^{f(x)L(1)} -
f(x)^2L(1)e^{f(x)L(1)}\nonumber\\
&&= e^{f(x)L(1)}L(-1) - 2f(x)e^{f(x)L(1)}L(0) + f(x)^2e^{f(x)L(1)}L(1).
\end{eqnarray}
These identities also hold for more general $f$ for which the series are well
defined, such as
\begin{equation}
f(x,x_0) \in x\Bbb{C}[[x,x_0]].
\end{equation}
\end{lemma}
Now we establish the $L(-1)$-derivative property. For convenience,
we assume that $v \in V$ is homogeneous of weight $n \in \Bbb{Z}:$
$L(0)v = nv.$
Using the definition of $Y'(\cdot ,x)$ and the chain rule we get
\begin{eqnarray} \label{1-39}
\lefteqn{\langle {d\over dx}Y'(v,x)w',w\rangle=}\nonumber\\
&&={d\over dx}\langle w',Y(e^{xL(1)}(-x^{-2})^{L(0)}v,x^{-1})w\rangle \nonumber\\
&&= \langle w',{d\over dx}Y(e^{xL(1)}(-x^{-2})^{L(0)}v,x^{-1})w\rangle \nonumber\\
&&= \langle w',Y({d\over dx}(e^{xL(1)}(-x^{-2})^{L(0)})v,x^{-1})w\rangle\nonumber \\
&&\;\;\;\;+ \langle w',{d\over dx}Y(v_1,x^{-1})\left| _{v_1 =
e^{xL(1)}(-x^{-2})^{L(0)}v}w\rangle,\right.
\end{eqnarray}
where $w'$ and $w$ are arbitrary elements of $W'$ and $W$,
respectively. We perform the indicated calculations:
\begin{eqnarray} \label{1-40}
\lefteqn{{d\over dx}(e^{xL(1)}(-x^{-2})^{L(0)})=}\nonumber\\
&&= L(1)e^{xL(1)}(-x^{-2})^{L(0)} - 2x^{-1}e^{xL(1)}L(0)(-x^{-2})^{L(0)},
\end{eqnarray}
\begin{eqnarray} \label{1-41}
\lefteqn{{d\over dx}Y(v_1,x^{-1})\left| _{v_1 = e^{xL(1)}(-x^{-2})^{L(0)}v}
\right.=}\nonumber \\
&&= -x^{-2}{d\over dx^{-1}}Y(v_1,x^{-1})\left| _{v_1 =
e^{xL(1)}(-x^{-2})^{L(0)}v}\right.\nonumber \\
&&= -x^{-2}Y(L(-1)v_1,x^{-1})\left| _{v_1 = e^{xL(1)}(-x^{-2})^{L(0)}v}\right.
\nonumber\\
&&= -x^{-2}Y(L(-1)e^{xL(1)}(-x^{-2})^{L(0)}v,x^{-1})\nonumber\\
&&= -x^{-2}Y((e^{xL(1)}L(-1) - 2xe^{xL(1)}L(0)\nonumber\\
&&\;\;\;\;+ x^2L(1)e^{xL(1)})(-x^{-2})^nv,x^{-1})\nonumber\\
&&= Y(e^{xL(1)}(-x^{-2})^{n+1}L(-1)v,x^{-1})\nonumber\\
&&\;\;\;\;+ Y(2x^{-1}e^{xL(1)}L(0)(-x^{-2})^nv,x^{-1})\nonumber\\
&&\;\;\;\;- Y(L(1)e^{xL(1)}(-x^{-2})^nv,x^{-1})\nonumber\\
&&= Y(e^{xL(1)}(-x^{-2})^{L(0)}L(-1)v,x^{-1})\nonumber\\
&&\;\;\;\;+ Y(2x^{-1}e^{xL(1)}L(0)(-x^{-2})^{L(0)}v,x^{-1})\nonumber\\
&&\;\;\;\;- Y(L(1)e^{xL(1)}(-x^{-2})^{L(0)}v,x^{-1}).
\end{eqnarray}
Here we have used the outer equality in (\ref{1-37}) and
the fact that
\begin{equation}
L(0)L(-1)v = L(-1)(L(0) + 1)v = (n+1)L(-1)v.
\end{equation}
Substituting (\ref{1-40}) and (\ref{1-41}) into (\ref{1-39}) we get
\begin{eqnarray}
\lefteqn{\langle {d\over dx}Y'(v,x)w',w\rangle =}\nonumber\\
&&= \langle w',Y(L(1)e^{xL(1)}(-x^{-2})^{L(0)}v\nonumber\\
&&\;\;\;\;\;\;\;\;- 2x^{-1}e^{xL(1)}L(0)(-x^{-2})^{L(0)}v,x^{-1})w\rangle
\nonumber\\
&&\;\;\;\;+ \langle w',Y(e^{xL(1)}(-x^{-2})^{L(0)}L(-1)v,x^{-1})w\rangle \nonumber\\
&&\;\;\;\;+ \langle w',Y(2x^{-1}e^{xL(1)}L(0)(-x^{-2})^{L(0)}v,x^{-1})w\rangle
\nonumber\\
&&\;\;\;\;- \langle w',Y(L(1)e^{xL(1)}(-x^{-2})^{L(0)}v,x^{-1})w\rangle \nonumber\\
&&= \langle w',Y(e^{xL(1)}(-x^{-2})^{L(0)}L(-1)v,x^{-1})w\rangle \nonumber\\
&&= \langle Y'(L(-1)v,x)w',w\rangle,
\end{eqnarray}
proving the $L(-1)$-derivative property.
Finally, we shall prove the Jacobi identity. Let $v_1,v_2 \in V$,
$w \in W$ and $w' \in W'$. What we want to prove can be written as
follows:
\begin{eqnarray} \label{1-44}
\lefteqn{\langle x^{-1}_0\delta \left( {x_1-x_2\over x_0}\right)
Y'(v_1,x_1)Y'(v_%
2,x_2)w',w\rangle }\nonumber\\
&&\;\;\;\;-
\langle x^{-1}_0\delta \left( {x_2-x_1\over -x_0}\right)
Y'(v_2,x_2)Y'(v_%
1,x_1)w',w\rangle\nonumber\\
&&=
\langle x^{-1}_2\delta \left( {x_1-x_0\over x_2}\right)
Y'(Y(v_1,x_0)v_2,x_2%
)w',w\rangle.
\end{eqnarray}
But by the definition (\ref{1-27}) of contragredient vertex operator, we have
\begin{eqnarray}
\lefteqn{\langle Y'(v_1,x_1)Y'(v_2,x_2)w',w\rangle }\nonumber \\
&&= \langle w',Y(e^{x_2L(1)}(-x^{-2}_2)^{L(0)}v_2,x^{-1}_2)
Y(e^{x_1L(1)}(-x^{-2}_1)
^{L(0)}v_1,x^{-1}_1)w\rangle
\end{eqnarray}
\begin{eqnarray}
\lefteqn{\langle Y'(v_2,x_2)Y'(v_1,x_1)w',w\rangle }\nonumber \\
&&=
\langle w',Y(e^{x_1L(1)}(-x^{-2}_1)^{L(0)}v_1,x^{-1}_1)Y(e^{x_2L(1)}(-x^{-2}_2)
^{L(0)}v_2,x^{-1}_2)w\rangle
\end{eqnarray}
\begin{eqnarray}
\lefteqn{\langle Y'(Y(v_1,x_0)v_2,x_2)w',w\rangle }\nonumber \\
&&= \langle w',Y(e^{x_2L(1)}(-x^{-2}_2)^{L(0)}Y(v_1,x_0)v_2,x^{-1}_2)w\rangle,
\end{eqnarray}
and {}from the Jacobi identity for $W$ we have
\begin{eqnarray}
\lefteqn{\langle w',\left( {-x_0\over x_1x_2}\right) ^{-1}\delta
\left( {x^{-1}_1-x^{-1}_2\over -x_0/x_1x_2} \right)
Y(e^{x_1L(1)}(-x^{-2}_1)^{L(0)}v_1,x^{-1}_1)\cdot}\nonumber \\
&&\;\;\;\;\;\;\;\;\;\;\cdot
Y(e^{x_2L(1)}(-x^{-2}_2)^{L(0)}v_2,x^{-1}_2)w\rangle\nonumber \\
&&\;\;\;\;- \langle w',\left( {-x_0\over x_1x_2}\right) ^{-1}\delta
\left( {x^{-1}_2-x^{-1}_1\over x_0/x_1x_2}\right)
Y(e^{x_2L(1)}(-x^{-2}_2)^{L(0)}v_2,x^{-1}_2)\cdot \nonumber\\
&&\;\;\;\;\;\;\;\;\;\;\cdot
Y(e^{x_1L(1)}(-x^{-2}_1)^{L(0)}v_1,x^{-1}_1)w\rangle \nonumber\\
&&= \langle w',
(x^{-1}_2)^{-1}\delta \left( {x^{-1}_1+x_0/x_1x_2\over x^{-1}_2}%
\right) \cdot\nonumber \\
&&\;\;\;\;\;\;\;\;\;\;\cdot
Y(Y(e^{x_1L(1)}(-x^{-2}_1)^{L(0)}v_1,-x_0/x_1x_2)e^{x_2L(1)}(-x^{%
-2}_2)^{L(0)}v_2,x^{-1}_2)w\rangle,\nonumber\\
&&
\end{eqnarray}
or equivalently,
\begin{eqnarray}
\lefteqn{-
\langle w',x^{-1}_0\delta \left( {x_2-x_1\over -x_0}\right)
Y(e^{x_1L(1)}(-x^{-%
2}_1)^{L(0)}v_1,x^{-1}_1)\cdot}\nonumber \\
&&\;\;\;\;\;\;\;\;\;\;\cdot
Y(e^{x_2L(1)}(-x^{-2}_2)^{L(0)}v_2,x^{-1}_2)w\rangle\nonumber \\
&&\;\;\;\;+
\langle w',x^{-1}_0\delta
\left( {x_1-x_2\over x_0}\right)
Y(e^{x_2L(1)}(-x^{-2}_2)^{L(0)}v_2,x^{-1}_2)\cdot\nonumber \\
&&\;\;\;\;\;\;\;\;\;\;\cdot
Y(e^{x_1L(1)}(-x^{-2}_1)^{L(0)}v_1,x^{-1}_1)w\rangle\nonumber \\
&&= \langle w',x^{-1}_1\delta \left( {x_2+x_0\over x_1}\right) \cdot\nonumber\\
&&\;\;\;\;\;\;\;\;\;\;\cdot
Y(Y(e^{x_1L(1)}(-x^{-2}_1)^{L(0)}v_1,-x_0/x_1x_2)e^{x_2L(1)}(-x^{%
-2}_2)^{L(0)}v_2,x^{-1}_2)w\rangle.\nonumber\\
&&
\end{eqnarray}
(As usual, the reader should be observing that the formal Laurent series which
arise are well defined.) Thus (by (\ref{1-2})) the desired result
(\ref{1-44}) is
equivalent to
\begin{eqnarray}
\lefteqn{x^{-1}_1\delta \left( {x_2+x_0\over x_1}\right)
Y(e^{x_2L(1)}(-x^{-2}_2)^{L(0)
}Y(v_1,x_0)v_2,x^{-1}_2) }\nonumber\\
&&=
x^{-1}_1\delta \left( {x_2+x_0\over x_1}\right)
Y(Y(e^{x_1L(1)}(-x^{-2}_1)^{L(0%
)}v_1,-x_0/x_1x_2)\cdot\nonumber \\
&&\;\;\;\;\;\;\;\;\;\;\cdot e^{x_2L(1)}(-x^{-2}_2)^{L(0)}v_2,x^{-1}_2),
\end{eqnarray}
or to
\begin{eqnarray}
\lefteqn{Y(e^{x_2L(1)}(-x^{-2}_2)^{L(0)}Y(v_1,x_0)v_2,x^{-1}_2)}\nonumber \\
&&=
Y(Y(e^{(x_2+x_0)L(1)}(-(x_2+x_0)^{-2})^{L(0)}v_1,-x_0/(x_2+x_0)x_2)%
\cdot\nonumber \\
&&\;\;\;\;\;\;\;\;\;\;\cdot e^{x_2L(1)}(-x^{-2}_2)^{L(0)}v_2,x^{-1}_2).
\end{eqnarray}
If we can prove
\begin{eqnarray}
\lefteqn{e^{x_2L(1)}(-x^{-2}_2)^{L(0)}Y(v_1,x_0)}\nonumber\\
&&=
Y(e^{(x_2+x_0)L(1)}(-(x_2+x_0)^{-2})^{L(0)}v_1,-x_0/(x_2+x_0)x_2)\cdot\nonumber\\
&&\;\;\;\;\;\;\;\;\;\;\cdot e^{x_2L(1)}(-x^{-2}_2)^{L(0)}
\end{eqnarray}
or equivalently, the conjugation formula
\begin{eqnarray}
\lefteqn{e^{xL(1)}(-x^{-2})^{L(0)}Y(v,x_0)(-x^{-2})^{-L(0)}e^{-xL(1)}}\nonumber \\
&&= Y(e^{(x+x_0)L(1)}(-(x+x_0)^{-2})^{L(0)}v,-x_0/(x+x_0)x)
\end{eqnarray}
for any element $v$ of a vertex operator algebra, where the operators act on
the algebra itself, then we will be done. But for this, it is sufficient to
prove the following lemma:
\begin{lemma} Let $V$ be a vertex operator algebra.
The following conjugation formulas hold on $V:$
\begin{eqnarray}
x^{L(0)}Y(v,x_0)x^{-L(0)} &=& Y(x^{L(0)}v,xx_0)\\
e^{xL(1)}Y(v,x_0)e^{-xL(1)} &= &
Y(e^{x(1-xx_0)L(1)}(1-xx_0)^{-2L(0)}v,x_0/(1-xx_0)).
\end{eqnarray}
\end{lemma}
The proof of this lemma, which we omit here, can be found in \cite{FHL}.
This finishes the proof of the theorem.
\end{pf}
The functor taking a $V$-module to its contragredient module has some
important properties which we state without proof (see \cite{FHL}):
\begin{propo}
There is a natural isomorphism between the double contragredient module
$(W'', Y'')$ and $(W, Y)$.
\end{propo}
\begin{propo}
The module $(W, Y)$ is irreducible if and only if $(W', Y')$ is irreducible.
\end{propo}
\begin{propo}\label{biform}
The module $(W, Y)$ is isomorphic to its contragredient module $(W', Y')$ if
and only if there exists a nondegenerate bilinear form $(\cdot, \cdot)_{W}$
on $W$ such that
\begin{equation}\label{1-55}
(W_{(m)}, W_{(n)})_{W}=0,\;\;\;m\ne n
\end{equation}
and
\begin{equation}\label{1-56}
(Y'(v,x)w_{1}, w_{2})_{W} =
(w_{1}, Y(e^{xL(1)}(-x^{-2})^{L(0)}v,x^{-1})w_{2})_{W}.
\end{equation}
If $V$
as a $V$-module is isomorphic to $V'$, the bilinear form $(\cdot, \cdot)_{V}$
is symmetric.
\end{propo}
We return to our problem of defining the vertex operator map for
$V^{\natural}$. Let $V$ be a vertex operator algebra and $W$ a $V$-module.
Assume that both $V$ and $W$ as $V$-modules are isomorphic to their contragredient modules.
By Proposition \ref{biform}, there are nondegenerate bilinear forms
$(\cdot, \cdot)_{V}$ and $(\cdot, \cdot)_{W}$ satisfying the two conditions
in Proposition \ref{biform}. In addition, $(\cdot, \cdot)_{V}$ is
symmetric. Assume that there is a vertex operator map
$$Y_{V\oplus W}: (V\oplus W)\otimes (V\oplus W)\to
(\text{End}\; (V\oplus W))[[x, x^{-1}]]$$
such that $(V\oplus W, Y_{V\oplus W}, \bold{1}, \omega)$ ($\bold{1}$ and
$\omega$ are the vacuum and the Virasoro element of $V$, respectively)
is a vertex operator algebra satisfying the following:
\begin{enumerate}
\item The vertex
operator algebra structure on $V$ and the module structure on $W$ are
substructures of it.
\item As a module for itself, it is isomorphic to its contragredient
module and the corresponding symmetric nondegenerate bilinear form
$(\cdot, \cdot)_{V\oplus W}$ is
defined by
\begin{equation}
((v_{1}, w_{1}), (v_{2}, w_{2}))_{V\oplus W}=(v_{1}, v_{2})_{V}
+(w_{1}, w_{2})_{W}
\end{equation}
for all $v_{1}, v_{2}\in V$ and $w_{1}, w_{2}\in W$.
\item The involution which is the identity on $V$ and is $-1$ on $W$ is an
automorphism of $(V\oplus W, Y_{V\oplus W}, \bold{1}, \omega)$.
\end{enumerate}
Then we must have the
following:
\begin{enumerate}
\item The module $W$ is $\Bbb{Z}$-graded.
\item The bilinear form $(\cdot, \cdot)_{W}$ is symmetric.
\item We have the following formulas: For any $v\in V$ and $w\in W$,
\begin{equation}\label{1-58}
Y_{V\oplus W}(w, x)v=e^{xL(-1)}Y_{W}(v, -x)w
\end{equation}
and
for any $v\in V$, $w_{1}, w_{2}, w_{3}\in W$,
\begin{eqnarray}
(w_{3}, Y_{V\oplus W}(w_{1}, x)w_{2})_{W}&=&0,\label{1-59}\\
(v, Y_{V\oplus W}(w_{1}, x)w_{2})_{V}&=&(Y_{W}(v, -x^{-1})
e^{xL(1)}(-x^{2})^{-L(0)}w_{1},
e^{x^{-1}L(1)}w_{2})_{W},\nonumber\\
&&\label{1-60}
\end{eqnarray}
where $Y_{V}$ and $Y_{W}$ are the vertex operator maps for $V$ and $W$,
respectively.
\end{enumerate}
We see that the vertex operator map $Y_{V\oplus W}$ is determined completely
by the vertex operator maps $Y_{V}$, $Y_{W}$, the bilinear forms
$(\cdot, \cdot)_{V}$, $(\cdot, \cdot)_{W}$ and (\ref{1-58})--(\ref{1-60}).
Thus even if we do not know whether $V\oplus W$ is such a vertex operator
algebra, we can still define a vertex operator map $Y_{V\oplus W}$ using
$Y_{V}$, $Y_{W}$, the bilinear forms
$(\cdot, \cdot)_{V}$, $(\cdot, \cdot)_{W}$
and (\ref{1-58})--(\ref{1-60}). In particular, since $V^{+}_{\Lambda}$ and
$(V_{\Lambda}^{T})^{+}$ as $V^{+}_{\Lambda}$-modules are both isomorphic
to their contragredient modules, we can define a vertex operator map
$Y_{V^{\natural}}$ for
$V^{\natural}=V^{+}_{\Lambda}\oplus (V_{\Lambda}^{T})^{+}$.
\section{Intertwining operators, fusion rules and Verlinde algebras}
We first define intertwining operators and fusion rules for a
vertex operator algebra following \cite{FHL}.
Let
\begin{equation}
V\{x\} = \left\lbrace \sum_{n
\in
\Bbb{C}}v_nx^n|v_n \in V\right\rbrace
\end{equation}
be the vector space of $V$-valued formal series involving
the complex powers of $x$ with coefficients in a
vector space $V.$
\begin{defi}
Let $V$ be a vertex operator algebra and let
$(W_1,Y_1)$, $(W_2,Y_2)$ and $(W_3,Y_3)$ be three $V$-modules (not
necessarily distinct, and possibly equal to $V)$. An {\it intertwining
operator of type ${3}\choose{12}$} (or {\it of type
${W_3}\choose{W_1\ W_2} $}) is a linear map $W_1\otimes W_2
\rightarrow W_3\{x\}$, or equivalently,
\begin{eqnarray}
W_1 &\rightarrow & (\mbox{\rm Hom}(W_2,W_3))\{x\}\nonumber\\
w & \mapsto & \cal{Y}(w,x) =\sum_{n\in
\Bbb{Q}}w_nx^{-n-1}\;\;\; (\mbox{\rm where}\;\;
w_n \in \mbox{\rm Hom}(W_2,W_3))
\end{eqnarray}
such that ``all the defining properties of a module action that make
sense hold"
(cf. the definition of $V$-module).
That is, for $v \in V$, $w_{(1)} \in W_1$ and
$w_{(2)} \in W_2,$
\begin{equation}\label{2-3}
w_{(1)n}w_{(2)} = 0\;\; \mbox{\rm for}\;\; n \;\;
\mbox{\rm whose real part is sufficiently large;}
\end{equation}
the following Jacobi identity holds for the operators $Y_{i}(v,\cdot )$,
$i=1, 2, 3$,
$\cal{Y}(w_{(1)},\cdot )$ acting on the element $w_{(2)}:$
\begin{eqnarray}
\lefteqn{x^{-1}_0\delta \left( {x_1-x_2\over x_0}\right)
Y_3(v,x_1)\cal{Y}(w_{(1)},x_2%
)w_{(2)}}\nonumber\\
&&\;\;\;\;-
x^{-1}_0\delta \left( {x_2-x_1\over -x_0}\right)
\cal{Y}(w_{(1)},x_2)Y_2(v,x_1%
)w_{(2)} \nonumber\\
&&=
x^{-1}_2\delta \left( {x_1-x_0\over x_2}\right) \cal{Y}(Y_1(v,x_0)w_{(1)},x_2)%
w_{(2)}
\end{eqnarray}
(note that the first term on the left-hand side is algebraically meaningful
because of condition (\ref{2-3}), and the other terms are meaningful by the
usual
properties of modules; also note that this Jacobi identity involves integral
powers of $x_0$ and $x_1$ and complex powers of $x_2);$
\begin{equation}
{d\over dx}\cal{Y}(w_{(1)},x) = \cal{Y}(L(-1)w_{(1)},x),
\end{equation}
where $L(-1)$ is the operator acting on $W_{1}.$
\end{defi}
We may denote the intertwining operator just defined by
\begin{equation}
\cal{Y}^3_{12}\;\;\; \mbox{\rm or}\;\;\cal{Y}^{W_3}_{W_1W_2},
\end{equation}
if necessary, to indicate its type.
Note that $Y(\cdot ,x)$ acting on $V$
is an example of
an intertwining operator of type $ V\choose {V\ V} $, and
$Y(\cdot ,x)$ acting on a $V$-module $W$ is an example of an intertwining
operator of type $ W\choose {V\ W} $. These intertwining operators
satisfy the normalization condition $Y(\bold{1},x) = 1$.
The intertwining operators of type $ 3\choose {1\ 2}$ clearly form a
vector space, which we denote by $\cal{V}^3_{12}$ or
$\cal{V}^{W_3}_{W_1W_2}$. We set
\begin{equation}
N^3_{12} = N^{W_3}_{W_1W_2} = \dim \ \cal{V}^3_{12}
\;\;(\le \infty ).
\end{equation}
These numbers are called the
{\it fusion rules} associated with the algebra and
modules. Then for example, assuming that $V$ and the $V$-module $W$ are
nonzero, the corresponding fusion rules are positive:
\begin{eqnarray}
N^V_{VV} &\ge& 1,\\
N^W_{VW} &\ge& 1,\\
N^W_{WV} &\ge& 1.
\end{eqnarray}
In \cite{FHL} and \cite{HL5}, it is shown that the fusion rules have
the following symmetry property: Define
\begin{equation}
N_{ijk}=N_{W_{i}W_{j}W_{k}}=N^{W'_{k}}_{W_{i}W_{j}}
\end{equation}
for $i, j, k=1, 2, 3$.
Then
for any element $\sigma\in S_{3}$,
we have
\begin{equation}\label{2-10}
N_{\sigma(1)\sigma(2)\sigma(3)}=N_{123}.
\end{equation}
If the vertex operator algebra $V$ is rational, that is, $V$ satisfies the
conditions:
(i) there are only finitely many irreducible $V$-modules
(up to equivalence), (ii) every $V$-module is completely
reducible, (iii) all the fusion rules are finite, then we can
define an algebra called the {\it fusion algebra} or the {\it Verlinde
algebra} using fusion rules for the irreducible modules as follows:
Assume that there are $m$ inequivalent irreducible $V$-modules. Let
$A$ be the tensor product, over $\Bbb{Z}$, of the $K$-group of the $V$-modules with
$\Bbb{C}$. Then $A$ has a natural structure of a vector space. Since $V$ is
rational, we have
\begin{equation}
A=\sum_{i=1}^{m}\Bbb{C}\phi_{i}
\end{equation}
where $\phi_{i}$, $i=1, \dots, m$, are all the equivalence classes containing
irreducible modules. We define a product on $A$ by
\begin{equation}
\phi_{i}\cdot \phi_{j}=\sum_{k=1}^{m}N_{ij}^{k}\phi_{k}
\end{equation}
for all $i, j=1, \dots, m$, where $N^{k}_{ij}$, $1\le i, j,k \le m$, are
the fusion rules $N^{W_{k}}_{W_{i}W_{j}}$ for any $W_{i}\in \phi_{i}$,
$W_{j}\in \phi_{j}$ and $W_{k}\in \phi_{k}$.
By the symmetry (\ref{2-10}), it is clear that this product is commutative.
When the intertwining operators for
the vertex operator algebra satisfy certain additional
conditions, it can be proved
that this product is also associative. One condition that
we need is that all irreducible $V$-modules are $\Bbb{R}$-graded.
If $V$ is rational, then this condition implies that every $V$-module is
$\Bbb{R}$-graded, that is, the weight of an element of a $V$-module is
always a real number.
We also need
an additional condition.
Given any $V$-modules $W_{1}$, $W_{2}$, $W_{3}$, $W_{4}$ and $W_{5}$,
let $\cal{Y}_{1}$, $\cal{Y}_{2}$, $\cal{Y}_{3}$ and $\cal{Y}_{4}$
be intertwining operators of type ${W_{4}}\choose {W_{1}W_{5}}$,
${W_{5}}\choose {W_{2}W_{3}}$, ${W_{5}}\choose {W_{1}W_{2}}$ and
${W_{4}}\choose {W_{5}W_{3}}$, respectively. Consider the following
conditions for the product of $\cal{Y}_{1}$ and $\cal{Y}_{2}$ and
for the iterate of $\cal{Y}_{3}$ and $\cal{Y}_{4}$, respectively:
\begin{description}
\item[Convergence and extension property for products]
There exists
an integer $N$
(depending only on $\cal{Y}_{1}$ and $\cal{Y}_{2}$), and
for any $w_{(1)}\in W_{1}$,
$w_{(2)}\in W_{2}$, $w_{(3)}\in W_{3}$, $w'_{(4)}\in W'_{4}$, there exist
$j\in \Bbb{N}$, $r_{i}, s_{i}\in \Bbb{R}$, $i=1, \dots, j$, and analytic
functions $f_{i}(z)$ on $|z|<1$, $i=1, \dots, j$,
satisfying
\begin{equation}
\mbox{\rm wt}\ w_{(1)}+\mbox{\rm wt}\ w_{(2)}+s_{i}>N,\;\;\;i=1, \dots, j,
\end{equation}
such that
\begin{equation}
\langle w'_{(4)}, \cal{Y}_{1}(w_{(1)}, x_{1})
\cal{Y}_{2}(w_{(2)}, x_{2})w_{(3)}\rangle_{W_{4}}
\bigg\vert_{x_{1}^{n}= e^{n\log z_{1}}, \;x_{2}^{n}=e^{n\log z_{2}},\;
n\in \Bbb{C}}
\end{equation}
is convergent when $|z_{1}|>|z_{2}|>0$ and can be analytically extended to
the multi-valued analytic function
\begin{equation}\label{phyper}
\sum_{i=1}^{j}z_{2}^{r_{i}}(z_{1}-z_{2})^{s_{i}}
f_{i}\left(\frac{z_{1}-z_{2}}{z_{2}}\right)
\end{equation}
when $|z_{2}|>|z_{1}-z_{2}|>0$.
\item[Convergence and extension property for iterates]
\hspace{.5em}There exists an integer
$\tilde{N}$
(depending only on $\cal{Y}_{3}$ and $\cal{Y}_{4}$), and
for any $w_{(1)}\in W_{1}$,
$w_{(2)}\in W_{2}$, $w_{(3)}\in W_{3}$, $w'_{(4)}\in W'_{4}$, there exist
$k\in \Bbb{N}$, $\tilde{r}_{i}, \tilde{s}_{i}\in \Bbb{R}$, $i=1, \dots, k$,
and analytic
functions $\tilde{f}_{i}(z)$ on $|z|<1$, $i=1, \dots, k$,
satisfying
\begin{equation}
\mbox{\rm wt}\ w_{(2)}+\mbox{\rm wt}\ w_{(3)}+\tilde{s}_{i}>\tilde{N},\;\;\;i=1, \dots, k,
\end{equation}
such that
\begin{equation}
\langle w'_{(4)},
\cal{Y}_{4}(\cal{Y}_{3}(w_{(1)}, x_{0})w_{(2)}, x_{2})w_{(3)}\rangle_{W_{4}}
\bigg\vert_{x^{n}_{0}=e^{n\log (z_{1}-z_{2})},\;x^{n}_{2}=e^{n\log z_{2}},\;
n\in \Bbb{C}}
\end{equation}
is convergent when $|z_{2}|>|z_{1}-z_{2}|>0$ and can be analytically extended
to the multi-valued analytic function
\begin{equation}
\sum_{i=1}^{k}z_{1}^{\tilde{r}_{i}}z_{2}^{\tilde{s}_{i}}
\tilde{f}_{i}\left(\frac{z_{2}}{z_{1}}\right)
\end{equation}
when $|z_{1}|>|z_{2}|>0$.
\end{description}
If for any $V$-modules $W_{1}$, $W_{2}$, $W_{3}$, $W_{4}$ and $W_{5}$ and
any intertwining operators $\cal{Y}_{1}$ and $\cal{Y}_{2}$
of the types as above, the convergence and extension property of products
holds,
we say that
{\it the products of the
intertwining operators for $V$ have the convergence and extension property}.
Similarly we can define the meaning of the phrase
{\it the iterates of the intertwining
operators for $V$ have the convergence and extension property}.
We also need the notion of a generalized module: a {\it generalized $V$-module}
is
a pair $(W, Y)$ satisfying all the axioms for a $V$-module except
the two grading axioms: $\dim W_{(n)}< \infty$ for all $n\in \Bbb{C}$ and
$W_{(n)}=0$ for $n\in \Bbb{C}$ whose real part is sufficiently small.
If a generalized $V$-module $W=\coprod_{n\in \Bbb{C}}W_{(n)}$
satisfies the second grading axiom above,
we say that $W$ is {\it lower-truncated}.
We have the following
result:
\begin{theo}
Let $V$ be a rational vertex operator algebra for which all irreducible modules
are $\Bbb{R}$-graded. Assume
that $V$ satisfies the following conditions:
\begin{enumerate}
\item Every finitely-generated lower-truncated generalized $V$-module
is a $V$-module.
\item The products or the iterates of
the intertwining operators for $V$ have the convergence and extension property.
\end{enumerate}
Then the Verlinde algebra for $V$ is a commutative associative algebra
with unit.
\end{theo}
This theorem is an easy consequence of the associativity of the tensor product
theory for modules for a vertex operator algebra developed by Lepowsky and
the author \cite{HL1} \cite{HL4}--\cite{HL6} \cite{H6}.
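As a small freestanding illustration of such a Verlinde algebra (this example is ours and is not taken from the discussion above), one can take the three irreducible modules of the $c=1/2$ Virasoro minimal model, with the well-known Ising fusion rules, and verify commutativity and associativity of the product directly from the structure constants:
\begin{verbatim}
import numpy as np

# Ising fusion rules (assumed standalone example): modules 0 = vacuum, 1 = eps, 2 = sigma.
# N[i, j, k] is the fusion rule N_{ij}^k.
N = np.zeros((3, 3, 3), dtype=int)
for j in range(3):                       # the vacuum acts as the unit
    N[0, j, j] = N[j, 0, j] = 1
N[1, 1, 0] = 1                           # eps   x eps   = vacuum
N[1, 2, 2] = N[2, 1, 2] = 1              # eps   x sigma = sigma
N[2, 2, 0] = N[2, 2, 1] = 1              # sigma x sigma = vacuum + eps

# commutativity: N_{ij}^k = N_{ji}^k
print(np.array_equal(N, N.transpose(1, 0, 2)))
# associativity: sum_m N_{ij}^m N_{mk}^l = sum_m N_{jk}^m N_{im}^l
lhs = np.einsum('ijm,mkl->ijkl', N, N)
rhs = np.einsum('jkm,iml->ijkl', N, N)
print(np.array_equal(lhs, rhs))          # both print True
\end{verbatim}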
Fusion rules and Verlinde algebras are very important concepts and
tools in the study of conformal field theory. One of the most
interesting results
in the mathematical study of conformal field theory is that
the fusion rules and their higher-genus generalizations
for the WZNW conformal field theory can be expressed
in terms of elementary functions (actually, the sine functions)
\cite{V}.
On the other hand, these fusion rules and generalizations can also be
shown to be equal to the dimensions of the space of ``generalized
theta functions'' on the moduli spaces of semistable principal bundles
on smooth projective irreducible algebraic curves \cite{KNR}. Thus
one obtains a simple and beautiful formula for these dimensions. These
are the so-called Verlinde formulas. Mathematical proofs of these
formulas have been obtained in \cite{TUY} and
\cite{F}.
\section{Geometric interpretation of vertex operator algebras}
We give a brief description of the geometric interpretation of vertex operator
algebras in this section. The geometric interpretations of vertex operators,
their duality properties and their transformation properties under the
projective transformations were first given by Frenkel \cite{Fr} using
the geometry of $\Bbb{C}\cup \{\infty\}$ with some discs deleted. The
complete geometric interpretation is obtained in \cite{H1} and \cite{H8}.
The formulation using operads is given in \cite{HL2} and \cite{HL3}.
See \cite{H1}--\cite{H4}, \cite{H8}, \cite{HL2} and \cite{HL3} for
details and other expositions.
In classical algebraic theories we study mostly algebraic structures defined by
binary operations. These binary operations can always be described by
one-dimensional geometric objects. For example,
Lie algebras can be described by
binary trees. A Lie algebra can be defined to be
a ``linear representation'' of the moduli space of binary trees with a ``welding
operation,'' satisfying certain ``conservation'' and ``orientation'' properties
\cite{H1} \cite{H5}.
Any
associative binary operation, for example, the multiplication for a group or
an algebra, can be described using
the moduli space of circles with punctures and
local coordinates \cite{HL2} \cite{HL3}.
The general philosophy behind the geometric interpretation of vertex
operator algebras is to study certain two-dimensional analogues of the
classical binary operations,
that is, to study operations described by two-dimensional analogues of binary
trees or circles with punctures and local coordinates.
The two-dimensional analogues, used to describe vertex operator
algebras, of both binary trees
and circles with punctures and local coordinates are spheres with
analytically parametrized boundaries,
where by spheres we mean one-dimensional compact connected genus-zero complex
manifolds. These spheres with boundaries are in some sense
equivalent to spheres with ordered points (which are called punctures),
one negatively oriented and others positively oriented,
and local coordinates vanishing at these
points, as is explained in \cite{H1} and \cite{H8}.
We will use the index $0$
to denote the negatively oriented puncture on such a sphere
with punctures and local coordinates. Let $S_{1}$ and $S_{2}$ be two
such spheres
with punctures and local coordinates,
$p_{j}$, $j=0, \dots, m$, the punctures of $S_{1}$,
$q_{k}$, $k=0, \dots, n$, the
punctures of $S_{2}$, $(U_{j}, \varphi_{j})$, $j=0, \dots, m$, the
local coordinates vanishing at $p_{j}$ and
$(V_{k}, \psi_{k})$, $k=0, \dots, n$, the
local coordinates vanishing at $q_{k}$.
For any integer $i$ satisfying $0<i\le m$,
we would like to sew $S_{1}$ and $S_{2}$ through the $i$-th puncture of $S_{1}$
and the $0$-th puncture of $S_{2}$ to obtain
a new sphere with punctures and local coordinates.
Assume that there exists a
positive number $r$ such that $\varphi _{i}(U_{i})$ contains the
closed disc $\bar{B}_{0}^{r}$ centered at $0$ with radius $r$ and
$\psi_{0}(V_{0})$
contains the closed disc $\bar{B}_{0}^{1/r}$ centered at $0$ with radius $1/r$.
Assume also that $p_{i}$ and $q_{0}$ are the only punctures in
$\varphi_{i}^{-1}(\bar{B}_{0}^{r})$ and $\psi
_{0}^{-1}(\bar{B}_{0}^{1/r})$, respectively. In this case we say that
{\it the $i$-th puncture of $S_{1}$ can be sewn with
the $0$-th puncture of $S_{2}$}. When this holds,
we obtain a sphere with $m+n$ punctures and local coordinates by
cutting $\varphi_{i}^{-1}(B_{0}^{r})$ and $\psi_{0}^{-1}(B_{0}^{1/r})$
{}from $S_{1}$ and $S_{2}$, respectively, and then identifying the boundaries
of the resulting surfaces using the map $\varphi_{i} \circ \gamma \circ
\psi_{0}^{-1}$ where $\gamma$ is the map {}from $\Bbb{C}\setminus \{ 0\}$ to
itself defined by $\gamma(z)=1/z$. The punctures (with ordering) of this sphere
with punctures and local coordinates are $p_{0}$, $\dots$, $p_{i-1}$, $q_{1}$,
$\dots$,
$q_{n}$, $p_{i+1}$, $\dots$, $p_{m}$. The local coordinates
vanishing at these punctures are given in the obvious way. Thus we have a
partial
operation. Given two such spheres with punctures and local coordinates, $S_{1}$
and $S_{2}$, with the same number of punctures,
if there is an analytic isomorphism {}from the underlying sphere of $S_{1}$
to the underlying sphere of $S_{2}$ such that the ordered
punctures of $S_{1}$ are
mapped to the ordered punctures of $S_{2}$ and the germs containing
the pull-backs of the
local coordinates of $S_{2}$ are the same as the germs containing
the local coordinates of $S_{1}$, we say that $S_{1}$ and $S_{2}$
are {\it conformally equivalent}. This is an equivalence relation.
The space of conformal equivalence classes of such spheres with punctures and
local coordinates is
called the {\it moduli space of spheres with punctures and local coordinates}.
The moduli space of spheres
with $n+1$ punctures and local coordinates ($n\ge 1$) can be
identified with $K(n)=M^{n-1}\times H \times H_{c}^{n}$ where
$H$ is the set of all sequences $A$ of complex numbers such that
$\mbox{\rm exp}(\displaystyle
\sum_{j=1}^{\infty}A_{j}x^{j+1}\frac{d}{dx})\cdot x$ is a
convergent power series in some neighborhood of $0$,
$H_{c}=\Bbb{C}^{\times} \times H$,
and $M^{n-1}$ is the subset of elements in $\Bbb{C}^{n-1}$ with nonzero
and distinct components. The moduli space of spheres with one puncture and
local coordinates can be
identified
with $K(0)=\{B\in H\;|\;B_{1}=0\}$. Then the moduli space of spheres with
punctures and local coordinates can be identified with
$\cup_{n=1}^{\infty}K(n)$. {}From now on we will refer to $K(n)$,
$n\in \Bbb{N}$ as the moduli
space of spheres with $n+1$ punctures and local coordinates.
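As a simple illustration of this identification (using the normal form just described), one can check that the conformal equivalence class of the sphere $\Bbb{C}\cup \{\infty\}$ with the negatively oriented puncture $\infty$, the positively oriented punctures $z\in \Bbb{C}^{\times}$ and $0$, and the standard local coordinates $w\to w^{-1}$, $w\to w-z$ and $w\to w$ vanishing at these punctures corresponds to the point
\[
(z; \bold{0}, (1, \bold{0}), (1, \bold{0}))\in M^{1}\times H\times H_{c}^{2}=K(2),
\]
since each standard local coordinate has leading coefficient $1$ and all higher coefficients equal to $0$.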
The sewing operation for
spheres with punctures and local coordinates induces a partial operation on
$\cup_{n=1}^{\infty}K(n)$.
It is still called the {\it sewing operation} and is denoted
$_{^{i}}\infty_{^{0}}$.
Note that there is an obvious action of $S_{n}$ on $K(n)$ by permuting the
ordering of the $n$ positively oriented punctures and local coordinates.
Now we have a sequence of sets $K=\{K(n)\}_{n\in \Bbb{N}}$
together with partial operations
$_{^{i}}\infty_{^{0}}: K(j)\times K(k)\to K(j+k-1)$,
$j\in \Bbb{Z}_{+}$, $k\in \Bbb{N}$, $i$ satisfying $1\le i\le j$,
and actions of $S_{n}$ on $K(n)$, $n\in \Bbb{Z}_{+}$, respectively. It is
easy to show that the sewing operations satisfy the following conditions
when the sewing operations appearing in the equations below
exist:
\begin{enumerate}
\item For any $j\in \Bbb{Z}_{+}$, $k, l\in \Bbb{N}$, $i_{1}$,
$1\le i_{1}\le j$, $i_{2}$,
$1\le i_{2}\le j+k-1$, $Q_{1}\in K(j)$, $Q_{2}\in K(k)$, $Q_{3}\in K(l)$,
\begin{equation}
(Q_{1\;^{i_{1}}}\infty_{^{0}}Q_{2})_{^{i_{2}}}\infty_{^{0}}Q_{3}=
\left\{\begin{array}{ll}
(Q_{1\;^{i_{2}}}\infty_{^{0}}Q_{3})_{^{l+i_{1}-1}}\infty_{^{0}}Q_{2},
&i_{2}< i_{1},\\
Q_{1\;^{i_{1}}}\infty_{^{0}}(Q_{2\;^{i_{2}-i_{1}+1}}\infty_{^{0}}Q_{3}),
&i_{1}\le i_{2}<i_{1}+k,\\
(Q_{1\;^{i_{2}-k+1}}\infty_{^{0}}Q_{3})_{^{i_{1}}}\infty_{^{0}}Q_{2},
&i_{1}+k\le i_{2}.\end{array} \right.
\end{equation}
\item For any $j\in \Bbb{Z}_{+}$, $k\in \Bbb{N}$, $i$, $1\le i\le j$,
$Q_{1}\in K(j)$, $Q_{2}\in
K(k)$, $\sigma\in S_{j}$ and $\tau\in S_{k}$,
\begin{equation}
\sigma(Q_{1})_{^{i}}\infty_{^{0}}Q_{2}
=\sigma(\stackrel{j-1}{\overbrace{1, \dots, 1}}, k,
\stackrel{j-k}
{\overbrace{1, \dots, 1}})(Q_{1\;^{\sigma(i)}}\infty_{^{0}}Q_{2}),
\end{equation}
\begin{equation}
Q_{1\;^{i}}\infty_{^{0}}\tau(Q_{2})
=(\stackrel{i-1}{\overbrace{1\oplus \cdots \oplus 1}} \oplus
\tau \oplus \stackrel{j-i}{\overbrace{1\oplus \cdots \oplus 1}})
(Q_{1\;^{i}}\infty_{^{0}}Q_{2}).
\end{equation}
\item Let $I=(\bold{0}, (1, \bold{0}))\in H\times (\Bbb{C}^{\times}\times
H)=K(1)$. Then for any $k\in \Bbb{N}$, $i$, $1\le i\le k$, $Q\in K(k)$,
\begin{equation}
Q_{^{i}}\infty_{^{0}}I=I_{^{1}}\infty_{^{0}}Q=Q.
\end{equation}
\end{enumerate}
A sequence $\{\cal{X}(j)\}_{j\in \Bbb{N}}$ of sets equipped with
$\circ_{i}: \cal{X}(j)\times \cal{X}(k)\to \cal{X}(j+k-1)$,
$j\in \Bbb{Z}_{+}$, $k\in \Bbb{N}$, $1\le i\le j$,
actions of $S_{n}$ on $\cal{X}(n)$, $n\in \Bbb{Z}_{+}$, respectively,
and $I\in \cal{X}(1)$ satisfying the conditions (1)--(3) above with
$K(n)$, $n\in \Bbb{N}$, replaced by $\cal{X}(n)$, $_{^{i}}\infty_{^{0}}$
by $\circ_{i}$, is called an {\it operad} \cite{M1}.
If the operations $\circ_{i}$
are only partial and conditions (1)--(3) are satisfied when the operations
in the equations in (1)--(3) exist, it is called a {\it partial operad}
\cite{M2} \cite{HL2} \cite{HL3}.
Thus we see that $K$ is a partial operad. We can also give a topological
structure and a complex analytic structure to $K$ such that
the sewing operations
$_{^{i}}\infty_{^{0}}$ are continuous and complex analytic.
We shall define a (geometric)
vertex
operator algebra to be a ``linear projective representation''
of this partial operad satisfying some additional conditions.
In the representation theory of groups, a linear projective representation
of a group is a linear representation of a central extension of the group.
For $K$, we also have certain extensions which are analogues of
central extensions of groups. These extensions are constructed using
determinant lines over spheres with analytically
parametrized boundary.
We describe briefly Segal's work on determinant lines over
Riemann surfaces with analytically
parametrized boundary here.
For details, see \cite{S}. Let $\Sigma$ be a compact Riemann surface
with analytically
parametrized and oriented boundary components. We have the Cauchy-Riemann
operator $\overline{\partial}$ {}from the space $\Omega^{0}(\Sigma)$
of smooth functions on
the surface to the space $\Omega^{0, 1}(\Sigma)$
of $(0, 1)$-forms on the surface. The boundary of $\Sigma$ can be
decomposed as $\partial \Sigma=\cup_{i=1}^{k}C_{i}^{\epsilon_{i}}$ where
for any $i$, $1\le i\le k$,
$C_{i}^{\epsilon_{i}}$ is a connected component of $\partial \Sigma$
and thus is parametrized by an analytic map {}from the circle $S^{1}$ to
$C_{i}^{\epsilon_{i}}$ and where $\epsilon_{i}=\pm$ indicates the orientation
of the component. Any smooth function on $C_{i}^{\epsilon_{i}}$ can
be decomposed as the sum of two smooth functions, one of which,
as a function on
$S^{1}$, has a Fourier expansion of the form $\sum_{n\ge 0}a_{n}e^{2\pi
n\theta i}$ ($\theta$ is the usual parametrization of the circle by angles)
and the other of which, as a function on $S^{1}$, has a
Fourier expansion of the
form $\sum_{n< 0}a_{n}e^{2\pi
n\theta i}$. If $\epsilon_{i}=+$ ($\epsilon_{i}=-$), that is,
this component is positively (negatively) oriented,
we denote by $\Omega^{0}_{+}(C_{i}^{\epsilon_{i}})$
the space of all smooth functions on $C_{i}^{\epsilon_{i}}$ which
as functions on $S^{1}$ have Fourier expansions of the form
$\sum_{n\ge 0}a_{n}e^{2\pi
n\theta i}$ ($\sum_{n< 0}a_{n}e^{2\pi
n\theta i}$) and by $\Omega^{0}_{-}(C_{i}^{\epsilon_{i}})$ the space of smooth
functions on $C_{i}^{\epsilon_{i}}$ which
as functions on $S^{1}$ have Fourier expansions of the form
$\sum_{n< 0}a_{n}e^{2\pi
n\theta i}$ ($\sum_{n\ge 0}a_{n}e^{2\pi
n\theta i}$). Thus the space $\Omega^{0}(\partial \Sigma)$ of all smooth
functions on $\partial \Sigma$ can be decomposed as
$\oplus_{i=1}^{k}(\Omega^{0}_{+}(C_{i}^{\epsilon_{i}})\oplus
\Omega^{0}_{-}(C_{i}^{\epsilon_{i}}))$. Following Segal's notation, let
\begin{equation}
\Omega^{0}_{+}(\partial \Sigma)
=\oplus_{i=1}^{k}\Omega^{0}_{-\epsilon_{i}}(C_{i}^{\epsilon_{i}})
\subset \Omega^{0}(\partial \Sigma).
\end{equation}
(Note that in our notation, it is better to denote this space by
$\Omega^{0}_{-}(\partial \Sigma)$. We denote it by
$\Omega^{0}_{+}(\partial \Sigma)$ so that it agrees with Segal's notation.)
Let $\mbox{\rm pr}$ be the composition of the restriction {}from
$\Omega^{0}(\Sigma)$ to $\Omega^{0}(\partial \Sigma)$ and the
projection
{}from
$\Omega^{0}(\partial \Sigma)$ to $\Omega^{0}_{+}(\partial \Sigma)$.
We have an operator
\begin{equation}
\overline{\partial}\oplus \mbox{\rm pr}:
\Omega^{0}(\Sigma)\to \Omega^{0, 1}(\Sigma)
\oplus \Omega^{0}_{+}(\partial \Sigma).
\end{equation}
Using the theory of elliptic boundary problems on manifolds with
boundaries (see, for example, \cite{Ho}), we can show that
$\overline{\partial}\oplus \mbox{\rm pr}$ can be extended to Fredholm
operators {}from suitable Sobolev spaces on $\Sigma$ to direct sums of
closed subspaces of suitable Sobolev spaces on $\Sigma$ and closed
subspaces of suitable Sobolev spaces on $\partial \Sigma$. In
addition, the kernels of these extensions are equal to the kernel of
$\overline{\partial}\oplus \mbox{\rm pr}$ and the orthogonal
complements of the images of these extensions are in $\Omega^{0,
1}(\Sigma)
\oplus \Omega^{0}_{+}(\partial \Sigma)$. Thus we can regard the kernel and
cokernel of $\overline{\partial}\oplus \mbox{\rm pr}$ as the kernels and
cokernels of its extensions. Since these extensions are Fredholm, the kernel
and cokernel of $\overline{\partial}\oplus \mbox{\rm pr}$ are
finite-dimensional. The determinant line over $\Sigma$ is defined as
\begin{equation}
\mbox{\rm Det}_{\Sigma}=\mbox{\rm Det}\; (\mbox{\rm Ker}\;
(\overline{\partial}\oplus \mbox{\rm pr}))^{*}\otimes
\mbox{\rm Det}\;\mbox{\rm Coker}\;(\overline{\partial}\oplus \mbox{\rm pr})
\end{equation}
where $\mbox{\rm Det}\; (\mbox{\rm Ker}\;
(\overline{\partial}\oplus \mbox{\rm pr}))^{*}$ and
$\mbox{\rm Det}\;
\mbox{\rm Coker}\;(\overline{\partial}\oplus \mbox{\rm pr})$ are the
highest nonzero exterior powers of $(\mbox{\rm Ker}\;
(\overline{\partial}\oplus \mbox{\rm pr}))^{*}$ and
$\mbox{\rm Coker}\;(\overline{\partial}\oplus \mbox{\rm pr})$, respectively.
The main property of determinant lines over
Riemann surfaces with analytically
parametrized and oriented boundary components is that if we
sew two such Riemann surfaces, $\Sigma_{1}$ and $\Sigma_{2}$,
by identifying certain boundary components
on $\Sigma_{1}$ to certain boundary components with
opposite orientations on $\Sigma_{2}$ using the given analytic
parametrizations to obtain another such,
denoted by $\Sigma_{1}\infty \Sigma_{2}$,
then there exists a canonical isomorphism
\begin{equation}\label{3.8}
\ell_{\Sigma_{1}, \Sigma_{2}}:
\mbox{\rm Det}_{\Sigma_{1}}\otimes \mbox{\rm Det}_{\Sigma_{2}}
\to \mbox{\rm Det}_{\Sigma_{1}\infty\Sigma_{2}}.
\end{equation}
These determinant lines give a holomorphic line bundle over the moduli space
of Riemann surfaces with oriented and analytically parametrized boundaries,
and there is a canonical connection on this line bundle.
See \cite{S} for more details.
Now we want to use Segal's work described above to define the determinant line
for an element $Q$ of $K$. We need to find a sphere with analytically
parametrized and oriented boundary $\Sigma_{Q}$
determined uniquely by $Q$. For any $Q\in K$,
there is a unique sphere with punctures
and local coordinates in $Q$ such that its underlying sphere is
$\Bbb{C}\cup \{\infty \}$, the negatively oriented puncture is
$\infty$, the last positively oriented puncture is $0$, the value at
$\infty$ of the derivative of the local coordinate map at $\infty$ is
$1$ and all the local coordinate neighborhoods at the punctures
are the preimages under the
local coordinate maps of the maximal open disks (possibly with
infinite radius) centered at $0$ on which the inverses of local
coordinate maps have well-defined analytic extensions.
For any positive real number $r$ and any puncture, consider the closed disk of
radius equal to $r$ times the minimum of $1$ and half of
the radius of the maximal disk above at the puncture.
(To avoid closed disks with
infinite radius, we choose the minimum of $1$ and half of
the radius of the maximal disk instead of half of
the radius of the maximal disk.) For a fixed $r$, a closed disk above is
called a {\it closed disk associated to $r$}. Let $X$ be the set of all
positive real numbers such that if $r\in X$, then at any puncture
the closed disk associated to $r$ is contained in
the maximal open disk above and preimages under local coordinate maps
of closed
disks associated to $r$ at different punctures
do not intersect each other.
Let $r_{0}=\sup X$ and
$r_{1}=\min(1, \frac{r_{0}}{2})$. (To make sure that $r_{1}$ is not
$\infty$, we define $r_{1}$ to be $\min(1, \frac{r_{0}}{2})$ instead of
$\frac{r_{0}}{2}$.)
We obtain a Riemann surface with oriented and analytically parametrized
boundary components $\Sigma_{Q}$ by cutting the preimages
of the closed disks associated to $r_{1}$ and giving its
boundary components the obvious orientations and analytic
parametrizations (by first mapping the unit circle to the circle with
radius $r_{1}$). We define
\begin{equation}
\mbox{\rm Det}_{Q}=\mbox{\rm Det}_{\Sigma_{Q}}.
\end{equation}
We consider annuli on the complex plane with two circles centered at $0$
as boundaries. They can be degenerate in the sense that the two
boundary circles are the
same. With the obvious orientations of the boundary circles and with
multiplications by the radii of the
boundary circles as analytic boundary
parametrizations, these annuli become Riemann surfaces with
oriented and analytically parametrized
boundary components.
If the inner circle of such an annulus is the unit circle, we call it a
{\it canonical annulus with
oriented and analytically parametrized
boundary components}.
For $m, n\in \Bbb{N}$, $Q_{1}\in K(m)$ and $Q_{2}\in K(n)$ such
that $Q_{1^{i}}\infty_{^{0}}Q_{2}$ exists, we can find unique
Riemann surfaces with
oriented and analytically parametrized
boundary components $A$, $B$, $C$, $D$ and $E$
which in general are not connected
and are disjoint unions of
canonical annuli with
oriented and analytically parametrized
boundary components, such that
$((A\infty \Sigma_{Q_{1}})\infty B)\infty(\Sigma_{Q_{2}}\infty C)$
is conformally equivalent to $(D\infty\Sigma_{Q_{1^{i}}\infty_{^{0}}Q_{2}})
\infty E$.
So we have a canonical isomorphism from
$\mbox{\rm Det}_{((A\infty \Sigma_{Q_{1}})\infty B)
\infty(\Sigma_{Q_{2}}\infty C)}$ to
$\mbox{\rm Det}_{(D\infty\Sigma_{Q_{1^{i}}\infty_{^{0}}Q_{2}})
\infty E}$.
It can be shown easily that
$\mbox{\rm Det}_{A}$, $\mbox{\rm Det}_{B}$, $\mbox{\rm Det}_{C}$,
$\mbox{\rm Det}_{D}$ and $\mbox{\rm Det}_{E}$ are
canonically isomorphic to $\Bbb{C}$.
Thus we obtain canonical isomorphisms from
$\mbox{\rm Det}_{\Sigma_{Q_{1}}}\otimes \mbox{\rm Det}_{\Sigma_{Q_{2}}}$
to $\mbox{\rm Det}_{A}\otimes \mbox{\rm Det}_{\Sigma_{Q_{1}}}\otimes
\mbox{\rm Det}_{B}\otimes
\mbox{\rm Det}_{\Sigma_{Q_{2}}}\otimes \mbox{\rm Det}_{C}$ and from
$\mbox{\rm Det}_{D}\otimes \mbox{\rm Det}_{Q_{1^{i}}\infty_{^{0}}Q_{2}}
\otimes \mbox{\rm Det}_{E}$ to $\mbox{\rm Det}_{Q_{1^{i}}\infty_{^{0}}Q_{2}}$.
Composing in the obvious order the three canonical isomorphisms above with
$$\ell_{(A\infty \Sigma_{Q_{1}})\infty B,
\Sigma_{Q_{2}}\infty C}\circ
(1\otimes \ell_{Q_{2}, C})\circ
(\ell_{A\infty \Sigma_{Q_{1}}, B}\otimes 1\otimes 1)\circ
(\ell_{A, \Sigma_{Q_{1}}}\otimes 1\otimes 1\otimes 1),$$
we obtain
a canonical isomorphism
\begin{equation}
\ell^{i}_{Q_{1}, Q_{2}}: \mbox{\rm Det}_{Q_{1}}\otimes \mbox{\rm Det}_{Q_{2}}
\to \mbox{\rm Det}_{Q_{1^{i}}\infty_{^{0}}Q_{2}}.
\end{equation}
Let
\begin{eqnarray}
\tilde{K}(n)&=&\cup_{Q\in K(n)}\mbox{\rm Det}_{Q}, \;\;\;n\in \Bbb{N},\\
\tilde{K}&=&\{\tilde{K}(n)\}_{n\in \Bbb{N}}.
\end{eqnarray}
Then $\tilde{K}(n)$, $n\in \Bbb{N}$, are holomorphic line bundles (in a
suitable sense) over $K(n)$.
There are also operations in $\tilde{K}$ obtained {}from the sewing operations
in $K$ and the canonical isomorphisms for determinant lines defined as follows:
Let $m, n\in \Bbb{N}$, $i$ an integer satisfying $1\le i\le m$, $Q_{1}\in
K(m)$,
$Q_{2}\in K(n)$,
$\tilde{Q}_{1}\in \mbox{\rm Det}_{Q_{1}} \subset \tilde{K}(m)$ and
$\tilde{Q}_{2}\in \mbox{\rm Det}_{Q_{2}}\subset \tilde{K}(n)$,
such that $Q_{1^{i}}\infty_{^{0}}Q_{2}$ exists.
We define
\begin{equation}
\tilde{Q}_{1^{i}}\widetilde{\infty}_{^{0}}\tilde{Q}_{2}
=\ell^{i}_{Q_{1}, Q_{2}}(\tilde{Q}_{1} \otimes \tilde{Q}_{2})
\in \mbox{\rm Det}_{Q_{1^{i}}\infty_{^{0}}Q_{2}}\subset \tilde{K}(m+n-1).
\end{equation}
Thus we obtain a partial operation
$_{^{i}}\widetilde{\infty}_{^{0}}: \tilde{K}(m)\times \tilde{K}(n)
\to \tilde{K}(m+n-1)$ for any $m, n\in \Bbb{N}$ and any
integer $i$ satisfying $1\le i\le m$.
Note that the definition of determinant line over an element
$Q\in K(n)$ for any $n\in \Bbb{N}$ does not use the ordering of the
positively oriented punctures of $Q$. Thus for any $\sigma\in S_{n}$,
$\mbox{\rm Det}_{Q}$ is canonically isomorphic to $\mbox{\rm Det}_{\sigma(Q)}$.
We denote this canonical isomorphism by $\varphi^{\sigma}_{Q}$.
For any $\tilde{Q}\in \mbox{\rm Det}_{Q} \subset \tilde{K}(n)$, we define
\begin{equation}
\sigma(\tilde{Q})=\varphi^{\sigma}_{Q}(\tilde{Q})\in \mbox{\rm Det}_{\sigma(Q)}
\subset \tilde{K}(n).
\end{equation}
We obtain an action of $S_{n}$ on $\tilde{K}(n)$.
Let $\tilde{I}$ be the unique element of $\mbox{\rm Det}_{I}$ satisfying
$\ell^{1}_{I, I}(\tilde{I} \otimes \tilde{I})=\tilde{I}$.
Then the sequence $\tilde{K}$ together with the operations
$$_{^{i}}\widetilde{\infty}_{^{0}}: \tilde{K}(m)\times \tilde{K}(n)\to
\tilde{K}(m+n-1),$$
$m, n\in \Bbb{N}$, $1\le i\le m$,
the actions of the symmetric groups and $\tilde{I}$ is a partial operad.
Also the operations $_{^{i}}\widetilde{\infty}_{^{0}}$,
$m, n\in \Bbb{N}$, $1\le i\le m$, are all continuous and analytic with
respect to the topological and analytic structures on the holomorphic
line bundles $\tilde{K}(n)$ over $K(n)$,
$n\in \Bbb{N}$.
For any $n\in \Bbb{N}$, there is
a canonical connection on the determinant line bundle $\tilde{K}(n)$
induced from the canonical connection on the determinant line bundle over
the moduli space of Riemann surfaces with oriented and analytically parametrized
boundaries.
Using this connection, we can prove that
the determinant line bundle $\tilde{K}(n)$ is trivial.
Thus for any complex number $c$, a $c$-th power of
determinant line bundle $\tilde{K}(n)$ is well defined. Note
that a $c$-th power of $\tilde{K}(n)$
is the line bundle whose fibers are the
same as those of $\tilde{K}(n)$ and whose transition
functions are equal to certain branches of
the $c$-th powers of the transition functions
of $\tilde{K}(n)$. The existence of a $c$-th power of $\tilde{K}(n)$ means that
we can choose the branches of the $c$-th powers of the transition functions
of $\tilde{K}(n)$ consistently so that they also give a holomorphic
line bundle, a $c$-th power of $\tilde{K}(n)$.
So we see that because $\tilde{K}(n)$ is trivial,
there is only one $c$-th power of $\tilde{K}(n)$
and it is in fact canonically isomorphic to
$\tilde{K}(n)$.
We denote the $c$-th power of $\tilde{K}(n)$
by $\tilde{K}^{c}(n)$.
Since, as a line bundle over $K(n)$, $\tilde{K}^{c}(n)$ is
canonically isomorphic to $\tilde{K}(n)$, we shall not distinguish between the
elements of $\tilde{K}(n)$ and the elements of $\tilde{K}^{c}(n)$.
In particular, for any element $\tilde{Q}$ of $\tilde{K}^{c}(n)$
there is $Q\in K(n)$ such that $\tilde{Q}$ is in $\mbox{\rm Det}_{Q}$.
The difference between $\tilde{K}^{c}$ and $\tilde{K}$ is
that the canonical isomorphisms for them are different. We can prove that
we can choose values of $\ell^{i}_{Q_{1}, Q_{2}}$ and $\varphi^{\sigma}_{Q}$
raised to the complex power $c$
(denoted by $(\ell^{i}_{Q_{1}, Q_{2}})^{c}$ and $(\varphi^{\sigma}_{Q})^{c}$,
respectively) consistently
for $m, n\in \Bbb{N}$, $1\le i\le m$, $Q_{1}\in K(m)$, $Q, Q_{2}\in K(n)$
and $\sigma\in S_{n}$,
such that $\tilde{K}^{c}=\{\tilde{K}^{c}(n)\}_{n\in \Bbb{N}}$ becomes
a partial operad; the operations $_{^{i}}\widetilde{\infty}^{c}_{^{0}}$
are defined in the same
way as those for $_{^{i}}\widetilde{\infty}_{^{0}}$ except that
$\ell^{i}_{Q_{1}, Q_{2}}$ is replaced by $(\ell^{i}_{Q_{1}, Q_{2}})^{c}$,
the actions of the symmetric groups are
defined using $(\varphi^{\sigma}_{Q})^{c}$
and the identity element is $\tilde{I}\in
\tilde{K}^{c}(1)$.
The canonical connection on $\tilde{K}(n)$ gives
a canonical connection on $\tilde{K}^{c}(n)$. Beginning with $\tilde{I}$,
we obtain a section $\psi_{1}$ of $\tilde{K}^{c}(1)$ by parallel transport
(this section is in fact not continuous when $c\ne 0$).
Let $J\in K(0)$ be the conformal equivalence class containing the sphere
$\Bbb{C}\cup \{\infty \}$ with the negatively oriented puncture $\infty$ and
the standard local coordinate $w\to w^{-1}$ vanishing at $\infty$ and let
$\tilde{J}$ be any fixed element of $\mbox{\rm Det}_{J}$. Then beginning with
$\tilde{J}$,
we obtain a section $\psi_{0}$ of $\tilde{K}^{c}(0)$ by parallel transport.
Let $P(1)\in K(2)$ be the conformal equivalence class containing
the sphere $\Bbb{C}\cup \{\infty\}$ with the negatively oriented puncture
$\infty$, the positively oriented punctures $1$ and $0$, the standard local
coordinate $w\to w^{-1}$ vanishing at $\infty$, the standard local coordinate
$w\to w-1$ vanishing at $1$ and the standard local coordinate $w\to w$
vanishing
at $0$. Let $\tilde{P}(1)$ be the unique element of $\mbox{\rm Det}_{P(1)}$
such that $(\ell^{1}_{P(1), J})^{c}(\tilde{P}(1)\otimes \tilde{J})=\tilde{I}$.
Beginning with
$\tilde{P}(1)\in \tilde{K}^{c}(2)$ we obtain a section $\psi_{2}$ of
$\tilde{K}^{c}(2)$ by parallel transport. Since $K$ is generated by
$K(0)$, $K(1)$ and $K(2)$ (which means that any element in $K(n)$ for any
$n\in \Bbb{N}$ can be obtained by sewing elements in
$K(0)$, $K(1)$ and $K(2)$), we obtain sections $\psi_{n}$ of
$\tilde{K}^{c}(n)$, $n\in \Bbb{N}$. It can be shown that $\psi_{n}$,
$n\in \Bbb{N}$, are well-defined. Then we have $\{\psi_{n}\}_{n\in \Bbb{N}}$
which is a section of $\tilde{K}^{c}$.
To define a ``linear representation'' of $\tilde{K}^{c}$, we
first have to construct a partial operad {}from a vector space. Given a
$\Bbb{Z}$-graded vector space $V=\coprod_{n\in \Bbb{Z}}V_{(n)}$ such that
$\dim V_{(n)} <\infty$, we can construct a partial operad
in the following way (see \cite{HL2} \cite{HL3}): Let
\begin{eqnarray}
\cal{H}_{V}(n)&=&\hom(V^{n}, \overline{V}),\\
\cal{H}_{V}&=&\{\cal{H}_{V}(n)\}_{n=1}^{\infty}
\end{eqnarray}
where $\overline{V}=\prod_{n\in \Bbb{Z}}V_{(n)}$. Let
$P_{n}$, $n\in \Bbb{Z}$, be the projection {}from $\overline{V}$ to $V_{(n)}$.
For $f\in \cal{H}_{V}(m)$, $g\in \cal{H}_{V}(n)$ and $1\le i \le m$,
if for any $v' \in V'$, $v_{1}, \dots, v_{m+n-1} \in V$ the
series
\begin{equation}
\sum_{r\in \Bbb{Z}}\langle v', f(v_{1}, \dots, v_{i-1}, P_{r}(g(v_{i}, \dots,
v_{i+n-1})), v_{i+n}, \dots, v_{m+n-1})\rangle
\end{equation}
(where $\langle\cdot, \cdot \rangle$ denotes the pairing between
$V'$ and $\overline{V}$) converges,
we say that {\it the contraction} $f_{^{\;i}}\!*_{^{0}}g$
{\it exists} and we define the {\it contraction} $f_{^{\;\;i}}\!*_{^{0}}g\in
\cal{H}_{V}(m+n-1)$
using the values of these series. Note that contractions are partial
operations.
The permutation group $S_{n}$ also acts on $\cal{H}_{V}(n)$ in the
obvious way. We also have the inclusion map $I_{V}\in \cal{H}_{V}(1)=
\hom(V, \overline{V})$. The sequence $\cal{H}_{V}$ together with
the contractions, the actions of the symmetric groups and the inclusion map
$I_{V}$, is a partial operad, called the {\it endomorphism partial operad
of $V$}.
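To make the contraction operation concrete, here is a small finite-dimensional sketch (it is only an illustration and is not part of the original construction: it ignores the grading of $V$ and hence the convergence issue, which is precisely what makes the contraction only a partial operation in the vertex operator algebra setting). An element of $\hom(V^{m}, V)$ is stored as an array with one output axis and $m$ input axes, and the contraction plugs the output of $g$ into the $i$-th input slot of $f$:
\begin{verbatim}
import numpy as np

def contract(F, G, i):
    # F represents f in Hom(V^m, V): axis 0 is the output slot,
    # axes 1..m are the input slots; G represents g in Hom(V^n, V).
    # The result represents (v_1, ..., v_{m+n-1}) |->
    #   f(v_1, ..., v_{i-1}, g(v_i, ..., v_{i+n-1}), v_{i+n}, ..., v_{m+n-1}).
    m, n = F.ndim - 1, G.ndim - 1
    T = np.tensordot(F, G, axes=([i], [0]))   # plug g's output into slot i of f
    # tensordot appends g's input axes at the end; move them to slots i..i+n-1
    return np.moveaxis(T, range(m, m + n), range(i, i + n))

# sanity check: for m = n = 1 the contraction is composition of linear maps
d = 3
A, B = np.random.randn(d, d), np.random.randn(d, d)
assert np.allclose(contract(A, B, 1), A @ B)
\end{verbatim}
For $m=n=1$ the contraction reduces to composition of linear maps, as the last line checks; in the graded, infinite-dimensional case one must instead sum over the projections $P_{r}$ and require that this series converge.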
Roughly speaking, a ``geometric vertex operator algebra'' (or
a ``vertex associative algebra'') is
a $\Bbb{Z}$-graded vector space $V$ equipped with a ``homomorphism''
{}from the partial operad $\tilde{K}^{c}$ to the partial operad
$\cal{H}_{V}$ satisfying some
additional natural axioms. Precisely, we have the following:
\begin{defi}
A {\it geometric vertex operator
algebra of central charge $c$} is
a $\Bbb{Z}$-graded vector space $V$ and a map $\Phi:
\tilde{K}^{c}\longrightarrow
\cal{H}_{V}$ such that $\Phi(\tilde{K}^{c}(n))\subset \cal{H}_{V}(n)$
satisfying:
\begin{enumerate}
\item The positive energy axiom: $V_{(n)}=0$ for $n$ sufficiently small.
\item The grading axiom: Let $Q(a)=(\bold{0}, (a, \bold{0}))\in H\times
(\Bbb{C}^{\times}\times H)=K(1)$ (the conformal equivalence class containing
the sphere $\Bbb{C}\cup\{\infty\}$ with the negatively oriented puncture
$\infty$, the positively oriented puncture $0$, the standard local coordinate
$w\to w^{-1}$ vanishing at $\infty$ and the local coordinate $w\to aw$
vanishing at $0$).
Then for
any $n\in \Bbb{Z}$, $v\in V_{(n)}$, $v'\in V'$,
\begin{equation}
\langle v', \Phi(\psi_{1}(Q(a)))(v)\rangle_{V}=
a^{-n}\langle v', v\rangle_{V_{(n)}}
\end{equation}
where
$\langle \cdot, \cdot\rangle_{V_{(n)}}$ is the pairing between $V'$ and $V_{(n)}$
induced {}from the pairing $\langle \cdot, \cdot \rangle_{V}$ between
$V'$ and $\overline{V}$.
\item The permutation axiom: For any
$n\in \Bbb{N}$, $\sigma\in S_{n}$ and $\tilde{Q}\in \tilde{K}^{c}(n)$,
\begin{equation}
\Phi (\sigma(\tilde{Q}))=\sigma(\Phi (\tilde{Q})).
\end{equation}
\item The analyticity axiom:
For any $n\in \Bbb{N}$, let
\begin{equation}
\nu_{n}=\Phi\circ \psi_{n}: K(n)\to \cal{H}_{V}(n).
\end{equation}
Then for any $v'\in
V'$, $v_{1}, \dots, v_{n}\in V$, $\langle v',
\nu_{n}(\cdot)(v_{1}\otimes \cdots \otimes
v_{n})\rangle$ as a function of $$(z_{1}, \dots, z_{n-1}; A^{(0)},
(a_{0}^{(1)}, A^{(1)}), \dots, (a_{0}^{(n)}, A^{(n)}))\in
K(n)=M^{n-1}\times H\times (\Bbb{C}^{\times}\times H)^{n}$$
is meromorphic in $z_{1}, \dots, z_{n-1}$
with $z_{i}=0$ and $z_{i}=z_{j}$, $i, j=1,
\dots, n-1$, $i\not =j$,
as
the only possible poles, and is a Laurent polynomial in $a_{0}^{(1)}, \dots,
a_{0}^{(n)}$ and is a polynomial in the components of $A^{(0)}, \dots,
A^{(n)}$. In addition, for fixed $i, j$,
$1\le i<j\le n$, and
$v_{i}, v_{j}\in V$ there is an
upper bound, independent of $v_{k}$, $k\ne i, j$,
for the order of the pole at $z_{i}=z_{j}$ of the function
$\langle v', \nu_{n}(\cdot)(v_{1}\otimes \cdots\otimes v_{i}\otimes \cdots
\otimes v_{j}\otimes \cdots \otimes v_{n})\rangle$.
\item The sewing axiom: For any $m, n\in \Bbb{N}$, any integer $i$ satisfying $1\le i\le m$,
$\tilde{Q}_{1}\in \mbox{\rm Det}_{Q_{1}}\subset \tilde{K}^{c}(m)$ and
$\tilde{Q}_{2}\in \mbox{\rm Det}_{Q_{2}}\subset \tilde{K}^{c}(n)$ such that
$Q_{1^{i}}\infty_{^{0}}Q_{2}$ exists,
$\Phi(\tilde{Q}_{1})_{^{\;i\!\!}}*_{^{0}}\Phi(\tilde{Q}_{2})$ also exists
and
\begin{equation}
\Phi(\tilde{Q}_{1^{i}}\widetilde{\infty}_{^{0}}\tilde{Q}_{2})
=\Phi(\tilde{Q}_{1})_{^{\;i\!\!}}*_{^{0}}\Phi(\tilde{Q}_{2}).
\end{equation}
\end{enumerate}
\end{defi}
The definition of
homomorphism {}from one geometric
vertex operator algebra to another of the same rank is clear.
The following theorem (see \cite{H1}--\cite{H4} \cite{H8})
establishes the equivalence between vertex operator
algebras and geometric vertex operator algebras:
\begin{theo}
The category of geometric vertex operator algebras
of rank $c$ is isomorphic to the category of vertex operator algebras of
rank $c$.
\end{theo}
The map $\nu$ in the definition above can also be
constructed algebraically (see \cite{H1} and \cite{H8}).
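Roughly speaking, the dictionary between the two notions is as follows. Let $P(z)\in K(2)$ denote the conformal equivalence class of the sphere $\Bbb{C}\cup\{\infty\}$ with the negatively oriented puncture $\infty$, the positively oriented punctures $z$ and $0$, and the standard local coordinates (so that $P(1)$ is the element used above). Then, up to the precise normalization conventions spelled out in \cite{H1} and \cite{H8}, the correlation map $\nu_{2}(P(z))$ encodes the vertex operator map of the corresponding vertex operator algebra, $\nu_{2}(P(z))(u\otimes v)=Y(u, z)v$ for $u, v\in V$.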
\section{Vertex operator algebras and conformal field theories}
The rapidly-evolving theory of vertex operator algebras has begun
to show its power in the study of many problems related to conformal
field theories. It is expected that in the future this theory will play
a more important role in the study of conformal field theories and related
mathematical problems.
Basically, there are two approaches to conformal field theories. One
is the geometric approach. In physics, many models of conformal field
theories are studied using the path integral method. Starting {}from
the work of Friedan and Shenker \cite{FS}, physicists have realized
the importance of the moduli space of Riemann surfaces with punctures
in the study of conformal field theories. The basic mathematical work
in the geometric approach is Segal's definition of conformal field
theory using Riemann surfaces with oriented and analytically
parametrized boundary components \cite{S}. Motivated by the operator
formalism for the theory of free bosons and free fermions, one
closely related formulation of conformal field theories is given by
Vafa \cite{Va} using Riemann surfaces with punctures and local
coordinates vanishing at these punctures, on a physical level of
rigor. The geometric approach has the advantage that it gives
conceptually satisfactory definitions and it also allows one to derive
many important results using geometric intuition. But the main
difficulty that the geometric approach encountered is that it is very
difficult to construct nontrivial examples satisfying all these
geometric axioms and thus also difficult to discover subtle structures
that a conformal field theory might have. On the other hand, beginning
with the seminal work of Belavin, Polyakov and Zamolodchikov
\cite{BPZ} in physics and the works of Borcherds \cite{B}, Frenkel,
Lepowsky and Meurman \cite{FLM2} in mathematics, another approach, the
algebraic one, provides a practical way for both physicists and
mathematicians to construct concrete examples of conformal field
theories, at least at genus zero and genus one.
There are already many examples of conformal field theories
(in the algebraic formulation) constructed {}from Lie algebras,
lattices, Jordan algebras, $\cal{W}$-algebras (certain associative
algebras similar to the universal enveloping algebra of a Lie
algebra). There are also algebraic methods, for example, methods to
construct orbifold theories and coset models, which give more examples
{}from known ones. But the algebraic approach has the disadvantage
that it mostly constructs and studies only the genus-zero and
genus-one theory. Also the axioms in the algebraic formulations may
at first seem unfamiliar or complicated (although they are indeed
completely canonical). It is therefore necessary and important to
establish rigorously the relationship between the algebraic and
geometric approaches. One of the main ingredients in a conformal field
theory is its ``chiral algebra,'' which is a vertex operator algebra.
The geometric interpretation of vertex operator algebras described in
the preceding section can be viewed as a crucial step of the project of
establishing the equivalence between the two approaches and thus
obtaining examples satisfying the geometric axioms {}from the known
examples satisfying the algebraic axioms. Another step in this
direction is Zhu's work \cite{Z} in which he constructed certain
genus-one correlation functions {}from a vertex operator algebra and
its irreducible modules, assuming that the vertex operator algebra
satisfies certain conditions.
Let me end this exposition with the following picture describing the program of
studying conformal field theories and related mathematical problems
using the representation theory of vertex operator algebras:
\vspace{2em}
\begin{tabular}{c}
Elementary mathematical data (lattices, Lie algebras, \\Jordan algebras,
$\cal{W}$-algebras, etc.)\\
$\Downarrow$\\
Vertex operator algebras, modules, intertwining operators\\
$\Downarrow$\\
Modular functors and conformal field theories (in the sense of
Segal)\\
$\Downarrow$\\
Consequences (Verlinde formulas, modular tensor categories,
knot invariants \\
and three-manifold invariants, monstrous moonshine, etc.)
\end{tabular}
| {
"timestamp": "1995-07-01T06:36:37",
"yymm": "9504",
"arxiv_id": "q-alg/9504019",
"language": "en",
"url": "https://arxiv.org/abs/q-alg/9504019",
"abstract": "This is the third part of the revised versions of the notes of three consecutive expository lectures given by Chongying Dong, Haisheng Li and Yi-Zhi Huang in the conference on Monster and vertex operator algebras at the Research Institute of Mathematical Sciences, Kyoto, September 4-9, 1994. In this part, we discuss an $S_{3}$-symmetry of the Jacobi identity, construct the contragredient module for a module for a vertex operator algebra and apply these to the construction of the vertex operator map for the moonshine module. We review the notions of intertwining operator, fusion rule and Verlinde algebra. We also describe briefly the geometric interpretation of vertex operator algebras. We end the exposition with an explanation of the role of vertex operator algebras in conformal field theories.",
"subjects": "Quantum Algebra (math.QA); High Energy Physics - Theory (hep-th)",
"title": "Introduction to vertex operator algebras III",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9719924810166349,
"lm_q2_score": 0.7279754430043072,
"lm_q1q2_score": 0.7075866569649404
} |
https://arxiv.org/abs/2205.06798 | Sharp Asymptotics of Kernel Ridge Regression Beyond the Linear Regime | The generalization performance of kernel ridge regression (KRR) exhibits a multi-phased pattern that crucially depends on the scaling relationship between the sample size $n$ and the underlying dimension $d$. This phenomenon is due to the fact that KRR sequentially learns functions of increasing complexity as the sample size increases; when $d^{k-1}\ll n\ll d^{k}$, only polynomials with degree less than $k$ are learned. In this paper, we present sharp asymptotic characterization of the performance of KRR at the critical transition regions with $n \asymp d^k$, for $k\in\mathbb{Z}^{+}$. Our asymptotic characterization provides a precise picture of the whole learning process and clarifies the impact of various parameters (including the choice of the kernel function) on the generalization performance. In particular, we show that the learning curves of KRR can have a delicate "double descent" behavior due to specific bias-variance trade-offs at different polynomial scaling regimes. | \subsection{\label{sec:Equivalence-Conjecture}Gaussian Equivalence Conjecture}
A key technical insight behind our proof of Theorem \ref{thm:training_test_err} is the rigorous establishment of a so-called Gaussian equivalence conjecture. This conjecture was implicitly used (without proof) in several earlier works \citep{dietrich1999statistical,bordelon2020spectrum,canatar2021spectral} that study the generalization performance of kernel methods using non-rigorous statistical physics methods.
For simplicity, let us assume that the kernel and teacher functions are
both band-limited: there exists $L>0$ such that $\kcoeff k=\tcoeff k=0$,
for all $k>L$. First, based on the expansions (\ref{eq:kernel_expansion})
and (\ref{eq:teacher_expansion}), we can obtain an equivalent formulation
of (\ref{eq:kernel_1}) as a linear regression in the feature space
with the following feature map:
\[
\invec_{a}\mapsto\vgamma_{a}=[\underbrace{\widetilde{Y}_{01}(\invec_{a})}_{\usdim 0},\underbrace{\widetilde{Y}_{11}(\invec_{a}),\cdots,\widetilde{Y}_{1\usdim 1}(\invec_{a})}_{\usdim 1},\cdots\cdots\underbrace{\widetilde{Y}_{L1}(\invec_{a}),\cdots\widetilde{Y}_{L\usdim L}(\invec_{a})}_{\usdim L}]^{\T}
\]
where $\widetilde{Y}_{ki}(\cdot):=\sqrt{\tkcoeff{k,d}}Y_{ki}(\cdot)$.
Then (\ref{eq:kernel_1}) is equivalent to:
\begin{equation}
\hat{\vtheta}=\argmin{\vtheta}\sum_{a=1}^{n}[y_{a}-\vtheta^{\T}\vgamma_{a}]^{2}+\lambda\|\vtheta\|^{2}\label{eq:kernel_1_feature}
\end{equation}
where $y_{a}=\vbeta^{\T}\mLa\vgamma_{a}+\noisei_{a}$, with %
\begin{comment}
\begin{align*}
y_{a} & =\sum_{k=0}^{L}\frac{\tcoeff{k,d}}{\sqrt{\usdim k}}\sum_{i=1}^{\usdim k}Y_{ki}(\sgl)^{\T}Y_{ki}(\invec_{a})\\
& =\sum_{k=0}^{L}\frac{\tcoeff{k,d}}{\sqrt{\kcoeff{k,d}}}\sum_{i=1}^{\usdim k}Y_{ki}(\sgl)^{\T}\widetilde{Y}_{ki}(\invec_{a})
\end{align*}
\end{comment}
\begin{align*}
\vbeta & =[Y_{01}(\sgl),Y_{11}(\sgl),\cdots,Y_{1\usdim 1}(\sgl),\cdots\cdots,Y_{L1}(\sgl),\cdots Y_{L\usdim L}(\sgl)]^{\T}
\end{align*}
and
\[
\mLa=\diag\Big\{\tfrac{\tcoeff{0,d}}{\sqrt{\kcoeff{0,d}}},\tfrac{\tcoeff{1,d}}{\sqrt{\kcoeff{1,d}}},\cdots,\tfrac{\tcoeff{1,d}}{\sqrt{\kcoeff{1,d}}},\cdots\cdots,\tfrac{\tcoeff{L,d}}{\sqrt{\kcoeff{L,d}}},\cdots\tfrac{\tcoeff{L,d}}{\sqrt{\kcoeff{L,d}}}\Big\}.
\]
If the entries of $\vgamma_{a}$ are independent Gaussian random variables, then we reach
the setting that has been analyzed in several recent papers \citep{dicker2016ridge,dobriban2018high,hastie2019surprises,wu2020optimal,richards2021asymptotics}.
The main challenge here is that different entries of $\vgamma_{a}$
are not independent and that there is no linear transformation that can
decouple this dependence. On the other hand, different entries of $\vgamma_{a}$ are still \emph{uncorrelated}: recall that
$\E\widetilde{Y}_{ki}(\invec)\widetilde{Y}_{\ell j}(\invec)=\tkcoeff{k,d} \indicatorfn_{k\ell}\indicatorfn_{ij}$,
since $\{Y_{ki}(\invec)\}_{k,i}$ are orthonormal with respect to
$\spmeasure{d-1}$. Thus, the so-called Gaussian
equivalence conjecture states that the learning performance of the original
KRR problem will remain asymptotically unchanged if we replace each $\vgamma_{a}$
by a Gaussian vector $\vg_{a}$ with the same mean and covariance
matrix.
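As a purely illustrative sanity check of the feature-space reformulation (\ref{eq:kernel_1_feature}) (not of the Gaussian equivalence conjecture itself), the following snippet verifies numerically that the ridge solution in feature space and the kernel (resolvent) form of the predictor coincide. Here the feature matrix is a generic Gaussian matrix, serving only as a stand-in for the matrix of rescaled spherical harmonics $\widetilde{Y}_{ki}(\invec_{a})$, whose explicit construction we omit:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 50, 200, 0.1

# Gamma stands in for the n x p matrix whose rows are the feature vectors
# gamma_a (Gaussian entries here, purely for illustration).
Gamma = rng.standard_normal((n, p))
y = rng.standard_normal(n)

# ridge solution in feature space
theta = np.linalg.solve(Gamma.T @ Gamma + lam * np.eye(p), Gamma.T @ y)

# equivalent kernel form: K_ab = gamma_a . gamma_b, predictor uses (K + lam I)^{-1} y
K = Gamma @ Gamma.T
alpha = np.linalg.solve(K + lam * np.eye(n), y)

gamma_new = rng.standard_normal(p)            # feature vector of a test point
assert np.isclose(gamma_new @ theta, (gamma_new @ Gamma.T) @ alpha)
\end{verbatim}
The identity being checked, namely $(\boldsymbol{\Gamma}^{\T}\boldsymbol{\Gamma}+\lambda\mI)^{-1}\boldsymbol{\Gamma}^{\T}=\boldsymbol{\Gamma}^{\T}(\boldsymbol{\Gamma}\boldsymbol{\Gamma}^{\T}+\lambda\mI)^{-1}$ for the matrix $\boldsymbol{\Gamma}$ whose rows are the feature vectors $\vgamma_{a}^{\T}$, is what connects (\ref{eq:kernel_1_feature}) to the resolvent form of $\hat{\regressfn}$ used in the proofs.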
\begin{comment}
The general case $\tcoeff{k,d}\neq0$ can be analyzed in a similar
way by taking care of the (weak) correlation between $\vy$ and $\kmtx$.
\end{comment}
\begin{comment}
Next, we explain why as the sample size grows, KRR learns the polynomials
with increasing degrees sequentially, instead of simultaneously. To
this end, we need to look at eigenvalues $\{\tkcoeff k\}_{k\geq0}$.
The key point is that the eigenvalues of kernel function $\Kernalfn(\cdot,\cdot)$
decays with respect to $k$ as: $\tkcoeff k\asymp\frac{1}{d^{k}}$.
Recall from (\ref{eq:cov_gaussian_vec}) that $\tkcoeff k$ corresponds
to the energy of each predictor in the linear regression formulation
(\ref{eq:kernel_1_feature}). Since the low-degree components possess
much higher energy than the high-degree components, under the same
sample size, they are estimated with much higher accuracy and are
learned in earlier phases.
Finally, let us explain why the transitions between different learning
phases happen at the scaling of $n\asymp d^{k}$, $k=0,1,2,\cdots$.
In other words, why the polynomial scaling is the transition point,
instead of some fractional scaling such as $n\asymp d^{k+1/2}$. One
way to understand this is by examining the spectrum of $\kmtx$. Again
based on our equivalence conjecture, we will consider $\kmtx_{G}$
as the surrogate for $\kmtx$. First, we partition each $\vg_{a}^{\T}$
as:
\[
\vg_{a}^{\T}=[\vg_{a,1}^{\T},\vg_{a,2}^{\T},\cdots,\vg_{a,L}^{\T}]
\]
where the sub-vectors $\vg_{a,k}\sim\mathcal{N}(\boldsymbol{0},\tkcoeff k\mI_{\usdim k})$
are mutually independent. Denote $\mG_{k}:=[\vg_{1,k},\vg_{2,k},\cdots,\vg_{n,k}]^{\T}\in\R^{n\times\usdim k}$,
which is the sub-matrix of $\mG$ corresponding to the $k$th eigenvalue
$\tkcoeff k$ and we can decompose $\kmtx_{G}$ as:
\[
\kmtx_{G}=\sum_{k\geq0}\mG_{k}\mG_{k}^{\T}.
\]
Now for a fixed $k$, let us analyze the eigenvalues of each component
$\mG_{k}\mG_{k}^{\T}$. We discuss over different scaling of $n$.
\begin{enumerate}
\item[(i)] $n\asymp d^{k}$. By (\ref{eq:usdim_def}) in Appendix \ref{sec:Spherical-Harmonics-and},
we have $\usdim k\asymp d^{k}$. Hence in this case, we have $n\asymp\usdim k$,
which is the scaling studied in \cite{marvcenko1967distribution}
and the spectrum of $\mG_{k}\mG_{k}^{\T}$ is described by the MP
distribution. In the current setting, it has the following (generalized)
density function:
\begin{equation}
\mu_{\text{MP}}(x)=\max\Big\{1-\frac{\usdim k}{n},0\Big\}\cdot\delta(x)+\frac{\usdim k}{2\pi\kcoeff kn}\frac{\sqrt{(\lambda_{+}-x)(x-\lambda_{-})}}{x},\quad x\in[\lambda_{-},\lambda_{+}]\label{eq:Marcheko_dist}
\end{equation}
where $\delta(x)$ is the delta distribution at $x=0$, $\lambda_{+}=\kcoeff k(1+\sqrt{n/\usdim k})^{2}$
and $\lambda_{-}=\kcoeff k(1-\sqrt{n/\usdim k})^{2}$. From the distribution
(\ref{eq:Marcheko_dist}) we can see that under the current scaling, the non-zero
eigenvalues of $\mG_{k}\mG_{k}^{\T}$ are supported over the interval
$[\lambda_{-},\lambda_{+}]$.
\item[(ii)] $n\ll d^{k}$. In this case, $n\ll\usdim k$. This corresponds to
the $n/\usdim k\to0$ limit in (\ref{eq:Marcheko_dist}), which indicates
that spectrum of $\mG_{k}\mG_{k}^{\T}$ converges to a single point
mass at $\kcoeff k$. In other words, $\mG_{k}\mG_{k}^{\T}\approx\kcoeff k\mI$.
\item[(iii)] $n\gg d^{k}$. We have $n\gg\usdim k$, indicating that $\mG_{k}\mG_{k}^{\T}$
has very low rank. Moreover, since the columns of
$\mG_{k}$ are i.i.d. Gaussian vectors with mean zero and covariance
$\tkcoeff k\mI_{n}$, it is not hard to show that the squared $\ell_{2}$-norm
of each column of $\mG_{k}$ converges to $\frac{n}{\usdim k}\kcoeff k\gg1$
and they are almost orthogonal to each other. Therefore, the non-zero
eigenvalues of $\mG_{k}\mG_{k}^{\T}$ must be much larger than 1.
\end{enumerate}
From the above discussions, we can find the general picture about
how the spectrum of $\mG_{k}\mG_{k}^{\T}$ evolves as $n$ increases.
When $n\ll d^{k}$, the spectrum is concentrated at a single point
$\kcoeff k\asymp1$ and it only starts to spread out and form a bulk when
$n\asymp d^{k}$. If $n$ further increases such that $n\gg d^{k}$,
the non-zero eigenvalues will be much larger than 1 and conceivably
all the eigenvalues will be outside the bulk spectrum of $\kmtx_{G}$.
From this perspective, we obtain one interpretation on the phase transitions
occurring at $n\asymp d^{k}$: under this scaling, the eigenvalues
of the $k$th component $\mG_{k}\mG_{k}^{\T}$ start to pop out of
the $\mathcal{O}(1)$ bulk spectrum of $\kmtx_{G}$.
\end{comment}
\subsection{Limitations of the Current Work}
\label{sec:limitations}
Finally, we point out several important limitations of our results. First, we have assumed that the input vectors $\invec$
are uniformly distributed over $\dsphere d$. Although this is a convenient model for theoretical analysis, the spherical symmetry of the model might impose too strong an assumption on the input data. It will be desirable to explore other more general data distributions such as those considered in \cite{liang2020just,liang2020multiple,mei2021generalization}.
Second, we have assumed that the labels $\set{y_i}$ in the training set are generated by a specific teacher-student model, which is essentially a generalized linear model. By contrast, in \cite{liang2020just,liang2020multiple,mei2021generalization},
there is no such constraint and a generic non-parametric model for the labels $\set{y_i}$ is
considered.
Since our current proof crucially hinges on the fact that the distribution of $\invec_i$ is isotropic and that $h(\invec)$ only depends on the projection $\invec^{\T}\sgl$, handling more general function classes may require substantial changes to our proof technique.
Finally, we only analyze the inner-product kernels here.
It will be interesting to consider other types of kernels, \emph{e.g.}, radial kernel $\Kernalfn(\invec,\invec')=k(\|\invec-\invec'\|/\sqrt{d})$ and translation invariant kernel $\Kernalfn(\invec,\invec')=k(\invec-\invec')$. In the current setting, since $\invec_{i}\iid\spmeasure{d-1}$, it is easy to see that radial kernels can be viewed as inner-product kernels. For more general settings, we conjecture that some versions of
Gaussian equivalence may still hold.
The extensions to the above more general cases are left as
interesting future work.
\section{\label{sec:proof_of_cross} Proof of \eqref{eq:testerr_crossterm_final_form}}
By (\ref{eq:representer_theorem}) and (\ref{eq:inner_product_kernel_def}),
$\hat{\regressfn}(\invec_{\text{new}})$ can be written as
\begin{equation}
\hat{\regressfn}(\invec_{\text{new}})=\kernalfn\Big(\tfrac{\invec_{\text{new}}^{\T}\inputmtx^{\T}}{\sqrt{d}}\Big)\rsmtx\vy,\label{eq:hath_x_new_express_1}
\end{equation}
so we can get
\begin{align}
\E_{\text{new}}[\E(y_{\text{new}}\mid\invec_{\text{new}})\hat{\regressfn}(\invec_{\text{new}})] & =\E_{\text{new}}\Big[\E(y_{\text{new}}\mid\invec_{\text{new}})\kernalfn\Big(\tfrac{\invec_{\text{new}}^{\T}\inputmtx^{\T}}{\sqrt{d}}\Big)\rsmtx\vy\Big]\nonumber \\
& =\E_{\text{new}}\Big[\big(\sum_{k=0}^{\infty}\tfrac{\tcoeff k}{\sqrt{\usdim k}}\shmtx_{k}(\sgl)^{\T}\shmtx_{k}(\invec_{\text{new}})\big)\cdot\big(\sum_{k=0}^{\infty}\tfrac{\kcoeff k}{\usdim k}\shmtx_{k}(\invec_{\text{new}})^{\T}\shmtx_{k}^{\T}\big)\rsmtx\vy\Big]\nonumber \\
& =\frac{1}{n}\tilde{\tevec}^{\T}\rsmtx\vy,\label{eq:testerr_crossterm_1}
\end{align}
where $\tilde{\tevec}=\sum_{k=0}^{\infty}\kcoeff k\delta_{k}\tevec_{k}$
and in the second step we use the expansion of $\teacherfn(\cdot)$
and $\kernalfn(\cdot)$ under spherical harmonics. Comparing (\ref{eq:testerr_crossterm_1})
and (\ref{eq:training_error}), we can see that $\E_{\text{new}}[\E(y_{\text{new}}\mid\invec_{\text{new}})\hat{\regressfn}(\invec_{\text{new}})]$
takes a form similar to that of $\mathcal{E}_{\text{train}}$. We will apply
a similar proof strategy here.
First, consider the truncation of $\obser$: $\hat{\obser}=\sum_{k=0}^{L}\tevec_{k}+\noise$,
where $L\geq\msc$ is some constant to be chosen. The next result
shows that $\frac{1}{n}\tilde{\tevec}^{\T}\rsmtx\hat{\obser}$ can
be approximated by dropping the low-degree and high-degree terms in
the expansions of $\Kernalfn(\cdot,\cdot)$ and $\teacherfn(\cdot)$.
\begin{prop}
\label{prop:test_error_cross_truncation}There exists $\tau>0$ such
that
\[
\bigg|\frac{1}{n}\tilde{\tevec}^{\T}\rsmtx\hat{\vy}-\big(\sum_{k<\msc}\tcoeff k^{2}+\frac{1}{n}\tilde{\tevec}_{\msc}^{\T}\widetilde{\rsmtx}_{\msc}\hat{\vy}_{\geq\msc}\big)\bigg|\stodom\frac{1}{d^{\tau}}
\]
where $\tilde{\tevec}_{k}=\kcoeff k\delta_{k}\tevec_{k}$, $\hat{\vy}_{\geq\msc}=\sum_{k=\msc}^{L}\tevec_{k}+\noise$
and $\widetilde{\rsmtx}_{\msc}=(\tilde{\lambda}\mI+\kmtx_{\msc})^{-1}$,
with $\tilde{\lambda}=\lambda+\sum_{k>\msc}\kcoeff k$.
\end{prop}
\begin{proof}
We make the following decomposition of $\frac{1}{n}\tilde{\tevec}^{\T}\rsmtx\hat{\vy}$:%
\begin{comment}
\begin{align*}
& \frac{1}{n}\tilde{\vy}^{\T}(\lambda\mI+\kmtx)^{-1}\vy\\
= & \frac{1}{n}\tilde{\vy}^{\T}(\lambda\mI+\kmtx)^{-1}\vy-\frac{1}{n}\tilde{\vy}^{\T}\rsmtx_{\leq\msc}\vy+\frac{1}{n}\tilde{\vy}^{\T}\rsmtx_{\leq\msc}\vy\\
= & \frac{1}{n}\tilde{\vy}^{\T}(\lambda\mI+\kmtx)^{-1}\vy-\frac{1}{n}\tilde{\vy}^{\T}\rsmtx_{\leq\msc}\vy+\frac{1}{n}\tilde{\vy}^{\T}\rsmtx_{\leq\msc}\vy-\frac{1}{n}\tilde{\vy}_{\leq\msc}^{\T}\rsmtx_{\leq\msc}\vy+\frac{1}{n}\tilde{\vy}_{\leq\msc}^{\T}\rsmtx_{\leq\msc}\vy\\
= & \frac{1}{n}\tilde{\vy}^{\T}(\lambda\mI+\kmtx)^{-1}\vy-\frac{1}{n}\tilde{\vy}^{\T}\rsmtx_{\leq\msc}\vy+\frac{1}{n}\tilde{\vy}^{\T}\rsmtx_{\leq\msc}\vy-\frac{1}{n}\tilde{\vy}_{\leq\msc}^{\T}\rsmtx_{\leq\msc}\vy+\frac{1}{n}\tilde{\vy}_{<\msc}^{\T}\rsmtx_{\leq\msc}\vy_{<\msc}+\frac{1}{n}\tilde{\vy}_{\leq\msc}^{\T}\rsmtx_{\leq\msc}\vy-\frac{1}{n}\tilde{\vy}_{<\msc}^{\T}\rsmtx_{\leq\msc}\vy_{<\msc}\\
= & \frac{1}{n}\tilde{\vy}^{\T}(\lambda\mI+\kmtx)^{-1}\vy-\frac{1}{n}\tilde{\vy}^{\T}\rsmtx_{\leq\msc}\vy+\frac{1}{n}\tilde{\vy}^{\T}\rsmtx_{\leq\msc}\vy-\frac{1}{n}\tilde{\vy}_{\leq\msc}^{\T}\rsmtx_{\leq\msc}\vy+\frac{1}{n}\tilde{\vy}_{<\msc}^{\T}\rsmtx_{\leq\msc}\vy_{<\msc}\\
& +\frac{1}{n}\tilde{\vy}_{<\msc}^{\T}\rsmtx_{\leq\msc}\vy+\frac{1}{n}\tilde{\vy}_{\msc}^{\T}\rsmtx_{\leq\msc}\vy-\frac{1}{n}\tilde{\vy}_{<\msc}^{\T}\rsmtx_{\leq\msc}\vy_{<\msc}\\
= & \frac{1}{n}\tilde{\vy}^{\T}(\lambda\mI+\kmtx)^{-1}\vy-\frac{1}{n}\tilde{\vy}^{\T}\rsmtx_{\leq\msc}\vy+\frac{1}{n}\tilde{\vy}^{\T}\rsmtx_{\leq\msc}\vy-\frac{1}{n}\tilde{\vy}_{\leq\msc}^{\T}\rsmtx_{\leq\msc}\vy\\
& +\frac{1}{n}\tilde{\vy}_{<\msc}^{\T}\rsmtx_{\leq\msc}\vy_{<\msc}+\frac{1}{n}\tilde{\vy}_{<\msc}^{\T}\rsmtx_{\leq\msc}(\vy_{\geq\msc}+\noise)+\frac{1}{n}\tilde{\vy}_{\msc}^{\T}\rsmtx_{\leq\msc}(\vy_{\geq\msc}+\noise)+\frac{1}{n}\tilde{\vy}_{\msc}^{\T}\rsmtx_{\leq\msc}\vy_{<\msc}
\end{align*}
\end{comment}
\begin{comment}
\[
\frac{1}{n}\tilde{\tevec}^{\T}\rsmtx\hat{\vy}=\frac{1}{n}\tilde{\tevec}_{<\msc}^{\T}\rsmtx\tilde{\tevec}_{<\msc}+\frac{1}{n}\tilde{\tevec}_{<\msc}^{\T}\rsmtx\hat{\obser}_{\geq\msc}+\frac{1}{n}\tilde{\tevec}_{\msc}^{\T}\rsmtx\tilde{\tevec}_{<\msc}+\frac{1}{n}\tilde{\tevec}_{\msc}^{\T}\rsmtx\hat{\obser}_{\geq\msc}+\frac{1}{n}\tilde{\tevec}_{>\msc}^{\T}\rsmtx\hat{\vy}
\]
\end{comment}
\begin{comment}
\begin{align*}
\frac{1}{n}\tilde{\tevec}^{\T}\rsmtx\hat{\vy}= & \sum_{k<\msc}\tcoeff k^{2}+\frac{1}{n}\tilde{\tevec}_{\msc}^{\T}\rsmtx_{\leq\msc}\hat{\vy}_{\geq\msc}\\
& +\frac{1}{n}\tilde{\tevec}^{\T}(\rsmtx-\rsmtx_{\leq\msc})\hat{\vy}+\frac{1}{n}(\tilde{\tevec}^{\T}-\tilde{\tevec}_{\leq\msc}^{\T})\rsmtx_{\leq\msc}\hat{\vy}\\
& +\Big(\frac{1}{n}\tilde{\tevec}_{<\msc}^{\T}\rsmtx_{\leq\msc}\hat{\vy}_{\geq\msc}+\frac{1}{n}\tilde{\tevec}_{\msc}^{\T}\rsmtx_{\leq\msc}\tevec_{<\msc}\Big)+\Big(\frac{1}{n}\tilde{\tevec}_{<\msc}^{\T}\rsmtx_{\leq\msc}\tevec_{<\msc}-\sum_{k<\msc}\tcoeff k^{2}\Big)
\end{align*}
\end{comment}
\begin{align*}
\frac{1}{n}\tilde{\tevec}^{\T}\rsmtx\hat{\vy}= & \sum_{k<\msc}\tcoeff k^{2}+\frac{1}{n}\tilde{\tevec}_{\msc}^{\T}\widetilde{\rsmtx}_{\msc}\hat{\obser}_{\geq\msc}\\
& +\frac{1}{n}\tilde{\tevec}_{>\msc}^{\T}\rsmtx\hat{\vy}+(\frac{1}{n}\tilde{\tevec}_{<\msc}^{\T}\rsmtx{\tevec}_{<\msc}-\sum_{k<\msc}\tcoeff k^{2})\\
& +(\frac{1}{n}\tilde{\tevec}_{<\msc}^{\T}\rsmtx\hat{\obser}_{\geq\msc}+\frac{1}{n}\tilde{\tevec}_{\msc}^{\T}\rsmtx\tevec_{<\msc})+\frac{1}{n}\tilde{\tevec}_{\msc}^{\T}(\rsmtx-\widetilde{\rsmtx}_{\msc})\hat{\obser}_{\geq\msc}.
\end{align*}
The last four terms correspond to the approximation error. In Lemma
\ref{lem:test_err_cross_approx_step2}-Lemma \ref{lem:test_err_cross_approx_step4},
we show that they all decay to $0$ at a rate no slower than $\frac{1}{d^{\tau}}$
for some $\tau>0$, which proves the desired result.
\end{proof}
By Proposition \ref{prop:test_error_cross_truncation}, it remains
to compute $\frac{1}{n}\tilde{\tevec}_{\msc}^{\T}\widetilde{\rsmtx}_{\msc}\hat{\vy}_{\geq\msc}$.
To this end, we are back to the setting in Sec. \ref{subsec:training_err_special_case_proof}.
Similarly to the derivation of (\ref{eq:training_error_2}), we can get
\begin{align*}
\frac{1}{n}\tilde{\tevec}_{\msc}^{\T}\widetilde{\rsmtx}_{\msc}\hat{\vy}_{\geq\msc} & =\frac{1}{n}\kcoeff{\msc}\delta_{\msc}\tevec_{\msc}^{\T}\widetilde{\rsmtx}_{\msc}(\sum_{k=\msc}^{L}\tevec_{k}+\noise)\\
& \pconv\tfrac{\kcoeff{\msc}\delta_{\msc}\tcoeff{\msc}^{2}\widehat{R}_{\msc}}{1+\kcoeff{\msc}\delta_{\msc}\widehat{R}_{\msc}}
\end{align*}
where $\widehat{R}_{\msc}=\frac{\tr[\tilde{\lambda}\mI+\kcoeff{\msc}\widetilde{\shmtx}_{\msc}(\mV)\widetilde{\shmtx}_{\msc}(\mV)^{\T}]^{-1}}{n}$ is defined in a completely analogous way
as in (\ref{eq:training_error_2}) and the only difference is that
$\lambda$ is replaced by $\tilde{\lambda}$ here. Finally, by the same proof of Theorem 1 in \cite{Lu2022Equi}, we can get $\widehat{R}_{\msc}\pconv R_{\star}$.
Therefore, for any $L\geq\msc$,
\begin{equation}
\frac{1}{n}\tilde{\tevec}^{\T}\rsmtx\hat{\vy}\pconv\sum_{k<\msc}\tcoeff k^{2}+\tfrac{\kcoeff{\msc}\delta_{\msc}\tcoeff{\msc}^{2}R_{\star}}{1+\kcoeff{\msc}\delta_{\msc}R_{\star}}.\label{eq:testerr_crossterm_2_pconv}
\end{equation}
The final step is to show the approximation error $|\frac{1}{n}\tilde{\tevec}^{\T}\rsmtx(\obser-\hat{\vy})|$
can be made arbitrarily small. Indeed, by Lemma \ref{lem:obser_truncation},
for any $\veps>0$, there exists $L\geq\msc$ and $C>0$ such that for
all large enough $d$, $\P(\frac{1}{n}\|\obser-\hat{\obser}\|^{2}>\veps)\leq\frac{C}{n\veps^{2}}$.
Also from (\ref{eq:test_err_cross_approx_step3_2}) in the proof of
Lemma \ref{lem:test_err_cross_approx_step3}, we have $\frac{1}{\sqrt{n}}\|\rsmtx\tilde{\tevec}_{<\msc}\|\stodom1$
and from Lemma \ref{lem:y_norm_bd} with the fact that $\delta_{k}\lesssim1$,
for $k\geq\msc$, we can get $\frac{1}{\sqrt{n}}\|\rsmtx\tilde{\tevec}_{\geq\msc}\|\stodom1.$
These together imply that for any $\veps>0$, there exists $L\geq\msc$
and $C>0$ such that for all large enough $d$,
\begin{equation}
\P(|\frac{1}{n}\tilde{\tevec}^{\T}\rsmtx(\obser-\hat{\vy})|>\veps)\leq\frac{C}{n\veps^{2}}.\label{eq:testerr_crossterm_y_yhat_diffbd}
\end{equation}
Combining (\ref{eq:testerr_crossterm_y_yhat_diffbd}), (\ref{eq:testerr_crossterm_2_pconv})
and (\ref{eq:testerr_crossterm_1}), we get
\begin{equation}
\E_{\text{new}}[y_{\text{new}}\hat{\regressfn}(\invec_{\text{new}})]=\frac{1}{n}\tilde{\tevec}^{\T}\rsmtx\obser\pconv\sum_{k<\msc}\tcoeff k^{2}+\tfrac{\kcoeff{\msc}\delta_{\msc}\tcoeff{\msc}^{2}R_{\star}}{1+\kcoeff{\msc}\delta_{\msc}R_{\star}}.
\end{equation}
\section{\label{sec:Proof-of-hsq}Proof of (\ref{eq:testerr_normsqterm_finalform})}
Recall that $\hat{\regressfn}(\invec_{\text{new}})=f\Big(\tfrac{\invec_{\text{new}}^{\T}\inputmtx^{\T}}{\sqrt{d}}\Big)\rsmtx\vy$
and for any $i,j\in[n]$,
\begin{align*}
\E_{\text{new}}\Big[\kernalfn\big(\tfrac{\vx_{i}^{\T}\vx_{\text{new}}}{\sqrt{d}}\big)\kernalfn\big(\tfrac{\vx_{\text{new}}^{\T}\vx_{j}}{\sqrt{d}}\big)\Big] & =\E_{\text{new}}[\sum_{k=0}^{\infty}\frac{\kcoeff k}{\usdim k}Y_{k}(\vx_{i})^{\T}Y_{k}(\vx_{\text{new}})\times\sum_{k=0}^{\infty}\frac{\kcoeff k}{\usdim k}Y_{k}(\vx_{\text{new}})^{\T}Y_{k}(\vx_{j})]\\
& =\sum_{k=0}^{\infty}\tfrac{\kcoeff k^{2}}{\usdim k^{2}}Y_{k}(\vx_{i})^{\T}Y_{k}(\vx_{j}).
\end{align*}
Therefore, we can get
\begin{align}
\E_{\text{new}}\hat{\regressfn}(\invec_{\text{new}})^{2} & =\E_{\text{new}}\bigg[\vy^{\T}\rsmtx\kernalfn\Big(\tfrac{\inputmtx\invec_{\text{new}}}{\sqrt{d}}\Big)\kernalfn\Big(\tfrac{\invec_{\text{new}}^{\T}\inputmtx^{\T}}{\sqrt{d}}\Big)\rsmtx\vy\bigg]\nonumber \\
& =\frac{1}{n}\vy^{\T}\rsmtx\Big(\sum_{k=0}^{\infty}\kcoeff k\delta_{k}\kmtx_{k}\Big)\rsmtx\vy,\label{eq:testerr_normsqterm_1}
\end{align}
where $\kmtx_{k}=\tfrac{\kcoeff k}{\usdim k}\shmtx_{k}\shmtx_{k}^{\T}$
and $\delta_{k}=n/\usdim k$.
As before, the first step is to apply a truncation argument
to simplify the right hand-side of (\ref{eq:testerr_normsqterm_1}).
Specifically, by Lemma \ref{lem:testerr_normsqterm_lowdegree_truncation},
we have
\begin{equation}
\Big|\frac{1}{n}\vy^{\T}\rsmtx\Big(\sum_{k=0}^{\infty}\kcoeff k\delta_{k}\kmtx_{k}\Big)\rsmtx\vy-[\sum_{k=0}^{\msc-1}\tcoeff k^{2}+\frac{1}{n}\vy^{\T}\rsmtx\big(\kcoeff{\msc}\delta_{\msc}\kmtx_{\msc})\rsmtx\vy]\Big|\stodom\frac{1}{\sqrt{d}}.\label{eq:testerr_normsqterm_1_approx_1_1}
\end{equation}
This shows that all the high-degree components of the kernel function
can be neglected, while the low-degree components contribute only
the constant $\sum_{k=0}^{\msc-1}\tcoeff k^{2}$. Based on
(\ref{eq:testerr_normsqterm_1_approx_1_1}), the remaining task is
to compute $\frac{1}{n}\vy^{\T}\rsmtx(\kcoeff{\msc}\delta_{\msc}\kmtx_{\msc})\rsmtx\vy$.
To this end, we introduce the following function:
\begin{equation}
\Gamma_{d}(\epsilon)=\frac{1}{n}\obser^{\T}\big(\lambda\mI+\mK+\epsilon\kcoeff{\msc}\delta_{\msc}\kmtx_{\msc}\big)^{-1}\vy,\label{eq:testerr_normsqterm_1_Gamma_d}
\end{equation}
where $\epsilon\in[-\epsilon_{0},\epsilon_{0}]$, with $\epsilon_{0}=\frac{1}{2}(\kcoeff{\msc}\delta_{\msc})^{-1}$.
It can be directly checked that
\begin{equation}
\Gamma_{d}'(0)=-\frac{1}{n}\vy^{\T}\rsmtx(\kcoeff{\msc}\delta_{\msc}\kmtx_{\msc})\rsmtx\vy,\label{eq:testerr_normsqterm_1_Gammad_deri0}
\end{equation}
so in order to compute $\lim_{d\to\infty}\frac{1}{n}\vy^{\T}\rsmtx(\kcoeff{\msc}\delta_{\msc}\kmtx_{\msc})\rsmtx\vy$,
it suffices to compute $\lim_{d\to\infty}\Gamma_{d}'(0)$. Also $\Gamma_{d}(\epsilon)$
is well-defined for $\epsilon\in[-\epsilon_{0},\epsilon_{0}]$, because
$\mK+\epsilon\kcoeff{\msc}\delta_{\msc}\kmtx_{\msc}\succeq\boldsymbol{0}$
for any $\epsilon\in[-\epsilon_{0},\epsilon_{0}]$. In addition, since
$\kcoeff{\msc}\in(0,\infty)$ and $\delta_{\msc}\in(0,\infty)$ under our
main assumptions, we have $0<\epsilon_{0}\leq C_{0}$, for some $C_{0}$
not depending on $d$.
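The derivative identity (\ref{eq:testerr_normsqterm_1_Gammad_deri0}) is the standard resolvent-derivative formula. The following minimal numerical sketch (illustrative only; the matrices and the vector are arbitrary placeholders standing in for $\mK$, $\kcoeff{\msc}\delta_{\msc}\kmtx_{\msc}$ and $\vy$) checks it by finite differences.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, lam = 40, 0.5                                     # arbitrary placeholder values
A = rng.standard_normal((n, n)); K = A @ A.T / n     # placeholder PSD matrix playing the role of K
Bm = rng.standard_normal((n, n)); B = Bm @ Bm.T / n  # placeholder for kappa_K * delta_K * K_K
y = rng.standard_normal(n)

R = np.linalg.inv(lam * np.eye(n) + K)
Gamma = lambda eps: y @ np.linalg.inv(lam * np.eye(n) + K + eps * B) @ y / n

h = 1e-6
print((Gamma(h) - Gamma(-h)) / (2 * h))   # finite-difference derivative of Gamma_d at eps = 0
print(-(y @ R @ B @ R @ y) / n)           # right-hand side of the derivative identity
\end{verbatim}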
From (\ref{eq:testerr_normsqterm_1_Gamma_d}), we see that
$\Gamma_{d}(\epsilon)$ takes the same form as $\mathcal{E}_{\text{train}}$
{[}cf. (\ref{eq:training_error}){]}. Therefore, repeating the
procedure used to prove (\ref{eq:main_results_trainerr}), we obtain
\begin{equation}
\Gamma_{d}(\epsilon)\pconv\frac{\tcoeff{\msc}^{2}R(\tilde{\lambda},\epsilon)}{1+\delta_{\msc}\kcoeff{\msc}(1+\epsilon\kcoeff{\msc}\delta_{\msc})R(\tilde{\lambda},\epsilon)}+\Big(\varnoise^{2}+\sum_{k>\msc}\tcoeff k^{2}\Big)R(\tilde{\lambda},\epsilon):=\Gamma(\epsilon)\label{eq:testerr_normsqterm_1_Gamma_lim}
\end{equation}
where $R(\tilde{\lambda},\epsilon)$ is the unique non-negative solution
of
\begin{equation}
\frac{1}{R}=\tilde{\lambda}+\frac{\mu_{\msc}(1+\epsilon\kcoeff{\msc}\delta_{\msc})}{1+\delta_{\msc}\mu_{\msc}(1+\epsilon\kcoeff{\msc}\delta_{\msc})R}.\label{eq:testerr_normsqterm_1_quadratic_equation}
\end{equation}
Note that (\ref{eq:testerr_normsqterm_1_quadratic_equation}) is a
quadratic equation. For any fixed $\tilde{\lambda}>0$, it allows
for an explicit solution: $R(\tilde{\lambda},\epsilon)=\mathcal{R}(\tilde{\lambda};\mu_{\msc}(1+\epsilon\kcoeff{\msc}\delta_{\msc}),\delta_{\msc})$,
where $\mathcal{R}(\lambda;\mu,\delta)$ is defined in (\ref{eq:main_results_Stieltjes_1}).
It can be directly verified that $R(\tilde{\lambda},\epsilon)$ is
smooth with respect to $\epsilon$. In particular, we can compute
its partial derivative with respect to $\epsilon$ at $0$:
\begin{equation}
R'_{\epsilon}(\tilde{\lambda},0)=-\frac{1}{\equisamp-1},\label{eq:testerr_normsqterm_1_R_deri}
\end{equation}
where $\equisamp=\frac{(1+\kcoeff{\msc}\delta_{\msc}R_{\star})^{2}}{\delta_{\msc}\kcoeff{\msc}^{2}R_{\star}^{2}}$. For notational
simplicity, in the following we write $R'_{\epsilon=0}:=R'_{\epsilon}(\tilde{\lambda},0)$.
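As a quick sanity check of (\ref{eq:testerr_normsqterm_1_R_deri}), the following numerical sketch (illustrative only; $\tilde{\lambda}$, $\kcoeff{\msc}$ and $\delta_{\msc}$ are replaced by arbitrary positive placeholder values) solves the quadratic equation (\ref{eq:testerr_normsqterm_1_quadratic_equation}) explicitly and compares a finite-difference derivative at $\epsilon=0$ with $-\frac{1}{\equisamp-1}$.
\begin{verbatim}
import numpy as np

# Placeholder parameter values standing in for (lambda-tilde, mu_K, delta_K):
lam, mu, delta = 0.3, 0.7, 1.5

def R_of_eps(eps):
    # 1/R = lam + a/(1 + delta*a*R) with a = mu*(1 + eps*mu*delta), i.e.
    # lam*delta*a*R^2 + (lam + a - delta*a)*R - 1 = 0; take the non-negative root.
    a = mu * (1.0 + eps * mu * delta)
    roots = np.roots([lam * delta * a, lam + a - delta * a, -1.0])
    return max(r.real for r in roots if abs(r.imag) < 1e-12 and r.real >= 0)

R_star = R_of_eps(0.0)
chi = (1.0 + mu * delta * R_star) ** 2 / (delta * mu ** 2 * R_star ** 2)

h = 1e-6
print((R_of_eps(h) - R_of_eps(-h)) / (2 * h))   # finite-difference derivative at eps = 0
print(-1.0 / (chi - 1.0))                        # the claimed value -1/(chi - 1)
\end{verbatim}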
On the other hand, since $R(\tilde{\lambda},\epsilon)$ is smooth,
$\Gamma(\epsilon)$ in (\ref{eq:testerr_normsqterm_1_Gamma_lim})
is also smooth with respect to $\epsilon$ and
\begin{equation}
\Gamma'(0)=\frac{\tcoeff{\msc}^{2}R'_{\epsilon=0}}{[1+\delta_{\msc}\kcoeff{\msc}R_{\star}]^{2}}-\frac{(\delta_{\msc}\kcoeff{\msc}\tcoeff{\msc}R_{\star})^{2}}{(1+\delta_{\msc}\kcoeff{\msc}R_{\star})^{2}}+\Big(\varnoise^{2}+\sum_{k>\msc}\tcoeff k^{2}\Big)R'_{\epsilon=0}.\label{eq:testerr_normsqterm_1_Gammaderi}
\end{equation}
The next step is to show that $\Gamma_{d}'(0)\pconv\Gamma'(0)$. Denote
$\rsmtx(\epsilon):=[\lambda\mI+\mK+\epsilon\big(\kcoeff \msc\delta_{\msc}\kmtx_{\msc}\big)]^{-1}$.
It can be directly verified that
\[
\Gamma_{d}''(\epsilon)=\frac{2}{n}\vy^{\T}\rsmtx(\epsilon)\cdot(\kcoeff{\msc}\delta_{\msc}\kmtx_{\msc})\cdot\rsmtx(\epsilon)\cdot(\kcoeff{\msc}\delta_{\msc}\kmtx_{\msc})\cdot\rsmtx(\epsilon)\vy\geq0,
\]
so $\Gamma_{d}(\epsilon)$ is a convex function. Therefore, for any
$\epsilon\in(-\epsilon_{0},\epsilon_{0})$ and $\varsigma\in[0,(\epsilon_{0}-|\epsilon|)/2]$,
\begin{equation}
\frac{\Gamma_{d}(\epsilon-\varsigma)-\Gamma_{d}(\epsilon)}{-\varsigma}-\Gamma'(\epsilon)\leq\Gamma_{d}'(\epsilon)-\Gamma'(\epsilon)\leq\frac{\Gamma_{d}(\epsilon+\varsigma)-\Gamma_{d}(\epsilon)}{\varsigma}-\Gamma'(\epsilon).\label{eq:testerr_normsqterm_1_convex_2}
\end{equation}
On the other hand, since $\Gamma(\epsilon)$ is smooth, for any $\varrho>0$,
there exists $c>0$ such that for any $\varsigma\in[-c,c]$, it holds
that $|\epsilon|+|\varsigma|<\epsilon_{0}$ and
\begin{equation}
\Big|\Gamma'(\epsilon)-\frac{\Gamma(\epsilon+\varsigma)-\Gamma(\epsilon)}{\varsigma}\Big|\leq\frac{\varrho}{2}.\label{eq:testerr_normsqterm_1_smooth_2}
\end{equation}
Combining (\ref{eq:testerr_normsqterm_1_convex_2}) and (\ref{eq:testerr_normsqterm_1_smooth_2}),
we have for any $\varsigma\in[-c,c]$,
\begin{equation}
\begin{aligned}
\frac{[\Gamma_{d}(\epsilon-\varsigma)-\Gamma(\epsilon-\varsigma)]-[\Gamma_{d}(\epsilon)-\Gamma(\epsilon)]}{-\varsigma}-\frac{\varrho}{2}&\leq\Gamma_{d}'(\epsilon)-\Gamma'(\epsilon)\\
&\leq\frac{[\Gamma_{d}(\epsilon+\varsigma)-\Gamma(\epsilon+\varsigma)]-[\Gamma_{d}(\epsilon)-\Gamma(\epsilon)]}{\varsigma}+\frac{\varrho}{2}.
\end{aligned}\label{eq:testerr_normsqterm_1_convex_3}
\end{equation}
Since $\Gamma_{d}(\epsilon)\pconv\Gamma(\epsilon)$ for any $\epsilon\in(-\epsilon_{0},\epsilon_{0})$,
taking $d\to\infty$ in (\ref{eq:testerr_normsqterm_1_convex_3}),
we get
\[
\P(|\Gamma_{d}'(\epsilon)-\Gamma'(\epsilon)|>\varrho)\to0.
\]
Moreover, $\varrho>0$ here can be made arbitrarily small, so setting $\epsilon=0$ we obtain
\begin{equation}
\Gamma_{d}'(0)\pconv\Gamma'(0).\label{eq:testerr_normsqterm_1_pconv_Gammadderi}
\end{equation}
Finally, putting (\ref{eq:testerr_normsqterm_1}), (\ref{eq:testerr_normsqterm_1_approx_1_1}),
(\ref{eq:testerr_normsqterm_1_Gammad_deri0}), (\ref{eq:testerr_normsqterm_1_Gammaderi})
and (\ref{eq:testerr_normsqterm_1_pconv_Gammadderi}) together, we
arrive at:
\[
\E_{\text{new}}\hat{\regressfn}(\invec_{\text{new}})^{2}\pconv\sum_{k<\msc}\tcoeff k^{2}+\frac{(\delta_{\msc}\kcoeff{\msc}\tcoeff{\msc}R_{\star})^{2}}{(1+\delta_{\msc}\kcoeff{\msc}R_{\star})^{2}}-\frac{\tcoeff{\msc}^{2}R'_{\epsilon=0}}{(1+\delta_{\msc}\kcoeff{\msc}R_{\star})^{2}}-\Big(\varnoise^{2}+\sum_{k>\msc}\tcoeff k^{2}\Big)R'_{\epsilon=0},
\]
where $R'_{\epsilon=0}=-\frac{1}{\equisamp-1}$, as given
in (\ref{eq:testerr_normsqterm_1_R_deri}).
\section{Auxiliary Results for Analyzing Test Error}
\begin{comment}
There exists $\tau>0$ such that
\[
\left|\frac{1}{n}\tilde{\tevec}(\rsmtx-\rsmtx_{\leq\msc})\vy\right|\stodom\frac{1}{d^{\tau}}
\]
where $\tilde{\lambda}=\lambda+\sum_{k>\msc}\kcoeff k$ and $\rsmtx_{\leq\msc}=(\tilde{\lambda}\mI+\kmtx_{\leq\msc})^{-1}$.
\end{comment}
\begin{lem}
\label{lem:test_err_cross_approx_step2}It holds that
\begin{equation}
\left\Vert \frac{1}{\sqrt{n}}\tilde{\tevec}_{>\msc}\right\Vert \stodom\frac{1}{d},\label{eq:test_err_cross_approx_step2_finalform}
\end{equation}
where $\tilde{\tevec}_{>\msc}:=\sum_{k>\msc}\tilde{\tevec}_{k}$,
with $\tilde{\tevec}_{k}=\kcoeff k\delta_{k}\tevec_{k}$.
\end{lem}
\begin{proof}
Since $\E\tevec_{k}^{\T}\tevec_{\ell}=\indicatorfn_{k=\ell}\,n\tcoeff k^{2}$,
we have
\begin{align*}
\E\|\tilde{\tevec}_{>\msc}\|^{2} & =\E\|\sum_{k>\msc}\kcoeff k\delta_{k}\tevec_{k}\|^{2}\\
& =n\sum_{k>\msc}\kcoeff k^{2}\delta_{k}^{2}\tcoeff k^{2}\\
& \leq n\delta_{\msc+1}^{2}\sum_{k>\msc}\kcoeff k^{2}\tcoeff k^{2}\\
& \lesssim\frac{n}{d^{2}},
\end{align*}
where the last step follows from $\delta_{\msc+1}\lesssim\frac{1}{d}$, $\sum_{k=0}^{\infty}\kcoeff k<\infty$
and $\sum_{k=0}^{\infty}\tcoeff k^{2}<\infty$. Therefore, by Chebyshev's inequality, $\|\tilde{\tevec}_{>\msc}\|\stodom\frac{\sqrt{n}}{d}$,
which is the desired result.
\end{proof}
\begin{lem}
\label{lem:test_err_cross_approx_step0_finalform}It holds that
\begin{equation}
\Big|\frac{1}{n}\tilde{\tevec}_{<\msc}^{\T}\rsmtx\tevec_{<\msc}-\sum_{k<\msc}\tcoeff k^{2}\Big|\stodom\frac{1}{d}\label{eq:test_err_cross_approx_step0_finalform}
\end{equation}
where $\tilde{\tevec}_{<\msc}=\sum_{k<\msc}\kcoeff k\delta_{k}\tevec_{k}$.
\end{lem}
\begin{proof}
From Lemma \ref{lem:muted_lowerdegree},
\begin{align*}
\frac{1}{n}\mD_{<\msc}^{2}\shmtx_{<\msc}^{\T}(\lambda\mI+\kmtx)^{-1}\shmtx_{<\msc} & =(\mD_{<\msc}^{-2}+\frac{1}{n}\shmtx_{<\msc}^{\T}\rsmtx_{\geq\msc}\shmtx_{<\msc})^{-1}(\frac{1}{n}\shmtx_{<\msc}^{\T}\rsmtx_{\geq\msc}\shmtx_{<\msc})\\
& =\mI-(\mD_{<\msc}^{-2}+\frac{1}{n}\shmtx_{<\msc}^{\T}\rsmtx_{\geq\msc}\shmtx_{<\msc})^{-1}\mD_{<\msc}^{-2}.
\end{align*}
Therefore,
\begin{align}
\frac{1}{n}\tilde{\tevec}_{<\msc}^{\T}\rsmtx\tevec_{<\msc}= & \frac{1}{n}\widetilde{\shmtx}_{<\msc}(\vxi^{\T})\mD_{<\msc}^{2}\shmtx_{<\msc}^{\T}(\lambda\mI+\kmtx)^{-1}\shmtx_{<\msc}\widetilde{\shmtx}_{<\msc}(\vxi)\nonumber \\
= & \sum_{k<\msc}\tcoeff k^{2}-\widetilde{\shmtx}_{<\msc}(\vxi^{\T})(\mD_{<\msc}^{-2}+\frac{1}{n}\shmtx_{<\msc}^{\T}\rsmtx_{\geq\msc}\shmtx_{<\msc})^{-1}\mD_{<\msc}^{-2}\widetilde{\shmtx}_{<\msc}(\vxi).\label{eq:test_err_cross_approx_step0_1}
\end{align}
By Lemma \ref{lem:spectral_norm_1}, we have $\|(\mD_{<\msc}^{-2}+\frac{1}{n}\shmtx_{<\msc}^{\T}\rsmtx_{\geq\msc}\shmtx_{<\msc})^{-1}\|\stodom1$.
Also we have $\|\mD_{<\msc}^{-2}\|\lesssim\frac{1}{d}$. Therefore,
\begin{equation}
|\widetilde{\shmtx}_{<\msc}(\vxi^{\T})(\mD_{<\msc}^{-2}+\frac{1}{n}\shmtx_{<\msc}^{\T}\rsmtx_{\geq\msc}\shmtx_{<\msc})^{-1}\mD_{<\msc}^{-2}\widetilde{\shmtx}_{<\msc}(\vxi)|\stodom\frac{1}{d}\sum_{k<\msc}\tcoeff k^{2}.\label{eq:test_err_cross_approx_step0_2}
\end{equation}
Combining (\ref{eq:test_err_cross_approx_step0_1}) and (\ref{eq:test_err_cross_approx_step0_2})
together with $\sum_{k<\msc}\tcoeff k^{2}\lesssim1$ (which follows
from Assumption (A.4)), we arrive at (\ref{eq:test_err_cross_approx_step0_finalform}).
\end{proof}
\begin{comment}
When computing $\E_{\vx_{\text{new}}\mid\sgl,\inputmtx}(y_{\text{new}}\hat{y}_{\text{new}})$,
where $\hat{y}_{\text{new}}=f(\tfrac{\invec_{\text{new}}^{\T}\inputmtx^{\T}}{\sqrt{d}})\sol$,
we need to handle two quantities:
\begin{align*}
\widetilde{\shmtx}_{<\msc}(\sgl^{\T})\mdmtx_{<\msc}^{2}\big(\frac{1}{n}\shmtx_{<\msc}^{\T}(\lambda\mI+\kmtx)^{-1}\shmtx_{k}\big)\widetilde{\shmtx}_{k}(\sgl), & k\geq\msc\\
\widetilde{\shmtx}_{k}(\sgl^{\T})\mdmtx_{k}^{2}\big(\frac{1}{n}\shmtx_{k}^{\T}(\lambda\mI+\kmtx)^{-1}\shmtx_{<\msc}\big)\widetilde{\shmtx}_{<\msc}(\sgl), & k\geq\msc
\end{align*}
The following results show that for any $k\geq\msc$, both quantities
converge to zero with high probability.
\end{comment}
\begin{lem}
\label{lem:test_err_cross_approx_step3}For any $L\geq\msc$, it holds
that
\begin{equation}
\frac{1}{n}|\tilde{\tevec}_{<\msc}^{\T}\rsmtx\hat{\obser}_{\geq\msc}|\stodom\frac{1}{\sqrt{d}}\label{eq:test_err_cross_approx_step3_finalform_1}
\end{equation}
and
\begin{equation}
\frac{1}{n}|\tilde{\tevec}_{\msc}^{\T}\rsmtx\tevec_{<\msc}|\stodom\frac{1}{\sqrt{d}}\label{eq:test_err_cross_approx_step3_finalform_2}
\end{equation}
where $\tilde{\tevec}_{k}=\kcoeff k\delta_{k}\tevec_{k}$, $\hat{\obser}_{\geq\msc}=\sum_{k=\msc}^{L}\tevec_{k}+\noise$
and $\rsmtx=(\lambda\mI+\kmtx)^{-1}$.
\end{lem}
\begin{proof}
In the following, we present the proof for (\ref{eq:test_err_cross_approx_step3_finalform_1}).
The proof for (\ref{eq:test_err_cross_approx_step3_finalform_2})
is analogous and will be omitted for brevity.
To prove (\ref{eq:test_err_cross_approx_step3_finalform_1}), it suffices
to show for any fixed $k\geq\msc$,
\begin{equation}
\frac{1}{n}|\tilde{\tevec}_{<\msc}^{\T}\rsmtx\tevec_{k}|\stodom\frac{1}{\sqrt{d}}\label{eq:test_cross_term_bd_1}
\end{equation}
and
\begin{equation}
\frac{1}{n}|\tilde{\tevec}_{<\msc}^{\T}\rsmtx\vz|\stodom\frac{1}{\sqrt{d}}.\label{eq:test_cross_term_bd_2}
\end{equation}
The second bound (\ref{eq:test_cross_term_bd_2}) is easy to obtain.
By the independence between $\noise$ and $\tilde{\tevec}_{<\msc}^{\T}\rsmtx$,
we have
\begin{align}
\E_{\noise}(\tilde{\tevec}_{<\msc}^{\T}\rsmtx\vz)^{2} & =\varnoise^{2}\|\rsmtx\tilde{\tevec}_{<\msc}\|^{2}.\label{eq:test_err_cross_approx_step3_1}
\end{align}
Since $\tilde{\tevec}_{<\msc}=\shmtx_{<\msc}\mD_{<\msc}^{2}\widetilde{\shmtx}_{<\msc}(\vxi)$,
we have
\begin{align}
\frac{1}{\sqrt{n}}\|\rsmtx\tilde{\tevec}_{<\msc}\| & =\frac{1}{\sqrt{n}}\|(\lambda\mI+\kmtx)^{-1}\shmtx_{<\msc}\mD_{<\msc}^{2}\widetilde{\shmtx}_{<\msc}(\vxi)\|\nonumber \\
& =\frac{1}{\sqrt{n}}\|\rsmtx_{\geq\msc}\shmtx_{<\msc}(\mD_{<\msc}^{-2}+\frac{1}{n}\shmtx_{<\msc}^{\T}\rsmtx_{\geq\msc}\shmtx_{<\msc})^{-1}\widetilde{\shmtx}_{<\msc}(\vxi)\|\nonumber \\
& \stodom1,\label{eq:test_err_cross_approx_step3_2}
\end{align}
where the second step follows from Lemma \ref{lem:muted_lowerdegree}
and the last step follows from Lemma \ref{lem:spectral_norm_1} and
the fact that $\|\widetilde{\shmtx}_{<\msc}(\vxi)\|\lesssim1$. Then
combining (\ref{eq:test_err_cross_approx_step3_1}) and (\ref{eq:test_err_cross_approx_step3_2}),
we have
\begin{align*}
\frac{1}{n}|\tilde{\tevec}_{<\msc}^{\T}\rsmtx\vz| & \stodom\frac{1}{n}\|\rsmtx\tilde{\tevec}_{<\msc}\|\\
& \stodom\frac{1}{\sqrt{n}}
\end{align*}
which implies (\ref{eq:test_cross_term_bd_2}), since $d\lesssim n$.
Now we analyze (\ref{eq:test_cross_term_bd_1}). Note that (\ref{eq:test_cross_term_bd_1})
can be equivalently expressed as:
\begin{align}
\frac{1}{n}|\widetilde{\shmtx}_{<\msc}(\sgl)^{\T}\mdmtx_{<\msc}^{2}\shmtx_{<\msc}^{\T}\rsmtx\shmtx_{k}\widetilde{\shmtx}_{k}(\sgl)| & \stodom\frac{1}{\sqrt{d}}.\label{eq:test_cross_term_bd_11}
\end{align}
Here $\widetilde{\shmtx}_{k}(\sgl)$ and $\widetilde{\shmtx}_{<\msc}(\sgl)$
are independent of the sandwiched matrix $\mdmtx_{<\msc}^{2}\shmtx_{<\msc}^{\T}\rsmtx\shmtx_{k}$.
However, the entries of $\widetilde{\shmtx}_{k}(\sgl)$ and $\widetilde{\shmtx}_{<\msc}(\sgl)$
are not independent, although they are uncorrelated. The key to proving
(\ref{eq:test_cross_term_bd_11}) is to handle the weak correlation
among different entries of $\widetilde{\shmtx}_{k}(\sgl)$ and $\widetilde{\shmtx}_{<\msc}(\sgl)$.
There are different choices of the orthonormal basis $\{\shmtx_{k}(\sgl)\}_{k\geq0}$,
all of which are equivalent for our purposes. To ease the analysis, we work with
the following specific choice of $\shmtx_{k}(\sgl)$:
\begin{equation}
\shmtx_{k}(\sgl)=[\shmtx_{k,\symset}(\sgl)^{\T},\shmtx_{k,\backslash\symset}(\sgl)^{\T}]^{\T},\label{eq:SH_symmetric}
\end{equation}
where $\shmtx_{k,\symset}(\sgl^{\T})$ is a ${d \choose k}$-dimensional
vector and its entries are $\{\upsilon_{k}\sgli_{i_{1}}\sgli_{i_{2}}\cdots\sgli_{i_{k}}\}_{1\leq i_{1}<i_{2}<\cdots<i_{k}\leq d}$.
Here, the normalizing constant is $\upsilon_{k}=[\E(\sgli_{1}^{2}\sgli_{2}^{2}\cdots\sgli_{k}^{2})]^{-\frac{1}{2}}$,
with $\sgl\sim\spmeasure{d-1}$. It can be easily checked that when
$\sgl\sim\spmeasure{d-1}$ and $\{i_{1},i_{2},\cdots,i_{k}\}$, $\{j_{1},j_{2},\cdots,j_{k}\}$
are two different sets of indices, it holds that $\E(\sgli_{i_{1}}\sgli_{i_{2}}\cdots\sgli_{i_{k}}\cdot\sgli_{j_{1}}\sgli_{j_{2}}\cdots\sgli_{j_{k}})=0$.
To see this, we first represent $\sgl$ by: $\sgl=\sqrt{d}\frac{\vtheta}{\|\vtheta\|}$,
where $\vtheta\sim\mathcal{N}(\boldsymbol{0},\mI_{d})$. Since $\{i_{1},i_{2},\cdots,i_{k}\}\neq\{j_{1},j_{2},\cdots,j_{k}\}$,
there exists at least one $u\in\{j_{1},j_{2},\cdots,j_{k}\}$ such
that $u\notin\{i_{1},i_{2},\cdots,i_{k}\}$. Then we have:
\begin{align*}
\E[\sgli_{i_{1}}\sgli_{i_{2}}\cdots\sgli_{i_{k}}\cdot\sgli_{j_{1}}\sgli_{j_{2}}\cdots\sgli_{j_{k}}] & =\E\E[\sgli_{i_{1}}\sgli_{i_{2}}\cdots\sgli_{i_{k}}\cdot\sgli_{j_{1}}\sgli_{j_{2}}\cdots\sgli_{j_{k}}\mid\{\theta_{j}\}_{j\neq u}]\\
& =\E\E\Big[\tfrac{\varphi\big(\{\theta_{j}\}_{j\neq u}\big)\cdot\theta_{u}}{(\sum_{j\neq u}\theta_{j}^{2}+\theta_{u}^{2})^{k}}\mid\{\theta_{j}\}_{j\neq u}\Big]\\
& =0,
\end{align*}
where $\varphi$ is a function of $\{\theta_{j}\}_{j\neq u}$ and
the last step follows from the fact that $\theta_{u}\sim\mathcal{N}(0,1)$
and $\theta\mapsto\tfrac{\varphi\big(\{\theta_{j}\}_{j\neq u}\big)\cdot\theta}{(\sum_{j\neq u}\theta_{j}^{2}+\theta^{2})^{k}}$
is an odd function, for any fixed $\{\theta_{j}\}_{j\neq u}$. Therefore,
we have $\E[\shmtx_{k,\symset}(\sgl)\shmtx_{k,\symset}(\sgl)^{\T}]=\mI$.
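The following Monte Carlo sketch (illustrative only; the dimension $d$, the degree $k$ and the sample size are arbitrary placeholder values) builds the symmetric features $\upsilon_{k}\sgli_{i_{1}}\cdots\sgli_{i_{k}}$ for points drawn uniformly from the sphere of radius $\sqrt{d}$, and checks empirically that their Gram matrix is close to the identity, consistent with $\E[\shmtx_{k,\symset}(\sgl)\shmtx_{k,\symset}(\sgl)^{\T}]=\mI$.
\begin{verbatim}
import itertools
import numpy as np

rng = np.random.default_rng(0)
d, k, n_mc = 8, 2, 200_000                     # placeholder values
theta = rng.standard_normal((n_mc, d))
xi = np.sqrt(d) * theta / np.linalg.norm(theta, axis=1, keepdims=True)

idx = list(itertools.combinations(range(d), k))
feats = np.stack([np.prod(xi[:, list(s)], axis=1) for s in idx], axis=1)
upsilon = 1.0 / np.sqrt(np.mean(feats[:, 0] ** 2))   # estimate of [E xi_1^2 ... xi_k^2]^{-1/2}
feats *= upsilon

gram = feats.T @ feats / n_mc
print(np.max(np.abs(gram - np.eye(len(idx)))))   # small (Monte Carlo error only)
\end{verbatim}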
In the following, we make use of the following notations: (i) for
any $\mX$, $\shmtx_{<\msc,\symset}(\mX):=[\shmtx_{0,\symset}(\mX),\cdots,\shmtx_{\msc-1,\symset}(\mX)]$
and $\shmtx_{<\msc,\backslash\symset}(\mX):=[\shmtx_{0,\backslash\symset}(\mX),\cdots,\shmtx_{\msc-1,\backslash\symset}(\mX)]$,
(ii) for any $k$, $\vx$, $\widetilde{\shmtx}_{k,\symset}(\vx):=\frac{\tcoeff k}{\sqrt{\usdim k}}\shmtx_{k,\symset}(\vx)$
and $\widetilde{\shmtx}_{k,\backslash\symset}(\vx):=\frac{\tcoeff k}{\sqrt{\usdim k}}\shmtx_{k,\backslash\symset}(\vx)$.
The proof of (\ref{eq:test_cross_term_bd_11}) will be done in two
steps.
(I) The first step is to show the following:
\begin{equation}
\tfrac{1}{\sqrt{n}}\left|\widetilde{\shmtx}_{<\msc}(\sgl^{\T})\mM\shmtx_{k}\widetilde{\shmtx}_{k}(\sgl)-\widetilde{\shmtx}_{<\msc,\symset}(\sgl^{\T})\mM_{\symset}\shmtx_{k,\symset}\widetilde{\shmtx}_{k,\symset}(\sgl)\right|\stodom\frac{1}{\sqrt{d}},\label{eq:test_cross_term_bd_1_step_1_finalform}
\end{equation}
where $\mM=\frac{1}{\sqrt{n}}\mdmtx_{<\msc}^{2}\shmtx_{<\msc}^{\T}\rsmtx$
and $\mM_{\symset}=\frac{1}{\sqrt{n}}\mdmtx_{<\msc,\symset}^{2}\shmtx_{<\msc,\symset}^{\T}\rsmtx$.
Here, $\mdmtx_{<\msc,\symset}$ is the minor of $\mdmtx_{<\msc}$,
corresponding to $\shmtx_{<\msc,\symset}$ (in other words, if $\mathcal{I}\subseteq\{1,2,\ldots,\sum_{k=0}^{\msc-1}\usdim k\}$
is the set of all row indices of $\shmtx_{<\msc,\symset}$ in $\shmtx_{<\msc}$,
then $\mdmtx_{<\msc,\symset}=(\mdmtx_{<\msc,i,j})_{i,j\in\mathcal{I}}$).
In fact, (\ref{eq:test_cross_term_bd_1_step_1_finalform}) states
that after applying a truncation to $\shmtx_{k}$ and $\widetilde{\shmtx}_{k}(\sgl)$,
the resulting approximation error vanishes as $d\to\infty$. The
subsequent analysis can then be carried out on $\frac{1}{\sqrt{n}}\widetilde{\shmtx}_{<\msc,\symset}(\sgl^{\T})\mM_{\symset}\shmtx_{k,\symset}\widetilde{\shmtx}_{k,\symset}(\sgl)$,
which is easier to handle due to the symmetric structure of $\widetilde{\shmtx}_{k,\symset}(\sgl)$.
We now prove (\ref{eq:test_cross_term_bd_1_step_1_finalform}). We
have
\begin{align}
& \tfrac{1}{\sqrt{n}}|\widetilde{\shmtx}_{<\msc}(\sgl^{\T})\mM\shmtx_{k}\widetilde{\shmtx}_{k}(\sgl)-\widetilde{\shmtx}_{<\msc,\symset}(\sgl^{\T})\mM_{\symset}\shmtx_{k,\symset}\widetilde{\shmtx}_{k,\symset}(\sgl)|\nonumber \\
\leq & \tfrac{1}{\sqrt{n}}|\widetilde{\shmtx}_{<\msc}(\sgl^{\T})\mM\shmtx_{k}\widetilde{\shmtx}_{k}(\sgl)-\widetilde{\shmtx}_{<\msc,\symset}(\sgl^{\T})\mM_{\symset}\shmtx_{k}\widetilde{\shmtx}_{k}(\sgl)|\nonumber \\
& +\tfrac{1}{\sqrt{n}}|\widetilde{\shmtx}_{<\msc,\symset}(\sgl^{\T})\mM_{\symset}\shmtx_{k}\widetilde{\shmtx}_{k}(\sgl)-\widetilde{\shmtx}_{<\msc,\symset}(\sgl^{\T})\mM_{\symset}\shmtx_{k,\symset}\widetilde{\shmtx}_{k,\symset}(\sgl)|\nonumber \\
= & \Big|\widetilde{\shmtx}_{<\msc,\backslash\symset}(\sgl^{\T})\mM_{\backslash\symset}\tfrac{1}{\sqrt{n}}\shmtx_{k}\widetilde{\shmtx}_{k}(\sgl)\Big|+\Big|\widetilde{\shmtx}_{<\msc,\symset}(\sgl^{\T})\mM_{\symset}\tfrac{1}{\sqrt{n}}\shmtx_{k,\backslash\symset}\widetilde{\shmtx}_{k,\backslash\symset}(\sgl)\Big|.\label{eq:test_cross_term_bd_1_step_1}
\end{align}
Here, $\mM_{\backslash\symset}$ is defined in the same way as $\mM_{\symset}$.
Next we show both terms on the right-hand side of (\ref{eq:test_cross_term_bd_1_step_1})
vanish as $d\to\infty$. First, for any $k$,
\begin{align*}
\E\|\shmtx_{k,\backslash\symset}(\sgl)\|^{2} & =\E\tr[\shmtx_{k,\backslash\symset}(\sgl)\shmtx_{k,\backslash\symset}(\sgl^{\T})]\\
& =\usdim k-{d \choose k}\\
& \lesssim\usdim{k-1},
\end{align*}
where we have used the fact that $\usdim k={d \choose k}[1+\mathcal{O}(1/d)]$.
Therefore, by Chebyshev's inequality,
\begin{align}
\|\widetilde{\shmtx}_{k,\backslash\symset}(\sgl)\| & =\frac{\tcoeff k}{\sqrt{\usdim k}}\|\shmtx_{k,\backslash\symset}(\sgl)\|\nonumber \\
& \lesssim d^{-1/2}.\label{eq:test_cross_term_bd_1_step_1_bd_1}
\end{align}
Similarly, for any $k$, we can show $\|\frac{1}{\sqrt{n}}\shmtx_{k,\backslash\symset}\widetilde{\shmtx}_{k,\backslash\symset}(\sgl)\|\lesssim d^{-1/2}$
as follows. By the independence between $\shmtx_{k,\backslash\symset}$
and $\shmtx_{k,\backslash\symset}(\sgl)$, we have
\begin{align*}
\E\|\shmtx_{k,\backslash\symset}\shmtx_{k,\backslash\symset}(\sgl)\|^{2} & =\E\tr[\shmtx_{k,\backslash\symset}\shmtx_{k,\backslash\symset}(\sgl)\shmtx_{k,\backslash\symset}(\sgl^{\T})\shmtx_{k,\backslash\symset}^{\T}]\\
& =\E\tr[\shmtx_{k,\backslash\symset}\shmtx_{k,\backslash\symset}^{\T}]\\
& =n\Big[\usdim k-{d \choose k}\Big].
\end{align*}
Therefore, similarly to (\ref{eq:test_cross_term_bd_1_step_1_bd_1}),
we get
\begin{equation}
\|\frac{1}{\sqrt{n}}\shmtx_{k,\backslash\symset}\widetilde{\shmtx}_{k,\backslash\symset}(\sgl)\|\lesssim d^{-1/2}.\label{eq:test_cross_term_bd_1_step_1_bd_2}
\end{equation}
On the other hand, by Lemma \ref{lem:muted_lowerdegree} we have
\begin{align*}
\mM= & \frac{1}{\sqrt{n}}\mdmtx_{<\msc}^{2}\shmtx_{<\msc}^{\T}\rsmtx\\
= & (\mD_{<\msc}^{-2}+\frac{1}{n}\shmtx_{<\msc}^{\T}\rsmtx_{\geq\msc}\shmtx_{<\msc})^{-1}(\frac{1}{\sqrt{n}}\shmtx_{<\msc}^{\T}\rsmtx_{\geq\msc})
\end{align*}
and by Lemma \ref{lem:spectral_norm_1}, we have $\|\mM\|\stodom1$.
Since $\mM_{\backslash\symset}$ and $\mM_{\symset}$ are both sub-matrices
of $\mM$, we get
\begin{equation}
\|\mM_{\symset}\|,\|\mM_{\backslash\symset}\|\leq\|\mM\|\stodom1.\label{eq:test_cross_term_bd_1_step_1_bd_3}
\end{equation}
Now substituting (\ref{eq:test_cross_term_bd_1_step_1_bd_1}), (\ref{eq:test_cross_term_bd_1_step_1_bd_2})
and (\ref{eq:test_cross_term_bd_1_step_1_bd_3}) back into (\ref{eq:test_cross_term_bd_1_step_1}),
and using Lemma \ref{lem:y_norm_bd}, we arrive at (\ref{eq:test_cross_term_bd_1_step_1_finalform}).
(II) In the second step, we show
\begin{equation}
\frac{1}{\sqrt{n}}|\widetilde{\shmtx}_{<\msc,\symset}(\sgl^{\T})\mM_{\symset}\shmtx_{k,\symset}\widetilde{\shmtx}_{k,\symset}(\sgl)|\stodom\sqrt{\frac{1}{d}}.\label{eq:test_cross_term_bd_1_step_2_finalform}
\end{equation}
By Lemma \ref{lem:samplecov_spectral_norm} and Lemma \ref{lem:spectral_norm_1}
we have
\begin{align*}
\|\frac{1}{\sqrt{n}}\mM\shmtx_{k}\| & =\|(\mD_{<\msc}^{-2}+\frac{1}{n}\shmtx_{<\msc}^{\T}\rsmtx_{\geq\msc}\shmtx_{<\msc})^{-1}(\frac{1}{\sqrt{n}}\shmtx_{<\msc}^{\T}\rsmtx_{\geq\msc}\frac{1}{\sqrt{n}}\shmtx_{k})\|\\
& \stodom\sqrt{\frac{\usdim k}{n}}.
\end{align*}
Since $\mM_{\symset}\shmtx_{k,\symset}$ is a sub-matrix of $\mM\shmtx_{k}$, we get $\|\frac{1}{\sqrt{n}}\mM_{\symset}\shmtx_{k,\symset}\|\stodom\sqrt{\frac{\usdim k}{n}}.$
Now we have for any $k\geq\msc$,
\begin{align*}
\frac{1}{\sqrt{n}}|\widetilde{\shmtx}_{<\msc,\symset}(\sgl^{\T})\mM_{\symset}\shmtx_{k,\symset}\widetilde{\shmtx}_{k,\symset}(\sgl)|\leq & \sum_{\ell<\msc}\left|\widetilde{\shmtx}_{\ell,\symset}(\sgl^{\T})\frac{1}{\sqrt{n}}\mM_{\symset,\ell}\shmtx_{k,\symset}\widetilde{\shmtx}_{k,\symset}(\sgl)\right|\\
\lesssim & \sum_{\ell<\msc}\frac{1}{\sqrt{\usdim{\ell}\usdim k}}\left|\shmtx_{\ell,\symset}(\sgl^{\T})\frac{1}{\sqrt{n}}\mM_{\symset,\ell}\shmtx_{k,\symset}\shmtx_{k,\symset}(\sgl)\right|\\
\overset{\text{(a)}}{\stodom} & \sum_{\ell<\msc}\frac{1}{\sqrt{\usdim{\ell}\usdim k}}\sqrt{\usdim{\ell}}\|\frac{1}{\sqrt{n}}\mM_{\symset,\ell}\shmtx_{k,\symset}\|_{F}\\
\leq & \sum_{\ell<\msc}\frac{1}{\sqrt{\usdim{\ell}\usdim k}}\sqrt{\usdim{\ell}}\cdot\sqrt{\usdim{\ell}}\|\frac{1}{\sqrt{n}}\mM_{\symset}\shmtx_{k,\symset}\|\\
\overset{\text{(b)}}{\stodom} & \sum_{\ell<\msc}\sqrt{\frac{\usdim{\ell}}{n}}\\
\lesssim & \sqrt{\frac{1}{d}}
\end{align*}
where (a) follows from (\ref{eq:Fkl_sq_bound}) in Lemma \ref{lem:Fkl_sq_bound}
and in (b) we use $\|\frac{1}{\sqrt{n}}\mM_{\symset}\shmtx_{k,\symset}\|\stodom\sqrt{\frac{\usdim k}{n}}$.
Combining (\ref{eq:test_cross_term_bd_1_step_1_finalform}) and (\ref{eq:test_cross_term_bd_1_step_2_finalform}),
we arrive at (\ref{eq:test_cross_term_bd_11}). This completes the proof of
(\ref{eq:test_cross_term_bd_1}).
\end{proof}
\begin{lem}
\label{lem:test_err_cross_approx_step4}For any $L\geq\msc$, there exists $\tau>0$ such that
\[
\frac{1}{n}|\tilde{\tevec}_{\msc}^{\T}(\rsmtx-\widetilde{\rsmtx}_{\msc})\hat{\obser}_{\geq\msc}|\stodom\frac{1}{d^{\tau}},
\]
where $\tilde{\tevec}_{\msc}=\kcoeff{\msc}\delta_{\msc}\tevec_{\msc}$,
$\hat{\obser}_{\geq\msc}=\sum_{k=\msc}^{L}\tevec_{k}+\noise$, $\rsmtx=(\lambda\mI+\kmtx)^{-1}$,
$\widetilde{\rsmtx}_{\msc}=(\tilde{\lambda}\mI+\kmtx_{\msc})^{-1}$ and $\widetilde{\rsmtx}_{\leq\msc}=(\tilde{\lambda}\mI+\kmtx_{\leq\msc})^{-1}$,
with $\tilde{\lambda}=\lambda+\sum_{k>\msc}\kcoeff k$.
\end{lem}
\begin{proof}
We have the following decomposition:
\begin{align*}
\frac{1}{n}|\tilde{\tevec}_{\msc}^{\T}(\rsmtx-\widetilde{\rsmtx}_{\msc})\hat{\obser}_{\geq\msc}|\leq & \frac{1}{n}|\tilde{\tevec}_{\msc}^{\T}(\rsmtx-\widetilde{\rsmtx}_{\leq\msc})\hat{\obser}_{\geq\msc}|+\frac{1}{n}|\tilde{\tevec}_{\msc}^{\T}(\widetilde{\rsmtx}_{\leq\msc}-\widetilde{\rsmtx}_{\msc})\hat{\obser}_{\geq\msc}|.
\end{align*}
Since $\delta_{k}\lesssim1$ for $k\geq\msc$, it follows directly
from Lemma \ref{lem:training_err_approx_step1} and Lemma \ref{lem:y_norm_bd}
that $\frac{1}{n}|\tilde{\tevec}_{\msc}^{\T}(\rsmtx-\widetilde{\rsmtx}_{\leq\msc})\hat{\obser}_{\geq\msc}|\stodom\frac{1}{d^{\tau}}$.
On the other hand, following the same argument as in the proof of Lemma \ref{lem:training_err_approx_step3},
we get $\frac{1}{n}|\tilde{\tevec}_{\msc}^{\T}(\widetilde{\rsmtx}_{\leq\msc}-\widetilde{\rsmtx}_{\msc})\hat{\obser}_{\geq\msc}|\stodom\frac{1}{\sqrt{d}}$.
Combining these two bounds, we arrive at the desired result.
\end{proof}
\begin{lem}
\label{lem:testerr_normsqterm_lowdegree_truncation}It holds that
\begin{equation}
\Big|\frac{1}{n}\vy^{\T}\rsmtx\big(\sum_{k=0}^{\infty}\kcoeff k\delta_{k}\kmtx_{k}\big)\rsmtx\vy-\big[\sum_{k=0}^{\msc-1}\tcoeff k^{2}+\frac{1}{n}\vy^{\T}\rsmtx\big(\kcoeff{\msc}\delta_{\msc}\kmtx_{\msc})\rsmtx\vy\big]\Big|\stodom\frac{1}{\sqrt{d}}.\label{eq:testerr_normsqterm_1_approx_1}
\end{equation}
\end{lem}
\begin{proof}
We first show that the components of degree greater than $\msc$ can be discarded:
\begin{equation}
\Big|\frac{1}{n}\vy^{\T}\rsmtx\big(\sum_{k>\msc}\kcoeff k\delta_{k}\kmtx_{k}\big)\rsmtx\vy\Big|\stodom\frac{1}{d}.\label{eq:testerr_normsqterm_1_approx_1_bd_1}
\end{equation}
From (\ref{eq:higher_order_norm_bd_Mpapaer_3}) in the proof of Lemma
\ref{lem:training_err_approx_step1}, we have $\|\sum_{k>\msc}\kmtx_{k}\|\stodom1$.
On the other hand, since $\sup_{k\geq0}\kcoeff k\leq\kernalfn(\sqrt{d})\lesssim1$
and $\sup_{k>\msc}\delta_{k}\lesssim\frac{1}{d}$, we get $\|\sum_{k>\msc}\kcoeff k\delta_{k}\kmtx_{k}\|\lesssim\frac{1}{d}$.
Combining this bound with Lemma \ref{lem:y_norm_bd} and the fact that
$\|\rsmtx\|\lesssim1$, we verify (\ref{eq:testerr_normsqterm_1_approx_1_bd_1}).
Next we show
\begin{equation}
\Big|\frac{1}{n}\vy^{\T}\rsmtx\big(\sum_{k=0}^{\msc-1}\kcoeff k\delta_{k}\kmtx_{k}\big)\rsmtx\vy-\sum_{k=0}^{\msc-1}\tcoeff k^{2}\Big|\stodom\frac{1}{\sqrt{d}}.\label{eq:testerr_normsqterm_1_approx_1_bd_2}
\end{equation}
To start, we rewrite $\frac{1}{n}\vy^{\T}\rsmtx(\sum_{k=0}^{\msc-1}\kcoeff k\delta_{k}\kmtx_{k})\rsmtx\vy$
as:
\begin{align}
\frac{1}{n}\vy^{\T}\rsmtx(\sum_{k=0}^{\msc-1}\kcoeff k\delta_{k}\kmtx_{k})\rsmtx\vy= & \Big(\frac{1}{n}\obser^{\T}\rsmtx\mY_{<\msc}\mD_{<\msc}^{2}\Big)\cdot\Big(\underbrace{\frac{1}{n}\mD_{<\msc}^{2}\mY_{<\msc}^{\T}\rsmtx\obser}_{:=\vphi}\Big).\label{eq:testerr_normsqterm_1_approx_2}
\end{align}
The vector $\vphi$ defined on the right-hand side of (\ref{eq:testerr_normsqterm_1_approx_2})
can be decomposed as:
\begin{align}
\vphi & =\frac{1}{n}\mD_{<\msc}^{2}\mY_{<\msc}^{\T}\rsmtx\tevec_{<\msc}+\frac{1}{n}\mD_{<\msc}^{2}\mY_{<\msc}^{\T}\rsmtx\obser_{\geq\msc}\nonumber \\
& =\widetilde{\shmtx}_{<\msc}(\sgl)-\underbrace{(\mD_{<\msc}^{-2}+\frac{1}{n}\shmtx_{<\msc}^{\T}\rsmtx_{\geq\msc}\shmtx_{<\msc})^{-1}\mD_{<\msc}^{-2}\widetilde{\shmtx}_{<\msc}(\sgl)}_{:=\vphi_{1}}+\underbrace{\frac{1}{n}\mD_{<\msc}^{2}\mY_{<\msc}^{\T}\rsmtx\obser_{\geq\msc}}_{\vphi_{2}},\label{eq:testerr_normsqterm_1_phi}
\end{align}
where $\widetilde{\shmtx}_{k}(\sgl)=\frac{\tcoeff k}{\sqrt{\usdim k}}\shmtx_{k}(\sgl)$,
$\rsmtx_{\geq\msc}=(\lambda\mI+\mK_{\geq\msc})^{-1}$ and in reaching
the second step we use Lemma \ref{lem:muted_lowerdegree}. To proceed,
let us analyze the norms of vectors $\widetilde{\shmtx}_{<\msc}(\sgl)$,
$\vphi_{1}$ and $\vphi_{2}$ on the right-hand side of (\ref{eq:testerr_normsqterm_1_phi}).
First,
\begin{equation}
\|\widetilde{\shmtx}_{<\msc}(\sgl)\|^{2}=\sum_{k=0}^{\msc-1}\tcoeff k^{2}.\label{eq:testerr_normsqterm_1_vecnorm_1}
\end{equation}
Also, it is easy to verify $\|\vphi_{1}\|\stodom\frac{1}{d}$ as follows.
From (\ref{eq:higher_order_norm_bd_Mpapaer_3}), we have $\|\kmtx_{>\msc}\|\stodom1$.
Combining this with Lemma \ref{lem:spectral_norm_1}, we have $\|(\mD_{<\msc}^{-2}+\frac{1}{n}\shmtx_{<\msc}^{\T}\rsmtx_{\geq\msc}\shmtx_{<\msc})^{-1}\|\stodom1$.
Also note that $\|\mD_{<\msc}^{-2}\|\lesssim\frac{1}{d}$ and $\|\widetilde{\shmtx}_{<\msc}(\sgl)\|^{2}=\sum_{k=0}^{\msc-1}\tcoeff k^{2}<\infty$,
so
\begin{equation}
\|\vphi_{1}\|\stodom\frac{1}{d}\label{eq:testerr_normsqterm_1_vecnorm_2}
\end{equation}
Lastly, let us show
\begin{equation}
\|\vphi_{2}\|\stodom\frac{1}{\sqrt{d}}\label{eq:testerr_normsqterm_1_vecnorm_3}
\end{equation}
Note that $\vphi_{2}$ can be written as:
\[
\vphi_{2}=\sum_{k\geq\msc}\mPsi\shmtx_{k}\widetilde{\shmtx}_{k}(\sgl)+\mPsi\noise
\]
where $\mPsi=\frac{1}{n}\mD_{<\msc}^{2}\mY_{<\msc}^{\T}\rsmtx$. Then
\begin{equation}
\E_{\sgl,\noise}\|\vphi_{2}\|^{2}=\tr\Big[\mPsi(\sum_{k\geq\msc}\frac{\tcoeff k^{2}}{\usdim k}\shmtx_{k}\shmtx_{k}^{\T})\mPsi^{\T}\Big]+\varnoise^{2}\tr(\mPsi\mPsi^{\T}).\label{eq:testerr_normsqterm_1_Ephi2normsq}
\end{equation}
We first control the spectral norm of $\mPsi$. From Lemma \ref{lem:muted_lowerdegree},
we can get
\[
\mPsi=\frac{1}{n}(\mD_{<\msc}^{-2}+\frac{1}{n}\shmtx_{<\msc}^{\T}\rsmtx_{\geq\msc}\shmtx_{<\msc})^{-1}\shmtx_{<\msc}^{\T}\rsmtx_{\geq\msc}
\]
By Lemma \ref{lem:samplecov_spectral_norm}, we have
\begin{align*}
\P(\|\frac{1}{n}\shmtx_{<\msc}\shmtx_{<\msc}^{\T}\|>2) & \leq2\usdim{<\msc}\exp\big(-\tfrac{n}{8\usdim{<\msc}}\big)\\
& \leq c\exp(-d/c),
\end{align*}
for some $c>0$ and we have already shown $\|(\mD_{<\msc}^{-2}+\frac{1}{n}\shmtx_{<\msc}^{\T}\rsmtx_{\geq\msc}\shmtx_{<\msc})^{-1}\|\stodom1$,
so we get
\begin{equation}
\|\mPsi\|\stodom\frac{1}{\sqrt{n}}.\label{eq:testerr_normsqterm_1_spectralnormbd_1}
\end{equation}
On the other hand, in the same way as (\ref{eq:higher_order_norm_bd_Mpapaer_3}),
we can show $\|\sum_{k>\msc}\frac{\tcoeff k^{2}}{\usdim k}\shmtx_{k}\shmtx_{k}^{\T}-\sum_{k>\msc}\tcoeff k^{2}\mI\|\stodom\frac{1}{d^{\tau}}$
for some $\tau>0$, so combining this with $\|\frac{\tcoeff{\msc}^{2}}{\usdim{\msc}}\shmtx_{\msc}\shmtx_{\msc}^{\T}\|\stodom1$,
we get
\begin{equation}
\|\sum_{k\geq\msc}\frac{\tcoeff k^{2}}{\usdim k}\shmtx_{k}\shmtx_{k}^{\T}\|\stodom1.\label{eq:testerr_normsqterm_1_spectralnormbd_2}
\end{equation}
Combining (\ref{eq:testerr_normsqterm_1_spectralnormbd_1}) and (\ref{eq:testerr_normsqterm_1_spectralnormbd_2})
with (\ref{eq:testerr_normsqterm_1_Ephi2normsq}), we get:
\[
\|\vphi_{2}\|^{2}\stodom\frac{\usdim{<\msc}}{n}\lesssim\frac{1}{d}.
\]
After combining (\ref{eq:testerr_normsqterm_1_vecnorm_1}), (\ref{eq:testerr_normsqterm_1_vecnorm_2})
and (\ref{eq:testerr_normsqterm_1_vecnorm_3}) with (\ref{eq:testerr_normsqterm_1_phi})
and (\ref{eq:testerr_normsqterm_1_approx_2}), we get (\ref{eq:testerr_normsqterm_1_approx_1_bd_2}).
Finally, the proof is completed by combining (\ref{eq:testerr_normsqterm_1_approx_1_bd_1})
with (\ref{eq:testerr_normsqterm_1_approx_1_bd_2}).
\end{proof}
\section{Auxiliary Results for Analyzing Training Error}
\begin{lem}
\label{lem:y_norm_bd}For any $\mathcal{I}\subseteq\{0,1,2,\cdots\}$,
we have $\|\sum_{k\in\mathcal{I}}\tevec_{k}\|\stodom\sqrt{n}$. Also
$\|\noise\|\stodom\sqrt{n}$.
\end{lem}
\begin{proof}
Recall that $\tevec_{k}=\frac{\tcoeff k}{\sqrt{\usdim k}}\shmtx_{k}\shmtx_{k}(\vxi)$.
By rotational invariance, we can assume that $\vxi\sim\spmeasure{d-1}$.
Therefore,
\begin{align*}
\E\|\sum_{k\in\mathcal{I}}\tevec_{k}\|^{2} & =\E\sum_{k\in\mathcal{I}}\frac{\tcoeff k^{2}}{\usdim k}\tr(\shmtx_{k}\shmtx_{k}^{\T})\\
& =n\sum_{k\in\mathcal{I}}\tcoeff k^{2}.
\end{align*}
Since $\sum_{k=0}^{\infty}\tcoeff k^{2}\lesssim1$, by Chebyshev's
inequality we get $\|\sum_{k\in\mathcal{I}}\tevec_{k}\|\stodom\sqrt{n}$.
In a similar way, we can get $\|\noise\|\stodom\sqrt{n}$.
\end{proof}
\begin{lem}
\label{lem:muted_lowerdegree}For any positive semidefinite matrix
$\mM\in\R^{n\times n}$, we have%
\begin{comment}
\[
(\lambda\mI+\kmtx_{\leq\msc})^{-1}\shmtx_{<\msc}=\rsmtx_{\msc}\shmtx_{<\msc}(\mD_{<\msc}^{-2}+\frac{1}{n}\shmtx_{<\msc}^{\T}\rsmtx_{\msc}\shmtx_{<\msc})^{-1}\cdot\mD_{<\msc}^{-2}
\]
\end{comment}
\begin{equation}
(\lambda\mI+\kmtx_{<\msc}+\mM)^{-1}\shmtx_{<\msc}=\rsmtx_{M}\shmtx_{<\msc}(\mD_{<\msc}^{-2}+\frac{1}{n}\shmtx_{<\msc}^{\T}\rsmtx_{M}\shmtx_{<\msc})^{-1}\cdot\mD_{<\msc}^{-2}\label{eq:muted_lowerdegree}
\end{equation}
where $\rsmtx_{M}=(\lambda\mI+\mM)^{-1}$ and
$\mD_{<\msc} := \diag\{\mD_0,\mD_1,\cdots,\mD_{\msc-1}\}.$
\begin{comment}
\[
(\lambda\mI+\kmtx_{\leq\msc})^{-1}\widetilde{\shmtx}_{<\msc}=\rsmtx_{\msc}\widetilde{\shmtx}_{<\msc}(\mI+\widetilde{\shmtx}_{<\msc}^{\T}\rsmtx_{\msc}\widetilde{\shmtx}_{<\msc})^{-1}
\]
\end{comment}
\end{lem}
\begin{proof}
By the matrix inversion formula, we have
\begin{align*}
&~~~(\lambda\mI+\kmtx_{<\msc}+\mM)^{-1} \\
& =(\lambda\mI+\frac{1}{n}\shmtx_{<\msc}\mD_{<\msc}^{2}\shmtx_{<\msc}^{\T}+\mM)^{-1}\\
& =\rsmtx_{M}-\frac{1}{n}\rsmtx_{M}\shmtx_{<\msc}\mD_{<\msc}(\mI+\frac{1}{n}\mD_{<\msc}\shmtx_{<\msc}^{\T}\rsmtx_{M}\shmtx_{<\msc}\mD_{<\msc})^{-1}\mD_{<\msc}\shmtx_{<\msc}^{\T}\rsmtx_{M}\\
& =\rsmtx_{M}-\frac{1}{n}\rsmtx_{M}\shmtx_{<\msc}(\mD_{<\msc}^{-2}+\frac{1}{n}\shmtx_{<\msc}^{\T}\rsmtx_{M}\shmtx_{<\msc})^{-1}\shmtx_{<\msc}^{\T}\rsmtx_{M}.
\end{align*}
Note that due to Assumption (A.3), $\mD_{<\msc}^{-2}$ is well-defined.
Then%
\begin{comment}
\begin{align*}
(\lambda\mI+\kmtx_{\leq\msc})^{-1}\widetilde{\shmtx}_{<\msc} & =[\rsmtx_{\msc}-\rsmtx_{\msc}\widetilde{\shmtx}_{<\msc}(\mI+\widetilde{\shmtx}_{<\msc}^{\T}\rsmtx_{\msc}\widetilde{\shmtx}_{<\msc})^{-1}\widetilde{\shmtx}_{<\msc}^{\T}\rsmtx_{\msc}]\widetilde{\shmtx}_{<\msc}\\
& =\rsmtx_{\msc}\widetilde{\shmtx}_{<\msc}[\mI-(\mI+\widetilde{\shmtx}_{<\msc}^{\T}\rsmtx_{\msc}\widetilde{\shmtx}_{<\msc})^{-1}\widetilde{\shmtx}_{<\msc}^{\T}\rsmtx_{\msc}\widetilde{\shmtx}_{<\msc}]\\
& =\rsmtx_{\msc}\widetilde{\shmtx}_{<\msc}(\mI+\widetilde{\shmtx}_{<\msc}^{\T}\rsmtx_{\msc}\widetilde{\shmtx}_{<\msc})^{-1}
\end{align*}
\end{comment}
\begin{align*}
(\lambda\mI+\kmtx_{<\msc}+\mM)^{-1}\shmtx_{<\msc} & =\rsmtx_{M}\shmtx_{<\msc}[\mI-(\mD_{<\msc}^{-2}+\frac{1}{n}\shmtx_{<\msc}^{\T}\rsmtx_{M}\shmtx_{<\msc})^{-1}\cdot(\frac{1}{n}\shmtx_{<\msc}^{\T}\rsmtx_{M}\shmtx_{<\msc})]\\
& =\rsmtx_{M}\shmtx_{<\msc}(\mD_{<\msc}^{-2}+\frac{1}{n}\shmtx_{<\msc}^{\T}\rsmtx_{M}\shmtx_{<\msc})^{-1}\cdot\mD_{<\msc}^{-2}.
\end{align*}
\begin{comment}
\begin{align*}
(\lambda\mI+\kmtx_{\leq\msc})^{-1}\shmtx_{<\msc} & =\rsmtx_{\msc}\shmtx_{<\msc}[\mI-(\mD_{<\msc}^{-2}+\frac{1}{n}\shmtx_{<\msc}^{\T}\rsmtx_{\msc}\shmtx_{<\msc})^{-1}\cdot(\frac{1}{n}\shmtx_{<\msc}^{\T}\rsmtx_{\msc}\shmtx_{<\msc})]\\
& =\rsmtx_{\msc}\shmtx_{<\msc}(\mD_{<\msc}^{-2}+\frac{1}{n}\shmtx_{<\msc}^{\T}\rsmtx_{\msc}\shmtx_{<\msc})^{-1}\cdot\mD_{<\msc}^{-2}
\end{align*}
\end{comment}
\begin{comment}
Therefore,
\begin{align*}
\frac{1}{n}|\obser^{\T}(\rsmtx-\rsmtx_{\leq\msc})\obser|= & \frac{1}{n}|\obser^{\T}\rsmtx_{\leq\msc}(\mK_{>\msc}-\sum_{k>\msc}\kcoeff k\mI)\rsmtx\obser|\\
\leq & \frac{\|\vy\|^{2}}{n\lambda^{2}}\|\mK_{>\msc}-\sum_{k>\msc}\kcoeff k\mI\|\\
\stodom & \frac{1}{d^{\tau}},
\end{align*}
where in the last step, we use (\ref{eq:higher_order_norm_bd_Mpapaer_3})
and Lemma \ref{lem:y_norm_bd}.
\end{comment}
\end{proof}
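The identity (\ref{eq:muted_lowerdegree}) can also be checked numerically. The sketch below (illustrative only; random matrices of arbitrary placeholder sizes play the roles of $\shmtx_{<\msc}$, $\mD_{<\msc}$ and $\mM$) compares the two sides directly.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 50, 8, 0.3                        # arbitrary placeholder sizes
Y = rng.standard_normal((n, p))               # plays the role of Y_{<K}
D = np.diag(rng.uniform(0.5, 2.0, size=p))    # plays the role of D_{<K}
A = rng.standard_normal((n, n)); M = A @ A.T  # an arbitrary PSD matrix M
R_M = np.linalg.inv(lam * np.eye(n) + M)

K_low = Y @ D @ D @ Y.T / n                   # K_{<K} = (1/n) Y D^2 Y^T
lhs = np.linalg.inv(lam * np.eye(n) + K_low + M) @ Y
D2inv = np.linalg.inv(D @ D)
rhs = R_M @ Y @ np.linalg.inv(D2inv + Y.T @ R_M @ Y / n) @ D2inv
print(np.max(np.abs(lhs - rhs)))              # ~1e-12: both sides agree
\end{verbatim}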
\begin{lem}
\label{lem:training_err_approx_step2}It holds that
\begin{equation}
\frac{1}{n}|\obser^{\T}\widetilde{\rsmtx}_{\leq\msc}\obser-(\tevec_{\geq\msc}^{\T}+\vz^{\T})\widetilde{\rsmtx}_{\leq\msc}(\tevec_{\geq\msc}+\vz)|\stodom\frac{1}{d}.\label{eq:training_err_approx_step2}
\end{equation}
where $\widetilde{\rsmtx}_{\leq\msc}=(\tilde{\lambda}\mI+\mK_{\leq\msc})^{-1}$
and $\tilde{\lambda}:=\lambda+\sum_{k>\msc}\kcoeff k$.
\end{lem}
\begin{proof}
We first show
\begin{equation}
\Big\|\frac{1}{\sqrt{n}}\widetilde{\rsmtx}_{\leq\msc}\tevec_{<\msc}\Big\|\stodom\frac{1}{d}.\label{eq:training_err_approx_step2_1}
\end{equation}
By (\ref{eq:muted_lowerdegree}), we have
\begin{align}
\frac{1}{\sqrt{n}}\widetilde{\rsmtx}_{\leq\msc}\tevec_{<\msc} & =\frac{1}{\sqrt{n}}\widetilde{\rsmtx}_{\leq\msc}\shmtx_{<\msc}\widetilde{\shmtx}_{<\msc}(\sgl)\nonumber \\
& =\frac{1}{\sqrt{n}}\widetilde{\rsmtx}_{\msc}\shmtx_{<\msc}(\mD_{<\msc}^{-2}+\frac{1}{n}\shmtx_{<\msc}^{\T}\widetilde{\rsmtx}_{\msc}\shmtx_{<\msc})^{-1}\cdot\mD_{<\msc}^{-2}\widetilde{\shmtx}_{<\msc}(\sgl),\label{eq:ht_R_yleK}
\end{align}
where $\widetilde{\rsmtx}_{\msc}=(\tilde{\lambda}\mI+\mK_{\msc})^{-1}$ and $\widetilde{\shmtx}_{<\msc}(\sgl)=[\widetilde{\shmtx}_{0}(\sgl)^{\T},\cdots,\widetilde{\shmtx}_{\msc-1}(\sgl)^{\T}]^{\T}$.
Substituting (\ref{eq:spectral_RK_YleK_bd}) and (\ref{eq:spectral_Dk-2_bd})
in Lemma \ref{lem:spectral_norm_1} into (\ref{eq:ht_R_yleK}), we
have
\begin{align*}
\|\frac{1}{\sqrt{n}}\widetilde{\rsmtx}_{\leq\msc}\tevec_{<\msc}\|\leq & \|\frac{1}{\sqrt{n}}\widetilde{\rsmtx}_{\msc}\shmtx_{<\msc}\|\cdot\|(\mD_{<\msc}^{-2}+\frac{1}{n}\shmtx_{<\msc}^{\T}\widetilde{\rsmtx}_{\msc}\shmtx_{<\msc})^{-1}\|\cdot\|\mD_{<\msc}^{-2}\|\cdot\|\widetilde{\shmtx}_{<\msc}(\sgl)\|\\
\lesssim & \frac{1}{d}
\end{align*}
which is (\ref{eq:training_err_approx_step2_1}). Since
\begin{align*}
& \frac{1}{n}|\obser^{\T}\widetilde{\rsmtx}_{\leq\msc}\obser-(\tevec_{\geq\msc}^{\T}+\vz^{\T})\widetilde{\rsmtx}_{\leq\msc}(\tevec_{\geq\msc}+\vz)|\\
\leq & 2|\frac{1}{n}\obser^{\T}\widetilde{\rsmtx}_{\leq\msc}\tevec_{<\msc}|+|\frac{1}{n}\tevec_{<\msc}^{\T}\widetilde{\rsmtx}_{\leq\msc}\tevec_{<\msc}|
\end{align*}
and $\frac{1}{\sqrt{n}}\|\obser\|\lesssim1$, $\frac{1}{\sqrt{n}}\|\tevec_{<\msc}\|\lesssim1$
by Lemma \ref{lem:y_norm_bd}, we obtain (\ref{eq:training_err_approx_step2})
after applying (\ref{eq:training_err_approx_step2_1}) together with the Cauchy-Schwarz
inequality.
\end{proof}
\begin{lem}
\label{lem:training_err_approx_step3}It holds that
\begin{equation}
\frac{1}{n}|(\tevec_{\geq\msc}^{\T}+\vz^{\T})\cdot(\widetilde{\rsmtx}_{\msc}-\widetilde{\rsmtx}_{\leq\msc})\cdot(\tevec_{\geq\msc}+\vz)|\stodom\frac{1}{\sqrt{d}}\label{eq:training_err_approx_step3}
\end{equation}
where $\widetilde{\rsmtx}_{\msc}=(\tilde{\lambda}\mI+\mK_{\msc})^{-1}$
and $\widetilde{\rsmtx}_{\leq\msc}=(\tilde{\lambda}\mI+\mK_{\leq\msc})^{-1}$,
with $\tilde{\lambda}:=\lambda+\sum_{k>\msc}\kcoeff k$.
\end{lem}
\begin{proof}
By the matrix inversion formula,
\begin{align*}
\widetilde{\rsmtx}_{\msc}-\widetilde{\rsmtx}_{\leq\msc}= & \frac{1}{n}\widetilde{\rsmtx}_{\msc}\shmtx_{<\msc}(\mD_{<\msc}^{-2}+\frac{1}{n}\shmtx_{<\msc}^{\T}\widetilde{\rsmtx}_{\msc}\shmtx_{<\msc})^{-1}\shmtx_{<\msc}^{\T}\widetilde{\rsmtx}_{\msc},
\end{align*}
so
\begin{align}
& \frac{1}{n}(\tevec_{\geq\msc}^{\T}+\vz^{\T})\cdot(\widetilde{\rsmtx}_{\msc}-\widetilde{\rsmtx}_{\leq\msc})\cdot(\tevec_{\geq\msc}+\vz)\nonumber \\
= & \frac{1}{n^{2}}(\tevec_{\geq\msc}^{\T}+\vz^{\T})\widetilde{\rsmtx}_{\msc}\shmtx_{<\msc}(\mD_{<\msc}^{-2}+\frac{1}{n}\shmtx_{<\msc}^{\T}\widetilde{\rsmtx}_{\msc}\shmtx_{<\msc})^{-1}\shmtx_{<\msc}^{\T}\widetilde{\rsmtx}_{\msc}(\tevec_{\geq\msc}+\vz).\label{eq:training_err_approx_step3_1}
\end{align}
It remains to control $\|\frac{1}{n}\shmtx_{<\msc}^{\T}\widetilde{\rsmtx}_{\msc}\tevec_{\geq\msc}\|,$
$\|\frac{1}{n}\shmtx_{<\msc}^{\T}\widetilde{\rsmtx}_{\msc}\vz\|$
and $\|(\mD_{<\msc}^{-2}+\frac{1}{n}\shmtx_{<\msc}^{\T}\widetilde{\rsmtx}_{\msc}\shmtx_{<\msc})^{-1}\|$
in (\ref{eq:training_err_approx_step3_1}). First, by Lemma \ref{lem:spectral_norm_1}
we have $\|(\mD_{<\msc}^{-2}+\frac{1}{n}\shmtx_{<\msc}^{\T}\widetilde{\rsmtx}_{\msc}\shmtx_{<\msc})^{-1}\|\lesssim1$.
By the independence between $\frac{1}{n}\shmtx_{<\msc}^{\T}\widetilde{\rsmtx}_{\msc}$
and $\noise$, we have $\E_{\noise}\|\frac{1}{n}\shmtx_{<\msc}^{\T}\widetilde{\rsmtx}_{\msc}\vz\|^{2}=\frac{1}{n^{2}}\tr(\shmtx_{<\msc}^{\T}\widetilde{\rsmtx}_{\msc}^{2}\shmtx_{<\msc})$.
Meanwhile, using Lemma \ref{lem:spectral_norm_1}, we can get $\frac{1}{n^{2}}\tr(\shmtx_{<\msc}^{\T}\widetilde{\rsmtx}_{\msc}^{2}\shmtx_{<\msc})\stodom\frac{\usdim{<\msc}}{n}\lesssim\frac{1}{d}.$
Hence, $\|\frac{1}{n}\shmtx_{<\msc}^{\T}\widetilde{\rsmtx}_{\msc}\vz\|\stodom\frac{1}{\sqrt{d}}$.
Finally, we show $\|\frac{1}{n}\shmtx_{<\msc}^{\T}\widetilde{\rsmtx}_{\msc}\tevec_{\geq\msc}\|\stodom d^{-1/4}$.
Recall that
\[
\tevec_{\geq\msc}=\sum_{k\geq\msc}\frac{\tcoeff k}{\sqrt{\usdim k}}\shmtx_{k}\shmtx_{k}(\vxi)
\]
and by rotational invariance, we can assume that $\vxi\sim\spmeasure{d-1}$. Therefore, conditioning on $\inputmtx$ and taking expectation over $\sgl\sim\spmeasure{d-1}$,
we have
\begin{align}
\E_{\sgl}\|\frac{1}{n}\shmtx_{<\msc}^{\T}\widetilde{\rsmtx}_{\msc}\tevec_{\geq\msc}\|^{2} & =\frac{1}{n^{2}}\tr\Big[(\widetilde{\rsmtx}_{\msc}\shmtx_{<\msc}\shmtx_{<\msc}^{\T}\widetilde{\rsmtx}_{\msc})\cdot\Big(\sum_{k\geq\msc}\frac{\tcoeff k^{2}}{\usdim k}\shmtx_{k}\shmtx_{k}^{\T}\Big)\Big]\nonumber \\
& \leq\frac{1}{n^{2}}\|\widetilde{\rsmtx}_{\msc}\shmtx_{<\msc}\shmtx_{<\msc}^{\T}\widetilde{\rsmtx}_{\msc}\|_{F}\cdot\|\sum_{k\geq\msc}\frac{\tcoeff k^{2}}{\usdim k}\shmtx_{k}\shmtx_{k}^{\T}\|_{F}\nonumber \\
& =\frac{1}{n^{2}}\sqrt{\tr(\shmtx_{<\msc}^{\T}\widetilde{\rsmtx}_{\msc}^{2}\shmtx_{<\msc})^{2}}\cdot\|\sum_{k\geq\msc}\frac{\tcoeff k^{2}}{\usdim k}\shmtx_{k}\shmtx_{k}^{\T}\|_{F}\nonumber \\
& \leq\frac{1}{n^{2}}\sqrt{\usdim{<K}}\|\widetilde{\rsmtx}_{\msc}\shmtx_{<\msc}\|^{2}\cdot\|\sum_{k\geq\msc}\frac{\tcoeff k^{2}}{\usdim k}\shmtx_{k}\shmtx_{k}^{\T}\|_{F}\nonumber \\
& \leq\frac{1}{n\lambda^{2}}\sqrt{\usdim{<K}}\|\frac{1}{n}\shmtx_{<\msc}\shmtx_{<\msc}^{\T}\|\cdot\|\sum_{k\geq\msc}\frac{\tcoeff k^{2}}{\usdim k}\shmtx_{k}\shmtx_{k}^{\T}\|_{F},\label{eq:E_xi_YleK_RK_ygeK}
\end{align}
where $\usdim{<K}:=\sum_{k=0}^{\msc-1}\usdim k$. The right-hand side
of (\ref{eq:E_xi_YleK_RK_ygeK}) can be controlled as follows. From
(\ref{eq:lem:spectral_norm_1_ubd}), we can get
\[
\P(\|\frac{1}{n}\shmtx_{<\msc}\shmtx_{<\msc}^{\T}\|\geq2)\leq2\usdim{<\msc}\exp\big(-\tfrac{\delta_{<\msc}}{8}\big),
\]
where $\delta_{<\msc}=\frac{n}{\usdim{<K}}$. Also $\|\frac{1}{n}\shmtx_{<\msc}\shmtx_{<\msc}^{\T}\|\leq\tr(\frac{1}{n}\shmtx_{<\msc}\shmtx_{<\msc}^{\T})=\usdim{<\msc}$.
Therefore,
\begin{align}
\E\|\frac{1}{n}\shmtx_{<\msc}\shmtx_{<\msc}^{\T}\|^{2} & =\E\|\frac{1}{n}\shmtx_{<\msc}\shmtx_{<\msc}^{\T}\|^{2}\indicatorfn_{\|\frac{1}{n}\shmtx_{<\msc}\shmtx_{<\msc}^{\T}\|\geq2}+\E\|\frac{1}{n}\shmtx_{<\msc}\shmtx_{<\msc}^{\T}\|^{2}\indicatorfn_{\|\frac{1}{n}\shmtx_{<\msc}\shmtx_{<\msc}^{\T}\|<2}\nonumber \\
& \leq\usdim{<\msc}^{2}\exp\big(-\tfrac{\delta_{<\msc}}{8}\big)+4\nonumber \\
& \lesssim1,\label{eq:E_xi_YleK_RK_ygeK_1}
\end{align}
where in the second step, we use the deterministic bound $\|\frac{1}{n}\shmtx_{<\msc}\shmtx_{<\msc}^{\T}\|\leq\frac{1}{n}\tr(\shmtx_{<\msc}\shmtx_{<\msc}^{\T})=\usdim{<\msc}$
and in the last step we use the fact that $\usdim{<\msc}\lesssim d^{\msc-1}$
and $\delta_{<\msc}\gtrsim d$. On the other hand, $\|\sum_{k\geq\msc}\frac{\tcoeff k^{2}}{\usdim k}\shmtx_{k}\shmtx_{k}^{\T}\|_{F}$
can be controlled as:
\begin{align}
\E\|\sum_{k\geq\msc}\frac{\tcoeff k^{2}}{\usdim k}\shmtx_{k}\shmtx_{k}^{\T}\|_{F}^{2} & =\sum_{k,l\geq\msc}\frac{\tcoeff k^{2}\tcoeff{\ell}^{2}}{\usdim k\usdim{\ell}}\tr\E[\shmtx_{\ell}^{\T}\shmtx_{k}\shmtx_{k}^{\T}\shmtx_{\ell}]\nonumber \\
& =\sum_{k,l\geq\msc}\frac{\tcoeff k^{2}\tcoeff{\ell}^{2}}{\usdim k\usdim{\ell}}[\usdim kn+n(n-1)\delta_{k\ell}]\usdim{\ell}\nonumber \\
 & =n\sum_{k,\ell\geq\msc}\tcoeff k^{2}\tcoeff{\ell}^{2}+n(n-1)\sum_{k\geq\msc}\frac{\tcoeff k^{4}}{\usdim k}\nonumber \\
& \leq C_{\msc}n\cdot(\sum_{k\geq\msc}\tcoeff k^{2})^{2}\nonumber \\
& \lesssim n,\label{eq:E_xi_YleK_RK_ygeK_2}
\end{align}
where $C_{\msc}$ is a constant that only depends on $\msc$; in the
second step we use Lemma 12 of \cite{ghorbani2021linearized}, and
the second-to-last step follows from $n\lesssim\usdim k$ for $k\geq\msc$.
Combining (\ref{eq:E_xi_YleK_RK_ygeK}), (\ref{eq:E_xi_YleK_RK_ygeK_1})
and (\ref{eq:E_xi_YleK_RK_ygeK_2}), we can get
\begin{align*}
\E\|\frac{1}{n}\shmtx_{<\msc}^{\T}\widetilde{\rsmtx}_{\msc}\tevec_{\geq\msc}\|^{2} & =\E_{\inputmtx}\E_{\sgl}\|\frac{1}{n}\shmtx_{<\msc}^{\T}\widetilde{\rsmtx}_{\msc}\tevec_{\geq\msc}\|^{2}\\
& \leq\frac{1}{n\lambda^{2}}\sqrt{\usdim{<K}\E\|\frac{1}{n}\shmtx_{<\msc}\shmtx_{<\msc}^{\T}\|^{2}\cdot\E\|\sum_{k\geq\msc}\frac{\tcoeff k^{2}}{\usdim k}\shmtx_{k}\shmtx_{k}^{\T}\|_{F}^{2}}\\
& \lesssim\sqrt{\frac{\usdim{<K}}{n}}\\
& \lesssim d^{-1/2}.
\end{align*}
Then by Chebyshev's inequality, we get $\|\frac{1}{n}\shmtx_{<\msc}^{\T}\widetilde{\rsmtx}_{\msc}\tevec_{\geq\msc}\|\stodom d^{-1/4}.$
Substituting $\|\frac{1}{n}\shmtx_{<\msc}^{\T}\widetilde{\rsmtx}_{\msc}\tevec_{\geq\msc}\|\stodom d^{-1/4}$,
$\|\frac{1}{n}\shmtx_{<\msc}^{\T}\widetilde{\rsmtx}_{\msc}\vz\|\stodom d^{-1/2}$
and
\[
\|(\mD_{<\msc}^{-2}+\frac{1}{n}\shmtx_{<\msc}^{\T}\widetilde{\rsmtx}_{\msc}\shmtx_{<\msc})^{-1}\|\stodom1
\]
back into (\ref{eq:training_err_approx_step3_1}), we arrive at the desired
result.
\end{proof}
\section{Concentration Results}
\section{\label{sec:Spherical-Harmonics-and}Spherical Harmonics and Ultraspherical
Polynomials}
In this section, we give a brief overview of some properties of
spherical harmonics and ultraspherical polynomials that are frequently
used in our analysis. A detailed treatment of these topics can be found
in \cite{dai2013approximation}.
Spherical harmonics are homogeneous harmonic polynomials restricted
to $\dsphere d$. In other words, a polynomial $P(\vx)$, $\vx\in\dsphere d$,
is a degree-$k$ spherical harmonic if and only if (1) $P(t\vx)=t^{k}P(\vx)$
for any $t\in\R$, and (2) $\Delta P=0$, where $\Delta$ is the Laplace
operator. Let $\mathcal{H}_{k,d}$ be the space of all degree-$k$
spherical harmonics in $d$ dimensions. Then $\dim(\mathcal{H}_{k,d})=\usdim k$,
where
\begin{equation}
\usdim k=\begin{cases}
1 & k=0\\
d & k=1\\
\frac{d+2k-2}{k}{d+k-3 \choose k-1} & k\geq2
\end{cases}\label{eq:usdim_def}
\end{equation}
From (\ref{eq:usdim_def}), we see that $|\usdim k/{d \choose k}-1|\leq\frac{C_{k}}{d}$,
where $C_{k}>0$ is a constant that only depends on $k$; \emph{i.e.},
$\usdim k$ equals the binomial coefficient ${d \choose k}$ up
to a multiplicative $1+\mathcal{O}(\frac{1}{d})$ correction. For each $k\geq0$, we fix an orthonormal basis of $\mathcal{H}_{k,d}$, denoted
by $\{Y_{ki}(\vx)\}_{i=1}^{\usdim k}$; it satisfies
\begin{equation}
\int_{\dsphere d}Y_{ki}(\vx)Y_{kj}(\vx)\spmeasure{d-1}(d\vx)=\indicatorfn_{i=j}.\label{eq:orthogonal_Y}
\end{equation}
where $\spmeasure{d-1}$ is the uniform distribution over $\dsphere d$.
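As a quick illustration of the dimension formula (\ref{eq:usdim_def}) and of the approximation $\usdim k={d \choose k}[1+\mathcal{O}(1/d)]$ mentioned above, the following short sketch (illustrative only) evaluates both quantities for a few values of $d$ and $k$.
\begin{verbatim}
from math import comb

def usdim(d, k):
    # dimension B_{d,k} of the space of degree-k spherical harmonics
    if k == 0:
        return 1
    if k == 1:
        return d
    return (d + 2 * k - 2) * comb(d + k - 3, k - 1) // k

for d in (10, 100, 1000):
    for k in (2, 3, 4):
        print(d, k, usdim(d, k), usdim(d, k) / comb(d, k))   # ratio = 1 + O(1/d)
\end{verbatim}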
Ultraspherical polynomials $\{\usp k\}_{k=0}^{\infty}$ are orthonormal polynomials on $L^{2}([-\sqrt{d},\sqrt{d}],\spmeasure{d-1,1})$,
\emph{i.e.},
\[
\int_{-\sqrt{d}}^{\sqrt{d}}\usp k(x)\usp{\ell}(x)\spmeasure{d-1,1}(dx)=\indicatorfn_{k=\ell}
\]
where $\spmeasure{d-1,1}$ is the distribution of $\vx^{\T}\ve_{1}$,
with $\vx\sim\spmeasure{d-1}$; it has the explicit form
$\spmeasure{d-1,1}(dx)=\frac{\omega_{d-2}}{\sqrt{d}\omega_{d-1}}(1-\frac{x^{2}}{d})^{\frac{d-3}{2}}dx$,
where $\omega_{d-1}=\frac{2\pi^{d/2}}{\Gamma(d/2)}$ is the surface
area of $\mathcal{S}^{d-1}$. The moments of the measure $\spmeasure{d-1,1}$
are given by
\begin{equation}
\E_{\spmeasure{d-1,1}}X^{m}=\begin{cases}
0 & m=2k+1\\
\frac{(2k-1)!!}{\prod_{0\leq i<k}(1+2i/d)} & m=2k
\end{cases}\label{eq:moments_of_usp}
\end{equation}
Based on (\ref{eq:moments_of_usp}), we can explicitly write out the
first three $\usp k(x)$ via the Gram-Schmidt procedure:
\begin{align*}
\usp 0(x) & =1, & \usp 1(x) & =x, & \usp 2(x) & =\frac{1}{\sqrt{2}}\sqrt{\frac{d+2}{d-1}}(x^{2}-1).
\end{align*}
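The Gram-Schmidt computation above can be reproduced symbolically. The following sketch (illustrative only) carries it out using the moment formula (\ref{eq:moments_of_usp}) and recovers the displayed $\usp 0$, $\usp 1$ and $\usp 2$.
\begin{verbatim}
import sympy as sp

d, x = sp.symbols('d x', positive=True)

def moment(m):
    # E X^m under tau_{d-1,1}: zero for odd m, (2k-1)!!/prod_{0<=i<k}(1+2i/d) for m = 2k
    if m % 2 == 1:
        return sp.Integer(0)
    k = m // 2
    den = sp.Integer(1)
    for i in range(k):
        den *= 1 + sp.Integer(2 * i) / d
    return sp.factorial2(2 * k - 1) / den

def inner(p, q):
    # <p, q> in L^2(tau_{d-1,1}), computed monomial by monomial
    poly = sp.Poly(sp.expand(p * q), x)
    return sp.simplify(sum(c * moment(m[0]) for m, c in poly.as_dict().items()))

basis, q = [sp.Integer(1), x, x**2], []
for p in basis:
    for u in q:
        p = sp.expand(p - inner(p, u) * u)
    q.append(sp.simplify(p / sp.sqrt(inner(p, p))))

print(q)   # q[2] is algebraically equal to (1/sqrt(2))*sqrt((d+2)/(d-1))*(x**2 - 1)
\end{verbatim}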
In particular, for any $k\geq0$, $\usp k(x)$ and $\{Y_{ki}(\vx)\}_{i=1}^{\usdim k}$
have the following correspondence: for any $\vx,\vx'\in\dsphere d$,
\begin{equation}
\usp k(\invec^{\T}\invec'/\sqrt{d})=\frac{1}{\sqrt{\usdim k}}\sum_{i=1}^{\usdim k}Y_{ki}(\vx)Y_{ki}(\vx'),\label{eq:q_Y_relation}
\end{equation}
which is also known as the addition theorem. As a result, the expansion
(\ref{eq:kernel_expansion}) can also be written as:%
\begin{comment}
In (\ref{eq:kernel_expansion}), the convergence of the infinite sum
holds in $L^{2}$ sense.
\end{comment}
\begin{equation}
\kernalfn_{d}(\invec^{\T}\invec'/\sqrt{d})=\sum_{k=0}^{\infty}\sqrt{\usdim k}\tkcoeff{k,d}\usp k(\invec^{\T}\invec'/\sqrt{d}).\label{eq:fd_under_qk}
\end{equation}
Also we can deduce that for any $\vx\in\dsphere d$,
\begin{equation}
\frac{1}{\sqrt{\usdim k}}\sum_{i=1}^{\usdim k}Y_{ki}(\vx)^{2}=\usp k(\sqrt{d})=\sqrt{\usdim k}.\label{eq:diagonal_Y_constant}
\end{equation}
Indeed, for any $\vx\in\dsphere d$,
\begin{align*}
\usp k(\sqrt{d}) & =\usp k(\|\vx\|^{2}/\sqrt{d})\\
& =\int_{\dsphere d}\usp k(\|\vx\|^{2}/\sqrt{d})\spmeasure{d-1}(d\vx)\\
& \teq{\text{(a)}}\frac{1}{\sqrt{\usdim k}}\sum_{i=1}^{\usdim k}\int_{\dsphere d}Y_{ki}(\vx)^{2}\spmeasure{d-1}(d\vx)\\
& \teq{\text{(b)}}\sqrt{\usdim k},
\end{align*}
where (a) follows from (\ref{eq:q_Y_relation}) and (b) follows from
(\ref{eq:orthogonal_Y}).
The ultraspherical polynomials are closely related to the Hermite polynomials $\{\hermitefn_k\}_{k=0}^{\infty}$, which are the orthonormal polynomials on $L^{2}(\R, \tau_G)$, where $\tau_G$ denotes the standard Gaussian measure. From (\ref{eq:moments_of_usp}), we can see that for any fixed $m$, $\lim_{d\to\infty}\E_{\spmeasure{d-1,1}}X^{m}=\E_{\tau_G}X^{m}$.
By Theorem 30.1 in \citep{billingsley2008probability}, $\spmeasure{d-1,1}$ converges weakly to $\tau_G$. Therefore, as $d\to\infty$, we have $\usp{k,d}(x)\to \hermitefn_k(x)$ pointwise on $\R$.
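Using the explicit $\usp 2$ displayed earlier, the following short symbolic sketch (illustrative only) checks the identity $\usp 2(\sqrt{d})=\sqrt{\usdim 2}$ from (\ref{eq:diagonal_Y_constant}) as well as the pointwise limit $\usp 2\to\hermitefn_2$ as $d\to\infty$.
\begin{verbatim}
import sympy as sp

d, x = sp.symbols('d x', positive=True)
q2 = sp.sqrt((d + 2) / (2 * (d - 1))) * (x**2 - 1)   # the explicit q_2 from above

# (i) q_2(sqrt(d))^2 = B_{d,2} = (d+2)(d-1)/2
print(sp.simplify(q2.subs(x, sp.sqrt(d))**2 - (d + 2) * (d - 1) / 2))   # 0

# (ii) pointwise limit as d -> oo: q_2 -> (x^2 - 1)/sqrt(2), the orthonormal degree-2 Hermite polynomial
print(sp.limit(q2, d, sp.oo))
\end{verbatim}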
\section{\label{sec:kcoeff_convergece}Convergence of Expansion Coefficients}
\begin{lem}
\label{fact:kcoeff_convergence}Suppose $\kernalfn_{d}(x)=\kernalfn(\frac{x}{\sqrt{d}})$,
where $\kernalfn(z)$ is defined on $[-1,1]$ satisfying $|\kernalfn(z)|\leq C_{f}$ and $\kernalfn(z)\in C^{\infty}[-\upsilon,\upsilon]$,
for some $C_{f}>0$ and $\upsilon\in(0,1]$. Then for any fixed $k\geq 0$, \[\kcoeff{k,d}\to\kcoeff{k}=\frac{\kernalfn^{(k)}(0)}{k!},
\]
as $d\to\infty$.
\end{lem}
\begin{proof}
First, we analyze the special case $\upsilon=1$, \emph{i.e.}, $\kernalfn(z)\in C^{\infty}[-1,1]$.
From (\ref{eq:fd_under_qk}), we have
\begin{equation}
\kcoeff{k,d}=\sqrt{\usdim k}\int_{-\sqrt{d}}^{\sqrt{d}}\kernalfn(\tfrac{x}{\sqrt{d}})\usp k(x)\spmeasure{d-1,1}(dx),\label{eq:mu_d_exact_1}
\end{equation}
where $\spmeasure{d-1,1}(dx)=\frac{\omega_{d-2}}{\sqrt{d}\omega_{d-1}}(1-x^{2}/d)^{\frac{d-3}{2}}dx$,
with $\omega_{d-1}=\frac{2\pi^{d/2}}{\Gamma(d/2)}$. We then use
Rodrigues' formula for $\usp k(x)$:
\[
\usp k(x)=\sqrt{\usdim k}C_{k,d}\sqrt{d}^{k}\Big(1-\frac{x^{2}}{d}\Big)^{\frac{3-d}{2}}\Big(\frac{d}{dx}\Big)^{k}\Big(1-\frac{x^{2}}{d}\Big)^{k+\frac{d-3}{2}}
\]
where $C_{k,d}=\Big(-\frac{1}{2}\Big)^{k}\frac{\Gamma(\frac{d-1}{2})}{\Gamma(k+\frac{d-1}{2})}$.
Then
\begin{align}
& \int_{-\sqrt{d}}^{\sqrt{d}}\kernalfn(\tfrac{x}{\sqrt{d}})\usp k(x)\spmeasure{d-1,1}(dx)\nonumber \\
= & \sqrt{\usdim k}C_{k,d}\sqrt{d}^{k}\int_{-\sqrt{d}}^{\sqrt{d}}\kernalfn(\tfrac{x}{\sqrt{d}})\cdot\Big(1-\frac{x^{2}}{d}\Big)^{\frac{3-d}{2}}\Big(\frac{d}{dx}\Big)^{k}\Big(1-\frac{x^{2}}{d}\Big)^{k+\frac{d-3}{2}}\cdot\frac{\omega_{d-2}}{\sqrt{d}\omega_{d-1}}\Big(1-\frac{x^{2}}{d}\Big)^{\frac{d-3}{2}}dx\nonumber \\
\teq{\text{(a)}} & \sqrt{\usdim k}C_{k,d}\frac{\omega_{d-2}}{\omega_{d-1}}\int_{-1}^{1}\kernalfn(z)\cdot\Big(\frac{d}{dz}\Big)^{k}(1-z^{2})^{k+\frac{d-3}{2}}dz\nonumber \\
\teq{\text{(b)}} & \sqrt{\usdim k}C_{k,d}(-1)^{k}\frac{\omega_{d-2}}{\omega_{d-1}}\int_{-1}^{1}(1-z^{2})^{k+\frac{d-3}{2}}\kernalfn^{(k)}(z)dz\nonumber \\
= & \frac{\sqrt{\usdim k}}{2^{k}\prod_{i=0}^{k-1}(\frac{d-1}{2}+i)}\int_{-1}^{1}\frac{\omega_{d-2}}{\omega_{d-1}}(1-z^{2})^{k+\frac{d-3}{2}}\kernalfn^{(k)}(z)dz\label{eq:mu_d_exact_2}
\end{align}
where in (a) we make the change of variable $\frac{x}{\sqrt{d}}\to z$
and in (b) we integrate by parts $k$ times. After combining
(\ref{eq:mu_d_exact_1}) and (\ref{eq:mu_d_exact_2}), we get
\[
\kcoeff{k,d}=\frac{\usdim k}{\prod_{i=0}^{k-1}(d-1+2i)}\int_{-1}^{1}\frac{\omega_{d-2}}{\omega_{d-1}}(1-z^{2})^{k+\frac{d-3}{2}}\kernalfn^{(k)}(z)dz.
\]
As $d\to\infty$, $\frac{\usdim k}{\prod_{i=0}^{k-1}(d-1+2i)}\to\frac{1}{k!}$.
Note that $\frac{\omega_{d-2}}{\omega_{d-1}}(1-z^{2})^{\frac{d-3}{2}}$
is the density of the distribution of $\frac{\vx^{\T}\ve_{1}}{\sqrt{d}}$, where
$\vx\sim\spmeasure{d-1}$, so
\begin{align}
\int_{-1}^{1}\frac{\omega_{d-2}}{\omega_{d-1}}(1-z^{2})^{k+\frac{d-3}{2}}\kernalfn^{(k)}(z)dz & =\E\Big[\kernalfn^{(k)}\Big(\frac{\vx^{\T}\ve_{1}}{\sqrt{d}}\Big)\cdot\Big(1-\frac{(\vx^{\T}\ve_{1})^{2}}{d}\Big)^{k}\Big]\nonumber \\
& \to\kernalfn^{(k)}(0)\label{eq:mu_d_converge_1}
\end{align}
where the last step follows from the dominated convergence theorem, due
to the assumption that $\kernalfn(z)\in C^{\infty}[-1,1]$. Therefore,
we get $\kcoeff{k,d}\to\frac{\kernalfn^{(k)}(0)}{k!}$.
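For a concrete illustration of this convergence, the sketch below (illustrative only; it takes $\kernalfn(z)=e^{z}$ and $k=2$, which are placeholder choices) evaluates (\ref{eq:mu_d_exact_1}) by numerical quadrature, using the explicit $\usp 2$ and the density of $\spmeasure{d-1,1}$, and observes that $\kcoeff{2,d}$ approaches $\kernalfn''(0)/2!=1/2$ as $d$ grows.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

f = np.exp   # placeholder choice of f; f''(0)/2! = 1/2

def mu_2d(d):
    B2 = (d + 2) * (d - 1) / 2                                       # B_{d,2}
    log_norm = gammaln(d / 2) - gammaln((d - 1) / 2) - 0.5 * np.log(np.pi * d)
    density = lambda x: np.exp(log_norm + 0.5 * (d - 3) * np.log1p(-x**2 / d))
    q2 = lambda x: np.sqrt((d + 2) / (2 * (d - 1))) * (x**2 - 1)
    integrand = lambda x: f(x / np.sqrt(d)) * q2(x) * density(x)
    val, _ = quad(integrand, -np.sqrt(d), np.sqrt(d))
    return np.sqrt(B2) * val

for d in (50, 200, 1000):
    print(d, mu_2d(d))   # approaches 0.5 as d grows
\end{verbatim}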
Then we analyze the general case: $\kernalfn(z)\in C^{\infty}[-\upsilon,\upsilon]$, for some
$\upsilon\in(0,1]$. To utilize the results for $\upsilon=1$, we
need the following truncation argument. For any $\veps\in(0,1]$ and
$k$, it holds that
\begin{align}
\sqrt{\usdim k}\int_{-\sqrt{d}}^{\sqrt{d}}\big|\kernalfn(\tfrac{x}{\sqrt{d}})\usp k(x)\indicatorfn_{|x|/\sqrt{d}\geq\veps}\big|\spmeasure{d-1,1}(dx)= & \sqrt{\usdim k}\E_{Z}\big|\kernalfn(Z)\usp k(\sqrt{d}Z)\indicatorfn_{|Z|\geq\veps}\big|\nonumber \\
\tleq{\text{(a)}} & \sqrt{\usdim k C_{f}^{2}\P(|Z|\geq\veps)}\nonumber \\
\lesssim & \sqrt{d^{k}(1-\veps^{2})^{\frac{d-3}{2}}}\label{eq:mu_d_bd_1}
\end{align}
where $Z$ is distributed as $\frac{\vx^{\T}\ve_{1}}{\sqrt{d}}$ with $\vx\sim\spmeasure{d-1}$,
and in (a) we use the Cauchy-Schwarz inequality, with $\E\usp k(\sqrt{d}Z)^{2}=1$
and $|\kernalfn(z)|\leq C_{f}$. Clearly, in (\ref{eq:mu_d_bd_1}),
$\sqrt{d^{k}(1-\veps^{2})^{\frac{d-3}{2}}}\to0$ as $d\to\infty$,
so this implies that for any $\veps\in(0,1]$ and $k$, as $d\to\infty$,
\begin{equation}
\sqrt{\usdim k}\int_{-\sqrt{d}}^{\sqrt{d}}\big|\kernalfn(\tfrac{x}{\sqrt{d}})-\hat{\kernalfn}(\tfrac{x}{\sqrt{d}})\big|\cdot\big|\usp k(x)\big|\cdot\spmeasure{d-1,1}(dx)\to0,\label{eq:mu_d_bd_2}
\end{equation}
where $\hat{\kernalfn}(z)=\kernalfn(z)\indicatorfn_{|z|\leq\veps}$.
From (\ref{eq:mu_d_bd_2}) we conclude that for any bounded function
$r(z)$ on $[-1,1]$, if for some $\veps\in(0,1]$, $r(z)=\kernalfn(z)$
on $[-\veps,\veps]$, then as $d\to\infty$
\begin{equation}
|\kcoeff{k,d}(r)-\kcoeff{k,d}|\to0,\label{eq:mu_d_converge_2}
\end{equation}
where $\kcoeff{k,d}(r):=\sqrt{\usdim k}\int_{-\sqrt{d}}^{\sqrt{d}}r(\tfrac{x}{\sqrt{d}})\usp k(x)\spmeasure{d-1,1}(dx)$.
In light of (\ref{eq:mu_d_converge_2}) and the result we have established
for smooth functions {[}cf. (\ref{eq:mu_d_converge_1}){]}, it remains
to choose a suitable smooth approximation of $\kernalfn(z)$ on $[-1,1]$
that agrees with $\kernalfn(z)$ in some neighborhood of 0. One such choice is:
\[
\tilde{\kernalfn}(z):=\int\kernalfn(z-t)\indicatorfn_{|z-t|\leq\veps}\cdot m_{\varsigma}(t)dt,
\]
where $\varsigma=\frac{1}{3}\min\{\veps,1-\veps\}$ and $m_{\varsigma}(t)=\frac{1}{\varsigma}m(\frac{t}{\varsigma})$,
with
\[
m(s)=\begin{cases}
ce^{-\frac{1}{1-s^{2}}} & |s|<1\\
0 & |s|\geq1
\end{cases}
\]
and $c$ is a normalizing constant such that $\int m(s)ds=1$. It
can be directly verified that $\tilde{\kernalfn}(z)\in C^{\infty}[-1,1]$
and $\tilde{\kernalfn}(z)=\kernalfn(z)$, when $|z|\leq\veps-\varsigma$.
Then applying (\ref{eq:mu_d_converge_2}) and (\ref{eq:mu_d_converge_1}),
we obtain that
\[
\kcoeff{k,d}\to\kcoeff{k,d}(\tilde{\kernalfn}):=\sqrt{\usdim k}\int_{-\sqrt{d}}^{\sqrt{d}}\tilde{\kernalfn}(\tfrac{x}{\sqrt{d}})\usp k(x)\spmeasure{d-1,1}(dx)\to\frac{\tilde{\kernalfn}^{(k)}(0)}{k!}=\frac{\kernalfn^{(k)}(0)}{k!}.
\]
\end{proof}
\begin{lem}
\label{lem:obser_truncation}For any $k\geq0$, as $d\to\infty$
\[
\tcoeff{k,d}\to\tcoeff k=\int\teacherfn(x)\hermitefn_{k}(x)\tau_G(dx)
\]
where $\hermitefn_{k}(x)$ is the degree-$k$ Hermite polynomial and $\tau_G$
denotes the standard Gaussian measure. Also, for any $\veps>0$, there
exist $L\in\mathbb{Z}^{+}$ and $C>0$ such that for any
large enough $d$,
\[
\E g_{>L,i}^{2}=\sum_{k=L+1}^{\infty}\tcoeff{k,d}^{2}<{\veps}
\]
and
\[
\P(\frac{1}{n}\|\tevec_{>L}\|^{2}>\veps)\leq\frac{C}{n\veps^{2}},
\]
where $g_{>L,i}$ is the $i$th coordinate of $\tevec_{>L}$, $i\in[n]$.
\end{lem}
\begin{proof}
From (\ref{eq:moments_of_usp}), we get for any fixed $j\in\mathbb{Z}^{+}$,
there exists $C_{j}>0$ such that for any $d\in\mathbb{Z}^{+}$,
\begin{equation}
|\E_{\spmeasure{d-1,1}}X^{j}-\E_{\tau_G}X^{j}|\leq\frac{C_{j}}{d}.\label{eq:kmoments_sp_gauss_diff}
\end{equation}
Also by Assumption
(A.4), $g(x)\leq C_{g}(1+|x|^{K_{g}})$, so there exists $C>0$ such
that
\begin{align*}
\lim_{R\to\infty}\sup_{d\in\mathbb{Z}^{+}}\E_{\spmeasure{d-1,1}}[g(X)^{2}\indicatorfn_{|X|\geq R}] & \leq\lim_{R\to\infty}\sup_{d\in\mathbb{Z}^{+}}\E_{\spmeasure{d-1,1}}C_{g}^{2}(1+|X|^{K_{g}})^{2}\indicatorfn_{|X|\geq R}\\
& \leq\lim_{R\to\infty}\sup_{d\in\mathbb{Z}^{+}}\sqrt{\E_{\spmeasure{d-1,1}}C_{g}^{4}(1+|X|^{K_{g}})^{4}\cdot\E_{\spmeasure{d-1,1}}\indicatorfn_{|X|\geq R}}\\
& \leq\lim_{R\to\infty}\sup_{d\in\mathbb{Z}^{+}}\sqrt{\big[\E_{\tau_G}C_{g}^{4}(1+|X|^{K_{g}})^{4}+\tfrac{C}{d}\big]\cdot\E_{\spmeasure{d-1,1}}\indicatorfn_{|X|\geq R}}\\
& =\lim_{R\to\infty}\sup_{d\geq R^{2}}\sqrt{\big[\E_{\tau_G}C_{g}^{4}(1+|X|^{K_{g}})^{4}+\tfrac{C}{d}\big]\cdot\E_{\spmeasure{d-1,1}}\indicatorfn_{|X|\geq R}}\\
& =0
\end{align*}
where in the last step, we use the fact that $\spmeasure{d-1,1}$
converges weakly to $\tau_G$, as $d\to\infty$.
By Lemma C.5 in \cite{cheng2013spectrum}, we have
\[
\int \teacherfn(x)^2\big|\spmeasure{d-1,1}(x)-\tau_G(x)\big|dx \to 0,
\]
where, with a slight abuse of notation, $\spmeasure{d-1,1}(x)$ and $\tau_G(x)$ denote the corresponding densities.
Then by Lemma C.1 in \cite{cheng2013spectrum},
we get for any fixed $k\geq0$, $\tcoeff{k,d}\to\tcoeff k$ and by Lemma C.2 in \cite{cheng2013spectrum}, for any $\veps>0$,
there exists a fixed $L\in\mathbb{Z}^{+}$ such that for any large
enough $d$ and $i\in[n]$, $\E g_{>L,i}^{2}=\sum_{k=L+1}^{\infty}\tcoeff{k,d}^{2}<\frac{\veps}{2}$.
Then
\begin{align*}
\P(\frac{1}{n}\|\tevec_{>L}\|^{2}>\veps) & \leq\P(\frac{1}{n}(\|\tevec_{>L}\|^{2}-\E\|\tevec_{>L}\|^{2})>\frac{\veps}{2})\\
& \leq\frac{4\var(g_{>L,i}^{2})}{n\veps^{2}}\\
& \leq C_{1}\frac{\E\teacherfn_{i}^{4}+\E\teacherfn_{\leq L,i}^{4}}{n\veps^{2}}\\
& \leq\frac{C_{2}}{n\veps^{2}},
\end{align*}
where $C_{1},C_{2}$ are two constants and in the last step, we use
$\E\teacherfn_{i}^{4},\E\teacherfn_{\leq L,i}^{4}<\infty$, which
follows from Assumption (A.4) and (\ref{eq:kmoments_sp_gauss_diff}).
\end{proof}
\section{Concentration of a Quadratic Form}
\begin{lem}
\label{lem:Fkl_sq_bound}For any integers $k,\ell\geq0$, let
\[
F_{k,\ell}(\vx)=\mY_{k,\symset}(\vx)^{\T}\mM\mY_{\ell,\symset}(\vx),
\]
where $\mY_{k,\symset}$ is defined in (\ref{eq:SH_symmetric}), $\vx\sim\spmeasure{d-1}$
and $\mM\in\R^{m\times p}$ is a deterministic matrix, with $m={d \choose k}$
and $p={d \choose \ell}$. It holds that
\begin{equation}
\E_{\vx}F_{k,\ell}(\vx)^{2}\leq C_{k,\ell}\min\{N_{k},N_{\ell}\}\|\mM\|_{F}^{2},\label{eq:Fkl_sq_bound}
\end{equation}
where $C_{k,\ell}>0$ is some constant that only depends on $k$ and $\ell$.
\end{lem}
\begin{proof}
Note that $\vx$ can be represented as $\vx=\frac{\vtheta}{\|\vtheta\|/\sqrt{d}}$,
where $\vtheta\sim\mathcal{N}(\boldsymbol{0},\mI_{d})$. By concentration
of the norm of a Gaussian vector [\emph{e.g.}, Theorem 3.1.1 in \cite{vershynin2018high}],
there exists $c>0$ such that for any $d\in\mathbb{Z}^{+}$, $\P(\|\vtheta\|/\sqrt{d}<1/2)\leq e^{-cd}$.
Hence, for $\vx=\frac{\vtheta}{\|\vtheta\|/\sqrt{d}}$ with $\vtheta\sim\mathcal{N}(\boldsymbol{0},\mI_{d})$,
we can get
\[
\mY_{k,\symset}(\vx)^{\T}\mM\mY_{\ell,\symset}(\vx)\lesssim\mY_{k,\symset}(\vtheta)^{\T}\mM\mY_{\ell,\symset}(\vtheta).
\]
Therefore, it suffices to prove (\ref{eq:Fkl_sq_bound}) for $\vx\sim\mathcal{N}(\boldsymbol{0},\mI_{d})$.
In the following, we set $\vx\sim\mathcal{N}(\boldsymbol{0},\mI_{d})$. We
will prove (\ref{eq:Fkl_sq_bound}) by induction. When $k=\ell=0$,
we trivially have $\E[Y_{0,\symset}(\vx)^{\T}M Y_{0,\symset}(\vx)]^{2}=M^{2}$.
Now suppose we have shown for any $\mM_{1}\in\R^{m_{1}\times p_{1}}$
and $\mM_{2}\in\R^{m_{2}\times p_{2}}$, with $m_{1}={d \choose k-1}$,
$p_{1}={d \choose \ell}$ and $m_{2}={d \choose k}$ and $p_{2}={d \choose \ell-1}$,
it holds that
\begin{align}
\E[F_{k-1,\ell}(\vx)]^{2} & \leq C_{k-1,\ell}\min\{N_{k-1},N_{\ell}\}\|\mM_{1}\|_{F}^{2},\label{eq:Fkl_sq_bound_start1}\\
\E[F_{k,\ell-1}(\vx)]^{2} & \leq C_{k,\ell-1}\min\{N_{k},N_{\ell-1}\}\|\mM_{2}\|_{F}^{2}.\label{eq:Fkl_sq_bound_start2}
\end{align}
Based on these two bounds, we are going to show for any $\mM\in\R^{m\times p}$,
with $m={d \choose k}$, $p={d \choose \ell}$,
\begin{equation}
\E[F_{k,\ell}(\vx)]^{2}\leq C_{k,\ell}\min\{N_{k},N_{\ell}\}\|\mM\|_{F}^{2}.\label{eq:Fkl_sq_bound_end}
\end{equation}
In particular, if $k=0$ (or $\ell=0$), we only use (\ref{eq:Fkl_sq_bound_start2})
[or (\ref{eq:Fkl_sq_bound_start1}), respectively] for the induction.
We will bound $|\E F_{k,\ell}(\vx)|$ and $\text{Var}\,F_{k,\ell}(\vx)$
separately. The expectation $\E F_{k,\ell}(\vx)$ is easy to deal
with: when $k\neq\ell$, $\E F_{k,\ell}(\vx)=0$; when $k=\ell$,
\begin{align*}
\E F_{k,k}(\vx) & =\E\text{Tr}[\mM\mY_{k,\symset}(\vx)\mY_{k,\symset}(\vx)^{\T}]\\
& =\upsilon_{k}^{2}\text{Tr}\mM
\end{align*}
where $\upsilon_{k}$ is the normalizing constant defined in (\ref{eq:SH_symmetric}).
Therefore,
\begin{align}
|\E F_{k,k}(\vx)|^{2} & \leq\upsilon_{k}^{4}\big(\sum_{u=1}^{m}|M_{uu}|\big)^{2}\nonumber \\
& \leq C_k N_{k}(\sum_{u=1}^{m}M_{uu}^{2})\nonumber \\
& \leq C_k N_{k}\|\mM\|_{F}^{2}\label{eq:abs_meanEF_bd_2}
\end{align}
where $C_k$ is a constant that only depends on $k$. Next, we compute the variance of $F_{k,\ell}(\vx)$. Taking the derivative
of $F_{k,\ell}(\vx)$ with respect to each $x_{i}$, we have
\[
\frac{\partial F_{k,\ell}(\vx)}{\partial x_{i}}=\frac{\upsilon_{k}}{\upsilon_{k-1}}\mY_{k-1,\symset}(\vx)^{\T}\text{\ensuremath{\mM}}_{i}^{\text{row}}\mY_{\ell,\symset}(\vx)+\frac{\upsilon_{\ell}}{\upsilon_{\ell-1}}\mY_{k,\symset}(\vx)^{\T}\text{\ensuremath{\mM}}_{i}^{\text{col}}\mY_{\ell-1,\symset}(\vx)
\]
where $\mM_{i}^{\text{row}}$ and $\mM_{i}^{\text{col}}$
are formed by concatenating a subset of the rows (respectively, columns) of $\mM$
with some zero rows (columns). As a result,
\begin{align*}
\|\nabla F_{k,\ell}(\vx)\|^{2}
&\leq2\Big(\frac{\upsilon_{k}}{\upsilon_{k-1}}\Big)^{2}\sum_{i=1}^{d}[\mY_{k-1,\symset}(\vx)^{\T}\text{\ensuremath{\mM}}_{i}^{\text{row}}\mY_{\ell,\symset}(\vx)]^{2}\\
&\hspace{6em}+2\Big(\frac{\upsilon_{\ell}}{\upsilon_{\ell-1}}\Big)^{2}\sum_{i=1}^{d}[\mY_{k,\symset}(\vx)^{\T}\text{\ensuremath{\mM}}_{i}^{\text{col}}\mY_{\ell-1,\symset}(\vx)]^{2}.
\end{align*}
Then by (\ref{eq:Fkl_sq_bound_start1}) and (\ref{eq:Fkl_sq_bound_start2}),
we have
\begin{align*}
\E\|\nabla F_{k,\ell}(\vx)\|^{2} & \leq C_{k,\ell}\Big[\min\{N_{k-1},N_{\ell}\}\sum_{i=1}^{d}\|\text{\ensuremath{\mM}}_{i}^{\text{row}}\|_{F}^{2}+\min\{N_{k},N_{\ell-1}\}\sum_{i=1}^{d}\|\text{\ensuremath{\mM}}_{i}^{\text{col}}\|_{F}^{2}\Big]\\
& \leq C_{k,\ell}\Big[k\min\{N_{k-1},N_{\ell}\}\|\mM\|_{F}^{2}+\ell\min\{N_{k},N_{\ell-1}\}\|\mM\|_{F}^{2}\Big]\\
& \leq C_{k,\ell} \min\{N_{k},N_{\ell}\}\|\mM\|_{F}^{2}.
\end{align*}
Using Gaussian Poincar\'{e} inequality, we can get
\begin{align}
\text{Var}F_{k,\ell}(\vx) & \leq\E\|\nabla F_{k,\ell}(\vx)\|^{2}\nonumber \\
& \leq C_{k,\ell}\min\{N_{k},N_{\ell}\}\|\mM\|_{F}^{2}.\label{eq:abs_varEF_bd_2}
\end{align}
Finally, combining (\ref{eq:abs_varEF_bd_2}) and (\ref{eq:abs_meanEF_bd_2}),
we get
\begin{align*}
\E[F_{k,\ell}(\vx)^{2}] & =[\E F_{k,\ell}(\vx)]^{2}+\text{Var}F_{k,\ell}(\vx)\\
& \leq C_{k,\ell} \big(\delta_{k\ell}N_{k}\|\mM\|_{F}^{2}+\min\{N_{k},N_{\ell}\}\|\mM\|_{F}^{2}\big)\\
& \leq2 C_{k,\ell}\min\{N_{k},N_{\ell}\}\|\mM\|_{F}^{2}
\end{align*}
which is (\ref{eq:Fkl_sq_bound_end}).
\end{proof}
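The Gaussian Poincar\'{e} inequality invoked in the proof above can be illustrated numerically. The following Python sketch (an illustration, not part of the proof) checks $\var F(\vx)\leq\E\|\nabla F(\vx)\|^{2}$ for a simple quadratic form of a standard Gaussian vector; the dimension and the matrix are arbitrary choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, n_mc = 50, 200000
M = rng.standard_normal((d, d)) / d
x = rng.standard_normal((n_mc, d))            # Monte Carlo samples of N(0, I_d)

F = np.einsum('ij,jk,ik->i', x, M, x)         # F(x) = x^T M x
grad = x @ (M + M.T)                          # gradient of F at each sample
print(F.var(), np.mean(np.sum(grad ** 2, axis=1)))
# the first number (Var F) is bounded by the second (E ||grad F||^2)
\end{verbatim}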
\section{Simulation}
\section{Spectral Norm Bounds}
The following is a concentration result for the spectral norm of kernel
matrix.
\begin{lem}
\label{lem:samplecov_spectral_norm}Let $\shmtx=[\vy_{1},\vy_{2},\ldots,\vy_{n}]^{\T}\in\R^{n\times N}$
be a matrix with $n$ independent rows $\{\vy_{i}\}_{i=1}^{n}$ satisfying
$\E\vy_{i}=\boldsymbol{0}$,
$\E\vy_{i}\vy_{i}^{\T}=\mI_{N}$, and
$\|\vy_{i}\|^{2}=N$.
For any $t>0$, we have
\begin{equation}
\P(\|\frac{1}{n}\shmtx\shmtx^{\T}\|>1+t)\leq2N\exp\big(-\tfrac{\delta}{8}\min\{t^{2},t\}\big)\label{eq:lem:spectral_norm_1_ubd}
\end{equation}
where $\delta=n/N$. %
\begin{comment}
In particular
\[
\|\frac{1}{n}\shmtx\shmtx^{\T}\|\stodom\begin{cases}
1 & \delta\gtrsim1\\
\delta^{-1} & \delta\lesssim1
\end{cases}
\]
\end{comment}
Also there exists $c>0$ such that for $\delta\geq(\log N)^{2}$
\begin{equation}
\P\big(\lambda_{\min}(\frac{1}{n}\shmtx^{\T}\shmtx)\leq1/2\big)\leq c\exp(-(\log N)^{2}/c).\label{eq:lem:spectral_norm_1_domilbd}
\end{equation}
\end{lem}
\begin{proof}
Since $\|\frac{1}{n}\shmtx\shmtx^{\T}\|=\|\frac{1}{n}\shmtx^{\T}\shmtx\|$,
it is equivalent to prove all the results for the sample covariance
matrix $\frac{1}{n}\shmtx^{\T}\shmtx$. Denote $\mX_{i}=\frac{1}{n}(\vy_{i}\vy_{i}^{\T}-\mI)$.
We have
\begin{align*}
\|\mX_{i}\| & \leq\frac{1}{n}(\|\vy_{i}\vy_{i}^{\T}\|+\|\mI\|)\\
& =\frac{N+1}{n}
\end{align*}
and
\begin{align*}
\|\sum_{i=1}^{n}\E\mX_{i}^{2}\| & =\frac{1}{n^{2}}\|\sum_{i=1}^{n}\E(\vy_{i}\vy_{i}^{\T}-\mI)^{2}\|\\
& =\frac{1}{n^{2}}\|n( N-1)\mI\|\\
& =\frac{N-1}{n}.
\end{align*}
By the matrix Bernstein inequality \cite[Theorem 5.4.1]{vershynin2018high},
for any $t>0$
\begin{align}
\P\Big(\|\frac{1}{n}\shmtx^{\T}\shmtx-\mI\|\geq t\Big) & =\P\Big(\|\sum_{i=1}^{n}\mX_{i}\|\geq t\Big)\nonumber \\
& \leq2N\exp\Big(-\tfrac{t^{2}/4}{n^{-1} N(1+t/3)}\Big)\nonumber \\
& \leq2N\exp\big(-\tfrac{\delta}{8}\min\{t^{2},t\}\big).\label{eq:lem:spectral_norm_1_origin}
\end{align}
Therefore, for any $t>0$,
\begin{align*}
\P\Big(\|\frac{1}{n}\shmtx^{\T}\shmtx\|\geq1+t\Big) & \leq\P\Big(\|\frac{1}{n}\shmtx^{\T}\shmtx-\mI\|\geq t\Big)\\
& \leq2N\exp\big(-\tfrac{\delta}{8}\min\{t^{2},t\}\big)
\end{align*}
which proves (\ref{eq:lem:spectral_norm_1_ubd}). %
\begin{comment}
Then (\ref{eq:lem:spectral_norm_1_domiubd}) follows directly from
(\ref{eq:lem:spectral_norm_1_ubd}) by letting
\[
t=\begin{cases}
(\log N)^{2} & \delta\gtrsim1\\
(\log N)^{2}\delta^{-1} & \delta\lesssim1
\end{cases}
\]
\end{comment}
Finally, since $\lambda_{\min}(\frac{1}{n}\shmtx^{\T}\shmtx)\geq1-\|\frac{1}{n}\shmtx^{\T}\shmtx-\mI\|$,
\begin{align*}
\P\big(\lambda_{\min}(\frac{1}{n}\shmtx^{\T}\shmtx)\leq1/2\big) & \leq\P\big(\|\frac{1}{n}\shmtx^{\T}\shmtx-\mI\|\geq1/2\big)\\
& \leq2N\exp\big(-\tfrac{\delta}{32}\big)
\end{align*}
where we use (\ref{eq:lem:spectral_norm_1_origin}). Therefore, when
$\delta\geq(\log N)^{2}$, $\P\big(\lambda_{\min}(\frac{1}{n}\shmtx^{\T}\shmtx)\leq1/2\big)\leq c\exp(-(\log N)^{2}/c)$,
for some $c>0$.
\end{proof}
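To illustrate the lemma, the following Python sketch (with illustrative values only) draws rows uniformly from the sphere of radius $\sqrt{N}$, which satisfies the three conditions $\E\vy_{i}=\boldsymbol{0}$, $\E\vy_{i}\vy_{i}^{\T}=\mI_{N}$ and $\|\vy_{i}\|^{2}=N$, and checks that the extreme eigenvalues of $\frac{1}{n}\shmtx^{\T}\shmtx$ concentrate around $1$ when $\delta=n/N$ is large.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, n = 50, 5000                                # delta = n / N = 100
G = rng.standard_normal((n, N))
Y = G * (np.sqrt(N) / np.linalg.norm(G, axis=1, keepdims=True))
# rows of Y are uniform on the sphere of radius sqrt(N):
# mean zero, identity covariance, squared norm exactly N
S = Y.T @ Y / n                                # N x N sample covariance
eig = np.linalg.eigvalsh(S)
print(eig.min(), eig.max())                    # both close to 1
\end{verbatim}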
\begin{lem}
\label{lem:training_err_approx_step1}There exists $\tau>0$ such
that
\begin{equation}
\|\rsmtx-\widetilde{\rsmtx}_{\leq\msc}\|\stodom\frac{1}{d^{\tau}}\label{eq:training_err_approx_step1}
\end{equation}
where $\widetilde{\rsmtx}_{\leq\msc}=(\tilde{\lambda}\mI+\mK_{\leq\msc})^{-1}$
and $\tilde{\lambda}:=\lambda+\sum_{k>\msc}\kcoeff k$.
\end{lem}
\begin{proof}
By (72) in \cite{ghorbani2021linearized}, for any fixed $k\in\mathbb{Z}^{+}$,
there exists $C>0$ such that for all large enough $d$,
\begin{equation}
\E\|\kmtx_{k}-\kcoeff k\mI\|\leq C\Big[p^{3/2}n^{1/2p}\sqrt{\frac{n}{d^{k}}}+(\frac{n}{d^{k}})^{1/p}\Big]\label{eq:higher_order_norm_bd_Mpapaer}
\end{equation}
where $p\in\mathbb{Z}^{+}$ needs to satisfy $2p\leq-\log\left(\frac{Cnp^{k+1}}{d^{k}}\right)$.
Here we choose $p=2\msc$. Since $n\asymp d^{\msc}$ by Assumption
(A.1), when $k>\msc$, the condition $2p\leq-\log\left(\frac{Cnp^{k+1}}{d^{k}}\right)$
is satisfied for all large $d$. Then substituting $p=2\msc$ into (\ref{eq:higher_order_norm_bd_Mpapaer}),
we get that there exists $C>0$ such that for any fixed $k>\msc$ and large
enough $d$,
\begin{equation}
\E\|\kmtx_{k}-\kcoeff k\mI\|\leq C\Big[d^{-\frac{1}{2}(k-\msc)+\frac{1}{4}}+d^{-\frac{k-\msc}{2\msc}}\Big].\label{eq:higher_order_norm_bd_Mpapaer_1}
\end{equation}
On the other hand, following the steps leading to (55) in \cite{ghorbani2021linearized},
we can get for any $L\geq2\msc+3$,
\begin{equation}
\E\|\kmtx_{\geq L}-\sum_{k\geq L}\kcoeff k\mI\|^{2}\lesssim\frac{1}{d}.\label{eq:higher_order_norm_bd_Mpapaer_2}
\end{equation}
Combining (\ref{eq:higher_order_norm_bd_Mpapaer_1}) and (\ref{eq:higher_order_norm_bd_Mpapaer_2}),
we can get there exists $\tau>0$, such that
\begin{equation}
\|\mK_{>\msc}-\sum_{k>\msc}\kcoeff k\mI\|\lesssim\frac{1}{d^{\tau}}.\label{eq:higher_order_norm_bd_Mpapaer_3}
\end{equation}
Therefore, combining (\ref{eq:higher_order_norm_bd_Mpapaer_3}) with
the fact that $\|\widetilde{\rsmtx}_{\leq\msc}\|,\|\rsmtx\|\leq\frac{1}{\lambda}$,
we have
\begin{align*}
\|\rsmtx-\widetilde{\rsmtx}_{\leq\msc}\| & =\|\widetilde{\rsmtx}_{\leq\msc}(\mK_{>\msc}-\sum_{k>\msc}\kcoeff k\mI)\rsmtx\|\\
& \stodom\frac{1}{d^{\tau}}.
\end{align*}
\end{proof}
\begin{lem}
\label{lem:spectral_norm_1}For any positive semi-definite matrix
$\mM\in\R^{n\times n}$ satisfying
\begin{equation}
1\lesssim\lambda_{\min}(\mM)\leq\lambda_{\max}(\mM)\stodom1,\label{eq:spectral_lbd_ubd}
\end{equation}
we have
\begin{equation}
\|\frac{1}{\sqrt{n}}\mM\shmtx_{<\msc}\|\lesssim1\label{eq:spectral_RK_YleK_bd}
\end{equation}
and
\begin{equation}
\|(\mD_{<\msc}^{-2}+\frac{1}{n}\shmtx_{<\msc}^{\T}\mM\shmtx_{<\msc})^{-1}\|\stodom1.\label{eq:spectral_Dk-2_bd}
\end{equation}
Here, $\lambda_{\min}(\cdot)$ and $\lambda_{\max}(\cdot)$ denote the
smallest and largest eigenvalues, and $$\mD_{<\msc} := \diag\{\mD_0,\mD_1,\cdots,\mD_{\msc-1}\}.$$ On the other hand, $(\lambda\mI+\kmtx_{\msc})^{-1}$
and $(\lambda\mI+\kmtx_{>\msc})^{-1}$ both satisfy (\ref{eq:spectral_lbd_ubd}),
for any $\lambda>0$.
\end{lem}
\begin{proof}
Denote $\rsmtx_{\msc}=(\lambda\mI+\kmtx_{\msc})^{-1}$ and $\rsmtx_{>\msc}=(\lambda\mI+\kmtx_{>\msc})^{-1}$.
By Lemma \ref{lem:samplecov_spectral_norm}, we have $\|\frac{1}{n}\shmtx_{\msc}\shmtx_{\msc}^{\T}\|\stodom1$ and $\|\frac{1}{n}\shmtx_{<\msc}\shmtx_{<\msc}^{\T}\|\stodom1$,
so
\begin{align*}
\|\frac{1}{\sqrt{n}}\mM\shmtx_{<\msc}\| & \leq\|\mM\|\cdot\|\frac{1}{\sqrt{n}}\shmtx_{<\msc}\|\\
& \lesssim1
\end{align*}
and we can also get $1\lesssim\lambda_{\min}(\rsmtx_{\msc})$. On
the other hand, by (\ref{eq:lem:spectral_norm_1_domilbd}) we also
have $1\lesssim\lambda_{\min}(\frac{1}{n}\shmtx_{<\msc}^{\T}\shmtx_{<\msc})$,
since $n/\usdim{<\msc}\gtrsim(\log\usdim{<\msc})^{2}$. As a result,
\begin{align*}
\|(\mD_{<\msc}^{-2}+\frac{1}{n}\shmtx_{<\msc}^{\T}\rsmtx_{\msc}\shmtx_{<\msc})^{-1}\| & \leq\|(\frac{1}{n}\shmtx_{<\msc}^{\T}\rsmtx_{\msc}\shmtx_{<\msc})^{-1}\|\\
& \leq[\lambda_{\min}(\rsmtx_{\msc})\cdot\lambda_{\min}(\frac{1}{n}\shmtx_{<\msc}^{\T}\shmtx_{<\msc})]^{-1}\\
& \stodom1,
\end{align*}
where the last step follows from $1\lesssim\lambda_{\min}(\rsmtx_{\msc})$
and $1\lesssim\lambda_{\min}(\frac{1}{n}\shmtx_{<\msc}^{\T}\shmtx_{<\msc})$.
Finally, we verify that both $\rsmtx_{\msc}$ and $\rsmtx_{>\msc}$
satisfy (\ref{eq:spectral_lbd_ubd}). Clearly, $\lambda_{\max}(\rsmtx_{\msc}),\lambda_{\max}(\rsmtx_{>\msc})\leq\frac{1}{\lambda}\lesssim1$.
On the other hand, since $\|\frac{1}{n}\shmtx_{\msc}\shmtx_{\msc}^{\T}\|\stodom1$
by Lemma \ref{lem:samplecov_spectral_norm} and $\delta_{\msc}\lesssim1$,
we get $1\lesssim\lambda_{\min}(\rsmtx_{\msc})$. For $\lambda_{\min}(\rsmtx_{>\msc})$,
we can apply (\ref{eq:higher_order_norm_bd_Mpapaer_3}) and obtain
that $\|\mK_{>\msc}\|\stodom2\sum_{k>\msc}\kcoeff k\lesssim1$, which
implies that $1\lesssim\lambda_{\min}(\rsmtx_{>\msc})$.
\end{proof}
\section{Auxiliary Results for Computing Training Error}
\section{Introduction}
Consider kernel ridge regression (KRR), where we seek to learn a function $\regressfn:\R^{d}\mapsto\R$ from a reproducing kernel Hilbert space (RKHS) associated with a positive semi-definite kernel $\Kernalfn(\cdot,\cdot)$ by solving the following
optimization problem:
\begin{equation}
\hat{\regressfn}=\argmin{\regressfn}\sum_{i=1}^{n}[y_{i}-\regressfn(\invec_{i})]^{2}+\lambda\|\regressfn\|_{\Kernalfn}^{2}.\label{eq:kernel_1}
\end{equation}
Here, $\{\invec_{i},y_{i}\}_{i=1}^{n}$ is a collection of training samples, $\|\cdot\|_{\Kernalfn}$ is the RKHS norm and $\lambda>0$ is the regularization parameter.
The performance
of KRR can be characterized by the \emph{training error}
\[
\mathcal{E}_{\text{train}}=\frac{1}{n}\Big[\sum_{i=1}^{n}\big(y_{i}-\hat{\regressfn}(\invec_{i})\big)^{2}+\lambda\|\hat{\regressfn}\|_{\Kernalfn}^{2}\Big]
\]
and the \emph{test error}
\[
\mathcal{E}_{\text{test}}=\E_{\text{new}}[\E(y_{\text{new}}\mid\invec_{\text{new}})-\hat{\regressfn}(\invec_{\text{new}})]^{2},
\]
where $(\invec_{\text{new}},y_{\text{new}})\sim\mathcal{P}$ denotes an independent
test sample, and $\E_{\text{new}}$ denotes the expectation with respect to $(\invec_{\text{new}},y_{\text{new}})$ while keeping the training samples $\{\invec_{i},y_{i}\}_{i=1}^{n}$ fixed.
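For concreteness, the following Python sketch fits KRR on synthetic data and evaluates both errors defined above. It is only meant as an illustration: the kernel profile $f(z)=e^{z}$, the teacher $g(x)=x+x^{2}/2$, and all dimensions are assumptions made for the example and are not tied to the specific settings analyzed later.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, n, lam, sigma_eps = 20, 300, 1e-2, 0.5

def sphere(m):                                 # m points uniform on the sqrt(d)-sphere
    Z = rng.standard_normal((m, d))
    return Z * (np.sqrt(d) / np.linalg.norm(Z, axis=1, keepdims=True))

def kernel(A, B):                              # inner-product kernel f(<x,x'>/d), f(z) = exp(z)
    return np.exp(A @ B.T / d)

beta = np.sqrt(d) * np.eye(d)[0]               # teacher direction on the sphere
g = lambda s: s + 0.5 * s ** 2                 # illustrative teacher function

X = sphere(n)
y = g(X @ beta / np.sqrt(d)) + sigma_eps * rng.standard_normal(n)
K = kernel(X, X)
alpha = np.linalg.solve(lam * np.eye(n) + K, y)    # dual coefficients of KRR

resid = y - K @ alpha
train_err = (resid @ resid + lam * alpha @ K @ alpha) / n

Xt = sphere(5000)                              # fresh test points, noiseless targets
test_err = np.mean((g(Xt @ beta / np.sqrt(d)) - kernel(Xt, X) @ alpha) ** 2)
print(train_err, test_err)
\end{verbatim}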
Kernel ridge regression is a classical method for supervised learning \citep{scholkopf2002learning}. Due to its connection
to modern overparameterized neural networks \citep{neal1996priors,williams1996computing,daniely2016toward,jacot2018neural,belkin2018understand,du2019gradient},
there has been a strong resurgence of interest in studying the performance of KRR, especially in various high-dimensional settings. See, \emph{e.g.}, \cite{rakhlin2019consistency,liang2020multiple,liang2020just,bordelon2020spectrum,canatar2021spectral,ghorbani2021linearized,mei2021generalization}.
\begin{figure}[t]
\begin{centering}
\includegraphics[width=0.6\textwidth]{figs/multi_phase}
\par\end{centering}
\caption{\label{fig:Illustration-of-hierarchical}Illustration of the hierarchical
learning process of kernel ridge regression. Top figure: test error as a function of $\log n / \log d$, where $n$ is the sample size and $d$ is the dimension. The test error appears to remain unchanged when $d^{k-1}\ll n\ll d^{k}$, for $k\in\mathbb{Z}^{+}$, while drastic transitions
occur at $n\asymp d^{k}$. Bottom figures: the zoomed-in views of the test error within each transition region, corresponding to $n \asymp d^k$ for $k = 1, 2, 3$. The red curves show the asymptotic predictions obtained in this work. These learning curves exhibit delicate non-monotonic behavior, due to bias-variance trade-offs at different polynomial scaling regimes.}
\end{figure}
One intriguing phenomenon revealed by several recent works \citep{liang2020multiple,ghorbani2021linearized,mei2021generalization} is that the generalization performance of KRR exhibits a hierarchical and multi-phased pattern that crucially depends on the scaling relationship between the sample size $n$ and the underlying dimension $d$. We illustrate this phenomenon in Fig. \ref{fig:Illustration-of-hierarchical} (top part), where we plot the test error $\mathcal{E}_{\text{test}}$ against the ratio $\log n / \log d$. The test error can be clearly partitioned into several consecutive ``stationary'' phases that are separated by more drastic transitions in between. More precisely, $\mathcal{E}_{\text{test}}$
appears to remain unchanged when $d^{k-1}\ll n\ll d^{k}$, for $k\in\mathbb{Z}^{+}$, while transitions
occur at $n\asymp d^{k}$. An explanation of this phenomenon was given in \cite{ghorbani2021linearized}: It is shown that, when $d^{k-1}\ll n\ll d^{k}$, $\mathcal{E}_{\text{test}}$
is approximately equal to the approximation error of the function $h$ by all the polynomials
with degree less than $k$. This means that KRR sequentially learns
functions of increasing complexity as the sample size increases; when $d^{k-1}\ll n\ll d^{k}$, only polynomials with degree less than $k$ are learned.
What is the performance of KRR near the critical regions, exactly where the transitions happen? This is the focus of the current paper. More precisely, we ``zoom into'' each transition region by assuming $n\asymp d^{k}$, and derive sharp asymptotics of KRR for different values of $k$. Such asymptotic characterization provides a precise picture of the whole learning process and clarifies the impact of various parameters (including the choice of the kernel function) on the generalization performance. As a preview of our results, we plot in the lower part of Fig. \ref{fig:Illustration-of-hierarchical} the theoretical predictions of $\mathcal{E}_{\text{test}}$
in the regimes $n\asymp d^{k}$, for $k=1,2,3$, for a specific choice
of the kernel function. It can be seen that the learning curves of KRR can exhibit delicate non-monotonic behavior due to bias-variance trade-offs: as the sample size $n$ increases, $\mathcal{E}_{\text{test}}$
can first increase and then decrease again after crossing certain
deterministic thresholds. Under the names of ``double descent'' or ``multiple descent'', such a phenomenon
has been observed and analyzed in various other problems and models in learning \citep{mei2019generalization,d2020double,adlam2020neural,nakkiran2021deep}.
Some of the asymptotic predictions given in the paper were first derived in \cite{bordelon2020spectrum,canatar2021spectral} (see also \cite{dietrich1999statistical} for a related earlier work), via non-rigorous statistical physics methods and a ``Gaussian equivalence conjecture'' (see Sec. \ref{sec:Equivalence-Conjecture}). One of the technical contributions of this paper is to rigorously establish this conjecture, which allows us to characterize the exact performance of KRR in the polynomial scaling regime.
When the current work was under review at COLT '22, we became aware of the recent paper \cite{misiakiewicz2022spectrum} that also studies the exact asymptotics of KRR in the polynomial scaling regime.
The target function considered in that paper is different from ours.
On one hand, the expansion coefficients (\emph{i.e.}, $\tcoeff{k,d}$ in \eqref{eq:teacher_expansion} below) of low-degree components can be arbitrary, which is more general than ours; on the other hand, they require high-degree coefficients
to be \emph{independent} random variables. In comparison, we consider a target function (as detailed in Sec. \ref{sec:model}) whose coefficients are dependent.
In terms of the distribution of $\{\invec_i\}$, our work focuses on the uniform distribution over $d$-dimensional sphere.
In addition to this case, \cite{misiakiewicz2022spectrum} also considers the uniform distribution over the hypercube $\{-1,1\}^{d}$.
\section{Main Results}
\subsection{Model}
\label{sec:model}
We start by describing the statistical model under which we analyze the performance of KRR. Let $\dsphere d$ denote the $d$-dimensional sphere with radius $\sqrt{d}$. We assume that the input data vectors $\invec_{i}\iid\spmeasure{d-1}$, where $\spmeasure{d-1}$
denotes the uniform distribution over $\dsphere d$. The labels $\{y_{i}\}$ are generated from a generalized linear teacher model
\begin{equation}
y_{i}=\teacherfn(\invec_{i}^{\T}\sgl/\sqrt{d})+\noisei_{i},\label{eq:teacher_model}
\end{equation}
where $\teacherfn$ is an unknown teacher function, $\sgl\in\dsphere d$
is the teacher weight vector, and $\noisei_{i}\iid\mathcal{N}(0,\varnoise^{2})$
denotes independent additive noise. We consider
an inner-product kernel on $\dsphere d$, represented as
\begin{equation}
\Kernalfn(\invec,\invec')=\kernalfn_{d}(\invec^{\T}\invec'/\sqrt{d}), \label{eq:inner_product_kernel_def}
\end{equation}
where $\kernalfn_{d}$ is a transformation function that can
depend on $d$. It is known that
any inner-product PSD kernel $\Kernalfn\in L^{2}(\dsphere d\times\dsphere d,\spmeasure{d-1}^{\otimes2})$
can be expanded as \cite[Lemma 4.20]{scholkopf2002learning}:
\begin{equation}
\Kernalfn(\invec,\invec')=\sum_{k=0}^{\infty}\tkcoeff{k,d}\sum_{i=1}^{\usdim k}Y_{ki}(\vx)Y_{ki}(\vx'),\label{eq:kernel_expansion}
\end{equation}
where $\tkcoeff{k,d}\geq0$ is the eigenvalue of the $k$th eigenspace (we will also denote the normalized eigenvalue as $\kcoeff{k,d}:=\usdim k\tkcoeff{k,d}$),
and $\{Y_{ki}(\vx)\}_{i=1}^{\usdim k}$ are the associated eigenfunctions,
which form a set of orthonormal degree-$k$ spherical harmonics. We have
\[
\int Y_{ki}(\vx)Y_{\ell j}(\vx)\spmeasure{d-1}(d\vx)=\indicatorfn_{k\ell} \indicatorfn_{ij},
\]
where $\indicatorfn_{ab} = 1$ if $a = b$ and $\indicatorfn_{ab} = 0$ otherwise. Moreover, $\usdim k$ is the corresponding geometric multiplicity of the $k$th
eigenspace, which coincides with the dimension of the subspace spanned
by all degree-$k$ spherical harmonics. (We collect a list of related concepts and properties of spherical
harmonics in Appendix \ref{sec:Spherical-Harmonics-and}.)
Note that both $\usdim k$ and $\{Y_{ki}(\vx)\}_{i=1}^{\usdim k}$
can depend on $d$. However, to lighten the notation, we will omit this dependence when doing so causes no confusion. Finally, we denote by $\spmeasure{d-1,1}$ the distribution of the scalar random variable
$\vx^{\T}\ve_{1}$, where $\vx\sim\spmeasure{d-1}$ and $\ve_1$ is the first standard basis vector.
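As an illustration of the scalar distribution $\spmeasure{d-1,1}$ just introduced, the short Python sketch below (purely illustrative) samples $\vx\sim\spmeasure{d-1}$ and checks that the low-order moments of $\vx^{\T}\ve_{1}$ approach those of a standard Gaussian as $d$ grows, a fact used repeatedly in the proofs.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
for d in [5, 50, 500]:
    Z = rng.standard_normal((100000, d))
    X = Z * (np.sqrt(d) / np.linalg.norm(Z, axis=1, keepdims=True))
    z = X[:, 0]                                   # samples from mu_{d-1,1}
    print(d, z.mean(), z.var(), np.mean(z ** 4))  # -> 0, 1, 3 (Gaussian moments)
\end{verbatim}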
\subsection{Technical Assumptions}
Our main results are proved under the following assumptions.
\begin{enumerate}
\item[(A.1)] Both the number of training samples $n$ and the input dimension
$d$ go to infinity, while $\frac{n}{d^{\msc}/\msc!}\to\delta_{\msc}\in(0,\infty)$,
for $\msc\in\mathbb{Z}^{+}$.
\item[(A.2)] $\kernalfn_{d}(x)=\kernalfn(x/\sqrt{d})$, where
$\kernalfn(z)$ is well-defined on $[-1,1]$ and satisfies: (1) $\kernalfn(z)\leq C_{f}$, for some $C_{f}>0$ and (2) there exists some $\upsilon\in(0,1]$ such that $\kernalfn(z)\in C^{\infty}[-\upsilon,\upsilon]$.
\item[(A.3)] There exists $C>0$ such that $\kcoeff{k,d}\geq C$, for all $0\leq k\leq\msc$ and large enough $d$.
\item[(A.4)] There exist constants $C_{g},K_{g}>0$ such that $g(x)\leq C_{g}(1+|x|^{K_{g}})$
for any $x\in\R$.
\end{enumerate}
\begin{rem}
Assumption (A.2) puts two constraints on the
kernel functions we consider. The first condition
ensures that the expansion (\ref{eq:kernel_expansion}) is valid and
$
\sum_{k=0}^{\infty}\kcoeff{k,d}<\infty
$
holds uniformly over $d$, which directly follows from the properties
(\ref{eq:fd_under_qk}) and (\ref{eq:diagonal_Y_constant}) given
in Appendix \ref{sec:Spherical-Harmonics-and}.
The boundedness condition
$
\sum_{k=0}^{\infty}\kcoeff{k,d}<\infty
$
is convenient as it makes all the quantities of interest,
such as the training and test errors, of size $\mathcal{O}(1)$ when $\lambda\in(0,\infty)$.
On the other hand, the second condition requires that $\kernalfn(z)$ is smooth in some neighborhood of $0$. This guarantees that
for any fixed $k$, $\kcoeff{k,d} \to\frac{f^{(k)}(0)}{k!}$, as $d\to\infty$
(see Lemma \ref{fact:kcoeff_convergence} in Appendix \ref{sec:kcoeff_convergece}), where $f^{(k)}(0)$ denotes the $k$th derivative of $f(z)$ at $z = 0$. Below are two concrete examples of kernels that satisfy
Assumption (A.2).
\textbf{Example 1: Polynomial kernel.} In this case, $\kernalfn(z)=(z+b)^{k}$,
where $b\geq0$ is a fixed constant and $k\in\mathbb{Z}^{+}$. It
is a classical type of kernel, widely applied in several machine learning
problems such as the support vector machine (SVM).
\textbf{Example 2: Random feature model \citep{rahimi2008random}.} The
random feature model is a computationally efficient random approximation
of kernel function, which is based on the following feature map:
\[
\vx\mapsto\frac{1}{\sqrt{p}}[\sigma(\vr_{1}^{\T}\vx),\sigma(\vr_{2}^{\T}\vx),\cdots,\sigma(\vr_{p}^{\T}\vx)]^{\T},
\]
where $\{\vr_{i}\}_{i=1}^{p}$ is a set of independent random feature
vectors and $\sigma(x)$ is an activation function satisfying $\E\sigma(G)^2<\infty$, with $G\sim\mathcal{N}(0,1)$. The associated random kernel function is as follows:
\[
\Kernalfn_{p}(\vx,\vx')=\frac{1}{p}\sum_{u=1}^{p}\sigma(\vr_{u}^{\T}\vx)\cdot\sigma(\vr_{u}^{\T}\vx').
\]
In particular, when $\vr_{u}\iid\mathcal{N}(\boldsymbol{0},\mI/d)$
and $p\to\infty$, $\Kernalfn_{p}(\vx,\vx')\to\kernalfn(\frac{\invec^{\T}\invec'}{d})$.
Here, $\kernalfn(z)=\E[\sigma(X)\cdot\sigma(Y)]$, with
\[
\begin{pmatrix}X\\
Y
\end{pmatrix}\sim\mathcal{N}(\boldsymbol{0},\mSig),\;\mSig=\begin{bmatrix}1 & z\\
z & 1
\end{bmatrix}.
\]
The smoothness of $\kernalfn$ near $0$ can be directly checked by taking derivatives
of $\kernalfn(z)$.
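The convergence of the random-feature kernel to its inner-product limit can also be checked numerically. The sketch below is a rough illustration (the activation $\sigma=\tanh$, the dimensions, and the Monte Carlo sizes are arbitrary assumptions): it compares $\Kernalfn_{p}(\vx,\vx')$ with the value $\kernalfn(z)=\E[\sigma(X)\sigma(Y)]$ at $z=\invec^{\T}\invec'/d$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, p, n_mc = 100, 50000, 500000
sigma = np.tanh                                  # activation with E sigma(G)^2 < infinity

def sphere_point():
    v = rng.standard_normal(d)
    return v * np.sqrt(d) / np.linalg.norm(v)

x, xp = sphere_point(), sphere_point()
R = rng.standard_normal((p, d)) / np.sqrt(d)     # r_u ~ N(0, I/d)
K_p = np.mean(sigma(R @ x) * sigma(R @ xp))      # random-feature kernel value

z = x @ xp / d                                   # correlation of the limiting Gaussian pair
g1 = rng.standard_normal(n_mc)
g2 = z * g1 + np.sqrt(1 - z ** 2) * rng.standard_normal(n_mc)
f_z = np.mean(sigma(g1) * sigma(g2))             # f(z) = E[sigma(X) sigma(Y)]
print(K_p, f_z)
\end{verbatim}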
In fact, we can also let $\kernalfn(z)$ depend on $d$, as long as: (1) $\kernalfn_{d}\in L^{2}([-\sqrt{d},\sqrt{d}],\spmeasure{d-1,1})$, (2) $\sum_{k=0}^{\infty}\kcoeff{k,d}<\infty$ for large enough $d$, and (3) for any fixed $k$, $\kcoeff{k,d}\to\kcoeff k$ as $d\to\infty$.
\end{rem}
\begin{rem}
Assumption (A.3) guarantees
that the RKHS associated with the kernel $\Kernalfn$ is not degenerate:
the condition that $\mu_k > 0$ for $0 \le k \le K$ is satisfied as long as all the degree-$k$ ($k\leq\msc$) polynomials
are in the RKHS. This is equivalent to requiring $\kernalfn^{(k)}(0)>0$, for all $0\leq k \leq \msc$.
\end{rem}
\begin{rem}
Assumption (A.4) implies that for all $d\in\mathbb{Z}^{+}$, $\teacherfn\in L^{2}([-\sqrt{d},\sqrt{d}],\spmeasure{d-1,1})$.
Therefore, we have the following expansion:
\begin{equation}
\teacherfn(x)=\sum_{k=0}^{\infty}\tcoeff{k,d}\usp k(x),\label{eq:teacher_expansion}
\end{equation}
where $\usp k(x)$ is the degree-$k$ ultraspherical polynomial in
$L^{2}([-\sqrt{d},\sqrt{d}],\spmeasure{d-1,1})$ (more details about these polynomials can be
found in Appendix \ref{sec:Spherical-Harmonics-and}) and $\{\tcoeff{k,d}\}_{k\geq0}$
satisfies $\sum_{k=0}^{\infty}\tcoeff{k,d}^{2}<\infty$. Note that $\usp{k}(x)$ depends on $d$, but for notational simplicity, we will suppress the dependence.
In fact,
the weaker condition $g(x)\leq C\exp(C|x|)$, for some $C>0$, would suffice to guarantee
the $L^{2}$-integrability of $\teacherfn$. Here, we impose a stronger
assumption to ensure that all the moments of $g(x)$ exist, which
helps simplify several parts of the proof.
\end{rem}
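To make the limiting coefficients $\tcoeff k$ concrete, the Python sketch below computes $\int\teacherfn(x)\hermitefn_{k}(x)\tau_G(dx)$ by Gauss--Hermite quadrature, assuming the orthonormal (probabilists') Hermite normalization; the teacher $g(x)=\frac{x^{4}}{20}+\frac{x^{3}}{2}+x^{2}+x$ is the one used for Fig.~\ref{fig:Illustration-of-hierarchical}.
\begin{verbatim}
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt

g = lambda x: x ** 4 / 20 + x ** 3 / 2 + x ** 2 + x

nodes, weights = hermegauss(60)                  # probabilists' Gauss-Hermite rule
weights = weights / weights.sum()                # renormalize to the standard Gaussian measure
for k in range(6):
    coef = np.zeros(k + 1)
    coef[k] = 1.0
    He_k = hermeval(nodes, coef) / sqrt(factorial(k))   # orthonormal Hermite polynomial
    print(k, np.sum(weights * g(nodes) * He_k))         # estimate of t_k
\end{verbatim}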
\subsection{Main Results and Insights}
We are now ready to state our main theorem.
\begin{thm}
\label{thm:training_test_err}Under Assumptions (A.1)--(A.4), we have
the following asymptotic characterization of the training and test errors.
\textbf{\textup{1. Training error:}}
\begin{equation}
\mathcal{E}_{\text{train}}\pconv\lambda\left[\frac{\tcoeff{\msc}^{2}R_{\star}}{1+\kcoeff{\msc}\delta_{\msc}R_{\star}}+\Big(\varnoise^{2}+\sum_{k>\msc}\tcoeff k^{2}\Big)R_{\star}\right],\label{eq:main_results_trainerr}
\end{equation}
where $R_{\star}$ is the unique non-negative solution of:
\begin{equation}
\frac{1}{R}=\tilde{\lambda}+\frac{\mu_{\msc}}{1+\delta_{\msc}\mu_{\msc}R},\label{eq:main_results_Stieltjes}
\end{equation}
with $\kcoeff k=\frac{\kernalfn^{(k)}(0)}{k!}$,
$\tcoeff k=\int_{-\infty}^{\infty}\teacherfn(x)\hermitefn_{k}(x)\tau_G(dx)$ and $\tilde{\lambda}=\lambda+\sum_{k>\msc}^{\infty}\kcoeff k$.
Here, $\hermitefn_{k}(x)$ is the degree-$k$ Hermite polynomial, $\tau_G(dx)$ is the standard Gaussian measure
and $R_{\star}=\mathcal{R}(\tilde{\lambda};\mu_{\msc},\delta_{\msc})$,
where
\begin{equation}
\mathcal{R}(\lambda;\mu,\delta)=\frac{-(\lambda+\mu-\mu\delta)+\sqrt{(\lambda+\mu-\delta\mu)^{2}+4\lambda\mu\delta}}{2\lambda\mu\delta}.\label{eq:main_results_Stieltjes_1}
\end{equation}
\textbf{\textup{2. Test error:}}
\begin{equation}
\mathcal{E}_{\text{test}}\pconv\frac{1}{\equisamp-1}\left[\frac{\equisamp\tcoeff{\msc}^{2}}{(1+\kcoeff{\msc}\delta_{\msc}R_{\star})^{2}}+\equisamp\sum_{k>\msc}\tcoeff k^{2}+\varnoise^{2}\right],\label{eq:main_results_testerr}
\end{equation}
where $\equisamp=\frac{(1+\kcoeff{\msc}\delta_{\msc}R_{\star})^{2}}{\delta_{\msc}\kcoeff{\msc}^{2}R_{\star}^{2}}$.
In both (\ref{eq:main_results_trainerr}) and (\ref{eq:main_results_testerr}), $\pconv$ denotes convergence in probability as $d\to\infty$.
\end{thm}
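The theorem's predictions are straightforward to evaluate numerically. The Python sketch below implements (\ref{eq:main_results_Stieltjes_1}), (\ref{eq:main_results_trainerr}) and (\ref{eq:main_results_testerr}); the coefficient lists passed at the bottom are purely illustrative assumptions.
\begin{verbatim}
import numpy as np

def R_star(lam_tilde, mu_K, delta_K):
    # explicit solution of 1/R = lam_tilde + mu_K / (1 + delta_K * mu_K * R)
    a = lam_tilde + mu_K - mu_K * delta_K
    return (-a + np.sqrt(a ** 2 + 4 * lam_tilde * mu_K * delta_K)) \
           / (2 * lam_tilde * mu_K * delta_K)

def krr_limits(lam, mu, t, delta_K, K, sigma_eps):
    # mu[k], t[k]: limiting kernel and teacher coefficients for k = 0, 1, ...
    lam_tilde = lam + sum(mu[K + 1:])
    R = R_star(lam_tilde, mu[K], delta_K)
    tail = sum(tk ** 2 for tk in t[K + 1:])
    train = lam * (t[K] ** 2 * R / (1 + mu[K] * delta_K * R)
                   + (sigma_eps ** 2 + tail) * R)
    theta = (1 + mu[K] * delta_K * R) ** 2 / (delta_K * mu[K] ** 2 * R ** 2)
    test = (theta * t[K] ** 2 / (1 + mu[K] * delta_K * R) ** 2
            + theta * tail + sigma_eps ** 2) / (theta - 1)
    return train, test

mu = [1.0, 1.0, 0.5, 0.1]        # illustrative kernel coefficients mu_k
t = [0.0, 1.0, 1.0, 0.3, 0.05]   # illustrative teacher coefficients t_k
print(krr_limits(lam=1e-3, mu=mu, t=t, delta_K=2.0, K=2, sigma_eps=0.5))
\end{verbatim}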
\begin{rem}
It turns out that $R_{\star}$
coincides with the Stieltjes transform of the Marchenko-Pastur (MP)
law, with aspect ratio parameter $1/\delta_{\msc}$ and variance $\kcoeff{\msc}$.
The MP law is the limiting empirical
eigenvalue distribution of the Wishart ensemble.
\end{rem}
\textbf{Bias-Variance Decomposition. }The formulas in (\ref{eq:main_results_trainerr})
and (\ref{eq:main_results_testerr}) can be viewed from the perspective of bias-variance
decomposition. Conditioning on the input data vectors $\set{\vx_i}$, we consider the following
average squared bias and variance of an estimator $h$ \cite[Sec.7.3]{hastie2009elements}:
\begin{align*}
\mathcal{B}_{h} & :=\E_{\invec_{\text{new}}}[\E_{\noise}\regressfn(\invec_{\text{new}})-\E(y_{\text{new}}\mid\invec_{\text{new}})]^{2}\\
\mathcal{V}_{h} & :=\E_{\invec_{\text{new}}}[\var_{\noise}(\regressfn(\invec_{\text{new}}))].
\end{align*}
Then $\mathcal{E}_{\text{test}}$
in (\ref{eq:main_results_testerr}) can be decomposed as $\mathcal{E}_{\text{test}}=\mathcal{B}_{\hat{h}}+\mathcal{V}_{\hat{h}}$
and it holds that
\begin{align}
\mathcal{B}_{\hat{h}}\pconv & \frac{\equisamp}{\equisamp-1}\Big[\frac{\tcoeff{\msc}^{2}}{(1+\kcoeff{\msc}\delta_{\msc}R_{\star})^{2}}+\sum_{k>\msc}\tcoeff k^{2}\Big]\label{eq:bias_hhat}\\
\mathcal{V}_{\hat{h}}\pconv & \frac{\varnoise^{2}}{\equisamp-1}.\label{eq:variance_hhat}
\end{align}
[These can be directly verified via the same proof strategy underlying our proof of (\ref{eq:main_results_testerr}).]
From (\ref{eq:variance_hhat}), we can regard $\equisamp$ as
an inflation factor of the variance. By its definition, $\equisamp$ only depends
on $\{\kcoeff k\}_{k\geq0}$ but not on $\{\tcoeff k\}_{k\geq0}$
or $\varnoise^{2}$. Therefore, $\equisamp$ reflects the influence
of the kernel function on
the learning performance. On the other hand, from (\ref{eq:bias_hhat})
we can further decompose the bias term $\mathcal{B}_{\hat{h}}$ as $\mathcal{B}_{\hat{h}}=\sum_{k=0}^{\infty}\mathcal{B}_{k}$, where
\[
\mathcal{B}_{k}=\begin{cases}
0 & k<\msc,\\
\frac{\equisamp}{\equisamp-1}\frac{\tcoeff \msc^{2}}{(1+\kcoeff \msc\delta_{\msc}R_{\star})^{2}}+\frac{\sum_{\ell>\msc}\tcoeff \ell^{2}}{\equisamp-1} & k=\msc,\\
\tcoeff k^{2} & k>\msc.
\end{cases}
\]
It can be seen that $\mathcal{B}_{k}$ is equal to the average squared
bias when $\tcoeff k$ is the only non-zero coefficient in (\ref{eq:teacher_expansion}).
In this sense, each $\mathcal{B}_{k}$ can be understood as the contribution
of degree-$k$ components to the total squared bias. Moreover, the
contributions from different components are linear. Similar results
also hold for the training error (\ref{eq:main_results_trainerr}).
\textbf{Hierarchical learning process. }Based on the characterizations
(\ref{eq:main_results_testerr}), we can now have a better understanding
of the hierarchical learning process illustrated in Fig. \ref{fig:Illustration-of-hierarchical}.
Consider the two limits $\delta_{\msc}\to0$ and $\delta_{\msc}\to\infty$.
By (\ref{eq:main_results_Stieltjes_1}), we can get for any fixed $\tilde{\lambda}>0$
and $\mu_{\msc}>0$,
\[
R_{\star}\to\begin{cases}
(\tilde{\lambda}+\mu_{\msc})^{-1}& \delta_{\msc}\to0\\
\tilde{\lambda}^{-1}& \delta_{\msc}\to\infty
\end{cases}
\]
and $\equisamp\to\infty$ under both limits. Therefore, $\lim_{\delta_{\msc}\to0}\mathcal{B}_{\msc}=\tcoeff \msc^{2}$
and $\lim_{\delta_{\msc}\to\infty}\mathcal{B}_{\msc}=0$. Recall that
$\tcoeff k^{2}$ is the energy of the degree-$k$ component of the
teacher function, so this justifies that KRR learns
the degree-$k$ components in phase-$k$. Moreover we can also identify the roles
played by the components of different degrees. From (\ref{eq:main_results_testerr}),
we can see that at phase-$k$: (i) all the low-degree ($\ell<k$) components
of the kernel function and the teacher function do not exert any influence;
(ii) high-degree ($\ell>k$) components of the kernel function act as
an additional regularization term, as manifested in $\tilde{\lambda}$; and
(iii) high-degree components of the teacher function act as
additive noise.%
\begin{comment}
KRR essentially estimates all the degree-$k$ components as $0$,
while at the end of phase-$k$, the bias become $0$ which means that
the degree-$k$ components can be perfectly estimated. On the other
hand, we have, $\lim_{\delta_{\msc}\to0}\mathcal{B}_{k}=\lim_{\delta_{\msc}\to\infty}\mathcal{B}_{k}=\tcoeff k^{2}$
for all $k>\msc$. This means that at phase-$\msc$, the learning
of higher degree ($k>\msc$) components has not started yet and KRR
is only involved in learning the degree-$k$ component. Also we can
see that all $\tcoeff{\ell}$, $\ell<k$ do not contribute to $\mathcal{E}_{\text{test}}$,
which means that all the lower-degree components have been perfectly
learned by KRR. In summary, KRR \emph{hierarchically} learns components
of different degrees, from low to high, as $n$ increases. This phenomenon
has been rigorously studied before in \cite{liang2020multiple,ghorbani2021linearized}.
In particular, \cite{ghorbani2021linearized} focus on the regime:
$d^{k}\ll n\ll d^{k+1}$. This corresponds to the interval between
the end of phase-$k$ and the beginning of phase-$k+1$. In comparison,
our results provides the exact in-phase characterization of this learning
process. It depicts the transitioning process from phase-$\msc$ to
phase-$\msc+1$. From the characterization (\ref{eq:main_results_testerr}),
we can see that at phase-$k$ (i) all the low-degree ($\ell<k$) components
of kernel function and teacher function do not exert any influence,
(ii) high-degree ($\ell>k$) components of kernel function act as
an additional regularization term, as manifested in $\tilde{\lambda}$,
(iii) high-degree components of teacher function plays the role of
additive noise. Only the degree-$k$ component is being learned. In
this way, we can interpret phase-$k$ as the phase of learning degree-$k$
component. An illustration of this hierarchical process is given in
Fig. \ref{fig:Illustration-of-multi-phase}. We can find that towards
end of phase-1 and phase-2, $\mathcal{E}_{\text{test}}$ converges
to a non-zero value. It will start decreasing again, only when the
sample size crosses the boundary of each phase. Moreover, we can find
that at each phase, the error can undergo some non-monotonic process:
it doesn't necessarily goes down as we use more training samples.
This happens when $\lambda$ is small.
\end{comment}
\begin{comment}
Finally, it is worth mentioning that this type of characterization
were obtained recently in \cite{bordelon2020spectrum,canatar2021spectral}
for KRR and earlier in \cite{dietrich1999statistical} for SVM using
non-rigorous statistical physics methods. Here, we provide a rigorous
result.
\end{comment}
\textbf{Non-monotonicity of $\mathcal{E}_{\text{test}}$.} In Fig. \ref{fig:Illustration-of-hierarchical}, we show that $\mathcal{E}_{\text{test}}$ can be non-monotonic with respect to the sample complexity. Based on our asymptotic characterization, this phenomenon can be explained as follows. In Fig. 1, we choose $f(z)=\frac{z^3}{30}+\frac{z^2}{2}+z+1$, $g(x)=\frac{x^4}{20}+\frac{x^3}{2}+x^2+x$, $\varnoise=0.5$ and $\lambda=10^{-4}\approx 0$. This corresponds to the ridgeless limit of KRR. Let us take $\lambda\to 0$ in the asymptotic characterization and suppose $\kcoeff k=0$ for all $k>\msc$.
In this case, we can show from \eqref{eq:bias_hhat} and \eqref{eq:variance_hhat} that $\mathcal{B}_{\hat{h}}\pconv\tcoeff{\msc}^{2}\max\{1-\delta_{\msc},0\}$
and $\mathcal{V}_{\hat{h}}\pconv\frac{\varnoise^{2}}{|\delta_{\msc}-1|}\min\{\delta_{\msc},1\}$. We can see that $\mathcal{B}_{\hat{h}}$ is monotonically decreasing and the non-monotonicity of $\mathcal{E}_{\text{test}}$ stems from the non-monotonicity of $\mathcal{V}_{\hat{h}}$. The peak at the interpolation threshold $\delta_{\msc}=1$ is due to the explosion of the variance. In Fig. 1, this corresponds to the peak in Transition-3.
On the other hand, we can see that the peak at Transition-2 is less pronounced and there is no peak at Transition-1. The reason is that when $\msc=1,2$, the higher-degree components of the kernel function do not vanish; they act as regularization terms (manifested by $\tilde{\lambda}=\lambda+\sum_{k>\msc}\kcoeff k$), which reduces the variance.
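The explanation above can be reproduced directly from the asymptotic formulas. The short Python sketch below (again a sketch with illustrative parameter values) evaluates the ridgeless-limit bias and variance as $\delta_{\msc}$ varies and exhibits the variance peak at the interpolation threshold $\delta_{\msc}=1$.
\begin{verbatim}
import numpy as np

lam, mu_K, t_K, sigma_eps = 1e-6, 1.0, 1.0, 0.5   # near-ridgeless, no high-degree kernel part
for delta in [0.25, 0.5, 0.9, 1.0, 1.1, 2.0, 4.0]:
    a = lam + mu_K - mu_K * delta
    R = (-a + np.sqrt(a ** 2 + 4 * lam * mu_K * delta)) / (2 * lam * mu_K * delta)
    theta = (1 + mu_K * delta * R) ** 2 / (delta * mu_K ** 2 * R ** 2)
    bias = theta / (theta - 1) * t_K ** 2 / (1 + mu_K * delta * R) ** 2
    var = sigma_eps ** 2 / (theta - 1)
    print(delta, bias, var)       # the variance blows up near delta = 1
\end{verbatim}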
\subsection{Proof for the Asymptotic Formula of the Test Error}
Next we analyze the asymptotics of the test error. We can decompose $\mathcal{E}_{\text{test}}$
as:
\begin{align}
\mathcal{E}_{\text{test}} =\underbrace{\E_{\text{new}}\E(y_{\text{new}}\mid\invec_{\text{new}})^{2}}_{\text{I}}-\underbrace{2\E_{\text{new}}[\E(y_{\text{new}}\mid\invec_{\text{new}})\hat{\regressfn}(\invec_{\text{new}})]}_{\text{II}}+\underbrace{\E_{\text{new}}\hat{\regressfn}(\invec_{\text{new}})^{2}}_{\text{III}}.\label{eq:testerr_decomposition}
\end{align}
In the following, we deal with them individually.
\subsubsection{Part I}
It can be directly calculated from (\ref{eq:teacher_model}) and (\ref{eq:teacher_expansion})
that $\E_{\text{new}}\E(y_{\text{new}}\mid\invec_{\text{new}})^{2}=\sum_{k=0}^{\infty}\tcoeff{k,d}^{2}$.
Then by Lemma \ref{lem:obser_truncation}, as $d\to\infty$
\begin{equation}
\E_{\text{new}}\E(y_{\text{new}}\mid\invec_{\text{new}})^{2}\to\sum_{k=0}^{\infty}\tcoeff k^{2}.\label{eq:testerr_sglnormsqterm_finalform}
\end{equation}
\subsubsection{Part II}
Following a strategy similar to that in Sec. \ref{sec:proof_trainingerr}, we can get
\begin{equation}
\E_{\text{new}}[y_{\text{new}}\hat{\regressfn}(\invec_{\text{new}})]=\frac{1}{n}\tilde{\tevec}^{\T}\rsmtx\obser\pconv\sum_{k<\msc}\tcoeff k^{2}+\tfrac{\kcoeff{\msc}\delta_{\msc}\tcoeff{\msc}^{2}R_{\star}}{1+\kcoeff{\msc}\delta_{\msc}R_{\star}}\label{eq:testerr_crossterm_final_form}
\end{equation}
where $\tilde{\tevec}=\sum_{k=0}^{\infty}\kcoeff k\delta_{k}\tevec_{k}$. The proof is deferred to Appendix \ref{sec:proof_of_cross}.
\subsubsection{Part III}
This part of the proof follows the same strategy as Parts I and II.
In particular, we can get
\begin{equation}
\E_{\text{new}}\hat{\regressfn}(\invec_{\text{new}})^{2}\pconv\sum_{k<\msc}\tcoeff k^{2}+\frac{(\delta_{\msc}\kcoeff{\msc}\tcoeff{\msc}R_{\star})^{2}}{(1+\delta_{\msc}\kcoeff{\msc}R_{\star})^{2}}+\frac{\tcoeff{\msc}^{2}}{(\theta-1)(1+\delta_{\msc}\kcoeff{\msc}R_{\star})^{2}}+\frac{\varnoise^{2}+\sum_{k>\msc}\tcoeff k^{2}}{\theta-1}.\label{eq:testerr_normsqterm_finalform}
\end{equation}
The details are relegated to Appendix \ref{sec:Proof-of-hsq}.
\subsection{Finishing the Proof}
The final step is to substitute (\ref{eq:testerr_sglnormsqterm_finalform}),
(\ref{eq:testerr_crossterm_final_form}) and (\ref{eq:testerr_normsqterm_finalform})
back into (\ref{eq:testerr_decomposition}) to obtain (\ref{eq:main_results_testerr}).
\section{Proof of Main Results}
\subsection{Notations}
\begin{comment}
Throughout our proof, we will consider the expansion of kernel function
$\Kernalfn$ and teacher function $\teacherfn$.
\end{comment}
Before delving into the formal proof, let us first list some notations
that will be used throughout our proof in the following sections.
For $n\in\mathbb{Z}^{+}$, we denote by $[n]$ the set $\{1,2,\cdots,n\}$.
For a vector $\vx\in\R^{n}$, we use $\|\vx\|$ to denote its $\ell_{2}$
norm and for a matrix $\mX\in\R^{m\times n}$, we use $\|\mX\|$ to
denote its operator norm and $\|\mX\|_{F}$ as its Frobenius norm.
For convenience of stating some results regarding deterministic or
probabilistic upper bounds, we will adopt the following notations.
$f(d)\lesssim g(d)$ means that there exists $C>0$ such that $|f(d)|\leq C|g(d)|$,
and $f(d)\gtrsim g(d)$ means that there exists $c>0$ such that
$|f(d)|\geq c|g(d)|$. Also, for two non-negative random variables,
$X\stodom Y$ means that for any $\tau>0$ and $\veps>0$, $\P(X> d^{\tau}Y)\leq\veps$
for all large enough $d$.
Our proof will frequently utilize the expansions of $\Kernalfn(\cdot,\cdot)$
and $\teacherfn(\cdot)$ under spherical harmonics $\{Y_{ki}(\vx)\}_{k,i}$
and ultraspherical polynomials $\{\usp k\}_{k}$. For any vector $\va\in\R^{d}$,
$$\shmtx_{k}(\va):=[Y_{k1}(\va),Y_{k2}(\va),\cdots,Y_{k\usdim k}(\va)]^{\T}$$
and for any matrix $\mA=[\va_{1},\va_{2},\cdots,\va_{n}]^{\T}\in\R^{n\times d}$,
$\shmtx_{k}(\mA):=[\shmtx_{k}(\va_{1}),\shmtx_{k}(\va_{2}),\cdots,\shmtx_{k}(\va_{n})]^{\T}$.
In particular, for the input matrix $\inputmtx=[\invec_{1},\invec_{2},\cdots,\invec_{n}]^{\T}\in\R^{n\times d}$,
we denote $\shmtx_{k}:=\shmtx_{k}(\inputmtx)$. In the kernel expansion,
the degree-$k$ component will be denoted as $\kmtx_{k}:=\tkcoeff {k,d}\shmtx_{k}\shmtx_{k}^{\T}$
and likewise for the teacher model, we have $\tevec_{k}:=\tcoeff{k,d}\usp k\big(\frac{\inputmtx\sgl}{\sqrt{d}}\big)=\frac{\tcoeff{k,d}}{\sqrt{\usdim k}}\shmtx_{k}\shmtx_{k}(\sgl)$ and we write $\widetilde{\shmtx}_{k}(\sgl)=\frac{\tcoeff k}{\sqrt{\usdim k}}\shmtx_{k}(\sgl)$.
We also denote $\delta_{k}=n/\usdim k$ as the sampling ratio with
respect to the degree-$k$ component and $\mD_k = \sqrt{\kcoeff{k,d}\delta_k} \mI_{\usdim{k}}$.
We use the following short-hand notations for the partial sum:
$\kmtx_{\leq k}=\sum_{\ell=0}^{k}\kmtx_{\ell}$,
$\usdim{\leq k}=\sum_{\ell=0}^{k}\usdim{\ell}$,
$\tevec_{\leq k}=\sum_{\ell=0}^{k}\tevec_{\ell}$ and
$\obser_{\leq k}:=\tevec_{\leq k}+\noise$
and block matrix:
$\shmtx_{\leq k}=[\shmtx_{0},\cdots,\shmtx_{k}]$, ${\shmtx}_{\leq k}(\sgl)=[{\shmtx}_{0}(\sgl)^{\T},\cdots,{\shmtx}_{k}(\sgl)^{\T}]^{\T}$
and
$\mD_{\leq k} = \diag\{\mD_{0},\cdots,\mD_{k}\}$. The quantities like $\kmtx_{>k}$ or $\shmtx_{>k}$ are defined in the same way.
Also, since we focus on asymptotic results and, under our main
assumptions, $\kcoeff{k,d}\to\kcoeff k$ and $\tcoeff{k,d}\to\tcoeff k$ as $d\to\infty$, we will drop the dependence of $\kcoeff{k,d}$ and $\tcoeff{k,d}$
on $d$ in our proof when it is clear from the context.
\subsection{Proof for the Asymptotic Formula of the Training Error}\label{sec:proof_trainingerr}
We first study the asymptotics of the training error. It can be proved
\cite[Theorem 4.2]{scholkopf2002learning} that the optimal solution
of (\ref{eq:kernel_1}) is:
\begin{equation}
\hat{\regressfn}(\invec)=\sum_{i=1}^{n}\Kernalfn(\invec,\invec_{i})\soli_{i}.\label{eq:representer_theorem}
\end{equation}
Here, $\sol=(\lambda\mI+\kmtx)^{-1}\obser$ is the optimal
solution of
\begin{equation}
\min_{\vw}\|\vy-\mK\vw\|^{2}+\lambda\vw^{\T}\mK\vw,\label{eq:kernel_2}
\end{equation}
where $\kmtx\in\R^{n\times n}$ is the kernel matrix, with $[\kmtx]_{ij}=\Kernalfn(\invec_{i},\invec_{j})$.
Therefore, $\mathcal{E}_{\text{train}}$ has an explicit form:
\begin{align}
\mathcal{E}_{\text{train}} & =\frac{\lambda}{n}\vy^{\T}\rsmtx\vy,\label{eq:training_error}
\end{align}
where $\rsmtx=(\lambda\mI+\mK)^{-1}$ is the resolvent matrix of $\mK$.
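The identity $\mathcal{E}_{\text{train}}=\frac{\lambda}{n}\obser^{\T}\rsmtx\obser$ can be verified directly. The Python sketch below (with an arbitrary PSD matrix standing in for the kernel matrix) compares the definition of the training error with this closed form.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, lam = 200, 0.1
A = rng.standard_normal((n, n))
K = A @ A.T / n                                  # PSD stand-in for the kernel matrix
y = rng.standard_normal(n)

R = np.linalg.inv(lam * np.eye(n) + K)           # resolvent
w = R @ y                                        # optimal dual weights
direct = (np.sum((y - K @ w) ** 2) + lam * w @ K @ w) / n
print(direct, lam / n * y @ R @ y)               # the two values coincide
\end{verbatim}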
\subsubsection{\label{subsec:training_err_special_case_proof}A Special Case}
We first present the proof for a special case. Consider the following
kernel function $\Kernalfn(\cdot,\cdot)$:
\begin{equation}
\Kernalfn(\vx,\vx')=\frac{\kcoeff{\msc,d}}{\usdim{\msc}}\sum_{i=1}^{\usdim{\msc}}Y_{\msc i}(\vx)Y_{\msc i}(\vx')\label{eq:kernel_expansion_1}
\end{equation}
and teacher function $\teacherfn(\cdot)$:
\begin{equation}
\teacherfn(x)=\tcoeff{\msc,d}\usp{\msc}(x),\label{eq:teacher_expansion_1}
\end{equation}
where $\msc\in\mathbb{Z}^{+}$ is defined as in Assumption (A.1).
Comparing (\ref{eq:kernel_expansion_1}) and (\ref{eq:teacher_expansion_1})
with (\ref{eq:kernel_expansion}) and (\ref{eq:teacher_expansion}),
we can find that (\ref{eq:kernel_expansion_1}) and (\ref{eq:teacher_expansion_1})
correspond to a special model that only retains the degree-$\msc$
component, while discarding all the low-degree and high-degree parts.
Although this may appear to be a substantial simplification of the
original model, it turns out that this simplified setting already
captures some main technical ingredients in the general proof.%
We can make some simplifications utilizing the rotational invariance
of the input vectors $\{\invec_{i}\}_{i=1}^{n}$. Since $\invec_{i}\sim\spmeasure{d-1}$,
we have the following representation:
\begin{equation}
\invec_{i}=(\eta_{i},[(d-\eta_{i}^{2})/(d-1)]^{1/2}\vv_{i}^{\T})^{\T},\label{eq:xi_representation}
\end{equation}
where $\eta_{i}\sim\spmeasure{d-1,1}$, $\vv_{i}\sim\spmeasure{d-2}$
and they are independent. Also by rotational invariance, we can assume
without loss of generality that $\sgl=\sqrt{d}\ve_{1}$. Then substituting
$\sgl=\sqrt{d}\ve_{1}$, (\ref{eq:teacher_expansion_1}) and (\ref{eq:xi_representation})
into (\ref{eq:teacher_model}), we get
\begin{equation}
\obser=\tcoeff{\msc,d}\usp{\msc}(\veta)+\noise,\label{eq:teacher_model_1}
\end{equation}
where $\usp{\msc}(\cdot)$ is applied pointwise on $\veta$. On the
other hand, the kernel function can be written compactly as:
\begin{align}
\Kernalfn(\vx_{i},\vx_{j}) & =\tfrac{\kcoeff{\msc,d}}{\sqrt{\usdim{\msc}}}\usp{\msc}(\vx_{i}^{\T}\vx_{j}/\sqrt{d})\nonumber \\
& =\tfrac{\kcoeff{\msc,d}}{\sqrt{\usdim{\msc}}}\usp{\msc}\bigg(\tfrac{\eta_{i}\eta_{j}}{\sqrt{d}}+\tfrac{\vv_{i}^{\T}\vv_{j}}{\sqrt{d-1}}\sqrt{\tfrac{d}{d-1}\Big(1-\tfrac{\eta_{i}^{2}}{d}\Big)\Big(1-\tfrac{\eta_{j}^{2}}{d}\Big)}\bigg)\label{eq:kernel_function_1}
\end{align}
where in the second step, we use (\ref{eq:xi_representation}). Correspondingly,
the kernel matrix $\kmtx$($=\kmtx_{\msc}$) becomes:
\begin{align}
\mK & =\tfrac{\kcoeff{\msc,d}}{\sqrt{\usdim{\msc}}}\usp{\msc}\Big(\tfrac{\veta\veta^{\T}}{\sqrt{d}}+\sqrt{\tfrac{d}{d-1}}\diag\{(1-\eta_{i}^{2}/d)^{1/2}\}\tfrac{\mV\mV^{\T}}{\sqrt{d-1}}\diag\{(1-\eta_{i}^{2}/d)^{1/2}\}\Big)\label{eq:kernel_matrix_1}
\end{align}
where $\veta=(\eta_{1},\eta_{2},\cdots,\eta_{n})^{\T}$ and $\mV=(\vv_{1},\vv_{2},\cdots,\vv_{n})^{\T}$.
From (\ref{eq:teacher_model_1}) we know $\vy$ is a (noisy) function of $\veta$.
Therefore, to compute $\mathcal{E}_{\text{train}}=\frac{\lambda}{n}\vy^{\T}\rsmtx\vy,$
we need to handle the (weak) correlation between $\veta$ and $\kmtx$.
However, the formulation in (\ref{eq:kernel_matrix_1}) is not amenable
to analysis, as $\kmtx$ depends on $\veta$ in a convoluted way. To
this end, we can apply Proposition 1, Lemma 4 and Lemma 7 in \cite{Lu2022Equi} (with
a slightly different scaling) to get
\begin{equation}
\Big|\frac{1}{n}\obser^{\T}(\rsmtx-\widehat{\rsmtx})\obser\Big|\stodom\frac{1}{\sqrt{d}},\label{eq:training_error_2_1}
\end{equation}
where $\widehat{\rsmtx}:=(\lambda\mI+\widehat{\kmtx})^{-1}$, with
\begin{equation}
\widehat{\kmtx}=\frac{\kcoeff{\msc,d}}{\sqrt{\usdim{\msc}}}\tusp{\msc}(\mV\mV^{\T})+\frac{\kcoeff{\msc,d}}{\usdim{\msc}}\vv_{\msc}(\veta)\vv_{\msc}(\veta)^{\T},\label{eq:kernelmtx_approximation_Kl}
\end{equation}
$\vv_{\ell}(\veta):=(\usp{\ell}(\eta_{1}),\cdots,\usp{\ell}(\eta_{n}))^{\T}$ and $\tusp{\msc}(x):=\usp{\msc,d-1}(x)$.
The approximation $\widehat{\kmtx}$
is much easier to handle,
as it depends on $\veta$ only through a rank-1 matrix.
Define
$\widetilde{\rsmtx}:=(\lambda\mI+\widetilde{\kmtx})^{-1}$, where $\widetilde{\kmtx}:=\frac{\kcoeff{\msc,d}}{\sqrt{\usdim{\msc}}}\tusp{\msc}(\mV\mV^{\T})$. By the Sherman--Morrison formula, we can get:
$$
\frac{1}{n}\obser^{\T}\widehat{\rsmtx}\obser=\frac{\tcoeff{\msc,d}^2 a + 2\tcoeff{\msc} b - \delta_{\msc}\kcoeff{\msc,d} b^2}{1+\delta_{\msc}\kcoeff{\msc,d}a} + c,
$$
where $a = \frac{\vv_{\msc}(\veta)^{\T}\widetilde{\rsmtx}\vv_{\msc}(\veta)}{n}$, $b = \frac{\vv_{\msc}(\veta)^{\T}\widetilde{\rsmtx}\noise}{n}$ and $c=\frac{\noise^{\T}\widetilde{\rsmtx}\noise}{n}$. Also by Lemma 5 in \cite{Lu2022Equi}, we have $|a-\widetilde{R}_{\msc}|\stodom\frac{1}{\sqrt{d}}$, $|b|\stodom\frac{1}{\sqrt{d}}$ and $|c-\sigma_{\noisei}^2\widetilde{R}_{\msc}|\stodom\frac{1}{\sqrt{d}}$, where $\widetilde{R}_{\msc}=\frac{1}{n}\Tr\widetilde{\rsmtx}$. Therefore,
\begin{align}
\bigg|\frac{1}{n}\obser^{\T}\widehat{\rsmtx}\obser-\Big(\frac{\tcoeff{\msc,d}^{2}\widetilde{R}_{\msc}}{1+\delta_{\msc}\kcoeff{\msc,d}\widetilde{R}_{\msc}}+\varnoise^{2}\widetilde{R}_{\msc}\Big)\bigg|
\stodom\frac{1}{\sqrt{d}}.
\label{eq:training_error_2_2}
\end{align}
Then after
combining (\ref{eq:training_error_2_1}), (\ref{eq:training_error_2_2})
and Lemma \ref{lem:obser_truncation}, we obtain
\begin{equation}
\frac{1}{n}\obser^{\T}\rsmtx\obser\pconv\frac{\tcoeff{\msc}^{2}\widetilde{R}_{\msc}}{1+\delta_{\msc}\kcoeff{\msc}\widetilde{R}_{\msc}}+\varnoise^{2}\widetilde{R}_{\msc}.\label{eq:training_error_2}
\end{equation}
Note that $\widetilde{R}_{\msc}$ is the Stieltjes transform of the empirical eigenvalue distribution of $\widetilde{\kmtx}$, evaluated at $-\lambda$. Then following the same proof as that of Theorem 1 in \cite{Lu2022Equi}, we can show that $\widetilde{R}_{\msc}\pconv R_{\star,\msc}$,
where $R_{\star,\msc}$ is the unique non-negative solution of $\frac{1}{R}=\lambda+\frac{\mu_{\msc}}{1+\delta_{\msc}\mu_{\msc}R}$.
This concludes the proof.
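The Sherman--Morrison reduction used above is a purely algebraic identity and can be sanity-checked numerically. The Python sketch below (with generic stand-ins for $\widetilde{\kmtx}$, $\vv_{\msc}(\veta)$ and $\noise$, and arbitrary parameter values) confirms the formula for $\frac{1}{n}\obser^{\T}\widehat{\rsmtx}\obser$ in terms of $a$, $b$ and $c$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, N, lam, mu, t, sig = 300, 150, 0.1, 0.8, 1.2, 0.5
delta = n / N

A = rng.standard_normal((n, n))
K_tilde = A @ A.T / n                            # stand-in for the bulk matrix tilde{K}
v = rng.standard_normal(n)                       # stand-in for v_K(eta)
eps = sig * rng.standard_normal(n)
y = t * v + eps

R_tilde = np.linalg.inv(lam * np.eye(n) + K_tilde)
a = v @ R_tilde @ v / n
b = v @ R_tilde @ eps / n
c = eps @ R_tilde @ eps / n

K_hat = K_tilde + (mu / N) * np.outer(v, v)      # rank-one perturbation
lhs = y @ np.linalg.inv(lam * np.eye(n) + K_hat) @ y / n
rhs = (t ** 2 * a + 2 * t * b - delta * mu * b ** 2) / (1 + delta * mu * a) + c
print(lhs, rhs)                                  # the two values coincide
\end{verbatim}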
\subsubsection{General Case}
To extend the proof to the general setting [cf. (\ref{eq:kernel_expansion})
and (\ref{eq:teacher_model})], we need to take into account all
the terms in the expansions of $\Kernalfn(\cdot,\cdot)$ and $\teacherfn(\cdot)$,
not just the degree-$\msc$ component as in (\ref{eq:kernel_expansion_1})
and (\ref{eq:teacher_expansion_1}). The bridge is the following result,
which shows that: (1) the low-degree parts ($k<\msc$) can be truncated,
and (2) the high-degree components ($k>\msc$) of the kernel function
act as a regularization term.
\begin{prop}
\label{prop:train_error_truncation}There exists $\tau>0$ such that
\begin{equation}
\frac{1}{n}\Big|\obser^{\T}\rsmtx\obser-\obser_{\geq\msc}^{\T}\widetilde{\rsmtx}_{\msc}\obser_{\geq\msc}\Big|\stodom\frac{1}{d^{\tau}}\label{eq:training_err_approx_target}
\end{equation}
where $\widetilde{\rsmtx}_{\msc}=(\tilde{\lambda}\mI+\mK_{\msc})^{-1}$ and $\tilde{\lambda}:=\lambda+\sum_{k>\msc}\kcoeff k$ is an
equivalent regularization parameter.
\end{prop}
\begin{proof}
We have the following decomposition for $\frac{1}{n}\obser^{\T}\rsmtx\obser$:
\begin{align*}
\frac{1}{n}\obser^{\T}\rsmtx\obser= & \left[\frac{1}{n}\obser^{\T}(\rsmtx-\widetilde{\rsmtx}_{\leq\msc})\obser\right]+\left[\frac{1}{n}\obser^{\T}\widetilde{\rsmtx}_{\leq\msc}\obser-\frac{1}{n}\obser_{\geq\msc}^{\T}\widetilde{\rsmtx}_{\leq\msc}\obser_{\geq\msc}\right]\\
& +\frac{1}{n}\obser_{\geq\msc}^{\T}(\widetilde{\rsmtx}_{\leq\msc}-\widetilde{\rsmtx}_{\msc})\obser_{\geq\msc}+\frac{1}{n}\obser_{\geq\msc}^{\T}\widetilde{\rsmtx}_{\msc}\obser_{\geq\msc},
\end{align*}
where $\widetilde{\rsmtx}_{\leq\msc}=(\tilde{\lambda}\mI+\mK_{\leq\msc})^{-1}$.
The first three terms in the above display are approximation errors.
In Lemma \ref{lem:training_err_approx_step1}, Lemma \ref{lem:training_err_approx_step2}
and Lemma \ref{lem:training_err_approx_step3}, we show that they
all decay to zero as $d\to\infty$, with the desired rate. This completes
the proof.
\end{proof}
Proposition \ref{prop:train_error_truncation} brings us closer to
the special case analyzed previously in Sec. \ref{subsec:training_err_special_case_proof}.
In particular, it implies that all the low-degree components in $\Kernalfn(\cdot,\cdot)$
and $\teacherfn(\cdot)$ can be dropped without causing any non-vanishing
error, and all the higher-degree components of $\Kernalfn(\cdot,\cdot)$
can be equivalently treated as a regularization term. What remains is to handle the higher-degree components contained
in $\obser_{\geq\msc}$. Recall that in the simplified setting (\ref{eq:teacher_expansion_1}),
only the $\msc$th degree component is involved.%
\begin{comment}
As it will be clear in a moment, those higher-degree components in
$\obser_{\geq\msc}$ act as an equivalent additive noise.
\end{comment}
To proceed, we first apply a truncation over $\obser_{\geq\msc}$.
Let $\hat{\obser}_{\geq\msc}=\sum_{k=\msc}^{L}\tcoeff{k}\usp k(\inputmtx\sgl/\sqrt{d})+\noise$,
for some $L\geq\msc$ to be chosen. Then we can follow the same strategy
in Sec. \ref{subsec:training_err_special_case_proof} to obtain
\begin{equation}
\bigg|\frac{1}{n}\hat{\obser}_{\geq\msc}^{\T}\widetilde{\rsmtx}_{\msc}\hat{\obser}_{\geq\msc}-\Big[\frac{\tcoeff{\msc}^{2}R_{\star}}{1+\delta_{\msc}\kcoeff{\msc}R_{\star}}+\Big(\varnoise^{2}+\sum_{k=\msc+1}^{L}\tcoeff{k}^{2}\Big)R_{\star}\Big]\bigg|\pconv0.\label{eq:training_error_3}
\end{equation}
To complete the proof, we just need to show that the above approximation
by $\hat{\obser}_{\geq\msc}$ can be made arbitrarily precise. In
particular, by Lemma \ref{lem:obser_truncation}, for any $\veps>0$,
we can always find an $L\in\mathbb{Z}^{+}$ such that for all large
enough $d$, $\sum_{k=L+1}^{\infty}\tcoeff{k,d}^{2}<\frac{\veps}{2}$
and $\P(\frac{1}{n}\|\obser_{\geq\msc}-\hat{\obser}_{\geq\msc}\|^{2}\geq\veps)\leq\frac{C}{n\veps^{2}}$,
where $C>0$ is some constant. These two bounds together with (\ref{eq:training_error_3})
imply that for any $\veps>0$,
\[
\P\bigg\{\bigg|\frac{1}{n}\obser_{\geq\msc}^{\T}\widetilde{\rsmtx}_{\msc}\obser_{\geq\msc}-\Big[\frac{\tcoeff{\msc}^{2}R_{\star}}{1+\delta_{\msc}\kcoeff{\msc}R_{\star}}+\Big(\varnoise^{2}+\sum_{k=\msc+1}^{\infty}\tcoeff{k}^{2}\Big)R_{\star}\Big]\bigg|>\veps\bigg\}\to0,
\]
as $d\to\infty$, and the proof is finished.
| {
"timestamp": "2022-05-16T02:21:40",
"yymm": "2205",
"arxiv_id": "2205.06798",
"language": "en",
"url": "https://arxiv.org/abs/2205.06798",
"abstract": "The generalization performance of kernel ridge regression (KRR) exhibits a multi-phased pattern that crucially depends on the scaling relationship between the sample size $n$ and the underlying dimension $d$. This phenomenon is due to the fact that KRR sequentially learns functions of increasing complexity as the sample size increases; when $d^{k-1}\\ll n\\ll d^{k}$, only polynomials with degree less than $k$ are learned. In this paper, we present sharp asymptotic characterization of the performance of KRR at the critical transition regions with $n \\asymp d^k$, for $k\\in\\mathbb{Z}^{+}$. Our asymptotic characterization provides a precise picture of the whole learning process and clarifies the impact of various parameters (including the choice of the kernel function) on the generalization performance. In particular, we show that the learning curves of KRR can have a delicate \"double descent\" behavior due to specific bias-variance trade-offs at different polynomial scaling regimes.",
"subjects": "Machine Learning (cs.LG)",
"title": "Sharp Asymptotics of Kernel Ridge Regression Beyond the Linear Regime",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9719924802053235,
"lm_q2_score": 0.7279754430043072,
"lm_q1q2_score": 0.7075866563743256
} |
https://arxiv.org/abs/0707.4256 | Rubbling and Optimal Rubbling of Graphs | A pebbling move on a graph removes two pebbles at a vertex and adds one pebble at an adjacent vertex. Rubbling is a version of pebbling where an additional move is allowed. In this new move one pebble is removed at vertices v and w adjacent to a vertex u and an extra pebble is added at vertex u. A vertex is reachable from a pebble distribution if it is possible to move a pebble to that vertex using rubbling moves. The rubbling number of a graph is the smallest number m needed to guarantee that any vertex is reachable from any pebble distribution of m pebbles. The optimal rubbling number is the smallest number m needed to guarantee a pebble distribution of m pebbles from which any vertex is reachable. We determine the rubbling and optimal rubbling number of some families of graphs including cycles. | \section{Introduction}
Graph pebbling has its origin in number theory. It is a model for
the transportation of resources. Starting with a pebble distribution
on the vertices of a simple connected graph, a \emph{pebbling move}
removes two pebbles from a vertex and adds one pebble at an adjacent
vertex. We can think of the pebbles as fuel containers. Then the loss
of the pebble during a move is the cost of transportation. A vertex
is called \emph{reachable} if a pebble can be moved to that vertex
using pebbling moves. There are several questions we can ask about
pebbling. How many pebbles will guarantee that every vertex is reachable,
or that all vertices are reachable at the same time? How can we place
the smallest number of pebbles such that every vertex is reachable?
For a comprehensive list of references for the extensive literature
see the survey papers \cite{Hurlbert_survey1,Hurlbert_survey2}.
In the current paper we propose the study of an extension of pebbling
called \emph{rubbling}. In this version we also allow a move that
removes a pebble from the vertices $v$ and $w$ that are adjacent
to a vertex $u$, and adds a pebble at vertex $u$. We find rubbling
versions of some of the well-known pebbling tools, such as the transition
digraph, the No Cycle Lemma, squishing and smoothing. We use these
tools to find the rubbling number and the optimal rubbling number
for some families of graphs including complete graphs, complete bipartite
graphs, paths, wheels and cycles.
\section{Preliminaries}
Let $G$ be a simple graph. We use the notation $V(G)$ for the vertex
set and $E(G)$ for the edge set. A \emph{pebble function} on a graph
$G$ is a function $p:V(G)\to{\bf Z}$ where $p(v)$ is the number
of pebbles placed at $v$. A \emph{pebble distribution} is a nonnegative
pebble function. The \emph{size} of a pebble distribution $p$ is
the total number of pebbles $\sum_{v\in V(G)}p(v)$. We are going
to use the notation $p(v_{1},\ldots,v_{n},*)=(a_{1},\ldots,a_{n},q(*))$
to indicate that $p(v_{i})=a_{i}$ for $i\in\{1,\ldots,n\}$ and $p(w)=q(w)$
for all $w\in V(G)\setminus\{ v_{1},\ldots,v_{n}\}$.
\begin{defn}
Consider a pebble function $p$ on the graph $G$. If $\{ v,u\}\in E(G)$
then the \emph{pebbling move} $(v,v\to u)$ removes two pebbles at
vertex $v$ and adds one pebble at vertex $u$ to create a new pebble
function\[
p_{(v,v\to u)}(v,u,*)=(p(v)-2,p(u)+1,p(*)).\]
If $\{ w,u\}\in E(G)$ and $v\not=w$ then the \emph{strict rubbling
move} $(v,w\to u)$ removes one pebble each at vertices $v$ and $w$
and adds one pebble at vertex $u$ to create a new pebble function\[
p_{(v,w\to u)}(v,w,u,*)=(p(v)-1,p(w)-1,p(u)+1,p(*)).\]
A \emph{rubbling move is} either a pebbling move or a strict rubbling
move.
\end{defn}
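To make the two kinds of moves concrete, here is a small illustrative sketch (not taken from the paper) that represents a pebble function as a Python dictionary and applies a pebbling move or a strict rubbling move to it.

```python
# Pebble functions as dictionaries; the two kinds of rubbling moves update them.
def pebbling_move(p, v, u):
    """Apply (v, v -> u): remove two pebbles at v and add one pebble at u."""
    q = dict(p)
    q[v] -= 2
    q[u] += 1
    return q

def strict_rubbling_move(p, v, w, u):
    """Apply (v, w -> u) with v != w: remove one pebble at each of v and w,
    and add one pebble at u."""
    q = dict(p)
    q[v] -= 1
    q[w] -= 1
    q[u] += 1
    return q

# Example on the path a - b - c with one pebble at each endpoint: the strict
# rubbling move (a, c -> b) moves a pebble onto the middle vertex.
p = {"a": 1, "b": 0, "c": 1}
print(strict_rubbling_move(p, "a", "c", "b"))  # {'a': 0, 'b': 1, 'c': 0}
```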
Note that the rubbling moves $(v,w\to u)$ and $(w,v\to u)$ are the
same. Also note that the resulting pebble function might not be a
pebble distribution even if $p$ is.
\begin{defn}
A \emph{rubbling sequence} is a finite sequence $s=(s_{1},\ldots,s_{k})$
of rubbling moves. The pebble function gotten from the pebble function
$p$ after applying the moves in $s$ is denoted by $p_{s}$.
\end{defn}
The concatenation of the rubbling sequences $r=(r_{1},\ldots,r_{k})$
and $s=(s_{1},\ldots,s_{l})$ is denoted by $rs=(r_{1},\ldots,r_{k},s_{1},\ldots,s_{l})$.
\begin{defn}
A rubbling sequence $s$ is \emph{executable} from the pebble distribution
$p$ if $p_{(s_{1},\ldots,s_{i})}$ is nonnegative for all $i$. A
vertex $v$ of $G$ is \emph{reachable} from the pebble distribution
$p$ if there is an executable rubbling sequence $s$ such that $p_{s}(v)\ge1$.
The \emph{rubbling number} $\rho(G)$ of a graph $G$ is the minimum
number $m$ such that every vertex of $G$ is reachable from any pebble
distribution of size $m$.
\end{defn}
In other words, a vertex is reachable if a pebble can be moved to that
vertex by actually carrying out rubbling moves, never running out of pebbles along the way.
Changing the order of moves in an executable rubbling sequence $s$
may result in a sequence $r$ that is no longer executable. On the
other hand the ordering of the moves has no effect on the resulting
pebble function, that is, $p_{s}=p_{r}$. This justifies the following
definition.
\begin{defn}
Let $S$ be a multiset of rubbling moves. The pebble function gotten
from the pebble function $p$ after applying the moves in $S$ in
any order is denoted by $p_{S}$.
\end{defn}
\section{The transition digraph and the No Cycle Lemma}
\begin{defn}
Given a multiset $S$ of rubbling moves on $G$, the \emph{transition
digraph} $T(G,S)$ is a directed multigraph whose vertex set is $V(G)$,
and each move $(v,w\to u)$ in $S$ is represented by two directed
edges $(v,u)$ and $(w,u)$. The transition digraph of a rubbling
sequence $s=(s_{1},\ldots,s_{n})$ is $T(G,s)=T(G,S)$, where $S=\{ s_{1},\ldots,s_{n}\}$
is the multiset of moves in $s$. Let $d_{T(G,S)}^{-}$ represent
the in-degree and $d_{T(G,S)}^{+}$ the out-degree in $T(G,S)$. We
simply write $d^{-}$ and $d^{+}$ if the transition digraph is clear
from context.
\end{defn}
The transition digraph only depends on the rubbling moves and the
graph but not on the pebble distribution or on the order of the moves.
It is possible that $T(G,S)=T(G,R)$ even if $S\not=R$. If $T(G,S)=T(G,R)$
then $p_{S}=p_{R}$, so the effect of a rubbling sequence on a pebble
function only depends on the transition digraph. In fact we have the
following.
\begin{lem}
If $p$ is a pebble function on $G$ and $S$ is a multiset of rubbling
moves then\[
p_{S}(v)=p(v)+d^{-}(v)/2-d^{+}(v)\]
for all $v\in V(G)$.
\end{lem}
\begin{proof}
The three terms on the right-hand side represent the original number
of pebbles, the number of pebbles that arrive at $v$, and the number of
pebbles moved away from $v$.
\end{proof}
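The following short sketch (illustrative only, not from the paper) computes $p_{S}$ from the in- and out-degrees of the transition digraph, as in the lemma; a move $(v,w\to u)$ is encoded as the triple \texttt{(v, w, u)}, with $v=w$ for a pebbling move, and the result can be compared against applying the moves of $S$ directly.

```python
# Compute p_S via the degree formula p_S(v) = p(v) + d^-(v)/2 - d^+(v).
from collections import Counter

def transition_degrees(moves):
    d_in, d_out = Counter(), Counter()
    for v, w, u in moves:   # move (v, w -> u): out-edges at v and w, two in-edges at u
        d_out[v] += 1
        d_out[w] += 1
        d_in[u] += 2
    return d_in, d_out

def pebbles_after(p, moves):
    d_in, d_out = transition_degrees(moves)
    return {x: p[x] + d_in[x] / 2 - d_out[x] for x in p}

# Four pebbles at u on the path u - v - w: two pebbling moves toward v, one toward w.
p = {"u": 4, "v": 0, "w": 0}
S = [("u", "u", "v"), ("u", "u", "v"), ("v", "v", "w")]
print(pebbles_after(p, S))  # {'u': 0.0, 'v': 0.0, 'w': 1.0}
```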
We are often interested in the value of $q_{R}(v)-p_{S}(v)$. The
function $\Delta$ defined in the following lemma is going to simplify
our notation. The three parameters of $\Delta$ represent the change
in the number of pebbles, the change in the in-degree and the change
in the out-degree. The proof is a trivial calculation.
\begin{lem}
Define $\Delta(a,b,c)=a+b/2-c$. Then\[
q_{R}(v)-p_{S}(v)=\Delta(q(v)-p(v),d_{T(G,R)}^{-}(v)-d_{T(G,S)}^{-}(v),d_{T(G,R)}^{+}(v)-d_{T(G,S)}^{+}(v)).\]
\end{lem}
If the rubbling sequence $s$ is executable from a pebble distribution
$p$ then we must have $p_{s}\ge0$. This motivates the following
terminology.
\begin{defn}
A multiset $S$ of rubbling moves on $G$ is \emph{balanced} with
a pebble distribution $p$ \emph{at vertex} $v$ if $p_{S}(v)\ge0$.
We say $S$ is \emph{balanced} with $p$ if $S$ is balanced with
$p$ at all $v\in V(G)$, that is, $p_{S}\ge0$. We say that a rubbling
sequence \emph{}$s$ is balanced with $p$ if the multiset of moves
in $s$ is balanced with $p$.
\end{defn}
$S$ is trivially balanced with a pebble distribution at $v$ if $d_{T(G,S)}^{+}(v)=0$.
The balance condition is necessary but not sufficient for a rubbling
sequence to be executable. The pebble distribution $p(u,v,w)=(1,1,1)$
on the cycle $C_{3}$ is balanced with $s=((u,u\to v),(v,v\to w),(w,w\to u))$,
but $s$ is not executable. The problem is caused by the cycle in
the transition digraph. The goal of this section is to overcome this
difficulty.
\begin{defn}
A multiset of rubbling moves or a rubbling sequence is called \emph{acyclic}
if the corresponding transition digraph has no directed cycles. Let
$S$ be a multiset of rubbling moves. An acyclic multiset $R\subseteq S$
is called an \emph{untangling} of $S$ if $p_{R}\ge p_{S}$.
\end{defn}
\begin{prop}
\label{pro:unfolding}Every multiset of rubbling moves has an untangling.
\end{prop}
\begin{figure}
\begin{center}~\input{cycle.inc}\end{center}
\caption{\label{cap:Arrows-representing-moves} Arrows of $T(G,Q)$. The solid
arrows belong to $C$. \protect \\
}
\end{figure}
\begin{proof}
Let $S$ be the multiset of rubbling moves. Suppose that $T(G,S)$
has a directed cycle $C$. Let $Q$ be the multiset of elements of
$S$ corresponding to the arrows of $C$, see Figure~\ref{cap:Arrows-representing-moves}.
We show that $p_{R}\ge p_{S}$ where $R=S\setminus Q$. If $v\in V(C)$
then there is an $a\le-1$ such that\[
p_{R}(v)-p_{S}(v)=\Delta(0,-2,a)=-1-a\ge0.\]
If $v\in V(G)\setminus V(C)$ then there is an $a\le0$ such that\[
p_{R}(v)-p_{S}(v)=\Delta(0,0,a)\ge0.\]
We can repeat this process on $R$ until we eliminate all the cycles.
This can be finished in finitely many steps since every step decreases
the number of edges in $R$. The resulting multiset is an untangling
of $S$.
\end{proof}
Note that a multiset of moves can have several untanglings. Also note
that if a pebble distribution $p$ is balanced with $S$ and $R$
is an untangling of $S$ then $p_{R}\ge p_{S}\ge0$ and so $p$ is
also balanced with $R$.
\begin{lem}
\label{lem:sourse}If $p$ is a pebble distribution on $G$ that is
balanced with the multiset $S$ of moves and $t=(v,w\to u)\in S$
such that $d^{-}(v)=0=d^{-}(w)$ then $t$ is executable from $p$.
\end{lem}
\begin{proof}
If $v\not=w$ then $p(v)\ge d^{+}(v)\ge1$ and $p(w)\ge d^{+}(w)\ge1$.
If $v=w$ then $p(v)\ge d^{+}(v)\ge2$. In both cases $t$ is executable
from $p$.
\end{proof}
\begin{prop}
\label{pro:orderability}If the pebble distribution $p$ on $G$ is
balanced with the acyclic multiset $S$ of rubbling moves then there
is a sequence $s$ of the elements of $S$ such that $s$ is executable
from $p$.
\end{prop}
\begin{proof}
We define $s$ recursively. Let $R_{1}=S$. Since $R_{1}$ is acyclic,
we must have a move $s_{1}=(v_{1},w_{1}\to u_{1})\in R_{1}$ such
that $d_{T(G,R_{1})}^{-}(v_{1})=0=d_{T(G,R_{1})}^{-}(w_{1})$. Then
$s_{1}$ is executable from $p$ by Lemma~\ref{lem:sourse}. Let
$R_{i}=R_{i-1}\setminus\{ s_{i-1}\}$. Then $R_{i}$ is acyclic so
we must have a move $s_{i}=(v_{i},w_{i}\to u_{i})\in R_{i}$ such
that $d_{T(G,R_{i})}^{-}(v_{i})=0=d_{T(G,R_{i})}^{-}(w_{i})$. Then
$p_{(s_{1},\ldots,s_{i-1})}$ is balanced with $R_{i}$ since $(p_{(s_{1},\ldots,s_{i-1})})_{R_{i}}=p_{S}\ge0$
and so $s_{i}$ is executable from $p_{(s_{1},\ldots,s_{i-1})}$.
The sequence $s=(s_{1},\ldots,s_{|S|})$ is an ordering of the elements
of $S$ that is executable from $p$.
\end{proof}
The following is the rubbling version of the No-Cycle Lemma for pebbling
\cite{Betsy,Milans,Moews}.
\begin{lem}
\emph{(No Cycle)} Let $p$ be a pebble distribution on $G$ and $v\in V(G)$.
The following are equivalent.
\end{lem}
\begin{enumerate}
\item $v$ is reachable from $p$.
\item There is a multiset $S$ of rubbling moves such that $S$ is balanced
with $p$ and $p_{S}(v)\ge1$.
\item There is an acyclic multiset $R$ of rubbling moves such that $R$
is balanced with $p$ and $p_{R}(v)\ge1$.
\item $v$ is reachable from $p$ through an acyclic rubbling sequence.
\end{enumerate}
\begin{proof}
If $v$ is reachable from $p$ then there is an executable sequence
$s$ of rubbling moves. The multiset $S$ of rubbling moves of $s$
is balanced with $p$ and $p_{S}(v)\ge1$. So (1) implies (2). If
$S$ satisfies (2) then an untangling $R$ of $S$ satisfies (3).
Suppose $R$ satisfies (3). By Proposition~\ref{pro:orderability},
there is an executable ordering $r$ of the moves of $R$. This $r$
is acyclic and $v$ is reachable through $r$ since $p_{r}(v)=p_{R}(v)\ge1$.
So (3) implies (4). Finally, (4) clearly implies (1).
\end{proof}
\begin{cor}
\label{cor:no-flip-flop}If a vertex is reachable from a pebble distribution
$p$ on $G$ then it is also reachable by a rubbling sequence in which
no move of the form $(v,a\to u)$ is followed by a move of the form
$(u,b\to v)$.
\end{cor}
\section{Basic results}
It is clear from the definition that for all graphs $G$ we have $\rho(G)\le\pi(G)$
where $\pi$ is the pebbling number. For the pebbling number we have
$2^{\text{{\rm diam}}(G)}\le\pi(G)$. This is also true for the rubbling
number. To see this we need to find the rubbling number of a path
first.
\begin{prop}
\label{pro:path}The rubbling number of the path with $n$ vertices
is $\rho(P_{n})=2^{n-1}$.
\end{prop}
\begin{proof}
Let $v_{1},\ldots,v_{n}$ be the consecutive vertices of $P_{n}$.
Let $p(v_{n},*)=(m,0)$ be a pebble distribution from which $v_{1}$
is reachable through the acyclic rubbling sequence $s$. We show that
$m\ge2^{n-1}$. Since $v_{1}$ is reachable and $p(v_{1})=0$, the
balance condition at $v_{1}$ implies that $T(G,s)$ has at least
2 arrows from $v_{2}$ to $v_{1}$ and so $d^{+}(v_{2})\ge2$. Since
$T(G,s)$ has no cycles, there are no arrows from $v_{1}$ to $v_{2}$.
The balance condition at $v_{2}$ now implies that $T(G,s)$ has at
least 4 arrows from $v_{3}$ to $v_{2}$ and so $d^{+}(v_{3})\ge2^{2}$.
An inductive argument shows that $d^{+}(v_{n})\ge2^{n-1}$ and $d^{-}(v_{n})=0$.
The balance condition at $v_{n}$ implies that $m\ge d^{+}(v_{n})\ge2^{n-1}$.
This shows that $2^{n-1}\le\rho(P_{n})$.
It is known \cite{Hurlbert_survey1} that $\pi(P_{n})=2^{n-1}$. The
result now follows from the inequality $2^{n-1}\le\rho(P_{n})\le\pi(P_{n})=2^{n-1}$.
\end{proof}
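As a complement to the proof, the rubbling number of a very small graph can be verified by exhaustive search: every rubbling move decreases the total number of pebbles by one, so the search over move sequences terminates. The sketch below is illustrative only and is not part of the argument; it checks $\rho(P_{3})=4$ and, anticipating the next section, $\rho(K_{3})=2$.

```python
# Brute-force rubbling number of a tiny graph given by a set of ordered edge pairs.
from functools import lru_cache
from itertools import combinations_with_replacement, product

def make_reachable(edges, n):
    @lru_cache(maxsize=None)
    def reachable(p, target):
        if p[target] >= 1:
            return True
        for u in range(n):
            nbrs = [v for v in range(n) if (v, u) in edges]
            # try every rubbling move (v, w -> u); v == w is a pebbling move
            for v, w in combinations_with_replacement(nbrs, 2):
                ok = p[v] >= 2 if v == w else (p[v] >= 1 and p[w] >= 1)
                if ok:
                    q = list(p)
                    q[v] -= 1; q[w] -= 1; q[u] += 1
                    if reachable(tuple(q), target):
                        return True
        return False
    return reachable

def rubbling_number(edges, n, max_m=10):
    reachable = make_reachable(edges, n)
    for m in range(1, max_m + 1):
        distros = (p for p in product(range(m + 1), repeat=n) if sum(p) == m)
        if all(reachable(p, t) for p in distros for t in range(n)):
            return m

path3 = {(0, 1), (1, 0), (1, 2), (2, 1)}                     # P_3
k3 = {(i, j) for i in range(3) for j in range(3) if i != j}  # K_3
print(rubbling_number(path3, 3))  # 4 = 2^{3-1}, matching the proposition
print(rubbling_number(k3, 3))     # 2, matching the proposition on complete graphs
```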
\begin{figure}
\begin{center}~\input{quotient.inc}\end{center}
\caption{\label{cap:quotient}Arrows in $T(G,S)$ representing the possible
types of rubbling moves in $E$. The vertices in the same box are
equivalent. The solid arrows connect equivalent vertices. The calculation
on the left shows the change in $\sum_{i}(\frac{1}{2}d^{-}(v_{i})-d^{+}(v_{i}))$
after the removal of one of the rubbling moves. }
\end{figure}
\begin{prop}
If the graph $G$ has diameter $d$ then $2^{d}\le\rho(G)$.
\end{prop}
\begin{proof}
Let $v_{0}$ and $v_{d}$ be vertices at distance $d$. Let $p(v_{0},*)=(m,0)$
be a pebble distribution from which $v_{d}$ is reachable through
the rubbling sequence $s$. We now build a quotient rubbling problem.
Let $[v]$ be the equivalence class of $v$ in the partition of the
vertices of $G$ according to their distances from $v_{0}$. The quotient
simple graph $H$ is isomorphic to $P_{d+1}$ with leaves $[v_{0}]=\{ v_{0}\}$
and $[v_{d}]$. Let $q([v])=\sum_{w\in[v]}p(w)$ for all $[v]\in V(H)$
and note that $q([v_{0}],*)=(m,0)$. The rubbling sequence $s$ induces
a multiset $R$ of rubbling moves on $H$. We construct this $R$
from the multiset $S$ of rubbling moves of $s$. Let $E$ be the
multiset of moves of $S$ of the form $(v,w\to u)$ where $v\in[u]$
or $w\in[u]$. Define $R$ to be the multiset of moves of the form
$([v],[w]\to[u])$ where $(v,w\to u)$ runs through the elements of
$S\setminus E$.
We show that $R$ is balanced with $q$. Figure~\ref{cap:quotient}
shows the possible types of moves in $E$. The removal of any of these
moves does not decrease the value of $\sum_{v_{i}\in[v]}(\frac{1}{2}d^{-}(v_{i})-d^{+}(v_{i}))$
and so\[
q_{R}([v])=\sum_{v_{i}\in[v]}p_{S\setminus E}(v_{i})\ge\sum_{v_{i}\in[v]}p_{S}(v_{i})\ge0\]
since $p$ is balanced with $S$.
We also have $q_{R}([v_{d}])\ge1$ since $v_{d}$ is reachable and
so $p_{S}(v_{d})\ge1$. Thus $[v_{d}]$ is reachable from $q$ and
so the result now follows from Proposition~\ref{pro:path}.
\end{proof}
For the pebbling number we have $\pi(G)\ge|V(G)|$. This inequality
does not hold for the rubbling number as we can see in the next result.
\begin{prop}
We have the following values for the rubbling number:
\emph{a.} $\rho(K_{n})=2$ for $n\ge2$ where $K_{n}$ is the complete
graph with $n$ vertices\emph{;}
\emph{b.} $\rho(W_{n})=4$ for $n\ge4$ \emph{}where $W_{n}$ is the
wheel with $n$ spokes\emph{;}
\emph{c.} $\rho(K_{m,n})=4$ for $m,n\ge2$ \emph{}where $K_{m,n}$
is a complete bipartite graph\emph{;}
\emph{d.} $\rho(Q^{n})=2^{n}$ \emph{}for $n\ge1$ \emph{}where $Q^{n}$
is the $n$-dimensional hypercube\emph{;}
\emph{e.} $\rho(G)=2^{s+1}$ where $s$ is the number of vertices
in the spine of the caterpillar $G$.
\end{prop}
\begin{proof}
a. A single pebble is clearly not sufficient but any vertex is reachable
with two pebbles using a single move.
b. If we have 4 pebbles then we can move 2 pebbles to the center using
two moves. Then any other vertex is reachable from the center in a
single move. On the other hand $\rho(W_{n})\ge2^{\text{diam}(W_{n})}=2^{2}=4$.
c. It is easy to see that from any pebble distribution of size 4 any
vertex is reachable in at most 3 moves. On the other hand we have
$\rho(K_{m,n})\ge2^{\text{diam}(K_{m,n})}=2^{2}=4$.
d. We know \cite{Chung} that $\pi(Q^{n})=2^{n}$. The result now
follows from the inequality $2^{n}=2^{\text{diam}(Q^{n})}\le\rho(Q^{n})\le\pi(Q^{n})=2^{n}$.
e. The result follows easily from Proposition~\ref{pro:path}.
\end{proof}
\begin{figure}
\begin{center}~\input{petersen.inc}\end{center}
\caption{\label{cap:The-Petersen-graph}The Petersen graph $P$.\protect \\
}
\end{figure}
\begin{prop}
The rubbling number of the Petersen graph $P$ is $\rho(P)=5$.
\end{prop}
\begin{proof}
Consider Figure~\ref{cap:The-Petersen-graph}. It is easy to see
that vertex $w$ is not reachable from the pebble distribution $p(r,s,*)=(3,1,0)$
and so $\rho(P)>4$. To show that $\rho(P)\le5$, assume that a vertex
is not reachable from a pebble distribution $p$ of size 5. Since
$P$ is vertex transitive, we can assume that this vertex is $w$.
Then we must have \[
p(a)+p(b)+p(c)+\left\lfloor \frac{p(q)+p(r)}{2}\right\rfloor +\left\lfloor \frac{p(s)+p(t)}{2}\right\rfloor +\left\lfloor \frac{p(u)+p(v)}{2}\right\rfloor \le1,\]
otherwise we could make the total number of pebbles at vertices $a$,
$b$ and $c$ more than 2 after which $w$ is reachable. This inequality
forces $p(a)=p(b)=p(c)=0$ and two of the remaining terms to be 0
as well. So by symmetry we can assume that the last term is 1 and
all the other terms are 0. Then we must have $p(u)+p(v)=3$ and $p(q)+p(r)=1=p(s)+p(t)$.
A simple case analysis shows that $w$ is reachable from this $p$,
which is a contradiction.
\end{proof}
\section{Squishing}
The following terms are needed for the rubbling version of the squishing
lemma of \cite{Bunde_optimal}. A \emph{thread} in a graph is a path
containing vertices of degree 2. A pebble distribution is \emph{squished}
on a thread $P$ if all the pebbles on $P$ are placed on a single
vertex of $P$ or on two adjacent vertices of $P$.
\begin{lem}
\label{lem:notype2}Let $P$ be a thread in $G$. If vertex $x\not\in V(P)$
is reachable from the pebble distribution $p$ then $x$ is reachable
from $p$ through a rubbling sequence in which there is no strict
rubbling move of the form $(v,w\to u)$ where $u\in V(P)$.
\end{lem}
\begin{proof}
Let $S$ be an acyclic multiset of rubbling moves balanced with $p$
such that $p_{S}(x)\ge1$. Let $E$ be the multiset of strict rubbling
moves of $S$ of the form $(v,w\to u)$ where $u\in V(P)$.
If $e=(v,w\to u)\in E$ then we have $d_{T(G,S\setminus\{ e\})}^{+}(u)=d_{T(G,S)}^{+}(u)=0$
since $S$ is acyclic and so $S\setminus\{ e\}$ is balanced with
$p$ at $u$. It is clear that $p_{S\setminus\{ e\}}(y)\ge p_{S}(y)$
for all $y\in V(G)\setminus\{ u\}$ and so $S\setminus\{ e\}$ is
balanced with $p$. We still know that $S\setminus\{ e\}$ is acyclic
and $p_{S\setminus\{ e\}}(x)\ge1$, so induction shows that $R=S\setminus E$
is balanced with $p$.
By Proposition~\ref{pro:orderability}, there is an ordering $r$
of the elements of $R$ that is executable from $p$. Then $x$ is
reachable through $r$ since $p_{r}(x)=p_{S}(x)\ge1$.
\end{proof}
The following is the rubbling version of the Squishing Lemma for pebbling
\cite{Bunde_optimal}.
\begin{lem}
\emph{(Squishing)} If vertex $v$ is not reachable from a pebble distribution
with size $n$ then there is a pebble distribution $r$ of size $n$
that is squished on each thread not containing $v$ such that $v$
is not reachable from $r$ either.
\end{lem}
\begin{proof}
The result follows from \cite[Lemma 4]{Bunde_optimal} and Lemma~\ref{lem:notype2}.
\end{proof}
\section{Rubbling $C_{n}$}
The Squishing Lemma allows us to find the rubbling numbers of cycles.
For the pebbling numbers of $C_{n}$ see \cite{Pachter,Bunde_optimal}.
\begin{prop}
The rubbling number of an even cycle is $\rho(C_{2k})=2^{k}$.
\end{prop}
\begin{proof}
It is well known \cite{Pachter} that $\pi(C_{2k})=2^{k}$. The result
now follows since\[
2^{k}=2^{\text{diam}(C_{2k})}\le\rho(C_{2k})\le\pi(C_{2k})=2^{k}.\]
\end{proof}
\begin{prop}
The rubbling number of an odd cycle is $\rho(C_{2k+1})=\lfloor\frac{7\cdot2^{k-1}-2}{3}\rfloor+1$.
\end{prop}
\begin{proof}
Let $C_{2k+1}$ be the cycle with consecutive vertices \[
x_{k},x_{k-1},\ldots,x_{1},v,y_{1},y_{2},\ldots,y_{k},x_{k}.\]
First we show that $\rho(C_{2k+1})\le\lfloor\frac{7\cdot2^{k-1}-2}{3}\rfloor+1$.
Let $p$ be a pebble distribution on $C_{2k+1}$ from which not every
vertex is reachable. It suffices to show that $p$ contains at most
$\lfloor\frac{7\cdot2^{k-1}-2}{3}\rfloor$ pebbles. By symmetry, we
can assume that $v$ is the vertex that is not reachable from $p$.
By the Squishing Lemma, we can assume that $p$ is squished on the
thread with consecutive vertices $y_{1},\ldots,y_{k},x_{k},\ldots,x_{1}$.
First we consider the case when all the pebbles are at distance $k$
from $v$, that is, $p(x_{k},y_{k},*)=(a,b,0)$. By symmetry, we can
assume that $0\le a\le b$. Then we must have\begin{equation}
\left\lfloor \frac{a}{2}\right\rfloor +b\le2^{k}-1,\label{eq:1}\end{equation}
otherwise we could move $\lfloor\frac{a}{2}\rfloor$ pebbles from
vertex $x_{k}$ to vertex $y_{k}$ and then reach $v$ from $y_{k}$.
Hence $\frac{a}{2}<\left\lfloor \frac{a}{2}\right\rfloor +1\le2^{k}-1-b+1=2^{k}-b$
and so\begin{equation}
a+2b\le2^{k+1}-1.\label{eq:1a}\end{equation}
We also must have \begin{equation}
\left\lfloor \frac{b-2^{k-1}}{2}\right\rfloor +a\le2^{k-1}-1,\label{eq:2}\end{equation}
otherwise we could move $\lfloor\frac{b-2^{k-1}}{2}\rfloor$ pebbles
from vertex $y_{k}$ to vertex $x_{k}$ after which $x_{1}$ is reachable
from $x_{k}$ and $y_{1}$ is reachable from $y_{k}$, and so $v$
would be reachable by the move $(x_{1},y_{1}\to v)$. Hence $\frac{b-2^{k-1}}{2}<\left\lfloor \frac{b-2^{k-1}}{2}\right\rfloor +1\le2^{k-1}-1-a+1=2^{k-1}-a$
and so \begin{equation}
b+2a\le2^{k}+2^{k-1}-1.\label{eq:2a}\end{equation}
Adding (\ref{eq:1a}) and (\ref{eq:2a}) gives\[
3(a+b)\le2^{k+1}-1+2^{k}+2^{k-1}-1=7\cdot2^{k-1}-2,\]
which shows that $|p|=a+b\le\lfloor\frac{7\cdot2^{k-1}-2}{3}\rfloor$.
Now we consider the case when some pebbles are closer to $v$ than
$k$, that is, $p(x_{i},x_{i+1},*)=(b,a,0)$ with $b\ge1$ and $a\ge0$
for some $1\le i<k$. Then we must have $\left\lfloor \frac{a}{2}\right\rfloor +b\le2^{i}-1\le2^{k-1}-1$
otherwise $v$ is reachable. Hence\begin{eqnarray*}
|p| & = & a+b\le a-\left\lfloor \frac{a}{2}\right\rfloor +\left\lfloor \frac{a}{2}\right\rfloor +b\\
& \le & \left\lfloor \frac{a}{2}\right\rfloor +1+2^{k-1}-1\le2^{k-1}-1-b+1+2^{k-1}-1\\
& = & 2\cdot2^{k-1}-2<\left\lfloor \frac{7\cdot2^{k-1}-2}{3}\right\rfloor .\end{eqnarray*}
Now we show that we can always distribute $\lfloor\frac{7\cdot2^{k-1}-2}{3}\rfloor$
pebbles so that $v$ is unreachable and so $\rho(C_{2k+1})\ge\lfloor\frac{7\cdot2^{k-1}-2}{3}\rfloor+1$.
Let $a=\lfloor\frac{2^{k}}{3}\rfloor$ and $b=\lfloor\frac{5\cdot2^{k-1}}{3}\rfloor$.
It is easy to check that \[
a=\begin{cases}
\frac{2^{k}-2}{3}, & \text{$k$ odd}\\
\frac{2^{k}-1}{3}, & \text{$k$ even}\end{cases},\ b=\begin{cases}
\frac{5\cdot2^{k-1}-2}{3}, & \text{$k$ odd}\\
\frac{5\cdot2^{k-1}-1}{3}, & \text{$k$ even}\end{cases},\ \left\lfloor \frac{7\cdot2^{k-1}-2}{3}\right\rfloor =\begin{cases}
\frac{7\cdot2^{k-1}-4}{3}, & \text{$k$ odd}\\
\frac{7\cdot2^{k-1}-2}{3}, & \text{$k$ even}\end{cases}\]
and so $a+b=\lfloor\frac{7\cdot2^{k-1}-2}{3}\rfloor$. We show that
$v$ is unreachable from the pebble distribution $p(x_{k},y_{k},*)=(a,b,0)$.
It is easy to see that $a$ and $b$ satisfy (\ref{eq:1a}) and (\ref{eq:2a}).
Suppose that $v$ is reachable from $p$, that is, there is an acyclic
multiset $S$ of rubbling moves that is balanced with $p$ satisfying
$p_{S}(v)\ge1$. The balance condition at $v$ shows that $d^{-}(v)\ge2$.
Hence $S$ must have at least one of $(x_{1},y_{1}\to v)$, $(x_{1},x_{1}\to v)$
or $(y_{1},y_{1}\to v)$.
First assume that $(x_{1},y_{1}\to v)\in S$. The argument used in
the proof of Proposition~\ref{pro:path} shows that then $T(G,S)$
has at least $2^{i-1}$ arrows from $x_{i}$ to $x_{i-1}$ and from
$y_{i}$ to $y_{i-1}$ for all $i\in\{2,\ldots,k\}$. Since $S$ is
acyclic, any arrow in $T(G,S)$ pointing to $x_{k}$ must come from
$y_{k}$. So the balance condition at $x_{k}$ requires $m$ arrows
from $y_{k}$ to $x_{k}$ satisfying $2^{k-1}\le a+\frac{m}{2}$.
The balance condition at $y_{k}$ gives $2^{k-1}+m\le b$. Combining
the two inequalities gives $2^{k}+2^{k-1}\le b+2a$ which contradicts
(\ref{eq:2a}).
Next assume that $(y_{1},y_{1}\to v)\in S$. Then $T(G,S)$ has at
least $2^{i}$ arrows from $y_{i}$ to $y_{i-1}$ for all $i\in\{2,\ldots,k\}$.
The balance condition at $y_{k}$ requires $m$ arrows from $x_{k}$
to $y_{k}$ satisfying $2^{k}\le b+\frac{m}{2}$. We must have $d^{-}(x_{k})=0$,
otherwise there is a directed path from $v$ to $x_{k}$ which is
impossible since $S$ is acyclic. The balance condition at $x_{k}$
gives $m\le a$. Combining the two inequalities gives $2^{k+1}\le a+2b$
which contradicts (\ref{eq:1a}).
A similar argument shows that $(x_{1},x_{1}\to v)\in S$ is also impossible.
\end{proof}
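The closed forms used at the end of the proof can be checked mechanically; the following short sketch (illustrative only) verifies that $a=\lfloor 2^{k}/3\rfloor$ and $b=\lfloor 5\cdot2^{k-1}/3\rfloor$ sum to $\lfloor(7\cdot2^{k-1}-2)/3\rfloor$ and satisfy inequalities (\ref{eq:1a}) and (\ref{eq:2a}) for small $k$.

```python
# Check a + b = floor((7*2^(k-1) - 2)/3) together with inequalities (1a) and (2a).
for k in range(1, 30):
    a = 2**k // 3
    b = 5 * 2**(k - 1) // 3
    assert a + b == (7 * 2**(k - 1) - 2) // 3
    assert a + 2 * b <= 2**(k + 1) - 1           # (eq:1a)
    assert b + 2 * a <= 2**k + 2**(k - 1) - 1    # (eq:2a)
print("verified for k = 1, ..., 29")
```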
\section{Optimal rubbling}
Optimal pebbling was studied in \cite{Pachter,Moews_optimal,Fu,Bunde_optimal}.
In this section we investigate the optimal rubbling number of certain
graphs.
\begin{defn}
The \emph{optimal rubbling number} $\rho_{\text{opt}}(G)$ of a graph
$G$ is the minimum number $m$ for which there is a pebble distribution
of size $m$ from which every vertex of $G$ is reachable.
\end{defn}
\begin{prop}
We have the following values for the optimal rubbling number:
\emph{a.} $\rho_{\text{{\rm opt}}}(K_{n})=2$ for $n\ge2$ where $K_{n}$
is the complete graph with $n$ vertices\emph{;}
\emph{b.} $\rho_{{\rm opt}}(W_{n})=2$ for $n\ge4$ \emph{}where $W_{n}$
is the wheel with $n$ spokes\emph{;}
\emph{c.} $\rho_{\text{{\rm opt}}}(K_{m,n})=3$ for $m,n\ge3$ \emph{}where
$K_{m,n}$ is the complete bipartite graph\emph{;}
\emph{d}. $\rho_{{\rm opt}}(P)=4$ where $P$ is the Petersen graph.
\end{prop}
\begin{proof}
a. Not every vertex of $K_{n}$ is reachable from a distribution of
size 1 since $n\ge2$. On the other hand any vertex is reachable by
a single move from any distribution of size 2.
b. Again, not every vertex of $W_{n}$ is reachable from a distribution
of size 1. On the other hand, every vertex is reachable from the distribution
that has 2 pebbles at the center of $W_{n}$.
c. Let $A$ and $B$ be the natural partition of the vertex set of
$K_{m,n}$. Let $p$ be a pebble distribution of size 2. If $p$ places
both pebbles on vertices in $A$ then there is a vertex in $A$ that
is not reachable from $p$. If $p$ places both pebbles on vertices
in $B$ then there is a vertex in $B$ that is not reachable from
$p$. If $p$ places one pebble on a vertex in $A$ and one pebble
on a vertex in $B$ then both $A$ and $B$ have vertices that are
unreachable from $p$. On the other hand any vertex is reachable in
at most two moves from a pebble distribution that places one pebble
on a vertex in $A$ and two pebbles on a vertex in $B$.
d. Every vertex is reachable from the pebble distribution that has
4 pebbles on any of the vertices. A simple case analysis shows that
3 pebbles are not sufficient to make every vertex reachable.
\end{proof}
Rolling moves serve the same purpose as the smoothing move of \cite{Bunde_optimal}.
\begin{defn}
Let $v_{1},\ldots,v_{n}$ be the consecutive vertices of a path such
that the degree of $v_{1}$ is 1 and the degrees of $v_{2},v_{3},\ldots,v_{n-1}$
are all 2. The subgraph induced by $\{ v_{1},\ldots,v_{n}\}$ is called
an \emph{arm} of the graph. Let $p$ be a pebble distribution such
that $p(v_{i})\ge2$ for some $i\in\{1,\ldots,n-1\}$, $p(v_{n})=0$,
and $p(v_{j})\ge1$ for all $j\in\{1,\ldots,n-1\}$. A \emph{single
rolling move} creates a new pebble distribution $q$ by taking one
pebble from $v_{i}$ and placing it on $v_{n}$, that is $q(v_{i},v_{n},*)=(p(v_{i})-1,1,p(*))$.
See Figure~\ref{cap:rollvis}.
\begin{figure}
~\input{rollvis.inc}
\caption{\label{cap:rollvis}Visualization of a single rolling move with $i=2$
and $n=5$. An arrow indicates the transfer of a single pebble.}
\end{figure}
\end{defn}
\begin{lem}
\label{lem:roll}Let $q$ be a pebble distribution on $G$ gotten
from the pebble distribution $p$ by applying a single rolling move
from $v_{i}$ to $v_{n}$ on the arm with vertices $v_{1},\ldots,v_{n}$.
If vertex $u\in G$ is reachable from $p$ then $u$ is also reachable
from $q$.
\begin{figure}
\begin{center}~\input{roll.inc}\end{center}
\caption{\label{cap:roll}Four possible configurations for $T(G,S\setminus R)$.
The solid arrows represent the arrows of $P$.}
\end{figure}
\end{lem}
\begin{proof}
If $u$ is a vertex of the arm then it is clearly reachable from $q$
so we can assume that $u$ is not on the arm. Let $S$ be an acyclic
multiset of rubbling moves balanced with $p$ such that $p_{S}(u)\ge1$.
Let $P$ be a maximum length directed path in $T(G,S)$ starting at
$v_{i}$ and not going further than $v_{n}$. Then $P$ has consecutive
vertices $v_{i}=v_{n_{0}},v_{n_{1}},\ldots,v_{n_{k}}$ on the arm.
Let $R$ be the multiset containing the elements of $S$ without the
moves corresponding to the arrows of $P$. We show that $R$ is balanced
with $q$ and so $u$ is reachable from $q$ since $q_{R}(u)=p_{S}(u)\ge1$.
Figure ~\ref{cap:roll} shows the possible configurations for $T(G,S\setminus R)$.
We have $d_{T(G,S)}^{+}(v_{n_{k}})=0$ even if $n_{k}=1$. If $n_{k}=n$
then \[
q_{R}(v_{n_{k}})=p_{S}(v_{n_{k}})+\Delta(1,-2,0)=p_{S}(v_{n_{k}})\ge1\ge0,\]
while if $n_{k}\not=n$ then\[
q_{R}(v_{n_{k}})=p_{S}(v_{n_{k}})+\Delta(0,-2,0)\ge p_{S}(v_{n_{k}})-1\ge2-1\ge0.\]
So $R$ is balanced with $q$ at $v_{n_{k}}$. If $d_{T(G,S)}^{+}(v_{n_{0}})=0$
then $n_{0}=n_{k}$, otherwise there is an $a\in\{-1,-2\}$ such that\[
q_{R}(v_{n_{0}})=p_{S}(v_{n_{0}})+\Delta(-1,0,a)\ge p_{S}(v_{n_{0}})\ge0\]
and so $R$ is balanced with $q$ at $v_{n_{0}}$. If $0<j<k$ then
there is an $a\in\{-1,-2\}$ such that\[
q_{R}(v_{n_{j}})=p_{S}(v_{n_{j}})+\Delta(0,-2,a)\ge p_{S}(v_{n_{j}})\ge0\]
and so $R$ is balanced with $q$ at $v_{n_{j}}.$ It is clear that
$R$ is balanced with $q$ at every other vertex.
\end{proof}
\begin{defn}
Let $v_{1},\ldots,v_{n}$ be the consecutive vertices of a path such
that the degrees of $v_{2},v_{3},\ldots,v_{n-1}$ are all 2. Let $p$
be a pebble distribution such that $p(v_{1})=0=p(v_{n})$, $p(v_{i})\ge2$
for some $i\in\{2,\ldots,n-1\}$ and $p(v_{j})\ge1$ for all $j\in\{2,\ldots,n-1\}$.
A \emph{double rolling move} creates a new pebble distribution $q$
by taking two pebbles from $v_{i}$ and placing one pebble on $v_{1}$
and one pebble on $v_{n}$, that is $q(v_{i},v_{1},v_{n},*)=(p(v_{i})-2,1,1,p(*))$.
See Figure~\ref{cap:drollvis}.
\begin{figure}
~\input{drollvis.inc}
\caption{\label{cap:drollvis}Visualization of a double rolling move with
$i=2$ and $n=5$. An arrow indicates the transfer of a single pebble.}
\end{figure}
\end{defn}
\begin{lem}
\label{lem:droll}Let $q$ be a pebble distribution on $G$ gotten
from the pebble distribution $p$ by applying a double rolling move
from vertex $v_{i}$ to vertices $v_{1}$ and $v_{n}$ on the path
with consecutive vertices $v_{1},\ldots,v_{n}$. If vertex $u\in G$
is reachable from $p$ then $u$ is also reachable from $q$.
\end{lem}
\begin{proof}
If $u\in\{ v_{1},\ldots,v_{n}\}$ then it is clearly reachable from
$q$ so we can assume that $u\not\in\{ v_{1},\ldots,v_{n}\}$. Let
$S$ be an acyclic multiset of rubbling moves balanced with $p$ such
that $p_{S}(u)\ge1$. Let $P$ be a maximum length directed path in
$T(G,S)$ starting at $v_{i}$ and not going further than $v_{1}$
or $v_{n}$. Then $P$ has consecutive vertices $v_{i}=v_{n_{0}},v_{n_{1}},\ldots,v_{n_{k}}\in\{ v_{1},\ldots,v_{n}\}$.
Let $R$ be the multiset containing the elements of $S$ without the
moves corresponding to the arrows of $P$. An argument similar to
the one in the proof of Lemma~\ref{lem:roll} shows that $R$ is
clearly balanced with $q$ at every vertex except maybe at $v_{i}$.
If $n_{k}=n_{0}$ or the arrow $(v_{n_{0}},v_{n_{1}})$ in $P$ corresponds
to a pebbling move, then $R$ is balanced with $q$ at $v_{i}$ as
well. Then $u$ is reachable from $q$ since $q_{R}(u)=p_{S}(u)\ge1$.
So we can assume that $(v_{n_{0}},v_{n_{1}})$ corresponds to a strict
rubbling move and that $k=1$. Let $\tilde{P}$ be a maximum length
path in $T(G,R)$. Since $k=1$, the length of $\tilde{P}$ is either
0 or 1. If this length is 0, then $q$ is balanced with $R$ at $v_{i}$
since $d_{T(G,R)}^{+}(v_{i})=0$ and we are done. If the length of
$\tilde{P}$ is 1, then let $\tilde{R}$ be the multiset containing
the elements of $R$ without the moves corresponding to the arrows
of $\tilde{P}$. Figure~\ref{cap:droll} shows the possibilities
for $T(G,S\setminus\tilde{R})$. It is easy to check that $\tilde{R}$
is balanced with $q$ in each case. Thus $u$ is reachable from $q$
since $q_{\tilde{R}}(u)\ge p_{S}(u)$.
\end{proof}
\begin{figure}
\begin{center}~\input{droll.inc}\end{center}
\caption{\label{cap:droll}The four possible configurations for $T(G,S\setminus\tilde{R})$.
The solid arrows represent the moves corresponding to the arrows of
$\tilde{P}$. The dotted arrows represent the moves corresponding
to the arrows of $P$.}
\end{figure}
Rolling moves make it possible to find the optimal rubbling number
of paths and cycles.
\begin{prop}
\emph{The optimal rubbling number of the path is} $\rho_{\text{{\rm opt}}}(P_{n})=\lceil\frac{n+1}{2}\rceil$.
\end{prop}
\begin{proof}
Let $P_{n}$ be the path with consecutive vertices $v_{1},\ldots,v_{n}$.
It is clear that every vertex is reachable from the pebble distribution\[
p(v_{i})=\begin{cases}
1, & \text{$i$ is odd or $i=n$}\\
0, & \text{else}\end{cases}\]
which has size $\lceil\frac{n+1}{2}\rceil$.
Now assume that there is a pebble distribution of size $\lceil\frac{n+1}{2}\rceil-1$
from which every vertex of $P_{n}$ is reachable. Let us apply all
available rolling moves (single or double). The process ends in finitely
many steps since a rolling move reduces the number of pebbles on vertices
with more than one pebble by at least one. If there is a vertex with
more than one pebble and a vertex with no pebbles, then a rolling
move is available. The number of pebbles is not larger than the number
of vertices, so the resulting pebble distribution $q$ has at most
one pebble on each vertex. Every vertex of $P_{n}$ still must be
reachable from $q$ by Lemma~\ref{lem:droll}.
The only moves executable directly from $q$ are strict rubbling moves.
By the No Cycle Lemma we can assume that every vertex is reachable
by a sequence of moves in which a strict rubbling move $(x,y\to z)$
is not followed by a move of the form $(z,z\to x)$ or $(z,z\to y)$.
So we can assume that every vertex is reachable through strict rubbling
moves. Then we must have $q(v_{1})=1=q(v_{n})$ otherwise $v_{1}$
or $v_{n}$ is not reachable. A pigeonhole argument shows that there
must be two neighboring vertices $u$ and $w$ such that $q(u)=0=q(w)$.
But then neither $u$ nor $w$ is reachable from $q$, which is a
contradiction.
\end{proof}
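As with the rubbling number of paths, the formula can be confirmed by exhaustive search on very small paths; the sketch below is illustrative only, with the existential quantifier over distributions replacing the universal one used for the rubbling number.

```python
# Brute-force optimal rubbling number of short paths.
from itertools import combinations_with_replacement, product

def reachable(p, target, edges):
    if p[target] >= 1:
        return True
    n = len(p)
    for u in range(n):
        nbrs = [v for v in range(n) if (v, u) in edges]
        for v, w in combinations_with_replacement(nbrs, 2):
            ok = p[v] >= 2 if v == w else (p[v] >= 1 and p[w] >= 1)
            if ok:
                q = list(p)
                q[v] -= 1; q[w] -= 1; q[u] += 1
                if reachable(tuple(q), target, edges):
                    return True
    return False

def optimal_rubbling_number(n, edges):
    for m in range(1, 2 * n):
        for p in product(range(m + 1), repeat=n):
            if sum(p) == m and all(reachable(p, t, edges) for t in range(n)):
                return m

for n in range(2, 6):
    edges = {(i, i + 1) for i in range(n - 1)} | {(i + 1, i) for i in range(n - 1)}
    assert optimal_rubbling_number(n, edges) == (n + 2) // 2  # ceil((n+1)/2)
print("checked P_2, ..., P_5")
```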
\begin{prop}
The optimal rubbling number of the cycle is $\rho_{\text{{\rm opt}}}(C_{n})=\lceil\frac{n}{2}\rceil$
for $n\ge3$.
\end{prop}
\begin{proof}
Let $C_{n}$ be the cycle with consecutive vertices $v_{1},\ldots,v_{n}$.
It is clear that every vertex is reachable from the pebble distribution\[
p(v_{i})=\begin{cases}
1, & \text{$i$ is odd}\\
0, & \text{else}\end{cases}\]
which has size $\lceil\frac{n}{2}\rceil$.
Now assume that there is a pebble distribution of size $\lceil\frac{n}{2}\rceil-1$
from which every vertex of $C_{n}$ is reachable. Let us apply all
available double rolling moves. The process ends in finitely many
steps since a double rolling move reduces the number of pebbles on
vertices with more than one pebble by two. If there is a vertex with
more than one pebble and two vertices with no pebbles, then a double
rolling move is available. The number of pebbles is smaller than the
number of vertices, so the resulting pebble distribution $q$ has
at most one pebble on each vertex. Every vertex of $C_{n}$ still
must be reachable from $q$.
The only moves executable directly from $q$ are strict rubbling moves.
The No Cycle Lemma implies that we can assume that every vertex is
reachable through strict rubbling moves. A pigeonhole argument shows
that there must be two neighboring vertices $u$ and $w$ such that $q(u)=0=q(w)$.
But then neither $u$ nor $w$ is reachable from $q$ which is a contradiction.
\end{proof}
\section{Further questions}
There are plenty of unanswered questions. The following might not
be too hard to answer.
\begin{itemize}
\item What is the optimal rubbling number for the hypercube $Q^{n}$? It
is fairly easy to get answers for small $n$ with a computer. The
known values are listed in Table~\ref{cap:Known-rho-opt-hyper}.%
\begin{table}
\begin{tabular}{|c|c|c|c|c|}
\hline
$n$ & 2 & 3 & 4 & 5\tabularnewline
\hline
$\rho(B_{n})$ & 4 & 16 & $>23$ & \tabularnewline
\hline
$\rho_{{\rm opt}}(B_{n})$ & 2 & 4 & 6 & \tabularnewline
\hline
$\rho_{{\rm opt}}(Q^{n})$ & 2 & 3 & 4 & 6\tabularnewline
\hline
\end{tabular}
\caption{\label{cap:Known-rho-opt-hyper}Rubbling values without a known general formula.}
\end{table}
\item Does Graham's conjecture hold for the rubbling number?
\item Is the cover rubbling number the same as the cover pebbling number
for every graph?
\item We have $\pi(P_{n})=\rho(P_{n})$, $\pi(Q^{n})=\rho(Q^{n})$ and it
is easy to check that $\pi(L)=8=\rho(L)$ where $L$ is the Lemke
graph \cite{Hurlbert_survey2}. This is not always the case though.
Is it possible to characterize those graphs for which the pebbling
and the rubbling numbers are the same?
\item Let $f(d,n)=\max\{\rho(G)\mid|V(G)|=n\text{ and diam}(G)=d\}$. It
is not hard to check that $f(2,n)\le5$ and $f(3,n)\le9$ for $n\in\{1,\ldots,7\}$.
Do these upper bounds hold for all $n$? Is it true that $f(d,n)\le2^{d}+1$
for all $d$ and $n$?
\end{itemize}
\bibliographystyle{amsplain}
| {
"timestamp": "2007-07-28T21:32:08",
"yymm": "0707",
"arxiv_id": "0707.4256",
"language": "en",
"url": "https://arxiv.org/abs/0707.4256",
"abstract": "A pebbling move on a graph removes two pebbles at a vertex and adds one pebble at an adjacent vertex. Rubbling is a version of pebbling where an additional move is allowed. In this new move one pebble is removed at vertices v and w adjacent to a vertex u and an extra pebble is added at vertex u. A vertex is reachable from a pebble distribution if it is possible to move a pebble to that vertex using rubbling moves. The rubbling number of a graph is the smallest number m needed to guarantee that any vertex is reachable from any pebble distribution of m pebbles. The optimal rubbling number is the smallest number m needed to guarantee a pebble distribution of m pebbles from which any vertex is reachable. We determine the rubbling and optimal rubbling number of some families of graphs including cycles.",
"subjects": "Combinatorics (math.CO)",
"title": "Rubbling and Optimal Rubbling of Graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9719924785827002,
"lm_q2_score": 0.7279754430043072,
"lm_q1q2_score": 0.7075866551930957
} |
https://arxiv.org/abs/2110.05266 | Chaos as an interpretable benchmark for forecasting and data-driven modelling | The striking fractal geometry of strange attractors underscores the generative nature of chaos: like probability distributions, chaotic systems can be repeatedly measured to produce arbitrarily-detailed information about the underlying attractor. Chaotic systems thus pose a unique challenge to modern statistical learning techniques, while retaining quantifiable mathematical properties that make them controllable and interpretable as benchmarks. Here, we present a growing database currently comprising 131 known chaotic dynamical systems spanning fields such as astrophysics, climatology, and biochemistry. Each system is paired with precomputed multivariate and univariate time series. Our dataset has comparable scale to existing static time series databases; however, our systems can be re-integrated to produce additional datasets of arbitrary length and granularity. Our dataset is annotated with known mathematical properties of each system, and we perform feature analysis to broadly categorize the diverse dynamics present across the collection. Chaotic systems inherently challenge forecasting models, and across extensive benchmarks we correlate forecasting performance with the degree of chaos present. We also exploit the unique generative properties of our dataset in several proof-of-concept experiments: surrogate transfer learning to improve time series classification, importance sampling to accelerate model training, and benchmarking symbolic regression algorithms. | \section{Introduction}
Two trajectories emanating from distinct locations on a strange attractor will never recur nor intersect, a basic mathematical property that underlies the complex geometry of chaos. As a result, measurements drawn from a chaotic system are deterministic yet non-repeating, even at finite resolution \cite{crutchfield1982symbolic,cvitanovic2005chaos}. Thus, while representations of chaotic systems are finite (e.g. differential equations or discrete maps), they can indefinitely generate new data, allowing the fractal structure of the attractor to be resolved in ever-increasing detail \cite{farmer1982information}. This interplay between the boundedness of the attractor and non-recurrence of the dynamics is responsible for the complexity of diverse systems, ranging from the intricate gyrations of orbiting stars to the irregular spiking of neuronal ensembles \cite{grebogi1987chaos,ott2002chaos}.
Chaotic systems thus represent a unique testbed for modern statistical learning techniques. Their unpredictability challenges traditional forecasting methods, while their fractal geometry precludes concise representations \cite{tang2020introduction}. While modeling and forecasting chaos remains a fundamental problem in its own right \cite{pathak2018model,boffetta2002predictability}, many prior works on general time series analysis and data-driven model inference have used specific chaotic systems (such as the Lorenz "butterfly" attractor) as toy problems in order to demonstrate method performance in a controlled setting \cite{nassar2018tree,champion2019data,costa2019adaptive,gilpin2020deep,greydanus2019hamiltonian,lu2020supervised,yu2017long,lu2017reservoir,bellot2021consistency,wang2021reconstructing,li2020scalable,ma2007chaotic}. In this context, there are several advantages to chaotic systems as benchmarks for time series analysis and data-driven modelling: (1) Chaotic systems have provably complex dynamics, which arise due to underlying mathematical structure, rather than complex representation of otherwise simple latent dynamics. (2) Existing time series databases contain datasets chosen primarily for availability or applicability, rather than for having innate properties (e.g. complexity, quasiperiodicity, dimensionality) that span the range of possible behaviors time series may exhibit. (3) Chaotic systems have accessible generating processes, making it possible to obtain new data and representations, and for the benchmark to be related to mechanistic details of the underlying system. These properties suggest that chaotic systems can aid in interpreting the properties of complex models \cite{tang2020introduction,kantz2004nonlinear}. However, chaotic systems as benchmarks lack standardization, and prior works' emphasis on single systems like the Lorenz attractor may undermine generalizability. Moreover, focusing on isolated systems neglects the diversity of dynamics in known chaotic systems, thereby preventing systematic quantification and interpretation of algorithm performance relative to the mathematical properties of different systems.
Here, we present a growing database of low-dimensional chaotic systems drawn from published work in diverse domains such as meteorology, neuroscience, hydrodynamics, and astrophysics. Each system is represented by several multivariate time series drawn from the dynamics, annotations of known mathematical properties, and an explicit analytical form that can be re-integrated to generate new time series of arbitrary length, stochasticity, and granularity. We provide extensive forecasting benchmarks across our systems, allowing us to interpret the empirical performance of different forecasting techniques in the context of mathematical properties such as system chaoticity. Our dataset improves the interpretability of time series algorithms by allowing methods to be compared across time series with different intrinsic properties and underlying generating processes---thereby complementing existing interpretability methods that identify salient feature sets or time windows within single time series \cite{ismail2020benchmarking,lim2021temporal}. We also consider applications to data-driven modelling in the form of symbolic regression and neural ordinary differential equations tasks, and we show the surprising result that the accuracy of a symbolic regression-derived formula can correlate with mathematical properties of the dynamics produced by the formula. Finally, we demonstrate unique applications enabled by the ability to re-integrate our dataset: we pre-train a timescale-matched feature extractor for an existing time series classification benchmark, and we accelerate training of a forecast model by importance sampling sparse regions on the dynamical attractor.
\section{Description of Datasets}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{fig1_stats.jpg}
\caption{{\bf Properties of the chaotic dynamical systems dataset.} (A) Embeddings of $131$ chaotic dynamical systems. Points correspond to average embeddings of individual systems, and shading shows ranges over many random initial conditions. Colors correspond to an unsupervised clustering, and example dynamics for each cluster are shown. (B) Distributions of key mathematical properties across the dataset.
}
\label{fig_stats}
\vspace{-6mm}
\end{figure}
\paragraph{Scope.} The diverse dynamical systems in our dataset span astrophysics, neuroscience, ecology, climatology, hydrodynamics, and many other domains. The supplementary material contains a glossary defining key terms from dynamical systems theory relevant to our dataset. Each entry in our dataset represents a single dynamical system, such as the Lorenz attractor, that takes an initial condition as input and outputs a trajectory representing the input point's location as time evolves. Systems are chosen based on prior appearance as named systems in published works. In order to provide a consistent test for time series models, we define chaos in the mathematical sense: two copies of a system prepared in infinitesimally different initial states will exponentially diverge over time. We also focus particularly on chaotic systems that produce low-dimensional strange attractors, which are fractal structures that display bounded, stationary, and ergodic dynamics with quantifiable mathematical properties. As a result, we exclude transient chaos and chaotic {\it repellers} (chaotic regions that trajectories eventually escape) \cite{tel2015joy,chen2017slim,grebogi1986critical}, as well as most nonchaotic strange attractors save for one paradigmatic example: a quasiperiodic two-dimensional torus \cite{grebogi1984strange}.
\paragraph{Scale and structure.} Our extensible collection currently comprises $131$ previously-published and named chaotic dynamical systems. Each record includes a compilable implementation of the system, a citation reference, default initial conditions on the attractor, precomputed train and test trajectories from different initial conditions at both coarse and fine granularities, and an optimal integration timestep and dominant timescale (used for aligning timescales across systems). For each of the $131$ systems, we include $16$ precomputed trajectories corresponding to all combinations of the following variations per system: coarse and fine sampling granularity, train and test splits emanating from different initial conditions, multivariate and univariate views, and trajectories with and without Brownian noise influencing the dynamics. Because certain data-driven modelling methods, such as our symbolic regression task below, require gradient information, we also include with each system precomputed train and test regression datasets corresponding to trajectories and time derivatives along them.
Figure S1 shows the attractors for all systems, and Table S1 includes brief summaries of their origin and applications. While there are an infinite number of possible chaotic dynamical systems, our work represents, to our knowledge, the first effort to survey and reproduce previously-published chaotic systems. For this reason, while our dataset is readily extensible to new systems, the primary bottleneck as we expand our database is the need to manually reproduce claimed chaotic dynamics, and to identify appropriate parameter values and initial conditions based on published reports. Broadly, our work can be considered a systematization of previous studies that benchmark methods on single chaotic systems such as the Lorenz attractor \cite{nassar2018tree,champion2019data,costa2019adaptive,gilpin2020deep,greydanus2019hamiltonian,lu2020supervised,yu2017long,lu2017reservoir,bellot2021consistency,wang2021reconstructing,li2020scalable,ma2007chaotic}.
\paragraph{Annotations.} For each system, we calculate and include precise estimates of several standard mathematical characteristics of chaotic systems. More detailed definitions are included in the appendix.\smallskip\\
\textit{The largest Lyapunov exponent} measures the degree to which nearby trajectories diverge, a common measure of the degree of chaos present in a system.\smallskip\\
\textit{The Lyapunov exponent spectrum} determines the tendency of trajectories to locally converge or diverge across the attractor. All continuous-time chaotic systems have at least one positive exponent, exactly one zero exponent (due to time translation), and, for dissipative systems (i.e., those converging to an attractor), at least one negative exponent \cite{sommerer1993particles}. \smallskip\\
\textit{The correlation dimension} measures an attractor's effective fractal dimension, which informally indicates the intricacy of its geometric structure \cite{grassberger1983measuring,grebogi1987chaos}. Integer fractal dimensions indicate familiar geometric forms: a line has dimension one, a plane has two, and a filled solid three. Non-integer values correspond to fractals that fill space in a manner intermediate to the two nearest integers. \smallskip\\
\textit{The multiscale entropy} represents the degree to which complex dynamics persist across timescales \cite{costa2002multiscale}. Chaotic systems have continuous power spectra, and thus high multiscale entropy. \smallskip\\
We also include two quantities derived from the Lyapunov spectrum: \textit{the Pesin entropy bound}, and \textit{the Kaplan-Yorke fractal dimension}, an alternative estimator of attractor dimension based on trajectory dispersion. Each system is also annotated with various qualitative details, such as whether the system is Hamiltonian or dissipative (i.e., whether there exists conserved invariants like total energy, or whether the dynamics relax to an attractor), non-autonomous (whether the dynamical equations explicitly depend on time), bounded (all variables remain finite as time passes), and whether the dynamics are given by a delay differential equation. In addition to the $131$ differential equations described here, our collection also includes several common discrete time maps; however, we exclude these from our study due to their unique properties.
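As a concrete illustration of the first of these quantities, the largest Lyapunov exponent can be estimated directly from trajectories by a standard two-trajectory procedure (in the spirit of Benettin et al.): integrate a fiducial and a slightly perturbed copy of the system, renormalize their separation after every step, and average the logarithmic stretching rate. The sketch below does this for the standard Lorenz system; it is a generic illustration, not the estimation pipeline used to produce the dataset annotations.

```python
# Estimate the largest Lyapunov exponent of the Lorenz system with the
# two-trajectory method: evolve a fiducial point and a perturbed copy,
# renormalizing the separation after every step.
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, x, dt):
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

def largest_lyapunov(n_steps=50_000, dt=0.01, eps=1e-8):
    x = np.array([1.0, 1.0, 1.0])
    for _ in range(5_000):                 # discard a transient to land on the attractor
        x = rk4_step(lorenz, x, dt)
    y = x + np.array([eps, 0.0, 0.0])
    log_stretch = 0.0
    for _ in range(n_steps):
        x = rk4_step(lorenz, x, dt)
        y = rk4_step(lorenz, y, dt)
        d = np.linalg.norm(y - x)
        log_stretch += np.log(d / eps)
        y = x + (y - x) * (eps / d)        # renormalize the separation to eps
    return log_stretch / (n_steps * dt)

print(largest_lyapunov())  # roughly 0.9 for the standard Lorenz parameters
```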
\paragraph{Methods.} Our dataset includes utilities for re-sampling and re-integrating each system with or without stochasticity, loading pre-computed multivariate or univariate trajectories, computing statistical properties and performing surrogate significance testing, and running benchmarks. One shortcoming of previous studies using chaotic systems as benchmarks---as well as more generally with static time series databases---is inconsistent timescales and granularities (sampling rates). We alleviate this problem by using phase surrogate significance testing to select optimal integration timesteps and sampling rates for all systems in our dataset, thus ensuring that dynamics are aligned across systems with respect to dominant and minimum significant timescales \cite{kantz2004nonlinear}. We further ensure consistency across systems using several standard methods, such as testing ergodicity to find consistent initial conditions, and integrating with continuous re-orthonormalization when computing various mathematical quantities such as Lyapunov exponents (see supplementary material).
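To make the re-integration workflow concrete without presupposing the accompanying package's API (which is not specified in this excerpt), here is a self-contained sketch that re-integrates one named system from the collection (the R\"ossler attractor) at a chosen granularity, optionally with Brownian forcing, mirroring the "with and without noise" trajectory variants described above; it is a schematic stand-in, not the dataset's actual code.

```python
# Schematic re-integration of a single chaotic flow (Roessler) at an arbitrary
# granularity, with optional Brownian forcing (Euler-Maruyama), mirroring the
# "re-integrate with or without stochasticity" utilities described above.
import numpy as np

def roessler(s, a=0.2, b=0.2, c=5.7):
    x, y, z = s
    return np.array([-y - z, x + a * y, b + z * (x - c)])

def make_trajectory(f, x0, n_points, dt, noise_scale=0.0, seed=0):
    """Integrate f from x0, returning an (n_points, dim) array sampled every dt."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    out = np.empty((n_points, x.size))
    for i in range(n_points):
        drift = f(x)
        kick = noise_scale * np.sqrt(dt) * rng.normal(size=x.size)
        x = x + dt * drift + kick          # Euler-Maruyama step
        out[i] = x
    return out

clean = make_trajectory(roessler, [1.0, 1.0, 1.0], n_points=10_000, dt=0.01)
noisy = make_trajectory(roessler, [1.0, 1.0, 1.0], n_points=10_000, dt=0.01,
                        noise_scale=0.05)
print(clean.shape, noisy.shape)  # (10000, 3) (10000, 3)
```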
\paragraph{Properties and Characterization.} In order to characterize the properties of our collection, we use an off-the-shelf time series featurizer that computes a corpus of $787$ common time series features (e.g. absolute change, peak count, wavelet transform coefficients, etc) for each system in our dataset \cite{christ2018time}. In addition to providing general statistical descriptors for our systems, embedding and clustering the systems based on these features illustrates the diverse dynamics present across our dataset (Figure \ref{fig_stats}). We find that the dynamical systems naturally separate into groups displaying different types of chaotic dynamics, such as smooth scroll-like trajectories versus spiking. Additionally, we observe that our chaotic systems trace a filamentary manifold in embedding space, a property consistent with the established rarity of chaotic attractors within the space of possible dynamical systems: persistent chaos often occurs in an intermediate regime between bifurcations producing simpler dynamics, such as limit cycles or quiescence at fixed points \cite{grebogi1986critical,ott2002chaos}.
\subsection{Prior Work.}
\paragraph{Data-driven modelling and control.} Many techniques at the intersection of machine learning and dynamical systems theory have been evaluated on specific well-known chaotic attractors, such as the Lorenz, R\"ossler, double pendulum, and Chua systems \cite{nassar2018tree,champion2019data,costa2019adaptive,gilpin2020deep,greydanus2019hamiltonian,lu2020supervised,yu2017long,lu2017reservoir,bellot2021consistency,wang2021reconstructing,li2020scalable,ma2007chaotic}. These and several other chaotic systems used in previous machine learning studies are all included within our dataset \cite{myers2020teaspoon,datseris2018dynamicalsystems}.
General databases of analytical mathematical models include the BioModels database of systems biology models, which currently contains $1017$ curated entries, with an additional $1271$ unreviewed user submissions \cite{le2006biomodels}. Among these models, a subset corresponding to $491$ differential equations recurs within the ODEBase database \cite{luders2019odebase}. For the specific task of symbolic regression, the inference of analytical equations from data, existing benchmarks include the Nguyen dataset of $12$ complex mathematical expressions \cite{uy2011semantically}, corpora of equations from two physics textbooks \cite{udrescu2020ai,orzechowski2018we,strogatz2018nonlinear}, and a recently-released suite of $252$ regression problems from the Penn Machine Learning Benchmark \cite{la2021contemporary}.
\paragraph{Forecasting and classification of time series.} The UCR-UEA time series classification benchmark includes $128$ univariate and $30$ multivariate time series with \textasciitilde$10^1$--$10^3$ timepoints \cite{dau2019ucr,bagnall2017great,dempster2020rocket,bagnall2018uea}. Several of these entries overlap with the UCI Machine Learning Repository, which contains $121$ time series ($91$ multivariate) of lengths \textasciitilde$10^1$--$10^6$ \cite{asuncion2007uci}. The M-series of time series forecasting competitions have most recently featured $10^6$ univariate time series of length \textasciitilde$10^1$--$10^5$ \cite{makridakis2020m4}. The recently-introduced Monash forecasting archive comprises $26$ domain areas, each of which includes \textasciitilde$10^1$--$10^6$ distinct time series with lengths in the range \textasciitilde$10^2$--$10^6$ timepoints \cite{godahewa2021monash}. A recent long-sequence forecasting model uses the \texttt{ETT-small} dataset of electricity consumption in two regions of China (70,080 datapoints at one-minute increments) \cite{haoyietal2021informer}, as well as NOAA local climatological data (\textasciitilde$10^6$ hourly recordings from \textasciitilde$10^3$ locations) \cite{young2018international}. The PhysioNet database contains several hundred physiological recordings such as EEG, ECG, and blood pressure, at a wide variety of resolutions and lengths \cite{goldberger2000physiobank}.
A point of differentiation between our work and existing datasets is our focus on reproducible chaotic dynamics, which sufficiently narrows the space of potential systems that we can manually curate and re-implement reported dynamics, and calculate key mathematical properties relevant to forecasting and physics-based model inference. These mathematical properties can be used to interpret the properties of black box models by examining their correlation with model performance across systems. Our dataset's curation also ensures a high degree of standardization across systems, such as consistent integration and sampling timescales, as well as ergodicity and stationarity. Additionally, the precomputed multivariate time series in our dataset approximately match the length and size of existing time series databases. We emphasize that, unlike existing time series databases, our dataset's size is flexible due to the ability to re-integrate each system at arbitrary length, sample at any granularity, integrate from new initial conditions, change the amount of stochastic forcing, or even perturb parameters in the underlying differential equation in order to modify or control each system's dynamics.
\section{Experiments}
\subsection*{Task 1: Forecasting}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{fig_forecast.jpg}
\caption{{\bf Forecasting benchmarks for all chaotic dynamical systems.} (A) Distribution of forecast errors for all dynamical systems and for all forecasting models, sorted by increasing median error. Dark and light hues correspond to coarse and fine time series granularities. (B) Spearman correlation among forecasting models, among different forecast evaluation metrics, and between forecasting metrics and underlying mathematical properties, computed across all dynamical systems at fine granularity. Columns are ordered by descending maximum cross-correlation in order to group similar models and metrics. (C) The systems with the highest, median, and lowest forecasting error across all models, annotated by largest Lyapunov exponent.}
\label{forecast}
\vspace{-3mm}
\end{figure}
Chaotic systems are inherently unpredictable, and extensive work by the physics community has sought to quantify chaos, and to relate its properties to general features of the underlying governing equations \cite{ott2002chaos,tang2020introduction}. Traditionally, the predictability of a chaotic system is thought to be determined by the largest Lyapunov exponent, which measures the rate at which trajectories emanating from two infinitesimally-spaced points will exponentially separate over time \cite{sommerer1993particles}.
We evaluate this claim on our dataset by benchmarking $16$ forecasting models spanning a wide variety of techniques: deep learning methods (NBEATS, Transformer, LSTM, and Temporal Convolutional Network), statistical methods (Prophet, Exponential Smoothing, Theta, 4Theta), common machine learning techniques (Random Forest), classical methods (ARIMA, AutoARIMA, Fourier transform regression), and standard naive baselines (naive mean, naive seasonal, naive drift) \cite{oreshkin2019n,lea2016temporal,alexandrov2020gluonts,godahewa2021monash}. Our train and test datasets correspond to different initial conditions, and we perform separate hyperparameter tuning for each chaotic system and granularity \cite{unit8co2020darts,alexandrov2020gluonts}. While the forecasting models are heterogeneous, for each we tune whichever hyperparameter most closely corresponds to a timescale---for example, the lag order for autoregressive models, or the input chunk size for the neural network models. Because all systems are aligned to the same average period, the range of values over which timescales are tuned is scaled by the granularity. Hyperparameters are tuned using held-out future values, and scores are computed on an unseen test trajectory emanating from different initial conditions.
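To make the setup concrete, the sketch below fits one deep learning baseline to a single system using the \texttt{darts} forecasting library cited above; the class and argument names reflect our understanding of its public interface, and the input arrays, epoch count, and window sizes are illustrative assumptions rather than the tuned values used in the benchmark.
\begin{verbatim}
import numpy as np
from darts import TimeSeries
from darts.metrics import smape
from darts.models import NBEATSModel

# train_values, test_values: hypothetical 1D arrays from one chaotic system,
# integrated from different initial conditions at a fixed granularity
def benchmark_nbeats(train_values, test_values, points_per_period=100):
    train = TimeSeries.from_values(train_values.astype(np.float32))
    test = TimeSeries.from_values(test_values.astype(np.float32))
    # the timescale hyperparameter (input window) is expressed in units of the
    # dominant period, so the tuning range transfers across granularities
    model = NBEATSModel(
        input_chunk_length=points_per_period,
        output_chunk_length=points_per_period // 2,
        n_epochs=20,
    )
    model.fit(train)
    forecast = model.predict(len(test))
    return smape(test, forecast)
\end{verbatim}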
Our results are shown in Figure \ref{forecast} for all dynamical systems at coarse and fine sampling granularity. We include corresponding results for systems with noise in the supplementary material. We find that the deep learning models perform particularly well, with the Transformer and NBEATS models achieving the lowest median scores, while also appearing within the three best-performing models for nearly all systems. On many datasets, the temporal convolutional network and traditional LSTM models also achieve competitive performance. Notably, the random forest also exhibits strong performance despite the continuous nature of our datasets, and with substantially lower training cost.
The relative ranking of the different forecasting models remains stable both as granularity is varied over two orders of magnitude, and as noise is increased to a level dominating the signal (see supplementary experiments). In the latter case, we observe that the performance of different models converges as their overall performance decreases. Overall, NBEATS strongly outperforms the other forecasting techniques across varied systems and granularities, and its performance persists even in the presence of noise. We speculate that the advantage of NBEATS arises from its implicit decomposition of time series into a hierarchy of basis functions \cite{oreshkin2019n}, an approach that mirrors classical techniques for representing continuous-time chaotic systems \cite{wang2011predicting}.
Our results seemingly contrast with studies showing that statistical models outperform neural networks on forecasting tasks \cite{godahewa2021monash,makridakis2020m4}. However, our forecasting task focuses on long time series and prediction horizons, two areas where neural networks have previously performed well \cite{haoyietal2021informer}. Additionally, we hypothesize that the strong performance of deep learning models on our dataset is a consequence of the smoothness of chaotic systems, which exhibit greater mathematical regularity and stationarity than time series generated from industrial or environmental measurements. In contrast, models like Prophet are often applied to calendar data with seasonality and irregularities like holidays \cite{taylor2018forecasting}---neither of which has a direct analogue in chaotic systems, which contain a continuous spectrum of frequencies \cite{lusch2018deep}. Consistent with this intuition, we observe that among the systems in our dataset, the Prophet model performs well on the torus, a quasiperiodic system with largest Lyapunov exponent equal to zero.
Several recent works have considered the appropriate metric for determining forecast accuracy \cite{hyndman2006another,makridakis2020m4,durbin2012time,godahewa2021monash}. For all forecasting models and dynamical systems we compute eight error metrics: the mean squared error (MSE), mean absolute scaled error (MASE), mean absolute error (MAE), mean absolute ranged relative error (MARRE), the magnitude of the coefficient of variation ($\abs{CV}$), one minus the coefficient of determination ($1 - r^2$), and the regular and symmetric mean absolute percent errors (MAPE and sMAPE). We find that all of these potential metrics are positively correlated across our dataset, and that they can be grouped into families of strongly-related metrics (Figure \ref{forecast}B). We also observe that the relative ranking of different forecasting models is independent of the choice of metric. Hereafter, we report sMAPE errors when comparing models, but we include all other metrics within the benchmark.
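For reference, a common convention for the symmetric mean absolute percent error (which we also adopt in the illustrative code sketches in this paper) is
\begin{equation*}
\mathrm{sMAPE} = \frac{200}{N} \sum_{t=1}^{N} \frac{\left| y_t - \hat{y}_t \right|}{\left| y_t \right| + \left| \hat{y}_t \right|},
\end{equation*}
where $y_t$ and $\hat{y}_t$ denote the true and predicted values over an $N$-point forecast horizon.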
We next evaluate the common claim that the empirical predictability of a system depends on the mathematical degree of chaos present \cite{pathak2018model}. For each system, we correlate the forecast error of the best-performing model with the various mathematical properties of each system (Figure \ref{forecast}B). Across all systems, we find a high degree of correlation between the largest Lyapunov exponent and the forecast error, while other measures such as the attractor fractal dimension and entropy correlate less strongly. While this observation matches conventional wisdom, it represents (to our knowledge) the first large-scale test of the empirical relevance of Lyapunov exponents. We consider this observation particularly noteworthy because our forecasting task spans several periods, yet the Lyapunov exponent is a purely local measure of dispersion between infinitesimally-separated points.
Our results introduce several considerations for the development of time series models. The strong performance we observe for neural network models implies that the flexibility of large models proves beneficial for time series without obvious trends or seasonality. The consistent accuracy we observe for NBEATS, even in the presence of noise, suggests that hierarchical decomposition can improve modelling of systems with multiple timescales. Most of our best-performing methods implicitly lift the dimensionality of the input time series, implying that higher-dimensional representations create more predictable dynamics, a finding consistent with recent studies showing that certain machine learning techniques implicitly learn Koopman operators, linear propagators that act on lifted representations of nonlinear systems \cite{lusch2018deep,champion2019data,klus2018data,otto2021koopman,takeishi2017learning}. That higher dimensional representations can linearize dynamics mirrors classical motivation for kernel methods in machine learning \cite{hastie2009elements}; we thus hypothesize that classical time series representations like time-lagged embeddings can be improved through nonlinearities, either in the form of custom functions learned by neural networks, or by inductive biases in the form of fixed periodic or wavelet-like kernels.
\subsection*{Task 2: Accelerating model training with importance sampling.}
\begin{table}
\caption{(Upper) Forecast accuracy for LSTMs trained on full time series, random subsets, and subsets sampled proportionately to their epochwise error (medians $\pm$ standard errors across all dynamical systems). (Lower) Accuracy scores on the UCR database for classifiers trained on features extracted from bare time series, and from autoencoders pretrained on the full chaotic systems collection at random and task-matched timescales (medians $\pm$ standard errors across UCR tasks).}
\label{importance}
\centering
\begin{tabular}{llll}
\toprule
\multicolumn{4}{c}{Importance Sampling Forecasting Error (sMAPE)} \\
\cmidrule(r){1-4}
\null & Full Epochs & Random Subset & Importance Weighted \\
\midrule
sMAPE & $1.00 \pm 0.05$& $0.99 \pm 0.05$ & $\mathbf{0.90 \pm 0.05}$ \\
Runtime (sec) & $190.1 \pm 0.3$& $\mathbf{77.9 \pm 0.3}$ & $94.6 \pm 0.2$ \\
\bottomrule
\toprule
\multicolumn{4}{c}{Transfer Learning Classification Accuracy} \\
\cmidrule(r){1-4}
\null & No Transfer Learning & Random Timescales & Matched Surrogates \\
\midrule
Accuracy & $0.80 \pm 0.02$ & $0.82 \pm 0.01$ & $\mathbf{0.84 \pm 0.01}$ \\
\bottomrule
\end{tabular}
\vspace{-5mm}
\end{table}
When training a forecasting model iteratively, each training batch usually samples input timepoints with equal probability. However, chaotic attractors generally possess non-uniform measure due to their fractal structure \cite{farmer1982dimension}. We thus hypothesize that importance sampling can accelerate training of a forecast model, by encouraging the network to oversample sparser regions of the underlying attractor \cite{leitao2013monte}. We thus modify the training procedure for a forecast model by applying a simple form of importance sampling, based on the epoch-wise training losses of individual samples---an approach related to zeroth-order adaptive methods appearing in other areas \cite{press1986numerical,jiang2019accelerating,kawaguchi2020ordered,katharopoulos2018not}. Our procedure consists of the following: (1) We halt training every few epochs and compute historical forecasts (backtests) on the training trajectory. (2) We randomly sample timepoints proportionately to their error in the historical forecast, and then generate a set of initial conditions corresponding to random perturbations away from each sampled attractor point. (3) We simulate the full dynamical system for $\tau$ timesteps for each of these initial conditions, and we use these new trajectories as the training set for the next $b$ epochs. We repeat this procedure for $\nu$ meta-epochs. For the original training procedure, the training time scales as $\sim B$, the number of training epochs. In our modified procedure, the training time has dominant term $\sim \nu \,b$, plus an additional term proportional to $\tau$ (integration can be parallelized across initial conditions), plus a small constant cost for sampling. We thus set $\nu \, b < B$, and record run times to verify that total cost has decreased.
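The sketch below illustrates this loop end-to-end on a single system, substituting the Lorenz equations and a one-step ridge-regression forecaster for the dataset's systems and the benchmarked LSTM; all names, window sizes, and meta-epoch counts are illustrative assumptions rather than the benchmark settings.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.linear_model import Ridge

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def simulate(ic, n_steps, dt=0.01):
    t_eval = np.arange(n_steps) * dt
    sol = solve_ivp(lorenz, (0.0, float(t_eval[-1])), ic, t_eval=t_eval, rtol=1e-8)
    return sol.y.T                       # shape (n_steps, 3)

def make_pairs(traj, lag):
    # one-step-ahead pairs: predict the next state from the previous `lag` states
    X = np.stack([traj[i:i + lag].ravel() for i in range(len(traj) - lag)])
    return X, traj[lag:]

rng = np.random.default_rng(0)
lag, tau, n_samples, meta_epochs = 16, 200, 25, 5
attractor = simulate(np.array([1.0, 1.0, 1.0]), 4000)   # reference trajectory
train_traj = attractor.copy()
model = Ridge(alpha=1e-3)

for _ in range(meta_epochs):
    X, y = make_pairs(train_traj, lag)
    model.fit(X, y)
    # (1) backtest on the reference trajectory and score every timepoint
    X_ref, y_ref = make_pairs(attractor, lag)
    errors = np.linalg.norm(model.predict(X_ref) - y_ref, axis=1)
    # (2) sample attractor points with probability proportional to their error
    idx = rng.choice(len(errors), size=n_samples, p=errors / errors.sum()) + lag
    # (3) perturb the sampled states, re-integrate short trajectories, and use
    # them as the training set for the next meta-epoch (segment boundaries are
    # not masked here, a simplification acceptable for a sketch)
    segments = [simulate(attractor[i] + 0.1 * rng.normal(size=3), tau) for i in idx]
    train_traj = np.concatenate(segments)
\end{verbatim}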
Table \ref{importance} shows the results of our experiments for an LSTM model across all chaotic attractors. Importance sampling achieves a significantly smaller forecast error than a baseline using the full training set in each epoch, as well as a control in which the exact importance sampling procedure was repeated without weighting random samples by error (two sided paired t-test, $p < 10^{-6}$ for both tests). Notably, importance sampling requires substantially lower computation due to the reduced number of training epochs incurred. Our approach exploits that our database comprises strange {\it attractors}, because initial conditions derived from random perturbations off an attractor will produce trajectories that return to the attractor.
\subsection*{Task 3: Transfer learning and data augmentation.}
We next explore how our dataset can assist general time series analysis, regardless of the relevance of chaos to the problem. We study an existing time series classification benchmark, and we use our dataset to generate timescale-matched surrogate data for transfer learning.
Our classification procedure broadly consists of training an autoencoder on trajectories from our database, and then using the trained encoder as a general feature extractor for time series classification. However, unlike existing transfer learning approaches for time series \cite{malhotra2017timenet}, we train the autoencoder on a new dataset for each classification problem: we re-integrate our entire dataset to match the dominant timescales in the classification problem's training data.
Our approach thus comprises several steps: (1) Across all data in the train partition, the dominant significant Fourier frequency is determined using random phase surrogates \cite{kantz2004nonlinear}. (2) Trajectories are re-integrated for every dynamical system in our database, such that the sampling rate of the dynamics is equal to that of the training dataset. The surrogate ensemble thus corresponds to a custom set of trajectories with timescales matched to the training data of the classification problem. (3) We train an autoencoder on this ensemble. Our encoder is a one-layer causal dilated encoder with skip connections, an architecture recently shown to provide strong time series classification performance \cite{franceschi2019unsupervised}. (4) We apply the encoder to the training data of the classification problem. (5) We apply a standard linear time series classifier, following recent works \cite{dempster2020rocket,loning2019sktime}. We featurize the time series using a library of standard featurizers \cite{christ2018time}, and then perform classification using ridge regression \cite{loning2019sktime}. Overall, our classification approach bears conceptual similarity to other generative data augmentation techniques: we extract parameters (the dominant timescales) from the training data, and then use these parameters to construct a custom surrogate ensemble with matching timescales. In many image augmentation approaches, a prior distribution is learned from the training data (e.g. via a GAN), and then sampled to create surrogate examples \cite{zhang2019dada,tran2017bayesian,zhu2017data,hauberg2016dreaming}.
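As a concrete example of the final two steps, the sketch below applies a standard featurization-plus-ridge pipeline to hypothetical encoder outputs; the array names are placeholders, and the surrogate generation, autoencoder training, and featurization steps are assumed to have been run upstream.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import RidgeClassifierCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# train_features, test_features: hypothetical arrays of features extracted
# from the encoded time series of one UCR task; labels are the task classes
def classify(train_features, train_labels, test_features, test_labels):
    clf = make_pipeline(
        StandardScaler(),
        RidgeClassifierCV(alphas=np.logspace(-3, 3, 10)),
    )
    clf.fit(train_features, train_labels)
    return clf.score(test_features, test_labels)
\end{verbatim}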
\begin{wrapfigure}{r}{0.4\linewidth}
\vspace{-5mm}
\centering
\includegraphics[width=\linewidth]{fig_transfer.pdf}
\caption{Classification accuracy on the UCR dataset \texttt{EOGHorizontalSignal}, across models pretrained on increasing fractions of the database. Standard errors are from bootstrapped replicates, where the dynamical systems are sampled with replacement.}
\label{transfer}
\vspace{-5mm}
\end{wrapfigure}
As baselines for our approach, we train a classifier on the bare original time series, as well as a "random timescale" collection in which the time series in the surrogate ensemble have random dominant frequencies, unrelated to the timescales in the training data. The latter ablation serves to isolate the role of timescale matching, which is uniquely enabled by the ability to re-integrate our dataset at arbitrary granularity. This is necessary in light of recent work showing that transfer learning on a large collection of time series can yield informative features \cite{malhotra2017timenet}.
We benchmark classification using the UCR time series classification benchmark, which contains $128$ real-world classification problems spanning diverse areas like medicine, agriculture, and robotics \cite{dau2019ucr}. Because we are using convolutional models, we restrict our analysis to the $91$ datasets with at least $100$ contiguous timepoints (these include the $85$ "bakeoff" datasets benchmarked in previous studies) \cite{bagnall2017great}. We compute separate benchmarks (surrogate ensembles, features, and scores) for each dataset in the archive.
Our results are shown in Table \ref{importance}. Across the UCR archive we observe statistically significant average classification accuracy increases of $4\% \pm 1\%$ compared to the raw dataset ($p < 10^{-4}$, paired two-sided t-test), and $2\% \pm 1\%$ compared to the ablation with random surrogate timescales ($p < 10^{-4}$). While these modest improvements do not comprise state-of-the-art results on the UCR database \cite{bagnall2017great}, they demonstrate that representations learned from chaotic systems in an unsupervised setting can be used to extract meaningful general features for further analysis. On certain datasets, our results approach other recent unsupervised approaches in which a simple linear classifier is trained on top of a complex unsupervised feature extractor \cite{malhotra2017timenet,franceschi2019unsupervised,dempster2020rocket}. Recent results have even shown that a very large number of {\it random} convolutional features can provide informative representations of time series for downstream supervised learning tasks \cite{dempster2020rocket}; we therefore speculate that pretraining with chaotic systems may allow more efficient selection of informative convolutional kernels. Moreover, the improvement of transfer learning over the random timescale model demonstrates the advantage of re-integration. In order to verify that the diversity of dynamical systems present within our dataset contributes to the quality of the learned features, we repeat the classification task on a single UCR dataset, corresponding to clinical eye tracking data. We train encoders on gradually increasing numbers of dynamical systems, in order to see how the final accuracy changes as the number of systems available for pretraining increases (Figure \ref{transfer}). We observe monotonic scaling, indicating that our dataset's size and diversity contribute to feature quality.
\subsection*{Task 4: Data-driven model inference and symbolic regression}
We next look beyond traditional time series analysis, and apply our database to a data-driven modelling task. A growing body of work uses machine learning methods to infer dynamical systems directly from data \cite{karniadakis2021physics,de2020discovery,callaham2021learning,carleo2019machine}. Examples include constructing effective propagators for the dynamics \cite{otto2021koopman,costa2021maximally,takeishi2017learning,budivsic2012applied,gilpin2019cellular,lusch2018deep}, obtaining neural representations of the dynamical equations \cite{chen2018neural,kidger2020neural,massaroli2020dissecting,rackauckas2020universal}, and inferring analytical governing equations via symbolic regression \cite{klus2018data,petersen2019deep,brunton2016discovering,schmidt2009distilling,martin2018reverse,udrescu2020ai,cranmer2020discovering,la2021contemporary,rudy2021sparse}. Beyond improving forecasts, these data-driven representations can discover mechanistic insights, such as symmetries or separated timescales, that might not otherwise be apparent in a time series.
\begin{figure}
\centering
\includegraphics[width=0.6\linewidth]{fig_symb.pdf}
\caption{{\bf Symbolic regression benchmarks.} (A) Error distributions on test datasets across all systems, (B) Spearman correlation between errors and mathematical properties of the underlying systems.}
\label{fig_symb}
\vspace{-4mm}
\end{figure}
We thus use our dataset for data-driven modelling in the form of a symbolic regression task. We focus on symbolic regression because of the recent emergence of widely-used benchmark models and performance desiderata for these methods \cite{la2021contemporary}. However, we emphasize that our database can be used for other emerging focus areas in data-driven modelling, such as inference of empirical propagators or neural ordinary differential equations \cite{lusch2018deep,froyland2009almost,budivsic2012applied}, and we include a baseline neural ordinary differential equation task in the supplementary material. For each dynamical system in our collection, we generate trajectories with sufficiently coarse granularity to sample all regions of the attractor. At each timepoint, we compute the value of the right hand side of the corresponding dynamical equation, and we treat the value of this time derivative as the regression target. We use this dataset to compare several recent symbolic regression approaches: (1) DSR: a recurrent neural network trained with a risk-seeking policy gradient, which produces state-of-the-art results on a variety of challenging symbolic regression tasks \cite{petersen2019deep}. (2) PySR: an open-source package inspired by the popular closed-source software Eureqa, which uses genetic programming and simulated annealing \cite{cranmer2020pysr,cranmer2020discovering,schmidt2009distilling}. (3,4) PySINDY: a Python implementation of the widely-used SINDY algorithm, which uses sparse regression to decompose data into linear combinations of functions \cite{de2020pysindy,brunton2016discovering}. For PySINDY we train separate models for purely polynomial (SINDY-poly) and trigonometric (SINDY-fourier) bases. For DSR and pySR we use a standard library of binary and unary operators, $\{+, - , \times, \div \}$, $\{\sin, \cos, \exp, \log, \tanh \}$ \cite{petersen2019deep}. After fitting a formula using each method, we evaluate it on an unseen test trajectory arising from different initial conditions, and we report the sMAPE error between the formula's prediction and the true value along the trajectory.
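The construction of the regression target, and a fit with the polynomial-basis SINDy baseline, are sketched below for a single illustrative system (the Lorenz equations); the \texttt{pysindy} calls reflect our understanding of its public interface, and the granularities and initial conditions are arbitrary choices rather than the benchmark settings.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
import pysindy as ps

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

dt = 0.02                       # coarse granularity covering the attractor
t_eval = np.arange(0.0, 100.0, dt)
train = solve_ivp(lorenz, (0.0, t_eval[-1]), [1.0, 1.0, 1.0],
                  t_eval=t_eval, rtol=1e-9).y.T

# the regression target is the right hand side evaluated along the trajectory
train_target = np.array([lorenz(0.0, s) for s in train])

model = ps.SINDy(feature_library=ps.PolynomialLibrary(degree=2))
model.fit(train, t=dt, x_dot=train_target)  # supply exact derivatives as targets
model.print()                               # recovered symbolic equations

# evaluate on an unseen trajectory from different initial conditions
t_test = np.arange(0.0, 50.0, dt)
test = solve_ivp(lorenz, (0.0, t_test[-1]), [-5.0, 2.0, 30.0],
                 t_eval=t_test, rtol=1e-9).y.T
pred = model.predict(test)
true = np.array([lorenz(0.0, s) for s in test])
smape = 200.0 * np.mean(np.abs(pred - true) / (np.abs(pred) + np.abs(true) + 1e-12))
\end{verbatim}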
Our results illustrate several features of our dataset, while also revealing properties of the different symbolic regression algorithms. All algorithms show strong performance across the chaotic systems dataset (Figure \ref{fig_symb}). The two lowest-error models, pySR and DSR, exhibit nearly-equivalent performance when accounting for error ranges, and both achieve errors near zero on many systems. We attribute this strong performance to the relatively simple algebraic construction of most published systems: named chaotic systems will inevitably favor concise, demonstrative equations over complex expressions. In fact, several systems in our dataset belong to the Sprott attractor family, which comprises the algebraically simplest chaotic systems \cite{sprott1994some}. In this sense, our dataset likely has similar complexity to the Feynman equations benchmark \cite{udrescu2020ai}.
We highlight that PySINDY with a purely polynomial basis performs very well on our dataset, especially as a linear approach that requires a median training time of only $0.01 \pm 0.01$ s per system on a single CPU core. In comparison, pySR had a median time of $1400 \pm 60$ s per system on one core, while DSR on one GPU required $4300 \pm 200$ s per system---consistent with the results of a recent symbolic regression benchmark suite \cite{la2021contemporary}. However, parallelization reduces the runtime of all methods.
We emphasize that the relative performance of a given symbolic regression algorithm depends on diverse factors, such as equation complexity, the library of available unary and binary operators, the amount and dynamic range of available input data, the amount of compute available for refinement, and the degree of nonlinearity of the underlying system. More generally, symbolic regression algorithms exhibit a bias-variance tradeoff manifesting as a Pareto front that trades off accuracy against parsimony: large models with many terms will appear more accurate, but at the expense of brevity and potentially robustness and interpretability \cite{schmidt2009distilling}. More challenging benchmarks would include nested expressions and uncommon transcendental functions; these systems may be a more appropriate setting for benchmarking state-of-the-art techniques like DSR. Additionally, we do not include measurement noise in our experiments, a scenario in which DSR performs strongly compared to other methods \cite{petersen2019deep,la2021contemporary}.
Interestingly, DSR exhibits the strongest dependence on the mathematical properties of the underlying dynamics: more chaotic systems consistently yield higher errors (Figure \ref{fig_symb}B). We consider this result surprising, because {\it a priori} we would expect the performance of a given symbolic regression algorithm to depend purely on the syntactic complexity of the target formula, rather than the dynamics that it produces. Because DSR uses a large model to navigate a space of smaller models, we hypothesize that more chaotic systems present a broader set of possible "partial formulae" that match specific subregimes of the attractor---an effect exploited in several recent decomposition techniques for chaotic systems \cite{nassar2018tree,costa2019adaptive}. The diversity of these local approximants would result in a more complex global search space.
\section{Discussion}
We have introduced an extensible collection of known chaotic dynamical systems. In addition to representing a customizable benchmark for time series analysis and data-driven modelling, we have provided examples of additional applications, such as transfer learning for general time series analysis tasks, that are enabled by the generative nature of our dataset. We note that there are several other potential applications that we have not explored here: testing feedback-based control algorithms (which require perturbing the parameters of a given dynamical system, and then re-integrating), and inferring numerical propagators (such as Koopman operators) \cite{otto2021koopman,costa2021maximally,takeishi2017learning,budivsic2012applied,gilpin2019cellular,lusch2018deep,gilpin2020learning,arbabi2017ergodic}. In the appendix, we include preliminary benchmarks for a neural ordinary differential equations task \cite{chen2018neural,kidger2020neural,massaroli2020dissecting,rackauckas2020universal}; due to the direct connections between our work and this area, we hope to further explore these methods in future studies. Our work can be seen as systematizing the common practice of testing new methods on single chaotic systems, particularly the Lorenz attractor \cite{nassar2018tree,champion2019data,costa2019adaptive,gilpin2020deep,greydanus2019hamiltonian,lu2020supervised,yu2017long,lu2017reservoir,bellot2021consistency,wang2021reconstructing,li2020scalable,ma2007chaotic}.
More broadly, our collection seeks to improve the interpretability of data-driven modelling from time series. For example, our forecasting benchmark experiments show that the Lyapunov exponent, a popular measure of local chaoticity, correlates with the empirical predictability of a system under a variety of models---a finding that matches intuition, but which has not (to our knowledge) previously been tested extensively. Likewise, in our symbolic regression benchmark we find that more chaotic systems are harder to model, an effect we attribute to the diverse local approximants available for complex dynamical systems. These examples demonstrate how the control and mathematical context provided by differential equations can yield mechanistic insight beyond traditional time series.
A limitation of our approach is that we include only known chaotic systems that have previously appeared in published works. This limits the rate at which our collection may expand, since each new entry requires manual curation and implementation in order to verify reported dynamics. Our focus on published systems may bias the dataset towards more unusual (and thus reportable) dynamics, particularly because there are infinitely many possible chaotic systems. Moreover, in few dimensions chaotic dynamics are rare relative to the space of all possible models \cite{ott2002chaos}, although chaos becomes ubiquitous as the number of coupled variables increases \cite{ispolatov2015chaos}. Nonetheless, low-dimensional chaos may represent an instructive step towards understanding complex dynamics in high-dimensional systems.
\begin{ack}
We thank Gautam Reddy, Samantha Petti, Brian Matejek, and Yasa Baig for helpful discussions and comments on the manuscript. W. G. was supported by the NSF-Simons Center for Mathematical and Statistical Analysis of Biology at Harvard University, NSF Grant DMS 1764269, and the Harvard FAS Quantitative Biology Initiative. The author declares no competing interests.
\end{ack}
\section{Data Availability}
The database of dynamical models and precomputed time series is available on GitHub at \url{https://github.com/williamgilpin/dysts}. The \texttt{benchmarks} subdirectory contains all code needed to reproduce the benchmarks, figures, and tables in this paper.
All included equations are in the public domain, and all precomputed time series datasets have been generated {\it de novo} from these equations. No license is required to use these equations or datasets. The repository and precomputed datasets include an Apache 2.0 license. The author attests that they bear responsibility for copyright matters associated with this dataset.
\clearpage
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{all_attractors.jpg}
\caption{{\bf All dynamical systems currently in the database.}
}
\label{fig_all}
\end{figure*}
\clearpage
\section{Descriptions of all systems}
Descriptions and citations for all systems are included below, and each system is visualized in Figure \ref{fig_all}. Each system's entry in the project repository contains full records and descriptions.
\setlength\LTleft{-2cm}
\begin{longtable}{lll}
\toprule
System & Reference & Description \\
\midrule
Aizawa & Aizawa, Yoji, and Tatsuya Uezu (1982). Topolog... & A torus-like attractor related to the forced L... \\
AnishchenkoAstakhov & Anishchenko, et al. Nonlinear dynamics of chao... & Stochastic resonance in forced oscillators. \\
Arneodo & Arneodo, A., Coullet, P. \& Tresser, C. Occuren... & A modified Lotka-Volterra ecosystem, also know... \\
ArnoldBeltramiChildress & V. I. Arnold, Journal of Applied Mathematics a... & An exact solution of Euler's equation for invi... \\
ArnoldWeb & Froeschle, C., Guzzo, M. \& Legga, E (2000). Gr... & A quasi-integrable system that transitions to ... \\
BeerRNN & Beer, R. D. (1995). On the dynamics of small c... & A two-neuron minimal model nervous system. \\
BelousovZhabotinsky & Gyorgyi and Field (1992). A three-variable mod... & A reduced-order model of the BZ reaction that ... \\
BickleyJet & Hadjighasem, Karrasch, Teramoto, Haller (2016)... & A zonal jet passing between two counter rotati... \\
Blasius & Blasius, Huppert, Stone. Nature 1999 & A chaotic food web composed of interacting pr... \\
BlinkingRotlet & Meleshko \& Aref. A blinking rotlet model for c... & The location of the mixer is chosen so that th... \\
BlinkingVortex & Aref (1984). Stirring by chaotic advection. J.... & A classic minimal chaotic mixing flow. Solutio... \\
Bouali & Bouali (1999). Feedback loop in extended Van d... & Economic cycles with fluctuating demand. Relat... \\
Bouali2 & Bouali (1999). Feedback loop in extended Van d... & A modified economic cycle model. \\
BurkeShaw & Shaw (1981). Zeitschrift fur Naturforschung. & A scroll-like attractor with unique symmetry a... \\
CaTwoPlus & Houart, Dupont, Goldbeter. Bull Math Biol 1999. & Intracellular calcium ion oscillations. \\
CaTwoPlusQuasiperiodic & Houart, Dupont, Goldbeter. Bull Math Biol 1999. & Intracellular calcium ion oscillations with qu... \\
CellCycle & Romond, Rustici, Gonze, Goldbeter. 1999. & A simplified model of the cell cycle. The para... \\
CellularNeuralNetwork & Arena, Caponetto, Fortuna, and Porto., Int J B... & Cellular neural network dynamics. \\
Chen & Chen (1997). Proc. First Int. Conf. Control of... & A system based on feedback anti-control in eng... \\
ChenLee & Chen HK, Lee CI (2004). Anti-control of chaos ... & A rigid body with feedback anti-control. \\
Chua & Chua, L. O. (1969) Introduction to Nonlinear N... & An electronic circuit with a diode providing n... \\
CircadianRhythm & Leloup, Gonze, Goldbeter. 1999. Gonze, Leloup... & The Drosophila circadian rhythm under periodic... \\
CoevolvingPredatorPrey & Gilpin \& Feldman (2017). PLOS Comp Biol & A system of predator-prey equations with co-ev... \\
Colpitts & Kennedy (2007). IEEE Trans Circuits \& Systems.... & An electrical circuit used as a signal generator. \\
Coullet & Arneodo, A., Coullet, P. \& Tresser, C. Occuren... & A variant of the Arneodo attractor \\
Dadras & S Dadras, HR Momeni (2009). A novel three-dime... & An electronic circuit capable of producing mul... \\
DequanLi & Li, Phys Lett A. 2008: 387-393. & Related to the Three Scroll unified attractor ... \\
DoubleGyre & Shadden, Lekien, Marsden (2005). Definition an... & A time-dependent fluid flow exhibiting Lagrang... \\
DoublePendulum & See, for example: Marion (2013). Classical dyn... & Two coupled rigid pendula without damping. \\
Duffing & Duffing, G. (1918), Forced oscillations with v... & A monochromatically-forced rigid pendulum, wit... \\
ExcitableCell & Teresa Chay. Chaos In A Three-variable Model O... & A reduced-order variant of the Hodgkin-Huxley ... \\
Finance & Guoliang Cai, Juanjuan Huang. International Jo... & Stock fluctuations under varying investment de... \\
FluidTrampoline & Gilet, Bush. The fluid trampoline: droplets bo... & A droplet bouncing on a horizontal soap film. \\
ForcedBrusselator & I. Prigogine, From Being to Becoming: Time and... & An autocatalytic chemical system. \\
ForcedFitzHughNagumo & FitzHugh, Richard (1961). Impulses and Physiol... & A driven neuron model sustaining both quiesent... \\
ForcedVanDerPol & B. van der Pol (1920). A theory of the amplitu... & An electronic circuit containing a triode. \\
GenesioTesi & Genesio, Tesi (1992). Harmonic balance methods... & A nonlinear control system with feedback. \\
GuckenheimerHolmes & Guckenheimer, John, and Philip Holmes (1983). ... & A nonlinear oscillator. \\
Hadley & G. Hadley (1735). On the cause of the general ... & An atmospheric convective cell. \\
Halvorsen & Sprott, Julien C (2010). Elegant chaos: algebr... & An algebraically-simple chaotic system with qu... \\
HastingsPowell & Hastings, Powell. Ecology 1991 & A three species food web. \\
HenonHeiles & Henon, M.; Heiles, C. (1964). The applicabilit... & A star's motion around the galactic center. \\
HindmarshRose & Marhl, Perc. Chaos, Solitons, Fractals 2005. & A neuron model exhibiting spiking and bursting. \\
Hopfield & Lewis \& Glass, Neur Comp (1992) & A neural network with frustrated connectivity \\
HyperBao & Bao, Liu (2008). A hyperchaotic attractor coi... & Hyperchaos in the Lu system. \\
HyperCai & Guoliang, Huang (2007). A New Finance Chaotic ... & A hyperchaotic variant of the Finance system. \\
HyperJha & Jürgen Meier (2003). Presentation of Attractor... & A hyperchaotic system. \\
HyperLorenz & Jürgen Meier (2003). Presentation of Attractor... & A hyperchaotic variant of the Lorenz attractor. \\
HyperLu & Jürgen Meier (2003). Presentation of Attractor... & A hyperchaotic variant of the Lu attractor. \\
HyperPang & Jürgen Meier (2003). Presentation of Attractor... & A hyperchaotic system. \\
HyperQi & G. Qi, M. A. van Wyk, B. J. van Wyk, and G. Ch... & A hyperchaotic variant of the Qi system. \\
HyperRossler & Rossler, O. E. (1979). An equation for hyperch... & A hyperchaotic variant of the Rossler system. \\
HyperWang & Wang, Z., Sun, Y., van Wyk, B. J., Qi, G. \& va... & A hyperchaotic variant of the Wang system. \\
HyperXu & Letellier \& Rossler (2007). Hyperchaos. Schola... & A hyperchaotic system. \\
HyperYan & Jürgen Meier (2003). Presentation of Attractor... & A hyperchaotic system. \\
HyperYangChen & Jürgen Meier (2003). Presentation of Attractor... & A hyperchaotic system. \\
IkedaDelay & K. Ikeda and K. Matsumoto (1987). High-dimensi... & A passive optical resonator system. A standard... \\
IsothermalChemical & Petrov, Scott, Showalter. Mixed-mode oscillati... & An isothermal chemical system with mixed-mode ... \\
ItikBanksTumor & Itik, Banks. Int J Bifurcat Chaos 2010 & A model of cancer cell populations. \\
JerkCircuit & Sprott (2011). A new chaotic jerk circuit. IEE... & An electronic circuit with nonlinearity provid... \\
KawczynskiStrizhak & P. E. Strizhak and A. L. Kawczynski, J. Phys. ... & A chemical oscillator model describing mixed-m... \\
Laser & Abooee, Yaghini-Bonabi, Jahed-Motlagh (2013). ... & A semiconductor laser model \\
LiuChen & Liu, Chen. Int J Bifurcat Chaos. 2004: 1395-1403. & Derived from Sakarya. \\
Lorenz & Lorenz, Edward N (1963). Deterministic nonperi... & A minimal weather model based on atmospheric c... \\
Lorenz84 & E. Lorenz (1984). Irregularity: a fundamental ... & Atmospheric circulation analogous to Hadley co... \\
Lorenz96 & Lorenz, Edward (1996). Predictability: A probl... & A climate model containing fluid-like advectiv... \\
LorenzBounded & Sprott \& Xiong (2015). Chaos. & The Lorenz attractor in the presence of a conf... \\
LorenzCoupled & Lorenz, Edward N. Deterministic nonperiodic fl... & Two coupled Lorenz attractors. \\
LorenzStenflo & Letellier \& Rossler (2007). Hyperchaos. Schola... & Atmospheric acoustic-gravity waves. \\
LuChen & Lu, Chen. Int J Bifurcat Chaos. 2002: 659-661. & A system that switches shapes between the Lore... \\
LuChenCheng & Lu, Chen, Cheng. Int J Bifurcat Chaos. 2004: 1... & A four scroll attractor that reduces to Lorenz... \\
MacArthur & MacArthur, R. 1969. Species packing, and what ... & Population abundances in a plankton community,... \\
MackeyGlass & Glass, L. and Mackey, M. C. (1979). Pathologic... & A physiological circuit with time-delayed feed... \\
MooreSpiegel & Moore, Spiegel. A Thermally Excited Nonlinear ... & A thermo-mechanical oscillator. \\
MultiChua & Mufcstak E. Yalcin, Johan A. K. Suykens, Joos ... & Multiple interacting Chua electronic circuits. \\
NewtonLiepnik & Leipnik, R. B., and T. A. Newton (1981). Doubl... & Euler's equations for a rigid body, augmented ... \\
NoseHoover & Nose, S (1985). A unified formulation of the c... & Fixed temperature molecular dynamics for a str... \\
NuclearQuadrupole & Baran V. and Raduta A. A. (1998), Internationa... & A quadrupole boson Hamiltonian that produces c... \\
OscillatingFlow & T. H. Solomon and J. P. Gollub, Phys. Rev. A 3... & A model fluid flow that produces KAM tori. Ori... \\
PanXuZhou & Zhou, Wuneng, et al. On dynamics analysis of a... & A named attractor related to the DequanLi attr... \\
PehlivanWei & Pehlivan, Ihsan, and Wei Zhouchao (2012). Anal... & A system with quadratic nonlinearity, which un... \\
PiecewiseCircuit & A. Tamasevicius, G. Mykolaitis, S. Bumeliene, ... & A delay model that can be implemented as an el... \\
Qi & G. Qi, M. A. van Wyk, B. J. van Wyk, and G. Ch... & A hyperchaotic system with a wide power spectrum. \\
QiChen & Qi et al. Chaos, Solitons \& Fractals 2008. & A double-wing chaotic attractor that arises fr... \\
RabinovichFabrikant & Rabinovich, Mikhail I.; Fabrikant, A. L. (1979... & A reduced-order model of propagating waves in ... \\
RayleighBenard & Yanagita, Kaneko (1995). Rayleigh-Bénard... & A reduced-order model of a convective cell. \\
RikitakeDynamo & Rikitake, T., Oscillations of a system of disk... & Electric current and magnetic field of two cou... \\
Rossler & Rossler, O. E. (1976), An Equation for Continu... & Spiral-type chaos in a simple oscillator model. \\
Rucklidge & Rucklidge, A.M. (1992). Chaos in models of dou... & Two-dimensional convection in a horizontal lay... \\
Sakarya & Li, Chunbiao, et al (2015). A novel four-wing ... & An attractor that arises due to merging of two... \\
SaltonSea & Upadhyay, Bairagi, Kundu, Chattopadhyay (2007)... & An eco-epidemiological model of bird and fish ... \\
SanUmSrisuchinwong & San-Um, Srisuchinwong. J. Comp 2012 & A two-scroll attractor arising from dynamical ... \\
ScrollDelay & R.D. Driver, Ordinary and Delay Differential E... & A delay model that can be implemented as an el... \\
ShimizuMorioka & Shimizu, Morioka. Phys Lett A. 1980: 201-204 & A system that bifurcates from a symmetric limi... \\
SprottA & Sprott (1994). Some simple chaotic flows. Phys... & A member of the Sprott family of algebraically... \\
SprottB & Sprott (1994). Some simple chaotic flows. Phys... & A member of the Sprott family of algebraically... \\
SprottC & Sprott (1994). Some simple chaotic flows. Phys... & A member of the Sprott family of algebraically... \\
SprottD & Sprott (1994). Some simple chaotic flows. Phys... & A member of the Sprott family of algebraically... \\
SprottDelay & Sprott, J. C (2007). A simple chaotic delay di... & An algebraically simple delay equation. A stan... \\
SprottE & Sprott (1994). Some simple chaotic flows. Phys... & A member of the Sprott family of algebraically... \\
SprottF & Sprott (1994). Some simple chaotic flows. Phys... & A member of the Sprott family of algebraically... \\
SprottG & Sprott (1994). Some simple chaotic flows. Phys... & A member of the Sprott family of algebraically... \\
SprottH & Sprott (1994). Some simple chaotic flows. Phys... & A member of the Sprott family of algebraically... \\
SprottI & Sprott (1994). Some simple chaotic flows. Phys... & A member of the Sprott family of algebraically... \\
SprottJ & Sprott (1994). Some simple chaotic flows. Phys... & A member of the Sprott family of algebraically... \\
SprottJerk & Sprott, J. C. Simplest dissipative chaotic flo... & An algebraically simple flow depending on a th... \\
SprottK & Sprott (1994). Some simple chaotic flows. Phys... & A member of the Sprott family of algebraically... \\
SprottL & Sprott (1994). Some simple chaotic flows. Phys... & A member of the Sprott family of algebraically... \\
SprottM & Sprott (1994). Some simple chaotic flows. Phys... & A member of the Sprott family of algebraically... \\
SprottMore & Sprott, J. C. (2020). Do We Need More Chaos Ex... & A multifractal system with a nearly 3D attractor \\
SprottN & Sprott (1994). Some simple chaotic flows. Phys... & A member of the Sprott family of algebraically... \\
SprottO & Sprott (1994). Some simple chaotic flows. Phys... & A member of the Sprott family of algebraically... \\
SprottP & Sprott (1994). Some simple chaotic flows. Phys... & A member of the Sprott family of algebraically... \\
SprottQ & Sprott (1994). Some simple chaotic flows. Phys... & A member of the Sprott family of algebraically... \\
SprottR & Sprott (1994). Some simple chaotic flows. Phys... & A member of the Sprott family of algebraically... \\
SprottS & Sprott (1994). Some simple chaotic flows. Phys... & A member of the Sprott family of algebraically... \\
SprottTorus & Sprott Physics Letters A 2014 & A multiattractor system that goes to a torus o... \\
StickSlipOscillator & Awrejcewicz, Jan, and M. M. Holicke (1999). In... & A weakly forced (quasiautonomous) oscillator w... \\
SwingingAtwood & Tufillaro, Nicholas B.; Abbott, Tyler A.; Grif... & A mechanical system consisting of two swinging... \\
Thomas & Thomas, Rene (1999). Deterministic chaos seen ... & A cyclically-symmetric attractor corresponding ... \\
ThomasLabyrinth & Thomas, Rene. Deterministic chaos seen in term... & A system in which trajectories seemingly under... \\
Torus & See, for example, Strogatz (1994). Nonlinear D... & A minimal quasiperiodic flow on a torus. All l... \\
Tsucs2 & Pan, Zhou, Li (2013). Synchronization of Three... & A named attractor related to the DequanLi attr... \\
TurchinHanski & Turchin, Hanski. The American Naturalist 1997.... & A chaotic three species food web. The species... \\
VallisElNino & Vallis GK. Conceptual models of El Niño and the... & Atmospheric temperature fluctuations with annu... \\
VossDelay & Voss (2002). Real-time anticipation of chaotic... & An electronic circuit with delayed feedback. A... \\
WangSun & Wang, Z., Sun, Y., van Wyk, B. J., Qi, G. \& va... & A four-scroll attractor \\
WindmiReduced & Smith, Thiffeault, Horton. J Geophys Res. 2000... & Energy transfer into the ionosphere and magnet... \\
YuWang & Yu, Wang (2012). A novel three dimension auton... & A temperature-compensation circuit with an ope... \\
YuWang2 & Yu, Wang (2012). A novel three dimension auton... & An alternative temperature-compensation circui... \\
ZhouChen & Zhou, Chen (2004). A simple smooth chaotic sys... & A feedback circuit model. \\
\bottomrule
\end{longtable}
\clearpage
\section{Dataset structure and format}
All systems are primarily represented as Python objects, with names matching those in Figure \ref{fig_all} and the accompanying table. Underlying mathematical properties, parameters of the governing differential equation, recommended integration timestep and period, and default initial conditions are accessed as instance attributes. A callable implementation of the right hand side of the differential equation, a function for loading precomputed trajectories, and a function for re-integrating with default initial conditions and timescales are included as instance methods. Additionally, we include a separate submodule for loading precomputed time series in bulk, or re-integrating all systems, which is useful for benchmarking tasks.
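A minimal usage sketch is shown below; the module, class, and method names follow the public repository as we understand it and may differ slightly between releases, so they should be read as assumptions rather than a definitive API reference.
\begin{verbatim}
# Minimal usage sketch; names follow the public repository and may change.
from dysts.flows import Lorenz

model = Lorenz()                          # metadata loaded from the JSON records
trajectory = model.make_trajectory(1000)  # re-integrate with default settings
print(trajectory.shape)                   # (1000, 3) for this three-variable flow
\end{verbatim}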
Our object representation abstracts the underlying records and metadata for each system, which are stored in a \texttt{JSON} file. The attributes recorded in the database file for each system are listed in Table \ref{metadata}.
For each dynamical system, we include $16$ precomputed time series corresponding to all combinations of the following: coarse and fine sampling granularity, train and test splits emanating from different initial conditions, multivariate and univariate views, and trajectories with and without Brownian noise influencing the dynamics. The precomputed granularities correspond to a coarse granularity sampled at $15$ points per period (the dominant timescale determined by surrogate testing on the power spectrum), and a fine granularity sampled at $100$ points per period. The stochastically-forced trajectories correspond to adding a Langevin forcing term to the right hand side of each component of the dynamical equation. We use a scaled force with amplitude equal to $1/40$ the standard deviation of the values the dynamical variable takes on the attractor in the absence of noise. When integrating these trajectories, we use a variant of the Runge-Kutta algorithm for stochastic differential equations \cite{rossler2010runge}, as implemented in the Python package \texttt{sdeint}.
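The forcing described above can be reproduced with a short script along the following lines; the \texttt{sdeint} call names reflect our understanding of that package, the Lorenz right hand side stands in for an arbitrary system, and the per-coordinate standard deviations are placeholder values that would normally be measured from a noise-free trajectory.
\begin{verbatim}
import numpy as np
import sdeint

def lorenz(s, t, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

# hypothetical per-coordinate standard deviations from a noise-free trajectory
attractor_std = np.array([8.0, 9.0, 8.6])
noise_amplitude = attractor_std / 40.0     # the 1/40 scaling described above

def diffusion(s, t):
    # diagonal, state-independent Langevin forcing
    return np.diag(noise_amplitude)

tspan = np.arange(0.0, 50.0, 0.01)
y0 = np.array([1.0, 1.0, 1.0])
trajectory = sdeint.itoSRI2(lorenz, diffusion, y0, tspan)  # Roessler-type scheme
\end{verbatim}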
\begin{table}
\caption{Properties recorded for each chaotic system in the dataset}
\hspace*{-2cm}
\begin{tabular}{ l l }
\hline
System Name & \null \\
\hline
Reference & A citation to published work or original source where available. \\
Description & A brief description of domain area, or original motivation for publication \\
Parameters & Parameters governing the differential equation (e.g for bifurcations) \\
Embedding Dimension & The number of dynamical variables, or the number set by default for delay equations \\
Unbounded Indices & Indices of dynamical variables that grow without bound (e.g. time for nonautonomous systems) \\
dt & The integration timestep, determined by surrogate testing of the power spectrum \\
Initial Conditions & Initial conditions on the attractor, determined by a long simulation discarding a transient \\
Period & The dominant timescale in the system, determined by surrogate testing of the power spectrum \\
Lyapunov Spectrum & The spectrum of Lyapunov exponents, a measure of trajectory dispersion \\
Largest Lyapunov Exponent & The largest Lyapunov exponent, a measure of chaoticity \\
Correlation Dimension & The fractal dimension, a measure of geometric complexity \\
Kaplan-Yorke Dimension & An alternative fractal dimension, a measure of geometric complexity \\
Multiscale Entropy & A measure of signal complexity \\
Pesin Entropy & An upper bound on the entropy under discretized measurements \\
Delay & Whether the system is a delay differential equation \\
Hamiltonian & Whether the dynamics are Hamiltonian \\
Non-autonomous & Whether the dynamics depend explicitly on time\\
\hline
\end{tabular}
\label{metadata}
\end{table}
\section{Glossary}
Here, we provide a glossary of several terms as they appear in the work presented here. More detailed treatments can be found in several references \cite{guckenheimer2013nonlinear,kuznetsov2013elements,strogatz2018nonlinear,ott2002chaos}.
\paragraph{Attractor.} A set of points within the state space of a dynamical system that most initial conditions approach over time. These points usually represent a subset of the full state space. In the work presented here, “attractor” and “dynamical attractor” are used interchangeably.
\paragraph{Bifurcation.} A qualitative change in the dynamics exhibited by a dynamical system, as one or more system parameters are varied. For example, a strange attractor can become a periodic orbit or a fixed point as one of the parameters of the underlying dynamical equations is varied. Importantly, bifurcations occur as the result of changes to the underlying dynamical system, and do not in themselves result from the dynamics.
\paragraph{Dynamical System.} A set of rules describing how points within a space evolve over time. Dynamical systems usually appear either as (1) systems of coupled ordinary differential equations, which can be integrated to produce continuous-time trajectories, or (2) discrete-time maps that send points at one timepoint to new points a fixed interval $\Delta t$ later. In the context of the work presented here, a dynamical system is a single set of deterministic ordinary differential equations (e.g. the Lorenz system).
\paragraph{Entropy.} A statistical property of a dynamical system corresponding to the gain of information over time as the system is observed. A highly regular and predictable process will have low entropy, while a stochastic process will have high entropy. Unlike dimensionality, the entropy of a system typically does not require a notion of distance on the state space. For example, if different regions of an attractor are colored with discrete labels, it is possible to define the entropy of a trajectory based on the sequence of symbols it passes through—without referencing the precise locations visited, or the distance among the symbols.
\paragraph{Ergodic.} A property of a dynamical system specifying that, over sufficiently long timescales, the system will visit all parts of its state space. A dissipative dynamical system will not be ergodic over its full state space, but it may be ergodic once it settles onto an attractor. In the context of time series analysis, ergodicity implies that a forecasting model trained on many short trajectories initialized at different points on an attractor will have the same properties as a model trained on subsections of a single long trajectory.
\paragraph{Fractal.} A set of points that appears self-similar over all length scales. Fractals have dimensionality intermediate to traditional mathematical objects like lines and surfaces, resulting in a diffuse appearance.
\paragraph{Initial Conditions.} A point within the state space of a dynamical system. As time passes, the rules specifying the dynamical system will map this point to other points within the system's state space. An initial condition does not necessarily lie on an attractor of the dynamical system.
\paragraph{Limit Cycle.} A type of attractor in which trajectories undergo recurring periodic motion. A damped, periodically driven pendulum that settles into steady swinging exhibits a limit cycle.
\paragraph{Lyapunov Exponent.} The initial growth rate of an infinitesimal perturbation to a point within a dynamical system's state space. If two initial conditions are chosen with infinitesimal initial separation, then as time passes their separation grows approximately as $e^{\lambda t}$, where $\lambda$ denotes the Lyapunov exponent. For non-chaotic systems (such as systems evolving along regular limit cycles), neighboring points do not diverge, and so the Lyapunov exponent is zero. When used in reference to an entire attractor, the Lyapunov exponent corresponds to an average over all points on the attractor.
\paragraph{Quasiperiodic Motion.} A type of attractor corresponding to non-repeating continuous motion, which does not exhibit fractal structure. The dynamics contain at least two frequencies that are incommensurate with one another. Quasiperiodic attractors have integer fractal dimension and a surface-like appearance, in contrast to the diffuse appearance of strange attractors.
\paragraph{Stable Fixed Point.} A type of attractor in which trajectories converge to a single location within the state space.
\paragraph{State Space.} The set of all possible states of a dynamical system. Initial conditions, trajectories, and attractors are all subsets of this space.
\paragraph{Strange Attractor.} An attractor in which trajectories continuously wander over a bounded region in state space, but never stop at a fixed point or settle into a repeating limit cycle. The dynamics are therefore globally stable, but locally unstable: the attractor contains a dense set of unstable periodic orbits, and trajectories briefly shadow individual orbits before escaping onto others. These unstable orbits span a continuous range of frequencies, producing motion at a range of length scales—and resulting in the fractal appearance of strange attractors.
\paragraph{Trajectory.} A set of points corresponding to the locations to which a given initial condition is mapped by a dynamical system. Trajectories are continuous curves for continuous-time systems, and isolated points for discrete-time maps.
\section{Calculation of mathematical properties}
For all mathematical properties we perform $20$ replicate computations from different initial conditions, and record the average in our database. To ensure high-quality estimates, we compute trajectories at a high granularity of $500$ points per period (as determined by the dominant frequency in the power spectrum), and we use trajectories of length $2500$ points, corresponding to five complete periods.
\paragraph{Timescale alignment.} All systems in our database have been timescale-aligned, allowing them to be re-integrated at equivalent dominant timescales and sampling rates. This feature differentiates our approach from other time series collections, as well as previous applications of data-driven models to ordinary differential equations, and it allows easier comparison among systems. In order to align timescales, for each system we calculate the optimal integration timestep by computing the power spectrum, and then using random phase surrogates in order to identify the highest and the dominant significant frequencies \cite{kantz2004nonlinear}. The highest significant frequency (the fastest timescale present) determines the integration timestep when re-integrating each system, while the highest amplitude peak in the power spectrum determines the dominant significant frequency, and thus the governing timescale. We use the dominant timescale to downsample the integrated dynamics, ensuring consistency across systems. We record both fields in our database.
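A minimal sketch of this alignment step for a single coordinate of a trajectory is shown below, assuming an array \texttt{x} sampled at timestep \texttt{dt}. The shuffled surrogates used here are a simple stand-in for the random phase surrogate test described above, and the multiplicative factors relating frequencies to timesteps are illustrative rather than the library's actual defaults.
\begin{verbatim}
import numpy as np

def significant_frequencies(x, dt, n_surr=200, q=0.99):
    # Periodogram of the signal and of surrogates with no temporal structure
    x = x - x.mean()
    freqs = np.fft.rfftfreq(len(x), d=dt)
    power = np.abs(np.fft.rfft(x)) ** 2
    surr = np.array([np.abs(np.fft.rfft(np.random.permutation(x))) ** 2
                     for _ in range(n_surr)])
    keep = power > np.quantile(surr, q, axis=0)  # peaks exceeding surrogate power
    return freqs[keep], power[keep]

freqs, power = significant_frequencies(x, dt)
f_dominant = freqs[np.argmax(power)]    # governing (dominant) timescale
f_highest = freqs.max()                 # fastest significant timescale
dt_integrate = 1.0 / (500 * f_highest)  # integration timestep (factor illustrative)
dt_sample = 1.0 / (100 * f_dominant)    # default sampling rate (factor illustrative)
\end{verbatim}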
\paragraph{Lyapunov Exponents.} We implement standard techniques for computing Lyapunov exponents \cite{wolf1985determining,holzfuss1991lyapunov,datseris2018dynamicalsystems}. Our basic approach consists of following a bundle of vectors along a trajectory, and at each timestep using the Gram-Schmidt procedure to re-orthonormalize the bundle. The stretching rates of the principal axes provide estimates of the Lyapunov exponents in each direction.
When determining the Lyapunov exponents, for each initial condition we continue integration until the smallest-magnitude Lyapunov exponent drops below our tolerance level of $10^{-8}$, because all continuous-time systems have at least one zero-magnitude exponent. Our replicate spectrum estimates across initial conditions are averaged with weighting proportional to the distance between the smallest-magnitude exponent and zero, in order to produce a final estimate.
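A minimal sketch of this procedure follows, assuming a user-supplied integrator \texttt{step(x, dt)} for the flow and a Jacobian function \texttt{jac(x)}; both are placeholders for the actual library internals, and the tangent-space update uses a simple Euler step for brevity.
\begin{verbatim}
import numpy as np

def lyapunov_spectrum(x0, jac, step, dt=1e-3, n_steps=100000):
    n = len(x0)
    x = np.array(x0, dtype=float)
    Q = np.eye(n)                       # orthonormal bundle of perturbation vectors
    sums = np.zeros(n)
    for _ in range(n_steps):
        x = step(x, dt)                 # advance the trajectory
        Q = Q + dt * jac(x) @ Q         # evolve the perturbations in the tangent space
        Q, R = np.linalg.qr(Q)          # Gram-Schmidt re-orthonormalization
        sums += np.log(np.abs(np.diag(R)))  # accumulate stretching rates of the axes
    return sums / (n_steps * dt)        # estimated Lyapunov exponents
\end{verbatim}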
\paragraph{Fractal Dimension.} We compute the fractal dimension using the Grassberger-Procaccia algorithm for the correlation dimension, a robust nonparametric estimator of the fractal dimension that can be calculated deterministically from finite point sets \cite{grassberger1983characterization}.
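A minimal sketch of the correlation-dimension estimate is shown below, assuming \texttt{X} is an array of attractor points (timepoints by dimensions); the radii and fitting range are illustrative choices rather than the values used for the database.
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import pdist

def correlation_dimension(X, n_radii=20):
    dists = pdist(X)                                   # all pairwise distances
    radii = np.geomspace(np.percentile(dists, 1),
                         np.percentile(dists, 50), n_radii)
    corr_sum = np.array([np.mean(dists < r) for r in radii])  # correlation integral C(r)
    # Correlation dimension = slope of log C(r) versus log r in the scaling region
    slope, _ = np.polyfit(np.log(radii), np.log(corr_sum), 1)
    return slope
\end{verbatim}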
\paragraph{Entropy.} The multiscale entropy was used to estimate the intrinsic complexity of each trajectory \cite{costa2002multiscale}. While a multivariate generalization of the multiscale entropy has recently been proposed \cite{ahmed2011multivariate}, due to convergence issues we calculate the entropy separately for each dynamical variable, and then record the median across all coordinates. Because this approach fails to take into account common motifs across multiple dimensions, we expect that our calculations overestimate the true entropy of the underlying systems. A similar effect occurs when mutual information is computed among subsets of correlated variables.
\paragraph{Additional mathematical properties.} We also record in our database several properties derived from the spectrum of Lyapunov exponents, including Pesin's upper bound on the entropy (the sum of all positive Lyapunov exponents) and the Kaplan-Yorke fractal dimension (an alternative estimator of the fractal dimension) \cite{ott2002chaos,kantz2004nonlinear}.
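Both quantities follow directly from a sorted Lyapunov spectrum, as in the sketch below; the numerical values shown are illustrative (approximately those of the Lorenz attractor).
\begin{verbatim}
import numpy as np

lyap = np.array([0.90, 0.0, -14.57])        # example spectrum, sorted descending

pesin_entropy = np.sum(lyap[lyap > 0])      # Pesin's upper bound on the entropy

# Kaplan-Yorke dimension: largest j with non-negative partial sum, plus the
# fraction of the next (negative) exponent needed to reach zero
cumsum = np.cumsum(lyap)
j = np.max(np.where(cumsum >= 0)[0])
d_ky = (j + 1) + cumsum[j] / np.abs(lyap[j + 1])
print(pesin_entropy, d_ky)                  # roughly 0.90 and 2.06 here
\end{verbatim}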
\section{Statistical Features and Embedding}
For each dynamical system, we generate $40$ trajectories of length $2000$ originating from random initial conditions on the attractor. We use the default granularity of $100$ points per dominant period, as determined by the Fourier transform. For each system and replicate, we compute $787$ standard time series features \cite{christ2018time}. For each dynamical system and replicate, we drop all null features, and then use an inner join operation to retain only features that appear across all dynamical systems and replicates. We then retain only the $100$ features with the highest variance across all dynamical systems.
We use these features to generate an embedding with UMAP \cite{mcinnes2018umap}. We repeat this procedure for each of the $40$ random initial conditions that were featurized for each dynamical system, and we report the median across replicates as the embedding of the dynamical system. We use affinity propagation with default hyperparameters in order to identify eight clusters within the embedding \cite{pedregosa2011scikit}.
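A minimal sketch of the embedding and clustering pipeline is shown below, assuming \texttt{features} is the matrix of retained high-variance features (one row per system, already aggregated over replicates); the standardization step is an assumption on our part.
\begin{verbatim}
import umap
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AffinityPropagation

scaled = StandardScaler().fit_transform(features)        # z-score each feature
embedding = umap.UMAP(n_components=2).fit_transform(scaled)
clusters = AffinityPropagation().fit_predict(embedding)  # default hyperparameters
\end{verbatim}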
\section{Forecasting Experiments}
Benchmarks are computed on the Harvard FAS Cannon cluster, using two Tesla V100-PCIE-32GB GPUs and 32 GB RAM per node. Benchmarks are implemented with the aid of the \texttt{darts}, \texttt{GluonTS}, and \texttt{sktime} libraries \cite{loning2019sktime,alexandrov2020gluonts,unit8co2020darts}.
\paragraph{Models.} We include forecasting models from several domains: deep learning methods (NBEATS, Transformer, LSTM, and Temporal Convolutional Network), statistical methods (Prophet, Exponential Smoothing, Theta, 4Theta), common machine learning techniques (Random Forest), classical forecasting methods (ARIMA, AutoARIMA, Fourier transform regression), and standard naive baselines (naive mean, naive seasonal, naive drift) \cite{oreshkin2019n,lea2016temporal,alexandrov2020gluonts,godahewa2021monash}. All non-tuned hyperparameters (e.g. training epochs, number of layers, etc) are kept at default values used in reference implementations included in the \texttt{darts}, \texttt{GluonTS}, and \texttt{sktime} libraries \cite{loning2019sktime,alexandrov2020gluonts,unit8co2020darts}.
\paragraph{Hyperparameter tuning.} Hyperparameter tuning is performed separately for each forecasting model, dynamical system, and sampling granularity. The training set for each attractor consists of a single train time series comprising a trajectory emanating from a random location on the chaotic attractor. For each trajectory, $10$ full periods are used to train the model, and $2$ periods are used to generate forecast mean-squared-errors to evaluate combinations of hyperparameters. These splits correspond to $150$ and $30$ timepoints for the coarse granularity datasets, and $1000$ and $200$ timepoints for the fine granularity datasets.
Because benchmarks are computed at both coarse and fine granularities, hyperparameters with units of timepoints (such as the lag or input window length) are searched over different value ranges for the two granularities: 1 timepoint, 5 timepoints, half of a period (8 timepoints for the coarse granularity, 50 timepoints for the fine granularity), and one full period (15 timepoints / 100 timepoints). For forecast models that accept a seasonality hyperparameter, the presence of additive seasonality (such as monochromatic forcing) is treated as an additional hyperparameter. A standard grid search is used to find the best set of hyperparameters separately for each model, system, and granularity.
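The grid search can be summarized as in the sketch below, where \texttt{fit\_and\_score} is a hypothetical helper that trains the named model with a given lookback length and seasonality flag and returns the validation mean squared error; the grid shown corresponds to the coarse granularity.
\begin{verbatim}
lag_grid = [1, 5, 8, 15]            # 1 pt, 5 pts, half period, full period (coarse)
seasonality_grid = [False, True]    # only for models that accept seasonality

best_score, best_params = float("inf"), None
for lag in lag_grid:
    for seasonal in seasonality_grid:
        score = fit_and_score("ARIMA", lag=lag, seasonal=seasonal,
                              train=train_series, val=val_series)
        if score < best_score:
            best_score, best_params = score, (lag, seasonal)
\end{verbatim}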
\paragraph{Scoring.} The testing dataset consists of a single time series emanating from another point on the same attractor. On this trajectory, a model is trained on the first $10$ periods using the best hyperparameters found on the train dataset, and the forecast score is generated on the remaining $2$ periods of the testing time series. Several standard time series similarity metrics are recorded for each dynamical system and forecasting model: mean absolute percentage error (MAPE), symmetric mean absolute percentage error (SMAPE), coefficient of variation (CV), mean absolute error (MAE), mean absolute ranged relative error (MARRE), mean squared error (MSE), root mean squared error (RMSE), coefficient of determination ($r^2$), and mean absolute scaled error (MASE).
\subsection{The effect of noise on forecasting results.}
In order to determine the robustness of our experimental results to the presence of non-deterministic noise in the dataset, we perform a full replication of our experiments above on a modified dataset that includes noise. For each dynamical system, the scale of each dynamical variable is determined by generating a reference trajectory without noise, and calculating the standard deviation along each dimension. A new trajectory is then generated with noise of amplitude equal to 20\% of the scale of each dynamical variable. Figure \ref{fig_noise_forecast} shows the result of our benchmarks with noise, compared to our benchmarks in the absence of noise.
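A minimal sketch of the noise injection is shown below, assuming \texttt{traj\_clean} is the noise-free reference trajectory and \texttt{traj} is the trajectory to be corrupted (both arrays of shape timepoints by dimensions).
\begin{verbatim}
import numpy as np

scale = traj_clean.std(axis=0)   # per-variable scale from the reference trajectory
noise_level = 0.2                # 20% of the scale of each dynamical variable
traj_noisy = traj + noise_level * scale * np.random.randn(*traj.shape)
\end{verbatim}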
As expected, the median forecasting performance degrades for all methods in the presence of noise. Noise only weakly affects the naive baselines, because the range of values present in the data remains the same in the presence of noise. The deep learning models continue to perform very well, consistent with general intuition that large, overparametrized models effectively filter low-information content from complex signals \cite{hastie2009elements}. Interestingly, the performance of the random forest model noticeably degrades with noise, suggesting that the representation learned by the model is fragile in the presence of extraneous information from noise. Conversely, the simple Fourier transform regression performs better than several more sophisticated models in the presence of noise. We hypothesize that high-frequency noise disproportionately obfuscates phase information within the signal, and so forecasting models that project time series onto periodic basis functions (e.g., Fourier and N-BEATS) are least impacted.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{sfig_forecast_noise.jpg}
\caption{{\bf Forecasting results with and without noise.} Each panel shows the distribution of forecast errors for all dynamical systems across different forecasting models, sorted by increasing median error. Dark and light hues correspond to coarse and fine time series sampling granularities. Upper panel corresponds to results for the full chaotic systems collection without noise, and lower panel corresponds to results from replicate experiments in which noise is present. Note that the model order along the horizontal axis differs between the two panels, because the relative performance of the different forecasting methods changes in the presence of noise.
}
\label{fig_noise_forecast}
\end{figure*}
\section{Forecasting experiments as granularity and noise are varied}
In order to better understand how the performance of different forecasting models depends on properties of the time series, we perform a set of experiments in which we re-train all forecasting models on datasets with a range of granularities and noise levels. We define the noise level in the same way as in our forecasting experiments: a noise level of $0.2$ corresponds to a noise amplitude equal to $20\%$ of the standard deviation of the noise-free signal. Granularity refers to the number of points sampled per period, as defined by the dominant significant frequency in the power spectrum. For these experiments, the same hyperparameters are used as for the original forecasting experiments. However, for the granularity sweep, hyperparameters that have units equivalent to timescale (e.g. number of time lags, or input chunk size) are rescaled by the granularity.
The results are shown in Figure \ref{fig_sweep}. We find that forecasting models are most strongly differentiated at low noise levels, and that as the noise level exceeds the average amplitude of the signal the performance of the models converges. This effect arises because there is less usable information in the signal for forecasting. However, the relative ranking of the different models remains somewhat stable as noise intensity increases, suggesting that the deep learning models remain effective at extracting relevant information even in the presence of dominant noise.
The granularity results show that the relative performance of different forecasting models is stable across granularities, and that the deep learning models (and particularly NBEATS) continue to perform well across a range of granularities. However, unlike the statistical methods, the performance of the deep learning models fluctuates widely across granularities, and in a systematic manner that cannot be attributed to sampling error---all points and rankings are averages over all $131$ systems. These results suggest that more complex models may have timescale bias in their default architectures. However, we caution that exhaustive (albeit computationally expensive) hyperparameter tuning is needed to further understand this effect.
\begin{figure*}
\centering
\includegraphics[width=0.7\linewidth]{sfig_sweep.pdf}
\caption{{\bf Variation in forecasting model performance as noise level and granularity are varied.} Points and shaded ranges correspond to medians and standard errors across dynamical systems.
}
\label{fig_sweep}
\end{figure*}
\section{Relative performance of forecasting models across different mathematical properties}
In order to determine whether different forecasting models are better suited to different types of dynamical system, we analyze our forecasting benchmarks stratified by different mathematical properties of the dynamical systems. For a given mathematical property (such as the Lyapunov exponent), we select only the dynamical systems in the bottom $20\%$ of systems (i.e. the least chaotic systems), and we compute the average forecast error for each forecasting model on just this group. We repeat the analysis for the dynamical systems in the $10 - 30\%$ quantile window, then $20 - 40\%$, and so forth, in order to determine how the forecasting performance of each model type varies with the level of chaoticity. We repeat the analysis for the correlation dimension and multiscale entropy. Our results are shown in Figure \ref{fig_rank}.
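A minimal sketch of this sliding-quantile analysis follows, assuming a dataframe \texttt{results} with one row per system, a column holding the mathematical property (here \texttt{lyapunov}), and per-model error columns listed in \texttt{model\_cols}; all names are illustrative.
\begin{verbatim}
import numpy as np
import pandas as pd

windows = [(lo, lo + 0.20) for lo in np.arange(0.0, 0.81, 0.10)]  # 0-20%, 10-30%, ...
rows = []
for lo, hi in windows:
    q_lo, q_hi = results["lyapunov"].quantile([lo, hi])
    subset = results[(results["lyapunov"] >= q_lo) & (results["lyapunov"] <= q_hi)]
    rows.append(subset[model_cols].median())   # typical error per model in this window
summary = pd.DataFrame(rows,
                       index=[f"{int(100*lo)}-{int(100*hi)}%" for lo, hi in windows])
\end{verbatim}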
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{sfig_ranks.jpg}
\caption{{\bf Variation in forecasting model performance across different mathematical properties.} The horizontal axis of each plot corresponds to a sliding window comprising a $20\%$ quantile in the property across all systems. Points correspond to medians across all dynamical systems in that quantile.
}
\label{fig_rank}
\end{figure*}
\section{Importance Sampling Experiments}
Our importance sampling experiment consists of a modified version of our forecasting task. We choose a single model, the LSTM, and alter its training procedure in order to determine how it is affected by alternative sampling strategies. In order to control for unintended interactions, we use a single set of hyperparameters for models trained on all chaotic systems, corresponding to the most common values from our forecasting benchmark. As a result, the baseline forecast error is higher across the chaotic systems dataset compared to our forecasting experiments, in which the LSTM was tuned separately for each chaotic system.
Our procedure consists of the following: (1) We halt training every few epochs and compute historical forecasts (backtests) on the training trajectory. (2) We randomly sample timepoints proportionately to their error in the historical forecast, and then generate a set of initial conditions corresponding to random perturbations away from each sampled attractor point. (3) We simulate the full dynamical system for $\tau = 150$ timesteps for each of these initial conditions, and we use these new trajectories as the training set for the next $b = 30$ epochs. We repeat this procedure for $\nu = 5$ meta-epochs. For the original training procedure, the training time scales as the number of training epochs ($B = 400$) times the number of timepoints in a full trajectory.
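The loop can be sketched as follows; \texttt{backtest\_errors}, \texttt{integrate}, \texttt{perturbation\_scale}, \texttt{n\_samples}, and \texttt{dim} are placeholders rather than names from the actual implementation.
\begin{verbatim}
import numpy as np

tau, b, nu = 150, 30, 5
for meta_epoch in range(nu):
    errors = backtest_errors(model, train_trajectory)  # per-timepoint backtest error
    probs = errors / errors.sum()
    idx = np.random.choice(len(train_trajectory), size=n_samples, p=probs)
    inits = train_trajectory[idx] + perturbation_scale * np.random.randn(len(idx), dim)
    new_trajs = [integrate(system, x0, n_steps=tau) for x0 in inits]  # resample near
    model.fit(new_trajs, epochs=b)                     # high-error attractor points
\end{verbatim}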
For the control "full epoch" baseline, we use the standard training procedure. For our "random batch" control experiments, we repeat the importance sampling procedure, but randomly sample timepoints, rather than weighting points by their backtest error. We include this control in order to account for the possibility of forecast error decreasing with total training data, an effect that would lead the importance sampling procedure to perform well spuriously.
\section{Transfer Learning Experiments}
For our classification experiments, we start with the $128$ tasks currently within the UCR time series classification archive, and we narrow the set to the $96$ datasets that contain at least $100$ valid timepoints \cite{dau2019ucr}.
Our autoencoder is based on a causal dilated architecture recently shown to provide competitive performance among unsupervised embedding methods on the UCR archive \cite{franceschi2019unsupervised}. Following previous work, our encoder comprises a single causal convolutional block \cite{bai2018empirical}, containing two causal convolutions with kernel size $3$ and dilations of $2$. A convolutional residual connection bridges the input layer and the latent layer, and leaky ReLU activations are used throughout. Unlike previous studies that learned embeddings using a triplet loss (thereby eliminating the need for a decoder) \cite{franceschi2019unsupervised}, we use a standard decoder similar to our previous study on chaotic system embedding \cite{gilpin2020deep}, consisting of a three-layer standard convolutional network with ELU activation functions. We train our models using the Adam optimizer with mean squared error loss and a learning rate of $10^{-3}$ \cite{kingma2014adam}. Our PyTorch network implementations are included in the project repository.
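A minimal PyTorch sketch of this architecture is shown below; the channel widths and latent size are illustrative choices, not the exact values used in our implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class CausalConv1d(nn.Conv1d):
    """Dilated 1D convolution that only sees past timepoints."""
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=2):
        super().__init__(in_ch, out_ch, kernel_size, dilation=dilation,
                         padding=(kernel_size - 1) * dilation)
        self.trim = (kernel_size - 1) * dilation

    def forward(self, x):
        out = super().forward(x)
        return out[..., :-self.trim] if self.trim > 0 else out

class Autoencoder(nn.Module):
    def __init__(self, latent=16):
        super().__init__()
        self.encoder = nn.Sequential(
            CausalConv1d(1, 32), nn.LeakyReLU(),
            CausalConv1d(32, latent), nn.LeakyReLU(),
        )
        self.skip = nn.Conv1d(1, latent, kernel_size=1)  # residual bridge: input -> latent
        self.decoder = nn.Sequential(
            nn.Conv1d(latent, 32, 3, padding=1), nn.ELU(),
            nn.Conv1d(32, 32, 3, padding=1), nn.ELU(),
            nn.Conv1d(32, 1, 3, padding=1),
        )

    def forward(self, x):                # x: (batch, 1, time)
        z = self.encoder(x) + self.skip(x)
        return self.decoder(z)

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
\end{verbatim}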
We train separate encoders for each classification task in the UCR archive. Briefly, we retrieve the training dataset for a given classification task, and we use phase surrogate testing to determine the dominant frequency in the training data. We then convert this timescale into an effective granularity (in points per dominant period) for the training data. We then re-integrate all $131$ dynamical systems within our dataset, with the granularity set to match the training data. We train the autoencoder on these trajectories, and we then apply the encoder to the training data of the classification task, in order to generate a featurized time series. For our "random timescale" ablation experiment, we select random granularities unrelated to the training data, and otherwise repeat the procedure above.
Having obtained encoded representations of the classification task training data, we then convert the training data into a featurized representation using \texttt{tsfresh}, a suite that generates $787$ standard time series features (such as number of peaks, average power, wavelet coefficients) \cite{christ2018time}. We then pass these features to a standard ridge regression classifier, which we set to search for $\alpha$ values over a range $10^{-3}$ -- $10^{3}$ via cross-validation \cite{pedregosa2011scikit}. Our approach to classifying time series is based upon recent methods for generating classification results from features learned from time series in an unsupervised setting, which found that complex unsupervised feature extractors followed by supervised linear classification yield competitive performance \cite{dempster2020rocket}. For our "no transfer learning" baseline, we apply the featurization and regression to the bare original training data for the classification problem.
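A minimal sketch of the classification head follows, assuming \texttt{train\_feats} and \texttt{test\_feats} are the \texttt{tsfresh} feature matrices computed from the encoded training and test series, with labels \texttt{y\_train} and \texttt{y\_test}; the standardization step is an assumption on our part.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import RidgeClassifierCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

clf = make_pipeline(
    StandardScaler(),
    RidgeClassifierCV(alphas=np.logspace(-3, 3, 13)),  # cross-validated alpha search
)
clf.fit(train_feats, y_train)
test_accuracy = clf.score(test_feats, y_test)
\end{verbatim}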
Our reported scores correspond to accuracy on the test partition of the UCR archive. The timescale extraction, surrogate data generation, autoencoder, \texttt{tsfresh} featurization, and ridge classifier cross-validation steps are all trained only on the training data; the trained encoder, \texttt{tsfresh} featurization, and ridge classifier are then applied to the test data.
\section{Symbolic Regression Experiments}
Our symbolic regression dataset consists of input values corresponding to points along a trajectory, and target values corresponding to the value of the right hand side of the governing differential equation at those points. For our benchmark, we generate train and test datasets corresponding to trajectories originating from different locations on the attractor. Because we are interested in performance using information sampled across the attractor, we generate long trajectories ($10$ full periods, as determined by the dominant timescale in the power spectrum) at low sampling granularity ($15$ points per period), for a total of $150$ datapoints in each of the train and test trajectories. This number of points is comparable to existing benchmarks \cite{la2021contemporary}. While, in principle, random inputs could be generated and used to produce output values for our differential equations, because our target formulae correspond to dynamical systems, we favor using trajectories---which would best simulate observations from a real-world system. As we note in the main text, the accuracy of the recovered formulae will likely be reduced in regions of the attractor with lower measure.
For PySINDy, we fit separate models with purely polynomial and purely trigonometric bases. For DSR and PySR, we use default hyperparameters, and allow a fixed library of binary and unary expressions, $\{+, - , \times, \div\}$ and $\{\sin, \cos, \exp, \log, \tanh \}$ \cite{petersen2019deep}. Because our dynamical systems are multivariate, we fit separate expressions to each dynamical variable, and record the median across dynamical variables as the overall error for the system.
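A minimal sketch of the PySINDy fits follows, assuming \texttt{X\_train} and \texttt{X\_test} are trajectories (timepoints by dimensions) sampled at timestep \texttt{dt}; the library degrees and other settings shown are illustrative defaults.
\begin{verbatim}
import pysindy as ps

model_poly = ps.SINDy(feature_library=ps.PolynomialLibrary(degree=3))
model_poly.fit(X_train, t=dt)
model_trig = ps.SINDy(feature_library=ps.FourierLibrary())
model_trig.fit(X_train, t=dt)

model_poly.print()                      # recovered symbolic right-hand sides
rhs_pred = model_poly.predict(X_test)   # predicted derivatives on the unseen trajectory
\end{verbatim}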
We apply the expressions generated by symbolic regression to the unseen test trajectory, and we treat the resulting values as forecasts. We therefore record the same error metrics as for our forecasting benchmark above.
\section{Neural Ordinary Differential Equation Experiments}
We perform a preliminary neural ordinary differential equation (nODE) experiment, in order to evaluate whether mathematical properties of a dynamical system influence the properties of a fitted nODE. We design our experiment identically to our fine-granularity forecasting benchmark above: for each system, a multivariate training trajectory consisting of $1000$ timepoints is used to train a nODE model \cite{chen2018neural}. An unseen "test" initial condition is then randomly chosen, and $200$ timepoint trajectories are generated using both the true dynamical system, and the trained neural ODE. The quality of the resulting trajectory is evaluated using the sMAPE error between the predicted and true trajectory.
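A minimal sketch of the nODE training loop follows, assuming \texttt{y\_train} (shape $1000 \times$ dimension) and the corresponding times \texttt{t\_train} are PyTorch tensors; the network width and number of epochs are illustrative.
\begin{verbatim}
import torch
import torch.nn as nn
from torchdiffeq import odeint

class ODEFunc(nn.Module):
    def __init__(self, dim, width=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, width), nn.Tanh(),
                                 nn.Linear(width, dim))
    def forward(self, t, y):
        return self.net(y)          # learned vector field dy/dt

func = ODEFunc(dim=y_train.shape[1])
optimizer = torch.optim.Adam(func.parameters(), lr=1e-3)
for epoch in range(500):
    optimizer.zero_grad()
    pred = odeint(func, y_train[0], t_train)   # integrate the learned field
    loss = torch.mean((pred - y_train) ** 2)
    loss.backward()
    optimizer.step()
\end{verbatim}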
Our results are shown in Figure \ref{fig_node}. Overall, the forecasting performance of the nODE model is competitive with other time series forecasting techniques, with the advantage of producing a differentiable representation of the underlying process that can potentially be used for downstream analysis. Qualitatively, we observe that the nODE dynamics frequently become trapped near unstable periodic orbits over long durations, suggesting that shadowing events observed in the training data dominate the learned representation \cite{guckenheimer2013nonlinear}.
Unlike our symbolic regression experiments, we find that there is no significant correlation between the quality of a nODE model and any underlying properties of the differential equations. Among the various mathematical properties (Lyapunov exponents, fractal dimension, etc.), the largest observed Spearman correlation was not significantly different from zero ($0.072 \pm 0.003$, median with standard error determined by bootstrapping).
\begin{figure*}
\centering
\includegraphics[width=0.4\linewidth]{sfig_histogram_node.pdf}
\caption{Distribution of error scores for the neural ordinary differential equation benchmark.
}
\label{fig_node}
\end{figure*}
\section{Datasheet: Dataset documentation and intended uses}
The primary inclusion criterion for dynamical systems is appearance in published work that provides explicit equations and parameter values producing chaotic dynamics. While there are infinitely many possible chaotic attractors, our collection surveys systems as they appear in the literature---which primarily comprises particular domain-area applications, as well as systems with particular mathematical properties. Below, we address the questions included in an existing dataset datasheet guide \cite{gebru2018datasheets}.
\subsubsection{Motivation}
\label{motivation}
\textbf{Purpose} This dataset was created for the purpose of providing a generative benchmark for time series mining applications, in which arbitrary synthetic data can be generated using a deterministic process.
\textbf{Unintended Uses} To our knowledge, there are no pressing uses for this data that could cause unintended harm. However, insofar as our dataset can be used to improve existing time series models (illustrated by our time series classification benchmark), there is a possibility of our dataset contributing to privacy concerns with time series analysis---particularly by making it possible for large models to identify latent factors that could, for example, de-anonymize physiological recordings \cite{shi2011privacy}. In our project repository, we include instructions asking users who become aware of any unintended harms to submit an issue on GitHub.
\textbf{Previous Uses} Some time series analysis utilities and specific systems in this repository were used in our previous work \cite{gilpin2020deep}, but the full dataset and benchmarks are all new.
\textbf{Creator and Funding.} This repository was created by William Gilpin, with support from the NSF-Simons Center for Quantitative Biology at Harvard University, as well as the University of Texas at Austin. No special funding was solicited for this project.
\subsubsection{Composition}\label{composition}
\textbf{Instances.} Each instance in this dataset comprises a set of nonlinear differential equations describing a chaotic process, a set of standard parameter values and initial conditions, a set of default timescales and integration timesteps, a set of characteristic mathematical properties, a citation to a published source (where available), a brief description of the system, and $16$ precomputed trajectories from the system under various granularities and initial conditions.
\textbf{Instance Relationships.} Each instance corresponds to a different dynamical system.
\textbf{Instance Count.} At the time of writing, there are 131 continuous-time dynamical systems (126 ordinary differential equations, and 5 delay equations). There are also 30 discrete-time chaotic maps; however, we do not include these in any analyses or discussion presented here.
\textbf{Instance Scope.} Each instance corresponds to a particular realization of a dynamical system, based on previously-published parameter values and initial conditions. In principle, an infinite number of additional chaotic systems exists; our dataset seeks to provide a representative sample of published systems.
\textbf{Labels.} Each trajectory and system contains metadata describing its provenance; however, there is no single label associated with each trajectory. All systems are labelled with a variety of annotations that can, in principle, be used as labels (see Table \ref{metadata}).
\textbf{External Dependencies.} The data itself has no external dependencies. Simulating each system requires several standard scientific Python packages (enumerated in the repository README file). Running the benchmarks requires several additional dependencies, which are also listed in the README.
\textbf{Data Splits.} No splits are baked-in, because (in principle) arbitrary amounts of training, validation, and testing data can be generated for each dynamical system. Splits can either be performed by
holding out some timepoints, or (for multivariate systems) by splitting the set of dynamical variables. For the purpose of benchmarking experiments, splits corresponding to $10$ periods of training data, and
$2$ periods of unseen prediction/validation data, were used for both the train and test datasets (the test dataset corresponds to an unseen initial condition). For the fine granularity time series, this corresponds to splits of 1000/200 for both the train and test initial conditions. For the coarse granularity time series, this corresponds to a split of 150/30. The data loader utilities included in the Python library use the $10$ period / $2$ period split by default.
\textbf{Experiments.} All benchmark experiments are described at length
in our preprint. They primarily consist of forecasting benchmarks,
generative experiments (importance sampling and model pretraining), and
data-driven model inference experiments.
\subsubsection{Collection}\label{collection}
\textbf{Collection.} ISI Web of Science was used to identify papers claiming novel low-dimensional chaotic systems published after 1963 (the year of Lorenz's original paper). Papers were sorted by citations in order to determine priority for re-implementation, and systems were only included that had (1) explicit analytical expressions and (2) published parameter values and initial conditions leading to chaos. All systems were re-implemented in Python and checked to verify that the reported dynamics were chaotic. Additionally, several previous collections and galleries of chaos were checked, to ensure that all entries are included \cite{sprott2010elegant,maier2003,datseris2018dynamicalsystems,myers2020teaspoon}.
\textbf{Workers.} All individuals involved in data collection and
curation are authors on the paper.
\textbf{Timeframe.} Data was collected from 2018 -- 2021.
\textbf{Instance Acquisition.} Each dynamical system required
implementation in Python of the stated dynamical equations, as well as
all parameter values and initial conditions leading to chaos. Each
system was then numerically integrated in order to ensure that the
observed dynamics matched those claimed in the original publication.
Once chaos was validated, the integration timestep and the trajectory
sampling rate were determined using the power spectrum, with time series
surrogate analysis used to identify significant frequencies. Once the
correct timescales were known, properties such as the Lyapunov exponents
and entropy were calculated. For all trajectory data and initial
conditions, a long transient was discarded in order to ensure that the
dynamics settled onto the attractor.
\textbf{Instance Scope.} There are effectively an infinite number of
possible chaotic dynamical systems, even in low dimensions. However, our
collection represents a sample of named and published chaotic systems,
and it includes most well-known systems.
\textbf{Sampling.} Because our dataset comprises only named and
published chaotic systems, it does not comprise a representative sample
of the larger space of all low-dimensional chaotic systems. Therefore,
our database should not be used to compute any quantities that depend on
the measure of chaotic systems within the broader space of all possible
dynamical systems. For example, a study that seeks to identify the most
common features or motifs of chaotic systems cannot use our database as
representative sample. However, our database does comprise a
representative sample of chaotic dynamics as they appear in the
literature.
\textbf{Missing Information.} For systems in which a reference citation
or additional context is unavailable, the corresponding field in the
metadata file is left blank. However, all systems have sufficient
information to be integrated.
\textbf{Errors.} If any errors or redundancies are identified,
we encourage users to submit an issue via GitHub.
\textbf{Noise.} Noise can be added to the trajectories either by adding random values to each observed
timepoint (measurement noise), or by performing a stochastic simulation (stochastic dynamics). A stochastic integration function is included in the Python library. The precomputed trajectories associated with each system include trajectories with noise.
\subsubsection{Preprocessing}\label{preprocessing}
\textbf{Cleaning.} Dynamical systems may be numerically integrated with
arbitrary precision, and their dynamics can be recorded at arbitrarily
small intervals. In order to report all systems consistently, we use
time series phase surrogate testing to identify the highest significant
frequency in the power spectrum of each system's dynamics. We then set
the numerical integration timestep to be proportional to this timescale.
We then re-integrate, and use surrogates to identify the dominant
significant frequency in each system's dynamics. We use this timescale
to determine the sampling rate. This process ensures overall that all
systems exhibit dynamical variation over comparable timescales, and that
the integration timestep is sufficiently small to accurately resolve the
dynamics.
Having determined the appropriate integration timescales, we then determine the Lyapunov exponents, average period, and other ensemble-level properties of each dynamical system. We compute these quantities for replicate trajectories originating from different initial conditions on the attractor, and record the average.
For each precomputed univariate time series dataset, the first coordinate of the system's dynamics is included.
\textbf{Raw data.} New time series data can be generated as needed via
the \texttt{make\_trajectory()} method of each dynamical system.
\textbf{Preprocessing Software.} All analysis software is included in
the repository.
\textbf{Motivation.} To our knowledge, dataset processing is consistent
with the underlying motivation of the dataset.
\subsubsection{Distribution}\label{distribution}
\textbf{Distribution.} The dataset is distributed on GitHub.
\textbf{First Distribution.} A private fork may be distributed with the
paper for review in order to maintain anonymity for certain venues. The
updated repository will be distributed with the final paper.
\textbf{License.} We include an Apache 2.0 License in the project
repository.
\textbf{Fees.} None.
\subsubsection{Legal}\label{legal}
\textbf{People.} No individuals are included in this dataset.
\textbf{Protected Subjects.} No ethically-protected subjects are
included in this dataset.
\textbf{Institutional Approval.} No institutional approval is required
for this dataset.
\textbf{Consent.} No individual data is included in this dataset.
\textbf{Harm.} No individual data is included in this dataset. However, the README file of the dataset repository includes instructions to submit an issue if an unintended harm is detected in the process of using this dataset.
\textbf{Disadvantages.} No individual data is included in this dataset.
\textbf{Privacy.} None of the data contains personal information.
\textbf{GDPR.} To our knowledge, this dataset complies with GDPR and
equivalent foreign standards.
\textbf{Sensitivity.} To our knowledge, this dataset contains no
sensitive or confidential information.
\textbf{Inappropriate.} This dataset contains no inappropriate or offensive content.
\section{Author statement and hosting plan}
The authors bear all responsibility in case of rights violations. The data license has been included elsewhere in this appendix. The authors have full control of the data repository on GitHub, and will ensure its continued accessibility.
\clearpage
| {
"timestamp": "2021-10-12T02:39:17",
"yymm": "2110",
"arxiv_id": "2110.05266",
"language": "en",
"url": "https://arxiv.org/abs/2110.05266",
"abstract": "The striking fractal geometry of strange attractors underscores the generative nature of chaos: like probability distributions, chaotic systems can be repeatedly measured to produce arbitrarily-detailed information about the underlying attractor. Chaotic systems thus pose a unique challenge to modern statistical learning techniques, while retaining quantifiable mathematical properties that make them controllable and interpretable as benchmarks. Here, we present a growing database currently comprising 131 known chaotic dynamical systems spanning fields such as astrophysics, climatology, and biochemistry. Each system is paired with precomputed multivariate and univariate time series. Our dataset has comparable scale to existing static time series databases; however, our systems can be re-integrated to produce additional datasets of arbitrary length and granularity. Our dataset is annotated with known mathematical properties of each system, and we perform feature analysis to broadly categorize the diverse dynamics present across the collection. Chaotic systems inherently challenge forecasting models, and across extensive benchmarks we correlate forecasting performance with the degree of chaos present. We also exploit the unique generative properties of our dataset in several proof-of-concept experiments: surrogate transfer learning to improve time series classification, importance sampling to accelerate model training, and benchmarking symbolic regression algorithms.",
"subjects": "Machine Learning (cs.LG); Signal Processing (eess.SP); Chaotic Dynamics (nlin.CD)",
"title": "Chaos as an interpretable benchmark for forecasting and data-driven modelling",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9719924777713886,
"lm_q2_score": 0.7279754430043072,
"lm_q1q2_score": 0.7075866546024808
} |
https://arxiv.org/abs/quant-ph/0412011 | Hidden Variables and Nonlocality in Quantum Mechanics | In this paper, we show that Erwin Schroedinger's generalization of the Einstein Podolsky Rosen argument can be connected to certain mathematical theorems - Gleason's and also Kochen and Specker's - in a manner analogous to the relation of EPR itself with Bell's theorem. In both cases, the conclusion is quantum nonlocality, as we discuss. The "Schroedinger nonlocality" proofs share some features with the Greenberger, Horne, and Zeilinger quantum-nonlocality work, yet also differ in significant ways. For clarity and completeness, we begin with a detailed discussion of the topic of hidden variable theorems. We argue, in agreement with John S. Bell, that 'impossibility' does not follow. |
\chapter{Contextuality} We have seen in the
previous section that the analysis of von Neumann has little impact
on the question of whether a viable hidden variables theory may be
constructed. However, further mathematical results were
developed by Andrew M. Gleason \cite{Gleason} in 1957 and by Simon
Kochen and E.P. Specker in 1967 \cite{Kochen Specker}, which were
claimed by some\footnote{See \cite{Kochen Specker, Mermin's Gleason, Reality
Marketplace}.} to imply the impossibility of hidden
variables. In the words of Kochen and Specker \cite [p. 73]{Kochen
Specker}: ``If a physicist X believes in hidden variables... ...the
prediction of X contradicts the prediction of
quantum mechanics''. The arguments of Gleason and of Kochen and Specker are, in fact, {\em stronger} than von Neumann's in that they assume linearity only for {\em commuting} observables. Despite
this, a close analysis reveals that the impossibility proofs
of Gleason and of Kochen and Specker share with
von Neumann's proof the neglect of the possibility of a hidden variables
feature called {\em contextuality} \cite{Bell Eclipse, Bell Imposs Pilot}.
We will find that this shortcoming
makes these theorems inadequate as proofs of the impossibility of
hidden variables.
We begin this chapter
with the presentation of Gleason's theorem and the theorem of Kochen and
Specker. We then discuss a more recent theorem discovered by
Mermin\footnote{See Mermin in
\cite{Mermin's Gleason, Mermin Review}.},
which is similar to these, and yet admits a much simpler proof.
This will be followed by a discussion of
contextuality and its relevance to these analyses. We will make clear
in this discussion why the theorems in question fail as impossibility
proofs. As far as the question of what conclusions {\em do} follow,
we show that these theorems' implications can be expressed in a simple and
concise fashion, which we refer to as ``spectral-incompatibility''.
We conclude the chapter with the discussion of an experimental
procedure first discussed by Albert\footnote{See D. Albert \cite{Albert}.}
which provides further insight into contextuality.
\section{Gleason's theorem} Von Neumann's theorem
addressed the question of the form taken by a function $E(O)$ of the
observables. Gleason's theorem essentially addresses the same
question\footnote{The original form presented by A.M. Gleason referred
to a probability measure on the subspaces of a Hilbert space, but the
equivalence of such a construction with a value map on the projection
operators is simple and immediate. This may be seen by considering
that there is a one-to-one correspondence of the subspaces and
projections of a Hilbert space and that the values taken by the
projections are $1$ and $0$, so that a function mapping projections to
their eigenvalues is a special case of a probability measure on these
operators.}, the most significant difference being that the linearity
assumption is relaxed to the extent that it is demanded that $E$ be
linear on only {\em commuting} sets of observables. Besides
this, Gleason's theorem involves a function $E$ on only the projection
operators of the system, rather than on all observables.
Finally, Gleason's theorem contains the assumption that the system's
Hilbert space is at least three dimensional. As for the conclusion of
the theorem, this is identical to von Neumann's: $E(P)$ takes the form
$E(P)=\mbox{Tr}(UP)$ where $U$ is a positive operator and
$\mbox{Tr}(U)=1$.
Let us make the requirement of linearity
on the commuting observables somewhat more explicit. First we note
that any
set of projections $\{P_{1},P_{2}, \ldots\}$ onto mutually orthogonal
subspaces $\{{\cal H}_{1},{\cal H}_{2},\ldots\}$ will form a commuting
set. Furthermore, if $P$ projects onto the direct sum ${\cal H}_{1}
\oplus {\cal H}_{2} \oplus \ldots$ of these subspaces, then
$\{P,P_{1},P_{2},\ldots\}$ will also form a commuting set. It is in the
case of this latter type of set that the linearity requirement comes
into play, since these observables obey the relationship
\begin{equation}
P=P_{1}+P_{2}+\ldots
\label{eq:GlPsum} \mbox{.}
\eeq
The condition on
the function $E$ is then
\begin{equation} E(P)=E(P_{1})+E(P_{2})+\ldots \mbox{.}
\label{eq:comm lin}
\eeq
The formal statement of Gleason's theorem is
expressed as follows. For any quantum system whose Hilbert space is at
least three dimensional, any expectation function $E(P)$ obeying
the conditions \ref{eq:comm lin}, $0 \leq E(P) \leq 1$, and $E({\bf
1})=1$ must take the form
\begin{equation}
E(P)=\mbox{Tr}(UP)
\label{eq:Gltracedude}
\mbox{,}
\eeq
where $\mbox{Tr}(U)=1$
and $U$ is a positive operator. We do not present the proof of this
result\footnote{See Bell \cite{Bell Eclipse}. Bell proves that any
function
$E(P)$ satisfying the conditions of Gleason's theorem cannot map
the projection operators to their eigenvalues.} here. In the next section,
we give an outline of the proof of Kochen and Specker's theorem. The same
impossibility result derived from Gleason's theorem also follows from that theorem.
It is straightforward to demonstrate that
the function $E(P)$ considered within Gleason's theorem cannot be a value map
function on these observables.
To see this, one may argue in the same fashion as was done by von
Neumann, since the form developed here for $E(P)$ is the same as that
concluded
by the latter. (See section \ref{Vonimp}). We recall that if $E(O)$ is
to represent a dispersion free
state specified by $\psi$ and $\mbox{$\lambda$}$, it {\em must} take the form
of such a value map,
and $E(O)$ evidently cannot be the expectation function
for such a state. It is on this basis that the impossibility of hidden
variables has been claimed to follow from Gleason's theorem.
\section{The theorem of Kochen and Specker}
As with Gleason's theorem, the essential assumption
of Kochen and Specker's theorem is
that the expectation function $E(O)$ must be linear on commuting sets of
observables. It differs from the former only in the set of observables
considered. Gleason's theorem was addressed to
the projection operators on a Hilbert space of arbitrary dimension $N$.
Kochen and Specker consider the squares
$\{s^{2}_{\theta,\phi}\}$ of the spin components of a spin $1$ particle.
One may note that these
observables are formally identical to projection
operators on a three-dimensional Hilbert space.
Thus, the Kochen and Specker observables are a
subset\footnote{The set of projections on a three-dimensional space
is actually a larger class of observables.}
of the ``$N=3$'' case of the Gleason
observables. Among
the observables $\{s^{2}_{\theta,\phi}\}$ any subset
$\{s^{2}_{x},s^{2}_{y},s^{2}_{z}\}$ corresponding
to mutually orthogonal component directions $x,y,z$ will be a commuting set.
Each such set obeys the relationship
\begin{equation}
s^{2}_{x}+s^{2}_{y}+s^{2}_{z}=2
\label{eq:KS comm}
\mbox{.}
\eeq
For every such subset, Kochen and Specker
require
that $E(s^{2}_{\theta,\phi})$ must obey
\begin{equation}
E(s^{2}_{x})+E(s^{2}_{y})+E(s^{2}_{z})=2
\mbox{.}
\label{eq:KS proj lin}
\eeq
The Kochen and Specker theorem states that there exists no
function $E(s^{2}_{\theta,\phi})$ on the squares of the spin components
of a spin $1$ particle which maps each observable to either $0$ or $1$ and
which satisfies \ref{eq:KS proj lin}.
We now make some comments regarding the nature of this theorem's proof.
The problem becomes somewhat simpler to discuss when formulated in
terms of a geometric model. Imagine a sphere of unit radius surrounding
the origin in $\mbox{$\rm I\!R$}^{3}$. It is easy to see that
each point on this sphere's surface corresponds to a direction in space,
which implies that each point is associated with one observable of the set
$\{s^{2}_{\theta,\phi}\}$.
With this, $E$ may be regarded as a function on
the surface of the unit sphere. Since the eigenvalues
of each of these spin observables are $0$ and $1$, it
must be that $E(O)$ takes on these values, if it
is to assume the form of a value map function. Satisfaction
of \ref{eq:KS proj lin} requires that for each set of mutually orthogonal
directions, $E$ must assign $0$ to one of them and $1$ to each of the other two. To gain some
understanding\footnote{We follow here the argument given in Belinfante
\cite[p. 38]{1974 Belinfante Book}}
of why such an assignment of values must fail, we proceed as follows.
To label the points on the sphere, we imagine
that each point on the sphere to which the number $0$ is assigned is
painted red, and each point assigned $1$ is painted blue. We label each
direction by its unit vector $\hat{n}$. Since one
direction of every mutually perpendicular set is assigned red, in
total we require that one-third of the sphere be painted red.
If we consider the components of the spin in {\em opposite} directions
$\theta,\phi$, and $180^{\circ}-\theta,180^{\circ}+\phi$, these values are
always opposite, i.e., if $s_{x}$ takes the value $+1$, then $s_{-x}$
takes the value $-1$. This implies that the values of $s^{2}_{\theta,\phi}$
and $s^{2}_{180^{\circ}-\theta,180^{\circ}+\phi}$ will be {\em equal}.
Therefore, we must have that points on the sphere lying
directly opposite each other, i.e. the ``antipodes'', must receive
the same assignment from $E$. Suppose that one direction and its antipode
are painted red. These points form the two poles of a great circle, and all points along this circle must then be painted blue,
since all such points represent directions orthogonal to the directions
of our two `red' points. For every such pair of red points on the sphere,
there must be many more blue points introduced, and we will find this makes
it impossible to make one-third of the sphere red, as would be necessary
to satisfy \ref{eq:KS proj lin}.
Suppose we paint the entire first octant of the sphere red. In terms of
the coordinates used by geographers, this is similar to
the region in the Northern hemisphere between $0$ degrees and $90$
degrees longitude. If the point at the north pole is painted red then
the great circle at the equator must be blue. Suppose that
the $0$ degree and $90$
degree meridians are also painted red. Then the octant which is the antipode
of the first octant
must also be painted red. This octant would be within the `southern
hemisphere' between $180$ degrees and $270$ degrees longitude. If we now
apply the condition that, for every point painted red, all points lying
on the great circle having that point as its pole must be painted blue, we find that all
remaining points of the sphere must then be colored blue, thereby
preventing the addition of more red points. This assignment fails to
meet the requirement that one-third of the sphere be red (equivalently, that no more than
two-thirds be blue), so there are some sets of mutually
perpendicular directions which are all colored blue. An example of such a
set of directions is provided by the points on the sphere's surface
lying in the mutually orthogonal directions represented by
$(.5,.5,-.7071)$, $(-.1464,.8535,.5)$,
$(.8535,-.1464,.5)$. Each of these points lies in a region to which we
have assigned the color blue under the above scheme. This assignment of
values to the spatial directions must therefore fail to meet the criteria
demanded of the function $E(s^{2}_{\theta,\phi})$ in Kochen and Specker's premises: that $E(s^{2}_{\theta,\phi})$ satisfies
\ref{eq:KS proj lin} and maps the observables to their
values.
In their proof, Kochen and Specker show that for a discrete
set of $117$ different directions in space,
it is impossible to give appropriate value assignments to the
corresponding spin observables\footnote{Since
their presentation, the proof has been simplified by Peres in 1991
\cite{Peres' Gleason} whose proof is based on examination of 33 such
$\hat{n}$ vectors. We also note that a proof presented by Bell
\cite{Bell Eclipse} may be shown to lead easily to a proof of Kochen and
Specker. See Mermin in this connection \cite[p. 806]{Mermin Review}.}.
Kochen and Specker then assert that hidden
variables cannot agree
with the predictions of quantum mechanics. Their conclusion
is that if some physicist `X' mistakenly
decides to accept the validity of hidden variables, then
``the prediction of X (for some measurements) contradicts the
prediction of quantum mechanics'' \cite[p. 73]{Kochen Specker}.
The authors cite a particular system on which one
can perform an experiment
they claim reveals the failure of the hidden variables prediction.
We will demonstrate in the next section of this work that what
follows from Kochen and Specker's
theorem is only that a {\em non-contextual} hidden
variables theory will conflict with quantum mechanics, so that
the general possibility of hidden variables has not been disproved.
Furthermore, we show that if the type of experiment envisioned
by these authors is considered in more detail, it does not indicate
where hidden variables must fail, but instead serves as an illustration
indicating that the requirement of contextuality is a quite natural one.
\subsection{Mermin's theorem \label{Mermin}}
We would like to discuss here a much more recent theorem discovered by
N. David
Mermin\footnote{See \cite{Mermin's Gleason} and \cite{Mermin Review}}
in 1990, which is of the same character as those considered
above. Mermin makes essentially the same assumption regarding the
function $E(O)$
as did Gleason, and Kochen and Specker: that
$E(O)$ must obey all
relationships among the commuting sets of observables.
This theorem is more straightforward in its
proof and simpler in form than those of Gleason and of Kochen and Specker.
The system addressed by Mermin's theorem is that of a pair of spin
$\frac{1}{2}$
particles. The observables involved are the $x$ and $y$ components of these
spins, and six other observables which are defined in terms of these four.
We begin with the derivation of the expression
\begin{equation}
\mbox{$\sigma^{(1)}_{x}$}\mbox{$\sigma^{(2)}_{y}$}\mbox{$\sigma^{(1)}_{y}$}\mbox{$\sigma^{(2)}_{x}$}\mbox{$\sigma^{(1)}_{x}$}\mbox{$\sigma^{(2)}_{x}$}\mbox{$\sigma^{(1)}_{y}$}\mbox{$\sigma^{(2)}_{y}$}=-1
\label{eq:PM}
\mbox{,}
\eeq
since this is actually quite crucial to the theorem.
For simplicity, we normalize the spin eigenvalues from $\pm \frac{1}{2}$
to $\pm 1$. To demonstrate \ref{eq:PM}, we make use of the commutation
rules for the $x$ and $y$ components of two spin $\frac{1}{2}$ particles.
Any pair of such observables associated with different particles will
commute, so that we have $[\mbox{$\sigma^{(1)}_{x}$},\mbox{$\sigma^{(2)}_{x}$}]=0$
and $[\mbox{$\sigma^{(1)}_{x}$},\mbox{$\sigma^{(2)}_{y}$}]=0$, for example.
Any pair which involves the same component will also commute, thus we have
$[\mbox{$\sigma^{(1)}_{x}$},\mbox{$\sigma^{(1)}_{x}$}]=0$ and
$[\mbox{$\sigma^{(2)}_{x}$},\mbox{$\sigma^{(2)}_{x}$}]=0$. Note that commutation of
two observables $O_{1},O_{2}$ implies that $O_{1}O_{2}=O_{2}O_{1}$. Those
pairs associated with the same particle but {\em different} components do
not commute, but {\em anticommute}. For two anticommuting observables
$O_{1},O_{2}$, it follows that their anticommutator $[O_{1},O_{2}]^{+}=
O_{1}O_{2}+O_{2}O_{1}$ is equal to $0$. This implies that
$O_{1}O_{2}=-O_{2}O_{1}$.
Using these rules, we may manipulate the expression
on the left hand side
of \ref{eq:PM} by sequentially interchanging the first $\mbox{$\sigma^{(1)}_{x}$}$ with the
operator appearing to its right. If we exchange $\mbox{$\sigma^{(1)}_{x}$}$ with $\mbox{$\sigma^{(2)}_{y}$}$,
$\mbox{$\sigma^{(1)}_{y}$}$, and $\mbox{$\sigma^{(2)}_{x}$}$, the expression becomes $-\mbox{$\sigma^{(2)}_{y}$}\mbox{$\sigma^{(1)}_{y}$}\mbox{$\sigma^{(2)}_{x}$}\mbox{$\sigma^{(1)}_{x}$}\mbox{$\sigma^{(1)}_{x}$}
\mbox{$\sigma^{(2)}_{x}$}\mbox{$\sigma^{(1)}_{y}$}\mbox{$\sigma^{(2)}_{y}$}$.
The overall minus sign results from its interchange with $\mbox{$\sigma^{(1)}_{y}$}$. At this
point, it is straightforward to simplify the expression using that
the square of any component of the spin has the value $1$, i.e.
$(\sigma^{(i)}_{j})^{2}=1$. If we apply this to the expression in question, we
can easily see that the entire expression reduces to $-1$, thereby
verifying \ref{eq:PM}.
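As an illustrative numerical check (not part of the original argument), one may also verify \ref{eq:PM} directly by multiplying the corresponding Pauli matrices on the two-particle Hilbert space:
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
s1x, s1y = np.kron(X, I2), np.kron(Y, I2)   # particle-1 spin components
s2x, s2y = np.kron(I2, X), np.kron(I2, Y)   # particle-2 spin components

product = s1x @ s2y @ s1y @ s2x @ s1x @ s2x @ s1y @ s2y
assert np.allclose(product, -np.eye(4))     # equals minus the identity
\end{verbatim}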
Motivated by the expression \ref{eq:PM}, we introduce six additional
observables
\begin{equation}
\{A,B,C,X,Y,Z\}
\mbox{.}
\eeq
If we group the
observables in the left hand side of \ref{eq:PM} by pairs,
we can rewrite this relationship as $ABXY=-1$ where $A,B,X,Y$ are defined as
\begin{eqnarray}
A & = & \mbox{$\sigma^{(1)}_{x}$}\mbox{$\sigma^{(2)}_{y}$} \label{eq:Pduct} \\
B & = & \mbox{$\sigma^{(1)}_{y}$}\mbox{$\sigma^{(2)}_{x}$} \nonumber \\
X & = & \mbox{$\sigma^{(1)}_{x}$}\mbox{$\sigma^{(2)}_{x}$} \nonumber \\
Y & = & \mbox{$\sigma^{(1)}_{y}$}\mbox{$\sigma^{(2)}_{y}$} \nonumber
\mbox{.}
\end{eqnarray}
Defining observables $C$ and $Z$ as
\begin{eqnarray}
C & = & AB \label{eq:SPduct} \\
Z & = & XY \nonumber
\end{eqnarray}
then allows one to write \ref{eq:PM} as
\begin{equation}
CZ=-1
\mbox{.}
\label{eq:IPduct}
\eeq
It is important to note that the equation \ref{eq:IPduct}
is equivalent to \ref{eq:PM}, given that $A,B,C,X,Y,Z$ are defined as given
above in \ref{eq:SPduct} and \ref{eq:Pduct}.
If we examine the first equation in \ref{eq:Pduct}, we see that
the three observables
involved are a commuting set: $[\mbox{$\sigma^{(1)}_{x}$},\mbox{$\sigma^{(2)}_{y}$}]=0$, $[\mbox{$\sigma^{(1)}_{x}$},
A]=[\mbox{$\sigma^{(1)}_{x}$},\mbox{$\sigma^{(1)}_{x}$}\mbox{$\sigma^{(2)}_{y}$}]=0$ and
$[\mbox{$\sigma^{(2)}_{y}$},A]=[\sigma^{(2)}_{y},\sigma^{(1)}_{x}\mbox{$\sigma^{(2)}_{y}$}]=0$. Examination of
the other equations in \ref{eq:Pduct} reveals that
the same holds true for
these, i.e. the observables in each form a commuting set. Repeated
application of the commutation rules reveals that the sets $\{C,A,B\}$,
$\{Z,X,Y\}$, and $\{C,Z\}$ are also commuting sets. As was done in Gleason's
theorem and in Kochen and Specker's theorem, we
ask whether there exists a function $E(O)$ on the observables
$\mbox{$\sigma^{(1)}_{x}$},\mbox{$\sigma^{(1)}_{y}$},\mbox{$\sigma^{(2)}_{x}$},\mbox{$\sigma^{(2)}_{y}$},A,B,C,X,Y,Z$ which returns for each observable
its value. We require that $E(O)$ satisfy all constraining relationships
on each commuting set of observables.
The relationships in question differ from those of Gleason
(\ref{eq:comm lin}), and of Kochen and Specker (\ref{eq:KS proj lin})
only in that they involve {\em products} of observables, rather than
just linear combinations. However, this distinction is not a
significant one---the essential
feature of all of these theorems is simply that they require $E(O)$ to
satisfy the relationships constraining each {\em commuting} set; no
relations on non-commuting observables are regarded as necessary
constraints. The relationships in question in the present analysis
are the defining equations in \ref{eq:Pduct}, \ref{eq:SPduct} and
\ref{eq:IPduct}. Given that $E(O)$ satisfies these, the Mermin
theorem implies that this function cannot map the observables
to their eigenvalues.
We will now present the proof of the theorem. Since the eigenvalues of
each spin component $\sigma^{(i)}_{j}$ are $\pm 1$, the eigenvalues
of each of the ten observables will be $\pm 1$. This is readily
seen for the observables $A,B,X,Y$ defined by \ref{eq:Pduct}: each of
these is the product of two commuting observables whose eigenvalues
are $1$ and $-1$. With this, it follows similarly that
$C$ and $Z$ each have eigenvalues of $\pm 1$. We therefore
require that $E(O)$ must assign either $-1$ or $1$ to each observable.
Now let us recall an important result which we saw above: the relationship
$CZ=-1$ is {\em equivalent} to the relationship
\ref{eq:PM}, provided that the observables $A,B,C,X,Y,Z$ are defined
by \ref{eq:SPduct}, and
\ref{eq:Pduct}.
Consider the function $E(O)$. Since
the assignments made by this function are required to satisfy the
relationships \ref{eq:IPduct}, \ref{eq:SPduct}, and \ref{eq:Pduct}
constraining the commuting sets,
it follows that they must also satisfy
\ref{eq:PM}. However, if we examine \ref{eq:PM}, it follows that no
assignment of the values $-1$ and $+1$ can possibly be made which will
satisfy this equation. This follows
since each of the spin observables appears {\em twice} in the left hand
side of
\ref{eq:PM}, so that any such assignment must give a value of $1$ to the
entire expression, but the right hand side of \ref{eq:PM} is $-1$.
Therefore, there is no
function $E(O)$ which maps each observable
of the set $\mbox{$\sigma^{(1)}_{x}$},\mbox{$\sigma^{(1)}_{y}$},\mbox{$\sigma^{(2)}_{x}$},\mbox{$\sigma^{(2)}_{y}$},A,B,C,X,Y,Z$ to an eigenvalue, if we insist
that $E(O)$ on every commuting set must obey all the constraint
equations. This
completes the proof of Mermin's theorem.
\section{Contextuality and the refutation of the impossibility proofs
of Gleason, Kochen and Specker, and Mermin}
We have seen that the theorems of Gleason, Kochen and Specker, and
Mermin all demonstrate the impossibility of value maps on
some sets of observables such that the constraining relationships
on each commuting set of observables are obeyed.
One might be at first inclined to conclude with Kochen and
Specker that such results imply the impossibility of a hidden
variables theory. However, if we consider that there exists
a successful theory of hidden variables, namely Bohmian mechanics
\cite{Bohmian Mechanics} (see section \ref{Bohm}), we see that
such a conclusion is in error. Moreover, an explicit analysis
of Gleason's theorem has been carried out by Bell \cite{Bell Eclipse, Bell
Imposs Pilot} and its inadequacy as an impossibility proof was shown.
Bell's argument may easily be
adapted\footnote{As we have mentioned, since the Kochen and Specker observables are formally
equivalent to projections on a three-dimensional Hilbert space, this
theorem is actually a special case of Gleason's. Therefore, Bell's
argument essentially addresses the Kochen and Specker theorem as well as
Gleason's.}
to provide a similar demonstration regarding Kochen and Specker's theorem.
The key concept underlying Bell's argument is that of {\em contextuality}, and
we now present a discussion of this notion.
Essentially, contextuality refers
to the dependence of measurement results on the detailed experimental
arrangement being employed. In discussing this notion,
we will find that an inspection of the quantum formalism suggests
that contextuality is a natural feature to expect in a theory explaining the
quantum phenomena. Furthermore we shall find that the concept
is in accord with Niels Bohr's remarks regarding the fundamental
principles of quantum mechanics. According to Bohr,
\cite{Bohr Nature} ``a closer examination reveals that the procedure of
measurement has an
essential influence on the conditions on which the very definition of the
physical quantities in question rests.'' In addition, he
stresses
\cite[p. 210]{Einstein impeachment} ``the impossibility of any sharp
distinction
between the
behavior of atomic objects and the interaction with the measuring
instruments which serve to define the conditions under which the phenomena
appear.''
The concept of contextuality represents a concrete manifestation of
the quantum theoretical aspect to which Bohr refers. We will first
explain the concept itself in detail,
and then focus on its relevance to the theorems
of Gleason, Kochen and Specker, and Mermin.
We begin by recalling a particular feature of the quantum formalism.
In the presentation of this
formalism given in chapter one, we discussed the
representation of the
system's state, the rules for the state's time evolution, and
the rules governing the measurement of an observable. The measurement
rules are quite crucial, since it is only through
measurement
that the physical significance of the abstract quantum state (given
by $\psi$) is made manifest. Among these rules,
one finds that any commuting set
of observables may be measured simultaneously. With a little
consideration, one is led to observe
that the possibility exists for {\em a variety of different
experimental procedures} to measure a single observable.
Consider for example, an observable
$O$ which is a member of the commuting set
$\{O,A_{1},A_{2},\ldots\}$. We label this set as ${\cal C}$. A
simultaneous measurement of the set ${\cal C}$ certainly
gives among its results a value for $O$ and thus may be regarded as
providing a measurement of $O$. It is possible that $O$ may
be a member of another commuting set of
observables ${\cal C}^{'}=\{O,B_{1},B_{2}, \ldots\}$, so that a
simultaneous measurement of ${\cal C}^{'}$
also provides a measurement of $O$.
Let us suppose further that the members of the set $\{A_{i}\}$ fail
to commute with those of $\{B_{i}\}$. It is then clear that experiments
measuring ${\cal C}$ and ${\cal C}^{'}$
must be distinct. A concrete difference appears, for example,
in the effects of such experiments on the system wave function. The
measurement rules tell us that the wave function subsequent to an
ideal measurement of a commuting set
is prescribed by the equation \ref{eq:NIC}, according to which the
post-measurement wave function is calculated from the
pre-measurement wave function by taking the projection of the latter
into the joint-eigenspace of that set.
Since the members of ${\cal C}$ and ${\cal C}^{'}$ fail to
commute, the joint-eigenspaces of the two are necessarily different, and
the system wave function will not
generally be affected in the same way by the two experimental
procedures. Apparently
the concept of `the measurement of an
observable' is
{\em ambiguous}, since there can be distinct experimental
procedures for the measurement of a single observable.
There are, in fact, more subtle distinctions between different procedures
for measuring the same observable, and these also may be important.
We will see such an example in studying Albert's experiment at the conclusion
of this chapter. To introduce the experimental procedure of
measurement into
our formal notation, we shall write ${\cal E}(O)$, ${\cal E}^{'}(O)$,
etc., to represent experimental
procedures used to measure the observable $O$.
From what we have seen here, it is quite natural to expect
that a hidden variables theory should allow for the possibility
that {\em different
experimental procedures}, e.g., ${\cal E}(O)$ and ${\cal E}^{'}(O)$, for the
measurement of an observable
might yield {\em different results} on an individual system. This is
contextuality.
Examples of observables for which there exist incompatible measurement
procedures are found among the observables addressed in each of the
theorems of Gleason, Kochen and Specker, and Mermin.
Among observables addressed by Gleason are the
one-dimensional projection operators $\{P_{\phi}\}$
on an $N$-dimensional Hilbert space ${\cal H}_{N}$.
Consider a one-dimensional projection $P_{\phi}$ where $\phi$ belongs
to two sets of orthonormal vectors given by
$\{\phi,\psi_{1},\psi_{2},\ldots\}$ and
$\{\phi,\chi_{1},\chi_{2},\ldots\}$. Note that the sets $\{\psi_{1},\psi_{2},
\ldots\}$ and $\{\chi_{1},\chi_{2},\ldots\}$ are constrained only in that
they span ${\cal H}_{\phi}^{\perp}$ (the
orthogonal complement of the one-dimensional space spanned by $\phi$).
Given this, there exist examples of such sets for which some
members of $\{\psi_{1},\psi_{2},\ldots\}$
are distinct from and not orthogonal to the vectors in $\{\chi_{1},\chi_{2},
\ldots\}$.
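As a simple illustration (a three-dimensional example, chosen purely
for concreteness), one may take
\begin{equation}
\phi=\left( \begin{array}{c} 1 \\ 0 \\ 0 \end{array} \right), \quad
\psi_{1}=\left( \begin{array}{c} 0 \\ 1 \\ 0 \end{array} \right), \quad
\psi_{2}=\left( \begin{array}{c} 0 \\ 0 \\ 1 \end{array} \right), \quad
\chi_{1}=\frac{1}{\sqrt{2}}\left( \begin{array}{c} 0 \\ 1 \\ 1 \end{array} \right), \quad
\chi_{2}=\frac{1}{\sqrt{2}}\left( \begin{array}{c} 0 \\ 1 \\ -1 \end{array} \right)
\mbox{,}
\eeq
for which $\psi_{1}$ is neither equal nor orthogonal to $\chi_{1}$, so that
the projections $P_{\psi_{1}}$ and $P_{\chi_{1}}$ fail to commute.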
Since any distinct vectors that are not orthogonal correspond to projections
which fail to commute,
the experimental
procedures ${\cal E}(P_{\phi})$ measuring $\{P_{\phi},P_{\psi_{1}},
P_{\psi_{2}},\ldots\}$ and ${\cal E}^{'}(P_{\phi})$
measuring $\{P_{\phi},P_{\chi_{1}},P_{\chi_{2}},\ldots\}$ are
incompatible. The argument just given applies also to the
Kochen and Specker observables (the squares of the
spin components of a spin $1$ particle), since these
are formally identical to projections on a three-dimensional Hilbert
space. To be explicit,
the observable $s^{2}_{x}$ is a member
of the commuting sets $\{s^{2}_{x},s^{2}_{y},s^{2}_{z}\}$ and
$\{s^{2}_{x},s^{2}_{y^{'}},
s^{2}_{z^{'}}\}$ where the $y^{'}$ and $z^{'}$ are oblique relative to
the $y$ and $z$ axes. In this case, $s^{2}_{y^{'}},s^{2}_{z^{'}}$ do not
commute with $s^{2}_{y},s^{2}_{z}$. Thus, the experimental procedures to
measure these sets are incompatible.
If we examine the Mermin observables, we find that here also,
each is a member of two incompatible commuting sets.
For example $\mbox{$\sigma^{(1)}_{x}$} $ belongs to $\{\mbox{$\sigma^{(1)}_{x}$} ,\mbox{$\sigma^{(2)}_{y}$} ,A\}$,
and to $\{\mbox{$\sigma^{(1)}_{x}$} ,\mbox{$\sigma^{(2)}_{x}$} ,X\}$. Here, the observables $\mbox{$\sigma^{(2)}_{y}$} ,A$ do not
commute with $\mbox{$\sigma^{(2)}_{x}$} ,X$, so that the
experimental procedures ${\cal E}(\mbox{$\sigma^{(1)}_{x}$})$ and ${\cal E}^{'}(\mbox{$\sigma^{(1)}_{x}$})$
that entail respectively the measurement of $\{\mbox{$\sigma^{(1)}_{x}$} ,\mbox{$\sigma^{(2)}_{y}$} ,A\}$ and
$\{\mbox{$\sigma^{(1)}_{x}$} ,\mbox{$\sigma^{(2)}_{x}$} ,X\}$ are incompatible.
While it is true that the arguments against hidden
variables derived from these theorems are superior to von Neumann's, since
they require agreement only with operator relationships among
commuting sets, these arguments nevertheless possess
the following shortcoming. Clearly, the mathematical functions
considered in each
case, $E(P)$, $E(s^{2}_{\theta,\phi})$, and $E(\sigma^{(1)}_{x},
\sigma^{(1)}_{y},\sigma^{(2)}_{x},\sigma^{(2)}_{y},
A,B,C,X,Y,Z)$, do {\em not} allow for the
possibility that measuring each observable using different
and possibly {\em incompatible} procedures may lead to different results.
What the theorems demonstrate is that
no hidden variables formulation {\em based on assignment of a unique
value to each observable} can possibly agree with quantum
mechanics. But this is a result we might well have expected from the
fact that the quantum formalism allows the possibility of incompatible
experimental procedures for the measurement of an
observable. For this reason, none of the theorems
here considered---Gleason's theorem,
Kochen and Specker's theorem and Mermin's theorem---imply the
impossibility of hidden variables, since they fail to account for
such a fundamental feature of the quantum formalism's rules of measurement.
\subsubsection{Discussion of a procedure to measure the Kochen and Specker
observables}
In a discussion of the implications of their theorem, Kochen and Specker
mention a system for which
well-known techniques of atomic spectroscopy
may be used to measure the relevant spin observables.
Although these authors mention this
experiment to support their case against hidden variables, the
examination of such an experiment actually reinforces the
assertion that one should allow for contextuality---the
very concept that {\em refutes} their argument against hidden variables.
Kochen and Specker note\footnote{One can derive the analogous first-order
perturbation term
arising for a charged particle of orbital angular momentum $L=1$ in
such an electric field using the fact that the joint-eigenstates of
$L^{2}_{x},L^{2}_{y},L^{2}_{z}$ are the eigenstates of the potential energy
due to the field. This latter result is shown in Kittel \cite[p.
427]{Kittel}.} that for an atom of
{\em orthohelium}\footnote{Orthohelium and parahelium are two species
of helium which are distinguished by
the total spin $S$ of the two electrons: for the former we have $S=1$,
and for the latter $S=0$. There is a rule of atomic spectroscopy
which prohibits atomic transitions for which $\Delta S=1$, so that no
transitions from one form to the other can occur spontaneously.}
which is subjected to an
electric field of a certain configuration, the first-order
effects of this field on the electrons may be accounted for by
adding a term of the form
$aS^{2}_{x}+bS^{2}_{y}+cS^{2}_{z}$ to the electronic Hamiltonian.
Here $a,b,c$ are distinct constants, and $S^{2}_{x},S^{2}_{y},S^{2}_{z}$
are the squares of the components of the total spin of the two electrons
with respect to
the Cartesian axes $x,y,z$. The Cartesian axes are defined
by the orientation of the applied external field.
For such a system, an
experiment measuring the energy of the electrons also measures the squares
of the three spin components. To see this, note that the value of the
perturbation energy will be $(a+b)$, $(a+c)$, or $(b+c)$ if the
joint values of the set $S^{2}_{x},S^{2}_{y},S^{2}_{z}$ equal
respectively $\{1,1,0\}$, $\{1,0,1\}$, or $\{0,1,1\}$.
To understand why the external electric field affects the
orthohelium electrons in this way, consider the ground state of
orthohelium\footnote{Using spectroscopic notation, this state
would be written as the `$2^{3}$S' state of orthohelium. The `$2$'
refers to the fact that the principal quantum number $n$ of the state
equals $2$, `S' denotes that the total orbital angular momentum is zero,
and the `$3$' superscript means that it is a spin triplet state.
Orthohelium has no state of principal quantum number $n=1$, since
the Pauli exclusion principle forbids the `$1^{3}$S' state.}.
The wave function of this state is given by a spatial part
$\phi(r_{1},r_{2})$, depending only
on $r_{1},r_{2}$ (the radial coordinates of the electrons), multiplied by the
spin part, which is a linear combination of
the eigenvectors $\psi_{+1},\psi_{0},\psi_{-1}$ of $S_{z}$ , corresponding
to $S_{z}=+1,0,-1$, respectively. Thus, the ground state may be represented by
any vector in the three-dimensional Hilbert space spanned by the vectors
$\phi(r_{1},r_{2})\psi_{+1},\phi(r_{1},r_{2})\psi_{0},
\phi(r_{1},r_{2})\psi_{-1}$. The external electric field will have the
effect of ``lifting the degeneracy'' of the state, i.e. the new
Hamiltonian will not be degenerate in this space; its three distinct
eigenvalues will correspond to three mutually orthogonal eigenvectors.
Suppose that we consider a particular set of Cartesian axes $x,y,z$.
We apply an electric field which is of orthorhombic
symmetry\footnote{Orthorhombic symmetry is defined by the criterion that
rotation about either the
$x$ or $y$ axis by $180^{\circ}$ would bring such a field back to itself.}
with respect to these axes. It can be
shown\footnote{A straightforward way to see this is by analogy with
a charged particle of orbital angular momentum $L=1$. The effects
of an electric or magnetic field on a charged particle of spin $1$
are analogous to the effects of the same field on a charged particle
of orbital angular momentum $1$. To calculate the first-order effects of an
electric field of orthorhombic symmetry for such a particle, one can
examine the spatial dependence of the $L_{z}=1,0,-1$ states
$\psi_{-1},\psi_{0},\psi_{+1}$, together with
the spatial dependence of the perturbation potential $V({\bf r})$,
to show that the states $1/\sqrt{2}(\psi_{1}-\psi_{-1})$, $1/\sqrt{2}(\psi_{1}+
\psi_{-1})$, and $\psi_{0}$ are the eigenstates of such a perturbation.
A convenient choice of $V$ for this purpose is $V=Ax^{2}+By^{2}+Cz^{2}$.
See Kittel in \cite[p. 427]{Kittel}.}
that the eigenvectors of the Hamiltonian due to this
field are $v_{1}=1/\sqrt{2}(\psi_{1}-\psi_{-1})$, $v_{2}=1/\sqrt{2}(\psi_{1}+
\psi_{-1})$ and $v_{3}=\psi_{0}$.
We drop the factor $\phi(r_{1},r_{2})$ for convenience of expression.
These vectors are {\em also} the joint-eigenvectors
of the observables $S^{2}_{x},S^{2}_{y},S^{2}_{z}$, as we can easily show.
When expressed as column vectors in
the $\{\psi_{+1},\psi_{0},\psi_{-1}\}$ basis,
the vectors $v_{1},v_{2},v_{3}$ take the form
\begin{eqnarray}
v_{1} & = &
\left(
\begin{array}{c}
\frac{1}{\sqrt{2}} \\
0 \\
-\frac{1}{\sqrt{2}}
\end{array}
\right) \\
v_{2} & = &
\left(
\begin{array}{c}
\frac{1}{\sqrt{2}} \\
0 \\
\frac{1}{\sqrt{2}}
\end{array}
\right) \\
v_{3} & = &
\left(
\begin{array}{c}
0 \\
1 \\
0
\end{array}
\right)
\mbox{.}
\end{eqnarray}
If we then express $S^{2}_{x},S^{2}_{y},S^{2}_{z}$ as matrices in
terms of the same basis, then
by elementary matrix multiplication, one can show that
$v_{1}$ corresponds to the joint-eigenvalue $\mbox{\boldmath $\mu$}=\{0,1,1\}$,
$v_{2}$ corresponds to $\mbox{\boldmath $\mu$}=\{1,0,1\}$,
and $v_{3}$ corresponds to $\mbox{\boldmath $\mu$}=\{1,1,0\}$.
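For reference, with the standard spin-$1$ matrices in this basis (ordered
as $\psi_{+1},\psi_{0},\psi_{-1}$, and with $\hbar=1$), these observables
read
\begin{equation}
S^{2}_{x}=\frac{1}{2}\left( \begin{array}{ccc} 1 & 0 & 1 \\ 0 & 2 & 0 \\ 1 & 0 & 1 \end{array} \right), \quad
S^{2}_{y}=\frac{1}{2}\left( \begin{array}{ccc} 1 & 0 & -1 \\ 0 & 2 & 0 \\ -1 & 0 & 1 \end{array} \right), \quad
S^{2}_{z}=\left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{array} \right)
\mbox{,}
\eeq
so that, for example, $S^{2}_{x}v_{1}=0$ while $S^{2}_{y}v_{1}=v_{1}$ and
$S^{2}_{z}v_{1}=v_{1}$.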
Thus, the eigenvectors $v_{1},v_{2},v_{3}$ of
the Hamiltonian term $H^{'}$ which arises from a perturbing
electric field (defined with respect to $x,y,z$) are
also the joint-eigenvectors of the set $\{S^{2}_{x},S^{2}_{y},
S^{2}_{z}\}$. This implies that we can represent $H^{'}$ by the
expression $aS^{2}_{x}+bS^{2}_{y}+cS^{2}_{z}$, where $H^{'}$'s eigenvalues are
$\{(b+c), (a+c),(a+b)\}$.
All of this leads to the following conclusion regarding the
measurement of the spin of the orthohelium ground state electrons.
Let the system be subjected to an electric field with orthorhombic
symmetry with respect to a given set of Cartesian
axes $x,y,z$. Under these circumstances, the measurement of the
total Hamiltonian will yield a result equal
(to first-order approximation) to the (unperturbed) ground state
energy plus one of the perturbation corrections $\{(b+c), (a+c),(a+b)\}$.
If the measured value of the
perturbation energy is $(a+b)$, $(a+c)$, or $(b+c)$ then the
joint values of the set $\{S^{2}_{x},S^{2}_{y},S^{2}_{z}\}$ are given
respectively by $\{1,1,0\}$, $\{1,0,1\}$, or $\{0,1,1\}$.
Suppose we consider two such
experiments\footnote{The type of experiments mentioned by Kochen and
Specker are actually {\em non-ideal} measurements of sets such
as $\{S^{2}_{x},S^{2}_{y},S^{2}_{z}\}$, since the post-measurement
wave function of the electrons is not equal to the projection of
the pre-measurement wave function into an eigenspace of the set
$\{S^{2}_{x},S^{2}_{y},S^{2}_{z}\}$ (see section \ref{formalism}).
In particular, they envision a spectroscopic analysis of the atom, i.e.,
the observation of photons emitted during transitions between the
stationary states of the electrons. The fact that they consider
non-ideal measurements of each set $\{S^{2}_{x},S^{2}_{y},S^{2}_{z}\}$
rather than ideal only serves to widen the range of possibilities for different
experimental procedures, and so strengthens our point that
distinct procedures exist for what Kochen and Specker consider
to be a `measurement of $S^{2}_{x}$'.},
one which measures
the set $\{S^{2}_{x},S^{2}_{y},S^{2}_{z}\}$ and the other of which measures
$S^{2}_{x},S^{2}_{y^{'}},S^{2}_{z^{'}}$, i.e. the squares of the components
in the $x,y^{'},z^{'}$ system. The former experiment involves an
electric field with orthorhombic symmetry with respect to $x,y,z$,
while the latter involves an electric field {\em with a different
orientation in space}, since it must have symmetry with respect
to the axes $x,y^{'},z^{'}$. Although both procedures can be regarded
as measurements of $S^{2}_{x}$, they involve quite different experiments.
It is quite apparent from this example that it would be
unreasonable to require that a hidden variables theory must assign a
single value to $S^{2}_{x}$, independent of the experimental procedure.
\section{Contextuality theorems and spectral-incompatibility
\label{Spec}}
We saw in our discussion of von Neumann's theorem that
its implications toward hidden variables amounted to the assertion that
there can be no
mathematical function $E(O)$ that is linear on the observables and
which maps them to their eigenvalues.
This was neither a surprising,
nor particularly enlightening result, since it follows also from
a casual observation of some example of linearly related non-commuting
observables, as we saw in examining the observables
$\frac{1}{\sqrt{2}}(\sigma_{x}+\sigma_{y}),\sigma_{x},\sigma_{y}$ of a
spin $\frac{1}{2}$ particle. The theorems of Gleason, Kochen and
Specker, and Mermin imply a somewhat less obvious type of
impossibility: there exists no function $E(O)$ mapping the observables to their
eigenvalues, which obeys all relationships constraining {\em commuting}
observables. What we develop here is a somewhat simpler expression
of the implication of these theorems. We will
find that they imply
the {\em spectral-incompatibility} of the value map function:
there exists no mathematical function that assigns to each commuting
set of observables a joint-eigenvalue of that set.
We begin by recalling the notions of joint-eigenvectors and
joint-eigenvalues of a commuting set of observables $(O^{1},O^{2},\ldots)$.
For a commuting set, the eigenvalue equation \ref{eq:Eigenvalue}
$O \mbox{$| \psi \rangle$} = \mu \mbox{$| \psi \rangle$} $ is replaced by a
set of relationships \ref{eq:Joint-Eig}:
$O^{i}\mbox{$| \psi \rangle$}=\mu^{i}\mbox{$| \psi \rangle$} \mbox{ } i=1,2,\ldots$, one for each
member of the commuting set. If a given $\mbox{$| \psi \rangle$}$ satisfies this relationship
for {\em all} members of the set, it is referred to as a
{\em joint-eigenvector}. The set of numbers $(\mu^{1},\mu^{2},\ldots)$
that allow the equations to be satisfied for this vector are
collectively referred to as the {\em joint-eigenvalue} corresponding
to this eigenvector, and the symbol ${\mbox{\boldmath $\mu$}}=(\mu^{1},\mu^{2},\ldots)$
is used to refer to this set. The set of all joint-eigenvalues
$\{\mbox{\boldmath $\mu$}_{a}\}$
is given the name `joint-eigenspectrum'.
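As a simple illustration, consider the commuting set
$\{\mbox{$\sigma^{(1)}_{x}$},\mbox{$\sigma^{(2)}_{y}$},A\}$ formed from the Mermin observables considered
above. Its joint-eigenspectrum consists of the four joint-eigenvalues
\begin{equation}
\mbox{\boldmath $\mu$}=(m_{1},m_{2},m_{1}m_{2}), \qquad m_{1},m_{2} \in \{+1,-1\}
\mbox{,}
\eeq
each of which satisfies the constraining relationship
$A-\mbox{$\sigma^{(1)}_{x}$}\mbox{$\sigma^{(2)}_{y}$}=0$ of that set.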
In general, the members of any given commuting set of observables
might not be independent, i.e., they may be constrained by mathematical
relationships. We label the relationships for any given
commuting set $\{O^{1},O^{2},\ldots\}$ as
\begin{eqnarray}
f_{1}(O^{1},O^{2},\ldots) & = & 0 \label{eq:comm relat} \\
f_{2}(O^{1},O^{2},\ldots) & = & 0 \nonumber \\
\vdots & & \nonumber
\mbox{.}
\end{eqnarray}
The equations \ref{eq:GlPsum} in Gleason's theorem, \ref{eq:KS comm}
in Kochen and Specker's theorem, and \ref{eq:Pduct}, \ref{eq:SPduct}
and \ref{eq:IPduct} in Mermin's theorem, are just such relations. We
now demonstrate the following two results. First, that every member of
the joint-eigenspectrum must satisfy
all relationships \ref{eq:comm relat}. Second, that any set of numbers
$\xi_{1},\xi_{2},\ldots$ satisfying all of these relationships
is a joint-eigenvalue.
To demonstrate the first of these, we suppose that ${\mbox{\boldmath $\mu$}}=
(\mu^{1},\mu^{2},\ldots)$ is a
joint-eigenvalue of the commuting set $\{O^{1},O^{2},\ldots\}$, with
joint-eigenspace ${\cal H}$. We then consider the operation of
$f_{i}(O^{1},O^{2},\ldots)$ on a vector
$\psi \in {\cal H}$ where $f_{i}(O^{1},O^{2},\ldots)=0$ is one
of the relationships constraining the commuting set. We find
\begin{equation}
f_{i}(O^{1},O^{2},\ldots) \psi=
f_{i}(\mu^{1},\mu^{2},\ldots) \psi=0
\mbox{.}
\eeq
The second equality implies (since $\psi \neq 0$) that $f_{i}(\mu^{1},\mu^{2},
\ldots)=0$. Since $f_{i}(O^{1},O^{2},\ldots)=0$ is an
arbitrary member of the relationships \ref{eq:comm
relat} for the commuting set $\{O^{1},O^{2},\ldots\}$, it
follows that every joint-eigenvalue $\mbox{\boldmath $\mu$}$ of the set must satisfy
{\em all} such relationships.
We now discuss the demonstration of the second point. Suppose that
the numbers $\{\xi_{1},\xi_{2},\ldots\}$ satisfy
all of \ref{eq:comm relat} for some commuting set $\{O^{1},O^{2},\ldots\}$.
We consider the following relation:
\begin{equation}
\left([(O^{1}-\mu^{1}_{1})^{2}+
(O^{2}-\mu^{2}_{1})^{2}+\ldots][(O^{1}-\mu^{1}_{2})^{2}+
(O^{2}-\mu^{2}_{2})^{2} + \ldots] \ldots\right) \psi=0
\mbox{.}
\label{eq:jointprod}
\eeq
Here, we operate on the vector $\psi$ with a product whose factors each
consist of a sum of various operators.
The product is taken over all joint-eigenvalues $\{\mbox{\boldmath $\mu$}_{i}\}$. We represent
each joint-eigenvalue ${\mbox{\boldmath $\mu$}}_{i}$ by a set $(\mu^{1}_{i},\mu^{2}_{i},
\ldots)$. The validity of \ref{eq:jointprod}
is easily seen. The joint-eigenspaces of a commuting set together span
the entire Hilbert space, so that $\psi$ may be decomposed as a sum of
components $\psi_{i}$, each lying in the joint-eigenspace ${\cal H}_{i}$
corresponding to $\mbox{\boldmath $\mu$}_{i}$. Every factor in the product of operators
in \ref{eq:jointprod} simply multiplies $\psi_{i}$ by a number, and the
$i$th factor multiplies it by zero; the entire product therefore gives
zero when operating on each component $\psi_{i}$, and hence when operating
on $\psi$. Since $\psi$ is an arbitrary vector, it follows
that
\begin{equation}
[(O^{1}-\mu^{1}_{1})^{2}+
(O^{2}-\mu^{2}_{1})^{2}+\ldots][(O^{1}-\mu^{1}_{2})^{2}+
(O^{2}-\mu^{2}_{2})^{2} + \ldots] \ldots=0
\label{eq:Communi}
\mbox{.}
\eeq
Note that \ref{eq:Communi} is itself
a constraining relationship on the commuting set, so that
it must be satisfied by the numbers $(\xi_{1},\xi_{2},\ldots)$. Upon this
substitution, each factor in \ref{eq:Communi} becomes a sum of squares of
real numbers, and the product can vanish only if some factor vanishes,
i.e., only if $\xi_{j}=\mu^{j}_{i}$ for every $j$ and some particular $i$.
These numbers must therefore form a joint-eigenvalue of
$\{O^{1},O^{2},\ldots\}$, and this is the result we were to prove.
From this, we can discern a simple way to regard the implications
of the theorems of Gleason,
Kochen and Specker, and Mermin toward the question of a value map.
The requirement \ref{eq:comm lin} that Gleason's theorem
places on the function $E(P)$ can be re-stated as the requirement
that for each commuting set, $E(P)$ must satisfy all the relationships
constraining its members. From the above argument, it follows that this
assumption is equivalent to the constraint that $E(P)$ must assign to
each commuting set a joint-eigenvalue.
The same is also true of Kochen and Specker's assumption that $E(s^{2}_{\theta,\phi})$ satisfy
equation \ref{eq:KS proj lin}, and the Mermin requirement that $E$ satisfy
\ref{eq:Pduct}, \ref{eq:SPduct} and \ref{eq:IPduct}.
Thus, all three of these theorems can be regarded as proofs of the
impossibility of a function mapping the observables to their values
such that each commuting set is assigned a joint-eigenvalue. An
appropriate name for such a proof would seem to be
`spectral-incompatibility theorem.'
\section{Albert's example and contextuality \label{UncleAlb}}
In thinking about any given physical phenomenon, it is natural to
try to picture to oneself the properties of the system
being studied. In using the quantum formalism to develop
such a picture, one may tend to regard the `observables' of this formalism,
i.e., the Hermitian operators (see section \ref{formalism}), as representative
of these properties.
However, the central role played by the {\em experimental procedure}
${\cal E}(O)$
in the measurement of any given observable $O$ seems to
suggest that such a view of the operators
may be untenable. We describe an experiment originally discussed by David
Albert \cite{Albert}
that indicates that this is indeed the case:
the Hermitian operators cannot be regarded as
representative of the properties of the system\footnote{This
idea has been propounded by Daumer, D{\"u}rr, Goldstein,
and Zangh{\'i} in \cite{Martin}. See also Bell in
\cite{Last of Bell}}.
Albert considers two laboratory procedures that may be used
to measure the $z-$component of the spin of a spin $\frac{1}{2}$
particle. Although the two procedures are quite similar to one another, they
cannot be regarded as identical when considered in light of
the hidden variables theory known as Bohmian mechanics. This is a particularly
striking instance of contextuality, and it indicates the inadequacy of
the conception that the spin operator $\sigma_{z}$
represents an intrinsic property of the
particle. From Albert's example, one can clearly see that the outcome
of the $\sigma_{z}$ measurement depends not only on the parameters
of the particle itself, but also on the complete experimental setup.
The Albert example is concerned with the
measurement of
spin\footnote{As is usual in discussions of Stern-Gerlach
experiments, we consider only those effects relating to the
interaction of the magnetic field with the magnetic moment of the particle.
We consider the electric charge of the particle to be zero.}
as performed using a Stern-Gerlach magnet.
The schematic diagram given in figure \ref{Stern} exhibits the
configuration used in both of the measurement procedures to be
described here. Note that we use a Cartesian coordinate system for which the
$x$-axis lies along the horizontal
direction with positive $x$ directed toward the right, and the
$z$-axis lies along the vertical direction with
positive $z$ directed upward. The $y$-axis (not shown) is perpendicular to the
plane of the figure, and---since we use a
right-handed coordinate system---positive $y$ points into this plane.
The long axis of the Stern-Gerlach magnet system
is oriented along the $x$-axis, as shown. The upper and lower magnets
of the apparatus are located in the directions of positive $z$ and
negative $z$.
We define the Cartesian system further by requiring that
the $x$-axis (the line defined by $y=0,z=0$) pass through the center of the
Stern-Gerlach magnet system.
\begin{figure}
\begin{picture}(360,118)
\put(247.5,17){\framebox(67.5,27)[tl]{lower}}
\put(247.5,30.5){\makebox(0,0)[l]{magnet}}
\put(247.5,71){\framebox(67.5,27)[tl]{upper}}
\put(247.5,84.5){\makebox(0,0)[l]{magnet}}
\put(49.5,57.5){\vector(0,1){18}}
\put(51.5,75.5){\makebox(0,0)[l]{z}}
\put(49.5,57.5){\vector(1,0){18}}
\put(67.5,57.5){\makebox(0,0)[l]{x}}
\put(103.5,57.5){\vector(1,0){45}}
\put(103.5,57.5){\makebox(0,0)[tl]{Direction of}}
\put(103.5,44){\makebox(0,0)[tl]{incident wave packet}}
\put(333,66.5){\vector(3,1){45}}
\put(354,73){\makebox(0,0)[tl]{Upper wave packet}}
\put(333,48.5){\vector(3,-1){45}}
\put(354,42){\makebox(0,0)[bl]{Lower wave packet}}
\end{picture}
\caption{Geometry of the Stern-Gerlach Experiment \label{Stern}}
\end{figure}
In each experiment, the spin $\frac{1}{2}$ particle to be measured is
incident on the apparatus along the positive $x-$axis.
In the region of space the particle occupies before
entering the Stern-Gerlach apparatus, its wave function is of the form
\begin{equation}
\psi_{t}({\bf r}) = \varphi_{t}({\bf r}) (|\uparrow \rangle +
| \downarrow \rangle)
\label{eq:Bef}
\mbox{,}
\eeq
where the vectors $|\uparrow \rangle$ and $|\downarrow\rangle$ are the
eigenvectors of $\sigma_{z}$
corresponding to eigenvalues $+\frac{1}{2}$ and $-\frac{1}{2}$, respectively.
Here $\varphi_{t}({\bf r})$ is a localized wave packet moving
in the positive $x$ direction toward the magnet.
We wish to consider two experiments that differ only
in the orientation of the magnetic field inside the Stern-Gerlach
apparatus.
In experiment $1$, the upper magnet has a strong magnetic north pole
toward the region of particle passage, while the lower has a somewhat
weaker magnetic south pole toward this region. In experiment $2$, the
magnets are such that the gradient of the field points in the {\em opposite}
direction, i.e., the
upper magnet has a strong magnetic {\em south} pole toward the region
of passage, while
the lower has a weak magnetic north pole towards it.
After passing the Stern-Gerlach apparatus,
the particle will be described by a wave function of one of the
following forms:
\begin{eqnarray}
\psi^{1}_{t}({\bf r}) & = &
\frac{1}{\sqrt{2}}(\phi_{t}^{+}({\bf r}) | \uparrow \rangle
+ \phi_{t}^{-}({\bf r}) | \downarrow \rangle)
\label{eq:Aft} \\
\psi_{t}^{2} ({\bf r}) & = & \frac{1}{\sqrt{2}}(\phi_{t} ^{-}({\bf r})|
\uparrow \rangle +
\phi_{t} ^{+}({\bf r})| \downarrow \rangle) \nonumber
\mbox{.}
\end{eqnarray}
Here $\psi^{1}_{t}({\bf r})$ corresponds to experiment $1$,
and $\psi_{t} ^{2} ({\bf r})$ corresponds to experiment $2$. In both
cases, the function $\phi_{t} ^{+}({\bf r})$ represents a
localized wave packet moving obliquely upward and
$\phi_{t} ^{-}({\bf r})$ represents a localized wave packet moving obliquely
downward. To measure $\sigma_{z}$, one places detectors in the
paths of these wave packets. Examination of the first equation of
\ref{eq:Aft} shows that for experiment $1$, if the particle is
detected in the upper path, the result of
our $\sigma_{z}$ measurement is $+\frac{1}{2}$. If the particle is detected in
the
lower path, the result is $-\frac{1}{2}$.
For experiment $2$, the second equation of \ref{eq:Aft}
leads to the conclusion that similar detections are associated with results
opposite in sign to those of experiment $1$. Thus, for experiment $2$, detection
in the upper path implies $\sigma_{z}=-\frac{1}{2}$, while detection in the
lower
implies $\sigma_{z}=+\frac{1}{2}$.
We make here a few remarks regarding the symmetry of the system.
We constrain the form of the wave packet $\varphi_{t}({\bf r})$
of \ref{eq:Bef}, by demanding that it has no dependence on $y$, and
that it exhibits reflection symmetry through the plane defined by $z=0$,
i.e., $\varphi_{t}(x,z)=
\varphi_{t}(x,-z)$. Moreover, the vertical extent of this wave packet
is to be the same size as the vertical spacing between the
upper and lower magnets of the apparatus. As regards the wave packets
$\phi_{t} ^{+}({\bf r})$ and $\phi_{t} ^{-}({\bf r})$ of \ref{eq:Aft},
if the magnetic field within the apparatus is such that
$\partial B_{z}/\partial z$ is
constant\footnote{The term added to the
particle's Hamiltonian to account for a magnetic field is
$g{\bf s} \cdot {\bf B}$, where ${\bf s}$ is the spin, ${\bf B}$
is the magnetic field and $g$ is the gyromagnetic ratio.
To determine the form of this term in the case of a Stern-Gerlach
apparatus, we require the configuration of the magnetic field.
A Stern-Gerlach magnet apparatus has a ``long axis'' which for the example
of figure \ref{Stern} lies along the $x$-axis. Since the component of the
magnetic field along this axis will
{\em vanish} except within a small region before and after the apparatus,
the effects of $B_{x}$ may be neglected. Furthermore, $B_{y}$ and $B_{z}$
within the apparatus may be regarded as being independent of $x$.
The magnetic field in the $x,z$ plane between the magnets lies in the
$z$-direction, i.e., ${\bf B}(x,0,z)=B_{z}(z)\hat{k}$. Over the region of
incidence of the particle, the field is such that $\frac{\partial B_{z}}
{\partial z}$ is constant. See for example, Weidner and Sells \cite{Weidner}
for a more detailed discussion of the Stern-Gerlach apparatus. The motion of
the particle in the $y$-direction is of no importance to us, and so we do not
consider the effects of any Hamiltonian terms involving
only $y$ dependence. The results we discuss in the present
section are those which arise from taking account of the magnetic field
by adding to the Hamiltonian a term of the form
$g\sigma_{z}B_{z}(z=0)+g\sigma_{z}(\frac{\partial B_{z}}{\partial z}(z=0))z$.},
then for both experiments, these packets move at
equal angles above and below the horizontal (See figure \ref{Stern}).
Thus, the particle is described both before and after
it passes the Stern-Gerlach magnet, by a wave function
which has reflection symmetry through the plane defined by $z=0$.
\subsubsection{Bohmian Mechanics and Albert's example}
We have mentioned that the hidden variables theory developed by David Bohm
\cite{Bohmian Mechanics} gives an explanation of quantum phenomena which is
empirically equivalent
to that given by the quantum formalism. Bohmian mechanics allows us to regard
any given system as a set of
particles having well-defined (but distinctly non-Newtonian
\cite{Quantum Equilibrium}) trajectories. Within Bohmian mechanics, it
is the {\em configuration} of the system ${\bf q}=(q_{1},q_{2},q_{3},\ldots)$
which plays the role of the hidden variables parameter $\mbox{$\lambda$}$.
Thus, the state description in this theory consists of both $\psi$ and
${\bf q}$. Bohmian mechanics does not involve a change in the mathematical
form of $\psi$: just as in the quantum formalism, $\psi$ is a
vector in the Hilbert space associated with the system,
and it evolves with time according to the Schr{\"o}dinger equation:
\begin{equation}
i\hbar \frac{\partial \psi}{\partial t} = H \psi
\label{eq:erwin}
\mbox{.}
\end{equation}
The system configuration ${\bf q}$ is governed by the equation:
\begin{equation}
\frac{d{\bf q}}{dt}= (\hbar/m) \mbox{Im}(\frac{\psi^{*}{\bf \nabla}
\psi}{\psi^{*} \psi})
\label{eq:Bohm}
\mbox{.}
\eeq
In the case of a particle with spin, we make use of the spinor inner-product
in this equation. For example, in the case of a spin $\frac{1}{2}$ particle
whose wave function is $\psi=\chi_{+}({\bf r})|\! \uparrow \rangle +
\chi_{-}({\bf r})|\! \downarrow \rangle$, the equation \ref{eq:Bohm}
assumes the form
\begin{equation}
\frac{d{\bf q}}{dt}=(\hbar/m)\mbox{Im}(\frac{\chi_{+}^{*}({\bf r})
{\bf \nabla}\chi_{+}({\bf r})+\chi_{-}^{*}({\bf r})
{\bf \nabla}\chi_{-}({\bf r})}
{\chi_{+}^{*}({\bf r})\chi_{+}({\bf r}) + \chi_{-}^{*}({\bf r})
\chi_{-}({\bf r})})
\mbox{.}
\label{eq:Bohmsp}
\eeq
As we expect from the fact that this theory is in empirical agreement with
quantum theory, Bohmian mechanics does {\em not} generally provide,
given $\psi$ and ${\bf q}$,
a mapping from the observables to their values. In other words, it does
not provide a non-contextual value map for each state.
As we shall see, the choice of {\em experimental procedure} plays such a
pronounced role in the Bohmian mechanics description of Albert's spin
measurements, that one cannot possibly regard the spin operator as
representative of an objective property of the particle.
We first discuss the Bohmian mechanics description of the
Albert experiments. Since the wave function and its evolution
are the same as in quantum mechanics, the
particle's wave function $\psi$
is taken to be exactly as described above. As far as the
configuration ${\bf q}$ is concerned, there
are two important features of the Bohmian evolution equations to be considered:
the uniqueness of the trajectories and the {\em equivariance}
of the time evolution. The first feature refers to the fact that each initial
$\psi$ and ${\bf q}$ leads to a {\em unique}
trajectory. Since the particle being measured has a
fixed initial wave function, its
initial conditions are defined solely by its initial position. The
equivariance of the system's time evolution is a more complex property.
Suppose that at some time $t$, the probability that the
system's configuration is within the region $d{\bf q}$ about ${\bf q}$
obeys the relationship:
\begin{equation}
P({\bf q}^{'}\in d{\bf q})=|\psi({\bf q})|^{2}d{\bf q}
\mbox{.}
\label{eq:equ}
\eeq
According to equivariance, this relationship will continue to hold
for all later times $t^{'}:\mbox{ }t^{'} >t$. In considering the
Bohmian mechanics description of a system, we assume that
the particle initially obeys \ref{eq:equ}.
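The root of this property is the quantum continuity equation: writing
${\bf v}^{\psi}({\bf q})$ for the right hand side of \ref{eq:Bohm}
regarded as a function of the configuration, the Schr{\"o}dinger
equation \ref{eq:erwin} (for Hamiltonians of the usual
kinetic-plus-potential form) implies
\begin{equation}
\frac{\partial |\psi|^{2}}{\partial t} +
{\bf \nabla} \cdot \left( |\psi|^{2} \, {\bf v}^{\psi} \right) = 0
\mbox{,}
\eeq
so that an ensemble of configurations distributed according to
$|\psi|^{2}$ at one time remains so distributed at all later times.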
By equivariance, we then have that for all later times the particle
will be guided to ``follow'' the motion
of the wave function. Thus, after it
passes through the Stern-Gerlach apparatus, the particle will
enter either the upward or downward moving packet. From consideration of the
uniqueness of the trajectory and the equivariance of the time evolution,
it follows that the question of {\em which branch} of the wave function the
particle enters {\em depends solely on its initial position}.
If we consider the situation in a little more detail, we find
a simple criterion on the initial position of the particle
that determines which branch of the wave function it will enter.
Recall that the initial \ref{eq:Bef} and final
\ref{eq:Aft} wave functions have no dependence on the $y$ coordinate,
and that they exhibit reflection symmetry through the $z=0$ plane.
From this symmetry together with the uniqueness of Bohmian trajectories,
it follows that the particle cannot cross the $z=0$ plane.
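One way to see this is as follows (assuming, as the equal deflection
angles suggest, that the upward and downward packets are mirror images
of one another through the $z=0$ plane): the reflection
$z \rightarrow -z$ exchanges the two spinor components appearing in
\ref{eq:Bohmsp}, which makes the $z$-component of the velocity field an
odd function of $z$, so that
\begin{equation}
\left. \frac{dq_{z}}{dt} \right|_{z=0} = 0
\mbox{.}
\eeq
The plane $z=0$ is thus invariant under the Bohmian flow, and a
trajectory crossing it would have to coincide with one lying in the
plane, contradicting the uniqueness of trajectories.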
In conjunction with equivariance, this result implies that if the particle's
initial $z$ coordinate is greater than zero, it must enter the upper
branch, and if its initial $z$ is less than zero the particle must enter the
lower branch.
If we now consider the above described spin measurements, we find a
somewhat curious situation. For any given initial configuration ${\bf q}$,
the question of whether
the measurement result is $\sigma_{z} =+\frac{1}{2}$ or $\sigma_{z}=-\frac{1}{2}$
depends on the
configuration of the experimental apparatus. Suppose that the particle
has an initial $z>0$, so that according to the results just shown,
it will enter the upper branch of the wave function. If the magnetic
field inside the Stern-Gerlach apparatus is
such that $\partial B_{z}/\partial z < 0$, then the particle's final
wave function is given by the first equation in \ref{eq:Aft}, and its
detection in the upper branch then implies that $\sigma_{z} =+\frac{1}{2}$. If,
on the other
hand, the magnetic field of the Stern-Gerlach magnet has the {\em opposite}
orientation,
i.e., $\partial B_{z}/\partial z > 0$, then the second equation in
\ref{eq:Aft} obtains and the detection in the upper branch implies
that $\sigma_{z}=-\frac{1}{2}$. Thus, we arrive at the conclusion that
the ``measurement of $\sigma_{z}$'' gives a {\em different} result for
two situations that differ only in the experimental configuration.
The quantum formalism's rules for the
measurement of observables strongly suggest that
the Hermitian operators represent objective properties
of the system. Moreover, such a conception is a common
element of the expositions given in quantum mechanics
textbooks. On the other hand, the fact that the result
of the ``measurement'' of $\sigma_{z}$ can
depend on properties of {\em both} system {\em and} apparatus
contradicts this conception. In general, one must consider the
results of the
``measurement of an observable'' to be a joint-product of system and measuring
apparatus. Recall Niels Bohr's comment that \cite{Bohr Nature} ``a
closer examination
reveals that the procedure of measurement has an essential influence
on the conditions on which the very definition of the physical quantities
in question rests.'' For further discussion of the role of
Hermitian operators in quantum theory, the reader is directed to
Daumer, D{\"u}rr, Goldstein, and Zangh{\'i} in \cite{Martin}.
According to these authors:
``the basic problem
with quantum theory \ldots more fundamental than the measurement problem and
all the
rest, is a naive realism about operators \ldots by (this) we refer
to various, not entirely sharply defined, ways of taking too seriously the
notion of
operator-as-observable, and in particular to the all too casual talk about
`measuring
operators' which tends to occur as soon as a physicist enters quantum mode.''
\chapter{The Einstein--Podolsky--Rosen paradox and nonlocality}
In contrast to contextuality, nonlocality is quite an unexpected
and surprising feature to meet with in the quantum phenomena.
As seen in the previous chapter, contextuality is essentially
the dependence of the hidden variables predictions on
the different possible experimental procedures
for the measurements of the observable.
Although the nonlocality of a quantum system is not something one would
regard
as natural, it may be proved by a mathematical
analysis that this feature is inevitable. The demonstration
in question arises from the well-known Einstein--Podolsky--Rosen
paradox \cite{EPR}, in combination with Bell's theorem \cite{Bell's
theorem}.
There exists, however, a
misperception among some authors that what follows from these analyses
is {\em only} that local
hidden variables theories must conflict with quantum
mechanics. In fact, a more careful examination shows that
the conjunction of the EPR paradox with Bell's theorem
implies that {\em any} local theoretical explanation
must disagree with quantum mechanics. To demonstrate this conclusion, we
now review the EPR paradox and Bell's theorem. We begin with a discussion
of the spin singlet version of the EPR
argument. This will lead us to the presentation of Bell's theorem and
its proof. We then discuss the conclusion that follows from
the conjunction of the EPR paradox and Bell's theorem.
\section{Review of the Einstein-Podolsky-Rosen analysis \label{EPRB}}
\subsection{Rotational invariance of the spin singlet state and
perfect correlations between spins}
The well-known work of Einstein, Podolsky, and Rosen, first published in
1935,
was not designed to address the possibility of nonlocality, as such.
The title of the paper was
``Can Quantum-Mechanical Description of Physical Reality Be Considered
Complete?'', and the goal of these authors was essentially the opposite
of authors such as von Neumann: Einstein, Podolsky,
and Rosen wished to demonstrate that the addition of
hidden variables to the description of state is {\em necessary} for
a complete description of a quantum system. According to these authors,
the quantum mechanical
state description given by $\psi$ is {\em incomplete}, i.e. it cannot
account for all the objective properties of the
system. This conclusion is stated in the paper's closing
remark: ``While we have thus shown that the wave function does
not provide a complete description of the physical reality, we left open
the question of whether such a description exists. We believe, however,
that such a theory is possible.''
Einstein, Podolsky, and Rosen arrived at this conclusion
having shown that for the system
they considered, each of the particles must have position and
momentum as simultaneous ``elements of reality''. Regarding the
completeness of a physical theory, the authors state: \cite{EPR}
(emphasis due to
EPR) ``Whatever the meaning
assigned to the term {\em complete}, the following requirement for a
complete
theory seems to be a necessary one: {\em every element of the physical
reality must have a counterpart in the physical theory}''. This
requirement leads them to conclude that the quantum theory is
incomplete, since it does not account for the possibility of
position and momentum as being simultaneous elements of reality.
To develop this conclusion for the position and
momentum of a particle, the authors make use of the following
{\em sufficient condition}
for a physical quantity to be considered as an element of reality:
``If without in any way disturbing a system, we
can predict with certainty (i.e. with probability equaling unity) the
value of a physical quantity, then there exists an element of physical
reality corresponding to this quantity''.
What we shall present
here\footnote{The Bohm spin singlet version and the original version
of the EPR paradox differ essentially in the states and observables
with which they are concerned. We shall consider the original
EPR state more explicitly in chapter 4, section \ref{EPR orig}.}
is a form of the EPR paradox which was developed in
1951,
by David Bohm\footnote{\cite[p. 611-623]{Bohm's Famous Text}. A recent
reprint appears within \cite[p. 356-368]{Big Red}}. Bohm's version
involves the properties of the {\em spin singlet
state} of a pair of spin $\frac{1}{2}$ particles. Within his argument,
Bohm
shows that various components of the spin of
a pair of particles must be elements of reality in the same sense as the
position and momentum were for Einstein, Podolsky, and Rosen. We begin
with a discussion
of the formal properties of the spin singlet state, and then
proceed with our presentation of Bohm's version of the EPR
argument.
Because the spin and its components all commute with the
observables associated with the system's spatial properties,
one may analyze a particle with spin by separately analyzing
the spin observables and the spatial observables. The spin observables
may be analyzed in terms of a Hilbert space ${\cal H}_{s}$ (which is
a two-dimensional space in the case of a spin $\frac{1}{2}$ particle)
and the spatial observables in terms of $L_{2}(\mbox{$\rm I\!R$})$. The full Hilbert
space of the system is then given by the
{\em tensor product} ${\cal H}_{s} \otimes L_{2}(\mbox{$\rm I\!R$})$ of these spaces.
Hence we proceed to discuss the spin observables only, without explicit
reference to the spatial observables of the system. We denote each
direction in space by its
$\theta$ and $\phi$ coordinates in spherical polar coordinates,
and (since we consider spin $\frac{1}{2}$ particles) the symbol
$\sigma$ denotes the spin. To represent
the eigenvectors of $\sigma_{\theta,\phi}$ corresponding to the
eigenvalues
$+\frac{1}{2}$ and $-\frac{1}{2}$, we write
$|\!\uparrow \theta,\phi \rangle$,
and $|\!\downarrow \theta,\phi \rangle $, respectively. For the
eigenvectors
of $\sigma_{z}$, we write simply $|\!\uparrow \rangle$ and
$|\! \downarrow \rangle$. Often vectors and observables
are expressed in terms of the basis formed by the eigenvectors of
$\sigma_{z}$.
The vectors $|\!\uparrow \theta,\phi \rangle$,
and $|\!\downarrow \theta,\phi \rangle$ when expressed
in terms of these are
\begin{eqnarray}
|\!\uparrow \theta,\phi \rangle & = &
\cos(\theta/2) \, |\!\uparrow\rangle \,\,
+ \,\, \sin(\theta/2) e^{i\phi} \, |\!\downarrow\rangle \label{eq:thetap} \\
|\!\downarrow \theta,\phi \rangle & = &
\sin(\theta/2) e^{-i\phi} \, |\! \uparrow \rangle \,\,
- \,\, \cos(\theta/2) \, |\! \downarrow \rangle
\nonumber
\mbox{.}
\end{eqnarray}
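For reference, these are the eigenvectors of the spin component along
the $(\theta,\phi)$ direction which, written in terms of the Cartesian
components (each having eigenvalues $\pm \frac{1}{2}$), is
\begin{equation}
\sigma_{\theta,\phi} = \sin\theta\cos\phi \, \sigma_{x} +
\sin\theta\sin\phi \, \sigma_{y} + \cos\theta \, \sigma_{z}
\mbox{,}
\eeq
and one may check directly that
$\sigma_{\theta,\phi}\,|\!\uparrow \theta,\phi \rangle =
+\frac{1}{2}\,|\!\uparrow \theta,\phi \rangle$ and
$\sigma_{\theta,\phi}\,|\!\downarrow \theta,\phi \rangle =
-\frac{1}{2}\,|\!\downarrow \theta,\phi \rangle$.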
We now discuss the possible states for a system consisting of {\em two}
spin $\frac{1}{2}$ particles\footnote{See for example, Messiah
\cite{Messiah}, and Shankar \cite{Shankar}.}. For such a system, the
states are often classified in terms of the
total spin $S=\sigma^{(1)}+\sigma^{(2)}$ of the particles, where
$\sigma^{(1)}$
is the spin of particle $1$ and $\sigma^{(2)}$ is the spin of particle
$2$. The
state
characterized by $S=1$ is known as the {\em spin triplet state},
$\psi_{ST}$; it
is so named because it consists of a combination of the three
eigenvectors of $S_{z}$---the z-component of the total spin---which
correspond to the eigenvalues $-1,0,1$. The
{\em spin singlet state}, in which we shall be interested, is
characterized
by $S=0$. The name given to the state
reflects that it contains just one eigenvector of the
z-component $S_{z}$ of the total spin: that corresponding to
the eigenvalue $0$. In fact, as we shall demonstrate,
the spin singlet state is an eigenvector
of {\em all} components of the total spin
with an eigenvalue $0$. We
express\footnote{Note that a term such as $|a\rangle^{(1)}|b\rangle^{(2)}$
represents a {\em tensor product}
of the vector $|a\rangle^{(1)}$ of the Hilbert space associated with the first
particle with the vector $|b\rangle^{(2)}$ of the Hilbert space associated with
the second. The formal way of writing such a quantity is as:
$|a\rangle^{(1)}\otimes |b\rangle^{(2)}$. For simplicity of
expression, we shall omit the symbol `$\otimes$' here.} the state in terms
of the eigenvectors $|\!\uparrow\rangle^{(1)},|\!\downarrow\rangle^{(1)}$ of
$\sigma^{(1)}_{z}$ and
$|\!\uparrow\rangle^{(2)},|\!\downarrow\rangle^{(2)}$ of $\sigma^{(2)}_{z}$, as
follows:
\begin{equation}
\psi_{ss}=
|\! \uparrow \rangle^{(1)}\, |\!\downarrow \rangle^{(2)} \,\,
- \,\, |\! \downarrow \rangle^{(1)} \, |\! \uparrow \rangle^{(2)}
\label{eq:zss}
\mbox{.}
\eeq
For simplicity, we have suppressed the normalization factor
$\frac{1}{\sqrt{2}}$.
Note that each of the two terms consists of a product of
an eigenvector of $\sigma^{(1)}_{z}$ with an eigenvector of
$\sigma^{(2)}_{z}$
such that the corresponding eigenvalues are the negatives of each other.
If we invert the relationships \ref{eq:thetap} we
may then re-write the spin singlet state \ref{eq:zss} in
terms of the eigenvectors $|\!\uparrow \theta,\phi\rangle$
and $|\!\downarrow \theta,\phi \rangle$
of the component of spin in the $\theta,\phi$ direction. Inverting
\ref{eq:thetap} gives:
\begin{eqnarray}
|\!\uparrow \rangle & = &
\cos(\theta/2) \, |\!\uparrow \theta ,\phi \rangle \,\,
+ \,\, \sin(\theta/2) e^{i\phi} \, |\!\downarrow \theta ,\phi \rangle
\label{eq:zthetap} \\
|\!\downarrow \rangle & = &
\sin(\theta/2) e^{-i\phi} \, |\! \uparrow \theta,\phi \rangle \,\,
- \,\, \cos(\theta/2) \, |\! \downarrow \theta,\phi \rangle
\nonumber
\mbox{.}
\end{eqnarray}
Now let us consider re-writing the spin singlet state \ref{eq:zss}
in the following manner. We
substitute for $|\!\uparrow \rangle^{(2)}$ and $|\!\downarrow \rangle^{(2)}$
expressions
involving $|\!\uparrow \theta_{2},\phi_{2}\rangle^{(2)}$ and
$|\!\downarrow \theta_{2},\phi_{2}\rangle^{(2)}$ by making use of
\ref{eq:zthetap}. Doing so gives us a new expression for the spin singlet
state:
\begin{eqnarray}
\psi_{ss} & = & |\!\uparrow \rangle^{(1)}
\left[
\sin(\theta_{2}/2)e^{-i\phi_{2}} \, |\! \uparrow \theta_{2},\phi_{2}
\rangle^{(2)}
\,\, - \,\, \cos(\theta_{2}/2) \, |\!\downarrow \theta_{2},\phi_{2}
\rangle^{(2)}
\right] \\
& & -|\!\downarrow \rangle^{(1)}
\left[
\cos(\theta_{2}/2) \, |\! \uparrow \theta_{2},\phi_{2}
\rangle^{(2)}
\,\, + \,\, \sin(\theta_{2}/2)e^{i \phi_{2}} \,
|\!\downarrow \theta_{2},\phi_{2} \rangle^{(2)}
\right]
\nonumber \\
& = & -\left[\cos(\theta_{2}/2) \, |\!\uparrow \rangle^{(1)} \,\, + \,\,
\sin(\theta_{2}/2)e^{i\phi_{2}} \, |\!\downarrow \rangle^{(1)}\right]
\, |\!\downarrow\theta_{2},\phi_{2} \rangle^{(2)}
\nonumber
\\
& & + \left[\sin(\theta_{2}/2)e^{-i\phi_{2}}\, |\!\uparrow \rangle^{(1)} \,\,
-
\,\, \cos(\theta_{2}/2)\, |\!\downarrow \rangle^{(1)}\right] \,
|\!\uparrow\theta_{2},
\phi_{2}\rangle^{(2)}
\nonumber
\mbox{.}
\end{eqnarray}
Examining the second of these relationships, we see from
\ref{eq:thetap} that $\psi_{ss}$
reduces to\footnote{If we multiply a wave function by any constant factor
$c$,
where $c \neq 0$ the resulting wave function represents the same physical
state. We multiply $\psi_{ss}$ by $-1$ to facilitate
comparison with \ref{eq:zss}}:
\begin{equation}
\psi_{ss}=
|\!\uparrow\theta_{2},\phi_{2} \rangle^{(1)}
\, |\!\downarrow \theta_{2},\phi_{2} \rangle^{(2)}
\,\, - \,\, |\!\downarrow\theta_{2},\phi_{2}
\rangle^{(1)}\, |\!\uparrow \theta_{2},\phi_{2} \rangle^{(2)}
\label{eq:thetassp}
\mbox{.}
\eeq
Note
that this form is similar to that given in \ref{eq:zss}. Each of
its terms is a product of an eigenvector of
$\sigma^{(1)}_{\theta_{2},\phi_{2}}$
and an
eigenvector of $\sigma^{(2)}_{\theta_{2},\phi_{2}}$ such that the factors
making up
the product
correspond to eigenvalues that are just the opposites of each other. We
drop the `$2$' from
$\theta$ and $\phi$ giving
\begin{equation}
\psi_{ss}=
|\!\uparrow \theta,\phi \rangle^{(1)} \,
|\!\downarrow \theta,\phi \rangle^{(2)}
\,\, -\,\, |\!\downarrow \theta, \phi \rangle^{(1)}
\, |\!\uparrow \theta, \phi \rangle^{(2)}
\label{eq:thetass}
\mbox{.}
\eeq
Suppose that we consider the implications of \ref{eq:thetass} for
individual measurements of $\sigma^{(1)}_{\theta,\phi}$ of particle $1$
and the same spin component of particle $2$. In each term of this
spin singlet form, we have
a product of eigenvectors of the two spin components
such that the eigenvalues are simply the negatives of one another.
Therefore, it follows that
{\em measurements of} $\sigma^{(1)}_{\theta,\phi}$
{\em and} $\sigma^{(2)}_{\theta,\phi}$ {\em always give
results that sum to zero}, i.e. if measurement of
$\sigma^{(1)}_{\theta,\phi}$ gives the result $\pm \frac{1}{2}$, then
the measurement of $\sigma^{(2)}_{\theta,\phi}$ always gives
$\mp \frac{1}{2}$. We say that these observables exhibit {\em perfect
correlation}. Note that this holds true for pairs of spin observables
$\sigma^{(1)}_{\theta,\phi}$ and $\sigma^{(2)}_{\theta,\phi}$ where
$\theta,\phi$ refer to an {\em arbitrary} direction.
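As an illustrative aside (not part of the original derivation), the
following short Python sketch gives a numerical check of this
form-invariance of the spin singlet state. It assumes only the
\texttt{numpy} library and the phase convention implicit in
\ref{eq:zthetap}, and it confirms that the $\theta,\phi$ form of
$\psi_{ss}$ agrees with the $z$-basis form \ref{eq:zss} up to the
overall factor of $-1$ discussed in the footnote above.
\begin{verbatim}
import numpy as np

theta, phi = 1.1, 0.7   # an arbitrary direction (theta, phi)
c, s = np.cos(theta / 2), np.sin(theta / 2)

# z-basis spinors |up>, |down>
up, down = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)

# Eigenvectors of the spin component along (theta, phi), using the phase
# convention obtained by inverting eq:zthetap (an assumed convention)
up_tp = c * up + s * np.exp(1j * phi) * down
down_tp = s * np.exp(-1j * phi) * up - c * down

# The singlet written in the z basis and in the (theta, phi) basis
psi_z = np.kron(up, down) - np.kron(down, up)
psi_tp = np.kron(up_tp, down_tp) - np.kron(down_tp, up_tp)

# The two expressions differ only by the overall constant -1, i.e. they
# represent the same physical state
print(np.allclose(psi_tp, -psi_z))   # True
\end{verbatim}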
\subsection{Incompleteness argument}
Consider a situation in which the two particles described by
the spin singlet state are spatially separated from one another,
and spin-component measurements are to be carried out on each. Since
there exist perfect correlations, it is possible to predict with
certainty the result of a measurement of $\sigma^{(1)}_{x}$ from
the result of a previous measurement of $\sigma^{(2)}_{x}$. Suppose,
for example, that we measure $\sigma^{(2)}_{x}$ and find the result
$\sigma^{(2)}_{x}=\frac{1}{2}$. A subsequent measurement of
$\sigma^{(1)}_{x}$ must then give $\sigma^{(1)}_{x}=-\frac{1}{2}$. If
we assume {\em locality} then the measurement of $\sigma^{(2)}_{x}$
cannot in any way disturb particle $1$, which is spatially separated from
particle $2$. Using the Einstein-Podolsky-Rosen criterion, it follows
that $\sigma^{(1)}_{x}$ is an element of reality.
Similarly, one can predict with certainty the result of a measurement
of $\sigma^{(1)}_{y}$ from a previous measurement of $\sigma^{(2)}_{y}$.
Again, locality implies that measurement of $\sigma^{(2)}_{y}$ does
not disturb particle $1$, and by the EPR criterion,
$\sigma^{(1)}_{y}$ must be an element of reality.
In total we have shown that both $\sigma^{(1)}_{x}$ and
$\sigma^{(1)}_{y}$ are elements of reality.
On the other hand, the quantum formalism's description of state given
by $\psi$ can account for the definite value of at most one of two such
non-commuting quantities. Therefore,
we may conclude that this description of state is {\em incomplete}.
Moreover, since we have perfect correlations between all components
of the spins of the two particles, a similar argument may be given for
any component of $\sigma^{(1)}$. Therefore,
particle $1$'s spin component in an {\em arbitrary} direction
$\theta,\phi$
must be an element of reality.
Note that we can
interchange the roles of particle $1$ and $2$ in this argument to show
the same conclusion for all components of particle $2$'s spin, as well.
The EPR paradox thus has the following structure. The analysis
begins with the observation that the spin singlet state exhibits perfect
correlations such that for any direction $\theta,\phi$, measurements of
$\sigma^{(1)}_{\theta,\phi}$
and $\sigma^{(2)}_{\theta,\phi}$ always give results which sum to zero.
Together with the assumption of locality, this leads to the conclusion
that all components of the spins of both
particles must be elements of reality. Since the quantum state description
given by $\psi$ does not allow for the simultaneous reality of
non-commuting quantities such as $\sigma^{(1)}_{x}$ and
$\sigma^{(1)}_{y}$,
it follows that this description is incomplete.
\section{Bell's theorem \label{bellinequ}}
The incompleteness of the quantum mechanical state description
concluded by EPR implies that
one must consider a theoretical description of state consisting of $\psi$
and some additional parameter, in order to fully account for a system's
properties. In terms of such a state description, one ought to be
able to mathematically represent the
definite values concluded by EPR using a
function\footnote{Since we are considering a system of {\em fixed} $\psi$,
namely that of the spin singlet state, no $\psi$ dependence need be
included in $V$.}
$V_{\mbox{$\lambda$}}(O)$ mapping each
component of the spin of each particle to a value.
Here we have denoted the
supplemental state parameter as $\mbox{$\lambda$}$ in
accordance with the discussion of von Neumann's theorem in chapter $1$.
In 1964, John S. Bell presented a famous theorem which
addressed the possibility of just such a function on the spin observables.
Bell was able to show that this formulation must {\em conflict} with
the statistical predictions of
quantum mechanics for various spin measurements.
We now present Bell's theorem.
Let us first fix our notation. To denote directions in space,
we write unit vectors such as $\hat{a},\hat{b},\hat{c}$
instead of $\theta$ and $\phi$. Rather than using the form
$V_{\mbox{$\lambda$}}(O)$, we shall write $A(\mbox{$\lambda$},\hat{a})$ and
$B(\mbox{$\lambda$},\hat{b})$ to represent functions on
the spin components of particles $1$ and $2$, respectively.
Since the two particles are each of spin $\frac{1}{2}$,
we should have $A=\pm \frac{1}{2}$ and $B=\pm \frac{1}{2}$; however, for
simplicity we rescale these to $A=\pm 1$ and $B=\pm 1$.
\subsection{Proof of Bell's theorem}
The key feature of the spin singlet version of the EPR paradox was its
analysis of the perfect correlations arising when the two particles
of a spin singlet pair are subject to measurements of the same spin
component. Thus, it may not be surprising that Bell's theorem is
concerned with a {\em correlation function}, which is essentially a
measure of the
statistical correlation between the results of spin component measurements
of the two particles.
The correlation function is to be determined
as follows: we set the apparatus measuring particle $1$ to
probe the component in the $\hat{a}$ direction, and the apparatus
measuring particle $2$ is set for the $\hat{b}$ direction. We make a series of
measurements of spin singlet pairs using this configuration, recording the
{\em product} $\sigma^{(1)}_{\hat{a}}\sigma^{(2)}_{\hat{b}}$ of the results on
each trial. The average of these products over the series of measurements
is the value of the correlation function.
In general, we expect the value of the average
determined in this way to depend on the
directions $\hat{a}, \hat{b}$ with respect to which the
spin components are measured.
According to the quantum formalism, we may predict the average, or
{\em expectation value}, of any observable using the formula
$E(O)=\langle\psi|O|\psi\rangle$. For the series of experiments just described,
we take the expectation value of the product of the appropriate spin component
observables, giving:
\begin{equation}
P_{QM}({\hat a},{\hat b}) =
\langle\sigma^{(1)}_{\hat{a}} \sigma^{(2)}_{\hat{b}}\rangle = -{\hat a} \cdot {\hat b}
\label{eq:CQM}
\mbox{.}
\end{equation}
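As a quick numerical check (an illustrative sketch rather than part of
the argument), the following Python fragment, assuming only
\texttt{numpy}, represents the rescaled spin components by Pauli
matrices (eigenvalues $\pm 1$) and evaluates
$\langle\sigma^{(1)}_{\hat{a}} \sigma^{(2)}_{\hat{b}}\rangle$ in the
normalized singlet state, reproducing both $-\hat{a}\cdot\hat{b}$ and
the perfect anti-correlation for equal settings:
\begin{verbatim}
import numpy as np

# Pauli matrices: the spin components rescaled to eigenvalues +/-1
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def sigma(n):
    """Spin component along the unit vector n."""
    return n[0] * sx + n[1] * sy + n[2] * sz

# Normalized spin singlet state (|up,down> - |down,up>)/sqrt(2), z basis
up, down = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
psi = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

def P_qm(a, b):
    """Correlation <sigma_a^(1) sigma_b^(2)> in the singlet state."""
    return np.real(np.vdot(psi, np.kron(sigma(a), sigma(b)) @ psi))

a = np.array([0.0, 0.0, 1.0])
b = np.array([np.sin(2.0), 0.0, np.cos(2.0)])
print(P_qm(a, b), -np.dot(a, b))   # agree: both equal -cos(2)
print(P_qm(a, a))                  # -1: perfect anti-correlation
\end{verbatim}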
In the case of the predetermined values,
the average of the product of the two spin components
$\sigma^{(1)}_{\hat{a}}\sigma^{(2)}_{\hat{b}}$
is obtained by taking an average over $\mbox{$\lambda$}$:
\begin{equation}
P(\hat{a},\hat{b}) = \int d \mbox{$\lambda$} \rho(\mbox{$\lambda$}) A(\mbox{$\lambda$},\hat{a})
B(\mbox{$\lambda$},\hat{b})
\mbox{,}
\label{eq:CHV}
\eeq
where $\rho (\lambda)$ is the probability
distribution over $\mbox{$\lambda$}$.
$\rho (\lambda)$ is normalized by:
\begin{equation}
\int d \mbox{$\lambda$} \mbox{ } \rho(\mbox{$\lambda$}) =1
\label{eq:Norm}
\mbox{.}
\eeq
We will now examine the question of whether the correlation function
given by \ref{eq:CHV} is
compatible with the quantum mechanical prediction \ref{eq:CQM}
for this function.
Crucial to the EPR analysis is the fact that there
is a perfect correlation between the results of the
measurement of any component of particle $1$'s spin in a given direction
with the measurement of the same component of particle $2$'s spin, such
that the results are of opposite sign.
To account for this, the correlation function must give
\begin{equation}
P(\hat{a},\hat{a}) = -1 \mbox{ }\forall \hat{a}
\mbox{.}
\eeq
It is easy to see that the quantum correlation function satisfies this
condition. If the prediction derivable using the predetermined values
is to reflect this, we must have
\begin{equation}
A(\mbox{$\lambda$},\hat{a}) = -B(\mbox{$\lambda$},\hat{a}) \mbox{ } \forall \hat{a},\mbox{$\lambda$}
\mbox{.}
\label{eq:PC}
\eeq
At this point, we have enough information to derive the conclusion
of the theorem. Using \ref{eq:CHV} together with \ref{eq:PC} and the fact
that $[A(\mbox{$\lambda$},\hat{a})]^{2} = 1$, we write
\begin{eqnarray}
P(\hat{a},\hat{b}) - P(\hat{a},\hat{c}) & = &
- \int d \mbox{$\lambda$} \rho(\mbox{$\lambda$})[A(\mbox{$\lambda$},\hat{a})A(\mbox{$\lambda$},\hat{b})
-A(\mbox{$\lambda$},\hat{a})A(\mbox{$\lambda$},\hat{c})] \\
& = & - \int d \mbox{$\lambda$} \rho(\mbox{$\lambda$})A(\mbox{$\lambda$},\hat{a})A(\mbox{$\lambda$},\hat{b})
[1-A(\mbox{$\lambda$},\hat{b})A(\mbox{$\lambda$},\hat{c})]
\nonumber
\end{eqnarray}
Using $A,B = \pm 1$, we have that
\begin{equation}
|P(\hat{a},\hat{b})-P(\hat{a},\hat{c})|
\leq
\int d \mbox{$\lambda$} \rho(\mbox{$\lambda$}) [1-A(\mbox{$\lambda$},\hat{b})A(\mbox{$\lambda$},\hat{c})]
\mbox{;}
\eeq
then using the normalization \ref{eq:Norm} and the condition \ref{eq:PC}, we
have
\begin{equation}
|P(\hat{a},\hat{b})-P(\hat{a},\hat{c})| \leq
1 + P(\hat{b},\hat{c})
\label{eq:Bell}
\mbox{,}
\eeq
and this relation, which is commonly referred to as ``Bell's
inequality'', is the theorem's conclusion.
Thus, the general framework of Bell's theorem is as follows.
The definite values of the various components of the two particles' spins are
represented by the mathematical functions $A(\mbox{$\lambda$},\hat{a})$, and
$B(\mbox{$\lambda$},\hat{b})$.
The condition
\begin{equation}
A(\mbox{$\lambda$},\hat{a}) = -B(\mbox{$\lambda$},\hat{a}) \mbox{ } \forall \hat{a},\mbox{$\lambda$}
\eeq
(equation \ref{eq:PC}), placed
on the functions $A(\mbox{$\lambda$},\hat{a})$,
$B(\mbox{$\lambda$},\hat{b})$ ensures the agreement of these functions with
the perfect correlations. Bell's theorem tells us that
from these conditions it follows that the theoretical prediction
for the correlation function, $P(\hat{a},\hat{b})$, must satisfy
the Bell inequality, \ref{eq:Bell}.
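To make the framework concrete, here is a minimal Monte-Carlo sketch
(an illustration added here, not drawn from the original argument) of
one simple local hidden-variables model of the kind the theorem
addresses: the hidden variable $\lambda$ is taken to be a unit vector
distributed uniformly over the sphere, with
$A(\lambda,\hat{a})=\mathrm{sign}(\lambda\cdot\hat{a})$ and
$B(\lambda,\hat{b})=-\mathrm{sign}(\lambda\cdot\hat{b})$. The model
reproduces the perfect correlations \ref{eq:PC}, and, as the theorem
requires, its correlation function \ref{eq:CHV} obeys Bell's
inequality; in particular it fails to reproduce the quantum prediction
$-\hat{a}\cdot\hat{b}$. Only \texttt{numpy} is assumed.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def random_unit_vectors(n):
    """n values of the hidden variable: unit vectors uniform on the sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def A(lam, a):
    return np.sign(lam @ a)    # +/-1 outcome for particle 1

def B(lam, b):
    return -np.sign(lam @ b)   # +/-1 outcome for particle 2

def P_lhv(a, b, n=200000):
    """Monte-Carlo estimate of the average of A*B over lambda (eq:CHV)."""
    lam = random_unit_vectors(n)
    return np.mean(A(lam, a) * B(lam, b))

a = np.array([1.0, 0.0, 0.0])
b = np.array([np.cos(np.pi / 3), np.sin(np.pi / 3), 0.0])
print(P_lhv(a, a))   # close to -1: the perfect correlations are reproduced
print(P_lhv(a, b))   # close to -1/3, not the quantum value -cos(60 deg) = -1/2
\end{verbatim}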
Based on the fact that the Bell inequality is not satisfied by the
quantum mechanical correlation function \ref{eq:CQM} (as we shall see
below), some authors\footnote{This
is the conclusion reached by the following authors:
\cite{Bethe},\cite[p. 172]{Gell-Mann}, \cite[Wigner p. 291]{Big Red}}
have concluded that Bell's theorem proves the impossibility of
hidden variables. As we shall see, however, this conclusion does not follow.
\section{The EPR paradox, Bell's theorem, and nonlocality
\label{eprbellnon}}
Recall our discussion of the EPR paradox. We found
that for a system described by the spin singlet
state, each component of the spin of one
particle is perfectly correlated with the same component of the spin
of the other. Such perfect correlations seem to give the appearance
of nonlocality,
since the measurement of one spin seems capable of immediately influencing
the result of a measurement of its distant partner. The conclusion of
nonlocality can be avoided only if all components of the
spins of each particle possess definite values. This much is developed
from the EPR analysis. According to Bell's theorem, these definite values
lead to the prediction that the
correlation function $P$ must satisfy the inequality \ref{eq:Bell}.
Since Bell's theorem assumes nothing beyond what can be concluded from the
spin singlet EPR paradox, one can deduce from the {\em conjunction}
of EPR with Bell that {\em any local theoretical description} which
accounts for the perfect correlations of the spin singlet state
leads to a correlation function satisfying Bell's inequality.
Consider now the quantum mechanical prediction for the correlation
function \ref{eq:CQM}.
If we examine this function, we find that it does {\em not} in
general satisfy Bell's inequality. Suppose, for example, that we have defined
some angular orientation such that $\hat{a},\hat{b},\hat{c}$ all lie in the
$x,y$ plane (so that $\theta=90^{\circ}$), with
$\hat{a}$ along $\phi=0^{\circ}$, $\hat{b}$ along
$\phi=60^{\circ}$, and $\hat{c}$ along
$\phi=120^{\circ}$. Since for coplanar directions $P_{QM}$ depends only on
the angle between the two settings, we have
$P_{QM}(\hat{a},\hat{b})=-\frac{1}{2}$, $P_{QM}(\hat{a},\hat{c})=
\frac{1}{2}$ and $P_{QM}(\hat{b},\hat{c})=
-\frac{1}{2}$, so that $|P_{QM}(\hat{a},\hat{b})-P_{QM}(\hat{a},\hat{c})|
= 1$
while $1+P_{QM}(\hat{b},\hat{c}) = \frac{1}{2}$, in violation of
\ref{eq:Bell}.
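The arithmetic of this violation is simple enough to check directly;
the following few lines of Python (an illustrative check, assuming
nothing beyond \texttt{numpy}) evaluate the three correlations and the
two sides of \ref{eq:Bell} for the directions just chosen:
\begin{verbatim}
import numpy as np

# Directions in the x-y plane, phi measured from the x axis (0, 60, 120 degrees)
phi_a, phi_b, phi_c = 0.0, np.pi / 3, 2 * np.pi / 3

def P_qm(phi1, phi2):
    """Singlet correlation: minus the cosine of the angle between the settings."""
    return -np.cos(phi1 - phi2)

lhs = abs(P_qm(phi_a, phi_b) - P_qm(phi_a, phi_c))   # = 1
rhs = 1 + P_qm(phi_b, phi_c)                         # = 1/2
print(lhs, rhs, lhs <= rhs)   # 1.0 0.5 False: Bell's inequality is violated
\end{verbatim}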
From what we have found above, the disagreement of the quantum
mechanical prediction for this correlation function with Bell's
inequality implies that {\em quantum mechanics
must disagree with any local theoretical description.} In the words of
Bell: \cite{EPW} ``It is known that with Bohm's example of EPR correlations,
involving particles with spin, there is an irreducible nonlocality.''
As for the claim that Bell's theorem is an `impossibility proof' of hidden
variables, its falsity is already indicated by the success
of Bohmian mechanics (section \ref{Bohm}). On the other hand, Bell's theorem
shows that from the existence of definite values for the spins
follows a conclusion that is in conflict with the quantum mechanical
predictions. This prompts the question of just what feature Bohmian
mechanics possesses which allows it to succeed where the quite general
looking formulation of hidden variables analyzed by Bell does not.
If we examine the functions $A(\mbox{$\lambda$},\hat{a})$ and $B(\mbox{$\lambda$},\hat{b})$
analyzed by Bell, we see that they do not possess the feature
we have just seen
is present in the quantum theory itself: nonlocality. This
follows since the value of $A$ does not depend on the setting $\hat{b}$
of the apparatus measuring particle $2$, nor does $B$ depend on the
setting $\hat{a}$ of
the apparatus measuring particle $1$. Thus, Bell analyzes a {\em local}
theory of hidden
variables\footnote{This feature is not ``accidental'': the hidden variables
analyzed by Bell are precisely those given by the spin singlet EPR
analysis---which itself is based on the
locality assumption. Moreover, Bell did not regard his analysis as a
hidden variables impossibility proof, since he was aware that:
\cite{Bell's theorem} ``\ldots a hidden variable interpretation
(Bohmian mechanics) has been explicitly constructed''. Instead, Bell
viewed his theorem as a proof that the ``grossly nonlocal structure''
inherent in Bohmian mechanics ``is characteristic \ldots of any such
(hidden variables) theory which reproduces exactly the quantum
mechanical predictions''.}.
Bohmian mechanics, on the other hand, is nonlocal, and it is precisely this
that allows it to ``escape'' disproof by Bell's theorem.
Hence, Bell's theorem does not constitute a disproof of hidden
variables {\em in general}, but only of {\em local} hidden
variables.
\chapter{Introduction}
\section{The issue of hidden variables}
\subsection{Contextuality, nonlocality, and hidden variables}
\renewcommand{\thepage}{\arabic{page}}
\setcounter{page}{1}
The aim of this thesis is to contribute to the issue of hidden variables as a
viable interpretation of quantum mechanics. Our efforts will be directed
toward the examination of certain mathematical theorems that are relevant to
this issue. We shall examine the relationship these theorems bear toward
hidden variables and the lessons they provide regarding quantum mechanics
itself. The theorems in question include those of John von Neumann
\cite{von Neumann}, A. M. Gleason \cite{Gleason}, J.S. Bell \cite{Bell's
theorem}, and S. Kochen and E. P. Specker \cite{Kochen Specker}. Arguments
given by John Stewart
Bell\footnote{See references by Bell:
\cite{Bell Eclipse, Bell Imposs Pilot} on the theorems of von Neumann, Gleason,
and Kochen and Specker. Bell's works
emphasize the conclusion that these theorems would place no serious
limitations on a hidden variables theory, and he declares that
\cite{Bell Imposs Pilot} ``What is proved by impossibility proofs [these
theorems] \ldots \ldots is lack of imagination.'' A recent work by
Mermin
\cite{Mermin Review}
also addresses the impact of these theorems on hidden variables. Mermin
does not accept Bell's conclusions, however, and states ``Bell \ldots is
unreasonably dismissive of the importance of \ldots impossibility
proofs [theorems]. \ldots [Bell's] criticism [of these theorems]
undervalues the importance of defining limits to what speculative
theories can or cannot be expected to accomplish.''
In the present work, we argue for Bell's position on these
matters.}
demonstrate that the prevailing
view that these results disprove hidden
variables\footnote{The following references found in
the bibliography present discussions of all four theorems along with their
relationship to hidden variables: \cite {1974 Belinfante Book, Hughes Simple
Math, Jammer Philosophy, Mermin Review}. Von Neumann's theorem is claimed to
disprove hidden variables in \cite{Albertson, Born, Jauch-Piron, von Neumann}.
This conclusion is reached for both Gleason's theorem and the
theorem of Kochen and Specker in \cite{Kochen Specker, Reality Marketplace}.
The following references state that Bell's theorem may be regarded in this way:
\cite{Bethe}, \cite[p. 172]{Gell-Mann}, \cite{Wigner Big Red}.}
is actually a false one. According
to Bell, what is
shown\footnote{The theorem due to von Neumann is not as
closely
connected with these concepts as the others. We include it for the sake of
completeness and as introduction to the discussion of the other theorems. In
addition, we will point out the presence in a work by Schr{\"o}dinger
\cite{Present Sit} of an analysis leading to essentially the same conclusion as
von Neumann's.}
by these theorems is that hidden variables must allow for two
important and fundamental quantum features: {\em contextuality} and {\em
nonlocality}.
To allow for contextuality, one must consider the results of a
measurement to depend on the
attributes of both the system and the measuring apparatus. The concept
of contextuality is evidently in contrast to the tendency to consider
the quantum system in isolation, without considering the configuration
of the experimental apparatus\footnote{See
papers in the recent book by Bell \cite{Bell Eclipse, Bell Imposs Pilot} for a
good discussion of contextuality.}.
Such a tendency conflicts with the lesson of
Niels Bohr concerning \cite[page 210]{Einstein impeachment} ``the
impossibility
of any sharp separation between the behavior of atomic objects and the
interaction with the measuring instruments which serve to define the conditions
under which the phenomena appear.'' In any attempt to
develop a theory of hidden variables, one must take contextuality into
account.
The concept of contextuality lies at the heart of both Gleason's
theorem and the theorem of Kochen and Specker. However, typical
expositions\footnote{The exceptions to this are two
works by Bell: \cite{Bell Eclipse, Bell Imposs Pilot}.
In these works, contextuality is made quite clear, and it is concluded
that neither theorem proves the impossibility of hidden variables.
References that
discuss the Kochen and Specker theorem {\em without} treating contextuality are
\cite{Kochen Specker, Mermin's Gleason, Reality Marketplace}. The following
references
briefly discuss the relationship of Gleason's and Kochen and Specker's
theorems to contextuality, but fail to convey its
importance:
\cite{1974 Belinfante Book, Hughes Simple Math, Jammer Philosophy}.}
of these
arguments give either no discussion of the concept, or else a very
brief treatment
that fails to convey its full meaning and importance. The conclusion drawn by
authors giving no discussion of contextuality is that the mathematical
impossibility of hidden variables has been proved.
References which give only a brief mention of
the concept tend to leave their readers with the impression that it is not of
much significance and is somewhat artificial. Readers of these works would be
led to the conclusion that Gleason's and Kochen and Specker's theorems
demonstrate that the prospect of hidden variables is a dubious one, at best.
The concept of nonlocality is well known through Bell's famous mathematical
theorem \cite{Bell's theorem} in which he addresses the problem of the
Einstein--Podolsky--Rosen paradox\footnote{The EPR paper appears in \cite{EPR},
and it is reprinted in \cite{Big Red}.}. In a system exhibiting nonlocality,
the consequences of events at one place propagate to other
places instantaneously. Although Einstein, Podolsky and Rosen were
attempting to demonstrate a different conclusion (the incompleteness of the
quantum theory), their analysis served to point out the conditions under which
(as would become evident after Bell's work) nonlocality arises. The
existence of
such conditions in quantum mechanics was regarded by Erwin Schr{\"o}dinger as
being of great significance, and in a work \cite{Camb1} in which he generalized
the EPR argument, he called these conditions ``{\em the}
characteristic trait of
quantum mechanics, the one that forces its entire departure from classical
lines.'' Bell's work essentially completed the proof that the
quantum phenomenon discovered by EPR and elaborated by Schr{\"o}dinger truly
does entail nonlocality.
Although the EPR paradox and Bell's theorem are quite well known,
there exists a misperception regarding the relationship
these arguments bear to nonlocality. Some authors
(mistakenly) conclude\footnote{See for example,
\cite[p 48]{Ghost Interviews}, \cite{PhysWorld, Mermin Review, Reality
Marketplace}. Clauser and Shimony \cite{Clauser-Shimony}
regard the EPR paradox and Bell's theorem as proof that quantum mechanics
must conflict with any `realistic' local theory. Their view seems
to fall short of Bell's (see Bell \cite{Bell cascade photons,
Bertlemann's Socks}) contention that it is
{\em locality itself} which leads to this clash with quantum theory.}
that the EPR paradox and Bell's theorem imply that
a conflict exists {\em only} between local theories of hidden
variables and quantum mechanics, when a more general
conclusion than this follows. When taken together, the EPR
paradox\footnote{Bohm's spin singlet version \cite{Bohm's Famous Text}
of EPR.} and Bell's theorem imply that
{\em any local theoretical explanation whatsoever must disagree with
quantum mechanics}. In Bell's words: \cite{Cosmologists}
``It now seems that the non-locality is deeply rooted in
quantum mechanics itself and will persist in any completion.''
We can conclude from the EPR paradox and Bell's theorem that the
quantum theory is irreducibly nonlocal\footnote{See the following:
\cite{Albert, Bell cascade photons, Bertlemann's Socks, Intuitive Nonlocality,
Baby Snakes, Tim, Shadows}.}.
As we have mentioned, Erwin Schr{\"o}dinger regarded the situation brought out
in the Einstein, Podolsky, Rosen paper as a very important problem. His own
work published just after the EPR paper \cite{Present Sit} provided a
generalization
of this analysis. This work contains Schr{\"o}dinger's presentation of the
well known
paradox of `Schr{\"o}dinger's cat'. However, a close look at the paper
reveals much more. Schr{\"o}dinger's work contains a very extensive and
thought provoking analysis of
quantum theory. He begins with a statement of the nature of
theoretical modeling and a comparison of this to the framework of
quantum theory. He continues with the cat paradox, the measurement
problem, and
finally his generalization of the EPR paradox and its implications. In
addition,
he derives a result quite similar to that of von Neumann's
theorem\footnote{Schr{\"o}dinger does not explicitly mention von Neumann's
work, however.}.
Subsequent to our discussion of the issues of the EPR paradox,
Bell's theorem, and nonlocality, we will discuss a new
form of quantum nonlocality proof\footnote{A nonlocality proof similar to
that presented in this thesis was developed by Heywood and Redhead
\cite{Lugubrious Fellow} and by Brown and Svetlichny \cite{Tom's Paper}.
These authors present arguments addressed to one particular example of
the general class of maximally entangled states we consider. Their
argumentation does not bring out all the conclusions we do in
the present work, as they do not draw the {\em general} conclusion
of nonlocality, but instead only the nonlocality of the particular
class of hidden variables they address.} based on Schr{\"o}dinger's
generalization
of the EPR paradox. The type of argument we shall present falls into the
category of a ``nonlocality without
inequalities'' proof\footnote{See Greenberger, Horne, Shimony and Zeilinger in
\cite{GHZ AJP}}. Such analyses---of which the
first\footnote{See Mermin in \cite{Mermin GHZ}}
to be developed was that of
Greenberger, Horne, and Zeilinger \cite{GHZ}---differ from
the nonlocality proof derived from the EPR paradox and Bell's
theorem in that they do not possess the statistical
character of the latter.
\subsection{David Bohm's theory of hidden variables \label{Bohm}} It must be
mentioned that the problem of whether hidden variables are possible is
more than just conjectural. Louis de Broglie's 1926 `Pilot Wave'
theory\footnote{See \cite{de Broglie 1926, de Broglie 1927, de Broglie 1930}.
A summary of the early development of the theory is
given by de Broglie in \cite{de Broglie 1953}.}
is, in fact, a viable interpretation of
quantum phenomena. The theory was more systematically presented
by David Bohm in 1952\footnote{See \cite{Bohmian Mechanics}. See also
Bell in \cite{Bell Imposs Pilot, Bell Six Worlds}. A recent book
on Bohmian mechanics is \cite{Holland}.}. This
encouraged de Broglie
himself to take up the idea again. We shall refer to the theory as ``Bohmian
mechanics'', after David Bohm.
The validity of Bohmian mechanics is a result of the first importance, since
it restores {\em objectivity} to the description of quantum
phenomena\footnote{It is for this reason that Bohm refers to his theory as an
`Ontological Interpretation' of quantum mechanics:
\cite[page 2]{Bohm 1993 Book}
``Ontology is concerned
primarily with that which {\em is} and only secondarily with how we obtain our
knowledge about this. We have chosen as the subtitle of our book
``An Ontological
Interpretation of Quantum Theory'' because it gives the clearest and most
accurate description of what the book is about....the primary question is
whether
we can have an adequate conception of the reality of a quantum system, be this
causal or be it stochastic or be it of any other nature.''}.
The quantum
formalism addresses only the properties of quantum
systems as they appear when the system is {\em measured}. An objective
description, by contrast, must give a measurement-independent
description of the
system's properties. The lack of objectivity in quantum theory is something
which Albert Einstein disliked, as is clear from his writings:
\cite[page 667]{Einstein impeachment}
\begin{quotation}
What does not
satisfy me, from the standpoint of principle, is (quantum theory's)
attitude toward that which appears to be the programmatic aim of all
physics: the
complete description of any (individual) real situation (as it
supposedly exists
irrespective of any act of observation or substantiation).
\end{quotation}
Moreover, Einstein saw quantum theory as
{\em incomplete}, in the sense that it omits essential elements from its
description of state: \cite[page 666]{Einstein impeachment}
``I am, in fact, firmly convinced that the essentially statistical
character of contemporary quantum theory is solely to be ascribed to the fact
that this (theory) operates with an incomplete description of physical
systems.'' Bohmian mechanics essentially fulfills Einstein's
goals for quantum physics: to complete the quantum description
and to restore objectivity to the theoretical picture.
Besides the feature of objectivity, another reason the success of Bohmian
mechanics is important is that it restores {\em determinism}. Within this
theory one may predict, from the present state of the system, its form at any
subsequent time. In particular, the results of quantum measurements are so
determined from the present state. Determinism is the feature with which the
hidden variables program has been traditionally
associated\footnote{This is the
motivation mentioned by von Neumann in his no-hidden variables proof. See
reference \cite{von Neumann}}.
The importance of the existence of a successful theory of hidden variables in
the form of Bohmian mechanics is perhaps expressed most succinctly by J.S.
Bell: \cite{Bell Imposs Pilot}
\begin{quotation}
Why is the pilot wave picture [Bohmian mechanics] ignored in text
books? Should it not be taught, not as the only way, but as an
antidote to the prevailing complacency? To show that
vagueness, subjectivity, and indeterminism are not forced upon us by
experimental facts, but by deliberate theoretical choice?
\end{quotation}
\section{Original results to be presented}
The analysis we present will consist of
two main parts: one devoted to contextuality and the theorems of
Gleason, and Kochen and Specker, and the other devoted to nonlocality, Bell's
theorem, and
Schr{\"o}dinger's generalization of the Einstein--Podolsky--Rosen
paradox. We will review
and elaborate on the reasons why these theorems do not disprove hidden
variables, but should instead be regarded as illustrations of
contextuality and nonlocality. Our presentation will link topics that have most
often been addressed in isolation, as we relate the various mathematical
theorems to one another. Our analysis of contextuality and the theorems of
Gleason and Kochen and Specker goes
beyond that of other authors\footnote{Bell \cite{Bell Eclipse, Bell Imposs
Pilot} and Mermin \cite{Mermin Review}.} in the following ways. First, we will
show that the theorems of Gleason, and Kochen and Specker imply a result we
call ``spectral-incompatibility''. Regarded as proofs of this result, the
implications of these theorems toward hidden variables become much more
transparent. Second, through discussion of an experimental procedure first
given by David Albert \cite{Albert}, we show that contextuality provides
an indication
that the role traditionally ascribed to the Hermitian operators in the
description of a quantum system is not tenable\footnote{A work by
Daumer, D{\"u}rr, Goldstein, and Zangh{\'i} (\cite{Martin})
argues for the same conclusion, but does not do so by focussing on
the Albert experiment.}.
After reviewing the EPR paradox, Bell's theorem, and
the argument for quantum nonlocality that follows, we then present what
is perhaps the most distinctive feature of the thesis, as we focus
on Erwin Schr{\"o}dinger's generalization of the EPR paradox. First, we show
that the conclusions of `Schr{\"o}dinger's paradox' may be reached using a more
transparent method than Schr{\"o}dinger's which allows one to relate the form of
the quantum state to the perfect correlations it exhibits. Second, we
use Schr{\"o}dinger's paradox to
derive a broad spectrum of new instances of
quantum nonlocality. Like the proof given by Greenberger, Horne, and
Zeilinger, these `Schr{\"o}dinger nonlocality'
proofs are of a deterministic character, i.e., they are `nonlocality
without inequalities' proofs.
The quantum nonlocality result based on the EPR paradox and Bell's
theorem is, on
the other hand, essentially statistical, since it concerns the average of the
results from a series of quantum measurements.
We demonstrate that just as is true for the GHZ nonlocality, Schr{\"o}dinger
nonlocality differs from the EPR/Bell demonstration in that it
may be experimentally confirmed simply by
observing the `perfect correlations' between the relevant
observables. To verify the EPR/Bell nonlocality result, on the other
hand, one must
observe not only that the perfect correlations exist, but also that
Bell's inequality is violated\footnote{See \cite{Aspect 1, Aspect 2,
Aspect 3}.}.
The test of Schr{\"o}dinger nonlocality is simpler than that of GHZ in
that the perfect correlations in question are between observables of two
subsystems, rather than between those of three or more.
Moreover, some of the Schr{\"o}dinger
nonlocality proofs are stronger than the GHZ proof in that they
involve larger classes of
observables than that addressed by the latter.
\section{Review of the formalism of quantum mechanics \label{formalism}}
\subsection{The state and
its evolution} The formalism of a physical theory typically consists of two
parts. The first describes the representation of the state of the system; the
second, the time evolution of the state. The quantum formalism contains, besides these, a prescription\footnote{It is the existence of such rules in
the quantum mechanical formalism that marks its departure from an objective
theory, as we discussed above.} for the results of measurement.
We first discuss the quantum formalism's representation of state. The quantum
formalism associates with every system a Hilbert space ${\cal H}$ and
represents
the state of the system by a vector $\psi$ in that Hilbert space. The vector
$\psi$ is referred to as the {\em wave function}. This is in contrast to the
classical description of state which for a system of particles is
represented by
the coordinates $\{q_{1},q_{2},q_{3},...\}$ and momenta
$\{p_{1},p_{2},p_{3},...\}$. We shall often use the Dirac notation: a Hilbert
space vector may be written as $\mbox{$| \psi \rangle$}$, and the inner product of two vectors
$|\psi _{1} \rangle, \mid \psi_{2} \rangle$ as $\langle \psi_{1} \mid \psi_{2}
\rangle$. The quantum mechanical state $\psi$ in the case of a non-relativistic
$N$-particle system is given\footnote{In the absence of spin.} by a
complex-valued function on configuration space, $\psi({\bf q})$, which is an
element of $L_{2}$---the Hilbert space of square-integrable functions. Here
${\bf q}$ represents a point $\{q_{1},q_{2},...\}$ in the configuration space
\mbox{$\rm I\!R$}$^{3N}$. The fact that $\psi$ is an element of $L_{2}$ implies
\begin{equation}
\mbox{$\langle \psi | $} \psi \rangle = \mid \mid \psi \mid \mid^{2}=\int \mbox{ {\bf dq} } \psi
^{*} ({\bf q}) \psi({\bf q}) < \infty \label{eq:L2}
\eeq
where
$\mbox{{\bf dq}}=(\mbox{dq}_{1} \mbox{dq}_{2}...)$. The physical state
is defined
only up to a multiplicative constant, i.e., $c \mbox{$| \psi \rangle$}$ and $\mbox{$| \psi \rangle$}$
(where $c \neq
0$) represent the same state. We may therefore choose to normalize $\psi$ so
that \ref{eq:L2} becomes
\begin{equation}
\langle \psi \mid \psi \rangle=1
\label{eq:psi-norm} \mbox{.}
\eeq
The time evolution of the quantum mechanical wave function is governed by
\linebreak
Schr{\"o}dinger's equation:
\begin{equation}
i\hbar\frac{\partial \psi }{
\partial t} = H \psi
\mbox{,}
\eeq
where $H$ is an operator whose form depends
on the nature of the system and in particular on whether or not it is
relativistic. To indicate its dependence on time, we write the wave function as
$\psi_{t}$. For the case of a non-relativistic system, and in the absence of
spin, the Hamiltonian takes the form
\begin{equation}
H=-\frac{1}{2} \hbar ^{2} \sum_{j}
\frac{\nabla_{j}^{2}}{m_{j}} + V({\bf q})
\eeq
where $V({\bf q})$ is the
potential energy\footnote{We ignore here the possibility of an
external magnetic
field.} of the system. With the time evolution so specified, $\psi_{t}$ may be
determined from its form $\psi_{t_{0}}$ at some previous time.
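As a small illustration of this determinism (a sketch added for
concreteness, not part of the original text), the following Python
fragment evolves the state of a two-level stand-in system, for which
$\psi_{t}=e^{-iH(t-t_{0})/\hbar}\,\psi_{t_{0}}$ can be computed by
diagonalizing $H$; only the \texttt{numpy} library is assumed, and the
particular matrix $H$ is an arbitrary choice:
\begin{verbatim}
import numpy as np

hbar = 1.0

# A finite-dimensional stand-in for the Hamiltonian (the H above acts on L2);
# a two-level example suffices to illustrate that psi_t is fixed by psi_t0
H = np.array([[1.0, 0.3], [0.3, -1.0]], dtype=complex)

def evolve(psi0, t):
    """psi_t = exp(-i H t / hbar) psi_t0, via the eigendecomposition of H."""
    E, V = np.linalg.eigh(H)
    return V @ (np.exp(-1j * E * t / hbar) * (V.conj().T @ psi0))

psi_t0 = np.array([1.0, 0.0], dtype=complex)
psi_t = evolve(psi_t0, 2.5)
print(np.vdot(psi_t, psi_t).real)   # 1.0: the normalization is preserved
\end{verbatim}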
\subsection{Rules of measurement} As stated above, the description of the state
and its evolution does not constitute the entire quantum formalism. The wave
function provides only a formal description and does not by itself make contact
with the properties of the system. Using {\em only} the wave function and its
evolution, we cannot make predictions about the typical systems in which we are
interested, such as the electrons in an atom, the conduction electrons of a
metal, and photons of light. The connection of the wave function to
any physical
properties is made through the rules of measurement. Because the physical
properties in quantum theory are defined through measurement, or observation,
they are referred to as `observables'. The quantum formalism represents the
observables by Hermitian operators\footnote{In the following
description, we use
the terms observable and operator interchangeably.} on the
system Hilbert space.
Associated with any observable $O$ is a set of {\em eigenvectors} and {\em
eigenvalues}. These quantities are defined by the relationship \begin{equation} O
\mid \phi \rangle = \mu \mid \phi \rangle \label{eq:Eigenvalue} \mbox{,}
\eeq where $\mu$ is a real constant. We refer to $| \phi \rangle$ as
the eigenvector corresponding to the eigenvalue $\mu$. We label the set of
eigenvalues so defined as $\{\mu_{a}\}$. To each member of this set, there
corresponds a set of eigenvectors, all of which are elements of a {\em
subspace} of ${\cal H}$, sometimes referred to as the {\em eigenspace}
belonging
to $\mu_{a}$. We label this subspace as ${\cal H}_{a}$. Any two such distinct
subspaces are orthogonal, i.e., every vector of ${\cal H}_{a}$ is orthogonal to
every vector of ${\cal H}_{b}$ if $\mu_{a} \neq \mu_{b}$.
It is important to develop explicitly the example of a {\em commuting set} of
observables $(O^{1}, O^{2},...)$ i.e., where each pair $( O^{i}, O^{j})$ of
observables has commutator zero:
\begin{equation} [O^{i}, O^{j}]=
O^{i} O^{j}- O^{j} O^{i}=0 \eeq For this case, we have a series of
relationships of the form \ref{eq:Eigenvalue} \begin{eqnarray} O^{i} \mbox{$| \phi \rangle$} & = &
\mu^{i} \mbox{$| \phi \rangle$} \label{eq:Joint-Eig} \\ i & = & 1,2,... \nonumber \end{eqnarray}
defining the set of {\em simultaneous} eigenvalues and eigenvectors, or {\em
joint-eigenvalues} and {\em joint-eigenvectors}. Note that \ref{eq:Eigenvalue}
may be regarded as a vector representation of the set of equations
\ref{eq:Joint-Eig} where $O=(O^{1},O^{2},\ldots)$ and $\mbox{\boldmath $\mu$}
=(\mu^{1},\mu^{2},\ldots)$ are seen respectively as ordered sets of operators
and numbers. We use the same symbol, now in boldface
($\mbox{\boldmath $\mu$}$), to denote a joint-eigenvalue as was used to
designate an eigenvalue; we emphasize that for a commuting set,
$\mbox{\boldmath $\mu$}$ refers to an {\em ordered set} of numbers.
joint-eigenvectors is similar to that between the eigenvalues and eigenvectors
discussed above; to each $\mbox{\boldmath $\mu$}_{a}$ there corresponds a set of
joint-eigenvectors
forming a subspace of ${\cal H}$ which we refer to as the {\em
joint-eigenspace}
belonging to $\mbox{\boldmath $\mu$}_{a}$.
We denote by $P_{a}$ the projection operator onto the eigenspace belonging to
$\mu_{a}$. It will be useful for our later discussion to express
$P_{a}$ in terms
of an orthonormal basis of ${\cal H}_{a}$. If $\{ \phi_{k} \}$ is such a basis,
we have \begin{equation} P_{a}= \sum_{k} \mid \phi_{k} \rangle \langle \phi_{k}
\mid \mbox{,} \label{eq:Psum} \eeq which means that the operator which
projects onto a subspace is equal to the sum of the projections onto the
one-dimensional spaces defined by any basis of the subspace. If we label the
eigenvalues of the observable $O$ as $\mu_{a}$ then $O$ is represented in terms
of the $P_{a}$ by \begin{equation} O=\sum_{a}\mu_{a} P_{a} \label{eq:Dirac}
\mbox{.} \eeq
Having developed these quantities, we now discuss the rules of measurement. The
first rule concerns the possible outcomes of a measurement and it states that
they are restricted to the eigenvalues $\{\mu_{a}\}$ for the measurement of a
single observable\footnote{A measurement may be classified either as
{\em ideal}
or {\em non-ideal}. Unless specifically stated, it will be assumed whenever we
refer to a measurement process, what is said will apply just as well to {\em
either} of these situations.} $O$ or {\em joint}-eigenvalues $\{\mbox{\boldmath $\mu$}_{a}\}$ for
measurement of a commuting set of observables $(O^{1},O^{2},...)$.
The second rule
provides the probability of the measurement result equaling one particular
eigenvalue or joint-eigenvalue: \begin{eqnarray} P(O=\mu_{a}) & = & \mbox{$\langle \psi | $} P_{a}
\mbox{$| \psi \rangle$} \label{eq:Prob} \\ P((O^{1},O^{2},...) = \mbox{\boldmath $\mu$}_{a}) & = & \mbox{$\langle \psi | $} P_{a} \mbox{$| \psi \rangle$}
\nonumber \end{eqnarray} where the former refers to the measurement of a single
observable and the latter to a commuting set. As a consequence of the
former, the
expectation value for the result of a measurement of $O$ is given by \begin{equation}
E(O)=\sum_{a} \mu_{a} \mbox{$\langle \psi | $} P_{a} \mbox{$| \psi \rangle$} = \mbox{$\langle \psi | $} O \mbox{$| \psi \rangle$} \label{eq: Exp} \eeq
where the last equality follows from \ref{eq:Dirac}.
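The projector machinery above lends itself to a compact numerical
illustration. The following Python sketch (an added example, assuming
only \texttt{numpy}; the particular observable and state are arbitrary
choices) builds the eigenspace projectors $P_{a}$ of a degenerate
observable, verifies \ref{eq:Dirac}, and evaluates the probabilities
\ref{eq:Prob} and the corresponding expectation value:
\begin{verbatim}
import numpy as np

# A degenerate Hermitian observable and a normalized state on a
# 3-dimensional Hilbert space (arbitrary illustrative choices)
O = np.diag([2.0, 2.0, 5.0]).astype(complex)
psi = np.array([1.0, 1.0, 1.0], dtype=complex) / np.sqrt(3)

# Spectral data: eigenvalues mu_a and projectors P_a onto the eigenspaces
mus, vecs = np.linalg.eigh(O)
projectors = {}
for mu in np.unique(np.round(mus, 10)):
    cols = vecs[:, np.isclose(mus, mu)]       # orthonormal basis of H_a
    projectors[mu] = cols @ cols.conj().T     # P_a = sum_k |phi_k><phi_k|

# O = sum_a mu_a P_a  (eq:Dirac)
assert np.allclose(sum(mu * P for mu, P in projectors.items()), O)

# P(O = mu_a) = <psi|P_a|psi> and E(O) = sum_a mu_a <psi|P_a|psi> = <psi|O|psi>
probs = {mu: np.vdot(psi, P @ psi).real for mu, P in projectors.items()}
print(probs)                                  # approx {2.0: 2/3, 5.0: 1/3}
E = sum(mu * p for mu, p in probs.items())
print(E, np.vdot(psi, O @ psi).real)          # both equal 3.0
\end{verbatim}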
The third rule governs the effect of measurement on the system's wave function.
It is here that the measurement is governed by a different rule depending on
whether one performs an ideal or non-ideal measurement. An ideal measurement is
defined as one for which the wave function's form after measurement is given by
the (normalized) projection of $\psi$ onto the eigenspace ${\cal H}_{a}$ of the
measurement result ${\bf \mu}_{a}$ \begin{equation} P_{a} \psi / \mid \mid P_{a} \psi \mid
\mid \label{eq:NIC} \eeq The case of a non-ideal measurement arises when an ideal
measurement of a commuting set of observables $(O^{1}, O^{2},...)$ is regarded
as
a measurement of an {\em individual} member of the set. An ideal measurement of
the {\em set} of observables leaves the wave function as the projection of
$\psi$ onto their {\em joint}-eigenspace. For an
arbitrary vector $\psi$, the
projection onto the eigenspace of an individual
member of the set does not generally equal the projection onto the set's
joint-eigenspace. Therefore an ideal measurement of a commuting set cannot be
an ideal measurement of an individual member of the set. We refer to the
procedure as a
{\em non-ideal} measurement of the observable. This completes the presentation
of
the general rules of measurement. We discuss below two special cases for which
these rules reduce to a somewhat simpler form.
The first special case is that of a {\em non-degenerate} observable. The
eigenspaces of such an observable are one-dimensional and are
therefore spanned by a single normalized vector called an eigenvector. We label
the eigenvector corresponding to eigenvalue $\mu_{a}$ as $|a\rangle$. The operator
which projects onto such a one-dimensional space
is written as: \begin{equation} P_{a} = \mid
a \rangle \langle a \mid \mbox{.} \eeq It is easy to see that the projection
given in \ref{eq:Psum} reduces to this form.
The form of the observable given in \ref{eq:Dirac} then reduces to:
\begin{equation} O=\sum_{a} \mu_{a} \mid a \rangle \langle a \mid \mbox{.}
\eeq The probability of a measurement result being equal to $\mu_{a}$
is then \begin{equation} P(O=\mu_{a})= | \langle a \mid \psi \rangle |^{2}
\mbox{,} \label{eq:ProbNon} \eeq in which case the wave function
subsequent to the measurement is given by $| a \rangle$. These rules
are perhaps the most familiar ones since the non-degenerate case is usually
introduced first in presentations of quantum theory.
Finally, there is a case
for which the measurement rules are essentially the same as these, and this is
that of a set of commuting observables which form a {\em complete set}. The
joint-eigenspaces of a complete set
are one-dimensional. The rules governing the probability of the measurement
result $\mbox{\boldmath $\mu$}_{i}$ and the effect of measurement on the wave function are
perfectly analogous to those of the non-degenerate observable. This completes
our discussion of the rules of measurement.
\section{Von Neumann's theorem and hidden variables \label{Von}}
\subsection{Introduction \label{Vint}} The quantum formalism contains features
which may be considered objectionable\footnote{J.S. Bell goes further
than this:
he refers to quantum mechanics as {\em unprofessional} (\cite{Bell
In Bohm} and \cite[p. 45]{Ghost
Interviews}) in its lack of clarity.} by some. These are its {\em subjectivity}
and {\em indeterminism}. The aim of the development of a hidden variables
theory\footnote{See the discussion of Bohmian mechanics above. The great
success of this theory is that it explains quantum phenomena {\em without} such
features. The general issue of hidden variables is, of course, discussed in
several references. Bell's work is the most definitive: see \cite{Bell Eclipse,
Determine Bell, Bell Imposs Pilot} in \cite{Speakable}. A recent review was
published by N.D. Mermin, who has done much to
popularize Bell's theorem through articles in {\em Physics Today} and through
popular lectures. Discussions may be also be found in Bohm
\cite{Bohmian Mechanics, Bohm 1993 Book}, Belinfante \cite{1974
Belinfante Book}, Hughes \cite{Hughes Simple Math} and Jammer \cite{Jammer
Philosophy}.} is to give a formalism which, while being empirically equivalent
to
the quantum formalism, does not possess these features. In the present section
we
shall present and discuss one of the earliest works to address the hidden
variables question, which is the 1932 analysis of John von Neumann\footnote{The
original work is \cite{von Neumann}. Discussions of von Neumann's hidden
variables analysis may be found within \cite{Albertson, Bell Eclipse, Bell
Imposs Pilot, Jauch-Piron}}. We shall also review and elaborate on J.S. Bell's
analysis \cite{Bell Eclipse} of this work, in which he made clear its
limitations.
Von Neumann's hidden variables analysis appeared in his now classic book {\bf
Mathematical Foundations of Quantum Mechanics}. This book is notable
both for its exposition of the mathematical structure of quantum theory,
and as one
of the earliest works\footnote{Within chapter $4$ of the present
work, we shall discuss a 1935 analysis by Erwin Schr{\"o}dinger \cite{Present
Sit}. This is the paper in which the `Schr{\"o}dinger's cat' paradox first
appeared, but it is not generally appreciated that it contains other results of
equal or perhaps greater significance, such as Schr{\"o}dinger's generalization
of the Einstein--Podolsky--Rosen paradox. We believe that this remarkable paper
could have done much to advance the study of the foundations of quantum
mechanics, had these latter features been more widely appreciated.}
to
systematically address both the hidden variables issue and the measurement
problem. The quantum formalism presents us with two different types of state
function evolution: that given by the Schr{\"o}dinger equation and that which
occurs during a measurement. The latter evolution appears in the formal rule
given above in equation \ref{eq:NIC}. The measurement problem is the
problem of the
reconciliation of these two types of evolution.
In his analysis of the hidden variables problem, von Neumann proved a
mathematical result now known as von Neumann's theorem and then argued
that this theorem implied the very strong conclusion that no hidden
variables theory can provide empirical agreement with quantum
mechanics (preface, pp.~ix--x): ``\ldots such an explanation (by `hidden
parameters') is incompatible with certain qualitative fundamental
postulates of quantum mechanics.'' The author further states:
\cite[p. 324]{von Neumann} ``It should be noted that we need not go
further into the mechanism of the `hidden parameters,' since we now
know that the established results of quantum mechanics can never be
re-derived with their help.'' The first concrete
demonstration\footnote{See Jammer's book
\cite[page 272]{Jammer Philosophy} for a discussion of an early work by Grete
Hermann that addresses the impact of von Neumann's theorem.} that this
claim did
not follow was given in 1952 when David Bohm constructed a viable theory of
hidden variables \cite{Bohmian Mechanics}. Then in 1966, J.S. Bell \cite{Bell
Eclipse} analyzed von Neumann's argument against hidden variables and showed it
to be in error. In this section, we begin by discussing an essential
concept of von Neumann's analysis: the state representation of
a hidden variables theory. We then present von Neumann's theorem and
no-hidden-variables argument. Finally, we show where the flaw in this argument
lies.
The analysis of von Neumann is concerned with the description of the
{\em state}
of a system, and the question of the {\em incompleteness} of the quantum
formalism's description. The notion of the incompleteness of the quantum
mechanical description was particularly emphasized by Einstein, as we noted in
section \ref{Bohm}. The famous Einstein--Podolsky--Rosen paper was
designed as a
proof of such incompleteness and the authors concluded this work with the
following statement: \cite{EPR}
``While we have thus shown that the wave function does not provide a
complete description of the physical reality, we left open the question of
whether or not such a description exists. We believe, however, that such a
theory is possible.''
The hidden variables program, which is an endeavor to
supplement the state description, is apparently exactly the type of program
Einstein was advocating. A complete state description might be
constructed to remove some of the objectionable features of the quantum
theoretical description.
The particular issue which von Neumann's analysis addressed was the
following: is it possible to restore {\em determinism} to the
description of physical systems by the introduction of hidden
variables into the description of the state of a system. The quantum
formalism's state representation given by $\psi$ does not generally
permit deterministic predictions regarding the values of the physical
quantities, i.e. the observables. Thus results obtained from
performance of measurements on systems with {\em identical} state
representations $\psi$ may be expected to {\em vary}. (The statistical
quantity called the {\em dispersion} is used to describe this
variation quantitatively.) While it generally does not provide
predictions for each individual measurement of an observable $O$, the
quantum formalism does give a prediction for its average or {\em
expectation} value\footnote{When generalized to the case of a mixed
state, this becomes
\begin{equation}
E(O)=\mbox{Tr}(UO)
\label{eq:Eqfms}
\eeq
where $U$ is a positive operator with the property $\mbox{Tr}(U)=1$.
Here $U$ is known as the ``density matrix''. See for example,
\cite[p. 378]{Schiff}}:
\begin{equation}
E(O)=\langle \psi \mid O \mid \psi \rangle \label{eq:Eqf}
\mbox{.}
\eeq
Von Neumann's analysis addresses the question of whether the lack of
determinism in the quantum formalism may be ascribed to the fact that
the state description as given by $\psi$ is incomplete. If this were
true, then the complete description of state---consisting of both
$\psi$ and an additional parameter we call $\mbox{$\lambda$}$---should allow one
to make predictions regarding individual measurement results for each
observable. Note that such predictability can be expressed
mathematically by stating that for every $\psi$ and $\mbox{$\lambda$}$, there must
exist a ``value map'' function, i.e. a mathematical function assigning
to each observable its value. We represent such a value map function
by the expression $V^{\psi}_{\mbox{$\lambda$}}(O)$. Von Neumann referred to a
hypothetical state described by the parameters $\psi$ and $\mbox{$\lambda$}$ as a
``dispersion-free state'', since results obtained from measurements on
systems with identical state representations in $\psi$ and $\mbox{$\lambda$}$ are
expected to be identical and therefore exhibit no dispersion.
Von Neumann's theorem is concerned with the general form taken by a
function $E(O)$, which assigns to each observable its expectation
value. The function addressed by the theorem is considered as
being of sufficient generality that the expectation function of
quantum theory, or of any empirically equivalent theory must assume the
form von Neumann derived. In the case of quantum theory, $E(O)$ should take
the form
of the quantum expectation formula \ref{eq:Eqfms}. In the case of a
dispersion-free state, $E(O)$ should return the definite value of each
observable, i.e., it should itself be a value map function. When one analyzes the form of
$E(O)$ developed in the theorem, it is easy to see that it cannot be a
function of the latter type. Von Neumann went on to conclude from this
that no theory involving dispersion-free states can agree with quantum
mechanics. However, since the theorem places an unreasonable
restriction on the function $E(O)$, this conclusion does {\em not}
follow.
\subsection{Von Neumann's theorem}
The assumptions regarding the function $E(O)$ are as follows. First,
the value $E$ assigns to the `identity observable' ${\bf 1}$ is equal
to unity: \begin{equation} E({\bf 1}) = 1 \label{eq:1} \eeq The identity
observable is the projection operator associated with the entire
system Hilbert space. All vectors are eigenvectors of ${\bf 1}$ of
eigenvalue $1$. The second assumption is that $E$ of any real-linear
combination of observables is the same linear combination of the
values $E$ assigns to each individual observable: \begin{equation} E(aA+bB+...) =
aE(A)+bE(B)+... \label{eq:Silly}
\eeq
where $(a,b,...)$ are real numbers and $(A,B,...)$ are
observables. Finally, it is assumed that $E$ of any projection
operator $P$ must be non-negative:
\begin{equation}
E(P) \geq 0 . \label{eq:Pos}
\eeq
For
example, in the case of the value map function $V^{\psi}_{\mbox{$\lambda$}}$, $P$
must be assigned
either $1$ or $0$, since these are its possible values. According to
the theorem, these premises imply that $E(O)$ must be
given by the form
\begin{equation}
E(O)=\mbox{Tr}(UO)
\eeq
where $U$ is a positive operator with the property $\mbox{Tr}(U)=1$.
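Before turning to the demonstration, it may help to note (as an added
numerical aside, assuming only \texttt{numpy}) that the quantum
expectation formula \ref{eq:Eqf} is indeed of this derived form: for a
pure state $\psi$, the operator $U=|\psi\rangle\langle\psi|$ is positive
with unit trace, and $\langle\psi|O|\psi\rangle=\mbox{Tr}(UO)$. A short
check:
\begin{verbatim}
import numpy as np

# An arbitrary normalized pure state and an observable on a 2-dimensional space
psi = np.array([np.cos(0.4), np.sin(0.4) * np.exp(0.3j)])
O = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)

# U = |psi><psi| is positive with Tr(U) = 1 ...
U = np.outer(psi, psi.conj())
print(np.trace(U).real, np.linalg.eigvalsh(U))   # 1.0 and eigenvalues >= 0

# ... and the quantum expectation <psi|O|psi> equals the trace form Tr(U O)
print(np.vdot(psi, O @ psi).real, np.trace(U @ O).real)   # equal
\end{verbatim}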
The demonstration\footnote{A proof of the theorem may be found in von Neumann's
original work \cite{von Neumann}. Albertson presented a simplification of this
proof in 1961 \cite{Albertson}. What we present here is a further
simplification.} of this conclusion is straightforward. We begin by noting that
any operator $O$ may be expressed in terms of two Hermitian operators.
Define $A$ and $B$ by the relationships $A=\frac{1}{2}(O+O^{\dag})$ and
$B= \frac{1}{2i}(O-O^{\dag})$, where $O^{\dag}$ is the Hermitian conjugate of
$O$. Then it is easily seen that $A$ and $B$ are Hermitian and that
\begin{equation}
O=A+iB
\mbox{.}
\eeq
We define the function $E^{*}(O)$
by
\begin{equation}
E^{*}(O)=E(A)+iE(B) \label{eq:E*Def}
\mbox{,}
\eeq
where $E(O)$ is von Neumann's $E(O)$, and $A$ and $B$ are defined as
above. From the equations \ref{eq:E*Def} and \ref{eq:Silly} we have
that $E^{*}(O)$ has the property of complex linearity. Note that
$E^{*}(O)$ is a generalization of von Neumann's $E(O)$: the latter is
a {\em real}-linear function on the Hermitian operators, while the
former is a {\em complex}-linear function on {\em all} operators. The
general form of $E^{*}$ will now be investigated for the case of a
finite-dimensional operator expressed as a matrix in terms of some
orthonormal basis. We write the operator $O$ in the form
\begin{equation}
O=\sum_{m,n} |m\rangle \langle m | O | n \rangle \langle n |
\mbox{,}
\eeq
where the sums over $m,n$ are finite sums. This form of $O$ is a linear
combination of the operators $|m\rangle \langle n|$, so that the complex linearity
of $E^{*}$ implies that
\begin{equation}
E^{*}(O)=\sum_{m,n} \langle m | O | n \rangle E^{*}(|m\rangle \langle n|)
\label{eq:E}
\mbox{.}
\eeq
Now we define the operator $U$ by the relationship
$U_{nm}=E^{*}(|m\rangle \langle n|)$, and \ref{eq:E} becomes
\begin{equation}
E^{*}(O)=\sum_{m,n} O_{mn} U_{nm}
=\sum_{m}(UO)_{mm}=\mbox{Tr}(UO)
\label{eq:Trace*}
\mbox{.}
\eeq
Since von
Neumann's $E(O)$ is a special case of $E^{*}(O)$, \ref{eq:Trace*} implies that
\begin{equation}
E(O)=\mbox{Tr}(UO) \label{eq:Trace} \mbox{.}
\eeq
We now show that $U$ is a positive operator. It is a premise of the
theorem that $E(P) \geq 0$ for any projection operator $P$. Thus we
write $E(P_{\chi}) \geq 0$ where $P_{\chi}$ is a one-dimensional
projection operator onto the vector $\chi$. Using the form of $E$
found in \ref{eq:Trace}, we have\footnote{The equality
$\mbox{Tr}(UP_{\chi})= \langle \chi \mid U \mid \chi \rangle$ in
\ref{eq:easy} is seen as follows. The expression
$\mbox{Tr}(UP_{\chi})$ is independent of the orthonormal basis
${\phi_{n}}$ in terms of which the matrix representations of $U$ and
$P_{\chi}$ are expressed, so that one may choose an orthonormal basis
of which $| \chi\rangle$ itself is a member. Since $P_{\chi} = \mid
\chi \rangle
\langle \chi |$, and $P_{\chi} \mid \phi_{n} \rangle = 0$ for all $|
\phi_{n}
\rangle$ except $| \chi \rangle$, we have
$\mbox{Tr}(UP_{\chi})=\langle \chi | U
| \chi \rangle$.} \begin{equation} \mbox{Tr}(UP_{\chi})=\langle \chi \mid U \mid
\chi \rangle
\geq 0 \mbox{.} \label{eq:easy} \eeq Since $\chi$ is an arbitrary vector, it
follows that $U$ is a positive operator. The relation $\mbox{Tr}(U)=1$
is shown as follows: from the first assumption of the theorem
\ref{eq:1} together with the form of $E$ given by \ref{eq:Trace} we
have $\mbox{Tr}(U)=\mbox{Tr}(U{\bf 1})=1$. This completes the
demonstration of von Neumann's theorem.
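Although it forms no part of von Neumann's argument, the constructive step above is easily checked in a simple case. The short numerical sketch below assumes a two-level system and a particular density matrix $\rho$, defines the expectation functional $E(O)=\mbox{Tr}(\rho O)$, recovers $U$ from $U_{nm}=E^{*}(|m\rangle\langle n|)$, and confirms that $U=\rho$, that $\mbox{Tr}(U)=1$, and that $U$ is positive.
\begin{verbatim}
import numpy as np

# Illustrative two-level example: a fixed density matrix rho defines a
# linear, positive, normalized expectation functional E(O) = Tr(rho O).
rho = np.array([[0.7, 0.2 - 0.1j],
                [0.2 + 0.1j, 0.3]])

def E(O):
    """The expectation functional, realized here as Tr(rho O)."""
    return np.trace(rho @ O)

def E_star(O):
    """The complex-linear extension E*(O) = E(A) + i E(B), with O = A + iB."""
    A = (O + O.conj().T) / 2
    B = (O - O.conj().T) / (2j)
    return E(A) + 1j * E(B)

dim = 2
U = np.zeros((dim, dim), dtype=complex)
for m in range(dim):
    for n in range(dim):
        ket_m = np.eye(dim)[:, [m]]          # column vector |m>
        bra_n = np.eye(dim)[[n], :]          # row vector <n|
        U[n, m] = E_star(ket_m @ bra_n)      # U_{nm} = E*(|m><n|)

assert np.allclose(U, rho)                      # U coincides with rho
assert np.isclose(np.trace(U), 1.0)             # Tr(U) = 1
assert np.all(np.linalg.eigvalsh(U) >= -1e-12)  # U is a positive operator
\end{verbatim}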
\subsection{Von Neumann's impossibility proof \label{Vonimp}} We now
present von
Neumann's argument against the possibility of hidden
variables. Consider the function $E(O)$ evaluated on the
one-dimensional projection operators $P_{\phi}$. For such
projections, we have the relationship \begin{equation} P_{\phi}=P^{2}_{\phi}
\label{eq:Pr} \mbox{.} \eeq As mentioned above, in the case when $E(O)$ is to
correspond to a dispersion free state represented by some $\psi$ and
$\mbox{$\lambda$}$, it must map the observables to their values. We write
$V^{\psi}_{\mbox{$\lambda$}}(O)$ for the value map function corresponding to the
state specified by $\psi$ and $\mbox{$\lambda$}$. Von Neumann noted
that $V^{\psi}_{\mbox{$\lambda$}}(O)$ must obey the relation: \begin{equation}
f(V^{\psi}_{\mbox{$\lambda$}}(O))=V^{\psi}_{\mbox{$\lambda$}}(f(O)) \label{eq:11} \mbox{,}
\eeq where $f$ is any mathematical function. This is easily seen by
noting that the quantity $f(O)$ can be measured by measuring $O$ and
evaluating $f$ of the result. This means that the value of the
observable $f(O)$ will be $f$ of the value of $O$. Thus, if
$V^{\psi}_{\mbox{$\lambda$}}(O)$ maps each observable to a value then we must have
\ref{eq:11}. Hence
$V^{\psi}_{\mbox{$\lambda$}}(P^{2}_{\phi})=(V^{\psi}_{\mbox{$\lambda$}}(P_{\phi}))^{2}$ which together
with \ref{eq:Pr} implies \begin{equation}
V^{\psi}_{\mbox{$\lambda$}}(P_{\phi})=(V^{\psi}_{\mbox{$\lambda$}}(P_{\phi}))^{2} \mbox{.} \eeq This last
relationship implies that $V^{\psi}_{\mbox{$\lambda$}}(P_{\phi})$ must be equal to
either $0$
or $1$.
Recall from the previous section the relation $E(P_{\phi})=\langle
\phi | U |
\phi \rangle$. If $E(O)$ takes the form of a value map function such as
$V^{\psi}_{\mbox{$\lambda$}}(P_{\phi})$, then it follows that the quantity
$\langle \phi | U | \phi \rangle$ is equal to either $0$ or
$1$. Consider the way this quantity depends on vector $|\phi\rangle$. If
we vary $|\phi\rangle$ continuously then $\langle \phi | U | \phi \rangle$
will also vary continuously. If the only possible values of $\langle
\phi | U | \phi \rangle$ are $0$ and $1$, it follows that this
quantity must be a {\em constant}, i.e. we must have either $\langle
\phi | U | \phi \rangle=0 \mbox{ } \forall \phi \in \mbox{${\cal H}$}$, or $\langle
\phi | U |
\phi \rangle=1 \mbox{ } \forall \phi \in \mbox{${\cal H}$}$. If the former holds true, then
it must be that $U$ itself is zero. However, with this and the form
\ref{eq:Trace}, we find that $E({\bf 1})=0$; a result that conflicts
with the theorem
assumption that $E({\bf 1})=1$ (\ref{eq:1}). Similarly, if $\langle
\phi | U |
\phi \rangle=1$ for all $\phi \in {\cal H}$, then it follows that $U={\bf 1}$. This
result also conflicts with the requirement \ref{eq:1}, since it leads
to $E({\bf 1})=\mbox{Tr}(U{\bf 1})=\mbox{Tr}({\bf 1})=n$, where $n$ is the dimensionality of
${\cal H}$.
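To make the continuity step above concrete, consider (as a simple illustration, not part of the original argument) a two-dimensional Hilbert space in which $U$ has eigenvectors $|e_{1}\rangle$ and $|e_{2}\rangle$ with eigenvalues $u_{1}$ and $u_{2}$. For the family of unit vectors $|\phi(t)\rangle=\cos t\,|e_{1}\rangle+\sin t\,|e_{2}\rangle$ we find
\begin{equation}
\langle \phi(t) | U | \phi(t) \rangle = u_{1}\cos^{2}t+u_{2}\sin^{2}t
\mbox{,}
\eeq
which varies continuously from $u_{1}$ to $u_{2}$ as $t$ runs from $0$ to $\pi/2$. If this quantity is to take no values other than $0$ and $1$, then necessarily $u_{1}=u_{2}$, in accord with the conclusion that $\langle\phi|U|\phi\rangle$ must be constant.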
From the result just obtained, one can conclude that any function
$E(O)$ which satisfies the constraints of von Neumann's theorem (see
\ref{eq:1},
\ref{eq:Silly}, and \ref{eq:Pos}) must fail to satisfy the relationship
\ref{eq:11}, and so cannot be a value map function on the
observables\footnote{It
should be noted that the same result may be proven without use of
\ref{eq:11} since the fact that $V^{\psi}_{\mbox{$\lambda$}}(P_{\phi})$ must be
either $0$ or $1$ follows simply from the observation that these are
the eigenvalues of $P_{\phi}$.}. From this result, von Neumann
concluded that it is impossible for a deterministic hidden variables
theory to provide empirical agreement with quantum theory:
\cite[p. 325]{von Neumann}
\begin{quotation} It is therefore not, as is often assumed, a question
of a re-interpretation of quantum mechanics --- the present system of quantum
mechanics would have to be objectively false in order that another description
of the elementary processes than the statistical one be possible.
\end{quotation}
\subsection{Refutation of von Neumann's impossibility proof
\label{RefVon}} While
it is true that the mathematical theorem of von Neumann is a valid one, it is
not
the case that the impossibility of hidden variables follows. The
invalidity of von Neumann's argument against hidden variables was
shown by Bohm's
development \cite{Bohmian Mechanics} of a successful hidden variables
theory (a
{\em counter-example} to von Neumann's proof) and by J.S. Bell's explicit
analysis \cite{Bell Eclipse} of von Neumann's proof. We will now present the
latter.
The no hidden variables demonstration of von Neumann may be regarded as
consisting of two components: a mathematical theorem and an analysis of its
implications toward hidden variables. As we have said, the theorem itself is
correct when regarded as pure mathematics. The flaw lies in the
analysis connecting this
theorem to hidden variables. The conditions prescribed for the
function $E$ are found in equations \ref{eq:1}, \ref{eq:Silly}, and
\ref{eq:Pos}.
The theorem of von Neumann states that from these assumptions follows the
conclusion that the form of $E(O)$ must be given by \ref{eq:Trace}. When one
considers an actual physical situation, it becomes apparent that the second of
the theorem's conditions is not at all a reasonable one. As we shall see, the
departure of this condition from being a reasonable constraint on $E(O)$ is
marked by the case of its application to non-commuting observables.
We wish to demonstrate why the assumption \ref{eq:Silly} is an
unjustified constraint on $E$. To do so, we first examine a particular
case in which such a relationship is reasonable, and then contrast
this with the case for which it is not. The assumption itself calls
for the real-linearity of $E(O)$, i.e. that $E$ must satisfy $E(aA+bB+
\ldots)=aE(A)+bE(B)+\ldots$ for any observables $\{A,B,\ldots\}$ and
real numbers $\{a,b,\ldots\}$. This is, in fact, a sensible requirement
for the case where $\{A,B,\ldots\}$ are {\em commuting}
observables. Suppose, for example, that the observables $O_{1}$, $O_{2}$,
$O_{3}$ form a commuting set and that they obey the relationship
$O_{1}=O_{2}+O_{3}$. We know from the quantum formalism that one may
measure these observables simultaneously and that the
measurement result $\{o_{1},o_{2},o_{3}\}$ must be a member of the
joint-eigenspectrum of the set. By examining the relation
\ref{eq:Joint-Eig} which defines the joint-eigenspectrum, it is easily
seen that any member of the joint eigenspectrum of $O_{1},O_{2},O_{3}$
must satisfy $o_{1}=o_{2}+o_{3}$. This being the case, one might well
expect that the function $E(O)$---which in the case of a dispersion
free state must be a value map $V^{\psi}_{\mbox{$\lambda$}}(O)$ on the
observables---should be required to satisfy
$E(O_{1})=E(O_{2})+E(O_{3})$. On the other hand, suppose we consider
a set $\{O,P,Q\}$ satisfying $O=P+Q$, where the observables $P$ and
$Q$ {\em fail to commute}, i.e. $[P,Q] \neq 0$. It is easy to see that
$O$ commutes with neither $P$ nor $Q$. It is therefore impossible to
perform a simultaneous measurement of any two of these
observables. Hence, measurements of these observables require three
{\em distinct} experimental procedures. This being so, there is {\em
no justification} for the requirement that $E(O)=E(P)+E(Q)$ for such
cases.
As an example, one may consider the case of a
spin $\frac{1}{2}$ particle. Suppose that we wish to examine the spin components
$\sigma_{x}$, $\sigma_{y}$, and $\sigma^{'}$, where
\begin{equation}
\sigma^{'}=\frac{1}{\sqrt{2}}(\sigma_{x}+\sigma_{y})
\label{eq:spins}
\mbox{.}
\eeq
The
measurement procedure for any given component of the spin of a particle is
performed by a suitably oriented Stern-Gerlach magnet. For example,
to measure the $x$-component, the magnet must be oriented along the $x$-axis;
for the $y$-component it must be oriented along the $y$-axis. A measurement of
$\sigma^{'}$ is done using a Stern-Gerlach magnet oriented along an axis
in yet another direction\footnote{It is not difficult to show that $\sigma^{'}$
defined in this way
is the spin component along an axis which is in the $x,y$ plane and lies at
$45^{\circ}$ from both the $x$ and $y$ axes.}. The relationship
\ref{eq:spins} cannot be a reasonable demand to place on the expectation
function $E(O)$ of the observables $\sigma_{x}, \sigma_{y}, \sigma^{'}$, since
these quantities are measured using completely distinct procedures.
Thus von Neumann's hidden variables argument is seen to be an unsound one. That
it is based on an unjustified assumption is sufficient to show this. It should
also be noted that the presence of the real-linearity postulate discussed above
makes von Neumann's entire case against hidden variables into an argument of a
rather trivial character. Examining the above example involving the three spin
components of a spin $\frac{1}{2}$ particle, we find that the eigenvalues of
these observables $\pm \frac{1}{2}$ do not obey \ref{eq:spins}, i.e. \begin{equation} \pm
\frac{1}{2} \neq \frac{1}{\sqrt{2}}(\pm \frac{1}{2} \pm \frac{1}{2}) \eeq Since
$E(O)$ by hypothesis must satisfy \ref{eq:spins}, it cannot map the observables
to their eigenvalues. Hence, with the real-linearity assumption one can almost
immediately `disprove' hidden variables. It is therefore apparent that von
Neumann's case against hidden variables rests essentially upon the arbitrary
requirement that $E(O)$ obey real linearity---an assumption that gives immediate
disagreement with the simple and natural demand that $E$ agree with quantum
mechanics in giving the eigenvalues as the results of measurement.
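The eigenvalue mismatch just described is easy to confirm explicitly. The following minimal numerical sketch (written in units with $\hbar=1$ and using the standard Pauli matrices, conventions chosen here only for illustration) verifies that $\sigma_{x}$, $\sigma_{y}$, and $\sigma^{'}=\frac{1}{\sqrt{2}}(\sigma_{x}+\sigma_{y})$ all have eigenvalues $\pm\frac{1}{2}$, so that no assignment of eigenvalues to these observables can satisfy the linear relation \ref{eq:spins}.
\begin{verbatim}
import numpy as np

# Spin-1/2 component operators (hbar = 1): one half times the Pauli matrices.
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
s_prime = (sx + sy) / np.sqrt(2)     # the component sigma' of eq:spins

for name, op in [("sigma_x", sx), ("sigma_y", sy), ("sigma'", s_prime)]:
    print(name, np.round(np.linalg.eigvalsh(op), 6))   # each gives [-0.5  0.5]

# No choice of eigenvalues v_x, v_y in {+1/2, -1/2} makes (v_x + v_y)/sqrt(2)
# equal to an eigenvalue +1/2 or -1/2 of sigma':
vals = [0.5, -0.5]
assert not any(np.isclose((vx + vy) / np.sqrt(2), v)
               for vx in vals for vy in vals for v in vals)
\end{verbatim}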
\subsection{Summary and further remarks \label{Von Summ}} In our discussion of
von Neumann's no hidden variables argument, we found that the argument may be
regarded as consisting of two components: a theorem which concerns the general
form for an expectation function $E(O)$ on the observables, and a
proof that the
function $E(O)$ so developed cannot be a value map function. Because the assumption
of the real-linearity of $E(O)$ is an unjustified one, the work of von Neumann
does {\em not} imply the general failure of hidden variables. Finally, we noted
that assuming {\em only} the real-linearity of $E$, one may easily arrive at the
conclusion that such a function cannot be a map to the observables'
eigenvalues. Ultimately, the lesson to be learned from von Neumann's
theorem is simply that
there exists no mathematical function from observables to their values
obeying the requirement of real-linearity.
Abner Shimony has reported \cite{Shimony} that Albert Einstein was aware of
both the von Neumann analysis itself and the reason it fails as a hidden
variables impossibility proof. The source of Shimony's report was a personal
communication with Peter G. Bergmann. Bergmann reported that during a
conversation with Einstein
regarding the von Neumann proof, Einstein opened von Neumann's
book to the page where the proof is given, and pointed to the linearity
assumption. He then said that there is no reason why this premise should
hold in a state not acknowledged by quantum mechanics, if the observables are
not simultaneously measurable. Here the ``state not acknowledged by quantum
mechanics'' seems to refer to von Neumann's dispersion-free state, i.e. the
state specified by $\psi$ and $\mbox{$\lambda$}$. It is almost certain that Erwin
Schr{\"o}dinger
would also have realized the error in von Neumann's impossibility proof, since
in
his 1935 paper \cite{Present Sit} he gives a derivation which is equivalent to
von Neumann's theorem in so far as hidden variables are concerned, yet he does
not\footnote{In fact, if Schr{\"o}dinger {\em had} interpreted his result this way,
this---in light of his own generalization of the EPR paradox presented in the
same paper---would have allowed him to reach a further and very striking
conclusion. We shall discuss this in chapter 4.}
arrive at von Neumann's conclusion of the impossibility of hidden variables.
We shall discuss Schr{\"o}dinger's
derivation in the next section. In view of the scarcity\footnote{See
Max Jammer's
book \cite[p. 265]{Jammer Philosophy}. Jammer mentions that not only
was there
very little response to von Neumann's impossibility proof, but the book itself
was never given a review before 1957, with the exception of two brief works by
Bloch and Margenau (see the Jammer book for these references).} of early
responses to von Neumann's proof, it is valuable to have such evidence of
Einstein's and Schr{\"o}dinger's awareness of the argument and its shortcomings. In
addition, this affirms the notion that Einstein regarded the problem
of finding a
complete description of quantum phenomena to be of central importance (see
again the Einstein quotes given in sections \ref{Bohm} and \ref{Vint}).
In our introduction to von Neumann's theorem, we stated that the existence of a
deterministic hidden variables theory led to the result that for each $\psi$ and
$\mbox{$\lambda$}$, there exists a value map on the observables. We represented such value
maps by the expression $V^{\psi}_{\mbox{$\lambda$}}(O)$. If one considers the question of
hidden variables more deeply, it is clear that the agreement of their
predictions with those of quantum mechanics requires a criterion
beyond the existence of a value map for each $\psi$ and $\mbox{$\lambda$}$: it requires
agreement with the {\em statistical} predictions of the quantum formalism.
To make
possible the empirical agreement of quantum theory, in which only statistical
predictions are generally possible, with the deterministic description of a hidden
variables theory, we regard their descriptions of a quantum system in the
following way. The quantum mechanical state given by $\psi$ corresponds to a {\em
statistical ensemble} of the states given by $\psi$ and $\mbox{$\lambda$}$; the members of
the ensemble being described by the same $\psi$, but differing in
$\mbox{$\lambda$}$. The variation in measurement results found for a series of quantum
mechanical systems with identical $\psi$ is to be accounted for by the
variation in parameter $\mbox{$\lambda$}$ among the ensemble of $\psi,\mbox{$\lambda$}$ states.
For precise agreement in this regard, we require that for all $\psi$ and $O$,
the
following relationship must hold:
\begin{equation}
\int^{\infty}_{-\infty} d \mbox{$\lambda$} \rho (\mbox{$\lambda$})
\mbox{$V^{\psi}_{\lda}(O)$} = \mbox{$\langle \psi | $} O \mbox{$| \psi \rangle$} \mbox{,} \label{eq:Expl}
\eeq
where $\rho(\mbox{$\lambda$})$ is the
probability distribution over $\mbox{$\lambda$}$.
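For instance, taking $O$ in \ref{eq:Expl} to be the one-dimensional projection $P_{\phi}$, whose quantum expectation in the state $\psi$ is $|\langle\phi|\psi\rangle|^{2}$, the requirement reads
\begin{equation}
\int^{\infty}_{-\infty} d \mbox{$\lambda$} \, \rho (\mbox{$\lambda$})\,
V^{\psi}_{\mbox{$\lambda$}}(P_{\phi}) = |\langle\phi|\psi\rangle|^{2}
\mbox{,}
\eeq
so that the values $0$ and $1$ assigned to $P_{\phi}$ by the various dispersion free states must average, over the ensemble, to the quantum mechanical probability.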
We have seen from von Neumann's result and from our simple examination of the
spin $\frac{1}{2}$ observables $\sigma_{x},\sigma_{y},
\frac{1}{\sqrt{2}}(\sigma_{x}+\sigma_{y})$,
that it is impossible to develop a linear function mapping the observables to
their
eigenvalues. We shall find also that an impossibility proof
may be developed showing that the criterion of agreement with the quantum
statistics, i.e., the agreement with \ref{eq:Expl}, cannot be met by
functions of the form $\mbox{$V^{\psi}_{\lda}(O)$}$. Bell's theorem is, in fact, such a proof. We will
present the proof of
Bell's result along with some further discussion in chapter $3$.
\subsection{Schr{\"o}dinger's derivation of von Neumann's `impossibility proof'
\label{Schrvon}}
As mentioned above, in
his famous ``cat paradox'' paper \cite{Present Sit}, Schr{\"o}dinger presented an
analysis which, as far as hidden variables are concerned, was essentially
equivalent to the von Neumann proof. Schr{\"o}dinger's study of the problem was
motivated by the results of his generalization of the
Einstein--Podolsky--Rosen paradox. While EPR had concluded definite values on
the position and momentum observables only, Schr{\"o}dinger was able to show that such
values must exist for {\em all} observables of the state considered by EPR.
We will discuss both the original
Einstein--Podolsky--Rosen analysis, and Schr{\"o}dinger's generalization thereof in much
more detail in chapter 4. To probe the possible relationships which might
govern the values assigned to the various observables, Schr{\"o}dinger then gave a brief
analysis
of a system whose Hamiltonian takes the form
\begin{equation} H=p^{2}+a^{2}q^{2}
\mbox{.}
\label{eq:HrO}
\eeq
We are aware from the well-known solution of the harmonic
oscillator problem, that this Hamiltonian's eigenvalues are given by the set
$\{a
\hbar, 3a \hbar,5a \hbar, 7a \hbar, \ldots \}$. Consider a mapping $V(O)$ from
observables to values. If we require that the assignments $V$ makes to the
observables $H,p,q$ satisfy \ref{eq:HrO} then we must have \begin{equation}
V(H)=(V(p))^{2}+a^{2}(V(q))^{2} \mbox{,} \label{eq:HrOV} \eeq which, since $V(H)$ must be one of the eigenvalues of $H$, implies \begin{equation}
((V(p))^{2}+ a^{2}(V(q))^{2})/a\hbar= \mbox{an odd integer} \mbox{.}
\eeq This latter relationship cannot generally be satisfied by the
eigenvalues of $q$ and $p$---each of which may be any real number---and an
arbitrary positive number $a$.
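As a concrete instance (our own illustration rather than Schr{\"o}dinger's), note that $0$ lies in the spectrum of both $p$ and $q$, so the assignments $V(p)=V(q)=0$ are admissible; they give $((V(p))^{2}+a^{2}(V(q))^{2})/a\hbar=0$, which is not an odd integer for any choice of $a$.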
The connection of this result to the von Neumann argument is immediate. In the
discussion of section \ref{Vonimp}, we noted that the value of the observable
$f(O)$ will be $f$ of the value of $O$, so that any value map must satisfy
$f(V(O))=V(f(O))$, as given in equation \ref{eq:11}. Here $f$ may be any
mathematical function. It follows from this that \ref{eq:HrOV} is equivalent to a
relation between the observables $H$, $p^{2}$, and $q^{2}$ given by: \begin{equation}
V(H)=V(p^{2})+a^{2}V(q^{2}) \mbox{.} \label{eq:HrOVE} \eeq With the known
eigenvalues of $H$, this leads to
\begin{equation} (V(p^{2})+a^{2}V(q^{2}))/a\hbar= \mbox{an
odd integer} \mbox{,} \eeq which cannot generally be satisfied by the
eigenvalues
of $q^{2}$ and $p^{2}$---each of which may be any positive real number---and
an arbitrary positive number $a$. We have here another example leading to a
demonstration of von Neumann's result that there is no linear value map on
the observables (recall the example of the spin component
observables $\sigma_{x}$, $\sigma_{y}$, and
$\sigma^{'}=\frac{1}{\sqrt{2}}(\sigma_{x}+\sigma_{y})$ given above). If we
consider the von Neumann function $E(O)$, the real-linearity
assumption requires
it to satisfy \ref{eq:HrOVE}. Therefore, $E(O)$ cannot map the observables to
their eigenvalues. Schr{\"o}dinger did not regard this as proof of the impossibility of
hidden variables, as von Neumann did, but concluded only that the relationships
such as \ref{eq:HrO} will not
necessarily be satisfied by the value assignments made to the
observables constrained by such a relation. Indeed, if Schr{\"o}dinger
{\em had} made von Neumann's error of interpretation, this would contradict
results he had developed previously according to which such hidden variables
must
exist. Such a contradiction would have allowed Schr{\"o}dinger to reach a further
conclusion which is quite striking, and which we shall discuss in chapter 4.
\chapter{Schr{\"o}dinger's paradox and nonlocality}
\section{Schr{\"o}dinger's paradox}
In addition to his seminal role in the development of quantum theory
itself, Erwin Schr{\"o}dinger made important contributions \cite{Present Sit,
Camb1, Camb2} to its interpretation, both in his development of the
paradox of ``Schr{\"o}dinger's cat'' and in his generalization of the
Einstein--Podolsky--Rosen paradox. In this latter analysis, Schr{\"o}dinger
demonstrated that the state considered by EPR leads to a much more
general result than these authors had concluded. According to
the incompleteness argument given by Schr{\"o}dinger, for any
system described by a certain class of quantum state\footnote{A
`maximally entangled state'.}, {\em all} observables
of both particles must be elements of reality.
As was the case with EPR, Schr{\"o}dinger's argument was based on the
existence of perfect correlations exhibited by the state.
In this chapter, we
show that Schr{\"o}dinger's result can be developed in a simpler way which allows
one to determine which observables are perfectly correlated with one
another using the form of the maximally entangled state in question.
Moreover, we derive a wide variety of new quantum nonlocality proofs based on
Schr{\"o}dinger's generalization of EPR in conjunction with
the theorems discussed in chapter 2,
Gleason's theorem, Kochen and Specker's theorem, and Mermin's
theorem. We show that {\em any} such ``spectral-incompatibility'' theorem
(see section \ref{Spec}) when taken together with the Schr{\"o}dinger paradox
provides a proof of quantum nonlocality. Since the
spectral-incompatibility theorems involve the quantum predictions for
individual measurements, the
nonlocality proofs to which they lead are of
a {\em deterministic}, rather than statistical, character. A further
noteworthy feature of this type of quantum nonlocality
is that its experimental confirmation need involve only the verification
of the perfect correlations, with no further observations required.
Before addressing these matters, we discuss the original form of the
EPR paradox.
\subsection{The Einstein--Podolsky--Rosen quantum state \label{EPR orig}}
The Bohm version of the EPR paradox presented in chapter $3$
was addressed to the spin components of two particles represented
by the spin singlet state. While this argument was concerned with perfect
correlations in the spin components, the original form of the EPR paradox
involved perfect correlations of a slightly different
form for the positions and momenta. For a system described by the original
EPR state, we find that
measurements of the positions $x_{1}$ and $x_{2}$ of the two particles
give equal results\footnote{EPR discuss a form in which the difference between
the positions is equal to a {\em constant} they call $d$. The
case we consider differs from this only in that the points of origin
from which the positions of the two particles are measured are
different, so that the quantity $x_{2}-x_{1} +d$ for EPR becomes
$x_{2}-x_{1}$ for our case.}
\footnote{It is clear that the Einstein--Podolsky--Rosen quantum state
\ref{eq:EPR state} is not normalizable. Nevertheless, as we saw in the example
of the spin singlet state, it is possible to carry
out a similar argument for states which {\em can} be normalized. The
`maximally entangled states', which are the subject of the Schr{\"o}dinger
paradox, contain among them a large class of normalizable states,
as we shall see.}, while
measurements of the momenta give results which sum to zero.
To develop the perfect correlations in position and momentum, we first
recall how the spin singlet state leads to the spin correlations.
The spin singlet state takes the form \ref{eq:thetass}:
\begin{equation}
\psi_{ss} = |\!\uparrow \theta,\phi \rangle \,
\otimes \, |\!\downarrow \theta,\phi \rangle
\,\, -\,\, |\!\downarrow \theta, \phi \rangle
\, \otimes \, |\!\uparrow \theta, \phi \rangle
\label{eq:thetasss}
\mbox{,}
\eeq
where we have suppressed the normalization for simplicity.
We have here a sum of
two terms, each being a product of an eigenfunction of
$\sigma^{(1)}_{\theta,\phi}$
with an eigenfunction of $\sigma^{(2)}_{\theta,\phi}$ such that the
corresponding eigenvalues sum to zero. In the first term of
\ref{eq:thetasss} for example, the factors are eigenvectors corresponding
to $\sigma^{(1)}_{\theta,\phi}=\frac{1}{2}$ and $\sigma^{(2)}_{\theta,\phi}
=-\frac{1}{2}$.
With this, it is clear that we will have perfect correlations between
the results of measurement of $\sigma^{(1)}_{\theta,\phi}$ and
$\sigma^{(2)}_{\theta,\phi}$, i.e. measurements of these observables will
give results which sum to zero. By analogy with this result,
the perfect correlations in position and momentum emphasized by Einstein,
Podolsky, and Rosen will follow
if the wave function assumes the forms:
\begin{eqnarray}
\psi & = & \int^{\infty}_{-\infty} dp \mbox{ }| \phi_{-p}\rangle
\otimes |\phi_{p}\rangle
\label{eq:xpcorr}
\\
\psi & = & \int^{\infty}_{-\infty} dx \mbox{ }
|\varphi_{x}\rangle \otimes |\varphi_{x}\rangle
\nonumber
\mbox{.}
\end{eqnarray}
Here $|\phi_{p}\rangle$ is the eigenvector of the momentum
operator corresponding to a momentum of $p$, and $|\varphi_{x}\rangle$ is
the position eigenvector corresponding to a position of $x$. We now show
that the wave function given by the first equation in \ref{eq:xpcorr}:
\begin{equation}
\psi_{EPR}=\frac{1}{2\pi\hbar} \int^{\infty}_{-\infty} dp
\mbox{ } |\phi_{-p}\rangle \otimes |\phi_{p}\rangle
\label{eq:EPR state}
\eeq
assumes also the form of the second equation in \ref{eq:xpcorr} (here and
below we suppress the overall constant in front of \ref{eq:EPR state},
the state being non-normalizable in any case).
To
see this
we expand the first factor in the integrand of
$\psi_{EPR}$ in terms of $|\varphi_{x}\rangle$:
\begin{equation}
\psi_{EPR}=\int^{\infty}_{-\infty} dp
\left(
\int^{\infty}_{-\infty} dx
|\varphi_{x} \rangle \langle \varphi_{x} | \phi_{-p} \rangle
\right)
\otimes |\phi_{p}\rangle
\mbox{.}
\eeq
Using $\langle \varphi_{x} | \phi_{-p} \rangle = \langle \phi_{p} | \varphi_{x} \rangle$,
this becomes
\begin{equation}
\psi_{EPR}=\int^{\infty}_{-\infty} dx | \varphi_{x} \rangle \otimes
\int^{\infty}_{-\infty} dp |\phi_{p} \rangle \langle \phi_{p} | \varphi_{x} \rangle
\mbox{.}
\eeq
Since $\int^{\infty}_{-\infty} dp |\phi_{p} \rangle \langle \phi_{p} |$
is the identity operator, it follows that
\begin{equation}
\psi_{EPR} = \int^{\infty}_{-\infty} dx | \varphi_{x} \rangle \otimes | \varphi_{x} \rangle
\eeq
This is the result we had set out to obtain.
\subsection{Schr{\"o}dinger's generalization \label{parad}}
\subsubsection{Maximal perfect correlations \label{Perf}}
Schr{\"o}dinger's work essentially revealed the full potential of the
quantum state which Einstein, Podolsky, and Rosen had considered.
In his analysis, Schr{\"o}dinger demonstrated that
the perfect correlations the EPR state exhibits are not limited to those
in the positions and momenta. For two particles described by the EPR state,
{\em every} observable of each particle will exhibit perfect correlations
with an observable of the other. What we present here and in subsequent
sections is a simpler way to develop such perfect correlations than that
given by Schr{\"o}dinger.\footnote{See \cite{Camb1}.}
This result may be
seen as follows. The EPR state \ref{eq:EPR state} is rewritten
as\footnote{The reader may object that for this state, the two
particles lie `on top of one another', i.e., the probability
that $x_{1} \neq x_{2}$ is identically $0$. However, in the
following sections, we show that the same conclusions drawn for
the EPR state also follow for a more general class of `maximally
entangled states', of which many do {\em not} restrict the
positions in just such a way.}
\begin{equation}
\langle x_{1},x_{2}| \psi_{EPR} \rangle
= \int^{\infty}_{-\infty} dp_{1}dp_{2}
\langle x_{1},x_{2}|p_{1},p_{2} \rangle \langle p_{1},p_{2}|\psi_{EPR}\rangle
=\delta(x_{2}-x_{1})
\label{eq:delta}
\mbox{.}
\eeq
We may then use
a relation known as the {\em completeness} relation. According to
the completeness relation, if $\{\phi_{n}(x)\}$ is any basis for the
Hilbert space $L_{2}$, then we have
$\sum^{\infty}_{n=1} \phi^{*}_{n}(x_{1}) \phi_{n}(x_{2}) =
\delta(x_{2}-x_{1})$. The EPR state may be rewritten using this
relation as:
\begin{equation}
\psi_{EPR}(x_{1},x_{2})=\sum^{\infty}_{n=1} \phi^{*}_{n}(x_{1}) \phi_{n}(x_{2})
\label{eq:SCHR}
\mbox{,}
\eeq
where $\{\phi_{n}(x)\}$ is an {\em arbitrary} basis of $L_{2}$.
Since the form \ref{eq:SCHR} resembles the spin
singlet form \ref{eq:thetass}, one might anticipate that it will lead to the
existence of perfect correlations between the observables of particles
$1$ and $2$. As we shall see, this is true not only for quantum
systems described by \ref{eq:SCHR}, but also for a more general class
of states as well.
Let us consider a Hermitian operator $A$ (assumed to possess a
discrete spectrum) on the Hilbert space $L_{2}$. At this point,
we postpone defining an observable of the EPR system in terms of $A$---
we simply regard $A$ as an abstract operator. Suppose
that $A$ can be written as
\begin{equation}
A = \sum^{\infty}_{n=1}\mu_{n}\left|\phi_{n}\right>\left<\phi_{n}\right|
\label{eq:eprpc}
\mbox{,}
\eeq
where $\left|\phi_{n}\right>\left<\phi_{n}\right|$
is the one-dimensional projection
operator associated with the vector $|\phi_{n}\rangle$. The eigenvectors
and eigenvalues of $A$ are respectively the sets $\{|\phi_{n}\rangle\}$, and
$\{\mu_{n}\}$. Suppose that another Hermitian operator, called $\tilde{A}$,
is defined by the relationship
\begin{equation}
\tilde{A}_{x,x^{'}} = A^{*}_{x,x^{'}}
\label{eq:EPRstar}
\mbox{,}
\eeq
where $\tilde{A}_{x,x^{'}}$, and $A_{x,x^{'}}$ are respectively the
matrix elements of $\tilde{A}$ and $A$ in the position basis
$\{\varphi_{x}\}$, and the superscript `$*$' denotes complex conjugation.
Note that for any given $A$, the `complex conjugate' operator
$\tilde{A}$ defined by \ref{eq:EPRstar} is {\em unique}.
Writing out $A_{x,x^{'}}$ gives
\begin{eqnarray}
A_{x,x^{'}} & = & \langle\varphi_{x}|
\left( \sum^{\infty}_{n=1} \mu_{n}\left|\phi_{n}\rangle\langle\phi_{n}\right|
\right) | \varphi_{x^{'}}\rangle \\
& = & \sum^{\infty}_{n=1}\mu_{n}
\langle\varphi_{x}|\phi_{n}\rangle
\langle\phi_{n}|\varphi_{x^{'}}\rangle
\nonumber
\mbox{.}
\end{eqnarray}
Using \ref{eq:EPRstar}, we find
\begin{eqnarray}
\tilde{A}_{x,x^{'}} & = & \sum^{\infty}_{n=1}\mu_{n}
\langle\varphi_{x}|\phi_{n}\rangle^{*}
\langle\phi_{n}|\varphi_{x^{'}}\rangle^{*} \label{eq:positel} \\
& = & \sum^{\infty}_{n=1}\mu_{n}
\langle\varphi_{x}|\phi^{*}_{n}\rangle
\langle\phi^{*}_{n}|\varphi_{x^{'}}\rangle
\nonumber
\mbox{,}
\end{eqnarray}
where $|\phi^{*}_{n}\rangle$ is just the Hilbert space vector
corresponding\footnote{More formally, we develop the correspondence of
Hilbert space vectors $|\phi\rangle$ to functions $\phi(x)$ by expanding
$|\phi\rangle$ in terms of the position eigenvectors $|\varphi_{x}\rangle$:
\begin{equation}
|\phi\rangle = \int^{\infty}_{-\infty}dx|\varphi_{x}\rangle \langle\varphi_{x}|\phi\rangle
\mbox{.}
\eeq
The function $\phi(x)$ can then be identified with the inner
product $\langle\varphi_{x}|\phi\rangle$.
Then the Hilbert space vector corresponding to $\phi^{*}(x)$ is just
\begin{eqnarray}
|\phi^{*}\rangle & = &
\int^{\infty}_{-\infty}dx|\varphi_{x}\rangle \langle\varphi_{x}|\phi\rangle^{*} \\
& = & \int^{\infty}_{-\infty}dx|\varphi_{x}\rangle \langle\phi|\varphi_{x}\rangle
\nonumber
\mbox{.}
\end{eqnarray} }
to the function $\phi^{*}_{n}(x)$. From the second equation in
\ref{eq:positel}, it follows that $\tilde{A}$ takes the form
\begin{equation}
\tilde{A}=\sum^{\infty}_{n=1} \mu_{n} |\phi^{*}_{n}\rangle
\langle\phi^{*}_{n}|
\mbox{.}
\eeq
The eigenvectors and eigenvalues of $\tilde{A}$ are respectively the sets
$\{|\phi^{*}_{n}\rangle\}$, and $\{\mu_{n}\}$.
We now consider the observables ${\bf 1}\otimes A$
and $\tilde{A} \otimes {\bf 1}$ on the Hilbert space of the EPR
state, $L_{2}\otimes L_{2}$, where ${\bf 1}$ is the identity operator
on $L_{2}$. Put less formally, we consider $A$ to be an observable of
particle $2$, and $\tilde{A}$ as an observable of particle $1$, just as
$\sigma^{(1)}_{\hat{a}}$ represented `the spin component of
particle 1' in our above discussion of the spin singlet state.
Examining \ref{eq:SCHR}, we see that each term
is a product of an eigenvector of $A$
with an eigenvector of $\tilde{A}$ such that the eigenvalues are equal.
For these observables, we have that $ \psi_{EPR} \mbox{ {\em is an eigenstate
of} } A-\tilde{A}$ {\em of eigenvalue zero}, i.e.,
$(A-\tilde{A})\psi_{EPR}=0$.
Thus, $\psi_{EPR}$ exhibits {\em perfect correlation} between $A$ and
$\tilde{A}$ in that the measurements of $A$ and $\tilde{A}$
give results that are equal.
Recall that the $L_{2}$ basis $\{\phi_{n}(x)\}$ appearing in
\ref{eq:SCHR} is
an arbitrary $L_{2}$ basis. With this, and the fact that
the eigenvalues $\mu_{n}$ chosen for $A$ are arbitrary, it follows
that the operator $A$ can represent {\em any}
observable of particle $2$. We can thus conclude that
for any observable $A$ of
particle $2$, there exists a unique observable $\tilde{A}$
(defined by \ref{eq:EPRstar}) of particle $1$
which exhibits perfect correlations with the former.
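The statement $(A-\tilde{A})\psi_{EPR}=0$, which is shorthand for $(\tilde{A}\otimes{\bf 1}-{\bf 1}\otimes A)\psi_{EPR}=0$, can also be checked numerically in a finite-dimensional analogue. In the sketch below (our own illustration) $L_{2}$ is replaced by a $d$-dimensional Hilbert space and the position basis by the computational basis, so that \ref{eq:EPRstar} becomes entrywise conjugation, $\tilde{A}=A^{*}$, and the analogue of the state \ref{eq:SCHR} is $\sum_{n}|n\rangle\otimes|n\rangle$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d = 4                                    # finite-dimensional stand-in for L_2

# A random Hermitian "observable of particle 2".
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A = (M + M.conj().T) / 2

# Its partner on particle 1: entrywise complex conjugation of the matrix
# elements in the distinguished basis (the analogue of eq:EPRstar).
A_tilde = A.conj()

# Finite-dimensional analogue of the EPR state: sum_n |n> (x) |n>.
psi = sum(np.kron(np.eye(d)[:, n], np.eye(d)[:, n]) for n in range(d))

# Perfect correlation: (A_tilde (x) 1 - 1 (x) A) annihilates psi.
op = np.kron(A_tilde, np.eye(d)) - np.kron(np.eye(d), A)
assert np.allclose(op @ psi, 0)
\end{verbatim}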
We may interchange the roles of particles $1$ and $2$ in the above
argument, if we note that the EPR state is {\em symmetric} in
$x_{1}$ and $x_{2}$. This latter is proved as follows. Since the Dirac
delta function is an even function, we have
\begin{equation}
\delta(x_{2} -x_{1})
=\delta(x_{1} -x_{2})
\label{eq:SCHRsymm}
\mbox{.}
\eeq
Thus the EPR state assumes the form
\begin{equation}
\psi_{EPR}
=\delta(x_{2} -x_{1})
=\delta(x_{1} -x_{2})
=\sum^{\infty} _{n=1} \phi_{n} (x_{1}) \phi^{*}_{n} (x_{2})
\mbox{,}
\eeq
where $\{\phi_{n}(x)\}$ is an arbitrary basis of $L_{2}$, and the second
equality is the completeness relation. This establishes the desired symmetry.
Suppose now that we consider the observables $A \otimes {\bf 1}$ and ${\bf 1}
\otimes \tilde{A}$, i.e., we consider $A$ as an observable of
particle $1$, and $\tilde{A}$ as an observable of particle $2$. Then
from the symmetry of the EPR
state, it follows that we can reverse the roles of the particles in
the above discussion to show that for any observable $A$ of particle $1$,
there exists a unique observable $\tilde{A}$ of particle $2$ defined by
\ref{eq:EPRstar}, which exhibits perfect correlations with the former.
The perfect correlations in position and momentum originally noted by
Einstein, Podolsky and Rosen are a special case of the perfectly
correlated observables $A$ and $\tilde{A}$ given here. EPR found that
the measurement of the positions $x_{1} $ and $x_{2} $ of the two particles
must give results that are {\em equal}. The momenta $p_{1} $ and
$p_{2} $ showed slightly different perfect correlations in which their
measurements give results which sum to zero. To assess
whether the EPR perfect correlations in position and momentum are
consistent with the scheme given above, we derive using \ref{eq:EPRstar} the
forms of $\tilde{x}$ and $\tilde{p}$.
Since the position observable $x$ is diagonal in the position
eigenvectors and has real matrix elements, \ref{eq:EPRstar} implies
that $\tilde{x}=x$. As for the momentum, this operator is
represented in the position basis by the differential operator
$-i\hbar\frac{d}{dx}$. Using \ref{eq:EPRstar} we see that
$\tilde{p}$ is just equal to the {\em negative} of $p$ itself.
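Written out in terms of kernels (a short check of the statements just made), the matrix elements of position and momentum in the position basis are $x_{x,x^{'}}=x\,\delta(x-x^{'})$ and $p_{x,x^{'}}=-i\hbar\,\delta^{'}(x-x^{'})$, so that \ref{eq:EPRstar} gives
\begin{equation}
\tilde{x}_{x,x^{'}}=x\,\delta(x-x^{'})=x_{x,x^{'}}
\mbox{,} \qquad
\tilde{p}_{x,x^{'}}=\left(-i\hbar\,\delta^{'}(x-x^{'})\right)^{*}
=i\hbar\,\delta^{'}(x-x^{'})=-p_{x,x^{'}}
\mbox{.}
\eeq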
Hence the perfect correlations developed in the present section,
according to which $A$ and $\tilde{A}$ are equal, imply that
measurements of the
positions of the two particles must give equal results, whereas
measurements of the
momenta $p_{1} $ and $p_{2} $ must give results which sum to zero.
This is just what EPR had found.
\subsubsection{Incompleteness argument \label{inc}}
The existence of such ubiquitous perfect correlations allows one to
develop an argument demonstrating that all observables of
particle $1$ and all observables of particle $2$ are ``elements of reality'',
i.e., they possess definite values. This argument is similar to the
incompleteness argument given above for the spin singlet EPR case. We consider
the possibility of separate measurements of observables $\tilde{A}$, $A$
being performed respectively on particles $1$ and $2$ of
the EPR state. As in the spin singlet EPR analysis, we assume
locality, so that the properties of each particle must be regarded as
independent of those of its spatially separated partner.
Suppose we
perform an experimental procedure ${\cal E}(\tilde{A})$ which
constitutes a measurement of the observable $\tilde{A}$ of particle $1$.
Since $\tilde{A}$ is perfectly correlated with $A$, the
result we find allows us to predict with certainty the result of any
subsequent experiment ${\cal E}(A)$ measuring $A$ of
particle $2$. For example, if ${\cal E}(\tilde{A})$ gives the result
$\tilde{A}=\mu_{a}$ then
we can predict with certainty that ${\cal E}(A)$ will give
$A=\mu_{a}$. Since we have assumed {\em locality},
$A$ must be an element of reality, i.e., there exists
some definite value $E(A)$.
Furthermore, since the {\em same} number is
predicted no matter what experimental procedure
${\cal E}(A)$ is used in $A$'s measurement, $E(A)$ cannot
depend on the choice of procedure, i.e., $E(A)$ must be
non-contextual.
We saw in the discussion above that for {\em any} observable $A$ of
particle $2$, there exists an observable $\tilde{A}$ of particle $1$
with which the former shows perfect correlation. Hence, the above
argument may be applied to any observable of particle $2$,
and we can therefore conclude that {\em all} observables of particle $2$ are
elements of reality, and that the definite value of each is
non-contextual. Of course, if we consider the set of all observables
of particle $2$, there are some pairs among the set which are
non-commuting.
Since the quantum formalism's description of state allows one to
predict the value of at most one observable of such a pair, we may conclude that this
description of state is incomplete.
Thus, any theoretical
structure which hopes to capture these definite values must contain
a state description which extends that of quantum mechanics.
As we noted above, it is possible to interchange the roles of
the two particles when deriving the perfect correlations. Thus,
to every observable $A$ of particle $1$, there is a unique observable
$\tilde{A}$ of particle $2$ with which the former is perfectly
correlated. This being the case, one can construct a similar
incompleteness argument to show that all observables of particle $1$
possess non-contextual definite values.
Finally, note that if we consider the possibility of an
experiment measuring a {\em commuting
set} of observables of either particle, the quantum theory predicts
that the result is one of the joint-eigenvalues of that set. For
agreement with this prediction, the values $E(O)$ assigned to the
observables of this particle must map each commuting set to a
joint-eigenvalue.
The Schr{\"o}dinger paradox is seen to have the following structure. The analysis
begins with the observation that the EPR state \ref{eq:SCHR} exhibits
maximal perfect
correlations such that for any observable $A$ of either particle,
there is a unique observable $\tilde{A}$ of the other with which $A$
is perfectly correlated.
If we assume locality, then these perfect correlations imply
the existence of a non-contextual value map on all observables of both
particles. These value maps must assign to every commuting set a
joint-eigenvalue.
The key to the above incompleteness argument is that one
can, by measurement of an observable of particle
$2$, predict with certainty the result of any measurement of a
particular observable of particle $1$.
In his presentation of the EPR incompleteness argument (which concerns
position and momentum), Schr{\"o}dinger uses a colorful
analogy to make the situation clear. He imagines particles $1$ and $2$
as being a student and his instructor. The measurement of particle $2$
corresponds to the instructor consulting a textbook to check the answer to an
examination question, and the measurement of particle $1$ to the
response the student gives to this question. Since he always gives
the correct answer, i.e., the same as that in the instructor's textbook,
the student must have known the answer beforehand.
Schr{\"o}dinger presents the situation as follows: \cite{Present Sit}
(emphasis by original author)
\begin{quotation}
Let us focus attention on the system labeled with small letters
$p,q$ and call it for brevity the ``small'' system. Then things stand as
follows. I can direct {\em one} of two questions to the small system,
either that about $q$ or that about $p$. Before doing so I can, if I
choose, procure the answer to {\em one} of these questions by a measurement
on the fully separated other system (which we shall regard as auxiliary
apparatus), or I may take care of this afterwards. My small system, like
a schoolboy under examination, {\em cannot possibly know} whether I have
done this or for which questions, or whether or for which I intend to do
it later. From arbitrarily many pretrials I know that the pupil will
correctly answer the first question I put to him. From that it follows
that in every case, he {\em knows} the answer to {\em both} questions.
\ldots No school principal would judge otherwise \ldots He would not come
to think that his, the teacher's, consulting a textbook first suggests
to the pupil the correct answer, or even, in the cases when the
teacher chooses to consult it only after ensuing answers from the
pupil, that the pupil's answer has changed the text of the notebook
in the pupil's favor.
\end{quotation}
\subsubsection{Generalized form of the EPR state \label{csection}}
Consideration of the structure of the
EPR state \ref{eq:SCHR} suggests the possibility that
a more general class of states might also exhibit maximal
perfect correlations. Before developing this, we first note that
the formal way to write \ref{eq:SCHR} is as a sum in which each term is a
tensor product of a vector of particle $1$'s Hilbert space
with a vector of particle $2$'s Hilbert space:
\begin{equation}
\psi_{EPR} =\sum^{\infty} _{n=1} |\phi_{n}^{*} \rangle \otimes
|\phi_{n} \rangle
\mbox{.}
\eeq
We shall often use this convenient notation in expressing the
maximally entangled states.
The operation of complex conjugation in the first factor of
the tensor product may be regarded as a special
case of a class of operators known as ``anti-unitary
involutions''---operators
which we shall denote by $C$, to suggest complex conjugation. We
will find that any state of the form
\begin{equation}
\psi_{C} =\sum^{\infty} _{n=1} C|\phi_{n} \rangle \otimes |\phi_{n} \rangle
\mbox{,}
\label{eq:SCHRC}
\eeq
will exhibit ubiquitous perfect
correlations leading to the conclusion of definite values on all
observables of both subsystems.
An {\em anti-unitary involution} operation represents the
generalization of complex conjugation from scalars
to vectors. The term
``involution'' refers to any operator $C$ whose square is equal to the
identity operator, i.e. $C^{2}={\bf 1}$.
Anti-unitarity\footnote{This term may appear confusing for the following
reason. It does not refer to an operator which
is ``not unitary'', but instead the prefix ``anti'' refers to
anti-linearity.}
entails two conditions, the first of which is anti-linearity:
\begin{equation}
C(c_{1}|\psi_{1}\rangle+c_{2}|\psi_{2}\rangle+\ldots)
=c^{*}_{1}C|\psi_{1}\rangle+c^{*}_{2}C|\psi_{2}\rangle+\ldots
\mbox{,}
\eeq
where $\{c_{i}\}$ are constants and $\{|\psi_{i}\rangle\}$ are vectors. The
second condition is the anti-linear counterpart of unitarity:
\begin{equation}
\langle C\psi| C\phi\rangle
=\langle\psi|\phi\rangle^{*}
=\langle\phi|\psi\rangle
\mbox{ } \forall \psi,\phi
\mbox{,}
\label{eq:anti-lin unit}
\eeq
which tells us that under the
operation of $C$, inner products are replaced by their complex conjugates.
Note that this property is sufficient to guarantee that if the set
$\{|\phi_{n} \rangle\}$ is a basis, then the vectors
$\{C|\phi_{n} \rangle\}$ in \ref{eq:SCHRC} must also form a basis.
For each anti-unitary involution $C$, there is a special Hilbert
space basis whose elements are invariant under $C$. The operation
of $C$ on any given vector $|\psi\rangle$ can be easily obtained by expanding
the vector in terms of this basis. If $\{\varphi_{n} \}$ is this special
basis then we have
\begin{equation}
C|\psi\rangle=C\sum^{\infty} _{i=1} |\varphi_{i} \rangle\langle \varphi_{i} |\psi\rangle
=\sum^{\infty} _{i=1} |\varphi_{i} \rangle\langle \varphi_{i} |\psi\rangle^{*}
\label{eq:var}
\mbox{.}
\eeq
When one is analyzing any given state of the form \ref{eq:SCHRC},
it is convenient to express the state and observables using this
special basis. The EPR state is a special case of the state
\ref{eq:SCHRC} in which the anti-unitary involution $C$ is such
that the position basis $\{|\varphi_{x} \rangle\}$ plays this
role.
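As another simple example, one anticipating the explicit treatment of section \ref{spinmax}, take the Hilbert space to be the two-dimensional spin space and let the invariant basis be the $\sigma_{z}$ eigenvectors, so that $C|\!\uparrow\rangle=|\!\uparrow\rangle$ and $C|\!\downarrow\rangle=|\!\downarrow\rangle$. Then \ref{eq:var} gives
\begin{equation}
C\left(\alpha|\!\uparrow\rangle+\beta|\!\downarrow\rangle\right)
=\alpha^{*}|\!\uparrow\rangle+\beta^{*}|\!\downarrow\rangle
\mbox{,}
\eeq
so that $C$ acts as complex conjugation of the expansion coefficients in this basis, and one verifies directly that $C^{2}={\bf 1}$ and that $\langle C\psi|C\phi\rangle=\langle\phi|\psi\rangle$.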
The state \ref{eq:SCHRC} shows an invariance similar to that we
developed for the EPR state: the basis $\{|\phi_{n} \rangle\}$ in terms of
which the state is expressed is {\em arbitrary}.
We now develop this result. Note that the expression \ref{eq:SCHRC}
takes the form
\begin{equation}
\psi_{C} = \sum^{\infty}_{n=1} C \left( \sum^{\infty}_{i=1} |\chi_{i} \rangle
\langle \chi_{i}|\phi_{n}\rangle \right)
\otimes \left( \sum^{\infty}_{j=1} |\chi_{j} \rangle
\langle \chi_{j}|\phi_{n}\rangle \right)
\mbox{,}
\eeq
if we expand the vectors in terms of an arbitrary basis $\{\chi_{i}\}$.
Applying the $C$ operation in the first factor and rearranging the
expression, we obtain
\begin{eqnarray}
\psi_{C} & = & \sum^{\infty}_{n=1} \left( \sum^{\infty}_{i=1}
C |\chi_{i} \rangle
\langle \phi_{n} | \chi_{i} \rangle \right)
\otimes \left( \sum^{\infty}_{j=1} |\chi_{j} \rangle
\langle \chi_{j}|\phi_{n}\rangle \right) \label{eq:SCHRexp} \\
& = & \sum^{\infty}_{i=1} \sum^{\infty}_{j=1}
\sum^{\infty}_{n=1} \langle \chi_{j}|\phi_{n}\rangle \langle \phi_{n} | \chi_{i} \rangle
C |\chi_{i} \rangle \otimes |\chi_{j} \rangle \nonumber \\
& = & \sum^{\infty}_{i=1} \sum^{\infty}_{j=1}
\langle \chi_{j}|\left(\sum^{\infty}_{n=1} |\phi_{n}\rangle \langle \phi_{n}| \right)
| \chi_{i} \rangle
C |\chi_{i} \rangle \otimes |\chi_{j} \rangle
\nonumber
\mbox{,}
\end{eqnarray}
where the first equality follows from the anti-linearity of $C$.
Since the expression $\sum^{\infty}_{n=1} |\phi_{n}\rangle \langle \phi_{n}|$ is the
identity operator, the orthonormality of the set $\{\chi_{i}\}$ implies
that
\begin{equation}
\psi_{C} = \sum^{\infty}_{i=1} \sum^{\infty}_{j=1} \delta_{ij}
C |\chi_{i} \rangle \otimes |\chi_{j} \rangle
= \sum^{\infty}_{i=1} C |\chi_{i} \rangle \otimes |\chi_{i} \rangle
\label{eq:SCHdelta}
\mbox{.}
\eeq
Thus, the form of the state \ref{eq:SCHRC} is invariant under
any change of basis, and we have the desired result.
Note then the role played by the properties of anti-linear
unitarity and anti-linearity: from the former followed the result
that the vectors $\{C|\phi_{n}\rangle\}$ form a basis if the
vectors $\{|\phi_{n}\rangle\}$ do so, and from the latter followed the
invariance just shown.
Consider a Hermitian operator $A$ on $L_{2}$
which can be written as
\begin{equation}
A = \sum^{\infty}_{n=1}\mu_{n}\left|\phi_{n}\right>\left<\phi_{n}\right|
\label{eq:cpc}
\mbox{.}
\eeq
Note that $A$'s eigenvectors and eigenvalues are given by the sets
$\{|\phi_{n}\rangle\}$ and $\{\mu_{n}\}$, respectively.
If we define the observable $\tilde{A}$ by the relationship
\begin{equation}
\tilde{A}=CAC^{-1}
\label{eq:cacistar}
\mbox{,}
\eeq
then $\tilde{A}$'s eigenvalues are the same as $A$'s, i.e.,
$\{\mu_{n}\}$, and its eigenvectors are given by
$\{C|\phi_{n}\rangle\}$. To see this, note that
\begin{equation}
CAC^{-1}C|\phi_{n}\rangle = C\mu_{n}|\phi_{n}\rangle=\mu_{n}C|\phi_{n}\rangle
\mbox{,}
\eeq
where the first equality follows from $C^{-1}C={\bf 1}$ and
$A|\phi_{n}\rangle=\mu_{n}|\phi_{n}\rangle$, and the second from the
anti-linearity of $C$ together with the reality of the eigenvalue
$\mu_{n}$. Since $C^{2}={\bf 1}$, it follows
that $C=C^{-1}$, and we may rewrite \ref{eq:cacistar}
as\footnote{Note that to evaluate $\tilde{A}$, one can express
it using its matrix elements with respect to
the invariant basis $\{\varphi_{n}\}$
of $C$. If we evaluate the matrix element $(CAC)_{ij}$, we find
\begin{equation}
\langle\varphi_{i}|CAC\varphi_{j}\rangle
= \langle \varphi_{i}|CA\varphi_{j}\rangle
= \langle\varphi_{i}|A\varphi_{j}\rangle^{*}
\mbox{,}
\label{eq:calc}
\eeq
where the second equality follows from \ref{eq:var}.
From \ref{eq:calc} follows the relationship
\begin{equation}
\tilde{A}_{ij}=A^{*}_{ij}
\label{eq:vstar}
\mbox{,}
\eeq
which is a convenient form one may use to evaluate $\tilde{A}$, as we will
see in section \ref{spinmax}. Note that \ref{eq:vstar} reduces to
\ref{eq:EPRstar} when the invariant basis $\{\varphi_{n}\}$ of $C$ is the
position basis $\{|\varphi_{x}\rangle\}$.}:
\begin{equation}
\tilde{A}=CAC
\label{eq:cacstar}
\mbox{.}
\eeq
Note that for any given $A$, the observable $\tilde{A}$ defined by
\ref{eq:cacstar} is unique.
If we identify $A$ as an observable of subsystem $2$, and $\tilde{A}$
as an observable of subsystem $1$, then examination of the state
\ref{eq:SCHRC} shows that these exhibit {\em perfect correlations}
such that their measurements always yield results that are equal. From
the
invariance of the state, we can then conclude that for {\em any} observable
$A$ of
subsystem $2$, there is a unique observable $\tilde{A}$ of subsystem
$1$ defined
by \ref{eq:cacstar} which exhibits perfect correlations with $A$.
Since, as can easily be proved\footnote{The proof of this result follows
lines similar to those of the invariance proof given above. One
expands the vectors of the state \ref{eq:SCHRC} in terms of the
invariant basis $\{\varphi_{n}\}$. Doing so leads to an expression
similar to the first equality in \ref{eq:SCHdelta}, with
$\varphi_{i}$ and $\varphi_{j}$ replacing $\chi_{i}$ and $\chi_{j}$,
respectively. The Kronecker delta is then replaced using the
relationship
\begin{equation}
\delta_{ij}=\langle\varphi_{i}|
\left(\sum^{\infty}_{n=1}|\phi_{n}\rangle\langle\phi_{n}|\right)|\varphi_{j}\rangle
\mbox{,}
\eeq
and one can then easily develop \ref{eq:SCHRCP}.},
the state \ref{eq:SCHRC} assumes the form
\begin{equation}
\psi_{C} = \sum^{\infty} _{n=1} |\phi_{n} \rangle \otimes C |\phi_{n} \rangle
\label{eq:SCHRCP}
\mbox{,}
\eeq
one may develop the same results with the roles of the subsystems
reversed, i.e., for any observable $A$ of subsystem $1$, there exists
a unique observable $\tilde{A}$ of subsystem $2$ which exhibits
perfect correlations with $A$.
An incompleteness argument similar to that given above can be
given, and one can show the existence of
a value map $E(O)$ on all observables of both subsystems.
\subsubsection{The general form of a maximally entangled state}
States exhibiting ubiquitous perfect correlations are not limited to those
of the form \ref{eq:SCHRC}. If we examine any composite system
whose subsystems are of the same
dimensionality\footnote{The derivations of
this section can be carried
out for an entangled system whose subsystems are either finite or
infinite dimensional. Infinite sums may be substituted for the finite
sums written here to develop the same
results for the infinite dimensional case.}
and which is represented by a
state\footnote{This is equivalent to the form
\begin{equation}
\psi_{ME}=\sum^{N}_{n=1}c_{n}|\psi_{n}\rangle \otimes |\phi_{n}\rangle
\label{eq:MEC}
\mbox{,}
\eeq
where $\{|\psi_{n}\rangle\}$ is any basis of subsystem $1$ and
$\{|\phi_{n}\rangle\}$ is any basis of subsystem $2$ and
$|c_{n}|^{2}=1$ for all $n$.}
\begin{equation}
\psi_{ME}=\sum^{N}_{n=1} |\psi_{n}\rangle \otimes |\phi_{n}\rangle
\label{eq:ME}
\mbox{,}
\eeq
where $\{|\psi_{n}\rangle\}$ is any basis of subsystem $1$ and
$\{|\phi_{n}\rangle\}$ is any basis of subsystem $2$,
we find that each observable of either subsystem
exhibits perfect correlations with some observable of the other
subsystem. To examine the properties of the state
\ref{eq:ME}, we note that it may be
rewritten\footnote{At this point, one may address the objection
that the EPR state \ref{eq:SCHR} makes the positions of the two
particles coincide. For example, one can consider an anti-unitary
operator $U_{d}$ defined by
\begin{equation}
U_{d} | \psi \rangle = \int^{\infty}_{-\infty} dx |\varphi_{x} \rangle \langle \psi |
\varphi_{x+d} \rangle
\mbox{,}
\eeq
where $d$ is an arbitrary constant. Then, in the state \ref{eq:ME2},
the two particles are separated by a distance $d$.}
as
\begin{equation}
\psi_{ME}=\sum^{N}_{n=1} U |\phi_{n}\rangle \otimes |\phi_{n}\rangle
\label{eq:ME2}
\mbox{,}
\eeq
where $U$ is the anti-unitary operator defined by $U|\phi_{n}\rangle
= |\psi_{n}\rangle$. Recall from the above discussion that (anti-linear)
unitarity and anti-linearity are sufficient to guarantee both that
$\{U|\phi_n\rangle\}$ is a basis if $\{|\phi_{n}\rangle\}$ is, and that
the state shows the invariance we require.
We conclude our presentation of
the Schr{\"o}dinger paradox by discussing this general form.
We consider a Hermitian operator $A$ on ${\cal H}_{N}$ which can be
written as
\begin{equation}
A = \sum^{N}_{n=1}\mu_{n}\left|\phi_{n}\right>\left<\phi_{n}\right|
\label{eq:upc}
\mbox{.}
\eeq
We then define the observable $\tilde{A}$
by\footnote{For comparison with the form \ref{eq:EPRstar}, and
\ref{eq:vstar}, we note that if we express the operator $U$ as
$U=C\bar{U}$ with $C$ an anti-unitary involution, and $\bar{U}$ a
unitary matrix, then \ref{eq:uaustar} leads to
\begin{equation}
\tilde{A}_{ij}=(\bar{U}A\bar{U}^{-1})^{*}_{ij}
\mbox{,}
\eeq
where the $ij$ subscript indicates the $ij$th matrix element in terms
of $\{\varphi_{n}\}$, the basis that is invariant under $C$.}
\begin{equation}
\tilde{A}=UAU^{-1}
\label{eq:uaustar}
\mbox{.}
\eeq
For any given $A$, this defines a unique operator $\tilde{A}$.
Using $A|\phi_{n}\rangle=\mu_{n}|\phi_{n}\rangle$ and $UU^{-1}={\bf 1}$, we
have
\begin{equation}
UAU^{-1}U|\phi_{n}\rangle=\mu_{n}U|\phi_{n}\rangle
\mbox{,}
\label{eq:Cosmos}
\eeq
so that the eigenvectors and eigenvalues of $\tilde{A}$ are given by
$\{U|\phi_{n}\rangle\}$ and $\{\mu_{n}\}$, respectively.
Identifying $A$ as an observable of subsystem $2$ and $\tilde{A}$ as an
observable of subsystem $1$, we can see from the form of \ref{eq:ME2}
that these exhibit perfect correlations for such a state.
As in the EPR case and its generalization, the basis-invariance and symmetry
of the state imply that for {\em any} observable $A$ of either
subsystem, there is a unique observable $\tilde{A}$ of the other with
which $A$ is perfectly correlated.
An argument similar to that given for the EPR state leads to
the existence of a non-contextual value map $E(O)$
on all observables of subsystem $1$ and all observables of subsystem $2$.
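As with the EPR state, the perfect correlations expressed by $(\tilde{A}\otimes{\bf 1}-{\bf 1}\otimes A)\psi_{ME}=0$ are easily checked numerically. In the following sketch (an illustration using our own concrete choices) the anti-unitary operator is realized as $U|v\rangle=\bar{U}v^{*}$, i.e., complex conjugation in the computational basis followed by a random unitary $\bar{U}$; with this realization the matrix of $\tilde{A}=UAU^{-1}$ is $\bar{U}A^{*}\bar{U}^{\dag}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N = 4

# Random unitary Ubar; composed with complex conjugation in the computational
# basis it realizes an anti-unitary operator U: U v = Ubar @ conj(v).
M = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
Ubar, _ = np.linalg.qr(M)

# Maximally entangled state: psi_ME = sum_n U|n> (x) |n>  (analogue of eq:ME2).
psi = sum(np.kron(Ubar[:, n], np.eye(N)[:, n]) for n in range(N))

# A random Hermitian observable A of subsystem 2, and its partner
# A_tilde = U A U^{-1} of subsystem 1 (as a matrix: Ubar A^* Ubar^dagger).
H = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
A = (H + H.conj().T) / 2
A_tilde = Ubar @ A.conj() @ Ubar.conj().T

# Perfect correlation: (A_tilde (x) 1 - 1 (x) A) annihilates psi_ME.
op = np.kron(A_tilde, np.eye(N)) - np.kron(np.eye(N), A)
assert np.allclose(op @ psi, 0)
\end{verbatim}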
\subsection{The spin singlet state as a maximally entangled state
\label{spinmax}}
It is instructive to make an explicit
comparison of the properties given above for a general maximally entangled
state with those of the spin singlet state.
The spin singlet state takes the form
\begin{equation}
\psi_{ss}=|\!\uparrow \theta,\phi \rangle \, \otimes \,
|\!\downarrow \theta,\phi \rangle
\,\, -\,\, |\!\downarrow \theta, \phi \rangle \, \otimes
\, |\!\uparrow \theta, \phi \rangle
\mbox{,}
\label{eq:MEss2}
\eeq
when expressed in terms of the eigenvectors of
$\sigma_{\theta,\phi}$. We have suppressed the normalization
for simplicity. Inspection of this form leads one to
conclude that the state must exhibit correlations
such that the results of measurement of $\sigma^{(1)}_{\theta,\phi}$ and
$\sigma^{(2)}_{\theta,\phi}$ will give results which sum to zero. These
might be better named ``perfect anti-correlations'', since the
measurement results are the negatives of one another.
On examining the form given in \ref{eq:MEC},
one can see immediately that the spin singlet state \ref{eq:MEss2} is
a maximally entangled state.
This being the case, it follows that the
spin singlet state must assume the form \ref{eq:ME2}. To see this,
we write the anti-unitary operator $U=C\bar{U}$, where $\bar{U}$
is a unitary operator given by
$\bar{U} = \left(
\begin{array}{rr}
0 & 1 \\
-1 & 0
\end{array} \right)$
and $C$ is an anti-unitary involution
under which the $\sigma_{z}$ eigenvectors are
invariant, i.e.,
$C|\!\uparrow\rangle=|\!\uparrow\rangle$ and $C|\!\downarrow\rangle=|\!\downarrow\rangle$.
Using $|\!\uparrow\rangle,|\!\downarrow\rangle$ as the basis in the maximally entangled
state expression \ref{eq:ME2}, we obtain
\begin{equation}
\psi=U|\!\uparrow\rangle\otimes|\!\uparrow\rangle+U|\!\downarrow\rangle\otimes|\!\downarrow\rangle
\mbox{.}
\eeq
This
reduces\footnote{To within an overall minus sign, which can be ignored.}
to the familiar spin singlet form
\begin{equation}
\psi = |\!\uparrow \rangle \, \otimes \,
|\!\downarrow \rangle
\,\, -\,\, |\!\downarrow \rangle \, \otimes
\, |\!\uparrow \rangle
\mbox{,}
\eeq
when we write out the operation of $\bar{U}$ as a matrix multiplication.
The perfect correlations between spin components
of the two particles can be regarded as a special case of the maximally
entangled state perfect correlations, which hold between any observables
$A$ of one
subsystem and $\tilde{A}$ of the other, where $\tilde{A}=UAU^{-1}$.
To develop this, we recall that the observable $\tilde{A}$ exhibits
perfect correlations with the observable $A$ such that
measurements of $A$ and $\tilde{A}$ give results that are equal. In the
case of the spin singlet state, we have what one might call ``perfect
anti-correlations'', i.e., measurements of
$\sigma^{(1)}_{\theta,\phi}$ and $\sigma^{(2)}_{\theta,\phi}$ give
results which sum to zero. Thus, for the case of the spin singlet state, we
expect that $\widetilde{\sigma_{\theta,\phi}}=-\sigma_{\theta,\phi}$, or
\begin{equation}
U\sigma_{\theta,\phi}U^{-1}=-\sigma_{\theta,\phi}
\mbox{,}
\label{eq:Sbar}
\eeq
where $U$ is the operator described above.
Using $C^{-1}=C$, one can see that the left hand side of
\ref{eq:Sbar} reduces
to $C\bar{U}\sigma_{\theta,\phi}\bar{U}^{-1}C$.
In deriving \ref{eq:Sbar}, we begin by evaluating the expression
$\bar{U}\sigma_{\theta,\phi}\bar{U}^{-1}$.
Recall that the form of the observable $\sigma_{\theta,\phi}$ is given by
\begin{equation}
\sigma_{\theta,\phi}=
\left(
\begin{array}{rr}
\cos(\theta) & e^{-i\phi}\sin(\theta) \\
e^{i\phi}\sin(\theta) & -\cos(\theta)
\end{array}
\right)
\mbox{.}
\eeq
Note that one can write this as $\sigma_{\theta,\phi}=
\left(
\begin{array}{rr}
a & b \\
b^{*} & -a
\end{array}
\right)
\mbox{,}$
where $a=\cos(\theta)$ and $b=e^{-i\phi}\sin(\theta)$.
A simple calculation involving matrix multiplication shows that
\begin{equation}
\bar{U}\left(
\begin{array}{rr}
a & b \\
b^{*} & -a
\end{array}
\right)\bar{U}^{-1}
= \left(
\begin{array}{rr}
-a & -b^{*} \\
-b & a
\end{array}
\right)
\mbox{.}
\eeq
To complete the calculation, we must evaluate the expression
$CAC$, where $A$ is given by the matrix
$\left(
\begin{array}{rr}
-a & -b^{*} \\
-b & a
\end{array}
\right)$.
We saw in section \ref{csection} that an expression of the form $CAC$
can be evaluated in terms of
its matrix elements in $\{\varphi_{n}\}$, the basis that is invariant under
$C$. To do so, we use the relation \ref{eq:calc}:
\begin{equation}
(CAC)_{ij}=A^{*}_{ij}
\mbox{.}
\label{eq:ssstar}
\eeq
Using \ref{eq:ssstar}, it follows that $C\left(
\begin{array}{rr}
-a & -b^{*} \\
-b & a
\end{array}
\right)C=
\left(
\begin{array}{rr}
-a & -b \\
-b^{*} & a
\end{array}
\right)
\mbox{.}$
In comparing this relation to the form of $\sigma_{\theta,\phi}$, one can
see that
$C\bar{U}\sigma_{\theta,\phi}\bar{U}^{-1}C=-\sigma_{\theta,\phi}$, and
we have arrived at \ref{eq:Sbar}.
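The same algebra can be confirmed numerically for arbitrary angles. A minimal
sketch (Python/NumPy, our own illustration; it uses the facts
$\bar{U}^{-1}=-\bar{U}$ and $(CAC)_{ij}=A^{*}_{ij}$ established above):
\begin{verbatim}
import numpy as np

def sigma(theta, phi):
    return np.array([[np.cos(theta), np.exp(-1j*phi)*np.sin(theta)],
                     [np.exp(1j*phi)*np.sin(theta), -np.cos(theta)]])

Ubar = np.array([[0., 1.], [-1., 0.]])
Ubar_inv = -Ubar                      # inverse of Ubar

for theta, phi in [(0.3, 1.1), (1.7, 4.2), (2.9, 0.5)]:
    s = sigma(theta, phi)
    # U s U^{-1} = C (Ubar s Ubar^{-1}) C, and C A C has matrix elements A*_{ij}
    lhs = np.conj(Ubar @ s @ Ubar_inv)
    assert np.allclose(lhs, -s)
print("U sigma U^{-1} = -sigma confirmed")
\end{verbatim}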
\subsection{Bell's theorem and the maximally entangled
states \label{Bellext}}
We have seen, in chapter 3, how quantum nonlocality may
be proved for one particular maximally entangled state, which is the
spin singlet state. Here, we show how Bell's theorem may be applied in
proofs of the nonlocality of a larger class of maximally entangled
states. A more general form of the proof we give here is
given by Popescu and Rohrlich \cite{Popescu}, who demonstrate that the
nonlocality of {\em any} maximally entangled state follows from Bell's
theorem.
Consider a general maximally entangled state given by
\begin{equation}
\psi_{ME}=\sum^{N}_{n=1} |\phi_{n}\rangle \otimes |\psi_{n}\rangle
=|\phi_{1}\rangle \otimes |\psi_{1}\rangle
+ |\phi_{2}\rangle \otimes |\psi_{2}\rangle +
\sum^{N}_{n=3} |\phi_{n}\rangle \otimes |\psi_{n}\rangle
\label{eq:MEP}
\mbox{.}
\eeq
Let us define ${\cal H}_{1}$ and ${\cal H}_{2}$ as the subspaces
of particle $1$ and $2$'s Hilbert spaces spanned respectively by
$\phi_{1}, \phi_{2}$ and $\psi_{1},\psi_{2}$. We define the sets of
observables $\{\xi^{(1)}_{\theta,\phi}\}$ and
$\{\xi^{(2)}_{\theta,\phi}\}$ as follows. The set
$\{\xi^{(1)}_{\theta,\phi}\}$ is formally
identical\footnote{Since any two Hilbert spaces of the same dimension
are isomorphic, it is possible to define such formally identical
observables.}
to the set $\{\sigma_{\theta,\phi}\}$ on ${\cal H}_{1}$, and
all its members give zero when operating on any vector in the orthogonal
complement of ${\cal H}_{1}$.
Similarly, the set $\{\xi^{(2)}_{\theta,\phi}\}$ is formally identical to
$\{\sigma_{\theta,\phi}\}$ on ${\cal H}_{2}$, and its members give zero when
operating on
any vector in its orthogonal complement.
We now select a class of states smaller than
that given by \ref{eq:MEP} by making the following
substitutions: the vectors $|\!\uparrow\rangle$ and $|\!\downarrow\rangle$ replace
$\phi_{1}$ and $\phi_{2}$ and the vectors $|\!\downarrow\rangle$ and $-|\!\uparrow\rangle$
replace $\psi_{1}$ and $\psi_{2}$. Here, $|\!\uparrow\rangle$ and $|\!\downarrow\rangle$
are the eigenvectors of $\sigma_{z}$.
The state \ref{eq:MEP} then becomes
\begin{equation}
\psi
= |\!\uparrow\rangle \otimes |\!\downarrow \rangle
- |\!\downarrow\rangle \otimes |\!\uparrow \rangle
+ \sum^{N}_{n=3} \phi_{n} \otimes \psi_{n}
\label{eq:Bellext}
\mbox{.}
\eeq
The sets of observables $\{\xi^{(1)}_{\theta,\phi}\}$ and
$\{\xi^{(2)}_{\theta,\phi}\}$ are zero on every term of
\ref{eq:Bellext} with the exception of the first two. Since the
first two terms are identical to the spin singlet state, it follows
that $\psi$ is an eigenstate of $\xi^{(1)}_{\theta,\phi}+
\xi^{(2)}_{\theta,\phi}$ with eigenvalue zero for all $\theta,\phi$, and that it
therefore exhibits perfect correlations between all such observables. We may exploit
this situation to derive a nonlocality proof with some further effort,
as we now show.
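Before doing so, we note that the eigenvalue-zero property just claimed is
easy to verify directly. The following minimal numerical sketch is ours, with
the illustrative choices $N=3$ and $\phi_{3}\otimes\psi_{3}$ taken as the
third basis vectors of the two spaces:
\begin{verbatim}
import numpy as np

N = 3                                  # dimension of each subsystem (assumed)
e = np.eye(N)

def xi(theta, phi):
    # sigma_{theta,phi} on the two-dimensional subspace, zero on its complement
    s = np.array([[np.cos(theta), np.exp(-1j*phi)*np.sin(theta)],
                  [np.exp(1j*phi)*np.sin(theta), -np.cos(theta)]])
    out = np.zeros((N, N), dtype=complex)
    out[:2, :2] = s
    return out

# the state of eq. (Bellext), with phi_3 (x) psi_3 taken as e_3 (x) e_3
psi = np.kron(e[0], e[1]) - np.kron(e[1], e[0]) + np.kron(e[2], e[2])

for theta, phi in [(0.4, 0.9), (1.3, 2.2), (2.8, 5.1)]:
    total = np.kron(xi(theta, phi), np.eye(N)) + np.kron(np.eye(N), xi(theta, phi))
    assert np.allclose(total @ psi, 0)
print("psi is an eigenstate of xi^(1)+xi^(2) with eigenvalue zero")
\end{verbatim}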
Let $P^{(1)}$ and $P^{(2)}$ be the projection operators which project
respectively onto ${\cal H}_{1}$ and ${\cal H}_{2}$.
Since the state \ref{eq:Bellext} exhibits maximal perfect correlations,
one can give an incompleteness argument to show that
$P^{(1)}, P^{(2)}$, and the sets $\{\xi^{(1)}_{\hat{a}}\}$ and
$\{\xi^{(2)}_{\hat{b}}\}$ must all possess definite values.
However, the existence of such values
cannot agree with the statistical predictions of quantum
mechanics, as one may see. We now examine what
might be called the ``conditional correlation function'', which we
define as follows. As was done in the presentation of Bell's theorem,
we represent the predetermined values of the observables
$\{\xi^{(1)}_{\theta,\phi}\}$ and $\{\xi^{(2)}_{\theta,\phi}\}$
using the mathematical functions
$A(\mbox{$\lambda$} ,\hat{a})$ and $B(\mbox{$\lambda$} ,\hat{b})$. Suppose that we consider
measurements
of the set $\{\xi^{(1)}_{\hat{a}},P^{(1)}\}$ of subsystem $1$ and
$\{\xi^{(2)}_{\hat{b}},P^{(2)}\}$ of subsystem $2$.
In every such case,
we discard those results for which either $P^{(1)}$ or $P^{(2)}$ (or both)
equals zero. We consider the correlation in measurements of
$\xi^{(1)}_{\hat{a}}$ and $\xi^{(2)}_{\hat{b}}$ in
those cases for which $P^{(1)}=P^{(2)}=1$. Under these conditions, the
quantum mechanical prediction for the correlation function
$P_{QM}(\hat{a},\hat{b})$ is given by
\begin{equation}
P_{QM}(\hat{a},\hat{b})=\langle \psi_{ss}
|\xi^{(1)}_{\hat{a}}\xi^{(2)}_{\hat{b}} | \psi_{ss} \rangle =
-\hat{a} \cdot \hat{b}
\mbox{.}
\eeq
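This prediction can also be checked numerically. The sketch below is ours; it
assumes the conditional correlation may be computed as
$\langle\psi|\xi^{(1)}_{\hat{a}}\xi^{(2)}_{\hat{b}}|\psi\rangle /
\langle\psi|P^{(1)}P^{(2)}|\psi\rangle$ for the state \ref{eq:Bellext} with
$N=3$, and it recovers $-\hat{a}\cdot\hat{b}$:
\begin{verbatim}
import numpy as np

N = 3
e = np.eye(N)

def xi(theta, phi):
    s = np.array([[np.cos(theta), np.exp(-1j*phi)*np.sin(theta)],
                  [np.exp(1j*phi)*np.sin(theta), -np.cos(theta)]])
    out = np.zeros((N, N), dtype=complex)
    out[:2, :2] = s
    return out

def direction(theta, phi):
    return np.array([np.sin(theta)*np.cos(phi),
                     np.sin(theta)*np.sin(phi),
                     np.cos(theta)])

P = np.zeros((N, N)); P[:2, :2] = np.eye(2)   # projection onto the 2-dim subspace
psi = np.kron(e[0], e[1]) - np.kron(e[1], e[0]) + np.kron(e[2], e[2])

ta, pa, tb, pb = 0.4, 1.3, 2.1, 5.0
xi1 = np.kron(xi(ta, pa), np.eye(N))
xi2 = np.kron(np.eye(N), xi(tb, pb))
P12 = np.kron(P, P)

conditional = np.vdot(psi, xi1 @ xi2 @ psi) / np.vdot(psi, P12 @ psi)
print(np.real(conditional), -direction(ta, pa) @ direction(tb, pb))  # both -a.b
\end{verbatim}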
On the other hand, the prediction derived from the predetermined values
$A(\mbox{$\lambda$} ,\hat{a})$ and $B(\mbox{$\lambda$} ,\hat{b})$
can be shown to satisfy Bell's inequality \ref{eq:Bell}. Therefore, the
statistics of these values---which themselves follow from the
assumption of locality---are in conflict with the predictions of
quantum mechanics and so we must conclude quantum nonlocality.
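To make the conflict concrete, one can evaluate the quantum prediction
$-\hat{a}\cdot\hat{b}$ at angles for which it violates the inequality. A
minimal numerical sketch (ours; it assumes \ref{eq:Bell} is the three-setting
form $|P(\hat{a},\hat{b})-P(\hat{a},\hat{c})| \leq 1 + P(\hat{b},\hat{c})$):
\begin{verbatim}
import numpy as np

def P_singlet(a, b):
    # quantum correlation for the spin singlet (and for eq. Bellext,
    # conditioned on the two-dimensional subspaces): -a.b
    return -np.dot(a, b)

def unit(angle):
    return np.array([np.cos(angle), np.sin(angle), 0.0])

a, b, c = unit(0.0), unit(np.pi/3), unit(2*np.pi/3)
lhs = abs(P_singlet(a, b) - P_singlet(a, c))
rhs = 1 + P_singlet(b, c)
print(lhs, rhs, lhs <= rhs)   # 1.0  0.5  False: the inequality is violated
\end{verbatim}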
\section{Schr{\"o}dinger nonlocality, and a discussion of
experimental verification \label{Meat}}
\subsection{Schr{\"o}dinger nonlocality \label{Meatus}}
We have seen that the Schr{\"o}dinger paradox gives a more general result
than follows from either version of the EPR paradox. Because of
this greater generality, it is
possible to construct a nonlocality proof using Schr{\"o}dinger's paradox
in conjunction with any one of a wide variety of
theorems besides that of Bell. This is due mainly to the
fact that the Schr{\"o}dinger paradox predicts
definite values for such a large class of observables that the theorems
required need not address more than one particle or subsystem---even
value map impossibility proofs which are concerned with {\em single
particle} systems may be sufficient. Consider the case of Kochen and Specker's
theorem. Suppose that some system is described by
a maximally entangled state whose subsystems are of dimensionality three:
\begin{equation}
\sum^{3}_{n=1}\phi_{n}\otimes\psi_{n}
\label{eq:schks}
\mbox{.}
\eeq
Among the set of all observables on both ${\cal H}_{1}$ and ${\cal H}_{2}$
are the squares of the
various spin components\footnote{Of course, such a system need not consist of
two
spin $1$ particles. If it does not, then the same conclusion as is
given here holds for those observables which are {\em formally
equivalent} to the sets $\{s^{2}_{\theta,\phi}\}$ and
$\{\widetilde{s^{2}_{\theta,\phi}}\}$. Since any two Hilbert spaces
of the same dimension are isomorphic, it follows that such formally
equivalent observables must exist.}
of a spin $1$
particle, $\{s^{2}_{\theta,\phi}\}$. The state \ref{eq:schks} exhibits perfect
correlations between all the observables
$\{s^{2}_{\theta,\phi}\}$ of one subsystem,
and their counterparts $\{\widetilde{s^{2}_{\theta,\phi}}\}$ of the
other. As we saw in the incompleteness argument
given in section \ref{inc}, if we assume locality, then the perfect
correlations imply the existence of a non-contextual
value map $E(O)$ on the sets $\{s^{2}_{\theta,\phi}\}$
and $\{\widetilde{s^{2}_{\theta,\phi}}\}$. However, we know from the
Kochen and Specker theorem that there exists no value map
$E$ on the set $\{s^{2}_{\theta,\phi}\}$ such that a joint-eigenvalue is assigned to
every commuting set. This conclusion {\em contradicts} the
quantum mechanical prediction that the measurements
of any commuting set always give one of its joint-eigenvalues.
Thus, if we demand that the perfect correlations of the state
\ref{eq:schks} are to be explained through a {\em local} theory, we are
led to a conclusion that is in conflict with quantum mechanics.
Therefore the quantum description of any system described by a state
such as \ref{eq:schks} must entail nonlocality. Note that the development of
the contradiction between the quantum predictions and those of
the definite values is expressed using {\em the observables of
one subsystem}, namely the set $\{s^{2}_{\theta,\phi}\}$.
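The algebraic facts about the spin $1$ squared components on which this
application of the Kochen and Specker theorem rests are easily confirmed; a
minimal numerical sketch (ours, not part of the original argument):
\begin{verbatim}
import numpy as np

# spin-1 matrices (hbar = 1)
sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex) / np.sqrt(2)
sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

sq = [s @ s for s in (sx, sy, sz)]

# the squared components along orthogonal axes commute pairwise ...
for i in range(3):
    for j in range(3):
        assert np.allclose(sq[i] @ sq[j], sq[j] @ sq[i])

# ... and sum to s(s+1) = 2 times the identity, so every joint-eigenvalue
# is a permutation of (1, 1, 0)
assert np.allclose(sq[0] + sq[1] + sq[2], 2 * np.eye(3))
print("commuting set {s_x^2, s_y^2, s_z^2} verified")
\end{verbatim}
Each joint-eigenvalue of such a commuting triple is a permutation of
$(1,1,0)$, and it is precisely an assignment of this kind to every direction
that the Kochen and Specker theorem rules out.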
Moreover, if we consider any maximally entangled state of dimensionality $N$
at least three:
\begin{equation}
\psi=\sum^{N}_{n=1} \phi_{n} \otimes \psi_{n}
\mbox{,}
\label{eq:shelly}
\eeq
then we can evidently develop a nonlocality proof for this
state by using the Kochen and Specker
theorem. To see this, we re-write \ref{eq:shelly}
as
\begin{equation}
\psi=\phi_{1} \otimes \psi_{1}
+ \phi_{2} \otimes \psi_{2} + \phi_{3} \otimes \psi_{3} +
\sum^{N}_{n=4} \phi_{n} \otimes \psi_{n}
\label{eq:MEKSgen}
\mbox{,}
\eeq
and we define ${\cal H}_{2}$ to be the subspace
of particle $2$'s Hilbert space spanned by the vectors
$\psi_{1},\psi_{2},\psi_{3}$, and the operator $P$ as the
projection operator onto ${\cal H}_{2}$. Let us define the set of observables
$\{\zeta_{\theta,\phi}\}$ such
that they are formally identical to the squares of the spin components
$\{s^{2}_{\theta,\phi}\}$ on ${\cal H}_{2}$ and give zero
when operating on any vector in its orthogonal complement. Then the
Kochen and Specker
theorem tells us that the existence of definite values for the set
$\{P,\{\zeta_{\theta,\phi}\}\}$
must conflict with quantum mechanics. To see this, consider
those joint-eigenvalues of $\{P,\{\zeta_{\theta,\phi}\}\}$
for which $P=1$. The corresponding values of the set
$\{\zeta_{\theta,\phi}\}$ in this case are equal to
joint-eigenvalues on the spin observables themselves. Since
the Kochen and Specker theorem implies the impossibility of such an assignment of values
to the squared spin components, the same follows for the observables
$\{\zeta_{\theta,\phi}\}$ given that $P=1$.
If we consider the conjunction of Schr{\"o}dinger's paradox
with either Gleason's theorem or Mermin's
theorem, we again find that quantum nonlocality
follows. From Gleason's theorem we have
that a value map on the set of all projections $\{P\}$ must
conflict with quantum mechanics.
One can show that for any maximally entangled state
\begin{equation}
\sum^{N}_{n=1}\phi^{(1)}_{n}\otimes\psi^{(2)}_{n}
\mbox{,}
\label{eq:thcase}
\eeq
where $N$ is at least three, we have quantum nonlocality.
This result also holds true for maximally entangled states of
infinite dimensionality, in which case the sum in \ref{eq:thcase} is
replaced by an infinite sum. In the case of Mermin's theorem, we can
show that for any maximally entangled state whose
subsystems are at least four dimensional, the quantum predictions
must entail nonlocality.
Thus, the nonlocality of the maximally entangled
states may be proved not only by the analyses involving Bell's theorem
(see section \ref{Bellext}), but also by consideration of any of the
theorems of Gleason, Kochen and Specker, or Mermin, taken together
with the Schr{\"o}dinger paradox. We have seen that the latter type of argument,
which we have called ``Schr{\"o}dinger nonlocality'', differs from that
involving Bell's theorem in several ways.
First, the Schr{\"o}dinger nonlocality it is of a deterministic, rather than statistical
character. This is due to the fact that the conflict of the Schr{\"o}dinger
incompleteness with quantum mechanics is in
terms of the quantum prediction for individual measurements, i.e.,
the prediction that measurements of a commuting set always give one of
that set's joint-eigenvalues. Second, such quantum nonlocality proofs can be
developed from these theorems' implications regarding the observables
of just {\em one} subsystem. Finally, the
Schr{\"o}dinger nonlocality can be proved for a larger class of observables
than can the EPR/Bell nonlocality.
Note that beyond the arguments presented here, there remains the possibility
of further instances of
Schr{\"o}dinger nonlocality, since any `spectral incompatibility theorem' (see section
\ref{Spec}) leads to such a demonstration. If the theorem in question
concerns a class of observables on an $N$-dimensional Hilbert space,
then the proof can be applied to any maximally
entangled state whose subsystems are at least $N$-dimensional.
\subsection{Discussion of the experimental tests of EPR/Bell
and Schr{\"o}dinger nonlocality}
In chapter three, we reviewed the demonstration that the spin singlet
version of the EPR paradox in conjunction with Bell's theorem
leads to the conclusion of quantum nonlocality. That quantum theory entails
such
an unusual feature as nonlocality invited physicists to perform laboratory
experiments to test whether the quantum predictions for the relevant
phenomena are actually borne out. The
experimental tests\footnote{See, for example, Freedman and
Clauser \cite{They is us}, Fry and Thompson \cite{Fry}, and Aspect
et~al. \cite{Aspect 1, Aspect 2, Aspect 3}. For the two-photon
systems studied by these authors, an analysis first
given by Clauser, Horne, Holt and Shimony \cite{CHHS} plays the role of Bell's
theorem: the assumption of locality together with the existence of the
perfect correlations in the photon polarizations leads to the CHHS
inequality, a relationship which is not generally satisfied by
the quantum mechanical predictions.}
inspired by the discovery of quantum nonlocality have
focused mainly\footnote{The experiment of Lamehi-Rachti and
Mittig \cite{singlet experiment} involves a pair of protons
described by the spin singlet state. Its results support
the quantum predictions.}
on a maximally entangled system of two {\em photons} rather
than two spin $\frac{1}{2}$ particles. The results of these
experiments are in {\em agreement} with the quantum mechanical
predictions\footnote{For a discussion of the
experimental results, see the following: \cite{Bell experiments, Bell cascade
photons, Baby Snakes}.}. Since we have described the spin singlet version of
the EPR/Bell nonlocality in some detail (chapter 3), we address this case
rather than the two-photon maximally entangled state.
Let us briefly recall how the EPR argument and Bell's theorem
give rise to the conclusion of quantum nonlocality.
According to the EPR argument, the perfect correlations
exhibited by the spin singlet state can be
explained under locality {\em only} if there exist definite values
for all components of the spins of both particles. Bell's theorem
shows that any such definite values lead to the prediction that the
`correlation function' $P(\hat{a},\hat{b})$ must satisfy Bell's
inequality. Thus, if we combine the EPR
paradox with Bell's theorem we obtain the argument that any local theoretical
explanation of the perfect correlations must give a prediction for
$P(\hat{a},\hat{b})$ that satisfies the Bell inequality. On the
other hand, the quantum mechanical predictions for $P(\hat{a},\hat{b})$
{\em violate} this inequality. Thus, any local theoretical
description of the spin singlet state
must conflict with the quantum mechanical one.
We now consider what such a laboratory test of quantum nonlocality would
entail. The EPR/Bell analysis depends on the correctness of the
quantum mechanical predictions of perfect correlations.
If these quantum predictions are {\em not} confirmed, i.e.,
the experimental test does not reveal perfect
correlations, one could not interpret the experiment in terms of
EPR/Bell. It is only with the observation of the perfect correlations that
further tests can be performed to make an experimental judgment between
quantum mechanics and the family of local theories.
Such a judgment can be made based on the appropriate
measurements of the correlation function $P$, and the comparison of
the results
with the Bell inequality. One must measure the
correlation function for just those angles
$\hat{a},\hat{b},\hat{c}$ for which the quantum mechanical prediction
violates the Bell inequality (see section \ref{eprbellnon}).
If the results of these tests {\em agree} with the Bell inequality,
then the quantum predictions are refuted, and
the experiment may be interpreted in terms of a theory
consistent with the concept of locality.
If the results disagree with the Bell inequality, this
would confirm the quantum mechanical predictions and would
support the notion of nonlocality.
\subsubsection{Test of Schr{\"o}dinger nonlocality by perfect correlations only}
If one examines the various proofs of quantum nonlocality, one finds
that all have a somewhat similar structure. The first part of such a proof
consists of an
EPR-like incompleteness argument, according to which the perfect correlations
exhibited by some system together with the assumption of locality are
shown to imply the existence of definite values for certain observables. The
second part of a nonlocality proof is a demonstration
that such definite values must conflict with the
predictions of quantum mechanics. The two parts can be
combined to produce a proof of quantum nonlocality. When we come to
consider the
laboratory confirmation of such quantum nonlocality, it is natural to
expect that the two part structure of its proof will
be reflected in the various stages of the experiment itself. As we have
seen above, this is indeed true for the test of the EPR/Bell nonlocality:
one must observe the perfect correlations, {\em and} the violation of
the Bell inequality to verify that the spin singlet state reflects
quantum nonlocality. When we examine the experimental test of what we
have called `Schr{\"o}dinger nonlocality', however, we find that it is
possible to perform the experiment in such a way that the perfect
correlations {\em by themselves} are sufficient to verify that the
system being studied shows nonlocal effects of this sort.
We now turn to the question of such an experimental test.
For the sake of definiteness, let us consider a particular example of
Schr{\"o}dinger nonlocality, namely that which arises from the
conjunction of Schr{\"o}dinger's paradox with Mermin's theorem
(see section \ref{Mermin}). This proof is constructed as follows.
The
Schr{\"o}dinger paradox incompletness argument implies that the
perfect correlations exhibited by the maximally entangled state
lead to the existence of a non-contextual value map
on all observables of both subsystems. If the subsystems of the state in
question are associated with Hilbert spaces that are at least $4$
dimensional (see section \ref{Meatus}), then there must exist
such a map on the Mermin observables (or a set of formally equivalent
observables). Mermin's theorem, however, contradicts the possibility of
such a map. If we {\em combine} this theorem with
the Schr{\"o}dinger incompleteness argument the result is a quantum
nonlocality proof.
For the reader's convenience, we recall Mermin's theorem here. The observables
addressed by this theorem are the $x$ and
$y$ components of two spin $\frac{1}{2}$ particles,
and the observables $A,B,C,X,Y,Z$ which are defined in terms of these
through the relations
\begin{eqnarray}
A & = & \sigma^{(\alpha )}_{x} \sigma^{(\beta )}_{y}
\label{eq:Pducto} \\
B & = & \sigma^{(\alpha )}_{y} \sigma^{(\beta )}_{x} \nonumber \\
X & = & \sigma^{(\alpha )}_{x} \sigma^{(\beta )}_{x} \nonumber \\
Y & = & \sigma^{(\alpha )}_{y} \sigma^{(\beta )}_{y} \nonumber
\mbox{,}
\end{eqnarray}
and
\begin{eqnarray}
C & = & AB \label{eq:SPducto} \\
Z & = & XY \nonumber
\mbox{.}
\end{eqnarray}
Note that the symbols $\sigma^{(\alpha )}$ and $\sigma^{(\beta )}$ are used
here\footnote{In the present section, we consider the
entire set
of Mermin observables as being associated with {\em one} subsystem of
a maximally entangled state. Use of the notation $\sigma^{(1)}$,
$\sigma^{(2)}$ suggests observables belonging to separate subsystems,
and might have led to confusion.},
rather than
$\sigma^{(1)}$ and $\sigma^{(2)}$, which we employed in the
presentation of Mermin's theorem given in section \ref{Mermin}.
We will refer to these observables as the `Mermin observables', and
the notation $M_{i}$, $i=1,\ldots,10$, shall refer to
an arbitrary member of the set.
In the discussion of Mermin's theorem given in section
\ref{Mermin}, we showed that the commutation relationships among the
spin components led to the relation
\begin{equation}
\sigma^{(\alpha )}_{x}\sigma^{(\beta )}_{y}\sigma^{(\alpha )}_{y}
\sigma^{(\beta )}_{x}
\sigma^{(\alpha )}_{x}\sigma^{(\beta )}_{x}\sigma^{(\alpha )}_{y}
\sigma^{(\beta )}_{y}=-1
\label{eq:PMo}
\mbox{.}
\eeq
From this and the definitions \ref{eq:Pducto} and \ref{eq:SPducto}, it is
easy to see that $C$ and $Z$ satisfy
\begin{equation}
CZ=-1
\mbox{.}
\label{eq:CZducto}
\eeq
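Both \ref{eq:PMo} and \ref{eq:CZducto} can be verified by direct matrix
multiplication; a minimal numerical sketch (ours, representing
$\sigma^{(\alpha )}$ and $\sigma^{(\beta )}$ as operators on the two factors
of a four-dimensional space):
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
I2 = np.eye(2)

# sigma^{(alpha)} acts on the first tensor factor, sigma^{(beta)} on the second
ax, ay = np.kron(sx, I2), np.kron(sy, I2)
bx, by = np.kron(I2, sx), np.kron(I2, sy)

A, B, X, Y = ax @ by, ay @ bx, ax @ bx, ay @ by
C, Z = A @ B, X @ Y

# eq. (PMo): the eightfold product equals minus the identity
assert np.allclose(ax @ by @ ay @ bx @ ax @ bx @ ay @ by, -np.eye(4))
# eq. (CZducto): CZ = -1
assert np.allclose(C @ Z, -np.eye(4))
print("Mermin relations verified")
\end{verbatim}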
Mermin's theorem implies that there exists no function
$E(O)$ on the observables $M_{i}$ that satisfies all the
relationships constraining the commuting observables.
On inspection of the first equation in \ref{eq:Pducto}, we see that
the three observables
involved are a commuting set:
$[\sigma^{(\alpha )}_{x},\sigma^{(\beta )}_{y}]=0$, $[\sigma^{(\alpha )}_{x},
A]=[\sigma^{(\alpha )}_{x},\sigma^{(\alpha )}_{x}\sigma^{(\beta )}_{y}]=0$ and
$[\sigma^{(\beta )}_{y},A]=[\sigma^{(\beta )}_{y},\sigma^{(\alpha
)}_{x}\sigma^{(\beta
)}_{y}]=0$.
Examination of
the other equations in \ref{eq:Pducto} shows that
the same holds true for
these, i.e. the observables in each form a commuting set. Repeated
application of the commutation rules may be used to show that the sets
$\{C,A,B\}$, $\{Z,X,Y\}$, and $\{C,Z\}$ are also commuting sets.
For convenience, we list these sets here:
\begin{eqnarray}
\{A, \sigma^{(\alpha )}_{x}, \sigma^{(\beta )}_{y}\}
& \{B, \sigma^{(\alpha )}_{y}, \sigma^{(\beta )}_{x}\}
& \{X, \sigma^{(\alpha )}_{x}, \sigma^{(\beta )}_{x}\}
\label{eq:Johnlennon} \\
\{Y, \sigma^{(\alpha )}_{y}, \sigma^{(\beta )}_{y}\} & \{C,A,B\}
& \{Z,X,Y\} \nonumber \\
\{C,Z\} & & \nonumber
\end{eqnarray}
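One may confirm in the same way that each of the seven sets listed in
\ref{eq:Johnlennon} is indeed a commuting set; a self-contained numerical
sketch (ours, repeating the definitions used above):
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
I2 = np.eye(2)
ax, ay = np.kron(sx, I2), np.kron(sy, I2)
bx, by = np.kron(I2, sx), np.kron(I2, sy)
A, B, X, Y = ax @ by, ay @ bx, ax @ bx, ay @ by
C, Z = A @ B, X @ Y

sets = [(A, ax, by), (B, ay, bx), (X, ax, bx),
        (Y, ay, by), (C, A, B), (Z, X, Y), (C, Z)]
for members in sets:
    for M1 in members:
        for M2 in members:
            assert np.allclose(M1 @ M2, M2 @ M1)
print("all seven sets are commuting sets")
\end{verbatim}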
The relationships
which Mermin's theorem requires the function $E(O)$ to satisfy are
the defining equations in \ref{eq:Pducto}, \ref{eq:SPducto} and
\ref{eq:CZducto}.
We now recall the Schr{\"o}dinger incompleteness
argument in detail. In this argument, one considers the possibility
of separate experimental procedures being performed
to measure observables of the two subsystems of any given
maximally entangled state. Suppose that one measures
$A$ of subsystem $2$ and $\tilde{A}$ of subsystem $1$, through
experimental procedures ${\cal E}(A)$, and ${\cal E}(\tilde{A})$.
The perfect correlation
between these observables implies that from the value of $A$ found in
any procedure ${\cal E}(A)$, we can predict with certainty
the value of $\tilde{A}$ found in any procedure ${\cal E}(\tilde{A})$.
With this and the assumption of locality,
we must conclude that $\tilde{A}$ possesses a definite value
$V(\tilde{A})$, which cannot depend on its measurement context.
The
symmetry of the system allows one to argue in a similar fashion to
show that $A$ also possesses a non-contextual value $V(A)$. The
invariance and symmetry
of the maximally entangled state then imply that such an argument can
be performed to show that {\em all} observables of subsystems $1$
and $2$ must possess definite values.
To confirm the `Schr{\"o}dinger-Mermin' nonlocality, one must perform
perfect correlation tests in such a way that---in light of
the above argument---it follows that all of the Mermin observables
possess non-contextual values. Let us examine the
following series of experiments, carried out for each of the Mermin
observables. First, we simultaneously perform ${\cal E}(M_{i})$ on subsystem
$2$ and ${\cal E}(\widetilde{M_{i}})$ on subsystem $1$ (the precise
experiments required for each $M_{i}$ are defined below).
Second, we simultaneously perform ${\cal E}^{'}(M_{i})$
on subsystem $2$ and ${\cal E}(\widetilde{M_{i}})$ on subsystem $1$. If the
perfect correlations are verified in both cases and for
all Mermin observables then it follows that all of the
Mermin observables of subsystem $2$ must have definite
values, and that these must be noncontextual. The latter follows
since the perfect correlations are observed under conditions
where the context of $M_{i}$ is varied (i.e., from ${\cal E}(M_{i})$
to ${\cal E}^{'}(M_{i})$) while that of $\widetilde{M_{i}}$
is not.
In this way, one performs that part of the
experiment which corresponds to the incompleteness argument of the nonlocality
proof. One might suppose that what is required next is the performance
of a separate series of tests to judge between the existence of the
definite values implied by the perfect correlations, and the quantum
mechanical predictions. As we saw above, Bell's inequality provides
for the empirical difference between these two approaches in the case
of EPR/Bell nonlocality. In the present case, we have that Mermin's theorem
provides such a difference: the noncontextual values must {\em fail} to
satisfy the relationships \ref{eq:Pducto}, \ref{eq:SPducto}, and
\ref{eq:CZducto}, while quantum mechanics predicts that such
relationships {\em are} satisfied. To confirm the quantum mechanical
predictions and the presence of nonlocality in the maximally
entangled state in question, it would appear that one must check
whether or not these relationships are satisfied. However, as we
shall see, if the measurements involved in the perfect correlations
test are done in a particular way, then {\em the existence of
perfect correlations is sufficient in itself} to prove such a
conclusion. We now demonstrate this.
Consider the measurement of the commuting set
$\{A,\sigma^{(\alpha )}_{x},\sigma^{(\beta )}_{y}\}$.
These observables are related by the first equation in
\ref{eq:Pducto}, which we repeat here for convenience:
\begin{equation}
A=\sigma^{(\alpha )}_{x}\sigma^{(\beta )}_{y}
\label{eq:PductoFir}
\mbox{.}
\eeq
Suppose that to measure the set in question, we {\em first}
measure the set $\{\sigma^{(\alpha )}_{x},\sigma^{(\beta )}_{y}\}$.
Then the values for $\sigma^{(\alpha )}_{x}$ and $\sigma^{(\beta )}_{y}$
so obtained are simply {\em multiplied together}
to determine the value of $A$. One can, of course, measure any
commuting set that obeys a constraining relationship in this way,
i.e., any commuting set $\{O,O_{1},O_{2},\ldots\}$ where
\begin{equation}
O=f(O_{1},O_{2},\ldots)
\mbox{,}
\eeq
can be measured by first performing an experiment to measure
$\{O_{1},O_{2},\ldots\}$, and then evaluating $f$ of the resulting
values obtained, to obtain the value of $O$. If the set
$\{A,\sigma^{(\alpha )}_{x},\sigma^{(\beta )}_{y}\}$ is measured in such
a way then it is {\em a~priori} that the relationship \ref{eq:PductoFir}
will be satisfied by the measurement results, since this relationship is
``built into'' the very
procedure itself, i.e., in such a procedure, we {\em use} \ref{eq:PductoFir}
in determining these results.
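To illustrate how the relation becomes ``built in'', one can simulate such a
measurement of the set $\{A,\sigma^{(\alpha )}_{x},\sigma^{(\beta )}_{y}\}$:
sample an outcome in the joint eigenbasis of $\sigma^{(\alpha )}_{x}$ and
$\sigma^{(\beta )}_{y}$ and then {\em define} the result for $A$ as the
product. A minimal sketch (ours; the state used is an arbitrary example, not
one drawn from the text):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# eigenvectors of sigma_x and sigma_y with eigenvalues +1 / -1
xp = np.array([1, 1], dtype=complex) / np.sqrt(2)
xm = np.array([1, -1], dtype=complex) / np.sqrt(2)
yp = np.array([1, 1j], dtype=complex) / np.sqrt(2)
ym = np.array([1, -1j], dtype=complex) / np.sqrt(2)

# arbitrary normalized state of the (alpha, beta) pair -- purely illustrative
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

outcomes = [(sa, sb, np.kron(va, vb))
            for sa, va in [(+1, xp), (-1, xm)]
            for sb, vb in [(+1, yp), (-1, ym)]]
probs = np.array([abs(np.vdot(v, psi))**2 for _, _, v in outcomes])

for _ in range(5):
    k = rng.choice(4, p=probs)
    sa, sb, _ = outcomes[k]
    value_of_A = sa * sb    # eq. (PductoFir) holds a priori, by construction
    print(sa, sb, value_of_A)
\end{verbatim}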
Now let us suppose that in all of the procedures ${\cal E}(M_{i})$
involved in the perfect correlation tests
discussed above, the method just described is followed. That it is
possible to use this method in each case is clear from the fact that every
commuting set
in \ref{eq:Johnlennon} is constrained by one of the relationships
of \ref{eq:Pducto}, \ref{eq:SPducto}, and \ref{eq:CZducto}. If we follow
such a method to measure the perfect correlations, then it
is {\em a~priori} that the measurement results will obey the
commuting relationships \ref{eq:Pducto}, \ref{eq:SPducto}, and
\ref{eq:CZducto}. Since, as we have mentioned, it is the judgment of whether
or not these relationships are obeyed which is required to
complete our laboratory test of nonlocality, it follows that the
perfect correlation tests by themselves are sufficient for this test.
If, in fact, the perfect correlation test described above gives a
`positive' result, then the conclusion of quantum nonlocality
necessarily follows. The only {\em local} theoretical interpretation of
such a result---the existence of definite values---is immediately
ruled out since (through Mermin's theorem) it follows that such values
cannot satisfy the commuting relationships, which are {\em a~priori}
obeyed when one performs the commuting set measurements in the fashion
just discussed.
Moreover, the Schr{\"o}dinger nonlocality that follows from either of the
other theorems studied in chapter 2 (Gleason's, and Kochen and Specker's)
can also be tested through the confirmation of perfect correlations. This
follows since every observable among the set addressed by each
theorem is a
member of two `incompatible commuting sets', where each set
obeys a relationship of the form $O=f(O_{1},O_{2},\ldots)$.
Thus, each observable can be
measured by two distinct procedures ${\cal E}(O_{i})$ and
${\cal E}^{'}(O_{i})$ for
which the commuting relationships are {\em a~priori}.
\section{Schr{\"o}dinger's paradox and von Neumann's no hidden variables argument}
Lastly, we would like to make a few observations which are of
historical interest. We now focus on the line of thought Schr{\"o}dinger followed
subsequent to his
generalization of the Einstein--Podolsky--Rosen paradox. In addition,
we consider the question of the possible consequences had he
repeated the mistake made by a well-known contemporary, or
had he anticipated any of several mathematical theorems that were developed
somewhat later.
Having concluded the existence of definite
values on the observables, Schr{\"o}dinger considered the question of the type
of relationships which might govern these values.
As we discussed in section \ref{Schrvon}, he was able to show that no
such values can obey the same relationships that constrain the
observables themselves. In particular, he observed that the relationship
\begin{equation} H=p^{2}+a^{2}q^{2}
\mbox{,}
\label{eq:HrO2}
\eeq
is not generally obeyed by the eigenvalues of the observables
$H,p^{2},q^{2}$, so that no value map $V(O)$ can satisfy this
equation. From this it
follows\footnote{To show this, one need only note that the value of any
observable
$f(O)$ will be $f$ of the value of $O$, where $f$ is any mathematical
function (see section \ref{Schrvon}).}
immediately that there exists
no value map which is {\em linear} on the
observables.
Thus we see that Schr{\"o}dinger's argument essentially leads to
the same conclusion regarding hidden variables as von Neumann's
theorem. Schr{\"o}dinger did not, however, consider this as proof of the impossibility
of hidden variables.
Instead, the fact that the definite values of
his EPR generalization must fail to obey such
relationships prompted Schr{\"o}dinger to consider the possibility that
{\em no} relationship whatsoever serves to constrain them:
\cite{Present Sit} (emphasis due to original author)
``Should one now think that because we are so
ignorant about the relations among the variable-values held ready in
{\em one} system, that none exists, that far-ranging arbitrary
combination can occur?'' Note, however, that if the `variable-values' obey
no constraining relationship, each must be an independent
parameter of the system. Thus, Schr{\"o}dinger continues with the statement: ``That
would mean that a system of `{\em one} degree of freedom' would need
not merely {\em two} numbers for adequately describing it, as in
classical mechanics, but rather many more, perhaps infinitely many.''
Recall the discussion of the hidden variables theory known as Bohmian
mechanics presented in section
\ref{UncleAlb}. There we saw that the state description of a system is
given in this theory by the wave function $\psi$ and the system
configuration ${\bf q}$. The mathematical form of $\psi$
is the same as in the quantum formalism, i.e., $\psi$ is a
vector in the Hilbert space associated with the system. Consider a spinless
particle constrained to move in $1$ dimension. The
Bohmian mechanics state description would consist of the
wave function $\psi(x) \in L_{2}$ and position $x \in \mbox{$\rm I\!R$}$. Since
the specification of an $L_{2}$
function $\psi(x)$ requires infinitely many numbers, i.e., an
assignment of a number to each point $x \in (-\infty ,\infty )$, the
Bohmian mechanics state description is infinite-dimensional.
Thus, a theory of hidden variables such as Bohmian mechanics provides just
the type of description that Schr{\"o}dinger's
speculations had led him to anticipate.
Let us now suppose that rather than reasoning as he did, Schr{\"o}dinger
instead committed the same error as von Neumann. In other words, we
consider the possible consequences had Schr{\"o}dinger regarded the failure of
the definite values to satisfy the same relations as the observables
as proof that no such values can possibly agree with quantum mechanics.
This false result would appear to refute the conclusion of Schr{\"o}dinger's
generalization of EPR; it would seem to imply that the Schr{\"o}dinger paradox
leads to a conflict with the quantum theory. The Schr{\"o}dinger paradox, like the EPR
paradox, assumes only that the perfect correlations can
be explained in terms of a local theoretical model. Hence, if Schr{\"o}dinger
had concluded with von Neumann that hidden variables must conflict
with quantum mechanics, then he would have been led to deduce quantum
nonlocality.
As we saw above, such a conclusion actually follows from
the combination of Schr{\"o}dinger's paradox with any of the spectral
incompatibility theorems, for example, those of Gleason, Kochen and
Specker, and
Mermin. Since Gleason's theorem involves the proof of the trace
relation \ref{eq:Gltracedude}, it seems reasonable to regard it
as being the most similar of these theorems to that of von Neumann, which
features a derivation of this same formula\footnote{Von Neumann's
derivation is based on quite different assumptions, as we saw.}.
Note that to whatever degree one might regard Gleason's theorem as being
similar to von
Neumann's, one must regard Schr{\"o}dinger as having come to within that same
degree of a proof of quantum nonlocality.
\section{Summary and conclusions}
Our investigation of the hidden variables issue
was motivated by several mathematical results that have been interpreted
as proofs either of the incompatibility of hidden variables with
the quantum theory, or of the existence of serious limitations
on such theories. We have reviewed the arguments first presented
by J.S. Bell, according to which not only do these theorems fail to demonstrate
the impossibility of hidden variables, but the restrictions they place on such
theories---contextuality and nonlocality---are quite similar to particular
features of the quantum theory itself. When considering the theorems of Gleason
and of Kochen and Specker, we
found that they imply essentially that within any hidden variables
theory, the way in which values are assigned to each observable $O$
must allow for contextuality---an attribute reflecting
the quantum formalism's rules of measurement.
Nonlocality is certainly surprising and unexpected, yet it has been
proved as a feature intrinsic to quantum mechanics from the combination
of Bell's
theorem with the spin singlet version of EPR.
We found also that from Erwin Schr{\"o}dinger's extensive generalization of
the EPR paradox, it follows that every spectral-incompatibility
theorem (such as the theorems of Gleason, Kochen and Specker, and
Mermin) will give
a new proof of ``nonlocality without inequalities.''
We began our exposition by discussing the earliest work on hidden variables,
John von Neumann's. Von Neumann considered the possibility of a theory whose
state description would supplement that of the quantum formalism with a
parameter we called $\mbox{$\lambda$}$. He regarded this scheme as a way to introduce
determinism into the quantum phenomena.
Mathematically, such determinism is represented by requiring that for each
$\psi$ and $\mbox{$\lambda$}$, i.e., for
each state, there exists a function assigning to each observable
its value. Von Neumann showed that no function $E(O)$ on the observables
satisfying his assumptions can be such a
value map. From this, he concluded that empirical agreement between any
hidden variables theory and the quantum theory is impossible.
That this conclusion is unjustified
follows since one of von Neumann's assumptions---that
$E(O)$ be {\em linear} on the observables---is quite unreasonable.
There is no basis for the demand that $E$ satisfy $E(X+Y)=E(X)+E(Y)$ where
$[X,Y] \neq 0$, since each of these observables is measured by a
{\em distinct} procedure.
The theorems of Gleason, Kochen and Specker, and Mermin seem at first to
succeed where von Neumann failed, i.e., to prove
the impossibility of hidden variables. Nevertheless, as Bell has shown
\cite{Bell Eclipse, Bell Imposs Pilot}, these theorems also fail as
arguments against hidden variables, since they do not account for
contextuality. This concept is easily illustrated by
examination of the quantum formalism's rules of measurement. We find
that the `measurement of an observable $O$' can be performed using
distinct experimental procedures ${\cal E}(O)$ and ${\cal E}^{'}(O)$.
That ${\cal E}(O)$ and ${\cal E}^{'}(O)$ are distinct is especially obvious if
these measure the commuting sets ${\cal C}$ and ${\cal C}^{'}$
where ${\cal C}$ and ${\cal C}^{'}$ both contain $O$, but the members of
${\cal C}$ fail to commute with those of ${\cal C}^{'}$.
It is therefore quite reasonable to expect that a hidden variables
theory should allow for the possibility that different procedures for
some observable's measurement might yield different results for an
individual system. That
it is necessary to account for the detailed experimental arrangement
recalls the views of Niels Bohr, who warns us
of \cite[page 210]{Einstein impeachment} ``the
impossibility of any sharp separation between the behavior of atomic objects
and the interaction with the measuring instruments which serve to define the
conditions under which the phenomena appear.''
Examples of just such
incompatible commuting sets are found among the observables in
each of the theorems we addressed in chapter 2: Gleason's, Kochen and
Specker's, and Mermin's. For example, in the theorem of Kochen
and Specker, the commuting sets are simply of the form
$\{s^{2}_{x},s^{2}_{y},s^{2}_{z}\}$, i.e., the squares of the
spin components of a spin $1$ particle taken with respect to
some Cartesian axis system $x,y,z$. Here one can see that a given
observable $s^{2}_{x}$ belongs to both $\{s^{2}_{x},s^{2}_{y},
s^{2}_{z}\}$, and $\{s^{2}_{x},s^{2}_{y^{'}},s^{2}_{z^{'}}\}$, where
the $y^{'}$ and $z^{'}$ axes are oblique relative to the $y$ and $z$ axes.
Since the theorems of Gleason, Kochen and Specker, and Mermin consider
a function $E(O)$ which {\em assigns a single value to each
observable}, they cannot account for the possibility of
incompatible measurement procedures which the
quantum formalism's rules of measurement allow for. Clearly, the
approach taken by these theorems falls far short of addressing the
hidden variables issue properly. Thus, we
come to concur with J.S. Bell's assessment that \cite{Bell
Imposs Pilot} ``What is proved by impossibility proofs
\ldots \ldots is lack of imagination.''
To address the general question of hidden variables, one must
allow for this important feature of contextuality. The theorems of
Gleason, Kochen and Specker, and Mermin may be
seen as explicit proofs that this simple and natural feature is
necessary in any hidden variables theory. In particular, any attempt to
construct a value map that neglects this feature will fail in that
it cannot satisfy the requirement that the relationships
constraining the commuting sets must be obeyed. We have seen that
there is a simpler way to express this result: there exists no value map
on the observables mapping every commuting set to one of its joint-eigenvalues.
We were able to gain some measure of additional insight into
contextuality in examining Albert's example.
As one can see from the experiments considered by Albert, the Hermitian
operators cannot
be considered as representing properties intrinsic to the quantum system
itself. Instead, the results of the ``measurement of a quantum observable''
must be considered as the joint-product of system {\em and} measuring
apparatus.
When we come to consider Bell's theorem, we must do so in the context
of the spin singlet version of the Einstein--Podolsky--Rosen paradox.
This argument essentially shows that locality necessarily leads to
the existence of definite values for all components of the spin of both
particles of the spin singlet state.
Since the quantum mechanical description of the
state does not account for such values, EPR
conclude that this description is incomplete.
Bell's theorem essentially continues where the spin singlet EPR analysis
concluded. The fixed values for the spin components of the two particles are
represented by
functions $A(\mbox{$\lambda$},\hat{a})$, $B(\mbox{$\lambda$},\hat{b})$
where $\hat{a}$, $\hat{b}$ are unit vectors in the directions of the
axis of the spin component for particle $1$ and $2$ respectively.
In his analysis, Bell considers the statistical correlation between spin
component measuring experiments carried out on the two particles.
According to Bell's theorem, the theoretical prediction
for this correlation derivable from the variables $A(\mbox{$\lambda$},\hat{a})$
and $B(\mbox{$\lambda$},\hat{b})$ must satisfy `Bell's inequality'. The prediction
given by quantum mechanics does {\em not} generally agree with this
inequality. Bell's theorem in itself provides a proof
that local hidden variables must conflict with quantum mechanics.
It is important to note that what might {\em appear} to be Bell's
assumption---the existence of definite values for all spin components---is
identical to the conclusion of the spin singlet version of the
Einstein--Podolsky--Rosen argument. In fact, Bell assumes nothing beyond
what follows from EPR. Therefore the proper way to assess the implications
of these arguments is to combine them into
a single analysis that will begin with the assumptions of the
spin singlet EPR paradox, and end with the conclusion of Bell's
theorem; i.e., the assumption of locality leads to the conclusion of Bell's
inequality.
Since this inequality, as we saw, disagrees with
the quantum theory, we finally have an argument that {\em locality
implies disagreement with the predictions of quantum mechanics}.
This agrees with Bell's own assertion in the matter: \cite{Cosmologists}
``It now seems that the non-locality is deeply rooted in
quantum mechanics itself and will persist in any completion.''
We have now come to understand the issues of contextuality and nonlocality
as features of a hidden variables interpretation of quantum theory.
Contextuality in hidden variables is a
natural feature to expect since it reflects the possibility
of distinct experimental procedures for measurement of a single
observable. As J.S. Bell
expresses it: \cite{Bell Eclipse} ``The result of an observation may
reasonably
depend not only on the state of the system (including hidden
variables) but also on the complete disposition of the apparatus.''
According to the above results, the fact that nonlocality is required of
hidden variables does not in any way diminish the prospect of these types
of theories. As we have
mentioned, there has existed a successful theory of hidden variables since
1952, and there is no reason not to consider this theory (Bohmian mechanics)
as a serious interpretation of quantum
mechanics. We noted in section \ref{Bohm} that Bohmian mechanics possesses the
advantages of objectivity and determinism.
In the fourth and final chapter, we addressed Erwin Schr{\"o}dinger's
generalization of the EPR paradox. Besides greatly extending
the incompleteness argument of EPR, Schr{\"o}dinger's analysis provides for
a new set of ``nonlocality without inequalities'' proofs, which
have several important features. The Schr{\"o}dinger paradox
concerns the perfect correlations exhibited not just by a
single quantum state, but by a general class of states
called the maximally entangled states. A maximally entangled state is
any state of the form
\begin{equation}
\sum^{N}_{n=1}|\phi_{n}\rangle\otimes|\psi_{n}\rangle
\mbox{,}
\eeq
where
$\{|\phi_{n}\rangle\}$ and $\{|\psi_{n}\rangle\}$ are bases of the
($N$-dimensional) Hilbert
spaces of subsystems $1$ and $2$, respectively. As in the
EPR analysis (both the spin singlet and the original version) Schr{\"o}dinger
gives an incompleteness argument, according to which there must
exist precise values for all observables of both subsystems.
It may be shown using Gleason's theorem, Kochen and Specker's theorem,
Mermin's theorem, or any other `spectral incompatibility' proof, that
the definite values concluded in the Schr{\"o}dinger's paradox
must {\em conflict} with the empirical predictions of quantum
mechanics. The implication of such a disagreement is quantum
nonlocality. This conflict in empirical predictions differs from that
developed within Bell's theorem, in that it involves predictions
for individual measurements, rather than the statistics of a
series of measurements. Thus, combining Schr{\"o}dinger's paradox with any
spectral incompatibility theorem provides a `nonlocality without
inequalities' proof. Moreover, as we
observed in section \ref{Meatus}, the conflict between the Schr{\"o}dinger
incompleteness and such a theorem exists even
when one considers only the observables of one of the two subsystems.
We have
referred to this type of proof by the name ``Schr{\"o}dinger nonlocality.''
When we consider the experimental verification of Schr{\"o}dinger nonlocality,
we find a curious result. The measurement of a commuting
set satisfying an equation of constraint may be carried out
in such a way that the set of values obtained will
satisfy this constraint {\em a~priori}. Since the definite values
concluded in
the Schr{\"o}dinger paradox cannot satisfy these constraining relationships, performing
an experimental test using such measurement procedures ensures that the
perfect correlations {\em themselves} are sufficient to imply nonlocality.
All this leads us to inquire how Schr{\"o}dinger himself regarded his results, and
what further conclusions he drew within his remarkable paper. Clearly,
it would have been possible for him to argue for quantum nonlocality
had he anticipated the results of any of a wide variety of theorems including
at least Bell's (see section \ref{Bellext}), Gleason's, Kochen and Specker's,
Mermin's, or any other spectral incompatibility theorem.
Instead, as we saw in chapter 1, Schr{\"o}dinger essentially reproduced the
von Neumann argument against hidden variables, in his observation of
a set of observables that obey a linear relationship not satisfied by the
set's eigenvalues. What may be seen from von Neumann's result is just what
Schr{\"o}dinger noted---relationships constraining the observables do
not necessarily constrain their values. Schr{\"o}dinger continued this line of
thought by speculating on the case for which {\em no} relation
whatsoever constrained the values of the various observables. In light of
his generalization
of the EPR paradox, this line of thought led Schr{\"o}dinger to the idea that the
quantum system in question might possess an infinite number of degrees of
freedom, a concept that is actually quite similar to that of
Bohmian mechanics.
If, on the other hand, Schr{\"o}dinger {\em had} made von
Neumann's error, i.e., had concluded the impossibility of a map from
observables to values, this mistaken line of reasoning would have
permitted him to arrive at the concept of quantum nonlocality. Thus,
insofar as one might regard the von Neumann proof as ``almost''
leading to the type of conclusion that
follows from Gleason's theorem, one must consider Schr{\"o}dinger as having
come precisely that close to a proof of quantum nonlocality.
It is quite interesting to see that many of the issues related to
quantum mechanical incompleteness and hidden variables were addressed
in the 1935 work \cite{Present Sit, Camb1, Camb2} of Erwin
Schr{\"o}dinger . Schr{\"o}dinger's work seems to be the most far
reaching of the early analyses addressed to the subject.
Not only did he see deeper into the problem than
did von Neumann, but Schr{\"o}dinger developed results beyond those
of the Einstein, Podolsky, Rosen paper---an extension of their
incompleteness argument, and an analysis of incompleteness in terms of the
implication of von Neumann's theorem.
It seems clear that the field of foundations of
quantum mechanics
might have been greatly advanced had these features
of Schr{\"o}dinger's paper
been more widely appreciated at the time it was first published.
\chapter{References}
| {
"timestamp": "2004-12-02T06:21:24",
"yymm": "0412",
"arxiv_id": "quant-ph/0412011",
"language": "en",
"url": "https://arxiv.org/abs/quant-ph/0412011",
"abstract": "In this paper, we show that Erwin Schroedinger's generalization of the Einstein Podolsky Rosen argument can be connected to certain mathematical theorems - Gleason's and also Kochen and Specker's - in a manner analogous to the relation of EPR itself with Bell's theorem. In both cases, the conclusion is quantum nonlocality, as we discuss. The \"Schroedinger nonlocality\" proofs share some features with the Greenberger, Horne, and Zeilinger quantum-nonlocality work, yet also differ in significant ways.For clarity and completeness, we begin with a detailed discussion of the topic of hidden variable theorems. We argue, in agreement with John S. Bell, that 'impossibility' does not follow.",
"subjects": "Quantum Physics (quant-ph)",
"title": "Hidden Variables and Nonlocality in Quantum Mechanics",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9719924826392581,
"lm_q2_score": 0.7279754371026367,
"lm_q1q2_score": 0.7075866524097909
} |
https://arxiv.org/abs/1105.0949 | Sharp bounds on the volume fractions of two materials in a two-dimensional body from electrical boundary measurements: the translation method | We deal with the problem of estimating the volume of inclusions using a finite number of boundary measurements in electrical impedance tomography. We derive upper and lower bounds on the volume fractions of inclusions, or more generally two phase mixtures, using two boundary measurements in two dimensions. These bounds are optimal in the sense that they are attained by certain configurations with some boundary data. We derive the bounds using the translation method which uses classical variational principles with a null Lagrangian. We then obtain necessary conditions for the bounds to be attained and prove that these bounds are attained by inclusions inside which the field is uniform. When special boundary conditions are imposed the bounds reduce to those obtained by Milton and these in turn are shown here to reduce to those of Capdeboscq-Vogelius in the limit when the volume fraction tends to zero. The bounds of this paper, and those of Milton, work for inclusions of arbitrary volume fractions. We then perform some numerical experiments to demonstrate how good these bounds are. | \section{Introduction}
\setcounter{equation}{0}
One of the central problems of the theory and practice of electrical impedance tomography is the problem of estimating the volume of the inclusions in terms of boundary measurements, either voltage measurements when currents are applied around the boundary of the body or current measurements when voltages are applied. The problem can be described in rigorous terms as follows: Let $D$ be an inclusion inside a body $\Omega$, and suppose that the conductivities of $D$ and $\Omega \setminus D$ are $\sigma_1$ and $\sigma_2$ ($\sigma_1 \neq \sigma_2$), respectively. Let $\sigma= \sigma_1 \chi(D) + \sigma_2 \chi(\Omega \setminus D)$ where $\chi(D)$ is the characteristic function of $D$ and the potential $V$ be the solution to
\begin{equation}
\left\{
\begin{array}{l}
\nabla \cdot \sigma \nabla V=0 \quad \mbox{in } \Omega, \\
V=V^0 \quad \mbox{on } \partial\Omega
\end{array}
\right.
\eeq{In1}
for some Dirichlet data (voltage) $V^0$ on $\partial\Omega$. Then the measurement of current (the Neumann data) is $q:= \sigma \frac{\partial V}{\partial {\bf n}}$ on $\partial\Omega$. (Throughout this paper $\frac{\partial}{\partial{\bf n}}$ denotes the normal derivative.) The problem is to estimate the volume $|D|$ of the inclusion using the boundary data $(V^0, q)$ for finitely many voltages, say $V^0=V_1^0, \ldots, V_n^0$. If the Neumann boundary condition $\sigma \frac{\partial V}{\partial {\bf n}} =q$ is prescribed on $\partial\Omega$ instead of the Dirichlet condition, then the measurement is $V^0:= V|_{\partial\Omega}$.
The purpose of this paper is to consider this problem and derive optimal upper and lower bounds for the volume fraction of inclusions in two dimensions. In fact, we deal with a more general situation where $\Omega$ is a two phase mixture in which the phase 1 has conductivity $\sigma_1$ and the phase 2 has conductivity $\sigma_2$ ($\sigma_1 > \sigma_2$) so that the conductivity distribution $\sigma$ of $\Omega$ is given by $\sigma({\bf x}) = \sigma_1 \chi_1({\bf x}) + \sigma_2 \chi_2({\bf x})$ where $\chi_j$ is the characteristic function of phase $j$ for $j=1,2$, {\it i.e.},
\begin{equation}
\chi_1 ({\bf x})= 1- \chi_2({\bf x}) = \left\{
\begin{array}{l}
1 \quad \mbox{in phase 1}, \\
0 \quad \mbox{in phase 2}.
\end{array}
\right.
\eeq{In2}
We derive optimal upper and lower bounds for the volume fraction $f_1$ of phase 1 ($f_1= \frac{1}{|\Omega|} \int_{\Omega} \chi_1 ({\bf x})$) using boundary measurements corresponding to either a pair of Dirichlet data ($V_1^0$ and $V_2^0$) or a pair of Neumann data ($q_1$ and $q_2$) on $\Omega$. The bounds are optimal in the sense that they are attained by some inclusions or configurations. The bounds can be easily computed from the boundary measurements. In fact, they are given by two quantities: the measurement (or response) matrix $A=(a_{ij})_{i,j=1,2}$ where
\begin{equation}
a_{ij} := \frac{1}{|\Omega|} \int_{\partial\Omega} V_i^0 q_j
\eeq{In3}
and
\begin{equation}
b_D := \frac{1}{|\Omega|} \int_{\partial\Omega} V_1^0 \frac{\partial V_2^0}{\partial {\bf t}}
\eeq{In4}
if the Dirichlet data are used. Here and throughout this paper, $\frac{\partial}{\partial {\bf t}}$ denotes the tangential derivative along $\partial\Omega$ in the positive orientation. If the Neumann data are used, then $b_D$ is replaced with
\begin{equation}
b_N := \frac{1}{|\Omega|} \int_{\partial\Omega} q_1({\bf x}) (\int_{{\bf x}_0}^{{\bf x}} q_2).
\eeq{In5}
where ${\bf x}_0\in\partial\Omega$ is a fixed point and the inner integral is taken along $\partial\Omega$.
See Theorems \ref{thm:LB1} and \ref{thm:UB1}.
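Although all of the analysis below works with the exact boundary integrals, it may be helpful to record how the quantities \eq{In3}--\eq{In5} could be approximated from sampled boundary data. The following sketch (in Python; the function name, the polygonal quadrature, and the treatment of the tangential derivative are our own illustrative choices, not part of the paper) assumes the boundary is given as a positively oriented closed polygon with the data sampled at its vertices:
\begin{verbatim}
import numpy as np

def response_quantities(x, y, V0, q, area):
    """Crude quadrature for a_ij, b_D, b_N from boundary samples.

    x, y : coordinates of a positively oriented closed boundary polygon
    V0   : 2 x m array, Dirichlet data V_1^0, V_2^0 at the nodes
    q    : 2 x m array, the corresponding Neumann data
    area : |Omega|
    """
    dx = np.diff(np.r_[x, x[0]])
    dy = np.diff(np.r_[y, y[0]])
    ds = np.hypot(dx, dy)                      # arc-length weights

    # a_ij = (1/|Omega|) int_{dOmega} V_i^0 q_j
    A = np.array([[np.sum(V0[i] * q[j] * ds) for j in range(2)]
                  for i in range(2)]) / area

    # b_D = (1/|Omega|) int_{dOmega} V_1^0 (dV_2^0/dt), t = arc length
    s = np.cumsum(ds)
    dV2_dt = np.gradient(V0[1], s)
    b_D = np.sum(V0[0] * dV2_dt * ds) / area

    # b_N = (1/|Omega|) int_{dOmega} q_1 (int_{x_0}^{x} q_2)
    Q2 = np.cumsum(q[1] * ds)                  # primitive of q_2 from the node x_0
    b_N = np.sum(q[0] * Q2 * ds) / area
    return A, b_D, b_N
\end{verbatim}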
Some significant results on the problem of estimating the volume of inclusion using boundary measurements are as follows. Kang-Seo-Sheen \cite{KSS97}, Alessandrini-Rosset \cite{AR98}, and Alessandrini-Rosset-Seo \cite{ARS00} obtained upper and lower bounds for the volume of the inclusion. However, their bounds involve constants which are not easy to determine, and hence it is not possible to compare them with the bounds of this paper. It is worth emphasizing that these results use only a single measurement. Another important result on volume estimation is that of Capdeboscq-Vogelius \cite{CV022, CV03}. They found, using the Lipton bounds on polarization tensors \cite{Lipton93}, upper and lower estimates for the volume of inclusions occupying a low volume fraction, which are optimal bounds in the asymptotic limit as the volume fraction tends to zero. Recently it was recognised by Milton \cite{milt11} that bounds on the response of two-phase periodic composites could be easily used to bound the multi-measurement response of two-phase bodies when special boundary conditions are imposed (see \eq{HS6} and \eq{HS10} below) and that these could be used in an inverse fashion to bound the volume fraction. As shown here those bounds coincide exactly with the Capdeboscq-Vogelius bounds in the asymptotic limit as the volume fraction tends to zero.
The bounds obtained in this paper allow for more general boundary conditions and we emphasize that they are optimal for any volume fraction. They reduce to those of Milton for the special boundary conditions, but have the advantage of being able to utilize the same set of measurements for
both the upper and lower volume fraction bounds.
We derive the bounds using the
translation method which in its simplest form
is based on classical variational principles with null Lagrangians added, {\it i.e.}, non-linear functions of fields which may be integrated by parts and expressed in terms
of boundary measurements. The translation method,
developed by Murat and Tartar \cite{tar79,tar85,mutar85} and independently by Lurie
and Cherkaev \cite{lucherk82,lucherk84}, is a powerful method for deriving bounds on effective tensors of composites. As shown by Murat and Tartar it can be extended using the method of compensated compactness
to allow for functions more general than null Lagrangians, namely quasiconvex functions.
It is reviewed in the books \cite{cherk,milton,allaire,tartar}. The use of classical variational principles to determine information about the conductivity distribution inside a body from
electrical impedance measurements was pioneered by Kohn and Berryman \cite{kohnber}.
We continue our investigation by looking for necessary and sufficient conditions for the bounds to be attained. These are the exact analogs of the condition found by Grabovsky \cite{grab} for attainability
of the translation bounds for composites (see also section 25.6 of \cite{milton}). It turns out that the upper bound is attained if and only if the field in phase 1 is uniform, and the lower bound is attained if and only if the field in phase 2 is uniform. This means that if phase 1 is an inclusion, the upper bound is attained if the field inside the inclusion is uniform. However, the lower bound can then only be approached, since no boundary data generate a nonzero uniform field outside the inclusion. The lower bound (for $f_1$) can be attained for configurations where phase 2 is an inclusion.
There are plenty of inclusions inside which the field is uniform for some boundary conditions. We call such inclusions E$_\Omega$-inclusions.
They include E-inclusions which were named in \cite{ljl07}.
An inclusion $E$ is called an E-inclusion if the field inside $E$ is uniform for any uniform loading at infinity. More precisely, E-inclusions are such that if $V$ is the solution to
\begin{equation}
\left\{
\begin{array}{l}
\nabla \cdot (\sigma_1 \chi(E) + \sigma_2 \chi({\mathbb R}^2 \setminus E)) \nabla V=0 \quad \mbox{in } {\mathbb R}^2, \\
V({\bf x}) - {\bf a} \cdot {\bf x} = O(|{\bf x}|^{-1}) \ \ \mbox{as } |{\bf x}| \to \infty,
\end{array}
\right.
\eeq{In6}
then $-\nabla V$ is constant in $E$ for any direction ${\bf a}$. If an E-inclusion $E$ is simply connected, then $E$ must be an ellipse (an ellipsoid in three dimensions). This was known as Eshelby's conjecture \cite{esh61} and resolved by Sendeckyj in two dimensions \cite{sen70} (see also \cite{KM06, liu07} for different proofs), and by Kang-Milton \cite{KM07} and Liu \cite{liu07} in three dimensions. There are E-inclusions with multiple components \cite{che74, liu07, KKM07}. There are also inclusions other than E-inclusions inside which the field is uniform. For example, if $\Omega$ contains a connected component, say $E$, of an E-inclusion with multiple components, then $E$ is an E$_\Omega$-inclusion. More generally if $E$ is an E$_\Omega$-inclusion and $\Psi\subset\Omega$ then the field in $E\cap\Psi$ will be uniform when appropriate boundary conditions are imposed at the boundary of $\Psi$.
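As a simple illustration of \eq{In6}, one can check the classical fact that a single disk is an E-inclusion: with conductivity $\sigma_1$ inside a disk of radius $R$ and $\sigma_2$ outside, the standard explicit solution (quoted here as an assumption, not derived in this paper) has the uniform interior field $-\nabla V = \frac{2\sigma_2}{\sigma_1+\sigma_2}{\bf a}$. The following sketch (Python with NumPy; all names are our own illustrative choices) verifies the transmission conditions for that solution numerically.
\begin{verbatim}
import numpy as np

# Disk of radius R, conductivity s1 inside and s2 outside, uniform loading a
# at infinity as in (In6).  The interior field of the assumed explicit
# solution is -grad V = 2*s2/(s1+s2) * a, i.e. it is uniform.
s1, s2, R = 5.0, 1.0, 1.0
a = np.array([1.0, 0.0])
k = 2.0 * s2 / (s1 + s2)

def V_in(x):   # potential inside the disk
    return -k * a.dot(x)

def V_out(x):  # potential outside the disk
    return -a.dot(x) + (s1 - s2) / (s1 + s2) * R**2 * a.dot(x) / x.dot(x)

theta = np.linspace(0.0, 2.0 * np.pi, 7)[:-1]
bd = np.column_stack([R * np.cos(theta), R * np.sin(theta)])

# continuity of the potential across the interface
print(np.allclose([V_in(x) for x in bd], [V_out(x) for x in bd]))

# continuity of the normal flux sigma dV/dn (radial derivative here),
# approximated by one-sided finite differences
h = 1e-6
flux_in  = [s1 * (V_in(x) - V_in((1 - h) * x)) / (h * R) for x in bd]
flux_out = [s2 * (V_out((1 + h) * x) - V_out(x)) / (h * R) for x in bd]
print(np.allclose(flux_in, flux_out, atol=1e-4))
\end{verbatim}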
We perform some numerical experiments to demonstrate how good the bounds are for inclusions. Special attention is paid to the variation of the bounds when certain parameters, such as conductivity, the volume fraction and the distance from the boundary, vary. We also look at the role of boundary data.
This paper is organized as follows. In the next section we derive the lower and upper bounds on the volume fraction. In section 3, we obtain conditions for these bounds to be attained, and then in section 4, we show that if the field is uniform in phase 1 then the upper bound is attained and if the field is uniform in phase 2 then the lower bound is attained. In section 5, we obtain alternative conditions for the bounds to be attained. Section 6 is devoted to the asymptotic analysis of the bounds when the volume fraction tends to zero. Numerical results are presented in section 7. In section 8 we show how to construct a wide variety of simply
connected E$_\Omega$-inclusions, following the approach outlined in section 23.9 of \cite{milton}.
We emphasize that the method of this paper (the translation method) works for three dimensions as well. The results in three dimensions will be
presented in a forthcoming paper.
\section{Translation bounds in two dimensions}
\setcounter{equation}{0}
In this section we derive upper and lower bounds on $f_1$ (the volume fraction of the phase with higher conductivity) using pairs of Cauchy data. Each bound requires two pairs of Cauchy data.
The derivation in this section is based on the translation method, and parallels the treatment given by Murat and Tartar \cite{tar79,tar85,mutar85} and Lurie
and Cherkaev \cite{lucherk82,lucherk84}.
\subsection{Lower bound}
Consider two potentials satisfying
\begin{equation}
\nabla \cdot \sigma \nabla V_j =0 \quad \mbox{in } \Omega, \quad j=1,2.
\eeq{LB1}
Let
\begin{equation}
{\bf j}_j ({\bf x}) = - \sigma({\bf x}) \nabla V_j({\bf x}), \quad j=1,2.
\eeq{LB2}
We want to use information about two pairs of Cauchy data $(V_1^0, q_1:=-{\bf j}_1 \cdot {\bf n})$ and $(V_2^0, q_2:=-{\bf j}_2 \cdot {\bf n})$ on $\partial\Omega$ to generate a lower bound on $f_1$.
Using the boundary data we can compute
\begin{equation}
\langle {\bf j}_i \rangle := \frac{1}{|\Omega|} \int_{\Omega} {\bf j}_i = -\frac{1}{|\Omega|} \int_{\partial\Omega} {\bf x} q_i , \quad i=1,2.
\eeq{LB3}
We assume that $\langle {\bf j}_1 \rangle$ and $\langle {\bf j}_2 \rangle$ are linearly independent. Then, by taking linear combinations of the old potentials if necessary we may assume
\begin{equation}
\langle {\bf j}_1 \rangle = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad \langle {\bf j}_2 \rangle = \begin{bmatrix} 0 \\ 1 \end{bmatrix} .
\eeq{LB4}
With
\begin{equation}
R_\bot = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix},
\eeq{LB5}
let us introduce a $4 \times 4$ matrix
\begin{equation}
L_c({\bf x}):= \begin{bmatrix} \sigma^{-1} & c R_\bot \\
- c R_\bot & \sigma^{-1} \end{bmatrix}
\eeq{LB6}
where the constant $c$ is chosen so that $L_c({\bf x}) \ge 0$ for all ${\bf x}$. Here $\sigma$ is allowed to be an anisotropic (matrix-valued) conductivity. With constants $k_1$, $k_2$, $k_3$, $k_4$, define a 4-dimensional vector field $J$ by
\begin{equation}
J:= \begin{bmatrix} k_1 {\bf j}_1 + k_2 {\bf j}_2 \\ k_3 {\bf j}_1 + k_4 {\bf j}_2 \end{bmatrix} .
\eeq{LB7}
We then consider
\begin{equation}
W_c := \frac{1}{|\Omega|} \int_\Omega J \cdot L_c ({\bf x}) J.
\eeq{LB8}
Define a $2 \times 2$ matrix $A=(a_{ij})$, which we call the response (or measurement) matrix, by
\begin{equation}
a_{ij} := \frac{1}{|\Omega|} \int_\Omega {\bf j}_i \cdot \sigma^{-1} {\bf j}_j = \frac{1}{|\Omega|} \int_{\partial\Omega} V_i^0 q_j, \quad i,j=1,2,
\eeq{LB9}
and
\begin{equation}
b:= \frac{1}{2|\Omega|} \int_{\Omega} \left( {\bf j}_1 \cdot R_\bot {\bf j}_2 - {\bf j}_2 \cdot R_\bot {\bf j}_1 \right) = \frac{1}{|\Omega|} \int_{\Omega} {\bf j}_1 \cdot R_\bot {\bf j}_2 .
\eeq{LB10}
Since
\begin{equation}
\int_{\Omega} {\bf j}_i \cdot R_\bot {\bf j}_i =0, \quad i=1,2,
\eeq{LB11}
one can see that
\begin{equation}
W_c = \begin{bmatrix} k_1 \\ k_2 \\ k_3 \\ k_4 \end{bmatrix} \cdot D_c \begin{bmatrix} k_1 \\ k_2 \\ k_3 \\ k_4 \end{bmatrix}
\eeq{LB12}
where
\begin{equation}
D_c = \begin{bmatrix} a_{11} & a_{12} & 0 & cb \\ a_{12} & a_{22} & -cb & 0 \\ 0 & -cb & a_{11} & a_{12} \\
cb & 0 & a_{12} & a_{22} \end{bmatrix} .
\eeq{LB13}
We emphasize that $W_c$ can be computed from the boundary measurements. In fact, since $\nabla \times R_{\bot} {\bf j}_i =0$, there are potentials $\varphi_i$ such that
\begin{equation}
R_{\bot} {\bf j}_i = \nabla \varphi_i.
\eeq{LB14}
Moreover, if ${\bf t}$ is the unit tangent vector field on $\partial\Omega$ in the positive orientation, then
\begin{equation}
{\bf t} \cdot \nabla \varphi_i = R_\bot^T {\bf t} \cdot {\bf j}_i = - {\bf j}_i \cdot {\bf n} = q_i \quad \mbox{on } \partial\Omega
\eeq{LB15}
(T for the transpose), and hence the boundary value of $\varphi_i$ which we denote by $\varphi_i^0$ is given by
\begin{equation}
\varphi_i^0 ({\bf x})= \int_{{\bf x}_0}^{{\bf x}} q_i
\eeq{LB16}
where the integration is along $\partial\Omega$ in the positive orientation (counterclockwise). Hence
\begin{equation}
b = - \frac{1}{|\Omega|} \int_{\partial\Omega} q_1 \varphi_2^0 = \frac{1}{|\Omega|} \int_{\partial\Omega} q_2 \varphi_1^0.
\eeq{LB17}
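In discrete form, \eq{LB16}--\eq{LB17} amount to accumulating $q_2$ along the positively oriented boundary and pairing the result with $q_1$. A minimal sketch (Python with NumPy; the trapezoidal quadrature and all names are our own illustrative choices) is the following.
\begin{verbatim}
import numpy as np

def b_from_neumann_pair(pts, q1, q2, area):
    """Approximate b of (LB17) from the Neumann data q_1, q_2.
    pts : (n,2) boundary points, ordered counterclockwise; x_0 = pts[0].
    """
    edge = np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1)
    w = 0.5 * (edge + np.roll(edge, 1))            # trapezoidal node weights

    # phi_2^0(x) = int_{x_0}^{x} q_2 along the boundary, eq. (LB16)
    incr = 0.5 * (q2 + np.roll(q2, -1)) * edge     # contribution of each edge
    phi2 = np.concatenate(([0.0], np.cumsum(incr)[:-1]))

    # b = -(1/|Omega|) * integral over the boundary of q_1 * phi_2^0, eq. (LB17)
    return -np.sum(w * q1 * phi2) / area
\end{verbatim}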
Since
\begin{align}
W_c & = \frac{1}{|\Omega|} \int_{\Omega} (k_1 {\bf j}_1 + k_2 {\bf j}_2) \sigma^{-1} (k_1 {\bf j}_1 + k_2 {\bf j}_2) + (k_3 {\bf j}_1 + k_4 {\bf j}_2) \sigma^{-1} (k_3 {\bf j}_1 + k_4 {\bf j}_2) \nonumber \\
&\quad + 2c (k_1k_4 -k_2k_3) b, \label{LB18}
\end{align}
we have the variational principle
\begin{equation}
W_c= \min_{\displaystyle \nabla \cdot \underline{{\bf j}_1} = \nabla \cdot\underline{{\bf j}_2}=0 \atop \displaystyle \underline{{\bf j}_1} \cdot {\bf n}= q_1 , \ \underline{{\bf j}_2} \cdot {\bf n}= q_2} \frac{1}{|\Omega|} \int_{\Omega} \begin{bmatrix} k_1 \underline{{\bf j}_1} + k_2 \underline{{\bf j}_2} \\ k_3 \underline{{\bf j}_1} + k_4 \underline{{\bf j}_2} \end{bmatrix} \cdot L_c({\bf x}) \begin{bmatrix} k_1 \underline{{\bf j}_1} + k_2 \underline{{\bf j}_2} \\ k_3 \underline{{\bf j}_1} + k_4 \underline{{\bf j}_2} \end{bmatrix} .
\eeq{LB19}
One can easily see from the constraints that
\begin{equation}
\langle \underline{{\bf j}_i} \rangle =\frac{1}{|\Omega|} \int_{\partial\Omega} -{\bf x} q_i = \langle {\bf j}_i \rangle .
\eeq{LB20}
So if we replace the constraints by the weaker constraint that
\begin{equation}
\langle \underline{{\bf j}_i} \rangle = \langle {\bf j}_i \rangle , \quad i=1,2,
\eeq{LB21}
then we get
\begin{equation}
W_c \ge \min_{\displaystyle \underline{{\bf j}_1}, \underline{{\bf j}_2} \atop \displaystyle \langle \underline{{\bf j}_i} \rangle = \langle {\bf j}_i \rangle} \left\langle \begin{bmatrix} k_1 \underline{{\bf j}_1} + k_2 \underline{{\bf j}_2} \\ k_3 \underline{{\bf j}_1} + k_4 \underline{{\bf j}_2} \end{bmatrix} \cdot L_c \begin{bmatrix} k_1 \underline{{\bf j}_1} + k_2 \underline{{\bf j}_2} \\ k_3 \underline{{\bf j}_1} + k_4 \underline{{\bf j}_2} \end{bmatrix} \right\rangle .
\eeq{LB22}
In order to find the minimum, we first observe that at the minimum
\begin{equation}
\int_{\Omega} \begin{bmatrix} k_1 \psi_1 + k_2 \psi_2 \\ k_3 \psi_1 + k_4 \psi_2 \end{bmatrix} \cdot L_c ({\bf x}) \begin{bmatrix} k_1 \underline{{\bf j}_1} + k_2 \underline{{\bf j}_2} \\ k_3 \underline{{\bf j}_1} + k_4 \underline{{\bf j}_2} \end{bmatrix} =0
\eeq{LB23}
for any (vector-valued) functions $\psi_1, \psi_2$ satisfying $\langle \psi_1 \rangle = \langle \psi_2 \rangle =0$, which implies
\begin{equation}
\displaystyle L_c ({\bf x}) \begin{bmatrix} k_1 \underline{{\bf j}_1} + k_2 \underline{{\bf j}_2} \\ k_3 \underline{{\bf j}_1} + k_4 \underline{{\bf j}_2} \end{bmatrix} = \mu \ (\mbox{a constant vector}).
\eeq{LB24}
We then have
\begin{equation}
\left\langle \begin{bmatrix} k_1 {\bf j}_1 + k_2 {\bf j}_2 \\ k_3 {\bf j}_1 + k_4 {\bf j}_2 \end{bmatrix} \right\rangle = \left\langle \begin{bmatrix} k_1 \underline{{\bf j}_1} + k_2 \underline{{\bf j}_2} \\ k_3 \underline{{\bf j}_1} + k_4 \underline{{\bf j}_2} \end{bmatrix} \right\rangle = \langle L_c^{-1} \rangle \mu
\eeq{LB25}
Thus the minimum is given by
\begin{align}
& \left\langle \begin{bmatrix} k_1 \underline{{\bf j}_1} + k_2 \underline{{\bf j}_2} \\ k_3 \underline{{\bf j}_1} + k_4 \underline{{\bf j}_2} \end{bmatrix} \cdot L_c \begin{bmatrix} k_1 \underline{{\bf j}_1} + k_2 \underline{{\bf j}_2} \\ k_3 \underline{{\bf j}_1} + k_4 \underline{{\bf j}_2} \end{bmatrix} \right\rangle \nonumber \\
& = \langle \mu \cdot L_c^{-1} \mu \rangle = \left \langle \begin{bmatrix} k_1 {\bf j}_1 + k_2 {\bf j}_2 \\ k_3 {\bf j}_1 + k_4 {\bf j}_2 \end{bmatrix} \right\rangle \cdot \langle L_c^{-1} \rangle^{-1} \left\langle \begin{bmatrix} k_1 {\bf j}_1 + k_2 {\bf j}_2 \\ k_3 {\bf j}_1 + k_4 {\bf j}_2 \end{bmatrix} \right\rangle, \label{LB26}
\end{align}
which implies, thanks to (\ref{LB4}), that
\begin{equation}
W_c \ge \begin{bmatrix} k_1 \\ k_2 \\ k_3 \\ k_4 \end{bmatrix} \cdot \langle L_c^{-1} \rangle^{-1} \begin{bmatrix} k_1 \\ k_2 \\ k_3 \\ k_4 \end{bmatrix} .
\eeq{LB27}
Thus we have
\begin{equation}
D_c \ge \langle L_c^{-1} \rangle^{-1} .
\eeq{LB28}
Let us now assume that $\sigma$ is isotropic so that
\begin{equation}
L_c= \begin{bmatrix} \sigma^{-1} & 0 & 0 & c \\
0 & \sigma^{-1} & -c & 0 \\
0 & -c & \sigma^{-1} & 0 \\
c & 0 & 0 & \sigma^{-1}
\end{bmatrix} ,
\eeq{LB29}
and
\begin{equation}
\langle L_c^{-1} \rangle = \left\langle \frac{1}{(\sigma^{-2}-c^2)} \begin{bmatrix} \sigma^{-1} & 0 & 0 & -c \\
0 & \sigma^{-1} & c & 0 \\
0 & c & \sigma^{-1} & 0 \\
-c & 0 & 0 & \sigma^{-1}
\end{bmatrix} \right\rangle .
\eeq{LB30}
Since
\begin{equation}
\begin{bmatrix}
Q^T & 0 \\
0 & Q^T
\end{bmatrix} \langle L_c^{-1} \rangle^{-1} \begin{bmatrix}
Q & 0 \\
0 & Q
\end{bmatrix} = \langle L_c^{-1} \rangle^{-1}
\eeq{LB31}
for any rotation $Q$, we obtain from (\ref{LB28}) that
\begin{equation}
\begin{bmatrix}
Q^T & 0 \\
0 & Q^T
\end{bmatrix} D_c \begin{bmatrix}
Q & 0 \\
0 & Q
\end{bmatrix} \ge \langle L_c^{-1} \rangle^{-1} .
\eeq{LB32}
In particular, we may choose $Q$ so that
\begin{equation}
Q^T \begin{bmatrix} a_{11} & a_{12} \\
a_{12} & a_{22} \end{bmatrix} Q = \begin{bmatrix} \lambda_1 & 0 \\
0 & \lambda_2 \end{bmatrix}
\eeq{LB33}
where $\lambda_1 \ge \lambda_2$ are eigenvalues of the response matrix $(a_{ij})$. Then by taking the inverse of both sides of (\ref{LB32}) we get
\begin{equation}
\frac{1}{(\lambda_1 \lambda_2 - c^2 b^2)} \begin{bmatrix} \lambda_2 & 0 & 0 & -cb \\ 0 & \lambda_1 & cb & 0 \\ 0 & cb & \lambda_2 & 0 \\
-cb & 0 & 0 & \lambda_1 \end{bmatrix} \le \langle L_c^{-1} \rangle .
\eeq{LB34}
So we get the inequality
\begin{equation}
\frac{1}{(\lambda_1 \lambda_2 - c^2 b^2)} {\bf v} \cdot \begin{bmatrix} \lambda_2 & -cb \\ -cb & \lambda_1 \end{bmatrix} {\bf v} \le
\left\langle \frac{1}{(\sigma^{-2} - c^2)} {\bf v} \cdot \begin{bmatrix} \sigma^{-1} & -c \\ -c & \sigma^{-1} \end{bmatrix} {\bf v} \right \rangle
\eeq{LB35}
for any vector ${\bf v}$.
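The matrices appearing above are easy to assemble explicitly. The following sketch (Python with NumPy; the helper names are our own) builds $L_c$ of \eq{LB29} and $\langle L_c^{-1}\rangle$ of \eq{LB30} for a two-phase isotropic medium, and checks that $L_c\ge 0$ in both phases for $0\le c\le\sigma_1^{-1}$ when $\sigma_1>\sigma_2$.
\begin{verbatim}
import numpy as np

Rp = np.array([[0.0, 1.0], [-1.0, 0.0]])        # the matrix R_perp of (LB5)

def L_c(sigma, c):
    """The 4x4 matrix (LB29) for an isotropic conductivity sigma."""
    I2 = np.eye(2)
    return np.block([[I2 / sigma, c * Rp], [-c * Rp, I2 / sigma]])

def avg_Lc_inv(f1, s1, s2, c):
    """<L_c^{-1}> of (LB30) for a two-phase medium, volume fraction f1 of phase 1."""
    return f1 * np.linalg.inv(L_c(s1, c)) + (1.0 - f1) * np.linalg.inv(L_c(s2, c))

# L_c is positive semidefinite in both phases as long as 0 <= c <= 1/s1 (s1 > s2)
s1, s2 = 5.0, 1.0
for c in np.linspace(0.0, 1.0 / s1, 5):
    assert np.all(np.linalg.eigvalsh(L_c(s1, c)) >= -1e-12)
    assert np.all(np.linalg.eigvalsh(L_c(s2, c)) >= -1e-12)
\end{verbatim}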
Now suppose that the medium is 2-phase, with $\sigma_1 > \sigma_2$. In this case $L_c({\bf x}) > 0$ as long as $c < \sigma_1^{-1}$. We take the limit as $c$ approaches $\sigma_1^{-1}$. Then
\begin{equation}
\frac{{\bf v}}{(\sigma_1^{-2} - c^2)} \cdot \begin{bmatrix} \sigma_1^{-1} & -c \\ -c & \sigma_1^{-1} \end{bmatrix} {\bf v}
\eeq{LB36}
becomes infinite unless ${\bf v}$ is proportional to $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$, and when ${\bf v}=\begin{bmatrix} 1 \\ 1 \end{bmatrix}$
\begin{equation}
\frac{{\bf v}}{(\sigma^{-2} - c^2)} \cdot \begin{bmatrix} \sigma^{-1} & -c \\ -c & \sigma^{-1} \end{bmatrix} {\bf v}
= \frac{2(\sigma^{-1} -c)}{\sigma^{-2}-c^2}= \frac{2}{\sigma^{-1} + c}
\eeq{LB37}
approaches $\sigma_1$ in phase 1 and $2/(\sigma_1^{-1}+\sigma_2^{-1})$ in phase 2. Hence the bound in (\ref{LB35}) reduces to
\begin{align}
\frac{\lambda_1 + \lambda_2 - 2b/\sigma_1}{\lambda_1 \lambda_2 - b^2/\sigma_1^2}
& \le f_1 \sigma_1 + \frac{2f_2}{1/\sigma_1 + 1/\sigma_2} \\
& = f_1 \sigma_1 + \frac{2f_2 \sigma_1 \sigma_2}{\sigma_1 + \sigma_2} \\
& = f_1 \frac{\sigma_1 (\sigma_1 -\sigma_2)}{(\sigma_1 +\sigma_2)} + \frac{2 \sigma_1 \sigma_2}{\sigma_1 + \sigma_2},
\label{LB37a}
\end{align}
which gives the desired lower bound on the volume fraction:
\begin{equation}
f_1 \ge \frac{(\sigma_1 + \sigma_2)}{\sigma_1 (\sigma_1 - \sigma_2)} \left[ \frac{\lambda_1 + \lambda_2 - 2b/\sigma_1}{\lambda_1 \lambda_2 - b^2/\sigma_1^2} - \frac{2 \sigma_1 \sigma_2}{\sigma_1 + \sigma_2} \right],
\eeq{LB38}
or
\begin{equation}
f_1 \ge \frac{(\sigma_1 + \sigma_2)}{\sigma_1 (\sigma_1 - \sigma_2)} \left[ \frac{\mathop{\rm Tr}\nolimits A - 2b/\sigma_1}{\det A - b^2/\sigma_1^2} -
\frac{2 \sigma_1 \sigma_2}{\sigma_1 + \sigma_2} \right],
\eeq{LB39}
where the matrix $A$ is defined by (\ref{LB9}). We emphasize that the right-hand side of (\ref{LB39}) can be computed from the boundary measurements. In fact, $A$ is computed using (\ref{LB9}) and $b$ using (\ref{LB17}), under the condition (\ref{LB4}).
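In computational form, the bound \eq{LB39} is a one-line formula once $A$ and $b$ are known; a direct transcription (Python with NumPy; the function name is our own) reads as follows.
\begin{verbatim}
import numpy as np

def lower_bound_f1(A, b, s1, s2):
    """Lower bound (LB39) on the volume fraction of the phase with
    conductivity s1 (> s2), from the response matrix A of (LB9) and
    b of (LB17); the normalization (LB4) is assumed to hold."""
    A = np.asarray(A, dtype=float)
    ratio = (np.trace(A) - 2.0 * b / s1) / (np.linalg.det(A) - (b / s1) ** 2)
    return (s1 + s2) / (s1 * (s1 - s2)) * (ratio - 2.0 * s1 * s2 / (s1 + s2))
\end{verbatim}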
In general, if Neumann data $q_1$ and $q_2$ do not satisfy \eq{LB4}, then let
\begin{equation}
P_N:= \begin{bmatrix} \displaystyle \frac{-1}{|\Omega|} \int_{\partial\Omega} q_1 {\bf x} & \ \ \displaystyle \frac{-1}{|\Omega|} \int_{\partial\Omega} q_2 {\bf x} \end{bmatrix}^{-1} .
\eeq{LB40}
Then $\tilde{{\bf j}}_1$ and $\tilde{{\bf j}}_2$ defined by
\begin{equation} \tilde{{\bf j}}_i=\sum_{m=1}^2\left[ P_N\right]_{im} {\bf j}_m,\quad i=1,2
\eeq{LB41}
satisfy \eq{LB4}. Since
\begin{equation}
\left[ \frac{1}{|\Omega|} \int_{\Omega} \tilde{{\bf j}_i} \cdot \sigma^{-1} \tilde{{\bf j}_j} \right]_{i,j=1,2} = P_N A P_N^T
\eeq{LB42}
and
\begin{equation}
\frac{1}{|\Omega|} \int_{\Omega} \tilde{{\bf j}}_1 \cdot R_\bot \tilde{{\bf j}}_2 = \frac{\det P_N}{|\Omega|} \int_{\Omega} {\bf j}_1 \cdot R_\bot {\bf j}_2,
\eeq{LB43}
we obtain the following theorem from \eq{LB39}.
\begin{theorem}\label{thm:LB1}
Let $P_N$ be given by \eq{LB40} and
\begin{equation}
b_N := \frac{1}{|\Omega|} \int_{\Omega} {\bf j}_1 \cdot R_\bot {\bf j}_2 = -\frac{1}{|\Omega|} \int_{\partial\Omega} q_1({\bf x}) \Big(\int_{{\bf x}_0}^{{\bf x}} q_2\Big).
\eeq{LB44}
Then,
\begin{equation}
f_1 \ge \frac{(\sigma_1 + \sigma_2)}{\sigma_1 (\sigma_1 - \sigma_2)} \left[ \frac{\mathop{\rm Tr}\nolimits (P_N A P_N^T) - 2 (\det P_N)b_N /\sigma_1}{(\det P_N)^2 (\det A - b_N^2/\sigma_1^2)} -
\frac{2 \sigma_1 \sigma_2}{\sigma_1 + \sigma_2} \right].
\eeq{LB45}
\end{theorem}
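A direct transcription of \eq{LB45} is given below (Python with NumPy; the function name is our own, and the inputs $A$, $b_N$ and $P_N$ are assumed to have been computed from the boundary data as in \eq{LB9}, \eq{LB44} and \eq{LB40}).
\begin{verbatim}
import numpy as np

def lower_bound_general_neumann(A, bN, PN, s1, s2):
    """Lower bound (LB45) on f_1 for Neumann data not satisfying (LB4)."""
    A, PN = np.asarray(A, float), np.asarray(PN, float)
    At = PN @ A @ PN.T                  # P_N A P_N^T
    dPN = np.linalg.det(PN)
    num = np.trace(At) - 2.0 * dPN * bN / s1
    den = dPN ** 2 * (np.linalg.det(A) - (bN / s1) ** 2)
    return (s1 + s2) / (s1 * (s1 - s2)) * (num / den - 2.0 * s1 * s2 / (s1 + s2))
\end{verbatim}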
\subsection{Upper bound}
We now derive the upper bound on $f_1$.
Let us introduce a $4 \times 4$ matrix
\begin{equation}
L'_c ({\bf x}):= \begin{bmatrix} \sigma & c R_\bot \\
- c R_\bot & \sigma \end{bmatrix}
\eeq{UB1}
where the constant $c$ is chosen so that $L'_c({\bf x}) \ge 0$ for all ${\bf x}$. With the constants $k_1$, $k_2$, $k_3$, $k_4$ and
\begin{equation}
{\bf e}_j ({\bf x}) = - \nabla V_j({\bf x}), \quad j=1,2,
\eeq{UB2-1}
define a 4-dimensional vector $E$ by
\begin{equation}
E:= \begin{bmatrix} k_1 {\bf e}_1 + k_2 {\bf e}_2 \\ k_3 {\bf e}_1 + k_4 {\bf e}_2 \end{bmatrix} .
\eeq{UB2}
We then consider
\begin{equation}
W'_c:= \langle E \cdot L'_c E \rangle.
\eeq{UB3}
The minimization problem in this case is
\begin{equation}
W'_c \ge \min_{\displaystyle \underline{{\bf e}_1}, \underline{{\bf e}_2} \atop \displaystyle \langle \underline{{\bf e}_i} \rangle = \langle {\bf e}_i \rangle} \left\langle \begin{bmatrix} k_1 \underline{{\bf e}_1} + k_2 \underline{{\bf e}_2} \\ k_3 \underline{{\bf e}_1} + k_4 \underline{{\bf e}_2} \end{bmatrix} \cdot L'_c \begin{bmatrix} k_1 \underline{{\bf e}_1} + k_2 \underline{{\bf e}_2} \\ k_3 \underline{{\bf e}_1} + k_4 \underline{{\bf e}_2} \end{bmatrix} \right\rangle .
\eeq{UB3-1}
As for \eq{LB24}, one can show that at the minimum of the right hand side of \eq{UB3-1}
\begin{equation}
\displaystyle L'_c ({\bf x}) \begin{bmatrix} k_1 \underline{{\bf e}_1} + k_2 \underline{{\bf e}_2} \\ k_3 \underline{{\bf e}_1} + k_4 \underline{{\bf e}_2} \end{bmatrix} = \mu \ (\mbox{a constant vector})
\eeq{UB3-2}
and the minimum is given by
\begin{align}
\left\langle \begin{bmatrix} k_1 \underline{{\bf e}_1} + k_2 \underline{{\bf e}_2} \\ k_3 \underline{{\bf e}_1} + k_4 \underline{{\bf e}_2} \end{bmatrix} \cdot L'_c \begin{bmatrix} k_1 \underline{{\bf e}_1} + k_2 \underline{{\bf e}_2} \\ k_3 \underline{{\bf e}_1} + k_4 \underline{{\bf e}_2} \end{bmatrix} \right\rangle
= \left\langle \begin{bmatrix} k_1 {\bf e}_1 + k_2 {\bf e}_2 \\ k_3 {\bf e}_1 + k_4 {\bf e}_2 \end{bmatrix} \right\rangle \cdot \langle (L'_c)^{-1} \rangle^{-1} \left\langle \begin{bmatrix} k_1 {\bf e}_1 + k_2 {\bf e}_2 \\ k_3 {\bf e}_1 + k_4 {\bf e}_2 \end{bmatrix} \right\rangle. \label{UB3-3}
\end{align}
Proceeding in exactly the same way as in the previous subsection (with $c$ approaching $\sigma_2$), we can derive the `dual bounds':
\begin{equation}
\frac{\mathop{\rm Tr}\nolimits A - 2b' \sigma_2}{\det A - b'^2 \sigma_2^2} \le \frac{f_2}{\sigma_2} + \frac{2f_1}{\sigma_1+\sigma_2}
\eeq{UB4}
where
\begin{equation}
b' := \langle {\bf e}_1 \cdot R_\bot {\bf e}_2 \rangle
\eeq{UB5}
and
\begin{equation}
A = \begin{bmatrix} a_{11} & a_{12} \\
a_{12} & a_{22} \end{bmatrix}
\eeq{UB6}
in which
\begin{equation}
a_{ij} := \langle {\bf e}_i \cdot \sigma {\bf e}_j \rangle = \langle {\bf e}_i \cdot {\bf j}_j \rangle,
\eeq{UB7}
and linear combinations of the potentials have been chosen so that
\begin{equation}
\langle {\bf e}_1 \rangle = \frac{1}{|\Omega|} \int_{\partial\Omega} -V_1^0 {\bf n} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} , \quad
\langle {\bf e}_2 \rangle = \frac{1}{|\Omega|} \int_{\partial\Omega} -V_2^0 {\bf n} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.
\eeq{UB8}
Apart from this constraint, ${\bf e}_1$ and ${\bf e}_2$ are any fields solving
\begin{equation}
\nabla \cdot \sigma \nabla V_j =0 \quad \mbox{in } \Omega, \quad {\bf e}_j = - \nabla V_j.
\eeq{UB9}
One can obtain from (\ref{UB4}) the upper bound on $f_1$:
\begin{equation}
f_1 \le \frac{\sigma_2 (\sigma_1+\sigma_2)}{(\sigma_1-\sigma_2)} \left[ \frac{1}{\sigma_2} - \frac{\mathop{\rm Tr}\nolimits A - 2b' \sigma_2}{\det A - b'^2 \sigma_2^2} \right].
\eeq{UB10}
We emphasize that $A$ and $b'$ can be computed from the boundary measurements:
\begin{equation}
a_{ij}= \frac{1}{|\Omega|} \int_{\partial\Omega} V_i^0 q_j,
\eeq{UB11}
and
\begin{equation}
b' = -\frac{1}{|\Omega|} \int_{\partial\Omega} V_1^0 {\bf n} \cdot R_\bot {\bf e}_2 = - \frac{1}{|\Omega|} \int_{\partial\Omega} V_1^0 {\bf t} \cdot {\bf e}_2 = \frac{1}{|\Omega|} \int_{\partial\Omega} V_1^0 \frac{\partial V_2^0}{\partial {\bf t}}.
\eeq{UB12}
More generally, if $V_1^0$ and $V_2^0$ do not satisfy \eq{UB8}, then we have the following theorem in the same way as before.
\begin{theorem}\label{thm:UB1}
Let
\begin{equation}
P_D:= \begin{bmatrix} \displaystyle \frac{-1}{|\Omega|} \int_{\partial\Omega} V_1^0 {\bf n} & \ \ \displaystyle \frac{-1}{|\Omega|} \int_{\partial\Omega} V_2^0 {\bf n} \end{bmatrix}^{-1}
\eeq{UB20}
and
\begin{equation}
b_D := \frac{1}{|\Omega|} \int_{\partial\Omega} V_1^0 \frac{\partial V_2^0}{\partial {\bf t}}.
\eeq{UB21}
Then
\begin{equation}
f_1 \le \frac{\sigma_2 (\sigma_1+\sigma_2)}{(\sigma_1-\sigma_2)} \left[ \frac{1}{\sigma_2} - \frac{\mathop{\rm Tr}\nolimits (P_D A P_D^T) - 2 (\det P_D) b_D \sigma_2}{(\det P_D)^2 (\det A - b_D^2 \sigma_2^2)} \right].
\eeq{UB19}
\end{theorem}
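The upper bound of Theorem \ref{thm:UB1} admits the same kind of transcription (Python with NumPy; the function name is our own, and $A$, $b_D$ and $P_D$ are assumed to come from \eq{UB11}, \eq{UB21} and \eq{UB20}).
\begin{verbatim}
import numpy as np

def upper_bound_general_dirichlet(A, bD, PD, s1, s2):
    """Upper bound (UB19) on f_1 for Dirichlet data not satisfying (UB8)."""
    A, PD = np.asarray(A, float), np.asarray(PD, float)
    At = PD @ A @ PD.T                  # P_D A P_D^T
    dPD = np.linalg.det(PD)
    num = np.trace(At) - 2.0 * dPD * bD * s2
    den = dPD ** 2 * (np.linalg.det(A) - (bD * s2) ** 2)
    return s2 * (s1 + s2) / (s1 - s2) * (1.0 / s2 - num / den)
\end{verbatim}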
\subsection{Special boundary data}
In the special case where the Neumann data are given by
\begin{equation}
q_1 = - {\bf n} \cdot {\bf j}_1 = - {\bf n} \cdot \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad q_2 = - {\bf n} \cdot {\bf j}_2 = - {\bf n} \cdot \begin{bmatrix} 0 \\ 1 \end{bmatrix},
\eeq{HS1}
we have
\begin{equation}
b=1 \quad\mbox{and}\quad A=\bfm\sigma_N^{-1}
\eeq{HS2}
where $\bfm\sigma_N$ is the Neumann tensor which is defined via the relation
\begin{equation}
\langle {\bf e} \rangle = \bfm\sigma_N^{-1} \langle {\bf j} \rangle,
\eeq{HS3}
when the Neumann data is given by $q=-{\bf n}\cdot{\bf v}$ for some constant vector ${\bf v}$.
In fact, we have from \eq{LB14} and (\ref{LB17})
\begin{equation}
b= \frac{1}{|\Omega|} \int_{\partial\Omega} ({\bf j}_1 \cdot {\bf n}) \varphi_2^0 = \frac{1}{|\Omega|} \begin{bmatrix} 1 \\ 0 \end{bmatrix} \cdot \int_{\partial\Omega} {\bf n} \varphi_2^0 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \cdot \langle \nabla \varphi_2 \rangle = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \cdot R_\bot \begin{bmatrix} 0 \\ 1 \end{bmatrix} =1,
\eeq{HS4}
and from (\ref{LB9})
\begin{equation}
a_{ij} = \frac{1}{|\Omega|} \int_{\partial\Omega} V_i^0 q_j = -\frac{1}{|\Omega|} \langle {\bf j}_j \rangle \cdot \int_{\partial\Omega} V_i^0 {\bf n} = \langle {\bf j}_j \rangle \cdot \langle {\bf e}_i \rangle = \langle {\bf j}_i \rangle \cdot \bfm\sigma_N^{-1} \langle {\bf j}_j \rangle .
\eeq{HS5}
So, the bound (\ref{LB39}) reduces to the bound
\begin{equation}
f_1 \ge \frac{(\sigma_1 + \sigma_2)}{\sigma_1 (\sigma_1 - \sigma_2)} \left[ \frac{\mathop{\rm Tr}\nolimits \bfm\sigma_N^{-1} - 2/\sigma_1}{\det \bfm\sigma_N^{-1} - 1/\sigma_1^2} -
\frac{2 \sigma_1 \sigma_2}{\sigma_1 + \sigma_2} \right].
\eeq{HS6}
of Milton \cite{milt11}.
If the Dirichlet data take the special affine form
\begin{equation}
V_1^0 = - \begin{bmatrix} 1 \\ 0 \end{bmatrix} \cdot {\bf x}, \quad V_2^0 = - \begin{bmatrix} 0 \\ 1 \end{bmatrix} \cdot {\bf x},
\eeq{HS7}
one can prove in the same way that
\begin{equation}
b'=1 \quad\mbox{and}\quad A=\bfm\sigma_D
\eeq{HS8}
where $\bfm\sigma_D$ is the Dirichlet tensor which is defined via the relation
\begin{equation}
\bfm\sigma_D \langle {\bf e} \rangle = \langle {\bf j} \rangle
\eeq{HS9}
when the Dirichlet data $V^0$ is given by $-{\bf v} \cdot {\bf x}$ for some constant vector ${\bf v}$. Thus the bound (\ref{UB10}) reduces to the other bound
\begin{equation}
f_1 \le \frac{\sigma_2 (\sigma_1+\sigma_2)}{(\sigma_1-\sigma_2)} \left[ \frac{1}{\sigma_2} - \frac{\mathop{\rm Tr}\nolimits \bfm\sigma_D - 2 \sigma_2}{\det \bfm\sigma_D - \sigma_2^2} \right].
\eeq{HS10}
of Milton \cite{milt11}.
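With the special data, the two bounds depend only on $\bfm\sigma_N$ and $\bfm\sigma_D$; a sketch of \eq{HS6} and \eq{HS10} (Python with NumPy; the function name is our own) is the following.
\begin{verbatim}
import numpy as np

def bounds_from_special_data(sigN, sigD, s1, s2):
    """Lower bound (HS6) from the Neumann tensor sigma_N and upper
    bound (HS10) from the Dirichlet tensor sigma_D."""
    SNi = np.linalg.inv(np.asarray(sigN, float))
    SD = np.asarray(sigD, float)
    lower = (s1 + s2) / (s1 * (s1 - s2)) * (
        (np.trace(SNi) - 2.0 / s1) / (np.linalg.det(SNi) - 1.0 / s1**2)
        - 2.0 * s1 * s2 / (s1 + s2))
    upper = s2 * (s1 + s2) / (s1 - s2) * (
        1.0 / s2 - (np.trace(SD) - 2.0 * s2) / (np.linalg.det(SD) - s2**2))
    return lower, upper
\end{verbatim}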
\section{Attainability conditions of the bounds}
In this section we derive conditions on the fields for the bounds in \eq{LB39} and \eq{UB10} to be attained. We will show in the next section that the bounds are actually attained by certain inclusions.
The derivation of the lower bound on $f_1$, and in particular \eq{LB24} and \eq{LB25}, suggests that if there is no column vector ${\bf K}=(k_1,k_2,k_3,k_4)^T$, with say
$|{\bf K}|^2=k_1^2+k_2^2+k_3^2+k_4^2=1$, such that
\begin{equation} L_{\sigma_1^{-1}} \begin{bmatrix} k_1 {{\bf j}_1} + k_2 {{\bf j}_2} \\ k_3 {{\bf j}_1} + k_4 {{\bf j}_2} \end{bmatrix} = \langle (L_{\sigma_1^{-1}})^{-1} \rangle^{-1} \left\langle \begin{bmatrix} k_1 {{\bf j}_1} + k_2 {{\bf j}_2} \\ k_3 {{\bf j}_1} + k_4 {{\bf j}_2} \end{bmatrix} \right\rangle ,
\eeq{ACB1}
then the lower bound will not be attained. Here, $\langle (L_{\sigma_1^{-1}})^{-1} \rangle^{-1}$ is understood as the limit of $\langle L_{c}^{-1} \rangle^{-1}$
as $c$ tends to $\sigma_1^{-1}$. To prove this, fix $c_0 < \sigma_1^{-1}$ and let
\begin{equation}
{\bf F}_c ({\bf x}): = L_{c}({\bf x}) \begin{bmatrix} k_1 {{\bf j}_1} + k_2 {{\bf j}_2} \\ k_3 {{\bf j}_1} + k_4 {{\bf j}_2} \end{bmatrix} - \langle L_{c}^{-1} \rangle^{-1} \left\langle \begin{bmatrix} k_1 {{\bf j}_1} + k_2 {{\bf j}_2} \\ k_3 {{\bf j}_1} + k_4 {{\bf j}_2} \end{bmatrix} \right\rangle.
\eeq{ACB2}
for $c$ such that $c_0 \le c < \sigma_1^{-1}$. Then, we have
\begin{align}
& \langle {\bf F}_c \cdot L_{c_0}^{-1} {\bf F}_c \rangle \le \langle {\bf F}_c \cdot L_{c}^{-1} {\bf F}_c \rangle \nonumber \\
& = \left\langle \begin{bmatrix} k_1 {{\bf j}_1} + k_2 {{\bf j}_2} \\ k_3 {{\bf j}_1} + k_4 {{\bf j}_2} \end{bmatrix} \cdot L_{c} \begin{bmatrix} k_1 {{\bf j}_1} + k_2 {{\bf j}_2} \\ k_3 {{\bf j}_1} + k_4 {{\bf j}_2} \end{bmatrix} \right\rangle - \left\langle \begin{bmatrix} k_1 {{\bf j}_1} + k_2 {{\bf j}_2} \\ k_3 {{\bf j}_1} + k_4 {{\bf j}_2} \end{bmatrix} \right\rangle \cdot \langle L_{c}^{-1} \rangle^{-1} \left\langle \begin{bmatrix} k_1 {{\bf j}_1} + k_2 {{\bf j}_2} \\ k_3 {{\bf j}_1} + k_4 {{\bf j}_2} \end{bmatrix} \right\rangle \nonumber \\
&={\bf K}\cdot D_{c}{\bf K}-{\bf K}\cdot\langle L_{c}^{-1}\rangle^{-1}{\bf K} \label{ACB3}
\end{align}
Letting $c\to \sigma_1^{-1}$ we see that if ${\bf F}_{\sigma_1^{-1}}$ is non-zero (in the $L^2$ norm) for all ${\bf K}$ with $|{\bf K}|=1$ then the right hand side of \eq{ACB3} is non-zero in this limit, or equivalently
$D_{\sigma_1^{-1}} > \alpha\langle (L_{\sigma_1^{-1}})^{-1} \rangle^{-1}$ for some $\alpha>1$. It follows that equality is not achieved in \eq{LB35} and hence in \eq{LB39}, {\it i.e.}, the lower bound on the volume fraction is not
attained.
Conversely, suppose we have equality in \eq{ACB1} for some ${\bf K}\ne 0$. Then,
\begin{equation} {\bf K}\cdot D_{\sigma_1^{-1}}{\bf K}={\bf K}\cdot\langle (L_{\sigma_1^{-1}})^{-1}\rangle^{-1}{\bf K},
\eeq{ACB4}
and as $D_{\sigma_1^{-1}} \ge \langle (L_{\sigma_1^{-1}})^{-1} \rangle^{-1}$ it follows that
$D_{\sigma_1^{-1}}-\langle (L_{\sigma_1^{-1}})^{-1} \rangle^{-1}$ must have zero determinant. A simple calculation shows that
\begin{equation}
\langle (L_{\sigma_1^{-1}})^{-1} \rangle^{-1} = \frac{1}{g} \begin{bmatrix} I & R_\bot \\ -R_\bot & I \end{bmatrix}
\eeq{ACB5}
where
\begin{equation}
g:= f_1 \frac{\sigma_1(\sigma_1 - \sigma_2)}{\sigma_1 + \sigma_2}+
\frac{2 \sigma_1\sigma_2}{\sigma_1 + \sigma_2}.
\eeq{ACB6}
Hence the matrix
\begin{equation}
\begin{bmatrix} Q^T & 0 \\ 0 & Q^T \end{bmatrix}[D_{\sigma_1^{-1}}-\langle (L_{\sigma_1^{-1}})^{-1} \rangle^{-1}]\begin{bmatrix} Q & 0 \\ 0 & Q \end{bmatrix}
= \begin{bmatrix} \lambda_1-1/g & 0 & 0 & b/\sigma_1-1/g \\ 0 & \lambda_2-1/g & -b/\sigma_1+1/g & 0 \\ 0 & -b/\sigma_1+1/g & \lambda_1-1/g & 0 \\
b/\sigma_1-1/g & 0 & 0 & \lambda_2-1/g \end{bmatrix}
\eeq{ACB7}
must have zero determinant, which implies
\begin{equation} \lambda_1\lambda_2-(\lambda_1+\lambda_2)/g-b^2/\sigma_1^2+2b/(g\sigma_1)=0.
\eeq{ACB8}
Thus equality holds in \eq{LB37a} and the lower bound on $f_1$ is attained.
In summary, the attainability condition is that for some $k_1$, $k_2$, $k_3$ and $k_4$,
\begin{equation}
L_{\sigma_1^{-1}} {\bf J} = \langle (L_{\sigma_1^{-1}})^{-1} \rangle^{-1} \langle {\bf J} \rangle
\eeq{AC8}
where
\begin{equation}
{\bf J}= \begin{bmatrix} k_1 {{\bf j}_1} + k_2 {{\bf j}_2} \\ k_3 {{\bf j}_1} + k_4 {{\bf j}_2} \end{bmatrix} .
\eeq{AC9}
From \eq{ACB5} a vector ${\bf U}$ in the range of $\langle (L_{\sigma_1^{-1}})^{-1} \rangle^{-1}$ takes the form
\begin{equation}
{\bf U}= \begin{bmatrix} a_1 \\ a_2 \\ -a_2 \\ a_1\end{bmatrix}
\eeq{AC12}
for some $a_1$ and $a_2$.
We have the following theorem.
\begin{theorem}
The attainability condition \eq{AC8} for the lower bound holds if and only if
\begin{equation}
L_{\sigma_1^{-1}} {\bf J} = {\bf U}
\eeq{AC13}
for some ${\bf U}$ of the form \eq{AC12}.
\end{theorem}
\proof The `only if' part is trivial. Suppose that \eq{AC13} holds. We write $L=L_{\sigma_1^{-1}}$ for ease of notation. One can see from the definition \eq{LB6} of $L$ that $L_1$ (=$L$ on phase 1) and $L_2$ (=$L$ on phase 2) can be simultaneously diagonalized. Thus in that basis \eq{AC13} reads as
\begin{equation}
\left[ \lambda_1^{(j)} \chi_1({\bf x}) + \lambda_2^{(j)} \chi_2({\bf x}) \right] J^{(j)}({\bf x}) = U^{(j)}, \quad j=1,2,3,4.
\eeq{AC14}
Here $\lambda_1^{(j)}$ and $\lambda_2^{(j)}$ are the eigenvalues of $L_1$ and $L_2$, respectively,
and $J^{(j)}({\bf x})$ and $U^{(j)}$ are the $j$-th components of ${\bf J}$ and ${\bf U}$ in the new basis.
Since $L_1$ has rank 2, two of the eigenvalues $\lambda_1^{(j)}$ are zero, say $\lambda_1^{(3)}$ and $\lambda_1^{(4)}$, and hence $\chi_1 J^{(3)}({\bf x})$ and $\chi_1 J^{(4)}({\bf x})$ may depend on ${\bf x}$. However, $J^{(j)}({\bf x})$ for $j=1,2$ is piecewise constant, and by \eq{AC14},
\begin{equation}
J^{(j)}({\bf x}) = \left\{
\begin{array}{l}
U^{(j)}/\lambda_1^{(j)} \quad \mbox{in phase 1} \\
U^{(j)}/\lambda_2^{(j)} \quad \mbox{in phase 2} .
\end{array}
\right.
\eeq{AC16}
Thus we have
\begin{equation}
\langle J^{(j)} \rangle = \left[ f_1 /\lambda_1^{(j)} + f_2 /\lambda_2^{(j)} \right] U^{(j)} = \langle L^{-1} \rangle_{jj} U^{(j)}, \quad j=1,2.
\eeq{AC17}
Here $\langle L^{-1} \rangle_{jj}$ is the $(j,j)$-entry of the diagonal matrix $\langle L^{-1} \rangle$. So,
\begin{equation}
U^{(j)}= (\langle L^{-1} \rangle^{-1})_{jj} \langle J^{(j)} \rangle, \quad j=1,2.
\eeq{AC18}
If $j=3,4$, then $(\langle L^{-1} \rangle^{-1})_{jj}=0$, and ${\bf U}$, which belongs to the range of $\langle L^{-1} \rangle^{-1}$, satisfies $U^{(3)}=U^{(4)}=0$; hence \eq{AC18} holds for all $j$. Therefore
\begin{equation}
{\bf U}= \langle L^{-1} \rangle^{-1} \langle {\bf J} \rangle,
\eeq{AC19}
and hence \eq{AC8} holds. \qed
Similarly one can show that the attainability condition for the upper bound is that for some $k_1$, $k_2$, $k_3$ and $k_4$
\begin{equation}
L'_{\sigma_2} {\bf E} = \langle (L'_{\sigma_2})^{-1} \rangle^{-1} \langle {\bf E} \rangle
\eeq{AC20}
where
\begin{equation}
{\bf E} = \begin{bmatrix} k_1 {{\bf e}_1}+ k_2 {\bf e}_2 \\ k_3 {\bf e}_1 + k_4 {{\bf e}_2} \end{bmatrix},
\eeq{AC21}
and it is equivalent to
\begin{equation}
L'_{\sigma_2} {\bf E} = {\bf U}
\eeq{AC22}
for some ${\bf U}$ of the form \eq{AC12}. We emphasize that the attainability conditions \eq{AC13} and \eq{AC22} are precisely analogous to those found by Grabovsky \cite{grab} for
composites.
\section{Attainability and uniformity}
We now investigate the attainability condition more closely. The condition \eq{AC22} says that the field ${\bf E}$ is uniform in phase 1. This condition alone guarantees that the upper bound is attained. In fact, we show in this section that even more is true: if the field is uniform in phase 1 for a single boundary data $V^0=V^0_1$, then there is a $V^0_2$ such that the upper bound is attained.
\begin{theorem}\label{thm:AU1}
Suppose that phase 1 and 2 have finitely many connected (possibly multiply connected) components and the interfaces are Lipschitz continuous. Let $V$ be the solution to
\begin{equation}
\left\{
\begin{array}{l}
\nabla \cdot \sigma \nabla V=0 \quad \mbox{in } \Omega, \\
V=V^0 \quad \mbox{on } \partial\Omega.
\end{array}
\right.
\eeq{AU1}
If $-\nabla V$ is constant (the field is uniform) in phase 1 for some boundary data $V^0=V^0_1 \neq 0$, then there is a $V^0_2$ such that the upper bound is attained.
\end{theorem}
\proof Phase 1 can be broken into connected components $\Psi_1^{(\alpha)}$, $\alpha= 1, 2, \ldots, m$, and phase 2 can be broken into connected components $\Psi_2^{(\beta)}$, $\beta= 1, 2, \ldots, n$.
If $\Psi_2^{(\beta)}$ has a boundary in common with $\Psi_1^{(\alpha)}$, we denote the common boundary by $\Gamma^{\alpha\beta}$.
Let $V_\beta({\bf x})$ denote the potential $V({\bf x})$ inside $\Psi_2^{(\beta)}$. If $-\nabla V= \begin{bmatrix} e_1 \\ e_2 \end{bmatrix}$ in phase 1 for some constants $e_1$ and $e_2$, then
\begin{equation}
V({\bf x}) = -e_1 x - e_2 y + c_\alpha
\eeq{AU2}
for some constant $c_\alpha$ inside $\Psi_1^{(\alpha)}$ (where $c_\alpha=c_\gamma$ if $\Psi_1^{(\alpha)}$ touches $\Psi_1^{(\gamma)}$ at a common point),
and the continuity of the potential on $\Gamma^{\alpha\beta}$ implies
\begin{equation}
V_\beta ({\bf x}) = -e_1 x - e_2 y + c_\alpha \quad \mbox{on } \Gamma^{\alpha\beta}.
\eeq{AU3}
Since $\nabla \cdot {\bf j}({\bf x})=0$ in $\Omega$, there is a continuous potential $W({\bf x})$ such that
\begin{equation}
{\bf j} ({\bf x}) = - \sigma_2 R_\bot \nabla W({\bf x}) \quad \mbox{in } \Omega.
\eeq{AU4}
In phase 1, inside $\Psi_1^{(\alpha)}$, we have
\begin{equation}
\nabla W({\bf x}) = \frac{\sigma_1}{\sigma_2} \begin{bmatrix} e_2 \\ -e_1 \end{bmatrix},
\eeq{AU5}
and hence
\begin{equation}
W({\bf x}) = \frac{\sigma_1}{\sigma_2} (e_2 x - e_1 y) + d_\alpha
\eeq{AU6}
for some constant $d_\alpha$ (where, by continuity of the potential $W$, $d_\alpha=d_\gamma$ if $\Psi_1^{(\alpha)}$ touches $\Psi_1^{(\gamma)}$ at a common point).
Let $W_\beta({\bf x})$ denote the potential $W({\bf x})$ inside $\Psi_2^{(\beta)}$. Since $W({\bf x})$ is continuous,
\begin{equation}
W_\beta ({\bf x}) = \frac{\sigma_1}{\sigma_2} (e_2 x - e_1 y) + d_\alpha \quad\mbox{on } \Gamma^{\alpha\beta}.
\eeq{AU8}
Note that inside $\Psi_2^{(\beta)}$,
\begin{equation}
\nabla V({\bf x})= -\frac{1}{\sigma_2} {\bf j}({\bf x}) = R_\bot \nabla W({\bf x}),
\eeq{AU9}
{\it i.e.}, $V_{\beta, x}=W_{\beta,y}$ and $V_{\beta, y}=-W_{\beta,x}$, which are the Cauchy-Riemann equations. Thus $V_\beta+i W_\beta$ is an analytic function of $z=x+iy$.
Now consider
\begin{align}
V'_\beta({\bf x}) &:= -W_\beta({\bf x}) + (\sigma_1/\sigma_2+1)(e_2 x - e_1 y) \label{AU10} \\
W'_\beta({\bf x}) &:= V_\beta({\bf x}) + (\sigma_1/\sigma_2+1)(e_1 x + e_2 y). \label{AU11}
\end{align}
Clearly
\begin{equation}
V'_\beta + i W'_\beta = i (V_\beta + i W_\beta) + (\sigma_1/\sigma_2+1)(e_2+ie_1)(x+iy)
\eeq{AU12}
is an analytic function of $z$. On $\Gamma^{\alpha\beta}$, we have
\begin{align}
V'_\beta &= - (\sigma_1/\sigma_2) (e_2 x- e_1 y) - d_\alpha + (\sigma_1/\sigma_2+1)(e_2 x - e_1 y) = e_2 x - e_1 y - d_\alpha, \label{AU13} \\
W'_\beta &= - e_1 x- e_2 y +c_\alpha + (\sigma_1/\sigma_2+1)(e_1 x + e_2 y)= (\sigma_1/\sigma_2) (e_1 x + e_2 y) + c_\alpha. \label{AU14}
\end{align}
So the conductivity equation $\nabla \cdot \sigma \nabla V=0$ is satisfied with potentials $V'$ and $W'$ defined by $V'=V'_\beta$, $W'=W'_\beta$ in $\Psi_2^{(\beta)}$, and
\begin{equation}
V'({\bf x})= e_2 x - e_1 y - d_\alpha, \quad W'({\bf x})= (\sigma_1/\sigma_2) (e_1 x + e_2 y) + c_\alpha
\eeq{AU15}
in $\Psi_1^{(\alpha)}$. Note that
\begin{equation}
-\nabla V'= \begin{bmatrix} -e_2 \\ e_1 \end{bmatrix}.
\eeq{AU16}
We then have, in $\Psi_2^{(\beta)}$
\begin{align}
L'_{\sigma_2} {\bf E} & = \begin{bmatrix} \sigma_2 I & \sigma_2 R_\bot \\ - \sigma_2 R_\bot & \sigma_2 I \end{bmatrix} \begin{bmatrix} -\nabla V \\ -\nabla V' \end{bmatrix} = \begin{bmatrix} \sigma_2 I & \sigma_2 R_\bot \\ - \sigma_2 R_\bot & \sigma_2 I \end{bmatrix} \begin{bmatrix} -\nabla V_\beta \\ \nabla W_\beta - (\sigma_1/\sigma_2 +1) \begin{bmatrix} e_2 \\ - e_1 \end{bmatrix} \end{bmatrix} \nonumber \\
& = \begin{bmatrix} - \sigma_2 (\nabla V_\beta - R_\bot \nabla W_\beta) + (\sigma_1 + \sigma_2) \begin{bmatrix} e_1 \\ e_2 \end{bmatrix} \\ \sigma_2 (R_\bot \nabla V_\beta + \nabla W_\beta) + (\sigma_1 + \sigma_2) \begin{bmatrix} -e_2 \\ e_1 \end{bmatrix} \end{bmatrix} = (\sigma_1 + \sigma_2) \begin{bmatrix} e_1 \\ e_2 \\ -e_2 \\ e_1 \end{bmatrix}, \label{AU17}
\end{align}
and in phase 1
\begin{align}
L'_{\sigma_2} {\bf E} & = \begin{bmatrix} \sigma_1 I & \sigma_2 R_\bot \\ - \sigma_2 R_\bot & \sigma_1 I \end{bmatrix} \begin{bmatrix} -\nabla V \\ -\nabla V' \end{bmatrix} = \begin{bmatrix} \sigma_1 I & \sigma_2 R_\bot \\ - \sigma_2 R_\bot & \sigma_1 I \end{bmatrix} \begin{bmatrix} \begin{bmatrix} e_1 \\ e_2 \end{bmatrix} \\ \begin{bmatrix} -e_2 \\ e_1 \end{bmatrix} \end{bmatrix} \nonumber \\
& = (\sigma_1 + \sigma_2) \begin{bmatrix} e_1 \\ e_2 \\ -e_2 \\ e_1 \end{bmatrix}. \label{AU18}
\end{align}
Thus $L'_{\sigma_2} {\bf E}={\bf U}$ where ${\bf U}$ is of the form \eq{AC12}. Hence the upper bound is attained when we take the boundary data $V^0_1=V^0$ and $V^0_2=V'|_{\partial\Omega}$. \qed
Observe that the Dirichlet condition in \eq{AU1} may be replaced with the Neumann condition. One can prove in exactly the same way that the lower bound is attained if the field is uniform in phase 2.
\section{Attainability and analyticity}
We have seen that uniformity and independence of the fields ${\bf e}_1=-\nabla V_1$ and ${\bf e}_2=-\nabla V_2$ in phase 1 are necessary and sufficient
to ensure that the upper bound is attained. Now we will see there is a condition on the potentials $V_1$ and $V_2$
in phase 2 which is also necessary and sufficient to ensure that the upper bound is attained. We assume that phase 2
is connected and completely surrounds each inclusion of phase 1.
First suppose that the upper bound is attained.
Then, given a constant $k$, there exist potentials $V$ and $V'$, which are linear combinations of the
potentials $V_1$ and $V_2$,
such that in phase 1 $-\nabla V= \begin{bmatrix} k \\ 0 \end{bmatrix}$ and
$-\nabla V'= \begin{bmatrix} 0 \\ k \end{bmatrix}$. Thus the analysis of the previous section holds
with $e_1=k$ and $e_2=0$. In particular, we may choose $k=1/(\sigma_1/\sigma_2+1)$ and, since in phase 2 $V+iW$
is an analytic function of $z$, it follows from \eq{AU10} that $V-iV'+x$ is an analytic function of $z=x+iy$ in phase 2.
Conversely suppose there exist potentials $V$ and $V'$, which are linear combinations of the
potentials $V_1$ and $V_2$, such that $V-iV'+x$ is an analytic function of $z$ in phase 2. Then the
harmonic conjugate to $V$ in phase 2 is $-V'-y$ and the harmonic conjugate to $V'$ in phase 2
is $V+x$. Since by \eq{AU9} these
harmonic conjugates can be identified with the potentials $W$ and $W'$, we have in phase 2
\begin{equation} W=-V'-y,\quad W'=V+x,
\eeq{AE12}
and in particular these identities hold on the boundary of an inclusion of phase 1. By \eq{AU4} inside that inclusion
$V+i(\sigma_2/\sigma_1)W$ and $V'+i(\sigma_2/\sigma_1)W'$ are analytic functions of $z$. Therefore
$$
V'+i(\sigma_2/\sigma_1)W'-i(\sigma_1/\sigma_2)(V+i(\sigma_2/\sigma_1)W)-i(x+iy)
$$
is also an analytic function of $z$ inside the inclusion and from \eq{AE12} takes the value
$$
i(\sigma_2/\sigma_1-1)[(\sigma_1/\sigma_2+1)V+x]
$$
at the boundary of the inclusion. Since the only function which is analytic in the interior of a
closed curve and has zero real part on that curve is an imaginary constant, we deduce that $(\sigma_1/\sigma_2+1)V+x$ is constant around the boundary of
the inclusion and hence constant inside, {\it i.e.}, in the inclusion $-\nabla V= \begin{bmatrix} k \\ 0 \end{bmatrix}$
with $k=1/(\sigma_1/\sigma_2+1)$.
The harmonic conjugate to $V$ inside the inclusion is then $-ky$ which can
be identified with $(\sigma_2/\sigma_1)W$ (to within an additive constant). Then from the first condition in \eq{AE12}
it follows that (to within an additive constant) $V'$ takes the value $-ky$ around the
boundary of the inclusion and hence in its interior
too, {\it i.e.}, in the inclusion $-\nabla V'= \begin{bmatrix} 0 \\ k \end{bmatrix}$. Hence the uniform field
attainability condition is met, and the upper bound is attained.
We summarize our findings as a theorem.
\begin{theorem}
Provided the body $\Omega$ consists of inclusions of phase 1 completely surrounded by phase 2 then the upper bound
is attained if and only if there exist potentials $V$ and $V'$, which are linear combinations of the
potentials $V_1$ and $V_2$, such that $V-iV'+x$ is an analytic function of $z=x+iy$ in phase 2.
\end{theorem}
Similarly we have the following theorem for the lower bound.
\begin{theorem}
Provided the body $\Omega$ consists of inclusions of phase 2 completely surrounded by phase 1 then the lower bound
is attained if and only if there exist potentials $V$ and $V'$, which are linear combinations of the
potentials $V_1$ and $V_2$, such that $V-iV'+x$ is an analytic function of $z=x+iy$ in phase 1.
\end{theorem}
\section{Asymptotic bounds for small volume fraction}
Suppose that the phase 1 occupies a region $\omega \subset \Omega$ satisfying
\begin{equation}
\mbox{dist} (\omega, \partial\Omega) \ge c
\eeq{AB1}
for some $c>0$. The purpose of this section is to compare the bounds \eq{LB39} and \eq{UB10} with the bounds obtained in \cite{CV03} when the volume $|\omega|$ of $\omega$ tends to $0$.
Let $q$ be a function in $L^2(\partial\Omega)$ satisfying $\int_{\partial\Omega} q=0$, and let $V$ be the solution to
\begin{equation}
\left\{
\begin{array}{l}
\nabla \cdot \sigma \nabla V=0 \quad \mbox{in } \Omega, \\
\noalign{\smallskip}
\displaystyle \sigma \frac{\partial V}{\partial {\bf n}} = q \quad \mbox{on } \partial \Omega, \quad (\int_{\partial\Omega} V =0),
\end{array}
\right.
\eeq{AB2}
and let $U$ be the solution to \eq{AB2} with $\sigma$ replaced with $\sigma_2$. It is proved in \cite{CV022} that given a sequence $\omega_n$ satisfying \eq{AB1} and such that $|\omega_n| \to 0$
there is a subsequence still denoted $\omega_n$, a probability measure $d\mu$ supported in the set $\{ x ~|~ \mbox{dist} (x, \partial\Omega) \ge c \}$, and a (pointwise) polarization tensor field $M({\bf x})$ such that if $V_n$ is the solution to \eq{AB2} when $\omega=\omega_n$, then
\begin{equation}
V_n ({\bf x}) - U({\bf x}) = -|\omega_n| \int_{\Omega} \nabla U({\bf z}) \cdot M({\bf z}) \nabla_z N({\bf x}, {\bf z}) d\mu({\bf z}) + o(|\omega_n|), \quad {\bf x} \in \partial\Omega,
\eeq{AB3}
where $N({\bf x},{\bf z})$ is the Neumann function for $\Omega$, {\it i.e.}, $U$ is given by
\begin{equation}
U({\bf z})= \int_{\partial\Omega} N({\bf x}, {\bf z}) q({\bf x}) ds({\bf x}).
\eeq{AB4}
Note that we have absorbed a factor of $\sigma_1-\sigma_2$ into the definition of $M$ given by Capdeboscq and Vogelius to be consistent with the conventional definition of polarization tensors.
Let $V'_n$ be the solution to
\begin{equation}
\left\{
\begin{array}{l}
\nabla \cdot \sigma \nabla V=0 \quad \mbox{in } \Omega, \\
V = V^0 \quad \mbox{on } \partial \Omega
\end{array}
\right.
\eeq{AB5}
with $\omega=\omega_n$ and $U'$ be the solution to \eq{AB5} with $\sigma$ replaced with $\sigma_2$. Then we have
\begin{equation}
\sigma_2 \frac{\partial V'_n}{\partial {\bf n}} ({\bf x}) - \sigma_2 \frac{\partial U'}{\partial {\bf n}} ({\bf x}) = |\omega_n| \int_{\Omega} \nabla U'({\bf z}) \cdot M({\bf z}) \nabla_z \frac{\partial }{\partial {\bf n}_{\bf x}} G({\bf x}, {\bf z}) d \mu({\bf z}) + o(|\omega_n|), \quad {\bf x} \in \partial\Omega,
\eeq{AB6}
where $G({\bf x}, {\bf z})$ is the Green function for $\Omega$, {\it i.e.}, $U'$ is given by
\begin{equation}
U'({\bf z})= \int_{\partial\Omega} \frac{\partial}{\partial {\bf n}_{\bf x}} G({\bf x}, {\bf z}) V^0({\bf x}) ds({\bf x}).
\eeq{AB7}
To see \eq{AB6} let us define the Neumann-to-Dirichlet (NtD) map $\Lambda_\sigma$ by
\begin{equation}
\Lambda_\sigma [q]:= V|_{\partial\Omega}
\eeq{AB8}
where $V$ the solution to \eq{AB2}. Let $\Lambda_{\sigma_2}$ be the NtD map when $\sigma$ is replaced with $\sigma_2$. Observe that
because of \eq{AB4}, we have
\begin{align}
\int_{\Omega} \nabla U({\bf z}) \cdot M({\bf z}) \nabla_z N({\bf x}, {\bf z}) d \mu({\bf z}) &= \int_{\partial\Omega} \left[ \int_{\Omega} \nabla_z N({\bf y}, {\bf z})\cdot M({\bf z}) \nabla_z N({\bf x}, {\bf z}) d \mu({\bf z}) \right] q({\bf y}) ds({\bf y}) \nonumber \\
& := K[q]({\bf x}). \label{AB9}
\end{align}
So \eq{AB3} can be rewritten as
\begin{equation}
\Lambda_\sigma[q]= \Lambda_{\sigma_2}[q] - |\omega_n| K[q] + o(|\omega_n|).
\eeq{AB10}
Then the Dirichlet-to-Neumann map $\Lambda_\sigma^{-1}$ is given by
\begin{equation}
\Lambda_\sigma^{-1} = (I - |\omega_n| \Lambda_{\sigma_2}^{-1} K)^{-1} \Lambda_{\sigma_2}^{-1} + o(|\omega_n|) = \Lambda_{\sigma_2}^{-1} + |\omega_n| \Lambda_{\sigma_2}^{-1} K \Lambda_{\sigma_2}^{-1} + o(|\omega_n|).
\eeq{AB11}
Observe that
\begin{equation}
\Lambda_{\sigma_2}^{-1} [N(\cdot, {\bf z})]({\bf x}) = \frac{\partial}{\partial {\bf n}_{\bf x}} G({\bf x}, {\bf z}), \quad {\bf x} \in \partial\Omega, \ \ {\bf z} \in \Omega.
\eeq{AB12}
In fact,
\begin{equation}
\int_{\partial\Omega} \Lambda_{\sigma_2}^{-1} [N(\cdot, {\bf z})]({\bf x}) V^0({\bf x}) = \int_{\partial\Omega} N({\bf x}, {\bf z}) \sigma_2 \frac{\partial U'}{\partial{\bf n}} ({\bf x}) = U'({\bf z}) \quad \mbox{for all } {\bf z} \in \Omega,
\eeq{AB13}
and hence \eq{AB12} follows. We now obtain \eq{AB6} from \eq{AB11}.
Let $U_1({\bf x})= -\sigma_2^{-1} x$ and $U_2({\bf x})= -\sigma_2^{-1} y$, and let $V_j$ be the solution to \eq{AB2} with $q=q_j$ for $j=1,2$, where $q_j=\sigma_2 \frac{\partial U_j}{\partial {\bf n}}=-n_j$. Then, we have
\begin{equation}
[\bfm\sigma_N^{-1}]_{ij}= a_{ij}= \frac{1}{|\Omega|} \int_{\partial\Omega} V_i q_j = \frac{1}{|\Omega|} \int_{\partial\Omega} (V_i - U_i) q_j + \sigma_2^{-1} \delta_{ij}
\eeq{AB14}
where $\delta_{ij}$ is Kronecker's delta. Since
\begin{equation}
\int_{\partial\Omega} N({\bf x}, {\bf z}) q_j({\bf x}) ds({\bf x}) = U_j({\bf z}),
\eeq{AB15}
we have from \eq{AB3} that
\begin{equation}
\frac{1}{|\Omega|} \int_{\partial\Omega} (V_i - U_i) q_j = - |\omega_n| \frac{1}{\sigma_2^2 |\Omega|} \int_{\Omega} M_{ij} ({\bf x}) d\mu(x) + o(|\omega_n|) .
\eeq{AB16}
Thus we have
\begin{equation}
\bfm\sigma_N^{-1}= - f_1 \sigma_2^{-2} M + \sigma_2^{-1} I + o(f_1)
\eeq{AB17}
where
\begin{equation}
M:= \int_{\Omega} M ({\bf x})\, d\mu({\bf x}).
\eeq{AB18}
We then have
\begin{align}
\frac{\mathop{\rm Tr}\nolimits\bfm\sigma_N^{-1} - 2 \sigma_1^{-1}}{\det\bfm\sigma_N^{-1} - \sigma_1^{-2}}
&= \frac{-\frac{f_1}{\sigma_2^2} \mathop{\rm Tr}\nolimits M + \frac{2}{\sigma_2} - \frac{2}{\sigma_1} + o(f_1)}{-\frac{f_1}{\sigma_2^3} \mathop{\rm Tr}\nolimits M + \frac{1}{\sigma_2^2} - \frac{1}{\sigma_1^2} + o(f_1)} \nonumber \\
&= \frac{ \frac{2(\sigma_1-\sigma_2)}{\sigma_1 \sigma_2} \left[ 1 -\frac{f_1 \sigma_1}{2\sigma_2(\sigma_1 - \sigma_2)} \mathop{\rm Tr}\nolimits M \right] + o(f_1)}{ \frac{(\sigma_1^2 -\sigma_2^2)}{\sigma_1^2 \sigma_2^2} \left[ 1 -\frac{f_1 \sigma_1^2}{\sigma_2(\sigma_1^2 - \sigma_2^2)} \mathop{\rm Tr}\nolimits M \right] + o(f_1)} \nonumber \\
&= \frac{2 \sigma_1 \sigma_2}{\sigma_1 + \sigma_2} \left[ 1 + \frac{f_1 \sigma_1}{2\sigma_2(\sigma_1 + \sigma_2)} \mathop{\rm Tr}\nolimits M \right] + o(f_1), \label{AB19}
\end{align}
and hence
\begin{equation}
\frac{(\sigma_1 + \sigma_2)}{\sigma_1 (\sigma_1 - \sigma_2)} \left[ \frac{\mathop{\rm Tr}\nolimits \bfm\sigma_N^{-1} - 2/\sigma_1}{\det \bfm\sigma_N^{-1} - 1/\sigma_1^2} -
\frac{2 \sigma_1 \sigma_2}{\sigma_1 + \sigma_2} \right] = \frac{f_1 \sigma_1}{(\sigma_1^2 - \sigma_2^2)} \mathop{\rm Tr}\nolimits M + o(f_1).
\eeq{AB20}
The lower bound \eq{HS6} now reads
\begin{equation}
\frac{f_1 \sigma_1}{(\sigma_1^2 - \sigma_2^2)} \mathop{\rm Tr}\nolimits M \le f_1,
\eeq{AB21}
or equivalently
\begin{equation}
\frac{\sigma_1 \sigma_2}{(\sigma_1^2 - \sigma_2^2)} \mathop{\rm Tr}\nolimits (I-\sigma_2\bfm\sigma_N^{-1}) \le f_1
\eeq{AB22}
up to $o(f_1)$ terms by \eq{AB17}.
We now consider the upper bound. Let $U_1({\bf x})=-x$ and $U_2({\bf x})=-y$, and let $V_i$ be the solution to \eq{AB5} with $V^0=U_i$ on $\partial\Omega$ for $i=1,2$. Then, defining $q_j=\sigma_2 \frac{\partial V_j}{\partial {\bf n}}$ on $\partial\Omega$,
we have
\begin{equation}
[\bfm\sigma_D]_{ij}= a_{ij}= \frac{1}{|\Omega|} \int_{\partial\Omega} V_i^0 q_j = \frac{1}{|\Omega|} \int_{\partial\Omega} V_i^0 (q_j - \sigma_2\frac{\partial U_j}{\partial {\bf n}}) + \sigma_2 \delta_{ij}
\eeq{AB23}
One can use \eq{AB6} and the fact that
\begin{equation}
\int_{\partial\Omega} \frac{\partial}{\partial {\bf n}_{{\bf x}}} G({\bf x}, {\bf z}) V_i^0({\bf x}) d{\bf x}= U_i({\bf z})
\eeq{AB24}
to derive that
\begin{equation}
\bfm\sigma_D= f_1 M + \sigma_2 I + o(f_1)=\bfm\sigma_N + o(f_1).
\eeq{AB25}
Thus we obtain
\begin{equation}
\frac{1}{\sigma_2} - \frac{\mathop{\rm Tr}\nolimits\bfm\sigma_D - 2 \sigma_2}{\det\bfm\sigma_D - \sigma_2^2} = \frac{f_1}{\sigma_2^2} \frac{\det M}{\mathop{\rm Tr}\nolimits M} + o(f_1).
\eeq{AB26}
Since $\det M = (\mathop{\rm Tr}\nolimits M^{-1})^{-1} \mathop{\rm Tr}\nolimits M$, \eq{HS10} reads
\begin{equation}
f_1 \le \frac{f_1 (\sigma_1+\sigma_2)}{\sigma_2(\sigma_1-\sigma_2)} (\mathop{\rm Tr}\nolimits M^{-1})^{-1} ,
\eeq{AB27}
or equivalently
\begin{equation}
f_1 \le \frac{(\sigma_1+\sigma_2)}{(\sigma_1-\sigma_2)} (\mathop{\rm Tr}\nolimits (-I+\sigma_2^{-1}\bfm\sigma_D)^{-1})^{-1}
\eeq{AB28}
up to $o(f_1)$ terms.
By \eq{AB17} and \eq{AB25}, we have
\begin{equation}
-I+ \sigma_2^{-1} \bfm\sigma_D = I - \sigma_2\bfm\sigma_N^{-1}
\eeq{AB29}
modulo $o(f_1)$. Hence by putting \eq{AB22} and \eq{AB28} together, we have
\begin{equation}
\frac{\sigma_1 \sigma_2}{(\sigma_1^2 - \sigma_2^2)} \mathop{\rm Tr}\nolimits (I-\sigma_2\bfm\sigma_N^{-1}) \le f_1 \le \frac{(\sigma_1+\sigma_2)}{(\sigma_1-\sigma_2)} (\mathop{\rm Tr}\nolimits (I-\sigma_2\bfm\sigma_N^{-1})^{-1})^{-1}
\eeq{AB30}
modulo $o(f_1)$, where $\bfm\sigma_N^{-1}$ is determined from the boundary measurements with special Neumann conditions, via \eq{AB14}.
We emphasize that these asymptotic bounds for small volume fraction were found in \cite{CV022, CV03}. From \eq{AB29} we also have the bounds
\begin{equation} \frac{\sigma_1 \sigma_2}{(\sigma_1^2 - \sigma_2^2)} \mathop{\rm Tr}\nolimits (\sigma_2^{-1}\bfm\sigma_D-I) \le f_1 \le \frac{(\sigma_1+\sigma_2)}{(\sigma_1-\sigma_2)} (\mathop{\rm Tr}\nolimits (\sigma_2^{-1}\bfm\sigma_D-I)^{-1})^{-1}
\eeq{AB35}
modulo $o(f_1)$, where $\bfm\sigma_D$ is obtained from the boundary measurements with special Dirichlet conditions.
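For completeness, the asymptotic bounds \eq{AB30} take the following computational form (Python with NumPy; the function name is our own, $\bfm\sigma_N$ is assumed to have been measured with the special Neumann data, and the expressions are meaningful only modulo $o(f_1)$).
\begin{verbatim}
import numpy as np

def asymptotic_bounds(sigN, s1, s2):
    """Small volume fraction bounds (AB30) on f_1 from the Neumann tensor."""
    T = np.eye(2) - s2 * np.linalg.inv(np.asarray(sigN, float))
    lower = s1 * s2 / (s1**2 - s2**2) * np.trace(T)
    upper = (s1 + s2) / (s1 - s2) / np.trace(np.linalg.inv(T))
    return lower, upper
\end{verbatim}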
It is interesting to observe that the translation bounds also yield the Lipton bounds for the polarization tensor: We obtain from \eq{AB21} and \eq{AB27} that
\begin{equation}
\mathop{\rm Tr}\nolimits M \le \frac{(\sigma_1^2 - \sigma_2^2)}{\sigma_1} \quad\mbox{and}\quad \mathop{\rm Tr}\nolimits (M^{-1}) \le
\frac{(\sigma_1+\sigma_2)}{\sigma_2(\sigma_1-\sigma_2)} .
\eeq{AB36}
We refer to \cite{book, book2, milton} for properties of polarization tensors. If the phase 1 is an inclusion (or a cluster of inclusions) of the form
\begin{equation}
D= \varepsilon B + {\bf z}
\eeq{AB37}
where $\varepsilon$ is a small parameter representing the diameter of $D$, $B$ is a reference domain containing $0$, and
${\bf z}$ indicates the location of $D$ inside $\Omega$, then $M({\bf x})=|B|^{-1} M(B)$ (a constant matrix)
and $d\mu = \lim_{n \to \infty} |\omega_n|^{-1} \chi(\omega_n) d{\bf x}=\delta({\bf x}-{\bf z})d{\bf x}$. Here $M(B)$ is the polarization tensor associated with $B$. Therefore we have
$M=|B|^{-1} M(B)$, and hence
\begin{equation}
\mathop{\rm Tr}\nolimits(M(B))\leq |B| \frac{(\sigma_1^2 - \sigma_2^2)}{\sigma_1},
\eeq{AB38}
and the lower bound is given by
\begin{equation}
\mathop{\rm Tr}\nolimits (M(B)^{-1})\leq\frac{\sigma_1+\sigma_2}{\sigma_2(\sigma_1-\sigma_2) |B|}.
\eeq{AB39}
The bounds in \eq{AB38} and \eq{AB39} were obtained by Lipton \cite{Lipton93} and later by Capdeboscq-Vogelius \cite{CV022, CV03} in a more general setting. They can also easily be derived
from the bounds of Lurie and Cherkaev \cite{lucherk82} and Tartar and Murat \cite{mutar85,tar85} using the observation made by Milton \cite{milt81} that the low volume fraction limit
of bounds on effective tensors of periodic arrays of well-separated inclusions yields bounds on polarization tensors.
We also mention that if the lower bound in \eq{AB39} is attained for $B$ and $B$ is simply connected, then $B$ is an ellipse.
This was known as the P\'olya-Szeg\"o conjecture and was resolved by Kang-Milton \cite{KM06, KM07} (see also the review paper \cite{Kang09}).
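As a sanity check of \eq{AB38} and \eq{AB39}, one can insert the polarization tensor of an ellipse. The sketch below (Python with NumPy; all names are our own) uses the standard formula for that tensor in terms of the depolarization factors $b/(a+b)$ and $a/(a+b)$; this formula is quoted as an assumption and is not derived in this paper. With it, the check confirms numerically that an ellipse satisfies \eq{AB38} and attains equality in \eq{AB39}.
\begin{verbatim}
import numpy as np

def ellipse_polarization_tensor(a, b, s1, s2):
    """Polarization tensor M(B) of an ellipse with semi-axes a, b,
    conductivity s1 inside and s2 outside (standard formula via the
    depolarization factors L_x = b/(a+b), L_y = a/(a+b); quoted here
    as an assumption)."""
    area = np.pi * a * b
    Lx, Ly = b / (a + b), a / (a + b)
    diag = [area * s2 * (s1 - s2) / (s2 + L * (s1 - s2)) for L in (Lx, Ly)]
    return np.diag(diag)

s1, s2, a, b = 5.0, 1.0, 2.0, 0.5
M = ellipse_polarization_tensor(a, b, s1, s2)
area = np.pi * a * b
print(np.trace(M) <= area * (s1**2 - s2**2) / s1)            # (AB38) holds
print(np.isclose(np.trace(np.linalg.inv(M)),                 # (AB39) with equality
                 (s1 + s2) / (s2 * (s1 - s2) * area)))
\end{verbatim}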
\section{Numerical results}
\subsection{Forward solutions}
We implement an integral equation solver in FORTRAN in order to
generate forward solutions of the Neumann and Dirichlet problems of the equation $\nabla \cdot \sigma \nabla V=0$ in $\Omega$ when $D$ is an inclusion and $\sigma = \sigma_1 \chi(D) + \sigma_2 \chi(\Omega\setminus D)$. We set $\sigma_2=1$ throughout this section. We
compute the forward solutions $V$ with
$N=64,80,96,120,160,192,240,320$ and $480$ equi-spaced points on $\partial D$ and $N$
points on $\partial\Omega$. These solutions are then compared with a reference solution computed on a finer discretization with $N=960$ points. Figure \ref{conv.1} shows the convergence of the forward solver for the Neumann problem as a function of the number of discretization points $N$, and Figure \ref{conv.2} that for the Dirichlet problem, with $\sigma_1=10$.
\begin{figure}[htb]
\begin{center}
\epsfig{figure=conv1.eps,width=10cm}
\end{center}
\caption{Convergence error of the forward solver with 64-480
discretization points. The solid line represents the
convergence error of $V$ on $\partial\Omega$ for the Neumann problem.}\label{conv.1}
\end{figure}
\begin{figure}[htb]
\begin{center}
\epsfig{figure=conv2.eps,width=10cm}
\end{center}
\caption{Convergence error of the forward solver with 64-480
discretization points. The solid line represents the
convergence error of $\frac{\partial V}{\partial{\bf n}}$ for the Dirichlet
problem.}\label{conv.2}
\end{figure}
\subsection{Numerical Experiments}
We perform numerical simulations to judge the performance of the bounds when relevant parameters are varying. Parameters under consideration are the conductivity contrast $\sigma_1/\sigma_2$, the volume fraction $f_1$, and the distance between the inclusion and $\partial\Omega$. We also investigate the role of boundary data in deriving bounds.
We use boundary data of the special forms: $q_1=-\begin{bmatrix} 1 \\ 0 \end{bmatrix}\cdot{\bf n}$
and $q_2=-\begin{bmatrix} 0 \\ 1 \end{bmatrix}\cdot{\bf n}$ as Neumann data for the lower bound, and
$V_1=-\begin{bmatrix}
1 \\
0
\end{bmatrix}\cdot {\bf x}$ and $V_2=-\begin{bmatrix}
0 \\
1
\end{bmatrix}\cdot {\bf x}$ as Dirichlet data for the upper bound, in all examples except examples \ref{7.4} and \ref{7.5}. Thus except in these examples, the bounds correspond
to those derived by Milton \cite{milt11}.
Let
\begin{equation}
L(\sigma_1):= \frac{(\sigma_1 + \sigma_2)}{\sigma_1 (\sigma_1 - \sigma_2)} \left[ \frac{\mathop{\rm Tr}\nolimits A - 2b/\sigma_1}{\det A - b^2/\sigma_1^2} -
\frac{2 \sigma_1 \sigma_2}{\sigma_1 + \sigma_2} \right],
\end{equation}
denote the lower bound on $f_1$
and let
\begin{equation}
U(\sigma_1) := \frac{\sigma_2 (\sigma_1+\sigma_2)}{(\sigma_1-\sigma_2)} \left[ \frac{1}{\sigma_2} - \frac{\mathop{\rm Tr}\nolimits A - 2b' \sigma_2}{\det A - b'^2 \sigma_2^2} \right]
\end{equation}
denote the upper bound on $f_1$.
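For reference, these two formulas can be evaluated as in the following Python sketch; the matrix $A$ and the scalars $b$ and $b'$ are the quantities assembled from the two boundary measurements as described earlier, and the numerical values below are placeholders rather than actual measurement data.
\begin{verbatim}
import numpy as np

def lower_bound_f1(A, b, sigma1, sigma2):
    # L(sigma_1): lower bound on the volume fraction f_1
    trA, detA = np.trace(A), np.linalg.det(A)
    bracket = ((trA - 2.0 * b / sigma1) / (detA - b**2 / sigma1**2)
               - 2.0 * sigma1 * sigma2 / (sigma1 + sigma2))
    return (sigma1 + sigma2) / (sigma1 * (sigma1 - sigma2)) * bracket

def upper_bound_f1(A, bp, sigma1, sigma2):
    # U(sigma_1): upper bound on the volume fraction f_1
    trA, detA = np.trace(A), np.linalg.det(A)
    bracket = 1.0 / sigma2 - (trA - 2.0 * bp * sigma2) / (detA - bp**2 * sigma2**2)
    return sigma2 * (sigma1 + sigma2) / (sigma1 - sigma2) * bracket

# Placeholder measurement data; in practice A, b (from the Neumann data) and
# A, b' (from the Dirichlet data) come from the two boundary measurements.
A_N, b_N = np.array([[1.05, 0.0], [0.0, 1.05]]), 0.02
A_D, b_D = np.array([[0.95, 0.0], [0.0, 0.95]]), 0.02
sigma1, sigma2 = 5.0, 1.0
print(lower_bound_f1(A_N, b_N, sigma1, sigma2),
      upper_bound_f1(A_D, b_D, sigma1, sigma2))
\end{verbatim}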
\begin{Exa} {\bf (variation of $\sigma_1$)}.
We compute the bounds changing $\sigma_1$, keeping $\sigma_2=1$, when
the inclusion is a disk or an ellipse inside a disk or a rectangle (with corners rounded).
Figures \ref{LU_DYDY1}, \ref{LU_DYDY3}, and \ref{LU_DYDY2} show the numerical results.
Figure \ref{LU_DYDN} shows the case in which the inclusion is simply connected and of general shape, and Figure \ref{ALU}
shows the case in which the inclusion is not simply connected.
The results show that the lower bound deteriorates seriously as the conductivity contrast $\sigma_1$ increases, while the upper bound remains relatively accurate even for large $\sigma_1$.
\end{Exa}
\begin{figure}[htbp]
\begin{center}
\epsfig{figure=LU_DYDY4.3.eps,width=8cm}\\
\epsfig{figure=LU_DYDY3.4.eps,width=8cm}\vskip 0.5cm
\begin{tiny}
\title{first diagram\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ second diagram}\\
\begin{tabular}{|c ||c |c|}\hline
$\sigma_1$ & $L(\sigma_1)/f_1$
& $U(\sigma_1)/f_1$ \\\hline
1.1&0.9979&1.0000\\
1.2&0.9925&1.0000\\
1.5&0.9635&1.0000\\
2 &0.8979&1.0000\\
3 &0.7673&1.0000\\
5 &0.5787&1.0000\\
10 &0.3518&1.0000\\
20 &0.1958&1.0000\\\hline
\end{tabular}\hskip 0.5cm
\begin{tabular}{|c ||c |c|}\hline
$\sigma_1$ & $L(\sigma_1)/f_1$
& $U(\sigma_1)/f_1$ \\\hline
1.1&0.9904&1.0077\\
1.2&0.9783&1.0149\\
1.5&0.9340&1.0337\\
2 &0.8532&1.0583\\
3 &0.7115&1.0917\\
5 &0.5237&1.1287\\
10 &0.3113&1.1659\\
20 &0.1710&1.1889\\\hline
\end{tabular}
\end{tiny}
\end{center}
\caption{The bounds with increasing $\sigma_1$ when the inclusion
is a disk and $\Omega$ is a circle, and $f_1=0.09$. We take Neumann and Dirichlet data of the special
forms. The second and third columns are graphs of the same data; the third column is with a log-scale for the $\sigma_1$-axis. The values for the bounds are given in the table.}\label{LU_DYDY1}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\epsfig{figure=LU_DYDN6.3.eps,width=8cm}\\
\epsfig{figure=LU_DYDN5.3.eps,width=8cm}\vskip 0.5cm
\begin{tiny}
\title{first diagram\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ second diagram}\\
\begin{tabular}{|c ||c |c|}\hline
$\sigma_1$ & $L(\sigma_1)/f_1$
& $U(\sigma_1)/f_1$ \\\hline
1.1&0.9982&1.0000\\
1.2&0.9934&1.0000\\
1.5&0.9677&1.0000\\
2 &0.9091&1.0001\\
3 &0.7895&1.0001\\
5 &0.6099&1.0001\\
10 &0.3818&1.0002\\
20 &0.2170&1.0002\\\hline
\end{tabular}\hskip 0.5cm
\begin{tabular}{|c ||c |c|}\hline
$\sigma_1$ & $L(\sigma_1)/f_1$
& $U(\sigma_1)/f_1$ \\\hline
1.1&0.9921 &1.0062\\
1.2&0.9819&1.0119\\
1.5&0.9435&1.0268\\
2 &0.8712&1.0459\\
3 &0.7396&1.0714\\
5 &0.5569&1.0988\\
10 &0.3395&1.1257\\
20 &0.1896&1.1420\\\hline
\end{tabular}
\end{tiny}
\end{center}
\caption{The bounds with increasing $\sigma_1$ when the inclusion is an ellipse and $\Omega$ is a circle and $f_1=0.08$. We take Neumann and Dirichlet data of the special
forms. The second and third columns are graphs of the same data; the third column is with a log-scale for the $\sigma_1$-axis. The values for the bounds are given in the table.}\label{LU_DYDY3}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\epsfig{figure=LU_DNDY4.4.eps,width=8cm}\\
\epsfig{figure=LU_DNDY3.3.eps,width=8cm}\vskip 0.5cm
\begin{tiny}
\title{first diagram\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ second diagram}\\
\begin{tabular}{|c ||c |c|}\hline
$\sigma_1$ & $L(\sigma_1)/f_1$
& $U(\sigma_1)/f_1$ \\\hline
1.1&0.9976&1.0003\\
1.2&0.9917&1.0006\\
1.5&0.9614&1.0013\\
2 &0.8939&1.0022\\
3 &0.7608&1.0033\\
5 &0.5708&1.0044\\
10 &0.3449&1.0054\\
20 &0.1912&1.0060\\\hline
\end{tabular}\hskip 0.5cm
\begin{tabular}{|c ||c |c|}\hline
$\sigma_1$ & $L(\sigma_1)/f_1$
& $U(\sigma_1)/f_1$ \\\hline
1.1&0.9915&1.0065\\
1.2&0.9803&1.0125\\
1.5&0.9376&1.0281\\
2 &0.8576&1.0480\\
3 &0.7155&1.0744\\
5 &0.5262&1.1027\\
10 &0.3122&1.1302\\
20 &0.1713&1.1468\\\hline
\end{tabular}
\end{tiny}
\end{center}
\caption{The bounds with increasing $\sigma_1$ when the inclusion is a
disk and $\Omega$ is a square, and $f_1=0.0699$. We take Neumann and Dirichlet data of the special
forms. The second and third columns are graphs of the same data; the third column is with a log-scale for the $\sigma_1$-axis. The values for the bounds are given in the table.}\label{LU_DYDY2}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\epsfig{figure=LU_DNDN4.3.eps,width=8cm}
\end{center}
\caption{The bounds with increasing $\sigma_1$ when the inclusion is
not a disk or an ellipse and $\Omega$ is a square, and $f_1=0.0673$. We take Neumann and Dirichlet data of the special
forms. The second and third columns are graphs of the same data; the third column is with a log-scale for the $\sigma_1$-axis.}\label{LU_DYDN}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\epsfig{figure=ALU_DYDY4.3.eps,width=8cm}\\
\epsfig{figure=ALU_DYDY3.4.eps,width=8cm}\\
\epsfig{figure=ALU_DNDY4.4.eps,width=8cm}\\
\epsfig{figure=ALU_DNDY3.3.eps,width=8cm}
\end{center}
\caption{The bounds with increasing $\sigma_1$ when the inclusion is an
annulus. We take Neumann and Dirichlet data of the special
forms. The second and third columns are graphs of the same data; the third column is with a log-scale for the $\sigma_1$-axis.}\label{ALU}
\end{figure}
\begin{Exa}{\bf (variation of $f_1$)}.
We compute the bounds for various volume fractions.
Figure \ref{LU_area} shows the numerical results. It clearly shows that the lower bound works better for higher volume fractions.
\end{Exa}
\begin{figure}[htbp]
\begin{center}
\epsfig{figure=LU_area1.4.eps,width=7cm}\\
\epsfig{figure=LU_area4.3.eps,width=7cm}
\end{center}
\caption{$\sigma_1=5$. The bounds changing the volume fraction. We
take Neumann and Dirichlet data with the special forms.
}\label{LU_area}
\end{figure}
\begin{Exa}{\bf (variation of distance from $\partial\Omega$)}.
We compute the lower and upper bounds changing
the distance between the inclusion and $\partial\Omega$. Figure \ref{LU_dist1}
shows the numerical results when $\sigma_1=2$. It shows that the farther the inclusion is from $\partial\Omega$, the better the bounds are.
\end{Exa}
\begin{figure}[htbp]
\begin{center}
\epsfig{figure=LU_d3.6.eps,width=8cm}
\end{center}
\caption{$\sigma_1=2$ and $f_1=0.0262$. The bounds changing the distance between
the inclusion and $\partial\Omega$. We take Neumann and Dirichlet
data of the special forms. }\label{LU_dist1}
\end{figure}
\begin{Exa}\label{7.4} {\bf (boundary data)}.
In this example we compute the bounds using other boundary data. We use as Neumann data for the lower bound
$q_1=-n_1 - n_1 n_2$ and $q_2=-n_2 - n_1 n_2$, and as Dirichlet data for the upper bound
$V_1=-x-xy$ and $V_2=-y-xy$. Figure \ref{LU_gDYDY} shows that the special boundary data work much better.
\end{Exa}
\begin{figure}[htbp]
\begin{center}
\epsfig{figure=LU_gDYDY1.3.eps,width=8cm}
\end{center}
\caption{The bounds changing $\sigma_1$ when we take
Neumann data $q_1=-n_1-n_1 n_2$ and $q_2=-n_2-n_1 n_2$,
and Dirichlet data $V_1=-x-xy$ and $V_2=-y-xy$.
}\label{LU_gDYDY}
\end{figure}
\begin{Exa}\label{7.5} When we use the special Neumann data $q_1=-n_1$ and $q_2=-n_2$, the corresponding pair of Dirichlet data is measured on $\partial\Omega$. We may use these data to compute the upper bound using the formula \eq{UB19}. Likewise, we may use the measured Neumann data corresponding to the Dirichlet data $V_1=-x$ and $V_2=-y$ to compute the lower bound using the formula \eq{LB45}. Figure \ref{LU_same} shows numerical results when the volume fraction varies. The results clearly show that the bounds using the measured data are better than those using the given data.
\end{Exa}
\begin{figure}[htbp]
\begin{center}
\epsfig{figure=LU_same_1.eps,width=8cm}
\epsfig{figure=LU_same_2.eps,width=8cm}
\end{center}
\caption{$\sigma_1=5$. $L_N(f_1)$ is the lower bound using the Neumann data corresponding to the special Dirichlet data and $U_D(f_1)$ is the upper bound using the Dirichlet data corresponding to the special Neumann data.
}\label{LU_same}
\end{figure}
\section{Construction of E$_\Omega$-inclusions}
Following the method outlined in section 23.9 of \cite{milton} we look for a simply connected inclusion inside which the field is uniform for some boundary condition assigned on the outer boundary. More precisely, we look for an inclusion $E$ contained in a domain $\Omega$ (bounded or unbounded) such that $-\nabla V$ is uniform inside $E$, where $V$ is the solution to
\begin{equation}
\left\{
\begin{array}{l}
\nabla \cdot \sigma \nabla V=0 \quad \mbox{in } \Omega, \\
V=V^0 \quad \mbox{on } \partial\Omega
\end{array}
\right.
\eeq{CE1}
for some boundary data $V^0$ with $\sigma=\sigma_1 \chi(E) + \sigma_2 \chi(\Omega \setminus E)$ ($\sigma_1 \neq \sigma_2$). We may suppose, without loss of generality, that ${\bf e}=-\nabla V = (-1, 0)^T$. We also suppose that the coordinates have been positioned and scaled so that $y_{\mbox{max}}=1$ and $y_{\mbox{min}}=-1$, where $y_{\mbox{max}}=\max \{ y ~ | ~(x,y) \in E \mbox{ for some } x \}$ and $y_{\mbox{min}}=\min \{ y ~| ~(x,y) \in E \mbox{ for some } x \}$. Let $W$ be a harmonic conjugate of $V$ in $\Omega \setminus \overline{E}$ so that $V+iW$ is an analytic function of $z=x+iy$ in $\Omega \setminus \overline{E}$. Then we have
\begin{equation}
V=x, \quad W= \frac{\sigma_1}{\sigma_2} y, \quad \mbox{on } \partial E.
\eeq{CE2}
Define new potentials $u$ and $v$ by
\begin{equation}
u+ i v = \frac{i(V+iW-z)}{(1-\sigma_1/\sigma_2)}.
\eeq{CE3}
Then, $u+ i v$ is still an analytic function of $z=x+iy$ in $\Omega \setminus \overline{E}$, and on $\partial E$
\begin{equation}
u = \frac{-W+y}{1-\sigma_1/\sigma_2} = y, \quad v = \frac{V-x}{1-\sigma_1/\sigma_2} = 0.
\eeq{CE4}
Now assume $u+ i v$ is a univalent function of $z=x+iy$ inside $\Omega \setminus \overline{E}$, and consider $x+iy$ as an analytic function of $u+ i v$ (hodograph transformation). Because of \eq{CE4}, the image of $\partial E$ by $u+ i v$ is the slit $S= [y_{\mbox{min}}, y_{\mbox{max}}] = [-1,1]$ on the $u$-axis, and $y= u$ on $S$.
The problem is now to construct a function $z=x+iy=f(u+iv)$ such that
\begin{itemize}
\item[(i)] $f$ is analytic and univalent in $U \setminus S$ for some neighborhood $U$ of $S$,
\item[(ii)] $\mathop{\rm Im}\nolimits f=u$ on $S$,
\item[(iii)] $\mathop{\rm Re}\nolimits f|_{+} - \mathop{\rm Re}\nolimits f|_{-} >0$ on $S$ except at $\pm 1$ where it is $0$.
\end{itemize}
Here $|_{+}$ and $|_{-}$ indicate the limit from above and below $S$, respectively.
One can see that conditions (i), (ii), and (iii) guarantee that $f$ maps $U \setminus S$ onto $\Omega \setminus \overline{E}$ for some simply connected domain $E$ and some domain $\Omega$ containing $\overline{E}$. In fact, (ii) and (iii) imply that $f$ maps $S$ onto $\partial E$ and that the orientation is preserved. Since $f$ is conformal, it maps $U \setminus S$ into the exterior of $\overline{E}$.
We have the following lemma for univalence.
\begin{lemma} \label{lemma21}
Let $\gamma$ be a simple closed curve which consists of two curves $\gamma^+$ and $\gamma^-$. Let $U$ be an open neighborhood of $S$ and let $B_1(\delta)$ and $B_{-1}(\delta)$ be open balls of radius $\delta$ centered at $w=1$ and $w=-1$
respectively. Let $f$ be an analytic function in $U\setminus S$ which maps $U\setminus S$ into the exterior of $\gamma$ and has the form
\begin{equation}
f(w)=iw + g(w)
\eeq{CE5}
where $\mathop{\rm Im}\nolimits g=0$ on $S$. Suppose that the mapping $u \mapsto \lim_{v \to 0^+} f(u+iv)$ is one-to-one from $S$ onto $\gamma^+$, and $u \mapsto \lim_{v \to 0^+} f(u-iv)$ is one-to-one from $S$ onto $\gamma^-$. If there is $\delta>0$ such that $f$ is univalent in $B_1(\delta)\setminus S$ and in $B_{-1}(\delta)\setminus S$, then there is an open neighborhood $U_0$ of $S$ such that $f$ is univalent in $U_0 \setminus S$.
\end{lemma}
\proof Let $\varphi(z)=(z+\frac{1}{z})/2$ for $|z| \ge 1$. $\varphi$ maps $|z|>1$ onto $\mathbb{C} \setminus S$ and $|z|=1$ onto $S$. Let $G(z)= g(\varphi(z))$. Since $\mathop{\rm Im}\nolimits G(z)=0$ on $|z|=1$, $G$ can be extended so that it is analytic in $1-\varepsilon <|z| < 1+ \varepsilon$ for some $\varepsilon>0$. Let $F(z)= f(\varphi(z))$. Then $F$ is analytic in $1-\varepsilon <|z| < 1+ \varepsilon$ and univalent in the neighborhoods of $z=1$ and $z=-1$.
Moreover, $F$ is one-to-one from $|z|=1$ onto $\gamma$. We claim that $F$ is univalent in $1-\varepsilon_0 <|z| < 1+ \varepsilon_0$ for some $\varepsilon_0>0$. In fact, if not, then for each $n$ there are $z_{1,n}$ and $z_{2,n}$ such that $1- \frac{1}{n} < |z_{j,n}| < 1+ \frac{1}{n}$, $z_{1,n} \neq z_{2,n}$, and $F(z_{1,n})=F(z_{2,n})$. For $j=1,2$, the sequence $z_{j,n}$ has a subsequence which converges to a point on $|z|=1$, say $z_j$. Since $F$ is one-to-one on $|z|=1$, $z_1=z_2$.
But this implies that $F'(z_1)=0$ (since otherwise $F$ would be injective in a neighborhood of $z_1$), where
\begin{equation} F'(z)=f'(\varphi(z))\varphi'(z)=[i+g'(\varphi(z))](1-z^{-2})/2,
\eeq{CM0}
and since $g'(\varphi(z_1))$ is real we conclude that $z_1=1$ or $z_1=-1$, which is a contradiction since $F$ is univalent in the neighborhoods of these points.
Thus $F$ is univalent in $1-\varepsilon_0 <|z| < 1+ \varepsilon_0$ for some $\varepsilon_0>0$. This completes the proof. \qed
We now construct $f$ satisfying (i), (ii), and (iii) using conformal mappings.
Let $w=u+iv$ and define
\begin{equation}
g(w)=f(w)-iw
\eeq{CM1}
so that $\mathop{\rm Im}\nolimits g=0$ on $S$. Let
\begin{equation}
\xi = \frac{1-w}{1+w},
\eeq{CM2}
which maps $S$ onto the positive real axis. Let $\zeta=\sqrt{\xi}$ with the branch cut along the positive real axis and define
\begin{equation}
F(\zeta) = g \left( \frac{1-\zeta^2}{1+\zeta^2} \right).
\eeq{CM3}
Then $\mathop{\rm Im}\nolimits F=0$ on the whole real axis. Thus, by defining $F(\zeta^*)=F(\zeta)^*$, where $*$ denotes the complex conjugate, $F$ can be extended as an analytic function in a tubular neighborhood of the real axis. Moreover, since $g$ is analytic in a neighborhood of $w=-1$ away from the slit, and the transform $\zeta$ maps a neighborhood of $w=-1$ onto the exterior of a compact set, $F$ must be analytic in $\mathbb{C} \setminus (K \cup K^*)$, where $K$ is a compact set in the upper half plane and $K^*$ is its reflection across the real axis, {\it i.e.}, $K^*=\{z^* ~|~ z \in K\}$. $F$ satisfies
\begin{itemize}
\item[(i)$^\prime$] $F$ is analytic in $\mathbb{C} \setminus (K \cup K^*)$ for a compact set $K$ in the upper half plane,
\item[(ii)$^\prime$] $\mathop{\rm Im}\nolimits F=0$ on the real axis,
\item[(iii)$^\prime$] $F(\zeta) - F(-\zeta) >0$ for real positive $\zeta$.
\end{itemize}
The function $f$ is now given by
\begin{equation}
f(w)=iw + F\left( \sqrt{\frac{1-w}{1+w}} \right).
\eeq{CM4}
Note that $y=u$ on the slit and hence $\partial E$ is given by
\begin{equation}
x = F\left( \pm \sqrt{\frac{1-y}{1+y}} \right).
\eeq{CM5}
In addition to (i)$^\prime$, (ii)$^\prime$, and (iii)$^\prime$, $F$ needs to be univalent inside a sufficiently small ball around the origin, and outside a sufficiently large ball. The first condition is satisfied if
$F'(0)\ne 0$. Since $F$ maps $\infty$ to a point in $\mathbb{C}$, if $F$ is analytic and univalent outside a sufficiently large ball, then it has the series expansion
\begin{equation}
F(\zeta)= \sum_{j=0}^\infty \frac{\beta_j}{\zeta^j}
\eeq{CM6}
as $\zeta \to \infty$, where $\beta_1 \ne 0$ (and $\beta_1$ is real and positive from conditions (ii)$^\prime$ and (iii)$^\prime$).
We make a record of these conditions:
\begin{itemize}
\item[(iv)$^\prime$]
The derivative $F'(0)$ is non-zero, and $F(\zeta)$ has the asymptotic expansion
\begin{equation}
F(\zeta)= \beta_0 + \frac{\beta_1}{\zeta} + O(|\zeta|^{-2}) \quad\mbox{as } |\zeta| \to \infty,
\eeq{CM7}
where $\beta_1$ is real and positive.
\end{itemize}
Good candidates for functions satisfying (i)$^\prime$, (ii)$^\prime$, and (iv)$^\prime$ are rational functions of the form
\begin{equation}
F(\zeta)= \sum_{\alpha=1}^n \left[ \frac{b_\alpha}{\zeta-a_\alpha} + \frac{b_\alpha^*}{\zeta-a_\alpha^*} \right] + c
\eeq{CM8}
where the $a_\alpha$'s are complex numbers with positive imaginary parts, the $b_\alpha$'s are complex numbers, $c$ is a real number,
and
\begin{equation} \sum_{\alpha=1}^n \mathop{\rm Re}\nolimits (b_\alpha)>0,\quad \sum_{\alpha=1}^n \mathop{\rm Re}\nolimits (b_\alpha/a_\alpha^2)\ne 0.
\eeq{CM8a}
To ensure that (iii)$^\prime$ is satisfied we require that the function
\begin{equation} F(\zeta) - F(-\zeta)=2\zeta\sum_{\alpha=1}^n \left[ \frac{b_\alpha}{\zeta^2-a_\alpha^2} + \frac{b_\alpha^*}{\zeta^2-(a_\alpha^*)^2} \right]
\eeq{CM8b}
has no real roots aside from $\zeta=0$. (The sign of the inequality in (iii)$^\prime$ is guaranteed by the positivity of $\beta_1$.)
Let us now characterize those rational functions $F$ which yield ellipses as E$_\Omega$-inclusions. Because $y=u$ on the slit $[-1,1]$, the ellipse takes a shape like that shown in the first panel of Figure \ref{varIma} (after translation).
Let the ellipse be given by $x^2+\alpha y^2 + \beta x y=c$ with $4\alpha > \beta^2$. Solving for $x$ we get
\begin{equation}
x= \frac{-\beta y \pm \sqrt{\beta^2 y^2 - 4(\alpha y^2-c)}}{2}.
\eeq{CM9}
Since the discriminant vanishes at $y=\pm 1$, we have $c=(4\alpha-\beta^2)/4$, and hence
\begin{equation}
x= \frac{-\beta}{2} y \pm \frac{(1+y)}{2} \sqrt{(4\alpha -\beta^2) \frac{1-y}{1+y}}.
\eeq{CM10}
Letting $\zeta= \sqrt{\frac{1-y}{1+y}}$, we have
\begin{equation}
x= \frac{\pm \zeta\sqrt{4\alpha -\beta^2} - \beta}{\zeta^2+1} + \frac{\beta}{2} = F(\zeta)
\eeq{CM11}
for real $\zeta$. This means that ellipses are obtained by $F$'s of the form
\begin{equation}
F(\zeta)= \frac{b}{\zeta-a} + \frac{b^*}{\zeta-a^*} + c
\eeq{CM12}
with $a=i$ and $b$ with positive real part.
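Matching \eq{CM11} with \eq{CM12} makes the parameters explicit: with $a=i$,
$$
\frac{b}{\zeta-i}+\frac{b^*}{\zeta+i}
=\frac{2\mathop{\rm Re}\nolimits(b)\,\zeta-2\mathop{\rm Im}\nolimits(b)}{\zeta^2+1},
$$
so that, taking the $+$ branch in \eq{CM11},
$$
\mathop{\rm Re}\nolimits(b)=\tfrac{1}{2}\sqrt{4\alpha-\beta^2}>0,\qquad
\mathop{\rm Im}\nolimits(b)=\tfrac{\beta}{2},\qquad
c=\tfrac{\beta}{2}.
$$
In particular, a circle corresponds to $\beta=0$, in which case $b$ is real and positive and $c=0$.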
\medskip
\noindent{\bf Example}. In this example, we construct some E$_\Omega$-inclusions other than ellipses. We use $F$ in the form \eq{CM12} with $c=0$ (it amounts to translating the figure). Then in $\zeta$-coordinates $f$ is given by
\begin{equation}
f(\zeta)= \frac{2i}{\zeta^2+1} + \frac{b}{\zeta-a} + \frac{b^*}{\zeta-a^*},
\eeq{CM13}
where both \eq{CM8a} and the absence of nonzero real roots of \eq{CM8b} are ensured if we choose $b$ and $-b/a^2$ to have positive real parts.
We will plot the image of a vicinity of the real axis in the upper half plane under the map $f$. To avoid computational difficulty in dealing with an infinite space, we use a bilinear transform
\begin{equation}
\zeta= \frac{1-iw}{w-i},
\eeq{CM14}
which maps the unit disk onto the upper half plane. Then we plot
\begin{equation}
f(w)= \frac{2i}{\zeta(w)^2+1} + \frac{b}{\zeta(w)-a} + \frac{b^*}{\zeta(w)-a^*}
\eeq{CM15}
for $w=r e^{i\theta}$ with $1-\varepsilon \le r \le 1$. From the expansions for $F(\zeta)$ in powers of $\zeta$ and $1/\zeta$ we see
that near the bottom and top of the inclusion the boundary is given by
\begin{equation} x\approx \mathop{\rm Re}\nolimits(b)\sqrt{2(1+y)}+O(1+y),\quad\quad
x \approx -2\mathop{\rm Re}\nolimits(b/a)-\mathop{\rm Re}\nolimits(b/a^2)\sqrt{2(1-y)}+O(1-y).
\eeq{CM16}
Thus the bottom and top are positioned at $x=0$ and $x=-2\mathop{\rm Re}\nolimits(b/a)$ and the curvature of the boundary there is determined by $\mathop{\rm Re}\nolimits(b)$ and
$-\mathop{\rm Re}\nolimits(b/a^2)$ respectively.
Figure \ref{varRad} shows various shapes of $\partial\Omega$, which are the images of $|w|=r<1$ under $f$, together with the boundary of the E$_\Omega$-inclusion, which is the image of $|w|=1$. Figures \ref{varRea}, \ref{varIma}, \ref{varReb}, \ref{varImb}, \ref{varImb2}, and \ref{varImba^2} show various shapes of E$_\Omega$-inclusions when we vary the complex parameters $a$, $b$, and $b/a^2$.
We emphasize that with these values of $a$ and $b$, the univalence of $f$ is guaranteed by Lemma \ref{lemma21}.
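For reproducibility, the plotting procedure just described amounts to the following short Python sketch of \eq{CM14}--\eq{CM15} (shown for $a=0.8+i$ and $b=1$, the values used in Figure \ref{varRad}); it is only a sketch, not the code used to produce the figures.
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

a, b = 0.8 + 1.0j, 1.0 + 0.0j       # parameters as in Figure (varRad)

def f_map(w):
    zeta = (1.0 - 1j * w) / (w - 1j)          # bilinear transform (CM14)
    return (2j / (zeta**2 + 1.0)              # map (CM15)
            + b / (zeta - a) + np.conj(b) / (zeta - np.conj(a)))

theta = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
for r in (1.0, 0.9, 0.8, 0.7, 0.6, 0.5):
    z = f_map(r * np.exp(1j * theta))         # image of the circle |w| = r
    plt.plot(z.real, z.imag)                  # r = 1: boundary of the inclusion
plt.gca().set_aspect("equal")
plt.show()
\end{verbatim}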
\begin{figure}[htbp]
\begin{center}
\epsfig{figure=fig1.eps,height=4cm} \caption{Various shapes of E$_\Omega$-inclusions when varying Re $a$ with Im $a=1$ and $b=1$.} \label{varRea}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\epsfig{figure=fig2.eps,height=4cm} \caption{With $a=0.8+i$ and $b=1$, the innermost curve (the image of $|w|=1$) is the boundary of the E$_\Omega$-inclusion (the rightmost inclusion in Fig. \ref{varRea}). The others are images of $|w|=0.9,\ 0.8,\ 0.7,\ 0.6,\ 0.5$. These, or any simple closed curve enclosed by them, can be regarded as boundaries of $\Omega$.} \label{varRad}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\epsfig{figure=fig3.eps,height=4cm} \caption{Various shapes of E$_\Omega$-inclusions when varying Im $a$ with Re $a=0$ and $b=1+i$.}\label{varIma}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\epsfig{figure=fig4.eps,height=4cm} \caption{Various shapes of E$_\Omega$-inclusions when varying Re $b$ with Im $b=1$ and $a=1.3i$.}\label{varReb}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\epsfig{figure=fig5.eps,height=4cm} \caption{Various shapes of E$_\Omega$-inclusions when varying Im $b$ with Re $b=1$ and $a=1.3i$.}\label{varImb}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\epsfig{figure=figs_MImb.eps,height=4cm} \caption{Various shapes of E$_\Omega$-inclusions when varying Im $b$ with Re $b=0.1$ and $b/a^2=-10+2i$.}\label{varImb2}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\epsfig{figure=figs_MImba2.eps,height=4cm} \caption{Various shapes of E$_\Omega$-inclusions when varying Im $b/a^2$ with $b=0.1+2i$ and Re $b/a^2=-10$.}\label{varImba^2}
\end{center}
\end{figure}
\section*{Acknowledgements}
The authors thank Michael Vogelius for comments on a draft of the manuscript, and for spurring the interest of GWM in this problem through a lecture at the Mathematical Sciences
Research Institute. GWM is grateful for support from the Mathematical Sciences
Research Institute and from National Science Foundation through
grant DMS-0707978. HK is grateful for support from National Research Foundation through grants No. 2009-0090250 and 2010-0017532, and from Inha University. The work of EK was supported by Korea Research
Foundation, KRF-2008-359-C00004.
| {
"timestamp": "2011-05-06T02:01:14",
"yymm": "1105",
"arxiv_id": "1105.0949",
"language": "en",
"url": "https://arxiv.org/abs/1105.0949",
"abstract": "We deal with the problem of estimating the volume of inclusions using a finite number of boundary measurements in electrical impedance tomography. We derive upper and lower bounds on the volume fractions of inclusions, or more generally two phase mixtures, using two boundary measurements in two dimensions. These bounds are optimal in the sense that they are attained by certain configurations with some boundary data. We derive the bounds using the translation method which uses classical variational principles with a null Lagrangian. We then obtain necessary conditions for the bounds to be attained and prove that these bounds are attained by inclusions inside which the field is uniform. When special boundary conditions are imposed the bounds reduce to those obtained by Milton and these in turn are shown here to reduce to those of Capdeboscq-Vogelius in the limit when the volume fraction tends to zero. The bounds of this paper, and those of Milton, work for inclusions of arbitrary volume fractions. We then perform some numerical experiments to demonstrate how good these bounds are.",
"subjects": "Materials Science (cond-mat.mtrl-sci); Mathematical Physics (math-ph); Analysis of PDEs (math.AP); Medical Physics (physics.med-ph)",
"title": "Sharp bounds on the volume fractions of two materials in a two-dimensional body from electrical boundary measurements: the translation method",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9719924793940119,
"lm_q2_score": 0.7279754371026368,
"lm_q1q2_score": 0.7075866500473315
} |
https://arxiv.org/abs/1310.7063 | On the Convergence of Decentralized Gradient Descent | Consider the consensus problem of minimizing $f(x)=\sum_{i=1}^n f_i(x)$ where each $f_i$ is only known to one individual agent $i$ out of a connected network of $n$ agents. All the agents shall collaboratively solve this problem and obtain the solution subject to data exchanges restricted to between neighboring agents. Such algorithms avoid the need of a fusion center, offer better network load balance, and improve data privacy. We study the decentralized gradient descent method in which each agent $i$ updates its variable $x_{(i)}$, which is a local approximation to the unknown variable $x$, by combining the average of its neighbors' with the negative gradient step $-\alpha \nabla f_i(x_{(i)})$. The iteration is $$x_{(i)}(k+1) \gets \sum_{\text{neighbor } j \text{ of } i} w_{ij} x_{(j)}(k) - \alpha \nabla f_i(x_{(i)}(k)),\quad\text{for each agent } i,$$ where the averaging coefficients form a symmetric doubly stochastic matrix $W=[w_{ij}] \in \mathbb{R}^{n \times n}$. We analyze the convergence of this iteration and derive its convergence rate, assuming that each $f_i$ is proper closed convex and lower bounded, $\nabla f_i$ is Lipschitz continuous with constant $L_{f_i}$, and stepsize $\alpha$ is fixed. Provided that $\alpha < O(1/L_h)$ where $L_h=\max_i\{L_{f_i}\}$, the objective error at the averaged solution, $f(\frac{1}{n}\sum_i x_{(i)}(k))-f^*$, reduces at a speed of $O(1/k)$ until it reaches $O(\alpha)$. If $f_i$ are further (restricted) strongly convex, then both $\frac{1}{n}\sum_i x_{(i)}(k)$ and each $x_{(i)}(k)$ converge to the global minimizer $x^*$ at a linear rate until reaching an $O(\alpha)$-neighborhood of $x^*$. We also develop an iteration for decentralized basis pursuit and establish its linear convergence to an $O(\alpha)$-neighborhood of the true unknown sparse signal. |
\section{Introduction}
Consider that $n$ agents form a connected network and
collaboratively solve a consensus optimization problem
\begin{align}
\label{eq:original problem}
\Min\limits_{x\in\RR^p} \quad f(x) = \sum_{i=1}^n f_i(x),
\end{align}
where each $f_i$ is only available to agent $i$. {A pair of agents
can exchange data if and only if they are connected by a direct
communication link; we say that two such agents are neighbors of
each other.} Let $\cX^*$ denote the set of solutions to
\eqref{eq:original problem}, which is assumed to be non-empty, and
let $f^*$ denote the optimal objective value.
The traditional (centralized) gradient descent iteration is
\beq\label{org_grad}
x(k+1) = x(k) - \alpha \nabla f(x(k)),
\eeq
where $\alpha$ is the stepsize, either fixed or varying with $k$. To apply iteration \eqref{org_grad} to problem \eqref{eq:original problem} in the decentralized setting, one has two choices of implementation:
\begin{itemize}
\item let a fusion center (which can be a designated agent) carry
out iteration \eqref{org_grad}; \item let all the agents carry out
the same iteration \eqref{org_grad} in parallel.
\end{itemize}
In either way, $f_i$ (and thus $\nabla f_i$) is only known to
agent $i$. Therefore, in order to obtain $ \nabla f(x(k)) =
\sum_{i=1}^n \nabla f_i(x(k))$, every agent $i$ must have $x(k)$,
compute $\nabla f_i(x(k))$, and then send out $\nabla f_i(x(k))$.
This approach requires synchronizing $x(k)$ and
scattering/collecting $\nabla f_i(x(k))$, $i=1,\ldots,n$, over
the entire network, which incurs a significant amount of
communication traffic, especially if the network is large and
sparse. A decentralized approach will be more viable since its
communication is restricted to between neighbors. Although there
is no guarantee that decentralized algorithms use less
communication (as they tend to take more iterations), they provide
better network load balance and tolerance to the failure of
individual agents. In addition, each agent can keep its $f_i$ and
$\nabla f_i$ private to some extent\footnote{ Neighbors of $i$ may
know the samples of $f_i$ and/or $\nabla f_i$ at some points
through data exchanges and thus obtain an interpolation of
$f_i$.}.
Decentralized gradient descent \cite{Nedic2009} does not rely on a fusion center or network-wide communication. It carries out an approximate version of \eqref{org_grad} in the following fashion:
\begin{itemize}
\item let each agent $i$ hold an approximate \textit{copy}
$x_{(i)}\in\RR^p$ of $x\in\RR^p$;
\item let each agent $i$ update its
$x_{(i)}$ to the weighted average of its neighborhood; \item let each
agent $i$ apply $-\nabla f_i(x_{(i)})$ to
decrease $f_i(x_{(i)})$.
\end{itemize}
At each iteration $k$, each agent $i$ performs the following
steps:
\begin{enumerate}
\item computes $\nabla f_i(x_{(i)}(k))$; \item computes the
neighborhood weighted average $x_{(i)}(k+1/2) = \sum_{j} w_{ij}x_{(j)}(k)$, where
$w_{ij}\not=0$ only if $j$ is a neighbor of $i$ or $j=i$;
\item applies $x_{(i)}(k+1) = x_{(i)}(k+1/2)-\alpha\nabla f_i(x_{(i)}(k))$.
\end{enumerate}
Steps 1 and 2 can be carried out in parallel, and their results
are used in Step 3. Putting the three steps together, we arrive at
our main iteration \beq\label{dec_grad} \boxed{x_{(i)}(k+1) =
\sum_{j=1}^n w_{ij}x_{(j)}(k) - \alpha \nabla
f_i(x_{(i)}(k)),\quad i = 1,2,\ldots, n.} \eeq When $f_i$ is not
differentiable, by replacing $\grad f_i$ with a member of
$\partial f_i$ we obtain the decentralized \emph{subgradient}
method \cite{Nedic2009}. Other decentralization methods are
reviewed in {Section \ref{blabla}} below.
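To make the iteration concrete, the following Python sketch runs \eqref{dec_grad} on a decentralized least-squares problem $f_i(x)=\frac{1}{2}\|A_ix-b_i\|^2$ over a ring network. The data, the ring topology, and the lazy-averaging mixing matrix are illustrative choices, not part of the method; any symmetric doubly stochastic $W$ supported on the communication graph could be used instead.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p = 10, 3                       # number of agents, dimension of x

# Local objectives f_i(x) = 0.5*||A_i x - b_i||^2 with synthetic data
A = [rng.standard_normal((5, p)) for _ in range(n)]
x_true = rng.standard_normal(p)
b = [Ai @ x_true + 0.01 * rng.standard_normal(5) for Ai in A]
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])
L_h = max(np.linalg.norm(Ai.T @ Ai, 2) for Ai in A)   # max Lipschitz constant

# Symmetric doubly stochastic mixing matrix for a ring graph (lazy averaging)
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

alpha = 0.5 / L_h                  # fixed stepsize, below the O(1/L_h) level
X = np.zeros((n, p))               # row i holds the local copy x_(i)
for k in range(2000):
    G = np.vstack([grad(i, X[i]) for i in range(n)])
    X = W @ X - alpha * G          # iteration (dec_grad): each row only
                                   # mixes the rows of its ring neighbors
x_bar = X.mean(axis=0)             # averaged solution
print(np.linalg.norm(x_bar - x_true))
\end{verbatim}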
We assume that the mixing matrix $W=[w_{ij}]$ is symmetric and
doubly stochastic. The eigenvalues of
$W$ are real and sorted in a nonincreasing order $1 =
\lambda_1(W) \geq \lambda_2(W) \geq \cdots \geq \lambda_n(W) \geq
-1$. Let the second largest magnitude of the eigenvalues of $W$ be
denoted as \beq\label{beta}\beta =
\max\left\{|\lambda_2(W)|,|\lambda_n(W)|\right\}.\eeq
The optimization of
matrix $W$ and, in particular, $\beta$, is not our focus; the reader is referred to \cite{Boyd2004}.
Some basic questions regarding the decentralized gradient
method include: (i) When does $x_{(i)}(k)$ converge? (ii) Does it
converge to $x^* \in \cX^*$? (iii) If $x^*$ is not the limit,
does consensus (i.e., $x_{(i)}(k)=x_{(j)}(k)$, $\forall i,j$) hold asymptotically? (iv) How do the properties of $f_i$ and the
network affect convergence?
\subsection{Background}\label{backgrd}
The study on decentralized optimization can be traced back to the
seminal work in the 1980s \cite{Tsitsiklis1986,Tsitsiklis1984}.
Compared to optimization with a fusion center that
collects data and performs computation, decentralized
optimization enjoys the advantages of scalability to network
sizes, robustness to dynamic topologies, and privacy preservation
in data-sensitive applications \cite{Sayed2013,
Ling2013_reweighted, Olfati-Saber2007, Yan2013}. These properties are important for
applications where data are collected by distributed
agents, communication to a fusion center is expensive or
impossible, and/or agents tend to keep their raw data private;
such applications arise in wireless sensor networks
\cite{Ling2010, Predd2007, Schizas2008,Zhao2002}, multivehicle
and multirobot networks \cite{Cao2013, Ren2007, Zhou2010}, smart
grids \cite{Giannakis2013, Kekatos2013}, cognitive radio networks
\cite{Bazerque2010, Bazerque2011}, etc. The recent research
interest in big data processing also motivates the work of
decentralized optimization in machine learning \cite{Duchi2012,
Tsianos2012_application}. Furthermore, the decentralized optimization problem \eqref{eq:original problem} can be extended
to the online or dynamic settings where the objective function becomes
an online regret \cite{Tsianos2012_strong, Yan2013} or a dynamic
cost \cite{Cavalcante2013, Jakubiec2013, Ling2013_dynamic}.
{To demonstrate how decentralized optimization works},
we take spectrum sensing in a cognitive radio network as an
example. Spectrum sensing aims at detecting unused spectrum bands,
{and thus enables} the cognitive radios to opportunistically
use them for data communication. Let $x$ be a vector whose
elements are the signal strengths of spectrum channels. Each
cognitive radio $i$ takes time-domain measurement
$b_i = F^{-1} G_i x + e_i$, where $G_i$ is
the channel fading matrix, $F^{-1}$ is the
inverse Fourier transform matrix, and $e_i$ is the measurement
noise. To each cognitive radio $i$, assign a local objective function $f_i(x) = (1/2) \|b_i -
F^{-1} G_i x\|^2$ or the regularized function $f_i(x) = (1/2)
\|b_i - F^{-1} G_i x\|^2 + \phi(x)$, where
$\phi(x)$ promotes a certain structure of $x$. To estimate $x$, a set of geographically nearby cognitive
radios collaboratively solve the consensus optimization problem
\eqref{eq:original problem}. Decentralized
optimization is suitable for this application since communication between nearby cognitive
radios is fast and energy-efficient and, if a cognitive radio joins or leaves the network, no
reconfiguration is needed.
\subsection{Related methods}\label{blabla}
The decentralized stochastic subgradient projection algorithm
\cite{Ram2010} handles constrained optimization; the fast
decentralized gradient methods \cite{Jakovetic2013} adopt
Nesterov's acceleration; the distributed online gradient descent
algorithm\footnote{Here we consider its decentralized batch
version.} \cite{Tsianos2012_strong} has nested iterations, where
the inner loop performs a fine search; the dual averaging
subgradient method \cite{Duchi2012} {carries out} a
projection operation after averaging and descending.
Unsurprisingly, decentralized computation tends to require more
assumptions for convergence than similar centralized computation.
All of the above algorithms are analyzed under the assumption of
bounded (sub)gradients. Unbounded gradients can potentially cause
algorithm divergence. When using a fixed stepsize, the above
algorithms (and iteration \eqref{dec_grad} in particular) converge
to a neighborhood of $x^*$ rather than $x^*$ itself. The size of
the neighborhood is monotonically increasing in the stepsize. Convergence to
$x^*$ can be achieved by using diminishing stepsizes in
\cite{Duchi2012,Jakovetic2013,Tsianos2012_strong} at the price of
slower rates of convergence. With diminishing stepsizes,
\cite{Jakovetic2013} shows an outer loop complexity of $O(1/k^2)$
under Nesterov's acceleration when the inner loop performs a
substantial search job, without which the rate reduces to
$O(\log(k)/k)$.
\subsection{Contribution and notation}
This paper studies the convergence of iteration \eqref{dec_grad} under the following assumptions.
\begin{assumption}\label{assmp1}
\begin{enumerate}
\item[a)] For $i=1,\ldots,n$, $f_i$ is proper closed convex, lower bounded, and Lipschitz differentiable with constant $L_{f_i}>0$. \item[b)] The network
has a synchronized clock in the sense that \eqref{dec_grad} is
applied to all the agents at the same time intervals, the network is
connected, and the mixing matrix $W$ is symmetric and doubly
stochastic with $\beta<1$ (see \eqref{beta} for the definition of $\beta$).
\end{enumerate}\end{assumption}
{Unlike
\cite{Duchi2012,Jakovetic2013,Nedic2009,Ram2010,Tsianos2012_strong},
{which {characterize} the ergodic convergence of
$f(\hat{x}_{(i)}(k))$ where
$\hat{x}_{(i)}(k)=\frac{1}{k}\sum_{s=0}^{k-1} {x}_{(i)}(s)$, }
this paper establishes the {non-ergodic} convergence
of all local solution sequences $\{x_{(i)}(k)\}_{k\ge 0}$. In
addition,
the analysis in this paper does not assume bounded $\grad f_i$.
Instead, the following stepsize condition will ensure bounded
$\nabla f_i$: \beq\label{step_bnd} \alpha < O(1/L_h), \eeq where
$L_h = \max\{L_{f_1},\ldots,L_{f_n}\}$. This result is obtained
through interpreting the iteration \eqref{dec_grad} for all the
{agents} as a gradient descent iteration applied to a certain
Lyapunov function.}
{Under {Assumption \ref{assmp1}} and condition
\eqref{step_bnd}, the
rate of $O(1/k)$ for ``near'' convergence is {shown}. Specifically, the
objective errors evaluated at the mean solution,
$f(\frac{1}{n}\sum_{i=1}^n x_{(i)}(k))-f^*$, and at any local
solution, $f({x}_{(i)}(k))-f^*$, both reduce at $O(1/k)$ until
reaching the level $O(\frac{\alpha}{1-\beta})$. The rate of the
mean solution is obtained by analyzing an inexact gradient descent
iteration, {somewhat} similar to
\cite{Duchi2012,Jakovetic2013,Nedic2009,Ram2010}. However, all of
their rates are given for the ergodic solution
$\hat{x}_{(i)}(k)=\frac{1}{k}\sum_{s=0}^{k-1} {x}_{(i)}(s)$. Our
rates are non-ergodic.}
In addition, a linear rate of ``near'' convergence is established if $f$ is
also strongly convex with modulus $\mu_f>0$, namely,
$$\langle\nabla f(x_a)-\nabla f(x_b), x_a-x_b \rangle
\ge \mu_f\|x_a-x_b\|^2,\quad \forall x_a,x_b\in\dom f,$$ or $f$ is
restricted strongly convex \cite{Lai2013} with modulus $\nu_f>0$,
\beq\label{rcvx}\langle\nabla f(x)-\nabla f(x^*), x-x^* \rangle
\ge \nu_f\|x-x^*\|^2,\quad \forall x\in\dom
f,~x^*=\Proj_{\cX^*}(x), \eeq where $\Proj_{\cX^*}(x)$ is the
projection of $x$ onto the solution set $\cX^*$ and $\nabla
f(x^*)=0$. In both cases, we show that the mean solution error
$\|\frac{1}{n}\sum_{i=1}^n x_{(i)}(k)-x^*\|$ and the local
solution error $\|x_{(i)}(k)-x^*\|$ reduce geometrically until
reaching the level $O(\frac{\alpha}{1-\beta})$. Restricted
strongly convex functions are studied as they appear in the
applications of sparse optimization and statistical regression;
see \cite{ZhangYin2013} for some examples. The solution set
$\cX^*$ is a singleton if $f$ is strongly convex but not
necessarily so if $f$ is restricted strongly convex.
{Since our analysis uses a fixed stepsize, the local solutions will not be asymptotically consensual. To adapt our analysis to diminishing stepsizes, significant changes will be needed.}
Based on iteration \eqref{dec_grad}, a decentralized algorithm is derived for the basis pursuit problem with distributed data to recover a sparse signal in Section
\ref{sec:3}. The algorithm converges linearly until reaching an $O(\frac{\alpha}{1-\beta})$-neighborhood of the sparse signal.
Section \ref{sec:4} presents numerical results on the test
problems of decentralized least squares and decentralized basis
pursuit to verify our developed rates of convergence and the
levels of the landing neighborhoods.
Throughout the rest of this paper, we employ the following {notations} of stacked vectors:
$$[x_{(i)}]:=\begin{bmatrix}x_{(1)}\\x_{(2)}\\\vdots\\x_{(n)}\end{bmatrix}\in\RR^{np}\quad\text{and}\quad
h(k):=\begin{bmatrix}\nabla f_1(x_{(1)}(k))\\
\nabla f_2(x_{(2)}(k))\\\vdots\\ \nabla
f_n(x_{(n)}(k))\end{bmatrix}\in\RR^{np}.$$
\section{Convergence analysis}
\subsection{Bounded gradients}
\label{sec:2a}
Previous methods and analysis
\cite{Duchi2012,Jakovetic2013,Nedic2009,Ram2010,Tsianos2012_strong}
assume bounded gradients or subgradients of $f_i$. The assumption
indeed plays a key role in the convergence analysis. For
decentralized gradient descent iteration \eqref{dec_grad}, it
gives \emph{bounded} deviation from mean $\|x_{(i)}(k) -
\frac{1}{n}\sum_{j=1}^n x_{(j)}(k)\|$. It is necessary in the convergence
analysis of subgradient methods, whether they are centralized or
decentralized. But as we show below, the boundedness of $\nabla
f_i$ does not need to be assumed; it is a consequence of a bounded stepsize
$\alpha$, with dependence on the spectral properties of $W$. We
derive a tight bound on $\alpha$ for $\nabla f_i(x_{(i)}(k))$ to
be bounded.
\textbf{Example.} Consider $x\in\RR$ and a network formed by 3
connected agents (every pair of agents are directly linked).
Consider the following consensus optimization problem
$$ \Min_x\ \ f(x) = \sum_{i=1,2,3} f_i(x),\quad\text{where}~ f_i(x) = \frac{L_h}{2} (x-1)^2,$$
and $L_h>0$. This is a trivial average
consensus problem with $\nabla f_i(x_{(i)})=L_h(x_{(i)}-1)$ and $x^*=1$.
Take any $\tau \in (0,1/3)$ and let the mixing matrix be$$W =
\begin{bmatrix}1-2\tau & \tau & \tau\\ \tau & \tau & 1-2\tau\\
\tau & 1-2\tau & \tau\end{bmatrix},$$ which is symmetric doubly
stochastic. We have $\lambda_3(W)=3\tau - 1\in(-1,0)$. Start from
{$(x_{(1)},x_{(2)},x_{(3)})=(1,0,2)$}. Simple calculations
yield:
\begin{itemize}
\item if $\alpha < (1+\lambda_3(W))/L_h$, then $x_{(i)}(k)$
converges to $x^*$, $i=1,2,3$; (The exact consensus among $x_{(i)}(k)$
as $k\to\infty$ is special to this example, where $\nabla f_i(x^*)=0$ for every $i$.) \item if $\alpha >
(1+\lambda_3(W))/L_h$, then $x_{(i)}(k)$ diverges and is
asymptotically unbounded where $i=1,2,3$; \item if $\alpha =
(1+\lambda_3(W))/L_h$, then $(x_{(1)}(k),x_{(2)}(k),x_{(3)}(k))$
equals $(1,2,0)$ at odd $k$ and $(1,0,2)$ at even $k$.
\end{itemize}
Clearly, if $x_{(i)}$ converges, then $\nabla f_i(x_{(i)})$ converges and
thus stays bounded. In the above example $\alpha =
(1+\lambda_3(W))/L_h$ is the critical stepsize.
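These three regimes are easily reproduced numerically; the following Python sketch does so for the illustrative choice $\tau=0.2$ (so that $\lambda_3(W)=-0.4$) and $L_h=1$.
\begin{verbatim}
import numpy as np

L_h, tau = 1.0, 0.2
W = np.array([[1 - 2 * tau, tau, tau],
              [tau, tau, 1 - 2 * tau],
              [tau, 1 - 2 * tau, tau]])
lam3 = np.linalg.eigvalsh(W).min()        # equals 3*tau - 1 = -0.4
x0 = np.array([1.0, 0.0, 2.0])

def run(alpha, iters=200):
    x = x0.copy()
    for _ in range(iters):
        x = W @ x - alpha * L_h * (x - 1.0)   # grad f_i(x_i) = L_h (x_i - 1)
    return x

crit = (1 + lam3) / L_h                   # critical stepsize, here 0.6
print(run(0.9 * crit))   # converges to (1, 1, 1)
print(run(crit))         # alternates between (1, 0, 2) and (1, 2, 0)
print(run(1.1 * crit))   # deviation from x* grows without bound
\end{verbatim}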
As each $\nabla f_i(x_{(i)})$ is Lipschitz continuous with
constant $L_{f_i}$, $h(k)$ is Lipschitz continuous with constant
$$L_h =\max_{i}\{L_{f_i}\}.$$ We formally show that $\alpha \le
(1+\lambda_n(W))/L_h$ ensures bounded $h(k)$. {The analysis
is based on the Lyapunov function
\beq\label{xi}\xi_\alpha([x_{(i)}]) :=
-\frac{1}{2}\sum_{i,j=1}^n w_{ij}x_{(i)}^Tx_{(j)}+\sum_{i=1}^n
\left(\frac{1}{2}\|x_{(i)}\|^2+\alpha f_i(x_{(i)})\right),\eeq
which is convex since all $f_i$ are convex and the
remaining term $\frac{1}{2}\left(\sum_{i=1}^n \|x_{(i)}\|^2 -
\sum_{i,j=1}^n w_{ij}x_{(i)}^Tx_{(j)}\right)$ is also convex (and
uniformly nonnegative) due to {$\lambda_1(W)=1$}. In addition,
$\nabla \xi_{\alpha}$ is Lipschitz continuous with constant
$L_{\xi_\alpha}\le(1-\lambda_n(W))+{\alpha} L_h$. Rewriting
iteration \eqref{dec_grad} as
$$x_{(i)}(k+1) = \sum_{j=1}^n
w_{ij}x_{(j)}(k)-\alpha \nabla
f_i(x_{(i)}(k))=x_{(i)}(k)-\nabla_i\xi_{\alpha}([x_{(i)}(k)]),$$
we can observe that decentralized gradient
descent reduces to unit-stepsize centralized gradient
descent applied to minimize $\xi_\alpha([x_{(i)}])$.}
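Indeed, differentiating \eqref{xi} with respect to $x_{(i)}$ and using the symmetry of $W$ gives
$$\nabla_i\xi_{\alpha}([x_{(i)}]) = x_{(i)}-\sum_{j=1}^n w_{ij}x_{(j)}+\alpha\nabla f_i(x_{(i)}),$$
so that $x_{(i)}-\nabla_i\xi_{\alpha}([x_{(i)}])$ is exactly the right-hand side of \eqref{dec_grad}.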
\begin{theorem}\label{h_bnd}
Under Assumption \ref{assmp1}, if the stepsize
\beq\label{stpbnd}\alpha \le (1+\lambda_n(W))/L_h,\eeq then,
starting from $x_{(i)}(0)=0$, ${i=1,2,\ldots,n}$, {{the}
sequence $x_{(i)}(k)$ generated by the iteration \eqref{dec_grad}
converges.} In addition we also have \beq\label{hk_bnd}
\|h(k)\|\le D:=\sqrt{2L_h \left(\sum_{i=1}^n f_i(0) -{f^o}\right)}
\eeq for all $k=1,2,\ldots$, {where $f^o:=\sum_{i=1}^n f_i(x_{(i)}^o)$ and
$x_{(i)}^o=\arg\min_x f_i(x)$.}
\end{theorem}
\begin{proof}
{Note that the iteration \eqref{dec_grad} is equivalent to the
gradient descent iteration for the Lyapunov function \eqref{xi}.
From the classic analysis of gradient descent iteration in
\cite{bauschke2011convex} and \cite{Nesterov2007}, $[x_{(i)}(k)]$,
and hence $x_{(i)}(k)$, will converge to a certain point when
$\alpha \le (1+\lambda_n(W))/L_h$.}
Next we show \eqref{hk_bnd}. Since $\beta<1$, we have $\lambda_n(W)>-1$; together with $\alpha \le (1+\lambda_n(W))/L_h$, this gives $L_{\xi_\alpha}\le 2$, i.e., $(L_{\xi_\alpha}/2-1)\le0$. Therefore,
\begin{align*}\xi_\alpha([x_{(i)}(k+1)])
&\le \xi_\alpha([x_{(i)}(k)]) +\nabla\xi_\alpha([x_{(i)}(k)])^T([x_{(i)}(k+1) -x_{(i)}(k)])+\frac{L_{\xi_\alpha}}{2}\|[x_{(i)}(k+1) -x_{(i)}(k)]\|^2\\
&= \xi_\alpha([x_{(i)}(k)]) +(L_{\xi_\alpha}/2- 1)\|\nabla \xi_\alpha([x_{(i)}(k)])\|^2\\
&\le \xi_\alpha([x_{(i)}(k)]).
\end{align*}
Recall that $\frac{1}{2}\left(\sum_{i=1}^n \|x_{(i)}\|^2 -
\sum_{i,j=1}^n w_{ij}x_{(i)}^Tx_{(j)}\right)$ is nonnegative. Therefore,
we have \beq\label{fxibnd}\sum_{i=1}^n f_i(x_{(i)}(k))\le
\alpha^{-1}\xi_{\alpha}([x_{(i)}(k)])\le \cdots \le
\alpha^{-1}\xi_{\alpha}([x_{(i)}(0)])=\alpha^{-1}\xi_\alpha(0)=
\sum_{i=1}^n f_i(0). \eeq
On the other hand, for any differentiable convex function $g$ with the minimizer $x^*$
and Lipschitz constant $L_g$, we have $g(x_a)\ge g(x_b)+\nabla
g^T(x_b)(x_a-x_b)+\frac{1}{2L_g}\|\nabla g(x_a)-\nabla g(x_b)\|^2$
and $\nabla g(x^*)=0$. Then, $\|\nabla g(x)\|^2\le 2
L_g(g(x)-g^*)$ where $g^*:=g(x^*)$. Applying this inequality and \eqref{fxibnd}, we
obtain
\begin{align}
\|h(k)\|^2 = \sum_{i=1}^n \|\nabla f_i(x_{(i)}(k))\|^2\le
{\sum_{i=1}^n
2L_{f_i} \left(f_i(x_{(i)}(k))-f_i^o\right)\le2L_h
\left(\sum_{i=1}^n f_i(0) -f^o\right),}
\end{align}
{where $f_i^o=f_i(x_{(i)}^o)$ and $x_{(i)}^o=\arg\min_x f_i(x)$. Note that $x_{(i)}^o$ exists because of Assumption \ref{assmp1}.
Besides, we denote $f^o=\sum_{i=1}^n f_i^o$. }This completes
the proof.
\end{proof}
{In the above theorem, we choose $x_{(i)}(0)=0$ for
convenience. For general $x_{(i)}(0)$, a different bound for $\|h(k)\|$ can still
be obtained. Indeed, if $x_{(i)}(0)\neq 0$, then
$\alpha^{-1}\xi_\alpha(0)=\sum_{i=1}^n f_i(0) +
\frac{1}{2\alpha} \big( \sum_{i=1}^n \|x_{(i)}(0)\|^2 -
\sum_{i,j=1}^n w_{ij} x_{(i)}(0)^T x_{(j)}(0)\big)$ in \eqref{fxibnd}.
Hence we have $\|h(k)\|^2 \le 2L_h \big(\sum_{i=1}^n f_i(0)-f^o\big) +
\frac{L_h}{\alpha}\big(\sum_{i=1}^n \|x_{(i)}(0)\|^2 -
\sum_{i,j=1}^nw_{ij}x_{(i)}(0)^Tx_{(j)}(0)\big)$. The
initial values of $x_{(i)}(0)$ do not influence the stepsize
condition though they change the bound of gradient. For simplicity, we let
$x_{(i)}(0)=0$ in the rest of the paper.}
\textbf{Dependence on stepsize.} In \eqref{dec_grad}, the negative
gradient step $-\alpha\nabla f_i(x_{(i)})$ does not diminish at
$x_{(i)}=x^*$. Even if we let $x_{(i)}=x^*$ for all $i$, $x_{(i)}$
will immediately change once $\eqref{dec_grad}$ is applied.
Therefore, the term $-\alpha\nabla f_i(x_{(i)})$ prevents the
consensus of $x_{(i)}$. Even worse, because both terms in the
right-hand side of \eqref{dec_grad} change $x_{(i)}$, they can
possibly add up to an uncontrollable amount and cause $x_{(i)}(k)$
to diverge. The local averaging term is {itself stable}, so
the only choice we have is to limit the size of $-\alpha\nabla
f_i(x_{(i)})$ by bounding $\alpha$.
{\textbf{Network spectrum.} One can design $W$ so that
$\lambda_n(W) > 0$ and thus simply bound \eqref{stpbnd} to
$$\alpha \leq 1/L_h,$$ which no longer requires any spectral information of the underlying network. Given any mixing
matrix $\tilde{W}$ satisfying $1 = \lambda_1(\tilde{W})
> \lambda_2(\tilde{W}) \geq \cdots \geq \lambda_n(\tilde{W}) > -1$
(cf. \cite{Boyd2004}), one can design a new mixing matrix
$W=(\tilde{W}+I)/2$ that satisfies $1 = \lambda_1(W)
> \lambda_2(W) \geq \cdots \geq \lambda_n(W) > 0$.
The same argument applies to the results throughout the paper.}
\subsection{Bounded deviation from mean}
Let $$\bar{x}(k) := \frac{1}{n}\sum_{i=1}^n x_{(i)}(k)$$ be the
\emph{mean} of $x_{(1)}(k),\ldots,x_{(n)}(k)$. We will later analyze the
error in terms of $\bar{x}(k)$ and then each $x_{(i)}(k)$. To enable
that analysis, we shall show that the deviation from mean
$\|x_{(i)}(k)-\bar{x}(k)\|$ is bounded uniformly over $i$ and $k$.
Then, any bound of $\|\bar{x}(k)-x^*\|$ will give a bound of
$\|x_{(i)}(k)-x^*\|$. Intuitively, if the deviation from mean is
unbounded, then there would be no approximate consensus among
$x_{(1)}(k),\ldots,x_{(n)}(k)$. Without this approximate consensus,
descending individual $f_i(x_{(i)}(k))$ will not contribute to the
descent of $f(\bar{x}(k))$ and thus convergence is out of the
question. Therefore, it is critical to bound the deviation
$\|x_{(i)}(k)-\bar{x}(k)\|$.
\begin{lemma}
\label{lem:bnd_dev}
\label{bnd_dev} If \eqref{hk_bnd} holds and $\beta<1$, then the total deviation from mean is bounded, namely, $${{\|x_{(i)}(k) - \bar{x}(k)\| \le \frac{\alpha D}{1-\beta},\quad \forall k,\forall i.}}$$
\end{lemma}
\begin{proof}
Recall the definition of $[x_{(i)}]$ and $h(k)$, from the equation
\eqref{dec_grad} we have
$$
[x_{(i)}(k+1)]=(W\otimes I)[x_{(i)}(k)] - \alpha h(k),
$$
where $\otimes$ denotes the Kronecker product. From it, we
obtain\beq [x_{(i)}(k)]=-\alpha \sum_{s=0}^{k-1}(W^{k-1-s}\otimes
I)h(s). \eeq {Besides, letting
$[\bar{\mathbf{x}}(k)]=[\bar{x}(k);\cdots;\bar{x}(k)]\in \RR^{np}$, it
follows that
$$
[\bar{\mathbf{x}}(k)]=\frac{1}{n}((1_n 1_n^T) \otimes I) [x_{(i)}(k)].
$$}
As a result,
\begin{align}
\|x_{(i)}(k)-\bar{x}(k)\| & \leq \|[x_{(i)}(k)] - [\bar{\mathbf{x}}(k)] \| \nonumber\\
& = \|[x_{(i)}(k)]-\frac{1}{n}((1_n 1_n^T) \otimes I) [x_{(i)}(k)]\| \nonumber\\
& = \|-\alpha \sum\limits_{s=0}^{k-1} (W^{k-1-s} \otimes I) h(s) +
\alpha \sum\limits_{s=0}^{k-1} \frac{1}{n} ((1_n 1_n^T W^{k-1-s}) \otimes I) h(s)\| \nonumber \\
&= \|-\alpha \sum\limits_{s=0}^{k-1} (W^{k-1-s} \otimes I) h(s) +
\alpha \sum\limits_{s=0}^{k-1} \frac{1}{n} ((1_n 1_n^T) \otimes I) h(s)\| \label{W_disappear}\\
&= \alpha \| \sum\limits_{s=0}^{k-1} ((W^{k-1-s}-\frac{1}{n} 1_n 1_n^T) \otimes I) h(s) \| \nonumber \\
&\leq \alpha \sum\limits_{s=0}^{k-1} \|W^{k-1-s}-\frac{1}{n} 1_n 1_n^T\| \| h(s)\| \nonumber \\
&= \alpha \sum\limits_{s=0}^{k-1} \beta^{k-1-s} \| h(s)\|, \nonumber
\end{align}
where \eqref{W_disappear} holds since $W$ is doubly stochastic.
From $\|h(k)\|\le D$ and $\beta < 1$, it follows that
$$\|x_{(i)}(k)-\bar{x}(k)\| \leq \alpha \sum\limits_{s=0}^{k-1} \beta^{k-1-s} \| h(s)\| \leq \alpha \sum\limits_{s=0}^{k-1} \beta^{k-1-s} D \leq \frac{\alpha D}{1-\beta},$$
which completes the proof.
\end{proof}
{The proof of Lemma \ref{lem:bnd_dev} utilizes the spectral
property of the mixing matrix $W$. The constant in the upper bound is proportional to the stepsize $\alpha$ and
monotonically increasing with respect to the second largest
eigenvalue modulus $\beta$. The papers \cite{Duchi2012},
\cite{Nedic2009}, and \cite{Ram2010} also analyze the deviation of local solutions from their
mean, but their results are different.
The upper bound in \cite{Duchi2012} is given at the
termination time of the algorithm, which is not uniform in $k$. The two papers \cite{Nedic2009} and
\cite{Ram2010}, instead of bounding $\|W-\frac{1}{n}\mathbf{11}^T\|$, decompose it as the sum of element-wise $|w_{ij}-\frac{1}{n}|$ and then bound it with the minimum nonzero element in
$W$.
}
{As discussed after Theorem \ref{h_bnd}, $D$ is affected by the
value of $x_{(i)}(0)$, if it is nonzero. In Lemma
\ref{lem:bnd_dev}, if $x_{(i)}(0)\neq 0$, then $[x_{(i)}(k)]=(W^k
\otimes I)[x_{(i)}(0)] -\alpha \sum_{s=0}^{k-1}(W^{k-1-s}\otimes
I)h(s)$. Substituting it into the proof of Lemma \ref{lem:bnd_dev}
we obtain
$$\|x_{(i)}(k)-\bar{x}(k)\| \leq \beta^k \|[x_{(i)}(0)]\| + \frac{\alpha D}{1-\beta}.$$
When $k \rightarrow \infty$, $\beta^k \|[x_{(i)}(0)]\| \rightarrow
0$ and, therefore, the last term dominates.}
A consequence of Lemma \ref{lem:bnd_dev} is that the distance
between the following two quantities is also bounded
\begin{align*}
g(k) & := \frac{1}{n}\sum_{i=1}^n \nabla f_i(x_{(i)}(k)), \\
\bar{g}(k)&:= \frac{1}{n}\sum_{i=1}^n \nabla f_i(\bar{x}(k)).
\end{align*}
{
\begin{lemma}
\label{bnd_g}
Under Assumption \ref{assmp1}, if \eqref{hk_bnd} holds and $\beta<1$, then
\begin{align*}
\|\nabla f_i(x_{(i)}(k))-\nabla f_i(\bar{x}(k))\|&\le \frac{\alpha DL_{f_i}}{1-\beta},\\
\|g(k)-\bar{g}(k)\|&\le \frac{\alpha DL_h}{1-\beta}.
\end{align*}
\end{lemma}
}
\begin{proof}
Assumption \ref{assmp1} gives
$$\|\nabla f_i(x_{(i)}(k))-\nabla f_i(\bar{x}(k))\| \le L_{f_i}\|x_{(i)}(k)-\bar{x}(k)\| \le \frac{\alpha DL_{f_i}}{1-\beta},$$
where the last inequality follows from Lemma \ref{lem:bnd_dev}. On the other hand, we have
$$\|g(k)-\bar{g}(k)\|=\|\frac{1}{n}\sum_{i=1}^n \big(\nabla f_i(x_{(i)}(k)) - \nabla f_i(\bar{x}(k))\big)\|\le \frac{1}{n}\sum_{i=1}^n L_{f_i}\|x_{(i)}(k)-\bar{x}(k)\|\le \frac{\alpha DL_h}{1-\beta},$$
which completes the proof.
\end{proof}
We are interested in $g(k)$ since $-\alpha g(k)$ updates the
average of $x_{(i)}(k)$. To see this, by taking the average of
\eqref{dec_grad} over $i$ and noticing $W=[w_{ij}]$ is doubly
stochastic, we obtain \beq\label{dec_avg}
\bar{x}(k+1)=\frac{1}{n}\sum_{i=1}^nx_{(i)}(k+1) =
\frac{1}{n}{\sum_{i,j=1}^n} w_{ij}x_{(j)}(k) -
\frac{\alpha}{n}\sum_{i=1}^n \nabla
f_i(x_{(i)}(k))=\bar{x}(k)-\alpha g(k). \eeq On the other hand,
since the exact gradient of $\frac{1}{n}\sum_{i=1}^n
f_i(\bar{x}(k))$ is $\bar{g}(k)$, iteration \eqref{dec_avg} can be
viewed as an inexact gradient descent iteration (using $g(k)$
instead of $\bar{g}(k)$) for the problem
\beq\label{avg_prob}\Min_x~\bar{f}(x):=\frac{1}{n}\sum_{i=1}^n
f_i(x).\eeq It is easy to see that $\nabla\bar{f}$ is Lipschitz
continuous with the constant $$L_{\bar{f}}=\frac{1}{n}\sum_{i=1}^n
L_{f_i}.$$ If any $f_i$ is strongly convex, then so is $\bar{f}$,
with the modulus $\mu_{\bar{f}}=\frac{1}{n}\sum_{i=1}^n
\mu_{f_i}$. Based on the above interpretation, next we bound
$f(\bar{x}(k))-f^*$ and $\|\bar{x}(k)-x^*\|$.
\subsection{Bounded distance to minimum}\label{sc:bdm}
We consider the convex, restricted strongly convex, and strongly
convex cases. In the former two cases, the solution $x^*$ may be
non-unique, so we use the set of solutions $\cX^*$. We need the
following quantities for our analysis:
\begin{itemize}
\item objective error $\bar{r}(k) : =
\bar{f}(\bar{x}(k))-\bar{f}^*=\frac{1}{n}(f(\bar{x}(k))-f^*)~\text{where}~
\bar{f}^*:=\bar{f}(x^*)$, $x^*\in\cX^*$; \item solution error $\bar{e}(k) :=
\bar{x}(k)-x^*(k)~\text{where}~x^*(k)=\Proj_{\cX^*}(\bar{x}(k))\in\cX^*.$
\end{itemize}
\begin{theorem}\label{r_cvg} Under Assumption \ref{assmp1}, if $\alpha\le \min\{(1+\lambda_n(W))/L_h,1/L_{\bar{f}}\}=O(1/L_h)$, then while
$$\bar{r}(k) > C\sqrt{2}\cdot \frac{\alpha L_h D}{(1-\beta)}=O\left(\frac{\alpha}{1-\beta}\right)$$ (where constants $C$ and $D$ are defined in \eqref{def_c} and \eqref{hk_bnd}, respectively), the reduction of $\bar{r}(k)$ obeys
$$\bar{r}(k+1) \le \bar{r}(k) -O(\alpha\bar{r}^2(k)),$$
and therefore, $$\bar{r}(k)\le O\left(\frac{1}{\alpha k}\right).$$
In other words, $\bar{r}(k)$ decreases at a minimal rate of
$O(\frac{1}{\alpha k})=O(1/k)$ until reaching
$O(\frac{\alpha}{1-\beta})$.
\end{theorem}
\begin{proof}
{First we show that $\|\bar{e}(k)\| \leq C$. To this end, recall
the definition of $\xi_\alpha([x_{(i)}])$ in \eqref{xi}. Let
{$\tilde{\cX}$} denote its set of minimizer(s), which is
nonempty since each $f_i$ has a minimizer due to Assumption
\ref{assmp1}. Following the arguments in \cite[pp.
69]{Nesterov2007} and with the bound on $\alpha$, we have $d(k)\le
d(k-1)\le\cdots\le d(0)$, where $d(k):=\|[x_{(i)}(k)-
{\tilde{x}_{(i)}}]\|$ and {$[\tilde{x}_{(i)}]\in
\tilde{\cX}$}. Using $\|a_1+\cdots +a_n\|\le
\sqrt{n}\|{[a_1;\ldots;a_n]}\|$, we have
\begin{align}
\nonumber\|\bar{e}(k)\|&= \|\bar{x}(k)-x^*(k)\| = \|\frac{1}{n}\sum_{i=1}^n (x_{(i)}(k) - x^*)\| \le \frac{1}{\sqrt{n}}\|[x_{(i)}(k)-{x}^*]\|\\
\nonumber& \le \frac{1}{\sqrt{n}}(\|{[x_{(i)}(k)-\tilde{x}_{(i)}]}\|+\|{[\tilde{x}_{(i)} - x^*]}\|)\\
& \le \frac{1}{\sqrt{n}} (\|{[x_{(i)}(0) -
\tilde{x}_{(i)}]}\|+\|{[\tilde{x}_{(i)} -
x^*]}\|)=:C\label{def_c}
\end{align}
Next we show the convergence of $\bar{r}(k)$. By the assumption, we
have $1-\alpha L_{\bar{f}}\ge 0$, and thus
\begin{align*}
\bar{r}(k+1) &\le \bar{r}(k) +\langle \bar{g}(k),\bar{x}({k+1})-\bar{x}(k)\rangle +\frac{L_{\bar{f}}}{2}\|\bar{x}({k+1})-\bar{x}(k)\|^2\\
&\stackrel{\eqref{dec_avg}}{=} \bar{r}(k) - \alpha \langle \bar{g}(k),g(k)\rangle + \frac{\alpha^2 L_{\bar{f}}}{2}\|g(k)\|^2\\
&= \bar{r}(k) - \alpha \langle \bar{g}(k),\bar{g}(k)\rangle+ \frac{\alpha^2 L_{\bar{f}}}{2}\|\bar{g}(k)\|^2+ 2\alpha\frac{1-\alpha L_{\bar{f}}}{2} \langle \bar{g}(k),\bar{g}(k)-g(k)\rangle+\frac{\alpha^2 L_{\bar{f}}}{2}\|\bar{g}(k)-g(k)\|^2\\
&\le \bar{r}(k)-\alpha(1-\frac{\alpha L_{\bar{f}}}{2}-\delta\frac{1-\alpha L_{\bar{f}}}{2}) \|\bar{g}(k)\|^2+\alpha(\frac{\alpha L_{\bar{f}}}{2}+\delta^{-1}\frac{1-\alpha L_{\bar{f}}}{2})\|\bar{g}(k)-g(k)\|^2,
\end{align*}
where the last inequality follows from Young's inequality $\pm2a^Tb\le
\delta^{-1}\|a\|^2+ \delta\|b\|^2$ for any $\delta >0$. Although
we can later optimize over $\delta>0$, we simply take
$\delta = 1$. Since $\alpha\le (1+\lambda_n(W))/L_h$, we can apply
Theorem \ref{h_bnd} and then Lemma \ref{bnd_g} to the last term
above, and obtain
$$\bar{r}(k+1) \le \bar{r}(k) -\frac{\alpha}{2} \|\bar{g}(k)\|^2+\frac{\alpha^3 D^2L_h^2}{2(1-\beta)^2}.$$
Since $\|\bar{e}(k)\|\le C$ as shown in \eqref{def_c}, from $\bar{r}(k) =\bar{f}(\bar{x}(k))-\bar{f}^*\le \langle \bar{g}(k),\bar{x}(k)-x^*(k)\rangle=\langle \bar{g}(k),\bar{e}(k)\rangle$, we obtain that
$$\|\bar{g}(k)\|\ge \|\bar{g}(k)\|\frac{\|\bar{e}(k)\|}{C} \ge\frac{|\langle\bar{g}(k),\bar{e}(k)\rangle|}{C}\ge\frac{\bar{r}(k)}{C}, $$
which gives
$$\bar{r}(k+1) \le \bar{r}(k) -\frac{\alpha}{2C^2} \bar{r}^2(k)+\frac{\alpha^3 D^2L_h^2}{2(1-\beta)^2}.$$ Hence, while $\frac{\alpha}{2C^2} \bar{r}^2(k)> 2 \cdot \frac{\alpha^3 D^2L_h^2}{2(1-\beta)^2}$ or equivalently $\bar{r}(k) > C\sqrt{2}\cdot \frac{\alpha L_h D}{(1-\beta)}$, we have $\bar{r}(k+1) \le \bar{r}(k)- O(\alpha\bar{r}^2(k))$. Dividing both sides by $\bar{r}(k)\bar{r}(k+1)$ gives $\frac{1}{\bar{r}(k)}+O(\frac{\alpha\bar{r}(k)}{\bar{r}(k+1)})\le \frac{1}{\bar{r}(k+1)}$. Hence, $\frac{1}{\bar{r}(k)}$ increases at $\Omega(\alpha k)$, or $\bar{r}(k)$ reduces at $O(1/(\alpha k))$, which completes the proof.}
\end{proof}
Theorem \ref{r_cvg} shows that, until reaching
$f^*+O(\frac{\alpha}{1-\beta})$, $f(\bar{x}(k))$ decreases at the
rate of $O(1/(\alpha k))$. For a fixed $\alpha$, there is a tradeoff
between the convergence rate and the final optimality. Again, when
iteration \eqref{dec_grad} stops, $\bar{x}(k)$ is not available to
any of the agents, but it can be obtained by invoking an average
consensus algorithm.
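The following minimal Python sketch (ours, for illustration only; all names are hypothetical) shows one way to recover $\bar{x}(k)$ by average consensus: each agent repeatedly replaces its local value with the $W$-weighted average of its neighbors' values.
\begin{verbatim}
import numpy as np

def average_consensus(X, W, num_rounds=200):
    """Approximate the network-wide average of the rows of X.

    X : (n, d) array; row i holds agent i's local copy x_(i)(k).
    W : (n, n) symmetric doubly stochastic mixing matrix.
    Each round performs X <- W X, i.e., every agent averages its
    neighbors' values; every row then converges to (1/n) sum_i x_(i)(k).
    """
    for _ in range(num_rounds):
        X = W @ X
    return X
\end{verbatim}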
{
\begin{remark}
Since $\bar{f}(x)$ is convex, we have for all $i=1,2,\ldots,n$:
\begin{align}
\bar{f}(x_{(i)}(k)) - \bar{f}^* & \leq \bar{r}(k) + \langle \bar{g}(x_{(i)}(k)), x_{(i)}(k) - \bar{x}(k) \rangle \nonumber \\
& \leq \bar{r}(k) + \frac{1}{n}\sum_{j=1}^n \|\nabla f_j(x_{(i)}(k))\| \|x_{(i)}(k) - \bar{x}(k)\| \nonumber \\
& \leq \bar{r}(k) + \frac{\alpha D^2}{1-\beta}. \nonumber
\end{align}
From Theorem \ref{r_cvg} we conclude that $\bar{f}(x_{(i)}(k)) -
\bar{f}^*$, like $\bar{r}(k)$, converges at $O(1/k)$ until
reaching $O(\frac{\alpha}{1-\beta})$.
This $O(1/k)$ rate (which holds until the $O(\frac{\alpha}{1-\beta})$
neighborhood is reached) is stronger than the rates of
the distributed subgradient method \cite{Nedic2009} and the dual
averaging subgradient method \cite{Duchi2012}, whose rates are in
terms of the objective error $f(\hat{x}_{(i)}(k))-f^*$ evaluated at the ergodic
solution
$\hat{x}_{(i)}(k)=\frac{1}{k}\sum_{s=0}^{k-1}x_{(i)}(s)$.
\end{remark}
}
Next, we bound $\|\bar{e}(k+1)\|$ under the assumption of restricted or standard strong convexity. To start, we
present a lemma.
\begin{lemma}\label{sc_bnd}Suppose that $\nabla\bar{f}$ is Lipschitz continuous with constant $L_{\bar{f}}$. Then, we have
$$\langle x-x^*, \nabla\bar{f}(x) - \nabla\bar{f}(x^*) \rangle \geq c_1\|\nabla\bar{f}(x) - \nabla\bar{f}(x^*)\|^2 +c_2 \|x-x^*\|^2$$
(where $x^*\in \cX^*$ and $\nabla\bar{f}(x^*)=0$) for the following cases:
\begin{enumerate}[a)]
\item (\cite[Theorem 2.1.12]{Nesterov2007}) if ${\bar{f}}$ is
strongly convex with modulus $\mu_{\bar{f}}$, then
$c_1=\frac{1}{\mu_{\bar{f}}+L_{\bar{f}}}$ and
$c_2=\frac{\mu_{\bar{f}}L_{\bar{f}}}{\mu_{\bar{f}}+L_{\bar{f}}}$;
\item (\cite[Lemma 2]{ZhangYin2013}) if ${\bar{f}}$ is restricted
strongly convex with modulus $\nu_{\bar{f}}$, then
$c_1=\frac{\theta}{L_{\bar{f}}}$ and $c_2=(1-\theta)\nu_{\bar{f}}$
for any $\theta\in[0,1]$.
\end{enumerate}
\end{lemma}
{{\begin{theorem} \label{mean convg} Under Assumption
\ref{assmp1}, if $f$ is either strongly convex with modulus
$\mu_f$ or restricted strongly convex with modulus $\nu_f$, and if
$\alpha \le \min\{(1 + \lambda_n(W))/L_h, c_1\}=O(1/L_h)$
and $\beta<1$, then we have
$$\|\bar{e}(k+1)\|^2 \le c_3^2 \|\bar{e}(k)\|^2 + c_4^2,$$
where
$$c_3^2=1 - \alpha c_2+ \alpha\delta - \alpha^2 \delta c_2, \quad c_4^2=\alpha^3(\alpha+\delta^{-1}) \frac{L_h^2 D^2}{(1-\beta)^2},
\quad D=\sqrt{2L_h \sum_{i=1}^n \left(f_i(0) -f_i^o\right)},$$ constants
$c_1$ and $c_2$ are given in Lemma \ref{sc_bnd},
$\mu_{\bar{f}}=\mu_{f}/n$ and $\nu_{\bar{f}}=\nu_{f}/n$, and
$\delta$ is any positive constant. In particular, if we set
$\delta=\frac{c_2}{2(1-\alpha c_2)}$ such that
$c_3=\sqrt{1-\frac{\alpha c_2}{2}} \in (0,1)$, then we have
$$\|\bar{e}(k)\|\le c_3^{k} \|\bar{e}(0)\|+O(\frac{\alpha}{1-\beta}).$$
\end{theorem}}}
\begin{proof}
Recalling that $x^*(k+1)=\Proj_{\cX^*}(\bar{x}(k+1))$ and
$\bar{e}(k+1)=\bar{x}(k+1)-x^*(k+1)$, we have
\begin{align*}
\|\bar{e}(k+1)\|^2 & \leq \|\bar{x}(k+1) - x^*(k)\|^2 \nonumber\\
& = \|\bar{x}(k) - x^*(k) - \alpha g(k)\|^2 \nonumber \\
& = \|\bar{e}(k)-\alpha \bar{g}(k) + \alpha(\bar{g}(k)-g(k))\|^2 \quad \nonumber \\
& = \|\bar{e}(k)-\alpha \bar{g}(k)\|^2 +\alpha^2 \|\bar{g}(k)-g(k)\|^2 + 2\alpha (\bar{g}(k)-g(k))^T(\bar{e}(k)-\alpha \bar{g}(k)) \nonumber \\
& \leq (1 + \alpha\delta) \|\bar{e}(k)-\alpha \bar{g}(k)\|^2 + \alpha(\alpha+\delta^{-1}) \|\bar{g}(k)-g(k)\|^2,
\end{align*}
where the last inequality follows again from $\pm2a^Tb\le
\delta^{-1}\|a\|^2+ \delta\|b\|^2$ for any $\delta >0$. The bound
of $\|\bar{g}(k)-g(k)\|^2$ follows from Lemma \ref{bnd_g} and
Theorem \ref{h_bnd}, and we shall bound $\|\bar{e}(k)-\alpha
\bar{g}(k)\|^2$, which is a standard exercise; we repeat it below for
completeness. Applying Lemma \ref{sc_bnd} and noticing
$\bar{g}(x)=\nabla{\bar{f}}(x)$ by definition, we have
\begin{align*}
\|\bar{e}(k)-\alpha \bar{g}(k)\|^2 &
= \|\bar{e}(k)\|^2 + \alpha^2 \|\bar{g}(k)\|^2 - 2\alpha \bar{e}(k)^T \bar{g}(k) \nonumber \\
&\le \|\bar{e}(k)\|^2 + \alpha^2 \|\bar{g}(k)\|^2 - \alpha c_1 \|\bar{g}(k)\|^2-\alpha c_2\|\bar{e}(k)\|^2 \nonumber \\
& = (1-\alpha c_2) \|\bar{e}(k)\|^2 + \alpha(\alpha -c_1) \|\bar{g}(k)\|^2. \end{align*}
We shall pick $\alpha\le c_1$ so that $\alpha(\alpha -c_1) \|\bar{g}(k)\|^2\le 0$. Then, combining the last two chains of inequalities, we have
\begin{align*}
\|\bar{e}(k+1)\|^2 & \leq (1 + \alpha\delta)(1-\alpha c_2) \|\bar{e}(k)\|^2 +\alpha(\alpha+\delta^{-1}) \|\bar{g}(k)-g(k)\|^2 \\
& \leq (1 - \alpha c_2+ \alpha\delta - \alpha^2 \delta c_2)
\|\bar{e}(k)\|^2 +\alpha^3(\alpha+\delta^{-1}) \frac{L_h^2
D^2}{(1-\beta)^2}.
\end{align*}
Note that if $f$ is strongly convex, then $c_1c_2=\frac{\mu_{\bar{f}} L_{\bar{f}}}{(\mu_{\bar{f}} + L_{\bar{f}})^2} < 1$; if $f$
is restricted strongly convex, then $c_1c_2=\frac{\theta(1-\theta)\nu_{\bar{f}}}{L_{\bar{f}}} < 1$ because $\theta \in [0,1]$ and
$\nu_{\bar{f}}<L_{\bar{f}}$. Therefore we have $c_1 < 1/c_2$, so when $\alpha < c_1$ we get $\alpha c_2 < 1$ and hence $(1 + \alpha\delta)(1-\alpha c_2) > 0$.
Next, since
$$\|\bar{e}(k)\|^2\le c_3^{2k} \|\bar{e}(0)\|^2+\frac{1-c_3^{2k}}{1-c_3^2}c_4^2\le c_3^{2k} \|\bar{e}(0)\|^2+\frac{c_4^2}{1-c_3^2},$$
we get
$$\|\bar{e}(k)\| \leq c_3^{k} \|\bar{e}(0)\| + \frac{c_4}{\sqrt{1-c_3^2}}.$$
If we set $$\delta=\frac{c_2}{2(1-\alpha c_2)},$$ then we obtain
$$c_3^2=1-\frac{\alpha c_2}{2}<1,$$
$$\frac{c_4}{\sqrt{1-c_3^2}}=\frac{\alpha L_h D}{1-\beta}\sqrt{\frac{\alpha(\alpha+\frac{2(1-\alpha c_2)}{c_2})}{\frac{\alpha c_2}{2}}}=\frac{\alpha L_h D}{1-\beta} \sqrt{\frac{4}{c_2^2}-\frac{2}{c_2}\alpha}=O(\frac{\alpha}{1-\beta}),$$
which completes the proof.
\end{proof}
\begin{remark}
As a result, if $f$ is strongly convex, then $\bar{x}(k)$
geometrically converges until reaching an $O(\frac{\alpha}{1-\beta})$-neighborhood
of the unique solution $x^*$; on the other hand, if $f$ is
restricted strongly convex, then $\bar{x}(k)$ geometrically
converges until reaching an $O(\frac{\alpha}{1-\beta})$-neighborhood of the
solution set $\mathcal{X}^*$.
\end{remark}
\subsection{Local agent convergence}
\begin{corollary} \label{coro2}
Under Assumption \ref{assmp1}, if $f$ is either strongly convex or
restricted strongly convex, $\alpha < \min\{(1 +
\lambda_n(W))/L_h, c_1\}$ and $\beta<1$,
then we have
$$\|x_{(i)}(k)-{x}^*(k)\| \le c_3^k \|{x}^*(0)\| + \frac{c_4}{\sqrt{1-c_3^2}} + \frac{\alpha D}{1-\beta},$$
where $x^*(0),{x}^*(k)\in\cX^*$ are solutions defined at the beginning of subsection \ref{sc:bdm} and {the constants $c_3$, $c_4$, $D$ are the same as given in Theorem \ref{mean convg}.}
\end{corollary}
\begin{proof}
From Lemma \ref{lem:bnd_dev} and Theorem \ref{mean convg} we have
\begin{align}
& \|x_{(i)}(k)-{x}^*(k)\| \nonumber \\
\leq&\|\bar{x}(k)-{x}^*(k)\|+\|x_{(i)}(k)-\bar{x}(k)\| \nonumber \\
\leq& c_3^k\|{x}^*(0)\|+\frac{c_4}{\sqrt{1-c_3^2}}+\frac{\alpha
D}{1-\beta}, \nonumber
\end{align}
which completes the proof.
\end{proof}
\begin{remark}
Similar to Theorem \ref{mean convg} and Remark 1, if we set
$\delta=\frac{c_2}{2(1-\alpha c_2)}$, and if $f$ is strongly
convex, then {$x_{(i)}(k)$} geometrically converges to an
$O(\frac{\alpha}{1-\beta})$-neighborhood of the unique solution
$x^*$; if $f$ is restricted strongly convex, then
{$x_{(i)}(k)$} geometrically converges to an
$O(\frac{\alpha}{1-\beta})$-neighborhood of the solution set
$\mathcal{X}^*$.
\end{remark}
\section{Decentralized basis pursuit}
\label{sec:3}
\subsection{Problem statement}
We derive an algorithm for solving a decentralized basis pursuit
problem to illustrate the application of iteration
\eqref{dec_grad}.
Consider a multi-agent network of $n$ agents who collaboratively
find a sparse representation $y$ of a given signal $b \in \RR^p$
that is known to all the agents. Each agent $i$ holds a part $A_i
\in \RR^{p \times q_i}$ of the entire dictionary $A \in \RR^{p
\times q}$, where $q = \sum_{i=1}^n q_i$, and shall recover the
corresponding $y_i \in \RR^{q_i}$. Let
\begin{equation}
y := \left[
\begin{array}{c}
y_1 \\
\vdots \\
y_n
\end{array}
\right] \in \RR^{q},\quad
A := \left[
\begin{array}{ccc}
| & & | \\
A_1 & \hdots & A_n \\
| & & |
\end{array}
\right] \in \RR^{p \times q}. \nonumber
\end{equation}
The problem is
\begin{align} \label{eq:cp-bp}
\Min \limits_y & \quad \|y\|_1, \\
\st & \quad \sum_{i=1}^n A_i y_i = b, \nonumber
\end{align}
where $\sum_{i=1}^n A_i y_i=Ay$. {This formulation is a column-partitioned version of decentralized basis pursuit, as opposed to the row-partitioned version in \cite{Mota2012} and \cite{yuan2013}. Both versions} find applications
in, for example, collaborative spectrum sensing
\cite{Bazerque2010}, sparse event detection \cite{meng2009sparse},
and seismic modeling \cite{Mota2012}.
Developing efficient decentralized algorithms to solve
\eqref{eq:cp-bp} is nontrivial since the objective function is
neither differentiable nor strongly convex, and the constraint
couples all the agents. In this paper, we turn to an equivalent
and tractable reformulation by appending a strongly convex term
and solving its Lagrange dual problem by decentralized gradient
descent. Consider the augmented form of \eqref{eq:cp-bp} motivated
by \cite{Lai2013}:
\begin{align} \label{eq:cp-bp-lb}
\Min \limits_y & \quad \|y\|_1 + \frac{1}{2\gamma} \|y\|^2, \\
\st & \quad A y = b, \nonumber
\end{align}
where the regularization parameter $\gamma > 0$ is chosen so that
\eqref{eq:cp-bp-lb} returns a solution to \eqref{eq:cp-bp}.
Indeed, provided that $Ay=b$ is consistent, there always exists $\gamma_{\min}
> 0$ such that the solution to \eqref{eq:cp-bp-lb} is also a
solution to \eqref{eq:cp-bp} for any $\gamma \geq \gamma_{\min}$
\cite{Friedlander2007,Yin2010}. The linearized Bregman iteration
proposed in \cite{yin2008bregman} provably and efficiently converges to the
unique solution of \eqref{eq:cp-bp-lb}; see
\cite{Yin2010} for its analysis and \cite{osher2011fast} for
important improvements. Since the problem \eqref{eq:cp-bp-lb} is
now solved over a network of agents, we need to devise a
decentralized version of the linearized Bregman iteration.
The Lagrange dual of \eqref{eq:cp-bp-lb}, cast as a minimization
(instead of maximization) problem, is
\begin{align}
\Min\limits_x~f(x) := \frac{\gamma}{2} \|A^T x - \text{Proj}_{[-1,1]}(A^T x)\|^2 - b^T
x, \label{eq:dual}
\end{align}
where $x \in \RR^p$ is the dual variable and
$\text{Proj}_{[-1,1]}$ denotes the element-wise projection onto $[-1,1]$.
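To see how \eqref{eq:dual} arises, introduce the dual variable $x \in \RR^p$ for the constraint $Ay=b$. The Lagrangian of \eqref{eq:cp-bp-lb} is $\|y\|_1 + \frac{1}{2\gamma}\|y\|^2 + x^T(b-Ay)$, and minimizing it over $y$ decouples over the coordinates of $z:=A^Tx$. Each one-dimensional problem $\min_{y_i}\ |y_i| + \frac{1}{2\gamma}y_i^2 - z_iy_i$ has the minimizer $y_i=\gamma\max(|z_i|-1,0)\sign(z_i)$ and the optimal value $-\frac{\gamma}{2}\left(z_i-\text{Proj}_{[-1,1]}(z_i)\right)^2$. Hence the dual function equals $b^Tx-\frac{\gamma}{2}\|A^Tx-\text{Proj}_{[-1,1]}(A^Tx)\|^2$, and maximizing it is equivalent to minimizing $f(x)$ in \eqref{eq:dual}.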
We turn \eqref{eq:dual} into the form of \eqref{eq:original
problem}: \beq\label{dec_bp} \Min_x~f(x)=\sum_{i=1}^n
f_i(x),~\text{where}~f_i(x):= \frac{\gamma}{2} \|A_i^T x -
\text{Proj}_{[-1,1]}(A_i^T x)\|^2 - \frac{1}{n} b^T x. \eeq
The function $f_i$ is defined through $A_i$ and $b$, where the matrix $A_i$ is the
private information of agent $i$. The local objective functions
$f_i$ are differentiable, with gradients
\begin{align} \label{eq:gradient}
\nabla f_i(x) = \gamma A_i \text{Shrink}(A_i^T x) - \frac{b}{n},
\end{align}
where $\text{Shrink}(z)$ is the shrinkage operator defined as
$\max(|z|-1,0)\sign(z)$ component-wise.
Applying the iteration \eqref{dec_grad} to the problem \eqref{dec_bp}
starting with $x_{(i)}(0) = 0$, we obtain the iteration
\begin{align} \label{eq:dlb}
\boxed{x_{(i)}(k+1) = \sum_{j=1}^n w_{ij} x_{(j)}(k) - \alpha \left(A_i y_i(k) - \frac{b}{n}\right), \quad \text{where}~ y_i(k) = \gamma
\text{Shrink}(A_i^T x_{(i)}(k)).}
\end{align}
{Note that the primal solution $y_i(k)$ is
updated iteratively, as an intermediate step in the update of
$x_{(i)}(k+1)$.}
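For concreteness, the following Python sketch (our illustration, not code from the original work; all names are hypothetical) implements iteration \eqref{eq:dlb}. The local dictionaries \texttt{A\_list}, the signal \texttt{b}, the mixing matrix \texttt{W}, and the parameters \texttt{alpha} and \texttt{gamma} are assumed to be given.
\begin{verbatim}
import numpy as np

def shrink(z):
    # Component-wise shrinkage: max(|z| - 1, 0) * sign(z).
    return np.maximum(np.abs(z) - 1.0, 0.0) * np.sign(z)

def decentralized_lb(A_list, b, W, alpha, gamma, num_iters):
    """Sketch of the decentralized linearized Bregman iteration.

    A_list : list of p-by-q_i arrays; A_i is held by agent i.
    b      : length-p signal known to all agents.
    W      : n-by-n symmetric doubly stochastic mixing matrix.
    Returns the local dual iterates x_(i)(k) (rows of X) and the
    primal blocks y_i(k).
    """
    n, p = len(A_list), b.shape[0]
    X = np.zeros((n, p))                       # x_(i)(0) = 0 for all agents
    Y = [np.zeros(A.shape[1]) for A in A_list]
    for _ in range(num_iters):
        for i, A in enumerate(A_list):
            Y[i] = gamma * shrink(A.T @ X[i])  # primal block of agent i
        grads = np.stack([A_list[i] @ Y[i] - b / n for i in range(n)])
        X = W @ X - alpha * grads              # mix with neighbors, then descend
    return X, Y
\end{verbatim}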
{ It is easy to verify that the local objective functions
$f_i$ are Lipschitz differentiable with constants $L_{f_i} =
\gamma \| A_i\|^2$. Moreover, given that $Ay=b$ is consistent,
\cite{Lai2013} proves that $f(x)$ is restricted strongly convex
with a computable constant $\nu_f >0$.
Therefore, the objective function $f(x)$ in \eqref{eq:dual} has
$L_h=\max\{\gamma \|A_i\|^2:i=1,2,\ldots,n\}$,
$L_{\bar{f}}=\frac{\gamma}{n} \sum_{i=1}^n \|A_i\|^2$, and
$\nu_{\bar{f}}=\nu_{f}/n$. By Theorem \ref{mean convg}, any
local dual solution $x_{(i)}(k)$ generated by iteration
\eqref{eq:dlb} linearly converges to a neighborhood of the
solution set of \eqref{eq:dual}, and the primal solution $y(k) =
[y_1(k); \cdots; y_n(k)]$ linearly converges to a neighborhood of
the unique solution of \eqref{eq:cp-bp-lb}.}
\begin{theorem}\label{BP-convg}
Consider $x_{(i)}(k)$ generated by iteration \eqref{eq:dlb} and
$\bar{x}(k) := \frac{1}{n}\sum_{i=1}^n x_{(i)}(k)$. Let $y^*$ denote the unique
solution of \eqref{eq:cp-bp-lb}, and let
$\bar{x}^*(k)=\text{Proj}_{\mathcal{X}^*}(\bar{x}(k))$ denote the projection of
$\bar{x}(k)$ onto the optimal solution set of \eqref{eq:dual}. If the
stepsize $\alpha < \min\{{(1 + \lambda_n(W))}/{L_h}, c_1\}$, we
have
\begin{align}
\label{thm4}
\|x_{(i)}(k)-\bar{x}^*(k)\| \le c_3^k \|\bar{x}^*(0)\| + \left(\frac{c_4}{\sqrt{1-c_3^2}} + \frac{\alpha D}{1-\beta}\right),
\end{align}
where the constants $c_3$ and $c_4$ are the same as given in
Theorem \ref{mean convg}. In particular, if we set
$\delta=\frac{c_2}{2(1-\alpha c_2)}$ such that
$c_3=\sqrt{1-\frac{\alpha c_2}{2}} \in (0,1)$, then $\frac{c_4}{\sqrt{1-c_3^2}} + \frac{\alpha D}{1-\beta}=O(\frac{\alpha}{1-\beta})$.
On the other hand, the primal solution satisfies
\begin{align}
\label{ybound} \| y(k) - y^* \| \leq n \gamma \max_{i} \left(
\|A_i\| \|x_{(i)}(k) - \bar{x}^*(k)\| \right).
\end{align}
\end{theorem}
\begin{proof}
The result \eqref{thm4} follows directly from Corollary
\ref{coro2}. We focus on showing \eqref{ybound}.
Given any optimal dual solution $\bar{x}^*(k)$, the primal solution of
\eqref{eq:cp-bp-lb} is $y^* = \gamma \text{Shrink}(A^T
\bar{x}^*(k))$. Recall that $y(k) = [y_1(k); \cdots;
y_n(k)]$ and $y_i(k) = \gamma \text{Shrink}(A_i^T x_{(i)}(k))$. We have
\begin{align} \label{eq:ybound-1}
\| y(k) - y^* \| = & \| [\gamma \text{Shrink}(A_1^T x_{(1)}(k));
\cdots; \gamma \text{Shrink}(A_n^T x_{(n)}(k))] - \gamma
\text{Shrink}(A^T
\bar{x}^*(k)) \| \\
\leq & \gamma \sum_{i=1}^n \| \text{Shrink}(A_i^T x_{(i)}(k)) - \text{Shrink}(A_i^T \bar{x}^*(k)) \|. \nonumber
\end{align}
Since the shrinkage operator is nonexpansive, we have the bound $\|
\text{Shrink}(A_i^T x_{(i)}(k)) - \text{Shrink}(A_i^T \bar{x}^*(k)) \|
\leq \|A_i\| \|x_{(i)}(k) - \bar{x}^*(k)\| \leq \max_{i} \left( \|A_i\|
\|x_{(i)}(k) - \bar{x}^*(k)\| \right)$. Combining this inequality with
\eqref{eq:ybound-1}, we get \eqref{ybound}.
\end{proof}
\section{Numerical experiments}
\label{sec:4} In this section, we report numerical results from
applying the iteration \eqref{dec_grad} to a decentralized least
squares problem and the iteration \eqref{eq:dlb} to a
decentralized basis pursuit problem.
We generate a network consisting of $n$ agents with
$\frac{n(n-1)}{2}\eta$ edges that are chosen uniformly at random, where $n=100$ and $\eta=0.3$ are used for all the tests.
We ensure that the network is connected.
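A minimal Python sketch of this test setup follows (ours, for illustration). It picks $\frac{n(n-1)}{2}\eta$ edges uniformly at random and resamples until the graph is connected. The Metropolis rule used below to form a symmetric doubly stochastic $W$ is our assumption; it is one common choice and is not necessarily the mixing matrix used in the original experiments.
\begin{verbatim}
import itertools
import numpy as np

def is_connected(Adj):
    """Depth-first search check that the undirected graph is connected."""
    n = Adj.shape[0]
    visited = np.zeros(n, dtype=bool)
    stack = [0]
    visited[0] = True
    while stack:
        u = stack.pop()
        for v in np.nonzero(Adj[u])[0]:
            if not visited[v]:
                visited[v] = True
                stack.append(v)
    return bool(visited.all())

def random_connected_network(n=100, eta=0.3, seed=0):
    """Pick round(eta*n*(n-1)/2) edges uniformly at random; retry until connected."""
    rng = np.random.default_rng(seed)
    pairs = list(itertools.combinations(range(n), 2))
    m = int(round(eta * n * (n - 1) / 2))
    while True:
        Adj = np.zeros((n, n), dtype=bool)
        for k in rng.choice(len(pairs), size=m, replace=False):
            i, j = pairs[k]
            Adj[i, j] = Adj[j, i] = True
        if is_connected(Adj):
            return Adj

def metropolis_weights(Adj):
    """A symmetric doubly stochastic W built from the graph (Metropolis rule)."""
    n = Adj.shape[0]
    deg = Adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.nonzero(Adj[i])[0]:
            W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W
\end{verbatim}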
\subsection{Decentralized gradient descent for least squares}
\label{sec:4a} We apply the iteration \eqref{dec_grad} to the
least squares problem
\begin{align}
\Min\limits_{x\in\RR^3} \quad \frac{1}{2}\|b-Ax\|^2=\sum\limits_{i=1}^n\frac{1}{2}\|b_i-A_ix\|^2. \label{LS}
\end{align}
The entries of the true signal $x^*\in\RR^3$ are i.i.d. samples
from the Gaussian distribution $\mathcal{N}(0,1)$. $A_i \in
\RR^{3\times3}$ is the linear sampling matrix of agent $i$, whose
elements are i.i.d. samples from $\mathcal{N}(0,1)$, and
$b_i=A_ix^* \in \RR^3$ is the measurement vector of agent $i$.
For the problem \eqref{LS}, let $f_i(x)=\frac{1}{2}\|b_i-A_ix\|^2$. For any $x_a,\
x_b \in \RR^3$, $\|\nabla f_i(x_a)-\nabla
f_i(x_b)\|=\|A_i^TA_i(x_a-x_b)\|\leq \|A_i^TA_i\|\|x_a-x_b\|$, so
$\nabla f_i(x)$ is Lipschitz continuous. In addition,
$\frac{1}{2}\|b-Ax\|_2^2$ is strongly convex since $A$ has full
column rank with probability 1.
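The following Python sketch (ours, for illustration) applies iteration \eqref{dec_grad} to \eqref{LS} with the data generated as described above; \texttt{W} is a symmetric doubly stochastic mixing matrix for the test network, e.g., the one built in the previous sketch.
\begin{verbatim}
import numpy as np

def dgd_least_squares(W, alpha, num_iters=500, seed=0):
    """Decentralized gradient descent for the least squares problem (LS).

    Each agent i holds (A_i, b_i) with b_i = A_i x_true and updates
        x_(i)(k+1) = sum_j w_ij x_(j)(k) - alpha * A_i^T (A_i x_(i)(k) - b_i).
    Returns the history of ||mean_i x_(i)(k) - x_true||.
    """
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    x_true = rng.standard_normal(3)
    A = rng.standard_normal((n, 3, 3))            # A_i for each agent i
    b = np.einsum('nij,j->ni', A, x_true)         # b_i = A_i x_true
    X = np.zeros((n, 3))                          # row i is x_(i)(k)
    errors = []
    for _ in range(num_iters):
        residuals = np.einsum('nij,nj->ni', A, X) - b
        grads = np.einsum('nji,nj->ni', A, residuals)  # A_i^T (A_i x_(i) - b_i)
        X = W @ X - alpha * grads
        errors.append(np.linalg.norm(X.mean(axis=0) - x_true))
    return errors
\end{verbatim}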
Fig. \ref{fig:1} depicts the convergence of the error $\bar{e}(k)$
corresponding to five different stepsizes. It shows that
$\bar{e}(k)$ decreases linearly until reaching an
$O(\alpha)$-neighborhood, which agrees with Theorem \ref{mean
convg}. Not surprisingly, a smaller $\alpha$ causes the algorithm
to converge more slowly.
\begin{figure}
\centering
\begin{center}
\includegraphics[height=7cm]{1009_DGD_LS_const_stepsize}
\caption{Comparison of different fixed stepsizes for the decentralized gradient descent
algorithm.} \label{fig:1}
\end{center}
\end{figure}
Fig. \ref{fig:2} compares our theoretical stepsize bound in Theorem \ref{h_bnd} to the empirical bound of $\alpha$.
The theoretical bound for this
experimental network is
$\min\{\frac{1+\lambda_n(W)}{L_h},c_1\}=0.1038$. In Fig. \ref{fig:2}, we
choose $\alpha=0.1038$ and then the slightly larger $\alpha=0.12$.
We observe convergence
with $\alpha=0.1038$ but clear divergence with $\alpha=0.12$. This shows that our bound on $\alpha$ is quite close to the actual requirement.
\begin{figure}
\centering
\begin{center}
\includegraphics[height=7cm]{1016_DGD_LS_bound_tightness_2}
\caption{Comparison of the decentralized gradient descent
algorithm with stepsizes $\alpha=0.1038$ and $\alpha=0.12$.
} \label{fig:2}
\end{center}
\end{figure}
\subsection{Decentralized gradient descent for basis pursuit}
\label{sec:4b} In this subsection we test the iteration \eqref{eq:dlb} for the decentralized basis pursuit problem
\eqref{eq:cp-bp}.
Let $y \in \RR^{100}$ be the unknown signal whose entries are i.i.d. samples from
$\mathcal{N}(0,1)$. The entries of the measurement matrix $A \in
\RR^{50\times100}$ are also i.i.d. samples from $\mathcal{N}(0,1)$. Each
agent $i$ holds the $i$th column of $A$. $b=Ay\in \RR^{50}$ is the measurement
vector. We use the same network as in the last test.
\begin{figure}
\centering
\begin{center}
\includegraphics[height=7cm]{1012_DGD_LB_dual}
\caption{Convergence of the mean value of the dual variable
$\bar{x}(k)$.}
\label{fig:3}
\end{center}
\end{figure}
\begin{figure}
\centering
\begin{center}
\includegraphics[height=7cm]{1012_DGD_LB_primal}
\caption{Convergence of the primal variable $y(k)$. $y^*$ is the
solution of the problem \eqref{eq:cp-bp-lb}.
} \label{fig:4}
\end{center}
\end{figure}
Fig. \ref{fig:3} depicts the convergence of $\bar{x}(k)$, the
mean of the dual variables at iteration $k$. As stated in Theorem
\ref{BP-convg}, $\bar{x}(k)$ converges linearly to an
$O(\alpha)$-neighborhood of the solution set $\mathcal{X}^*$. The
limiting errors $\bar{e}(k)$ corresponding to the four values of
$\alpha$ are proportional to $\alpha$. As the stepsize becomes
smaller, the algorithm converges more accurately to
$\mathcal{X}^*$. Fig. \ref{fig:4} shows the linear convergence of
the primal variable $y(k)$. It is interesting that the $y(k)$
corresponding to three different values of $\alpha$ appear to
reach the same level of accuracy, which might be related to the
error forgetting property of the first-order $\ell_1$ algorithm
\cite{YinOsher2013} and deserves further investigation.
\section{Conclusion}
Consensus optimization problems in multi-agent networks arise in
applications such as mobile computing, coordination of self-driving
cars, cognitive radio, and collaborative data
mining. Compared to the traditional centralized approach, a
decentralized approach offers a more balanced communication load
and better privacy protection. In this paper, our effort is to
provide a mathematical understanding of the decentralized gradient
descent method with a fixed stepsize. We give a tight
condition for guaranteed convergence, as well as an example that
illustrates the failure of convergence when the condition is violated.
We analyze convergence and its rates
for problems with different properties, and we establish the relations
between network topology, stepsize, and convergence speed, which
shed some light on network design. The numerical observations
reasonably match the theoretical results.
\section*{Acknowledgements}
Q. Ling is supported by NSFC grant 61004137. W. Yin is supported
by ARL and ARO grant W911NF-09-1-0383 and NSF grants DMS-0748839
and DMS-1317602. The authors thank Yangyang Xu for helpful
comments.
\bibliographystyle{siam}
| {
"timestamp": "2015-07-02T02:05:50",
"yymm": "1310",
"arxiv_id": "1310.7063",
"language": "en",
"url": "https://arxiv.org/abs/1310.7063",
"abstract": "Consider the consensus problem of minimizing $f(x)=\\sum_{i=1}^n f_i(x)$ where each $f_i$ is only known to one individual agent $i$ out of a connected network of $n$ agents. All the agents shall collaboratively solve this problem and obtain the solution subject to data exchanges restricted to between neighboring agents. Such algorithms avoid the need of a fusion center, offer better network load balance, and improve data privacy. We study the decentralized gradient descent method in which each agent $i$ updates its variable $x_{(i)}$, which is a local approximate to the unknown variable $x$, by combining the average of its neighbors' with the negative gradient step $-\\alpha \\nabla f_i(x_{(i)})$. The iteration is $$x_{(i)}(k+1) \\gets \\sum_{\\text{neighbor} j \\text{of} i} w_{ij} x_{(j)}(k) - \\alpha \\nabla f_i(x_{(i)}(k)),\\quad\\text{for each agent} i,$$ where the averaging coefficients form a symmetric doubly stochastic matrix $W=[w_{ij}] \\in \\mathbb{R}^{n \\times n}$. We analyze the convergence of this iteration and derive its converge rate, assuming that each $f_i$ is proper closed convex and lower bounded, $\\nabla f_i$ is Lipschitz continuous with constant $L_{f_i}$, and stepsize $\\alpha$ is fixed. Provided that $\\alpha < O(1/L_h)$ where $L_h=\\max_i\\{L_{f_i}\\}$, the objective error at the averaged solution, $f(\\frac{1}{n}\\sum_i x_{(i)}(k))-f^*$, reduces at a speed of $O(1/k)$ until it reaches $O(\\alpha)$. If $f_i$ are further (restricted) strongly convex, then both $\\frac{1}{n}\\sum_i x_{(i)}(k)$ and each $x_{(i)}(k)$ converge to the global minimizer $x^*$ at a linear rate until reaching an $O(\\alpha)$-neighborhood of $x^*$. We also develop an iteration for decentralized basis pursuit and establish its linear convergence to an $O(\\alpha)$-neighborhood of the true unknown sparse signal.",
"subjects": "Optimization and Control (math.OC)",
"title": "On the Convergence of Decentralized Gradient Descent",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9890130596362787,
"lm_q2_score": 0.7154240079185319,
"lm_q1q2_score": 0.7075636870087565
} |
https://arxiv.org/abs/1812.09547 | Sum-Product Phenomena for Planar Hypercomplex Numbers | We study the sum-product problem for the planar hypercomplex numbers: the dual numbers and double numbers. These number systems are similar to the complex numbers, but it turns out that they have a very different combinatorial behavior. We identify parameters that control the behavior of these problems, and derive sum-product bounds that depend on these parameters. For the dual numbers we expose a range where the minimum value of $\max\{|A+A|,|AA|\}$ is neither close to $|A|$ nor to $|A|^2$.To obtain our main sum-product bound, we extend Elekes' sum-product technique that relies on point-line incidences. Our extension is significantly more involved than the original proof, and in some sense runs the original technique a few times in a bootstrapping manner. We also study point-line incidences in the dual plane and in the double plane, developing analogs of the Szemeredi-Trotter theorem. As in the case of the sum-product problem, it turns out that the dual and double variants behave differently than the complex and real ones. | \section{Introduction}
It is not uncommon for a combinatorial problem to be defined over $\ensuremath{\mathbb R}$, to then be generalized to $\ensuremath{\mathbb C}$, and then further generalized to the quaternions.
For example, this is the case for the sum-product problem \cite{BL18,KR13,SW17}, for geometric incidence problems \cite{SSZ18,SS08,ST12}, and for the Sylvester--Gallai problem \cite{EPS06}.
Both the complex numbers and the quaternions are types of \emph{hypercomplex numbers}.
A system of hypercomplex numbers is a unital algebra with every element having the form
\[ a_0 + a_1 {\bf i}_1 + a_2 {\bf i}_2 + \cdots + a_n {\bf i}_n. \]
Here $n\in \ensuremath{\mathbb N}$, $a_0,a_1,\ldots,a_n\in \ensuremath{\mathbb R}$, and ${\bf i}_1,\ldots,{\bf i}_n$ are called \emph{imaginary units}.
To be an algebra over the reals, the system also needs to include a multiplication table for the imaginary units.
The \emph{dimension} of the system is $n+1$, agreeing with the standard definition of the dimension of a vector space.
For example, the complex numbers are a two-dimensional system with the multiplication rule ${\bf i}^2=-1$.
The quaternions form a system of dimension four and involve a $3\times3$ multiplication table for the three imaginary units.
For a nice basic introduction to hypercomplex numbers, see for example \cite{KS89}.
We refer to two-dimensional systems of hypercomplex numbers as \emph{planar}.
Up to isomorphisms, there are exactly three such planar systems: The complex numbers, the \emph{dual numbers}, and the \emph{double numbers} (for a proof of this claim, see for example \cite[Section 2]{KS89}).
The dual numbers are of the form $a+b{\varepsilon}$, where $a,b\in \ensuremath{\mathbb R}$, and with the multiplication rule ${\varepsilon}^2=0$.
The double numbers are of the form $a+bj$ with the multiplication rule $j^2=1$.
Double numbers are often also called split-complex numbers, and have at least 18 different names in the literature (Clifford referred to them as \emph{algebraic motors}, and some other names are \emph{spacetime numbers} and \emph{anormal-complex numbers}).
The dual and double numbers seem to appear in many different fields.
For example, they are used in String Theory \cite{GGP96}, Kinematics \cite{Fischer00}, and Signal Processing \cite{MG09}.
Dual numbers play a role in the theory of schemes (for example, see \cite{Hars83}).
They are used in geometry, and we were originally introduced to them through Kisil's lecture notes on the Erlangen program \cite{Kisil12}.
Double numbers were even used to design algorithms for dating sites \cite{KGG12}.
However, to the best of our knowledge, dual and double numbers were not seriously studied from a combinatorial perspective.
The goal of the current work is to initiate such a combinatorial study.
We study a variant of the sum-product problem for dual and double numbers.
To obtain sum-product results, we also study other combinatorial properties of these number systems.
In particular, we derive variants of the Szemer\'edi--Trotter theorem for dual and double numbers.
Beyond initiating a combinatorial study of dual and double numbers, we believe that our results are also of intrinsic interest to the study of sum-product phenomena.
The sum-product problem seems to have a similar behavior over the reals, over the complex numbers, and over the quaternions.
Results over the reals are usually extended to the complex numbers and to the quaternions.
Surprisingly, the sum-product problem has a significantly different behavior over the dual numbers.
In addition, our main technique is based on a new idea: Using Elekes' sum-product technique several times, each time relying on the previous result in a bootstrapping manner.
\parag{The sum product problem.}
Given a finite set $A\subset \ensuremath{\mathbb R}$, the \emph{sum set} and \emph{product set} of $A$ are respectively defined as
\[ A+A = \{a+a' : a,a'\in A\} \quad \text{ and } \quad AA = \{a\cdot a' : a,a'\in A\}. \]
Erd\H os and Szemer\'edi \cite{ES83} conjectured that for every $\rho>0$, any sufficiently large $A\subset \ensuremath{\mathbb N}$ satisfies
$\max\{|A+A|,|AA|\} =\Omega\left(|A|^{2-\rho}\right)$. (Since ${\varepsilon}$ is already taken and $\delta$ is also used in our analysis, throughout the paper we will use $\rho$ as a small positive real number.)
The problem was later generalized to sets of real numbers, sets of complex numbers, quaternions, finite fields, and more.
The problem remains wide-open for all of these variants.
For the case of $A\subset \ensuremath{\mathbb R}$, in 2009 Solymosi \cite{Soly09} proved the bound $\max\{|A+A|,|AA|\} =\Omega^*\left(|A|^{4/3}\right)$. In the $\Omega^*(\cdot)$-notation, we neglect subpolynomial factors such as $\log |A|$ and $2^{\sqrt{\log|A|}}$. The same holds for the $O^*(\cdot)$-notation and for the $\Theta^*(\cdot)$-notation.
After a series of improvements, the current best bound over $\ensuremath{\mathbb R}$, derived in \cite{Shakan18}, is $\Omega^*\left(|A|^{4/3+5/5277}\right)$.
Similar bounds exist for the complex numbers and for the quaternions (for example, see \cite{BL18,SW17}).
The best known upper bound for all of these variants is $O^*\left(|A|^2\right)$.
\parag{Dual numbers.}
Let $\ensuremath{\mathbb D}$ be the set of dual numbers: The extension of $\ensuremath{\mathbb R}$ with the extra element ${\varepsilon}$ and the rule ${\varepsilon}^2=0$.
We write a number $a\in \ensuremath{\mathbb D}$ as $a_1+{\varepsilon} a_2$.
Imitating the complex numbers, we refer to $a_1$ as the \emph{real} part of $a$, and to $a_2$ as the \emph{imaginary} part of $a$.
Unlike $\ensuremath{\mathbb R}$ and $\ensuremath{\mathbb C}$, the dual numbers do not form a field, since some dual numbers are not invertible.
In particular, a dual number has an inverse if and only if it has a non-zero real part.
Unlike the cases of $\ensuremath{\mathbb R}$, $\ensuremath{\mathbb C}$, and the quaternions, the sum--product conjecture is false over the dual numbers.
For example, consider the set
\[ A = \{ 1 + {\varepsilon} m \in \ensuremath{\mathbb D} \ :\ m\in \ensuremath{\mathbb Z},\ 1\le m \le n\},\]
and note that
\[ A+A = \{ 2+{\varepsilon} m\ :\ m\in \ensuremath{\mathbb Z},\ 2\le m \le 2n\} \quad \text{ and } \quad AA = \{ 1+{\varepsilon} m\ :\ m\in \ensuremath{\mathbb Z},\ 2\le m \le 2n\}. \]
That is, the sizes of both the sum set and product set are linear in $|A|$.
Note that all of the elements of $A$ are invertible, and so are the elements of $A+A$ and of $AA$.
When allowing non-invertible elements, one can have a product set of \emph{size one} and a linear-sized sum set.
However, we are not interested in constructions that are based on non-invertible elements.
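This behavior is easy to check numerically. The following Python sketch (ours, for illustration) represents a dual number $a_1+{\varepsilon} a_2$ as the pair \texttt{(a1, a2)} and confirms that both $|A+A|$ and $|AA|$ are linear in $|A|$ for this construction.
\begin{verbatim}
# Dual numbers as pairs (a1, a2) meaning a1 + a2*eps, with eps^2 = 0.
def d_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def d_mul(a, b):
    # (a1 + a2*eps)(b1 + b2*eps) = a1*b1 + (a1*b2 + a2*b1)*eps
    return (a[0] * b[0], a[0] * b[1] + a[1] * b[0])

n = 1000
A = [(1, m) for m in range(1, n + 1)]          # A = {1 + eps*m : 1 <= m <= n}
sum_set = {d_add(a, b) for a in A for b in A}
prod_set = {d_mul(a, b) for a in A for b in A}
print(len(A), len(sum_set), len(prod_set))     # 1000 1999 1999: both linear in |A|
\end{verbatim}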
It turns out that the maximum number of elements of $A$ that have the same real part plays an important role.
We say that a set $A\subset \ensuremath{\mathbb D}$ has \emph{multiplicity} $k$ if every real number is the real part of at most $k$ elements of $A$.
We usually denote the size of $A$ as $n$ and the multiplicity of $A$ as
$n^{\alpha}$, for some $0\le \alpha \le 1$.
To adapt the above construction to the case of multiplicity $n^\alpha$, we consider the set
\[ A = \{ a_1 + {\varepsilon} a_2 \ :\ a_1,a_2\in \ensuremath{\mathbb Z},\ 1\le a_1 \le n^{1-\alpha}, \ 1\le a_2 \le n^{\alpha}\},\]
and note that
\begin{align*}
A+A &= \{ m_1+{\varepsilon} m_2\ :\ m_1,m_2\in \ensuremath{\mathbb Z},\ 2\le m_1 \le 2n^{1-\alpha},\ 2\le m_2 \le 2n^{\alpha} \},\\
AA &\subset \{ m_1+{\varepsilon} m_2\ :\ m_1,m_2\in \ensuremath{\mathbb Z},\ 1\le m_1 \le n^{2-2\alpha}, \ 2\le m_2 \le 2n \}.
\end{align*}
Note that in this construction $A$ indeed has multiplicity $n^\alpha$.
The size of the sum set is $\Theta(|A|)$ and the size of the product set is $O(|A|^{3-2\alpha})$.
Thus, the sum-product conjecture is false for any $\alpha>1/2$.
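The following Python sketch (ours, for illustration) checks this construction numerically for a moderate $n$ and $\alpha=3/4$; the size of the sum set stays linear in $|A|$ (up to a constant factor), while the exponent $\log|AA|/\log|A|$ is bounded by roughly $3-2\alpha$.
\begin{verbatim}
import math

def d_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def d_mul(a, b):
    # (a1 + a2*eps)(b1 + b2*eps) = a1*b1 + (a1*b2 + a2*b1)*eps
    return (a[0] * b[0], a[0] * b[1] + a[1] * b[0])

n, alpha = 1000, 0.75
r, s = round(n ** (1 - alpha)), round(n ** alpha)
A = [(a1, a2) for a1 in range(1, r + 1) for a2 in range(1, s + 1)]
sum_set = {d_add(a, b) for a in A for b in A}
prod_set = {d_mul(a, b) for a in A for b in A}
print(len(A), len(sum_set), len(prod_set))
# Compare with the upper-bound exponent 3 - 2*alpha = 1.5:
print(math.log(len(prod_set)) / math.log(len(A)))
\end{verbatim}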
On the other hand, we show that $\max \{|A+A|,|AA|\}$ is super-linear in $|A|$ when $\alpha$ is not too large.
Set $\kappa = (39 - \sqrt{721})/20 \approx 0.607$.
\begin{theorem} \label{th:DualSP}
Let $A$ be a set of $n$ dual numbers with multiplicity $n^\alpha$, for some $0\le \alpha<\kappa$.
Then for every $\rho>0$,
\[ \max \left\{|A+A|,|AA|\right\} = \begin{cases}
\Omega^*\left(n^{(4-2\alpha)/3}\right), & \quad 0\le \alpha < 1/8, \\[2mm]
\Omega\left(n^{5/4-\rho}\right), & \quad 1/8\le \alpha<1/3, \\[2mm]
\Omega^*\left(n^{3/2-5\alpha/8}\right), & \quad 1/3\le\alpha<1/2,\\[2mm]
\Omega\left(n^{9/4-39\alpha/16+5\alpha^2/8-\rho}\right), & \quad 1/2\le\alpha<\kappa.
\end{cases}\]
\end{theorem}
Combining Theorem \ref{th:DualSP} with the above construction leads to a surprising observation: When $1/2<\alpha<\kappa$ the bound for the sum-product problem is neither $\Theta^*(n^2)$ nor $\Theta(n)$.
It is hard to guess what the actual value should be.
One possibility is that the sum-product conjecture holds when $\alpha\le 1/2$, and is replaced with $\Theta^*(n^{3-2\alpha})$ when $\alpha>1/2$.
Different bounds in the statement of Theorem \ref{th:DualSP} are obtained using different approaches.
The bound for the range $0\le \alpha < 1/8$ is obtained from a relatively simple adaptation of Solymosi's technique from \cite{Soly09}.
Surprisingly, we were able to obtain a stronger bound when $\alpha\ge 1/8$ by relying on an earlier approach of Elekes \cite{Elekes97}.
While we use Elekes' approach, our technique is significantly more involved, and is the main result of this work.
In addition to having several new steps, we use Elekes' approach several times, each time relying on the result of the previous case.
Our extension of Elekes' technique leads to the bounds of Theorem \ref{th:DualSP} for the range $1/8\le \alpha<1/3$ and also for the range $1/3\le\alpha<1/2$.
This technique breaks down when $\alpha \ge 1/2$.
The bound for the range $1/2\le\alpha<\kappa$ is obtained by a naive approach --- removing elements from $A$ to decrease the multiplicity to $n^{1/2-\rho}$, and then applying the bound of Theorem \ref{th:DualSP} for the range $1/3\le\alpha<1/2$.
\parag{Double numbers.}
Let $\ensuremath{\mathbb S}$ be the set of double numbers: The extension of $\ensuremath{\mathbb R}$ with the extra element $j$ and the rule $j^2=1$.
We write a number $a\in \ensuremath{\mathbb S}$ as $a_1+j a_2$.
As before, we refer to $a_1$ as the \emph{real} part of $a$, and to $a_2$ as the \emph{imaginary} part of $a$.
The double numbers do not form a field, since some double numbers are not invertible.
Unlike the case of the dual numbers, we could not find a counterexample to the sum-product conjecture in the case of double numbers.
To see some other surprising behavior of the double numbers, consider the sets
\[ A = \{ m+j m :\ 1\le m \le n \} \quad \text{ and } \quad B = \{ m'-j m' :\ 1\le m' \le n \}.\]
Note that $AB = \{0\}$. Indeed, since $j^2=1$, we have $(m+jm)(m'-jm') = (mm'-j^2mm') + j(mm'-mm') = 0$.
That is, the product set of two large sets could be of size one.
This example will not be very relevant for us, since it heavily relies on non-invertible elements.
We say that a set $A\subset \ensuremath{\mathbb S}$ has multiplicity $k$ if for every $r\in \ensuremath{\mathbb R}$ at most $k$ elements $a_1 + ja_2\in A$ satisfy $a_1+a_2=r$ and at most $k$ satisfy $a_1-a_2=r$.
Recall that $\kappa = (39 - \sqrt{721})/20 \approx 0.607$.
We derive the following sum-product bound for double numbers.
\begin{theorem} \label{th:DoubleSP}
Let $A$ be a set of $n$ double numbers with multiplicity $n^\alpha$, for some $0\le \alpha<\kappa$.
Then for every $\rho>0$,
\[ \max \left\{|A+A|,|AA|\right\} = \begin{cases}
\Omega^*\left(n^{(4-2\alpha)/3}\right), & \quad 0\le \alpha < 1/8, \\[2mm]
\Omega\left(n^{5/4-\rho}\right), & \quad 1/8\le \alpha<1/3, \\[2mm]
\Omega^*\left(n^{3/2-5\alpha/8}\right), & \quad 1/3\le\alpha<1/2,\\[2mm]
\Omega\left(n^{9/4-39\alpha/16+5\alpha^2/8-\rho}\right), & \quad 1/2\le\alpha<\kappa.
\end{cases}\]
\end{theorem}
While Theorem \ref{th:DoubleSP} contains the same bounds as Theorem \ref{th:DualSP}, the proof of Theorem \ref{th:DoubleSP} is more involved.
In some sense it is easier to study dual numbers than double numbers.
That is why we first prove Theorem \ref{th:DualSP} in Section \ref{sec:Dual}, and then prove Theorem \ref{th:DoubleSP} in Section \ref{sec:Double}.
\parag{Point-line incidences.}
Our sum-product technique requires studying point-line incidences in the plane.
Thus, we first study analogs of the Szemer\'edi--Trotter theorem in $\ensuremath{\mathbb D}^2$ and in $\ensuremath{\mathbb S}^2$.
Given a set $\mathcal P$ of points and a set $\mathcal L$ of lines in $\ensuremath{\mathbb R}^2$, an \emph{incidence} is a pair $(p,\ell)\in \mathcal P \times \mathcal L$ such that the point $p$ is contained in the line $\ell$.
The number of incidences in $\mathcal P\times\mathcal L$ is denoted as $I(\mathcal P,\mathcal L)$.
\begin{theorem}[The Szemer\'edi-Trotter theorem \cite{ST83}] \label{th:ST83}
Let $\mathcal P$ be a set of $m$ points and let $\mathcal L$ be a set of $n$ lines, both in $\ensuremath{\mathbb R}^2$.
Then
\[ I(\mathcal P,\mathcal L)=O\left(m^{2/3}n^{2/3}+m+n\right). \]
\end{theorem}
As shown in \cite{ST12,Toth14,Zahl16}, Theorem \ref{th:ST83} still holds when replacing $\ensuremath{\mathbb R}^2$ with $\ensuremath{\mathbb C}^2$.
A common approach for incidences in the complex plane is to think of $\ensuremath{\mathbb C}^2$ as $\ensuremath{\mathbb R}^4$, obtaining an incidence problem between points and two-dimensional planes.
In particular, the following is a special case of a result of Solymosi and Tao \cite{ST12}.
Consider a point set $\mathcal P$ and a set $\Pi$ of two-dimensional planes, both in $\ensuremath{\mathbb R}^4$.
We say that an incidence $(p,h) \in \mathcal P\times \Pi$ is \emph{generic} if there is no additional incidence $(p,h') \in \mathcal P\times \Pi$ such that $h\cap h'$ is a line.
That is, two planes that form generic incidences with the same point do not have any other intersection points.
\begin{theorem} \label{th:IncR4}
Let $\mathcal P$ be a set of $m$ points and let $\Pi$ be a set of $n$ arbitrary two-dimensional planes, both in $\ensuremath{\mathbb R}^4$.
Then for every $\rho>0$, the number of generic incidences in $\mathcal P\times \Pi$ is
\[ O\left(m^{2/3+\rho}n^{2/3}+m+n\right). \]
\end{theorem}
As we will see below, Theorem \ref{th:ST83} cannot be extended to $\ensuremath{\mathbb D}^2$ and to $\ensuremath{\mathbb S}^2$.
In either case, one can construct a configuration of $m$ points and $n$ lines with $mn$ incidences.
As in the sum-product problem, this maximum number of point-line incidences is controlled by the notion of multiplicity.
We begin with the case of the dual plane.
There are several different ways to define the multiplicity in a point-line incidence problem in $\ensuremath{\mathbb D}^2$, and we only present one example here.
One may use Lemma \ref{le:LineFamilyDual} to obtain other similar results.
Let $\mathcal L$ be a set of $n$ lines in $\ensuremath{\mathbb D}^2$, each defined by an equation of the form $y=ax+b$ with $a,b\in \ensuremath{\mathbb D}$.
We say that $\mathcal L$ has multiplicity $k$ if for every $r\in \ensuremath{\mathbb R}$, at most $k$ lines of $\mathcal L$ satisfy $a_1=r$.
We also let $\mathcal L$ contain any number of lines of the form $x=b$, without this affecting the multiplicity of the set.
(Our incidence bound also holds when allowing $\mathcal L$ to contain $k$ lines of the form $ax=b$ with $a_1=0$.)
\begin{theorem} \label{th:DualST}
Let $\mathcal P$ be a set of $m$ points and let $\mathcal L$ be a set of $n$ lines, both in $\ensuremath{\mathbb D}^2$.
Let $\mathcal L$ have multiplicity $n^\alpha$ for some $0\le \alpha \le 1$.
Then for every $\rho>0$,
\[ I(\mathcal P,\mathcal L)=O\left(m^{2/3+\rho}n^{(2+\alpha)/3}+mn^{\alpha}+n\right). \]
\end{theorem}
As we see in Section \ref{ssec:DualLines}, the term $mn^{\alpha}$ is tight and cannot be removed from the bound of Theorem \ref{th:DualST}.
We can also obtain a construction with $m^{2/3}n^{2/3}$ incidences.
The $\rho$ in the bound is almost certainly redundant.
It is much less clear what the correct dependency on $\alpha$ should be in the term $m^{2/3+\rho}n^{2/3+\alpha/3}$.
We obtain similar incidence results in the double plane $\ensuremath{\mathbb S}^2$.
As in the dual case, there are several different ways to define the multiplicity of a point-line incidence problem in $\ensuremath{\mathbb S}^2$, and we only present one example.
Let $\mathcal L$ be a set of $n$ lines in $\ensuremath{\mathbb S}^2$, each defined by an equation of the form $y=ax+b$ with $a,b\in \ensuremath{\mathbb S}$.
We say that $\mathcal L$ has multiplicity $k$ if for every $r\in \ensuremath{\mathbb R}$, at most $k$ lines of $\mathcal L$ satisfy $a_1+a_2=r$ and at most $k$ such lines satisfy $a_1-a_2=r$.
We also let $\mathcal L$ contain any number of lines of the form $x=b$, without this affecting the multiplicity of the set.
(Our incidence bound still holds also when allowing $\mathcal L$ to contain $k$ lines of the form $ax=b$ with non-invertible $a$.)
\begin{theorem} \label{th:DoubleST}
Let $\mathcal P$ be a set of $m$ points and let $\mathcal L$ be a set of $n$ lines, both in $\ensuremath{\mathbb S}^2$.
Let $\mathcal L$ have multiplicity $n^\alpha$ for some $0\le \alpha \le 1$.
Then for every $\rho>0$,
\[ I(\mathcal P,\mathcal L)=O\left(m^{2/3+\rho}n^{(2+\alpha)/3}+mn^{\alpha}+n\right). \]
\end{theorem}
As in the dual plane, the term $mn^{\alpha}$ is tight and cannot be removed from the bound of Theorem \ref{th:DoubleST}.
We can also obtain a construction with $m^{2/3}n^{2/3}$ incidences.
The Szemer\'edi-Trotter theorem (Theorem \ref{th:ST83}) is considered to have a dual formulation, in the sense that there is a simple combinatorial argument for moving between one formulation and the other.
Given a set of lines $\mathcal L$, we say that a point $p$ is $r$\emph{-rich} if $p$ is incident to at least $r$ lines of $\mathcal L$.
\begin{theorem}[Dual Szemer\'edi-Trotter] \label{th:DualFormST}
Let $\mathcal L$ be a set of $n$ lines in $\ensuremath{\mathbb R}^2$, and let $r$ be a positive integer.
Then the number of $r$-rich points of $\mathcal L$ is $O\left(n^2/r^3 + n/r \right)$.
\end{theorem}
\parag{More about the multiplicities. }
Both for the sum-product problem and for the incidence problem, we established that the dual and double variants behave quite differently than the real, complex, and quaternion cases.
An obvious possible explanation for this difference is that $\ensuremath{\mathbb D}$ and $\ensuremath{\mathbb S}$ are not fields.
But these are not fields only because some degenerate numbers have no inverse.
Moreover, all of our results also hold when all of the numbers in the problem have inverses.
In addition, our definitions of multiplicity are not directly about non-invertible elements.
Instead of restricting the non-invertible elements in $A$, both definitions of multiplicity ask that $A-A$ not contain many non-invertible elements.
Indeed, in the dual case, $a-a'\in \ensuremath{\mathbb D}$ is non-invertible when $a_1=a'_1$.
In the double case, $a-a'\in \ensuremath{\mathbb S}$ is non-invertible when $a_1+a_2=a'_1+a'_2$ or $a_1-a_2=a'_1-a'_2$.
This curious connection between the multiplicity definitions in the dual and double cases might hide a deeper general principle.
In addition, this seems related to a result of Tao \cite[Theorem 5.4]{Tao09}, which holds in a much more general scenario. Vaguely and inaccurately, this result states that a set satisfying $\max\{|A+A|,|AA|\}=\Theta(|A|)$ implies the existence of a linear subspace $V$ of zero-divisors, such that $A$ has a large intersection with a translate of $V$ (see also \cite{KT01}).
This is indeed the situation in our case.
For example, in the dual case $V$ is the line $a_1=0$.
Continuing to expose this hidden principle could potentially be an exciting research front.
\parag{Additional connections to previous sum-product works.}
Similarly to the complex numbers and to the quaternions, one can represent dual and double numbers as matrices.
The standard matrix representations for these numbers are
\[ a_1+{\varepsilon} a_2 = \left[ {\begin{array}{cc}
a_1 & a_2 \\
0 & a_1 \\
\end{array} } \right] \quad \text{ and } \quad a_1+j a_2 = \left[ {\begin{array}{cc}
a_1 & a_2 \\
a_2 & a_1 \\
\end{array} } \right].
\]
%
With these representations, matrix addition and multiplication correspond to the addition and multiplication of dual and double numbers.
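The following short \texttt{numpy} check (ours, for illustration) verifies this correspondence for both multiplication rules.
\begin{verbatim}
import numpy as np

def dual_matrix(a1, a2):
    # a1 + eps*a2  <->  [[a1, a2], [0, a1]]
    return np.array([[a1, a2], [0.0, a1]])

def double_matrix(a1, a2):
    # a1 + j*a2    <->  [[a1, a2], [a2, a1]]
    return np.array([[a1, a2], [a2, a1]])

a1, a2, b1, b2 = 2.0, 3.0, -1.0, 4.0

# (a1 + eps*a2)(b1 + eps*b2) = a1*b1 + eps*(a1*b2 + a2*b1), since eps^2 = 0.
assert np.allclose(dual_matrix(a1, a2) @ dual_matrix(b1, b2),
                   dual_matrix(a1 * b1, a1 * b2 + a2 * b1))

# (a1 + j*a2)(b1 + j*b2) = (a1*b1 + a2*b2) + j*(a1*b2 + a2*b1), since j^2 = 1.
assert np.allclose(double_matrix(a1, a2) @ double_matrix(b1, b2),
                   double_matrix(a1 * b1 + a2 * b2, a1 * b2 + a2 * b1))
\end{verbatim}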
With the above matrix representation, our construction of $A\subset \ensuremath{\mathbb D}$ with $|A+A|=\Theta(|A|)$ and $|AA|=\Theta(|A|)$ corresponds to a construction of Chang \cite{Chang07} for matrices in ${\mathrm{SL}}(2,\ensuremath{\mathbb R})$.
Other papers, such as \cite{SV09,SW17}, study sum-product phenomena for matrices of specific types.
To the best of our knowledge, none of the previous works is relevant to the cases of dual numbers and double numbers.
For example, Theorem 4 of Solymosi and Wong \cite{SW17} depends on the 1-norm of the matrices.
This notion is completely unrelated to our notions of multiplicity.
Recall that when $1/2<\alpha<\kappa$, our sum-product bound for the dual numbers is neither $\Theta^*(n^2)$ nor $\Theta^*(n)$.
A somewhat similar situation was observed before for the sum-product problem in finite fields.
For simplicity, we only consider finite fields $\ensuremath{\mathbb F}_p$ where $p$ is a prime.
Garaev \cite{Garaev08} constructed a set $A\subset \ensuremath{\mathbb F}_p$ such that $|A|=\Theta(p^{1/2})$ and $\max\{|A+A|,|AA|\} =O(|A|^{3/2})$.
On the other hand, as shown in \cite{CKM18}, every set $A\subset \ensuremath{\mathbb F}_p$ with $|A|=O(|A|^{64/117})$ satisfies $\max\{|A+A|,|AA|\} =\Omega(|A|^{39/32})$.
Another elegant argument of Solymosi \cite{Soly05} shows that every finite $A \subset \ensuremath{\mathbb C}$ satisfies $\max\{|A+A|,|AA|\} =\Omega(|A|^{5/4})$.
The last paragraph of that paper states that ``A similar argument works for quaternions and for other hypercomplex
numbers.''
We now briefly discuss how the current work compares with the results of \cite{Soly05}.
A reader who is not familiar with \cite{Soly05} can safely skip this discussion.
The proof in \cite{Soly05} relies on the standard property that $|a\cdot b| = |a|\cdot |b|$ holds for $a,b\in \ensuremath{\mathbb C}$ (in particular, this property is used in Lemma 2.1 of \cite{Soly05}).
When working with dual or double numbers, this absolute value property fails when using the standard definition $|a|=\sqrt{a_1^2+a_2^2}$.
In the case of dual numbers, an alternative definition is $|a|=a_1$, which does maintain the property $|a\cdot b| = |a|\cdot |b|$.
When using this definition, a different part of Lemma 2.1 of \cite{Soly05} fails: The claim that no number is covered by more than seven disks.
A similar situation occurs for the double numbers.
Note that the proof of \cite{Soly05} cannot hold for dual numbers as stated, since otherwise it would contradict the above construction.
We did manage to get a variant of the argument in \cite{Soly05} to hold for dual and double numbers, while depending on the notion of multiplicity (thus also eliminating the contradiction with the dual construction).
Let $A$ be a set of dual or double numbers with multiplicity $n^\alpha$.
In the proof of Lemma 2.1, instead of being covered by at most 7 disks, no number is covered by more than $2n^\alpha$ ``disks''.
Then, in the definition of good sets, one replaces the constant 28 with $8n^\alpha$.
The proof then goes through again, implying the bound $\max\{|A+A|,|AA|\} =\Omega(|A|^{5/4-\alpha/2})$.
It is not difficult to verify that the bounds of Theorems \ref{th:DualSP} and \ref{th:DoubleSP} are stronger for every relevant value of $\alpha$.
\parag{Acknowledgements.}
We would like to thank Misha Rudnev for suggesting this problem, and to Ben Lund, Cosmin Pohoata, and Frank de Zeeuw for helpful discussions.
We would also like to thank J\'ozsef Solymosi --- while he was not even aware of this project, quite a few of his works affected every part of it.
\section{Dual numbers} \label{sec:Dual}
In this section we study dual numbers, and in particular prove Theorem \ref{th:DualSP}.
In Section \ref{ssec:DualLines} we study properties of lines in the dual plane.
We derive a point-line incidence bound in $\ensuremath{\mathbb D}^2$, and study additional properties of such incidences.
In Section \ref{ssec:ElekDual} we adapt Elekes' sum-product technique to the dual numbers.
As mentioned above, we add several additional steps to Elekes' original argument.
In Section \ref{ssec:SolyDual}, we adapt Solymosi's sum-product argument to the dual numbers.
\subsection{Lines in the dual plane} \label{ssec:DualLines}
Recall that we denote by $\ensuremath{\mathbb D}$ the set of dual numbers: The extension of $\ensuremath{\mathbb R}$ with the extra element ${\varepsilon}$ and the rule ${\varepsilon}^2=0$.
We write a number $a\in \ensuremath{\mathbb D}$ as $a_1+{\varepsilon} a_2$.
Multiplication of dual numbers is commutative, and 1 is the unit element.
A dual number $a\in \ensuremath{\mathbb D}$ has an inverse element if and only if $a_1\neq 0$.
The inverse element is then $a^{-1} = \frac{a_1-{\varepsilon} a_2}{a_1^2}$.
Indeed, we have
\[ a\cdot a^{-1} = \frac{(a_1+{\varepsilon} a_2)(a_1-{\varepsilon} a_2)}{a_1^2} = \frac{a_1^2}{a_1^2} = 1.\]
We define a line in $\ensuremath{\mathbb D}^2$ as the set of points on which a linear equation vanishes.
Let $\ell$ be the line defined by $ax+by=c$, where $a,b,c\in \ensuremath{\mathbb D}$.
This corresponds to
\begin{align*}
(a_1+{\varepsilon} a_2) (x_1+{\varepsilon} x_2) + (b_1+{\varepsilon} b_2)(y_1+{\varepsilon} y_2) &= (c_1+{\varepsilon} c_2),
\end{align*}
or equivalently
\begin{align*}
&a_1x_1 \hspace{12.5mm} + b_1y_1 &= c_1, \\
&a_2x_1 + a_1 x_2 + b_2 y_1 + b_1 y_2 &= c_2.
\end{align*}
When $a_1=b_1=0$, the first equation becomes trivial while the second still exists.
In any other case, the two equations are linearly independent.
We can think of $\ensuremath{\mathbb D}^2$ as $\ensuremath{\mathbb R}^4$, and then $\ell$ is either a 2-flat or a hyperplane, depending on whether $a_1=b_1=0$.
We refer to the lines of the latter type as \emph{degenerate lines}.
Note that a line defined by $ax+by=c$ is degenerate if and only if both $a$ and $b$ are non-invertible.
In the real and complex planes, any two lines intersect in at most one point.
In $\ensuremath{\mathbb D}^2$, two lines can have an infinite intersection, even when excluding non-invertible coefficients in the line equations.
For example, consider the set of non-degenerate lines
\[ \mathcal L = \{ (1+m{\varepsilon})x+(1-(m-1){\varepsilon})y = 2+ {\varepsilon} :\ m\in \ensuremath{\mathbb R} \}. \]
It is not difficult to verify that every line of $\mathcal L$ contains every point of the form $(1+a{\varepsilon},1-a{\varepsilon})\in \ensuremath{\mathbb D}^2$ with $a\in \ensuremath{\mathbb R}$.
By taking $n$ lines from $\mathcal L$ and $m$ points of the form $(1+a{\varepsilon},1-a{\varepsilon})\in \ensuremath{\mathbb D}^2$ we get $mn$ incidences.
That is, the point--line incidence problem in $\ensuremath{\mathbb D}^2$ is trivial.
This remains true when excluding degenerate lines, and also when using only invertible numbers in the definitions of the points and the lines.
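For completeness, the computation behind this claim is short: plugging a point $(1+a{\varepsilon},1-a{\varepsilon})$ into the equation of a line of $\mathcal L$ and using ${\varepsilon}^2=0$ gives
\[ (1+m{\varepsilon})(1+a{\varepsilon})+(1-(m-1){\varepsilon})(1-a{\varepsilon}) = \big(1+(m+a){\varepsilon}\big)+\big(1-(m-1+a){\varepsilon}\big) = 2+{\varepsilon}, \]
so every such point indeed lies on every line of $\mathcal L$.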
We now study when collections of lines have an infinite intersection.
For $a\in \ensuremath{\mathbb D}$, denote by $\text{Re}(a)$ the real part of $a$.
That is, $\text{Re}(a_{1}+a_{2}{\varepsilon})=a_{1}$.
\begin{lemma} \label{le:LineFamilyDual}$\quad$ \\
(a) Let $\ell$ and $\ell'$ be distinct lines in $\ensuremath{\mathbb D}^2$, respectively defined by $y=ax+b$ and $y=a'x+b'$.
Then $\ell\cap\ell'$ contains more than one point if and only if $a_1=a'_1$, $b_1=b'_1$, and $a_2\neq a_2'$.
When these conditions are satisfied, $\ell\cap\ell'$ is a line in $\ensuremath{\mathbb R}^4$ and there exist $r_1,r_2\in \ensuremath{\mathbb R}$ such that every point $(x,y)\in \ell\cap\ell'$ satisfies $\text{Re}(x)=r_1$ and $\text{Re}(y)=r_2$. \\
(b) Let $\mathcal L$ be a set of lines of the form $y=ax+b$ that have an infinite common intersection.
Then all of these lines have the same values for $a_1$ and $b_1$, and there exist $r_1,r_2\in \ensuremath{\mathbb R}$ such that every point $(x,y)$ in the infinite intersection satisfies $\text{Re}(x)=r_1$ and $\text{Re}(y)=r_2$.
Moreover, either all the $b_2$ values are identical or there exists $m\in \ensuremath{\mathbb R}$ such that every line satisfies $b_2=r_1(m-a_2)$.
\end{lemma}
\begin{proof} (a) To study the intersection points of the two lines, we combine $y=ax+b$ and $y=a'x+b'$, obtaining $ax+b=a'x+b'$, or equivalently $x(a-a')=b'-b$.
Splitting this equation into real and imaginary parts gives
\begin{align}
x_1(a_1-a'_1) &= b'_1-b_1, \label{eq:realLineInt} \\
x_1(a_2-a'_2) + x_2(a_1-a'_1) &= b'_2-b_2. \nonumber
\end{align}
First assume that $a_1\neq a'_1$.
In this case we can rewrite the above system as
\begin{align*}
x_1 &= (b'_1-b_1)/(a_1-a'_1), \\
x_2 &= (b'_2-b_2-x_1(a_2-a'_2))/(a_1-a'_1).
\end{align*}
Since this system has a unique solution, when $a_1\neq a'_1$ the two lines intersect in a single point.
We next assume that $a_1=a'_1$.
In this case, equation \eqref{eq:realLineInt} implies $b'_1=b_1$.
Then the line equations $y=ax+b$ and $y=a'x+b'$ become
\begin{align*}
y_1 &= a_1x_1 +b_1, \\
y_2 &= a_1x_2+a_2x_1 +b_2, \\
y_2 &= a_1x_2+a'_2x_1 +b'_2.
\end{align*}
Combining the second and third equations of this system gives $a_2x_1 +b_2 = a'_2x_1 +b'_2$.
If $a_2=a'_2$ then the second and third equations of the system imply that either $\ell \cap \ell' = \emptyset$ or $\ell=\ell'$ (depending on whether or not $b_2=b'_2$).
We may thus assume that $a_2\neq a'_2$, to obtain
\[ x_1 = \frac{b'_2-b_2}{a_2-a'_2}, \quad y_1 = a_1\cdot \frac{b'_2-b_2}{a_2-a'_2} +b_1, \quad \text{ and } \quad y_2 = a_1x_2+a_2\cdot \frac{b'_2-b_2}{a_2-a'_2} +b_2. \]
Thus, the intersection $\ell\cap\ell'$ is infinite (it is a line in $\ensuremath{\mathbb R}^4$).
Moreover, all of the points of $\ell\cap\ell'$ have the same real parts $x_1,y_1$.
(b) By part (a), all the lines in $\mathcal L$ have the same values for $a_1$ and $b_1$, and there exist $r_1,r_2\in \ensuremath{\mathbb R}$ such that every point $(x,y)$ in the infinite intersection satisfies $\text{Re}(x)=r_1$ and $\text{Re}(y)=r_2$.
That is, every line of $\mathcal L$ is defined by $y= (a_1+a_2{\varepsilon})x+(b_1+b_2{\varepsilon})$, where $a_1,b_1\in \ensuremath{\mathbb R}$ are fixed and $a_2,b_2\in \ensuremath{\mathbb R}$ change between different lines.
By the proof of part (a), two lines defined by $y= (a_1+{\varepsilon} a_2)x+(b_1+{\varepsilon} b_2)$ and $y= (a_1+{\varepsilon} a'_2)x+(b_1+{\varepsilon} b'_2)$ satisfy
\[ r_1 = \frac{b'_2-b_2}{a_2-a'_2}. \]
To have every pair of lines of $\mathcal L$ satisfy $b'_2-b_2 = r_1(a_2-a'_2)$, either $r_1 =0$ and then all of the $b_2$ values are identical, or there exists $m\in \ensuremath{\mathbb R}$ such that every line satisfies $b_2=r_1(m-a_2)$.
\end{proof}
We are now ready to prove Theorem \ref{th:DualST}.
We first recall the statement of this theorem.
\vspace{2mm}
\noindent {\bf Theorem \ref{th:DualST}.}
\emph{Let $\mathcal P$ be a set of $m$ points and let $\mathcal L$ be a set of $n$ lines, both in $\ensuremath{\mathbb D}^2$.
Let $\mathcal L$ have multiplicity $n^\alpha$ for some $0\le \alpha \le 1$.
Then for every $\rho>0$, }
\[ I(\mathcal P,\mathcal L)=O(m^{2/3+\rho}n^{2/3+\alpha/3}+mn^{\alpha}+n). \]
\begin{proof}
By the definition of multiplicity, the set $\mathcal L$ may contain at most $n^\alpha$ degenerate lines.
Together these lines participate in at most $mn^{\alpha}$ incidences.
Every point of $\mathcal P$ is incident to at most one line of the form $x=b$, so such lines contribute at most $m$ incidences.
It remains to consider incidences with non-degenerate lines of $\mathcal L$ of the form $y=ax+b$.
We discard from $\mathcal L$ the degenerate lines and lines of the form $x=b$.
We can then partition $\mathcal L$ into $n^\alpha$ disjoint subsets $\mathcal L_1,\ldots,\mathcal L_{n^\alpha}$, such that the multiplicity of each $\mathcal L_j$ is one.
For every $1\le j \le n^\alpha$, set $n_j = |\mathcal L_j|$.
Note that $\sum_{j=1}^{n^\alpha} n_j =n$.
Since each subset has multiplicity one, by Lemma \ref{le:LineFamilyDual} every two lines from the same $\mathcal L_j$ intersect in at most one point.
That is, when thinking of $\ensuremath{\mathbb D}^2$ as $\ensuremath{\mathbb R}^4$, the set $\mathcal L_j$ becomes a set of two-dimensional planes, each two intersecting in at most one point. We may thus apply Theorem \ref{th:IncR4} with $\mathcal P$ and $\mathcal L_j$.
Note that in this case every incidence is generic by definition, so Theorem \ref{th:IncR4} gives a bound for the total number of incidences.
By doing that for every $1\le j \le n^\alpha$ and then applying H\"older's inequality, we obtain
\begin{align*}
I(\mathcal P,\mathcal L) &= \sum_{j=1}^{n^\alpha} I(\mathcal P,\mathcal L_j) = \sum_{j=1}^{n^\alpha} O\left(m^{2/3+\rho}n_j^{2/3}+m+n_j\right) \\[2mm]
&= O\left(m^{2/3+\rho}\sum_{j=1}^{n^\alpha} n_j^{2/3}+mn^\alpha+n\right) = O\left(m^{2/3+\rho}n^{2/3+\alpha/3}+mn^\alpha+n\right).
\end{align*}
\end{proof}
For our sum-product results in $\ensuremath{\mathbb D}$, we need additional properties of point-line incidences in $\ensuremath{\mathbb D}^2$.
We define the \emph{real part} of $\ensuremath{\mathbb D}^2$ as the copy of $\ensuremath{\mathbb R}^2$, and say that a point $(a_1+a_2{\varepsilon},b_1+b_2{\varepsilon})$ corresponds to the point $(a_1,b_1)$ in the real part of $\ensuremath{\mathbb D}^2$.
When thinking of $\ensuremath{\mathbb D}^2$ as $\ensuremath{\mathbb R}^4$, the real part of $\ensuremath{\mathbb D}^2$ is the projection of $\ensuremath{\mathbb R}^4$ to the two real coordinates.
Thus, each point in the real part of $\ensuremath{\mathbb D}^2$ has an \emph{imaginary plane} associated with it, which is also a copy of $\ensuremath{\mathbb R}^2$.
For example, the point $(1+2{\varepsilon},3+4{\varepsilon})\in \ensuremath{\mathbb D}^2$ is the point $(2,4)$ in the imaginary plane associated with the point $(1,3)$ in the real part of $\ensuremath{\mathbb D}^2$.
We refer to a set of lines in $\ensuremath{\mathbb D}^2$ with an infinite common intersection as a \emph{line family}.
Let $\mathcal L$ be such a line family.
By Lemma \ref{le:LineFamilyDual}(b), every line of $\mathcal L$ corresponds to the same line in the real part of $\ensuremath{\mathbb D}^2$.
We refer to this line as the \emph{real line} of $\mathcal L$.
By the same lemma, the infinite intersection of the lines of $\mathcal L$ is contained in a single point of the real part of $\ensuremath{\mathbb D}^2$.
That is, this intersection is a line in the imaginary plane associated with a single point $p$ in the real part of $\ensuremath{\mathbb D}^2$.
We say that $p$ is the \emph{special point} of the line family $\mathcal L$.
Let $\ell \subset \ensuremath{\mathbb R}^2$ be a line in the real part of $\ensuremath{\mathbb D}^2$.
Then there could be several line families whose real part is $\ell$.
In addition, a line in $\ensuremath{\mathbb D}^2$ whose real part is $\ell$ can participate in many line families that have $\ell$ as their real line.
For example, the line defined by $y=(1+2{\varepsilon})x+(3+4{\varepsilon})$ has $y=x+3$ as its real line and is part of every family defined by $a_1=1, b_1=3,$ and $b_2 = x_1(m-a_2)$ such that $4= x_1(m-2)$ (see Lemma \ref{le:LineFamilyDual}(b)).
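As a numerical illustration (again not part of the argument), the following Python sketch checks this membership for several choices of $x_1$: for each $x_1\neq 0$ it picks $m$ with $4=x_1(m-2)$ and confirms that the line above and other members of the resulting family pass through a common point whose real part is $(x_1,x_1+3)$.
\begin{verbatim}
# Dual numbers as pairs (real part, eps-part), with eps^2 = 0.
def d_add(a, b): return (a[0] + b[0], a[1] + b[1])
def d_mul(a, b): return (a[0]*b[0], a[0]*b[1] + a[1]*b[0])
def evaluate(line, x):
    a, b = line
    return d_add(d_mul(a, x), b)

base = ((1.0, 2.0), (3.0, 4.0))              # y = (1 + 2 eps) x + (3 + 4 eps)

for x1 in [1.0, 2.0, 4.0, -0.5]:             # one family for every choice of x1
    m = 2.0 + 4.0 / x1                       # chosen so that 4 = x1 (m - 2)
    for a2 in [0.0, 1.0, 3.0]:               # other members of that family
        member = ((1.0, a2), (3.0, x1 * (m - a2)))
        x = (x1, 0.0)                        # a point of the common intersection
        assert evaluate(base, x) == evaluate(member, x)
print("the base line belongs to a different line family for every x1")
\end{verbatim}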
We now study the interaction between line families that have the same real line.
\begin{lemma} \label{le:LineFamProp}
Let $\mathcal L_1$ and $\mathcal L_2$ be two distinct line families in $\ensuremath{\mathbb D}^2$ that correspond to the same real line $\ell$.
Assume that $\ell$ is not parallel to the $y$-axis. \\
(a) If $\mathcal L_1$ and $\mathcal L_2$ have the same special point then they have no lines in common. \\
(b) If $\mathcal L_1$ and $\mathcal L_2$ have different special points then they have at most one line in common.
\end{lemma}
\begin{proof}
As before, we define a line in $\ensuremath{\mathbb D}^2$ using the equation $y=(a_1+a_2{\varepsilon})x + (b_1+b_2{\varepsilon})$.
Since the line families $\mathcal L_1$ and $\mathcal L_2$ have the same real part, every line in these families has the same values of $a_1$ and $b_1$.
(a) Denote the common special point as $(x_1,y_1)\in\ensuremath{\mathbb R}^2$.
As shown in the proof of Lemma \ref{le:LineFamilyDual}(b), every two lines $y=ax+b$ and $y=a'x+b'$ from the same family satisfy a relation of the form $b'_2-b_2 = x_1(a_2-a'_2)$.
First assume that $x_1=0$.
In this case, every line of $\mathcal L_1$ has the same $b_2$, and so does every line of $\mathcal L_2$.
The two values of $b_2$ are distinct, since otherwise $\mathcal L_1$ and $\mathcal L_2$ would have been the same family.
Since no line can have two different values of $b_2$, the two line families are disjoint.
We now assume that $x_1\neq 0$.
Then there exist $m_1,m_2\in \ensuremath{\mathbb R}$ such that every line of $\mathcal L_1$ satisfies $b_2=x_1(m_1-a_2)$ and every line of $\mathcal L_2$ satisfies $b_2=x_1(m_2-a_2)$.
If a line satisfies both requirements, we obtain that $x_1(m_1-a_2)=x_1(m_2-a_2)$ and thus $m_1=m_2$.
This is impossible, since it implies that the two line families are identical.
We conclude that no line can be in both families.
(b) Denote the special point of $\mathcal L_1$ as $(x_1,y_1)\in \ensuremath{\mathbb R}^2$ and the special point of $\mathcal L_2$ as $(x'_1,y'_1)\in \ensuremath{\mathbb R}^2$.
Since these are distinct points on the line $\ell$ that is not parallel to the $y$-axis, we have $x_1 \neq x'_1$.
A line that is in both $\mathcal L_1$ and $\mathcal L_2$ satisfies $b_2=x_1(m_1-a_2)$ and $b_2=x'_1(m_2-a_2)$.
Since $x_1 \neq x'_1$ this system has at most one solution for the values of $a_2$ and $b_2$, implying that at most one line is in both families.
\end{proof}
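For a concrete illustration of part (b), the following Python sketch parameterizes two families over the real line $y=x+3$ with special points $(1,4)$ and $(2,5)$, using the parametrization $b_2=x_1(m-a_2)$ from Lemma \ref{le:LineFamilyDual}(b); the values $m_1=7$ and $m_2=5$ are arbitrary choices. It confirms that the families share exactly one line.
\begin{verbatim}
# Each family is recorded by the (a2, b2) pairs of its lines (a1 = 1, b1 = 3
# are fixed); only finitely many integer slopes a2 are sampled.
family1 = {(a2, 1.0 * (7.0 - a2)) for a2 in range(-20, 21)}   # x1 = 1, m1 = 7
family2 = {(a2, 2.0 * (5.0 - a2)) for a2 in range(-20, 21)}   # x1 = 2, m2 = 5
common = family1 & family2
assert len(common) == 1
print(common)   # the single shared line has a2 = 3 and b2 = 4
\end{verbatim}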
\subsection{Adapting Elekes' argument to dual numbers} \label{ssec:ElekDual}
We are now ready to present our main proof for dual numbers.
We first repeat the relevant part of Theorem \ref{th:DualSP}.
\begin{theorem} \label{th:DualMainCase}
Let $A$ be a set of $n$ dual numbers with multiplicity $n^{\alpha}$, for some $0 \le \alpha < 1/2$.
Then for any $\rho>0$,
\[ \max\{|A+A|,|AA|\} = \begin{cases}
\Omega^*\left(n^{3/2-5\alpha/8}\right), & \quad 1/3\le\alpha<1/2,\\
\Omega\left(n^{5/4-\rho}\right), & \quad 0\le \alpha<1/3.
\end{cases} \]
\end{theorem}
\begin{proof}
By the multiplicity assumption, $A$ contains at most $n^\alpha$ non-invertible elements.
We discard these elements.
This does not change the asymptotic size of $A$ and can only decrease the sizes of $A+A$ and $AA$.
Thus, it suffices to prove the bound for the resulting smaller set.
Abusing notation, in the rest of the proof we refer to this revised set as $A$.
Consider the point set
\[ \mathcal P = \{ (c,d)\in\ensuremath{\mathbb D}^2 :\, c\in A+A \quad \text{ and } \quad d\in AA \}, \]
and the set of lines
\[ \mathcal L = \{ y=c(x-d) :\, c,d\in A \}. \]
Note that $|\mathcal L| =n^2$ and $|\mathcal P| = |A+A|\cdot|AA|$.
Since the revised $A$ consists of invertible elements, there are no degenerate lines in $\mathcal L$.
The proof is based on double counting $I(\mathcal P,\mathcal L)$.
A line of $\mathcal L$ defined by $y=c(x-d)$ contains every point of $\mathcal P$ of the form $(d+b,cb)$ for every $b\in A$.
That is, we have that $I(\mathcal P,\mathcal L) \ge |\mathcal L||A| = n^3$.
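The following Python sketch (purely illustrative; the small set $A$ is an arbitrary choice) builds $\mathcal P$ and $\mathcal L$ for dual numbers represented as pairs of reals and verifies the lower bound $I(\mathcal P,\mathcal L)\ge n^3$ by brute force.
\begin{verbatim}
from itertools import product

# Dual numbers as pairs (real part, eps-part), with eps^2 = 0.
def d_add(a, b): return (a[0] + b[0], a[1] + b[1])
def d_sub(a, b): return (a[0] - b[0], a[1] - b[1])
def d_mul(a, b): return (a[0]*b[0], a[0]*b[1] + a[1]*b[0])

A = [(1, 0), (2, 1), (3, -1), (5, 2)]              # a small set of invertible dual numbers
sums  = {d_add(a, b) for a, b in product(A, A)}    # A + A
prods = {d_mul(a, b) for a, b in product(A, A)}    # AA
points = set(product(sums, prods))                 # the point set (A+A) x (AA)
lines  = list(product(A, A))                       # (c, d) encodes the line y = c(x - d)

incidences = sum(1 for (x, y) in points for (c, d) in lines
                 if d_mul(c, d_sub(x, d)) == y)    # brute-force incidence count
n = len(A)
print(incidences, ">=", n**3, ":", incidences >= n**3)
\end{verbatim}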
For the rest of the proof we will derive upper bounds on $I(\mathcal P,\mathcal L)$.
We partition the incidences in $\mathcal P\times\mathcal L$ into two types, as follows.
We say that an incidence $(p,\ell)\in \mathcal P\times\mathcal L$ is \emph{special} if $p$ is incident to a second line $\ell'\in \mathcal L$ such that $\ell$ and $\ell'$ are members of the same line family.
Since every point in the common intersection of lines from the same family has the family's special point as its real part, the special point of this family is $(\text{Re}(p_x),\text{Re}(p_y))\in \ensuremath{\mathbb R}^2$.
If an incidence $(p,\ell)\in \mathcal P\times\mathcal L$ is not special, we say that it is a \emph{standard} incidence.
We first bound the number of standard incidences.
By considering $\ensuremath{\mathbb D}^2$ as $\ensuremath{\mathbb R}^4$, we obtain an incidence problem with two-dimensional flats.
If $(p,\ell_1)$ and $(p,\ell_2)$ are standard incidences, then $\ell_1\cap \ell_2 = \{p\}$.
These are regular incidences, as defined before Theorem \ref{th:IncR4}.
By that theorem, the number of standard incidences is $O\left(|\mathcal P|^{2/3+\rho}|\mathcal L|^{2/3}+|\mathcal P|+|\mathcal L|\right)$.
When the number of standard incidences is larger than the number of special incidences, we have that
\[ I(\mathcal P,\mathcal L) = O\left(|\mathcal P|^{2/3+\rho}|\mathcal L|^{2/3}+|\mathcal P|+|\mathcal L|\right) = O\left(|A+A|^{2/3+\rho}|AA|^{2/3+\rho}n^{4/3}+ |A+A||AA|\right). \]
Combining this with $I(\mathcal P,\mathcal L) \ge n^3$ leads to $|A+A||AA| = \Omega(n^{5/2-\rho})$.
This immediately implies the assertion of the theorem, for any $0\le \alpha <1/2$.
\parag{Handling special incidences.}
It remains to consider the case where the number of special incidences is larger than the number of standard incidences.
Denote by $I(\alpha',\beta,\gamma,\delta)$ the number of special incidences $(p,\ell)\in \mathcal P\times \mathcal L$ that satisfy:
\begin{itemize}
\item Let $\ell_\ensuremath{\mathbb R}$ be the line in the real part of $\ensuremath{\mathbb D}^2$ that corresponds to $\ell$. Then $\ell_\ensuremath{\mathbb R}$ corresponds to at least $n^{2\alpha'}$ lines of $\mathcal L$ and to fewer than $2n^{2\alpha'}$ such lines.
\item There is a line family that contains $\ell$ whose special point is $(\text{Re}(p_x),\text{Re}(p_y))$, and that contains at least $n^\beta$ and fewer than $2n^\beta$ lines of $\mathcal L$.
\item The real point $(\text{Re}(p_x),\text{Re}(p_y))$ corresponds to at least $n^\gamma$ points of $\mathcal P$ and fewer than $2n^\gamma$ such points.
\item The real point $(\text{Re}(p_x),\text{Re}(p_y))$ is the special point of at least $n^\delta$ and fewer than $2n^\delta$ line families that satisfy the property stated in the second item.
\end{itemize}
Note that we can take $\Theta(\log^4 n)$ elements $I(\alpha',\beta,\gamma,\delta)$ such that every special incidence in $\mathcal P\times\mathcal L$ is counted in at least one of those elements.
Thus, the number of special incidences is upper bounded by the maximum size of $I(\alpha',\beta,\gamma,\delta)$ times $\Theta(\log^4 n)$.
We study some basic properties of the parameters $\alpha',\beta,\gamma,\delta$ that maximize $I(\alpha',\beta,\gamma,\delta)$.
For this purpose, we assume that $\alpha',\beta,\gamma,\delta$ are fixed.
We denote by $S$ the set of special points that participate in incidences of $I(\alpha',\beta,\gamma,\delta)$.
Let $T$ be the set of lines in the real part of $\ensuremath{\mathbb D}^2$ that correspond to lines of $\mathcal L$ that participate in incidences of $I(\alpha',\beta,\gamma,\delta)$.
Let $F$ be the set of line families that contain at least $n^\beta$ lines of $\mathcal L$ and fewer than $2n^{\beta}$ such lines.
Since $|\mathcal L|=n^2$ and every line of $T$ corresponds to $\Theta(n^{2\alpha'})$ lines of $\mathcal L$, we get that $|T| = O(n^{2-2\alpha'})$.
By the multiplicity of $A$ and the definition of $\mathcal L$, at most $n^{2\alpha}$ lines of $\mathcal L$ can correspond to the same line of $T$.
That is, we have $0\le \alpha'\le \alpha$.
We also have that $\beta \le 2\alpha'$, since otherwise there are not enough lines corresponding to a real line to create a family in $F$.
We consider the maximum number of lines from $\mathcal L$ that a line family can contain.
Recall that a line in $\ensuremath{\mathbb D}^2$ is defined by an equation of the form $y=(a_1+{\varepsilon} a_2) x + (b_1+{\varepsilon} b_2)$, and that a line in $\mathcal L$ is defined by an equation of the form $y=c(x-d)$ with $c,d\in A$.
By Lemma \ref{le:LineFamilyDual}(b), all the lines in the same family have the same $a_1$ and $b_1$ values, so the real parts of $c$ and $d$ are fixed.
By the same lemma, either all the lines in a family have the same $b_2$ value, or they all satisfy a relation of the form $b_2 = x_1(m-a_2)$.
In either case, choosing the imaginary part of $c$ uniquely determines the imaginary part of $d$.
Due to the multiplicity of $A$, the family has at most $n^\alpha$ lines from $\mathcal L$.
Since any line family contains at most $n^\alpha$ lines of $\mathcal L$, we have that $0 \le \beta \le \alpha$.
Since the multiplicity of $A$ is $n^\alpha$, at most $n^{1+\alpha}$ sums in $A+A$ can have the same real part.
Similarly, at most $n^{1+\alpha}$ products in $AA$ can have the same real part.
Since $\mathcal P= (A+A)\times (AA)$, at most $n^{2+2\alpha}$ points of $\mathcal P$ can correspond to the same point in the real part of $\ensuremath{\mathbb D}^2$.
That is, $0 \le \gamma \le 2+2\alpha$.
We also have the straightforward bound $n^\gamma \le |\mathcal P|$, or equivalently $\gamma \le (\log |\mathcal P|)/\log n$.
Next, we consider the maximum number of line families of $F$ that can have the same special point.
Recall that the lines of $\mathcal L$ are defined as $y=c(x-d)$ where $c,d\in A$.
For every choice of $c$ and $s\in S$, there is a unique real part of $d$ such that the real part of the resulting line is incident to $s$.
That is, for a fixed special point and $c\in A$, there are at most $n^\alpha$ elements $d\in A$ such that the resulting line is incident to the special point.
By Lemma \ref{le:LineFamProp}(a), if two families have the same real line and the same special point, then they have no lines in common.
This yields $0 \le \delta \le 1+\alpha-\beta$.
To recap:
\begin{align*}
0\le \alpha', &\beta \le \alpha, \qquad \beta \le 2\alpha', \qquad 0 \le \delta \le 1+\alpha-\beta, \\[2mm]
0 &\le \gamma \le \min\left\{2+2\alpha, (\log |\mathcal P|)/\log n\right\}.
\end{align*}
We next bound the number of families in $F$.
Recall that $|T|=O(n^{2-2\alpha'})$, and that each line of $T$ corresponds to fewer than $2n^{2\alpha'}$ lines of $\mathcal L$.
For a fixed line $\ell\in T$, by Lemma \ref{le:LineFamProp} every two families corresponding to $\ell$ have at most one line in common.
There are fewer than $\binom{2n^{2\alpha'}}{2} = \Theta(n^{4\alpha'})$ pairs of lines of $\mathcal L$ that correspond to $\ell$.
Each such pair can appear in at most one line family, and each line family subsumes at least $\binom{n^{\beta}}{2} = \Theta(n^{2\beta})$ such pairs.
Thus, the number of families that correspond to $\ell$ is $O(n^{4\alpha'-2\beta})$.
By summing up over every $\ell\in T$, we obtain that $|F| = O(n^{2+2\alpha'-2\beta})$.
We derive several upper bounds for $|S|$:
\begin{itemize}
\item Since each special point corresponds to $\Theta(n^\gamma)$ points of $\mathcal P$, we have $|S| = O(|\mathcal P|/n^\gamma)$.
\item Since $|F| = O(n^{2+2\alpha'-2\beta})$, and each special point subsumes $\Theta(n^{\delta})$ families of $F$, we obtain $|S| = O(n^{2+2\alpha'-2\beta-\delta})$.
\item Given a point $s\in S$, by Lemma \ref{le:LineFamProp}(a) each line of $T$ corresponds to $O(n^{2\alpha'-\beta})$ families that have $s$ as their special point. Thus, every point of $S$ is incident to $\Omega(n^{\delta-2\alpha'+\beta})$ lines of $T$. Recalling that $|T|=O(n^{2-2\alpha'})$, Theorem \ref{th:DualFormST} implies that
\[ |S|=O\left(\frac{(n^{2-2\alpha'})^2}{(n^{\delta-2\alpha'+\beta})^3} + \frac{n^{2-2\alpha'}}{n^{\delta-2\alpha'+\beta}}\right) = O\left(n^{4+2\alpha'-3\delta-3\beta} + n^{2-\delta-\beta}\right).\]
\end{itemize}
Consider the imaginary plane associated with a special point $s\in S$.
There are $\Theta(n^\delta)$ families incident to $s$, each corresponding to a distinct line in the imaginary plane of $s$.
There are $\Theta(n^\gamma)$ points of $\mathcal P$ in this imaginary plane.
By the Szemer\'edi--Trotter theorem (Theorem \ref{th:ST83}), the number of incidences between these points and lines is $O\left(n^{2(\delta+\gamma)/3}+n^\delta+n^\gamma\right)$.
Since each line in the imaginary plane corresponds to $\Theta(n^\beta)$ lines of $\mathcal L$, the number of special incidences associated with the special point $s$ is
\begin{equation} \label{eq:IncInSpecialPnt}
O\left(n^\beta\left(n^{2(\delta+\gamma)/3}+n^\delta+n^\gamma\right)\right).
\end{equation}
To obtain an upper bound for $I(\alpha',\beta,\gamma,\delta)$, we can multiply \eqref{eq:IncInSpecialPnt} with any of our three upper bounds for $|S|$.
Then, to obtain an upper bound on the total number of incidences, we can multiply the resulting bound for $I(\alpha',\beta,\gamma,\delta)$ with $\Theta(\log^4 n)$.
We divide the rest of the analysis into cases, according to the term that dominates the inner parentheses in \eqref{eq:IncInSpecialPnt}.
\parag{The case where $n^{2(\delta+\gamma)/3}$ dominates.}
We first assume that $n^{2(\delta+\gamma)/3}$ is larger than the other two terms in the inner parentheses of \eqref{eq:IncInSpecialPnt}.
This case occurs when $\gamma/2 \le \delta \le 2\gamma$.
Using the bound $|S| = O(n^{2+2\alpha'-2\beta-\delta})$, we obtain that the number of special incidences is
\[ O\left(n^\beta\cdot n^{2(\delta+\gamma)/3} \cdot n^{2+2\alpha'-2\beta-\delta}\cdot \log^4 n\right) = O^*\left(n^{2+2\alpha'-\beta-\delta/3 + 2\gamma/3}\right).\]
Since we assume that the number of special incidences is larger than the number of standard incidences, the above is also a bound for the total number of incidences.
Combining this bound with $I(\mathcal P,\mathcal L) \ge n^3$ gives
\[ 2+2\alpha'-\beta-\delta/3 + 2\gamma/3 \ge 3, \quad \text{ or equivalently } \quad 2\alpha'-\beta-\delta/3 + 2\gamma/3 \ge 1. \]
Multiplying both sides by 3 and rearranging gives
\begin{equation} \label{eq:Case1Boots}
3-6\alpha'+3\beta+\delta - 2\gamma \le 0.
\end{equation}
We repeat the above argument with the different bound $|S|= O\left(n^{4+2\alpha'-3\delta-3\beta} + n^{2-\delta-\beta}\right)$.
In this case we get that the number of incidences is
\begin{align}
&O\left(n^\beta\cdot n^{2(\delta+\gamma)/3} \left(n^{4+2\alpha'-3\delta-3\beta} + n^{2-\delta-\beta}\right)\cdot\log^4n\right) \nonumber \\[2mm]
&\hspace{58mm}= O^*\left( n^{4+2\alpha'-7\delta/3-2\beta + 2\gamma/3} + n^{2-\delta/3+2\gamma/3}\right). \label{eq:TwoCases}
\end{align}
We split the current case into two additional cases, according to the dominating term in the above bound.
(i) When \eqref{eq:TwoCases} is dominated by the first term, combining it with $I(\mathcal P,\mathcal L) \ge n^3$ gives
\[ 4+2\alpha'-7\delta/3-2\beta + 2\gamma/3 \ge 3, \quad \text{ or equivalently } \quad 3+6\alpha'-7\delta-6\beta + 2\gamma \ge 0. \]
Combining this with \eqref{eq:Case1Boots} gives
\[ 0\ge (3-6\alpha'+3\beta+\delta - 2\gamma) - (3+6\alpha'-7\delta-6\beta + 2\gamma) = -12\alpha'+9\beta+8\delta-4\gamma.\]
Dividing by 12 gives $0\ge -\alpha'+3\beta/4+2\delta/3-\gamma/3$.
We next use the bound $|S| = O(|\mathcal P|/n^\gamma)$ to obtain that the number of incidences is
\begin{equation} \label{eq:SpecialIncPts}
O\left(n^\beta\cdot n^{2(\delta+\gamma)/3} \cdot |\mathcal P|n^{-\gamma} \cdot \log^4n\right) = O^*\left( n^{\beta+2\delta/3-\gamma/3} \cdot |\mathcal P|\right).
\end{equation}
Combining this with $I(\mathcal P,\mathcal L) \ge n^3$, and then applying $0\ge -\alpha'+3\beta/4+2\delta/3-\gamma/3$ and $\alpha',\beta\le \alpha$ yields
\begin{align*}
|A+A|\cdot |AA| = |\mathcal P| &= \Omega^*\left(n^{3-(\beta+2\delta/3-\gamma/3)}\right) \\[2mm]
&= \Omega^*\left(n^{3-(\beta+2\delta/3-\gamma/3) + (-\alpha'+3\beta/4+2\delta/3-\gamma/3)}\right) \\[2mm]
&= \Omega^*\left(n^{3-\beta/4 -\alpha'}\right) = \Omega^*\left(n^{3-5\alpha/4}\right).
\end{align*}
This immediately implies $\max\{|A+A|,|AA|\} = \Omega^*\left(n^{3/2-5\alpha/8}\right)$.
(ii) We next consider the case where the incidence bound \eqref{eq:TwoCases} is dominated by the term $n^{2-\delta/3+2\gamma/3}$.
Combining this with $I(\mathcal P,\mathcal L) \ge n^3$ gives
\[ 2-\delta/3+2\gamma/3 \ge 3, \quad \text{ or equivalently } \quad 1/2+\delta/6-\gamma/3\le 0.\]
In this case we still have the bound \eqref{eq:SpecialIncPts} for the number of incidences.
Combining \eqref{eq:SpecialIncPts} with $I(\mathcal P,\mathcal L) \ge n^3$, and then applying $1/2+\delta/6-\gamma/3\le 0$, $\delta\le 1+\alpha-\beta$, and $\beta\le \alpha$ yields
\begin{align*}
|A+A|\cdot |AA| = |\mathcal P| &= \Omega^*\left(n^{3-(\beta+2\delta/3-\gamma/3)}\right) \\[2mm]
&= \Omega^*\left(n^{3-(\beta+2\delta/3-\gamma/3) + (1/2+\delta/6-\gamma/3)}\right) = \Omega^*\left(n^{7/2-\beta -\delta/2}\right)\\[2mm]
&= \Omega^*\left(n^{7/2-\beta -(1+\alpha-\beta)/2}\right) = \Omega^*\left(n^{3-\alpha/2-\beta/2}\right) =\Omega^*\left(n^{3-\alpha}\right).
\end{align*}
Similarly to the previous case, this implies $\max\{|A+A|,|AA|\} = \Omega^*\left(n^{3/2-\alpha/2}\right)$.
\parag{The cases where $n^{\delta}$ or $n^{\gamma}$ dominate.}
Assume that $n^\delta$ is larger than the other two terms in the inner parentheses of \eqref{eq:IncInSpecialPnt}.
This happens when $\delta>2\gamma$.
Using the bound $|S| = O(n^{2+2\alpha'-2\beta-\delta})$, we get that the number of incidences is
\[ O\left(n^{2+2\alpha'-2\beta-\delta} \cdot \left(n^\beta\cdot n^\delta\right)\cdot \log^{4}n\right) = O^*\left(n^{2+2\alpha'-\beta} \right). \]
Combining this with $I(\mathcal P,\mathcal L) \ge n^3$ implies that $2+2\alpha'-\beta \ge 3$, or equivalently $\alpha' \ge (1+\beta)/2$.
This in turn implies that $\alpha \ge \alpha' \ge (1+\beta)/2 \ge 1/2$.
Since this contradicts the assumption concerning $\alpha$, we conclude that $n^\delta$ cannot dominate the inner parentheses of \eqref{eq:IncInSpecialPnt}.
Finally, assume that $n^\gamma$ is larger than the other two terms in the inner parentheses of \eqref{eq:IncInSpecialPnt}.
This happens when $\gamma>2\delta$.
By using the bound $|S| = O(|\mathcal P|/n^\gamma)$ we get that the number of incidences is
\[ O\left(|\mathcal P|n^{-\gamma} \cdot \left(n^\beta\cdot n^\gamma\right)\cdot \log^{4}n\right) = O^*\left(|\mathcal P|n^\beta\right). \]
Combining this with $I(\mathcal P,\mathcal L) \ge n^3$ implies
\[ |A+A|\cdot|AA| = |\mathcal P| = \Omega^*(n^{3-\beta}) = \Omega^*(n^{3-\alpha}).\]
This immediately implies $\max\{|A+A|,|AA|\} = \Omega^*\left(n^{3/2-\alpha/2}\right)$.
Going over the cases that occur when the number of special incidences is larger, we note that the weakest bound obtained is $\max\{|A+A|,|AA|\} = \Omega^*\left(n^{3/2-5\alpha/8}\right)$.
To complete the proof, for each value of $\alpha$ we take the weaker of the two bounds: the one obtained when there are more standard incidences and the one obtained when there are more special incidences.
\end{proof}
\parag{Remark.} It may at first seem surprising that in our analysis of special incidences we obtain bounds such as $\max\{|A+A|,|AA|\} = \Omega^*\left(n^{3/2-5\alpha/8}\right)$.
In particular, when $\alpha=0$ there is no multiplicity and each family consists of a single line, so one might expect to get the standard Elekes bound of $\Omega(n^{5/4})$.
The reason for obtaining a stronger bound is our assumption that each family has a single special point.
Thus, when setting $\alpha=0$, we force each line to form an incidence with at most one point.
It is not surprising that we get a stronger bound under such a strong assumption.
\vspace{2mm}
We next prove the bound of Theorem \ref{th:DualSP} for the case where $1/2 \le \alpha < \kappa$.
Recall that $\kappa = (39 - \sqrt{721})/20 \approx 0.607$.
\begin{corollary} \label{co:DualLargeAlpha}
Let $A$ be a set of $n$ dual numbers with multiplicity $n^\alpha$, for some $1/2\le \alpha<\kappa$.
Then for any $\rho>0$,
\[ \max \left\{|A+A|,|AA|\right\} = \Omega\left(n^{9/4-39\alpha/16+5\alpha^2/8-\rho}\right). \]
\end{corollary}
\begin{proof}
Consider a sufficiently small $\rho'>0$.
We remove elements from $A$ until it has multiplicity $n^{1/2-\rho'}$.
This yields a subset $A'\subset A$ of size $\Omega\left(n^{1-(\alpha+\rho'-1/2)}\right) = \Omega\left(n^{3/2-\alpha-\rho'}\right)$.
Applying Theorem \ref{th:DualSP} to $A'$ with multiplicity $n^{1/2-\rho'}$, and assuming that $\rho'$ is sufficiently small, leads to
\begin{align*}
\max \left\{|A+A|,|AA|\right\} &\ge \max \left\{|A'+A'|,|A'A'|\right\} = \Omega^*\left(|A'|^{3/2-5\alpha/8}\right) \\[2mm]
&= \Omega^*\left(n^{(3/2-5\alpha/8)(3/2-\alpha-\rho')}\right) = \Omega\left(n^{9/4-39\alpha/16+5\alpha^2/8-\rho}\right).
\end{align*}
Finally, $9/4-39\alpha/16+5\alpha^2/8>1$ when $\alpha<\kappa$.
\end{proof}
\subsection{Adapting Solymosi's argument to dual numbers} \label{ssec:SolyDual}
For any $a,a'\in \ensuremath{\mathbb D}$ we have $\text{Re}(a \cdot a') = \text{Re}(a) \cdot \text{Re}(a')$.
When $a'$ is invertible, we also have $\text{Re}(a/a') = \text{Re}(a)/\text{Re}(a')$.
For $\lambda \in \ensuremath{\mathbb R}$, we define
\begin{align*}
r_{A}^{\times}(\lambda)&=\left|\left\{ (a,a')\in A^{2}\ :\ \text{Re}(a\cdot a')=\lambda\right\} \right|, \\[2mm]
r_{A}^{\div}(\lambda)&=\left|\left\{ (a,a')\in A^{2}\ :\ \text{Re}(a/a')=\lambda\right\} \right|.
\end{align*}
In other words, $r_{A}^{\times}(\lambda)$ is the number of ways to obtain $\lambda$ as the real part of a product of two elements of $A$, and similarly for $r_{A}^{\div}(\lambda)$.
For a finite set $A \subset \ensuremath{\mathbb D}$, we define the \emph{multiplicative energy} of $A$ as
\[ E^\times (A) = \left|\left\{ (a,b,c,d)\in A^4\ :\ a\cdot b = c \cdot d \right\} \right|.\]
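The following Python sketch (illustrative only; the tiny set $A$ is an arbitrary choice) computes $r_A^\times$, $r_A^\div$, and $E^\times(A)$ by brute force for dual numbers represented as pairs of reals.
\begin{verbatim}
from collections import Counter
from itertools import product

# Dual numbers as pairs (real part, eps-part), with eps^2 = 0.
def d_mul(a, b): return (a[0]*b[0], a[0]*b[1] + a[1]*b[0])
def d_div(a, b):                                  # requires Re(b) != 0
    inv = (1.0 / b[0], -b[1] / (b[0] * b[0]))
    return d_mul(a, inv)

A = [(1.0, 0.0), (2.0, 1.0), (4.0, -1.0)]

r_mul = Counter(d_mul(a, b)[0] for a, b in product(A, A))    # Re(a * a')
r_div = Counter(d_div(a, b)[0] for a, b in product(A, A))    # Re(a / a')
E = sum(1 for a, b, c, d in product(A, repeat=4) if d_mul(a, b) == d_mul(c, d))

AA = {d_mul(a, b) for a, b in product(A, A)}
print("r^x:", dict(r_mul))
print("r^div:", dict(r_div))
print("E^x(A) =", E, ">=", len(A)**4 / len(AA))              # Cauchy-Schwarz lower bound
\end{verbatim}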
We are now ready to adapt Solymosi's sum-product argument \cite{Soly09} to sets of dual numbers.
\begin{theorem} \label{th:SolyDual}
Let $A$ be a set of $n$ dual numbers with multiplicity $n^{\alpha}$, for some $0\le \alpha<1/2$.
Then
\[ \max\{|A+A|,|AA|\}=\Omega^*\left(n^{(4-2\alpha)/3}\right). \]
\end{theorem}
\begin{proof}
By assumption, $A$ may contain up to $n^\alpha$ non-invertible elements.
We discard these elements without changing the asymptotic size of $|A|$.
If at least half of the elements of $A$ have a positive real part, we discard from $A$ elements with a negative real part.
Otherwise, we discard from $A$ the elements that have a positive real part and multiply the remaining elements by $-1$.
In either case, all the elements of the revised set have a positive real part.
The asymptotic size of $A$ is unchanged and the sizes of $A+A$ and $AA$ can only decrease.
Thus, it suffices to derive a lower bound for $\max\{|A+A|,|AA|\}$ for the revised $A$.
Abusing notation, we still refer to this set as $A$ and its size as $n$.
Since each pair $(a,a')\in A^{2}$ is counted in exactly one of the quantities $r_{A}^{\div}(\lambda)$, we have
\[ \sum_{\lambda\in\text{Re}(A/A)}r_{A}^{\div}(\lambda)=n^{2}. \]
If $(a_{1},a_{2},a_{3},a_{4})\in A^{4}$ satisfies $a_{1}a_{2}=a_{3}a_{4},$ then $\text{Re}(a_{1}/a_{3})=\text{Re}(a_{4}/a_{2})$.
This implies that
\[ E^{\times}(A)\le\sum_{\lambda\in\text{Re}(A/A)}r_{A}^{\div}(\lambda)^{2}. \]
Using dyadic decomposition, we partition this sum into
\[ E^{\times}(A)\le\sum_{m=0}^{\log n-1}\sum_{\substack{\lambda\in\text{Re}(A/A)\\
2^{m}\le\r{\lambda}<2^{m+1}}}r_{A}^{\div}(\lambda)^{2}. \]
This implies that there exists $0\le m<\log n$ such that
\[ \sum_{\substack{\lambda\in\text{Re}(A/A)\\ 2^m\le\r{\lambda}<2^{m+1}}}r_{A}^{\div}(\lambda)^{2}\ge\frac{E^{\times}(A)}{\log n}. \]
We set $\Lambda=\left\{ \lambda\in\text{Re}(A/A)\ :\ 2^m\le\r \lambda<2^{m+1}\right\}$, and denote the elements of $\Lambda$ as $0<\lambda_{1}<\lambda_{2}<\cdots<\lambda_{|\Lambda|}$.
Since $r_{A}^{\div}(\lambda)^{2}<2^{2m+2},$ we have
\begin{equation} \label{eq:MultEnergyDecomp}
1>\frac{E^{\times}(A)}{|\Lambda|2^{2m+2}\log n}.
\end{equation}
Consider the planar point set ${\cal P}=A\times A\subset\ensuremath{\mathbb D}^{2}.$
Since ${\cal P}+{\cal P}=(A+A)\times(A+A)$, we have that $|{\cal P}+{\cal P}|=|A+A|^{2}$.
For each $1\le i\le|\Lambda|,$ let $\ell_{i}$ denote the line in $\ensuremath{\mathbb R}^{2}$ defined by $y=\lambda_{i}x$.
We think of these lines as being in the real part of $\ensuremath{\mathbb D}^2$ (which is a copy of $\ensuremath{\mathbb R}^2$).
Let ${\cal P}\cap\ell_{i}$ be the set of points $(a,b)\in {\cal P}$ that satisfy $\text{Re}(b)=\lambda_{i} \cdot \text{Re}(a)$.
In other words, this is the set of points that satisfy the real part of the line equation, but not necessarily the imaginary part.
By definition, for each of these $|\Lambda|$ lines we have $2^m \le |{\cal P}\cap \ell_{i}|<2^{m+1}$.
Let ${\cal P}\cap_{\ensuremath{\mathbb R}}\ell_{i}$ denote the set of points in the real part of $\ensuremath{\mathbb D}^2$ that correspond to at least one point of ${\cal P}\cap\ell_{i}$.
Note that ${\cal P}\cap\ell_{i}$ is in $\ensuremath{\mathbb D}^2$ while ${\cal P}\cap_{\ensuremath{\mathbb R}}\ell_{i}$ is in $\ensuremath{\mathbb R}^2$, and that $|{\cal P}\cap\ell_{i}|\ge |{\cal P}\cap_{\ensuremath{\mathbb R}}\ell_{i}|$.
The lines $\ell_i \subset \ensuremath{\mathbb R}^2$ are all incident to the origin.
In addition, the points of $({\cal P}\cap_{\ensuremath{\mathbb R}}\ell_{i})+({\cal P}\cap_{\ensuremath{\mathbb R}}\ell_{i+1})$ lie in the interior of the wedge formed by $\ell_{i}$ and $\ell_{i+1}$ in the first quadrant of $\ensuremath{\mathbb R}^{2}$.
Thus, for any $i\ne i'$, the sets $({\cal P}\cap_{\ensuremath{\mathbb R}}\ell_{i})+({\cal P}\cap_{\ensuremath{\mathbb R}}\ell_{i+1})$ and $({\cal P}\cap_{\ensuremath{\mathbb R}}\ell_{i'})+({\cal P}\cap_{\ensuremath{\mathbb R}}\ell_{i'+1})$ are disjoint.
Fix $0 < i <|\Lambda|$.
For any $a_{1},a_{2}\in \ell_{i}$ and $a_{3},a_{4}\in \ell_{i+1}$ (these are points in $\ensuremath{\mathbb R}^2$),
we have that $a_{1}+a_{3}\ne a_{2}+a_{4}$ unless $a_{1}=a_{2}$ and $a_{3}=a_{4}$.
Indeed, for variables $c,d\in \ensuremath{\mathbb R}$, the system $(c,c\cdot\lambda_{i})+(d,d\cdot\lambda_{i+1})=(p_{x},p_{y})$ has a unique solution.
Hence, for any $p,q\in{\cal P}\cap \ell_{i}$ and $r,s\in{\cal P}\cap \ell_{i+1}$ that satisfy $\text{Re} (p)\neq \text{Re} (q)$ or $\text{Re} (r) \neq \text{Re} (s)$, we have $p+r\ne q+s$.
Since ${\cal P} = A\times A$ and since $A$ has multiplicity $n^\alpha$, for each $(s,t)\in \ensuremath{\mathbb R}^2$ at most $n^{2\alpha}$ pairs $(a,b)\in \mathcal P$ satisfy $(\text{Re} (a),\text{Re} (b)) = (s,t)$.
For each point in ${\cal P}\cap_{\ensuremath{\mathbb R}}\ell_{i+1}$ we arbitrarily consider one point of ${\cal P}\cap\ell_{i+1}$ that corresponds to it, and denote the resulting set as $S_i$.
Note that $|S_i|\ge |{\cal P}\cap\ell_{i+1}|/n^{2\alpha}$ and that $S_i$ consists of points with distinct real parts.
We claim that $|({\cal P}\cap \ell_{i})+ S_i| = |{\cal P}\cap\ell_{i}| \cdot |S_i|$.
In other words, we claim that every element of $({\cal P}\cap \ell_{i})+ S_i$ can be written as such a sum in a unique way.
Indeed, consider $a,a'\in {\cal P}\cap \ell_{i}$ and $s,s'\in S_i$ with $(a,s)\neq (a',s')$.
If $s\neq s'$ then $\text{Re}(s)\neq \text{Re}(s')$, since the points of $S_i$ have distinct real parts, so the previous paragraph implies $a+s \neq a'+s'$.
If $s=s'$ then $a\neq a'$, which immediately gives $a+s \neq a'+s'$.
This leads to
\[ \left|({\cal P}\cap \ell_{i})+({\cal P}\cap \ell_{i+1})\right| \ge \left|({\cal P}\cap \ell_{i})+S_i\right| \ge |{\cal P}\cap \ell_{i}|\cdot |{\cal P}\cap \ell_{i+1}|/n^{2\alpha}. \]
Combining this with \eqref{eq:MultEnergyDecomp} yields
\begin{align}
|A+A|^{2} & =|{\cal P}+{\cal P}| >\sum_{i=1}^{|\Lambda|-1}\left|({\cal P}\cap \ell_{i})+({\cal P}\cap\ell_{i+1})\right| \ge\sum_{i=1}^{|\Lambda|-1}\frac{|{\cal P}\cap\ell_{i}||{\cal P}\cap\ell_{i+1}|}{n^{2\alpha}} \nonumber \\[2mm]
& \ge\frac{(|\Lambda|-1)2^{2m}}{n^{2\alpha}} \ge\frac{(|\Lambda|-1)2^{2m}}{n^{2\alpha}}\cdot \frac{E^{\times}(A)}{|\Lambda|2^{2m+2}\log n} = \Omega^*\left(\frac{E^{\times}(A)}{n^{2\alpha}}\right). \label{eq:MultEnergyUpper}
\end{align}
For $t\in AA$, let $r(t)$ denote the number of pairs $(a,a')\in A^{2}$ with $a\cdot a'=t$.
By the Cauchy-Schwarz inequality,
\begin{align*}
E^{\times}(A) =\sum_{t\in AA}r(t)^{2} \ge\frac{\left(\sum_{t\in AA}r(t)\right)^{2}}{|AA|} =\frac{n^{4}}{|AA|}.
\end{align*}
Combining this with \eqref{eq:MultEnergyUpper} leads to
\[ |A+A|^{2}n^{2\alpha} = \Omega^*\left(\frac{n^{4}}{|AA|}\right). \]
Rearranging this gives
\[ |A+A|^{2}|AA| =\Omega^*\left(n^{4-2\alpha}\right). \]
This immediately implies the assertion of the theorem.
\end{proof}
\section{Double numbers} \label{sec:Double}
In this section we study double numbers, and in particular prove Theorem \ref{th:DoubleSP}.
In Section \ref{ssec:DoubleLines} we study properties of lines in the double plane.
We derive a point-line incidence bound in $\ensuremath{\mathbb S}^2$, and study additional properties of such incidences.
This case is more involved than the analog for dual lines in Section \ref{sec:Dual}, since we cannot easily separate $\ensuremath{\mathbb S}^2$ into a real part and an imaginary part as we did for $\ensuremath{\mathbb D}^2$.
In Section \ref{ssec:ElekDouble} we adapt Elekes' sum-product argument to the double numbers.
As in the dual case, we add several additional steps to Elekes' original approach.
In Section \ref{ssec:SolyDouble}, we adapt Solymosi's sum-product argument to the double numbers.
\subsection{Lines in the double plane} \label{ssec:DoubleLines}
Recall that we denote by $\ensuremath{\mathbb S}$ the set of double numbers: the extension of $\ensuremath{\mathbb R}$ with the extra element $j$ and the rule $j^2=1$.
We write a number $a\in \ensuremath{\mathbb S}$ as $a_1+j a_2$.
Multiplication of double numbers is commutative, and 1 is the unit element.
A double number $a\in \ensuremath{\mathbb S}$ has an inverse element if and only if $a_1\neq \pm a_2$ (equivalently, $a_1^2\neq a_2^2$).
The inverse element is then $a^{-1} = \frac{a_1-j a_2}{a_1^2-a_2^2}$.
Indeed,
\[ a\cdot a^{-1} = \frac{(a_1+j a_2)(a_1-j a_2)}{a_1^2-a_2^2} = \frac{a_1^2-a_2^2}{a_1^2-a_2^2} = 1.\]
For $a=a_1+ja_2 \in \ensuremath{\mathbb S}$, we define $\Delta^+(a) = a_1+a_2$ and $\Delta^-(a) = a_1-a_2$.
For any $a,b\in \ensuremath{\mathbb S}$ where $b$ is invertible, we have
\begin{align}
&\Delta^+(a + b) = \Delta^+(a_1 + b_1+(a_2 + b_2)j) = \Delta^+(a) + \Delta^+(b), \nonumber \\[2mm]
&\Delta^+(a \cdot b) = \Delta^+(a_1b_1+a_2b_2+(a_1b_2+ a_2b_1)j) = a_1b_1+a_2b_2 + a_1b_2+ a_2b_1 = \Delta^+(a)\cdot \Delta^+(b), \nonumber \\[2mm]
&\Delta^+(a/b) = \Delta^+\left(\frac{(a_1+a_2j)(b_1-b_2j)}{b_1^2-b_2^2}\right) \nonumber \\[2mm]
&\hspace{40mm}= \Delta^+\left(\frac{a_1b_1-a_2b_2+(a_2b_1- a_1b_2)j}{b_1^2-b_2^2}\right) = \Delta^+(a)\cdot \Delta^+(b^{-1}). \label{eq:DeltaProp}
\end{align}
It is not difficult to verify that the above equations still hold when replacing $\Delta^+(\cdot)$ with $\Delta^-(\cdot)$.
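As a quick numerical check of these identities (with double numbers represented as pairs of reals, and an arbitrary small example), the following Python sketch verifies the inverse formula and the three identities for both $\Delta^+$ and $\Delta^-$.
\begin{verbatim}
# Double numbers as pairs (a1, a2), with j^2 = 1.
def s_add(a, b): return (a[0] + b[0], a[1] + b[1])
def s_mul(a, b): return (a[0]*b[0] + a[1]*b[1], a[0]*b[1] + a[1]*b[0])
def s_inv(a):
    d = a[0]*a[0] - a[1]*a[1]                     # nonzero exactly when a is invertible
    return (a[0] / d, -a[1] / d)
def delta(a, sign): return a[0] + sign * a[1]     # Delta^+ (sign = +1) or Delta^- (sign = -1)

u, v = (2.0, -1.0), (3.0, 1.0)                    # two invertible double numbers
assert s_mul(v, s_inv(v)) == (1.0, 0.0)           # the inverse formula above
for sign in (+1, -1):
    assert delta(s_add(u, v), sign) == delta(u, sign) + delta(v, sign)
    assert delta(s_mul(u, v), sign) == delta(u, sign) * delta(v, sign)
    assert delta(s_mul(u, s_inv(v)), sign) == delta(u, sign) / delta(v, sign)
print("Delta^+ and Delta^- respect sums, products, and quotients")
\end{verbatim}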
We define a line in $\ensuremath{\mathbb S}^2$ as the set of points on which a linear equation vanishes.
Let $\ell$ be the line defined by $ax+by=c$, where $a,b,c\in \ensuremath{\mathbb S}$.
This corresponds to
\begin{align*}
(a_1+j a_2) (x_1+j x_2) + (b_1+j b_2)(y_1+j y_2) &= (c_1+j c_2),
\end{align*}
or equivalently
\begin{align*}
a_1x_1 +a_2x_2 + b_1y_1 +b_2y_2 &= c_1, \\
a_2x_1 + a_1 x_2 + b_2 y_1 + b_1 y_2 &= c_2.
\end{align*}
The two above equations are linearly dependent if and only if $a_1= \pm a_2$, $b_1= \pm b_2$, and $c_1 = \pm c_2$, where all three $\pm$ represent the same operation.
We can think of $\ensuremath{\mathbb S}^2$ as $\ensuremath{\mathbb R}^4$, and then $\ell$ is either a 2-flat or a hyperplane, depending on whether the two above equations are linearly dependent.
We refer to the lines of the latter type as ``degenerate lines''.
Note that for a line defined by $ax+by=c$ to be degenerate, all three of $a$, $b$, and $c$ must be non-invertible.
As in the dual case, lines in the double plane can have an infinite intersection, even when excluding non-invertible coefficients in the line equations.
For example, consider the set of non-degenerate lines
\[ \mathcal L = \{ y=(k+(k-1)j)x+((15-3k)+(9-3k)j) :\ k\in \ensuremath{\mathbb R} \}. \]
It is not difficult to verify that every line of $\mathcal L$ contains the line $\ell$ parameterized by $(c+(3-c)j,12+c+(9-c)j)\in \ensuremath{\mathbb S}^2$ with $c\in \ensuremath{\mathbb R}$.
By taking $n$ lines from $\mathcal L$ and $m$ points on $\ell$, we get $mn$ incidences.
That is, the point--line incidence problem in $\ensuremath{\mathbb S}^2$ is trivial.
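The following Python sketch (illustrative only) verifies this example by brute force, checking that several lines of $\mathcal L$ contain several points of $\ell$.
\begin{verbatim}
# Double numbers as pairs (a1, a2), with j^2 = 1.
def s_add(a, b): return (a[0] + b[0], a[1] + b[1])
def s_mul(a, b): return (a[0]*b[0] + a[1]*b[1], a[0]*b[1] + a[1]*b[0])

for k in [-2.0, 0.0, 1.0, 3.5, 10.0]:             # a few lines of the family
    a = (k, k - 1.0)
    b = (15.0 - 3.0*k, 9.0 - 3.0*k)
    for c in [-1.0, 0.0, 2.0, 7.25]:              # a few points of the common line
        x = (c, 3.0 - c)
        y = (12.0 + c, 9.0 - c)
        assert s_add(s_mul(a, x), b) == y         # each sampled line contains each sampled point
print("m points on the common line and n lines give mn incidences")
\end{verbatim}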
We now study when collections of lines have an infinite intersection.
\begin{lemma} \label{le:LineFamilyDoub}
In each of the following parts, every $\pm$ represents the same operation, and every $\mp$ represents the other operation. \\
(a) Let $\ell$ and $\ell'$ be distinct lines in $\ensuremath{\mathbb S}^2$, respectively defined by $y=ax+b$ and $y=a'x+b'$.
The intersection $\ell\cap\ell'$ contains more than one point if and only if $a_1-a'_1=\pm(a_2-a'_2)\neq 0$, and $b_1-b'_1 = \pm (b_2-b'_2)$.
When these conditions are satisfied, $\ell\cap\ell'$ is a line in $\ensuremath{\mathbb R}^4$ and every point $(x,y)$ in $\ell\cap\ell'$ satisfies $x_1 \pm x_2 = \frac{b'_1-b_1}{a_1-a'_1}$. \\
(b) Let $\mathcal L$ be a set of lines of the form $y=ax+b$ that have an infinite common intersection.
Then there exist $t_1,t_2\in \ensuremath{\mathbb R}$ such that every line of $\mathcal L$ satisfies $\Delta^\mp(a)=t_1$ and $\Delta^\mp(b)=t_2$.
There also exists $s$ such that every point $(x,y)$ in the infinite intersection satisfies $\Delta^\pm(x)=s$.
\begin{itemize}
\item If $s = 0$ then every line has the same $b$, and every point in the common intersection satisfies $\Delta^{\pm}(y) = \Delta^{\pm}(b)$.
\item If $s \neq 0$ then there exist $m,m'\in \ensuremath{\mathbb R}$ such that every line satisfies $b_1= s(m-a_1)$ and $b_2= s(m'-a_2)$.
Every point in the common intersection satisfies $\Delta^{\pm}(y) = s(m\pm m')$.
\end{itemize}
\end{lemma}
\begin{proof}
(a) To study the intersection points of the two lines, we combine $y=ax+b$ and $y=a'x+b'$, obtaining $ax+b=a'x+b'$, or equivalently $x(a-a')=b'-b$.
Splitting this equation into real and imaginary parts gives
\begin{align}
x_1(a_1-a'_1) + x_2(a_2-a'_2) &= b'_1-b_1, \nonumber \\
x_1(a_2-a'_2) + x_2(a_1-a'_1) &= b'_2-b_2. \label{eq:TwoLinearEqDouble}
\end{align}
We consider the above as a linear system in $x_1$ and $x_2$.
This system has a unique solution unless $(a_1-a'_1)^2 = (a_2-a'_2)^2$.
That is, the intersection contains at most one point unless $a_1-a'_1=\pm(a_2-a'_2)$.
If $(a_1-a'_1)^2 = (a_2-a'_2)^2=0$ then the two lines are either parallel or identical.
Thus, it remains to study the case where $a_1-a'_1=\pm(a_2-a'_2)\neq 0$.
We either have that $a_1-a'_1 = a_2-a'_2$ or that $a_1-a'_1 = a'_2-a_2$.
We first consider the former case.
By the equations of \eqref{eq:TwoLinearEqDouble}, either $b'_1-b_1 \neq b'_2-b_2$ and the two lines do not intersect or $b'_1-b_1 = b'_2-b_2$ and the two lines have an infinite intersection.
In the case of an infinite intersection, the equations of \eqref{eq:TwoLinearEqDouble} also imply $x_1 + x_2= \frac{b'_1-b_1}{a_1-a'_1}$.
It remains to consider the case where $a_1-a'_1 = a'_2-a_2 \neq 0$.
By \eqref{eq:TwoLinearEqDouble}, either $b'_1-b_1 \neq b_2-b'_2$ and the two lines do not intersect or $b'_1-b_1 = b_2-b'_2$ and the two lines have an infinite intersection.
In the case of an infinite intersection, the equations of \eqref{eq:TwoLinearEqDouble} also imply $x_1 - x_2= \frac{b'_1-b_1}{a_1-a'_1}$.
(b) If distinct 2-flats in $\ensuremath{\mathbb R}^4$ have an infinite intersection, then this intersection is a line.
Let $\ell^*$ be the line in $\ensuremath{\mathbb R}^4$ that is the infinite intersection of the lines of $\mathcal L$.
By part (a), there exists $s\in \ensuremath{\mathbb R}$ such that every $(x,y)\in \ell^*$ satisfies $\Delta^{\pm}(x) = s$ and every distinct $\ell,\ell'\in \mathcal L$ satisfy $\frac{b'_1-b_1}{a_1-a'_1}=s$.
This implies that the symbol $\pm$ represents the same operation for all lines in $\mathcal L$.
By part (a) we also have that $a_1-a_1' = \pm (a_2-a'_2)$, or equivalently that $\Delta^{\mp}(a) = \Delta^{\mp}(a')$.
That is, every line of $\mathcal L$ has the same value for $\Delta^{\mp}(a)$.
Similarly, the condition $b_1-b'_1 = \pm (b_2-b'_2)$ leads to every line of $\mathcal L$ having the same value for $\Delta^{\mp}(b)$.
If $s=0$, then every two lines $\ell,\ell'\in \mathcal L$ satisfy $\frac{b'_1-b_1}{a_1-a'_1}=0$, or equivalently $b_1=b'_1$.
Since every $\Delta^{\mp}(b)$ has the same value, we also obtain $b_2=b'_2$.
That is, every line of $\mathcal L$ has the same $b$.
Consider a line of $\mathcal L$ defined by $y=ax+b$.
Splitting this equation to real and imaginary parts, we obtain $y_1 = a_1x_1+a_2x_2 +b_1$ and $y_2 = a_1x_2+a_2x_1 +b_2$.
Combining these two equations gives
\[ y_1 \pm y_2 = a_1x_1+a_2x_2 +b_1 \pm (a_1x_2+a_2x_1 +b_2) = (a_1\pm a_2)(x_1\pm x_2) + b_1\pm b_2 = b_1 \pm b_2. \]
If $s\neq0$ then every distinct $\ell,\ell'\in \mathcal L$ satisfy $b'_1-b_1=s(a_1-a'_1)$.
This implies that there exists $m\in \ensuremath{\mathbb R}$ such that every line of $\mathcal L$ satisfies $b_1= s(m-a_1)$.
By part (a), we have $\frac{b_1-b'_1}{a_1-a'_1} = \frac{b_2-b'_2}{a_2-a'_2}$, or equivalently $\frac{b'_1-b_1}{a_1-a'_1} = \frac{b'_2-b_2}{a_2-a'_2}$.
Thus, $b'_2-b_2=s(a_2-a'_2)$ and there exists $m'\in \ensuremath{\mathbb R}$ such that every line of $\mathcal L$ satisfies $b_2= s(m'-a_2)$.
Consider a line of $\mathcal L$ defined by $y=ax+b$.
Splitting this equation into real and imaginary parts, we obtain $y_1 = a_1x_1+a_2x_2 +b_1$ and $y_2 = a_1x_2+a_2x_1 +b_2$.
Combining these two equations gives
\begin{align*}
y_1 \pm y_2 &= a_1x_1+a_2x_2 +b_1 \pm (a_1x_2+a_2x_1 +b_2) = (a_1 \pm a_2)(x_1\pm x_2) + (b_1\pm b_2) \\[2mm]
&= s(a_1 \pm a_2) + s(m-a_1 \pm (m'-a_2)) = s(m\pm m').
\end{align*}
\end{proof}
The example before Lemma \ref{le:LineFamilyDoub} was obtained by setting $x_1+x_2 = 3, m=5, m'=2,$ and $a_1-a_2 = 1$.
The rest followed from Lemma \ref{le:LineFamilyDoub}.
Using Lemma \ref{le:LineFamilyDoub}, we can prove Theorem \ref{th:DoubleST}.
This proof is identical to the proof of Theorem \ref{th:DualST}, so we do not repeat it here.
To derive our sum-product bounds in $\ensuremath{\mathbb S}$, we need additional properties of point-line incidences in $\ensuremath{\mathbb S}^2$.
We refer to a set of lines that have an infinite common intersection as a \emph{family of lines}.
Let $\mathcal L$ be such a family.
By Lemma \ref{le:LineFamilyDoub}(b), there exist constants $s,s'\in \ensuremath{\mathbb R}$ such that every point $(x,y)\in\ensuremath{\mathbb S}^2$ in the common intersection of the lines of $\mathcal L$ satisfies $\Delta^{\pm}(x) = s$ and $\Delta^{\pm}(y) =s'$.
We say that $(s,s')\in \ensuremath{\mathbb R}^2$ is the \emph{point parameter} of the family $\mathcal L$.
Also by Lemma \ref{le:LineFamilyDoub}(b), there exist $t_1,t_2\in \ensuremath{\mathbb R}$ such that every line of $\mathcal L$ satisfies $\Delta^\mp(a)=t_1$ and $\Delta^\mp(b)=t_2$.
We define the \emph{line parameter} of $\mathcal L$ to be $(t_1,t_2)$.
We can study the interaction between different line families by studying their point parameters and line parameters.
We say that a line family is \emph{positive} or \emph{negative} according to the meaning of the $\pm$ sign in the definition of the point parameter of the family.
In the notation of Lemma \ref{le:LineFamilyDoub}(b), a family is positive if $s=x_1+x_2$.
We refer to this property as the \emph{sign} of a line family.
\begin{lemma} \label{le:LineFamDouble}
Let $\mathcal L_1$ and $\mathcal L_2$ be distinct line families in $\ensuremath{\mathbb S}^2$ with the same sign. \\
(a) If $\mathcal L_1$ and $\mathcal L_2$ do not have the same line parameter, then no line is contained in both families. \\
(b) If $\mathcal L_1$ and $\mathcal L_2$ have the same line parameter and the same value for $\Delta^{\pm}(x)$, then no line is contained in both families. \\
(c) If $\mathcal L_1$ and $\mathcal L_2$ have the same line parameter but not the same $\Delta^{\pm}(x)$, then at most one line of $\ensuremath{\mathbb S}^2$ is in both families.
\end{lemma}
\begin{proof}
(a) By the definition of the line parameter, either the lines of $\mathcal L_1$ and the lines of $\mathcal L_2$ have different values of $\Delta^{\mp}(a)$ or these lines have different values of $\Delta^{\mp}(b)$ (or both).
Since no line can have two different values for $\Delta^{\mp}(a)$ or two different values for $\Delta^{\mp}(b)$, the two families are disjoint.
(b) Let $\pm$ denote the sign of $\mathcal L_1$ and $\mathcal L_2$, let $\mp$ denote the opposite sign, and denote the common value of $\Delta^{\pm}(x)$ as $s$.
Since the two families have the same line parameter and the same sign, every line in $\mathcal L_1 \cup \mathcal L_2$ has the same value of $\Delta^{\mp}(a)$ and the same value of $\Delta^{\mp}(b)$.
We assume for contradiction that there exists a line in $\ensuremath{\mathbb S}^2$ that is contained in both families.
We first consider the case of $s=0$.
By Lemma \ref{le:LineFamilyDoub}(b), all the lines in the same family have the same $b$.
Since the two families contain a line in common, every line of $\mathcal L_1 \cup \mathcal L_2$ has the same value of $b_1$ and the same value of $b_2$.
Since these families also have the same value of $\Delta^{\mp}(a)$ they are identical, contradicting the assumption.
Next, consider the case of $s\neq 0$.
By Lemma \ref{le:LineFamilyDoub}(b), in this case there exist $m_1,m_2\in \ensuremath{\mathbb R}$ such that every line of $\mathcal L_1$ satisfies $b_1 = s(m_1-a_1)$ and every line of $\mathcal L_2$ satisfies $b_1 = s(m_2-a_1)$.
Since there is a line in both families, we obtain $s(m_1-a_1)=s(m_2-a_1)$ or equivalently $m_1=m_2$.
By a symmetric argument, the $m'$ values of both families are identical.
Since the value of $\Delta^{\mp}(a)$ is also identical for both line families, we conclude that the two families are identical, contradicting the assumption.
We got a contradiction in both cases, so the two line families cannot have any lines in common.
(c) Let $\pm$ denote the sign of $\mathcal L_1$ and $\mathcal L_2$, let $\mp$ denote the opposite sign, and let $\ell\subset \ensuremath{\mathbb S}^2$ be a line that is in both families.
As in the proof of part (b), every line in $\mathcal L_1 \cup \mathcal L_2$ has the same value of $\Delta^{\mp}(a)$ and the same value of $\Delta^{\mp}(b)$.
Denote the $\Delta^{\pm}(x)$ values of $\mathcal L_1$ and $\mathcal L_2$ as $s_1$ and $s_2$, respectively.
Then there exist $m_1,m_2\in \ensuremath{\mathbb R}$ such that $\ell$ satisfies $b_1 = s_1(m_1-a_1)$ and $b_1 = s_2(m_2-a_1)$.
Since $s_1\neq s_2$, these are two independent linear equations in the variables $a_1$ and $b_1$, and thus have a unique solution.
Together with the fixed values of $\Delta^{\mp}(a)$ and $\Delta^{\mp}(b)$, this determines the line, so at most one line is common to both families.
\end{proof}
We can also use the line parameter to study the behavior of line families in $\ensuremath{\mathbb R}^4$.
\begin{lemma} \label{le:FamInHyper}
When considering every line in $\ensuremath{\mathbb S}^2$ as a 2-flat in $\ensuremath{\mathbb R}^4$, the 2-flats of a line family are all contained in a common hyperplane.
Two line families of the same sign are contained in the same hyperplane if and only if they have the same line parameter.
\end{lemma}
\begin{proof}
Consider a line family $\mathcal L$ with line parameter $(t,t')$, and a line from $\mathcal L$ defined by $y=ax+b$.
Splitting this equation into real and imaginary parts, we obtain $y_1 = a_1x_1+a_2x_2 +b_1$ and $y_2 = a_1x_2+a_2x_1 +b_2$.
Combining these two equations leads to
\[ y_1\mp y_2 = (a_1\mp a_2)(x_1\mp x_2) + (b_1 \mp b_2) = t(x_1 \mp x_2)+t'.\]
That is, every line of $\mathcal L$ corresponds to a 2-flat that is contained in the hyperplane defined by $y_1 \mp y_2=t(x_1 \mp x_2)+t'$.
It can now be easily verified that two families are contained in the same hyperplane if and only if they have the same line parameter $(t,t')$.
\end{proof}
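As an illustration of Lemma \ref{le:FamInHyper}, the following Python sketch checks the family from the example before Lemma \ref{le:LineFamilyDoub}: its line parameter is $(1,6)$, so each of its lines should lie in the hyperplane defined by $y_1-y_2 = (x_1-x_2)+6$.
\begin{verbatim}
# Double numbers as pairs (a1, a2), with j^2 = 1.
def s_add(a, b): return (a[0] + b[0], a[1] + b[1])
def s_mul(a, b): return (a[0]*b[0] + a[1]*b[1], a[0]*b[1] + a[1]*b[0])

for k in [-1.0, 0.0, 2.0, 5.0]:                   # a few lines of the family
    a = (k, k - 1.0)                              # Delta^-(a) = 1 = t
    b = (15.0 - 3.0*k, 9.0 - 3.0*k)               # Delta^-(b) = 6 = t'
    for x in [(-2.0, 1.0), (0.0, 0.0), (3.0, -4.0)]:   # arbitrary points x on the line
        y = s_add(s_mul(a, x), b)
        assert y[0] - y[1] == (x[0] - x[1]) + 6.0      # y1 - y2 = t (x1 - x2) + t'
print("every sampled line of the family lies in the hyperplane y1 - y2 = (x1 - x2) + 6")
\end{verbatim}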
Finally, we study the interaction between two line families with opposite signs.
\begin{lemma} \label{le:OppositeSigns}
Let $\mathcal L_1$ and $\mathcal L_2$ be distinct line families in $\ensuremath{\mathbb S}^2$ with opposite signs.
Then at most one line is contained in both families.
\end{lemma}
\begin{proof}
Without loss of generality, assume that the lines of $\mathcal L_1$ have the same value of $\Delta^+(a)$ and that the lines of $\mathcal L_2$ have the same value of $\Delta^-(a)$.
Then all the lines of $\mathcal L_1$ have the same value of $\Delta^+(b)$ and all the lines of $\mathcal L_2$ have the same value of $\Delta^-(b)$.
Since $a_1 = \frac{1}{2}\left(\Delta^+(a)+\Delta^-(a)\right)$ and $a_2 = \frac{1}{2}\left(\Delta^+(a)-\Delta^-(a)\right)$, and similarly for $b$, there are unique values for $a_1,a_2,b_1,b_2$ that satisfy all four restrictions.
We conclude that at most one line can be in both families.
\end{proof}
\subsection{Adapting Elekes' argument to double numbers} \label{ssec:ElekDouble}
We are now ready to adapt the proof from Section \ref{ssec:ElekDual} to the double numbers.
The two proofs are similar, but not identical.
We thus provide most of the proof, skipping only the last part, which is a technical calculation identical to the one in Section \ref{ssec:ElekDual}.
We first repeat the relevant part of Theorem \ref{th:DoubleSP}.
\begin{theorem}
Let $A$ be a set of $n$ double numbers with multiplicity $n^{\alpha}$, for some $0 \le \alpha < 1/2$.
Then for any $\rho>0$,
\[ \max\{|A+A|,|AA|\} = \begin{cases}
\Omega^*\left(n^{3/2-3\alpha/4}\right), & \quad 1/3\le\alpha<1/2, \\
\Omega\left(n^{5/4-\rho}\right), & \quad 0\le \alpha<1/3.
\end{cases} \]
\end{theorem}
\begin{proof}
By the multiplicity assumption, $A$ contains at most $2n^\alpha$ non-invertible elements.
We discard these elements.
This does not change the asymptotic size of $A$ and can only decrease the sizes of $A+A$ and $AA$.
Thus, it suffices to prove the bound for the resulting smaller set.
Abusing notation, in the rest of the proof we refer to this revised set as $A$.
Consider the point set
\[ \mathcal P = \{ (c,d)\in\ensuremath{\mathbb S}^2 :\, c\in A+A \quad \text{ and } \quad d\in AA \}, \]
and the set of lines
\[ \mathcal L = \{ y=c(x-d) :\, c,d\in A \}. \]
Note that $|\mathcal L| =n^2$ and $|\mathcal P| = |A+A|\cdot|AA|$.
Since the revised $A$ consists only of invertible elements, there are no degenerate lines in $\mathcal L$.
We think of $\mathcal P$ both as a point set in $\ensuremath{\mathbb S}^2$ and as a point set in $\ensuremath{\mathbb R}^4$.
Similarly, we think of $\mathcal L$ both as a set of lines in $\ensuremath{\mathbb S}^2$ and as a set of 2-flats in $\ensuremath{\mathbb R}^4$.
The proof is based on double counting $I(\mathcal P,\mathcal L)$.
A line defined by $y=c(x-d)$ contains every point of $\mathcal P$ of the form $(d+b,cb)$ for every $b\in A$.
That is, we have that $I(\mathcal P,\mathcal L) \ge |\mathcal L||A| = n^3$.
For the rest of the proof we will derive upper bounds for $I(\mathcal P,\mathcal L)$.
We partition the incidences in $\mathcal P\times\mathcal L$ into two types, as follows.
We say that an incidence $(p,\ell)\in \mathcal P\times\mathcal L$ is \emph{special} if there exists a second line $\ell'\in \mathcal L$ such that $\ell$ and $\ell'$ are members of the same line family, and $p$ is in the infinite intersection of this family.
If an incidence $(p,\ell)\in \mathcal P\times\mathcal L$ is not special, then we say that it is a \emph{standard} incidence.
We first bound the number of standard incidences.
By considering $\ensuremath{\mathbb S}^2$ as $\ensuremath{\mathbb R}^4$, we obtain an incidence problem with two-dimensional flats.
If $(p,\ell_1)$ and $(p,\ell_2)$ are standard incidences, then $\ell_1\cap \ell_2 = \{p\}$.
These are regular incidences, as defined before Theorem \ref{th:IncR4}.
By that theorem, the number of standard incidences is $O\left(|\mathcal P|^{2/3+\rho}|\mathcal L|^{2/3}+|\mathcal P|+|\mathcal L|\right)$.
When the number of standard incidences is larger than the number of special incidences, we have that
\[ I(\mathcal P,\mathcal L) = O\left(|\mathcal P|^{2/3+\rho}|\mathcal L|^{2/3}+|\mathcal P|+|\mathcal L|\right) = O\left(|A+A|^{2/3+\rho}|AA|^{2/3+\rho}n^{4/3}+ |A+A||AA|\right). \]
Combining this with $I(\mathcal P,\mathcal L) \ge n^3$ leads to $|A+A||AA| = \Omega(n^{5/2-\rho})$.
This immediately implies the assertion of the theorem, for any $0\le \alpha < 1/2$.
\parag{Handling special incidences.}
It remains to consider the case where the number of special incidences is larger than the number of standard incidences.
We say that a special incidence $(p,\ell)$ corresponds to a line family $\mathcal L^*$ if the line $\ell$ is contained in the line family and the point $p$ is in the infinite intersection of the family.
By the definition, each special incidence corresponds to at least one line family.
By Lemmas \ref{le:LineFamDouble} and \ref{le:OppositeSigns}, a special incidence can correspond to at most one positive family and to at most one negative family.
In the rest of the analysis, we assume that at least half of the special incidences correspond to a positive family.
The other case, in which at least half of the special incidences correspond to a negative family, is handled in a symmetric manner.
We remove the special incidences that are not associated with a positive line family.
By the above assumption, this does not asymptotically change $I(\mathcal P,\mathcal L)$.
The removal process may turn some special incidences into standard incidences.
We bound the number of new standard incidences in the same way we bounded the number of the original standard incidences.
As before, if most of the original special incidences became standard incidences, then we are done.
It remains to study the case where most of the original value of $I(\mathcal P,\mathcal L)$ comes from the remaining special incidences.
Denote by $I(\alpha',\beta,\gamma,\delta)$ the number of special incidences $(p,\ell)\in \mathcal P\times \mathcal L$ that satisfy the following.
Let $\mathcal L^*$ be the positive line family that corresponds to $(p,\ell)$, let $(s,s')$ be the point parameter of $\mathcal L^*$, and let $(t,t')$ be the line parameter of $\mathcal L^*$.
\begin{itemize}
\item The number of lines of $\mathcal L$ that satisfy $t=\Delta^-(a)$ and $t'=\Delta^-(b)$ is at least $n^{2\alpha'}$ and smaller than $2n^{2\alpha'}$.
\item The number of lines of $\mathcal L$ that are in $\mathcal L^*$ is at least $n^\beta$ and smaller than $2n^\beta$.
\item The number of points $(x,y)\in\mathcal P$ that satisfy $s=\Delta^+(x)$ and $s'=\Delta^+(y)$ is at least $n^\gamma$ and smaller than $2n^\gamma$.
\item The pair $(s,s')$ is the point parameter of at least $n^\delta$ and fewer than $2n^\delta$ positive line families that satisfy the property stated in the second item.
\end{itemize}
Note that we can take $\Theta(\log^4 n)$ elements $I(\alpha',\beta,\gamma,\delta)$ such that every special incidence is counted in at least one of those elements.
Thus, the number of special incidences is at most the maximum size of $I(\alpha',\beta,\gamma,\delta)$ times $\Theta(\log^4 n)$.
We study some basic properties of the parameters $\alpha',\beta,\gamma,\delta$ that maximize $I(\alpha',\beta,\gamma,\delta)$.
For that purpose, we assume that $\alpha',\beta,\gamma,\delta$ are fixed.
We denote by $S$ the set of point parameters that participate in incidences of $I(\alpha',\beta,\gamma,\delta)$.
Let $F$ be the set of line families that contain at least $n^\beta$ lines of $\mathcal L$ and fewer than $2n^{\beta}$ such lines.
Let $T$ be the set of line parameters of the families of $F$.
Since $|\mathcal L|=n^2$ and every pair of $T$ corresponds to $\Omega(n^{2\alpha'})$ lines of $\mathcal L$, we get that $|T| =
O(n^{2-2\alpha'})$.
We study how many lines of $\mathcal L$ can correspond to a given line parameter $(t,t')\in T$.
Recalling that every line of $\mathcal L$ is of the form $y=c(x-d)$, we note that $c$ uniquely determines $t$.
Since $A$ has multiplicity $n^\alpha$, there are $O(n^\alpha)$ choices for $c$.
Recalling from \eqref{eq:DeltaProp} that $\Delta^-(a\cdot b) = \Delta^-(a)\cdot \Delta^-(b)$, we get that for a fixed $c$ there are $O(n^\alpha)$ choices for $d$.
Thus, the number of lines in $\mathcal L$ with a given line parameter is $O(n^{2\alpha})$.
This implies that $0\le \alpha'\le \alpha$.
We also have that $\beta \le 2\alpha'$, since otherwise there are not enough lines with the same line parameter to create a family in $F$.
We now consider how many lines from $\mathcal L$ can be part of the same line family.
In general we consider lines defined by an equation of the form $y=(a_1+j a_2) x + (b_1+j b_2)$, and a line in $\mathcal L$ is defined by an equation of the form $y=c(x-d)$ with $c,d\in A$.
By Lemma \ref{le:LineFamilyDoub}, all the lines in the same positive family have the same $\Delta^-(a)$ and $\Delta^-(b)$.
The multiplicity of $A$ implies that there are at most $n^\alpha$ possible values $c$.
Also by Lemma \ref{le:LineFamilyDoub}, either all the lines in a family have the same $b$, or they all satisfy relations of the form $b_1 = s(m-a_1)$ and $b_2 = s(m'-a_2)$.
In either case, the values of $b$ are uniquely determined by the values of $a$.
That is, choosing $c$ uniquely determines $d$.
We conclude that every family contains at most $n^\alpha$ lines from $\mathcal L$, so $0 \le \beta \le \alpha$.
Recall from \eqref{eq:DeltaProp} that $\Delta^+(a+b) = \Delta^+(a)+\Delta^+(b)$.
Since the multiplicity of $A$ is $n^\alpha$, at most $n^{1+\alpha}$ sums $x$ in $A+A$ can have the same value for $\Delta^+(x)$.
Similarly, since $\Delta^+(a\cdot b) = \Delta^+(a)\cdot \Delta^+(b)$, at most $n^{1+\alpha}$ products $y$ in $AA$ can have the same value for $\Delta^+(y)$.
Since $\mathcal P= (A+A)\times (AA)$, at most $n^{2+2\alpha}$ points of $\mathcal P$ can correspond to the same point parameter $(s,s')\in \ensuremath{\mathbb R}^2$.
Thus, $0 \le \gamma \le 2+2\alpha$.
We also have the straightforward bound $n^\gamma \le |\mathcal P|$, or equivalently $\gamma \le (\log |\mathcal P|)/\log n$.
Next, we consider the maximum number of line families of $F$ that can have the same point parameter.
Recall that every line of a positive family with point parameter $(s,s')$ satisfies $s' = (a_1+a_2)s + (b_1+b_2)$ (for example, see the proof of Lemma \ref{le:LineFamilyDoub}(b)).
Since the lines of $\mathcal L$ are defined as $y=c(x-d)$ with $c,d\in A$, for every choice of $c$ and a point parameter, the value of $b_1+b_2$ is uniquely determined.
This implies that a specific point parameter has $O(n^{1+\alpha})$ lines of $\mathcal L$ corresponding to it.
We conclude that $0 \le \delta \le 1+\alpha - \beta$.
To recap:
\begin{align*}
0\le \alpha', \beta \le &\alpha, \qquad \beta \le 2\alpha', \qquad 0 \le \delta \le 1+\alpha-\beta, \\[2mm]
0 \le &\gamma \le \min\left\{2+2\alpha, (\log |\mathcal P|)/\log n\right\}.
\end{align*}
We next bound the number of families in $F$.
Recall that $|T|=O(n^{2-2\alpha'})$, and that each pair of $T$ corresponds to fewer than $2n^{2\alpha'}$ lines of $\mathcal L$.
For a fixed $(t,t')\in T$, by Lemma \ref{le:LineFamDouble} every two families corresponding to $(t,t')$ have at most one line in common.
There are fewer than $\binom{2n^{2\alpha'}}{2} = O(n^{4\alpha'})$ pairs of lines of $\mathcal L$ that correspond to $(t,t')$.
Every pair of lines can appear together in at most one line family, and each line family subsumes at least $\binom{n^{\beta}}{2} = \Theta(n^{2\beta})$ such pairs.
Thus, the number of families that correspond to $(t,t')$ is $O(n^{4\alpha'-2\beta})$.
By summing up over every $(t,t')\in T$, we obtain that $|F| = O(n^{2+2\alpha'-2\beta})$.
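Explicitly, combining $|T|=O(n^{2-2\alpha'})$ with the bound of $O(n^{4\alpha'-2\beta})$ families per line parameter gives
\[ |F| = O\left(n^{2-2\alpha'}\cdot n^{4\alpha'-2\beta}\right) = O\left(n^{2+2\alpha'-2\beta}\right). \]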
Since each pair $(s,s')\in S$ corresponds to $\Theta(n^\gamma)$ points of $\mathcal P$, we have $|S| = O(|\mathcal P|/n^\gamma)$.
Since $|F| = O(n^{2+2\alpha'-2\beta})$ and each point parameter subsumes $\Theta(n^{\delta})$ families of $F$, we obtain $|S| = O(n^{2+2\alpha'-2\beta-\delta})$.
We think of a point parameter $(s,s')$ as corresponding to the 2-flat in $\ensuremath{\mathbb R}^4$ defined by $s=x_1+x_2$ and $s'=y_1+y_2$.
By Lemma \ref{le:FamInHyper}, a line family is fully contained in a hyperplane in $\ensuremath{\mathbb R}^4$, and two positive families are contained in the same hyperplane if and only if they have the same line parameter.
Let $H$ be a generic 2-flat in $\ensuremath{\mathbb R}^4$, such that $H$ intersects every 2-flat that corresponds to a point parameter $(s,s')\in S$ in a single distinct point, and that $H$ intersects every hyperplane containing a line family at a distinct line.
Let $\mathcal P_H$ be the resulting set of $|S|$ points in $H$ and let $\mathcal L_H$ be the resulting family of $|T|$ lines in $H$.
By definition, every point of $\mathcal P_H$ is incident to $\Omega(n^{\delta-2\alpha'+\beta})$ lines of $\mathcal L_H$.
Recalling that $|T|=O(n^{2-2\alpha'})$, Theorem \ref{th:DualFormST} implies that
\[ |S|=O\left(\frac{(n^{2-2\alpha'})^2}{(n^{\delta-2\alpha'+\beta})^3} + \frac{n^{2-2\alpha'}}{n^{\delta-2\alpha'+\beta}}\right) = O\left(n^{4+2\alpha'-3\delta-3\beta} + n^{2-\delta-\beta}\right).\]
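For the reader's convenience, the exponent arithmetic behind this bound is
\[ \frac{\left(n^{2-2\alpha'}\right)^2}{\left(n^{\delta-2\alpha'+\beta}\right)^3} = n^{(4-4\alpha')-3(\delta-2\alpha'+\beta)} = n^{4+2\alpha'-3\delta-3\beta},
\qquad \frac{n^{2-2\alpha'}}{n^{\delta-2\alpha'+\beta}} = n^{2-\delta-\beta}. \]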
Recall that a point parameter $(s,s')\in S$ is associated with the plane in $\ensuremath{\mathbb R}^4$ defined by $x_1+x_2=s$ and $x_3+x_4=s'$.
Denote this plane as $h$.
There are $\Theta(n^\delta)$ families with point parameter $(s,s')$, each intersecting $h$ in a common line.
There are $\Theta(n^\gamma)$ points of $\mathcal P$ in $h$.
The intersection lines of the different families are distinct by definition.
By the Szemer\'edi--Trotter theorem, the number of incidences between these points and lines is $O(n^{2(\delta+\gamma)/3}+n^\delta+n^\gamma)$.
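Here we use the Szemer\'edi--Trotter theorem in its standard form: $m$ points and $\ell$ lines in the plane span $O(m^{2/3}\ell^{2/3}+m+\ell)$ incidences; with $m=\Theta(n^\gamma)$ points and $\ell=\Theta(n^\delta)$ lines this reads
\[ O\left(n^{2\gamma/3}\cdot n^{2\delta/3}+n^{\gamma}+n^{\delta}\right) = O\left(n^{2(\delta+\gamma)/3}+n^{\delta}+n^{\gamma}\right). \]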
Since each such intersection line corresponds to $\Theta(n^\beta)$ lines of $\mathcal L$, the number of incidences in $h$ is
\begin{equation} \label{eq:IncInPntParam}
O\left(n^\beta\left(n^{2(\delta+\gamma)/3}+n^\delta+n^\gamma\right)\right).
\end{equation}
Note that \eqref{eq:IncInPntParam} is identical to \eqref{eq:IncInSpecialPnt}.
In addition, we obtained the exact same bounds for $\alpha',\beta,\gamma,\delta, |T|,|S|$, and $|F|$ as in the proof of Theorem \ref{th:DualMainCase}.
We may thus repeat the technical calculation at the end of the proof of Theorem \ref{th:DualMainCase}.
We do not repeat the entire calculation here.
As in the proof of Theorem \ref{th:DualMainCase}, this leads to $\max\{|A+A|,|AA|\} = \Omega^*\left(n^{3/2-5\alpha/8}\right)$.
\end{proof}
The proof of the bound of Theorem \ref{th:DoubleSP} in the case where $1/2 \le \alpha < \kappa$ is identical to the proof of Corollary \ref{co:DualLargeAlpha}, so we do not repeat it here.
\subsection{Adapting Solymosi's argument to double numbers} \label{ssec:SolyDouble}
We now adapt Solymosi's sum-product argument \cite{Soly09} to sets of double numbers.
\begin{theorem}
Let $A$ be a set of $n$ double numbers with multiplicity $n^{\alpha}$, for some $0\le \alpha<1/2$.
Then
\[ \max\{|A+A|,|AA|\}=\Omega^*\left(n^{(4-2\alpha)/3}\right). \]
\end{theorem}
\begin{proof}
The proof is similar to the proof of Theorem \ref{th:SolyDual}, with $\text{Re}(a)$ replaced by $\Delta^+(a)$.
Equations \eqref{eq:DeltaProp} illustrate that $\Delta^+(a)$ has the same arithmetic properties we used with $\text{Re}(a)$.
For $a\in \ensuremath{\mathbb S}$, we refer to $\Delta^+(a)$ as the \emph{parameter} of $a$.
For $\lambda \in \ensuremath{\mathbb R}$, we define
\begin{align*}
r_{A}^{\times}(\lambda)&=\left|\left\{ (a,a')\in A^{2}\ :\ \Delta^+(aa')=\lambda\right\} \right|, \\[2mm]
r_{A}^{\div}(\lambda)&=\left|\left\{ (a,a')\in A^{2}\ :\ \Delta^+(a/a')=\lambda\right\} \right|.
\end{align*}
In other words, $r_{A}^{\times}(\lambda)$ is the number of ways to obtain $\lambda$ as the parameter of a product of two elements of $A$, and similarly for $r_{A}^{\div}(\lambda)$.
We repeat the pruning steps of $A$ as in the proof of Theorem \ref{th:SolyDual}, to obtain that every element of $A$ has a positive parameter.
We then repeat the multiplicative energy calculation from the proof of Theorem \ref{th:SolyDual}.
This implies that there exists $0\le m<\log n$ with $\Lambda=\left\{ \lambda\in \Delta^+(A/A)\ :\ 2^m\le r_{A}^{\div}(\lambda)<2^{m+1}\right\}$ such that
\begin{equation*}
1>\frac{E^{\times}(A)}{|\Lambda|2^{2m+2}\log n}.
\end{equation*}
(The multiplicative energy $E^{\times}(A)$ is defined as in Section \ref{ssec:SolyDual}.)
Consider the planar point set ${\cal P}=A\times A\subset\ensuremath{\mathbb S}^{2}.$
Since ${\cal P}+{\cal P}=(A+A)\times(A+A)$, we have that $|{\cal P}+{\cal P}|=|A+A|^{2}$.
For each $1\le i\le|\Lambda|$, let $\ell_{i}$ denote the line in $\ensuremath{\mathbb R}^{2}$ defined by $y=\lambda_{i}x$, where $\lambda_1<\lambda_2<\cdots<\lambda_{|\Lambda|}$ are the elements of $\Lambda$.
Let $\mathcal P\cap\ell_{i}$ be the set of points $(a,b)\in {\cal P}$ that satisfy $(\Delta^+(a),\Delta^+(b))\in \ell_{i}$ (equivalently, $\Delta^+(b)=\lambda_{i} \cdot \Delta^+(a)$).
By definition, for each of the $|\Lambda|$ lines we have $2^m \le |{\cal P}\cap \ell_{i}|<2^{m+1}$.
Let ${\cal P}\cap_{\ensuremath{\mathbb R}}\ell_{i}$ be the set of points $(\Delta^+(a),\Delta^+(b))$ such that $(a,b)\in \mathcal P\cap\ell_{i}$.
Note that ${\cal P}\cap\ell_{i}$ is in $\ensuremath{\mathbb S}^2$ while ${\cal P}\cap_{\ensuremath{\mathbb R}}\ell_{i}$ is in $\ensuremath{\mathbb R}^2$, and that $|{\cal P}\cap\ell_{i}|\ge |{\cal P}\cap_{\ensuremath{\mathbb R}}\ell_{i}|$.
The lines $\ell_i \subset \ensuremath{\mathbb R}^2$ are all incident to the origin.
In addition, for every $p\in \mathcal P \cap\ell_{i}$ and $q\in \mathcal P \cap \ell_{i+1}$, the point $\Delta^+(p+q)\in \ensuremath{\mathbb R}^2$ lies in the interior of the wedge formed by $\ell_{i}$ and $\ell_{i+1}$ in the first quadrant of $\ensuremath{\mathbb R}^{2}$.
Indeed, if positive $a,b,c,d\in \ensuremath{\mathbb R}$ satisfy $a/b<c/d$, then $a/b<(a+c)/(b+d) <c/d$.
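(For completeness, here is the short verification of this mediant inequality: $a/b<c/d$ with $a,b,c,d>0$ means $ad<bc$, hence
\[ a(b+d)=ab+ad<ab+bc=b(a+c) \quad\text{and}\quad d(a+c)=da+dc<cb+cd=c(b+d), \]
which is exactly $a/b<(a+c)/(b+d)<c/d$.)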
Thus, for any $1\le i< i'< |\Lambda|$, the sets $(\mathcal P\cap_{\ensuremath{\mathbb R}}\ell_{i})+(\mathcal P \cap_{\ensuremath{\mathbb R}}\ell_{i+1})$ and $(\mathcal P \cap_{\ensuremath{\mathbb R}}\ell_{i'})+(\mathcal P \cap_{\ensuremath{\mathbb R}}\ell_{i'+1})$ are disjoint.
Fix $1\le i <|\Lambda|$.
For any $a_{1},a_{2}\in \mathcal P \cap \ell_{i}$ and $a_{3},a_{4}\in \mathcal P \cap \ell_{i+1}$, we have that $\Delta^+(a_{1}+a_{3})\ne \Delta^+(a_{2}+a_{4})$ unless $\Delta^+(a_{1})=\Delta^+(a_{2})$ and $\Delta^+(a_{3})=\Delta^+(a_{4})$.
Indeed, for variables $c,d\in \ensuremath{\mathbb R}$, the system $(c,c\cdot\lambda_{i})+(d,d\cdot\lambda_{i+1})=(p_{x},p_{y})$ has a unique solution.
In other words, for any $p,q\in{\cal P}\cap \ell_{i}$ and $r,t\in{\cal P}\cap \ell_{i+1}$ that satisfy $\Delta^+(p)\neq \Delta^+(q)$ or $\Delta^+(r) \neq \Delta^+(t)$, we have $p+r\ne q+t$.
Since ${\cal P} = A\times A$ and since $A$ has multiplicity $n^\alpha$, for each $(r,t)\in \ensuremath{\mathbb R}^2$ at most $n^{2\alpha}$ pairs $(a,b)\in \mathcal P$ satisfy $(\Delta^+(a),\Delta^+(b)) = (r,t)$.
For each point in ${\cal P}\cap_{\ensuremath{\mathbb R}}\ell_{i+1}$ we arbitrarily consider one point of ${\cal P}\cap\ell_{i+1}$ that corresponds to it, and denote the resulting set as $Q_i$.
Note that $|Q_i|\ge |{\cal P}\cap\ell_{i+1}|/n^{2\alpha}$ and that $Q_i$ consists of points with distinct parameters.
We claim that $|({\cal P}\cap \ell_{i})+ Q_i| = |{\cal P}\cap\ell_{i}| \cdot |Q_i|$.
In other words, we claim that every element of $({\cal P}\cap \ell_{i})+ Q_i$ can be written as a sum in a unique way.
Indeed, distinct elements $q,q'\in Q_i$ have distinct parameters, so the preceding observation implies that $a+q \neq a'+q'$ whenever $q\neq q'$.
For a fixed $q\in Q_i$ and $a,a'\in {\cal P}\cap \ell_{i}$, we clearly have $a+q \neq a'+q$ when $\Delta^+(a)\neq \Delta^+(a')$.
If $\Delta^+(a)= \Delta^+(a')$ and $a\neq a'$, then $a$ and $a'$ have distinct imaginary parts, again implying $a+q \neq a'+q$.
This leads to
\[ \left|({\cal P}\cap \ell_{i})+({\cal P}\cap \ell_{i+1})\right| \ge \left|({\cal P}\cap \ell_{i})+Q_i\right| \ge |{\cal P}\cap \ell_{i}|\cdot |{\cal P}\cap \ell_{i+1}|/n^{2\alpha}. \]
The rest of the analysis is a technical calculation identical to the one at the end of the proof of Theorem \ref{th:SolyDual}.
We do not repeat this analysis here.
\end{proof}
| {
"timestamp": "2018-12-27T02:11:45",
"yymm": "1812",
"arxiv_id": "1812.09547",
"language": "en",
"url": "https://arxiv.org/abs/1812.09547",
"abstract": "We study the sum-product problem for the planar hypercomplex numbers: the dual numbers and double numbers. These number systems are similar to the complex numbers, but it turns out that they have a very different combinatorial behavior. We identify parameters that control the behavior of these problems, and derive sum-product bounds that depend on these parameters. For the dual numbers we expose a range where the minimum value of $\\max\\{|A+A|,|AA|\\}$ is neither close to $|A|$ nor to $|A|^2$.To obtain our main sum-product bound, we extend Elekes' sum-product technique that relies on point-line incidences. Our extension is significantly more involved than the original proof, and in some sense runs the original technique a few times in a bootstrapping manner. We also study point-line incidences in the dual plane and in the double plane, developing analogs of the Szemeredi-Trotter theorem. As in the case of the sum-product problem, it turns out that the dual and double variants behave differently than the complex and real ones.",
"subjects": "Combinatorics (math.CO)",
"title": "Sum-Product Phenomena for Planar Hypercomplex Numbers",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9890130596362787,
"lm_q2_score": 0.7154240018510026,
"lm_q1q2_score": 0.7075636810078908
} |
https://arxiv.org/abs/1210.2914 | Positive matrices partitioned into a small number of Hermitian blocks | Positive semidefinite matrices partitioned into a small number of Hermitian blocks have a remarkable property. Such a matrix may be written in a simple way from the sum of its diagonal blocks | \section{Introduction}
\noindent
Positive matrices partitioned into blocks frequently occur as basic tools or for their own interest in matrix analysis and mathematical physics. For instance, to define the geometric mean
of two $n\times n$ matrices $A,B\in\bM_n^+$, the space of positive semidefinite matrices, one considers the class of block-matrices
\begin{equation}\label{block1}
\begin{bmatrix}
A&X\\ X&B
\end{bmatrix}
\end{equation}
belonging to $\bM_{2n}^+$ (hence $X$ is Hermitian). The geometric mean of $A$ and $B$ is then characterized as the largest possible $X$ such that \eqref{block1} is positive. Positive matrices $H$ of the form \eqref{block1} enjoy a remarkable property: for all symmetric norms
\begin{equation}\label{norm1}
\left\| H\right\|
\le \| A+B \|.
\end{equation}
This says that we have a majorisation between $H$ and the sum of its diagonal blocks,
\begin{equation}\label{major1}
\sum_{j=1}^k \lambda_j(H) \le \sum_{j=1}^k\lambda_j(A+B), \qquad k=1,\ldots,2n
\end{equation}
where $\lambda_j(T)$ is the $j$th eigenvalue of $T\in\bM_m^+$ (in decreasing order, with the convention $\lambda_j(T)=0$ for $j>m$).
Other typical examples of positive matrices written in blocks are formed by tensor products. Indeed,
the tensor product $A\otimes B$ of $A\in\bM_{\beta}$ with $B\in\bM_n$ can be identified with an element of $\bM_{\beta}(\bM_n)=\bM_{\beta n}$. Starting with positive matrices in $\bM_{\beta}^+$ and $\bM_n^+$ we then get a matrix in $\bM_{\beta n}^+$ partitioned in blocks in $\bM_n$. In quantum physics, sums of tensor products of positive semi-definite matrices (with trace one) occur as so-called separable states. In this setting, the sum of the diagonal blocks is called the partial trace (with respect to $\bM_{\beta}$). Hiroshima \cite{H} proved a beautiful extension of \eqref{norm1}-\eqref{major1}:
\vskip 10pt
\begin{theorem}\label{Hiroshima} Let $H=[A_{s,t}]\in \bM_{\alpha n}^+$ be partitioned into $\alpha\times \alpha$ Hermitian blocks in $\bM_n$ and let $\Delta=\sum_{s=1}^{\alpha}{A_{s,s}}$ be its partial trace. Then, we have
\begin{equation*}
\left\|H \right\|\le \left\| \Delta\right\|
\end{equation*} for all symmetric norms.
\end{theorem}
\vskip 10pt
This result seems to be not so well-known among matrix-functional analysts. We recently rediscovered it and actually obtained a stronger decomposition theorem \cite{BLL3}. For small partitions, when $\alpha\in\{2,3,4\}$, special proofs are available that differ from the general case in two related ways. First, these proofs are simpler (especially for $\alpha=2$) but also yield a much sharper decomposition than the one we can obtain in the general case. Secondly, and this is rather surprising, even though we consider a positive matrix with {\it real} entries, its decomposition involves some {\it complex} matrices. This note is concerned with these special decompositions for small partitions.
In the next section we will see what can be said for $\alpha=2$. This situation was already implicitly covered in some proofs given in our first note \cite{BLL1}. Section 3 deals with the case $\alpha=4$ (and as a byproduct the case $\alpha=3$).
\section{Two by two blocks}
\noindent
For partitions of positive matrices,
the diagonal blocks play a special role. This is apparent in a rather striking decomposition due to the first two authors \cite{BL1}:
\vskip 10pt
\begin{lemma} \label{BL-lemma} For every matrix in $\bM_{n+m}^+$ partitioned into blocks, we have a decomposition
\begin{equation*}
\begin{bmatrix} A &X \\
X^* &B\end{bmatrix} = U
\begin{bmatrix} A &0 \\
0 &0\end{bmatrix} U^* +
V\begin{bmatrix} 0 &0 \\
0 &B\end{bmatrix} V^*
\end{equation*}
for some unitaries $U,\,V\in \bM_{n+m}$.
\end{lemma}
\vskip 10pt
This decomposition turned out to be an efficient tool and it also plays a major role below.
A proof and several consequences can be found in \cite{BL1} and \cite{BH1}.
Of course, $\bM_n$ is the algebra of $n\times n$ matrices with real or complex entries, and $\bM_n^+$ is the positive part. That is, $\bM_n$ may stand either for $\bM_n(\bR)$, the matrices with real entries, or for $\bM_n(\bC)$, those with complex entries. The situation is different
in the next statement, where complex entries seem unavoidable.
\vskip 10pt
\begin{theorem}\label{thm-four} Given any matrix in $\bM_{2n}^+(\bC)$ partitioned into blocks in $\bM_n(\bC)$ with Hermitian off-diagonal blocks, we have
\begin{equation*}
\begin{bmatrix} A &X \\
X &B\end{bmatrix}= \frac{1}{2}\left\{ U(A+B)U^* +V(A+B)V^*\right\}
\end{equation*} for some isometries $U,V\in\bM_{2n,n}(\bC)$.
\end{theorem}
\vskip 10pt
Here $\bM_{p,q}(\bC)$ denotes the space of $p\times q$ matrices with complex entries, and $V\in\bM_{p,q}(\bC)$ is an isometry if $p\ge q$ and $V^*V=I_q$. Even for a matrix in $\bM_{2n}^+(\bR)$, it seems essential to use isometries with complex entries.
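To illustrate this on the smallest case (an example we add for the reader, not taken from \cite{BLL1}), let $n=1$, $A=B=1$ and $x\in[-1,1]$, so that
\begin{equation*}
H=\begin{bmatrix} 1&x\\ x&1\end{bmatrix}=(1+x)\,pp^*+(1-x)\,qq^*,\qquad
p=\frac{1}{\sqrt2}\begin{bmatrix}1\\1\end{bmatrix},\quad q=\frac{1}{\sqrt2}\begin{bmatrix}1\\-1\end{bmatrix}.
\end{equation*}
Choosing $\theta$ with $\cos^2\theta=(1+x)/2$ and setting $U=\cos\theta\, p+i\sin\theta\, q$ and $V=\cos\theta\, p-i\sin\theta\, q$, a direct computation gives $UU^*+VV^*=(1+x)pp^*+(1-x)qq^*=H$, that is, $H=\frac{1}{2}\left\{U(A+B)U^*+V(A+B)V^*\right\}$ with isometries $U,V\in\bM_{2,1}(\bC)$; for $0<|x|<1$ these isometries are genuinely complex.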
Theorem \ref{thm-four} is implicit in \cite{BLL1}. We detail here how it follows from Lemma \ref{BL-lemma}.
\vskip 10pt
\begin{proof}Taking the unitary matrix
$$W=\frac{1}{\sqrt{2}}\begin{bmatrix} -iI&iI \\I &I\end{bmatrix},$$
where $I$ is the identity of $\bM_n$, then
\begin{equation*}
W^*\begin{bmatrix} A &X \\
X &B\end{bmatrix}W= \frac{1}{2}\begin{bmatrix} A+B & \ast\\
\ast & A+B\end{bmatrix}
\end{equation*}
where $\ast$ stands for unspecified entries. By Lemma \ref{BL-lemma}, there are two unitaries $U,V\in\bM_{2n}$ partitioned into equally sized matrices,
$$U=\begin{bmatrix} U_{11}& U_{12} \\ U_{21} & U_{22}\end{bmatrix},\qquad V=\begin{bmatrix} V_{11}& V_{12} \\ V_{21} & V_{22}\end{bmatrix}$$
such that
$$
\frac{1}{2}\begin{bmatrix} A+B & \ast\\
\ast & A+B\end{bmatrix}= \frac{1}{2}\left\{U\begin{bmatrix} A+B & 0\\0 &0\end{bmatrix}U^*+V\begin{bmatrix} 0& 0\\0 & A+B\end{bmatrix}V^*\right\}.
$$
Therefore
$$ \frac{1}{2}\begin{bmatrix} A+B & \ast\\
\ast & A+B\end{bmatrix}=\frac{1}{2}\left\{ \widetilde{U}(A+B)\widetilde{U}^* +\widetilde{V}(A+B)\widetilde{V}^*\right\}$$
where $$\widetilde{U}=\begin{bmatrix} U_{11} \\U_{21}\end{bmatrix}\quad {\rm and}\quad \widetilde{V}=\begin{bmatrix} V_{11} \\V_{21}\end{bmatrix}$$
are isometries. The proof is completed by taking the new isometries to be $U=W\widetilde{U}$ and $V=W\widetilde{V}$.
\end{proof}
\vskip 10pt
Theorem \ref{thm-four}
yields \eqref{norm1}. As a consequence of this inequality we have a refinement of a well-known determinantal inequality,
$$
\det(I+A)\det(I+B)\ge \det(I+A+B)
$$
for all $A,B\in\bM^+_n$.
\begin{cor} Let $A,B\in \bM_n^+$. For any Hermitian $X\in\bM_n$ such that
$H=\begin{bmatrix} A&X \\X&B\end{bmatrix}$
is positive semi-definite, we have
\begin{equation*}
\det(I+A)\det(I+B) \ge \det(I+H) \ge \det(I+A+B).
\end{equation*}
\end{cor}
\vskip 10pt
Here $I$ denotes both the identity of $\bM_n$ and $\bM_{2n}$.
Note that equality obviously occurs in the first inequality when $X=0$, and equality occurs in the second inequality when $AB=BA$ and $X=A^{1/2}B^{1/2}$.
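For the second equality case, the verification (which we spell out for convenience) goes as follows: if $AB=BA$ and $X=A^{1/2}B^{1/2}$, then, using that $A^{1/2}$ and $B^{1/2}$ commute,
\begin{equation*}
H=\begin{bmatrix} A & A^{1/2}B^{1/2} \\ A^{1/2}B^{1/2} & B\end{bmatrix}
=\begin{bmatrix} A^{1/2} \\ B^{1/2}\end{bmatrix}
\begin{bmatrix} A^{1/2} & B^{1/2}\end{bmatrix},
\end{equation*}
so the nonzero eigenvalues of $H$ are those of $\begin{bmatrix} A^{1/2} & B^{1/2}\end{bmatrix}\begin{bmatrix} A^{1/2} \\ B^{1/2}\end{bmatrix}=A+B$, whence $\det(I+H)=\det(I+A+B)$.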
\vskip 10pt
\begin{proof} The left inequality is a special case of Fisher's inequality,
$$
\det X\det Y \ge \det \begin{bmatrix} X&Z \\ Z^*&Y \end{bmatrix}
$$
for any partitioned positive semi-definite matrix. The second inequality follows from \eqref{norm1}. Indeed,
the majorisation $S\prec T$ in $\bM_n^+$ entails the trace inequality
\begin{equation}\label{basic-concave}
{\mathrm{Tr\,}} f(S) \ge {\mathrm{Tr\,}} f(T)
\end{equation}
for all concave functions $f(t)$ defined on $[0,\infty)$. Using \eqref{basic-concave} with $f(t)=\log(1+t)$ and the relation $H\prec A+B$, we have
\begin{align*}
\det(I+H) &=\exp{\mathrm{Tr\,}} \log(I+H) \\
&\ge\exp{\mathrm{Tr\,}} \log(I+((A+B)\oplus 0_n)) \\
&=\det(I+A+B).
\end{align*}
\end{proof}
\vskip 10pt
Theorem \ref{thm-four} says more than the eigenvalue majorisation \eqref{major1}. We have a few other eigenvalue inequalities as follows.
\vskip 10pt
\begin{cor}\label{cor-1} Let $H=\begin{bmatrix} A&X \\ X&B\end{bmatrix}\in\bM_{2n}^+$ be partitioned into Hermitian blocks in $\bM_n$. Then, we have
\begin{equation*}
\lambda_{1+2 k}(H) \le \lambda_{1+k}( A+B)
\end{equation*}
for all $k=0,\ldots,n-1$.
\end{cor}
\vskip 10pt
\begin{proof} Together with Theorem \ref{thm-four}, the alleged inequalities follow immediately from a simple fact, Weyl's theorem: if $Y, Z \in\bM_m$ are Hermitian, then
\[\lambda_{r+s+1}(Y+Z)\le \lambda_{r+1}(Y) + \lambda_{s+1}(Z)\]
for all nonnegative integers $r,s$ such that $r + s\le m-1$.
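In detail (making the application explicit), we take $r=s=k$, $Y=\frac{1}{2}U(A+B)U^*$ and $Z=\frac{1}{2}V(A+B)V^*$; since $U,V$ are isometries and $A+B\in\bM_n^+$, we have $\lambda_{1+k}(Y)=\lambda_{1+k}(Z)=\frac{1}{2}\lambda_{1+k}(A+B)$ for $0\le k\le n-1$, hence
\[ \lambda_{1+2k}(H)\le \lambda_{1+k}(Y)+\lambda_{1+k}(Z)=\lambda_{1+k}(A+B). \]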
\end{proof}
\vskip 10pt
\begin{cor}
Let $S,T\in\bM_n$ be Hermitian. Then,
\begin{equation*}
\| T^2 + ST^2S\| \le \| T^2 + TS^2T\|
\end{equation*}
for all symmetric norms, and
\begin{equation*}
\lambda_{1+2k}( T^2 + ST^2S) \le
\lambda_{1+k}( T^2 + TS^2T)
\end{equation*} for all $k=0,\ldots,n-1$.
\end{cor}
\begin{proof}
The nonzero eigenvalues of $T^2+ST^2S=\begin{bmatrix}T& ST\end{bmatrix}\begin{bmatrix}T& ST\end{bmatrix}^*$ are the same as those of $$\begin{bmatrix}T& ST\end{bmatrix}^*\begin{bmatrix}T& ST\end{bmatrix}=\begin{bmatrix} T^2 & TST \\ TST & TS^2T\end{bmatrix}.$$
This block-matrix is of course positive and has its off-diagonal blocks Hermitian. Therefore, the norm inequality follows from \eqref{norm1}, and the eigenvalue inequalities from Corollary \ref{cor-1}. The norm inequality was first observed in \cite{FL}.
\end{proof}
\section{Quaternions and 4-by-4 blocks }
\noindent
Theorem \ref{thm-four} refines Hiroshima's theorem in case of two by two blocks. Some interesting new eigenvalue inequalities are obtained. How to get a similar result for partitions into a larger number of blocks ? The question whether a positive block-matrix $H$ in $\bM^+_{3n}$,
$$
H=\begin{bmatrix}
A&X&Y \\ X&B&Z \\Y&Z&C
\end{bmatrix}
$$
with Hermitian off-diagonal blocks $X,Y,Z$, can be decomposed as
$$
H=\frac{1}{3}\left\{ U\Delta U^*+ V\Delta V^* +W\Delta W^*\right\}
$$
where $\Delta=A+B+C$ and $U,V,W$ are isometries, is a difficult one. However, we will give a rather satisfactory answer by considering direct sums. We have been unable to find any direct proof for partitions in 3-by-3 blocks. The key idea was then to introduce quaternions and to deal with 4-by-4 partitions. This approach leads to the following theorem.
\vskip 10pt
\begin{theorem}\label{thm-quaternion} Let $H=[A_{s,t}]\in \bM_{\beta n}^+(\bC)$ be partitioned into Hermitian blocks in $\bM_n(\bC)$ with $\beta\in\{3,4\}$ and let $\Delta=\sum_{s=1}^{\beta}A_{s,s}$ be the sum of its diagonal blocks. Then,
\begin{equation*}
H\oplus H =\frac{1}{4}\sum_{k=1}^4 V_k\left(\Delta\oplus\Delta\right)V_k^*
\end{equation*} for some isometries $V_k\in\bM_{2\beta n,2n}(\bC)$, $k=1,2,3,4$.
\end{theorem}
\vskip 10pt
Note that, for $\alpha=\beta\in\{3,4\}$, Theorem \ref{thm-quaternion} considerably improves Theorem \ref{Hiroshima}. Indeed, Theorem \ref{thm-quaternion} implies the majorisation $\| H\oplus H \| \le \| \Delta \oplus \Delta \|$ which is equivalent to the majorisation of Theorem \ref{Hiroshima}, $\| H \| \le \| \Delta\|$.
As in Theorem \ref{thm-four}, we must consider isometries with complex entries, even for a matrix $H$ with real entries. In \cite{BLL3} we will develop a real approach for real matrices. The isometries then have real entries, but the proof is more intricate and the result is not so simple since it requires direct sums of sixteen copies:
we obtain a decomposition of $\oplus^{16} H$ in terms of $\oplus^{16} \Delta$.
Before turning to the proof, we recall some facts about quaternions.
The algebra $\bH$ of quaternions is an associative real division algebra of dimension four containing $\bC$ as a sub-algebra.
Quaternions $q$ are usually written as
$$
q=a+bi+cj +dk
$$
with $a,b,c,d\in\bR$ and $a+bi\in\bC$. The quaternion units $1,i,j,k$ satisfy
$$
i^2=j^2=k^2=ijk=-1.
$$
The algebra $\bH$ can be represented as the real sub-algebra of $\bM_2$ consisting of matrices of the form
$$
\begin{pmatrix} z&-\overline{w} \\
w&\overline{z}
\end{pmatrix}
$$
by the identification map
$$
a+bi+cj+dk \mapsto \begin{pmatrix} a+bi & ic-d \\ ic+d & a-ib\end{pmatrix}.
$$
The quaternion units $1,i,j,k$ are then represented by the matrices (related to the Pauli matrices),
\begin{equation}\label{units}
\begin{pmatrix} 1& 0\\
0& 1
\end{pmatrix}, \quad
\begin{pmatrix} i& 0\\
0& -i
\end{pmatrix}, \quad
\begin{pmatrix}0 & i\\
i& 0
\end{pmatrix}, \quad
\begin{pmatrix} 0& -1\\
1& 0
\end{pmatrix}
\end{equation}
that we will use in the following proof of Theorem \ref{thm-quaternion}.
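As a quick check (added for the reader's convenience), the matrices in \eqref{units} indeed satisfy the quaternion relations: writing $\mathbf{i},\mathbf{j},\mathbf{k}$ for the second, third and fourth matrices of \eqref{units},
\begin{equation*}
\mathbf{i}^2=\mathbf{j}^2=\mathbf{k}^2=-I_2,\qquad
\mathbf{i}\mathbf{j}=\begin{pmatrix} 0&-1\\ 1&0\end{pmatrix}=\mathbf{k},\qquad
\mathbf{i}\mathbf{j}\mathbf{k}=\mathbf{k}^2=-I_2.
\end{equation*}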
We will work with matrices in $\bM_{8n}$ partitioned in 4-by-4 blocks in $\bM_{2n}$.
\begin{proof} It suffices to consider the case $\beta=4$; the case $\beta=3$ follows by completing $H$ with some zero columns and rows.
First, replace the positive block matrix $H=[A_{s,t}]$, where $1\le s,t\le 4$
and all blocks are Hermitian, by a bigger one in which each block is counted twice:
$$G= [G_{s,t}]:= [A_{s,t}\oplus A_{s,t}].$$
Thus $G\in\bM_{8n}(\bC)$ is written in 4-by-4 blocks in $\bM_{2n}(\bC)$. Then perform a unitary congruence with the matrix
$$W = E_1\oplus E_2\oplus E_3 \oplus E_4$$
where the $E_i$ are the analogues of quaternion units, that is, with $I$ the identity of $\bM_n(\bC)$,
$$
E_1 = \begin{bmatrix}I&0 \\ 0&I\end{bmatrix},\quad
E_2 = \begin{bmatrix} iI&0 \\ 0&-iI\end{bmatrix},\quad
E_3 =\begin{bmatrix} 0&iI \\ iI&0\end{bmatrix},\quad
E_4 =\begin{bmatrix}0& -I \\ I&0\end{bmatrix}.
$$
Note that $E_sE_t^*$ is skew-Hermitian whenever $s\neq t$.
A direct matrix computation then shows that the block matrix
$$
\Omega:=WGW^*=[ \Omega_{s,t}]
$$
has the following property for its off-diagonal blocks :
For $1\le s<t\le 4$
$$
\Omega_{s,t}=-\Omega_{t,s}.
$$
Using this property we compute the unitary congruence implemented by
$$
R_2=\frac{1}{2}\begin{bmatrix} 1&1&1&1 \\
1&-1& 1&-1 \\
1&1&-1&-1 \\
1&-1& -1&1 \\
\end{bmatrix}
\otimes\begin{bmatrix} I&0 \\ 0&I
\end{bmatrix}
$$
and we observe that
$
R_2\Omega R_2^*
$
has its four diagonal blocks $(R_2\Omega R_2^*)_{k,k}$, $1\le k\le 4$, all equal to the matrix $D\in\bM_{2n}(\bC)$,
$$
D =\frac{1}{4}\sum_{s=1}^4 A_{s,s}\oplus A_{s,s}.
$$
Let $\Gamma=D\oplus 0_{6n}\in\bM_{8n}$.
Thanks to the decomposition of Lemma \ref{BL-lemma}, there exist some unitaries $U_i\in\bM_{8n}(\bC)$, $1\le i\le 4$, such that
$$
\Omega=\sum_ {i=1}^4 U_i \Gamma U_i^*.
$$
That is, since $\Omega$ is unitarily equivalent to $H\oplus H$, and $\Gamma=W_0D W_0^*$ for some isometry $W_0\in\bM_{8n, 2n}(\bC)$,
$$
H\oplus H = \sum_{k=1}^4 V_kDV_k^*
$$
for some isometries $V_k\in\bM_{8n,2n}(\bC)$. Since $D=\frac{1}{4}(\Delta\oplus\Delta)$, the proof is complete.
\end{proof}
\vskip 10pt
In the same vein as in Section 2, we have the following consequences.
\vskip 10pt
\begin{cor} Let $H=[A_{s,t}]\in \bM_{\beta n}^+$ be written in Hermitian blocks in $\bM_n$ with $\beta\in\{3,4\}$ and let $\Delta=\sum_{s=1}^{\beta}A_{s,s}$ be the sum of its diagonal blocks. Then,
\begin{equation*}
\prod_{s=1}^\beta\det(I+A_{ss}) \ge \det(I+H) \ge \det\left(I+\sum_{s=1}^\beta A_{ss}\right).
\end{equation*}
\end{cor}
\vskip 10pt
\vskip 10pt
\begin{cor}\label{cor4} Let $H=[A_{s,t}]\in \bM_{\beta n}^+$ be written in Hermitian blocks in $\bM_n$ with $\beta\in\{3,4\}$ and let $\Delta=\sum_{s=1}^{\beta}A_{s,s}$ be the sum of its diagonal blocks. Then,
\begin{equation*}
\lambda_{1+4 k}(H) \le \lambda_{1+k}(\Delta)
\end{equation*}
for all $k=0,\ldots,n-1$.
\end{cor}
\vskip 10pt
\begin{cor}\label{detail}
Let $T\in\bM_{n}$ be Hermitian and let $\{S_i\}_{i=1}^\beta\in\bM_{n}$ be commuting Hermitian matrices with $\beta\in\{3,4\}$. Then,
\begin{equation*}
\left\|\sum_{i=1}^\beta S_iT^2S_i\right\| \le \left\|\sum_{i=1}^\beta TS_i^2T\right\|
\end{equation*}
for all symmetric norms, and
\begin{equation*}
\lambda_{1+4 k}\left(\sum_{i=1}^{\beta} S_iT^2S_i\right) \le
\lambda_{1+k}\left(\sum_{i=1}^{\beta} TS_i^2T\right)
\end{equation*} for all $k=0,\ldots,n-1$.
\end{cor}
The proofs of these corollaries are quite similar to those of Section 2. We give details only for the norm inequality of Corollary \ref{detail}.
\vskip 10pt
\begin{proof} We may assume that $\beta=4$ by completing, if necessary, with $S_4=0$. So, let $T\in\bM_n$ be Hermitian and let $\{S_i\}_{i=1}^4$ be four commuting Hermitian matrices in $\bM_n$. Then
$$ H= XX^*=
\begin{bmatrix} TS_1\\ TS_2\\ TS_3\\ TS_4
\end{bmatrix}
\begin{bmatrix} S_1T & S_2T &S_3T &S_4T
\end{bmatrix}
$$
is positive and partitioned into Hermitian blocks with diagonal blocks $TS_i^2T$, $1\le i\le 4$. Thus, from Theorem \ref{thm-quaternion}, for all symmetric norms,
$$
\| H\oplus H\| \le \left\| \left\{\sum_{i=1}^4 TS_i^2T\right\} \oplus \left\{\sum_{i=1}^4 TS_i^2T \right\}\right\|
$$
or equivalently
$$
\| H \| \le \left\| \sum_{i=1}^4 TS_i^2T\right\|
$$
Since $H=XX^*$ and $X^*X=\sum_{i=1}^4 S_iT^2S_i$, the norm inequality of Corollary \ref{detail} follows.
\end{proof}
| {
"timestamp": "2012-10-11T02:06:11",
"yymm": "1210",
"arxiv_id": "1210.2914",
"language": "en",
"url": "https://arxiv.org/abs/1210.2914",
"abstract": "Positive semidefinite matrices partitioned into a small number of Hermitian blocks have a remarkable property. Such a matrix may be written in a simple way from the sum of its diagonal blocks",
"subjects": "Functional Analysis (math.FA)",
"title": "Positive matrices partitioned into a small number of Hermitian blocks",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9890130583409233,
"lm_q2_score": 0.7154240018510026,
"lm_q1q2_score": 0.7075636800811625
} |
https://arxiv.org/abs/1902.06403 | Hamiltonicity of bi-power of bipartite graphs, for finite and infinite cases | For a graph $G$, the $t$-th power $G^t$ is the graph on $V(G)$ such that two vertices are adjacent if and only if they have distance at most $t$ in $G$; and the $t$-th bi-power $G_B^t$ is the graph on $V(G)$ such that two vertices are adjacent if and only if their distance in $G$ is odd and at most $t$. Fleischner's theorem states that the square of every 2-connected finite graph has a Hamiltonian cycle. Georgakopoulos proved that the square of every 2-connected infinite locally finite graph has a Hamiltonian circle. In this paper, we consider the Hamiltonicity of the bi-power of bipartite graphs. We show that for every connected finite bipartite graph $G$ with a perfect matching, $G_B^3$ has a Hamiltonian cycle. We also show that if $G$ is a connected infinite locally finite bipartite graph with a perfect matching, then $G_B^3$ has a Hamiltonian circle. | \section{Introduction}
A graph $G$ is Hamiltonian if it has a Hamiltonian cycle, i.e., a
cycle containing all vertices of $G$. The $t$-th power $G^t$ of $G$
is the graph on $V(G)$ such that two vertices are adjacent in $G^t$
if and only if they have distance at most $t$ in $G$. The following
classical theorems concern the Hamiltonicity of the power of graphs:
\begin{theorem}[Fleischner \cite{Fl}]\label{ThFl}
If $G$ is a 2-connected finite graph, then $G^2$ is Hamiltonian.
\end{theorem}
\begin{theorem}[Sekanina \cite{Se}]
If $G$ is a connected finite graph of order at least 3, then $G^3$
is Hamiltonian.
\end{theorem}
We consider the analogues of the above theorems on bipartite graphs.
We first notice that a bipartite graph is Hamiltonian only if it is
balanced, i.e., its two bipartition sets have the same size.
Generally, the power of a bipartite graph may not be bipartite. In
order to find a graph operation for bipartite graphs, we propose the
bipartite power (or bi-power for short) of graphs.
For a (bipartite or non-bipartite) graph $G$, we define the $t$-th
bi-power $G_B^t$ as the graph on $V(G)$ with edge set
$$E(G^t_B)=\{xy: d_G(x,y)\mbox{ is odd and at most }t\},$$
where $d_G(x,y)$ is the distance between $x$ and $y$ in $G$. Note
that $G^2_B=G$, $G^4_B=G^3_B$, etc. It is natural to ask whether there
exist $k,t$ such that the $t$-th bi-power of every $k$-connected
balanced bipartite graph is Hamiltonian. The answer is negative by
the following construction.
Let $s\geq t$ be even, and let $V_0,V_1,\ldots,V_{s+1}$ be disjoint
sets of vertices such that $|V_0|=|V_{s+1}|>sk/2$ and $|V_i|=k$ for
$1\leq i\leq s$. Let $G$ be the graph on $\bigcup_{i=0}^{s+1}V_i$
obtained by adding all possible edges between $V_i$ and $V_{i+1}$,
$0\leq i\leq s$. Thus $G$ is $k$-connected. Since the distance in $G$
between the vertices in $V_0$ and those in $V_{s+1}$ is more than
$t$, $V_0\cup V_{s+1}$ is an independent set of $G_B^t$. Since
$|V_0\cup V_{s+1}|>sk$, this independent set contains more than half
of the vertices, so $G_B^t$ is not Hamiltonian.
So we need some additional conditions to get Hamiltonian graphs.
\begin{theorem}\label{ThFinite}
Let $G$ be a connected finite bipartite graph of order at least 4.
If $G$ has a perfect matching, then $G^3_B$ is Hamiltonian.
\end{theorem}
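For a small illustration (an example we add for concreteness), consider the path $P_4=v_1v_2v_3v_4$, which has the perfect matching $\{v_1v_2,v_3v_4\}$. The pairs of vertices at odd distance at most $3$ are exactly
\begin{equation*}
E((P_4)_B^3)=\{v_1v_2,\ v_2v_3,\ v_3v_4,\ v_1v_4\},
\end{equation*}
so $(P_4)_B^3$ is a $4$-cycle and hence Hamiltonian, as guaranteed by Theorem \ref{ThFinite}.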
We remark that the condition `$G$ has a perfect matching' in Theorem
\ref{ThFinite} cannot be replaced by `$G_B^3$ has a perfect
matching'. Let the bi-star $S_{k,k}$, $k\geq 3$, be the tree with
two vertices of degree $k+1$ and all other vertices of degree 1; and
let $G$ be the graph obtained by subdividing each pendant edge of
$S_{k,k}$ twice. One can check that $G_B^3$ has a perfect matching
but is not Hamiltonian.
Now we turn to the infinite graphs. Thomassen \cite{Th} generalized
Theorem \ref{ThFl} to locally finite graphs with one end. Diestel
\cite{Di05,Di16} launched the ambitious project of extending results
on finite Hamiltonian cycles to Hamiltonian circles in infinite
graphs. Diestel \cite{Di05} then conjectured that the square of any
2-connected locally finite graph has a Hamiltonian circle.
Georgakopoulos \cite{Ge} confirmed Diestel's conjecture. Since it is
necessary to introduce a lot of terminology and notations in order
to state the definition of Hamiltonian circles of infinite graphs,
we refrain from stating the concept explicitly in this introductory
section. Here we present Georgakopoulos's theorem on infinite graphs
concerning our topic. We will explain the concepts in Section 3 (see
also \cite{Di16}, Chapter 8). We apologize for the inconvenience
this may cause.
\begin{theorem}[Georgakopoulos \cite{Ge}]
Suppose that $G$ is an infinite locally finite graph.
\begin{mathitem}
\item If $G$ is 2-connected, then $G^2$ has a Hamiltonian circle.
\item If $G$ is connected, then $G^3$ has a Hamiltonian circle.
\end{mathitem}
\end{theorem}
Several other results in the area of Hamiltonian circles of infinite
graphs can be found in \cite{BrYu,CuWaYu,He15,He16,Le}. Our main
result of the paper is the infinite extension of Theorem
\ref{ThFinite}.
\begin{theorem}\label{ThInfinite}
Let $G$ be a connected infinite locally finite bipartite graph. If
$G$ has a perfect matching, then $G^3_B$ has a Hamiltonian circle.
\end{theorem}
The rest of the paper is organized as follows. In Section 2, we
exhibit the proof of Theorem \ref{ThFinite}, with a lemma that
will also be used in our infinite proof. In Section 3, after
introducing the basic terminology and notations, we give some lemmas
and techniques for dealing with the infinite Hamiltonian problems,
following which we complete the proof of Theorem \ref{ThInfinite}.
\section{Finite graphs}
For the purposes of the infinite case, we first give some
new definitions and a lemma.
Let $G$ be a graph, $A,B\subseteq V(G)$ be disjoint, and
$P=v_0v_1\ldots v_p$ be a nontrivial path of $G$. We say that $P$ is
an \emph{$(A,B)$-path} if $V(P)\cap A=\{v_0\}$ and $V(P)\cap
B=\{v_p\}$; and $P$ is an \emph{$A$-path} if $V(P)\cap
A=\{v_0,v_p\}$. For two disjoint subgraphs $H,K$ of $G$, a
$(V(H),V(K))$-path ($V(H)$-path) is also called an $(H,K)$-path
($H$-path). Let $F$ be a path or a cycle, $T$ be a tree of $G$,
$e\in E(T)$ and $T_1,T_2$ be the two components of $T-e$. We say
that $F$ crosses the edge $e$ $k$ times with respect to $T$ if $F$
contains $k$ $(T_1,T_2)$-paths.
Now we prove the following lemma.
\begin{lemma}\label{LeFiniteHamiltonian}
If $T$ is a finite tree and $M$ is a perfect matching of $T$, then
for every edge $xy\in M$, $T_B^3$ has a Hamiltonian $(x,y)$-path
crossing every edge $e\in E(T)\backslash M$ exactly twice with respect to
$T$.
\end{lemma}
\begin{proof}
We use induction on the order of $T$. The assertion is trivially
true if $T$ has only two vertices. So we assume that $|V(T)|\geq 4$.
Since $T$ is a tree and $xy\in E(T)$, every component of $T-\{x,y\}$
contains a neighbor of exactly one of $x$ and $y$. Let
$\mathcal{H}^1=\{H_1^1,\ldots,H_k^1\}$ be the set of components of
$T-\{x,y\}$ that contain a neighbor of $x$ and
$\mathcal{H}^2=\{H_1^2,\ldots,H_l^2\}$ be the set of components of
$T-\{x,y\}$ that contain a neighbor of $y$. For each
$H_i^1\in\mathcal{H}^1$, let $x_i^1y_i^1\in M$ be such that $y_i^1$
is adjacent to $x$ (so $x_i^1,y_i^1\in V(H_i^1)$); for each
$H_i^2\in\mathcal{H}^2$, let $x_i^2y_i^2\in M$ be such that $x_i^2$
is adjacent to $y$ (so $x_i^2,y_i^2\in V(H_i^2)$). By the induction
hypothesis, $(H_i^1)_B^3$ has a Hamiltonian $(x_i^1,y_i^1)$-path
$P_i^1$ that crosses every edge $e\in E(H_i^1)\backslash M$ exactly
twice with respect to $H_i^1$; and $(H_i^2)_B^3$ has a Hamiltonian
$(x_i^2,y_i^2)$-path $P_i^2$ that crosses every edge $e\in
E(H_i^2)\backslash M$ exactly twice with respect to $H_i^2$. If
$\mathcal{H}^1=\emptyset$, then $P=xy_1^2P_1^2x_1^2\ldots
y_l^2P_l^2x_l^2y$ is a Hamiltonian $(x,y)$-path of $T_B^3$ that
crosses every edge $e\in E(T)\backslash M$ exactly twice with respect
to $T$. The case $\mathcal{H}^2=\emptyset$ is similar. If neither
$\mathcal{H}^1$ nor $\mathcal{H}^2$ is empty, then
$P=xy_1^2P_1^2x_1^2\ldots y_l^2P_l^2x_l^2y_1^1P_1^1x_1^1\ldots
y_k^1P_k^1x_k^1y$ is a Hamiltonian $(x,y)$-path of $T_B^3$ that
crosses every edge $e\in E(T)\backslash M$ exactly twice with respect
to $T$.
\end{proof}
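As a quick illustration of Lemma \ref{LeFiniteHamiltonian} (an example added for concreteness), take $T=P_4=v_1v_2v_3v_4$ with $M=\{v_1v_2,v_3v_4\}$ and $xy=v_1v_2$. Then $v_1v_4v_3v_2$ is a Hamiltonian $(v_1,v_2)$-path of $T_B^3$ (note that $d_T(v_1,v_4)=3$), and it crosses the unique edge $v_2v_3\in E(T)\backslash M$ exactly twice with respect to $T$: once along $v_1v_4$ and once along $v_3v_2$.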
We say that a balanced bipartite graph $G$ is
\emph{Hamilton-laceable} if for any two vertices $x,y$ in distinct
bipartition sets, $G$ has a Hamiltonian $(x,y)$-path (i.e., an
$(x,y)$-path containing all vertices of $G$). The concept
Hamilton-laceability was introduced by Simmons \cite{Si78,Si81}, and
some times it is called Hamilton-biconnectedness (see
\cite{FaMaMaOr,FaMaOr} for examples). Note that every
Hamilton-laceable balanced bipartite graph (apart from $K_2$) is
Hamiltonian. Now we prove the following result, which is stronger
than Theorem \ref{ThFinite}.
\begin{theorem}
If $G$ is a connected finite bipartite graph that has a perfect
matching, then $G_B^3$ is Hamilton-laceable.
\end{theorem}
\begin{proof}
We use induction on the order of $G$. The assertion is trivial if
$G$ has only two vertices. So we assume that $|V(G)|\geq 4$. If $G$
is not a tree, then it has a spanning tree $T$ with a perfect matching,
which can be obtained by taking a perfect matching of $G$ and adding
edges one by one while avoiding creating cycles, until no more edges
can be added. Since $G$ is bipartite, $d_G(u,v)$ and $d_T(u,v)$ have
the same parity and $d_G(u,v)\le d_T(u,v)$ for all vertices $u,v$, so
$T_B^3$ is a spanning subgraph of $G_B^3$. So we need only consider
the case that $G$ is a tree. Let $M$
be a perfect matching of $G$.
Let $x\in X,y\in Y$ be any two vertices, where $X,Y$ are the two
bipartition sets of $G$. We will find a Hamiltonian $(x,y)$-path in
$G_B^3$. Recall that we assume that $G$ is a tree. If $xy\in M$,
then we are done by Lemma \ref{LeFiniteHamiltonian}. So we assume
that $xy\notin M$. It follows that the unique $(x,y)$-path of $G$
contains some edge $e=x'y'\in E(G)\backslash M$, where $x'\in X$
and $y'\in Y$. Thus $G-x'y'$ has exactly two components, one of which
contains $x$ and the other contains $y$. Let $H_1,H_2$ be the two
components of $G-x'y'$ containing $x$ and $y$, respectively.
If $y'\in V(H_1)$ and $x'\in V(H_2)$, then by induction hypothesis,
$(H_1)_B^3$ has a Hamiltonian $(x,y')$-path $P_1$ and $(H_2)_B^3$
has a Hamiltonian $(x',y)$-path $P_2$. Thus $P=P_1y'x'P_2$ is a
Hamiltonian $(x,y)$-path of $G_B^3$. If $x'\in V(H_1)$ and $y'\in
V(H_2)$, then let $y'',x''$ be the vertices matched by $M$ to $x',y'$,
respectively. By induction hypothesis, $(H_1)_B^3$ has a Hamiltonian
$(x,y'')$-path $P_1$ and $(H_2)_B^3$ has a Hamiltonian
$(x'',y)$-path $P_2$. Note that $x''y''\in E(G_B^3)$, implying that
$P=P_1y''x''P_2$ is a Hamiltonian $(x,y)$-path of $G_B^3$.
\end{proof}
\section{Infinite graphs}
\subsection{Basic terminology and notations}
Now we consider the infinite graphs. We first give the terminology
concerning circles of infinite graphs.
An (infinite) graph $G$ is \emph{locally finite} if every vertex of
$G$ has finite degree. In this section, we always assume that $G$ is
a locally finite graph. A 1-way infinite path is called a
\emph{ray}, and the subrays of a ray are its \emph{tails}. Two rays
of $G$ are \emph{equivalent} if for every finite set $S\subseteq
V(G)$, there is a component of $G-S$ containing tails of both rays.
We write $R_1\approx_GR_2$ if $R_1$ and $R_2$ are equivalent in $G$.
The corresponding equivalence classes of rays are the \emph{ends} of
$G$. We denote by $\varOmega(G)$ the set of ends of $G$.
Let $\alpha\in\varOmega(G)$ and $S\subseteq V(G)$ be a finite set.
We denote by $C_G(S,\alpha)$ the unique component of $G-S$ that
contains a ray (and a tail of every ray) in $\alpha$. We let
$\varOmega_G(S,\alpha)$ be the set of all ends $\beta$ with
$C_G(S,\beta)=C_G(S,\alpha)$. When no confusion occurs, we will
denote $C_G(S,\alpha)$ and $\varOmega_G(S,\alpha)$ by $C(S,\alpha)$
and $\varOmega(S,\alpha)$, respectively.
To built a topological space $|G|$ we associate each edge $uv\in
E(G)$ with a homeomorphic image of the unit real interval $[0,1]$,
where 0,1 map to $u,v$ and different edges may only intersect at
common endpoints. Basic open neighborhoods of points that are
vertices or inner points of edges are defined in the usual way, that
is, in the topology of the 1-complex. For an end $\alpha$ we let the
basic neighborhood
$\widehat{C}(S,\alpha)=C(S,\alpha)\cup\varOmega(S,\alpha)\cup
E(S,\alpha)$, where $E(S,\alpha)$ is the set of all inner points of
the edges between $C(S,\alpha)$ and $S$. This completes the
definition of $|G|$, called the Freudenthal compactification of $G$.
In \cite{Di16} it is shown that if $G$ is connected and locally
finite, then $|G|$ is a compact Hausdorff space.
An \emph{arc} of $G$ is a homeomorphic map of the unit interval $[0,1]$
in $|G|$; and a \emph{circle} is a homeomorphic map of the unit
circle $S^1$ in $|G|$. A circle of $G$ is \emph{Hamiltonian} if it
meets every vertex (and then every end) of $G$.
We define a \emph{curve} of $G$ as a continuous map of the unit
interval $[0,1]$ in $|G|$. A curve is \emph{closed} if $0,1$ map to
the same point; and is \emph{Hamiltonian} if it is closed and meets
every vertex of $G$ exactly once. In other words, a Hamiltonian
curve is a continuous map of the unit circle $S^1$ in $|G|$ that
meets every vertex of $G$ exactly once. Note that a Hamiltonian
circle is a Hamiltonian curve but not vice versa.
\subsection{Faithful subgraphs}
For a finite graph $G$, if $G$ has a spanning subgraph $H$ that is
Hamiltonian, then $G$ itself is Hamiltonian. But this is not true
for infinite graphs, in the sense that $H$ having a Hamiltonian
circle does not imply that $G$ has one. The main reason is that we have
to guarantee injectivity at the ends in Hamiltonian circles. Now we
define a type of subgraphs that are stable on Hamiltonian circles.
We say a subgraph $H$ of $G$ is \emph{faithful} if
\begin{mathitem}
\item every end of $G$ contains a ray of $H$; and
\item for any two rays $R_1,R_2$ of $H$, $R_1\approx_HR_2$ if and only
if $R_1\approx_GR_2$.
\end{mathitem}
If $H\leq G$, then for every finite set $S\subseteq V(H)$, each component
of $H-S$ is contained in a component of $G-S$. Thus the condition
(2) can be replaced by `for any two rays $R_1,R_2$ of $H$,
$R_1\approx_GR_2$ implies $R_1\approx_HR_2$'.
\begin{lemma}\label{FaFaithfulCircle}
Let $H$ be a faithful spanning subgraph of $G$. If $H$ has a
Hamiltonian circle, then $G$ has a Hamiltonian circle.
\end{lemma}
\begin{proof}
We define a map $\pi: \varOmega(H)\rightarrow\varOmega(G)$ such that
for the end $\alpha$ of $H$, $\pi(\alpha)$ is the end of $G$
containing all the rays in $\alpha$. By the definition of the
faithful subgraphs, $\pi$ is a bijection between $\varOmega(H)$ and
$\varOmega(G)$ (see also \cite{Ge}). Let $\alpha\in\varOmega(H)$ and
$S\subseteq V(G)$ be finite. Since $H\leq G$, the component
$C_H(S,\alpha)$ is contained in $C_G(S,\pi(\alpha))$. If there is an
end $\beta\in\varOmega_H(S,\alpha)$, then every ray in $\beta$ has a
tail contained in $C_H(S,\alpha)$, which is contained in
$C_G(S,\pi(\alpha))$. This implies that
$\pi(\beta)\in\varOmega_G(S,\pi(\alpha))$. It follows that
$\pi(\varOmega_H(S,\alpha))\subseteq\varOmega_G(S,\pi(\alpha))$.
Now let $\sigma_H: S^1\rightarrow|H|$ be a Hamiltonian circle of
$H$. We define $\sigma_G: S^1\rightarrow|G|$ such that
$$\sigma_G(p)=\left\{\begin{array}{ll}
\pi(\sigma_H(p)), & \mbox{if }\sigma_H(p)\in\varOmega(H);\\
\sigma_H(p), & \mbox{otherwise}.
\end{array}\right.$$
Clearly the map $\sigma_G$ is injective and meets all vertices of
$V(G)$. Now we prove that it is continuous.
Since $\sigma_H$ is a homeomorphism onto its image, $\sigma_G$ is continuous at a point
$p$ if $\sigma_H(p)$ is a vertex or is an inner point of an edge.
Now we assume that $\sigma_H(p)=\alpha\in\varOmega(H)$. Let
$\boldsymbol{p}=(p_i)_{i=0}^\infty$ be a sequence of points in $S^1$
converges to $p$ and let $S\subseteq V(G)$ be a finite set. Since
$\sigma_H$ is continuous, the neighborhood $\widehat{C}_H(S,\alpha)$
of $\alpha$ contains almost all terms of
$(\sigma_H(p_i))_{i=0}^\infty$ (that is, there exists $j$ such that
$\sigma_H(p_i)\in\widehat{C}_H(S,\alpha)$ for all $i\geq j$). Recall
that $C_H(S,\alpha)\subseteq C_G(S,\pi(\alpha))$,
$E_H(S,\alpha)\subseteq E_G(S,\pi(\alpha))$ and
$\pi(\varOmega_H(S,\alpha))\subseteq\varOmega_G(S,\pi(\alpha))$. It
follows that $\widehat{C}_G(S,\pi(\alpha))$ contains almost all
terms of $(\sigma_G(p_i))_{i=0}^\infty$. Thus $\sigma_G$ is continuous;
since it is an injective continuous map from the compact space $S^1$
to the Hausdorff space $|G|$, it is a homeomorphism onto its image,
and hence a Hamiltonian circle of $G$.
\end{proof}
\begin{lemma}\label{FaFaithfulFaithful}
Suppose that $K\leq H\leq G$. If $H$ is faithful to $G$ and $K$ is
faithful to $H$, then $K$ is faithful to $G$.
\end{lemma}
\begin{proof}
Let $\alpha_G$ be an arbitrary end of $G$. Since $H$ is faithful to
$G$, $H$ has a ray $R_H\in\alpha_G$. Let $\alpha_H$ be the end of
$H$ with $R_H\in\alpha_H$. Since $K$ is faithful to $H$, $K$ has a
ray $R_K\in\alpha_H$. Thus $R_K\approx_HR_H$, implying that
$R_K\approx_GR_H$, that is, $R_K\in\alpha_G$.
Now let $R_1,R_2$ be two rays of $K$. Since $K$ is faithful to $H$,
$R_1\approx_KR_2$ if and only if $R_1\approx_HR_2$. Since $H$ is
faithful to $G$, $R_1\approx_HR_2$ if and only if $R_1\approx_GR_2$.
This implies that $R_1\approx_KR_2$ if and only if
$R_1\approx_GR_2$. It follows that $K$ is faithful to $G$.
\end{proof}
A \emph{comb} of $G$ is the union of a ray $R$ with infinitely many
disjoint finite paths having precisely their first vertex on $R$;
the last vertices of the paths are the \emph{teeth} of the comb; and
the ray $R$ is the \emph{spine} of the comb. We will use the
following Star-Comb Lemma in our proof.
\begin{lemma}[Diestel, see \cite{Di16}]\label{LeComb}
If $U$ is an infinite set of vertices in a connected graph, then the
graph contains either a comb with all teeth in $U$ or a subdivision
of an infinite star with all leaves in $U$.
\end{lemma}
Since a locally finite graph $G$ contains no infinite stars, Lemma
\ref{LeComb} always yields a comb of $G$. If $H$ is a connected
spanning subgraph of $G$, then for every ray $R$ of $G$, the spine
$R'$ of a comb of $H$ with all teeth in $V(R)$ is a ray in $H$ with
$R\approx_GR'$. Therefore the connected spanning subgraph $H$ is
faithful to $G$ if and only if for any two rays $R_1,R_2$ of $H$,
$R_1\approx_GR_2$ implies $R_1\approx_HR_2$.
\begin{lemma}\label{FaFaithfulSpanning}
Let $H$ be a spanning subgraph of $G$ and $K$ be a spanning subgraph
of $H$. If $K$ is connected and faithful to $G$, then $H$ is
faithful to $G$ and $K$ is faithful to $H$.
\end{lemma}
\begin{proof}
Let $R_1,R_2$ be two rays of $H$ with $R_1\approx_GR_2$. Let
$\alpha_G$ be the end of $G$ containing $R_1,R_2$, let
$R\in\alpha_G$ be a ray of $K$. By Lemma \ref{LeComb}, $K$ has a
comb with all teeth in $V(R_1)$. Let $R'_1$ be the spine of the
comb. Thus $R'_1$ is a ray of $K$ and $R_1\approx_HR'_1$. Since
$H\leq G$, $R_1\approx_GR'_1$ and then $R\approx_GR'_1$. Since $K$
is faithful to $G$, $R\approx_KR'_1$. Since $K\leq H$,
$R\approx_HR'_1$, and then $R\approx_HR_1$. By a similar analysis,
we have $R\approx_HR_2$, and thus $R_1\approx_HR_2$. This implies
that $H$ is faithful to $G$.
Now let $R_1,R_2$ be two rays of $K$ with $R_1\approx_HR_2$. Since
$H$ is faithful to $G$, $R_1\approx_GR_2$. Since $K$ is faithful to
$G$, $R_1\approx_KR_2$. It follows that $K$ is faithful to $H$.
\end{proof}
\begin{lemma}\label{FaFaithfulTree}
Let $\mathcal{P}$ be a partition of $V(G)$ such that $G[P]$ is
connected and finite for every $P\in\mathcal{P}$, let $\mathcal{G}$
be the graph on $\mathcal{P}$ such that for $P_1,P_2\in\mathcal{P}$,
$P_1P_2\in E(\mathcal{G})$ if and only if
$E_G(P_1,P_2)\neq\emptyset$, and let $\mathcal{T}$ be a spanning
tree of $\mathcal{G}$. For every partition set $P\in\mathcal{P}$,
let $T_P$ be a spanning tree of $G[P]$; and for every edge
$f=P_1P_2\in E(\mathcal{G})$, let $e_f$ be an edge in
$E_G(P_1,P_2)$. Let $T$ be the spanning tree of $G$ with edge set
$$\{e_f: f\in E(\mathcal{T})\}\cup\bigcup_{P\in\mathcal{P}}E(T_P).$$
If $\mathcal{T}$ is faithful to $\mathcal{G}$, then $T$ is faithful
to $G$.
\end{lemma}
\begin{proof}
For every ray $R=v_0v_1v_2\ldots$ of $G$, we define a ray $\rho(R)$
of $\mathcal{G}$ as follows: Let $P_0\in\mathcal{P}$ with $v_0\in
P_0$ and $\phi(0)$ be the maximum integer with $v_{\phi(0)}\in P_0$
($\phi(0)$ exists since $P_0$ is finite). Suppose we have already
defined $P_{i-1}$ and $\phi(i-1)$, $i\geq 1$. Let
$P_i\in\mathcal{P}$ such that $v_{\phi(i-1)+1}\in P_i$ and $\phi(i)$
be the maximum integer with $v_{\phi(i)}\in P_i$. Clearly
$v_{\phi(i-1)}\in P_{i-1}$, $v_{\phi(i-1)+1}\in P_i$, implying that
$E_G(P_{i-1},P_i)\neq\emptyset$, and $P_{i-1}P_i\in E(\mathcal{G})$,
$i\geq 1$. Now it follows that $\rho(R)=P_0P_1P_2\ldots$ is a ray of
$\mathcal{G}$. Note that if $R$ is a ray of $T$, then $\rho(R)$ is a
ray of $\mathcal{T}$.
We claim that if $R_1\approx_GR_2$, then
$\rho(R_1)\approx_\mathcal{G}\rho(R_2)$. Let
$\mathcal{S}\subseteq\mathcal{P}=V(\mathcal{G})$ be an arbitrary
finite set. Set $S=\bigcup_{P\in\mathcal{S}}P$. Since each
$P\in\mathcal{P}$ is finite, $S$ is finite. If $R_1\approx_GR_2$,
then there is a component $C$ of $G-S$ that contains a tail $R'_i$
of $R_i$, $i=1,2$. Recall that each $P\in\mathcal{P}$ induces a
connected finite subgraph of $G$. Thus the subgraph $\mathcal{C}$ of
$\mathcal{G}$ induced by $\{P\in\mathcal{P}: P\subseteq V(C)\}$ is a
component of $\mathcal{G}-\mathcal{S}$. Since $\mathcal{C}$ contains
all the vertices $P$ with $P\cap V(R'_i)\neq\emptyset$,
$\mathcal{C}$ contains a tail of $\rho(R_i)$ for $i=1,2$. It follows
that $\rho(R_1)\approx_\mathcal{G}\rho(R_2)$.
For every ray $\mathcal{R}=P_0P_1P_2\ldots$ of $\mathcal{T}$, we
define a ray $\varrho(\mathcal{R})$ of $T$ as follows: Let $u_0\in
P_0$ be a fixed vertex. For $i\geq 0$, let
$e_{P_iP_{i+1}}=v_iu_{i+1}$ be the unique edge in
$E_T(P_i,P_{i+1})$, let $R_i$ be the unique $(u_i,v_i)$-path of
$T_{P_i}$. It follows that
$\varrho(\mathcal{R})=u_0R_0v_0u_1R_1v_1u_2R_2v_2\ldots$ is a ray of
$T$. Note that for every ray $\mathcal{R}$ of $\mathcal{T}$,
$\rho(\varrho(\mathcal{R}))=\mathcal{R}$; and for every ray $R$ of
$T$, $R$ and $\varrho(\rho(R))$ differ only in finite initial
segments (and so $R\approx_T\varrho(\rho(R))$).
We claim that if $\mathcal{R}_1\approx_\mathcal{T}\mathcal{R}_2$,
then $\varrho(\mathcal{R}_1)\approx_T\varrho(\mathcal{R}_2)$. Let
$S\subseteq V(G)$ be an arbitrary finite set. Set
$\mathcal{S}=\{P\in\mathcal{P}: P\cap S\neq\emptyset\}$. So
$\mathcal{S}$ is a finite subset of $V(\mathcal{G})$. If
$\mathcal{R}_1\approx_\mathcal{T}\mathcal{R}_2$, then there is a
component $\mathcal{C}$ of $\mathcal{T}-\mathcal{S}$ that contains a
tail $\mathcal{R}'_i$ of $\mathcal{R}_i$, $i=1,2$. Let $C$ be the
subgraph of $T$ induced by $\bigcup_{P\in V(\mathcal{C})}P$. It
follows that $C$ is contained in a component of $T-S$. Since $C$
contains all vertices in $\bigcup_{P\in V(\mathcal{R}'_i)}P$, $C$
contains a tail of $\varrho(\mathcal{R}_i)$ for $i=1,2$. It follows
that $\varrho(\mathcal{R}_1)\approx_T\varrho(\mathcal{R}_2)$.
Now we prove the lemma. Suppose that $\mathcal{T}$ is faithful to
$\mathcal{G}$, and $R_1,R_2$ are two rays of $T$ such that
$R_1\approx_GR_2$. It follows that $\rho(R_1),\rho(R_2)$ are two
rays of $\mathcal{T}$ with $\rho(R_1)\approx_\mathcal{G}\rho(R_2)$.
Since $\mathcal{T}$ is faithful to $\mathcal{G}$,
$\rho(R_1)\approx_\mathcal{T}\rho(R_2)$, and thus
$\varrho(\rho(R_1))\approx_T\varrho(\rho(R_2))$. Recall that
$R_1\approx_T\varrho(\rho(R_1))$ and
$R_2\approx_T\varrho(\rho(R_2))$. We have $R_1\approx_TR_2$,
implying that $T$ is faithful to $G$.
\end{proof}
\begin{lemma}\label{FaFaithfulPower}
For any connected graph $G$ and integer $t\geq 1$, $G$ is faithful
to $G^t$ and $G_B^t$.
\end{lemma}
\begin{proof}
Suppose that $R_1,R_2$ are two rays of $G$ with
$R_1\approx_{G^t}R_2$. Let $S\subseteq V(G)$ be an arbitrary finite
set, and set $S'=S\cup N_{G^t}(S)$. Clearly $S'$ is finite, and thus
there is a component $C'$ of $G^t-S'$ that contains tails of both
$R_1$ and $R_2$. For any two adjacent vertices $u,v\in
V(G)\backslash S'$, $G$ has a $(u,v)$-path $P$ of length at most
$t$. Since both $u,v$ have distance more than $t$ from $S$,
$V(P)\cap S=\emptyset$. It follows that $u,v$ are contained in a
common component of $G-S$. This implies that all vertices in $V(C')$
are contained in a common component $C$ of $G-S$. Thus $C$ contains
tails of both $R_1$ and $R_2$, implying that $G$ is faithful to
$G^t$.
Recall that $G_B^t$ is a spanning subgraph of $G^t$. By Lemma
\ref{FaFaithfulSpanning}, $G$ is faithful to $G_B^t$ as well.
\end{proof}
A rooted tree $T$ of $G$ is \emph{normal} if the end-vertices of
every $T$-path in $G$ are comparable in the tree-order of $T$. Note
that if $T$ is spanning, then every $T$-path is an edge of $G$. The
\emph{normal rays} of $T$ are those starting at the root of $T$.
From the following lemma, one can see that a normal spanning tree of
$G$ is faithful to $G$.
\begin{lemma}[Diestel, see \cite{Di16}]\label{LeNormalFaithful}
If $T$ is a normal spanning tree of $G$, then every end of $G$
contains exactly one normal ray of $T$.
\end{lemma}
One can see that normal spanning trees have a nice property for
infinite graphs. By the following theorem, we can always find a
normal spanning tree in a connected locally finite graph (note that
every connected locally finite graph is countable).
\begin{theorem}[Jung \cite{Ju}]\label{ThNormalTree}
Every countable connected graph has a normal spanning tree.
\end{theorem}
\subsection{Degree of ends}
The \emph{(vertex-)degree} of an end $\alpha\in\varOmega(G)$ is the
maximum number of vertex-disjoint rays in $\alpha$; and the
edge-degree of $\alpha$ is the maximum number of edge-disjoint rays
in $\alpha$. We refer the reader to \cite{BrSt} for some properties
on the end degrees of graphs. Before giving our lemma concerning the
degree of ends, we first list the following K\"onig's Infinite
Lemma.
\begin{lemma}[K\"onig, see \cite{Di16}]\label{LeKonig}
Let $V_0,V_1,V_2,\ldots$ be an infinite sequence of disjoint
non-empty finite sets, and let $G$ be a graph on
$\bigcup_{i=0}^\infty V_i$. Assume that every vertex in $V_i$ has a
neighbor in $V_{i-1}$, $i\geq 1$. Then $G$ has a ray
$R=v_0v_1v_2\ldots$ with $v_i\in V_i$ for all $i\geq 0$.
\end{lemma}
\begin{lemma}\label{FaDegreeArc}
Let $A,B\subseteq V(G)$ be disjoint, and $\alpha\in\varOmega(G)$.
\begin{mathitem}
\item $G$ has $k$ vertex-disjoint $(A,B)$-paths if and only if $|G|$ has
$k$ vertex-disjoint $(A,B)$-curves.
\item $\alpha$ has degree at least $k$ if and only if $|G|$ has $k$
vertex-disjoint nontrivial curves ending in $\alpha$.
\end{mathitem}
\end{lemma}
\begin{proof}
(1) The necessity of the assertion is trivial since a (topological)
path of $G$ is also a curve of $|G|$. Now we prove the sufficiency
of the assertion. Suppose that $G$ has no $k$ vertex-disjoint
$(A,B)$-paths. By Menger's Theorem, there is a set $S\subseteq V(G)$
with $|S|<k$ such that $G-S$ has no $(A,B)$-path. If $|G|$ has $k$
vertex-disjoint $(A,B)$-curves, then one of them is contained in
$|G-S|$. It follows that some component of $G-S$ contains some
vertices of both $A$ and $B$, and thus $G-S$ has an $(A,B)$-path
(see also \cite{DiKu}), a contradiction.
(2) The necessity of the assertion is trivial since a ray in
$\alpha$ is a curve of $|G|$ ending in $\alpha$. Now we prove the
sufficiency of the assertion. Clearly any nontrivial curve ending in
$\alpha$ contains some vertices. For convenience we assume that
$|G|$ has $k$ vertex-disjoint curves between some vertices and
$\alpha$. Let $S_0$ be the set of the starting vertices of the $k$
curves. For $i\geq 1$, set $S_i=S_{i-1}\cup N(S_{i-1})$. Thus $S_i$
is finite and $|G|$ has $k$ vertex-disjoint curves between $S_0$ and
$C(S_i,\alpha)$, for all $i\geq 0$. By (1), $G$ has $k$
vertex-disjoint paths between $S_0$ and $C(S_i,\alpha)$. Let
$\mathcal{V}_i$ be the set of the unions of $k$ vertex-disjoint
paths between $S_0$ and $C(S_i,\alpha)$. Since every path between
$S_0$ and $C(S_i,\alpha)$ is contained in $S_{i+1}$, which is
finite, we can see that $\mathcal{V}_i$ is finite for every $i\geq
0$.
We define a graph $\mathcal{G}$ on
$\bigcup_{i=0}^\infty\mathcal{V}_i$ such that
$U_{i-1}\in\mathcal{V}_{i-1}$ is adjacent to $U_i\in\mathcal{V}_i$
if and only if the $k$ paths of $U_{i-1}$ are the subpaths of the
$k$ paths of $U_i$. Clearly every vertex in $\mathcal{V}_i$ has a
neighbor in $\mathcal{V}_{i-1}$. By Lemma \ref{LeKonig},
$\mathcal{G}$ has a ray $\mathcal{R}=U_0U_1U_2\ldots$ with
$U_i\in\mathcal{V}_i$, $i\geq 0$. It follows that
$\bigcup_{i=0}^\infty U_i$ is the union of $k$ vertex-disjoint rays
in $\alpha$, implying that the degree of $\alpha$ is at least $k$.
\end{proof}
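For finite graphs, the Menger-type statement behind part (1) can be verified directly. The following is a small illustrative sketch (not part of the argument), assuming the \texttt{networkx} library; the grid graph and the vertex sets $A$, $B$ are made-up test data.

```python
# Illustrative sketch (not part of the argument): checking the finite Menger
# statement behind part (1) of the lemma with networkx.  The grid graph and
# the vertex sets A, B are made-up test data.
import networkx as nx
from networkx.algorithms.connectivity import node_disjoint_paths, minimum_node_cut

G = nx.grid_2d_graph(4, 4)               # a small finite test graph
A = [(0, j) for j in range(4)]           # one side
B = [(3, j) for j in range(4)]           # the other side

# Attach a super-source s to A and a super-sink t to B; internally
# vertex-disjoint s-t paths correspond to vertex-disjoint (A,B)-paths in G.
H = G.copy()
H.add_edges_from(("s", a) for a in A)
H.add_edges_from((b, "t") for b in B)

paths = list(node_disjoint_paths(H, "s", "t"))
cut = minimum_node_cut(H, "s", "t")

# Menger's theorem: the two quantities agree.
print("vertex-disjoint (A,B)-paths:", len(paths))
print("minimum separating vertex set:", len(cut))
```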
\begin{lemma}\label{FaEndDegreek}
Let $T$ be a faithful spanning tree of $G$ and $F\subseteq E(T)$
such that every component of $T-F$ is finite. If for every edge
$e\in F$, $G$ has at most $k$ edges between the two components
$T_1,T_2$ of $T-e$, then every end of $G$ has degree at most $k$.
\end{lemma}
\begin{proof}
Let $\alpha_G$ be an arbitrary end of $G$, $R$ be a ray of $T$
contained in $\alpha_G$, and $\alpha_T$ be the end of $T$ containing
$R$. We first claim that for every ray $R'\in\alpha_G$ and every
finite subtree $T_0$ of $T$, the component $C=C_T(V(T_0),\alpha_T)$
contains almost all vertices of $R'$. Suppose otherwise that $R'$
has infinitely many vertices contained in $T-C$. Note that $T-C$ is
connected. By Lemma \ref{LeComb}, $T-C$ has a comb with all teeth in
$V(R')$. Let $R''$ be the spine of the comb. It follows that $R''$
is a ray of $T$ and $R'\approx_GR''$. Since $R\approx_GR'$,
$R\approx_GR''$. Since $T$ is faithful to $G$, $R\approx_TR''$,
contradicting the fact that $R''$ has no tail in $C$.
Now we prove the lemma. Let $\alpha_G,\alpha_T$ be defined as above.
Suppose that $\alpha_G$ has degree at least $k+1$. Let $S_0$ be the set of starting vertices of $k+1$ vertex-disjoint rays in $\alpha_G$, let $T_0$ be a finite subtree of $T$ containing $S_0$, and let $\mathcal{H}$ be the set of
the components $H$ of $T-F$ with $V(H)\cap V(T_0)\neq\emptyset$. Set
$S_1=\bigcup_{H\in\mathcal{H}}V(H)$, and $T_1=T[S_1]$. Clearly
$S_1\supseteq S_0$ is finite and $T_1$ is a finite subtree of $T$.
Recall that every ray in $\alpha_G$ contains some vertices of
$C_T(S_1,\alpha_T)$. It follows that $G$ has $k+1$ vertex-disjoint
paths between $S_1$ and $C_T(S_1,\alpha_T)$. Let $e$ be the unique
edge of $T$ between $S_1$ and $C_T(S_1,\alpha_T)$. Clearly $e\in F$
and thus $|E_G(S_1,C_T(S_1,\alpha_T))|\leq k$, a contradiction.
\end{proof}
\subsection{Hamiltonian curves and Hamiltonian circles}
In \cite{KuLiTh}, the authors obtained some necessary and sufficient
conditions for a graph $G$ to have a Hamiltonian curve. We list one
of the conditions which we will use in our paper.
\begin{theorem}[K\"undgen et al.
\cite{KuLiTh}]\label{ThHamiltonianCurve} The graph $G$ has a
Hamiltonian curve if and only if every finite set $S\subseteq V(G)$
is contained in a cycle of $G$.
\end{theorem}
Clearly if a Hamiltonian curve meets every end exactly once, then it
is also a Hamiltonian circle.
\begin{lemma}\label{FaCurveCircle}
If every end of $G$ has degree at most 3, then every Hamiltonian
curve of $G$ is also a Hamiltonian circle.
\end{lemma}
\begin{proof}
It suffices to show that the Hamiltonian curve passes through each end exactly once. Suppose that it passes through an end $\alpha$ at least twice. Then clearly $|G|$ has four vertex-disjoint nontrivial curves ending in $\alpha$. By Lemma \ref{FaDegreeArc}, $\alpha$ has degree at least 4, a contradiction.
\end{proof}
\begin{theorem}\label{ThFaithfulHamiltonian}
Let $T$ be a faithful spanning tree of $G$, and $F\subseteq E(T)$
such that every component of $T-F$ is finite. Suppose that for every
subtree $T'$ of $T$, $G$ has a cycle $C'$ such that
\begin{mathitem}
\item $V(T')\subseteq V(C')$, and
\item $C'$ crosses each edge in $F\cap E(T')$ exactly twice with respect to $T$.
\end{mathitem}
Then $G$ has a Hamiltonian circle.
\end{theorem}
\begin{proof}
Let $V(G)=\{v_i: i\geq 0\}$. For $i\geq 0$, let $T_i$ be a subtree of $T$ containing all vertices of $\{v_0,\ldots,v_i\}$, and let $C_i$ be a cycle of $G$ such that $V(T_i)\subseteq V(C_i)$ and $C_i$ crosses each edge in $F\cap E(T_i)$ exactly twice with respect to $T$. Set
$\mathcal{C}=(C_i)_{i=0}^\infty$. In the following, we will define a
sequence of infinite subsequences of $\mathcal{C}$, a sequence of
finite subsets of $E(G)$, and a sequence of finite subsets of $F$.
First let $\mathcal{C}^0=(C_i^0)_{i=0}^\infty=\mathcal{C}$ and
$E^0=F^0=\emptyset$. Suppose now we have already defined
$\mathcal{C}^{i-1}$, $E^{i-1}$ and $F^{i-1}$.
Consider the first cycle $C_0^{i-1}$ in $\mathcal{C}^{i-1}$. Let
$\mathcal{H}_i$ be the set of components $H$ of $T-F$ such that
$V(H)\cap V(C_0^{i-1})\neq\emptyset$. Set
$S_i=\bigcup_{H\in\mathcal{H}_i}V(H)$, $S'_i=S_i\cup N(S_i)$,
$E_i=E_G(S_i,G-S_i)$, and $F^i=E_T(S_i,G-S_i)$. Clearly
$F^i\subseteq F$. Since $C_0^{i-1}$ is a cycle and each component of
$T-F$ is finite, we see that $S_i$ is finite. Since $G$ is locally
finite, we have that $S'_i$, $E_i$ and $F^i$ are finite.
Note that only finitely many cycles in $\mathcal{C}$ (and hence in $\mathcal{C}^{i-1}$) fail to contain all vertices of $S'_i$. It follows that infinitely many cycles in $\mathcal{C}^{i-1}$ contain all vertices of $S'_i$. Recall that $E_i$ is finite and thus has only finitely many subsets. So there is a set $E^i\subseteq E_i$ such that $E(C)\cap E_i=E^i$ for infinitely many cycles $C$ in $\mathcal{C}^{i-1}$. Let $\mathcal{C}^i$ be the subsequence of $\mathcal{C}^{i-1}$ consisting of all cycles $C$ with $S'_i\subseteq V(C)$ and $E(C)\cap E_i=E^i$.
From the above construction, we see that every cycle in $\mathcal{C}^i$ contains all vertices of $C_0^{i-1}$, and for $i,j\geq 0$,
$$E_i\cap E(C_0^j)=\left\{\begin{array}{ll}
\emptyset, & \mbox{if }j<i;\\
E^i, & \mbox{if }j\geq i.
\end{array}\right.$$
Recall that every cycle in $\mathcal{C}^i$ (and in particular $C_0^i$) crosses every edge in $F^i$ exactly twice with respect to $T$. It follows that for any edge $e$ of $F^i$, $E^i$ contains exactly two edges in
$E_G(T_1,T_2)$, where $T_1,T_2$ are the two components of $T-e$.
Now let $F'=\bigcup_{i=0}^\infty F^i$ and $G'$ be the spanning
subgraph of $G$ with edge set
$$E(G')=E(T)\cup\bigcup_{i=0}^\infty E(C_0^i).$$ It follows that for
every edge $e\in F'$, $G'$ has at most three edges between the two
components of $T-e$.
We claim that every component of $T-F'$ is finite. Suppose otherwise
that there is an infinite component $H'$ of $T-F'$. Let $v\in
V(H')$. Note that only finitely many cycles in $\mathcal{C}$ do not contain $v$. It follows that there exists $i$
with $v\in V(C_0^{i-1})$. Let $S_i$ be defined as above. Since $S_i$
is finite, there is some edge in $E_T(S_i,G-S_i)\cap E(H')$, which
is contained in $F^i$, a contradiction. Thus we conclude that every
component of $T-F'$ is finite.
By Lemma \ref{FaEndDegreek}, every end of $G'$ has degree at most 3.
Clearly every finite subset of $V(G')$ is contained in a cycle of
$G'$. By Theorem \ref{ThHamiltonianCurve}, $G'$ has a Hamiltonian
curve. By Lemma \ref{FaCurveCircle}, $G'$ has a Hamiltonian circle.
By Lemma \ref{FaFaithfulSpanning}, $G'$ is faithful to $G$. By Lemma
\ref{FaFaithfulCircle}, $G$ has a Hamiltonian circle.
\end{proof}
\subsection{Proof of Theorem \ref{ThInfinite}}
Let $M$ be a perfect matching of $G$. We define a graph
$\mathcal{G}$ on $M$ such that for any two edges $e_1,e_2\in M$,
$e_1e_2\in E(\mathcal{G})$ if and only if $G$ has an edge between
$e_1$ and $e_2$. Clearly $\mathcal{G}$ is connected and locally
finite. By Theorem \ref{ThNormalTree}, $\mathcal{G}$ has a normal spanning tree
$\mathcal{T}$, which is faithful to $\mathcal{G}$ by Lemma
\ref{LeNormalFaithful}. By Lemma \ref{FaFaithfulTree}, $G$ has a
faithful spanning tree $T$ containing all edges in $M$. By Lemma
\ref{FaFaithfulPower}, $T$ is faithful to $T_B^3$.
Let $F=E(T)\backslash M$. Then every component of $T-F$ consists of a single edge in $M$. Let $T'$ be an arbitrary subtree of $T$. By Lemma
\ref{LeFiniteHamiltonian}, $(T')_B^3$ has a Hamiltonian cycle $C'$
that crosses every edge in $F\cap E(T')$ exactly twice with respect to $T'$ (and hence with respect to $T$, since $C'$ contains no vertices outside $T'$). By Theorem \ref{ThFaithfulHamiltonian}, $T_B^3$ has
a Hamiltonian circle.
By Lemmas \ref{FaFaithfulFaithful} and \ref{FaFaithfulPower}, $T$ is
faithful to $G_B^3$. By Lemma \ref{FaFaithfulSpanning}, $T_B^3$ is
faithful to $G_B^3$. By Lemma \ref{FaFaithfulCircle}, $G_B^3$ has a
Hamiltonian circle.
The proof is complete.
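For finite bipartite graphs, the objects used in the proof of Theorem \ref{ThInfinite} are easy to compute explicitly. The following is a small illustrative sketch (not part of the proof), assuming the \texttt{networkx} library; the 8-cycle, its perfect matching, and the brute-force Hamiltonicity test are made-up demonstration data.

```python
# Illustrative sketch (not part of the proof): the third bi-power G_B^3 of a
# bipartite graph and the auxiliary graph on a perfect matching M, for a small
# finite example.  The 8-cycle and its matching are made-up test data.
import networkx as nx
from itertools import permutations

G = nx.cycle_graph(8)                         # bipartite, with a perfect matching
M = [(0, 1), (2, 3), (4, 5), (6, 7)]          # a perfect matching of G

# Bi-power: u, v adjacent in G_B^3 iff their distance in G is odd and at most 3.
dist = dict(nx.all_pairs_shortest_path_length(G))
GB3 = nx.Graph()
GB3.add_nodes_from(G)
GB3.add_edges_from((u, v) for u in G for v in G
                   if u != v and dist[u][v] <= 3 and dist[u][v] % 2 == 1)

# Auxiliary graph on M: two matching edges are adjacent iff G has an edge
# joining them.
aux = nx.Graph()
aux.add_nodes_from(M)
aux.add_edges_from((e1, e2) for e1 in M for e2 in M if e1 != e2
                   and any(G.has_edge(x, y) for x in e1 for y in e2))
print("auxiliary graph connected:", nx.is_connected(aux))

# Brute-force check that G_B^3 has a Hamiltonian cycle (the finite analogue
# of the Hamiltonian circle above).
def has_hamiltonian_cycle(H):
    nodes = list(H.nodes)
    first, rest = nodes[0], nodes[1:]
    for perm in permutations(rest):
        cyc = (first,) + perm
        if all(H.has_edge(cyc[i], cyc[(i + 1) % len(cyc)]) for i in range(len(cyc))):
            return True
    return False

print("G_B^3 has a Hamiltonian cycle:", has_hamiltonian_cycle(GB3))
```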
| {
"timestamp": "2019-02-19T02:22:48",
"yymm": "1902",
"arxiv_id": "1902.06403",
"language": "en",
"url": "https://arxiv.org/abs/1902.06403",
"abstract": "For a graph $G$, the $t$-th power $G^t$ is the graph on $V(G)$ such that two vertices are adjacent if and only if they have distance at most $t$ in $G$; and the $t$-th bi-power $G_B^t$ is the graph on $V(G)$ such that two vertices are adjacent if and only if their distance in $G$ is odd at most $t$. Fleischner's theorem states that the square of every 2-connected finite graph has a Hamiltonian cycle. Georgakopoulos prove that the square of every 2-connected infinite locally finite graph has a Hamiltonian circle. In this paper, we consider the Hamiltonicity of the bi-power of bipartite graphs. We show that for every connected finite bipartite graph $G$ with a perfect matching, $G_B^3$ has a Hamiltonian cycle. We also show that if $G$ is a connected infinite locally finite bipartite graph with a perfect matching, then $G_B^3$ has a Hamiltonian circle.",
"subjects": "Combinatorics (math.CO)",
"title": "Hamiltonicity of bi-power of bipartite graphs, for finite and infinite cases",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9890130580170845,
"lm_q2_score": 0.7154240018510026,
"lm_q1q2_score": 0.7075636798494804
} |
https://arxiv.org/abs/1503.00112 | Transcendental Morse Inequality and Generalized Okounkov Bodies | The main goal of this article is to construct "arithmetic Okounkov bodies" for an arbitrary pseudo-effective (1,1)-class $\alpha$ on a Kähler manifold. Firstly, using Boucksom's divisorial Zariski decompositions for pseudo-effective (1,1)-classes on compact Kähler manifolds, we prove the differentiability of volumes of big classes for Kähler manifolds on which modified nef cones and nef cones coincide; this includes Kähler surfaces. We then apply our differentiability results to prove Demailly's transcendental Morse inequality for these particular classes of Kähler manifolds. In the second part, we construct the convex body $\Delta(\alpha)$ for any big class $\alpha$ with respect to a fixed flag by using positive currents, and prove that this newly defined convex body coincides with the Okounkov body when $\alpha\in {\rm NS}\_{\mathbb{R}}(X)$; such convex sets $\Delta(\alpha)$ will be called generalized Okounkov bodies. As an application we prove that any rational point in the interior of Okounkov bodies is "valuative". Next we give a complete characterisation of generalized Okounkov bodies on surfaces, and show that the generalized Okounkov bodies behave very similarly to original Okounkov bodies. By the differentiability formula, we can relate the standard Euclidean volume of $\Delta(\alpha)$ in $\mathbb{R}^2$ to the volume of a big class $\alpha$, as defined by Boucksom; this solves a problem raised by Lazarsfeld in the case of surfaces. Finally, we study the behavior of the generalized Okounkov bodies on the boundary of the big cone, which are characterized by numerical dimension. | \section{Introduction}
In \cite{Oko96} Okounkov introduced a natural procedure to associate a convex body $\Delta(D)$ in $\mathbb{R}^n$ to any ample divisor $D$ on an $n$-dimensional projective variety. Relying on the work of Okounkov, Lazarsfeld and Musta\c{t}\u{a} \cite{LM09}, and Kaveh and Khovanskii \cite{KK09,KK10}, have systematically studied Okounkov's construction, and associated to any big divisor and any fixed flag of subvarieties a convex body which is now called the Okounkov body.
We now briefly recall the construction of the Okounkov body. We start with a complex projective variety $X$ of dimension $n$. Fix a flag
$$ Y_{\bullet} : X=Y_0\supset Y_1 \supset Y_2 \supset \ldots \supset Y_{n-1} \supset Y_{n}=\{p\}
$$
where $Y_i$ is a smooth irreducible subvariety of codimension $i$ in $X$. For a given big divisor $D$, one defines a valuation-like function
$$
\mu=\mu_{Y_{\bullet},D}: (H^0(X,\oc_X(D))-\{0\})\rightarrow \mathbb{Z}^n.
$$
as follows. First set $\mu_1=\mu_1(s)={\rm ord}_{Y_1}(s)$. Dividing $s$ by a local equation of $Y_1$, we obtain a section
$$
\widetilde{s}_1\in H^0(X,\oc_X(D-\mu_1Y_1))
$$
that does not vanish identically along $Y_1$. We restrict $\widetilde{s}_1$ on $Y_1$ to get a non-zero section
$$
s_1\in H^0(Y_1,\oc_{Y_1}(D-\mu_1Y_1)),
$$
then we write
$\mu_2(s)=\ord_{Y_2}(s_1)$,
and continue in this fashion to define the remaining integers $\mu_i(s)$. The image of the
function $\mu$ in $\mathbb{Z}^n$ is denoted by $\mu(D)$. With this in hand, we define the \textit{Okounkov body of D with respect to the fixed flag $Y_{\bullet}$} to be
$$
\Delta(D)=\Delta_{Y_{\bullet}}(D)= \text{closed convex hull}\left(\displaystyle\bigcup_{m\geq 1}\frac{1}{m}\cdot \mu(mD)\right)\subseteq \mathbb{R}^n.
$$
In view of the open question raised in the final part of \cite{LM09}, it is quite natural to wonder whether one can construct ``arithmetic Okounkov bodies" for an arbitrary pseudo-effective (1,1)-class $\alpha$ on a K\"ahler manifold, and realize the volumes of these classes by convex bodies as well. In this paper, using positive currents in a natural way, we give a construction of a convex body $\Delta(\alpha)$ associated to such a class $\alpha$, and show that this newly defined convex body coincides with the Okounkov body when $\alpha\in {\rm NS}_{\mathbb{R}}(X)$.
\begin{thm}
\label{equivalent}
Let $X$ be a smooth projective variety of dimension $n$, $L$ be a big line bundle on $X$ and $Y_{\bullet}$ be a fixed admissible flag. Then we have $$\Delta(c_1(L))=\Delta(L)=\overline{\bigcup_{m=1}^{\infty} \frac{1}{m}\nu(mL)}.$$ Moreover, in the definition of Okounkov body $\Delta(L)$, it suffices to take the closure of the set of normalized valuation vectors instead of the closure of the convex hull.
\end{thm}
By Theorem \ref{equivalent}, our definition of the Okounkov body for any pseudo-effective class can be regarded as a generalization of the original Okounkov body. A very interesting problem is to determine exactly which points in the Okounkov body $\Delta(L)$ are given by valuations of sections. This is expressed by saying that a rational point of $\Delta(L)$ is ``valuative". By Theorem \ref{equivalent} we can give some partial answers to this question, which was studied in \cite{KL14} in the case of surfaces.
\begin{cor}
\label{valuation}
Let $X$ be a projective variety of dimension $n$ and $Y_{\bullet}$ be an admissible flag. If $L$ is a big line bundle, then any rational point in ${\rm int}(\Delta(L))$ is a valuative point.
\end{cor}
It is quite natural to wonder whether our newly defined convex body for big classes behaves similarly to the original Okounkov body. In the case of complex surfaces, we give an affirmative answer to the question raised in \cite{LM09}, as follows:
\begin{thm}
\label{Okounkov}
Let $X$ be a compact K\"ahler surface, $\alpha\in H^{1,1}(X,\mathbb{R})$ be a big class. If $C$ is an irreducible divisor of $X$, there are piecewise linear continuous functions
$$ f,\ g:\ [a,s]\rightarrow \mathbb{R}_+$$
with $f$ convex, $g$ concave, and $f\leq g$, such that $\Delta(\alpha)\subset \mathbb{R}^2$ is the region bounded by the graphs of $f$ and $g$:
$$
\Delta(\alpha)=\{(t,y)\in \mathbb{R}^2\mid a\leq t \leq s, \mbox{and}\ f(t)\leq y\leq g(t)\}.
$$
Here $\Delta(\alpha)$ is the generalized Okounkov body with respect to the fixed flag
$$
X\supseteq C\supseteq \{x\},
$$
and $s=\sup\{t>0\mid \alpha-tC {\rm\ is\ big}\}$. If $C$ is nef, then $a=0$ and $f(t)$ is increasing; otherwise, $a=\sup\{t>0\mid C\subseteq E_{nK}(\alpha-tC)\}$, where the non-K\"ahler locus of a big class is defined as $E_{nK}:=\bigcap_TE_+(T)$, with $T$ ranging among the K\"ahler currents in the class.
Moreover, $\Delta(\alpha)$ is a finite polygon whose number of vertices is bounded by $2\rho(X)+2$, where $\rho(X)$ is the Picard number of $X$, and
$$
\vol_X(\alpha)=2\vol_{\mathbb{R}^2}(\Delta(\alpha)).
$$
\end{thm}
In \cite{LM09}, it was asked whether the Okounkov body of a divisor on a complex surface could be an infinite polygon. In \cite{KLM10}, it was shown that the Okounkov body is always a finite polygon. Here we give an explicit description for the ``finiteness" of the polygons appearing as generalized Okounkov bodies of big classes, and conclude that it also holds for the original Okounkov bodies by Theorem \ref{equivalent}.
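As a quick sanity check on the normalization $\vol_X(\alpha)=2\vol_{\mathbb{R}^2}(\Delta(\alpha))$ in Theorem \ref{Okounkov}, one can look at the classical example of $\mathcal{O}_{\mathbb{P}^2}(d)$ with respect to a flag consisting of a line and a point on it, whose Okounkov body is the standard simplex of size $d$; this example is a well-known fact assumed here, not proved in this paper. The following short script is purely an illustrative numerical check.

```python
# Illustrative numerical check (assuming the classical example, not proved
# here): for X = P^2 with the flag (line, point on the line), the Okounkov
# body of O(d) is the simplex {x, y >= 0, x + y <= d}, so twice its Euclidean
# area equals vol(O(d)) = d^2, matching the normalization in the theorem.
from fractions import Fraction

def simplex_area(d):
    return Fraction(d * d, 2)        # area of {x, y >= 0, x + y <= d}

for d in range(1, 6):
    vol = d * d                      # (dH)^2 on P^2
    assert vol == 2 * simplex_area(d)
print("vol(O(d)) = 2 * area(Delta(O(d))) for d = 1..5")
```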
As one might suspect from the construction of Okounkov bodies, the Euclidean volume of $\Delta(D)$ has a strong connection with the growth of the groups $H^0(X,\oc_X(mD))$. In \cite{LM09}, the following precise relations were shown:
\begin{equation}
\label{volume equal}
n!\cdot\vol_{\mathbb{R}^n}(\Delta(D))= \vol_X(D):=\lim\limits_{k\rightarrow\infty}\frac{n!}{k^n}h^0(X,\oc_X(kD)).
\end{equation}
The proof of (\ref{volume equal}) relies on properties of sub-semigroups of $\mathbb{N}^{n+1}$ constructed from the graded linear series $\{H^0(X,\oc_X(mD))\}_{m\geq0}$. However, when $\alpha$ is a big class which does not belong to ${\rm NS}_{\mathbb{R}}(X)$, there are no such algebraic objects which correspond to $\vol_X(\alpha)$, and we only have the following analytic definition due to Boucksom:
$$\vol_X(\alpha):= \sup_{T}\int_{X} T_{ac}^n,$$
where $T$ ranges among all closed positive $(1,1)$-currents in the class $\alpha$. Therefore, it is quite natural to propose the following conjecture:
\begin{conj}
Let $X$ be a compact K\"ahler manifold of dimension $n$. For any big class $\alpha\in H^{1,1}(X,\mathbb{R})$, we have
$$
\vol_{\mathbb{R}^n}(\Delta(\alpha))=\frac{1}{n!}\cdot \vol_X(\alpha).
$$
\end{conj}
In Theorem \ref{Okounkov}, we prove this conjecture in dimension 2. Our method is to relate the Euclidean volume of the slices of the generalized Okounkov body to the derivative of the volume of the big class. We prove the following differentiability formula for volumes of big classes.
\begin{thm}[Differentiability of volumes]
\label{main differential}
Let $X$ be a compact K\"ahler surface and $\alpha$ be a big class. If $\beta$ is a nef class or $\beta=\{C\}$ where $C$ is an irreducible curve, we have
$$
\left.\frac{d}{dt}\right|_{t=0}\vol_X(\alpha+t\beta)=2Z(\alpha)\cdot \beta,
$$
where $Z(\alpha)$ is the divisorial Zariski decomposition of $\alpha$ defined in Section \ref{divisorial}.
\end{thm}
A direct corollary of this formula is the \textit{transcendental Morse inequality}:
\begin{thm}\it
\label{morse 2}
Let $X$ be a compact K\"ahler surface. If $\alpha$ and $\beta$ are nef classes satisfying the inequality ${\alpha}^2- 2\alpha \cdot \beta>0$, then $\alpha-\beta$ is big and $\vol_X(\alpha-\beta) \geq {\alpha}^2- 2\alpha \cdot \beta $.
\end{thm}
In higher dimensions, we also have a differentiability formula for big classes on some special K\"ahler manifolds.
\begin{thm}\it
\label{morse high}
Let $X$ be a compact K\"ahler manifold of dimension $n$ on which the modified nef cone $\mathcal{MN}$ coincides with the nef cone $\mathcal{N}$. If $\alpha\in H^{1,1}(X,\mathbb{R})$ is a big class, $\beta \in H^{1,1}(X,\mathbb{R})$ is a nef class, then \begin{equation}
\label{volume formula}
\vol_X(\alpha+\beta) = \vol_X(\alpha)+n\int_{0}^{1} Z(\alpha+t\beta)^{n-1}\cdot \beta\ \mathrm{d}t.
\end{equation}
As a consequence, $\vol_X(\alpha+t\beta)$ is $\mathcal{C}^1$ for $t\in \mathbb{R}^+$ and we have
\begin{equation}
\left.\frac{d}{dt}\right|_{t=t_0}\vol_X(\alpha+t\beta)=nZ(\alpha+t_0\beta)^{n-1}\cdot \beta
\end{equation}
\label{diff higher}
for $t_0\geq0$.
\end{thm}
Finally, we study the generalized Okounkov bodies of pseudo-effective classes on K\"ahler surfaces. We summarize our results as follows.
\begin{thm}\it
\label{okounkov psf}
Let $X$ be a K\"ahler surface, $\alpha$ be any pseudo-effective but not big class,
\begin{enumerate}[\upshape (i)]
\item if the numerical dimension $n(\alpha)=0$, then for any irreducible curve $C$ which is not contained in the negative part $N(\alpha)$, we have the generalized Okounkov body
$$\Delta_{(C,x)}(\alpha)=0\times \nu_x(N(\alpha)|_C),$$
where $\nu_x(N(\alpha)|_C)=\nu(N(\alpha)|_C,x)$ is the Lelong number of $N(\alpha)$ at $x$;
\item if $n(\alpha)=1$, then for any irreducible curve $C$ satisfying $Z(\alpha)\cdot C>0$, we have
$$\Delta_{(C,x)}(\alpha)=0\times [\nu_x(N(\alpha)|_C),\nu_x(N(\alpha)|_C)+Z(\alpha)\cdot C]\nonumber.$$
\end{enumerate}
In particular, the numerical dimension determines the dimension of the generalized Okounkov body.
\end{thm}
\section{Technical preliminaries}
\label{preliminaries}
\subsection{Siu decomposition}
\label{Siu}
Let $T$ be a closed positive current of bidegree $(p,p)$ on a complex manifold $X$. We denote by $\nu(T,x)$ its Lelong number at a point $x\in X$. For any $c>0$, the Lelong upperlevel sets are defined by
$$
E_c(T):=\{x\in X, \nu(T,x)\geq c\}.
$$
In \cite{Siu74}, Siu proved that $E_c(T)$ is an analytic subset of $X$, of codimension at least $p$. Moreover, $T$ can be written as a convergent series of closed positive currents
$$
T=\displaystyle\sum_{k=1}^{+\infty}\nu(T,Z_k) [Z_k]+R
$$
where $[Z_k]$ is the current of integration over an irreducible analytic set of codimension $p$, and $R$ is a residual current with the property that $E_c(R)$ has codimension greater than $p$ for every $c>0$. This decomposition is locally and globally unique: the sets $Z_k$ are precisely the codimension-$p$ components occurring in the upperlevel sets $E_c(T)$, and $\nu(T,Z_k):=\inf\{\nu(T,x)\mid x\in Z_k\}$ is the generic Lelong number of $T$ along $Z_k$.
\subsection{Currents with analytic singularities}
\label{analytic singularity}
A closed positive (1,1) current $T$ on a compact complex manifold $X$ is said to have analytic (resp. algebraic) singularities along a subscheme $V(\mathcal{I})$ defined by an ideal $\mathcal{I}$ if there exists some $c\in \mathbb{R}_{>0}$ (resp. $\mathbb{Q}_{>0}$) such that locally we have
$$
T=\frac{c}{2}dd^c\log(|f_1|^2+\ldots+|f_k|^2)+dd^cv
$$
where $f_1,\ldots,f_k$ are local generators of $\mathcal{I}$ and $v\in L^{\infty}_{\rm loc}$ (resp. and additionally, $X$ and $V({\cal I})$ are algebraic). Moreover, if $v$ is smooth, $T$ will be said to have mild analytic singularities. In these situations, we call the sum $\sum\nu(T,D)D$ which appears in the Siu decomposition of $T$ the divisorial part of $T$. Using the Lelong-Poincar\'e formula, it is straightforward to check that the divisorial part $\sum\nu(T,D)D$ of a closed (1,1)-current $T$ with analytic singularities along the subscheme $V(\mathcal{I})$ is just the divisorial part of $V(\mathcal{I})$, times the constant $c>0$ appearing in the definition of analytic singularities. The residual part $R$ has analytic singularities in codimension at least~$2$. If we denote $E_+(T):=\{x\in X|\nu(T,x)>0\}$, then $E_+(T)$ is exactly the support of $V(\mathcal{I})$. Moreover, if $V\not\subseteq E_+(T)$ for some smooth subvariety~$V$, then $T|_V:=\frac{c}{2}dd^c\log(|f_1|^2+\ldots+|f_k|^2)|_V+dd^cv|_V$ is well defined, since $\log(|f_1|^2+\ldots+|f_k|^2)$ and $v$ are not identically equal to $-\infty$ on $V$. It is easy to check that this definition does not depend on the choice of the local potential of $T$.
\begin{defin}[Non-K\"ahler locus]
If $\alpha\in H^{1,1}_{{\partial}\overline{\partial}}(X,\mathbb{R})$ is a big class, we define its \emph{non-K\"ahler locus} as $E_{nK}:=\bigcap_TE_+(T)$ for $T$ ranging among the K\"ahler currents in $\alpha$.
\end{defin}
We will repeatedly use the following theorem, due to Collins and Tosatti.
\begin{thm}[\cite{CT13}]\it
\label{Tosatti}
Let $X$ be a compact K\"ahler manifold of dimension $n$. Given a nef and big class $\alpha$, we define a subset of $X$ which measures
its non-K\"ahlerianity, namely the null locus
$$\text{\rm Null}(\alpha):=\bigcup_{\int_{V}^{} \alpha^{\text{dim}V}=0}V,$$
where the union is taken over all positive dimensional irreducible
analytic subvarieties of X. Then we have $$\text{\rm Null}(\alpha)=E_{nK}(\alpha).$$
\end{thm}
\subsection{Regularization of currents}
We will need Demailly's regularization theorem for closed (1,1)-currents, which enables us to approximate a given current by currents with analytic singularities, with an arbitrarily small loss of positivity. In particular, we can approximate a K\"ahler current $T$ inside its cohomology class by K\"ahler currents $T_k$ with algebraic singularities, with good control of the singularities. A big class therefore contains plenty of K\"ahler currents with analytic singularities.
\begin{thm}\it
\label{Demailly}
Let $T$ be a closed almost positive (1,1)-current on a compact complex manifold $X$, and fix a Hermitian form $\omega$. Suppose that $T\geq \gamma$ for some real (1,1)-form $\gamma$ on $X$. Then there exists a sequence $T_k$ of currents with algebraic singularities in the cohomology class $\{T\}$ which converges weakly to T, such that $T_k\geq \gamma-\epsilon_k \omega$ for some sequence $\epsilon_k>0$ decreasing to 0, and $\nu(T_k,x)$ increases to $\nu(T,x)$ uniformly with respect to $x\in X$.
\end{thm}
\subsection{Currents with minimal singularities}
Let $T_1=\theta_1+dd^c\varphi_1$ and $T_2=\theta_2+dd^c\varphi_2$ be two closed almost positive (1,1)-currents on $X$, where $\theta_i$ are smooth forms and $\varphi_i$ are almost plurisubharmonic functions. We say that $T_1$ is less singular than $T_2$ (and write $T_1\preceq T_2$) if $\varphi_2\leq \varphi_1+C$ for some constant $C$.
Let $\alpha$ be a class in $H^{1,1}_{{\partial}\overline{\partial}}(X,\mathbb{R})$ and let $\gamma$ be a smooth real (1,1)-form. We denote by $\alpha[\gamma]$ the set of closed almost positive (1,1)-currents $T\in \alpha$ with $T\geq\gamma$. Since the set of potentials of such currents is stable under taking suprema, standard pluripotential theory shows that there exists a closed almost positive (1,1)-current $T_{\min,\gamma}\in \alpha[\gamma]$ which has minimal singularities in $\alpha[\gamma]$; it is well defined modulo $dd^cL^{\infty}$. For each $\epsilon>0$, denote by $T_{\min,\epsilon}=T_{\min,\epsilon}(\alpha)$ a current with minimal singularities in $\alpha[-\epsilon\omega]$, where $\omega$ is some reference Hermitian form. The minimal multiplicity at $x\in X$ of the pseudo-effective class $\alpha\in H^{1,1}_{{\partial}\overline{\partial}}(X,\mathbb{R})$
is defined as
$$
\nu(\alpha,x):=\sup_{\epsilon>0}\nu(T_{\min,\epsilon},x).
$$
For a prime divisor $D$, we define the generic minimal multiplicity of $\alpha$ along $D$ as
$$
\nu(\alpha,D):=\inf\{\nu(\alpha,x)\mid x\in D\}.
$$
We then have $\nu(\alpha,D)=\sup_{\epsilon>0}\nu(T_{\min,\epsilon},D)$.
\subsection{Lebesgue decomposition}
A current $T$ can be locally seen as a form with distribution coefficients. When $T$ is positive, the distributions are positive measures which admit a Lebesgue decomposition into an absolutely continuous part (with respect to the Lebesgue measure on $X$) and a singular part. Therefore we obtain the decomposition $T=T_{\rm ac}+T_{\rm sing}$, with $T_{\rm ac}$ (resp. $T_{\rm sing}$) globally determined thanks to the uniqueness of the Lebesgue decomposition.
Now we assume that $T$ is a (1,1)-current. The absolutely continuous part $T_{\rm ac}$ can be regarded as a (1,1)-form with $L^1_{\rm loc}$ coefficients, and more generally we have $T_{\rm ac} \geq \gamma$ whenever $T\geq \gamma$ for some smooth real form $\gamma$. Thus we can define the product $T^k_{\rm ac}$ of $T_{\rm ac}$ almost everywhere; this yields a positive Borel $(k,k)$-form.
\subsection{Modified nef cone and divisorial Zariski decomposition}
\label{divisorial}
In this subsection, we collect some definitions and properties of the modified nef cone and divisorial Zariski decomposition. See \cite{Bou04} for more details.
\begin{defin}
Let $X$ be a compact complex manifold, and let $\omega$ be a reference Hermitian form. Let $\alpha$ be a class in $H^{1,1}_{{\partial}\overline{\partial}}(X,\mathbb{R})$.
\begin{enumerate}[\upshape (i)]
\item $\alpha$ is said to be a \emph{modified K\"ahler class} iff it contains a K\"ahler current $T$ with $\nu(T,D)=0$ for all prime divisors $D$ in $X$.
\item $\alpha$ is said to be a \emph{modified nef class} iff, for every $\epsilon>0$, there exists a closed (1,1)-current $T_\epsilon\in\alpha$ with $T_\epsilon\geq -\epsilon\omega$ and $\nu(T_\epsilon,D)=0$ for every prime divisor $D$.
\end{enumerate}
\end{defin}
\begin{rem}
\label{mf=f}
The modified nef cone $\mathcal{MN}$ is a closed convex cone which contains the nef cone $\mathcal{N}$. When $X$ is a K\"ahler manifold, $\mathcal{MN}$ is just the closure of the modified K\"ahler cone $\mathcal{MK}$.
\end{rem}
\begin{rem}
\label{nef property}
For a complex surface, the K\"ahler (nef) cone and the modified K\"ahler (modified nef) cone coincide. Indeed, analytic singularities in codimension 2 of a K\"ahler current $T$ are just isolated points. Therefore the class $\{T\}$ is a K\"ahler class.
\end{rem}
\begin{defin}[Divisorial Zariski decomposition]
The \emph{negative part} of a pseudo-effective class $\alpha\in H^{1,1}_{{\partial}\overline{\partial}}(X,\mathbb{R})$ is defined as $N(\alpha):=\sum\nu(\alpha,D)D$. The \emph{Zariski projection} of $\alpha$ is $Z(\alpha):=\alpha-\{N(\alpha)\}$. We call the decomposition $\alpha=Z(\alpha)+\{N(\alpha)\}$ the \emph{divisorial Zariski decomposition of $\alpha$}.
\end{defin}
\begin{rem}
\label{bijection}
We claim that the volume of $Z(\alpha)$ is equal to the volume of $\alpha$. Indeed, if $T$ is a positive current in $\alpha$, then we have $T\geq N(\alpha)$ since $T\in \alpha[-\epsilon\omega]$ for each $\epsilon>0$ and we conclude that $T\mapsto T-N(\alpha)$ is a bijection between the positive currents in $\alpha$ and those in $Z(\alpha)$. Furthermore, we notice that $(T-N(\alpha))_{\rm ac}=T_{\rm ac}$, and thus by the definition of volume of the pseudo-effective classes we conclude that $\vol_X(\alpha)=\vol_X(Z(\alpha))$.
\end{rem}
\begin{defin}[Exceptional divisors]
\begin{enumerate}[\upshape (i)]
\item A family $D_1,\ldots\ ,D_q$ of prime divisors is said to be an \emph{exceptional family} iff the convex cone generated by their cohomology classes meets the modified nef cone at 0 only.
\item An effective $\mathbb{R}$-divisor $E$ is said to be \emph{exceptional} iff its prime components constitute an exceptional family.
\end{enumerate}
\end{defin}
We have the following properties of exceptional divisors:
\begin{thm}\it
\label{property exceptional}
\begin{enumerate}[\upshape (i)]
\item An effective $\mathbb{R}$-divisor E is exceptional iff $Z({E})=0$.
\item If E is an exceptional effective $\mathbb{R}$-divisor, we have $E=N(\{E\})$.
\item If $D_1,\ldots,D_q$ is an exceptional family of primes, then their classes $\{D_1\},\ldots,\{D_q\}$ are linearly independent in ${\rm NS}_{\mathbb{R}}(X)\subset H^{1,1}(X,\mathbb{R})$. In particular, the length of the exceptional families of primes is uniformly bounded by the Picard number $\rho(X)$.
\item Let $X$ be a surface. A family $D_1,\ldots,D_r$ of prime divisors is exceptional iff its intersection matrix $(D_i\cdot D_j)$ is negative definite.
\end{enumerate}
\end{thm}
In this paper, we need the following properties of the modified nef cone $\mathcal{MN}$ and the divisorial Zariski decomposition due to Boucksom (ref. \cite{Bou04}). We state these properties without proofs.
\begin{thm}\it
\label{modified big}
Let $\alpha\in H^{1,1}(X,\mathbb{R})$ be a pseudo-effective class. Then we have:
\begin{enumerate}[\upshape (i)]
\item Its Zariski projection $Z(\alpha)$ is a modified nef class.
\item $Z(\alpha)=\alpha$ iff $\alpha$ is modified nef.
\item $Z(\alpha)$ is big iff $\alpha$ is.
\end{enumerate}
\end{thm}
\begin{rem}
Let $X$ be a complex K\"ahler surface. For a big class $\alpha\in H^{1,1}(X,\mathbb{R})$, $Z(\alpha)$ is a big and modified nef class. By Remark \ref{mf=f}, any modified nef class is nef, it follows that $Z(\alpha)$ is big and nef.
\end{rem}
\begin{thm}\it
\label{Continuous}
\begin{enumerate}[\upshape (i)]
\item The map $\alpha \mapsto N(\alpha)$ is convex and homogeneous on the pseudo-effective cone $\mathcal{E}$. It is continuous on the interior of $\mathcal{E}$.
\item The Zariski projection $Z:\mathcal{E}\rightarrow \mathcal{MN} $ is concave and homogeneous. It is continuous on the interior of $\mathcal{E}$.
\end{enumerate}
\end{thm}
\begin{thm}\it
\label{contain exceptional}
Let $p$ be a big and modified nef class. Then the primes $D_1,\ldots,D_q$ contained in the non-K\"ahler locus $E_{nK}(p)$ form an exceptional family $A$, and the fiber of $Z$ over $p$ is the simplicial cone $Z^{-1}(p)=p+V_+(A)$, where $V_+(A):=\sum_{D\in A}\mathbb{R}_+\{D\}$.
\end{thm}
\begin{thm}\it
\label{orthogonal}
Let $X$ be a compact surface. If $\alpha\in H^{1,1}(X,\mathbb{R})$ is a pseudo-effective class, its divisorial Zariski decomposition $\alpha=Z(\alpha)+\{N(\alpha)\}$ is the unique orthogonal decomposition of $\alpha$ with respect to the non-degenerate quadratic form $q(\alpha):=\int\alpha^2$ into the sum of a modified nef class and the class of an exceptional effective $\mathbb{R}$-divisor.
\end{thm}
\begin{rem}
If $\alpha$ is the class of an effective $\mathbb{Q}$-divisor $D$ on a projective surface $X$, then the divisorial Zariski decomposition of $\alpha$ is just the classical Zariski decomposition of $D$.
\end{rem}
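To make the decomposition concrete, the following small numerical sketch computes the negative part once its support is known, by solving the orthogonality conditions $(\alpha-\sum_i a_iN_i)\cdot N_j=0$ coming from Theorem \ref{orthogonal}. The example (the blow-up of $\mathbb{P}^2$ at a point, with $\alpha=H+2E$) and its standard intersection form are assumptions made for the demonstration only.

```python
# Illustrative sketch: computing a divisorial Zariski decomposition on a
# surface once the support of the negative part is known, by solving the
# orthogonality conditions (alpha - sum a_i N_i) . N_j = 0 of the theorem
# above.  The example (blow-up of P^2 at a point, alpha = H + 2E) and its
# standard intersection form are assumptions made for the demonstration.
import numpy as np

Q = np.array([[1, 0], [0, -1]])          # intersection form in the basis (H, E)
alpha = np.array([1, 2])                 # alpha = H + 2E
N_support = [np.array([0, 1])]           # candidate negative part: the (-1)-curve E

S = np.array([[n1 @ Q @ n2 for n2 in N_support] for n1 in N_support])
rhs = np.array([alpha @ Q @ n for n in N_support])
a = np.linalg.solve(S, rhs)              # coefficients of the negative part

N = sum(ai * n for ai, n in zip(a, N_support))
Z = alpha - N
print("negative part coefficients:", a)   # [2.]  ->  N(alpha) = 2E
print("Z(alpha) =", Z)                    # [1. 0.]  ->  the nef class H
print("vol(alpha) = Z^2 =", Z @ Q @ Z)    # 1.0
print("Z . N =", Z @ Q @ N)               # 0.0 (orthogonality)
```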
\section{Transcendental Morse inequality}
\subsection{Proof of the transcendental Morse inequality for complex surfaces}
The main goal of this section is to prove the differentiability of the volume function and the transcendental Morse inequality for complex surfaces. In fact, in the next subsection we will give a more general method to prove the transcendental Morse inequality for K\"ahler manifolds on which the modified nef cone $\mathcal{MN}$ coincides with the nef cone; this includes complex surfaces. However, since the methods and results of this subsection play a special role in the study of generalized Okounkov bodies, we treat complex surfaces and higher-dimensional K\"ahler manifolds separately. Throughout this subsection, unless otherwise stated, $X$ will stand for a complex K\"ahler surface. We denote by $q(\alpha):=\int \alpha^2$ the quadratic form on $H^{1,1}(X,\mathbb{R})$. By the Hodge index theorem, $(H^{1,1}(X,\mathbb{R}),q)$ has signature $(1,h^{1,1}(X)-1)$. The open cone $\{\alpha\in H^{1,1}(X,\mathbb{R})\mid q(\alpha)>0\}$ thus has two connected components, which are convex cones, and we denote by $\mathcal{P}$ the component containing the K\"ahler cone $\mathcal{K}$.
\begin{lem}\it
\label{component}
Let $X$ be a compact K\"ahler manifold of dimension $n$. If $\alpha\in H^{1,1}(X,\mathbb{R})$ is a big class, $\beta \in H^{1,1}(X,\mathbb{R})$ is a nef class, then $N(\alpha+t\beta)\leq N(\alpha)$ as effective $\mathbb{R}$-divisors for $t\geq 0$. Furthermore, when t is small enough, the prime components of $N(\alpha+t\beta)$ will be the same as those of $N(\alpha)$.
\end{lem}
\begin{proof}
Since $\beta$ is nef, by Theorem \ref{Continuous}, we have
$$
N(\alpha+t\beta)\leq N(\alpha)+tN(\beta)=N(\alpha).
$$
The map $\alpha \mapsto N(\alpha)$ is convex on the pseudo-effective cone $\mathcal{E}$ and continuous on its interior; hence the coefficients of $N(\alpha+t\beta)$ vary continuously in $t$, and for $t$ small enough every prime component of $N(\alpha)$ still appears in $N(\alpha+t\beta)$. Combined with the inequality above, this proves the lemma.
\end{proof}
\begin{thm}\it
\label{differential}
If $\alpha\in H^{1,1}(X,\mathbb{R})$ is a big class and $\beta \in H^{1,1}(X,\mathbb{R})$ is a nef class, then
\begin{equation}
\label{differential formula}
\left.\frac{d}{dt}\right|_{t=0}\vol_X(\alpha+t\beta)=2Z(\alpha)\cdot \beta
\end{equation}
\end{thm}
\begin{proof}
By Lemma \ref{component}, there exists an $\epsilon>0$ such that for $0\leq t<\epsilon$ we can write $N(\alpha+t\beta)=\sum_{i=1}^{r}a_i(t)N_i$, where $0< a_i(t)\leq a_i(0)=:a_i$ and each $a_i(t)$ is a continuous and decreasing function of $t$. By the orthogonality property of the divisorial Zariski decomposition (see Theorem \ref{orthogonal}), $Z(\alpha+t\beta)\cdot N(\alpha+t\beta)=0$ for $t\geq 0$. Since $Z(\alpha+t\beta)$ is modified nef and thus nef (by Remark \ref{nef property}), we have $Z(\alpha+t\beta)\cdot N_i\geq 0$ for every $i$. When $0\leq t<\epsilon$, we have $a_i(t)>0$ for $i=1,\ldots,r$; therefore $Z(\alpha+t\beta)$ is orthogonal to each $\{N_i\}$ with respect to $q$. We denote by $V\subset H^{1,1}(X,\mathbb{R})$ the finite-dimensional subspace spanned by $\{N_1\},\ldots,\{N_r\}$, and by $V^{\bot}$ the orthogonal complement of $V$ with respect to $q$. Thus $\alpha+t\beta=Z(\alpha+t\beta)+\sum_{i=1}^{r}a_i(t)\{N_i\}$ is the decomposition in the direct sum $V^{\bot} \oplus V$. We decompose $\beta=\beta^{\bot}+\beta_0$ in the direct sum $V^{\bot}\oplus V$, and we have
\begin{gather}
Z(\alpha+t\beta)=Z(\alpha)+t\beta^{\bot},\nonumber\\
\sum_{i=1}^{r}a_i(t)\{N_i\}=\sum_{i=1}^{r}a_i\{N_i\}+t\beta_0.\nonumber
\end{gather}
Since $\vol_X(\alpha+t\beta)=\vol_X(Z(\alpha+t\beta))= Z(\alpha+t\beta)^2$ (by Remark \ref{bijection}),
it is easy to deduce that
$$
\left.\frac{d}{dt}\right|_{t=0}\vol_X(\alpha+t\beta)=2Z(\alpha)\cdot \beta^{\bot}=2Z(\alpha)\cdot \beta.
$$
The last equality follows from $\beta_0\in V$ and $Z(\alpha)\in V^{\bot}$. This proves the first half of Theorem \ref{main differential}.
\end{proof}
To prove the transcendental Morse inequality for complex surfaces, we will need a criterion for bigness of a class:
\begin{thm}\it
\label{criterion}
Let $\alpha$ and $\beta$ be two nef classes such that $\alpha^2-2\alpha\cdot \beta>0$. Then $\alpha-\beta$ is a big class.
\end{thm}
\begin{proof}
We denote by $\mathcal{P}$ the connected component of the open cone $\{\gamma\in H^{1,1}(X,\mathbb{R})\mid q(\gamma)>0\}$ containing the K\"ahler cone $\mathcal{K}$; then $\mathcal{P}\subset \mathcal{E}^0$, that is, every class in $\mathcal{P}$ is big. Indeed, as a consequence of the Nakai-Moishezon criterion for surfaces (see \cite{Lam99}), if $\gamma$ is a real (1,1)-class with $\gamma^2>0$, then $\gamma$ or $-\gamma$ is big. Since $\alpha$ and $\beta$ are both nef and $\alpha^2-2\alpha\cdot\beta>0$, we have $(\alpha-t\beta)^2\geq \alpha^2-2\alpha\cdot\beta>0$ for $0\leq t\leq 1$, so the segment $\{\alpha-t\beta\}_{0\leq t\leq1}$ is contained in the open cone $\{\gamma\in H^{1,1}(X,\mathbb{R})\mid q(\gamma)>0\}$. Since $\alpha$ is nef with $\alpha^2>0$, it is big and lies in $\mathcal{P}$; by connectedness of the segment, $\alpha-t\beta\in\mathcal{P}$ for all $0\leq t\leq1$, and \textit{a fortiori} $\alpha-\beta$ is big.
\end{proof}
Now we are ready to prove the transcendental Morse inequality for complex surfaces.
\begin{proof}[Proof of Theorem \ref{morse 2}]
By Theorem \ref{criterion}, when ${\alpha}^2- 2\alpha \cdot \beta>0$, the cohomology class $\alpha-\beta$ is big. By the differentiability formula (\ref{differential formula}), we have
$$
\vol_X(\alpha-\beta)=\alpha^2-2\int_{0}^{1} Z(\alpha-t\beta)\cdot \beta\ dt.
$$
Since the Zariski projection $Z:\mathcal{E}\rightarrow \mathcal{MN} $ is concave and homogeneous by Theorem \ref{Continuous}, we have $$\alpha=Z(\alpha)\geq Z(\alpha-t\beta)+Z(t\beta)\geq Z(\alpha-t\beta).$$
Since $\beta$ is nef, we have
$$
\alpha\cdot \beta\geq Z(\alpha-t\beta)\cdot \beta,
$$
and thus
$$
\vol_X(\alpha-\beta) \geq {\alpha}^2- 2\alpha \cdot \beta.
$$
\end{proof}
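On a surface where the nef and pseudo-effective cones coincide, Theorem \ref{morse 2} can also be checked by brute force. The short script below is an illustrative test on $\mathbb{P}^1\times\mathbb{P}^1$, using the standard facts that a class $af_1+bf_2$ is nef iff $a,b\geq0$, that $f_1^2=f_2^2=0$ and $f_1\cdot f_2=1$, and that on this surface the nef and pseudo-effective cones agree; the sampled coefficients are assumptions made for the test only.

```python
# Illustrative brute-force check of the inequality on X = P^1 x P^1, where a
# class a*f1 + b*f2 is nef iff a, b >= 0, with f1^2 = f2^2 = 0, f1.f2 = 1,
# and the nef and pseudo-effective cones agree (standard facts assumed here).
import itertools

def inter(u, v):
    return u[0] * v[1] + u[1] * v[0]     # intersection number on P^1 x P^1

for a, b, c, d in itertools.product(range(6), repeat=4):
    alpha, beta = (a, b), (c, d)
    bound = inter(alpha, alpha) - 2 * inter(alpha, beta)
    if bound > 0:
        diff = (a - c, b - d)
        assert diff[0] > 0 and diff[1] > 0   # alpha - beta is big (and nef) here
        assert inter(diff, diff) >= bound    # vol(alpha - beta) >= alpha^2 - 2 alpha.beta
print("inequality verified for all nef classes with coefficients < 6")
```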
In the last part of this subsection, we prove the second half of Theorem \ref{main differential}.
\begin{thm}\it
\label{differential 2}
Let $\alpha\in H^{1,1}(X,\mathbb{R})$ be a big class and $C$ be an irreducible divisor, then
\begin{equation}
\label{differential formula 2}
\left.\frac{d}{dt}\right|_{t=0}\vol_X(\alpha+tC)=2Z(\alpha)\cdot C.
\end{equation}
\end{thm}
\begin{proof}
It suffices to prove the theorem for $C$ not nef; in this case $C^2<0$. Write $N(\alpha)=\sum_{i=1}^{r}a_iN_i$, where each $N_i$ is a prime divisor. If $C\subseteq E_{nK}(Z(\alpha))$, we deduce that $Z(\alpha)\cdot C=0$ by Theorem \ref{Tosatti}, and $\{C,N_1,\ldots,N_r\}$ forms an exceptional family by Theorem \ref{contain exceptional}. Thus we have
$$
Z(\alpha+tC)=Z(\alpha),
$$
and
$$
N(\alpha+tC)=N(\alpha)+tC
$$
for $t\geq0$. The theorem is thus proved in this case.
From now on we assume $C\not\subseteq E_{nK}(Z(\alpha))$, thus we have $Z(\alpha)\cdot C> 0$ and $C\not\subseteq {\rm Supp}(N(\alpha))$. We define
\begin{equation}
\left(
\begin{array}{c}
b_1\\
\vdots\\
b_r\\
\end{array}
\right)=
-S^{-1}\cdot\left(
\begin{array}{c}
C\cdot N_1\\
\vdots\\
C\cdot N_r\\
\end{array}
\right)\nonumber,
\end{equation}
where $S=(s_{ij})$ denotes the intersection matrix of $\{N_1,\ldots, N_r\}$. By Theorem \ref{orthogonal} we know that $S$ is negative definite satisfying $s_{ij}\geq 0$ for all $i\neq j$. We claim that $Z(\alpha)+t(\{C\}+\sum_{i=1}^{r}b_i\{N_i\})$ is big and nef if $0\leq t< -\frac{Z(\alpha)\cdot C}{C^2}$. We need the following lemma from \cite{BKS03} to prove our claim.
\begin{lem}
\label{BKS}
Let $A$ be a negative definite $r\times r$-matrix over the reals such that $a_{ij}\geq 0$ for all $i\neq j$. Then all entries of the inverse matrix $A^{-1}$ are $\leq0$.
\end{lem}
By Lemma \ref{BKS} we know that all entries of $S^{-1}$ are $\leq0$, thus $b_j\geq0$ for all $1\leq j\leq r$ and we get the bigness of $Z(\alpha)+t(\{C\}+\sum_{i=1}^{r}b_i\{N_i\})$. By the construction of $b_j$, we have
$$
(Z(\alpha)+t(\{C\}+\sum_{i=1}^{r}b_i\{N_i\}))\cdot N_j=0
$$
for $1\leq j\leq r$, and
$$
(Z(\alpha)+t(\{C\}+\sum_{i=1}^{r}b_i\{N_i\}))\cdot C>0
$$
for $0\leq t<-\frac{Z(\alpha)\cdot C}{C^2}$. Thus we have the nefness and our claim follows. Since the divisorial Zariski decomposition is orthogonal and unique (see Theorem \ref{orthogonal}), we conclude that
\begin{gather}
\label{negative}
N(\alpha+t\{C\})=\sum_{i=1}^{r}(a_i-tb_i)N_i,\\
Z(\alpha+t\{C\})=Z(\alpha)+t\{C\}+\sum_{i=1}^{r}tb_i\{N_i\},
\end{gather}
for $t$ small enough. Since $\vol_X(\alpha+tC)=Z(\alpha+t\{C\})^2$, we have thus also obtained formula (\ref{differential formula 2}) in this case.
\end{proof}
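The linear algebra in the proof above is easy to carry out numerically. The following sketch uses a made-up negative definite intersection matrix (an $A_2$-type configuration) to compute the vector $b=-S^{-1}(C\cdot N_i)_i$ and to check the sign condition guaranteed by Lemma \ref{BKS} together with the orthogonality relations used in the proof; the numerical data are illustrative assumptions only.

```python
# Illustrative sketch of the linear algebra in the proof above: compute
# b = -S^{-1} (C.N_1, ..., C.N_r)^T for a made-up negative definite
# intersection matrix S with nonnegative off-diagonal entries, and check the
# conclusions of Lemma (BKS) and the orthogonality relations.
import numpy as np

S = np.array([[-2.0, 1.0],
              [1.0, -2.0]])            # intersection matrix of N_1, N_2 (an A_2 chain)
C_dot_N = np.array([1.0, 0.0])         # intersection numbers C.N_1, C.N_2

b = -np.linalg.solve(S, C_dot_N)
print("b =", b)                         # [0.666..., 0.333...]
assert (b >= 0).all()                   # all entries nonnegative, as Lemma (BKS) predicts

# ({C} + sum_i b_i {N_i}) . N_j = 0 for every j:
print("residual intersections:", C_dot_N + S @ b)    # [0. 0.]
```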
\subsection{Transcendental Morse inequality for some special K\"ahler manifolds}
One can modify the proof of Theorem \ref{morse 2} a little bit, to extend the transcendental Morse inequality to K\"ahler manifolds whose modified nef cone $\mathcal{MN}$ coincides with the nef cone $\mathcal{N}$. In this subsection, we assume $X$ to be a compact K\"ahler manifold of dimension $n$ which satisfies this condition.
\begin{lem}\it
\label{orthogonal higher}
If $\alpha\in \mathcal{E}^\circ$, then the divisorial Zariski decomposition $\alpha =Z(\alpha)+N(\alpha)$ is such that
$$Z(\alpha)^{n-1}\cdot N(\alpha)=0.$$
\end{lem}
\begin{rem}
Lemma \ref{orthogonal higher} is very similar to the Corollary 4.5 in \cite{BDPP13}: If $\alpha\in \mathcal{E}_{\rm NS}$, then the divisorial Zariski decomposition $\alpha= Z(\alpha)+N(\alpha)$ is such that $\langle Z(\alpha)^{n-1} \rangle \cdot N(\alpha)=0$. However, the proof of \cite{BDPP13} is based on the orthogonal estimate for divisorial Zariski decomposition of $\mathcal{E}_{\rm NS}$, which is still a conjecture for $\alpha\in \mathcal{E}$. Here we will use Theorem \ref{Tosatti} to prove this lemma directly.
\end{rem}
\begin{proof}[Proof of Lemma \ref{orthogonal higher}]
By Theorem \ref{modified big}, if $\alpha$ is big, then $Z(\alpha)$ is big and modified nef, and thus nef by the assumption on $X$. By Theorem \ref{contain exceptional}, the primes $D_1,\ldots,D_q$ contained in the non-K\"ahler locus $E_{nK}(Z(\alpha))$ form an exceptional family, and $\alpha=Z(\alpha)+\sum_{i=1}^{q} a_iD_i$ with $a_i\geq0$. Since $\text{\rm Null}(Z(\alpha))=E_{nK}(Z(\alpha))$ by Theorem \ref{Tosatti}, we have $Z(\alpha)^{n-1}\cdot D_i=0$ for each $i$, and thus $Z(\alpha)^{n-1}\cdot N(\alpha)=0$. The lemma is proved.
\end{proof}
\begin{proof}[Proof of Theorem \ref{morse high}]
By Lemma \ref{component}, there exists $\epsilon>0$ such that the prime components of $N(\alpha+t\beta)$ are the same for $0\leq t\leq\epsilon$. Moreover, if we write $N(\alpha+t\beta)=\sum_{i=1}^{r}a_i(t)N_i$, then each $a_i(t)$ is continuous, decreasing, and positive. By Lemma \ref{orthogonal higher}, we have $$Z(\alpha+t\beta)^{n-1}\cdot N(\alpha+t\beta)=\sum_{i=1}^{r}a_i(t)Z(\alpha+t\beta)^{n-1}\cdot N_i=0.$$
Since $Z(\alpha+t\beta)$ is modified nef and thus nef, we deduce that $Z(\alpha+t\beta)^{n-1}\cdot N_i=0$ for $0\leq t\leq\epsilon$ and $i=1,\ldots,r$.
Since each $a_i(t)$ is continuous and decreasing, it is almost everywhere differentiable. Thus $Z(\alpha+t\beta)=\alpha+t\beta-\sum_{i=1}^{r}a_i(t)N_i$ is an a.e.\ differentiable continuous curve in the finite-dimensional space $H^{1,1}(X,\mathbb{R})$ parametrized by $t$. Meanwhile, since $\alpha\mapsto \alpha^n$ is a polynomial function on $H^{1,1}(X,\mathbb{R})$, we deduce that $\vol_X(\alpha+t\beta)=Z(\alpha+t\beta)^n$ is an a.e.\ differentiable function of $t$. Therefore, if $\vol_X(\alpha+t\beta)$ and the $a_i(t)$ are all differentiable at $t=t_0$, we have
$$
\left.\frac{d}{dt}\right|_{t=t_0}\vol_X(\alpha+t\beta)=nZ(\alpha+t_0\beta)^{n-1}\cdot (\beta-\sum_{i=1}^{r}{a_i}'(t_0)N_i)=nZ(\alpha+t_0\beta)^{n-1}\cdot \beta.
$$
Since $\vol_X(\alpha+t\beta)$ is increasing and continuous, it is also a.e. differentiable and thus we have
\begin{eqnarray}
\label{intergral}
\vol_X(\alpha+s\beta)&=& \vol_X(\alpha)+\int_{0}^{s}\frac{d}{dt}\vol_X(\alpha+t\beta) \mathrm{d}t\nonumber\\
&=&\vol_X(\alpha)+n\int_{0}^{s} Z(\alpha+t\beta)^{n-1}\cdot \beta\ \mathrm{d}t.
\end{eqnarray}
for $0\leq s\leq \epsilon$. Since $Z(\alpha+t\beta)$ is continuous (by Theorem \ref{Continuous}), by $(\ref{intergral})$ we deduce that $\vol_X(\alpha+t\beta)$ is differentiable with respect to $t$ and its derivative
$$
\left.\frac{d}{dt}\right|_{t=t_0}\vol_X(\alpha+t\beta)=nZ(\alpha+t_0\beta)^{n-1}\cdot \beta.\qedhere
$$
\end{proof}
In order to prove the transcendental Morse inequality, we will need the following bigness criterion, obtained in \cite{Xia13} and \cite{Popo14}.
\begin{thm}\it
Let $X$ be an $n$-dimensional compact K\"ahler manifold. If $\alpha$ and $\beta$ are two nef classes on $X$ satisfying
$\alpha^n-n\alpha^{n-1}\cdot\beta>0$,
then $\alpha-\beta$ is a big class.
\end{thm}
The proof of the next theorem is similar to that of Theorem \ref{morse 2} and is therefore omitted.
\begin{thm}\it
\label{morse special}
Let $X$ be a compact K\"ahler manifold on which the modified nef cone $\mathcal{MN}$ and the nef cone $\mathcal{N}$ coincide. If $\alpha$ and $\beta$ are nef cohomology classes of type (1,1) on $X$ satisfying the inequality ${\alpha}^n- n\alpha^{n-1} \cdot \beta>0$. Then $\alpha-\beta$ contains a K\"ahler current and $\vol_X(\alpha-\beta) \geq {\alpha}^{n-1}- n\alpha^{n-1} \cdot \beta$.
\end{thm}
\begin{rem}
In \cite{BCJ09}, the authors proved the following differentiability theorem:
\begin{equation}
\label{bou diff}
\left.\frac{d}{dt}\right|_{t=t_0}\vol_X(L+tD)=n\langle L^{n-1}\rangle \cdot D,
\end{equation}
where $L$ is a big line bundle on the smooth projective variety $X$ and $D$ is a prime divisor. The right-hand side of the equation above involves the \textit{positive intersection product} $\langle L^{n-1}\rangle\in H^{n-1,n-1}_{\geq0}(X,\mathbb{R})$, first introduced in the analytic context in \cite{BDPP13}. Theorem \ref{morse high} could be seen as a transcendental version of (\ref{bou diff}) for some special K\"ahler manifolds. In the general K\"ahler situation, we propose the following conjecture:
\begin{conj}
Let $X$ be a K\"ahler manifold of dimensional $n$, $\alpha$ be a big class. If $\beta$ is a pseudo-effective class, then we have
$$
\left.\frac{d}{dt}\right|_{t=0}\vol_X(\alpha+t\beta)=n\langle \alpha^{n-1}\rangle \cdot \beta.
$$
\end{conj}
\end{rem}
\section{Generalized Okounkov bodies on K\"ahler manifolds}
\subsection{Definition and relation with the algebraic case}
\label{defin}
Throughout this subsection, $X$ will stand for a K\"ahler manifold of dimension $n$. Our main goal in this subsection is to generalize the definition of the Okounkov body to any pseudo-effective class $\alpha\in H^{1,1}(X,\mathbb{R})$. First of all, we define a valuation-like function. For any positive current $T\in \alpha$ with analytic singularities, we define the valuation-like function
$$ T \mapsto \nu(T)=\nu_{Y_\bullet}(T)=(\nu_1(T),\ldots,\nu_n(T))
$$
as follows. First, set
$$\nu_1(T)=\sup\{\lambda \mid T-\lambda [Y_1] \geq 0 \},
$$
where $[Y_1]$ is the current of integration over $Y_1$. By Section \ref{Siu} we know that $\nu_1(T)$ is the coefficient $\nu(T,Y_1)$ of the positive current $[Y_1]$ appearing in the Siu decomposition of $T$. Since $T$ has analytic singularities, by the arguments in Section \ref{analytic singularity}, $T_1:=(T-\nu_1[Y_1])|_{Y_1}$ is a well-defined positive current in the pseudo-effective class $(\alpha-\nu_1\{Y_1\})|_{Y_1}$ and it also has analytic singularities. Then take
$$\nu_2(T)=\sup\{\lambda \mid T_1-\lambda [Y_2] \geq 0 \},
$$
and continue in this manner to define the remaining values $\nu_i(T)\in\mathbb{R}^+$.
\begin{rem}
\label{section divisor}
If one assumes $\alpha\in {\rm NS}_\mathbb{Z}(X)$, there exists a holomorphic line bundle $L$ such that $\alpha=c_1(L)$. If $D$ is the divisor of some holomorphic section $s_D\in H^0(X,\oc_X(L))$, then we have
$$
\nu([D])=\mu(s_D),
$$
where $\mu$ is the valuation-like function appearing in the definition of the original Okounkov body. Roughly speaking, our valuation-like function has a larger domain of definition, and thus its image contains $\bigcup_{m=1}^{\infty} \frac{1}{m}\mu(mL)$.
\end{rem}
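For divisors of sections, the valuation vector of Remark \ref{section divisor} can be computed purely symbolically. Below is a small illustrative computation of $\mu(s_D)=\nu([D])$ for a plane curve on $\mathbb{P}^2$, with respect to the affine flag $\{x=0\}\supset\{x=y=0\}$; the chosen polynomial, the flag, and the use of the \texttt{sympy} library are assumptions made for the example only.

```python
# Illustrative symbolic computation of mu(s_D) = nu([D]) on P^2, with respect
# to the affine flag Y_1 = {x = 0}, Y_2 = {x = y = 0}.  The polynomial s and
# the flag are arbitrary choices made for the demonstration (sympy assumed).
import sympy as sp

x, y = sp.symbols("x y")
s = x**2 * (y**3 + x*y + x**2)          # affine equation of the divisor D

# mu_1 = ord_{Y_1}(s): the largest power of x dividing s
mu1 = min(m[0] for m in sp.Poly(sp.expand(s), x, y).monoms())

# divide by x^{mu_1} and restrict to Y_1 = {x = 0}
s1 = sp.expand(s / x**mu1).subs(x, 0)

# mu_2 = order of vanishing of the restriction at the point {y = 0}
mu2 = min(m[0] for m in sp.Poly(s1, y).monoms())

print("mu(s) =", (mu1, mu2))            # (2, 3) for this example
```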
For any big class $\alpha$, we define a \emph{$\mathbb{Q}$-convex body} $\Delta_\mathbb{Q}(\alpha)$ (resp. \emph{$\mathbb{R}$-convex body} $\Delta_\mathbb{R}(\alpha)$) to be the set of valuation vectors $\nu(T)$, where $T$ ranges among all the K\"ahler (resp. positive) currents with algebraic (resp. analytic) singularities. Then $\Delta_\mathbb{Q}(\alpha)\subseteq \Delta_\mathbb{R}(\alpha)$.
It is easy to check that this is a convex set in $\mathbb{Q}^n$ (resp. $\mathbb{R}^n$). Indeed, for any two positive currents $T_0$ and $T_1$ with algebraic (resp. analytic) singularities, we have $\nu(\epsilon T_0+(1-\epsilon)T_1)=\epsilon\nu(T_0)+(1-\epsilon)\nu(T_1)$ for $0\leq \epsilon \leq 1$ rational (resp. real). The homogeneity of $\Delta_\mathbb{Q}(\alpha)$ is also clear: for all $c\in \mathbb{Q}^+$, we have $$\Delta_\mathbb{Q}(c\alpha)=c\Delta_\mathbb{Q}(\alpha).$$ Indeed, since $\nu(cT)=c\nu(T)$ for all $c\in \mathbb{R}^+$, the claim follows directly.
\begin{example}
\label{curve}
Let $L$ be a line bundle of degree $c>0$ on a smooth curve $C$ of genus $g$. Then we have
$$
\Delta_\mathbb{Q}(c_1(L))=\mathbb{Q}\cap [0,c).
$$
Since ${\rm NS}_\mathbb{R}(C)=H^{1,1}(C,\mathbb{R})$, for any ample class $\alpha$ on $C$ we have
$$
\Delta_\mathbb{Q}(\alpha)=\mathbb{Q}\cap [0,\alpha\cdot C).
$$
\end{example}
\begin{lem}\it
\label{boundedness}
Let $\alpha$ be a big class, then the $\mathbb{R}$-convex body $\Delta_\mathbb{R}(\alpha)$ lies in a bounded subset of $\mathbb{R}^n$.
\end{lem}
\begin{proof}
It suffices to show that there exists $b>0$ large enough such that $\nu_i(T)<b$ for every $i$ and every positive current $T\in\alpha$ with analytic singularities. We fix a K\"ahler class $\omega$. First choose $b_1>0$ such that $$
(\alpha-b_1Y_1)\cdot \omega^{n-1}<0.
$$
This guarantees that $\nu_1(T)<b_1$ since $\alpha-b_1Y_1\not\in \mathcal{E}$. Next choose $b_2$ large enough so that
$$
((\alpha-aY_1)|_{Y_1}-b_2Y_2)\cdot \omega^{n-2}<0
$$
for all real numbers $0\leq a\leq b_1$. Then $\nu_2(T)\leq b_2$ for any positive current $T$ with analytic singularities. Continuing in this manner we construct $b_i>0$ for $i=1,\ldots,n$ such that $\nu_i(T)\leq b_i$ for any positive current $T$ with analytic singularities. We take $b=\max\{b_i\}$.
\end{proof}
\begin{lem}\it
\label{closure same}
For any big class $\alpha$, $\Delta_\mathbb{Q}(\alpha)$ is dense in $\Delta_\mathbb{R}(\alpha)$. Thus we have $\overline{\Delta_\mathbb{Q}(\alpha)} = \overline{\Delta_\mathbb{R}(\alpha)}$.
\end{lem}
\begin{proof}
It is easy to verify that if $T$ is a K\"ahler current with analytic singularities, then for any $\epsilon>0$, there exists a K\"ahler current $S_\epsilon$ with algebraic singularities such that $\left\lVert\nu(S_\epsilon)-\nu(T)\right\rVert<\epsilon$ with respect to the standard norm in $\mathbb{R}^n$. For the general case, We fix a K\"ahler current $T_0\in i\Theta(L)$ with algebraic singularities. Then for any positive current $T$ with analytic singularities, $T_{\epsilon}:=(1-\epsilon )T+\epsilon T_0$ is still a K\"ahler current. By Lemma \ref{boundedness}, $\left\lVert\nu(T_{\epsilon})-\nu(T)\right\rVert = \epsilon\left\lVert(\nu(T_0)-\nu(T))\right\rVert$ will tend to 0 since $\nu(T)$ is uniformly bounded for any positive current $T$ with analytic singularities. Thus $\Delta_\mathbb{Q}(\alpha)$ is dense in $\Delta_\mathbb{R}(\alpha)$.
\end{proof}
Now we study the relations between $\Delta_{\mathbb{Q}}(c_1(L))$ and $\Delta(L)$ for $L$ a big line bundle on $X$. First we begin with the following two lemmas.
\begin{lem}[Extension property]\it
\label{Extension property}
Let $L$ be a big line bundle on the projective variety $X$ of dimension $n$, with a singular Hermitian metric $h=e^{-\varphi}$ satisfying
$$
i\Theta_{L,h}=dd^c\varphi\geq \epsilon\omega
$$
for some $\epsilon>0$ and a given K\"ahler form $\omega$. If the restriction of $\varphi$ on a smooth hypersurface $Y$ is not identically equal to $-\infty$, then there exists a positive integer $m_0$ which depends only on $Y$ so that any holomorphic section $s_m\in H^0(Y,\oc_Y(mL)\otimes \mathcal{I}(m\varphi|_Y))$ can be extended to $ S_m\in H^0(X,\oc_X(mL)\otimes \mathcal{I}(m\varphi))$ for any $m\geq m_0$.
\end{lem}
We need the following Ohsawa-Takegoshi extension theorem to prove Lemma \ref{Extension property}.
\begin{thm}[Ohsawa-Takegoshi]\it
\label{Ohsawa}
Let $X$ be a smooth projective variety. Let $Y$ be a smooth divisor defined by a holomorphic section of a line bundle $H$ with a smooth metric $h_0=e^{-\psi}$. Let $L$ be a holomorphic line bundle with a singular metric $h=e^{-\phi}$, satisfying the curvature assumptions
$$
dd^c\phi\geq0
$$
and
$$
dd^c\phi\geq \delta dd^c\psi
$$
with $\delta>0$. Then for any holomorphic section $s\in H^0(Y,\oc_Y(K_Y+L)\otimes \mathcal{I}(h|_Y))$, there exists a global holomorphic section $S\in H^0(X,\oc_X(K_X+L+Y)\otimes \mathcal{I}(h))$ such that $S|_Y=s$.
\end{thm}
\begin{proof}[Proof of Lemma \ref{Extension property}]
Taking smooth metrics $e^{-\psi}$ and $e^{-\eta}$ on $Y$ and $K_X$ respectively, we can choose $m_0$ large enough so that the curvature assumptions
$$
dd^c(m\phi-\eta-\psi)\geq0
$$
and
$$
dd^c(m\phi-\eta-\psi)\geq dd^c\psi
$$
hold for any $m\geq m_0$.
By Theorem \ref{Ohsawa}, any holomorphic section $s\in H^0(Y,\oc_Y(K_Y+(mL-K_X-Y)|_Y)\otimes \mathcal{I}(h^m|_Y))$ can be extended to a global holomorphic section $S\in H^0(X,\oc_X(mL)\otimes \mathcal{I}(h^m))$ such that $S|_Y=s$. By the adjunction formula, $(K_X+Y)|_Y=K_Y$, and thus the lemma is proved.
\end{proof}
\begin{lem}\it
\label{Riemann}
Let $L$ be a big line bundle on the Riemann surface $C$ with a singular Hermitian metric $h=e^{-\varphi}$ such that $\varphi$ has algebraic singularities and
$$
i\Theta_{L,h}=dd^c\varphi\geq \epsilon\omega
$$
for some $\epsilon>0$. Then for a fixed point $p$, there exists an integer $k>0$ such that we have a holomorphic section $s_k\in H^0(C,\oc_C(kL)\otimes \mathcal{I}(h^k))$ satisfying $\ord_p(s_k)=k\nu(i\Theta_{L,h},p)$.
\end{lem}
\begin{proof}
Since $\varphi$ has algebraic singularities, we have the following Lebesgue decomposition
$$
i\Theta_{L,h}=({i\Theta_{L,h}})_{\rm ac}+\sum_{i=1}^{r}c_i[x_i],
$$
where each $c_i>0$ is rational and $x_1,\ldots,x_r$ are the log poles of $i\Theta_{L,h}$ (possibly $p$ is among them).
Since we have
$$\int_C(i\Theta_{L,h})_{\rm ac}+\sum_{i=1}^{r}c_i=\deg(L),$$
we deduce that
$$
\sum_{i=1}^{r}c_i<\deg(L).
$$
By the Riemann-Roch theorem there exists an integer $k>0$ satisfying
\begin{enumerate}[\upshape (i)]
\item $kc_i$ is an integer for each $i$,
\item there is a holomorphic section $s_k\in H^0(C,\oc_C(kL))$ such that ${\rm ord}_{x_i}(s_k)\geq kc_i$ and ${\rm ord}_p(s_k)=k\nu(i\Theta_{L,h},p)$.
\end{enumerate}
Thus $|s_k|^2e^{-k\varphi}$ is locally integrable, that is, $s_k\in H^0(C,\oc_C(kL)\otimes \mathcal{I}(h^k))$. The lemma is proved.
\end{proof}
\begin{thm}\it
\label{approximation}
Let $X$ be a smooth projective manifold of dimension $n$. For any K\"ahler current $T\in c_1(L)$ with algebraic singularities, there exists a holomorphic section $s\in H^0(X,\oc_X(kL))$ such that $\mu(s)=k\nu(T)$, i.e., we have $$\nu(T)\in \displaystyle\bigcup_{m=1}^{\infty} \frac{1}{m}\mu(mL).$$
In particular,
$$
\Delta_{\mathbb{Q}}(c_1(L))\subseteq \displaystyle\bigcup_{m=1}^{\infty} \frac{1}{m}\mu(mL) \subseteq \Delta(L).
$$
\end{thm}
\begin{proof}
First, set $\nu_i=\nu_i(T)$ and define
$$ T_0:=T,\ T_1:=(T_0-\nu_1[Y_1])|_{Y_1},\ \ldots\ , T_{n-1}:=(T_{n-2}-\nu_{n-1}[Y_{n-1}])|_{Y_{n-1}};
$$
$$L_0:=L-\nu_1Y_1, \ L_1:=L_0|_{Y_1}-\nu_2Y_2,\ \ldots\ ,
L_{n-2}:=L_{n-3}|_{Y_{n-2}}-\nu_{n-1}Y_{n-1}.
$$
Since $T_0\geq \epsilon \omega$, we have $T_1\geq \epsilon \omega|_{Y_1}$, \ldots , $T_{n-1}\geq \epsilon \omega|_{Y_{n-1}}$. Since each $\nu_i$ is rational, we can find an integer $m$ such that each $m\nu_i$ is an integer, so that each $mL_i$ is a big line bundle on $Y_i$. If we can prove
$$\nu(mT)\in \displaystyle\bigcup_{k=1}^{\infty} \frac{1}{k}\mu(kmL),$$
then we will have
$$\nu(T)\in \displaystyle\bigcup_{m=1}^{\infty} \frac{1}{m}\mu(mL),$$
by the homogeneity property $\frac{1}{m}\nu(mT)=\nu(T)$.
Thus we can assume that each $\nu_i(T)$ is an integer after we replace $L$ by $mL$ and $T$ by $mT$.
Firstly, since $T_0\in c_1(L)$ is a K\"ahler current with algebraic singularities, there exists a singular metric $h=e^{-\varphi_0}$ on $L$ whose curvature current is $T_0$, where $\varphi_0$ has algebraic singularities; on the other hand, there is a canonical metric $e^{-\eta_0}$ on $\oc_{Y_0}(Y_1)$ such that $dd^c\eta_0=[Y_1]$ in the sense of currents. Thus, by the definition of $\nu_1$, we deduce that $h_0:=e^{-\varphi_0+\nu_1\eta_0}$ is a singular metric of $L_0$ whose weight $\varphi_0-\nu_1\eta_0$ is not identically $-\infty$ on $Y_1$, so that $h_0|_{Y_1}$ is a singular metric of $L_0|_{Y_1}$ with algebraic singularities whose curvature current is $T_1\geq \epsilon\omega|_{Y_1}$.
Secondly, there is a canonical singular metric $e^{-\eta_1}$ of $\oc_{Y_1}(Y_2)$ on $Y_1$
with the curvature current $[Y_2]$. Thus the singular metric $h_1:=h_0|_{Y_1}\cdot e^{\nu_2\eta_1}$ of the big line bundle $L_1$ has curvature current $T_1-\nu_2[Y_2]\geq \epsilon\omega|_{Y_1}$. We continue in this manner to define the remaining singular metrics $h_i:=h_{i-1}|_{Y_i}\cdot e^{\nu_{i+1}\eta_i}$ of the big line bundle $L_i$ on $Y_i$, with curvature current $T_i-\nu_{i+1}[Y_{i+1}]\geq \epsilon\omega|_{Y_i}$, for $i=1, \ldots ,n-2$. It is easy to see that $h_i|_{Y_{i+1}}$ is well-defined.
By Lemma \ref{Extension property}, there exists a $k_0$ such that for each $k\geq k_0$, the following sequence is exact
\begin{equation}
\label{exact}
H^0(Y_{i-1},\oc_{Y_{i-1}}(kL_{i-1})\otimes \mathcal{I}(h^k_{i-1}))\longrightarrow H^0(Y_{i},\oc_{Y_{i}}(kL_{i-1})\otimes \mathcal{I}(h^k_{i-1}|_{Y_{i}}))\longrightarrow 0
\end{equation}
for $i=1, \ldots ,n-1$.
Now we begin our construction. $T_{n-1}$ is the curvature current of the singular metric $h_{n-2}|_{Y_{n-1}}$ of $L_{n-2}|_{Y_{n-1}}$ over the Riemann surface $Y_{n-1}$. By Lemma \ref{Riemann}, there exists a $k\geq k_0$ and a holomorphic section $s_{n-1} \in H^0(Y_{n-1},\oc_{Y_{n-1}}(kL_{n-2})\otimes \mathcal{I}(h^k_{n-2}|_{Y_{n-1}}))$, such that $\ord_p(s_{n-1})=k\nu(T_{n-1},p)=k\nu_n$.
By the exact sequence (\ref{exact}), $s_{n-1}$ can be extended to
$$
\widetilde{s}_{n-2}\in H^0(Y_{n-2},\oc_{Y_{n-2}}(kL_{n-2})\otimes \mathcal{I}(h^k_{n-2})).
$$
Now we choose a canonical section $t_{n-2}\in H^0(Y_{n-2},\oc_{Y_{n-2}}(Y_{n-1}))$ whose divisor is $Y_{n-1}$, and we define $s_{n-2}:=\widetilde{s}_{n-2}\,{t^{\otimes k\nu_{n-1}}_{n-2}}$. By the construction $h_{n-2}=h_{n-3}|_{Y_{n-2}}\cdot e^{\nu_{n-1}\eta_{n-2}}$, we obtain that
$$
s_{n-2}\in H^0(Y_{n-2},\oc_{Y_{n-2}}(kL_{n-3})\otimes \mathcal{I}(h^k_{n-3}|_{Y_{n-2}})).
$$
We can continue in this manner to construct a section $s_{0}\in H^0(X,\oc_{X}(kL))$ and by our construction we have $$
\mu(s_0) =(k\nu_1, \ldots ,k\nu_n)=k\nu(T),
$$
which concludes the proof of the theorem.
\end{proof}
\begin{proposition}\it
\label{coincide}
For any big line bundle $L$ and any admissible flag $Y_{\bullet}$, one has $\overline{\Delta_{\mathbb{Q}}(c_1(L))}=\Delta(L)$. In particular,
$$\Delta(L)= \overline{\displaystyle\bigcup_{m=1}^{\infty} \frac{1}{m}\mu(mL)}.$$
\end{proposition}
\begin{proof}
Firstly, since $\Delta_{\mathbb{Q}}(c_1(L))$ is a convex set in $\mathbb{Q}^n$, its closure $\overline{\Delta_{\mathbb{Q}}(c_1(L))} $ is also a closed convex set in $\mathbb{R}^n$. By Theorem \ref{approximation}, we have
$$
\Delta_{\mathbb{Q}}(c_1(L)) \subseteq \displaystyle\bigcup_{m=1}^{\infty} \frac{1}{m}\cdot \mu(mL),
$$
thus
$$
\overline{\Delta_{\mathbb{Q}}(c_1(L))}\subseteq \Delta(L).
$$
By Remark \ref{section divisor}, we have $\bigcup_{m=1}^{\infty} \frac{1}{m}\mu(mL) \subseteq \Delta_{\mathbb{R}}(c_1(L))$,
thus by the definition of Okounkov body $\Delta(L)$, we deduce that
$$
\Delta(L)\subseteq \overline{\Delta_{\mathbb{R}}(c_1(L))}.
$$
By Lemma \ref{closure same}, we have $\overline{\Delta_\mathbb{Q}(c_1(L))} = \overline{\Delta_\mathbb{R}(c_1(L))}$, thus the proposition is proved.
\end{proof}
\begin{rem}
By Proposition \ref{coincide}, in the definition of the Okounkov body $\Delta(L)$, it suffices to take the closure of the set of normalized valuation vectors, rather than the closure of the convex hull of this set.
\end{rem}
\begin{rem}
It is easy to reprove that the Okounkov body $\Delta(L)$ depends only on the numerical equivalence class of the big line bundle $L$. Indeed, if $L_1$ and $L_2$ are numerically equivalent, we have $c_1(L_1)=c_1(L_2)$ thus
$$
\Delta_{\mathbb{Q}}(c_1(L_1))=\Delta_{\mathbb{Q}}(c_1(L_2)).
$$
By Proposition \ref{coincide}, we have
$$
\Delta(L_1)=\Delta(L_2).
$$
\end{rem}
Now we are ready to find some valuative points in the Okounkov bodies.
\begin{proof}[Proof of Corollary \ref{valuation}]
From \cite{LM09} we know that $\vol_{\mathbb{R}^n}(\Delta(L))=\frac{1}{n!}\vol_X(L)>0$ by the bigness of $L$. Since $\Delta(L)=\overline{\Delta_{\mathbb{Q}}(c_1(L))}$ by Proposition \ref{coincide}, for any $p\in {\rm int}(\Delta(L))\cap\mathbb{Q}^n$ there exists an $n$-simplex $\Delta_n$ containing $p$ with all of its vertices lying in $\Delta_{\mathbb{Q}}(c_1(L))$. Since $\Delta_{\mathbb{Q}}(c_1(L))$ is a convex set in $\mathbb{Q}^n$, we have $\Delta_n\cap\mathbb{Q}^n\subseteq \Delta_{\mathbb{Q}}(c_1(L))$, and thus
$$
\Delta_{\mathbb{Q}}(c_1(L))\supseteq {\rm int}(\Delta(L))\cap\mathbb{Q}^n.
$$
From Theorem \ref{approximation} we have $\Delta_{\mathbb{Q}}(c_1(L))\subseteq \displaystyle\bigcup_{m=1}^{\infty} \frac{1}{m}\mu(mL)$, thus we get the inclusion
$$
{\rm int}(\Delta(L))\cap\mathbb{Q}^n\subseteq \displaystyle\bigcup_{m=1}^{\infty} \frac{1}{m}\mu(mL),
$$
which means that all rational interior points of $\Delta(L)$ are valuative.
\end{proof}
Pursuing the same philosophy as in Proposition \ref{coincide}, it is natural to extend results related to Okounkov bodies for big line bundles to the more general case of an arbitrary big class $\alpha\in H^{1,1}(X,\mathbb{R})$. We propose the following definition.
\begin{defin}[Generalized Okounkov body]
\label{generalized Okounkov}
Let $X$ be a K\"ahler manifold of dimension $n$. We define the \emph{generalized Okounkov body} of a big class $\alpha\in H^{1,1}(X,\mathbb{R})$ with respect to the fixed flag $Y_\bullet$ by
$$
\Delta(\alpha)= \overline{\Delta_{\mathbb{R}}(\alpha)}=\overline{\Delta_{\mathbb{Q}}(\alpha)}.
$$
\end{defin}
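For instance, when $n=1$, so that $X$ is a compact Riemann surface and the flag is simply $X\supseteq\{p\}$, the generalized Okounkov body of a big class $\alpha$ of mass $d=\int_X\alpha>0$ is the interval $[0,d]$: every positive current in $\alpha$ has Lelong number at most $d$ at $p$, while for any rational $0\leq c<d$ the current $c[p]+\omega'$, with $\omega'$ a K\"ahler form of mass $d-c$, is a K\"ahler current in $\alpha$ with algebraic singularities and Lelong number $c$ at $p$. This agrees with the classical equality $\Delta(L)=[0,\deg L]$ for big line bundles on curves.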
We have the following properties for generalized Okounkov bodies:
\begin{proposition}\it
\label{body continuity}
Let $\alpha$ and $\beta$ be big classes, $\omega$ be any K\"ahler class. Then:
\begin{enumerate}[\upshape (i)]
\item $
\Delta(\alpha)+\Delta(\beta)\subseteq \Delta(\alpha+\beta).
$
\item $\vol_{\mathbb{R}^n}(\Delta(\omega))>0.$
\item $
\Delta(\alpha)=\bigcap_{\epsilon>0}\Delta(\alpha+\epsilon\omega).
$
\end{enumerate}
\end{proposition}
\begin{proof}
(i) is obvious from the definition of the generalized Okounkov body. To prove (ii), we use induction on the dimension. The result is obvious if $n=1$; assume now that (ii) holds in dimension $n-1$. We choose $t>0$ small enough such that $\omega-tY_1$ is still a K\"ahler class. By the main theorem of \cite{CT14}, any K\"ahler current $T\in (\omega-tY_1)|_{Y_1}$ with analytic singularities can be extended to a K\"ahler current $\widetilde{T}\in \omega-tY_1$, and thus we have
$$
\Delta(\omega)\bigcap t\times{\mathbb{R}}^{n-1}=t\times \Delta((\omega-tY_1)|_{Y_1}),
$$
where $\Delta((\omega-tY_1)|_{Y_1})$ is the generalized Okounkov body of $(\omega-tY_1)|_{Y_1}$ with respect to the flag
$$
Y_1\supset Y_2\supset \ldots \supset Y_n=\{p\}.
$$
By the induction hypothesis, we have $\vol_{\mathbb{R}^{n-1}}(\Delta((\omega-tY_1)|_{Y_1}))>0$. Since $\Delta(\omega)$ is convex and contains the origin, it contains the cone over $t\times \Delta((\omega-tY_1)|_{Y_1})$ with vertex at the origin, hence $\vol_{\mathbb{R}^n}(\Delta(\omega))>0.$
Now we are ready to prove (iii). By (i), we have
$$\Delta(\alpha+\epsilon_1\omega)+\Delta((\epsilon_2-\epsilon_1)\omega)\subseteq \Delta(\alpha+\epsilon_2\omega)$$
if $0\leq \epsilon_1< \epsilon_2$. Since $\Delta(\omega)$ contains the origin, we have
$$
\Delta(\alpha)\subseteq\bigcap_{\epsilon>0}\Delta(\alpha+\epsilon\omega),
$$
and
$$
\Delta(\alpha+\epsilon_1\omega)\subseteq \Delta(\alpha+\epsilon_2\omega).
$$
From the concavity property, we conclude that $\vol_{\mathbb{R}^n}(\Delta(\alpha+t\omega))$ is a concave function for $t\geq 0$, thus continuous. Then we have
$$
\vol_{\mathbb{R}^n}(\bigcap_{\epsilon>0}\Delta(\alpha+\epsilon\omega))=\vol_{\mathbb{R}^n}(\Delta(\alpha))>0.
$$
Since they are all closed and convex, we have
$$
\Delta(\alpha)=\bigcap_{\epsilon>0}\Delta(\alpha+\epsilon\omega).
$$
\end{proof}
\begin{rem}
We don't know whether $\vol_{\mathbb{R}^n}(\Delta(\alpha))$ is independent of the choice of the admissible flag. However, in the next subsection we will prove that in the case of surfaces we have
$$
\vol_X(\alpha)=2\vol_{\mathbb{R}^2}(\Delta(\alpha)),
$$
in particular the Euclidean volume of the generalized Okounkov body is independent of the choice of the flag. We conjecture that
$$
\vol_{\mathbb{R}^n}(\Delta(\alpha))=\frac{1}{n!}\cdot \vol_X(\alpha),
$$
as we proposed in the introduction.
\end{rem}
\subsection{Generalized Okounkov bodies on complex surfaces}
Now we will mainly focus on generalized Okounkov bodies of compact K\"ahler surfaces. In this section, $X$ denotes a compact K\"ahler surface. We fix henceforth an admissible flag
$$
X\supseteq C\supseteq \{x\},
$$
on $X$, where $C\subset X$ is an irreducible curve and $x\in C$ is a smooth point.
\begin{defin}
\label{restrict dfn}
For any big class $\alpha\in H^{1,1}(X,\mathbb{R})$, we denote the \emph{restricted $\mathbb{R}$-convex body} of $\alpha$ along $C$ by $\Delta_{\mathbb{R},X|C}(\alpha)$, which is defined to be the set of Lelong numbers $\nu(T|_C,x)$, where $T\in \alpha$ ranges among all the positive currents with analytic singularities such that $C\not\subseteq E_+(T)$.
The \emph{restricted Okounkov body} of $\alpha$ along $C$ is defined as
$$\Delta_{X\mid C}(\alpha):=\overline{\Delta_{\mathbb{R},X|C}(\alpha)}.$$
\end{defin}
When $\alpha=c_1(L)$ for some big line bundle $L$ on $X$, note that $\Delta_{X|C}(\alpha)=\Delta_{X|C}(L)$, where $\Delta_{X|C}(L)$ is defined in \cite{LM09}. When $L$ is ample, we have $\Delta_{X|C}(L)=\Delta(L|_C)$. Indeed, it suffices to show that for any section $s\in H^0(C,\oc_C(L))$, there exists an integer $m$ such that $s^{\otimes m}$ can be extended to a section $S_m\in H^0(X,\oc_X(mL))$. This is guaranteed by the Kodaira vanishing theorem: for $m$ large enough, $H^1(X,\oc_X(mL-C))=0$, so the restriction map $H^0(X,\oc_X(mL))\rightarrow H^0(C,\oc_C(mL))$ is surjective. When $\alpha$ is any ample class, a very similar statement already appeared in the proof of Proposition \ref{body continuity}; however, the proof there relies on the difficult extension theorem of \cite{CT14}. Here we give a simple and direct proof when $X$ is a complex surface, although the idea of the proof is still borrowed from \cite{CT14}.
\begin{proposition}\it
\label{ample}
If $\alpha$ is an ample class, then we have
$$
\Delta_{X| C}(\alpha)=\Delta(\alpha|_C)=[0,\alpha\cdot C].
$$
\end{proposition}
\begin{proof}
From Definition \ref{restrict dfn}, we have $\Delta_{X| C}(\alpha)\subseteq \Delta(\alpha|_C)$. It suffices to prove that for any K\"ahler current $T\in \alpha|_C$ with mild analytic singularities, we have a positive current $\widetilde{T}\in \alpha$ with analytic singularities such that $\widetilde{T}|_C=T$. First we choose a K\"ahler form $\omega\in\alpha$. By assumption, we can write $T=\omega|_C + dd^c\varphi$ for some quasi-plurisubharmonic function $\varphi$ on $C$ which has mild analytic singularities. Our goal is to extend $\varphi$ to a function $\Phi$ on $X$ such that $\omega + dd^c\Phi$ is a K\"ahler current with analytic singularities.
Choose $\epsilon>0$ small enough so that $$T=\omega|_C + dd^c\varphi\geq3\epsilon\omega|_C$$
holds as currents on $C$. We can cover $C$ by finitely many charts $ \{W_j\}_{1\leq j\leq N}$ satisfying the following properties:
\begin{enumerate}[\upshape (i)]
\item On each $W_j$ ($j\leq k$) there are local coordinates $(z^{(j)}_1,z^{(j)}_2)$ such that $C\bigcap W_j=\{z^{(j)}_2=0\}$ and $$\varphi=\frac{c_j}{2}\log |z^{(j)}_1|^2+g_j(z^{(j)}_1),$$
where $g_j(z^{(j)}_1)$ is smooth and bounded on $W_j\bigcap C$. We denote by $x_j$ the single pole of $T$ in $W_j$ ($j\leq k$);
\item On each $W_j$ ($j>k$) the local potential $\varphi$ is smooth and bounded on $W_j\bigcap C$;
\item $x_i\not\in\overline{W_j}$ for $i=1,\ldots,k$ and $j\neq i$.
\end{enumerate}
Define a function $\varphi_j$ on $W_j$ (with analytic singularities) by
\begin{equation}
\varphi_j(z^{(j)}_1,z^{(j)}_2)=\left\{
\begin{aligned}
&\varphi(z^{(j)}_1)+A|z^{(j)}_2|^2 \quad &\text{if} \quad j>k,\\
&\frac{c_j}{2}\log (|z^{(j)}_1|^2+|z^{(j)}_2|^2)+g_j(z^{(j)}_1)+A|z^{(j)}_2|^2 \quad &\text{if} \quad j\leq k,
\end{aligned}
\right.\nonumber
\end{equation}
where $A>0$ is a constant. If we shrink the $W_j$'s slightly, still preserving the property that $C\subseteq \bigcup W_j$, we can choose $A$ sufficiently large so that
$$
\omega+dd^c\varphi_j\geq2\epsilon\omega,
$$
holds on $W_j$ for all $j$. We also need to construct slightly smaller open sets $W'_j\subset \subset U_j \subset \subset W_j $ such that $\bigcup W'_j$ is still a covering of $C$.
By construction $\varphi_j$ is smooth when $j>k$, and $\varphi_j$ is smooth outside the log pole $x_j$ when $j\leq k$. By property (iii) above, we can glue the functions $\varphi_j$ together to produce a K\"ahler current
$$\widetilde{T}=\omega|_U+dd^c\widetilde{\varphi}\geq \epsilon\omega$$
defined in a neighborhood $U$ of $C$ in $X$, thanks to Richberg's gluing procedure. Indeed, $\varphi_i$ is smooth on $W_i\bigcap W_j$ for any $j\neq i$, which is a sufficient condition for applying the Richberg technique. From the construction of $\widetilde{\varphi}$, we know that $\widetilde{\varphi}|_C=\varphi$, that $\widetilde{\varphi}$ has log poles at every $x_i$, and that it is continuous outside $x_1,\ldots,x_k$.
On the other hand, since $\alpha$ is an ample class, there exists a rational number $\delta>0$ such that $\alpha-\delta\{C\}$ is still ample, thus we have a K\"ahler form $\omega_1\in \alpha-\delta\{C\}$. We can write $\omega_1+\delta[C]=\omega+dd^c\phi$, where $\phi$ is smooth outside $C$, and for any point $x\in C$, we have
$$
\phi=\frac{\delta}{2}\log |z_2|^2+O(1),
$$
where $z_2$ is the local equation of $C$.
Since $\phi$ is continuous outside $C$, we can choose a large constant $B>0$ such that $\phi>\widetilde{\varphi}-B$ in a neighborhood of $\partial U$. Therefore we define
$$\Phi=\begin{cases}
\max\{\widetilde{\varphi},\phi+B\}\ &\text{on}\ U\\
\phi+B &\text{on}\ X-U,
\end{cases}$$
which is well defined on the whole of $X$, and satisfies $\omega+dd^c\Phi\geq \epsilon'\omega$ for some $\epsilon'>0$. Since $\phi=-\infty$ on $C$, while $\widetilde{\varphi}|_C=\varphi$, it follows that $\Phi|_C=\varphi$.
We claim that $\Phi$ also has analytic singularities. Indeed, around $x_j$ we have
$$
\widetilde{\varphi}(z_1,z_2)=\frac{c_j}{2}\log (|z_1|^2+|z_2|^2)+O(1),
$$
and
$$
\phi(z_1,z_2)=\frac{\delta}{2}\log |z_2|^2+O(1),
$$
for some local coordinates $(z_1,z_2)$ centered at $x_j$. Thus locally we have
$$
\max\{\widetilde{\varphi},\phi+B\}=\frac{1}{2}\log (|z_1|^{2c_j}+|z_2|^{2c_j}+|z_2|^{2\delta})+O(1).
$$
Since $\Phi$ is continuous outside $x_1,\ldots,x_k$, our claim is proved.
\end{proof}
\begin{lem}\it
\label{big nef}
Let $\alpha$ be a big and nef class on $X$. Then for any $\epsilon>0$, there exists a K\"ahler current $T_\epsilon\in \alpha$ with analytic singularities such that the Lelong number $\nu(T_\epsilon,x)<\epsilon$ for every point $x\in X$. Moreover, $T_\epsilon$ also satisfies
$$
E_+(T_\epsilon)=E_{nK}(\alpha).
$$
\end{lem}
\begin{proof}
Since $\alpha$ is big, there exists a K\"ahler current $T_0\in\alpha$ with analytic singularities such that $E_+(T_0)=E_{nK}(\alpha)$ and $T_0>\omega$ for some K\"ahler form $\omega$. Since $\alpha$ is also a nef class, for any $\delta>0$ there exists a smooth form $\theta_\delta\in\alpha$ such that $\theta_\delta\geq -\delta\omega$. Thus $T_\delta:=\delta T_0+(1-\delta)\theta_\delta\geq \delta^2\omega$ is a K\"ahler current with analytic singularities satisfying $$E_+(T_\delta)=E_+(T_0)=E_{nK}(\alpha),$$
and
$$\nu(T_\delta,x)=\delta\nu(T_0,x)$$
for $x\in X$. Since the Lelong number $\nu(T_0,x)$ is upper semi-continuous in $x$ (and thus bounded from above on the compact manifold $X$), $\nu(T_\delta,x)$ converges uniformly to zero as $\delta$ tends to $0$. The lemma is proved.
\end{proof}
\begin{proposition}\it
\label{restrict volume}
Let $\alpha$ be a big and nef class such that $C\not\subseteq E_{nK}(\alpha)$. Then we have $$
\Delta_{X| C}(\alpha)=\Delta(\alpha|_C)=[0,\alpha\cdot C].
$$
\end{proposition}
\begin{proof}
Assume $E_{nK}(\alpha)=\bigcup_{i=1}^{r}C_i$, where each $C_i$ is an irreducible curve. By Lemma \ref{big nef}, for any $\epsilon>0$ there exists a K\"ahler current $T_\epsilon\in \alpha$ with analytic singularities such that $$E_+(T_\epsilon)=E_{nK}(\alpha)=\text{\rm Null}(\alpha)=\bigcup_{i=1}^{r}C_i$$
and $\nu(T_\epsilon,x)<\epsilon$ for all $x\in X$. Thus the Siu decomposition
$$T_\epsilon=R_\epsilon+\displaystyle\sum_{i=1}^{r}a_{i,\epsilon}C_i$$
satisfies $0\leq a_{i,\epsilon}<\epsilon$, and $R_\epsilon$ is a K\"ahler current whose analytic singularities are isolated points. By Remark \ref{mf=f}, the cohomology class $\{R_\epsilon\}$ is a K\"ahler class and converges to $\alpha$ as $\epsilon\rightarrow 0$. In particular, $|\{R_\epsilon\}\cdot C-\alpha\cdot C|<A\epsilon$, where $A$ is a constant.
By Proposition \ref{ample}, there exists a K\"ahler current $S_\epsilon\in \{R_\epsilon\}$ with analytic singularities such that $C\not\subseteq E_+(S_\epsilon)$ and $-\epsilon<\nu(S_\epsilon|_C,x)-\{R_\epsilon\}\cdot C<0$. Thus $T'_\epsilon:=S_\epsilon+\sum_{i=1}^{r}a_{i,\epsilon}C_i$ is a K\"ahler current in $\alpha$ with analytic singularities, and $-(1+A)\epsilon<\nu(T'_\epsilon|_C,x)-\alpha\cdot C$. Since $\alpha$ is big and nef, there exists a K\"ahler current $P_\epsilon$ in $\alpha$ with analytic singularities such that $\nu(P_\epsilon|_C,x)<\epsilon$. Therefore, by the definition of $\Delta_{X| C}(\alpha)$ and the convexity property we deduce that $[0,\alpha\cdot C]\subseteq \Delta_{X| C}(\alpha)$. On the other hand, $\Delta_{X| C}(\alpha)\subseteq \Delta(\alpha|_C)=[0,\alpha\cdot C]$ by definition. The proposition is proved.
\end{proof}
\begin{lem}\it
\label{restrict body}
Let $\alpha$ be a big class on $X$ with divisorial Zariski decomposition $\alpha=Z(\alpha)+N(\alpha)$. Assume
that $C\not\subseteq E_{nK}(Z(\alpha))$, so that $C\not\subseteq {\rm Supp}(N(\alpha))$ by Theorem \ref{contain exceptional}. Moreover, set
$$f(\alpha)=\nu_x(N(\alpha)|_C),\ \ \ g(\alpha)=\nu_x(N(\alpha)|_C)+Z(\alpha)\cdot C,$$
where $\nu_x(N(\alpha)|_C)=\nu(N(\alpha)|_C,x)$.
Then the restricted Okounkov body of $\alpha$ along $C$ is the interval
$$
\Delta_{X|C}(\alpha)=[f(\alpha),g(\alpha)].
$$
\end{lem}
\begin{proof}
First, by Remark \ref{bijection} we conclude that $T\mapsto T-N(\alpha)$ is a bijection between the positive currents in $\alpha$ and those in $Z(\alpha)$, thus we have
$$
E_{nK}(\alpha)=E_{nK}(Z(\alpha))\bigcup \text{supp}(N(\alpha)),
$$
and
\begin{equation}
\label{same nK}
C\not\subseteq E_{nK}(Z(\alpha)) \iff C\not\subseteq E_{nK}(\alpha).
\end{equation}
By the assumption of the lemma, $N(\alpha)|_C$ is a well-defined positive current with analytic singularities on $C$. By the definition of $\Delta_{\mathbb{R},X|C}(\alpha)$, we have
$$
\Delta_{\mathbb{R},X|C}(\alpha)=\Delta_{\mathbb{R},X|C}(Z(\alpha))+\nu_x(N(\alpha)|_C).
$$
We take the closure of the sets to get
$$
\Delta_{X|C}(\alpha)=\Delta_{X|C}(Z(\alpha))+\nu_x(N(\alpha)|_C).
$$
Since $\alpha$ is big, $Z(\alpha)$ is big and nef, and by Proposition \ref{restrict volume} we have $\Delta_{X|C}(Z(\alpha))=[0,Z(\alpha)\cdot C]$. This proves the lemma.
\end{proof}
\begin{defin}
If $\alpha$ is big and $\beta$ is pseudo-effective, then the slope of $\beta$ with respect to $\alpha$ is defined as
$$
s=s(\alpha,\beta)=\sup\{t>0\mid \alpha-t\beta\ \text{is big}\}.
$$
\end{defin}
\begin{rem}
\label{boundary}
Since the big cone is open, the set $\{t>0\mid \alpha-t\beta\ \text{is big}\}$ is open in $\mathbb{R}^+$. Thus $\alpha-s\beta$ belongs to the boundary of the big cone $\mathcal{E}$, and $\vol_X(\alpha-s\beta)=0$.
\end{rem}
\begin{proof}[Proof of Theorem \ref{Okounkov}]
For $t\in [0,s)$, we put $\alpha_t=\alpha-t\{C\}$, and let $Z_t:=Z(\alpha_t)$ and $N_t:=N(\alpha_t)$ be the positive and negative part of the divisorial Zariski decomposition of $\alpha_t$.
(i) First we assume $C$ is nef. By Theorem \ref{contain exceptional}, the prime divisors in $E_{nK}(Z(\alpha_t))$ form an exceptional family; since a nef curve cannot belong to an exceptional family, $C\not\subseteq E_{nK}(Z(\alpha_t))$, and therefore $C\not\subseteq E_{nK}(\alpha_t)$ by (\ref{same nK}). By Lemma \ref{restrict body} we have $
\Delta_{X| C}(\alpha_t)=[\nu_x(N_t|_C),Z_t\cdot C+\nu_x(N_t|_C)].
$
By the definitions of the $\mathbb{R}$-convex body and the restricted $\mathbb{R}$-convex body, we have
$$
\Delta_{\mathbb{R}}(\alpha)\bigcap t\times \mathbb{R}= t\times \Delta_{\mathbb{R},X|C}(\alpha_t).
$$
Thus
$$
t\times \overline{\Delta_{\mathbb{R},X|C}(\alpha_t)}\subseteq \overline{\Delta_{\mathbb{R}}(\alpha)}\bigcap t\times \mathbb{R}.
$$
However, since both $\Delta_{\mathbb{R}}(\alpha)$ and $\Delta_{\mathbb{R},X|C}(\alpha_t)$ are closed convex sets in $\mathbb{R}^2$ and $\mathbb{R}$, we have
$$
t\times \overline{\Delta_{\mathbb{R},X|C}(\alpha_t)}= \overline{\Delta_{\mathbb{R}}(\alpha)}\bigcap t\times \mathbb{R},
$$
therefore
\begin{equation}
\label{slice}
t\times \Delta_{X|C}(\alpha_t)=\Delta(\alpha)\bigcap t\times \mathbb{R}.
\end{equation}
Let
$$
f(t)=\nu_x(N_t|_C)\ ,\ g(t)=Z_t\cdot C+\nu_x(N_t|_C),
$$
then $\Delta(\alpha)\bigcap [0,s)\times \mathbb{R}$ is the region bounded by the graphs of $f(t)$ and $g(t)$.
Now we prove the piecewise linearity of $f(t)$ and $g(t)$. By Lemma \ref{component}, we have $N_{t_1}\leq N_{t_2}$ if $0\leq t_1\leq t_2<s$, thus $f(t)$ is increasing. Since $N_t$ is an exceptional divisor by Theorem \ref{orthogonal}, the number of the prime components of $N_t$ is uniformly bounded by the Picard number $\rho(X)$. Thus we can denote $N_t=\sum_{i=1}^{r}a_i(t)N_i$, where each $a_i(t)\geq 0$ is an increasing and continuous function. Moreover, there exists $0=t_0<t_1<\ldots<t_k=s$ such that the prime components of $N_t$ are the same when $t$ lies in the interval $(t_i,t_{i+1})$ for $i=0,\ldots,k-1$, and the number of prime components of $N_t$ increases at every $t_i$ for $i=1,\ldots,k-1$. We write $s_i=\frac{t_{i-1}+t_i}{2}$ for $i=1,\ldots,k$.
We denote the linear subspace of $H^{1,1}(X,\mathbb{R})$ spanned by the prime components of $N_{s_i}$ by $V_i$, and let $V_i^{\bot}$ be the orthogonal space of $V_i$ with respect to $q$. By the proof of Lemma \ref{component}, for $t\in (t_{i-1},t_i)$ we have
\begin{eqnarray}
\label{decompose}
Z_t=Z_{s_i}+(s_i-t)\{C\}_i^\bot\\
\label{decompse 2}
N_t=N_{s_i}+(s_i-t)C_i^{\parallel},
\end{eqnarray}
where $\{C\}_i^\bot$ is the projection of $\{C\}$ to $V_i^\bot$, and $C_i^\parallel$ is a linear combination of the prime components of $N_{s_i}$ satisfying that the cohomology class $\{C_i^\parallel\}$ is equal to the projection of $\{C\}$ to $V_i$. By Theorem \ref{contain exceptional}, the prime components of $N_{s_i}$ are independent, thus $C_i^\parallel$ is uniquely defined.
The piecewise linearity property of $f(t)$ and $g(t)$ follows directly from (\ref{decompose}) and (\ref{decompse 2}), and thus $f(t)$ and $g(t)$ can be continuously extended to $s$. Therefore we conclude that $\Delta(\alpha)$ is the region bounded by the graphs of $f(t)$ and $g(t)$ for $t\in[0,s]$. Thus the vertices of $\Delta(\alpha)$ are contained in the set $\{(t_i,f(t_i)),(t_j,g(t_j))\in \mathbb{R}^2\mid i,j=0,\ldots,k\}$. This
means that a vertex of $\Delta(\alpha)$ may only occur for those $t\in[0,s]$, where a new curve appears in $N_t$. Since $r\leq \rho(X)$, the number of vertices is bounded by $2\rho(X)+2$.
The fact that $f(t)$ is convex and $g(t)$ concave is a consequence of the convexity of $\Delta(\alpha)$.
By (\ref{slice}), we have
\[\begin{split}
2\vol_{\mathbb{R}^2}(\Delta(\alpha))&=2\int_{0}^{s}\vol_\mathbb{R}(\Delta_{X|C}(\alpha_t))\mathrm{d}t\\
&=2\int_{0}^{s}Z_t\cdot C\mathrm{d}t\\
&=\vol_X(\alpha)-\vol_X(\alpha-s C)\\
&=\vol_X(\alpha),
\end{split} \]
where the second equality follows by Proposition \ref{restrict volume}, the third one by Theorem \ref{differential} and the last one by Remark \ref{boundary}. We have proved the theorem under the assumption that $C$ is nef.
(ii) Now we prove the theorem when $C$ is not nef, i.e., $C^2<0$. Recall that $a:=\sup\{t\geq 0\mid C\subseteq E_{nK}(\alpha_t)\}$. By (\ref{same nK}), if $C\subseteq E_{nK}(\alpha_t)$ for some $t\in[0,s)$, we have $C\subseteq E_{nK}(Z(\alpha_t))$. By the proof of Theorem \ref{differential 2} we have
\begin{gather}
Z(\alpha_{t'})\cdot C=0,\nonumber\\
Z(\alpha_{t'})=Z(\alpha_t),\nonumber
\end{gather}
for $0\leq t'\leq t$. Thus we have
$$
\{0\leq t <s\mid C\not\subseteq E_{nK}(\alpha_t)\}=(a,s),
$$
and $\Delta(\alpha)$ is contained in $[a,s]\times \mathbb{R}$. By Theorem \ref{differential 2} we also have
\[\begin{split}
2\vol_{\mathbb{R}^2}(\Delta(\alpha))&=2\int_{a}^{s}\vol_\mathbb{R}(\Delta_{X|C}(\alpha_t))\mathrm{d}t\\
&=2\int_{a}^{s}Z_t\cdot C\mathrm{d}t\\
&=\vol_X(\alpha_a)-\vol_X(\alpha_s)\\
&=\vol_X(\alpha).
\end{split} \]
Since the prime components of $N_{t_1}$ are contained in those of $N_{t_2}$ if $a<t_1\leq t_2<s$, using the same arguments as above, we obtain the piecewise linearity of $f(t)$ and $g(t)$, which can again be extended continuously to $s$. This completes the proof of the theorem. \end{proof}
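To illustrate Theorem \ref{Okounkov}, let $X$ be the blow-up of $\mathbb{P}^2$ at a point $q$, let $E$ be the exceptional curve and $H$ the pullback of the hyperplane class, and take $\alpha=H$ together with the flag given by the strict transform $C$ of a line through $q$ (so $\{C\}=H-E$ and $C$ is nef) and a point $x\in C\setminus E$. For $t\in[0,1)$ we have $\alpha_t=(1-t)H+tE$ and $\alpha_t\cdot E=-t$, so the divisorial Zariski decomposition is $Z_t=(1-t)H$, $N_t=tE$, and $s=1$. Since $x\notin E$ we get $f(t)=\nu_x(tE|_C)=0$ and $g(t)=Z_t\cdot C=1-t$, so $\Delta(\alpha)$ is the triangle with vertices $(0,0)$, $(1,0)$ and $(0,1)$, whose Euclidean volume is $1/2=\frac{1}{2}\vol_X(\alpha)$, as predicted by the theorem.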
\begin{rem}
If $X$ is a projective surface, by the main result in \cite{BKS03}, the cone of big divisors of $X$ admits a locally finite decomposition into locally polyhedral subcones such that the support of the negative part in the Zariski decomposition is constant on each subcone. It is worth noting that if we only assume $X$ to be K\"ahler, this decomposition still holds if we replace the cone of big divisors by the cone of big classes and use the divisorial Zariski decomposition instead. This property also ensures that the generalized Okounkov bodies are polygons.
\end{rem}
\subsection{Generalized Okounkov bodies for pseudo-effective classes}
Throughout this subsection, $X$ denotes a compact K\"ahler surface unless otherwise stated. Our main goal in this subsection is to study the behavior of generalized Okounkov bodies on the boundary of the big cone.
\begin{defin}
Let $X$ be any K\"ahler manifold, if $\alpha\in H^{1,1}(X,\mathbb{R})$ is any pseudo-effective class. We define the \emph{generalized Okounkov body} $\Delta(\alpha)$ with respect to the fixed flag by
$$
\Delta(\alpha):=\bigcap_{\epsilon>0}\Delta(\alpha+\epsilon\omega),
$$
where $\omega$ is any K\"ahler class.
\end{defin}
It is easy to check that our definition does not depend on the choice of $\omega$, and if $\alpha$ is big, by Proposition \ref{body continuity}, the definition is consistent with Definition \ref{generalized Okounkov}. Now we recall the definition of numerical dimension for any real (1,1)-class.
\begin{defin}[numerical dimension]
Let $X$ be a compact K\"ahler manifold. For a class $\alpha\in H^{1,1}(X,\mathbb{R})$, the \emph{numerical dimension} $n(\alpha)$ is defined to be $-\infty$ if $\alpha$ is not pseudo-effective, and
$$
n(\alpha)=\text{\rm max}\{p\in\mathbb{N}\mid\langle \alpha^p\rangle\neq 0\},
$$
if $\alpha$ is pseudo-effective.
\end{defin}
We recall that the right-hand side of the equation above involves the \textit{positive intersection product} $\langle \alpha^{p}\rangle\in H^{p,p}_{\geq0}(X,\mathbb{R})$ defined in \cite{BDPP13}. When $X$ is a K\"ahler surface, we simply have
$$
n(\alpha)=\text{\rm max}\{p\in\mathbb{N}\mid Z(\alpha)^p\neq 0\}\in \{0,1,2\}.
$$
If $n(\alpha)=2$, then $\alpha$ is big and this situation was studied in the previous subsection. Throughout this subsection, we assume $\alpha\in\partial\mathcal{E}$.
\begin{lem}\it
\label{construct nef}
Let $\{N_1,\ldots,N_r\}$ be an exceptional family of prime divisors and let $\omega$ be any K\"ahler class. Then there exist unique positive numbers $b_1,\ldots,b_r$ such that $\omega+\sum_{i=1}^{r}b_iN_i$ is big and nef and satisfies $\text{\rm Null}(\omega+\sum_{i=1}^{r}b_iN_i)=\bigcup_{i=1}^{r}N_i$.
\end{lem}
\begin{proof}
If we set
\begin{equation}
\left(
\begin{array}{c}
b_1\\
\vdots\\
b_r\\
\end{array}
\right)=
-S^{-1}\cdot\left(
\begin{array}{c}
\omega\cdot N_1\\
\vdots\\
\omega\cdot N_r\\
\end{array}
\right)\nonumber,
\end{equation}
where $S$ denotes the intersection matrix of $\{N_1,\ldots, N_r\}$, we have $(\omega+\sum_{i=1}^{r}b_iN_i)\cdot N_j=0$ for $j=1,\ldots,r$. By Lemma \ref{BKS}, we conclude that all $b_i$ are positive and thus $\omega+\sum_{i=1}^{r}b_iN_i$ is big and nef.
\end{proof}
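For example, if $r=1$ and $N_1$ is a single irreducible curve with $N_1^2=-1$ (such as the exceptional curve of a blown-up point), the formula gives $b_1=\omega\cdot N_1$: indeed $(\omega+b_1N_1)\cdot N_1=\omega\cdot N_1-b_1=0$, while $(\omega+b_1N_1)\cdot C'\geq\omega\cdot C'>0$ for every irreducible curve $C'\neq N_1$, so $\omega+b_1N_1$ is nef with $\text{\rm Null}(\omega+b_1N_1)=N_1$; it is big since $(\omega+b_1N_1)^2=\omega^2+b_1^2>0$.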
\begin{proposition}\it
\label{Zariski pseudo}
Let $\alpha$ be any pseudo-effective class with $N(\alpha)=\sum_{i=1}^{r}a_iN_i$ and let $\omega$ be a K\"ahler class. Then for $\epsilon>0$ small enough, we have the divisorial Zariski decomposition
\begin{eqnarray}
Z(\alpha+\epsilon\omega)=Z(\alpha)+\epsilon(\omega+\sum_{i=1}^{r}b_iN_i),\nonumber\\
N(\alpha+\epsilon\omega)=\sum_{i=1}^{r}(a_i-\epsilon b_i)N_i,\nonumber
\end{eqnarray}
where $b_i$ is the positive number defined in Lemma \ref{construct nef}.
\end{proposition}
\begin{proof}
Since $Z(\alpha)+\epsilon(\omega+\sum_{i=1}^{r}b_iN_i)$ is nef and orthogonal to all the $N_i$ by Lemma \ref{construct nef}, Theorem \ref{orthogonal} shows that the divisorial Zariski decomposition stated in the proposition holds as soon as $\epsilon$ satisfies $a_i-\epsilon b_i>0$ for all $i$.
\end{proof}
If $n(\alpha)=0$, we have $Z(\alpha)=0$, and thus $\alpha=\sum_{i=1}^{r} a_iN_i$ is the class of an exceptional effective $\mathbb{R}$-divisor. We fix a flag
$$
X\supseteq C\supseteq \{x\},
$$
where $C\neq N_i$ for all $i$. Then we have
\begin{thm}\it
For any pseudo-effective class $\alpha$ whose numerical dimension $n(\alpha)=0$, we have
$$\Delta_{(C,x)}(\alpha)=0\times \nu_x(N(\alpha)|_C).$$
\end{thm}
\begin{proof}
We assume $N(\alpha)=\sum_{i=1}^{r}a_iN_i$. Fix a K\"ahler class $\omega$; by Proposition \ref{Zariski pseudo}, we have
\begin{eqnarray}
Z(\alpha+\epsilon\omega)=\epsilon(\omega+\sum_{i=1}^{r}b_iN_i)\label{0z},\\
N(\alpha+\epsilon\omega)=\sum_{i=1}^{r}(a_i-\epsilon b_i)N_i\label{0n},
\end{eqnarray}
where $b_i$ is the positive number defined in Lemma \ref{construct nef}. Since $T\mapsto T-N(\alpha+\epsilon\omega)$ is a bijection between the positive currents in $\alpha+\epsilon\omega$ and those in $Z(\alpha+\epsilon\omega)$, we have
$$
\Delta(\alpha+\epsilon\omega)=\epsilon\Delta(\omega+\sum_{i=1}^{r}b_iN_i)+\nu(\sum_{i=1}^{r}(a_i-\epsilon b_i)N_i),
$$
where $\nu(\sum_{i=1}^{r}(a_i-\epsilon b_i)N_i)=\nu_{(C,x)}(\sum_{i=1}^{r}(a_i-\epsilon b_i)N_i)$ is the valuation-like function defined in Section \ref{defin}. Thus the diameter of $\Delta(\alpha+\epsilon\omega)$ converges to 0 when $\epsilon$ tends to 0, and we conclude that $\Delta(\alpha)$ is a single point in $\mathbb{R}^2$. Since
\begin{eqnarray}
\Delta(\alpha+\epsilon\omega)\bigcap 0\times \mathbb{R}&=& 0\times\Delta_{X|C}(\alpha+\epsilon\omega)\nonumber\\
&=&0\times [\nu_x(N(\alpha+\epsilon\omega)|_C),\nu_x(N(\alpha+\epsilon\omega)|_C)+Z(\alpha+\epsilon\omega)\cdot C]\label{inter}\nonumber,
\end{eqnarray}
by (\ref{0z}) and (\ref{0n}) we have
\begin{eqnarray}
\Delta(\alpha)\bigcap0\times \mathbb{R}=0\times \nu_x(\sum_{i=1}^{r}a_iN_i|_C)\nonumber,
\end{eqnarray}
which proves the first part of Theorem \ref{okounkov psf}.
\end{proof}
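As a simple example, let $X$ be the blow-up of $\mathbb{P}^2$ at a point, $E$ the exceptional curve and $\alpha=c\{E\}$ with $c>0$, so that $Z(\alpha)=0$, $N(\alpha)=cE$ and $n(\alpha)=0$. If $C$ is the strict transform of a line through the blown-up point (so $C\neq E$ and $C\cdot E=1$) and $x$ is the intersection point $C\cap E$, the theorem gives $\Delta_{(C,x)}(\alpha)=0\times c$, i.e.\ the single point $(0,c)$, while for a point $x\in C\setminus E$ it gives the single point $(0,0)$.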
If $n(\alpha)=1$, then $Z(\alpha)$ is nef but not big. If there exists an irreducible curve $C$ such that $Z(\alpha)\cdot C>0$, we fix the flag
$$
X\supseteq C\supseteq \{x\},
$$
then we have
\begin{thm}\it
For any pseudo-effective class $\alpha$ whose numerical dimension $n(\alpha)=1$, we have
$$\Delta(\alpha)=0\times [\nu_x(N(\alpha)|_C),\nu_x(N(\alpha)|_C)+Z(\alpha)\cdot C].$$
\end{thm}
\begin{proof}
By the assumption $Z(\alpha)\cdot C>0$, we know that $C\not\subseteq {\rm Supp}(N(\alpha))$. By Proposition \ref{Zariski pseudo}, when $\epsilon$ is small enough, the divisorial Zariski decomposition of $\alpha+\epsilon\omega$ is
\begin{eqnarray}
Z(\alpha+\epsilon\omega)=Z(\alpha)+\epsilon(\omega+\sum_{i=1}^{r}b_iN_i),\label{z}\\
N(\alpha+\epsilon\omega)=\sum_{i=1}^{r}(a_i-\epsilon b_i)N_i,\label{n}
\end{eqnarray}
where $b_i$ is the positive number defined in Lemma \ref{construct nef}. Combining (\ref{z}) and (\ref{n}), we have
\begin{eqnarray}
\Delta(\alpha)\bigcap0\times \mathbb{R}&=& \bigcap_{\epsilon>0}\Delta(\alpha+\epsilon\omega)\bigcap 0\times \mathbb{R}\nonumber\\
&=& \bigcap_{\epsilon>0}0\times [\nu_x(N(\alpha+\epsilon\omega)|_C),\nu_x(N(\alpha+\epsilon\omega)|_C)+Z(\alpha+\epsilon\omega)\cdot C]\nonumber\\
&=& 0\times [\nu_x(\sum_{i=1}^{r}a_iN_i|_C),\nu_x(\sum_{i=1}^{r}a_iN_i|_C)+Z(\alpha)\cdot C]\nonumber.
\end{eqnarray}
Since we have
$$\vol_{\mathbb{R}^2}(\Delta(\alpha))=\lim\limits_{\epsilon\rightarrow 0}\vol_{\mathbb{R}^2}(\Delta(\alpha+\epsilon\omega))=\lim\limits_{\epsilon\rightarrow 0}\frac{1}{2}Z(\alpha+\epsilon\omega)^2=0,$$
and $\Delta(\alpha)$ is a closed convex set, we conclude that no point of $\Delta(\alpha)$ lies outside $0\times \mathbb{R}$: since $\vol_{\mathbb{R}}(\Delta(\alpha)\bigcap 0\times\mathbb{R})=Z(\alpha)\cdot C>0$, any such point would, by convexity, force $\vol_{\mathbb{R}^2}(\Delta(\alpha))>0$, a contradiction. This finishes the proof of Theorem \ref{okounkov psf}.
\end{proof}
\footnotesize
\noindent\textit{Acknowledgments.}
I would like to express my warmest gratitude to my thesis supervisor Professor Jean-Pierre Demailly for his many valuable suggestions and help in this work. I also would like to thank Professor Sen Hu for his constant encouragement. This research is supported by the China Scholarship Council.
| {
"timestamp": "2015-03-03T02:08:03",
"yymm": "1503",
"arxiv_id": "1503.00112",
"language": "en",
"url": "https://arxiv.org/abs/1503.00112",
"abstract": "The main goal of this article is to construct \"arithmetic Okounkov bodies\" for an arbitrary pseudo-effective (1,1)-class $\\alpha$ on a Kähler manifold. Firstly, using Boucksom's divisorial Zariski decompositions for pseudo-effective (1,1)-classes on compact Kähler manifolds, we prove the differentiability of volumes of big classes for Kähler manifolds on which modified nef cones and nef cones coincide; this includes Kähler surfaces. We then apply our differentiability results to prove Demailly's transcendental Morse inequality for these particular classes of Kähler manifolds. In the second part, we construct the convex body $\\Delta(\\alpha)$ for any big class $\\alpha$ with respect to a fixed flag by using positive currents, and prove that this newly defined convex body coincides with the Okounkov body when $\\alpha\\in {\\rm NS}\\_{\\mathbb{R}}(X)$; such convex sets $\\Delta(\\alpha)$ will be called generalized Okounkov bodies. As an application we prove that any rational point in the interior of Okounkov bodies is \"valuative\". Next we give a complete characterisation of generalized Okounkov bodies on surfaces, and show that the generalized Okounkov bodies behave very similarly to original Okounkov bodies. By the differentiability formula, we can relate the standard Euclidean volume of $\\Delta(\\alpha)$ in $\\mathbb{R}^2$ to the volume of a big class $\\alpha$, as defined by Boucksom; this solves a problem raised by Lazarsfeld in the case of surfaces. Finally, we study the behavior of the generalized Okounkov bodies on the boundary of the big cone, which are characterized by numerical dimension.",
"subjects": "Algebraic Geometry (math.AG); Complex Variables (math.CV)",
"title": "Transcendental Morse Inequality and Generalized Okounkov Bodies",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9890130551025345,
"lm_q2_score": 0.7154240018510026,
"lm_q1q2_score": 0.7075636777643414
} |
https://arxiv.org/abs/1712.10279 | Vector and Matrix Optimal Mass Transport: Theory, Algorithm, and Applications | In many applications such as color image processing, data has more than one piece of information associated with each spatial coordinate, and in such cases the classical optimal mass transport (OMT) must be generalized to handle vector-valued or matrix-valued densities. In this paper, we discuss the vector and matrix optimal mass transport and present three contributions. We first present a rigorous mathematical formulation for these setups and provide analytical results including existence of solutions and strong duality. Next, we present a simple, scalable, and parallelizable methods to solve the vector and matrix-OMT problems. Finally, we implement the proposed methods on a CUDA GPU and present experiments and applications. | \section{Introduction}
Optimal mass transport (OMT) is a subject with a long history.
Started by Monge \cite{Mon81} and developed by many great mathematicians \cite{Kan42,Bre91,GanMcc96,Mcc97,JorKinOtt98,BenBre00,OttVil00}, the subject now has
incredibly rich theory and applications. It has found numerous applications in different areas such as partial differential equations, probability theory, physics, economics, image processing, and control \cite{Eva99,HakTanAng04,TanGeoTan10,MueKarKolTan13,CheGeoPav14e,CheGeoPav15b,Che16}. See also \cite{Rac98,Vil03,AmbGigSav06} and references therein.
However, in many applications such as color image processing, there is more than one piece of information associated with each spatial coordinate,
and such data can be interpreted as vector-valued or matrix-valued densities.
As the classical optimal mass transport works with scalar probability densities,
such applications require a new notion of mass transport.
For this purpose, Chen et al.\ \cite{CheGeoTan16,CheGeoTan17,CheGeoNinTan17, chen2017}
recently developed a framework for vector-valued and matrix-valued optimal mass transport. See also \cite{NinGeoTan13,NinGeo14,NinGeoTan15,FitLauSte16,QPT,VogLel17} for other different frameworks. For vector-valued OMT,
potential applications include color image processing, multi-modality medical imaging, and image processing involving textures. For matrix-valued OMT, we have diffusion tensor imaging, multivariate spectral analysis, and stress tensor analysis.
Several mathematical aspects of vector and matrix-valued OMT were not addressed in the previous work \cite{CheGeoTan16,CheGeoTan17,CheGeoNinTan17}.
As the first contribution of this paper,
we present duality and existence results
of the continuous vector and matrix-valued OMT problems
along with rigorous problem formulations.
Although the classical theory of OMT is very rich, only recently has there been much attention to numerical methods to
compute the OMT.
Several recent works have proposed algorithms
to solve the $L^2$ OMT
\cite{AngHakTan03,Cut13,BenFroObe14,HabHor15,BenCarCut15,CheGeoPav15a,yongxin,GenCutPeyBac16}
and the $L^1$ OMT \cite{LiRyuOsh17,L1partial}.
As the second contribution of this paper,
we present first-order primal-dual methods to solve the vector and matrix-valued OMT problems. The methods simultaneously solve for both the primal and dual solutions (hence a primal-dual method) and are scalable
as they are first-order methods. We also discuss the convergence of the methods.
As the third contribution of this paper,
we implement the proposed method on a CUDA GPU and present several applications. The proposed algorithms' simple structure
allows us to
effectively utilize the computing capability of the CUDA architecture, and we demonstrate this through our experiments. We furthermore release the code for scientific reproducibility.
The rest of the paper is structured as follows. In Section \ref{sec:OMT} we give a quick review of the classic OMT theory,
which allows us
to present the
later sections in an analogous manner and thereby outline the similarities and differences.
In Section \ref{sec:vectorOMT} and Section \ref{sec:matrixOMT}
we present the vector and matrix-valued OMT problems and state a few theoretical results.
In Section \ref{s-duality-proof}, we present and prove the analytical results. In Section \ref{sec:algorithm},
we present the algorithm. In Section \ref{sec:examples}, we present the experiments and applications.
\section{Optimal mass transport} \label{sec:OMT}
Let $\Omega\subset \mathbb{R}^d$ be a closed, convex, compact domain.
Let $\lambda^0$ and $\lambda^1$ be nonnegative densities
supported on $\Omega$ with unit mass, i.e.,
$\int_{\Omega}\lambda^0(\mathbf{x})\;d\mathbf{x}=\int_{\Omega}\lambda^1(\mathbf{x})\;d\mathbf{x}=1$.
Let $\|\cdot\|$ denote any norm on $\mathbb{R}^d$.
In 1781, Monge posed the optimal mass transport (OMT) problem,
which solves
\begin{equation}\label{map}
\begin{split}
\underset{T}{\text{minimize}}\quad \int_{\Omega}\|\mathbf{x}-T(\mathbf{x})\| \lambda^0(\mathbf{x})\;d\mathbf{x}.
\end{split}
\end{equation}
The optimization variable $T:\Omega\rightarrow\Omega$
is smooth, one-to-one, and transfers $\lambda^0(\mathbf{x})$ to $\lambda^1(\mathbf{x})$.
The optimization problem \eqref{map} is
nonlinear and nonconvex.
In 1940, Kantorovich
relaxed \eqref{map} into a linear (convex) optimization problem:
\begin{equation}\label{Monge}
S(\lambda^0,\lambda^1)=
\left(
\begin{array}{ll}
\underset{\pi}{\text{minimize}}&
\int_{\Omega\times \Omega}\|\mathbf{x}-{\mathbf y}\| \pi(\mathbf{x},{\mathbf y})\;d\mathbf{x} d{\mathbf y}\\
\mbox{subject to} &
\pi(\mathbf{x},{\mathbf y})\ge 0\\
&\int_{\Omega}\pi(\mathbf{x},{\mathbf y})\;d{\mathbf y}=\lambda^0(\mathbf{x})\\
&\int_{\Omega}\pi(\mathbf{x},{\mathbf y})\;d\mathbf{x}=\lambda^1({\mathbf y}).
\end{array}
\right)
\end{equation}
The optimization variable $\pi$ is a joint nonnegative measure on $\Omega\times \Omega$
having $\lambda^0(\mathbf{x})$ and $\lambda^1({\mathbf y})$ as marginals.
To clarify, $S(\lambda^0,\lambda^1)$ denotes the optimal value of \eqref{Monge}.
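As a simple illustration, take $d=1$ with $\|\cdot\|$ the absolute value, and suppose $\lambda^1(y)=\lambda^0(y-a)$ is a translate of $\lambda^0$ by some $a>0$, with both densities supported in $\Omega$. The plan $\pi(x,y)=\lambda^0(x)\delta(y-x-a)$ is feasible and has cost $a$, while any feasible $\pi$ satisfies
\[
\int_{\Omega\times\Omega}|x-y|\,\pi(x,y)\;dx\,dy\;\geq\;\Big|\int_\Omega y\,\lambda^1(y)\;dy-\int_\Omega x\,\lambda^0(x)\;dx\Big|\;=\;a,
\]
so $S(\lambda^0,\lambda^1)=a$: the optimal transport simply shifts the mass by $a$.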
\subsection{Scalar optimal mass transport}
The theory of optimal transport \cite{GanMcc96,Vil03,Vil08} remarkably points out that \eqref{Monge} is equivalent to the following flux minimization problem:
\begin{equation}\label{Kan1}
S(\lambda^0,\lambda^1)=
\left(
\begin{array}{ll}
\underset{{\mathbf u}}{\text{minimize}}&
\int_{\Omega}\|{\mathbf u}(\mathbf{x})\| \;d\mathbf{x}\\
\mbox{subject to} &
\mathrm{div}_\mathbf{x} ({\mathbf u})(\mathbf{x})=\lambda^0(\mathbf{x})-\lambda^1(\mathbf{x})\\
&{\mathbf u}(\mathbf{x})^T \mathbf{n}(\mathbf{x})=0,\,\,
\mbox{for all }\begin{cases}
\mathbf{x}\in \partial \Omega,\\
\text{$ \mathbf{n}(\mathbf{x})$ normal to $\partial\Omega$},
\end{cases}
\end{array}
\right)
\end{equation}
where ${\mathbf u}=(u_1,\dots,u_d): \Omega\rightarrow \mathbb{R}^d$ is the optimization variable
and $\mathrm{div}_\mathbf{x}$ denotes the (spatial) divergence operator. Although \eqref{Kan1} and \eqref{Monge} are mathematically equivalent,
\eqref{Kan1} is much more computationally effective as its optimization variable
${\mathbf u}$ is much smaller when discretized.
It is worth mentioning that the OMT formulation \eqref{Kan1} is very close to problems in compressed sensing: its objective function is homogeneous of degree one and its constraint is linear. Observe that the divergence operator in \eqref{Kan1} and the gradient operator in its dual \eqref{somt-dual} play the key roles in characterizing the OMT. Later on, we extend the definition of the $L_1$ OMT problem by giving these differential operators a more general meaning.
The optimization problem \eqref{Kan1} has the following dual problem:
\begin{equation}
S(\lambda^0,\lambda^1)=
\left(
\begin{array}{ll}
\underset{\phi}{\text{maximize}}& \int_{\Omega}
\phi(\mathbf{x})(\lambda^1(\mathbf{x})-\lambda^0(\mathbf{x}))\;d\mathbf{x}\\
\mbox{subject to} &\|\nabla_\mathbf{x} \phi(\mathbf{x})\|_{*} \le 1
\quad\text{for all } \mathbf{x}\in \Omega,
\end{array}
\right)
\label{somt-dual}
\end{equation}
where the optimization variable $\phi:\Omega \rightarrow \mathbb{R}$ is a function.
We write $\|\cdot\|_*$ for the dual norm of $\|\cdot\|$.
It is well-known that
strong duality holds between
\eqref{Kan1} and \eqref{somt-dual}
in the sense that the minimized
and maximized objective values are equal
\cite{Vil03}.
Therefore,
we take either \eqref{Kan1} or \eqref{somt-dual} as the definition of $S$.
Rigorous definitions
of the optimization problems
\eqref{Kan1} or \eqref{somt-dual}
are somewhat technical.
We skip this discussion,
as scalar optimal mass transport is standard.
In Section~\ref{s-duality-proof},
we rigorously discuss
the vector-OMT problems,
so any rigorous discussion of the scalar-OMT problems can be inferred
as a special case.
\subsection{Theoretical properties}
\label{ss:somt-theory}
The algorithm we present in Section~\ref{sec:algorithm} is a primal-dual algorithm and, as such, finds solutions to both
Problems~\eqref{Kan1} and \eqref{somt-dual}.
This is well-defined as
both the primal and dual problems have
solutions
\cite{Vil03}.
Write $\mathbb{R}_+$ for the set of nonnegative real numbers.
Write $\mathcal{P}(\Omega,\mathbb{R})$ for the space of nonnegative densities supported on $\Omega$ with unit mass.
We can use $S(\lambda^0,\lambda^1)$
as a distance measure between $\lambda^0,\lambda^1\in \mathcal{P}(\Omega,\mathbb{R})$.
The value $S:\mathcal{P}(\Omega,\mathbb{R})\times \mathcal{P}(\Omega,\mathbb{R})\rightarrow \mathbb{R}_+$
defines a metric on $\mathcal{P}(\Omega,\mathbb{R})$
\cite{Vil03}.
\section{Vector optimal mass transport}\label{sec:vectorOMT}
Next we discuss the vector-valued optimal transport, proposed recently in \cite{CheGeoTan17}.
The basic idea is to combine
scalar optimal mass transport with network flow problems \cite{AhuMagOrl93}.
\subsection{Gradient and divergence on graphs}\label{sec:vectorgrad}
Consider a connected, positively weighted, undirected graph ${\mathcal G} $ with $k$ nodes and $\ell$ edges.
To define an incidence matrix for ${\mathcal G}$, we say an edge $\{i,j\}$ points from $i$ to $j$,
i.e., $i\rightarrow j$, if $i<j$.
This choice is arbitrary and does not affect the final result.
With this edge orientation, the incidence matrix $D\in \mathbb{R}^{k\times \ell}$ is
\[
D_{ie}=\left\{ \begin{array}{ll}
+1 & \text{if edge $e=\{i,j\}$ for some node $j>i$}\\
-1 & \text{if edge $e=\{j,i\}$ for some node $j<i$}\\
0 & \text{otherwise}.
\end{array}\right.
\]
For example, the incidence matrix of the graph of Figure~\ref{fig:graph-example} is
\[
D=
\begin{bmatrix}
1&1&0&1&0\\
-1&0&1&0&0\\
0&-1&-1&0&1\\
0&0&0&-1&-1
\end{bmatrix}.
\]
Write
$\Delta_{\mathcal G} =-D\operatorname{diag} \{1/c_1^2, \cdots, 1/c_\ell^2\} D^T$
for the (negative) graph Laplacian, where $1/c_1^2, \cdots, 1/c_\ell^2$ are the edge weights.
The edge weights are defined so that $c_j$ represents the cost of traversing edge $j$, for $j=1,\dots,\ell$.
\begin{figure}
\begin{center}
\begin{tikzpicture}
\begin{scope}[every node/.style={circle,thick,draw}]
\node (A) at (0,0) {1};
\node (B) at (0,2) {2};
\node (C) at (3,2) {3};
\node (D) at (3,0) {4};
\end{scope}
\begin{scope}[>={Stealth[black]},
every node/.style={fill=white,circle},
every edge/.style={draw=black,very thick}]
\path [->] (A) edge node {$c_1$} (B);
\path [->] (A) edge node {$c_2$} (C);
\path [->] (B) edge node {$c_3$} (C);
\path [->] (A) edge node {$c_4$} (D);
\path [->] (C) edge node {$c_5$} (D);
\end{scope}
\end{tikzpicture}
\end{center}
\caption{Example graph with $k=4$ nodes and $\ell=5$ edges.
To make $c_j$ the cost of traversing edge $j$,
the edge weight is defined to be $1/c_j^2$ for $j=1,\dots,\ell$.
}
\label{fig:graph-example}
\end{figure}
We define the gradient operator on ${\mathcal G}$ as
$ \nabla_{\mathcal G} x= \operatorname{diag} \{1/c_1, \cdots,1/c_\ell\}D^T x$
and the divergence operator as
$\mathrm{div}_{\mathcal G} y = -D \operatorname{diag} \{1/c_1, \cdots,1/c_\ell\}y$.
So the Laplacian can be rewritten as
$\Delta_{\mathcal G} =\mathrm{div}_{\mathcal G}\nabla_{\mathcal G}$.
Note that $\mathrm{div}_{\mathcal G}=-\nabla_{\mathcal G}^*$, where $\nabla_{\mathcal G}^*$ is the adjoint of $\nabla_{\mathcal G}$.
This is in analogy with the usual spatial gradient and divergence operators
\cite{Maa11,ChoHuaLiZho12,ChoLiZho17,CheGeoTan17}.
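For example, for the graph of Figure~\ref{fig:graph-example} with unit costs $c_1=\cdots=c_5=1$, these definitions give
\[
\Delta_{\mathcal G}=-DD^T=
\begin{bmatrix}
-3&1&1&1\\
1&-2&1&0\\
1&1&-3&1\\
1&0&1&-2
\end{bmatrix},
\]
so the diagonal entries are the negative node degrees and the off-diagonal entries encode adjacency, exactly as for the standard graph Laplacian.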
\subsection{Vector optimal mass transport}
We say $\vect{\lambda}:\Omega\rightarrow {\mathbb R}^k_+$
is a nonnegative vector-valued density with unit mass if
\[
\vect{\lambda}
(\mathbf{x})=
\begin{bmatrix}
\lambda_1(\mathbf{x})\\\vdots\\
\lambda_k(\mathbf{x})
\end{bmatrix},
\qquad
\int_\Omega\sum_{i=1}^k \lambda_i(\mathbf{x})\;d\mathbf{x}=1.
\]
Assume $\vect{\lambda}^0$ and $\vect{\lambda}^1$ are nonnegative vector-valued densities supported on $\Omega$ with unit mass.
We define the optimal mass transport between vector-valued densities as
\begin{equation}\label{vomt}
V(\vect{\lambda}^0,\vect{\lambda}^1)=
\left(
\begin{array}{ll}
\underset{\vect{{\mathbf u}},\vect{w}}{\text{minimize}}&
\int_{\Omega}\|\vect{{\mathbf u}}(\mathbf{x})\|_u +\alpha\|\vect{w}(\mathbf{x})\|_w\;d\mathbf{x}\\
\mbox{subject to} &
\mathrm{div}_\mathbf{x} (\vect{{\mathbf u}})(\mathbf{x})
+
\mathrm{div}_\mathcal{G}(\vect{w}(\mathbf{x}))=\vect{\lambda}^0(\mathbf{x})-\vect{\lambda}^1(\mathbf{x})\\
& \vect{{\mathbf u}} \text{ satisfies zero-flux b.c.}
\end{array}
\right)
\end{equation}
where
$\vect{{\mathbf u}}:\Omega\rightarrow {\mathbb R}^{k\times d}$
and $\vect{w}:\Omega\rightarrow {\mathbb R}^{\ell}$
are the optimization variables, $\alpha>0$ is a parameter, and
$\|\cdot\|_u$ is a norm on $\mathbb{R}^{k\times d}$ and $\|\cdot\|_w$ is a norm on $\mathbb{R}^{\ell}$.
The parameter $\alpha$ represents the relative importance of the two flux terms $\vect{{\mathbf u}}$ and $\vect{w}$.
We write
\[
\vect{{\mathbf u}}=
\begin{bmatrix}
{\mathbf u}_1^T
\\ \vdots
\\ {\mathbf u}_k^T
\end{bmatrix}
\qquad
\vect{w}=\begin{bmatrix}
w_1\\\vdots\\w_\ell\end{bmatrix}
\qquad
\mathrm{div}_\mathbf{x} (\vect{{\mathbf u}})=
\begin{bmatrix}
\mathrm{div}_\mathbf{x} ({\mathbf u}_1)
\\ \vdots
\\ \mathrm{div}_\mathbf{x}({\mathbf u}_k)
\end{bmatrix}.
\]
We call $\mathrm{div}_\mathbf{x}$ the
spatial divergence operator.
The zero-flux boundary condition is
\[
{\mathbf u}_i(\mathbf{x})^T \mathbf{n}(\mathbf{x})=0,\,\,
\mbox{for all }\begin{cases}
\mathbf{x}\in \partial \Omega,\\
\text{$ \mathbf{n}(\mathbf{x})$ normal to $\partial\Omega$}.
\end{cases}
\]
for $i=1,\dots,k$.
Note that $\vect{w}$ has no boundary conditions.
The optimization problem \eqref{vomt} has the following dual problem:
\begin{equation}
V(\vect{\lambda}^0,\vect{\lambda}^1)=
\left(
\begin{array}{ll}
\underset{\vect{\phi}}{\text{maximize}}& \int_{\Omega}
\langle \vect{\phi}(\mathbf{x}),\vect{\lambda}^1(\mathbf{x})
-\vect{\lambda}^0(\mathbf{x})\rangle \;d\mathbf{x}\\
\mbox{subject to} &\|\nabla_\mathbf{x} \vect{\phi}(\mathbf{x})\|_{u*} \le 1\\
&\|\nabla_{\mathcal G} \vect{\phi}(\mathbf{x})\|_{w*} \le \alpha
\quad\text{for all } \mathbf{x}\in \Omega,
\end{array}
\right)
\label{vomt-dual}
\end{equation}
where the optimization variable
$\vect{\phi}:\Omega \rightarrow \mathbb{R}^k$ is a function.
We write $\|\cdot\|_{u*}$ and $\|\cdot\|_{w*}$
for the dual norms of $\|\cdot\|_{u}$ and $\|\cdot\|_{w}$, respectively.
As stated in Theorem~\ref{thm:vomt-strong-duality},
strong duality holds between
\eqref{vomt} and \eqref{vomt-dual}
in the sense that the minimized
and maximized objective values are equal.
Therefore,
we take either \eqref{vomt} or \eqref{vomt-dual} as the definition of $V$.
In Section~\ref{s-duality-proof},
we rigorously define the primal and dual problems and prove Theorem~\ref{thm:vomt-strong-duality}.
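To get a feel for the role of the graph, consider the simplest possible setup: $k=2$ nodes joined by a single edge with cost $c_1$, with $\|\cdot\|_u$ the Euclidean norm and $\|\cdot\|_w$ the absolute value, and marginals $\vect{\lambda}^0=(\rho,0)^T$ and $\vect{\lambda}^1=(0,\rho)^T$ for a fixed scalar density $\rho$ of unit mass, so that the two marginals occupy the same spatial locations but different channels. Taking $\vect{{\mathbf u}}=0$ and $w_1(\mathbf{x})=-c_1\rho(\mathbf{x})$ satisfies the constraint in \eqref{vomt} with cost $\alpha c_1$, while the constant dual variable $\vect{\phi}=(0,\alpha c_1)^T$ is feasible for \eqref{vomt-dual} and attains the same objective value. Hence $V(\vect{\lambda}^0,\vect{\lambda}^1)=\alpha c_1$: all the mass is carried across the single edge of the graph and nothing moves in space.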
\subsection{Theoretical properties}
The algorithm we present in Section~\ref{sec:algorithm} is a primal-dual algorithm and, as such, finds solutions to both
Problems~\eqref{vomt} and \eqref{vomt-dual}.
This is well-defined as both the primal
and dual problems have solutions.
\begin{theorem}
\label{thm:vomt-strong-duality}
The (infinite dimensional) primal
and dual optimization problems
\eqref{vomt} and \eqref{vomt-dual}
have solutions,
and their optimal values are the same,
i.e., strong duality holds.
\end{theorem}
Write $\mathcal{P}(\Omega,\mathbb{R}^k)$ for the space of nonnegative vector-valued densities supported on $\Omega$ with unit mass.
We can use $V(\vect{\lambda}^0,\vect{\lambda}^1)$
as a distance measure between $\vect{\lambda}^0,\vect{\lambda}^1\in \mathcal{P}(\Omega,\mathbb{R}^k)$.
\begin{theorem}
$V:\mathcal{P}(\Omega,\mathbb{R}^k)\times\mathcal{P}(\Omega,\mathbb{R}^k)\rightarrow \mathbb{R}_+$
defines a metric on $\mathcal{P}(\Omega,\mathbb{R}^k)$.
\end{theorem}
\section{Quantum gradient operator and matrix optimal mass transport}\label{sec:matrixOMT}
We closely follow the treatment in \cite{CheGeoTan16}. In particular, we define a notion of gradient on the space of Hermitian matrices and its dual, i.e., the (negative) divergence.
Some applications of matrix-OMT,
such as diffusion tensor imaging,
have real-valued data
while some applications,
such as multivariate spectral analysis,
have complex-valued data \cite{stoica2005}.
To accommodate the wide range of applications,
we develop the matrix-OMT with complex-valued matrices.
Write ${\mathcal C}$, ${\mathcal H}$, and ${\mathcal S}$ for the set of $k\times k$
complex, Hermitian, and skew-Hermitian matrices respectively.
We write
${\mathcal H}_+$ for the set of $k\times k$ positive semidefinite Hermitian matrices, i.e.,
$M\in {\mathcal H}_+$ if
$v^*Mv\ge 0$ for all $v\in \mathbb{C}^k$.
Write $\operatorname{tr}$ for the trace, i.e. for any $M\in {\mathcal H}$, we have
$\operatorname{tr}(M)=\sum^k_{i=1}M_{ii}$.
Write ${\mathcal C}^N$ for the block-column concatenation of $N$ matrices in ${\mathcal C}$,
i.e., $\mathbf{Z}\in {\mathcal C}^N$ if
$
\mathbf{Z}=
[Z_1^*\cdots Z_N^*]^*
$
and $Z_1,\dots,Z_N\in {\mathcal C}$.
Define ${\mathcal H}^N$ and ${\mathcal S}^N$ likewise.
For $X,Y\in {\mathcal C}$, we use the
Hilbert-Schmidt inner product
\[
\langle X,Y\rangle=\textrm{Re}\operatorname{tr}(XY^*)=
\sum^k_{i=1}
\sum^k_{j=1}(\textrm{Re}X_{ij}\textrm{Re}Y_{ij}+\textrm{Im}X_{ij}\textrm{Im}Y_{ij}).
\]
(This is the standard inner product when
we view ${\mathcal C}$ as the
real vector space $\mathbb{R}^{2k^2}$.)
For $X\in {\mathcal C}$, we use the norm
$\|X\|_2=(\langle X,X\rangle)^{1/2}$.
For $\mathbf{X}, \mathbf{Y}\in {\mathcal C}^N$, we use the inner product
$\langle \mathbf{X},\mathbf{Y}\rangle=\sum_{s=1}^N \langle X_s,Y_s\rangle$.
\subsection{Quantum gradient and divergence operators}
We define the gradient operator, given
$\mathbf{L}=[L_1,\cdots,L_\ell]^*\in{\mathcal H}^\ell$, as
\begin{equation*}
\nabla_\mathbf{L}: {\mathcal H} \rightarrow {{\mathcal S}}^\ell, ~~X \mapsto
\left[ \begin{array}{c}
L_1 X-X L_1\\
\vdots \\
L_\ell X-X L_\ell
\end{array}\right].
\end{equation*}
Define the divergence operator as
\begin{equation*}
\mathrm{div}_\mathbf{L}: {{\mathcal S}}^\ell \rightarrow {\mathcal H},~~Z=
\left[ \begin{array}{c}
Z_1\\
\vdots \\
Z_\ell
\end{array}\right]
\mapsto
\sum_{s=1}^\ell \left(-L_s Z_s+Z_s L_s\right).
\end{equation*}
Note that $\mathrm{div}_\mathbf{L}=-\nabla_\mathbf{L}^*$, where
$\nabla_\mathbf{L}^*$ is the adjoint of $\nabla_\mathbf{L}$.
This is in analogy with the usual spatial gradient and divergence operators.
Write $\Delta_\mathbf{L}=\mathrm{div}_\mathbf{L}\nabla_\mathbf{L}$.
This notion of gradient and divergence operators is motivated by the Lindblad equation in quantum mechanics \cite{CheGeoTan16}.
The choice of $\mathbf{L}$ affects $\nabla_\mathbf{L}$.
There is no standard way of choosing $\mathbf{L}$. A standing assumption throughout is that the null space of $\nabla_\mathbf{L}$, denoted by ${\rm ker}(\nabla_\mathbf{L})$, contains only scalar multiples of the identity matrix $I$.
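For instance, for $k=2$ one may take $\ell=2$ and build $\mathbf{L}$ from the Pauli matrices $L_1=\sigma_x=\left[\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right]$ and $L_2=\sigma_z=\left[\begin{smallmatrix}1&0\\0&-1\end{smallmatrix}\right]$. Any Hermitian matrix commuting with both $\sigma_x$ and $\sigma_z$ commutes with the algebra they generate, which is all of ${\mathcal C}$, and is therefore a scalar multiple of $I$, so the standing assumption on ${\rm ker}(\nabla_\mathbf{L})$ holds. As a sample computation, $\sigma_x\sigma_z-\sigma_z\sigma_x=\left[\begin{smallmatrix}0&-2\\2&0\end{smallmatrix}\right]$, which is skew-Hermitian, as required by the definition of $\nabla_\mathbf{L}$.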
\subsection{Matrix optimal mass transport}
We say $\Lambda:\Omega\rightarrow {\mathcal H}_+$
is a nonnegative matrix-valued density with unit mass if
\[
\int_\Omega \operatorname{tr}(\Lambda(\mathbf{x}))\;d\mathbf{x}=1.
\]
Assume $\Lambda^0$ and $\Lambda^1$ are nonnegative matrix-valued densities supported on $\Omega$ with unit mass.
We define the optimal mass transport between matrix-valued densities as
\begin{equation}\label{momt}
M(\Lambda^0,\Lambda^1)=
\left(
\begin{array}{ll}
\underset{{\mathbf U},{\mathbf W}}{\text{minimize}}&
\int_{\Omega} \|{\mathbf U}(\mathbf{x})\|_u +\alpha\|{\mathbf W}(\mathbf{x})\|_w\;d\mathbf{x}\\
\mbox{subject to} &
\mathrm{div}_\mathbf{x} ({\mathbf U})(\mathbf{x})
+
\mathrm{div}_{\mathbf L}({\mathbf W}(\mathbf{x}))=\Lambda^0(\mathbf{x})-\Lambda^1(\mathbf{x})\\
&{\mathbf U} \text{ satisfies zero-flux b.c.}
\end{array}
\right)
\end{equation}
where
${\mathbf U}:\Omega\rightarrow {\mathcal H}^{d}$
and ${\mathbf W}:\Omega\rightarrow {\mathcal S}^{\ell}$
are the optimization variables,
$\alpha>0$ is a parameter,
and $\|\cdot\|_u$ is a norm on ${\mathcal H}^{d}$ and $\|\cdot\|_w$ is a norm on ${\mathcal S}^{\ell}$.
The parameter $\alpha$ represents the relative importance of the two flux terms ${\mathbf U}$ and ${\mathbf W}$.
We write
\[
{\mathbf U}=
\begin{bmatrix}
U_1\\\vdots\\U_d
\end{bmatrix}
\qquad
{\mathbf W}=
\begin{bmatrix}
W_1\\\vdots\\W_\ell
\end{bmatrix}
\quad
{\mathbf u}_{ij}=
\begin{bmatrix}
( U_1)_{ij}\\
\vdots\\
( U_d)_{ij}
\end{bmatrix}.
\]
We define the spatial divergence as
\[
\mathrm{div}_\mathbf{x}({\mathbf U})=
\begin{bmatrix}
\mathrm{div}_\mathbf{x}({\mathbf u}_{11})& \mathrm{div}_\mathbf{x}({\mathbf u}_{12})&\cdots&\mathrm{div}_\mathbf{x}({\mathbf u}_{1k})\\
\mathrm{div}_\mathbf{x}(\overline{ {\mathbf u}}_{12})&\ddots&&\vdots\\
\vdots &&\ddots&\vdots\\
\mathrm{div}_\mathbf{x}(\overline{{\mathbf u}}_{1k})&\mathrm{div}_\mathbf{x}(\overline{{\mathbf u}}_{2k})&\cdots &\mathrm{div}_\mathbf{x}({\mathbf u}_{kk})
\end{bmatrix}.
\]
The zero-flux boundary condition is
\[
{\mathbf u}_{ij}(\mathbf{x})^T\mathbf{n}(\mathbf{x})=0
\quad\mbox{for all }\mathbf{x}\in \partial \Omega
\mbox{ and } i,j=1,\dots,k,
\]
where $\mathbf{n}(\mathbf{x})$ denotes the normal vector to $\partial\Omega$ at $\mathbf{x}$.
Note that ${\mathbf W}$ has no boundary conditions
\cite{CheGeoNinTan17}.
The optimization problem \eqref{momt} has the following dual problem:
\begin{equation}
M(\Lambda^0,\Lambda^1)=
\left(
\begin{array}{ll}
\underset{\Phi}{\text{maximize}}& \int_{\Omega}
\langle \Phi(\mathbf{x}),\Lambda^1(\mathbf{x})
-\Lambda^0(\mathbf{x})\rangle \;d\mathbf{x}\\
\mbox{subject to} &\|\nabla_\mathbf{x} \Phi(\mathbf{x})\|_{u*} \le 1\\
&\|\nabla_\mathbf{L} \Phi(\mathbf{x})\|_{w*} \le \alpha
\quad\text{for all } \mathbf{x}\in \Omega,
\end{array}
\right)
\label{momt-dual}
\end{equation}
where the optimization variable
$\Phi:\Omega \rightarrow {\mathcal H}$ is a function.
We write $\|\cdot\|_{u*}$ and
$\|\cdot\|_{w*}$ for the dual norms of
$\|\cdot\|_{u}$ and
$\|\cdot\|_{w}$, respectively.
As stated in Theorem~\ref{thm:momt-strong-duality}, strong duality holds between
\eqref{momt} and \eqref{momt-dual}
in the sense that the minimized
and maximized objective values are equal.
Therefore we take
either
\eqref{momt} or \eqref{momt-dual}
as the definition of $M$.
To avoid repeating the same argument
with different notation,
we simply point out that the rigorous definitions
of the matrix optimal mass transport problems
are analogous to those of the vector setup.
So the precise definitions of
\eqref{momt} or \eqref{momt-dual}
can be inferred from the discussion of
Section~\ref{s-duality-proof}.
\subsection{Theoretical properties}
The algorithm we present in Section~\ref{sec:algorithm} is a primal-dual algorithm and, as such, finds
solutions to both
Problems~\eqref{momt} and \eqref{momt-dual}.
This is well-defined as both the
primal and dual problems have solutions.
\begin{theorem}
\label{thm:momt-strong-duality}
The (infinite dimensional)
primal and dual optimization problems
\eqref{momt} and \eqref{momt-dual}
have solutions,
and their optimal values are the same, i.e., strong duality holds.
\end{theorem}
Write $\mathcal{P}(\Omega,{\mathcal H}_+)$ for the space of nonnegative matrix-valued densities supported on $\Omega$ with unit mass.
We can use $M(\Lambda^0,\Lambda^1)$
as a distance measure between $\Lambda^0,\Lambda^1\in \mathcal{P}(\Omega,{\mathcal H}_+)$.
\begin{theorem}
$M:\mathcal{P}(\Omega,{\mathcal H}_+)\times \mathcal{P}(\Omega,{\mathcal H}_+)\rightarrow \mathbb{R}_+$
defines a metric on $\mathcal{P}(\Omega,{\mathcal H}_+)$.
\end{theorem}
\section{Duality proof}
\label{s-duality-proof}
In this section, we establish the theoretical results.
For notational simplicity, we only prove the results for
the vector-OMT primal and dual problems \eqref{vomt} and \eqref{vomt-dual}.
Analogous results for the matrix-OMT primal and dual problems \eqref{momt} and \eqref{momt-dual}
follow from the same logic.
Although the classical scalar-OMT literature is very rich,
standard techniques for proving scalar-OMT duality do not simply apply to our setup.
For example, Villani's proof of strong duality, presented as Theorem~1.3 of \cite{Vil03},
relies on and works with the linear optimization formulation \eqref{Monge}.
However, our vector and matrix-OMT formulations
directly generalize the flux formulation \eqref{Kan1}
and do not have formulations analogous to \eqref{Monge}.
We need a direct approach to analyze duality between the flux formulation
and the dual (with one function variable), and we provide this in this section.
We further assume
$\Omega\subset \mathbb{R}^d$
has a piecewise smooth boundary. Write $\Omega^\circ$ and $\partial \Omega$ for the interior and boundary of $\Omega$.
For simplicity, assume $\Omega$ has full affine dimension, i.e., $\overline{\Omega^\circ}=\Omega$.
The rigorous form of the dual problem \eqref{vomt-dual} is
\begin{equation}\label{vomt-dual2}
\begin{array}{ll}
\underset{\vect{\phi}\in W^{1,\infty}(\Omega,\mathbb{R}^k)}{\text{maximize}}& \int_{\Omega}
\langle \vect{\phi}(\mathbf{x}),\vect{\lambda}^1(\mathbf{x})
-\vect{\lambda}^0(\mathbf{x})\rangle \;d\mathbf{x}\\
\mbox{subject to} &\esssup_{\mathbf{x}\in \Omega}\|\nabla_\mathbf{x} \vect{\phi}(\mathbf{x})\|_{u*} \le 1\\
&\sup_{\mathbf{x}\in \Omega}\|\nabla_{\mathcal G} \vect{\phi}(\mathbf{x})\|_{w*} \le \alpha,
\end{array}
\end{equation}
where $W^{1,\infty}(\Omega,\mathbb{R}^k)$ is the standard Sobolev space of
functions from $\Omega$ to $\mathbb{R}^k$ with bounded weak gradients.
That \eqref{vomt-dual2} has a solution directly follows from the Arzel\`a-Ascoli Theorem.
To rigorously define the primal problem \eqref{vomt} requires more definitions,
and we do so later as \eqref{vomt3}.
\subsection{Fenchel-Rockafellar duality}
Let $L:X\rightarrow Y$ be a continuous linear map between locally convex topological vector spaces $X$ and $Y$,
and let
$f:X\rightarrow\mathbb{R}\cup\{\infty\}$ and $g:Y\rightarrow\mathbb{R}\cup\{\infty\}$ be lower-semicontinuous convex functions.
Write
\begin{align*}
d^\star=\sup_{x\in X}\{-f(x)-g(Lx)\}\qquad
p^\star=\inf_{y^*\in Y^*}\{f^*(L^*y^*)+g^*(-y^*)\},
\end{align*}
where
\begin{align*}
f^*(x^*)=\sup _{x\in X}\{\langle x^*,x\rangle-f(x)\}\qquad
g^*(y^*)=\sup _{y\in Y}\{\langle y^*,y\rangle-g(y)\}.
\end{align*}
In this framework of Fenchel-Rockafellar duality,
$d^\star\le p^\star$, i.e., weak duality, holds unconditionally, and this is not difficult to prove.
To establish $d^\star= p^\star$, i.e., strong duality, requires additional assumptions and is more difficult to prove.
The following theorem does this with a condition we can use.
\begin{theorem}[Theorems~17 and 18 of \cite{rockafellar1974}]\label{thm:duality}
If there is an $x\in X$ such that $f(x)<\infty$ and $g$ is bounded above in a neighborhood of $Lx$,
then $p^\star=d^\star$.
Furthermore, if $p^\star=d^\star<\infty$,
then the infimum $\inf_{y^*\in Y^*}\{f^*(L^*y^*)+g^*(-y^*)\}$ is attained.
\end{theorem}
\subsection{Spaces}
Throughout this section, $\|\cdot\|_1,\|\cdot\|_2,\dots$ denote unspecified finite-dimensional norms.
Since all norms on a finite-dimensional space are equivalent, we do not need to specify them precisely.
Define
\[
C(\Omega,\mathbb{R}^k)=
\left\{\vect{\phi}:\Omega\rightarrow \mathbb{R}^k\,\,\Big|\,\,
\vect{\phi}\text{ is continuous},\,
\|\vect{\phi}\|_{\infty}=\max_{\mathbf{x}\in \Omega}\|\vect{\phi}(\mathbf{x})\|_1<\infty
\right\}.
\]
Then $C(\Omega,\mathbb{R}^k)$ is a Banach space
equipped with the norm $\|\cdot \|_{\infty}$.
We define $C(\Omega,\mathbb{R}^{k\times d})$ likewise.
If $\vect{\phi}\in C(\Omega,\mathbb{R}^k)$ is continuously differentiable,
$\nabla_\mathbf{x} \vect{\phi}$ is defined on $\Omega^\circ$.
We say $\nabla_\mathbf{x}\vect{\phi}$ has a continuous extension to $\Omega$,
if there is a $g\in C(\Omega,\mathbb{R}^{k\times d})$ such that
$g|_{\Omega^\circ}=\nabla_\mathbf{x} \vect{\phi}$.
Define
\begin{align*}
C^1(\Omega,\mathbb{R}^k)=
\Big\{\vect{\phi}:\Omega\rightarrow \mathbb{R}^k\,\,\big|\,\,&
\vect{\phi}\text{ is continuously differentiable on }\Omega^\circ,\\
&\nabla_\mathbf{x} \vect{\phi}\text{ has a continuous extension to }\Omega,\\
&\|\vect{\phi}\|_{\infty,\infty}=\max_{\mathbf{x}\in \Omega}\|\vect{\phi}(\mathbf{x})\|_2
+\sup_{\mathbf{x}\in \Omega}\|\nabla_\mathbf{x}\vect{\phi}(\mathbf{x})\|_3<\infty
\Big\}.
\end{align*}
Then $C^1(\Omega,\mathbb{R}^k)$ is a Banach space
equipped with the norm $\|\cdot\|_{\infty,\infty}$.
Write $\mathcal{M}(\Omega,\mathbb{R}^k)$ for the space of $\mathbb{R}^k$-valued signed finite Borel measures on $\Omega$,
and define $\mathcal{M}(\Omega,\mathbb{R}^{k\times d})$ likewise.
Write $(C(\Omega,\mathbb{R}^{k\times d}))^*$, $(C^1(\Omega,\mathbb{R}^k))^*$ for the topological dual of
$C(\Omega,\mathbb{R}^{k\times d})$, $C^1(\Omega,\mathbb{R}^k)$, respectively.
The standard Riesz-Markov theorem tells us that
$(C(\Omega,\mathbb{R}^{k\times d}))^*=\mathcal{M}(\Omega,\mathbb{R}^{k\times d})$.
Fully characterizing $(C^1(\Omega,\mathbb{R}^k))^*$ is hard, but we do not need to do so.
Instead, we only use the following simple fact.
Any $g\in \mathcal{M}(\Omega,\mathbb{R}^k)$ defines the bounded linear map
$\vect{\phi}\mapsto \int_\Omega \langle\vect{\phi}(\mathbf{x}),g(d\mathbf{x})\rangle$
on $C^1(\Omega,\mathbb{R}^k)$. In other words,
$\mathcal{M}(\Omega,\mathbb{R}^{k})\subset(C^1(\Omega,\mathbb{R}^k))^*$ with the appropriate identification.
\subsection{Operators}
We redefine
$\nabla_\mathbf{x}:C^1(\Omega,\mathbb{R}^k)\rightarrow C(\Omega,\mathbb{R}^{k\times d})$
so that $\nabla_\mathbf{x}\vect{\phi}$ is the continuous extension of the usual $\nabla_\mathbf{x}\vect{\phi}$ to all of $\Omega$.
This makes $\nabla_\mathbf{x}$ a bounded linear operator.
Define the dual (adjoint) operator
$\nabla_\mathbf{x}^*:\mathcal{M}(\Omega,\mathbb{R}^{k\times d})\rightarrow (C^1(\Omega,\mathbb{R}^k))^*$ by
\[
\int_\Omega \langle\vect{\phi}(\mathbf{x}),(\nabla_\mathbf{x}^*\vect{{\mathbf u}})(d\mathbf{x})\rangle
=
\int_\Omega \langle(\nabla_\mathbf{x} \vect{\phi})(\mathbf{x}),\vect{{\mathbf u}}(d\mathbf{x})\rangle
\]
for any $\vect{\phi}\in C^1(\Omega,\mathbb{R}^k)$ and $\vect{{\mathbf u}}\in \mathcal{M}(\Omega,\mathbb{R}^{k\times d})$.
Define the operator $\nabla_{\mathcal G}$ (which is simply multiplication by an $\mathbb{R}^{k\times\ell}$ matrix) as
$\nabla_{\mathcal G}:C^1(\Omega,\mathbb{R}^k)\rightarrow C(\Omega,\mathbb{R}^k)$.
Since $C^1(\Omega,\mathbb{R}^k)\subset C(\Omega,\mathbb{R}^k)$, there is nothing wrong with defining the range of $\nabla_{\mathcal G}$
to be $C(\Omega,\mathbb{R}^k)$,
and this still makes $\nabla_{\mathcal G}$ a bounded linear operator.
Define the dual (adjoint) operator
$\nabla_{\mathcal G}^*:\mathcal{M}(\Omega,\mathbb{R}^k)\rightarrow (C^1(\Omega,\mathbb{R}^k))^*$
by identifying $\nabla_{\mathcal G}^*$ with the transpose of the matrix that defines $\nabla_{\mathcal G}$.
Since $\nabla_{\mathcal G}^*$ is simply multiplication by a matrix,
we can further say
\[
\nabla_{\mathcal G}^*:\mathcal{M}(\Omega,\mathbb{R}^k)\rightarrow \mathcal{M}(\Omega,\mathbb{R}^k)\subset(C^1(\Omega,\mathbb{R}^k))^*.
\]
We write $\mathrm{div}_{\mathcal G}=-\nabla_{\mathcal G}^*$.
\subsection{Zero-flux boundary condition}
Let $\vect{\mathbf{m}}:\Omega\rightarrow \mathbb{R}^{k\times d}$ be a smooth function.
Then integration by parts tells us that
\[
\int_\Omega\langle\nabla_\mathbf{x}\vect{\varphi}(\mathbf{x}),\vect{\mathbf{m}}(\mathbf{x})\rangle\;d\mathbf{x}=
-\int_\Omega\langle \vect{\varphi}(\mathbf{x}),\mathrm{div}_\mathbf{x}\vect{\mathbf{m}}(\mathbf{x})\rangle\;d\mathbf{x}
\]
holds for all smooth $\vect{\varphi}:\Omega\rightarrow \mathbb{R}^k$ if and only if
$\vect{\mathbf{m}}(\mathbf{x})$ satisfies the zero-flux boundary condition, i.e.,
$\vect{\mathbf{m}}(\mathbf{x})\mathbf{n}(\mathbf{x})=0$ for all $\mathbf{x}\in \partial \Omega$,
where $\mathbf{n}(\mathbf{x})$ denotes the normal
vector at $\mathbf{x}$.
Here $\mathrm{div}_\mathbf{x}$ denotes the usual (spatial) divergence.
To put it differently,
$\nabla_\mathbf{x}^*=-\mathrm{div}_\mathbf{x}$ holds when the zero-flux boundary condition holds.
We generalize this notion to measures. We say
$\vect{{\mathbf u}}\in \mathcal{M}(\Omega,\mathbb{R}^{k\times d})$
satisfies the zero-flux boundary condition in the weak sense if
there is a $\vect{g}\in \mathcal{M}(\Omega,\mathbb{R}^k)\subset(C^1(\Omega,\mathbb{R}^k))^* $ such that
\[
\int_\Omega\langle\nabla_\mathbf{x}\vect{\phi} (\mathbf{x}),\vect{\mathbf{u}}(d\mathbf{x})\rangle=
-\int_\Omega\langle \vect{\phi}(\mathbf{x}),\vect{g}(d\mathbf{x})\rangle
\]
holds for all $\vect{\phi}\in C^1(\Omega,\mathbb{R}^k)$.
In other words, $\vect{{\mathbf u}}$ satisfies the zero-flux boundary condition
if $\nabla_\mathbf{x}^*\vect{{\mathbf u}}\in\mathcal{M}(\Omega,\mathbb{R}^k)\subset(C^1(\Omega,\mathbb{R}^k))^*$.
In this case, we write $\mathrm{div}_\mathbf{x}(\vect{{\mathbf u}})=\vect{g}$
and $\mathrm{div}_\mathbf{x}(\vect{{\mathbf u}})=-\nabla_\mathbf{x}^*(\vect{{\mathbf u}})$.
This definition is often used in elasticity theory.
\subsection{Duality}
To establish duality, we view the dual problem
\eqref{vomt-dual}
as the primal problem and obtain the primal problem
\eqref{vomt} as the dual of the dual.
We do this because the dual of $C(\Omega,\mathbb{R}^{k\times d})$ is known, while the
dual of $\mathcal{M}(\Omega,\mathbb{R}^{k\times d})$ is difficult to characterize.
Consider the problem
\begin{equation}
\begin{array}{ll}
\underset{\vect{\phi}\in C^1(\Omega,\mathbb{R}^k)}{\text{maximize}}& \int_{\Omega}
\langle \vect{\phi}(\mathbf{x}),\vect{\lambda}^1(\mathbf{x})
-\vect{\lambda}^0(\mathbf{x})\rangle \;d\mathbf{x}\\
\mbox{subject to} &\|\nabla_\mathbf{x} \vect{\phi}(\mathbf{x})\|_{u*} \le 1\\
&\|\nabla_{\mathcal G} \vect{\phi}(\mathbf{x})\|_{w*} \le \alpha
\quad\text{for all } \mathbf{x}\in \Omega,
\end{array}
\label{vomt-dual3}
\end{equation}
which is equivalent to \eqref{vomt-dual2}.
Define
\begin{align*}
L:C^1(\Omega,\mathbb{R}^k)&\rightarrow C(\Omega,\mathbb{R}^{k\times d})\times C(\Omega,\mathbb{R}^k)\\
\vect{\phi}&\mapsto (\nabla_\mathbf{x}\vect{\phi},\nabla_{\mathcal G}\vect{\phi})
\end{align*}
and
\[
g(a,b)=\left\{
\begin{array}{ll}
0&\text{if } \|a(\mathbf{x})\|_{u*}\le1,\,\|b(\mathbf{x})\|_{w*}\le\alpha\text{ for all }\mathbf{x}\in \Omega\\
\infty&\text{otherwise}.
\end{array}
\right.
\]
Rewrite \eqref{vomt-dual3} as
\[
\begin{array}{ll}
\underset{\vect{\phi}\in C^1(\Omega,\mathbb{R}^k)}{\text{maximize}}& \int_{\Omega}
\langle \vect{\phi}(\mathbf{x}),\vect{\lambda}^1(\mathbf{x})
-\vect{\lambda}^0(\mathbf{x})\rangle \;d\mathbf{x}
-g(L\vect{\phi}),
\end{array}
\]
and consider its Fenchel-Rockafellar dual
\begin{equation*}
\begin{array}{ll}
\underset{
\substack{
\vect{{\mathbf u}}\in \mathcal{M}(\Omega,\mathbb{R}^{k\times d})\\
\vect{w}\in \mathcal{M}(\Omega,\mathbb{R}^k)
}
}{\text{minimize}}&
\int_{\Omega}\|\vect{{\mathbf u}}(\mathbf{x})\|_u +\alpha\|\vect{w}(\mathbf{x})\|_w\;d\mathbf{x}\\
\mbox{subject to} &
-\nabla_\mathbf{x}^*\vect{{\mathbf u}}-\nabla_{\mathcal G}^*\vect{w}=\vect{\lambda}^0-\vect{\lambda}^1
\text{ as members of }(C^1(\Omega,\mathbb{R}^k))^*.
\end{array}
\end{equation*}
The constraint
\[
-\nabla_\mathbf{x}^*\vect{{\mathbf u}}=\vect{\lambda}^0-\vect{\lambda}^1-\mathrm{div}_{\mathcal G}\vect{w}\in\mathcal{M}(\Omega,\mathbb{R}^k)\subset (C^1(\Omega,\mathbb{R}^k))^*
\]
implies
\[
-\nabla_\mathbf{x}^*\vect{{\mathbf u}}\in \mathcal{M}(\Omega,\mathbb{R}^k),
\]
i.e., $\vect{{\mathbf u}}$ satisfies the zero-flux boundary condition.
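The objective and constraint above arise from the following conjugate computations, which we sketch with the usual abuse of writing $\int_\Omega\|\vect{{\mathbf u}}(\mathbf{x})\|_u\,d\mathbf{x}$ for the total variation of the measure $\vect{{\mathbf u}}$ with respect to $\|\cdot\|_u$.
Here $f(\vect{\phi})=\int_\Omega\langle\vect{\phi},\vect{\lambda}^0-\vect{\lambda}^1\rangle\,d\mathbf{x}$, so that the problem above is $\sup_{\vect{\phi}}\{-f(\vect{\phi})-g(L\vect{\phi})\}$.
Taking $y^*=-(\vect{{\mathbf u}},\vect{w})\in\mathcal{M}(\Omega,\mathbb{R}^{k\times d})\times\mathcal{M}(\Omega,\mathbb{R}^{k})$, linearity of $f$ gives
\[
f^*(L^*y^*)=\sup_{\vect{\phi}\in C^1(\Omega,\mathbb{R}^k)}\Big\{-\int_\Omega\langle\nabla_\mathbf{x}\vect{\phi},\vect{{\mathbf u}}(d\mathbf{x})\rangle-\int_\Omega\langle\nabla_{\mathcal G}\vect{\phi},\vect{w}(d\mathbf{x})\rangle-\int_\Omega\langle\vect{\phi},\vect{\lambda}^0-\vect{\lambda}^1\rangle\,d\mathbf{x}\Big\}
=\begin{cases}
0&\text{if }-\nabla_\mathbf{x}^*\vect{{\mathbf u}}-\nabla_{\mathcal G}^*\vect{w}=\vect{\lambda}^0-\vect{\lambda}^1\\
\infty&\text{otherwise,}
\end{cases}
\]
where the constraint is understood in $(C^1(\Omega,\mathbb{R}^k))^*$, while the definition of the dual norms gives
\[
g^*(-y^*)=\sup_{\|a(\mathbf{x})\|_{u*}\le1,\;\|b(\mathbf{x})\|_{w*}\le\alpha}
\Big\{\int_\Omega\langle a,\vect{{\mathbf u}}(d\mathbf{x})\rangle+\int_\Omega\langle b,\vect{w}(d\mathbf{x})\rangle\Big\}
=\int_\Omega\|\vect{{\mathbf u}}(\mathbf{x})\|_u+\alpha\|\vect{w}(\mathbf{x})\|_w\;d\mathbf{x}.
\]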
We now state the rigorous form of the primal problem \eqref{vomt}:
\begin{equation}\label{vomt3}
\begin{array}{ll}
\underset{
\substack{
\vect{{\mathbf u}}\in \mathcal{M}(\Omega,\mathbb{R}^{k\times d})\\
\vect{w}\in \mathcal{M}(\Omega,\mathbb{R}^k)
}
}{\text{minimize}}&\int_{\Omega}\|\vect{{\mathbf u}}(\mathbf{x})\|_u +\alpha\|\vect{w}(\mathbf{x})\|_w\;d\mathbf{x}\\
\mbox{subject to} &
\mathrm{div}_\mathbf{x}\vect{{\mathbf u}}+\mathrm{div}_{\mathcal G}\vect{w}=\vect{\lambda}^0-\vect{\lambda}^1
\text{ as members of }\mathcal{M}(\Omega,\mathbb{R}^k)\\
& \vect{{\mathbf u}} \text{ satisfies zero-flux b.c.\ in the weak sense}.
\end{array}
\end{equation}
The point $\vect{\phi}=0$ satisfies the assumption of Theorem~\ref{thm:duality}.
Furthermore, it is easy to verify that the optimal value of the dual problem \eqref{vomt-dual} is bounded.
This implies that strong duality holds, that \eqref{vomt3} is feasible, and that \eqref{vomt3} has a solution.
Given that $V(\vect{\lambda}^0,\vect{\lambda}^1)<\infty$ for all $\vect{\lambda}^0,\vect{\lambda}^1\in \mathcal{P}(\Omega,\mathbb{R}^k)$, it is not hard to prove that
$V:\mathcal{P}(\Omega,\mathbb{R}^k)\times \mathcal{P}(\Omega,\mathbb{R}^k)\rightarrow\mathbb{R}_+$
defines a metric. Interested readers can find the argument in
\cite{CheGeoNinTan17}.
\section{Algorithmic preliminaries}
\label{s:alg-prelim}
Consider the Lagrangian for the vector optimal transport problems \eqref{vomt} and its dual \eqref{vomt-dual}
\begin{align}
L(\vect{{\mathbf u}},\vect{w},\vect{\phi})=&\int_\Omega \|\vect{{\mathbf u}}(\mathbf{x})\|_u+\alpha\|\vect{w}(\mathbf{x})\|_w\;d\mathbf{x}
\nonumber\\
&\qquad\qquad+
\int_\Omega
\langle\vect{\phi}(\mathbf{x}),
\mathrm{div}_\mathbf{x}(\vect{{\mathbf u}})(\mathbf{x})
+\mathrm{div}_{\mathcal G}(\vect{w}(\mathbf{x}))
-\vect{\lambda}^0(\mathbf{x})+\vect{\lambda}^1(\mathbf{x})\rangle\;d\mathbf{x},
\label{L-vomt}
\end{align}
which is convex with respect to $\vect{{\mathbf u}}$ and $\vect{w}$ and concave with respect to $\vect{\phi}$.
Finding a saddle point of \eqref{L-vomt} is equivalent to solving \eqref{vomt} and \eqref{vomt-dual},
when the primal problem \eqref{vomt} has a solution, the dual problem \eqref{vomt-dual} has a solution,
and the optimal values of \eqref{vomt} and \eqref{vomt-dual} are equal.
See \cite[Theorem~7.1]{bauschke2012}, \cite[Theorem 2]{liu2017}, or any reference on standard
convex analysis such as \cite{rockafellar1974} for further discussion on this point.
To solve the optimal transport problems,
we discretize the continuous problems and
apply the PDHG method to solve the discretized convex-concave saddle point problem.
\subsection{PDHG method}
Consider the convex-concave saddle function
\[
L(x,y,z)=f(x)+g(y)+\langle Ax+By,z\rangle-h(z),
\]
where
$f$, $g$, and $h$ are (closed and proper) convex functions and
$x\in \mathbb{R}^n$, $y\in \mathbb{R}^m$, $z\in \mathbb{R}^l$,
$A\in \mathbb{R}^{l\times n}$, and $B\in \mathbb{R}^{l\times m}$.
Note $L$ is convex in $x$ and $y$ and concave in $z$.
Assume $L$ has a saddle point
and step sizes $\mu,\nu,\tau>0$ satisfy
\[
1>\tau\mu \lambda_\mathrm{max}(A^TA)+\tau \nu \lambda_\mathrm{max}(B^TB).
\]
Write $\|\cdot\|_2$ for the standard Euclidean norm.
Then the method
\begin{align}
x^{k+1}&=
\argmin_{x\in \mathbb{R}^n}
\left\{
L(x,y^k,z^k) +\frac{1}{2\mu}\|x-x^k\|_2^2
\right\}\nonumber\\
y^{k+1}&=
\argmin_{y\in \mathbb{R}^m}
\left\{
L(x^k,y,z^k) +\frac{1}{2\nu}\|y-y^k\|_2^2
\right\}
\label{cp-method}\\
z^{k+1}&=
\argmax_{z\in \mathbb{R}^l}
\left\{
L(2x^{k+1}-x^k,2y^{k+1}-y^k,z) -\frac{1}{2\tau}\|z-z^k\|_2^2
\right\}\nonumber
\end{align}
converges to a saddle point.
This method is called the Primal-Dual Hybrid Gradient (PDHG) method
or the
(preconditioned) Chambolle-Pock method
\cite{esser2010,ChaPoc11,PockCha11}.
PDHG can be interpreted as a proximal point method
under a certain metric \cite{he2012}.
The quantity
\begin{align*}
R^k=&
\frac{1}{\mu}\|x^{k+1}-x^k\|_2^2+\frac{1}{\nu}\|y^{k+1}-y^k\|_2^2+
\frac{1}{\tau}\|z^{k+1}-z^k\|_2^2\nonumber\\
\qquad\qquad&-2\langle z^{k+1}-z^k,A(x^{k+1}-x^k)+B(y^{k+1}-y^k)\rangle
\end{align*}
is the fixed-point residual of the non-expansive mapping
defined by the proximal point method.
Therefore $R^k=0$ if and only if $(x^k,y^k,z^k)$
is a saddle point of $L$,
and $R^k$ decreases monotonically to $0$; cf.\ the review paper \cite{ryu2016}.
We can use $R^k$ as a measure of progress
and as a termination criterion.
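As an illustration, iteration \eqref{cp-method} can be written in a few lines once the proximal operators of $f$, $g$, and $h$ are available.
The following NumPy sketch is one possible realization; the function names are ours, and $A$, $B$ are assumed to be given as dense matrices.
\begin{verbatim}
import numpy as np

def pdhg(prox_f, prox_g, prox_h, A, B, x, y, z, mu, nu, tau, iters):
    # prox_f(v, t) should return argmin_x { t*f(x) + 0.5*||x - v||_2^2 },
    # and similarly for prox_g and prox_h.
    for _ in range(iters):
        x_new = prox_f(x - mu * (A.T @ z), mu)
        y_new = prox_g(y - nu * (B.T @ z), nu)
        z = prox_h(z + tau * (A @ (2 * x_new - x) + B @ (2 * y_new - y)), tau)
        x, y = x_new, y_new
    return x, y, z
\end{verbatim}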
\subsection{Shrink operators}
\label{ss:shrink}
As the subproblems of PDHG \eqref{cp-method}
are optimization problems themselves,
PDHG is most effective when these subproblems have closed-form solutions.
The problem definitions of scalar, vector, and matrix-OMT involve norms.
For some, but not all, choices of norms, the ``shrink'' operators
\begin{equation*}
\mathrm{shrink}(x^0;\mu)=
\argmin_{x\in \mathbb{R}^n}
\left\{
\mu\|x\|+(1/2)\|x-x^0\|_2^2
\right\}
\end{equation*}
have closed-form solutions.
Therefore, when possible, it is useful to choose such norms for computational efficiency.
Readers familiar with the compressed sensing or proximal methods literature
may be familiar with this notion.
For the vector-OMT, we focus on norms
\[
\|\vect{{\mathbf u}}\|_2^2=
\sum^d_{s=1}
\|{\mathbf u}_s\|_2^2
\qquad
\|\vect{{\mathbf u}}\|_{1,2}=
\sum^d_{s=1}
\|{\mathbf u}_s\|_2
\qquad
\|\vect{{\mathbf u}}\|_1=
\sum^d_{s=1}
\|{\mathbf u}_s\|_1
\]
for $\vect{{\mathbf u}}\in \mathbb{R}^{k\times d}$ and
\[
\|\vect{w}\|_2^2=
\sum^\ell_{s=1}
(w_s)^2
\qquad
\|\vect{w}\|_1=
\sum^\ell_{s=1}
|w_s|
\]
for $\vect{w}\in \mathbb{R}^{\ell}$.
The shrink operators of these norms have closed-form solutions.
For the matrix-OMT, we focus on norms
\[
\|{\mathbf U}\|_2^2=
\sum^d_{s=1}
\sum^k_{i,j=1}
|(U_s)_{ij}|^2
\,
\|{\mathbf U}\|_1=
\sum^d_{s=1}
\sum^k_{i,j=1}
|(U_s)_{ij}|
\quad
\|{\mathbf U}\|_\mathrm{1,nuc}=
\sum^d_{s=1}
\|U_s\|_\mathrm{nuc}
\]
for ${\mathbf U}\in {\mathcal H}^{d}$ and
$\|\cdot\|_2$, $\|\cdot\|_1$, and $\|\cdot\|_{1,\mathrm{nuc}}$
for ${\mathbf W}\in {\mathcal S}^\ell$, which are defined likewise.
The nuclear norm $\|\cdot\|_\mathrm{nuc}$ is the sum of the singular values.
The shrink operators of these norms have closed-form solutions.
We provide further information and details on shrink operators in the appendix.
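For the reader's convenience, here are minimal NumPy sketches of the shrink operators for the Euclidean, entrywise $\ell_1$, and nuclear norms; the function names are ours, and the appendix contains further details on the remaining cases.
\begin{verbatim}
import numpy as np

def shrink_l2(x0, mu):
    # argmin_x  mu*||x||_2 + 0.5*||x - x0||_2^2   (block soft-thresholding)
    nrm = np.linalg.norm(x0)
    return np.zeros_like(x0) if nrm <= mu else (1.0 - mu / nrm) * x0

def shrink_l1(x0, mu):
    # argmin_x  mu*||x||_1 + 0.5*||x - x0||_2^2   (entrywise soft-thresholding)
    mag = np.abs(x0)
    return x0 * np.maximum(1.0 - mu / np.maximum(mag, 1e-30), 0.0)

def shrink_nuc(X0, mu):
    # argmin_X  mu*||X||_nuc + 0.5*||X - X0||_F^2  (singular value thresholding)
    U, s, Vh = np.linalg.svd(X0, full_matrices=False)
    return (U * np.maximum(s - mu, 0.0)) @ Vh
\end{verbatim}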
\section{Algorithms}\label{sec:algorithm}
We now present simple and parallelizable algorithms for the OMT problems.
These algorithms are, in particular, very well-suited for GPU computing.
In Sections~\ref{sec:algorithm} and \ref{sec:examples} we deal with discretized optimization variables
that approximate solutions to the continuous problems.
For simplicity of notation,
we use the same symbol to denote the discretizations and their continuous counterparts.
Whether we are referring to the continuous variable or its discretization should be clear from context.
As mentioned in Section~\ref{s:alg-prelim}, these methods are the PDHG method
applied to discretizations of the continuous problems.
In the implementation, it is important to get the discretization at the boundary correct
in order to respect the zero-flux boundary conditions.
For interested readers, the details are provided in the appendix.
Instead of detailing the somewhat repetitive derivations of the algorithms in full,
we simply show the key steps and arguments for the $\vect{{\mathbf u}}$ update of vector-OMT.
The other steps follow from similar logic.
When we discretize the primal and dual vector-OMT problems and
apply PDHG to the discretized Lagrangian form of \eqref{L-vomt}, we get
\begin{align*}
\vect{{\mathbf u}}^{k+1}&=
\argmin_{\vect{{\mathbf u}}\in \mathbb{R}^{n\times n\times k\times d}}
\left\{
\sum_{ij}
\|\vect{{\mathbf u}}_{ij}\|_u
+\langle \vect{\phi}_{ij},(\mathrm{div}_\mathbf{x} {\mathbf u})_{ij}\rangle
+\frac{1}{2\mu}\|\vect{{\mathbf u}}_{ij}-\vect{{\mathbf u}}^k_{ij}\|_2^2
\right\}\\
&=
\argmin_{\vect{{\mathbf u}}\in \mathbb{R}^{n\times n\times k\times d}}
\left\{\sum_{ij}
\mu\|\vect{{\mathbf u}}_{ij}\|_u
-\mu\langle (\nabla_\mathbf{x}\vect{\phi})_{ij},{\mathbf u}_{ij}\rangle
+(1/2)\|\vect{{\mathbf u}}_{ij}-\vect{{\mathbf u}}_{ij}^k\|_2^2
\right\}.
\end{align*}
Since the minimization splits over the $i,j$ indices, we write
\begin{align*}
\vect{{\mathbf u}}^{k+1}_{ij}
&=
\argmin_{\vect{{\mathbf u}}_{ij}\in \mathbb{R}^{ k\times d}}
\left\{
\mu\|\vect{{\mathbf u}}_{ij}\|_u
-\mu\langle (\nabla_\mathbf{x}\vect{\phi})_{ij},{\mathbf u}_{ij}\rangle
+(1/2)\|\vect{{\mathbf u}}_{ij}-\vect{{\mathbf u}}_{ij}^k\|_2^2
\right\}\\
&=
\argmin_{\vect{{\mathbf u}}_{ij}\in \mathbb{R}^{ k\times d}}
\left\{
\mu\|\vect{{\mathbf u}}_{ij}\|_u
+(1/2)\|\vect{{\mathbf u}}_{ij}-(\vect{{\mathbf u}}^k_{ij}+\mu(\nabla_\mathbf{x} \vect{\phi})_{ij})\|_2^2
\right\}\\
&=\mathrm{shrink}(\vect{{\mathbf u}}^k_{ij}+\mu(\nabla_\mathbf{x} \vect{\phi})_{ij};\mu).
\end{align*}
At the boundary, these manipulations need special care.
When we incorporate ghost cells in our discretization,
these seemingly cavalier manipulations are also correct on the boundary.
We further explain the ghost cells and discretization
in the appendix.
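To give the flavor of the discretization (the full scheme is described in the appendix), the following NumPy sketch uses forward differences for $\nabla_\mathbf{x}$ and takes $\mathrm{div}_\mathbf{x}$ to be its negative adjoint; the boundary flux entries are held at zero, which is the role played by the ghost cells.
This sketch is illustrative and need not coincide with the appendix line for line; the function names are ours.
\begin{verbatim}
import numpy as np

def grad_x(phi, dx):
    # forward differences on an n x n grid; the last slice of each component
    # stays zero, consistent with the ghost-cell convention used in div_x
    gx = np.zeros_like(phi)
    gy = np.zeros_like(phi)
    gx[:-1, :] = (phi[1:, :] - phi[:-1, :]) / dx
    gy[:, :-1] = (phi[:, 1:] - phi[:, :-1]) / dx
    return np.stack([gx, gy])

def div_x(u, dx):
    # negative adjoint of grad_x; the flux entries u[0][-1, :] and u[1][:, -1]
    # are assumed to be zero (zero-flux boundary condition)
    ux, uy = u[0], u[1]
    d = ux.copy()
    d[1:, :] -= ux[:-1, :]
    d += uy
    d[:, 1:] -= uy[:, :-1]
    return d / dx
\end{verbatim}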
\subsection{Scalar-OMT algorithm}
\label{ss:somt-algorithm}
The scalar-OMT algorithm can be viewed
as a special case of vector-OMT or matrix-OMT algorithms.
This scalar-OMT algorithm was presented in \cite{LiRyuOsh17},
but we restate it here for completeness.
\begin{tabbing}
aaaaa\= aaa \=aaa\=aaa\=aaa\=aaa=aaa\kill
\rule{\linewidth}{0.8pt}\\
\noindent{\large\bf First-order Method for S-OMT}\\
\> \textbf{Input}: Problem data $\lambda^0$, $\lambda^1$\\
\>\>\> Initial guesses ${\mathbf u}^0$, $\phi^0$ and step sizes $\mu$, $\tau$\\
\> \textbf{Output}: Optimal ${\mathbf u}^\star$ and $\phi^\star$\\
\rule{\linewidth}{0.5pt}\\
\> \textbf{for } $k=1, 2, \cdots$ \qquad \textrm{(Iterate until convergence)}\\
\>\> ${\mathbf u}^{k+1}_{ij}=\mathrm{shrink}({\mathbf u}_{ij}^k+\mu(\nabla \phi^k)_{ij}, \mu)$
\qquad for $i,j=1,\dots,n$\\
\>\> $\phi_{ij}^{k+1}=\phi_{ij}^k+\tau (\mathrm{div}_\mathbf{x} (2{\mathbf u}^{k+1}-{\mathbf u}^k)_{ij}+\lambda^1_{ij}-\lambda^0_{ij})$ \qquad for $i,j=1,\dots,n$\\
\> \textbf{end}\\
\rule{\linewidth}{0.8pt}
\end{tabbing}
This method converges for step sizes $\mu,\tau>0$ that satisfy
\[
1>\tau\mu \lambda_\mathrm{max}(-\Delta_\mathbf{x}).
\]
For the particular setup of $\Omega=[0,1]\times[0,1]$ and $\Delta x=1/(n-1)$,
the bound $\lambda_\mathrm{max}(-\Delta_\mathbf{x})\le 8/(\Delta x)^2=8(n-1)^2$ is known.
In our experiments, we use $\mu =1/(16\tau(n-1)^2)$,
a choice that ensures convergence for any $\tau>0$.
We tune $\tau$ for the fastest convergence.
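Putting the pieces together, the loop above admits a short NumPy sketch; it reuses the grad_x and div_x helpers sketched earlier, uses a per-pixel Euclidean shrink (one possible choice of norm), and runs for a fixed number of iterations rather than monitoring a termination criterion.
\begin{verbatim}
import numpy as np

def shrink_pointwise_l2(v, mu):
    # Euclidean shrink applied to the length-2 flux vector at every pixel
    nrm = np.sqrt((v ** 2).sum(axis=0))
    scale = np.maximum(1.0 - mu / np.maximum(nrm, 1e-30), 0.0)
    return scale * v

def somt_pdhg(lam0, lam1, dx, mu, tau, iters):
    n = lam0.shape[0]
    u = np.zeros((2, n, n))
    phi = np.zeros((n, n))
    for _ in range(iters):
        u_new = shrink_pointwise_l2(u + mu * grad_x(phi, dx), mu)
        phi = phi + tau * (div_x(2 * u_new - u, dx) + lam1 - lam0)
        u = u_new
    return u, phi
\end{verbatim}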
\subsection{Vector-OMT algorithm}
\label{ss:vomt-algorithm}
Write $\mathrm{shrink}_u$ and $\mathrm{shrink}_w$ for the shrink operators with respect to $\|\cdot\|_u$ and $\|\cdot\|_w$.
The vector-OMT algorithm is as follows:
\begin{tabbing}
aaaaa\= aaa \=aaa\=aaa\=aaa\=aaa=aaa\kill
\rule{\linewidth}{0.8pt}\\
\noindent{\large\bf First-order Method for V-OMT}\\
\> \textbf{Input}: Problem data ${\mathcal G}$, $\vect{\lambda}^0$, $\vect{\lambda}^1$, $\alpha$\\
\>\>\> Initial guesses $\vect{{\mathbf u}}^0$, $\vect{w}^0$, $\vect{\phi}^0$ and step sizes $\mu$, $\nu$, $\tau$\\
\> \textbf{Output}: Optimal $\vect{{\mathbf u}}^\star$, $\vect{w}^\star$, and $\vect{\phi}^\star$\\
\rule{\linewidth}{0.5pt}\\
\> \textbf{for } $k=1, 2, \cdots$ \qquad \textrm{(Iterate until convergence)}\\
\>\> $\vect{{\mathbf u}}^{k+1}_{ij}=\mathrm{shrink}_u(\vect{{\mathbf u}}_{ij}^k+\mu(\nabla \vect{\phi}^k)_{ij}, \mu)$
\qquad \quad for $i,j=1,\dots,n$\\
\>\> $\vect{w}^{k+1}_{ij}=\mathrm{shrink}_w(\vect{w}^k_{ij}+\nu(\nabla_{\mathcal G}\vect{\phi}^k_{ij}),\alpha\nu)$ \qquad for $i,j=1,\dots,n$\\
\>\> $\vect{\phi}_{ij}^{k+1}=\vect{\phi}_{ij}^k+\tau (\mathrm{div}_\mathbf{x} (2\vect{{\mathbf u}}^{k+1}-\vect{{\mathbf u}}^k)_{ij}+
\mathrm{div}_{\mathcal G} (2\vect{w}^{k+1}-\vect{w}^k)_{ij} +\vect{\lambda}^1_{ij}-\vect{\lambda}^0_{ij})$\\
\>\>\>\> \qquad \qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad for $i,j=1,\dots,n$\\
\> \textbf{end}\\
\rule{\linewidth}{0.8pt}
\end{tabbing}
This method converges for step sizes $\mu,\nu,\tau>0$ that satisfy
\[
1>\tau\mu \lambda_\mathrm{max}(-\Delta_\mathbf{x})
+\tau \nu \lambda_\mathrm{max}(-\Delta_{\mathcal G}).
\]
For the particular setup of $\Omega=[0,1]\times[0,1]$ and $\Delta x=1/(n-1)$,
the bound $\lambda_\mathrm{max}(-\Delta_\mathbf{x})\le 8(n-1)^2$ is known.
Given a graph ${\mathcal G}$, we can compute
$\lambda_\mathrm{max}(-\Delta_{\mathcal G})$ with a standard eigenvalue routine.
In our experiments, we use
$\mu =1/(32\tau(n-1)^2)$
and
$\nu =1/(4\tau\lambda_\mathrm{max}(-\Delta_{\mathcal G}))$,
a choice that ensures convergence for any $\tau>0$.
We tune $\tau$ for the fastest convergence.
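If, for instance, the graph gradient follows the weighted-incidence convention $(\nabla_{\mathcal G}\vect{\phi})_e=\sqrt{c_e}\,(\phi_i-\phi_j)$ for an edge $e=(i,j)$ with cost $c_e$ (the convention of Section~\ref{sec:vectorgrad} may differ by a normalization), then with $\Delta_{\mathcal G}=\mathrm{div}_{\mathcal G}\nabla_{\mathcal G}$ we have $-\Delta_{\mathcal G}=D^TD$ for the weighted incidence matrix $D$, and $\lambda_\mathrm{max}(-\Delta_{\mathcal G})$ can be computed as in the following sketch (our own naming).
\begin{verbatim}
import numpy as np

def graph_laplacian_lmax(edges, costs, k):
    # edges: list of node pairs (i, j); costs: positive edge costs; k: #nodes
    D = np.zeros((len(edges), k))
    for e, ((i, j), c) in enumerate(zip(edges, costs)):
        D[e, i] = np.sqrt(c)
        D[e, j] = -np.sqrt(c)
    return float(np.linalg.eigvalsh(D.T @ D).max())
\end{verbatim}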
\subsection{Matrix-OMT algorithm}
\label{ss:momt-algorithm}
Write $\mathrm{shrink}_u$ and $\mathrm{shrink}_w$ for the shrink operators with respect to $\|\cdot\|_u$ and $\|\cdot\|_w$.
The matrix-OMT algorithm is as follows:
\begin{tabbing}
aaaaa\= aaa \=aaa\=aaa\=aaa\=aaa=aaa\kill
\rule{\linewidth}{0.8pt}\\
\noindent{\large\bf First-order Method for M-OMT}\\
\> \textbf{Input}: Problem data $\mathbf{L}$, $\Lambda^0$, $\Lambda^1$, $\alpha$, \\
\>\>\> Initial guesses ${\mathbf U}^0$, ${\mathbf W}^0$, $\Phi^0$ and step sizes $\mu$, $\nu$, $\tau$\\
\> \textbf{Output}: Optimal ${\mathbf U}^\star$, ${\mathbf W}^\star$, and $\Phi^\star$\\
\rule{\linewidth}{0.5pt}\\
\> \textbf{for } $k=1, 2, \cdots$ \qquad \textrm{(Iterate until convergence)}\\
\>\> ${\mathbf U}^{k+1}_{ij}=\mathrm{shrink}_u({\mathbf U}_{ij}^k+\mu(\nabla \Phi^k)_{ij}, \mu)$
\qquad\quad \,\,for $i,j=1,\dots,n$\\
\>\> ${\mathbf W}^{k+1}_{ij}=\mathrm{shrink}_w({\mathbf W}^k_{ij}+\nu(\nabla_\mathbf{L}\Phi^k_{ij}),\alpha\nu)$ \qquad for $i,j=1,\dots,n$\\
\>\> $\Phi_{ij}^{k+1}=\Phi_{ij}^k+\tau (\mathrm{div}_\mathbf{x} (2{\mathbf U}^{k+1}-{\mathbf U}^k)_{ij}+
\mathrm{div}_\mathbf{L} (2{\mathbf W}^{k+1}-{\mathbf W}^k)_{ij} +\Lambda^1_{ij}-\Lambda^0_{ij})$\\
\>\>\>\> \qquad \qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad for $i,j=1,\dots,n$\\
\> \textbf{end}\\
\rule{\linewidth}{0.8pt}
\end{tabbing}
This method converges for step sizes $\mu,\nu,\tau>0$ that satisfy
\[
1>\tau\mu \lambda_\mathrm{max}(-\Delta_\mathbf{x})
+\tau \nu \lambda_\mathrm{max}(-\Delta_\mathbf{L}).
\]
For the particular setup of $\Omega=[0,1]\times[0,1]$ and $\Delta x=1/(n-1)$,
the bound $\lambda_\mathrm{max}(-\Delta_\mathbf{x})\le 8(n-1)^2$ is known.
Given $\mathbf{L}$, we can compute the value of
$\lambda_\mathrm{max}(-\Delta_\mathbf{L})$
by explicitly forming a $k^2\times k^2$ matrix
representing the linear operator $-\Delta_\mathbf{L}$
and applying a standard eigenvalue routine.
In our experiments, we use
$\mu =1/(32\tau(n-1)^2)$
and
$\nu =1/(4\tau\lambda_\mathrm{max}(-\Delta_\mathbf{L}))$,
a choice that ensures convergence for any $\tau>0$.
We tune $\tau$ for the fastest convergence.
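By the definitions of $\nabla_\mathbf{L}$ and $\mathrm{div}_\mathbf{L}$ we have $-\Delta_\mathbf{L}X=\sum_{s}[L_s,[L_s,X]]$, so one way to form the $k^2\times k^2$ matrix is through Kronecker products: with the column-stacking convention $\mathrm{vec}(AXB)=(B^T\otimes A)\mathrm{vec}(X)$, the commutator $X\mapsto[L,X]$ corresponds to $I\otimes L-L^T\otimes I$.
The following sketch (our own naming and vectorization convention) computes $\lambda_\mathrm{max}(-\Delta_\mathbf{L})$ this way.
\begin{verbatim}
import numpy as np

def neg_laplacian_L_lmax(Ls):
    # Ls: list of k x k Hermitian matrices L_1, ..., L_ell
    k = Ls[0].shape[0]
    I = np.eye(k)
    M = np.zeros((k * k, k * k), dtype=complex)
    for L in Ls:
        ad = np.kron(I, L) - np.kron(L.T, I)   # vec([L, X]) = ad @ vec(X)
        M += ad @ ad                            # vec([L, [L, X]])
    return float(np.linalg.eigvalsh(M).max())
\end{verbatim}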
\begin{figure}[h!]
\centering
\subfloat[$\vect{\lambda}^0$]{\includegraphics[width=0.49\textwidth]{vectorrho0.png}
\label{fig:vec_marginal0}}
\subfloat[$\vect{\lambda}^1$]{\includegraphics[width=0.49\textwidth]{vectorrho1.png}
\label{fig:vec_marginal1}}\\
\vspace{-0.05in}
\subfloat[Velocity field ${\mathbf u}$ with $c_1=1$, $c_2=1$, $c_3=1$,
\newline $\alpha=1$ and norms $\|\vect{{\mathbf u}}\|_{1,2}, \|\vect{w}\|_{1}$.
\newline$V(\vect{\lambda}^0,\vect{\lambda}^1)=0.57$.
]{\includegraphics[width=0.49\textwidth]{figure2c.png}
\label{fig:velocityuL2a}
}
\subfloat[Velocity field ${\mathbf u}$ with $c_1=1$, $c_2=1$,
\newline $c_3=1$, $\alpha=1$ and norms $\|\vect{{\mathbf u}}\|_{1}, \|\vect{w}\|_{1}$.
\newline$ V(\vect{\lambda}^0,\vect{\lambda}^1)=0.67$.
]{\includegraphics[width=0.49\textwidth]{figure2d.png}
\label{fig:velocityuL2b}
}
\caption{Color image vector-OMT example.}
\label{fig:velocityuL2}
\end{figure}
\subsection{Parallelization}
For the vector-OMT,
the computation for the $\vect{{\mathbf u}}$, $\vect{w}$, and $\vect{\phi}$ updates splits over the indices $(i,j)$,
i.e., the computation splits pixel-by-pixel. Furthermore, the $\vect{{\mathbf u}}$ and $\vect{w}$ updates can be done in parallel.
Parallel processors handling jobs split over
$(i,j)$ must be synchronized before and after the $\vect{\phi}$ update.
The same is true for the scalar-OMT and matrix-OMT.
This highly parallel and regular algorithmic structure makes
the proposed algorithms very well suited for CUDA GPUs.
We demonstrate this through our experiments.
\section{Examples}\label{sec:examples}
In this section, we provide example applications of vector and matrix-OMT
with numerical tests to demonstrate the effectiveness of our algorithms.
As mentioned in the introduction,
potential applications of vector and matrix-OMT are broad. Here,
we discuss two of the simplest applications.
We implemented the algorithms in C++ CUDA and ran them on an Nvidia GPU.
For convenience, we MEXed this code, i.e., we made it available as a Matlab function.
For scientific reproducibility, we release this code.
\subsection{Color images}
Color images in RGB format are among the more immediate examples of vector-valued densities.
At each spatial position of a 2D color image, the color is represented as a combination of the three basic colors
red (R), green (G), and blue (B).
We allow any basic color to change to another basic color with cost
$c_1$ for R to G, $c_2$ for R to B, and $c_3$ for G to B.
So the graph ${\mathcal G}$ as described in Section~\ref{sec:vectorgrad} has 3 nodes and 3 edges.
Consider the two color images on the domain $\Omega=[0,1]\times [0,1]$
with $256\times 256$ discretization shown in
Figures~\ref{fig:vec_marginal0} and \ref{fig:vec_marginal1}.
The initial and target densities $\vect{\lambda}^0$ and $\vect{\lambda}^1$
both have three disks at the same location, but with different colors.
The optimal flux depends on the choice of norms.
Figures~\ref{fig:velocityuL2a} and \ref{fig:velocityuL2b}
show fluxes that are optimal with respect to different norms.
Whether it is optimal to spatially transport the colors or to change the colors
depends on the parameters $c_1, c_2, c_3, \alpha$ as well as the norms $\|\cdot\|_u$ and $\|\cdot\|_w$.
With the parameters of Figure~\ref{fig:colorringa} it is optimal to spatially transport the colors,
while with the parameters of Figure~\ref{fig:colorringb} it is optimal to change the colors.
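As a hypothetical illustration of this setup (the variable names are ours, and the cell-area factor of the continuous normalization is omitted so that the unit-mass condition is imposed on the grid sum), the RGB graph and the conversion of an image into a vector-valued density might look as follows, reusing the graph_laplacian_lmax sketch from the vector-OMT algorithm above.
\begin{verbatim}
import numpy as np

# Triangle graph on the RGB channels: node 0 = R, 1 = G, 2 = B,
# with edge costs c1 (R-G), c2 (R-B), c3 (G-B) as in the text.
c1, c2, c3 = 1.0, 1.0, 1.0
edges = [(0, 1), (0, 2), (1, 2)]
costs = [c1, c2, c3]
lmax_G = graph_laplacian_lmax(edges, costs, k=3)

def to_density(img):
    # img: an n x n x 3 RGB array; returns a (3, n, n) array with grid sum 1
    lam = np.transpose(np.asarray(img, dtype=float), (2, 0, 1))
    return lam / lam.sum()
\end{verbatim}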
\begin{figure}[h]
\centering
\subfloat[$c_1=1, c_2=1, c_3=1,\alpha =10$.]{\includegraphics[width=0.49\textwidth]{colorring1.png}
\label{fig:colorringa}
}
\subfloat[$c_1=1, c_2=1, c_3=1,\alpha =0.1$.]{\includegraphics[width=0.49\textwidth]{colorring2.png}
\label{fig:colorringb}
}
\caption{Color image vector-OMT example, with a more complicated shape
and norms $\|\vect{{\mathbf u}}\|_{1,2}, \|\vect{w}\|_{2}$.
}
\label{fig:colorring}
\end{figure}
Finally, we test our algorithm
on the setup of Figure~\ref{fig:velocityuL2a}
with grid sizes $32\times 32, 64 \times 64, 128 \times 128$, and $256 \times 256$.
Table~\ref{tab:vecgrid1} shows the number of iterations and the runtime on an Nvidia Titan Xp GPU
required to achieve a precision of $10^{-3}$, measured as the ratio between the duality gap and the primal objective value.
\begin{table}
\centering
\begin{tabular}[h]{| c | c | c | c | c | c |}
\hline
Grid size & Iteration count & Time per iteration & Total time & $\tau$ & $V(\vect{\lambda}^0,\vect{\lambda}^1)$ \\
\hline
$32 \times 32$ & $0.5\times 10^4$ & $27 \mu s$ &$0.14s$& 1 & 0.57\\
$64 \times 64$ & $0.5\times 10^4$ & $27 \mu s$ &$0.14s$& 1 & 0.57\\
$128 \times 128$ & $2\times 10^4$ & $23 \mu s$ &$0.47s$& 3 & 0.57\\
$256 \times 256$ & $2\times 10^4$ &$68 \mu s$ & $1.36s$&3 & 0.57\\
\hline
\end{tabular}
\caption{Computation cost for vector-OMT as a function of grid size.}
\label{tab:vecgrid1}
\end{table}
\subsection{Diffusion tensor imaging}
Diffusion tensor imaging (DTI) is a technique used in magnetic resonance imaging.
DTI captures orientations and intensities of brain fibers at each spatial position as ellipsoids
and gives us a matrix-valued density.
Therefore, the metric defined by the matrix-OMT
provides a natural way to compare the differences between brain diffusion images.
In Figure~\ref{fig:dti} we visualize diffusion tensor images
by using colors to indicate different orientations of the tensors at each voxel.
In this paper, we present simple 2D examples as a proof of concept
and leave actual 3D imaging for a topic of future work.
\begin{figure}[h]
\centering
\includegraphics[width=0.49\textwidth]{DTI1.png}
\includegraphics[width=0.49\textwidth]{DTI2.png}
\caption{Example of 2D diffusion tensor images.}
\label{fig:dti}
\end{figure}
Consider three synthetic matrix-valued densities $\Lambda^0$, $\Lambda^1$, and $\Lambda^2$ in Figure~\ref{fig:matrixmarginals}.
The densities $\Lambda^0$ and $\Lambda^1$ have
mass at the same spatial location,
but the ellipsoids have different shapes.
The densities $\Lambda^0$ and $\Lambda^2$ have the same ellipsoids at different spatial locations.
We compute the distance between $\Lambda^0$, $\Lambda^1$, and $\Lambda^2$
for different parameters $\alpha$ and fixed ${\mathbf L}=[L_1, L_2]^*$ with
\[
L_1 = \left[\begin{matrix}
1 & 0 & 0\\
0 & 2 & 0\\
0 & 0 & 0
\end{matrix}\right],
\quad
L_2 = \left[\begin{matrix}
1 & 1 & 1\\
1 & 0 & 0\\
1 & 0 & 0
\end{matrix}\right].
\]
Table~\ref{tab:param} shows the results
with the norms $\|{\mathbf U}\|_2$ and $\|{\mathbf W}\|_1$ and grid size $128 \times 128$.
As we can see, whether $\Lambda^0$ is more ``similar'' to $\Lambda^1$ or to $\Lambda^2$,
i.e., whether
$M(\Lambda^0,\Lambda^1)<M(\Lambda^0,\Lambda^2)$
or
$M(\Lambda^0,\Lambda^1)>M(\Lambda^0,\Lambda^2)$,
depends on whether the cost on ${\mathbf U}$ (spatial transport) is higher than the cost on ${\mathbf W}$ (changing the ellipsoids).
\begin{figure}[h]
\centering
\subfloat[$\Lambda^0$]{\includegraphics[width=0.33\textwidth]{matrixrho0.png}}
\subfloat[$\Lambda^1$]{\includegraphics[width=0.33\textwidth]{matrixrho1.png}}
\subfloat[$\Lambda^2$]{\includegraphics[width=0.33\textwidth]{matrixrho2.png}}
\caption{Synthetic matrix-valued distributions.}
\label{fig:matrixmarginals}
\end{figure}
\begin{table}
\centering
\begin{tabular}[h]{| c | c | c | c |}
\hline
Parameters & $M(\Lambda^0,\Lambda^1)$ & $M(\Lambda^0,\Lambda^2)$ & $M(\Lambda^1,\Lambda^2)$ \\
\hline
$\alpha=10$ & 2.71 & 0.27 & 3.37 \\
$\alpha=3$ & 0.81 & 0.27 & 1.08 \\
$\alpha=1$ & 0.27 & 0.27 & 0.54 \\
$\alpha=0.3$ & 0.081 & 0.27 & 0.35 \\
$\alpha=0.1$ & 0.027 & 0.27 & 0.29 \\
\hline
\end{tabular}
\caption{Distances between the three images $\Lambda^0,\Lambda^1$ and $\Lambda^2$.}
\label{tab:param}
\end{table}
Again, we test our algorithm on the setup of Figure~\ref{fig:matrixmarginals}
with $\alpha=1$
on grid sizes $32\times 32, 64 \times 64, 128 \times 128$, and $256 \times 256$. Table~\ref{tab:matrixOMT1}
shows the number of iterations and the runtime on an Nvidia Titan Xp GPU
required to achieve a precision of $10^{-3}$, measured as the ratio between the duality gap and the primal objective value.
\begin{table}
\centering
\begin{tabular}[h]{| c | c | c | c | c| c|}
\hline
Grid size & Iteration count & Time per iteration & Total time & $\tau$ & $M(\Lambda^0,\Lambda^1)$ \\
\hline
$32 \times 32$ & $1\times 10^4$ & $39\mu s$ & $0.39s$&10 & 0.27\\
$64 \times 64$ & $1.5\times 10^4$ & $39\mu s$ & $0.59s$&10 & 0.27\\
$128 \times 128$ & $2\times 10^4$& $85\mu s$ & $1.70s$&30 & 0.27\\
$256 \times 256$ & $4\times 10^4$&$330\mu s$ & $13.2s$&60 & 0.27\\
\hline
\end{tabular}
\caption{Computation cost for matrix-OMT as a function of grid size.}
\label{tab:matrixOMT1}
\end{table}
\section{Conclusions}
In this paper, we studied extensions of Wasserstein-1 optimal transport to vector- and matrix-valued densities.
Such an extension, as a tool of applied mathematics, is interesting
if the mathematics is sound, if the numerical methods are effective, and if the applications are compelling;
we addressed all three concerns.
From a practical viewpoint, the most valuable observation is that vector and matrix-OMT problems of realistic sizes can be solved in modest time with GPU computing.
Applying our algorithms to real-world problems in signal/image processing, medical imaging, and machine learning would be an interesting direction of future research.
Another interesting direction of study is quadratic regularization.
In general, the solutions to vector and matrix-OMT problems are not unique.
However, the regularized version of \eqref{vomt}
\begin{equation*}
\begin{array}{ll}
\underset{\vect{{\mathbf u}},\vect{w}}{\text{minimize}}&
\int_{\Omega}\|\vect{{\mathbf u}}(\mathbf{x})\|_u +\alpha\|\vect{w}(\mathbf{x})\|_w+ \epsilon (\|\vect{{\mathbf u}}(\mathbf{x})\|_u^2+\|\vect{w}(\mathbf{x})\|_w^2)\;d\mathbf{x}\\
\mbox{subject to} &
\mathrm{div}_\mathbf{x} (\vect{{\mathbf u}})(\mathbf{x})
+
\mathrm{div}_\mathcal{G}(\vect{w}(\mathbf{x}))=\vect{\lambda}^0(\mathbf{x})-\vect{\lambda}^1(\mathbf{x})\\
& \vect{{\mathbf u}} \text{ satisfies zero-flux b.c}
\end{array}
\end{equation*}
is strictly convex and therefore has a unique solution. A similar regularization can be done for matrix-OMT.
As discussed in \cite{LiRyuOsh17, L1partial},
this form of regularization is particularly useful as a slight modification to the
proposed method solves the regularized problem.
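For instance, with the Euclidean norm the regularized $\vect{{\mathbf u}}$-update only rescales the output of the unregularized shrink operator; a short sketch of this modification (our own, written in terms of the shrink_l2 helper sketched in Section~\ref{ss:shrink}):
\begin{verbatim}
def shrink_l2_regularized(v, mu, eps):
    # argmin_x  mu*(||x||_2 + eps*||x||_2^2) + 0.5*||x - v||_2^2
    # = shrink_l2(v, mu) / (1 + 2*mu*eps)
    return shrink_l2(v, mu) / (1.0 + 2.0 * mu * eps)
\end{verbatim}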
\section*{Acknowledgments}
We would like to thank Wilfrid Gangbo for many fruitful and inspirational discussions on the related topics. The Titan Xp's used for this research were donated by the NVIDIA Corporation.
This work was funded by ONR grants N000141410683, N000141210838, DOE grant DE-SC00183838 and a startup funding from Iowa State University.
| {
"timestamp": "2018-01-01T02:06:29",
"yymm": "1712",
"arxiv_id": "1712.10279",
"language": "en",
"url": "https://arxiv.org/abs/1712.10279",
"abstract": "In many applications such as color image processing, data has more than one piece of information associated with each spatial coordinate, and in such cases the classical optimal mass transport (OMT) must be generalized to handle vector-valued or matrix-valued densities. In this paper, we discuss the vector and matrix optimal mass transport and present three contributions. We first present a rigorous mathematical formulation for these setups and provide analytical results including existence of solutions and strong duality. Next, we present a simple, scalable, and parallelizable methods to solve the vector and matrix-OMT problems. Finally, we implement the proposed methods on a CUDA GPU and present experiments and applications.",
"subjects": "Optimization and Control (math.OC); Systems and Control (eess.SY); Functional Analysis (math.FA)",
"title": "Vector and Matrix Optimal Mass Transport: Theory, Algorithm, and Applications",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9890130586647623,
"lm_q2_score": 0.7154239957834733,
"lm_q1q2_score": 0.7075636743119789
} |
https://arxiv.org/abs/1501.00714 | A positive Grassmannian analogue of the permutohedron | The classical permutohedron Perm is the convex hull of the points (w(1),...,w(n)) in R^n where w ranges over all permutations in the symmetric group. This polytope has many beautiful properties -- for example it provides a way to visualize the weak Bruhat order: if we orient the permutohedron so that the longest permutation w_0 is at the "top" and the identity e is at the "bottom," then the one-skeleton of Perm is the Hasse diagram of the weak Bruhat order. Equivalently, the paths from e to w_0 along the edges of Perm are in bijection with the reduced decompositions of w_0. Moreover, the two-dimensional faces of the permutohedron correspond to braid and commuting moves, which by the Tits Lemma, connect any two reduced expressions of w_0.In this note we introduce some polytopes Br(k,n) (which we call bridge polytopes) which provide a positive Grassmannian analogue of the permutohedron. In this setting, BCFW bridge decompositions of reduced plabic graphs play the role of reduced decompositions. We define Br(k,n) and explain how paths along its edges encode BCFW bridge decompositions of the longest element pi(k,n) in the circular Bruhat order. We also show that two-dimensional faces of Br(k,n) correspond to certain local moves for plabic graphs, which by a result of Postnikov [Pos06], connect any two reduced plabic graphs associated to pi(k,n). All of these results can be generalized to the positive parts of Schubert cells. A useful tool in our proofs is the fact that our polytopes are isomorphic to certain Bruhat interval polytopes. Conversely, our results on bridge polytopes allow us to deduce some corollaries about the structure of Bruhat interval polytopes. | \section{Introduction}
The \emph{totally nonnegative part} $(Gr_{k,n})_{\geq 0}$ of the real Grassmannian
$Gr_{k,n}$ is the locus where all Pl\"ucker coordinates are
non-negative \cite{Lusztig3, Rietsch1, Postnikov}.
Postnikov initiated the combinatorial study of $(Gr_{k,n})_{\geq 0}$:
he showed that if one stratifies
the space based on which Pl\"ucker coordinates are positive and which are zero, one gets
a cell decomposition. The cells are in bijection with several families of
combinatorial objects, including decorated permutations, and
\emph{equivalence classes of reduced plabic graphs}.
Reduced plabic graphs are a certain family of planar bicolored graphs
embedded in a disk. These graphs are very interesting:
they can be used to parameterize cells in $(Gr_{k,n})_{\geq 0}$ \cite{Postnikov} and
to understand the
\emph{cluster algebra structure} on the Grassmannian \cite{Scott}; they
also arise as \emph{soliton graphs} associated to the
KP hierarchy \cite{KW}.
Most recently reduced plabic graphs and
the totally non-negative Grassmannian $(Gr_{k,n})_{\geq 0}$
have been studied in connection with
\emph{scattering amplitudes} in $N=4$ super Yang-Mills \cite{Scatt}.
Motivated by physical considerations,
the authors gave a systematic way for building up a
reduced plabic graph for a given cell of $(Gr_{k,n})_{\geq 0}$,
which they called the \emph{BCFW-bridge construction}.
In many ways, reduced plabic graphs behave like reduced decompositions of the symmetric group:
for example, just as any two reduced decompositions of the same permutation can be related by
braid and commuting moves, any two reduced plabic graphs associated to the same decorated
permutation
can be related via
certain local moves \cite{Postnikov}.
The goal of this note is to highlight another way in which reduced plabic graphs
behave like reduced decompositions. We will introduce a polytope
called the \emph{bridge polytope}, which is a positive Grassmannian analogue of
the permutohedron in the following sense: just as the one-skeleton of the permutohedron
encodes reduced decompositions of permutations,
the one-skeleton of the bridge polytope encodes BCFW-bridge decompositions
of reduced plabic graphs. Moreover, just as the two-dimensional faces of permutohedra
encode braid and commuting moves for reduced decompositions,
the two-dimensional faces of bridge polytopes
encode local moves for reduced plabic graphs.
\textsc{Acknowledgments:}
The author is grateful to Nima Arkani-Hamed, Jacob Bourjaily, and
Yan Zhang for
conversations concerning the BCFW bridge construction.
\section{Background}\label{sec:background}
\subsection{Reduced decompositions, Bruhat order, and the permutohedron}
Recall that the \emph{symmetric group} $S_n$ is the
group of all permutations on $n$ letters.
If we let $s_i$ denote the simple transposition exchanging
$i$ and $i+1$, then $S_n$ is a Coxeter group generated by the
$s_i$ (for $1 \leq i <n$), subject to the relations $s_i^2 = 1$,
as well as the \emph{commuting relations}
$s_i s_j = s_j s_i$ for $|i-j|\geq 2$ and \emph{braid relations}
$s_i s_{i+1} s_i = s_{i+1} s_i s_{i+1}$.
Given $w\in S_n$, there are many ways to write $w$
as a product of simple transpositions.
If $w = s_{i_1} \dots s_{i_\ell}$ is a minimal-length such expression,
we call $s_{i_1} \dots s_{i_\ell}$ a \emph{reduced expression} of $w$,
and $\ell$ the \emph{length} $\ell(w)$ of $w$.
The well-known Tits Lemma asserts that any two reduced expressions
can be related by a sequence of braid
and commuting moves if and only if they are reduced expressions for the same
permutation.
There are two important partial orders on $S_n$, both of which are
graded, with rank
function given by the length of the permutation.
The \emph{strong Bruhat order} is the transitive closure
of the cover relation $u \prec v$, where
$u \prec v$ means that $(i j) u = v$ for $i<j$
and $\ell(v) = \ell(u)+1$.
Here $(i j)$ is the transposition (not necessarily simple) exchanging
$i$ and $j$.
We use the symbol $\preceq$ to indicate the strong Bruhat order.
The strong Bruhat order has many nice combinatorial properties:
it is thin and shellable, and hence is a regular CW poset \cite{Edelman,
Proctor, BW}.
Moreover, this partial order is closely connected to the geometry of the complete flag
variety ${\mathsf{Fl}}_n$: it encodes when one Schubert cell is contained
in the closure of another. Somewhat less well-known is its connection to
total positivity. The totally non-negative part
$U_{\geq 0}^+$ of the unipotent part of $GL_n$ has a decomposition into
cells, indexed by permutations, and the strong Bruhat order describes
when one cell is contained in the closure of another
\cite{Lusztig3}.
The other partial order associated to $S_n$
is the \emph{(left) weak Bruhat order}. This
is the transitive closure
of the cover relation $u \lessdot v$, where
$u \lessdot v$ means that $s_i u = v$
and $\ell(v) = \ell(u)+1$. We use the symbol
$\leq$ to indicate the weak Bruhat order.
The weak Bruhat order is
related to reduced decompositions in the following way:
given any $w\in S_n$, the maximal chains from the identity $e$ to $w$
in the weak Bruhat order are in bijection with reduced decompositions of $w$.
Moreover, the weak order
has the advantage of being conveniently visualized
in terms of a polytope called the permutohedron, as we now describe.
\begin{definition}
The \emph{permutohedron} $\Perm_n$ in ${\mathbb R}^n$ is the convex
hull of the points
$\{(z(1),\dots,z(n)) \ \vert \ z\in S_n\}$.
\end{definition}
See Figure \ref{fig:perm} for the permutohedron $\Perm_4$.
We remark that the permutohedron is also connected to the geometry of the
complete flag variety ${\mathsf{Fl}}_n$: it is the image
of ${\mathsf{Fl}}_n$ under the moment map.
Given a permutation $z\in S_n$, we will often refer to it using the notation
$(z(1),\dots, z(n))$, or
$(z_1,\dots, z_n)$, where $z_i$ denotes $z(i)$.
\begin{figure}[h]
\centering
\includegraphics[height=2in]{Permutohedron.eps}
\caption{The permutohedron $\Perm_4$. Edges are labeled by
the values swapped when we go between the permutations labeling
the two vertices.}
\label{fig:perm}
\end{figure}
The following proposition is well known; see, e.g., \cite{OM}.
\begin{proposition}\label{prop:Perm}
Let $u$ and $v$ be permutations in $S_n$ with
$\ell(u) \leq \ell(v)$.
There is an edge between vertices $u$ and $v$ in
the permutohedron $\Perm_n$ if and only if
$v=s_i u$ for some $i$ and $\ell(v) = \ell(u) \pm 1$, in which case
we label the corresponding edge of the permutohedron by $s_i$.
Moreover, the two-dimensional faces of $\Perm_n$ are either squares
(with edges labeled in an alternating fashion by $s_i$ and $s_j$,
where $|i-j| \geq 2$) or hexagons
(with edges labeled in an alternating fashion by
$s_i$ and $s_{i+1}$ for some $i$).
\end{proposition}
See Figure \ref{fig:perm} for the example of $\Perm_4$.
Proposition \ref{prop:Perm} implies that edges of the permutohedron correspond to
cover relations in the weak Bruhat order, and the two-dimensional faces correspond to
braid and commuting relations for reduced words.
Let $w_0$ be the longest permutation in $S_n$. This permutation
is unique, and can be defined by $w_0(i) = n+1-i$.
Proposition \ref{prop:Perm} can be equivalently restated as follows.
\begin{proposition}\label{prop:Perm2}
The shortest paths from $e$ to $w_0$ along the one-skeleton
of the permutohedron
$\Perm_n$ are in bijection with the reduced decompositions of $w_0$.
\end{proposition}
For example, we can see from Figure \ref{fig:perm} that there
is a path
$$e \lessdot (2,1,3,4) \lessdot (3,1,2,4) \lessdot (4,1,2,3)
\lessdot (4,2,1,3) \lessdot (4,3,1,2) \lessdot (4,3,2,1) = w_0$$
in the permutohedron $\Perm_4$. Reading off the edge labels of this path
gives rise to the reduced decomposition
$s_1 s_2 s_1 s_3 s_2 s_1$ of $w_0$.
\subsection{The positive Grassmannian, permutations, circular Bruhat order,
and plabic graphs}
In this section we introduce the positive Grassmannian and some combinatorial
objects associated to it, including permutations and plabic graphs. As we will see,
\emph{reduced plabic graphs} are in many ways analogous to reduced decompositions of
permutations.
The \emph{real Grassmannian} $Gr_{k,n}$ is the space of all
$k$-dimensional subspaces of ${\mathbb R}^n$. An element of
$Gr_{k,n}$ can be viewed as a full-rank $k\times n$ matrix modulo left
multiplication by nonsingular $k\times k$ matrices. In other words, two
$k\times n$ matrices represent the same point in $Gr_{k,n}$
if and only if they
can be obtained from each other by row operations.
Let $\binom{[n]}{k}$ be the set of all $k$-element subsets of $[n]:=\{1,\dots,n\}$.
For $I\in \binom{[n]}{k}$, let $\Delta_I(A)$
be the {\it Pl\"ucker coordinate}, that is, the maximal minor of the $k\times n$ matrix $A$ located in the column set $I$.
The map $A\mapsto (\Delta_I(A))$, where $I$ ranges over $\binom{[n]}{k}$,
induces the {\it Pl\"ucker embedding\/} $Gr_{k,n}\hookrightarrow \mathbb{RP}^{\binom{n}{k}-1}$.
The \emph{totally non-negative part of the Grassmannian} (sometimes called the
\emph{positive Grassmannian})
$(Gr_{k,n})_{\geq 0}$ is the subset of $Gr_{k,n}$ such that all
Pl\"ucker coordinates are non-negative.
If we partition $(Gr_{k,n})_{\geq 0}$ into strata based on
which Pl\"ucker coordinates are positive and which are zero,
we obtain a decomposition into \emph{positroid cells} \cite{Postnikov}.
Postnikov showed that the cells are in bijection with, and naturally
labeled by, several
families of combinatorial objects, including
\emph{Grassmann necklaces},
\emph{decorated permutations} and
\emph{equivalence classes of reduced plabic graphs}.
He also introduced the \emph{circular Bruhat order} on decorated permutations,
which describes when one cell is contained in another.
Like the strong Bruhat order, the circular Bruhat order is
graded, thin, shellable, and hence is a regular CW poset \cite{Williams}.
\begin{definition}
A \emph{Grassmann necklace} is a sequence
${\mathcal I} = (I_1,\dots,I_n)$ of subsets $I_i \subset [n]$ such that,
for $i\in [n]$, if $i\in I_i$ then $I_{i+1} = (I_i \setminus \{i\}) \cup \{j\}$,
for some $j\in [n]$; and if $i \notin I_i$ then $I_{i+1} = I_i$.
(Here indices $i$ are taken modulo $n$.) In particular, we have
$|I_1| = \dots = |I_n|$, which is equal to some $k \in [n]$. We then say that
${\mathcal I}$ is a Grassmann necklace of \emph{type} $(k,n)$.
\end{definition}
To construct the Grassmann necklace associated to a positroid cell,
one uses the following construction.
\begin{lemma} \label{lem:necklace}
Define the shifted linear order
$<_i$ by $i <_i i+1 <_i \dots <_i n <_i 1 <_i \dots <_i i-1$.
Given $A\in Gr_{k,n}$,
let $\mathcal I(A) = (I_1,\dots,I_n)$ be the sequence of subsets in
$[n]$ such that, for $i \in [n]$, $I_i$ is the lexicographically
minimal element of $\binom{[n]}{k}$ with respect to
the shifted linear order $<_i$
such that $\Delta_{I_i}(A) \neq 0$.
Then ${\mathcal I}(A)$ is a Grassmann necklace of type $(k,n)$.
\end{lemma}
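For small examples, the construction in Lemma~\ref{lem:necklace} can be carried out by brute force.
The following sketch (our own naming, with $1$-based column labels and a numerical tolerance in place of exact nonvanishing) computes $\mathcal I(A)$ for a full-rank $k\times n$ matrix $A$.
\begin{verbatim}
import numpy as np
from itertools import combinations

def grassmann_necklace(A, tol=1e-12):
    k, n = A.shape
    necklace = []
    for i in range(1, n + 1):
        # columns listed in the shifted order  i <_i i+1 <_i ... <_i i-1
        shifted = [((i - 1 + t) % n) + 1 for t in range(n)]
        for cols in combinations(shifted, k):
            # Delta_I(A): the maximal minor on the column set I
            if abs(np.linalg.det(A[:, sorted(c - 1 for c in cols)])) > tol:
                necklace.append(tuple(sorted(cols)))
                break
    return necklace
\end{verbatim}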
\begin{definition}
A \emph{decorated permutation} $\pi$ on $n$ letters is a permutation
on $n$ letters in which fixed points have one of two colors,
``clockwise" and ``counterclockwise". A position $i$ of $\pi$
is called a \emph{weak excedance} if $\pi(i) >i$, or if
$\pi(i)= i$ and $i$ is a counterclockwise fixed point.
\end{definition}
\begin{definition}\label{necklace-to-perm}
Given a Grassmann necklace $\mathcal I$, define
a decorated permutation $\pi = \pi(\mathcal I)$ by requiring that
\begin{enumerate}
\item if $I_{i+1} = (I_i \setminus \{i\}) \cup \{j\}$,
for $j \neq i$, then $\pi(j)=i$.
\item if $I_{i+1}=I_i$ and $i \in I_i$ then $\pi(i)=i$ is
a counterclockwise fixed point.
\item if $I_{i+1}=I_i$ and $i \notin I_i$ then $\pi(i)=i$ is
a clockwise fixed point.
\end{enumerate}
As before, indices are taken modulo $n$.
\end{definition}
Using the above constructions, one may show the following.
\begin{theorem}\cite{Postnikov}
The cells of $(Gr_{k,n})_{\geq 0}$ are in bijection with
Grassmann necklaces of type $(k,n)$ and with
decorated permutations on $n$ letters with exactly $k$
weak excedances.
\end{theorem}
The totally non-negative Grassmannian
$(Gr_{k,n})_{\geq 0}$ has a unique top-dimensional cell, consisting
of elements where all Pl\"ucker coordinates are strictly positive.
This subset is called the \emph{totally positive Grassmannian}
$(Gr_{k,n})_{>0}$, and it is labeled by the decorated permutation
$\pi_{k,n}:=(n-k+1, n-k+2,\dots, n, 1, 2, \dots, n-k)$.
More generally, there is a nice class of positroid cells called the
{\it TP} or
{\it totally positive Schubert cells}.
\begin{definition}\label{def:TPSchubert}
A \emph{TP Schubert cell} is the unique positroid cell of
greatest dimension which lies in
the intersection of a usual Schubert cell with $(Gr_{k,n})_{\geq 0}$.
The TP Schubert cells in $(Gr_{k,n})_{\geq 0}$ are in bijection with
$k$-element subsets $J$ of $[n]=\{1,2,\dots,n\}$.
To calculate the decorated permutation
associated to the TP Schubert cell indexed by $J$,
write $J = \{j_1 < \dots < j_k\}$, and the complement of $J$
as $J^c =\{h_1 < \dots < h_{n-k}\}$. Then
the corresponding decorated permutation $\pi = \pi(J)$ is defined by
$\pi(h_i) = i$ for all $1 \leq i \leq n-k$,
and $\pi(j_i) = n-k+i$ for all $1 \leq i \leq k$.
(Any fixed points that arise are considered to be weak excedances if and only if they
are in positions labeled by $J$.)
\end{definition}
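The recipe for $\pi(J)$ in Definition \ref{def:TPSchubert} is easy to make explicit, as in the following Python sketch (ours).
\begin{verbatim}
# Illustrative sketch (ours): the permutation pi(J) labeling the TP Schubert
# cell indexed by the k-subset J of [n].
def pi_of_J(J, n):
    J = sorted(J)
    Jc = [h for h in range(1, n + 1) if h not in J]
    k = len(J)
    pi = [0] * (n + 1)                     # 1-based; pi[0] is unused
    for idx, h in enumerate(Jc, start=1):
        pi[h] = idx                        # pi(h_i) = i
    for idx, j in enumerate(J, start=1):
        pi[j] = n - k + idx                # pi(j_i) = n - k + i
    return pi[1:]

# pi_of_J([1, 2], 4) returns [3, 4, 1, 2], i.e. pi_{2,4}.
\end{verbatim}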
A \emph{Grassmannian permutation} is a permutation
$\pi = (\pi(1),\dots, \pi(n))$ which has at most one descent,
i.e. at most one position $i$ such that $\pi(i) > \pi({i+1})$.
Note that the permutations of the form $\pi(J)$ defined above are precisely
the inverses of the Grassmannian permutations, since
$\pi(J)^{-1} = (h_1, h_2, \dots, h_{n-k}, j_1, j_2, \dots, j_k)$.
In the case that $J=\{1,2,\dots, k\}$, the corresponding
TP Schubert cell is $(Gr_{k,n})_{>0}$
and $\pi(J) = \pi_{k,n}$.
To describe the partial order on positroid cells, it is
convenient to represent each decorated permutation
by a related affine permutation, as was done in \cite{KLS}.
Then, the circular Bruhat order for $(Gr_{k,n})_{\geq 0}$
can be viewed as the restriction of the
affine Bruhat order to the corresponding affine permutations,
together with a new bottom element $\hat{0}$ added to the poset.
\begin{definition}\label{def:affine}
Given a decorated permutation $\pi$ on $n$ letters, we construct its
affinization $\tilde{\pi}$ as follows.
If $\pi(i)<i$, set $\tilde{\pi}(i):=\pi(i)+n$.
If $\pi(i)=i$ and $i$ is a clockwise fixed point, set
$\tilde{\pi}(i):=i$.
If $\pi(i)=i$ and $i$ is a counterclockwise fixed point, set
$\tilde{\pi}(i):=i+n$.
Finally, if $\pi(i)>i$, set $\tilde{\pi}(i):=\pi(i)$.
\end{definition}
Clearly the affine permutation constructed above is a map
from $\{1,2,\dots,n\}$ to $\{1, 2,\dots, 2n\}$ such that
$i \leq \tilde{\pi}(i) \leq i+n$.
Moreover, modulo $n$ it agrees with the underlying permutation $\pi$.
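The case analysis of Definition \ref{def:affine} is summarized in the following Python sketch (ours).
\begin{verbatim}
# Illustrative sketch (ours): affinization of a decorated permutation pi,
# given as a list of values pi(1..n) together with the set ccw of its
# counterclockwise fixed points.
def affinize(pi, ccw, n):
    tilde = []
    for i in range(1, n + 1):
        v = pi[i - 1]
        if v < i:
            tilde.append(v + n)
        elif v == i:
            tilde.append(i + n if i in ccw else i)
        else:
            tilde.append(v)
    return tilde

# affinize([4, 6, 5, 1, 2, 3], set(), 6) returns [4, 6, 5, 7, 8, 9],
# matching the example used later in this section.
\end{verbatim}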
We now discuss plabic graphs, which e.g. are useful for constructing
parameterizations of positroid cells \cite{Postnikov}, and for
understanding the cluster structure on the Grassmannian \cite{Scott}.
\begin{definition}
A \emph{planar bicolored graph} (or \emph{plabic graph}) is an undirected graph $G$ drawn inside a disk
(and considered modulo homotopy)
plus $n$ {\it boundary vertices\/} on the boundary of the disk,
labeled $1,\dots,n$ in clockwise order.
The remaining {\it internal vertices\/}
are strictly inside the disk and are
colored in black and white. We require that each boundary vertex is incident to a
single edge.
\end{definition}
\begin{figure}[h]
\centering
\includegraphics[height=1in]{PlabicGraph.eps}
\caption{A plabic graph}
\label{plabic}
\end{figure}
We will always assume that a plabic graph is {\it leafless}, i.e. that
it has no non-boundary leaves, and that it has no isolated components.
The following map gives a connection between plabic graphs and decorated permutations.
\begin{definition}\label{gen:trip}
Given a plabic graph $G$,
the \emph{trip} $T_i$ is the directed path which starts at the boundary vertex
$i$, and follows the ``rules of the road": it turns right at a
black vertex and left at a white vertex.
Note that $T_i$ will also
end at a boundary vertex. The \emph{decorated trip permutation} $\pi_G$
is the permutation such that $\pi_G(i)=j$ whenever $T_i$
ends at $j$. Moreover, if there is a white (respectively, black)
boundary leaf at boundary vertex $i$, then $\pi_G(i)=i$ is a
counterclockwise (respectively, clockwise) fixed point.
\end{definition}
We now
define some local transformations of plabic graphs, which are analogous in some ways
to braid and commuting moves for reduced expressions in the symmetric group.
(M1) SQUARE MOVE. If a plabic graph has a square formed by
four trivalent vertices whose colors alternate,
then we can switch the
colors of these four vertices.
\begin{figure}[h]
\centering
\includegraphics[height=.6in]{M1}
\caption{Square move}
\label{M1}
\end{figure}
(M2) UNICOLORED EDGE CONTRACTION/UNCONTRACTION.
If a plabic graph contains an edge with two vertices of the same color,
then we can contract this edge into a single vertex with the same color.
We can also uncontract a vertex into an edge with vertices of the same
color.
\begin{figure}[h]
\centering
\includegraphics[height=.3in]{M2}
\caption{Unicolored edge contraction}
\label{M2}
\end{figure}
(M3) MIDDLE VERTEX INSERTION/REMOVAL.
If a plabic graph contains a vertex of degree 2,
then we can remove this vertex and glue the incident
edges together; on the other hand, we can always
insert a vertex (of any color) in the middle of any edge.
\begin{figure}[h]
\centering
\includegraphics[height=.07in]{M3}
\caption{Middle vertex insertion/removal}
\label{M3}
\end{figure}
(R1) PARALLEL EDGE REDUCTION. If a network contains
two trivalent vertices of different colors connected
by a pair of parallel edges, then we can remove these
vertices and edges, and glue the remaining pair of edges together.
\begin{figure}[h]
\centering
\includegraphics[height=.25in]{R1}
\caption{Parallel edge reduction}
\label{R1}
\end{figure}
\begin{definition}\cite{Postnikov}
Two plabic graphs are called \emph{move-equivalent} if they can be obtained
from each other by moves (M1)-(M3). The \emph{move-equivalence class}
of a given plabic graph $G$ is the set of all plabic graphs which are move-equivalent
to $G$.
A leafless plabic graph without isolated components
is called \emph{reduced} if there is no graph in its move-equivalence
class to which we can apply (R1).
\end{definition}
The following result is analogous to the Tits Lemma for reduced decompositions
of permutations.
\begin{theorem}\cite[Theorem 13.4]{Postnikov}
Two reduced plabic graphs
are move-equivalent if and only if they have the same decorated
trip permutation.
\end{theorem}
A priori, it is not so easy to detect whether a given plabic graph is reduced.
One characterization was given in \cite[Theorem 13.2]{Postnikov}.
Another very
simple characterization
was given in \cite{KW}.
\begin{definition}\label{labels}
Given a plabic graph $G$ with $n$ boundary vertices,
start at each boundary
vertex $i$ and label every edge along trip $T_i$ with $i$.
After doing this for each boundary vertex,
each edge will be labeled by up to two numbers (between $1$ and $n$).
If an edge is labeled by two numbers $i<j$, write $[i,j]$
on that edge.
We say that a plabic graph has the
\emph{resonance property}, if after labeling edges as above,
the set $E$ of edges incident to a given vertex has
the following property:
\begin{itemize}
\item there exist numbers $i_1<i_2<\dots<i_m$ such that when
we read the labels of $E$, we see the labels
$[i_1,i_2],[i_2,i_3],\dots,[i_{m-1},i_m],[i_1,i_m]$ appear in
clockwise order.
\end{itemize}
\end{definition}
This property and the following characterization
of reduced plabic graphs were given in \cite{KW}.
\begin{theorem}\cite[Theorem 10.5]{KW}
A plabic graph is reduced
if and only if it has the resonance property.
\end{theorem}
\subsection{BCFW bridge decompositions}
Given a reduced plabic graph, it is easy to construct its corresponding
trip permutation, as explained in Definition
\ref{gen:trip}. But how can we go backwards, i.e. given the permutation, how can we construct
a corresponding reduced plabic graph? One procedure to do this was given in
\cite[Section 20]{Postnikov}. Another elegant solution -- the
\emph{BCFW-bridge construction} -- was given in
\cite[Section 3.2]{Scatt}.
To explain the BCFW-bridge construction, we use
the representation of each decorated permutation $\pi$ as an affine permutation
$\tilde{\pi}$,
see Definition \ref{def:affine}.
We say that position $i$ of $\tilde{\pi}$ is a \emph{fixed point}
if $\tilde{\pi}(i) = i$ or
$\tilde{\pi}(i) = i+n$. And we say that
$\tilde{\pi}$ is a \emph{decoration of the identity} if it has a fixed point
in each position.
\begin{definition}[\emph{The BCFW-bridge construction}]\label{def:BCFW}
Given a decorated permutation $\pi$ on $n$ letters, which is not
simply a decoration of the identity, we start by choosing
a pair of adjacent positions $i<j$ such that
$\tilde{\pi}(i)<
\tilde{\pi}(j)$, and
$i$ and $j$ are not fixed points.
Here \emph{adjacent} means that either
$j=i+1$, or $i<j$ and every position $h$ such that $i<h<j$ is a fixed point.
We record the transposition $\tau= (ij)$ and swap the entries in positions
$i$ and $j$ in $\tilde{\pi}$. Any entries in the resulting permutation
which are fixed points are designated as \emph{frozen}, and henceforth ignored.
We continue this process on the resulting affine permutation,
until the end result is a decoration of the identity.
Finally we use the sequence of transpositions $\tau$ as a recipe
for adding ``bridges," thereby constructing a plabic graph.
\end{definition}
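The following Python sketch (ours) carries out one run of this procedure, always swapping the leftmost admissible adjacent pair; Definition \ref{def:BCFW} permits any admissible choice, so the output is one bridge decomposition among possibly many.
\begin{verbatim}
# Illustrative sketch (ours): one run of the BCFW-bridge construction, starting
# from the affinization of a decorated permutation.
def bcfw_bridges(tilde_pi, n):
    w = list(tilde_pi)                                 # w[i-1] = tilde_pi(i)
    fixed = lambda i: w[i - 1] in (i, i + n)           # fixed (frozen) positions
    taus = []
    while not all(fixed(i) for i in range(1, n + 1)):
        active = [i for i in range(1, n + 1) if not fixed(i)]
        for a, b in zip(active, active[1:]):           # adjacent non-fixed pairs
            if w[a - 1] < w[b - 1]:
                taus.append((a, b))
                w[a - 1], w[b - 1] = w[b - 1], w[a - 1]
                break
        else:
            raise ValueError("not the affinization of a decorated permutation")
    return taus

# bcfw_bridges([4, 6, 5, 7, 8, 9], 6) returns a sequence of eight transpositions;
# it differs from the sequence in the table below, but both are valid bridge
# decompositions of the same permutation.
\end{verbatim}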
See Table \ref{table:bridges} and Figure \ref{fig:bridges}
for an example of a bridge decomposition of the
permutation $\pi=(4,6,5,1,2,3)$ (with corresponding
affine permutation $\tilde{\pi} = (4,6,5,7,8,9)$).
Here a \emph{bridge} is the subgraph shown
at the left of Figure \ref{fig:bridges}, and a bridge decomposition
is a graph built by attaching successive bridges.
The sequence of transpositions
$\tau$ gives a recipe for where to place successive bridges.
\begin{center}
\begin{table}[h]
\begin{tabular}{| l | l |}
\hline
& \ 1 \ 2 \ 3 \ 4 \ 5 \ 6 \\
\ $\tau$ & \ $\downarrow$ \ $\downarrow$ \ $\downarrow$ \ $\downarrow$ \ $\downarrow$ \ $\downarrow$ \\
\hline
$(34)$ & \ 4 \ 6 \ 5 \ 7 \ 8 \ 9 \\
$(23)$ & \ 4 \ 6 \ 7 \ 5 \ 8 \ 9 \\
$(12)$ & \ 4 \ 7 \ 6 \ 5 \ 8 \ 9 \\
$(56)$ & \framebox{7} 4 \ 6 \ 5 \ 8 \ 9 \\
$(45)$ & \framebox{7} 4 \ 6 \ 5 \ 9 \ 8 \\
$(34)$ & \framebox{7} 4 \ 6 \ 9 \framebox{5} 8 \\
$(46)$ & \framebox{7} 4 \framebox{9} 6 \framebox{5} 8 \\
$(24)$ & \framebox{7} 4 \framebox{9} 8 \framebox{5}\framebox{6} \\
\hline
& \ 7 \ 8 \ 9 \ 4 \ 5 \ 6 \\
\hline
\end{tabular}
\vspace{.5cm}
\caption{A BCFW-bridge decomposition of $\tilde{\pi} = (4,6,5,7,8,9)$.
The frozen entries are boxed.}
\label{table:bridges}
\end{table}
\end{center}
\begin{proposition}\cite[Section 3.2]{Scatt}
Given a decorated permutation $\pi$,
the BCFW-bridge construction constructs a reduced plabic graph
whose trip permutation is $\pi$.
\end{proposition}
\begin{figure}[h]
\centering
\hspace{1cm}
\includegraphics[height=1in]{Bridge.eps}\hspace{1in}
\includegraphics[height=1.5in]{BridgePlabic.eps}
\caption{A single bridge, and the plabic graph $G$ associated to the bridge decomposition from Table \ref{table:bridges}. Note that the trip permutation of $G$
equals the product $(24)(46)(34)(45)(56)(12)(23)(34)$ of transpositions
$\tau$ in the bridge decomposition, which equals
the corresponding decorated permutation $\pi = (4,6,5,1,2,3)$.}
\label{fig:bridges}
\end{figure}
\section{Bridge polytopes and their one-skeleta}
The goal of this section is to introduce some
polytopes that we will call
\emph{bridge polytopes}, because their one-skeleta encode
BCFW-bridge decompositions of reduced plabic graphs. This statement is
analogous to the fact that
the one-skeleton of the permutohedron encodes
reduced decompositions of $w_0$. More specifically,
for each $k$-element subset $J$ of $[n]$, we will introduce
a bridge polytope $\Br_J$ which encodes bridge decompositions
of reduced plabic graphs for the permutation $\pi(J)$
labeling the corresponding TP Schubert cell.
In the case that $J = \{1,2,\dots,k\}$,
we will also denote $\Br_J$ by $\Br_{k,n}$ -- this will be the polytope
encoding bridge decompositions of $\pi_{k,n}$,
the decorated permutation labeling the totally positive Grassmannian.
\begin{figure}[h]
\centering
\includegraphics[height=5.3cm]{GrassPerm.eps} \hspace{1cm}
\includegraphics[height=5.3cm]{BridgeDecomps.eps}
\caption{Two labelings of a bridge polytope. At the left,
vertices are labeled by ordinary permutations, and edges are labeled by
the pair of \emph{values} which are swapped. At the right,
vertices are labeled by affine permutations, and edges are labeled by
the pair of \emph{positions} which are swapped.
The map from vertex-labels at the left to vertex-labels at the right is
$z \mapsto \widetilde{z^{-1}}$.
In both cases, the paths along the one-skeleton
from top to bottom encode bridge decompositions of the permutation
$\pi_{2,4} = (3,4,1,2)$.}
\label{fig:scamp}
\end{figure}
Before giving the general construction, we present an example in Figure
\ref{fig:scamp}, which encodes the bridge decompositions of plabic graphs with
trip permutation
$\pi = (3,4,1,2)$. The figure at the left shows the polytope
obtained by taking the convex hull of the permutations
$$\{(z_1,z_2,z_3,z_4) \ \vert \ z_1 \geq 1, z_2 \geq 2, z_3 \leq 3, z_4 \leq 4\}.$$
The edge between two permutations is labeled by the
\emph{values} of the swapped entries.
The figure at the right shows the same polytope, but now vertices are labeled
by affine permutations. A vertex which was labeled by $z$ at the left
is labeled at the right by the affinization $\widetilde{z^{-1}}$ of $z^{-1}$.
Note that we took the inverse in order to get a vertex-labeling of
the polytope such that edges correspond to the \emph{positions}
of the swapped entries, agreeing with the sequence of
transpositions defined in Definition \ref{def:BCFW}.
The main result of this section will imply
that the minimal chains in the one-skeleton
of the bridge polytope from ``top" to ``bottom"
(from $(3,4,1,2)$ to $(1,2,3,4)$ using the vertex-labeling at the left,
and from $(3,4,5,6)$ to $(5,6,3,4)$ using the vertex-labeling at the right)
are in bijection with the bridge decompositions of the reduced plabic graphs
with trip permutation $\pi_{2,4} = (3,4,1,2)$.
For example, the leftmost chain from top to bottom is labeled by
the sequence of transpositions $(12)$, $(23)$, $(12)$, $(24)$; the
corresponding plabic graph is shown in Figure \ref{fig:bridge2}.
\begin{figure}[h]
\centering
\includegraphics[height=3.3cm]{BridgePlabic2.eps}
\caption{The plabic graph coming from the sequence of bridges
$(12)$, $(23)$, $(12)$, $(24)$. Note that its trip permutation
is $(3,4,1,2)=(24)(12)(23)(12)$.}
\label{fig:bridge2}
\end{figure}
We now turn to the general construction.
\begin{definition}\label{def:bridge}
Let $J \subset \{1,2,\dots, n\}$ and set
\begin{equation}
S_J = \{\pi \in S_n \ \vert \ \pi(j) \geq j \text{ for }j\in J
\text{ and }\pi(j) \leq j \text{ for }j \notin J\}.
\end{equation}
In other words, any permutation in $S_J$ is required to have a
\emph{weak excedance} in position $j$ for each $j\in J$, and a
\emph{weak non-excedance} in position $j$ for each $j\notin J$.
We define a polytope $\G_J$ by
\begin{equation}
\G_J = \conv\{ (\pi(1),\pi(2),\dots, \pi(n)) \ \vert \ \pi\in S_J\}
\subset {\mathbb R}^n.
\end{equation}
\end{definition}
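For small $n$ the set $S_J$ of Definition \ref{def:bridge} can simply be enumerated by brute force, as in the following Python sketch (ours).
\begin{verbatim}
# Illustrative sketch (ours): the set S_J, i.e. the vertex labels of the
# bridge polytope G_J.
from itertools import permutations

def S_J(J, n):
    J = set(J)
    return [p for p in permutations(range(1, n + 1))
            if all((p[j - 1] >= j) if j in J else (p[j - 1] <= j)
                   for j in range(1, n + 1))]

# len(S_J({1, 2}, 4)) == 14: these are the vertices of the bridge polytope
# from the example above.
\end{verbatim}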
In the special case that $J = \{1,2,\dots, k\}$, we write
$$\G_{k,n} = \G_J = \conv\{(\pi(1),\dots,\pi(n)) \ \vert \ \pi \in S_n, \
\pi(j) \geq j \text{ for } 1 \leq j \leq k \text{ and }
\pi(j) \leq j \text{ for } k+1 \leq j \leq n\}.$$
Recall the definition of $\pi(J)$ from Definition
\ref{def:TPSchubert}, the decorated permutation labeling the
corresponding TP Schubert cell. Let $e(J)$ be the decoration
of the identity which has counterclockwise fixed points
precisely in positions $J$.
The following is our main result. We describe it using
the vertex-labeling of the bridge polytope with ordinary
permutations, as in Definition \ref{def:bridge},
but it can be easily translated using
the vertex-labeling by affine permutations.
\begin{theorem}\label{thm:main}
Let $J$ be a $k$-element subset of $[n]$.
The shortest paths from $\pi(J)$ to the identity permutation $e$
along the one-skeleton of the bridge polytope $\G_J$
are in bijection with the BCFW-bridge decompositions of
the permutation $\pi(J)^{-1}$, where a sequence of edge-labels
in a path is interpreted as a sequence of transpositions $\tau$ in the
bridge decomposition. Equivalently,
there is an edge between two vertices $\pi$ and $\hat{\pi}$ of
the bridge polytope $\G_J$ if and only if there exists some
pair $i<\ell$ such that
$(i \ell) \pi = \hat{\pi}$, and
$\pi(j) = \hat{\pi}(j) = j$ for $i<j<\ell$.
In other words, $\pi$ and $\hat{\pi}$ differ by
swapping the values $i$
and $\ell$,
and $i+1, i+2, \dots, \ell-1$ are fixed points
of both $\pi$ and $\hat{\pi}$.
\end{theorem}
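The edge criterion in Theorem \ref{thm:main} is purely combinatorial, so the one-skeleton can be assembled without constructing the polytope itself; the following Python sketch (ours, reusing \texttt{S\_J} from the earlier sketch) does exactly that.
\begin{verbatim}
# Illustrative sketch (ours): the edges of the one-skeleton of G_J, via the
# combinatorial criterion of the theorem above.  Two vertices are joined when
# they differ by swapping two values i < ell and every value strictly between
# i and ell is a fixed point of both permutations.
from itertools import combinations

def skeleton_edges(J, n):
    verts = S_J(J, n)
    adj = {v: [] for v in verts}
    for p, q in combinations(verts, 2):
        diff = [m for m in range(n) if p[m] != q[m]]
        if len(diff) != 2:
            continue
        i, ell = sorted((p[diff[0]], p[diff[1]]))     # the two swapped values
        if all(p[m - 1] == m for m in range(i + 1, ell)):
            adj[p].append((q, (i, ell)))
            adj[q].append((p, (i, ell)))
    return adj
\end{verbatim}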
We will prove Theorem \ref{thm:main} in Section \ref{sec:Bruhat}.
The following corollary is immediate from Theorem \ref{thm:main}.
\begin{corollary}
The shortest paths
in the bridge polytope $\G_{k,n}$ from $\pi_{k,n}$ down to the identity $e$ are in bijection
with the BCFW-bridge decompositions of the permutation
$\pi_{n-k,n}$. In other words, we can read off all
BCFW-bridge decompositions of $\pi_{n-k,n}$ by
recording the edge-labels on all shortest paths from $\pi_{k,n}$
to $e$ in the one-skeleton of
$\G_{k,n}$.
\end{corollary}
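Combining the two previous sketches, one can read off the bridge decompositions by enumerating geodesics, as in the following Python sketch (ours); the function names \texttt{skeleton\_edges} and \texttt{pi\_of\_J} are from the earlier sketches.
\begin{verbatim}
# Illustrative sketch (ours): enumerate the edge-label sequences along all
# shortest paths from pi(J) down to the identity e; by the theorem above these
# are the BCFW-bridge decompositions of pi(J)^{-1}.
from collections import deque

def bridge_decompositions(J, n):
    adj = skeleton_edges(J, n)
    start = tuple(pi_of_J(J, n))               # the vertex pi(J)
    target = tuple(range(1, n + 1))            # the identity e
    dist = {target: 0}                         # BFS distances to e
    queue = deque([target])
    while queue:
        v = queue.popleft()
        for w, _ in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    def walks(v):                              # all geodesics from v down to e
        if v == target:
            yield []
            return
        for w, label in adj[v]:
            if w in dist and dist[w] == dist[v] - 1:
                for rest in walks(w):
                    yield [label] + rest
    return list(walks(start))

# For J = {1, 2} and n = 4, one of the returned label sequences is
# [(1, 2), (2, 3), (1, 2), (2, 4)], the leftmost chain described above.
\end{verbatim}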
\section{Bruhat interval polytopes and the proof of Theorem
\ref{thm:main}}\label{sec:Bruhat}
Bruhat interval polytopes are a class of polytopes which were recently
studied by Kodama and the second author in \cite{KW3},
in connection with the full Kostant-Toda lattice on the
flag variety. The combinatorial properties of Bruhat interval
polytopes were further investigated by Tsukerman and the second author
in \cite{TW}. We will show that bridge polytopes
are isomorphic to certain Bruhat interval polytopes,
and use a result from \cite{KW3} as a tool for proving
Theorem \ref{thm:main}. We will also deduce
a characterization of the one-skeleta of a large class of
Bruhat interval polytopes.
\begin{definition}\label{def:BIP}
Let $u,v \in S_n$ such that $ u \leq v $ in (strong) Bruhat
order.
The \emph{Bruhat interval polytope}
$\mathsf{Q}_{u,v}$ is defined as the convex hull
$$\mathsf{Q}_{u,v}
= \conv\{(z(1),\dots,z(n)) \ \vert \ z\in S_n \text{ such that }
u \leq z \leq v\}.$$
\end{definition}
Note that when $u$ is the identity and $v = w_0$,
the longest permutation in $S_n$, the Bruhat interval polytope
$\mathsf{Q}_{u,v}$ equals the permutohedron $\Perm_n$.
\begin{lemma}\label{lem:bridge-Bruhat}
The bridge polytope $\G_J$ is isomorphic to the Bruhat interval polytope
$\mathsf{Q}_{e,\pi(J)^{-1}}$.
More specifically,
recall that for $J$ a $k$-element subset of $[n]$, we write
$J = \{j_1 < \dots < j_k\}$, and $J^c = \{h_1 < \dots < h_{n-k}\}$.
Consider the map $\psi:{\mathbb R}^n \to {\mathbb R}^n$ defined by
$\psi(x_1,\dots,x_n) = (x_{h_1}, x_{h_2}, \dots,
x_{h_{n-k}}, x_{j_1}, x_{j_2}, \dots, x_{j_k})$.
Then $\psi$ is an isomorphism from
$\G_J$ to $\mathsf{Q}_{e,\pi(J)^{-1}}$.
\end{lemma}
\begin{proof}
Since $\psi$ simply permutes the coordinates of ${\mathbb R}^n$,
it obviously acts as an isomorphism on any polytope.
Moreover, $\psi$ takes the vertex $(z(1),\dots, z(n))$
labeled by
the permutation $z$ in $\G_{J}$ to the vertex
labeled by the permutation $z \pi(J)^{-1}$ of
$\mathsf{Q}_{e,\pi(J)^{-1}}$. In particular,
it takes the vertices $\pi(J)$ and $e$ of $\G_{J}$
to the vertices $e$ and $\pi(J)^{-1}$ of $\mathsf{Q}_{e,\pi(J)^{-1}}$,
respectively.
It is not hard to show that $\psi$ maps
the set of permutations $S_J$ to the set of permutations
in the interval $[e,\pi(J)^{-1}]$.
\end{proof}
Once we have established Theorem \ref{thm:main},
Lemma \ref{lem:bridge-Bruhat} immediately implies the following description
of the one-skeleton of any
Bruhat interval polytopes $\mathsf{Q}_{e,w}$,
when $w$ is a Grassmannian permutation.
\begin{corollary}
Let $w$ be a Grassmannian permutation in $S_n$, i.e. a permutation
with at most one descent. Then there is an edge between
vertices $u$ and $v$ in $\mathsf{Q}_{e,w}$ if and only if
$u$ and $v$ satisfy the following properties:
\begin{itemize}
\item $v = (i\,\ell)\, u$ for some transposition $(i\,\ell)$ with $i < \ell$;
\item in the permutations $u$ and $v$,
each of the values $i+1, i+2,\dots, \ell-1$ is in precisely
the same position as it is in $w$.
\end{itemize}
\end{corollary}
We now prove Theorem \ref{thm:main}, by establishing
Lemma \ref{lem:edge-swap},
Proposition \ref{prop:fixed},
and Theorem \ref{th:edgeexist}, below.
\begin{lemma}\label{lem:edge-swap}
If there is an edge in $\G_J$ between $\pi$ and $\hat{\pi}$, then
there exists some transposition $(i \ell)$ such that
$(i \ell) \pi = \hat{\pi}$.
\end{lemma}
\begin{proof}
By \cite[Theorem A.10]{KW3}, every
edge of a Bruhat interval polytope connects two vertices
$u$ and $v$, where $v = (i \ell)\, u$ for some transposition $(i \ell)$.
Lemma \ref{lem:edge-swap} now follows from
Lemma \ref{lem:bridge-Bruhat}.
\end{proof}
\begin{remark}
It was shown more generally in \cite{TW} that every face of a Bruhat interval
polytope is a Bruhat interval polytope.
\end{remark}
\begin{proposition}\label{prop:fixed}
Suppose there is an edge in $\G_J$ between $\pi$ and $\hat{\pi}$ where
$(i \ell) \pi = \hat{\pi}$ for some $i<\ell$, i.e. $\pi$ and $\hat{\pi}$
differ by swapping the values $i$ and $\ell$. Then $i+1, i+2, \dots, \ell-1$
must be fixed points of $\pi$ and $\hat{\pi}$.
\end{proposition}
\begin{proof}
We use a proof by contradiction. Suppose that not all of
$i+1, i+2,\dots, \ell-1$ are fixed points. Let $j$ be the smallest
element of $\{i+1,i+2,\dots, \ell-1\}$ which is not a fixed point.
Let $a, b$, and $c$ be the positions of $i, \ell$, and $j$ in $\pi$.
Without loss of generality, $\pi_c = j > c$. So $c\in J$
and is a position of a weak excedance in any permutation in $S_J$.
Since we have an edge in the polytope between $\pi$
and $\hat{\pi}$, there exists a $\lambda\in {\mathbb R}^n$
such that $\lambda \cdot x: = \sum_h \lambda_h x_h$ (applied to $x\in S_J$)
is maximized precisely on $\pi$ and $\hat{\pi}$. In particular,
we must have $\lambda_a = \lambda_b$.
If $\lambda_a = \lambda_b < \lambda_c$, then define
$\tilde{\pi}$ so that it agrees with $\pi$ except in positions $b$
and $c$, where we have $\tilde{\pi}_b = j$ and $\tilde{\pi}_c = \ell$.
Since $\pi_b = \ell$ and $\hat{\pi}_b = i$ and $i<j<\ell$,
permutations in $S_J$ are allowed to have a $b$th coordinate of $j$.
Since $j>c$ and $j<\ell$, we have $\ell>c$. So permutations in $S_J$
are allowed to have a $c$th coordinate of $\ell$. Therefore
$\tilde{\pi} \in S_J$. But $\lambda \cdot \tilde{\pi} >
\lambda \cdot \pi$, which is a contradiction.
If $\lambda_a = \lambda_b > \lambda_c$, then define
$\tilde{\pi}$ so that it agrees with $\pi$ except in positions
$a$ and $c$, where we have $\tilde{\pi}_a = j$ and
$\tilde{\pi}_c=i$. Since $\pi_a=i$ and $\hat{\pi}_a=\ell$ and $i<j<\ell$,
permutations in $S_J$ are allowed to have an $a$th coordinate of $j$.
Since $\pi_c = j>c$, and $i+1, i+2,\dots, j-1$ are fixed points
of $\pi$, $c \notin \{i+1, i+2,\dots, j-1\}$. So $c \leq i$,
and permutations in $S_J$ are allowed to have a $c$th coordinate of $i$
(recall that $c\in J$ and hence $c$ must be a weak excedance
in any element of $S_J$). Therefore $\tilde{\pi} \in S_J$.
But $\lambda \cdot \tilde{\pi} > \lambda \cdot \pi$, which is a contradiction.
\end{proof}
\begin{theorem}\label{th:edgeexist}
Consider the bridge polytope $\G_J$ and two permutations
$\pi$ and $\hat{\pi}$ in $S_J \subset S_n$ such that
$(i \ell) \pi = \hat{\pi}$ and $i+1, i+2, \dots, \ell-1$ are fixed points
of $\pi$ and $\hat{\pi}$. Choose a vector $\lambda =
(\lambda_1,\dots, \lambda_n)\in {\mathbb R}^n$ whose coordinates are some
permutation of the numbers $1, n^2, n^4, \dots, n^{2(n-1)}$,
and such that
\begin{equation*}
\lambda_{\pi^{-1}(1)} < \lambda_{\pi^{-1}(2)} < \dots < \lambda_{\pi^{-1}(i-1)}
<
\Box < \lambda_{\pi^{-1}(i)} = \lambda_{\pi^{-1}(\ell)} < \Box <
\lambda_{\pi^{-1}(\ell+1)} < \dots < \lambda_{\pi^{-1}(n-1)} <
\lambda_{\pi^{-1}(n)}.
\end{equation*}
Here the $\Box$ at the left represents the coordinates
$\lambda_{\pi^{-1}(j)}$ for $j\in J$ and $i<j<\ell$, sorted from
left to right in increasing order of $j$, and the $\Box$ at the right
represents the coordinates
$\lambda_{\pi^{-1}(j)}$ for $j\notin J$ and $i<j<\ell$, sorted from
left to right in increasing order of $j$.
Then when we calculate the dot product $\lambda \cdot x$
for each $x\in S_J$, $\lambda \cdot x$ is maximized precisely on
$\pi$ and $\hat{\pi}$. In particular, there is an edge in
$\Br_J$ between $\pi$ and $\hat{\pi}$.
\end{theorem}
\begin{proof}
To prove Theorem \ref{th:edgeexist}, we will first use the fact that the coordinates of $\lambda$
are $1, n^2, \dots, n^{2(n-1)}$ to show that to solve for the
$x\in S_J$ on which $\lambda \cdot x$ is maximized, it suffices
to use the greedy algorithm. In other words, we will show that
we can construct
such $x$ by maximizing individual coordinates $x_i$ one by one,
starting with the $x_i$ where $\lambda_i$ is maximal, and continuing
in decreasing order of the values of $\lambda_i$.
Next we will show that when we run the greedy algorithm, we will
get precisely the outcomes $\pi$ and $\hat{\pi}$.
Let us show now that in order to maximize the dot product
$\lambda \cdot x$ for $x \in S_J$, the greedy algorithm works.
Recall that the coordinates of $\lambda$ are the values
$1, n^2, \dots, n^{2(n-1)}$ (in some order). Let $h \in [n]$ be
such that $\lambda_h = n^{2(n-1)}$. We first claim
that we need to maximize $x_h$ subject to the condition $x\in S_J$.
Let $w_h$ be that maximum possible value.
Let $y\in S_J$ be some other permutation with $y_h \leq w_h-1$.
Then $\lambda_h x_h - \lambda_h y_h \geq n^{2(n-1)}$.
And since the maximum difference of $x_j$ and $y_j$ is $n-1$,
we have
$$\sum_{1 \leq j \leq n, j \neq h} \lambda_j y_j -
\sum_{1 \leq j \leq n, j \neq h} \lambda_j x_j
\leq (n-1)(1+n^2+ \dots + n^{2(n-2)}).$$
Now note that since $n-1<n$ and $1+n^2+ \dots + n^{2(n-2)}$ has $n-1$
terms, we have
$(n-1)(1+n^2+ \dots + n^{2(n-2)}) < n \cdot n \cdot n^{2(n-2)} = n^{2(n-1)}$.
It follows that $\lambda \cdot x > \lambda \cdot y$.
Next let $h' \in [n]$ be such that $\lambda_{h'} = n^{2(n-2)}$.
Using the same argument, and the inequality
$(n-1)(1+n^2+ \dots +n^{2(n-3)}) < n^{2(n-2)}$, it follows that to maximize
$\lambda \cdot x$, we need to now choose the maximum value of
$x_{h'}$ subject to the condition $x \in S_J$. Continuing in this fashion,
we see that this greedy algorithm will compute all $x\in S_J$ such
that $\lambda \cdot x$ is maximized.
Now we will show that if we use the greedy algorithm, choosing coordinates
$x_j$ for $x\in S_J$ in decreasing order of the corresponding value
$\lambda_j$, we will get precisely the permutations $\pi$ and $\hat{\pi}$.
{\bf Step 1.}
Looking at the ordering of the coordinates of $\lambda$ as
described in Theorem \ref{th:edgeexist}, we first need to maximize
$x_{\pi^{-1}(n)}$, then $x_{\pi^{-1}(n-1)}, \dots,$ then
$x_{\pi^{-1}(\ell+1)}$. Therefore we place $n$ in position
$\pi^{-1}(n)$, $n-1$ in position $\pi^{-1}(n-1)$, $\dots$,
$\ell+1$ in position $\pi^{-1}(\ell+1)$. (Note that there exist
permutations $x\in S_J$ with
these coordinates in these positions -- for example, $\pi$ and $\hat{\pi}$
are two examples of such permutations.)
{\bf Step 2.} Next we need to maximize the values that we put in
positions $\pi^{-1}(j)$, for
$i<j<\ell$ and
$\pi^{-1}(j) = j \notin J$.
But these are positions of weak non-excedances in $S_J$, so the
best we can do is to put fixed points there. Note that $\pi$
and $\hat{\pi}$ also have
fixed points in these positions, so the $x$ we are building so far
agrees with both $\pi$ and $\hat{\pi}$.
{\bf Step 3.} Now we want to maximize the values that we can
put in positions $\pi^{-1}(i)$ and $\pi^{-1}(\ell)$. The greatest
unused value so far is $\ell$ so we can put that in either position
$\pi^{-1}(\ell)$ (agreeing with $\pi$) or $\pi^{-1}(i)$ (agreeing
with $\hat{\pi}$).
Now let $j$ be the maximum value that we have not yet placed in
any position of the $x$ we are building. If $i<j<\ell$, then
necessarily $j\in J$, i.e. $j$ is a position where all permutations
of $S_J$ must have
a weak excedance. So we must make $j$ a fixed point in $x$ (the only
other option is to place a value smaller than $j$ in position $j$).
Similarly for any other $i<j<\ell$, we must set $x_j = j$.
Having done this, the maximum value that we have not yet placed in $x$
is $i$. So we now put $i$ in position $\pi^{-1}(i)$ (or
position $\pi^{-1}(\ell)$). Note that the $x$ we are building agrees
with either $\pi$ or $\hat{\pi}$ so far.
{\bf Step 4.} Finally we want to maximize the values that we place
in positions $\pi^{-1}(i-1),\dots, \pi^{-1}(1)$. The remaining
unused values are $i-1,\dots, 1$. So for $i-1 \geq j \geq 1$,
we place $j$ in position
$\pi^{-1}(j)$.
This greedy algorithm has
constructed precisely two permutations, $\pi$ and $\hat{\pi}$, and clearly
$\lambda \cdot \pi = \lambda \cdot \hat{\pi}$.
Therefore if we consider the values $\lambda \cdot x$ for
$x\in S_J$, $\lambda \cdot x$ is maximized precisely on $\pi$ and
$\hat{\pi}$. It follows that there is an edge in $\Br_J$ between
$\pi$ and $\hat{\pi}$.
\end{proof}
\section{The two-dimensional faces of bridge polytopes}
In this section we describe the two-dimensional faces of bridge polytopes,
and explain how they are related to the moves for reduced plabic graphs.
\begin{theorem}
A two-dimensional face of a bridge polytope is either a square, a
trapezoid, or a regular hexagon, with labels as in Figure \ref{fig:faces}.
\end{theorem}
\begin{figure}[h]
\centering
\includegraphics[height=1.2in]{Faces.eps}
\caption{The possible two-dimensional faces of a bridge polytope.
Here we have
$i<j<k<l$ for the square,
$i<j<k$ or $k<j<i$ for the trapezoid, and $i<j<k$ for the hexagon.} \label{fig:faces}
\end{figure}
\begin{proof}
Consider a two-dimensional face of a bridge polytope. By Lemma \ref{lem:edge-swap},
each edge is parallel to a vector of the form $e_i - e_{\ell}$, and has length
an integer multiple of $\sqrt{2}$.
A simple calculation with dot products shows that the
possible angles among the edges are $\frac{\pi}{3}, \frac{\pi}{2}, \frac{2\pi}{3}$:
more precisely,
two vectors $e_i-e_j$ and $e_k-e_l$ (where $i, j, k, l$ are distinct) have angle $\frac{\pi}{2}$;
two vectors $e_i-e_j$ and $e_i-e_k$ have angle $\frac{\pi}{3}$;
and two vectors $e_i-e_j$ and $-e_i+e_k$ have angle $\frac{2\pi}{3}$.
The only possibilities for such a polygon are:
a square,
a trapezoid (with angles $\frac{\pi}{3}, \frac{\pi}{3}, \frac{2\pi}{3}, \frac{2\pi}{3}$),
a regular hexagon (all angles are $\frac{2\pi}{3}$),
an equilateral triangle, or a parallelogram.
In the first three cases, the labels on the edges of the polygons must be as
in Figure \ref{fig:faces}. In the case of the square, it follows from
the rules of bridge decompositions that the intervals $[i,j]$ and $[k,l]$
must be disjoint, and hence without loss of generality $i<j<k<l$.
(E.g. $k$ cannot lie in between $i$ and $j$, because
in order to perform the swap $i$ and $j$, all elements between $i$ and $j$ must
be fixed points.
And we are not allowed to swap fixed points.)
A similar argument explains the ordering on $i,j,k$ for the trapezoid and the
hexagon.
We now argue that it is impossible for a two-dimensional face to be a triangle
or a parallelogram.
Note that if one traverses any cycle in the one-skeleton of a bridge polytope,
the product of the corresponding edge labels must be $1$.
It follows that there cannot be a two-face with an odd number of sides, because the product of an
odd number of transpositions is an odd permutation, and hence is never the identity.
Therefore an equilateral triangle is impossible.
Finally consider the case of a face which is a parallelogram. Its edge labels must
have alternating labels $(ij)$, $(ik)$, $(ij)$, $(ik)$. But
the product $(ij) (ik) (ij) (ik)$ is not equal to the identity permutation.
\end{proof}
\begin{remark}
The same proof shows that any two-dimensional face of a Bruhat interval polytope is a square,
a trapezoid, or a hexagon, as shown
in Figure \ref{fig:faces}.
\end{remark}
\begin{theorem}\label{th:faces-moves}
The three kinds of two-dimensional faces of bridge polytopes correspond to
simple applications of the local moves
for plabic graphs. More specifically, consider
a shortest path $p$ from $\pi_{k,n}$ (or more generally $\pi(J)$)
to $e$ in the one-skeleton of the bridge polytope $\Br_{k,n}$
(or more generally $\Br_J$). Choose a two-dimensional face $F$ such that $p$ traverses
half the sides of $F$, and modify $p$ along $F$, obtaining a new path $p'$ which
goes around the other sides of $F$.
Then the reduced plabic graphs corresponding to $p$ and $p'$ are related by
homotopy and by the local moves for plabic graphs as in Figure \ref{fig:faces-moves}.
\end{theorem}
\begin{proof}
The proof of Theorem \ref{th:faces-moves} is illustrated in Figure \ref{fig:faces-moves}.
The two reduced plabic graphs related by a square face in a bridge polytope
are related by homotopy.
The two plabic graphs related by a trapezoidal face are related by moves of type
(M3). And the two plabic graphs related by a hexagonal face are related by a combination
of moves (M1) and (M3). Note that in the latter case, the dashed edge in Figure
\ref{fig:faces-moves} must be present, or else the plabic graph associated to our bridge
decomposition would not be reduced.
\begin{figure}[h]
\centering
\includegraphics[height=3.5in]{Faces-moves.eps}
\caption{A graphical proof of Theorem \ref{th:faces-moves}.}
\label{fig:faces-moves}
\end{figure}
\end{proof}
\bibliographystyle{alpha}
| {
"timestamp": "2015-01-06T02:10:38",
"yymm": "1501",
"arxiv_id": "1501.00714",
"language": "en",
"url": "https://arxiv.org/abs/1501.00714",
"abstract": "The classical permutohedron Perm is the convex hull of the points (w(1),...,w(n)) in R^n where w ranges over all permutations in the symmetric group. This polytope has many beautiful properties -- for example it provides a way to visualize the weak Bruhat order: if we orient the permutohedron so that the longest permutation w_0 is at the \"top\" and the identity e is at the \"bottom,\" then the one-skeleton of Perm is the Hasse diagram of the weak Bruhat order. Equivalently, the paths from e to w_0 along the edges of Perm are in bijection with the reduced decompositions of w_0. Moreover, the two-dimensional faces of the permutohedron correspond to braid and commuting moves, which by the Tits Lemma, connect any two reduced expressions of w_0.In this note we introduce some polytopes Br(k,n) (which we call bridge polytopes) which provide a positive Grassmannian analogue of the permutohedron. In this setting, BCFW bridge decompositions of reduced plabic graphs play the role of reduced decompositions. We define Br(k,n) and explain how paths along its edges encode BCFW bridge decompositions of the longest element pi(k,n) in the circular Bruhat order. We also show that two-dimensional faces of Br(k,n) correspond to certain local moves for plabic graphs, which by a result of Postnikov [Pos06], connect any two reduced plabic graphs associated to pi(k,n). All of these results can be generalized to the positive parts of Schubert cells. A useful tool in our proofs is the fact that our polytopes are isomorphic to certain Bruhat interval polytopes. Conversely, our results on bridge polytopes allow us to deduce some corollaries about the structure of Bruhat interval polytopes.",
"subjects": "Combinatorics (math.CO); High Energy Physics - Theory (hep-th)",
"title": "A positive Grassmannian analogue of the permutohedron",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9890130560740512,
"lm_q2_score": 0.7154239957834733,
"lm_q1q2_score": 0.707563672458522
} |
https://arxiv.org/abs/0803.1018 | Positroids and Schubert matroids | Postnikov gave a combinatorial description of the cells in a totally-nonnegative Grassmannian. These cells correspond to a special class of matroids called positroid. We prove his conjecture that a positroid is exactly an intersection of permuted Schubert matroids. This leads to a nice combinatorial description of positroids that is easily computable. | \section{Introduction}
A \newword{positroid} is a matroid that can be represented by a $k \times n$ matrix with nonnegative maximal minors. The classical theory of total positivity concerns matrices in which all minors are nonnegative, and this subject was extended by Lusztig \cite{L}.
Lusztig introduced the totally nonnegative variety $G_{\geq 0}$ in an arbitrary reductive group $G$ and the totally nonnegative part $(G/P)_{\geq 0}$ of a real flag variety $(G/P)$. He also conjectured that $(G/P)_{\geq 0}$ is made up of cells, and this was proved by Rietsch \cite{R}.
In this paper, we will restrict our attention to $(Gr_{kn})_{\geq 0}$, the \newword{totally nonnegative Grassmannian}. Then there is a more refined decomposition using matroid strata. Postnikov obtained a relationship between $(Gr_{kn})_{\geq 0}$ and certain planar bicolored graphs, producing a combinatorially explicit cell decomposition of $(Gr_{kn})_{\geq 0}$ \cite{P}. The cells correspond to positroids.
One of the results of \cite{P} is that each cell is an intersection of $(Gr_{kn})_{\geq 0}$ and Schubert cells corresponding to a combinatorial object called the Grassmann necklace. This result implies that each positroid is included in an intersection of cyclically shifted Schubert matroids. We extend this result: each positroid is exactly an intersection of certain cyclically shifted Schubert matroids.
A more detailed formulation of the main result follows.
Let $[n]:=\{1,\cdots,n\}$ and let ${[n] \choose k}$ be the collection of all $k$-element subsets of $[n]$. Fix some $t \in [n]$. We define the ordering $<_t$ on $[n]$ by the total order $t <_t t+1 <_t \cdots <_t n <_t 1 <_t \cdots <_t t-1$. For $I,J \in {[n] \choose k}$, where
$$I=\{i_1, \cdots, i_k \}, \quad i_1 <_t i_2 <_t \cdots <_t i_k$$ and
$$J=\{j_1, \cdots, j_k \}, \quad j_1 <_t j_2 <_t \cdots <_t j_k,$$
we set
$$I \leq_t J \text{ if and only if } i_1 \leq_t j_1, \cdots, i_k \leq_t j_k.$$ For each $I \in {[n] \choose k}$ and $t \in [n]$, we define the \newword{cyclically shifted Schubert matroid} as
$$ SM^{t}_I := \{J \in {[n] \choose k} | I \leq_t J \}. $$
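These definitions are straightforward to compute with; the following Python sketch (ours, purely illustrative) tests $I \leq_t J$ and lists $SM^t_I$ by brute force.
\begin{verbatim}
# Illustrative sketch (ours): the shifted order <=_t and the cyclically
# shifted Schubert matroid SM^t_I, with k-subsets of [n] given as sets.
from itertools import combinations

def leq_t(I, J, t, n):
    key = lambda a: (a - t) % n            # position of a in t <_t ... <_t t-1
    return all(x <= y for x, y in zip(sorted(key(i) for i in I),
                                      sorted(key(j) for j in J)))

def shifted_schubert_matroid(I, t, n):
    k = len(I)
    return {J for J in combinations(range(1, n + 1), k) if leq_t(I, J, t, n)}
\end{verbatim}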
We will show that a matroid $\mathcal{M} \subseteq {[n] \choose k}$ is a positroid if and only if it can be written as $SM^{1}_{I_1} \cap SM^{2}_{I_2} \cap \cdots \cap SM^{n}_{I_n}$ for a Grassmann necklace $\mathcal{I} = (I_1,\cdots,I_n)$, $I_1, \cdots, I_n \in {[n] \choose k}$. Our proof is purely combinatorial.
The paper is organized as follows. In section 2, we go over the basics of matroids and the totally nonnegative Grassmannian. In section 3, we review $\LE$-diagrams and $\LE$-graphs. In section 4, we give the proof of our main result. In section 5, we introduce the upper Grassmann necklace. In section 6, we view lattice path matroids as special cases of positroids. In section 7, we define flag positroids.
\medskip
\textbf{Acknowledgment} I would like to thank my adviser, Alexander Postnikov for introducing me to the field and the problem. I would also like to thank Allen Knutson and Lauren Williams for useful discussions. I would also like to thank Anna de Mier and Tom Lenagan for corrections.
\section{Preliminaries and the Main Result}
We refer readers unfamiliar with the basics in this section to \cite{F} and \cite{P}.
An element in the Grassmannian $Gr_{kn}$ can be understood as a collection of $n$ vectors $v_1, \cdots, v_n \in \field{R}^k$ spanning the space $\field{R}^k$ modulo the simultaneous action of $GL_k$ on the vectors. The vectors $v_i$ are the columns of a $k \times n$-matrix $A$ that represents the element of the Grassmannian. Then an element $V \in Gr_{kn}$ represented by $A$ gives the matroid $\mathcal{M}_V$ whose bases are the $k$-subsets $I \subset [n]$ such that $\Delta_I (A) \not = 0$. Here, $\Delta_I(A)$ denotes the determinant of $A_I$, the $k$ by $k$ submatrix of $A$ with the column set $I$.
Then $Gr_{kn}$ has a subdivision into \newword{matroid strata} $S_{\mathcal{M}}$ labeled by some matroids $\mathcal{M}$:
$$ S_{\mathcal{M}} := \{V \in Gr_{kn} | \mathcal{M}_{V} = \mathcal{M} \}.$$
The elements of the stratum $S_{\mathcal{M}}$ are represented by matrices $A$ such that $\Delta_I (A) \not = 0$ if and only if $I \in \mathcal{M}$.
Let us define the totally nonnegative Grassmannian and its cells.
\begin{definition}[\cite{P}, Definition 3.1] The \newword{totally nonnegative Grassmannian} $Gr^{tnn}_{kn} \subset Gr_{kn}$ is the quotient $Gr^{tnn}_{kn} = GL^{+}_{k} \backslash Mat^{tnn}_{kn}$, where $Mat^{tnn}_{kn}$ is the set of real $k\times n$-matrices $A$ of rank $k$ with nonnegative maximal minors $\Delta_I (A) \geq 0$ and $GL^{+}_k$ is the group of $k \times k$-matrices with positive determinant.
\end{definition}
\begin{definition}[\cite{P}, Definition 3.2] \newword{Totally nonnegative Grassmann cells} $S^{tnn}_{\mathcal{M}}$ in $Gr^{tnn}_{kn}$ are defined as $S^{tnn}_{\mathcal{M}} := S_{\mathcal{M}} \cap Gr^{tnn}_{kn}$. $\mathcal{M}$ is called a \newword{positroid} if the cell $S^{tnn}_{\mathcal{M}}$ is nonempty.
\end{definition}
Note that from above definitions, we get
$$ S^{tnn}_{\mathcal{M}} = \{ GL^{+}_k \bullet A \in Gr^{tnn}_{kn} | \Delta_I (A) >0 \textbf{ for } I \in \mathcal{M}, \Delta_I (A) = 0 \textbf{ for } I \not \in \mathcal{M} \}. $$
In \cite{P}, Postnikov showed a bijection between each cell and a combinatorial object called the Grassmann necklace.
\begin{definition}[\cite{P}, Definition 16.1]
A \newword{Grassmann necklace} is a sequence $\mathcal{I} = (I_1, \cdots, I_n)$ of subsets $I_r \subseteq [n]$ such that:
\begin{itemize}
\item if $i \in I_i$ then $I_{i+1}= (I_i \setminus \{i \}) \cup \{j\}$ for some $j \in [n]$,
\item if $i \not \in I_i$ then $I_{i+1} = I_i$.
\end{itemize}
The indices are taken modulo $n$. In particular, we have $|I_1| = \cdots = |I_n|$.
\end{definition}
An example of a Grassmann necklace would be $I_1 = \{1,2,4\}, I_2 = \{2,4,5\},I_3 = \{3,4,5\},I_4 = \{4,5,2\},I_5 = \{5,1,2\}$. Two of the results in \cite{P} are the following:
\begin{lemma}[\cite{P}, Lemma 16.3]
\label{lem:P}
For a matroid $\mathcal{M} \subseteq {[n] \choose k}$ of rank $k$ on the set $[n]$, let $\mathcal{I}_{\mathcal{M}} = (I_1,\cdots,I_n)$ be the sequence of subsets such that $I_i$ is the minimal member of $\mathcal{M}$ with respect to $\leq_i$. Then $\mathcal{I}_{\mathcal{M}}$ is a Grassmann necklace.
\end{lemma}
\begin{theorem}[\cite{P}, Theorem 17.2]
\label{thm:P}
Let $S^{tnn}_{\mathcal{M}}$ be a nonnegative Grassmann cell, and let $\mathcal{I}_{\mathcal{M}} = (I_1, \cdots, I_n)$ be the Grassmann necklace corresponding to $\mathcal{M}$. Then
$$S^{tnn}_{\mathcal{M}} = \bigcap_{i=1}^{n} \Omega_{I_i}^{i} \cap {Gr}^{tnn}_{kn},$$
where $\Omega_{I_i}^{i}$ is the cyclically shifted Schubert cell, which is the set of elements $V \in Gr_{kn}$ such that $I_i$ is the lexicographically minimal base of $M_V$ with respect to ordering $<_i$ on $[n]$.
\end{theorem}
These results imply that the bases of a positroid are included in an intersection of cyclically shifted Schubert matroids, but they do not imply that the two sets are equal. Postnikov therefore conjectured that each positroid is exactly the intersection of cyclically shifted Schubert matroids. This is what we are going to prove in our paper:
\begin{theorem}
\label{thm:mainO}
$\mathcal{M}$ is a positroid if and only if for some Grassmann necklace $(I_1, \cdots, I_n)$,
$$ \mathcal{M} = \bigcap_{i=1}^{n} SM^i_{I_i}. $$
In other words, a matroid $\mathcal{M}$ with Grassmann necklace $(I_1,\cdots,I_n)$ is a positroid if and only if the following holds: $H \in \mathcal{M}$ if and only if $H \geq_t I_t$ for all $t\in [n]$.
\end{theorem}
\section{Le-diagrams and Le-graphs}
In \cite{P}, Postnikov showed a bijection between positroids and combinatorial objects called $\LE$-diagrams.
\begin{definition}
Fix a partition $\lambda$ that fits inside the rectangle $(n-k)^k$. The boundary of the Young diagram of $\lambda$ gives the lattice path of length $n$ from the upper right corner to the lower left corner of the rectangle $(n-k)^k$. Let's denote this path as the \newword{boundary path}. Label each edge in the path by $1,\cdots,n$ as we go downwards and to the left. Define $I(\lambda)$ as the set of labels of $k$ vertical steps in the path.
Each column and row corresponds to exactly one labeled edge. Let's index the columns and rows with those labels. We will say that a box is at $(i,j)$ if it is on row $i$ and column $j$. A \newword{filling} of $\lambda$ is a diagram of $\lambda$ where each box is either empty or filled with a dot.
\end{definition}
Given a filling $L$ of shape $\lambda$, we define the sets $box(L),filled(L)$ as:
$$box(L) := \{ (i,j) | \text{ there is a box at } (i,j) \text{ in } L \},$$
$$filled(L) := \{ (i,j) | \text{ there is a box filled with a dot at } (i,j) \text{ in } L \}.$$
\begin{definition}[\cite{P}, Definition 6.1]
For a partition $\lambda$, let us define a \newword{$\LE$-diagram} $L$ of shape $\lambda$ as a filling of boxes of the Young diagram of shape $\lambda$ such that, for any three boxes indexed $(i,j),(i',j),(i,j')$, where $i'<i$ and $j'>j$, if boxes on position $(i',j)$ and $(i,j')$ are filled, then the box on $(i,j)$ is also filled. This property is called the $\LE$-property. We will say that a $\LE$-diagram is \newword{full} if every box is filled.
\end{definition}
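The $\LE$-property is easy to test directly from a filling, as in the following Python sketch (ours).
\begin{verbatim}
# Illustrative sketch (ours): check the Le-property of a filling, with boxes
# given as (row, column) pairs in the indexing of the text.  A dot at (i', j)
# with i' < i together with a dot at (i, j') with j' > j forces a dot at (i, j).
def is_le_diagram(boxes, filled):
    for (i, j) in boxes:
        if (i, j) in filled:
            continue
        dot_in_column = any(ip < i and jp == j for (ip, jp) in filled)
        dot_in_row = any(ip == i and jp > j for (ip, jp) in filled)
        if dot_in_column and dot_in_row:
            return False
    return True
\end{verbatim}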
\begin{figure}
\centering
\includegraphics[width=0.3\textwidth]{leh-ex}
\includegraphics[width=0.3\textwidth]{leh-graph}
\caption{Example of a $\LE$-diagram and a $\LE$-graph}
\label{fig:leh}
\end{figure}
Fix a $\LE$-diagram $L$ of shape $\lambda$. For each $(i,j) \in box(L)$, we define the \newword{NW-region} of it as:
$$NW_{(i,j)} := \{(i',j')|i'<i, j'>j\}.$$
If $NW_{(i,j)}$ contains a dot, then due to the $\LE$-property there is a unique dot $(i',j') \in NW_{(i,j)}$ that minimizes $i-i'$ and $j'-j$ at the same time. We will say that $(i',j')$ covers $(i,j)$ and write this as $(i',j') \triangleleft (i,j)$.
\begin{definition}[\cite{P}]
A \newword{$\LE$-graph} is obtained from a $\LE$-diagram in the following way. Place a vertex at the middle of each step in the boundary path of the diagram and mark these vertices by $1,2,\cdots,n$. We will call these vertices the \newword{boundary vertices}. Now for each dot inside the $\LE$-diagram, draw a horizontal line to its right, and vertical line to its bottom until it reaches the boundary of the diagram. Then orient all vertical edges downward and horizontal edges to the left.
\end{definition}
$\LE$-graphs were also used to study TP-basis of positroids in \cite{Talaska}. The source set of the $\LE$-graph is given by $I(\lambda)$ and the sink set is given by $[n] \setminus I(\lambda)$.
\begin{definition}
A \newword{path} in a $\LE$-graph is a directed path that starts at some boundary vertex and ends at some boundary vertex. Given a path $p$, we denote its starting point and end point by $p^s$ and $p^e$. A \newword{VD-family} is a family of paths where no pair of paths share a vertex.
\end{definition}
A dot at $(i,j)$ is a \newword{NW-corner} of a path $p$ if $p$ changes direction at $(i,j)$. For each $(i,j) \in filled(L)$, there is a path that starts at the boundary vertex $i$, ends at the boundary vertex $j$ and has the dot at $(i,j)$ as a NW-corner. We call such a path a \newword{hook path of $(i,j)$}.
Given a VD-family of paths $\{p_1,\cdots,p_t\}$, we say that this family represents $J = (I(\lambda) \setminus \{p_1^s,\cdots,p_t^s\}) \cup \{p_1^e,\cdots,p_t^e\}$. The empty family is also a VD-family. The following proposition follows as a corollary of (\cite{P}, Theorem 6.5).
\begin{proposition}[\cite{P}]
Fix a positroid $\mathcal{M}_{L}$ that corresponds to a $\LE$-diagram $L$. Then $J \in \mathcal{M}_{L}$ if and only if $J$ is represented by a VD-family in the $\LE$-graph of $L$.
\end{proposition}
Each $\LE$-diagram corresponds to a positroid, and hence a Grassmann necklace. Given a $\LE$-diagram $L$, let's try to find out its corresponding Grassmann necklace $\mathcal{I} = (I_1,\cdots,I_n)$ directly from the diagram. It is obvious that $I_1 = I(\lambda)$.
For each $(x,y) \in box(L)$, we can get a maximal chain $(x_t,y_t) \triangleleft \cdots \triangleleft (x_1,y_1)$ such that $(x_1,y_1)$ is the unique dot in $\{(i,j)| i \leq x, j \geq y\}$ that minimizes $x-i$ and $j-y$ at the same time. We will call this the \newword{chain rooted at $(x,y)$}. Then the collection of hook paths at $(x_r,y_r)$ for $1 \leq r \leq t$ is a VD-family. So we get $J_{(x,y)} := I_{\lambda} \setminus \{x_1,\cdots,x_t\} \cup \{y_1,\cdots,y_t\} \in \mathcal{M}_L$. In Figure~\ref{fig:leh}, chain rooted at $(5,9)$ is given by $(1,10) \triangleleft (5,9)$. A chain rooted at $(3,9)$ is given by $(1,10)$.
\begin{proposition}
Fix $j \in [n] \setminus \{1\}$ and a $\LE$-diagram $L$ of shape $\lambda$, and let $\mathcal{I} = (I_1,\cdots,I_n)$ be the Grassmann necklace of $\mathcal{M}_{L}$. If $j \not \in I(\lambda)$, let $(x,y)$ be the box adjacent to the step labeled $j$ in the boundary path of $\lambda$. If $j \in I(\lambda)$, let $(x,y)$ be the box directly above that box. Then $I_j = J_{(x,y)}$.
\end{proposition}
\begin{proof}
Let $\mathcal{F}$ be a VD-family that represents $I_j$. Then $\mathcal{F}$ only contains paths that satisfy $p^s < j \leq p^e$: if some $p \in \mathcal{F}$ did not, then $\mathcal{F} \setminus \{p\}$ would represent a set $J$ with $J <_j I_j$, contradicting the minimality of $I_j$. So any path $p \in \mathcal{F}$ has to pass through a dot in the region $\{(i,j)| i \leq x, j \geq y\}$.
Let the chain rooted at $(x,y)$ be $(i_t,j_t) \triangleleft \cdots \triangleleft (i_1,j_1)$. For each $1 \leq r \leq t$, denote the hook path at $(i_r,j_r)$ by $p_r$. Then $\mathcal{F}$ must contain $p_1$. If not, then $I_j \not \leq_j I(\lambda) \setminus \{i_1\} \cup \{j_1\}$ because ${p}^s \leq i_1,{p}^e \geq j_1$ for all $p \in \mathcal{F}$. If $p_1,\cdots,p_r \in \mathcal{F}$, then we also have $p_{r+1} \in \mathcal{F}$ because if not, we get $I_j \not \leq_j I(\lambda) \setminus \{i_1,\cdots,i_{r+1}\} \cup \{j_1,\cdots,j_{r+1}\}$ due to the fact that for any path $p \in \mathcal{F} \setminus \{p_1,\cdots,p_r\}$, we have ${p}^s \leq i_{r+1}$ and ${p}^e \geq j_{r+1}$. As a result, we get $\mathcal{F} = \{p_1,\cdots,p_t\}$ and $I_j = J_{(x,y)}$.
\end{proof}
Let's look at an example. In the $\LE$-diagram of Figure~\ref{fig:leh}, $I_4$ is given by $J_{(3,6)}$. Chain rooted at $(3,6)$ is given by $(1,10) \triangleleft (3,6) $. So $I_4 = I_1 \setminus \{1,3\} \cup \{10,6\} = \{4,5,6,8,10\}$. $I_9$ is given by $J_{(8,9)}$. Chain rooted at $(8,9)$ is given by $(5,10) \triangleleft (8,9)$. So $I_9 = I_1 \setminus \{5,8\} \cup \{9,10\} = \{1,3,4,9,10\}$.
\section{Proof of the main theorem}
In this section, we will prove the main theorem by showing that for each Grassmann necklace $\mathcal{I} = (I_1,\cdots,I_n)$, we have $\bigcap_{i=1}^{n} SM^i_{I_i} \subseteq \mathcal{M}_{\mathcal{I}}$. To do this, we need to show that each $J \in \bigcap_{i=1}^{n} SM^i_{I_i}$ can be represented by a VD-family inside the $\LE$-graph of $\mathcal{M}_{\mathcal{I}}$. In order to accomplish this, we will start from a full $\LE$-diagram and use induction by increasing the number of empty boxes.
\begin{lemma}
Let $L$ be a full $\LE$-diagram of shape $\lambda$. Then $\mathcal{M}_{L} = SM^1_{I(\lambda)}$.
\end{lemma}
\begin{proof}
We need to show that for all $J \in SM^1_{I(\lambda)}$, we have $J \in \mathcal{M}_{L}$. Due to the definition of $\leq_1$, there is a unique bijection $\phi : I(\lambda) \setminus J \rightarrow J \setminus I(\lambda)$ such that for any $a,b \in I(\lambda) \setminus J$, the two intervals $[a,\phi(a)]$ and $[b,\phi(b)]$ do not cross, meaning that they are either disjoint or nested.
\end{proof}
Given any $\LE$-diagram $L_{\mathcal{I}}$ with associated Grassmann necklace $\mathcal{I} = (I_1,\cdots,I_n)$, we want to add a dot, to obtain a new $\LE$-diagram $L_{\mathcal{I}'}$ such that for some $\alpha \in [n]$, we have $|I_\alpha \setminus {I_\alpha}'|=1$ and ${I_i}' = I_i$ whenever $i \not = \alpha$.
The \newword{boundary strip} of $L_{\mathcal{I}}$ is the set of boxes that share at least one vertex with the boundary path. Let's first assume that there exists an empty box in the boundary strip of $L_{\mathcal{I}}$. Consider an empty box in the strip such that there is no empty box to its right or below it. Then adding a dot to this box will change exactly one element of the Grassmann necklace. So we only need to consider the case when all the boxes of the boundary strip are filled. We define the \newword{middle path} of $L_\mathcal{I}$ to be a lattice path inside the diagram such that:
\begin{enumerate}
\item all boxes between the middle path and the boundary path are filled with dots,
\item the corner boxes of the upper region are empty. The \newword{upper region} is the diagram obtained by taking the boxes above or to the left of the middle path. A box is a \newword{corner box} of a diagram if there are no boxes to its right or below it.
\end{enumerate}
Then putting a dot into any corner box of the upper region will work, since the newly added dot will affect only one element of the Grassmann necklace. An example of a middle path is shown as a red line in Figure~\ref{fig:leh}.
\begin{proposition}
Given any Grassmann necklace $\mathcal{I}=(I_1, \cdots, I_n)$, we have $\mathcal{M}_{\mathcal{I}}=\bigcap_{i=1}^{n} SM^i_{I_i}.$
\end{proposition}
\begin{proof}
We will prove the proposition by induction on $m$, the number of empty boxes inside the $\LE$-diagram $L_{\mathcal{I}}$ of $\mathcal{M}_{\mathcal{I}}$. When $m=0$, this is the full $\LE$-diagram case. So assume for the sake of induction that we know the result for $\LE$-diagrams having $<m$ empty boxes.
Use the construction above to obtain $L_{\mathcal{I}'}$, where $\mathcal{I}'=({I_1}',\cdots,{I_n}')$ and there exists $\alpha \in [n]$ such that ${I_i}' = I_i$ for all $i \not = \alpha$ and $|I_\alpha \setminus {I_\alpha}'|=1$. The induction hypothesis tells us that $\mathcal{M}_{\mathcal{I}'} = \bigcap_{i=1}^{n} SM^i_{{I_i}'}$. It is enough to show $\mathcal{M}_{\mathcal{I}'} \setminus \mathcal{M}_{\mathcal{I}} \subset {SM^{\alpha}_{I_\alpha'}} \setminus {SM^{\alpha}_{I_\alpha}}$.
Let $(w_{q+r},z_{q+r}) \triangleleft \cdots \triangleleft (w_q,z_q) \triangleleft \cdots \triangleleft (w_{1}, z_{1})$ be the chain representing $I_{\alpha}'$ in $L_{\mathcal{I}'}$, such that $(w_q,z_q)$ is the newly added dot going from $L_{\mathcal{I}}$ to $L_{\mathcal{I}'}$. We have $(w_a,z_b) \in filled(L_{\mathcal{I}'})$ for $1 \leq a,b \leq q$. Any VD-family $\mathcal{F}_J$ representing some $J \in \mathcal{M}_{\mathcal{I}'} \setminus \mathcal{M}_{\mathcal{I}}$ should contain a path in which $(w_q,z_q)$ is a NW-corner.
In $\mathcal{F}_J$, denote the path going through $(w_q,z_q)$ by $p_q$. If there is no path in $\mathcal{F}_J$ that passes $(w_{q-1},z_{q-1})$, we can perturb the path $p_q$ to go through the points $(w_q,z_{q-1})$, $(w_{q-1},z_{q-1})$, $(w_{q-1},z_q)$ instead of going through $(w_q,z_q)$. So there must be a path $p_{q-1} \in \mathcal{F}_J$ that passes $(w_{q-1},z_{q-1})$. Since $(w_q,z_q)$ is a NW-corner of $p_q$, $(w_{q-1},z_{q-1})$ is also a NW-corner of $p_{q-1}$. Repeating this argument, we get $p_q,\cdots,p_1 \in \mathcal{F}_J$ each having $(w_q,z_q),\cdots,(w_1,z_1)$ as a NW-corner.
Let $(x_t,y_t) \triangleleft \cdots \triangleleft (x_1,y_1)$ be the chain rooted at $(w_q,z_q)$ in $L_{\mathcal{I}}$. Then
$$(x_t,y_t) \triangleleft \cdots \triangleleft (x_1,y_1) \triangleleft (w_{q-1},z_{q-1}) \triangleleft \cdots \triangleleft (w_{1}, z_{1})$$ represents $I_\alpha$ in $L_{\mathcal{I}}$. We have $t \geq r$ due to the $\LE$-property. We want to show that $J \not \geq_{\alpha} I_\alpha$.
If $p_q^e <_\alpha y_1$ or $p_q^s >_\alpha x_1$, then we have $J \not \geq_\alpha I_\alpha$ and we are done. So let's assume $p_q^e \geq_\alpha y_1$ and $p_q^s \leq_\alpha x_1$. If there is no path going through $(x_1,y_1)$ in $\mathcal{F}_J$, the path $p_q$ can be slightly changed so it goes through $(x_1,y_1)$ and this path cannot have $(w_q,z_q)$ as its NW-corner. So there must be a path $p_{q+1}$ in $\mathcal{F}_J$ that passes through $(x_1,y_1)$.
Due to similar reasons, we only need to consider the case when $p_{q+1}^e \geq_\alpha y_2$ and $p_{q+1}^s \leq_\alpha x_2$. If there is no path going through $(x_2,y_2)$ in $\mathcal{F}_J$, the path $p_{q+1}$ can be slightly changed so it goes through $(x_2,y_2)$. This path cannot pass $(x_1,y_1)$, since we have $x_2 <_\alpha x_1$ and $y_2 >_\alpha y_1$. So there must be a path $p_{q+2} \in \mathcal{F}_J$ that passes through $(x_2,y_2)$. Repeating this argument, we get $p_{q+1},\cdots,p_{q+t} \in \mathcal{F}_J$. Then $\{p_{q+t}^e,\cdots,p_1^e\} \subset J$ tells us that $J \not \geq_{\alpha} I_{\alpha}$. (The reason we do this separately from the previous paragraph is because one of $y_1= z_q$ and $x_1 = w_q$ might be true.)
So we have shown $\mathcal{M}_{\mathcal{I}'} \setminus \mathcal{M}_{\mathcal{I}} \subset {SM}^{\alpha}_{I_\alpha'} \setminus {SM^{\alpha}_{I_\alpha}}$, and we are finished.
\end{proof}
Let's look at an example of using the main theorem. Let $\mathcal{M}$ be a positroid whose Grassmann necklace is given by:
$$I_1 = \{1,2,4\},I_2 = \{2,4,5\},I_3 = \{3,4,5\},I_4 = \{4,5,2\},I_5 = \{5,1,2\}.$$
Our main theorem tells us that:
$$\mathcal{M} = \{H | H \geq_1 I_1, H\geq_2 I_2, \cdots, H\geq_5 I_5 \}$$
$$=\{ \{1,2,4\} , \{1,2,5\}, \{1,3,4\}, \{1,3,5\}, \{2,4,5\}, \{3,4,5\} \}.$$
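The membership test in the main theorem is easy to carry out by computer. The following short Python sketch (our own illustration, not part of the proofs) implements the shifted Gale orders $\geq_i$ by sorting with respect to the cyclic order starting at $i$; on the necklace above it reproduces exactly the six bases listed.
\begin{verbatim}
from itertools import combinations

def geq_i(H, I, i, n):
    # H >=_i I: sort both sets in the cyclic order starting at i, compare termwise
    key = lambda x: (x - i) % n
    return all(key(h) >= key(j)
               for h, j in zip(sorted(H, key=key), sorted(I, key=key)))

def positroid_from_necklace(necklace, n, k):
    # right-hand side of the main theorem: all H with H >=_i I_i for every i
    return [H for H in combinations(range(1, n + 1), k)
            if all(geq_i(H, necklace[i - 1], i, n) for i in range(1, n + 1))]

necklace = [{1, 2, 4}, {2, 4, 5}, {3, 4, 5}, {4, 5, 2}, {5, 1, 2}]
print(positroid_from_necklace(necklace, 5, 3))
# [(1, 2, 4), (1, 2, 5), (1, 3, 4), (1, 3, 5), (2, 4, 5), (3, 4, 5)]
\end{verbatim}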
\section{Decorated permutations and the Upper Grassmann necklace}
In this section we will show that a positroid is also an intersection of cyclically shifted dual Schubert matroids.
\begin{definition}[\cite{P}, Definition 13.3]
A decorated permutation $\pi^{:} = (\pi, col)$ is a permutation $\pi \in S_n$ together with a coloring function $col$ from the set of fixed points $\{i | \pi(i) = i\}$ to $\{1,-1\}$. That is, a decorated permutation is a permutation with fixed points colored in two colors.
\end{definition}
It is easy to see the bijection between necklaces and decorated permutations. To go from a Grassmann necklace $\mathcal{I}$ to a decorated permutation $\pi^{:}=(\pi,col)$,
\begin{itemize}
\item if $I_{i+1} = (I_i \backslash \{i\}) \cup \{j\}$, $j \not = i$, then $\pi(i)=j$,
\item if $I_{i+1} = I_i$ and $i \not \in I_i$ then $\pi(i)=i, col(i)=1$,
\item if $I_{i+1} = I_i$ and $i \in I_i$ then $\pi(i)=i, col(i)=-1$.
\end{itemize}
To go from a decorated permutation $\pi^{:}=(\pi,col)$ to a Grassmann necklace $\mathcal{I}$,
$$I_i = \{ j \in [n] | j<_i \pi^{-1}(j) \textbf{ or } (\pi(j)=j \textbf{ and } col(j)=-1) \}.$$
Let's look at an example. For decorated permutation $\pi^{:}$ with $\pi = 81425736$ and $col(5)=1$, we get $I_1 = \{1,2,3,6\},I_2 = \{2,3,6,8\},I_3 = \{3,6,8,1\},I_4 = \{4,6,8,1\},I_5 = \{6,8,1,2\}, I_6 = \{6,8,1,2\}, I_7 = \{7,8,1,2\}, I_8 = \{8,1,2,3\}$.
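This conversion is also easy to automate; the sketch below (a Python illustration of ours, with $\pi$ stored as a dictionary) recovers the necklace of the example above from $\pi^{:}$.
\begin{verbatim}
def necklace_from_decorated_perm(pi, col, n):
    # pi: dict j -> pi(j); col: colors of the fixed points
    inv = {pi[j]: j for j in pi}
    lt = lambda i, a, b: (a - i) % n < (b - i) % n      # a <_i b
    return [sorted(j for j in range(1, n + 1)
                   if (pi[j] != j and lt(i, j, inv[j]))
                   or (pi[j] == j and col.get(j) == -1))
            for i in range(1, n + 1)]

pi = {1: 8, 2: 1, 3: 4, 4: 2, 5: 5, 6: 7, 7: 3, 8: 6}   # pi = 81425736
print(necklace_from_decorated_perm(pi, {5: 1}, 8))
# [[1,2,3,6], [2,3,6,8], [1,3,6,8], [1,4,6,8],
#  [1,2,6,8], [1,2,6,8], [1,2,7,8], [1,2,3,8]]
\end{verbatim}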
\begin{definition}
For $I=(i_1, \cdots, i_k) \in {[n] \choose k}$, the \newword{cyclically shifted dual Schubert matroid} $\tilde{SM}^{i}_I$ consists of bases $H=(j_1, \cdots, j_k)$ such that $I \geq_i H$.
\end{definition}
Fix a decorated permutation $\pi^{:}=(\pi,col)$. Let $\mathcal{I}_{\pi^{:}} = (I_1,\cdots,I_n)$ be the corresponding Grassmann necklace and $\mathcal{M}_{\pi^{:}}$ the corresponding positroid.
\begin{lemma}
\label{lem:maxM}
For any $H \in \mathcal{M}_{\pi^{:}}$, we have $H \leq_i \pi^{-1} (I_i)$ for all $i \in [n]$.
\end{lemma}
\begin{proof}
We may assume that $\pi$ has no fixed points, since these correspond to loops or coloops of $\mathcal{M}_{\pi^{:}}$. We will give the proof only for $i=1$, since the other cases are similar. Denote $I_1 = \{i_1,\cdots,i_k\}$ where $i_1,\cdots,i_k$ are labeled in a way that satisfies $\pi^{-1}(i_1) < \cdots < \pi^{-1}(i_k)$.
Denote elements of $H$ by $h_1 < \cdots < h_k$. Let $j$ be the biggest element of $[k]$ such that:
\begin{enumerate}
\item $h_t \leq \pi^{-1}(i_t)$ for all $t \in (j,k]$ and
\item $h_j > \pi^{-1}(i_j)$.
\end{enumerate}
Since $h_j \in (\pi^{-1}(i_j), \pi^{-1}(i_{j+1})]$, we have $\{i_1,\cdots,i_j\} \subset I_{h_j}$. We get $|H \cap [1,h_j)| < |I_{h_j} \cap [1,h_j)|$, but this contradicts $H \geq_{h_j} I_{h_j}$. Hence there cannot be a $j \in [k]$ such that $h_j > \pi^{-1}(i_j)$. This tells us that $H \leq_1 \{\pi^{-1}(i_1),\cdots,\pi^{-1}(i_k) \} = \pi^{-1}(I_1)$.
\end{proof}
The collection $(J_1:=\pi^{-1}(I_1),\cdots,J_n:=\pi^{-1}(I_n))$ forms a necklace in the sense that $J_{i+1} = J_i \setminus \{\pi^{-1}(i)\} \cup \{i\}$ except for $i$ such that $\pi(i)=i$. We will call this the \newword{upper Grassmann necklace} of $\pi$.
To go from a decorated permutation $\pi^{:}=(\pi,col)$ to an upper Grassmann necklace $\mathcal{J}$,
$$J_r = \{ i \in [n] | \pi(i) <_r i \textbf{ or } (\pi(i)=i \textbf{ and } col(i)=-1) \}.$$
Define $\tilde{\mathcal{M}}_{\pi^{:}}$ as:
$$ \tilde{\mathcal{M}}_{\pi^{:}} = \bigcap_{i=1}^{n} \tilde{SM}_{J_i}^{i}.$$
Then Lemma~\ref{lem:maxM} tells us that $\mathcal{M}_{\pi^{:}} \subseteq \tilde{\mathcal{M}}_{\pi^{:}}$.
The proof of the following lemma is similar to Lemma~\ref{lem:maxM}.
\begin{lemma}
For any $H \in \tilde{\mathcal{M}}_{\pi^{:}}$, we have $H \geq_i \pi(J_i)=I_i$ for all $i \in [n]$.
\end{lemma}
So we obtain the following result:
\begin{theorem}
\label{thm:duality}
Pick a decorated permutation $\pi^{:}=(\pi,col)$. Let $\mathcal{I}=(I_1,\cdots,I_n)$ and $\mathcal{J}=(J_1,\cdots,J_n)$ be the corresponding Grassmann necklace and the upper Grassmann necklace. Then $J_i = \pi^{-1}(I_i)$ for all $i \in [n]$. We also have the equality:
$$ \bigcap_{i=1}^{n} SM^i_{I_i} = \bigcap_{i=1}^{n} \tilde{SM}_{J_i}^{i}.$$
\end{theorem}
\section{Lattice Path Matroids}
Lattice path matroids were defined in \cite{BMN}. These are simple cases of positroids. In this section we will show a way to get the decorated permutation of a lattice path matroid.
\begin{definition}
Pick $I,J \in {[n] \choose k}$ such that $I \leq J$. The \newword{lattice path matroid} is defined as:
$$ LP_{I,J} := \{H| H \in {[n] \choose k}, I \leq H \leq J \} = SM_I \cap \tilde{SM}_J $$
\end{definition}
Since $I$ and $J$ correspond to two lattice paths in an $(n-k)$-by-$k$ grid, $LP_{I,J}$ consists of all the lattice paths between them.
\begin{lemma}
\label{lem:pathposi}
A lattice path matroid is a positroid.
\end{lemma}
\begin{proof}
Pick $I=\{a_1 < \cdots < a_k\}, J=\{b_1 < \cdots < b_k\}$ such that $I \leq J$. Let's construct a $k$-by-$n$ matrix such that $\Delta_H = 0$ for all $H \in {[n] \choose k} \setminus LP_{I,J}$ and $\Delta_H >0$ for all $H \in LP_{I,J}$.
Let $V=(v_{ij})_{i,j=1,1}^{k,n}$ be a $k$-by-$n$ Vandermonde matrix. Set $v_{ij}=0$ for all $j \not \in [a_i,b_i]$. So $V$ would look like:
$$ v_{ij} = \left\{ \begin{array}{ll}
{x_i}^{j-1} & \mbox{if $a_i \leq j \leq b_i$,}\\
0 & \mbox{otherwise.} \end{array} \right. $$
Assign values to the variables $x_1,\cdots,x_k$ such that $x_1 >1$ and $x_{i+1} = x_i^{k^2}$ for all $i \in [k-1]$. Let $V_{[1..i],[c_1,\cdots,c_i]}$ denote the submatrix of $V$ obtained by taking rows $1$ to $i$ and columns $c_1,\cdots,c_i$. With this choice of the $x_i$'s, the product of the diagonal entries dominates every other term in the expansion of the determinant, so $\Delta_{H}>0$ if and only if $V_{[1..k],H}$ has nonzero diagonal entries, which happens if and only if $H \in LP_{I,J}$.
\end{proof}
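As a quick sanity check of this construction (not needed for the proof), the following Python sketch builds the restricted Vandermonde matrix for one small instance with exact integer arithmetic and confirms that the positive maximal minors are exactly the bases of $LP_{I,J}$; the instance and the value $x_1=2$ are our own choices.
\begin{verbatim}
from itertools import combinations

def det(M):
    # Laplace expansion along the first row; exact for small integer matrices
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** c * M[0][c] * det([r[:c] + r[c + 1:] for r in M[1:]])
               for c in range(len(M)))

def lattice_path_matroid(I, J, n, k):
    leq = lambda A, B: all(a <= b for a, b in zip(sorted(A), sorted(B)))
    return {H for H in combinations(range(1, n + 1), k) if leq(I, H) and leq(H, J)}

def witness_matrix(I, J, n, k, x1=2):
    # restricted Vandermonde matrix of the proof: row i is supported on [a_i, b_i]
    a, b = sorted(I), sorted(J)
    xs = [x1 ** ((k * k) ** i) for i in range(k)]        # x_{i+1} = x_i^{k^2}
    return [[xs[i] ** (j - 1) if a[i] <= j <= b[i] else 0
             for j in range(1, n + 1)] for i in range(k)]

n, k, I, J = 4, 2, (1, 2), (2, 4)
V, LP = witness_matrix(I, J, n, k), lattice_path_matroid(I, J, n, k)
for H in combinations(range(1, n + 1), k):
    minor = det([[V[i][j - 1] for j in H] for i in range(k)])
    assert (minor > 0) == (H in LP)
print("all maximal minors agree with LP_{I,J} for this instance")
\end{verbatim}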
Let's try to find $\pi^{:}$ that corresponds to $LP_{I,J}$. Denote $I=\{i_1 < \cdots <i_k\}$ and $J=\{j_1 < \cdots < j_k\}$. If $i_t=j_t$ for some $t \in [k]$, this is a coloop in the matroid. The permutation $\pi$ we are trying to find should satisfy:
\begin{itemize}
\item $I = \{ i \in [n] | i \leq \pi^{-1}(i) \}$,
\item $J = \{ i \in [n] | \pi(i) \leq i \}$ and
\item $\pi(J)=I$.
\end{itemize}
If $\pi$ satisfies the above conditions, then $\mathcal{M}_{\pi^{:}}$ is contained in $LP_{I,J}$. So $LP_{I,J}$ is the largest positroid, under inclusion, among those whose decorated permutations satisfy the above properties. The following lemma is an immediate corollary of (\cite{P}, Theorem~17.8).
\begin{lemma}
If we have $a <_i b <_i \pi(a) <_i \pi(b) <_i a$, then $\mathcal{M}_{\pi^{:}} \subset \mathcal{M}_{\mu^{:}}$, where $\mu$ is obtained from $\pi$ by switching $\pi(a)$ and $\pi(b)$ (i.e. $\mu(a) = \pi(b), \mu(b) = \pi(a)$).
\end{lemma}
Combining this with Lemma~\ref{lem:pathposi}, we get the following result:
\begin{theorem}
Choose any $I=\{i_1 < \cdots < i_k\}$ and $J=\{j_1 < \cdots < j_k\} \in \ {[n] \choose k}$ such that $I \leq J$. Denote $[n] \setminus J = \{d_1 < \cdots <d_{n-k}\}$ and $[n] \setminus I = \{c_1 < \cdots< c_{n-k}\}$. Then $LP_{I,J}$ is a positroid and its decorated permutation $\pi^{:}=(\pi,col)$ is given by:
$$\mbox{$\pi(j_r) = i_r$ for all $r \in [k]$},$$
$$\mbox{$\pi(d_r) = c_r$ for all $r \in [n-k]$},$$
$$ \mbox{If $\pi(t)=t$ then $col(t)$} = \left\{ \begin{array}{ll}
-1 & \mbox{if $t \in J$,}\\
1 & \mbox{otherwise.} \end{array} \right. $$
\end{theorem}
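Concretely, the permutation of the theorem is immediate to write down. The sketch below (our own Python illustration) builds $\pi^{:}$ from $I$ and $J$ and checks the three properties listed above on one fixed-point-free instance of our choosing.
\begin{verbatim}
def lp_decorated_perm(I, J, n):
    # decorated permutation of LP_{I,J} as given by the theorem (a sketch)
    I, J = sorted(I), sorted(J)
    c = [x for x in range(1, n + 1) if x not in I]      # [n] \ I
    d = [x for x in range(1, n + 1) if x not in J]      # [n] \ J
    pi = dict(zip(J, I))                                # pi(j_r) = i_r
    pi.update(zip(d, c))                                # pi(d_r) = c_r
    col = {t: (-1 if t in J else 1) for t in pi if pi[t] == t}
    return pi, col

n, I, J = 8, [1, 3, 5], [4, 6, 8]
pi, col = lp_decorated_perm(I, J, n)
inv = {pi[x]: x for x in pi}
# the three properties that pi is required to satisfy (no fixed points here)
assert {i for i in range(1, n + 1) if i <= inv[i]} == set(I)
assert {i for i in range(1, n + 1) if pi[i] <= i} == set(J)
assert {pi[j] for j in J} == set(I)
print(pi)   # {4: 1, 6: 3, 8: 5, 1: 2, 2: 4, 3: 6, 5: 7, 7: 8}
\end{verbatim}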
\section{Further Remark}
Positroids correspond to the matroid strata of the nonnegative part of the Grassmannian. Flag matroids correspond to the matroid strata of a flag variety.
\begin{definition}
A \newword{flag} $F$ is a strictly increasing sequence
$$ F^1 \subset F^2 \subset \cdots \subset F^m $$
of finite sets. Denote by $k_i$ the cardinality of the set $F^i$. We write $F=(F^1,\cdots,F^m)$. The set $F^i$ is called the $i$-th constituent of $F$.
\end{definition}
\begin{theorem}[\cite{BGW}]
A collection $\mathcal{F}$ of flags of rank $(k_1,\cdots,k_m)$ is a \newword{flag matroid} if and only if:
\begin{enumerate}
\item For all $i \in [m]$, the collection $M_i$ of the sets $F^i$, $F \in \mathcal{F}$, forms a matroid.
\item For every $w \in S_n$, the $\leq_w$-minimal bases of the $M_i$'s form a flag. If this holds, we say that the $M_i$'s are concordant.
\item Every flag
$$B_1 \subset \cdots \subset B_m$$
such that $B_i$ is a basis of $M_i$ for $i=1,\cdots,m$ belongs to $\mathcal{F}$.
\end{enumerate}
\end{theorem}
\begin{definition}
A \newword{flag positroid} is a flag matroid in which all constituents are positroids.
\end{definition}
It would be interesting to determine conditions on two decorated permutations under which the corresponding positroids are concordant.
| {
"timestamp": "2010-10-12T02:04:13",
"yymm": "0803",
"arxiv_id": "0803.1018",
"language": "en",
"url": "https://arxiv.org/abs/0803.1018",
"abstract": "Postnikov gave a combinatorial description of the cells in a totally-nonnegative Grassmannian. These cells correspond to a special class of matroids called positroid. We prove his conjecture that a positroid is exactly an intersection of permuted Schubert matroids. This leads to a nice combinatorial description of positroids that is easily computable.",
"subjects": "Combinatorics (math.CO)",
"title": "Positroids and Schubert matroids",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9890130560740512,
"lm_q2_score": 0.7154239957834733,
"lm_q1q2_score": 0.707563672458522
} |
https://arxiv.org/abs/1211.2455 | Bounds For The Tail Distribution Of The Sum Of Digits Of Prime Numbers | Let s_q(n) denote the base q sum of digits function, which for n<x, is centered around (q-1)/2 log_q x. In Drmota, Mauduit and Rivat's 2009 paper, they look at sum of digits of prime numbers, and provide asymptotics for the size of the set {p<x, p prime : s_q(p)=alpha(q-1)log_q x} where alpha lies in a tight range around 1/2. In this paper, we examine the tails of this distribution, and provide the lower bound |{p < x, p prime : s_q(p)>alpha(q-1)log_q x}| >>x^{2(1-alpha)}e^{-c(log x)^{1/2+epsilon}} for 1/2<alpha<0.7375. To attain this lower bound, we note that the multinomial distribution is sharply peaked, and apply results regarding primes in short intervals. This proves that there are infinitely many primes with more than twice as many ones than zeros in their binary expansion. | \section{Introduction}
A prime number which can be written in the form $2^{n}-1$ will have
only ones in its binary expansion, and is called a Mersenne prime.
The first few such primes are $3$, $7$, $31$, and $127$. Currently,
the largest known prime is of this form, and it has over $12.9$ million
digits. These numbers have been looked at for centuries, and date
back to Euclid who was interested in them for their connection with
perfect numbers, something that we will not explore here. It is a
long standing conjecture that there are infinitely many Mersenne primes,
and currently this seems entirely out of reach of modern analytic
methods. However, we may weaken the condition and ask about primes
with a large number of $1$'s in their base $2$ expansion. With
this in mind, we ask the following motivational question:
\begin{problem}
\label{Main Question}Are there infinitely many primes with twice
as many ones than zeros in their binary expansion?
\end{problem}
If we let $s_{q}(n)$ denote the sum of the digits of $n$ written
in base $q$, then we are asking if there are infinitely many primes
$p$ which satisfy $s_{2}(p)\geq\frac{2}{3}\log_{2}p$. Moving to
a slightly more general setting, we will look at the sum of digits
base $q$ rather than just the binary case. The average of $s_{q}(n)$
is roughly $\frac{q-1}{2}$ multiplied by the number of digits, so
we have the asymptotic
\[
\frac{1}{x}\sum_{n\leq x}s_{q}(n)\sim\frac{q-1}{2}\log_{q}x.
\]
However, things become much more complicated when we restrict ourselves
to the prime numbers. In 1946 Copeland and Erdos \cite{ErdosCopeland}
proved that
\[
\frac{1}{\pi(x)}\sum_{p\leq x}s_{q}(p)\sim\frac{q-1}{2}\log_{q}(x)
\]
where $\pi(x)=\sum_{p\leq x}1$ is the prime counting function, and
a more precise error term was subsequently given by Shiokawa \cite{Shiokawa}.
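These averages are easy to observe numerically. The following Python sketch (our own illustration, using a simple sieve and a value of $x$ far below the asymptotic regime) compares the empirical averages over all integers and over primes below $10^{6}$ with $\frac{q-1}{2}\log_{q}x$.
\begin{verbatim}
from math import log

def digit_sum(n, q):
    total = 0
    while n:
        total, n = total + n % q, n // q
    return total

def primes_below(x):
    sieve = bytearray([1]) * x
    sieve[:2] = b"\x00\x00"
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, x, p)))
    return [n for n in range(2, x) if sieve[n]]

q, x = 2, 10 ** 6
avg_all = sum(digit_sum(n, q) for n in range(1, x)) / (x - 1)
ps = primes_below(x)
avg_primes = sum(digit_sum(p, q) for p in ps) / len(ps)
print(round(avg_all, 2), round(avg_primes, 2), round((q - 1) / 2 * log(x, q), 2))
# compare with ((q-1)/2) log_q x (about 9.97 here); the agreement improves as x grows
\end{verbatim}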
In 2009, Drmota, Mauduit and Rivat \cite{DrmotaMauduitRivat} gave
exact asymptotics for the set
\[
\left\{ p\leq x,\ p\ \text{prime}:\ s_{q}(p)=\alpha\left(q-1\right)\log_{q}x\right\}
\]
where $\alpha$ lies in the range
\[
\alpha\in\left(\frac{1}{2}-K\frac{\left(\log\log x\right)^{\frac{1}{2}-\epsilon}}{\sqrt{\log x}},\ \frac{1}{2}+K\frac{\left(\log\log x\right)^{\frac{1}{2}-\epsilon}}{\sqrt{\log x}}\right),
\]
and is chosen so that $\alpha\left(q-1\right)\log_{q}x$ is an integer
which avoids certain congruence conditions. However, these results
don't allow us to make any conclusions about problem \ref{Main Question}.
In \cite{DrmotaMauduitRivat} they also asked about finding non-trivial
bounds for the sum $\sum_{p\leq x}2^{s_{q}(p)}$, as this would yield
results regarding the tail distribution of the sum of digits of primes.
That is, lower bounds for the size of sets of primes of the form
\[
\left\{ p\leq x,\ p\ \text{prime}:\ s_{q}(p)\geq\alpha(q-1)\log_{q}x\right\}
\]
where $\alpha>\frac{1}{2}$. These are exactly the type of bounds
we are looking for in order to answer our question, as problem \ref{Main Question}
is the case when $\alpha=\frac{2}{3}$ and $q=2$. In this note,
we provide such lower bounds, and prove the following:
\begin{thm}
\label{thm: Main Theorem}Given $0.2625<\beta\leq\frac{1}{2}$ and
$\frac{1}{2}\leq\alpha<0.7375$, for sufficiently large $x$ we have
that
\[
\left|\left\{ p\leq x,\ p\ \text{prime}:\ s_{q}(p)\geq\alpha(q-1)\log_{q}x\right\} \right|\gg_{\epsilon}\ x^{2\left(1-\alpha\right)}e^{-c\left(\log x\right)^{1/2+\epsilon}}
\]
and
\[
\left|\left\{ p\leq x,\ p\ \text{prime}:\ s_{q}(p)\leq\beta(q-1)\log_{q}x\right\} \right|\gg_{\epsilon}\ x^{2\beta}e^{-c\left(\log x\right)^{1/2+\epsilon}}.
\]
\end{thm}
We do not examine the sum $\sum_{p\leq x}2^{s_{q}(p)}$, rather we
note that the multinomial distribution is sharply peaked, so results
regarding primes in small intervals allow us to attain such a lower
bound. From theorem \ref{thm: Main Theorem}, problem \ref{Main Question}
follows as a corollary. In fact, we have that for any $\alpha<0.7375$
there are infinitely many primes where the proportion of $1$'s in
their binary expansion is greater than $\alpha$.
\section{The Tail Distribution}
We start by providing bounds on the size of the tails of the multinomial
distribution.
\begin{lem}
\label{lem:Chernoff-bound}(Chernoff bound) Given $\frac{1}{2}<a<1$,
we have that
\[
\left|\left\{ n\leq q^{k}:\ a\left(q-1\right)k\leq s_{q}(n)\right\} \right|\leq q^{k}\exp\left(-\frac{k}{18}\left(a-\frac{1}{2}\right)^{2}\right).
\]
\end{lem}
\begin{proof}
On the interval $\left[0,q^{k}\right]$ each digit can be thought
of as an independent random variable which corresponds to the roll
of a $q$-sided die with sides $0,1,\dots,q-1$. Normalizing, let
$\xi$ be a random variable where
\[
\text{P}\left(\xi=\frac{2}{q-1}j-1\right)=\frac{1}{q}
\]
for $0\leq j\leq q-1$, and let $\xi_{1},\dots,\xi_{k}$ be independent copies of $\xi$. Writing $\gamma=2a-1$, the condition $a(q-1)k\leq s_{q}(n)$ becomes $\gamma\leq\frac{\xi_{1}+\cdots+\xi_{k}}{k}$, so our
goal is to bound
\[
\text{P}\left(\gamma\leq\frac{\xi_{1}+\xi_{2}+\cdots+\xi_{k}}{k}\right).
\]
For any nonnegative $t$,
\begin{eqnarray*}
\text{P}\left(\gamma\leq\frac{\xi_{1}+\xi_{2}+\cdots+\xi_{k}}{k}\right) & \leq & \frac{\mathbb{E}\left(e^{t\left(\xi_{1}+\cdots+\xi_{k}\right)}\right)}{e^{tk\gamma}}\\
& = & \left(e^{-t\gamma}\mathbb{E}\left(e^{t\xi}\right)\right)^{k}\\
& = & e^{-kI\left(t,\gamma\right)}
\end{eqnarray*}
where
\[
I\left(t,\gamma\right)=t\gamma-\log\mathbb{E}\left(e^{t\xi}\right).
\]
Evaluating the expectation, we find that
\[
\mathbb{E}\left(e^{t\xi}\right)=\sum_{j=0}^{q-1}\frac{1}{q}e^{t\left(\frac{2j}{q-1}-1\right)}=\frac{e^{-t}}{q}\sum_{j=0}^{q-1}\left(e^{\frac{2t}{q-1}}\right)^{j}=\frac{1}{q}\frac{\sinh\left(t+\frac{t}{q-1}\right)}{\sinh\left(\frac{t}{q-1}\right)}.
\]
This gives rise to the series expansion
\[
\log\left(\frac{1}{q}\frac{\sinh\left(t+\frac{t}{q-1}\right)}{\sinh\left(\frac{t}{q-1}\right)}\right)=\frac{(q+1)}{6(q-1)}t^{2}-\frac{q^{3}+q^{2}+q+1}{180(q-1)^{3}}t^{4}+O\left(t^{6}\right),
\]
allowing us to prove that
\[
\log\mathbb{E}\left(e^{t\xi}\right)\leq\frac{(q+1)}{6(q-1)}t^{2}.
\]
Choosing $t=\frac{\gamma}{3}\frac{q-1}{q+1},$
we obtain the upper bound
\[
\text{P}\left(\gamma\leq\frac{\xi_{1}+\xi_{2}+\cdots+\xi_{k}}{k}\right)\leq\exp\left(-\frac{k}{6}\left(\frac{q-1}{q+1}\right)\gamma^{2}\right),
\]
which proves the lemma, since $q\geq2$ gives $\frac{q-1}{q+1}\geq\frac{1}{3}$, $\gamma=2\left(a-\frac{1}{2}\right)$, and the number of integers $n\leq q^{k}$ in question is $q^{k}$ times this probability.
\end{proof}
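To see the lemma in action on actual numbers, the short sketch below (our own illustration) counts the upper tail exactly by convolving digit-sum distributions and compares it with the bound; the parameters $q=2$, $k=30$, $a=0.7$ are arbitrary choices.
\begin{verbatim}
from math import exp

def tail_count(q, k, threshold):
    # exact number of n in [0, q^k) whose base-q digit sum is >= threshold
    counts = [1]                       # counts[s] = number of strings with digit sum s
    for _ in range(k):
        new = [0] * (len(counts) + q - 1)
        for s, c in enumerate(counts):
            for d in range(q):
                new[s + d] += c
        counts = new
    return sum(c for s, c in enumerate(counts) if s >= threshold)

q, k, a = 2, 30, 0.7
exact = tail_count(q, k, a * (q - 1) * k)
bound = q ** k * exp(-k / 18 * (a - 0.5) ** 2)
print(exact, int(bound))    # the exact tail count is much smaller than the bound
\end{verbatim}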
Next, we will need the best existing results on prime gaps. In 2001,
Baker, Harman and Pintz proved that
\begin{equation}
\pi\left(x+x^{\theta}\right)-\pi(x)\gg\frac{x^{\theta}}{\log x}\label{eq:Baker Harman Pintz}
\end{equation}
for any $\theta\geq0.525$ \cite{BakerHarmanPintz}. Armed with equation
\ref{eq:Baker Harman Pintz} and lemma \ref{lem:Chernoff-bound},
we are now ready to prove theorem \ref{thm: Main Theorem}.
\begin{proof}
Let $\alpha^{'}=\alpha+r(x)$ where $r(x)$ is chosen so that $\alpha^{'}<0.7375$.
Let $k=\left[\log_{q}x\right]$, so that $q^{k}\leq x$, and let $l=\lceil2\left(1-\alpha^{'}\right)k\rceil$.
Consider the interval $\left[q^{k}-q^{l},\ q^{k}-1\right]$, which
is an interval whose first $k-l$ digits base $q$ are equal to $q-1$.
By Baker, Harman and Pintz, there will be
\[
\gg\frac{q^{l}}{\log\left(q^{k}\right)}\gg\frac{q^{l}}{\log x}
\]
primes in this interval, where the implied constant is explicit. By Lemma
\ref{lem:Chernoff-bound}, there are at most $q^{l}\exp\left(-\frac{l\delta^{2}}{18}\right)$
integers between $0$ and $q^{l}$ which have digit sum less than
$(q-1)l\left(\frac{1}{2}-\delta\right)$. Letting $\delta=\frac{\log l}{\sqrt{l}}$,
it follows that there are at most $q^{l}e^{-\left(\log l\right)^{2}}$
integers in the interval $\left[q^{k}-q^{l},\ q^{k}-1\right]$ whose
digit sum is less than
\[
(q-1)(k-l)+(q-1)l\left(\frac{1}{2}-\frac{\log l}{\sqrt{l}}\right).
\]
As $q^{l}e^{-\left(\log l\right)^{2}}$ is significantly smaller than
$\frac{q^{l}}{k\log q}$, almost all of the primes in this interval
will have a digit sum greater than the above, and so we see that there are
\[
\gg\frac{q^{l}}{\log\left(x\right)}
\]
primes with digit sum larger than
\[
\alpha^{'}(q-1)\log_{q}(x)-(q-1)\sqrt{l}\log l.
\]
Expanding $\alpha^{'}=\alpha+r(x)$, and taking $r(x)=c\frac{\log\log x}{\sqrt{\log x}}$
for the appropriate constant $c$ yields a digit sum greater than
\[
\alpha(q-1)\log_{q}(x),
\]
which proves the result since
\[
\frac{q^{l}}{\log\left(x\right)}\sim\frac{x^{2(1-\alpha)}x^{-2r(x)}}{\log x}\gg x^{2(1-\alpha)}\exp\left(-c\sqrt{\log x}\log\log x\right).
\]
The proof for the lower bound of the size of the corresponding set
of primes with $s_{q}(p)\leq\beta(q-1)\log_{q}(x)$ for $0.2625<\beta\leq\frac{1}{2}$
is identical\@. \end{proof}
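The interval trick above is easy to watch in miniature. The sketch below (our own illustration, with $q=2$, $k=28$, $l=8$, far below the asymptotic regime, and with naive trial division) lists the primes in $[2^{28}-2^{8},\,2^{28}-1]$; by construction every number there already has its first $20$ binary digits equal to $1$, so every prime found has more than a $0.7$ proportion of ones.
\begin{verbatim}
def is_prime(n):
    # naive trial division; fine for the 256 candidates below
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

q, k, l = 2, 28, 8
lo, hi = q ** k - q ** l, q ** k - 1      # first k - l = 20 binary digits are all 1
found = [n for n in range(lo, hi + 1) if is_prime(n)]
ratios = [bin(p).count("1") / k for p in found]
print(len(found), min(ratios) if ratios else None)
# every prime found has ones-proportion at least (k - l)/k = 5/7 > 0.7
\end{verbatim}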
\begin{rem*}
The reader may note that for any $\alpha<0.7375$ there are more possible
choices for the first $k-l$ digits other than all $1$'s. It is conceivable
that if we looked at multiple intervals where the first $k-l$ digits
had many $1$'s that we would be able to increase the density by a
small factor, and possibly a significant factor for smaller $\alpha$.
While such an approach seems promising, and while it seems logical
to sum over multiple intervals, the end result and lower bound for
the number of primes is roughly the same. The exponent of $x$ is
no different, so we opted to present the simpler argument above.
\end{rem*}
\specialsection*{Acknowledgements}
I would like to thank Didier Piau for helping me understand the Chernoff
bound.
\bibliographystyle{plain}
| {
"timestamp": "2012-11-13T02:04:03",
"yymm": "1211",
"arxiv_id": "1211.2455",
"language": "en",
"url": "https://arxiv.org/abs/1211.2455",
"abstract": "Let s_q(n) denote the base q sum of digits function, which for n<x, is centered around (q-1)/2 log_q x. In Drmota, Mauduit and Rivat's 2009 paper, they look at sum of digits of prime numbers, and provide asymptotics for the size of the set {p<x, p prime : s_q(p)=alpha(q-1)log_q x} where alpha lies in a tight range around 1/2. In this paper, we examine the tails of this distribution, and provide the lower bound |{p < x, p prime : s_q(p)>alpha(q-1)log_q x}| >>x^{2(1-alpha)}e^{-c(log x)^{1/2+epsilon}} for 1/2<alpha<0.7375. To attain this lower bound, we note that the multinomial distribution is sharply peaked, and apply results regarding primes in short intervals. This proves that there are infinitely many primes with more than twice as many ones than zeros in their binary expansion.",
"subjects": "Number Theory (math.NT)",
"title": "Bounds For The Tail Distribution Of The Sum Of Digits Of Prime Numbers",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9890130547786955,
"lm_q2_score": 0.7154239957834733,
"lm_q1q2_score": 0.7075636715317934
} |
https://arxiv.org/abs/1101.5830 | Perfect matching in 3-uniform hypergraphs with large vertex degree | A perfect matching in a 3-uniform hypergraph on $n=3k$ vertices is a subset of $\frac{n}{3}$ disjoint edges. We prove that if $H$ is a 3-uniform hypergraph on $n=3k$ vertices such that every vertex belongs to at least ${n-1\choose 2} - {2n/3\choose 2}+1$ edges then $H$ contains a perfect matching. We give a construction to show that this result is best possible. | \section{Introduction and Notation}
For graphs we follow the notation in \cite{B1}. For a set $T$, we refer to all of its $k$-element subsets ($k$-sets for short) as ${T \choose k}$ and to the number of such $k$-sets as ${|T| \choose k}$. We say that $H = (V(H),E(H))$ is an $r$-uniform hypergraph or $r$-graph for short, where $V(H)$ is the set of vertices and $E\subset {V(H) \choose r}$, a family of $r$-sets of $V(H)$, is the set of edges of $H$. We say that $H(V_1,\ldots,V_r)$ is an $r$-partite $r$-graph, if there is a partition of $V(H)$ into $r$ sets, i.e. $V(H) = V_1\cup \cdots\cup V_r$ and every edge of $H$ uses exactly one vertex from each $V_i$. We call it a balanced $r$-partite $r$-graph if all $V_i$'s are of the same size. Furthermore $H(V_1,\ldots,V_r)$ is a complete $r$-partite $r$-graph if every $r$-tuple that uses one vertex from each $V_i$ belongs to $E(H)$. We denote a complete balanced $r$-partite $r$-graph by $K^{(r)}(t)$, where $t = |V_i|$. When the graph referred to is clear from the context we will use $V$ instead of $V(H)$ and will identify $H$ with $E(H)$ and $e_r(H) = |E(H)|$. A matching in $H$ is a set of disjoint edges of $H$ and a perfect matching is a matching that contains all vertices. For $U\subset V$, $H|_U$ is the restriction of $H$ to $U$.
For an $r$-graph $H$ and a set $D = \{v_1,\ldots,v_d\} \in {V \choose d}, 1\leq d \leq r$, the degree of $D$ in $H$, $deg_H(D) = deg_r(D)$ denotes the number of edges of $H$ that contain $D$. For $1\leq d \leq r$, let $$\delta_d =\delta_d(H)= \min\left\{deg_r(D) \; : \; D\in {V \choose d}\right\}.$$
\noindent When $H$ is an $r$-graph and $A$ and $B$ are disjoint subsets of $V(H)$, for a vertex $v\in A$ we denote by $deg_r(v,{B\choose r-1})$ the number of $(r-1)$-sets of $B$ that make edges with $v$, while $d_r(v,{B\choose r-1}) = deg_r(v,{B\choose r-1})/{|B|\choose r-1}$ denotes the density. For such $A$ and $B$, $e_r(A,{B\choose r-1})$ is the sum of $deg_r(v,{B\choose r-1})$ over all $v\in A$ while $d_r(A,{B\choose r-1}) =\dfrac{e_r(A,{B\choose r-1})}{|A|{|B|\choose r-1}}$. We denote by $H(A,{B\choose r-1})$ such an $r$-graph when all edges of $H$ use one vertex from $A$ and $r-1$ vertices from $B$. When $A_1,\ldots,A_r$ are disjoint subsets of $V$, for a vertex $v\in A_1$ we denote by $deg_r(v, (A_2\times\cdots\times A_r))$ the number of edges in the $r$-partite $r$-graph induced by the subsets $\{v\},A_2,\ldots, A_r$, and $e(A_1, (A_2\times\cdots\times A_r))$ is the sum of $deg_r(v,(A_2\times\cdots\times A_r))$ over all $v \in A_1$. Similarly $$d_r(A_1, (A_2\times\cdots\times A_r)) = \frac{e(A_1, (A_2\times\cdots\times A_r))}{|A_1\times A_2\times\cdots\times A_r|}.$$
\noindent An $r$-graph $H$ on $n$ vertices is $\eta$-{\em dense} if it has at least $\eta {n \choose r}$ edges. We use the notation $d_r(H) \geq \eta$ to refer to an $\eta$-dense $r$-graph $H$. A bipartite graph $G=(A,B)$ is $\eta$-{\em dense} if $d(A,B)\geq \eta$. For $U\subset V$, for simplicity we refer to $d_r(H|_U)$ as $d_r(U)$ and to $E(H|_U)$ as $E(U)$. Throughout the paper $\log$ denotes the base 2 logarithm. Moreover we will only deal with $r$-graphs on $n$ vertices where $n=rk$ for some integer $k$, we denote this by $n\in r\mathbb{Z}$.
\begin{definition}
Let $d,r$ and $n$ be integers such that $1\leq d < r$, and $n\in r\mathbb{Z}$. Denote by $m_d(r,n)$ the smallest integer $m$, such that every $r$-graph $H$ on $n$ vertices with $\delta_d(H) \geq m$ contains a perfect matching.
\end{definition}
For graphs ($r=2$), by Dirac's theorem on Hamiltonicity of graphs \cite{Dirac1952}, it is easy to see that $m_1(2,n) \leq n/2$, and since the complete bipartite graph $K_{n/2-1,n/2+1}$ does not have a perfect matching we get $m_1(2,n) = n/2$. For $r\geq 3$ and $d = r-1$, it follows from a result of R\"{o}dl, Ruci\'{n}ski and Szemer\'{e}di on Hamiltonicity of $r$-graphs \cite{RRSz_HAM_ku_colDeg_approx} that $m_{r-1}(r,n) \leq n/2 + o(n)$. K\"{u}hn and Osthus \cite{KO_PM_ku_colDeg} improved this result to $m_{r-1}(r,n) \leq n/2 + 3r^2\sqrt{n\log n}$. This bound was further sharpened in \cite{RRSz_PM_ku_colDeg} to $m_{r-1}(r,n) \leq n/2 + C\log n$. In \cite{RRSz_PM_ku_colDeg_approx_better} the bound was improved to almost the true value; it was proved that $m_{r-1}(r,n) \leq n/2 + r/4$. Finally \cite{RRSz_PM_ku_colDeg_tight} settled the problem for $d=r-1$. K\"{u}hn and Osthus \cite{KO_PM_ku_colDeg} and Aharoni, Georgakopoulos and Spr\"ussel \cite{AGS_partite} studied the minimum degree threshold for perfect matchings in $r$-partite $r$-graphs.
\vskip 10pt
\noindent
The case $d<r-1$ is rather hard. Pikhurko \cite{Pikh_PM_ku_dDeg} proved that for all $d\geq r/2$, $m_d(r,n)$ is close to $\frac{1}{2}{n-d \choose r-d}$. For $1\leq d<r/2$, H\`{a}n, Person and Schacht \cite{HPS_PM_3u_vertDeg} proved that $$m_d(r,n) \leq \left(\frac{r-d}{r} +o(1)\right){n-d \choose r-d}$$
A recent survey of these and other related results appear in \cite{Rod_Ruc_Survey}. In \cite{HPS_PM_3u_vertDeg} the authors posed the following conjecture.
\begin{conjecture}[\cite{HPS_PM_3u_vertDeg}, see \cite{Rod_Ruc_Survey} P. 23 ] \label{conject}
For all $1\leq d < r/2$,
$$m_d(r,n) \sim \max\left\{\frac{1}{2}, 1-\left(\frac{r-1}{r}\right)^{r-d}\right\}{n-d \choose r-d}$$
\end{conjecture}
\noindent Note that for $r=3$ and $d=1$ the above bound yields $$m_1(3,n) \sim \frac{5}{9}{n-1 \choose 2}$$
\noindent Improving an old result of Daykin and H{\"a}ggvist \cite{DH1981}, the authors of \cite{HPS_PM_3u_vertDeg} proved an approximate version of their conjecture for the case $r=3$ and $d=1$, they showed that $m_1(3,n) \leq \left(\frac{5}{9}+o(1)\right){n \choose 2}$ for large $n$. For the case $r=4$ and $d=1$, Markstr\"om and Ruci\'{n}ski \cite{Ruc_Marks_approx4u} proved that $m_1(4,n) \leq \left(\frac{42}{64}+o(1)\right){n-1 \choose 3}$. Lo and Markstr\"om \cite{Lo_Markst_3partite} determined the exact degree threshold for $r=3$ and $d=1$ for the case of $3$-partite $3$-graphs.
\vskip6pt
In this paper we settle Conjecture \ref{conject} for the case $r=3$ and $d=1$. Parallel to this work, independently K{\"u}hn, Osthus and Treglown \cite{KOT_parallel} proved the same result. We believe our techniques are more general and have many other applications. In our subsequent work \cite{Khan_4u_vertDeg} we use similar techniques to prove Conjecture \ref{conject} for the case $r=4$ and $d=1$ as well. Our main result in this paper is the following theorem.
\begin{theorem}\label{ourMainThm}
There exists an integer $n_0$ such that if $H$ is a $3$-graph on $n \geq n_0$ vertices ($n\in 3\mathbb{Z}$), and \begin{equation}\label{minDegree}\delta_1(H) \geq {n-1\choose 2} - {2n/3\choose 2} + 1 \end{equation} then $H$ has a perfect matching.
\end{theorem}
\noindent On the other hand the following construction from \cite{HPS_PM_3u_vertDeg} shows that the result is best possible.
\begin{construction}\label{extConstruction}
Let $H = (V(H),E(H))$ be a $3$-graph on $n$ vertices ($n\in 3\mathbb{Z}$), such that $V(H)$ is partitioned into $A$ and $B$, $|A| = \frac{n}{3}-1$ and $|B|= n-|A|$ and $E(H)$ is the set of all $3$-sets of $V(H)$, $T$, such that $|T\cap A|\geq 1$ (see Figure \ref{ext_example}).
\end{construction}
\noindent We have $\delta_1(H) = {n-1\choose 2} - {2n/3\choose 2}$ (the degree of a vertex in $B$) but since every edge in a matching must use at least one vertex from $A$, the maximum matching in $H$ is of size $|A|=\frac{n}{3}-1$.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.50]{ext_example}
\caption{\footnotesize{The extremal example: every edge intersects the set $A$.}}
\label{ext_example}
\end{figure}
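Both facts about Construction \ref{extConstruction} are easy to verify by machine for a small $n$; the following sketch (our own illustration, with $n=12$) checks the minimum degree, and since every edge meets $A$, no matching can have more than $|A|=n/3-1$ edges.
\begin{verbatim}
from itertools import combinations
from math import comb

def extremal_example(n):
    # Construction: A of size n/3 - 1, edges are all 3-sets meeting A
    A = set(range(1, n // 3))                 # |A| = n/3 - 1
    edges = [e for e in combinations(range(1, n + 1), 3) if set(e) & A]
    return A, edges

n = 12
A, edges = extremal_example(n)
deg = {v: sum(v in e for e in edges) for v in range(1, n + 1)}
print(min(deg.values()), comb(n - 1, 2) - comb(2 * n // 3, 2))   # both equal 27
# every edge meets A, so a matching has at most |A| = n/3 - 1 edges
\end{verbatim}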
\section{The main result}
We distinguish two cases to prove Theorem \ref{ourMainThm}. In Section \ref{non_ext_case} we show that a slightly relaxed minimum degree condition implies that either $H$ has an {\em `almost perfect matching'} or $H$ is {\em `close to'} the extremal example of Construction \ref{extConstruction}. In case $H$ is not close to the extremal example we first find an almost perfect matching and extend it to a perfect matching in $H$ using the {\em `absorbing'} technique. On the other hand, when $H$ is close to the extremal example, in Section \ref{extCase} we build a perfect matching in $H$ with a greedy approach.
\begin{definition}[Extremal Case with parameter $\alpha$]\label{ext_defn} For a constant $0<\alpha < 1$, we say that $H$ is {\em $\alpha$-extremal} if the following is satisfied; otherwise it is {\em $\alpha$-non-extremal}. There exists a $B\subset V(H)$ such that
\begin{itemize}
\item $|B|\geq \left(\frac{2}{3}-\alpha\right) n$
\item $d_3\left(B\right) < \alpha$.
\end{itemize}
\end{definition}
\noindent When $H$ is $\alpha$-non-extremal, we use the absorbing lemma, which roughly states that in $H$ there exists a small matching $M$ with the property that every {\em `not too large'} subset of vertices $W$ can be absorbed into a matching covering $V(M)\cup W$.
\begin{lemma}\label{absorbLemma}(Absorbing Lemma, \cite{HPS_PM_3u_vertDeg}) For every $\eta>0$, there is an integer $n_0 = n_0(\eta)$ such that if $H$ is a $3$-graph on $n\geq n_0$ vertices with $\delta_1(H)\geq \left(1/2+2\eta\right){n\choose 2}$, then there exists a matching $M$ in $H$ of size $|M|\leq \eta^3n$ such that for every set $W\subset V\setminus V(M)$ with $|W|\leq \eta^6 n$ and $|W|\in 3\mathbb{Z}$, there exists a matching covering all the vertices in $V(M)\cup W$.
\end{lemma}
After removing an absorbing matching $M$ from $H$ we find an almost perfect matching in $H|_{V\setminus V(M)}$. The few vertices not covered by this almost perfect matching are absorbed into $M$ to get a perfect matching in $H$. Theorem \ref{optCoverTheorem} in Section \ref{non_ext_case}, using the tools developed in Section \ref{tools}, guarantees the existence of an almost perfect matching in the non-extremal case.
\begin{theorem}\label{optCoverTheorem}
For all $0<\eta \ll \alpha\ll1$, there is an $n_0$ such that if $H$ is a $3$-graph on $n\geq n_0$ vertices with $$\delta_1(H) \geq \left(\frac{5}{9} - 10\eta\right){n\choose 2},$$ then either
\begin{itemize}\setlength{\itemsep}{-5pt}
\item $H$ contains a matching leaving strictly less than $\eta^2 n$ vertices unmatched or
\item $H$ is $\alpha$-extremal.
\end{itemize}
\end{theorem}
When $H$ is $\alpha$-extremal, almost all vertices of $A$ make edges with almost all pairs in ${B\choose 2}$, where $B$ is as in Definition \ref{ext_defn} and $A = V(H)\setminus B$. In Section \ref{extCase} we first match the few vertices of $A$ that do not make edges with almost all pairs in ${B\choose 2}$, and the remaining vertices are matched using a K\"onig-Hall type argument.
\begin{theorem}\label{extCaseTheorem}
For all $0<\alpha\ll1$, there is an $n_0$ such that if $H$ is an $\alpha$-extremal $3$-graph on $n\geq n_0$ vertices with $$ \delta_1(H) \geq {n-1\choose 2} - {2n/3\choose 2}+1,$$ then $H$ contains a perfect matching.
\end{theorem}
\begin{proof}[\textbf{Proof of Theorem \ref{ourMainThm}}]
Let $0<\alpha\ll1$ be given. Applying Lemma \ref{absorbLemma} with parameter $\sqrt{\alpha}$, Theorem \ref{optCoverTheorem} with parameter $\alpha^{3/2}$ and Theorem \ref{extCaseTheorem} with parameter $\alpha$, we get $n_0',n_0''$ and $n_0'''$ respectively. Let $n_0 = 2\max \{n_0',n_0'',n_0'''\}$. Now assume that we have a $3$-graph $H$ on $n\geq n_0$ vertices satisfying (\ref{minDegree}).
\vskip5pt
From (\ref{minDegree}) when $n$ is large we have $$\delta_1(H)\geq{n-1 \choose 2}-{2n/3\choose 2} +1 > \frac{5}{9}{n-1\choose 2} - \frac{n}{3} > \left(1/2 + 2\sqrt{\alpha}\right){n\choose 2}.$$ Hence $H$ satisfies the conditions of Lemma \ref{absorbLemma} with parameter $\sqrt{\alpha}$. We remove from $H$ an absorbing matching $M$ of size at most $\alpha^{3/2} n$.
\vskip5pt
Let $H' = H|_{V\setminus V(M)}$ be the remaining hypergraph (after removing $M$) on $n' = n - |V(M)|$ vertices. Since ${|V(M)|\choose 2} + |V(M)|\cdot n < 7\alpha^{3/2}{n\choose2} \leq 10\alpha^{3/2}{n'\choose2}$, it is easy to see that $$\delta_1(H')\geq \left(\dfrac{5}{9}-10\alpha^{3/2}\right){n' \choose 2}$$
As $n' > n_0''$, using Theorem \ref{optCoverTheorem} (with $\eta = \alpha^{3/2}$), in $H'$ we find an almost perfect matching that leaves out a set of at most $\alpha^{3} n' < \alpha^{3} n$ vertices. As guaranteed by Lemma \ref{absorbLemma} the vertices that are left out from this almost perfect matching are absorbed into $M$, and we get a perfect matching in $H$.
\vskip5pt
In case $H$ is $\alpha$-extremal, using Theorem \ref{extCaseTheorem} we get a perfect matching in $H$, which concludes the proof of Theorem \ref{ourMainThm}. \hfill{}
\end{proof}
\section{Tools}\label{tools}
\noindent We use the following result of Erd\"{o}s \cite{ErdosKST} to find complete balanced $r$-partite subhypergraphs of $r$-graphs.
\begin{lemma}\label{hyperKST}
For every integer $l\geq 1$ there is an integer $n_0 = n_0(r,l)$ such that every $r$-graph on $n> n_0$ vertices, that has at least $n^{r-1/l^{r-1}}$ edges, contains a $K^{(r)}(l)$.
\end{lemma}
\begin{corollary}\label{hyperKST_corr}
\noindent For $0< \eta \ll1$ and $r\leq 3$, if $H$ is an $r$-graph on $n>n_0(\eta,r)$ vertices with $$|E(H)|\geq \eta{n\choose r}$$ then $H$ contains a $K^{(r)}(t)$, where $$t=\eta (\log n)^{1/(r-1)}.$$
\end{corollary}
\noindent This is so because $\eta{n\choose r} \geq \dfrac{\eta n^r}{2r!} \geq \dfrac{n^r}{2^{1/\eta^{r-1}}} = \dfrac{ n^r}{n^{1/(\eta^{r-1} \log n)}} = \dfrac{ n^r}{n^{1/t^{(r-1)}}} $ , as $\eta > \dfrac{2r!}{2^{1/\eta^{(r-1)}}}$.
\vskip6pt
The following lemma is a very useful tool in this section.
\begin{lemma} \label{subsetsPHP}
Let $m$ be a sufficiently large integer. If $G(A,B)$ is an $\eta$-dense bipartite graph with $|A| = c_1m $ and $|B|\geq c_2 2^{m}$ for some constants $0<c_1,c_2<1$, then there exists a complete bipartite subgraph $G'(A',B')$ of $G$ such that $A'\subset A$, $B'\subset B$, $|A'|\geq \eta |A|/2 \text{ and } |B'|\geq \dfrac{\eta }{2}\cdot \dfrac{|B|}{2^{c_1m}} \geq \dfrac{\eta c_2 }{2}\cdot 2^{(1-c_1)m} $.
\end{lemma}
\begin{proof}
First we show that there is a set $B_1\subset B$ such that $|B_1| \geq \eta|B|/2$ and for every vertex $ b\in B_1$, $deg(b,A)\geq \eta |A|/2$. Such a subset exists because otherwise the total number of edges in $G$ would be strictly less than $$\frac{\eta|B|}{2}\cdot |A| + |B|\cdot \frac{\eta|A|}{2} = \eta |A||B|$$ a contradiction to the fact that $G(A,B)$ is $\eta$-dense. Now we show that there is the required complete bipartite subgraph in $G(A,B_1)$. To see this consider the neighborhoods in $A$, of the vertices in $B_1$. Since there can be at most $2^{|A|} = 2^{c_1m}$ such
neighborhoods, by averaging there must be a neighborhood that appears for at least $\dfrac{|B_1|}{2^{c_1 m }}\geq \dfrac{\eta }{2}\cdot \dfrac{|B|}{2^{c_1m}} \geq \dfrac{\eta c_2 }{2} \cdot \dfrac{2^m}{ 2^{c_1 m }}= \dfrac{\eta c_2 }{2}\cdot 2^{(1-c_1)m}$ vertices of $B_1$. Hence we get the desired complete bipartite graph. \hfill{}
\end{proof}
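The pigeonhole step in this proof is easy to phrase algorithmically. The sketch below (our own illustration, with a toy instance of our choosing) extracts the pair $A'$, $B'$ from adjacency data.
\begin{verbatim}
from collections import Counter

def dense_complete_bipartite(adj, A, eta):
    # pigeonhole step of the proof: adj[b] = neighbourhood of b inside A (a frozenset)
    B1 = [b for b in adj if len(adj[b]) >= eta * len(A) / 2]   # high-degree vertices
    if not B1:
        return set(), set()
    nbhd, _ = Counter(adj[b] for b in B1).most_common(1)[0]    # most frequent nbhd
    return set(nbhd), {b for b in B1 if adj[b] == nbhd}

# toy instance: A = {0,1,2,3}; every b in B sees one of two fixed neighbourhoods
A = range(4)
adj = {b: frozenset({1, 2, 3} if b % 2 == 0 else {0, 1, 2}) for b in range(100)}
Aprime, Bprime = dense_complete_bipartite(adj, A, eta=0.5)
print(sorted(Aprime), len(Bprime))   # e.g. [1, 2, 3] together with 50 vertices of B
\end{verbatim}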
\vskip6pt\noindent The following two lemmas are repeatedly used in Section \ref{non_ext_case}.
\begin{lemma}\label{3partVolArg}
Let $m$ be a sufficiently large integer and let $H(X,Y,Z)$ be a $3$-partite $3$-graph with $|X| = |Y| = c_1m$ and $|Z| \geq c_2 2^{m^2}$ for some constants $0<c_1,c_2<1$. If $d_3(Z,(X\times Y)) \geq \eta$, then there exists a complete $3$-partite $3$-graph $H'(X',Y',Z')$ as a subgraph of $H$, such that $|X'|=|Y'| =|Z'| \geq \frac{\eta}{4} \log |X|$.
\end{lemma}
\begin{proof}
First consider the auxiliary bipartite graph $G_1(A,Z)$, where $A =X\times Y$ and a vertex $z\in Z$ is connected to a pair $(a,b)\in A$ if $\{a,b,z\}$ is an edge of $H$. Clearly $G_1$ satisfies the conditions of Lemma~\ref{subsetsPHP}. Applying Lemma \ref{subsetsPHP} on $G_1$ we get a complete bipartite graph $G_2(A',Z')$ such that $A' \subset A= X\times Y$, $Z' \subset Z$, $|A'|\geq \eta |X||Y|/2$ and $|Z'| \geq \frac{c_2\eta}{2} 2^{m^2(1-c_1^2)} > |X|$ where the last inequality follows when $m$ is large.
\vskip 4pt
Now let $G_3$ be the graph on vertex set $X\cup Y$ in which $(a,b)$ is an edge if $(a,b)\in A'$. Since $|A'|\geq \eta |X||Y|/2$, we have $|E(G_3)| \geq \dfrac{\eta}{4} {|X\cup Y| \choose 2}$. Applying Corollary \ref{hyperKST_corr} (for $r=2$), in $G_3$ we get a complete bipartite graph $G_4(X',Y')$ with $X'\subset X$ and $Y'\subset Y$ such that $|X'|=|Y'| \geq \frac{\eta}{4} \log |X|$. Clearly $X'$, $Y'$ and a subset of $Z'$ (of size $|X'|$) correspond to the color classes of the required complete $3$-partite $3$-graph. \hfill{}
\end{proof}
\begin{lemma}\label{subsetPHP_hyper}
Let $m$ be a sufficiently large integer and let $H\left(A,{B\choose 2}\right)$ be a $3$-graph such that $|A|= c_1 m$, $|B|\geq c_2 2^{m^2}$, for some constants $0<c_1,c_2<1$. If $d_3\left(A,{B\choose 2}\right)\geq \eta$, then there exists a complete $3$-partite $3$-graph $H'(A',B',B'')$, with $A'\subset A$, $B'\mbox{ and } B''$ are disjoint subsets of $B$ such that $|A'|=|B'|=|B''| = \eta|A|/2$.
\end{lemma}
\begin{proof}
First consider the auxiliary bipartite graph $G_1(A,P)$, where $P={B\choose 2}$ and a vertex $a\in A$ is connected to a pair $(b_1,b_2) \in P$ if $(a,b_1,b_2)$ is an edge of $H$. Applying Lemma \ref{subsetsPHP} on $G_1$ we get a complete bipartite graph $(A',P')$ in $G_1$ with $A' \subset A$ and $P' \subset P$ such that $|A'| \geq \eta|A|/2$ and $$|P'|\geq \frac{\eta}{2} \cdot \frac{|P|}{2^{c_1m}}\geq \frac{\eta}{5}\cdot \frac{|B|^2}{2^{c_1m}} \geq \frac{\eta}{5} \cdot \frac{|B|^2}{ \left(\frac{|B|}{c_2}\right)^{c_1/m}} = \frac{c_2^2\eta}{5} \left(\frac{|B|}{c_2}\right)^{2-c_1/m} \geq |B|^{2-2/\eta|A|}$$
where the last inequality follows when $m$ is large and $\eta$, $c_1$ and $c_2$ are small constants.
\vskip 4pt Now construct an auxiliary graph $G_2$ where $V(G_2)=B$ and the edges of $G_2$ correspond to the pairs in $P'$. Since $|E(G_2)| \geq |B|^{2-2/\eta|A|}$, applying Lemma \ref{hyperKST} to $G_2$ (for $r=2$) we get a complete bipartite graph with color classes $B'$ and $B''$ each of size $\eta|A|/2$. Clearly $A'$, $B'$, and $B''$ correspond to the color classes of a complete $3$-partite $3$-graph in $H$ as in the statement of the lemma. \hfill{}
\end{proof}
We also use the following simple facts about graphs.
\begin{lemma}\label{folk_subgraph}
Any graph on $n$ vertices with $m$ edges has a subgraph of minimum degree at least $m/n$.
\end{lemma}
\begin{lemma}\label{folk_matching}
Any graph $G$ on $n$ vertices has a matching of size at least $\min\{\delta(G),\lfloor\frac{n}{2}\rfloor\}$.
\end{lemma}
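Lemma \ref{folk_subgraph} has the standard deletion proof: repeatedly discard a vertex of degree less than $m/n$. A small Python sketch of this procedure (our own illustration, on an example of our choosing) is given below.
\begin{verbatim}
def min_degree_subgraph(n, edges):
    # repeatedly delete vertices of degree < m/n; survivors have min degree >= m/n
    threshold = len(edges) / n
    live, E = set(range(n)), [tuple(e) for e in edges]
    while True:
        deg = {v: sum(v in e for e in E) for v in live}
        bad = [v for v in live if deg[v] < threshold]
        if not bad:
            return live, E
        live -= set(bad)
        E = [e for e in E if set(e) <= live]

# K_4 on {0,1,2,3} with a pendant path 3-4-5 attached: m/n = 8/6
k4 = [(a, b) for a in range(4) for b in range(a + 1, 4)]
print(min_degree_subgraph(6, k4 + [(3, 4), (4, 5)]))
# -> ({0, 1, 2, 3}, [the six K_4 edges])
\end{verbatim}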
\section{Proof of Theorem \ref{optCoverTheorem}}\label{non_ext_case}
Let $H$ be a $3$-uniform hypergraph on $n$ vertices where $n$ is sufficiently large and \begin{equation}\label{minDeg_nonext} \delta_1(H) \geq \left(\frac{5}{9} - 10\eta\right){n\choose 2}.\end{equation}
In $H$ we will find an almost perfect matching (covering at least $(1-\eta^2)n$ vertices). In fact, we will prove a much stronger result. We are going to build a cover ${\cal T} = \{T_{1},T_{2},\ldots\}$ where the $T_i$ are pairwise disjoint complete $3$-partite $3$-graphs in $H$. These complete $3$-partite $3$-graphs will be balanced and will be of the same size. We refer to them as tripartite graphs. We say that such a cover is optimal if it covers at least $(1-\eta^2)n$ vertices. We will show that either we can find an optimal cover or $H$ is $\alpha$-extremal. It is easy to see that such an optimal cover readily gives us a matching in $H$ that leaves out at most $\eta^2 n$ vertices.
\begin{proof}[Proof of Theorem \ref{optCoverTheorem}]
We begin with a cover ${\cal T}$ obtained by repeatedly applying Lemma \ref{hyperKST} in the remaining part of $H$ as long as there are at least $\eta^2 n$ vertices left and the condition of Lemma \ref{hyperKST} is satisfied, to get disjoint $K^{(3)}(t)$'s where $t = \eta\sqrt{\log (\eta^2n)}$. Note that by Lemma \ref{hyperKST} we can find larger tripartite graphs in $H$ (at least initially) but since we want all tripartite graphs to be of the same size we find all tripartite graphs of size $3t$.
Denote by ${\cal T}$ the set of tripartite graphs in the cover and let $V({\cal T})$ be the union of the vertices in the tripartite graphs in ${\cal T}$. We refer to $|V({\cal T})|$ as the size of the cover and to a subset of ${\cal T}$ as a subcover in ${\cal T}$. Let ${\cal I} = V(H)\setminus V({\cal T})$ be the set of remaining vertices. If ${\cal T}$ is not an optimal cover, then since we cannot apply Lemma \ref{hyperKST} in $H|_{\cal I}$ (with parameter $\eta$, to get another $K^{(3)}(t)$), we must have that $|{\cal I}| > \eta^2 n$ and \begin{equation} \label{denI} d_3({\cal I}) < \eta. \end{equation}
\noindent Note that from (\ref{minDeg_nonext}) and (\ref{denI}) we immediately get that $|V({\cal T})| > \eta n$. We show that if $H$ is $\alpha$-non-extremal and ${\cal T}$ is not optimal then using the iterative procedure outlined below we can significantly increase the size of our cover (by at least $\eta^4n$ vertices). After every iteration all tripartite graphs in ${\cal T}$ will be of the same size. Furthermore, all tripartite graphs in ${\cal T}$ will be balanced and if the size of a color class in the tripartite graphs at a given iteration is $t$, then after the iteration it will be $\frac{\eta}{4}\log t$. We take $n$ to be sufficiently large so that till the end of the procedure the size of each color class is large enough for Lemma \ref{3partVolArg} and Lemma \ref{subsetPHP_hyper} to be applicable.
\vskip6pt \noindent Let $T_i = (V_1^i,V_2^i,V_3^i)$ be a tripartite graph in ${\cal T}$. For $1\leq l,k\leq 3$, we say that $T_i$ is {\em $k$-sided}, if $d_3\left(V_l^i,{{\cal I}\choose 2}\right) \geq 2\eta$, for $k$ color classes $V_l^i$ of $T_i$. We will show that most of the tripartite graphs in ${\cal T}$ are at most $1$-sided or we can significantly increase the size of our cover.
\begin{claim}\label{few_2sided}
If the number of vertices in the at least $2$-sided tripartite graphs in ${\cal T}$ is more than $\eta |V({\cal T})|$, then we can increase the size of ${\cal T}$ by at least $\eta^3 n/8$ vertices, such that all tripartite graphs in the cover are balanced and are of the same size.
\end{claim}
We repeatedly use Claim \ref{few_2sided} to increase the size of our cover as long as the condition of Claim \ref{few_2sided} is satisfied. Hence in at most $8\eta^{-3}$ iterations we either get an optimal cover or the number of vertices in the at least $2$-sided tripartite graphs is reduced to at most $\eta |V({\cal T})|$. For simplicity we still denote the cover by ${\cal T}$ and ${\cal I} = V(H)\setminus V({\cal T})$. The size of a color class in each tripartite graph is still denoted by $t$.
Suppose that ${\cal T}$ is not optimal and we cannot apply Claim \ref{few_2sided}; then the number of vertices in at least $2$-sided tripartite graphs in ${\cal T}$ is at most $\eta |V({\cal T})|$. Note that if $T_i \in {\cal T}$ is at most $1$-sided, then by definition $d_3\left(V(T_i), {{\cal I}\choose 2}\right) \leq \left(1/3 + 4\eta\right)$. This together with the bound on the number of at least $2$-sided tripartite graphs gives us \begin{equation}\label{edges_T_to_I} e_3\left(V({\cal T}), {{\cal I}\choose 2}\right) \leq \left(\frac{1}{3}+5\eta\right)|V({\cal T})|{|{\cal I}|\choose 2}. \end{equation}
For the sum of the degrees of vertices in ${\cal I}$, (\ref{minDeg_nonext}) gives us \begin{equation}\label{lower}\sum_{v\in {\cal I}} deg_3(v) \geq |{\cal I}|\left(\frac{5}{9} -10\eta \right){n\choose 2}.\end{equation} On the other hand considering the number of times each edge is counted, by (\ref{denI}) and (\ref{edges_T_to_I})
\begin{align}
\sum_{v\in {\cal I}} deg_3(v) &= e_3\left({\cal I},{V({\cal T})\choose 2}\right) + 2\cdot e_3\left(V({\cal T}), {{\cal I}\choose 2}\right) + 3\cdot e_3\left(H|_{\cal I}\right)\notag{}\\
&\leq e_3\left({\cal I},{V({\cal T})\choose 2}\right) + 2\cdot \left(\frac{1}{3}+5\eta\right)|V({\cal T})|{|{\cal I}|\choose 2} + 3\cdot \eta{|{\cal I}|\choose 3}\label{upper}
\end{align}
Combining (\ref{lower}) and (\ref{upper}) we get
\begin{align*}
e_3\left({\cal I},{V({\cal T})\choose 2}\right) &\geq |{\cal I}|\;\left(\frac{5}{9} -10\eta \right){n\choose 2} - 2\cdot \left(\frac{1}{3}+5\eta\right)|V({\cal T})|{|{\cal I}|\choose 2} - 3\cdot \eta{|{\cal I}|\choose 3}\\
&\geq |{\cal I}|\left( \left(\frac{5}{9} -10\eta \right){n\choose 2} - 2\cdot \left(\frac{1}{3}+5\eta\right)|V({\cal T})|\frac{|{\cal I}|}{2} - 3\cdot \eta{|{\cal I}|\choose 2}\right)\\
&\geq |{\cal I}| \left(\frac{5}{9} -38\eta \right){|V({\cal T})|\choose 2}.
\end{align*}
In (\ref{lower}) and (\ref{upper}) using $e_3\left({\cal I},{V({\cal T})\choose 2}\right) \leq |{\cal I}|\cdot {|V({\cal T})|\choose 2}$ and the fact that $\eta$ is a small constant we get $|V({\cal T})| \geq n/2$.
For a vertex $v\in V(H)$, consider the edges that $v$ makes with pairs of vertices within a tripartite graph. Since the size of a tripartite graph is $3t \leq 3\eta \sqrt{\log (\eta^2n)}$, the number of pairs of vertices of any tripartite graph $T_i \in \cal{T}$ is $O(\log n)$. Hence the total number of pairs of vertices within the tripartite graphs in ${\cal T}$ is $O(n\log n) = o\left({n\choose 2}\right)$. We ignore the at most $o\left({n\choose 3}\right)$ edges that use more than one vertex from a tripartite graph. By the above observation we still have \begin{equation} \label{min_cross_density} d_3\left({\cal I},{V({\cal T})\choose 2}\right)\geq \left(\frac{5}{9}-40\eta\right) \end{equation}
\noindent Let $T_i = (V_1^i,V_2^i,V_3^i)$ and $T_j = (V_1^j,V_2^j,V_3^j)$ be two tripartite graphs in ${\cal T}$. We say that $\cal{I}$ is {\em connected} to a pair of color classes $(V_p^i,V_q^j)$, $1\leq p,q \leq 3$, if $d_3({\cal I},(V_p^i \times V_q^j)) \geq 2\eta$. For $(T_i,T_j) \in {{\cal T}\choose 2}$ we define the {\em link graph}, $L_{ij}$, to be a balanced bipartite graph, where the vertex set of each color class of $L_{ij}$ corresponds to the color classes in $T_i$ and $T_j$. A pair of vertices is an edge in $L_{ij}$ iff ${\cal I}$ is connected to the corresponding pair of color classes.
We will use the following fact from \cite{HPS_PM_3u_vertDeg} for the analysis of the link graph.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.80]{classification_3x3bipgraphs}
\caption{\footnotesize{The balanced bipartite graphs on $6$ vertices, $B_{320}$ and $B_{311}$}}
\label{3x3bipgraphs}
\end{figure}
\begin{fact}\label{classification}
Let $B_{320}$ and $B_{311}$ be as defined in Figure \ref{3x3bipgraphs}. If $B$ is a balanced bipartite graph on $6$ vertices with $|E(B)|\geq 5$, then at least one of the following must be true.
\begin{itemize}\setlength{\itemsep}{-2pt}
\item $B$ has a perfect matching.
\item $B$ contains $B_{320}$ as a subgraph.
\item $B$ is isomorphic to $B_{311}$.
\end{itemize}
\end{fact}
\begin{claim}\label{expanding_byVolArg}
If there are $\eta {|{\cal T}|\choose 2}$ pairs of tripartite graphs $(T_i,T_j)$ such that $L_{ij}$ has a perfect matching or contains a $B_{320}$, then we can increase the size of ${\cal T}$ by at least $\eta^3 n/8$ vertices such that all tripartite graphs in the cover are balanced and are of the same size. \end{claim}
\vskip8pt
We can repeatedly use Claim \ref{expanding_byVolArg} to increase the size of our cover as long as the condition of Claim \ref{expanding_byVolArg} is satisfied. Hence in at most $8\eta^{-3}$ iterations we either get an optimal cover or we have that there are at most $\eta {|{\cal T}|\choose 2}$ pairs of tripartite graphs $(T_i,T_j)$ such that the link graph $L_{ij}$ has at least $5$ edges and contains either a perfect matching or $B_{320}$ as a subgraph. For simplicity we still denote the cover by ${\cal T}$ and ${\cal I} = V(H)\setminus V({\cal T})$.
\vskip8pt
\noindent Assume that ${\cal T}$ is not optimal. We show that if we cannot apply Claim \ref{expanding_byVolArg}, then for most of the pairs of tripartite graphs in ${{\cal T}\choose 2}$, the link graph has exactly $5$ edges and is isomorphic to $B_{311}$. Indeed, if there are many ($\geq \sqrt{\eta}$-fraction) pairs of tripartite graphs $(T_i,T_j)$ for which $|E(L_{ij})| \leq 4$ then there is another set of at least $\eta$-fraction of pairs of tripartite graphs $(T_i,T_j)$ for which $|E(L_{ij})| \geq 6$. To see this let ${\cal P}_4 = \{ (T_i,T_j)\in {{\cal T}\choose 2} : |E(L_{ij})| \leq 4\}$ and ${\cal P}_6 = \{ (T_i,T_j)\in {{\cal T}\choose 2} : |E(L_{ij})| \geq 6\}$. Note that for any $(T_i,T_j)\in {\cal P}_4$ by definition $d_3({\cal I}, V(T_i)\times V(T_j)) \leq (4/9 + 10\eta)$.
\vskip8pt
Now if $|{\cal P}_4| \geq\sqrt{\eta}{|{\cal T}| \choose 2}$ and $|{\cal P}_6| < {\eta}{|{\cal T}| \choose 2}$ then $$d_3\left({\cal I},{V({\cal T})\choose 2}\right)\leq \sqrt{\eta}\cdot \left(\frac{4}{9} + 10\eta\right) + \eta \cdot \frac{9}{9} + (1-\sqrt{\eta})\cdot \left(\frac{5}{9} + 8\eta\right)$$ a contradiction to (\ref{min_cross_density}). Therefore for at least $ (1-\sqrt{\eta}){|{\cal T}|\choose 2}$ pairs of tripartite graphs $(T_i,T_j)\in {{\cal T}\choose 2}$, $L_{ij}$ has at least $5$ edges. Since we cannot apply Claim \ref{expanding_byVolArg}, for at least $(1-2\sqrt{\eta}) {|{\cal T}|\choose 2}$ of them the $L_{ij}$ is isomorphic to $B_{311}$ and $|E(L_{ij})| =5$.
\begin{claim}\label{noExpand_extremal}
If there are at least $(1-2\sqrt{\eta}) {|{\cal T}|\choose 2}$ pairs of tripartite graphs $(T_i,T_j)$ such that $L_{ij}$ is isomorphic to $B_{311}$, then either
\begin{itemize}\setlength{\itemsep}{-3pt}
\item we can increase the size of ${\cal T}$ by at least $\eta^4 n$ vertices such that all tripartite graphs in the cover are balanced and are of the same size, or
\item $H$ is $\alpha$-extremal.
\end{itemize}
\end{claim} We repeatedly use Claim \ref{noExpand_extremal} to increase the size of our cover as long as the condition of Claim \ref{noExpand_extremal} is satisfied.
Hence proceeding in iterations applying the appropriate claim at each iteration, it is clear that in at most $8\eta^{-4}$ iterations we either get an optimal cover or that $H$ is $\alpha$-extremal. The optimal cover readily gives us an almost perfect matching. \hfill{} \end{proof}
\subsection{Proof of Claim \ref{few_2sided}} Let ${\cal T}'= \{T_1,T_2,\ldots\} \subset {\cal T} $ be the subcover of the at least $2$-sided tripartite graphs, so that $|V({\cal T}')|\geq \eta |V({\cal T})| \geq \eta^2 n$. Without loss of generality, say in each $T_i\in {\cal T'}$ we have
$$d_3\left(V_1^i, {{\cal I}\choose 2}\right)\geq 2\eta \;\; \text{ and } \;\; d_3\left(V_2^i, {{\cal I}\choose 2}\right)\geq 2\eta.$$ For each such $T_i$, we have $|V_1^i|=|V_2^i| = t\leq \eta\sqrt{\log (\eta^2n)} = \eta m$ and $|{\cal I}|\geq \eta^2 n > \eta^2 2^{m^2}$. By Lemma \ref{subsetPHP_hyper} (with parameter $\eta$ ) we find two disjoint balanced complete tripartite graphs $(U_1^i,A_1^i,B_1^i)$ and $(U_2^i,A_2^i,B_2^i)$ where $U_1^i$ and $U_2^i$ are subsets of $V_1^i$ and $V_2^i$ respectively, and $A_1^i, A_2^i, B_1^i \mbox{ and } B_2^i$ are disjoint subsets of ${\cal I}$ (see Figure \ref{2sided_extension}). The size of each color class of these tripartite graphs is $\eta |V_1^i|/2$ (we assume it is an integer). Note that we can find larger tripartite graphs but we keep the size of these new tripartite graphs $3\eta |V_1^i|/2$ only. We remove the vertices of these new tripartite graphs from their respective sets and add the tripartite graphs to our cover. Removing these vertices from $V_1^i$ and $V_2^i$ creates an imbalance in the leftover part of $T_i$ ($V_3^i$ has more vertices). To restore the balance in the leftover of $T_i$ we discard (add to ${\cal I}$) some arbitrary $|U_1^i| = |U_2^i| = \eta|V_1^i|/2$ vertices from $V_3^i$. The new tripartite graphs use at least $2|A_1^i| + 2|B_1^i| = 2\eta |V_1^i|$ vertices from ${\cal I}$. Therefore, after discarding the vertices from $V_3^i$ the net increase in the size of our cover is $3\eta |V_1^i|/2$, while all the tripartite graphs in ${\cal T}$ are balanced.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.55]{3unif_2sided_extended}
\caption{\footnotesize{Shaded triangles represent $2$-sided color classes. Solid triangles represent the new complete tripartite graphs. $V_3^i$ has extra vertices. }}
\label{2sided_extension}
\end{figure}
We proceed as above for the remaining tripartite graphs in ${\cal T}'$ one by one until we remove at least $\eta |{\cal I}|/8 \geq \eta^3 n/8$ vertices from ${\cal I}$. Since each $T_i \in {\cal T}'$ is at least $2$-sided we can continue as above. Indeed, until we remove $\eta |{\cal I}|/8$ vertices from ${\cal I}$, for the remaining tripartite graphs in ${\cal T}'$ and the remaining part of ${\cal I}$ the condition of Lemma \ref{subsetPHP_hyper} is still satisfied (with parameter $\eta$). Since $|V({\cal T}')|\geq \eta|V({\cal T})|\geq \eta^2 n$, it is easy to see that with this procedure we increase the size of our cover by at least $\eta ^3 n/8$ vertices. Note that the newly made tripartite graphs have color classes of size $\eta t/2$ while the remaining parts of tripartite graphs in ${\cal T}'$ and those in ${\cal T}\setminus {\cal T}'$ are bigger. To make all tripartite graphs in the cover of the same size, we split each tripartite graph in the cover, into disjoint balanced complete tripartite graphs, such that each color class of every tripartite graph is of size $\eta t/2$ (we assume divisibility). \hfill{} \ifhmode\unskip\nobreak\fi\quad\ifmmode\Box\else$\Box$\fi
\subsection{Proof of Claim \ref{expanding_byVolArg}}
We first find a set of disjoint pairs of tripartite graphs such that for each pair the link graph either has a perfect matching or contains a $B_{320}$. Consider the auxiliary graph where vertices are the tripartite graphs in ${\cal T}$ and two vertices are connected if for the corresponding tripartite graphs $T_i$ and $T_j$, $L_{ij}$ has a perfect matching or contains a $B_{320}$. Since this auxiliary graph has at least $\eta {|{\cal T}|\choose 2}$ edges, by Lemma \ref{folk_subgraph} and Lemma \ref{folk_matching}, we find a matching of size $\eta |{\cal T}|/3$ in this graph. Clearly, this matching corresponds to a set of disjoint pairs of tripartite graphs, ${\cal P}\subset {{\cal T}\choose 2}$, such that for the each pair $(T_i,T_j) \in {\cal P}$, $L_{ij}$ has a perfect matching or contains a $B_{320}$. The number of vertices in the $2|{\cal P}|$ tripartite graphs in ${\cal P}$ is at least $\eta n /3$ as $|V({\cal T})| \geq n/2$. We distinguish the following two cases for each pair in ${\cal P}$, to make new tripartite graphs using some vertices from ${\cal I}$.
\vskip 6pt
\textbf{Case 1: {\em $L_{ij}$ has a perfect matching.}} Without loss of generality assume that the perfect matching in $L_{ij}$ corresponds to the pairs $(V_1^i,V_1^j)$, $(V_2^i,V_2^j)$ and $(V_3^i,V_3^j)$. Note that by construction of $L_{ij}$, ${\cal I}$ is {\em connected} to $(V_1^i,V_1^j)$, $(V_2^i,V_2^j)$ and $(V_3^i,V_3^j)$. By definition of connectedness, $d_3({\cal I},V_1^i\times V_1^j)\geq 2\eta$ and $|V_1^i|=|V_1^j| = t\leq \eta\sqrt{\log (\eta^2n)} $ and $|{\cal I}|\geq \eta^2 n > \eta^2 2^{t^2}$, hence the $3$-partite $3$-graph $H(V_1^i,V_1^j,{\cal I})$ satisfies the conditions of Lemma \ref{3partVolArg}. Applying Lemma \ref{3partVolArg} (with parameter $\eta$) we find a complete balanced tripartite graph $T_1 = (U_1^i,U_1^j,{\cal I}_1)$, such that $$ {\cal I}_1 \subset {\cal I},\;\; {U}_1^i \subset V_1^i \mbox{ , } {U}_1^j \subset V_1^j \;\; \mbox{ and } |{\cal I}_1|=|U_1^i| = |U_1^j| = \frac{\eta}{4}\log t.$$ Similarly, we find such complete balanced tripartite graphs $T_2$ and $T_3$ in $H(V_2^i,V_2^j,{\cal I})$ and $H(V_3^i,V_3^j,{\cal I})$ respectively (see Figure \ref{fig:case1}). Clearly we can have that $T_1, T_2$ and $T_3$ are disjoint from each other as $|{\cal I}|\geq \eta^2 n$.
\begin{figure}[h!]
\centering
\includegraphics[page=1,width=3.0in]{B_320_B_PM_Extension}
\caption{\footnotesize{The new tripartite graphs $T_1$, $T_2$ and $T_3$ using vertices from $T_i, T_j$ and ${\cal I}$ when $L_{ij}$ has a perfect matching. Shaded rectangles represent pairs connected to ${\cal I}$. Solid lines represent the new complete tripartite graphs.}}
\label{fig:case1}
\end{figure}
We remove the vertices in $T_1,T_2$ and $T_3$ from their respective sets and add these three new tripartite graphs to our cover. In the remaining parts of $T_i$ and $T_j$ we remove another such set of $3$ disjoint tripartite graphs. By the definition of connectedness, until we remove at least $\eta^2 t/2$ vertices from each color class of $T_i$ and $T_j$ we still have $d_3({\cal I}, (V_1^i\times V_1^j))> \eta$. Hence by Lemma \ref{3partVolArg} we continue removing such sets of three tripartite graphs until from each color class of $T_i$ and $T_j$ we remove $\eta^2 t/2$ vertices. Note that the new tripartite graphs use $3\eta^2 t/2$ vertices from ${\cal I}$. Therefore adding these new tripartite graphs to our cover increases the size of the cover by $3\eta^2 t/2$ vertices while all tripartite graphs in the cover are still balanced.
\vskip 5pt
\textbf{Case 2: {\em $L_{ij}$ contains a $B_{320}$.}} Again without loss of generality assume that in the $B_{320}$ the vertices of degree $3$ and $2$ correspond to the color classes $V_1^i$ and $V_2^i$ respectively. Furthermore, we may assume that ${\cal I}$ is connected to $(V_1^i,V_1^j)$, $(V_1^i,V_2^j)$, $(V_2^i,V_2^j)$ and $(V_2^i,V_3^j)$. By definition of connectedness the $3$-partite subhypergraph of $H$ induced by ${\cal I}$ and any of the above four pairs of color classes satisfies the conditions of Lemma \ref{3partVolArg}.
Similarly as in the previous case, applying Lemma \ref{3partVolArg} (with parameter $\eta$) we find the following four disjoint complete tripartite graphs: $(V_{11}^i,V_{11}^j,{\cal I}_1)$, $(V_{22}^i,V_{32}^j,{\cal I}_2)$, $(V_{13}^i,V_{23}^j,{\cal I}_3)$ and $(V_{24}^i,V_{24}^j,{\cal I}_4)$ such that for $1\leq p\leq 3$ and $1\leq q \leq 4$, ${\cal I}_q, V_{pq}^i \mbox{ and }V_{pq}^j$ are disjoint subsets of ${\cal I}, V_{p}^i$ and $V_p^j$ respectively (see Figure \ref{fig:case2}).
For the sizes of these new tripartite graphs we have $$|{\cal I}_1|=|{\cal I}_2|=|V_{11}^i|=|V_{11}^j|=|V_{22}^i|=|V_{32}^j| = \frac{\eta \log t}{4} $$ and $$|{\cal I}_3|=|{\cal I}_4|=|V_{13}^i|=|V_{23}^j|=|V_{24}^i|=|V_{24}^j|= \frac{\eta\log t}{8}.$$
\begin{figure}[h!]
\centering
\includegraphics[page=2,width=3.0in]{B_320_B_PM_Extension}
\caption{\footnotesize{The new tripartite graphs using vertices from $T_i, T_j$ and ${\cal I}$ when $L_{ij}$ contains a $B_{320}$. Shaded rectangles represent pairs connected to ${\cal I}$. Solid lines represent the new complete tripartite graphs. $V_3^i$ has extra vertices.}}
\label{fig:case2}
\end{figure}
In the remaining parts of $T_i$ and $T_j$ we remove another such set of $4$ disjoint tripartite graphs. Again, by the definition of connectedness and Lemma \ref{3partVolArg}, we can continue this process until we remove $\eta^2 t/2$ vertices each from $V_1^i$ and $V_2^i$. Note that when we remove the vertices of these new tripartite graphs from their respective color classes in $T_i$ and $T_j$, the remaining part of $T_j$ is still balanced while it creates an imbalance in the remaining part of $T_i$, as $V_3^i$ has $\eta^2 t/2$ more vertices than the other two color classes. To restore the balance we discard (add to ${\cal I}$) an arbitrary subset of vertices in $V_3^i$ (of size $\eta^2 t/2$). These new tripartite graphs use at least $\eta^2 t$ vertices from ${\cal I}$. Therefore, after discarding the vertices from $V_3^i$ the net increase in the number of vertices in the cover is $\eta^2 t/2$.
\vskip 8pt
We proceed in a similar manner for all pairs in ${\cal P}$ one by one until we remove at least $\eta |{\cal I}|/8 \geq \eta^3 n/8$ vertices from ${\cal I}$. Applying the appropriate procedure in Case 1 or Case 2 we increase the size of the cover by ${\eta}^{3}n/8$ vertices (as the number of vertices in the tripartite graphs of ${\cal P}$ is at least $\eta n/3$ and for every pair the increase is at least $\eta^2 t/2$) while keeping all the tripartite graphs in the cover balanced. Note that even after removing $\eta |{\cal I}|/8$ vertices from ${\cal I}$, for the remaining pairs $(T_i,T_j) \in {\cal P}$ the conditions of Lemma \ref{3partVolArg} are satisfied (with parameter $\eta$).
Again as above we make all tripartite graphs in the cover of the same size, by arbitrarily splitting each larger tripartite graph into disjoint tripartite graphs with color classes of size $\eta \log t /4$. \hfill{} \ifhmode\unskip\nobreak\fi\quad\ifmmode\Box\else$\Box$\fi
\subsection{Proof of Claim \ref{noExpand_extremal}}
Similarly as above, first consider the auxiliary graph where vertices are the tripartite graphs in ${\cal T}$ and two vertices are connected if for the corresponding tripartite graphs $T_i$ and $T_j$, $L_{ij}$ is isomorphic to $B_{311}$. This auxiliary graph has $|{\cal T}|$ vertices and at least $(1-2\sqrt{\eta}) {|{\cal T}|\choose 2}$ edges. By Lemma \ref{folk_subgraph} and Lemma \ref{folk_matching}, in this graph we can find a matching of size $(1-2\sqrt{\eta}) |{\cal T}|/2$. This matching corresponds to a set of disjoint pairs of tripartite graphs, ${\cal P}_g\subset {{\cal T}\choose 2}$, such that for each pair $(T_i,T_j) \in {\cal P}_g$, $L_{ij}$ is isomorphic to $B_{311}$. Let the set of tripartite graphs in ${\cal P}_g$ be ${\cal T}_g$ and let $V({\cal P}_g)$ be the set of vertices in these tripartite graphs; we have \begin{equation}\label{numVertices_good} |V({\cal P}_g)| \geq (1-2\sqrt{\eta})|V({\cal T})|.\end{equation}
\noindent Without loss of generality assume that for each $(T_i,T_j) \in {\cal P}_g$, the vertices of degree $3$ in the $B_{311}$ in $L_{ij}$ correspond to the color classes $V_1^i$ and $V_1^j$. Thus, by the definition of connectedness, for each $(T_i,T_j) \in {\cal P}_g$ (see Figure \ref{all_B311}) \begin{equation}\label{connectedSumm} \text{ for } 1\leq q \leq 3 \;\;\; d_3\left({\cal I},(V_1^i\times V_q^j)\right)\geq 2\eta \;\;\text{ and } d_3\left({\cal I},(V_q^i\times V_1^j)\right)\geq 2\eta.\end{equation} Let $V_1 = \bigcup\limits_{T_i \in {\cal T}_g} V_1^i$ ($V_2$ and $V_3$ are similarly defined). We have \begin{equation}\label{emptyI_to_V2V3} d_3\left({\cal I},{V_2\cup V_3 \choose 2}\right) < 2\eta. \end{equation}
\begin{figure}[h!]
\centering
\includegraphics[page=4]{B_320_B_PM_Extension}
\caption{\footnotesize{The pairs of tripartite graphs in ${\cal P}_g$. All $L_{ij}$'s are isomorphic to $B_{311}$. Shaded rectangles represent pairs connected to ${\cal I}$.}}
\label{all_B311}
\end{figure}
For each $(T_i,T_j) \in {\cal P}_g$ the conditions of Lemma \ref{3partVolArg} are satisfied for $H(V_1^i,V_2^j,{\cal I})$, $H(V_1^i,V_3^j,{\cal I})$, $H(V_2^i,V_1^j,{\cal I})$ and $H(V_3^i,V_1^j,{\cal I})$. We find disjoint complete tripartite graphs, one in each of these four $3$-partite $3$-graphs (see Figure \ref{B311_ext}). The size of each color class in the new tripartite graphs is $\dfrac{\eta\log t}{4}$. Again we continue removing such sets of four tripartite graphs until we remove $\eta^3 t$ vertices each from $V_1^i$ and $V_1^j$. By construction, these new tripartite graphs remove $\eta^3 t/2$ vertices each from $V_2^i$, $V_3^i$, $V_2^j$ and $V_3^j$.
\begin{figure}[h!]
\centering
\includegraphics[page=3,width=3.1in]{B_320_B_PM_Extension}
\caption{\footnotesize{The new tripartite graphs using vertices from $T_i, T_j$ and ${\cal I}$ when $L_{ij}$ is isomorphic to $B_{311}$. Shaded rectangles represent pairs connected to ${\cal I}$. Solid lines represent the new complete tripartite graphs.}}
\label{B311_ext}
\end{figure}
These new tripartite graphs use $2\eta^3t$ vertices from ${\cal I}$. Removing these new tripartite graphs creates an imbalance among the color classes of the remaining parts of $T_i$ and $T_j$; to restore the balance we would have to discard $\eta^3 t/2$ vertices from each color class of $T_i$ and $T_j$ except $V_1^i$ and $V_1^j$. This would leave us with no net gain in the size of the cover. Therefore we will not discard any vertices from these color classes at this time and say that these color classes have $\eta^3 t/2$ extra vertices.
We proceed in a similar manner for each pair in ${\cal P}_g$. Since in total we will use $|{\cal T}_g|\cdot \eta^3 t \leq \eta^3 n \leq \eta |{\cal I}|$ vertices from ${\cal I}$, for the remaining pairs of tripartite graphs in ${\cal P}_g$ the conditions of Lemma \ref{3partVolArg} (with parameter $\eta$) are satisfied. So we can continue to make new tripartite graphs.
\vskip 6pt
\noindent Let $V_1^g\subset V_1,V_2^g\subset V_2,V_3^g\subset V_3$ be the union of the corresponding color classes of remaining parts of tripartite graphs in ${\cal T}_g$. Since the number of vertices used in the newly made tripartite graphs above is at most $\eta^3 n$, by (\ref{numVertices_good}) we have \begin{equation}\label{sizeV2V3g} |V_2^g| = |V_3^g| \geq (1-3\sqrt{\eta})|V({\cal T})|/3.\end{equation} Next we show that if at least one of the following density conditions is true then we can increase the size of our cover: \begin{equation}\label{emptyV2V3} d_3\left(V_2^g\cup V_3^g\right) \geq \sqrt{\eta},\end{equation} \begin{equation}\label{emptyV2V3_to_I} d_3\left(V_2^g\cup V_3^g,{{\cal I}\choose 2}\right) \geq \sqrt{\eta}.\end{equation}
\noindent Assume that $d_3(V_2^g\cup V_3^g) \geq \sqrt{\eta}$. We will show that in $H|_{V_2^g\cup V_3^g}$ there exist disjoint balanced complete tripartite $3$-graphs of size $\eta^3 t/4$ (half the number of extra vertices in a color class) covering at least $\eta^4 n$ vertices. Furthermore, we can find such tripartite graphs in $H|_{V_2^g\cup V_3^g}$ so that from no color class do we use more than its number of extra vertices.
\vskip 6pt
To see this, call a color class {\em `full'} if these new tripartite graphs use at least $\eta^3 t/4$ of its vertices. We remove all vertices of each full color class and find tripartite graphs of size $\eta^3 t/4$ in the remaining vertices. Let $k$ be the number of full color classes at a given time and suppose that the total number of vertices covered by the new tripartite graphs is at most $\eta^4 n$. Then $k \cdot \eta^3 t/4 < \eta^4 n$, which implies that $tk < 4\eta n$, i.e. the total number of vertices in the full color classes is at most $4\eta n$. Let $H'$ be the remaining part of $H|_{V_2^g\cup V_3^g}$ (after removing all vertices in every full color class). By the above observation $|V(H')| \geq (1-3\sqrt{\eta})|V({\cal T})|/3 - 4\eta n - \eta^4n$ and $d_3(H') \geq \sqrt{\eta}- 6(\eta +\eta^4) \geq \sqrt{\eta}/2$. Hence by Lemma \ref{hyperKST} we can continue to find complete tripartite graphs of size $\eta^3 t/4$ in $H'$. Note that we do not use more than the number of extra vertices from any color class.
\vskip 6pt
We remove some of these new tripartite graphs so that the total number of vertices covered by them is at least $\eta^4n$. Now adding these new tripartite graphs to our cover increases the size of our cover by at least $\eta^4 n$ vertices, as we did not discard vertices from $V_2^g\cup V_3^g$ for rebalancing. Instead the extra vertices are part of these new tripartite graphs. Now in the remaining parts of each $T_i \in {\cal T}_g$ we arbitrarily remove some extra vertices to restore the balance in the tripartite graphs and as above make all tripartite graphs of the same size. \\
\noindent On the other hand if $d_3(V_2^g\cup V_3^g,{{\cal I}\choose 2}) \geq \sqrt{\eta}$, then since both $|{\cal I}|$ and $|V_2^g\cup V_3^g|$ are at least $\eta^2 n$, by Lemma \ref{hyperKST} we find disjoint complete tripartite graphs with one color class in $V_2^g\cup V_3^g$ and two color classes in ${\cal I}$ covering at least $\eta ^4n$ vertices. Again as above we make these tripartite graphs so as to not use more than the number of extra vertices in any color class. Adding these tripartite graphs increases the size of our cover by at least $\eta^4 n$ vertices.
\vskip 6pt
If none of the above density conditions hold, then by (\ref{emptyI_to_V2V3}), (\ref{emptyV2V3}), (\ref{emptyV2V3_to_I}) and the fact that $d_3({\cal I})<\eta$, we get $d_3(V_2^g \cup V_3^g \cup {\cal I}) < 10\sqrt{\eta} < \alpha$. By (\ref{sizeV2V3g}) we have $|V_2^g \cup V_3^g \cup {\cal I}| \geq (2/3 - \alpha)n$. Hence $H$ is $\alpha$-extremal. This concludes the proof of Claim \ref{noExpand_extremal}. \hfill{} \ifhmode\unskip\nobreak\fi\quad\ifmmode\Box\else$\Box$\fi
\section{Proof of Theorem \ref{extCaseTheorem}}\label{extCase}
Let $\alpha$ be given and let $n\in 3\mathbb{Z}$ with $n \gg \dfrac{1}{\alpha}$. Our hypergraph $H$ is $\alpha$-extremal, i.e. there exists a $B\subset V(H)$ such that
\begin{itemize}
\item $|B|\geq (\frac{2}{3}-\alpha) n$
\item $d_3(B) < \alpha$.
\end{itemize}
Let $A=V(H)\setminus B$; by shifting some vertices between $A$ and $B$ we can assume that $|A|=n/3$ and $|B|=2n/3$ (we keep the notation $A$ and $B$). It is easy to see that we still have \begin{equation}\label{extDen}d_3(B) < 6\alpha.\end{equation}
We have \begin{equation}\label{minDeg_ext} \delta_1(H) \geq {n-1\choose 2} - {2n/3\choose 2}+1={n-1\choose 2} - {|B|\choose 2} + 1.\end{equation} Together with (\ref{extDen}), this implies that almost all $3$-sets of $V(H)$ are edges of $H$ except $3$-sets of $B$. Thus, roughly speaking, almost every vertex $b\in B$ makes edges with almost all pairs of vertices in ${A\choose 2}$ and with almost all pairs in $(B\setminus\{b\}) \times A$, and vice versa. Therefore, we will basically match every vertex in $A$ with a distinct pair of vertices in ${B\choose 2}$ to get the perfect matching. However, there may be a few vertices making edges with different pairs of vertices than the typical ones. Hence we will first match those few vertices and then we will use a K{\"o}nig-Hall type argument to match every remaining vertex in $A$ with a distinct pair of remaining vertices in $B$.
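As a purely numerical illustration (added here; it plays no role in the proof), the threshold in (\ref{minDeg_ext}) can be evaluated for a sample value of $n$: its ratio to the total number ${n-1\choose 2}$ of pairs tends to $5/9$, quantifying the statement that every vertex makes edges with most pairs.
\begin{verbatim}
from math import comb

n = 30                                                  # any n divisible by 3
threshold = comb(n - 1, 2) - comb(2 * n // 3, 2) + 1    # the bound in (minDeg_ext)
print(threshold, comb(n - 1, 2), threshold / comb(n - 1, 2))   # ratio tends to 5/9
\end{verbatim}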
\begin{proof}[Proof of Theorem \ref{extCaseTheorem}]
\vskip4pt
We first identify vertices in $A$ and $B$ that do not satisfy the typical degree conditions as follows.
\begin{definition}
\begin{align*}
\bullet\;\; & X_A \mbox{ {\em (Exceptional vertices in $A$)}} := \{a\in A\; | \;deg_3\left(a,{B\choose 2}\right) < \left(1-\sqrt{\alpha}\right){|B|\choose 2}\}\\
\bullet\;\; & X_B \mbox{ {\em (Exceptional vertices in $B$)}} := \{b\in B \; |\; deg_3(b,(B\times A)) < (1-\sqrt{\alpha})|A|(|B|-1)\}\\
\bullet\;\; & S_A \mbox{ {\em (Strongly Exceptional vertices in $A$)}} := \{a\in A \;|\; deg_3\left(a,{B\choose 2}\right) < {\alpha}^{1/3}{|B|\choose 2}\}\\
\bullet\;\; & S_B \mbox{ {\em (Strongly Exceptional vertices in $B$)}} := \{b\in B \;|\; deg_3(b,(B\times A)) < {\alpha}^{1/3}|A|(|B|-1)\}
\end{align*}
\end{definition}
\noindent We will show that there are few vertices in $X_A$ and $X_B$ and very few vertices in $S_A$ and $S_B$.
\begin{claim} We have the following bounds on the sizes of the sets defined above.
\begin{enumerate}[(i)]
\item $|X_A|\leq 18\sqrt{\alpha}|A|$.
\item $|X_B|\leq 18\sqrt{\alpha}|B|$.
\item $|S_B|\leq 40\alpha|B|$.
\item $|S_A|\leq 40\alpha|A|$.
\end{enumerate}
\end{claim}
\proof
We only prove the bounds on $|X_B|$ and $|S_A|$ (the others are similar). Assume that $|X_B|\geq 18\sqrt{\alpha}|B|$. By (\ref{minDeg_ext}) and the definition of $X_B$, for any vertex $b\in X_B$, $deg_3\left(b,{B\choose 2}\right) \geq \sqrt{\alpha}|A|(|B|-1)/2$. Therefore for the number of edges inside $B$ we have $$ 3|E(B)|\geq |X_B|\cdot\sqrt{\alpha}|A|(|B|-1)/2 \geq 9\sqrt{\alpha}|B|\cdot\sqrt{\alpha}|A|(|B|-1) \geq 9\alpha |B|(|B|-1)|A| \geq 27\alpha {|B|\choose 3},$$ where the last inequality uses $|A|=|B|/2$. This implies that $d_3(B) \geq 9\alpha$, a contradiction to (\ref{extDen}).
\vskip 5pt
To see the bound on $|S_A|$, note that by (\ref{minDeg_ext}), if there is a set of $k$ vertices $\{a_1,a_2,\ldots, a_k\}$ in $A$ and a pair $\{b_1,b_2\}$ of vertices in $B$ such that for $1\leq i \leq k$, $\{a_i,b_1,b_2\} \notin E(H)$, then to make up the minimum degree of $b_1$, there are at least $k+1$ edges in $E(B)$ containing $b_1$. Similarly, (not necessarily distinct) $k+1$ edges exist in $B$ to make up the minimum degree of $b_2$. This, together with the fact that every $a\in S_A$ does not make edges with at least $(1-\alpha^{1/3}){|B|\choose 2}$ pairs of vertices in $B$, implies that $$3|E(B)| > |S_A|(1-\alpha^{1/3}){|B|\choose 2}.$$ If $|S_A|>40\alpha|A|$, then $$3|E(B)| > 40\alpha|A|(1-\alpha^{1/3}){|B|\choose 2} = 40\alpha(1-\alpha^{1/3}) \frac{|B|}{2}{|B|\choose 2} \geq 40\alpha{|B|\choose 3},$$ where the last inequality holds when $\alpha$ is a small constant; this contradicts (\ref{extDen}). \hfill{}\ifhmode\unskip\nobreak\fi\quad\ifmmode\Box\else$\Box$\fi
\vskip4pt
\vskip4pt
\begin{claim}\label{strongExceptional_matching}
There exists a matching $M$ in $H$ such that $M$ covers all the strongly exceptional vertices and if $A' = A\setminus V(M)$, $B'=B\setminus V(M)$ and $n'=n-|V(M)|$, then $|B'| = 2|A'| = 2n'/3$.
\end{claim}
\begin{proof}
We first show that if both $S_B$ and $S_A$ are non-empty, then we can reduce the sizes of both. To see this assume $b\in S_B$ and $a\in S_A$. By definition, $deg_3(a,{B\choose 2})<\alpha^{1/3}{|B| \choose 2}$ and $deg_3(b,(B\times A)) < {\alpha}^{1/3}|A|(|B|-1)$. Hence by (\ref{minDeg_ext}), $deg_3(a,A\times B) \geq (1-2\alpha^{1/3})(|A|-1)|B|$ and $deg_3(b,{B\choose 2}) \geq (1-2\alpha^{1/3}){|B| \choose 2}$. We can exchange $a$ with $b$ and reduce the size of both $S_B$ and $S_A$, as neither $a$ nor $b$ is {\em strongly exceptional} in its new set. Applying the above procedure we take the sets $A$ and $B$ such that $|S_A|+|S_B|$ is as small as possible (and one of the sets $S_A$ and $S_B$ is empty).
\vskip4pt First assume that $S_B \neq \emptyset$. As observed above, by the minimum degree condition and the definition of $S_B$, for every vertex $b \in S_B$, $deg_3(b,{B\choose 2}) \geq (1-2{\alpha}^{1/3}){|B|\choose 2}$. Since $|S_B|$ is very small and every vertex in $S_B$ makes many edges inside $B$, we can greedily find $|S_B|$ vertex disjoint edges in $H|_B$ each containing exactly one vertex of $S_B$. Indeed, after removing at most $|S_B|-1$ disjoint edges from $H|_B$, each remaining vertex of $S_B$ still makes edges with at least $(1-2{\alpha}^{1/3}){|B|\choose 2} - {|S_B|\choose 2} - |B|\times 3|S_B| > 1$ pairs of the remaining vertices. Hence we can greedily match each vertex in $S_B$ in a matching $M$ in $H|_B$. To keep the ratio of the sizes of the remaining parts of $A$ and $B$ intact, we add to $M$ another $|S_B|$ vertex disjoint edges such that each edge has a vertex in $B\setminus X_B$ and the two other vertices are in $A$. We can clearly find such edges because by (\ref{minDeg_ext}) and (\ref{extDen}) almost every vertex in $B\setminus X_B$ makes edges with at least $(1-2\sqrt{\alpha}){|A|\choose 2}$ pairs of vertices in $A$ (as otherwise $d_3(B)$ would be very large). We remove the vertices of $M$ from $A$ and $B$ and by construction $n' = n-6|S_B|$, $|A'|=|A|-2|S_B|$ and $|B'|=|B|-4|S_B|$, hence $|B'| = 2|A'| = 2n'/3$.
\vskip8pt
In case $S_A \neq \emptyset$ (and $S_B = \emptyset$), we will find a matching such that each edge contains a vertex in $S_A$. Note that in this case for any vertex $b\in B$ we have $deg_3(b,{B\choose 2}) < \alpha^{1/3}{|B|\choose 2}$. Indeed, if there is a vertex $b\in B$ such that $deg_3(b,{B\choose 2}) \geq \alpha^{1/3}{|B|\choose 2}$ then we can replace $b$ with any vertex $a\in S_A$ to reduce the size of $S_A$ (as the vertex $b$ is not {\em strongly exceptional} in $A$ and $a$ is not {\em strongly exceptional} in the set $B$). We say that vertices in $S_A$ are exchangeable with vertices in $B$ and consider the whole set $S_A\cup B$. By (\ref{minDeg_ext}) for any vertex $v\in S_A\cup B$
$$deg_3\left(v,{S_A \cup B\choose 2}\right)\geq {|S_A \cup B|-1\choose 2} -{|B|\choose 2}+1= {|S_A|-1\choose 2} + (|S_A|-1)|B| + 1.$$
We will prove by induction on $|S_A|$ that we can find a matching $M$ in $H|_{S_A\cup B}$ of size $|S_A|$. Note that this also follows from a result of Bollob\'as, Daykin and Erd\H{o}s \cite{BDE}. If $|S_A| = 1$ then clearly we get an edge in $H|_{S_A\cup B}$ and we are done. Now assume that $|S_A|>1$ and that the assertion is true for smaller values of $|S_A|$. Let $v$ be a maximum degree vertex in $H|_{S_A\cup B}$ and let $H'= H|_{S_A\cup B\setminus\{v\}}$.
For any vertex $u\in V(H')$ the number of pairs of vertices in $S_A \cup B$, containing $v$ but not $u$, is at most $|S_A\cup B|-2$. Therefore we get that \begin{align*} \delta_1(H') &\geq {|S_A|-1\choose 2} + (|S_A|-1)|B| + 1 -(|S_A| + |B| -2 )\\ &= {|S_A|-2\choose 2} + (|S_A|-2)|B| + 1\end{align*} where the last equality follows by a simple calculation.
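For the reader's convenience, the ``simple calculation'' invoked above can be checked symbolically; the following lines are an added aside (here $s$ stands for $|S_A|$ and $B$ for $|B|$, and the binomial coefficients are written out).
\begin{verbatim}
import sympy as sp

s, B = sp.symbols('s B')
lhs = (s - 1)*(s - 2)/2 + (s - 1)*B + 1 - (s + B - 2)   # C(s-1,2)+(s-1)B+1-(s+B-2)
rhs = (s - 2)*(s - 3)/2 + (s - 2)*B + 1                 # C(s-2,2)+(s-2)B+1
print(sp.simplify(lhs - rhs))                           # prints 0
\end{verbatim}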
Hence by the induction hypothesis there is a matching in $H'$ of size at least $|S_A| - 1$. Let $M_1$ be a maximum matching in $H'$. If $|M_1|\geq|S_A|$ then we are done, so assume that $|M_1| = |S_A|-1$ and every edge in $H'$ intersects $V(M_1)$. This gives us a lower bound on the maximum degree of a vertex in $V(M_1)$ and since $v$ is the overall maximum degree vertex we get
\begin{align*} deg_3(v) &\geq \frac{|E(H')|}{|V(M_1)|}\geq \frac{|V(H')|\cdot\delta_1(H')}{3|V(M_1)|}\\ &\geq \dfrac{\left(|S_A|+ |B|-1\right) \left({|S_A|-2\choose 2} + \left(|S_A|-2\right)|B| + 1\right)}{9\left(|S_A|-1\right)} \\ &> \dfrac{\left(|S_A|+ |B|-1\right) \left(27\left(|S_A| -1\right)^2\right)}{9\left(|S_A|-1\right)} \\&> 3\left(|S_A|-1)(|S_A| +|B|-2\right)\end{align*}
where the last inequality uses the fact that $|B|$ is much larger compared to $|S_A|$. Since the last quantity is larger than the number of pairs that use at least one vertex from $V(M_1)$, there is a pair of vertices in $S_A \cup B\setminus V(M_1)$ that makes an edge with $v$. Adding this edge to $M_1$ we get the required matching $M$. Using the fact that vertices in $S_A$ are exchangeable with vertices in $B$, we get $n' = n-3|S_A|$, $|A'|=|A|-|S_A|$ and $|B'|=|B|-2|S_A|$, hence we get $|B'| = 2|A'| = 2n'/3$. \hfill{} \end{proof}
Having dealt with the \textit{strongly exceptional} vertices, the vertices of $X_A$ and $X_B$ in $A'$ and $B'$ can be eliminated using the fact that their sizes are much smaller than the crossing degrees of vertices in those sets. We have $|X_A|\leq 18\sqrt{\alpha}|A|$ while for any vertex $a\in X_A$, we have that $deg_3(a,{B'\choose 2}) \geq \alpha^{1/3}{|B'|\choose 2}/2$ (because $a\notin S_A$). For each $a\in X_A$ we remove a disjoint edge that contains $a$ and two vertices from $B'$. This can be done greedily since, after covering $|X_A|-1$ vertices of $X_A$, the total number of pairs of vertices in $B'$ that intersect the removed edges is at most $50 \sqrt{\alpha} {|B'|\choose 2}$. So there are pairs of vertices in the remaining part of $B'$ making edges with the next vertex of $X_A$. Similarly for each $b \in X_B$ we remove an edge that contains $b$, one vertex from $A'$ and one vertex from $B'$ distinct from $b$. Clearly we can find such disjoint edges by a simple greedy procedure. Hence we have removed a partial matching that covers all vertices in the exceptional sets.
\vskip4pt
\noindent Denote the leftover sets of $A'$ and $B'$ by $A''$ and $B''$ respectively. By construction $|B''|=2|A''|$. We will find $|A''|$ disjoint edges each containing one vertex from $A''$ and two vertices from $B''$. Note that for every vertex $a\in A''$ we have $deg_3(a,{B''\choose 2})\geq (1-2\alpha^{1/3}){|B''|\choose 2}$ (as $a\notin X_A$). We say that a pair $(b_1,b_2)$ of vertices in $B''$ is {\em good} if $(b_1,b_2,a)\in E(H)$ for at least $(1-40{\alpha}^{1/4})|A''|$ vertices $a$ in $A''$. Any vertex $b \in B''$ makes a good pair with at least $(1-40{\alpha}^{1/4})|B''|$ other vertices in $B''$ (again this is so because $b\notin X_B$).
We randomly select a set $P_1$ of $100{\alpha}^{1/4} |B''|$ vertex disjoint good pairs of vertices in $B''$. By the above observation, with high probability every vertex $a\in A''$ makes edges in $H$ with at least $3|P_1|/4$ pairs in $P_1$ and every pair in $P_1$ makes an edge with at least $3|A''|/4$ vertices in $A''$. In $B''\setminus V(P_1)$, every vertex still makes {\em good} pairs with almost all other vertices. We pair up each vertex of $B''\setminus V(P_1)$ with a distinct vertex in $B''\setminus V(P_1)$ such that they make a good pair. This can be done by considering a $2$-graph with vertex set $B''\setminus V(P_1)$ and all the good pairs as its edges. A simple application of Dirac's theorem on this $2$-graph gives such a perfect matching of vertices in $B''\setminus V(P_1)$. Let the set of these pairs be $P_2$.
Now construct an auxiliary bipartite graph $G(L,R)$, such that $L= A''$ and vertices in $R$ correspond to the pairs in $P_1$ and $P_2$. A vertex $a_k\in L$ is connected to a vertex $y\in R$ if the pair corresponding to $y$ (say $b_i,b_j$) is such that $(b_i,b_j,a_k)\in E(H)$. We will show that $G(L,R)$ satisfies the K{\"o}nig-Hall criteria. Considering the sizes of $A''$ and $P_1$ it is easy to see that for every set $Q\subset R$ if $|Q|\leq (1-40\alpha^{1/4})|A''|$ then $|N(Q)|\geq |Q|$. When $|Q|>(1-40\alpha^{1/4})|A''|$ (using $|B''|=2|A''|$), any such $Q$ must have at least $6|P_1|/10$ vertices corresponding to pairs in $P_1$, hence with high probability $|N(Q)| = |L| \geq |Q|$. Therefore there is a perfect matching of $R$ into $L$. This perfect matching in $G$ readily gives us a matching in $H$ covering all vertices in $A''$ and $B''$, which together with the edges we already removed (covering exceptional and strongly exceptional vertices) is a perfect matching in $H$. \hfill{} \end{proof}
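The K{\"o}nig-Hall step above can also be phrased algorithmically. The following Python sketch is an added illustration with hypothetical inputs: \texttt{pairs} stands for $P_1\cup P_2$ and \texttt{is\_edge} for membership in $E(H)$; it matches every vertex of $A''$ to a distinct pair by augmenting paths, succeeding exactly when Hall's condition holds.
\begin{verbatim}
def match_A_to_pairs(A, pairs, is_edge):
    """A: vertices of A''; pairs: list of pairs (b1, b2); is_edge(a, b1, b2)
    returns True iff {a, b1, b2} is an edge of H."""
    match = {}                                  # index of a pair -> matched vertex of A

    def augment(a, seen):
        for p, (b1, b2) in enumerate(pairs):
            if p in seen or not is_edge(a, b1, b2):
                continue
            seen.add(p)
            # use pair p for a if it is free or its current vertex can be re-matched
            if p not in match or augment(match[p], seen):
                match[p] = a
                return True
        return False

    for a in A:
        if not augment(a, set()):
            return None                         # Hall's condition fails
    return [(v, pairs[p]) for p, v in match.items()]
\end{verbatim}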
| {
"timestamp": "2012-07-10T02:01:35",
"yymm": "1101",
"arxiv_id": "1101.5830",
"language": "en",
"url": "https://arxiv.org/abs/1101.5830",
"abstract": "A perfect matching in a 3-uniform hypergraph on $n=3k$ vertices is a subset of $\\frac{n}{3}$ disjoint edges. We prove that if $H$ is a 3-uniform hypergraph on $n=3k$ vertices such that every vertex belongs to at least ${n-1\\choose 2} - {2n/3\\choose 2}+1$ edges then $H$ contains a perfect matching. We give a construction to show that this result is best possible.",
"subjects": "Discrete Mathematics (cs.DM); Combinatorics (math.CO)",
"title": "Perfect matching in 3-uniform hypergraphs with large vertex degree",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9890130596362788,
"lm_q2_score": 0.7154239897159438,
"lm_q1q2_score": 0.7075636690061593
} |
https://arxiv.org/abs/0901.2601 | Non-Defectivity of Grassmannians of planes | Let $Gr(k,n)$ be the Plücker embedding of the Grassmann variety of projective $k$-planes in $\mathbb{P}^n$. For a projective variety $X$, let $\sigma_s(X)$ denote the variety of its $s-1$ secant planes. More precisely, $\sigma_s(X)$ denotes the Zariski closure of the union of linear spans of $s$-tuples of points lying on $X$. We exhibit two functions $s_0(n)\le s_1(n)$ such that $\sigma_s(Gr(2,n))$ has the expected dimension whenever $n\geq 9$ and either $s\le s_0(n)$ or $s_1(n)\le s$. Both $s_0(n)$ and $s_1(n)$ are asymptotic to $\frac{n^2}{18}$. This yields, asymptotically, the typical rank of an element of $\wedge^{3}{\mathbb C}^{n+1}$. Finally, we classify all defective $\sigma_s(Gr(k,n))$ for $s\le 6$ and provide geometric arguments underlying each defective case. | \section{Introduction}
Let $X \subset \P N$ be a non-degenerate projective variety. The {\it $s$-secant variety} $\sigma_s(X)$ is defined to be the Zariski closure of the union of linear spans of $s$-tuples of points lying on $X$ (see \cite{Z}).
Note that with this notation, $\sigma_2(X)$ is the usual variety of secant lines of $X$. There is a smallest $s$ such that $\sigma_s(X)=\P N$ leading to a natural filtration:
\begin{eqnarray*}
\label{seq:filtration}
X=\sigma_1(X)\subset\sigma_2(X)\subset\sigma_3(X)\subset\cdots \subset \sigma_s(X) =\P N.
\end{eqnarray*}
Let $Gr(k,n)$ denote the Grassmannian of projective $k$-planes in $\P n$. For the purposes of this paper, we will assume that $Gr(k,n)$ is embedded through the Pl\"ucker map in $\P N$ with $N={{n+1}\choose{k+1}}-1$. We can identify points in $\P N$ with general skew-symmetric tensors and points on $Gr(k,n)$ as decomposable skew-symmetric tensors. An element $\omega\in \wedge^{k+1}\mathbb{C}^{n+1}$ has {\it rank r} if it can be written as a linear combination of
$r$ decomposable skew-symmetric tensors (but not fewer). In other words, $\omega=\sum_{t=1}^r v_{1,t}\wedge\ldots\wedge v_{k+1,t}$ with $v_{i,j}\in{\mathbb C}^{n+1}$. The higher secant variety $\sigma_s(Gr(k,n))$ can be viewed as a compactification of the ``parameter space" for skew-symmetric tensors of rank less than or equal to $s$. An interesting problem related to the rank of skew-symmetric tensors is to find the least integer $\underline R(k,n)$ such that a generic skew-symmetric tensor has rank less than or equal to $\underline R(k,n)$. The integer $\underline R(k,n)$ is called the {\it typical rank} of $\wedge^{k+1}\mathbb{C}^{n+1}$ (also called the {\it essential rank} in \cite{E}). Note that the filtration of skew-symmetric tensors by their ranks leads naturally to an identification of $\underline R(k,n)$ as the least integer $s$ such that $\sigma_s(Gr(k,n))=\P N$. See \cite{LM} for a recent survey on the subject and its applications.
If $k=1$ then $X$ is a Grassmannian of lines and $\sigma_s(X)$ corresponds to the locus of skew-symmetric morphisms of rank less than or equal to $s$. It is well known that
a skew-symmetric morphism, corresponding to a skew-symmetric matrix of rank $2s$, can be written as the sum of $s$ decomposable skew-symmetric tensors (but not fewer). In particular, we have $\underline R(1,n)=\lceil\frac{n+1}{2}\rceil$. Thus we may assume
that $k\ge 2$.
It is straightforward to show that $$\frac{{{n+1}\choose{k+1}}}{(k+1)(n-k)+1}\le\underline R(k,n)$$
(see the next section). In particular, we have
$$\frac{n^2}{18}+O(n)\le\underline R(2,n).$$
On the other hand, Ehrenborg found in \cite{E}, Corollary 7.9, the upper bound
$$\underline R(2,n)\le \frac{(n^2+3)}{12}+1$$
by using results on Steiner triple systems.
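For concreteness (an added aside, not part of the argument), the two displayed bounds are easy to tabulate: the parameter-count lower bound already grows like $n^2/18$, while Ehrenborg's upper bound grows like $n^2/12$, and \thmref{main} below closes this asymptotic gap.
\begin{footnotesize}
\begin{verbatim}
from math import comb, ceil

for n in (9, 20, 50, 100):
    lower = ceil(comb(n + 1, 3) / (3 * (n - 2) + 1))   # parameter-count lower bound
    upper = (n * n + 3) / 12 + 1                       # Ehrenborg's upper bound
    print(n, lower, round(upper, 1), round(n * n / 18, 1))
\end{verbatim}
\end{footnotesize}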
One of the main goals of this paper is to give a sharp asymptotic bound for $\underline R(2,n)$. To be more precise, we will prove the following theorem:
\begin{theorem}\label{main} If $\underline R(2,n)$ denotes the rank of a generic skew-symmetric tensor $\omega\in \wedge^{3}\mathbb{C}^{n+1}$
then $\underline R(2,n)\sim \frac{n^2}{18}.$
\end{theorem}
Theorem \ref{main} is obtained as a consequence of a more precise (but more technical) theorem on the dimension of $\sigma_s(Gr(k,n))$.
To state this theorem, we will need to review several known facts from the literature.
First of all, the following inequality is easy to establish (see the next section for a geometric interpretation):
\[
\dim\sigma_s(Gr(k,n))\leq \min\left\{s[(k+1)(n-k)+1]-1, \ {{n+1}\choose{k+1}}-1\right\}.
\]
We say that $\sigma_s(Gr(k,n))$ has the {\it expected dimension} if equality holds.
If there exists an $s$ for which $\sigma_s(Gr(k,n))$ does not have the expected dimension then $Gr(k,n)$ is said to be {\it defective}. As we have seen, Grassmannians of lines are nearly always defective. In contrast,
there are very few cases where $Gr(k,n)$ is known to be defective when $k\geq 2$.
In 1916, C. Segre \cite{Se}
proved that $\sigma_2(Gr(2,5))$
has the expected dimension which established that $\underline R(2,5)=2$.
On the other hand, in 1931, Schouten \cite{Sch} showed that $Gr(2, 6)$ is defective.
Indeed he proved that $\sigma_3 (Gr( 2, 6))$ is a hypersurface as opposed to filling the ambient space. This result established that $\underline R(2,6)=4$.
It is well known that the degree of Schouten's hypersurface is seven (\cite{La}). In Section 5
we analyze this case in more detail. In particular, we find an explicit description of this degree seven invariant by relating its cube to the
determinant of a $21\times 21$ symmetric matrix.
\begin{theorem}\label{t1}
Let $\omega\in\wedge^3{\mathbb C}^7$. Consider the contraction operator $\phi_{\omega}\colon\wedge^2{\mathbb C}^7\to\wedge^5{\mathbb C}^7. $
The equation of $\sigma_3(Gr(2,6))$ is given by an $SL(7)$-invariant polynomial $P_7$
of degree seven such that
$$\det(\phi_{\omega})=2\left[P_7(\omega)\right]^3.$$
\end{theorem}
In 2002, Catalisano, Geramita and Gimigliano (with the help of Catalano-Johnson who had some unpublished results on this subject) \cite{CGG1} showed that $Gr(3, 7)$ and $Gr(2, 8)$ are defective. Due to the isomorphism $Gr(k,n)\cong Gr(n-k-1,n)$ (for instance, $Gr(2,8)\cong Gr(5,8)$), we only consider Grassmannians, $Gr(k,n)$, for which $k\leq \frac{n-1}{2}$. Based on a mixture of theory and computational experiments there is a body of evidence suggesting that all defective Grassmannians have been found. As a result, we believe in the following conjecture proposed in \cite{BDG} (conj. 4.1):
\begin{conjecture}[Baur-Draisma-de Graaf] \label{BDdG}
Let $k\ge 2$. Then $\sigma_s(Gr(k,n))$ has the expected dimension except for the following cases:
$$\begin{array}{c|c|c|c}
&&\textrm{actual codimension}&\textrm{expected codimension}\\
\hline
(1)&\sigma_3(Gr(2,6))&1&0\\
\hline (2)&\sigma_3(Gr(3,7))&20&19\\
\hline (2')&\sigma_4(Gr(3,7))&6&2\\
\hline (3)&\sigma_4(Gr(2,8))&10&8\\
\end{array}$$
\end{conjecture}
If the conjecture is true, then $\sigma_3(Gr(2,6))$ is the only secant variety which both does not have the expected dimension and is a hypersurface (with $k\geq 2$). The invariant computed in \thmref{t1} defines this hypersurface as its zero-locus.
Computational evidence was given by McGillivray who performed a Monte Carlo technique to check that the conjecture is true for $n\le 14$ in \cite{McG}. This result was extended to $n=15$ in \cite{BDG}. One of the goals of this paper is to provide further evidence in support of this conjecture.
As a step in this direction, in Section 3 we classify Grassmann varieties with defective $s$-secant varieties for small $s$. More precisely, we prove the following theorem:
\begin{theorem}\label{t2} Except for the cases listed in Conjecture \ref{BDdG}, $\sigma_s(Gr(k,n))$ has the expected dimension whenever $k\geq 2$ and $s\le 6$.
\end{theorem}
Let $A(n,6,w)$ be the cardinality of the largest binary code
of length $n$, constant weight $w$, and distance 6 (see Section $3$ for more details).
It is not hard to show that if $s\leq A(n+1,6,k+1)$ then
$\sigma_s(Gr(k,n))$ has the expected dimension.
For small values of $n$ and $k$ this is a useful result via the monomial approach, like in \cite{E} (indeed we use it
in Section 3). However,
for $n\gg 0$ the value of $A(n+1,6,k+1)$ is typically
smaller than $$\max\left\{ s \ \left|\ \sigma_s(Gr(k,n))\ \textrm{\ has the expected dimension and does not fill}\ \P N\right. \right\}.$$
For example $A(10,6,4)=5$ by \cite{S}, while by \thmref{t2} we see that
$\sigma_6(Gr(3,9))$ has the expected dimension and does not fill the ambient space.
We can now state a slightly technical theorem which implies \thmref{main}.
We show that there are two functions
$s_1(n)\le s_2(n)$ such that $\sigma_s(Gr(2,n))$ has the expected dimension whenever
either $s\le s_1(n)$ or $s\ge s_2(n)$. The precise statement is the following:
\begin{theorem}\label{asymp}
Let $n\ge 9$. Let
\[
s_1(n)=\left\lfloor\frac{n^2}{18}-\frac{20n}{27}+\frac{287}{81}\right\rfloor+
\left\lfloor\frac{6n-13}{9}\right\rfloor
\]
and let
\[
s_2(n)=\left\lceil\frac{n^2}{18}-\frac{11n}{27}+\frac{44}{81}\right\rceil+
\left\lceil\frac{6n-13}{9}\right\rceil.
\]
Then $\sigma_s(Gr(2,n))$ has the expected dimension whenever $s\le s_1(n)$
and whenever $s\ge s_2(n)$ (in this second case it fills the ambient space).
\end{theorem}
Ideally, we would like the functions to satisfy
$s_1+1 \geq s_2$ modulo a finite list of exceptions. While the theorem does not reach this result, it does have a relatively small value for $s_2-s_1$. Such a result is reminiscent of one
obtained in \cite{CGG2} for $X=\P 1\times\ldots\times\P 1$ where they produced functions
$s_1(t),s_2(t)$ (with $t$ denoting the number of factors of $\P 1$) such that $s_2-s_1\le 1$. This was extended in \cite{AOP1}
to $X=\P n\times\ldots\times\P n$, where the functions satisfy
$s_2-s_1\le n$. In the recent \cite{CGG3} the final result for $X=\P 1\times\ldots\times\P 1$
has been found.
Note that in \thmref{asymp}, $s_1(n)\sim s_2(n)\sim\frac{n^2}{18}$ (the sharp asymptotic value)
and that
\thmref{main} follows.
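As an added numerical illustration, the bounds of \thmref{asymp} can be evaluated for a few values of $n$; both are close to $n^2/18$ and their difference grows only linearly in $n$.
\begin{footnotesize}
\begin{verbatim}
from math import floor, ceil

def s1(n):
    return floor(n**2/18 - 20*n/27 + 287/81) + floor((6*n - 13)/9)

def s2(n):
    return ceil(n**2/18 - 11*n/27 + 44/81) + ceil((6*n - 13)/9)

for n in (9, 20, 50, 100):
    print(n, s1(n), s2(n), s2(n) - s1(n), round(n**2/18, 1))
\end{verbatim}
\end{footnotesize}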
The proof of \thmref{asymp} is in Section 4. Our approach relies on a specialization technique to place a certain number of points on subgrassmannians determined by codimension six linear subspaces (see the remark after
\propref{propa}).
This technique was inspired by \cite{BO}, where the case $X=(\P n,{\s O}(3))$ was treated
with a similar specialization determined by codimension three linear subspaces.
The present technique can be extended
to higher values of $k$ (see \cite{AOP2}), but at the price of a much more complicated inductive procedure involving joins,
and with heavy use of the computer to check the initial cases. As a consequence, we
decided to treat this case in a separate paper.
In Section 5 we close the paper with a geometric explanation for each of the known defective cases appearing in the list of Conjecture \ref{BDdG}.
\section{Notation and definitions}
We begin this section by recalling the precise definition of a higher secant variety.
If $Q_1,\dots, Q_s$ are points in $\P m$ then we let $\langle Q_1,\dots, Q_s \rangle$ denote their linear span. If $X\subseteq\P m$ is
a projective variety then the {\it $s$-secant variety} $\sigma_s(X)$ is defined to be the Zariski closure of the union of
the linear span of $s$-tuples of points $(Q_1,\dots, Q_s)$ where $Q_1,\dots Q_s\in X$.
In other words $$\sigma_s(X)=\overline{\bigcup_{Q_1, \dots, Q_s\in X} \langle Q_1,\dots,Q_s\rangle}.$$
If $X\subset \P m$ is a non-degenerate variety of dimension $d$ then a standard dimension count leads to an upper bound on the dimension of $\sigma_s(X)$. In particular, the dimension of $\sigma_s(X)$ can never exceed $\min\{s(d+1)-1,m\}$. Determining when the secant variety of a Grassmann variety reaches this upper bound is one of the main goals of this paper. We summarize some of the terminology that will be used to develop the main theorems in the following:
\begin{definition}
Let $X$ be a non-degenerate $d$-dimensional variety in $\P m$.
\begin{itemize}
\item[(1)] If $\dim \sigma_s(X)=\min\{s(d+1)-1,m\}$ then $\sigma_s(X)$ is said to have the {\it expected dimension}.
\item[(2)] If $\dim\sigma_s(X)<\min\{s(d+1)-1,m\}$ then $X$ is said to have a {\it defective $s$-secant variety}.
\item[(3)] If there exists an $s$ such that $\dim\sigma_s(X)<\min\{s(d+1)-1,m\}$ then $X$ is said to be {\it defective}.
\item[(4)] The smallest integer $s$ such that $\sigma_s(X)$ fills the ambient space
is called the {\it typical rank} and is denoted by $\underline R(X)$.
\end{itemize}
\end{definition}
The main tool that will be used to compute the dimension of $\sigma_s(X)$ is the following celebrated theorem of Terracini:
\begin{theorem}[Terracini's Lemma \cite{Z}]
Let $P_1,\dots, P_k$ be points in $X$ and let $z$ be a general point in $\langle P_1,\ldots ,P_k \rangle$. Then the tangent space to $\sigma_s(X)$ at $z$ is given by
$$T_z\sigma_s(X)=\langle T_{P_1}X,\ldots ,T_{P_k}X \rangle$$ where $T_{P_i}X$ denotes the tangent space to $X$ at $P_i$.
\end{theorem}
Let $\mathbb{K}$ be a field with $\mathrm{char}({\mathbb K})=0$ and
let $V$ be an $(n+1)$-dimensional
vector space over $\mathbb{K}$.
We denote by $Gr(k,n)$ the Grassmannian
of $(k+1)$-dimensional subspaces of $V$ and by $C(Gr(k,n))$
the affine cone over $Gr(k,n)$.
We denote by
$\lfloor x\rfloor$ the greatest integer less than or equal to $x$ and by $\lceil x\rceil$ the smallest integer greater than or equal to $x$.
\section{Classification of Grassmannians with defective $s$-secant varieties, $s\le 6$}
In this section we classify Grassmannians with defective $s$-secant varieties when $s\le 6$. The main tools are a combination of computations on the computer and the monomial technique. We use ideas from coding theory to strengthen the monomial approach. Throughout this section, $\mathbb{K}$ is an infinite field with $\mathrm{char}\ (\mathbb{K})\not=2$.
We begin with two well known lemmas.
\begin{lemma}[Tangent space at a point of a Grassmannian]
\label{tangent}
Let $p=v_0\wedge\ldots\wedge v_k$
be a point of
$C(Gr(k,n))$ where $v_i\in V=\mathbb{K}^{n+1}$. The tangent space to $C(Gr(k,n))$ at $p$ is
\[
T_p(k,n)=\sum_{i=0}^k v_0 \wedge \cdots \wedge v_{i-1} \wedge V \wedge v_{i+1} \wedge \cdots \wedge v_k.
\]
\end{lemma}
Let ${\mathcal B}=\{v_0,\dots ,v_n\}$ be a basis of $V=\mathbb{K}^{n+1}$. The ambient space of $C(Gr(k,n ))$ in its
Pl\"ucker embedding is $\wedge^{k+1}V$. A basis of the ambient space is determined by the $(k+1)$-element subsets of ${\mathcal B}$.
From \lemref{tangent} we see that a basis of the tangent space to $C(Gr(k,n))$ at
$v_0\wedge\ldots\wedge v_k$ is determined by the $(k+1)$-element subsets of ${\mathcal B}$ which intersect
$\{v_0,\dots, v_k\}$ in a set of at least $k$ elements. Thus, we have the following Lemma:
\begin{lemma}[Monomial Lemma]
\label{monomial} Let $\{v_0,\dots, v_n\}$ be a basis for $V$ and let $k\geq 2$.
\begin{itemize}
\item[(1)] Let ${\mathbf A}=\{a_0,\dots ,a_k\}$ be a subset of $\{v_0,\dots ,v_n\}$ and let $T_{\mathbf A}(k,n)$ denote the tangent space to $C(Gr(k,n))$ at $a_0\wedge\ldots\wedge a_k$. A basis of $T_{\mathbf A}(k,n)$ is given by vectors corresponding to $(k+1)$-element subsets of $\{v_0,\dots, v_n\}$ which intersect ${\mathbf A}$ in at least $k$ elements.
\item[(2)] Let ${\mathbf A}_1,\dots, {\mathbf A}_t$ be $(k+1)$-element subsets of $\{v_0,\dots, v_n\}$. If $|{\mathbf A}_i\cap {\mathbf A}_j|\leq k-2$ whenever $i\neq j$ then $T_{\mathbf A_1}(k,n),\dots, T_{\mathbf A_t}(k,n)$ are linearly independent.
\end{itemize}
\end{lemma}
By upper semicontinuity, if there exist smooth points $Q_1, \dots, Q_s \in Gr(k,n)$ such that the tangent spaces $T_{Q_1}(k,n), \dots, T_{Q_s}(k,n)$ are linearly independent (or else span the ambient space) then $\sigma_s(Gr(k,n))$ has the expected dimension by Terracini's Lemma. Thus, to show that $X$ does not have a defective $s$-secant variety, it is enough to find $s$ smooth points on $X$ such that the tangent spaces at these points are either linearly independent or else span the ambient space.
The following theorem extends Theorem 2.1 of \cite{CGG1}.
\begin{theorem}\label{tre}
If $3(s-1)\le n-k$ and if $k\ge 2$
then $$\dim\sigma_s(Gr(k,n))=s(k+1)(n-k)+(s-1).$$
\end{theorem}
\begin{proof}
Let $\P {n}={\mathbb P}(V)$. Fix a basis $\{v_0,\ldots,v_n\}$ for $V$.
Let $\mathbf{A}_1,\ldots, \mathbf{A}_s$ be points in $Gr(k,n )$ defined by $\mathbf{A}_i=\langle v_{3(i-1)},\ldots, v_{3(i-1)+k}\rangle$.
By \lemref{monomial}, it follows that the tangent spaces at $\mathbf{A}_1,\ldots , \mathbf{A}_s$ are linearly independent, hence by Terracini's lemma we are done.
\end{proof}
\begin{remark} By \lemref{monomial} and Terracini's lemma, we can show that $\sigma_s(Gr(k,n))$ has the expected dimension if we can show that there exist $s$ distinct $(k+1)$-element subsets, ${\mathbf A}_1,\dots, {\mathbf A}_s$, of an $(n+1)$-element set such that whenever $i\neq j$ we have $|{\mathbf A}_i\cap {\mathbf A}_j|\leq k-2$. To each $(k+1)$-element subset, we can associate a weight $k+1$ binary vector of length $n+1$ via the characteristic function. Our conditions require that the Hamming distance between any pair of distinct vectors is at least 6 (this was observed also in \cite{BDG}). Let $A(n,6,w)$ denote the cardinality of the largest set of length $n$, weight $w$ vectors that satisfy this condition on the Hamming distance. Lower bounds on $A(n,6,w)$ have been computed as part of a search for good {\it constant weight binary codes}. Tables of such bounds can be used to prove that certain Grassmann varieties are not $s$-defective via monomial methods. We found the table \cite{S} and the paper \cite{GSl} particularly useful.
\end{remark}
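For illustration (this search is an added aside; the tables cited above give far better bounds), even a naive greedy procedure produces families of $(k+1)$-element subsets with pairwise intersections of size at most $k-2$, and hence, by \lemref{monomial}, points of $Gr(k,n)$ whose tangent spaces are linearly independent.
\begin{footnotesize}
\begin{verbatim}
from itertools import combinations

def greedy_code(n, k):
    """Greedily pick (k+1)-subsets of an (n+1)-set pairwise intersecting in
    at most k-2 elements (a constant weight code of weight k+1 and distance 6)."""
    chosen = []
    for A in combinations(range(n + 1), k + 1):
        if all(len(set(A) & set(B)) <= k - 2 for B in chosen):
            chosen.append(A)
    return chosen

print(len(greedy_code(9, 3)))  # greedy lower bound for A(10,6,4); the true value is 5
\end{verbatim}
\end{footnotesize}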
\begin{theorem}[Graham and Sloane \cite{GSl}] \label{graham-sloane} Let $A(n,6,w)$ denote the maximum number of codewords in any binary code of length $n$, constant weight $w$ and Hamming distance 6.
\begin{itemize}
\item[(a)] Let $q$ be the smallest prime power such that $q\ge n$, then $A(n,6,w)\ge \frac{1}{q^2}{n\choose w}$.
\item[(b)] Let $q$ be the smallest prime power such that $q+1\ge n$, then $A(n,6,w)\ge \frac{q-1}{q^3-1}{n\choose w}$.
\item[(c)] $A(n,6,w)\ge {n\choose w}\Big/\left(1+w(n-w)+{w\choose 2}{n-w\choose 2}\right)$.
\end{itemize}
\end{theorem}
Using results of \cite{McG}, \cite{S} and applying \thmref{tre} and \thmref{graham-sloane}, we obtain the following corollary (which proves \thmref{t2}):
\begin{corollary}\label{p6} If $k\geq 2$ then $\sigma_s(Gr(k,n))$ satisfies the following:
\begin{itemize}
\item[(i)] $\sigma_3(Gr(k,n))$ has the expected dimension
except for $(k,n)=(2,6), (3,7)$
\item[(ii)] $\sigma_4(Gr(k,n))$ has the expected dimension
except for $(k,n)=(2,8), (3,7)$
\item[(iii)] $\sigma_s(Gr(k,n))$ always has the expected dimension for $s=2,5,6$.
\end{itemize}
\end{corollary}
\begin{proof} The case $s=2$ was established in Corollary 2.2 of \cite{CGG1}.
For the case $s=3$, we apply \thmref{tre} when the inequality $n-k\ge 6$ is satisfied. The remaining cases have $n-k\le 5$
with $n\le 9$ (due to $k\le \frac{n-1}{2}$). These have been checked in \cite{McG}.
For the case $s=4$, we apply \thmref{tre} when the inequality $n-k\ge 9$ is satisfied.
The remaining cases have $n-k\le 8$
and $n\le 15$. For $n\le 14$, they have been checked in \cite{McG}. The only remaining case is $(k,n)=(7,15)$ which
follows from \cite{S}.
For the case $s=5$, we apply \thmref{tre} when the inequality $n-k\ge 12$ is satisfied.
The remaining cases have $n-k\le 11$ and $n\le 21$. For $n\le 14$ they have been checked in \cite{McG}.
The only remaining cases have $k\ge 4$ and $21\ge n\ge 15$. These all follow from \cite{S}.
For the case $s=6$, we apply \thmref{tre} when the inequality $n-k\ge 15$ is satisfied.
The remaining cases have $n-k\le 14$ and $n\le 27$. The case $(k,n)=(2,16)$ can be checked by the
computer exactly as in \cite{McG}. The remaining cases have $k\ge 3$ and $15\le n \le 27$. These all follow from \cite{S}.
\end{proof}
\begin{remark}
By using \thmref{tre}, \thmref{graham-sloane},
the table \cite{S}, and the algorithm in \cite{McG}, it is expected that one can extend \corref{p6} to larger values of $s$.
\end{remark}
\section{The inductive step, from $n-6$ to $n$}
In this section we develop a collection of tools that lead to an inductive proof of \thmref{asymp}. Throughout the section,
we denote by $V$ an $(n+1)$-dimensional vector space over an infinite field $\mathbb{K}$ and we denote
by $V^*$ its dual space.
\begin{proposition}\label{propa}
Let $X=Gr(2,n)$ with $n\ge 17$. Let $V=\mathbb{K}^{n+1}$, and let $L$, $M$ and $N$ be general codimension six subspaces of $V$.
Let ${\s L}$, ${\s M}$ and ${\s N}$ be the Grassmann varieties of $3$-planes in $L$, $M$ and $N$ respectively. Let $p_1,\dots ,p_4$ be $4$ general points on ${\s L}$, $q_1,\dots ,q_4$ be $4$ general points on ${\s M}$, and $r_1,\dots ,r_4$ be $4$ general points on ${\s N}$. Then there are no hyperplanes in $\P{{}}(\wedge^3 V)$ which contain both
${\s L}\cup{\s M}\cup{\s N}$ and the tangent spaces to $X$ at the $12$ points $\{p_i,q_i,r_i\}_{1 \leq i \leq 4}$.
\end{proposition}
\begin{proof} Let $\{e_0, \dots, e_n\}$ be a basis for $V$
and let $\{x_0, \dots ,x_n\}$ be its dual basis. Without loss of generality, we may assume that $L=\{x_i=0, i=0,\ldots,5\}$,
$M=\{x_i=0, i=6,\ldots,11\}$ and
$N=\{x_i=0, i=12,\ldots,17\}$.
The hyperplanes in $\P{{}}(\wedge^3 V)$ which contain
${\s L}\cup{\s M}\cup{\s N}$ span a space of dimension $6^3=216$ with basis
$\{x_0\wedge x_6\wedge x_{12},\ldots , x_5\wedge x_{11}\wedge x_{17}\}$.
We remark that the codimension of ${\s L}$ (resp. ${\s M}$, ${\s N}$) in $X$ is $18$.
Furthermore, $12 \cdot [3(n+1-3)-3(n-5-3)]=12\cdot 18=216$.
Thus it is enough to prove that
the tangent spaces to $X$ at the points $p_1,p_2,p_3,p_4,q_1,q_2,q_3,q_4,r_1,r_2,r_3,r_4$ modulo
$\langle \P{{}}(L), \P {{}}(M), \P{{}}(N) \rangle$ form a $216$-dimensional vector space. To prove this, it is enough to do the computation in $\P {{17}}$ which we achieve
through the Macaulay2 script given below:
\begin{footnotesize}
\begin{verbatim}
i1 : randomIdeal = method();
i2 : randomIdeal(Ideal) := Ideal => I -> (
R := ring I;
ideal(gens I*random(source gens I,R^{3:-1}))
);
i3 : randomIdeal' = method();
i4 : randomIdeal'(Ideal) := Ideal => I -> (
R := ring I;
J := I^2;
m := gens J;
ideal(m*map(source m,,basis(3,J)))
);
i5 : E = ZZ/32003[e_0..e_17, SkewCommutative=>true];
i6 : L = ideal(e_6..e_17);
o6 : Ideal of E
i7 : M = ideal(e_0..e_5,e_12..e_17);
o7 : Ideal of E
i8 : N = ideal(e_0..e_11);
o8 : Ideal of E
i9 : lt = {L,M,N};
i10 : h = new MutableList;
i11 : scan((#lt),i->h#i=(lt#i)^3);
i12 : J = trim(sum(toList h));
o12 : Ideal of E
i13 : I = ideal(0_E);
o13 : Ideal of E
i14 : for i from 1 to 4 do (
h' = new MutableList;
scan((#lt),i->h'#i=randomIdeal'(randomIdeal(lt#i)));
I = I+sum(toList h');
);
i15 : rank source gens trim (I+J) == rank source gens trim (ideal vars E)^3
o15 = true
\end{verbatim}
\end{footnotesize}
Note that we work in characteristic $p=32003$. Our goal is to check that a certain integer matrix has maximal rank. The Macaulay2 script determines that the matrix has maximal rank modulo $p$.
The result in characteristic zero follows from
the openness of the maximal rank condition.
\end{proof}
\begin{remark} It would be natural for the reader to ask why we choose subspaces of codimension six.
The linear system of hyperplanes in $\P{{}}(\wedge^3 V)$ which contain
the union of three subspaces of codimension $p$ has dimension $p^3$ when $n$ is sufficiently large.
Any tangent space supported on this union imposes
$3p$ conditions. In order to have a number of points which impose independent conditions
on the linear system and which make it empty, we need the condition
that $3p$ divides $p^3$. Therefore, a necessary condition is that $p$ is a multiple of $3$. Unfortunately,
the case $p=3$ does not work. Three general points supported on three codimension three
subspaces do not impose independent conditions, as can be
checked by an analogous Macaulay2 script. Hence we turned to the next case, $p=6$.
\end{remark}
Let $f(n)={{n+1}\choose 3}$. We compute the finite difference $f(n)-2f(n-6)+f(n-12)=36(n-6)$.
In particular the system of hyperplanes in $\P{{}}(\wedge^3 V)$ which contain
${\s L}\cup{\s M}$ has dimension $36(n-6)$ (by the Grassmann formula) for $n\ge 11$.
In order to fit with \propref{propa}, we remark that
$f(n)-3f(n-6)+3f(n-12)-f(n-18)=216$.
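These finite-difference identities are elementary; for the reader's convenience they can be verified symbolically, for instance as follows (an added aside, with $f(n)={{n+1}\choose 3}$ written out as a polynomial).
\begin{footnotesize}
\begin{verbatim}
import sympy as sp

n = sp.symbols('n')
f = lambda m: (m + 1) * m * (m - 1) / 6                  # binomial(m+1, 3)
print(sp.simplify(f(n) - 2*f(n - 6) + f(n - 12) - 36*(n - 6)))          # 0
print(sp.simplify(f(n) - 3*f(n - 6) + 3*f(n - 12) - f(n - 18) - 216))   # 0
\end{verbatim}
\end{footnotesize}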
We want to keep four points outside ${\s L}\cup{\s M}$. Note
that the tangent space, at each of the four points, imposes $3n-5$ conditions
and that
$$\frac{36(n-6)-4\cdot(3n-5)}{36}=\frac{6n-49}{9}.$$ This leads to the following proposition:
\begin{proposition}\label{propb}
Let $X=Gr(2,n)$ with $n\ge 11$. Let $V=\mathbb{K}^{n+1}$, and let $L$ and $M$ be general codimension six subspaces of $V$.
Let ${\s L}$ (resp. ${\s M}$) be the Grassmann variety of $3$-planes in $L$ (resp. $M$). Then
\begin{itemize}
\item[(i)] The system of hyperplanes in $\P{{}}(\wedge^3 V)$ which contain
${\s L}\cup{\s M}$ and which contain the tangent spaces at $\lfloor\frac{6n-49}{9}\rfloor$
general points on ${\s L}$, at $\lfloor\frac{6n-49}{9}\rfloor$
general points on ${\s M}$,
and at $4$ general points on $X$
has the expected dimension $36(n-6)-36\lfloor\frac{6n-49}{9}\rfloor-4(3n-5)$,
which is
$$\left\{\begin{array}{ccc}20&\textrm{if}&n=0\ (\textrm{mod\ }3)\\
8&\textrm{if}&n=1\ (\textrm{mod\ }3)\\
32&\textrm{if}&n=2\ (\textrm{mod\ }3)\\
\end{array}\right.$$
\item[(ii)] There are no hyperplanes in $\P{{}}(\wedge^3 V)$ which contain
${\s L}\cup{\s M}$ and which contain the tangent spaces at $\lceil\frac{6n-49}{9}\rceil$
general points on ${\s L}$, at $\lceil\frac{6n-49}{9}\rceil$
general points on ${\s M}$
and at $4$ general points on $X$.
\end{itemize}
\end{proposition}
\begin{proof}
We will let $\{p_i\}$ denote the $\lfloor\frac{6n-49}{9}\rfloor$ (resp. $\lceil\frac{6n-49}{9}\rceil$) general points on ${\s L}$, $\{q_i\}$ denotes the $\lfloor\frac{6n-49}{9}\rfloor$ (resp. $\lceil\frac{6n-49}{9}\rceil$) general points on ${\s M}$ and $\{r_i\}$ denotes the four general points on $X$.
The proof is by a 6-step induction from $n-6$ to $n$. The initial six cases
$n=11, 12, 13, 14, 15, 16$ can be checked directly as follows:
\begin{footnotesize}
\begin{verbatim}
i16 : secondStep = method()
o16 = secondStep
o16 : MethodFunction
i17 : secondStep(ZZ) := n -> (
s := floor((6*n-49)/9);
t := binomial(n+1,3)-(36*(n-6)-36*s-4*(3*n-5));
E := ZZ/32003[e_0..e_n, SkewCommutative=>true];
lt := {ideal(e_6..e_n),ideal(e_0..e_(n-6))};
J := sum(apply(lt, i->i^3));
I := ideal(0_E);
for i from 1 to s do (
h := new MutableList;
scan((#lt),i->h#i=randomIdeal'(randomIdeal(lt#i)));
I = I+sum(toList h);
);
for i from 1 to 4 do (
r := randomIdeal'(ideal(random(E^{0},E^{3:-1})));
I = I+r;
);
rank source gens trim (I+J) == t
);
i18 : for i from 11 to 16 list secondStep(i)
o18 = {true, true, true, true, true, true}
o18 : List
\end{verbatim}
\end{footnotesize}
Now assume $n\ge 17$. Let $N$ be a third general codimension six subspace of $V$ and
let ${\s N}$ be the Grassmann variety of $3$-planes in $N$.
We have a short exact sequence of sheaves
$$
0\rightarrow I_{{ {\s L}}\cup{ {\s M}}\cup{ {\s N}},{\P {{}}(\wedge^3 V)}}(1)\rightarrow
I_{{ {\s L}}\cup{{\s M}},{\P {{}}(\wedge^3 V)}}(1)\rightarrow I_{\left({ {\s L}}\cup{ {\s M}}\right)\cap { {\s N}},{\s N}}(1)\rightarrow 0.
$$
To prove case (i), we will specialize the 4 points in
$\{r_i\}$ to lie on ${\s N}$, $\lfloor\frac{6n-85}{9}\rfloor$ of the points in $\{p_i\}$ to lie on ${\s L}\cap {\s N}$
and $\lfloor\frac{6n-85}{9}\rfloor$ of the points in $\{q_i\}$ to lie on ${\s M}\cap {\s N}$. This leaves in place
exactly four of the points in $\{p_i\}$ and four of the points in $\{q_i\}$.
Let $Y$ be the union of the tangent spaces at the points in
$\{p_i\}, \{q_i\}$ and $\{r_i\}$. Then we obtain the following exact sequence:
\[
0
\rightarrow H^0(I_{\mathcal{K} \cup{ {\s N}},{\P {{}}(\wedge^3 V)}}(1))
\rightarrow
H^0(I_{\mathcal{K} ,{\P {{}}(\wedge^3 V)}}(1))
\rightarrow
H^0(I_{\mathcal{K}\cap { {\s N}}, {\s N}}(1)).
\]
where $\mathcal{K} = Y\cup { {\s L}}\cup{{\s M}}$.
Note that we have the isomorphism:
\[
H^0\left(I_{\mathcal{K} \cap { {\s N}},{\s N}}(1)\right) \simeq H^0\left( I_{\mathcal{K} \cap { {\s N}},\P{{}}(\wedge^3 N)}(1)\right).
\]
Thus the following inequality holds:
\[
\dim H^0\left(I_{\mathcal{K} ,{\P {{}}(\wedge^3 V)}}(1) \right) \geq
\dim H^0\left(I_{\mathcal{K} \cup{ {\s N}},{\P {{}}(\wedge^3 V)}}(1)\right)+
\dim H^0\left( I_{\mathcal{K} \cap { {\s N}},\P{{}}(\wedge^3 N)}(1)\right).
\]
Since $Y\cap {\s N}$ satisfies the conditions of Proposition~\ref{propa},
$H^0\left(I_{\mathcal{K} \cup{ {\s N}},{\P {{}}(\wedge^3 V)}}(1)\right)$
has the expected dimension.
By the induction hypothesis,
$\dim H^0\left( I_{\mathcal{K}\cap { {\s N}},\P{{}}(\wedge^3 N)}(1)\right)$
also has the expected value. Thus
$\dim H^0\left(I_{\mathcal{K},{\P {{}}(\wedge^3 V)}}(1) \right)$
has the expected value, which proves (i).
To prove case (ii) we make a similar specialization but substituting
$\lfloor\frac{6n-85}{9}\rfloor$ with $\lceil\frac{6n-85}{9}\rceil$.
\end{proof}
Note that $f(n)-f(n-6)=3n^2-18n+35$.
In particular the system of hyperplanes in $\P{{}}(\wedge^3 V)$ which contain
${\s L}$ has dimension $3n^2-18n+35$ for $n\ge 8$.
We want to keep $\lfloor\frac{6n-13}{9}\rfloor$ points outside ${\s L}$. Note that
$$\frac{3n^2-18n+35-(3n-5)(6n-13)/9}{18}=\frac{n^2}{18}-\frac{31n}{54}+\frac{125}{81}.$$
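Again, both computations displayed above can be checked symbolically (added aside):
\begin{footnotesize}
\begin{verbatim}
import sympy as sp

n = sp.symbols('n')
f = lambda m: (m + 1) * m * (m - 1) / 6                  # binomial(m+1, 3)
print(sp.simplify(f(n) - f(n - 6) - (3*n**2 - 18*n + 35)))              # 0
lhs = (3*n**2 - 18*n + 35 - (3*n - 5)*(6*n - 13)/9) / 18
rhs = n**2/18 - 31*n/54 + sp.Rational(125, 81)
print(sp.simplify(lhs - rhs))                                           # 0
\end{verbatim}
\end{footnotesize}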
This leads us to the next proposition:
\begin{proposition}\label{propc}
Let $X=Gr(2,n)$ with $n\ge 9$. Let $V=\mathbb{K}^{n+1}$. Let $L$ be a general codimension six subspace of $V$ and
let ${\s L}$ be the Grassmann variety of $3$-planes in $L$.
If
\[ f_1(n):=\left\lfloor\frac{n^2}{18}-\frac{31n}{54}+\frac{125}{81}-\frac{n}{6}+2\right\rfloor
\]
and
\[
f_2(n):=\left\lceil\frac{n^2}{18}-\frac{31n}{54}+\frac{125}{81}+\frac{n}{6}-1\right\rceil,
\]
then
\begin{itemize}
\item[(i)]
The system of hyperplanes in $\P{{}}(\wedge^3 V)$ which contain
${\s L}$ and which contain the tangent spaces at $f_1(n)$
general points in ${\s L}$ and at $\lfloor\frac{6n-13}{9}\rfloor$
general points in $X$ has the expected dimension
$3n^2-18n+35-18f_1(n)-(3n-5)\lfloor\frac{6n-13}{9}\rfloor$, which is $O(n)$.
\item[(ii)]
There are no hyperplanes in $\P{{}}(\wedge^3 V)$ which contain
${\s L}$ and which contain the tangent spaces at $f_2(n)$
general points in ${\s L}$ and at $\lceil\frac{6n-13}{9}\rceil$
general points in $X$.
\end{itemize}
\end{proposition}
\begin{proof}
We will let $\{p_i\}$ denote a set of $f_1(n)$ (resp. $f_2(n)$) general points on ${\s L}$ and let $\{q_i\}$ denote a set of $\lfloor\frac{6n-13}{9}\rfloor$ (resp. $\lceil\frac{6n-13}{9}\rceil$) general points on $X$. The proof is by a 6-step induction from $n-6$ to $n$. The initial cases
$n=9, 10, 11, 12, 13, 14$ can be checked directly as follows:
\begin{footnotesize}
\begin{verbatim}
i19 : thirdStep = method()
o19 = thirdStep
o19 : MethodFunction
i20 : thirdStep(ZZ) := n -> (
f := floor(n^2/18-31*n/54+125/81-n/6+2);
s := floor((6*n-13)/9);
t := binomial(n+1,3)-(3*n^2-18*n+35-18*f-(3*n-5)*s);
E := ZZ/32003[e_0..e_n,SkewCommutative=>true];
L := ideal(e_6..e_n);
Lk := L^3;
I := ideal(0_E);
for i from 1 to f do (
I = I+randomIdeal'(randomIdeal(L));
);
for i from 1 to s do (
I = I+randomIdeal'(ideal(random(E^{1:0},E^{3:-1})));
);
rank source gens trim (I+Lk) == t
);
i21 : for i from 9 to 14 list thirdStep(i)
o21 = {true, true, true, true, true, true}
o21 : List
\end{verbatim}
\end{footnotesize}
Now assume that $n\ge 15$. Let $M$ be a second general codimension six subspace of $V$ and
let ${\s M}$ be the Grassmann variety of $3$-planes in $M$.
We have the short exact sequence of sheaves
$$
0\rightarrow I_{{{\s L}}\cup{{\s M}},{{\P {{}}(\wedge^3 V)}}}(1)\rightarrow
I_{{{\s L}},{{\P {{}}(\wedge^3 V)}}}(1)\rightarrow I_{{{\s L}}\cap {{\s M}}, {\s M}}(1)\rightarrow 0.
$$
To prove case (i) we will specialize
$f_1(n)-\lfloor\frac{6n-49}{9}\rfloor$ of the
points in $\{p_i\}$ to ${\s L}\cap {\s M}$ and $\lfloor\frac{6n-49}{9}\rfloor$ of the
points in $\{q_i\}$ to ${\s M}$. This leaves in place exactly $\lfloor\frac{6n-49}{9}\rfloor$ of the
points in $\{p_i\}$ and four of the points in $\{q_i\}$.
Let $Y$ be the union of the tangent spaces at the points in
$\{p_i\}$ and $\{q_i\}$. Then, as in the proof of the previous proposition,
we obtain the following exact sequence:
$$
0\rightarrow H^0 (I_{Y\cup{{\s L}}\cup{{\s M}},{\P {{}}(\wedge^3 V)}}(1))\rightarrow
H^0(I_{Y\cup {{\s L}},{\P {{}}(\wedge^3 V)}}(1))
\rightarrow H^0(I_{(Y\cup{{\s L}})\cap {{\s M}},{\P {{}}(\wedge^3 M)}}(1)).
$$
By the induction hypothesis, the third element in the exact sequence has the expected dimension.
Note that
$$
f_1(n)-\left\lfloor\frac{6n-49}{9}\right\rfloor\le f_1(n-6)
$$
(in fact, the summand $-\frac{n}{6}$ was inserted in the definition of $f_1$ in order for this inequality to hold).
Now if we apply Proposition~\ref{propb} to the first element in the exact sequence, we can prove (i).
To prove case (ii) we will specialize
$f_2(n)-\lceil\frac{6n-49}{9}\rceil$ of the
points in $\{p_i\}$ to ${\s L}\cap {\s M}$ and $\lceil\frac{6n-49}{9}\rceil$ of the
points in $\{q_i\}$ to ${\s M}$. Since $$
f_2(n-6)\le f_2(n)-\left\lceil\frac{6n-49}{9}\right\rceil,
$$
we can apply Proposition~\ref{propb} to the first element in the exact sequence and we are done.
\end{proof}
We now have the tools in place to prove the main theorem of this section (\thmref{asymp} of the Introduction).
\begin{theorem}
Let $n\ge 9$. Let $$s_1(n)=\left\lfloor\frac{n^2}{18}-\frac{2n}{27}+\frac{170}{81}\right\rfloor \ \ \mbox{and}
\ \ s_2(n)=\left\lceil\frac{n^2}{18}+\frac{7n}{27}-\frac{73}{81}\right\rceil.$$
Then $\sigma_s(Gr(2,n))$ has the expected dimension whenever $s\le s_1(n)$
and whenever $s\ge s_2(n)$ (in this second case it fills the ambient space).
\end{theorem}
\begin{proof}
The proof is by a 6-step induction from $n-6$ to $n$. The cases
$n=9, 10, 11, 12, 13, 14$ can be checked directly and are well known (\cite{McG}). Let $n\ge 15$. Let $V=\mathbb{K}^{n+1}$.
Let $L$ be a general codimension six subspace of $V$ and
let ${\s L}$ be the Grassmann variety of $3$-planes in $L$. We will let $\{p_i\}$ denote a set of $s_1(n)$ (resp. $s_2(n)$) general points on $Gr(2,n)$.
Note that $$
s_1(n)=f_1(n)+\left\lfloor\frac{6n-13}{9}\right\rfloor \ \ \mbox{and}
\ \ s_2(n)=f_2(n)+\left\lceil\frac{6n-13}{9}\right\rceil.
$$
Consider the following short exact sequence of vector spaces:
$$
0\rightarrow H^0(I_{{{\s L}},{{\P {{}}(\wedge^3 V)}}}(1))\rightarrow
\wedge^3 V\rightarrow \wedge^3 L\rightarrow 0.
$$
To prove that $\sigma_s(Gr(2,n))$ has the expected dimension whenever $s\le s_1(n)$, we specialize $f_1(n)$ of the points in $\{p_i\}$ to lie on ${\s L}$
and we keep $\left\lfloor\frac{6n-13}{9}\right\rfloor$ points in their place.
Let $Y$ be the union of the tangent spaces to $X$ at the points in $\{p_i\}$.
Then we obtain the following exact sequence:
$$
0\rightarrow H^0(I_{Y\cup{{\s L}},{\P {{}}(\wedge^3 V)}}(1))\rightarrow
H^0(I_{Y,{\P {{}}(\wedge^3 V)}}(1))
\rightarrow
H^0(I_{Y\cap {{\s L}},{\P {{}}(\wedge^3 L)}}(1)).
$$
By the induction hypothesis, $H^0(I_{Y\cap {{\s L}},{\P {{}}(\wedge^3 L)}}(1))$ has the expected dimension.
Note that the following inequality holds:
$$
s_1(n)-s_1(n-6)\le \left\lfloor\frac{6n-13}{9}\right\rfloor.
$$
It follows from \propref{propc} that $H^0(I_{Y\cup{{\s L}},{\P {{}}(\wedge^3 V)}}(1))$ also has
the expected dimension. Thus we have proved that $\sigma_s(Gr(2,n))$ has the expected dimension whenever $s\le s_1(n)$.
Since the following inequality holds:
$$
\left\lceil\frac{6n-13}{9}\right\rceil\le s_2(n)-s_2(n-6),
$$
the proof that $\sigma_s(Gr(2,n))$ has the expected dimension whenever $s\ge s_2(n)$ can be shown in the same way by
specializing $f_2(n)$ of the points in $\{p_i\}$ to lie on ${\s L}$ and keeping $\left\lceil\frac{6n-13}{9}\right\rceil$ points in their place.
\end{proof}
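For orientation, the thresholds $s_1(n)$ and $s_2(n)$ are easy to tabulate. The following short Python sketch (an illustration of ours, independent of the Macaulay2 sessions above) prints a few values using exact rational arithmetic, next to $n^2/18$, to which both thresholds are asymptotic.
\begin{footnotesize}
\begin{verbatim}
# Illustrative sketch: tabulate the thresholds s_1(n), s_2(n) of the theorem
# above, next to n^2/18.
from fractions import Fraction
from math import floor, ceil

def s1(n):
    return floor(Fraction(n * n, 18) - Fraction(2 * n, 27) + Fraction(170, 81))

def s2(n):
    return ceil(Fraction(n * n, 18) + Fraction(7 * n, 27) - Fraction(73, 81))

for n in (9, 20, 50, 100):
    print(n, s1(n), s2(n), round(n * n / 18, 1))
\end{verbatim}
\end{footnotesize}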
\section{The defective cases}
In \conjref{BDdG} there is a list of four defective secant varieties of Grassmannians. All four of the defective cases are described in \cite{CGG1}. We make here some further comments.
A geometric explanation of the defectivity of $X=Gr(3,7)$ is the following, inspired by \cite{CC2}. As in \cite{CGG1},
given three points $P_1, P_2, P_3$ in $X$, there is a basis $e_0,\ldots, e_7$ such that
the three points correspond to $P_1=\langle e_0,e_1,e_2,e_3 \rangle, P_2= \langle e_4,e_5,e_6,e_7 \rangle$, $P_3= \langle e_0+e_4,e_1+e_5,e_2+e_6,e_3+e_7 \rangle$. Using the matrix
\[\left[\begin{array}{cccc|cccc}
1&&&&t\\
&1&&&&t\\
&&1&&&&t\\
&&&1&&&&t\\
\end{array}\right],\]
we see that there is a rational normal curve embedded with ${\s O}(4)$ which passes through the 3 points and is contained in $X$.
The existence of this curve, $C_4$, forces each of the tangent spaces $T_{P_i}X$ to have the line $T_{P_i}C_4$ in common with the
$\P 4$ spanned by $C_4$. This leads to the following inequality:
\[
\dim \langle T_{P_1}X, T_{P_2} X, T_{P_3} X \rangle \leq 4+3(\dim Gr(3,7)-1)=4 +3\cdot 15 = 49 < 50,
\]
where $50$ is the expected dimension of $\sigma_3(Gr(3,7))$.
By Terracini's lemma, this proves the defectivity of $\sigma_3(Gr(3, 7))$.
The defectivity of $\sigma_4(Gr(3, 7))$ follows as a direct consequence of the defectivity
of $\sigma_3(Gr(3, 7))$.
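This dimension count is also easy to check directly on a computer. The following Python sketch (an illustration of ours, independent of the Macaulay2 sessions above; the helper names such as \verb|wedge4| are made up for this purpose) spans the three affine tangent spaces to the cone over $Gr(3,7)$ in its Pl\"ucker embedding at the points $P_1, P_2, P_3$ above, which represent a general triple, and computes the dimension of their span. By Terracini's lemma this is the affine dimension of $\sigma_3(Gr(3,7))$, and by the argument above it is at most $50$, one less than the expected $3\cdot 17 = 51$.
\begin{footnotesize}
\begin{verbatim}
# Illustrative Terracini check for Gr(3,7) in wedge^4 C^8 (indices 0..7).
from itertools import combinations, product
import numpy as np

n = 8
basis4 = {c: i for i, c in enumerate(combinations(range(n), 4))}  # 70 monomials

def wedge4(vectors):
    # coordinate vector of v_1 ^ v_2 ^ v_3 ^ v_4; each v_i is a dict {index: coeff}
    out = np.zeros(len(basis4))
    for picks in product(*[list(v.items()) for v in vectors]):
        idx = [i for i, _ in picks]
        if len(set(idx)) < 4:
            continue
        coeff = 1.0
        for _, c in picks:
            coeff *= c
        inv = sum(1 for a in range(4) for b in range(a + 1, 4) if idx[a] > idx[b])
        out[basis4[tuple(sorted(idx))]] += (-1) ** inv * coeff
    return out

def e(i):
    return {i: 1.0}

def add(u, v):
    w = dict(u)
    for i, c in v.items():
        w[i] = w.get(i, 0.0) + c
    return w

P1 = [e(0), e(1), e(2), e(3)]
P2 = [e(4), e(5), e(6), e(7)]
P3 = [add(e(0), e(4)), add(e(1), e(5)), add(e(2), e(6)), add(e(3), e(7))]

rows = []
for P in (P1, P2, P3):
    for i in range(4):               # replace the i-th spanning vector by each e_j
        for j in range(n):
            rows.append(wedge4(P[:i] + [e(j)] + P[i + 1:]))
# affine dimension of the span of the three tangent spaces:
# the expected value is 3*17 = 51, while the curve argument above forces <= 50
print(np.linalg.matrix_rank(np.array(rows)))
\end{verbatim}
\end{footnotesize}
Since all matrix entries are small integers, the floating-point rank computation here is reliable.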
A geometric explanation of the defectivity of $X=Gr(2,8)$ is similar.
For any $4$ general points in
$X$ we find a Veronese surface embedded with ${\s O}(3)$ which passes through the 4 points and is contained in $X$.
Let $P_1, P_2, P_3, P_4$ be the four points.
We may assume that (see \cite{CGG1})
$P_1=\langle e_0,e_1,e_2 \rangle, P_2=\langle e_3,e_4,e_5 \rangle,
P_3=\langle e_6,e_7,e_8 \rangle, P_4= \langle e_0+e_3+e_6, e_1+e_4+e_7,e_2+e_5+e_8\rangle$.
If $(s,t,u)$ are projective coordinates of $\P2$ we use the matrix
\[\left[\begin{array}{ccc|ccc|ccc}
s&&&t&&&u\\
&s&&&t&&&u\\
&&s&&&t&&&u\\
\end{array}\right]\]
to realize the Veronese surface passing through the 4 points.
It follows that the span of the tangent spaces has dimension at
most ${5\choose{2}}-1+4\cdot(18-2)=73$ while the expected dimension
is $4\cdot 18+3=75$. By Terracini's lemma, this proves the defectivity of $\sigma_4(Gr(2, 8))$.
The geometric argument for the defectivity of $X=Gr(2,6)$ is more subtle. It
can be proved by the following argument which also helps to find
the (set theoretical) equations of the secant varieties of $Gr(2,6)$.
Given $\omega\in\wedge^3{\mathbb C}^7$, there is a well defined contraction operator
$$\phi_{\omega}\colon\wedge^2{\mathbb C}^7\to\wedge^5{\mathbb C}^7, \qquad \phi_{\omega}(\eta)=\eta\wedge\omega. $$
Let $e_1,\ldots ,e_7$ be a basis of ${\mathbb C}^7$. If $\omega=e_1\wedge e_2\wedge e_3$
then $\phi_{\omega}(e_i\wedge e_j)$ is nonzero if and only if $i,j\ge 4$. It follows
that $\textrm{rank}(\phi_{\omega})={4\choose 2}=6$. Since $\textrm{rank}(\phi_{\omega})=\mathrm{rank}(\phi_{g\omega})$
for every $g\in SL({\mathbb C}^7)$, we get
that $\textrm{rank}(\phi_{\omega})=6$ if $\omega\in Gr(2,6)$.
If $\omega=\sum_{i=1}^k \omega_i$ with $\omega_i$ decomposable (i.e. $\omega_i\in Gr(2,6)$) then it follows that
$\textrm{rank}(\phi_{\omega})=\textrm{rank}(\sum_{i=1}^k\phi_{\omega_i})\le \sum_{i=1}^k\textrm{rank}(\phi_{\omega_i})\le 6k$.
Hence if $\omega\in \sigma_k(Gr(2,6))$ then by semicontinuity we have $\textrm{rank}(\phi_{\omega}) \le 6k$.
Consider $\omega=e_1\wedge e_3\wedge e_5+e_1\wedge e_4\wedge e_7+e_1\wedge e_2\wedge e_6+ e_2\wedge e_3\wedge e_4
+e_5\wedge e_6\wedge e_7.$
We can represent $\omega$ via the diagram
\begin{figure}[h]
\begin{center}
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(735,1280)(4000,-800)
\thinlines
\put(4255,-676){\circle*{100}}
\put(4255, 74){\circle*{100}}
\put(4604,-526){\circle*{100}}
\put(4259,-286){\circle*{100}}
\put(5050,-301){\circle*{100}}
\put(4604,-316){\circle*{100}}
\put(4610,-91){\circle*{100}}
\put(5080,-301){\line(-2,-1){810}}
\put(5080,-301){\line(-2, 1){810}}
\put(4255,-676){\line( 0, 1){750}}
\put(5020,-301){\line(-1, 0){795}}
\put(4606,-91){\line( 0,-1){450}}
\put(5191,-361){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}1}}}
\put(4021,-346){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}4}}}
\put(4411,-271){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}7}}}
\put(4591, 14){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}6}}}
\put(3991, 59){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}2}}}
\put(4021,-736){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}3}}}
\put(4621,-766){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}5}}}
\end{picture}
\end{center}
\end{figure}
\noindent An explicit computation shows that $\mathrm{rank}(\phi_{\omega})=21$. It follows that
$\sigma_3(Gr(2,6))$ cannot fill the ambient space, hence $Gr(2,6)$ is defective.
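These ranks are quick to verify by machine. The following Python sketch (an illustration of ours; the helper functions are not from this paper) builds the $21\times 21$ matrix of $\phi_{\omega}(\eta)=\eta\wedge\omega$ in the monomial bases of $\wedge^2{\mathbb C}^7$ and $\wedge^5{\mathbb C}^7$ and computes its rank, both for the element $\omega$ above and for a decomposable element.
\begin{footnotesize}
\begin{verbatim}
from itertools import combinations
from sympy import Matrix

n = 7                                        # indices 1..7 as in the text

def wedge_sign(a, b):
    # sign s with e_a ^ e_b = s * e_{sorted(a+b)}, or (0, None) if indices repeat
    if set(a) & set(b):
        return 0, None
    merged = list(a) + list(b)
    inv = sum(1 for i in range(len(merged)) for j in range(i + 1, len(merged))
              if merged[i] > merged[j])
    return (-1) ** inv, tuple(sorted(merged))

def phi_matrix(omega):
    # matrix of eta |-> eta ^ omega; omega is a dict {(i,j,k): coefficient}, i<j<k
    rows = list(combinations(range(1, n + 1), 5))   # basis of wedge^5 (21 elements)
    cols = list(combinations(range(1, n + 1), 2))   # basis of wedge^2 (21 elements)
    M = [[0] * len(cols) for _ in rows]
    for c, pair in enumerate(cols):
        for triple, coeff in omega.items():
            s, mono = wedge_sign(pair, triple)
            if s:
                M[rows.index(mono)][c] += s * coeff
    return Matrix(M)

# omega = e1^e3^e5 + e1^e4^e7 + e1^e2^e6 + e2^e3^e4 + e5^e6^e7
omega = {(1, 3, 5): 1, (1, 4, 7): 1, (1, 2, 6): 1, (2, 3, 4): 1, (5, 6, 7): 1}
print(phi_matrix(omega).rank())           # 21: omega does not lie in sigma_3(Gr(2,6))
print(phi_matrix({(1, 2, 3): 1}).rank())  # 6: a decomposable element
\end{verbatim}
\end{footnotesize}
The same construction, run with symbolic coefficients, can in principle be used to check the determinant identity appearing in the theorem below.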
\begin{theorem}
Let $\omega\in\wedge^3{\mathbb C}^7$. Consider the contraction operator $\phi_{\omega}\colon\wedge^2{\mathbb C}^7\to\wedge^5{\mathbb C}^7. $
The equation of $\sigma_3(Gr(2,6))$ is given by an $SL(7)$-invariant polynomial $P_7$
of degree seven such that
$$\det(\phi_{\omega})=2\left[P_7(\omega)\right]^3.$$
\end{theorem}
\begin{proof}
The morphism $\phi_{\omega}$ drops rank by three when $\omega$ belongs
to the hypersurface $\sigma_3(Gr(2,6))$. Hence the linear embedding of
$\wedge^3{\mathbb C}^7$ in $\mathrm{Hom}(\wedge^2{\mathbb C}^7,\wedge^5{\mathbb C}^7) $
(given by $\omega\mapsto \phi_{\omega}$)
meets the determinantal hypersurface with multiplicity three. By direct computation
on $\omega=a_{135}e_1\wedge e_3\wedge e_5+a_{147}e_1\wedge e_4\wedge e_7+a_{126}e_1\wedge e_2\wedge e_6+ a_{234}e_2\wedge e_3\wedge e_4
+a_{567}e_5\wedge e_6\wedge e_7$, we see that
$$\det\phi_{\omega}=-2(a_{234}^2a_{567}^2a_{135}a_{147}a_{126})^3.$$ Hence we can arrange the scalar multiples
so that $P_7$ is defined over the rational numbers and the equation $\det(\phi_{\omega})=2\left[P_7(\omega)\right]^3$
holds.
\end{proof}
\begin{remark} The equation $v\wedge v'\wedge\omega=v'\wedge v\wedge\omega$ for
$v, v'\in\wedge^2{\mathbb C}^7$ shows that $\phi_{\omega}$ is symmetric.
A natural symmetric operator such that its determinant is a cube appears already in
\cite{Ot}, where the coefficient $2$ appears at the same place.
The coefficient
$2$ is needed if we want the invariant $P_7$ to be defined over the rational numbers.
The graphical notation found in the above diagram comes from the original paper of Schouten \cite{Sch}.
Indeed, the case $Gr(2,6)$ is in principle well known because $SL(7)$ has only finitely many orbits
on $\P{{}} (\wedge^3{\mathbb C}^7)$. This classification was computed in 1931 by Schouten \cite{Sch},
correcting previous work of Reichel, who had missed the orbit of dimension $20$.
Schouten found all $9$ orbits of this action together with their dimensions.
G.B. Gurevich in his textbook \cite{Gu} gave equations for these orbits but from his
description it is not easy to find the order among the orbits.
In fact the obvious order relation (Bruhat order) among the orbits, such that $O_1 \leq O_2$ if the closure of $O_2$ contains $O_1$, is {\it not a total order}, and indeed this is the first case among Grassmannians where this phenomenon occurs.
We take the opportunity to show in the following table
the order relation among the 9 orbits, computed by Elisabetta Ardito in her laurea thesis, defended in
L'Aquila in 1997 under the supervision of the second author.
It is a distributive lattice.
We have added to each orbit the value of
$\mathrm{rank}(\phi_{\omega})$ together with some geometrical information. Each of the values can be computed easily
on a representative of each orbit. The dimensions can be computed by considering the rank of the derivative
of the action of $SL(7)$ on $\wedge^3{\mathbb C}^7$.
\begin{figure}
\begin{center}
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(6360,10238)(1401,-10044)
\thinlines
\put(4126,-2026){\circle*{100}}
\put(4951,-1651){\circle*{100}}
\put(4951,-2401){\circle*{100}}
\put(4576,-2251){\circle*{100}}
\put(4602,-1801){\circle*{100}}
\put(4951,-2026){\circle*{100}}
\put(4126,-2026){\line( 2, 1){810}}
\put(4126,-2026){\line( 2,-1){810}}
\put(4951,-1651){\line( 0,-1){750}}
\put(4606,-1816){\line( 0,-1){465}}
\put(4591,-2026){\circle*{100}}
\put(4255,-676){\circle*{100}}
\put(4255, 74){\circle*{100}}
\put(4604,-526){\circle*{100}}
\put(4259,-286){\circle*{100}}
\put(5050,-301){\circle*{100}}
\put(4604,-316){\circle*{100}}
\put(4610,-91){\circle*{100}}
\put(5080,-301){\line(-2,-1){810}}
\put(5080,-301){\line(-2, 1){810}}
\put(4255,-676){\line( 0, 1){750}}
\put(5020,-301){\line(-1, 0){795}}
\put(4606,-91){\line( 0,-1){450}}
\thicklines
\put(4606,-2626){\line( 0,-1){765}}
\put(4621,-826){\line( 0,-1){555}}
\thinlines
\put(3882,-9601){\circle*{100}}
\put(4981,-9601){\circle*{100}}
\put(4456,-9601){\circle*{100}}
\put(3856,-9601){\line( 1, 0){1125}}
\put(1917,-5701){\circle*{100}}
\put(3016,-5701){\circle*{100}}
\put(2491,-5701){\circle*{100}}
\put(1891,-5701){\line( 1, 0){1125}}
\put(1917,-5326){\circle*{100}}
\put(3016,-5326){\circle*{100}}
\put(2491,-5326){\circle*{100}}
\put(1891,-5326){\line( 1, 0){1125}}
\put(2842,-6826){\circle*{100}}
\put(2842,-7576){\circle*{100}}
\put(2493,-6976){\circle*{100}}
\put(2838,-7216){\circle*{100}}
\put(2497,-7411){\circle*{100}}
\put(2047,-7201){\circle*{100}}
\put(2017,-7201){\line( 2, 1){810}}
\put(2017,-7201){\line( 2,-1){810}}
\put(2842,-6826){\line( 0,-1){750}}
\put(4021,-8341){\circle*{100}}
\put(4846,-7966){\circle*{100}}
\put(4846,-8716){\circle*{100}}
\put(4471,-8566){\circle*{100}}
\put(4497,-8116){\circle*{100}}
\put(4021,-8341){\line( 2, 1){810}}
\put(4021,-8341){\line( 2,-1){810}}
\put(7012,-6496){\circle*{100}}
\put(6663,-6646){\circle*{100}}
\put(7008,-6886){\circle*{100}}
\put(6217,-6871){\circle*{100}}
\put(6663,-6856){\circle*{100}}
\put(6657,-7081){\circle*{100}}
\put(7017,-7246){\circle*{100}}
\put(6187,-6871){\line( 2, 1){810}}
\put(6187,-6871){\line( 2,-1){810}}
\put(6247,-6871){\line( 1, 0){795}}
\put(7192,-4846){\circle*{100}}
\put(6843,-4996){\circle*{100}}
\put(7188,-5236){\circle*{100}}
\put(6397,-5221){\circle*{100}}
\put(6843,-5206){\circle*{100}}
\put(6837,-5431){\circle*{100}}
\put(7197,-5596){\circle*{100}}
\put(6367,-5221){\line( 2, 1){810}}
\put(6367,-5221){\line( 2,-1){810}}
\put(6427,-5221){\line( 1, 0){795}}
\put(7216,-4816){\line( 0,-1){825}}
\put(4006,-4605){\circle*{100}}
\put(5146,-4616){\circle*{100}}
\put(4617,-4601){\circle*{100}}
\put(4021,-4601){\line( 1, 0){1125}}
\put(4006,-3731){\circle*{100}}
\put(5131,-3731){\circle*{100}}
\put(4591,-3731){\circle*{100}}
\put(4006,-3731){\line( 1, 0){1125}}
\put(3991,-4171){\circle*{100}}
\put(3991,-3766){\line( 0,-1){825}}
\thicklines
\put(4441,-8836){\line( 0,-1){525}}
\put(3721,-4396){\line(-2,-1){912}}
\put(2236,-6076){\line( 0,-1){645}}
\put(6586,-5551){\line( 0,-1){795}}
\put(3016,-7516){\line( 2,-1){1050}}
\put(3241,-6841){\line( 2, 1){2850}}
\put(6196,-7351){\line(-2,-1){912}}
\put(5311,-4351){\line( 5,-2){1350}}
\put(5221,-9586){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}dim 12, rank=6, G}}}
\put(5261,-9886){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}degree=42}}}
\put(5086,-8251){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}dim 19, rank=10}}}
\put(5086,-8451){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}restricted chordal
variety}}}
\put(5086,-8651){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}see \cite{FuHa} exerc.
15.44}}}
\put(7561,-5071){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}dim 27,
$(Tan(G))^{\vee}$}}}
\put(7600,-5371){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}rank=15}}}
\put(5401,-1921){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}dim 33, rank=18,
$G^{\vee}\simeq \sigma_3(G)$}}}
\put(5440,-2221){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}degree=7}}}
\put(5416,-285){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}dim 34, rank=21}}}
\put(5416,-485){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}ambient space}}}
\put(876,-7005){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}$Tan(G)$, dim 24}}}
\put(1076,-7325){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}rank=12}}}
\put(7531,-6856){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}dim 20,
$(\sigma_2(G))^{\vee}$}}}
\put(7431,-7056){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}rank=15}}}
\put(801,-5536){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}$\sigma_2(G)$, dim 25}}}
\put(1100,-5936){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}rank=12}}}
\put(5476,-3961){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}dim 30, rank=16,
$Sing(\sigma_3(G))$}}}
\end{picture}
\caption{Orbits for $SL(7)$-action on $\wedge^3{\mathbb C}^7$}
\label{figure}
\end{center}
\end{figure}
\end{remark}
From this description we deduce the following theorem:
\begin{theorem}
For $\omega\in \wedge^3{\mathbb C}^7$ the following holds:
\begin{itemize}
\item[$\mathrm{(i)}$] $\omega\in Gr(2,6)$ if and only if $\mathrm{rank}(\phi_{\omega})\le 6$.
\item[$\mathrm{(ii)}$] $\omega\in \sigma_2(Gr(2,6))$ if and only if $\mathrm{rank}(\phi_{\omega})\le 12$. Hence
the $13\times 13$ minors of $\phi_{\omega}$ give set theoretic equations of
$\sigma_2(Gr(2,6))$.
\item[$\mathrm{(iii)}$] $\omega\in \sigma_3(Gr(2,6))$ if and only if $\mathrm{rank}(\phi_{\omega})\le 18$.
\end{itemize}
\end{theorem}
In particular the table on the following page shows the possible degenerations of elements in $
\wedge ^3
{\mathbb C}^7$.
There are two degenerations which are not obvious:
\begin{itemize}
\item
The degeneration of P$_{25}$ in P$_{24}$ (the subscript means the dimension) resulting from
\[
\begin{array}{l}
\lim_{t \rightarrow 0}\frac{1}{t}[e_1 \wedge e_2 \wedge e_3 - (e_1+te_4)\wedge
(e_2+te_5) \wedge
(e_3+te_6)] \\
= -(e_1 \wedge e_2 \wedge e_6+e_1 \wedge e_5 \wedge e_3+e_4 \wedge e_2 \wedge e_3)
\end{array}
\]
\item
The degeneration of P$_{30}$ in P$_{27}$ resulting from
\[
\begin{array}{ll}
\lim_{t \rightarrow 0}\frac{1}{t}[e_1 \wedge e_2 \wedge e_3+e_3 \wedge e_7 \wedge(e_3+te_6) & \\
-(e_3+te_6)\wedge(e_1+te_4)\wedge(e_2+te_5)] & \\
= e_3 \wedge e_7 \wedge e_6 - e_3 \wedge e_1 \wedge e_5 - e_3 \wedge e_4 \wedge e_2 -
e_6 \wedge e_1 \wedge e_2 &
\end{array}
\]
\end{itemize}
All the other degenerations can be seen by considering the shape of the diagrams.
Vinberg's school \cite{VE} extended the classification of Schouten and Gurevich to higher dimensions, but
the Bruhat order of the orbits has not yet been written down explicitly.
Notice that the hypersurface $\sigma_3(Gr(2,6))$ is isomorphic to the dual variety
of $Gr(2,6)$ (which has degree $7$, see \cite{La}).
It is called $C_8$ in the notation of \cite{Gu}, pg. 393.
It can be computed that $P_7$ is a polynomial with 10,680 terms.
The ideal of the secant variety $\sigma_2(Gr(2,6))$
is generated by 28 cubics which correspond to the ideal
$\Gamma^{3,1^6}V\subset S^3(\wedge^3{\mathbb C}^7)$, which is the covariant $C_4$
according to \cite{Gu}, pg. 393.
It is interesting to check that $\mathrm{Sing}(\sigma_3(Gr(2,6)))$ is the orbit of dimension $30$,
while $\mathrm{Sing}(\sigma_2(Gr(2,6)))=Gr(2,6)$.
| {
"timestamp": "2009-01-17T01:40:14",
"yymm": "0901",
"arxiv_id": "0901.2601",
"language": "en",
"url": "https://arxiv.org/abs/0901.2601",
"abstract": "Let $Gr(k,n)$ be the Plücker embedding of the Grassmann variety of projective $k$-planes in $¶n$. For a projective variety $X$, let $\\sigma_s(X)$ denote the variety of its $s-1$ secant planes. More precisely, $\\sigma_s(X)$ denotes the Zariski closure of the union of linear spans of $s$-tuples of points lying on $X$. We exhibit two functions $s_0(n)\\le s_1(n)$ such that $\\sigma_s(Gr(2,n))$ has the expected dimension whenever $n\\geq 9$ and either $s\\le s_0(n)$ or $s_1(n)\\le s$. Both $s_0(n)$ and $s_1(n)$ are asymptotic to $\\frac{n^2}{18}$. This yields, asymptotically, the typical rank of an element of $\\wedge^{3} 1pt {\\mathbb C}^{n+1}$. Finally, we classify all defective $\\sigma_s(Gr(k,n))$ for $s\\le 6$ and provide geometric arguments underlying each defective case.",
"subjects": "Algebraic Geometry (math.AG); Commutative Algebra (math.AC)",
"title": "Non-Defectivity of Grassmannians of planes",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9890130583409235,
"lm_q2_score": 0.7154239897159439,
"lm_q1q2_score": 0.707563668079431
} |
https://arxiv.org/abs/1805.07019 | $\mathcal{P}$ Play in Candy Nim | Candy Nim is a variant of Nim in which both players aim to take the last candy in a game of Nim, with the added simultaneous secondary goal of taking as many candies as possible. We give bounds on the number of candies the first and second players obtain in 3-pile $\mathcal{P}$ positions as well as strategies that are provably optimal for some families of such games. We also show how to construct a game with $N$ candies such that the loser takes the largest possible number of candies and bound the number of candies the winner can take in an arbitrary $\mathcal{P}$ position with $N$ total candies. | \section{Introduction}
One of the first serious results in the study of combinatorial games was Bouton's solution to the game of \textsc{Nim} in~\cite{Bouton02}.
\begin{defn}\label{d:nim} \textsc{Nim} is a two-player game played with several piles of stones. In a turn, a player removes some number of stones from one pile. The player taking the last stone wins.
\end{defn}
Beyond its historical interest, the game of \textsc{Nim} is interesting because a wide family of games, the so-called \emph{finite normal-play impartial combinatorial games}, can all be reduced to the game of \textsc{Nim}, thanks to the celebrated \emph{Sprague-Grundy theory}, as first described in~\cite{Sprague35} and~\cite{Grundy39}. In this paper, we describe and study a slight modification of the game of \textsc{Nim}, known as \textsc{Candy Nim}, which is interesting in its own right as being a blend of an impartial combinatorial game and a scoring game. While impartial combinatorial games have been widely studied ever since the time of Bouton, the study of scoring games has only recently attracted interest, for instance in~\cite{LNNS15}, ~\cite{larsson2017games}, ~\cite{johnson2014combinatorial}, and~\cite{Stewart13}.
In any \textsc{Nim} game, either the first player to move or the second player to move must have a winning strategy, but not both. We classify the \textsc{Nim} positions based on which player wins with optimal play.
\begin{defn} We call games in which the first player wins with optimal play $\mathcal{N}$ positions. Similarly, we call games in which the second player wins with optimal play $\mathcal{P}$ positions. We call $\mathcal{N}$ and $\mathcal{P}$ the \emph{outcome classes} of the associated games.
\end{defn}
\begin{rem}
In the two-player game of \textsc{Nim}, we refer to the losing player as Luca and the winning player as Windsor.
\end{rem}
It is easy to compute the outcome classes and winning strategy for \textsc{Nim} games, based on the following function.
\begin{defn} We define the function $\oplus$ of $a, b \in \mathbb{Z}$, called the \textit{nim-sum} as follows: $a\oplus b$ is given by the \textsf{XOR} of $a$ and $b$. (This is given by writing both $a$ and $b$ in binary and adding without carrying.)
\end{defn}
\begin{ex} Let us compute $9\oplus 5$: \begin{center}\begin{tabular}{cccccc} 9: && 1 & 0 & 0 & 1 \\ 5: &$\oplus$ & 0 & 1 & 0 & 1 \\ \hline 12 && 1 & 1 & 0 & 0 \end{tabular} \end{center} \end{ex}
\begin{thm}[Bouton~\cite{Bouton02}] The \textsc{Nim} game with piles of size $a_1, \ldots , a_p$ is a $\mathcal{P} $ position if and only if $n := a_1 \oplus \cdots \oplus a_p = 0$. If $n \neq 0$, winning moves take the total nim-sum to zero.
\end{thm}
\begin{defn} Given a \textsc{Nim} game $G$, with piles of size $a_1,\ldots,a_p$, we define its \emph{Grundy value} $\mathcal{G}(G)$ to be $a_1\oplus a_2\oplus\cdots\oplus a_p$. \end{defn}
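For experimentation, both the Grundy value and the winning moves of Bouton's theorem take only a few lines of code. The sketch below is ours (it is not the program referenced later in this introduction) and simply restates the theorem computationally.
\begin{footnotesize}
\begin{verbatim}
from functools import reduce

def grundy(piles):
    # nim-sum a_1 xor ... xor a_p of the pile sizes
    return reduce(lambda x, y: x ^ y, piles, 0)

def winning_moves(piles):
    # all plies (pile index, new size) that bring the nim-sum back to zero
    n = grundy(piles)
    return [(i, a ^ n) for i, a in enumerate(piles) if a ^ n < a]

print(grundy([9, 5]))            # 12, as in the worked example above
print(winning_moves([1, 2, 4]))  # [(2, 3)]: the unique winning move reduces 4 to 3
\end{verbatim}
\end{footnotesize}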
Because we have an easily computable winning strategy for \textsc{Nim}, the game is boring to play, especially for Luca. In order to make the game more interesting, we change the objective slightly.
\begin{defn}\label{d:candynim} \textsc{Candy Nim} is a two-player combinatorial game with the same setup and game play as \textsc{Nim}. However, in addition to the primary goal of making the last move (as in \textsc{Nim}), players have a secondary goal of collecting as many stones, or \textit{candies} as possible.
\end{defn}
\begin{rem} Note that in \textsc{Candy Nim}, winning always takes priority over collecting candies. No number of candies can fully compensate for the embarrassment of losing the game. An alternative formulation of \textsc{Candy Nim} is that, at the end of the game, the loser (Luca) must give half of her candies to the winner, so that the winner (Windsor) always ends up with more candies than Luca.
\end{rem}
The game of \textsc{Candy Nim} was first introduced by Michael Albert in~\cite{Albert} in an unpublished set of slides based on a talk he gave at the CMS meeting in Halifax in 2004.\footnote{\url{http://www.cs.otago.ac.nz/research/theory/Talks/CandyNim.pdf}} He observed, among other things, that it is not always optimal for Luca to remove candies from the largest pile and also provided some results on values of \textsc{Candy Nim} games (see~\S\ref{sec:prelims} for a definition of the value of a game).
\begin{rem} In this paper, we focus on the $\mathcal{P}$ games. This is because Luca has many more options to play with than Windsor. At every turn, in optimal play, Windsor must bring the nim-sum of all of the pile sizes down to zero. This severely limits the options of the winning player. In many of the positions we will study, Windsor will only have a single move available on each turn. On the other hand, Luca loses no matter what her move is, giving room for optimizing her move with respect to the number of candies she collects. Consequently, her turns are more interesting to consider.
\end{rem}
Here, we study certain classes of $\mathcal{P}$ positions in \textsc{Candy Nim}. Throughout, we will assume that Windsor is \emph{forced} to play winning moves in the underlying \textsc{Nim} games, so that losing moves are illegal.
After describing the relevant notation in \S\ref{sec:prelims}, we turn to a study of the 3-pile game in \S\ref{sec:3pilelemmas}--\ref{sec:3pile}.\S\ref{sec:3pilelemmas} contains several lemmas that are helpful in the study of 3-pile games and sometimes \textsc{Candy Nim} more generally. In \S\ref{sec:3pilestrats}, we give two strategies for the 3-pile game. The first of these, the \emph{flip-flop strategy}, is a simple strategy for Luca to take as many candies as possible on the current move, subject to allowing Windsor only to remove a single candy. This strategy is easy to work with and analyze, and so is useful for providing bounds for the number of candies each player takes. We also introduce a refinement of the flip-flop strategy, called the \emph{fractal strategy}. The fractal strategy scores better than the flip-flop strategy, and we conjecture that it is optimal for games in a certain standard form, defined in~\S\ref{sec:prelims}. In~\S\ref{sec:3pile}, we prove bounds on the values of a certain important family of 3-pile games. In Theorem~\ref{t:standard}, we give a strategy for a broad class of $3$-pile games and prove this strategy optimizes the number of candies collected by Luca to within a constant factor of the smallest pile size. This enables us to give fairly strong upper and lower bounds on the difference in candies collected by Luca and Windsor for a large family of $3$-pile games in Theorem~\ref{t:genbound}.
In \S\ref{sec:multipile}, we consider the following problem: Luca gets to distribute an even number $N$ of candies among several piles, subject to the constraint that the resulting position must be a $\mathcal{P}$ position. Her goal is to maximize the number of candies she can take. In Theorem~\ref{thm:equalitycases}, we show that in a game $G$ with $N$ candies, Luca cannot take more than $N - \lfloor \log_2(N) \rfloor$ candies independent of arrangement. In this result, we give explicit characterization of all games where this upper bound is achieved. More generally, we show in Theorem~\ref{thm:5pile} that Luca can always arrange $N$ candies so that Windsor takes at most $O(\sqrt{N})$ candies, in an arrangement with at most $5$ piles. In Theorem~\ref{thm:k-1}, we show that in most games $G$ with $k$ piles, Windsor can take at least $k-1$ candies.
In \S\ref{sec:examples}, we conclude with some remarks on the 4-pile game, including examples where the best move for Luca is not in the largest pile. We also present some conjectures that we hope will inspire future work.
Throughout this work, we provide some worked examples of optimal play for specific \textsc{Candy Nim} games. We encourage readers to generate their own examples using our program, which is available for download.\footnote{\url{https://github.com/nmani2/candynim}}
\section{Preliminaries} \label{sec:prelims}
We begin with some definitions and notation that will be helpful for the analysis of the $3$ and $n$-pile \textsc{Candy Nim} games. Unless otherwise specified, $G$ will always refer to \textsc{Candy Nim} games.
\begin{defn}
Given a \textsc{Candy Nim} game $G$, let $N(G)$ be the total number of candies in the game. Let $N_W(G)$ be the number of candies collected by winning player Windsor, and let $N_L(G)$ be the number of candies collected by losing player Luca, assuming optimal play.
\end{defn}
Our primary goal is to bound the number of candies Luca (the losing player) can collect relative to Windsor in a $\mathcal{P}$ game, assuming optimal play. This difference in candies will be denoted the \textit{value} of the associated \textsc{Candy Nim} game:
\begin{defn} The \textit{value} $V(G)$ of a game $G$ is given by \[V(G) = N_L(G) - N_W(G).\]
\end{defn}
\begin{defn}A \textit{turn} is a triple of games $T = (G, G', G'')$ where Luca moves from $G$ to $G'$ and Windsor moves from $G'$ to $G''$. We call each move made by a player from $G$ to $G'$ a \textit{ply} $P = (G, G')$.
\end{defn}
Many of the bounds, such as those in Theorem \ref{t:standard} and Theorem \ref{t:genbound} will arise from the analysis of specific strategies or sequences of moves by Luca and Windsor. To simplify these analyses, we introduce some notation:
\begin{defn}
The \textit{single-turn value} of a turn $T = (G, G', G'')$, $V_T(G)$ is given as \[V_T(G) = (N(G) - N(G')) - (N(G') - N(G'')) = N(G) + N(G'') - 2N(G').\]
\end{defn}
\begin{defn}
A \textit{strategy} $S$ of game $G$ is a sequence of turns $T_i = (G_i, G_i', G_i'')$ for $1\le i\le n$ such that $G_1 = G$ and for each $j<n$, we have $G_j'' = G_{j+1}$. Furthermore, $G_n''=\varnothing$. We call the \textit{strategic value} of $G$ with strategy $S$ the difference between the number of candies collected by Luca and Windsor under strategy $S$, i.e.\ $V_S(G) = \sum V_{T_i}(G)$. An \textit{optimal strategy} is a strategy such that $\sum V_{T_i}(G) = V(G)$.
\end{defn}
\begin{defn}
The \textit{semiratio} of a turn $T = (G, G', G'')$, $r_T(G)$, is defined to be \[r_T(G) = \frac{N(G) - N(G')}{N(G') - N(G'')}.\]
\end{defn}
\begin{defn} We write $G=[a_1,a_2,a_3,\ldots,a_p]$ for the game with $p$ piles in which pile $i$ has $a_i \geq 0$ candies.
\end{defn}
In Theorem \ref{t:standard} and Theorem \ref{t:genbound}, we provide lower and upper bounds on $V(G)$ when $G = [a_1, a_2, a_3]$ is a $3$-pile \textsc{Candy Nim} game. In giving such results, it is helpful to use an alternative characterization of $G$:
\begin{defn}\label{d:g3}
Let $\mathfrak{G}(a, m, x)$ be the $3$-pile \textsc{Candy Nim} game \[\mathfrak{G}(a, m, x) = [a,\, 2^{k+1}\cdot m + x,\, 2^{k+1}\cdot m + a \oplus x]\] where $k = \lfloor \log_2 a \rfloor$, $m \geq 1$, and $0 \leq x < 2^k$.
\end{defn}
\begin{rem}
Note that as per Definition~\ref{d:g3}, $2^{k+1}$ is the smallest power of $2$ strictly greater than $a$.
\end{rem}
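As a quick sanity check, every game $\mathfrak{G}(a,m,x)$ of Definition~\ref{d:g3} has nim-sum zero: since $x < 2^k$ and $a \oplus x < 2^{k+1}$, the two large piles agree above bit $k$, and their low-order bits \textsf{XOR} to $a$. The short Python sketch below (ours) confirms this numerically on random parameters.
\begin{footnotesize}
\begin{verbatim}
from math import floor, log2
from random import randrange

def G3(a, m, x):
    # the game G(a, m, x) of the definition above
    k = floor(log2(a))
    return (a, 2 ** (k + 1) * m + x, 2 ** (k + 1) * m + (a ^ x))

for _ in range(10000):
    a = randrange(1, 1000)
    m = randrange(1, 100)
    x = randrange(0, 2 ** floor(log2(a)))    # 0 <= x < 2^k
    p1, p2, p3 = G3(a, m, x)
    assert p1 ^ p2 ^ p3 == 0                 # every sampled game is a P position
print("ok")
\end{verbatim}
\end{footnotesize}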
\begin{defn}
A game $G$ is in \textit{standard form} if \[G = \mathfrak{G}(2^{k+1} - 1, m, 0) = [2^{k+1} - 1, \,2^{k+1} \cdot m,\, 2^{k+1} (m + 1) - 1].\]
\end{defn}
\begin{defn} Games $G=[g_1,g_2,g_3,\ldots,g_p]$ and $H=[h_1,h_2,h_3,\ldots ,h_q]$ have sum $G+H$ defined by concatenation as follows
\[G+H=[g_1,g_2,g_3,\ldots,g_p,h_1,h_2,h_3,\ldots,h_q] \]
\end{defn}
To gain some basic intuition about \textsc{Candy Nim} consider the following lemma:
\begin{lemma} \label{lem:valuegeq0}
For any game $G$, $V(G) \geq 0$.
\end{lemma}
\begin{proof}
Consider the strategy where Luca takes all of the candies from the largest pile in game $G_i$ for the $i^\text{th}$ turn $T_i$. Then, $V_{T_i}(G_i) \geq 0$ for all $i$, implying that $V(G) \geq 0$.
\end{proof}
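Small games can also be solved exhaustively. The following brute-force solver is a minimal sketch of ours (it is not the program referenced in the introduction): it computes $V(G)$ for a small $\mathcal{P}$ position, with Windsor restricted to winning replies as above and both players optimizing their candy counts.
\begin{footnotesize}
\begin{verbatim}
from functools import lru_cache

def nimsum(piles):
    n = 0
    for p in piles:
        n ^= p
    return n

@lru_cache(maxsize=None)
def value(piles):
    # V(G) = (Luca's candies) - (Windsor's candies) for a P position `piles`
    piles = tuple(sorted(p for p in piles if p > 0))
    if not piles:
        return 0
    best = None
    for i, a in enumerate(piles):                  # Luca's ply: pile i down to `new`
        for new in range(a):
            after_luca = piles[:i] + (new,) + piles[i + 1:]
            n = nimsum(after_luca)
            # Windsor must make a winning reply; among those he minimizes Luca's lead
            replies = [(j, b ^ n) for j, b in enumerate(after_luca) if b ^ n < b]
            outcome = min((a - new) - (after_luca[j] - b)
                          + value(after_luca[:j] + (b,) + after_luca[j + 1:])
                          for j, b in replies)
            best = outcome if best is None else max(best, outcome)
    return best

print(value((1, 2, 3)))   # 2
print(value((1, 6, 7)))   # 6
\end{verbatim}
\end{footnotesize}
Both printed values agree with the closed form established below for the family $[1, 2m, 2m+1]$.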
\section{Lemmas for the 3-Pile Game} \label{sec:3pilelemmas}
\begin{lemma} \label{lem:oddwinningmoves}
For any $\mathcal{N}$ position $G = [g_1, \ldots, g_p]$, suppose there are exactly $j$ piles $g_w$ such that there exists a winning move in pile $w$. Then $j$ is odd.
\end{lemma}
\begin{proof} We give the binary representations of $n$ and $g_i$, as follows:
let $$n = g_1 \oplus \cdots \oplus g_p = \sum _{i = 0} ^k n_i \cdot 2^i, \quad n_k = 1,\, n_i \in \{0,1\}, 0 \le i < k,$$ and for each $1 \leq i \leq p$, let $$g_i = \sum _{h = 0} ^\infty b_{i,h} \cdot 2^h, \quad b_{i,h} \in \{0,1\}.$$ Then Windsor has a winning move in pile $g_w$ if and only if $b_{w,k} = 1$. Since $n_k=1$, the number of $w$'s such that $b_{w,k}=1$ must be odd. The piles $g_w$ that contain winning moves are exactly those piles such that $b_{w,k}=1$ (see e.g.~\cite[Proof of Theorem 7.12]{ANW07}), so there are an odd number of piles containing winning moves.
\end{proof}
\begin{lemma} \label{lem:unique3pilemove}
For any 3-pile $\mathcal{P}$ position $G$ and a ply $P = (G, G')$ by Luca, there exists a unique $\mathcal{P}$ position $G''$ such that $T = (G, G', G'')$ is a turn in \textsc{Candy Nim}.
\end{lemma}
\begin{proof}
Suppose $G = [g_1, g_2, g_3]$ and Luca moves to $G' = [g_1', g_2, g_3]$. By Lemma~\ref{lem:oddwinningmoves}, Windsor has an odd number of winning moves from $G'$, and he has at most one winning move per pile. Let $n'=\mathcal{G}(G')$. Suppose first that Windsor attempts to take $g_1' - g_1''$ candies from the first pile, leaving the game $G'' = [g_1'', g_2, g_3]$. Since $g_2 \oplus g_3 = g_1$ and $g_1 \neq g_1''$, $n'' = g_1'' \oplus g_2 \oplus g_3 = g_1'' \oplus g_1 \neq 0$. Thus Windsor may not move in the first pile. It follows that Windsor may only move in $g_2$ or $g_3$. Since his number of winning moves is odd and at most 2, he has a unique winning move from $G'$.
\end{proof}
\begin{lemma} \label{lem:Semiratio}
Given $G = \mathfrak{G}(a, m, x)$ and any turn $T = (G, G', G'')$, the semiratio $r_T(G)$ is at most $2a + 1$.
\end{lemma}
\begin{proof}
Consider a turn $T = (G, G', G'')$. We show $r_T(G) \le 2a + 1$. If Luca's ply $(G, G')$ is in the smallest pile, she would take at most $a$ candies, yielding a semiratio at most $a< 2a + 1$.
Suppose Luca moves in either the middle pile or the largest pile such that $G'' = \mathfrak{G}(a', m', x')$ where $a' \neq a$. There are several cases to consider.
\begin{description}
\item[Case 1] Suppose $G' = [a, 2^{k+1}m + x, a']$ and $G'' = [a, a \oplus a', a']$. Then, \begin{align*} V_T(G) &= (2^{k+1}m + x\oplus a - a') - (2^{k+1}m + x -a'\oplus a) \\ &= x\oplus a - x + a' \oplus a - a' \\ &\leq x+a-x+a'+a-a' \\ &= 2a.\end{align*} Therefore, under this strategy, $$r_T(G) = \frac{N(G) - N(G')}{N(G') - N(G'')} \le \frac{2a+1}{1} \leq 2a + 1.$$
\item[Case 2] Suppose $G' = [a, a', 2^{k+1}m + x\oplus a]$ and $G'' = [a, a \oplus a', a']$. Then, \begin{align*} V_T(G) &= (2^{k+1}m + x - a') - (2^{k+1}m + x\oplus a -a'\oplus a) \\ &= x - a' - x\oplus a + a \oplus a' \\ &\leq x-a'-(a-x)+(a+a') \\ &= 2x.\end{align*} Therefore, $r_T(G) \leq 2x + 1 \le 2a + 1$ under this strategy.
\item[Case 3] Suppose $G' = [a, 2^{k+1}m + x, 2^{k+1}m + x\oplus a']$ and $G'' = [a', 2^{k+1}m + x, 2^{k+1}m + x\oplus a']$. Then, \begin{align*} V_T(G) &= (x\oplus a - x\oplus a') - (a - a') \\ &\leq (x+a)-|x-a'|-a+a' \\ &= a' + x - |a' - x|.\end{align*} This single-turn value is either $2x$ or $2a'$, so $r_T(G) \le 2 \max(x, a') + 1 \leq 2a + 1$ under this strategy.
\item[Case 4] Suppose $G' = [a, 2^{k+1}m + x\oplus a\oplus a', 2^{k+1}m + x\oplus a]$ and $G'' = [a', 2^{k+1}m + x\oplus a\oplus a', 2^{k+1}m + x\oplus a]$. Then, $$V_T(G) = x - x\oplus a\oplus a' - (a - a') \leq x - a + a'.$$ Therefore, $r_T(G) \leq 2a + 1$ under this strategy.
\end{description}
The last possible situation is when $G'' = \mathfrak{G}(a, m', x')$. Here, there are two cases to consider.
\begin{description}
\item[Case 1] $m' = m$ and $x' < x$. Then, $$N_L(G) - N_L(G'') < 2a + 1.$$
\item[Case 2] $m < m'$. Then, $V_T(G)$ is maximized when $G' = [a, 2^{k+1}m + x, 2^{k+1}m' + x']$. Here,
\begin{align*}
V_T(G) &= ((2^{k+1}m + x\oplus a) - (2^{k+1}m' + x')) - ((2^{k+1}m + x) - (2^{k+1}m' + x'\oplus a)) \\
&= x\oplus a - x' - x + x'\oplus a \\
&\leq x+a-x'-x+x'+a \\ &= 2a.
\end{align*}
In both cases, $r_T(G) = \frac{N(G) - N(G')}{N(G') - N(G'')} \le 2a+1$.
\end{description}
\end{proof}
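The bound of Lemma~\ref{lem:Semiratio} can also be probed numerically. The sketch below (ours) enumerates every turn $T=(G,G',G'')$ of a given $G=\mathfrak{G}(a,m,x)$, using the fact that Windsor's reply is unique (Lemma~\ref{lem:unique3pilemove}), and reports the largest semiratio encountered, which the lemma predicts is at most $2a+1$.
\begin{footnotesize}
\begin{verbatim}
from fractions import Fraction
from math import floor, log2

def winning_replies(piles):
    n = 0
    for p in piles:
        n ^= p
    return [(i, p ^ n) for i, p in enumerate(piles) if p ^ n < p]

def max_semiratio(a, m, x):
    # largest r_T(G) over all turns T = (G, G', G'') of G = G(a, m, x)
    k = floor(log2(a))
    G = (a, 2 ** (k + 1) * m + x, 2 ** (k + 1) * m + (a ^ x))
    best = Fraction(0)
    for i, p in enumerate(G):                   # Luca's ply: pile i down to `new`
        for new in range(p):
            Gp = G[:i] + (new,) + G[i + 1:]
            for j, q in winning_replies(Gp):    # Windsor's (unique) winning reply
                best = max(best, Fraction(p - new, Gp[j] - q))
    return best

print(max_semiratio(5, 2, 3), "vs bound", 2 * 5 + 1)
\end{verbatim}
\end{footnotesize}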
\section{Strategies for the $3$-Pile Game} \label{sec:3pilestrats}
In this section, we present two strategies for certain families of the 3-pile game, which we call the \emph{flip-flop strategy} and the \emph{fractal strategy}. The flip-flop strategy is a simple strategy that, until the last turn, allows Luca to take as many candies as possible subject to allowing Windsor only to take one candy on that turn. The fractal strategy is an iterative variant of the flip-flop strategy which scores better, but for which it is difficult to compute the precise value. We conjecture that some version of the fractal strategy is optimal for games in standard form.
\subsection{The Flip-Flop Strategy} \label{sec:flipflop}
We begin by considering a class of games $[1, 2m, 2m+1]$. We show that they have a simple optimal strategy.
\begin{prop} \label{prop:V1flipflop}
$V([1, 2m, 2m+1]) = 2m$.
\end{prop}
\begin{proof} We give the strategy and prove its optimality by induction on $m$.
If $m=1$, then $G = [1,2,3]$. Optimal play occurs when the first move is $T_1 = (G, [1,2,0], [1,1,0])$ with $V(G) = 2$ and $N_L(G) = 4$.
Now assume that $V([1,2m,2m+1])=2m$ is true for all $1 \le m \le m'$. We first show that for $G=[1,2(m'+1),2(m'+1)+1]$, $V(G) \geq 2m'+2$. Consider the strategy $S$ consisting of initial turn $$T_1 = (G, G' = [1,2(m'+1), 2m'], G''= [1,2m' + 1, 2m'])$$
followed by optimal play as per the inductive hypothesis for the resulting game $G'' = [1, 2m', 2m' + 1]$. $V_{T_1}(G) = 2$ and $V(G'') = 2m'$ by the inductive hypothesis, giving $V(G) \ge 2m' + 2$. \par
To show this strategy is optimal, we prove $V(G) \leq 2m'+2$. Consider the four possible cases for Luca's first move.
\begin{description}
\item[Case 1] Consider strategy $S_1$ where Luca takes from the smallest pile. Then the first turn $T_1 = (G, G', G'')$ satisfies $G'' = [0,2(m'+1),2(m'+1)]$ by Lemma~\ref{lem:unique3pilemove}. Then, $$V_{S_1}(G) = V_{T_1}(G) + V(G'') = 0 + 0 < 2m' + 2.$$
\item[Case 2] Consider strategy $S_2$ where Luca takes $2k$ candies from the largest pile such that the first turn is $$T_1 = (G, G' = [1,2(m'+1),2j+1], G'' = [1,2j,2j + 1]),$$ where $j = m'+1 - k$. Note that $V_{T_1}(G) = 0$. By induction, $V(G'') = 2j$. Therefore, $$V_{S_2}(G) = V_{T_1}(G) + V(G'') = 2j < 2m'+2.$$
\item[Case 3] Consider strategy $S_3$ where Luca takes $2k + 1$ candies from the largest pile such that the first turn is $$T_1 = (G, G' = [1,2(m'+1),2j], G'' = [1,2j+1,2j]),$$ where $j = m'+1 - k$. This time, $V_{T_1}(G) = 2$. By induction, $V(G'') = 2j$. Therefore, $$V_{S_3}(G) = V_{T_1}(G) + V(G'') = 2 + 2j \leq 2m'+2.$$
\item[Case 4] Consider strategy $S_4$ where Luca takes $k$ candies from the medium pile such that the first turn is $$T_1 = (G, G' = [1, 2j, 2(m'+1) + 1], G'' = [1, 2j, 2j+1])$$ or $$T_1 = (G, G' = [1, 2j + 1, 2(m'+1) + 1], G'' = [1, 2j, 2j+1]).$$ In this case, $V_{T_1} \leq 0$. By induction, $V(G'') = 2j$. Therefore, $$V_{S_4}(G) = V_{T_1}(G) + V(G'') \leq 2j < 2m'+2.$$
\end{description}
Since $\max(V_{S_1}(G), V_{S_2}(G), V_{S_3}(G), V_{S_4}(G)) \le 2m'+2$, we obtain the desired equality $V(G) = 2m' +2$. \end{proof}
The inductive optimal strategy in the proof above is pictorially represented by the sequence of moves in Figure~\ref{fig:flipflop}.
\begin{figure} [h]
\begin{displaymath}
\xymatrix{ [1 & 2m & 2m+1\ar@[red][d]^{\,\color{red}(-3)}]\\
[1 & 2m\ar@[green][d]^{\color{green}\,(-1)} & 2(m-1)] \\
[1 & 2(m-1)+ 1\ar@[red][d]^{\color{red}\,(-3)} & 2(m-1)] \\
[1 & \vdots & \vdots\ar@[green][d]^{\color{green}\,(-1)}] \\
[1 & 2 & 3\ar@[red][d]^{\color{red}\,(-3)}] \\ [1 & 2\ar@[green][d]^{\color{green}\,(-1)} & 0] \\ [1 & 1 & 0] }
\end{displaymath}
\caption{The sequence of moves that occurs when using the flip-flop strategy in the game $G = [1, 2m, 2m+1]$.}
\label{fig:flipflop}
\end{figure}
More generally, we can consider the following strategy, for games of the form $G = [2^k-1,2^k \cdot m,2^k \cdot (m+1)-1]$, that we will term the \textit{flip-flop} strategy:
\begin{defn}\label{d:flip-flop}
Given a game $G = \mathfrak{G}(2^k-1,m,0)=[2^k-1,2^k \cdot m,2^k \cdot (m+1)-1]$, the \textit{flip-flop strategy} $\Fl(G)$ is given as follows: \begin{enumerate} \item If $m\ge 1$, Luca removes $2^{k+1}-1$ candies from the third pile, then Windsor removes one candy from the second pile. The resulting game is $\mathfrak{G}(2^k-1,m-1,0)$. Then continue with $\Fl(\mathfrak{G}(2^k-1,m-1,0))$. \item If $m=0$, then we have $G=(2^k-1,2^k-1)$. Luca removes one pile, then Windsor removes the other one. \end{enumerate}
\end{defn}
\begin{prop} \label{prop:4.3}
For $G = [2^k-1,2^k \cdot m,2^k \cdot (m+1)-1]$, we have
$$V(G) \ge V_{\Fl(G)}(G) = (m-1) \cdot (2^{k+1} - 2).$$
\end{prop}
\begin{proof}
Consider the strategy $\Fl(G)$ with initial turn $T_1 = (G = G^{(0)}, G^{(1)}, G^{(2)})$, where Luca takes $2^{k+1} - 1$ candies from the largest pile and Windsor takes $1$ candy from the middle pile. Thus, $V_{T_1}(G) = 2^{k+1} - 2$ with $G^{(2)} = [2^k - 1, 2^k \cdot (m-1), 2^k \cdot m - 1]$. Repeat for turns $T_2, \ldots, T_{m-1}$ where $$T_i = (G^{(2i-2)},G^{(2i-1)} ,G^{(2i)}).$$
For $i = 1, \ldots, m-1$, $$V_{T_i}(G^{(2i-2)}) = 2^{k+1} - 2.$$
When $m = 0$, the resulting game is $G^{(2m-2)} = [2^k - 1, 2^k - 1]$ with $V(G^{(2m-2)}) = 0$. Thus, $V_{\Fl(G)}(G) = (m-1) \cdot (2^{k+1} - 2) + 0$.
\end{proof}
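The flip-flop strategy is simple enough to play out mechanically. The sketch below (ours; it mirrors Definition~\ref{d:flip-flop} turn by turn) tallies the candies collected by each player on $G = [2^k-1,\,2^k\cdot m,\,2^k\cdot(m+1)-1]$ and returns the strategic value realized, which is in particular at least the lower bound of Proposition~\ref{prop:4.3}.
\begin{footnotesize}
\begin{verbatim}
def flip_flop_value(k, m):
    # value realized by the flip-flop strategy on G = [2^k - 1, 2^k*m, 2^k*(m+1) - 1]
    luca, windsor = 0, 0
    while m >= 1:
        # Luca removes 2^{k+1} - 1 candies from the largest pile;
        # Windsor's forced reply removes a single candy from the middle pile
        luca += 2 ** (k + 1) - 1
        windsor += 1
        m -= 1
    # the game is now [2^k - 1, 2^k - 1]: Luca takes one pile, Windsor the other
    luca += 2 ** k - 1
    windsor += 2 ** k - 1
    return luca - windsor

print(flip_flop_value(2, 2))   # G = [3, 8, 11]
\end{verbatim}
\end{footnotesize}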
\subsection{The Fractal Strategy} We can improve the above strategy to one that exhibits a curious fractal-like behavior, as in Figure~\ref{fig:fractal1} and the following example:
\begin{prop}
$62m + 60 \geq V([31, 32m, 32m + 31]) \geq 62(m-1) + 98$
\end{prop}
\begin{proof}\
\begin{description}
\item[Upper Bound] By Lemma \ref{lem:Semiratio}, the semiratio on any turn is at most $63$, implying that $$V([31, 32m, 32m + 31]) \le \frac{63-1}{63+1}\cdot (31 + 32m + 32m + 31) = 60.0625 + 62m.$$ Since $V$ has the same parity as the total number of candies $64m+62$, it is even, so $V([31, 32m, 32m + 31]) \le 62m + 60$.
\item[Lower Bound] Consider the following strategy, broken down into $m>1$ and $m=1$.
\begin{enumerate}
\item While $m>1$, Luca repeatedly removes $63$ candies from the largest pile, requiring Windsor to respond by removing $1$ candy from the middle pile, creating the turn $$T = (G, [31, 32m, 32(m-1)], [31,32(m-1) + 31, 32(m-1)]).$$ The accumulated value over these $m-1$ turns is $(m-1)(63-1)=62(m-1)$.
\item When $m = 1$, the turn is $$T_4 = ([31,32,63],[31,32,3],[31,28,3])$$ with a single-turn value of $V_{T_4} = 60-4 = 56$. \begin{enumerate} \item Then, for all games $G_2 = [3, 4m', 4m' + 3]$ with $m' > 1$, the turn would be $$T = (G_2, [3,4m', 4(m'-1)], [3, 4(m'-1) + 3, 4(m'-1)]).$$ The accumulated value over the six turns with $m' = 7, \ldots, 2$ is $6 \cdot (7-1) = 36$. \item When $m'=1$, the turn is $T_1 = ([3,4,7],[3,4,1],[3,2,1])$. Finally, the last two turns are $T_0 = ([1,2,3],[1,2],[1,1])$ and $T_0' = ([1,1],[1],\varnothing)$. In total, we get an overall value of $62(m-1) + 56 + 36 + 4 + 2 + 0 = 62(m-1) + 98$. \end{enumerate} \end{enumerate} \end{description}
\end{proof}
\begin{rem}
By checking numerically, we have verified that for $m < 12$, $V([31, 32m, 32m + 31]) = 62(m-1) + 98$. We conjecture equality holds for all $m \in \mathbb{N}$.
\end{rem}
\begin{figure}
\begin{displaymath}
\xymatrix{ [7 & 16 & 23\ar@[red][d]^{\,\color{red}(-15)}] &\,&\,& [1 & 6 & 7\ar@[red][d]^{\,\color{red}(-3)}]\\
[7 & 16\ar@[green][d]^{\color{green}\,(-1)} & 8] &\,&\,& [1 & 6\ar@[green][d]^{\color{green}\,(-1)} & 4]\\
[7 & 15\ar@[red][d]^{\color{red}\,(-14)} & 8] &\,&\,& [1 & 5\ar@[red][d]^{\,\color{red}(-3)} & 4]\\
[7 & 1 & 8\ar@[green][d]^{\color{green}\,(-2)}] &\,&\,& [1 & 2 & 4\ar@[green][d]^{\color{green}\,(-1)}]\\
[7 & 1 & 6] &\ar@{-->}[ruuuu]\,&\,& [1 & 2 & 3] }
\end{displaymath}
\caption{In the fractal strategy with $G=[7,16,23]$, we begin by applying the flip-flop strategy until the game reaches $H = [7, 15, 8]$. If Luca continued via the flip-flop strategy, the next turn would be $T = (H, [7, 8], [7,7])$, giving Luca $22/30$ candies in $H$. If Luca instead reduced the pile of size $15$ down to $1$ (so that the smallest pile becomes $1$ rather than $7$), the single-turn value of that turn would be $12$. This yields the game $[1, 6, 7]$ which has value $6$, as shown in Proposition~\ref{prop:V1flipflop}. With this sequence of moves, Luca does better, obtaining $24/30$ candies of $H$.}
\label{fig:fractal1}
\end{figure}
\begin{figure}[h]
\begin{tikzpicture}
\node (0) at (0,8) {$\varnothing$};
\node (05) at (0,7) {$[1,2m,2m+1]$};
\draw [->] (05) -- (0);
\node (1) at (4,6) {$[3,4,7]$};
\node (2) at (-4,6) {$[7,8,15]$};
\draw [orange] [->] (1) -- (05);
\draw [orange] [->] (2) -- (05);
\node (15) at (4,5) {$[3, 4m, 4m + 3]$};
\node (25) at (-4,5) {$[7, 8m, 8m + 7]$};
\draw [->] (15) -- (1);
\draw [->] (25) -- (2);
\node (3) at (6,4) {$[15,16,31]$};
\node (4) at (2,4) {$[31,32,63]$};
\draw [orange] [->] (3) -- (15);
\draw [orange] [->] (4) -- (15);
\node (5) at (-2,4) {$[63,64,127]$};
\node (6) at (-6,4) {$[127,128,255]$};
\draw [orange] [->] (5) -- (25);
\draw [orange] [->] (6) -- (25);
\node (35) at (6,3) {$[15,16m,16m+15]$};
\node (45) at (2,3) {$[31,32m,32m+31]$};
\node (55) at (-2,3) {$[63,64m,64m+63]$};
\node (65) at (-6,3) {$[127,128m,128m+127]$};
\draw [->] (35) -- (3);
\draw [->] (45) -- (4);
\draw [->] (55) -- (5);
\draw [->] (65) -- (6);
\node (d) at (0,2.4) {$\vdots$};
\end{tikzpicture}
\caption{We represent the fractal strategy using black and orange arrows. The black arrows indicate the flip-flop strategy of Section~\ref{sec:flipflop} and the orange arrows indicate a change of smallest pile size.}
\end{figure}
\begin{defn}
Define a function $f: \mathbb{N} \rightarrow \mathbb{N}$ to be \textit{contractive} if for all $a \in \mathbb{N}$, $f(a) \le a$. Let $\mathscr{F}$ denote the family of contractive functions.
\end{defn}
\begin{defn} \label{d:fractal}
Consider a game $G$ of the form $G = \mathfrak{G}(2^k-1,m,0)=[2^k - 1, 2^k \cdot m, 2^k (m+1) -1]$, for $m, k \ge 1$. Let $f \in \mathscr{F}$. We define the \textit{fractal strategy} $\Fractal_f(G)$ based on $f$ as follows:
\begin{enumerate}
\item If $m>1$, then Luca plays as in $\Fl$ by removing $2^{k+1}-1$ candies from the third pile, and then Windsor moves to $\mathfrak{G}(2^k-1,m-1,0)$. Then play $\Fractal_f(\mathfrak{G}(2^k-1,m-1,0))$.
\item If $m=1$ and $f(a)=a$, then play as in the flip-flop strategy.
\item If $m=1$ and $f(a)<a$, then Luca moves the smallest pile to $2^{f(a)}-1$, and Windsor moves to $\mathfrak{G}(2^{f(a)}-1,2^{a-f(a)},0)$. Then play $\Fractal_f$ from there.
\end{enumerate}
\end{defn}
\begin{thm}
Given $G = \mathfrak{G}(2^k - 1, m, 0)$ with $k, m \ge 1$, we have
$$\sup_{f \in \mathscr{F}} V_{\Fractal_f}(G) = (m-2) \cdot (2^{k+1} - 2) + \displaystyle \sum_{i=0}^{\lceil \log_2 k \rceil } 2^{\lfloor \frac k{2^i}\rfloor+1}-2^{\left \lfloor \frac k{2^{i+1}} \right \rfloor+1} + \left (2^{\left \lfloor \frac k{2^{i+1}}\right \rfloor +1}-1 \right ) \left ( 2^{\left \lfloor \frac k{2^i}\right \rfloor - \left \lfloor \frac k{2^{i+1}}\right \rfloor}-2\right ),$$
with the supremum achieved by taking $f: a \mapsto \lfloor\frac{a}{2}\rfloor$.
\end{thm}
\begin{proof}
We first show that the fractal strategy $f:a\mapsto\lfloor\frac{a}{2}\rfloor$ achieves the stated bound. Consider the strategy $\Fractal_f$ where $f: a \mapsto \lfloor \frac a2 \rfloor$. We show \[V_{\Fractal_f}(G) = (m-2) \cdot (2^{k+1} - 2) + \displaystyle \sum_{i=0}^{\lceil \log_2 k \rceil } 2^{\lfloor \frac k{2^i}\rfloor+1}-2^{\left \lfloor \frac k{2^{i+1}} \right \rfloor+1} + \left (2^{\left \lfloor \frac k{2^{i+1}}\right \rfloor +1}-1 \right ) \left ( 2^{\left \lfloor \frac k{2^i}\right \rfloor - \left \lfloor \frac k{2^{i+1}}\right \rfloor}-2\right ).\]
It suffices to show that for the game $H=[2^k-1,2^k,2^{k+1}-1]$,
\[V_{\Fractal_f}(H) = \displaystyle \sum_{i=0}^{\lceil \log_2 k \rceil } 2^{\lfloor \frac k{2^i}\rfloor+1}-2^{\left \lfloor \frac k{2^{i+1}} \right \rfloor+1} + \left (2^{\left \lfloor \frac k{2^{i+1}}\right \rfloor +1}-1 \right ) \left ( 2^{\left \lfloor \frac k{2^i}\right \rfloor - \left \lfloor \frac k{2^{i+1}}\right \rfloor}-2\right ).\]
Under $\Fractal_f$, the first turn is \[T = (H, H', H'') = ([2^k-1,2^k,2^{k+1}-1], [2^k-1,2^k,2^{\lfloor \frac k2 \rfloor}-1], [2^k-1,2^k-2^{\lfloor \frac k2 \rfloor},2^{\lfloor \frac k2 \rfloor}-1]).\]
Under this strategy, we perform $\Fl$ until we reach the game $[2^{\lfloor \frac k2 \rfloor}-1,2^{\lfloor \frac k2 \rfloor},2^{\lfloor \frac k2 \rfloor+1}-1]$. This involves repeating the following sequence of moves $2^{k-\lfloor \frac k2 \rfloor}-2 $ times: $$[2^{\lfloor \frac k2 \rfloor}-1,a \cdot 2^{\lfloor \frac k2 \rfloor}, (a+1) \cdot 2^{\lfloor \frac k2 \rfloor} -1] \mapsto [2^{\lfloor \frac k2 \rfloor}-1,a \cdot 2^{\lfloor \frac k2 \rfloor}, (a-1) \cdot 2^{\lfloor \frac k2 \rfloor} ] \mapsto [2^{\lfloor \frac k2 \rfloor}-1,a \cdot 2^{\lfloor \frac k2 \rfloor}-1, (a-1) \cdot 2^{\lfloor \frac k2 \rfloor} ].$$
Since for any fractal strategy $g$ when $a \neq 1$ $$V_{\Fractal_g}([2^{\lfloor \frac k2 \rfloor}-1,a \cdot 2^{\lfloor \frac k2 \rfloor}, (a+1) \cdot 2^{\lfloor \frac k2 \rfloor} -1])=V_{\Fractal_g}([2^{\lfloor \frac k2 \rfloor}-1,a \cdot 2^{\lfloor \frac k2 \rfloor}-1, (a-1) \cdot 2^{\lfloor \frac k2 \rfloor} ])+2^{\lfloor \frac k2 \rfloor +1}-1,$$ we obtain that
$$V_{\Fractal_f}(H)=V_{\Fractal_f}([2^{\lfloor \frac k 2 \rfloor}-1,2^{\lfloor \frac k2 \rfloor},2^{\lfloor \frac k2 \rfloor +1}-1])+2^{k+1}-2^{\lfloor \frac k2 \rfloor}+(2^{\lfloor \frac k2 \rfloor +1}-1)(2^{k-\lfloor \frac k2 \rfloor}-2).$$
Via the inductive hypothesis we obtain the desired result: \[V_{\Fractal_f}(G) = (m-2) \cdot (2^{k+1} - 2) + \displaystyle \sum_{i=0}^{\lceil \log_2 k \rceil } 2^{\lfloor \frac k{2^i}\rfloor+1}-2^{\left \lfloor \frac k{2^{i+1}} \right \rfloor+1} + \left (2^{\left \lfloor \frac k{2^{i+1}}\right \rfloor +1}-1 \right ) \left ( 2^{\left \lfloor \frac k{2^i}\right \rfloor - \left \lfloor \frac k{2^{i+1}}\right \rfloor}-2\right ).\]
We now show that the $\Fractal_f$ strategy for $f: a \mapsto \lfloor \frac a2 \rfloor$ is optimal over all possible strategies $\Fractal_g$. It suffices to show that for all fractal strategies $\Fractal_g$, $$V_{\Fractal_g}([2^k-1,2^k,2^{k+1}-1]) \le V_{\Fractal_f}([2^k-1,2^k,2^{k+1}-1]).$$ We consider two cases based on whether $g(k) < \lfloor \frac k2 \rfloor$ or $g(k) > \lfloor \frac k2 \rfloor$.
We first show that if $g(k)=i<\lfloor \frac k2 \rfloor$, then
\begin{equation} V_{\Fractal_g}([2^k-1,2^k,2^{k+1}-1]) \le V_{\Fractal_f}([2^k-1,2^k,2^{k+1}-1]). \label{eq:gvsffractal1}\end{equation}
Equivalently, we wish to show that the left side of~(\ref{eq:gvsffractal1}) minus the right side is less than or equal to zero.
By the definition of the fractal strategy, we have \begin{align*}
&V_{\Fractal_g}([2^k-1,2^k,2^{k+1}-1])-V_{\Fractal_f}([2^k-1,2^k,2^{k+1}-1]) \\
&\le V_{\Fractal_g}([2^i-1,2^i,2^{i+1}-1]) - V_{\Fractal_f}([2^{\lfloor \frac k2 \rfloor}-1,2^{\lfloor \frac k2 \rfloor},2^{\lfloor \frac k2 \rfloor+1}-1])\\
&\qquad \qquad -2^{k-i} + 2^{i+1} +2^{k - \lfloor \frac k2 \rfloor} +2^{\lfloor \frac k2 \rfloor -i}-2\\
&\le V_{\Fractal_f}([2^i-1,2^i,2^{i+1}-1])-V_{\Fractal_f}([2^i-1,2^i,2^{i+1}-1]) \\
&\qquad \qquad + 2^{i+1} + 2^{k-\lfloor \frac k2 \rfloor} + 2^{\lfloor \frac k2 \rfloor -i} -2^{k-i}-2\\
&=2^{i+1}+2^{k-\lfloor \frac k2 \rfloor} + 2^{\lfloor \frac k2 \rfloor -i} - 2^{k-i}-2. \end{align*}
Now, suppose $i \neq {\lfloor \frac k2 \rfloor}-1$. Then we have \begin{align*}
&V_{\Fractal_g}([2^k-1,2^k,2^{k+1}-1])-V_{\Fractal_f}([2^k-1,2^k,2^{k+1}-1]) \\
&\le 2^{i+1}+2^{k-\lfloor \frac k2 \rfloor} + 2^{\lfloor \frac k2 \rfloor -i} - 2^{k-i} -2 \\ &\le 2^{\lfloor \frac k2 \rfloor-1}+2^{\lfloor \frac k2 \rfloor+1} +2^{\lfloor \frac k2 \rfloor} - 2^{k-\lfloor \frac k2 \rfloor +2} -2\\
&\le 2^{\lfloor \frac k2 \rfloor +2} - 2^{\lfloor \frac k2 \rfloor +2}-2 \\ &< 0.
\end{align*}
On the other hand, if $i=\lfloor \frac k2 \rfloor -1$, we get \begin{align*}
&V_{\Fractal_g}([2^k-1,2^k,2^{k+1}-1])-V_{\Fractal_f}([2^k-1,2^k,2^{k+1}-1]) \\
&\le 2^{i+1}+2^{k-\lfloor \frac k2 \rfloor} + 2^{\lfloor \frac k2 \rfloor -i} - 2^{k-i} -2 \\ &\le 2^{\lfloor \frac k2 \rfloor} +2^{k-\lfloor \frac k2 \rfloor} + 2^{1} - 2^{k-\lfloor \frac k2 \rfloor +1} -2 \\
&\le 2^{\lfloor \frac k2 \rfloor} - 2^{k-\lfloor \frac k2 \rfloor} \\ &\le 0.
\end{align*}
Next suppose that
$g(k) = i >\lfloor \frac k2 \rfloor$. First, recall the notation $\mathfrak{G}(a, m, x) = [a, 2^k\cdot m + x,\, 2^k\cdot m + a \oplus x], 2^k>a \ge 2^{k-1}$. Let $f',g'\in\mathscr{F}$ be defined by $f'(n)=f(n)$ if $n\neq\lfloor\frac{k}{2}\rfloor$ and $f'(\lfloor\frac{k}{2}\rfloor)=\lfloor\frac{i}{2}\rfloor$, and $g'(n)=f(n)$ if $n\neq k$ and $g'(k)=i$. We will show that $V_{\Fractal_{f'}}(\mathfrak{G}(2^k-1,1,0))\ge V_{\Fractal_{g'}}(\mathfrak{G}(2^k-1,1,0))$. By induction, this implies that $V_{\Fractal_{f}}(\mathfrak{G}(2^k-1,1,0))\ge V_{\Fractal_{g}}(\mathfrak{G}(2^k-1,1,0))$. To this end, we have
\begin{align*}
&V_{\Fractal_{f'}}(\mathfrak{G}(2^k-1,1,0))-V_{\Fractal_{g'}}(\mathfrak{G}(2^k-1,1,0)) \\
&=V_{\Fractal_{f'}}(\mathfrak{G}(2^{\lfloor \frac k2 \rfloor}-1,1,0))-V_{\Fractal_{g'}}(\mathfrak{G}(2^i-1,1,0))-2^{\lfloor \frac k2 \rfloor} \\
&\qquad \qquad +(2^{\lfloor \frac k2 \rfloor +1}-1)(2^{k-\lfloor \frac k2\rfloor}-2)+2^{i}-(2^{i+1}-1)(2^{k-i}-2) \\
&=V_{\Fractal_{f'}}(\mathfrak{G}(2^{\lfloor \frac k2 \rfloor}-1,1,0))-V_{\Fractal_{g'}}(\mathfrak{G}(2^i-1,1,0)) \\
&\qquad \qquad -2^{\lfloor \frac k2 \rfloor}-2^{k-\lfloor \frac k2 \rfloor} - 2^{\lfloor \frac k2 \rfloor +2}+2^{i} + 2^{k-i} +2^{i+2} \\
&\ge V_{\Fractal_{f'}}(\mathfrak{G}(2^{\lfloor \frac k2 \rfloor}-1,1,0)) \\
&\qquad \qquad - V_{\Fractal_{f'}}(\mathfrak{G}(2^i-1,1,0))-2^{\lfloor \frac k2 \rfloor}-2^{k-\lfloor \frac k2 \rfloor} - 2^{\lfloor \frac k2 \rfloor +2}+2^{i} + 2^{k-i} +2^{i+2}
\\
&=-2^{k-\lfloor \frac k2 \rfloor} - 2^{\lfloor \frac k2 \rfloor +2} + 2^{k-i} +2^{i+2}+ (2^{\lfloor \frac i2 \rfloor+1}-1)(2^{\lfloor \frac k2 \rfloor-\lfloor \frac i2\rfloor}-2)-(2^{\lfloor \frac i2 \rfloor+1}-1)(2^{i-\lfloor \frac i2 \rfloor}-2) \\
&=-2^{k-\lfloor \frac k2 \rfloor} - 2^{\lfloor \frac k2 \rfloor +2} + 2^{k-i} +2^{i+2}+ 2^{\lfloor \frac k2 \rfloor+1} - 2^{\lfloor \frac k2 \rfloor - \lfloor \frac i2 \rfloor} - 2^{\lfloor \frac i2 \rfloor +2} - 2^i +2^{\lfloor \frac i2 \rfloor+2} + 2^{i-\lfloor \frac i2 \rfloor} \\ &=2^{k-i}+3 \cdot 2^{i}+ 2^{i-\lfloor \frac i2 \rfloor}- 2^{\lfloor \frac k2 \rfloor - \lfloor \frac i2 \rfloor}-2^{\lfloor \frac k2 \rfloor+1} -2^{k-\lfloor \frac k2 \rfloor}.
\end{align*}
Now, since $i > \lfloor \frac k2 \rfloor$, the three inequalities $i \ge k-\lfloor \frac k2 \rfloor$, $i \ge \lfloor \frac k2 \rfloor +1$, and $i \ge \lfloor \frac k2 \rfloor - \lfloor \frac i2 \rfloor$ all hold, which allows us to simplify and obtain\begin{align*}
&V_{\Fractal_f}(\mathfrak{G}(2^k-1,1,0))-V_{\Fractal_g}(\mathfrak{G}(2^k-1,1,0))\\
&\ge2^{k-i}+3 \cdot 2^{i}+ 2^{i-\lfloor \frac i2 \rfloor}- 2^{\lfloor \frac k2 \rfloor - \lfloor \frac i2 \rfloor}-2^{\lfloor \frac k2 \rfloor+1} -2^{k-\lfloor \frac k2 \rfloor} \\ &\ge 2^{k-i} + 2^{i-\lfloor \frac i2 \rfloor}, \end{align*} which is positive.
This resolves the last case, yielding the desired result.
\end{proof}
\section{Bounds for the 3-Pile Game} \label{sec:3pile}
\begin{thm} \label{t:standard}
Given a standard form game $G = \mathfrak{G}(2^{k+1} - 1, m, 0)$, we have \[V(\mathfrak{G}(2^{k+1} - 1, m, 0)) \le (2^{k+2} - 2)m + (2^{k+2} - 2) - 2 + \delta_{0k},\] where $\delta_{0k}$ is the Kronecker delta function which is 1 if $k=0$ and 0 otherwise. Furthermore,
\[
V(\mathfrak{G}(2^{k+1} - 1, m, 0)) \ge 2(2^{k+1} - 1)m - 2(2^{ \lceil \frac{k}{2}\rceil} - 1) + V(\mathfrak{G}(2^{ \lceil\frac{k}{2} \rceil } - 1, 2^{ \lfloor \frac{k}{2} \rfloor + 1} - 1, 0))
\]
Alternatively,
\[
V(\mathfrak{G}(2^{k+1} - 1, m, 0)) \geq 2(2^{k+1} - 1)(m-1) + b(k),
\] where
\[
3(2^{k+1} - 1) \leq b(k) \leq 4(2^{k+1} - 1) - 2.
\]
\end{thm}
\begin{proof}
($\le$) We will first show that \[V(G) = V(\mathfrak{G}(2^{k+1} - 1, m, 0)) \leq (2^{k+2} - 2)m + (2^{k+2} - 2) - 2 + \delta_{0k}.\] By Lemma \ref{lem:Semiratio}, $r_T(G) \le s = 2^{k+2} - 1$. Then,
\begin{align*}
V_L(G) &\leq \frac{s-1}{s+1} N(G) \\
&= \frac{2^{k+2} - 2}{2^{k+2}} \cdot (2^{k+2} - 2 + 2^{k+2}m) \\
&=2^{k+2} - 2 + (2^{k+2} - 2)m-2\frac{2^{k+2} - 2}{2^{k+2}} \\
&= (2^{k+2} - 2)m + (2^{k+2} - 2) - 2 + \frac{1}{2^k}.
\end{align*}
Since the left-hand side is an integer and $\frac{1}{2^k}\le 1$, with equality precisely when $k=0$, this yields the claimed upper bound.
($\ge$)
Given the game $G = \mathfrak{G}(2^{k+1} - 1, m, 0)$, consider the strategy where Luca removes $2^{k+2}-1$ candies from the largest pile when $m > 1$ and $2^{k+2} - 2^{\lfloor \frac{k+1}{2} \rfloor}$ from the largest pile when $m = 1$. Then,
$$V_L(G) \geq 2(2^{k+1} - 1)m - 2(2^{\lceil \frac{k}{2} \rceil} - 1) + V(\mathfrak{G}(2^{\lceil \frac{k}{2} \rceil} - 1, 2^{\lfloor \frac{k}{2} \rfloor + 1} - 1, 0)).$$
This is an example of the fractal strategy as in Definition~\ref{d:fractal} with $f(k) = \lfloor \frac{k+1}{2} \rfloor$.
\end{proof}
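For instance, taking $k=1$ and $m=2$ gives $G=\mathfrak{G}(3,2,0)=[3,8,11]$ with $N(G)=22$; the upper bound reads $V(G)\le 6\cdot 2+6-2=16$, while the first lower bound reads $V(G)\ge 2\cdot 3\cdot 2-2(2-1)+V(\mathfrak{G}(1,1,0))=10+V([1,2,3])$.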
\begin{cor} \label{cor:UsefulBound}
$V(\mathfrak{G}(a, m, x))\ge 2a(m-1) + x\oplus a + a - x$.
\end{cor}
\begin{proof}
Let $$G = \mathfrak{G}(a, m, x) = [a, 2^{\lfloor \log_2 a \rfloor + 1}m + x, 2^{\lfloor \log_2 a \rfloor + 1}m + x \oplus a].$$ Consider first turn $T_0 = (G, G', G'')$ such that $$G' = [a, 2^{\lfloor \log_2 a \rfloor + 1}m + x, 2^{\lfloor \log_2 a \rfloor + 1}(m-1)]$$ and $$G'' = [a, 2^{\lfloor \log_2 a \rfloor + 1}(m-1) + a, 2^{\lfloor \log_2 a \rfloor + 1}(m-1)].$$ Then, $V_{T_0}(G) = x\oplus a + a - x$.
For $0 < i < m$, let the $i^\text{th}$ turn be $T_i = (G_i, G_i', G_i'')$ where (similar to the flip-flop strategy of Definition~\ref{d:flip-flop}),
$$G_i = [a, 2^{\lfloor \log_2 a \rfloor + 1}(m-i), 2^{\lfloor \log_2 a \rfloor + 1}(m-i) + a],$$
$$G_i' = [a, 2^{\lfloor \log_2 a \rfloor + 1}(m-i), 2^{\lfloor \log_2 a \rfloor + 1}(m- i - 1)],$$
$$G_i'' = [a, 2^{\lfloor \log_2 a \rfloor + 1}(m-i-1) + a, 2^{\lfloor \log_2 a \rfloor + 1}(m-i-1)],$$
with $V_{T_i}(G) = 2a$.
After turn $T_{m - 1}$, the remaining position is $G_{m-1}'' = [a, a]$, which contributes no further value since Windsor can simply mirror Luca's moves there. Summing the single-turn values over the $m$ turns, this strategy realizes a value of $2a(m-1) + x\oplus a + a - x$, which proves the claim.
\end{proof}
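For instance, with $a=7$, $m=2$ and $x=3$ we have $\mathfrak{G}(7,2,3)=[7,19,20]$, and the corollary gives $V([7,19,20])\ge 2\cdot 7\cdot 1+(3\oplus 7)+7-3=22$.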
\begin{thm} \label{t:genbound}
If $G = \mathfrak{G}(2^{k+1} - 1, m, x)$, then
\[V(\mathfrak{G}(2^{k+1} - 1, m - 1, 0)) + 2(2^{k+1} - 1) - 2x \leq V(G) \leq V(\mathfrak{G}(2^{k+1} - 1, m + 1, 0)) - 2(2^{k+1} - 1) + 2x.\]
\end{thm}
\begin{proof}
Let $G = [2^{k+1} - 1, 2^{k+1}m + x, 2^{k+1}m + 2^{k+1} - 1 - x]$. \par
$(\leq )$. We construct a strategy $S$ that achieves a value of $$V_S(G) = V(\mathfrak{G}(2^{k+1} - 1, m-1, 0)) + 2(2^{k+1} - 1) - 2x.$$ Let the first turn $T_1 = (G, G', G'')$ consist of $$G' = [2^{k+1} - 1, 2^{k+1}m + x, 2^{k+1}(m-1)]$$ and $$G'' = [2^{k+1} - 1, 2^{k+1}(m-1) + 2^{k+1} - 1, 2^{k+1}(m-1)].$$
Then the single-turn value is $V_{T_1}(G) = 2(2^{k+1} - 1) - 2x$ and $G'' = \mathfrak{G}(2^{k+1} - 1, m - 1, 0)$, yielding an overall value of $$V_S(G) = V(\mathfrak{G}(2^{k+1} - 1, m-1, 0)) + 2(2^{k+1} - 1) - 2x,$$ which gives a lower bound on $V(G)$. \par
$(\geq)$. We prove $$V(\mathfrak{G}(2^{k+1} - 1, m + 1, 0)) - 2(2^{k+1} - 1) + 2x \geq V(G).$$
Given game $G_0 = \mathfrak{G}(2^{k+1} - 1, m + 1, 0)$, under any strategy $S$, $$V_S(G_0) \leq V(G_0).$$
Suppose the first turn in $S$ is $T_1 = (G_0, G_0', G_0'')$, where $$G_0' = [2^{k+1} - 1, 2^{k+1}(m+1), 2^{k+1}m + x]$$ and $$G_0'' = [2^{k+1} - 1, 2^{k+1}m + x, 2^{k+1}m + 2^{k+1} - 1 - x].$$ Note that $G_0''$ is the only move to a $\mathcal{P}$ position from $G_0'$ . The resulting game is $G_0'' = G$, implying that $V_S(\mathfrak{G}(2^{k+1} - 1, m + 1, 0)) = 2(2^{k+1} - 1) - 2x + V(G)$. This gives the desired inequality: $$ 2(2^{k+1} - 1) - 2x + V(G) = V_S(\mathfrak{G}(2^{k+1} - 1, m + 1, 0)) \leq V(\mathfrak{G}(2^{k+1} - 1, m + 1, 0)).$$
\end{proof}
\section{Optimal Allocation of $N$ Candies} \label{sec:multipile}
So far, we have only considered \textsc{Candy Nim} positions with three piles. We have seen that, in these games, Luca can take a substantial majority of the candies, and indeed there are 3-pile $\mathcal{P}$ positions in which Luca takes a proportion of at least $1-\varepsilon$ of the candies, for any fixed $\varepsilon>0$. It is natural, then, to consider the problem of Luca allocating $N$ candies, in a $\mathcal{P}$ position, so that she maximizes the number of candies that she can take with optimal play. This problem is the motivating question for this section.
\begin{lemma}\label{l:p2} If $G \in \mathcal{P}$, then for any ply $(G,G')$ we have $N(G)-N(G') \le \frac{N(G)}2$.
\end{lemma}
\begin{proof} Let $G=[a_1,a_2,\ldots,a_p]$, where $a_1 \ge a_2 \ge \cdots \ge a_p$. For any ply $(G,G')$, we have $N(G)-N(G')\le a_1$. So, it suffices to show that $a_1 \le \frac {N(G)} 2$. Since $G \in \mathcal{P}$, we have $a_1 = a_2 \oplus a_3 \oplus \cdots \oplus a_p$. For any $x_1,\ldots,x_k$, we have $x_1\oplus\cdots\oplus x_k\le x_1+\cdots+x_k$, so $a_1 \le a_2 + a_3 + \cdots + a_p$. Since $N(G)=a_1 + a_2 + a_3 + \cdots + a_p$, this implies that $a_1 \le N(G) - a_1$. Thus $a_1 \le \frac{N(G)}2$.
\end{proof}
\begin{lemma}\label{thm:Maximum} For any game $G$, we have
\begin{equation}\label{e:ineq}
N_W(G) \ge \lfloor \log_2 N(G) \rfloor.
\end{equation}
\end{lemma}
\begin{proof}
We prove this by induction on $N(G)$. As our base case, we consider the position where $N(G) = 1$, when $N_W(G) = 1 > \log_2 N(G) = 0$. Now, we perform the inductive step. Fix a game $G$ and suppose that the result holds for any game $H$ such that $N(H)<N(G)$. Let $n = \lfloor \log_2 N(G) \rfloor$. We divide our analysis into two cases:
\begin{enumerate}
\item If $G$ is a $\mathcal{P}$ position, consider a ply $(G, G')$. Then $N(G') \ge \frac{1}{2} N(G)\ge 2^{n-1}$ by Lemma~\ref{l:p2}. Suppose first that Windsor only removes a single candy when going from $G'$ to $G''$, i.e.\ $N(G'') = N(G') - 1$. If $N(G') > 2^{n-1}$, then,
$$N_W(G'') \ge \lfloor \log_2 N(G'') \rfloor \ge n-1,$$
so $$N_W(G) \ge 1 + N_W(G'') \ge n = \lfloor \log_2 N(G) \rfloor,$$ proving the desired result. On the other hand, if $N(G') = 2^{n-1}$, then $N(G'') = 2^{n-1} - 1$ has an odd number of candies and is thus an $\mathcal{N}$ position, contradicting the fact that Windsor, playing to win, must move to a $\mathcal{P}$ position.
The only case left to consider is if Windsor removes at least two candies, i.e.\ if $N(G'') \le N(G') - 2$. Since $N(G')-N(G'')>1$, we have
\[N(G')-N(G'')> \lfloor \log_2(N(G')) \rfloor - \lfloor \log_2 (N(G'')) \rfloor.\]
If $N(G'') = 0$, then $G = [a, a]$ where~(\ref{e:ineq}) holds, and if $N(G'') \neq 0$, then
\begin{align*} N_W(G)-N_W(G'') &= N(G')-N(G'') \\ &> \lfloor \log_2(N(G')) \rfloor -\lfloor \log_2 (N(G'')) \rfloor \\ &= n-1 - \lfloor \log_2 (N(G'')) \rfloor,\end{align*}
which implies that
$$N_W(G)> n-1 -\lfloor \log_2 (N(G'')) \rfloor + N_W(G'').$$
Since $N_W(G'') \ge \lfloor \log_2 (N(G'')) \rfloor$, this gives $N_W(G) \ge n$ as desired.
\item Now suppose $G$ is an $\mathcal{N}$ position. Consider a ply $(G, G')$. Since Windsor moves $G \mapsto G'$, $$N_W(G) - N_W(G') = N(G)-N(G')\ge \lfloor \log_2(N(G)) \rfloor - \lfloor \log_2 (N(G')) \rfloor$$ whenever $N(G') > 0$.
By the inductive hypothesis, $N_W(G') \ge \lfloor \log_2 (N(G')) \rfloor$, so $N_W(G) \ge \lfloor \log_2 (N(G)) \rfloor$, as desired.
\end{enumerate}
\end{proof}
\begin{lemma}{\label{lem:fairRemove} If $K=[a_1,a_1,a_2,a_2,\ldots,a_p,a_p]$ then for all games $G$, $V(G)=V(G+K)$.}
\end{lemma}
\begin{proof} We prove this by induction on $N(G)+N(K)$. First, the base case $G=K=\varnothing$ is trivial. Now we consider the inductive step. Consider a turn $T = (H := G + K, H', H'')$. If the optimal move is in $G$, then $H' = G' + K$, with $N(G') < N(G)$. Thus, by the inductive hypothesis, $V(H') = V(G' + K) = V(G')$, so
$$V(G+K)=N(G)-N(G')+V(G'+K)=N(G)-N(G')+V(G')=V(G).$$
If $(G,G')$ is the optimal ply in $G$, by the same argument we have $$V(G)=N(G)-N(G')+V(G').$$ On the other hand, for any ply $(K,K')$, the opponent can mimic in $K$, and hence move to $H'' = G+K''$ where $K''$ consists of only equal piles. It follows that $$V(G+K')\le N(K)-N(K')+V(G+K'').$$ Thus no move in $K$ can be strictly better than the optimal move in $G$, so we have $V(G+K)=V(G)$, completing the inductive step.
\end{proof}
\begin{lemma} \label{lem:oneremove}For all positive integers $a$, there exists a positive integer $k$ such that $a \oplus (a-1)=2^k-1$.
\end{lemma}
\begin{proof}
If $a$ is odd, then $a \oplus (a-1)$ is $1$, or $2^1-1$. If $a$ is even, then we write $$a = 2^{k_1}+2^{k_2}+\cdots+2^{k_\ell}, \quad k_1>k_2>\cdots>k_\ell>0.$$ Then we have $$a-1 = 2^{k_1}+2^{k_2}+\cdots+2^{k_{\ell-1}}+2^{k_\ell-1}+2^{k_\ell-2}+\cdots+2^{3}+2^2+2^1+1.$$ This gives
\[a\oplus(a-1)=2^{k_\ell}+2^{k_\ell-1}+\cdots+2^3+2^2+2^1+1=2^{k_\ell+1}-1,\] as desired.
\end{proof}
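For example, for $a=12=1100_2$ we have $a-1=11=1011_2$, so $a\oplus(a-1)=0111_2=7=2^3-1$, corresponding to $k_\ell=2$.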
\begin{lemma} \label{thm:bestarr} The game $G=[1,2,4,8,16,\ldots,2^{n-2},2^{n-1}-1]$ maximizes $N_L(G)$ subject to the constraint that $N(G)=2^{n}-2$. In this case, we have $N_W(G)=n-1$. \end{lemma}
\begin{proof}
First, let us compute $N_W([1,2,4,8,16, \ldots ,2^{n-2},2^{n-1}-1])$. If Luca removes the entire largest pile, then Windsor's only winning reply is to remove a single candy (from the pile of size $2^{n-2}$), leaving the game $G'=[1,2,4,8,16,\ldots,2^{n-3},2^{n-2}-1]$. When $n=2$ we get $N_W=1$, and by induction $N_W = n-1$ in general. On the other hand, by Lemma~\ref{thm:Maximum} we have $N_W(G) \ge n-1$ for an arbitrary $\mathcal{P}$ position $G$ with $N(G)=2^n-2$. Since $G=[1,2,4,8,\ldots,2^{n-2},2^{n-1}-1]$ achieves equality, it minimizes $N_W$ (equivalently, maximizes $N_L$) subject to $N(G)=2^n-2$.
\end{proof}
\begin{lemma} \label{thm:Strat}
Given \[G = [1,2,4,8,\ldots,2^{k-2},2^{k},\ldots,2^{n-2},2^{n-1}-1-2^{k-1}],\] the ply $P = (G, G')$ with \[G' = [1,2,4,8,\ldots,2^{k-2},2^{k},\ldots,2^{n-2},2^{k-1}]\] is an optimal move. Then, \[N_W(G) = N_W(G') = n -1.\]
\end{lemma}
\begin{proof} Lemma~\ref{thm:Maximum} shows that it is impossible for Luca to concede fewer than $n-1$ candies to Windsor. Therefore to show optimality, it suffices to show that $N_W(G) = n - 1$. If Luca moves the pile of size $2^{n-1}-1-2^{k-1}$ to a pile of size $2^{k-1}$, the remaining game $G'$ has $2^n - 2$ candies, with $N_W(G') = n - 1$ by Lemma~\ref{thm:bestarr} assuming optimal play by Luca. Thus, $G$ minimizes $N_W$ with $N_W(G) = n-1$ as desired.
\end{proof}
\begin{thm} \label{thm:equalitycases}
Given a game $G$, $N_W(G) \ge \lfloor \log_2(N(G)) \rfloor$. Equality is achieved only when $N(G) = 2^n, 2^n - 2$, or $2^n - 2^k - 2$, for $n, k \in \mathbb{Z}^+$, $n > k + 1,n>2$ in the following arrangements:
\begin{enumerate}
\item $N(G) = 2^n$ and $G = [1,1,1,2,4,8,\ldots,2^{n-2},2^{n-1}-1]$
\item $N(G) = 2^n - 2$ and $G = [1,2,4,8,\ldots,2^{n-2},2^{n-1}-1]$
\item $N(G) = 2^n - 2^k - 2$ and $G = [1,2,4,8,\ldots,2^{k-2},2^k, \ldots ,2^{n-2},2^{n-1}-1-2^{k-1}]$
\end{enumerate}
\end{thm}
\begin{proof} First, note that it is sufficient to prove the result when $G$ is a $\mathcal{P}$ position. To see this, suppose that we have proven the result for all $\mathcal{P}$ positions, and that $G$ is an $\mathcal{N}$ position with a ply $(G,G')$ where $G'$ is a $\mathcal{P}$ position. Then we have \[N_W(G)\ge N(G)-N(G')+N_W(G')\ge N(G)-N(G')+\lfloor\log_2(N(G'))\rfloor\ge\lfloor\log_2(N(G))\rfloor.\] Thus from now on, we shall always assume that $G$ is a $\mathcal{P}$ position.
We prove the result by induction on $N(G)$. Via a finite check, this is true whenever $N(G) \le 16$. For the inductive step, suppose that equality is achieved only in the above positions for all positions with $N(G) < M$. We want to show that if $N(G)=M$, this theorem holds.
First, we show that $N_W(G)=\lfloor \log_2 (N(G)) \rfloor$ implies $N_W(G'')=\lfloor \log_2(N(G'')) \rfloor$. Let $M = 2^n + x$ where $n = \lfloor \log_2 M \rfloor$. Then, $$2^{n-1} \le 2^{n-1} + \frac x2 \le N(G') \le 2^n + x -1 \le 2^{n+1}.$$
If $N(G')- N(G'')=1$, then $$2^{n-1}-1 \le N(G'') < 2^{n+1}-1.$$ Since $G$ is a $\mathcal{P}$ position, $N(G)$ is even and thus $N(G'') \neq 2^{n-1}-1$. Thus $2^{n-1} \le N(G'') < 2^{n+1}-1$, so by Lemma~\ref{thm:Maximum}, $N_W(G'') \ge n-1$. If
$N_W(G'') \ge n$, then $N_W(G) \ge n+1$, so in any potential equality case, we must have $N_W(G'') = n-1$. If $N(G')- N(G'') \ge 2$, then whenever $ N(G'') > 0$ and $N(G')-N(G'')>1,$ we have \[ N(G')-N(G'')> \lfloor \log_2(N(G')) \rfloor - \lfloor \log_2 (N(G'')) \rfloor.\]
Since $N_W(G'') \ge \lfloor \log_2 (N(G'')) \rfloor$, if $N_W(G) = \lfloor \log_2 (N(G)) \rfloor$, then $N_W(G'') = \lfloor \log_2 (N(G'')) \rfloor$. If $N(G'')=0$, then $N(G') = \frac{N(G)}2$.
Thus, in all cases, $N_W(G)=\lfloor \log_2 (N(G)) \rfloor$ implies $N_W(G'')=\lfloor \log_2(N(G'')) \rfloor$. Therefore, by the inductive hypothesis, $G''$ must be one of the three positions above.
Now we show that if $G''$ is any one of the above three positions, then so is $G$, thereby completing the induction.
\begin{enumerate}
\item
Suppose that $$G''=[1,1,1,2,4,8, \ldots, 2^{n-3},2^{n-2}-1].$$ In order to have $$N_W(G')-N_W(G'') = \lfloor \log_2 (N(G)) \rfloor - \lfloor \log_2 (N(G'')) \rfloor,$$ we must have $N(G) \in\{2^n, 2^n+2\}$. If $N(G)=2^n+2$, then $N(G)-N(G')=2^{n-1}+1$. However, this implies that $$G = [2^{n-1}+1,2^{n-1}+1]\text{ or }[1,2^{n-1},2^{n-1}+1],$$ since those are the only two $\mathcal{P}$ positions with $N(G)=2^n+2$. Neither of those can produce a $G''$ of the specified form. Therefore, $N(G)=2^n$ and $N(G)-N(G')=2^{n-1}-1$, so there must have been a pile of size at least $2^{n-1}-1$ in $G$. If there was a pile of size at least $2^{n-1}$, we have the same issue as above with $2^n + 2$. Consequently, there must be a pile of size exactly $2^{n-1}-1$. Since $N(G)=2^n$, Windsor removed $1$ candy on this turn, so the Grundy value of $G'$ satisfies $\mathcal{G}(G') \in \{1,3, 2^{n-1}-1\}$. In the first two cases, there is no way to achieve $N(G)-N(G')=2^{n-1}-1$. Therefore,
$$G=[1,1,1,2,4,8,\ldots,2^{n-2},2^{n-1}-1].$$
\item Now suppose that $$G''=[1,2,4,8,\ldots,2^{n-3},2^{n-2}-1].$$ As Windsor removed $1$ candy, $\mathcal{G}(G') \in \{1, 3, 2^{n-1} - 1\}$. If $\mathcal{G}(G') = 1$, then
$$G = [1,2,4,8,\ldots,2^{l}+1,\ldots,2^{m}+1,\ldots,2^{n-3},2^{n-2}-1],$$ which allows Windsor to remove $1$ candy from a different pile to increase his winnings, contradicting optimal play. If $\mathcal{G}(G') = 3$, then $$G = [2,2,4,8,\ldots,2^{l}+3,\ldots,2^{n-2}-1]\text{ or }[2,2,3,4,\ldots,2^{n-2}-1].$$ Windsor could have removed $3$ from the $2^{n-2}-1$ and received more candies while still winning, again contradicting optimal play. If $\mathcal{G}(G') = 2^{n-1}-1$, then $$G'=[1,2,4,8, \ldots, 2^{n-3},2^{n-2}].$$ So, either Luca moved from $2^{n-1}-2^k-1$ to $2^k$ or from $2^{n-1}-1$ to $0$. The first case gives the third game above, and the second gives the second game above.
\item Finally, suppose that $$G''=[1,2,4,8,\ldots,2^{k-2},2^{k},\ldots,2^{n-3},2^{n-2}-2^{k-1}-1]$$ with $\mathcal{G}(G') \in \{1, 3, 2^{k}-1\}$. If $k \ge 2$, then since $N(G)-N(G'') \ge 2^k+1$, it would be impossible for Windsor to both remove one candy and have $\lfloor\log_2(N(G''))\rfloor < \lfloor \log_2(N(G)) \rfloor $. Otherwise $k = 1$, $\mathcal{G}(G') = 1$, and thus $N(G) = N(G'') + 2$, so $\lfloor \log_2(N(G'')) \rfloor = \lfloor \log_2 (N(G)) \rfloor$, a contradiction.
\end{enumerate}
\end{proof}
\begin{thm} \label{thm:5pile} For all $N \in \mathbb{Z}^+$, there exists a 5-pile game $G$ with $N(G) = N$ and $N_W(G) \leq \frac{3}{2}\sqrt{2N} -2 $.
\end{thm}
\begin{proof}
We can write $N$ in binary as
\[ N = 2^{k_1}+ 2^{k_2}+\cdots+2^{k_n}+2^{k_{n+1}}+2^{k_{n+2}}+ \cdots+2^{k_{n+m}},\] where $k_1>k_2>\cdots>k_{n+m}$, and where $n$ is defined so that $k_n\ge\lfloor\frac{k_1}{2}\rfloor$ but $k_{n+1} < \lfloor\frac{k_1}{2}\rfloor$. Thus $n$ is the minimal $i$ such that $2^{k_{i+1}} < \sqrt{N}$.
Consider the game $G_1 = \mathfrak{G}(a,m,0)$, where
\[m=2^{k_1-\lfloor \frac{k_1}2 \rfloor}+2^{k_2-\lfloor \frac{k_1}2 \rfloor}+2^{k_3-\lfloor \frac{k_1}2 \rfloor}+\cdots+2^{k_n-\lfloor \frac{k_1}2 \rfloor}-1\qquad\text{and}\qquad
a=2^{\lfloor \frac{k_1}2 \rfloor-1}-1.\]
By construction, $N(G_1) < N$. From this, we can construct the game
\[ G = \left [2^{\lfloor \frac{k_1}2 \rfloor-1}-1,2^{k_1-1}+2^{k_2-1}+ \cdots +2^{k_n-1}-2^{\lfloor \frac{k_1}2 \rfloor-1}, \right .2^{k_1-1}+2^{k_2-1}+ \cdots \]\[ \left . \cdots +2^{k_n-1}-1,2^{k_{n+1}-1}+2^{k_{n+2}-1}+\cdots+2^{k_{n+m}-1} -1,2^{k_{n+1}-1}+2^{k_{n+2}-1} + \cdots +2^{k_{n+m}-1}-1 \right ] \]
where $N(G) = N$. Note that the last two piles of $G$ are identical.
Corollary~\ref{cor:UsefulBound} gives
\[N_W(G_1) \le 2^{\lfloor \frac{k_1}2 \rfloor}-2+2^{k_1}+2^{k_2}+ \cdots +2^{k_n}-2^{\lfloor \frac{k_1}2 \rfloor}-
(2^{k_1}+2^{k_2}+ \cdots +2^{k_n}) + r_N, \,\, r_N \le \sqrt{2N}, \]
and therefore \[N_W(G) \le \frac32 \sqrt{2N} -2\]
because \[N_W(G)=2^{k_{n+1}-1}+2^{k_{n+2}-1}+\cdots+2^{k_{n+m}-1}+r_N-2.\]
\end{proof}
\begin{thm} \label{thm:k-1} If $G$ is a game containing $p$ piles with no duplicate piles, then $N_W(G) \ge p-1$.
\end{thm}
\begin{proof}
We shall prove this by induction on $N(G)$. When $N(G)<2$, the result is trivial. When $N(G)=2$, then $G$ is either $[2]$ or $[1,1]$. The first one gives $2 \ge 0$, and the second $1 \ge 1$, as desired. Suppose the claim is true for all $G$ with $N(G)<n$. We show it holds when $N(G) = n$.
\begin{itemize} \item[$(\mathcal{N})$] Let $G$ be an $\mathcal{N}$ position. Then the number of piles in $G'$ is at most one fewer than that of $G$. So, either $N_W(G') \ge p-2$ or Windsor made a move that leaves two piles of equal size. In the first case, Windsor must have removed at least one candy, so $N_W(G)\ge p-1$ as desired. If Windsor moved to create a duplicate pile, $$G' = [a,a,g_1,g_2,g_3,\ldots,g_{p-2}],$$ where the $g_i$'s are all distinct. By Lemma~\ref{lem:fairRemove}, $N_W(G')=a + N_W([g_1,g_2,\ldots,g_{p-2}])$. By induction, $N_W([g_1,g_2,\ldots,g_{p-2}]) \ge p-3$. As $a \ge 1$, we get that $N_W(G') \ge p-2$, so $N_W(G)\ge p-1$ as desired.
\item[$(\mathcal{P})$] Suppose $G$ is a $\mathcal{P}$ position.
\begin{enumerate}
\item If Luca doesn't remove an entire pile, then $G'$ has the same number of piles as $G$.
We consider cases:
\begin{enumerate}
\item If there are no duplicates in $G'$, by the inductive hypothesis, $N_W(G)=N_W(G') \ge p-1$ as desired.
\item Suppose Luca creates a duplicate pile, so $$G' = [a,a,g_3,\ldots,g_p].$$ Then we have $N_W(G')=a+N_W([g_3,\ldots,g_p])$. If $a \neq 1$, $N_W(G)=N_W(G') \ge 2+ p-3 =p-1$, via inductive hypothesis. Suppose $$G'=[1,1,g_3,\ldots,g_p].$$ In that case, we must have had $G=[g_1,g_2,g_3,\ldots,g_p]$ with $g_1 = 1$. Windsor cannot move in a $1$ pile, or else Luca would have been able to move to $G'' = [g_2,g_3, \ldots, g_p]$, contradicting the assumption that $G\in\mathcal{P}$. So, his winning move must be in one of the piles $g_3,\ldots,g_p$. If Windsor doesn't remove a pile, we get
\begin{align} \label{eq:NWPpos} \begin{split}N_W(G) &=N_W(G') \\ &= 1+N_W([g_3,g_4,\ldots,g_p]) \\ &= 2+N_W([z,g_4,\ldots,g_p]) \\ &\ge 2 + p - 3 \\ &= p-1.\end{split} \end{align} The first equality in~(\ref{eq:NWPpos}) is because it is currently Luca's move. The second equality follows from Lemma~\ref{lem:fairRemove}. The third equality follows because Windsor removed one candy. The fourth inequality follows from the inductive hypothesis.
\item If Luca first creates a $1,1$ duplicate (i.e. moves a pile $g_2$ to size $1$ with an existing pile $g_1$ of size $1$) to obtain $G'$, Windsor removes a pile in $G'$. We have \begin{align*} N_W(G) &= N_W(G') \\ &= 1+N_W([g_3,g_4,\ldots,g_p]) \\ &= 1+g_3+N_W([g_4,\ldots,g_p]) \\ &\ge 1+g_3+p-4,\end{align*} where $g_3$ is the pile Windsor removes. If $g_3 \neq 1$, we have $$N_W(G) \ge 1+2 +p-4 =p-1,$$ as desired. But if $g_3=1$, then $G$ had a $1,1$ duplicate already, contrary to hypothesis.
\end{enumerate}
\item Suppose Luca removes a pile. We have $G' = [0,g_2,g_3,\ldots,g_p]$.
We further subdivide into cases:
\begin{enumerate}
\item If Windsor removes an entire pile $g_2$, then the resulting position $[g_3,\ldots,g_p]$ must be a $\mathcal{P}$ position, so $g_1 \oplus g_2 = 0$ and $g_1 = g_2$, contradicting the assumption that $G$ has no duplicate piles.
\item Suppose Windsor doesn't remove a pile and creates no duplicate piles when he moves $G'$ to $G''$. Via the inductive hypothesis $N_W(G'') \ge p-2$. Since Windsor removed at least one candy, $N_W(G) \ge p-1$ as desired.
\item Suppose Windsor removes no entire pile, but creates some duplicate pile of size $a \ge 2$ so $G'' = [a, a, g_4, g_5, \ldots g_p]$ with $$N_W(G'')=a+N_W([g_4,g_5,\ldots,g_p])\ge a+p-4.$$
Since $a \ge 2$ and Windsor removed at least one candy, $$N_W(G)\ge 1+ a+ p -4 \ge 1+2 + p-4=p-1,$$ as desired.
\item Finally, suppose that Windsor removes some candies to create a duplicate pile of size $1$ with $G'' =[1, 1, g_4, g_5 , \ldots, g_p]$. This would give
$$G' = [1,2,g_4,g_5,\ldots,g_p], \quad G = [1,2,3,g_4,g_5,\ldots,g_p]$$
as Luca removed a pile (so no other pile had size $2$).
Since $G'' \in \mathcal{P}$, $\mathcal{G}(G') = 3$. It suffices to show that if $H = [g_4, g_5, \ldots, g_p]$, then $N_W(H) \ge p-3$. If $H = \varnothing$, we are done. Thus suppose $H$ has at least one pile.
Note that for all $i \ge 4$, $g_i > 1$ and $g_i \equiv 0, 1 \pmod{4}$, and all these piles of $H$ are distinct. We can consider the possible moves in Luca's ply $(H, H')$ as we did above.
\begin{itemize}
\item[$\bullet$] Any duplicate pile created has size at least $4$, so creating a duplicate pile would yield the desired bound: \[N_W(H)=N_W(H') \ge 4+N_W([g_6,g_7,\ldots,g_p]) \ge 4+p-6 =p-2.\]
\item[$\bullet$] If Luca neither removes a pile nor creates a duplicate, Windsor must move in a pile distinct from Luca's. If Windsor removed an entire pile, he removed at least $4$ candies, so $N_W(G) \ge p-1$. Since $H$ is duplicate free, Windsor cannot create a duplicate. Thus, if Windsor didn't remove a pile, we obtain $H'' = [a, b, g_6, g_7, \ldots, g_p]$ with
\begin{align*} N_W(G) &= N_W(G') \\ &\ge 1+N_W(G'') \\ &= 2+N_W(H) \\ &= 2+N_W(H') \\ &= 3+N_W([a,b,g_6,g_7,\ldots,g_p]) \\ &\ge 3+p-4 \\ &= p-1.\end{align*}
\item[$\bullet$] Suppose Luca removes a pile. Since $g_i \equiv 0, 1 \pmod 4$ for all piles in $H$, Windsor must have removed at least $3$ candies since $H'' \in \mathcal{P}$; furthermore, because $H$ contains no duplicates, Windsor cannot have removed an entire pile in moving from $H'$ to $H''$. Thus $H''$ consists of $p-4$ piles. If $H''$ has no duplicate piles, then by induction $N_W(H'')\ge p-5$, so $$N_W(H)\ge 3+(p-5)=p-2,$$ which is greater than the required $p-3$. On the other hand, if $H''$ has a duplicate pile, say with $H''=[g_6,g_6,g_7,g_8,\ldots,g_p]$, then \begin{align*} N_W(H) &\ge 3+N_W(H'') \\ &= 3+g_6+N_W([g_7,g_8,\ldots,g_p]) \\ &\ge 3+g_6+(p-6) \\ &\ge p-3.\end{align*}
\end{itemize}
This completes the analysis of the final case and finishes the inductive step.
\end{enumerate}
\end{enumerate}
\end{itemize}
\end{proof}
For small $N$, we can use the above results to identify the games $G$ with $N(G) = N$ that minimize $N_W(G)$.
\begin{ex} \label{lem:bestten}
For $N = 10, 12, 14, 16$ we compute the unique games $G$ that minimize $N_W(G)$ via Theorem~\ref{thm:equalitycases}.
\begin{itemize}
\item If $N=10$, then $G = [1, 4, 5]$ minimizes $N_W$, with $N_W(G) = 3$.
\item If $N=12$, then $G=[2,4,6]$ minimizes $N_W$, with $N_W(G) = 3$.
\item If $N = 14$, then $G=[1,2,4,7]$ minimizes $N_W$, with $N_W(G) = 3$.
\item If $N=16$, then $G=[1,1,1,2,4,7]$ minimizes $N_W$, with $N_W(G) = 4$.
\end{itemize}
\end{ex}
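These small cases can also be confirmed by exhaustive search. The following is a minimal sketch (with hypothetical helper names \texttt{totals}, \texttt{N\_W} and \texttt{minimizers}), assuming the convention that each player moves to a $\mathcal{P}$ position whenever one is available and, subject to that, maximizes his or her own candy total; it then scans every $\mathcal{P}$ position with $N$ candies.
\begin{verbatim}
# Brute-force check of the small cases above (illustrative sketch).
from functools import lru_cache

@lru_cache(maxsize=None)
def totals(piles):
    """Return (mover, other): candies collected by the player to move and by
    the opponent from this position onward, under the play described above."""
    piles = tuple(sorted(p for p in piles if p > 0))
    if not piles:
        return (0, 0)
    xor = 0
    for p in piles:
        xor ^= p
    best = None
    for i, p in enumerate(piles):
        for new in range(p):
            # A player in an N position (xor != 0) only moves to P positions.
            if xor != 0 and (xor ^ p ^ new) != 0:
                continue
            child = piles[:i] + ((new,) if new else ()) + piles[i + 1:]
            c_mover, c_other = totals(child)
            mine = (p - new) + c_other      # roles swap after the move
            if best is None or mine > best[0]:
                best = (mine, c_mover)
    return best

def N_W(piles):
    """Windsor's total in a P position (Luca, the loser, moves first)."""
    return totals(tuple(piles))[1]

def partitions(n, largest=None):
    """All multisets of positive integers summing to n, as non-increasing tuples."""
    if n == 0:
        yield ()
        return
    largest = n if largest is None else min(largest, n)
    for first in range(largest, 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def minimizers(n):
    """Minimum of N_W over all P positions with n candies, and its minimizers."""
    best, games = None, []
    for g in partitions(n):
        xor = 0
        for p in g:
            xor ^= p
        if xor != 0:
            continue                        # keep only P positions
        v = N_W(g)
        if best is None or v < best:
            best, games = v, [g]
        elif v == best:
            games.append(g)
    return best, games

for n in (10, 12, 14, 16):
    print(n, minimizers(n))
\end{verbatim}
Running it for $N=10,12,14,16$ should reproduce the minimizers listed above.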
\section{Conjectures and Concluding Remarks} \label{sec:examples}
\subsection{$4$-Pile \textsc{Candy Nim}}
Most of our attention with respect to strategies and bounds on $V(G)$ has been focused on the case when $G$ is a $3$-pile game. We include a brief analysis and several conjectures regarding $V(G)$ and optimal play for $4$-pile games.
First, we show that, in the 4-pile game, Luca does not always have an optimal move in the largest pile.
\begin{ex} Let $G=[1,5,16,20]$. We show $V([1,5,16,20])=28$, where Luca's optimal move is to remove three candies from the pile of size 5. By checking, we have the following optimal game play:
$$[1, \color{red}5\color{black}, 16, 20] \overset{\textcolor{red}{L}}\rightarrow [1, \color{red}2\color{black}, 16, \color{green}20\color{black}] \overset{\textcolor{green}{W}}\rightarrow [1, 2, 16, \color{green}1 \color{red}9\color{black}] \overset{\textcolor{red}{L}}\rightarrow [1,2,\color{red}12, \color{green}16\color{black}] \overset{\textcolor{green}{W}}\rightarrow [1,2,12,\color{green}1\color{red}5\color{black}] \overset{\textcolor{red}{L}}\rightarrow [1,2,\color{red}8,\color{green}12\color{black}]$$
$$\overset{\textcolor{green}{W}}\rightarrow [1,2,8,\color{green}1\color{red}1\color{black}]\overset{\textcolor{red}{L}}\rightarrow [1,2,\color{green}8,\color{red}4\color{black}] \overset{\textcolor{green}{W}}\rightarrow [1,2,\color{green}7\color{black},4] = [1, 2, 4, 7]$$
By Theorem~\ref{thm:equalitycases}, $V([1, 2, 4, 7]) = 8$. Thus,
$$V(G) = 20 + 8 = 28$$
\end{ex}
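The same exhaustive search sketched at the end of the previous section can be used to double-check this value, reading $V(G)$ in a $\mathcal{P}$ position as Luca's total minus Windsor's (the convention used in the computation above):
\begin{verbatim}
mover, other = totals((1, 5, 16, 20))
print(mover - other)   # expected to match the value 28 computed above
\end{verbatim}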
We can obtain lower bounds on some families of four-pile games $G$ using related three-pile games. We first consider the four-pile games $G$ whose two smallest piles have sizes $1$ and $2$, and show that their values $V(G)$ are bounded below by those of the ``corresponding'' $3$-pile games with smallest pile of size $3$.
\begin{prop} \label{p:4pile}
Let $m$ be a positive integer. Then both of the following hold:
$$V([1,2,4m, 4m + 3]) \geq V([3,4m, 4m+3]),$$ $$V([1,2,4m+1, 4m+2]) \geq V([3,4m+1,4m+2]).$$
\end{prop}
\begin{proof}
Let \begin{align*} G_1 = [3,4m, 4m+3], && G_2 = [3,4m+1, 4m+2], \\ H_1 = [1,2,4m,4m+3], && H_2 = [1,2,4m+1, 4m+2].\end{align*}
We will show the desired result by induction on $m$. Let $m = 1$ be our base case.
We can check $6 = V([3,4,7]) \leq V([1,2,4,7]) = 8$ and $V([3,5,6]) = V([1,2,5,6]) = 6$.
Given $i\in\{1,2\}$, for every possible optimal turn $T_{G_i} = (G_i,G_i',G_i'')$ we must show that there exists a turn $T_{H_i} = (H_i, H_i', H_i'')$ such that $$V_{T_{G_i}}(G_i) + V(G_i'') \leq V_{T_{H_i}}(H_i) + V(H_i'').$$
Suppose that $V(G_i) \leq V(H_i)$ for $m<w$.
Let $m = w$. Suppose $G_i' = [3,a,b]$ and $G_i'' = [3,a,c]$. Then we set $H_i' = [1,2,a,b]$ and $H_i'' = [1,2,a,c]$, so that $V_{T_{G_i}}(G_i) = V_{T_{H_i}}(H_i)$ and $V(G_i'') \leq V(H_i'')$ by the inductive hypothesis. If $G_i' = [2,a,b]$ or $G_i' = [1,a,b]$, then we set $H_i' = [0,2,a,b]$ and $H_i' = [1,0,a,b]$, respectively. This yields $$V_{T_{G_i}}(G_i) + V(G_i'') = V_{T_{H_i}}(H_i) + V(H_i'').$$
Now suppose that $G_i'' = [0,a,b]$. If $i = 1$, then $V_{T_{G_1}}(G_1) + V(G_1'') = 0$ and $V_{T_{H_1}}(H_1) + V(H_1'') \geq 0$ by Lemma \ref{lem:valuegeq0}.
If $i=2$, then $V_{T_{G_2}}(G_2) + V(G_2'') \leq 2$, which implies that it is not an optimal move since Luca could instead remove the largest pile in $G_2$ and obtain an overall value of $4$.
Thus, by induction, we have $$V([1,2,4m, 4m + 3]) \geq V([3,4m, 4m+3]), \quad V([1,2,4m+1, 4m+2]) \geq V([3,4m+1,4m+2]).$$
\end{proof}
\subsection{General Play}
We can hope to make even more general inferences from the $3$-pile game to multi-pile \textsc{Candy Nim} games. Notably, we conjecture a similar result to Proposition~\ref{p:4pile} holds for a broader family of \textsc{Candy Nim} games.
\begin{conj}\label{c:gen}
Suppose $G = [a, b, c]$ with $a < b < c$. Then for some $j > 1$, there exist $a_1, \ldots, a_j$ with
$$a = a_1 + \cdots + a_j = a_1 \oplus \cdots \oplus a_j$$
such that the game $$H = [a_1, a_2, \ldots, a_j, b, c]$$ satisfies $V(H) \geq V(G)$.
\end{conj}
\begin{rem}
Note that it is \textit{not true} that for any game $G =[a, b,c]$, every such decomposition $a = a_1 + \cdots + a_j = a_1 \oplus \cdots \oplus a_j$, with resulting game $H$ as in Conjecture~\ref{c:gen}, satisfies $V(H) \ge V(G)$. As a counterexample, consider the game $G = [31,42,53]$, with $a = 31$. Using the decomposition $a_1 = 1, a_2 = 2, a_3 = 4, a_4 = 8, a_5 = 16$ we obtain the game $H = [1,2,4,8,16,42,53]$. However, $V(G) = 96$ while $V(H) = 94$.
\end{rem}
We can also hope to extend the analysis of Section~\ref{sec:multipile}. Observationally, for a fixed number of candies, the games $G$ that optimize $N_W(G)$ have specific structural properties that we conjecture hold in general:
\begin{conj}
For all fixed $N > 0$, there exist (not necessarily distinct) games $G_1, G_2$ with $N(G_1) = N(G_2) = N$ such that
$$N_W(G_1) = N_W(G_2) = \min_{H;\,N(H) = N} N_W(H)$$
where $G_1$ has a pile with at least $N/4$ candies and $G_2$ has at most $c \log N$ piles for some absolute constant $c > 0$.
\end{conj}
| {
"timestamp": "2018-05-21T02:04:12",
"yymm": "1805",
"arxiv_id": "1805.07019",
"language": "en",
"url": "https://arxiv.org/abs/1805.07019",
"abstract": "Candy Nim is a variant of Nim in which both players aim to take the last candy in a game of Nim, with the added simultaneous secondary goal of taking as many candies as possible. We give bounds on the number of candies the first and second players obtain in 3-pile $\\mathcal{P}$ positions as well as strategies that are provably optimal for some families of such games. We also show how to construct a game with $N$ candies such that the loser takes the largest possible number of candies and bound the number of candies the winner can take in an arbitrary $\\mathcal{P}$ position with $N$ total candies.",
"subjects": "Combinatorics (math.CO)",
"title": "$\\mathcal{P}$ Play in Candy Nim",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9890130567217291,
"lm_q2_score": 0.7154239897159439,
"lm_q1q2_score": 0.7075636669210206
} |
https://arxiv.org/abs/1510.06790 | On the Kato problem and extensions for degenerate elliptic operators | We study the Kato problem for degenerate divergence form operators. This was begun by Cruz-Uribe and Rios who proved that given an operator $L_w=-w^{-1}{\rm div}(A\nabla)$, where $w\in A_2$ and $A$ is a $w$-degenerate elliptic measure (i.e, $A=w\,B$ with $B$ an $n\times n$ bounded, complex-valued, uniformly elliptic matrix), then $L_w$ satisfies the weighted estimate $\|\sqrt{L_w}f\|_{L^2(w)}\approx\|\nabla f\|_{L^2(w)}$. Here we solve the $L^2$-Kato problem: under some additional conditions on the weight $w$, the following unweighted $L^2$-Kato estimates hold $$ \|L_w^{1/2}f\|_{L^2(\mathbb{R}^n)}\approx\|\nabla f\|_{L^2(\mathbb{R}^n)}. $$This extends the celebrated solution to the Kato conjecture by Auscher, Hofmann, Lacey, McIntosh, and Tchamitchian, allowing the differential operator to have some degeneracy in its ellipticity. For example, we consider the family of operators $L_\gamma=-|x|^{\gamma}{\rm div}(|x|^{-\gamma}B(x)\nabla)$, where $B$ is any bounded, complex-valued, uniformly elliptic matrix. We prove that there exists $\epsilon>0$, depending only on dimension and the ellipticity constants, such that $$ \|L_\gamma^{1/2}f\|_{L^2(\mathbb{R}^n)}\approx\|\nabla f\|_{L^2(\mathbb{R}^n)}, \qquad -\epsilon<\gamma<\frac{2\,n}{n+2}. $$ This gives a range of $\gamma$'s for which the classical Kato square root $\gamma=0$ is an interior point.Our main results are obtained as a consequence of a rich Calderón-Zygmund theory developed for some operators associated with $L_w$. These results, which are of independent interest, establish estimates on $L^p(w)$, and also on $L^p(v\,dw)$ with $v\in A_\infty(w)$, for the associated semigroup, its gradient, the functional calculus, the Riesz transform, and square functions. As an application, we solve some unweighted $L^2$-Dirichlet, Regularity and Neumann boundary value problems for degenerate elliptic operators. | \section{Introduction}
\label{section:introduction}
In this paper we study the degenerate elliptic
operators $L_w=-w^{-1}\mathop{\rm div} A{\nabla}$, where $w$ is in the Muckenhoupt
class $A_2$ and $A(x)$ is an $n\times n$ complex-valued matrix that
satisfies the degenerate ellipticity condition
\[ \lambda w(x) | \xi | ^{2}\leq {\Re}\langle A(x)\xi
,\xi \rangle, \qquad
|\langle A(x)\xi ,\eta \rangle |\leq \Lambda w(x)|\xi
||\eta |, \quad \xi ,\,\eta \in \mathbb{C}^{n},
\ \mbox{a.e.~}x\in\mathbb{R}^n. \]
Equivalently, $A(x)=w(x)B(x)$, where $B$ is an $n\times n$
complex-valued matrix that satisfies the uniform ellipticity
conditions
\[ \lambda | \xi | ^{2}\leq {\Re}\langle B(x)\xi
,\xi \rangle, \qquad
|\langle B(x)\xi ,\eta \rangle |\leq \Lambda |\xi
||\eta |, \quad \xi ,\,\eta \in \mathbb{C}^{n},
\ \mbox{a.e.~}x\in\mathbb{R}^n. \]
Such operators were first studied (with $A$ a real symmetric matrix)
by Fabes, Kenig and Serapioni~\cite{fabes-kenig-serapioni82}. When
$A$ is complex-valued and uniformly elliptic (i.e. $w\equiv 1$), a
landmark result was the
proof of the Kato conjecture by Auscher, Hofmann, Lacey, McIntosh, and Tchamitchian~\cite{auscher-hofmann-lacey-mcintosh-tchamitchian02}: that for all $f\in H^1$,
\[ \|L^{1/2} f\|_2 \approx \|{\nabla} f\|_2. \]
The proof of this long-standing conjecture led naturally to the study
of the operators associated with $L$: the semigroup $e^{-tL}$, its
gradient $\sqrt{t}{\nabla} e^{-tL}$, the Riesz transform ${\nabla}
L^{-1/2}$, the $H^\infty$ functional calculus and square functions:
for details and complete references, see~Auscher~\cite{auscher07}.
These estimates are interesting in themselves; moreover,
it is well known that $L^p$ estimates for these operators yield
regularity results for boundary value problems for $L$: for details, see the
introduction to~\cite{auscher-tchamitchian98}.
In~\cite{DCU-CR2013} (see also~\cite{cruz-uribe-riosP,cruz-riosP, Auscher-Rosen-Rule}) the
first and third authors solved the Kato problem for degenerate elliptic
operators: they showed that
if $w\in A_2$ and $A$ satisfies the degenerate ellipticity conditions,
then for all $f\in H^1(w)$,
\begin{equation} \label{eqn:Lw2-kato}
\|L_w^{1/2} f\|_{L^2(w)} \approx \|{\nabla} f\|_{L^2(w)}.
\end{equation}
In this paper we consider the problem of determining those $A_2$ weights
such that the classical Kato problem can be solved for $L_w$: that
is, finding weights such that $L_w$ satisfies the unweighted estimate
\[
\|L_w^{1/2} f\|_{L^2(\mathbb{R}^n)} \approx \|{\nabla} f\|_{L^2(\mathbb{R}^n)},
\]
for $f$ in a class of nice functions ({\em a posteriori}, by standard density arguments, the estimate can be extended to all $f\in H^1(\mathbb{R}^n)$).
We solve this problem in two steps. The first is to prove weighted
$L^p$ estimates for some operators associated with $L_w$ (the
semigroup, its gradient, the Riesz transform, the functional calculus,
and square functions.) These results, which are of interest in their
own right, are analogous to those gotten in the uniformly elliptic
case. However, a significant technical obstruction is that given
a weight $w\in A_2$, while it is the case that there exists
$\epsilon>0$ such that $w\in A_{2-\epsilon}$, it is easy to construct
examples to show that $\epsilon$ may be arbitrarily small. Therefore,
our bounds in the range $1<p<2$ need to take this into account.
The second step is to find conditions on the weight $w$ so that these
operators satisfy {\em unweighted} $L^2$ estimates. Both steps are
carried out simultaneously, and the proofs are
intertwined. Our approach is to apply the theory of off-diagonal
estimates on balls developed by Auscher and the second
author~\cite{auscher-martell06,auscher-martell07b,
auscher-martell07,auscher-martell-08}. We will in fact prove
weighted estimates on $L^p(v\,dw)$, where $v$ satisfies
Muckenhoupt and reverse H\"older conditions with respect to the
measure $dw=w\,dx$: $L^p(w)$ estimates are then gotten by
taking $v=1$, and unweighted estimates by taking
$v=w^{-1}$.
The unweighted $L^2$ estimates are
delicate, since they require a careful estimate of the constants that
appear. Nevertheless, we are able to give useful sufficient
conditions: e.g., $w\in A_1\cap RH_{\frac{n}{2}+1}$. (For definitions of
these classes, see Section~\ref{section:prelim} below.) For example,
we have the following result that is a special case of one of our main
results (cf. Theorem~\ref{corol:super-Kato}).
\begin{theor} \label{thm:special-case}
Let $L_w=-w^{-1}\mathop{\rm div} A{\nabla}$ be a degenerate elliptic operator as
above. If $w\in A_1\cap RH_{\frac{n}{2}+1}$, then the Kato problem
can be solved for $L_w$: for every $f\in H^1(\mathbb{R}^n)$,
$$
\|L_w^{1/2} f\|_{L^2(\mathbb{R}^n)} \approx \|{\nabla} f\|_{L^2(\mathbb{R}^n)}.
$$
The implicit constants depend only on the dimension, the ellipticity
constants, and the $A_1$ and $RH_{\frac{n}{2}+1}$ constants of $w$.
Furthermore, if we define
$L_\gamma=-|x|^{\gamma}\mathop{\rm div}(|x|^{-\gamma} B(x){\nabla})$, where $B$ is an
$n\times n$ complex-valued matrix that satisfies the uniform
ellipticity condition, then there exists $0<\epsilon<\frac12 $ small
enough (depending only on the dimension and the ratio $\Lambda/\lambda$)
such that
$$
\|L_\gamma^{1/2} f\|_{L^2(\mathbb{R}^n)} \approx \|{\nabla} f\|_{L^2(\mathbb{R}^n)},\qquad
-\epsilon< \gamma<\frac{2n}{n+2}.
$$
\end{theor}
\begin{remark}
In Theorem \ref{thm:special-case} the operator $L_w^{1/2}$ is {\em a priori} only defined on $H^1(w)$; however this means that it is defined on $C_0^\infty(\mathbb{R}^n)$ and so by a standard density argument we can extend our results to all $f\in H^1(\mathbb{R}^n)$. Hereafter we will make this extension without further comment.
\end{remark}
We emphasize that in Theorem~\ref{thm:special-case}, when $\gamma=0$
we are back at the uniformly elliptic case, which is the celebrated solution to
the Kato square root problem by Auscher, Hofmann, Lacey, McIntosh,
and Tchamitchian
in~\cite{auscher-hofmann-lacey-mcintosh-tchamitchian02}. Here we
are able to find a range of $\gamma$'s for which the same estimates
hold and the classical Kato square root problem (i.e., $\gamma=0$)
is an interior point in that range.
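\begin{remark}
For the reader's convenience, here is the elementary computation behind this range of $\gamma$'s, assuming the standard characterization of power weights: $|x|^{\alpha}\in A_1$ if and only if $-n<\alpha\le 0$, and $|x|^{\alpha}\in RH_s$, $1<s<\infty$, if and only if $s\,\alpha>-n$. For $w(x)=|x|^{-\gamma}$ these give $w\in A_1$ if and only if $0\le\gamma<n$, and $w\in RH_{\frac{n}{2}+1}$ if and only if $\gamma<\frac{n}{\frac{n}{2}+1}=\frac{2\,n}{n+2}$. Hence $w\in A_1\cap RH_{\frac{n}{2}+1}$ precisely when $0\le\gamma<\frac{2\,n}{n+2}$, which accounts for the upper endpoint of the interval in Theorem~\ref{thm:special-case}; the negative part of the range does not follow from this computation.
\end{remark}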
These unweighted $L^2$ estimates have important applications to
boundary value problems for degenerate elliptic operators. Consider,
for example, the following Dirichlet problem on $\mathbb{R}^{n+1}_+={\mathbb{R}^n}\times
[0,\infty)$:
\[ \begin{cases}
\partial_t^2 u - L_w u = 0, & \text{on } \mathbb{R}^{n+1}_+ \\
u= f & \text{on } \partial\mathbb{R}^{n+1}_+ =\mathbb{R}^n.
\end{cases}
\]
If $f\in L^2(\mathbb{R}^n)$, then $u(x,t)=e^{-tL_w^{1/2}}f(x)$ is a solution,
and if $L_w$ has a bounded $H^\infty$ functional calculus on $L^2$,
then
$ \sup_{t>0} \|u(\cdot,t)\|_2 \lesssim \|f\|_2. $
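Indeed, at least formally, $\partial_t^2 u=(L_w^{1/2})^2e^{-tL_w^{1/2}}f=L_wu$; moreover $u(\cdot,t)=\varphi_t(L_w)f$ with $\varphi_t(z)=e^{-tz^{1/2}}$ and $\sup_{t>0}\|\varphi_t\|_{\H^\infty(\Sigma_\mu)}\le 1$, so the uniform $L^2$ bound follows from the boundedness of the $H^\infty$ functional calculus.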
Similar results hold for the corresponding Neumann and Regularity problems.
\medskip
Our proofs are unavoidably technical, and the results for each operator
considered build upon what was proved previously for other operators.
We have organized the material as follows. In
Section~\ref{section:prelim} we gather some essential definitions and results about
weights, degenerate elliptic operators, and off-diagonal estimates.
Central to all of our subsequent work are Theorems~\ref{theorem:2.2}
and~\ref{theorem:2.4} (which were proved in~\cite{auscher-martell06}).
In Sections~\ref{section:semigroup},~\ref{section:functional}
and~\ref{section:square-function} we prove estimates for the
semigroup $e^{-tL_w}$, $t>0$, the $H^\infty$ functional
calculus (i.e., operators $\varphi(L_w)$ where $\varphi \in
\H^\infty$), the vertical square function associated to the semigroup,
\[ g_{L_w} f(x) =\bigg( \int_{0}^{\infty }\left\vert \left(
tL_{w}\right) ^{1/2}e^{-tL_{w}}f(x) \right\vert ^{2}\frac{dt}{t}%
\bigg) ^{1/2}, \]
and its discrete analog. Here and in subsequent sections we
prove both $L^p(w)$ estimates and weighted $L^p(v\,dw)$
estimates. In many cases these results are proved simultaneously,
with the $L^p(w)$ results following from the weighted $L^p(v\,dw)$ results by taking $v=1$.
In Section~\ref{section:reverse} we prove the so-called reverse
inequality, $\|L_w^{1/2}\|_{L^p(w)} \lesssim \|{\nabla} f\|_{L^p(w)}$,
that generalizes the $L^2(w)$ estimate in~\eqref{eqn:Lw2-kato}. We
note that while the equivalence in~\eqref{eqn:Lw2-kato}
follows at once from the reverse inequality for $p=2$ by
duality, the two inequalities behave differently when $p\neq 2$.
In Sections~\ref{section:gradient} and~\ref{section:q-plus} we prove
estimates for the gradient of the semigroup,
$\sqrt{t}{\nabla} e^{-tL_w}$. The proof that there exists $q_+>2$ such
that this operator satisfies $L^p(w)$ estimates for $2<p<q_+$ is quite
involved as it requires preliminary estimates for the Riesz transform
and the Hodge projection. We note that as opposed to the
non-degenerate case, here we cannot use ``global'' embeddings, nor
can we rescale. Also we cannot expect to obtain that the gradient
of the semigroup maps globally $L^2(w)$ into $L^p(w)$ for $p\neq
2$.
All these difficulties arise naturally from the lack of isotropy of
the natural underlying measure $w(x)\,dx$ and make the typical
arguments used in the uniformly elliptic case
(cf.~\cite[Chapter~4]{auscher07}) unusable. We also note that in
some sense our result is the best possible: even in the non-degenerate
case it is known~\cite{auscher07} that given any $p>2$ there exists a
matrix $A$ and operator $L$ such that gradient of the semigroup is not
bounded on $L^p$.
In Section~\ref{section:riesz} we prove $L^p(w)$ estimates for the
Riesz transform ${\nabla} L^{-1/2}$, and in
Section~\ref{section:square-function-gradient} we prove $L^p(w)$
estimates for the square function associated to the gradient of the
semigroup,
\[ G_{L_{w}}f( x) =\bigg( \int_{0}^{\infty }
|t^{1/2}\nabla e^{-tL_{w}}f( x)|^{2}\frac{dt}{t}\bigg)^{1/2}. \]
In Section~\ref{section:L2-kato} we prove unweighted $L^2$
inequalities for the operators we have considered in previous
sections. These are a consequence of the weighted estimates and are
gotten by taking $v=w^{-1}$. The main problem is determining
conditions on $w$ for these to hold. We essentially have two
different kinds of estimates: one for operators that do not involve
the gradient, and one for those that do. The latter are more delicate
as they involve careful bounds for the parameter $q_+$ from
Section~\ref{section:q-plus} in terms of the weight~$w$. We also show
that we get unweighted $L^p$ estimates for $p$ very close to $2$.
Finally, in Section~\ref{section:BVP} we describe in more detail the
application of our results to $L^2$ boundary value problems for
degenerate elliptic operators. The results in this section are the culmination of
our work as they depend on all the estimates derived in previous
sections.
\medskip
As we were completing this project, we learned that related results
had been obtained independently by other authors.
In~\cite{LePhi} Le studies (among other things) the $L^p(w)$ theory for some of the
operators considered here and proves estimates for values of $p$ in the range
$(2-\epsilon,2+\epsilon)$. His proofs differ from ours in a number
of details. In~\cite{HLM} Hofmann, Le and Morris
establish some Carleson measure estimates and consider the Dirichlet problem for degenerate elliptic operators. Also, very recently we learned that Yang and Zhang~\cite{YZ} have proved
Kato type estimates in $L^p(w)$ for $p$ in the range $(p_0,2]$.
Finally, we note that the forthcoming paper
\cite{LiMartellPrisuelos} complements our work here as it considers the
conical square functions associated to the operator $L_w$.
\section{Preliminaries}
\label{section:prelim}
Throughout $n$ will denote the dimension of the underlying space ${\mathbb{R}^n}$
and we will always assume $n\geq 2$. If we write $A\lesssim B$ we
mean that there exists a constant $C$ such that $A\leq CB$. We write
$A\approx B$ if $A\lesssim B$ and $B\lesssim A$. The
constant $C$ in these estimates may depend on the dimension $n$ and other (fixed)
parameters that should be clear from the context. All constants,
explicit or implicit, may change at each appearance.
Given a ball $B$, let $r(B)$ denote the radius of $B$. Let
$\lambda B$ denote the concentric ball with radius $r(\lambda B) = \lambda r(B)$.
\subsection*{Weights}
By a weight $w$ we mean a non-negative, locally integrable function.
For brevity, we will often write $dw$ for $w\,dx$. We will use the following notation for averages: given a set $E$ such that
$0<w(E)<\infty$,
\[ \Xint-_E f\,dw = \frac{1}{w(E)}\int_E f\,dw, \]
or if $0<|E|<\infty$,
\[ \Xint-_E f\,dx = \frac{1}{|E|}\int_E f\,dx. \]
We state some definitions and basic properties of Muckenhoupt
weights. For further details,
see~\cite{duoandikoetxea01,garciacuerva-rubiodefrancia85}.
We say that $w\in A_p$, $1<p<\infty$, if
\[ [w]_{A_p} = \sup_Q \Xint-_Q w(x)\,dx \left(\Xint-_Q
w(x)^{1-p'}\,dx\right)^{p-1} < \infty. \]
When $p=1$, we say $w\in A_1$ if
\[ [w]_{A_1} = \sup_Q \Xint-_Q w(x)\,dx \esssup_{x\in Q} w(x)^{-1}<
\infty. \]
We say $w\in RH_s$, $1<s<\infty$ if
\[ [w]_{RH_s} = \sup_Q \left(\Xint-_Qw(x)\,dx\right )^{-1}
\left(\Xint-_Qw(x)^s\,dx\right )^{1/s} < \infty, \]
and
\[ [w]_{RH_\infty} = \sup_Q\left(\Xint-_Q w(x)\,dx\right)^{-1} \esssup_{x\in Q} w(x) <
\infty. \]
Let
\[ A_\infty = \bigcup_{1\leq p <\infty} A_p = \bigcup_{1<s\le \infty}
RH_s. \]
Weights in the $A_p$ and $RH_s$ classes have a self-improving
property: if $w\in A_p$, there exists $\epsilon>0$ such that $w\in
A_{p-\epsilon}$, and similarly if $w\in RH_s$, then $w\in
RH_{s+\delta}$ for some $\delta>0$. Hereafter, given $w\in A_p$, let
\[ r_w=\inf\{p: w\in A_p\}, \qquad s_w=\sup\{q: w\in RH_q\}. \]
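For example, assuming the standard characterization of power weights, for $w(x)=|x|^{-\gamma}$ with $0<\gamma<n$ one has $r_w=1$ and $s_w=\frac{n}{\gamma}$.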
An important property of $A_p$ weights is that they are doubling:
given $w\in A_p$, for all $\tau\ge 1$ and any ball $B$,
\[ w(\tau B)\leq [w]_{A_p} \tau^{pn} w(B). \]
In particular, hereafter let $D\leq pn$ be the doubling order of $w$:
that is the smallest exponent such that this inequality holds.
As a consequence of this doubling property, we have that with the ordinary Euclidean distance
$|\cdot|$, $({\mathbb{R}^n},dw,|\cdot|)$ is a space of homogeneous type.
In this setting we can define the new weight classes $A_p(w)$
and $RH_s(w)$ by replacing Lebesgue measure in the definitions above with
$dw$: e.g., $v\in A_p(w)$ if
\[ [v]_{A_p(w)} = \sup_Q \Xint-_Q v(x)\,dw \left(\Xint-_Q
v(x)^{1-p'}\,dw\right)^{p-1} < \infty. \]
It follows at once from these definitions that there is a
``duality'' relationship between the weighted and unweighted
$A_p$ and $RH_s$ conditions: $v=w^{-1} \in A_p(w)$ if and only if $w \in
RH_{p'}$ and $v=w^{-1}\in RH_s(w)$ if and only if $w\in A_{s'}$.
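For instance, when $1<p<\infty$, a direct computation using $(p'-1)(p-1)=1$ shows that for every cube $Q$,
\[
\Xint-_Q w^{-1}\,dw\,\left(\Xint-_Q w^{p'-1}\,dw\right)^{p-1}
=
\left[\left(\Xint-_Q w^{p'}\,dx\right)^{1/p'}\left(\Xint-_Q w\,dx\right)^{-1}\right]^{p},
\]
so that, quantitatively, $[w^{-1}]_{A_p(w)}=[w]_{RH_{p'}}^{p}$.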
\medskip
Weighted Poincar\'e-Sobolev inequalities were proved
in~\cite{fabes-kenig-serapioni82}.
\begin{theor} \label{thm:wtd-poincare}
Given $w\in A_p$, $p\geq 1$, let
$p_w^*=\frac{p\,n\,r_w}{n\,r_w-p}$ if $p<n\,r_w$ and $p_w^*=\infty$
otherwise. Then for every $p\le q<p_w^*$, ball $B$ and $f\in C_0^\infty(B)$,
\begin{equation} \label{eqn:wtd-imbedding}
\left(\Xint-_B |f(x)|^q\,dw(x)\right)^{1/q} \leq
Cr(B)\left(\Xint-_B |{\nabla} f(x)|^p\,dw\right)^{1/p}.
\end{equation}
Moreover, if $f\in C^\infty(B)$, then
\begin{equation}\label{w-Poincare}
\left(\Xint-_B |f(x)-f_{B,w}|^{q}\,dw(x)\right)^{1/q}
\le
C\,
r(B)\left(\aver{B} |\nabla f(x)|^{p}\,dw\right)^{1/p},
\end{equation}
where $f_{B,w}=\Xint-_B f\,dw$.
\end{theor}
\begin{remark}
In the special
case when $w\in A_1$ and $1<p<n$ we can also take $q=p_w^*=p^*$, i.e.,
the regular Sobolev exponent. See
P\'erez~\cite[Theorem~2.5.2]{perez1999}.
\end{remark}
\begin{remark}\label{remark-best-Poi}
If we let $q= \frac{np}{n-1}<p_w^*$, then we can get a sharp
estimate for the constant $C$ in~\eqref{eqn:wtd-imbedding}
and~\eqref{w-Poincare}: it is of the form $C(p,n)[w]_{A_p}^\kappa$
where $\kappa=\frac{n\,p-1}{n\,p\,(p-1)}$. This follows from the
sharp weighted estimates for the fractional integral operator due to
Alberico, Cianchi and Sbordone~\cite{MR2561035} and the standard
pointwise estimates used to prove Poincar\'e-Sobolev inequalities;
see~\cite{fabes-kenig-serapioni82} for details.
\end{remark}
\begin{remark}\label{remark:Poincare-non-smooth}
By a standard density argument, once we know that \eqref{w-Poincare}
holds for smooth functions in $B$ we can easily extend that estimate to
any function $f\in L^q(w)$ with $\nabla f\in L^p(w)$.
Details are left to the reader.
\end{remark}
\subsection*{Degenerate elliptic operators}
Given $%
w\in A_2$ and constants $0<\lambda\leq \Lambda<\infty$,
let ${{\mathcal E}}_n(w, \lambda, \Lambda )$ denote the class of $n\times n$
matrices $A=\left( A_{ij}(x) \right) _{i,j=1}^{n}$ of
complex-valued, measurable functions satisfying the degenerate ellipticity
condition
\begin{equation} \label{eqn:degen}
\lambda w(x) | \xi | ^{2}\leq {\Re}\langle A\xi
,\xi \rangle, \quad
|\langle A\xi ,\eta \rangle |\leq \Lambda w(x)|\xi
||\eta |, \quad \xi ,\,\eta \in \mathbb{C}^{n}.
\end{equation}%
Given $A\in {{\mathcal E}}_n(w,\lambda,\Lambda)$, we define the
degenerate elliptic operator in divergence form
${L}_{w}=-w^{-1}{{\mathop{\rm div}}}A{{\nabla}}$. These operators
were developed in~\cite{cruz-uribe-riosP} and we refer the reader there for complete
details. Here we sketch the key ideas.
Given a weight $w\in A_2$, the space $H^1(w)$ is the weighted Sobolev
space that is the completion of $C_c^\infty$ with respect to the norm
\begin{equation*}
\|f\|_{H^1(w)} = \left(\int_{{\mathbb{R}^n}} \left(|f(x)|^2+|{\nabla}
f(x)|^2\right)\,dw\right)^{1/2}.
\end{equation*}
Note that the space defined above would usually be denoted by $H^1_0(w)$, while $H^1(w)$ usually denotes the set of distributions $f$ for which both $f$ and $|\nabla f|$ belong to $L^2(w)$. However, since the underlying domain is ${\mathbb{R}^n}$, this definition implies that the ``boundary'' values vanish in the $L^2(w)$-sense, and the two definitions agree~\cite{miller82}.
Given a matrix $A\in {{\mathcal E}}_n(w,\lambda,\Lambda)$, define ${%
{\mathfrak a}}(f,g)$ to be the sesquilinear form
\begin{equation} \label{eqn-form}
{{\mathfrak a}}(f,g) = \int_{{\mathbb{R}^n}} A(x){\nabla} f(x) \cdot
\overline{{\nabla} g(x)} \,dx.
\end{equation}
Since $w\in A_2$ and $A$ satisfies \eqref{eqn:degen}, ${\mathfrak a}
$ is a closed, maximally accretive, continuous sesquilinear form. Therefore,
there exists an operator ${L}_w$ whose domain $\mathcal{D}(L_w)\subset H^1(w)$ is dense in
$L^2(w)$ and such
that for every $f \in \mathcal{D}(L_w)$ and every $g\in H^1(w)$,
\begin{equation} \label{eqn-a2}
{{\mathfrak a}}(f,g) = \langle {L}_wf, g \rangle_w= \int_{{\mathbb{R}^n}} {L}_wf(x)\overline{g(x)}\,dw.
\end{equation}
We note that the operator $L_w$ is one to one. Indeed, if $u,v\in\mathcal{D}(L_w)$ are such that $L_wu=L_wv$, then
for all $g\in H^1(w)$
\[0=\int_{\mathbb{R}^n} A(x){\nabla} (u(x)-v(x)) \cdot
\overline{{\nabla} g(x)} \,dx.\]
Taking $g=u-v$ implies ${\nabla} u(x)={\nabla} v(x)$ and so $u=v$.
The properties of the sesquilinear form guarantee
that on $L^2(w)$ there exists a bounded, strongly continuous semigroup $e^{-t{L}_w}$.
Further, it has a holomorphic extension.
Let
\[ \Sigma_\omega= \{ z\in {\mathbb C} : z\neq 0, |\arg(z)| < \omega \} \]
and define $\vartheta, \vartheta^* \in
[0,\pi/2)$ by
\[ \vartheta = \sup\{ |\arg \langle L_wf,f\rangle_w| : f \in \mathcal{D}(L_w)
\}, \qquad \vartheta^*=\arctan\sqrt{\frac{\Lambda^2}{\lambda^2}-1}. \]
Then there exists a complex semigroup $e^{-zL_w}$ on
$\Sigma_{\pi/2-\vartheta}$ of bounded operators on $L^2(w)$. By
the weighted ellipticity condition~\eqref{eqn:degen}, we have that
$0\le\vartheta\le\vartheta^* <\pi/2$.
\subsection*{Holomorphic functional calculus}
Our operator $L_w$ is ``an operator of type $\omega$'' with
$\omega=\vartheta$, as defined in~\cite{mcintosh86}. Indeed, the
ellipticity conditions imply that $L_w$ is closed and densely defined,
its spectrum is contained in $\Sigma_{\vartheta}$, and its resolvent
satisfies standard decay estimates~\cite{cruz-uribe-riosP}. Therefore, we
can define an $L^2(w)$ functional calculus as in~\cite{mcintosh86}.
Given $\mu\in
(\vartheta, \pi)$, let $\H^\infty(\Sigma_\mu)$ be the collection of
bounded holomorphic functions on $\Sigma_\mu$. To define
$\varphi(L_w)$ for $\varphi\in \H^\infty(\Sigma_\mu)$ we first
consider a smaller class: we say that $\varphi\in
\H^\infty_0(\Sigma_\mu)$ if for some $c,\,s>0$ it satisfies
\[ |\varphi(z)| \leq c|z|^s(1+|z|)^{-2s}, \quad z \in \Sigma_\mu. \]
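For instance, $\varphi(z)=z\,(1+z)^{-2}$ belongs to $\H^\infty_0(\Sigma_\mu)$ for every $0<\mu<\pi$ (one may take $s=1$).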
We then have an integral representation of $\varphi(L_w)$. Let
$\Gamma_\theta$ be the boundary of $\Sigma_\theta$ with positive orientation,
and let
$\vartheta < \theta < \nu < \min(\mu , \pi/2)$; then
\begin{equation} \label{eqn:L2-holo-rep}
\varphi(L_w) = \int_{\Gamma_{\pi/2-\theta}} e^{-zL_w}\eta(z)\,dz,
\end{equation}
where %
\begin{equation} \label{eqn:L2-holo-rep-eta}
\eta(z) = \frac{1}{2\pi i} \int_{\gamma_{\nu}(z)} e^{\zeta z}
\varphi(\zeta)\,d\zeta
\end{equation}
and $\gamma_{\nu}(z)=\mathbb{R}^+e^{i\mathrm{sign}(\mathrm{Im}(z))\nu} $. Note that
\[ |\eta(z)| \lesssim \min(1,|z|^{-s-1}), \quad z \in
\Gamma_{\pi/2-\theta}, \]
so the representation \eqref{eqn:L2-holo-rep} converges in $L^2(w)$, and we have the bound
\begin{equation}\label{eqn:L2-holo-bd}
\|\varphi(L_w)f\|_{L^2(w)} \leq C\|\varphi\|_\infty
\|f\|_{L^2(w)},\qquad \varphi\in \H^\infty_0(\Sigma_\mu).
\end{equation}
Now, since $L_w$ is a one-to-one
operator of type $\omega$, it has dense range~\cite[Theorem
2.3]{couling96}, and so the results in~\cite{mcintosh86} (see also~\cite[Corollary
2.2]{couling96}) imply that $L_w$ has an ${H}^\infty$
functional calculus and~\eqref{eqn:L2-holo-bd} extends to all of
$\H^\infty(\Sigma_\mu)$. Moreover, in~\cite[Section 8]{mcintosh86}
the equivalence between the existence of this ${H}^\infty$
functional calculus and square function estimates for $L_w$ and
$L_w^*$ is established:
\begin{equation}\label{eqn:holo-sqfe}
\left\{\int_0^\infty\| \varphi(tL_w)f\|_{L^2(w)}^2\,\frac{dt}{t}\right\}^{\frac{1}{2}}\le C\|\varphi\|_\infty\| f\|_{L^2(w)},\quad \varphi\in\H^\infty_0(\Sigma_\mu),
\end{equation}
with similar estimates for $L_w^*$.
The operators $\varphi(L_w)$ also have the following properties:
\begin{itemize}
\item If $\varphi$ and $\psi$ are bounded holomorphic functions, then we
have the operator identity $\varphi(L)\psi(L) = (\varphi \psi)(L)$.
\item Given any sequence $\{\varphi_k\}$ of bounded holomorphic functions
converging uniformly on compact subsets of $\Sigma_\mu$ to $\varphi$,
we have that $\varphi_k(L_w)$ converges to $\varphi(L_w)$ in the
strong operator topology (of operators on $L^2(w)$).
\end{itemize}
\begin{remark}
The $H^\infty$ functional calculus can be extended to more general
holomorphic functions, such as powers, for which the operators $\varphi (L_{w})$ can be
defined as unbounded operators: see \cite{haase06,mcintosh86}.
\end{remark}
\medskip
\subsection*{Gaffney-type estimates}
The semigroup and its gradient satisfy
Gaffney-type estimates on $L^2(w)$. Below, we will see that these are
a particular case of what we will call full off-diagonal estimates:
see Definition~\ref{defn:full-offdiagonal}.
\begin{theor} \label{thm:L2-gaffney}
Given $w\in A_2$ and $A\in {{\mathcal E}}_n(w,\lambda,\Lambda)$,
for any closed sets $E$ and $F$, any $f\in L^2(w)$, and for all $z\in
\Sigma_\nu$, where
$0<\nu<\frac{\pi}{2}-\vartheta$, the following estimates hold:
\begin{enumerate}
\setlength{\itemsep}{8pt}
\item $\|e^{-z\,L_w} (f\,\bigchi_E)\bigchi_F\|_{L^2(w)}
\le
C\, e^{-\frac{c\, d(E,F)^2}{|z|}}\,\|f\bigchi_E\|_{L^2(w)}$,
\item $\|\sqrt{z}{\nabla} e^{-z\,L_w} (f\,\bigchi_E)\bigchi_F\|_{L^2(w)}
\le
C\, e^{-\frac{c\, d(E,F)^2}{|z|}}
\,\|f\bigchi_E\|_{L^2(w)}$,
\item $\|z\, L_w e^{-z\,L_w} (f\,\bigchi_E)\bigchi_F\|_{L^2(w)}
\le
C\, e^{-\frac{c\, d(E,F)^2}{|z|}}
\,\|f\bigchi_E\|_{L^2(w)}$.
\end{enumerate}
\end{theor}
\begin{proof}
The semigroup estimate (1) was proved
in~\cite[Theorem~1.6]{cruz-uribe-riosP} for real $z$, but the same
proof can be readily modified to prove the analytic version.
Alternatively, estimates (1) and (2) follow from the resolvent bounds
\begin{equation} \label{eqn:res01}
\|(1+z^2L_w)^{-1}(f\bigchi_E)\bigchi_F\|_{L^2(w)}
\leq Ce^{-\frac{cd(E,F)}{|z|}}\|f\bigchi_E\|_{L^2(w)},
\end{equation}
and
\begin{equation} \label{eqn:res02}
\|z{\nabla}(1+z^2L_w)^{-1}(f\bigchi_E)\bigchi_F\|_{L^2(w)}
\leq Ce^{-\frac{cd(E,F)}{|z|}}\|f\bigchi_E\|_{L^2(w)},
\end{equation}
obtained in~\cite[Lemma~2.10]{DCU-CR2013} for
$z\in\Sigma_{\frac{\pi}{2}+\nu}$, together with the integral
representation of the semigroup
\[ e^{-zL_w}f=\frac{1}{2\pi i}\int_{\Gamma} e^{z\zeta}\left(\zeta +L_w\right)^{-1}f\, d\zeta,%
\]
where $\Gamma$ is the boundary of $\Sigma_{\theta}$ with positive
orientation and $\frac{\pi}{2}<\theta<\frac{\pi}{2}+\nu-\arg(z)$.
Finally, from \eqref{eqn:res01} and \eqref{eqn:res02} we obtain the estimate
\[\|z^2 L_w(1+z^2L_w)^{-1}(f\bigchi_E)\bigchi_F\|_{L^2(w)}
\leq Ce^{-\frac{cd(E,F)}{|z|}}\|f\bigchi_E\|_{L^2(w)}, \]
and then by the same kind of argument we get (3).
\end{proof}
\medskip
\subsection*{The Kato estimate}
The starting point for all of our estimates is the $L^2(w)$ Kato
estimates for the square root operator $L^{1/2}_w$ proved
in~\cite{DCU-CR2013} (see also \cite{Auscher-Rosen-Rule} for a different proof). This operator is the
unique, maximal accretive operator such that $L^{1/2}_w
L^{1/2}_w=L_w$. It has the integral representation
\[ L_w^{1/2} = \frac{1}{\sqrt{\pi}}\int_0^\infty
\sqrt{t}L_we^{-tL_w}\, \frac{dt}{t}. \]
(For further details, see~\cite{auscher-tchamitchian98,mcintosh86}.)
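At the level of scalar symbols this representation is simply a Gamma-function identity; a quick check, for $z>0$ (and hence on the sector, by analytic continuation):
\[ \frac{1}{\sqrt{\pi}}\int_0^\infty \sqrt{t}\,z\,e^{-tz}\,\frac{dt}{t} = \frac{z}{\sqrt{\pi}}\int_0^\infty t^{-1/2}e^{-tz}\,dt = \frac{z}{\sqrt{\pi}}\cdot\sqrt{\frac{\pi}{z}} = \sqrt{z}. \]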
\begin{theor}{\cite[Theorem~1.1]{DCU-CR2013}}\label{theorem:degen-kato}
Given $w\in A_2$ and $A \in {{\mathcal E}}_n(w,\lambda,\Lambda)$, there exist
constants $c$ and $C$, depending on $n$, $\Lambda/\lambda$ and
$[w]_{A_2}$, such that the domain of $L_w^{1/2}$ is $H^1(w)$, and for all
$f\in H^1(w)$,
\begin{equation} \label{eqn:degen-kato1}
c\|{\nabla} f\|_{L^2(w)} \leq \|L_w^{1/2}f\|_{L^2(w)} \leq C\|{\nabla}
f\|_{L^2(w)}.
\end{equation}
\end{theor}
\medskip
The Riesz transform associated to $L_w$ is the operator ${\nabla}
L_w^{-1/2}$. Formally, by \eqref{eqn:degen-kato1} we have that the
Riesz transform is bounded from $L^2(w)$ into $L^2(w,{\mathbb C}^n)$. To legitimize
this, we define
\begin{equation}\label{defi-RT}
{\nabla} L_w^{-1/2} = \frac{1}{\sqrt{\pi}} \int_0^\infty
\sqrt{t} {\nabla} e^{-t
L_w} \frac{dt}{t}.
\end{equation}
However, it is not immediate that this integral converges at $0$ or
$\infty$. To rectify this, for $\epsilon>0$ define
\begin{equation}\label{defi-RT:trunc}
S_\epsilon = S_\epsilon(L_w) = \frac{1}{\sqrt{\pi}} \int_\epsilon^{1/\epsilon}
\sqrt{t} e^{-tL_w} \frac{dt}{t}.
\end{equation}
Since, for each $0<\epsilon<1$, $S_\epsilon(z)$ is a bounded holomorphic function on the right
half plane, by the $L^2(w)$ functional calculus described above, $S_\epsilon(L_w)$ is a bounded operator on $L^2(w)$. Further, for $f\in L_c^\infty$, $S_\epsilon f \in \mathcal{D}(L_w)
\subset \mathcal{D}(L_w^{1/2})$, and so by inequality~\eqref{eqn:degen-kato1} and the functional calculus,
\begin{equation} \label{eqn:trunc}
\|{\nabla} S_\epsilon f \|_{L^2(w)} \lesssim \|L_w^{1/2} S_\epsilon
f\|_{L^2(w)} = \|\varphi_\epsilon (L_w)f\|_{L^2(w)},
\end{equation}
where
\[ \varphi_\epsilon(z) = \frac{1}{\sqrt{\pi}}
\int_\epsilon^{1/\epsilon}
\sqrt{t} \sqrt{z} e^{-tz}\frac{dt}{t}. \]
The sequence $\{\varphi_\epsilon\}$ is uniformly bounded and converges
uniformly to $1$ on compact subsets of the sector $\Sigma_\mu$,
$0<\mu<\pi/2$. Therefore, $L_w^{1/2} S_\epsilon f \rightarrow f$
strongly in $L^2(w)$. If we combine this fact with \eqref{eqn:trunc}
we see that $\{{\nabla} S_\epsilon f\}$ is Cauchy and so converges in
$L^2(w)$. We therefore define
\[ {\nabla} L_w^{-1/2} f = \lim_{\epsilon\rightarrow 0} {\nabla} S_\epsilon f, \]
where the limit is in $L^2(w)$.
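For completeness, here is a quick check of the uniform boundedness of $\{\varphi_\epsilon\}$ used above: for $z\in\Sigma_\mu$ with $0<\mu<\pi/2$ we have $\mathrm{Re}\,z\ge |z|\cos\mu$, and so
\[ |\varphi_\epsilon(z)| \le \frac{\sqrt{|z|}}{\sqrt{\pi}}\int_0^\infty \sqrt{t}\,e^{-t\,\mathrm{Re}\,z}\,\frac{dt}{t} = \frac{\sqrt{|z|}}{\sqrt{\pi}}\cdot\sqrt{\frac{\pi}{\mathrm{Re}\,z}} = \sqrt{\frac{|z|}{\mathrm{Re}\,z}} \le \frac{1}{\sqrt{\cos\mu}}, \]
independently of $\epsilon$.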
Given this definition, hereafter, when we are proving $L^2(w)$
estimates for the Riesz transform, we will actually prove estimates
for ${\nabla} S_\epsilon$ that are independent of $\epsilon$. These
arguments will remain implicit unless there are details we need to emphasize.
\bigskip
\subsection*{Off-diagonal estimates}
Off-diagonal estimates as we define them were introduced in~\cite{auscher-martell07} and we
will refer repeatedly to this paper for further information and
results. Throughout this section we will assume that the weight
$w$ belongs to $A_2$.
\smallskip
Given a ball $B$, for $j\geq 2$ we define the annuli
$C_{j}(B)=2^{j+1}\, B\setminus 2^j\, B$. We let $C_{1}(B)=4B$.
By a slight abuse of notation, we will define
\[ \aver{C_j(B)} h\,dw
=
\frac1{w(2^{j+1}B)}\,\int_{C_{j}(B)} h\,dw. \]
If $w\in A_2$ (as it will be hereafter), then $w(2^{j+1}B)\approx
w(C_j(B))$, so this definition is equivalent to the one given above up
to a constant. Finally, for $s>0$ we set $\dec{s}=\max\{s,s^{-1}\}$.
\begin{defi}\label{defi:off-d:weights}
Given $1\le p\le q\le \infty$, a family $\{T_t\}_{t>0}$
of sublinear operators satisfies $L^{p}(w)-L^{q}(w)$ off-diagonal
estimates on balls, denoted by
\[ T_t \in\offw{p}{q},\]
if there exist constants
$\theta_1, \theta_2>0$ and $c>0$ such that for every $t>0$ and for
any ball $B$, setting $r=r(B)$,
\begin{equation}\label{w:off:B-B}
\left(\aver{B} |T_t( \bigchi_B \, f) |^{q}\,dw\right)^{\frac 1 q}
\lesssim
\dec{\frac{r}{\sqrt{t}}}^{\theta_2} \,\left(\aver{B}
|f|^{p}\,dw\right)^{\frac 1 p };
\end{equation}
and for all $j\ge 2$,
\begin{equation}\label{w:off:C-B}
\left(\aver{B}|T_t( \bigchi_{C_j(B)}\, f) |^{q}\,dw\right)^{\frac 1 q}
\lesssim
2^{j\,\theta_1} \dec{\frac{2^j\,r}{\sqrt{t}}}^{\theta_2}\,
\expt{-\frac{c\,4^{j}\,r^2}{t}} \,
\left(\aver{C_j(B)}|f|^{p}\,dw\right)^{\frac 1 p }
\end{equation}
and
\begin{equation}\label{w:off:B-C}
\bigg(\aver{C_j(B)}|T_t( \bigchi_B \, f) |^{q}\,dw\bigg)^{\frac 1 q}
\lesssim
2^{j\,\theta_1} \dec{\frac{2^j\,r}{\sqrt{t}}}^{\theta_2}\,
\expt{-\frac{c\,4^{j}\,r^2}{t}}
\,\left(\aver{B}|f|^{p}\,dw\right)^{\frac 1 p }.
\end{equation}
If the family of sublinear operators
$\left\{ T_z\right\}_{z\in\Sigma_\mu}$ is defined on a complex sector
$\Sigma_\mu$, we say that it satisfies $L^{p}(w)-L^{q}(w)$
off-diagonal estimates on balls in $\Sigma_\mu$ if \eqref{w:off:B-B},
\eqref{w:off:C-B} and \eqref{w:off:B-C} hold for
$z\in\Sigma_\mu$ with $t$ replaced by $|z|$ in the
righthand terms. We denote this by
$T_z\in\offwmu{p}{q}{\mu}$.
\end{defi}
We give some basic properties of off-diagonal estimates on balls as a
series of lemmas taken from~\cite[Section~2.2]{auscher-martell07}. The
first follows immediately by real interpolation, the second by
H\"older's inequality, and the third by duality.
\begin{lemma} \label{lemma:interpolate}
Given $1\leq p_i \le q_i\leq \infty$, $i=1,\,2$, if $T_t \in
\offw{p_1}{q_1}$, and $T_t : L^{p_2}(w)\rightarrow
L^{q_2}(w)$ is uniformly bounded, then $T_t\in \offw{p_\theta}{q_\theta}$,
$0<\theta<1$, where
\[ \frac{1}{p_\theta} = \frac{\theta}{p_1}+\frac{1-\theta}{p_2},
\qquad
\frac{1}{q_\theta} = \frac{\theta}{q_1}+\frac{1-\theta}{q_2}. \]
\end{lemma}
\begin{lemma} \label{lemma:nested}
If $1\leq p\leq p_1\leq q_1\leq q\leq \infty$, then
\[ \offw{p}{q}\subset \offw{p_1}{q_1}. \]
\end{lemma}
\begin{lemma} \label{lemma:duality}
If for some $1\leq p\leq q\leq \infty$, $T_t \in \offw{p}{q}$, and the
operators $T_t$ are linear, then $T_t^* \in \offw{q'}{p'}$. (Here $T_t^*$
is the dual operator for
the inner product $\int_{\mathbb{R}^n} f\,g\,dw$.)
\end{lemma}
\begin{lemma}[{\cite[Theorem~2.3]{auscher-martell07}}] \label{lemma:unif-comp} \
\begin{enumerate}
\item If $T_t \in \offw{p}{p}$, $1\leq p \leq \infty$, then $T_t :
L^p(w) \rightarrow L^p(w)$ is uniformly bounded.
\item If $1\leq p \leq q \leq r \leq \infty$, $T_t\in \offw{q}{r}$,
$S_t\in \offw{p}{q}$, then $T_t\circ S_t \in \offw{p}{r}$.
\end{enumerate}
\end{lemma}
\begin{remark}
If $p<q$, then $T_t \in \offw{p}{q}$ does not guarantee that $T_t$ is
bounded from $L^p(w)$ to $L^q(w)$.
\end{remark}
\begin{remark}\label{rem:complex-offd}
Since complex sectors $\Sigma_\mu$, $0\le\mu<\pi$, are closed under
addition, the proof of Lemma~\ref{lemma:unif-comp} extends to
give off-diagonal estimates on complex sectors $\offwmu{p}{q}{\mu}$.
\end{remark}
\begin{defi} \label{defn:full-offdiagonal}
Given $1\leq p \leq q \leq \infty$, a family of operators
$\{T_t\}$ satisfies full
off-diagonal estimates from $L^p(w)$ to $L^q(w)$, denoted by
\[ T_t \in \fullw{p}{q}, \]
if there exist constants $C,\,c,\,\theta>0$
such that given any closed sets $E,\,F$,
\[ \|T_t(f\bigchi_E)\bigchi_F\|_{L^q(w)} \leq Ct^{-\theta}
e^{-\frac{c d^2(E,F)}{t}}\|f\bigchi_E\|_{L^p(w)}. \]
\end{defi}
The connection between full off-diagonal estimates and off-diagonal
estimates on balls is given by the following lemma from \cite[Section~3.1]{auscher-martell07}.
\begin{lemma} \label{lemma:full}
Given $1\leq p \leq q \leq \infty$:
\begin{enumerate}
\item if $T_t\in \fullw{p}{q}$, then
$T_t : L^p(w) \rightarrow L^q(w)$ is uniformly bounded;
\item $T_t\in \fullw{p}{p}$ if and only if $T_t \in \offw{p}{p}$.
\end{enumerate}
\end{lemma}
\medskip
The importance of off-diagonal estimates is that they will let us
prove weighted norm inequalities for the operators we are
interested in. To do so we will make repeated use of
two results first proved in~\cite{auscher-martell07b}; however, we
will use special cases of these results as given in~\cite[Theorems~2.2
and~2.4]{auscher-martell06}.
\begin{theor} \label{theorem:2.2}
Given $w\in A_2$ and $1\leq p_0<q_0\le \infty$, let $T$
be a sublinear operator acting on $L^{p_0}(w)$, $\{\mathcal{A}_r\}_{r>0}$ a family of operators acting from a
subspace $\mathcal{D}$ of $L^{p_0}(w)$ into $L^{p_0}(w)$, and $S$ an operator from
$\mathcal{D}$ into the space of measurable functions on ${\mathbb{R}^n}$. Suppose that
for every $f\in \mathcal{D}$ and ball $B$ with radius $r$,
\begin{equation} \label{eqn:2.1}
\bigg(\Xint-_B |T(I-\mathcal{A}_r)f|^{p_0}\,dw\bigg)^{1/p_0}
\leq
\sum_{j\geq 1} g(j) \bigg(\Xint-_{2^{j+1}B}
|Sf|^{p_0}\,dw\bigg)^{1/p_0}
\end{equation}
and
\begin{equation} \label{eqn:2.2}
\bigg(\Xint-_B |T\mathcal{A}_{r}f|^{q_0}\,dw\bigg)^{1/q_0}
\leq
\sum_{j\geq 1} g(j) \bigg(\Xint-_{2^{j+1}B}
|Tf|^{p_0}\,dw\bigg)^{1/p_0},
\end{equation}
where $\sum g(j) < \infty$. Then for every $p$, $p_0<p<q_0$, and
weights
\[ v\in A_{p/p_0}(w)\cap RH_{(q_0/p)'}(w), \]
there is a constant
$C$ such that for all $f\in \mathcal{D}$,
\[ \|Tf\|_{L^p(v\,dw)} \leq C\|Sf\|_{L^p(v\,dw)}. \]
\end{theor}
\begin{remark}
In Theorem~\ref{theorem:2.2} and Theorem~\ref{theorem:2.4} below,
the case $q_0 = \infty$ is understood in the sense that the
$L^{q_0}(w)$-average is replaced by the essential supremum. Also
in Theorem~\ref{theorem:2.2}, if $q_0=\infty$, then the condition on $v$
becomes $v\in A_{p/p_0}$.
\end{remark}
\begin{theor} \label{theorem:2.4}
Given $w\in A_2$ with doubling order $D$, and $1\leq p_0<q_0\leq \infty$, let $T : L^{q_0}(w) \rightarrow
L^{q_0}(w)$ be a sublinear operator, $\{\mathcal{A}_r\}_{r>0}$ a family of
linear operators acting from
$L_c^\infty$ into $L^{q_0}(w)$.
Suppose that
for every ball $B$ with radius $r$, $f\in L_c^\infty$ with
$\mathop{\rm supp}(f)\subset B$ and $j\geq 2$,
\begin{equation} \label{eqn:2.4}
\bigg(\Xint-_{C_j(B)} |T(I-\mathcal{A}_r)f|^{p_0}\,dw\bigg)^{1/p_0}
\leq
g(j) \bigg(\Xint-_{B}
|f|^{p_0}\,dw\bigg)^{1/p_0}.
\end{equation}
Suppose further that for every $j\geq 1$,
\begin{equation} \label{eqn:2.5}
\bigg(\Xint-_{C_j(B)} |\mathcal{A}_{r}f|^{q_0}\,dw\bigg)^{1/q_0}
\leq
g(j) \bigg(\Xint-_{B}
|f|^{p_0}\,dw\bigg)^{1/p_0},
\end{equation}
where $\sum g(j)2^{Dj} < \infty$. Then for all $p$,
$p_0<p<q_0$, there exists a constant $C$ such that for all $f\in L^\infty_c$,
\[ \|Tf\|_{L^p(w)} \leq C\|f\|_{L^p(w)}. \]
\end{theor}
\section{Off-diagonal estimates for the semigroup $e^{-tL_w}$}
\label{section:semigroup}
In this section we consider off-diagonal estimates for the semigroup associated to $L_w$. Throughout
this and subsequent sections, let $w\in A_2$ and $A\in
{\mathcal E}_n(w,\lambda,\Lambda)$ be fixed. Our goal is to characterize the
set of pairs $(p,q)$, $p\leq q$, such that these operators are in
$\offw{p}{q}$. By Theorem~\ref{thm:L2-gaffney} we have that
\[ e^{-tL_w} \in \fullw{2}{2} \subset
\offw{2}{2}. \]
We will show that in the $(p,q)$-plane this set contains a right
triangle: see Figure~\ref{figure:triangle}.
\begin{figure}[h]
\caption{$(p,q)$ such that $e^{-tL_w} \in \offw{p}{q}$}
\label{figure:triangle}
{\begin{pgfpicture}{0cm}{-0.5cm}{6cm}{6cm}
\definecolor{mygray}{gray}{0.70}
\color{mygray}
\pgfmoveto{\pgfxy(1,1)}
\pgflineto{\pgfxy(5,5)}
\pgflineto{\pgfxy(1,5)}
\pgflineto{\pgfxy(1,1)}
\pgffill
\color{black}
\pgfmoveto{\pgfxy(1,1)}
\pgflineto{\pgfxy(5,5)}
\pgflineto{\pgfxy(1,5)}
\pgflineto{\pgfxy(1,1)}
\pgfstroke
\pgfmoveto{\pgfxy(-0.1,0.4)}
\pgflineto{\pgfxy(5.5,0.4)}
\pgfstroke
\pgfmoveto{\pgfxy(0.4,-0.1)}
\pgflineto{\pgfxy(0.4,5.5)}
\pgfstroke
\pgfcircle[fill]{\pgfxy(1.7,3.8)}{1pt}
\pgfputat{\pgfxy(1.9,3.8)}{\pgfbox[left,center]{$(p,q)$}}
\pgfputat{\pgfxy(5.4,0.05)}{\pgfbox[center,center]{$p$}}
\pgfputat{\pgfxy(0.1,5.3)}{\pgfbox[center,center]{$q$}}
\end{pgfpicture}}
\end{figure}
Let $\widetilde \mathcal{J}(L_w)\subset [1,\infty]$ be the set of all
exponents $p$ such that $e^{-t\,L_w} : L^p(w)\rightarrow L^p(w)$ is
uniformly bounded for all $t>0$. By Theorem~\ref{thm:L2-gaffney} and Lemma~\ref{lemma:full}, $2\in
\widetilde \mathcal{J}(L_w)$, and if it contains more than one point, then by interpolation $\widetilde
\mathcal{J}(L_w)$ is an interval. The set of pairs $(p,q)$ such that $e^{-t\,L_w}
\in \offw{p}{q}$ is completely characterized by the next result.
\begin{prop}\label{prop:J}
There exists an interval $\mathcal{J}(L_w) \subset [1,\infty]$ such that
$p,q \in \mathcal{J}(L_w)$ if and only if $e^{-t\,L_w}\in \offw{p}{q}$.
Furthermore, $\mathcal{J}(L_w)$ has the following properties:
\begin{enumerate}
\item $\mathcal{J}(L_w)\subset \widetilde\mathcal{J}(L_w)$;
\item $\mathop{\rm Int}\mathcal{J}(L_w)=\mathop{\rm Int}\widetilde\mathcal{J}(L_w)$;
\item if $p_-(L_w)$ and $p_+(L_w)$ are respectively the left and right endpoints
of $\mathcal{J}(L_w)$, then $p_-(L_w)\le (2^*_w)'$ and $p_+(L_w)\ge 2^*_w$, where $2^*_w$ is as in Theorem~\textup{\ref{thm:wtd-poincare}}.
In particular, $2 \in \mathop{\rm Int}(\mathcal{J}(L_w))$.
\end{enumerate}
\end{prop}
\begin{remark} \label{remark:interval}
The smaller the value of $r_w$, the better our bounds on the size of
the set
$\mathcal{J}(L_w)$. In the limiting case when $w\in A_1$, we have that
$p_-(L_w)\le \frac{2n}{n+2}$ and $p_+(L_w)\ge \frac{2n}{n-2}$. These values
should be compared to the estimates in \cite[Corollary~4.6]{auscher07} for the
non-degenerate case, which corresponds to taking $w=1$.
\end{remark}
We get two corollaries to Proposition~\ref{prop:J}. The first gives
us weighted off-diagonal estimates.
\begin{corol}\label{corollary-weighted-offd}
Let $p_-(L_w)<p\le q<p_+(L_w)$. If
$v\in A_{p/p_-(L_w)}(w)\cap RH_{(p_+(L_w)/q)'}(w)$, then
$e^{-tL_w}\in \mathcal{O}\big(L^{p}(v\,dw)\rightarrow L^{q}(v\,dw)\big)$.
\end{corol}
\begin{proof}
By Proposition~\ref{prop:J}, if $p_-(L_w)<p\le q<p_+(L_w)$, then
$e^{-tL_w} \in \offw{p}{q}$. Therefore,
by~\cite[Proposition~2.6]{auscher-martell07}, if $v\in
A_{p/p_-(L_w)}(w)\cap RH_{(p_+(L_w)/q)'}(w)$, then we have that
$e^{-tL_w}\in \mathcal{O}\big(L^{p}(v\,dw) \rightarrow L^{q}(v\,dw)\big)$.
\end{proof}
As our second corollary we get
off-diagonal estimates for the holomorphic extension of the semigroup.
\begin{corol} \label{cor:holomorphic} For any $\nu$ with
$0<\nu<\arctan\left(\frac{\lambda}{\sqrt{\Lambda^2-\lambda^2}}\right)$,
and for any $p\leq q$ such that $e^{-tL_w} \in \offw{p}{q}$, we have,
for all $m\in \mathbb{N}\cup\{ 0\}$,
$(zL_w)^me^{-zL_w} \in \offwmu{p}{q}{\nu}$.
\end{corol}
\begin{proof}
This follows from~\cite[Theorem~4.3]{auscher-martell07} and the fact
that, by
Theorem~\ref{thm:L2-gaffney}, for these values of $z$, $e^{-zL_w} \in \fullw{2}{2}$.
\end{proof}
\bigskip
\begin{proof}[Proof of Proposition~\textup{\ref{prop:J}}]
Fix $2<q<2^*_w$ (if $w\in A_1$ we let $q=2^*_w=2^*$). We will show
that $e^{-t\,L_w}\in\offw{2}{q}$. Given this, we also
have that $e^{-t\,L_w}\in\offw{q'}{2}$. For if $L_w^*$ is the
adjoint of $L_w$ (with respect to $L^2(w)$), then
$L_w^*f=-w^{-1}\,\mathop{\rm div}(A^*\,\nabla f)$ and the same estimates hold for
$L_w^*$. Hence,
$e^{-t\,L_w^*}\in\offw{2}{q}$, and so by Lemma~\ref{lemma:duality},
$e^{-t\,L_w}\in\offw{q'}{2}$. Since $e^{-tL_w}$ is a semigroup,
by Lemma~\ref{lemma:unif-comp} we have that
$e^{-t\,L_w}\in\offw{q'}{q}$. Therefore, by \cite[Proposition
4.1]{auscher-martell07}, we have that there exists an interval $\mathcal{J}(L_w)$
and Properties~(1) and~(2) hold. Moreover, we have that
$[q',q] \subset \mathcal{J}(L_w)$, so if we let $q\to 2^*_w$, then
we immediately get Property~(3).
\medskip
It therefore remains to prove that $e^{-t\,L_w}\in\offw{2}{q}$. We
first show \eqref{w:off:B-B}. Fix $B$ and for brevity write $r=r(B)$
and $C_j=C_j(B)$. By our choice of $q$ the Poincar\'e
inequality~\eqref{w-Poincare} holds. Moreover, as we noted above
$e^{-t\,L_w}, \,\sqrt{t}\,\nabla e^{-t\,L_w}\in\offw{2}{2}$; we may
assume that the same exponents $\theta_1$, $\theta_2$ hold for both
operators. We thus get that
\begin{align*}
&\left(\aver{B} |e^{-t\,L_w}( \bigchi_B \, f)|^{q}\,dw\right)^{\frac1q}
\\
&\qquad
\le
\big|\big(e^{-t\,L_w}( \bigchi_B \, f)\big)_{B,w} \big|+
\left(\aver{B} \big|e^{-t\,L_w}( \bigchi_B \, f)(x) - \big(e^{-t\,L_w}( \bigchi_B \, f)\big)_{B,w} \big|^{q}\,dw(x)\right)^{\frac 1 q}
\\
&\qquad
\lesssim
\left(\aver{B} |e^{-t\,L_w}( \bigchi_B \, f) |^{2}\,dw\right)^{\frac12}
+
r\,\left(\aver{B} |\nabla\,e^{-t\,L_w}( \bigchi_B \, f) |^{2}\,dw\right)^{\frac12}
\\
&\qquad
\lesssim
\left(1+\frac{r}{\sqrt{t}}\right)\,\dec{\frac{r}{\sqrt{t}}}^{\theta_2} \,\left(\aver{B} |f|^{2}\,dw\right)^{\frac12}
\\
&\qquad
\lesssim
\dec{\frac{r}{\sqrt{t}}}^{1+\theta_2} \,\left(\aver{B} |f|^{2}\,dw\right)^{\frac12}.
\end{align*}
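In the last step we used the elementary estimate, valid for every $s>0$ and applied here with $s=r/\sqrt{t}$ (and below with $s=2^j\,r/\sqrt{t}$),
\[ 1+s \le 2\max\{s,1\} \le 2\,\dec{s}, \qquad\text{so that}\qquad \Big(1+\frac{r}{\sqrt{t}}\Big)\,\dec{\frac{r}{\sqrt{t}}}^{\theta_2} \lesssim \dec{\frac{r}{\sqrt{t}}}^{1+\theta_2}. \]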
\medskip
The proof that \eqref{w:off:C-B} holds follows by nearly the same
argument:
\begin{align*}
&\left(\aver{B} |e^{-t\,L_w}( \bigchi_{C_j} \, f)|^{q}\,dw\right)^{\frac1q}
\\
&\qquad
\le
\big|\big(e^{-t\,L_w}( \bigchi_{C_j} \, f)\big)_{B,w} \big|+
\left(\aver{B} \big|e^{-t\,L_w}( \bigchi_{C_j} \, f)(x) - \big(e^{-t\,L_w}( \bigchi_{C_j} \, f)\big)_{B,w} \big|^{q}\,dw(x)\right)^{\frac 1 q}
\\
&\qquad
\lesssim
\left(\aver{B} |e^{-t\,L_w}( \bigchi_{C_j} \, f) |^{2}\,dw\right)^{\frac12}
+
r\,\left(\aver{B} |\nabla\,e^{-t\,L_w}( \bigchi_{C_j} \, f) |^{2}\,dw\right)^{\frac12}
\\
&\qquad
\lesssim
2^{j\,\theta_1}\,
\left(1+\frac{r}{\sqrt{t}}\right)\,\dec{\frac{2^j\,r}{\sqrt{t}}}^{\theta_2} \,\expt{-\frac{c\,4^{j}\,r^2}{t}}\,\bigg(\aver{C_j} |f|^{2}\,dw\bigg)^{\frac12}
\\
&\qquad
\lesssim
2^{j\,\theta_1}\,
\dec{\frac{2^j\,r}{\sqrt{t}}}^{1+\theta_2} \,\expt{-\frac{c\,4^{j}\,r^2}{t}}\,\left(\aver{C_j} |f|^{2}\,dw\right)^{\frac12}.
\end{align*}
\medskip
Finally, to prove that \eqref{w:off:B-C} holds we use a
covering argument. Fix $j\geq 2$; then we can cover the annulus $C_j$
by a collection of balls $\{B_k\}_{k=1}^N$, $r(B_k)=2^{j-2}\,r$, with
centers $x_{B_k}\in C_j$. The number of balls required, $N$, depends only on the dimension.
For any such ball, since $dw$ is a doubling measure we have that
\begin{align*}
&\left(\aver{B_k} |e^{-t\,L_w}( \bigchi_{B} \, f)|^{q}\,dw\right)^{\frac1q}
\\
&\quad
\le
\big|\big(e^{-t\,L_w}( \bigchi_{B} \, f)\big)_{B_k,w} \big|+
\left(\aver{B_k} \big|e^{-t\,L_w}( \bigchi_B \, f)(x) - \big(e^{-t\,L_w}( \bigchi_{B} \, f)\big)_{B_k,w} \big|^{q}\,dw(x)\right)^{\frac 1 q}
\\
&\quad
\lesssim
\left(\aver{B_k} |e^{-t\,L_w}( \bigchi_{B} \, f) |^{2}\,dw\right)^{\frac12}
+
r(B_k)\,\left(\aver{B_k} |\nabla\,e^{-t\,L_w}( \bigchi_{B} \, f) |^{2}\,dw\right)^{\frac12}
\\
&\quad
\lesssim
\left(\aver{2^{j+2}\,B\setminus 2^{j-1}\,B} |e^{-t\,L_w}( \bigchi_{B}
\, f) |^{2}\,dw\right)^{\frac12} \\
& \quad \qquad +
2^j\,r\,\left(\aver{2^{j+2}\,B\setminus 2^{j-1}\,B} |\nabla\,e^{-t\,L_w}( \bigchi_{B} \, f) |^{2}\,dw\right)^{\frac12}.
\end{align*}
If $j\ge 3$, then $2^{j+2}\,B\setminus 2^{j-1}\,B=C_{j+1}\cup
C_j\cup C_{j-1}$; to estimate the last two terms we use the fact that $e^{-t\,L_w}, \,\sqrt{t}\,\nabla
e^{-t\,L_w}\in\offw{2}{2}$ and apply \eqref{w:off:B-C} with $p=q=2$ in each
annulus $C_i$, $j-1\leq i \leq j+1$. (These annuli have comparable
measure since $dw$ is a doubling measure, so we can divide the average up
into three averages.) If $j=2$, then
$2^{4}\,B\setminus 2\,B=C_{3}\cup C_2\cup (4\,B\setminus 2\,B)$. On
$C_3$ and $C_2$ we argue as before using \eqref{w:off:B-C}. On $4\,B\setminus 2\,B$ we
apply \cite[Lemma 6.1]{auscher-martell07}. (We note that in the
notation there $\widehat C_1(B)=4\,B\setminus 2\,B$.)
If we combine all of
these estimates, we get that for every $j\ge 2$,
\begin{align*}
\left(\aver{B_k} |e^{-t\,L_w}( \bigchi_{B} \, f)|^{q}\,dw\right)^{\frac1q}
&
\lesssim
2^{j\,\theta_1}\,
\left(1+\frac{2^j\,r}{\sqrt{t}}\right)\,\dec{\frac{2^j\,r}{\sqrt{t}}}^{\theta_2}\,\expt{-\frac{c\,4^{j}\,r^2}{t}} \,\left(\aver{B} |f|^{2}\,dw\right)^{\frac12}
\\
&\qquad
\lesssim
2^{j\,\theta_1}\,
\dec{\frac{2^j\,r}{\sqrt{t}}}^{1+\theta_2}\,\expt{-\frac{c\,4^{j}\,r^2}{t}} \,\left(\aver{B} |f|^{2}\,dw\right)^{\frac12}.
\end{align*}
Since $C_j\subset\bigcup_k B_k$, we can sum in $k$ to get
\begin{multline*}
\left(\aver{C_j(B)}|e^{-t\,L_w}( \bigchi_B \, f) |^{q}\,dw\right)^{\frac 1 q}
\lesssim
\sum_{k=1}^N
\left(\aver{B_k} |e^{-t\,L_w}( \bigchi_{B} \, f)|^{q}\,dw\right)^{\frac1q}
\\
\lesssim
2^{j\,\theta_1}\,
\dec{\frac{2^j\,r}{\sqrt{t}}}^{1+\theta_2}\,\expt{-\frac{c\,4^{j}\,r^2}{t}} \,\left(\aver{B} |f|^{2}\,dw\right)^{\frac12}.
\end{multline*}
This completes the proof that $e^{-tL_w} \in \offw{2}{q}$.
\end{proof}
\section{The functional calculus}
\label{section:functional}
In this section we show that the operator $L_w$ has an $L^p(w)$ holomorphic
functional calculus. As we discussed in Section~\ref{section:prelim}
above, we know already that if $\varphi$ is a bounded holomorphic
function on $\Sigma_\mu$, $\mu \in (\vartheta,\pi)$, then
$\varphi(L_w)$ is a bounded operator on $L^2(w)$.
Recall that for any $\mu \in
(\vartheta ,\pi )$, we say that $\varphi \in
\mathcal{H}_{0}^{\infty}(\Sigma_\mu)$ if for some $c,\,s>0$
\begin{equation} \label{eqn:phi-decay}
|\varphi(z)| \leq c|z|^s(1+|z|)^{-2s}, \qquad z \in \Sigma_\mu.
\end{equation}
We say that $L_w$ has a bounded holomorphic functional calculus
on $L^p(w)$ if for any such $\varphi$,
\begin{equation} \label{eq:fcX}
\Vert \varphi (L_{w})f\Vert _{L^p(w)}\leq C\,\Vert \varphi \Vert _{\infty
}\,\Vert f\Vert _{L^p(w)},\qquad f\in L^p(w)\cap L^{2}(w),
\end{equation}
where $C$ depends only on $p$, $w$, $\vartheta $ and $\mu $ (but not on the
decay of $\varphi $). By a standard density argument, \eqref{eq:fcX} implies
that $\varphi (L_{w})$ extends to a bounded operator on all of
$L^p(w)$. Furthermore, we then have that
this inequality holds if $\varphi$ is any bounded holomorphic function. For the
details of this extension, see~\cite{haase06,mcintosh86}.
\begin{prop} \label{prop:B-K:weights}
Let $p_-(L_w)<p<p_+(L_w)$
and $\mu \in (\vartheta ,\pi )$. Then for any $\varphi \in
\mathcal{H}_{0}^{\infty}(\Sigma_\mu)$,
\begin{equation} \label{eq:fcw}
\| \varphi (L_{w})f\| _{L^{p}(w)}\leq C\,\| \varphi \| _{\infty
}\,\| f\| _{L^{p}(w)},
\end{equation}%
with $C$ independent of $\varphi $ and $f$. Hence, $L_{w}$ has a bounded
holomorphic functional calculus on $L^{p}(w)$.
Moreover, if $v \in A_{p/p_-(L_w)}(w)\cap
RH_{(p_+(L_w)/p)'}(w) $ then $L_{w}$ also has a bounded
holomorphic functional calculus on $L^{p}(v\,dw)$:
\begin{equation}
\| \varphi (L_{w})f\| _{L^{p}(v\,dw)}\leq C\,\| \varphi \| _{\infty
}\,\| f\| _{L^{p}(v\,dw)}, \label{eq:fc-vw}
\end{equation}%
with $C$ independent of $\varphi $ and $f$.
\end{prop}
\begin{proof}
For brevity, let $p_-=p_-(L_w)$ and $p_+=p_+(L_w)$. By density it
will suffice to assume that $f\in L_c^\infty$. Fix
$\varphi\in \H_0^\infty(\Sigma_\mu)$; by linearity we may assume
that $\left\| \varphi \right\| _{\infty }=1$.
We divide the proof into two steps. We first obtain \eqref{eq:fcw}
for $p_-<p<2$ by applying Theorem~\ref{theorem:2.4} and following the ideas in \cite{auscher07}. To do so, we will
pick $q_0=2$ and $p_0>p_-$ arbitrarily close to $p_-$. In the second
step, using some ideas from \cite{auscher-martell06}, we will use Theorem~\ref{theorem:2.2} to get \eqref{eq:fc-vw};
in particular this yields \eqref{eq:fcw} for every $2<p<p_+$ by
taking $v\equiv 1$. To apply Theorem~\ref{theorem:2.2} we will
choose $p_0>p_-$ arbitrarily close to $p_-$ and $q_0<p_+$
arbitrarily close to $p_+$. We will also use the fact that $\varphi(L_w)$ is
bounded on $L^{p_0}(w)$; this follows from the first step and choosing
$p_-<p_0<2$.
\medskip
To apply Theorem~\ref{theorem:2.4}, fix $p_-<p_0<p<2$ and let $q_0=2$, $T=\varphi (L_{w}) $, and
\begin{equation} \label{eqn:Atm}
\mathcal{A}_{r}f( x) =\big( I-( I-e^{-r^{2}L_{w}})^{m}\big) f( x),
\end{equation}%
where $m$ is a positive integer that will be chosen below.
We first show that inequality~\eqref{eqn:2.5} holds.
By Proposition \ref{prop:J} we have that $e^{-tL_{w}}\in
\offw{p_0}{2}$. Since
\begin{equation}\label{eqn:Asum}
\mathcal{A}_r=
\sum_{k=1}^{m}\binom{m}{k} (-1)^{k+1} e^{-kr^{2}L_{w}},
\end{equation}
$\Upsilon \left( \frac{r}{\sqrt{k}t}\right) \leq \sqrt{m}\Upsilon \left(
\frac{r}{t}\right) $, and $\exp \left( -\frac{c}{k}\frac{4^{j}r^{2}}{t^{2}}%
\right) \leq \exp \left( -\frac{c}{m}\frac{4^{j}r^{2}}{t^{2}}\right)
$,
for each fixed $m$ and $1\le k\le m$, by Proposition~\ref{prop:J} it follows that
\begin{equation}\label{eqn:A-od}
\mathcal{A}_{r}\in \offw{p}{q},\qquad\text{for all }p_-(L_w)<p\le q<p_+(L_w).
\end{equation}
In particular, we have that $\mathcal{A}_{r}\in \offw{p_0}{2}$. Thus, given
any ball $B$ with radius $r$, if
$\mathop{\rm supp}(f)\subset B$, then for all $j\geq 1$,
\begin{equation}\label{eqn:2.5Ar}
\bigg(
\Xint-%
_{C_{j}\left( B\right) }\left\vert \mathcal{A}_{r}f\right\vert ^{2}dw\bigg) ^{1/2}
\lesssim 2^{j\theta _{1}}\Upsilon \left(
2^{j}\right) ^{\theta _{2}}e^{-c4^{j}}\left(
\Xint-%
_{B}\left\vert f\right\vert ^{p_0}dw\right) ^{1/p_0}.
\end{equation}%
This establishes \eqref{eqn:2.5} with $g\left( j\right) =C\,2^{j\left( \theta
_{1}+\theta _{2}\right) }e^{-c4^{j}}$, for in this case we have that
$$\sum_{j\geq1}2^{j(\theta _{1}+\theta _{2}+D) }e^{-c4^{j}}<\infty ,$$
where $D$ is
the doubling order of $w$.
\medskip
We next prove that \eqref{eqn:2.4} holds. Since
$\varphi(z)(1-e^{-r^2z})^m\in
\H_0^\infty(\Sigma_{\min\{\mu,\pi/2\}})$,
by the functional calculus representation~\eqref{eqn:L2-holo-rep}
we have that
\begin{equation*}
\varphi \left( L_{w}\right) \left( I-\mathcal{A}_{r }\right)
f= \int_{\Gamma }e^{-z\,L_{w}}f\,\eta (z)\,dz,
\end{equation*}
where $\Gamma=\partial \Sigma_{\frac{\pi}{2}-\theta}$, with $0<\vartheta <\theta <\nu <\min\{\mu,\pi/2\}$, and we choose $\theta$ so that the hypotheses of Corollary~\ref{cor:holomorphic} are
satisfied for $z\in \Gamma$.
Moreover, we have the estimate
\begin{equation*}
\left\vert \eta \left( z\right) \right\vert
\lesssim \frac{r^{2m}}{\left\vert
z\right\vert ^{m+1}};
\end{equation*}
see \cite[Section 5.1]{auscher07} for details.
We can now argue as follows: given a ball $B$ with radius $r$, for
each $j\geq 2$, by Minkowski's inequality and
Corollary~\ref{cor:holomorphic} (since $p_0 \in \mathop{\rm Int} \mathcal{J}(L_w)$),
\begin{align}
&\bigg(
\Xint-%
_{C_{j}\left( B\right) }\left\vert \varphi \left( L_{w}\right) \left( I-%
\mathcal{A}_{r\left( B\right) }\right) f\right\vert ^{p_0}dw\bigg) ^{1/p_0}
\notag \\
& \qquad \qquad =\bigg(
\Xint-%
_{C_{j}\left( B\right) }\left\vert \int_{\Gamma }e^{-z\,L_{w}}f\,\eta
(z)\,dz\right\vert ^{p_0}dw\bigg) ^{1/p_0} \notag \\
& \qquad \qquad \lesssim\int_{\Gamma }\bigg(
\Xint-%
_{C_{j}\left( B\right) }\left\vert e^{-z\,L_{w}}f\,\right\vert ^{p_0}dw\bigg)
^{1/p_0}\frac{r^{2m}}{\left\vert z\right\vert ^{m+1}}\,\left\vert
dz\right\vert \notag \\
&\qquad \qquad \lesssim \bigg(
\Xint-%
_{B}\left\vert f\,\right\vert ^{p_0}dw\bigg) ^{1/p_0}\int_{\Gamma }\frac{r^{2m}%
}{\left\vert z\right\vert ^{m+1}}2^{j\theta _{1}}\Upsilon \left( \frac{%
2^{j}r}{\sqrt{\left\vert z\right\vert }}\right) ^{\theta _{2}}e^{-c\frac{%
r^{2}}{\left\vert z\right\vert }4^{j}}\,\left\vert dz\right\vert \notag \\
& \qquad \qquad = \bigg(
\Xint-%
_{B}\left\vert f\,\right\vert ^{p_0}dw\bigg) ^{1/p_0}2^{j\left( \theta
_{1}-2m\right) }\int_{0}^{\infty }\sigma^{2m}\Upsilon \left(
\sigma\right) ^{\theta _{2}}e^{-c\sigma ^{2}}\,\frac{d\sigma}{%
\sigma} \notag \\
&\qquad \qquad \lesssim 2^{j\left( \theta _{1}-2m\right) }\bigg(
\Xint-%
_{B}\left\vert f\,\right\vert ^{p_0}dw\bigg) ^{1/p_0}; \label{est-down}
\end{align}
the final inequality holds (i.e., the integral in $\sigma$ converges)
provided $2m> \theta _{2}$.
Moreover, if we choose $2m>\theta _{1}+D$, we
have that \eqref{eqn:2.4} holds with $g\left( j\right) =C\,2^{\left( j-1\right)
\left( \theta _{1}-2m\right) }$ and
\begin{equation*}
\sum_{j\geq 2}g\left( j\right) 2^{jD}\lesssim
\sum_{j\geq 2}2^{j\left( \theta _{1}+D-2m\right) }<\infty .
\end{equation*}
We have shown that inequalities \eqref{eqn:2.4} and \eqref{eqn:2.5}
hold, and so by Theorem \ref{theorem:2.4}
inequality~\eqref{eq:fcw} holds for all $p$ such that $
p_{-}<p\le 2$.
\medskip
We will now apply Theorem~\ref{theorem:2.2} to show
that~\eqref{eq:fc-vw} holds for $p_-<p<p_+$. (Inequality~\eqref{eq:fcw}
then follows for $2<p<p_+$ if we take $v\equiv 1$.)
Fix $p$, $p_-<p<p_+$ and $v \in A_{p/p_-}(w)\cap
RH_{(p_+/p)'}(w) $. By the openness properties of the $A_q$ and $RH_s$
classes there exist $p_0$, $q_0$ such that
\[ p_-<p_0<\min\{p,2\}\le p<q_0<p_+,
\qquad
v \in A_{p/p_0}(w)\cap
RH_{(q_0/p)'}(w). \]
Let $T=\varphi \left(
L_{w}\right) $, $\mathcal{A}_{r}=I-( I-e^{-r^{2}L_{w}}) ^{m}$, $S=I$,
and fix the above values of $p_{0}$ and $q_0$. By the previous argument
we have that $\varphi \left( L_{w}\right) $ is bounded on $L^{p_{0}}\left(
w\right) $.
We first show that~\eqref{eqn:2.1} holds. Fix a ball $B$ and
decompose $f$ as
\begin{equation}\label{decomp-f}
f=\sum_{j\geq 1}f\chi _{C_{j}(B)}:=\sum_{j\geq 1}f_{j}.
\end{equation}
Then, by the
same functional calculus argument as given above, we have that for
each $j$,
\begin{align*}
&\bigg(
\Xint-%
_{B}\left\vert \varphi ( L_{w}) ( I-\mathcal{A}_{r})
f_{j}\right\vert ^{p_{0}}dw\bigg) ^{\frac{1}{p_{0}}} \\
&\qquad =\bigg(
\Xint-%
_{B}\left\vert \int_{\Gamma }e^{-z\,L_{w}}f_{j}\,\eta (z)\,dz\right\vert
^{p_{0}}dw\bigg) ^{\frac{1}{p_{0}}} \\
& \qquad \lesssim \int_{\Gamma }\bigg(
\Xint-%
_{B}\left\vert e^{-z\,L_{w}}f_{j}\,\right\vert ^{p_{0}}dw\bigg) ^{\frac{1}{p_{0}}}%
\frac{r^{2m}}{\left\vert z\right\vert ^{m+1}}\,\left\vert dz\right\vert \\
&\qquad \lesssim \bigg(
\Xint-%
_{C_{j}\left( B\right) }\left\vert f\,\right\vert ^{p_{0}}dw\bigg) ^{\frac{1%
}{p_{0}}}2^{j\left( \theta _{1}-2m\right) }\int_{\Gamma }\bigg( \frac{2^{j}r%
}{\sqrt{\left\vert z\right\vert }}\bigg) ^{2m}\Upsilon \bigg( \frac{2^{j}r}{%
\sqrt{\left\vert z\right\vert }}\bigg)^{\theta _{2}}e^{-\frac{c4^{j}r^{2}}{%
\left\vert z\right\vert }}\,\frac{\left\vert dz\right\vert }{\left\vert
z\right\vert } \\
&\qquad \lesssim 2^{j\left( \theta _{1}-2m\right) }\bigg(
\Xint-%
_{C_{j}\left( B\right) }\left\vert f\,\right\vert ^{p_{0}}dw\bigg) ^{\frac{1%
}{p_{0}}};
\end{align*}
the last inequality holds provided $2m>\theta _{2}$. Hence, since $2^{j+1}B\supset C_{j}$, by
Minkowski's inequality we have (since the sum $\sum f_{j}$ is finite for
$f\in L_{c}^{\infty }$)
\begin{align*}
\bigg(
\Xint-%
_{B}\left\vert \varphi ( L_{w}) ( I-\mathcal{A}_{r }) f\right\vert ^{p_{0}}dw\bigg) ^{\frac{1}{p_{0}}}
& \leq \sum_{j\geq 1} \bigg(
\Xint-%
_{B}\left\vert \varphi ( L_{w}) ( I-\mathcal{A}_{r}) f_{j}\right\vert ^{p_{0}}dw\bigg) ^{\frac{1}{p_{0}}} \\
&\lesssim \sum_{j\geq 1}2^{j\left( \theta _{1}-2m\right) }\bigg(
\Xint-%
_{2^{j+1} B }\left\vert f\,\right\vert ^{p_{0}}dw\bigg) ^{%
\frac{1}{p_{0}}}.
\end{align*}
This establishes \eqref{eqn:2.1} with $g(j) =C\, 2^{j\left( \theta
_{1}-2m\right) }$. If we take $2m>\max \left\{ \theta
_{1},\theta _{2}\right\} $, then $\sum g(j)<\infty$.
We now show that~\eqref{eqn:2.2} holds. Fix a ball $B$ and $j\geq 1$.
Since $\mathcal{A}_{r}\in \offw{p_0}{q_0}$ (see \eqref{eqn:A-od}),
\begin{equation*}
\bigg( \Xint-_{B}\big|\mathcal{A}_{r }\big(
\chi_{C_j(B)}\varphi (L_{w}) f\big)\big| ^{q_{0}}dw\bigg) ^{\frac{1}{q_0}}\lesssim 2^{j\theta
_{1}}\Upsilon \left( 2^{j}\right) ^{\theta _{2}}e^{-c4^{j}}\bigg(
\Xint-%
_{C_{j}(B)}\left\vert \varphi \left( L_{w}\right) f\right\vert ^{p_{0}}dw
\bigg) ^{\frac{1}{p_0}}.
\end{equation*}%
Therefore, since $\varphi \left( L_{w}\right) $ and $
\mathcal{A}_{r}$ commute, by Minkowski's inequality we obtain
\begin{equation*}
\left(
\Xint-%
_{B}\left\vert \varphi \left( L_{w}\right) \mathcal{A}_{r}
f\right\vert ^{q_{0}}dw\right) ^{\frac{1}{q_0}}
\lesssim \sum_{j\geq 1}2^{j\left(
\theta _{1}+\theta _{2}\right) }e^{-c4^{j}}\bigg(
\Xint-%
_{C_{j}(B)}\left\vert \varphi \left( L_{w}\right) f\right\vert ^{p_{0}}dw
\bigg) ^{\frac{1}{p_0}}.
\end{equation*}%
This establishes \eqref{eqn:2.2} with $g(j)=C\,2^{j\left(
\theta _{1}+\theta _{2}\right) }e^{-c4^{j}}$; again, $\sum
g(j)<\infty$. Therefore, our proof is complete.
\end{proof}
\section{Square function estimates for the semigroup}
\label{section:square-function}
In this section we prove $L^p(w)$ norm inequalities for the vertical square
function associated to the semigroup $e^{-tL_w}$:
\[ g_{L_{w}}f( x)
=\bigg( \int_{0}^{\infty }\left\vert \left(
tL_{w}\right) ^{1/2}e^{-tL_{w}}f(x) \right\vert ^{2}\frac{dt}{t}%
\bigg) ^{1/2}. \]
\begin{prop} \label{prop:square-function}
Let $p_-(L_w) < p < p_+(L_w)$. Then
\begin{equation} \label{eqn:square-function-unwtd}
\left\Vert g_{L_{w}}f\right\Vert
_{L^{p}\left( w\right) }\approx \left\Vert f\right\Vert _{L^{p}\left(
w\right)}.
\end{equation}
Conversely, if for some $p$ the equivalence~\eqref{eqn:square-function-unwtd} holds,
then $p \in \widetilde \mathcal{J}(L_w)$---i.e., the interior of the interval on
which~\eqref{eqn:square-function-unwtd} holds is
$(p_-(L_w),p_+(L_w))$.
Moreover, if $v \in A_{p/p_-(L_w)}(w)\cap
RH_{(p_+(L_w)/p)'}(w)$, then
\begin{equation} \label{eqn:square-function-wt}
\left\Vert g_{L_{w}}f\right\Vert
_{L^{p}(v\,dw) }\approx \left\Vert f\right\Vert _{L^{p}(v\,dw)}.
\end{equation}
\end{prop}
We note that the upper bounds in the previous result could be
obtained by combining Proposition \ref{prop:B-K:weights} with the
operator theory methods developed in \cite{couling96}.
To reach a
wider audience we present a self-contained harmonic analysis
proof. We will use an auxiliary Hilbert space related to square
functions, following the approach in~\cite{auscher-martell06}. Let
$\mathbb{H}$ denote the Hilbert space
$L^{2}\left( \left( 0,\infty \right) ,\frac{dt}{t}\right) $ with norm
\begin{equation*}
\normH{h}
=\left( \int_{0}^{\infty }\left\vert h\left( t\right) \right\vert ^{2}%
\frac{dt}{t}\right) ^{\frac{1}{2}}.
\end{equation*}
In particular, we have that
\begin{equation*}
g_{L_{w}}f(x) =%
\normH{\varphi(L_w,\cdot)f(x)}%
\end{equation*}
where ${\varphi }\left( z,t\right) =\left( tz\right) ^{1/2}e^{-tz}$.
Furthermore, we define $L^p_{\mathbb{H}}(w)$ to be the space of
$\mathbb{H}$-valued functions with the norm
\[ \|h\|_{L^p_{\mathbb{H}}(w)}
= \bigg(\int_{{\mathbb{R}^n}} \normH{h(x,\cdot)}^p\,dw(x) \bigg)^{\frac{1}{p}}. \]
The following lemma lets us extend scalar-valued inequalities
to $\mathbb{H}$-valued inequalities. For a proof, see \cite[Lemma~7.4]{auscher-martell06} and the references given there.
\begin{lemma} \label{lemma-7.4}
Given a Borel measure $\mu $ on
$\mathbb{R}^{n}$,
let $\mathcal{D}$ be a subspace of $\mathcal{M}$, the space of
measurable functions in $\mathbb{R}^{n}$, and let $S,\,T$ be linear
operators from $\mathcal{D}$ into $\mathcal{M}$. Fix $1\leq
p\leq q<\infty$ and suppose there exists
$C_{0}>0 $ such that for all $f\in \mathcal{D}$,
\begin{equation*}
\left\Vert Tf\right\Vert _{L^{q}\left( \mu \right) }\leq C_{0}\sum_{j\geq
1}\alpha _{j}\left\Vert Sf\right\Vert _{L^{p}\left( F_{j},\mu \right) },
\end{equation*}
where the $F_{j}$ are measurable subsets of $\mathbb{R}^{n}$ and $\alpha
_{j}\geq 0$. Then there is an $\mathbb{H}$-valued inequality with the same
constant: for all $f:\mathbb{R}^{n}\times \left( 0,\infty \right)
\longrightarrow \mathbb{C}$ such that for almost all $t>0$, $f\left( \cdot
,t\right) \in \mathcal{D}$,
\begin{equation*}
\left\Vert Tf\right\Vert _{L_{\mathbb{H}}^{q}\left( \mu \right) }\leq
C_{0}\sum_{j\geq 1}\alpha _{j}\left\Vert Sf\right\Vert _{L_{\mathbb{H}%
}^{p}\left( F_{j},\mu \right) }.
\end{equation*}
\end{lemma}
The extension of a linear operator $T$ on $\mathbb{C}$-valued functions to
$\mathbb{H}$-valued functions is defined for $x\in \mathbb{R}^n$ and $t>0$ by $(T
h)(x,t)= T\big( h(\cdot,t)\big)(x)$, that is, $t$ can be considered
as a parameter and $T$ acts only on the variable in $\mathbb{R}^n$.
\begin{proof}[Proof of Proposition~\textup{\ref{prop:square-function}}]
We shall first prove the upper bound inequalities. We begin by showing that
the upper bound inequality in \eqref{eqn:square-function-unwtd} holds
for $p=2$. Indeed, since
$\varphi(z)=z^{1/2}e^{-z}\in\H^\infty_0(\Sigma_\mu)$, it follows from
\eqref{eqn:holo-sqfe} that we have the bound
%
\[
\left\Vert g_{L_{w}}f\right\Vert _{L^{2}(w) } \lesssim \left\Vert
f\right\Vert _{L^{2}(w) }. \]
For brevity, let $p_-=p_-(L_w)$ and $p_+=p_+(L_w)$. As in previous
proofs, we divide our proof into two steps. We will first
prove the upper bound in \eqref{eqn:square-function-unwtd} for $p_-<p<2$ by applying
Theorem~\ref{theorem:2.4}. Fix $p_{-}<p<q_{0}=2$, and let
$\mathcal{A}_{r}=I-( I-e^{-r^{2}L_{w}}) ^{m}$, where $m$ will be
chosen below. Notice that, by \eqref{eqn:A-od}, $\mathcal{A}_{r}$ is bounded on $L^{q_0}(w)$ for each $m$.
Fix $f\in L_c^\infty$; the result for general $f\in L^p(w)$ then follows by a density argument.
We have that $(tL_{w}) ^{1/2}e^{-tL_{w}}( I-\mathcal{A}_{r})
f=\varphi( L_{w},t) f$, where
\[ {\varphi }( z,t) {=}(tz) ^{1/2}e^{-tz}( 1-e^{-r^{2}z})
^{m}. \]
Moreover, since $\varphi(\cdot,t)\in \mathcal{H}_{0}^{\infty }(\Sigma_{\min\{\mu,\pi/2\}})$,
by the functional calculus representation~\eqref{eqn:L2-holo-rep}
we have that
\begin{equation*}
\left( tL_{w}\right) ^{1/2}e^{-tL_{w}}\left( I-\mathcal{A}_{r}\right)
f=\int_{\Gamma}\eta \left( z,t\right)
e^{-zL_{w}}f~dz,
\end{equation*}%
where $\Gamma=\partial \Sigma_{\frac{\pi}{2}-\theta}$, with $0<\vartheta <\theta <\nu <\min\{\mu,\pi/2\}$, and we choose $\theta$ so that the hypotheses of Corollary~\ref{cor:holomorphic} are
satisfied for $z\in \Gamma$.
Moreover, we have the estimate (see \cite{auscher07, auscher-martell06})
\begin{equation*}
\left\vert \eta \left( z,t\right) \right\vert
\lesssim \frac{t^{\frac{1}{2}}r^{2m}}{\left(
\left\vert z\right\vert +t\right) ^{m+\frac{3}{2}}},
\qquad z\in \Gamma;
\end{equation*}
therefore,
\begin{equation} \label{eqn:eta-bound}
\normH{\eta(z,\cdot)}%
=\left( \int_{0}^{\infty }\left\vert \eta \left( z,t\right) \right\vert ^{2}~%
\frac{dt}{t}\right) ^{1/2}\lesssim \frac{r^{2m}}{\left\vert z\right\vert
^{m+1}}.
\end{equation}
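For completeness, the computation behind \eqref{eqn:eta-bound}: by the pointwise bound on $\eta(z,t)$ above,
\[ \int_0^\infty |\eta(z,t)|^2\,\frac{dt}{t} \lesssim r^{4m}\int_0^\infty \frac{dt}{(|z|+t)^{2m+3}} = \frac{r^{4m}}{(2m+2)\,|z|^{2m+2}}, \]
and taking square roots gives \eqref{eqn:eta-bound}.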
Now let $f\in L_{c}^{\infty }$ with $\mathop{\rm supp}\left( f\right)
\subset B$. For $j\geq 2$, we have
\begin{align}
& \bigg( \Xint-_{C_j(B)}
\left\vert g_{L_{w}}\left( {I-}\mathcal{A}_{r}\right) f\right\vert
^{p}dw\bigg) ^{1/p} \notag \\
&\qquad \qquad =\bigg( \Xint-_{C_j(B)}%
\bigg\vert \bigg( \int_{0}^{\infty }
\bigg\vert \int_{\Gamma _{\frac{\pi }{2}%
-\theta }}\eta ( z,t) e^{-zL_{w}}f\, dz\bigg\vert ^{2}\frac{dt}{t}%
\bigg) ^{1/2}\bigg \vert ^{p}dw\bigg) ^{1/p} \notag \\
& \qquad \qquad \leq \bigg( \Xint-_{C_j(B)}%
\bigg\vert \int_{\Gamma _{\frac{\pi }{2}-\theta }}\vert
e^{-zL_{w}}f\vert
\normH{\eta(z,\cdot)}%
d\vert z\vert \bigg\vert ^{p}dw\bigg) ^{1/p} \notag \\
&\qquad \qquad \lesssim \int_{\Gamma _{\frac{\pi }{2}-\theta }}
\bigg(\Xint-_{C_j(B)}%
\vert e^{-zL_{w}}f\vert ^{p}dw\bigg) ^{1/p}\frac{r^{2m}}{%
\vert z\vert ^{m+1}}\,d\vert z\vert \notag \\
&\qquad \qquad \lesssim 2^{j\theta _{1}}
\bigg( \Xint-_{B}%
\vert f\vert ^{p}\,dw\bigg) ^{1/p}\int_{\Gamma _{\frac{\pi }{2}%
-\theta }}\Upsilon \bigg( \frac{2^{j}r}{\sqrt{\vert z\vert }}%
\bigg) ^{\theta _{2}}e^{-\frac{c4^{j}r^{2}}{\vert z\vert }}\frac{%
r^{2m}}{\vert z\vert ^{m}}\,\frac{d\vert z\vert }{%
\vert z\vert } \notag \\
& \qquad \qquad \lesssim 2^{j\theta _{1}}4^{-mj}\bigg(
\Xint-_{B}%
\vert f\vert ^{p}\,dw\bigg) ^{1/p}; \label{eqn:gLw-01}
\end{align}
in the second inequality we applied (\ref{eqn:eta-bound}) and
the off-diagonal estimates for $%
e^{-zL_{w}}$
from Corollary \ref{cor:holomorphic}, and the last inequality holds provided $2\,m>\theta_2$. Thus, if we take
$m>\theta _{1}+D$, where $D$ is the doubling order of $w$, the
operator $g_{L_{w}}$ satisfies (\ref{eqn:2.4}) in
Theorem~\ref{theorem:2.4} with
$g\left( j\right) =C\,2^{j(\theta _{1}-2m)}$. Since
we already established~\eqref{eqn:2.5}
in~\eqref{eqn:2.5Ar} with
$g\left( j\right) =C\,2^{j\left( \theta _{1}+\theta _{2}\right)
}4^{-mj}$, the hypotheses of Theorem \ref%
{theorem:2.4} are satisfied if $m>\theta _{1}+\theta _{2}+D$.
Therefore,
for each $p_{-}<p< 2$ there
exists a constant $C$ such that
\begin{equation}
\left\Vert g_{L_{w}}f\right\Vert _{L^{p}(w) }\leq C\left\Vert
f\right\Vert _{L^{p}(w) }. \label{ineq:gps}
\end{equation}%
\bigskip
In the second part of the proof we will show that if $p_-<p<p_+$ and
$v \in A_{p/p_-}(w)\cap RH_{(p_+/p)'}(w)$, then the upper bound
inequality in \eqref{eqn:square-function-wt} holds. If
we take $v\equiv 1$,
then we immediately get~\eqref{eqn:square-function-unwtd}.
To do so, first note that if we fix $p$ and $v$, then by the openness
properties of weights there exist $p_0$, $q_0$ such that
\[ p_{-}<p_{0}<\min\{p,2\} \leq \max\{p,2\}<q_{0}<p_{+} \]
and $v \in A_{p/p_0}(w)\cap
RH_{(q_0/p)'}(w)$.
We will apply Theorem~\ref{theorem:2.2} with $T=g_{L_w}$, $S=I$ and
$\mathcal{D}=L^{p_0}(w)$ (again, note that by \eqref{eqn:A-od}, $\mathcal{A}_r$
is bounded on $L^{p_0}(w)$). We first prove that
inequality~\eqref{eqn:2.1} holds. For each $j\geq 1$, let
$f_j=f\chi_{C_j(B)}$; then we can argue exactly as we did in the proof
of~\eqref{eqn:gLw-01}, exchanging the roles of $B$ and $C_j(B)$, to get
\[ \bigg( \Xint-_B |g_{L_w}(I-\mathcal{A}_r)f_j|^{p_0} \,dw \bigg)^{\frac{1}{p_0}}
\lesssim 2^{j\theta_1}4^{-mj}
\bigg(\Xint-_{2^{j+1}B} |f|^{p_0}\,dw \bigg)^{\frac{1}{p_0}}. \]
Inequality~\eqref{eqn:2.1} follows if we sum over all $j$ and take
$g(j)= 2^{j\theta_1}4^{-mj}$.
We will now show that inequality~\eqref{eqn:2.2} holds. To do so, we need to
prove a vector-valued version of a key inequality. By Proposition~\ref{prop:J}, given a
ball $B$ with radius $r$, then for all $j\geq 1$, $g$ with $\mathop{\rm supp}(g)\subset C_j(B)$,
and $1\leq k \leq m$,
\begin{equation} \label{eqn:scalar-offdiag}
\bigg(\Xint-_B |e^{-kr^2L_w}g|^{q_0}\,dw\bigg)^{\frac{1}{q_0}}
\leq C_02^{j(\theta_1+\theta_2)}e^{-\alpha 4^j}
\bigg(\Xint-_{C_j(B)} |g|^{p_0}\,dw\bigg)^{\frac{1}{p_0}}.
\end{equation}
We now apply Lemma~\ref{lemma-7.4} with $S=I$ and $T :
L^{p_0}(w)\rightarrow L^{q_0}(w)$ given by
\[ Tg = (C_02^{j(\theta_1+\theta_2)}e^{-\alpha 4^j})^{-1}
\frac{ w(2^{j+1}B)^{\frac{1}{p_0}}}{w(B) ^{\frac{1}{q_0}}}
\chi_B e^{-kr^2L_w}(g\chi_{C_j(B)}). \]
This yields the $\mathbb{H}$-valued extension of~\eqref{eqn:scalar-offdiag}: for
all $g\in L^{p_0}_{\mathbb{H}}(w)$ with $\mathop{\rm supp}(g(\cdot,t))\subset C_j(B)$, $t>0$, we have that
\begin{equation} \label{eqn:vector-offdiag}
\bigg(\Xint-_B \normH{e^{-kr^2L_w}g(x,\cdot)}^{q_0}\,dw\bigg)^{\frac{1}{q_0}}
\leq C_02^{j(\theta_1+\theta_2)}e^{-\alpha 4^j}
\bigg(\Xint-_{C_j(B)}
\normH{g(x,\cdot)}^{p_0}\,dw\bigg)^{\frac{1}{p_0}}.
\end{equation}
Given an arbitrary $g\in L^{p_0}_{\mathbb{H}}(w)$, decompose it as
\[ g(x,t) = \sum_{j\geq 1} g(x,t)\chi_{C_j(B)}(x) = \sum_{j\geq 1}
g_j(x,t). \]
Then inequality~\eqref{eqn:vector-offdiag} yields
\begin{multline} \label{eqn:almost-there}
\bigg(\Xint-_B
\normH{e^{-kr^2L_w}g(x,\cdot)}^{q_0}\,dw\bigg)^{\frac{1}{q_0}}
\leq \sum_{j\geq 1}
\bigg(\Xint-_{B}
\normH{e^{-kr^2L_w} g_j(x,\cdot)}^{q_0}\,dw\bigg)^{\frac{1}{q_0}} \\
\lesssim \sum_{j\geq 1} 2^{j(\theta_1+\theta_2)}e^{-\alpha 4^j}
\bigg(\Xint-_{2^{j+1}B}
\normH{g(x,\cdot)}^{p_0}\,dw\bigg)^{\frac{1}{p_0}}.
\end{multline}
Define $g(x,t)=(tL_w)^{1/2}e^{-tL_w}f(x)$. Then
$g_{L_w}f(x) = \normH{g(x,\cdot)}$; by our choice of $p_0$ and the
first step of the proof we have that $g\in L^{p_0}_{\mathbb{H}}(w)$.
Moreover, since for each $t>0$,
$(tL_w)^{1/2}e^{-tL_w}$ and $e^{-kr^2L_w}$ commute,
\[ g_{L_w}(e^{-kr^2L_w}f)(x) = \normH{e^{-kr^2L_w} g(x,\cdot)}. \]
We can now use~\eqref{eqn:Asum} and~\eqref{eqn:almost-there} to get that
\begin{align*} \label{eqn:gLw-02}
\bigg( \Xint-_{B}%
|g_{L_{w}}\mathcal{A}_{r}f| ^{q_{0}}dw\bigg) ^{\frac{1}{q_{0}}}
&
\lesssim
\bigg(\Xint-_B
\normH{e^{-kr^2L_w}g(x,\cdot)}^{q_0}\,dw\bigg)^{\frac{1}{q_0}} \\
& \leq \sum_{j\geq 1}2^{j\left( \theta _{1}+\theta _{2}\right)
}e^{- \alpha 4^{j}}\left(
\Xint-_{2^{j+1}B}
\left\vert g_{L_{w}}f\right\vert ^{p_0}dw\right) ^{1/p_0}.
\end{align*}
This proves \eqref{eqn:2.2} with $g\left( j\right) =C\,2^{j\left( \theta _{1}+\theta
_{2}\right) }e^{-c4^{j}}$. Therefore, by Theorem~\ref{theorem:2.2}
we get that
\[ \|g_{L_w}f\|_{L^p(v\,dw)} \lesssim \|f\|_{L^p(v\,dw)}. \]
\bigskip
It remains to show the reverse inequalities. We will prove the
lower bound in~\eqref{eqn:square-function-wt}; then the lower bound
in~\eqref{eqn:square-function-unwtd} holds if we take $v\equiv 1$. Fix
$p_-<p<p_+$ and $v \in A_{p/p_-(L_w)}(w)\cap RH_{(p_+(L_w)/p)'}(w)$. By
the duality properties of weights~\cite[Lemma 4.4]{auscher-martell07b}
and since $p_{\pm}(L_w)'=p_{\mp} (L_w^*)$, where $L_w^*$ is the adjoint (on $L^2(w)$) of $L_w$,
\begin{equation}
v^{1-p'}\in A_{p'/p_-(L_w^*)}(w)\cap RH_{(p_+(L_w^*)/p')'}(w).
\label{eq:adede}
\end{equation}
We now proceed as in the proof of~\cite[Theorem 7.3]{auscher-martell06}. Given $F\in L^p_{\mathbb{H}}(v\,dw)\cap L^2_{\mathbb{H}}(w)$ and $x\in \mathbb{R}^n$, we set
\begin{equation}\label{defi-TLw}
T_{L_w} F(x)
=\int_0^\infty (t\,L_w)^{1/2}\,e^{-t\,L_w}F(x,t )\,\frac{dt}{t}.
\end{equation}
Recall that $(t\,L_w)^{1/2}\,e^{-t\,L_w}F(x,t )=(t\,L_w)^{1/2}\,e^{-t\,L_w}(F(\cdot,t ))(x)$. Hence, $T_{L_w}$ maps
$\mathbb{H}$-valued functions to $\mathbb{C}$-valued functions. For $h\in
L^{p'}(v^{1-p'}\,dw)\cap L^2(w)$ with
$\|h\|_{L^{p'}(v^{1-p'}\,dw)}=1$, we have that
\begin{align*}
\Big|\int_{\mathbb{R}^n} T_{L_w}F\, \overline h\, dw \Big|
&=
\Big|\int_{\mathbb{R}^n} \int_0^\infty F(x,t) \, \overline{ (t\,L_w^*)^{1/2}\,e^{-t\,L_w^*}h(x )}\,\frac{dt}{t}dw(x)
\Big|
\\
&\le
\int_{\mathbb{R}^n} \normH{F(x, \cdot)} \, g_{L_w^*} h(x) \, dw(x)
\\
&\lesssim
\|F\|_{L^p_{\mathbb{H}}(v\,dw)}
\|g_{L_w^*} h\|_{L^{p'}(v^{1-p'}\,dw)}
\\
&\lesssim
\|F\|_{L^p_{\mathbb{H}}(v\,dw)},
\end{align*}
where the last estimate uses the fact that $g_{L_w^*}$ is bounded on
$L^{p'}(v^{1-p'}\,dw)$. This follows from the upper bound in
\eqref{eqn:square-function-wt} (with $L_w^*$ in place of $L_w$), which we
proved above, and
\eqref{eq:adede}. Taking the supremum over all such functions $h$ and using
a standard density argument, we conclude that $T_{L_w}$ is
bounded from $L^p_{\mathbb{H}}(v\,dw)$ to $L^p(v\,dw)$.
Next, given $f\in L^p(v\,dw)\cap L^2(w)$, if we define $F( x,t) =(tL_{w}) ^{1/2}e^{-tL_{w}}f( x)$, then
$F \in L^p_{\mathbb{H}}(v\,dw)\cap L^2_{\mathbb{H}}(w)$ since
$\|F\|_{L^p_{\mathbb{H}}(v\,dw)} =
\|g_{L_w}f\|_{L^p(v\,dw)}$ and analogously for $L^2(w)$. Also, by the $L^2(w)$ functional calculus we have that
\begin{equation} \label{eqn:reverse-identity}
f(x)=2\int_{0}^{\infty }\left( tL_{w}\right) ^{1/2}e^{-tL_{w}}F\left( x,t\right) \frac{dt}{t}=2T_{L_w}F(x).
\end{equation}
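At the level of the scalar functional calculus, \eqref{eqn:reverse-identity} rests on the elementary identity, valid for $z$ in the open right half-plane,
\[ 2\int_0^\infty (tz)^{1/2}e^{-tz}\cdot (tz)^{1/2}e^{-tz}\,\frac{dt}{t} = 2\int_0^\infty z\,e^{-2tz}\,dt = 1. \]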
Therefore,
\[ \|f\|_{L^p(v\,dw)} = 2\|T_{L_w} F\|_{L^p(v\,dw)}
\lesssim \|F\|_{L^p_{\mathbb{H}}(v\,dw)} =
\|g_{L_w}f\|_{L^p(v\,dw)}, \]
and this completes the proof of~\eqref{eqn:square-function-wt}.
\medskip
To finish the proof of Proposition~\ref{prop:square-function} we
need to show that the equivalence of norms in~\eqref{eqn:square-function-unwtd} implies that the
semigroup is uniformly bounded. However, this follows immediately
from the definition of $g_{L_w}$ and the semigroup property: for any $s>0$,
\[ g_{L_w}(e^{-sL_w}f)(x)
= \bigg(\int_0^\infty |L_w^{1/2} e^{-(s+t)L_w}f(x)|^2\,dt\bigg)^{1/2}
\leq g_{L_w}f(x). \]
Combining this pointwise bound with the equivalence~\eqref{eqn:square-function-unwtd}, applied first to $e^{-sL_w}f$ and then to $f$, yields $\|e^{-sL_w}f\|_{L^p(w)} \lesssim \|g_{L_w}(e^{-sL_w}f)\|_{L^p(w)} \le \|g_{L_w}f\|_{L^p(w)} \lesssim \|f\|_{L^p(w)}$, uniformly in $s>0$. This completes the proof.
\end{proof}
\bigskip
We conclude this section by proving a version of
Proposition~\ref{prop:square-function} for the ``adjoint'' of a discrete square
function. We will need this estimate in the proof of
Proposition~\ref{prop:reverseRiesz} below.
\begin{prop}\label{prop:dsicrtee-SF}
Define the holomorphic function $\psi$ on the sector $\Sigma_{\pi /2}$
by
\begin{equation}\label{eqpsiw}
\psi(z)= \frac1{\sqrt{\pi}}\int_1^\infty z\, e^{-tz} \, \frac{dt}{\sqrt t}.
\end{equation}
If $p_-(L_w)<p<p_+(L_w)$, then for any sequence of functions
$\{\beta_k\}_{k\in \mathbb{Z}}$,
\begin{equation}\label{eq23w}
\Big\| \sum_{k\in \mathbb{Z}} \psi(4^kL_w)\ \beta_k \Big\|_{L^p(w)}
\lesssim
\bigg\|\Big(\sum_{k\in \mathbb{Z}} |\beta_k|^2\Big)^{\frac12}
\bigg\|_{L^p(w)}.
\end{equation}
\end{prop}
\begin{proof}
By duality and since $p_\pm(L_w)'=p_{\mp}(L^*_w)$, it will suffice to show that for every
$p_-(L_w^*)<p<p_+(L_w^*)$,
\begin{equation}\label{discrete-SF}
\bigg\| \Big(\sum_{k\in \mathbb{Z}} |\overline{\psi}(4^kL_w^*)h|^2 \Big)^{\frac12}\bigg\|_{L^p(w)}
\lesssim
\|h\|_{L^p(w)}.
\end{equation}
The function $\psi$ satisfies $|\psi(z)| \le C|z|^{1/2} e^{-c|z|}$
uniformly on subsectors $\Sigma_{\mu}$, $0\le \mu < {\frac \pi 2}$.
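For completeness, a quick check of this bound: for $z\in\Sigma_\mu$ with $0\le\mu<\pi/2$ we have $\mathrm{Re}\,z\ge |z|\cos\mu$, and so
\[ |\psi(z)| \le \frac{|z|}{\sqrt{\pi}}\int_1^\infty e^{-t\,\mathrm{Re}\,z}\,\frac{dt}{\sqrt{t}} \le \frac{|z|}{\sqrt{\pi}}\,\min\Big\{\sqrt{\frac{\pi}{\mathrm{Re}\,z}},\;\frac{e^{-\mathrm{Re}\,z}}{\mathrm{Re}\,z}\Big\} \le C_\mu\,\min\big\{|z|^{1/2},\,e^{-|z|\cos\mu}\big\}, \]
and the two bounds combine to give $|\psi(z)|\le C|z|^{1/2}e^{-c|z|}$.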
Thus the operator on the left-hand side of~\eqref{discrete-SF} is a
discrete analog of the square function $g_{L_w^*}$, obtained by changing the continuous times $t$ to the discrete
times $4^k$ and $z^{1/2}e^{-z}$ to $\overline{\psi}(z)$. Since $\overline{\psi}(z)$ has the same quantitative
properties as $z^{1/2}e^{-z}$ (decay at 0 and at
infinity), we can repeat the previous argument and obtain the desired
estimates as in
the proof of Proposition~\ref{prop:square-function}.
\end{proof}
\begin{remark}
In Proposition~\ref{prop:dsicrtee-SF} we can also get $L^p(v\,dw)$
estimates, but in the proof of
Proposition~\ref{prop:reverseRiesz} below we will only need the
unweighted estimates. Further details and the precise statements are
left to the interested reader.
\end{remark}
\section{Reverse inequalities}
\label{section:reverse}
In this section we will prove $L^p(w)$ estimates of the form
$\|L_w^{1/2}f\|_{L^p(w)} \le C\|\nabla f\|_{L^p(w)}$, which
generalize the $L^2(w)$ Kato estimates in
Theorem~\ref{theorem:degen-kato}. These are
referred to as reverse inequalities since if we replace $f$ by
$L_w^{-1/2}f$, then formally we get a reverse-type inequality for the
Riesz transform: $\|f\|_{L^p(w)} \le C\|\nabla L_w^{-1/2}
f\|_{L^p(w)}$.
Since these estimates involve the gradient, in proving them we will
rely (implicitly and explicitly) on the weighted Poincar\'e
inequality~\eqref{w-Poincare}. This will require an additional
assumption on $p$ when $p<2$. To state it simply, define
\[ (p_-(L_w))_{w,*} = \frac{n\,r_w\,p_-(L_w)}{n\,r_w+p_-(L_w)} < p_-(L_w). \]
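For orientation: when $w\in A_1$, so that $r_w=1$ (the limiting case considered in Remark~\ref{remark:interval}), this exponent reduces to the usual lower Sobolev exponent
\[ (p_-(L_w))_{w,*} = \frac{n\,p_-(L_w)}{n+p_-(L_w)}. \]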
\begin{prop}\label{prop:reverseRiesz}
Let
$\max\{r_w,(p_-(L_w))_{w,*}\}<p<
p_+(L_w)$. Then for all $f\in \mathcal{S}$,
\begin{equation} \label{eq:reverseRiesz}
\|L_w^{1/2}f\|_{L^p(w)} \le C\, \|\nabla f\|_{L^p(w)},
\end{equation}
with $C$ independent of $f$.
Furthermore, if $\max\{r_w,p_-(L_w)\}<p<p_+(L_w)$ and
$v \in A_{p/\max\{r_w,p_-(L_w)\}}(w)\cap
RH_{({p_+(L_w)}/{p})'}(w)$, then for all $ f\in\mathcal{S}$,
\begin{equation}\label{eq:reverseRiesz-vdw}
\|L_w^{1/2}f\|_{L^p(v\,dw)} \le C\, \|\nabla f\|_{L^p(v\,dw)}.
\end{equation}
\end{prop}
\begin{remark}
The quantity $\max\{r_w,(p_-(L_w))_{w,*}\}$ can be equal to either term.
For instance, it equals $r_w$ if $p_-(L_w) \le n'r_w$. From
Proposition~\ref{prop:J} we know that $p_-(L_w) <
(2_w^*)'=\frac{2\,n\,r_w}{n\,r_w+2}$, but this only implies the previous
inequality for some values of $n$ and $r_w$.
\end{remark}
\medskip
\begin{proof}
As before, let $p_-=p_-(L_w)$ and $p_+=p_+(L_w)$. Fix
$p$, $\max\big\{r_w,(p_-)_{w,*}\big\}<p<2$, and $f\in\mathcal{S}$. We will
first show that
\begin{equation}\label{eq:weak-type-reverse}
\|L_w^{1/2}f\|_{L^{p,\infty}(w)} \lesssim \|\nabla f\|_{L^p(w)}.
\end{equation}
First note that since $p>r_w$, $w\in A_p$. Therefore, given $\alpha>0$ we can form the
Calder\'on-Zygmund decomposition given in \cite[Lemma
6.6]{auscher-martell06}: there exist a collection of balls $\{B_i\}_i$,
smooth functions $\{b_i\}_i$ and a function $g\in L^1_{\rm loc}(w)$
such that
\begin{equation}\label{eqcsds1}
f= g+\sum_i b_i
\end{equation}
and the following properties hold:
\begin{equation}\label{eqcsds2}
|\nabla g(x)| \le C\alpha
\quad
\text{for $w$-a.e. } x,
\end{equation}
\begin{equation}\label{eqcsds3}
\mathop{\rm supp}(b_i) \subset B_{i}
\quad\text{and}\quad
\int_{B_i} |\nabla
b_i|^p\, dw \le C\alpha^p w(B_i),\end{equation}
\begin{equation}\label{eqcsds4}
\sum_i w(B_i) \le \frac{C}{\alpha^{p}} \int_{\mathbb{R}^n} |\nabla f|^p\, dw ,
\end{equation}
\begin{equation}\label{eqcsds5}
\sum_i \bigchi_{B_i} \le N,
\end{equation}
\begin{equation}\label{eqcsds6}
\Big(\aver{B_i} |b_i|^q\,dw\Big)^\frac1q
\lesssim
C\,\alpha\, r(B_{i})\quad
\text{for }1\le q\le p^*_w,
\end{equation}
where $C$ and $N$ depend only on $n$, $p$, $q$ and the doubling
constant of $w$.
To prove~\eqref{eq:weak-type-reverse} we
will prove the corresponding weak-type estimates with $f$ replaced by $g$ and
$b_i$. For $g$, we use the $L^2(w)$ Kato estimate
\eqref{eqn:degen-kato1}, \eqref{eqcsds2}, and the fact that $p<2$ to
get
\begin{multline*}
w(\{|L_w^{1/2}g| > \alpha/3 \})
\lesssim
\frac{1}{\alpha^{2}}\int_{\mathbb{R}^n} |L_w^{1/2} g|^{2}\, dw
\lesssim
\frac{1}{\alpha^{2}}\int_{\mathbb{R}^n} |\nabla g|^{2}\, dw
\lesssim
\frac{1}{\alpha^p}\int_{\mathbb{R}^n} |\nabla g|^p\, dw
\\
\lesssim
\frac{1}{\alpha^p}\int_{\mathbb{R}^n} |\nabla f|^p\, dw +
\frac{1}{\alpha^p}\int_{\mathbb{R}^n}
\Big|\sum_i\nabla b_i\Big|^p\, dw
\lesssim
\frac{1}{\alpha^p}\int_{\mathbb{R}^n} |\nabla f|^p\, dw,
\end{multline*}
where the last estimate follows from \eqref{eqcsds5},
\eqref{eqcsds3}, and \eqref{eqcsds4}.
To prove a weak-type estimate for $L_w^{1/2}(\sum_i b_i)$, let $r_i=2^k$ if $2^k \le r(B_{i})
< 2^{k+1}$. Then for all $i$, $r_{i}\sim r(B_{i})$. Write
$$
L_w^{1/2}
=
\frac 1 {\sqrt \pi}\,\int_0^{r_i^2} L_w e^{-t \, L_w} \, \frac{dt}{\sqrt t}+ \frac 1 {\sqrt \pi}\,\int_{r_i^2}^\infty L_w e^{-t\,
L_w} \, \frac{dt}{\sqrt t}
=
T_i+U_i;
$$
then we have that
\begin{align*}
w\Big(\Big\{\Big|\sum_i L_w^{1/2} b_i\Big|>\frac{2\,\alpha}{3}\Big\}\Big)
&
\le
w\Big(\bigcup_i\, 4\,B_i\Big)+ w\Big(\Big\{\Big|\sum_i
U_ib_i\Big|>\frac{\alpha}{3}\Big\}\Big)
\\
&\hskip1.5cm
+ w\Big( \Big(\mathbb{R}^n\setminus\bigcup_i \, 4\,B_i \Big) \bigcap \Big\{
\Big|\sum_i T_ib_i\Big|
>\frac{\alpha}{3}\Big\}\Big)
\\
&
\lesssim
\frac1{\alpha^p}\,\int_{\mathbb{R}^n} |\nabla f|^p\,dw + I_1+I_2,
\end{align*}
where the last inequality follows from~\eqref{eqcsds4}.
We first estimate $I_2$. Since $p> (p_-)_{w,*}$, we have $p_w^*>((p_-)_{w,*})_w^*=p_-$, and so we can choose $q\in
\mathcal{J}(L_w)$ such that \eqref{eqcsds6} is satisfied. By Corollary~\ref{cor:holomorphic},
$t\,L_w\,e^{-t\, L_w}\in \offw{q}{q}$, and so
\begin{align*}
I_2
&
\lesssim
\frac{1}{\alpha}\,\sum_i\sum_{j\ge 2} \int_{C_j(B_i)} |T_ib_i| \, dw \\
&
\lesssim
\frac{1}{\alpha}\,\sum_i\sum_{j\ge 2}
w(2^{j}\,B_i)\,\int_0^{r_i^2}\aver{C_{j}(B_{i})} |t\, L_w\, e^{-t \,L_w}
b_i |\, dw\,\frac{dt}{t^{3/2}}
\\
&\lesssim
\frac{1}{\alpha}\,\sum_i\sum_{j\ge 2} 2^{j\,D}\,w(B_i)\,\int_0^{r_i^2}
2^{j\,\theta_{1}}\, \dec{\frac{2^j\, r_i}{\sqrt t}}^{\theta_{2}} \,
\expt{-\frac{c\, 4^j\, r_{i}^2}{t}}\,\frac{dt}{t^{3/2}}
\, \bigg(\aver{B_{i}} |b_{i}|^q\, dw\bigg)^{\frac1q}
\\
&\lesssim
\sum_i\sum_{j\ge 2} 2^{j\,D}\,e^{-c\,4^j}\,w(B_i) \\
& \lesssim
\sum_i w(B_i) \\
& \lesssim \frac{1}{\alpha^p}\,\int_{\mathbb{R}^n}
|\nabla f|^p \, dw,
\end{align*}
where we have used \eqref{eqcsds6} and \eqref{eqcsds4}, and $D$ is the doubling order of $dw$.
We will now estimate $I_1$. For $q$ as above, by
Proposition~\ref{prop:B-K:weights} we have an $L^q(w)$
functional calculus for $L_w$. Therefore, we can write $U_i$ as
$r_i^{-1}\psi(r_i^2L_w)$ with $\psi$ defined by \eqref{eqpsiw}. Let
$\beta_{k}= \sum_{i\, : \, r_i=2^k}\frac{b_i}{r_i}$; then,
$$
\sum_i U_i \, b_i
=
\sum_{k\in \mathbb{Z}} \psi(4^k\, L_w)
\bigg(\sum_{i\, : \, r_i=2^k}\frac{b_i}{r_i}\bigg)
=
\sum_{k\in \mathbb{Z}} \psi(4^k\,L_w) \beta_k.
$$
Therefore, by Proposition \ref{prop:dsicrtee-SF}, \eqref{eqcsds5},
\eqref{eqcsds6}, the fact that $r_{i}\sim r(B_{i})$ and \eqref{eqcsds4},
we have that
\begin{multline*}
I_1
\lesssim
\frac1{\alpha^q}\,
\Big\|\sum_i U_i b_i\Big\|_{L^q(w)}^q
\lesssim
\frac1{\alpha^q}\,
\bigg\|\Big(\sum_{k\in \mathbb{Z}} |\beta_k|^2\Big)^{\frac12}
\bigg\|_{L^q(w)}^q \\
\lesssim
\frac1{\alpha^q}\, \int_{\mathbb{R}^n} \sum_{i} \frac{|b_i|^q}{r_i^q}\ dw
\lesssim
\sum_{i}w(B_i)
\lesssim
\frac{1}{\alpha^p}\int_{\mathbb{R}^n} |\nabla f|^p\, dw.
\end{multline*}
If we combine all of the estimates we have obtained, we
get \eqref{eq:weak-type-reverse} as desired.
\medskip
To prove~\eqref{eq:reverseRiesz} from the weak-type estimate
\eqref{eq:weak-type-reverse}
we will use an interpolation argument from \cite{auscher-martell06}.
Fix $p$ and $r$ such that $\max\big\{r_w,(p_-)_{w,*}\big\}<r<p<2$.
Then by \eqref{eq:weak-type-reverse} and
\eqref{eqn:degen-kato1} we have that for every $f\in\mathcal{S}$,
\begin{equation}\label{eq:interpol-grad}
\|L_w^{1/2}f\|_{L^{r,\infty}(w)} \lesssim \|\nabla f\|_{L^r(w)},
\qquad
\|L_w^{1/2}f\|_{L^2(w)} \lesssim \|\nabla f\|_{L^2(w)}.
\end{equation}
Informally, to apply Marcinkiewicz interpolation we would set $g=\nabla f$, so that
\eqref{eq:interpol-grad} becomes a weak $(r,r)$ and a strong $(2,2)$ inequality,
which would immediately yield the desired strong $(p,p)$ inequality. To make this
rigorous we must justify the substitution.
For every $q>r_w$ by \cite[Lemma 6.7]{auscher-martell06} we have that
$$
\mathcal{E}
=
\big\{(-\Delta)^{1/2}f\, : \, f\in \mathcal{S}, \mathop{\rm supp} \widehat f \subset \mathbb{R}^n\setminus \{0\}\big\}
$$
is dense in $L^q(w)$, where $\widehat f$ denotes the Fourier transform
of $f$. Moreover, since $r>r_{w}$, $w\in A_r$ and the Riesz
transforms, $R_j = \partial_j (-\Delta)^{-1/2}$, are bounded on
$L^r(w)$~\cite{garciacuerva-rubiodefrancia85}. It follows from this
and the identity $-I=R_1^2+\dots +R_n^2$ that for
$g\in L^r(w)$,
$$
\|g\|_{L^r(w)} \sim \|\nabla (-\Delta)^{-1/2} g\|_{L^r(w)}.
$$
Thus, for $g\in \mathcal{E}$, $L_w^{1/2}(-\Delta)^{-1/2}g=L_w^{1/2}f$
if $f=(-\Delta)^{-1/2}g$ and
$\|\nabla f\|_{L^r(w)} \sim \|g \|_{L^r(w)}$ for $r>r_{w}$. Thus
\eqref{eq:interpol-grad} becomes weighted weak $(r,r)$
and strong $(2,2)$ inequalities for $T= L_w^{1/2}(-\Delta)^{-1/2}$,
and this operator is defined \textit{a
priori} on $\mathcal{E}$. Since $\mathcal{E}$ is dense in
each $L^q(w)$, we can extend
$T$ by density in both cases and their restrictions to the space of
simple functions agree. Hence, we can apply Marcinkiewicz
interpolation and conclude, again by density, that
\eqref{eq:reverseRiesz} holds for all $p$ with $r<p<2$. Since $r$
is arbitrary, we get \eqref{eq:reverseRiesz} in the range $\max\big\{r_w,(p_-)_{w,*}\big\}<p<2$.
\bigskip
For the second step of the proof we will
prove~\eqref{eq:reverseRiesz-vdw} using Theorem~\ref{theorem:2.2}.
Inequality~\eqref{eq:reverseRiesz} for its full range of exponents then follows by letting $v=1$.
Define $\tilde{p}_-=\max\{r_w,p_-\}<2$, and fix $\tilde{p}_-<p<p_+$ and
$v\in A_{p/\tilde{p}_-}(w)\cap RH_{(p_+/p)'}(w)$. By the openness
properties of $A_q$ and $RH_s$ weights, there exist $p_0$, $q_0$ such
that
\[
\tilde{p}_-<p_0<\min\{p,2\}\le p<q_0<p_+,
\qquad
v \in A_{p/p_0}(w)\cap
RH_{(q_0/p)'}(w). \]
To apply Theorem \ref{theorem:2.2}, let $T=L_w^{1/2}$,
$S=\nabla$, and $\mathcal{A}_{r}=I-( I-e^{-r^{2}L_{w}}) ^{m}$ where the
value of $m$ will be fixed below. We will first show that \eqref{eqn:2.2}
holds. By \eqref{eqn:A-od} we have that $\mathcal{A}_{r}\in \offw{p_0}{q_0}$
since $p_0$, $q_0\in \mathcal{J}(L_w)$. Let $h=L_w^{1/2} f$ and decompose
$h$ as we decomposed $f$ in \eqref{decomp-f}. Then, since $L_w^{1/2}$
and $\mathcal{A}_{r}$ commute, it follows that
\begin{align*}
\left(
\Xint-_{B}\left\vert {L_{w}^{1/2}}\mathcal{A}_{r} f\right\vert ^{q_{0}}dw\right) ^{\frac{1}{q_0}}
& \lesssim
\sum_{j\ge 1}
\left(
\Xint-_{B} \left\vert \mathcal{A}_{r} h_j \right\vert ^{q_{0}}dw\right) ^{\frac{1}{q_0}}
\\
& \lesssim
\sum_{j\ge 1}
2^{j\theta_{1}}\Upsilon \left( 2^{j}\right) ^{\theta _{2}} e^{-c4^{j}}
\bigg(
\Xint-_{C_{j}}\left\vert h\right\vert ^{p_{0}}dw
\bigg) ^{\frac{1}{p_0}} \\
& \le
\sum_{j\ge 1}
2^{j(\theta_{1}+\theta_2)} e^{-c4^{j}}
\bigg(
\Xint-_{2^{j+1}\,B}\left\vert L_w^{1/2}f\right\vert ^{p_{0}}dw
\bigg) ^{\frac{1}{p_0}}.
\end{align*}
This gives us \eqref{eqn:2.2} with $g(j)=C\,2^{j\left(
\theta _{1}+\theta _{2}\right) }e^{-c4^{j}}$; clearly, $\sum
g(j)<\infty$.
We now prove that \eqref{eqn:2.1} holds. Fix $f\in\mathcal{S}$ and let
$\varphi(z)=z^{1/2} (1-e^{-r^2\, z})^m$ so that
$\varphi(L_w)f=L_w^{1/2}(I-e^{-r^2\,L_w})^mf$. By the conservation
property (see \cite{DCU-CR2013} or \cite[Section
2.5]{auscher07}),
\begin{equation}\label{eq:rep}
\varphi(L_w)\, f=
\varphi(L_w)\, (f-f_{4\,B,w})
=
\sum_{j\ge 1} \varphi(L_w)\, h_j,
\end{equation}
where $h_{j}=(f-f_{4\,B,w})\,\phi_j$,
$\phi_j=\bigchi_{C_j(B)}$ for $j\ge 3$, $\phi_1$ is a smooth
function with support in $4\,B$, $0\le \phi_1\le 1$, $\phi_{1}=1$ in $2\,B$
and $\|\nabla \phi_1\|_{\infty}\le C/r$, and $\phi_2$ is
chosen so that $\sum_{j\ge 1} \phi_j=1$.
We estimate each term in the righthand side of~\eqref{eq:rep} separately.
When $j=1$, since $ p_-<p_0< p_+$, by the bounded holomorphic
functional calculus on $L^{p_{0}}(w)$ (Proposition \ref{prop:B-K:weights}) and
the fact that $\varphi(L_w)\, h_{1}= (I-e^{-r^2\, L_w})^m\,
L_w^{1/2}h_{1}$, we have that
$$
\left\|\varphi(L_w)\, h_{1}\right\|_{L^{p_{0}}(w)}
\lesssim
\|L_w^{1/2}h_{1}\|_{L^{p_{0}}(w)}
$$
uniformly in $r$.
By the above argument we have that~\eqref{eq:reverseRiesz} holds for $p=p_0$ since
$\tilde{p}_-<p_0<2$.
Further, since $f\in\mathcal{S}$, $h_1\in\mathcal{S}$ by our choice of
$\phi_1$. This, together with the $L^{p_{0}}(w)$-Poincar\'e inequality
\eqref{w-Poincare} (since $p_0>r_w$, $w\in A_{p_0}$)
and the definition of $h_{1}$ yield
\begin{multline*}
\|L_w^{1/2}h_{1}\|_{L^{p_{0}}(w)}
\lesssim
\|\nabla h_{1}\|_{L^{p_{0}}(w)}
\\
\lesssim
\|(\nabla f) \bigchi_{4B}\|_{L^{p_{0}}(w)}
+
r^{-1}\,\|(f-f_{4\,B,w})\bigchi_{4B}\|_{L^{p_{0}}(w)}
\lesssim
\|(\nabla f) \bigchi_{4B}\|_{L^{p_{0}}(w)}.
\end{multline*}
Therefore,
$$
\bigg( \aver{B} |\varphi(L_w)\, h_{1}|^{p_0}\,dw\bigg)^{\frac1{p_0}}
\lesssim
\bigg( \aver{4\,B} |\nabla f |^{p_0}\,dw\bigg)^{\frac1{p_0}}.
$$
When $j\ge 3$, the functions $\eta$
associated with $\varphi$ by \eqref{eqn:L2-holo-rep-eta} satisfy
$$
|\eta(z)|
\lesssim \frac{r^{2\,m}}{|z|^{m+3/2}},
\qquad
z \in \Gamma_{\pi/2-\theta}.
$$
Since $p_0\in \mathcal{J}(L_w)$, by Corollary~\ref{cor:holomorphic},
$e^{-z\,L_w}\in \mathcal{O}\big(L^{p_0}(w) \rightarrow L^{p_0}(w),
\Sigma_{\mu}\big)$. This, together with the
representation~\eqref{eqn:L2-holo-rep} give us that
\begin{align*}
&\bigg( \aver{B} |\varphi(L_w)h_j|^{p_0}dw\bigg )^{\frac1{p_0}} \\
&\qquad\quad \le
\int_{\Gamma_{\pi/2-\theta}} \bigg(\aver{B} |e^{-z\,L_w}
h_j|^{p_0}\,dw\bigg)^{\frac1{p_0}}\, |\eta(z)|\,|dz|
\\
&\qquad\quad \lesssim
2^{j\,\theta_1} \int_{\Gamma_{\pi/2-\theta}}
\dec{\frac{2^j\,r}{\sqrt{|z|}}}^{\theta_2}\,
\expt{-\frac{\alpha\,4^j\,r^2}{|z|}}\,
\frac{r^{2\,m}}{|z|^{m+3/2}}\, {|dz|} \,
\bigg(\aver{C_j(B)}
|h_{j}|^{p_0}\,dw\bigg)^{\frac1{p_0}}
\\
&
\qquad\quad\lesssim
2^{j\,(\theta_1- 2\,m-1)} \, r^{-1}\,
\bigg(\aver{2^{j+1}\,B}
|f-f_{4\,B,w}|^{p_0}\,dw\bigg)^{\frac1{p_0}}
\\
&
\qquad\quad\lesssim
2^{j\,(\theta_1- 2\,m-1)} \, \sum_{l=1}^{j} 2^{l}\,
\bigg(\aver{2^{l+1}\,B}
|\nabla f|^{p_0}\,dw\bigg)^{\frac1{p_0}},
\end{align*}
provided $2\, m+1>\theta_2$.
The last estimate follows from $L^{p_{0}}(w)$-Poincar\'e inequality
\eqref{w-Poincare} (here we again use that $p_0>r_w$ and so $w\in
A_{p_0}$):
\begin{align}\label{eq:poincare}
&\bigg(\aver{2^{j+1}\,B}
|f-f_{4\,B,w}|^{p_0}\,dw\bigg)^{\frac1{p_0}}
\nonumber
\\
&\qquad\quad
\le
\bigg(\aver{2^{j+1}\,B}\hskip-7pt
|f-f_{2^{j+1}\,B,w}|^{p_0}\,dw\bigg)^{\frac1{p_0}} +\sum_{l=2}^j
|f_{2^l\,B,w}-f_{2^{l+1}\,B,w}|
\nonumber
\\
&\qquad\quad\lesssim
\sum_{l=1}^{j}
\bigg(\aver{2^{l+1}\,B}
|f-f_{2^{l+1}\,B,w}|^{p_0}\,dw\bigg)^{\frac1{p_0}}
\nonumber
\\
&\qquad\quad\lesssim
r\,
\sum_{l=1}^{j} 2^{l}\,
\bigg(\aver{2^{l+1}\,B}
|\nabla f|^{p_0}\,dw\bigg)^{\frac1{p_0}}.
\end{align}
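In the second inequality of \eqref{eq:poincare} we also used, for each $2\le l\le j$, the standard estimate for consecutive weighted averages, which requires only the doubling property of $w$ and Jensen's inequality:
\begin{equation*}
|f_{2^l\,B,w}-f_{2^{l+1}\,B,w}|
\le
\aver{2^l\,B} |f-f_{2^{l+1}\,B,w}|\,dw
\lesssim
\bigg(\aver{2^{l+1}\,B} |f-f_{2^{l+1}\,B,w}|^{p_0}\,dw\bigg)^{\frac1{p_0}}.
\end{equation*}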
When $j=2$ we can argue similarly, using the fact that
$$|h_2|\le |f-f_{4\,B,w}|\,\bigchi_{8\,B\setminus 2\,B} \le |f-f_{2\,B,w}|\,\bigchi_{8\,B\setminus 2\,B}
+ |f_{4\, B,w}-f_{2\,B,w}|\,\bigchi_{8\,B\setminus 2\,B}.
$$
If we combine these estimates, then by~\eqref{eq:rep} and Minkowski's
inequality we get
$$
\bigg( \aver{B} |\varphi(L_w)h|^{p_0}dw\bigg )^{\frac1{p_0}}
\lesssim
\sum_{j\ge 1}
\bigg( \aver{B} |\varphi(L_w)h_j|^{p_0}dw\bigg )^{\frac1{p_0}}
\le
\sum_{j\ge 1}
g(j)\bigg( \aver{2^{j+1}\,B} |\nabla f|^{p_0}dw\bigg )^{\frac1{p_0}}
$$
with $g(j)=C_m\,2^{j\,(\theta_1-2\,m)}$ provided $2\,m+1>\theta_2$.
If we further assume that $2\,m>\theta_1$, then $\sum_j g(j)<\infty$.
This proves that \eqref{eqn:2.1} holds. Therefore, by Theorem
\ref{theorem:2.2} we get~\eqref{eq:reverseRiesz-vdw} as desired.
\end{proof}
\section{The gradient of the semigroup $\sqrt{t}{\nabla} e^{-tL_w}$}
\label{section:gradient}
Let $\widetilde \mathcal{K}(L_w)\subset [1,\infty]$ be the set of all exponents
$p$ such that $\sqrt{t} {\nabla} e^{-t\,L_w} : L^p(w)\rightarrow L^p(w)$
is uniformly bounded for all $t>0$. By Theorem~\ref{thm:L2-gaffney}
and Lemma~\ref{lemma:full}, $2\in \widetilde \mathcal{K}(L_w)$ and if it
contains more than one point, then by
interpolation $\widetilde \mathcal{K}(L_w)$ is an interval. In this section
we give a partial description of the set of $(p,q)$ such that
$\sqrt{t} {\nabla} e^{-t\,L_w} \in \offw{p}{q}$.
\begin{prop}\label{prop:K}
There exists an interval $\mathcal{K}(L_w)$ such that if $p,\,q \in \mathcal{K}(L_w)$,
$p\le q$, then $\sqrt{t}\,\nabla e^{-t\,L_w}\in \offw{p}{q}$.
Moreover, $\mathcal{K}(L_w)$ has the following properties:
\begin{enumerate}
\item $\mathcal{K}(L_w)\subset \widetilde\mathcal{K}(L_w)$;
\item if $q_-(L_w)$ and $q_+(L_w)$ are the left and right endpoints
of $\mathcal{K}(L_w)$, then $q_-(L_w)=p_-(L_w)$, $2\le q_+(L_w)\le
(q_+(L_w))^*_w\le p_+(L_w)$. In particular, $2\in \mathcal{K}(L_w)$ and
$\mathcal{K}(L_w)\subset \mathcal{J}(L_w)$;
\item If $q\geq 2$ and $p<q$, and if $\sqrt{t}\,\nabla e^{-t\,L_w} \in
\offw{p}{q}$, then $p,\,q \in \mathcal{K}(L_w)$;
\item $\sup \widetilde \mathcal{K}(L_w) = q_+(L_w)$.
\end{enumerate}
\end{prop}
\begin{remark}
Unlike in the unweighted case (see~\cite{auscher-martell07}) we are
unable to give a complete characterization of $\mathcal{K}(L_w)$. More
precisely, if we have an off-diagonal estimate and $p<q<2$, then we
cannot prove that $p,\,q\in \mathcal{K}(L_w)$.
\end{remark}
\begin{remark}
In Section~\ref{section:q-plus} below we will show that $q_+(L_w)>2$; in particular, this gives that
$2\in \mathop{\rm Int}\mathcal{K}(L_w)$.
\end{remark}
\medskip
As an immediate consequence of Proposition~\ref{prop:K} we get
weighted inequalities for the gradient of the semigroup. The proof is
identical to the proof of Corollaries~\ref{corollary-weighted-offd} and \ref{cor:holomorphic}.
\begin{corol}\label{corollary-grad-weighted-offd}
Let $q_-(L_w)<p\le q<q_+(L_w)$. If
$v\in A_{p/q_-(L_w)}(w)\cap RH_{(q_+(L_w)/q)'}(w)$, then
$\sqrt{t}\,\nabla e^{-tL_w}\in \mathcal{O}\big(L^{p}(v\,dw)\rightarrow L^{q}(v\,dw)\big)$ and
$\sqrt{z}\,\nabla e^{-zL_w}\in \mathcal{O}\big(L^{p}(v\,dw)\rightarrow L^{q}(v\,dw),\Sigma_\nu\big)$
for all $\nu$, $0<\nu<\arctan\left(\frac{\lambda}{\sqrt{\Lambda^2-\lambda^2}}\right)$.
\end{corol}
\medskip
The proof of Proposition~\ref{prop:K} requires two lemmas.
\begin{lemma}\label{off-w:sel-impro}
Given $w\in A_\infty$ and a family of sublinear operators $\{T_t\}_{t>0}$
such that $T_t\in \offw{p}{q}$, with $1\le p<q\le \infty$, there exist $\alpha$, $\beta>0$ such that for any ball $B$ with radius $r$ and for any $t>0$,
\begin{equation}\label{w:off:B-B:improved}
\left(\aver{B} |T_t( \bigchi_B \, f) |^{q}\,dw\right)^{\frac 1 q}
\lesssim
\max\bigg\{\left(\frac{r}{\sqrt{t}}\right)^{\alpha},\left(\frac{r}{\sqrt{t}}\right)^{\beta} \bigg\} \,\left(\aver{B}
|f|^{p}\,dw\right)^{\frac 1 p }.
\end{equation}
\end{lemma}
\begin{proof}
This result is implicit in \cite[Proof of Proposition 2.4,
p. 306]{auscher-martell07}; here we reprove it with a small
improvement in the constant. There it was shown that in
Definition~\ref{defi:off-d:weights} it is sufficient to consider
the case where $r\approx \sqrt{t}$. But in this case we get that
$\Upsilon(r/\sqrt{t})\approx 1$ and for all $j\geq 2$,
$\Upsilon(2^j\,r/\sqrt{t})\approx 2^{j}$. The argument in
\cite[p. 306]{auscher-martell07} shows that if we assume
\eqref{w:off:B-B}, \eqref{w:off:C-B}, \eqref{w:off:B-C} hold when
$r\approx \sqrt{t}$, then \eqref{w:off:B-B} holds in general with
constant $\max\{1,(r/\sqrt{t})^\alpha\}$ for some $\alpha>0$
depending on $p$, $q$ and $w$. In this maximum the $1$ occurs when
$r\le \sqrt{t}$; therefore, to prove~\eqref{w:off:B-B:improved} we
need to show that if $r\le \sqrt{t}$, then we can replace $1$ by the
better constant $(r/\sqrt{t})^\beta$ for some $\beta>0$.
Fix $r\le \sqrt{t}$. If $B=B(x,r)$, then $B\subset
B_t=B(x,\sqrt{t})$. As in \cite[p. 306]{auscher-martell07} we apply
\eqref{w:off:B-B} to $T_t$ and $B_t$; this yields
\begin{multline*}
\left( \aver B |T_t(\bigchi_B \,f)|^q\,dw\right)^{{\frac 1 q}}
\le
\left( \frac{w(B_t)}{w(B)}\right)^{\frac 1 q} \left( \aver {B_t }|T_t(\bigchi_B \,f)|^q\,dw\right)^{{\frac 1 q}}
\\
\lesssim \left( \frac{w(B_t)}{w(B)}\right)^{\frac 1 q} \left( \aver
{B_t }|\bigchi_B f|^p\,dw\right)^{\frac 1 p}
\le
\left( \frac{w(B)}{w(B_t)}\right)^{\frac 1 p - \frac 1 q} \left( \aver {B }|f|^p\,dw\right)^{\frac 1 p}.
\end{multline*}
Since $w\in A_\infty$, we
have that for some $\theta>0$,
\[ \frac{w(B)}{w(B_t)} \lesssim
\left(\frac{|B|}{|B_t|}\right)^{\theta}=\left(\frac{r}{\sqrt{t}}\right)^{\theta\,n}. \]
Since $p<q$ we have that
$$
\left( \aver B |T_t(\bigchi_B \,f)|^q\,dw\right)^{{\frac 1 q}}
\lesssim
\left( \frac{r}{\sqrt{t}}\right)^{(\frac 1 p - \frac 1 q)\,\theta\,n} \left( \aver {B }|f|^p\,dw\right)^{\frac 1 p}.
$$
Therefore, if we combine this with the argument
from~\cite[p. 306]{auscher-martell07} described above, we get that
~\eqref{w:off:B-B:improved} holds with
$\beta=(1/p-1/q)\,\theta\,n$.
\end{proof}
The second lemma gives the close connection between off-diagonal
estimates for $e^{-tL_w}$ and $\sqrt{t}\nabla e^{-tL_w}$ for $p<2$.
\begin{lemma}\label{lemma:fode}
Given $1\le p<2$ the following are equivalent:
\begin{enumerate}
\item $e^{-t\,L_w} \in \offw{p}{2}$.
\item $\sqrt {t} \, \nabla e^{-t\,L_w}\in \offw{p}{2}$.
\item $t\, L_w\, e^{-t\,L_w}\in \offw{p}{2}$.
\end{enumerate}
\end{lemma}
\begin{proof}
We follow the proof of \cite[Lemma 5.3]{auscher-martell07}.
To prove that $(1)$ implies $(2)$, note that by
Theorem~\ref{thm:L2-gaffney}, $\sqrt{t} \, \nabla
e^{-t\,L_w}\in \offw{2}{2}$. If we compose this with $(1)$, by
Lemma~\ref{lemma:unif-comp}, Remark~\ref{rem:complex-offd}, and the
semigroup property we get $(2)$.
To prove that $(2)$ implies $(3)$, define
$S_t\vec{f}=\sqrt{t}\,e^{-t\,L_w}(w^{-1}\,\mathop{\rm div} (A\vec f)) $. By
duality, we have that
\begin{multline*}
\langle S_t\vec{f},g\rangle_{L^2(w)}
= \langle w^{-1} \mathop{\rm div} (A\vec f),
\sqrt{t}e^{-t\,L_w^*}g\rangle_{L^2(w)}
=\langle \mathop{\rm div} (A\vec f),
\sqrt{t}e^{-t\,L_w^*}g\rangle_{L^2} \\
= -\langle \vec{f}, A^* \sqrt{t}\nabla e^{-t\,L_w^*}g\rangle_{L^2}
= \langle \vec{f}, w^{-1} A^* \sqrt{t}\nabla
e^{-t\,L_w^*}g\rangle_{L^2(w)} .
\end{multline*}
The matrix $w^{-1} A^*$ is uniformly elliptic, and so multiplication by
it is bounded on $L^2(w)$. Furthermore,
$\sqrt{t}\,\nabla e^{-t\,L_w^*}\in\offw{2}{2}$. Therefore, it follows
that $S_{t}\in\offw{2}{2}$. If we combine this with $(2)$, we get that $-t\,L_w\, e^{-2\,t\, L_w} =S_t
\circ \sqrt t \, \nabla e^{-t\,L_w}\in \offw{p}{2}$. This proves $(3)$.
Finally we show that $(3)$ implies $(1)$. We first prove
\eqref{w:off:B-B}. Fix $B$ and $f,\,g$ such that $\left(\aver{B} |f|^p\,dw\right)^{\frac1p}=
\left(\aver{B} |g|^2\,dw\right)^{\frac12}=1$, and assume also that $f\in L^2(B,dw)$. Define
\[ h(t)=
\aver{B} e^{-t\,L_w}( \bigchi_B \, f)(x)\,g(x)\,dw(x). \]
By duality it will suffice to show that $|h(t)|\lesssim
\Upsilon(r/\sqrt{t})^\theta$. (Note that our assumption implies that $t\,h'(t)$ satisfies such a bound.) First, we claim that
\[ \lim_{t\to\infty}
h(t)=0. \]
To see this we use the fact (discussed in
Section~\ref{section:prelim}) that $L_w$ has a bounded
holomorphic functional calculus on $L^2(w)$. Given this, since $z\mapsto
e^{-tz}$ converges to 0 uniformly on compact subsets of $\Re z>0$, we
get the desired limit.
Hence, we can write $ h(t)= - \int_{t}^\infty h'(s) \, ds. $ Notice
that $|t\,h'(t)|\lesssim \Upsilon(r/\sqrt{t})^{\theta_2}$ but this
does not give a convergent integral. However, if we apply
Lemma~\ref{off-w:sel-impro} to $t\, L_w\, e^{-t\,L_w}\in \offw{p}{2}$,
we get that $|t\,h'(t)|\lesssim
\widetilde{\Upsilon}(r/\sqrt{t})$ with
$\widetilde{\Upsilon}(s)=\max\{s^\alpha,s^\beta\}$. It follows from
this estimate that
\begin{multline*}
|h(t)|
\le
\int_{t}^\infty |h'(s)| \, ds
\lesssim
\int_{t}^\infty
\widetilde{\Upsilon}\left(\frac{r}{\sqrt{s}}\right)
\frac{ds}{s}
\approx
\int_{0}^{\frac{r}{\sqrt{t}}}
\widetilde{\Upsilon}(s)
\frac{ds}{s}
\lesssim
\widetilde{\Upsilon}\left(\frac{r}{\sqrt{t}}\right)
\lesssim
\dec{\frac{r}{\sqrt{t}}}^{\alpha+\beta}.
\end{multline*}
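The middle comparison and the subsequent bound are elementary: the change of variables $u=r/\sqrt{s}$ (so that $\frac{ds}{s}=-2\,\frac{du}{u}$) gives the comparison, and then, since $\widetilde{\Upsilon}(u)=\max\{u^\alpha,u^\beta\}$,
\begin{equation*}
\int_0^{\frac{r}{\sqrt{t}}} \widetilde{\Upsilon}(u)\,\frac{du}{u}
\le
\int_0^{\frac{r}{\sqrt{t}}} \big(u^{\alpha-1}+u^{\beta-1}\big)\,du
=
\frac{1}{\alpha}\Big(\frac{r}{\sqrt{t}}\Big)^{\alpha}
+
\frac{1}{\beta}\Big(\frac{r}{\sqrt{t}}\Big)^{\beta}
\lesssim
\widetilde{\Upsilon}\Big(\frac{r}{\sqrt{t}}\Big).
\end{equation*}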
To prove \eqref{w:off:C-B} we argue as before, but with $\big(\aver{C_j(B)} |f|^p\,dw\big)^{\frac1p}=
\left(\aver{B} |g|^2\,dw\right)^{\frac12}=1$ and
\[ h(t)=
\aver{B} e^{-t\,L_w}( \bigchi_{C_j(B)} \, f)(x)\,g(x)\,dw(x). \]
Since $d(B,C_j(B))>0$, by Theorem~\ref{thm:L2-gaffney} and H\"older's
inequality, $ h(t) \rightarrow 0$ as $t\rightarrow 0$. Therefore, $h(t)=\int_{0}^t
h'(s)\, ds$. Since $t\,L_w\,e^{-t\,L_w}\in\offw{p}{2}$, we have that
\begin{multline*}
|h(t)|
\le
\int_{0}^t |h'(s)|\, ds
\lesssim
2^{j\,\theta_1}
\int_{0}^t
\dec{\frac{2^j\,r}{\sqrt{s}}}^{\theta_2}\,
\expt{-\frac{c\,4^{j}\,r^2}{s}} \,\frac{ds}{s}
\\
\approx
2^{j\,\theta_1}\int_{\frac{2^j\,r}{\sqrt{t}}}^\infty
\Upsilon(s)^{\theta_2}\, e^{-c\,s^2}\,\frac{ds}{s}
\lesssim
2^{j\,\theta_1}
\dec{\frac{2^j\,r}{\sqrt{t}}}^{\theta_2}\,
\expt{-\frac{c\,4^{j}\,r^2}{t}}.
\end{multline*}
This is \eqref{w:off:C-B}.
Finally, the proof of \eqref{w:off:B-C} is essentially the same and we
omit the details. This completes the proof that $(3)$ implies $(1)$.
\end{proof}
\begin{proof}[Proof of Proposition \textup{\ref{prop:K}}]
Define the sets $\mathcal{K}_{-}(L_w)$ and $\mathcal{K}_{+}(L_w)$ to be
\begin{gather*}
\mathcal{K}_{-}(L_w) = \{ p\in [1,2] : \sqrt t \, \nabla
e^{-t\,L_w}\in\offw{p}{2} \} \\
\mathcal{K}_{+}(L_w) = \{ p\in [2,\infty] : \sqrt t \, \nabla
e^{-t\,L_w}\in\offw{2}{p} \},
\end{gather*}
and let $\mathcal{K}(L_w)= \mathcal{K}_{-}(L_w)\cup \mathcal{K}_{+}(L_w)$.
The set is non-empty, since $2\in \mathcal{K}(L_w)$. By
Lemma~\ref{lemma:nested} it is an interval.
Now fix $p,q \in \mathcal{K}(L_w)$ with
$p<q$. If $p<q\le 2$ or $2 \le p < q$, then by
Lemma~\ref{lemma:nested}, $\sqrt t \, \nabla
e^{-t\,L_w}\in \offw{p}{q}$ since $p, q \in
\mathcal{K}_{-}(L_w)$ or $p,q \in \mathcal{K}_{+}(L_w)$. If $p \le 2 < q$, then
$\sqrt t \, \nabla e^{-t\,L_w}\in \offw{2}{q}$ and by Lemma~\ref{lemma:fode}, $e^{-t\,L_w}\in
\offw{p}{2}$. Hence, by Lemma~\ref{lemma:unif-comp} and
the semigroup property, $\sqrt t \, \nabla e^{-t\,L_w}\in
\offw{p}{q}$. Thus, in every case we get the desired off-diagonal estimate.
We now prove (1)-(4). By Lemma~\ref{lemma:unif-comp}, off-diagonal
estimates on balls imply uniform boundedness, and so $\mathcal{K}(L_w)\subset
\widetilde \mathcal{K}(L_w)$. This proves (1).
To prove (2), we first note that if $p<2$, then by Lemma
\ref{lemma:fode}, $p\in \mathcal{J}(L_w)$ if and only if $p\in
\mathcal{K}_-(L_w)$. Thus $\mathcal{J}(L_w)\cap[1,2]=\mathcal{K}_-(L_w)$ and so
$q_-(L_w)=p_-(L_w)$. To show that $(q_+(L_w))^*_w\le p_+(L_w)$,
first note that if
$q_+(L_w)=2$, then by Proposition~\ref{prop:J} we have that
$(q_+(L_w))^*_w=2_w^* \leq p_+(L_w)$. If $q_+(L_w)>2$, then we
proceed as in the proof of this proposition.
Let
$2<p<q_+(L_w)$ and $p<q<p^*_w$. Then by \eqref{w-Poincare},
$e^{-t\,L_w}\in\offw{2}{2}$, $\sqrt{t}\,\nabla
e^{-t\,L_w}\in\offw{2}{p}$, and so we get that
\begin{align*}
&\left(\aver{B} |e^{-t\,L_w}( \bigchi_B \, f)|^{q}\,dw\right)^{\frac1q}
\\
&\qquad
\lesssim
\left(\aver{B} |e^{-t\,L_w}( \bigchi_B \, f) |^{2}\,dw\right)^{\frac12}
+
r\,\left(\aver{B} |\nabla\,e^{-t\,L_w}( \bigchi_B \, f) |^{p}\,dw\right)^{\frac1p}
\\
&\qquad
\lesssim
\dec{\frac{r}{\sqrt{t}}}^{1+\theta_2} \,\left(\aver{B} |f|^{2}\,dw\right)^{\frac12}.
\end{align*}
This gives us inequality~\eqref{w:off:B-B}. The other two
inequalities in Definition \ref{defi:off-d:weights} can be proved in
exactly the same way. Thus
$e^{-t\,L_w}\in\offw{2}{q}$ which implies $q\le
p_+(L_w)$. Letting $p\nearrow q_+(L_w)$ and $q\nearrow p^*_w$ we
conclude that $(q_+(L_w))^*_w\le p_+(L_w)$.
The last estimate implies in particular that $q_+(L_w)\le
p_+(L_w)$. If $q_+(L_w)<\infty$ we clearly have that $q_+(L_w)<
p_+(L_w)$ and so $\mathcal{K}_+(L_w)\subset \mathcal{J}(L_w)$. Otherwise,
$p_+(L_w)=\infty$ and again we have that $\mathcal{K}_+(L_w)\subset \mathcal{J}(L_w)$.
This completes the proof of (2).
\medskip
To prove (3), suppose
first that $2\leq p < q$ and $\sqrt t
\, \nabla e^{-t\,L_w}\in \offw{p}{q}$. We will show that $p,q \in
\mathcal{K}(L_w)$. Since we also have that $\sqrt t
\, \nabla e^{-t\,L_w}\in \offw{2}{2}$, by interpolation (Lemma \ref{lemma:interpolate}),
$\sqrt t \, \nabla e^{-t\,L_w}\in \offw{p_{\theta}}{q_{\theta}}$ where
$1/p _{\theta}= (1-\theta)/ p + \theta/2$, $1/q
_{\theta}=(1-\theta)/q+\theta/2$ and $\theta \in (0,1)$. If $p
\notin \mathcal{K}_{+}(L_w)$, then $q>\sup \mathcal{K}_{+}(L_w)$. We can choose $\theta$
such that $p_{\theta}<\sup\mathcal{K}_{+}(L_w)<q_{\theta}$. Since
$\mathcal{K}_{+}(L_w)\subset \mathcal{J}(L_w)$, $p_{\theta}\in \mathcal{J}(L_w)$: i.e.,
$e^{-t\,L_w}\in \offw{2}{p_{\theta}}$. By composition and the
semigroup property, $\sqrt t \, \nabla e^{-t\,L_w}\in
\offw{2}{q_{\theta}}$; hence, $q_{\theta} \in \mathcal{K}_{+}(L_w)$, a
contradiction. Therefore, $p\in \mathcal{K}_{+}(L_w)$. As we
have $\sqrt t \, \nabla e^{-t\,L_w}\in \offw{p}{q}$ by assumption and
$e^{-t\,L_w}\in \offw{2}{p}$ since $p\in \mathcal{J}(L_w)$, by composition and
the semigroup property, $\sqrt t \, \nabla e^{-t\,L_w}\in \offw{2}{q}$. Hence,
$q\in \mathcal{K}_{+}(L_w)$.
The case $p<2 \le q$ is straightforward. Since $\sqrt t \, \nabla e^{-t\,L_w}\in
\offw{p}{q}$, by Lemma~\ref{lemma:nested} we have that $\sqrt t \, \nabla e^{-t\,L_w}\in
\offw{2}{q}$ and $\sqrt t \, \nabla e^{-t\,L_w}\in \offw{p}{2}$.
Hence, $p\in \mathcal{K}_{-}(L_w)$ and $q\in \mathcal{K}_{+}(L_w)$.
\medskip
Finally, we prove (4). Suppose to the contrary that $\sup \widetilde
\mathcal{K}(L_w)>q_+(L_w)$. Then there exist $p$, $q$ such that
$q_+(L_w)<p<q<\sup \widetilde \mathcal{K}(L_w)$. Fix $r$ such that $p_-(L_w)=q_-(L_w)<r<2$.
Then we have that $\sqrt{t}\nabla e^{-t\,L_w}$ is uniformly bounded on
$L^q(w)$ and in $\offw{r}{2}$. By Lemma~\ref{lemma:interpolate} we
can interpolate between these to get that $\sqrt{t}\,\nabla
e^{-t\,L_w}\in \offw{s}{p}$ for some $s<p$. But then by the above
converse, we have
that $p\in \mathcal{K}(L_w)$ which is a contradiction.
\end{proof}
\section{An upper bound for $\mathcal{K}(L_w)$}
\label{section:q-plus}
In this section we will prove that $q_+(L_w)>2$: that is, the set
$\mathcal{K}(L_w)$ contains $2$ in its interior.
In general, all we can say is that $q_+(L_w)>2$: as noted
in~\cite[Section~4.5]{auscher07}, even in the unweighted case this is the best
possible bound, since given any $\epsilon>0$ it is possible to find an
operator $L$ such that $q_+(L)<2+\epsilon$. In
Section~\ref{section:L2-kato} below we will give some estimates for
$q_+(L_w)$ in terms of $[w]_{A_2}$.
We have broken the proof that $q_+(L_w)>2$ into
a series of discrete steps where we borrow some ideas from \cite{Auscher-Coulhon2005}. We first prove a reverse H\"older
inequality and use Gehring's inequality to get a higher integrability estimate.
We then prove that the Hodge projection is bounded on $L^q(w)$ for a
range of $q>2$ and use this to prove the Riesz transform is also
bounded for exponents greater than~2. (In Section~\ref{section:riesz}
we give a more complete discussion of the Riesz transform.) From this
we deduce that $q_+(L_w)>2$.
\subsection*{A reverse H\"older inequality}
Fix a ball $B_0$ and let $u\in H^1_0(w)$ be any weak solution of $L_wu=0$ in $4B_0$. Then for any
ball $B$ such that $3B\subset 4B_0$, we can again prove via a standard argument a
Caccioppoli inequality:
\[ \bigg(\Xint-_B |{\nabla} u|^2 dw\bigg)^{1/2} \leq
\frac{C_1}{r} \bigg(\Xint-_{2\,B} |u-u_{2B,w}|^2\,dw\bigg)^{1/2}, \]
where $C_1=C(n,\Lambda/\lambda)[w]_{A_2}^{1/2}\ge 1$. Fix $q$ such that
\begin{equation}
\max\Big\{\frac{2\,(n-1)}{n}, r_w, \frac{2\,n\,r_w}{2+n\,r_w}\Big\}<q<2;
\label{eq:q-choice}
\end{equation}
such a $q$ exists since $r_w<2$. Our choice of $q$ guarantees that $2<q_w^*$
and also that $2<n\,q/(n-1)$. Then, by the weighted Poincar\'e inequality,~Theorem~\ref{thm:wtd-poincare},
\begin{equation}
\frac{1}{r} \bigg(\Xint-_{2\,B} |u-u_{2B,w}|^2\,dw\bigg)^{1/2}
\leq C_2\bigg(\Xint-_{2B} |{\nabla} u|^q\,dw\bigg)^{1/q},
\label{eq:w-Poincare-a}
\end{equation}
where $C_2=C(n)[w]_{A_2}^\kappa\ge 1$ and
$\kappa=\frac{n\,q-1}{n\,q\,(q-1)}$. (By our choice of $q$ we can get
this sharp estimate: see Remark \ref{remark-best-Poi}. Since $q<2$
we could write $[w]_{A_q}$, but we use that $[w]_{A_q}\le [w]_{A_2}$.)
If we combine these inequalities, we get a reverse H\"older inequality:
\[ \bigg(\Xint-_{B} |{\nabla} u|^2\,dw\bigg)^{1/2} \leq
C_1C_2\bigg(\Xint-_{2B} |{\nabla} u|^q\,dw\bigg)^{1/q}. \]
We now apply Gehring's lemma in the setting of spaces of
homogenous type (see Bj\"orn and
Bj\"orn~\cite[Theorem~3.22]{MR2867756}) to get that there exists $p_0>2$
such that for every such $B$,
\begin{equation} \label{eqn:gehring-bump}
\bigg(\Xint-_B |{\nabla} u|^{p_0}\,dw\bigg)^{1/p_0} \leq
C_0\bigg(\Xint-_{2B} |{\nabla} u|^2\,dw \bigg)^{1/2}.
\end{equation}
Moreover, we can take the following values:
$C_0=8C_1^2C_2^2[w]_{A_2}^{31}$ and
\begin{equation}
p_0 = 2 + \frac{2-q}{2^{4/q+1}C_1^2C_2^2 [w]_{A_2}^{6/q+17}}.
\label{eq:value-p0}
\end{equation}
In Section~\ref{section:L2-kato} below we will need these precise
values. Here, it suffices to note that in
inequality~\eqref{eqn:gehring-bump} we have $p_0>2$.
\subsection*{The Hodge projection}
Define the Hodge projection operator by
\[ T = {\nabla} L_w^{-1/2} ({\nabla} ( L_w^*)^{-1/2})^*,\]
where the adjoint operators are defined with respect to the inner product in $L^2(w)$.
As we noted in Section~\ref{section:prelim}, the Riesz transform is
bounded on $L^2(w)$; hence, the Hodge
projection is also bounded. By duality, $({\nabla} ( L_w^*)^{-1/2})^*\vec{f}=
-L_w^{-1/2} (w^{-1}\mathop{\rm div}(w\vec{f}))$, and so
\[ T \vec{f} = -{\nabla} L_w^{-1/2} L_w^{-1/2} (w^{-1}\mathop{\rm div}(w\vec{f})) = -{\nabla}
L_w^{-1} (w^{-1}\mathop{\rm div}(w\vec{f})). \]
Now fix $\vec{f} \in L^2(w,{\mathbb C}^n) \cap L^{p_0}(w,{\mathbb C}^n)$ such that
$\mathop{\rm supp}(\vec{f})\subset {\mathbb{R}^n} \setminus 4B_0$. Let $u\in H^1(w)$ be a
solution to the equation
\[ L_w u = w^{-1} \mathop{\rm div}(w\vec{f}); \]
since $A$ satisfies~\eqref{eqn:degen}, a standard Lax-Milgram argument
(cf.~\cite[Theorem~2.2]{fabes-kenig-serapioni82}) shows that $u$ exists.
Then
\[ T\vec{f} = -{\nabla} L_w^{-1} L_w u = -{\nabla} u, \]
where equality is in the sense of distributions. In particular, since
$\vec{f}=0$ on $4B_0$, $L_wu=0$ on $4B_0$. Therefore, we can
apply~\eqref{eqn:gehring-bump} to $u$: on any ball $B$ such that $3B
\subset 4B_0$,
\begin{multline*}
\bigg(\Xint-_B |T\vec{f}|^{p_0}\, dw\bigg)^{1/p_0}
= \bigg(\Xint-_B |{\nabla} u|^{p_0}\, dw\bigg)^{1/p_0}
\leq C_0 \bigg(\Xint-_{2B} |{\nabla} u|^{2}\, dw\bigg)^{1/2}
= C_0\bigg(\Xint-_{2B} |T\vec{f}|^{2}\, dw\bigg)^{1/2}.
\end{multline*}
As a consequence of this inequality, we have
by~\cite[Theorem~3.14]{auscher-martell07b} (see also Section~5 of the
same paper) that for all $q$, $2 \leq q < p_0$, $T :
L^q(w,{\mathbb C}^n)\rightarrow L^q(w,{\mathbb C}^n)$.
\subsection*{Boundedness of the Riesz transform}
To show that the Riesz transform ${\nabla} L^{-1/2}_w$ is
bounded, fix $q$ such that
\[
\max\left( p_-(L_w^*), r_w, p_0'\right)
=
\max\left( p_-(L_w^*), r_w, p_0',
\frac{nr_wp_-(L_w^*)}{nr_w+p_-(L_w^*)}\right) < q' < 2. \]
(The reason for including $p_-(L_w^*)$ will be made clear below.)
By the above argument we have that $T^*$ is bounded on $L^{q'}(w)$,
where $T^*\vec{f} = -{\nabla} (L^*_w)^{-1}
(w^{-1}\mathop{\rm div}(w\vec{f}))$. Furthermore, by
Proposition~\ref{prop:reverseRiesz}, we have that
\[ \| (L_w^*)^{1/2} f \|_{L^{q'}(w)}
\leq C\|{\nabla} f\|_{L^{q'}(w)}. \]
Therefore,
\begin{align*}
\| ({\nabla} L_w^{-1/2})^* \vec{f} \|_{L^{q'}(w)}
& = \| (L_w^*)^{-1/2}(w^{-1} \mathop{\rm div}(w\vec{f}))\|_{L^{q'}(w)} \\
& = \| (L_w^*)^{1/2} (L_w^*)^{-1}(w^{-1} \mathop{\rm div}(w\vec{f}))\|_{L^{q'}(w)}
\\
& \lesssim \| {\nabla} (L_w^*)^{-1}(w^{-1} \mathop{\rm div}(w\vec{f}))\|_{L^{q'}(w)} \\
& = \|T^* \vec{f}\|_{L^{q'}(w)} \\
&\lesssim \|\vec{f}\|_{L^{q'}(w)}.
\end{align*}
Hence, by duality we have that ${\nabla} L_w^{-1/2} : L^q(w)
\rightarrow L^q(w)$ for all $q$ such that
\[ 2 < q < \min\big(p_+(L_w), r_w', p_0 \big) = q_w; \]
here we have used the fact that by duality, $p_-(L_w^*)' = p_+(L_w)$.
\subsection*{Boundedness of the gradient of the semigroup}
Finally, we show that if $2< q < q_w$, then $\sqrt{t}{\nabla} e^{-tL_w} :
L^q(w)\rightarrow L^q(w)$. The desired estimate for $q_+(L_w)$
follows from this: by Proposition~\ref{prop:K}, part (4),
\[ q_+(L_w) =\sup \tilde{\mathcal{K}}(L_w) \geq q_w > 2. \]
Fix such a $q$; then by the above estimate for the Riesz transform,
\begin{multline*}
\|\sqrt{t} {\nabla} e^{-tL_w} f\|_{L^q(w)}
= \|{\nabla} L_w^{-1/2} (tL_w)^{1/2} e^{-tL_w} f\|_{L^q(w)} \\
\lesssim \|(tL_w)^{1/2} e^{-tL_w} f\|_{L^q(w)} = \|\varphi_t(L_w)
f\|_{L^q(w)},
\end{multline*}
where $\varphi_t(z) = (tz)^{1/2} e^{-tz}$. For each $t>0$ this is a
bounded holomorphic function on any sector $\Sigma_\mu$, $0<\mu<\pi/2$, with a bound independent of $t$.
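Indeed, since $t>0$ the map $z\mapsto t\,z$ preserves each such sector, and so
\begin{equation*}
\sup_{z\in\Sigma_\mu} |\varphi_t(z)|
=
\sup_{\zeta\in\Sigma_\mu} |\zeta|^{1/2}\,|e^{-\zeta}|
\le
\sup_{s>0} s^{1/2}\,e^{-s\,\cos\mu}
<\infty,
\end{equation*}
with a bound that depends only on $\mu$.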
Therefore, since $2<q<p_+(L_w)$, by Proposition~\ref{prop:B-K:weights}
we have that
\[ \|\sqrt{t} {\nabla} e^{-tL_w} f\|_{L^q(w)} \lesssim
\|\varphi_t\|_{\infty}
\|f\|_{L^q(w)}
\lesssim
\|f\|_{L^q(w)}
\]
and the bound is independent of $t$. This completes the proof that
$q_+(L_w)>2$.
\section{Riesz transform estimates}
\label{section:riesz}
In this section we prove $L^p(w)$ norm inequalities for the Riesz
transform ${\nabla} L_w^{-1/2}$. We have already proved such
inequalities for a small range of values $q>2$ in
Section~\ref{section:q-plus}. Here we prove the following result.
\begin{prop}\label{prop:ext-RT}
Let $q_-(L_w) <p < q_+(L_w)$. Then there exists a constant $C$ such that
\begin{equation}
\label{eq:Riesz} \|\nabla L_w^{-1/2} f\|_{L^p(w)} \le C\|f\|_{L^p(w)}.
\end{equation}
Furthermore, if $v \in A_{p/q_-(L_w)}(w)\cap
RH_{(q_+(L_w)/p)'}(w)$, then
\begin{equation}
\label{eq:Riesz:w} \|\nabla L_w^{-1/2} f\|_{L^p(v\,dw)} \le C\|f\|_{L^p(v\,dw)}.
\end{equation}
\end{prop}
To prove Proposition \ref{prop:ext-RT} we would like to follow the
same outline as the proof of Proposition \ref{prop:B-K:weights}. The first step---i.e.,
proving~\eqref{eq:Riesz} holds when $q_-(L_w)<p<2$--- does work with
the appropriate
changes. However, the second step (i.e., the proof that
\eqref{eq:Riesz:w} holds) runs into difficulties since $\nabla
L_w^{-1/2}$ and the auxiliary operators
$\mathcal{A}_r$ do not commute. One approach to overcoming this
obstacle would be to adapt the
proof in \cite{auscher-martell06} (see also \cite{auscher07}). In this
case we would need to use an $L^{p_0}(w)$-Poincar\'e inequality which
may not hold unless we assume $w\in A_{p_0}$. This would yield
estimates in the range $\max\{r_w,q_-(L_w)\}<p<q_+(L_w)$, analogous to
those in Proposition~\ref{prop:reverseRiesz}.
There is, however, an alternative approach. In
\cite{auscher-martell-08} the authors considered Riesz
transforms associated with the Laplace-Beltrami operator of a complete,
non-compact Riemannian manifold. Their proof avoids Poincar\'e
inequalities for $p$ close to $1$ as these may
not hold. Instead, they use a duality argument based
on ideas in \cite{bernicot-zhao}; this requires that they first prove
that the Riesz transform is bounded for $p>2$ in the appropriate range of
values. This reverses the order used in the proof of
Proposition~\ref{prop:B-K:weights}.
\begin{proof}[Proof of Proposition \ref{prop:ext-RT}]
For brevity, let $q_-=q_-(L_w)$ and $q_+=q_+(L_w)$. To implement
the approach sketched above, we divide the proof into two steps. First
we will prove that~\eqref{eq:Riesz} holds when $2<p<q_+$. We do so
using Theorem~\ref{theorem:2.2} and some ideas
from~\cite{auscher07,auscher-martell06}. We note that since the Riesz transform and $\mathcal{A}_r$ do not commute, we will
use an $L^2(w)$-Poincar\'e inequality. This holds since $w\in A_2$:
the problem with using the Poincar\'e inequality only occurs with exponents
less than $2$. The second step is to prove that
\eqref{eq:Riesz:w} holds by
adapting the proof in \cite{auscher-martell-08}. Here we will use
duality and a result from \cite{auscher-martell07b} that is
based on good-$\lambda$ inequalities. Inequality~\eqref{eq:Riesz}
then holds when $q_-<p<2$ by taking $v\equiv 1$.
\medskip
To apply Theorem \ref{theorem:2.2}, fix $2<p<q_+$
and let $T=\nabla L_w^{-1/2}$, $S=I$ and $\mathcal{D}=L^\infty_c$. Let
$p_0=2$ and fix $q_0$ such that $2<p<q_0<q_+$. As before we take
$\mathcal{A}_{r}=I-( I-e^{-r^{2}L_{w}})^{m}$, where $m$
will be chosen below. We first show that
\eqref{eqn:2.1} holds. Let $f\in L^\infty_c$ and decompose it as in \eqref{decomp-f}; then we
have
\begin{align*}
\bigg(\aver{B} |\nabla L_w^{-1/2} (I-e^{-r^2\,L_w})^m
f|^{2}\,dw\bigg)^{\frac1{2}}
\le
\sum_{j\ge 1}
\bigg(
\aver{B} |\nabla L_w^{-1/2} (I-e^{-r^2\,L_w})^m f_j|^{2}\,dw
\bigg)^{\frac1{2}}.
\end{align*}
To estimate the first term, note that $\nabla L_w^{-1/2}$ and
$e^{-r^2\,L_w}$ are bounded on $L^{2}(w)$ by
Theorems~\ref{thm:L2-gaffney} and~\ref{theorem:degen-kato}.
Hence,
\begin{equation}\label{RT-est:f1}
\Big( \aver{B} |\nabla L_w^{-1/2}(I-e^{-r^2\,L_w})^m
f_1|^{2}\,dw \Big)^{\frac1{2}}
\lesssim
\Big( \aver{4\,B} |f|^{2}\,dw\Big)^{\frac1{2}}.
\end{equation}
Fix $j\ge 2$; to get the desired $L^2$ estimates we will use the
$L^2(w)$ off-diagonal bounds for the gradient of the semigroup. If $h\in L^2(w)$, by
\eqref{defi-RT}
\begin{equation}\label{RT-repre}
\nabla L_w^{-1/2}(I-e^{-r^2\,L_w})^m h
=
\frac1{\sqrt{\pi}} \int_0^\infty \sqrt{t}\,\nabla \varphi(L_w,t)
h\,\frac{dt}{t},
\end{equation}
where $\varphi(z,t)=e^{-t\,z}\,(1-e^{-r^2\,z})^m \in
\H^\infty_0(\Sigma_\mu)$. We can therefore use the integral
representation \eqref{eqn:L2-holo-rep} for $\varphi(\cdot, t)$.
The function
$\eta(\cdot, t)$ in this representation satisfies
\begin{equation*}\label{eta-RT}
|\eta(z,t)|
\lesssim
\frac{r^{2\,m}}{(|z|+t)^{m+1}},
\qquad
z\in \Gamma, \; t>0.
\end{equation*}
By Theorem \ref{thm:L2-gaffney}, $\sqrt{z}\,\nabla
e^{-z\,L_w}\in\offw{2}{2}$; hence,
\begin{align}
&\bigg( \aver{B}
\bigg|
\int_{\Gamma} \eta(z)\, \sqrt{t}\,\nabla e^{-z\,L_w} f_j\,dz
\bigg|^{2}\,dw\bigg)^{\frac1{2}} \nonumber
\\
&\quad\le
\int_{\Gamma}
\bigg(
\aver{B} |\sqrt{z}\,\nabla e^{-z\,L_w}
f_j|^{2}\,dw\bigg)^{\frac1{2}}\,
\frac{\sqrt{t}}{\sqrt{|z|}}\,|\eta(z)|\,|dz| \nonumber
\\
&\quad\lesssim
2^{j\,\theta_1} \int_{\Gamma}
\dec{\frac{2^j\,r}{\sqrt{|z|}}}^{\theta_2}
\expt{-\frac{\alpha\,4^j\,r^2}{|z|}}\,
\frac{\sqrt{t}}{\sqrt{|z|}}\,|\eta(z)|\,|dz| \,
\bigg(\aver{C_j(B)}
|f|^{2}\,dw\bigg)^{\frac1{2}} \nonumber
\\
&\quad\lesssim
2^{j\,\theta_1} \int_0^\infty
\dec{\frac{2^j\,r}{\sqrt{s}}}^{\theta_2}
\expt{-\frac{\alpha\,4^j\,r^2}{s}}\, \frac{\sqrt{t}}{\sqrt{s}}
\,\frac{r^{2\,m}}{(s+t)^{m+1}}\,ds\, \bigg(\aver{C_{j}(B)}
|f|^{2}\,dw\bigg)^{\frac1{2}}.
\label{est-Riesz-aux}
\end{align}
When $2\, m>\theta_{2}$,
\begin{align}\label{eqn:integral}
\int_{0}^\infty\int_0^\infty
\dec{\frac{2^j\,r}{\sqrt{s}}}^{\theta_2}
\expt{-\frac{\alpha\,4^j\,r^2}{s}}\,
\frac{\sqrt{t}}{\sqrt{s}}\,\frac{r^{2\,m}}{(s+t)^{m+1}}\,ds\frac
{dt}t = C\, 4^{-j\, m}.
\end{align}
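For the reader's convenience, here is a short way to check this identity (the constant $C$ depends only on $m$, $\theta_2$ and $\alpha$). Integrating first in $t$, for each fixed $s>0$,
\begin{equation*}
\int_0^\infty \frac{\sqrt{t}}{(s+t)^{m+1}}\,\frac{dt}{t}
=
s^{-m-\frac12}\int_0^\infty \frac{d\tau}{\sqrt{\tau}\,(1+\tau)^{m+1}}
=
c_m\, s^{-m-\frac12};
\end{equation*}
the change of variables $u=4^j\,r^2/s$ in the remaining integral then gives
\begin{equation*}
c_m\, r^{2\,m}\int_0^\infty
\dec{\frac{2^j\,r}{\sqrt{s}}}^{\theta_2}
\expt{-\frac{\alpha\,4^j\,r^2}{s}}\,
\frac{ds}{s^{m+1}}
=
c_m\,4^{-j\,m}\int_0^\infty \dec{\sqrt{u}}^{\theta_2}\,e^{-\alpha\,u}\,u^{m-1}\,du
=
C\,4^{-j\,m}.
\end{equation*}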
If we insert this into the representation \eqref{eqn:L2-holo-rep}
we get
\begin{align}
\bigg( \aver{B} |\nabla L_w^{-1/2}(I-e^{-r^2\,L_w})^m f_j|^{2}\,dw\bigg)^{\frac1{2}}
&\lesssim
\int_0^\infty
\Big( \aver{B}
|\sqrt{t}\,\nabla \varphi(L_w,t) f_j|^{2}\,dw\Big)^{\frac1{2}} \, \frac{dt}{t} \nonumber
\\
&\lesssim 2^{j\,(\theta_1-2\,m)}
\Big(\aver{C_{j}(B)}
|f|^{2}\,dw\Big)^{\frac1{2}}. \label{RT-est:fj}
\end{align}
If we now combine~\eqref{RT-est:f1} and \eqref{RT-est:fj} we get
\eqref{eqn:2.1} with $g(j)=C_m\,2^{j\,(\theta_1-2\,m)}$; if we also
fix $2m>\theta_1$, we get that $\sum g(j)<\infty$.
\medskip
We now show that~\eqref{eqn:2.2} holds. As we remarked above, the
Riesz transform does not commute with $\mathcal{A}_r$. To overcome
this obstacle, we will prove an off-diagonal estimate for the gradient
of the semigroup (using the $L^2(w)$-Poincar\'e inequality), and
then use an approximation argument to get the desired estimate for the
Riesz transform.
More precisely, we claim
that for every $f \in H^1(w)$ and
$1\le k\le m$,
\begin{equation}
\label{eq:est-Riesz:2}
\bigg( \aver{B} |\nabla
e^{-k\,r^2\,L_w}f|^{q_0}\,dw\bigg)^{\frac1{q_0}}
\le
\sum_{j\ge 1} g(j)\,
\bigg( \aver{2^{j+1}\,B}
|\nabla f|^{2}\,dw\bigg)^{\frac1{2}},
\end{equation}
where
$g(j)=C_m\,2^j\,\sum_{l\ge j} 2^{l\,\theta}\,e^{-\alpha\,4^l}$.
Assume for the moment that~\eqref{eq:est-Riesz:2} holds. Then for
every $\epsilon>0$ we can apply this estimate to $S_{\varepsilon}f$ (defined
by~\eqref{defi-RT:trunc}) since $S_{\varepsilon}f\in H^{1}(w)$. Moreover, we
have that $\mathcal{A}_r$ and $S_{\varepsilon}$ commute, and so if we expand
$\mathcal{A}_r=I-(I-e^{-r^2\,L})^m$ and apply~\eqref{eq:est-Riesz:2}, we get
\[ \bigg( \aver{B} |\nabla S_{\varepsilon} \mathcal{A}_{r}
f|^{q_0}\,dw\bigg)^{\frac1{q_0}}
\le
C_m\sum_{j\ge 1} g(j)\,
\bigg( \aver{2^{j+1}\,B}
|\nabla S_{\varepsilon} f|^{2}\,dw\bigg)^{\frac1{2}}. \]
If we let $\varepsilon$ go to $0$, we obtain \eqref{eqn:2.2}. (The
justification of this uses the observations made in
Section~\ref{section:prelim} after~\eqref{defi-RT:trunc} and is left
to the reader.) Moreover, we have that $\sum_{j\ge 1}g(j)<\infty$,
and so by Theorem~\ref{theorem:2.2} with $v\equiv 1$ (which
trivially satisfies $v\in A_{p/2}(w)\cap RH_{(q_0/p)'}(w)$) we
have that \eqref{eq:Riesz} holds for $f\in L^\infty_c$ and for
every $2<p<q_+$.
To complete this step we need to prove~\eqref{eq:est-Riesz:2}. Fix
$1\le k\le m$ and $f\in H^{1}(w)$.
Let $h=f-f_{4B,w}$, where $f_{4B,w}=\Xint-_{4B} f \,dw$. Then by the conservation property (see
\cite{DCU-CR2013}, or the proof in \cite[Section 2.5]{auscher07}),
$e^{-t\, L_w}1=1$ for all $t>0$, and so
\[ \nabla e^{-k\,r^2\,L_w} f
=
\nabla e^{-k\,r^2\,L_w} (f-f_{4\,B,w})
=
\nabla e^{-k\,r^2\,L_w} h
=
\sum_{j\ge 1} \nabla e^{-k\,r^2\,L_w}h_j, \]
where $h_j=h\,\bigchi_{C_j(B)}$. Hence,
$$
\bigg(\aver{B} |\nabla
e^{-k\,r^2\,L_w}f|^{q_0}\,dw\bigg)^{\frac1{q_0}}
\le
\sum_{j\ge 1}
\bigg(\aver{B} |\nabla
e^{-k\,r^2\,L_w}h_j|^{q_0}\,dw\bigg)^{\frac1{q_0}}.
$$
Since $2<q_0<q_+$, by Proposition~\ref{prop:K},
$\sqrt{t}\,\nabla e^{-t\,L_w}\in\offw{2}{q_0}$. If we apply this and
the $L^{2}(w)$-Poincar\'e inequality (see Remark
\ref{remark:Poincare-non-smooth} with $p=q=2$), then for each $j\geq
1$ we get
\begin{align*}
&\bigg(\aver{B}
|\nabla e^{-k\,r^2\,L_w}h_j|^{q_0}\,dw\bigg)^{\frac1{q_0}} \\
& \qquad \lesssim
\frac{2^{j\,(\theta_1+\theta_2)}\,e^{-\alpha\,4^j}}r\,
\bigg(\aver{C_j(B)}
|h_j|^{2}\,dw\bigg)^{\frac1{2}}
\nonumber
\\
&\qquad\le
\frac{2^{j\,(\theta_1+\theta_2)}\,e^{-\alpha\,4^j}}r\,
\bigg(\aver{2^{j+1}\,B}|f-f_{4\,B,w}|^{2}\,dw\bigg)^{\frac1{2}}
\nonumber
\\
&\qquad\le
\frac{2^{j\,(\theta_1+\theta_2)}\,e^{-\alpha\,4^j}}r\,
\bigg(
\bigg(\aver{2^{j+1}\,B}\hskip-7pt
|f-f_{2^{j+1}\,B,w}|^{2}\,dw\bigg)^{\frac1{2}} +\sum_{l=2}^j
|f_{2^l\,B,w}-f_{2^{l+1}\,B,w}| \bigg)
\nonumber
\\
&\qquad\lesssim
\frac{2^{j\,(\theta_1+\theta_2)}\,e^{-\alpha\,4^j}}r\,\sum_{l=1}^{j}
\bigg(\aver{2^{l+1}\,B}
|f-f_{2^{l+1}\,B,w}|^{2}\,dw\bigg)^{\frac1{2}}
\nonumber
\\
&\qquad\lesssim
2^{j\,(\theta_1+\theta_2)}\,e^{-\alpha\,4^j}\,\sum_{l=1}^{j} 2^{l}\,
\bigg(\aver{2^{l+1}\,B}
|\nabla f|^{2}\,dw\bigg)^{\frac1{2}}.
\end{align*}
If we combine these two estimates and exchange the order of summation
we get~\eqref{eq:est-Riesz:2} with
$\theta=\theta_{1}+\theta_{2}$. This completes the proof
that~\eqref{eq:Riesz} holds when $2<p<q_+$.
\medskip
For the second step of our proof we show
that~\eqref{eq:Riesz:w} holds for all $p$, $q_-<p<q_+$ and
$v\in A_{p/q_-}(w)\cap RH_{(q_+/p)'}(w)$. Fix such a $p$ and $v$;
then by the openness
properties of $A_q$ and $RH_s$ weights,
there exist
$p_0, q_0$ such that
\begin{equation*}
q_-<p_0<\min\{p,2\}\le \max\{p,2\}<q_0<q_+
\quad \mbox{and} \quad
v\in A_{p/p_0}(w) \cap RH_{(q_0/p)'}(w).
\end{equation*}
By the duality properties of weights~\cite[Lemma 4.4]{auscher-martell07b},
\[ u=v^{1-p'}\in A_{p'/q_0'}(w)\cap RH_{(p_0'/p')'}(w). \]
Let
$T=\nabla L_w^{-1/2}$; then $T$ is bounded from
$L^p(\mathbb{R}^n,v\,dw)$ to $L^p(\mathbb{R}^n;\mathbb{C}^n, v\,dw)$ if and only if
$T^*$ is bounded from $L^{p'}(\mathbb{R}^n;\mathbb{C}^n, u\,dw)$ to
$L^{p'}(\mathbb{R}^n, u\,dw)$. (Note that $T$ takes scalar-valued
functions to vector-valued functions and $T^*$ the opposite.)
Therefore, it will suffice to prove the boundedness of $T^*$. We will
do so using a particular case of \cite[Theorem
3.1]{auscher-martell07b}. This result is stated there in the
Euclidean setting but it extends to spaces of homogeneous type. Here we
give the weighted version we need:
see~\cite[Section 5]{auscher-martell07b}.
\begin{theor}\label{theor:good-lambda:w}
Fix $1<q<\infty$, $a\ge 1$ and $u\in RH_{s'}(w)$, $1<s<\infty$.
Then there exists $C>1$ with the
following property: suppose $F\in L^1(w)$ and $G$ are
non-negative measurable functions such that for any ball $B$ there
exist non-negative functions $G_B$ and $H_B$ with $F(x)\le G_B(x)+
H_B(x)$ for a.e. $x\in B$ and, for all $x\in B$,
\begin{equation}\label{H-Q:G-Q}
\Big(\aver{B} H_B^q\, dw \Big)^{\frac1q}
\le
a\, M_w F(x),
\qquad\qquad
\aver{B} G_B\, dw
\le
G(x),
\end{equation}
where $M_w$ is the Hardy-Littlewood maximal function with respect to $dw$.
Then for $1<t< q/s$,
\begin{equation}\label{good-lambda:Lp:w}
\|M_w F\|_{L^t(u\,dw)}
\le
C\,\|G\|_{L^t(u\,dw)}.
\end{equation}
\end{theor}
\medskip
To apply Theorem~\ref{theor:good-lambda:w}, fix
$\vec{f}\in L^\infty_c (\mathbb{R}^n;\mathbb{C}^n, w)$, and let $h=T^* \vec{f}$
and $F=|h|^{q_0'}$. Then $F\in L^1(w)$: by the argument above,
since $2<q_0<q_+$, $T$ is bounded from $L^{q_0}(\mathbb{R}^n, w)$ to
$L^{q_0}(\mathbb{R}^n;\mathbb{C}^n, w)$. Thus, $T^*$ is bounded
from $L^{q_0'}(\mathbb{R}^n;\mathbb{C}^n, w)$ to $L^{q_0'}(\mathbb{R}^n, w)$.
Now let
$\mathcal{A}_{r}=I- (I-e^{-r^2\, L_w})^m$, where $m>0$ will be fixed below.
Given a ball $B$ with radius $r$, we define
\[ F
\le
2^{q_0'-1}\,|(I-\mathcal{A}_{r})^* h|^{q_0'}
+
2^{q_0'-1}\,|\mathcal{A}_{r}^* h|^{q_0'}
\equiv G_B+H_B, \]
where, as before, the adjoint is with
respect to $L^2(w)$. To complete the proof, suppose for the
moment that we could prove~\eqref{H-Q:G-Q} with $q=p_0'/q_0'$
and $G=M_w(|\vec{f}|^{q_0'})$.
Since $u \in RH_{(p_0'/p')'}$, by the openness property of reverse
H\"older weights, $u \in RH_{s'}$ for some $s<p_0'/p'$.
Then if we let $t=p'/q_0'=(p_0'/q_0')/(p_0'/p')<q/s$, we have
$u \in A_t(w)$, and so $M_w$ is bounded on $L^t(u\,dw)$. Therefore,
by~\eqref{good-lambda:Lp:w},
\begin{equation*}
\|T^* \vec{f}\|_{L^{p'}(u\,dw)}^{q_0'}
\le
\|M_w F\|_{L^t(u\,dw)}
\le
C\,
\|G\|_{L^t(u\,dw)}
=
C\,
\|M_w(|\vec{f}|^{q_0'})\|_{L^t(u\,dw)}
\lesssim
\big\|\vec{f}\big\|_{L^{p'}(u\,dw)}^{q_0'}.
\end{equation*}
To complete the proof we need to show that ~\eqref{H-Q:G-Q} holds. We
first estimate $H_B$. By
duality there exists $g\in L^{p_0}(B,dw/w(B))$ with norm $1$ such that
for all $x\in B$,
\begin{align*}
\bigg(\aver{B} H_B^q\,dw\bigg)^{\frac1{q\,q_0'}}
&
\lesssim
w(B)^{-1}\,\int_{\mathbb{R}^n} |h|\,|\mathcal{A}_{r} g|\,dw
\\
&\lesssim
\sum_{j=1}^\infty 2^{j\,D} \bigg(\aver{C_j(B)} |h|^{q_0'}\,dw\bigg)^\frac1{q_0'}\,\bigg(\aver{C_j(B)} |\mathcal{A}_{r} g|^{q_0}\,dw\bigg)^\frac1{q_0}
\\
&\lesssim
M_w F(x)^\frac1{q_0'}\sum_{j=1}^\infty 2^{j\,(D+\theta_1+\theta_2)} e^{-\alpha\,4^{j}} \bigg(\aver{B} |g|^{p_0}\,dw\bigg)^\frac1{p_0}
\\
&\lesssim
M_w F(x)^\frac1{q_0'}
,
\end{align*}
where in the second to last inequality we used the fact that by our choice of $p_0,\,q_0$,
$e^{-t\,L_w}\in\offw{p_0}{q_0}$, and so $\mathcal{A}_r$ is as well.
\medskip
We now estimate $G_B$. Again by duality there exists
$g\in L^{q_0}(B,dw/w(B))$ with norm $1$ such that for all $x\in B$,
\begin{align}
\bigg(\aver{B} G_B\,dw\bigg)^{\frac1{q_0'}}
&
\lesssim
w(B)^{-1}\,\int_{\mathbb{R}^n} |\vec{f}|\,|T(I-\mathcal{A}_{r}) g|\,dw
\nonumber\\
&\lesssim
\sum_{j=1}^\infty 2^{j\,D} \bigg(\aver{C_j(B)} |\vec{f}|^{q_0'}\,dw\bigg)^\frac1{q_0'}\,\bigg(\aver{C_j(B)} |T(I-\mathcal{A}_{r}) g|^{q_0}\,dw\bigg)^\frac1{q_0}
\nonumber\\
&\le
M_w(|\vec{f}|^{q_0'})(x)^\frac1{q_0'}\sum_{j=1}^\infty 2^{j\,D} \bigg(\aver{C_j(B)} |T(I-\mathcal{A}_{r}) g|^{q_0}\,dw\bigg)^\frac1{q_0}.
\label{est:GB}
\end{align}
To estimate each term in the sum we argue as in the first half of the
proof. When $j=1$, $\nabla L_w^{-1/2}$ and $e^{-r^2\,L_w}$ are
bounded on $L^{q_0}(w)$ by the first part of the proof and
Theorem~\ref{thm:L2-gaffney}. Hence,
\begin{equation}\label{RT-est:GB-1}
\bigg( \aver{4\,B} |\nabla L_w^{-1/2}(I-e^{-r^2\,L_w})^m g|^{q_0}\,dw \bigg)^{\frac1{q_0}}
\lesssim
\bigg( \aver{B} |g|^{q_0}\,dw\bigg)^{\frac1{q_0}}
=1.
\end{equation}
For $j\ge 2$ we use the integral representation
\eqref{RT-repre}. If we estimate as in \eqref{est-Riesz-aux}, with
the roles of $B$ and $C_j(B)$ switched and using the fact that
$\sqrt{z}\,\nabla e^{-z\,L_w}\in\offw{q_0}{q_0}$ since $2<q_0<q_+$, we
see that
\begin{align*}
&\bigg( \aver{C_j(B)}
\bigg|
\int_{\Gamma} \eta(z)\, \sqrt{t}\,\nabla e^{-z\,L_w} g\,dz
\bigg|^{q_0}\,dw\bigg)^{\frac1{q_0}} \nonumber
\\
&\quad\le
\int_{\Gamma}
\bigg(
\aver{C_j(B)} |\sqrt{z}\,\nabla e^{-z\,L_w}
g|^{q_0}\,dw\bigg)^{\frac1{q_0}}\,
\frac{\sqrt{t}}{\sqrt{|z|}}\,|\eta(z)|\,|dz| \nonumber
\\
&\quad\lesssim
2^{j\,\theta_1} \int_{\Gamma}
\dec{\frac{2^j\,r}{\sqrt{|z|}}}^{\theta_2}
\expt{-\frac{\alpha\,4^j\,r^2}{|z|}}\,
\frac{\sqrt{t}}{\sqrt{|z|}}\,|\eta(z)|\,|dz| \,
\bigg(\aver{B}
|g|^{q_0}\,dw\bigg)^{\frac1{q_0}} \nonumber
\\
&\quad\lesssim
2^{j\,\theta_1} \int_0^\infty
\dec{\frac{2^j\,r}{\sqrt{s}}}^{\theta_2}
\expt{-\frac{\alpha\,4^j\,r^2}{s}}\, \frac{\sqrt{t}}{\sqrt{s}}
\,\frac{r^{2\,m}}{(s+t)^{m+1}}\,ds.
\end{align*}
If we take $2\, m>\theta_{2}$, we can combine this
with~\eqref{eqn:integral}. We can then insert this estimate into the representation
\eqref{eqn:L2-holo-rep} to get that for every $j\ge 2$,
\begin{multline}
\bigg( \aver{C_j(B)} |\nabla L_w^{-1/2}(I-e^{-r^2\,L_w})^m g|^{q_0}\,dw\bigg)^{\frac1{q_0}}
\\
\lesssim
\int_0^\infty
\bigg( \aver{C_j(B)}
|\sqrt{t}\,\nabla \varphi(L_w,t) g|^{q_0}\,dw\bigg)^{\frac1{q_0}} \, \frac{dt}{t}
\lesssim 2^{j\,(\theta_1-2\,m)}.
\label{RT-est:GB-j}
\end{multline}
Taken together, \eqref{est:GB}, \eqref{RT-est:GB-1} and \eqref{RT-est:GB-j} yield
$$
\bigg(\aver{B} G_B\,dw\bigg)^{\frac1{q_0'}}
\lesssim
M_w(|\vec{f}|^{q_0'})(x)^\frac1{q_0'}\sum_{j=1}^\infty 2^{j\,(D+\theta_1-2\,m)}
\lesssim
M_w(|\vec{f}|^{q_0'})(x)^\frac1{q_0'}
=
G(x)^\frac1{q_0'}
,
$$
provided we take $m$ large enough so that $D+\theta_1-2\,m<0$. This
completes the estimate of $H_B$ and $G_B$ and so completes our proof.
\end{proof}
\section{Square function estimates for the gradient of the semigroup}
\label{section:square-function-gradient}
In this section we prove $L^p(w)$ estimates for the vertical square function
\[ G_{L_{w}}f( x)
=\bigg( \int_{0}^{\infty }|t^{1/2}\nabla e^{-tL_{w}}f(x)|^{2}\frac{dt}{t}
\bigg) ^{1/2}. \]
\begin{prop} \label{prop:grad-square-function}
Let $q_-(L_w) < p < q_+(L_w)$. Then
\begin{equation} \label{eqn:grad-square-function-unwtd}
\left\Vert G_{L_{w}}f\right\Vert
_{L^{p}\left( w\right) }\lesssim \left\Vert f\right\Vert _{L^{p}\left(
w\right)}.
\end{equation}
Furthermore, if $v \in A_{p/q_-(L_w)}(w)\cap
RH_{(q_+(L_w)/p)'}(w)$, then
\begin{equation} \label{eqn:-grad-square-function-wt}
\left\Vert G_{L_{w}}f\right\Vert
_{L^{p}(v\,dw) }\lesssim \left\Vert f\right\Vert _{L^{p}(v\,dw)}.
\end{equation}
\end{prop}
We can also prove a reverse inequality for $G_{L_w}$. To do so we need
to introduce an auxiliary operator. Define the
weighted Laplacian by $\Delta _{w}=- w^{-1}\mathop{\rm div} w\nabla $: i.e.,
$\Delta_w$ is the
operator $L_w$ if we take the matrix $A$ to
be $wI$, where $I$ is the identity matrix.
\begin{prop} \label{prop:grad-square-reverse}
Let $q_+(\Delta_w)' < p < \infty$. Then
\begin{equation} \label{eqn:grad-square-reverse-unwtd}
\Vert f \Vert_{L^{p}(w) }
\lesssim \Vert G_{L_{w}} f \Vert _{L^{p}( w)}.
\end{equation}
Furthermore, if $v \in A_{p/q_+(\Delta_w)'}(w)$, then
\begin{equation} \label{eqn:-grad-square-reverse-wt}
\Vert f\Vert_{L^{p}(v\,dw) }
\lesssim \Vert G_{L_{w}} f\Vert _{L^{p}(v\,dw)}.
\end{equation}
\end{prop}
\medskip
\begin{proof}[Proof of Proposition~\textup{\ref{prop:grad-square-function}}]
The proof could be done in a way similar to that for the square
function $g_{L_w}$ in Section~\ref{section:square-function}. However,
we will give a shorter proof that uses the Riesz transform estimates
from Section~\ref{section:riesz}.
Let $q_-=q_-(L_w)$ and $q_+=q_+(L_w)$. Fix $p$,
\[ q_{-}=p_-(L_w)<p<q_+\leq p_+(L_w), \]
and $v\in A_{p/q_{-}}(w) \cap
RH_{\left( q_{+}/p\right) ^{\prime }}(w)$. Then by Proposition
\ref{prop:ext-RT}, the Riesz transform is bounded on $L^{p}(v\,dw)$, and so by
Lemma \ref{lemma-7.4} it has a bounded extension to
$L_{\mathbb{H}}^{p}(v\,dw) $: i.e., if
$g\left( x,t\right) \in L_{\mathbb{H}}^{p}(v\,dw) $, then
$\| \nabla L_{w}^{-1/2}g\| _{L_{\mathbb{H}}^{p}(v\,dw) }\lesssim
\|g\|_{L_{\mathbb{H}}^{p}(v\,dw) }, where the extension of
$\nabla L_{w}^{-1/2}$ to
$\mathbb{H}$-valued functions is defined for $x\in \mathbb{R}^n$ and $t>0$ by $(\nabla L_{w}^{-1/2}
g)(x,t)= \nabla L_{w}^{-1/2}\big( g(\cdot,t)\big)(x)$.
Define $g_{f}\left( x,t\right) =\left( tL_{w}\right) ^{1/2%
}e^{-tL_{w}}f(x) $ and $G_{f}\left( x,t\right) =t^{1/2%
}\nabla e^{-tL_{w}}f(x) $;
then we clearly have
$\left\Vert g_{L_{w}}f\right\Vert _{L^{p}\left( v\,dw\right)
}=\left\Vert g_{f}\right\Vert _{L_{\mathbb{H}}^{p}\left( v\,dw\right)
}$ and
$\left\Vert G_{L_{w}}f\right\Vert _{L^{p}\left( v\,dw\right)
}=\left\Vert G_{f}\right\Vert _{L_{\mathbb{H}}^{p}\left( v\,dw\right)
}$. Furthermore, $G_f(x,t)=\nabla L_{w}^{-1/2} (g_f(\cdot,t))(x)=(\nabla L_{w}^{-1/2} g_f)(x,t)$. Hence,
\begin{multline*}
\left\Vert G_{L_{w}}f\right\Vert _{L^{p}(v\,dw) }
=
\left\Vert
G_f \right\Vert _{L_{\mathbb{H}}^{p}(v\,dw)}
=\left\Vert
\nabla L_{w}^{-1/2} g_f\right\Vert _{L_{\mathbb{H}}^{p}(v\,dw)}
\\
\lesssim \left\Vert g_f\right\Vert
_{L_{\mathbb{H}}^{p}(v\,dw)}=\left\Vert g_{L_{w}}f\right\Vert _{L^{p}\left(
v\,dw\right) }
\lesssim
\left\Vert f\right\Vert _{L^{p}\left(
v\,dw\right) }
.
\end{multline*}
To prove the last inequality we used Proposition
\ref{prop:square-function}; we also used the fact that
$q_{-}=p_-(L_w)<p<q_+\leq p_+(L_w)$ and $v\in A_{p/q_{-}}(w) \cap
RH_{\left( q_{+}/p\right) ^{\prime }}(w)$, which together imply that $v\in A_{p/p_{-}}(w) \cap
RH_{\left( p_{+}/p\right) ^{\prime }}(w)$.
This proves~\eqref{eqn:-grad-square-function-wt}. To prove
inequality~\eqref{eqn:grad-square-function-unwtd}, we take $v\equiv 1$.
\end{proof}
\bigskip
To prove Proposition~\ref{prop:grad-square-reverse} we need the following
identity relating $G_{L_w}$ and $\Delta_w$. It is a straightforward
extension of a similar unweighted result given in
\cite[Section~7.1]{auscher07}. For completeness we include the proof.
\begin{lemma}
\label{lemma-dual-square}If $f,g\in L_{c}^{\infty }(w) $ then%
\begin{equation*}
\left\vert \int_{\mathbb{R}^{n}}f(x) \overline{g(x) }%
~dw\right\vert \leq \left( \Lambda +1\right) \int_{\mathbb{R}%
^{n}}G_{L_{w}}f(x) ~\overline{G_{\Delta _{w}}g(x) }%
~dw.
\end{equation*}%
\end{lemma}
\begin{proof}
By the definition and properties of the operators $L_{w}$ and
$\Delta_{w}$ we have that
\begin{align*}
& \int_{\mathbb{R}^{n}}f(x) \overline{g(x)}\,dw \\
& \qquad \qquad =\lim_{\varepsilon \downarrow 0}\int_{\mathbb{R}^{n}}e^{-\varepsilon
L_{w}}f(x)\overline{e^{-\varepsilon \Delta_w }g(x)}\,dw-\lim_{R\uparrow \infty }\int_{%
\mathbb{R}^{n}}e^{-RL_{w}}f(x)\overline{e^{-R\Delta_w }g(x)}\,dw \\
&\qquad \qquad =-\int_{0}^{\infty }\frac{d}{dt}\int_{\mathbb{R}^{n}}e^{-tL_{w}}f(x)\overline{%
e^{-t\Delta_w }g(x)}\,dw\,dt \\
&\qquad \qquad =\int_{0}^{\infty }\int_{\mathbb{R}^{n}}\left( L_{w}e^{-tL_{w}}f(x)\overline{%
e^{-t\Delta_w }g(x)}+e^{-tL_{w}}f(x)\overline{\Delta_w e^{-t\Delta_w }g(x)}\right)\,dw\,dt \\
&\qquad \qquad =\int_{0}^{\infty }\int_{\mathbb{R}^{n}}\big( A(x)w(x)^{-1} +I\big) \nabla e^{-tL_{w}}f(x) \cdot\overline{%
\nabla e^{-t\Delta_w }g(x) }\,dw\,dt.
\end{align*}
Since $\| Aw^{-1}\| _{\infty }\leq \Lambda $, if we apply
H\"older's inequality in the $t$ variable we get the desired result.
\end{proof}
\begin{proof}[Proof of Proposition~\textup{\ref{prop:grad-square-reverse}}]
As a consequence of the Gaussian estimates for weighted operators
with real symmetric coefficients that were proved
in~\cite{cruz-uribe-riosP,DCU-CR2014}, we have that the semigroup $e^{-t\Delta_w}$
satisfies off-diagonal estimates on balls: i.e., $e^{-t\Delta_w}\in \offw{1}{\infty}$. In
particular, $q_-(\Delta_w) = p_-(\Delta_w)=1$. Further, by
the results in Section~\ref{section:q-plus} we have that
$q_+(\Delta_w)>2$.
Therefore, by Proposition~\ref{prop:grad-square-function},
if $1<p' < q_+(\Delta_w)$, and
\begin{equation} \label{eqn:dual-wt}
u \in A_{p'}(w)\cap RH_{(q_+(\Delta_w)/p')'}(w),
\end{equation}
then
\begin{equation} \label{eqn:foobar}
\|G_{\Delta_w} f\|_{L^{p'}(u\,dw)} \lesssim \|f\|_{L^{p'}(u\,dw)}.
\end{equation}
We want to apply inequality~\eqref{eqn:foobar} with $u=v^{1-p'}$.
By~\cite[Lemma~4.4]{auscher-martell07b}, the
condition~\eqref{eqn:dual-wt} is equivalent to
$v \in A_{p/q_+(\Delta_w)'}(w)$.
Now fix $f,g\in L_{c}^{\infty }$, and a weight
$v\in A_{p/q_+(\Delta_w)'}(w)$. Then by Lemma~\ref{lemma-dual-square}, for
$q_{+}(\Delta_w)'<p<\infty$,
\begin{align*}
\left\vert \int_{\mathbb{R}^{n}}f(x) g(x)
~dw\right\vert
&\leq \left( \Lambda +1\right) \int_{\mathbb{R}%
^{n}}G_{L_{w}}f(x) ~G_{\Delta _{w}}g(x) ~dw \\
&=\left( \Lambda +1\right) \int_{\mathbb{R}^{n}}G_{L_{w}}f(x)
~G_{\Delta _{w}}g(x) \,v^{1/p}\,v^{-1/p}\,dw \\
&\leq \left( \Lambda +1\right) \left\Vert G_{L_{w}}f\right\Vert
_{L^{p}( v\,dw) }\left\Vert G_{\Delta _{w}}g\right\Vert
_{L^{p^{\prime }}( v^{1-p'}\,dw) } \\
&\lesssim \left\Vert G_{L_{w}}f\right\Vert _{L^{p}( v\,dw)}
\left\Vert g\right\Vert _{L^{p^{\prime }}( v^{1-p'}\,dw)};
\end{align*}
the last inequality follows from~\eqref{eqn:foobar}.
If we take $g=\mathrm{sign}\left( f\right) \left\vert f\right\vert
^{p-1}v$, we get
\[
\left\Vert f\right\Vert _{L^{p}( v\,dw) } ^p
\lesssim \left\Vert
G_{L_{w}}f\right\Vert _{L^{p}( v\,dw) }\left\Vert \left\vert
f\right\vert ^{p-1}v\right\Vert _{L^{p^{\prime }}( v^{1-p'}\,dw) }
=\left\Vert G_{L_{w}}f\right\Vert _{L^{p}( v\,dw) }\left\Vert
f\right\Vert _{L^{p}( v\,dw) }^{p/p^{\prime }}.
\]
This immediately gives us
the desired inequality.
\end{proof}
\section{Unweighted $L^2$ Kato estimates}
\label{section:L2-kato}
In this section we prove unweighted $L^2$ estimates for the
operators we have considered in the previous sections. These will all
be consequences of the weighted $L^p(v\,dw)$ estimates we have already
proved: it will only be necessary to find further conditions on $w\in
A_2$ so that
the weight $v=w^{-1}$ satisfies the requisite conditions.
We are particularly interested in power weights and we recall some
well-known facts about them. Define
$w_\alpha(x)=|x|^{\alpha}$, $\alpha>-n$; this restriction guarantees that
$w_\alpha$ is locally integrable. We can exactly determine the
Muckenhoupt $A_p$ and reverse H\"older $RH_s$ classes of these weights
in terms of $\alpha$: $w_\alpha\in A_1$ if and only if $-n< \alpha\le 0$, and for
$1<p<\infty$, $w_\alpha\in A_p$ if and only if $-n<\alpha <n\,(p-1)$. Furthermore,
$w_\alpha\in RH_\infty$ if and only if $0\le \alpha<\infty$, and for $1<q<\infty$,
$w_\alpha\in RH_q$ if and only if $-n/q<\alpha<\infty$. Hence, we easily see that
\begin{equation}\label{sw-rw:w-alpha}
r_{w_{\alpha}}=\max\{1, 1+\alpha/n\},
\qquad\quad
s_{w_{\alpha}}=\big(\max\{1, (1+\alpha/n)^{-1}\}\big)'.
\end{equation}
We first consider the semigroup $e^{-tL_w}$, the functional calculus,
and the square function $g_{L_w}$, since these estimates will depend
on $p_-(L_w)$ and $p_+(L_w)$ and we have good estimates for these
quantities.
\begin{theor} \label{thm:semigroup-nowt}
Given a weight $w\in A_2$, suppose $1\leq r_w<1+\frac{2}{n}$ and
$s_w>\frac{n}{2}r_w+1$. Then $e^{-tL_w} : L^2 \rightarrow L^2$ is
uniformly bounded for all $t>0$. Similarly, $\varphi(L_w) : L^2 \rightarrow L^2$, where $\varphi$ is any
bounded holomorphic function on $\Sigma_\mu$, $\mu \in
(\vartheta,\pi)$, and $g_{L_w} : L^2 \rightarrow L^2$.
In particular, these $L^2$ estimates hold if we assume that $w\in A_1\cap RH_{1+\frac{n}{2}}$, or
more generally if $w\in A_{r}\cap RH_{\frac{n}{2}\,r+1}$ for $1<r\le 1+\frac2{n}$, or if we take the power weights
$$
w_\alpha(x)=|x|^{\alpha},
\qquad -\frac{2\,n}{n+2}<\alpha<2.
$$
\end{theor}
\begin{proof}
Let $p=q=2$, $p_0=(2^*_w)'$, $q_0=2_w^*$, and let $v=w^{-1}$. Then
by Proposition~\ref{prop:J}, Corollary~\ref{corollary-weighted-offd}
and the nesting properties of weights, $e^{-tL_w}\in \off{2}{2}$ if
$w^{-1} \in A_{2/p_0}(w)\cap RH_{(q_0/2)'}(w)$; in particular, by
Lemma~\ref{lemma:unif-comp}, $e^{-tL_w} : L^2 \rightarrow L^2$ is uniformly
bounded. However, this weight condition is equivalent
to
\[ w\in RH_{(2/p_0)'}\cap A_{q_0/2} . \]
A straightforward computation shows that
\[ \frac{q_0}{2} = \frac{nr_w}{nr_w-2}, \qquad
\left(\frac{2}{p_0}\right)' = \frac{n}{2}r_w+1. \]
Since $r_w<1+\frac{2}{n}$, we have that $r_w < \frac{nr_w}{nr_w-2}$,
so we automatically have that $w\in A_{q_0/2}$. Therefore, the
desired bounds hold if we have $s_w>\frac{n}{2}r_w+1$.
If $w\in A_{r}\cap RH_{\frac{n}{2}\,r+1}$ with $1<r\le 1+\frac2{n}$, then by the openness of the $A_r$ and $RH_s$ classes we have
$r_w < r$ and $s_w> \frac{n}{2}\,r+1 \ge \frac{n}{2}r_w+1$.
The desired conclusion for power weights follows at once from~\eqref{sw-rw:w-alpha}.
\smallskip
The same argument holds for $\varphi(L_w)$ and $g_{L_w}$, using
Proposition~\ref{prop:B-K:weights} or
Proposition~\ref{prop:square-function}, respectively.
\end{proof}
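The hypotheses on $r_w$ and $s_w$ above are easy to check mechanically for power weights. The following short Python sketch (an illustration only, not part of the argument; the sampled values of $\alpha$ are our choice) transcribes formula~\eqref{sw-rw:w-alpha} and tests the conditions $r_w<1+\frac{2}{n}$ and $s_w>\frac{n}{2}r_w+1$ of Theorem~\ref{thm:semigroup-nowt}; for the sampled values it agrees with the window $-\frac{2\,n}{n+2}<\alpha<2$ stated there.
\begin{verbatim}
# Sketch: check the power-weight window of Theorem thm:semigroup-nowt using
# r_{w_alpha} = max(1, 1+alpha/n) and
# s_{w_alpha} = conjugate(max(1, 1/(1+alpha/n))) from (sw-rw:w-alpha).

def conjugate(p):
    # Hoelder conjugate exponent; p = 1 gives infinity.
    return float('inf') if p == 1.0 else p / (p - 1.0)

def r_w(alpha, n):
    return max(1.0, 1.0 + alpha / n)

def s_w(alpha, n):
    return conjugate(max(1.0, 1.0 / (1.0 + alpha / n)))

def semigroup_hypotheses(alpha, n):
    # r_w < 1 + 2/n  and  s_w > (n/2) r_w + 1
    r, s = r_w(alpha, n), s_w(alpha, n)
    return r < 1.0 + 2.0 / n and s > 0.5 * n * r + 1.0

n = 3
lo, hi = -2.0 * n / (n + 2), 2.0     # window claimed in the theorem
for alpha in (lo - 0.01, lo + 0.01, 0.0, 1.0, hi - 0.01, hi + 0.01):
    print(alpha, semigroup_hypotheses(alpha, n), lo < alpha < hi)
\end{verbatim}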
It is straightforward to construct weights more general than power weights that satisfy the
conditions on $r_w$ and $s_w$ in the above theorems. For instance, $w\in
A_{1+\frac{2}{n}}\cap RH_{2+\frac{n}{2}}$ (which corresponds to the choice $r=1+\frac2n$) if and only if there exist
$u_1,\,u_2\in A_1$ such that
\[ w = u_1^{\frac{2}{n+4}}u_2^{-\frac{2}{n}}.\]
This follows from the Jones factorization theorem and the properties
of $A_1$ weights: cf.~\cite{MR1308005}.
\begin{remark} \label{remark:Lpbounds}
We can modify the proof of
Theorem~\ref{thm:semigroup-nowt} to get unweighted $L^p$ estimates
for values of $p$ close to $2$. We leave the details to the
interested reader.
\end{remark}
\medskip
For the reverse inequalities we need to take into account the slightly
stronger hypotheses in Proposition~\ref{prop:reverseRiesz};
otherwise, the proof of the following result follows exactly as in the
proof of Theorem~\ref{thm:semigroup-nowt}.
\begin{theor} \label{thm:reverseRiesz-nowt}
Given a weight $w\in A_2$, suppose that
\[ 1\leq r_w<1+\frac{2}{n} \quad \text{ and }
s_w>\max\Big\{\Big(\frac{2}{r_w}\Big)', \frac{n}2\,r_w+1\Big\}. \]
Then
\begin{equation}\label{eq:reverseRiesz-dx}
\|L_w^{1/2}f\|_{L^2} \le C\, \|\nabla f\|_{L^2},
\qquad f\in\mathcal{S}.
\end{equation}
In particular, this is the case if we either
assume that $w\in A_{1}\cap RH_{ 1+\frac{n}{2}}$, or more generally
that
$w\in A_{r}\cap RH_{\max\{(\frac{2}{r})', \frac{n}2\,r+1\} }$, with $1<r\le 1+\frac{2}{n}$,
or
for power weights if we take
$$
w_\alpha(x)=|x|^{\alpha},
\qquad -\min\Big\{\frac{n}{2},\frac{2n}{n+2}\Big\}<\alpha<2.
$$
\end{theor}
\begin{remark}
Note that $\max\{(\frac{2}{r})', \frac{n}2\,r+1\}=\frac{n}2\,r+1$ provided $r\le 2-\frac{2}{n}$ and this always holds if $n\ge 4$ as $1+\frac2n\le 2-\frac{2}{n}$. In this case, the conditions in the second part of Theorem~\ref{thm:reverseRiesz-nowt} simplify to the same conditions as in
Theorem~\ref{thm:semigroup-nowt}.
\end{remark}
\begin{remark}\label{remark:p-vs-q}
We note that in Theorems \ref{thm:semigroup-nowt} and \ref{thm:reverseRiesz-nowt} we can replace $1\le r_w<1+\frac2{n}$ with the possibly weaker condition $1\le r_w<\frac{p_+(L_w)}{2}$. The proof only requires us to take $q_0=p_+(L_w)$.
\end{remark}
\bigskip
For the gradient of the semigroup $\sqrt{t} {\nabla} e^{-tL_w}$, the
Riesz transform ${\nabla} L_w^{-1/2}$, and the square
function $G_{L_w}$ our estimates depend on $q_+(L_w)$.
\begin{theor} \label{thm:grad-square-nowt}
Given a weight $w\in A_2$, suppose $1\leq r_w<\frac{q_+(L_w)}2$ and
$s_w>\frac{n}{2}r_w+1$. Then $\sqrt{t} {\nabla} e^{-tL_w} : L^2
\rightarrow L^2$ is uniformly bounded for all $t>0$.
Similarly, we have that ${\nabla} L_w^{-1/2} : L^2 \rightarrow L^2$ and
$G_{L_w} : L^2 \rightarrow L^2$.
In particular, this is the case if we assume that
$w\in A_1\cap RH_{\frac{n}{2}+1}$. Furthermore, these $L^2$ estimates
hold if the following is true: given $\Theta \ge 1$ there exists
$\epsilon_0=\epsilon_0(\Theta, n, \Lambda/\lambda)$,
$0<\epsilon_0\le \frac1{2\,n}$, such that
$w\in A_{1+\epsilon}\cap RH_{\frac{n}{2}\,(1+\epsilon)+1}$,
$0\le \epsilon<\epsilon_0$, and $[w]_{A_2}\le \Theta$.
For power weights, there exists $\epsilon_1=\epsilon_1(n, \Lambda/\lambda)$, $0<\epsilon_1\le \frac1{2}$, such that
these estimates hold for
\[ w_\alpha(x) = |x|^{\alpha}, \qquad \qquad -\frac{2\,n}{n+2}<\alpha<\epsilon_1.\]
\end{theor}
\begin{proof}
We will prove this result for $\sqrt{t} {\nabla} e^{-tL_w}$ using
Proposition~\ref{prop:K}. The proof for ${\nabla} L_w^{-1/2}$ or $G_{L_w}$ is exactly the same,
using Proposition~\ref{prop:ext-RT} or
Proposition~\ref{prop:grad-square-function}.
By Proposition~\ref{prop:K}, $\sqrt{t} {\nabla} e^{-tL_w} : L^2
\rightarrow L^2$ if $w^{-1}=v \in A_{2/q_-(L_w)}(w) \cap
RH_{(q_+(L_w)/2)'}(w)$, which is equivalent to
\[ w\in RH_{(2/q_-(L_w))'}\cap A_{q_+(L_w)/2}. \]
Therefore, we need $r_w<q_+(L_w)/2$. Furthermore, since we have that
$q_-(L_w)=p_-(L_w)\leq (2_w^*)'$, we can take
\[ s_w > \left(\frac{2}{(2_w^*)'}\right)' = \frac{n}{2}r_w+1. \]
\medskip
To get the particular examples stated in the theorem, note first that if we let $r_w=1$,
then it clearly suffices to assume $w\in A_1\cap
RH_{\frac{n}{2}+1}$, since we showed in Section~\ref{section:q-plus} that $q_+(L_w)>2$ for every $w\in A_2$.
\medskip
We now prove the condition for weights $w\in A_{1+\epsilon}$. In this
case it is more difficult to satisfy the condition $r_w<q_+(L_w)/2$
since the righthand side can be very close to $1$ depending on
$w$. Assume then that
$w\in A_{1+\epsilon}\cap RH_{\frac{n}{2}\,(1+\epsilon)+1}$, with
$0\le \epsilon<\epsilon_0\le \frac1{2\,n}$, $[w]_{A_2}\le \Theta$,
and with $\epsilon_0>0$ to be fixed below. Then we have that
$$
s_w
>
\frac{n}{2}\,(1+\epsilon)+1
\ge
\frac{n}{2}\,r_w+1.
$$
Therefore, in order to apply the first half of the theorem we need to
show that we can choose $\epsilon_0$ sufficiently small so that
$r_w< q_+(L_w)/2$. To do so we will use the notation and computations from
Section~\ref{section:q-plus}. There we showed that $q_+(L_w)\geq q_w$,
and so it will suffice to show that
\begin{equation} \label{eq:deadde}
2r_w < q_w = \min(r_w', p_+(L_w), p_0).
\end{equation}
We will compare $2\,r_w$ to each term in the minimum in turn.
The first two terms are straightforward. First, we have that
$r_w<1+\epsilon<1+\frac1{2\,n}<\frac{3}{2}$ and so $2r_w<r_w'$.
Second, $r_w<1+\frac1{2\,n}<1+\frac2{n}$, and it follows at once from
this that $2\, r_w<2_w^*$. By Proposition~\ref{prop:J},
$2_w^*\le p_+(L_w)$ and so $2\, r_w<p_+(L_w)$.
Finally, we estimate $p_0$, the exponent from the higher integrability
condition~\eqref{eqn:gehring-bump}. We will use the
formula~\eqref{eq:value-p0}. First, we need to fix the exponent $q$
from the Poincar\'e inequality~\eqref{eq:w-Poincare-a}. Let
$q=2-1/n$; this value satisfies \eqref{eq:q-choice} since $r_w<1+\frac1{2\,n}<1+\frac1n$.
With this choice of $q$ (that only depends on $n$), we have that
\[
p_0
=
2 + \frac{2-q}{2^{4/q+1}C_1^2C_2^2 [w]_{A_2}^{6/q+17}}
=
2+\frac{1}{n\,C(n,\Lambda/\lambda)\,[w]_{A_2}^{\theta_n}}
\]
where $C(n,\Lambda/\lambda)\ge 1$ depends only on $n$ and the ratio $\Lambda/\lambda$ of the
ellipticity constants of the matrix $A$ used to define $L_w$, and where $\theta_n\ge 1$ depends only on $n$.
Then, since we also assumed that $[w]_{A_2}\le \Theta$,
we get that
$$
p_0
=
2+\frac{1}{n\,C(n,\Lambda/\lambda)\,[w]_{A_2}^{\theta_n}}
\ge
2+\frac{1}{n\,C(n,\Lambda/\lambda)\,\Theta^{\theta_n}}
=
2+2\,\epsilon_0,
$$
and $\epsilon_0= (2\,
n\,C(n,\Lambda/\lambda)\,\Theta^{\theta_n})^{-1}$ is such that $0<\epsilon_0\le \frac1{2\,n}$. Thus
$2\,r_w\le 2\,(1+\epsilon)<2\,(1+\epsilon_0)\le p_0$ and so
$2\,r_w<p_0$. This completes the proof that \eqref{eq:deadde} is satisfied,
and so the $L^2$ estimates hold for weights that satisfy $w\in A_{1+\epsilon}\cap RH_{\frac{n}{2}\,(1+\epsilon)+1}$.
\medskip
Finally, we consider power weights. First, it is easy to see that
\[ w_\alpha(x) = |x|^{\alpha}, \qquad \qquad
\frac{-2\,n}{n+2}<\alpha\le 0 \]
yields the desired estimates, since in this case $r_w=1$ and
$s_w>\frac{n}2+1= \frac{n}2r_w+1$.
Now consider the case $\alpha>0$. If we assume that $\alpha<\frac12$,
then $w\in A_{1+\frac1{2\,n}}\cap RH_\infty$. Moreover, it is
straightforward to show that for all such $\alpha$, there exists
$\Theta$, depending only on $n$, such that
$[w_\alpha]_{A_2}\le \Theta$.
Now apply the above argument to find $\epsilon_0\in(0,\frac1{2\,n}]$;
this value will only depend on $n$ and the ratio $\Lambda/\lambda$. If we let
$\epsilon_1=n\,\epsilon_0$ and assume that $0<\alpha<\epsilon_1$,
then $\alpha<\frac12$ and $w_\alpha\in A_{1+\epsilon}$ for some
$\epsilon<\epsilon_0$ as desired.
\end{proof}
\smallskip
To find examples of weights other than power weights to which
Theorem~\ref{thm:grad-square-nowt} applies, we argue as before. If $u_1\in A_1$, then
\[ w = u_1^{\frac{2}{n+2}} \in A_1 \cap RH_{\frac{n}{2}+1}. \]
To get weights that are not in $A_1$, take $u\in A_2$ and
let $w=u^\theta$. If $\theta$ is sufficiently small (depending on $n$,
the ratio
$\Lambda/\lambda$ and $[u]_{A_2}$) we can show that $w$ satisfies the
final conditions given in~Theorem~\ref{thm:grad-square-nowt}.
Details are left to the interested reader.
\medskip
\begin{remark}
To get the unweighted lower estimate
\[ \|f\|_{L^2} \leq C\|G_{L_w}f\|_{L^2}, \]
we note that by Proposition~\ref{prop:grad-square-reverse} we
need $w^{-1}\in A_{2/q_+(\Delta_w)'}(w)$, or equivalently,
$w\in RH_{(2/q_+(\Delta_w)')'}$. Hence, it suffices to assume
\[ s_w > \frac{2\,(q_+(\Delta_w)-1)}{q_+(\Delta_w)-2}. \]
Arguing as above we can construct weights that satisfy this condition;
details are left to the interested reader.
\end{remark}
\bigskip
If we combine Theorems \ref{thm:reverseRiesz-nowt},
\ref{thm:grad-square-nowt}, and Remark \ref{remark:p-vs-q} we solve the Kato square root problem for
degenerate elliptic operators.
\begin{theor}\label{corol:super-Kato}
Let $L_w=-w^{-1}\mathop{\rm div} A{\nabla}$ be a degenerate elliptic operator with
$w\in A_2$. If
\[ 1\leq r_w<\frac{q_+(L_w)}2 \qquad \text{ and} \qquad
s_w>\max\Big\{\Big(\frac{2}{r_w}\Big)', \frac{n}{2}r_w+1\Big\}, \]
then the Kato problem can be solved for $L_w$: that is, for every $f\in H^1(\mathbb{R}^n)$,
\begin{equation}\label{Kato-L2}
\|L_w^{1/2} f\|_{L^2(\mathbb{R}^n)} \approx \|{\nabla} f\|_{L^2(\mathbb{R}^n)},
\end{equation}
where the implicit constants depend only on the dimension, the ellipticity constants
$\lambda$, $\Lambda$, and $w$.
In particular, \eqref{Kato-L2} holds if $w\in A_1\cap RH_{\frac{n}{2}+1}$. Further, \eqref{Kato-L2} holds if the following is true: given $\Theta \ge 1$ there exists
$\epsilon_0=\epsilon_0(\Theta, n, \Lambda/\lambda)$,
$0<\epsilon_0\le \frac1{2\,n}$, such that
$w\in A_{1+\epsilon}\cap RH_{\max\{(\frac{2}{1+\epsilon})', \frac{n}{2}\,(1+\epsilon)+1\}}$,
$0\le \epsilon<\epsilon_0$, and $[w]_{A_2}\le \Theta$.
For power weights, there exists $\epsilon_1=\epsilon_1(n, \Lambda/\lambda)$, $0<\epsilon_1\le \frac1{2}$, such that
inequality \eqref{Kato-L2} holds (with $w_\alpha$ in place of $w$) if
\[ w_\alpha(x) = |x|^{\alpha}, \qquad \qquad
-\frac{2\,n}{n+2}<\alpha<\epsilon_1. \]
\end{theor}
We can restate the final part of Theorem~\ref{corol:super-Kato} as follows:
consider the family of operators
$L_\gamma=-|x|^{\gamma}\mathop{\rm div}(|x|^{-\gamma} B(x){\nabla})$, where $B$ is an
$n\times n$ complex-valued matrix that satisfies the uniform
ellipticity condition
\[ \lambda | \xi | ^{2}\leq {\Re}\langle B(x)\xi
,\xi \rangle, \qquad
|\langle B(x)\xi ,\eta \rangle |\leq \Lambda |\xi
||\eta |, \quad \xi ,\,\eta \in \mathbb{C}^{n},
\ \mbox{a.e.~}x\in\mathbb{R}^n. \]
Then,
\begin{equation} \label{eqn:more}
\|L_\gamma^{1/2} f\|_{L^2(\mathbb{R}^n)} \approx \|{\nabla} f\|_{L^2(\mathbb{R}^n)},\qquad
-\epsilon_1< \gamma<\frac{2n}{n+2}.
\end{equation}
When $\gamma=0$ we recover the classical Kato
square root problem solved by Auscher, Hofmann, Lacey, McIntosh, and
Tchamitchian~\cite{auscher-hofmann-lacey-mcintosh-tchamitchian02}.
Inequality~\eqref{eqn:more}
shows that we can find an open interval containing $0$ such that if
$\gamma$ is in this interval, the same estimate holds.
\section{Applications to $L^2$ boundary value problems}
\label {section:BVP}
In this section we apply the results from the previous section to some
$L^2$ boundary value problems involving the degenerate elliptic
operator $L_w$. We follow the ideas in
\cite{auscher-tchamitchian98} and consider semigroup solutions: for
the Dirichlet or Regularity problems we let
$u(x,t)=e^{-tL_w^{1/2}}f(x)$; for the for the Neumann problem we let
$u(x,t)=-L_w^{-1/2}\,e^{-tL_w^{1/2}}f(x)$. In each case, for $t>0$ fixed
$L_w u(\cdot,t)$ makes sense in a weak sense since
$u(\cdot,t)$ is in the domain of $L_w$. Further, derivatives in $t$ are
well defined because of the semigroup properties. Finally, note that by the
strong continuity of the semigroup and the off-diagonal
estimates, in the context of the following results we have that
$e^{-tL_w^{1/2}}f\to f$ as $t\to 0^+$ in $L^2$; see
\cite[Section~4.2]{auscher-martell07}. Further details are left to the
interested reader.
\bigskip
We first consider the Dirichlet problem on $\mathbb{R}^{n+1}_+={\mathbb{R}^n}\times
[0,\infty)$:
\begin{equation} \label{eqn:dirichlet}
\begin{cases}
\partial_t^2 u - L_w u = 0, & \text{in } \mathbb{R}^{n+1}_+ \\
u \big|_{\partial\mathbb{R}^{n+1}_+}= f & \text{on } \partial\mathbb{R}^{n+1}_+ =\mathbb{R}^n.
\end{cases}
\end{equation}
\begin{theor} \label{thm:dirichlet} Given a weight $w\in A_2$, suppose
$1\leq r_w<1+\frac{2}{n}$ and $s_w>\frac{n}{2}r_w+1$. Then for any
$f\in L^2(\mathbb{R}^n)$, $u(x,t)=e^{-tL_w^{1/2}}f(x)$ is a solution
of~\eqref{eqn:dirichlet} with convergence to the boundary data as $t\to 0^+$ in the $L^2$-sense. Furthermore, we have that
\begin{equation} \label{eqn:dirichlet-bound}
\sup_{t>0} \|u(\cdot,t)\|_{L^2} \leq C\|f\|_{L^2}.
\end{equation}
In particular, this is the case if we
assume that $w\in A_1\cap RH_{1+\frac{n}{2}}$, or $w\in A_{r}\cap RH_{\frac{n}{2}\,r+1}$ with $1<r\le 1+\frac{2}{n}$, or if we take the power weights
$$
w_\alpha(x)=|x|^{\alpha},
\qquad -\frac{2\,n}{n+2}<\alpha<2.
$$
\end{theor}
\begin{proof}
Formally, it is clear that $u$ is a solution to \eqref{eqn:dirichlet},
and this formalism can be justified by appealing to the theory of
maximal accretive operators: see~Kato~\cite{kato66}. Alternatively, the
weighted estimates for the functional calculus in Proposition~\ref{prop:B-K:weights}
show that both $\frac{\partial^2}{\partial t^2}u(\cdot,t)$ and $L_w u(\cdot,t)$ belong to $L^2$
for each $t>0$ and that they are equal in the $L^2$-sense.
To see that inequality~\eqref{eqn:dirichlet-bound} holds, it suffices
to let $\varphi_t(z)=e^{-t\sqrt{z}}$. Then $\varphi_t$ is a bounded
holomorphic function on $\Sigma_\mu$, and so by
Theorem~\ref{thm:semigroup-nowt} we get the desired bound.
\end{proof}
\begin{remark}\label{remark-extra}
Note that as observed in Remark \ref{remark:p-vs-q}, in the previous result we can replace $1\le r_w<1+\frac2{n}$ with the possibly weaker condition $1\le r_w<\frac{p_+(L_w)}{2}$. Moreover, by Proposition~\ref{prop:B-K:weights}, for $u$ as in Theorem~\ref{thm:dirichlet} and all $k\ge 1$ we have
\begin{equation}
\sup_{t>0} \left\| t^k\frac{\partial^k}{\partial t^k} u(\cdot,t)\right\|_{L^2}
=
\sup_{t>0} \left\| (t\, L_w^{1/2})^k e^{-tL_w^{1/2}}f(\cdot)\right\|_{L^2} \leq C\|f\|_{L^2}.
\label{eq:eqn-CR}
\end{equation}
\end{remark}
\medskip
For the regularity problem we have the following.
\begin{theor} \label{thm:regularity}
Given a weight $w\in A_2$, suppose
\[ 1\leq r_w<\frac{q_+(L_w)}2 \qquad \text{ and} \qquad
s_w>\max\Big\{\Big(\frac{2}{r_w}\Big)', \frac{n}{2}r_w+1\Big\}. \]
Then for any
$f\in H^1(\mathbb{R}^n)$, $u(x,t)=e^{-tL_w^{1/2}}f(x)$ is a solution
of~\eqref{eqn:dirichlet} with convergence to the boundary data as $t\to 0^+$ in the $L^2$-sense. Furthermore, we have that
\begin{equation} \label{eqn:regularity-bound}
\sup_{t>0} \|{\nabla}_{x,t}u(\cdot,t)\|_{L^2} \leq C\|\nabla f\|_{L^2}.
\end{equation}
In particular, \eqref{eqn:regularity-bound} holds if we assume that
$w\in A_1\cap RH_{1+\frac{n}{2}}$. Furthermore, it holds if the
following is true: given $\Theta \ge 1$ there exists
$\epsilon_0=\epsilon_0(\Theta, n, \Lambda/\lambda)$,
$0<\epsilon_0\le \frac1{2\,n}$, such that
$w\in A_{1+\epsilon}\cap RH_{\max\{(\frac{2}{1+\epsilon})', \frac{n}{2}\,(1+\epsilon)+1\}}$,
$0\le \epsilon<\epsilon_0$, and $[w]_{A_2}\le \Theta$.
For power weights, there exists $\epsilon_1=\epsilon_1(n, \Lambda/\lambda)$, $0<\epsilon_1\le \frac1{2}$, such that
\eqref{eqn:regularity-bound} holds if
\[ w_\alpha(x) = |x|^{\alpha}, \qquad \qquad -\frac{2\,n}{n+2}<\alpha<\epsilon_1.\]
\end{theor}
\begin{proof}
Arguing as before, it suffices to prove that \eqref{eqn:regularity-bound} holds. For any
$t>0$ we have, by Theorems~\ref{thm:semigroup-nowt}, \ref{thm:grad-square-nowt}, and \ref{corol:super-Kato} (see also Remark~\ref{remark:p-vs-q}),
\begin{multline*}
\|{\nabla}_{x,t}u(\cdot,t)\|_{L^2}
\leq
\|{\nabla} L_w^{-1/2} L_w^{1/2}e^{-tL_w^{1/2}}f \|_{L^2}
+ \|L_w^{1/2}e^{-tL_w^{1/2}}f \|_{L^2} \\
\lesssim
\|L_w^{1/2}e^{-tL_w^{1/2}}f \|_{L^2}
=
\|e^{-tL_w^{1/2}}\,L_w^{1/2}f \|_{L^2}
\lesssim
\|L_w^{1/2}f \|_{L^2}
\lesssim
\|{\nabla} f\|_{L^2}.
\end{multline*}
\end{proof}
Note that under the hypotheses of Theorem \ref{thm:regularity}, and as observed in Remark \ref{remark-extra},
we have that $u(\cdot,t)=e^{-tL_w^{1/2}}f$ satisfies \eqref{eqn:dirichlet-bound} and \eqref{eq:eqn-CR}. Additionally, from the functional calculus estimates on $L^2$ it follows that
\begin{equation} \label{eqn:regularity-bound2}
\sup_{t>0} \|t{\nabla}_{x,t}u(\cdot,t)\|_{L^2}
\lesssim
\sup_{t>0}\| t L_w^{1/2}e^{-tL_w^{1/2}}f \|_{L^2} \lesssim \| f\|_{L^2}.
\end{equation}
Finally, we consider the Neumann problem
\begin{equation} \label{eqn:neumann}
\begin{cases}
\partial_t^2 u - L_w u = 0, & \text{in } \mathbb{R}^{n+1}_+ \\
\partial_t u \big|_{\partial\mathbb{R}^{n+1}_+}= f & \text{on } \partial\mathbb{R}^{n+1}_+ =\mathbb{R}^n.
\end{cases}
\end{equation}
\begin{theor} \label{thm:neumann}
Given a weight $w\in A_2$, suppose $1\leq r_w<\frac{q_+(L_w)}2$ and
$s_w>\frac{n}{2}r_w+1$. Then for any
$f\in L^2(\mathbb{R}^n)$, $u(x,t)=-L_w^{-1/2}e^{-tL_w^{1/2}}f(x)$ is a solution of~\eqref{eqn:neumann} with convergence of $\partial_t u(\cdot,t)\to f$ as $t\to 0^+$ in the $L^2$-sense. Furthermore, we have that
\begin{equation} \label{eqn:neumann-bound}
\sup_{t>0} \|{\nabla}_{x,t}u(\cdot,t)\|_{L^2} \leq C\|f\|_{L^2}.
\end{equation}
In particular, \eqref{eqn:neumann-bound} holds if we
assume that $w\in A_1\cap RH_{1+\frac{n}{2}}$. Furthermore, it holds
if the following is true: given $\Theta \ge 1$ there exists
$\epsilon_0=\epsilon_0(\Theta, n, \Lambda/\lambda)$, $0<\epsilon_0\le
\frac1{2\,n}$, such that $w\in A_{1+\epsilon}\cap
RH_{\frac{n}2\,(1+\epsilon)+1}$, $0\le \epsilon<\epsilon_0$, and $[w]_{A_2}\le
\Theta$.
For power weights, there exists $\epsilon_1=\epsilon_1(n, \Lambda/\lambda)$, $0<\epsilon_1\le \frac1{2}$, such that
\eqref{eqn:neumann-bound} holds if
\[ w_\alpha(x) = |x|^{\alpha}, \qquad \qquad -\frac{2\,n}{n+2}<\alpha<\epsilon_1.\]
\end{theor}
\begin{proof}
Again, $u$ is clearly a formal solution of~\eqref{eqn:neumann};
see~\cite{kato66}. The proof that~\eqref{eqn:neumann-bound} holds is
similar to the proof of~\eqref{eqn:regularity-bound}:
$$
\|{\nabla}_{x,t}u(\cdot,t)\|_{L^2}
\leq
\|{\nabla} L_w^{-1/2} e^{-tL_w^{1/2}}f \|_{L^2}
+ \|e^{-tL_w^{1/2}}f \|_{L^2}
\lesssim
\|e^{-tL_w^{1/2}}f \|_{L^2}
\lesssim
\|f\|_{L^2},
$$
where we have used Theorem~\ref{thm:grad-square-nowt} (for the Riesz
transform) and Theorem~\ref{thm:semigroup-nowt} (for the functional calculus with $\varphi(z)=e^{-t\sqrt{z}}$).
\end{proof}
\begin{remark}
As we noted in Remark~\ref{remark:Lpbounds}, we can also get
unweighted $L^p$ bounds for these operators for values of $p$ close to
2. As a consequence we can also obtain estimates for the $L^p$ boundary value
problems for the same values of $p$. Details are left to the reader.
\end{remark}
\bibliographystyle{plain}
| {
"timestamp": "2017-06-08T02:03:46",
"yymm": "1510",
"arxiv_id": "1510.06790",
"language": "en",
"url": "https://arxiv.org/abs/1510.06790",
"abstract": "We study the Kato problem for degenerate divergence form operators. This was begun by Cruz-Uribe and Rios who proved that given an operator $L_w=-w^{-1}{\\rm div}(A\\nabla)$, where $w\\in A_2$ and $A$ is a $w$-degenerate elliptic measure (i.e, $A=w\\,B$ with $B$ an $n\\times n$ bounded, complex-valued, uniformly elliptic matrix), then $L_w$ satisfies the weighted estimate $\\|\\sqrt{L_w}f\\|_{L^2(w)}\\approx\\|\\nabla f\\|_{L^2(w)}$. Here we solve the $L^2$-Kato problem: under some additional conditions on the weight $w$, the following unweighted $L^2$-Kato estimates hold $$ \\|L_w^{1/2}f\\|_{L^2(\\mathbb{R}^n)}\\approx\\|\\nabla f\\|_{L^2(\\mathbb{R}^n)}. $$This extends the celebrated solution to the Kato conjecture by Auscher, Hofmann, Lacey, McIntosh, and Tchamitchian, allowing the differential operator to have some degeneracy in its ellipticity. For example, we consider the family of operators $L_\\gamma=-|x|^{\\gamma}{\\rm div}(|x|^{-\\gamma}B(x)\\nabla)$, where $B$ is any bounded, complex-valued, uniformly elliptic matrix. We prove that there exists $\\epsilon>0$, depending only on dimension and the ellipticity constants, such that $$ \\|L_\\gamma^{1/2}f\\|_{L^2(\\mathbb{R}^n)}\\approx\\|\\nabla f\\|_{L^2(\\mathbb{R}^n)}, \\qquad -\\epsilon<\\gamma<\\frac{2\\,n}{n+2}. $$ This gives a range of $\\gamma$'s for which the classical Kato square root $\\gamma=0$ is an interior point.Our main results are obtained as a consequence of a rich Calderón-Zygmund theory developed for some operators associated with $L_w$. These results, which are of independent interest, establish estimates on $L^p(w)$, and also on $L^p(v\\,dw)$ with $v\\in A_\\infty(w)$, for the associated semigroup, its gradient, the functional calculus, the Riesz transform, and square functions. As an application, we solve some unweighted $L^2$-Dirichlet, Regularity and Neumann boundary value problems for degenerate elliptic operators.",
"subjects": "Classical Analysis and ODEs (math.CA); Analysis of PDEs (math.AP)",
"title": "On the Kato problem and extensions for degenerate elliptic operators",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9890130563978901,
"lm_q2_score": 0.7154239897159439,
"lm_q1q2_score": 0.7075636666893383
} |
https://arxiv.org/abs/1606.07474 | A stability result using the matrix norm to bound the permanent | We prove a stability version of a general result that bounds the permanent of a matrix in terms of its operator norm. More specifically, suppose $A$ is an $n \times n$ matrix over $\mathbb{C}$ (resp. $\mathbb{R}$), and let $\mathcal{P}$ denote the set of $n \times n$ matrices over $\mathbb{C}$ (resp. $\mathbb{R}$) that can be written as a permutation matrix times a unitary diagonal matrix. Then it is known that the permanent of $A$ satisfies $|\text{perm}(A)| \leq \Vert A \Vert_{2} ^n$ with equality iff $A/ \Vert A \Vert_{2} \in \mathcal{P}$ (where $\Vert A \Vert_2$ is the operator $2$-norm of $A$). We show a stability version of this result asserting that unless $A$ is very close (in a particular sense) to one of these extremal matrices, its permanent is exponentially smaller (as a function of $n$) than $\Vert A \Vert_2 ^n$. In particular, for any fixed $\alpha, \beta > 0$, we show that $|\text{perm}(A)|$ is exponentially smaller than $\Vert A \Vert_2 ^n$ unless all but at most $\alpha n$ rows contain entries of modulus at least $\Vert A \Vert_2 (1 - \beta)$. | \section{Introduction}\label{section intro}
The \textit{permanent} of an $n \times n$ matrix, $A$, has long been an important quantity in combinatorics and computer science, and more recently it has also had applications to physics and linear-optical quantum computing. It is defined as
\[
\perm{A} := \sum_{\sigma\in S_n} \prod_{i=1} ^{n} a_{i, \sigma(i)},
\]
where $S_n$ denotes the set of permutations of $[n]=\{1,2,\ldots,n\}$. For instance, if $A$ only has entries in $\{0,1\} \subseteq \mathbb{R}$, then the permanent counts the number of perfect matchings in the bipartite graph whose bipartite adjacency matrix is $A$.
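\paragraph*{}As a concrete illustration of the definition (and nothing more), the following Python sketch evaluates the permanent directly from the sum over $S_n$; the matrices used are our own toy examples. The brute-force computation takes time of order $n!\cdot n$, which is consistent with the hardness result recalled in the next paragraph.
\begin{verbatim}
# Sketch: brute-force permanent straight from the definition (tiny n only).
from itertools import permutations
from math import prod

def permanent(A):
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

# K_{2,2} (a 4-cycle) has exactly 2 perfect matchings.
print(permanent([[1, 1], [1, 1]]))   # 2
print(permanent([[1, 2], [3, 4]]))   # 1*4 + 2*3 = 10
\end{verbatim}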
\paragraph*{}The definition of the permanent is of course reminiscent of that for the determinant; however, whereas the determinant is rich in algebraic and geometric meaning, the more combinatorial permanent is notoriously difficult to understand. For example, computing $\perm{A}$ even for $\{0, 1\}$-matrices is the prototypical \#P-complete problem (Valiant \cite{valiant}).
\paragraph*{}On the other hand, the \textit{operator 2-norm} (also called the \textit{operator norm}) of a matrix is a particularly nice parameter. For an $n \times n$ matrix $A$ with entries in $\mathbb{C}$, it is defined as
\[
\Vert A \Vert_{2} = \sup_{\Vert \vec{x} \Vert_{2} \leq 1, \ \vec{x} \in \mathbb{C}^n} \Vert A \vec{x} \Vert_{2},
\]
where $\Vert \vec{v} \Vert_{p}$ is the usual $l_{p}$ norm (i.e., $\Vert \vec{v} \Vert_p ^{p} = \sum_{i} |v_i|^{p}$ for $p \in (0, \infty)$, and $\Vert \vec{v} \Vert_{\infty} = \max |v_i|$). The operator norm of a matrix has the advantages of being both algebraically and analytically well-behaved as well as computationally easy to determine (as this amounts to finding the largest singular value of $A$).
\paragraph*{}Considering how differently behaved the permanent and operator norm are, it is perhaps strange to think that there would be much of a connection between them. Nonetheless, they are related by the following extremal result, which is due to Gurvits \cite{gurvits} (see also \cite{aaronson, nguyen}).
\begin{theorem}\label{norm result}
Suppose $A$ is an $n \times n$ matrix over $\mathbb{C}$ (resp. $\mathbb{R}$), and let $\mathcal{P}$ denote the set of $n \times n$ matrices over $\mathbb{C}$ (resp. $\mathbb{R}$) that can be written as a permutation matrix times a unitary diagonal matrix. Then $|\perm{A}| \leq \Vert A \Vert_{2} ^n$ with equality iff $A$ is a scalar multiple of a matrix in $\mathcal{P}$.
\end{theorem}
\paragraph*{}Note that this extremal set $\mathcal{P}$ is simply the set of matrices with exactly $n$ non-zero entries, each having modulus 1, and no two of which are in the same row or column. Such a matrix $P \in \mathcal{P}$ has $\Vert P \Vert_2 = | \perm{P}| = 1$ and satisfies
\[
\Vert AP \Vert_2 = \Vert PA \Vert_2 = \Vert A \Vert_2, \qquad \qquad \text{and} \qquad \qquad |\perm{AP}| = |\perm{PA}| = | \perm{A}|
\]
for all matrices $A$ (which is equivalent to membership in $\mathcal{P}$). Moreover, $\mathcal{P}$ is a subgroup of the group of unitary matrices, and as a set, it has a very tractable topological structure.
\paragraph*{}Motivated by algorithmic questions related to approximating the permanent, Aaronson and Hance \cite{aaronson} asked whether one could prove a stability version of \myRef{Theorem \ref{norm result}}:
\paragraph*{Question A:} If $|\perm{A}|$ is close to $\Vert A \Vert_2 ^n$, must $A / \Vert A \Vert_2$ be `close' to a matrix in $\mathcal{P}$?
\paragraph*{}A somewhat more concrete version was suggested by Aaronson and Nguyen \cite{nguyen}:
\paragraph*{Question B:} Characterize $n \times n$ matrices $A$ such that $\Vert A \Vert_2 \leq 1$ and there exists a constant $C > 0$ such that $|\perm{A}| \geq n^{-C}$.
\paragraph*{}Using techniques of inverse Littlewood-Offord theory, Aaronson and Nguyen gave a substantial answer to an analogous question under the (stronger) assumptions that $A$ is orthogonal and that the intersection of the hypercube $\{\pm 1\}^n$ with its image under $A$ is large. They also proved something like (actually slightly stronger than) our results below for stochastic matrices. Further results in the direction of \myRef{Question B} were given by Nguyen \cite{nguyenPrivate}.
\paragraph*{}The two main results of the present paper are \myRef{Theorems \ref{main theorem}} and \myRef{\ref{main theorem real case}} below. The first provides a positive answer to \myRef{Question A} for matrices over $\mathbb{C}$ (or $\mathbb{R}$), and the second is a more refined result that (depending on your philosophical views) at least partially addresses \myRef{Question B} for matrices over $\mathbb{R}$. More specifically, we bound $\perm{A}$ in terms of the following easily computed parameters.
\paragraph*{Definition:} Let $A$ be a matrix with rows $r_1, r_2, \ldots , r_n$, and $p \in \mathbb{R} \cup \{\infty\}$. Then the parameter $h_p (A)$ is defined as $h_{p} (A) = h_p = \frac{1}{n} \sum_{i} \Vert r_i \Vert_{p}$.
\paragraph*{}We will only consider $h_\infty$ and $h_2$. First note $0 \leq h_{\infty} (A) \leq h_{2} (A) \leq \Vert A \Vert_2$. Moreover, it is easy to show $h_{2} (A) = \Vert A \Vert_2$ iff $A/\Vert A \Vert_2$ is a unitary matrix, and $h_{\infty} (A) = \Vert A \Vert_2$ iff $A / \Vert A \Vert_2$ is in $\mathcal{P}$. Thus, in some sense, the quantity $1- h_{2}(A) / \Vert A \Vert_2 \in [0, 1]$ measures how close $A / \Vert A \Vert_2$ is to being unitary, and $1 - h_{\infty}(A) / \Vert A \Vert_2 \in [0, 1]$ measures how close $A / \Vert A \Vert_2$ is to being in $\mathcal{P}$. Broadly speaking, $h_{\infty} / \Vert A \Vert_2$ is close to 1 precisely when most of the rows of $A$ each have one entry of modulus close to $\Vert A \Vert_2$ and all the other entries in that row are close to 0.
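\paragraph*{}All of the quantities just discussed are cheap to compute, so the reader can easily experiment with them. The following Python sketch (illustrative only; NumPy and the two sample matrices are our choices) computes $h_2$, $h_\infty$, $\Vert A \Vert_2$, and the normalized gaps $1-h_2/\Vert A\Vert_2$ and $1-h_\infty/\Vert A\Vert_2$ for a permutation matrix and for a rotation.
\begin{verbatim}
import numpy as np

def h_params(A):
    # h_p(A) = (1/n) * sum of the l_p norms of the rows, for p = 2, infinity.
    h2 = np.mean(np.linalg.norm(A, axis=1))
    hinf = np.mean(np.abs(A).max(axis=1))
    op = np.linalg.norm(A, ord=2)   # operator 2-norm = largest singular value
    return h2, hinf, op

P = np.array([[0.0, 1.0], [1.0, 0.0]])          # lies in the extremal set P
t = 0.3
Q = np.array([[np.cos(t), -np.sin(t)],          # unitary, but not in P
              [np.sin(t),  np.cos(t)]])

for M in (P, Q):
    h2, hinf, op = h_params(M)
    print(h2, hinf, op, 1 - h2 / op, 1 - hinf / op)
\end{verbatim}
For the permutation matrix both gaps are $0$, while for the rotation only the $h_2$ gap vanishes, matching the characterizations of unitarity and of membership in $\mathcal{P}$ given above.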
\paragraph*{}Before stating the first of our main results, notice that in addressing either of the above questions, we lose no generality in assuming $\Vert A \Vert _2 \leq 1$, since \myRef{Question A} is invariant under scaling. However, to facilitate any application of our results, we state them in the ``more general" case that $\Vert A \Vert_2 \leq T$.
\newpage
\begin{theorem}\label{main theorem}
Let $A$ be an $n\times n$ matrix over $\mathbb{C}$ and $\Vert A \Vert_{2} \leq T \neq 0$. Then
\begin{itemize}
\item[(i)] $|\perm{A}| \leq 2T^{n} \exp \bigg[-3n\Big(1- \frac{\sqrt{\pi}}{2} h_{2}/T - \left(1- \frac{\sqrt{\pi}}{2} \right)h_{\infty} /T \Big)^2/100 \bigg],$
\item[(ii)] $|\perm{A}| \leq 2 T^{n} \exp[-n(1-h_{\infty} / T)^2 /10^{5}]$.
\end{itemize}
\end{theorem}
\paragraph*{}As discussed above, this provides a positive answer to \myRef{Question A} by viewing $h_\infty$ (and to a lesser extent $h_2$) as a proxy for `closeness' of a matrix $A$ to those in $\mathcal{P}$. As an easy corollary, if $\alpha, \beta \geq 0$ satisfy $|\perm{A}| \geq 2 T^{n} \exp[-n \alpha^2 \beta^2 / 10^5]$, then all but at most $\alpha n$ of the rows of $A$ contain an entry whose modulus is at least $T(1-\beta)$. And since the $l_2$ norm of any row of $A$ is at most $\Vert A \Vert_2$, no entry of $A$ can have modulus larger than $T$. Thus, entries of modulus $T(1-\beta)$ are nearly as large as possible. Moreover, if a row (or column) has an entry with very large modulus, then the remaining entries must have very small moduli (again since its $l_2$ norm is at most $\Vert A \Vert_2$). Thus, this theorem also provides a \textit{qualitative} stability result stating that matrices with large permanent must have many very large entries, and a row (or column) containing a large entry must have all its other entries small.
\paragraph*{}Note that \myRef{Theorem \ref{main theorem}} is only useful for values of $h_{\infty} / T$ that are not very close to $1$---namely when $1 - h_{\infty} / T \gg n^{-1/2}$. Although this does well in many cases, we believe that for large values of $h_\infty / T$, it is not optimal. For comparison, if $A$ is $\delta$ times the identity matrix, and $\delta \approx 1$, then $|\perm{A}| \approx e^{-n (1-\delta)} = e^{-n (1-h_{\infty}) }$, and we conjecture that this is essentially tight.
\begin{conjecture}\label{main conjecture}
There is some constant $C > 0$ and some polynomial $f(n)$ such that the following holds. If $A$ is an $n \times n$ matrix with complex entries and $\Vert A \Vert_2 \leq 1$, then $| \perm{A}| \leq f(n) e^{-Cn(1 - h_\infty)}$.
\end{conjecture}
\paragraph*{}As a step in this direction, we are able to prove the following, which better addresses \myRef{Question B} for matrices over $\mathbb{R}$.
\begin{theorem}\label{main theorem real case}
Let $A$ be an $n\times n$ matrix over $\mathbb{R}$ and $\Vert A \Vert_{2} \leq T \neq 0$. Then
\[
|\perm{A}| \leq T^{n} (n+6) \exp \left[ \dfrac{-\sqrt{n(1-h_{\infty} / T)}}{400} \right].
\]
\end{theorem}
\paragraph*{}As with \myRef{Theorem \ref{main theorem}}, a result like \myRef{Theorem \ref{main theorem real case}} that involves $h_{2}$ is also possible, and it essentially falls out of our proof directly. \myRef{Theorem \ref{main theorem real case}} is an improvement over \myRef{Theorem \ref{main theorem}} when $n^{-1/3} \gg 1- h_{\infty} / T$ and gives a meaningful bound provided $1-h_{\infty} / T \gg \log(n)^2 / n$. Although this yields a quantitatively better understanding for matrices over $\mathbb{R}$, we cannot shake the belief that neither of our main results (i.e., \myRef{Theorems \ref{main theorem}} and \myRef{\ref{main theorem real case}}) is best possible, and we discuss this further in \myRef{Section \ref{section conclusion}}.
\subsection*{Structure of paper}
The paper is devoted to proving \myRef{Theorems \ref{main theorem}} and \myRef{\ref{main theorem real case}}, which goes roughly as follows. First, we appeal to a result of Glynn \cite{glynn} that allows us to convert the problem of estimating the permanent into a problem about estimating the expected value of a certain random variable (\mySection{\ref{section set-up}}). We then use standard probabilistic tools to show certain concentration results for the random variable of interest, which in turn yield the estimates needed for our results. This is done for the complex-valued case in \mySection{\ref{section complex}}, which proves \myRef{Theorem \ref{main theorem}}. In \mySection{\ref{section real}}, we consider the real-valued case, where we analyze the corresponding random variable more carefully to obtain \myRef{Theorem \ref{main theorem real case}}. We conclude in \mySection{\ref{section conclusion}} with several open questions and conjectures, as well as a discussion of \myRef{Question B}.
\section{Definitions and set-up with random variables}\label{section set-up}
We first need to use an observation due to Glynn \cite{glynn} whereby the permanent of a matrix is expressed as the expectation of a certain random variable. We will work over the field $\mathbb{K}$, which will either be $\mathbb{R}$ or $\mathbb{C}$.
\paragraph*{}Given an $n \times n$ matrix $A$ over $\mathbb{K}$ and $x \in \mathbb{K}^n$, set $y = Ax$, and define the \textit{Glynn estimator} of $A$ at $x$ to be
\[
Gly_{x} (A) = \prod_{i=1} ^{n} \overline{x}_i \times \prod_{i=1} ^{n} y_i,
\]
where $\overline{z}$ denotes the complex conjugate of $z$. Let $X \in \mathbb{K}^n$ be the random variable whose coordinates are independently selected uniformly on $|z| = 1$, and let $Y = AX$ (note: if $\mathbb{K} = \mathbb{C}$, then each coordinate of $X$ is distributed continuously over the unit circle, whereas if $\mathbb{K}= \mathbb{R}$, then $X$ is chosen uniformly from the discrete set $\{-1,1\}^n$). Then
\[
\perm{A} = \mathbb{E} [Gly_{X} (A)] = \mathbb{E} \left[ \prod_{i=1} ^{n} \overline{X}_i Y_i \right],
\]
obtained simply by expanding out the product in the Glynn estimator and using the fact that the $X_i$ are independent with mean $0$ and variance $1$ (see the original proof due to Glynn \cite{glynn} or also \cite{gurvits, aaronson, nguyen}). Therefore, by convexity (which we are about to use twice), we have
\[
| \perm{A} | \leq \mathbb{E} \left[ \prod_{i=1} ^{n} | \overline{X}_i Y_i| \right] = \mathbb{E} \left[ \prod_{i=1} ^{n} |Y_i| \right] \leq \mathbb{E} \left[ \left( \dfrac{1}{n}\sum_{i=1} ^{n} |Y_i| \right) ^{n} \right] = \mathbb{E} \left[ \left( \dfrac{\Vert AX \Vert_{1}}{n} \right) ^{n} \right].
\]
Note that from here, we could say (by Cauchy--Schwarz)
\[
\dfrac{\Vert AX \Vert_{1}}{n} \leq \dfrac{\Vert AX \Vert_{2}}{\sqrt{n}} = \dfrac{\Vert AX \Vert_{2}}{\Vert X \Vert_{2}} \leq \Vert A \Vert_{2},
\]
thus obtaining the inequality $| \perm{A} | \leq \Vert A \Vert_2 ^n$ of \myRef{Theorem \ref{norm result}} (the equality case follows by considering equality in the above estimates).
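\paragraph*{}The identity $\perm{A} = \mathbb{E}[Gly_X(A)]$ also gives a simple Monte Carlo procedure, which makes the quantities in this section easy to probe numerically. The sketch below (illustrative only; the test matrix, sample size, and seed are our choices) averages the Glynn estimator over random sign vectors, i.e.\ the case $\mathbb{K}=\mathbb{R}$, and compares the result with the exact permanent and with the bound $\Vert A \Vert_2^n$.
\begin{verbatim}
import numpy as np
from itertools import permutations
from math import prod

def permanent(A):
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

def glynn_mc(A, samples=200000, seed=0):
    # Average of Gly_X(A) = prod_i X_i * prod_i (AX)_i over X uniform in {-1,1}^n.
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    total = 0.0
    for _ in range(samples):
        x = rng.choice([-1.0, 1.0], size=n)
        total += np.prod(x) * np.prod(A @ x)
    return total / samples

A = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.7, 0.1],
              [0.1, 0.2, 0.5]])
print(permanent(A))                    # exact permanent
print(glynn_mc(A))                     # Monte Carlo estimate (close on average)
print(np.linalg.norm(A, ord=2) ** 3)   # the bound |perm(A)| <= ||A||_2^n
\end{verbatim}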
\subsection*{Specializing to norm at most 1}
Note that to prove our results, it suffices to prove them for the case $\Vert A \Vert_{2} \leq 1$. This is because otherwise, we could simply scale the matrix by some $\alpha$ to have norm at most 1, and because $\perm{A} = \alpha^n \perm{A/ \alpha}$, our results would follow. As such, we will henceforth assume $\Vert A \Vert_2 \leq 1$ (explicitly making note of when we do), but this choice is simply for notational ease. We remark that the set-up thus far has also been employed in several other papers \cite{gurvits, aaronson, nguyen}; however, the remainder of this paper deviates from the previous literature.
\section{Proof of \myRef{Theorem \ref{main theorem}} ($\mathbb{K} = \mathbb{C}$)}\label{section complex}
In the setting where $\Vert A \Vert_2 \leq 1$, the permanent is always bounded above by $1$ (as shown above), and we want to conclude that under certain conditions, it must be (exponentially) small. We know (since $0 \leq \Vert A X \Vert_1 /n \leq \Vert A \Vert_{2} \leq 1$) that for all $\varepsilon \geq 0$ and all $\tilde{\mu} \geq 0$,
\[
|\perm{A}| \leq \mathbb{E} \left[ \left( \dfrac{\Vert AX \Vert_{1}}{n} \right) ^{n} \right] \leq (\tilde{\mu} /n + \varepsilon)^n + \mathbb{P} (\Vert A X \Vert_1 \geq \tilde{\mu} + \varepsilon n).
\]
We will pick $\tilde{\mu}$ suitably small with $\tilde{\mu} \geq \mathbb{E}[ \Vert AX \Vert_1 ]$ and then argue that $\Vert A X \Vert_1$ is tightly concentrated about its mean, which will complete the proof.
\subsection*{The mean of $\Vert AX\Vert_1$}
We appeal to a theorem of K\"onig, Sch\"utt, and Tomczak-Jaegermann \cite{konig}, which is a variant of Khintchine's inequality conveniently well-suited for our situation (in fact, $X$ was chosen in part so that we could apply this result directly).
\begin{theorem}[K\"onig et al.\ \cite{konig}, $1999$] \label{konig_ineq}
Let $\mathbb{K}$ be $\mathbb{R}$ or $\mathbb{C}$. Suppose $\vec{a} = (a_1, \ldots , a_n) \in \mathbb{K}^n$ is fixed, and suppose each coordinate of $\xi \in \mathbb{K}^n$ is independently distributed uniformly on $|z| = 1$. Then
\[
\left| \mathbb{E} \left[ \left| \sum_{i} a_i \xi_i \right| \right] - \Lambda_{\mathbb{K}} \Vert \vec{a} \Vert_{2} \right| \leq \left(1 - \Lambda_{\mathbb{K}} \right) \Vert \vec{a} \Vert_{\infty},
\]
where $\Lambda_{\mathbb{R}} = \sqrt{2/ \pi}$ and $\Lambda_{\mathbb{C}} = \sqrt{\pi}/2$.
\end{theorem}
Applying this to each row of $A$ (and using linearity of expectation) gives
\begin{proposition}\label{mean bound}
With $A$ and $X \in \mathbb{C}^n$ as in \myRef{Section \ref{section set-up}}, we have
\[
\mathbb{E}[\Vert A X \Vert_1 /n] \leq \dfrac{1}{n} \sum_{i=1} ^{n} \left[\sqrt{\pi}/2 \Vert r_i \Vert_2 + \left(1 - \sqrt{\pi}/2 \right) \Vert r_i \Vert_{\infty} \right] = \dfrac{\sqrt{\pi}}{2} h_{2}(A) + \left(1 - \dfrac{\sqrt{\pi}}{2} \right) h_{\infty}(A).
\]
\end{proposition}
\subsection*{Concentration about mean}
To show concentration of $\Vert A X\Vert_1$ about its mean, we use a very general and useful result of Talagrand (a form of ``Talagrand's inequality"), which can be found in chapter 1 of his book \cite{talagrand}.
\begin{theorem}[Talagrand \cite{talagrand}, $1991$]\label{talagrand gaussian theorem}
Suppose $f : \mathbb{R}^n \to \mathbb{R}$ is such that $|f(x) - f(y)| \leq \sigma \Vert x - y \Vert_2$ for all $x, y \in \mathbb{R}^n$, and define the random variable $F = f( \xi_1, \xi_2, \ldots, \xi_n)$, where the $\xi_i$ are independent standard normal random variables. Then for all $t \geq 0$,
\[
\mathbb{P}(F > \mathbb{E}[F] +t) \leq e^{-2 t^2 / (\pi \sigma)^2}.
\]
\end{theorem}
We apply this result to our setting by way of a now standard trick that expresses our random variable of interest as a function of standard Gaussians. In fact, this trick is even discussed in \cite{talagrand}, so we could have saved a few lines of the following argument by simply citing a ``more applicable" version of \myRef{Theorem \ref{talagrand gaussian theorem}} (i.e., one for which this trick has already been incorporated); however, the trick so nicely captures the usefulness of \myRef{Theorem \ref{talagrand gaussian theorem}}, that we thought it worth recalling here.
\begin{proposition}\label{our concentration}
Suppose $\Vert A \Vert_2 \leq 1$, and let $X \in \mathbb{C}^n$ be as in \myRef{Section \ref{section set-up}}. Then for all $t \geq 0$,
\[
\mathbb{P}(\Vert AX \Vert_1 > \mathbb{E}[\Vert AX \Vert_1] + tn) \leq e^{- n t^2 / \pi^3}.
\]
\end{proposition}
\begin{proof}
To make use of \myRef{Theorem \ref{talagrand gaussian theorem}}, we need to define a suitable $f : \mathbb{R}^n \to \mathbb{R}$, which we do in pieces. First define $\Phi : \mathbb{R} \to \mathbb{R}$ via
\[
\Phi(u) = \dfrac{1}{\sqrt{2 \pi}} \int_{-\infty} ^{u} e^{-x ^2 / 2} \dd{x},
\]
which is the probability that a standard Gaussian is at most $u$. Then define $g : \mathbb{R}^n \to \mathbb{C}^n$ as
\[
g(x_1, \ldots , x_n) = \begin{pmatrix}
e^{2 \pi i \Phi(x_1)}\\
e^{2 \pi i \Phi(x_2)}\\
\vdots \\
e^{2 \pi i \Phi(x_n)}
\end{pmatrix},
\]
and, finally, set $f(x) = \Vert A g(x) \Vert_{1}$.
\paragraph*{}Notice that if $\xi_1, \xi_2, \ldots, \xi_n$ are independently sampled from the standard normal distribution, then each $\Phi(\xi_i)$ is distributed uniformly on $[0,1]$. Therefore $g(\xi_1, \ldots, \xi_n)$ has the same distribution as $X$, and so $F := f(\xi_1, \ldots, \xi_n)$ has the same distribution as $\Vert AX \Vert_1$.
\paragraph*{}Now let $x, y \in \mathbb{R}^n$ be arbitrary. Then we have
\begin{eqnarray*}
|f(x) - f(y)| &=& \Big | \Vert A g(x) \Vert_{1} - \Vert A g(y) \Vert_1 \Big | \leq \Vert A g(x) - A g(y) \Vert_{1} \leq \sqrt{n} \Vert A (g(x) - g(y)) \Vert_{2}\\
&\leq& \sqrt{n} \Vert A \Vert_2 \Vert g(x) - g(y) \Vert_{2} \leq \sqrt{n} \Vert g(x) - g(y) \Vert_{2}.
\end{eqnarray*}
Using the fact that $| e^{i \alpha} - 1 | \leq | \alpha|$ for all $\alpha \in \mathbb{R}$, we further bound the above by
\begin{eqnarray*}
\Vert g(x) - g(y) \Vert_{2} ^2 &=& \sum_{j=1} ^{n} |e^{2 \pi i \Phi(x_j)} - e^{2 \pi i \Phi(y_j)}|^2 =\sum_{j=1} ^{n} |e^{2 \pi i (\Phi(x_j) - \Phi(y_j))} - 1|^2\\
&\leq& (2 \pi)^2 \sum_{j=1} ^{n} |\Phi(x_j) - \Phi(y_j)|^2 \leq 2 \pi \sum_{j=1} ^{n} |x_j - y_j|^2 = 2 \pi \Vert x - y \Vert_2 ^{2}.
\end{eqnarray*}
Thus, $|f(x) - f(y) | \leq \sqrt{2 \pi n} \Vert x - y \Vert_2$, and appealing to \myRef{Theorem \ref{talagrand gaussian theorem}} with $\sigma = \sqrt{2 \pi n}$ yields
\[
\mathbb{P}(\Vert AX \Vert_1 > \mathbb{E}[\Vert A X \Vert_1] + tn) = \mathbb{P}(F > \mathbb{E}[F] + tn) \leq e^{- 2 (n t)^2 / (\pi \sqrt{2 \pi n})^2} = e^{- n t^2 / \pi ^3 }. \qedhere
\]
\end{proof}
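\paragraph*{}The two ingredients used so far, the bound on the mean and the concentration around it, are also easy to observe in simulation. The following sketch (illustrative only; the random matrix, dimension, and sample size are our choices) draws $X$ coordinate-wise uniformly on the unit circle, compares the empirical mean of $\Vert AX\Vert_1/n$ with the bound of \myRef{Proposition \ref{mean bound}}, and reports the empirical spread, which shrinks as $n$ grows in accordance with \myRef{Proposition \ref{our concentration}}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, trials = 20, 20000

# A random complex matrix, rescaled so that its operator 2-norm is 1.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A /= np.linalg.norm(A, ord=2)

h2 = np.mean(np.linalg.norm(A, axis=1))    # h_2(A)
hinf = np.mean(np.abs(A).max(axis=1))      # h_inf(A)
bound = (np.sqrt(np.pi) / 2) * h2 + (1 - np.sqrt(np.pi) / 2) * hinf

# Coordinates of X are independent and uniform on the complex unit circle.
X = np.exp(2j * np.pi * rng.uniform(size=(trials, n)))
vals = np.abs(X @ A.T).sum(axis=1) / n     # samples of ||AX||_1 / n

print(vals.mean(), bound)   # the empirical mean should not exceed the bound
print(vals.std())           # spread of ||AX||_1 / n around its mean
\end{verbatim}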
\subsection*{Finishing the proof for $\mathbb{K} = \mathbb{C}$}
\begin{proposition}\label{perm bound in terms of mean}
Let $\Vert A \Vert_2 \leq 1$ and $X \in \mathbb{C}^n$ be as in \myRef{Section \ref{section set-up}}. If $\mathbb{E}[\Vert A X \Vert_1 /n] = \mu$, then
\[
\mathbb{E}[(\Vert AX \Vert_1 /n)^n] \leq 2 \exp [-3 n (1-\mu)^2 / 100].
\]
\end{proposition}
\begin{proof}
Let $L = t \mu + (1-t)$ with $t \in [0, 1]$ to be determined. Since $0 \leq \Vert AX \Vert_1 / n \leq 1$, we have (appealing to \myRef{Proposition \ref{our concentration}} for the last inequality)
\begin{eqnarray*}
\mathbb{E}[(\Vert AX \Vert_1 / n)^n] & \leq & L^n + \mathbb{P}(\Vert AX \Vert_1/n > L)\\
&\leq& \exp[-n(1-L)] + \mathbb{P}(\Vert AX \Vert_1/n - \mu > (1-t)(1 - \mu))\\
&\leq& \exp[-nt(1-\mu)] + \exp[-n (1-t)^2 (1-\mu)^2 / \pi^3 ],
\end{eqnarray*}
We now take $2t(1-\mu) = \pi^3+2 -2 \mu - \pi^{3/2} \sqrt{\pi^3 + 4-4 \mu} $ (for which $t$ does lie in the interval $[0,1]$), so as to make the exponents equal. For this $t$, we obtain
\[
\mathbb{E}[(\Vert AX \Vert_1/n)^n] \leq 2\exp \Big [-n \big(\pi^3 + 2 - 2\mu - \pi^{3/2} \sqrt{\pi^3 + 4-4 \mu}\big)/2 \Big ].
\]
Then appealing to the Taylor series at $\mu =1$, we see that for all $\mu \in [0,1]$,
\[
\dfrac{\pi^3 + 2 - 2\mu - \pi^{3/2} \sqrt{\pi^3 + 4-4\mu}}{2} \geq \dfrac{(1-\mu)^2}{\pi^3} - \dfrac{2(1-\mu)^3}{ \pi^6} \geq (1-\mu)^2 \left( \dfrac{1}{\pi^3} - \dfrac{2}{\pi^6} \right) \geq \dfrac{3 (1- \mu)^2}{100}. \qedhere
\]
\end{proof}
\paragraph*{}We then readily obtain \myRef{Theorem \ref{main theorem}} simply by combining \myRef{Propositions \ref{mean bound}} and \myRef{\ref{perm bound in terms of mean}} and using the fact that if $\Vert A \Vert_2 \leq 1$, then $0 \leq h_{\infty} (A) \leq h_{2} (A) \leq 1$.
\section{Proof of \myRef{Theorem \ref{main theorem real case}} (better results for $\mathbb{K} = \mathbb{R}$)}\label{section real}
For matrices over $\mathbb{R}$, our general strategy is the same as before, but we first partition the rows of $A$ into those that contain `big' entries and those that do not. We show that the contribution due to rows with large entries has small variance, and although the rows without large entries may each contribute something of high variance, we benefit from the fact that there simply aren't that many such rows. In this way, we are able to obtain better concentration of $\Vert AX \Vert_1$ about its mean, which in turn gives a better bound on $\perm{A}$.
\paragraph*{}We are not sure exactly how to adapt this argument when $\mathbb{K} = \mathbb{C}$, although we admittedly didn't try very hard to do so. We feel confident (especially in light of \myRef{Theorem \ref{main theorem real case}}) that \myRef{Theorem \ref{main theorem}} can be improved, but we do not think that \myRef{Theorem \ref{main theorem real case}} is best possible either (which is why we haven't worried so much about extending it to $\mathbb{K} = \mathbb{C}$). See \myRef{Section \ref{section conclusion}} for a discussion of several related conjectures (some perhaps more true than others) and open problems.
\subsection*{Set-up for the real-valued case}
As in \myRef{Section \ref{section set-up}}, we let $A$ be an $n \times n$ matrix over $\mathbb{R}$ with $\Vert A \Vert_{2} \leq 1$. Define $t = 1 - h_{\infty} (A)$. Then to prove \myRef{Theorem \ref{main theorem real case}}, our goal is to show
\[
|\perm{A}| \leq (n+6) \exp[- \sqrt{nt} / 400].
\]
\paragraph*{}Let $\varepsilon > 0$ and $1/10 > \lambda > 0$ be parameters to be determined (we will end up choosing $\varepsilon = t / 10$ and $\lambda = 64 / \sqrt{nt}$). We now partition the rows of $A$ into ``big rows" (those containing an element of absolute value at least $1 - \lambda$) and ``small rows" (the rest). Suppose there are $b$ big rows and $l = n-b$ small rows. Recall that because $\Vert A \Vert_{2} \leq 1$, each row and column of $A$ has $l_2$-norm at most $1$. Thus, `large' entries (those of absolute value at least $1 - \lambda$) must appear in different rows and columns. By multiplying $A$ by appropriate permutation matrices and the appropriate $\pm 1$-diagonal matrix (which changes neither the norm, nor the absolute value of the permanent, nor the values of $t, b,$ or $l$), we can assume $A$ is of the form:
\[
A = \left(
\begin{array}{c}
B \\
L
\end{array}
\right),
\]
where $B$ is a $b \times n$ matrix, the $(i,i)$-entries of $B$ are all positive with size at least $1-\lambda$, and all the rest of the entries in $A$ have absolute value less than $1 - \lambda$. For convenience, we will assume $b > 0$ and $l > 0$, for if not, our same argument would apply with only superficial alterations.
\paragraph*{}We recall our earlier set-up as in the complex-case (but with $X \in \mathbb{R}^n$ now uniformly distributed over $\{-1, 1 \}^n$). Then for all $\tilde{\mu}_B, \tilde{\mu}_L \geq 0$, we have
\begin{equation}
\begin{split}
|\perm{A}| &\leq \mathbb{E}_{X} \left[ \left( \dfrac{\Vert AX \Vert_{1}}{n} \right) ^n \right] = \mathbb{E}_{X} \left[ \left( \dfrac{\Vert LX \Vert_{1} + \Vert BX \Vert_{1}}{n} \right) ^n \right]\\
&\leq \left(\dfrac{\tilde{\mu}_{L} + \tilde{\mu}_{B}}{n} + 2 \varepsilon \right) ^{n} + \mathbb{P} \left(\Vert LX \Vert_{1} \geq \tilde{\mu}_{L} + \varepsilon n \right) + \mathbb{P} \left(\Vert BX \Vert_{1} \geq \tilde{\mu}_{B} + \varepsilon n \right),
\end{split}\label{template bound}
\end{equation}
where (as before) the last inequality is justified by the fact that the random variable within the expected value is bounded above by $1$.
\paragraph*{}We choose
\begin{eqnarray*}
\tilde{\mu}_B &=& \sum_{i = 1} ^{b} \left[ \sqrt{\dfrac{2}{\pi}} + \left(1 - \sqrt{\dfrac{2}{\pi}} \right) \Vert r_i \Vert _\infty \right] = \sum_{i = 1} ^{b} \left[1 - \left(1 - \sqrt{\dfrac{2}{\pi}} \right) (1 - \Vert r_i \Vert _\infty) \right], \qquad \text{and}\\
\tilde{\mu}_L &=& \sum_{i > b} ^{n} \left[ \sqrt{\dfrac{2}{\pi}} + \left(1 - \sqrt{\dfrac{2}{\pi}} \right) \Vert r_i \Vert _\infty \right] = \sum_{i > b} ^{n} \left[1 - \left(1 - \sqrt{\dfrac{2}{\pi}} \right) (1 - \Vert r_i \Vert _\infty) \right],
\end{eqnarray*}
where (again) $r_i$ is the $i^{\text{th}}$ row of $A$ (note, $\Vert r_i \Vert_{\infty} = b_{i,i}$ for all $i \leq b$). Then by \myRef{Theorem \ref{konig_ineq}} (this time with $\mathbb{K} = \mathbb{R}$), we have $\tilde{\mu}_L \geq \mathbb{E} [ \Vert LX \Vert_1 ]$ and $\tilde{\mu}_B \geq \mathbb{E}[ \Vert BX \Vert_1 ]$, and by the definitions
\begin{equation}\label{t and mu}
\dfrac{\tilde{\mu}_L + \tilde{\mu}_B}{n} = 1 - \left(1 - \sqrt{\dfrac{2}{\pi}} \right) \dfrac{1}{n} \sum_{i=1} ^{n} \bigg( 1 - \Vert r_i \Vert_\infty \bigg) = 1 - \left(1 - \sqrt{\dfrac{2}{\pi}} \right) t.
\end{equation}
\paragraph*{}To take advantage of \eqref{template bound}, we need only exhibit concentration bounds for $\Vert LX \Vert_1$ and $\Vert BX \Vert_1$.
\subsection*{Concentration of $\Vert LX \Vert_1$}
To show concentration of $\Vert LX \Vert_1$ about its mean, we will again apply a version of Talagrand's inequality (but this time suited for the discrete distribution over $\{-1, 1\}^n$). Instead of showing the derivation of this from the corresponding general result in \cite{talagrand} (as we did before), we will simply cite \cite{ailon}, in which the following statement appears as \myRef{Theorem 3.3}.
\begin{theorem}\label{Talagrand concentration}
Suppose $M$ is a $k \times n$ real-valued matrix such that $\Vert M \vec{x} \Vert_{1} \leq \sigma \Vert \vec{x} \Vert_2$ for all $\vec{x} \in \mathbb{R}^{n}$. Let $\xi \in \mathbb{R}^n$ be chosen uniformly from $\{-1,1\}^n$, and let $m$ be a median of $\Vert M \xi \Vert_1$. Then for all $\gamma \geq 0$, we have $\mathbb{P} ( | \Vert M \xi \Vert_{1} - m| > \gamma) \leq 4 e^{-\gamma^2 / (8 \sigma^2)}$.
\end{theorem}
\begin{lemma}\label{bound on LX deviation}
With notation as before, if $\varepsilon n \geq 16 \sqrt{nt \log (n) / \lambda}$, then
\[
\mathbb{P} \left(\Vert LX \Vert_{1} \geq \tilde{\mu}_{L} + \varepsilon n \right) \leq 4 \exp \left[ \dfrac{-\varepsilon^2 n \lambda}{32 t} \right].
\]
\end{lemma}
\begin{proof}
Note that for all $\vec{x} \in \mathbb{R}^n$, we have $\Vert L \vec{x} \Vert_{1} \leq \sqrt{l} \Vert L \vec{x} \Vert_{2} \leq \sqrt{l} \Vert A \vec{x} \Vert_{2} \leq \sqrt{l} \Vert \vec{x} \Vert_2$. Thus, if $m$ is a median of $\Vert L X \Vert_1$, then by \myRef{Theorem \ref{Talagrand concentration}}, we have
\begin{equation}
\mathbb{P}(| \Vert LX \Vert_1 - m| > \gamma ) \leq 4 e^{-\gamma^2 / (8 l)}. \label{median bound on L}
\end{equation}
From this, we see that $\Vert LX \Vert_1$ is tightly concentrated about its \textit{median}. However, this also implies
\begin{equation}\label{median is close to mean}
m \leq \mathbb{E}[ \Vert L X \Vert_1 ] + 8 \sqrt{l \log n},
\end{equation}
since otherwise, we would have
\begin{eqnarray*}
\mathbb{E}[ \Vert L X \Vert_1 ] &\geq& \left( \mathbb{E}[ \Vert L X \Vert_1 ] + 4 \sqrt{l \log n} \right) \cdot \mathbb{P} \left(| \Vert LX \Vert_1 - m| \leq 4 \sqrt{l \log n} \right)\\
&\geq& \left( \mathbb{E}[ \Vert L X \Vert_1 ] + 4 \sqrt{l \log n} \right) \cdot (1 - 4 / n^2 )\\
&=& \mathbb{E}[ \Vert L X \Vert_1 ] + 4 \sqrt{l \log n} - \left( \mathbb{E}[ \Vert L X \Vert_1 ] + 4 \sqrt{l \log n} \right ) \cdot 4/ n^2.
\end{eqnarray*}
And subtracting $\mathbb{E}[ \Vert LX \Vert_1 ]$ from both sides and rearranging, we would obtain
\[
n^2 \leq 4 + \dfrac{\mathbb{E}[ \Vert L X \Vert_1 ]}{\sqrt{l \log n}} \leq 4 + \dfrac{n}{\sqrt{\log n}},
\]
which is a contradiction if $n > 2$ (whereas for $n \leq 2$, the desired bound on $m$ is implied by $m \leq n$ [not that it matters]). Therefore, appealing to \eqref{median is close to mean}, we have
\[
\mathbb{P} \left(\Vert LX \Vert_{1} \geq \tilde{\mu}_{L} + \varepsilon n \right) \leq \mathbb{P} \left(\Vert LX \Vert_{1} \geq \mathbb{E}[ \Vert LX \Vert_1 ] + \varepsilon n \right) \leq \mathbb{P} \left(\Vert LX \Vert_{1} \geq m + \varepsilon n - 8 \sqrt{ l \log n} \right).
\]
Furthermore, if $\varepsilon n \geq 16 \sqrt{l \log n}$, then we can combine this with \eqref{median bound on L} to obtain
\begin{equation}
\text{if $\varepsilon n \geq 16 \sqrt{l \log n}$, then} \qquad \mathbb{P} \left(\Vert LX \Vert_{1} \geq \tilde{\mu}_{L} + \varepsilon n \right) \leq 4 \exp \left[ \dfrac{-\varepsilon^2 n^2}{32 l} \right].\label{original bound on LX deviation}
\end{equation}
Finally, since $nt \geq \sum_{i=b+1} ^{n} (1 - \Vert r_i \Vert_{\infty}) \geq l \lambda$, we know $l \leq nt / \lambda$, completing the proof by \eqref{original bound on LX deviation}.
\end{proof}
\subsection*{Concentration of $\Vert BX \Vert_1$}
We now focus on getting an upper bound on $\mathbb{P}(\Vert BX \Vert_1 \geq \tilde{\mu}_B + \varepsilon n)$. We first recall the following classical concentration result.
\begin{proposition}[Hoeffding's inequality]\label{hoeffding}
Let $a_1, \ldots , a_k$ be real numbers (not all of which are $0$), and let $\xi_1, \xi_2, \ldots , \xi_k$ be independent each distributed uniformly on $\{-1, 1\}$. Then for all $\gamma \geq 0$,
\[
\mathbb{P} \left( \sum_{i=1} ^{k} a_i \xi_i \geq \gamma \right) \leq \exp \left[ \dfrac{- \gamma^2}{2 \sum_{i=1} ^{k} a_i ^2} \right].
\]
\end{proposition}
Let $\tilde{B} = \left(
\begin{array}{c}
B \\
0
\end{array}
\right)$ be the $n \times n$ matrix whose first $b$ rows are given by $B$ and the rest are $0$. Our key step here is replacing $\Vert BX \Vert_1$ with $\langle X, \tilde{B} X \rangle$, via the following lemma\footnote{Extending this step is the main obstacle to applying the present argument when $\mathbb{K} = \mathbb{C}$.}.
\begin{lemma}\label{BX is close to inner product}
With notation as before, if $\lambda < 0.1$ then
\[
\mathbb{P} ( \Vert BX \Vert_1 \geq \tilde{\mu}_B + \varepsilon n) \leq \mathbb{P} ( \langle X, \tilde{B} X \rangle \geq \tilde{\mu}_B + \varepsilon n) + n e^{-1/(5 \lambda)}.
\]
\end{lemma}
\begin{proof}
It suffices to show $\mathbb{P} ( \Vert BX \Vert_1 \neq \langle X, \tilde{B} X \rangle ) \leq n e^{-1/(5 \lambda)}$. The idea is that since each row of $B$ is dominated by a single large entry (namely $b_{i,i}$), each entry of $BX$ is a random sum dominated by a single large term (namely $X_i b_{i,i}$). Thus, it is very unlikely that any entry of $BX$ would have a different sign than $X_i b_{i,i}$. This is made rigorous as follows.
\paragraph*{}Recall that we ordered the columns of $B$ so that the $(i,i)$-entry is the largest in its row, and that $b_{i,i} \geq 1- \lambda$. Letting $Y_i$ be the $i^{\text{th}}$ coordinate of $B X$, we have, by a simple union bound,
\[
\mathbb{P} ( \Vert BX \Vert_1 \neq \langle X, \tilde{B} X \rangle ) \leq \sum_{i=1} ^{b} \mathbb{P}( |Y_i| \neq X_i Y_i ) = \sum_{i=1} ^{b} \mathbb{P}(X_i Y_i < 0) = \sum_{i=1} ^{b} \mathbb{P} \left(\sum_{j=1} ^{n} X_i X_j b_{i,j} < 0 \right).
\]
Using the fact that for any given $i$, the random vector $(X_i X_j)_{j \neq i}$ has the same joint distribution as $(X_j)_{j \neq i}$ (and that $X_i ^2 = 1$), we obtain by \myRef{Proposition \ref{hoeffding}}
\[
\sum_{i=1} ^{b} \mathbb{P} \left( \sum_{j=1} ^{n} X_i X_j b_{i,j} < 0 \right) = \sum_{i=1} ^{b} \mathbb{P} \left( b_{i,i} < \sum_{j \neq i} ^{n} X_j b_{i,j} \right) \leq \sum_{i=1} ^{b} \exp \left[ \dfrac{- b_{i,i} ^2}{2 \sum_{i\neq j} b_{i,j} ^2} \right].
\]
Since $b_{i,i} \geq 1-\lambda$ and $\sum_{j} b_{i,j} ^2 \leq 1$, this in turn is bounded by
\[
\sum_{i=1} ^{b} \exp \left[ \dfrac{- b_{i,i} ^2}{2 \sum_{i\neq j} b_{i,j} ^2} \right] \leq n \exp \left[ \dfrac{- (1-\lambda) ^2}{2 (1- (1-\lambda)^2)} \right] \leq n e^{-1/(5 \lambda)},
\]
where the last inequality holds because $0 < \lambda < 0.1$: indeed, $(1-\lambda)^2 \geq 0.81$ and $1-(1-\lambda)^2 = \lambda(2-\lambda) \leq 2\lambda$, so the exponent is at most $-0.81/(4\lambda) \leq -1/(5\lambda)$.
\end{proof}
\paragraph*{}We can now exploit the fact that $\langle X, \tilde{B} X \rangle$ is a degree two polynomial over $\{-1, 1\}^n$, allowing us to use any of a variety of concentration inequalities. We will use an inequality of Bonami \cite{bonami}, which was the first \textit{hypercontractivity inequality} of its type. A detailed exposition of such results can be found in chapter 9 of O'Donnell's book \cite{odonnell}, and a comparison of this to more recent polynomial concentration inequalities can be found in \cite{schudy}.
\begin{theorem}[Bonami \cite{bonami}, 1970]\label{bonami theorem}
Let $F : \mathbb{R}^{n} \to \mathbb{R}$ be a degree $k$ polynomial, and consider the random variable $Z = F(\xi_1, \xi_2, \ldots , \xi_n)$, where the $\xi_i$ are independent with each distributed uniformly over $\{-1, 1\}$. Then for all $q \geq 2$, we have $\mathbb{E}[|Z|^{q}] \leq \left( (q-1)^{k} \mathbb{E}[Z^2] \right)^{q/2}.$
\end{theorem}
\begin{lemma}\label{bound on inner product}
With notation as before, if $\varepsilon n \geq 4 e \sqrt{nt}$, then
\[
\mathbb{P} ( \langle X, \tilde{B} X \rangle \geq \tilde{\mu}_B + \varepsilon n) \leq \exp \left( \dfrac{- \varepsilon n}{2e \sqrt{nt}} \right).
\]
\end{lemma}
\begin{proof}
For $\vec{x} \in \mathbb{R}^n$, define $F(x_1, x_2, \ldots , x_n) = \langle \vec{x}, \tilde{B} \vec{x} \rangle - \displaystyle \sum_{i=1} ^{b} b_{i,i}$, and define the random variable $Z = F(X_1, \ldots , X_n)$. Then $\mathbb{P} ( \langle X, \tilde{B} X \rangle \geq \tilde{\mu}_B + \varepsilon n) \leq \mathbb{P}( Z \geq \varepsilon n)$, since\footnote{In fact, we could have simply taken $\tilde{\mu}_B = \sum_{i \leq b} b_{i,i}$, but we chose instead to define it similarly to $\tilde{\mu}_L$, a change which only affects the constants in our end result.} $\tilde{\mu}_{B} \geq \sum_{i\leq b} b_{i,i}$. Now $F(x_1, x_2, \ldots, x_n)$ is a degree $2$ polynomial, and moreover, by expanding out the sums and using the fact that terms such as $\mathbb{E}[X_i X_j]$ vanish when $i \neq j$, we obtain
\begin{eqnarray*}
\mathbb{E}[Z^2] &=& \mathbb{E} \left[ \left( \sum_{i=1} ^{b} \left[-b_{i,i} + \sum_{j = 1} ^{b} X_i X_j b_{i,j} \right] + \sum_{i=1} ^{b} \sum_{j = b+1} ^{n} X_i X_j b_{i,j} \right)^2 \right]\\
&=& \mathbb{E} \left[ \left( \sum_{i=1} ^{b} \left[-b_{i,i} + \sum_{j = 1} ^{b} X_i X_j b_{i,j} \right] \right)^2 \right] + \mathbb{E} \left[ \left( \sum_{i=1} ^{b} \sum_{j = b+1} ^{n} X_i X_j b_{i,j} \right)^2 \right]\\
&=& \sum_{i=1} ^{b} \sum_{j < i} (b_{i,j} + b_{j,i})^2 + \sum_{i=1} ^{b} \sum_{j=b+1} ^{n} b_{i,j} ^2 \leq 2 \sum_{i=1} ^{b} \sum_{j < i} (b_{i,j} ^2 + b_{j,i} ^2) + 2\sum_{i=1} ^{b} \sum_{j=b+1} ^{n} b_{i,j} ^2\\
&=& 2 \sum_{i=1} ^{b} \left( -b_{i,i}^2 + \sum_{j=1} ^{n} b_{i,j} ^2 \right) \leq 2 \sum_{i=1} ^{b} (1 - b_{i,i} ^2) \leq 4 \sum_{i=1} ^{b} (1- b_{i,i}) \leq 4 nt.
\end{eqnarray*}
Applying \myRef{Theorem \ref{bonami theorem}} with $q = \varepsilon n / (2e \sqrt{nt})$---which is valid since by hypothesis this ratio is at least 2---together with Markov's inequality, and noting for the last step that $(q-1)\cdot 2\sqrt{nt} < 2q\sqrt{nt} = \varepsilon n / e$, we obtain
\[
\mathbb{P}( Z \geq \varepsilon n) \leq \mathbb{P} ( |Z|^{q} \geq (\varepsilon n)^q ) \leq \dfrac{\mathbb{E}[|Z|^q]}{(\varepsilon n)^{q}} \leq \left( \dfrac{(q-1) 2 \sqrt{nt}}{\varepsilon n} \right)^{q} \leq \exp \left( \dfrac{- \varepsilon n}{2e \sqrt{nt}} \right). \qedhere
\]
\end{proof}
\subsection*{Finishing the proof for $\mathbb{K} = \mathbb{R}$}
We now need to pick $\varepsilon$ and $\lambda$ to optimize the tradeoffs between our various upper bounds. We need the assumptions of \myRef{Lemmas \ref{bound on LX deviation}}, \myRef{\ref{BX is close to inner product}}, and \myRef{\ref{bound on inner product}}---namely (i) $\varepsilon n \geq 16 \sqrt{nt \log(n) / \lambda}$, (ii) $\lambda < 0.1$, and (iii) $\varepsilon n \geq 4 e \sqrt{nt}$---in which case we can combine these lemmas with \eqref{template bound} and \eqref{t and mu} to obtain
\begin{eqnarray*}
|\perm{A}| &\leq& \left(2 \varepsilon + \dfrac{\tilde{\mu}_{L} + \tilde{\mu}_{B}}{n} \right) ^{n} + \mathbb{P} \left(\Vert LX \Vert_{1} \geq \tilde{\mu}_{L} + \varepsilon n \right) + \mathbb{P} \left(\Vert BX \Vert_{1} \geq \tilde{\mu}_{B} + \varepsilon n \right)\\
&\leq& \left(2 \varepsilon + 1 - \left(1 - \sqrt{\dfrac{2}{\pi}} \right) t \right) ^{n} + 4 \exp \left[ \dfrac{-\varepsilon^2 n \lambda}{32 t} \right] + n e^{-1/(5 \lambda)} + \exp \left( \dfrac{- \varepsilon n}{2e \sqrt{nt}} \right).
\end{eqnarray*}
\paragraph*{}We will take $\varepsilon = t/10$ and $\lambda = 64 / \sqrt{nt}$, for which we claim that conditions (i), (ii), and (iii) are satisfied. Note that since our goal is to show $|\perm{A}| \leq (n+6) \exp[- \sqrt{nt} / 400]$, we may assume $\sqrt{nt} \geq 400 \log(n+6)$, since otherwise the claimed bound exceeds the trivial bound of $1$ (in any case, we are mainly interested in large $n$). Notice that with $\varepsilon$ and $\lambda$ as above:
\begin{itemize}
\item[(i)] $\varepsilon n \geq 16 \sqrt{nt \log(n) / \lambda}$ is equivalent to $\sqrt{nt} \geq 400 \log n$;
\item[(ii)] $\lambda < 0.1$ is equivalent to $\sqrt{nt} > 640$; and
\item[(iii)] $\varepsilon n \geq 4e \sqrt{nt}$ is equivalent to $\sqrt{nt} \geq 40 e$.
\end{itemize}
Thus, these choices of $\lambda$ and $\varepsilon$ allow us to appeal to the aforementioned results, obtaining
\begin{eqnarray*}
|\perm{A}| &\leq& \left(2 \varepsilon + 1 - \left(1 - \sqrt{\dfrac{2}{\pi}} \right) t \right) ^{n} + 4 \exp \left[ \dfrac{-\varepsilon^2 n \lambda}{32 t} \right] + n e^{-1/(5 \lambda)} + \exp \left( \dfrac{- \varepsilon n}{2e \sqrt{nt}} \right)\\
&\leq& \exp \left[-nt \left(1 - \sqrt{2/\pi} - 0.2 \right) \right] + 4 \exp \left[ \dfrac{-\sqrt{nt}}{50} \right] + n \exp \left[-\dfrac{\sqrt{nt}}{320} \right] + \exp \left[ \dfrac{- \sqrt{nt}}{20 e} \right]\\
&\leq& (n+6) \exp \left[\dfrac{-\sqrt{nt}}{400} \right],
\end{eqnarray*}
which completes the proof of \myRef{Theorem \ref{main theorem real case}}.
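\paragraph*{}Before moving on, here is a short numerical sanity check of the final estimate (this script is ours and purely illustrative; the values of $n$ and $t$ below are placeholders chosen so that $\sqrt{nt}\geq 400\log(n+6)$ holds):
\begin{verbatim}
import math

def final_bound_terms(n, t):
    """Evaluate the four terms of the last display (with eps = t/10 and
    lam = 64/sqrt(nt)) and the target bound (n+6)*exp(-sqrt(nt)/400)."""
    s = math.sqrt(n * t)
    t1 = math.exp(-n * t * (1 - math.sqrt(2 / math.pi) - 0.2))
    t2 = 4 * math.exp(-s / 50)
    t3 = n * math.exp(-s / 320)
    t4 = math.exp(-s / (20 * math.e))
    return t1 + t2 + t3 + t4, (n + 6) * math.exp(-s / 400)

n, t = 10**8, 1.0                                   # illustrative choice
assert math.sqrt(n * t) >= 400 * math.log(n + 6)    # standing assumption
total, target = final_bound_terms(n, t)
print(total, target, total <= target)               # expect: True
\end{verbatim}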
\section{Conclusion}\label{section conclusion}
Our biggest (and most natural) open question concerns the optimality of our main results. Namely, a proof of \myRef{Conjecture \ref{main conjecture}} as stated in \myRef{Section \ref{section intro}} would be very interesting. The main barrier preventing us from proving this conjecture is our reliance on Talagrand's inequality. For $\mathbb{K} = \mathbb{R}$, we partially mitigated the cost of using this inequality via \myRef{Lemma \ref{bound on LX deviation}}, but the application of \myRef{Theorem \ref{Talagrand concentration}} was still a crucial (though not the only) bottleneck. Our argument could conceivably be pushed further either by a more careful analysis that better uses \eqref{original bound on LX deviation} or by a more nuanced argument that splits the matrix $A$ into more than two pieces.
\paragraph*{}One could also try to avoid using Talagrand's inequality altogether. It is possible that some stronger inequality could replace it (by taking advantage of some aspects particular to our situation), but a more likely ``quick fix'' of this sort would be a more direct estimate of $\mathbb{E}[(\Vert A X \Vert_1 /n)^n]$ (in the real case, $AX$ is simply a vector-valued Rademacher sum, which is a well-studied random variable). On the other hand, it could be that the convexity bounds on the Glynn estimator already give away too much to recover anything stronger than what we have.
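\paragraph*{}For concreteness, here is a minimal Monte Carlo sketch (ours, for illustration only) of the Rademacher-based Glynn estimator underlying these quantities: with $Y=AX$ and $X$ uniform on $\{-1,1\}^n$, one has $\perm{A}=\mathbb{E}[\prod_i X_i Y_i]$, while the convexity bound replaces this by $\mathbb{E}[(\Vert AX\Vert_1/n)^n]$. The matrix and sample size below are arbitrary choices, and the Monte Carlo average is of course noisy.
\begin{verbatim}
import itertools
import numpy as np

def permanent_bruteforce(A):
    """Exact permanent by summing over all permutations (small n only)."""
    n = A.shape[0]
    return sum(np.prod([A[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(n, n))
A /= np.linalg.norm(A, 2)            # rescale so the operator norm is 1

samples = 200000
X = rng.choice([-1.0, 1.0], size=(samples, n))         # Rademacher vectors
Y = X @ A.T                                            # each row is A X

glynn = np.mean(np.prod(X * Y, axis=1))                # estimates perm(A)
convexity = np.mean((np.abs(Y).sum(axis=1) / n) ** n)  # E[(||AX||_1/n)^n]

print(permanent_bruteforce(A), glynn, convexity)
\end{verbatim}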
\paragraph*{}An entirely different approach would be to determine among matrices with given norm and $h_{\infty}$, which ones maximize $| \perm{A}|$ (it does not seem impossible that this maximum is always attained by a circulant matrix with all real entries). A characterization of these extremal matrices would certainly be very appealing, and one might hope that thinking along these lines would suggest a more combinatorial approach.
\paragraph*{}As far as \myRef{Question B} is concerned, we feel that there is still more to be said beyond the present results. Namely, our results only provide a necessary condition for a matrix to have a large permanent (i.e., $h_{\infty}$ must be large). But there is no clean converse to this statement; consider for example a diagonal matrix whose diagonal entries are all equal to $1$ except for a single entry equal to $0$ (this has large $h_{\infty}$ and permanent $0$). To continue the spirit of the question, we state the following variation of \myRef{Question B} (essentially echoing a question of \cite{aaronson}):
\paragraph*{Problem B$'$:}Find a (deterministic) polynomial-time algorithm that takes an $n \times n$ matrix $A$ of norm $1$ and decides whether $|\perm{A}| < n^{-100}$ or $|\perm{A}| > n^{-10}$ (with the understanding that the input matrix will satisfy one of these inequalities).
\paragraph*{}We attempted this along the following lines: ``if the matrix has large permanent, it must have many rows each of which is dominated by a single large entry. If the matrix is of this form, then [heuristic] hopefully that means the permanent is dominated by terms that use at least most of these large entries. Since there are so many large entries, we can efficiently compute the exact contribution of these dominant terms.'' However, our current results do not allow us to conclude that there are enough rows with large entries (we would like all but about $\log n$ of the rows but are limited to all but about $\log^2 n$ when $\mathbb{K} = \mathbb{R}$ and $\sqrt{n \log n}$ when $\mathbb{K} = \mathbb{C}$). And in fact, even if we could improve our result to the conjectured (and best possible) bound mentioned above, we still do not quite see how to make this heuristic argument yield a polynomial-time algorithm. We should note that Gurvits \cite{gurvits} found a \textit{randomized} algorithm accomplishing the goal of \myRef{Problem B$'$}, and in the deterministic setting, progress towards \myRef{Problem B$'$} was made in \cite{aaronson} which gives an algorithm in the case that the entries of $A$ are non-negative.
\subsection*{Further remarks}
\begin{itemize}
\item We note that there is a lot of freedom in choosing the random variable $X \in \mathbb{K}^{n}$ for the Glynn estimator ($X$ just needs to have independent components each satisfying $\mathbb{E}[X_i] =0$ and $\mathbb{E}[|X_i|^2] = 1$). For example, when $\mathbb{K} = \mathbb{R}$, it is tempting to replace $X \in \mathbb{R}^n$ with an $n$-dimensional Gaussian and bound the Glynn estimator by something like
\[
| \perm{A} | = \left| \mathbb{E} \left[ \prod_{i} X_i Y_i \right] \right| \leq \mathbb{E} \left[ \prod_{i} |X_i Y_i| \right] \leq \mathbb{E} \left[ \left( \dfrac{1}{n} \sum_{i} |X_i Y_i| \right)^n \right].
\]
But even if $A$ is the identity matrix this is already (exponentially) larger than $1$, which illustrates the difficulty with this approach.
\item Via an entirely different method, we were also able to get an upper bound on the permanent for matrices having only non-negative real entries by appealing to the results of \cite{gurvitsSam}. Unfortunately, the bound we obtained is strictly weaker than the results of the present paper, so it is omitted.
\end{itemize}
\textbf{Acknowledgement:} We thank Hoi Nguyen for introducing us to this problem and sharing \cite{nguyenPrivate}.
| {
"timestamp": "2016-06-27T02:01:51",
"yymm": "1606",
"arxiv_id": "1606.07474",
"language": "en",
"url": "https://arxiv.org/abs/1606.07474",
"abstract": "We prove a stability version of a general result that bounds the permanent of a matrix in terms of its operator norm. More specifically, suppose $A$ is an $n \\times n$ matrix over $\\mathbb{C}$ (resp. $\\mathbb{R}$), and let $\\mathcal{P}$ denote the set of $n \\times n$ matrices over $\\mathbb{C}$ (resp. $\\mathbb{R}$) that can be written as a permutation matrix times a unitary diagonal matrix. Then it is known that the permanent of $A$ satisfies $|\\text{perm}(A)| \\leq \\Vert A \\Vert_{2} ^n$ with equality iff $A/ \\Vert A \\Vert_{2} \\in \\mathcal{P}$ (where $\\Vert A \\Vert_2$ is the operator $2$-norm of $A$). We show a stability version of this result asserting that unless $A$ is very close (in a particular sense) to one of these extremal matrices, its permanent is exponentially smaller (as a function of $n$) than $\\Vert A \\Vert_2 ^n$. In particular, for any fixed $\\alpha, \\beta > 0$, we show that $|\\text{perm}(A)|$ is exponentially smaller than $\\Vert A \\Vert_2 ^n$ unless all but at most $\\alpha n$ rows contain entries of modulus at least $\\Vert A \\Vert_2 (1 - \\beta)$.",
"subjects": "Combinatorics (math.CO); Probability (math.PR)",
"title": "A stability result using the matrix norm to bound the permanent",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9890130557502124,
"lm_q2_score": 0.7154239897159439,
"lm_q1q2_score": 0.7075636662259742
} |
https://arxiv.org/abs/1510.02046 | On Non-Zero Component Graph of Vector Spaces over Finite Fields | In this paper, we study non-zero component graph $\Gamma(\mathbb{V})$ on a finite dimensional vector space $\mathbb{V}$ over a finite field $\mathbb{F}$. We show that the graph is Hamiltonian and not Eulerian. We also characterize the maximal cliques in $\Gamma(\mathbb{V})$ and show that there exists two classes of maximal cliques in $\Gamma(\mathbb{V})$. We also find the exact clique number of $\Gamma(\mathbb{V})$ for some particular cases. Moreover, we provide some results on size, edge-connectivity and chromatic number of $\Gamma(\mathbb{V})$. | \section{Introduction}
The study of graphs associated with various algebraic structures was initiated by Beck \cite{beck}, who introduced the idea of the zero divisor graph of a commutative ring with unity. Since then, a lot of research, e.g., \cite{survey2,zero-divisor-survey,anderson-livingston,graph-ideal,power1,power2,mks-ideal,int-vecsp-2,int-vecsp-1}, has been done connecting graph structures to various algebraic objects like semigroups, groups, rings, vector spaces, etc. In this paper, we continue the study of one such graph, called the Non-Zero Component Graph \cite{angsu-comm-alg} of a finite dimensional vector space $\mathbb{V}$ over a field $\mathbb{F}$ with respect to a basis $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$ of $\mathbb{V}$.
Throughout this paper, $\mathbb{F}$ is a finite field with $q=p^t$ elements, $p$ being a prime, and $n=\dim_{\mathbb{F}}(\mathbb{V})$. We study the size, edge-connectivity, maximal cliques, clique number, chromatic number and Hamiltonicity of these graphs, together with other related concepts.
\section{Some Preliminaries}
In this section, for convenience of the reader and also for later use, we recall some definitions, notations and results concerning elementary graph theory. For undefined terms and concepts the reader is referred to \cite{west-graph-book}.
By a graph $G=(V,E)$, we mean a non-empty set $V$ and a symmetric binary relation (possibly empty) $E$ on $V$. The set $V$ is called the set of vertices and $E$ is called the set of edges of $G$. Two elements $u$ and $v$ in $V$ are said to be adjacent if $(u,v) \in E$. $H=(W,F)$ is called a {\it subgraph} of $G$ if $H$ itself is a graph, $\phi \neq W \subseteq V$ and $F \subseteq E$. If $V$ is finite, the graph $G$ is said to be finite, otherwise it is infinite. If all the vertices of $G$ are pairwise adjacent, then $G$ is said to be {\it complete}. A complete subgraph of a graph $G$ is called a {\it clique}. A {\it maximal clique} is a clique which is maximal with respect to inclusion. The {\it clique number} of $G$, written as $\omega(G)$, is the maximum size of a clique in $G$. The {\it chromatic number} of $G$, denoted by $\chi(G)$, is the minimum number of colours needed to label the vertices so that adjacent vertices receive different colours. A subset $I$ of $V$ is said to be {\it independent} if the vertices in it are pairwise non-adjacent. The {\it independence number} of a graph, $\alpha(G)$, is the maximum size of an independent set of vertices in $G$. A {\it path} of length $k$ in a graph is an alternating sequence of vertices and edges, $v_0,e_0,v_1,e_1,v_2,\ldots, v_{k-1},e_{k-1},v_k$, where the $v_i$'s are distinct (except possibly the first and last vertices) and $e_i$ is the edge joining $v_i$ and $v_{i+1}$. We call this a path joining $v_0$ and $v_{k}$. A graph is {\it connected} if for any pair of vertices $u,v \in V$ there exists a path joining $u$ and $v$. A path whose first and last vertices coincide is called a {\it cycle}. A graph is said to be {\it Hamiltonian} if it contains a cycle consisting of all the vertices of $G$. A graph is said to be {\it Eulerian} if it contains a closed walk that uses every edge of $G$ exactly once.
\section{Definitions and Some Basic Results}
Firstly, we recall the definition of Non-zero Component graph of a finite dimensional vector space and some preliminary results from \cite{angsu-comm-alg}.
Let $\mathbb{V}$ be a vector space over a field $\mathbb{F}$ with $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$ as a basis and $\theta$ as the null vector. Then any vector $\mathbf{a} \in \mathbb{V}$ can be expressed uniquely as a linear combination of the form $\mathbf{a}=a_1\alpha_1+a_2\alpha_2+\cdots+a_n\alpha_n$. We denote this representation of $\mathbf{a}$ as its basic representation w.r.t. $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$. We define the {\it Non-Zero Component graph} of a finite dimensional vector space $\Gamma(\mathbb{V}_\alpha)=(V,E)$ (or simply $\Gamma(\mathbb{V})$) with respect to $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$ as follows: $V=\mathbb{V}\setminus \{\theta\}$ and for $\mathbf{a},\mathbf{b} \in V$, $\mathbf{a} \sim \mathbf{b}$ or $(\mathbf{a},\mathbf{b}) \in E$ if $\mathbf{a}$ and $\mathbf{b}$ share at least one $\alpha_i$ with non-zero coefficient in their basic representations, i.e., there exists at least one $\alpha_i$ along which both $\mathbf{a}$ and $\mathbf{b}$ have non-zero components. Unless otherwise mentioned, we take the basis on which the graph is constructed to be $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$. For some examples of $\Gamma(\mathbb{V})$, see Figure \ref{example-figure}.
\begin{figure}[ht]
\centering
\begin{center}
$\begin{array}{lr}
\includegraphics[scale=.4]{P3.pdf}~~~~~~~~~~~ & ~~~~~~~~~~ \includegraphics[scale=.4]{dim3overZ2.pdf}\\
\mathbf{dim(\mathbb{V})=2; \mathbb{F}=\mathbb{Z}_2} & \mathbf{dim(\mathbb{V})=3; \mathbb{F}=\mathbb{Z}_2}
\end{array}
$
\vspace{.3in}
$
\begin{array}{c}
\includegraphics[scale=.4]{dim2overZ3.pdf}\\
\mathbf{dim(\mathbb{V})=2; \mathbb{F}=\mathbb{Z}_3}
\end{array}
$
\caption{Examples of $\Gamma(\mathbb{V})$}
\label{example-figure}
\end{center}
\end{figure}
{\theorem \cite{angsu-comm-alg} \label{diameter-theorem} $\Gamma(\mathbb{V})$ is connected and $diam(\Gamma)=2$.}
{\theorem \cite{angsu-comm-alg} $\Gamma(\mathbb{V})$ is complete if and only if $\mathbb{V}$ is one-dimensional.}
{\theorem \cite{angsu-comm-alg} \label{independence-number-theorem} The independence number of $\Gamma(\mathbb{V})$, $\alpha(\Gamma(\mathbb{V}))=dim(\mathbb{V})$.}
{\theorem \label{degree-theorem} \cite{angsu-comm-alg} Let $\mathbb{V}$ be a vector space over a finite field $\mathbb{F}$ with $q$ elements and $\Gamma$ be its associated graph with respect to a basis $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$. Then, the degree of the vertex $c_1\alpha_{i_1}+c_2 \alpha_{i_2}+\cdots+c_k \alpha_{i_k}$, where $c_1 c_2 \cdots c_k \neq 0$, is $(q^k -1)q^{n-k}-1$.}
Now we prove some basic results when the underlying field $\mathbb{F}$ is finite.
{\theorem $\Gamma$ is not Eulerian.}\\
\noindent {\bf Proof: } If $q$ is odd, then $\Gamma$ is not Eulerian, as by Theorem \ref{degree-theorem} every vertex is of odd degree. If $q=2$, then from Theorem \ref{degree-theorem}, all the vertices with $1\leq k<n$ are of odd degree. Thus the graph is not Eulerian in any case.\hfill \rule{2mm}{2mm}
{\lemma \label{min-degree-lemma} If $\mathbb{V}$ is an $n$-dimensional vector space over a finite field $\mathbb{F}$ with $q$ elements, then the minimum degree $\delta$ of $\Gamma(\mathbb{V})$ is $q^{n-1}(q-1)-1$.}\\
\noindent {\bf Proof: } From Theorem \ref{degree-theorem}, the degree of the vertex $c_1\alpha_{i_1}+c_2 \alpha_{i_2}+\cdots+c_k \alpha_{i_k}$, where $c_1 c_2 \cdots c_k \neq 0$, is $(q^k -1)q^{n-k}-1$ i.e., $q^n-q^{n-k}-1$. Thus the degree will be minimized if $k=1$ and hence $\delta=q^n-q^{n-1}-1$.\hfill \rule{2mm}{2mm}
{\corollary \label{edge-connectivity} Edge connectivity of $\Gamma(\mathbb{V})$ is $q^{n-1}(q-1)-1$.}\\
\noindent {\bf Proof: } From \cite{plesnik}, as $\Gamma(\mathbb{V})$ is of diameter $2$ (by Theorem \ref{diameter-theorem}), its edge connectivity is equal to its minimum degree, i.e., $q^{n-1}(q-1)-1$. \hfill \rule{2mm}{2mm}
{\theorem \label{size-theorem} If $\mathbb{V}$ is an $n$-dimensional vector space over a finite field $\mathbb{F}$ with $q$ elements, then the order of $\Gamma(\mathbb{V})$ is $q^n-1$ and the size $m$ of $\Gamma(\mathbb{V})$ is $$\dfrac{q^{2n}-q^n+1-(2q-1)^n}{2}.$$}\\
\noindent {\bf Proof: } It is trivial to observe that the order of $\Gamma(\mathbb{V})$ is $q^n-1$. By Theorem \ref{degree-theorem}, the degree of the vertex $c_1\alpha_{i_1}+c_2 \alpha_{i_2}+\cdots+c_k \alpha_{i_k}$, where $c_1 c_2 \cdots c_k \neq 0$, is $(q^k -1)q^{n-k}-1$. Now, there are $\binom{n}{k}(q-1)^k$ vectors with exactly $k$ many $\alpha_i$'s in their basic representation. Since $2m$ equals the sum of the degrees of all the vertices in $\Gamma(\mathbb{V})$, we have
$$2m=\sum_{k=1}^n \binom{n}{k}(q-1)^k\left[(q^k -1)q^{n-k}-1\right]=\sum_{k=1}^n \binom{n}{k}(q-1)^k\left[(q^n -1)-q^{n-k}\right]$$
$$=(q^n-1)\sum_{k=1}^n \binom{n}{k}(q-1)^k - \sum_{k=1}^n \binom{n}{k}(q-1)^k q^{n-k}$$
$$=(q^n-1)\left[(q-1+1)^n-1 \right]-\left[(q+q-1)^n - q^n \right]=(q^n-1)^2 + q^n - (2q-1)^n$$
$$=q^{2n}-q^n+1-(2q-1)^n$$ and hence the result. \hfill \rule{2mm}{2mm}
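The following short script (ours, for illustration; not part of the original argument) checks the order and size formulas by brute force for a few small parameters. Since adjacency in $\Gamma(\mathbb{V})$ depends only on the supports of the vectors, it suffices to enumerate coordinate tuples in $\{0,1,\ldots,q-1\}$, with $0$ playing the role of the zero of $\mathbb{F}$.
\begin{verbatim}
from itertools import combinations, product

def gamma_graph(q, n):
    """Non-zero component graph: vertices are the nonzero vectors,
    and two vertices are adjacent iff their supports intersect."""
    vertices = [v for v in product(range(q), repeat=n) if any(v)]
    edges = [(u, v) for u, v in combinations(vertices, 2)
             if any(a != 0 and b != 0 for a, b in zip(u, v))]
    return vertices, edges

def size_formula(q, n):
    return (q ** (2 * n) - q ** n + 1 - (2 * q - 1) ** n) // 2

for q, n in [(2, 3), (3, 2), (3, 3)]:
    V, E = gamma_graph(q, n)
    print(q, n, len(V) == q ** n - 1, len(E) == size_formula(q, n))
\end{verbatim}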
\section{Maximal Cliques in $\Gamma(\mathbb{V})$}
In this section, we study the structure of maximal cliques in $\Gamma(\mathbb{V})$ and find its clique number. In the process, we show that $\Gamma(\mathbb{V})$ possesses two different classes of maximal cliques.
Let $S_\beta$ (skeleton of $\beta$) be the set of $\alpha_i$'s with non-zero coefficients in the basic representation of $\beta$ with respect to $\{\alpha_1,\alpha_2,\ldots, \alpha_n\}$. It is to be noted that two distinct $\beta$ may have the same $S_\beta$. Also $1 \leq |S_\beta| \leq n, \forall \beta \in \Gamma(\mathbb{V}_\alpha)$. Let $M$ be a maximal clique in $\Gamma(\mathbb{V}_\alpha)$ and define $S(M)=\{S_\beta: \beta \in M\}$ and $S[M]=\{|S_\beta|: S_\beta \in S(M)\}$. Since $M$ is a clique, $S_\alpha \cap S_\beta \neq \emptyset, \forall \alpha, \beta \in M$. By maximality of $M$, if $\alpha \in M$ and $S_\alpha \subset S_\beta$ for some $\beta \in \Gamma(\mathbb{V}_\alpha)$, then $\beta \in M$.
As $\emptyset \neq S[M] \subset \mathbb{N}$, by the well-ordering principle, it has a least element, say $k$. Then there exists some $\beta^* \in M$ with $|S_{\beta^*}|=k$, where $\beta^*=c_1\alpha_{i_1}+c_2\alpha_{i_2}+\cdots+c_k\alpha_{i_k}$. Now, accordingly as $k \leq n/2$ or $k>n/2$, we show that there exist two types of maximal cliques in $\Gamma(\mathbb{V}_\alpha)$.
{\theorem \label{k-leq-n/2-clique} Let $M$ be a maximal clique in $\Gamma(\mathbb{V}_\alpha)$. If $k$ is the least element of $S[M]$ and $k \leq n/2$, then $M$ belongs to a family of maximal cliques $\{M_{k,i}: 1 \leq k \leq n/2 ;i \in \{1,2,\ldots , n\}\}$ of $\Gamma(\mathbb{V}_\alpha)$ where $M_{k,i}=\{\beta \in \Gamma(\mathbb{V}_\alpha): \alpha_i \in S_\beta \mbox{ and }|S_\beta|\geq k \}$ and $$|M|=(q-1)\sum_{r=k-1}^{n-1} \binom{n-1}{r}(q-1)^r.$$}\\
\noindent {\bf Proof: } Since the minimum of $|S_\beta|$ for $\beta \in M$ is $k ~(\leq n/2)$ and $S_\alpha \cap S_\beta \neq \emptyset, \forall \alpha, \beta \in M$, by the Erd\H{o}s-Ko-Rado theorem \cite{erdos-ko-rado}, the maximum number of pairwise-intersecting $k$-subsets in $S(M)$ is $\binom{n-1}{k-1}$, and the maximum is achieved only if each $k$-subset contains a fixed element, say $\alpha_i$. As $M$ is a maximal clique, $M=\{\beta \in \Gamma(\mathbb{V}_\alpha): \alpha_i \in S_\beta \mbox{ and } |S_\beta|\geq k\}$.
Now, the number of $\beta$'s in $M$ with $|S_\beta|=k$ and $\alpha_i \in S_\beta$ is $\binom{n-1}{k-1} (q-1)^k$. Similarly, the numbers of $\beta$'s in $M$ with $|S_\beta|=k+1,k+2,\ldots,n$ and $\alpha_i \in S_\beta$ are $$\binom{n-1}{k} (q-1)^{k+1},\binom{n-1}{k+1} (q-1)^{k+2},\ldots,\binom{n-1}{n-1} (q-1)^n$$ respectively. Thus, we have
$$|M|=\binom{n-1}{k-1} (q-1)^k+ \binom{n-1}{k} (q-1)^{k+1}+ \binom{n-1}{k+1} (q-1)^{k+2}+ \cdots + \binom{n-1}{n-1} (q-1)^n$$
$$=(q-1)\left[ \binom{n-1}{k-1} (q-1)^{k-1}+ \binom{n-1}{k} (q-1)^{k}+ \cdots + \binom{n-1}{n-1} (q-1)^{n-1} \right]$$
$$=(q-1)\sum_{r=k-1}^{n-1} \binom{n-1}{r}(q-1)^r. $$
It is to be noted that for the same value of $k$, by fixing different $\alpha_i$'s we get different maximal cliques. Since these maximal cliques depend both on $k$ and $\alpha_i$, we get a family of maximal cliques $M_{k,i}$ where $1 \leq k \leq n/2$ and $i \in \{1,2,\ldots , n\}$, and $M \in M_{k,i}$.\hfill \rule{2mm}{2mm}
{\theorem \label{k>n/2-clique} Let $M$ be a maximal clique in $\Gamma(\mathbb{V}_\alpha)$. If $k$ is the least element of $S[M]$ and $k > n/2$, then $k=\lfloor n/2 \rfloor + 1$ and $M=\{\beta \in \Gamma(\mathbb{V}_\alpha):|S_\beta| \geq \lfloor n/2 \rfloor + 1\}$ and $$|M|=\sum_{r=k}^{n} \binom{n}{r}(q-1)^r.$$}\\
\noindent {\bf Proof: } Since $k > n/2$, any two subsets of $\{\alpha_1,\ldots,\alpha_n\}$ of size at least $k$ intersect, so $S_\alpha \cap S_\beta \neq \emptyset$ for all $\alpha, \beta \in M$ automatically. Thus the maximum number of pairwise-intersecting $k$-subsets in $S(M)$ is $\binom{n}{k}$. Thus by arguments similar to those in the proof of Theorem \ref{k-leq-n/2-clique},
$$|M|=\binom{n}{k} (q-1)^k+ \binom{n}{k+1} (q-1)^{k+1}+ \cdots + \binom{n}{n} (q-1)^n$$
$$=\sum_{r=k}^{n} \binom{n}{r}(q-1)^r.$$
Now, as $\{\beta \in \Gamma(\mathbb{V}_\alpha):|S_\beta| \geq k + 1\} \subset \{\beta \in \Gamma(\mathbb{V}_\alpha):|S_\beta| \geq k\}$, by maximality of $M$, $M=\{\beta \in \Gamma(\mathbb{V}_\alpha):|S_\beta| \geq k\}$ when $k$ is minimized provided $k>n/2$. Thus $k=\lfloor n/2 \rfloor + 1$ and hence the theorem.\hfill \rule{2mm}{2mm}
{\remark \label{clique-number-remark} It is obvious from Theorem \ref{k-leq-n/2-clique} that $|M_{k,i}|$ is maximum when $k=1$, i.e., $M_{1,i}=\{c_1\alpha_1+c_2\alpha_2+\cdots+c_n\alpha_n: c_i \neq 0\}$ and $|M_{1,i}|=(q-1)q^{n-1}$. Thus, the clique number of $\Gamma(\mathbb{V}_\alpha)$ is $$\omega(\Gamma(\mathbb{V}_\alpha))=\max \lbrace (q-1)q^{n-1}, \sum_{r=\lfloor n/2 \rfloor + 1}^{n} \binom{n}{r}(q-1)^r \rbrace $$ and it depends on the values of $q$ and $n$.
}
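As a quick illustration of this remark (the numbers below are ours), take $q=3$: for $n=2$ we get $(q-1)q^{n-1}=6$ while $\sum_{r=2}^{2}\binom{2}{r}2^r=4$, so $\omega(\Gamma(\mathbb{V}))=6$, whereas for $n=3$ we get $(q-1)q^{n-1}=18$ while $\sum_{r=2}^{3}\binom{3}{r}2^r=12+8=20$, so $\omega(\Gamma(\mathbb{V}))=20$.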
{\remark A maximal clique $M$ contains at most one $\alpha_i$: if $M$ contained $\alpha_i$ and $\alpha_j$ with $i \neq j$, then $\alpha_i \not\sim \alpha_j$, contradicting the fact that $M$ is a clique. Moreover, if $M$ is a maximal clique containing $\alpha_i$, then $M=M_{1,i}$. Indeed, every $\beta \in M$ is adjacent (or equal) to $\alpha_i$, i.e., every $\beta \in M$ has a non-zero component along $\alpha_i$, and hence $M \subseteq M_{1,i}$. Since $M_{1,i}$ is a clique and $M$ is a maximal clique, it follows that $M=M_{1,i}$. }
{\corollary \label{clique-number-for-q=2} If $q=2$, the clique number $\omega(\Gamma(\mathbb{V}))=2^{n-1}$.}\\
\noindent {\bf Proof: } If $q=2$, $(q-1)q^{n-1}=2^{n-1}$. Now, for $q=2$ and $n$ even (say $2m$), $$\sum_{r=\lfloor n/2 \rfloor + 1}^{n} \binom{n}{r}(q-1)^r=\sum_{r=m + 1}^{2m} \binom{2m}{r}=\dfrac{1}{2}\left(\sum_{r=0}^{2m} \binom{2m}{r} -\binom{2m}{m}\right)<2^{2m-1}=2^{n-1}.$$
If $q=2$ and $n$ is odd (say $2m+1$), $$\sum_{r=\lfloor n/2 \rfloor + 1}^{n} \binom{n}{r}(q-1)^r=\sum_{r=m + 1}^{2m+1} \binom{2m+1}{r}=\dfrac{1}{2}\sum_{r=0}^{2m+1} \binom{2m+1}{r} =2^{2m+1-1}=2^{n-1}.$$
Combining both cases and using Remark \ref{clique-number-remark}, we get $\omega(\Gamma(\mathbb{V}))=2^{n-1}$.\hfill \rule{2mm}{2mm}
{\corollary \label{clique-number-for-q>2-and-n-odd} If $q>2$ and $n$ is odd, the clique number $$\omega(\Gamma(\mathbb{V}))=\sum_{r=\lfloor n/2 \rfloor + 1}^{n} \binom{n}{r}(q-1)^r.$$}\\
\noindent {\bf Proof: } It follows from the inequality proved in the Appendix and Remark \ref{clique-number-remark}. \hfill \rule{2mm}{2mm}
{\corollary \label{chromatic-number-for-q=2} If $q=2$ and $\chi(\Gamma(\mathbb{V}))$ denotes the chromatic number of $\Gamma(\mathbb{V})$, then $$2^{n-1} \leq \chi(\Gamma(\mathbb{V})) \leq 2^{n-1}+2^{n-2}-n/2.$$}\\
\noindent {\bf Proof: } The first part of the inequality follows from Corollary \ref{clique-number-for-q=2} and the fact that for any graph $G$, $\omega(G)\leq \chi(G)$. For the other part, we use the following result from \cite{brigham-dutton}: $$\chi(G)\leq \dfrac{\omega(G)+|G|+1-\alpha(G)}{2}$$
where $\alpha(G)$ is the independence number of $G$. Thus, by using Corollary \ref{clique-number-for-q=2} and Theorem \ref{independence-number-theorem}, we get for $q=2$, $$\chi(\Gamma(\mathbb{V}))\leq \dfrac{2^{n-1}+2^n-n}{2}=2^{n-1}+2^{n-2}-n/2.$$ \hfill \rule{2mm}{2mm}
\section{$\Gamma(\mathbb{V})$ is Hamiltonian}
In this section, we prove that $\Gamma(\mathbb{V})$ is Hamiltonian except in the case $q=2$ and $n=2$. First, we recall two classical theorems giving sufficient conditions for Hamiltonicity of a graph, which are crucial in our proofs.
{\theorem \label{dirac-theorem} \cite{dirac} {\bf[Dirac]} If $G$ is a connected graph on at least three vertices with minimum degree $\delta$ such that $\delta\geq |G|/2$, then $G$ is Hamiltonian.}
{\theorem \label{nash-williams-theorem} \cite{nash-williams} {\bf[Nash-Williams]} If $G$ is a $2$-connected graph with minimum degree $\delta$ and independence number $I$ such that $\delta\geq \max \{(|G|+2)/3,I\}$, then $G$ is Hamiltonian.}
{\theorem If $q>2$, then $\Gamma(\mathbb{V})$ is Hamiltonian.}\\
\noindent {\bf Proof: } If $n=1$, $\Gamma(\mathbb{V})$ is complete and hence it is Hamiltonian. So, we assume that $n>1$. Since $q\geq 3$, for $n\geq 2$ we have $q^n-2q^{n-1}=q^{n-1}(q-2)\geq q>1$, i.e., $q^n> 2q^{n-1}+1$, which implies $2q^n-2q^{n-1}-2>q^n-1$, i.e., $q^n-q^{n-1}-1>(q^n-1)/2$. Now by Lemma \ref{min-degree-lemma}, $\delta > |G|/2$, and hence by Theorem \ref{dirac-theorem}, $\Gamma(\mathbb{V})$ is Hamiltonian. \hfill \rule{2mm}{2mm}
{\lemma \label{2-connected-lemma} If $q=2$ and $n\geq 3$, then $\Gamma(\mathbb{V})$ is $2$-connected.}\\
\noindent {\bf Proof: } We prove that the removal of any one vertex does not disconnect $\Gamma(\mathbb{V})$, i.e., $\Gamma(\mathbb{V}) - \{\alpha\}$ is connected for any non-null vector $\alpha \in \mathbb{V}$. Let $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$ be a basis of $\mathbb{V}$. Let $\mathbf{a}$ and $\mathbf{b}$ be two arbitrary non-null vectors other than $\alpha$ in $\mathbb{V}$. If they are adjacent in $\Gamma$, we are done. If not, since $\mathbf{a},\mathbf{b} \neq \theta$, $\exists \alpha_i, \alpha_j$ which have non-zero coefficients in the basic representations of $\mathbf{a}$ and $\mathbf{b}$ respectively. Moreover, as $\mathbf{a}$ and $\mathbf{b}$ are not adjacent, $\alpha_i \neq \alpha_j$. Consider $\mathbf{c}=\alpha_i + \alpha_j$. Then, $\mathbf{a}\sim \mathbf{c}$ and $\mathbf{b} \sim \mathbf{c}$ in $\Gamma(\mathbb{V})$. This provides a path between $\mathbf{a}$ and $\mathbf{b}$. However, if $\mathbf{c}$ is the removed vertex $\alpha$, then this path between $\mathbf{a}$ and $\mathbf{b}$ does not exist in $\Gamma(\mathbb{V}) - \{\alpha\}$. But as $n \geq 3$, there exists $\alpha_k$ in the mentioned basis other than $\alpha_i, \alpha_j$. Then consider the vertex $\mathbf{d}=\alpha_i + \alpha_j + \alpha_k$ and observe that $\mathbf{a}\sim \mathbf{d}$ and $\mathbf{b} \sim \mathbf{d}$ in $\Gamma(\mathbb{V}) - \{\alpha\}$. Hence $\Gamma(\mathbb{V}) - \{\alpha\}$ is connected and thereby $\Gamma(\mathbb{V})$ is $2$-connected. \hfill \rule{2mm}{2mm}
{\lemma \label{delta-inequality-lemma} If $q=2$ and $n\geq 3$, then $\delta\geq \max \{(|\Gamma(\mathbb{V})|+2)/3,\alpha(\Gamma(\mathbb{V}))\}$.}\\
\noindent {\bf Proof: } Since $n \geq 3$, $2^{n-1}-4\geq 0 \Rightarrow (2^n -2)+(2^{n-1}-1)\geq 2^n+1 \Rightarrow 3(2^{n-1}-1)\geq 2^n+1$, i.e., $2^{n-1}-1\geq (2^n+1)/3$. Now for $q=2$, $\delta=2^{n-1}-1$ and $|\Gamma(\mathbb{V})|=2^n-1$. Thus, the inequality gives us $$\delta \geq (|\Gamma(\mathbb{V})|+2)/3.$$ Also from Theorem \ref{independence-number-theorem}, $\alpha(\Gamma(\mathbb{V}))=n$ and for $n \geq 3$, $$\delta=2^{n-1}-1 \geq n.$$ Combining the above two inequalities, we get the lemma. \hfill \rule{2mm}{2mm}
{\theorem If $q=2$ and $n\geq 3$, then $\Gamma(\mathbb{V})$ is Hamiltonian.}\\
\noindent {\bf Proof: } The theorem follows from Lemma \ref{2-connected-lemma}, Lemma \ref{delta-inequality-lemma} and Theorem \ref{nash-williams-theorem}. \hfill \rule{2mm}{2mm}
{\remark For $q=2$ and $n=1$, $\Gamma(\mathbb{V})$ is a single-vertex graph, and for $q=2, n=2$, $\Gamma(\mathbb{V})$ is isomorphic to $P_3$, a $3$-vertex path (see Figure \ref{example-figure}), and hence is not Hamiltonian.}
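To illustrate the small cases (a check of ours, using the same brute-force construction of $\Gamma(\mathbb{V})$ as in the earlier sketch), the following script searches for a Hamiltonian cycle for $q=2$, $n=2$ and $q=2$, $n=3$; exhaustive search is feasible here because these graphs have only $3$ and $7$ vertices.
\begin{verbatim}
from itertools import permutations, product

def gamma_graph(q, n):
    """Vertices: nonzero vectors (only the support matters);
    adjacency: supports intersect."""
    V = [v for v in product(range(q), repeat=n) if any(v)]
    adj = {v: {w for w in V if w != v and
               any(a and b for a, b in zip(v, w))} for v in V}
    return V, adj

def hamiltonian_cycle(V, adj):
    """Brute force over orderings starting at V[0] (fine for <= 8 vertices)."""
    first, rest = V[0], V[1:]
    for perm in permutations(rest):
        cycle = (first,) + perm
        if all(cycle[(i + 1) % len(cycle)] in adj[cycle[i]]
               for i in range(len(cycle))):
            return cycle
    return None

for q, n in [(2, 2), (2, 3)]:
    V, adj = gamma_graph(q, n)
    print(q, n, hamiltonian_cycle(V, adj) is not None)
# expected: (2, 2) -> False (the path P_3), (2, 3) -> True
\end{verbatim}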
\section*{Acknowledgement}
The author is thankful to Professor Mridul Kanti Sen for some fruitful suggestions on the paper. The research is partially funded by NBHM Research Project Grant, (Sanction No. 2/48(10)/2013/ NBHM(R.P.)/R\&D II/695), Govt. of India.
| {
"timestamp": "2015-10-08T02:13:06",
"yymm": "1510",
"arxiv_id": "1510.02046",
"language": "en",
"url": "https://arxiv.org/abs/1510.02046",
"abstract": "In this paper, we study non-zero component graph $\\Gamma(\\mathbb{V})$ on a finite dimensional vector space $\\mathbb{V}$ over a finite field $\\mathbb{F}$. We show that the graph is Hamiltonian and not Eulerian. We also characterize the maximal cliques in $\\Gamma(\\mathbb{V})$ and show that there exists two classes of maximal cliques in $\\Gamma(\\mathbb{V})$. We also find the exact clique number of $\\Gamma(\\mathbb{V})$ for some particular cases. Moreover, we provide some results on size, edge-connectivity and chromatic number of $\\Gamma(\\mathbb{V})$.",
"subjects": "General Mathematics (math.GM)",
"title": "On Non-Zero Component Graph of Vector Spaces over Finite Fields",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9890130551025345,
"lm_q2_score": 0.7154239897159438,
"lm_q1q2_score": 0.7075636657626098
} |
https://arxiv.org/abs/0805.0349 | Periods and elementary real numbers | The periods, introduced by Kontsevich and Zagier, form a class of complex numbers which contains all algebraic numbers and several transcendental quantities. Little has been known about qualitative properties of periods. In this paper, we compare the periods with hierarchy of real numbers induced from computational complexities. In particular we prove that periods can be effectively approximated by elementary rational Cauchy sequences. As an application, we exhibit a computable real number which is not a period. | \section{Introduction}
In their paper \cite{kz-per}, Kontsevich and Zagier
introduced the notion of periods:
\begin{definition}
\label{def:period}
A {\em period} is a complex number whose
real and imaginary parts are values of absolutely convergent
integral of rational functions with rational coefficients,
over domains in $\ensuremath{\mathbb{R}}^\ell$ given by polynomial inequalities
with rational coefficients.
\end{definition}
The set of all periods is denoted by $\ensuremath{\mathcal{P}}\subset\ensuremath{\mathbb{C}}$.
Obviously, $\ensuremath{\mathcal{P}}$ is a countable
set, forms a $\ensuremath{\mathbb{Q}}$-algebra (because of Fubini's theorem)
and contains all algebraic numbers and
several transcendental quantities,
like $\pi$ and $\log n$.
One of their motivations to introduce this notion
is that the structure of $\ensuremath{\mathcal{P}}$ is directly
related to profound theory of motives.
See \cite{wald-trans} for related problems in
transcendental number theory.
Kontsevich and Zagier pose several conjectures
and problems on $\ensuremath{\mathcal{P}}$.
However it seems that the qualitative properties
of $\ensuremath{\mathcal{P}}$ have not been well studied so far.
For instance, they pose the following
``{\bf Problem 3} {\it Exhibit at least one number which does
not belong to $\ensuremath{\mathcal{P}}$}''.
So far, no property of real numbers has been known that can
distinguish non-periods from periods.
The purpose of this paper is to give an answer
to this problem by
constructing a computable
real number which cannot be a period.
We approach the problem as follows.
Since the real
number field $\ensuremath{\mathbb{R}}$ is the completion of $\ensuremath{\mathbb{Q}}$ with
respect to the Euclidean norm,
a positive real number $\alpha\in\ensuremath{\mathbb{R}}_{>0}$ can be
expressed
as the limit of a positive rational Cauchy sequence
\begin{equation}
\label{eq:seq}
\lim_{n\rightarrow\infty}\frac{a(n)}{b(n)}=\alpha,
\end{equation}
where $a$ and $b$ are functions $\ensuremath{\mathbb{N}}\rightarrow\ensuremath{\mathbb{N}}$.
Therefore a positive real number $\alpha$ is
expressed by a pair of functions $a, b:\ensuremath{\mathbb{N}}\rightarrow\ensuremath{\mathbb{N}}$.
The observation that
not all functions $\ensuremath{\mathbb{N}}\rightarrow\ensuremath{\mathbb{N}}$
are computable ``by finite means'',
since the set of all functions $\ensuremath{\mathbb{N}}^\ensuremath{\mathbb{N}}$ is
uncountable, leads us to consider the computability
of the functions $a$ and $b$.
The idea of computability goes back to the seminal paper
\cite{tur-comput} by A. Turing.
Turing defines computable real numbers as those
real numbers with computable decimal expansions.
An equivalent definition is that the real numbers
which are limits of Cauchy sequences
(\ref{eq:seq}) with computable functions $a$ and $b$
(see \cite{pel-ric} or
\S\ref{subsec:cr} below).
So, refined notions of computability enable us to
hierarchize computable real numbers
\cite{spe-nicht, rice-recreal, csz-prreal1, csz-prreal2}.
In this paper, we will focus on a proper sub-class called
``elementary functions'' $\ensuremath{\mathbb{N}}\rightarrow\ensuremath{\mathbb{N}}$ introduced
in \cite{csi-elem,kal-elem}
(see \S\ref{sec:elemreal} below for definitions).
The main result (Theorem \ref{thm:main})
states that every real period is an elementary real
number, that (roughly speaking) is,
we can choose $a$ and $b$ from elementary functions.
And we will also construct a computable
real number which is not elementary
(\S\ref{subsec:nonelem}).
Non-elementary real numbers cannot be
periods by our main result.
Let us briefly describe the idea of the proof.
First we show that
periods are generated by the volumes $\operatorname{vol}(D)$
of the bounded domains of the form
$$
D=\left\{(x_1, \ldots, x_\ell)\in\ensuremath{\mathbb{R}}^\ell \mid
G_k(x_1, \ldots, x_\ell)>0, k=1, \ldots, q\right\},
$$
where $G_k\in\ensuremath{\mathbb{Z}}[x_1, \dots, x_\ell]$ are polynomials
of integer coefficients. To approximate
the volume $\operatorname{vol}(D)$, we use the Riemann sum, that is,
consider the union of small cubes
$$
V_n:=\mbox{Union of cubes contained in $D$
with vertices in }\left(\frac{1}{n}\ensuremath{\mathbb{Z}}\right)^\ell.
$$
Then, clearly, $\operatorname{vol}(V_n)$ converges to $\operatorname{vol}(D)$ as
$n\rightarrow\infty$.
However there are two major problems here.
\begin{itemize}
\item[(a)] Which small cubes are contained
in the domain $D$?
\item[(b)] At which rate does $\operatorname{vol}(V_n)$ converge to $\operatorname{vol}(D)$?
(As will be seen in Definition \ref{def:elem1},
we have to know the rate of convergence elementarily.)
\end{itemize}
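Before turning to these problems, here is a minimal two-dimensional illustration of the approximation (the example is ours: $D$ is the open unit disk, so $\operatorname{vol}(D)=\pi$, itself a period). Since the disk is convex, a grid square lies in $D$ exactly when its four corners do, so the containment test of problem (a) is trivial in this special case.
\begin{verbatim}
def inside(x, y):
    return x * x + y * y < 1.0        # D = {x^2 + y^2 < 1}, vol(D) = pi

def vol_Vn(n):
    """Total area of the squares of side 1/n (vertices in (1/n)Z^2)
    whose four corners lie in D; by convexity such squares lie in D."""
    count = 0
    for i in range(-n, n):
        for j in range(-n, n):
            if all(inside((i + di) / n, (j + dj) / n)
                   for di in (0, 1) for dj in (0, 1)):
                count += 1
    return count / n ** 2

for n in (10, 50, 200):
    print(n, vol_Vn(n))    # approaches pi = 3.14159... from below
\end{verbatim}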
Let $C\subset\ensuremath{\mathbb{R}}^\ell$ be a cube. Then the problem (a) above
is to ask whether or not the first-order formula
$$
\forall x(x\in C\Longrightarrow x\in D)
$$
is true.
In general, deciding the truth of a first-order formula with
quantifiers ($\forall, \exists$) is difficult.
However, in our situation, Tarski's quantifier elimination for
real closed ordered fields tells us that
the validity of the above formula can be decided
by a quantifier-free formula, which is simply
a Boolean combination of
polynomial inequalities on the coefficients of the
$G_k$'s.
This enables us to conclude that the rational sequence
$\operatorname{vol}(V_n)$ is elementary.
The other problem (b) amounts to counting how many small cubes
there are near the boundary $\partial D$. This is essentially done
by bounding the Minkowski dimension of the boundary $\partial D$,
using resolution of singularities of algebraic varieties.
\medskip
The organization of this paper is as follows.
\S\ref{sec:elemreal} is about elementary functions
and elementary real numbers.
Section \S\ref{subsec:cr} begins with the
definition of the class $\ensuremath{\mathbb{R}}_\ensuremath{\mathcal{E}}$
of real numbers
computable by a given class $\ensuremath{\mathcal{E}}\subset\ensuremath{\mathbb{N}}^\ensuremath{\mathbb{N}}$
of functions. Section \S\ref{subsec:elemf}
gives the precise definition of elementary
functions and elementary real numbers.
In \S\ref{subsec:nonelem}, we algorithmically
enumerate all elementary Cauchy sequences.
Then, by the diagonal argument, we construct a computable
real number which is not an elementary
real number. In view of the main result in
\S\ref{sec:main}, this number cannot be
a period. In \S\ref{sec:main}, we prove the main result.
After stating it in \S\ref{subsec:main}, we reduce the problem
to the bounded case by employing structure theorems for
semi-algebraic sets in
\S\ref{subsec:ur}. In \S\ref{subsec:qe}, we recall
quantifier elimination due to Tarski and, by using it,
construct an elementary rational sequence
converging to the volume of a bounded semi-algebraic domain.
In the remaining subsections, \S\ref{subsec:mc} and \S\ref{subsec:pf},
we prove that the sequence converges elementarily.
\section{Elementary real numbers}
\label{sec:elemreal}
{\bf Notation.} In this section,
$\ensuremath{\mathbb{N}}=\{0, 1, 2, 3, \ldots\}$ denotes the
set of nonnegative integers and
$(\ensuremath{\mathbb{N}}^n)^\ensuremath{\mathbb{N}}=\{f:\ensuremath{\mathbb{N}}^n\rightarrow\ensuremath{\mathbb{N}}\}$ denotes the
set of all functions from $\ensuremath{\mathbb{N}}^n$ to $\ensuremath{\mathbb{N}}$.
We only deal
with nonnegative real and rational numbers.
\subsection{Computable real numbers}
\label{subsec:cr}
The set $\ensuremath{\mathbb{R}}$ of real numbers is
defined as the completion of the rational
number field $\ensuremath{\mathbb{Q}}$ by the metric $d(x, y)=|x-y|$.
In other words, exhibiting a real number is
equivalent to exhibiting a Cauchy sequence
in $\ensuremath{\mathbb{Q}}$. Hence for a given nonnegative real
number $\alpha\in\ensuremath{\mathbb{R}}$, there exist two functions
$a, b:\ensuremath{\mathbb{N}}\rightarrow\ensuremath{\mathbb{N}}$ such that
$$
\lim_{n\rightarrow\infty}
\frac{a(n)}{b(n)+1}=\alpha.
$$
(The term ``$+1$'' in the denominator is
just for avoiding to be equal to zero.)
In the paper \cite{tur-comput}, Turing introduced
the notion of computable real numbers by restricting
the class of functions $a, b:\ensuremath{\mathbb{N}}\rightarrow\ensuremath{\mathbb{N}}$.
Following Turing and subsequent studies
\cite{rose-subrec, csz-prreal1, csz-prreal2},
we adopt the following definitions.
\begin{definition}
\label{def:real}
Let $\ensuremath{\mathcal{E}}\subset\ensuremath{\mathbb{N}}^\ensuremath{\mathbb{N}}$ be a class of functions.
A nonnegative real number $\alpha\in\ensuremath{\mathbb{R}}$ is said
to be $\ensuremath{\mathcal{E}}$-computable if
there exist $a(x), b(x), c(x)\in\ensuremath{\mathcal{E}}$ such that
\begin{equation}
\label{eq:real}
\left|
\frac{a(x)}{b(x)+1}-\alpha
\right|<
\frac{1}{k},\ \mbox{\normalfont for all } k\geq 1 \mbox{\normalfont{} and all } x\geq c(k).
\end{equation}
Denote the set of all
$\ensuremath{\mathcal{E}}$-computable real numbers by $\ensuremath{\mathbb{R}}_\ensuremath{\mathcal{E}}$.
\end{definition}
\begin{example}
Obviously $\ensuremath{\mathbb{R}}_\ensuremath{\mathcal{E}}\subset\ensuremath{\mathbb{R}}$ depends on the class $\ensuremath{\mathcal{E}}$. \\
{\normalfont (1)}
Let ${\sf (Const)}\subset\ensuremath{\mathbb{N}}^\ensuremath{\mathbb{N}}$ be the set of
all constant functions. Then $\ensuremath{\mathbb{R}}_{\sf (Const)}=\ensuremath{\mathbb{Q}}$.
\medskip
\noindent
{\normalfont (2)}
Let ${\sf (Lin)}\subset\ensuremath{\mathbb{N}}^\ensuremath{\mathbb{N}}$ be the set of
all functions of linear growth, that is,
$$
{\sf (Lin)}=\{f\in\ensuremath{\mathbb{N}}^\ensuremath{\mathbb{N}}\mid \exists C>0, \mbox{ s.t. }
f(n)<C\cdot (n+1) \mbox{ for all } n\}.
$$
Then
$\ensuremath{\mathbb{R}}_{\sf (Lin)}=\ensuremath{\mathbb{R}}$. Indeed for given $\alpha\in\ensuremath{\mathbb{R}}$, define
\begin{eqnarray*}
a(n)&=&\lfloor (n+1)\cdot\alpha\rfloor\\
b(n)&=&n,
\end{eqnarray*}
which are of linear growth.
It is easily shown that
$$
\left|
\frac{a(n)}{b(n)+1}-\alpha
\right|<
\frac{1}{n+1}.
$$
\medskip
\noindent
{\normalfont (3)} If $\ensuremath{\mathcal{E}}$ is the set of all computable
or recursive
(resp. primitive recursive) functions, then $\ensuremath{\mathbb{R}}_\ensuremath{\mathcal{E}}$ is
the set of computable (resp. primitive recursive)
real numbers. (See \cite{tur-comput} and \cite{pel-ric},
\cite{rice-recreal} for computable numbers.
And see \cite{csz-prreal1} for a recent survey on
primitive recursive real numbers.)
\end{example}
\subsection{Elementary functions}
\label{subsec:elemf}
In order to state the main result, we need the notion
of {\em elementary functions}
${\sf (Elem)}$. Here we consider functions
having any number of arguments, that is,
$f:\ensuremath{\mathbb{N}}^n\rightarrow\ensuremath{\mathbb{N}}$ for $n=1, 2, \ldots$.
We begin with the simplest functions and operations
on functions.
\begin{definition}
The zero function: $o(x)=0$.
The successor function: $s(x)=x+1$.
The $i$-th projection function:
$P^n_i(x_1, \ldots, x_n)=x_i$.
These three functions are called the {\em initial functions}.
\end{definition}
\begin{definition}
Define the modified subtraction $m:\ensuremath{\mathbb{N}}^2\rightarrow\ensuremath{\mathbb{N}}$
as follows:
$$
m(x,y)=x\uuu y:=
\left\{
\begin{array}{cc}
x-y& \mbox{if } x\geq y, \\
0 & \mbox{if } x<y.
\end{array}
\right.
$$
\end{definition}
Let $f(x_1, \ldots, x_m)$ be a function with $m$ arguments.
Let $g_i(y_1, \ldots, y_n)$ ($i=1, \ldots, m$) be functions
of $n$ arguments. Then the composition
$$
f(g_1(y_1, \ldots, y_n), \ldots,
g_m(y_1, \ldots, y_n))
$$
is a function with $n$ arguments.
Let $f(t, x_1, x_2, \ldots, x_n)$ be a function with
$(n+1)$ arguments. We define bounded summation by
$$
\sum_{t\leq x}f(t, x_1, \ldots, x_n)=
f(0, x_1, \ldots, x_n)+\cdots+
f(x, x_1, \ldots, x_n),
$$
and bounded product by
$$
\prod_{t\leq x}f(t, x_1, \ldots, x_n)=
f(0, x_1, \ldots, x_n)\times\cdots\times
f(x, x_1, \ldots, x_n),
$$
which are functions with $(n+1)$ arguments.
\begin{definition}
The class ${\sf (Elem)}$ of elementary functions is the
smallest class of functions:
\begin{itemize}
\item[(1)] containing the initial functions, the addition
$x+y$, the multiplication $x\cdot y$,
the modified subtraction $x\uuu y$,
\item[(2)] closed under composition, and
\item[(3)] closed under bounded summation and product.
\end{itemize}
\end{definition}
\begin{example}
\label{ex:elem}
The following are examples of elementary functions.
\begin{itemize}
\item[$(1)$] By definition, $s(o(x))=1, s(s(o(x)))=2$, etc.,
are elementary. Hence every constant function is elementary.
Since $s(x)\uuu s(0)=x$, the identity
function is elementary. The power
$x^{y+1}=\prod_{k=0}^yx$ is also elementary.
\item[$(2)$] The sign function
$$
\operatorname{sgn}(x)=
\left\{
\begin{array}{cc}
1& \mbox{if } x\neq 0, \\
0 & \mbox{if } x=0
\end{array}
\right.
$$
is elementary. Indeed,
$\operatorname{sgn}(x)=1\uuu(1\uuu x)$.
\item[$(3)$] Recall that a subset $P\subset\ensuremath{\mathbb{N}}^n$
is called a predicate. A predicate $P$ is said
to be elementary if the characteristic function
$$
\chi_P(x)=
\left\{
\begin{array}{cc}
1& \mbox{if } x\in P, \\
0 & \mbox{if } x\notin P
\end{array}
\right.
$$
is an elementary function. If $P$ and $Q$ are elementary
predicates, then the Boolean connection $P\wedge Q$,
$P\vee Q$ and $\neg P$ are also elementary predicates.
\item[$(4)$] The order predicate
$$
f_>(x,y)=
\left\{
\begin{array}{cc}
1& \mbox{if } x>y, \\
0 & \mbox{if } x\leq y
\end{array}
\right.
$$
is elementary. Indeed $f_>(x,y)=\operatorname{sgn}(x\uuu y)$.
Other functions $f_\geq, f_<, f_\leq$ are similarly
elementary.
\item[$(5)$] The quotient
$q(x,y)=\left\lfloor\frac{x}{y+1}\right\rfloor$
is elementary. Indeed,
$$
q(x,y)=\left(\sum_{i=0}^x f_\geq(x, i\cdot (y+1))\right)
\uuu 1.
$$
Similarly, the logarithm
$l(a,b)=\lfloor\log_a b\rfloor$
and the square root
$\lfloor\sqrt{x}\rfloor$ are also elementary.
\item[$(6)$] Bounded minimizer
$$
(\mu y_1\leq n)(f(y_1, y_2, \ldots, y_k)=0)
$$
is defined as the least $t\leq n$ such that
$f(t, y_2, \dots, y_k)=0$ and $n$ if no such $t$.
If $f:\ensuremath{\mathbb{N}}^k\rightarrow\ensuremath{\mathbb{N}}$ is elementary, then
$$
g(n, y_2, \dots, y_k)=(\mu y_1\leq n)(f(y_1, y_2, \ldots, y_k)=0)
$$
is also elementary.
\item[$(7)$] The pairing function $J(x,y)$ is defined by
$$
J(x, y)=\frac{(x+y)(x+y+1)}{2}+y.
$$
The inverse pairing functions $L(z)$, $R(z)$
are defined by the following relations
$$
J(L(z), R(z))=z,\ L(J(x, y))=x,\ R(J(x,y))=y.
$$
The functions $L, R$ are also elementary.
\end{itemize}
\end{example}
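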
\begin{remark}
There also exists a computable but
non-elementary function, e.g.,
$$
f(x)=x^{x^{\cdot^{\cdot^{\cdot^x}}}} \mbox{ (a tower of height $x$)},
$$
i.e., $f(2)=2^2=4$, $f(3)=3^{3^3}=3^{27}=7625597484987$,
$f(4)=4^{4^{4^4}}=4^{4^{256}}>4^{1.34078\times 10^{154}}$.
This is a very rapidly growing function,
faster than any elementary
function of one variable.
(See \cite{rose-subrec} for details.)
\end{remark}
Recall that the set of elementary real numbers
$\ensuremath{\mathbb{R}}_{\sf (Elem)}$ is defined as follows.
\begin{definition}
\label{def:elem1}
A real number $\alpha\in\ensuremath{\mathbb{R}}$ is called elementary if
there exist $a(x), b(x), c(x)\in{\sf (Elem)}$ such that
\begin{equation}
\label{eq:elem1}
\left|
\frac{a(x)}{b(x)+1}-\alpha
\right|<
\frac{1}{k}, \mbox{\normalfont for all } k\geq 1 \mbox{\normalfont{} and all } x\geq c(k).
\end{equation}
\end{definition}
The following proposition is straightforward.
\begin{proposition}
The set of elementary real numbers $\ensuremath{\mathbb{R}}_{\sf (Elem)}$ forms a field.
\end{proposition}
\begin{definition}
A map $g:\ensuremath{\mathbb{N}}\rightarrow\ensuremath{\mathbb{Q}}$ is said to be elementary if
$g$ is expressed as
$$
g(x)=\frac{a(x)}{b(x)+1}
$$
for some $a(x), b(x)\in{\sf (Elem)}$.
A map $g:\ensuremath{\mathbb{N}}\rightarrow\ensuremath{\mathbb{Q}}$ is said to be
{\em fast} if it satisfies
$$
|g(x)-g(x+1)|<\frac{1}{7^{x+1}},
$$
for all $x\in\ensuremath{\mathbb{N}}$.
\end{definition}
\begin{lemma}
\label{lem:fast}
A real number $\alpha\in\ensuremath{\mathbb{R}}$ is elementary if and only if
there exists an elementary fast map
$g:\ensuremath{\mathbb{N}}\rightarrow\ensuremath{\mathbb{Q}}$ such that
\begin{equation}
\label{eq:fast}
\lim_{x\rightarrow\infty}
g(x)=\alpha.
\end{equation}
\end{lemma}
\proof
Suppose first that $\alpha$ is elementary, i.e., we have
$a(x), b(x), c(x)\in{\sf (Elem)}$
satisfying Eq.(\ref{eq:elem1}). Setting $k=8^{n+1}$ and
$x=c(8^{n+1})$, we have
$$
\left|
\frac{a(c(8^{n+1}))}{b(c(8^{n+1}))+1}-\alpha
\right|<
\frac{1}{8^{n+1}}.
$$
Put $g(n)=a(c(8^{n+1}))/(b(c(8^{n+1}))+1)$.
Since $a(c(8^{n+1}))$ and $b(c(8^{n+1}))$ are elementary functions of $n$,
the map $g$ is elementary, and $|g(n)-\alpha|<8^{-n-1}$.
Hence $|g(n)-g(n+1)|<8^{-n-1}+8^{-n-2}$,
which is less than $7^{-n-1}$, so $g$ is a fast elementary map
converging to $\alpha$.
Conversely, suppose $g(x)=a(x)/(b(x)+1)$ is a fast elementary map
satisfying Eq.(\ref{eq:fast}). Then
$|g(x)-\alpha|\leq\sum_{i\geq x}|g(i)-g(i+1)|\leq\sum_{i\geq x}7^{-i-1}=7^{-x}/6$,
so for $x\geq k$ we get $|g(x)-\alpha|\leq 7^{-k}/6<1/k$
(note that $6\cdot 7^{k}>k$ for all $k\geq 1$), i.e.,
Eq.(\ref{eq:elem1}) holds with $c(k)=k$, and hence
$\alpha\in\ensuremath{\mathbb{R}}_{\sf (Elem)}$.
\qed
\subsection{A non-elementary real number}
\label{subsec:nonelem}
In this section, we construct a non-elementary
real number, essentially, by the diagonal argument.
Together with the main result in the next section,
this gives an example of a real number which is not a period.
First we recall a simpler description of
elementary functions, due to Mazzanti.
\begin{proposition}(Mazzanti \cite{mazz-base})
All elementary functions can be generated from
the following four functions by composition:
\begin{itemize}
\item The successor, $x\mapsto S(x)=x+1$.
\item The modified subtraction, $(x,y)\mapsto x\uuu y$.
\item The quotient,
$(x, y)\mapsto \left\lfloor\frac{x}{y+1}\right\rfloor$.
\item The exponential function, $(x, y)\mapsto x^y$.
\end{itemize}
\end{proposition}
Next we enumerate all elementary functions
$\{f:\ensuremath{\mathbb{N}}\rightarrow\ensuremath{\mathbb{N}}\mid\mbox{elementary}\}$ of one variable
by using the pairing functions $J, L, R$ in
Example \ref{ex:elem} (7).
For each $e\in\ensuremath{\mathbb{N}}$ we attach an elementary function
$f_e:\ensuremath{\mathbb{N}}\rightarrow\ensuremath{\mathbb{N}}$ as follows.
\begin{itemize}
\item[(0)] If $L(e)=0$, then $f_e(x)=x$. (That is,
$f_{J(0,k)}(x)=x$).
\item[(1)] If $e=J(1, k)$, then $f_e(x)=S(f_k(x))=f_k(x)+1$.
\item[(2)] If $e=J(2, k)$, then $f_e(x)=f_{L(k)}\uuu f_{R(k)}$.
\item[(3)] If $e=J(3, k)$, then
$f_e(x)=\left\lfloor\frac{f_{L(k)}}{f_{R(k)}+1}\right\rfloor$.
\item[(4)] If $e=J(4, k)$, then $f_e(x)=(f_{L(k)})^{f_{R(k)}}$.
\item[(5)] If $e=J(c, k)$ with $c\geq 5$, then $f_e(x)=0$.
\end{itemize}
\begin{example}
\label{ex:enum}
Here are some examples.
$f_0(x)=x$ by (0).
Since $1=J(1,0)$, $f_1=S\circ f_0(x)=x+1$ by (1).
Since $2=J(0,1)$, again $f_2=x$.
Since $3=J(2,0)=J(2, J(0,0))$, $f_3=f_0\uuu f_0=0$.
Since $4=J(1,1)$, $f_4=S\circ f_1=f_1+1=x+2$.
Since $169=J(1,16)=J(1, J(4,1))=J(1,J(4,J(1,0)))$,
$f_{169}=f_{16}+1=(x+1)^x+1$, etc.
\end{example}
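The enumeration above is easy to implement; the following sketch (ours) reproduces the values computed in Example \ref{ex:enum}. The inverse pairing functions are obtained from the usual integer square root formula for the Cantor pairing.
\begin{verbatim}
from functools import lru_cache
from math import isqrt

def J(x, y):                      # pairing function of Example (7)
    return (x + y) * (x + y + 1) // 2 + y

def LR(z):                        # inverse pairing: J(L(z), R(z)) = z
    w = (isqrt(8 * z + 1) - 1) // 2
    y = z - w * (w + 1) // 2
    return w - y, y               # (L(z), R(z))

@lru_cache(maxsize=None)
def f(e, x):
    """The e-th function in the enumeration above, evaluated at x."""
    c, k = LR(e)
    if c == 0:
        return x
    if c == 1:
        return f(k, x) + 1
    if c == 2:
        return max(f(LR(k)[0], x) - f(LR(k)[1], x), 0)   # modified subtraction
    if c == 3:
        return f(LR(k)[0], x) // (f(LR(k)[1], x) + 1)
    if c == 4:
        return f(LR(k)[0], x) ** f(LR(k)[1], x)
    return 0

# f_0(x)=x, f_1(x)=x+1, f_2(x)=x, f_3(x)=0, f_4(x)=x+2, f_169(x)=(x+1)^x+1
print([f(0, 5), f(1, 5), f(2, 5), f(3, 5), f(4, 5), f(169, 3)])
# expected: [5, 6, 5, 0, 7, 65]
\end{verbatim}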
Now we can enumerate all elementary maps
$$
g:\ensuremath{\mathbb{N}}\longrightarrow\ensuremath{\mathbb{Q}}_{\geq 0},
$$
in the following way:
\begin{equation}
g_e(n):=
\frac{f_{L(e)}(n)}{f_{R(e)}(n)+1}.
\end{equation}
Obviously the sequence $\{g_e(x)\}_{x\in\ensuremath{\mathbb{N}}}$ is not
Cauchy in general. We therefore enforce fastness on
these sequences.
For an elementary sequence
$g:\ensuremath{\mathbb{N}}\rightarrow\ensuremath{\mathbb{Q}}$, define
$\overline{g}$ by
$$
\overline{g}(n)=
\left\{
\begin{array}{cl}
g(n)&\mbox{ if }(\forall i<n)(|g(i)-g(i+1)|<7^{-i-1}), \\
g(n_0)&\mbox{ otherwise, where } n_0:=
(\mu i<n)(|g(i)-g(i+1)|\geq 7^{-i-1}).
\end{array}
\right.
$$
The map $\overline{g}:\ensuremath{\mathbb{N}}\rightarrow\ensuremath{\mathbb{Q}}$ is
a fast elementary map by definition, and $g$ is
fast if and only if $g=\overline{g}$.
\begin{definition}
For $e\in\ensuremath{\mathbb{N}}$, define the $e$-th elementary real
number by
$$
\beta_e:=\lim_{n\rightarrow\infty}
\overline{g_e}(n).
$$
\end{definition}
Since, by Lemma \ref{lem:fast}, every elementary real
number is the limit of a fast elementary sequence, we have
$$
\{\beta_0, \beta_1, \ldots, \beta_e, \ldots\}=\ensuremath{\mathbb{R}}_{\sf (Elem)}.
$$
\begin{example}
\label{ex:40}
First several terms are $\beta_0=0, \beta_1=1,
\beta_2=\beta_3=0, \beta_4=1/2$ etc.
Let us compute $\beta_{40}$. Since $40=J(4,4)$,
$L(40)=R(40)=4$. Thus $g_{40}=f_{4}/(f_{4}+1)$.
Recall from Example \ref{ex:enum} that we already
have $f_4(x)=x+2$. Hence $g_{40}(x)=\frac{x+2}{x+3}$.
This sequence is not fast; the enforced one is
$$
\overline{g_{40}}(x)=
\left\{
\begin{array}{cc}
2/3& \mbox{ if }x=0, \\
3/4& \mbox{ if }x>0.
\end{array}
\right.
$$
In the limit we obtain $\beta_{40}=\frac{3}{4}$.
\end{example}
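Continuing the sketch after Example \ref{ex:enum} (and under the same assumptions on the pairing function), the sequences $g_e$, their enforced versions $\overline{g_e}$, and hence approximations of $\beta_e$ can be computed as follows.
\begin{verbatim}
from fractions import Fraction

def g(e, n):
    # g_e(n) = f_{L(e)}(n) / (f_{R(e)}(n) + 1), as an exact rational
    l, r = LR(e)
    return Fraction(f(l, n), f(r, n) + 1)

def g_bar(e, n):
    # the enforced fast sequence: fall back to g_e(n0) at the first violation
    for i in range(n):
        if abs(g(e, i) - g(e, i + 1)) >= Fraction(1, 7 ** (i + 1)):
            return g(e, i)
    return g(e, n)

# Example ex:40: g_40(x) = (x+2)/(x+3) is not fast; the enforced sequence
# equals 2/3 at x = 0 and 3/4 afterwards, so beta_40 = 3/4.
assert g_bar(40, 0) == Fraction(2, 3)
assert g_bar(40, 5) == Fraction(3, 4)
\end{verbatim}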
Now we construct a non-elementary
computable real number $\alpha\in\ensuremath{\mathbb{R}}$ as the limit
of the sequence
\begin{equation}
\alpha_n=
\frac{2\varepsilon_1}{3^1}+
\frac{2\varepsilon_2}{3^2}+
\frac{2\varepsilon_3}{3^3}+
\cdots +
\frac{2\varepsilon_n}{3^n},
\end{equation}
defined as follows.
Put $\alpha_0=0$ and define $\varepsilon_n$ $(n\geq 1)$
inductively by
\begin{equation}
\varepsilon_{n+1}=
\left\{
\begin{array}{cc}
0&\mbox{ if }\overline{g_{n}}(n)>\alpha_n+\frac{1}{2\cdot 3^n}\\
1&\mbox{ if }\overline{g_{n}}(n)\leq\alpha_n+\frac{1}{2\cdot 3^n}
\end{array}
\right.
\end{equation}
\begin{proposition}
\label{prop:nonelem}
Set $\alpha=\lim_{n\rightarrow\infty}\alpha_n$, then
$\alpha\notin\ensuremath{\mathbb{R}}_{\sf (Elem)}$.
\end{proposition}
\proof
We shall prove $\alpha\neq\beta_e$ for any $e\in\ensuremath{\mathbb{N}}$.
By the definition of $\alpha_n$,
$$
\alpha\leq\alpha_n+
2(3^{-n-1}+3^{-n-2}+\cdots )=\alpha_n+3^{-n}.
$$
So we have
\begin{equation}
\alpha\in[\alpha_n, \alpha_n+3^{-n}],
\end{equation}
for all $n\in\ensuremath{\mathbb{N}}$.
Since $|\overline{g_e}(n)-\overline{g_e}(n+1)|<7^{-n-1}$,
\begin{eqnarray*}
|\overline{g_e}(n)-\beta_e|&<&
7^{-n-1}(1+7^{-1}+7^{-2}+\cdots)\\
&=&\frac{1}{7^n\cdot 6}.
\end{eqnarray*}
Thus we have
\begin{equation}
\beta_e\in \left(
\overline{g_e}(n)-\frac{1}{6\cdot 7^n},\
\overline{g_e}(n)+\frac{1}{6\cdot 7^n}
\right).
\end{equation}
If $\overline{g_e}(e)\leq\alpha_{e}+2^{-1}3^{-e}$, then
$\alpha_{e+1}=\alpha_{e}+2\cdot 3^{-e-1}$. Hence
$$
\alpha\in\left[
\alpha_{e}+\frac{2}{3^{e+1}},\ \alpha_{e}+\frac{3}{3^{e+1}}
\right].
$$
On the other hand,
\begin{eqnarray*}
\beta_e&<&\overline{g_e}(e)+\frac{1}{6\cdot 7^e}\\
&\leq&\alpha_{e}+\frac{1}{2\cdot3^{e}}+\frac{1}{6\cdot 7^e}\\
&\leq&\alpha_{e}+\frac{1}{2\cdot3^{e}}+\frac{1}{6\cdot 3^e}\\
&=&\alpha_{e}+\frac{2}{3^{e+1}}\\
&=&\alpha_{e+1}\leq\alpha.
\end{eqnarray*}
In particular,
$\alpha\neq\beta_e$.
If
$\overline{g_e}(e)>\alpha_{e}+2^{-1}3^{-e}$ we can
prove $\beta_e>\alpha$ similarly.
In conclusion we have $\alpha\notin\ensuremath{\mathbb{R}}_{{\sf (Elem)}}$.
\qed
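The values of $\varepsilon_n$ listed below can, in principle, be reproduced by iterating the inductive definition; a minimal sketch, reusing the functions from the previous snippets, might look as follows (only for small $n$, since the integers $f_e(n)$ can grow very quickly).
\begin{verbatim}
def alpha_approx(N):
    # returns (eps_1, ..., eps_N) and alpha_N, following the inductive definition
    alpha = Fraction(0)
    eps = []
    for n in range(N):
        if g_bar(n, n) > alpha + Fraction(1, 2 * 3 ** n):
            e_next = 0
        else:
            e_next = 1
        eps.append(e_next)
        alpha += Fraction(2 * e_next, 3 ** (n + 1))
    return eps, alpha

eps, alpha_10 = alpha_approx(10)  # eps starts 1, 0, 1, 1, 1, ... as in the table below
\end{verbatim}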
The first $80$ terms of the sequence $\varepsilon_n$ are
the following.
$$
\begin{array}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
n&1&2&3&4&5&6&7&8&9&10&11&12&13&14&15&16\\
\hline
\varepsilon_n&1&0&1&1&1&1&1&1&0&1&0&1&1&0&1&1\\
\hline
\hline
n&17&18&19&20&21&22&23&24&25&26&27&28&29&30&31&32\\
\hline
\varepsilon_n&0&1&1&1&1&1&1&0&1&1&0&1&0&1&1&0\\
\hline
\hline
n&33&34&35&36&37&38&39&40&41&42&43&44&45&46&47&48\\
\hline
\varepsilon_n&1&1&0&1&0&1&1&1&1&1&1&1&1&1&1&0\\
\hline
\hline
n&49&50&51&52&53&54&55&56&57&58&59&60&61&62&63&64\\
\hline
\varepsilon_n&1&1&0&1&1&1&1&1&1&0&0&1&1&0&1&1\\
\hline
\hline
n&65&66&67&68&69&70&71&72&73&74&75&76&77&78&79&80\\
\hline
\varepsilon_n&0&1&0&1&1&0&1&1&1&0&1&1&1&1&1&1\\
\hline
\end{array}
$$
The real number
$\alpha/2=\sum_{i=1}^\infty3^{-i}\cdot\varepsilon_i$
is not elementary. The first $30$ digits are the following.
\begin{equation}
\label{eq:nonelem}
\frac{\alpha}{2}=0.388832221773824641256243009581\dots
\end{equation}
\section{Periods are elementary}
\label{sec:main}
\subsection{Main result}
\label{subsec:main}
Now we can state the main result.
\begin{theorem}
\label{thm:main}
Real periods are elementary real numbers, i.e.,
$$
\ensuremath{\mathcal{P}}\subset\ensuremath{\mathbb{R}}_{\sf (Elem)}.
$$
\end{theorem}
So the real number $\alpha$ constructed
above (see (\ref{eq:nonelem}))
is not a period.
To prove this theorem, we need to show that
a given absolutely convergent integral is
an elementary real number.
First we reduce the problem to
the case of volumes of bounded semi-algebraic domains.
Namely, in \S\ref{subsec:ur}, we will prove that $\ensuremath{\mathcal{P}}$ is
generated by the volumes $\operatorname{vol}(D)$ of bounded open semi-algebraic
domains $D\subset\ensuremath{\mathbb{R}}^\ell$. The proof is based
on Hironaka's rectilinearization theorem for
semi-algebraic sets. Another result, the uniformization
theorem for semi-algebraic sets, is also recalled
for later use.
The next step is to construct an elementary
sequence $\operatorname{vol}(V_n)$ converging to the volume
$\operatorname{vol}(D)$ of a semi-algebraic domain $D$.
In \S\ref{subsec:qe} and \S\ref{subsec:rs},
this is done by using Riemann sums, that is,
by approximating the domain by small cubes.
The fact that the sequence is elementary
is proved using Tarski's quantifier elimination
theorem.
Finally, in \S\ref{subsec:mc} and \S\ref{subsec:pf},
we will prove that the convergence
$\operatorname{vol}(V_n)\rightarrow\operatorname{vol}(D)$ is effective, with an explicit elementary rate.
The main task is to count the small cubes
within an $\varepsilon$-neighborhood of the
boundary $\partial D$. This is closely related
to estimating the Minkowski dimension and
the Minkowski content of $\partial D$,
and is done with the help of the uniformization
theorem for semi-algebraic sets.
This completes the proof that
$\operatorname{vol}(D)$ is an elementary real number.
\subsection{Uniformization and rectilinearization}
\label{subsec:ur}
In this section, we recall
Hironaka's uniformization and rectilinearization theorems
for subanalytic sets. Our
main references are \cite{hironaka-sub}
and \cite{bm-semi}. First let us recall the notions
of semi-algebraic set and basic open semi-algebraic set.
(See \cite{bcr-real} for details.)
\begin{definition}
A {\em semi-algebraic subset} of $\ensuremath{\mathbb{R}}^\ell$ is a
finite union of subsets of the form
\begin{equation}
\label{eq:semialg}
\{
x\in\ensuremath{\mathbb{R}}^\ell\mid
F_1(x)= \dots = F_p(x)=0,\
G_1(x)>0, \dots, G_q(x)>0
\},
\end{equation}
where
$F_j, G_k\in\ensuremath{\mathbb{R}}[x_1, \dots, x_\ell]$.
\end{definition}
A map from
a semi-algebraic subset $X\subset\ensuremath{\mathbb{R}}^p$ to
a semi-algebraic subset $Y\subset\ensuremath{\mathbb{R}}^q$ is
called semi-algebraic
if its graph is a semi-algebraic subset of
$\ensuremath{\mathbb{R}}^{p+q}$.
\begin{definition}
A {\em basic open semi-algebraic subset} of $\ensuremath{\mathbb{R}}^\ell$ is a
set of the form
\begin{equation}
\label{eq:basic}
\{
x\in\ensuremath{\mathbb{R}}^\ell\mid
G_1(x)>0, \dots, G_q(x)>0
\},
\end{equation}
where
$G_k\in\ensuremath{\mathbb{R}}[x_1, \dots, x_\ell]$.
\end{definition}
\begin{proposition}
{\normalfont
\cite[Thm. 5.1.]{bm-semi}
}
\label{prop:unif}
Let $X$ be a closed analytic subset of a real analytic
manifold $M$.
Then there is a real analytic manifold $N$ (of the
same dimension as $X$) and a proper real analytic
map $\varphi:N\rightarrow M$ such that $\varphi(N)=X.$
\end{proposition}
\begin{proposition}
{\normalfont
\cite[(2.4)]{hironaka-sub}
}
\label{prop:recti}
Let $X$ be a real-analytic space countable at
infinity. Let $A$ be a globally defined
semi-analytic set in $X$, i.e., there exists a
finite system of real analytic functions
$g_{ij}$ and $f_{ij}$ on $X$ such that
$$
A=\bigcup_i\{
x\in X\mid g_{ij}(x)=0, f_{ij}(x)>0, \forall j
\}.
$$
Then there exists a real-analytic map
$\pi:\widehat{X}\rightarrow X$ such that
\begin{itemize}
\item[(1)]
$\widehat{X}$ is smooth and $\pi$ is proper surjective,
\item[(2)]
for every point $\xi\in\widehat{X}$, there exists a
local coordinate system $(z_1, \dots, z_n)$ of
$\widehat{X}$ centered at $\xi$ for which we have:
within some neighborhood of $\xi$ in $\widehat{X}$,
$\pi^{-1}(A)$ is a union of quadrants with respect to
$(z_1, \dots, z_n)$, where a quadrant means a
set defined by a system of relations $z_1\sigma_1 0,
z_2\sigma_2 0, \dots, z_n\sigma_n 0$ in which
each $\sigma_i$ is either ``$=$'', ``$>$'' or ``$<$''.
\end{itemize}
\end{proposition}
We note that the map $\pi$ above can be taken to be
a composition of a finite sequence of blowing-ups
with smooth centers.
The following apparently
more general description of $\ensuremath{\mathcal{P}}$ is equivalent
to Definition \ref{def:period}
(see \cite[Thm 2.5, Prop 4.2]{bb-per}).
\begin{proposition}
\label{prop:period2}
The ring $\ensuremath{\mathcal{P}}$ is exactly the ring generated by the
numbers of the form $\int_\Delta\omega$, where
$X$ is a smooth algebraic variety of dimension $\ell$
defined over $\ensuremath{\mathbb{Q}}$, $E\subset X$ is a divisor with
normal crossings, $\omega\in\Omega^\ell(X)$ is a
top degree algebraic differential form on $X$, and
$\Delta\subset X$ is an $\ell$-dimensional
compact real
semi-algebraic set with $\partial\Delta\subset E$.
\end{proposition}
In view of Proposition \ref{prop:recti},
we may assume that the semi-algebraic
cycle $\Delta$ in Proposition \ref{prop:period2}
is smooth and locally (analytically) a union of quadrants.
We now turn to the proof that real periods are
elementary. We first reduce the problem to
the volumes of bounded semi-algebraic sets.
\begin{lemma}
\label{lem:bdd}
The ring of periods $\ensuremath{\mathcal{P}}$ is generated by
\begin{equation}
\label{eq:basis}
\left\{
\operatorname{vol}(D)\mid
D\subset\ensuremath{\mathbb{R}}^k\ \mbox{\normalfont is bounded basic
open semi-algebraic set}
\right\}.
\end{equation}
\end{lemma}
\proof
We will prove:
\begin{itemize}
\item[(i)] $\ensuremath{\mathcal{P}}$ is generated by
$$
\left\{
\operatorname{vol}(D)\mid
D\subset\ensuremath{\mathbb{R}}^k\ \mbox{\normalfont is bounded
open semi-algebraic set}
\right\},
$$
and
\item[(ii)] The volumes of open semi-algebraic subsets
of $\ensuremath{\mathbb{R}}^\ell$ are generated by those of basic ones.
\end{itemize}
The second assertion (ii) is easy. Indeed, a
semi-algebraic subset of the form (\ref{eq:semialg})
with $p>0$ has measure zero.
Since we are only interested in volumes,
we may ignore sets of measure zero,
and we may therefore regard an open semi-algebraic subset
as a disjoint union of basic ones modulo
measure zero sets.
Now we prove (i).
We use the description in
Proposition \ref{prop:period2}
with $\Delta$ smooth and locally (analytically)
isomorphic to a union of quadrants. Fix a semi-algebraic
triangulation $\Delta=\bigcup_\alpha\Delta_\alpha$ and
also fix base points $p_\alpha\in\Delta_\alpha$ in each simplex.
By taking the triangulation small enough,
we may assume that the orthogonal projections
\begin{equation}
\pi_\alpha:\Delta_\alpha\longrightarrow T_{p_\alpha}\Delta_\alpha
\end{equation}
induce isomorphisms
$\pi_\alpha:\Delta_\alpha\stackrel{\cong}{\longrightarrow}
\pi_\alpha(\Delta_\alpha)\subset T_{p_\alpha}\Delta_\alpha$. Then
the image $K_\alpha:=\pi_\alpha(\Delta_\alpha)$
is also a semi-algebraic
set. Denote the inverse of the projection by
$\psi_\alpha:K_\alpha\stackrel{\cong}{\longrightarrow}\Delta_\alpha$,
which is a semi-algebraic $C^\infty$-map.
Fix a coordinate $(z_1, \dots, z_\ell)$ of
the affine space $T_{p_\alpha}\Delta_\alpha$. Then the pull-back
of $\omega$ by $\psi_\alpha$ is of the form
\begin{equation}
(\psi_\alpha)^*\omega=H_\alpha(z)dz_1\wedge\dots\wedge dz_\ell.
\end{equation}
Since compositions and derivatives of
semi-algebraic functions are again semi-algebraic,
$H_\alpha(z)$ is a semi-algebraic $C^\infty$-function.
So the integral
$\int_{\Delta_\alpha}\omega=\int_{K_\alpha}H_\alpha(z)\,dz_1\cdots dz_\ell$
is (after splitting $K_\alpha$ according to the sign of $H_\alpha$ if necessary,
and up to sign) equal to the volume $\operatorname{vol}(D_\alpha)$ of the bounded
semi-algebraic domain
\begin{equation}
D_\alpha=\{
(x, t)\in\ensuremath{\mathbb{R}}^\ell\times\ensuremath{\mathbb{R}}\mid
x\in K_\alpha,\ 0\leq t\leq H_\alpha(x)\}.
\end{equation}
Thus we have (i). \qed
\subsection{Quantifier elimination}
\label{subsec:qe}
Let $\ensuremath{\mathcal{L}}_{OR}$ be the language
$$
\ensuremath{\mathcal{L}}_{OR}=(+, -, \cdot, 0, 1, <, =)
$$
of ordered rings. We consider the theory $T$ of
the real number field $\ensuremath{\mathbb{R}}$ in the language $\ensuremath{\mathcal{L}}_{OR}$.
Recall that a quantifier free formula
$\psi(x_1, \dots, x_n)$ is a Boolean combination of
inequalities $p(x_1, \dots, x_n)>0$, where
$p\in\ensuremath{\mathbb{Z}}[x_1, \dots, x_n]$.
The following is due to Tarski \cite{tar-dec},
see also \cite{coh-dec}.
\begin{theorem}
\label{thm:qe}
{\normalfont (Tarski)}
On the real number field,
every $\ensuremath{\mathcal{L}}_{OR}$-formula $\varphi(x_1, \ldots, x_n)$ is
equivalent to a quantifier free formula
$\varphi^*(x_1, \ldots, x_n)$, i.e.,
$$
T\vDash
\forall x_1\forall x_2\cdots\forall x_n
\left(
\varphi(x_1, \ldots, x_n)\Leftrightarrow\varphi^*(x_1, \ldots, x_n)
\right).
$$\qed
\end{theorem}
Let
\begin{equation}
D=\{{x}=(x_1, \ldots, x_\ell)\in\ensuremath{\mathbb{R}}^\ell\mid
G_k({x})>0,\ k=1, \ldots, q\},
\end{equation}
be a domain in $\ensuremath{\mathbb{R}}^\ell$,
where $G_k(x)\in\ensuremath{\mathbb{Z}}[x_1, \ldots, x_\ell]$, and
write
$$
G_k(x)=
\sum_{J}a_{kJ}x^J,
$$
where $J=(j_1, \ldots, j_\ell)$ is a multi-index and
$x^J=x_1^{j_1}\cdots x_\ell^{j_\ell}$.
Let us consider the following predicate in the
variables $s_i, t_i, a_{kJ}$:
\medskip
\noindent
$R(s_i, t_i, a_{kJ}: 1\leq i\leq\ell, 1\leq k\leq q, J)$:
\begin{equation}
\forall x_1\dots\forall x_\ell
\left(
\bigwedge_{i=1}^{\ell}(s_i\leq x_i\leq t_i)
\ \Rightarrow\
\bigwedge_{k=1}^{q}(G_k(x)>0)
\right)
\end{equation}
The above formula means that the box
$\prod_{i=1}^\ell[s_i, t_i]$ is
contained in the domain $D$,
\begin{equation}
\prod_{i=1}^\ell[s_i, t_i]=
[s_1, t_1]\times\cdots\times [s_\ell, t_\ell]
\subset D.
\end{equation}
From Theorem \ref{thm:qe}, we have a
quantifier free formula
$R^*(s_i, t_i, a_{kJ})$ such that,
for all $s_i, t_i, a_{kJ}$,
\begin{equation}
R^*(s_i, t_i, a_{kJ})\Longleftrightarrow
[s_1, t_1]\times\cdots\times [s_\ell, t_\ell]
\subset D.
\end{equation}
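For instance, in the toy case $\ell=q=1$ with $G_1(x)=a-x^2$, the containment $[s,t]\subset D=\{x\in\ensuremath{\mathbb{R}}\mid a-x^2>0\}$ is expressed by the quantifier free formula
$$
(s>t)\ \vee\ \bigl(a-s^2>0\ \wedge\ a-t^2>0\bigr),
$$
since on an interval the function $x\mapsto x^2$ attains its maximum at an endpoint (the first disjunct covers the degenerate case of an empty box).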
\subsection{Riemann sum}
\label{subsec:rs}
Let $D\subset\ensuremath{\mathbb{R}}^\ell$ be a basic open
semi-algebraic subset as in (\ref{eq:basic}).
Now we assume that $D$ is bounded and
contained in a large cube $[0,r]^\ell$, $r>0$.
Then the volume $\operatorname{vol}(D)$ is approximated by
the inner Riemann sum.
For a positive integer $n>0$ and $k_1, \dots, k_\ell\in\ensuremath{\mathbb{N}}$,
define a small cube $C_n(k_1,\dots, k_\ell)$ of size
$r/n$ by
$$
C_n(k_1, \ldots, k_\ell)=
\left[\frac{k_1r}{n}, \frac{(k_1+1)r}{n}\right]\times
\cdots\times
\left[\frac{k_\ell r}{n}, \frac{(k_\ell +1)r}{n}\right].
$$
Trivially these cubes subdivide the large cube
$[0,r]^\ell=\bigcup_{0\leq k_i<n}C_n(k_1,\dots, k_\ell)$.
Let us denote by $V_n$ the union
$$
V_n=\bigcup_{C_n(k)\subset D}C_n(k_1, \dots, k_\ell)
$$
of small cubes which are contained in $D$.
We will prove that $\operatorname{vol}(V_n)\rightarrow\operatorname{vol}(D)$
(as $n\rightarrow\infty$) determines an elementary
real number.
\begin{lemma}
The function
$$
\ensuremath{\mathbb{N}}\longrightarrow\ensuremath{\mathbb{Q}}\ \left(
n\longmapsto\operatorname{vol}(V_n)\right)
$$
is elementary.
\end{lemma}
\proof
To compute the Riemann sum $\operatorname{vol}(V_n)$,
we have to know for which
$(k_1, \dots, k_\ell)$ the small cube $C_n(k_1, \dots, k_\ell)$
is contained in $D$. From Theorem \ref{thm:qe} in the
previous section,
this is decided by a quantifier free
formula $R^*(s_i, t_i, a_{kJ})$.
By definition, it is a Boolean
combination of the predicates of the form
\begin{equation}
p(s_i, t_i, a_{kJ})>0,
\end{equation}
with $p\in\ensuremath{\mathbb{Z}}[s_i, t_i, a_{kJ}]$.
The truth value of the statement $C_n(k)\subset D$ is decided
by checking the truth values of
a Boolean combination of predicates of the form
\begin{equation}
p\left(
\frac{k_ir}{n}, \frac{(k_i+1)r}{n}, a_{kJ}
\right)>0.
\end{equation}
Thus the relation $C_n(k_1, \dots, k_\ell)\subset D$
can be decided elementarily, that is, there exists
an elementary function
$$
\varphi:\ensuremath{\mathbb{N}}^{\ell+1}\longrightarrow\ensuremath{\mathbb{N}},\
\left(
(n, k_1, \dots, k_\ell)\longmapsto
\varphi(n, k_1, \dots, k_\ell)
\right)
$$
such that
\begin{equation}
\varphi(n, k_1, \dots, k_\ell)=
\left\{
\begin{array}{cc}
1&\mbox{ if }C_n(k_1, \dots, k_\ell)\subset D, \\
0&\mbox{otherwise}.
\end{array}
\right.
\end{equation}
Thus the volume $\operatorname{vol}(V_n)$ of the union of small cubes
is expressed as
\begin{equation}
\operatorname{vol}(V_n)=\left(\frac{r}{n}\right)^\ell
\sum_{0\leq k_i< n}
\varphi(n, k_1, \ldots, k_\ell),
\end{equation}
which is an elementary function of $n$.
\qed
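As an illustration (not part of the proof), the following sketch computes $\operatorname{vol}(V_n)$ for the concrete basic open semi-algebraic set $D=\{(x,y)\mid 1-(x-1)^2-(y-1)^2>0\}\subset[0,2]^2$, an open disk of radius $1$. Since this $D$ is convex, the quantifier free criterion for $C_n(k_1,k_2)\subset D$ reduces to checking the four corners of the small cube.
\begin{verbatim}
from fractions import Fraction

def in_D(x, y):
    # the defining inequality G(x, y) = 1 - (x-1)^2 - (y-1)^2 > 0
    return 1 - (x - 1) ** 2 - (y - 1) ** 2 > 0

def vol_V(n, r=Fraction(2)):
    # inner Riemann sum: keep the small cubes all of whose corners lie in D;
    # for this convex D that is equivalent to the whole closed cube lying in D
    h = r / n
    count = 0
    for k1 in range(n):
        for k2 in range(n):
            corners = [(k1 * h + i * h, k2 * h + j * h)
                       for i in (0, 1) for j in (0, 1)]
            if all(in_D(x, y) for x, y in corners):
                count += 1
    return count * h ** 2

# vol_V(n) increases to vol(D) = pi; the deficit is bounded by the area of the
# inner annulus of width sqrt(2)*r/n around the circle, roughly 2*pi*sqrt(2)*r/n.
\end{verbatim}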
Next we have to estimate the rate of convergence
$$
\lim_{n\rightarrow\infty}\operatorname{vol}(V_n)=\operatorname{vol}(D).
$$
\subsection{Minkowski content}
\label{subsec:mc}
In this subsection we recall notation for
the Minkowski dimension and the Minkowski content from
\cite{kp-dom}.
First let $B=\{x\in\ensuremath{\mathbb{R}}^N\mid |x|<1\}$.
Then
$$
\Upsilon_N:=\operatorname{vol}(B)=
2\frac{\pi^{N/2}}{N\cdot\Gamma(N/2)}.
$$
Here $\ensuremath{\mathcal{L}}^N$ denotes the $N$-dimensional Lebesgue measure.
\begin{definition}
Suppose $A\subset\ensuremath{\mathbb{R}}^N$ and $0\leq K\leq N$.
The {\em $K$-dimensional upper Minkowski content of $A$},
denoted by $\ensuremath{\mathcal{M}}^{*K}(A)$, is defined by
$$
\ensuremath{\mathcal{M}}^{*K}(A)=
\limsup_{{\varepsilon}\downarrow 0}
\frac{\ensuremath{\mathcal{L}}^N\{x\mid\operatorname{dist}(x,A)<{\varepsilon}\}}{\Upsilon_{N-K}{\varepsilon}^{N-K}}
$$
\end{definition}
\begin{proposition}
\label{prop:mink}
{\normalfont (\cite[Prop 3.5.5]{kp-dom})} Let
$f:\ensuremath{\mathbb{R}}^K\rightarrow\ensuremath{\mathbb{R}}^N$ be a $C^1$-map and let
$A\subset\ensuremath{\mathbb{R}}^K$ be compact with
$$
A\subset\{x\mid |D(f)|\leq\rho\}.
$$
Then
$$
\ensuremath{\mathcal{M}}^{*K}(f(A))\leq\rho^K\ensuremath{\mathcal{L}}^K(A).
$$
\end{proposition}
\subsection{Proof, completion}
\label{subsec:pf}
Now we return to the proof of
Theorem \ref{thm:main}, $\ensuremath{\mathcal{P}}\subset\ensuremath{\mathbb{R}}_{\sf (Elem)}$.
In view of Lemma \ref{lem:bdd},
it is enough to show that
the sequence $\operatorname{vol}(V_n)\rightarrow\operatorname{vol}(D)$ constructed
in \S\ref{subsec:rs} converges effectively.
The following lemma concludes
$\operatorname{vol}(D)\in\ensuremath{\mathbb{R}}_{\sf (Elem)}$.
\begin{lemma}
\label{lem:conv}
There exists a constant $L=L(D)$ depending only on $D$,
such that if $k$ and $n$ satisfy
\begin{equation}
\label{eq:condition}
4rL\sqrt{\ell}k<n,
\end{equation}
then $|\operatorname{vol}(D)-\operatorname{vol}(V_n)|<1/k$.
\end{lemma}
\proof
Set $P(x)=\prod_{k=1}^q G_k(x)$.
Then $\partial D\subset \{P=0\}$.
By the uniformization theorem
(Proposition \ref{prop:unif}), the zero set $\{P=0\}$ is the
image $\varphi(N)$ of a proper real analytic map
$\varphi:N\rightarrow \ensuremath{\mathbb{R}}^\ell$ for some real analytic manifold $N$
of the same dimension as $\{P=0\}$. Since $\partial D\subset \ensuremath{\mathbb{R}}^\ell$
is compact, from Proposition \ref{prop:mink},
the $(\ell-1)$-dimensional Minkowski content
$\ensuremath{\mathcal{M}}^{*(\ell-1)}(\partial D)$ of
the boundary $\partial D$ is finite.
There are constants $L>0$ and ${\varepsilon}_0>0$
such that
$$
\frac{\ensuremath{\mathcal{L}}^{\ell}(\{y\in\ensuremath{\mathbb{R}}^\ell\mid
\operatorname{dist} (y, \partial D)<{\varepsilon}\})}{2{\varepsilon}}<L,
$$
for $0<{\varepsilon}<{\varepsilon}_0$. Equivalently we have
$$
\ensuremath{\mathcal{L}}^{\ell}(\{y\in\ensuremath{\mathbb{R}}^\ell\mid
\operatorname{dist} (y, \partial D)<{\varepsilon}\})<2{\varepsilon} L.
$$
Choose $n$ large enough and define ${\varepsilon}$ by
\begin{equation}
\label{eq:large}
\frac{r\sqrt{\ell}}{n}=\frac{\varepsilon}{2};
\end{equation}
note that the left-hand side is exactly the diagonal length of a
small cube $C_n(k_1, \dots, k_\ell)$.
Let us consider the subset of $D$ which is
$\varepsilon$-away from the
boundary (or removing $\varepsilon$-neighborhood of
the boundary)
\begin{equation}
D_{>\varepsilon}=
\{x\in D\mid
\operatorname{dist}(x, \partial D)>\varepsilon\}.
\end{equation}
It is easily seen that, under (\ref{eq:large}),
\begin{equation}
D_{>\varepsilon}\subset V_n\subset D.
\end{equation}
Instead of $\operatorname{vol}(D-V_n)$, we will estimate
$\operatorname{vol}(D-D_{>{\varepsilon}})$.
Hence if we choose $n$ as in Eq. (\ref{eq:large}),
\begin{eqnarray*}
|\operatorname{vol}(D)-\operatorname{vol}(V_n)|&\leq&|\operatorname{vol}(D)-\operatorname{vol}(D_{>{\varepsilon}})|\\
&=&\ensuremath{\mathcal{L}}^{\ell}(\{y\in D \mid \operatorname{dist} (y, \partial D)<{\varepsilon}\})\\
&\leq&\ensuremath{\mathcal{L}}^{\ell}(\{y\in \ensuremath{\mathbb{R}}^\ell \mid \operatorname{dist} (y, \partial D)<{\varepsilon}\})\\
&<&2{\varepsilon} L\\
&=&\frac{4r\sqrt{\ell}L}{n}.
\end{eqnarray*}
Thus if (\ref{eq:condition}) is satisfied,
we have $|\operatorname{vol}(D)-\operatorname{vol}(V_n)|<1/k$.
\qed
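To illustrate the quantitative content of the lemma: if, say, $D\subset[0,2]^2$ is an open disk of radius $1$ (so $r=2$, $\ell=2$), then for $0<{\varepsilon}<1$
$$
\ensuremath{\mathcal{L}}^{2}(\{y\in\ensuremath{\mathbb{R}}^2\mid\operatorname{dist}(y,\partial D)<{\varepsilon}\})
=\pi\bigl((1+{\varepsilon})^2-(1-{\varepsilon})^2\bigr)=4\pi{\varepsilon},
$$
so any $L>2\pi$ works, and condition (\ref{eq:condition}) becomes, roughly, $n>16\sqrt{2}\,\pi k$.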
\medskip
\noindent
{\bf Acknowledgment.}
The author thanks Professor Masa-Hiko Saito for his interest in
this work and his constant encouragement. The author also thanks
Professors
Toshiyasu Arai,
Makoto Kikuchi,
Takefumi Kondo,
Hiraku Kawanoue,
Takeshi Nozawa, and
Okihiro Sawada
for comments and useful conversations on
several topics treated in this paper.
\bibliographystyle{plain}
| {
"timestamp": "2008-05-03T11:33:18",
"yymm": "0805",
"arxiv_id": "0805.0349",
"language": "en",
"url": "https://arxiv.org/abs/0805.0349",
"abstract": "The periods, introduced by Kontsevich and Zagier, form a class of complex numbers which contains all algebraic numbers and several transcendental quantities. Little has been known about qualitative properties of periods. In this paper, we compare the periods with hierarchy of real numbers induced from computational complexities. In particular we prove that periods can be effectively approximated by elementary rational Cauchy sequences. As an application, we exhibit a computable real number which is not a period.",
"subjects": "Algebraic Geometry (math.AG); Number Theory (math.NT)",
"title": "Periods and elementary real numbers",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9890130599601177,
"lm_q2_score": 0.7154239836484143,
"lm_q1q2_score": 0.7075636632369754
} |
https://arxiv.org/abs/1407.1142 | Regularity Properties of Sectorial Operators: Counterexamples and Open Problems | We give a survey on the different regularity properties of sectorial operators on Banach spaces. We present the main results and open questions in the theory and then concentrate on the known methods to construct various counterexamples. | \section{Introduction}
By now sectorial operators play a central role in the study of abstract evolution equations. Moreover, in the past decades certain sectorial operators with additional properties have become important both from the point of view of operator theory and partial differential equations. We call these additional properties \emph{regularity properties} of sectorial operators. Very important examples are the boundedness of the $H^{\infty}$-calculus or the imaginary powers, $\mathcal{R}$-sectoriality and -- in the case that the sectorial operator generates a semigroup -- the property of having a dilation to a group. This survey is intended as a quick guide to these properties and the main results and open questions in this area. A particular emphasis is thereby given to the presentation of various methods to construct counterexamples.
In the first section we introduce all aforementioned properties and list the main results. In particular we will see that on $L_p$ for $p \in (1, \infty)$ and on more general Banach spaces the following implications hold:
\[ \text{loose dilation} \quad \Rightarrow \quad \text{bounded } \text{$H^{\infty}$-calculus} \quad \Rightarrow \quad \text{BIP} \quad \Rightarrow \quad \mathcal{R}\text{-sectorial} \]
and all of them imply sectoriality by their mere definitions. Our main goal in the sections thereafter is to give explicit counterexamples which show that for each of the above properties the converse implication $\Leftarrow$ does not hold. We present different approaches to construct such counterexamples. The first one is well-known and the most far-reaching and uses Schauder multipliers. In~\cite{Fac13} and~\cite{Fac14} this approach has been developed further to give the first explicit example of a sectorial operator on $L_p$ which is not $\mathcal{R}$-sectorial.
The second approach uses a theorem of S.~Monniaux to give examples of sectorial operators with BIP which do not have a bounded $H^{\infty}$-calculus. Finally, we study the regularity properties on exotic Banach spaces and show how Pisier's counterexample to the Halmos problem can be used to give an example of a sectorial operator with a bounded $H^{\infty}(\Sigma_{\frac{\pi}{2}+})$-calculus which does not have a dilation. Moreover, we meet and motivate open problems in the theory and formulate them separately whenever they arise.
\section{Main Definitions and Fundamental Results}
In this section we give the definitions of the regularity properties to be considered later. Further, we present the main results for these regularity classes. Our leitmotif is to present all results in the most general form that does not involve the introduction of new concepts apart from the main ones. We hope that this allows the reader to see the main ideas clearly without getting himself lost in details. For further information we refer to~\cite{KunWei04}, \cite{DHP03} and~\cite{Haa06}. Furthermore we make the following convention.
\begin{convention}
All Banach spaces are assumed to be complex.
\end{convention}
\subsection{Sectorial Operators}
We begin our journey with sectorial operators. For $\omega \in (0,\pi)$ we denote by
\[ \Sigma_{\omega} \coloneqq \{ z \in \mathbb{C} \setminus \{ 0 \}: \abs{\arg(z)} < \omega \} \]
the open sector in the complex plane with opening angle $\omega$, where our convention is that $\arg z \in (-\pi,\pi]$.
\begin{definition}[Sectorial Operator]\index{sectorial operator} A closed densely defined operator $A$ with dense range on a Banach space $X$ is called \emph{sectorial} if there exists an $\omega \in (0, \pi)$ such that
\begin{equation*}
\label{sectorial}
\tag{$S_{\omega}$}
\sigma(A) \subset \overline{\Sigma_{\omega}} \qquad \text{and} \qquad \sup_{\lambda \not\in \overline{\Sigma_{\omega + \epsilon}}} \norm{\lambda R(\lambda, A)} < \infty \quad \forall \epsilon > 0.
\end{equation*}
One defines the \emph{sectorial angle of $A$} as $\omega(A) \coloneqq \inf \{ \omega: \text{\eqref{sectorial} holds} \}$.
\end{definition}
\begin{remark}
The above definition automatically implies that $A$ is injective. The definition of sectorial operators varies in the literature. Some authors do not require a sectorial operator to be injective or to have dense range. Others even omit the density of the domain. We give this strict definition to reduce technical difficulties when dealing with bounded imaginary powers and bounded $H^{\infty}$-calculus. For a very general treatment avoiding unnecessary restrictions in the development as far as possible see the monograph~\cite{Haa06}.
\end{remark}
\subsection{\texorpdfstring{$\mathcal{R}$}{R}-Sectorial Operators}
In the study of $L_p$-maximal parabolic regularity culminating in the work~\cite{Wei01} an equivalent characterization of maximal $L_p$-regularity in terms of a stronger sectoriality condition has become very useful both for theory and applications. This condition is called $\mathcal{R}$-sectoriality. We will exclusively treat this condition from an operator theoretic point of view and refer to~\cite{KunWei04} and~\cite{DHP03} for the connection with non-linear parabolic partial differential equations.
Let $r_k(t) \coloneqq \sign \sin (2^k \pi t)$ be the $k$-th \emph{Rademacher function}. Then on the probability space $([0,1], \mathcal{B}([0,1]), \lambda)$, where $\mathcal{B}([0,1])$ is the Borel $\sigma$-algebra on $[0,1]$ and $\lambda$ denotes the Lebesgue measure, the Rademacher functions form an independent identically distributed family of random variables satisfying $\mathbb{P}(r_k = \pm 1) = \frac{1}{2}$.
\begin{definition}[$\mathcal{R}$-Boundedness]
A family of operators $\mathcal{T} \subseteq \mathcal{B}(X)$ on a Banach space $X$ is called \emph{$\mathcal{R}$-bounded} if for one $p \in [1, \infty)$ (equiv.\ all $p \in [1, \infty)$ by the Khintchine inequality) there exists a finite constant $C_p \ge 0$ such that for each finite subset $\{T_1, \ldots, T_n \}$ of $\mathcal{T}$ and arbitrary $x_1, \ldots, x_n \in X$ one has
\begin{equation}
\biggnorm{\sum_{k = 1}^n r_k T_k x_k}_{L_p([0,1]; X)} \le C_p \biggnorm{\sum_{k=1}^n r_k x_k}_{L_p([0,1]; X)}. \label{eq:R-ineq}
\end{equation}
The best constant $C_p$ such that \eqref{eq:R-ineq} holds is called the \emph{$\mathcal{R}$-bound} of $\mathcal{T}$ and is denoted (for an implicitly fixed $p$) by $\mathcal{R}(\mathcal{T})$.
\end{definition}
Furthermore we denote by $\Rad(X)$ the closed span of the functions of the form $\sum_{k=1}^n r_k x_k$ in $L_1([0,1];X)$. The $\mathcal{R}$-bound behaves in many ways similar to a classical norm. For example, if $\mathcal{S}$ is a second family of operators, one sees that (if the operations make sense)
\[ \mathcal{R}(\mathcal{T} + \mathcal{S}) \le \mathcal{R}(\mathcal{T}) + \mathcal{R}(\mathcal{S}), \qquad \mathcal{R}(\mathcal{TS}) \le \mathcal{R}(\mathcal{T})\mathcal{R}(\mathcal{S}). \]
Note that by the orthogonality of the Rademacher functions in $L_2([0,1])$ a family $\mathcal{T} \subseteq \mathcal{B}(H)$ for some Hilbert space $H$ is $\mathcal{R}$-bounded if and only if $\mathcal{T}$ is bounded in operator norm. In fact, an $\mathcal{R}$-bounded subset $\mathcal{T} \subseteq \mathcal{B}(X)$ for a Banach space $X$ is clearly always norm-bounded and one can show that the converse holds if and only if $X$ is isomorphic to a Hilbert space~\cite[Proposition~1.13]{AreBu02}.
Now, if one replaces norm-boundedness by $\mathcal{R}$-boundedness, one obtains the definition of an $\mathcal{R}$-sectorial operator.
\begin{definition}[$\mathcal{R}$-Sectorial Operator]\index{$\mathcal{R}$-sectorial operator}
A sectorial operator on a Banach space $X$ is called \emph{$\mathcal{R}$-sectorial }if for some $\omega \in (\omega(A), \pi)$ one has
\begin{equation*}
\mathcal{R} \{ \lambda R(\lambda,A): \lambda \not\in \overline{\Sigma_{\omega}} \} < \infty. \label{R-sectorial}\tag{$\mathcal{R}_{\omega}$}
\end{equation*}
One defines the \emph{$\mathcal{R}$-sectorial angle} of $A$ as $\omega_R(A) \coloneqq \inf\{ \omega: \text{\eqref{R-sectorial} holds} \}$. If $A$ is not $\mathcal{R}$-sectorial, we set $\omega_R(A) \coloneqq \infty$.
\end{definition}
By definition one has $\omega(A) \le \omega_{R}(A)$. In Hilbert spaces an operator is sectorial if and only if it is $\mathcal{R}$-sectorial. In this case the equality $\omega(A) = \omega_R(A)$ holds. There are examples of sectorial operators $A$ on Banach spaces for which one has the strict inequalities $\omega(A) < \omega_R(A) < \infty$. For this see the examples cited in Section~\ref{sec:hinfty} and use the fact that $\omega_R(A) = \omega_{H^{\infty}}(A)$ on UMD-spaces. However, the following problem seems to be open.
\begin{problem}
Let $A$ be an $\mathcal{R}$-sectorial operator on $L_p$ for $p \in (1,\infty)$. Does one have $\omega(A) = \omega_R(A)$ (if $A$ generates a positive / contractive / positive contractive analytic $C_0$-semigroup)?
\end{problem}
On general Banach spaces $\mathcal{R}$-sectorial operators clearly are sectorial; the converse question, whether every sectorial operator is $\mathcal{R}$-sectorial, will be explicitly answered negatively in Theorem~\ref{thm:counterexample_mrp_lp}.
\subsection{Bounded \texorpdfstring{$H^{\infty}$-}{Holomorphic }Calculus for Sectorial Operators}\label{sec:hinfty}
In complete analogy to the Dunford functional calculus for bounded operators one can define a holomorphic functional calculus for sectorial operators. This goes back to the work~\cite{McI86} in the Hilbert space case and to~\cite{CDMY96} in the Banach space case. We start by introducing the necessary function spaces.
\begin{definition}
For $\sigma \in (0, \pi)$ we define
\begin{align*}
& H_0^{\infty}(\Sigma_{\sigma}) \coloneqq \left\{ f: \Sigma_{\sigma} \to \mathbb{C} \text{ analytic}: \abs{f(\lambda)} \le C \frac{\abs{\lambda}^{\epsilon}}{(1 + \abs{\lambda})^{2\epsilon}} \text{ on } \Sigma_{\sigma} \text{ for } C, \epsilon > 0 \right \}, \\
& H^{\infty}(\Sigma_{\sigma}) \coloneqq \{ f: \Sigma_{\sigma} \to \mathbb{C} \text{ analytic and bounded} \}.
\end{align*}
\end{definition}
Now let $A$ be a sectorial operator on a Banach space $X$ and $\sigma > \omega(A)$. Then for $f \in H_0^{\infty}(\Sigma_{\sigma})$ one can define
\[ f(A) = \frac{1}{2\pi i} \int_{\partial \Sigma_{\sigma'}} f(\lambda) R(\lambda, A) \, d\lambda \qquad (\omega(A) < \sigma' < \sigma). \]
This is well-defined by the growth estimate on $f$ and by the invariance of the contour integral and induces an algebra homomorphism $H^{\infty}_0(\Sigma_{\sigma}) \to \mathcal{B}(X)$.
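For instance, the function $f(\lambda)=\lambda(1+\lambda)^{-2}$ lies in $H_0^{\infty}(\Sigma_{\sigma})$ for every $\sigma \in (0,\pi)$, and one checks (e.g.\ by a residue computation in the contour integral above) that $f(A)=A(1+A)^{-2}$, so the calculus is consistent with the usual meaning of rational expressions in $A$.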
One can show that this homomorphism can be extended to a bounded homomorphism on $H^{\infty}(\Sigma_{\sigma})$ satisfying a continuity property similar to the one in Lebesgue's dominated convergence theorem if and only if the homomorphism $H_0^{\infty}(\Sigma_{\sigma}) \to \mathcal{B}(X)$ is bounded. This leads us to the next definition.
\begin{definition}[Bounded $H^{\infty}$-calculus]\index{$H^{\infty}$-Calculus} A sectorial operator $A$ is said to have a \emph{bounded $H^{\infty}(\Sigma_{\sigma})$-calculus} for some $\sigma \in (\omega(A), \pi)$ if the homomorphism $f \mapsto f(A)$ from $H_0^{\infty}(\Sigma_{\sigma})$ to $\mathcal{B}(X)$ is bounded. The infimum of the $\sigma$ for which this homomorphism is bounded is denoted by $\omega_{H^{\infty}}(A)$. We say that $A$ has a \emph{bounded $H^{\infty}$-calculus} if $A$ has a bounded $H^{\infty}(\Sigma_{\sigma})$-calculus for some $\sigma \in (0, \pi)$. If $A$ does not have a bounded $H^{\infty}$-calculus, we let $\omega_{H^{\infty}}(A) \coloneqq \infty$.
\end{definition}
One can extend the functional calculus to the broader class of holomorphic functions on $\Sigma_{\sigma}$ with polynomial growth~\cite[Appendix~B]{KunWei04}. Of course, the so obtained operators cannot be bounded in general. Note that it follows directly from the definition that one always has $\omega(A) \le \omega_{H^{\infty}}(A)$ for a sectorial operator $A$. Moreover, there exist examples of sectorial operators $A$ for which the strict inequalities $\omega(A) < \omega_{H^{\infty}}(A) < \infty$ hold: in~\cite{Kal03} N.J.~Kalton gives an example on a uniformly convex space and in the unpublished manuscript~\cite{KalWei????2} there is an example on a subspace of an $L_p$-space by the same author.
There is a close connection to $\mathcal{R}$-boundedness and $\mathcal{R}$-sectoriality. A Banach space $X$ is said to have \emph{Pisier's property $(\alpha)$} (as introduced in~\cite{Pis78}) if there is a constant $C \ge 0$ such that for all $n \in \mathbb{N}$, all $n \times n$-matrices $[x_{ij}] \in M_n(X)$ of elements in $X$ and all choices of scalars $[\alpha_{ij}]\in M_n(\mathbb{C})$ one has
\[
\int_{[0,1]^2} \biggnorm{\sum_{i,j=1}^n \alpha_{ij} r_i(s) r_j(t) x_{ij}} \, ds \, dt \le C \sup_{i,j} \abs{\alpha_{ij}} \int_{[0,1]^2} \biggnorm{\sum_{i,j=1}^n r_i(s) r_j(t) x_{ij}} \, ds \, dt.
\]
We remark that $L_p$-spaces have Pisier's property $(\alpha)$ for $p \in (1, \infty)$. A proof of the following theorem can be found in~\cite[Theorem~12.8]{KunWei04}.
\begin{theorem}\label{thm:h_infty_generates_R_bounded_sets}
Let $X$ be a Banach space with Pisier's property $(\alpha)$ and $A$ a sectorial operator on $X$ with a bounded $H^{\infty}(\Sigma_{\sigma})$-calculus for some $\sigma \in (0, \pi)$. Then for all $\sigma' \in (\sigma, \pi)$ and all $C \ge 0$ the set
\[ \{ f(A): \norm{f}_{H^{\infty}(\Sigma_{\sigma'})} \le C \} \]
is $\mathcal{R}$-bounded.
\end{theorem}
Note that this also implies under the above assumptions that a sectorial operator with a bounded $H^{\infty}$-calculus is $\mathcal{R}$-sectorial. This can also be proved under the following weaker assumption on the Banach space~\cite[Theorem~5.3]{KalWei01}. A Banach space $X$ \emph{has property $(\Delta)$} if there is a constant $C \ge 0$ such that for all $n \in \mathbb{N}$ and all $n \times n$-matrices $[x_{ij}] \in M_n(X)$ one has
\[
\int_{[0,1]^2} \biggnorm{\sum_{i=1}^n \sum_{j=1}^{i} r_i(s) r_j(t) x_{ij}} \, ds \, dt \le C \int_{[0,1]^2} \biggnorm{\sum_{i,j=1}^n r_i(s) r_j(t) x_{ij}} \, ds \, dt.
\]
\begin{theorem}\label{thm:hinfty_implies_rsectorial}
Let $X$ be a Banach space with property $(\Delta)$. Further let $A$ be a sectorial operator on $X$ with a bounded $H^{\infty}$-calculus. Then $A$ is $\mathcal{R}$-sectorial with $\omega_R(A) = \omega_{H^{\infty}}(A)$.
\end{theorem}
The above theorem can be seen as a generalization of the result that a sectorial operator with a bounded $H^{\infty}$-calculus on a Hilbert space satisfies $\omega(A) = \omega_{H^{\infty}}(A)$. In particular, the example for the strict inequality $\omega_{H^{\infty}}(A) > \omega(A)$ on a subspace of $L_p$ gives the same strict inequality for the $\mathcal{R}$-sectorial angle $\omega_R(A)$.
It is an important and natural question to ask which classes of sectorial operators have a bounded $H^{\infty}$-calculus. In the following a \emph{contractive analytic semigroup} is an analytic semigroup $(T(z))$ with $\norm{T(t)} \le 1$ for all $t \ge 0$. In the Hilbert space case one has the following characterization.
\begin{theorem}\label{thm:characterization_hinfty_hilbert}
Let $A$ be a sectorial operator on a Hilbert space such that $-A$ generates a contractive analytic $C_0$-semigroup. Then $A$ has a bounded $H^{\infty}$-calculus with $\omega_{H^{\infty}}(A) = \omega(A) < \frac{\pi}{2}$.
Conversely, if $A$ has a bounded $H^{\infty}$-calculus with $\omega_{H^{\infty}}(A) < \frac{\pi}{2}$, then there exists an invertible $S \in \mathcal{B}(H)$ such that $-S^{-1} A S$ generates a contractive analytic $C_0$-semigroup.
\end{theorem}
The first implication follows from the existence of a dilation to a $C_0$-group as discussed in Section~\ref{sec:dilations} and the fact $\omega_{H^{\infty}}(A) = \omega(A)$, the second implication is a result of C.~Le Merdy~\cite[Theorem~1.1]{Mer98}. There is an analogue in the $L_p$-case.
\begin{theorem}\label{thm:characterization_hinfty_banach}
Let $p \in (1, \infty)$ and $A$ be a sectorial operator on an $L_p$-space $L_p(\Omega)$ such that $-A$ generates a contractive positive analytic $C_0$-semigroup. Then $A$ has a bounded $H^{\infty}$-calculus with $\omega_{H^{\infty}}(A) = \omega_R(A) < \frac{\pi}{2}$.
Conversely, if $A$ has a bounded $H^{\infty}$-calculus with $\omega_{H^{\infty}}(A) < \frac{\pi}{2}$, then there exists a sectorial operator $B$ on a second $L_p$-space $L_p(\tilde{\Omega})$ with $\omega_{H^{\infty}}(B) < \frac{\pi}{2}$ such that $-B$ generates a positive contractive analytic $C_0$-semigroup, a quotient of a subspace $E$ of $L_p(\tilde{\Omega})$ and an invertible $S \in \mathcal{B}(L_p(\Omega), E)$ with $A = S^{-1} B S$.
\end{theorem}
The first implication is due to L.~Weis (see \cite[Remark~4.9c)]{Wei01} and \cite[Section~4d)]{Wei01b}), the second one was obtained by the author in~\cite{Fac14b}. There are some open questions regarding generalizations of Weis' result.
\begin{problem}
Let $A$ be a sectorial operator on some UMD-Banach lattice and suppose that $-A$ is the generator of a positive contractive $C_0$-semigroup. Does $A$ have a bounded $H^{\infty}$-calculus (bounded imaginary powers / is $\mathcal{R}$-analytic)?
\end{problem}
\begin{problem}\label{prob:contractive_generator_lp_hinfy}
Let $A$ be a sectorial operator on some $L_p$-space for $p \in (1,\infty)$ and suppose that $-A$ is the generator of a contractive $C_0$-semigroup. Does $A$ have a bounded $H^{\infty}$-calculus (bounded imaginary powers / is $\mathcal{R}$-analytic)?
\end{problem}
\begin{problem}
Let $A$ be a sectorial operator on some $L_p$-space for $p \in (1,\infty)$ and suppose that $-A$ is the generator of a positive $C_0$-semigroup. Does $A$ have a bounded $H^{\infty}$-calculus (bounded imaginary powers / is $\mathcal{R}$-analytic)?
\end{problem}
\begin{problem}
Find a similar characterization as in Theorem~\ref{thm:characterization_hinfty_hilbert} or Theorem~\ref{thm:characterization_hinfty_banach} in the case $\omega_{H^{\infty}(A)} = \frac{\pi}{2}$.
\end{problem}
It was observed by C.~Le Merdy in~\cite[p.~33]{Mer99} that a counterexample to Problem~\ref{prob:contractive_generator_lp_hinfy} on $L_p$ would also provide a negative answer to a (largely) open conjecture by Matsaev. For an introduction to the problem, its noncommutative analogue and further references we refer to the recent article~\cite{Arh13}. We note that there exists a $2 \times 2$-matrix counterexample to Matsaev's conjecture for the case $p=4$ which was obtained with the help of numerics~\cite{Dru11}, but an analytic approach is missing.
\subsection{Bounded Imaginary Powers (BIP)}
Sectorial operators with bounded imaginary powers have been studied before the first appearance of the $H^{\infty}$-calculus. They play an important role in the Dore--Venni theorem~\cite[Theorem~2.1]{DorVen87} and in the interpolation of fractional domain spaces~\cite{Yag84}.
\begin{definition}[Bounded Imaginary Powers (BIP)]\index{bounded imaginary powers}\index{BIP|see{Bounded Imaginary Powers}}
A sectorial operator on a Banach space $X$ is said to have \emph{bounded imaginary powers (BIP)} if for all $t \in \mathbb{R}$ the operator $A^{it}$ associated to the functions $\lambda \mapsto \lambda^{it}$ via the holomorphic functional calculus is bounded.
\end{definition}
In this case $(A^{it})_{t \in \mathbb{R}}$ is a $C_0$-group on $X$ with generator $i\log A$~\cite[Corollary~3.5.7]{Haa06}. The growth of the $C_0$-group $(A^{it})_{t \in \mathbb{R}}$ is used to define the BIP-angle.
\begin{definition}
For a sectorial operator $A$ on some Banach space with bounded imaginary powers one defines
\[
\omega_{\text{BIP}}(A) \coloneqq \inf \{ \omega \ge 0: \normalnorm{A^{it}} \le Me^{\omega \abs{t}} \text{ for all } t \in \mathbb{R} \text{ for some } M \ge 0 \}.
\]
If $A$ does not have bounded imaginary powers, we set $\omega_{\text{BIP}}(A) \coloneqq \infty$.
\end{definition}
Let $A$ be a sectorial operator with a bounded $H^{\infty}(\Sigma_{\sigma})$-calculus for some $\sigma \in (0, \pi)$. Then one has
\[ \normalabs{\lambda^{it}} \le \exp(\Re(it \log \lambda)) \le \exp(\abs{t} \sigma) \]
for all $\lambda \in \Sigma_{\sigma}$. This shows that the boundedness of the $H^{\infty}$-calculus for $A$ implies that $A$ has bounded imaginary powers with $\omega_{\text{BIP}}(A) \le \omega_{H^{\infty}}(A)$. A less obvious fact is that BIP implies $\mathcal{R}$-sectoriality on UMD-spaces~\cite[Theorem~4.5]{DHP03}. A Banach space is called a UMD-space if the vector-valued Hilbert transform is bounded on $L_2(\mathbb{R};X)$. There are more equivalent definitions of UMD-spaces. For details we refer to \cite{Bur01} and \cite{Fra86}. We only note the following: if $X$ is a UMD-space, then so is $L_p(\Omega;X)$ for all measure spaces $\Omega$ and $p \in (1,\infty)$. In particular, $L_p(\Omega)$ is UMD. Moreover, every UMD-space has property $(\Delta)$, but not every UMD-space has Pisier's property $(\alpha)$.
\begin{theorem}
Let $A$ be a sectorial operator with bounded imaginary powers on a UMD-space. Then $A$ is $\mathcal{R}$-sectorial with $\omega_R(A) \le \omega_{\mathrm{BIP}}(A)$.
\end{theorem}
In particular this implies that a sectorial operator $A$ on a UMD-space with a bounded $H^{\infty}$-calculus satisfies $\omega_R(A) = \omega_{\text{BIP}}(A) = \omega_{H^{\infty}}(A)$. The first example showing that the strict inequality $\omega(A) < \omega_{\text{BIP}}(A)$ can hold was found by M.~Haase~\cite[Corollary~5.3]{Haa03} (see also Remark~\ref{rem:bip_angle_bigger}).
\subsection{Sectorial Operators which have a Dilation}
A further regularity property which is not so inherent to sectorial operators but nevertheless very important for their study is the existence of group dilations. This powerful concept goes back to B.~Sz.-Nagy. For a detailed treatment of dilation theory on Hilbert spaces see~\cite{SFBK10}. In particular one has the following result \cite[Theorem~8.1]{SFBK10}.
\begin{theorem}\label{thm:dilation_hilbert_space}
Let $(T(t))_{t \ge 0}$ be a contractive $C_0$-semigroup on a Hilbert space $H$. Then there exists a second Hilbert space $K$, an embedding $J\colon H \to K$, an orthogonal projection $P\colon K \to H$ and a unitary $C_0$-group $(U(t))_{t \in \mathbb{R}}$ on $K$ with
\[ T(t) = PU(t)J \qquad \text{for all } t \ge 0. \]
\end{theorem}
It follows from the spectral theory of normal operators that the negative generator of $(U(t))_{t \in \mathbb{R}}$ and therefore also the negative generator of $(T(t))_{t \ge 0}$ has a bounded $H^{\infty}$-calculus for all angles bigger than $\frac{\pi}{2}$. Hence, using the fact that $\omega(A) = \omega_{H^{\infty}}(A)$ we have found a proof of the first part of Theorem~\ref{thm:characterization_hinfty_hilbert}. We have seen the following.
\begin{corollary}
Let $A$ be a sectorial operator on a Hilbert space such that $-A$ generates a contractive $C_0$-semigroup. Then $A$ has a bounded $H^{\infty}$-calculus with $\omega_{H^{\infty}}(A) = \omega(A) \le \frac{\pi}{2}$.
\end{corollary}
It is now time to give a precise definition of semigroup dilations on general Banach spaces. We follow the terminology used in~\cite{ArhMer14}.
\begin{definition}
Let $(T(t))_{t \ge 0}$ be a $C_0$-semigroup on some Banach space $X$. Further let $\mathcal{X}$ denote a class of Banach spaces. We say that
\begin{def_enum}
\item $(T(t))_{t \ge 0}$ has a \emph{strict dilation} in $\mathcal{X}$ if for some $Y$ in $\mathcal{X}$ there are contractive linear operators $J\colon X \to Y$ and $Q\colon Y \to X$ and an isometric $C_0$-group $(U(t))_{t \in \mathbb{R}}$ on $Y$ such that
\[ T(t) = QU(t)J \qquad \text{for all } t \ge 0. \]
\item $(T(t))_{t \ge 0}$ has a \emph{loose dilation} in $\mathcal{X}$ if for some $Y$ in $\mathcal{X}$ there are bounded linear operators $J\colon X \to Y$ and $Q\colon Y \to X$ and a bounded $C_0$-group $(U(t))_{t \in \mathbb{R}}$ on $Y$ such that
\[ T(t) = QU(t)J \qquad \text{for all } t \ge 0. \]
\end{def_enum}
\end{definition}
Note that in the above terminology Theorem~\ref{thm:dilation_hilbert_space} shows that every contractive $C_0$-semigroup on a Hilbert space has a strict dilation in the class of all Hilbert spaces. The main connection with the other regularity properties is the following observation.
\begin{proposition}\label{prop:hinfty_and_dilation}
Let $A$ be a sectorial operator on a Banach space $X$ such that $-A$ generates a $C_0$-semigroup which has a loose dilation in the class of all UMD-Banach spaces. Then $A$ has a bounded $H^{\infty}$-calculus with $\omega_{H^{\infty}}(A) \le \frac{\pi}{2}$.
\end{proposition}
This follows from the transference principle of R.R.~Coifman and G.~Weis developed in~\cite{CoiWei76} which reduces the assertion to the case of the vector-valued shift group on $L_p(\mathbb{R};Y)$ for some UMD-space $Y$ which can be shown directly with the help of the vector-valued Mikhlin multiplier theorem~\cite[Proposition~3]{Zim89}.
On $L_p$-spaces for $p \in (1,\infty)$ one has the following characterization of strict dilations. A bounded linear operator $T\colon L_p(\Omega) \to L_p(\Omega')$ is called a \emph{subpositive contraction} if there exists a positive contraction $S\colon L_p(\Omega) \to L_p(\Omega')$, that is $\norm{S} \le 1$ and $f \ge 0 \Rightarrow Sf \ge 0$, such that $\abs{Tf} \le S\abs{f}$ for all $f \in L_p(\Omega)$.
\begin{theorem}
Let $(T(t))_{t \ge 0}$ be a $C_0$-semigroup on some $\sigma$-finite $L_p$-space for $p \in (1,\infty) \setminus \{2\}$. Then $(T(t))_{t \ge 0}$ has a strict dilation in the class of all $\sigma$-finite $L_p$-spaces if and only if $(T(t))_{t \ge 0}$ is a semigroup consisting of subpositive contractions.
\end{theorem}
Every $C_0$-semigroup of subpositive contractions on $L_p$ for $p \in (1,\infty)$ has a strict dilation by Fendler's dilation theorem~\cite{Fen97}. For the converse it suffices to show that for a strict dilation $T(t) = QU(t)J$ all the operators $U(t)$, $J$ and $Q$ are subpositive contractions (notice that $J$ and $Q^*$ are isometries). For the first two this essentially follows from the Banach--Lamperti theorem~\cite[Theorem~3.2.5]{FleJam03} on the structure of isometries on $L_p$-spaces, for the third as well if applied to the adjoint $Q^*$. However, there is no characterization of semigroups on $L_p$ with a loose dilation.
\begin{problem}
Characterize those semigroups on $L_p$ which have a loose dilation in the class of all $L_p$-spaces.
\end{problem}
For a more concrete discussion in the setting of discrete semigroups see~\cite[Section~5]{ArhMer14}. We also do not know whether the following extension of Fendler's dilation theorem to UMD-Banach lattices holds.
\begin{problem}
Does every $C_0$-semigroup of positive contractions on a UMD-Banach lattice have a strict / loose dilation in the class of all UMD-spaces?
\end{problem}
In the negative direction one knows the following: there exists a completely positive contraction, i.e.\ a discrete semigroup, on a noncommutative $L_p$-space which does not have a strict dilation in the class of all noncommutative $L_p$-spaces~\cite[Corollary~4.4]{JunMer07}. For a weak discrete counterexample in the setting of $L_p(L_q)$-spaces see~\cite[Contre exemple~6.1]{GueRay88}.
Recall that by Proposition~\ref{prop:hinfty_and_dilation} a $C_0$-semigroup $(T(t))_{t \ge 0}$ with generator $-A$ that has a loose dilation in the class of all UMD-spaces has a bounded $H^{\infty}$-calculus with $\omega_{H^{\infty}}(A) \le \frac{\pi}{2}$. The following theorem by A.~Fröhlich and L.~Weis~\cite[Corollary~5.4]{FroWei06} is a partial converse. Its proof uses square function techniques which we do not cover here, for an overview we refer to~\cite{LeM07}.
\begin{theorem}
Let $A$ be a sectorial operator on a UMD-space $X$ with $\omega_{H^{\infty}}(A) < \frac{\pi}{2}$. Then the semigroup $(T(t))_{t \ge 0}$ generated by $-A$ has a loose dilation to the space $L_2([0,1];X)$.
\end{theorem}
This shows that on UMD-spaces the existence of loose dilations in the class of UMD-spaces and of a bounded $H^{\infty}$-calculus are equivalent under the restriction $\omega_R(A) < \frac{\pi}{2}$. However, we will see in Section~\ref{sec:pisier} that there exists a semigroup generator $-A$ on a Hilbert space with $\omega_R(A) = \omega(A) = \frac{\pi}{2}$ that does not have a loose dilation in the class of all Hilbert spaces. So in general the existence of a dilation is a strictly stronger property than the existence of a bounded $H^{\infty}$-calculus.
\section{Counterexamples I: The Schauder Multiplier Method}\label{sec:dilations}
In this section we develop the most fruitful known method to construct systematically counterexamples: the Schauder multiplier method. This method was first used in~\cite{BaiCle91} and \cite{Ven93} in the context of sectorial operators to give examples of sectorial operators without bounded imaginary powers. After dealing with $H^{\infty}$-calculus and bounded imaginary powers, we present a self-contained example of a sectorial operator on $L_p$ which is not $\mathcal{R}$-sectorial.
\subsection{Schauder Multipliers}
We start our journey by giving the definition of Schauder multipliers and by studying its fundamental properties. After that we show how Schauder multipliers can be used to construct (analytic) semigroups. From now on we need some background from Banach space theory. We refer to \cite{AlbKal06}, \cite{FHH+11}, \cite{LinTza77} and~\cite{Sin70}.
\begin{definition}[Schauder Multiplier]\index{Schauder multiplier}
Let $(e_m)_{m \in \mathbb{N}}$ be a Schauder basis for a Banach space $X$. For a sequence $(\gamma_m)_{m \in \mathbb{N}} \subset \mathbb{C}$ the operator $A$ defined by
\begin{align*}
D(A) & = \biggl\{ x = \sum_{m=1}^{\infty} a_m e_m: \sum_{m=1}^{\infty} \gamma_m a_m e_m \text{ exists} \biggr\} \\
A \biggl( \sum_{m=1}^{\infty} a_m e_m \biggr) & = \sum_{m=1}^{\infty} \gamma_m a_m e_m
\end{align*}
is called the \emph{Schauder multiplier} associated to $(\gamma_m)_{m \in \mathbb{N}}$.
\end{definition}
\subsubsection{Basic Properties of Schauder Multipliers}
We now discuss some properties of Schauder multipliers whose proofs can be found in \cite[Section~9.1.1]{Haa06} and \cite{Ven93}.
\begin{proposition}
The Schauder multiplier $A$ associated to a sequence $(\gamma_m)_{m \in \mathbb{N}}$ is a densely defined closed linear operator.
\end{proposition}
A central problem in the theory of Schauder multipliers is to determine for a given Schauder basis $(e_m)_{m \in \mathbb{N}}$ the set of all sequences $(\gamma_m)_{m \in \mathbb{N}}$ for which the associated Schauder multiplier is bounded. In general, it is an extremely difficult problem to determine this space exactly. For example, the trigonometric basis is a Schauder basis for $L_p([0,1])$ for $p \in (1, \infty)$. In this particular case the above problem asks for a characterization of all bounded Fourier multipliers on $L_p$.
However, some elementary general properties of this sequence space can be obtained easily. In what follows let $BV$ be the Banach space of all sequences with bounded variation.
\begin{proposition}\label{prop:sm_prop}
Let $(e_m)_{m \in \mathbb{N}}$ be a Schauder basis for a Banach space $X$. Then there exists a constant $K \ge 0$ such that for every $(\gamma_m)_{m \in \mathbb{N}} \in BV$ the Schauder multiplier $A$ associated to $(\gamma_m)_{m \in \mathbb{N}}$ with respect to $(e_m)_{m \in \mathbb{N}}$ is bounded and satisfies
\[
\norm{A} \le K \norm{(\gamma_m)_{m \in \mathbb{N}}}_{BV}.
\]
Conversely, if $A$ is a bounded Schauder multiplier associated to some sequence $(\gamma_m)_{m \in \mathbb{N}}$, then $(\gamma_m)_{m \in \mathbb{N}}$ is bounded.
\end{proposition}
\begin{remark}\label{rem:bv} In general the above result is optimal. For if $X = BV$, then $(e_m)_{m \in \mathbb{N}_0}$ defined by $e_0$ as the constant sequence $\mathds{1}$ and $e_m = (\delta_{mn})_{n \in \mathbb{N}}$ form a conditional basis of $BV$ and the multiplier associated to a sequence $(\gamma_m)_{m \in \mathbb{N}_0}$ is bounded if and only if $(\gamma_m) \in BV$.
\end{remark}
\subsubsection{Schauder Multipliers as Generators of Analytic Semigroups}
Given an arbitrary Banach space $X$, it is difficult to guarantee, roughly spoken, the existence of interesting strongly continuous semigroups on this space. Of course, every bounded operator generates such a semigroup by means of exponentiation. Such an argument does in general not work to show the existence of $C_0$-semigroups with an unbounded generator. Indeed, on $L_{\infty}([0,1])$ a result by H.P.~Lotz~\cite[Theorem 3]{Lot85} shows that every generator of a strongly continuous semigroup is already bounded.
One therefore has to make additional assumptions on the Banach space. A very convenient and rather general assumption for separable Banach spaces is to require the existence of a Schauder basis for that space. Indeed, all classical separable Banach spaces have a Schauder basis. Moreover, it had long been an open problem whether all separable Banach spaces have a Schauder basis (this was solved negatively by~P. Enflo~\cite{Enf73}).
The next proposition shows that Schauder bases allow us to systematically construct strongly continuous semigroups (with unbounded generators) on the underlying Banach spaces.
\begin{proposition}\label{prop:sm_generator_semigroup}
Let $(e_m)_{m \in \mathbb{N}}$ be a Schauder basis for a Banach space $X$. Further let $(\gamma_m)_{m \in \mathbb{N}}$ be an increasing sequence of positive real numbers. Then the Schauder multiplier $A$ associated to $(\gamma_m)_{m \in \mathbb{N}}$ with respect to $(e_m)_{m \in \mathbb{N}}$ is a sectorial operator with $\omega(A) = 0$. In particular, $-A$ generates an analytic $C_0$-semigroup $(T(z))_{z \in \Sigma_{\frac{\pi}{2}}}$.
\end{proposition}
\subsection{Sectorial Operators without a Bounded \texorpdfstring{$H^{\infty}$-}{Holomorphic }calculus}
In this subsection we apply the so far developed methods to give examples of sectorial operators without a bounded $H^{\infty}$-calculus. The first example was given in~\cite{McIYag90}. The elegant approach of this section goes back to~\cite{Lan98} and \cite{Mer99}.
One can easily show that one cannot obtain examples of sectorial operators without a bounded $H^{\infty}$-calculus by using Schauder multipliers with respect to an unconditional basis. However, one can produce counterexamples from Schauder multipliers with respect to a conditional basis.
\begin{theorem}\label{thm:sectorial_nohinfty}
Let $(e_m)_{m \in \mathbb{N}}$ be a conditional Schauder basis for a Banach space $X$. Then the Schauder multiplier $A$ associated to the sequence $(2^m)_{m \in \mathbb{N}}$ is a sectorial operator with $\omega(A) = 0$ which does not have a bounded $H^{\infty}$-calculus.
\end{theorem}
\begin{proof}
By Proposition~\ref{prop:sm_generator_semigroup} everything is already shown except for the fact that $A$ does not have a bounded $H^{\infty}$-calculus. For this observe that for each $f \in H^{\infty}(\Sigma_{\sigma})$ for some $\sigma \in (0,\pi)$ the operator $f(A)$ is given by the Schauder multiplier associated to the sequence $(f(\gamma_m))_{m \in \mathbb{N}}$. Now assume that $A$ has a bounded $H^{\infty}(\Sigma_{\sigma})$-calculus for some $\sigma \in (0, \pi)$. By~\cite[Corollary~9.1.6]{Haa06} on the interpolation of sequences by holomorphic functions, for every element in $\ell_{\infty}$ there exists an $f \in H^{\infty}(\Sigma_{\sigma})$ such that $(f(2^m))_{m \in \mathbb{N}}$ is the desired sequence. This means that every element in $\ell_{\infty}$ defines a bounded Schauder multiplier. However, this means that $(e_m)_{m \in \mathbb{N}}$ is unconditional in contradiction to our assumption.
\end{proof}
\begin{corollary}
Let $X$ be a Banach space that admits a Schauder basis. Then there exists a sectorial operator $A$ with $\omega(A) = 0$ that does not have a bounded $H^{\infty}$-calculus.
\end{corollary}
\begin{proof}
Every Banach space which admits a Schauder basis also admits a conditional Schauder basis~\cite[Theorem 9.5.6]{AlbKal06}. The result then follows directly from Theorem~\ref{thm:sectorial_nohinfty}.
\end{proof}
Next we give a concrete example of a sectorial operator of the above form which has bounded imaginary powers but no bounded $H^{\infty}$-calculus. This goes back to G.~Lancien \cite{Lan98} (see also \cite{Mer99}).
\begin{example}\label{exp:bip_nohinfty}
We consider the trigonometric system $(e^{imz})_{m \in \mathbb{Z}}$ enumerated as $(0,-1,1,-2, \ldots)$ which is a conditional basis of $L_p([0,2\pi])$ for $p \in (1, \infty) \setminus \{2\}$~\cite[Theorem~2.c.16]{LinTza79}. We can then consider the Schauder multiplier $A$ associated to the sequence $(2^m)_{m \in \mathbb{Z}}$. As a consequence of the boundedness of the Hilbert transform on $L_p$ one can consider the operator separately on the two complemented parts with respect to the decomposition
\[ L_p([0,2\pi]) = \overline{\linspan} \{ e^{imz}: m < 0 \} \oplus \overline{\linspan} \{ e^{imz}: m \ge 0 \}. \]
Observe that $A$ has a bounded $H^{\infty}$-calculus if and only if both parts have a bounded $H^{\infty}$-calculus. It then follows from Proposition~\ref{prop:sm_prop} and Proposition~\ref{prop:sm_generator_semigroup} that $A$ is a sectorial operator with $\omega(A) = 0$ which by Theorem~\ref{thm:sectorial_nohinfty} (applied to the second part) does not have a bounded $H^{\infty}$-calculus. We now show that $A$ has bounded imaginary powers with $\omega_{\text{BIP}}(A) = 0$. For this we observe that
\begin{align*}
A^{it} \biggl(\sum_{m \in \mathbb{Z}} a_m e^{imz} \biggr) & = \sum_{m \in \mathbb{Z}} 2^{mit} a_m e^{imz} = \sum_{m \in \mathbb{Z}} a_m \exp(imt \log 2) e^{imz} \\
& = \sum_{m \in \mathbb{Z}} a_m \exp(im(t \log 2 + z)) = S(t \log 2) \biggl( \sum_{m \in \mathbb{Z}} a_m e^{imz} \biggr),
\end{align*}
where $(S(t))_{t \in \mathbb{R}}$ is the periodic shift group on $L_p([0,2\pi])$. Since this group consists of isometries, the operators $A^{it}$ are uniformly bounded in $t \in \mathbb{R}$, and therefore $\omega_{\text{BIP}}(A) = 0$.
\end{example}
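The identity $A^{it} = S(t \log 2)$ established above is purely algebraic and can be verified numerically on trigonometric polynomials. The following short sketch (an illustration added here; the coefficients and parameters are arbitrary) compares the two sides on a random trigonometric polynomial.
\begin{verbatim}
# Check numerically that applying the multipliers 2^{imt} to the Fourier
# coefficients agrees with the periodic shift by t*log(2).
import numpy as np

rng = np.random.default_rng(0)
M = 10                                    # degree of the trigonometric polynomial
m = np.arange(-M, M + 1)
a = rng.normal(size=m.size) + 1j * rng.normal(size=m.size)   # coefficients a_m
t = 0.7                                   # imaginary power parameter
z = np.linspace(0, 2 * np.pi, 50)         # sample points

f_ait   = sum(a_k * np.exp(1j * k * t * np.log(2)) * np.exp(1j * k * z)
              for k, a_k in zip(m, a))    # A^{it} applied coefficientwise
f_shift = sum(a_k * np.exp(1j * k * (z + t * np.log(2)))
              for k, a_k in zip(m, a))    # the shift group S(t log 2)

assert np.allclose(f_ait, f_shift)        # the two expressions agree pointwise
\end{verbatim}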
We will study examples of the above type more systematically in Section~\ref{sec:monniaux}.
\subsection{Sectorial Operators without BIP}
Similarly to the case of the bounded $H^{\infty}$-calculus one can use Schauder multipliers to construct sectorial operators which do not have bounded imaginary powers. We start with a weighted version of Example~\ref{exp:bip_nohinfty} which gives an example of an $\mathcal{R}$-sectorial operator without bounded imaginary powers, a discrete version of the counterexample~\cite[Example~10.17]{KunWei04}. Before doing so, we need to state some facts on harmonic analysis and $A_p$-weights.
It is a natural question to ask for which weights $w$ the trigonometric system is a Schauder basis for the space $L_p([0,2\pi],w)$. Indeed, a complete characterization of these weights is known. We identify the torus $\mathbb{T}$ with the interval $[0,2\pi)$ on the real line and functions in $L_p([0,2\pi])$ with their periodic extensions or with $L_p$-functions on the torus.
\begin{definition}[$A_p$-Weight]\index{Muckenhoupt weight}\index{$A_p$-weight}
Let $p \in (1, \infty)$. A function $w\colon \mathbb{R} \to [0, \infty]$ with $w(t) \in (0,\infty)$ almost everywhere is called an \emph{$A_p$-weight} if there exists a constant $K \ge 0$ such that for every compact interval $I \subset \mathbb{R}$ with positive length one has
\[ \biggl( \frac{1}{\abs{I}} \int_{I} w(t) \, dt \biggr) \biggl( \frac{1}{\abs{I}} \int_I w(t)^{-1/(p-1)} \, dt \biggr)^{p-1} \le K. \]
The set of all $A_p$-weights is denoted by $\mathcal{A}_p(\mathbb{R})$\index{$\mathcal{A}_p$|see{$A_p$-weight}}. Moreover, we set in the periodic case
\[ \mathcal{A}_p(\mathbb{T}) \coloneqq \{ w \in \mathcal{A}_p(\mathbb{R}): w \text{ is } 2\pi \text{-periodic} \}. \]
\end{definition}
For a detailed treatment of these weights and their applications in harmonic analysis we refer to the monograph~\cite[Chapter~V]{Ste93}. As an example the $2\pi$-periodic extension of the function $t \mapsto \abs{t}^{\alpha}$ for $\alpha \in \mathbb{R}$ lies in $\mathcal{A}_p(\mathbb{T})$ if and only if $\alpha \in (-1,p-1)$~\cite[Example~2.4]{BerGil03}. The characterization below can be found in~\cite[Proposition~2.3]{Nie09} and essentially goes back to methods developed by R.~Hunt, B.~Muckenhoupt and R.~Wheeden in~\cite{HMW73}.
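The following elementary computation (a sketch added here for illustration; it only tests intervals anchored at the singularity of the weight, which are the critical ones) indicates where the range $\alpha \in (-1, p-1)$ in the above example comes from.
\begin{verbatim}
# For w(t) = |t|^alpha the A_p quantity over the intervals [0, L] can be
# computed in closed form; it is independent of L and finite exactly when
# -1 < alpha < p - 1 (otherwise one of the two integrals diverges).
def ap_quantity(alpha, p):
    # (1/L int_0^L t^alpha dt) * (1/L int_0^L t^{-alpha/(p-1)} dt)^{p-1}
    if alpha <= -1 or alpha >= p - 1:
        return float("inf")
    return (1.0 / (alpha + 1)) * (1.0 / (1 - alpha / (p - 1))) ** (p - 1)

print(ap_quantity(0.5, 3))   # finite: |t|^{1/2} lies in A_3
print(ap_quantity(2.5, 3))   # infinite: alpha >= p - 1
\end{verbatim}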
\begin{theorem}\label{thm:trigonmetric_ap_basis}
Let $w\colon \mathbb{R} \to [0,\infty]$ with $w(t) \in (0,\infty)$ almost everywhere be a $2\pi$-periodic weight and $p \in (1, \infty)$. Then the trigonometric system is a Schauder basis for $L_p([0,2\pi], w)$ with respect to the enumeration $(0,-1,1,-2,2, \ldots)$ of $\mathbb{Z}$ if and only if $w \in \mathcal{A}_p(\mathbb{T})$.
\end{theorem}
Now we are ready to give the example.
\begin{example}\label{ex:r_sectorial_without_bip}
Let $p \in (1, \infty)$ and $w \in \mathcal{A}_p(\mathbb{T})$ be an $A_p$-weight. Then the trigonometric system $(e^{imz})_{m \in \mathbb{Z}}$ is a Schauder basis for $L_p([0,2\pi],w)$ by Theorem~\ref{thm:trigonmetric_ap_basis}. Let $A$ again be the Schauder multiplier associated to the sequence $(2^m)_{m \in \mathbb{Z}}$. One sees as in Example~\ref{exp:bip_nohinfty} that $A$ is a sectorial operator. We next show that $A$ is even $\mathcal{R}$-sectorial. Notice that for $\lambda = a2^le^{i\theta} \in \mathbb{C} \setminus [0, \infty)$ with $\abs{a} \in [1,2]$ one has for $x = \sum_{m \in \mathbb{Z}} a_m e^{imz}$
\begin{align*}
\lambda R(\lambda,A) x & = \sum_{m \in \mathbb{Z}} \frac{\lambda}{\lambda - 2^m} a_m e^{imz} = \sum_{m \in \mathbb{Z}} \frac{ae^{i\theta}}{ae^{i\theta} - 2^{m-l}} a_m e^{imz} \\
& = \sum_{m \in \mathbb{Z}} \frac{ae^{i\theta}}{ae^{i\theta} - 2^m} a_{m+l} e^{i(m+l)z} = ae^{i \theta} R(ae^{i\theta},A) \biggl( \sum_{m \in \mathbb{Z}} a_{m+l} e^{imz} \biggr) e^{ilz} \\
& = e^{ilz} ae^{i\theta} R(a e^{i\theta},A) (x \cdot e^{-ilz})
\end{align*}
Consequently for $\lambda_k = a2^{l_k} e^{i \theta}$ with $k \in \{1,\ldots,n\}$ and $x_1, \ldots, x_n \in L_p([0,2\pi],w)$ one has
\begin{align*}
\MoveEqLeft \biggnorm{\sum_{k=1}^n r_k \lambda_k R(\lambda_k,A)x_k} = \biggnorm{\sum_{k=1}^n r_k e^{il_k z} ae^{i\theta} R(ae^{i\theta},A)(e^{-il_k z} x_k)} \\
& \le 2 \abs{a} \norm{R(ae^{i\theta},A)} \biggnorm{\sum_{k=1}^n r_k e^{-il_k z} x_k} \le 8 \norm{R(ae^{i\theta},A)} \biggnorm{\sum_{k=1}^n r_k x_k}
\end{align*}
by Kahane's contraction principle. Now it is easy to check that for every $\theta_0 > 0$ the sequences $(\frac{ae^{i\theta}}{ae^{i\theta} - 2^{\pm m}})_{m \in \mathbb{N}}$ satisfy the assumptions of Proposition~\ref{prop:sm_prop} uniformly in $\theta \in [\theta_0, 2\pi - \theta_0]$ and in $\abs{a} \in [1,2]$. By \cite[Theorem~4.2 2)]{Wei01} and the boundedness of the Hilbert transform on $L_p([0,2\pi], w)$ this shows that $A$ is $\mathcal{R}$-sectorial with $\omega_R(A) = 0$.
By the same calculation as in Example~\ref{exp:bip_nohinfty} the operator $A^{it}$ for $t \in \mathbb{R}$ is given by $S(t \log 2)$ on the dense set of trigonometric polynomials, where $(S(t))_{t \in \mathbb{R}}$ is the periodic shift group. Notice however that, for example for $w(t) = \abs{t}^{\alpha}$ with a suitably chosen $\alpha \in \mathbb{R}$ such that $w \in \mathcal{A}_p(\mathbb{T})$, this group does not leave $L_p([0,2\pi],w)$ invariant. Hence, $A$ does not have bounded imaginary powers.
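To make the last claim explicit, the following elementary computation (added here for the reader's convenience) exhibits a concrete function that is moved out of the space by the shift. Take $\alpha \in (0, p-1)$, so that $w(t) = \abs{t}^{\alpha} \in \mathcal{A}_p(\mathbb{T})$, fix $\beta$ with $\frac{1}{p} \le \beta < \frac{1+\alpha}{p}$ and let $f(t) = t^{-\beta} \Ind_{(0,1)}(t)$. Then
\[ \int_0^{2\pi} \abs{f(t)}^p w(t) \, dt = \int_0^1 t^{\alpha - \beta p} \, dt < \infty \]
because $\alpha - \beta p > -1$, so $f \in L_p([0,2\pi],w)$. A nontrivial shift moves the singularity of $f$ from $0$ to a point $t_0 \in (0,2\pi)$ at which the ($2\pi$-periodically extended) weight is bounded below by some $c > 0$, and near $t_0$ one has
\[ \int \abs{t - t_0}^{-\beta p}\, w(t) \, dt \ge c \int \abs{t - t_0}^{-\beta p} \, dt = \infty \]
since $\beta p \ge 1$. Hence the shifted function no longer belongs to $L_p([0,2\pi],w)$.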
\end{example}
\subsection{Sectorial Operators which are not \texorpdfstring{$\mathcal{R}$}{R}-Sectorial}
We now present a self-contained example of a sectorial operator on $L_p$ which is not $\mathcal{R}$-sectorial based on~\cite{Fac13}. In order to do that we need to study some geometric properties of $L_p$-spaces.
A key role in what follows is played by $L_p$-functions which stay away from zero in a sufficiently large set. More precisely, for $p \in [1,\infty)$ and $\epsilon > 0$ we consider
\[ M^p_{\epsilon} \coloneqq \left\{ f \in L_p([0,1]): \lambda \left( \left\{ x \in [0,1] : \abs{f(x)} \ge \epsilon \norm{f}_p \right\} \right) \ge \epsilon \right\}. \]
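For instance, $f = \Ind_{[0,1]}$ lies in $M^p_{\epsilon}$ for every $\epsilon \le 1$, whereas the normalized peaks $f_n = n^{1/p}\, \Ind_{[0,1/n]}$ satisfy $\norm{f_n}_p = 1$ but concentrate on sets of measure $\frac{1}{n}$, so $f_n \notin M^p_{\epsilon}$ as soon as $\frac{1}{n} < \epsilon$ (this small example is added here for orientation).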
Functions in these sets have a very important summability property which is comparable to the $L_2$-case. For the proofs of the next two lemmata we follow closely the main ideas in~\cite[\textsection 21]{Sin70}.
\begin{lemma}\label{lem:unconditional}
For $p \in [2,\infty)$ and $\epsilon > 0$ let $(f_m)_{m \in \mathbb{N}} \subset L_p([0,1])$ be a sequence in $M_{\epsilon}^p$ such that $\sum_{m=1}^{\infty} f_m$ converges unconditionally in $L_p([0,1])$. Then one has $\sum_{m=1}^{\infty} \norm{f_m}_p^2 < \infty$.
\end{lemma}
\begin{proof}
Since $p \in [2,\infty)$, it follows from Hölder's inequality that for all $f \in L_p([0,1])$ one has $\norm{f}_2 \le \norm{f}_p$. This shows that the series $\sum_{m=1}^{\infty} f_m$ converges unconditionally in $L_2([0,1])$ as well. By the unconditionality of the series there exists a $K \ge 0$ such that $\norm{\sum_{m=1}^{\infty} \epsilon_m f_m}_2 \le K$ for all $(\epsilon_m)_{m \in \mathbb{N}} \in \{-1,1\}^{\mathbb{N}}$. Now, for all $N \in \mathbb{N}$ one has
\[ \sum_{m=1}^N \norm{f_m}_2^2 = \int_0^1 \biggnorm{\sum_{m=1}^N r_m(t) f_m}_2^2 \, dt \le K^2. \]
Hence, $\sum_{m=1}^{\infty} \norm{f_m}_2^2 < \infty$. Notice that the assumption $f_m \in M_{\epsilon}^p$ implies that for all $m \in \mathbb{N}$
\[ \norm{f_m}_2^2 \ge \int_{\abs{f_m} \ge \epsilon \norm{f_m}_p} \abs{f_m(x)}^2 \, dx \ge \epsilon^3 \norm{f_m}_p^2. \]
Together with the summability shown above this yields $\sum_{m=1}^{\infty} \norm{f_m}_p^2 < \infty$.
\end{proof}
The next lemma shows that unconditional basic sequences formed out of elements in $M_{\epsilon}^p$ behave like Hilbert space bases.
\begin{lemma}\label{lem:unconditional_series}
For $p \in [2,\infty)$ let $(e_m)_{m \in \mathbb{N}}$ be an unconditional normalized basic sequence in $L_p([0,1])$ for which there exists an $\epsilon > 0$ such that $e_m \in M_{\epsilon}^p$ for all $m \in \mathbb{N}$. Then
\[ \sum_{m=1}^{\infty} a_m e_m \text{ converges} \qquad \Leftrightarrow \qquad (a_m)_{m \in \mathbb{N}} \in \ell_2. \]
\end{lemma}
\begin{proof}
Assume that the expansion $\sum_{m=1}^{\infty} a_m e_m$ converges. Since $(e_m)_{m \in \mathbb{N}}$ is an unconditional basic sequence, the series $\sum_{m=1}^{\infty} a_m e_m$ converges unconditionally in $L_p([0,1])$. By Lemma~\ref{lem:unconditional}, one has
\[ \sum_{m=1}^{\infty} \abs{a_m}^2 = \sum_{m=1}^{\infty} \norm{a_m e_m}_p^2 < \infty. \]
Conversely, we have to show that the expansion converges for all $(a_m)_{m \in \mathbb{N}} \in \ell_2$. One has $\norm{\sum_{m=1}^{N} a_m e_m} \le K \norm{\sum_{m=1}^{N} \epsilon_m a_m e_m}$ for all $(\epsilon_m)_{m \in \mathbb{N}} \in \{-1,1\}^{N}$ and all $N \in \mathbb{N}$, where $K \ge 0$ denotes the unconditional basis constant of $(e_m)_{m \in \mathbb{N}}$. Now, since for $p \ge 2$ the space $L_p([0,1])$ has type 2, we have for all $N, M \in \mathbb{N}$
\[ \biggnorm{\sum_{m=M}^N a_m e_m}_p \le K \int_0^1 \biggnorm{\sum_{m=M}^N r_m(t) a_m e_m}_p \, dt \le K C \biggl( \sum_{m=M}^N \abs{a_m}^2 \biggr)^{1/2} \]
for some constant $C > 0$. From this it is immediate that the sequence of partial sums $(\sum_{m=1}^N a_m e_m)_{N \in \mathbb{N}}$ is Cauchy in $L_p([0,1])$.
\end{proof}
For the following counterexample on $L_p$-spaces our starting point is a particular basis given by the Haar system.
\begin{definition}
The \emph{Haar system}\index{Haar!system} is the sequence $(h_n)_{n \in \mathbb{N}}$ of functions defined by $h_1 = 1$ and for $n = 2^k + s$ (where $k = 0,1,2, \ldots$ and $s = 1,2, \ldots, 2^k$) by
\begin{equation*}
h_n(t) = \Ind_{[\frac{2s - 2}{2^{k+1}}, \frac{2s - 1}{2^{k+1}})}(t) - \Ind_{[\frac{2s - 1}{2^{k+1}}, \frac{2s}{2^{k+1}})}(t) = \begin{cases}
1 & \text{if } t \in [\frac{2s - 2}{2^{k+1}}, \frac{2s - 1}{2^{k+1}}) \\
-1 & \text{if } t \in [\frac{2s - 1}{2^{k+1}}, \frac{2s}{2^{k+1}}) \\
0 & \text{otherwise}
\end{cases}.
\end{equation*}
\end{definition}
The Haar basis is an unconditional Schauder basis for $L_p([0,1])$ for $p \in (1, \infty)$ (see \cite[Proposition~6.1.3 \& Theorem~6.1.6]{AlbKal06}).
\begin{remark}
Note that the Haar system is not normalized in $L_p([0,1])$ for $p \in [1, \infty)$. Of course, we can always work with $(h_m / \norm{h_m}_{p})_{m \in \mathbb{N}}$ instead which is a normalized basis. It is however important to note that the normalization constant $\norm{h_m}_p = 2^{-k/p}$ depends on $p$ and we can therefore not simultaneously normalize $(h_m)_{m \in \mathbb{N}}$ on the $L_p$-scale. This crucial point was overlooked in~\cite{Fac13}.
\end{remark}
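The dependence of the normalization on $p$ is easy to check by hand or numerically. The following sketch (our illustration; the chosen indices and the grid size are arbitrary) builds $h_n$ for $n = 2^k + s$ and confirms $\norm{h_n}_p = 2^{-k/p}$.
\begin{verbatim}
# Build the Haar function h_n and compare its L_p norm with 2^{-k/p}.
import numpy as np

def haar(n, t):
    # h_n for n = 2^k + s with k >= 0 and 1 <= s <= 2^k; h_1 is constant 1.
    if n == 1:
        return np.ones_like(t)
    k = int(np.floor(np.log2(n - 1)))
    s = n - 2 ** k
    left  = (2 * s - 2) / 2 ** (k + 1)
    mid   = (2 * s - 1) / 2 ** (k + 1)
    right = (2 * s) / 2 ** (k + 1)
    return np.where((t >= left) & (t < mid), 1.0,
                    np.where((t >= mid) & (t < right), -1.0, 0.0))

t = np.linspace(0, 1, 200000, endpoint=False)   # integration grid on [0, 1)
for n, p in [(2, 3.0), (5, 3.0), (11, 4.0)]:
    k = int(np.floor(np.log2(n - 1)))
    lp_norm = np.mean(np.abs(haar(n, t)) ** p) ** (1 / p)
    print(n, lp_norm, 2 ** (-k / p))   # the two values agree up to grid error
\end{verbatim}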
The following proposition is used to transfer the $\mathcal{R}$-boundedness of a sectorial operator to the boundedness of a single operator. This approach is closely motivated by the work~\cite{AreBu03}.
\begin{proposition}\label{prop:resolvent_r_associated_operator}
Let $A$ be an $\mathcal{R}$-sectorial operator. Then there exists a constant $C \ge 0$ such that for all $(q_n)_{n \in \mathbb{N}} \subset \mathbb{R}_{-}$ the associated operator
\[ \mathcal{R}\colon \sum_{n=1}^{N} r_n x_n \mapsto \sum_{n=1}^{N} r_n q_n R(q_n,A)x_n \]
defined on the finite Rademacher sums extends to a bounded operator on $\Rad(X)$ with operator norm at most $C$.
\end{proposition}
\begin{proof}
If $A$ is $\mathcal{R}$-sectorial, one has $C \coloneqq \mathcal{R}\{ \lambda R(\lambda, A): \lambda \in \mathbb{R}_{-} \} < \infty$. Hence, for all finite Rademacher sums we have by the definition of $\mathcal{R}$-boundedness
\[ \biggnorm{\sum_{n=1}^N r_n q_n R(q_n,A)x_n} \le C \biggnorm{\sum_{n=1}^N r_n x_n}. \qedhere \]
\end{proof}
One now exploits the freedom in the choice of the sequence $(q_n)_{n \in \mathbb{N}}$. This is prepared by the following elementary lemma, whose usefulness will become clear shortly.
\begin{lemma}\label{lem:maximize_resolvent_sequence}
For $\gamma_m > \gamma_{m-1} > 0$ consider the function $d(t) \coloneqq t [(t+\gamma_{m-1})^{-1} - (t+\gamma_{m})^{-1}]$ on $\mathbb{R}_{+}$. Then $d$ attains a maximum on $\mathbb{R}_{+}$ which is at least $\frac{1}{2} \frac{\gamma_m - \gamma_{m-1}}{\gamma_m + \gamma_{m-1}}$.
\end{lemma}
\begin{proof}
By the mean value theorem, for every $t > 0$ there exists $\xi \in (\gamma_{m-1}, \gamma_m)$ such that
\[
\frac{1}{t + \gamma_{m-1}} - \frac{1}{t + \gamma_{m}} = (\gamma_m - \gamma_{m-1}) \frac{1}{(t + \xi)^2} \ge (\gamma_m - \gamma_{m-1}) \frac{1}{(t + \gamma_m)^2}.
\]
One now easily verifies that the function $t \mapsto (\gamma_m - \gamma_{m-1}) \frac{t}{(t+\gamma_m)^2}$ has a unique maximum for $t = \gamma_m$. In particular one has
\[ \max_{t > 0} d(t) \ge d(\gamma_m) = \frac{1}{2} \frac{\gamma_m - \gamma_{m-1}}{\gamma_m + \gamma_{m-1}}. \qedhere \]
\end{proof}
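As a quick sanity check of the lemma (a numerical illustration added here; the concrete values $\gamma_{m-1} = 4$, $\gamma_m = 8$ are arbitrary), note that $d(\gamma_m) = \frac{1}{2}\frac{\gamma_m - \gamma_{m-1}}{\gamma_m + \gamma_{m-1}} = \frac{1}{6}$ for consecutive powers of two, while the true maximum of $d$ is slightly larger.
\begin{verbatim}
# Evaluate d and compare d(gamma_m) with the lower bound from the lemma and
# with a grid approximation of the actual maximum of d on (0, infinity).
import numpy as np

g_prev, g = 4.0, 8.0
d = lambda t: t * (1.0 / (t + g_prev) - 1.0 / (t + g))
print(d(g), 0.5 * (g - g_prev) / (g + g_prev))   # both equal 1/6
t = np.linspace(1e-3, 1e4, 2_000_000)
print(d(t).max())   # about 0.1716 > 1/6, consistent with the lemma
\end{verbatim}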
We can now give examples of sectorial operators on $L_p$ which are not $\mathcal{R}$-sectorial.
\begin{theorem}\label{thm:counterexample_mrp_lp}
For $p \in (2, \infty)$ there exists a sectorial operator $A$ on $L_p([0,1])$ with $\omega(A) = 0$ which is not $\mathcal{R}$-sectorial.
\end{theorem}
\begin{proof}
For the rest of the proof let $(h_m)_{m \in \mathbb{N}}$ denote the normalized Haar system on $L_p([0,1])$. Choose a subsequence $(m_k)_{k \in \mathbb{N}} \subset 2\mathbb{N}$ such that the functions $h_{m_k}$ have pairwise disjoint supports. Then $(h_{m_k})_{k \in \mathbb{N}}$ is an unconditional basic sequence equivalent to the standard basis of $\ell_p$. Indeed, for any finite sequence $a_1, \ldots, a_N$ we have by the disjointness of the supports
\[ \norm{\sum_{k=1}^N a_k h_{m_k}}_p^p = \sum_{k=1}^N \norm{a_k h_{m_k}}_p^p = \sum_{k=1}^N \abs{a_k}^p. \]
Choose a permutation $\pi$ of $\mathbb{N}$ which fixes the odd numbers and permutes the even numbers in such a way that $\pi(4k) = m_k$ for all $k \in \mathbb{N}$. We now define a new system $(f_m)_{m \in \mathbb{N}}$ by
\begin{equation*}
f_m \coloneqq \begin{cases} h_{\pi(m)} = h_{m} & m \text{ odd}, \\ h_{\pi(m)} + h_{\pi(m-1)} = h_{\pi(m)} + h_{m-1} & m \text{ even}. \end{cases}
\end{equation*}
Notice that, by the unconditionality of the Haar basis, $(h_{\pi(m)})_{m \in \mathbb{N}}$ is a Schauder basis of $L_p([0,1])$ as well. As a block perturbation of the normalized basis $(h_{\pi(m)})_{m \in \mathbb{N}}$ the sequence $(f_m)_{m \in \mathbb{N}}$ is a basis for $L_p([0,1])$ as well~\cite[Ch.~I, §4, Proposition~4.4]{Sin70}. Further, let $A$ be the closed linear operator on $L_p([0,1])$ given by
\begin{align*}
D(A) &= \biggl\{ x = \sum_{m=1}^{\infty} a_m f_m: \sum_{m=1}^{\infty} 2^m a_m f_m \text{ converges} \biggr\}, \\
A \biggl(\sum_{m=1}^{\infty} a_m f_m \biggr) &= \sum_{m=1}^{\infty} 2^m a_m f_m.
\end{align*}
Proposition~\ref{prop:sm_generator_semigroup} shows that $A$ is sectorial with $\omega(A) = 0$.
The basic sequences $(h_{\pi(4m)})_{m \in \mathbb{N}}$ and $(h_{4m + 1})_{m \in \mathbb{N}}$ are not equivalent: assume that the two basic sequences are equivalent. Then on the one hand for $(h_{4m + 1})_{m \in \mathbb{N}}$ the block basic sequence
\[ b_k = \sum_{\substack{m: 4m + 1 \\ \in [2^k+1, 2^{k+1}]}} h_{4m + 1} \]
satisfies for $k \ge 2$ by the disjointness of the summands
\[ \norm{b_k}_p^p = \sum_{\substack{m: 4m + 1 \\ \in [2^k+1,2^{k+1}]}} \norm{h_{4m + 1}}_p^p = \sum_{\substack{m: 4m + 1 \\ \in [2^k+1,2^{k+1}]}} 1 = \frac{1}{4} \cdot 2^k = 2^{k-2}. \]
Moreover, on the non-vanishing part $b_k$ satisfies $\abs{b_k(t)} = 2^{k/p}$ for $k \ge 2$. Hence, for the normalized block basic sequence $(\tilde{b}_k)_{k \ge 2} = (\frac{b_k}{\norm{b_k}_p})_{k \ge 2}$ one has $\normalabs{\tilde{b}_k(t)} = 2^{2/p}$. Therefore we have
\[ \lambda \left( \left\{ t \in [0,1]: \normalabs{\tilde{b}_k(t)} \ge \epsilon \normalnorm{\tilde{b}_k}_p \right\} \right) = \lambda \left( \left\{ t \in [0,1]: \normalabs{\tilde{b}_k(t)} \ge \epsilon \right\} \right) = \frac{1}{4} \]
for $\epsilon \le 2^{2/p}$. In particular for $\epsilon \le \frac{1}{4}$ we have $\tilde{b}_k \in M_{\epsilon}^p$ for all $k \ge 2$. By Lemma~\ref{lem:unconditional_series} this implies that $(\tilde{b}_k)_{k \ge 2}$ is equivalent to the standard basis in $\ell_2$.
Since we have assumed that the basic sequence $(h_{\pi(4k)})_{k \in \mathbb{N}}$ is equivalent to $(h_{4k+1})_{k \in \mathbb{N}}$, the block basic sequence $(c_k)_{k \ge 2}$ defined by
\[ c_k = \norm{b_k}_p^{-1} \sum_{\substack{m: 4m + 1 \\ \in [2^k+1,2^{k+1}]}} h_{\pi(4m)} \]
is semi-normalized. Recall that $(h_{\pi(4m)})_{m \in \mathbb{N}}$ is equivalent to the standard basis of $\ell_p$. Since all semi-normalized block basic sequences of $\ell_p$ are equivalent to the standard basis of $\ell_p$~\cite[Lemma~2.1.1]{AlbKal06}, the sequence $(c_k)_{k \ge 2}$ is equivalent to the standard basis of $\ell_p$. Altogether we have shown that the standard bases of $\ell_p$ and $\ell_2$ are equivalent, which is false since $p \neq 2$.
In particular, the above arguments (which apply verbatim with $h_{4m+1}$ replaced by $h_{4m-1}$) show that there is a sequence $(a_m)_{m \in \mathbb{N}}$, supported on the even integers, for which $\sum_{m=1}^{\infty} a_m h_{\pi(2m)}$ converges but $\sum_{m=1}^{\infty} a_m h_{2m-1}$ does not; one may take normalized block coefficients lying in $\ell_p \setminus \ell_2$. Now assume that $A$ is $\mathcal{R}$-sectorial. Let $(q_m)_{m \in \mathbb{N}} \subset \mathbb{R}_{-}$ be a sequence to be chosen later. Then it follows from Proposition~\ref{prop:resolvent_r_associated_operator} that the operator $\mathcal{R}\colon \Rad(L_p([0,1])) \to \Rad(L_p([0,1]))$ associated to the sequence $(q_m)_{m \in \mathbb{N}}$ is bounded. We now show that
\begin{equation}
x = \sum_{m=1}^{\infty} a_m h_{\pi(2m)} r_m \label{eq:ce_argument}
\end{equation}
converges in $\Rad(L_p([0,1]))$. Indeed, for a fixed $\omega \in [0,1]$ the infinite series $\sum_{m=1}^{\infty} a_m r_m(\omega) h_{\pi(2m)}$ converges by the unconditionality of the basic sequence $(h_{\pi(2m)})_{m \in \mathbb{N}}$ as $r_m(\omega) \in \{-1, 1\}$. Hence, the above series defines a measurable function as the pointwise limit of measurable functions. Moreover, if $K$ denotes the unconditional constant of $(h_{\pi(2m)})_{m \in \mathbb{N}}$, one has for each $\omega \in [0,1]$
\begin{equation}
\biggnorm{\sum_{m=1}^{\infty} r_m(\omega) a_m h_{\pi(2m)}} \le K \biggnorm{\sum_{m=1}^{\infty} a_m h_{\pi(2m)}}. \label{eq:ce_l1_convergence}
\end{equation}
This shows that the series \eqref{eq:ce_argument} is in $L_1([0,1]; L_p([0,1]))$. Using an estimate analogous to \eqref{eq:ce_l1_convergence} one sees that the sequence of partial sums $\sum_{m=1}^{N} a_m h_{\pi(2m)} r_m$ converges to $\sum_{m=1}^{\infty} a_m h_{\pi(2m)} r_m$ in $\Rad(L_p([0,1]))$. We now apply $\mathcal{R}$ to $x$. Because of $h_{\pi(2m)} = f_{2m} - f_{2m-1}$ we obtain
\begin{align*}
g & \coloneqq \mathcal{R}(x) = \mathcal{R} \biggl( \sum_{m=1}^{\infty} a_m (f_{2m} - f_{2m-1}) r_m \biggr) \\
& = \sum_{m=1}^{\infty} r_m \biggl( \frac{a_m q_m}{q_m - \gamma_{2m}} f_{2m} - \frac{a_m q_m}{q_m - \gamma_{2m-1}} f_{2m-1} \biggr) \\
& = \sum_{m=1}^{\infty} r_m \biggl( \frac{a_m q_m}{q_m - \gamma_{2m}} (h_{\pi(2m)} + h_{2m-1}) - \frac{a_m q_m}{q_m - \gamma_{2m-1}} h_{2m-1} \biggr) \\
& = \sum_{m=1}^{\infty} r_m \biggl( \frac{a_m q_m}{q_m - \gamma_{2m}} h_{\pi(2m)} + a_m q_m \biggl( \frac{1}{q_m - \gamma_{2m}} - \frac{1}{q_m - \gamma_{2m-1}} \biggr) h_{2m-1} \biggr),
\end{align*}
where $\gamma_j \coloneqq 2^j$ denotes the eigenvalue of $A$ corresponding to the basis vector $f_j$.
We now want to choose $(q_m)_{m \in \mathbb{N}}$ in such a way that the last term in the bracket is big. Notice that, since $\gamma_m = 2^m$, for $t = \gamma_{2m}$ one has, as in the proof of Lemma~\ref{lem:maximize_resolvent_sequence}, $t [ (t + \gamma_{2m-1})^{-1} - (t + \gamma_{2m})^{-1} ] = \frac{1}{2}\frac{\gamma_{2m} - \gamma_{2m-1}}{\gamma_{2m} + \gamma_{2m-1}} = \frac{1}{6}$. Hence, for the choice $q_m = -\gamma_{2m}$ we obtain
\[
\mathcal{R}(x) = \sum_{m=1}^{\infty} r_m \Bigl( \frac{1}{2} a_m h_{\pi(2m)} - \frac{1}{6} a_m h_{2m-1} \Bigr).
\]
Since the partial sums converge to $g$ in $L_1([0,1]; L_p([0,1]))$, after passing to a subsequence $(N_k)_{k \in \mathbb{N}}$ there exists a set $N \subset [0,1]$ of measure zero such that
\begin{align}
\sum_{m=1}^{N_k} \frac{1}{2} a_m r_m(\omega) h_{\pi(2m)} - \frac{1}{6} a_m r_m(\omega) h_{2m-1} \xrightarrow[k \to \infty]{} g(\omega) \quad \text{for all } \omega \in N^c. \label{eq:ce_after_riesz}
\end{align}
Applying the coordinate functionals for $(h_m)_{m \in \mathbb{N}}$ to \eqref{eq:ce_after_riesz} shows that for $\omega \in N^c$ the unique coefficients $(h^*_m(g(\omega)))$ of the expansion of $g(\omega)$ with respect to $(h_m)_{m \in \mathbb{N}}$ satisfy $h^*_{2m-1}(g(\omega)) = -\frac{a_m}{6} r_m(\omega)$. Since $(h_m)_{m \in \mathbb{N}}$ is unconditional
\begin{align*}
\sum_{m=1}^{\infty} a_m r_m(\omega) h_{2m-1} \quad \text{and therefore} \quad \sum_{m=1}^{\infty} a_m h_{2m-1} \quad \text{converge.}
\end{align*}
This contradicts the choice of $(a_m)_{m \in \mathbb{N}}$ and therefore $A$ cannot be $\mathcal{R}$-sectorial.
\end{proof}
Note that by taking the adjoint operators $A^*$ of the above counterexamples one obtains counterexamples on the range $p \in (1,2)$. Further, the above argument works for every Banach space that admits an unconditional normalized non-symmetric basis~\cite{Fac14}. This allows one to prove the following result by N.J.~Kalton \& G.~Lancien~\cite{KalLan00}.
\begin{theorem}\label{thm:Kalton-Lancien}
Let $X$ be a Banach space that admits an unconditional basis. Then every negative generator of an analytic semigroup is $\mathcal{R}$-sectorial if and only if $X$ is isomorphic to a Hilbert space.
\end{theorem}
Note that on $L_{\infty}([0,1])$ by a result of H.P.~Lotz~\cite[Theorem 3]{Lot85} every negative generator of a $C_0$-semigroup is already bounded and therefore $\mathcal{R}$-sectorial. However, the following questions are open~\cite[p.~68]{Kal01}.
\begin{problem}
Does Theorem~\ref{thm:Kalton-Lancien} hold in the bigger class of all Banach spaces admitting a Schauder basis / of all separable Banach spaces?
\end{problem}
For partial results in this direction see~\cite{KalLan02}.
\section{Counterexamples II: Using Monniaux's Theorem}\label{sec:monniaux} %
In this section we present an alternative method to construct counterexamples. This method is based on a theorem of S.~Monniaux. We consider the following straightforward analogue of sectorial operators on strips. For details see~\cite[Ch.~4]{Haa06}.
\begin{definition}
For $\omega > 0$ let $H_{\omega} \coloneqq \{ z \in \mathbb{C}: \abs{\Im z} < \omega \}$ be the \emph{horizontal strip} of height $2\omega$. A closed densely defined operator $B$ is called a \emph{strip type operator} of height $\omega > 0$ if $\sigma(B) \subset \overline{H_{\omega}}$ and
\begin{equation*}
\sup \{ \norm{R(\lambda, B)}: \abs{\Im \lambda} \ge \omega + \epsilon \} < \infty \qquad \text{for all } \epsilon > 0. \label{eq:strip_condition}\tag{$H_{\omega}$}
\end{equation*}
Further, we define the \emph{spectral height} of $B$ as $\omega_{st}(B) \coloneqq \inf \{ \omega > 0: \eqref{eq:strip_condition} \text{ holds} \}$. \end{definition}
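For instance (a standard observation we include for orientation), every self-adjoint operator $B$ on a Hilbert space is a strip type operator with $\omega_{st}(B) = 0$, since in this case $\norm{R(\lambda,B)} = \mathrm{dist}(\lambda, \sigma(B))^{-1} \le \abs{\Im \lambda}^{-1}$ for all $\lambda \notin \mathbb{R}$.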
Recall that if $A$ is a sectorial operator with bounded imaginary powers, then $t \mapsto A^{it}$ is a strongly continuous group. Conversely, one may ask which $C_0$-groups can be written in this form. The following theorem of S.~Monniaux~\cite{Mon99} gives a very satisfying answer to this question (for an alternative proof see \cite[Section~4]{Haa07}).
\begin{theorem}\label{thm:monniaux}\label{theorem!Monniaux}
Let $X$ be a UMD-space. Then there is a one-to-one correspondence
\[
\bigg\{ \parbox[c][2em][c]{0.42\textwidth}{\begin{center} $A$ sectorial operator with BIP and $\omega_{\mathrm{BIP}}(A) < \pi$ \end{center}} \bigg\}
\xlongleftrightarrow[e^{B}]{\log A}
\bigg\{ \parbox[c][2em][c]{0.40\textwidth}{\begin{center} $B$ strip type operator with $iB \sim C_0$-group of type $< \pi$ \end{center}} \bigg\}.
\]
\end{theorem}
\begin{proof}
For the surjectivity let $B$ be a strip type operator such that $iB$ generates a $C_0$-group $(U(t))_{t \in \mathbb{R}}$ of type $< \pi$. Then by Monniaux's theorem~\cite[Theorem~4.3]{Mon99} there exists a sectorial operator $A$ with bounded imaginary powers such that $A^{it} = U(t)$ for all $t \in \mathbb{R}$. Moreover, $(U(t))_{t \in \mathbb{R}}$ is generated by $i \log A$. It then follows from the uniqueness of the generator that $B = \log A$.
For the injectivity assume that $\log A_1 = \log A_2$ for two sectorial operators $A_1, A_2$ from the left-hand side. Then by \cite[Corollary 4.2.5]{Haa06} one has $A_1 = e^{\log A_1} = e^{\log A_2} = A_2$.
\end{proof}
\begin{remark}\label{rem:bip_angle_bigger}
In \cite{Haa03} M.~Haase shows that for every strip type operator $B$ with $\omega_{st}(B) < \pi$ such that $iB$ generates a $C_0$-group $(U(t))_{t \in \mathbb{R}}$ of arbitrary type there exists a sectorial operator $A$ with $A^{it} = U(t)$ for all $t \in \mathbb{R}$. If one chooses $B$ as above such that $(U(t))_{t \in \mathbb{R}}$ has group type bigger than $\pi$ (which is possible on some UMD-spaces) one sees that there exists a sectorial operator $A$ with $\omega_{\text{BIP}}(A) > \pi$. By taking suitable fractional powers of $A$ one then obtains a sectorial operator $\tilde{A}$ with $\omega(\tilde{A}) < \omega_{\text{BIP}}(\tilde{A}) < \pi$.
\end{remark}
Because of the above results, for a moment, we restrict our attention to a UMD-space $X$. A particular class of sectorial operators with bounded imaginary powers are those with a bounded $H^{\infty}$-calculus. Recall that a sectorial operator $A$ on $X$ with a bounded $H^{\infty}$-calculus satisfies $\omega_R(A) = \omega_{\text{BIP}}(A) = \omega_{H^{\infty}}(A)$ by Theorem~\ref{thm:hinfty_implies_rsectorial}. In particular one has $\omega_{\text{BIP}}(A) < \pi$. For sectorial operators with a bounded $H^{\infty}$-calculus one can formulate an analogous correspondence which essentially follows from an unpublished result of N.J.~Kalton \& L.~Weis.
In the following for a $C_0$-group $(U(t))_{t \in \mathbb{R}}$ on some Banach space we call the infimum of those $\omega > 0$ for which $\R{e^{-\omega \abs{t}} U(t): t \in \mathbb{R}} < \infty$ the $\mathcal{R}$-group type\index{group!$\mathcal{R}$-type} of $(U(t))_{t \in \mathbb{R}}$.
\begin{theorem}\label{thm:monniaux_for_hinfty}
Let $X$ be a Banach space with Pisier's property $(\alpha)$. Then there is a one-to-one correspondence
\[
\bigg\{ \parbox[c][2em][c]{0.42\textwidth}{\begin{center} $A$ sectorial operator with bounded $H^{\infty}$-calculus \end{center}} \bigg\}
\xlongleftrightarrow[e^{B}]{\log A}
\bigg\{ \parbox[c][2em][c]{0.40\textwidth}{\begin{center} $B$ strip type operator with $iB \sim$ $C_0$-group of $\mathcal{R}$-type $< \pi$ \end{center}} \bigg\}.
\]
\end{theorem}
\begin{proof}
Let $A$ be a sectorial operator with a bounded $H^{\infty}$-calculus. Then it follows from Theorem~\ref{thm:h_infty_generates_R_bounded_sets} and the fact that the norm of $\lambda \mapsto \lambda^{it}$ in $H^{\infty}(\Sigma_{\sigma})$ is bounded by $\exp(\abs{t} \sigma)$ for $t \in \mathbb{R}$ that $\{ e^{-\abs{t} \sigma} A^{it}: t \in \mathbb{R} \}$ is $\mathcal{R}$-bounded for all $\sigma \in (\omega_{H^{\infty}}(A), \pi)$. In particular $(A^{it})_{t \in \mathbb{R}}$ is of $\mathcal{R}$-type $< \pi$.
Conversely, let $B$ be from the right hand side. It then follows from an unpublished result in~\cite{KalWei14} (see \cite[Theorem~6.5]{Haa11} for a proof, here one has to additionally use the equivalence of $\mathcal{R}$- and $\gamma$-boundedness for Banach spaces with finite cotype) that the $\mathcal{R}$-type assumption implies that $B$ has a bounded $H^{\infty}$-calculus on some strip of height smaller than $\pi$. By \cite[Proposition~5.3.3]{Haa06}, the operator $e^B$ is sectorial and has a bounded $H^{\infty}$-calculus.
The one-to-one correspondence then follows as in the proof of Theorem~\ref{thm:monniaux}.
\end{proof}
From the above theorems it follows immediately that on $L_p$ for $p \in (1,\infty) \setminus \{2\}$ there exist sectorial operators with bounded imaginary powers which do not have a bounded $H^{\infty}$-calculus.
\begin{corollary}\label{cor:hinfty_correspondence}
Let $p \in (1,\infty) \setminus \{2\}$. Then there exists a sectorial operator $A$ on $L_p(\mathbb{R})$ with $\omega(A) = \omega_{\mathrm{BIP}}(A) = 0$ which does not have a bounded $H^{\infty}$-calculus.
\end{corollary}
\begin{proof}
Let $(U(t))_{t \in \mathbb{R}}$ be the shift group on $L_p(\mathbb{R})$. It follows from the Khintchine inequality that $\{ U(t): t \in [0,1] \}$ is not $\mathcal{R}$-bounded~\cite[Example~2.12]{KunWei04}. By Theorem~\ref{thm:monniaux} there exists a sectorial operator $A$ with bounded imaginary powers such that $A^{it} = U(t)$ for all $t \in \mathbb{R}$. Then one has $\omega(A) \le \omega_{\text{BIP}}(A) = 0$. However, by construction, $\{ A^{it}: t \in [0,1] \}$ is not $\mathcal{R}$-bounded and therefore Theorem~\ref{thm:monniaux_for_hinfty} implies that $A$ cannot have a bounded $H^{\infty}$-calculus.
\end{proof}
Note that the constructed counterexample is essentially the same as the one in Example~\ref{exp:bip_nohinfty}, which was obtained by different methods; the only difference is that in Example~\ref{exp:bip_nohinfty} we worked with the periodic shift. Of course, we could also have started with the periodic shift in Corollary~\ref{cor:hinfty_correspondence}.
\subsection{Some Results on Exotic Banach Spaces}
In this subsection we briefly investigate sectorial operators on exotic Banach spaces. In the past twenty years Banach spaces have been constructed whose algebras of operators have a structure extremely different from those of the well-known classical Banach spaces. The most prominent examples are probably the hereditarily indecomposable Banach spaces.
\begin{definition}[Hereditarily Indecomposable Banach Space (H.I.)]\index{H.I. space|see{hereditarily indecomposable Banach Space}}\index{hereditarily indecomposable Banach space}\index{Banach space!hereditarily indecomposable}
A Banach space $X$ is called \emph{indecomposable} if it cannot be written as the sum of two closed infinite-dimensional subspaces. Further $X$ is called \emph{hereditarily indecomposable (H.I.)} if every infinite-dimensional closed subspace of $X$ is indecomposable.
\end{definition}
It is a deep result of B.~Maurey and T.~Gowers that such (separable) spaces do actually exist~\cite{GowMau93}. We are now interested in the properties of $C_0$-semigroups on such spaces. We will use the following theorem proved in~\cite[Theorem~2.3]{RaeRic96}.
\begin{theorem}\label{thm:group_hi}
Let $X$ be a H.I. Banach space. Then every $C_0$-group on $X$ has a bounded generator.
\end{theorem}
The above result can be directly used to show the following result on operators with bounded imaginary powers.
\begin{corollary}\label{cor:bip_hi}
Let $A$ be a sectorial operator with bounded imaginary powers on a H.I. Banach space. Then $A$ is bounded.
\end{corollary}
\begin{proof}
Let $A$ be as in the assertion. Note that $(A^{it})_{t \in \mathbb{R}}$ is a $C_0$-group with generator $i \log A$. By~Theorem~\ref{thm:group_hi} $\log A$ is a bounded operator. This implies that $e^{\log A} = A$ is bounded.
\end{proof}
In particular on H.I. Banach spaces the structure of sectorial operators with a bounded $H^{\infty}$-calculus is rather trivial.
\begin{corollary}\label{cor:regularity_hi}
Let $A$ be an invertible sectorial operator on a H.I. Banach space. Then the following assertions are equivalent.
\begin{equiv_enum}
\item\label{cor:regularity_hi_i} $A$ is a bounded operator.
\item\label{cor:regularity_hi_ii} $A$ has bounded imaginary powers.
\item\label{cor:regularity_hi_iii} $A$ has a bounded $H^{\infty}$-calculus.
\end{equiv_enum}
\end{corollary}
\begin{proof}
The implication \ref{cor:regularity_hi_i} $\Rightarrow$ \ref{cor:regularity_hi_iii} can be verified directly and holds for every Banach space, \ref{cor:regularity_hi_iii} $\Rightarrow$ \ref{cor:regularity_hi_ii} also holds on every Banach space as discussed before, and \ref{cor:regularity_hi_ii} $\Rightarrow$ \ref{cor:regularity_hi_i} follows from Corollary~\ref{cor:bip_hi}.
\end{proof}
Note that every infinite-dimensional Banach space contains a basic sequence~\cite[Corollary~1.5.3]{AlbKal06}; since every infinite-dimensional closed subspace of a H.I. Banach space is again H.I., the closed span of such a sequence yields H.I. Banach spaces that admit Schauder bases. Then by Proposition~\ref{prop:sm_generator_semigroup} on these spaces there exist analytic $C_0$-semigroups whose negative generators are unbounded sectorial operators and hence, by Corollary~\ref{cor:bip_hi}, cannot have bounded imaginary powers. In particular the structure of semigroups on these spaces is not trivial. We do not know how $\mathcal{R}$-sectoriality behaves in these spaces.
\section{Counterexamples III: Pisier's Counterexample to the Halmos Problem}\label{sec:pisier}
We now present a counterexample to the last implication left open, namely that there exists a $C_0$-semigroup with generator $-A$ and $\omega_{H^{\infty}}(A) = \frac{\pi}{2}$ which does not have a loose dilation. The key ingredient here is Pisier's counterexample to the Halmos problem~\cite{Pis97} (for a more elementary approach see~\cite{DavPau97}). He constructed a Hilbert space $H$ and an operator $T \in \mathcal{B}(H)$ that is polynomially bounded, i.e.\ for some $K \ge 0$ one has $\norm{p(T)} \le K \sup_{\abs{z} \le 1} \abs{p(z)}$ for all polynomials $p$, but is not similar to a contraction, i.e.\ there does not exist any invertible $S \in \mathcal{B}(H)$ such that $S^{-1}TS$ is a contraction.
\begin{theorem}\label{thm:semigroup_hinfty_no_dilation}
There exists a generator $-A$ of a $C_0$-semigroup $(T(t))_{t \ge 0}$ on some Hilbert space with $\omega_{H^{\infty}}(A) = \frac{\pi}{2}$ such that $(T(t))_{t \ge 0}$ does not have a loose dilation in the class of all Hilbert spaces.
\end{theorem}
\begin{proof}
Let $T$ and $H$ be as above from Pisier's counterexample to the Halmos problem. It is explained in~\cite[Proposition~4.8]{Mer98} that the concrete structure of $T$ allows one to define $A = (I+T)(I-T)^{-1}$ which turns out to be a sectorial operator with $\omega(A) = \frac{\pi}{2}$. Moreover, it is shown that $-A$ generates a bounded $C_0$-semigroup $(T(t))_{t \ge 0}$ on $H$. Further, it follows from the polynomial boundedness of $T$ with a conformal mapping argument that $A$ has a bounded $H^{\infty}$-calculus with $\omega_{H^{\infty}}(A) = \frac{\pi}{2}$~\cite[Remark~4.4]{Mer98}. Now assume that $(T(t))_{t \ge 0}$ has a loose dilation in the class of all Hilbert spaces. Then it follows from Dixmier's unitarization theorem \cite[Theorem~9.3]{Pau02} that $(T(t))_{t \ge 0}$ has a loose dilation to a unitary $C_0$-group $(U(t))_{t \in \mathbb{R}}$ on some Hilbert space $K$, i.e.\ there exist bounded operators $J\colon H \to K$ and $Q\colon K \to H$ such that
\[
T(t) = QU(t)J \qquad \text{for all } t \ge 0.
\]
Now let $\mathcal{A}$ be the unital subalgebra of $L_{\infty}([0, \infty))$ generated by the functions $x \mapsto e^{-itx}$ for $t \ge 0$, where we identify elements in $L_{\infty}([0,\infty))$ with multiplication operators on the Hilbert space $L_2([0,\infty))$. This gives $\mathcal{A}$ the structure of an operator space. We now show that the algebra homomorphism
\begin{align*}
u\colon \mathcal{A} \to \mathcal{B}(H), \qquad e^{-it\cdot} \mapsto T(t)
\end{align*}
is completely bounded with respect to this operator space structure for $\mathcal{A}$. Indeed, observe that by Stone's theorem on unitary groups and the spectral theorem for self-adjoint operators there exists a measure space $\Omega$ and a measurable function $m\colon \Omega \to \mathbb{R}$ such that after unitary equivalence $U(t)$ is the multiplication operator with respect to the function $e^{-itm}$ for all $t \in \mathbb{R}$. Now for $n \in \mathbb{N}$ let $[f_{ij}] \in M_n(\mathcal{A})$ with $f_{ij} = \sum_{k=1}^N a^{(ij)}_k e^{-i t_k \cdot}$. Then one has
\begin{align*}
\MoveEqLeft \norm{u_n([f_{ij}])}_{M_n(\mathcal{B}(H))} = \biggnorm{\big[\sum_{k=1}^N a_k^{(ij)} T(t_k) \big]}_{M_n(\mathcal{B}(H))} \\
& = \biggnorm{\big[ Q \sum_{k=1}^N a_k^{(ij)} U(t_k) J \big]}_{M_n(\mathcal{B}(H))} \le \norm{Q} \norm{J} \biggnorm{\big[ \sum_{k=1}^N a_k^{(ij)} e^{-i t_k m} \big]}_{M_n(\mathcal{B}(L_2(\Omega)))} \\
& \le \norm{J} \norm{Q} \sup_{x \in \mathbb{R}} \biggnorm{\big[ \sum_{k=1}^N a_k^{(ij)} e^{-it_k x} \big]}_{M_n} = \norm{J} \norm{Q} \norm{[f_{ij}]}_{M_n(L_{\infty}[0, \infty))}.
\end{align*}
Here we have used the identification of the $C^*$-algebras $M_n(L_{\infty}(\Omega)) \simeq L_{\infty}(\Omega; M_n)$ for all $n \in \mathbb{N}$. We deduce from \cite[Theorem~9.1]{Pau02} that $(T(t))_{t \ge 0}$ is similar to a semigroup of contractions. However, since by construction $T$ is the cogenerator of $(T(t))_{t \ge 0}$, this holds if and only if $T$ is similar to a contraction~\cite[III,8]{SFBK10}. This contradicts our choice of $T$.
\end{proof}
\printbibliography
\end{document}
https://arxiv.org/abs/2301.05278
Mixed volumes of normal complexes

Abstract: Normal complexes are orthogonal truncations of polyhedral fans. In this paper, we develop the study of mixed volumes for normal complexes. Our main result is a sufficiency condition that ensures when the mixed volumes of normal complexes associated to a given fan satisfy the Alexandrov-Fenchel inequalities. By specializing to Bergman fans of matroids, we give a new proof of the Heron-Rota-Welsh Conjecture as a consequence of the Alexandrov-Fenchel inequalities for normal complexes.

\section{Introduction}
The Alexandrov--Fenchel inequalities lie at the heart of convex geometry, asserting that, for any convex bodies $P_{\heartsuit},P_{\diamondsuit},P_3,\dots,P_d\subseteq\mathbb{R}^d$, their mixed volumes satisfy
\[
\operatorname{MVol}(P_{\heartsuit},P_{\diamondsuit},P_3,\dots,P_d)^2\geq \operatorname{MVol}(P_{\heartsuit},P_{\heartsuit},P_3,\dots,P_d)\operatorname{MVol}(P_{\diamondsuit},P_{\diamondsuit},P_3,\dots,P_d).
\]
This paper is centered around developing an analogue of the Alexandrov--Fenchel inequalities in a decidedly nonconvex setting. The geometric objects of interest to us are normal complexes, which were recently introduced by A. Nathanson and the third author \cite{NR}. Given a pure simplicial fan $\Sigma$, a normal complex associated to $\Sigma$ is, roughly speaking, a polyhedral complex obtained by truncating each cone of $\Sigma$ with half-spaces perpendicular to the rays of $\Sigma$. The choice of where to place the truncating half-spaces results in a family of normal complexes associated to each fan $\Sigma$, and the question that motivates this work is: \emph{for a given fan $\Sigma$, do the mixed volumes of the associated normal complexes satisfy the Alexandrov--Fenchel inequalities?} Our main result (Theorem~\ref{thm:reduce}) describes two readily verifiable conditions on $\Sigma$ that guarantee an affirmative answer to this question.
One of the motivations for studying mixed volumes of normal complexes is that, in the special setting of tropical fans, they correspond to mixed degrees of divisors in associated Chow rings. Thus, Alexandrov--Fenchel inequalities for normal complexes lead to nontrivial numerical inequalities in these Chow rings. A class of tropical fans that have garnered a great deal of attention in recent years are Bergman fans of matroids, and one application of our main result (Theorem~\ref{thm:bergman}) is that normal complexes associated to Bergman fans of matroids satisfy the Alexandrov--Fenchel inequalities. Translating these inequalities back to matroid Chow rings, we obtain a volume-theoretic proof of the log-concavity of characteristic polynomials of matroids, a result that was conjectured by Heron, Rota, and Welsh \cite{Rota, Heron, Welsh} and first proved by Adiprasito, Huh, and Katz \cite{AHK}.
\subsection{Overview of the paper}
We begin in Section 2 by briefly recalling the construction of normal complexes and their volumes. Normal complexes, denoted $C_{\Sigma,*}(z)$, depend on a marked simplicial $d$-fan $\Sigma$ in a vector space $N_\mathbb{R}$ with an inner product $*\in\mathrm{Inn}(N_\mathbb{R})$, as well as a choice of pseudocubical truncating values $z\in\overline{\Cub}(\Sigma,*)\subseteq\mathbb{R}^{\Sigma(1)}$. The volume of $C_{\Sigma,*}(z)$, denoted $\operatorname{Vol}_{\Sigma,\omega,*}(z)$, where $\omega$ is a weight function on the top-dimensional cones of $\Sigma$, is defined as the weighted sum of the volumes of the maximal polytopes in $C_{\Sigma,*}(z)$. We recall the main result of \cite{NR}, which asserts that, if $(\Sigma,\omega)$ is a tropical fan, then
\begin{equation}\label{intro:vol=deg}
\operatorname{Vol}_{\Sigma,\omega,*}(z)=\deg_{\Sigma,\omega}(D(z)^d)\;\;\;\text{ where }\;\;\;D(z)=\sum_{\rho\in\Sigma(1)}z_\rho X_\rho \in A^1(\Sigma).
\end{equation}
In Section 3, we introduce mixed volumes of normal complexes $C_{\Sigma,*}(z_1),\dots,C_{\Sigma,*}(z_d)$, denoted $\operatorname{MVol}_{\Sigma,\omega,*}(z_1,\dots,z_d)$, which are weighted sums of mixed volumes of maximal polytopes. Analogous to mixed volumes in convex geometry, we show that mixed volumes of normal complexes are characterized by being symmetric, multilinear, and normalized by volume (Proposition~\ref{prop:mvolchar}). Furthermore, we prove that mixed volumes are nonnegative on the pseudocubical cone $\overline{\Cub}(\Sigma,*)$ and positive on the cubical cone $\operatorname{Cub}(\Sigma,*)$ (Proposition~\ref{prop:positive}). For all tropical fans $(\Sigma,\omega)$,
we leverage \eqref{intro:vol=deg} to show (Theorem~\ref{thm:mvol=mdeg}) that
\begin{equation}\label{intro:mvol=mdeg}
\operatorname{MVol}_{\Sigma,\omega,*}(z_1,\dots,z_d)=\deg_{\Sigma,\omega}(D(z_1)\cdots D(z_d)).
\end{equation}
In Section 4, we develop the face structure of normal complexes, closely paralleling the classical face structure of polytopes. In particular, the faces of a normal complex $C_{\Sigma,*}(z)$ are indexed by cones $\tau\in\Sigma$, and each face is obtained as the intersection of $C_{\Sigma,*}(z)$ with the truncating hyperplanes indexed by the rays of $\tau$. We describe how each face can, itself, be viewed as a normal complex associated to the star fan $\Sigma^\tau$, and use this to define (mixed) volumes of faces. Our main result of this section (Proposition~\ref{prop:pyramidmixed}) shows how mixed volumes of normal complexes can be computed in terms of mixed volumes of facets.
In Section 5, we introduce what it means for a triple $(\Sigma,\omega,*)$ to be AF---namely, that the mixed volumes of cubical values satisfy the Alexandrov--Fenchel inequalities. Our main result (Theorem~\ref{thm:reduce}), inspired by work of Cordero-Erausquin, Klartag, Merigot, and Santambrogio \cite{OneMoreProof} and Br\"and\'en and Leake \cite{BrandenLeake}, states that $(\Sigma,\omega,*)$ is AF if (i) all star fans $\Sigma^\tau$ of dimension at least three remain connected after removing the origin and (ii) the quadratic volume polynomials associated to the two-dimensional star fans of $\Sigma$ have exactly one positive eigenvalue. In fact, under these conditions, we argue that the volume polynomial $\operatorname{Vol}_{\Sigma,\omega,*}(z)$ is $\operatorname{Cub}(\Sigma,*)$-Lorentzian, which then implies that $(\Sigma,\omega,*)$ is AF.
In Section 6, we briefly recall relevant notions regarding matroids and Bergman fans, and then we use Theorem~\ref{thm:reduce} to prove that Bergman fans of matroids are AF (Theorem~\ref{thm:bergman}). We conclude the paper by deducing the Heron--Rota--Welsh Conjecture as a consequence of the Alexandrov--Fenchel inequalities for normal complexes.
\subsection{Relation to other work}
Since the original proof of the Heron--Rota--Welsh Conjecture by Adiprasito, Huh, and Katz \cite{AHK}, there have been a number of alternative proofs, generalizations, and exciting related developments (an incomplete list includes \cite{BHMPW,BHMPW2,BES,ADH,AP1,AP2,BrandenHuh,AGVI,ALGVII,ALGVIII,CP}). We view the volume-theoretic approach in this paper as a new angle from which to view log-concavity of characteristic polynomials of matroids, but we also want to acknowledge that our methods share features of and are indebted to the approaches of several other teams of mathematicians. In particular, our methods rely on the Chow-theoretic interpretation of characteristic polynomials of matroids, proved by Huh and Katz \cite{HuhKatz}, which was central in the original proof of Adiprasito, Huh, and Katz \cite{AHK}, as well as in the subsequent proofs by Braden, Huh, Matherne, Proudfoot, and Wang \cite{BHMPW} and Backman, Eur, and Simpson \cite{BES}. In addition, our methods prove that volume polynomials are Lorentzian, which is also a central feature in the methods of both Backman, Eur, and Simpson \cite{BES} and Br\"and\'en and Leake \cite{BrandenLeake}. We note that, while the methods of \cite{BES} and \cite{BrandenLeake} seem to be tailored primarily for matroids, our methods readily extend to the more general setting of tropical intersection theory (this extension will be spelled out in a forthcoming work of the third author). By adding a new volume-theoretic approach to the Heron--Rota--Welsh Conjecture to the literature, we hope that this paper will serve to welcome a new batch of geometrically-minded folks into the fold of this flourishing area of research, opening the door for further developments.
\subsection{Acknowledgements}
The authors would like to express their gratitude to Federico Ardila, Matthias Beck, Emily Clader, Chris Eur, and Serkan Ho\c{s}ten for sharing insights related to this project. This work was supported by a grant from the National Science Foundation: DMS-2001439.
\section{Background on normal complexes}\label{sec:background}
In this section, we establish notation, conventions, and preliminary results regarding polyhedral fans and normal complexes.
\subsection{Fan definitions and conventions}
Let $N_\mathbb{R}$ be a real vector space of dimension $n$. Given a polyhedral fan $\Sigma\subseteq N_\mathbb{R}$, we denote the $k$-dimensional cones of $\Sigma$ by $\Sigma{(k)}$. Let $\preceq$ denote the face containment relation among the cones of $\Sigma$, and for each cone $\sigma\in\Sigma$, let $\sigma(k)\subseteq\Sigma(k)$ denote the $k$-dimensional faces of $\sigma$. For any cone $\sigma$, let $\sigma^\circ$ denote the relative interior of $\sigma$ and denote the linear span of $\sigma$ by $N_{\sigma,\mathbb{R}}\subseteq N_\mathbb{R}$.
We say that a fan $\Sigma$ is \textbf{pure} if all of the maximal cones in $\Sigma$ have the same dimension. We say that $\Sigma$ is \textbf{marked} if we have chosen a distinguished generating vector $0\neq u_\rho\in \rho$ for each ray $\rho\in\Sigma(1)$. Henceforth, we assume that all fans are pure, polyhedral, and marked, and we use the term \textbf{$d$-fan} to refer to a pure, polyhedral, marked fan of dimension $d$.
We say that $\Sigma$ is \textbf{simplicial} if $\dim(N_{\sigma,\mathbb{R}})=|\sigma(1)|$ for all $\sigma\in\Sigma$. The faces of a simplicial cone $\sigma$ are in bijective correspondence with the subsets of $\sigma(1)$. For every face containment $\tau\preceq\sigma$ in a simplicial fan $\Sigma$, let $\sigma\setminus\tau$ denote the face of $\sigma$ with rays $\sigma(1)\setminus\tau(1)$. Given two faces $\tau,\pi\preceq\sigma$, denote by $\tau\cup\pi$ the face of $\sigma$ with rays $\tau(1)\cup\pi(1)$.
Given a simplicial $d$-fan $\Sigma$ and a weight function $\omega:\Sigma(d)\rightarrow\mathbb{R}_{>0}$, we say that the pair $(\Sigma,\omega)$ is a \textbf{tropical fan} if it satisfies the weighted balancing condition:
\[
\sum_{\sigma\in\Sigma(d)\atop \tau\prec\sigma}\omega(\sigma)u_{\sigma\setminus\tau}\in N_{\tau,\mathbb{R}} \;\;\; \text{ for all } \;\;\; \tau\in\Sigma(d-1).
\]
While the definition of tropical fans can be generalized to nonsimplicial fans, we will assume throughout this paper that all tropical fans are simplicial. If $\omega(\sigma)=1$ for all $\sigma\in\Sigma(d)$, we say that $\Sigma$ is \textbf{balanced} and we omit $\omega$ from the notation.
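As a small concrete illustration (added here; this fan does not appear in the text above), the one-dimensional fan in $\mathbb{R}^2$ with marked ray generators $e_1$, $e_2$, and $-e_1-e_2$ and all weights equal to $1$ is balanced: the only cone of codimension one is $\tau = \{0\}$, and the weighted sum of the ray generators is the zero vector, which lies in $N_{\tau,\mathbb{R}} = \{0\}$. The following sketch checks this numerically.
\begin{verbatim}
# Check the balancing condition for the 1-dimensional fan with rays
# e1, e2, -e1-e2 in R^2 and all weights equal to 1.
import numpy as np

rays = np.array([[1.0, 0.0],      # u_1 = e1
                 [0.0, 1.0],      # u_2 = e2
                 [-1.0, -1.0]])   # u_0 = -e1 - e2
weights = np.array([1.0, 1.0, 1.0])

balanced_sum = (weights[:, None] * rays).sum(axis=0)
assert np.allclose(balanced_sum, 0.0)   # the weighted ray generators sum to zero
print(balanced_sum)                     # [0. 0.]
\end{verbatim}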
\subsection{Chow rings and degree maps}
Let $M_\mathbb{R}$ denote the dual of $N_\mathbb{R}$ and let $\langle-,-\rangle$ be the duality pairing. Given a simplicial fan $\Sigma\subseteq N_\mathbb{R}$, the \textbf{Chow ring of $\Sigma$} is defined by
\[
A^\bullet(\Sigma)\defeq \frac{\mathbb{R}\big[x_\rho\;|\;\rho\in\Sigma{(1)}\big]}{\mathcal{I}+\mathcal{J}}
\]
where
\[
\mathcal{I}\defeq \big\langle x_{\rho_1}\cdots x_{\rho_k}\;|\;\mathbb{R}_{\geq 0}\{\rho_1,\dots,\rho_k\}\notin\Sigma\big\rangle \;\;\; \text{ and } \;\;\; \mathcal{J}\defeq \bigg\langle \sum_{\rho\in\Sigma{(1)}}\langle v,u_\rho\rangle x_\rho\;\bigg|\;v\in M_\mathbb{R}\bigg\rangle.
\]
As both $\mathcal{I}$ and $\mathcal{J}$ are homogeneous, the Chow ring $A^\bullet(\Sigma)$ is a graded ring, and we denote by $A^k(\Sigma)$ the subgroup of homogeneous elements of degree $k$. We denote the generators of $A^\bullet(\Sigma)$ by $X_\rho\defeq [x_\rho]\in A^1(\Sigma)$, and for any $\sigma\in\Sigma(k)$, we define
\[
X_\sigma\defeq \prod_{\rho\in\sigma(1)}X_\rho\in A^k(\Sigma).
\]
If $\Sigma$ is a simplicial $d$-fan, then every element of $A^k(\Sigma)$ can be written as a linear combination of $X_\sigma$ with $\sigma\in\Sigma(k)$ (see, for example, \cite[Proposition~5.5]{AHK}). It follows that $A^k(\Sigma)=0$ for all $k>d$. If, in addition, $(\Sigma,\omega)$ is tropical, then there is a well-defined \textbf{degree map}
\[
\deg_{\Sigma,\omega}:A^d(\Sigma)\rightarrow\mathbb{R}
\]
such that $\deg_{\Sigma,\omega}(X_\sigma)=\omega(\sigma)$ for every $\sigma\in\Sigma(d)$ (see, for example, \cite[Proposition~5.6]{AHK}).
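To illustrate these definitions on the smallest possible example (our addition, for concreteness), consider again the balanced one-dimensional fan $\Sigma \subseteq \mathbb{R}^2$ whose rays are generated by $u_1 = e_1$, $u_2 = e_2$, and $u_0 = -e_1 - e_2$. Pairing with the dual basis vectors $e_1^*$ and $e_2^*$, the ideal $\mathcal{J}$ produces the relations $X_1 = X_0$ and $X_2 = X_0$, so $A^1(\Sigma) \cong \mathbb{R}$ is spanned by the common class $X \coloneqq X_0 = X_1 = X_2$, and since $d = 1$ the degree map is determined by $\deg_{\Sigma}(X) = 1$.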
\subsection{Normal complexes}
We now recall the construction of normal complexes from \cite{NR}. In addition to a simplicial $d$-fan $\Sigma\subseteq N_\mathbb{R}$, the normal complex construction requires an additional choice of an inner product $*\in\mathrm{Inn}(N_\mathbb{R})$ and a value $z\in\mathbb{R}^{\Sigma(1)}$. Given such a $*$ and $z$, we define a set of hyperplanes and half-spaces in $N_\mathbb{R}$ associated to each $\rho\in\Sigma$ by
\[
H_{\rho,*}(z)\defeq \{v\in N_\mathbb{R} \mid v*u_\rho= z_\rho\}\;\;\;\text{ and }\;\;\;H_{\rho,*}^-(z)\defeq \{v\in N_\mathbb{R} \mid v*u_\rho\leq z_\rho\}.
\]
We then define polytopes $P_{\sigma,*}(z)$, one for each $\sigma\in\Sigma$, by
\[
P_{\sigma,*}(z)\defeq \sigma\cap\bigcap_{\rho\in\sigma(1)}H_{\rho,*}^-(z).
\]
Notice that $P_{\sigma,*}(z)$ is simply a truncation of the cone $\sigma$ by hyperplanes that are normal to the rays of $\sigma$---what it means to be normal is determined by $*$, and the locations of the normal hyperplanes along the rays of the cone are determined by $z$. We would like to construct a polytopal complex from these polytopes, but in general, they do not meet along faces. To ensure that they meet along faces, we require a compatibility between $z$ and $*$.
For each $\sigma\in\Sigma$, let $w_{\sigma,*}(z)\in N_{\sigma,\mathbb{R}}$ be the unique vector such that $w_{\sigma,*}(z)*u_\rho=z_\rho$ for all $\rho\in\sigma(1)$. That such a vector exists and is unique follows from the fact that the vectors $u_\rho$ with $\rho\in\sigma(1)$ are linearly independent---this is equivalent to the simplicial hypothesis. We then say that $z$ is \textbf{cubical (pseudocubical) with respect to $(\Sigma,*)$} if
\[
w_{\sigma,*}(z)\in\sigma^\circ\;\;\;(w_{\sigma,*}(z)\in\sigma)\;\;\;\text{ for all }\;\;\;\sigma\in\Sigma.
\]
In other words, the pseudocubical values are those values of $z$ for which the truncating hyperplanes intersect within each cone, and the cubical values are those for which they intersect in the relative interior of each cone. The collection of cubical values is denoted $\operatorname{Cub}(\Sigma,*)\subseteq\mathbb{R}^{\Sigma(1)}$ and the collection of pseudocubical values is denoted $\overline{\Cub}(\Sigma,*)\subseteq\mathbb{R}^{\Sigma(1)}$.
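For a single two-dimensional cone, computing $w_{\sigma,*}(z)$ amounts to solving a $2\times 2$ linear system, and checking (pseudo)cubicality amounts to checking the signs of the coordinates of $w_{\sigma,*}(z)$ with respect to the ray generators. The following sketch (an illustration we add here, with $*$ the standard dot product and an arbitrarily chosen cone) carries this out.
\begin{verbatim}
# Compute w_{sigma,*}(z) for the cone spanned by u1 = (1,0), u2 = (1,1) in R^2
# with the standard inner product, and test whether w_{sigma,*}(z) lies in sigma.
import numpy as np

u = np.array([[1.0, 0.0],    # u_1
              [1.0, 1.0]])   # u_2   (rows are the marked ray generators)

def w_sigma(z):
    # w is characterized by  w * u_i = z_i  for both rays, i.e.  u @ w = z.
    return np.linalg.solve(u, z)

def in_cone(w):
    # sigma is simplicial, so w lies in sigma iff its coordinates with
    # respect to (u_1, u_2) are nonnegative.
    c = np.linalg.solve(u.T, w)
    return bool(np.all(c >= -1e-12)), c

for z in [np.array([2.0, 3.0]), np.array([1.0, 0.5])]:
    w = w_sigma(z)
    inside, c = in_cone(w)
    print(z, w, c, inside)   # the first z is cubical for sigma, the second is not
\end{verbatim}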
We now summarize key results from \cite{NR} that will be necessary for the developments in this paper (see \cite[Propositions~3.2, 3.3, and 3.7 ]{NR}).
\begin{proposition}\label{prop:normalcomplexprelims}
Let $\Sigma\subseteq N_\mathbb{R}$ be a simplicial $d$-fan and let $*\in\mathrm{Inn}(N_\mathbb{R})$ be an inner product.
\begin{enumerate}
\item The set $\overline{\Cub}(\Sigma,*)\subseteq\mathbb{R}^{\Sigma(1)}$ is a polyhedral cone with $\overline{\Cub}(\Sigma,*)^\circ=\operatorname{Cub}(\Sigma,*)$.
\item For $z\in\overline{\Cub}(\Sigma,*)$, the vertices of $P_{\sigma,*}(z)$ are $\{w_{\tau,*}(z)\mid\tau\preceq\sigma\}$.
\item For $z\in\overline{\Cub}(\Sigma,*)$, the polytopes $P_{\sigma,*}(z)$ meet along faces.
\end{enumerate}
\end{proposition}
For any polytope $P$, let $\widehat P$ denote the set of all faces of $P$. The third part of Proposition~\ref{prop:normalcomplexprelims} implies that
\[
C_{\Sigma,*}(z)\defeq \bigcup_{\sigma\in\Sigma(d)}\widehat{P_{\sigma,*}(z)}
\]
is a polytopal complex whenever $z\in\overline{\Cub}(\Sigma,*)$, and this polytopal complex is called the \textbf{normal complex of $\Sigma$ with respect to $*$ and $z$.}
Below, we depict a two-dimensional tropical fan and an associated normal complex. The fan is comprised of nine two-dimensional cones glued along faces, and each of these nine cones corresponds to a quadrilateral in the normal complex.
\begin{center}
\tdplotsetmaincoords{68}{55}
\begin{tikzpicture}[scale=2,tdplot_main_coords]
\draw[draw=blue!20,fill=blue!20,fill opacity=0.5] (0,0, 0)-- (1, 0, 0) -- (1, 1, 1) -- cycle;
\draw[draw=blue!20,fill=blue!20,fill opacity=0.5] (0,0, 0)-- (0, 1, 0) -- (1, 1, 1) -- cycle;
\draw[draw=blue!20,fill=blue!20,fill opacity=0.5] (0,0, 0)-- (0, 0, 1) -- (1, 1, 1) -- cycle;
\draw[draw=blue!20,fill=blue!20,fill opacity=0.5] (0,0, 0)-- (1, 0, 0) -- (0, -1, -1) -- cycle;
\draw[draw=blue!20,fill=blue!20,fill opacity=0.5] (0,0, 0)-- (0, 1, 0) -- (-1, 0, -1) -- cycle;
\draw[draw=blue!20,fill=blue!20,fill opacity=0.5] (0,0, 0)-- (0, 0, 1) -- (-1, -1, 0) -- cycle;
\draw[draw=blue!20,fill=blue!20,fill opacity=0.5] (0,0, 0)-- (-1, -1, 0) -- (-1, -1, -1) -- cycle;
\draw[draw=blue!20,fill=blue!20,fill opacity=0.5] (0,0, 0)-- (-1, 0, -1) -- (-1, -1, -1) -- cycle;
\draw[draw=blue!20,fill=blue!20,fill opacity=0.5] (0,0, 0)-- (0, -1, -1) -- (-1, -1, -1) -- cycle;
\draw[->] (0,0,0) -- (1,0,0);
\draw[->,gray] (0,0,0) -- (0,1,0);
\draw[->] (0,0,0) -- (0,0,1);
\draw[->] (0,0,0) -- (1,1,1);
\draw[->] (0,0,0) -- (-1,-1,-1);
\draw[->] (0,0,0) -- (-1,-1,0);
\draw[->,gray] (0,0,0) -- (-1,0,-1);
\draw[->] (0,0,0) -- (0,-1,-1);
\end{tikzpicture}
\hspace{70bp}
\begin{tikzpicture}[scale=1.1,tdplot_main_coords]
\draw[draw=green!20, fill=green!20, fill opacity=.5] (0,0,0) -- (0, 0, 1.6) -- (1, 1, 1.6) -- (1.2, 1.2, 1.2) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.5] (0,0,0) -- (0, 1.6, 0) -- (1, 1.6, 1) -- (1.2, 1.2, 1.2) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.5] (0,0,0) -- (1.6, 0, 0) -- (1.6, 1, 1) -- (1.2, 1.2, 1.2) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.5] (0,0,0) -- (0, 0, 1.6) -- (-1.6, -1.6, 1.6) -- (-1.6, -1.6, 0) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.5] (0,0,0) -- (0, 1.6, 0) -- (-1.6, 1.6, -1.6) -- (-1.6, 0, -1.6) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.5] (0,0,0) -- (1.6, 0, 0) -- (1.6, -1.6, -1.6) -- (0, -1.6, -1.6) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.5] (0,0,0) -- (-1.6, -1.6, 0) -- (-1.6, -1.6, -.4) -- (-1.2, -1.2, -1.2) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.5] (0,0,0) -- (-1.6, 0, -1.6) -- (-1.6, -.4, -1.6) -- (-1.2, -1.2, -1.2) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.5] (0,0,0) -- (0, -1.6, -1.6) -- (-.4, -1.6, -1.6) -- (-1.2, -1.2, -1.2) -- cycle;
\draw[thick] (0, 0, 1.6) -- (1, 1, 1.6) -- (1.2, 1.2, 1.2);
\draw[dashed] (0, 1.6, 0) -- (1, 1.6, 1) -- (1.2, 1.2, 1.2);
\draw[thick] (.7, 1.6, .7) -- (1, 1.6, 1) -- (1.2, 1.2, 1.2);
\draw[thick] (1.6, 0, 0) -- (1.6, 1, 1) -- (1.2, 1.2, 1.2);
\draw[thick] (0, 0, 1.6) -- (-1.6, -1.6, 1.6) -- (-1.6, -1.6, 0);
\draw[dashed] (0, 1.6, 0) -- (-1.6, 1.6, -1.6) -- (-1.6, 0, -1.6);
\draw[thick] (1.6, 0, 0) -- (1.6, -1.6, -1.6) -- (0, -1.6, -1.6);
\draw[thick] (-1.6, -1.6, 0) -- (-1.6, -1.6, -.4) -- (-1.2, -1.2, -1.2);
\draw[dashed] (-1.6, 0, -1.6) -- (-1.6, -.4, -1.6) -- (-1.2, -1.2, -1.2);
\draw[thick] (0, -1.6, -1.6) -- (-.4, -1.6, -1.6) -- (-1.2, -1.2, -1.2);
\draw[thick] (0, 0, 0) -- (1.2, 1.2, 1.2);
\draw[thick] (0, 0, 0) -- (0, 0, 1.6);
\draw[dashed] (0, 0, 0) -- (0, 1.6, 0);
\draw[thick] (0, 0, 0) -- (1.6, 0, 0);
\draw[thick] (0, 0, 0) -- (0, -1.6, -1.6);
\draw[dashed] (0, 0, 0) -- (-1.6, 0, -1.6);
\draw[thick] (0, 0, 0) -- (-1.6, -1.6, 0);
\draw[thick] (0, 0, 0) -- (-1.2, -1.2, -1.2);
\end{tikzpicture}
\end{center}
The next pair of images depicts a three-dimensional fan comprised of two maximal cones meeting along a two-dimensional face, and a corresponding normal complex. While this fan is not tropical, the reader is welcome to view this image as just one small piece of a three-dimensional tropical fan in some higher-dimensional vector space.
\begin{center}
\begin{tikzpicture}
\node[] at (0,0) {\includegraphics[scale=.3]{3dcones}};
\node[] at (8,0) {\includegraphics[scale=.3]{3dtruncation}};
\end{tikzpicture}
\end{center}
\subsection{Volumes of normal complexes}
Let $\Sigma\subseteq N_\mathbb{R}$ be a simplicial $d$-fan, $*\in\mathrm{Inn}(N_\mathbb{R})$ an inner product, and $z\in\overline{\Cub}(\Sigma,*)$ a pseudocubical value. Informally, the volume of the normal complex $C_{\Sigma,*}(z)$ is the sum of the volumes of the polytopes $P_{\sigma,*}(z)$ with $\sigma\in\Sigma(d)$; however, some care is required in specifying what we mean by \emph{volume} in each subspace $N_{\sigma,\mathbb{R}}$.
For each cone $\sigma\in\Sigma$, define the discrete subgroup
\[
N_\sigma\defeq \mathrm{span}_\mathbb{Z}(u_\rho\;|\;\rho\in\sigma(1))\subseteq N_\mathbb{R},
\]
and let $M_\sigma$ denote its dual: $M_\sigma\defeq \mathrm{Hom}_\mathbb{Z}(N_\sigma,\mathbb{Z})\subseteq M_{\sigma,\mathbb{R}}\defeq \mathrm{Hom}_\mathbb{R}(N_{\sigma,\mathbb{R}},\mathbb{R})$. Using the inner product $*$, we can identify $M_{\sigma,\mathbb{R}}$ with $N_{\sigma,\mathbb{R}}$ and thus, we can view $M_\sigma$ as a lattice in $N_{\sigma,\mathbb{R}}$. For each $\sigma\in\Sigma$, let
\[
\operatorname{Vol}_\sigma:\big\{\text{polytopes in }N_{\sigma,\mathbb{R}}\big\}\rightarrow\mathbb{R}_{\geq 0}
\]
be the volume function determined by the property that a fundamental simplex of the lattice $M_\sigma\subseteq N_{\sigma,\mathbb{R}}$ has unit volume. Define the \textbf{volume of the normal complex $C_{\Sigma,*}(z)$}, denoted $\operatorname{Vol}_{\Sigma,*}(z)$ for brevity, as the sum of the volumes of the constituent $d$-dimensional polytopes:
\[
\operatorname{Vol}_{\Sigma,*}(z)\defeq \sum_{\sigma\in\Sigma(d)}\operatorname{Vol}_\sigma(P_{\sigma,*}(z)).
\]
In slightly more generality, suppose that $\omega:\Sigma(d)\rightarrow \mathbb{R}_{>0}$ is a weight function on the maximal cones of $\Sigma$. The \textbf{volume of the normal complex $C_{\Sigma,*}(z)$ weighted by $\omega$} is defined by
\[
\operatorname{Vol}_{\Sigma,\omega,*}(z)\defeq \sum_{\sigma\in\Sigma(d)}\omega(\sigma)\operatorname{Vol}_\sigma(P_{\sigma,*}(z)).
\]
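For readers who wish to experiment numerically, the following sketch (again our own illustration, assuming $\sigma$ is full-dimensional, so that $N_{\sigma,\mathbb{R}}=N_\mathbb{R}\cong\mathbb{R}^d$, and that $z$ is pseudocubical, so that the vertices of $P_{\sigma,*}(z)$ are the vectors $w_{\tau,*}(z)$ of Proposition~\ref{prop:normalcomplexprelims}) computes the summand $\operatorname{Vol}_\sigma(P_{\sigma,*}(z))$ for a single maximal cone by rescaling the Euclidean volume so that a fundamental simplex of $M_\sigma$ has unit volume.
\begin{verbatim}
from itertools import chain, combinations
from math import factorial
import numpy as np
from scipy.spatial import ConvexHull

def cone_volume(U, z, G=None):
    """Vol_sigma(P_{sigma,*}(z)) for one full-dimensional simplicial cone.
    U: marked ray generators as columns (a basis of R^d); z: truncation
    values; G: Gram matrix of the inner product * (default: dot product)."""
    U, z = np.asarray(U, float), np.asarray(z, float)
    d = U.shape[1]
    G = np.eye(d) if G is None else np.asarray(G, float)
    # Vertices of P_{sigma,*}(z): the vectors w_{tau,*}(z) over all faces tau.
    verts = [np.zeros(d)]
    for S in chain.from_iterable(combinations(range(d), k) for k in range(1, d + 1)):
        US = U[:, list(S)]
        verts.append(US @ np.linalg.solve(US.T @ G @ US, z[list(S)]))
    euclidean = ConvexHull(np.array(verts)).volume
    V = np.linalg.inv(G @ U).T            # dual basis of M_sigma (columns)
    return euclidean * factorial(d) / abs(np.linalg.det(V))

# Standard cone in R^3 with z = (1,1,1): the unit cube, of normalized volume 3!.
print(cone_volume(np.eye(3), np.array([1.0, 1.0, 1.0])))   # 6.0
\end{verbatim}
Summing $\omega(\sigma)$ times this quantity over $\sigma\in\Sigma(d)$ reproduces $\operatorname{Vol}_{\Sigma,\omega,*}(z)$.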
The main result of \cite{NR} is a Chow-theoretic interpretation of the weighted volumes of normal complexes, valid whenever $(\Sigma,\omega)$ is tropical.
\begin{theorem}[{\cite[Theorem~6.3]{NR}}]\label{thm:vol=deg}
Let $(\Sigma,\omega)$ be a tropical $d$-fan, $*\in\mathrm{Inn}(N_\mathbb{R})$ an inner product, and $z\in\overline{\Cub}(\Sigma,*)$ a pseudocubical value. Then
\[
\operatorname{Vol}_{\Sigma,\omega,*}(z)=\deg_{\Sigma,\omega}(D(z)^d)
\]
where
\[
D(z)=\sum_{\rho\in\Sigma(1)} z_\rho X_\rho\in A^1(\Sigma).
\]
\end{theorem}
\section{Mixed Volumes of Normal Complexes}
Our first aim in this paper is to enhance Theorem~\ref{thm:vol=deg} to a statement about mixed volumes. In order to do this, we briefly recall the classical theory of mixed volumes, for which we recommend the comprehensive text by Schneider \cite{Schneider} as a reference.
\subsection{Mixed volumes of polytopes}
Mixed volumes are the natural result of combining the notion of volume with the operation of Minkowski addition. We start with a $d$-dimensional real vector space $V$ and a volume function $\operatorname{Vol}:\{\text{polytopes in }V\}\rightarrow\mathbb{R}_{\geq 0}$. The \textbf{mixed volume function}
\[
\operatorname{MVol}:\{\text{polytopes in } V\}^d\rightarrow\mathbb{R}_{\geq 0}
\]
is the unique function determined by the following three properties.
\begin{itemize}
\item (Symmetry) For any permutation $\pi\in S_d$,
\[
\operatorname{MVol}(P_1,\dots,P_d)=\operatorname{MVol}(\pi(P_1,\dots,P_d)).
\]
\item (Multilinearity) For any $i=1,\dots,d$ and $\lambda \in\mathbb{R}_{\geq 0}$,
\begin{align*}
\operatorname{MVol}(P_1,\dots,\lambda P_i+P_i',\dots,P_d)=\lambda&\operatorname{MVol}(P_1,\dots,P_i,\dots,P_d)\\&+\operatorname{MVol}(P_1,\dots,P_i',\dots,P_d),
\end{align*}
where the linear combination of polytopes is defined by
\[
\lambda P_i+P_i'=\{\lambda v+w\mid v\in P_i,w\in P_i'\}.
\]
\item (Normalization) For any polytope $P$,
\[
\operatorname{MVol}(P,\dots,P)=\operatorname{Vol}(P).
\]
\end{itemize}
That such a mixed volume function exists and is unique is due to Minkowski \cite{Minkowski}, who proved the result more generally for convex bodies, not just for polytopes.
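To see how the three properties interact, consider $V=\mathbb{R}^2$ with the Euclidean volume and the axis-aligned boxes $P=[0,a_1]\times[0,b_1]$ and $Q=[0,a_2]\times[0,b_2]$. On one hand,
\[
\operatorname{Vol}(\lambda P+\mu Q)=(\lambda a_1+\mu a_2)(\lambda b_1+\mu b_2)=\lambda^2\operatorname{Vol}(P)+\lambda\mu(a_1b_2+a_2b_1)+\mu^2\operatorname{Vol}(Q),
\]
while, on the other hand, symmetry, multilinearity, and normalization force
\[
\operatorname{Vol}(\lambda P+\mu Q)=\lambda^2\operatorname{MVol}(P,P)+2\lambda\mu\operatorname{MVol}(P,Q)+\mu^2\operatorname{MVol}(Q,Q).
\]
Comparing coefficients gives $\operatorname{MVol}(P,Q)=\tfrac{1}{2}(a_1b_2+a_2b_1)$; in particular, the mixed volume of the unit square and a vertical unit segment is $\tfrac{1}{2}$.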
\subsection{Mixed volumes of normal complexes}
We now define a notion of mixed volumes of normal complexes. Let $\Sigma\subseteq N_\mathbb{R}$ be a simplicial $d$-fan and let $*\in\mathrm{Inn}(N_\mathbb{R})$ be an inner product. Given pseudocubical values $z_1,\dots,z_d\in\overline{\Cub}(\Sigma,*)$, we define the \textbf{mixed volume of the normal complexes $C_{\Sigma,*}(z_1),\dots,C_{\Sigma,*}(z_d)$}, denoted $\operatorname{MVol}_{\Sigma,*}(z_1,\dots,z_d)$ for brevity, by
\[
\operatorname{MVol}_{\Sigma,*}(z_1,\dots,z_d)\defeq \sum_{\sigma\in\Sigma(d)}\operatorname{MVol}_\sigma(P_{\sigma,*}(z_1),\dots,P_{\sigma,*}(z_d)).
\]
In other words, the mixed volume is the sum of the mixed volumes of the polytopes associated to the top-dimensional cones of $\Sigma$. More generally, if $\omega:\Sigma(d)\rightarrow \mathbb{R}_{>0}$ is a weight function, then the \textbf{mixed volume of the normal complexes $C_{\Sigma,*}(z_1),\dots,C_{\Sigma,*}(z_d)$ weighted by $\omega$} is defined by
\[
\operatorname{MVol}_{\Sigma,\omega,*}(z_1,\dots,z_d)\defeq \sum_{\sigma\in\Sigma(d)}\omega(\sigma)\operatorname{MVol}_\sigma(P_{\sigma,*}(z_1),\dots,P_{\sigma,*}(z_d)).
\]
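In practice, the per-cone summands are classical mixed volumes of polytopes, which can be computed from the inclusion--exclusion (polarization) formula
\[
\operatorname{MVol}(P_1,\dots,P_d)=\frac{1}{d!}\sum_{\emptyset\neq S\subseteq\{1,\dots,d\}}(-1)^{d-|S|}\operatorname{Vol}\Big(\sum_{i\in S}P_i\Big).
\]
The following sketch (our own illustration, using SciPy, with the Euclidean volume standing in for $\operatorname{Vol}_\sigma$, from which it differs only by a constant factor on each $N_{\sigma,\mathbb{R}}$) computes mixed volumes of polytopes given by their vertex sets.
\begin{verbatim}
from itertools import combinations
from math import factorial
import numpy as np
from scipy.spatial import ConvexHull

def vol(pts):
    """Euclidean volume of the convex hull of a point set (0 if degenerate)."""
    pts = np.asarray(pts, float)
    if np.linalg.matrix_rank(pts - pts[0]) < pts.shape[1]:
        return 0.0
    return ConvexHull(pts).volume

def minkowski_sum(A, B):
    """Vertex set of the Minkowski sum of two polytopes given by vertices."""
    return [a + b for a in np.asarray(A, float) for b in np.asarray(B, float)]

def mixed_volume(polys):
    """Mixed volume of d polytopes in R^d via the polarization formula."""
    d = len(polys)
    total = 0.0
    for k in range(1, d + 1):
        for S in combinations(range(d), k):
            Q = polys[S[0]]
            for i in S[1:]:
                Q = minkowski_sum(Q, polys[i])
            total += (-1) ** (d - k) * vol(Q)
    return total / factorial(d)

# MVol of the unit square and a vertical unit segment in R^2 equals 1/2.
square = [[0, 0], [1, 0], [0, 1], [1, 1]]
segment = [[0, 0], [0, 1]]
print(mixed_volume([square, segment]))   # 0.5
\end{verbatim}
Applying this cone by cone to the polytopes $P_{\sigma,*}(z_i)$, rescaling to the lattice normalization, and summing against the weights $\omega(\sigma)$ recovers $\operatorname{MVol}_{\Sigma,\omega,*}(z_1,\dots,z_d)$.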
In order to verify that this is a meaningful notion of mixed volumes for normal complexes, we check that it is characterized by analogues of the three defining properties of mixed volumes of polytopes.
\begin{proposition}\label{prop:mvolchar}
Let $\Sigma\subseteq N_\mathbb{R}$ be a simplicial $d$-fan, $*\in\mathrm{Inn}(N_\mathbb{R})$ an inner product, and $\omega:\Sigma(d)\rightarrow\mathbb{R}_{>0}$ a weight function.
\begin{enumerate}
\item For any $z_1,\dots,z_d\in\overline{\Cub}(\Sigma,*)$ and $\pi\in S_d$,
\[
\operatorname{MVol}_{\Sigma,\omega,*}(z_1,\dots,z_d)=\operatorname{MVol}_{\Sigma,\omega,*}(\pi(z_1,\dots,z_d)).
\]
\item For any $i=1,\dots,d$, and for any $z_1,\dots,z_i,z_i',\dots,z_d\in\overline{\Cub}(\Sigma,*)$ and $\lambda\in\mathbb{R}_{\geq 0}$,
\begin{align*}
\operatorname{MVol}_{\Sigma,\omega,*}(z_1,\dots,\lambda z_i+z_i',\dots,z_d)=\lambda &\operatorname{MVol}_{\Sigma,\omega,*}(z_1,\dots,z_i,\dots,z_d)\\
&+\operatorname{MVol}_{\Sigma,\omega,*}(z_1,\dots,z_i',\dots,z_d).
\end{align*}
\item For any $z\in\overline{\Cub}(\Sigma,*)$,
\[
\operatorname{MVol}_{\Sigma,\omega,*}(z,\dots,z)=\operatorname{Vol}_{\Sigma,\omega,*}(z).
\]
\end{enumerate}
Moreover, any function $\overline{\Cub}(\Sigma,*)^d\rightarrow\mathbb{R}_{\geq 0}$ satisfying Properties \emph{(1) -- (3)} must be $\operatorname{MVol}_{\Sigma,\omega,*}$.
\end{proposition}
\begin{proof}
Given that
\[
\operatorname{MVol}_{\Sigma,\omega,*}(z_1,\dots,z_d)=\sum_{\sigma\in\Sigma(d)}\omega(\sigma)\operatorname{MVol}_\sigma(P_{\sigma,*}(z_1),\dots,P_{\sigma,*}(z_d))
\]
and the summands in the right-hand side are simply mixed volumes of polytopes, Properties~(1) and (3) follow from the symmetry and normalization properties of mixed volumes in the polytope setting. Moreover, once we prove that
\begin{equation}\label{eq:polytopelinearity}
P_{\sigma,*}(\lambda z+z')=\lambda P_{\sigma,*}(z)+P_{\sigma,*}(z')
\end{equation}
for all $z,z'\in\overline{\Cub}(\Sigma,*)$ and $\lambda\in\mathbb{R}_{\geq 0}$, then Property (2) also follows from the multilinearity property of mixed volumes in the polytope setting. Thus, it remains to prove \eqref{eq:polytopelinearity}, which we accomplish by proving both inclusions.
First, suppose that $v\in P_{\sigma,*}(\lambda z+z')$. By Proposition~\ref{prop:normalcomplexprelims}, the vertices of $P_{\sigma,*}(\lambda z+z')$ are $\{w_{\tau,*}(\lambda z+z')\mid\tau\preceq\sigma\}$, so we can write $v$ as a convex combination:
\begin{equation}\label{eq:vertexexpression}
v=\sum_{\tau\preceq\sigma}a_\tau\, w_{\tau,*}(\lambda z+z')\;\;\;\text{ for some }\;\;\;a_\tau\in\mathbb{R}_{\geq 0}\;\;\;\text{ with }\;\;\;\sum_{\tau\preceq\sigma}a_\tau=1.
\end{equation}
To prove that $v\in\lambda P_{\sigma,*}(z)+P_{\sigma,*}(z')$, our next step is to prove that the vertices are linear:
\begin{equation}\label{eq:linearws}
w_{\tau,*}(\lambda z+z')=\lambda w_{\tau,*}(z)+w_{\tau,*}(z').
\end{equation}
Since $w_{\tau,*}(\lambda z+z')$ is the unique vector in $N_{\tau,\mathbb{R}}$ with $w_{\tau,*}(\lambda z+z')*u_\rho=(\lambda z+z')_\rho$ for all $\rho\in\tau(1)$, proving \eqref{eq:linearws} amounts to proving that $\lambda w_{\tau,*}(z)+w_{\tau,*}(z')$ also satisfies these equations. Using bilinearity of the inner product and the definition of the $w$ vectors, we have
\begin{align*}
(\lambda w_{\tau,*}(z)+w_{\tau,*}(z'))*u_\rho&=\lambda w_{\tau,*}(z)*u_\rho+w_{\tau,*}(z')*u_\rho\\
&=\lambda z_\rho+z_\rho'\\
&=(\lambda z+z')_\rho.
\end{align*}
Therefore, \eqref{eq:linearws} holds, and substituting \eqref{eq:linearws} into \eqref{eq:vertexexpression} implies that
\[
v=\lambda\sum_{\tau\preceq\sigma}a_\tau w_{\tau,*}(z)+\sum_{\tau\preceq\sigma}a_\tau w_{\tau,*}(z')\in \lambda P_{\sigma,*}(z)+P_{\sigma,*}(z').
\]
To prove the other inclusion, suppose that $v\in \lambda P_{\sigma,*}(z)+P_{\sigma,*}(z')$. Then $v=\lambda w+w'$ for some $w\in P_{\sigma,*}(z)$ and $w'\in P_{\sigma,*}(z')$. This means that $w,w'\in\sigma$ and, in addition, $w*u_\rho\leq z_{\rho}$ and $w'*u_\rho\leq z_{\rho}'$ for all $\rho\in\sigma(1)$. Since $\sigma$ is a cone, $v=\lambda w+w'\in\sigma$ and, for every $\rho\in\sigma(1)$, we have
\begin{align*}
v*u_\rho&=(\lambda w+w')*u_\rho\\
&=\lambda w*u_\rho+w'*u_\rho\\
&\leq \lambda z_\rho+z_\rho',
\end{align*}
from which we conclude that $v\in P_{\sigma,*}(\lambda z+z')$.
Finally, to prove the last assertion of the proposition, suppose that $F:\overline{\Cub}(\Sigma,*)^d\rightarrow\mathbb{R}_{\geq 0}$ satisfies Properties (1) -- (3). Our goal is to prove that $F(z_1,\dots,z_d)=\operatorname{MVol}_{\Sigma,\omega,*}(z_1,\dots,z_d)$ for any pseudocubical values $z_1,\dots,z_d\in\overline{\Cub}(\Sigma,*)$. Set $z=\lambda_1 z_1+\dots+\lambda_d z_d$ with $\lambda_1,\dots,\lambda_d\in\mathbb{R}_{\geq 0}$ arbitrary; note that $z\in\overline{\Cub}(\Sigma,*)$ because $\overline{\Cub}(\Sigma,*)$ is a polyhedral cone. Property (3) implies that
\[
F(z,\dots,z)=\operatorname{Vol}_{\Sigma,\omega,*}(z)=\operatorname{MVol}_{\Sigma,\omega,*}(z,\dots,z).
\]
Using Properties (1) and (2) we can expand both the left- and right-hand sides of this equation as polynomials in $\lambda_1,\dots,\lambda_d$:
\begin{align*}
\sum_{k_1,\dots,k_d}{d\choose k_1,\dots,k_d}&F(\underbrace{z_1,\dots,z_1}_{k_1},\dots,\underbrace{z_d,\dots,z_d}_{k_d})\lambda_1^{k_1}\cdots\lambda_d^{k_d}\\
&=\sum_{k_1,\dots,k_d}{d\choose k_1,\dots,k_d}\operatorname{MVol}_{\Sigma,\omega,*}(\underbrace{z_1,\dots,z_1}_{k_1},\dots,\underbrace{z_d,\dots,z_d}_{k_d})\lambda_1^{k_1}\cdots\lambda_d^{k_d}
\end{align*}
Equating the coefficients of $\lambda_1\cdots\lambda_d$ in these two polynomials leads to the desired conclusion: $F(z_1,\dots,z_d)=\operatorname{MVol}_{\Sigma,\omega,*}(z_1,\dots,z_d)$.
\end{proof}
Our methods for studying Alexandrov--Fenchel inequalities will also require the following positivity result.
\begin{proposition}\label{prop:positive}
Let $\Sigma\subseteq N_\mathbb{R}$ be a simplicial $d$-fan, $*\in\mathrm{Inn}(N_\mathbb{R})$ an inner product, and $\omega:\Sigma(d)\rightarrow\mathbb{R}_{>0}$ a weight function. Then
\[
\operatorname{MVol}_{\Sigma,\omega,*}(z_1,\dots,z_d)\geq 0\;\;\;\text{ for all }\;\;\;z_1,\dots,z_d\in\overline{\Cub}(\Sigma,*)
\]
and
\[
\operatorname{MVol}_{\Sigma,\omega,*}(z_1,\dots,z_d)> 0\;\;\;\text{ for all }\;\;\;z_1,\dots,z_d\in\operatorname{Cub}(\Sigma,*).
\]
\end{proposition}
\begin{proof}
The first statement follows from the definition of $\operatorname{MVol}_{\Sigma,\omega,*}$ and the nonnegativity of mixed volumes of polytopes \cite[Theorem~5.1.7]{Schneider}. For the second statement, we first observe that $z\in\operatorname{Cub}(\Sigma,*)$ implies that $P_{\sigma,*}(z)$ has dimension $d$ for every $\sigma\in\Sigma(d)$, which follows from the fact that $P_{\sigma,*}(z)$ is combinatorially equivalent to a $d$-cube \cite[Proposition~3.8]{NR}. Thus, the second statement follows from the fact that mixed volumes of full-dimensional polytopes are strictly positive \cite[Theorem~5.1.8]{Schneider}.
\end{proof}
\subsection{Mixed volumes and mixed degrees}
We now extend Theorem~\ref{thm:vol=deg} to give a Chow-theoretic interpretation of mixed volumes of normal complexes associated to tropical fans.
\begin{theorem}\label{thm:mvol=mdeg}
Let $(\Sigma,\omega)$ be a tropical $d$-fan, let $*\in\mathrm{Inn}(N_\mathbb{R})$ be an inner product, and let $z_1,\dots,z_d\in\overline{\Cub}(\Sigma,*)$ be pseudocubical values. Then
\[
\operatorname{MVol}_{\Sigma,\omega,*}(z_1,\dots,z_d)=\deg_{\Sigma,\omega}(D(z_1)\cdots D(z_d)).
\]
\end{theorem}
\begin{proof}
By Proposition~\ref{prop:mvolchar}, it suffices to prove that the function
\begin{align*}
\overline{\Cub}(\Sigma,*)^d&\rightarrow\mathbb{R}_{\geq 0}\\
(z_1,\dots,z_d)&\mapsto \deg_{\Sigma,\omega}(D(z_1)\cdots D(z_d))
\end{align*}
is symmetric, multilinear, and normalized by $\operatorname{Vol}_{\Sigma,\omega,*}$. Symmetry follows from the fact that $A^\bullet(\Sigma)$ is a commutative ring, multilinearity follows from the linearity of $z\mapsto D(z)$ together with the fact that $\deg_{\Sigma,\omega}:A^d(\Sigma)\rightarrow\mathbb{R}$ is a linear map, and normalization is the content of Theorem~\ref{thm:vol=deg}.
\end{proof}
\section{Faces of Normal Complexes}
In this section, we develop a face structure for normal complexes, analogous to the face structure of polytopes. Parallel to the polytope case, we will see that each face is obtained by intersecting the normal complex with supporting hyperplanes, that each face can, itself, be viewed as a normal complex, and that a face of a face is, itself, a face. We then prove fundamental properties relating (mixed) volumes of normal complexes to the (mixed) volumes of their facets, which perfectly parallel central results in the classical polytope setting.
\subsection{Orthogonal decompositions}
The face construction for normal complexes makes heavy use of an orthogonal decomposition of $N_\mathbb{R}$ associated to each cone $\tau\in\Sigma$, which we now describe. Associated to each $\tau\in\Sigma$, we have already met the subspace $N_{\tau,\mathbb{R}}\subseteq N_\mathbb{R}$, which is the linear span of $\tau$, and we now introduce notation for the quotient space
\[
N_\mathbb{R}^\tau\defeq N_\mathbb{R}/N_{\tau,\mathbb{R}}.
\]
With the inner product $*$, we may identify $N^\tau_\mathbb{R}$ with the orthogonal complement of $N_{\tau,\mathbb{R}}$:
\[
N^\tau_\mathbb{R}=N_{\tau,\mathbb{R}}^\perp=\{v\in N_\mathbb{R}\mid v*u=0\text{ for all }u\in N_{\tau,\mathbb{R}}\}\subseteq N_\mathbb{R},
\]
allowing us to decompose $N_\mathbb{R}$ as an orthogonal sum $N_\mathbb{R}=N_{\tau,\mathbb{R}}\oplus N^\tau_\mathbb{R}$. We denote the orthogonal projections onto the factors of this decomposition by $\mathrm{pr}_\tau$ and $\mathrm{pr}^\tau$.
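Concretely, both projections can be computed as matrices from the generators of $N_{\tau,\mathbb{R}}$ and the Gram matrix of $*$; the following minimal sketch (our own illustration) does so with NumPy.
\begin{verbatim}
import numpy as np

def orthogonal_projections(U, G=None):
    """Matrices of pr_tau (onto the column span of U) and pr^tau (onto its
    *-orthogonal complement); G is the Gram matrix of * (default: dot)."""
    U = np.asarray(U, float)
    G = np.eye(U.shape[0]) if G is None else np.asarray(G, float)
    pr_tau = U @ np.linalg.solve(U.T @ G @ U, U.T @ G)
    return pr_tau, np.eye(U.shape[0]) - pr_tau

# tau spanned by (1,1,0) in R^3 with the dot product.
pr_tau, pr_up = orthogonal_projections(np.array([[1.0], [1.0], [0.0]]))
v = np.array([2.0, 0.0, 3.0])
print(pr_tau @ v, pr_up @ v)   # [1. 1. 0.] [ 1. -1.  3.]
\end{verbatim}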
As we will see below, given a normal complex $C_{\Sigma,*}(z)$ and a cone $\tau\in\Sigma$, we will associate a face $\mathrm{F}^\tau(C_{\Sigma,*}(z))$, and this face will lie in the space $N_\mathbb{R}^\tau$. In order to help the reader digest the construction of $\mathrm{F}^\tau(C_{\Sigma,*}(z))$ and its subsequent interpretation as a normal complex, we henceforth make the convention that \emph{$\tau$ superscripts will be used exclusively for objects associated to the vector space $N_\mathbb{R}^\tau$}. For example, $\Sigma^\tau$ will denote a fan in $N_\mathbb{R}^\tau$ and $*^\tau$ will denote an inner product on $N_\mathbb{R}^\tau$.
\subsection{Faces of normal complexes}
There are two primary steps in the face construction for normal complexes. The first step is completely analogous to the polytope setting: we intersect the normal complex with a collection of supporting hyperplanes to obtain a subcomplex. However, in order to view this resulting subcomplex as a normal complex itself, the second step of the construction requires us to translate this polytopal subcomplex to the origin, where we can then endow it with the structure of a normal complex inside $N_\mathbb{R}^\tau$.
Let $\Sigma\subseteq N_\mathbb{R}$ be a simplicial $d$-fan, $*\in\mathrm{Inn}(N_\mathbb{R})$ an inner product, and $z\in\overline{\Cub}(\Sigma,*)$ a pseudocubical value. For each cone $\tau\in\Sigma$, define the \textbf{neighborhood of $\tau$ in $\Sigma$} by
\[
\N_\tau\Sigma\defeq \{\pi \mid \pi\preceq \sigma \text{ for some }\sigma\in\Sigma \text{ with }\tau\preceq\sigma\}.
\]
To illustrate this definition, we have darkened the neighborhood of the ray $\rho$ in the following two-dimensional fan.
\begin{center}
\tdplotsetmaincoords{68}{55}
\begin{tikzpicture}[scale=2,tdplot_main_coords]
\draw[draw=blue!20,fill=blue!20,fill opacity=0.8] (0,0, 0)-- (1, 0, 0) -- (1, 1, 1) -- cycle;
\draw[draw=blue!20,fill=blue!20,fill opacity=0.8] (0,0, 0)-- (0, 1, 0) -- (1, 1, 1) -- cycle;
\draw[draw=blue!20,fill=blue!20,fill opacity=0.8] (0,0, 0)-- (0, 0, 1) -- (1, 1, 1) -- cycle;
\draw[draw=blue!0,fill=blue!20,fill opacity=0.2] (0,0, 0)-- (1, 0, 0) -- (0, -1, -1) -- cycle;
\draw[draw=blue!0,fill=blue!20,fill opacity=0.2] (0,0, 0)-- (0, 1, 0) -- (-1, 0, -1) -- cycle;
\draw[draw=blue!0,fill=blue!20,fill opacity=0.2] (0,0, 0)-- (0, 0, 1) -- (-1, -1, 0) -- cycle;
\draw[draw=blue!0,fill=blue!20,fill opacity=0.2] (0,0, 0)-- (-1, -1, 0) -- (-1, -1, -1) -- cycle;
\draw[draw=blue!0,fill=blue!20,fill opacity=0.2] (0,0, 0)-- (-1, 0, -1) -- (-1, -1, -1) -- cycle;
\draw[draw=blue!0,fill=blue!20,fill opacity=0.2] (0,0, 0)-- (0, -1, -1) -- (-1, -1, -1) -- cycle;
\draw[->] (0,0,0) -- (1,0,0);
\draw[->,gray] (0,0,0) -- (0,1,0);
\draw[->] (0,0,0) -- (0,0,1);
\draw[->] (0,0,0) -- (1,1,1);
\node[right] at (1,1,1) {$\rho$};
\draw[->] (0,0,0) -- (-1,-1,-1);
\draw[->] (0,0,0) -- (-1,-1,0);
\draw[->,gray] (0,0,0) -- (-1,0,-1);
\draw[->] (0,0,0) -- (0,-1,-1);
\draw[draw=blue!20] (0, 1, 0) -- (-.41, .59, -.41);
\end{tikzpicture}
\end{center}
Notice that $\N_\tau\Sigma$ is, itself, a simplicial $d$-fan in $N_\mathbb{R}$ whose cones are a subset of $\Sigma$, and the maximal cones of $\N_\tau\Sigma$ comprise all of the maximal cones of $\Sigma$ that contain $\tau$. Since every maximal cone $\sigma\in\N_\tau\Sigma(d)$ contains $\tau$ as a face, it follows from the definitions that each hyperplane $H_{\rho,*}(z)$ with $\rho\in\tau(1)$ is a supporting hyperplane of $P_{\sigma,*}(z)$:
\[
P_{\sigma,*}(z)\subseteq H_{\rho,*}^-(z)\;\;\;\text{ for all }\;\;\; \sigma\in\N_\tau\Sigma(d)\;\;\;\text{ and }\;\;\;\rho\in\tau(1).
\]
Thus, for each $\sigma\in\N_\tau\Sigma(d)$, we obtain a face of $P_{\sigma,*}(z)$ by intersecting with all of these hyperplanes:
\[
\mathrm{F}_\tau(P_{\sigma,*}(z))\defeq P_{\sigma,*}(z)\cap\bigcap_{\rho\in\tau(1)}H_{\rho,*}(z).
\]
The collection of these polytopes along with all of their faces forms a polytopal subcomplex of $C_{\Sigma,*}(z)$, which we denote
\[
\mathrm{F}_\tau(C_{\Sigma,*}(z))\defeq \bigcup_{\sigma\in\N_\tau\Sigma(d)}\widehat{\mathrm{F}_\tau(P_{\sigma,*}(z))}.
\]
To illustrate how the polytopal subcomplex $\mathrm{F}_\tau(C_{\Sigma,*}(z))$ is constructed in a concrete example, the following image depicts a two-dimensional normal complex where we have darkened the collection of maximal polytopes associated to the neighborhood of a ray $\rho$. We have also drawn in the hyperplane associated to $\rho$. The intersection of the hyperplane and the darkened polytopes is $\mathrm{F}_\rho(C_{\Sigma,*}(z))$, which, in this example, is a polytopal complex comprised of three line segments meeting at the point $w_{\rho,*}(z)$.
\begin{center}
\tdplotsetmaincoords{68}{55}
\begin{tikzpicture}[scale=1.1,tdplot_main_coords]
\draw[draw=green!20, fill=green!20, fill opacity=1] (0,0,0) -- (0, 0, 1.6) -- (1, 1, 1.6) -- (1.2, 1.2, 1.2) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=1] (0,0,0) -- (0, 1.6, 0) -- (1, 1.6, 1) -- (1.2, 1.2, 1.2) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=1] (0,0,0) -- (1.6, 0, 0) -- (1.6, 1, 1) -- (1.2, 1.2, 1.2) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.2] (0,0,0) -- (0, 0, 1.6) -- (-1.6, -1.6, 1.6) -- (-1.6, -1.6, 0) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.2] (0,0,0) -- (0, 1.6, 0) -- (-1.6, 1.6, -1.6) -- (-1.6, 0, -1.6) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.2] (0,0,0) -- (1.6, 0, 0) -- (1.6, -1.6, -1.6) -- (0, -1.6, -1.6) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.2] (0,0,0) -- (-1.6, -1.6, 0) -- (-1.6, -1.6, -.4) -- (-1.2, -1.2, -1.2) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.2] (0,0,0) -- (-1.6, 0, -1.6) -- (-1.6, -.4, -1.6) -- (-1.2, -1.2, -1.2) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.2] (0,0,0) -- (0, -1.6, -1.6) -- (-.4, -1.6, -1.6) -- (-1.2, -1.2, -1.2) -- cycle;
\draw[draw=purple!20, fill=purple!20, fill opacity=.8] (2.1, -.9, 2.4) -- (-.9, 2.1, 2.4) -- (0.1, 3.1, 0.4) -- (3.1, 0.1, 0.4) -- cycle;
\node[] at (1.5,1.3,-1.4) {$H_{\rho,*}(z)$};
\draw[thick,->] (1.4,1.5,-1.2) to [bend right=30] (1.5,1.2,-.1);
\draw[thick] (0, 0, 1.6) -- (1, 1, 1.6) -- (1.2, 1.2, 1.2);
\draw[dashed] (0, 1.6, 0) -- (1, 1.6, 1) -- (1.2, 1.2, 1.2);
\draw[thick] (.7, 1.6, .7) -- (1, 1.6, 1) -- (1.2, 1.2, 1.2);
\draw[thick] (1.6, 0, 0) -- (1.6, 1, 1) -- (1.2, 1.2, 1.2);
\draw[] (0, 0, 1.6) -- (-1.6, -1.6, 1.6) -- (-1.6, -1.6, 0);
\draw[dashed] (0, 1.6, 0) -- (-1.6, 1.6, -1.6) -- (-1.6, 0, -1.6);
\draw[] (1.6, 0, 0) -- (1.6, -1.6, -1.6) -- (0, -1.6, -1.6);
\draw[] (-1.6, -1.6, 0) -- (-1.6, -1.6, -.4) -- (-1.2, -1.2, -1.2);
\draw[dashed] (-1.6, 0, -1.6) -- (-1.6, -.4, -1.6) -- (-1.2, -1.2, -1.2);
\draw[] (0, -1.6, -1.6) -- (-.4, -1.6, -1.6) -- (-1.2, -1.2, -1.2);
\draw[line width=.8mm] (1, 1, 1.6) -- (1.2, 1.2, 1.2);
\draw[line width=.8mm] (1, 1.6, 1) -- (1.2, 1.2, 1.2);
\draw[line width=.8mm](1.6, 1, 1) -- (1.2, 1.2, 1.2);
\draw[] (0, 0, 0) -- (1.2, 1.2, 1.2);
\draw[] (0, 0, 0) -- (0, 0, 1.6);
\draw[dashed] (0, 0, 0) -- (0, 1.6, 0);
\draw[] (0, 0, 0) -- (1.6, 0, 0);
\draw[] (0, 0, 0) -- (0, -1.6, -1.6);
\draw[dashed] (0, 0, 0) -- (-1.6, 0, -1.6);
\draw[] (0, 0, 0) -- (-1.6, -1.6, 0);
\draw[] (0, 0, 0) -- (-1.2, -1.2, -1.2);
\draw[->] (0,0,0) -- (2,2,2);
\node[right] at (2,2,2) {$\rho$};
\node[] at (2.8,2.8,1) {$F_\rho(C_{\Sigma,*}(z))$};
\draw[thick,->] (2.1,2.1,1) to [bend right=30] (1.4,1.4,1.2);
\end{tikzpicture}\end{center}
One might be tempted to call $\mathrm{F}_\tau(C_{\Sigma,*}(z))$ a ``face'' of $C_{\Sigma,*}(z)$; however, a drawback would be that $\mathrm{F}_\tau(C_{\Sigma,*}(z))$ is not, itself, a normal complex (all normal complexes contain the origin, for example, while $\mathrm{F}_\tau(C_{\Sigma,*}(z))$ generally does not). Thus, our construction involves one more step, which is to translate $\mathrm{F}_\tau(C_{\Sigma,*}(z))$ by the vector $w_{\tau,*}(z)$. Notice that, tracking back through the definitions, there is an identification of affine subspaces
\[
\bigcap_{\rho\in\tau(1)}H_{\rho,*}(z)=N_{\mathbb{R}}^\tau+w_{\tau,*}(z).
\]
Since $\mathrm{F}_\tau(C_{\Sigma,*}(z))$ is, by definition, contained in the left-hand side, it follows that its translation by $-w_{\tau,*}(z)$ is a polytopal complex in $N_\mathbb{R}^\tau$. We define the \textbf{face of $C_{\Sigma,*}(z)$ associated to $\tau\in\Sigma$} to be this polytopal complex:
\[
\mathrm{F}^\tau(C_{\Sigma,*}(z))\defeq \mathrm{F}_\tau(C_{\Sigma,*}(z))-w_{\tau,*}(z)\subseteq N_\mathbb{R}^\tau.
\]
The face associated to the ray $\rho$ in our running example is depicted below inside $N_\mathbb{R}^\rho$.
\begin{center}
\tdplotsetmaincoords{68}{55}
\begin{tikzpicture}[scale=1.1,tdplot_main_coords]
\draw[draw=green!20, fill=green!20, fill opacity=.2] (0,0,0) -- (0, 0, 1.6) -- (1, 1, 1.6) -- (1.2, 1.2, 1.2) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.2] (0,0,0) -- (0, 1.6, 0) -- (1, 1.6, 1) -- (1.2, 1.2, 1.2) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.2] (0,0,0) -- (1.6, 0, 0) -- (1.6, 1, 1) -- (1.2, 1.2, 1.2) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.2] (0,0,0) -- (0, 0, 1.6) -- (-1.6, -1.6, 1.6) -- (-1.6, -1.6, 0) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.2] (0,0,0) -- (0, 1.6, 0) -- (-1.6, 1.6, -1.6) -- (-1.6, 0, -1.6) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.2] (0,0,0) -- (1.6, 0, 0) -- (1.6, -1.6, -1.6) -- (0, -1.6, -1.6) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.2] (0,0,0) -- (-1.6, -1.6, 0) -- (-1.6, -1.6, -.4) -- (-1.2, -1.2, -1.2) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.2] (0,0,0) -- (-1.6, 0, -1.6) -- (-1.6, -.4, -1.6) -- (-1.2, -1.2, -1.2) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.2] (0,0,0) -- (0, -1.6, -1.6) -- (-.4, -1.6, -1.6) -- (-1.2, -1.2, -1.2) -- cycle;
\draw[line width=.01mm] (0, 0, 1.6) -- (1, 1, 1.6) -- (1.2, 1.2, 1.2);
\draw[line width=.01mm,dashed] (0, 1.6, 0) -- (1, 1.6, 1) -- (1.2, 1.2, 1.2);
\draw[line width=.01mm] (.7, 1.6, .7) -- (1, 1.6, 1) -- (1.2, 1.2, 1.2);
\draw[line width=.01mm] (1.6, 0, 0) -- (1.6, 1, 1) -- (1.2, 1.2, 1.2);
\draw[line width=.01mm] (0, 0, 1.6) -- (-1.6, -1.6, 1.6) -- (-1.6, -1.6, 0);
\draw[line width=.01mm,dashed] (0, 1.6, 0) -- (-1.6, 1.6, -1.6) -- (-1.6, 0, -1.6);
\draw[line width=.01mm] (1.6, 0, 0) -- (1.6, -1.6, -1.6) -- (0, -1.6, -1.6);
\draw[line width=.01mm] (-1.6, -1.6, 0) -- (-1.6, -1.6, -.4) -- (-1.2, -1.2, -1.2);
\draw[line width=.01mm,dashed] (-1.6, 0, -1.6) -- (-1.6, -.4, -1.6) -- (-1.2, -1.2, -1.2);
\draw[line width=.01mm] (0, -1.6, -1.6) -- (-.4, -1.6, -1.6) -- (-1.2, -1.2, -1.2);
\draw[draw=purple!20, fill=purple!20, fill opacity=.8] (.9, -2.1, 1.2) -- (-2.1, .9, 1.2) -- (-1.1, 1.9, -.8) -- (1.9, -1.1, -.8) -- cycle;
\node[] at (1.5,1.3,-2.5) {$N_\mathbb{R}^\rho=H_{\rho,*}(z)-w_{\rho,*}(z)$};
\draw[thick,->] (.2,.3,-2.4) to [bend right=30] (.3,0,-1.2);
\draw[line width=.8mm] (-.2, -.2, .4) -- (0, 0, 0);
\draw[line width=.8mm] (-.2, .4, -.2) -- (0, 0, 0);
\draw[line width=.8mm](.4, -.2, -.2) -- (0, 0, 0);
\draw[line width=.01mm] (0, 0, 0) -- (1.2, 1.2, 1.2);
\draw[line width=.01mm] (0, 0, 0) -- (0, 0, 1.6);
\draw[line width=.01mm,dashed] (0, 0, 0) -- (0, 1.6, 0);
\draw[line width=.01mm] (0, 0, 0) -- (1.6, 0, 0);
\draw[line width=.01mm] (0, 0, 0) -- (0, -1.6, -1.6);
\draw[line width=.01mm,dashed] (0, 0, 0) -- (-1.6, 0, -1.6);
\draw[line width=.01mm] (0, 0, 0) -- (-1.6, -1.6, 0);
\draw[line width=.01mm] (0, 0, 0) -- (-1.2, -1.2, -1.2);
\draw[->] (0,0,0) -- (2,2,2);
\node[right] at (2,2,2) {$\rho$};
\node[] at (3,3,0) {$F^\rho(C_{\Sigma,*}(z))=F_\rho(C_{\Sigma,*}(z))-w_{\rho,*}(z)$};
\draw[thick,->] (.9,.9,-.2) to [bend right=30] (.2,.2,0);
\end{tikzpicture}\end{center}
The next pair of images depicts the subcomplex $\mathrm{F}_\rho(C_{\Sigma,*}(z))\subseteq C_{\Sigma,*}(z)$ and, after translating to the origin, the face $\mathrm{F}^\rho(C_{\Sigma,*}(z))$, where $\rho$ is a ray of a three-dimensional fan.
\begin{center}
\begin{tikzpicture}
\node[] at (0,0) {\includegraphics[scale=.3]{2dface}};
\node[] at (8,0) {\includegraphics[scale=.25]{2dface2}};
\node[] at (.75,1.7) {$\rho$};
\node[] at (8.8,1.8) {$\rho$};
\node[] at (-2.8,1.5) {$F_\rho(C_{\Sigma,*}(z))$};
\draw[thick,->] (-1.7,1.5) to [bend left=30] (-.7,.9);
\node[] at (5.3,-1) {$F^\rho(C_{\Sigma,*}(z))$};
\draw[thick,->] (6.4,-1) to [bend right=20] (7.5,-.5);
\end{tikzpicture}
\end{center}
In the following subsections, it will also be useful to have notation for translates of the polytopes $\mathrm{F}_\tau(P_{\sigma,*}(z))$. We define
\[
\mathrm{F}^\tau(P_{\sigma,*}(z))\defeq \mathrm{F}_\tau(P_{\sigma,*}(z))-w_{\tau,*}(z).
\]
In terms of these translated polytopes, we can write the $\tau$-face of $C_{\Sigma,*}(z)$ as
\[
\mathrm{F}^\tau(C_{\Sigma,*}(z))=\bigcup_{\sigma\in\N_\tau\Sigma(d)}\widehat{\mathrm{F}^\tau(P_{\sigma,*}(z))}.
\]
\subsection{Faces as normal complexes}
Our aim in this subsection is to realize each face $\mathrm{F}^\tau(C_{\Sigma,*}(z))$ as a normal complex. In order to do so, we require several ingredients; namely, we require a marked, pure, simplicial fan $\Sigma^\tau$ in $N_\mathbb{R}^\tau$, an inner product $*^\tau$ on $N_\mathbb{R}^\tau$, and a pseudocubical value $z^\tau\in\overline{\Cub}(\Sigma^\tau,*^\tau)$. We now define each of these ingredients.
For each cone $\tau\in\Sigma$, define the \textbf{star of $\Sigma$ at $\tau$} to be the fan in $N_\mathbb{R}^\tau$ comprised of all projections of cones in the neighborhood of $\tau$:
\[
\Sigma^\tau\defeq\{\mathrm{pr}^\tau(\pi)\mid \pi\in\N_\tau\Sigma\}.
\]
The star of a two-dimensional fan $\Sigma$ at a ray $\rho$ is depicted below. In the image, there are three two-dimensional cones in the neighborhood of $\rho$ that are projected onto three one-dimensional cones that comprise the maximal cones in the star fan $\Sigma^\rho$.
\begin{center}
\tdplotsetmaincoords{68}{55}
\begin{tikzpicture}[scale=2,tdplot_main_coords]
\draw[draw=blue!20,fill=blue!20,fill opacity=0.8] (0,0, 0)-- (1, 0, 0) -- (1, 1, 1) -- cycle;
\draw[draw=blue!20,fill=blue!20,fill opacity=0.8] (0,0, 0)-- (0, 1, 0) -- (1, 1, 1) -- cycle;
\draw[draw=blue!20,fill=blue!20,fill opacity=0.8] (0,0, 0)-- (0, 0, 1) -- (1, 1, 1) -- cycle;
\draw[draw=blue!0,fill=blue!20,fill opacity=0.2] (0,0, 0)-- (1, 0, 0) -- (0, -1, -1) -- cycle;
\draw[draw=blue!0,fill=blue!20,fill opacity=0.2] (0,0, 0)-- (0, 1, 0) -- (-1, 0, -1) -- cycle;
\draw[draw=blue!0,fill=blue!20,fill opacity=0.2] (0,0, 0)-- (0, 0, 1) -- (-1, -1, 0) -- cycle;
\draw[draw=blue!0,fill=blue!20,fill opacity=0.2] (0,0, 0)-- (-1, -1, 0) -- (-1, -1, -1) -- cycle;
\draw[draw=blue!0,fill=blue!20,fill opacity=0.2] (0,0, 0)-- (-1, 0, -1) -- (-1, -1, -1) -- cycle;
\draw[draw=blue!0,fill=blue!20,fill opacity=0.2] (0,0, 0)-- (0, -1, -1) -- (-1, -1, -1) -- cycle;
\draw[->] (0,0,0) -- (1,0,0);
\draw[->,gray] (0,0,0) -- (0,1,0);
\draw[->] (0,0,0) -- (0,0,1);
\draw[->] (0,0,0) -- (1,1,1);
\node[right] at (1,1,1) {$\rho$};
\draw[->] (0,0,0) -- (-1,-1,-1);
\draw[->] (0,0,0) -- (-1,-1,0);
\draw[->,gray] (0,0,0) -- (-1,0,-1);
\draw[->] (0,0,0) -- (0,-1,-1);
\draw[draw=blue!20] (0, 1, 0) -- (-.41, .59, -.41);
\node[] at (-.8,-.8,.6) {$\Sigma\subseteq N_\mathbb{R}$};
\end{tikzpicture}
\hspace{50bp}
\begin{tikzpicture}[scale=2,tdplot_main_coords]
\draw[draw=purple!20, fill=purple!20, fill opacity=.8] (.9*.65, -2.1*.65, 1.2*.65) -- (-2.1*.65, .9*.65, 1.2*.65) -- (-1.1*.65, 1.9*.65, -.8*.65) -- (1.9*.65, -1.1*.65, -.8*.65) -- cycle;
\node[] at (0.6,0.6,0.5) {$\Sigma^\rho\subseteq N_\mathbb{R}^\rho$};
\draw[thick,->] (0.3,0.3,0.49) to [bend right=20] (-.05,-.05,.15);
\draw[line width=.4mm,->] (0,0,0) -- (.66,-.33,-.33);
\draw[line width=.4mm,->] (0,0,0) -- (-.33,.66,-.33);
\draw[line width=.4mm,->] (0,0,0) -- (-.33,-.33,.66);
\end{tikzpicture}
\end{center}
Henceforth, we use the shorthand $\pi^\tau = \mathrm{pr}^\tau(\pi)$.
Given any cone $\pi^\tau\in\Sigma^\tau$ with $\pi\in\N_\tau\Sigma$, we can also view $\pi^\tau$ as the projection of the larger cone $\sigma=\pi\cup\tau\in\N_\tau\Sigma$. Note that $\sigma$ is the unique cone in $\N_\tau\Sigma$ containing $\tau$ that projects onto $\pi^\tau$, from which it follows that each cone in $\Sigma^\tau$ is the projection of a \emph{distinguished} cone in $\N_\tau\Sigma$. In other words, there is a bijection
\begin{align*}
\{\sigma\in\N_\tau\Sigma \mid \tau\preceq\sigma\}&\rightarrow\Sigma^\tau\\
\sigma&\mapsto \sigma^\tau.
\end{align*}
From the assumption that $\Sigma$ is a simplicial $d$-fan, it follows that $\Sigma^\tau$ is a simplicial fan in $N^\tau_\mathbb{R}$ that is pure of dimension $d^\tau = d-\dim(\tau)$. Moreover, the simplicial hypothesis on $\Sigma$ implies that each ray $\eta\in\Sigma^\tau(1)$ is the projection of a unique ray $\hat\eta\in\N_\tau\Sigma(1)$, and we can use this to mark each ray $\eta\in\Sigma^\tau(1)$ with the vector $\mathrm{pr}^\tau(u_{\hat\eta})$.
We now have a marked, pure, simplicial fan in $N_\mathbb{R}^\tau$, so it remains to define an inner product and pseudocubical value. The inner product $*^\tau\in\mathrm{Inn}(N_\mathbb{R}^\tau)$ is simply defined as the restriction of the inner product $*\in\mathrm{Inn}(N_\mathbb{R})$ to the subspace $N_\mathbb{R}^\tau$. Lastly, given any $z\in\mathbb{R}^{\Sigma(1)}$, we define $z^\tau\in\mathbb{R}^{\Sigma^\tau(1)}$ by the rule
\[
z^\tau_{\eta}=z_{\hat\eta}-w_{\tau,*}(z)*u_{\hat\eta},
\]
where, as before, $\hat\eta\in\N_\tau\Sigma(1)$ is the unique ray with $\mathrm{pr}^\tau(\hat\eta)=\eta$.
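For concreteness, the following sketch (our own illustration; the function name and matrix conventions are ours) computes $z^\tau$ from this data: the columns of \texttt{U\_tau} are the ray generators of $\tau$, the columns of \texttt{U\_hat} are the lifted generators $u_{\hat\eta}$, and \texttt{G} is the Gram matrix of $*$.
\begin{verbatim}
import numpy as np

def star_value(U_tau, U_hat, z_tau, z_hat, G=None):
    """Compute z^tau_eta = z_hat_eta - w_{tau,*}(z) * u_hat_eta for the rays
    of the star fan Sigma^tau lifted to the rays u_hat_eta of N_tau(Sigma)."""
    U_tau, U_hat = np.asarray(U_tau, float), np.asarray(U_hat, float)
    G = np.eye(U_tau.shape[0]) if G is None else np.asarray(G, float)
    a = np.linalg.solve(U_tau.T @ G @ U_tau, np.asarray(z_tau, float))
    w_tau = U_tau @ a                       # the vector w_{tau,*}(z)
    return np.asarray(z_hat, float) - U_hat.T @ G @ w_tau

# Example: tau spanned by (1,1), one lifted ray u_hat = (1,0), dot product.
print(star_value([[1.0], [1.0]], [[1.0], [0.0]], [2.0], [3.0]))   # [2.]
\end{verbatim}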
We now have all the ingredients necessary to state and prove the following result, which asserts that faces of normal complexes are, themselves, normal complexes.
\begin{proposition}\label{prop:facesarenormalcomplexes}
Let $\Sigma\subseteq N_\mathbb{R}$ be a simplicial $d$-fan, $*\in\mathrm{Inn}(N_\mathbb{R})$ an inner product, and $\tau\in\Sigma$ a cone. If $z\in\mathbb{R}^{\Sigma(1)}$ is (pseudo)cubical with respect to $(\Sigma,*)$, then $z^\tau$ is (pseudo)cubical with respect to $(\Sigma^\tau,*^\tau)$ and
\[
\mathrm{F}^\tau(C_{\Sigma,*}(z))=C_{\Sigma^\tau,*^\tau}(z^\tau).
\]
\end{proposition}
We note that the first statement---that $z^\tau$ is (pseudo)cubical---is necessary in order for $C_{\Sigma^\tau,*^\tau}(z^\tau)$ to even be well-defined. Proposition~\ref{prop:facesarenormalcomplexes} is a statement about normal complexes, or equivalently, about the polytopes that comprise those complexes. In order to prove Proposition~\ref{prop:facesarenormalcomplexes}, we first prove the following key lemma, which concerns just the vertices of the polytopes.
\begin{lemma}\label{lemma:projectingws}
Let $\Sigma\subseteq N_\mathbb{R}$ be a simplicial $d$-fan, $*\in\mathrm{Inn}(N_\mathbb{R})$ an inner product, and $\tau\in\Sigma$ a cone. For any $\sigma\in\Sigma$ with $\tau\preceq\sigma$, we have
\[
\mathrm{pr}^\tau(w_{\sigma,*}(z))=w_{\sigma,*}(z)-w_{\tau,*}(z)=w_{\sigma^\tau,*^\tau}(z^\tau).
\]
\end{lemma}
\begin{proof}
We start by establishing the first equality. To do so, we first argue that $w_{\sigma,*}(z)-w_{\tau,*}(z)\in N_\mathbb{R}^\tau$. Since $N_\mathbb{R}^\tau=N_{\tau,\mathbb{R}}^\perp$, it suffices to prove that $w_{\sigma,*}(z)-w_{\tau,*}(z)$ is orthogonal to the basis $\{u_\rho\mid \rho\in\tau(1)\}\subseteq N_{\tau,\mathbb{R}}$. By definition of the $w$ vectors and the assumption that $\tau\preceq\sigma$, we compute
\[
(w_{\sigma,*}(z)-w_{\tau,*}(z))*u_\rho=z_\rho-z_\rho=0\;\;\;\text{ for all }\;\;\;\rho\in\tau(1),
\]
from which it follows that $w_{\sigma,*}(z)-w_{\tau,*}(z)\in N_\mathbb{R}^\tau$. Since $N_\mathbb{R}=N_{\tau,\mathbb{R}}\oplus N_\mathbb{R}^\tau$, the orthogonal decomposition $w_{\sigma,*}(z)=w_{\tau,*}(z)\,+\,(w_{\sigma,*}(z)-w_{\tau,*}(z))$ then implies that
\begin{equation}\label{eq:projectingvertices}
\mathrm{pr}_\tau(w_{\sigma,*}(z))=w_{\tau,*}(z)\;\;\;\text{ and }\;\;\;\mathrm{pr}^\tau(w_{\sigma,*}(z))=w_{\sigma,*}(z)-w_{\tau,*}(z).
\end{equation}
To prove that $w_{\sigma,*}(z)-w_{\tau,*}(z)=w_{\sigma^\tau,*^\tau}(z^\tau)$, we now argue that $w_{\sigma,*}(z)-w_{\tau,*}(z)$ is an element of $N_{\sigma^\tau,\mathbb{R}}$ and is a solution of the equations defining $w_{\sigma^\tau,*^\tau}(z^\tau)$:
\begin{equation}\label{eq:definingw2}
v*^\tau u_\eta=z_\eta^\tau\;\;\;\text{ for all }\;\;\;\eta\in\sigma^\tau(1).
\end{equation}
To check that $w_{\sigma,*}(z)-w_{\tau,*}(z)\in N_{\sigma^\tau,\mathbb{R}}$, we start by observing that we can write
\[
w_{\sigma,*}(z)=\sum_{\rho\in\sigma(1)}a_{\rho}\;u_\rho
\]
for some values $a_{\rho}\in\mathbb{R}$, in which case
\begin{align*}
w_{\sigma,*}(z)-w_{\tau,*}(z)&=\mathrm{pr}^\tau(w_{\sigma,*}(z))\\
&=\sum_{\rho\in\sigma(1)\setminus\tau(1)}a_{\rho}\;\mathrm{pr}^\tau(u_\rho)\\
&=\sum_{\eta\in\sigma^\tau(1)}a_{\hat\eta}\;u_{\eta},
\end{align*}
where the first equality uses \eqref{eq:projectingvertices}, the second uses that $\mathrm{pr}^\tau$ vanishes on $N_{\tau,\mathbb{R}}$, and the third uses that the rays of $\sigma^\tau$ are in natural bijection with $\sigma(1)\setminus\tau(1)$.
Lastly, we peel back the definitions to check that $w_{\sigma,*}(z)-w_{\tau,*}(z)$ is a solution of Equations~\eqref{eq:definingw2}:
\begin{align*}
(w_{\sigma,*}(z)-w_{\tau,*}(z))*^\tau u_\eta&= (w_{\sigma,*}(z)-w_{\tau,*}(z))*(u_{\hat\eta}-\mathrm{pr}_\tau(u_{\hat\eta}))\\
&=w_{\sigma,*}(z)*u_{\hat\eta}\;-\;w_{\tau,*}(z)*u_{\hat\eta}\;-\;\big(w_{\sigma,*}(z)-w_{\tau,*}(z)\big)*\mathrm{pr}_\tau(u_{\hat\eta})\\
&=z_{\hat\eta}-w_{\tau,*}(z)*u_{\hat\eta}\\
&=z_\eta^\tau,
\end{align*}
where the first equality uses the orthogonal decomposition of $u_{\hat\eta}$ and the fact that $*^\tau$ is just the restriction of $*$, the second equality uses linearity of the inner product, and the third equality uses the definition of $w_{\sigma,*}(z)$ along with the fact that the vectors $\mathrm{pr}_\tau(u_{\hat\eta})$ and $w_{\sigma,*}(z)-w_{\tau,*}(z)=\mathrm{pr}^\tau(w_{\sigma,*}(z))$ are in orthogonal subspaces.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:facesarenormalcomplexes}]
To prove the first statement in the cubical setting, assume that $z\in\mathbb{R}^{\Sigma(1)}$ is cubical. This means that, for every $\sigma\in\Sigma$, we can write
\[
w_{\sigma,*}(z)=\sum_{\rho\in\sigma(1)}a_{\rho} u_\rho
\]
for some positive values $a_{\rho}\in\mathbb{R}_{>0}$. Consider any cone of $\Sigma^\tau$, which we can write as $\sigma^\tau$ with $\tau\preceq\sigma$. Applying the lemma, we then see that
\begin{align*}
w_{\sigma^\tau,*^\tau}(z^\tau)&=\mathrm{pr}^\tau(w_{\sigma,*}(z))\\
&=\sum_{\rho\in\sigma(1)\setminus\tau(1)}a_{\rho}\; \mathrm{pr}^\tau(u_\rho)\\
&=\sum_{\eta\in\sigma^\tau(1)}a_{\hat\eta}\; u_\eta.
\end{align*}
This shows that $w_{\sigma^\tau,*^\tau}(z^\tau)$ can be written as a positive combination of the ray generators of $\sigma^\tau$, proving that $z^\tau\in\operatorname{Cub}(\Sigma^\tau,*^\tau)$. The proof in the pseudocubical setting is identical but with ``positive'' replaced by ``nonnegative.''
To prove that
\[
\mathrm{F}^\tau(C_{\Sigma,*}(z))=C_{\Sigma^\tau,*^\tau}(z^\tau),
\]
it suffices to identify the maximal polytopes in these complexes. In other words, we must prove that, for every $\sigma\in\N_\tau\Sigma(d)$, we have
\begin{equation}\label{eq:polytranslate}
\mathrm{F}^\tau(P_{\sigma,*}(z))=P_{\sigma^\tau,*^\tau}(z^\tau).
\end{equation}
To prove \eqref{eq:polytranslate}, we analyze the vertices of these polytopes.
By Proposition~\ref{prop:normalcomplexprelims}, the vertices of $P_{\sigma,*}(z)$ are $\{w_{\pi,*}(z)\mid \pi\preceq\sigma\}$. Since
\[
F_\tau(P_{\sigma,*}(z))=P_{\sigma,*}(z)\cap\bigcap_{\rho\in\tau(1)}H_{\rho,*}(z),
\]
it follows that the vertices of $F_\tau(P_{\sigma,*}(z))$ are
\[
\{w_{\pi,*}(z)\mid \pi\preceq\sigma\text{ and }w_{\pi,*}(z)*u_\rho=z_\rho\text{ for all }\rho\in\tau(1)\}.
\]
If a cone $\pi\preceq \sigma$ satisfies $w_{\pi,*}(z)*u_\rho=z_\rho$ for all $\rho\in\tau(1)$, then the definition of the $w$-vectors implies that $w_{\pi,*}(z)=w_{\pi\cup\tau,*}(z)$, and it follows that the vertices of $F_\tau(P_{\sigma,*}(z))$ are
\[
\mathrm{Vert}\big(F_\tau(P_{\sigma,*}(z))\big)=\{w_{\pi,*}(z)\mid \tau\preceq\pi\preceq\sigma\}.
\]
Upon translating by $w_{\tau,*}(z)$ to get from $F_\tau(P_{\sigma,*}(z))$ to $F^\tau(P_{\sigma,*}(z))$, we see that
\begin{align*}
\mathrm{Vert}\big(F^\tau(P_{\sigma,*}(z))\big)&=\{w_{\pi,*}(z)-w_{\tau,*}(z)\mid \tau\preceq\pi\preceq\sigma\}\\
&=\{w_{\pi^\tau,*^\tau}(z^\tau)\mid \pi^\tau\preceq\sigma^\tau\}\\
&=\mathrm{Vert}\big(P_{\sigma^\tau,*^\tau}(z^\tau)\big),
\end{align*}
where the second equality is an application of Lemma~\ref{lemma:projectingws} and the third is an application of Proposition~\ref{prop:normalcomplexprelims}. Having matched the vertices of the polytopes in \eqref{eq:polytranslate}, the equality of polytopes then follows.
\end{proof}
The importance of Proposition~\ref{prop:facesarenormalcomplexes} is that it allows us to endow each of the faces of a normal complex with the structure of a normal complex, and in particular, it then allows us to compute (mixed) volumes of faces. More specifically, if $\omega:\Sigma(d)\rightarrow\mathbb{R}_{>0}$ is a weight function, then we obtain a weight function $\omega^\tau:\Sigma^\tau(d^\tau)\rightarrow\mathbb{R}_{>0}$ defined by $\omega^\tau(\sigma^\tau)=\omega(\sigma)$ for all $\sigma\in\Sigma(d)$ with $\tau\preceq\sigma$. The volume of the face $\mathrm{F}^\tau(C_{\Sigma,*}(z))$ weighted by $\omega$ is
\[
\operatorname{Vol}_{\Sigma^\tau,\omega^\tau,*^\tau}(z^\tau).
\]
Similarly, the mixed volume of the faces $\mathrm{F}^\tau(C_{\Sigma,*}(z_1)),\dots,\mathrm{F}^\tau(C_{\Sigma,*}(z_{d^\tau}))$ weighted by $\omega$ is
\[
\operatorname{MVol}_{\Sigma^\tau,\omega^\tau,*^\tau}(z_1^\tau,\dots,z_{d^\tau}^\tau).
\]
In the next two subsections, we use these concepts to prove fundamental results relating (mixed) volumes of normal complexes to the (mixed) volumes of their facets. In making arguments using mixed volumes, it will be useful to consider facets of facets; as such, we will need the next result, which asserts that a face of a face of a normal complex is a face of the original normal complex.
\begin{proposition}\label{prop:faceofaface}
Let $\Sigma\subseteq N_\mathbb{R}$ be a simplicial $d$-fan, $*\in\mathrm{Inn}(N_\mathbb{R})$ an inner product, and $z\in\overline{\Cub}(\Sigma,*)$ a pseudocubical value. If $\tau,\pi\in\Sigma$ with $\tau\preceq\pi$, then
\[
\mathrm{F}^{\pi^\tau}(\mathrm{F}^\tau(C_{\Sigma,*}(z)))=\mathrm{F}^\pi(C_{\Sigma,*}(z)).
\]
\end{proposition}
\begin{proof}
By Proposition~\ref{prop:facesarenormalcomplexes}, the claim in this proposition is equivalent to
\[
\mathrm{F}^{\pi^\tau}(C_{\Sigma^\tau,*^\tau}(z^\tau))=C_{\Sigma^\pi,*^\pi}(z^\pi).
\]
It suffices to match the maximal polytopes in these complexes, so we must prove:
\begin{equation}\label{eq:faceofafacepoly}
\mathrm{F}^{\pi^\tau}(P_{\sigma^\tau,*^\tau}(z^\tau))=P_{\sigma^\pi,*^\pi}(z^\pi)\;\;\;\text{ for all }\;\;\;\sigma\in\Sigma(d)\;\;\;\text{ with }\;\;\;\tau\preceq\sigma.
\end{equation}
The vertices of the polytope in the left-hand side of \eqref{eq:faceofafacepoly} are
\[
\{w_{\mu^\tau,*^\tau}(z^\tau)-w_{\pi^\tau,*^\tau}(z^\tau)\mid \pi^\tau\preceq\mu^\tau\preceq \sigma^\tau \}
\]
while the vertices in the right-hand side of \eqref{eq:faceofafacepoly} are
\[
\{w_{\mu^\pi,*^\pi}(z^\pi)\mid \mu^\pi\preceq\sigma^\pi\}.
\]
Notice that both sets of vertices are indexed by $\mu\in\Sigma$ with $\pi\preceq\mu\preceq\sigma$, and we have
\begin{align*}
w_{\mu^\tau,*^\tau}(z^\tau)-w_{\pi^\tau,*^\tau}(z^\tau)&=\mathrm{pr}^{\pi^\tau}(w_{\mu^\tau,*^\tau}(z^\tau))\\
&=\mathrm{pr}^{\pi^\tau}(\mathrm{pr}^\tau(w_{\mu,*}(z)))\\
&=\mathrm{pr}^\pi(w_{\mu,*}(z))\\
&=w_{\mu^\pi,*^\pi}(z^\pi),
\end{align*}
where the first, second, and fourth equalities are Lemma~\ref{lemma:projectingws}, while the third is the observation that the projection $\mathrm{pr}^\pi$ can be broken up into two steps: $\mathrm{pr}^\pi=\mathrm{pr}^{\pi^\tau}\circ\mathrm{pr}^\tau$. Thus, the vertices of the polytopes in \eqref{eq:faceofafacepoly} match up, and the proposition follows.
\end{proof}
\subsection{Volumes and facets}
This subsection is devoted to proving the following result, which relates the volume of a normal complex to the volumes of its facets.
\begin{proposition}\label{prop:pyramidvolume}
Let $\Sigma\subseteq N_\mathbb{R}$ be a simplicial $d$-fan with weight function $\omega:\Sigma(d)\rightarrow\mathbb{R}_{>0}$, let $*\in\mathrm{Inn}(N_\mathbb{R})$ be an inner product, and let $z\in\overline{\Cub}(\Sigma,*)$ be a pseudocubical value. Then
\[
\operatorname{Vol}_{\Sigma,\omega,*}(z)=\sum_{\rho\in\Sigma(1)}z_\rho\operatorname{Vol}_{\Sigma^\rho,\omega^\rho,*^\rho}(z^\rho).
\]
\end{proposition}
The sum in the right-hand side of the proposition corresponds to decomposing the normal complex into pyramids over its facets, as depicted in the next image.
\begin{center}
\tdplotsetmaincoords{68}{55}
\begin{tikzpicture}[scale=1.1,tdplot_main_coords]
\draw[draw=green!20, fill=green!20, fill opacity=.8] (0,0,0) -- (0, 1.6, 0) -- (-1.6, 1.6, -1.6) -- (-1.6, 0, -1.6) -- cycle;
\draw[] (0,0,0) -- (0, 1.6, 0) -- (-1.6, 1.6, -1.6) -- (-1.6, 0, -1.6) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.8] (0,0,0) -- (0, 1.6, 0) -- (1, 1.6, 1) -- (1.2, 1.2, 1.2) -- cycle;
\draw[] (0,0,0) -- (0, 1.6, 0) -- (1, 1.6, 1) -- (1.2, 1.2, 1.2) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.8] (0,0,0) -- (0, 0, 1.6) -- (1, 1, 1.6) -- (1.2, 1.2, 1.2) -- cycle;
\draw[] (0,0,0) -- (0, 0, 1.6) -- (1, 1, 1.6) -- (1.2, 1.2, 1.2) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.8] (0,0,0) -- (1.6, 0, 0) -- (1.6, 1, 1) -- (1.2, 1.2, 1.2) -- cycle;
\draw[] (0,0,0) -- (1.6, 0, 0) -- (1.6, 1, 1) -- (1.2, 1.2, 1.2) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.8] (0,0,0) -- (0, 0, 1.6) -- (-1.6, -1.6, 1.6) -- (-1.6, -1.6, 0) -- cycle;
\draw[] (0,0,0) -- (0, 0, 1.6) -- (-1.6, -1.6, 1.6) -- (-1.6, -1.6, 0) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.8] (0,0,0) -- (1.6, 0, 0) -- (1.6, -1.6, -1.6) -- (0, -1.6, -1.6) -- cycle;
\draw[] (0,0,0) -- (1.6, 0, 0) -- (1.6, -1.6, -1.6) -- (0, -1.6, -1.6) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.8] (0,0,0) -- (-1.6, -1.6, 0) -- (-1.6, -1.6, -.4) -- (-1.2, -1.2, -1.2) -- cycle;
\draw[] (0,0,0) -- (-1.6, -1.6, 0) -- (-1.6, -1.6, -.4) -- (-1.2, -1.2, -1.2) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.8] (0,0,0) -- (-1.6, 0, -1.6) -- (-1.6, -.4, -1.6) -- (-1.2, -1.2, -1.2) -- cycle;
\draw[] (0,0,0) -- (-1.6, 0, -1.6) -- (-1.6, -.4, -1.6) -- (-1.2, -1.2, -1.2) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.8] (0,0,0) -- (0, -1.6, -1.6) -- (-.4, -1.6, -1.6) -- (-1.2, -1.2, -1.2) -- cycle;
\draw[] (0,0,0) -- (0, -1.6, -1.6) -- (-.4, -1.6, -1.6) -- (-1.2, -1.2, -1.2) -- cycle;
\end{tikzpicture}
\hspace{50bp}
\begin{tikzpicture}[scale=.85,tdplot_main_coords]
\draw[draw=green!20, fill=green!20, fill opacity=.8] (0,0.5,0) -- (0, 2.1, 0) -- (-1.6, 2.1, -1.6) -- cycle;
\draw[] (0,0.5,0) -- (0, 2.1, 0) -- (-1.6, 2.1, -1.6) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.8] (0,0.5,0) -- (0, 2.1, 0) -- (1, 2.1, 1) -- cycle;
\draw[] (0,0.5,0) -- (0, 2.1, 0) -- (1, 2.1, 1) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.8] (.5,.5,.5) -- (1.5, 1.5, 2.1) -- (1.7, 1.7, 1.7) -- cycle;
\draw[] (0.5,0.5,0.5) -- (1.5, 1.5, 2.1) -- (1.7, 1.7, 1.7) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.8] (0.5,0.5,0.5) -- (1.5, 2.1, 1.5) -- (1.7, 1.7, 1.7) -- cycle;
\draw[] (0.5,0.5,0.5) -- (1.5, 2.1, 1.5) -- (1.7, 1.7, 1.7) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.8] (0.5,0.5,0.5) -- (2.1, 1.5, 1.5) -- (1.7, 1.7, 1.7) -- cycle;
\draw[] (0.5,0.5,0.5) -- (2.1, 1.5, 1.5) -- (1.7, 1.7, 1.7) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.8] (0,0,0.5) -- (0, 0, 2.1) -- (1, 1, 2.1) -- cycle;
\draw[] (0,0,0.5) -- (0, 0, 2.1) -- (1, 1, 2.1) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.8] (0,0,0.5) -- (0, 0, 2.1) -- (-1.6, -1.6, 2.1) -- cycle;
\draw[] (0,0,0.5) -- (0, 0, 2.1) -- (-1.6, -1.6, 2.1) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.8] (0.5,0,0) -- (2.1, 0, 0) -- (2.1, 1, 1) -- cycle;
\draw[] (0.5,0,0) -- (2.1, 0, 0) -- (2.1, 1, 1) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.8] (0.5,0,0) -- (2.1, 0, 0) -- (2.1, -1.6, -1.6) -- cycle;
\draw[] (0.5,0,0) -- (2.1, 0, 0) -- (2.1, -1.6, -1.6) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.8] (-0.5,-0.5,0) -- (-2.1, -2.1, 1.6) -- (-2.1, -2.1, 0) -- cycle;
\draw[] (-0.5,-0.5,0) -- (-2.1, -2.1, 1.6) -- (-2.1, -2.1, 0) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.8] (-0.5,-0.5,0) -- (-2.1, -2.1, 0) -- (-2.1, -2.1, -.4) -- cycle;
\draw[] (-0.5,-0.5,0) -- (-2.1, -2.1, 0) -- (-2.1, -2.1, -.4) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.8] (-0.5,-0.5,-0.5) -- (-2.1, -2.1, -.9) -- (-1.7, -1.7, -1.7) -- cycle;
\draw[] (-0.5,-0.5,-0.5) -- (-2.1, -2.1, -.9) -- (-1.7, -1.7, -1.7) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.8] (-0.5,-0.5,-0.5) -- (-2.1, -.9, -2.1) -- (-1.7, -1.7, -1.7) -- cycle;
\draw[] (-0.5,-0.5,-0.5) -- (-2.1, -.9, -2.1) -- (-1.7, -1.7, -1.7) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.8] (-0.5,-0.5,-0.5) -- (-.9, -2.1, -2.1) -- (-1.7, -1.7, -1.7) -- cycle;
\draw[] (-0.5,-0.5,-0.5) -- (-.9, -2.1, -2.1) -- (-1.7, -1.7, -1.7) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.8] (-0.5,0,-0.5) -- (-2.1, 1.6, -2.1) -- (-2.1, 0, -2.1) -- cycle;
\draw[] (-0.5,0,-0.5) -- (-2.1, 1.6, -2.1) -- (-2.1, 0, -2.1) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.8] (-0.5,0,-0.5) -- (-2.1, 0, -2.1) -- (-2.1, -.4, -2.1) -- cycle;
\draw[] (-0.5,0,-0.5) -- (-2.1, 0, -2.1) -- (-2.1, -.4, -2.1) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.8] (0,-0.5,-0.5) -- (1.6, -2.1, -2.1) -- (0, -2.1, -2.1) -- cycle;
\draw[] (0,-0.5,-0.5) -- (1.6, -2.1, -2.1) -- (0, -2.1, -2.1) -- cycle;
\draw[draw=green!20, fill=green!20, fill opacity=.8] (0,-0.5,-0.5) -- (0, -2.1, -2.1) -- (-.4, -2.1, -2.1) -- cycle;
\draw[] (0,-0.5,-0.5) -- (0, -2.1, -2.1) -- (-.4, -2.1, -2.1) -- cycle;
\end{tikzpicture}
\end{center}
Proposition~\ref{prop:pyramidvolume} follows from the following lemma relating the volume function $\operatorname{Vol}_\sigma$ on $N_{\sigma,\mathbb{R}}$ to the volume function $\operatorname{Vol}_{\sigma^\rho}$ on the hyperplane $N_{\sigma^\rho,\mathbb{R}}\subseteq N_{\sigma,\mathbb{R}}$.
\begin{lemma}\label{lemma:pyramidvolume}
Under the hypotheses of Proposition~\ref{prop:pyramidvolume}, let $\sigma\in\Sigma(d)$ and $\rho\in\sigma(1)$. For any polytope $P\subseteq N_{\sigma^\rho,\mathbb{R}}$ and $a\in\mathbb{R}_{\geq 0}$, we have
\[
\operatorname{Vol}_\sigma\big(\mathrm{conv}(0,P+au_\rho )\big)=a(u_\rho*u_\rho)\cdot\operatorname{Vol}_{\sigma^\rho}(P).
\]
\end{lemma}
For intuition, we note that the polytope $\mathrm{conv}(0,P+au_\rho)$ appearing in the left-hand side of Lemma~\ref{lemma:pyramidvolume} is obtained from the polytope $P$ by first translating $P$ along the ray $\rho$, which is orthogonal to $N_{\sigma^\rho,\mathbb{R}}$, then taking the convex hull with the origin, the result of which can be thought of as a pyramid with $P$ as base and the origin as apex. The right-hand side can then be thought of as a ``base-times-height'' formula for the volume of this pyramid, where the ``height'' of the vector $au_\rho$ is $a(u_\rho*u_\rho)$. We now make this informal discussion precise.
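As a concrete check, take $N_\mathbb{R}=\mathbb{R}^2$ with the dot product and let $\sigma$ be the cone spanned by $u_\eta=(1,0)$ and $u_\rho=(1,1)$, as in the figures below. The dual basis is $v_\eta=(1,-1)$ and $v_\rho=(0,1)$, so $\operatorname{Vol}_\sigma$ is twice the Euclidean area (the fundamental simplex $\mathrm{conv}(0,v_\eta,v_\rho)$ has Euclidean area $\tfrac{1}{2}$), and on the line $N_{\sigma^\rho,\mathbb{R}}=\mathbb{R}\,(1,-1)$ the segment $\mathrm{conv}(0,v_\eta)$ has unit volume. Taking $P=\mathrm{conv}(0,v_\eta)$ and $a\geq 0$, the pyramid $\mathrm{conv}(0,P+au_\rho)$ is the triangle with vertices $0$, $(a,a)$, and $(1+a,a-1)$, whose Euclidean area is $a$, and therefore
\[
\operatorname{Vol}_\sigma\big(\mathrm{conv}(0,P+au_\rho)\big)=2a=a\,(u_\rho*u_\rho)\cdot\operatorname{Vol}_{\sigma^\rho}(P),
\]
as the lemma asserts.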
\begin{proof}[Proof of Lemma~\ref{lemma:pyramidvolume}]
Let $\{v_\eta\mid \eta\in\sigma(1)\}\subseteq M_\sigma$ be the dual basis of $\{u_\eta\mid\eta\in\sigma(1)\}\subseteq N_\sigma$, defined uniquely by the equations
\[
v_\eta*u_\mu=\begin{cases}
1 & \mu=\eta\\
0 & \mu\neq\eta.
\end{cases}
\]
Recall that each ray generator of $\sigma^\rho$ is of the form $\mathrm{pr}^\rho(u_\eta)$ for a unique $\eta\in\sigma(1)\setminus\{\rho\}$; we claim that the dual vector of $\mathrm{pr}^\rho(u_\eta)$ in $M_{\sigma^\rho}$ is $v_\eta$---in other words, the dual vector of $\mathrm{pr}^\rho(u_\eta)$ is the same as the dual vector of $u_\eta$. To verify this, note that, for any $\eta,\mu\in\sigma(1)\setminus\{\rho\}$, we have
\begin{align*}
\mathrm{pr}^\rho(u_\eta)\,*^\rho\, v_\mu&=(u_\eta-\mathrm{pr}_\rho(u_\eta))*v_\mu\\
&=u_\eta*v_\mu\\
&=\begin{cases}
1 & \mu=\eta\\
0 & \mu\neq\eta,
\end{cases}
\end{align*}
where the first equality uses the decomposition of $u_\eta$ into its orthogonal components, along with the fact that $*^\rho$ is just the restriction of $*$, and the second equality uses that $\mathrm{pr}_\rho(u_\eta)$ is a multiple of $u_\rho$, along with $u_\rho*v_\mu=0$.
Using these dual bases, we define simplices in each of the vector spaces $N_{\sigma,\mathbb{R}}$ and $N_{\sigma^\rho,\mathbb{R}}$ by
\[
\Delta(\sigma)=\mathrm{conv}(0,\{v_\eta\mid\eta\in\sigma(1)\})\subseteq N_{\sigma,\mathbb{R}}
\]
and
\[
\Delta(\sigma^\rho)=\mathrm{conv}(0,\{v_\eta\mid\eta\in\sigma(1)\setminus\{\rho\}\})\subseteq N_{\sigma^\rho,\mathbb{R}}.
\]
By our convention on how volumes are normalized in $N_{\sigma,\mathbb{R}}$ and $N_{\sigma^\rho,\mathbb{R}}$, along with our verification above that $\{v_\eta\mid\eta\in\sigma(1)\setminus\{\rho\}\}$ is the dual basis of the ray generators of $\sigma^\rho$, these simplices have unit volume:
\[
\operatorname{Vol}_\sigma(\Delta(\sigma))=\operatorname{Vol}_{\sigma^\rho}(\Delta(\sigma^\rho))=1.
\]
Notice that $\Delta(\sigma^\rho)$ is a facet of $\Delta(\sigma)$ and we can write $\Delta(\sigma)=\mathrm{conv}(v_\rho,\Delta(\sigma^\rho))$. If we project the vertex $v_\rho$ of $\Delta(\sigma)$ onto the line spanned by $\rho$, we obtain a new simplex
\[
\Delta_1(\sigma)=\mathrm{conv}(\mathrm{pr}_\rho(v_\rho),\Delta(\sigma^\rho)).
\]
Since the displacement $v_\rho-\mathrm{pr}_\rho(v_\rho)=\mathrm{pr}^\rho(v_\rho)$ is parallel to the facet $\Delta(\sigma^\rho)$, it follows that
\[
\operatorname{Vol}_\sigma(\Delta_1(\sigma))=\operatorname{Vol}_\sigma(\Delta(\sigma))=\operatorname{Vol}_{\sigma^\rho}(\Delta(\sigma^\rho)).
\]
Now define a new simplex by sliding the vertex $\mathrm{pr}_\rho(v_\rho)$ along $\rho$ to the new vertex $au_\rho$:
\[
\Delta_2(\sigma)=\mathrm{conv}(a u_\rho,\Delta(\sigma^\rho)).
\]
By the standard projection formula, we have $\mathrm{pr}_\rho(v_\rho)=\frac{u_\rho}{u_\rho*u_\rho}$, from which we see that $\Delta_2(\sigma)$ is obtained from $\Delta_1(\sigma)$ by scaling the height of the vertex $\mathrm{pr}_\rho(v_\rho)$ by a factor of $a(u_\rho*u_\rho)$. It follows that the volume also scales by $a(u_\rho*u_\rho)$:
\[
\operatorname{Vol}_\sigma(\Delta_2(\sigma))=a(u_\rho*u_\rho)\cdot\operatorname{Vol}_\sigma(\Delta_1(\sigma))=a(u_\rho*u_\rho)\cdot\operatorname{Vol}_{\sigma^\rho}(\Delta(\sigma^\rho)).
\]
More concisely, we have proved that
\begin{equation}\label{eq:pyramidy}
\operatorname{Vol}_\sigma\big(\mathrm{conv}(a u_\rho,P)\big)=a(u_\rho*u_\rho)\cdot\operatorname{Vol}_{\sigma^\rho}(P)
\end{equation}
when $P=\Delta(\sigma^\rho)$.
As a visual aid, we have depicted below the sequence of polytopes from the above discussion in the specific setting of a two-dimensional cone $\sigma$, which we have visualized in $\mathbb{R}^2$ with the usual dot product.
\begin{center}
\begin{tikzpicture}[scale=1.5]
\draw[draw=blue!10, fill=blue!10, fill opacity=.8] (0,0) -- (1.9,0) -- (1.9,1.9) -- cycle;
\node at (1,0) {$\bullet$};
\node at (1,1) {$\bullet$};
\node at (0,0) {$\bullet$};
\node[left] at (0,-.1) {$0$};
\node at (1.5,.65) {$\sigma$};
\draw[->, thick, black] (0,0) -- (2,0);
\draw[->, thick, black] (0,0) -- (2,2);
\node[right] at (2,0) {$\eta$};
\node[right] at (2,2) {$\rho$};
\node[below] at (1,0) {$u_\eta$};
\node[above] at (.9,1) {$u_\rho$};
\draw[thick,black] (-1.2,1.2) -- (1.2,-1.2);
\node[above] at (-1.2,1.2) {$N_{\sigma^\rho,\mathbb{R}}$};
\node at (0,1) {$\bullet$};
\node[above] at (0,1) {$v_\rho$};
\node at (1,-1) {$\bullet$};
\node[right] at (1,-1) {$v_\eta$};
\end{tikzpicture}
\hspace{50bp}
\begin{tikzpicture}[scale=1.5]
\draw[thick,black, fill=green!20, fill opacity=.8] (0,0) -- (1,-1) -- (0,1) -- cycle;
\draw[->] (0,0) -- (2,2);
\node at (0,0) {$\bullet$};
\node[left] at (0,-.1) {$0$};
\node[right] at (2,2) {$\rho$};
\draw[thick,black] (-1.2,1.2) -- (1.2,-1.2);
\node[above] at (-1.2,1.2) {$N_{\sigma^\rho,\mathbb{R}}$};
\node at (0,1) {$\bullet$};
\node[above] at (0,1) {$v_\rho$};
\node at (1,-1) {$\bullet$};
\node[right] at (1,-1) {$v_\eta$};
\draw[line width=2bp] (0,0) -- (1,-1);
\node at (1.1,0) {$\Delta(\sigma)$};
\draw[thick,->] (.8,.05) to [bend right=20] (.25,0);
\node at (.3,-1.1) {$\Delta(\sigma^\rho)$};
\draw[thick,->] (.2,-.9) to [bend left=20] (.47,-.53);
\end{tikzpicture}
\begin{tikzpicture}[scale=1.5]
\draw[thick,black, fill=green!20, fill opacity=.8] (0,0) -- (1,-1) -- (.5,.5) -- cycle;
\draw[->] (0,0) -- (2,2);
\node at (0,0) {$\bullet$};
\node[left] at (0,-.1) {$0$};
\node[right] at (2,2) {$\rho$};
\draw[thick,black] (-1.2,1.2) -- (1.2,-1.2);
\node[above] at (-1.2,1.2) {$N_{\sigma^\rho,\mathbb{R}}$};
\node at (.5,.5) {$\bullet$};
\node[above] at (.2,.5) {$\frac{u_\rho}{u_\rho*u_\rho}$};
\node at (1,-1) {$\bullet$};
\node[right] at (1,-1) {$v_\eta$};
\draw[line width=2bp] (0,0) -- (1,-1);
\node at (1.3,0) {$\Delta_1(\sigma)$};
\draw[thick,->] (.9,.05) to [bend right=20] (.35,0);
\node at (.3,-1.1) {$\Delta(\sigma^\rho)$};
\draw[thick,->] (.2,-.9) to [bend left=20] (.47,-.53);
\end{tikzpicture}
\hspace{50bp}
\begin{tikzpicture}[scale=1.5]
\draw[thick,black, fill=green!20, fill opacity=.8] (0,0) -- (1,-1) -- (1.7,1.7) -- cycle;
\draw[->] (0,0) -- (2,2);
\node at (0,0) {$\bullet$};
\node[left] at (0,-.1) {$0$};
\node[right] at (2,2) {$\rho$};
\draw[thick,black] (-1.2,1.2) -- (1.2,-1.2);
\node[above] at (-1.2,1.2) {$N_{\sigma^\rho,\mathbb{R}}$};
\node at (1.7,1.7) {$\bullet$};
\node[right] at (1.7,1.5) {$au_\rho$};
\node at (1,-1) {$\bullet$};
\node[right] at (1,-1) {$v_\eta$};
\draw[line width=2bp] (0,0) -- (1,-1);
\node at (.8,.1) {$\Delta_2(\sigma)$};
\node at (.3,-1.1) {$\Delta(\sigma^\rho)$};
\draw[thick,->] (.2,-.9) to [bend left=20] (.47,-.53);
\end{tikzpicture}
\end{center}
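To make \eqref{eq:pyramidy} concrete in the two-dimensional setting of the figure, take $N_{\sigma,\mathbb{R}}=\mathbb{R}^2$ with the usual dot product, $u_\eta=(1,0)$, and $u_\rho=(1,1)$. The dual basis is then $v_\eta=(1,-1)$ and $v_\rho=(0,1)$, and one computes
\[
\mathrm{pr}_\rho(v_\rho)=\frac{u_\rho}{u_\rho*u_\rho}=\left(\tfrac12,\tfrac12\right)
\;\;\;\text{ and }\;\;\;
\mathrm{pr}^\rho(u_\eta)=\left(\tfrac12,-\tfrac12\right),
\]
the latter having dual vector $v_\eta$ in $N_{\sigma^\rho,\mathbb{R}}$, since $\left(\tfrac12,-\tfrac12\right)\cdot(1,-1)=1$. The pyramid $\mathrm{conv}(au_\rho,\Delta(\sigma^\rho))=\mathrm{conv}\big((a,a),(0,0),(1,-1)\big)$ has Euclidean area $a$, while $\Delta(\sigma)=\mathrm{conv}\big((0,0),(1,-1),(0,1)\big)$ has Euclidean area $\tfrac12$; since, by the normalization convention above, $\operatorname{Vol}_\sigma$ is the rescaling of Euclidean area under which $\Delta(\sigma)$ has unit volume, we obtain $\operatorname{Vol}_\sigma\big(\mathrm{conv}(au_\rho,\Delta(\sigma^\rho))\big)=2a$, in agreement with the right-hand side $a(u_\rho*u_\rho)\cdot\operatorname{Vol}_{\sigma^\rho}(\Delta(\sigma^\rho))=2a\cdot 1$ of \eqref{eq:pyramidy}.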
We now extend \eqref{eq:pyramidy} to any simplex $P\subseteq N_{\sigma^\rho,\mathbb{R}}$. To do so, first note that a simplex $P$ can be obtained from the specific simplex $\Delta(\sigma^\rho)$ by a composition of a translation and a linear transformation on $N_{\sigma^\rho,\mathbb{R}}$. Translating $P$ within $N_{\sigma^\rho,\mathbb{R}}$ does not affect the volume on either side of \eqref{eq:pyramidy}. Given a linear transformation $T$, on the other hand, we can extend it to a linear transformation $\widehat T$ on $N_{\sigma,\mathbb{R}}$ by simply fixing the vector $u_\rho$, in which case we have
\[
\widehat T(\mathrm{conv}(a u_\rho,P))=\mathrm{conv}(a u_\rho,T(P)).
\]
Since $\det(\widehat T)=\det(T)$ and linear transformations scale volumes by the absolute values of their determinants, we conclude that the equality in \eqref{eq:pyramidy} is preserved upon taking linear transforms of $P$:
\begin{align*}
\operatorname{Vol}_\sigma\big(\mathrm{conv}(a u_\rho,T(P))\big)&=\operatorname{Vol}_\sigma\big(\widehat T(\mathrm{conv}(a u_\rho,P))\big)\\
&=|\det(\widehat T)|\operatorname{Vol}_\sigma\big(\mathrm{conv}(a u_\rho,P)\big)\\
&=|\det(T)|\cdot a(u_\rho*u_\rho)\cdot\operatorname{Vol}_{\sigma^\rho}(P)\\
&=a(u_\rho*u_\rho)\cdot\operatorname{Vol}_{\sigma^\rho}(T(P)).
\end{align*}
Knowing that \eqref{eq:pyramidy} holds for simplices, we extend it to arbitrary polytopes $P\subseteq N_{\sigma^\rho,\mathbb{R}}$ by triangulating $P$ and applying \eqref{eq:pyramidy} to each simplex in the triangulation. The lemma then follows from \eqref{eq:pyramidy} along with the observation that $\mathrm{conv}(a u_\rho,P)$ is the image of $\mathrm{conv}(0,P+au_\rho)$ under a translation followed by a reflection across $N_{\sigma^\rho,\mathbb{R}}$, so the two polytopes have the same volume.
\end{proof}
We now use Lemma~\ref{lemma:pyramidvolume} to prove Proposition~\ref{prop:pyramidvolume}.
\begin{proof}[Proof of Proposition~\ref{prop:pyramidvolume}]
For each top-dimensional cone $\sigma\in\Sigma(d)$ and $\rho\in\sigma(1)$, consider the polytope face $\mathrm{F}_\rho(P_{\sigma,*}(z))\subseteq P_{\sigma,*}(z)$. By definition, we have
\[
\mathrm{F}_\rho(P_{\sigma,*}(z))=\mathrm{F}^\rho(P_{\sigma,*}(z))+w_{\rho,*}(z).
\]
Noting that $w_{\rho,*}(z)=\frac{z_\rho}{u_\rho*u_\rho}u_\rho$, Lemma~\ref{lemma:pyramidvolume} computes the volume of the pyramid $\mathrm{conv}(0,\mathrm{F}_\rho(P_{\sigma,*}(z)))$:
\begin{equation}\label{eq:pyramid!!}
\operatorname{Vol}_\sigma\big(\mathrm{conv}(0,\mathrm{F}_\rho(P_{\sigma,*}(z)))\big)=z_\rho\operatorname{Vol}_{\sigma^\rho}(\mathrm{F}^\rho(P_{\sigma,*}(z)))=z_\rho\operatorname{Vol}_{\sigma^\rho}(P_{\sigma^\rho,*^\rho}(z^\rho)),
\end{equation}
where the second equality is an application of \eqref{eq:polytranslate}.
Next, note that we can decompose each polytope $P_{\sigma,*}(z)$ into pyramids over the faces $\mathrm{F}_\rho(P_{\sigma,*}(z))$ with $\rho\in\sigma(1)$, implying that
\begin{equation}\label{eq:pyramid!}
\operatorname{Vol}_\sigma(P_{\sigma,*}(z))=\sum_{\rho\in\sigma(1)}\operatorname{Vol}_\sigma\big(\mathrm{conv}(0,\mathrm{F}_\rho(P_{\sigma,*}(z)))\big).
\end{equation}
We then compute:
\begin{align*}
\operatorname{Vol}_{\Sigma,\omega,*}(z)&=\sum_{\sigma\in\Sigma(d)}\omega(\sigma)\operatorname{Vol}_\sigma(P_{\sigma,*}(z))\\
&=\sum_{\sigma\in\Sigma(d)}\omega(\sigma)\sum_{\rho\in\sigma(1)}\operatorname{Vol}_\sigma\big(\mathrm{conv}(0,\mathrm{F}_\rho(P_{\sigma,*}(z)))\big)\\
&=\sum_{\sigma\in\Sigma(d)}\omega(\sigma)\sum_{\rho\in\sigma(1)}z_\rho\operatorname{Vol}_{\sigma^\rho}(P_{\sigma^\rho,*^\rho}(z^\rho))\\
&=\sum_{\rho\in\Sigma(1)}z_\rho\sum_{\sigma^\rho\in\Sigma^\rho(d-1)}\omega^\rho(\sigma^\rho)\operatorname{Vol}_{\sigma^\rho}(P_{\sigma^\rho,*^\rho}(z^\rho))\\
&=\sum_{\rho\in\Sigma(1)}z_\rho\operatorname{Vol}_{\Sigma^\rho,\omega^\rho,*^\rho}(z^\rho),
\end{align*}
where the first equality is the definition of $\operatorname{Vol}_{\Sigma,\omega,*}(z)$, the second and third are \eqref{eq:pyramid!} and \eqref{eq:pyramid!!}, respectively, the fourth follows from the definition of $\omega^\rho$ and the fact that cones in $\Sigma^\rho(d-1)$ are in bijection with the cones in $\Sigma(d)$ containing $\rho$ via $\sigma^\rho\leftrightarrow\sigma$, and the fifth is the definition of $\operatorname{Vol}_{\Sigma^\rho,\omega^\rho,*^\rho}(z^\rho)$.
\end{proof}
\subsection{Mixed volumes and facets}
The aim of this subsection is to enhance Proposition~\ref{prop:pyramidvolume} to the following more general statement about mixed volumes. See \cite[Lemma 5.1.5]{Schneider} for the analogous result in the classical setting of strongly isomorphic polytopes.
\begin{proposition}~\label{prop:pyramidmixed}
Let $\Sigma\subseteq N_\mathbb{R}$ be a simplicial $d$-fan with weight function $\omega:\Sigma(d)\rightarrow\mathbb{R}_{>0}$, let $*\in\mathrm{Inn}(N_\mathbb{R})$ be an inner product, and let $z_1,\dots,z_d\in\overline{\Cub}(\Sigma,*)$ be pseudocubical values. Then
\[
\operatorname{MVol}_{\Sigma,\omega,*}(z_1,\dots,z_d)=\sum_{\rho\in\Sigma(1)}z_{1,\rho}\operatorname{MVol}_{\Sigma^\rho,\omega^\rho,*^\rho}(z_2^\rho,\dots,z_d^\rho).
\]
\end{proposition}
\begin{proof}
We proceed by induction on $d$. If $d=1$, then mixed volumes are just volumes, in which case Proposition~\ref{prop:pyramidmixed} is a special case of Proposition~\ref{prop:pyramidvolume}. Assume now that $d>1$ and that Proposition~\ref{prop:pyramidmixed} holds in all dimensions less than $d$. Define
\[
F(z_1,\dots,z_d)=\sum_{\rho\in\Sigma(1)}z_{1,\rho}\operatorname{MVol}_{\Sigma^\rho,\omega^\rho,*^\rho}(z_2^\rho,\dots,z_d^\rho).
\]
To prove that $F=\operatorname{MVol}_{\Sigma,\omega,*}$, Proposition~\ref{prop:mvolchar} tells us that it suffices to prove that $F$ is (1) symmetric, (2) multilinear, and (3) normalized correctly with respect to volume; we check these properties in reverse order.
To check (3), we note that
\begin{align*}
F(z,\dots,z)&=\sum_{\rho\in\Sigma(1)}z_{\rho}\operatorname{MVol}_{\Sigma^\rho,\omega^\rho,*^\rho}(z^\rho,\dots,z^\rho)\\
&=\sum_{\rho\in\Sigma(1)}z_{\rho}\operatorname{Vol}_{\Sigma^\rho,\omega^\rho,*^\rho}(z^\rho)\\
&=\operatorname{Vol}_{\Sigma,\omega,*}(z),
\end{align*}
where the first equality is the definition of $F$, the second is Proposition~\ref{prop:mvolchar} Part (3), and the third is Proposition~\ref{prop:pyramidvolume}.
To check (2), there are two cases to consider: linearity in the first coordinate and linearity in every other coordinate. Linearity in the first coordinate follows quickly from the definition of $F$, while linearity in every other coordinate follows from Proposition~\ref{prop:mvolchar} Part (2) applied to $(\Sigma^\rho,*^\rho,\omega^\rho)$.
Finally, to check (1), we first note that Proposition~\ref{prop:mvolchar} Part (1) applied to $(\Sigma^\rho,*^\rho,\omega^\rho)$ implies that $F$ is symmetric in the entries $z_2,\dots,z_d$. Thus, it remains to prove that $F$ is invariant under transposing $z_1$ and $z_2$. To do so, we first apply the induction hypothesis to the mixed volumes appearing in the definition of $F$ to obtain
\begin{equation}\label{eq:mixedinduction}
F(z_1,\dots,z_d)=\sum_{\rho\in\Sigma(1)}z_{1,\rho}\sum_{\eta^\rho\in\Sigma^\rho(1)}z_{2,\eta^\rho}^\rho\operatorname{MVol}_{\Sigma^{\rho,\eta},\omega^{\rho,\eta},*^{\rho,\eta}}(z_3^{\rho,\eta},\dots,z_d^{\rho,\eta}),
\end{equation}
where, to avoid the proliferation of parentheses and superscripts, we have written, for example, $\Sigma^{\rho,\eta}$ as short-hand for $(\Sigma^\rho)^{\eta^\rho}$. Notice that the mixed volumes appearing in the right-hand side of \eqref{eq:mixedinduction} are mixed volumes associated to faces of faces. Proposition~\ref{prop:faceofaface} tells us that the $\eta^\rho$-face of the $\rho$-face of a normal complex is the same as the $\tau$-face of the original normal complex, where $\tau\in\Sigma(2)$ is the $2$-cone containing $\rho$ and $\eta$ as rays. Therefore,
\[
\operatorname{MVol}_{\Sigma^{\rho,\eta},\omega^{\rho,\eta},*^{\rho,\eta}}(z_3^{\rho,\eta},\dots,z_d^{\rho,\eta})=\operatorname{MVol}_{\Sigma^\tau,\omega^\tau,*^\tau}(z_3^\tau,\dots,z_d^\tau).
\]
Keeping in mind that each $2$-cone $\tau$ appears twice in \eqref{eq:mixedinduction}, once for each ordering of the rays, we have
\[
F(z_1,\dots,z_d)=\sum_{\tau\in\Sigma(2)\atop \tau(1)=\{\rho,\eta\}}(z_{1,\rho}z_{2,\eta^\rho}^\rho+z_{1,\eta}z_{2,\rho^\eta}^\eta)\operatorname{MVol}_{\Sigma^\tau,\omega^\tau,*^\tau}(z_3^\tau,\dots,z_d^\tau).
\]
Therefore, it remains to prove that $z_{1,\rho}z_{2,\eta^\rho}^\rho+z_{1,\eta}z_{2,\rho^\eta}^\eta$ is invariant under transposing $1$ and $2$. Computing directly from the definition of $z^\rho$, we have
\[
z_{1,\rho}z_{2,\eta^\rho}^\rho+z_{1,\eta}z_{2,\rho^\eta}^\eta=z_{1,\rho}\big(z_{2,\eta}-w_{\rho,*}(z_2)*u_\eta\big)+z_{1,\eta}\big(z_{2,\rho}-w_{\eta,*}(z_2)*u_\rho\big),
\]
from which we see that it suffices to prove that both
\[
z_{1,\rho}w_{\rho,*}(z_2)*u_\eta\;\;\;\text{ and }\;\;\;z_{1,\eta}w_{\eta,*}(z_2)*u_\rho
\]
are invariant under transposing $1$ and $2$. This invariance follows from the computations
\[
w_{\rho,*}(z_2)=\frac{z_{2,\rho}}{u_\rho*u_\rho}u_\rho\;\;\;\text{ and }\;\;\;w_{\eta,*}(z_2)=\frac{z_{2,\eta}}{u_\eta*u_\eta}u_\eta.\qedhere
\]
\end{proof}
The following analytic consequence of Proposition~\ref{prop:pyramidmixed} will be useful in our computations in the next section.
\begin{corollary}\label{cor:derivatives}
In addition to the hypotheses of Proposition~\ref{prop:pyramidmixed}, assume that $\operatorname{Cub}(\Sigma,*)$ is nonempty. Then for any fixed $z_1,\dots,z_k\in\operatorname{Cub}(\Sigma,*)$, we have
\[
\frac{\partial}{\partial z_\rho}\operatorname{MVol}_{\Sigma,\omega,*}(z_1,\dots,z_k,\underbrace{z,\dots,z}_{d-k})=(d-k)\operatorname{MVol}_{\Sigma^\rho,\omega^\rho,*^\rho}(z_1^\rho,\dots,z_k^\rho,\underbrace{z^\rho,\dots,z^\rho}_{d-k-1}).
\]
\end{corollary}
\begin{proof}
The assumption that $\operatorname{Cub}(\Sigma,*)\neq\emptyset$ implies that $\operatorname{MVol}_{\Sigma,\omega,*}(z_1,\dots,z_k,z,\dots,z)$ is a degree $d-k$ polynomial in $\mathbb{R}[z_\rho\mid\rho\in\Sigma(1)]$, so the derivatives are well-defined. Proposition~\ref{prop:pyramidmixed} and symmetry of mixed volumes imply that
\[
\frac{\partial}{\partial z_{i,\rho}}\operatorname{MVol}_{\Sigma,\omega,*}(z_1,\dots,z_d)=\operatorname{MVol}_{\Sigma^\rho,\omega^\rho,*^\rho}(z_1^\rho,\dots,z_{i-1}^\rho,z_{i+1}^\rho,\dots,z_d^\rho).
\]
Viewing $\operatorname{MVol}_{\Sigma,\omega,*}(z_1,\dots,z_k,z,\dots,z)$ as the composition of $\operatorname{MVol}_{\Sigma,\omega,*}(z_1,\dots,z_d)$ with the specialization
\[
z_{k+1}=\cdots=z_d=z,
\]
the result then follows from the multivariable chain rule.
\end{proof}
\section{Alexandrov--Fenchel inequalities}
One of the most consequential properties of mixed volumes of polytopes (or, more generally, of mixed volumes of convex bodies) is the \textbf{Alexandrov--Fenchel inequalities}. Given polytopes $P_1,\dots,P_d$ in a $d$-dimensional real vector space $V$ with volume function $\operatorname{Vol}$, the Alexandrov--Fenchel inequalities state that
\[
\operatorname{MVol}(P_1,P_2,P_3,\dots,P_d)^2\geq \operatorname{MVol}(P_1,P_1,P_3,\dots,P_d)\operatorname{MVol}(P_2,P_2,P_3,\dots,P_d)
\]
(see, for example, \cite[Theorem~7.3.1]{Schneider} for a proof and historical references). It is our aim in this section to study Alexandrov--Fenchel inequalities in the setting of mixed volumes of normal complexes.
Let $\Sigma\subseteq N_\mathbb{R}$ be a simplicial $d$-fan, $\omega:\Sigma(d)\rightarrow\mathbb{R}_{>0}$ a weight function, and $*\in\mathrm{Inn}(N_\mathbb{R})$ an inner product. We say that the triple $(\Sigma,\omega,*)$ is \textbf{Alexandrov--Fenchel}, or just \textbf{AF} for short, if $\operatorname{Cub}(\Sigma,*)\neq\emptyset$ and
\[
\operatorname{MVol}_{\Sigma,\omega,*}(z_1,z_2,z_3,\dots,z_d)^2\geq\operatorname{MVol}_{\Sigma,\omega,*}(z_1,z_1,z_3,\dots,z_d)\operatorname{MVol}_{\Sigma,\omega,*}(z_2,z_2,z_3,\dots,z_d)
\]
for all $z_1,\dots,z_d\in\operatorname{Cub}(\Sigma,*)$. In this section, we prove the following result, which provides sufficient conditions for proving that a triple $(\Sigma,\omega,*)$ is AF.
\begin{theorem}\label{thm:reduce}
Let $\Sigma\subseteq N_\mathbb{R}$ be a simplicial $d$-fan, $\omega:\Sigma(d)\rightarrow\mathbb{R}_{>0}$ a weight function, and $*\in\mathrm{Inn}(N_\mathbb{R})$ an inner product such that $\operatorname{Cub}(\Sigma,*)\neq\emptyset$. The triple $(\Sigma,\omega,*)$ is AF if the following two conditions are satisfied:
\begin{enumerate}
\item[(i)] $\Sigma^\tau\setminus\{0\}$ is connected for any cone $\tau\in\Sigma(k)$ with $k\leq d-3$;
\item[(ii)] $\mathrm{Hess}\big(\operatorname{Vol}_{\Sigma^\tau,\omega^\tau,*^\tau}(z)\big)$ has exactly one positive eigenvalue for any $\tau\in\Sigma(d-2)$.
\end{enumerate}
\end{theorem}
\begin{remark}
Condition (i) in Theorem~\ref{thm:reduce} can be thought of as requiring that the fan $\Sigma$ does not have any ``pinch'' points. For example, in dimension four, this condition rules out fans that locally look like a pair of four-dimensional cones meeting along a ray, because the star fan associated to that ray would comprise two three-dimensional cones that meet only at the origin.
\end{remark}
\begin{remark}
Condition (ii) of Theorem~\ref{thm:reduce} concerns only the two-dimensional stars of $\Sigma$. Since the volume polynomial of a two-dimensional fan is a quadratic form, the Hessians appearing in Condition (ii) are constant matrices. Condition (ii) can be viewed as an analogue of the Brunn--Minkowski inequality for polygons. For an example of a two-dimensional (tropical) fan that does not satisfy Condition (ii), see \cite{BabaeeHuh}.
\end{remark}
\subsection{Proof of Theorem~\ref{thm:reduce}}
Our proof of Theorem~\ref{thm:reduce} is largely inspired by a proof of the classical Alexandrov--Fenchel inequalities recently developed by Cordero-Erausquin, Klartag, Merigot, and Santambrogio \cite{OneMoreProof}---for which the key geometric input is Proposition~\ref{prop:pyramidmixed}. While the arguments in \cite{OneMoreProof} can be employed in this setting more-or-less verbatim, we present a more streamlined proof using ideas regarding Lorentzian polynomials recently developed by Br\"and\'en and Leake \cite{BrandenLeake}. Before presenting a proof of Theorem~\ref{thm:reduce}, we pause to introduce key ideas regarding Lorentzian polynomials.
\subsubsection{Lorentzian polynomials on cones}
One way to view the AF inequalities is as the nonpositivity of the $2\times 2$ matrix
\[
\left[
\begin{array}{cc}
\operatorname{MVol}_{\Sigma,\omega,*}(z_1,z_1,z_3,\dots,z_d) & \operatorname{MVol}_{\Sigma,\omega,*}(z_1,z_2,z_3,\dots,z_d)\\
\operatorname{MVol}_{\Sigma,\omega,*}(z_2,z_1,z_3,\dots,z_d) & \operatorname{MVol}_{\Sigma,\omega,*}(z_2,z_2,z_3,\dots,z_d)
\end{array}
\right],
\]
and this nonpositivity is equivalent to the matrix having exactly one positive eigenvalue. Lorentzian polynomials are a clever tool for capturing the essence of this observation, and are therefore a natural setting for understanding AF-type inequalities.
Our discussion of Lorentzian polynomials follows Br\"and\'en and Leake \cite{BrandenLeake}. Suppose that $C\subseteq\mathbb{R}_{>0}^n$ is a nonempty open convex cone, and let $f\in\mathbb{R}[x_1,\dots,x_n]$ be a homogeneous polynomial of degree $d$. For each $i=1,\dots,n$ and $v=(v_1,\dots,v_n)\in\mathbb{R}^n$, we use the following shorthand for partial and directional derivatives
\[
\partial_i=\frac{\partial}{\partial x_i}\;\;\;\text{ and }\;\;\;\partial_v=\sum_{i=1}^n v_i\partial_i.
\]
We say that $f$ is \textbf{$C$-Lorentzian} if, for all $v_1,\dots,v_d\in C$,
\begin{enumerate}
\item[(P)] $\partial_{v_1}\cdots\partial_{v_d}f>0$, and
\item[(H)] $\Hess(\partial_{v_3}\cdots\partial_{v_d}f)$ has exactly one positive eigenvalue.
\end{enumerate}
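For a first example, consider the degree-two polynomial $f=x_1x_2+x_1x_3+x_2x_3$ and the cone $C=\mathbb{R}_{>0}^3$. For any $v_1,v_2\in C$ we have $\partial_{v_1}\partial_{v_2}f=\sum_{i\neq j}v_{1,i}v_{2,j}>0$, so (P) holds, and since $d=2$, condition (H) concerns $\Hess(f)$ itself, whose eigenvalues are $2,-1,-1$; hence $f$ is $C$-Lorentzian.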
To relate Lorentzian polynomials back to AF-type inequalities, we recall the following key observation (see \cite[Proposition~4.4]{BrandenHuh}).
\begin{lemma}\label{lem:lorentzian}
Let $C\subseteq\mathbb{R}_{>0}^n$ be a nonempty open convex cone, and let $f\in\mathbb{R}[x_1,\dots,x_n]$ be $C$-Lorentzian. Then for all $v_1,v_2,v_3\dots,v_d\in C$, we have
\[
\big(\partial_{v_1}\partial_{v_2}\partial_{v_3}\cdots\partial_{v_d}f\big)^2\geq \big(\partial_{v_1}\partial_{v_1}\partial_{v_3}\cdots\partial_{v_d}f\big) \big(\partial_{v_2}\partial_{v_2}\partial_{v_3}\cdots\partial_{v_d}f\big).
\]
\end{lemma}
\begin{proof}
Consider the symmetric $2\times 2$ matrix
\[
M=\left[
\begin{array}{cc}
\partial_{v_1}\partial_{v_1}\partial_{v_3}\cdots\partial_{v_d}f & \partial_{v_1}\partial_{v_2}\partial_{v_3}\cdots\partial_{v_d}f\\
\partial_{v_2}\partial_{v_1}\partial_{v_3}\cdots\partial_{v_d}f & \partial_{v_2}\partial_{v_2}\partial_{v_3}\cdots\partial_{v_d}f
\end{array}
\right].
\]
By (P), the entries of $M$ are positive, so the Perron--Frobenius Theorem implies that $M$ has \emph{at least} one positive eigenvalue. On the other hand, $M$ is a principal minor of $\Hess(\partial_{v_3}\cdots\partial_{v_d}f)$, which, by (H), has exactly one positive eigenvalue; thus, it follows from Cauchy's Interlacing Theorem that $M$ has \emph{at most} one positive eigenvalue. Therefore $M$ has exactly one positive eigenvalue, implying that the determinant of $M$ is nonpositive, proving the lemma.
\end{proof}
The following result, proved by Br\"and\'en and Leake \cite{BrandenLeake}, is particularly useful for the study of Lorentzian polynomials on cones. We view this result as an effective implementation of the key insights in \cite{OneMoreProof}; in essence, it eliminates the need for one of the induction parameters in \cite{OneMoreProof} because that induction parameter is captured within the recursive nature of Lorentzian polynomials.
\begin{lemma}[\cite{BrandenLeake}, Proposition 2.4]\label{lem:lorentziancheck}
Let $C\subseteq\mathbb{R}_{>0}^n$ be a nonempty open convex cone, and let $f\in\mathbb{R}[x_1,\dots,x_n]$ be a homogeneous polynomial of degree $d$. If
\begin{enumerate}
\item $\partial_{v_1}\cdots\partial_{v_d}f>0$ for all $v_1,\dots,v_d\in C$,
\item $\Hess\big(\partial_{v_1}\cdots\partial_{v_{d-2}}f\big)$ is irreducible\footnote{An $n\times n$ matrix $M$ is \textbf{irreducible} if the associated adjacency graph---the undirected graph on $n$ labeled vertices with an edge between the $i$th and $j$th vertex whenever the $(i,j)$ entry of $M$ is nonzero---is connected.} and has nonnegative off-diagonal entries for all $v_1,\dots,v_{d-2}\in C$, and
\item $\partial_i f$ is $C$-Lorentzian for all $i=1,\dots,n$,
\end{enumerate}
then $f$ is $C$-Lorentzian.
\end{lemma}
\subsubsection{Lorentzian volume polynomials}
We now discuss how the above discussion of Lorentzian polynomials on cones can be used to study mixed volumes of normal complexes. Let $\Sigma\subseteq N_\mathbb{R}$ be a simplicial $d$-fan, $\omega:\Sigma(d)\rightarrow\mathbb{R}_{>0}$ a weight function, and $*\in\mathrm{Inn}(N_\mathbb{R})$ an inner product. We assume that $\operatorname{Cub}(\Sigma,*)\neq\emptyset$, in which case the function $\operatorname{Vol}_{\Sigma,\omega,*}:\operatorname{Cub}(\Sigma,*)\rightarrow\mathbb{R}$ is a homogeneous polynomial of degree $d$ in $\mathbb{R}[z_\rho\mid\rho\in\Sigma(1)]$. By Proposition~\ref{prop:mvolchar}(3), we have
\[
\operatorname{Vol}_{\Sigma,\omega,*}(z)=\operatorname{MVol}_{\Sigma,\omega,*}(z,\dots,z).
\]
It then follows from Proposition~\ref{prop:mvolchar}(1) and (2) (and the chain rule) that
\begin{equation}\label{eq:partials}
\partial_{z_1}\cdots\partial_{z_k}\operatorname{Vol}_{\Sigma,\omega,*}(z)=\frac{d!}{(d-k)!}\operatorname{MVol}_{\Sigma,\omega,*}(z_1,\dots,z_k,\underbrace{z,\dots,z}_{d-k})
\end{equation}
for any $z_1,\dots,z_k\in\operatorname{Cub}(\Sigma,*)$. In particular, in order to prove that $(\Sigma,\omega,*)$ is AF, we now see that it suffices (by Lemma~\ref{lem:lorentzian}) to prove that $\operatorname{Vol}_{\Sigma,\omega,*}$ is $\operatorname{Cub}(\Sigma,*)$-Lorentzian. Thus, Theorem~\ref{thm:reduce} is a consequence of the following stronger result.
\begin{theorem}\label{thm:lorentzian}
Let $\Sigma\subseteq N_\mathbb{R}$ be a simplicial $d$-fan, $\omega:\Sigma(d)\rightarrow\mathbb{R}_{>0}$ a weight function, and $*\in\mathrm{Inn}(N_\mathbb{R})$ an inner product such that $\operatorname{Cub}(\Sigma,*)\neq\emptyset$. Then $\operatorname{Vol}_{\Sigma,\omega,*}$ is $\operatorname{Cub}(\Sigma,*)$-Lorentzian if the following two conditions are satisfied:
\begin{enumerate}
\item[(i)] $\Sigma^\tau\setminus\{0\}$ is connected for any cone $\tau\in\Sigma(k)$ with $k\leq d-3$;
\item[(ii)] $\mathrm{Hess}\big(\operatorname{Vol}_{\Sigma^\tau,\omega^\tau,*^\tau}(z)\big)$ has exactly one positive eigenvalue for any $\tau\in\Sigma(d-2)$.
\end{enumerate}
\end{theorem}
\begin{proof}
We prove Theorem~\ref{thm:lorentzian} by induction on $d$.
First consider the base case $d=2$ (in which case Condition (i) is vacuous). Note that $\operatorname{Vol}_{\Sigma,\omega,*}$ satisfies (P) by \eqref{eq:partials} and the positivity of mixed volumes (Proposition~\ref{prop:positive}), while (H) for $\operatorname{Vol}_{\Sigma,\omega,*}$ is equivalent to Condition (ii). Therefore, Theorem~\ref{thm:lorentzian} holds when $d=2$.
Now let $d>2$ and assume $(\Sigma,\omega,*)$ satisfies Conditions (i) and (ii) in Theorem~\ref{thm:lorentzian}. To prove that $\operatorname{Vol}_{\Sigma,\omega,*}$ is $\operatorname{Cub}(\Sigma,*)$-Lorentzian, we use Lemma~\ref{lem:lorentziancheck}. Translating the three conditions of Lemma~\ref{lem:lorentziancheck} using \eqref{eq:partials}, we must prove that
\begin{enumerate}
\item $\operatorname{MVol}_{\Sigma,\omega,*}(z_1,\dots,z_d)>0$ for all $z_1,\dots,z_d\in\operatorname{Cub}(\Sigma,*)$,
\item $\Hess\big(\operatorname{MVol}_{\Sigma,\omega,*}(z_1,\dots,z_{d-2},z,z)\big)$ is irreducible and has nonnegative off-diagonal entries for all $z_1,\dots,z_{d-2}\in \operatorname{Cub}(\Sigma,*)$, and
\item $\partial_\rho \operatorname{Vol}_{\Sigma,\omega,*}(z)$ is $\operatorname{Cub}(\Sigma,*)$-Lorentzian for all $\rho\in\Sigma(1)$.
\end{enumerate}
Note that (1) is just the positivity of mixed volumes (Proposition~\ref{prop:positive}). To prove (3), note that Proposition~\ref{prop:mvolchar}(3) and Corollary~\ref{cor:derivatives} (with $k=0$) together imply that
\[
\partial_\rho\operatorname{Vol}_{\Sigma,\omega,*}(z)=d\operatorname{Vol}_{\Sigma^\rho,\omega^\rho,*^\rho}(z^\rho).
\]
Applying the induction hypothesis to $(\Sigma^\rho,\omega^\rho,*^\rho)$---which we can do because any star fan of $\Sigma^\rho$ is a star fan of $\Sigma$, so our assumption that $(\Sigma,\omega,*)$ satisfies the two conditions of Theorem~\ref{thm:lorentzian} implies that $(\Sigma^\rho,\omega^\rho,*^\rho)$ also satisfies the two conditions of Theorem~\ref{thm:lorentzian}---implies that $\partial_\rho\operatorname{Vol}_{\Sigma,\omega,*}(z)$ is Lorentzian, verifying (3).
Finally, to prove (2), we use Corollary~\ref{cor:derivatives} to compute
\[
\partial_\rho\operatorname{MVol}_{\Sigma,\omega,*}(z_1,\dots,z_{d-2},z,z)=2\operatorname{MVol}_{\Sigma^\rho,\omega^\rho,*^\rho}(z_1^\rho,\dots,z_{d-2}^\rho,z^\rho).
\]
If $\tau\in\Sigma(2)$ with rays $\rho$ and $\eta$, then
\[
z^\rho_{\eta^\rho}=z_\eta-w_{\rho,*}(z)*u_\eta=z_\eta-\frac{u_\rho*u_\eta}{u_\rho*u_\rho}z_\rho,
\]
from which it follows that
\begin{equation}\label{eq:secondderivative1}
\partial_\eta\partial_\rho\operatorname{MVol}_{\Sigma,\omega,*}(z_1,\dots,z_{d-2},z,z)=2\operatorname{MVol}_{\Sigma^\tau,\omega^\tau,*^\tau}(z_1^\tau,\dots,z_{d-2}^\tau).
\end{equation}
On the other hand, if $\rho$ and $\eta$ do not lie on a common cone $\tau\in\Sigma(2)$, then
\begin{equation}\label{eq:secondderivative2}
\partial_\eta\partial_\rho\operatorname{MVol}_{\Sigma,\omega,*}(z_1,\dots,z_{d-2},z,z)=0.
\end{equation}
The positivity of mixed volumes for cubical values, along with \eqref{eq:secondderivative1} and \eqref{eq:secondderivative2}, then implies that $\Hess\big(\operatorname{MVol}_{\Sigma,\omega,*}(z_1,\dots,z_{d-2},z,z)\big)$ has nonnegative off-diagonal entries that are positive whenever the row and column index are the rays of a cone $\tau\in\Sigma(2)$. The first condition in Theorem~\ref{thm:lorentzian} implies that we can travel from any ray of $\Sigma$ to any other ray by passing only through the relative interiors of one- and two-dimensional cones, which then implies that $\Hess\big(\operatorname{MVol}_{\Sigma,\omega,*}(z_1,\dots,z_{d-2},z,z)\big)$ is irreducible, concluding the proof.
\end{proof}
\section{Application: the Heron--Rota--Welsh conjecture}
As an application of our developments regarding mixed volumes of normal complexes, we show in this section how Theorem~\ref{thm:reduce} can be used to prove the Heron--Rota--Welsh conjecture, which states that the coefficients of the characteristic polynomial of any matroid are log-concave. The bridge between matroids and mixed volumes is the Bergman fan; we begin this section by briefly recalling relevant notions regarding matroids and Bergman fans.
\subsection{Matroids and Bergman fans}
A \textbf{(loopless) matroid} $\mathsf{M}=(E,\mathcal{L})$ consists of a finite set $E$, called the \textbf{ground set}, and a collection of subsets $\mathcal{L}\subseteq 2^E$, called \textbf{flats}, which satisfy the following three conditions:
\begin{enumerate}
\item[(F1)] $\emptyset\in\mathcal{L}$,
\item[(F2)] if $F_1,F_2\in\mathcal{L}$, then $F_1\cap F_2\in\mathcal{L}$, and
\item[(F3)] if $F\in\mathcal{L}$, then every element of $E\setminus F$ is contained in exactly one flat that is minimal among the flats that strictly contain $F$.
\end{enumerate}
We do not give a comprehensive overview of matroids; rather, we settle for a brief introduction of key concepts. For a more complete treatment, see Oxley's book \cite{Oxley}.
The \textbf{closure} of a set $S\subseteq E$, denoted $\mathrm{cl}(S)$, is the smallest flat containing $S$. A set $I\subseteq E$ is called \textbf{independent} if $\mathrm{cl}(I_1)\subsetneq\mathrm{cl}(I_2)$ for any $I_1\subsetneq I_2\subseteq I$. The \textbf{rank} of a set $S\subseteq E$, denoted $\mathrm{rk}(S)$, is the maximum size of an independent subset of $S$, and the \textbf{rank of $\mathsf{M}$}, denoted $\mathrm{rk}(\mathsf{M})$, is defined to be the rank of $E$. While we have chosen to characterize matroids in terms of their flats, we note that matroids can also be characterized in terms of their independent sets or their rank function.
A \textbf{flag of flats (of length $k$) in $\mathsf{M}$} is a chain of the form
\[
\mathcal{F}=(F_1\subsetneq\cdots\subsetneq F_k)\;\;\;\text{ with }\;\;\;F_1,\dots, F_k\in\mathcal{L}.
\]
It can be checked from the matroid axioms that every maximal flag has one flat of each rank $0,\dots,\mathrm{rk}(\mathsf{M})$. We let $\Delta_\mathsf{M}$ denote the set of flags of flats, which naturally has the structure of a simplicial complex of dimension $\mathrm{rk}(\mathsf{M})$. Since every maximal flag contains $\emptyset$ and $E$, we often restrict our attention to studying proper flats. We use the notation $\mathcal{L}^*=\mathcal{L}\setminus\{\emptyset,E\}$ for the set of proper flats and $\Delta_\mathsf{M}^*$ for the set of flags of proper flats, which is a simplicial complex of dimension $\mathrm{rk}(\mathsf{M})-2$.
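As a running example, consider the uniform matroid $\mathsf{U}_{2,3}$ on $E=\{1,2,3\}$, whose flats are $\emptyset$, the three singletons, and $E$. It has rank two, its proper flats are the three singletons, and every flag of proper flats consists of at most one flat, so $\Delta^*_{\mathsf{U}_{2,3}}$ is a zero-dimensional simplicial complex with three vertices.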
Given a matroid $\mathsf{M}$, consider the vector space $\mathbb{R}^E$ with basis $\{v_e\mid e\in E\}$. For each subset $S\subseteq E$, define
\[
v_S=\sum_{e\in S}v_e\in\mathbb{R}^E.
\]
Set $N_\mathbb{R}=\mathbb{R}^E/\mathbb{R} v_E$ and denote the image of $v_S$ in the quotient space $N_\mathbb{R}$ by $u_S$. For each flag $\mathcal{F}=(F_1\subsetneq\cdots\subsetneq F_k)\in\Delta_\mathsf{M}^*$, define a polyhedral cone
\[
\sigma_\mathcal{F}=\mathbb{R}_{\geq 0}\{u_{F_1},\dots,u_{F_k}\}\subseteq N_\mathbb{R}.
\]
The \textbf{Bergman fan of $\mathsf{M}$}, denoted $\Sigma_\mathsf{M}$, is the polyhedral fan
\[
\Sigma_\mathsf{M}=\{\sigma_\mathcal{F}\mid \mathcal{F}\in\Delta^*_\mathsf{M}\}.
\]
Note that $\Sigma_\mathsf{M}$ is simplicial, pure of dimension $d=\mathrm{rk}(\mathsf{M})-1$, and marked by the vectors $u_F$.
Consider a cone $\sigma_\mathcal{F}\in\Sigma_\mathsf{M}(d-1)$ corresponding to a flag
\[
\mathcal{F}=(F_1\subsetneq\cdots\subsetneq F_{k-1}\subsetneq F_{k+1}\subsetneq\cdots\subsetneq F_d)\;\;\;\text{ with }\;\;\;\mathrm{rk}(F_i)=i.
\]
The $d$-cones containing $\sigma_\mathcal{F}$ are indexed by flats $F$ with $F_{k-1}\subsetneq F\subsetneq F_{k+1}$. If there are $\ell$ such flats, then (F3) implies that
\[
\sum_{F\in\mathcal{L} \atop F_{k-1}\subsetneq F\subsetneq F_{k+1}}u_F=(\ell-1)u_{F_{k-1}}+u_{F_{k+1}}.
\]
Since the right-hand side lies in $N_{\sigma_{\mathcal{F}},\mathbb{R}}$, this observation implies that $\Sigma_{\mathsf{M}}$ is balanced (tropical with weights all equal to $1$).
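In the running example $\mathsf{U}_{2,3}$, the quotient $N_\mathbb{R}=\mathbb{R}^3/\mathbb{R} v_{\{1,2,3\}}$ is two-dimensional and the Bergman fan $\Sigma_{\mathsf{U}_{2,3}}$ consists of the origin together with the three rays spanned by $u_{\{1\}}$, $u_{\{2\}}$, and $u_{\{3\}}$. It is pure of dimension $1=\mathrm{rk}(\mathsf{U}_{2,3})-1$, and the relation $u_{\{1\}}+u_{\{2\}}+u_{\{3\}}=0$ is precisely the balancing condition above (take $k=1$, so that $F_{k-1}=\emptyset$ and $F_{k+1}=E$, with $\ell=3$ and $u_\emptyset=u_E=0$).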
In order to check that Bergman fans are AF, we require a working understanding of the star fans of Bergman fans. Consider a cone $\sigma_\mathcal{F}$ associated to a flag $\mathcal{F}=(F_1\subsetneq\cdots\subsetneq F_k)$. Set $F_0=\emptyset$ and $F_{k+1}=E$, and for each $j=0,\dots,k$ consider the \textbf{matroid minor} $\mathsf{M}[F_j,F_{j+1}]$, which is the matroid on ground set $F_{j+1}\setminus F_j$ with flats of the form $F\setminus F_j$ where $F$ is a flat of $\mathsf{M}$ satisfying $F_j\subseteq F\subseteq F_{j+1}$. Notice that the star fan $\Sigma_\mathsf{M}^{\sigma_{\mathcal{F}}}$ lives in the quotient space
\[
N_\mathbb{R}^{\sigma_\mathcal{F}}=\frac{N_\mathbb{R}}{\mathbb{R}\{u_{F_1},\dots,u_{F_{k}}\}}=\frac{\mathbb{R}^E}{\mathbb{R}\{v_{F_1},\dots,v_{F_{k+1}}\}} = \bigoplus_{j=0}^k\frac{\mathbb{R}^{F_{j+1}\setminus F_j}}{\mathbb{R} v_{F_{j+1}\setminus F_j}},
\]
and one checks that this natural isomorphism of vector spaces identifies the star of $\Sigma_\mathsf{M}$ at $\sigma_{\mathcal{F}}$ as the product of the Bergman fans of the associated matroid minors:
\begin{equation}\label{eq:product}
\Sigma_\mathsf{M}^{\sigma_\mathcal{F}}=\prod_{j=0}^k\Sigma_{\mathsf{M}[F_j,F_{j+1}]}.
\end{equation}
\subsection{Bergman fans are AF}
We are now ready to use Theorem~\ref{thm:reduce} to prove that Bergman fans of matroids are AF.
\begin{theorem}\label{thm:bergman}
Let $\mathsf{M}$ be a matroid of rank $d+1$ and let $\Sigma_\mathsf{M}\subseteq N_\mathbb{R}$ be the associated Bergman fan. If $*\in\mathrm{Inn}(N_\mathbb{R})$ is any inner product with $\operatorname{Cub}(\Sigma_\mathsf{M},*)\neq\emptyset$, then $(\Sigma_\mathsf{M},*)$ is AF.
\end{theorem}
\begin{remark}
We are assuming the weight function $\omega$ is equal to $1$ because, as noted in the previous subsection, $\Sigma_\mathsf{M}$ is balanced. Thus, we omit $\omega$ from the notation in this section.
\end{remark}
To prove Theorem~\ref{thm:bergman}, we verify the two conditions of Theorem~\ref{thm:reduce}. We accomplish this through the following three lemmas. The first lemma verifies that Bergman fans satisfy (a slight strengthening of) Condition (i) of Theorem~\ref{thm:reduce}.
\begin{lemma}\label{lem:cond1}
$\Sigma_\mathsf{M}^{\sigma_\mathcal{F}}\setminus\{0\}$ is connected for any cone $\sigma_\mathcal{F}\in\Sigma_\mathsf{M}(k)$ with $k\leq d-2$.
\end{lemma}
\begin{proof}
We begin by arguing that $\Sigma_\mathsf{M}\setminus\{0\}$ is connected for any matroid of rank at least $3$. It suffices to prove that, for any two rays $\rho_F,\rho_{F'}\in\Sigma_\mathsf{M}(1)$ associated to flats $F,F'\in\mathcal{L}^*$, there are sequences $\rho_1,\dots,\rho_\ell\in\Sigma_\mathsf{M}(1)$ and $\tau_1,\dots,\tau_{\ell+1}\in\Sigma_\mathsf{M}(2)$ such that
\[
\rho_F\prec\tau_1\succ\rho_1\prec\cdots\succ\rho_\ell\prec\tau_{\ell+1}\succ\rho_{F'}.
\]
If $F\cap F'=G\neq\emptyset$, then $G\in\mathcal{L}^*$ by (F2) and the following is such a sequence
\[
\rho_F\prec\tau_{G\subsetneq F}\succ\rho_G\prec\tau_{G\subsetneq F'}\succ\rho_{F'}.
\]
If, on the other hand, $F\cap F'=\emptyset$, choose rank-one flats $G\subseteq F$ and $G'\subseteq F'$. By (F3), there is exactly one rank-two flat $H$ that contains $G$ and $G'$, so we can construct a sequence
\[
(\rho_F\prec\tau_{G\subsetneq F}\succ)\rho_G\prec\tau_{G\subsetneq H}\succ\rho_{H}\prec\tau_{G'\subsetneq H}\succ\rho_{G'}(\prec\tau_{G'\subsetneq F'}\succ\rho_{F'}),
\]
where the parenthetical pieces should be omitted if $G=F$ or $G'=F'$.
Now consider any star fan $\Sigma_{\mathsf{M}}^{\sigma_\mathcal{F}}$ where $\mathcal{F}=(F_1\subsetneq\cdots\subsetneq F_k)$ with $k\leq d-2$. Notice that such a star fan has dimension at least two, and we can write it as a product of Bergman fans on matroid minors
\[
\Sigma_\mathsf{M}^{\sigma_\mathcal{F}}=\prod_{j=0}^k\Sigma_{\mathsf{M}[F_j,F_{j+1}]}.
\]
Consider two rays $\rho,\rho'\in\Sigma_\mathsf{M}^{\sigma_\mathcal{F}}(1)$. If the two rays happen to come from different factors in the product, then we can connect them through the sequence
\[
\rho\prec\rho\times\rho'\succ\rho'.
\]
If, on the other hand, they lie in the same factor, there are two cases to consider. If the matroid minor of the factor that the rays lie in has rank at least $3$, then the rays can be connected via the argument above. If, on the other hand, the matroid minor has rank $2$, then one of the other matroid minors must also have rank at least $2$. Choosing any ray $\rho''$ in the Bergman fan of the second matroid minor, we can connect $\rho$ and $\rho'$ through the sequence
\[
\rho\prec\rho\times\rho''\succ\rho''\prec\rho'\times\rho''\succ\rho'.\qedhere
\]
\end{proof}
In order to verify Condition (ii) of Theorem~\ref{thm:reduce}, there are two cases to consider, depending on whether the two-dimensional star fan in question is, itself, a Bergman fan, or whether it is the product of two one-dimensional Bergman fans. In both cases, we use the fact that, in order to prove that the Hessian of a quadratic form $f\in\mathbb{R}[x_1,\dots,x_n]$ has exactly one positive eigenvalue, it suffices (by Sylvester's Law of Inertia) to find an invertible change of variables $y_1(x),\dots,y_n(x)$ such that
\[
f=\sum_{i=1}^na_iy_i(x)^2
\]
with exactly one positive $a_i$. We now consider the two cases in the following two lemmas.
\begin{lemma}\label{lem:cond2a}
If $\mathsf{M}$ is a rank-three matroid, then the Hessian of $\deg_{\Sigma_\mathsf{M}}(D(z)^2)$ has exactly one positive eigenvalue.
\end{lemma}
\begin{proof}
For a flat $F\in\mathcal{L}^*$, we use the shorthand $X_{F}=X_{\rho_F}$ and $z_F=z_{\rho_F}$. In order to compute $\deg_{\Sigma_\mathsf{M}}(D(z)^2)$, we must compute $\deg_{\Sigma_\mathsf{M}}(X_FX_G)$ for any two flats $F,G\in\mathcal{L}^*$. If $F\subsetneq G$, then the degree is one, by definition of the degree function, and if $F$ and $G$ are incomparable, then the degree is zero. Thus, it remains to compute the degree of the squared terms. Using the definition of $A^\bullet(\Sigma_{\mathsf{M}})$ and the flat axioms, the reader is encouraged to verify that $\deg_{\Sigma_\mathsf{M}}(X_F^2)=1-|\{G\in\mathcal{L}^*\mid F\subsetneq G\}|$ if $\mathrm{rk}(F)=1$ and $\deg_{\Sigma_\mathsf{M}}(X_G^2)=-1$ if $\mathrm{rk}(G)=2$. It follows that
\[
\deg_{\Sigma_\mathsf{M}}(D(z)^2)=2\sum_{F,G\in\mathcal{L}^*\atop F\subsetneq G} z_Fz_G+\sum_{F\in\mathcal{L}^*\atop \mathrm{rk}(F)=1}z_F^2-\sum_{F,G\in\mathcal{L}^*\atop F\subsetneq G}z_F^2-\sum_{G\in\mathcal{L}^*\atop \mathrm{rk}(G)=2}z_G^2.
\]
By creatively organizing the terms, we can rewrite this as
\[
\deg_{\Sigma_\mathsf{M}}(D(z)^2)=\Big(\sum_{F\in\mathcal{L}^*\atop\mathrm{rk}(F)=1}z_F\Big)^2-\sum_{G\in\mathcal{L}^*\atop \mathrm{rk}(G)=2}\Big(z_G-\sum_{F\in\mathcal{L}^*\atop F\subsetneq G}z_F\Big)^2,
\]
where the only key matroid assertion used in the equivalence of these two formulas is that there exists a unique rank-two flat containing any two distinct rank-one flats. Sylvester's Law of Inertia implies that the Hessian of this quadratic form has exactly one positive eigenvalue.
\end{proof}
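For instance, if $\mathsf{M}$ is the rank-three Boolean matroid on $E=\{1,2,3\}$, so that every subset of $E$ is a flat, then writing $z_{ij}$ for the variable attached to the rank-two flat $\{i,j\}$, the formula above reads
\[
\deg_{\Sigma_\mathsf{M}}(D(z)^2)=(z_1+z_2+z_3)^2-(z_{12}-z_1-z_2)^2-(z_{13}-z_1-z_3)^2-(z_{23}-z_2-z_3)^2.
\]
The four linear forms appearing here are linearly independent, so the Hessian of this quadratic form in six variables has one positive eigenvalue, three negative eigenvalues, and two zero eigenvalues.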
\begin{lemma}\label{lem:cond2b}
If $\mathsf{M}$ and $\mathsf{M}'$ are rank-two matroids, then the Hessian of $\deg_{\Sigma_{\mathsf{M}}\times\Sigma_{\mathsf{M}'}}(D(z)^2)$ has exactly one positive eigenvalue.
\end{lemma}
\begin{proof}
By definition of $A^\bullet(\Sigma_{\mathsf{M}}\times\Sigma_{\mathsf{M}'})$, the reader is encouraged to verify that
\[
\deg_{\Sigma_{\mathsf{M}}\times\Sigma_{\mathsf{M}'}}(X_\rho X_\eta)=\begin{cases}
0 & \rho,\eta\in\Sigma_{\mathsf{M}}(1)\text{ or }\rho,\eta\in\Sigma_{\mathsf{M}'}(1),\\
1 & \rho\in\Sigma_{\mathsf{M}}(1)\text{ and }\eta\in\Sigma_{\mathsf{M}'}(1).
\end{cases}
\]
Therefore,
\[
\deg_{\Sigma_{\mathsf{M}}\times\Sigma_{\mathsf{M}'}}(D(z)^2)=\sum_{\rho\in\Sigma_{\mathsf{M}}(1),\;\eta\in\Sigma_{\mathsf{M}'}(1)}2z_\rho z_\eta,
\]
which can be rewritten as
\[
\deg_{\Sigma_{\mathsf{M}}\times\Sigma_{\mathsf{M}'}}(D(z)^2)=\frac{1}{2}\Big(\sum_{\rho\in\Sigma_{\mathsf{M}}(1)}z_\rho+\sum_{\eta\in\Sigma_{\mathsf{M}'}(1)}z_\eta\Big)^2-\frac{1}{2}\Big(\sum_{\rho\in\Sigma_{\mathsf{M}}(1)}z_\rho-\sum_{\eta\in\Sigma_{\mathsf{M}'}(1)}z_\eta\Big)^2.
\]
Sylvester's Law of Inertia implies that the Hessian of this quadratic form has exactly one positive eigenvalue.
\end{proof}
We now have all the ingredients we need to prove Theorem~\ref{thm:bergman}.
\begin{proof}[Proof of Theorem~\ref{thm:bergman}]
We prove that Bergman fans satisfy the two conditions of Theorem~\ref{thm:reduce}. That Bergman fans satisfy Condition (i) is the content of Lemma~\ref{lem:cond1}. To prove Condition (ii), we first note that, since Bergman fans are balanced, their star fans are also balanced, so Theorem~\ref{thm:vol=deg} implies that the volume polynomials in Condition (ii) are independent of $*$ and are equal to
\[
\deg_{\Sigma_\mathsf{M}^{\sigma_\mathcal{F}}}(D(z)^2).
\]
By the product decomposition of star fans given in \eqref{eq:product}, $\Sigma_\mathsf{M}^{\sigma_\mathcal{F}}$ is either a two-dimensional Bergman fan or a product of two one-dimensional Bergman fans; in the former case, the Hessian of the volume polynomial has exactly one positive eigenvalue by Lemma~\ref{lem:cond2a}, and in the latter case, by Lemma~\ref{lem:cond2b}.
\end{proof}
\subsection{Revisiting the Heron--Rota--Welsh Conjecture}
The \textbf{characteristic polynomial} of a matroid $\mathsf{M}=(E,\mathcal{L})$ can be defined by
\[
\chi_\mathsf{M}(\lambda)=\sum_{S\subseteq E}(-1)^{|S|}\lambda^{\mathrm{rk}(\mathsf{M})-\mathrm{rk}(S)}.
\]
It can be checked that $\chi_\mathsf{M}(\lambda)$ has a root at $\lambda=1$ for any positive-rank matroid, and the \textbf{reduced characteristic polynomial} is defined by
\[
\overline\chi_\mathsf{M}(\lambda)=\frac{\chi_\mathsf{M}(\lambda)}{\lambda-1}.
\]
We use the notation $\mu^a(\mathsf{M})$ and $\overline\mu^a(\mathsf{M})$ for the (unsigned) coefficients of these polynomials:
\[
\chi_\mathsf{M}(\lambda)=\sum_{a=0}^{\mathrm{rk}(\mathsf{M})}(-1)^a\mu^a(\mathsf{M})\lambda^{\mathrm{rk}(\mathsf{M})-a}\;\;\;\text{ and }\;\;\;\overline\chi_\mathsf{M}(\lambda)=\sum_{a=0}^{\mathrm{rk}(\mathsf{M})-1}(-1)^a\overline\mu^a(\mathsf{M})\lambda^{\mathrm{rk}(\mathsf{M})-1-a}.
\]
The Heron--Rota--Welsh Conjecture, developed in \cite{Rota,Heron,Welsh}, asserts that the sequence of nonnegative integers $\mu^0(\mathsf{M}),\dots,\mu^{\mathrm{rk}(\mathsf{M})}(\mathsf{M})$ is unimodal and log-concave:
\[
0\leq \mu^0(\mathsf{M})\leq \dots\leq \mu^k(\mathsf{M})\geq\cdots\geq\mu^{\mathrm{rk}(\mathsf{M})}(\mathsf{M})\geq 0\;\;\;\text{ for some }\;\;\;k\in\{0,\dots,\mathrm{rk}(\mathsf{M})\}
\]
and
\[
\mu^k(\mathsf{M})^2\geq\mu^{k-1}(\mathsf{M})\mu^{k+1}(\mathsf{M})\;\;\;\text{ for every }\;\;\;k\in\{1,\dots,\mathrm{rk}(\mathsf{M})-1\}.
\]
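For instance, for the uniform matroid $\mathsf{U}_{2,3}$ one computes directly from the definition that $\chi_{\mathsf{U}_{2,3}}(\lambda)=\lambda^2-3\lambda+2$, so $(\mu^0,\mu^1,\mu^2)=(1,3,2)$, which is unimodal and log-concave since $3^2\geq 1\cdot 2$; the reduced characteristic polynomial is $\overline\chi_{\mathsf{U}_{2,3}}(\lambda)=\lambda-2$, with $(\overline\mu^0,\overline\mu^1)=(1,2)$.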
The Heron--Rota--Welsh Conjecture was first proved by Adiprasito, Huh, and Katz \cite{AHK}. Our aim here is to show how this result also follows from the developments in this paper.
It is elementary to check that the unimodality and log-concavity of the coefficients of the characteristic polynomial are implied by the analogous properties for the coefficients of the reduced characteristic polynomial. The bridge from characteristic polynomials to the content of this paper, then, is a result of Huh and Katz \cite[Proposition~5.2]{HuhKatz} (see also \cite[Proposition~9.5]{AHK} and \cite[Proposition~3.11]{DastidarRoss}), which asserts that
\[
\overline\mu^a(\mathsf{M})=\deg_{\Sigma_\mathsf{M}}(\alpha^{d-a}\beta^a)
\]
where $\mathrm{rk}(\mathsf{M})=d+1$ and $\alpha,\beta\in A^1(\Sigma_\mathsf{M})$ are defined by
\[
\alpha=\sum_{e_0\in F}X_F\;\;\;\text{ and }\;\;\;\beta=\sum_{e_0\notin F}X_F
\]
for some $e_0\in E$ (these Chow classes are independent of the choice of $e_0$).
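In the running example $\mathsf{U}_{2,3}$ with $e_0=1$, we have $d=1$, $\alpha=X_{\{1\}}$, and $\beta=X_{\{2\}}+X_{\{3\}}$; since the degree map assigns the value $1$ to each ray class of the one-dimensional balanced fan $\Sigma_{\mathsf{U}_{2,3}}$, this gives $\overline\mu^0=\deg_{\Sigma_{\mathsf{U}_{2,3}}}(\alpha)=1$ and $\overline\mu^1=\deg_{\Sigma_{\mathsf{U}_{2,3}}}(\beta)=2$, matching the coefficients of $\overline\chi_{\mathsf{U}_{2,3}}(\lambda)=\lambda-2$ computed above.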
Choose any $e_0\in E$, and let $*\in\mathrm{Inn}(N_\mathbb{R})$ be the inner product with orthonormal basis $\{u_e\mid e\neq e_0\}\subseteq N_\mathbb{R}=\mathbb{R}^E/\mathbb{R} u_E$. For two flats $F_1,F_2\in\mathcal{L}^*$, we compute
\[
u_{F_1}*u_{F_2}=\begin{cases}
|F_1\cap F_2| & e_0\notin F_1\text{ and } e_0\notin F_2,\\
-|F_1\cap F_2^c| & e_0\notin F_1\text{ and } e_0\in F_2,\\
|F_1^c\cap F_2^c| & e_0\in F_1\text{ and } e_0\in F_2.\\
\end{cases}
\]
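In the running example $\mathsf{U}_{2,3}$ with $e_0=1$, the basis $\{u_2,u_3\}$ is orthonormal and $u_{\{1\}}=-u_2-u_3$, so for instance $u_{\{2\}}*u_{\{3\}}=|\{2\}\cap\{3\}|=0$, $u_{\{1\}}*u_{\{2\}}=-|\{2\}\cap\{1\}^c|=-1$, and $u_{\{1\}}*u_{\{1\}}=|\{1\}^c\cap\{1\}^c|=2$, in agreement with the formula above.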
Define $z^\alpha,z^\beta\in\mathbb{R}^{\Sigma_\mathsf{M}(1)}=\mathbb{R}^{\mathcal{L}^*}$ by
\[
z^\alpha_F=\begin{cases}
1 & e_0\in F,\\
0 & e_0\notin F,
\end{cases}
\;\;\;\text{ and }\;\;\;
z^\beta_F=\begin{cases}
1 & e_0\notin F,\\
0 & e_0\in F,
\end{cases}
\]
so that $D(z^\alpha)=\alpha$ and $D(z^\beta)=\beta$ in $A^1(\Sigma_\mathsf{M})$. The following lemma allows us to connect characteristic polynomials to mixed volumes of normal complexes.
\begin{lemma}
$z^\alpha,z^\beta\in\overline{\Cub}(\Sigma_\mathsf{M},*)$.
\end{lemma}
\begin{proof}
We must argue that $w_{\sigma,*}(z^\alpha),w_{\sigma,*}(z^\beta)\in\sigma$ for every cone $\sigma\in\Sigma_\mathsf{M}$. Consider a flag $\mathcal{F}=(F_1\subsetneq\dots\subsetneq F_k)$ corresponding to a cone $\sigma_\mathcal{F}\in\Sigma_{\mathsf{M}}$. It suffices to prove that
\begin{equation}\label{wvector1}
w_{\sigma_\mathcal{F},*}(z^\alpha)=
\begin{cases}
\frac{1}{|F_k^c|}u_{F_k} & e_0\in F_k\\
0 & e_0\notin F_k,
\end{cases}
\end{equation}
and
\begin{equation}\label{wvector2}
w_{\sigma_\mathcal{F},*}(z^\beta)=
\begin{cases}
\frac{1}{|F_1|}u_{F_1} & e_0\notin F_1\\
0 & e_0\in F_1.
\end{cases}
\end{equation}
We verify \eqref{wvector1}; the verification of \eqref{wvector2} is similar.
To verify \eqref{wvector1}, first suppose that $e_0\in F_k$. Then for any $j=1,\dots,k$, it follows from the definition of $*$ that
\[
u_{F_k}*u_{F_j}=
\begin{cases}
|F_k^c| & e_0\in F_j,\\
0 & e_0\notin F_j.
\end{cases}
\]
Using this, we verify that $\frac{1}{|F_k^c|}u_{F_k}$ satisfies the defining equations of $w_{\sigma_\mathcal{F},*}(z^\alpha)$:
\[
\frac{1}{|F_k^c|}u_{F_k}*u_{F_j}=z_{F_j}^\alpha\;\;\;\text{ for all }\;\;\;j=1,\dots,k.
\]
Now suppose that $e_0\notin F_k$. Then $e_0\notin F_j$ for any $j=1,\dots,k$, so $z_{F_j}^\alpha=0$. Thus, the defining equations for $w_{\sigma_\mathcal{F},*}(z^\alpha)$ become
\[
w_{\sigma_\mathcal{F},*}(z^\alpha)*u_{F_j}=0\;\;\;\text{ for all }\;\;\;j=1,\dots,k,
\]
and since $w_{\sigma_\mathcal{F},*}(z^\alpha)$ lies in $N_{\sigma_\mathcal{F},\mathbb{R}}$, on which $*$ restricts to an inner product, this forces $w_{\sigma_\mathcal{F},*}(z^\alpha)=0$.
\end{proof}
It follows from Theorem~\ref{thm:mvol=mdeg} that the coefficients of the reduced characteristic polynomial have a volume-theoretic interpretation:
\[
\overline\mu^a(\mathsf{M})=\operatorname{MVol}_{\Sigma_\mathsf{M},*}(\underbrace{z^\alpha,\dots,z^\alpha}_{d-a},\underbrace{z^\beta,\dots,z^\beta}_{a}).
\]
By \cite[Proposition~7.4]{NR}, we know that $\operatorname{Cub}(\Sigma_\mathsf{M},*)\neq\emptyset$, and since the cubical cone is the interior of the pseudocubical cone, we may approximate $z^\alpha,z^\beta\in\overline{\Cub}(\Sigma_\mathsf{M},*)$ with $z_t^\alpha,z_t^\beta\in\operatorname{Cub}(\Sigma_\mathsf{M},*)$ such that
\[
\lim_{t\rightarrow 0}z_t^\alpha=z^\alpha\;\;\;\text{ and }\;\;\;\lim_{t\rightarrow 0}z_t^\beta=z^\beta.
\]
Define
\[
\overline\mu_t^a(\mathsf{M})=\operatorname{MVol}_{\Sigma_\mathsf{M},*}(\underbrace{z_t^\alpha,\dots,z_t^\alpha}_{d-a},\underbrace{z_t^\beta,\dots,z_t^\beta}_{a}).
\]
By Theorem~\ref{thm:bergman}, we know that $(\Sigma_\mathsf{M},*)$ is AF, and the AF inequalities applied to the mixed volumes $\overline\mu_t^a(\mathsf{M})$ imply that the sequence $\overline\mu_t^0(\mathsf{M}),\dots,\overline\mu_t^d(\mathsf{M})$ is log-concave. Since mixed volumes of cubical values are positive (Proposition~\ref{prop:positive}), and since all log-concave sequences of positive values are unimodal, we see that the sequence $\overline\mu_t^0(\mathsf{M}),\dots,\overline\mu_t^d(\mathsf{M})$ is also unimodal. Since both unimodality and log-concavity are preserved under limits, we conclude that
\[
\overline\mu^0(\mathsf{M}),\dots,\overline\mu^d(\mathsf{M})
\]
is unimodal and log-concave, verifying the Heron--Rota--Welsh Conjecture.
\bibliographystyle{alpha}
\newcommand{\etalchar}[1]{$^{#1}$}
| {
"timestamp": "2023-01-16T02:01:22",
"yymm": "2301",
"arxiv_id": "2301.05278",
"language": "en",
"url": "https://arxiv.org/abs/2301.05278",
"abstract": "Normal complexes are orthogonal truncations of polyhedral fans. In this paper, we develop the study of mixed volumes for normal complexes. Our main result is a sufficiency condition that ensures when the mixed volumes of normal complexes associated to a given fan satisfy the Alexandrov-Fenchel inequalities. By specializing to Bergman fans of matroids, we give a new proof of the Heron-Rota-Welsh Conjecture as a consequence of the Alexandrov-Fenchel inequalities for normal complexes.",
"subjects": "Combinatorics (math.CO); Algebraic Geometry (math.AG)",
"title": "Mixed volumes of normal complexes",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9890130557502123,
"lm_q2_score": 0.7154239836484144,
"lm_q1q2_score": 0.7075636602251082
} |
https://arxiv.org/abs/1407.7480 | Analysis of the diffuse-domain method for solving PDEs in complex geometries | In recent work, Li et al.\ (Comm.\ Math.\ Sci., 7:81-107, 2009) developed a diffuse-domain method (DDM) for solving partial differential equations in complex, dynamic geometries with Dirichlet, Neumann, and Robin boundary conditions. The diffuse-domain method uses an implicit representation of the geometry where the sharp boundary is replaced by a diffuse layer with thickness $\epsilon$ that is typically proportional to the minimum grid size. The original equations are reformulated on a larger regular domain and the boundary conditions are incorporated via singular source terms. The resulting equations can be solved with standard finite difference and finite element software packages. Here, we present a matched asymptotic analysis of general diffuse-domain methods for Neumann and Robin boundary conditions. Our analysis shows that for certain choices of the boundary condition approximations, the DDM is second-order accurate in $\epsilon$. However, for other choices the DDM is only first-order accurate. This helps to explain why the choice of boundary-condition approximation is important for rapid global convergence and high accuracy. Our analysis also suggests correction terms that may be added to yield more accurate diffuse-domain methods. Simple modifications of first-order boundary condition approximations are proposed to achieve asymptotically second-order accurate schemes. Our analytic results are confirmed numerically in the $L^2$ and $L^\infty$ norms for selected test problems. | \section{Introduction}
There are many problems in computational physics that involve solving partial
differential equations (PDEs) in complex geometries. Examples include fluid
flows in complicated systems, vein networks in plant leaves, and tumours in
human bodies. Standard solution methods for PDEs in complex domains typically
involve triangulation and unstructured grids. This rules out coarse-scale
discretizations and thus efficient geometric multi-level solutions. Also, mesh
generation for three-dimensional complex geometries remains a challenge, in
particular if we allow the geometry to evolve with time.
In the past several years, there has been much effort put into the development
of numerical methods for solving partial differential equations in complex
domains. However, most of these methods typically require tools
not frequently available in standard finite element and finite difference
software packages. Examples of such approaches include the extended and
composite finite element methods (e.g.,
\cite{GR07,dolbow09,fries10,duddu11,he11,PRE11,byfut12,bernauer12}), immersed
interface methods (e.g., \cite{LL94,LiIto_2006,SethianShan_2008,li12,wan12}),
virtual node methods with embedded boundary conditions (e.g.,
\cite{Bedrossian10,Zhu12,Hellrung12}), matched interface and boundary methods
(e.g., \cite{zhou06,zhao09,zhao10,xia11,zhou12}), modified finite
volume/embedded boundary/cut-cell methods/ghost-fluid methods (e.g.,
\cite{GlimmMarchesinMcBryan_1981,JohansenColella_1998,FedkiwAslamMerrimanOsher_1999,GibouFedkiwChengKang_2002,GibouFedkiw_2005,JiLienYee_2006,MacklinLowengrub_2006,zhong07,MacklinLowengrub_2008,Colellaetal_2008,lui09,Uzgoren09,OSK09,cisternino12,Coco13,Papac10,Papac13,Helgadottir11,Theillard13}).
In another approach, known as the fictitious domain method (e.g.,
\cite{GlowinskiPanPeriaux_CMAME_1994,GlowinskiPanWellsZhou_JCP_1996,RamiereAngotBelliard_CMAME_2007,lohner07}),
the original system is either augmented with equations for Lagrange multipliers
to enforce the boundary conditions, or the penalty method is used to enforce
the boundary conditions weakly. See also \cite{Gibou13} for a review of
numerical methods for solving the Poisson equation, the diffusion equation and
the Stefan problem on irregular domains.
An alternate approach for simulating PDEs in complex domains, which does not
require any modification of standard finite element or finite difference
software, is the diffuse-domain method. In this method, the domain is
represented implicitly by a phase-field function, which is an approximation of
the characteristic function of the domain. The domain boundary is replaced by
a narrow diffuse interface layer such that the phase-field function rapidly
transitions from one inside the domain to zero in the exterior of the domain.
The boundary of the domain can thus be represented as an isosurface of the
phase-field function. The PDE is then reformulated on a larger, regular domain
with additional source terms that approximate the boundary conditions.
Although uniform grids can be used, local grid refinement near domain
boundaries improves efficiency and enables the use of smaller interface
thicknesses than are achievable using uniform grids. A related approach
involves the level-set method \cite{osher88,Sethian99,osher03a} to describe the
implicitly embedded surface and to obtain the appropriate surface operators
(e.g., \cite{greer06}).
The diffuse-domain method (DDM) was introduced by Kockelkoren et al.\
\cite{Kockelkoren03} to study diffusion inside a cell with zero Neumann
boundary conditions at the cell boundary (a similar approach was also used in
\cite{Bueno06b,Bueno06a} using spectral methods). The DDM was later used to
simulate electrical waves in the heart \cite{Fenton05} and membrane-bound
Turing patterns \cite{Levine05}. More recently, diffuse-interface methods have
been developed for solving PDEs on stationary \cite{ratz} and evolving
\cite{DD07,dziuk08a,dziuk08b,elliott09a,elliott09,DE12} surfaces.
Diffuse-domain methods for solving PDEs in complex evolving domains with
Dirichlet, Neumann and Robin boundary conditions were developed by Li et al.\
\cite{Li09} and by Teigen et al.\ \cite{Teigen09-b} who modelled bulk-surface
coupling. The DDM was also used by Aland et al.\ \cite{Aland10} to simulate
incompressible two-phase flows in complex domains in 2D and 3D, and by Teigen
et al.\ \cite{Teigen11} to study two-phase flows with soluble surfactants.
Li et al.\ \cite{Li09} showed that in the DDM there exist several
approximations to the physical boundary conditions that converge asymptotically
to the correct sharp-interface problem. Li et al.\ presented some numerical
convergence results for a few selected problems and observed that the choice of
boundary condition can significantly affect the accuracy of the DDM. However,
Li et al.\ did not perform a quantitative comparison between the different
boundary-condition approximations, nor did they estimate convergence rates.
Further, Li et al.\ did not address the source of the different levels of
accuracy they observed for the different boundary-condition approximations.
In the context of Dirichlet boundary conditions, Franz et al.\
\cite{Franz12} recently presented a rigorous error analysis of the DDM for
a reaction-diffusion equation and found that the method converges only with
first-order accuracy in the interface thickness parameter $\epsilon$, which
they confirmed numerically. Similar results were obtained numerically by
Reuter et al.\ \cite{Reuter12} who reformulated the DDM using an integral
equation solver. Reuter et al.\ demonstrated that their generalized DDM,
with appropriate choices of approximate surface delta functions, converges
with first-order accuracy to solutions of the Poisson equation with Dirichlet
boundary conditions.
Here, we focus on Neumann and Robin boundary conditions and we present
a matched asymptotic analysis of general diffuse-domain methods in a fixed
complex geometry, focusing on the Poisson equation for Robin boundary
conditions and a steady reaction-diffusion equation for Neumann boundary
conditions. However, our approach applies to transient problems and more
general equations in the same way as shown in \cite{Li09}. Our analysis
shows that for certain choices of the boundary condition approximations, the
DDM is second-order accurate in $\epsilon$, which in practice is proportional
to the smallest mesh size. However, for other choices the DDM is only
first-order accurate. This helps to explain why the choice of boundary
condition approximation is important for rapid global convergence and high
accuracy.
Further, inspired by the work of Karma and Rappel \cite{Karma98} and Almgren
\cite{Almgren99}, who incorporated second-order corrections in their phase
field models of crystal growth and by the work of Folch et al.\
\cite{Folch1999} who added second-order corrections in phase-field models of
advection, we also suggest correction terms that may be added to yield a more
accurate version of the diffuse-domain method. Simple modifications of
first-order boundary condition approximations are proposed to achieve
asymptotically second-order accurate schemes. Our analytic results are
confirmed numerically for selected test problems.
The outline of the paper is as follows. In \cref{sec:dda} we introduce and
present an analysis of general diffuse-domain methods. In
\cref{sec:discretization} the numerical methods are described, and in
\cref{sec:results} the test cases are introduced and numerical results are
presented and discussed. We finally give some concluding remarks in
\cref{sec:conclusion}.
\section{The diffuse-domain method}
\label{sec:dda}
The main idea of the DDM is to extend PDEs that are defined inside complex and
possibly time-dependent domains into larger, regular domains. As a model
problem, consider the Poisson equation in a domain $D$,
\[
\boldsymbol\Delta u = f,
\]
with Neumann or Robin boundary conditions. As shown in Li et al.\ \cite{Li09},
the results for the Poisson equation can be used directly to obtain
diffuse-domain methods for more general second-order partial differential
equations in evolving domains.
The DDM equation is defined in a larger, regular domain $\Omega\supset D$ as
\begin{equation}
\div(\phi\grad u) + \text{BC} = \phi f,
\label{ddm}
\end{equation}
see \cref{fig:ddadomain}. Here $\phi$ approximates the characteristic function
of $D$,
\[
\chi_D = \begin{cases}
1 & \text{if $x\in D$,} \\
0 & \text{if $x\notin D$,}
\end{cases}
\]
and BC is chosen to approximate the physical boundary condition, cf.\
\cite{Li09}. This typically involves diffuse-interface approximations of the
surface delta function. A standard approximation of the characteristic
function is the phase-field function,
\begin{equation}
\chi_D \simeq \phi(\vct x,t) = \frac{1}{2} \left( 1
- \tanh \left( \frac{3r(\vct x,t)}{\epsilon} \right) \right).
\label{eq:characteristic}
\end{equation}
Here $\epsilon$ is the interface thickness and $r(\vct x,t)$ is the
signed-distance function with respect to $\partial D$, which is taken to be
negative inside $D$.
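For concreteness, the following minimal numpy sketch builds this profile on a uniform grid; the unit-disc geometry and all parameter values are illustrative choices of ours, not prescribed by the method:
\begin{verbatim}
import numpy as np

def phase_field(r, eps):
    # Diffuse characteristic function of D from a signed distance r (negative inside D).
    return 0.5 * (1.0 - np.tanh(3.0 * r / eps))

# Illustrative setup: D is the unit disc, embedded in Omega = [-2, 2]^2.
n, eps = 256, 0.1
x = np.linspace(-2.0, 2.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
r = np.hypot(X, Y) - 1.0        # signed distance to the unit circle, negative inside D
phi = phase_field(r, eps)       # ~1 inside D, ~0 outside, smooth over a width ~eps
\end{verbatim}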
\begin{figure}[tbp]
\centering
\begin{tikzpicture}[scale=0.8]
\draw[thick] (0,0) rectangle(9,5);
\draw[thick] (2,1)
.. controls (1,1) and (1,4) .. (3,3.5)
.. controls (5,3) and (4,5) .. (6,4)
node[above right] {$\partial D$}
.. controls (8,3) and (8,1.5) .. (7,1.5)
.. controls (3,1.5) and (3,1) .. (2,1);
\node at (2.3,2.3) {$D$};
\node at (7.6,0.8) {$\Omega$};
\node at (4.8,2.4) {$\chi_D=1$};
\node at (5.4,0.5) {$\chi_D=0$};
\end{tikzpicture}
\caption{A complex domain $D$ covered by a larger, regular domain $\Omega$.}
\label{fig:ddadomain}
\end{figure}
As Li et al.\ \cite{Li09} described, there are a number of different choices
for BC in \cref{ddm}. For example, in the Neumann case, where $\vct n\cdot\nabla
u = g$ on $\partial D$, one may take:
\[
\text{BC} =
\begin{cases}
\text{BC1} = g|\nabla\phi|,
\quad\text{or}\\
\text{BC2} = \epsilon g|\nabla\phi|^2.
\end{cases}
\]
In the Robin case, where $\vct n\cdot\nabla u = k(u-g)$ on $\partial D$, one may use
analogous approximations:
\begin{equation}
\text{BC} =
\begin{cases}
\text{BC1} = k(u - g) |\nabla\phi |,
\quad\text{or} \\
\text{BC2} = \epsilon k(u - g) |\nabla\phi|^2.
\end{cases}
\label{eq:bc_robin}
\end{equation}
Note that the terms $|\nabla\phi|$ and $\epsilon|\nabla\phi|^2$ approximate the
surface delta function. Following Li et al.\ \cite{Li09} we assume that $g$ is
extended constant in the normal direction off $\partial D$ and that $f$ is
smooth up to $\partial D$ and is extended into the exterior of $D$ constant in
the normal direction. We next perform an asymptotic analysis to estimate the
rate of convergence of the corresponding approximations.
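Before doing so, we note that the surface-delta property of these terms is easy to check numerically. The sketch below (our own illustration, with arbitrary parameter values) samples the profile \cref{eq:characteristic} along the normal direction and confirms that both $|\nabla\phi|$ and $\epsilon|\nabla\phi|^2$ integrate to approximately one across the interface:
\begin{verbatim}
import numpy as np

eps = 0.1
x = np.linspace(-1.0, 1.0, 20001)   # coordinate along the normal; interface at x = 0
dx = x[1] - x[0]
phi = 0.5 * (1.0 - np.tanh(3.0 * x / eps))
grad_phi = np.gradient(phi, dx)     # d(phi)/dx

delta_bc1 = np.abs(grad_phi)        # |grad phi|
delta_bc2 = eps * grad_phi**2       # eps |grad phi|^2

print(np.sum(delta_bc1) * dx)       # ~ 1.0
print(np.sum(delta_bc2) * dx)       # ~ 1.0
\end{verbatim}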
\subsection{Asymptotic analysis}
\label{sec:asymptotic_analysis}
To show asymptotic convergence, we need to consider the expansions of the
diffuse-domain variables in powers of the interface thickness $\epsilon$ in
regions close to and far from the interface. These are called inner and outer
expansions, respectively. The two expansions are then matched in a region
where both are valid, see \cref{fig:regions}, which provides the boundary
conditions for the outer variables. We refer the reader to \cite{Caginalp88}
and \cite{Pego88} for more details and discussion of the general procedure.
\begin{figure}[b!p]
\centering
\begin{tikzpicture}
[
yscale=0.8,
interface/.style={thick},
inner/.style={fill=gray,dotted,fill opacity=0.2},
outer/.style={fill=gray,dashed,fill opacity=0.3},
labels/.style={above right, font=\small},
]
\begin{scope}[scale=0.8]
\draw[thick] (0,0) rectangle(9,5);
\draw[thick] (2,1)
.. controls (1,1) and (1,4) .. (3,3.5)
.. controls (5,3) and (4,5) .. (6,4)
.. controls (8,3) and (8,1.5) .. (7,1.5) node (g1) {}
.. controls (3,1.5) and (3,1) .. (2,1);
\node at (7.4,0.4) {$\Omega$};
\node at (2.0,2.3) {$D$};
\coordinate (a) at (2.4,1.6);
\coordinate (b) at (3.4,1.6);
\coordinate (c) at (2.4,0.6);
\coordinate (d) at (3.4,0.6);
\draw[very thin] (c) rectangle (b);
\end{scope}
\begin{scope}[xshift=2.5cm,yshift=4.4cm]
\draw[very thin] (a) -- (0, 3);
\draw[very thin] (b) -- (8, 3);
\draw[very thin] (c) -- (0,-3);
\draw[very thin] (d) -- (8,-3);
\fill[white] (0,-3) rectangle (8,3);
\draw[very thin, densely dotted] (a) -- (0, 3);
\draw[very thin, densely dotted] (b) -- (8, 3);
\draw[interface] (0,0)
.. controls (2, 0.5) and (3, 0.5) .. (4,0)
.. controls (5,-0.5) and (6,-1.0) .. (8,0);
\draw[inner] (0,2.0)
.. controls (2,2.5) and (3,2.5) .. (4,2.0)
.. controls (5,1.5) and (6,1.0) .. (8,2.0) -- (8,-2.0)
.. controls (6,-3.0) and (5,-2.5) .. (4,-2.0)
.. controls (3,-1.5) and (2,-1.5) .. (0,-2.0) -- cycle;
\draw[outer] (0,3.0) -- (8,3.0) -- (8,1.0)
.. controls (6,0.0) and (5,0.5) .. (4,1.0)
.. controls (3,1.5) and (2,1.5) .. (0,1.0) -- cycle;
\draw[outer] (0,-3.0) -- (8,-3.0) -- (8,-1.0)
.. controls (6,-2.0) and (5,-1.5) .. (4,-1.0)
.. controls (3,-0.5) and (2,-0.5) .. (0,-1.0) -- cycle;
\node[labels] at (0.1,2.25) {Outer region};
\node[labels] at (0.1,1.25) {Overlapping region};
\node[labels] at (0.1,0.25) {Inner region};
\draw[decorate,decoration=brace] (8.1, 3.0) --
node[right=0.5em] {$D$} (8.1, 0.1);
\draw[decorate,decoration=brace] (8.1,-0.1) --
node[right=0.5em] {$\Omega$} (8.1,-3.0);
\node[right=0.5em] at (8.1,0) {$\partial D$};
\end{scope}
\end{tikzpicture}
\caption{A sketch of the regions used for the matched asymptotic expansions.
The inner region is marked with a light grey colour and the outer region
with a slightly darker grey colour. The overlapping region is marked with
the darkest grey colour.}
\label{fig:regions}
\end{figure}
The outer expansion for the variable $u(\vct x;\epsilon)$ is simply
\begin{equation}
u(\vct x;\epsilon) = u^{(0)}(\vct x)
+ \epsilon u^{(1)}(\vct x)
+ \epsilon^2 u^{(2)}(\vct x) + \cdots.
\label{eq:outer}
\end{equation}
The outer expansion of an equation is then found by inserting the expanded
variables into the equation.
The inner expansion is found by introducing a local coordinate system near the
interface $\partial D$,
\[
\vct x(\vct s,z;\epsilon) = \vct X(\vct s;\epsilon)
+ \epsilon z\vct n(\vct s;\epsilon),
\]
where $\vct X(\vct s;\epsilon)$ is a parametrization of the interface, $\vct
n(\vct s;\epsilon)$ is the interface normal vector that points out of $D$, $z$
is the stretched variable
\[
z = \frac{r(\vct x)}{\epsilon},
\]
and $r$ is the signed distance from the point $\vct x$ to $\partial D$. In the
local coordinate system, the derivatives become
\begin{align*}
\grad &= \frac{1}{\epsilon}\vct n\partial_z
+ \frac{1}{1+\epsilon z\kappa}\boldsymbol\nabla_{\text s}, \\
\boldsymbol\Delta &= \frac{1}{\epsilon^2}\partial_{zz}
+ \frac{1}{\epsilon}\frac{\kappa}{1+\epsilon z\kappa}\partial_z
+ \frac{1}{1+\epsilon z\kappa}\boldsymbol\nabla_{\text s}
\cdot \left(\frac{1}{1+\epsilon z\kappa}\boldsymbol\nabla_{\text s}\right),
\end{align*}
where $\kappa\equiv\boldsymbol\nabla_{\text s}\cdot\vct n$ is the curvature of the interface. Note that
$\vct n = -\frac{\nabla\phi}{|\nabla\phi|}$. The inner variable $\hat u(z,\vct
s;\epsilon)$ is now given by
\[
\hat u(z,\vct s;\epsilon) \equiv u(\vct x;\epsilon)
= u(\vct X(\vct s;\epsilon) + \epsilon z\vct n(\vct s;\epsilon);\epsilon),
\]
and the inner expansion is
\begin{equation}
\hat u(z,\vct s;\epsilon) = \hat u^{(0)}(z,\vct s)
+ \epsilon \hat u^{(1)}(z,\vct s)
+ \epsilon^2\hat u^{(2)}(z,\vct s) + \cdots.
\label{eq:inner}
\end{equation}
To obtain the matching conditions, we assume that there is a region of overlap
where both the expansions are valid. In this region, the solutions have to
match. In particular, if we evaluate the outer expansion in the inner
coordinates, this must match the limits of the inner solutions away from the
interface, that is
\[
u(\vct X + \epsilon z\vct n;\epsilon) \simeq \hat u(z,\vct s;\epsilon).
\]
Inserting the expansions \cref{eq:outer,eq:inner} into this relation gives
\[
\begin{split}
u^{(0)}(\vct X + \epsilon z\vct n)
+ \epsilon u^{(1)}(\vct X + \epsilon z\vct n)
+ \epsilon^2u^{(2)}(\vct X + \epsilon z\vct n) + \cdots \\
\simeq \hat u^{(0)}(z,\vct s)
+ \epsilon \hat u^{(1)}(z,\vct s)
+ \epsilon^2\hat u^{(2)}(z,\vct s) + \cdots.
\end{split}
\]
The terms on the left-hand side can be expanded as a Taylor series,
\[
u^{(k)}(\vct X + \epsilon z\vct n) = u^{(k)}(\vct s)
+ \epsilon z\vct n\cdot\grad u^{(k)}(\vct s)
+ \frac{\epsilon^2z^2}{2} \vct n\cdot\grad\grad u^{(k)}(\vct s)\cdot\vct n
+ \cdots,
\]
where $k \in \mathbb N$ and $u^{(k)}(\vct s)\equiv u^{(k)}(\vct X(\vct
s;\epsilon))$. Now we end up with the matching equation
\[
\begin{split}
u^{(0)}(\vct s)
+& \epsilon\left(u^{(1)}(\vct s) + z\vct n\cdot\grad u^{(0)}(\vct s)\right) \\
+& \epsilon^2\left(
u^{(2)}(\vct s) + z\vct n\cdot\grad u^{(1)}(\vct s)
+ \frac{z^2}{2} \vct n\cdot\grad\grad u^{(0)}(\vct s)\cdot\vct n \right) \\
+& \cdots \simeq \hat u^{(0)}(z,\vct s) + \epsilon \hat u^{(1)}(z,\vct s)
+ \epsilon^2\hat u^{(2)}(z,\vct s) + \cdots,
\end{split}
\]
which must hold as the interface width is decreased, that is, as $\epsilon \to
0$. In the matching region it is required that $\epsilon z = \smallo 1$. Under
this condition, if we let $z \to \pm\infty$, we get the following asymptotic
matching conditions:
\begin{equation}
\ensuremath{\lim_{z\to\pm\infty}} \hat u^{(0)}(z,\vct s) = u^{(0)}(\vct s),
\label{eq:match1}
\end{equation}
and as $z \to \pm\infty$,
\begin{align}
\label{eq:match2}
\hat u^{(1)}(z,\vct s) &= u^{(1)}(\vct s)
+ z\vct n\cdot\grad u^{(0)}(\vct s) + \smallo 1, \\
\nonumber
\hat u^{(2)}(z,\vct s) &= u^{(2)}(\vct s)
+ z\vct n\cdot\grad u^{(1)}(\vct s) \\ &\quad
\label{eq:match3}
+ \frac{z^2}{2} \vct n\cdot\grad\grad u^{(0)}(\vct s)\cdot\vct n
+ \smallo 1,
\end{align}
where the quantities on the right-hand side are the limits from the interior
($-$) and exterior ($+$) of $D$. Here $\smallo 1$ means that the expressions
approach equality when $z\to\pm\infty$. That is, $\smallo 1$ is defined such
that if some function $f(z) = \smallo 1$, then we have $\ensuremath{\lim_{z\to\pm\infty}} f(z) = 0$.
\subsection{Poisson equation with Robin boundary conditions}
Now we are ready to consider the Poisson equation with Robin boundary
conditions,
\begin{equation}
\begin{alignedat}{2}
\boldsymbol\Delta u &= f & \ensuremath{\textup{in}\ } D, \\
\vct n\cdot\grad u &= k(u - g) \qquad & \ensuremath{\textup{on}\ } \partial D,
\end{alignedat}
\label{eq:poiss_robin}
\end{equation}
where $k\le 0$. Consider a general DDM approximation,
\begin{equation}
\div\left(\phi\grad u\right) + \frac{1}{\epsilon^2}\psi = \phi f,
\label{eq:general_dda}
\end{equation}
where $\psi$ represents the BC approximation in the DDM. The scaling factor
$1/\epsilon^2$ is taken for later convenience. If we assume that $\psi$ is
local to the interface (i.e., it vanishes to all orders in $\epsilon$ away from
$\partial D$) and that $f$ is independent of $\epsilon$ (i.e., it is smooth in
a neighbourhood of $\partial D$ and is extended constant in the normal
direction out of $D$), which is the case for the approximations BC1 and BC2
given in \cref{eq:bc_robin}, then the outer solution of this equation in $D$
satisfies
\begin{equation}
\begin{split}
\boldsymbol\Delta u^{(0)} &= f, \\
\boldsymbol\Delta u^{(1)} &= 0, \\
\boldsymbol\Delta u^{(k)} &= 0,\qquad k = 2,3,\dots.
\end{split}
\label{eq:dda_outer}
\end{equation}
Now, if $u^{(0)}$ satisfies \cref{eq:poiss_robin} and $u^{(1)} \ne 0$ then the
outer expansion is $u\approx u^{(0)} + \epsilon u^{(1)} + \dots$ and the DDM is
asymptotically first-order accurate. However, if $u^{(1)} = 0$, then $u\approx
u^{(0)} + \epsilon^2 u^{(2)} + \dots$ and the DDM is asymptotically
second-order accurate. Determining which of these is the case requires
matching the outer solutions to the solutions of the inner equations.
\subsubsection{Matching conditions}
Before we analyse the inner expansions, we develop a higher-order matching
condition based on \cref{eq:match2,eq:match3} that matches a Robin boundary
condition for $u^{(1)}$. First we take the derivative of \cref{eq:match3} with
respect to $z$ and subtract $k$ times \cref{eq:match2}, which gives
\[
\hat u^{(2)}_z - k\hat u^{(1)}
= - ku^{(1)} - kz\vct n\cdot\grad u^{(0)}
+ \vct n\cdot\grad u^{(1)} + z \vct n\cdot\grad\grad u^{(0)}\cdot\vct n.
\]
Move the terms that make up a Robin condition for $u^{(1)}$ to the left-hand
side, and move the rest to the right-hand side, that is
\begin{equation}
\vct n\cdot\grad u^{(1)} - ku^{(1)}
= \hat u^{(2)}_z
- k\hat u^{(1)}
+ kz\vct n\cdot\grad u^{(0)}
- z \vct n\cdot\grad\grad u^{(0)}\cdot\vct n.
\label{eq:robin_mc}
\end{equation}
The Laplacian can be decomposed into normal and tangential components as
\begin{equation}
\boldsymbol\Delta u = \vct n\cdot\grad\grad u\cdot\vct n + \kappa\vct n\cdot\grad u + \boldsymbol\Delta_{\text s} u,
\label{laplace decomp}
\end{equation}
which can be shown by writing the gradient vector as $\grad = \vct n\vct n\cdot\grad
+ \boldsymbol\nabla_{\text s}$. We can therefore write
\[
\vct n\cdot\grad\grad u^{(0)}\cdot\vct n
= f - \kappa\vct n\cdot\grad u^{(0)} - \boldsymbol\Delta_{\text s} u^{(0)}
= \hat f^{(0)} - \kappa k\left(u^{(0)} - \hat g\right) - \boldsymbol\Delta_{\text s} u^{(0)},
\]
where we have assumed that $u^{(0)}$ satisfies the system
\eqref{eq:poiss_robin}, as demonstrated below. If we insert this into the
matching condition \eqref{eq:robin_mc}, we get
\begin{equation}
\vct n\cdot\grad u^{(1)} - ku^{(1)}
= \hat u^{(2)}_z
- k\hat u^{(1)}
- z\left(\hat f^{(0)}
- \left(\kappa+k\right)k\left(u^{(0)}-\hat g\right)
- \boldsymbol\Delta_{\text s} u^{(0)}
\right),
\label{eq:robin_mc2}
\end{equation}
as $z\to - \infty$.
\subsubsection{Inner expansions}
\label{sec:dda_robin_inner}
Now consider the inner expansion of \cref{eq:general_dda},
\[
\frac{1}{\epsilon^2}\left(\phi\hat u_z\right)_z
+ \frac 1 \epsilon\frac{\kappa}{1+\epsilon z\kappa}\phi\hat u_z
+ \frac{\phi}{1+\epsilon z\kappa}\boldsymbol\nabla_{\text s}
\cdot\left(\frac{1}{1+\epsilon z\kappa}\boldsymbol\nabla_{\text s}\hat u\right)
+ \frac{1}{\epsilon^2}\hat\psi
= \phi\hat f.
\]
Expand $\hat u$, $\hat f$, $\hat \psi$ and $\displaystyle{\frac{1}{1+\epsilon
z\kappa}}$ in powers of $\epsilon$, to get
\begin{multline*}
\frac{1}{\epsilon^2}\left(\phi\hat u_z^{(0)}\right)_z
+ \frac{1}{\epsilon}\left(\phi\hat u_z^{(1)}\right)_z
+ \left(\phi\hat u_z^{(2)}\right)_z
+ \frac 1 \epsilon\kappa\phi\hat u_z^{(0)}
+ \kappa\phi\hat u_z^{(1)} - z\kappa^2\phi\hat u_z^{(0)} \\
+ \phi\boldsymbol\Delta_{\text s}\hat u^{(0)}
+ \frac{1}{\epsilon^2}\hat\psi^{(0)}
+ \frac{1}{\epsilon}\hat\psi^{(1)}
+ \hat\psi^{(2)}
= \phi\hat f^{(0)} + \bigo{\epsilon},
\end{multline*}
and then collect the leading-order terms. Note that because $f$ is smooth up
to $\partial D$ and extended constantly outside $D$ we have that $\hat f^{(0)}$
is independent of $z$. The lowest power of $\epsilon$ gives
\[
\left( \phi \hat u^{(0)}_z \right)_z = - \hat\psi^{(0)}.
\]
Suppose that $\hat\psi^{(0)} = 0$, which, as we show below, is the case for BC1
and BC2; then we obtain $\hat u_z^{(0)} = 0$. By the matching condition
\eqref{eq:match1}, this gives $\hat u^{(0)}(z,\vct s) = u^{(0)}(\vct s)$,
where $u^{(0)}(\vct s)$ is the limiting value of $u^{(0)}$.
The next order terms give
\begin{equation}
\left( \phi\hat u_z^{(1)} \right)_z = - \hat\psi^{(1)}.
\label{eq:first_bc}
\end{equation}
Integrating from $-\infty$ to $+\infty$ in $z$ and using the matching condition
\eqref{eq:match2}, we get
\[
\vct n\cdot\grad u^{(0)} = \eint{\hat\psi^{(1)}}.
\]
To obtain a Robin boundary condition for $u^{(0)}$, we need that
\[
\eint{\hat\psi^{(1)}} = k(u^{(0)} - g).
\]
Now consider the zeroth order terms,
\begin{equation}
\left( \phi\hat u_z^{(2)} \right)_z
= \phi\hat f^{(0)}
- \kappa\phi\hat u_z^{(1)}
- \phi\boldsymbol\Delta_{\text s} u^{(0)}
- \hat\psi^{(2)}.
\label{eq:ddarobin0}
\end{equation}
If we subtract
\[
\left(\phi k\hat u^{(1)} + z\phi\left(
\hat f^{(0)} - (\kappa+ k)k\left(u^{(0)}-\hat g\right) - \boldsymbol\Delta_{\text s} u^{(0)}
\right)\right)_z
\]
from both sides of \cref{eq:ddarobin0}, we get
\begin{multline*}
\left(\phi\hat u^{(2)}_z
- \phi k\hat u^{(1)}
- z\phi\left(\hat f^{(0)}
- (\kappa+k) k\left(u^{(0)}-\hat g\right)
- \boldsymbol\Delta_{\text s} u^{(0)}
\right) \right)_z \\
= - \hat\psi^{(2)} - k\phi_z\hat u^{(1)}
- z\phi_z\left(\hat f^{(0)}
- (\kappa +k)k\left(u^{(0)}-\hat g\right)
- \boldsymbol\Delta_{\text s} u^{(0)} \right) \\
- \phi(\kappa + k)\left(\hat u^{(1)}_z-k\left(u^{(0)}-g\right)\right),
\end{multline*}
where we have taken into account the cancellation of terms and used the fact
that $\hat f^{(0)}$ and $\hat g$ are independent of $z$. The latter
holds when $g$ is extended as a constant in the normal direction off
$\partial D$, i.e., $\hat g(z,\vct s) = g(\vct s)$ is independent of $z$ and
$\epsilon$. Next, we integrate and use the matching condition
\eqref{eq:robin_mc2} on the left-hand side,
\begin{multline}
\vct n\cdot\grad u^{(1)} - k u^{(1)}
= \eint{\left(\hat\psi^{(2)} + k\phi_z\hat u^{(1)}
+ \phi(\kappa + k)\left(\hat u^{(1)}_z-k\left(u^{(0)}-g\right)\right)
\right)} \\
+ \left(\hat f^{(0)} - (\kappa + k)k\left(u^{(0)} - \hat g\right)
- \boldsymbol\Delta_{\text s} u^{(0)}\right) \eint{z\phi_z}.
\label{order 1 BC}
\end{multline}
If the right-hand side of \cref{order 1 BC} vanishes, then we obtain
$\vct n\cdot\grad u^{(1)} - k u^{(1)}=0$ from which we can conclude that the outer
solution $u^{(1)} = 0$ since $u^{(1)}$ is harmonic: $\boldsymbol\Delta u^{(1)} = 0$. Next
we analyse the boundary condition approximations BC1 and BC2.
\subsubsection{Analysis of BC1}
The BC1 approximation corresponds to
\[
\frac{1}{\epsilon^2}\psi = k(u-g)|\nabla\phi|.
\]
Since $\phi$ is constant in the outer regions ($\phi=1$ in the interior of $D$
and $\phi=0$ in the exterior), $\nabla\phi$, and hence $\psi$, vanishes in the
outer regions. In the inner region, we have
\begin{equation}
\frac{1}{\epsilon^2}\psi
= -\frac{k}{\epsilon}\left(\hat u - g\right)\phi_z
= -\frac{k}{\epsilon}\left(\hat u^{(0)}- g\right)\phi_z
- k\hat u^{(1)}\phi_z + \bigo{\epsilon},
\label{expansion of BC1}
\end{equation}
where we have used that $\hat g(z,s) = g(s)$ and is independent of $z$ and
$\epsilon$. Since $\hat\psi^{(0)} = 0$, we conclude from our analysis in
\cref{sec:dda_robin_inner} that $\hat u^{(0)}_z = 0$ and hence $\hat u^{(0)}
= u^{(0)}$.
The next orders of \cref{expansion of BC1} give
\[
\hat\psi^{(1)} = -k\left(u^{(0)} - g\right)\phi_z,
\qquad\text{and}\quad
\hat\psi^{(2)} = -k\hat u^{(1)}\phi_z.
\]
A direct calculation then shows that
\[
\eint{\hat\psi^{(1)}} = k(u^{(0)} - g),
\]
as desired. Thus, the leading order outer solution $u^{(0)}$ satisfies the
problem (\ref{eq:poiss_robin}).
To continue, we must first consider \cref{eq:first_bc},
\[
\left(\phi\hat u_z^{(1)}\right)_z = k\left(u^{(0)} - g\right)\phi_z,
\]
from which we get
\[
\hat u^{(1)}(z,\vct s) = u^{(1)}(\vct s) + zk\left(u^{(0)} - g\right),
\]
where $u^{(1)}(\vct s)$ is the limiting value of the outer solution (e.g., see
\cref{eq:match2}). Combining this with \cref{order 1 BC}, we obtain
\begin{equation}
\vct n\cdot\grad u^{(1)} - k u^{(1)} = \left(\hat f^{(0)}
- (\kappa + k)k\left(u^{(0)} - g\right)
- \boldsymbol\Delta_{\text s}\hat u^{(0)}\right)\eint{z\phi_z}.
\label{order 1 BC 1}
\end{equation}
Further, it follows from the definition of the phase-field
function~\eqref{eq:characteristic} that $z\phi_z$ is an odd function.
Therefore the integral on the right-hand side of \cref{order 1 BC 1} is
equal to zero. Thus $\vct n\cdot\grad u^{(1)}-k u^{(1)} = 0$ and so by our
arguments below \cref{eq:dda_outer}, the DDM with BC1 is second-order accurate
in $\epsilon$.
\subsubsection{Analysis of BC2}
When the BC2 approximation is used, we obtain
\[
\frac{1}{\epsilon^2}\psi = \epsilon k(u-g)|\nabla\phi|^2.
\]
Accordingly, in the inner region, we obtain
\[
\hat\psi^{(0)} = 0,
\qquad
\hat\psi^{(1)} = k\left( u^{(0)} - g\right)\phi_z^2,
\qquad\text{and}\quad
\hat\psi^{(2)} = k \hat u^{(1)}\phi_z^2.
\]
Since $\eint{\phi_z^2} = 1$, we get
\[
\eint{\hat\psi^{(1)}} = k(u^{(0)} - \hat g),
\]
as desired. From \cref{eq:first_bc} we have
\[
\left(\phi\hat u_z^{(1)}\right)_z = -k\left(u^{(0)} - g\right)\phi_z^2.
\]
Using that $\phi_z = -6\phi(1-\phi)$, this gives
\begin{equation}
\hat u^{(1)}(z,\vct s) = C(\vct s) - k\left(u^{(0)} - g\right) F(\phi),
\label{u1 inner solve B2}
\end{equation}
where
\begin{align}
\label{u1 inner solve B2 a}
F(\phi) &= -\frac{1}{6}\log\left(1 - \phi\right) + \frac{\phi}{3}, \\
\nonumber
C(\vct s) &= u^{(1)}(\vct s) + \frac{k}{3}\left(u^{(0)} - g\right).
\end{align}
Combining \cref{u1 inner solve B2,u1 inner solve B2 a,order 1 BC} we get
\begin{multline}
\vct n\cdot\grad u^{(1)} - k u^{(1)}
= kC(\vct s)\eint{\left(\phi_z^2 + \phi_z\right)} \\
- k^2\left(u^{(0)} - \hat g\right)
\eint{F(\phi)\left(\phi_z^2 + \phi_z\right)} \\
+ \left(\kappa + k\right)k\left(u^{(0)} - \hat g\right)
\eint{\phi\left(3\phi - 2\phi^2 - 1\right)}.
\label{order 1 BC 2}
\end{multline}
Direct calculations show that
\[
\eint{\left(\phi_z^2 + \phi_z\right)}
= \eint{\left(3\phi^2 - 2\phi^3 - \phi\right)} = 0,
\]
and
\[
\eint{F(\phi)\left(\phi_z^2 + \phi_z\right)} = -\frac{1}{36}.
\]
Using these in \cref{order 1 BC 2}, we get
\begin{equation}
\vct n\cdot\grad u^{(1)} - k u^{(1)} = \frac{1}{36}k^2\left(u^{(0)} - g\right).
\label{order 1 BC 2 final}
\end{equation}
This shows that the DDM with BC2 is only first-order accurate because the
solution $u^{(1)}$ of
\[
\begin{alignedat}{2}
\boldsymbol\Delta u^{(1)} &= 0 & \ensuremath{\textup{in}\ } D, \\
\vct n\cdot\grad u^{(1)} - ku^{(1)}
&= \frac{1}{36}k^2\left(u^{(0)} - g\right) \qquad
& \ensuremath{\textup{on}\ } \partial D,
\end{alignedat}
\]
is in general not equal to $0$, i.e., $u^{(1)}\ne 0$.
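The integrals entering this analysis are easily checked numerically. The following sketch (an illustration of ours, not part of the method) works in the stretched variable $z$ with the profile \cref{eq:characteristic} and reproduces $\eint{\phi_z^2} = 1$, $\eint{\left(\phi_z^2+\phi_z\right)} = 0$, $\eint{z\phi_z} = 0$ and $\eint{F(\phi)\left(\phi_z^2+\phi_z\right)} = -1/36$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

phi   = lambda z: 0.5 * (1.0 - np.tanh(3.0 * z))     # profile in the stretched variable
phi_z = lambda z: -6.0 * phi(z) * (1.0 - phi(z))     # phi_z = -6 phi (1 - phi)
F     = lambda z: -np.log(1.0 - phi(z)) / 6.0 + phi(z) / 3.0

# The integrands decay exponentially and are negligible outside |z| < 5.
print(quad(lambda z: phi_z(z)**2, -5, 5)[0])                       # ~  1
print(quad(lambda z: phi_z(z)**2 + phi_z(z), -5, 5)[0])            # ~  0
print(quad(lambda z: z * phi_z(z), -5, 5)[0])                      # ~  0
print(quad(lambda z: F(z) * (phi_z(z)**2 + phi_z(z)), -5, 5)[0])   # ~ -1/36
\end{verbatim}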
\subsubsection{Analysis of a second-order modification of BC2}
\label{sec:dda_robin_bc2m}
In order to modify BC2 to achieve second-order accuracy, we introduce
$\tilde\psi$ such that
\[
\hat \psi^{(0)} = 0,
\qquad
\hat \psi^{(1)} = k\left( u^{(0)}- g\right)\phi_z^2,
\qquad\text{and}\quad
\hat \psi^{(2)} = k \hat u^{(1)}\phi_z^2 +\hat{\tilde\psi}^{(0)}.
\]
That is, $\tilde\psi$ perturbs only the higher order terms in the inner
expansion and is chosen to cancel the term on the right-hand side of
\cref{order 1 BC 2 final} in order to achieve $\vct n\cdot\grad u^{(1)} - k u^{(1)}
= 0$, which in turn implies that $u^{(1)} = 0$ and the new formulation is
second-order accurate. The correction $\tilde\psi$ does not affect the
$O(\epsilon^{-2})$ or $O(\epsilon^{-1})$ orders in the system. Thus, $\hat
u^{(0)}$ and $\hat u^{(1)}$ are unchanged from the previous subsection.
\Cref{order 1 BC} now becomes
\[
\vct n\cdot\grad u^{(1)} - k u^{(1)}
= \frac{1}{36}k^2\left(u^{(0)} - g\right)
+ \eint{\hat{\tilde\psi}^{(0)}},
\]
so we wish to determine $\hat{\tilde\psi}^{(0)}$ such that
\[
\eint{\hat{\tilde\psi}^{(0)}}
= -\frac{1}{36}k^2\left(u^{(0)} - g\right).
\]
Two simple ways of achieving this are to take
\[
\hat{\tilde\psi}^{(0)} = -\frac{1}{36}k^2\left(u^{(0)} - g\right)
\times
\begin{cases}
-\phi_z,
\quad\text{or} \\
\phi_z^2.
\end{cases}
\]
Putting everything together, we can obtain BC2M, a second-order version of BC2,
using
\[
\text{BC2M} =
\begin{cases}
\text{BC2M1} = \epsilon k(u - g) |\nabla\phi|
\left(|\nabla\phi| - \frac{k}{36}\right),
\quad\text{or} \\
\text{BC2M2} = \epsilon k(u - g) |\nabla\phi|^2
\left(1 - \epsilon\frac{k}{36}\right).
\end{cases}
\]
The resulting DDM remains an elliptic system since $k<0$, as required for the
Robin boundary condition; in each instance this is guaranteed provided the
interface thickness $\epsilon$ is sufficiently small.
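In an implementation, the BC2M terms are assembled pointwise from $\phi$ in exactly the same way as BC2. A minimal numpy sketch (the function name and argument list are ours and purely illustrative):
\begin{verbatim}
import numpy as np

def robin_bc_terms(phi, u, g, k, eps, dx):
    # BC2 and BC2M1 source terms of the DDM, evaluated from the phase field phi.
    gx, gy = np.gradient(phi, dx)
    gphi = np.hypot(gx, gy)                              # |grad phi|
    bc2   = eps * k * (u - g) * gphi**2
    bc2m1 = eps * k * (u - g) * gphi * (gphi - k / 36.0)
    return bc2, bc2m1
\end{verbatim}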
\subsubsection{Other approaches to second-order BCs}
\label{sec:dda_robin_other}
Thus far, we have taken advantage of integration to achieve second-order
accuracy. Alternatively, one may try to add correction terms to directly
obtain second-order boundary conditions without relying on integration. For
example, from \cref{order 1 BC} to achieve second-order accuracy we may take
\begin{multline}
\hat\psi^{(2)}
= -k\phi_z\hat u^{(1)} - \phi(\kappa+k)
\left(\hat u^{(1)}_z - k\left(u^{(0)} - g\right)\right) \\
- \left(\hat f^{(0)}
- (\kappa + k)k\left(u^{(0)} - \hat g\right)
- \boldsymbol\Delta_{\text s}\hat u^{(0)}\right) z\phi_z,
\label{local correction}
\end{multline}
where $\hat u^{(1)}$ is a functional of $\hat\psi^{(1)}$. This provides
another prescription of how to obtain a second-order accurate boundary
condition, which could in principle lead to faster asymptotic convergence since
it directly cancels a term in the inner expansion of the asymptotic matching.
As an illustration, let us use BC1 as a starting point even though this
boundary condition is already second-order accurate. Through the prescription
in \cref{local correction} above, we derive another second-order accurate
boundary condition. To see this, write
\[
\hat\psi^{(0)} = 0,
\qquad
\hat \psi^{(1)} = -k\left(u^{(0)} - g\right)\phi_z,
\qquad\text{and}\quad
\hat \psi^{(2)} = -k\hat u^{(1)}\phi_z + \hat{\tilde\psi}^{(0)},
\]
then from \cref{local correction} we get
\[
\hat{\tilde\psi}^{(0)} = -\left(\hat f^{(0)}
- (\kappa +k)k\left(u^{(0)} - \hat g\right)
- \boldsymbol\Delta_{\text s}\hat u^{(0)}\right) z\phi_z.
\]
This can be achieved by taking
\[
\frac{1}{\epsilon^2}\psi = k(u-g) |\nabla\phi| \left(1 - r(k + \kappa)\right)
+ r|\nabla\phi|\left(f - \boldsymbol\Delta_{\text s} u\right),
\]
where $r$ is the signed distance to $\partial D$ as defined earlier. Note that
we can also achieve second-order accuracy by taking instead
\[
\hat\psi^{(2)} = -k\phi_z\hat u^{(1)}
- \phi(\kappa + k)\left(\hat u^{(1)}_z - k\left(u^{(0)} - g\right)\right)
- \left(\hat f^{(0)} - (\kappa + k)k\left(u^{(0)} - \hat g\right)\right)
z\phi_z,
\]
where we use the fact that the integral involving $\boldsymbol\Delta_{\text s} u^{(0)}$ vanishes in
\cref{order 1 BC}. We refer to these choices, which are by no means
exhaustive, as
\begin{equation}
\text{BC1M} =
\begin{cases}
\text{BC1M1} = k(u-g) |\nabla\phi| \left(1 - r(k + \kappa)\right)
+ r|\nabla\phi| \left(f - \boldsymbol\Delta_{\text s} u\right),
\quad\text{or} \\
\text{BC1M2} = k(u-g) |\nabla\phi| \left(1 - r(k + \kappa)\right)
+ r|\nabla\phi| f.
\end{cases}
\label{BC1M versions}
\end{equation}
We remark, however, that this prescription may not always lead to an optimal
numerical method. For example, when using BC1M2, the system is guaranteed to
be elliptic when $1 - r(k + \kappa) > 0$ for $|r|\approx \epsilon$, which
puts an effective restriction on the interface thickness $\epsilon$ depending
on the values of $k$ and $\kappa$. When BC1M1 is used, the situation is more
delicate since ellipticity cannot be guaranteed when $r>0$ due to the $\boldsymbol\Delta_{\text s}
u$ term. Recall that $r>0$ outside the original domain $D$ and so this issue
is associated with extending the modified boundary condition outside $D$.
In future work, we plan to consider different extensions that automatically
guarantee ellipticity.
To summarize, we have shown that the DDM
\[
\div\left(\phi\grad u\right) + \text{BC} = \phi f
\]
is a second-order accurate approximation of the system \eqref{eq:poiss_robin}
when BC1, BC2M, and BC1M are used. When BC2 is used, the DDM is only
first-order accurate.
\subsection{Reaction-diffusion equation with Neumann boundary conditions}
Since the Poisson equation with Neumann boundary conditions does not have
a unique solution, we instead consider the steady reaction-diffusion equation
with Neumann boundary conditions,
\begin{equation}
\begin{alignedat}{2}
\boldsymbol\Delta u - u &= f & \ensuremath{\textup{in}\ } D, \\
\vct n\cdot\grad u &= g \qquad & \ensuremath{\textup{on}\ } \partial D.
\end{alignedat}
\label{eq:poiss_neumann}
\end{equation}
Again we consider a general DDM approximation,
\begin{equation}
\div\left(\phi\grad u\right) - \phi u + \frac{1}{\epsilon^2}\psi = \phi f.
\label{eq:reacdiff_general_dda}
\end{equation}
Under the same conditions on $\psi$ as in the previous section, the outer
solution now satisfies
\begin{align*}
\boldsymbol\Delta u^{(0)} - u^{(0)} &= f, \\
\boldsymbol\Delta u^{(k)} - u^{(k)} &= 0,\qquad k = 1,2,3,\dots
\end{align*}
As in the Robin case, if $u^{(0)}$ satisfies \cref{eq:poiss_neumann} and
$u^{(1)} \ne 0$ then the DDM is first-order accurate. However, if $u^{(1)}
= 0$, then the DDM is second-order accurate.
To construct the boundary condition for $u^{(1)}$, we follow the approach from
the Robin case and combine \cref{eq:match3,laplace decomp} to get
\[
\vct n\cdot\grad\grad u^{(0)}\cdot\vct n
= \hat f^{(0)} - \kappa \hat g - \boldsymbol\Delta_{\text s} u^{(0)} + u^{(0)},
\]
assuming that $u^{(0)}$ satisfies the system \eqref{eq:poiss_neumann} as
demonstrated below, and to get
\[
\vct n\cdot\grad u^{(1)} = \hat u_z^{(2)}
- z\left(\hat f^{(0)} - \kappa\hat g - \boldsymbol\Delta_{\text s} u^{(0)} + u^{(0)}\right),
\]
as $z\to -\infty$.
\subsubsection{Inner expansions}
The inner expansion of \cref{eq:reacdiff_general_dda} is analogous to the Robin
case derived in \cref{sec:dda_robin_inner}. As before, if $\hat\psi^{(0)} = 0$
then $\hat u_z^{(0)}
= 0$ and $\hat u^{(0)}(z,\vct s) = u^{(0)}(\vct s)$, the limiting value of the
outer solution. \Cref{eq:first_bc} still holds at the next order and so to get
the desired boundary condition for $u^{(0)}$, we need
\begin{equation}
\eint{\hat\psi^{(1)}} = g.
\label{integral constraint Neumann}
\end{equation}
Analogously to \cref{eq:ddarobin0} the next order equation is
\begin{equation}
\left(\phi\hat u^{(2)}_z\right)_z
+ \phi\kappa\hat u_z^{(1)}
+ \phi\boldsymbol\Delta_{\text s} \hat u^{(0)}
- \phi\hat u^{(0)}
+ \hat\psi^{(2)}
= \phi\hat f^{(0)}.
\label{eq:o1}
\end{equation}
Subtracting
\[
-\left(z\phi\left(\hat f^{(0)}
- \kappa\hat g - \boldsymbol\Delta_{\text s} u^{(0)} + u^{(0)}\right)\right)_z
\]
from \cref{eq:o1} we get
\[
\begin{split}
\bigg(\phi&\hat u_z^{(2)}
- z\phi\left(\hat f^{(0)} - \kappa\hat g
- \boldsymbol\Delta_{\text s} u^{(0)} + u^{(0)}\right)\bigg)_z \\
&= - \hat\psi^{(2)} - \phi\kappa\left(\hat u^{(1)}_z - \hat g\right)
- z\phi_z\left(\hat f^{(0)} - \kappa \hat g
- \boldsymbol\Delta_{\text s} u^{(0)} + u^{(0)}\right),
\end{split}
\]
where we have used $\hat u^{(0)}=u^{(0)}$ as justified below. Integrating, we
obtain
\begin{equation}
\vct n\cdot\grad u^{(1)} = \eint{\left(
\hat\psi^{(2)} + \phi\kappa\left(\hat u^{(1)}_z - \hat g\right)
+ z\phi_z\left(\hat f^{(0)}
- \kappa\hat g
- \boldsymbol\Delta_{\text s} u^{(0)} + u^{(0)}\right)\right)}.
\label{order 1 BC Neumann}
\end{equation}
As in the Robin case, if the right-hand side of \cref{order 1 BC Neumann}
vanishes then we may conclude that $u^{(1)} = 0$ since $u^{(1)}$ satisfies
$\boldsymbol\Delta u^{(1)} - u^{(1)} = 0$ with zero Neumann boundary conditions. We next
analyse the boundary conditions BC1 and BC2.
\subsubsection{Analysis of BC1}
When the BC1 approximation is used, we obtain
\[
\frac{1}{\epsilon^2}\psi = g|\nabla\phi|,
\]
and
\begin{equation}
\hat \psi^{(0)} = 0,
\qquad
\hat \psi^{(1)} = -g\phi_z,
\qquad\text{and}\quad
\hat \psi^{(2)} = 0.
\label{psi expansion for BC1 Neumann}
\end{equation}
Accordingly, we find that $\hat u^{(0)}(z,\vct s) = u^{(0)}(\vct s)$ and
\cref{integral constraint Neumann} holds. Thus $u^{(0)}$ satisfies the system
\eqref{eq:poiss_neumann} as claimed above.
At the next order, from \cref{eq:first_bc,psi expansion for BC1 Neumann} we
obtain
\begin{equation}
\hat u^{(1)}(z,\vct s) = u^{(1)}(\vct s) + z\hat g.
\label{hat u1 Neumann}
\end{equation}
Thus, combining \cref{hat u1 Neumann,order 1 BC Neumann} we get
\[
\vct n\cdot\grad u^{(1)} = \left(\hat f^{(0)}
- \kappa\hat g
- \boldsymbol\Delta_{\text s}\hat u^{(0)}+u^{(0)}\right)\eint{z\phi_z} = 0,
\]
from which we conclude that $u^{(1)} = 0$ and the DDM with BC1 is second-order
accurate.
\subsubsection{Analysis of BC2}
When the BC2 approximation is used, we obtain
\[
\frac{1}{\epsilon^2}\psi = \epsilon g|\nabla\phi|^2,
\]
and
\begin{equation}
\hat \psi^{(0)} = 0,
\qquad
\hat \psi^{(1)} = g\phi_z^2,
\qquad\text{and}\quad
\hat \psi^{(2)} = 0.
\label{psi expansion for BC2 Neumann}
\end{equation}
Analogously to the case when BC1 is used, $\hat u^{(0)}(z,\vct s)
= u^{(0)}(\vct s)$, \cref{integral constraint Neumann} holds, and $u^{(0)}$
satisfies the system \eqref{eq:poiss_neumann}. At the next order, from
\cref{eq:first_bc,psi expansion for BC2 Neumann} we obtain
\begin{equation}
\hat u^{(1)}_z(z,\vct s) = \hat g\left(3\phi - 2\phi^2\right).
\label{hat u1 Neumann BC2}
\end{equation}
Combining \cref{hat u1 Neumann BC2,order 1 BC Neumann} we get
\[
\vct n\cdot\grad u^{(1)}
= \kappa\hat g\eint{\left(3\phi^2 - 2\phi^3 - \phi\right)}
+ \left(\hat f^{(0)} - \kappa\hat g - \boldsymbol\Delta_{\text s}\hat u^{(0)} + u^{(0)}\right)
\eint{z\phi_z} = 0,
\]
from which we conclude that $u^{(1)} = 0$ and the DDM with BC2 is second-order
accurate for the Neumann problem as well, which is different from the Robin
case.
\subsubsection{Other approaches to second-order BCs}
Analogously to the Robin case, to achieve second-order accuracy we may also
take
\[
\hat\psi^{(2)} + \phi\kappa\left(\hat u^{(1)}_z - \hat g\right)
+ z\phi_z\left(\hat f^{(0)} - \kappa\hat g - \boldsymbol\Delta_{\text s} u^{(0)} + u^{(0)}\right) = 0.
\]
Following the same reasoning, alternative boundary conditions analogous to
those in \cref{BC1M versions} may be derived
\[
\text{BC1M} =
\begin{cases}
\text{BC1M1} = g|\nabla\phi| \left(1 - r\kappa\right)
+ r|\nabla\phi| \left(f + u - \boldsymbol\Delta_{\text s} u\right),
\quad\text{or} \\
\text{BC1M2} = g|\nabla\phi| \left(1 - r\kappa\right)
+ r|\nabla\phi| \left(f + u\right).
\end{cases}
\]
Note that as in the Robin case, when BC1M1 is used ellipticity cannot be
guaranteed when $r>0$ due to the $\boldsymbol\Delta_{\text s} u$ term.
To summarize, we have shown that the DDM
\[
\div\left(\phi\grad u\right) - \phi u + \text{BC} = \phi f
\]
is a second-order accurate approximation of the system \eqref{eq:poiss_neumann}
when BC1, BC2, and BC1M are used.
\section{Discretizations and numerical methods}
\label{sec:discretization}
The equations are discretized on a uniform grid with the second-order
central-difference scheme. The discrete system is solved using a multigrid
method, where a red-black Gauss-Seidel type iterative method is used to relax
the solutions (see \cite{Wise07}). The equations are solved in two-dimensions
in a domain $\Omega = [-2,2]^2$ for all the test cases. Periodic boundary
conditions are used on the domain boundaries $\partial\Omega$. The iterations
are considered to be converged when the residual of the current solution has
reached a tolerance of $10^{-9}$.
Since the phase-field function quickly tends to zero outside the physical
domain $D$, it must be regularized in order to prevent the equations from
becoming ill-posed. We therefore use the modified phase-field function
\[
\hat\phi = \tau + (1 - \tau)\phi,
\]
where the regularization parameter is set to $\tau = 10^{-6}$ unless otherwise
specified. In addition, one should note that the chosen boundary condition for
the computational domain, $\Omega$, should not interfere with the physical
domain. Thus one has to make sure that the distance from the computational
wall to the diffuse interface of $D$ is large enough not to affect the results.
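To make the overall structure concrete, the following self-contained sketch solves a one-dimensional analogue of the Robin problem with BC1 (so that $\kappa=0$), using the regularized phase-field function above. It replaces the multigrid solver by a sparse direct solve and closes $\Omega$ with homogeneous Dirichlet conditions instead of periodic ones (since $\hat\phi\approx\tau$ near the walls this has a negligible effect); the manufactured solution and all parameter values are illustrative only:
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# 1D analogue:  u'' = f in D = (-1, 1),  n.grad(u) = k (u - g) on the ends of D,
# embedded in Omega = (-2, 2).  Manufactured solution u = x^2/4 with k = -1, g = 3/4.
n, eps, tau, k = 2048, 0.05, 1e-6, -1.0
x = np.linspace(-2.0, 2.0, n)
dx = x[1] - x[0]

r = np.abs(x) - 1.0                          # signed distance to the ends of D
phi = 0.5 * (1.0 - np.tanh(3.0 * r / eps))   # diffuse characteristic function
phih = tau + (1.0 - tau) * phi               # regularized phase field
dphi = np.abs(np.gradient(phi, dx))          # |grad phi|, surface delta approximation

f = 0.5 * np.ones(n)
g = 0.75
u_an = 0.25 * x**2

# Assemble  d/dx(phih du/dx) + k |grad phi| u  =  phi f + k g |grad phi|   (BC1).
phih_e = 0.5 * (phih[1:] + phih[:-1])        # phih at the cell faces
coef = phih_e / dx**2
main = np.zeros(n)
main[1:-1] = -(phih_e[1:] + phih_e[:-1]) / dx**2 + k * dphi[1:-1]
A = sp.diags([coef, main, coef], offsets=[-1, 0, 1], format="lil")
b = phi * f + k * g * dphi

# Close Omega with u = 0 at the walls (phih ~ tau there, so this barely matters).
A[0, :] = 0.0
A[n - 1, :] = 0.0
A[0, 0] = 1.0
A[n - 1, n - 1] = 1.0
b[0], b[n - 1] = 0.0, 0.0

u = spsolve(A.tocsr(), b)
err = np.linalg.norm(phi * (u - u_an)) / np.linalg.norm(phi * u_an)
print(err)   # decreases roughly like eps^2 as eps is reduced (with dx refined as well)
\end{verbatim}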
As shown in \cref{sec:asymptotic_analysis}, the normal vector and the
curvature can be calculated from the phase-field function as
\[
\vct n = -\frac{\grad\phi}{|\grad\phi|},
\]
and
\[
\kappa = -\div\frac{\grad\phi}{|\grad\phi|}.
\]
The surface Laplacian can be found from the identity
\[
\boldsymbol\Delta_{\text s} \equiv
\left(I - \vct n\vct n\right) \div\left(I - \vct n\vct n\right)\grad,
\]
where
\[
(I - \vct n\vct n)\grad \equiv (\delta_{ij} - n_in_j)\partial_{x_j}.
\]
In 2D we get
\begin{align*}
\nonumber \boldsymbol\Delta_{\text s} u
&= \left(n_1n_2(n_1n_2)_x + n_1n_2(n_1^2)_y - (1-n_1^2)(n_1^2)_x
- (1-n_2^2)(n_1n_2)_y\right)u_x \\
&\qquad + \left(n_1n_2(n_1n_2)_y + n_1n_2(n_2^2)_x - (1-n_2^2)(n_2^2)_y
- (1-n_1^2)(n_1n_2)_x\right)u_y \\
&\qquad + \left(\left(1 - n_1^2 \right)^2 + n_1^2n_2^2\right)u_{xx} \\
&\qquad + \left(\left(1 - n_2^2 \right)^2 + n_1^2n_2^2\right)u_{yy} \\
&\qquad - 2n_1n_2\left(\left(1 - n_1^2 \right)
+ \left(1 - n_2^2\right)\right)u_{xy}.
\end{align*}
Below, we verify the accuracy of our numerical implementation on several test
problems in which we manufacture a solution to the DDM with different choices
of boundary conditions through particular choices of $f$. As suggested by our
analysis, we find that when we include the surface Laplacian, we are unable to
solve the discrete system using the multigrid method even though the correction
term and the subsequent loss of ellipticity outside $D$ are confined to the
interfacial region. As also mentioned in \cref{sec:dda_robin_other}, future
work involves developing
alternative extensions of the boundary conditions outside $D$ that maintain
ellipticity. Nevertheless, as a proof of principle, we still consider the
effect of this term by using the surface Laplacian of the analytic solution in
the DDM equations.
\section{Results}
\label{sec:results}
We next investigate the performance of the DDM with different choices of
boundary conditions and compare the results with the exact solution of the
sharp-interface equations for the reaction-diffusion equation with Neumann
boundary conditions and the Poisson equation with Robin boundary conditions.
We consider four different cases with Neumann boundary conditions and three
different cases with Robin boundary conditions. For each case, we calculate
and compare the error between the calculated solution $u$ and an analytic
solution $u_{\text{an}}$ of the original PDE, which is extended from $D$ into
$\Omega$. The error is defined as
\[
E_\epsilon = \frac{\|\phi(u_{\text{an}} - u)\|}{\|\phi u_\text{an}\|},
\]
where $\|\cdot\|$ is a norm and $\phi$ is used to restrict the error to the
physical domain $D$. The convergence rate in $\epsilon$ as $\epsilon\to 0$ is
calculated as
\[
k = \log\left(\frac{E_{\epsilon_i}}{E_{\epsilon_{i-1}}}\right) /
\log\left(\frac{\epsilon_i}{\epsilon_{i-1}}\right),
\]
for a decreasing sequence $\epsilon_i$. In the following results we mainly use
the $L^2$ norm,
\[
\|\psi\|_2 = \frac{\sqrt{\sum_{i=1}^N \psi_i^2}}{N},
\]
where $\psi$ is an array with $N$ elements. For a couple of cases, we also
present the results with the $L^\infty$ norm,
\[
\|\psi\|_\infty = \max_{i=1}^N\,|\psi_i|.
\]
For a given $\epsilon$, the error $E_\epsilon$ in both $L^2$ and $L^\infty$ is
calculated by refining the grid spacing until a minimum of two leading digits
converge (i.e., stop changing under refinement) in the $L^2$ norm. In some
cases for the smallest values of $\epsilon$, the error has not yet converged
to two digits in the $L^\infty$ norm. However, due to memory limits on our
computers, we were not able to use more than $n = 8192$ cells in each
direction. This limited our ability to obtain grid convergence, particularly
when BC2 (see below) is used for very small values of $\epsilon$.
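The observed orders $k$ reported in the tables are obtained from successive error values via the formula above; a trivial sketch (the example values are the BC1 entries of Case 1 in \cref{tab:neumann_error_per_eps}):
\begin{verbatim}
import numpy as np

def observed_orders(eps, err):
    # k_i = log(E_i / E_{i-1}) / log(eps_i / eps_{i-1}) for a decreasing sequence eps_i.
    eps, err = np.asarray(eps, float), np.asarray(err, float)
    return np.log(err[1:] / err[:-1]) / np.log(eps[1:] / eps[:-1])

print(observed_orders([0.8, 0.4, 0.2, 0.1],
                      [3.39e-1, 9.94e-2, 2.57e-2, 6.43e-3]))   # ~ [1.8, 2.0, 2.0]
\end{verbatim}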
\subsection{Neumann boundary conditions}
Consider the steady reaction-diffusion equation with Neumann boundary
conditions,
\[
\begin{alignedat}{3}
\boldsymbol\Delta u - u &= f && \ensuremath{\textup{in}\ } D, \\
\vct n\cdot\grad u &= g \quad && \ensuremath{\textup{on}\ } \partial D.
\end{alignedat}
\]
In this section we solve the DDM systems
\[
\div\left(\phi\grad u\right) - \phi u + \text{BC} = \phi f,
\]
where BC refers to selected boundary condition approximations considered in the
previous section. In the case of BC1M1, as remarked above, the surface
Laplacian term is not solved, rather the surface Laplacian of the analytic
solution is used and is treated as a known source term.
\subsubsection{Case 1}
Consider the case where $D$ is a circle of radius $R=1$ centred at $(0,0)$,
and where the analytic solution to the reaction-diffusion equation in $D$ is
\[
u_{\text{an}}(x,y) = \frac{1}{4}\left( x^2 + y^2 \right).
\]
This corresponds to $f = 1 - (x^2 + y^2)/4$, $g = 1/2$, and $\boldsymbol\Delta_{\text s}
u_{\text{an}} = 0$. In this case, the curvature is $\kappa = 1$.
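The data $f$, $g$ and $\boldsymbol\Delta_{\text s} u_{\text{an}}$ for such manufactured solutions can be verified symbolically; a minimal sketch for this case (illustrative, using sympy):
\begin{verbatim}
import sympy as sym

x, y = sym.symbols("x y", real=True)
u = (x**2 + y**2) / 4

f = sym.simplify(sym.diff(u, x, 2) + sym.diff(u, y, 2) - u)
# On the unit circle the outward normal is (x, y), so n.grad(u) = x u_x + y u_y.
g = sym.simplify((x * sym.diff(u, x) + y * sym.diff(u, y)).subs(x**2 + y**2, 1))
print(f, g)   # f = 1 - (x^2 + y^2)/4  and  g = 1/2
\end{verbatim}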
\subsubsection{Case 2}
Now consider the case where $D$ is the square $D=[-1,1]^2$. Again let the
analytic solution in $D$ be
\[
u_{\text{an}}(x,y) = \frac{1}{4}\left( x^2 + y^2 \right),
\]
so that $f = 1 - (x^2 + y^2)/4$, $g = 1/2$, and $\boldsymbol\Delta_{\text s} u_{\text{an}} = 1/2$.
In this case the curvature is zero almost everywhere.
To initialize the square domain $D$, the signed-distance function is defined as
\[
r(x,y) =
\begin{cases}
|x| - 1 & \text{if $|x|\geq |y|$,} \\
|y| - 1 & \text{else.}
\end{cases}
\]
The phase-field function is then calculated directly from the signed-distance
function in \cref{eq:characteristic}.
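A direct numpy translation of this initialization (grid resolution and $\epsilon$ are illustrative):
\begin{verbatim}
import numpy as np

n, eps = 512, 0.1
x = np.linspace(-2.0, 2.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
r = np.maximum(np.abs(X), np.abs(Y)) - 1.0   # |x| - 1 if |x| >= |y|, else |y| - 1
phi = 0.5 * (1.0 - np.tanh(3.0 * r / eps))
\end{verbatim}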
\subsubsection{Case 3}
Again let $D$ be the circle centred at $(0,0)$ with radius $R=1$, but now
consider the case where the analytic solution is
\[
u_{\text{an}}(x,y) = y\sqrt{x^2 + y^2},
\]
which corresponds to
\[
f = \frac{3y}{\sqrt{x^2+y^2}} - y\sqrt{x^2 + y^2},
\]
$g = 2y$, and
\[
\boldsymbol\Delta_{\text s} u_{\text{an}} = -\frac{y}{\sqrt{x^2 + y^2}}.
\]
Note that in the DDM equations, $g$ is extrapolated constantly in the normal
direction off of the boundary $\partial D$.
\subsubsection{Case 4}
For the final Neumann case we again let $D=[-1,1]^2$, and we consider the case
where the analytic solution is
\[
u_{\text{an}}(x,y) = e^r,
\]
where $r = \frac{x^2 + y^2}{4}$. This corresponds to
\[
f = r e^r.
\]
The boundary function $g$ and the surface Laplacian of the analytic function
along the boundary are
\begin{align*}
g &= \frac 1 2 e^{\frac{1 + \xi^2}{4}}, \\
\boldsymbol\Delta_{\text s} u_\text{an} &= \frac 1 4\left(\xi^2 + 2\right)e^{\frac{1 + \xi^2}{4}},
\end{align*}
where $\xi \equiv x$ along the bottom and top boundaries, and $\xi \equiv y$
along the left and right boundaries.
\subsubsection{Results}
\Cref{fig:neumann_case1-plot,fig:neumann_case2-plot,fig:neumann_case3-plot,%
fig:neumann_case4-plot} and \cref{tab:neumann_error_per_eps} show convergence
results in the $L^2$ norm as~$\epsilon$ is reduced for Cases 1 to 4 with
BC1, BC1M1, BC1M2 and BC2.
\Cref{fig:neumann_case4-plot-2,tab:neumann_error_per_eps_infty} show the
results for Case 4 where the $L^\infty$ norm is used. Although the DDM is most
efficient when adaptive meshes are used, here we consider only uniform meshes
to more easily control the discretization errors in order to focus on the
errors in the DDM. As in all diffuse-interface methods, fine grids are
necessary to accurately solve the equations when $\epsilon$ is small. This is
especially apparent for the cases with BC2 where even the finest grid spacing,
$n=8192$ in each direction, becomes too coarse to obtain results for small
$\epsilon$ that have converged with respect to the grid refinements.
The results confirm the second-order accuracy of all the considered boundary
condition approximations. Note that while the difference between BC1 and BC1M1
tends to be small, BC1M1 consistently performs better than BC1. In turn, BC1
performs better than BC2. In Case 2 there is a noticeable improvement of
BC1M1 over BC1. Case 3 is the first case that has a nonconstant boundary
condition, and the surface Laplacian of the analytic solution along the
boundary is also nonconstant. An unexpected result for Case 3 is that BC1M2
performs the best. One possible explanation for this is errors due to grid
anisotropy. Therefore we also consider a fourth case, which again has
a nonconstant boundary condition and nonconstant surface Laplacian of the
analytic solution. Since the domain in this case is a square, the effect of
grid anisotropy is lessened. Correspondingly BC1M1 performs the best.
The cases were also calculated with the $L^\infty$ norm, which gave similar
results, although at the smallest values of $\epsilon$, the orders of accuracy
of BC1M1 and BC1M2 may deteriorate in $L^\infty$, as seen in
\cref{fig:neumann_case4-plot-2,tab:neumann_error_per_eps_infty} for Case
4. This could be due to the influence of higher order terms in the expansion,
or the amplification of error when $\epsilon$ is small due to the condition
number of the system, which should scale like $\epsilon^{-2}$. This is
currently under investigation.
The difference between BC1 and BC2 is noticeable, especially with regard to the
required amount of grid refinement that is needed to obtain a convergent
result. This provides practical limits for the use of BC2.
\begin{figure}[tbp]
\centering
\begin{tikzpicture}
\begin{loglogaxis}[
xlabel={Interface width, $\epsilon$},
ylabel={$E_\epsilon$},
x dir=reverse,
width=0.8\textwidth,
legend entries={BC1,
BC1M1,
BC2},
legend cell align=left,
legend style={column sep=0.5em,draw=white},
]
\addplot[solid,mark=*,black] plot coordinates {
(8.000e-01, 3.4e-01)
(4.000e-01, 9.9e-02)
(2.000e-01, 2.6e-02)
(1.000e-01, 6.4e-03)
(5.000e-02, 1.6e-03)
(2.500e-02, 4.2e-04)
};
\addplot[dashed,mark=square*,mark options=solid,black] plot coordinates {
(8.000e-01, 4.8e-01)
(4.000e-01, 1.1e-01)
(2.000e-01, 2.7e-02)
(1.000e-01, 6.5e-03)
(5.000e-02, 1.6e-03)
};
\addplot[solid,mark=*,green] plot coordinates {
(8.000e-01, 3.1e-01)
(4.000e-01, 9.5e-02)
(2.000e-01, 2.6e-02)
};
\end{loglogaxis}
\end{tikzpicture}
\caption{$L^2$ errors for the Neumann problem with respect to $\epsilon$ for
Case 1, as labelled.}
\label{fig:neumann_case1-plot}
\end{figure}
\begin{figure}[tbp]
\centering
\begin{tikzpicture}
\begin{loglogaxis}[
xlabel={Interface width, $\epsilon$},
ylabel={$E_\epsilon$},
x dir=reverse,
width=0.8\textwidth,
legend entries={BC1,
BC1M1,
BC1M2,
BC2},
legend cell align=left,
legend style={column sep=0.5em,draw=white},
]
\addplot[solid,mark=*,black] plot coordinates {
(8.000e-01, 2.5e-01)
(4.000e-01, 7.3e-02)
(2.000e-01, 1.9e-02)
(1.000e-01, 5.2e-03)
};
\addplot[dashed,mark=square*,mark options=solid,black] plot coordinates {
(8.000e-01, 2.0e-01)
(4.000e-01, 2.6e-02)
(2.000e-01, 5.2e-03)
(1.000e-01, 1.2e-03)
};
\addplot[dotted,mark=triangle*,mark options=solid,black] plot coordinates {
(8.000e-01, 2.9e-01)
(4.000e-01, 7.8e-02)
(2.000e-01, 2.0e-02)
(1.000e-01, 5.1e-03)
};
\addplot[solid,mark=*,green] plot coordinates {
(8.000e-01, 2.2e-01)
(4.000e-01, 7.0e-02)
(2.000e-01, 2.0e-02)
};
\end{loglogaxis}
\end{tikzpicture}
\caption{$L^2$ errors for the Neumann problem with respect to $\epsilon$ for
Case 2, as labelled.}
\label{fig:neumann_case2-plot}
\end{figure}
\begin{figure}[tbp]
\centering
\begin{tikzpicture}
\begin{loglogaxis}[
xlabel={Interface width, $\epsilon$},
ylabel={$E_\epsilon$},
x dir=reverse,
width=0.8\textwidth,
legend entries={BC1,
BC1M1,
BC1M2,
BC2},
legend cell align=left,
legend style={column sep=0.5em,draw=white},
]
\addplot[solid,mark=*,black] plot coordinates {
(8.000e-01, 1.3e-01)
(4.000e-01, 3.1e-02)
(2.000e-01, 7.5e-03)
(1.000e-01, 1.8e-03)
(5.000e-02, 4.5e-04)
(2.500e-02, 1.2e-04)
};
\addplot[dashed,mark=square*,mark options=solid,black] plot coordinates {
(8.000e-01, 8.7e-02)
(4.000e-01, 2.9e-02)
(2.000e-01, 7.1e-03)
(1.000e-01, 1.8e-03)
(5.000e-02, 4.3e-04)
(2.500e-02, 1.1e-04)
};
\addplot[dotted,mark=triangle*,mark options=solid,black] plot coordinates {
(8.000e-01, 3.2e-02)
(4.000e-01, 1.3e-02)
(2.000e-01, 3.4e-03)
(1.000e-01, 8.6e-04)
(5.000e-02, 2.1e-04)
};
\addplot[solid,mark=*,green] plot coordinates {
(8.000e-01, 1.2e-01)
(4.000e-01, 2.8e-02)
(2.000e-01, 8.1e-03)
};
\end{loglogaxis}
\end{tikzpicture}
\caption{$L^2$ errors for the Neumann problem with respect to $\epsilon$ for
Case 3, as labelled.}
\label{fig:neumann_case3-plot}
\end{figure}
\begin{figure}[tbp]
\centering
\begin{tikzpicture}
\begin{loglogaxis}[
xlabel={Interface width, $\epsilon$},
ylabel={$E_\epsilon$},
x dir=reverse,
width=0.8\textwidth,
legend entries={BC1,
BC1M1,
BC1M2,
BC2},
legend cell align=left,
legend style={column sep=0.5em,draw=white},
]
\addplot[solid,mark=*,black] plot coordinates {
(8.000e-01, 1.7e-01)
(4.000e-01, 4.4e-02)
(2.000e-01, 1.1e-02)
(1.000e-01, 3.0e-03)
};
\addplot[dashed,mark=square*,mark options=solid,black] plot coordinates {
(8.000e-01, 1.4e-01)
(4.000e-01, 2.0e-02)
(2.000e-01, 4.6e-03)
(1.000e-01, 1.2e-03)
};
\addplot[dotted,mark=triangle*,mark options=solid,black] plot coordinates {
(8.000e-01, 3.4e-01)
(4.000e-01, 5.5e-02)
(2.000e-01, 1.2e-02)
(1.000e-01, 3.1e-03)
};
\addplot[solid,mark=*,green] plot coordinates {
(8.000e-01, 1.7e-01)
(4.000e-01, 4.6e-02)
(2.000e-01, 1.2e-02)
};
\end{loglogaxis}
\end{tikzpicture}
\caption{$L^2$ errors for the Neumann problem with respect to $\epsilon$ for
Case 4, as labelled.}
\label{fig:neumann_case4-plot}
\end{figure}
\begin{figure}[tbp]
\centering
\begin{tikzpicture}
\begin{loglogaxis}[
xlabel={Interface width, $\epsilon$},
ylabel={$E_\epsilon$},
x dir=reverse,
width=0.8\textwidth,
legend entries={BC1,
BC1M1,
BC1M2,
BC2},
legend cell align=left,
legend style={column sep=0.5em,draw=white},
]
\addplot[solid,mark=*,black] plot coordinates {
(8.000e-01, 1.8e-01)
(4.000e-01, 5.2e-02)
(2.000e-01, 1.5e-02)
(1.000e-01, 4.3e-03)
};
\addplot[dashed,mark=square*,mark options=solid,black] plot coordinates {
(8.000e-01, 1.5e-01)
(4.000e-01, 2.2e-02)
(2.000e-01, 1.2e-02)
(1.000e-01, 5.8e-03)
(5.000e-02, 2.8e-03)
};
\addplot[dotted,mark=triangle*,mark options=solid,black] plot coordinates {
(8.000e-01, 3.5e-01)
(4.000e-01, 6.3e-02)
(2.000e-01, 1.6e-02)
(1.000e-01, 4.6e-03)
(5.000e-02, 2.4e-03)
};
\addplot[solid,mark=*,green] plot coordinates {
(8.000e-01, 1.8e-01)
(4.000e-01, 5.1e-02)
(2.000e-01, 1.4e-02)
(1.000e-01, 4.7e-03)
};
\end{loglogaxis}
\end{tikzpicture}
\caption{$L^\infty$ errors for the Neumann problem with respect to $\epsilon$
for Case 4, as labelled.}
\label{fig:neumann_case4-plot-2}
\end{figure}
\begin{table}[tbp]
\scriptsize
\centering
\begin{tabular}{rlrlrlrlr}
\toprule
& BC1 & & BC1M1 & & BC1M2 & & BC2 & \\
$\epsilon$ & $E$ & $k$ & $E$ & $k$ & $E$ & $k$ & $E$ & $k$ \\
\midrule
\multicolumn{4}{l}{Case 1} \\
0.800 & 3.39\e 1& & 4.77\e 1& & & & 3.09\e 1& \\
0.400 & 9.94\e 2& 1.8 & 1.12\e 1& 2.1 & & & 9.52\e 2& 1.7 \\
0.200 & 2.57\e 2& 2.0 & 2.68\e 2& 2.1 & & & 2.57\e 2& 1.9 \\
0.100 & 6.43\e 3& 2.0 & 6.51\e 3& 2.0 & & & & \\
0.050 & 1.61\e 3& 2.0 & 1.59\e 3& 2.0 & & & & \\
0.025 & 4.15\e 4& 2.0 & 3.87\e 4& 2.0 & & & & \\
\midrule
\multicolumn{4}{l}{Case 2} \\
0.800 & 2.46\e 1& & 1.96\e 1& & 2.88\e 1& & 2.22\e 1& \\
0.400 & 7.30\e 2& 1.8 & 2.58\e 2& 2.9 & 7.81\e 2& 1.9 & 6.99\e 2& 1.7 \\
0.200 & 1.94\e 2& 1.9 & 5.21\e 3& 2.3 & 1.96\e 2& 2.0 & 1.95\e 2& 1.8 \\
0.100 & 5.16\e 3& 1.9 & 1.20\e 3& 2.1 & 5.10\e 3& 1.9 & & \\
\midrule
\multicolumn{4}{l}{Case 3} \\
0.800 & 1.27\e 1& & 8.74\e 2& & 3.16\e 2& & 1.18\e 1& \\
0.400 & 3.12\e 2& 2.0 & 2.85\e 2& 1.6 & 1.28\e 2& 1.3 & 2.82\e 2& 2.1 \\
0.200 & 7.48\e 3& 2.1 & 7.08\e 3& 2.0 & 3.40\e 3& 1.9 & 8.13\e 3& 1.8 \\
0.100 & 1.81\e 3& 2.0 & 1.75\e 3& 2.0 & 8.58\e 4& 2.0 & & \\
0.050 & 4.48\e 4& 2.0 & 4.32\e 4& 2.0 & 2.12\e 4& 2.0 & & \\
0.025 & 1.15\e 4& 2.0 & 1.06\e 4& 2.0 & & & & \\
\midrule
\multicolumn{4}{l}{Case 4} \\
0.800 & 1.71\e 1& & 1.38\e 1& & 3.39\e 1& & 1.74\e 1& \\
0.400 & 4.42\e 2& 2.0 & 2.04\e 2& 2.8 & 5.51\e 2& 2.6 & 4.61\e 2& 1.9 \\
0.200 & 1.14\e 2& 2.0 & 4.58\e 3& 2.2 & 1.24\e 2& 2.2 & 1.20\e 2& 1.9 \\
0.100 & 2.95\e 3& 1.9 & 1.18\e 3& 2.0 & 3.09\e 3& 2.0 & & \\
\bottomrule
\end{tabular}
\caption{The $L^2$ error for the Neumann problem as a function of $\epsilon$
for all cases. All results are calculated with $n=8192$ in each direction
on uniform grids. Except for the scheme BC1M2 in Case 1, which was not
simulated, blank results indicate that the solutions require even finer
grids to converge.}
\label{tab:neumann_error_per_eps}
\end{table}
\begin{table}[tbp]
\scriptsize
\centering
\begin{tabular}{rlrlrlrlr}
\toprule
& BC1 & & BC1M1 & & BC1M2 & & BC2 & \\
$\epsilon$ & $E$ & $k$ & $E$ & $k$ & $E$ & $k$ & $E$ & $k$ \\
\midrule
0.800 & 1.80\e 1& & 1.46\e 1& & 3.54\e 1& & 1.76\e 1& \\
0.400 & 5.17\e 2& 1.8 & 2.24\e 2& 2.7 & 6.32\e 2& 2.5 & 5.08\e 2& 1.8 \\
0.200 & 1.48\e 2& 1.8 & 1.18\e 2& 0.9 & 1.59\e 2& 2.0 & 1.44\e 2& 1.8 \\
0.100 & 4.29\e 3& 1.8 & 5.77\e 3& 1.0 & 4.56\e 3& 1.8 & 4.72\e 3& 1.6 \\
0.050 & & & 2.79\e 3& 1.0 & 2.40\e 3& 0.9 & & \\
\bottomrule
\end{tabular}
\caption{The $L^\infty$ error for the Neumann problem as a function of
$\epsilon$ for Case 4. All results are calculated with $n=8192$ in each
direction on uniform grids. Blank results indicate that the solutions
require even finer grids to converge.}
\label{tab:neumann_error_per_eps_infty}
\end{table}
\clearpage
\subsection{Robin boundary conditions}
Now consider the Poisson equation with Robin boundary conditions,
\[
\begin{alignedat}{3}
\boldsymbol\Delta u &= f && \ensuremath{\textup{in}\ } D, \\
\vct n\cdot\grad u &= k(u-g) \quad && \ensuremath{\textup{on}\ } \partial D.
\end{alignedat}
\]
As in the previous section, we solve the DDM equation
\[
\div\left(\phi\grad u\right) + \text{BC} = \phi f,
\]
using BC1, BC2, BC1M and BC2M.
\subsubsection{Case 1}
Consider the case where $D$ is a circle of radius $R=1$ centred at $(0,0)$,
and where the analytic solution to the Poisson equation in $D$ is
\[
u_{\text{an}}(x,y) = \frac{1}{4}\left( x^2 + y^2 \right).
\]
This corresponds to $f = 1$,
\[
g = \frac{1}{2} \left(\frac{1}{2} - \frac{1}{k}\right),
\]
and $\boldsymbol\Delta_{\text s} u_{\text{an}} = 0$. We will consider the case when $k=-1$, thus
$g=3/4$.
\subsubsection{Case 2}
Again let $D$ be the circle at $(0,0)$ with radius $R=1$, but now consider the
case where the analytic solution is
\[
u_{\text{an}}(x,y) = y\left(x^2 + y^2\right),
\]
which corresponds to
\begin{align*}
f &= 8y, \\
g &= y\left( 1 - \frac 3 k \right),
\end{align*}
and
\[
\boldsymbol\Delta_{\text s} u_{\text{an}} = -y.
\]
Again let $k=-1$ so that $g=4y$. Similar to the Neumann case 3, $g$ is
extended constantly in the normal direction in the DDM equations.
\subsubsection{Case 3}
For the final Robin case we let $D=[-1,1]^2$, and we consider a case that
corresponds to the Neumann Case 4 where the analytic solution is
\[
u_{\text{an}}(x,y) = e^r,
\]
where $r = \frac{x^2 + y^2}{4}$. This corresponds to
\[
f = (r+1) e^r.
\]
The boundary function $g$ (again with $k=-1$) and the surface Laplacian of the analytic function
along the boundary are
\begin{align*}
g &= \frac 3 2 e^{\frac{1 + \xi^2}{4}}, \\
\boldsymbol\Delta_{\text s} u_\text{an} &= \frac 1 4\left(\xi^2 + 2\right)e^{\frac{1 + \xi^2}{4}},
\end{align*}
where $\xi \equiv x$ along the bottom and top boundaries, and $\xi \equiv y$
along the left and right boundaries.
\subsubsection{Results}
The convergence results calculated with the $L^2$ norm are presented in
\cref{fig:robin_cases-plot1,fig:robin_cases-plot2,fig:robin_cases-plot3} and
\cref{tab:robin_error_per_eps}.
\Cref{fig:robin_cases-plot2-2,tab:robin_error_per_eps_infty} show the results
for Case 2 where the $L^\infty$ norm is used. Again the results indicate that
BC1M1 performs better than BC1, although both methods are second-order
accurate, as predicted by our analysis. The results also show that BC1 gives
better results than BC2, which is approximately first-order accurate as also
predicted by theory. Further, as in the Neumann case, BC2 is seen to require
very fine grids to converge. For small $\epsilon$, the requirement exceeds our
finest grid.
The modified BC2M1 and BC2M2 schemes are also tested. The results with BC2M2
are almost indistinguishable from the results with BC2M1, so only the latter
results are shown in the following figures. All results are listed in
\cref{tab:robin_error_per_eps} and \cref{tab:robin_error_per_eps_infty}. The
BC2M schemes are shown to perform better than the BC2 scheme, but they also
require very fine grids to converge. Further, the orders of accuracy of BC2M1
and BC2M2 seem to deteriorate somewhat at the smallest values of $\epsilon$.
As discussed in \cref{sec:dda_robin_bc2m}, this could be due to the influence
of higher order terms in the expansion, or the amplification of error and is
under study.
\begin{figure}[tbp]
\centering
\begin{tikzpicture}
\begin{loglogaxis}[
xlabel={Interface width, $\epsilon$},
ylabel={$E_\epsilon$},
x dir=reverse,
width=0.8\textwidth,
legend entries={BC1,
BC1M1,
BC2,
BC2M1,},
legend cell align=left,
legend style={column sep=0.5em,draw=white},
]
\addplot[solid,mark=*,black] plot coordinates {
(8.000e-01, 2.1e-01)
(4.000e-01, 4.4e-02)
(2.000e-01, 9.0e-03)
(1.000e-01, 2.0e-03)
(5.000e-02, 4.6e-04)
(2.500e-02, 1.2e-04)
};
\addplot[dashed,mark=square*,mark options=solid,black] plot coordinates {
(8.000e-01, 1.2e-01)
(4.000e-01, 2.7e-02)
(2.000e-01, 6.4e-03)
(1.000e-01, 1.6e-03)
(5.000e-02, 3.8e-04)
};
\addplot[solid,mark=*,green] plot coordinates {
(8.000e-01, 1.7e-01)
(4.000e-01, 5.2e-02)
(2.000e-01, 2.2e-02)
(1.000e-01, 1.0e-02)
};
\addplot[dashed,mark=square*,mark options=solid,green] plot coordinates {
(8.000e-01, 1.14e-01)
(4.000e-01, 2.29e-02)
(2.000e-01, 6.87e-03)
(1.000e-01, 2.54e-03)
};
\end{loglogaxis}
\end{tikzpicture}
\caption{$L^2$ errors for the Robin problem with respect to $\epsilon$ for
Case 1, as labelled.}
\label{fig:robin_cases-plot1}
\end{figure}
\begin{figure}[tbp]
\centering
\begin{tikzpicture}
\begin{loglogaxis}[
xlabel={Interface width, $\epsilon$},
ylabel={$E_\epsilon$},
x dir=reverse,
width=0.8\textwidth,
legend entries={BC1,
BC1M1,
BC2,
BC2M1,},
legend cell align=left,
legend style={column sep=0.5em,draw=white},
]
\addplot[solid,mark=*,black] plot coordinates {
(8.000e-01, 4.21e-01)
(4.000e-01, 1.02e-01)
(2.000e-01, 2.19e-02)
(1.000e-01, 4.77e-03)
(5.000e-02, 1.09e-03)
(2.500e-02, 2.57e-04)
};
\addplot[dashed,mark=square*,mark options=solid,black] plot coordinates {
(8.000e-01, 1.27e-1)
(4.000e-01, 4.87e-2)
(2.000e-01, 1.30e-2)
(1.000e-01, 3.27e-3)
(5.000e-02, 8.15e-4)
(2.500e-02, 2.04e-4)
};
\addplot[solid,mark=*,green] plot coordinates {
(8.000e-01, 3.84e-1)
(4.000e-01, 9.16e-2)
(2.000e-01, 2.51e-2)
(1.000e-01, 9.13e-3)
};
\addplot[dashed,mark=square*,mark options=solid,green] plot coordinates {
(8.000e-01, 3.48e-1)
(4.000e-01, 7.24e-2)
(2.000e-01, 1.56e-2)
(1.000e-01, 4.65e-3)
};
\end{loglogaxis}
\end{tikzpicture}
\caption{$L^2$ errors for the Robin problem with respect to $\epsilon$ for
Case 2, as labelled.}
\label{fig:robin_cases-plot2}
\end{figure}
\begin{figure}[tbp]
\centering
\begin{tikzpicture}
\begin{loglogaxis}[
xlabel={Interface width, $\epsilon$},
ylabel={$E_\epsilon$},
x dir=reverse,
width=0.8\textwidth,
legend entries={BC1,
BC1M1,
BC2,
BC2M1,},
legend cell align=left,
legend style={column sep=0.5em,draw=white},
]
\addplot[solid,mark=*,black] plot coordinates {
(8.000e-01, 3.80e-1)
(4.000e-01, 9.39e-2)
(2.000e-01, 2.14e-2)
(1.000e-01, 4.90e-3)
(0.500e-01, 1.15e-3)
(0.250e-01, 2.78e-4)
};
\addplot[dashed,mark=square*,mark options=solid,black] plot coordinates {
(8.000e-01, 1.36e-1)
(4.000e-01, 4.30e-2)
(2.000e-01, 1.06e-2)
(1.000e-01, 2.53e-3)
(5.000e-02, 6.09e-4)
(2.500e-02, 1.49e-4)
};
\addplot[solid,mark=*,green] plot coordinates {
(8.000e-01, 3.24e-1)
(4.000e-01, 7.49e-2)
(2.000e-01, 1.91e-2)
(1.000e-01, 6.98e-3)
};
\addplot[dashed,mark=square*,mark options=solid,green] plot coordinates {
(8.000e-01, 3.04e-1)
(4.000e-01, 6.94e-2)
(2.000e-01, 1.72e-2)
(1.000e-01, 6.40e-3)
};
\end{loglogaxis}
\end{tikzpicture}
\caption{$L^\infty$ errors for the Robin problem with respect to $\epsilon$
for Case 2, as labelled.}
\label{fig:robin_cases-plot2-2}
\end{figure}
\begin{figure}[tbp]
\centering
\begin{tikzpicture}
\begin{loglogaxis}[
xlabel={Interface width, $\epsilon$},
ylabel={$E_\epsilon$},
x dir=reverse,
width=0.8\textwidth,
legend entries={BC1,
BC1M1,
BC2,
BC2M1,},
legend cell align=left,
legend style={column sep=0.5em,draw=white},
]
\addplot[solid,mark=*,black] plot coordinates {
(8.000e-01, 7.9e-02)
(4.000e-01, 1.6e-02)
(2.000e-01, 3.7e-03)
(1.000e-01, 9.0e-04)
};
\addplot[dashed,mark=square*,mark options=solid,black] plot coordinates {
(8.000e-01, 3.6e-02)
(4.000e-01, 7.4e-03)
(2.000e-01, 1.7e-03)
(1.000e-01, 4.3e-04)
};
\addplot[solid,mark=*,green] plot coordinates {
(8.000e-01, 8.2e-02)
(4.000e-01, 1.9e-02)
(2.000e-01, 5.9e-03)
};
\addplot[dashed,mark=square*,mark options=solid,green] plot coordinates {
(8.000e-01, 6.75e-02)
(4.000e-01, 1.27e-02)
(2.000e-01, 2.78e-03)
};
\end{loglogaxis}
\end{tikzpicture}
\caption{$L^2$ errors for the Robin problem with respect to $\epsilon$ for
Case 3, as labelled.}
\label{fig:robin_cases-plot3}
\end{figure}
\begin{table}[tbp]
\tiny
\centering
\begin{tabular}{clrlrlrlrlr}
\toprule
& BC1 & & BC1M1 & & BC2 & & BC2M1 & & BC2M2\\
$\epsilon$ & $E$ & $k$ & $E$ & $k$ & $E$ & $k$ & $E$ & $k$ & $E$ & $k$ \\
\midrule
\multicolumn{4}{l}{Case 1} \\
0.800&2.11\e 1& &1.20\e 1& &1.70\e 1& &1.14\e 1& &1.15\e 1\\
0.400&4.40\e 2&2.3 &2.72\e 2&2.1 &5.21\e 2&1.7&2.29\e 2&2.3&2.31\e 2 &2.3\\
0.200&8.99\e 3&2.3 &6.42\e 3&2.1 &2.18\e 2&1.3&6.87\e 3&1.7&6.92\e 3 &1.7\\
0.100&1.95\e 3&2.2 &1.57\e 3&2.0 &1.03\e 2&1.1&2.54\e 3&1.4&2.55\e 3 &1.4\\
0.050&4.57\e 4&2.1 &3.79\e 4&2.0\\
0.025&1.23\e 4&1.9\\
\midrule
\multicolumn{4}{l}{Case 2} \\
0.800&4.21\e 1& &1.27\e 1& &3.84\e 1& &3.48\e 1& &3.49\e 1& \\
0.400&1.02\e 1&2.0&4.87\e 2&1.4&9.16\e 2&2.1 &7.24\e 2&2.3&7.26\e 2&2.3\\
0.200&2.19\e 2&2.2&1.30\e 2&1.9&2.51\e 2&1.9 &1.56\e 2&2.2&1.57\e 2&2.2\\
0.100&4.77\e 3&2.2&3.27\e 3&2.0&9.13\e 3&1.5 &4.65\e 3&1.7&4.67\e 3&1.7\\
0.050&1.09\e 3&2.1&8.15\e 4&2.0\\
0.025&2.57\e 4&2.1&2.04\e 4&2.0\\
\midrule
\multicolumn{4}{l}{Case 3} \\
0.800&7.89\e 2& &3.60\e 2& &8.23\e 2& &6.75\e 2& &6.80\e 2& \\
0.400&1.64\e 2&2.3&7.38\e 3&2.3 &1.89\e 2&2.2&1.27\e 2&2.3&1.28\e 2&2.4\\
0.200&3.70\e 3&2.2&1.71\e 3&2.1 &5.90\e 3&1.7&2.78\e 3&2.2&2.81\e 3&2.2\\
0.100&9.04\e 4&2.0&4.28\e 4&2.0\\
\bottomrule
\end{tabular}
\caption{The $L^2$ error $E$ for the Robin problem as a function of $\epsilon$
for all cases, together with the estimated order of convergence $k$. All
results are calculated with $n=8192$ in each direction on uniform grids,
except for Case 3 with BC2, where the results are calculated with $n=4096$ in
each direction. Blank entries indicate that the solutions require even finer
grids to converge.}
\label{tab:robin_error_per_eps}
\end{table}
\begin{table}[tbp]
\tiny
\centering
\begin{tabular}{clrlrlrlrlr}
\toprule
& BC1 & & BC1M1 & & BC2 & & BC2M1 & & BC2M2 \\
$\epsilon$ & $E$ & $k$ & $E$ & $k$ & $E$ & $k$ & $E$ & $k$ & $E$ & $k$ \\
\midrule
\multicolumn{4}{l}{Case 2} \\
0.800&3.80\e 1& &1.36\e 1& &3.24\e 1& &3.04\e 1& &3.05\e 1& \\
0.400&9.39\e 2&2.0&4.30\e 2&1.7&7.49\e 2&2.1 &6.94\e 2&2.1&6.95\e 2&2.1\\
0.200&2.14\e 2&2.1&1.06\e 2&2.0&1.91\e 2&2.0 &1.72\e 2&2.0&1.73\e 2&2.0\\
0.100&4.90\e 3&2.1&2.53\e 3&2.1&6.98\e 3&1.5 &6.40\e 3&1.4&6.42\e 3&1.4\\
0.050&1.15\e 3&2.1&6.09\e 4&2.1\\
0.025&2.78\e 4&2.1&1.49\e 4&2.0\\
\bottomrule
\end{tabular}
\caption{The $L^\infty$ error $E$ for the Robin problem as a function of
$\epsilon$ for Case 2, together with the estimated order of convergence $k$.
All results are calculated with $n=8192$ in each direction on uniform grids.
Blank entries indicate that the solutions require even finer grids to
converge.}
\label{tab:robin_error_per_eps_infty}
\end{table}
\Cref{fig:plot_robin_case1} shows a plot of the solutions of Case 1 with
$\epsilon=0.2$ along $y=0$. The plot shows the solutions with BC1 (black
dashed), BC1M1 (black dotted), BC2 (blue dashed) and BC2M1 (blue dotted). We
see that the solutions with the modified schemes BC1M1 and BC2M1 are more
accurate than those with the corresponding original schemes BC1 and BC2.
\begin{figure}[tbp]
\centering
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{plot_full}
\caption{}
\label{fig:plot_full}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{plot_zoom}
\caption{}
\label{fig:plot_zoom}
\end{subfigure}
\caption{A plot of the solutions of Case 1 along $y=0$. The solutions
with BC2M1 and BC2M2 are indistinguishable, so only BC2M1 is shown. (a)
The full slice, with the domain boundary depicted as thin vertical lines
at $x=\pm1$. (b) A zoom-in showing the solutions near the left boundary.}
\label{fig:plot_robin_case1}
\end{figure}
\clearpage
\section{Conclusion}
\label{sec:conclusion}
We have performed a matched asymptotic analysis of the DDM for the
Poisson equation with Robin boundary conditions and for a steady
reaction-diffusion equation with Neumann boundary conditions. Our analysis
shows that for certain choices of the boundary condition approximations, the
DDM is second-order accurate in the interface thickness $\epsilon$. However,
for other choices the DDM is only first-order accurate. This is confirmed
numerically and helps to explain why the choice of boundary-condition
approximation is important for rapid global convergence and high accuracy.
In particular, the
boundary condition BC1, which arises from representing the surface delta
function as $|\nabla\phi|$, is seen to give rise to a second-order
approximation for both the Neumann and Robin boundary conditions and thus is
perhaps the most reliable choice. The boundary condition BC2, which arises
from approximating the surface delta function as $\epsilon|\nabla\phi|^2$,
yields a second-order accurate approximation for the Neumann problem, but
only first-order accuracy for the Robin problem. In addition, BC2 requires
very fine meshes to converge.
Our analysis also suggests correction terms that may be added to yield a more
accurate diffuse-domain method. We have presented several techniques for
obtaining second-order boundary conditions and performed numerical
simulations that confirm the predicted accuracy, although the order of
accuracy may deteriorate at the smallest values of $\epsilon$, possibly due to
the amplification of errors associated with the conditioning of the system or
to the influence of higher-order terms in the asymptotic expansion. This is
currently under study. Further, the correction terms do not improve the mesh
requirements for convergence.
A common feature of the correction terms is that the interface thickness must
be sufficiently small in order for the DDM to remain an elliptic equation.
In addition, one choice of boundary condition involves the use of the surface
Laplacian of the solution, which could in principle lead to faster asymptotic
convergence since it directly cancels terms in the inner expansion of the
asymptotic matching. However, the extension of this term outside the domain
of interest can cause the loss of ellipticity of the DDM. As such, this is
an intriguing but not a practical scheme. Nevertheless, as a proof of
principle, we considered the effect of this term by using the surface
Laplacian of the analytic solution in the DDM. We found that this
choice gave the smallest errors in nearly all the cases considered. By
incorporating different extensions of the boundary conditions in the exterior
of the domain that automatically guarantee ellipticity, we aim to make this
method practical. This is the subject of future investigations.
We plan to extend our analysis to the Dirichlet problem where the boundary
condition approximations considered by Li et al.\ \cite{Li09} seem only to
yield first-order accuracy \cite{Franz12,Reuter12}. Our asymptotic analysis
thus has the potential to identify correction terms that can be used to
generate second-order accurate diffuse-domain methods for the Dirichlet
problem.
\medskip
{\bf Acknowledgement.}
KYL acknowledges support from the Fulbright foundation for a Visiting
Researcher Grant to fund a stay at the University of California, Irvine. KYL
also acknowledges support from Statoil and GDF SUEZ, and the Research Council
of Norway (193062/S60) for the research project Enabling low emission LNG
systems. JL acknowledges support from the National Science Foundation,
Division of Mathematical Sciences, and the National Institutes of Health through
grant P50GM76516 for a Center of Excellence in Systems Biology at the
University of California, Irvine. The authors gratefully thank Bernhard Müller
(NTNU) and Svend Tollak Munkejord (SINTEF Energy Research) for helpful
discussions and for feedback on the manuscript. The authors also wish to thank
the anonymous reviewers for comments that greatly improved the manuscript.
\medskip
\bibliographystyle{siam}
https://arxiv.org/abs/0803.2850 | Rigorous sufficient conditions for index-guided mode in microstructured dielectric waveguides | We derive a sufficient condition for the existence of index-guided modes in a very general class of dielectric waveguides, including photonic-crystal fibers (arbitrary periodic claddings, such as ``holey fibers''), anisotropic materials, and waveguides with periodicity along the propagation direction. This condition provides a rigorous guarantee of cutoff-free index-guided modes in any such structure where the core is formed by increasing the index of refraction (e.g. removing a hole). It also provides a weaker guarantee of guidance in cases where the refractive index is increased ``on average'' (precisely defined). The proof is based on a simple variational method, inspired by analogous proofs of localization for two-dimensional attractive potentials in quantum mechanics. | \section{Introduction}
In this paper, we present rigorous sufficient conditions for the
existence of index-guided modes, including conditions for cutoff-free
modes, in a wide variety of dielectric waveguides---from ordinary
step-index fibers~\cite{Snyder83}, to photonic-crystal ``holey''
fibers~\cite{Russell03,Bjarklev03,Zolla05,Joannopoulos08}, and even
fiber-Bragg gratings~\cite{Ramaswami98} or other periodically
modulated waveguides~\cite{Elachi76,Fan95:periodicwvg,Joannopoulos08}.
The dispersion relations of such waveguides must almost always be
computed numerically, and so exact analytical theorems like the one
derived here provide a foundation of certainty that is not available
in any other way. A rigorous theorem allows us to give a general
answer (although not a necessary condition) to questions such as: if
the waveguide core has a mixture of higher- and lower-index regions,
how much higher-index material is enough for cutoff-free guidance; and
under what conditions do photonic-crystal fibers, like step-index
fibers, have \emph{cutoff-free} guided modes? The theorem provides an
absolute guarantee, with no calculation required, that strictly
increasing the refractive index to form the waveguide (e.g. filling in
a hole of a holey fiber) yields a cutoff-free guided mode. It also
leads directly to necessary conditions for single-polarization fibers,
the subject of another manuscript currently in preparation. Our work
extends an earlier proof of guided modes for homogeneous-cladding,
non-periodic, dielectric waveguides with isotropic~\cite{Bamberger90}
or anisotropic~\cite{Urbach96} materials, and is closely related in
spirit to proofs of the existence of bound modes in two-dimensional
potentials for quantum mechanics~\cite{Yang89}.
\begin{figure}[t]
\begin{center}
\subfigure[Cross section of a waveguide (e.g. a conventional fiber) with a homogeneous cladding and an arbitrary-shape core.]{\includegraphics[width=0.273\columnwidth]{1a}}
\subfigure[Cross section of a photonic-crystal fiber with periodic cladding and arbitrary-shape core.]{\includegraphics[width=0.3\columnwidth]{1Aa}}
\subfigure[A waveguide periodic in the propagation ($z$) direction surrounded by a homogeneous cladding.]{\includegraphics[width=0.27\columnwidth]{1b}}
\caption{\label{fig:schematic}Schematics of various types of dielectric waveguides to which our theorem is applicable. Light propagates in the $z$ direction (along which the structure is either uniform or periodic) and is confined in the $xy$ plane by a higher-index core compared to the surrounding (homogeneous or periodic) cladding.}
\end{center}
\end{figure}
The most common guiding mechanism in dielectric waveguides is
\emph{index guiding} (or ``total internal reflection''), in which a
higher-index \emph{core} is surrounded by a lower-index
\emph{cladding} $\varepsilon_c$ ($\varepsilon$ is the relative permittivity, the
square of the refractive index in isotropic non-magnetic materials).
A schematic of several such dielectric waveguides is shown in
\figref{schematic}. In particular, we suppose that the waveguide is
described by a dielectric function $\varepsilon(x,y,z) = \varepsilon_c(x,y,z) +
\Delta\varepsilon(x,y,z)$ such that: $\varepsilon$, $\varepsilon_c$, and $\Delta\varepsilon$ are periodic in $z$
(the propagation direction) with period $a$ ($a\to0$ for the common
case of a waveguide with a constant cross-section); that the cladding
dielectric function $\varepsilon_c$ is periodic in $xy$ (e.g. in a
photonic-crystal fiber), with a homogeneous cladding (e.g. in a
conventional fiber) as a special case; and the core is formed by a
change $\Delta\varepsilon$ in some region of the $xy$ plane, sufficiently localized
that $\int|1/\varepsilon - 1/\varepsilon_c|<\infty$ (integrated over the $xy$ plane
and the unit cell in $z$). This includes a very wide variety of
dielectric waveguides, from conventional fibers
[\figref{schematic}(a)] to photonic-crystal ``holey'' fibers
[\figref{schematic}(b)] to waveguides with a periodic ``grating''
along the propagation direction [\figref{schematic}(c)] such as
fiber-Bragg gratings and other periodic waveguides. We exclude
metallic structures (i.e., we require $\varepsilon>0$) and make the
approximation of lossless materials (real $\varepsilon$). We allow anisotropic
materials. The case of substrates (e.g. for strip waveguides in
integrated optics~\cite{Hunsperger82, Saleh91, Chen06}) is considered
in \secref{substrates-etc}. We also consider only non-magnetic
materials (relative permeability $\mu = 1$), although a future
extension to magnetic materials should be straightforward.
Intuitively, if the refractive index is increased in the core, i.e. if
$\Delta\varepsilon$ is non-negative, then we might expect to obtain exponentially
localized index-guided modes, and this expectation is borne out by
innumerable numerical calculations, even in complicated geometries
like holey fibers~\cite{Russell03,Bjarklev03,Zolla05,Joannopoulos08}.
However, an intuitive expectation of a guided mode is far from a
rigorous guarantee, and upon closer inspection there arise a number of
questions whose answers seem harder to guess with certainty. First,
even if $\Delta\varepsilon$ is strictly non-negative, is there a guided mode at
\emph{every} wavelength, or is there the possibility of e.g. a
long-wavelength cutoff (as was initially suggested in holey
fibers~\cite{Kuhlmey02}, but was later contradicted by more careful
numerical calculations~\cite{Wilcox05})? Second, what if $\Delta\varepsilon$ is
\emph{not} strictly non-negative, i.e. the core consists of partly
increased and partly decreased index; it is known in such cases,
e.g. in ``W-profile fibers''~\cite{Kawakami74}, that there is the
possibility of a long-wavelength cutoff for guidance, but precisely
how large must the decreased-index regions be to produce such a cutoff?
Third, under some circumstances it is possible to obtain a
``single-polarization'' fiber, in which the waveguide is truly
single-mode (as opposed to two degenerate polarization modes as in a
cylindrical
fiber)~\cite{Okoshi80,Eickhoff82,Simpson83,Messerly91,Kubota04,Li05}---our
theorem can be extended, similar to \citeasnoun{Bamberger90}, to a
condition for \emph{two} guided modes, and we will explore the
consequences for single-polarization fibers in a subsequent paper. It
turns out that all of these questions can be rigorously answered (in
the sense of sufficient conditions for guidance) for the very general
geometries considered in \figref{schematic}, without resorting to
approximations or numerical computations.
We will proceed as follows. First, in \secref{theorem}, we review the
mechanism of index guiding, state our result (a sufficient condition
for the existence of index-guided modes), and discuss some important
special cases. In \secref{homogeneous}, we first prove this theorem
for the simplified special case of a homogeneous cladding $\varepsilon_c$,
where the proof is much easier to follow. Then, in \secref{general},
we generalize the proof to arbitrary periodic claddings, such as for
holey photonic-crystal fibers (with some algebraic details left to the
appendix). In \secref{substrates-etc}, we discuss a few contexts that
go beyond the initial assumptions of our theorem: substrates, material
dispersion, and finite-size effects. Finally, we offer some
concluding remarks in \secref{conclusion} discussing future directions.
\section{Statement of the theorem}
\label{sec:theorem}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.7\columnwidth]{banddiagram}
\end{center}
\caption{\label{fig:banddiagram}Example dispersion relation of a
simple 2d dielectric waveguide in air (inset) for the TM polarization
(electric field out of the plane), showing the light cone, the light
line, the fundamental (cutoff-free) guided mode, and a higher-order
guided mode with a cutoff.}
\end{figure}
First, let us review the basic description of the eigenmodes of a
dielectric waveguide~\cite{Snyder83, Joannopoulos08}. In a waveguide
as defined above, the solutions of Maxwell's equations (both guided
and non-guided) can be written in the form of eigenmodes
$\vec{H}(x,y,z) e^{i\beta z - i\omega t}$ (via Bloch's theorem thanks
to the periodicity in $z$)~\cite{Joannopoulos08}, where $\omega$ is
the frequency, $\beta$ is the propagation constant, and the
magnetic-field envelope $\vec{H}(x,y,z)$ is periodic in $z$ with
period $a$ (or is independent of $z$ in the common case of a constant
cross section, $a\to0$). A plot of $\omega$ versus $\beta$ for all
eigenmodes is the ``dispersion relation'' of the waveguide, one
example of which is shown in \figref{banddiagram}. In the absence of the core
(i.e. if $\Delta\varepsilon = 0$), the (non-localized) modes propagating in the
infinite cladding form the ``light cone'' of the
structure~\cite{Russell03,Bjarklev03,Zolla05,Joannopoulos08}; and at
each real $\beta$ there is a fundamental (minimum-$\omega$)
space-filling mode at a frequency $\omega_c(\beta)$ with a
corresponding field envelope
$\vec{H}_c$~\cite{Russell03,Bjarklev03,Zolla05,Joannopoulos08}. Such
a light cone is shown as a shaded triangular region in \figref{banddiagram}.
Below the ``light line'' $\omega_c(\beta)$, the only solutions in the
cladding are evanescent modes that decay exponentially in the
transverse
directions~\cite{Kuchment01,Russell03,Bjarklev03,Zolla05,Joannopoulos08}.
Therefore, once the core is introduced ($\Delta\varepsilon \neq 0$), any new
solutions with $\omega < \omega_c$ must be guided modes, since they
are exponentially decaying in the cladding far from the core: these
are the index-guided modes (if any). Such guided modes are shown as
lines below the light cone in \figref{banddiagram}: in this case, both a
lowest-lying (``fundamental'') guided mode with no low-frequency
cutoff (although it approaches the light line asymptotically as
$\omega \to 0$) and a higher-order guided mode with a low-frequency
cutoff are visible. Since a mode is guided if $\omega < \omega_c$, we
will prove the existence of a guided mode by showing that $\omega$ has
an upper bound $< \omega_c$, using the variational (minimax) theorem
for Hermitian eigenproblems~\cite{Joannopoulos08}.
[Modes that lie beneath the light line are not the only type of
guided modes in microstructured dielectric waveguides. While all the
guided modes in a traditional, homogeneous-cladding fiber lie below
the light line and are confined by the mechanism of index-guiding,
there can also be bandgap-guided modes in photonic-crystal
fibers~\cite{Russell03,Bjarklev03,Zolla05,Joannopoulos08}. These
bandgap-guided modes lie above the cladding light line and are
therefore not index-guided. Bandgap-guided modes always have a
low-frequency cutoff (since in the long-wavelength limit the structure
can be approximated by a ``homogenized'' effective medium that has no
gap~\cite{Smith05}). We do not consider bandgap-guided modes in this
work; sufficient conditions for such modes to exist were considered
by~\citeasnoun{Kuchment04}.]
We will derive the following sufficient condition
for the existence of an index-guided mode in a dielectric waveguide at a
given $\beta$: a guided mode \emph{must} exist whenever
\begin{equation}
\int \vec{D}_c^* \cdot \left(\varepsilon^{-1}-\varepsilon_c^{-1}\right) \vec{D}_c < 0 ,
\label{eq:general-cond}
\end{equation}
where the integral is over $xy$ and one period in $z$ and $\vec{D}_c$
is the displacement field of the cladding's fundamental mode. From
this, we can immediately obtain a number of useful special cases:
\begin{itemize}
\item There must be a cutoff-free guided mode if $\Delta\varepsilon \geq 0$
everywhere (i.e., if we only increase the index to make the core).
\item For a homogeneous cladding (and isotropic media), there must be
a cutoff-free guided mode if $\int(1/\varepsilon - 1/\varepsilon_c) < 0$ (similar to
the earlier theorem of \citeasnoun{Bamberger90}, but generalized to
include waveguides periodic in $z$ and/or cores $\Delta\varepsilon$ that do not have
compact support); a concrete instance is given below.
\item More generally, a guided mode has no long-wavelength cutoff if
\eqref{general-cond} is satisfied for the quasi-static
($\omega\to0$, $\beta\to0$) limit of $\vec{D}_c$.
\end{itemize}
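As a concrete instance of the second case above (with parameters chosen purely
for illustration), consider a $z$-invariant core in a homogeneous cladding
$\varepsilon_c$ built from a raised-index region of cross-sectional area $A_1$ and
permittivity $\varepsilon_1>\varepsilon_c$ together with a depressed-index region of area $A_2$
and permittivity $\varepsilon_2<\varepsilon_c$ (a W-profile-like design). The criterion then
guarantees a cutoff-free guided mode whenever
\begin{equation*}
A_1\left(\frac{1}{\varepsilon_1}-\frac{1}{\varepsilon_c}\right)
+A_2\left(\frac{1}{\varepsilon_2}-\frac{1}{\varepsilon_c}\right)<0,
\end{equation*}
i.e. whenever the raised-index contribution to $\int(1/\varepsilon-1/\varepsilon_c)$ outweighs
the depressed-index one; when the inequality fails, the theorem is silent (a
cutoff may or may not occur).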
\Eqref{general-cond} can also be extended to a sufficient condition
for having \emph{two} guided modes (or equivalently, a necessary
condition for single-polarization guidance), when the cladding
fundamental mode is doubly degenerate. We explore this
generalization, analogous to a result in \citeasnoun{Bamberger90} for
homogeneous claddings, in another manuscript currently being prepared.
\section{Waveguides with a homogeneous cladding}
\label{sec:homogeneous}
To illustrate the basic ideas of the proof in a simpler context, we
will first consider the case of a homogeneous cladding ($\varepsilon_c =
\mathrm{constant}$) and isotropic, $z$-invariant structures ($\varepsilon$ is a
scalar function of $x$ and $y$ only). In doing so, we reproduce a
result first proved (using a somewhat different approach) by
\citeasnoun{Bamberger90} (although the latter result required $\Delta\varepsilon$ to have
compact support, whereas we only require a weaker integrability
condition). Our proof, which we generalize in the next section, is
closely inspired by a proof~\cite{Yang89} of a related result in
quantum mechanics, the fact that any attractive potential in two
dimensions localizes a bound
state~\cite{Simon76,Landau77,Picq82,Yang89,Economou06}; we discuss
this analogy in more detail below.
That is, we take the dielectric function $\varepsilon(x,y)$ to be of the form:
\begin{equation}
\varepsilon(x,y) = \varepsilon_c + \Delta\varepsilon(x,y),
\end{equation}
where $\Delta\varepsilon$ is an arbitrary change in $\varepsilon$ that forms the core of
the waveguide. For convenience, we define a new function $\Delta$ by:
\begin{equation}
\Delta(x,y) \triangleq \varepsilon^{-1} - \varepsilon_c^{-1} .
\label{eq:Delta}
\end{equation}
The only constraints we place on $\Delta\varepsilon$ are that $\varepsilon$ be real and
positive and that $\int |\Delta| dxdy$ be finite, as discussed above.
Now, we wish to show that there must always be a (cutoff-free) guided
mode as long as $\Delta\varepsilon$ is ``mostly positive,'' in the sense that:
\begin{equation}
\int \Delta(x,y) \, dxdy < 0.
\label{eq:homog-cond}
\end{equation}
Since \eqref{homog-cond} is independent of $\omega$ and $\beta$, the
resulting guided mode exists at all frequencies (it is cutoff-free).
The foundation for the proof is the existence of a variational
(minimax) theorem that gives an upper bound for the lowest
eigenfrequency $\omega_\mathrm{min}$. In particular, at each
$\beta$, the eigenmodes $\vec{H}(x,y) e^{i\beta z - i\omega t}$
satisfy a Hermitian eigenproblem \cite{Joannopoulos08}:
\begin{equation}
\nabla_\beta \times \frac{1}{\varepsilon} \nabla_\beta \times \vec{H}
= \hat{\Theta}_\beta \vec{H} = \frac{\omega^2}{c^2} \vec{H},
\label{eq:eigen}
\end{equation}
where
\begin{equation}
\nabla_\beta \triangleq \nabla + i\beta\hat{\vec{z}} ,
\label{eq:nabla-beta}
\end{equation}
with \eqref{eigen} defining the linear operator
$\hat{\Theta}_\beta$. In addition to the eigenproblem, there is also
the ``transversality'' constraint~\cite{Kuchment01,Joannopoulos08}:
\begin{equation}
\nabla_\beta \cdot \vec{H} = 0
\label{eq:transversality}
\end{equation}
(the absence of static magnetic charges). From the Hermitian property
of $\hat{\Theta}_\beta$, the variational theorem immediately
follows~\cite{Joannopoulos08}:
\begin{equation}
\frac{\omega_\mathrm{min}^2(\beta)}{c^2} = \inf_{\nabla_\beta \cdot \vec{H} = 0}
\frac{\int \vec{H}^* \cdot \hat{\Theta}_\beta \vec{H} dxdy}
{\int \vec{H}^* \cdot \vec{H} dxdy}
\label{eq:variational-2d}
\end{equation}
That is, an upper bound for the smallest eigenvalue is obtained by plugging
\emph{any} ``trial function'' $\vec{H}(x,y)$, not necessarily an
eigenfunction, into the right-hand-side (the ``Rayleigh quotient''),
as long as $\vec{H}$ is ``transverse'' [satisfies
\eqref{transversality}].\footnote{Technically, we must also restrict
ourselves to trial functions where the integrals in
\eqref{variational-2d} are defined, i.e. the trial functions must
be in the appropriate Sobolev space $H(\nabla_\beta\times)$.}
Conversely, if \eqref{transversality} is not satisfied, it is easy to
make the numerator of the right-hand-side (which involves
$\nabla_\beta\times\vec{H}$) \emph{zero}, e.g. by setting
$\vec{H}=\nabla\varphi+i\beta\varphi\hat{\vec{z}}$ for any
$\varphi(x,y)$, so transversality of the trial function is critically
important to obtaining a true upper bound.
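Indeed, any field of this form is simply $\nabla_\beta\varphi$, and its
$\nabla_\beta$-curl vanishes identically:
\[
\nabla_\beta\times\nabla_\beta\varphi
= \nabla\times\nabla\varphi
+ i\beta\,\nabla\times\left(\varphi\hat{\vec{z}}\right)
+ i\beta\,\hat{\vec{z}}\times\nabla\varphi
- \beta^2\varphi\,\hat{\vec{z}}\times\hat{\vec{z}}
= i\beta\left(\nabla\varphi\times\hat{\vec{z}}
+ \hat{\vec{z}}\times\nabla\varphi\right)
= 0,
\]
so the numerator of the Rayleigh quotient vanishes for such trial fields even
though they are generally nonzero.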
Now, we merely need to find a transverse trial function such that the
variational upper bound is below the light line of the cladding, which
will guarantee a guided fundamental mode. For a homogeneous,
isotropic cladding $\varepsilon_c$, the light line is simply $\omega_c^2/c^2 =
\beta^2 / \varepsilon_c$, and so the condition for guided modes becomes:
\begin{multline}
\varepsilon_c \int \vec{H}^* \cdot \hat{\Theta}_\beta \vec{H} dxdy
- \beta^2 \int \vec{H}^* \cdot \vec{H} dxdy
\\
= \varepsilon_c \int \frac{1}{\varepsilon} \left\| \nabla_\beta \times \vec{H} \right\|^2 dxdy
- \beta^2 \int \left\| \vec{H} \right\|^2 dxdy
< 0 ,
\label{eq:homog-var-condition}
\end{multline}
where in the second line we have integrated by parts.
The problem of bound states in quantum mechanics is conceptually very
similar. There, given a potential function $V(x,y)$ in two dimensions
with $\int|V|<\infty$, one wishes to show that $\int V < 0$
(attractive) implies the existence of a bound state: an eigenfunction
of the Schr{\"o}dinger operator $-\nabla^2 + V$ with eigenvalue
(energy) $< 0$. Again, this is a Hermitian eigenproblem and there is
a variational theorem~\cite{Tannoudji77}, so one merely needs to find
some trial wavefunction $\psi$ for which the Rayleigh quotient is
negative in order to obtain a bound state. In one dimension, finding
such a trial function is simple---for example, an exponentially
decaying function $e^{-\alpha |x|}$ (or a Gaussian $e^{-\alpha x^2}$)
will work for sufficiently small $\alpha$---and the proof is sometimes
assigned as an undergraduate homework problem~\cite{Haar64}. In two
dimensions, however, finding a trial function is more difficult---in
fact, no function of the form $f(\alpha r)$ (where $r$ is the radius
$\sqrt{x^2 + y^2}$) will work (without more knowledge of the explicit
solution for $V$)~\cite{Yang89}---and the earliest proofs of the existence of
bound modes used more complicated, non-variational
methods~\cite{Simon76, Economou06}. However, an appropriate trial
function for a variational proof was eventually
discovered~\cite{Picq82,Bamberger90}, and later a simpler trial
function $e^{-(r+1)^\alpha}$ was proposed independently by Yang and
de~Llano~\cite{Yang89}.
In the present electromagnetic case, we found that the following trial
function, inspired by the quantum case above~\cite{Yang89}, works.
That is, we can prove the existence of guided modes for a
homogeneous cladding using the following trial function, in cylindrical
$(r,\phi)$ coordinates:
\begin{equation}
\vec{H} = \hat{\vec{r}} \gamma \cos\phi
-\hat{\boldsymbol{\phi}} \left(r\gamma\right)' \sin\phi ,
\label{eq:homog-trial}
\end{equation}
where
\begin{equation}
\gamma = \gamma(r) = e^{1-(r^2+1)^\alpha}
\label{eq:gamma-def}
\end{equation}
for some $\alpha > 0$, and $(r\gamma)'$ is the derivative with respect
to $r$. Clearly, $\vec{H}$ in \eqref{homog-trial} reduces to an
$\hat{\vec{x}}$-polarized plane wave propagating in the
$\hat{\vec{z}}$ direction as $\alpha\to0$ (and hence $\gamma\to1$).
This is a key property of the trial function: in the limit of no
localization ($\alpha = 0$, $\Delta\varepsilon = 0$) it should recover a
fundamental (lowest-$\omega$) solution of the infinite cladding.
Also, by construction, it satisfies the transversality condition
\eqnumref{transversality} (which is why we chose this particular
form). We chose $\gamma$ slightly differently from
\citeasnoun{Yang89} for convenience only (to make sure it is
differentiable at the origin and goes to $1$ for $\alpha\to 0$). For
future reference, the first two $r$ derivatives of $\gamma$ are:
\begin{align}
\gamma' &= -2\alpha r(r^{2}+1)^{\alpha-1} \gamma, \label{eq:gammap} \\
\gamma'' &= 2\alpha (r^2 + 1)^{\alpha - 1} \gamma \left[-1 +
2\alpha r^2(r^2+1)^{\alpha-1} + 2(1-\alpha)r^2(r^2+1)^{-1}
\right] , \label{eq:gammapp}
\end{align}
and are plotted along with $\gamma$ in \figref{gamma}.
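As a quick check of the limiting behaviour noted above, setting
$\gamma\equiv1$ gives $(r\gamma)'=1$, and hence
\[
\vec{H} = \hat{\vec{r}}\cos\phi-\hat{\boldsymbol{\phi}}\sin\phi = \hat{\vec{x}},
\]
recovering the $\hat{\vec{x}}$-polarized plane-wave envelope of the
homogeneous cladding.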
\begin{figure}[t]
\begin{center}\includegraphics[width=0.7\columnwidth]{gamma}\end{center}
\caption{\label{fig:gamma}Plot of $\gamma$ [\eqref{gamma-def}],
$\gamma'/\alpha$ [\eqref{gammap}], and $\gamma''/\alpha$
[\eqref{gammapp}] versus $r$ for $\alpha=0.1$. All three functions go
to zero for $r\to\infty$, with no extrema other than those shown.}
\end{figure}
What remains is, in principle, merely a matter of algebra to verify
that this trial function, for sufficiently small $\alpha$, satisfies
the variational condition \eqnumref{homog-var-condition}. In
practice, some care is required in appropriately bounding each of the
integrals and in taking the limits in the proper order, and we
review this process below.
We substitute the trial function \eqnumref{homog-trial} for $\vec{H}$
into the left-hand side of \eqref{homog-var-condition}:
\begin{equation}
\begin{split}
&\varepsilon_c\int \frac{1}{\varepsilon({\vec{r}})} \left\|\left(\nabla+i\beta\hat{z}\right)\times\vec{H}(r,\phi)\right\|^2 d^2\vec{r}-\beta^2\int \|\vec{H}\|^2 d^2\vec{r}\\
&=\varepsilon_c\int\left(\frac{1}{\varepsilon_c}+\Delta(\vec{r})\right)\left\|\hat{z} \frac{1}{r}\left[\frac{\partial}{\partial r}(rH_\phi)-\frac{\partial H_r}{\partial \phi}\right]+i\beta\hat{z}\times {\vec{H}}\right\|^2d^2\vec{r}-\beta^2\int {\|\vec{H}\|^2} d^2\vec{r}\\
&=\varepsilon_c\int\left(\frac{1}{\varepsilon_c}+\Delta(\vec{r})\right)\left(\frac{\sin^2\phi}{r^2}\left\{\left[r(r\gamma)'\right]'-\gamma\right\}^2+\beta^2\|\vec{H}\|^2\right)d^2\vec{r}-\beta^2\int {\|\vec{H}\|^2} d^2\vec{r}\\
&= \varepsilon_c\int \left(\frac{1}{\varepsilon_c}+\Delta(\vec{r})\right)
\frac{\sin^2\phi}{r^2}\left(3r\gamma'+r^2\gamma''\right)^2 d^2\vec{r}
+ \varepsilon_c\int\beta^2\Delta(\vec{r}) \|\vec{H}\|^2d^2\vec{r}\\
\end{split}
\label{eq:homog-var-condition2}
\end{equation}
We proceed to show that the last line of the above expression is negative in
the limit $\alpha\to0$, thus satisfying the condition for the
existence of bound modes. We first examine the second term of
\eqref{homog-var-condition2}:
\begin{equation}
\lim_{\alpha\to{0}}\int\beta^2\Delta(\vec{r})\|\vec{H}\|^2 d^2\vec{r} =
\beta^2 \int \Delta(\vec{r}) d^2\vec{r} < 0 .
\label{eq:uniform-conv}
\end{equation}
The key fact here is that we are able to interchange the $\alpha\to0$
limit and the integral in this case, thanks to Lebesgue's Dominated
Convergence Theorem (LDCT)~\cite{Hewitt65}: whenever the absolute value
of the integrand is bounded above (for sufficiently small $\alpha$) by
an $\alpha$-independent function with a finite integral, LDCT guarantees
that the $\alpha\to0$ limit can be interchanged with the integral. In
particular, the absolute value of this integrand is bounded above by
$|\Delta|$ multiplied by some constant (since $|\vec{H}|$ is bounded by
a constant: $|\gamma|\le 1$ and $|r\gamma'|$ is also easily seen to be
bounded above for sufficiently small $\alpha$), and $|\Delta|$ has a
finite integral by assumption. Since $\lim_{\alpha\to0} |\vec{H}|^2 =
1$, we obtain \eqref{homog-cond}, which is negative by assumption.
Now we must show that the remaining first term of
\eqref{homog-var-condition2} goes to zero as $\alpha\to0$, completing
our proof. This term is proportional to $(\varepsilon_c^{-1} + \Delta)$, but
the $\Delta$ terms trivially go to zero by the same arguments as
above: $\Delta$ allows the limit to be interchanged with the
integration by LDCT, and as $\alpha\to0$ the $\gamma'$ and $\gamma''$ terms go
to zero. The remaining $\varepsilon_c^{-1}$ terms can be bounded above by a
sequence of inequalities as follows:
\begin{equation}
\begin{split}
&\lim_{\alpha\to 0}\int_0^\infty \int_0^{2\pi}
\frac{\sin^2\phi}{r^2}(3r\gamma'+r^2\gamma'')^2 r\,dr\,d\phi
\\
&= 16\pi
\lim_{\alpha\to 0} \int_0^\infty \alpha^2r^3(r^2+1)^{2\alpha-2}\gamma^2\left[-2+\alpha r^2(r^2+1)^{\alpha-1}
+(1-\alpha)r^2(r^2+1)^{-1}\right]^2dr
\\
&\le 16\pi
\lim_{\alpha\to 0} \int_0^\infty \alpha^2 r (r^2+1)^{2\alpha-1}\gamma^2\left[2+\alpha (r^2+1)^{\alpha}
+(1-\alpha)\right]^2dr
\\
&= 16\pi\lim_{\alpha\to 0} \int_1^\infty \alpha^2 t^{4\alpha-1}
e^{2-2t^{2\alpha}} \left[ (3-\alpha) + \alpha t^{2\alpha} \right]^2 dt
\\
&\le 8\pi \lim_{\alpha\to 0}\int_0^\infty \alpha u \, e^{2-2u}
\left[(3-\alpha) + \alpha u\right]^2 du
\\
&= 8\pi e^2\lim_{\alpha\to 0} \alpha \left[\frac{3}{8}\alpha^2
+\frac{1}{2}\alpha(3-\alpha)+\frac{1}{4}\left(3-\alpha\right)^2\right] = 0 .
\end{split}
\label{eq:gamma-integral-limit}
\end{equation}
From the first to second line, we substituted
\eqreftwo{gammap}{gammapp} and simplified. From the second to third
line, we bounded the integrand above by flipping negative terms into
positive ones and replacing $r^2$ with $r^2+1$. From the third to the
fourth line, we made a change of variables $t^2=r^2+1$. Then, from the
fourth to fifth line, we made another change of variable
$u=t^{2\alpha}$, and bounded the integral above by changing the lower
limit from $u=1$ to $u=0$. The final integral can be performed exactly
and goes to zero, completing the proof.
\section{General periodic claddings}
\label{sec:general}
In the previous section we considered $z$-invariant waveguides with a
homogeneous cladding and isotropic materials (for example,
conventional optical fibers). We now generalize the proof in three ways,
by allowing:
\begin{itemize}
\item transverse periodicity in the cladding material (photonic-crystal fibers),
\item a core and cladding that are periodic in $z$ with period $a$
($a\to0$ for the $z$-invariant case),
\item anisotropic $\varepsilon_c$ and $\Delta\varepsilon$ materials ($\varepsilon$ is a $3\times3$
positive-definite Hermitian matrix).
\end{itemize}
In particular, we consider dielectric functions of the form:
\begin{equation}
\varepsilon(x,y,z) = \varepsilon_c(x,y,z) + \Delta\varepsilon(x,y,z),
\end{equation}
where the cladding dielectric tensor $\varepsilon_c(x,y,z)=\varepsilon_c(x,y,z+a)$ is
$z$-periodic and also periodic in the $xy$ plane (with an arbitrary
unit cell and lattice), and the core dielectric tensor change
$\Delta\varepsilon(x,y,z)=\Delta\varepsilon(x,y,z+a)$ is $z$-periodic with the \emph{same}
period $a$. Both $\varepsilon_c$ and the total $\varepsilon$ must be
positive-definite Hermitian tensors. As defined in \eqref{Delta}, we
denote by $\Delta$ the change in the inverse dielectric tensor.
Similar to the isotropic case, we require that $\int|\Delta_{ij}|$ be
finite for integration over the $xy$ plane and one period of $z$, for
every tensor component $\Delta_{ij}$. We also require that
the components of $\varepsilon_c^{-1}$ be bounded above.
In the homogeneous-cladding case, any mode that lies beneath the
(linear) light line of the cladding is guided. We have shown that such
a mode \emph{always} exists, for all $\beta$, under the condition of
\eqref{homog-cond}, by showing that the variational upper bound on its
frequency lies below the light line. In the case of a periodic
cladding, the light line is the dispersion relation of the fundamental
space-filling mode of the cladding, which corresponds to the
lowest-frequency mode at each given propagation constant $\beta$
\cite{Russell03,Bjarklev03,Zolla05,Joannopoulos08}. This light ``line'' is,
in general, no longer straight, and there are mechanisms for guidance
that are not available in the previous case, such as bandgap
guidance~\cite{Russell03,Bjarklev03,Zolla05,Joannopoulos08}. Bandgap-guided
modes may exist above the light line and are, in general, not
cutoff-free because the gap has a finite bandwidth. Here, we
\emph{only} consider index-guided modes, which are guided because they
lie below the light line. We will follow the same general procedure as
in the previous section to derive the sufficient condition
[\eqref{general-cond}] to guarantee the existence of guided modes.
The homogeneous-cladding case is then a special case of this more
general theorem, recovering \eqref{homog-cond} (but generalizing it to
$z$-periodic cores), where in that case the cladding fundamental mode
$\vec{D}_c$ is a constant and can be pulled out of the integral. The
case of a $z$-homogeneous fiber is just the special case $a\to0$,
eliminating the $z$ integral in \eqref{general-cond}.
The proof is similar in spirit to that of the homogeneous-cladding
case. At each $\beta$, the eigenmodes $\vec{H}(x,y,z) e^{i\beta z
-i\omega t}$ satisfy the same Hermitian eigenproblem \eqnumref{eigen}
and transversality constraint \eqnumref{transversality} as before. We
have a similar variational theorem to
\eqref{variational-2d}~\cite{Joannopoulos08}, except that, in the case
of $z$-periodicity, we now integrate over one period in $z$ as well as
over $x$ and $y$.
\begin{equation}
\label{eq:variational}
\frac{\omega_\mathrm{min}^2(\beta)}{c^2} = \inf_{\nabla_\beta \cdot \vec{H} = 0}
\frac{\int\vec{H}^* \cdot \hat{\Theta}_\beta \vec{H}}
{\int\vec{H}^* \cdot \vec{H}}.
\end{equation}
As before, to prove the existence of a guided mode we will find a
trial function $\vec{H}$ such that this upper bound, called the
``Rayleigh quotient'' for $\vec{H}$, is below the light line
$\omega_c(\beta)^2 / c^2$. The corresponding condition on $\vec{H}$
can be written [similar to \eqref{homog-var-condition}]:
\begin{equation}
\int \vec{H}^* \cdot \hat{\Theta}_\beta \vec{H}
- \frac{\omega_c^2(\beta)}{c^2} \int \vec{H}^* \cdot \vec{H} < 0.
\label{eq:gen-var-condition}
\end{equation}
We considered a variety of trial functions, inspired by the Yang and
de~Llano quantum case~\cite{Yang89}, before finding the following
choice that allows us to prove the condition
\eqnumref{gen-var-condition}. Similar to \eqref{homog-trial}, we want
a slowly decaying function proportional to $\gamma(r) =
e^{1-(r^2+1)^\alpha}$, from \eqref{gamma-def}, that in the
$\alpha\to0$ (weak guidance) limit approaches the cladding fundamental
mode $\vec{H}_c$. As before, the trial function must be transverse
($\nabla_\beta \cdot \vec{H} = 0$), which motivated us to write the
trial function in terms of the corresponding vector potential. We
denote by $\vec{A}_c$ the vector potential corresponding to the
cladding fundamental mode $\vec{H}_c = \nabla_\beta \times \vec{A}_c$.
In terms of $\vec{A}_c$ and $\gamma$, our trial function is then:
\begin{equation}\label{eq:general-trial}
\vec{H}=\nabla_\beta\times
\left(\gamma\vec{A}_c\right)=\gamma\vec{H}_c+\nabla\gamma\times
\vec{A}_c .
\end{equation}
For convenience, we choose $\vec{A}_c$ to be Bloch-periodic (like
$\vec{H}_c$, since $\vec{A}_c$ also satisfies a periodic Hermitian
generalized eigenproblem and hence Bloch's theorem
applies).\footnote{Alternatively, it is straightforward to show that
the Coulomb gauge choice, $\nabla_\beta \cdot \vec{A}_c = 0$, gives a
Bloch-periodic $\vec{A}_c$, by explicitly constructing the
Fourier-series components of $\vec{A}_c$ in terms of those of
$\vec{H}_c$.} In contrast, our previous homogeneous-cladding trial
function [\eqref{homog-trial}] corresponds to a different gauge choice
with an unbounded vector potential $\vec{A}_c = -\frac{1}{i\beta}\hat{\vec{y}} +
\nabla_\beta\psi$, differing from a constant vector potential via the
gauge function $\psi = \frac{r}{i\beta}\sin\phi+e^{-i\beta z}$.
Substituting \eqref{general-trial} into the left hand side of our new
guidance condition \eqnumref{gen-var-condition}, we obtain five
categories of terms to analyze:
\begin{enumerate}[(i)]
\item terms that contain $\Delta=\varepsilon^{-1}-\varepsilon_c^{-1}$,
\item terms that cancel due to the eigenequation \eqnumref{eigen},
\item terms that have one first derivative of $\gamma$,
\item terms that have $(\gamma')^2$,
\item terms that have $\gamma'\gamma''$ or $(\gamma'')^2$.
\end{enumerate}
Category~(i) will give us our condition for guided modes,
\eqref{general-cond}, while category~(ii) will be cancelled exactly in
\eqref{gen-var-condition}. We show in the appendix that all of the
terms in category~(iii) exactly cancel one another. The terms in
categories (iv) and (v) all vanish in the $\alpha\to0$ limit; we
distinguish them because category~(v) turns out to be easier to
analyze. There are no terms with $\gamma''$ alone, as these can be
integrated by parts to obtain category (iii) and (iv) terms. In the
appendix, we provide an exhaustive listing of all the terms and how
they combine as described above. In this section, we only outline the
basic structure of this algebraic process, and explain why the
category (iv) and (v) terms vanish as $\alpha\to0$.
Category~(i) consists only of one term:
\begin{equation}
\begin{split}
&\lim_{\alpha\rightarrow 0}\int \vec{H}^*\cdot\left(\nabla_{\beta}\times \Delta \nabla_{\beta}\times \vec{H}\right)\\
=&\int \vec{H}_c^*\cdot\left(\nabla_{\beta}\times \Delta \nabla_{\beta}\times \vec{H}_c\right)\\
=&\int \left(\nabla_{\beta}\times \vec{H}_c\right)^*\cdot\Delta \left(\nabla_{\beta}\times \vec{H}_c\right)\\
=& \frac{\omega_c^2}{c^2}\int \vec{D}_c^*\cdot\Delta \vec{D}_c\\
\end{split}
\label{eq:uniform-conf-general}
\end{equation}
From the first to the second line, we interchanged the limit with the
integration, thanks to the LDCT condition as in \secref{homogeneous},
since the magnitudes of all of the terms in the integrand are bounded
above by the tensor components $|\Delta_{ij}|$ multiplied by some
$\alpha$-independent constants, and $|\Delta_{ij}|$ has a finite
integral by assumption. (In particular, the $\vec{A}_c$ fundamental
mode and its curls are bounded functions, being Bloch-periodic, and
$\gamma$ and its first two derivatives are bounded for sufficiently
small $\alpha$.) The result is precisely the left-hand side of
\eqref{general-cond}, which is negative by assumption.
Next, we would like to cancel $-\frac{\omega_c^2}{c^2}\int
\vec{H}^*\cdot \vec{H}$ by the eigen-equation \eqnumref{eigen}. Thus,
we examine the term $\int
\vec{H}^*\cdot\left(\nabla_{\beta}\times\varepsilon_c^{-1}\gamma\nabla_{\beta}\times{\vec{H}_c}\right)$
(which comes from the term where the right-most curl falls on
$\vec{H}_c$ rather than $\gamma$) below:
\begin{equation}
\begin{split}
&\int \vec{H}^*\cdot \left(\nabla_{\beta}\times \varepsilon_c^{-1}\gamma\nabla_{\beta}\times{\vec{H}_c}\right) \\
=&\int \vec{H}^*\cdot \left(\gamma\nabla_{\beta}\times\varepsilon_c^{-1}\nabla_{\beta}\times \vec{H}_c+\left(\nabla\gamma\right)\times \varepsilon_c^{-1}\nabla_{\beta}\times \vec{H}_c\right)\\
=&\int \vec{H}^*\cdot \gamma\frac{\omega_c^2}{c^2} \vec{H}_c+\int \vec{H}^*\cdot\left(\nabla\gamma\times \varepsilon_c^{-1}\nabla_{\beta}\times \vec{H}_c\right)\\
=&\int \vec{H}^*\cdot \frac{\omega_c^2}{c^2} \vec{H}-\int \vec{H}^*\cdot \frac{\omega_c^2}{c^2}\nabla\gamma\times \vec{A}_c+\int \vec{H}^*\cdot\left(\nabla\gamma\times \varepsilon_c^{-1}\nabla_{\beta}\times \vec{H}_c\right)\\
\end{split}
\label{eq:eigen-cancellation}
\end{equation}
From the second to the third lines, we used the eigenequation
\eqnumref{eigen}, and from the third to the fourth lines we used the
definition \eqnumref{general-trial} of $\vec{H}$ in terms of
$\vec{H}_c$. The first term of the last line above cancels
$-\frac{\omega_c^2}{c^2}\int \vec{H}^*\cdot \vec{H}$ in
\eqref{gen-var-condition}. The second and third terms contain two
category~(iii) terms: $\frac{\omega_c^2}{c^2}\int \gamma
\vec{H}_c\cdot\left(\nabla\gamma\times\vec{A}_c\right)$ and
$i\omega_c \int \gamma\nabla\gamma \cdot \vec{E}_c \times
\vec{H}_c^*$, both of which will be exactly cancelled as described in
the appendix, as well as some category (iv) and (v) terms.
The category~(iv) integrands are all of the form $(\gamma')^2$
multiplied by some bounded function (a product of the various
Bloch-periodic fields as well as the bounded $\varepsilon_c^{-1}$). This
integrand can then be bounded above by replacing the bounded function
with the supremum $B$ of its magnitude, at which point the integral is
bounded above by $2\pi B \int_0^\infty (\gamma')^2 r\,dr$. However,
such integrands were among the terms we already analyzed in the
homogeneous-cladding case, in \eqref{gamma-integral-limit}, and we
explicitly showed that such integrals go to zero as $\alpha\to0$.
The category~(v) integrands could also be explicitly shown to vanish
as $\alpha\to0$, similar to \eqref{gamma-integral-limit}, but a
simpler proof of the same fact can be constructed via the LDCT
condition. In particular, similar to the previous paragraph, after
replacing bounded functions with their suprema we are left with
cylindrical-coordinate integrands of the form $\gamma'\gamma''r$ and
$(\gamma'')^2 r$. Both of these integrands, however, are bounded
above by an $\alpha$-independent function with a finite integral, and
hence LDCT allows us to put the $\alpha\to0$ limit inside the integral
and set the integrands to zero. Specifically, by inspection of
\eqreftwo{gammap}{gammapp}, $|\gamma'\gamma''|r < 4r^2(1+2+2) /
(r^2+1)^{2-\delta}$ and $(\gamma'')^2 r < 4r (1+2+2)^2 /
(r^2+1)^{2-\delta}$ for $\alpha<\delta/4$, and both of these upper
bounds have finite integrals, if we take $\delta$ to be some number $<
1/2$, since they decay faster than $1/r$.
In summary, we have shown that, if \eqref{general-cond} is satisfied,
then the variational upper bound for our trial function
[\eqref{general-trial}] is below the light line, and therefore an
index-guided mode is guaranteed to exist. The special cases of this
theorem, as discussed in the introduction, immediately follow.
\section{Substrates, dispersive materials, and finite-size effects}
\label{sec:substrates-etc}
In this section, we briefly discuss several situations that lie
outside of the underlying assumptions of our theorem: waveguides
sitting on substrates, dispersive ($\omega$-dependent) materials, and
finite-size claddings.
An optical fiber is completely surrounded by a single cladding
material, but the situation is quite different in integrated optical
waveguides. There, it is common to have an asymmetrical cladding,
with air above the waveguide and a low-index material (e.g. oxide)
below the waveguide, such as in strip or ridge waveguides~\cite{Hunsperger82, Saleh91, Chen06}.
In such cases, it is well known that the fundamental guided mode has a
low-frequency cutoff even when the waveguide consists of strictly
nonnegative $\Delta\varepsilon$~\cite{Hunsperger82, Chen06}. This does not contradict our theorem
because we required the cladding to be periodic in both transverse
directions, whereas a substrate is not periodic in the vertical
direction.
We have also assumed non-dispersive materials in our proof. What
happens when we have more realistic, dispersive materials? Suppose
that $\varepsilon$ depends on $\omega$ but has negligible absorption (so that
guided modes are still well-defined). For a given $\omega$, we can
construct a frequency-independent $\varepsilon$ structure matching the actual
$\varepsilon$ at that $\omega$, and apply our theorem to determine whether
there are guided modes at $\omega$. The simplest case is when
$\Delta\varepsilon\geq0$ for all $\omega$, in which case we must still obtain
cutoff-free guided modes. The theorem becomes more subtle to apply
when $\Delta\varepsilon < 0$ in some regions, because not only must one perform the
integral of \eqref{general-cond} to determine the existence of guided
modes, but the condition~\eqnumref{general-cond} is for a fixed
$\beta$ while the integrand is for a given frequency, and the
frequency of the guided mode is unknown {\it a priori}.
Finally, any real structure has a finite cladding. Both numerically
and experimentally, this makes it difficult to study the
long-wavelength regime because the modal diameter increases rapidly
with wavelength (i.e. the frequency approaches the light line and the
transverse decay rate becomes very slow)---in fact, it seems likely
that the modal diameter will increase \emph{exponentially} with the
wavelength. In quantum mechanics (scalar waves) with a potential well
of depth $V$, the decay length of the bound mode increases as
$e^{C/V}$ when $V\to0$, for some constant $C$~\cite{Simon76,Yang89}.
In electromagnetism, for the long wavelength limit, a homogenized
effective-medium $\tilde{\varepsilon}$ description of the structure becomes
applicable~\cite{Smith05}, and in this effective near-homogeneous
limit the modes are described by a scalar wave equation with a
``potential'' $-\omega^2 \Delta\tilde{\varepsilon}$~\cite{Jackson98}, and
hence the quantum analysis should apply. Thus, by this informal
argument, we would expect the modal diameter to expand proportional to
$e^{C\lambda^2}$ for some constant $C$ (where $\lambda = 2\pi
c/\omega$ is the vacuum wavelength), but a more explicit proof would
be desirable.
\section{Concluding remarks}
\label{sec:conclusion}
We have demonstrated sufficient conditions for the existence of
cutoff-free guided modes for general microstructured dielectric
fibers, periodic in either or both the $z$ direction and in the
transverse plane. The results are a generalization of previous results
on the existence of such modes in fibers with a homogeneous cladding
index. Our theorem allows one to understand the guidance in many very
complicated structures analytically, and enables one to rigorously
guarantee guided modes in many structures (especially those where
$\Delta\varepsilon \geq 0$ everywhere) by inspection. There remain a number of
interesting questions for future study, however, some of which we
outline below.
Our \eqref{general-cond} is a \emph{sufficient} condition for
index-guided modes, but it is certainly not \emph{necessary} in
general: even when it is violated, one can have guided modes with a
cutoff (as for W-profile fibers~\cite{Kawakami74} or waveguides on
substrates~\cite{Hunsperger82, Chen06}), or other types of guided modes (such as
bandgap-guided
modes~\cite{Russell03,Bjarklev03,Zolla05,Joannopoulos08}). However,
these other types of guided modes in dielectric waveguides have a
long-wavelength cutoff, so one can pose the question: is
\eqref{general-cond} a necessary condition for \emph{cutoff-free}
guided modes (where $\vec{D}_c$ is given by the long-wavelength limit
of the cladding fundamental mode) in dielectric waveguides (as opposed
to TEM modes in metallic coaxial waveguides, which also have no
cutoff~\cite{Kong75})? Based on theoretical reasoning and some
numerical evidence, we suspect that the answer is \emph{no}, but that
it may be possible to modify \eqref{general-cond} to obtain a
necessary condition. In particular, the variational theorem is closely
related to first-order perturbation theory: if one has a small
perturbation $\Delta\varepsilon$ and substitutes the unperturbed field into the
Rayleigh quotient, the result is the first-order perturbation in the
eigenvalue. However, when $\Delta\varepsilon$ is large, even if the volume of the
perturbation is small, perturbation theory requires a correction due
to the electric-field discontinuity at the
interface~\cite{Johnson05:bump}. In the long-wavelength limit,
perturbation theory is corrected by computing the quasi-static
polarizability of the perturbation~\cite{Johnson05:bump}, and we
conjecture that a similar correction to our trial field may allow one
to derive a necessary condition for the absence of a cutoff.
\Eqref{general-cond} is still a sufficient condition (the variational
theorem still holds even with a suboptimal trial function), but the
preceding considerations predict that it will become farther from a
necessary condition for the absence of a cutoff as $\Delta\varepsilon$ is
increased, and this prediction seems to be confirmed by preliminary
numerical experiments with W-profile fibers.
Let us also mention five other interesting directions to pursue.
First, \citeasnoun{Bamberger90} actually proved a somewhat stronger
condition than \eqref{general-cond} for homogeneous claddings, in that
they showed the existence of guided modes when the integral was $\leq
0$ (and $\Delta\varepsilon >0$ in some region) rather than $< 0$ as in our
condition. Although the $=0$ case seems unlikely to be experimentally
or numerically significant, we suspect that a similar generalization
should be possible for our theorem (re-weighting the integrand to make
it negative and then taking a limit as in \citeasnoun{Bamberger90}).
Second, as discussed in \secref{substrates-etc}, it would be desirable
to develop a sufficient condition at a fixed $\omega$ rather than at a
fixed $\beta$, although we are not sure whether this is possible.
Third, one would like a more explicit confirmation of the argument, in
\secref{substrates-etc}, that the modal diameter should asymptotically
increase exponentially with the square of the wavelength. Fourth, it
might be interesting to consider the case of ``Bragg fiber''
geometries consisting of ``periodic'' sequences of concentric
layers~\cite{Yeh78}, which are not strictly periodic because the layer
curvature decreases with radius. Finally, as we mentioned in
\secref{theorem}, it is possible to extend the theorem to a condition
for \emph{two} guided modes in many cases where the cladding
fundamental mode is doubly degenerate, and we are currently preparing
another manuscript describing this result along with conditions for
truly single-mode (``single-polarization'') waveguides.
\section*{Acknowledgements}
This work was supported in part by the US Army Research Office under
contract number W911NF-07-C-0002. The information does not necessarily
reflect the position or the policy of the Government and no official
endorsement should be inferred. We are also grateful to
M.~Ghebrebrhan and G.~Staffilani at MIT for helpful discussions.
\section*{Appendix: All Rayleigh-quotient terms}
In this appendix, we provide an exhaustive listing of all the terms
that appear when the trial function [\eqref{general-trial}] is
substituted into \eqref{gen-var-condition} (the condition to be
satisfied, a rearrangement of the Rayleigh quotient bound). Since the
terms that contain $\Delta$ [category~(i)] were already fully
analyzed in \secref{general} (since for these terms the limits could
be trivially interchanged), we consider only the remaining terms
involving $\varepsilon_c(\vec{r})$. More specifically, the only non-trivial term to
analyze is the $\Delta$-free part of the left-most integral in
\eqref{gen-var-condition}:
\begin{equation}
\begin{split}
&\int \vec{H}^*\cdot \left(\nabla_{\beta}\times \varepsilon_c^{-1}\nabla_{\beta}\times\vec{H}\right) \\
&=\int \vec{H}^*\cdot \left(\nabla_{\beta}\times \varepsilon_c^{-1}\gamma\nabla_{\beta}\times{\vec{H}_c}\right)+\int \vec{H}^*\cdot \left(\nabla_{\beta}\times \varepsilon_c^{-1}\nabla\gamma\times{\vec{H}_c}\right)\\
&\quad{}+\int \vec{H}^*\cdot \left(\nabla_{\beta}\times \varepsilon_c^{-1}\nabla_\beta\times\left(\nabla\gamma\times{\vec{A}_c}\right)\right) .
\end{split}
\end{equation}
We have already seen, in \eqref{eigen-cancellation}, that the first
term breaks down into a term that cancels $\frac{\omega_c^2}{c^2}\int
\vec{H}^*\cdot\vec{H}$ in \eqref{gen-var-condition}, via the
eigen-equation, and two other terms. Removing the terms cancelled by
the eigenequation, and substituting $-i\frac{\omega_c}{c}\vec{E}_c$
for $\varepsilon_c^{-1} \nabla_\beta\times\vec{H}_c$ (Amp{\`e}re's law), we have:
\begin{equation}
\begin{split}
&\quad{}-\frac{\omega_c^2}{c^2}\int \vec{H}^*\cdot\left(\nabla\gamma\times \vec{A}_c\right)+\int \vec{H}^*\cdot\left[\nabla\gamma\times \left(-i\frac{\omega_c}{c} \vec{E}_c\right)\right]\\
&\quad{}+\int \vec{H}^*\cdot \nabla_{\beta}\times\varepsilon_c^{-1}\left[\nabla\gamma\times \vec{H}_c +\nabla_{\beta}\times\left(\nabla\gamma\times \vec{A}_c\right)\right]\\
&=-\frac{\omega_c^2}{c^2}\int\left[\gamma \vec{H}_c+\nabla\gamma\times \vec{A}_c\right]^*\cdot \left(\nabla\gamma\times \vec{A}_c\right)-i\frac{\omega_c}{c}\int \gamma \nabla\gamma\cdot\left(\vec{E}_c\times \vec{H}_c^*\right)\\
&\quad{}-i\frac{\omega_c}{c} \int \left(\nabla\gamma\times \vec{A}_c\right)^*\cdot\left(\nabla\gamma\times \vec{E}_c\right)+\int \left(\gamma \nabla_\beta\times \vec{H}_c\right)^*\cdot \varepsilon_c^{-1}\left[\nabla\gamma\times \vec{H}_c +\nabla_{\beta}\times\left(\nabla\gamma\times \vec{A}_c\right)\right]\\
&\quad{}+\int \left(\nabla\gamma\times \vec{H}_c\right)^*\cdot \varepsilon_c^{-1}\left[\nabla\gamma\times \vec{H}_c +\nabla_{\beta}\times\left(\nabla\gamma\times \vec{A}_c\right)\right]\\
&\quad{}+\int\left(\nabla_\beta\times\nabla\gamma\times \vec{A}_c\right)^*\cdot\varepsilon_c^{-1}\left[\nabla\gamma\times \vec{H}_c +\nabla_{\beta}\times\left(\nabla\gamma\times \vec{A}_c\right)\right]\\
&=-\frac{\omega_c^2}{c^2}\int\gamma \vec{H}_c^*\cdot\left(\nabla\gamma\times \vec{A}_c\right)-\frac{\omega_c^2}{c^2}\int\left\|\nabla\gamma\times \vec{A}_c\right\|^2-i\frac{\omega_c}{c}\int \gamma \nabla\gamma\cdot\left(\vec{E}_c\times \vec{H}_c^*\right)\\
&\quad{}-i\frac{\omega_c}{c} \int \left(\nabla\gamma\times \vec{A}_c\right)^*\cdot\left(\nabla\gamma\times \vec{E}_c\right)+i\frac{\omega_c}{c}\int\gamma \vec{E}_c^*\cdot \left(\nabla\gamma\times \vec{H}_c\right)\\
&\quad{}+i\frac{\omega_c}{c}\int\gamma\left(\nabla_\beta\times \vec{E}_c\right)^*\cdot\left(\nabla\gamma\times \vec{A}_c\right)+i\frac{\omega_c}{c}\int\left(\nabla\gamma\times \vec{E}_c\right)^*\cdot\left(\nabla\gamma\times \vec{A}_c\right)\\
&\quad{}+\int \left(\nabla\gamma\times \vec{H}_c\right)^*\cdot\varepsilon_c^{-1}\left(\nabla\gamma\times \vec{H}_c\right)\\
&\quad{}+\left(\int\left(\nabla\gamma\times \vec{H}_c\right)^*\cdot \varepsilon_c^{-1}\left[\nabla_{\beta}\times\left(\nabla\gamma\times \vec{A}_c\right)\right]+\mathrm{c.c.}\right)\\
&\quad{}+\int\left(\nabla_\beta\times\left(\nabla\gamma\times \vec{A}_c\right)\right)^*\cdot\varepsilon_c^{-1}\left(\nabla_\beta\times\left(\nabla\gamma\times \vec{A}_c\right)\right).
\end{split}
\end{equation}
Above, the first ``$=$'' step is obtained by substituting the trial
function for $\vec{H}$, integrating some of the $\nabla_\beta\times{}$
operators by parts, and distributing the derivatives of
$\gamma\vec{H}_c$ by the product rule. The second step is obtained by
using Amp{\`e}re's law again, combined with integrations by parts and the
product rule; ``c.c.'' stands for the complex conjugate of the
preceding expression. Continuing, we obtain:
\begin{equation}
\begin{split}
&=-\frac{\omega_c^2}{c^2}\int\gamma \vec{H}_c^*\cdot\left(\nabla\gamma\times \vec{A}_c\right)-\frac{\omega_c^2}{c^2}\int\left\|\nabla\gamma\times \vec{A}_c\right\|^2-2i\frac{\omega_c}{c}\int \gamma \nabla\gamma\cdot\Re\left\{\vec{E}_c\times \vec{H}_c^*\right\}\\
&\quad{}+\left(i\frac{\omega_c}{c} \int \left(\nabla\gamma\times \vec{A}_c\right)\cdot\left(\nabla\gamma\times \vec{E}_c\right)^*+\mathrm{c.c.}\right)+\frac{\omega_c^2}{c^2}\int\gamma \vec{H}_c^*\cdot\left(\nabla\gamma\times \vec{A}_c\right)\\
&\quad{}+\int \left(\nabla\gamma\times \vec{H}_c\right)^*\cdot\varepsilon_c^{-1}\left(\nabla\gamma\times \vec{H}_c\right) +\left(\int\left(\nabla\gamma\times \vec{H}_c\right)^*\cdot \varepsilon_c^{-1}\left[\nabla_{\beta}\times\left(\nabla\gamma\times \vec{A}_c\right)\right]+\mathrm{c.c.}\right)\\
&\quad{}+\int\left(\nabla_\beta\times\left(\nabla\gamma\times \vec{A}_c\right)\right)^*\cdot \varepsilon_c^{-1}\left(\nabla_\beta\times\left(\nabla\gamma\times \vec{A}_c\right)\right)\\
\end{split}
\end{equation}
In obtaining this expression, we have grouped terms into
complex-conjugate pairs and used Faraday's law to replace
$\nabla_\beta\times\vec{E}_c$ with $i\frac{\omega_c}{c}\vec{H}_c$. At
this point, we have two $\frac{\omega_c^2}{c^2}\int
\gamma\vec{H}_c^*\cdot\left(\nabla\gamma\times\vec{A}_c\right)$ terms
that exactly cancel. All of the remaining terms, except for
$-2i\frac{\omega_c}{c}\int \gamma
\nabla\gamma\cdot\Re\left\{\vec{E}_c\times \vec{H}_c^*\right\}$, are
multiples of two first or higher derivatives of $\gamma$,
corresponding to category~(iv) and~(v) terms, which we proved to vanish in
\secref{general}.
The only remaining term is the $\vec{E}_c\times\vec{H}_c^*$ term, in
category~(iii). This term is identically zero (for any $\alpha\geq0$)
because it is purely imaginary, whereas all of the other terms are
purely real and the overall expression must be real. More explicitly:
\begin{equation}
\begin{split}
&-2i\frac{\omega_c}{c}\int \gamma \nabla\gamma\cdot\Re\left\{\vec{E}_c\times \vec{H}_c^*\right\}\\
&\quad=-2i\frac{\omega_c}{2c}\int \nabla\gamma^2\cdot\Re\left\{\vec{E}_c\times \vec{H}_c^*\right\}\\
&\quad=-i\frac{\omega_c}{c}\int \nabla\cdot\left(\gamma^2\Re\left\{\vec{E}_c\times \vec{H}_c^*\right\}\right)+i\frac{\omega_c}{c}\int\gamma^2 \nabla\cdot\left(\Re\left\{\vec{E}_c\times \vec{H}_c^*\right\}\right)\\
\end{split}
\end{equation}
The first term of the last line is zero by the divergence theorem
(transforming it into a surface integral at infinity), since
$\gamma\to0$ at infinity. For the second term, the integrand is the
divergence of the time-average Poynting vector
$\Re\left\{\vec{E}_c\times \vec{H}_c^*\right\}$, which equals the
time-average rate of change of the energy density~\cite{Jackson98}, which is
identically zero for any lossless eigenmode (such as the cladding
fundamental mode).
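For completeness, the final manipulation above (splitting $\nabla\gamma^2\cdot\Re\{\vec{E}_c\times\vec{H}_c^*\}$ into a total divergence minus a $\gamma^2\,\nabla\cdot$ term) uses only the product rule for the divergence together with $2\gamma\nabla\gamma=\nabla\gamma^2$; a small symbolic check of these identities, using generic placeholder fields in Python/SymPy, is sketched below.
\begin{verbatim}
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
gamma = sp.Function('gamma')(x, y, z)
# P stands in for the (real) time-average Poynting vector Re{E x H*}.
P = sp.Matrix([sp.Function('P%d' % i)(x, y, z) for i in range(3)])

grad = lambda f: sp.Matrix([sp.diff(f, v) for v in (x, y, z)])
div = lambda F: sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))

lhs = div(gamma**2 * P)
rhs = grad(gamma**2).dot(P) + gamma**2 * div(P)
print(sp.simplify(lhs - rhs))                            # prints 0
print(sp.simplify(2*gamma*grad(gamma) - grad(gamma**2))) # prints the zero vector
\end{verbatim}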
\end{document} | {
"timestamp": "2008-03-19T18:23:15",
"yymm": "0803",
"arxiv_id": "0803.2850",
"language": "en",
"url": "https://arxiv.org/abs/0803.2850",
"abstract": "We derive a sufficient condition for the existence of index-guided modes in a very general class of dielectric waveguides, including photonic-crystal fibers (arbitrary periodic claddings, such as ``holey fibers''), anisotropic materials, and waveguides with periodicity along the propagation direction. This condition provides a rigorous guarantee of cutoff-free index-guided modes in any such structure where the core is formed by increasing the index of refraction (e.g. removing a hole). It also provides a weaker guarantee of guidance in cases where the refractive index is increased ``on average'' (precisely defined). The proof is based on a simple variational method, inspired by analogous proofs of localization for two-dimensional attractive potentials in quantum mechanics.",
"subjects": "Optics (physics.optics)",
"title": "Rigorous sufficient conditions for index-guided mode in microstructured dielectric waveguides",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9637799420543366,
"lm_q2_score": 0.7341195327172401,
"lm_q1q2_score": 0.7075296807031782
} |
https://arxiv.org/abs/1502.08014 | Localization theorems for matrices and bounds for the zeros of polynomials over a quaternion division algebra | In this paper, Ostrowski and Brauer type theorems are derived for the left and right eigenvalues of a quaternionic matrix. Generalizations of Gerschgorin type theorems are discussed for the left and the right eigenvalues of a quaternionic matrix. Thereafter a sufficient condition for the stability of a quaternionic matrix is given that generalizes the stability condition for a complex matrix. Finally, a characterization of bounds for the zeros of quaternionic polynomials is presented. | \section{Introduction}\label{s1}
This paper attempts to study localization theorems for matrices over a
quaternion division algebra, which include the Ostrowski, Brauer, and
Gerschgorin type of theorems. Bounds for the zeros of quaternionic polynomials
are also considered. Localization theorems for quaternionic matrices have
received much attention in the literature due to their applications in pure and applied
sciences, especially in quantum theory \cite{sla95, a99, jd02, arv89, t80, lw01,
gp02, l49, l14, rppv05, lr12, jw08, fz97, fz07, m99}. Unlike the case of matrices over the
field of complex numbers \cite{a46, s31, rc96, am37, r04}, localization theorems for
quaternionic matrices have been proposed for left and right eigenvalues separately
in \cite{fz07,wzcl08, lyj12}. Ostrowski and Brauer type theorems for the right
eigenvalues of a quaternionic matrix with all real diagonal entries have been introduced
in \cite{lyj12}. A Brauer type theorem for the left eigenvalues of a quaternionic matrix has
been considered in \cite[Theorem 4]{wzcl08}. Moreover, localization theorems for special
quaternionic matrices, for instance, central closed quaternionic matrices, have been
presented in \cite{wzcl08}.
In the first part of this paper, we provide a general framework for localization
theorems for quaternionic matrices. Let $M_n({\mathbb H})$ be the space of all $n \times n$
quaternionic matrices. Then, for any $A=(a_{ij}) \in M_n({\mathbb H}),$ we prove an Ostrowski type
theorem which states that all the left eigenvalues of $A$ are located in the
union of $n$ balls $T_i(A):= \{z \in {\mathbb H}:|z-a_{ii}| \leq r_i(A)^{\gamma} c_i(A)^{1-\gamma} \}$, where
$r_i(A):= \sum_{j=1,\, j\neq i}^n|a_{ij}|$ and $c_i(A):=
\sum_{j=1,\, j\neq i}^n |a_{ji}|,\,\,\forall\,\, \gamma \in [0,\,1]$. From this result, we deduce a
sufficient condition for the invertibility of a quaternionic matrix. We also prove that the Ostrowski
type theorem is valid for the right eigenvalues when all the diagonal entries of
the quaternionic matrix $A$ are real.
We find that the Brauer type theorem, proved in \cite[Theorem 5]{wzcl08} for the left eigenvalues in the case of deleted absolute column sums
of a quaternionic matrix, is incorrect, and we prove a corrected version. In addition, we derive some stronger results than \cite[Theorems 6, 7]{wzcl08} and \cite[Theorem 4.3]{lyj12}.
In fact, using the generalized H$\ddot{\mbox{o}}$lder inequality over the skew field of quaternions, we show that all
the left eigenvalues of $A=(a_{ij}) \in M_n({\mathbb H})$ are contained in
the union of $n$ generalized balls: $B_i(A) :=\{z\in {\mathbb H} : |z-a_{ii}|\le (n-1)^{\frac{1- \gamma}{q}} r_i(A)^{\gamma}
(n_i^{(p)}(A))^{1- \gamma}\}$, where $\gamma \in [0,\, 1],$ $ n_i^{(p)}(A):= \left( \sum_{j=1,\, j\neq i}^{n} |a_{ij}|^{p}\right)^{\frac{1}
{p}}$, for any $ p, q \in (1,\,\infty)$ with $\frac{1}{p}+ \frac{1}{q} = 1$. Further,
we prove that all the right eigenvalues of $A\in M_n({\mathbb H})$ with all real diagonal entries are contained in the
union of $n$ generalized balls $B_i(A).$ In the sequel, we present localization theorems for the right
eigenvalues of quaternionic matrices.
In the second part of this paper, we provide bounds for the zeros of quaternionic polynomials
using the aforementioned localization theorems. Recall that quaternionic polynomials in general
are expressed in the following forms
\begin{eqnarray}
p_l(z) &:=& q_m z^m+ q_{m-1} z^{m-1}+ \dots+q_1z+ q_0,\label{l}\\
p_r(z) &:=& z^m q_m+ z^{m-1}q_{m-1}+ \dots+zq_1+ q_0,\label{lk}
\end{eqnarray}
where $q_{j},\,\,z \in {\mathbb H},\,\, (0 \leq j \leq m).$ The polynomials ({\ref{l}}) and (\ref{lk}) are called simple,
and they are said to be monic if $q_m=1.$ Some recent developments on the location and computation of zeros of quaternionic polynomials
can be found in \cite{bt65, ddg10, dg10, sgv06, in41, go09, am04, rp01}.
As a consequence of the localization theorems for quaternionic matrices, we provide sharper bounds compared to the bound
introduced by G. Opfer in \cite{go09} for the zeros of quaternionic polynomials. Finally, we provide bounds for the zeros of
quaternionic polynomials in terms of powers of the companion matrices associated with the quaternionic polynomials (\ref{l}) and (\ref{lk}).
Some of our bounds are sharper than the bound from \cite{go09}.
The paper is organized as follows: Section \ref{s2} reviews some existing results from \cite{fz97, rpp08}.
Section \ref{ss3} discusses the Gerschgorin type, Ostrowski type, and Brauer type theorems for the left
and right eigenvalues of a quaternionic matrix. Section \ref{s4} explains bounds for the zeros of $p_l (z)$ and $p_r (z)$.
Comparisons are made with the bound provided in \cite{go09}. A sufficient condition for the stability of a
quaternionic matrix is also given. Section \ref{s6} introduces bounds for the zeros of the polynomials
$p_l (z)$ and $p_r (z)$ in terms of powers of their companion matrices.
Finally, Section \ref{cf} summarizes this work.
\section{Preliminaries}\label{s2}
{\bf Notation:} Throughout the paper, ${\mathbb R}$ and ${\mathbb C}$ denote the fields of real and complex numbers, respectively.
The set of real quaternions is defined by $${\mathbb H}:= \left\{ q= a_0 + a_1{\bf{i}} +
a_2{\bf{j}} + a_3{\bf{k}}: a_0, a_1, a_2, a_3 \in {\mathbb R} \right\}$$ with ${\bf{i}}^2={\bf{j}}^2={\bf{k}}^2={\bf{ijk}}=-1.$
The conjugate of $q \in {\mathbb H}$ is $\overline{q}:= a_0 - a_1{\bf{i}} - a_2{\bf{j}} - a_3{\bf{k}}$ and the modulus of $q$ is
$|q|: = \sqrt {a_0^2 + a_1^2 + a_2^2 + a_3^2}$.
$\Im{(a)}$ denotes the imaginary part of $a \in {\mathbb C}$. The real part of a quaternion $q= a_0 + a_1{\bf{i}} +
a_2{\bf{j}} + a_3{\bf{k}}$ is defined as
$\Re(q)=a_0.$
The collection of all $n$-column vectors with elements in ${\mathbb H}$ is denoted by ${\mathbb H}^{n}$.
For $x \in \mathcal{K}^n,$ where $\mathcal{K} \in \{ {\mathbb R}, {\mathbb C}, {\mathbb H}\},$ the transpose of $x$ is $x^T.$ If $x=[x_1,\ldots,x_n]^T,$ the conjugate of
$x$ is defined as $\overline{x}=[\overline{x_1}, \ldots, \overline{x_n}]^T$ and the conjugate transpose of $x$
is defined as $x^H=[\overline{x_1}, \ldots, \overline{x_n}].$
For $x, y \in {\mathbb H}^n,$ the inner product is defined as $\langle x, y \rangle:= y^Hx$ and the norm of $x$ is defined as
$\|x\|:= \sqrt{\langle x, x\rangle}$.
The sets of $m\times n$ real, complex, and quaternionic matrices are denoted by $M_{m \times n}({\mathbb R}),$ $ M_{m \times n}({\mathbb C}),$
and $M_{m \times n}({\mathbb H}),$ respectively. When $m=n$, these sets are
denoted by $M_n(\mathcal{K})$, $\mathcal{K} \in \{{\mathbb R}, {\mathbb C}, {\mathbb H} \}$.
For $A \in M_{m \times n}({\mathcal{K}}),$
the conjugate, transpose, and conjugate transpose
of $A$ are defined as $\overline{A}=(\overline{a_{ij}})$, $A^T=(a_{ji}) \in M_{n \times m} ({\mathbb H}),
$ and $A^H=(\overline{A})^T \in M_{n \times m}({\mathbb H}),$ respectively.
For $z\in {\mathbb H}^n,$ the vector $p$-norm on ${\mathbb H}^n$ is
defined by $\|z\|_p:=(\sum_{i=1}^n|z_i|^p)^{1/p},$ where $1\le p < \infty$ and $\|z\|_\infty:=\dm {\max_{1\le i \le n}}\{|z_i|\}.$ Define
${\mathbb R}^{+}:=\{ \alpha: \alpha \in {\mathbb R}, \alpha > 0 \}.$
The set
\begin{eqnarray*}
[q]:=\{ r \in {\mathbb H} : r=\rho^{-1} \, q \,\rho \,\,\mbox{ for some }\, 0\not=\rho \in {\mathbb H}\}
\end{eqnarray*}
is called an equivalence class of $q \in {\mathbb H}.$
Let $x\in {\mathbb H}^n$. Then $x$ can be uniquely expressed as $x= x_1+ x_2{\bf{j}},$ where $x_1, x_2 \in {\mathbb C}^n.$ Define the function $ \psi : {\mathbb H}^n \rightarrow
{\mathbb C}^{2n}$ by
$$ \psi_x:=\bmatrix{x_1 \\-\overline{x_2}}.$$
This function $\psi$ is an injective linear transformation from ${\mathbb H}^n$ to ${\mathbb C}^{2n}.$
\begin{definition}\label{ch1cam}
Let $A\in M_n({{\mathbb H}})$. Then $A$ can be uniquely expressed as $A= A_1+ A_2{\bf{j}},$ where $A_1,A_2 \in M_n({\mathbb C}).$ Define the function
$\Psi : M_n({{\mathbb H}}) \rightarrow
M_{2n}({{\mathbb C}})$ by
\begin{eqnarray*}
\Psi_A:=\bmatrix{A_1 & A_2\\-\overline{A_2}&\overline{A_1}}.
\end{eqnarray*}
The matrix $\Psi_A$ is called
the complex adjoint matrix of $A$.
\end{definition}
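The complex adjoint matrix is also convenient for numerical experiments. A minimal Python/NumPy sketch, assuming $A$ is stored as the pair $(A_1,A_2)$ of complex matrices with $A=A_1+A_2\,{\bf{j}}$, is given below; it is reused in the code sketches that follow.
\begin{verbatim}
import numpy as np

def complex_adjoint(A1, A2):
    # Psi_A for A = A1 + A2*j, with A1, A2 complex n x n arrays,
    # following the definition above.
    return np.block([[A1,           A2],
                     [-np.conj(A2), np.conj(A1)]])
\end{verbatim}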
\begin{definition}\label{def3.1}
Let $A\in M_n({{\mathbb H}}).$ Then the left, right, and the standard eigenvalues, respectively, are given by
\begin{eqnarray*}
\Lambda_l(A) &:=& \left\{\lambda \in {\mathbb H} : Ax = \lambda x \,\, \mbox{for some nonzero}\,\,
x \in {\mathbb H}^{n} \right\},\\
\Lambda_r(A) &:=& \left\{\lambda \in {\mathbb H} : Ax = x \lambda \,\, \mbox{for some nonzero}\,
\, x \in {\mathbb H}^{n} \right\}\,\mbox{and}\\
\Lambda_s(A) &:=& \left\{\lambda \in {\mathbb C} : Ax = x\lambda \,\, \mbox{for some nonzero}\,
\, x \in {\mathbb H}^{n},\, \Im(\lambda ) \geq 0 \right\}.
\end{eqnarray*}
\end{definition}
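Numerically, the complex right eigenvalues of $A$ can be obtained through $\Psi_A$: it is well known (see, e.g., \cite{fz97}) that the complex eigenvalues of $\Psi_A$ occur in conjugate pairs and coincide with the complex right eigenvalues of $A$, those with nonnegative imaginary part being the standard eigenvalues. A short sketch, reusing the \texttt{complex\_adjoint} helper above:
\begin{verbatim}
def complex_right_eigenvalues(A1, A2):
    # All complex right eigenvalues of A = A1 + A2*j, i.e. the spectrum of
    # Psi_A; they occur in conjugate pairs, and those with nonnegative
    # imaginary part are the standard eigenvalues of A.
    return np.linalg.eigvals(complex_adjoint(A1, A2))
\end{verbatim}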
%
\begin{definition}
Let $A \in M_n({\mathbb H}).$ Then $A$ is said to be a central closed matrix if there exists an invertible matrix $T$ such that
\[
T^{-1} A T=\mathrm{diag}(\lambda_1,\lambda_2,\ldots,\lambda_n),\quad
\mbox{where}\,\, \lambda_i \in {\mathbb R}, \quad 1 \leq i \leq n. \]
\end{definition}
\begin{definition}\label{st}
Let $A\in M_n({\mathbb H}).$ Then the matrix $A$ is said to be stable if and only if $\Lambda_r(A) \subset {\mathbb H}^{-}:= \left\{ q \in {\mathbb H} :
\Re(q) < 0\right\}.$
\end{definition}
\begin{definition}
Let $A \in M_n({\mathbb H}).$ Then $A$ is said to be $\eta$-Hermitian if $A= \left(A^{\eta}\right)^H,$
where $A^{\eta}=\eta^H A \eta $ and $\eta \in \{{\bf{i}}, {\bf{j}}, {\bf{k}} \}.$
\end{definition}
\begin{definition}
A matrix $A \in M_n({\mathbb H})$ is said to be invertible if there exists
$B \in M_n({\mathbb H})$ such that $AB=BA=I_n,$ where $I_n$ is the $n \times n$ identity matrix.
\end{definition}
%
We next recall the following result necessary for the development of our theory.
%
\begin{theorem}\cite[Theorem 4.3]{fz97}.\label{nst}
Let $A\in M_n({{\mathbb H}}).$ Then the following statements are equivalent:
\noindent $(a)$ $A$ is invertible,\,\, $(b)$ $Ax = 0$ has only the trivial solution $x=0$, \,\,
$(c)$ $\mathrm{det}(\Psi_A) \neq 0,$\,\,
$(d)$ $\Psi_A$ is invertible,\,\,
$(e)$ $A$ has no zero eigenvalue.
\end{theorem}
Let $A:= (a_{ij})\in M_n({\mathbb H})$ and define the absolute row and column sums of $A$ as
\[
r_i'(A):=r_i(A)+ |a_{ii}|\,\,\mbox{and}\,\, c_i'(A):=c_i(A)+|a_{ii}| \,\,\,\,(1 \leq i \leq n).
\]
\section{Distribution of the left and right eigenvalues of quaternionic matrices}\label{ss3}
It is known from \cite[Corollary 3.2]{l08} that a quaternionic matrix $A$ and its conjugate transpose $A^H$ have the same right
eigenvalues. However, $A$ and $A^H$ may not have the same left eigenvalues, take for example
$A= \bmatrix{{\bf{i}}& 0 \\0 & {\bf{j}}}$ and $A^H= \bmatrix{{\bf{-i}}
& 0 \\0 &{\bf{-j}}}$. We now present the following lemma for left eigenvalues of $A$ and $A^H.$
\begin{lemma}\label{prop4}
Let $A\in M_n({{\mathbb H}})$ and let $\lambda \in {\mathbb H}.$ Then $\lambda$ is a left eigenvalue of $A$ if and only if $\overline{\lambda}$
is a left eigenvalue of $A^H.$
\end{lemma}
\proof Let $\lambda$ be a left eigenvalue of $A.$ Then there exists
$ x ( \neq 0) \in {\mathbb H}^n$ such that $(A-\lambda I_n)x= 0.$ This can be written as $\Psi_{(A-\lambda I_n)}
\psi_x=0.$ Hence it follows that $\lambda$ is a left eigenvalue of $A$ if and only if
$\mathrm{det}\left[\Psi_{(A-\lambda I_n)}\right]=0$ $\Leftrightarrow \mathrm{det}\left[\Psi^H_{(A-\lambda I_n)}\right]=0
\Leftrightarrow \mathrm{det}\left[\Psi_{(A-\lambda I_n)^H}\right]=0
\Leftrightarrow \mathrm{det}\left[\Psi_{(A^H-\overline{\lambda} I_n)}\right]=0.
$
Thus, $\overline{\lambda}$ is a left eigenvalue of $A^H.\,\,\,\blacksquare$
The Gerschgorin type theorem for the left eigenvalues using deleted absolute row sums of a matrix $A\in M_n ({\mathbb H})$ is proved in \cite{fz07}.
However, the Gerschgorin type theorem
for the left eigenvalues using deleted absolute column sums of $A$ has not yet been established.
We now state and prove the theorem.
\begin{theorem}\label{thm2}
Let $A:=(a_{ij}) \in M_n({{\mathbb H}}).$ Then all the left eigenvalues of $A$ are
located in the union of $n$ Gerschgorin balls $\Omega_i(A):=\left\{ z \in {\mathbb H}:
|z-a_{ii}|\leq c_i(A) \right\}, 1 \leq i \leq n,$ that is,
\[
\Lambda_l(A) \subseteq \Omega(A):=\cup_{i=1}^{n} \Omega_i(A).
\]
\end{theorem}
\proof Let $\lambda$ be a left eigenvalue of $A.$ Then from Lemma \ref{prop4},
$\overline{\lambda}$ is a left eigenvalue of $A^H.$ Then there exists some nonzero $x \in {\mathbb H}^n$ such that
$A^H x = \overline{\lambda}x$. Let $x:= [x_1,\ldots, x_n]^T \in {\mathbb H}^n$ and let
$x_t$ be an element of $x$ such that $|x_t|\ge |x_i|, 1\le i \le n$.
Then, $|x_t| > 0$. From the $t$-th equation of $A^Hx= \overline{\lambda} x$, we have
\begin{eqnarray*}
\sum_{j=1}^n \overline{a_{jt}} x_j &=& \overline{\lambda} x_t.
\end{eqnarray*}
This gives $(\overline{\lambda}-\overline{a_{tt}}) x_t= \sum_{j=1,\, j\neq t}^n \overline{a_{jt}} x_j$. Taking moduli and using $|x_j| \le |x_t|$ for all $j$, we obtain
\begin{eqnarray*}
|\lambda- a_{tt}|\, |x_t| &\le & \sum_{j=1,\, j\neq t}^n |a_{jt}|\, |x_j| \,\le\, c_t(A)\, |x_t|,
\end{eqnarray*}
that is, $|\lambda-a_{tt}| \le c_t(A).\,\,\,\blacksquare$
We now have the following localization theorem for the deleted absolute
row and column sums of a matrix $A \in M_n({\mathbb H})$ which is known as {\em Ostrowski type
theorem}.
\begin{theorem}$($Ostrowski type theorem for the left eigenvalues$)$ \label{os1}
Let $A:=(a_{ij})\in M_n({{\mathbb H}})$ and let $\gamma \in [0, 1]$. Then all the
left eigenvalues of $A$ are located in the union of $n$ balls $T_i(A):=
\{z \in {\mathbb H}:|z-a_{ii}| \leq r_i(A)^{\gamma} c_i(A)^{1-\gamma}\}, 1 \leq i \leq n,$ that is,
\[
\Lambda_l(A) \subseteq T(A):= \cup_{i=1}^{n} T_i(A).
\]
\end{theorem}
\proof Let $\lambda$ be a left eigenvalue of $A.$ Then by \cite[Theorem $6$]{fz07}, for $\gamma\in [0,\,1],$ we have
\begin{equation}\label{eqbp}
|\lambda-a_{ii}|^{\gamma} \le r_i(A)^{\gamma}, \quad 1 \leq i \leq n.
\end{equation}
Similarly, from Theorem \ref{thm2}, we obtain
\begin{equation}\label{eqbq}
|\lambda-a_{ii}|^{1-\gamma} \le c_i(A)^{1-\gamma}, \quad 1 \leq i \leq n.
\end{equation}
Combining (\ref{eqbp}) and (\ref{eqbq}), we get
\[
|\lambda- a_{ii}| \le r_i(A)^{\gamma} c_i(A)^{1-\gamma}, \quad 1 \leq i \leq n.
\]
Thus, all the left eigenvalues of $A$ are located in the union of $n$ balls $T_i(A)$. $\blacksquare$
Next, we derive Ostrowski type theorem for right eigenvalues of $A\in M_n({\mathbb H})$ with all real diagonal entries.
\begin{theorem}\label{j12}
Let $A:=(a_{ij}) \in M_n({{\mathbb H}})$ with $a_{ii}\in {\mathbb R}$ and let $\gamma \in [0,\,1]$. Then all the right eigenvalues of $A$ are located in the union of $n$
balls $G_i(A):=\left\{z\in {\mathbb H} : |z-a_{ii}|\le r_i(A)^{\gamma}
c_i(A)^{1-\gamma}\right\}$, $1 \leq i \leq n,$ that is,
\[ \Lambda_r(A)\subseteq G(A):= \cup_{i=1}^{n} G_i(A).\]
\end{theorem}
\proof Let $\lambda$ be a right eigenvalue of $A.$ Then there exists some nonzero $x \in {\mathbb H}^n$ such that
$A x = x \lambda $. Let $x:= [x_1,\ldots, x_n]^T \in {\mathbb H}^n$ and let
$x_t$ be an element of $x$ such that $|x_t|\ge |x_i|, 1\le i \le n$.
From the $t$-th equation of $Ax= x \lambda $, we have
\begin{eqnarray}
a_{tt} x_t +\sum_{j=1,\, j\neq t}^n a_{tj} x_j &=& x_t \lambda.
\end{eqnarray}
Since $a_{tt} \in {\mathbb R},$ $a_{tt} x_t=x_t a_{tt}.$ Taking moduli and using $|x_j| \le |x_t|$ for all $j$, we obtain
\begin{eqnarray}\label{R1}
|\lambda- a_{tt}| \le \sum_{j=1,\, j\neq t}^n |a_{tj}|=:r_t(A).
\end{eqnarray}
From \cite[Corollary 2.7]{l08}, $\lambda$ is also a right eigenvalue of $A^H$. Proceeding as in the proof of Theorem \ref{thm2}, we obtain
\begin{eqnarray}\label{R2}
|\lambda- a_{tt}| \le \sum_{j=1,\, j\neq t}^n |a_{jt}|=:c_t(A).
\end{eqnarray}
Let $\gamma \in [0, 1].$ Then from (\ref{R1}) and (\ref{R2}), we obtain
\begin{eqnarray}\label{R3}
|\lambda- a_{tt}|^{\gamma} \le r^{\gamma}_t(A),
\end{eqnarray}
\begin{eqnarray}\label{R4}
|\lambda- a_{tt}|^{1-\gamma} \le c^{1-\gamma}_t(A).
\end{eqnarray}
Combining (\ref{R3}) and (\ref{R4}), we get
\[
|\lambda- a_{tt}| \le r_t(A)^{\gamma} \, c_t(A)^{1-\gamma}.\,\,\,\blacksquare
\]
\begin{corollary}\label{s3}
Let $A:= (a_{ij})\in M_n({\mathbb H})$ with $n\geq 2$ and let $ \gamma \in [0,\,1]$. Assume that
\begin{equation}\label{eqn5}
|a_{ii}| > r_i(A)^{\gamma} \ c_i(A)^{1-\gamma}, \quad 1\le i\le n.
\end{equation}
Then $A$ is invertible.
\end{corollary}
\proof On the contrary, suppose $A$ is not invertible. Then by Theorem \ref{nst},
there is a left eigenvalue $\lambda=0$ of $A$. Now from Theorem \ref{os1}, we obtain
$|a_{ii}| \leq r_i(A)^{\gamma} c_i(A)^{1-\gamma}$. This contradicts our assumption
(\ref{eqn5}). Hence $A$ is invertible.$\,\,\,\blacksquare$
It is known that a quaternionic matrix $A \in M_n({\mathbb H})$ may have at most $2n$ complex right eigenvalues. From
Theorem \ref{j12}, all the complex right eigenvalues of a matrix $A=(a_{ij}) \in M_n({\mathbb H})$
with all real diagonal entries lie in the union of $n$-discs $\mathcal{E}_i(A):=\{z \in {\mathbb C}: |z-a_{ii}|
\leq r_i(A)^{\gamma} c_i(A)^{1-\gamma} \},$ $1 \leq i \leq n,$ that is,
\begin{eqnarray}\label{EEE2}
\Lambda_c(A)\subseteq \mathcal{E}(A):= \cup_{i= 1}^{n}\mathcal{E}_i(A),\,
\mbox{where}\,\, \Lambda_c(A):=\{\lambda \in {\mathbb C} : A x=x \lambda,\,\, 0 \neq x \in {\mathbb H}^n \}.
\end{eqnarray}
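The containment (\ref{EEE2}) is easy to test numerically. The following sketch (reusing the Python/NumPy helpers from Section \ref{s2}) draws a random quaternionic matrix with real diagonal entries, computes its complex right eigenvalues through the complex adjoint matrix, and checks that each of them lies in some disc $\mathcal{E}_i(A)$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, gamma = 5, 0.25

A1 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A2 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
np.fill_diagonal(A1, rng.standard_normal(n))    # real diagonal entries
np.fill_diagonal(A2, 0.0)

absA = np.sqrt(np.abs(A1)**2 + np.abs(A2)**2)   # entrywise quaternionic moduli
r = absA.sum(axis=1) - np.diag(absA)            # deleted row sums r_i(A)
c = absA.sum(axis=0) - np.diag(absA)            # deleted column sums c_i(A)
radii = r**gamma * c**(1 - gamma)
centers = np.diag(A1).real

lam = complex_right_eigenvalues(A1, A2)         # helper sketched in Section 2
print(all(np.any(np.abs(l - centers) <= radii + 1e-9) for l in lam))  # expected: True
\end{verbatim}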
The Brauer type theorem is proved in \cite{wzcl08} for the left eigenvalues in the case of deleted absolute
column sums of a matrix $A \in M_n({\mathbb H}).$ That is, if $\lambda \in \Lambda_l (A),$ then its conjugate $\overline{\lambda}$ lies in the
union of $\frac{n (n- 1)}{2}$ ovals of Cassini. However, this is incorrect as the following example suggests:
\begin{exam}\label{count2}{\rm
Let $A=\bmatrix{{\bf{i}} &{\bf{k}} \\ 0 & {\bf{j}}}.$
Then by \cite[Theorem 5]{wzcl08}, oval of Cassini is given by
$\left\{z \in {\mathbb H} : |z-{\bf{i}}|\ |z-{\bf{j}}|\leq 0\right\}.$ Here,
${\bf i}$ is a left eigenvalue of $A$ and its conjugate
${\bf -i}$ is not contained in the above oval of Cassini.}
\end{exam}
%
According to \cite[Theorem 5]{wzcl08}, if $\lambda \in \Lambda_l(A),$ then $\overline{\lambda} \in \dm{\cup_{\substack{i, j=
1,\\ i \ne j}}^{n}} F_{ij}(A),$ where
$$F_{ij}(A):= \left\{z\in {\mathbb H} : |z-a_{ii}| \ |z-a_{jj}| \le c_i(A)
c_j(A)\right\},\quad 1 \leq i,j \leq n,\quad i \neq j.$$
However, this result is not true in general: for the matrix of Example \ref{count2} and its left eigenvalue $\lambda={\bf i}$, one has
\[
|\overline{\lambda}-a_{ii}| \ |\overline{\lambda}-a_{jj}| > c_i(A)
c_j(A),\quad 1 \leq i,j \leq n,\quad i \neq j.
\]
We now derive a corrected version of \cite[Theorem 5]{wzcl08} as follows:
\begin{theorem}\label{c}
Let $A:=(a_{ij}) \in M_n({{\mathbb H}}).$ Then all the left
eigenvalues of $A$ are located in the union of $\frac{n(n-1)}{2}$ ovals of
Cassini
$$F_{ij}(A):= \left\{z\in {\mathbb H} : |z-a_{ii}| \ |z-a_{jj}| \le c_i(A) c_j(A) \right\},
\quad 1 \leq i,j \leq n, \quad i \neq j,$$
that is, $ \Lambda_l(A)\subseteq F(A):=\dm{\cup_{\substack{i, j=
1,\\ i \ne j}}^{n}} F_{ij}(A).$
\end{theorem}
\proof Let $\lambda$ be a left eigenvalue of $A.$ Then by Lemma \ref{prop4},
$\overline{\lambda}$ is a left eigenvalue of $A^H$. Then there exists some nonzero $x \in {\mathbb H}^n$ such that
$A^H x = \overline{\lambda}x$. Let $x:= [x_1,\ldots, x_n]^T \in {\mathbb H}^n$ and let
$x_s$ be an element of $x$ such that
$|x_s| \geq |x_i|$, $1 \leq i \leq n.$ Then $|x_s| >0.$ If all the other elements of $x$ are zero,
then $\lambda=a_{ss}$ and the required result holds trivially.
Otherwise, let $x_t$ be a nonzero element of $x$ with $t \neq s$ such
that $|x_t|\ge|x_i|$ for all $i \neq s$; in particular, $|x_s|\ge|x_t|$.
From the $s$-th equation of $A^H x= \overline{\lambda} x $, we have
\[
\sum_{j=1}^{n} \overline{ a_{js}} x_j= \overline{\lambda} x_s,
\]
which implies
\[
(\overline{\lambda}-\overline{a_{ss}}) x_s= \sum_{j=1,\, j\neq s}^{n} \overline{a_{js}} x_j.
\]
Thus
\begin{equation}\label{eqb1}
|\lambda-a_{ss}| \le \left(\frac{|x_t|}{|x_s|}\right) \ c_s(A).
\end{equation}
Similarly, from $A^H x= \overline{\lambda}x,$ we obtain
\begin{equation}\label{eqb2}
|\lambda-a_{tt}| \le \left(\frac{|x_s|}{|x_t|}\right) \ c_t(A).
\end{equation}
Combining (\ref{eqb1}) and (\ref{eqb2}), we have
\[
|\lambda-a_{ss}| \ |\lambda-a_{tt}|\le c_s(A) c_t(A).
\]
Hence, all the left eigenvalues of $A$ are located in the union of $\frac{n(n-1)}{2}$
ovals of Cassini $F_{ij}(A), \quad 1 \leq i,j \leq n, \quad i \neq j.$\,\,\, $\blacksquare$
%
Theorem $7$ of \cite{wzcl08} was stated for a central closed quaternionic matrix. Now we
generalize this result for all quaternionic matrices as follows.
\begin{theorem}\label{rc}
Let $A:=(a_{ij}) \in M_n({\mathbb H})$ and let $\gamma \in [0,\,1]$. Then all the left
eigenvalues of $A$ are located in the union of $\dm{\frac{n(n-1)}{2}}$ ovals of Cassini
\[K_{ij}(A):=\left\{z\in {\mathbb H} : |z-a_{ii}| \ |z-a_{jj}|\le r_i(A)^{\gamma} \
r_j(A)^{\gamma} \ c_i(A)^{1-\gamma } \ c_j(A)^{1-\gamma} \right\}, \quad 1 \leq i,j \leq n, \quad i \neq j,\]
that is,
\[
\Lambda_l(A)\subseteq K(A):=\dm{\cup_{\substack{i,j=1\\ i \ne j}}^{n}}K_{ij}(A).
\]
\end{theorem}
\proof Let $\lambda$ be a left eigenvalue of $A.$ Then by \cite[Theorem 4]{wzcl08} and Theorem \ref{c},
for $\gamma \in [0, 1]$, we have
\begin{equation}\label{ch2eqn26}
|\lambda-a_{ii}|^{\gamma} |\lambda-a_{jj}|^{\gamma} \le r_i(A)^{\gamma}
r_j(A)^{\gamma}, \quad 1 \leq i,j \leq n, \quad i \neq j
\end{equation}
and
\begin{equation}\label{ch2eqn27}
|\lambda-a_{ii}|^{1-\gamma} |\lambda-a_{jj}|^{1-\gamma} \le c_i(A)^{1-\gamma}
c_j(A)^{1-\gamma}, \quad 1 \leq i,j \leq n, \quad i \neq j.
\end{equation}
Combining (\ref{ch2eqn26}) and (\ref{ch2eqn27}), we have
\[
|\lambda- a_{ii}| |\lambda-a_{jj}| \le r_i(A)^{\gamma} r_j(A)^{\gamma} c_i(A)^{1-\gamma} c_j(A)^{1-\gamma},
\quad 1 \leq i,j \leq n, \quad i \neq j.\,\,\, \blacksquare
\]
\begin{corollary}
Let $A:= (a_{ij})\in M_n({\mathbb H})$ with $n\geq 2$ and let $ \gamma \in [0,\,1]$. Assume that
$$|a_{ii}||a_{jj}|> r_i(A)^{\gamma}
r_j(A)^{\gamma} \ c_i(A)^{1-\gamma } c_j(A)^{1-\gamma},\quad 1 \leq i,j \leq n,\quad i \neq j.$$
Then $A$ is invertible.
\end{corollary}
\begin{corollary}
Let $A:= (a_{ij}) \in M_n({{\mathbb H}}).$ Then all the left
eigenvalues of $A$ are located in the union of $\frac{n(n-1)}{2}$ ovals of Cassini
$$ \Lambda_l(A)\subseteq \Phi(A):=\cup_{{\substack{i,j=1\\ i \ne j}}}^{n}
\left\{z\in {\mathbb H} : |z-a_{ii}| \ |z-a_{jj}| \le \min \{r_i(A) r_j(A),\, c_i(A) c_j(A) \} \right\}.$$
\end{corollary}
\proof Substituting $\gamma=0,1$ in Theorem \ref{rc}, we obtain the following:
\begin{itemize}
\item[$(a)$] $ \Lambda_l(A)\subseteq E(A):=\cup_{{\substack{i,j=1\\ i \ne j}}}^{n}
\left\{z\in {\mathbb H} : |z-a_{ii}| \ |z-a_{jj}| \le c_i(A) c_j(A)\right\}.$
\item[$(b)$] $ \Lambda_l(A)\subseteq F(A):=\cup_{{\substack{i,j=1\\ i \ne j}}}^{n}
\left\{z\in {\mathbb H} : |z-a_{ii}| \ |z-a_{jj}| \le r_i(A) r_j(A)\right\}.$
\end{itemize}
Combining $(a)$ and $(b),$ we get the required result.$\,\,\,\blacksquare$
The following result provides a better estimate than Theorem \ref{j12}.
\begin{theorem}\label{btt}
Let $A:= (a_{ij}) \in M_n({\mathbb H})$ with $a_{ii} \in {\mathbb R}$ and let $\gamma \in [0,1]$. Then all the right
eigenvalues of $A$ are located in the union of $\dm{\frac{n(n-1)}{2}}$ ovals of Cassini $\mathcal{G}_{ij}(A):=
\left\{z\in {\mathbb H} : |z-a_{ii}| \ |z-a_{jj}|
\le r_i(A)^{\gamma} \ r_j(A)^{\gamma} \ {c_i}(A)^{1-\gamma}\ {c_j}(A)^{1-\gamma}\right\}
, \quad 1 \leq i,j \leq n, \quad i \neq j,$ that is,
\[
\Lambda_r(A)\subseteq \mathcal{G}(A):=\cup_{\substack{i,j=1\\ i \ne j}}^{n}\mathcal{G}_{ij}(A).
\]
\end{theorem}
\proof
Let $\lambda$ be a right eigenvalue of $A.$ Then by \cite[Theorem 4.1, Corollary 4.1]{lyj12},
for $\gamma \in [0, 1]$, we have
\begin{equation}\label{nch2eqn26}
|\lambda-a_{ii}|^{\gamma} |\lambda-a_{jj}|^{\gamma} \le r_i(A)^{\gamma}
r_j(A)^{\gamma},\quad 1 \leq i,j \leq n, \quad i \neq j
\end{equation}
and
\begin{equation}\label{nch2eqn27}
|\lambda-a_{ii}|^{1-\gamma} |\lambda-a_{jj}|^{1-\gamma} \le c_i(A)^{1-\gamma}
c_j(A)^{1-\gamma}, \quad 1 \leq i,j \leq n, \quad i \neq j.
\end{equation}
Combining (\ref{nch2eqn26}) and (\ref{nch2eqn27}), we have
\[
|\lambda- a_{ii}| |\lambda-a_{jj}| \le r_i(A)^{\gamma} r_j(A)^{\gamma} c_i(A)^{1-\gamma} c_j(A)^{1-\gamma}
,\quad 1 \leq i,j \leq n, \quad i \neq j.\,\,\, \blacksquare
\]
From Theorem \ref{btt}, all the complex right eigenvalues of a matrix $A:=(a_{ij}) \in M_n({\mathbb H})$ with
$a_{ii} \in {\mathbb R}$, $1 \leq i \leq n$ are contained in the union of $\frac{n(n-1)}{2}$ ovals of Cassini
$\mathcal{F}_{ij}(A):=\{z \in {\mathbb C}: |z-a_{ii}|\, |z-a_{jj}| \leq r_i(A)^{\gamma} r_j(A)^{\gamma} \ {c_i}(A)^{1-\gamma}
{c_j}(A)^{1-\gamma} \}$, $ \quad 1\le i, j\le n, \quad i \ne j$,
that is,
\begin{eqnarray}\label{EEE3}
\Lambda_c(A)\subseteq \mathcal{F}(A):= \cup_{\substack{i,j=1\\ i \ne j}}^{n} \mathcal{F}_{ij}(A).
\end{eqnarray}
The following theorem shows that Theorem \ref{rc} is sharper than Theorem \ref{os1}.
\begin{theorem}\label{sh}
Let $A:= (a_{ij}) \in M_n({\mathbb H})$ with $ n \geq 2$ and let $\gamma \in [0, 1]$. Then
$$K(A)\subseteq T(A),$$
where $T(A)$ and $K(A)$ are defined in Theorem \ref{os1} and Theorem \ref{rc}, respectively.
\end{theorem}
\proof Let $z \in K_{ij}(A)$ and fix any $i$ and $j,\, \,( 1\le i,j \le n,\, i \neq j)$. Then from
Theorem \ref{rc}, we have
\begin{equation}\label{Eqn23}
|z-a_{ii}| \ |z-a_{jj}| \le r_i(A)^{\dm{\gamma}} r_j(A)^{\dm{\gamma}} c_i(A)^{\dm{1-\gamma}} c_j(A)^{\dm{1-\gamma}}.
\end{equation}
Now the following two cases are possible.
{\bf Case 1:} If $r_i(A)^{\dm{\gamma}} \ r_j(A)^{\dm{\gamma}} c_i(A)^{\dm{1-\gamma}} c_j(A)^{\dm{1-\gamma}}=0,$
then $z= a_{ii}$ or $z= a_{jj}$.
Since $a_{ii}\in T_i(A)$ and $a_{jj} \in T_j(A)$ (each center trivially lies in its own ball),
we get $z \in T_i(A) \cup T_j(A).$
{\bf Case 2:}
If $r_i(A)^{\dm{\gamma}} r_j(A)^{\dm{\gamma}} c_i(A)^{\dm{1-\gamma}} c_j(A)^{\dm{1-\gamma}} > 0,$ then by (\ref{Eqn23})
\begin{equation}\label{Eqn24}
\left(\frac{|z-a_{ii}|}{r_i(A)^{\dm{\gamma}} c_i(A)^{\dm{1-\gamma}}}\right) \left(\frac{|z-a_{jj}|}{r_j(A)^{\dm{\gamma}}
c_j(A)^{\dm{1-\gamma}}} \right) \le 1.
\end{equation}
Since the product on the left side of (\ref{Eqn24}) is at most unity, at least one of its
factors must be at most unity, that is, $z \in T_i(A)$ or
$z \in T_j(A).$ Hence $z \in T_i(A) \cup T_j(A)$. Thus
\begin{eqnarray}\label{eqn25}
K_{ij}(A)\subseteq T_i(A) \cup T_j(A).
\end{eqnarray}
From Theorem \ref{os1} and Theorem \ref{rc}, we obtain
\[K(A):= \cup_{\substack{i, j= 1\\ i\ne j}}^{n} K_{ij}(A) \subseteq \cup_{\substack{i, j= 1\\ i\ne j}}^{n} \left\{T_i(A) \cup T_j(A) \right\}= \cup_{k= 1}^n T_k(A)=: T(A).\,\,\, \blacksquare\]
Similarly, we have the following relation between Theorem \ref{btt} and Theorem \ref{j12}.
\begin{theorem}\label{shh}
Let $A:= (a_{ij})\in M_n({\mathbb H}), n\ge 2$ with $ a_{ii}\in {\mathbb R}$ and let $\gamma\in [0, 1].$ Then
$$\mathcal{G}(A)\subseteq G(A),$$ where $G(A)$ and $\mathcal{G}(A)$ are defined in Theorem \ref{j12}
and Theorem \ref{btt}, respectively.
\end{theorem}
\proof The proof is similar to the proof of Theorem \ref{sh}.$\,\,\,\blacksquare$
The following example illustrates Theorem \ref{shh} for complex right eigenvalues of a matrix $A:=(a_{ij}) \in M_n({\mathbb H})$
with $a_{ii} \in {\mathbb R}$, $1 \leq i \leq n.$
\begin{exam}\label{ree1}{\em Let
$A= \bmatrix{3 & 1+{\bf{i}}+{\bf{j}}-{\bf{k}} & 2+3{\bf{j}}-\sqrt{3} {\bf{k}} \\
5+ \sqrt{2}{\bf{j}}+3 {\bf{k}} & -2 & 3{\bf{j}}+4{\bf{k}} \\
4+3{\bf{j}} & 2-{\bf{i}}-2{\bf{k}} & -5 }.$
Substituting $\gamma=1/4$ in (\ref{EEE2}), we get the following three discs:
\begin{eqnarray*}
\mathcal{E}_1(A)&:=&\{z \in {\mathbb C}: |z-3| \leq 9.4533 \},\\ \mathcal{E}_2(A)&:=&\{z \in {\mathbb C}: |z+2| \leq 6.0894\},\\
\mathcal{E}_3(A)&:=&\{z \in {\mathbb C}: |z+5| \leq 8.7389\}.
\end{eqnarray*}
Similarly, let $\gamma=1/4$ in (\ref{EEE3}), we get the following three discs:
\begin{eqnarray*}
\mathcal{F}_{12}(A) &:=& \{z \in {\mathbb C} : |z-3|\, |z+2| \leq 57.5649 \},\\
\mathcal{F}_{23}(A) &:=& \{z \in {\mathbb C} : |z+2|\, |z+5| \leq 53.2145 \},\\
\mathcal{F}_{31}(A) &:=& \{z \in {\mathbb C} : |z+5|\, |z-3| \leq 82.6108 \}.
\end{eqnarray*}
In this example, there are six complex right eigenvalues $\lambda_j\,\, ( 1 \leq j \leq 6)$
which are shown in Figure \ref{fig1}.
The set $\mathcal{F}(A):= \mathcal{F}_{12}(A) \cup \mathcal{F}_{23}(A) \cup \mathcal{F}_{31}(A)$ is
represented by shaded region in Figure \ref{fig1}. From Figure \ref{fig1}, it is clear that
$\mathcal{F}(A) \subset \mathcal{E}(A),$ where $\mathcal{E}(A):=\mathcal{E}_1(A) \cup \mathcal{E}_2(A)
\cup \mathcal{E}_3(A).$
\begin{figure}[h!]
\hspace{-1.3cm}\includegraphics[height=8cm,width=15cm]{F21.eps}
\caption{Location of the complex right eigenvalues of the matrix $A$ from Example \ref{ree1}.}\label{fig1}
\end{figure}
}
\end{exam}
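The radii and Cassini bounds quoted in Example \ref{ree1} can be reproduced from the entrywise moduli alone; a short Python/NumPy check (with the moduli $|a_{ij}|$ computed by hand from the matrix above) is:
\begin{verbatim}
import numpy as np

absA = np.array([[3., 2., 4.],          # entrywise quaternionic moduli |a_ij|
                 [6., 2., 5.],
                 [5., 3., 5.]])
r = absA.sum(axis=1) - np.diag(absA)    # deleted row sums    [ 6. 11.  8.]
c = absA.sum(axis=0) - np.diag(absA)    # deleted column sums [11.  5.  9.]
g = 0.25
rad = r**g * c**(1 - g)
print(np.round(rad, 4))                 # disc radii:     [9.4533 6.0894 8.7389]
print(np.round([rad[0]*rad[1], rad[1]*rad[2], rad[2]*rad[0]], 4))
                                        # Cassini bounds: [57.5649 53.2145 82.6108]
\end{verbatim}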
For $A:= (a_{ij})\in M_n({\mathbb H})$, define
$$n_i^{(p)}(A):= \left( \sum_{j=1,\, j\neq i}^{n} |a_{ij}|^{p}\right)^{\frac{1}{p}}, \quad
1 \leq i \leq n, \quad p \in (1,\infty).$$
We are now ready to derive the following localization theorem for left eigenvalues of a quaternionic matrix.
\begin{theorem}\label{g1}
Let $A:=(a_{ij}) \in M_n({{\mathbb H}})$ and let $\gamma \in [0, 1]$. Then all the left eigenvalues of $A$ are contained in the
union of $n$ generalized balls \[B_i(A):= \left\{z\in {\mathbb H} : |z- a_{ii}| \le
(n-1)^{\frac{1-\gamma}{q}} r_i(A)^{\gamma} (n_i^{(p)}(A))^{1- \gamma}\right\},\quad 1 \leq i \leq n,\]
that is,
\[\Lambda_l(A) \subseteq B(A):= \cup_{i=1}^{n}B_i(A),\]
for any $p, q \in (1, \infty)$ with $\frac{1}{p}+\frac{1}{q}=1.$
\end{theorem}
\proof Let $\mu$ be a left eigenvalue of $A.$ Then there exists some nonzero $x \in {\mathbb H}^n$ such that
$A x = {\mu}x$. Let $x:= [x_1,\ldots, x_n]^T \in {\mathbb H}^n$ and let
$x_t$ be an element of $x$ such that $|x_t|\ge |x_i|,$ $1\le i \le n$.
Then from $Ax= \mu x,$ we have
\begin{eqnarray*}
a_{tt} x_t + \dm{\sum_{ j= 1,\, j \neq t}^{n} a_{tj} x_j} &=& \mu x_t.
\end{eqnarray*}
This implies
\begin{equation}\label{eqn9}
|\mu-a_{tt}||x_t|= \left|\sum_{j= 1,\, j \neq t}^{n} a_{tj} x_j \right|
\le \sum_{j= 1,\, j \neq t}^{n} |a_{tj}|\ |x_j|.
\end{equation}
Applying the generalized H$\ddot{\mbox{o}}$lder inequality to (\ref{eqn9}), we have
\begin{eqnarray*}
|\mu-a_{tt}||x_t| &\le& \left(\sum_{j= 1,\, j \neq t}^{n} |a_{tj}|^{p}\right)^{\frac{1}{p}}
\left( \sum_{j= 1,\, j \neq t}^{n} |x_j|^{q}\right)^{\frac{1}{q}}.
\end{eqnarray*}
Since $|x_t| \ge |x_i|$ for all $1 \le i \le n$, we have
\begin{eqnarray*}
|\mu-a_{tt}||x_t| &\le & n_t^{(p)}(A) \left( (n-1) |x_t|^{q} \right)^{\frac{1}{q}},
\end{eqnarray*}
that is,
\begin{equation}\label{eq10}
|\mu-a_{tt}| \le n_t^{(p)}(A) \left(n-1\right)^{\frac{1}{q}}.
\end{equation}
Similarly, using $|x_t| \ge |x_i|\,\,\forall\,\,i\,\, (1 \le i \le n)$ in (\ref{eqn9}), we get
\begin{equation}\label{eq11}
|\mu-a_{tt}|\le \sum_{j= 1,\, j \neq t}^{n} |a_{tj}|= r_t(A).
\end{equation}
Combining (\ref{eq10}) and (\ref{eq11}) for $\gamma \in [0,\,1],$ we have
\begin{equation}\label{eq12}
|\mu- a_{tt}|^{1-\gamma}
\le (n_t^{(p)}(A))^{1-\gamma} (n-1)^{\frac{1-\gamma}{q}}\,\,\mbox{and}\,\,
|\mu-a_{tt}|^{\gamma}\leq r_t(A)^{\gamma},
\end{equation}
that is,
$$|\mu-a_{tt}| \le (n-1)^{\frac{1-\gamma}{q}} (n_t^{(p)}(A))^{1-\gamma} r_t(A)^{\gamma}.\,\,\,\blacksquare$$
Let us relate Theorem \ref{g1} to some existing results:
\begin{itemize}
\item Setting $p= q= 2$ and $\gamma= 1$ implies that the left eigenvalues of $A:=(a_{ij}) \in M_n({{\mathbb H}})$
are contained in the
union of $n$ Gerschgorin balls $B_i(A):= \left\{z\in {\mathbb H} : |z-a_{ii}| \le r_i(A) \right\}, 1 \leq i \leq n,$ that is,
\[ \Lambda_l(A) \subseteq B(A):= \cup_{i=1}^{n} B_i(A).\]
This result can be found in \cite[Theorem~6]{fz07}.
\item Setting $p= q= 2$ and $\gamma= 0$ implies that the
left eigenvalues of $A:=(a_{ij}) \in M_n({{\mathbb H}})$ are contained in the
union of $n$ balls $B_i(A):= \left\{z\in {\mathbb H} : |z-a_{ii}| \le
(n-1)^{\frac{1}{2}} n_i^{(2)}(A) \right\}, 1 \leq i \leq n,$ that is,
\[ \Lambda_l(A) \subseteq B(A):=\cup_{i=1}^{n} B_i(A).\]
This result can be found in \cite[Theorem~1]{jw08}.
\end{itemize}
We now present a generalization of \cite[Theorem~7]{fz07} and
\cite[Theorem~3.1]{lyj12} by applying the generalized H$\ddot{\mbox{o}}$lder
inequality over the skew field of quaternions. For a general matrix $A:= (a_{ij}) \in M_n({{\mathbb H}}),$
all the right eigenvalues need not lie in the union of the $n$ generalized balls $B_i(A), 1 \leq i \leq n$.
We show instead that
every right eigenvalue of $A$ has a representative of its equivalence class contained in
the union of the generalized balls $B_i(A), 1 \leq i \leq n$.
\begin{theorem}\label{gg2}
Let $A:= (a_{ij}) \in M_n({{\mathbb H}})$ and let $\gamma \in [0, 1]$. For every
right eigenvalue $\mu$ of $A$ there exists a nonzero quaternion $\beta$ such that
$\beta^{-1} \mu \beta$ $($which is also a right eigenvalue$)$ is contained in the
union of $n$ generalized balls
\[B_i(A):= \left\{z\in {\mathbb H} : |z-a_{ii}| \le (n-1)^{\frac{1-\gamma}{q}}r_i(A)^{\gamma}
(n_i^{(p)}(A))^{1-\gamma}\right\},\quad 1 \leq i \leq n,\]
that is,
$$\left\{ z^{-1} \mu z : 0 \ne z \in {\mathbb H}\right\} \cap \cup_{i= 1}^{n} B_i(A)\ne\emptyset,$$
where $p, q \in (1, \infty)$ with $\frac{1}{p}+\frac{1}{q}=1.$
\end{theorem}
%
\proof Let $\mu$ be a right eigenvalue of $A.$ Then there exists some nonzero vector
$x \in {\mathbb H}^n$ such that $A x = x \mu.$ Let $x:= [x_1, \ldots, x_n]^T\in {\mathbb H}^n$ and choose $x_t$ from $x$ as given in Theorem \ref{g1}.
Consider $\rho:= x_t\, \mu\, x_t^{-1} \in [\mu],$ so that $x_t \mu= \rho x_t$. Then we have
\begin{equation}\label{9}
|\rho-a_{tt}||x_t|= \left|\sum_{j=1,\, j \neq t}^{n} a_{tj} x_j \right|
\le \sum_{j=1,\, j \neq t}^{n} |a_{tj}|\ |x_j|.
\end{equation}
Using the method from the proof of Theorem \ref{g1}, we have
\[
|\rho-a_{tt}|
\le (n-1)^{\frac{1-\gamma}{q}} (n_t^{(p)}(A))^{1-\gamma} r_t(A)^{\gamma}.\,\,\,\blacksquare
\]
Let us relate Theorem \ref{gg2} to some existing results:
\begin{itemize}
\item Substituting $p= q= 2$ and $\gamma= 1$, we obtain
$$ \{ z^{-1} \mu z : 0 \ne z \in {\mathbb H} \} \cap \cup_{i= 1}^{n} \{z\in {\mathbb H} : |z-a_{ii}| \le r_i(A) \} \ne \emptyset. $$
This result can be found in \cite[Theorem~7]{fz07}.
\item Substituting $p= q= 2$ and $\gamma= 0$, we get
\[ \{ z^{-1} \mu z : 0 \ne z \in {\mathbb H} \} \cap \cup_{i= 1}^{n}
\left\{z\in {\mathbb H} : |z- a_{ii}| \le \sqrt{n-1} \ n_i^{(2)}(A) \right\} \ne \emptyset.\]
This result can be found in \cite[Theorem~3.1]{lyj12}.
\end{itemize}
We next present a sufficient condition for the stability of a matrix $A\in M_{n}({\mathbb H}).$
\begin{proposition}
Let $A:= (a_{ij}) \in M_n({{\mathbb H}})$ and let $\gamma \in [0, 1]$. Assume that
\begin{eqnarray}\label{SE1}
\Re(a_{ii})+ (n-1)^{\frac{1- \gamma}{q}}{r_i}(A)^{\gamma} (n_i^{(p)}(A))^{1-\gamma} <0, \quad 1 \leq i \leq n,
\end{eqnarray}
where $\frac{1}{p}+ \frac{1}{q} = 1$ with $ p, q \in (1, \infty).$
Then the matrix $A$ is stable.
\end{proposition}
\proof Let $\lambda \in \Lambda_r(A).$ From Theorem \ref{gg2} there
exists $0 \neq \rho \in {\mathbb H}$ such that $\rho^{-1} \lambda \rho \in \cup _{i=1}^{n}B_i(A).$
Without loss of generality, we assume $\rho^{-1} \lambda \rho \in B_l(A),$ that is,
$$
|\rho^{-1} \lambda \rho-a_{ll}| \leq (n-1)^{\frac{1- \gamma}{q}}{r_l}(A)^{\gamma} (n_l^{(p)}(A))^{1-\gamma}.
$$
Consider $\lambda:=\lambda_1+\lambda_2 {\bf{i}}+\lambda_3{\bf{j}}+\lambda_4{\bf{k}}$ and $a_{ll}=a_l+b_l {\bf{i}}+c_l {\bf{j}}+d_l {\bf{k}}.$ Then
from (\ref{SE1}), we obtain
\begin{eqnarray}\label{SE4}
| (\lambda_1 -a_l) +(\rho^{-1} \lambda_2 {\bf{i}} \rho-b_l{\bf{i}} ) +(\rho^{-1} \lambda_3 {\bf{j}}\rho-c_l{\bf{j}} )
+(\rho^{-1} \lambda_4 {\bf{k}}\rho-d_l{\bf{k}} )| < -\Re(a_{ll})=- a_l .
\end{eqnarray}
Since $\Re(\rho^{-1} q \rho)= \Re(q)$ for every $q \in {\mathbb H}$ and every $0 \neq \rho \in {\mathbb H},$ the real part of the quaternion on the left-hand side of (\ref{SE4}) is $\lambda_1 - a_l$. Hence (\ref{SE4}) forces $\lambda_1 - a_l < -a_l,$ that is, $\Re(\lambda)=\lambda_1 <0,$ so $\lambda \in {\mathbb H}^{-}.$
This shows that the matrix $A$ is stable.$\,\,\,\blacksquare$
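The sufficient condition (\ref{SE1}) is straightforward to test in code. A minimal Python/NumPy sketch, taking as input the matrix of entrywise quaternionic moduli and the real parts of the diagonal entries, is:
\begin{verbatim}
import numpy as np

def stable_by_sufficient_condition(absA, re_diag, gamma=1.0, p=2.0):
    # Checks Re(a_ii) + (n-1)^((1-gamma)/q) * r_i^gamma * (n_i^(p))^(1-gamma) < 0
    # for every i, with 1/p + 1/q = 1.  Returns True only when the sufficient
    # condition holds; False is inconclusive (the matrix may still be stable).
    n = absA.shape[0]
    off = absA - np.diag(np.diag(absA))
    r = off.sum(axis=1)                        # deleted row sums r_i(A)
    n_p = (off**p).sum(axis=1)**(1.0 / p)      # n_i^(p)(A)
    q = p / (p - 1.0)
    bound = (n - 1)**((1 - gamma) / q) * r**gamma * n_p**(1 - gamma)
    return bool(np.all(re_diag + bound < 0))
\end{verbatim}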
When all the diagonal entries of a matrix $A \in M_n({\mathbb H})$ are real, we have the following theorem.
\begin{theorem}\label{t22}
Let $A:= (a_{ij}) \in M_n({{\mathbb H}})$ with $a_{ii}\in {\mathbb R}$ and let $\gamma \in [0, 1]$. Then all the right eigenvalues of $A$
are contained in the union of $n$ generalized balls $$B_i(A):= \left\{z\in {\mathbb H} : |z- a_{ii}| \le (n-1)^{\frac{1- \gamma}{q}}
{r_i}(A)^{\gamma} (n_i^{(p)}(A))^{1-\gamma}\right\},\quad 1 \leq i \leq n,$$
that is,
$$ \Lambda_r(A)\subseteq B(A):= \cup_{i= 1}^{n}B_i(A),$$
where $p, q \in (1, \infty)$ with $\frac{1}{p}+\frac{1}{q}=1.$
\end{theorem}
\proof
Let $\lambda$ be a right eigenvalue of $A.$ Then there exists some nonzero vector $x \in {\mathbb H}^n$ such that $A x =x \lambda$.
Let $x:=[x_1, \ldots, x_n]^T \in {\mathbb H}^n$ and let $x_t$ be an element of $x$ such that
$|x_t|\ge |x_i|, 1\le i \le n$. Then $|x_t|> 0$. Thus from $Ax= x \lambda,$ we have
\begin{eqnarray*}
a_{tt} x_t + \dm{\sum_{j=1, \, j \neq t}^{n} a_{tj} x_j} &=& x_t \lambda,
\end{eqnarray*}
since $a_{tt} \in {\mathbb R},$ so $a_{tt} x_t=x_t a_{tt}.$ Then from the proof method of Theorem \ref{g1}, we have
\[
|\lambda-a_{tt}|
\le (n-1)^{\frac{1-\gamma}{q}} (n_t^{(p)}(A))^{1-\gamma} r_t(A)^{\gamma}.\,\,\,\blacksquare
\]
The above result is of particular interest because Hermitian and $\eta$-Hermitian matrices have all real diagonal entries; $\eta$-Hermitian matrices arise widely in
applications \cite{cdf11, rf12, cd11}. We therefore state the following proposition for matrices
$A\in M_n ({\mathbb H})$ with all real diagonal entries;
it gives a sufficient condition for the stability of such a matrix.
\begin{proposition}\label{st1}
Let $A:= (a_{ij}) \in M_n ({{\mathbb H}})$ with $a_{ii}\in {\mathbb R}$ and let $\gamma \in [0, 1]$. Assume that
$$a_{ii}+ (n-1)^{\frac{1- \gamma}{q}}{r_i}(A)^{\gamma} (n_i^{(p)}(A))^{1-\gamma} < 0, \quad 1 \leq i \leq n,$$
$ \mbox{ where}\,\, p, q \in (1,\,\infty)\,\,
\mbox{with}\,\, \frac{1}{p}+ \frac{1}{q} = 1.$ Then the matrix $A$ is stable.
\end{proposition}
From Theorem \ref{t22}, all the complex right eigenvalues of a matrix
$A=(a_{ij}) \in M_n({\mathbb H})$ with all real diagonal entries lie in the union of $n$-discs
$D_i(A):= \{z \in {\mathbb C}: |z-a_{ii}| \leq (n-1)^{\frac{1- \gamma}{q}}
{r_i}(A)^{\gamma} (n_i^{(p)}(A))^{1-\gamma} \}, 1 \leq i \leq n,$ that is,
\begin{eqnarray}\label{EEE1}
\Lambda_c(A)\subseteq D(A):= \cup_{i= 1}^{n}D_i(A).
\end{eqnarray}
However, if the diagonal entries are from ${\mathbb C} \setminus {\mathbb R},$ then it is not necessary that all the complex right eigenvalues of $A$ are contained
in the union of $n$-discs $D_i(A), 1 \leq i \leq n,$ as the following examples suggest.
\begin{exam}\label{ree2}{\rm Let
$
A:= \bmatrix{1-2{\bf{i}} & {\bf{j}} & {\bf{k}} \\ 0 & -2{\bf{i}} & -{\bf{i}} \\ 0 & {\bf{k}} & 3+{\bf{i}}}.
$
The set of complex right eigenvalues of $A$ is $$\Lambda_c(A):=\{\lambda_1, \lambda_2, \lambda_3,\lambda_4, \lambda_5, \lambda_6 \},$$
where $\lambda_1= -0.0164+2.0083 {\bf{i}},\, \lambda_2= -0.0164-2.0083 {\bf{i}},\, \lambda_3=1+2{\bf{i}},\, \lambda_4= 1-2{\bf{i}},\,
\lambda_5= 3.0164+1.0324 {\bf{i}}$, and $ \lambda_6=3.0164-1.0324 {\bf{i}}.$
For $\gamma= 1$ in (\ref{EEE1}), the discs $D_1(A), D_2(A),$ and $D_3(A)$ are as follows:
$$
D_1(A):= \{z \in {\mathbb C}: |z-1+ 2{\bf{i}}| \leq 2 \}, \,\,\, D_2(A):= \{z \in {\mathbb C}: |z+2{\bf{i}}| \leq 1 \},\,\,\mbox{and}$$
$$D_3(A):=\{z \in {\mathbb C}: |z-3-{\bf{i}}| \leq 1 \}.$$
From Figure \ref{fig2}, it is clear that
$\lambda_1, \lambda_3,$ and $\lambda_6$ lie outside the discs $D_1(A), D_2(A),$ and
$D_3(A).$
\begin{figure}[h!]
\begin{center}
\includegraphics[height=7cm,width=7cm]{F6.eps}
\caption{Location of the complex right eigenvalues of $A$ from Example \ref{ree2}.}\label{fig2}
\end{center}
\end{figure}
}
\end{exam}
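A quick numerical check of Example \ref{ree2} (reusing the Python/NumPy helpers from Section \ref{s2}, with $A$ written as $A_1+A_2\,{\bf{j}}$) should reproduce, up to rounding, the eigenvalues listed above and confirm that three of them escape all three discs:
\begin{verbatim}
import numpy as np

A1 = np.array([[1-2j, 0,   0   ],       # A = A1 + A2*j for the matrix above
               [0,   -2j, -1j  ],
               [0,    0,   3+1j]])
A2 = np.array([[0, 1,  1j],
               [0, 0,  0 ],
               [0, 1j, 0 ]])

lam = complex_right_eigenvalues(A1, A2)     # helper sketched in Section 2
centers = np.array([1-2j, -2j, 3+1j])
radii = np.array([2.0, 1.0, 1.0])           # gamma = 1 discs D_1, D_2, D_3

outside = [l for l in lam if np.all(np.abs(l - centers) > radii)]
print(np.round(lam, 4))
print(len(outside), "eigenvalues lie outside all three discs")   # expected: 3
\end{verbatim}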
\begin{exam}\label{ree3}{\rm Let
$
A=\bmatrix{-4 & 1+{\bf{j}}+\sqrt{2} {\bf{k}} & {\bf{j}}\\ {\bf{i+j}} & -10 & 2{\bf{j}}-{\bf{k}} \\ {\bf{i}}-2{\bf{j}}+2{\bf{k}} &
\sqrt{3}+2{\bf{j}}-3{\bf{k}} & -8}.
$
In this example, there are six complex right eigenvalues $\lambda_j\,\,( 1 \leq j \leq 6)$ which are
shown in Figure \ref{fig3}. Substituting $\gamma=1$ in (\ref{EEE1}), we find that all the complex right
eigenvalues of the matrix $A$ are contained in the union of three discs $D_1(A), D_2(A),$
and $D_3(A),$ where
$$
D_1(A):=\{z \in {\mathbb C}: |z+4| \leq 3 \}, \,\,\, D_2(A):=\{z \in {\mathbb C}: |z+10| \leq \sqrt 2 +\sqrt 5\},\,\,\mbox{and}$$
$$D_3(A):=\{z \in {\mathbb C}: |z+8| \leq 7 \}.$$
From Figure \ref{fig3}, the standard right eigenvalues of $A$ are $\lambda_1$, $\lambda_3$, and $\lambda_5$. Then
\[
\Lambda_r(A)=[\lambda_1]\cup[\lambda_3]\cup[\lambda_5].
\]
Also, from Figure \ref{fig3}, we observe that $\Re(\lambda_i) < 0\,\, (i=1,3,5).$ Since
\[
\Re(\lambda_i)=\Re(\rho^{-1} \lambda_i \rho) \,\,\,\, \forall \,\, 0 \neq \rho \in {\mathbb H}, \quad i=1,3,5,
\]
every right eigenvalue of $A$ has negative real part, that is, $\Lambda_r(A) \subset {\mathbb H}^{-}.$
Thus the matrix $A$ is stable.
\begin{figure}[h!]
\begin{center}
\includegraphics[height=7cm,width=7cm]{F1.eps}
\caption{Location of the complex right eigenvalues of $A$ from Example \ref{ree3}.}\label{fig3}
\end{center}
\end{figure}
}
\end{exam}
In general, similar quaternionic matrices may not have the same left eigenvalues; see \cite[Example 3.3]{fz07}. However, the following result is true.
\begin{proposition}\label{sml}
Let $A \in M_n ({\mathbb H})$ and let $W$ be any invertible real matrix. Then $A$ and $WAW^{-1}$ have the same left eigenvalues.
\end{proposition}
\proof Let $\lambda$ be a left eigenvalue of $A.$ Then there exists some nonzero vector $x\in {\mathbb H}^n$
such that $Ax= \lambda x.$ Let $W$ be an invertible real matrix. Then
\[
WAx= W\lambda x= \lambda Wx.
\]
Now, $WAW^{-1}\, Wx=\lambda Wx.$ Setting $Wx= y\,(\neq 0,$ since $W$ is invertible$)$ gives $W A W^{-1} y=\lambda y,$ so $\lambda$ is a left eigenvalue of $WAW^{-1}$. The reverse inclusion follows by applying the same argument with $W^{-1}$ in place of $W.\,\,\blacksquare$
Let $A:=(a_{ij}) \in M_n({\mathbb H}).$ Suppose $W=\mathrm{diag}(w_1, w_2, \ldots, w_n)$ with $w_i \in {\mathbb R}^+, 1 \leq i \leq n.$ Then
\[
W^{-1} A W=\left(\frac{a_{ij} w_j}{w_i}\right)\,\,\mbox{and}\,\, \Lambda_l(A)= \Lambda_l(W^{-1} A W).
\]
Define
$$ r_i^W(A):= \sum_{j=1,\, j\neq i}^{n} \frac{|a_{ij}| w_j}{w_i}\,\,\,\mbox{and}\,\,\,
\dm{ c_i^W(A):= \sum_{j=1,\, j\neq i}^{n} \frac{|a_{ji}| w_i}{w_j}},\quad 1 \leq i \leq n.$$
Applying Theorem \ref{os1} to $W^{-1} A W$, we get the following theorem which may be sharper than Theorem \ref{os1} depending
upon the choice of $W$.
\begin{theorem}\label{osts}
Let $A:= (a_{ij}) \in M_n({\mathbb H})$ and let $\gamma \in [0, 1]$. Then all the left eigenvalues of $A$ are contained in the union of $n$
balls
$$T_i^W(A):= \{z \in {\mathbb H} : |z-a_{ii}| \leq (r_i^W(A))^{\gamma}\, (c_i^W(A))^{1-\gamma}\},\quad 1 \leq i \leq n,$$
that is,
\[
\Lambda_l(A)= \Lambda_l(W^{-1} A W) \subseteq T^W(A):=\cup_{i=1}^{n} T_i^W(A).
\]
\end{theorem}
Since the above theorem holds for every $W=\mathrm{diag}(w_1, w_2, \ldots, w_n)$, where $w_i \in {\mathbb R}^+,$ we have
\[
\Lambda_l(A)=\Lambda_l(W^{-1} A W) \subseteq \underset{\substack{W \in M_n(S)}}{\cap} T^W(A)=: T^S(A),
\]
where $M_n(S)$ is the set of real diagonal matrices with positive diagonal entries. $T^S(A)$ is called the
minimal Ostrowski type set for the matrix $A$.
Substituting $\gamma= 1$ in Theorem \ref{osts}, we obtain
\begin{equation}\label{neqn1}
\Lambda_l(A)=\Lambda_l(W^{-1} A W) \subseteq \eta^W (A):=\cup_{i=1}^{n} \eta_i^W(A),
\end{equation}
where $ \eta_i^W(A):=\left\{z \in {\mathbb H} : |z-a_{ii}| \leq r_i^W(A) \right\}.$
Therefore,
\[
\Lambda_l(A)=\Lambda_l(W^{-1} A W) \subseteq \underset{\substack{W \in M_n(S)}}{\cap} \eta^W(A)=: \eta^S(A),
\]
where $\eta^S(A)$ is called the first minimal Gerschgorin type set for the matrix $A$.
For $\gamma= 0$ in Theorem \ref{osts}, we have
\begin{eqnarray}\label{neqn2}
\Lambda_l(A)=\Lambda_l(W^{-1} A W) \subseteq \Omega^W(A):=\cup_{i=1}^{n} \Omega_i^W(A),
\end{eqnarray}
where
$
\Omega_i^W(A):=\left\{z \in {\mathbb H} : |z-a_{ii}| \leq c_i^W(A) \right\}.
$
Then
\[
\Lambda_l(A)=\Lambda_l(W^{-1} A W) \subseteq \underset{\substack{W \in M_n(S)}}{\cap} \Omega^W(A)=: \Omega^S(A),
\]
where $\Omega^S(A)$ is called the second minimal Gerschgorin type set for the matrix $A$.
Similarly, applying Theorem \ref{rc} to $ W^{-1} A W,$ we get the following theorem:
\begin{theorem}\label{octs}
Let $A:=(a_{ij}) \in M_n ({\mathbb H})$ and let $\gamma \in [0, 1]$. Then all the left eigenvalues of $A$ are contained in the union of $\frac{n(n-1)}{2}$
ovals of Cassini
$$K_{ij}^W(A):=\{z \in {\mathbb H} : |z-a_{ii}|\ |z-a_{jj}|\leq \left(r_i^W(A) \right)^{\gamma} (r_j^W(A))^{\gamma}
(c_i^W(A))^{1-\gamma} (c_j^W(A))^{1-\gamma} \},\quad 1 \leq i, j \leq n,\quad i \neq j,$$
that is,
\[
\Lambda_l(A)=\Lambda_l(W^{-1} A W) \subseteq K^W(A):=\cup_{\substack{i, j=1\\ i \neq j}}^{n} K_{ij}^W (A).
\]
\end{theorem}
Since Theorem \ref{octs} holds for every $W= \mathrm{diag}(w_1, w_2, \ldots, w_n)$ with $w_i \in {\mathbb R}^+,$ we have
\[
\Lambda_l(A)= \Lambda_l(W^{-1} A W) \subseteq \underset{\substack{W \in M_n(S)}}{\cap} K^W(A)=: K^S(A).
\]
$K^S(A)$ is called the minimal Brauer type set for the matrix $A$.
\begin{exam}\label{se}{\rm
Let
$
A=\bmatrix{ {\bf{j}} & {\bf{k}} & {\bf{2j+\sqrt{5}k}} \\0 & {\bf{i+k}} & {\bf{\sqrt{2}i+j-k}} \\ 0 & 0 & {\bf{2-i}} }.
$
Let $\gamma= 1$ in Theorem \ref{os1}. Then, we have
the three Gerschgorin type balls $G_1(A):=\{ z \in {\mathbb H} : |z-{\bf{j}}| \leq 4 \}, G_2(A):=\{ z \in {\mathbb H} : |z-{\bf{i-k}}| \leq 2\},$ and
$ G_3(A):=\{ z \in {\mathbb H} : |z-{\bf{2+i}}| \leq 0\}.$
Let $W=\mathrm{diag}(w_1, w_2, w_3)$ with $w_1=8,\ w_2=4,\ w_3=1.$ Then by (\ref{neqn1}),
$$\eta_1^W(A):= \{ z \in {\mathbb H} : |z-{\bf{j}}| \leq 7/8\},\,\,\eta_2^W(A):= \{ z \in {\mathbb H} : |z-{\bf{i-k}}| \leq 1/2\},\,\,\mbox{and}$$
$$\eta_3^W(A):= \{ z \in {\mathbb H} : |z-{\bf{2+i}}| \leq 0\}.$$
Hence it is clear that $\eta_1^W(A) \subset G_1(A)$ and $\eta_2^W(A) \subset G_2(A).$
For $\gamma=1$, Theorem \ref{rc} gives
the following ovals of Cassini:
\[
K_{12}(A):= \{ z \in {\mathbb H} : |z-{\bf{j}}|\, |z-{\bf{i-k}}| \leq 8 \},\
K_{23}(A):= \{ z \in {\mathbb H} : |z-{\bf{i-k}}|\, |z-{\bf{2+i}}| \leq 0 \},\,\mbox{and}
\]
\[
K_{31}(A):=\{ z \in {\mathbb H} : |z-{\bf{2+i}}| \, |z-{\bf{j}}| \leq 0 \}.
\]
Consider $W=\mathrm{diag}(w_1, w_2, w_3)$ with $w_1=w_2=6,$ and $\,w_3= 1.$ Then by Theorem \ref{octs} with $\gamma= 1,$ we obtain
\[
K_{12}^W(A):= \{ z \in {\mathbb H} : |z-{\bf{j}}|\, |z-{\bf{i-k}}| \leq 1/2 \},\,\,
K_{23}^W(A):= \{ z \in {\mathbb H} : |z-{\bf{i-k}}|\, |z-{\bf{2+i}}| \leq 0 \},\,\,\mbox{and}
\]
\[
K_{31}^W(A):= \{ z \in {\mathbb H} : |z-{\bf{2+ i}}| \, |z-{\bf{j}}| \leq 0 \}.
\]
Hence $K_{12}^W(A) \subset K_{12}(A).\,\,\,\blacksquare$
}
\end{exam}
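The radii in Example \ref{se} are easy to recompute. The following Python sketch (an illustration added here, not part of the original computation) stores quaternions as coefficient $4$-tuples, takes $|q|$ to be the Euclidean norm, and recovers the Gerschgorin type radii, the weighted radii $r_i^W(A)$, and the Cassini radii quoted above.
\begin{verbatim}
import numpy as np

def qabs(q):
    # modulus of a quaternion stored as (a, b, c, d) ~ a + b i + c j + d k
    return np.sqrt(sum(t * t for t in q))

# entries of the upper triangular matrix A of Example se
A = [[(0,0,1,0), (0,0,0,1), (0,0,2,np.sqrt(5))],
     [(0,0,0,0), (0,1,0,1), (0,np.sqrt(2),1,-1)],
     [(0,0,0,0), (0,0,0,0), (2,-1,0,0)]]
n = 3
absA = np.array([[qabs(A[i][j]) for j in range(n)] for i in range(n)])

def deleted_row_sums(M, w):
    # r_i^W(A) = sum_{j != i} |a_ij| w_j / w_i ; w = (1,1,1) gives the unweighted r_i(A)
    return np.array([sum(M[i, j] * w[j] / w[i] for j in range(n) if j != i)
                     for i in range(n)])

r  = deleted_row_sums(absA, [1, 1, 1])   # Gerschgorin type radii r_i(A): 4, 2, 0
rW = deleted_row_sums(absA, [8, 4, 1])   # weighted radii r_i^W(A): 7/8, 1/2, 0
print(r, rW)

# Brauer type (Cassini) radii r_i r_j, unweighted and with W = diag(6, 6, 1)
rV = deleted_row_sums(absA, [6, 6, 1])
print(r[0]*r[1], r[1]*r[2], r[2]*r[0])        # 8, 0, 0
print(rV[0]*rV[1], rV[1]*rV[2], rV[2]*rV[0])  # 1/2, 0, 0
\end{verbatim}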
\section{Bounds for the zeros of quaternionic polynomials}\label{s4}
In this section, we derive bounds for the zeros of quaternionic polynomials by applying the
localization theorems for the left eigenvalues of a quaternionic matrix. Owing to the noncommutativity of quaternions, we first fix some
basic conventions for the multiplication
of quaternions.
For $p, q \in {\mathbb H}$, define $p \times q:= pq.$ For $0 \neq p\in {\mathbb H}$ and $q \in {\mathbb H}$, define
\[
\frac{1}{p} \times q:= p^{-1}\times q:= p^{-1}q,\,q \times \frac{1}{p}:= q \times p^{-1}:= qp^{-1}.
\]
Recall the quaternionic polynomials $p_l(z)$ and $p_r(z)$ from (\ref{l}) and (\ref{lk}).
Then the corresponding companion matrices of the simple monic polynomials $p_l(z)$ and $p_r(z)$ are given by
\[
C_{p_l}:= \bmatrix{0 & \vrule & 1 & & 0\\ \vdots & \vrule & & \ddots &\\0 & \vrule & 0 & &1\\
\cline{1-5}
-q_0 & \vrule & -q_1 &\ldots & -q_{m-1}}:= \kbordermatrix{ &1& & m-1\\
m-1& 0 & \vrule & I \\
\cline{2-4}
1& C_{p_l}(m, 1) & \vrule & C_{p_l}(m, 2:m)
}\, \mbox{and}\, C_{p_r}:=C_{p_l}^T,
\]
respectively.
Let $q_0 \neq 0$, and define simple monic reversal polynomials of $p_l(z)$ and $p_r(z)$ as follows:
\[q_l(z):= \frac{1}{q_0} \times p_l\left(\frac{1}{z}\right) \times z^m= z^m+ q_0^{-1} q_1 z^{m-1}
+\dots+ q_0^{-1}q_{m-1}z+ q_0^{-1},\]
\[ q_r(z):= z^m \times p_r\left(\frac{1}{z}\right) \times \frac{1}{q_0} = z^m+ z^{m-1}q_1 q_0^{-1}+
\dots+zq_{m-1}q_0^{-1} +q_0^{-1},
\]
respectively.
The corresponding companion matrices of the simple monic reversal polynomials $q_l(z)$ and $q_r(z)$ are denoted by $C_{q_l}$ and
$C_{q_r}$, respectively.
We observe that the zeros of $q_l(z)$ and $q_r(z)$ are the reciprocals of the zeros of $p_l(z)$ and $p_r(z),$
respectively.
Now, we need the following result:
\begin{proposition}\cite[Proposition 1]{rp01}.\label{lr2}
Let $\lambda \in {\mathbb H}.$ Then $\lambda$ is a zero of the simple monic polynomial $p_l(z)$ if and only if $\lambda$ is a left eigenvalue of its
corresponding companion matrix $C_{p_l}$.
\end{proposition}
In general, a right eigenvalue of $C_{p_l}$ is not necessarily a zero of the simple monic polynomial $p_l(z)$.
For example, consider the simple monic polynomial $p_l(z)= z^2+ {\bf{j}}z+ 2.$ Then its companion matrix is given by
$$C_{p_l}=\bmatrix{0 & 1\\-2 &-{\bf{j}}}.$$ Here ${\bf{i}}$ is a right eigenvalue of $C_{p_l}.$ However, ${\bf{i}}$ is not a
zero of $p_l(z).$
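This counterexample can be checked mechanically. In the Python sketch below (not part of the original text), quaternions are coefficient $4$-tuples under the Hamilton product; the right eigenvector $x=(1+{\bf{k}},\,{\bf{i}}+{\bf{j}})^T$ used in the check is our own choice and is not given in the text.
\begin{verbatim}
import numpy as np

def qmult(p, q):
    # Hamilton product; quaternions stored as (a, b, c, d) ~ a + b i + c j + d k
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

I, J, TWO = (0,1,0,0), (0,0,1,0), (2,0,0,0)

# p_l(z) = z^2 + j z + 2 evaluated at z = i (coefficients act on the left)
z  = I
pz = tuple(a + b + c for a, b, c in zip(qmult(z, z), qmult(J, z), TWO))
print(pz)   # (1, 0, 0, -1), i.e. 1 - k: i is NOT a zero of p_l

# companion matrix C = [[0, 1], [-2, -j]] and the candidate right eigenpair (x, i)
C = [[(0,0,0,0), (1,0,0,0)], [(-2,0,0,0), (0,0,-1,0)]]
x = [(1,0,0,1), (0,1,1,0)]   # x = (1 + k, i + j)^T, a choice of right eigenvector

Cx    = [tuple(sum(t) for t in zip(*[qmult(C[i][j], x[j]) for j in range(2)]))
         for i in range(2)]
x_lam = [qmult(xi, I) for xi in x]
print(np.allclose(np.array(Cx), np.array(x_lam)))   # True: C x = x i, a right eigenvalue
\end{verbatim}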
Analogous to Proposition \ref{lr2}, the following result is presented for $p_r(z)$.
\begin{proposition}\label{lr22}
Let $\lambda \in {\mathbb H}.$ Then $\lambda$ is a zero of the simple monic polynomial $p_r(z)$ if and only if $\lambda$ is a left eigenvalue of its
corresponding companion matrix $C_{p_r}.$
\end{proposition}
We now present bounds for the zeros of $p_l (z)$ as follows.
\begin{theorem}\label{u1}
Let $p_l(z)$ be a simple monic polynomial over ${\mathbb H}$ of degree $m.$ Then every zero $\tilde{z}$ of $p_l(z)$
satisfies the following inequality:
$$\left(\dm{\max_{1 \leq i \leq m}}\left( r_i'(C_{q_l})^{\gamma}\, c_i'(C_{q_l})^{1-\gamma}\right)\right)^{-1} \leq |\tilde{z}|
\leq \dm{\max_{1 \leq i \leq m}}\left( r_i'(C_{p_l})^{\gamma}\, c_i'(C_{p_l})^{1-\gamma}\right),
$$
for every $\gamma \in [0, 1].$
\end{theorem}
\proof
From Proposition \ref{lr2}, the zeros of $p_l(z)$ and the left eigenvalues of
$C_{p_l}$ coincide.
Thus, if
$\tilde{z}$ is a zero of $p_l(z),$ then $\tilde{z}$ is a left eigenvalue of $C_{p_l}.$
By applying Theorem \ref{os1} (Ostrowski type theorem) to $C_{p_l}$, we obtain
\[
|\tilde{z}|\leq \dm{\max_{1 \leq i \leq m}}\left( r_i'(C_{p_l})^{\gamma}\, c_i'(C_{p_l})^{1-\gamma}\right).
\]
The lower bound follows by applying the same upper bound to the reversal polynomial $q_l(z)$, whose zeros are the
reciprocals of the zeros of $p_l(z)$. $\,\,\, \blacksquare$
\begin{corollary}\label{co}
Let $p_l(z)$ be a simple monic polynomial over ${\mathbb H}$ of degree $m.$ Then every zero $\tilde{z}$ of $p_l(z)$
satisfies the following inequalities:
\begin{enumerate}
\item
$\dm{\frac{|q_0|}{\dm{\max_{1 \leq i \leq (m-1)}}\left\{1, |q_0|+ |q_i|\right\}}} \leq |\tilde{z}|
\leq \dm{\max_{1 \leq i \leq (m-1)}} \left\{|q_0|, 1+|q_i|\right\}.$
\item
$ \dm{\frac{|q_0|}{\max\left\{|q_0|,1+\sum_{i=1}^{m-1} |q_i|\right\}}}\leq |\tilde{z}| \leq
\max\left\{1, \sum_{i=0}^{m-1} |q_i|\right\}. $
\end{enumerate}
\end{corollary}
\proof Substituting $\gamma=0, 1$ in Theorem \ref{u1}, we obtain the desired results.$\,\,\,\blacksquare$
Next, we prove the following lemma, which shows that the bound $\alpha$ is at least as sharp as Opfer's bound \cite[Theorem 4.2]{go09} when $|q_0|\geq 1$.
\begin{lemma}\label{l1}
Assume that $|q_0| \geq 1.$ Then $
\alpha \leq \mathcal{T},$
where $\alpha:=\dm{\max_{1 \leq i \leq m-1}} \left\{|q_0|, 1+|q_i|\right\}\, \mbox{and}\,\,
\mathcal{T}:=\max\left\{1, \sum_{i=0}^{m-1} |q_i|\right\}.$
\end{lemma}
\proof {\bf{Case 1:}} If $|q_0|=1,$ then
\[
\alpha=\dm{\max_{1 \leq i \leq m-1}} \left\{|q_0|, 1+|q_i|\right\}
=\dm{\max_{1 \leq i \leq m-1}} \left\{ 1+|q_i|\right\},
\]
while
$\mathcal{T}=\max\left\{1, \sum_{i=0}^{m-1} |q_i|\right\}=\max\left\{1, |q_0|+\sum_{i=1}^{m-1} |q_i|\right\}=
1+\sum_{i=1}^{m-1} |q_i|,
$
so $\alpha \leq \mathcal{T}$ in this case.
{\bf{Case 2:}} If $|q_0| >1,$ then
\[
\alpha=\dm{\max_{1 \leq i \leq (m-1)}} \left\{|q_0|, 1+|q_i|\right\}= |q_0| \,\,\mbox{or}\,\, \max_{1 \leq i \leq (m-1)}\left\{
1+|q_i|\right\}\, \mbox{and}
\]
$\mathcal{T}:=\max \{1, \sum_{i=0}^{m-1} |q_i| \}=\max\left\{1, |q_0|+ \sum_{i=1}^{m-1} |q_i|\right\}=
|q_0|+\sum_{i=1}^{m-1} |q_i|.$
Thus $\alpha \leq \mathcal{T}.$ This completes the proof. $\,\,\,\blacksquare$
On the other hand, if $|q_0|<1,$ then either $\alpha \leq \mathcal{T}$ or $\alpha > \mathcal{T}$ may occur. For example, for the simple
monic polynomial
$
p'_l(z):= z^3+({\bf{i}}+2{\bf{j}}+2{\bf{k}})z^2-2{\bf{k}}z+0.5 {\bf{k}},
$
we have $\alpha=4$ and $\mathcal{T}=5.5.$ Hence $\alpha < \mathcal{T}$. Further, if we consider
$
p''_l(z)= z^3+0.5{\bf{j}}z^2+(0.2{\bf{i}}+0.3{\bf{j}})z+0.5 {\bf{i}},
$
then $\alpha= 1.5$ and $\mathcal{T}= 1.36.$ Hence $\alpha > \mathcal{T}.$
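These two numerical illustrations are easy to re-check. The following Python sketch (not part of the original text) computes $\alpha$ and $\mathcal{T}$ directly from the coefficient moduli.
\begin{verbatim}
import numpy as np

def qabs(q):
    # modulus of a quaternion stored as (a, b, c, d) ~ a + b i + c j + d k
    return np.sqrt(sum(t * t for t in q))

def alpha_and_T(coeffs):
    # coeffs = [q_0, q_1, ..., q_{m-1}] of a simple monic polynomial
    mods  = [qabs(q) for q in coeffs]
    alpha = max([mods[0]] + [1 + s for s in mods[1:]])   # max{|q_0|, 1+|q_i|}
    T     = max(1.0, sum(mods))                          # max{1, sum |q_i|}
    return alpha, T

# p'_l(z)  = z^3 + (i+2j+2k) z^2 - 2k z + 0.5 k
print(alpha_and_T([(0,0,0,0.5), (0,0,0,-2), (0,1,2,2)]))      # (4.0, 5.5)

# p''_l(z) = z^3 + 0.5 j z^2 + (0.2 i + 0.3 j) z + 0.5 i
print(alpha_and_T([(0,0.5,0,0), (0,0.2,0.3,0), (0,0,0.5,0)])) # (1.5, about 1.36)
\end{verbatim}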
Next, by applying Theorem \ref{os1} to $WC_{p_l}W^{-1}$ and $WC_{q_l}W^{-1}$ ($W$ is an invertible real diagonal matrix), we
obtain different and potentially sharper bounds.
\begin{theorem}\label{sm}
Let $w_i \in {\mathbb R}^+$, $1\le i \le m.$ Then every zero $\tilde{z}$ of the simple monic polynomial $p_l(z)$ satisfies
the following inequality:
$$ \left[\dm{\max_{1 \leq i \leq m}}\left\{ r_i'(W C_{q_l}W^{-1})^{\gamma}\,
c_i'(WC_{q_l}W^{-1})^{1-\gamma}\right\}\right]^{-1} \leq |\tilde{z}|
\leq \dm{\max_{1 \leq i \leq m}}\left\{ r_i'(WC_{p_l}W^{-1})^{\gamma}\, c_i'(WC_{p_l}W^{-1})^{1-\gamma}\right\},$$
where $W:=\mathrm{diag}(w_1, w_2, \ldots, w_m)$ and $\gamma \in [0, 1].$
\end{theorem}
\proof The companion matrix of $p_l(z)$ is given by
\[C_{p_l}=\kbordermatrix{ & 1 & & m-1\\
m-1& 0 & \vrule & I \\
\cline{2-4}
1&-q_0 & \vrule & [-q_1 \ldots -q_{m-1}]
}.\]
Then
\[
W C_{p_l} W^{-1}=\kbordermatrix{ &1& & m-1\\
m-1& 0 &\vrule & \mathrm{diag}\left(\frac{w_1}{w_2}, \ldots, \frac{w_{m-1}}{w_m} \right) \\
\cline{2-4}
1&- \frac{w_m }{w_1} q_0 & \vrule & -\frac{w_m }{w_2} q_1 \ldots -q_{m-1}
}.
\]
By Proposition \ref{sml}, $C_{p_l}$ and $W C_{p_l} W^{-1}$
have the same left eigenvalues. The rest of the proof follows the method of proof
of Theorem \ref{u1}.$\,\,\,\blacksquare$
\begin{corollary}\label{cs}
Let $p_l(z)$ be a simple monic polynomial over ${\mathbb H}$ of degree $m.$ Then every zero $\tilde{z}$ of $p_l(z)$
satisfies the following inequalities:
\begin{enumerate}
\item $
\left[\dm{\max_{0\le j \le m-1}\left\{\frac{(|q_0|w_j+ w_m|q_{m-j}|)}{|q_0|w_{j+1}}\right\}} \right]^{-1}\le
\dm{|\tilde z| \le \dm{\max_{0\le j \le m-1}} \left\{\frac{w_j+ w_m|q_j|}{w_{j+1}}\right\}},\,\mbox{ where}\,\,\, w_0= 0.
$
\item $ \left[\dm{\max_{1 \leq j\leq m-1}\left\{\frac{w_j}{w_{j+1}},\sum_{i=0}^{m-1}\frac{w_m|q_i|}{|q_0|w_{i+1}}\right\}}\right]^{-1}\leq
|\tilde{z}| \leq \dm{\max_{1 \leq j\leq m-1}\left\{\frac{w_j}{w_{j+1}},\sum_{i=0}^{m-1}\frac{w_m|q_i|}{w_{i+1}}\right\}}.$
\end{enumerate}
\end{corollary}
\proof Substituting $\gamma= 0, 1$ in Theorem \ref{sm}, we get the desired results.$\,\,\,\blacksquare$
Let $w_j= w_m |q_j|,\ 1 \leq j \le m-1,$ in part (1) of Corollary \ref{cs}. Then we obtain
\[
|\tilde z| \le \max_{1\le j \le m-1} \left\{\left|\frac{q_0}{q_1}\right|, 2 \left| \frac{q_j}{q_{j+1}} \right| \right\}.
\]
This is called the Kojima type bound for the zeros of the simple monic polynomial $p_l(z).$
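A minimal sketch of this bound in Python follows (the helper name is ours; the formula assumes $q_m=1$, requires all $q_j$ to be nonzero, and uses $|q_j\, q_{j+1}^{-1}|=|q_j|/|q_{j+1}|$).
\begin{verbatim}
import numpy as np

def qabs(q):
    return np.sqrt(sum(t * t for t in q))

def kojima_bound(coeffs):
    # coeffs = [q_0, q_1, ..., q_{m-1}]; the leading coefficient q_m = 1 is implicit
    mods = [qabs(q) for q in coeffs] + [1.0]
    terms = [mods[0] / mods[1]]                                # |q_0| / |q_1|
    terms += [2 * mods[j] / mods[j + 1] for j in range(1, len(coeffs))]
    return max(terms)

# e.g. p'_l(z) = z^3 + (i+2j+2k) z^2 - 2k z + 0.5 k from the previous discussion
print(kojima_bound([(0,0,0,0.5), (0,0,0,-2), (0,1,2,2)]))      # 6.0
\end{verbatim}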
%
For computation of bounds of the zeros of $p_r(z)$, we define the following polynomial:
\[\tilde{p_l}(z):= \overline{p_r(\overline{z})}:= \sum_{j=0}^{m} \overline{q_j} z^j.\]
Now, we discuss the following theorem, which shows the relation between the zeros of $p_r(z)$ and $\tilde{p_l}(z).$
\begin{theorem}\label{conj} Let $\lambda \in {\mathbb H}.$ Then $\lambda$ is a zero of the simple monic
polynomial $p_r(z)$ if and only if $\overline{\lambda}$ is a zero of the simple monic polynomial
$\tilde{p_l}(z).$
\end{theorem}
\proof The corresponding companion matrices of $p_r(z)$ and $\tilde{p_l}(z)$
are given by
$$
C_{p_r}:=C_{p_l}^T\,\,\mbox{and}\,\, C_{\tilde{p_l}}:= C^H_{p_r},$$
respectively. By Lemma \ref{prop4}, if $\lambda$ is a left eigenvalue of $C_{p_r}$, then $\overline{\lambda}$ is a left eigenvalue of
$C^H_{p_r}=C_{\tilde{p_l}}.$ By Propositions \ref{lr2} and \ref{lr22}, the left eigenvalues of $C_{p_r}$ and $C_{\tilde{p_l}}$ are precisely the zeros of
$p_{r}(z)$ and $\tilde p_{ l}(z),$ respectively. Hence if $\lambda$ is a zero of
$p_r(z),$ then $\overline{\lambda}$ is a zero of
${\tilde{p_l}} (z).$ The converse follows by applying the same argument to $\tilde{p_l}(z)$, since $\overline{\overline{\lambda}}=\lambda.\,\,\,\blacksquare$
\begin{remark}
Similar results can be obtained for the quaternionic polynomial $p_r(z)$ as well.
\end{remark}
\section{Bounds for the zeros of quaternionic polynomials by using the powers of companion matrices}\label{s6}
First, we present some preliminary
results for the powers of companion matrices $C_{p_l}$ and $C_{p_r}.$ In general, if $\lambda$ is a left eigenvalue of
a quaternionic matrix $A,$ then $\lambda^2$ is not necessarily a
left eigenvalue of $A^2$. For example, for a quaternionic matrix $A= \bmatrix{0 & {\bf{i}} \\ -{\bf{i}} & 0},$ we have
$
\Lambda_l(A):=\left\{ \mu: \mu= \alpha+\beta {\bf{j}}+ \gamma{\bf{k}}, \alpha^2+\beta^2+ \gamma^2= 1\right\}$ and
$A^2= \bmatrix{1 & 0 \\ 0 & 1}.$ So $\Lambda_l(A^2):= \{1\}.$ Here ${\bf{j}}$ is a left eigenvalue of $A$ but ${\bf{j}}^2$ is not
a left eigenvalue of $A^2.$
Now we prove the following result for left eigenvalues of $C_{p_l}$ and $C^t_{p_l}$ ($t$ is a nonzero integer).
\begin{proposition}\label{p2}
If $\lambda$ is a left eigenvalue of $C_{p_l}$ with respect to the eigenvector $x \in {\mathbb H}^{m}$, then
$\lambda^{t}$ $(t\,\, \mbox{is a nonzero integer})$ is a left eigenvalue of $C_{p_l}^{t}$
corresponding to the same eigenvector $x \in {\mathbb H}^{m}$.
\end{proposition}
\proof {\bf Case (a)}: Let $t$ be a positive integer and let $\lambda$ be a left eigenvalue of
$C_{p_l}$. Then, there
exists $ 0 \neq x:=\left[1, \lambda, \lambda^2, \ldots, \lambda^{m-1}\right]^T \in {\mathbb H}^m$ such that $C_{p_l} x=\lambda x.$ Since every component of $x$ is a power of $\lambda$, we have $\lambda x = x \lambda.$ Therefore,
\begin{eqnarray*}
C^2_{p_l} x&=&C_{p_l}(C_{p_l} x)=C_{p_l} x \lambda=x\lambda^2\\
\vdots\\
C^t_{p_l} x &=& C^{t-1}_{p_l} (C_{p_l}x)=C^{t-1}_{p_l} x \lambda=\dots=x\lambda^t = \lambda^t x.
\end{eqnarray*}
Thus, $\lambda^t$ is a left eigenvalue of the matrix $C^t_{p_l}$ corresponding to the same eigenvector $x \in {\mathbb H}^{m}.$
{\bf Case (b)}: Let $t$ be a negative integer. From {\bf Case (a)}, we have
$C_{p_l} x=x \lambda $. This implies $C_{p_l}^{-1} x=x \lambda^{-1}$.
Therefore,
\begin{eqnarray*}
C^{-2}_{p_l} x&=&C_{p_l}^{-1}(C_{p_l}^{-1} x)=C_{p_l}^{-1} x \lambda^{-1}=x\lambda^{-2}\\
\vdots\\
C^{t}_{p_l} x &=& C^{(t+1)}_{p_l} (C_{p_l}^{-1}x)=C^{(t+1)}_{p_l} x \lambda^{-1}=\dots=x\lambda^{t} = \lambda^{t} x.
\end{eqnarray*}
Thus, $\lambda^{t}$ is a left eigenvalue of $C^{t}_{p_l}$ with respect to the same eigenvector $x \in {\mathbb H}^{m}.\,\,\, \blacksquare$
Next, we state the following result for left eigenvalues of $C_{p_r}$ and $C^t_{p_r}$ ($t$ is a nonzero integer).
\begin{proposition}\label{p3}
If $\lambda$ is a left eigenvalue of $C_{p_r}$ with respect to the eigenvector $x \in {\mathbb H}^{m}$, then
$\lambda^{t}$ $(t\,\, \mbox{is a nonzero integer})$ is a left eigenvalue of $C_{p_r}^{t}$
corresponding to the same eigenvector $x \in {\mathbb H}^{m}$.
\end{proposition}
\proof {\bf Case (a)}: Let $t$ be a positive integer and let $\lambda$ be a left eigenvalue of
$C_{p_r}.$ Now from Lemma \ref{prop4}, $\overline{\lambda}$ is a
left eigenvalue
of $C^H_{p_r}.$ Then there exists $0 \neq x:= \left[1, \overline{\lambda}, (\overline{\lambda})^2, \ldots, (\overline{\lambda})^{m-1}\right]^T
\in {\mathbb H}^m$
such that $C^H_{p_r} x= \overline{\lambda} x= x \overline{\lambda}$. This gives
\begin{eqnarray*}
\left(C^H_{p_r}\right)^2 x &=& C^H_{p_r}(C_{p_r}^H x)= C^H_{p_r} x \overline{\lambda}= x (\overline{\lambda})^2\\
\vdots\\
\left(C^H_{p_r}\right)^t x &=& \left(C^H_{p_r}\right)^{t-1} (C^H_{p_r}x)=\left(C^H_{p_r}\right)^{t-1} x \overline{\lambda}=\dots=
x (\overline{\lambda})^t =(\overline{\lambda})^t x.
\end{eqnarray*}
Thus, $(\overline{\lambda})^t$ is a left eigenvalue of $\left(C^H_{p_r}\right)^t.$ Then by Lemma \ref{prop4}, $\lambda^t$ is a left eigenvalue of
$C^t_{p_r}.$
{\bf Case (b)}: Let $t$ be a negative integer. From {\bf Case (a)}, we have
$C^H_{p_r} x= \overline{\lambda} x= x \overline{\lambda}$. This implies $(C^H_{p_r})^{-1} x=x (\overline{\lambda})^{-1}$. Thus
\begin{eqnarray*}
(C^{H}_{p_r})^{-2} x&=&(C^H_{p_r})^{-1}\{(C^H_{p_r})^{-1} x\}=(C^H_{p_r})^{-1} x (\overline{\lambda})^{-1}=x (\overline{\lambda})^{-2}\\
\vdots\\
(C^{H}_{p_r})^t x &=& (C^H_{p_r})^{(t+1)} \{(C^H_{p_r})^{-1}x\}=(C^H_{p_r})^{(t+1)} x (\overline{\lambda})^{-1}=\dots=
x (\overline{\lambda})^{t} =(\overline{\lambda})^{t} x.
\end{eqnarray*}
Thus, $(\overline{\lambda})^t$ is a left eigenvalue of $\left(C^H_{p_r}\right)^t.$ Then by Lemma \ref{prop4}, $\lambda^t$ is a left eigenvalue of
$C^t_{p_r}.\,\,\,\blacksquare$
Further, we present a simple procedure for computing the powers of the companion matrix $C_{p_l}$,
keeping in view that quaternions do not commute.
\begin{theorem}\label{i} Consider
$C_{p_l}=\kbordermatrix{ &1& & m-1\\
m-1& 0 &\vrule & I \\
\cline{2-4}
1&C_{p_l}(m, 1) & \vrule & C_{p_l} (m, 2: m)
}$. \\
{\bf (a)} If $t < m$ is a positive integer, then
\begin{equation}\label{q11}
C_{p_l}^t=
\kbordermatrix{ &t& & m-t\\
m-t& 0 &\vrule & I \\
\cline{2-4}
t& C & \vrule & D
},
\end{equation}
{\bf (b)} if $t \ge m,$ then
\begin{eqnarray}\label{q2}
C^t_{p_l}= \left[
\begin{array}{cc}
C^{t-(m-1)}_{p_l}(m,1: m)\\
C^{t-(m-2)}_{p_l}(m,1: m)\\
\vdots\\
C^{t-1}_{p_l}(m,1: m)\\
C_{p_l}^t(m, 1:m)
\end{array}\right]_{m\times m},
\end{eqnarray}
where
\begin{eqnarray*}
C_{p_l}^t(m, 1) &:=& C^{t-1}_{p_l}(m, m) C_{p_l}(m,1),\,\,\\ C_{p_l}^t(m, 2: m) &:=& C_{p_l}^{t-1}(m, 1: m-1)+C_{p_l}^{t-1}(m, m)C_{p_l}(m, 2: m),
\end{eqnarray*}
$$C:= \bmatrix{C_{p_l}(m,1:t)\\ C^2_{p_l}(m,1:t)\\
\vdots\\
C^t_{p_l}(m,1:t)}_{t\times t},\,\, \mbox{and}\,\, \, D:= \bmatrix{C_{p_l}(m,t+1:m)\\ C^2_{p_l}(m,t+1:m)\\
\vdots\\
C^t_{p_l}(m,t+1:m)}_{t\times(m-t)}.$$
Note that $C_{p_l}(k, 1: m)$ denotes the $k$-th row of the matrix $C_{p_l}.$
\end{theorem}
\proof For $t=1$, (\ref{q11}) becomes
$C_{p_l}=\kbordermatrix{ &1& & m-1\\
m-1& 0 & \vrule & I \\
\cline{2-4}
1& C_{p_l}(m, 1) & \vrule & C_{p_l}(m, 2: m)
},$ where $C_{p_l}(m, 1):=-q_0, C_{p_l}(m, 2: m):=[-q_1 \ldots -q_{m-1}].$ Thus the theorem is true for $t=1.$
Now, let us consider $C_{p_l}$ as
\[
C_{p_l}=
\kbordermatrix{ &m-k& & k\\
k & A' &\vrule & B' \\
\cline{2-4}
m-k & C' & \vrule & D'
},\,\, \mbox{where}
\]
$A' := C_{p_l}(1: k, 1: m-k),\ B':= C_{p_l}(1: k, m-k+1: m),\ C':= C_{p_l}(k+1: m, 1: m- k),\ D': =C_{p_l}(k+1: m, m-k+1: m).$
For example, for $t=3$ (taking $k=2$ in the above partition), we get
\begin{eqnarray*}
C_{p_l}^3&=&\kbordermatrix{ &2 & & m-2\\
m-2& 0 &\vrule & I \\
\cline{2-4}
2 & C & \vrule & D
} \kbordermatrix{ &m-2& & 2\\
2 & A' &\vrule & B' \\
\cline{2-4}
m-2 & C' & \vrule & D'
}= \kbordermatrix{ &m-2& & 2\\
m-2& C' &\vrule & D' \\
\cline{2-4}
2 & CA'+ DC' & \vrule & CB'+ DD'
}.
\end{eqnarray*}
Note that in each step, the size of the identity matrix $I$ decreases by $1$ and the size of the matrix $C$
increases by $1.$ Similarly, the matrix $D$ gains $1$ row and loses $1$ column. Finally,
after rearranging and separating the $0$ and $I$ blocks we get
$$
\kbordermatrix{ &2+1& & m-2-1\\
m-2-1& 0 &\vrule & I \\
\cline{2-4}
2+1 & C & \vrule & D
},$$ where $C$ and $D$ are of size $3\times 3$ and $3\times (m-3),$ respectively. Assuming that the theorem is true for $t=k$, we have
\begin{eqnarray*}
C_{p_l}^{k+1}= C_{p_l}^k C_{p_l}&=& \kbordermatrix{ &m-k& & k\\
m-k& C' &\vrule & D' \\
\cline{2-4}
k & CA'+DC' & \vrule &CB'+DD'
}
=
\kbordermatrix{ &k+1& & m-k-1\\
m-k-1& 0 &\vrule & I \\
\cline{2-4}
k+1 & C & \vrule & D
},
\end{eqnarray*}
where the corresponding $C$ and $D$ matrices are given in the statement of the theorem.
\noindent The proof for $t\ge m$ is similar$.\,\,\,\blacksquare$
In the quaternionic case, $C_{p_r}= C_{p_l}^T$ but, in general,
$C_{p_r}^t \not= (C_{p_l}^t)^T $ for $t \ge 2.$ This is illustrated by the following example.
\begin{exam}\label{ex5.7}{\rm Consider the following simple monic polynomials over ${\mathbb H}:$
\[
p_l(z)=z^3-{\bf{k}} z^2+ ({\bf{k-j}})z+({\bf{i+ j}})\,\,\mbox{and}\,\, p_r(z)=z^3 -z^2{\bf{k}}+ z({\bf{k-j}})+ ({\bf{i+j}}).
\]
The corresponding companion matrices of $p_l(z)$ and $p_r(z)$ are given by
$$C_{p_l}= \kbordermatrix{ &1 & & 2 \\
2& 0 & \vrule & I \\
\cline{2-4}
1 & C_{p_l}(3, 1) & \vrule & C_{p_l}(3, 2:3)
}\,\,\mbox{and}\,\,C_{p_r}= C_{p_l}^T,$$
respectively,
where $C_{p_l}(3, 1)= {\bf{-i-j}}$ and $ C_{p_l}(3, 2: 3):=[{\bf{j-k}}, {\bf{k}}].$ Then
\[
C^2_{p_l}=\bmatrix{0 & 0 & 1 \\ \bf{-i-j} & \bf{j-k} & \bf{k} \\ \bf{i-j} & \bf{1-2i-j} & \bf{j-k-1}}\,\,\mbox{and}\,\,
C^2_{p_r}= \bmatrix{0 & \bf{-i-j} & \bf{j-i} \\ 0 & \bf{j-k} & \bf{1-j} \\ 1 & \bf{k} & \bf{j-k-1} }.
\]
This shows that $C_{p_r}^2 \not= (C_{p_l}^2)^T.$
}
\end{exam}
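The matrices displayed in Example \ref{ex5.7} can be verified mechanically. The following Python sketch (not part of the original text) builds $C_{p_l}$ and $C_{p_r}=C_{p_l}^T$ from the coefficients, squares them with the Hamilton product, and confirms the displayed entries and the inequality $C_{p_r}^2 \neq (C_{p_l}^2)^T$.
\begin{verbatim}
import numpy as np

def qmult(p, q):
    # Hamilton product; quaternions stored as (a, b, c, d) ~ a + b i + c j + d k
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qmatmul(A, B):
    n = len(A)
    return [[tuple(sum(c) for c in zip(*[qmult(A[i][k], B[k][j]) for k in range(n)]))
             for j in range(n)] for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

Z, ONE = (0,0,0,0), (1,0,0,0)
neg = lambda t: tuple(-s for s in t)
q0, q1, q2 = (0,1,1,0), (0,0,-1,1), (0,0,0,-1)    # q_0 = i+j, q_1 = k-j, q_2 = -k

C_pl = [[Z, ONE, Z], [Z, Z, ONE], [neg(q0), neg(q1), neg(q2)]]
C_pr = transpose(C_pl)
C_pl2, C_pr2 = qmatmul(C_pl, C_pl), qmatmul(C_pr, C_pr)

print(C_pl2[2])  # [(0,1,-1,0), (1,-2,-1,0), (0,0,1,-1)]: i-j, 1-2i-j, j-k-1
print(C_pr2[0])  # [(0,0,0,0), (0,-1,-1,0), (0,-1,1,0)]:  0,  -i-j,    j-i
print(C_pr2 == transpose(C_pl2))   # False: squaring does not commute with transpose
\end{verbatim}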
We therefore derive a separate result, analogous to Theorem \ref{i}, for $C_{p_r}^t,\ t\ge 2.$
\begin{theorem}\label{ii} Consider
$C_{p_r}=\kbordermatrix{ &m-1& & 1\\
1& 0 &\vrule & C_{p_r}(1, m) \\
\cline{2-4}
m-1& I & \vrule & C_{p_r}(2: m, m)
}$. \\
{\bf (a)} If $t < m$ is a positive integer, then
\begin{equation}\label{q1}
C_{p_r}^t=
\kbordermatrix{ &m-t& & t\\
t& 0 &\vrule & C \\
\cline{2-4}
m-t& I & \vrule & D
},
\end{equation}
{\bf (b)} if $t \ge m,$ then
\begin{eqnarray*}\label{q2}
C^t_{p_r}=\left[
\begin{array}{ccccc}
C^{t-(m-1)}_{p_r}(1: m, m) & C^{t-(m-2)}_{p_r}(1: m, m) &\dots & C^{t-1}_{p_r}(1:m, m) & C_{p_r}^t(1:m, m)
\end{array}\right]_{m\times m},
\end{eqnarray*}
%
where
\begin{eqnarray*}
C &:=& \bmatrix{C_{p_r}(1: t, m) & C^2_{p_r}(1: t, m) & \ldots & C^t_{p_r}(1: t, m)},\\
D &:=& \bmatrix{C_{p_r}(t+1: m, m) & C^2_{p_r}(t+1: m, m) &\ldots& C^t_{p_r}(t+ 1: m, m)},\\
C_{p_r}^t(1, m) &:=& C_{p_r}(1, m) \,\, C_{p_r}^{t-1}(m, m),\,\, \mbox{and}\\
\,\,C_{p_r}^t(2: m, m) &:=& C_{p_r}^{t-1}(1: m-1, m)+
C_{p_r}(2: m, m) \,\, C_{p_r}^{t-1}(m, m).
\end{eqnarray*}
\end{theorem}
\proof The proof follows from the proof method of Theorem \ref{i}.$\,\,\,\blacksquare$
The polynomials from Example \ref{ex5.7} satisfy
\[
\tilde{p}_l(z):= \overline{p_r(\overline{z})}= z^3+ {\bf{k}} z^2+({\bf{j-k}})z+ ({\bf{-i-j}}),\,\,\mbox{and}\,\,
\tilde{p_r}(z):= \overline{p_l(\overline{z})}= z^3+ z^2{\bf{k}}+z({\bf{j-k}})- ({\bf{i+j}}).
\]
Thus the companion matrices corresponding to $\tilde{p}_l(z)$ and $ \tilde{p}_r(z)$ are given by
\[
C_{\tilde{p}_l}= \overline{C_{p_l}} \,\mbox{and}\, C_{\tilde{p}_r}=\overline{C_{p_r}},
\]
respectively.
Next,
\[
C^2_{\tilde{p}_l}= \bmatrix{0 & 0& 1 \\ \bf{i+j} & \bf{-j+k} & \bf{-k} \\ \bf{i-j} & \bf{1+j} & \bf{k-j-1}}\, \mbox{and}\,
C^2_{\tilde p_r}= \bmatrix{0 & \bf{i+ j} & \bf{j- i} \\ 0 & \bf{-j+k} & \bf{1+ 2i+ j} \\ 1 & \bf{-k} & \bf{-1 -j+ k} }.\]
Then
$$ \dm{\max_{ 1 \leq i \leq 3}}\left[ (r_i' (C^2_{p_l}) )^{1/2} \right]=2.3655\,\,\,\mbox{and}\,\,\,
\dm{\max_{ 1 \leq i \leq 3}}\left[ (r_i' (C^2_{\tilde{p_r}} ) )^{1/2} \right]=1.9656,$$
$$
\dm{\max_{ 1 \leq i \leq 3}}\left[ \left(r_i'\left(C^2_{p_r}\right) \right)^{1/2}
\right]=1.9319 \,\,\mbox{and}\,\, \dm{\max_{ 1 \leq i \leq 3}}\left[ (r_i'(C^2_{\tilde{p_l}}) )^{1/2} \right]=2.1355.
$$
Now, we have
\begin{eqnarray*}
\dm{\max_{ 1 \leq i \leq 3}}\left[ (r_i' (C^2_{p_l}) )^{1/2} \right] &\not =&
\dm{\max_{ 1 \leq i \leq 3}}\left[ (r_i' (C^2_{\tilde{p_r}} ) )^{1/2} \right]\,\,\mbox{and}\\
\dm{\max_{ 1 \leq i \leq 3}}\left[ \left(r_i'\left(C^2_{p_r}\right) \right)^{1/2}
\right] &\not=& \dm{\max_{ 1 \leq i \leq 3}}\left[ (r_i'(C^2_{\tilde{p_l}}) )^{1/2} \right].
\end{eqnarray*}
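These four maxima can be recomputed directly. In the Python sketch below (not part of the original text), $r_i'(\cdot)$ is taken to be the full $i$-th absolute row sum, which is consistent with the values displayed above.
\begin{verbatim}
import numpy as np

def qmult(p, q):
    # Hamilton product of quaternions stored as (a, b, c, d) ~ a + b i + c j + d k
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

qabs = lambda q: np.sqrt(sum(t*t for t in q))
conj = lambda q: (q[0], -q[1], -q[2], -q[3])
neg  = lambda q: tuple(-t for t in q)

def companion(coeffs):
    # left companion matrix of z^m + q_{m-1} z^{m-1} + ... + q_0, coeffs = [q_0, ..., q_{m-1}]
    m, Z, ONE = len(coeffs), (0,0,0,0), (1,0,0,0)
    top = [[ONE if j == i + 1 else Z for j in range(m)] for i in range(m - 1)]
    return top + [[neg(c) for c in coeffs]]

def square(A):
    m = len(A)
    return [[tuple(sum(c) for c in zip(*[qmult(A[i][k], A[k][j]) for k in range(m)]))
             for j in range(m)] for i in range(m)]

def max_sqrt_rowsum(A2):
    # max_i sqrt(r_i'(A^2)), with r_i' the full absolute row sum
    return max(np.sqrt(sum(qabs(e) for e in row)) for row in A2)

q = [(0,1,1,0), (0,0,-1,1), (0,0,0,-1)]           # q_0 = i+j, q_1 = k-j, q_2 = -k
C_pl       = companion(q)
C_pr       = [list(r) for r in zip(*C_pl)]        # C_{p_r} = C_{p_l}^T
C_tilde_pl = companion([conj(c) for c in q])      # conjugated coefficients
C_tilde_pr = [list(r) for r in zip(*C_tilde_pl)]

for M in (C_pl, C_tilde_pr, C_pr, C_tilde_pl):
    print(round(max_sqrt_rowsum(square(M)), 4))   # 2.3655, 1.9656, 1.9319, 2.1355
\end{verbatim}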
Further, we have the following bounds for the zeros of $p_l(z)$ and $p_r(z)$ for $\gamma \in [0,\, 1].$
\begin{theorem}\label{T4}
Let $p_l(z)$ and $p_r(z)$ be the simple monic polynomials over ${\mathbb H}$ of degree $m$ and let $C_{p_l}^t$ and
$C_{p_r}^t\, (t \geq 2$) be the $t$-th power of the
companion matrices $C_{p_l}$ and $C_{p_r},$ corresponding to $p_l(z)$ and $p_{r}(z),$ respectively. Then, for $\gamma \in [0, 1],$
every zero $\tilde{z}$ of $p_l(z)$ satisfies the following inequalities:
\begin{eqnarray}
\left(\dm{\max_{ 1 \leq i \leq m}}\left[ \left(r_i'\left(C^t_{q_l}\right) \right)^{\gamma/t}
\left(c_i'\left(C^t_{q_l}\right) \right)^{(1-\gamma)/t}\right]\right)^{-1}
\le
|\tilde{z}| \le \dm{\max_{ 1 \leq i \leq m}}\left[ \left(r_i'\left(C^t_{p_l}\right) \right)^{\gamma/t}
\left(c_i'\left(C^t_{p_l}\right) \right)^{(1-\gamma)/t}\right],\label{neqn3}
\end{eqnarray}
\begin{eqnarray}
\left(\dm{\max_{ 1 \leq i \leq m}}\left[ \left(r_i'\left(C^t_{\tilde{q_r}}\right) \right)^{\gamma/t}
\left(c_i'\left(C^t_{\tilde{q_r}}\right) \right)^{(1-\gamma)/t}\right]\right)^{-1}
\le
|\tilde{z}| \le \dm{\max_{ 1 \leq i \leq m}}\left[ \left(r_i'\left(C^t_{\tilde{p_r}}\right) \right)^{\gamma/t}
\left(c_i'\left(C^t_{\tilde{p_r}}\right) \right)^{(1-\gamma)/t}\right], \label{neqn4}
\end{eqnarray}
and every zero $\tilde{z}$ of $p_r(z)$ satisfies the following inequalities:
\begin{eqnarray}
\left(\dm{\max_{ 1 \leq i \leq m}}\left[ \left(r_i'\left(C^t_{q_r}\right) \right)^{\gamma/t}
\left(c_i'\left(C^t_{q_r}\right) \right)^{(1- \gamma)/t}\right]\right)^{-1}
\le
|\tilde{z}|
\le \dm{\max_{ 1 \leq i \leq m}}\left[ \left(r_i'\left(C^t_{p_r}\right) \right)^{\gamma/t}
\left(c_i'\left(C^t_{p_r}\right) \right)^{(1- \gamma)/t}\right],\label{neqn5}
\end{eqnarray}
\begin{eqnarray}
\left(\dm{\max_{ 1 \leq i \leq m}}\left[ \left(r_i'\left(C^t_{\tilde{q_l}}\right) \right)^{\gamma/t}
\left(c_i'\left(C^t_{\tilde{q_l}}\right) \right)^{(1-\gamma)/t}\right]\right)^{-1}
\le
|\tilde{z}| \le \dm{\max_{ 1 \leq i \leq m}}\left[ \left(r_i'\left(C^t_{\tilde{p_l}}\right) \right)^{\gamma/t}
\left(c_i'\left(C^t_{\tilde{p_l}}\right) \right)^{(1- \gamma)/t}\right] \label{neqn6}.
\end{eqnarray}
\end{theorem}
\proof Let $\lambda$ be a left eigenvalue of $C_{p_l}.$
Then by Proposition \ref{p2}, $\lambda^t$ ($t \geq 2$ a positive integer) is a left eigenvalue
of $C^t_{p_l}.$ Applying Theorem \ref{os1} to $C^t_{p_l}$ bounds $|\lambda^t|=|\lambda|^t$, and taking the $t$-th root gives (\ref{neqn3}).
\noindent By Lemma \ref{prop4}, $\overline{\lambda}$ is a left eigenvalue of $C_{\tilde{p_r}}$ and by
Proposition \ref{p3}, $(\overline{\lambda})^t$ is a left eigenvalue of $(C_{\tilde{p_r}})^t.$ Then from Theorem \ref{os1}, (\ref{neqn4}) follows.
\noindent The proof of (\ref{neqn5}) and (\ref{neqn6}) are similar$.\,\,\,\blacksquare$
Substituting $t=2$ and $\gamma=1$ in Theorem \ref{T4}, we have the following corollary.
\begin{corollary}\label{pc}
Let $p_l(z)$ and $p_r(z)$ be simple monic polynomials over ${\mathbb H}$ of degree $m.$ Then
every zero $\tilde{z}$ of $p_l(z)$ satisfies the following inequalities:
\begin{eqnarray}
\frac{1}{\beta_1} \leq |\tilde{z}| \leq \alpha_1,
\end{eqnarray}
\begin{eqnarray}
\frac{1}{\beta_2} \leq |\tilde{z}| \leq \alpha_2,
\end{eqnarray}
where
\begin{eqnarray*}
\alpha_1 &=& \dm{\max \left\{ 1, \left( \sum_{j=0}^{m-1} |q_j| \right)^{1/2},
\left( \sum_{j=0}^{m-1}|q_{m-1} q_j - q_{j-1}| \right)^{1/2} \right\}},\\
\alpha_2 &=& \dm{ \max_{ 2 \leq j \leq m-1 } \left\{ \left( |q_0| + |\overline{q_0}\,\, \overline{q_{m-1}}| \right)^{1/2},
\left( |q_1| + |\overline{q_1}\,\, \overline{q_{m-1}} - \overline{q_0}| \right)^{1/2},
\left( 1+ |q_j| + |\overline{q_j}\,\, \overline{q_{m-1}} - \overline{q_{j-1}}| \right)^{1/2} \right\}},\\
\beta_1 &=& \dm{\max \left\{ 1, \left( \sum_{j=1}^{m-1} |q^{-1}_0 q_j| \right)^{1/2},
\left( \sum_{j=0}^{m-1}|q^{-1}_0 q_1 q^{-1}_0q_{m-j} - q^{-1}_0 q_{m-j+1}| \right)^{1/2} \right\}},\\
\beta_2 &=& \max_{ 2 \leq j \leq m-1 } \bigg\{ \left( |q^{-1}_0| + |\overline{q^{-1}_0}\,\,\, \overline{q_1 q^{-1}_0}|
\right)^{1/2},
\left( |q_{m-1} q^{-1}_0| + |\overline{ q_{m-1} q^{-1}_0}\,\,\, \overline{q_1 q^{-1}_0}- \overline{q^{-1}_0}| \right)^{1/2},\\ &&
\left( 1+ |q_{m-j} q^{-1}_0| + |\overline{ q_{m-j} q^{-1}_0}\,\,\, \overline{q_1 q^{-1}_0}- \overline{q_{m-j+ 1}q^{-1}_0}|
\right)^{1/2} \bigg\},
\end{eqnarray*}
and every zero $\tilde{z}$ of $p_r(z)$ satisfies the following inequalities:
\begin{eqnarray}
\frac{1}{\beta_3} \leq |\tilde{z}| \leq \alpha_3,
\end{eqnarray}
\begin{eqnarray}
\frac{1}{\beta_4} \leq |\tilde{z}| \leq \alpha_4,
\end{eqnarray}
where
\begin{eqnarray*}
\alpha_3 &=& \dm{ \max_{ 2 \leq j \leq m-1 } \left\{ \left( |q_0| + |q_0\,\, q_{m-1}| \right)^{1/2},
\left( |q_1| + |q_1\,\, q_{m-1} - q_0| \right)^{1/2} ,
\left( 1+ |q_j| + |q_j\,\, q_{m-1} - q_{j-1}| \right)^{1/2} \right\}},\\
\alpha_4 &=& \dm{\max \left\{ 1, \left( \sum_{j=0}^{m-1} |q_j| \right)^{1/2},
\left( \sum_{j=0}^{m-1}|\overline{q_{m-1}} \,\, \overline{q_j} - \overline{q_{j-1}}| \right)^{1/2} \right\}},\\
\beta_3 &=& \max_{ 2 \leq j \leq m-1 } \bigg\{ \small{ \left( |q^{-1}_0| + |q^{-1}_0\,\,\, q_1 q^{-1}_0|
\right)^{1/2}},
\left( |q_{m-1} q^{-1}_0| + |q_{m-1} q^{-1}_0\, q_1 q^{-1}_0- q^{-1}_0| \right)^{1/2},\\ &&
\left( 1+|q_{m-j} q^{-1}_0| + |q_{m-j} q^{-1}_0\, q_1 q^{-1}_0- q_{m-j+1}q^{-1}_0|
\right)^{1/2}\bigg\},\\
\beta_4 &=& \dm{\max \left\{ 1, \left( \sum_{j=1}^{m-1} |q^{-1}_0 q_j| \right)^{1/2},
\left( \sum_{j=0}^{m-1}|\overline{q^{-1}_0 q_1} \,\,\,\, \overline{q^{-1}_0q_{m-j}} - \overline{q^{-1}_0 q_{m-j+1}}| \right)^{1/2} \right\}},
q_{-1}=0=q_{m+1}, q_m=1.
\end{eqnarray*}
\end{corollary}
\proof The proof follows from Theorem \ref{T4} and Appendix \ref{AP1}.$\,\,\,\blacksquare$
\begin{exam}\label{e1}{\rm
Consider the following polynomials $p_l(z)$ and $p_r(z)$ over ${\mathbb H}$:
\[p_l(z)=z^6+ ({\bf{i}}+ 3{\bf{k}}) z^5+ (3+ {\bf{j}})z^4+(5{\bf{i}}+ 15{\bf{k}}) z^3+ (-4+ 5{\bf{j}})z^2+ (6{\bf{i}}+ 18{\bf{k}})z+ (
6{\bf{j}} -12),\]
\[p_r(z)=z^6+ z^5 ({\bf{i}}+ 3{\bf{k}})+ z^4 (3+ {\bf{j}})+z^3 (5{\bf{i}}+ 15{\bf{k}})+ z^2 (-4+5{\bf{j}})+z (6{\bf{i}}+18{\bf{k}})+ (
6{\bf{j}} -12).\]}
\end{exam}
The zeros of $p_l(z)$ are given in \cite{rp01}. Moreover, we find the zeros of $p_r(z)$ by Niven's algorithm \cite{in41}.
\begin{table}[!h]
\caption{Zeros and bounds for the zeros of $p_l(z)$ and $p_r(z).$}
\begin{subtable}{.6\linewidth}
\centering
\caption{Zeros of $p_l(z)$ and $p_r(z)$ and their absolute values.}\label{t}
\begin{tabular}{|cccc|}\hline
$ z_1$ & $ |z_1|$ & $z_2$ & $ |z_2|$\\
\hline
$-{\bf{i}}-2{\bf{k}}$ & $ 2. 2361$ &$-0.4{\bf{i}}-2.2{\bf{k}}$ & $ 2. 2361$ \\
$[{\bf{i}}\sqrt{3}]$ & $1. 7321$ & $[{\bf{i}}\sqrt{3}]$& $1. 7321$\\
$ [{\bf{i}}\sqrt{2}]$ & $ 1. 4142$ & $ [{\bf{i}}\sqrt{2}]$ & $ 1. 4142$ \\
$-0.6{\bf{i}}-0. 8{\bf{k}}$ & $ 1$ &$-{\bf{k}}$ & $ 1$ \\
\hline
\end{tabular}
\end{subtable}%
\begin{subtable}{.4\linewidth}
\centering
\caption{Lower and upper bounds for the zeros of $p_l(z)$ and $p_r(z).$}\label{t4}
\begin{tabular}{|ccc|}\hline
Example \ref{e1} & lower bound & upper bound \\
\hline
Corollary \ref{co} (1) & $0. 4142 $ & $19. 9737$ \\
Corollary \ref{co} (2) &$0. 2766 $ & $60. 9291$ \\
Theorem \ref{u1}, $\gamma = 1/4$ &$0. 3744 $ & $8. 1415$ \\
\hline
\end{tabular}
\end{subtable}
\end{table}
Here $z_1$ denotes the set of zeros of $p_l(z)$ and $z_2$ denotes the set of zeros of $p_r(z).$
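The Corollary \ref{co} entries of Table \ref{t4} can be reproduced from the coefficient moduli alone; a short Python sketch (not part of the original text) follows.
\begin{verbatim}
import numpy as np

qabs = lambda q: np.sqrt(sum(t * t for t in q))

# coefficients q_0, ..., q_5 of p_l(z) from Example e1 above, constant term first
q = [(-12,0,6,0), (0,6,0,18), (-4,0,5,0), (0,5,0,15), (3,0,1,0), (0,1,0,3)]
mods = [qabs(c) for c in q]

upper1 = max([mods[0]] + [1 + s for s in mods[1:]])             # Corollary co (1)
lower1 = mods[0] / max([1.0] + [mods[0] + s for s in mods[1:]])
upper2 = max(1.0, sum(mods))                                    # Corollary co (2)
lower2 = mods[0] / max(mods[0], 1.0 + sum(mods[1:]))

print(round(lower1, 4), round(upper1, 4))   # 0.4142 19.9737
print(round(lower2, 4), round(upper2, 4))   # 0.2766 60.9291
\end{verbatim}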
\begin{table}[h!]
\begin{center}
\begin{tabular}{ccccc}\hline
Example \ref{ex5.7} && lower bound && upper bound\\
\hline
Corollary \ref{pc} $1(a)$ && $ 0. 6156 $ && $2. 3655$ \\
Corollary \ref{pc} $1(b)$ && $ 0. 6078 $ && $1. 9656$ \\
Corollary \ref{pc} $2(a)$ && $0. 6078$ && $ 1. 9319 $ \\
Corollary \ref{pc} $2(b)$ && $0. 6436$ && $2. 1355$ \\
\hline
\end{tabular}
\caption{Lower and upper bounds for the zeros of $p_l(z)$ and $p_{r}(z).$}
\end{center}
\end{table}
\section{Conclusion}\label{cf}
In this paper, we have derived an Ostrowski type theorem for the left eigenvalues of a quaternionic matrix that generalizes
the Ostrowski type theorem for the right eigenvalues of a quaternionic matrix when all the diagonal entries of the matrix
are real. We have derived a corrected version of the Brauer type theorem for left eigenvalues based on the
deleted absolute column sums of a quaternionic matrix. Moreover, we have extended these
localization theorems by applying the generalized H$\ddot{\mbox{o}}$lder inequality to the left as well as the right eigenvalues of
a quaternionic matrix.
Bounds for the zeros of quaternionic polynomials have been derived.
As a consequence, we have shown that some of our bounds are sharper than the bound given in \cite{go09}. Further,
we have derived bounds via the powers of companion matrices which are always sharper than the bound given in \cite{go09}.
\vskip 2ex
\noindent{\bf Acknowledgements:}
The authors would like to thank the reviewer and editor for their valuable comments and suggestions to improve the manuscript. They also thank Professor Ivan Slapni$\check{\mbox{c}}$ar for careful reading and helpful comments for the improvement of the manuscript.
\begin{appendix} \section{Appendix}\label{AP1}
In this appendix, we state formulas for the squares of quaternionic companion matrices. For $t= 2$, Theorem \ref{i} implies
$$
C_{p_l}^2=
\kbordermatrix{ &2& & m-2\\
m-2& 0 &\vrule & I \\
\cline{2-4}
2& C & \vrule & D
}, \,\mbox{where}\,\,
C:=\bmatrix{C_{p_l}(m,1: 2)\\ C^2_{p_l}(m,1: 2) }=\bmatrix{-q_0 & -q_1 \\ q_{m-1} q_0 & q_{m-1} q_1-q_0}\,\,$$
and
$$D= \bmatrix{C_{p_l}(m,3: m)\\ C^2_{p_l}(m, 3: m)}=\bmatrix{-q_2 & -q_3 & \ldots & -q_{m-1} \\
q_{m-1} q_2-q_1 & q_{m-1} q_3-q_2 &\ldots & (q_{m-1})^2-q_{m-2}},
$$
$$
C_{ \tilde{p_l}}^2=
\kbordermatrix{ &2& & m-2\\
m-2& 0 &\vrule & I \\
\cline{2-4}
2& C & \vrule & D
}, \,\mbox{where}\,\,
C= \bmatrix{C_{ \tilde{p_l}}(m,1:2)\\ C^2_{ \tilde{p_l}}(m,1:2) }= \bmatrix{-\overline{q_0} & -\overline{q_1} \\ \overline{q_{m-1}}\,\,
\overline{q_0} & \overline{q_{m-1}} \, \,\overline{q_1}-\overline{q_0}}$$
and
$$D=\bmatrix{C_{ \tilde{p_l}}(m,3:m)\\ C^2_{ \tilde{p_l}}(m, 3:m)}=\bmatrix{-\overline{q_2} & -\overline{q_3}
& \ldots & -\overline{q_{m-1}} \\
\overline{q_{m-1}} \,\, \overline{q_2}-\overline{q_1} & \overline{q_{m-1}} \,\, \overline{q_3}-\overline{q_2}
&\ldots & (\overline{q_{m-1}})^2-\overline{q_{m-2}}},
$$
$$
C_{q_l}^2=
\kbordermatrix{ &2& & m-2\\
m-2& 0 &\vrule & I \\
\cline{2-4}
2& C & \vrule & D
}, \,\mbox{where}\,\,
C=\bmatrix{-q^{-1}_0 & -q^{-1}_0 q_{m-1} \\ q^{-1}_0 q_1 q^{-1}_0 & q^{-1}_0 q_1 q^{-1}_0 q_{m-1} -q^{-1}_0}\,\,$$
and
$$D=\bmatrix{- q^{-1}_0 q_{m-2} & \ldots & - q^{-1}_0 q_1 \\
q^{-1}_0 q_1 q^{-1}_0 q_{m-2}- q^{-1}_0 q_{m-1} & \ldots & (q^{-1}_0 q_1)^2-q^{-1}_0q_2 },
$$
$$
C_{\tilde{q_l}}^2=
\kbordermatrix{ &2& & m-2\\
m-2& 0 &\vrule & I \\
\cline{2-4}
2& C & \vrule & D
}, \,\mbox{where}\,\,
C=\bmatrix{-\overline{q^{-1}_0} & -\overline{q^{-1}_0 q_{m-1}} \\
& \\ \overline{q^{-1}_0 q_1}\,\,\overline{ q^{-1}_0} &
\overline{q^{-1}_0 q_1}\,\, \overline{q^{-1}_0 q_{m-1}} -\overline{q^{-1}_0}}\,\,$$
and
$$D=\bmatrix{- \overline{q^{-1}_0 q_{m-2}} & \ldots & - \overline{q^{-1}_0 q_1} \\
& \\
\overline{q^{-1}_0 q_1}\,\,\,\overline{ q^{-1}_0 q_{m-2}}- \overline{q^{-1}_0 q_{m-1}} & \ldots &
\left(\overline{q^{-1}_0 q_1}\right)^2-\overline{q^{-1}_0q_2 } }.
$$
For $t=2$, Theorem \ref{ii} implies
$$
C_{p_r}^2=
\kbordermatrix{ &m-2& & 2\\
2& 0 &\vrule & C \\
\cline{2-4}
m-2& I & \vrule & D
}, \mbox{where}\,\,
C=\bmatrix{C_{p_r}(1:2, m) & C^2_{p_r}(1:2, m) }=\bmatrix{-q_0 & q_0 q_{m-1} \\ -q_1 & q_1 q_{m-1}-q_0 },$$ and
$$D=\bmatrix{C_{p_r}(3:m, m) & C^2_{p_r}(3:m, m)}= \bmatrix{ -q_2 & q_2 q_{m-1}-q_1 \\ -q_3 & q_3 q_{m-1}-q_2 \\
\vdots & \vdots \\
-q_{m-1} & (q_{m-1})^2 - q_{m-2}},
$$
$$
C_{\tilde{p_r}}^2=
\kbordermatrix{ &m-2& & 2\\
2& 0 &\vrule & C \\
\cline{2-4}
m-2& I & \vrule & D
}, \mbox{where}\,\,
C=\bmatrix{-\overline{q_0} & \overline{q_0}\,\, \overline{q_{m-1}} \\
-\overline{q_1} & \overline{q_1}\,\, \overline{q_{m-1}}-\overline{q_0} }\,\, \mbox{and}\,\,
D=\bmatrix{ -\overline{q_2} & \overline{q_2}\,\, \overline{q_{m-1}}-\overline{q_1} \\ -\overline{q_3} & \overline{q_3}\,\,
\overline{q_{m-1}}-\overline{q_2} \\
\vdots & \vdots \\
-\overline{q_{m-1}} & \left(\overline{q_{m-1}}\right)^2 - \overline{q_{m-2}}},
$$
$$
C_{q_r}^2=
\kbordermatrix{ &m-2& & 2\\
2& 0 &\vrule & C \\
\cline{2-4}
m-2& I & \vrule & D
}, \mbox{where}
$$
$$C=\bmatrix{-q^{-1}_0 & q^{-1}_0\,\, q_{1} q^{-1}_0 \\
-q_{m-1} q^{-1}_0 & q_{m-1} q^{-1}_0 q_1 q^{-1}_0-q^{-1}_0 }\,\,\mbox{and}\,\,
D=\bmatrix{ -q_{m-2} q^{-1}_0 & q_{m-2} q^{-1}_0 q_1 q^{-1}_0-q_{m-1} q^{-1}_0 \\
\vdots & \vdots \\
-q_{1} q^{-1}_0 & (q_1 q^{-1}_0)^2 - q_{2} q^{-1}_0},
$$
$$
C_{\tilde{q_r}}^2=
\kbordermatrix{ &m-2& & 2\\
2& 0 &\vrule & C \\
\cline{2-4}
m-2& I & \vrule & D
}, \mbox{where}
$$
$$C=\bmatrix{-\overline{q^{-1}_0} & \overline{q^{-1}_0}\,\,\, \overline{q_{1} q^{-1}_0} \\
& \\
-\overline{q_{m-1} q^{-1}_0} & \overline{q_{m-1} q^{-1}_0}\,\, \overline{q_1 q^{-1}_0}-\overline{q^{-1}_0} }\,\,\mbox{and}\,\,
D=\bmatrix{ -\overline{q_{m-2} q^{-1}_0} & \overline{q_{m-2} q^{-1}_0}\,\,\, \overline{q_1 q^{-1}_0}-\overline{q_{m-1} q^{-1}_0} \\
\vdots & \vdots \\
-\overline{q_{1} q^{-1}_0} & \left(\overline{q_1 q^{-1}_0}\right)^2 - \overline{q_{2} q^{-1}_0}}.
$$
\end{appendix}
| {
"timestamp": "2016-09-20T02:08:31",
"yymm": "1502",
"arxiv_id": "1502.08014",
"language": "en",
"url": "https://arxiv.org/abs/1502.08014",
"abstract": "In this paper, Ostrowski and Brauer type theorems are derived for the left and right eigenvalues of a quaternionic matrix. Generalizations of Gerschgorin type theorems are discussed for the left and the right eigenvalues of a quaternionic matrix. Thereafter a sufficient condition for the stability of a quaternionic matrix is given that generalizes the stability condition for a complex matrix. Finally, a characterization of bounds for the zeros of quaternionic polynomials is presented.",
"subjects": "Rings and Algebras (math.RA); Numerical Analysis (math.NA)",
"title": "Localization theorems for matrices and bounds for the zeros of polynomials over a quaternion division algebra",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9637799399736476,
"lm_q2_score": 0.7341195269001831,
"lm_q1q2_score": 0.707529673569341
} |
https://arxiv.org/abs/2206.07491 | Polaris: The Mathematics of Navigation and the Shape of the Earth | For millenia, sailors have used the empirical rule that the elevation angle of Polaris, the North Star, as measured by sextant, quadrant or astrolabe, is approximately equal to latitude. Here, we show using elementary trigonometry that Empirical Law 1 can be converted from a heuristic to a theorem. A second ancient empirical law is that the distance in kilometers from the observer to the North Pole, the geodesic distance measured along the spherical surface of the planet, is the number of degrees of colatitude multiplied by 111.1 kilometers. Can Empirical Law 2 be similarly rendered rigorous? No; whereas as the shape of the planet is controlled by trigonometry, the size of our world is an accident of cosmological history. However, Empirical Law 2, can be rigorously verified by measurements. The association of 111 km of north-south distance to one degree of latitude trivially yields the circumference of the globe as 40,000 km. We also extend these ideas and the parallel ray approximation to three different ways of modeling a Flat Earth. We show that photographs from orbit, taken by a very expensive satellite, are unnecessary to render the Flat Earth untenable; simple mathematics proves Earth a sphere just as well. | \section{Polaris and the True North Star}
The key measurement for navigation is the following.
\begin{definition}[elevation angle] The elevation angle $\epsilon$ is
the angular distance between the horizon and the target star. More precisely,
$\epsilon$ is the angle between a ray from the center of the earth
to the star and a second ray in the same longitude but in the equatorial plane. Alternatively, the elevation can be defined as the complementary angle to the zenith angle $\zeta$: $\epsilon \equiv\pi/2 - \zeta$. Synonyms for ``elevation angle" include ``altitude angle" and simply ``altitude".
\end{definition}
\begin{definition}[zenith angle] The zenith angle
$\zeta$ is the angular distance between directly overhead [observer's zenith] and the target star. A second
equivalent definition is $\zeta \equiv \pi/2 - \epsilon$.
\end{definition}
The ``Elevation angle" $\epsilon$ is ninety degrees when the Pole Star is at
zenith and zero degrees when the Pole Star is bisected by the horizon.
Both angles can be measured by a variety of ``inclinometers'' [angle-measuring devices] including the
mariner's astrolabe, quadrant, sextant and the Arab instrument known as the \emph{Kamal}.
As explained below, angle measurements of the Pole Star are especially useful for navigation.
However, one difficulty is that
Polaris is 3/4 of a degree from the celestial North Pole
today, and was more than three degrees away in Elizabethan times. However, mariners soon realized
that by looking at stars near Polaris and using the set of empirical rules with the charming name of the
``Regiment of the Stars", the distance-from-the-North-Pole errors could be greatly reduced.
\begin{definition}[Regiment of the North Star]
A set of rules that allow sightings of stars such as Kochab
to correct for the difference between the position of Polaris and
the true north celestial pole.
\end{definition}
We use ``Pole Star" denote a star that is not Polaris, but rather Polaris-after-the-Regiment-of-the-North-Star has been applied. Our Pole Star is thus aligned with the
earth's rotation axis and is the true north celestial pole.
Practical star sightings incur additional errors because of the limitations of the sextant, the mariner's
astrolabe and other inclinometers. Again, correction tables and instruments were developed many centuries
ago \cite{Freiesleben55,Hewson51}.
The full twelve-minute video \cite{PracticalNavigatorPolaris} is a careful discussion of all these corrections. For the videographer's example,
a sextant sighting of 29 degrees, 53.5 minutes is altered to a best estimate of
29 degrees, 21.3 minutes, a correction of about half a degree or about 30 nautical miles.
The importance of angles for navigation has driven the development of planar and
spherical trigonometry \cite{VanBrummelen09,VanBrummelen13,VanBrummelen20,VanBrummelen21}.
\clearpage
\section{The First Empirical Law: Latitude Is Elevation}
\begin{quote}
For at least two millennia, navigators have known how to determine their latitude. Knowing the latitude of the desired destination, the navigator could sail north or south to that latitude and then sail east or west to reach that destination. To do this, it was necessary to have a method of measuring angles. In early days Arabs used one or two fingers at arms length to measure the angle between the horizon and Polaris. Later they used a \emph{Kamal}, which is a piece of cord with knots tied in it. This could be used to measure the angle between the horizon and Polaris. A knot could be tied in the cord as a measure of the homeport latitude before leaving, so the desired latitude was premeasured. Arabs tied knots on the cord at intervals of one \emph{issabah}, Arabic for ``finger", which denotes 1 degree and 36 minutes.
In the 10th century AD, Arabs introduced to Europe the astrolabe and the quadrant.
The quadrant spans 90 degrees and is divided into whole degrees. A plumb bob establishes the vertical. The quadrant was popular with Portuguese explorers in the 15th century. In addition to Polaris, they used observations of the Sun for determining latitude, particularly in the southern hemisphere.
\end{quote}
\hspace*{0.5in} --- P. Kenneth Seidelmann in Sec. 7 of \cite{Seidelmann1dot7}
\bigskip
Waters' treatise on navigation in the sixteenth and seventeenth centuries \cite{Waters58} discusses the Polaris-latitude connection on pgs. 41 to 50.
Royal Air Force navigators used ``latitude equals elevation" as a tool in navigating Lancaster heavy bombers in night attacks
on Germany during World War II as described in \cite{Hoare07}, written by a retired war-time RAF navigator.
Thus, many centuries of navigation by land and sea provide overwhelming evidence for the following.
{\bf Empirical Law 1: Latitude is the Elevation Angle of Polaris}
\emph{In the Northern Hemisphere, the latitude of the observer is approximately equal to the elevation angle measured by the observer, or in other words}
\begin{eqnarray} \epsilon \approx \varphi.\end{eqnarray}
We show below that this heuristic rule is in fact a provable theorem.
Practical navigators are obliged to make small corrections to the observed or ``apparent" elevation angle to account for errors intrinsic to the sextant and also atmospheric refraction \cite{PracticalNavigatorPolaris}, observer height above the sea (``dip angle") and so on. The observation and corrections collectively generate the ``true" elevation angle
denoted by $\epsilon$; it is this corrected elevation that appears in Empirical Law 1.
\section{The Leveled Observer}
The surface of the planet is corrugated with small scale curvature in the form of innumerable hills and valleys. A planar earth has zero large scale curvature, but a spherical earth is curved on a planetary scale as well as
on a small scale.
To obtain consistent star measurements,
we assume a ``leveled observer" who employs a sextant with a spirit level (or equivalent surveying tricks) applied so that a vector from the surface to his zenith is parallel to the ``effective gravity" (gravity plus the centrifugal force due to the planet's rotation).
The effective gravity is not constant; a mountain will alter the gravitational field in its immediate vicinity. However, the variations in Earth's surface gravitational field are less than one part in 15,000 of its mean value. The corrugations will then have no effect on our measurements or results except perhaps in the fourth decimal place. Geodesists and oil prospectors need to be concerned about these variations, but we need not.
\section{Empirical Law 2: The Distance to the North Pole Is the Product of Colatitude
With 6360 Kilometers}
By the late 16th century, Englishmen knew that the ratio of distance at sea to degrees of arc was constant along any great circle such as the equator or any meridian,\footnote{A meridian is a Great Circle of constant longitude; it passes through both poles.}
assuming that Earth was a sphere. Robert Hues wrote in 1594 that the distance along a great circle was 60 miles per degree, that is, one nautical mile per arcminute (pg. 374 of \cite{Waters58}).
Edmund Gunter wrote in 1623 that the distance along a great circle was 20 leagues per degree (pg. 374 of \cite{Waters58}), which is the same statement in a different unit of length.
Pierre-Louis Maupertuis (1698-1759) was a major figure in physics for stating the Principle of Least-Action. In 1736,
he was appointed chief of the French Geodesic Mission to Lapland to measure the length of a degree
of latitude. Simultaneously, a second team was sent to Ecuador. The measurements triumphantly confirmed Newton's
prediction that Earth is a (slightly!) oblate spheroid. In 1799, the meter was defined to be one-quarter of
the length of a meridian divided by 10,000,000.
Let ``geodesic distance" denote the arclength of the curve that connects two points by the shortest path that lies in the surface of the planet. This definition of the meter implies that one degree of latitude shortens the
geodesic distance to the North Pole by (10,000 kilometers/90) $\sim$ 111 kilometers, consistent with
Hues, Gunter and other navigators. Collectively, these and other astronomers over many centuries
arrived at the following.
{\bf Empirical Law 2: Nautical Miles to the Pole and Degrees of Latitude}
\emph{Each increase in latitude $\varphi$ by one degree decreases the geodesic distance to the North Pole from the observer by
60 nautical miles or
equivalently 111 kilometers.}
In symbols, this is
\begin{eqnarray}
d(\varphi) =W \, \left( \dfrac{\pi}{2} - \, \varphi \right)
\end{eqnarray}
where $W$ is a length scale determined in the next section.
It is not possible to \emph{prove} this empirical law in the same way that Empirical Law 1 will be elevated to a theorem below. However, Empirical Law 2 can be verified as true by \emph{measurement}.
\section{A Trivial Derivation of the Radius of the Earth}
The distance $d$ along the geodesic (great circle arc) from the observer to the North Pole has no obvious length scale. Nevertheless, it is still convenient to write $d$ as the product of a nondimensional distance and an arbitrary length scale $W$. The latitude $\varphi$ is known for millions of cities and geographical features and can be easily determined by a sextant measurement of the elevation $\epsilon(\text{Pole Star})$ via $\varphi=\epsilon(\text{Pole Star})$. It is therefore convenient to use latitude as the nondimensional
measure of distance from the observer to the North Pole.
The formula for $d$ becomes
\begin{eqnarray}~\label{vvvv}
d(\varphi) & = & W \, \left( \pi/2 - \varphi \right).
\end{eqnarray}
Empirical Law 2 requires that the pole-to-equator geodesic distance is the length associated with one degree
of latitude (111.1 km) multiplied by the number of degrees of latitude (90) between the equator and
pole, yielding $d(0)=10,000 \text{ km}$. Evaluating Eq.~(\ref{vvvv}) at $\varphi=0$ gives
\begin{eqnarray}
10,000 \text{km} & = & W \dfrac{\pi}{2}
\end{eqnarray} which demands that
\begin{eqnarray}
W &= & 6360 \, \text{km} \\
& = & W^{earth} \,
\end{eqnarray}
Note that $W=W^{earth}$ was obtained without any presuppositions of planetary shape or curvature;
the radius of the spherical Earth has \emph{emerged
spontaneously}.
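The arithmetic of this section condenses to a few lines; the following Python snippet (an illustration, not part of the original text) recovers the 111.1 km-per-degree rule, the length scale $W$, and the 40,000 km circumference.
\begin{verbatim}
import numpy as np

quarter_meridian_km = 10_000.0              # equator-to-pole geodesic distance
print(quarter_meridian_km / 90.0)           # 111.1 km per degree of latitude

W = quarter_meridian_km / (np.pi / 2.0)     # from d(0) = W * pi/2 = 10,000 km
print(W)                                    # about 6.37e3 km: the radius of the sphere
print(2.0 * np.pi * W)                      # 40,000 km: the circumference of the globe
\end{verbatim}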
\section{Parallel Ray/Distant Source Approximation}
\begin{quote} ``The historical assumption of the sun rays parallelism
is far from being a spontaneous assumption for the students."
\end{quote} \hspace*{0.5in} --- Nicolas D\'{e}camp and C\'{e}cile
de Hosson on pg. 919 of \cite{DecampHosson12}
\bigskip
Whether light from a star falls upon the Earth or the Moon as a beam of \emph{parallel rays} or as a beam
of \emph{divergent} rays has a profound impact upon the consequences. A perfectly parallel beam is impossible,
but a beam of \emph{almost}-parallel rays will nonetheless generate a much different situation than a beam of strongly divergent rays. It is therefore essential to quantify the meaning of ``almost-parallel".
Let $D^{star}$ denote the distance from the observer's position to the target star. Let $W^{star}$ and $W^{earth}$ be the widths (radii) of the light-emission source (star) and absorber (Earth).
A ray from the center of the star to the center of the earth (``central ray") is shown as the dashed line in
Fig.~\ref{FigQuasiParallelRaySpheres}.
The ray which is the ``least parallel" (of those hitting the planet) in the diagram is the solid ray with the arrowhead. Its vertical range, as represented by the right side of the triangle in the lower half of
Fig.~\ref{FigQuasiParallelRaySpheres}, is the radius of the star plus the radius of the planet.
Let $\eta$ denote
the angle in radians between the central ray and the least parallel ray. This is an upper bound on the angles between rays from star to planet.
\begin{figure}[h]
\centerline{\includegraphics[scale=0.5]{QuasiParallelRaySpheres.pdf}}
\caption{Top: Schematic of an emission source [star or the Sun] (yellow disk), Earth (blue disk) and light rays. The dashed line with an arrowhead is the path of a ray from the center of the star to the center of the Earth; we shall call this the ``central ray". The degree of parallelness
of other rays is measured by the angle between the ray and the central ray. The least parallel ray that hits the Earth is one which, while traveling a distance $D^{star}$ ( horizontal in the diagram) from the star to the Earth is moving in the perpendicular direction (vertical in the figure) from one side of the star to the opposite side of the planet. This implies that this least-parallel ray is the hypotenuse of the triangle in the lower half of the diagram. The horizontal top side of the triangle is the path of the central ray with length $D^{star}$. The angle $\eta$ between these two rays is an upper bound on the angle between the central ray and all other rays that pass from star to planet.}
\label{FigQuasiParallelRaySpheres}
\end{figure}
\begin{definition}[Nearly Parallel Rays]
All the rays from the star to the Earth will be ``nearly parallel" if and only if the bound-on-angles $\eta \ll 1$.
\end{definition}
It will prove useful to define the ``Eratosthenes parameter" as
\begin{eqnarray}~\label{Eqnudef}
\nu \equiv \frac{ W^{star} + W^{earth} } {D^{star} } \qquad
\text{[Eratosthenes parameter]}
\end{eqnarray}
\begin{theorem}[Spread of Angles and the Eratosthenes Parameter $\nu$]
In terms of the Eratosthenes parameter $\nu$,
\begin{eqnarray}~\label{Eqeta}
\eta & = & \arctan\left( \frac{W^{star}+ W^{earth} } {D^{star} } \right) \\
& = & \arctan(\nu). \end{eqnarray}
\end{theorem}
Proof:
In the right triangle of Fig.~\ref{FigQuasiParallelRaySpheres}, the side opposite $\eta$ has length $W^{star}+W^{earth}$ and the adjacent side has length $D^{star}$, so elementary trigonometry \cite{VanBrummelen20} yields
\begin{eqnarray} \eta & = & \arctan\left(\frac{W^{star}+ W^{earth} } {D^{star} } \right) ,
\end{eqnarray} which is (\ref{Eqeta}).
The second line follows from recognizing that the argument of the arctangent is just the
Eratosthenes parameter $\nu$ as defined by (\ref{Eqnudef}).
$\blacksquare$
\begin{theorem}[Angles for Nearly Parallel Rays]~\label{ThAngNearlyPar}
If the distance to the star $D^{star}$ is large compared to the larger of the radius of the
radiating star and the radius of the target planet, then
\begin{enumerate}
\item \begin{eqnarray} \nu \ll 1 \end{eqnarray}
\item \begin{eqnarray} \eta \approx \nu
\end{eqnarray}
\item
\begin{eqnarray}
\eta \ll 1
\end{eqnarray}
This implies that all rays emitted by the star and falling on the planet will be
``nearly parallel" as defined above.
\item In the limit $\nu \rightarrow \, 0$ or equivalently, $D^{star} \rightarrow \infty$,
the rays are parallel.
\end{enumerate}
\end{theorem}
Proof: Recall that the definition of the Eratosthenes parameter is (\ref{Eqnudef} ) $\nu \equiv ( W^{star} + W^{earth})/ D^{star}$. When $D^{star}$ is large compared to the radii of the star and planet, it follows that $\nu \ll 1$,
which is the first proposition.
The Taylor expansion of the arctangent function, (4.4.42) of \cite{AbramowitzStegun65},
transforms (\ref{Eqeta}), $\eta = \arctan\left( \nu \right)$, into
\begin{eqnarray}
\eta & = & \sum_{j=0}^{\infty} \, (-1)^{j} \, \dfrac{1}{1+2 j} \, \nu^{1 + 2 j} \end{eqnarray}
Retaining just the term of smallest degree gives, with a relative error of $O(\nu^{2})$,
$\eta \approx \nu$, which is the second proposition of the theorem.
This in turn requires that small $\nu$ implies small $\eta$,
which is the third proposition. The fourth part of the theorem follows from taking the limit in the approximation $\eta = \nu +O(\nu^{3})$ and then recognizing that the limit of ``nearly parallel waves", differing in angle from the
central ray by no more than $\eta$, is \emph{parallel} waves.
$\blacksquare$
The nearly parallel approximation may be alternatively labeled the ``Distant Source"
approximation because it becomes more and more accurate as $D^{star}$ increases.
For all stars, $\nu$ is tiny because the distance to the star is huge compared to the diameter
of the star.
For Polaris itself, $D^{star}= 4 \times 10^{15}$ km and $W^{star} = 3 \times 10^{6}$ km
yield
\begin{eqnarray}
\nu \approx 0.75 \, \times 10^{-9} \qquad
\end{eqnarray}
For the Sun and earth combination, $D^{star}=1.496 \times 10^{8}$ km, and $W^{star}=6.96 \times 10^{5}$ km,
\begin{eqnarray}
\nu \approx 0.0046 \approx 1 / 215
\end{eqnarray}
The smallness of $\nu$ vindicates Eratosthenes, who assumed parallel (or nearly parallel) rays from the sun in his
measurement of the radius of the earth.\cite{DecampHosson12,MrYazdanEratosthenes,Papathomas05}
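For concreteness, the following Python snippet (an illustration, not part of the original text) evaluates $\nu$ and $\eta=\arctan\nu$ for the Polaris--Earth and Sun--Earth pairs using the figures quoted above.
\begin{verbatim}
import numpy as np

def eratosthenes_nu(W_star_km, W_earth_km, D_star_km):
    # nu = (W_star + W_earth) / D_star ; eta = arctan(nu) bounds the spread of ray angles
    nu = (W_star_km + W_earth_km) / D_star_km
    return nu, np.arctan(nu)

W_earth = 6.37e3                                   # km (approximate)
print(eratosthenes_nu(3.0e6, W_earth, 4.0e15))     # Polaris: nu ~ 7.5e-10
print(eratosthenes_nu(6.96e5, W_earth, 1.496e8))   # Sun: nu ~ 0.0047, close to 1/215
\end{verbatim}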
\begin{figure}[h]
\centerline{\includegraphics[scale=0.25]{Polaris_Parallel_on_Flat_and_Sphere.pdf}}
\caption{Parallel rays falling on a Flat Earth (left) and a spherical planet (right). All the observers
on the Flat Earth measure exactly the same elevation angle of a given star, regardless of observer location. In contrast, observers at different points on the sphere measure \emph{different} elevation angles. The elevation angle for observer C on the right is around 90 degrees, whereas the elevation angle for observer A is near 0; for observer A the rays are low, just above the horizon.}
\label{FigPolaris_Parallel_on_Flat_and_Sphere}
\end{figure}
\begin{theorem}[Flat Earth Parallel Rays] On a Flat Earth when the rays from the star are parallel, the elevation of Polaris (and every other star) is \emph{independent} of the observer's latitude.
\end{theorem}
Proof: Obvious from the left half of Fig.~\ref{FigPolaris_Parallel_on_Flat_and_Sphere}. $\blacksquare$
The Flat Earth-parallel ray prediction that every observer sees Polaris at 90 degrees of elevation is in gross contradiction to observations. Thus, the Flat Earth is possible only if the
parallel ray approximation \emph{fails}, which requires that the distance to the star is
of comparable magnitude to the larger of the radii of the earth and the star.
In contrast, if the parallel ray approximation is accurate, observers on a \emph{spherical}
earth measure \emph{different} elevation angles depending on latitude. Let us suppose that all observations are made on the same meridian as that of the target star.
The stars appear to rotate about the Pole Star; a time-lapse photograph gives a so-called ``star trails" plot in which each star (except the Pole Star) traces a circle. Each point on the star trail gives a different angle; the desired elevation angle of a star is the angle of the point on its star trail which is closest to the observer's zenith, i. e., the highest point on the star trail.
\begin{figure}[h]
\centerline{\includegraphics[scale=0.25]{Polaris_Parallel_Sphere_ObserversBC.pdf}}
\caption{Parallel rays falling on a spherical planet. Two rays from the center of the planet through the observers
define an angle $\psi$. Observers B and C will measure different elevation angles for the star. Because the rays from the star (and indeed any star) are parallel, the difference $\epsilon_{C}\, - \, \epsilon_{B}$ is due
entirely to the fact that a line segment [red] parallel to the horizontal direction of Observer B is the same as
for Observer C except for a rotation through the angle $\psi$.}
\label{FigPolaris_Parallel_Sphere_Observers}
\end{figure}
Fig.~\ref{FigPolaris_Parallel_Sphere_Observers} shows why different observers on a meridian of a
spherical planet obtain different angles for the beam of approximately parallel rays from a target star. The horizontal direction for each observer can be represented by a vector that is orthogonal to the
local effective gravity vector. The crucial point is that as we move from one observer to another, this vector
must rotate, and the rotation angle must be added to the elevation angle of the previous observer to
obtain the elevation angle at the location of the current observer as shown in the figure. Thus, observer B measures an elevation angle $90 - \psi$ where $\psi$ is about 45 degrees while observer C has an elevation angle of
$\epsilon_{C}= 90 \, \text{degrees}$. The difference in elevation angle arises because B and C are separated by about one-eighth of the circumference of the planet or equivalently by an angle of 360/8 degrees. In general, the measured elevation angles of two observers differ by an angle $\Psi$ which is 360 degrees multiplied by the
geodesic distance between the two observers divided by the circumference of the earth.
\section{Elevation Is Latitude}
\subsection{Conversion of an Empirical Law to a Theorem}
\begin{theorem}[Elevation Angle Is Latitude]~\label{ThEL1}
Let $\varphi$ denote latitude and $\epsilon$ denote the ``elevation angle", also called
the ``altitude angle" or ``altitude".
If the Pole Star is observed from a spherical planet and the distance to the Pole Star is large compared to the radius of the Pole Star (allowing the parallel rays approximation), then after $\epsilon$ is corrected for the small errors of the sextant, the slight displacement of Polaris from the northern Celestial Pole, and atmospheric refraction, etc.
\begin{eqnarray} \varphi = \epsilon \end{eqnarray}
\end{theorem}
Proof: To handle general positions of stars on the Celestial Sphere, rotate the latitudinal coordinate so that C is at a latitude of 90 degrees N. in the new coordinate $\breve{\varphi}$. (Longitude is not modified.)
Observer B is at a $\breve{\varphi}$ that differs from C's by $\psi$ degrees. Therefore,
\begin{eqnarray}
\epsilon_{B} = 90 \, \, - \, \psi \qquad [\text{degrees}] \end{eqnarray}
But the latitude of observer B is also $90 - \psi$. It follows that latitude and elevation angle
are always equal, which is the proposition of the theorem.
$\blacksquare$
This theorem is identical with Empirical Law 1, which is thus converted from an empirical relationship ---
highly probable but vulnerable to disproof by as yet undiscovered counterexamples --- to a mathematical
theorem.
\subsection{Sextant Corrections}
As noted in the introduction, the accuracy of the sextant measurement can be usefully improved by applying a number of
corrections. The details can be found in any handbook of celestial navigation \cite{Cunliffe10,Burch15} or videos such as \cite{RiggingDoctorSextant,PracticalNavigatorPolaris}; it would take us too far afield to describe them here. Instead, the correction for atmospheric refraction [next subsection] will have to serve as an illustration
of the idea behind all species of corrections.
The important point is that most of these errors can be greatly reduced by using the tables of the
\emph{Astronomical Almanac}, a joint production of the U.S. Nautical Almanac Office, U. S. Naval Observatory, Her Majesty's Nautical Almanac Office and the U. K. Hydrographic Office. The printed version contains precise
positions over time of astronomical objects both natural (stars, planets, etc.) and man-made (artificial satellites) for a given year. This ephemeris serves as a world-wide standard for such information. The online version [http://asa.hmnao.com/] extends the printed version by providing data in machine-readable form. A good source for the theory behind the \emph{Almanac} is \cite{UrbanSeidelmann12}; the history and pre-history of
navigational almanacs is described in \cite{SeidelmannHohenkerkWHOLE}.
The videos \cite{RiggingDoctorSextant,PracticalNavigatorPolaris} walk the reader through examples.
\subsection{Atmospheric Refraction}
When the material properties of the medium vary, this induces variations in the speed of light, smoothly curving or discontinuously bending
the trajectories of light rays as discussed further in Subsection~\ref{SubSecCaseIIIRefraction}.
In particular, density variations in the atmosphere alter the speed of light.
Unfortunately, the density of the atmosphere varies with the seasons, local weather phenomena and the exponential decrease of density required by hydrostatic equilibrium.
Without density measurements, the best one can do is to apply a simple analytic approximation
that represents a time-average of refraction.
Let $\epsilon_{app}$ denote the apparent elevation angle. Let $\epsilon$ denote the true elevation (or synonymously, ``true altitude''). Let $\digamma$ denote the correction to $\epsilon_{app}$. The simplest analytical formula for refraction is, in radians,
\begin{eqnarray}~\label{Old19}
\digamma & = & - 0.000293 \cot(\epsilon_{app})
\end{eqnarray}
which is given on the first page of \cite{Bennett82}. It is not recommended when $\epsilon_{app}$ is
smaller than 15 degrees.
Bennett \cite{Bennett82} compares many analytic approximations for the refraction angle $\digamma(\epsilon_{app})$. We apply his method H, a more complicated formula than (\ref{Old19}), which is widely used, notably in the
U. S. Naval Observatory's \emph{Vector Astrometry} software:
\begin{eqnarray}
\digamma \, = \, - 0.000291 \, \cot\left( \epsilon_{app} + \dfrac{7.31} {\epsilon_{app} + 4.4} \, \right)
\end{eqnarray}
Note that the argument of the cotangent is formed in degrees and must be converted to radians before the cotangent is evaluated.
Once the refraction $\digamma$ has been calculated, the best approximation to the elevation is
\begin{eqnarray}
\epsilon \approx \epsilon_{app} + \digamma
\end{eqnarray}
where $\epsilon_{app}$ is the ``apparent" elevation and $\epsilon$ is the ``true'' elevation, which is the sextant measurement modified by all non-refractive corrections. This is also the best approximation to latitude, $\varphi \approx \epsilon$.
Fig.~\ref{FigElevationEqualsLatitude_RefractionB} and Table~\ref{OldTabSextantCorrB} show that the refractive correction is less than an arcminute
for all $\epsilon_{app}$ greater than 45 degrees, which implies a navigational error, if the refractive correction is omitted, of only one
nautical mile. The correction grows as the elevation angle decreases, which is why navigators are advised to
measure Polaris and other celestial objects only when the target is at least 15 degrees above the horizon.
However, $\epsilon$ is a good approximation to latitude even when Polaris is within a degree or two
of the horizon.
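Several rows of Table~\ref{OldTabSextantCorrB} can be reproduced with the short sketch below (plain Python; the function name is ours, and the sign convention follows the formula above):
\begin{verbatim}
import math

def bennett_refraction_radians(eps_app_deg):
    """Refraction correction, Bennett formula H (sign convention of the text):
    the argument of the cotangent is formed in degrees, converted to radians,
    and the returned correction is in radians."""
    arg_deg = eps_app_deg + 7.31 / (eps_app_deg + 4.4)
    return -0.000291 / math.tan(math.radians(arg_deg))

for eps_app in [1, 5, 15, 45, 75]:
    dig_rad = bennett_refraction_radians(eps_app)
    dig_deg = math.degrees(dig_rad)
    eps_true = eps_app + dig_deg            # eps ~ eps_app + digamma
    # apparent elevation, true elevation, correction [deg], correction [n.mi.]
    print(eps_app, round(eps_true, 2), round(dig_deg, 3), round(60*dig_deg, 2))
\end{verbatim}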
\begin{figure}[h]
\centerline{\includegraphics[scale=1.2]{ElevationEqualsLatitude_RefractionC.pdf}}
\caption{ Corrected and uncorrected elevation angles and their difference on the full range of the elevation angle, $\epsilon_{app } \in [0 \,\text{degrees}, 90 \, \text{degrees}]$ (top). Note that the uncorrected and refraction-corrected curves are graphically indistinguishable and the dotted error curve is invisible underneath the horizontal axis. Bottom left: the same but on the reduced range
$\epsilon_{app } \in [0 \,\text{degrees}, 10 \, \text{degrees}]$. The lower right graph plots $ \epsilon - \epsilon_{app}$, also on the shortened range $[0, 10 \, \text{degrees}]$.}
\label{FigElevationEqualsLatitude_RefractionB}
\end{figure}
\begin{table} \caption{\label{OldTabSextantCorrB} Refraction corrections using the formula ``H" of
Bennett \cite{Bennett82}}
{\footnotesize Note that refraction in nautical miles (rightmost column) is numerically identical to the refraction in arc minutes.}
\begin{center}
\begin{tabular}{|ccccc|} \hline
$\epsilon_{app}$ [degrees] & $\epsilon$ [degrees] & $\digamma$ [radians]
& $\digamma$ [degrees] & $\digamma$ [nautical miles]
\\
0.5 & 0.021 & - 0.008 & -0.48 & -28.8 \\
1 & 0.59 & -0.01 & -0.41 & - 24.3 \\
2 & 1.70 & -0.0071 & -0.30 & -18.2 \\
5 & 4.84 & - 0.0029 & -0.165 & -9.89 \\
10 & 9.91 & - 0.0016 & -0.090 & - 5.39 \\
15 & 14.9 & -0.0011 & -0.061 & -3.63 \\
30 & 29.97 & -0.00050 & -0.029 & -1.72 \\
45 & 44.98 & -0.00029 & -0.017 & -1.00 \\
60 & 59.99 & -0.00017 & -0.0096 & -0.57 \\
75 & 75.0 & -0.000077 & -0.0044 & -0.27 \\
\hline
\end{tabular} \end{center} \vspace{5pt} \end{table}
\clearpage
\section{Why Flat Earth Needs a Nearby Polaris and Non-parallel Light From the North Star}
On a Flat Earth, the horizontal levels of all Flat Earth observers are parallel. If the light from a star like Polaris is
also approximately parallel, then, as seen in Fig.~\ref{FigPolaris_Parallel_on_Flat_and_Sphere}, \emph{all Flat Earth observers} will see Polaris \emph{at the same point} in the celestial sphere. This
is in gross contradiction to what is actually observed, $\epsilon \approx \varphi$, Theorem~\ref{ThEL1}.
Therefore, Flat Earth requires that Polaris be so close to Earth --- a few thousand kilometers or less --- that its rays are \emph{not} approximately parallel. Different observers will then see
different elevation angles.\footnote{This means jettisoning several independent measurements of the distance to Polaris by parallax and other methods that show the actual distance to Polaris is over 400 light-years. However, a central tenet of all Flat Earth sects and denominations is that all space agencies and professional astronomers are part of a vast conspiracy and hoax, and their publications and findings must be discarded. Flat Earth must be approached as fruitful for what-if scenarios, but is otherwise the exponential of the exponential of the exponential of delusion.}
The close-Polaris hypothesis allows the computation of $D^{star}$, the distance to Polaris, by triangulation as described in the next section.
If, on the contrary, Polaris is very distant, then the following corollary to Theorem~\ref{ThAngNearlyPar}
shows that calculation of $D^{star}$ is impossible.
\begin{corollary}[Triangulation Fails in the Parallel Limit]
In the limit of parallel rays, it is impossible
to obtain an estimate of $D^{star}$ from triangulation because the relevant triangles are infinitely
long and narrow, degenerating into rays. This conclusion is \emph{independent} of the shape of the planet.
\end{corollary}
Proof: Take the limit, for fixed $W^{star}$ and $W^{earth}$, of $D^{star} \rightarrow \infty$
in Fig.~\ref{FigQuasiParallelRaySpheres}. $\blacksquare$
\section{Triangulation on the Flat Earth for a Local Polaris}
Let $D$ denote the distance from the North Pole to the Pole Star. Equivalently, $D$ is the length of a geodesic from the star to the North Pole. Because the star is at the zenith of an observer standing on the North Pole (a ``leveled observer" as defined earlier), the line segment is perpendicular to the surface of Flat Earth.
Let $d$ denote the distance from the observer's position to the North Pole. Let $\epsilon$ denote the elevation angle.
Fig.~\ref{FigPolaris_Triangle} shows a triangle whose vertices are (i) the observer's position on the surface
of the planet (ii) the North Pole and (iii) the Pole Star. It is a good first approximation to pretend Polaris is the Pole Star because the error is only three-quarters of a degree.
\begin{figure}[h]
\centerline{\includegraphics[scale=0.5]{Polaris_TriangleAngles.pdf}}
\caption{Triangulating the Pole Star on a Flat Earth.} \label{FigPolaris_Triangle}
\end{figure}
The ``Law of Triangles" \cite{VanBrummelen20} then shows that the ratio of the two shorter sides, $D/d$, is equal to the tangent of
the angle $\epsilon$ at the observer's vertex, or in other words
\begin{eqnarray}~\label{EqTri}
D = d \tan(\epsilon) & \leftrightarrow & d = D \cot(\epsilon)
\end{eqnarray}
We shall apply the left equation in Flat Earth Case I and the right equation [$d(D, \epsilon)$] in
Flat Earth Case II.
\section{Flat Earth Case I : $d$ [distance to pole] is a linear
function of elevation angle $\epsilon$}
\begin{eqnarray}
\tan(\epsilon) = \dfrac{ D}{d} \qquad \Leftrightarrow \qquad
\epsilon =
\arctan\left( \dfrac{ D}{d} \right)
\qquad \text{Pole-Star-on-the-Flat-Earth}
\end{eqnarray}
where
$D$ is the distance to Polaris and $d$ is the distance from the
observation point to the North Pole. This can be rewritten to give the unknown $D$ in terms of the known
quantities $d$ and $\epsilon(d)$:
\begin{eqnarray}
D = \, d \,
\tan(\epsilon)
\qquad \Leftrightarrow \qquad
D = d \,
\tan\left( \epsilon(d) \right)
\qquad \text{Pole-Star-on-the-Flat-Earth}
\end{eqnarray}
Substituting
\begin{eqnarray}
d(\varphi) & = & W \, \left( \dfrac{\pi}{2} \, -
\, \varphi \, \right) \end{eqnarray}
where $W= 6360$ km and replacing elevation angle $\epsilon$ by latitude $\varphi$ yields
\begin{eqnarray}~\label{EqMoonFairy}
D(\varphi) = W\, \left( \pi/2 - \varphi \, \right) \,\tan\left( \varphi \, \right)
\end{eqnarray}
This function is plotted
in Fig.~\ref{FigPolaris_FlatEarthCase_I}.
Table~\ref{TableFlatEarthOneOld3} lists numerical values of the distance to Polaris for different latitudes of the observer.
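The entries of Table~\ref{TableFlatEarthOneOld3} can be reproduced, to within the rounding of $W$, with the sketch below (plain Python; the helper name is ours and $W=6360$ km is the value quoted above):
\begin{verbatim}
import math

W = 6360.0                                   # planetary radius surrogate [km]

def flat_earth_case_I(lat_deg):
    """Case I: d = W*(pi/2 - phi) and D = d*tan(phi)."""
    phi = math.radians(lat_deg)
    d = W * (math.pi / 2 - phi)              # observer-to-pole distance [km]
    D = d * math.tan(phi)                    # inferred height of Polaris [km]
    return d, D

for lat in [1, 15, 45, 75, 89]:
    d, D = flat_earth_case_I(lat)
    print(lat, round(d), round(D))
\end{verbatim}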
\begin{figure}[h]
\centerline{\includegraphics[scale=1.0]{Polaris_FlatEarthCase_I.pdf}}
\caption{Flat Earth Case I: The plot shows the distance $D$ from the North Pole to the Pole
Star, scaled by $W=6360$ km, so that the nondimensional quantity plotted is $D/W= (\pi/2 - \varphi) \, \tan(\varphi)$. The figure
assumes that $d= W \, \left(\pi/2 - \varphi \right)$. }
\label{FigPolaris_FlatEarthCase_I}
\end{figure}
For a Flat Earth evangelist, the very bad news is that Eq.~\ref{EqMoonFairy} predicts a \emph{different}
location for Polaris for every latitude. An observer at 75 degrees N. latitude sees Polaris more than 6220 km
above the North Pole whereas an observer in the tropics at 15 degrees N. sees Polaris only 2233 km above the Flat Earth. One is reminded of the cartoon character Multiman, who could almost instantly create an army of clones.
Even his magic powers fail here, however: placing clones of Polaris at both 2233 km and 6220 km above the planet would mean the observer now sees \emph{two} Polarises. Since the number of observers is not restricted
to two but may be arbitrarily large, we are immediately confronted with an \emph{infinite} number of yellow giant clones of Polaris.
To amplify the absurdity, $\max(D)=W=6360$ km. This implies that Polaris is always \emph{near} and \emph{local}, but can a star so close (with a distance of 10,000 km at most)
\begin{enumerate}
\item be a point source
\item be a star of the \emph{first magnitude} over the entire daylight sky
\item
while invisible at night?
\end{enumerate}
\begin{table}[h]
\caption{ \label{TableFlatEarthOneOld3} Distance $D$ from the North Pole to the North Star, Polaris. The star
cannot be in multiple places at the same time, so this Flat Earth model fails.}
\vspace{5pt}
\begin{center} {\footnotesize Distances $D$ have been independently checked and confirmed in \cite{BlueMarbleScienceMitchellTrig}.}
\begin{tabular}{|c|c|c|c|} \hline
Latitude (degrees) & $\epsilon, \varphi$ (radians) & $d$ & $D$ [Distance to Polaris] \\ \hline
1 degree & $(1/180) \pi$ & 9889 km & 173 km \\
15 degrees & $\pi/12$ & 8333 km & 2233 km \\
45 degrees & $\pi/4$ & 5000 km & 5000 km \\
75 degrees & $(5/12) \pi$ & 1667 km & 6220 km \\
89 degrees & $(89/180) \pi$ & 111.11 km & 6360 km \\ \hline
\end{tabular} \end{center} \vspace{5pt} \end{table}
\clearpage
\section{Flat Earth Case II: Polaris Location $D$ is Independent of Latitude}
Polaris cannot simultaneously be at every possible distance $D$ above the North Pole from 0 to 6360 km.
Can this unreality be fixed?
The linear relationship between $d$, the distance from the observer to the North Pole, and latitude $\varphi$,
which is $d(\varphi) = W \, \left( \dfrac{\pi}{2} \, -
\, \varphi \, \right) $ as derived in the previous section,
is supported by at least half a millennium of experience at sea by those whose lives literally depended on it! Nevertheless, in the spirit that desperate illnesses often
require drastic remedies, we now explore a \emph{nonlinear} relationship $d(\varphi)$.
The right equation in Eq.~(\ref{EqTri}) gives a relationship between $d$, $D$ and $\epsilon$.
If $D$ is to be a constant so that Polaris has a single, unique distance from the planet, independent of the observer's latitude, then the constraint from (\ref{EqTri}) becomes, with $D$ as a constant,
\begin{eqnarray}
d(\varphi)= D \,\cot( \varphi),
\end{eqnarray}
as illustrated in Fig.~\ref{FigPolaris_FlatEarthCase_II_d_vs_varphi}.
However, as $\varphi \rightarrow 0$ (the equator),
$d$ diverges to infinity.
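A few lines of Python (a sketch; the fixed Polaris height $D$ below is an arbitrary illustrative value) make the divergence explicit:
\begin{verbatim}
import math

D = 5000.0                                # assumed fixed height of Polaris [km]
for lat in [45, 10, 1, 0.1]:
    d = D / math.tan(math.radians(lat))   # Case II: d = D*cot(phi)
    print(lat, "degrees ->", round(d), "km from the North Pole")
\end{verbatim}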
Since $d(0)=\infty$, the Northern Hemisphere, latitude $\varphi \in [0, \pi/2]$, is mapped to the entire infinite plane.
Where, then, to put the Southern Hemisphere? One could perhaps put the Southern Hemisphere on the other
side of the infinite plane. Then a journey across the equator --- easy in our reality --- would become a journey
of infinite length. The Pan-American Highway from Alaska to Chile would be stretched to a never-ending strip of pavement.
We are left with insoluble difficulties:
\begin{enumerate}
\item The Northern Hemisphere is an
\emph{infinite} disk.
\item There is no place to put the Southern Hemisphere.
\item $\lim_{\varphi \rightarrow 0} d(\varphi)= \infty$.
\item The nonlinear relationship between $d$ and $\varphi$ is refuted
by many centuries of navigation on land, sea and air.
\end{enumerate}
\begin{figure}[h]
\centerline{\includegraphics[scale=1.0]{Polaris_FlatEarthCase_II_d_vs_varphi.pdf}}
\caption{Flat Earth Case II. The distance $d$ from the North Pole
to the observer at latitude $\varphi$, scaled by $W$,
as illustrated for three different values of the distance $D$ (in kilometers) to Polaris.
}
\label{FigPolaris_FlatEarthCase_II_d_vs_varphi}
\end{figure}
\clearpage
\subsection{Flat Earth Case III: Flat Earth with Refracted Light Rays}~\label{SubSecCaseIIIRefraction}
Another possibility is that atmospheric refraction of the light from Polaris can remove the absurdities of the
two previous Flat Earth models. An important constraint is that refraction is zero in a vacuum. Therefore,
refraction will be confined to a layer of thickness $L$ where $L$ is at most a few kilometers. Equivalently, refractive light-bending occurs only in the lower and middle atmosphere. Since the distances $D$ from the North Pole to Polaris have been a few hundred to a few thousand kilometers in the earlier sections, we shall
make the plausible assumption that
\begin{eqnarray} L \ll D
\end{eqnarray}
We shall suppose for
simplicity that the index of refraction (i) is equal to one, its vacuum value, everywhere above the refractive layer and (ii) varies discontinuously at $z=L$:
\begin{eqnarray}
n= \left\{ \begin{array}{c} 1, \qquad \, \qquad z > L \\
n_{bot}(\varphi), \qquad z \, \leq \, L \end{array} \right.
\end{eqnarray}
where $z$ is height above the surface of the planet.
Snell's Law is usually stated in terms of the angles $\theta_{i}$ and
$\theta_{o}$ for the ray incoming from the Pole Star and the outgoing,
refracted wave, respectively, as shown in
Fig.~\ref{FigPolaris_Snells_Law}. Denoting the indices of refraction
in the top and bottom layers by $n_{top}$ and $n_{bot}$, respectively,
Snell's law is
\begin{eqnarray}
\dfrac{\sin(\theta_{o})}{\sin(\theta_{i})} = \dfrac{n_{top}}{n_{bot}}
\end{eqnarray}
\begin{figure}[h]
\centerline{\includegraphics[scale=0.8]{Polaris_Snells_Law.pdf}}
\caption{Snell's Law of Refraction for Flat Earth Case III for the Pole Star. The
horizontal dotted line is the boundary [interface] between the two layers of air; the dashed line is a vertical guideline
perpendicular to the interface. $\theta_{i}$ and $\theta_{o}$ are the angles
of the incoming and outgoing light rays. $\phi$ is the colatitude. }
\label{FigPolaris_Snells_Law}
\end{figure}
However, we previously employed angles
that are the complements of $\theta_{i}$ and $\theta_{o}$, $\delta=
\pi/2 - \theta_{i}$ and $\epsilon=\pi/2 - \theta_{o}$. Furthermore $n_{top}=1$. Making these substitutions and invoking the identity that the sine of an angle is equal to the cosine of its complementary angle, Snell's Law can
be rewritten as
\begin{eqnarray} n_{bot}(\varphi) = \dfrac{\cos\left(\delta(\varphi) \right)}{\cos\left(\epsilon(\varphi)\right)}
\end{eqnarray}
To obtain expressions for $\delta(\varphi)$ and $\epsilon(\varphi),$ we execute the following steps:
\begin{enumerate}
\item Empirical Law 1 is
\begin{eqnarray} \epsilon = \varphi \end{eqnarray}
\item Empirical Law 2 gives the observer-to-North-Pole distance as
\begin{eqnarray} d = W \, \left( \dfrac{\pi}{2} \, - \, \varphi \right) \end{eqnarray}
\item Trigonometric identities applied to the triangle with vertices at the North Star, the North Pole
and the observer's position give
\begin{eqnarray}
\delta = \arctan\left( \dfrac{W}{ D} \left( \dfrac{\pi}{2} - \varphi \right)\right)
\end{eqnarray}
\end{enumerate}
Then
\begin{eqnarray} n_{bot}(\varphi) = \dfrac{ \cos\left(
\operatorname{arctan}\left( \dfrac{W}{ D} \left( \dfrac{\pi}{2} - \varphi \right)\right)
\right)}
{\cos(\varphi)}
\end{eqnarray}
\begin{figure}[h]
\centerline{\includegraphics[scale=0.4]{Polaris_refraction_FE_schemZ.pdf}}
\caption{Flat Earth Case III. The downward-propagating light ray from Polaris (broken line segment with chevrons) has an elevation angle $\delta$ in the near-vacuum region $z \in [L, D]$.
At $z=L$ [dashed line], the density jumps discontinuously with the density larger in the thin layer $z \in [0, L]$. The light bends at the density jump so that the measured elevation angle is
altered by refraction from $\delta$ (for $z > L$) to give the smaller angle
$\epsilon$ seen by an observer on the ground. The observer is a distance $d$ from the North Pole. Note that although it is difficult to show these triangles to scale [and these triangles are therefore omitted from the graph], $D \gg L$ and $d \gg d_{L}$. These inequalities make it possible to accurately apply triangle formulas with the approximation $L=0$, i.e., as if the refractive layer were absent. }
\label{FigPolaris_refraction_FE_schem}
\end{figure}
The index of refraction observed in the atmosphere is very close to one. The
National Institute of Standards and Technology (NIST) offers an Engineering
Metrology Toolbox at emtoolbox.nist.gov. This includes a calculator to
compute the index of refraction for air as a function of temperature,
humidity and pressure. One finds that even for extreme pressure, density and temperature,
the index of refraction is less than 1.004.
Furthermore, the atmosphere slows down the incoming light; the index of refraction can never be less than 1.
Thus, the physically realizable values of the index of refraction are
\begin{eqnarray}~\label{Eqbadindex}
1 \, \leq \, n_{bot} \leq 1.004
\end{eqnarray}
Fig.~\ref{FigPolaris_FlatEarthCase_III_Inbot_vs_varphi} shows the required index of refraction
in the lower, refractive layer for the Flat Earth to have a single, unique distance to Polaris. For almost all observer latitudes, the needed index of refraction $n_{bot}$ is unphysical, violating (\ref{Eqbadindex}).
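The curves of Fig.~\ref{FigPolaris_FlatEarthCase_III_Inbot_vs_varphi} can be spot-checked with the sketch below (plain Python; the helper name is ours). It simply evaluates the expression for $n_{bot}(\varphi)$ above and flags values outside the physical window~(\ref{Eqbadindex}):
\begin{verbatim}
import math

W = 6360.0                                 # km

def n_bot(lat_deg, D_km):
    """Index of refraction required in the lower layer, Flat Earth Case III."""
    phi = math.radians(lat_deg)
    delta = math.atan((W / D_km) * (math.pi / 2 - phi))  # ray angle above layer
    return math.cos(delta) / math.cos(phi)               # Snell's law, n_top = 1

for lat in [10, 30, 45, 60, 80]:
    n = n_bot(lat, D_km=5000.0)            # D = 5000 km as an example
    physical = 1.0 <= n <= 1.004           # physically realizable window
    print(lat, round(n, 3), "physical" if physical else "unphysical")
\end{verbatim}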
\begin{figure}[h]
\centerline{\includegraphics[scale=1.0]{Polaris_FlatEarthCase_III_Inbot_vs_varphi.pdf}}
\caption{Flat Earth Case III. $n_{bot}(\varphi)$ for three different geodesic distances $D$ of Polaris from
the North Pole. Values with the index of refraction less than one, which are not physically realizable, are marked with disks.}
\label{FigPolaris_FlatEarthCase_III_Inbot_vs_varphi}
\end{figure}
A little reflection will show that the simplifying assumptions made earlier have no effect on the conclusion: Refraction fails to salvage the Flat Earth.
\section{Summary}
For at least half a millennium, navigators have used the empirical rule that the elevation of the Pole Star (Polaris) is equal to latitude. We have shown that this proposition can be elevated to a proved theorem.
A second ancient empirical law is that the distance in kilometers from the observer to the North Pole, the geodesic distance measured along the spherical surface of the planet, is the number of degrees of colatitude multiplied by 111.1 kilometers. This cannot be proved \emph{a priori}, but we show
that this relationship can be established by \emph{measurements}.
However, once Empirical Law 2, namely that the distance to the pole is colatitude multiplied by 111 km, is confirmed by measurement, we prove that it rigorously
and uniquely determines the circumference of the Earth to be 40,000 km.
We also extend these ideas and the parallel ray approximation to three different ways of modeling a
Flat Earth. All fail. As noted by many previous videographers, too numerous to list, triangulation gives different distances from Earth to Polaris for each latitude.
Absurd!
Photographs from space, taken by a very expensive satellite or by astronauts on the moon,
are unnecessary. Simple mathematics and ancient empirical laws prove the Earth a sphere just as
well.
| {
"timestamp": "2022-06-16T02:18:07",
"yymm": "2206",
"arxiv_id": "2206.07491",
"language": "en",
"url": "https://arxiv.org/abs/2206.07491",
"abstract": "For millenia, sailors have used the empirical rule that the elevation angle of Polaris, the North Star, as measured by sextant, quadrant or astrolabe, is approximately equal to latitude. Here, we show using elementary trigonometry that Empirical Law 1 can be converted from a heuristic to a theorem. A second ancient empirical law is that the distance in kilometers from the observer to the North Pole, the geodesic distance measured along the spherical surface of the planet, is the number of degrees of colatitude multiplied by 111.1 kilometers. Can Empirical Law 2 be similarly rendered rigorous? No; whereas as the shape of the planet is controlled by trigonometry, the size of our world is an accident of cosmological history. However, Empirical Law 2, can be rigorously verified by measurements. The association of 111 km of north-south distance to one degree of latitude trivially yields the circumference of the globe as 40,000 km. We also extend these ideas and the parallel ray approximation to three different ways of modeling a Flat Earth. We show that photographs from orbit, taken by a very expensive satellite, are unnecessary to render the Flat Earth untenable; simple mathematics proves Earth a sphere just as well.",
"subjects": "Popular Physics (physics.pop-ph); Physics Education (physics.ed-ph); History and Philosophy of Physics (physics.hist-ph)",
"title": "Polaris: The Mathematics of Navigation and the Shape of the Earth",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9637799399736476,
"lm_q2_score": 0.7341195269001831,
"lm_q1q2_score": 0.707529673569341
} |
https://arxiv.org/abs/1101.1104 | Reduced models of networks of coupled enzymatic reactions | The Michaelis-Menten equation has played a central role in our understanding of biochemical processes. It has long been understood how this equation approximates the dynamics of irreversible enzymatic reactions. However, a similar approximation in the case of networks, where the product of one reaction can act as an enzyme in another, has not been fully developed. Here we rigorously derive such an approximation in a class of coupled enzymatic networks where the individual interactions are of Michaelis-Menten type. We show that the sufficient conditions for the validity of the total quasi steady state assumption (tQSSA), obtained in a single protein case by Borghans, de Boer and Segel can be extended to sufficient conditions for the validity of the tQSSA in a large class of enzymatic networks. Secondly, we derive reduced equations that approximate the network's dynamics and involve only protein concentrations. This significantly reduces the number of equations necessary to model such systems. We prove the validity of this approximation using geometric singular perturbation theory and results about matrix differentiation. The ideas used in deriving the approximating equations are quite general, and can be used to systematize other model reductions. | \section{Introduction}
The Michaelis-Menten (MM) scheme~\citep{BH,MM} is a fundamental building block of many models of
protein interactions:
An enzyme, $E$, reacts with a protein, $X$, resulting in an intermediate complex, $C$. In turn, this complex can break down into a product, $X_p$, and the enzyme $E$. It is frequently assumed that formation of $C$ is reversible while its breakup is not. The process is represented by the following sequence of reactions~\citep{BH,MM,murray2003}
\begin{equation} \label{MM}
X + E \overset{k_1}{\underset{k_{-1}}{\rightleftharpoons}} C \overset{k_2}{\rightarrow} X_p + E.
\end{equation}
Catalytic activity in protein interaction networks is frequently modeled by MM equations~\citep{huangFerrell1996,novakPatakiCilibertoTyson2001,novakPatakiCilibertoTyson2003,UriAlon2007,davitichBornholdt2008,stadman1977,NovakTyson1993,goldbeter91}. This gives rise to \emph{coupled enzymatic networks}, where the substrate of one reaction acts as an enzyme in another reaction.
A direct application of the law of mass action to such models
typically leads to high dimensional differential equations which are often stiff, and difficult to study directly.
A number of methods have been introduced to address these problems. Most of these methods are based on \emph{quasi steady state assumptions} which take advantage of the differences in characteristic timescales of the quantities being modeled. It is typically assumed that the chemical species, or some combinations of chemical species, can be divided into two classes: One which equilibrates rapidly, and a second which evolves more slowly~\cite{hek2010,othmerLee2010}. Assuming that the members of the first class equilibrate instantaneously leads to a reduced model involving only elements of the second class.
Reduction methods differ in their assumptions about which chemical species, or combinations thereof, are assigned to the two different classes. For instance, the \emph{standard quasi steady state assumption (sQSSA)} posits that the concentrations of the intermediate complexes change quickly compared to the protein concentration~\citep{GK,segel,segelAndSlemrod,maini,NoethenWalcher2006}. An alternative is the \emph{reverse quasi steady state assumption (rQSSA)} where the protein concentration is assumed to change rapidly compared to intermediate complexes~\citep{SchnellMaini2000}. Rigorous justifications of these methods are largely available only for isolated reactions of the type shown in scheme~\eqref{MM}, and the Goldbeter-Koshland switch\footnote{A Goldbeter-Koshland switch consists of two coupled reactions. One of these reactions frequently represents protein phosphorylation, and the second dephosphorylation.}~\citep{GK}.
The total quasi steady state assumption (tQSSA) was introduced to broaden the range of parameters over which a \emph{quasi steady state assumption} is valid. Under this assumption the concentration of the intermediate complex, $C$, evolves quickly compared to the sum of the intermediate complex and the protein concentration~\citep{BorghansBoerSegel1996,tzafriri,hegland,PedersenBersaniBersani2008,tzafririEdelman2004}.
Numerical experiments and heuristic arguments suggest that tQSSA may be valid in coupled enzymatic networks over a very broad set of parameters~\citep{CilibertoFabrizioTyson2007}.
Here we aim to provide a theoretical foundation for the reductions used in numerical studies of enzymatic networks. A standard model reduction technique for systems involving quantities that change on different timescales is geometric singular perturbation theory (GSPT)~\citep{Fenichel1979,kaper,hek2010}. For instance, this theory has been used by Khoo and Hegland to prove several results obtained earlier by Borghans, et al. using self consistency arguments~\citep{hegland,BorghansBoerSegel1996}.
GSPT has also been used to reduce other models of biochemical reactions~\citep{ZagarisKaperKaper2004,HardinZagarisKrabWesterhoff2009,othmerLee2010}. We derive a sufficient condition for the validity of tQSSA in arbitrary networks of proteins and enzymes provided the interactions are of MM type and can be modeled by mass action kinetics. This directly extends previous work, like that of Pedersen, et al.~\citep{PedersenBersaniBersaniCortese2008} who proposed a sufficient condition for the validity of tQSSA in the Goldbeter-Koshland switch.
The direct application of the tQSSA to coupled enzymatic networks generally leads to a differential-algebraic system. The algebraic part of this system consists of coupled quadratic equations that are typically impossible to solve. Our second aim is to show that, under certain assumptions on the structure of the network, it is possible to circumvent this problem using
ideas introduced by Bennett, et al.~\citep{BennettVolfsonTsimringHasty2007}. This allows us to obtain a reduced set of differential equations for a class of protein interaction networks in terms of protein concentrations only.
We proceed as follows: In section~\ref{IsolatedMichaelisMentenReaction} we review the original Michaelis--Menten scheme. We introduce terminology, and illustrate our approach in a simple setting. In
this section we also give a brief overview of geometric singular perturbation theory, which is fundamental in proving the validity of the reduced equations. In section~\ref{sec:may3010_TwoProtein} we extend our approach to a well studied two protein network that plays a part in the G2-to-mitosis phase (G2/M) transition in the eukaryotic cell cycle. We present the ideas in the most general setting in section~\ref{sec:TheGeneralProblem}, where we derive the general form of the reduced equations. Each section begins with a discussion of the tQSSA in the context of the network under consideration, and closes with a derivation of the reduced equations under the tQSSA, as well as sufficient conditions under which the tQSSA holds. A number of technical details used in the proofs of the main results are given in the appendices.
We note that throughout the presentation the law of mass action is assumed to hold.
\section{Isolated Michaelis-Menten reaction}\label{IsolatedMichaelisMentenReaction}
The MM scheme is frequently used to model enzymatic processes in solution
which are ubiquitous in biology. As discussed in the introduction, a number of different
approaches have been proposed to justify the reduced equations mathematically. We start by giving
a detailed overview of the tQSSA approach based on geometric singular perturbation theory
(GSPT)\citep{Fenichel1979}. The setting of a single MM type reaction will be used to introduce the main ideas and difficulties of reducing equations that describe larger reaction networks.
For notational convenience we will use variable names to denote both a chemical species and its concentration. For instance, $E$ denotes both an enzyme and its concentration.
Reaction~\eqref{MM} obeys two natural constraints: the total amounts of protein and enzyme remain constant. Therefore,
\begin{equation}\label{25Mar10_XT}
X + C + X_p = X_T, \quad \text{and} \quad
E + C = E_T,
\end{equation}
for positive constants $X_T$ and $E_T$.
In conjunction with the constraints~\eqref{25Mar10_XT}, the following system of ordinary differential equations
can be used to model reaction~\eqref{MM}
\begin{align}\label{25mar10_oneDimEq}
\frac{dX}{dt} &= -k_1X(E_T-C) +k_{-1}C, &X(0) &= X_T, \notag \\
\frac{dC}{dt} &= k_1X(E_T-C) - (k_{-1}+k_2)C, & C(0) &= 0.
\end{align}
\subsection{The total quasi steady state assumption (tQSSA)}
\label{S:intro_one}
Under the standard quasi steady state assumption (sQSSA), the concentration of the substrate--bound enzyme, $C$, equilibrates quickly, which allows
system~\eqref{25mar10_oneDimEq} to be reduced by one dimension. Sufficient conditions under
which the sQSSA is valid have been studied extensively~\citep{GK,segel,maini}. However, it has also been observed that
the sQSSA is too restrictive~\citep{BorghansBoerSegel1996,tzafriri}.
To obtain a reduction that is valid for a wider range of parameters, define $ \bar{X} := X+C$. Eq.~(\ref{25mar10_oneDimEq}) can then be rewritten as
\begin{subequations}\label{26mar10_A}
\begin{align}
\frac{d\bar{X}}{dt} &= -k_2C,
&\bar{X}(0) &= X_T, \label{26mar10_Aa} \\
\frac{dC}{dt} &= k_1[\bar{X}E_T-(\bar{X}+E_T + k_m)C +C^2], &C(0) &= 0,\label{26mar10_Ab}
\end{align}
\end{subequations}
where $k_m = (k_{-1}+k_2)/k_1$ is the \textit{Michaelis--Menten constant}.
The tQSSA posits that $C$ equilibrates quickly compared to $\bar{X}$~\citep{BorghansBoerSegel1996,tzafriri}.
Under this assumption we obtain the following differential--algebraic system
\begin{subequations}\label{27mar10_A}
\begin{align}
\frac{d\bar{X}}{dt} &= -k_2C, &\qquad
\bar{X}(0) = X_T,\label{27mar10_Aa} \\
0 &= k_1[\bar{X}E_T-(\bar{X}+E_T + k_m)C +C^2].\label{27mar10_Ab}
\end{align}
\end{subequations}
Solving Eq.~(\ref{27mar10_Ab}) and noting that only the negative branch of solutions is stable, we can express $C$ in terms of $\bar{X}$ to obtain a closed, first order differential equation for $\bar{X}$,
\begin{equation}\label{24May10_barX}
\frac{d \bar{X}}{dt} = -k_2\frac{
(\bar{X}+E_T + k_m) - \sqrt{(\bar{X}+E_T + k_m)^2- 4\bar{X}E_T} } {2}
, \qquad
\bar{X}(0) = X_T.
\end{equation}
Although the reduced equation is given in the $\bar{X}, C$ coordinates, it is easy to revert to the original variables $X,C$.
Therefore, from Eq.~(\ref{24May10_barX}) one can recover an approximation to the solution of Eq.~(\ref{25mar10_oneDimEq}).
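As a numerical illustration, the full system~(\ref{25mar10_oneDimEq}) and the reduced equation~(\ref{24May10_barX}) can be integrated side by side and the quantity $\bar{X} = X + C$ compared. The sketch below assumes NumPy and SciPy are available and uses the rate constants that appear later in Fig.~\ref{fig:initialValue}:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

k1, km1, k2 = 1.0, 3.0, 1.0
XT, ET = 1.0, 1.0
km = (km1 + k2) / k1

def full(t, y):                       # the two-dimensional system for (X, C)
    X, C = y
    return [-k1*X*(ET - C) + km1*C,
            k1*X*(ET - C) - (km1 + k2)*C]

def reduced(t, y):                    # reduced equation, negative branch for C
    Xbar = y[0]
    s = Xbar + ET + km
    C = (s - np.sqrt(s*s - 4.0*Xbar*ET)) / 2.0
    return [-k2*C]

T = np.linspace(0.0, 10.0, 200)
sol_full = solve_ivp(full, (0, 10), [XT, 0.0], t_eval=T, rtol=1e-8)
sol_red  = solve_ivp(reduced, (0, 10), [XT], t_eval=T, rtol=1e-8)

err = np.max(np.abs(sol_full.y[0] + sol_full.y[1] - sol_red.y[0]))
print("max |Xbar_full - Xbar_reduced| =", err)
\end{verbatim}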
\subsection{Extension of the tQSSA}
\label{S:extension}
An essential step in the tQSSA reduction is the solution of the quadratic equation~(\ref{27mar10_Ab}). A direct extension of this approach to networks of chemical reactions typically leads to a coupled system of quadratic equations~\citep{CilibertoFabrizioTyson2007,PedersenBersaniBersaniCortese2008,PedersenBersaniBersani2008}.
The solution of this system may not be unique, and generally needs to be obtained numerically.
However, an approach introduced by Bennett, et al.~\citep{BennettVolfsonTsimringHasty2007}, can be used to obtain the desired solution from a system of linear equations.
In particular, we keep the
tQSSA, but look for a reduced equation in the original
coordinates, $X, C$.
Using $ \bar{X} = X + C$ to eliminate $\bar{X}$ from Eq.~(\ref{27mar10_Ab}), we obtain
\begin{equation}\label{28april10_C}
0 = k_1\left(X(E_T-C) - k_mC\right).
\end{equation}
Eq.~(\ref{28april10_C}) and Eq.~(\ref{27mar10_Ab})
are equivalent, but Eq.~(\ref{28april10_C}) is linear in $C$, and leads to
\[
C = \frac{XE_T}{k_m+X}, \text{ and } \bar{X} = X + \frac{XE_T}{k_m+X}.
\]
Using these expressions in Eq.~(\ref{27mar10_Aa}) and applying the chain rule gives
\begin{subequations}
\begin{equation}
\frac{\partial}{\partial X} \left( X + \frac{XE_T}{k_m+X}\right)\frac{dX}{dt} = -k_2 \frac{XE_T}{k_m+X}
\quad \Longrightarrow \quad
\frac{dX}{dt} = -k_2 \left( 1 + \frac{k_mE_T}{(k_m+X)^2}\right)^{-1} \frac{XE_T}{k_m+X}. \label{27mar10_dxdt}
\end{equation}
The reduced Eq.~\eqref{27mar10_dxdt} was obtained under the assumption that there is no significant change in $\bar{X} = X + C$ during the rapid equilibration. After equilibration, $C = XE_T/(k_m+X)$ (see Fig.~\ref{fig:initialValue}). Therefore, the initial value for Eq.~(\ref{27mar10_dxdt}), denoted by $\hat{X}(0)$, can be obtained from the initial values $X(0), C(0)$ using
\begin{equation}\label{june0810_initialVal}
\hat{X}\left(0\right) + \frac{E_T\hat{X}\left(0\right)}{\hat{X}\left(0\right)+k_m} = X(0)+ C(0) = X_T.
\end{equation}
\end{subequations}
Fig.~\ref{fig:initialValue}c) shows that the solutions of the full system~(\ref{25mar10_oneDimEq}) and the reduced system~(\ref{27mar10_dxdt}) are close when initial conditions are mapped correctly.
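The same comparison can be carried out directly in the original coordinate (a sketch under the same assumptions and parameter values as above); the projected initial value $\hat{X}(0)$ is obtained by solving Eq.~(\ref{june0810_initialVal}) numerically:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

k1, km1, k2 = 1.0, 3.0, 1.0
XT, ET = 1.0, 1.0
km = (km1 + k2) / k1

# Projected initial condition: Xhat + ET*Xhat/(Xhat + km) = X(0) + C(0) = XT.
Xhat0 = brentq(lambda X: X + ET*X/(X + km) - XT, 0.0, XT)

def reduced_X(t, y):                  # reduced equation in the X coordinate
    X = y[0]
    C = X*ET/(km + X)
    jac = 1.0 + km*ET/(km + X)**2     # d(X + C)/dX
    return [-k2*C/jac]

sol = solve_ivp(reduced_X, (0, 10), [Xhat0], rtol=1e-8)
print("Xhat(0) =", round(Xhat0, 2), "  X(10) ~", sol.y[0, -1])
\end{verbatim}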
\clearpage
\begin{figure}[t]
\begin{center}
\includegraphics[scale=.165]{InitialCondtionIllustration_3}
\end{center}
\caption{\footnotesize{
Proper choice of the initial values of the reduced system. The empty circle at $ \bar{X} = 1,\, C = 0,$ represents the initial value for the full system. The solid dot is the initial value of the reduced system. The dash-dotted (red) line represents the attracting \emph{slow manifold}. \emph{(a)} The solid curve represents the numerical solution of Eq.~(\ref{26mar10_A}). The solution rapidly converges to the manifold, and evolves slowly along the manifold after this transient. The dashed line satisfies $\bar{X} = X_T$. The solid dot at the intersection of the dashed line and the slow manifold represents the projection of the initial condition onto the slow manifold given by Eq.~(\ref{27mar10_Ab}). Thus $\bar{X}(0) = X_T$ is the proper initial condition for the reduced system~\eqref{24May10_barX}. \emph{(b)} The solid line represents the numerical solution of Eq. (\ref{25mar10_oneDimEq}). After a quick
transient, the solution again converges to the slow manifold. However, since the initial
transient is not orthogonal to the $X$ axis, the initial conditions do not project vertically onto the slow manifold. Instead, the initial transient follows the line $X + C = X_T$ (dashed), and the intersection of this line and the slow manifold represents the proper choice of the initial value for Eq.~(\ref{27mar10_dxdt}).
\emph{(c)}
Comparison of solutions of Eq.~(\ref{25mar10_oneDimEq}) and the reduced system~(\ref{27mar10_dxdt}). The graph in the inset offers a magnified view of the boxed region, showing the quick transient to the slow manifold.
We used: $X_T = E_T = k_1 = k_2 = 1, \, k_{-1} = 3$, which, using
Eq.~(\ref{june0810_initialVal}), gives the initial condition for the reduced system, $\hat{X}(0) = 0.83$. } }
\label{fig:initialValue}
\end{figure}
\clearpage
The tQSSA implies that Eq.~\eqref{26mar10_A} can be approximated by Eq.~\eqref{27mar10_A}.
Therefore, to explore the conditions under which
Eq.~(\ref{27mar10_dxdt}) is a valid reduction of Eq.~(\ref{25mar10_oneDimEq}) we need to
provide the asymptotic limits under which the transition from Eq.~\eqref{26mar10_A} to Eq.~\eqref{27mar10_A} is justified. Different sufficient conditions for the tQSSA have been obtained using self-consistency arguments~\citep{BorghansBoerSegel1996,tzafriri}. We follow the ideas of \textit{pairwise balance} to look for a proper non-dimensionalisation of variables~\citep{segelAndSlemrod,maini}. Although this method gives a weaker result than the one obtained in~\citep{tzafriri}, it is
easier to extend to networks of reactions.
\subsection{Review of Geometric singular perturbation (GSPT)}
\label{S:fenichel}
Since geometric singular perturbation theory (GSPT) is essential in our
reduction of the equations describing
coupled enzymatic reactions, we here provide a very brief overview of the theory. Further details can be found in~\citep{Fenichel1979,wiggins1994,jones1995,kaper,hek2010}. Readers familiar with
GSPT can skip to section~\ref{tQSSA_valid}.
Consider a system of ordinary differential equation of the form
\begin{subequations}\label{E:group}
\begin{align}\label{10may10_GSPT}
\epsilon \frac{du}{dt} &= f(u,v,\epsilon), & u(0) &= u_0, \nonumber \\
\frac{dv}{dt} &= g(u,v,\epsilon), & v(0) &= v_0,
\end{align}
where $u \in \mathbb{R}^k $ and $v \in \mathbb{R}^l $ with $k,l \ge 1$, and $u_0 \in \mathbb{R}^k $, $v_0 \in \mathbb{R}^l $ are initial values. The parameter $\epsilon$ is assumed to be small and positive $(0 < \epsilon \ll 1)$, the functions $f$ and $g$ smooth,
\begin{equation} \label{E:cond1}
f(u,v,0) \not\equiv 0, g(u,v,0) \not\equiv 0, \quad \text{and} \quad \lim_{\epsilon \rightarrow 0} \epsilon g(u,v,\epsilon) \equiv 0.
\end{equation}
\end{subequations}
The variable $u$ is termed the \emph{fast} variable, and $v$ the \emph{slow} variable.
Assume that $\mathcal{M}_0 := \{(u,v) \in \mathbb{R}^{k+l} \, |\, f(u,v,0) = 0 \}$ is a compact, smooth manifold with inflowing boundary. Suppose further that the eigenvalues $\lambda_i$ of the Jacobian $\frac{\partial f}{\partial u}(u,v,0)|_{\mathcal{M}_0}$ all satisfy $Re(\lambda_i) < 0$, so that $\mathcal{M}_0$ is \emph{normally hyperbolic}.
Then, for $\epsilon$ sufficiently small, the solutions of Eq.~(\ref{10may10_GSPT}) follow an initial transient, which can be approximated by
\begin{align}\label{may3010_GSPT_Breakb}
\frac{du}{ds} &= f(u,v,0) , & u(0) = u_0, \nonumber \\
\frac{dv}{ds} &= 0 , & v(0) = v_0,
\end{align}
where $t = \epsilon s$. After this transient, the solutions are $\mathcal{O}(\epsilon)$ close to the solutions of the reduced system
\begin{align}
0 &= f(u,v,0), \nonumber \\
\frac{dv}{dt} &= g(u,v,0), \quad v(0) = v_0. \label{may3010_GSPT_Breaka}
\end{align}
More precisely there is an invariant, slow manifold $\mathcal{M}_\epsilon$, $\mathcal{O}(\epsilon)$ close to $\mathcal{M}_0$. Solutions of Eq.~(\ref{10may10_GSPT}) are attracted to $\mathcal{M}_0$ exponentially fast, and can be approximated by concatenating the fast transient described by
Eq.~\eqref{may3010_GSPT_Breakb}, and the solution of the reduced Eq.~\eqref{may3010_GSPT_Breaka}.
The slow manifold, $\mathcal{M}_0$, consists of the fixed points of Eq.~(\ref{may3010_GSPT_Breakb}). The condition that the eigenvalues, $\lambda_i$, of the Jacobian $\frac{\partial f}{\partial u}(u,v,0)|_{\mathcal{M}_0}$ all satisfy $Re(\lambda_i) < 0$ implies that these fixed points are stable.
\subsection{Validity of the tQSSA}\label{tQSSA_valid}
\label{S:validity1}
We next show that GSPT can be applied to Eq.~(\ref{26mar10_A}), after a suitable rescaling of variables~\citep{segelAndSlemrod,maini}.
Let
\begin{equation}\label{newVariables}
\tau = \frac{t}{T_{\bar{X}}}, \quad \bar{x}(\tau) = \frac{\bar{X}(t)}{X_T}, \quad c(\tau) = \frac{C(t)}{\beta}.
\end{equation}
We have some freedom in defining $\beta$ and $T_{\bar{X}}$. Using
the method of \emph{pairwise balance}~\citep{maini,segelAndSlemrod}, we let
\begin{equation}\label{may2610_betaAndTxBar}
\beta = \frac{X_TE_T}{X_T+E_T + k_m}, \quad \text{and} \quad T_{\bar{X}} = \frac{X_T}{k_2 \beta}.
\end{equation}
In the rescaled variables, Eq.~(\ref{26mar10_A})
takes the form
\begin{subequations}\label{8may10_B}
\begin{align}
\frac{d\bar{x}}{d\tau} &= - c,
&\bar{x}(0)&=1,\label{8may10_Ba} \\
\frac{k_2}{k_1}\frac{E_T}{(E_T+X_T+k_m)^2} \frac{ dc }{d\tau} &= \bar{x}-\frac{X_T\bar{x}+E_T + k_m}{X_T+E_T + k_m} c +\frac{X_TE_T}{(E_T+X_T+k_m)^2}c^2, &c(0)&=0. \label{8may10_Bb}
\end{align}
\end{subequations}
Define the parameter
\begin{equation}\label{27mar10_eps}
\epsilon := \frac{\beta}{k_1T_{\bar{X}}X_TE_T} = \frac{k_2}{k_1}\frac{E_T}{(E_T+X_T+k_m)^2} .
\end{equation}
For small $\epsilon$, Eq.~\eqref{8may10_B} is singularly perturbed and has the form given in Eq.~(\ref{10may10_GSPT}). Indeed, we can apply GSPT to Eq.~\eqref{8may10_B} directly since in
the limit $\epsilon \rightarrow 0$ the right hand side of Eq.~(\ref{8may10_B}) remains ${\mathcal O}(1)$. Moreover, the requirement $0 < \epsilon \ll 1$ is equivalent to the sufficient condition for the validity of the tQSSA derived in~\citep{BorghansBoerSegel1996}.
GSPT implies that for small $\epsilon$, solutions of
Eq.~(\ref{8may10_B}) are close to those of the reduced system
\begin{subequations} \label{may2610_rescaled}
\begin{align}
\frac{d\bar{x}}{d\tau} &= -c,&\bar{x}(0)&=1, \label{may2610_rescaled_a}\\
0 &= \, \bar{x}-\frac{X_T\bar{x}+E_T + k_m}{X_T+E_T + k_m} c +\frac{X_TE_T}{(E_T+X_T+k_m)^2}c^2 \label{may2610_rescaled_b}.
\end{align}
\end{subequations}
The normal hyperbolicity and stability of the manifold defined by Eq.~(\ref{may2610_rescaled_b}) can be verified directly, and also follow from the results of section~\ref{sec:TheGeneralProblem}. It follows that GSPT can be applied to conclude that Eq.~(\ref{may2610_rescaled}) is a valid reduction of Eq.~(\ref{8may10_B}).
The validity of the reduction in these rescaled equations implies its validity in the original coordinates: Eq.~(\ref{8may10_B}) is equivalent to Eq.~(\ref{26mar10_A}) via the scaling given in Eq.~(\ref{newVariables}). Hence, Eq.~(\ref{27mar10_A}) and Eq.~(\ref{may2610_rescaled}) are related by the same scaling relationship. We note that choosing the initial values of the intermediate complexes to be zero, implies that solutions of~\eqref{8may10_B} remain ${\mathcal O}(1)$ for small $\epsilon$ (see section \ref{Sect.ZeroInitComplex} for a detailed discussion).
It follows that
Eq.~(\ref{27mar10_A}) is a valid reduction of Eq.~(\ref{26mar10_A}) when $\epsilon$ is sufficiently small.
Hence, for $\epsilon$ in the same range, Eq.~(\ref{27mar10_dxdt}), with initial values satisfying Eq.~(\ref{june0810_initialVal}), is a valid reduction of Eq.~(\ref{25mar10_oneDimEq}).
Lemma~\ref{lemma1} in the Appendix shows that $\epsilon$ is always smaller than $1/4$. Although
this is suggestive, GSPT only guarantees the validity of the reduced equations in some unspecified range
of $\epsilon$ values.
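A crude numerical sweep (a sketch using only the standard library) is consistent with Lemma~\ref{lemma1}: over several orders of magnitude in each parameter, $\epsilon$ as defined in Eq.~(\ref{27mar10_eps}) stays below $1/4$:
\begin{verbatim}
import itertools

def eps(k1, k2, km1, XT, ET):         # the small parameter epsilon
    km = (km1 + k2) / k1
    return (k2 / k1) * ET / (ET + XT + km)**2

vals = [10.0**p for p in range(-3, 4)]
worst = max(eps(k1, k2, km1, XT, ET)
            for k1, k2, km1, XT, ET in itertools.product(vals, repeat=5))
print("largest eps found:", worst)    # remains below 0.25
\end{verbatim}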
\section{Analysis of a two protein network}\label{sec:may3010_TwoProtein}
We next show how the reduction described in the previous section extends to a network of MM reactions. Here the substrate of one reaction acts as an enzyme in another reaction. To illustrate the main ideas used in reducing the corresponding equations,
we start with a concrete example of two interacting proteins.
\clearpage
\begin{figure}[t]
\begin{center}
\includegraphics[scale=.19]{twoProtein_3}
\end{center}
\caption{\footnotesize{ A simplified description of interactions between two regulators of the G2-to-mitosis phase (G2/M) transition in the eukaryotic cell cycle \citep{NovakTyson1993} (See text). \emph{(a)} $X$ and $Y$ phosphorylate and deactivate each other. For instance, the protein
$X$ exists in a \textit{phosphorylated} $X_p$ and \textit{unphosphorylated} $X$ state, and the conversion $X $ to $ X_p$ is catalyzed by $Y_p$. The conversion of $X_p$ to $X$ is catalyzed by the phosphatase $E_1$. \emph{(b)}
Comparison of the numerical solution of Eq.~(\ref{27feb10_ode6dim}) and Eq.~(\ref{02Mar10_finalSolution}). Here $k_1 = 5, k_{-1}=1, k_{2} = 1, E_1^T = 10, E_2^T = 2, X_T = 10, Y_T = 10.1$ as in~\citep{CilibertoFabrizioTyson2007}. The initial values for Eq.~(\ref{27feb10_ode6dim}) are $X(0) = 10, Y(0) = 1.1, X_p(0) = 0, Y_p(0) = 9, C_x(0) = 0, C_y(0) = 0, C_x^e(0) = 0, C_y^e(0) = 0, E_1(0) = 10, E_2(0) = 2$. The initial values of the reduced system, $\hat{X}_p(0) = 0.12, \hat{Y}_p(0) = 0.83$ are obtained by the projection onto the slow manifold defined by Eq.~\eqref{E:projection}.
} }
\label{25feb10_fig1}
\end{figure}
\clearpage
Fig.~\ref{25feb10_fig1}a) is a simplified depiction of the interactions between two regulators of the G2-to-mitosis phase (G2/M) transition in the eukaryotic cell cycle \citep{NovakTyson1993}. Here, $Y$ represents \emph{MPF} (M-phase promoting factor, a dimer of \emph{Cdc2} and cyclin B) and $X$ represents \emph{Wee1} (a kinase that phosphorylates and deactivates \emph{Cdc2}). The proteins
exist in a \textit{phosphorylated} state, $X_p,Y_p$, and an \textit{unphosphorylated} state, $X,Y$, with the phosphorylated state being less active. The proteins $X$ and $Y$ deactivate each other, and hence act as antagonists. In this network $E_1$ and $E_2$ represent phosphatases that catalyze the conversion of $X_p$ and $Y_p$ to $X$ and $Y,$ respectively. Each dotted arrow in Fig.~\ref{25feb10_fig1}a) is associated with exactly one MM type reaction in the list of reactions given below. The sources of the arrows act as enzymes. Therefore, Fig. ~\ref{25feb10_fig1}a) represents the following network of reactions
\begin{eqnarray*}
Y_p+X \overset{k_1}{\underset{k_{-1}}{\rightleftarrows}} C_x \overset{k_2}{\longrightarrow} X_p +Y_p, &&
E_1+X_p \overset{k_1}{\underset{k_{-1}}{\rightleftarrows}} C_x^e \overset{k_2}{\longrightarrow} X+E_1 , \\
X_p+Y \overset{k_1}{\underset{k_{-1}}{\rightleftarrows}} C_y \overset{k_2}{\longrightarrow} Y_p+ X_p, &&
E_2+Y_p \overset{k_1}{\underset{k_{-1}}{\rightleftarrows}} C_y^e \overset{k_2}{\longrightarrow} Y+E_2.
\end{eqnarray*}
To simplify the exposition, we have assumed some homogeneity in the rates.
Since the total concentration of proteins and enzymes is assumed fixed, the
system obeys the following set of constraints
\begin{align*}
X_T &= X(t) + X_p(t) + C_x(t) + C_y(t)+C_x^e(t) ,
&E_1^T &= C_x^e(t) + E_1(t) , \\
Y_T &= Y(t) + Y_p(t) + C_x(t) + C_y(t)+ C_y^e(t) ,
&E_2^T &= C_y^e(t) + E_2(t),
\end{align*}
where $X_T, Y_T, E_1^T, E_2^T$ are constant and represent the total concentrations of the respective
proteins and enzymes. Along with these constraints the concentrations
of the ten species
in the reaction evolve according to
\begin{align}
\frac{dX_p}{dt}
&= -k_1\underbrace{(Y_T-Y_p-C_x-C_y-C_y^e)}_{= Y}X_p-k_1X_p\underbrace{(E_1^T-C_x^e)}_{= E_1}
+k_{-1}C_x^e+(k_{-1}+k_2)C_y+k_2C_x,
\notag
\\
\frac{dY_p}{dt}
&= -k_1\underbrace{(X_T-X_p-C_x-C_y-C_x^e)}_{=X}Y_p-k_1Y_p\underbrace{(E_2^T-C_y^e)}_{=E_2}
+k_{-1}C_y^e+(k_{-1}+k_2)C_x+k_2C_y, \notag
\\
\frac{dC_x}{dt} &= k_1\underbrace{(X_T-X_p-C_x-C_y-C_x^e)}_{=X}Y_p-(k_{-1}+k_2)C_x, \label{27feb10_ode6dim}\\
\frac{dC_y}{dt} &=k_1\underbrace{(Y_T-Y_p-C_x-C_y-C_y^e)}_{= Y}X_p-(k_{-1}+k_2)C_y, \notag \\
\frac{dC_x^e}{dt} &= k_1X_p\underbrace{(E_1^T-C_x^e)}_{= E_1}-(k_{-1}+k_2)C_x^e, \notag\\
\frac{dC_y^e}{dt} &=k_1Y_p\underbrace{(E_2^T-C_y^e)}_{=E_2}-(k_{-1}+k_2)C_y^e, \notag
\end{align}
with initial values
\begin{equation} \label{E:initial1}
C_x(0) = 0, \quad C_y(0) = 0,\quad C_x^e(0) = 0,\quad C_y^e(0) = 0.
\end{equation}
The initial values of $X_p$ and $Y_p$ are arbitrary.
Following the approach in the previous section, we reduce Eq.~\eqref{27feb10_ode6dim} to a two dimensional system. Assuming the validity of the tQSSA, we obtain an approximating differential--algebraic system. Solving the algebraic equations, which are linear in the original coordinates, leads to a closed, reduced system of ODEs. We end by discussing the validity of the tQSSA.
\subsection{New coordinates and reduction under the tQSSA}
\label{S:intro_two}
To extend the tQSSA we define a new set of variables by adding the concentration of the free state of a species to the concentrations of all intermediate complexes formed by that particular species as reactant~\citep{CilibertoFabrizioTyson2007},
\begin{equation}\label{08feb10_slowVar2}
\begin{array}{ccc}
\bar{X}_p &:=& X_p + C_y + C_x^e, \vspace{1mm}\\
\bar{Y}_p &:=& Y_p + C_x + C_y^e.
\end{array}
\end{equation}
Under the tQSSA, the
intermediate complexes equilibrate quickly compared to the variables
$\bar{X}_p$ and
$\bar{Y}_p$. In the coordinates defined by Eq.~\eqref{08feb10_slowVar2}, Eq.~\eqref{27feb10_ode6dim} takes the form
\begin{subequations}\label{03mar10_TQSSA}
\begin{align}
\frac{d\bar{X}_p}{dt} &= k_2C_x -k_2C_x^e, \label{03mar10_TQSSAa}\\
\frac{d\bar{Y}_p}{dt} &=k_2C_y - k_2C_y^e, \label{03mar10_TQSSAb}\\
0 &= k_1(X_T-\bar{X}_p-C_x)(\bar{Y}_p-C_x -C_y^e)-(k_{-1}+k_2)C_x, \label{03mar10_TQSSAc}\\
0 &=k_1(Y_T-\bar{Y}_p-C_y)(\bar{X}_p-C_y -C_x^e)-(k_{-1}+k_2)C_y, \label{03mar10_TQSSAd} \\
0 &= k_1(\bar{X}_p-C_y -C_x^e)(E_1^T-C_x^e)-(k_{-1}+k_2)C_x^e, \label{03mar10_TQSSAe}\\
0 &=k_1(\bar{Y}_p-C_x -C_y^e)(E_2^T-C_y^e)-(k_{-1}+k_2)C_y^e. \label{03mar10_TQSSAf}
\end{align}
\end{subequations}
Solving the coupled system of quadratic equations~(\ref{03mar10_TQSSAc}-\ref{03mar10_TQSSAf}) in terms of $\bar{X}_p,\bar{Y}_p$ appears to be possible only numerically, as it is equivalent to
finding the roots of a degree 16 polynomial~\citep{CilibertoFabrizioTyson2007}.
However, since we are interested in the dynamics of $X_p$ and $Y_p$, we can
proceed as in the previous section: Using Eq.~(\ref{08feb10_slowVar2}) in (\ref{03mar10_TQSSAc}-\ref{03mar10_TQSSAf}) gives a linear system in $C_x,C_y,C_x^e,C_y^e$. Defining $k_m := (k_{-1}+k_2)/k_1$, this
system can be written in matrix form as
\begin{equation}\label{2mar10_manifoldMatrix2}
\left[\begin{array}{cccc}
Y_p+k_m & Y_p & Y_p & 0 \\
X_p & X_p+k_m & 0 & X_p \\
0 & 0 & X_p+k_m & 0 \\
0 & 0 & 0 & Y_p+k_m
\end{array}
\right]
\left[\begin{array}{c}
C_x \\ C_y \\C_x^e \\ C_y^e
\end{array}
\right]
=
\left[\begin{array}{c}
Y_p(X_T-X_p) \\ X_p(Y_T-Y_p) \\ X_pE_1^T \\ Y_pE_2^T
\end{array}
\right].
\end{equation}
The coefficient matrix above is invertible and
Eq.~(\ref{2mar10_manifoldMatrix2}) can be solved to obtain $C_x,C_y,C_x^e,C_y^e$ as functions of $X_p,Y_p$. Denoting the resulting solutions by $C_x(X_p,Y_p),$ $ C_y(X_p,Y_p),$ $C_x^e(X_p,Y_p),$ $C_y^e(X_p,Y_p)$ and using them in Eqs.~(\ref{03mar10_TQSSAa}-\ref{03mar10_TQSSAb}), we obtain the closed system of equations
\begin{eqnarray}
\frac{d}{dt}\left[\begin{array}{c}
\bar{X}_p \\ \bar{Y}_p
\end{array}
\right] &=& k_2\left[\begin{array}{c}
C_x(X_p,Y_p) - C_x^e(X_p,Y_p) \\ C_y(X_p,Y_p)-C_y^e(X_p,Y_p)
\end{array}
\right]. \notag
\end{eqnarray}
Reverting to the original coordinates, $X_p $ and $ Y_p$, and using the chain rule gives
\begin{eqnarray}
\frac{d}{dt}\left[\begin{array}{c}
X_p+C_y(X_p,Y_p)+C_x^e(X_p,Y_p) \\ Y_p+C_x(X_p,Y_p)+C_y^e(X_p,Y_p)
\end{array}
\right] &=& k_2\left[\begin{array}{c}
C_x(X_p,Y_p) - C_x^e(X_p,Y_p) \\ C_y(X_p,Y_p)-C_y^e(X_p,Y_p)
\end{array}
\right] \Longrightarrow \notag \\
\left[\begin{array}{cc}
1 + \frac{\partial C_y }{\partial X_p }+\frac{\partial C_x^e }{\partial X_p } & \frac{\partial C_y }{\partial Y_p } + \frac{\partial C_x^e}{\partial Y_p} \\
\frac{\partial C_x }{\partial X_p } + \frac{\partial C_y^e }{\partial X_p } &1 + \frac{\partial C_x}{\partial Y_p } +\frac{\partial C_y^e }{\partial Y_p}
\end{array}
\right]
\frac{d}{dt}\left[\begin{array}{c}
X_p \\ Y_p
\end{array}
\right] &=& k_2\left[\begin{array}{c}
C_x(X_p,Y_p) - C_x^e(X_p,Y_p) \\ C_y(X_p,Y_p)-C_y^e(X_p,Y_p)
\end{array}
\right]. \label{may2810_final}
\end{eqnarray}
The initial values for Eq.~(\ref{may2810_final}) are determined by projecting the initial values given by Eq.~(\ref{E:initial1}) onto the slow manifold. Unfortunately, they can be expressed only implicitly. The reduction from Eq.~(\ref{27feb10_ode6dim}) to Eq.~(\ref{may2810_final}) was obtained under the assumption that $\bar{X}_p = X_p + C_y + C_x^e$ and $\bar{Y}_p = Y_p + C_x + C_y^e$ are slow variables, and hence approximately constant during the transient to the slow manifold. Therefore the projections of the initial conditions onto the slow manifold, $\hat{X}_p(0)$ and
$\hat{Y}_p(0)$, are related to the original initial conditions
as
\begin{equation} \label{E:projection}
\begin{array}{ccccccc}
\hat{X}_p(0) + C_y(\hat{X}_p(0) ,\hat{Y}_p(0)) + C_x^e(\hat{X}_p(0),\hat{Y}_p(0))\quad &=& X_p(0) + C_y(0) + C_x^e(0) \quad&=& X_p(0), \\
\hat{Y}_p(0) + C_x(\hat{X}_p(0),\hat{Y}_p(0)) + C_y^e(\hat{X}_p(0),\hat{Y}_p(0))\quad &=& Y_p(0) + C_x(0) + C_y^e(0) \quad&=& Y_p(0).
\end{array}
\end{equation}
We have therefore shown that, if the tQSSA holds, and if the coefficient matrix on the left hand side of Eq.~(\ref{may2810_final}) is invertible, then
\begin{eqnarray}\label{02Mar10_finalSolution}
\frac{d}{dt}\left[\begin{array}{c}
X_p \\ Y_p
\end{array}
\right] &=& k_2 \left[\begin{array}{cc}
1 + \frac{\partial C_y }{\partial X_p }+\frac{\partial C_x^e }{\partial X_p } & \frac{\partial C_y }{\partial Y_p } + \frac{\partial C_x^e}{\partial Y_p} \\
\frac{\partial C_x }{\partial X_p } + \frac{\partial C_y^e }{\partial X_p } &1 + \frac{\partial C_x}{\partial Y_p } +\frac{\partial C_y^e }{\partial Y_p}
\end{array}
\right]^{-1}\left[\begin{array}{c}
C_x(X_p, Y_p) - C_x^e(X_p, Y_p) \\ C_y(X_p, Y_p)-C_y^e(X_p, Y_p)
\end{array}
\right],
\end{eqnarray}
with initial value obtained by solving Eq.~(\ref{E:projection}),
is a valid approximation of Eq.~(\ref{27feb10_ode6dim}).
Fig.~\ref{25feb10_fig1}b) shows that the solutions of the two systems are indeed close, after an initial transient.
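To make the reduction concrete, the following Python sketch simulates Eq.~(\ref{02Mar10_finalSolution}) directly: the complexes are obtained from the linear system Eq.~(\ref{2mar10_manifoldMatrix2}), and the partial derivatives entering the mass matrix of Eq.~(\ref{may2810_final}) are approximated by finite differences. All parameter values are illustrative assumptions, and, for simplicity, the initial condition is not projected onto the slow manifold via Eq.~(\ref{E:projection}).
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumptions, not from the text).
k1, km1, k2 = 1.0, 1.0, 1.0
km = (km1 + k2) / k1
XT, YT, E1T, E2T = 10.0, 10.0, 2.0, 2.0

def complexes(Xp, Yp):
    # Solve Eq. (2mar10_manifoldMatrix2) for (C_x, C_y, C_x^e, C_y^e).
    A = np.array([[Yp + km, Yp,      Yp,      0.0    ],
                  [Xp,      Xp + km, 0.0,     Xp     ],
                  [0.0,     0.0,     Xp + km, 0.0    ],
                  [0.0,     0.0,     0.0,     Yp + km]])
    b = np.array([Yp*(XT - Xp), Xp*(YT - Yp), Xp*E1T, Yp*E2T])
    return np.linalg.solve(A, b)

def reduced_rhs(t, u, h=1e-6):
    Xp, Yp = u
    Cx, Cy, Cxe, Cye = complexes(Xp, Yp)
    rhs = k2 * np.array([Cx - Cxe, Cy - Cye])
    # Partial derivatives of the complexes by forward finite differences,
    # used to assemble the mass matrix of Eq. (may2810_final).
    dX = (complexes(Xp + h, Yp) - complexes(Xp, Yp)) / h
    dY = (complexes(Xp, Yp + h) - complexes(Xp, Yp)) / h
    M = np.array([[1.0 + dX[1] + dX[2], dY[1] + dY[2]],
                  [dX[0] + dX[3],       1.0 + dY[0] + dY[3]]])
    return np.linalg.solve(M, rhs)

sol = solve_ivp(reduced_rhs, (0.0, 50.0), [5.0, 3.0], rtol=1e-8)
\end{verbatim}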
\subsection{Validity of the tQSSA for two interacting proteins}
\label{S:validity2}
To reveal the asymptotic limits for which the tQSSA holds, we again
rescale the original equations. In particular, $\bar{X}_p$ and $\bar{Y}_p$ are scaled by the total concentration of the respective proteins. To scale the intermediate complexes, each MM reaction in this network is treated as isolated. The scaling factors are then obtained analogously to $\beta$ in Eq.~(\ref{may2610_betaAndTxBar}). Let
\begin{align*}
\alpha_x &:= \frac{X_TY_T}{X_T+Y_T+k_m} , &\alpha_y &:=\frac{X_TY_T}{X_T+Y_T+k_m}, \\
\beta_x^e &:= \frac{X_TE_1^T}{X_T+E_1^T+k_m} , &\beta_y^e &:=\frac{Y_TE_2^T}{Y_T+E_2^T+k_m},
\end{align*}
and
\begin{equation*}
T_s := \text{max} \left\{\frac{X_T}{k_2\alpha_x},\frac{X_T}{k_2\beta_x^e} ,\frac{Y_T}{k_2\alpha_y},\frac{Y_T}{k_2\beta_y^e} \right\}.
\end{equation*}
Therefore, $T_s$ is obtained analogously to $T_{\bar{X}}$ in Eq.~(\ref{may2610_betaAndTxBar}).
The reason for choosing the maximum will become evident shortly.
The rescaled variables are now defined as
\begin{eqnarray}\label{11may10_newVar}
\tau := \frac{t}{T_s},\quad \bar{x}_p(\tau) &:=& \frac{\bar{X}_p(t)}{X_T}, \quad \bar{y}_p(\tau) := \frac{\bar{Y}_p(t)}{Y_T}, \notag\\
c_x(\tau) := \frac{C_x(t)}{\alpha_x} ,\quad c_y(\tau) := \frac{C_y(t)}{\alpha_y}, &\quad&
c_x^e(\tau) := \frac{C_x^e(t)}{\beta_x^e} , \quad c_y^e(\tau) := \frac{C_y^e(t)}{\beta_y^e}.
\end{eqnarray}
%
Using Eq.~(\ref{08feb10_slowVar2}) in Eq.~(\ref{27feb10_ode6dim}) to eliminate $X_p, Y_p$, and then
applying the rescaling defined by Eq.~(\ref{11may10_newVar}) to the new ODE, we obtain
\begin{subequations}\label{27feb10_ode6dimNewVarRescaled}
\begin{align}
\frac{d\bar{x}_p}{d\tau}
&=
\frac{k_2\alpha_xT_s}{X_T}c_x -\frac{k_2\beta_x^eT_s}{X_T}c_x^e, \label{27feb10_ode6dimNewVarRescaleda}
\\
\frac{d\bar{y}_p}{d\tau}
&=
\frac{k_2\alpha_yT_s}{Y_T}c_y -\frac{k_2\beta_y^eT_s}{Y_T}c_y^e, \label{27feb10_ode6dimNewVarRescaledb}
\\
\underbrace{\frac{\alpha_x}{k_1X_TY_TT_s}}_{\le \epsilon_x}\frac{dc_x}{d\tau}
&=\begin{array}{l} \big[
\bar{y}_p - \bar{x}_p \bar{y}_p - \frac{\alpha_x }{X_T} c_x \bar{y}_p - \frac{\alpha_x }{Y_T}c_x - \frac{\beta_y^e }{Y_T}c_y^e + \frac{ \alpha_x }{Y_T}c_x \bar{x}_p
+ \frac{\beta_y^e }{Y_T}c_y^e \bar{x}_p + \frac{\alpha_x^2 }{X_T Y_T}c_x^2
\\
+ \frac{ \alpha_x \beta_y^e }{X_T Y_T}c_x c_y^e - \frac{\alpha_x k_m}{X_T Y_T}c_x \big], \end{array} \label{27feb10_ode6dimNewVarRescaledc}
\\
\underbrace{\frac{\alpha_y}{k_1X_TY_TT_s}}_{\le \epsilon_y}\frac{dc_y}{d\tau}
&= \begin{array}{l} \big[
\bar{x}_p - \frac{\beta_x^e }{X_T}c_x^e - \frac{\alpha_y }{X_T}c_y - \bar{x}_p \bar{y}_p + \frac{\beta_x^e }{X_T}c_x^e \bar{y}_p + \frac{ \alpha_y }{X_T}c_y \bar{y}_p
- \frac{\alpha_y }{Y_T}c_y \bar{x}_p + \frac{\alpha_y \beta_x^e }{X_T Y_T}c_x^e c_y
\\
+ \frac{ \alpha_y^2 }{X_T Y_T}c_y^2 - \frac{\alpha_y k_m}{X_T Y_T}c_y \big], \end{array} \label{27feb10_ode6dimNewVarRescaledd}
\\
\underbrace{\frac{\beta_x^e}{k_1X_TE_1^TT_s}}_{\le \epsilon_x^e}\frac{dc_x^e}{d\tau}
&=
\bar{x}_p - \frac{\beta_x^e }{E_1^T}c_x^e \bar{x}_p - \frac{\beta_x^e }{X_T}c_x^e - \frac{\alpha_y }{X_T}c_y + \frac{(\beta_x^e)^2 }{ E_1^T X_T}(c_x^e)^2
+ \frac{\alpha_y \beta_x^e }{E_1^T X_T}c_x^e c_y - \frac{\beta_x^e k_m}{E_1^T X_T}c_x^e ,
\label{27feb10_ode6dimNewVarRescalede}
\\
\underbrace{\frac{\beta_y^e}{k_1E_2^TY_TT_s}}_{\le \epsilon_y^e}\frac{dc_y^e}{d\tau}
&=
\bar{y}_p - \frac{\beta_y^e }{E_2^T}c_y^e \bar{y}_p - \frac{\alpha_x }{Y_T}c_x -\frac{\beta_y^e }{Y_T}c_y^e + \frac{\alpha_x \beta_y^e }{ E_2^T Y_T}c_x c_y^e
+ \frac{(\beta_y^e)^2 }{E_2^T Y_T}(c_y^e)^2 - \frac{\beta_y^e k_m}{E_2^T Y_T}c_y^e ,
\label{27feb10_ode6dimNewVarRescaledf}
\end{align}
\end{subequations}
where\begin{align*}
\epsilon_x &:= \frac{k_2}{k_1} \frac{Y_T}{(X_T + Y_T + k_m)^2}, &\epsilon_y &:= \frac{k_2}{k_1} \frac{X_T}{(Y_T + X_T + k_m)^2}, \\
\epsilon_x^e &:= \frac{k_2}{k_1} \frac{E_1^T}{(X_T + E_1^T + k_m)^2}, &\epsilon_y^e &:= \frac{k_2}{k_1} \frac{E_2^T}{(Y_T + E_2^T + k_m)^2}.
\end{align*}
The bounds on these coefficients follow from the definition of $T_s$. Since
$
({1}/{T_s} )\le ({k_2 \alpha_x}/{X_T}),
$
\[
\frac{\alpha_x}{k_1X_TY_TT_s}
\le
\frac{k_2}{k_1} \frac{\alpha_x^2}{X_T^2 Y_T} = \frac{k_2}{k_1} \frac{1}{X_T^2 Y_T} \left(\frac{X_TY_T}{X_T+Y_T+k_m}\right)^2 = \epsilon_x.
\]
Similarly,
\[
\frac{\alpha_y}{k_1X_TY_TT_s} \le \epsilon_y, \quad
\frac{\beta_x^e}{k_1X_TE_1^TT_s} \le \epsilon_x^e, \quad
\text{and}
\quad
\frac{\beta_y^e}{k_1E_2^TY_TT_s} \le \epsilon_y^e.
\]
Finally, we define
\begin{equation}\label{4may10_epsilon}
\epsilon := \max\left\{ \epsilon_x, \epsilon_y,\epsilon_x^e, \epsilon_y^e
\right\}.
\end{equation}
The definitions of the scaling factors in Eq.~(\ref{11may10_newVar}) imply that all the coefficients on the right hand side of (\ref{27feb10_ode6dimNewVarRescaledc}--\ref{27feb10_ode6dimNewVarRescaledf}) are $\mathcal{O}(1)$. Therefore, in the asymptotic limit $\epsilon \rightarrow 0$, Eq.~(\ref{27feb10_ode6dimNewVarRescaled}) defines a singularly perturbed system. Since the rescaled system Eq.~(\ref{27feb10_ode6dimNewVarRescaled}) and the original system Eq.~(\ref{27feb10_ode6dim}) are related by the nonsingular scaling given in Eq.~(\ref{11may10_newVar}), we can conclude that in the limit $\epsilon \rightarrow 0$, the tQSSA is valid. If additionally the slow manifold is normally hyperbolic, then Eq.~(\ref{03mar10_TQSSA}) is a valid reduced model of the network's dynamics. The normal hyperbolicity and stability of the slow manifold will be proved in a general setting in section~\ref{sec:TheGeneralProblem}.
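The size of $\epsilon$ is easily checked for a given parameter set; the values below are illustrative assumptions only.
\begin{verbatim}
import numpy as np

# Illustrative parameters (assumptions, not from the text).
k1, km1, k2 = 10.0, 1.0, 1.0
km = (km1 + k2) / k1
XT, YT, E1T, E2T = 10.0, 10.0, 2.0, 2.0

ratio = k2 / k1
eps_x  = ratio * YT  / (XT + YT  + km)**2
eps_y  = ratio * XT  / (YT + XT  + km)**2
eps_xe = ratio * E1T / (XT + E1T + km)**2
eps_ye = ratio * E2T / (YT + E2T + km)**2
eps = max(eps_x, eps_y, eps_xe, eps_ye)    # Eq. (4may10_epsilon)
print(f"epsilon = {eps:.3e}")              # small => tQSSA expected to hold
\end{verbatim}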
\section{The general problem}\label{sec:TheGeneralProblem}
We next describe how to obtain reduced equations describing the dynamics of a large class of protein interaction networks~\citep{huangFerrell1996,novakPatakiCilibertoTyson2001,novakPatakiCilibertoTyson2003,UriAlon2007,davitichBornholdt2008,stadman1977,NovakTyson1993,goldbeter91}.
We again assume that the proteins interact via MM type reactions, and that a generalization of the tQSSA holds~\citep{CilibertoFabrizioTyson2007}. We will follow the steps that lead to the reduced systems in the previous two sections: After describing the model and the conserved quantities, we recast the equations in terms of the ``total'' protein concentrations (\emph{cf.} sections~\ref{S:intro_one} and \ref{S:intro_two}). Under a generalized tQSSA, these equations can be reduced
to an algebraic-differential system. We show that the algebraic part of the system is linear in the
original coordinates (\emph{cf.} sections~\ref{S:extension} and~\ref{S:intro_two}), so that the reduced system can be described by a differential equation with dimension equal to the
number of interacting proteins. We next show that this reduction is justified by proving that the singularly perturbed system we examine satisfies the conditions of GSPT (\emph{cf.} section~\ref{S:fenichel}). Finally, we describe
the asymptotic conditions under which the system is singularly perturbed,
following the arguments in sections~\ref{S:validity1} and~\ref{S:validity2}.
\subsection{Description of the network}\label{june2110_setup}
We start by defining the nodes and edges of a general protein interaction network. The nodes in this network represent enzymes as well as proteins, while the edges represent the catalytic effect one species has on another. Proteins are assumed to come in two states, phosphorylated and unphosphorylated. Both states are represented by a single node in this network. Fig.~\ref{fig:june1510_nNode} and the following description make these definitions
precise.
\clearpage
\begin{figure}[t]
\begin{center}
\includegraphics[scale = .15]{nNode}
\end{center}
\caption{ \footnotesize{ A simple example illustrating the terminology used in describing protein interaction networks. The shaded regions represent nodes and encompass either an enzyme or a single protein that is part of an MM type reaction. Each dotted arrow represents an edge in the network. The solid arrows represent transitions within the nodes, and do not define an edge in the network. } }
\label{fig:june1510_nNode}
\end{figure}
\clearpage
In a network of $n$ interacting proteins, and $n$ associated enzymes, we define the following:
\textbf{Nodes:} The two types of nodes in this network represent proteins (P-type nodes) and enzymes (E-type nodes). Each protein can exist in either an \emph{active} or \emph{inactive} form. The \emph{inactive} form of the $i$th protein is denoted by $U_i$, and the \emph{active} form by $P_i$. The $i$th P-type node is formed by grouping together $U_i$ and $P_i$. In addition there are $n$ species of enzymes, $E_i$, which exist in only one state.
\textbf{Edges:} All edges in the network are \emph{directed}, and represent the catalytic effect of a species in a MM type reaction. There are two types of edges: \textit{PP-type} edges connect two P-type nodes, while \textit{EP-type} edges connect E-type nodes to P-type nodes.
In particular, a PP-type edge from node $i$ to node $j$ represents the following MM type reaction in which $P_i$ catalyzes the conversion of $U_j$ to the \emph{active} form $P_j$,
\begin{subequations}
\begin{equation}\label{may3010_PPij}
P_i+U_j \overset{k^{1}_{ij}}{\underset{k^{-1}_{ij}}{\rightleftarrows}} C^U_{ij} \overset{k^2_{ij}}{\longrightarrow} P_j + P_i.
\end{equation}
Note that autocatalysis is possible.
The rate constants $k^{1}_{ij},k^{-1}_{ij},k^{2}_{ij}$, associated to each edge, can be grouped into weighted ``connectivity matrices''
\begin{equation*}
\begin{array}{ccccccccc}
K_1 &=& \left[k^1_{ij}\right]_{n\times n},&
K_{-1} &=& \left[k^{-1}_{ij}\right]_{n\times n}, &
K_2 &=& \left[k^2_{ij}\right]_{n\times n}\end{array}.
\end{equation*}
In the absence of an edge, that is, when
$P_i$ does not catalyze the phosphorylation of $U_j$, the corresponding
$(i,j)$-th entry in $K_1,K_{-1},$ and $K_2$ is set to zero.
EP-type edges are similar to PP-type edges, with enzymes acting as catalysts. To each pair of an enzyme $E_i$ and a protein $P_j$ we associate three rate constants $l^{1}_{ij},l^{-1}_{ij},l^{2}_{ij}$ for the corresponding reaction in which $E_i$ catalyzes the conversion of $P_j$ into $U_j$,
\begin{equation}\label{may3010_EPij}
E_i+P_j\overset{l^{1}_{ij}}{\underset{l^{-1}_{ij}}{\rightleftarrows}} C^E_{ij} \overset{l^2_{ij}}{\longrightarrow} U_j +E_i.
\end{equation}
\end{subequations}
The rate constants can again be arranged into matrices
\begin{equation*}
\begin{array}{ccccccccc}
L_1 &=& \left[l^1_{ij}\right]_{n\times n},&
L_{-1} &=& \left[l^{-1}_{ij}\right]_{n\times n}, &
L_2 &=& \left[l^2_{ij}\right]_{n\times n},
\end{array}
\end{equation*}
with zero entries again denoting the absence of interactions.
These definitions imply that the active form of one protein always
catalyzes the production of the active form of another protein.
This assumption excludes certain interactions (see section~\ref{sec:discussion} for an example).
However, the reduction is easiest to describe under these assumptions, and
we discuss generalizations in the Discussion.
For notational convenience we define
$U = [ U_1, U_2, \ldots, U_n ]^t,
P = [ P_1, P_2, \ldots, P_n ]^t,$ and
$E =[ E_1, E_2, \ldots, E_n ]^t$,
and arrange intermediate complexes into matrices,
\begin{equation*}\label{20mar10_cxcy}
C_U = \left[\begin{array}{cccc}
C_{11}^U & C_{12}^U & \cdots & C_{1n}^U \\
C_{21}^U & C_{22}^U & \cdots & C_{2n}^U \\
\vdots   & \vdots   & \ddots & \vdots   \\
C_{n1}^U & C_{n2}^U & \cdots & C_{nn}^U
\end{array}\right],
\quad
C_E = \left[\begin{array}{cccc}
C_{11}^E & C_{12}^E & \cdots & C_{1n}^E \\
C_{21}^E & C_{22}^E & \cdots & C_{2n}^E \\
\vdots   & \vdots   & \ddots & \vdots   \\
C_{n1}^E & C_{n2}^E & \cdots & C_{nn}^E
\end{array}\right].
\end{equation*}
Initially all intermediate complexes are assumed to start at zero concentration. Therefore,
any intermediate complex corresponding to a reaction that has zero rates will remain at zero
concentration for all time.
For instance, in the two protein example analyzed in section~\ref{sec:may3010_TwoProtein}, we have
\[
C_U = \left[ \begin{array}{cc} 0 & C_y \\ C_x & 0 \end{array} \right], \quad
C_E = \left[ \begin{array}{cc} C_x^e & 0 \\ 0 & C_y^e \end{array} \right], \quad
U = \left[ \begin{array}{c}X \\ Y \end{array} \right], \quad
P = \left[ \begin{array}{c}X_p \\ Y_p \end{array} \right], \quad
\text{ etc.}
\]
Assuming that the system is isolated from the environment implies that the total concentration of each enzyme, $E_i^T$, remains constant. Therefore,
\begin{subequations}\label{may3110_constraints}
\begin{equation}\label{20mar10_ET}
E_i + \sum_{s=1}^n{C_{is}^E} = E_i^T ,\qquad i \in \{1,2,...,n \}.
\end{equation}
Similarly, for each protein the total concentration, $U_i^T$, of its \emph{inactive} and \emph{active} form, and the intermediate complexes is constant,
\begin{equation}\label{20mar10_UT}
U_i + P_i + \left( \sum_{s=1}^nC_{is}^U + \sum_{r=1}^nC_{ri}^U - C^U_{ii} \right) + \sum_{r=1}^nC_{ri}^E = U_i^T, \qquad i \in \{1,2,...,n \}.
\end{equation}
\end{subequations}
Let
$$
V_n = \underbrace{[\begin{array}{cccc}1 & 1 & \ldots & 1 \end{array}]^t}_{n \text{ times}}, \quad
E_T = [ \begin{array}{cccc}E^T_1 & E^T_2 & \ldots & E^T_n \end{array}]^t, \quad \text{and} \quad
U_T = [ \begin{array}{cccc} U^T_1 & U^T_2 & \ldots & U^T_n \end{array}]^t,
$$
and denote the $n \times n$ identity matrix by $I_n$.
In addition, we use the \textit{Hadamard product} of matrices, denoted by $*$, to simplify notation\footnote{For instance, the Hadamard product of matrices $A = \left[\begin{array}{cc}a & b \\ c &d \end{array}\right],$ and $ B = \left[\begin{array}{cc}e & f \\ g &h \end{array}\right],\text{ is } A*B = \left[\begin{array}{cc}ae & bf \\ cg &dh \end{array}\right].$}. Constraints~(\ref{may3110_constraints}) can now be written concisely in matrix form
\begin{align*}
E_T &= E+ C_E V_n , \\
U_T &= U + P +C_U V_n + C_U^t V_n- (I_n*C_U)V_n + C_E^t V_n.
\end{align*}
Applying the law of mass action to the system of reactions described by~(\ref{may3010_PPij}-\ref{may3010_EPij}) yields a $(2n^2+n)$ dimensional dynamical system,
\begin{align}
\frac{dP_i}{dt} &= \sum_{s=1}^n \bigg(-k_{is}^1 P_iU_s + ( k_{is}^{-1}+ k_{is}^{2})C_{is}^U \bigg) +\sum_{r=1}^n\bigg( k_{ri}^2C_{ri}^U
- l_{ri}^1 E_r P_i + l_{ri}^{-1} C_{ri}^E\bigg),
& P_i(0) &= p_{i}^0,
\nonumber \\
\frac{dC_{ij}^U}{dt} &= \, k_{ij}^1 P_iU_j-(k_{ij}^{-1}+k_{ij}^{2})C_{ij}^U,
& C_{ij}^U(0) &= 0.
\label{nov0809_mainODE} \\
\frac{dC_{ij}^E}{dt} &= \, l_{ij}^1E_iP_j -(l_{ij}^{-1}+l_{ij}^{2})C_{ij}^E,
& C_{ij}^E(0) &= 0,
\notag
\end{align}
Due to the constraints~(\ref{20mar10_ET},\ref{20mar10_UT}), $U_i$ and $E_i$ are \textit{affine linear} functions of $P_i,C_{ij}^U, C_{ij}^E$, and can be used to close Eq.~(\ref{nov0809_mainODE}). Our aim is to reduce this $2n^2+n$ dimensional system to an $n$ dimensional system involving only the $P_i$.
\subsection{The total substrate coordinates}
\label{S:general_coordinates}
In this section we generalize the change of variables to the ``total'' protein concentrations, introduced in Eq.~(\ref{08feb10_slowVar2}). Let
\begin{equation}\label{may3110_PiBarTermWise}
\bar{P}_i := P_i + \sum_{s=1}^nC_{is}^U + \sum_{r=1}^nC_{ri}^E, \quad i \in \{1,2,...,n\},
\end{equation}
so that Eq.~(\ref{nov0809_mainODE}) takes the form
\begin{subequations}\label{may3010_ODEtermWise}
\begin{align}
\frac{d\bar{P}_i}{dt} =& \sum_{r=1}^n k_{ri}^{2} C_{ri}^U -\sum_{r=1}^nl_{ri}^2 C_{ri}^E, \label{may3010_ODEtermWisea}\\
\frac{dC_{ij}^U}{dt} =& \, k_{ij}^1P_iU_j-(k_{ij}^{-1}+k_{ij}^{2})C_{ij}^U, \label{may3010_ODEtermWisec} \\
\frac{dC_{ij}^E}{dt} =&\, l_{ij}^1E_iP_j -(l_{ij}^{-1}+l_{ij}^{2})C_{ij}^E. \label{may3010_ODEtermWiseb}
\end{align}
\end{subequations}
To close this system we use Eqs.~(\ref{20mar10_ET},\ref{20mar10_UT}) with Eq.~(\ref{may3110_PiBarTermWise}), to obtain
\begin{eqnarray}\label{21mar_xieiyi}
U_i &=& U_i^T - P_i - \sum_{s=1}^nC_{is}^U - \sum_{r=1}^n\left(C_{ri}^U + C_{ri}^E \right) + C_{ii}^U \nonumber \\
&=& U_i^T - \bar{P}_i - \sum_{r=1}^nC_{ri}^U+ C_{ii}^U, \nonumber \\
E_i &=& E_i^T -\sum_{s=1}^n{C_{is}^E}, \\
P_i &=& \bar{P}_i - \sum_{s=1}^nC_{is}^U - \sum_{r=1}^nC_{ri}^E. \nonumber
\end{eqnarray}
Defining $\bar{P} := (\bar{P}_1,\bar{P}_2,...,\bar{P}_n)^t$, Eq.~(\ref{may3110_PiBarTermWise}) can be written in vector form as $ \bar{P} = P+C_UV_n+C_E^tV_n$, and
Eqs.~(\ref{may3010_ODEtermWise}) and (\ref{21mar_xieiyi}) can be written in matrix form as
\begin{subequations}\label{09mar10_matrixForm}
\begin{align}
\frac{d\bar{P}}{dt} =& (K_2* C_U)^t V_n - (L_2* C_E)^tV_n, \label{09mar10_matrixForma} \\
\frac{dC_U}{dt} =& K_1* (P U^t) -(K_{-1}+K_2)* C_U, \label{09mar10_matrixFormc} \\
\frac{dC_E}{dt} =& L_1* (EP^t)- (L_{-1}+L_2)* C_E, \label{09mar10_matrixFormb}
\end{align}
\end{subequations}
where
\begin{subequations}\label{09mar10_closedform}
\begin{align}
U =&\, U_T- P -C_U V_n - C_U^t V_n - C_E^t V_n + (I_n*C_U)V_n \nonumber \\
=&\, U_T - \bar{P} - C_U^t V_n+ (I_n*C_U)V_n, \label{09mar10_closedformb} \\
E =&\, E_T- C_E V_n, \label{09mar10_closedforma} \\
P =&\, \bar{P} -C_UV_n-C_E^tV_n. \label{09mar10_closedformc}
\end{align}
\end{subequations}
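In code, Eqs.~(\ref{09mar10_matrixForm}) and (\ref{09mar10_closedform}) translate directly into elementwise (Hadamard) and ordinary matrix products. The following Python sketch is a minimal illustration; the connectivity matrices and total concentrations are placeholders to be supplied by the user.
\begin{verbatim}
import numpy as np

def network_rhs(Pbar, C_U, C_E, K1, Km1, K2, L1, Lm1, L2, U_T, E_T):
    # Right hand side of Eqs. (09mar10_matrixForm), closed via (09mar10_closedform).
    # Pbar, U_T, E_T are length-n vectors; all other arguments are (n, n) arrays.
    # Elementwise products (*) below are the Hadamard products of the text.
    n = Pbar.size
    Vn = np.ones(n)
    P = Pbar - C_U @ Vn - C_E.T @ Vn                 # Eq. (09mar10_closedformc)
    U = U_T - Pbar - C_U.T @ Vn + np.diag(C_U)       # Eq. (09mar10_closedformb)
    E = E_T - C_E @ Vn                               # Eq. (09mar10_closedforma)
    dPbar = (K2 * C_U).T @ Vn - (L2 * C_E).T @ Vn    # Eq. (09mar10_matrixForma)
    dC_U = K1 * np.outer(P, U) - (Km1 + K2) * C_U    # Eq. (09mar10_matrixFormc)
    dC_E = L1 * np.outer(E, P) - (Lm1 + L2) * C_E    # Eq. (09mar10_matrixFormb)
    return dPbar, dC_U, dC_E
\end{verbatim}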
\subsection{The tQSSA and the resulting reduced equations}
\label{S:general_reduced}
The general form of the tQSSA states that the intermediate complexes, $C_U$ and $C_E$, equilibrate
faster than $\bar{P}$. This assumption implies that, after a fast transient, Eq.~(\ref{09mar10_matrixForm}) can be approximated by the differential-algebraic system
\begin{subequations}\label{09mar10_matrixFormDEL}
\begin{align}
\frac{d\bar{P}}{dt} =& (K_2* C_U)^t V_n - (L_2* C_E)^tV_n, \label{09mar10_matrixFormDELa} \\
0 =& K_1* (P U^t) -(K_{-1}+K_2)* C_U, \label{09mar10_matrixFormDELc} \\
0 =& L_1* (EP^t)- (L_{-1}+L_2)* C_E. \label{09mar10_matrixFormDELb}
\end{align}
\end{subequations}
In particular, according to GSPT (see section~\ref{S:fenichel}), if the \emph{slow manifold}
\begin{equation}\label{20june10_slowManifold}
\mathcal{M}_0 = \left\{\left(\bar{P},C_U,C_E \right ) \, \bigg| \,
\begin{array}{ccl}
0 &=& K_1* (P U^t) -(K_{-1}+K_2)* C_U; \\
0 &=& L_1* (EP^t)- (L_{-1}+L_2)* C_E
\end{array}
\right\}
\end{equation}
is normally hyperbolic and stable, then the solutions of Eq.~(\ref{09mar10_matrixForm}) are
attracted to and shadow solutions on $\mathcal{M}_0$.
If we consider the system~(\ref{09mar10_matrixFormDEL}b,c) entry-wise then it consists of $2n^2$ coupled quadratic equations in $2n^2 + n$ variables, namely the entries of $\bar{P},C_U,C_E$ (note that $U,E$ are functions of $\bar{P},C_U,C_E$).
As described in section~\ref{S:intro_two}, we can avoid solving coupled quadratic equations by seeking a solution in terms of $P$ instead of $\bar{P}$. Using Eq.~(\ref{09mar10_closedform}a,b) we eliminate $E,U$ from Eqs.~(\ref{09mar10_matrixFormDEL}b,c) to obtain
\begin{subequations}\label{10mar10_linear}
\begin{align}
K_1* \left[P \left( V_n^tC_U^t + V_n^tC_U- V_n^t (I_n * C_U)\right) + PV_n^t C_E \right] +(K_{-1}+K_2)* C_U
&=\quad K_1* \left[P \left( U_T^t- P^t\right)\right], \label{10mar10_lineara}\\
L_1* \left(C_E \left(V_n P^t\right)\right)+ (L_{-1}+L_2)* C_E &= \quad L_1* \left(E_TP^t\right).
\label{10mar10_linearb}
\end{align}
\end{subequations}
Although complicated, Eq.~(\ref{10mar10_linear}) is linear in $C_U$ and $C_E$. The following Lemma, proved in~\ref{june0110_solveLinearEquation}, shows that the equations are also solvable.
\begin{lemma}\label{21june10_linerSolve}
Suppose
$K_1 = [k_{ij}^{1}],$ $K_{-1}= [k_{ij}^{-1}],$ $K_2= [k_{ij}^{2}],$ $L_1= [l_{ij}^{1}],$ $L_{-1}= [l_{ij}^{-1}],$ $L_2= [l_{ij}^{2}]$ $\in \mathbb{R}^{n \times n}$ are real matrices with non-negative entries. Furthermore, assume that for any pair $i,j \in \{1,2,...,n\} $ either $k_{ij}^{1} = k_{ij}^{-1} = k_{ij}^{2} = 0 $, or all these coefficients are positive, and similarly for the coefficients $l_{ij}^{1}, l_{ij}^{-1}$, and $ l_{ij}^{2}$.
If $U_T,E_T, P $ $\in\mathbb{R}^{n \times 1}_+ $ are real vectors with positive entries, and $V_n = [1\,1\,\cdots\,1]^t$ is a vector of size $n$, then Eq.~(\ref{10mar10_linear}) has a unique solution for $C_U, C_E \in \mathbb{R}^{n \times n} $ in terms of $P$.
\end{lemma}
We denote the solution of Eq.~(\ref{10mar10_linear}) described in Lemma~\ref{21june10_linerSolve} by $\tilde{C}_U(P), \tilde{C}_E(P)$.
This solution can be used to close Eq.~(\ref{09mar10_matrixFormDELa}), by using Eq.~\eqref{09mar10_closedformc} to obtain
\begin{eqnarray}\label{19mar10_B}
\frac{d\bar{P}}{dt} &=& \frac{dP}{dt} + \frac{d}{dt}\left(\tilde{C}_U(P)V_n\right) + \frac{d}{dt}\left(\tilde{C}_E(P)^tV_n\right) \nonumber\\
&=&\left[I +\frac{\partial}{\partial P}\left(\tilde{C}_U(P)V_n\right) + \frac{\partial}{\partial P}\left(\tilde{C}_E(P)^tV_n\right) \right]\frac{dP}{dt}
\end{eqnarray}
With Eq.~(\ref{09mar10_matrixFormDELa}), this leads to a closed system in $P$,
\begin{equation}\label{19mar10_conclusion}
\left[I +\frac{\partial}{\partial P}\left(\tilde{C}_U(P)V_n\right) + \frac{\partial}{\partial P}\left(\tilde{C}_E(P)^tV_n\right) \right]\frac{dP}{dt}
\; = \; (K_2* \tilde{C}_U(P))^t V_n - (L_2* \tilde{C}_E(P))^tV_n.
\end{equation}
The initial value of Eq.~(\ref{19mar10_conclusion}), denoted by $\hat{P}(0)$, must be chosen as the projection of the initial value $P(0)$ of Eq.~(\ref{nov0809_mainODE}), onto the manifold $\mathcal{M}_0$. The reduction is obtained under the assumption that during the initial transient there has not been any significant change in $\bar{P} = P + C_UV_n + C_E^tV_n$.
Therefore the projection, $\hat{P}(0)$, of the initial conditions onto the slow manifold is related to the original initial conditions, $U(0), P(0), C_U(0), C_E(0),$ by
\begin{equation*}
\hat{P}(0) + \tilde{C}_U(\hat{P}(0))V_n + \tilde{C}_E^t(\hat{P}(0))V_n = P(0) + C_U(0)V_n + C_E(0)^tV_n = P(0).
\end{equation*}
In summary, if the tQSSA is valid, then Eq.~(\ref{19mar10_conclusion}) is a reduction of Eq.~(\ref{nov0809_mainODE}). We next study the stability of the slow manifold $\mathcal{M}_0$ defined by Eq.~\eqref{20june10_slowManifold}. This is a necessary step in showing that GSPT can be used to justify the validity of the reduction obtained under the generalized tQSSA.
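Numerically, Eq.~(\ref{10mar10_linear}) can be solved by assembling the matrix of the linear map acting on $(C_U,C_E)$ column by column and calling a dense solver. The Python sketch below is one possible implementation (an illustration, not the authors' code); it assumes all rate constants are positive, i.e.\ the dense case of Lemma~\ref{21june10_linerSolve}.
\begin{verbatim}
import numpy as np

def solve_complexes(P, K1, Km1, K2, L1, Lm1, L2, U_T, E_T):
    # Solve Eq. (10mar10_linear) for C_U, C_E given P (all rates assumed positive).
    n = P.size
    Vn = np.ones(n)

    def apply_lhs(C_U, C_E):
        rowU  = C_U @ Vn          # V_n^t C_U^t  (row sums of C_U)
        colU  = C_U.T @ Vn        # V_n^t C_U    (column sums of C_U)
        diagU = np.diag(C_U)      # V_n^t (I_n * C_U)
        colE  = C_E.T @ Vn        # V_n^t C_E
        lhs_U = K1 * np.outer(P, rowU + colU - diagU + colE) + (Km1 + K2) * C_U
        lhs_E = L1 * (C_E @ np.outer(Vn, P)) + (Lm1 + L2) * C_E
        return lhs_U, lhs_E

    # Assemble the (2 n^2) x (2 n^2) matrix of the linear map in Eq. (10mar10_linear).
    N = 2 * n * n
    A = np.zeros((N, N))
    for k in range(N):
        v = np.zeros(N); v[k] = 1.0
        LU, LE = apply_lhs(v[:n*n].reshape(n, n), v[n*n:].reshape(n, n))
        A[:, k] = np.concatenate([LU.ravel(), LE.ravel()])

    rhs_U = K1 * np.outer(P, U_T - P)    # K_1 * [P (U_T^t - P^t)]
    rhs_E = L1 * np.outer(E_T, P)        # L_1 * (E_T P^t)
    x = np.linalg.solve(A, np.concatenate([rhs_U.ravel(), rhs_E.ravel()]))
    return x[:n*n].reshape(n, n), x[n*n:].reshape(n, n)
\end{verbatim}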
\subsection{Stability of the slow manifold}\label{20june10_StabilityOfSlowManifold}
We start by introducing several definitions and some notation to simplify the computations
involved in showing that the slow manifold $\mathcal{M}_0$, defined by Eq.~(\ref{20june10_slowManifold}), is normally hyperbolic and stable. The results also apply to the slow manifolds discussed in sections~\ref{IsolatedMichaelisMentenReaction} and~\ref{sec:may3010_TwoProtein}, as those are particular examples of $\mathcal{M}_0$.
Suppose that $A$ and $B$ are matrices of dimensions $n \times k$ and $n \times l$, respectively. We denote by $[A \,:\, B]$ the $n \times (k+l)$ matrix obtained by adjoining $B$ to $A$.
We use this definition to combine the different coefficient matrices, and let
\[
C := [C_U\,:\,C_E^t], \quad
Q_1 := [K_1\,:\,L_1^t], \quad Q_2 := [K_{-1}+K_2\,:\,L_{-1}^t+L_2^t].
\]
We also define
\[
Z := \left[\begin{array}{c} U \\ E \end{array} \right], \qquad
\bar{Z} := \left[\begin{array}{c} U_T - \bar{P} \\ E_T \end{array} \right], \qquad
I_{2n}^n := \left[\begin{array}{c} I_n \\ 0 \end{array} \right], \qquad \text{and} \qquad
V_{2n} =
\underbrace{ \left[\begin{array}{cccc}1 & 1 & \ldots & 1 \end{array} \right]^t
}_{2n \text{ times}}.
\]
Using this notation the right hand side of Eqs.~(\ref{09mar10_closedformb}-\ref{09mar10_closedforma}) can be written as
\begin{eqnarray*}
Z = \left[\begin{array}{c} U \\ E \end{array}\right]
&=&
\left[\begin{array}{c} U_T - \bar{P} \\ E_T \end{array} \right] -
\left[\begin{array}{c} C_U^t V_n \\ C_EV_n \end{array} \right] +
\left[\begin{array}{c} (I_n*C_U^t) V_n \\ 0 \end{array} \right] \\
&=&
\bar{Z} - C^tV_n + \left(\left[\begin{array}{c} I_n \\ 0 \end{array} \right]*C^t \right)V_n \\
&=&
\bar{Z} - (C^t - I_{2n}^n*C^t)V_n,
\end{eqnarray*}
and Eq.~(\ref{09mar10_closedformc}) can be written as
$
P = \bar{P} - CV_{2n}.
$
Therefore, Eqs.~(\ref{09mar10_matrixFormc}-\ref{09mar10_matrixFormb}) can be merged to obtain
\begin{equation}\label{may3110_dcdt}
\frac{dC}{dt} = \underbrace{Q_1*(PZ^t)-Q_2*C}_{:= F(C)}.
\end{equation}
The manifold $\mathcal{M}_0$ is defined by
\begin{equation*}
\mathcal{M}_0 = \left\{C \in \mathbb{R}^{n \times 2n} \,\, \big| \,\,
Q_1*(PZ^t)-Q_2*C = F(C) = 0 \right\}.
\end{equation*}
To show that $\mathcal{M}_0$ is stable and normally hyperbolic we need to show that the Jacobian,
$ \displaystyle{\frac{\partial F}{\partial C},}$ evaluated at $ {\mathcal{M}_0}$
has eigenvalues with only negative real parts. We will show that $ \displaystyle{\frac{\partial F}{\partial C}}$ has eigenvalues with negative real parts everywhere, and hence at all points of $ {\mathcal{M}_0},$ \emph{a fortiori}.
The mapping $F : \mathbb{R}^{n \times 2n} \rightarrow \mathbb{R}^{n \times 2n}$ is a matrix valued function of the matrix variable $C$. Therefore
$ \displaystyle{\frac{\partial F}{\partial C}}$ represents differentiation with respect to a matrix.
This operation is defined by ``flattening'' an $m \times n$ matrix to an
$mn \times 1$ vector and taking the gradient. More precisely, suppose $M = [M_{.1} : M_{.2} : \ldots : M_{.n} ]$ is an $m \times n$ matrix, where $M_{.j}$ is the $j$th column of $M$. Then define
\begin{equation}\label{june2110_vecAndHatDefinition}
\,\mathrm{vec}\,(M) := \left[ \begin{array}{c} M_{.1} \\ M_{.2} \\ \vdots \\ M_{.n} \end{array} \right] \quad \in \quad\mathbb{C}^{mn \times 1}, \qquad \text{and} \qquad
\widehat{M} := \, \text{diag}(\,\mathrm{vec}\,(M)) \quad \in \quad \mathbb{C}^{mn \times mn}.
\end{equation}
Therefore, $\,\mathrm{vec}\,(M)$ is obtained by stacking the columns of $M$ on top of each other, and $\widehat{M}$ is the $mn \times mn$ diagonal matrix whose diagonal entries are given by $\,\mathrm{vec}\,(M)$.
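A small numerical illustration of these operators (note that $\,\mathrm{vec}\,$ stacks columns, which corresponds to column-major, i.e.\ Fortran, ordering in numpy):
\begin{verbatim}
import numpy as np

M = np.array([[1.0, 3.0],
              [2.0, 4.0]])
vecM = M.flatten(order="F")   # vec(M) stacks the columns: [1, 2, 3, 4]
hatM = np.diag(vecM)          # diag(vec(M)), a 4 x 4 diagonal matrix
\end{verbatim}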
Suppose $G : \mathbb{C}^{p \times q} \rightarrow \mathbb{C}^{m \times n}$ is a matrix valued function with $ X \in \mathbb{C}^{p \times q} \mapsto G(X) \in \mathbb{C}^{m \times n}$. Then the derivative of $G$ with respect to $X$ is defined as
\begin{equation}\label{june2110_derivativeDefinition}
\frac{\partial G}{\partial X} := \frac{\partial \,\mathrm{vec}\,(G)}{\partial \,\mathrm{vec}\,(X)},
\end{equation}
where the right hand side is the Jacobian~\citep{neudecker}. In the appendix we list some important properties of these operators which will be used subsequently (see~\ref{20june10_diffWrtMatrix}).
A direct application of Theorem~\ref{may3110_HadamardProductRule} stated in~\ref{20june10_diffWrtMatrix} yields
\begin{eqnarray*}
\frac{\partial \,F}{\partial \,C} =
\frac{\partial \,\mathrm{vec}\,(F)}{\partial \,\mathrm{vec}\,(C)}
&=&
\widehat{Q}_1
\frac{\partial \,\mathrm{vec}\,(PZ^t)}{\partial \,\mathrm{vec}\, (C)}
-\widehat{Q}_2
\frac{\partial \,\mathrm{vec}\,(C)}{\partial \,\mathrm{vec}\,(C)}.
\end{eqnarray*}
We first assume that all the entries in the connectivity matrices are positive, so that all entries in the matrix $C$ are \emph{actual variables}. At the end of~\ref{september1510_stableManifold} we show how to remove this assumption.
Replacing ${\partial \,\mathrm{vec}\,(C)}/{\partial \,\mathrm{vec}\,(C)}$ with the identity matrix, $I_{2n^2}$,
adding $\widehat{Q}_2$ to both sides, using Theorems~\ref{may3110_vecABC}, \ref{may3110_hadamardproduct}, \ref{may3110_simpleProtuctRule}, \ref{may3110_HadamardProductRule}, and treating $\bar{P}$ and $\bar{Z}$ as independent of $C$, we obtain
\begin{eqnarray*}
\widehat{Q}_2 +\frac{\partial \,\mathrm{vec}\,(F)}{\partial \,\mathrm{vec}\,(C)}
&=&
\widehat{Q}_1 \left[
\left(Z \otimes I_n \right)\frac{\partial \,\mathrm{vec}\,( P)}{\partial \,\mathrm{vec}\, (C) }
+
\left(I_{2n} \otimes P \right)
\frac{\partial \,\mathrm{vec}\,( Z^t)}{\partial \,\mathrm{vec}\, (C) }
\right] \\
&=&
\widehat{Q}_1 \left[
-
\left(Z \otimes I_n \right)\frac{\partial \,\mathrm{vec}\, (CV_{2n})}{\partial \,\mathrm{vec}\, (C) }
-
\left(I_{2n} \otimes P \right)
\frac{\partial \,\mathrm{vec}\,\left(\left(\left(C^t - I_{2n}^n*C^t \right)V_n \right)^t\right)}{\partial \,\mathrm{vec}\, (C) }
\right] \\
&=&
\widehat{Q}_1 \left[
-
\left(Z \otimes I_n \right)\frac{\partial \,\mathrm{vec}\, (CV_{2n})}{\partial \,\mathrm{vec}\, (C) }
-
\left(I_{2n} \otimes P \right)
\frac{\partial \,\mathrm{vec}\, \left(V_n^tC - V_n^t(\left({I_{2n}^n}\right)^t*C)\right) }{\partial \,\mathrm{vec}\, (C) }
\right] \\
&=&
\widehat{Q}_1 \Bigg[
-
\left(Z \otimes I_n \right)(V_{2n}^t \otimes I_n)\frac{\partial \,\mathrm{vec}\, (C)}{\partial \,\mathrm{vec}\, (C) } \\
&&-
\left(I_{2n} \otimes P \right)
\left\{
\frac{\partial \,\mathrm{vec}\, (V_n^tC ) }{\partial \,\mathrm{vec}\, (C) }
-\frac{\partial \,\mathrm{vec}\, ( V_n^t(\left({I_{2n}^n}\right)^t*C)) }{\partial \,\mathrm{vec}\, (C) }
\right\}
\Bigg] \\
&=&
\widehat{Q}_1 \left[
-
\left(ZV_{2n}^t \otimes I_n \right)
-
\left(I_{2n} \otimes P \right)
\left\{
\left(I_{2n} \otimes V_n^t \right)
-\left(I_{2n} \otimes V_n^t \right) \widehat{({I_{2n}^n})^t}
\right\}
\right] \\
&=&
\widehat{Q}_1 \left[
-
\left(ZV_{2n}^t \otimes I_n \right)
-
\left(I_{2n} \otimes PV_n^t \right)
+\left(I_{2n} \otimes PV_n^t \right) \widehat{({I_{2n}^n})^t}
\right] \\
&=&
-\widehat{Q}_1 \left[
\left(ZV_{2n}^t \otimes I_n \right)
+
\left(I_{2n} \otimes PV_n^t \right)\left(I_{2n^2}- \widehat{({I_{2n}^n})^t} \right)
\right].
\end{eqnarray*}
Here
$
\widehat{({I_{2n}^n})^t}
$ is the matrix obtained by applying the operator defined in Eq.~(\ref{june2110_vecAndHatDefinition}) to the transpose of $I_{2n}^n$.
This computation shows that the Jacobian matrix of interest has the form
\begin{equation}\label{1april10_jacobian}
J :=
\frac{\partial \,F}{\partial \,C}
=
-\widehat{Q}_1 \left[
\left(ZV_{2n}^t \otimes I_n \right)
+
\left(I_{2n} \otimes PV_n^t \right)\left(I_{2n^2}- \widehat{\left({I_{2n}^n}\right)^t} \right)
\right] - \widehat{Q}_2.
\end{equation}
The following Lemma, proved in~\ref{september1510_stableManifold} and applied with $\Lambda = \widehat{Q}_1$, $\Gamma = \widehat{Q}_2$, and $Y = P$, shows that the negative of this Jacobian matrix has eigenvalues with strictly positive real parts; hence the Jacobian always has eigenvalues with negative real parts.
\begin{lemma}\label{15mar10_stableMatrix}
Suppose $Z \in \mathbb{R}^{2n \times 1}_+$ is a $2n$ dimensional vector with positive entries, $Y \in \mathbb{R}^{n \times 1}_+$ is an $n$ dimensional vector with positive entries, $\Lambda, \Gamma \in \mathbb{R}^{2n^2 \times 2n^2}$ are diagonal matrices with positive entries on the diagonal. Further assume that $R_{n} $ and $ R_{2n}$ are row vectors of size $n$ and $2n$ respectively with all entries equal to $1$. Then the $2n^2 \times 2n^2$ matrix
\begin{equation}\label{15mar10_jacobian}
J = \Lambda \left[(ZR_{2n} \otimes I_n) + (I_{2n}\otimes YR_n)\left(I_{2n^2}- \widehat{\left({I_{2n}^n}\right)^t} \right) \right] + \Gamma
\end{equation}
has eigenvalues with strictly positive real parts.
\end{lemma}
This Lemma applies to connectivity matrices with strictly positive entries. In~\ref{Section:Sparse} we show how
to generalize the Lemma to the case when the connectivity matrices contain zero entries. In this case only the principal submatrix of the Jacobian, $J$, corresponding to the positive entries of the connectivity matrices needs to be examined. Since any principal submatrix of $J$ inherits the stability properties of $J$, the result follows. We therefore obtain the following corollary.
\begin{corollary}
The manifold $\mathcal{M}_0$ defined in Eq.~\eqref{20june10_slowManifold} is normally hyperbolic and stable.
\end{corollary}
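The conclusion of Lemma~\ref{15mar10_stableMatrix} is easy to probe numerically for random positive data; the sketch below assembles the matrix in Eq.~(\ref{15mar10_jacobian}) with numpy (Kronecker products via \texttt{np.kron}) and reports the smallest real part of its eigenvalues, which the Lemma asserts to be strictly positive. The random data are purely illustrative.
\begin{verbatim}
import numpy as np

n = 3
rng = np.random.default_rng(0)

Z = rng.uniform(0.5, 2.0, size=2 * n)                 # positive entries
Y = rng.uniform(0.5, 2.0, size=n)                     # positive entries
Lam = np.diag(rng.uniform(0.5, 2.0, size=2 * n * n))  # Lambda: positive diagonal
Gam = np.diag(rng.uniform(0.5, 2.0, size=2 * n * n))  # Gamma: positive diagonal

I_n, I_2n = np.eye(n), np.eye(2 * n)

# Diagonal matrix of vec([I_n : 0]) (columns stacked), i.e. the "hat" of (I_{2n}^n)^t.
hat = np.diag(np.hstack([I_n, np.zeros((n, n))]).flatten(order="F"))

J = Lam @ (np.kron(np.outer(Z, np.ones(2 * n)), I_n)
           + np.kron(I_2n, np.outer(Y, np.ones(n))) @ (np.eye(2 * n * n) - hat)) + Gam

# According to the Lemma this minimum should be strictly positive.
print(np.linalg.eigvals(J).real.min())
\end{verbatim}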
\subsection{Validity of tQSSA in the general setup}\label{ValidityOfTQSSAInGeneralSetup}
We next investigate the asymptotic limits under which the tQSSA is valid in the general setting described at the beginning of this section. We follow the approach given in the previous sections
to obtain a suitable rescaling of the variables. While this rescaling does not change the
stability of the slow manifold, $\mathcal{M}_0$, it allows us to more easily describe the
asymptotic limits in which the timescales are separated, and the system is singularly perturbed.
Recall that Eq.~(\ref{09mar10_matrixForm}) and Eq.~(\ref{may3010_ODEtermWise}) are equivalent. The concise form given in Eq.~(\ref{09mar10_matrixForm}) was useful in obtaining a reduction and checking the stability of the slow manifold. However, to obtain sufficient conditions for the validity of the tQSSA, we will work with Eqs.~(\ref{may3010_ODEtermWise}) and (\ref{21mar_xieiyi}).
Let $l_{ij}^m := (l_{ij}^{-1}+l_{ij}^2)/l_{ij}^1$, $k_{ij}^m := (k_{ij}^{-1}+k_{ij}^2)/k_{ij}^1$ denote the MM constants. Then the following scaling factors are natural generalizations of those introduced in section~\ref{sec:may3010_TwoProtein},
\begin{equation*}
\beta_{ij} := \frac{E_i^TU_j^T}{E_i^T+U_j^T+l_{ij}^m},
\quad
\alpha_{ij} := \frac{U_i^TU_j^T}{U_i^T+U_j^T+k_{ij}^m},
\quad \quad \quad
i,j \in \{1,2,...,n\}.
\end{equation*}
Note that for each pair $(i,j)$ the coefficients $k_{ij}^1, k_{ij}^{-1}, k_{ij}^2$ are either all zero or all nonzero. In the case that $k_{ij}^1 = k_{ij}^{-1} = k_{ij}^2 = 0$ we define $k_{ij}^m := 0$. Similarly, if $l_{ij}^1 = l_{ij}^{-1} = l_{ij}^2 = 0$ then $l_{ij}^m := 0$. Let
\begin{equation*}
T_{\bar{U}} := \, \max \left\{\, \underset{i,j}{\max}\left\{ \frac{U_j^T}{l_{ij}^2 \beta_{ij}}\right\},\, \underset{i,j}{\max}\left\{ \frac{U_j^T}{k_{ij}^2 \alpha_{ij}}\right\} \right\}
= \frac{U_{j_0}^T}{l_{i_0j_0}^2 \beta_{i_0j_0}}, \quad \text{for some } i_0,j_0 \in \{1,2,...,n\},
\end{equation*}
where the maxima are taken over pairs with nonzero rates and, without loss of generality, we assume that the maximum is attained by the first term.
We next define the following dimensionless rescaling of the variables in
Eq.~(\ref{may3010_ODEtermWise})
\begin{equation}\label{may3110_rescaling}
\tau = \frac{t}{T_{\bar{U}}}, \quad \text{and} \quad
\bar{p}_i(\tau) = \frac{\bar{P}_i(t)}{U_i^T}, \quad
c_{ij}^u(\tau)= \frac{C_{ij}^U(t)}{\alpha_{ij}}, \quad
c_{ij}^e(\tau) = \frac{C_{ij}^E(t)}{\beta_{ij}}, \quad i,j \in \{1,2,...,n\}.
\end{equation}
After rescaling, Eqs.~(\ref{may3010_ODEtermWise}) take the form
\begin{subequations}\label{21mar10_rescaled}
\begin{equation}
\frac{d\bar{p}_i}{d\tau}
= \sum_{r=1}^n\bigg( \frac{k_{ri}^{2}\alpha_{ri}U_{j_0}^T}{l_{{i_0j_0}}^2 \beta_{{i_0j_0}} U_i^T}c_{ri}^u - \frac{l_{ri}^2 \beta_{ri} U_{j_0}^T}{l_{{i_0j_0}}^2 \beta_{{i_0j_0}} U_i^T} c_{ri}^e\bigg),
\label{21mar10_rescaleda}
\end{equation}
\addtocounter{equation}{1}
\begin{eqnarray}
\bigg(\frac{ \beta_{ij} }{l_{ij}^1 E_i^TU_j^T T_{\bar{U}}}\bigg)
\frac{dc_{ij}^e}{d\tau}
&=& 1-c_{ij}^e -\left[ \sum_{\underset{s\neq j}{s=1}}^n\frac{\beta_{is}}{E_i^T}c_{is}^e \right]
\left[ 1 -\bar{p}_j-\frac{1}{U_j^T}\sum_{\underset{r\neq i}{r=1}}^n\left(\beta_{rj}c_{rj}^e+\sum_{\underset{s\neq j}{s=1}}^n\alpha_{js}c_{js}^u\right) \right] \nonumber \\
&&-\frac{1}{U_j^T}
\left[ U_j^T\bar{p}_j+\sum_{\underset{r\neq i}{r=1}}^n\beta_{rj}c_{rj}^e+\sum_{\underset{s\neq j}{s=1}}^n\alpha_{js}c_{js}^u \right]\nonumber \\
&& -\frac{1}{U_j^T}\left[ U_j^T\bar{p}_j +\sum_{\underset{r\neq i}{r=1}}^n\beta_{rj}c_{rj}^e+\sum_{\underset{s\neq j}{s=1}}^n\alpha_{js}c_{js}^u + \sum_{\underset{s\neq i}{s=1}}^n\beta_{is}c_{is}^e \right]\frac{\beta_{ij}c_{ij}^e}{E_i^T} + \frac{(\beta_{ij}c_{ij}^e)^2}{E_i^TU_j^T}.
\nonumber \\
\label{21mar10_rescaledb}
\end{eqnarray}
\end{subequations}
The rescaled form of Eq.~(\ref{may3010_ODEtermWisec}) is similar to the rescaled form of Eq.~(\ref{may3010_ODEtermWiseb}), and we therefore omit it.
If we define
\[
\epsilon_{ij} := \frac{k^2_{ij}}{k^1_{ij}}\frac{U^T_i}{(U^T_i + U^T_j + k^m_{ij})^2},
\quad
\epsilon^e_{ij} := \frac{l^2_{ij}}{l^1_{ij}}\frac{E^T_i}{(E^T_i + U^T_j + l^m_{ij})^2},
\]
and let
\begin{equation}\label{june2110_epsilon}
\epsilon := \text{max} \left\{ \underset{i,j}{\text{max}} \left\{ \epsilon_{ij} \right\}, \underset{i,j}{\text{max}} \left\{ \epsilon^e_{ij}\right\} \right\},
\end{equation}
then the following theorem defines the conditions under which Eq.~(\ref{21mar10_rescaled}) defines a singularly perturbed system and, hence, conditions under which GSPT is applicable.
\begin{theorem}\label{20june10_TQSSA}
If for all nonzero $k_{ij}^1,k_{ij}^2,k_{ij}^{-1}$, for all nonzero $l_{ij}^1,l_{ij}^2,l_{ij}^{-1}$, and for all $U_i^T,E_i^T$
\[
\begin{array}{ccccccc}
\mathcal{O}\left(\frac{k^{1}_{ij}}{k^{1}_{rs}}\right)
&=&
\mathcal{O}\left(\frac{l^{1}_{ij}}{k^{1}_{rs}}\right)
&=&
\mathcal{O}\left(\frac{l^{1}_{ij}}{l^{1}_{rs}}\right)
&=& \mathcal{O}(1), \\
\mathcal{O}\left(\frac{k^{2}_{ij}}{k^{2}_{rs}}\right)
&=&
\mathcal{O}\left(\frac{l^{2}_{ij}}{k^{2}_{rs}}\right)
&=&
\mathcal{O}\left(\frac{l^{2}_{ij}}{l^{2}_{rs}}\right)
&=& \mathcal{O}(1), \\
\mathcal{O}\left(\frac{k^{-1}_{ij}}{k^{-1}_{rs}}\right)
&=&
\mathcal{O}\left(\frac{l^{-1}_{ij}}{k^{-1}_{rs}}\right)
&=&
\mathcal{O}\left(\frac{l^{-1}_{ij}}{l^{-1}_{rs}}\right)
&=& \mathcal{O}(1), \\
\mathcal{O}\left(\frac{U_i^T}{U_j^T}\right)
&=&
\mathcal{O}\left(\frac{U_i^T}{E_j^T}\right)
&=&
\mathcal{O}\left(\frac{E_i^T}{E_j^T}\right)
&=& \mathcal{O}(1),
\end{array}
\qquad
1 \le i,j,r,s \le n,
\]
in the limit $\epsilon \rightarrow 0$, then Eq.~(\ref{21mar10_rescaled}) is a singularly perturbed system with the structure of Eq.~(\ref{E:group}). In particular, the $\bar{p}_i$ are the slow variables, and the $c_{ij}^u$ and $c_{ij}^e$ are the fast variables.
\end{theorem}
\begin{proof}
For each $i$ there always exist indices $r,s$ such that $k_{ri}^2 \neq 0 \neq k_{si}^2 $. Hence, the right hand side of Eq.~(\ref{21mar10_rescaleda}) is not identically zero for any $i \in \{1,2,...,n\}$. Furthermore, by assumption all coefficients on the right hand side of Eq.~(\ref{21mar10_rescaleda}) are $\mathcal{O}(1)$ as $\epsilon \rightarrow 0$. This implies that $\epsilon$ times the right hand side of Eq.~(\ref{21mar10_rescaleda}) vanishes in the limit $\epsilon \rightarrow 0$, so that the $\bar{p}_i$ are slow variables.
Secondly, the definition of $\beta_{ij}$ implies that all coefficients on the right hand side of Eq.~(\ref{21mar10_rescaledb}) are less than or equal to 1. Also, by definition, at least one coefficient has value exactly equal to 1. Hence, the right hand side of Eq.~(\ref{21mar10_rescaledb}) is not identically zero in the limit $\epsilon \rightarrow 0$.
The definitions of $\epsilon, \alpha_{ij}, \beta_{ij}, T_{\bar{U}}$ imply that the coefficients of $\frac{dc^e_{ij}}{d\tau}$ in Eq.~(\ref{21mar10_rescaledb}) are less than or equal to $\epsilon$. For example
\[
\frac{ \beta_{ij} }{l_{ij}^1 E_i^TU_j^T} \frac{1}{ T_{\bar{U}}}
\le
\frac{ \beta_{ij} }{l_{ij}^1 E_i^TU_j^T} \frac{l^2_{ij} \beta_{ij}}{U_j^T }
=
\epsilon_{ij}^e
\le
\epsilon.
\]
Hence, in the limit $\epsilon \rightarrow 0$, the left hand side of Eq.~(\ref{21mar10_rescaledb}) vanishes while the right hand side does not. To conclude the proof we only need to show the stability of the slow manifold in the rescaled coordinates. But we have already shown this for the unscaled coordinates in section~\ref{20june10_StabilityOfSlowManifold}, and a non-singular scaling of variables, as in Eq.~(\ref{may3110_rescaling}), does not affect the signs of the real parts of the eigenvalues of the Jacobian.
\end{proof}
Hence, under the assumptions of the above theorem, Eq.~(\ref{21mar10_rescaled}) has the form of Eq.~(\ref{10may10_GSPT}). Switching back to unscaled variables, we conclude that in the limit $\epsilon \rightarrow 0$ the tQSSA is valid, \emph{i.e.}, the reduction from Eq.~(\ref{09mar10_matrixForm}) to Eq.~(\ref{09mar10_matrixFormDEL}) is valid.
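For a concrete network, $\epsilon$ can be evaluated directly from the rate matrices and total concentrations. The Python helper below (introduced here purely for illustration) implements Eq.~(\ref{june2110_epsilon}); pairs with zero rates, i.e.\ absent edges, are skipped.
\begin{verbatim}
import numpy as np

def epsilon_general(K1, Km1, K2, L1, Lm1, L2, U_T, E_T):
    # epsilon of Eq. (june2110_epsilon); K*, L* are (n, n), U_T, E_T are (n,).
    n = U_T.size
    eps = 0.0
    for i in range(n):
        for j in range(n):
            if K1[i, j] > 0:            # PP-type edge from node i to node j
                km = (Km1[i, j] + K2[i, j]) / K1[i, j]
                eps = max(eps, (K2[i, j] / K1[i, j])
                               * U_T[i] / (U_T[i] + U_T[j] + km) ** 2)
            if L1[i, j] > 0:            # EP-type edge from enzyme i to node j
                lm = (Lm1[i, j] + L2[i, j]) / L1[i, j]
                eps = max(eps, (L2[i, j] / L1[i, j])
                               * E_T[i] / (E_T[i] + U_T[j] + lm) ** 2)
    return eps
\end{verbatim}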
\subsection{The assumption of zero initial concentrations of intermediate complexes and the choice of scaling}\label{Sect.ZeroInitComplex}
Before concluding, we discuss the significance of zero initial concentrations of intermediate complexes and the benefit of the choice of scaling we used to verify the asymptotic limits in which the system is singularly perturbed. Proposition~\ref{june2110_invariantHypercube} below proves that if the reaction starts with zero initial concentrations of intermediate complexes then the solutions of both Eqs.~(\ref{09mar10_matrixForm}) and (\ref{21mar10_rescaled}) are trapped in an $\mathcal{O}(1)$ neighborhood of the origin. Hence, the separation of timescales in Eq.~(\ref{21mar10_rescaled}), implied by Theorem~\ref{20june10_TQSSA}, can be used to obtain the reduction of Eq.~(\ref{09mar10_matrixForm}) given by Eq.~(\ref{09mar10_matrixFormDEL}).
This is important, since GSPT would not be applicable if the rescaling were to send $\mathcal{O}(1)$ solutions of Eq.~(\ref{09mar10_matrixForm}) to solutions of Eq.~(\ref{21mar10_rescaled}) that are unbounded
as $\epsilon \rightarrow 0$.
\begin{proposition}\label{june2110_invariantHypercube}
The $2n^2+n$ dimensional hypercube $\Omega$ defined by
\[
\Omega := \left\{\{\bar{p}_i\},\{c_{ij}^u\},\{c_{ij}^e\}\, |\,
0\le\bar{p}_i \le 1,\,
0 \le c_{ij}^u\le 2,\,
0 \le c_{ij}^e\le 2,\,
\forall \, i,j \in \{1,2,...,n\}\right\},
\]
is invariant under the flow of Eq.~(\ref{21mar10_rescaled}).
\end{proposition}
\begin{proof}
By the construction of the differential equations from the law of mass action, all the species concentration variables can take only non-negative values. This, together with
the conservation constraints
(\ref{20mar10_UT}), forces the $\bar{P}_i$ to take values between $0$ and $U_i^T$. Therefore $0\le\bar{p}_i(\tau) \le 1,\, \forall \, \tau >0$, provided the initial conditions are chosen in $\Omega$.
Positivity of variables also implies that $c_{ij}^u(\tau) \ge 0,\,c_{ij}^e(\tau) \ge 0$ if the flow starts inside $\Omega$. So we only need to show that $c_{ij}^u(\tau) \le 2$ and $ \,c_{ij}^e(\tau) \le 2$. It is sufficient to show that
$
\frac{d c_{ij}^u} {d \tau} \bigg |_{c_{ij}^u = 2} \le 0,$ and
$
\frac{d c_{ij}^e} {d \tau} \bigg |_{c_{ij}^e = 2} \le 0,$
or equivalently that $ \frac{d C_{ij}^U} {d t } \bigg |_{C_{ij}^U = 2\alpha_{ij}} \le 0,$ and $
\frac{d C_{ij}^E} {d t } \bigg |_{C_{ij}^E = 2\beta_{ij}} \le 0.
$
But
\begin{eqnarray*}
\frac{d C_{ij}^U} {d t } \bigg |_{C_{ij}^U = 2\alpha_{ij}}
&=&
\left[k_{ij}^1 P_i U_j - (k_{ij}^{-1}+k_{ij}^{2}) C_{ij}^U \right]\big |_{C_{ij}^U = 2\alpha_{ij}} \\
&=&
k_{ij}^1\Bigg[\left(\bar{P}_i - \sum_{s=1}^nC_{is}^U - \sum_{r=1}^nC_{ri}^E \right) \left( U_j^T - \bar{P}_j - \sum_{r=1}^nC_{rj}^U + C_{jj}^U\right) \\
&& \hspace{7cm}
- k_{ij}^{m} C_{ij}^U \Bigg]\Bigg |_{C_{ij}^U = 2\alpha_{ij}} \\
&\le&
k_{ij}^1\left(U_i^T-2\alpha_{ij} \right) \left(U_j^T - 2\alpha_{ij}\right) - (k_{ij}^{-1}+k_{ij}^{2})\, 2\alpha_{ij} \\
&=&
k_{ij}^1 \left [\left(U_i^T-2\alpha_{ij} \right) \left(U_j^T - 2\alpha_{ij}\right) - 2k_{ij}^{m} \alpha_{ij} \right] \\
&\le& 0.
\end{eqnarray*}
Similarly we can show that $C_{ij}^E$ is decreasing when $C_{ij}^E = 2\beta_{ij}$. This concludes the proof.
\end{proof}
From this we conclude that the assumptions of Theorem~\ref{20june10_TQSSA} and the zero initial values of intermediate complexes together imply the tQSSA.
Finally, we combine the results of section~\ref{20june10_StabilityOfSlowManifold} with Theorem~\ref{20june10_TQSSA} and Proposition~\ref{june2110_invariantHypercube} to obtain the main result of this study.
\begin{theorem}\label{june2110_mainTheorem}
If the parameters of Eq.~(\ref{nov0809_mainODE}) are such that assumptions of Theorem~\ref{20june10_TQSSA} are satisfied and the initial values of intermediate complexes are zero, then the tQSSA holds. For $\epsilon$ defined by Eq.~(\ref{june2110_epsilon}), there exists an $\epsilon_0$ such that for all $0 < \epsilon< \epsilon_0$, the solutions of Eq.~(\ref{09mar10_matrixForm}) are
$\mathcal{O}(\epsilon)$ close to the solutions of Eq.~(\ref{09mar10_matrixFormDEL}) after an exponentially fast transient. Eq.~(\ref{nov0809_mainODE}) can therefore be reduced to the $n$ dimensional Eq.~(\ref{19mar10_conclusion}) involving only the protein concentrations, $P_i$.
\end{theorem}
\section{Discussion}\label{sec:discussion}
We obtained sufficient conditions for the validity of the tQSSA in non-isolated Michaelis-Menten type reactions. We therefore significantly generalized previous approaches that extended the MM scheme to small networks of
reactions~\citep{PedersenBersaniBersaniCortese2008}, and provided a theoretical justification of the numerical results obtained in~\citep{CilibertoFabrizioTyson2007}.
We noted that the direct application of the tQSSA to equations modeling networks of reactions produces a reduction that contains coupled quadratic equations. However, for the class of networks discussed here we were able to circumvent this problem by solving an equivalent linear system. Moreover, while a direct application of the tQSSA leads to a reduced system involving the concentrations of both proteins and intermediate complexes, we obtained a closed form equation in terms of protein concentrations only. We also showed that the slow manifold used in the system reduction is always attracting.
MM type reactions are often used in models of signaling networks. In such models it is frequently assumed that the reduced equation describing the dynamics of a single, isolated protein can be used to study interactions in networks. It has been noted that this use of MM differential equations is not necessarily justified~\citep{CilibertoFabrizioTyson2007}. The present approach provides an alternative approximation whose validity we have established.
Recently, a general reduction procedure for multiple timescale chemical reaction networks has been proposed~\citep{othmerLee2010}. That study considered a general chemical interaction network with a predetermined set of fast and slow reactions. We deal with a more restrictive class of equations, which makes it unnecessary to start with prior knowledge of fast and slow reactions. Moreover, we are able to show the normal hyperbolicity of the slow manifold
in our reduction, something that was not possible in the more general setting described in~\citep{othmerLee2010}.
We end by pointing out a couple of limitations of this work. Firstly,
not all enzymatic networks belong to the class we have considered here. For example, our full reduction scheme does not work for the network depicted in Fig.~\ref{22april10_notReducable}.
\clearpage
\begin{figure}[t]
\begin{center}
\includegraphics[scale=.2]{notReducable}
\end{center}
\caption{\footnotesize{A hypothetical network for which the reduction described in sections~\ref{S:general_coordinates}--\ref{S:general_reduced} leads to a differential--algebraic system of equations. The concentrations of the intermediate complexes appear in a nonlinear way
in the resulting algebraic equations. A further reduction to a form involving only the protein concentrations
is therefore not apparent. } }
\label{22april10_notReducable}
\end{figure}
\clearpage
This network is a slight modification of the network in Fig.~\ref{25feb10_fig1}a). Although the tQSSA can be justified, the algebraic part of the reduced equations cannot be solved using our approach. These equations have the form
\begin{eqnarray*}
0 &=& \underbrace{(X_T - X_p - C_x^e - C_x - C_y)}_{= X}
\underbrace{(Y_T - Y - C_y - C_x - C_y^e )}_{ = Y_p} - k_mC_x, \\
0 &=& X_p \underbrace{(Y_T - Y - C_y - C_x - C_y^e )}_{ = Y_p} - k_mC_y, \\
0 &=& (E_1^T -C_x^e )X_p - k_mC_x^e, \\
0 &=& (E_2^T -C_y^e ) Y - k_mC_y^e,
\end{eqnarray*}
which has to be solved for $C_x ,C_y,C_x^e,C_y^e $ in terms of $X_p,Y $. Immediately we run into problems because the first equation in the above algebraic system is quadratic in the unknown variables.
We also note that no approximation theory is truly complete unless error bounds are investigated. Although GSPT guarantees that the derived approximations are $\mathcal{O}(\epsilon)$ close to
the true solutions, a more precise description of the error terms may be desired.
{\bf Acknowledgements:} We thank Patrick de Leenheer, Paul Smolen and Antonios Zagaris
for helpful discussions and comments on an earlier version of the manuscript. This work was supported by NSF Grants
DMS-0604429 and DMS-0817649 and a Texas ARP/ATP award.
| {
"timestamp": "2011-01-10T02:00:32",
"yymm": "1101",
"arxiv_id": "1101.1104",
"language": "en",
"url": "https://arxiv.org/abs/1101.1104",
"abstract": "The Michaelis-Menten equation has played a central role in our understanding of biochemical processes. It has long been understood how this equation approximates the dynamics of irreversible enzymatic reactions. However, a similar approximation in the case of networks, where the product of one reaction can act as an enzyme in another, has not been fully developed. Here we rigorously derive such an approximation in a class of coupled enzymatic networks where the individual interactions are of Michaelis-Menten type. We show that the sufficient conditions for the validity of the total quasi steady state assumption (tQSSA), obtained in a single protein case by Borghans, de Boer and Segel can be extended to sufficient conditions for the validity of the tQSSA in a large class of enzymatic networks. Secondly, we derive reduced equations that approximate the network's dynamics and involve only protein concentrations. This significantly reduces the number of equations necessary to model such systems. We prove the validity of this approximation using geometric singular perturbation theory and results about matrix differentiation. The ideas used in deriving the approximating equations are quite general, and can be used to systematize other model reductions.",
"subjects": "Mathematical Physics (math-ph); Molecular Networks (q-bio.MN)",
"title": "Reduced models of networks of coupled enzymatic reactions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.963779941013992,
"lm_q2_score": 0.7341195210831258,
"lm_q1q2_score": 0.7075296687267151
} |
https://arxiv.org/abs/1607.04012 | Computing the Action of Trigonometric and Hyperbolic Matrix Functions | We derive a new algorithm for computing the action $f(A)V$ of the cosine, sine, hyperbolic cosine, and hyperbolic sine of a matrix $A$ on a matrix $V$, without first computing $f(A)$. The algorithm can compute $\cos(A)V$ and $\sin(A)V$ simultaneously, and likewise for $\cosh(A)V$ and $\sinh(A)V$, and it uses only real arithmetic when $A$ is real. The algorithm exploits an existing algorithm \texttt{expmv} of Al-Mohy and Higham for $\mathrm{e}^AV$ and its underlying backward error analysis. Our experiments show that the new algorithm performs in a forward stable manner and is generally significantly faster than alternatives based on multiple invocations of \texttt{expmv} through formulas such as $\cos(A)V = (\mathrm{e}^{\mathrm{i}A}V + \mathrm{e}^{\mathrm{-i}A}V)/2$. | \section{Introduction}\label{sec:int}
This work is concerned with the computation of $f(A)V$
for trigonometric and hyperbolic functions $f$,
where $A\in\mathbb{C}^{{n \times n}}$ and
$V\in\mathbb{C}^{{n \times n_0}}$ with $n_0 \ll n$.
Specifically, we consider the computation of
the actions of the matrix cosine, sine,
hyperbolic cosine, and hyperbolic sine functions.
Algorithms exist for computing these matrix functions,
such as those in \cite{ahr15}, \cite{hahi05c},
but we are not aware of any existing algorithms for
computing their actions.
Applications where these actions are required include differential
equations (as discussed below) and network analysis
\cite{ehh08}, \cite{kgg12}.
Furthermore, the proposed algorithm can also be utilized to compute the action
of the matrix exponential or $\varphi$ functions at different time steps.
This, in turn, finds an application in the efficient implementation of
exponential integrators \cite{ho10}.
One distinctive feature of the algorithm proposed is that it avoids complex
arithmetic for a real matrix.
This characteristic can be exploited to use only real arithmetic in the
computation of the matrix exponential as well,
if the matrix is real but the step argument complex.
This is useful for higher order splitting methods
\cite{aeoa09}, or for the solution of the Schr\"odinger equation,
where the problem can be rewritten so that the step argument is complex
and the matrix is real (see~\cref{eg:schroedinger}).
One line of attack is to develop algorithms for $f(A)V$ for each of these
four $f$ individually.
An algorithm\ \texttt{expmv}\ of Al-Mohy and Higham \cite{alhi11}
for computing the action of the
matrix exponential relies on the scaling and powering relation
$\ensuremath{\mathrm{e}}^A b = (\ensuremath{\mathrm{e}}^{A/s})^sb$, for nonnegative integers $s$, and uses a Taylor
polynomial\ approximation to $\ensuremath{\mathrm{e}}^{A/s}$.
The trigonometric functions $\cos$ and $\sin$
do not enjoy the same relation,
and while the double- and triple-angle formulas
$\cos (2A) = 2\cos^2(A) - I$ and
$\sin (3A) = 3\sin (A) - 4\sin^3 (A)$
can be successfully used in computing the cosine and sine~\cite{ahr15},
they do not lend themselves to computing the action of these functions.
For this reason our focus will be on exploiting the algorithm\ of \cite{alhi11}
for the action of the matrix exponential.
While this approach may not be optimal for each of the four $f$,
we will show that it leads to a numerically reliable algorithm\
and has the advantage that it allows the use of existing software.
The matrix cosine and sine functions arise in solving the
system of second order differential equations
\begin{align*}
\frac{\mathrm{d}^2}{\mathrm{d}t^2}y + A^2 y = 0, \qquad y(0)=y_0,\quad y'(0)=y_0',
\end{align*}
whose solution is given by
\begin{align*}
y(t) = \cos(tA)y_0 + A^{-1}\sin(tA)y_0'.
\end{align*}
Note that $A^2$ is the given matrix,
so $A$ may not always be known or easy to obtain.
By rewriting this system as a first order system of twice the dimension the
solution can alternatively be obtained as the first component of the action
of the matrix exponential:
\begin{align}\label{exp2by2}
\begin{bmatrix}
y(t) \\ y'(t)
\end{bmatrix}
&=
\exp\left(
t\begin{bmatrix}
0 & I\\-A^2 & 0
\end{bmatrix}
\right)
\begin{bmatrix}
y_0\\y_0'
\end{bmatrix}
=
\begin{bmatrix}
\cos(tA) & A^{-1}\sin(tA)\\-A\sin(tA)&\cos(tA)
\end{bmatrix}
\begin{bmatrix}
y_0\\y_0'
\end{bmatrix}\\
&=
\begin{bmatrix}
\cos(tA)y_0 + A^{-1}\sin(tA)y_0'\\-A\sin(tA)y_0 + \cos(tA)y_0'
\end{bmatrix}. \nonumber
\end{align}
By setting $y_0 = b$ and $y_0' = 0$,
or $y_0 = 0$ and $y_0' = b$,
and solving a linear system with $A$ or multiplying by $A$,
respectively, we obtain $\cos(tA)b$ and $\sin(tA)b$.
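The following short Python/NumPy sketch (ours, for illustration only; it is not the
authors' code and uses a dense matrix exponential in place of \texttt{expmv}) spells out
this route from \cref{exp2by2} to $\cos(tA)b$ and $\sin(tA)b$ on a small random matrix.
\begin{verbatim}
# Hedged illustration: recover cos(tA)b and sin(tA)b from the exponential of
# the block matrix [[0, I], [-A^2, 0]], starting from y0 = 0, y0' = b.
import numpy as np
from scipy.linalg import expm, cosm, sinm

rng = np.random.default_rng(0)
n, t = 6, 0.7
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

M = np.block([[np.zeros((n, n)), np.eye(n)],
              [-A @ A,           np.zeros((n, n))]])   # only A^2 is needed here

w = expm(t * M) @ np.concatenate([np.zeros(n), b])     # y0 = 0, y0' = b
cos_b = w[n:]                                          # cos(tA) b
sin_b = A @ w[:n]                                      # A * (A^{-1} sin(tA) b)

print(np.linalg.norm(cos_b - cosm(t * A) @ b),
      np.linalg.norm(sin_b - sinm(t * A) @ b))         # both close to zero
\end{verbatim}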
However, as a general purpose algorithm,
making use of \texttt{expmv}\ from \cite{alhi11},
this approach has several disadvantages.
First, each step requires two matrix--vector products with $A$,
when we would hope for one.
Second, because the block matrix has zero trace, no shift is applied by
\texttt{expmv}, so an opportunity is lost to reduce the norms.
Third, the coefficient matrix is nonnormal (unless $A^2$ is orthogonal),
which can lead to higher computational cost \cite{alhi11}.
We recall that all four of the functions addressed here can be expressed
as linear combinations of exponentials~\cite[chap.~12]{high:FM}:
\begin{subequations}\label{eq:both_trig_funs}
\begin{alignat}{2}\label{eq:trigh_funs}
\cosh A &= \tfrac{1}{2}(\ensuremath{\mathrm{e}}^{A}+ \ensuremath{\mathrm{e}}^{-A}), &\qquad
\sinh A &= \tfrac{1}{2}(\ensuremath{\mathrm{e}}^{A}- \ensuremath{\mathrm{e}}^{-A}), \\
\label{eq:trig_funs}
\cos A &= \tfrac{1}{2}(\ensuremath{\mathrm{e}}^{\ensuremath{\mathrm{i}} A}+ \ensuremath{\mathrm{e}}^{-\ensuremath{\mathrm{i}} A}), &\qquad
\sin A &= \tfrac{-\ensuremath{\mathrm{i}}}{2}(\ensuremath{\mathrm{e}}^{\ensuremath{\mathrm{i}} A}- \ensuremath{\mathrm{e}}^{-\ensuremath{\mathrm{i}} A}).
\end{alignat}
\end{subequations}
Furthermore, we have
\begin{align}\label{eq:trig_real}
\ensuremath{\mathrm{e}}^{\ensuremath{\mathrm{i}} A} &= \cos A + \ensuremath{\mathrm{i}}\sin A,
\end{align}
which implies that for real $A$,
$\cos A = \mathop{\mathrm{Re}} \ensuremath{\mathrm{e}}^{\ensuremath{\mathrm{i}} A}$ and
$\sin A = \mathop{\mathrm{Im}} \ensuremath{\mathrm{e}}^{\ensuremath{\mathrm{i}} A}$.
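As a quick numerical illustration of \cref{eq:trig_real} (a small sketch of ours, not
taken from the paper's codes), one can check the real/imaginary splitting on a random
real matrix:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm, cosm, sinm

A = np.random.default_rng(1).standard_normal((5, 5))   # real test matrix
E = expm(1j * A)
print(np.allclose(E.real, cosm(A)), np.allclose(E.imag, sinm(A)))   # True True
\end{verbatim}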
The main idea of this paper is to exploit these formulas to compute
$\cos(A) V$, $\sin(A) V$,
$\cosh(A) V$, and $\sinh(A) V$
by computing
$\ensuremath{\mathrm{e}}^{\beta A}V$ and
$\ensuremath{\mathrm{e}}^{-\beta A}V$ simultaneously with $\beta = \ensuremath{\mathrm{i}}$ and $\beta = 1$,
using a modification of the algorithm\ \texttt{expmv}\ of \cite{alhi11}.
In \cref{sec:ba} we discuss the backward error of the underlying
computation.
In \cref{sec:alg} we present the algorithm and the computational aspects.
Numerical experiments are given in \cref{sec:ne},
and in \cref{sec.conc} we offer some concluding remarks.
\section{Backward error analysis}\label{sec:ba}
The aim of this section is to bound the backward error for
the approximation of $f(A)V$ using truncated Taylor series expansions of
the exponential,
for the four functions $f$ in~\cref{eq:both_trig_funs}.
Here, backward error is with respect to truncation errors in the
approximation, and exact computation is assumed.
We will use the analysis of Al-Mohy and Higham \cite{alhi11},
with refinements to reflect the presence of two related exponentials
in each of the definitions of our four functions.
It suffices to consider the approximation of $\ensuremath{\mathrm{e}}^{A}$,
since the results apply immediately to $\ensuremath{\mathrm{e}}^AV$.
We consider a general approximation $r(A)$,
where $r$ is a rational function,
since when $r$ is a truncated Taylor series no simplifications accrue.
Since $A$ appears as $\pm A$ and $\pm \ensuremath{\mathrm{i}} A$ in \cref{eq:both_trig_funs},
in order to cover all cases
we treat $\beta A$, where $|\beta| \le 1$.
Consider the matrix
$$
G = \ensuremath{\mathrm{e}}^{-\beta A} r(\beta A) - I.
$$
With $\log$ denoting the principal matrix logarithm \cite[sec.~1.7]{high:FM},
let
\begin{equation}\label{Eeq}
E = \log( \ensuremath{\mathrm{e}}^{-\beta A} r(\beta A)) = \log (I + G),
\end{equation}
where $\rho(G) < 1$ is assumed for the existence of the logarithm.
We assume that $r$ has the property that
$r(X) \to \ensuremath{\mathrm{e}}^X$ as $X \to 0$,
which is enough to ensure that $\rho(G) < 1$ for small enough $\beta A$.
Exponentiating \cref{Eeq},
and using the fact that all terms commute
(each being a function of $A$), we obtain
$$
r(\beta A) = \ensuremath{\mathrm{e}}^{ \beta A + E},
$$
so that $E$ is the backward error matrix for the approximation.
For some positive integer $\ell$ and some radius of convergence $d > 0$
we have, from \cref{Eeq}, the convergent power series expansion
$$
E = \sum_{i=\ell}^{\infty} c_i (\beta A)^i, \qquad |\beta|\rho(A) < d.
$$
We can bound $E$ by taking norms to obtain
\begin{equation}\label{Ebound}
\norm{E} \le
\sum_{i=\ell}^{\infty} |c_i| \, \norm{\beta A}^i =: g( \norm{\beta A} ).
\end{equation}
Assuming that $g(\theta) = O(\theta^2)$, the quantity
\begin{align}\label{eq:thhat}
\widehat{\theta} :=\max\{\,\theta >0 : \, \theta^{-1}g(\theta) \le \mathrm{tol}\,\}
\end{align}
exists and we have the backward error result that
$\norm{\beta A} \le \widehat{\theta}$ implies
$r(\beta A) = \ensuremath{\mathrm{e}}^{\beta A + E}$, with $\norm{E}\le \mathrm{tol}\, \norm{\beta A}$.
Here $\mathrm{tol}$ represents the tolerance specified for the backward error.
In practice, we use scaling to achieve the required bound on $\norm{\beta A}$,
so our approximation is $r(\beta A/s)^s$ for some nonnegative integer $s$.
With $s$ chosen so that $\norm{\beta A/s} \le \widehat{\theta}$, we have
$$
r(\beta A/s)^s = \ensuremath{\mathrm{e}}^{\beta A + sE}, \qquad
\frac{\norm{sE}}{\norm{\beta A}} \le \mathrm{tol}.
$$
The crucial point is that since
$g(\norm{\beta A} ) = g(|\beta| \norm{A}) \le g(\norm{A})$, for all $|\beta| \le 1$,
the parameter $s$ chosen for $A$ can be used for $\beta A$.
Consequently, the original analysis gives the same bounds for
$\pm A$ and $\pm \ensuremath{\mathrm{i}} A$ and the same parameters can be used for the computation
of all four of these functions.
This result does not state that the backward error is the same for each $\beta$,
but rather the weaker result that each of the backward errors
satisfies the same inequality.
In practice, we use in place of
$\norm{\beta A}$ in \cref{Ebound} the quantity $\alpha_p(\beta A)$, where
\begin{equation}\label{eq:alphap}
\alpha_p(X) = \max(d_p,d_{p+1}), \quad d_p = \norm{X^p}^{1/p},
\end{equation}
for some $p$ with $\ell \ge p(p-1)$,
which gives potentially much sharper bounds, as shown in
\cite[Thm.\ 4.2(a)]{alhi09a}.
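For small matrices the quantities in \cref{eq:alphap} can be formed explicitly, as in
the following sketch (an illustration of ours; the algorithms discussed here never form
$X^p$ explicitly but estimate $\norm{X^p}_1$ with the block 1-norm estimator):
\begin{verbatim}
import numpy as np

def alpha_p(X, p):
    # alpha_p(X) = max(||X^p||_1^(1/p), ||X^(p+1)||_1^(1/(p+1))), formed explicitly
    d = lambda k: np.linalg.norm(np.linalg.matrix_power(X, k), 1) ** (1.0 / k)
    return max(d(p), d(p + 1))

X = np.array([[1.0, 1e4],
              [0.0, -1.0]])            # nonnormal example: X^2 = I
print(np.linalg.norm(X, 1), alpha_p(X, 2))
\end{verbatim}
Here $\alpha_2(X)\approx 21.5$ while $\norm{X}_1\approx 10^4$, which illustrates how
much sharper the $\alpha_p$-based bounds can be for nonnormal matrices.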
Our conclusion is that all four matrix functions
appearing in \cref{eq:both_trig_funs}
can be computed in a backward stable manner with the same parameters.
As we will see in the next section, the computations can even be combined to
compute the necessary values simultaneously.
\section{The basic algorithm}\label{sec:alg}
As our core algorithm\ for computing the action of the matrix exponential we take
the truncated Taylor series algorithm\ of Al-Mohy and Higham~\cite{alhi11}.
We recall some details of the algorithm below.
Other algorithms, such as the Leja method presented in~\cite{ckor15},
can be employed in a similar fashion, though the details will be different.
The truncated Taylor series algorithm\ takes
\begin{align*}
r(A) \equiv T_m(A)=\sum_{j=0}^m\frac{A^j}{j!}.
\end{align*}
As suggested in \cite{alhi11}, we limit the degree
$m$ of the polynomial approximant $T_m$ to $m_{\max} = 55$.
In order to allow the algorithm to work for general matrices,
with no restriction on the norm,
we introduce a scaling factor $s$ and assume that $\ensuremath{\mathrm{e}}^{s^{-1}A}$
is well approximated by $T_m(s^{-1}A)$.
From the functional equation of the exponential we have
\begin{align*}
\ensuremath{\mathrm{e}}^{A}V = \left(\ensuremath{\mathrm{e}}^{s^{-1}A}\right)^sV,
\end{align*}
and so the recurrence
\begin{align*}
V_{i+1} = T_m(s^{-1}A)V_i, \quad i=0, \ldots, s-1,\quad V_0=V,
\end{align*}
yields the approximation $V_s\approx\ensuremath{\mathrm{e}}^{A}V$.
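A bare-bones version of this recurrence is easy to state; the sketch below (ours, with
$m$ and $s$ fixed by hand rather than chosen by \cref{cf:ms}, and without the shift and
the early-termination test used later) is intended only to make the structure concrete.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def taylor_expmv(A, V, m=30, s=10):
    # V_{i+1} = T_m(A/s) V_i, i = 0, ..., s-1, accumulated term by term
    F = V.copy()
    for _ in range(s):
        B = F.copy()
        for j in range(1, m + 1):
            B = (A @ B) / (s * j)      # B = (A/s)^j V_i / j!
            F = F + B
    return F

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 40))
V = rng.standard_normal((40, 2))
ref = expm(A) @ V
print(np.linalg.norm(taylor_expmv(A, V) - ref) / np.linalg.norm(ref))
\end{verbatim}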
For a given $m$,
the function $g$ in \cref{Ebound} has
$\ell = m+1$.
The parameter $\widehat{\theta}$ in~\cref{eq:thhat}, which we now denote by $\theta_m$,
depends on the polynomial\ degree $m = \ell-1$ and the tolerance $\mathrm{tol}$, and its
values are given in \cite[Table~3.1]{alhi09a} for IEEE single precision
arithmetic and double precision arithmetic.
The cost function
\begin{align}
C_m(A) = ms = m\max\left\{1,\left\lceil\alpha_p(A)/\theta_m\right\rceil\right\}
\end{align}
measures the number of matrix--vector products,
and the optimal degree $m_*$ is chosen in \cite{alhi11} such that
\begin{align}\label{eq:minimalcost}
C_{m_*}(A)=\min\left\{
m\left\lceil\alpha_p(A)/\theta_m\right\rceil\!:\ 2\leq p\leq p_{\max},
\; p(p-1)-1\leq m \leq m_{\max}
\right\}.
\end{align}
Here, $m_{\max}$ is the maximal admissible Taylor polynomial\ degree
and $p_{\max}$ is the maximum value of $p$ such that $p(p-1)\le m_{\max} + 1$,
to allow the use of \cref{eq:alphap}.
Furthermore, $p_{\max}=8$ is the default choice in the implementation.
The parameters $m_*$ and $s$ are determined by \cref{cf:ms},
which is \cite[Code Fragment 3.1]{alhi11}.
\begin{algorithm}
\caption{$[m_*,s]=\text{parameters}(A,B,\mathrm{tol})$\label{cf:ms}}
This code determines $m_*$ and $s$ given
$A\in\mathbb{C}^{{n \times n}}$, $B\in\mathbb{C}^{{n \times q}}$,
$\mathrm{tol}$, $m_{\max}$, and $p_{\max}$.
It is assumed that the $\alpha_p$ in
\cref{eq:minimalcost} are estimated using the
block $1$-norm estimation algorithm\ of Higham and Tisseur
\rom{\cite{hiti00n}} with two columns.
\begin{code}
if
$\normi{A} \le \dfrac{4}{q} \dfrac{\theta_{m_{\max}}}{m_{\max}}
p_{\max}(p_{\max}+3)$ \\
\>$m_* = \mathop{\operatorname{arg\,min}}_{1\le m\le m_{\max}} m \lceil \normi{A}/\theta_m \rceil$\\
\>$s = \lceil \normi{A}/\theta_{m_*} \rceil$\\
else\\
\> Let $m_*$ be the smallest $m$ achieving the minimum in \cref{eq:minimalcost}.\\
\> $s = \max\bigl\{ C_{m_*}(A)/m_*, 1\bigr\}$\\
end
\end{code}
\end{algorithm}
A further reduction of the cost can be
achieved by choosing an appropriate point $\mu$ as the centre
of the Taylor series expansion.
As suggested in \cite{alhi11},
the shift is selected such that the Frobenius norm
$\normF{A-\mu I}$ is minimized, that is,
$\mu=\operatorname{trace}(A)/n$.
Algorithm~3.2 of \cite{alhi11} computes $\ensuremath{\mathrm{e}}^AB = [\ensuremath{\mathrm{e}}^A b_1,\dots,\ensuremath{\mathrm{e}}^Ab_q]$,
that is, the action of $\ensuremath{\mathrm{e}}^{A}$ on several vectors.
The following modification of that algorithm\ essentially computes
$[\ensuremath{\mathrm{e}}^{\tau_1A} b_1,\dots,\ensuremath{\mathrm{e}}^{\tau_qA}b_q]$:
the actions at different $t$ values.
The main difference between our algorithm\ and
\cite[Alg.~3.2]{alhi11} is in \cref{alg:recurence_line} of \cref{alg:basic_alg},
where a scalar ``$t$'' has been changed to a (block) diagonal matrix
$$
D(\tau) = D(\tau_1,\tau_2,\dots,\tau_q) \in\mathbb{C}^{q \times q}
$$
that we define precisely below.
The exponential computed in \cref{alg:eta_line} of \cref{alg:basic_alg}
is therefore a matrix exponential.
For simplicity we omit balancing, but it can be applied in the same way as
in \cite[Alg.~3.2]{alhi11}.
\begin{algorithm}\caption{$F = \textbf{F}(D(\tau),A,B)$}\label{alg:basic_alg}
For $\tau_1,\ldots,\tau_q\in\mathbb{C}$,
$A\in\mathbb{C}^{{n \times n}}$, $B\in\mathbb{C}^{{n \times q}}$, and
a tolerance $\mathrm{tol}$ the following algorithm produces a matrix
$F = g\circ g \circ \cdots \circ g(B) \in\mathbb{C}^{n \times q}$
\parens{$s$-fold composition}
where
$g(B) \approx \left(B + \frac{1}{s}\widetilde{A} BD(\tau) + \frac{1}{2!s^2}\widetilde{A}^2BD(\tau)^2 +
\frac{1}{3!s^3} \widetilde{A}^3BD(\tau)^3 + \cdots\right)J$,
where $\widetilde{A}$ and $J$ are given in the algorithm.
\begin{code}
$\widetilde{A}=A-\mu I$, where $\mu=\operatorname{trace}(A)/n$\\
$t=\max_k|\tau_k|$\\
if $t\|\widetilde{A}\|_1=0$\\
\> $m_*=0$, $s=1$\\
else\\
\> $[m_*,s]=\text{parameters}(t\widetilde{A},B,\mathrm{tol})$ \% \cref{cf:ms}\\
end\\\label{alg:eta_line}
$F=B$, $J = \ensuremath{\mathrm{e}}^{\mu D(\tau)/s}$\\
for $k=1:s$\\
\> $c_1 = \|B\|_\infty$\\\label{alg:innerloop_start}
\> for $j=1:m_*$\\\label{alg:recurence_line}
\> \> $B=\widetilde{A} B(D(\tau)/(sj)),$\\
\> \> $c_2=\|B\|_\infty$\\
\> \> $F = F+ B$\\
\> \> if $c_1 + c_2 \leq \mathrm{tol} \|F\|_\infty$, break, end\\
\> \> $c_1=c_2$\\\label{alg:innerloop_end}
\> end\\\label{alg:Dmult}
\> $F = F J$, $B = F$\\
end
\end{code}
\end{algorithm}
Note that for $\widetilde{A} = A - \mu I$,
$B = [b_1, b_2]$ and
$D(\tau) = \diag(\tau_1,\tau_2)$ we have $g(B) = [g_1, g_2]$
in \cref{alg:basic_alg}, with
\begin{align*}
g_j &\approx \left(b_j + \frac{\widetilde{A}}{s} b_j \tau_j
+ \frac{\widetilde{A}^2}{s^22!} b_j \tau_j^2
+ \frac{\widetilde{A}^3}{s^33!} b_j \tau_j^3 + \cdots\right)
\mathrm{e}^{\mu\tau_j/s}\\
&= \ensuremath{\mathrm{e}}^{(A-\mu I)\tau_j/s} \mathrm{e}^{\mu\tau_j/s}b_j=
\ensuremath{\mathrm{e}}^{A\tau_j/s}b_j,
\end{align*}
for $j = 1,2$.
Therefore we can compute the four actions of interest by selecting
appropriately
$\tau_1$, $\tau_2$, and $B$ and carrying out some postprocessing.
For given $t$, $A$, and $b$ we can compute,
with $\textbf{F}$ as in \cref{alg:basic_alg},
\begin{enumerate}
\item an approximation of $\cosh(tA)b$ by
\begin{align*}
B=[b/2,b/2], \quad D(\tau)=\begin{bmatrix}t& 0\\0 & -t\end{bmatrix},\quad
\cosh(tA)b=\textbf{F}(D(\tau),A,B)\begin{bmatrix}1\\1\end{bmatrix};
\end{align*}
\item an approximation of $\sinh(tA)b$ by
\begin{align*}
B=[b/2,b/2], \quad D(\tau)=\begin{bmatrix}t& 0\\0 & -t\end{bmatrix},
\quad
\sinh(tA)b=\textbf{F}(D(\tau),A,B)\begin{bmatrix}1\\-1\end{bmatrix};
\end{align*}
\item an approximation of $\cos(tA)b$ by
\begin{align*}
B=[b/2,b/2], \quad D(\tau)=\begin{bmatrix}\ensuremath{\mathrm{i}} t& 0\\0 & -\ensuremath{\mathrm{i}} t\end{bmatrix},
\quad
\cos(tA)b=\textbf{F}(D(\tau),A,B)\begin{bmatrix}1\\1\end{bmatrix};
\end{align*}
\item an approximation of $\sin(tA)b$ by
\begin{align*}
B=[b/2,b/2], \quad D(\tau)=\begin{bmatrix}\ensuremath{\mathrm{i}} t& 0\\0 & -\ensuremath{\mathrm{i}} t\end{bmatrix},
\quad
\sin(tA)b=\textbf{F}(D(\tau),A,B)\begin{bmatrix}-\ensuremath{\mathrm{i}}\\\ensuremath{\mathrm{i}}\end{bmatrix}.
\end{align*}
\end{enumerate}
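The next sketch (ours; a deliberately simplified stand-in for \cref{alg:basic_alg} with
fixed $m$ and $s$, no norm estimation, and no early termination) reproduces the four
selections listed above on a small dense matrix, using SciPy's dense matrix functions
only as references.
\begin{verbatim}
import numpy as np
from scipy.linalg import coshm, sinhm, cosm, sinm

def F(tau, A, B, m=30, s=10):
    # columns of the result approximate e^{tau_k A} b_k for the columns b_k of B
    n = A.shape[0]
    mu = np.trace(A) / n
    At = A - mu * np.eye(n)
    J = np.diag(np.exp(mu * tau / s))            # D(tau) is diagonal here
    Fk = B.astype(complex).copy()
    for _ in range(s):
        Bk = Fk.copy()
        for j in range(1, m + 1):
            Bk = (At @ Bk) @ np.diag(tau / (s * j))
            Fk = Fk + Bk
        Fk = Fk @ J
    return Fk

rng = np.random.default_rng(4)
n, t = 8, 0.9
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
B = np.column_stack([b, b]) / 2

Fh = F(np.array([t, -t], dtype=complex), A, B)        # items 1 and 2
Ft = F(np.array([1j * t, -1j * t]), A, B)             # items 3 and 4
print(np.linalg.norm(Fh @ [1, 1]    - coshm(t * A) @ b),
      np.linalg.norm(Fh @ [1, -1]   - sinhm(t * A) @ b),
      np.linalg.norm(Ft @ [1, 1]    - cosm(t * A) @ b),
      np.linalg.norm(Ft @ [-1j, 1j] - sinm(t * A) @ b))
\end{verbatim}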
Obviously, since they share the same $B$ and $D(\tau)$,
we can combine the computation of
$\cosh(tA)b$ and $\sinh(tA)b$,
and $\cos(tA)b$ and $\sin(tA)b$, respectively,
without any additional cost.
Furthermore, it is also possible to combine the computation of all four matrix
functions by a single call to $\textbf{F}(D(\tau),A,B)$ with
$B = [b,b,b,b]/2$ and $D(\tau)=\diag[t,-t,\ensuremath{\mathrm{i}} t, -\ensuremath{\mathrm{i}} t]$.
If $A$ is a real matrix the computation of $\cos(tA)b$ and $\sin(tA)b$ can be
performed entirely in real arithmetic, as we now show.
We need the formula
\begin{equation}\label{exp2by2_small}
\exp\left(
\mybmatrix{0 & t \cr
-t & 0 \cr} \right)
= \mybmatrix{ \cos t & \sin t\cr
-\sin t & \cos t\cr}.
\end{equation}
\begin{lemma}\label{lem:real_case}
For $A\in\mathbb{R}^{{n \times n}}$, $b=b_\mathrm{r} + \ensuremath{\mathrm{i}} b_\mathrm{i} \in\mathbb{C}^{n}$, and
$t\in\mathbb{R}$, the vector
$f = f_\mathrm{r}+\ensuremath{\mathrm{i}} f_\mathrm{i} = \text{\rm\bf F}(D(\ensuremath{\mathrm{i}} t),A,b) \approx \ensuremath{\mathrm{e}}^{\ensuremath{\mathrm{i}} tA}b$
can be computed
in real arithmetic by
\begin{equation}\label{fcomplex}
\begin{bmatrix}f_\mathrm{r},&f_\mathrm{i}\end{bmatrix}=
\text{\rm\bf F}\!\left(D(\tau), A,
\begin{bmatrix} b_\mathrm{r}, b_\mathrm{i}\end{bmatrix}\right),
\quad\mbox{where} \quad\tau=\ensuremath{\mathrm{i}} t, \quad
D(\tau) = \begin{bmatrix} 0 & t\\-t & 0\end{bmatrix}.
\end{equation}
Furthermore, the resulting vectors $f_\mathrm{r}$ and $f_\mathrm{i}$ are approximations of, respectively,
\begin{align*}
f_\mathrm{r} = \cos(tA)b_\mathrm{r} - \sin(tA)b_\mathrm{i}, \qquad
f_\mathrm{i} = \sin(tA)b_\mathrm{r} + \cos(tA)b_\mathrm{i}.
\end{align*}
\end{lemma}
\begin{proof}
With $B = [b_\mathrm{r},b_\mathrm{i}]$ we have
\begin{align*}
g(B) &\approx \left(\mybmatrix{b_\mathrm{r} ,b_\mathrm{i}\cr }
+ t\mybmatrix{ \dfrac{\widetilde{A} b_\mathrm{r}}{s} , \dfrac{\widetilde{A} b_\mathrm{i}}{s}\cr}
\mybmatrix{ 0 & 1 \cr -1 & 0 \cr}
+ t^2\mybmatrix{ \dfrac{\widetilde{A}^2b_\mathrm{r}}{s^2 2!}
, \dfrac{\widetilde{A}^2b_\mathrm{i}}{s^2 2!}\cr }\right.\\
& \left.\qquad
+ t^3\mybmatrix{ \dfrac{\widetilde{A}^3b_\mathrm{r}}{s^3 3!}
, \dfrac{\widetilde{A}^3b_\mathrm{i}}{s^3 3!}\cr }
\mybmatrix{ 0 & 1 \cr -1 & 0 \cr}
+ \cdots\right)
\exp\left(
\mybmatrix{0 & t\mu/s \cr
-t\mu/s & 0 \cr} \right)
\end{align*}
and on collecting terms, applying \cref{exp2by2_small}
and the addition formulas \cite[Thm.~12.1]{high:FM},
and recalling that $\widetilde{A} = A - \mu I$, we find that
\begin{align*}
g(B) &\approx \mybmatrix{ \cos\left( \frac{t\widetilde{A}}{s} \right)b_\mathrm{r}
-\sin\left( \frac{t\widetilde{A}}{s} \right)b_\mathrm{i}, &
\sin\left( \frac{t\widetilde{A}}{s} \right)b_\mathrm{r}
+\cos\left( \frac{t\widetilde{A}}{s} \right)b_\mathrm{i} \cr}
\mybmatrix{ \cos \left(\frac{t\mu}{s}\right)
& \sin \left(\frac{t\mu}{s}\right)\cr
-\sin \left(\frac{t\mu}{s}\right)
& \cos \left(\frac{t\mu}{s}\right)\cr} \\
&= \mybmatrix{ \cos\left( \frac{tA}{s} \right) &
-\sin\left( \frac{tA}{s} \right) \cr
\sin\left( \frac{tA}{s} \right) &
\cos\left( \frac{tA}{s} \right) \cr}
\mybmatrix{ b_\mathrm{r} \cr b_\mathrm{i} \cr}
=: C \mybmatrix{ b_\mathrm{r} \cr b_\mathrm{i} \cr}.
\end{align*}
Hence, overall, using \cref{exp2by2_small} again,
$$
$\text{\rm\bf F}(D(\ensuremath{\mathrm{i}} t),A,b) \approx C^s \mybmatrix{ b_\mathrm{r} \cr b_\mathrm{i} \cr}
= \mybmatrix{ \cos (tA) & -\sin (tA) \cr \sin (tA) & \cos (tA) \cr}
\mybmatrix{ b_\mathrm{r} \cr b_\mathrm{i} \cr},
$$
as required.
\end{proof}
As a consequence of \cref{lem:real_case} we can compute,
with $D$ defined in \cref{fcomplex},
\begin{enumerate}
\item an approximation of $\cos(tA)b$ by
\begin{align*}
B=[b,0], \quad \tau=\ensuremath{\mathrm{i}} t,\quad
D(\tau)=\begin{bmatrix}0& t\\-t & 0\end{bmatrix},
\quad
\cos(tA)b=\textbf{F}(D(\tau),A,B)\begin{bmatrix}1\\0\end{bmatrix};
\end{align*}
\item an approximation of $\sin(tA)b$ by
\begin{align*}
B=[b,0], \quad \tau=\ensuremath{\mathrm{i}} t,\quad
D(\tau)=\begin{bmatrix}0& t\\-t & 0\end{bmatrix},
\quad
\sin(tA)b=\textbf{F}(D(\tau),A,B)\begin{bmatrix}0\\1\end{bmatrix}.
\end{align*}
\end{enumerate}
We compute the matrix exponential $J$ in \cref{alg:eta_line} of
\cref{alg:basic_alg}
by making use of \cref{exp2by2_small}.
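The same simplification can be repeated for the real-arithmetic variant of
\cref{lem:real_case}: replacing the diagonal matrix by the $2\times2$ rotation generator
from \cref{fcomplex} keeps every operation real. The sketch below (again ours and not the
authors' implementation, with $m$ and $s$ fixed by hand) checks the two selections just
listed.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm, cosm, sinm

def F_real(D, A, B, m=30, s=10):
    # same recurrence as before, but with a small real matrix D in place of D(tau)
    n = A.shape[0]
    mu = np.trace(A) / n
    At = A - mu * np.eye(n)
    J = expm(mu * D / s)                 # a 2x2 rotation matrix
    Fk = B.copy()
    for _ in range(s):
        Bk = Fk.copy()
        for j in range(1, m + 1):
            Bk = (At @ Bk) @ (D / (s * j))
            Fk = Fk + Bk
        Fk = Fk @ J
    return Fk

rng = np.random.default_rng(5)
n, t = 8, 0.9
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
D = np.array([[0.0, t], [-t, 0.0]])
G = F_real(D, A, np.column_stack([b, np.zeros(n)]))   # B = [b, 0]
print(np.linalg.norm(G[:, 0] - cosm(t * A) @ b),
      np.linalg.norm(G[:, 1] - sinm(t * A) @ b))      # all real arithmetic
\end{verbatim}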
We make three remarks.
\begin{remark}[Other cases.]
\Cref{alg:basic_alg} can also be used to compute exponentials at
different time steps and with the use of \cite[Thm.~2.1]{alhi11} it can be
used to compute linear combinations of $\varphi$ functions at different
time steps (see, e.g., \cite[sec.~10.7.4]{high:FM} for details of the
$\varphi$ functions). This in turn is useful for the implementation of
exponential integrators~\cite{ho10}.
The internal stages of an
exponential integrator often require the evaluation of a $\varphi$ function
at intermediate steps, e.g., $\varphi(c_k t A)b$ for $0<c_k\leq1$ and
$k\geq 1$. Although the new algorithm can be used in these situations it
might not be optimal for each of the $c_k$ values as the parameters $m_*$
and $s$ are chosen for the largest value of $t$ and might not be optimal
for an intermediate point. Nevertheless, the computation can be performed in
parallel for all the different values of $t$ and level-3 BLAS routines can
be used, which can speed up the process.
Furthermore, the algorithm could also be used to generate dense output,
in terms of the time step, as is sometimes desired for time integration.
\end{remark}
\begin{remark}\label{rem:expmvt}
We note that in \cite[Code Fragment 5.1, Alg.~5.2]{alhi11}
the authors also present an algorithm
to compute $\ensuremath{\mathrm{e}}^{t_k A}b$ on equally spaced grid points $t_k=t_0+hk$ with
$h=(t_q-t_0)/q$.
With that code we can compute $\cosh(A)b$ and $\sinh(A)b$
by setting $t_0=-t$, $t_q=t$, and $q=1$, so that
$b_1=\ensuremath{\mathrm{e}}^{t_0A}b = \ensuremath{\mathrm{e}}^{-tA}b$, $h=2t$,
and $b_2=\ensuremath{\mathrm{e}}^{hA}b_1 = \ensuremath{\mathrm{e}}^{tA}b$.
This is not only slower than our approach,
as the code now has to perform a larger time step and
compute the necessary steps consecutively and not in parallel,
but it can also cause instability.
In fact, for some of the matrices of \cref{eg:floatingpoint}
in \cref{sec:ne} we see a large error if we use
\cite[Alg.~5.2]{alhi11} as outlined above.
Furthermore, as we compute with $\pm \beta$ we can optimize the algorithm by
using level-3 BLAS routines and we can avoid complex arithmetic by our direct
approach.
\end{remark}
\begin{remark}[Block version]\label{rem:block}
As indicated in the introduction it is sometimes required to compute
the action of our four functions
not on a vector but on a tall, thin matrix $V\in\mathbb{C}^{{n \times n_0}}$.
It is possible to use \cref{alg:basic_alg} for this task.
One simply needs to repeat each $\tau_k$ value $n_0$ times and the matrix
$V$ needs to be repeated
$q$ times for each of the $\tau_k$ values
(this corresponds to replacing the vector $b$ by the matrix $V$ in the
definition of $B$).
This procedure can be formalized with the help of the Kronecker product
$X\otimes Y$.
We define the time matrix by $D(\tau)\otimes I_{n_0}$,
and the
postprocessing matrix $\tilde P$ by $P\otimes I_{n_0}$.
Furthermore, the matrix $B$ reads as $I_q\otimes V/2$.
For $V=[v_1, v_2]$ ($n_0=2$) the computation of $\cosh(tA)V$ becomes
\begin{align*}
B=[v_1,v_2,v_1,v_2]/2, \quad
\tilde{D}(\tau)=D(\tau)\otimes I_2 =
\diag(t, t,-t, -t)
\end{align*}
and results in
\begin{align*}
\cosh(tA)V= \textbf{F}(\tilde{D}(\tau),A,B)\begin{bmatrix}I_2\\I_2\end{bmatrix}.
\end{align*}
\end{remark}
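A small self-contained check of this Kronecker bookkeeping (ours; the exact exponential
action here merely stands in for \cref{alg:basic_alg}) for $n_0=2$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm, coshm

rng = np.random.default_rng(6)
n, n0, t = 6, 2, 0.9
A = rng.standard_normal((n, n))
V = rng.standard_normal((n, n0))

B = np.kron(np.ones((1, 2)), V) / 2              # [v1, v2, v1, v2] / 2
tau = np.kron(np.array([t, -t]), np.ones(n0))    # diagonal of D(tau) kron I_2
P = np.kron(np.ones((2, 1)), np.eye(n0))         # postprocessing matrix [I_2; I_2]

cols = np.column_stack([expm(tk * A) @ B[:, k] for k, tk in enumerate(tau)])
print(np.linalg.norm(cols @ P - coshm(t * A) @ V))   # close to zero
\end{verbatim}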
\section{Numerical experiments}\label{sec:ne}
Now we present some numerical experiments that
illustrate the behaviour of \cref{alg:basic_alg}.
All of the experiments were carried out in MATLAB R2015a (glnxa64)
on a Linux machine
and for time measurements only one processor is used.
We work with three tolerances in Algorithm~\ref{alg:basic_alg},
corresponding to half precision, single precision, and double precision,
respectively:
\begin{align*}
u_{\mathrm{half}} &= 2^{-11} \approx 4.9 \times 10^{-4}, \\
u_{\mathrm{single}} &= 2^{-24} \approx 6.0 \times 10^{-8}, \\
u_{\mathrm{double}} &= 2^{-53} \approx 1.1 \times 10^{-16}.
\end{align*}
All computations are in IEEE double precision arithmetic.
We use the implementations of the algorithms of \cite{alhi11} from
\url{https://github.com/higham/expmv}, which are named
\texttt{expmv}\ for \cite[Alg.~3.2]{alhi11} and
\texttt{expmv\_tspan}\ for \cite[Alg.~5.2]{alhi11}.
We also use the implementations \texttt{cosm}\ and \texttt{sinm}\ from
\url{https://github.com/sdrelton/cosm_sinm} of the algorithm\ of
\cite[Alg.~6.2]{ahr15} for computing the matrix sine and cosine;
the default option of using a Schur decomposition is chosen in the first
experiment, but no Schur decomposition is used in the second and third
experiments.
We note that we did not use the function \verb"cosmsinm" for a simultaneous
computation as we found it less accurate than \texttt{cosm}\ and \texttt{sinm}\ in
\cref{eg:floatingpoint}.
In order to compute $\cos(tA)b$ and $\sin(tA)b$ we use the
following methods.
\begin{enumerate}
\item
\texttt{trigmv}{} denotes \cref{alg:basic_alg} with
real or complex arithmetic (avoiding complex arithmetic when possible),
computing the two functions simultaneously.
\item
\texttt{trig\_expmv}{} denotes the use of \texttt{expmv}, in two forms.
For a real matrix \texttt{expmv}\ is called with
the pure imaginary step argument $\ensuremath{\mathrm{i}} t$,
making use of \cref{eq:trig_real}.
For a complex matrix \texttt{expmv}\ is called twice,
with step arguments $\ensuremath{\mathrm{i}} t$ and $-\ensuremath{\mathrm{i}} t$,
and \cref{eq:trig_funs} is used.
\item
\texttt{dense}\ denotes the use of \texttt{cosm}\ and \texttt{sinm}\ to compute the dense matrices
$\cos(tA)$ and $\sin(tA)$ before the
multiplication with $b$.
\item \texttt{trig\_block}{} denotes the use of formula \cref{exp2by2} with $y_0=0$ and
$y'_0=b$.
Therefore we need one extra matrix--vector product to compute $\sin(tA)b$.
In order to compute the exponential we use \texttt{expmv}{}.
\end{enumerate}
For the computation of $\cosh(tA)b$ and $\sinh(tA)b$ we use the
following methods.
\begin{enumerate}
\item \texttt{trighmv}{} denotes \cref{alg:basic_alg}, computing
the two functions simultaneously.
\item \texttt{trigh\_expmv}{} denotes the use of \texttt{expmv}\
called twice
with $\pm t$ as step arguments.
\item \texttt{expmv\_tspan}{} denotes \cite[Alg.~5.2]{alhi11} called with
$t_0=-t$, $q=1$, and $t_q=t$, as discussed in \cref{rem:expmvt}.
\item
\texttt{dense}\ denotes the use of \texttt{cosm}\ and \texttt{sinm}\ to compute
the dense matrices
$\cosh(tA)$ and $\sinh(tA)$
as $\cos(\ensuremath{\mathrm{i}} t A)$ and $-\ensuremath{\mathrm{i}}\sin(\ensuremath{\mathrm{i}} t A)$, respectively,
before the multiplication with $b$.
\item \texttt{trigh\_block}{} denotes the use of formula \cref{exp2by2} with $y_0=0$ and
$y'_0 = b$, where $\ensuremath{\mathrm{i}} A$ is substituted for $A$.
We need one extra matrix--vector product to compute $\sinh(tA)b$.
In order to compute the exponential we use \texttt{expmv}{}.
\end{enumerate}
All the methods except \texttt{dense}\ support tolerances
$u_{\mathrm{half}}$, $u_{\mathrm{single}}$, and $u_{\mathrm{double}}$, whereas \texttt{dense}\ is designed to deliver
double precision accuracy.
In all cases, when \cref{cf:ms} is called to compute the optimal
scaling and truncation degree we use $m_{\mathrm{max}}=55$ and
$p_\mathrm{max}=8$.
We compute relative errors in the 1-norm, $\normi{x-\widehat{x}}/\normi{x}$,
where $x = f(A)b$.
In \cref{eg:floatingpoint},
$\widehat{x}$ denotes a reference
solution computed with the Multiprecision Computing Toolbox
\cite{adva-mct} at 100-digit precision.
In \cref{eg.large,eg:schroedinger}
the matrices are too large for multiprecision
computations, so the reference solution $\widehat{x}$ is taken as that obtained via
$\texttt{cosm}$ or $\texttt{sinm}$.
\begin{eg}[Behavior for existing test sets]\label{eg:floatingpoint}
In this experiment we compare
\texttt{trigmv}{}, \texttt{trig\_expmv}{}, and \texttt{dense}.
We show only the results for $\cos$ and $\cosh$,
as the results for $\sin$ and $\sinh$ are very similar.
As test matrices we use Set 1-3 from \cite[sec.~6]{alhi09a},
with dimensions $n$ up to $50$.
We remove all matrices from our test sets where any of the considered
functions overflow;
the overflow also appears for the dense method considered and is due to
the result being too large to represent.
The elements of the vector $b$ are drawn from the standard normal
distribution and are the same for each matrix.
We compare the algorithms for tolerances $u_{\mathrm{half}}$, $u_{\mathrm{single}}$, and $u_{\mathrm{double}}$.
The relative errors are shown in \cref{fig:accuracy_cos},
with the test matrices ordered by decreasing condition number\
$\kappa_{\cos}$ of the matrix cosine.
The estimated condition number is computed by the \texttt{funm\_condest1}
function of the Matrix Function Toolbox~\cite{high-mft}. The required
Fr\'{e}chet derivative is computed with the $2 \times 2$ block
form~\cite[sec.~3.2]{high:FM}.
\begin{figure}
\includegraphics{accuracy_cos.pdf}
\caption{\label{fig:accuracy_cos}Relative error in $1$-norm for computing
$\cos(A)b$ with three algorithms with tolerances
$u_{\mathrm{half}}$ (blue), $u_{\mathrm{single}}$ (green), and $u_{\mathrm{double}}$ (orange).
The solid lines are the condition number multiplied by the tolerance.}
\end{figure}
\begin{figure}
\includegraphics{performance_cos.pdf}
\caption{\label{fig:performance_cos}
Same data as in \cref{fig:accuracy_cos} for $u_{\mathrm{double}}$
but presented as a performance profile.
For each method, $p$ is the proportion of problems in which the error is
within a factor of $\alpha$ of the smallest error over all methods.
}
\end{figure}
\begin{figure}
\includegraphics{accuracy_cosh.pdf}
\caption{\label{fig:accuracy_cosh}Relative error in $1$-norm for computing
$\cosh(A)b$ with tolerances
$u_{\mathrm{half}}$ (blue), $u_{\mathrm{single}}$ (green), and $u_{\mathrm{double}}$ (orange).
The solid lines are the condition number multiplied by the tolerance.}
\end{figure}
\begin{figure}
\includegraphics{performance_cosh.pdf}
\caption{\label{fig:performance_cosh}
Same data as in \cref{fig:accuracy_cosh} for $u_{\mathrm{double}}$
but presented as a performance profile.
For each method, $p$ is the proportion of problems in which the error is
within a factor of $\alpha$ of the smallest error over all methods.
}
\end{figure}
From the error plot in \cref{fig:accuracy_cos}
one can see that \texttt{trigmv}{} and \texttt{trig\_expmv}\ behave in a forward stable manner,
that is, the relative error is always within a modest multiple of the condition number\
of the problem times the tolerance,
and likewise for \texttt{dense}\ except for some mild instability on four problems.
We also show in \cref{fig:performance_cos}
a performance profile for the experiment with tolerance $u_{\mathrm{double}}$.
In the performance profile
the curve for a given method shows
the proportion of problems $p$ for which the
error is within a factor $\alpha$ of the smallest error over all
methods.
In particular, the value at $\alpha=1$ corresponds to the proportion
of problems
where the method performs best and for large values of $\alpha$ the performance
profile gives an idea of the reliability of the method.
The performance profile is computed with the code from
\cite[sec.~26.4]{hihi:MG3}
and we employ the idea of \cite{dihi13} to reduce the bias of relative
errors significantly less than the precision.
The performance profile suggests that the overall behavior of
\texttt{trigmv}\ and \texttt{trig\_expmv}\
is very similar.
For the computation of $\cosh$,
shown in \cref{fig:accuracy_cosh},
\texttt{expmv\_tspan}{} is clearly not a good choice for the computation.
This is related to the implementation of \texttt{expmv\_tspan}{}.
As the algorithm first computes $b_1=\ensuremath{\mathrm{e}}^{-A}b$ and
from this computes $b_2=\ensuremath{\mathrm{e}}^{2A}b_1$ the result is not always stable,
as discussed in \cref{rem:expmvt}.
We see that \texttt{trighmv}{} and \texttt{trigh\_expmv}{} behave in a forward stable manner
and have about the same accuracy for all three tolerances,
as is clear for double precision from the performance profile
in \cref{fig:performance_cosh}.
\end{eg}
\begin{eg}[Behavior for large matrices]\label{eg.large}
In this experiment we take a closer look at the behavior of
several algorithms for large (sparse) matrices.
For the computation of the trigonometric functions we compare
\texttt{trigmv}{} with \texttt{trig\_block}{} and \texttt{trig\_expmv}{}, which both rely on \texttt{expmv}{}.
For a real matrix \texttt{trig\_expmv}{} calls \texttt{expmv}{} with a
pure imaginary step argument and two calls are made for a complex matrix.
For the hyperbolic functions, we compare \texttt{trighmv}{}
with \texttt{trigh\_block}{} and \texttt{trigh\_expmv}{}.
This time \texttt{trigh\_expmv}{} always calls \texttt{expmv}{} twice
and \texttt{trigh\_block}{} calls \texttt{expmv}{} with a pure imaginary step argument.
When \texttt{expmv}{} is called several times the preprocessing step (\cref{cf:ms})
is only performed once.
We use the same matrices as in \cite[Example 9]{ckor15},
namely \texttt{orani676} and \texttt{bcspwr10}, which are obtained
from the University of Florida Sparse Matrix Collection \cite{dahu11}.
The matrix \texttt{orani676} is a nonsymmetric $2529\times 2529$ matrix
with $90158$ nonzero entries and
\texttt{bcspwr10} is a symmetric $5300\times 5300$ matrix with
$13571$ nonzero entries.
The matrix \texttt{triw} is \texttt{-gallery('triw',2000,4)},
which is a $2000\times 2000$ upper triangular matrix
with $-1$ in the main diagonal and $-4$ in the upper triangular part.
The matrix \texttt{triu} is an upper triangular matrix of dimension $2000$
with entries uniformly distributed on $[-0.5,0.5]$.
The $9801\times 9801$ matrix \texttt{L2}
is from a finite difference discretization (second order symmetric differences)
of the two-dimensional Laplacian in the unit square.
The $27000\times 27000$ complex matrix \texttt{S3D}
is from a finite difference discretization (second order symmetric differences)
of the
three-dimensional Schr\"odinger equation with harmonic potential in the unit cube.
The matrix \texttt{Trans1D} is a periodic, symmetric finite difference
discretization of the transport equation in the unit square with
dimension $1000$.
As vector $b$ we use $[1,\ldots,1]^\mathrm{T}$ for \texttt{orani676},
$[1,0,\ldots,0,1]^\mathrm{T}$ for \texttt{bcspwr10},
the discretization of $256 x^2 (1-x)^2 y^2 (1-y)^2$
for \texttt{L2},
the discretization of $4096x^2(1-x)^2y^2(1-y)^2z^2(1-z)^2$
for \texttt{S3D},
the discretization of $\exp(-100(x-0.5)^2)$ for \texttt{Trans1D},
and $v_i=\cos\,i$ for all other examples.
\begin{table}\centering\captionsetup{position=top}\footnotesize
\caption{\label{tab:large matrices} Behavior of the algorithms for large
(sparse) matrices, for tolerance $u_{\mathrm{double}}$.}
\subfloat[\label{tab:large matrices_trig}Results for the computation of
$\cos$ and $\sin$.]{
\centering\setlength{\tabcolsep}{4pt}
\begin{tabular}{rr|rl|rl|rl|l}
& & \multicolumn{2}{c|}{\texttt{trigmv}{}} & \multicolumn{2}{c|}{\texttt{trig\_expmv}{}} & \multicolumn{2}{c|}{\texttt{trig\_block}{}} & \texttt{dense}\\
& $t$ & $mv$ & Time & $mv$ & Time & $mv$ & Time & Time\\\hline
\tt orani676 & 100 &2200 & 2.3e-1 & 4164 & 3.1e-1 & 2599 & 9.5e-1 & 2.8e2 \\
\tt bcspwr10 & 10 &618 & 4.1e-2 & 1500 & 1.2e-1 & 1392 & 1.2e-1 & 2.6e2 \\
\tt triw & 10 &56740 & 5.7e1 & 113192 & 1.1e2 & 95389 & 1.2e2 & 1.9e1 \\
\tt triu & 40 &3936 & 4.0 & 7524 & 8.5 & 4585 & 5.2 & 1.4e1 \\
\tt L2 & 1/4 &107528 & 1.2e1 & 215320 & 1.9e1 & 257803 & 3.0e1 & 1.3e3 \\
\hline
\end{tabular}}
\subfloat[\label{tab:large matrices_trigh}Results for the computation of
$\cosh$ and $\sinh$.]{
\centering\setlength{\tabcolsep}{4pt}
\begin{tabular}{rr|rl|rl|rl|l}
& & \multicolumn{2}{c|}{\texttt{trighmv}{}} & \multicolumn{2}{c|}{\texttt{trigh\_expmv}{}} & \multicolumn{2}{c|}{\texttt{trigh\_block}{}}& \texttt{dense}\\
& $t$ & $mv$ & Time & $mv$ & Time & $mv$ & Time &Time \\\hline
\tt orani676 & 100 & 2202 & 2.2e-1 & 2202 & 2.7e-1 & 2619 & 9.5e-1 & 2.1e2\\
\tt bcspwr10 & 10 & 632 & 4.1e-2 & 806 & 5.4e-2 & 855 & 7.4e-2 & 5.4e2\\
\tt triw & 10 & 56478 & 5.7e1 & 57582 & 1.2e2 & 94499 & 1.1e2 & 1.2e2\\
\tt triu & 40 & 4042 & 4.1 & 4031 & 8.0 & 4689 & 5.4 & 2.9e1\\
\tt S3D & 1/2 & 15962 & 1.7e1 & 15934 & 2.1e1 & 32135 & 1.4e1 & 2.3e4\\
\tt Trans1D & 2 & 13086 & 2.0e-1 & 13039 & 2.4e-1 & 17551 & 2.1e-1 & 6.2 \\
\hline
\end{tabular}}
\end{table}
The results for computing $\cos(tA)b$ and $\sin(tA)b$ are shown in
\Cref{tab:large matrices_trig}, and those for
$\cosh(tA)b$ and $\sinh(tA)b$ in \Cref{tab:large matrices_trigh}.
The different algorithms are run with tolerance $u_{\mathrm{double}}$.
All the methods behave in a forward stable manner,
with one exception,
so we omit the errors in the table.
The exception is the \texttt{trigh\_block}{} method, which has an error about
$10^2$ times larger than the other methods for \texttt{Trans1D}.
For the different methods we list the number of
real matrix--vector products performed ($mv$),
as well as the overall time in seconds averaged over ten runs.
The tables also show the time the dense algorithm needed to compute the
reference solution (computing both functions simultaneously).
In \cref{tab:large matrices_trig} we can see
that \texttt{trigmv}{} always needs the fewest matrix--vector products and
that with the sole exception of \texttt{triw} it is always the fastest method.
We can also see that, as expected, \texttt{trig\_block}{} has higher
computational cost than \texttt{trigmv}{}.
The increase in matrix--vector products is most pronounced for
normal matrices (\texttt{bcspwr10} and
\texttt{L2}).
For the matrix \texttt{bcspwr10} we find
$s=7$, $mv=618$, and $mvd=44$ (matrix--vector products performed in the
preprocessing stage, in \cref{cf:ms}) for \texttt{trigmv}.
On the other hand, for \texttt{trig\_block}{} we find
$s=10$, $mv=696\cdot 2=1392$, and $mvd=328\cdot 2=656$.
This means that the preprocessing stage is more expensive as the block
matrix is nonnormal and more $\alpha_p$ values need to be computed.
We can also see that we need more scaling steps as we miss the opportunity to
reduce the norm.
In total this sums up to more than twice the number of matrix--vector
products.
The results of the experiment for the hyperbolic functions can be seen in
\cref{tab:large matrices_trigh}.
Again \texttt{trighmv}{} almost always needs fewer matrix--vector products than the
other methods
where this time \texttt{trigh\_expmv}{} is the closest competitor and
\texttt{trigh\_block}{} has a higher computational effort.
Even in the cases where \texttt{trigh\_expmv}{} needs the same number of
matrix--vector products or slightly fewer,
\texttt{trighmv}{} is still clearly faster.
This is due to the fact that \texttt{trigmv}\ employs level-3 BLAS.
Comparing the runtime of \texttt{trigmv}{} and \texttt{trighmv}{} with the \texttt{dense}{} algorithms
we can see that we potentially save a great deal of computation time.
The \texttt{triw} and \texttt{triu}
matrices are the only cases where there is not a speedup of
at least a factor of 10.
For the \texttt{triw} matrix, and to a lesser extent for the
\texttt{triu} matrix, the $\alpha_p$ values,
which help deal with the nonnormality of the matrix,
decay very slowly, and this hinders
the performance of the algorithms.
Nevertheless, in all the other cases we can see a clear speed advantage,
most significantly for \texttt{bcspwr10} where we have a speedup by a factor
6190.
\end{eg}
\begin{eg}[Schr\"odinger equation]\label{eg:schroedinger}
In this example we solve an evolution equation.
We consider the 3D Schr\"odinger equation with harmonic potential
\begin{align}\label{eq:schroedinger}
\partial_t u = \frac{\ensuremath{\mathrm{i}}}{2}\left(\Delta -
\frac12 \left(x^2 + y^2 + z^2\right)\right)u.
\end{align}
We use a finite difference discretization in space with $N^3$ points on the
domain $\Omega=[0,1]^3$ and
as initial value we use the discretization of
$4096x^2(1-x)^2y^2(1-y)^2z^2(1-z)^2$.
We obtain a discretization matrix $\ensuremath{\mathrm{i}} A$ of size $27000\times 27000$,
where $A$ is symmetric with all eigenvalues on the negative real axis.
We deliberately keep $\ensuremath{\mathrm{i}}$ separate and as a result
the solution of \cref{eq:schroedinger} can be interpreted as
\begin{align*}
u(t)=\ensuremath{\mathrm{e}}^{\ensuremath{\mathrm{i}} tA}u_0=\cos(t A)u_0 + \ensuremath{\mathrm{i}} \sin(t A)u_0.
\end{align*}
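A scaled-down version of this set-up (ours, with $N=8$ instead of $N=30$, and with
SciPy's exponential-action routine standing in for the algorithms compared below)
illustrates the construction of $A$ and the splitting of $u(t)$ into its trigonometric
parts:
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import expm_multiply

N, t = 8, 1.0
h = 1.0 / (N + 1)
x = np.linspace(h, 1 - h, N)
D2 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N)) / h**2   # 1D Laplacian
I = sp.identity(N)
lap = (sp.kron(sp.kron(D2, I), I) + sp.kron(sp.kron(I, D2), I)
       + sp.kron(sp.kron(I, I), D2))
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
pot = sp.diags((X**2 + Y**2 + Z**2).ravel())
A = 0.5 * (lap - 0.5 * pot)                       # real symmetric; u_t = i A u
u0 = (4096 * (X * (1 - X) * Y * (1 - Y) * Z * (1 - Z)) ** 2).ravel()

u = expm_multiply(1j * t * A.tocsc(), u0.astype(complex))
cos_part, sin_part = u.real, u.imag               # cos(tA)u0 and sin(tA)u0
print(np.linalg.norm(u), np.linalg.norm(u0))      # equal: the evolution is unitary
\end{verbatim}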
\begin{table}\centering\captionsetup{position=top}\footnotesize
\caption{\label{tab:schroedinger}Results for the solution of the
Schr\"odinger equation with $N=30$.
We show the number of matrix--vector products performed,
the relative error in the $1$-norm, and the CPU time.
}
\begin{tabular}{l|c|c|c|c|c|l}
& \multicolumn{3}{c|}{$\mathrm{tol}=u_{\mathrm{single}}$} & \multicolumn{3}{c}{$\mathrm{tol}=u_{\mathrm{double}}$}\\
& $mv$ & rel.~err & Time & $mv$ & rel.~err & Time \\\hline
\texttt{trigmv}{} & 11034 & 1.3e-7 & 3.9 & 15846 & 2.7e-11 & 5.6 \\
\texttt{trig\_expmv}{} & 21952 & 1.3e-7 & 6.2 & 31516 & 2.7e-11 & 8.8 \\
\texttt{trig\_block}{} & 15883 & 5.2e-8 & 7.1 & 32023 & 1.1e-11 & 1.4e1\\
\texttt{expleja}{} & 11180 & 8.0e-9 & 4.3 & 17348 & 1.5e-11 & 6.6 \\
\texttt{dense}& - & - & - & - & - & 2.2e4
\end{tabular}
\end{table}
\Cref{tab:schroedinger} reports the results for the tolerances
$u_{\mathrm{single}}$ and $u_{\mathrm{double}}$,
for our new algorithm \texttt{trigmv}{}, \texttt{trig\_expmv}{}, \texttt{trig\_block}{}, and
\texttt{expleja}{} (the method from \cite{ckor15} called in the same fashion as
\texttt{trig\_expmv}{}).
The table shows the number of matrix--vector products performed,
the relative error, and the CPU time in seconds.
We see that the four methods achieve roughly the same accuracy.
We also see that \texttt{trigmv}{} requires
significantly fewer matrix--vector products than \texttt{trig\_expmv}{} and
\texttt{trig\_block}{}.
On the other hand, even though \texttt{expleja}{} is a close competitor in terms of
matrix--vector products performed the overall CPU time is higher
than for \texttt{trigmv}{}.
This is due to the fact that \texttt{trigmv}{} avoids complex arithmetic and
employs level-3 BLAS.
Also note that \texttt{trigmv}{} needs less storage than \texttt{expleja}{} as for the latter
the matrix needs to be complex.
Again we can see that the dense method needs roughly 1000 times longer for
the computation than the other algorithms.
\end{eg}
\section{Concluding remarks}\label{sec.conc}
We have developed the first algorithm\ for computing the actions of the matrix
functions $\cos A$, $\sin A$, $\cosh A$, and $\sinh A$.
Our new algorithm, \cref{alg:basic_alg},
can evaluate the individual actions or the actions of any of the
functions simultaneously.
The algorithm\ builds on the framework of the $\ensuremath{\mathrm{e}}^Ab$ algorithm\ \texttt{expmv}\ of
Al-Mohy and Higham \cite{alhi11},
inheriting its backward stability
with respect to truncation errors,
its exclusive use of matrix--vector products
(or matrix--matrix products in our modification),
and its features for countering the effects of nonnormality.
For real $A$, $\cos(A)b$ and $\sin(A)b$ are computed entirely
in real arithmetic.
As a result of these features and its careful reuse of information,
\cref{alg:basic_alg} is more efficient than alternatives
that make multiple calls to \texttt{expmv},
as our experiments demonstrate.
Our MATLAB codes are available at \url{https://bitbucket.org/kandolfp/trigmv}.
\section*{Acknowledgement}
The computational results presented have been achieved (in part)
using the HPC infrastructure LEO of the University of Innsbruck.
We thank Awad H.~Al-Mohy for his comments on an early version of
the manuscript.
We thank the referees for their constructive remarks which helped us to improve
the presentation of this paper.
\newpage
\bibliographystyle{siamplain}
| {
"timestamp": "2017-05-02T02:02:39",
"yymm": "1607",
"arxiv_id": "1607.04012",
"language": "en",
"url": "https://arxiv.org/abs/1607.04012",
"abstract": "We derive a new algorithm for computing the action $f(A)V$ of the cosine, sine, hyperbolic cosine, and hyperbolic sine of a matrix $A$ on a matrix $V$, without first computing $f(A)$. The algorithm can compute $\\cos(A)V$ and $\\sin(A)V$ simultaneously, and likewise for $\\cosh(A)V$ and $\\sinh(A)V$, and it uses only real arithmetic when $A$ is real. The algorithm exploits an existing algorithm \\texttt{expmv} of Al-Mohy and Higham for $\\mathrm{e}^AV$ and its underlying backward error analysis. Our experiments show that the new algorithm performs in a forward stable manner and is generally significantly faster than alternatives based on multiple invocations of \\texttt{expmv} through formulas such as $\\cos(A)V = (\\mathrm{e}^{\\mathrm{i}A}V + \\mathrm{e}^{\\mathrm{-i}A}V)/2$.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Computing the Action of Trigonometric and Hyperbolic Matrix Functions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9802808765013517,
"lm_q2_score": 0.7217432182679957,
"lm_q1q2_score": 0.7075110746126572
} |
https://arxiv.org/abs/solv-int/9706007 | On the Liouville transformation and exactly-solvable Schrodinger equations | The present article discusses the connection between exactly-solvable Schrodinger equations and the Liouville transformation. This transformation yields a large class of exactly-solvable potentials, including the exactly-solvable potentials introduced by Natanzon. As well, this class is shown to contain two new families of exactly solvable potentials. | \section{Introduction}
The study of exactly solvable Schr\"odinger equations dates back to
the very beginnings of quantum mechanics. As examples one can cite
the harmonic oscillator, the Coulomb, the Morse \cite{Morse},
P\"oschl-Teller \cite{PoschlTeller}, Eckart \cite{Eckart}, and the
Manning-Rosen \cite{ManningRosen} potentials. One can argue that in
each of these cases the exact-solvability comes about because the
Schr\"odinger equations in question can be transformed by a gauge
transformation and by a change of variables into either the
hypergeometric or the confluent hypergeometric equation. To be more
precise, in each of the above cases there exists a gauge factor
$\sigma(z;E)$, which depends on the energy parameter $E$, and a change
of variables $z=z(r)$, which does not, such that solutions to the
corresponding Schr\"odinger equation,
\begin{equation}
\label{schrod.eqn}
-\psi''(r;E) + U(r)\psi(r;E) = E\psi(r;E),
\end{equation}
are of the form
\begin{equation}
\label{gaugexform.eqn}
\psi(r;E) = \exp[\sigma(z(r);E)]\,\phi(z(r);E),
\end{equation}
where $\phi(z;E)$ is either $F(\alpha,\beta;\gamma;z)$, the
Gauss hypergeometric function, or $\Phi(\alpha;\gamma;z)$, the
confluent hypergeometric function, and where the parameters $\alpha$,
$\beta$, $\gamma$ are themselves functions of $E$.
The just mentioned types of special functions
are well understood, and as a consequence one can explicitly
calculate the bound state and scattering information for the
corresponding potentials. In light of these remarks the following
question is of interest.
\begin{problem}
\label{prob:findpot}
Given a collection of functions, ${\mathcal F}=\{\phi(z)\}$, find all possible
potentials, $U(r)$, such that there exist an $E$-dependent gauge
factor, $\sigma(z;E)$, and an $E$-independent change of variables,
$z(r)$, such that the solutions of equation \eqref{schrod.eqn} are of
the form shown in \eqref{gaugexform.eqn}.
\end{problem}
For the cases of hypergeometric and confluent hypergeometric
functions, Problem \ref{prob:findpot} was solved by Natanzon in
\cite{natanzon1}. The corresponding classes of exactly-solvable
potentials have come to be known as Natanzon's hypergeometric and
confluent hypergeometric potentials, and have been the subject of some
discussion in the literature \cite{ginocchio} \cite{cordero93}
\cite{wu89}. The purpose of the present article is to review
Natanzon's approach and then to enlarge Natanzon's class of
exactly-solvable potentials by allowing $\phi$ to come from a larger
class of special functions, namely the solutions of the following
class of differential equations:
\begin{equation}
\label{eq:algform}
A(z)\phi''(z) + B(z) \phi'(z) + C\phi(z) = 0,
\end{equation}
where $A(z)$ is a non-zero real polynomial of degree 2 or less, $B(z)$
is a real polynomial of degree 1 or less, and $C$ is a real constant.
Prior to Natanzon, Problem \ref{prob:findpot} was considered by Bose
in \cite{bose}, and by several other authors \cite{Manning}
\cite{BhattSud}. Bose's paper is noteworthy because it introduced the
approach that was followed by Natanzon in his classification. This
approach relies on two techniques: a certain canonical form for
linear, second-order differential operators, and the Liouville
transformation. The Liouville transformation will be described in
Section \ref{sect:liouville} and the Bose-Natanzon approach in Section
\ref{sect:natanzon}. The solution of Problem \ref{prob:findpot} for
the case where ${\mathcal F}$ is the set of solutions of equation
\eqref{eq:algform} is given in Section \ref{sect:genpot}. The
resulting collection of potentials includes Natanzon's hypergeometric
and confluent hypergeometric potentials, as well as two new classes of
exactly-solvable potentials. These new potentials will be discussed
in Section \ref{sect:newpot}.
\section{The Liouville Transformation}
\label{sect:liouville}
Consider a linear, second-order differential equation
\begin{equation}
\label{eq:sophi}
a(z)\phi''(z) + b(z)\phi'(z) + c(z)\phi(z) = 0.
\end{equation}
Dividing through by $a(z)$
and making the gauge transformation
\begin{equation}
\hat{\phi}(z) = \exp\left(\int^z \frac{b(t)}{2a(t)} dt\right)\phi(z)
\end{equation}
changes the equation into the following self-adjoint, canonical form
\begin{equation}
\label{eq:bosecanform}
\hat{\phi}''(z) + I(z) \hat{\phi}(z) = 0,
\end{equation}
where the potential term is given by
\begin{equation}
\label{eq:boseinv}
I = \frac{1}{4a^2}\,(4ac - 2ab'+2ba' - b^2).
\end{equation}
Clearly, $I(z)$ is an invariant of equation \eqref{eq:sophi} with
respect to gauge transformations and multiplication by functions, and
this is why equation \eqref{eq:bosecanform} is being called a
canonical form. Henceforth, $I(z)$ will be called the Bose invariant
of equation \eqref{eq:sophi}.
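As a small illustration of \eqref{eq:boseinv} (a SymPy computation of ours, not part of
the original argument), one may compute the Bose invariant of the Gauss hypergeometric
equation; the numerator that appears is the degree-two polynomial exploited in Section
\ref{sect:natanzon}.
\begin{verbatim}
import sympy as sp

z, al, be, ga = sp.symbols("z alpha beta gamma")
a = z * (1 - z)                      # hypergeometric equation:
b = ga - (1 + al + be) * z           #   a phi'' + b phi' + c phi = 0
c = -al * be
I = (4*a*c - 2*a*sp.diff(b, z) + 2*b*sp.diff(a, z) - b**2) / (4*a**2)
T = sp.expand(sp.simplify(I * 4 * z**2 * (1 - z)**2))
print(sp.Poly(T, z).all_coeffs())    # quadratic in z; coefficients in alpha, beta, gamma
\end{verbatim}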
A change of the independent variable, say $z=z(r)$, will transform
equation \eqref{eq:bosecanform} into
$$
[z'(r)]^{-2} \tilde{\phi}''(r) - \frac{z''(r)}{[z'(r)]^3} \tilde{\phi}'(r) + I(z(r))
\tilde{\phi}(r) = 0,$$
where $\tilde{\phi}(r) = \hat{\phi}(z(r))$. The corresponding
canonical equation is
\begin{equation}
\label{eq:boseresult}
\psi''(r) + J(r) \psi(r) = 0.
\end{equation}
where
\begin{gather}
\psi(r) = [z'(r)]^{-\frac{1}{2}} \tilde{\phi}(r),\\
\label{eq:potrel}
J(r) = [z'(r)]^2 I(z(r)) + \frac{1}{2} \{ z , r \},
\end{gather}
and where the curly brackets term denotes the Schwarzian derivative of
$z$ with respect to $r$, namely
$$
\{ z , r \} = \left[\frac{z''(r)}{z'(r)}\right]' - \frac{1}{2} \left[
\frac{z''(r)}{z'(r)}\right]^2.
$$
The above process of going from one self-adjoint equation to another
by means of a change of variables has been named the Liouville
transformation in \cite{folver}, and the Liouville-Green
transformation in \cite{zwillinger}. The Liouville transformation
arises naturally in the context of WKB approximation (see Chapter 6 of
\cite{folver}), and also underlies the following classical theorem due
to Schwarz (see \cite{hille}, Theorem 10.1.1 or \cite{polver}, Theorem
6.28.)
\begin{theorem}
The general solution to the Schwarzian equation
$$\{z,r\} = 2 J(r)$$
has the form $z(r)=\psi_2(r)/\psi_1(r)$, where
$\psi_2(r)$ and $\psi_1(r)$ are two linearly independent, but
otherwise arbitrary solutions of equation \eqref{eq:boseresult}.
\end{theorem}
\noindent
In particular, this theorem implies that for every potential $J(r)$,
there exists a change of variables $z(r)$ such that the corresponding
Liouville transformation takes the equation $\phi''(z) = 0$ to
equation \eqref{eq:boseresult}. Therefore, one can relate any two
equations of form \eqref{eq:boseresult} by a Liouville transformation.
It is for this reason that Problem \ref{prob:findpot} must be
formulated with the condition that $z(r)$ not depend on the energy
parameter. Without this restriction the problem would be
uninteresting; one would get a criterion that would be satisfied by
all possible potentials.
\section{The Bose-Natanzon approach}
\label{sect:natanzon}
The approach in question rests on the following reformulation of
Problem \ref{prob:findpot}.
\begin{probnn}
Given a collection of functions, ${\mathcal F}=\{\phi(z)\}$, find all
possible $I_1(z)\geq 0$ and $I_0(z)$ such that for some
$E$-dependent gauge factor, $\sigma(z;E)$ the solutions of
\begin{equation}
\label{eq:schrod1}
\hat{\phi}''(z;E) + \left[ I_1(z)E+I_0(z)\right]\hat{\phi}(z;E) = 0,
\end{equation}
are of the form
$$\hat{\phi}(z;E) = \exp(\sigma(z;E))\, \phi(z;E).$$
\end{probnn}
Indeed, suppose that $I_1$ and $I_0$ satisfy the above set of requirements.
Let $z(r)$ be a solution of
\begin{equation}
\label{eq:xvarrel}
z'(r) = \left( I_1(z)\right)^{-\frac{1}{2}}.
\end{equation}
From formula \eqref{eq:potrel} it follows that a Liouville
transformation of equation \eqref{eq:schrod1} based on the change of
variables $z=z(r)$ yields an equation with potential term $E-U(r)$
where
\begin{equation}
\label{eq:upotform}
-U(r) = \frac{I_0(z)}{I_1(z)} +
\frac{-4\,I_1(z)I_1''(z)+5\,(I_1'(z))^2}{16\,(I_1(z))^3}.
\end{equation}
Furthermore, the corresponding eigenfunctions will have the form
$$
\psi(r;E) = \left( I_1(z) \right)^{\frac{1}{4}}
\,\hat{\phi}(z;E).
$$
Therefore $U(r)$ satisfies the criterion imposed by Problem
\ref{prob:findpot}. One can also reverse the above argument to show
that given a $U(r)$ demanded by Problem \ref{prob:findpot}, one can
produce an $I_1$ and an $I_0$ that satisfy the criterion of Problem
\ref{prob:findpot}A. In other words, the two formulations are
equivalent.
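A concrete sanity check of the Schwarzian term in \eqref{eq:upotform} (again a SymPy
computation of ours): take $I_1(z)=1/(4z^2(1-z)^2)$, for which \eqref{eq:xvarrel} reads
$z'(r)=2z(1-z)$ and is solved by the logistic function $z(r)=1/(1+e^{-2r})$.
\begin{verbatim}
import sympy as sp

r, zs = sp.symbols("r z")
z_of_r = 1 / (1 + sp.exp(-2 * r))          # solves z'(r) = 2 z (1 - z)
I1 = 1 / (4 * zs**2 * (1 - zs)**2)         # so that z'(r) = I1(z)^(-1/2)

zp = sp.diff(z_of_r, r)
w = sp.diff(zp, r) / zp
half_schwarzian = sp.simplify(sp.diff(w, r) - sp.Rational(1, 2) * w**2) / 2

rhs = (-4 * I1 * sp.diff(I1, zs, 2) + 5 * sp.diff(I1, zs)**2) / (16 * I1**3)
print(sp.simplify(half_schwarzian - rhs.subs(zs, z_of_r)))   # 0, both sides equal -1
\end{verbatim}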
The Bose invariant for the hypergeometric equation,
$$z(1-z)\phi''(z) + (\gamma-(1+\alpha+\beta)z)\,\phi'(z) -
\alpha\beta\,\phi(z)=0, $$
is given by
$$I(z) = \frac{T(z)}{4z^2(1-z)^2},$$
where
$$T(z) = (1-(\alpha-\beta)^2)z^2 +
(2\gamma(\alpha+\beta-1)-4\alpha\beta)z+\gamma(2-\gamma)
$$
Note that every polynomial, $T(z)$, of degree 2 or less can be
obtained from some choice of $\alpha$, $\beta$, $\gamma$. Therefore,
in order to solve Problem \ref{prob:findpot}A one must determine all
possible $I_1(z)$ and $I_0(z)$ such that for all $E$ there exists a
$T(z;E)$ of degree two or less in $z$ such that
$$I_1(z)E+I_0(z) = \frac{T(z;E)}{4z^2(1-z)^2}.$$
It is clear that this
condition is satisfied if and only if $T(z;E) = R(z)E + S(z)$ where
$R(z)$ and $S(z)$ are polynomials of degree 2 or less, and such that
$R(z)\geq0$ in the domain of interest. The determining relation for
$z(r)$ follows from \eqref{eq:xvarrel}; it is
$$
z'(r) = \frac{2z(1-z)}{\sqrt{R(z)}}.
$$
Setting $R(z)=r_2z^2+r_1z+r_0$, calculating $\{z,r\}$ and plugging the
result into \eqref{eq:potrel} one obtains the formula for Natanzon's
hypergeometric potentials:
{\smaller[2]
\begin{equation}
\label{eq:natanzonpot}
U= \frac{-S(z)+1}{R(z)} +
\left( \frac{r_1-2(r_2+r_1)z}{z(1-z)} - \frac{5}{4}\frac{(r_1^2-4
r_2r_0)}{R(z)} + r_2 \right) \frac{z^2(1-z)^2}{R(z)^2}
\end{equation}}
\noindent
These potentials describe the solution of Problem \ref{prob:findpot}
for the case where ${\mathcal F}$ is the set of hypergeometric functions.
It is well known that one can transform hypergeometric functions into
confluent hypergeometric ones by a certain limit process. Natanzon
obtained his confluent hypergeometric potentials by applying this
limit process to his hypergeometric potentials. The resulting family
of potentials can also be considered as a solution of Problem 1, but
the corresponding ${\mathcal F}$ is not the set of confluent hypergeometric
functions, but rather the set of {\em scaled} confluent hypergeometric
functions; namely $\phi(z)=\Phi(\alpha;\gamma;\omega z)$, where
$\omega$ is an extra scaling parameter. These functions satisfy the
following scaled version of the confluent hypergeometric equation
$$z\phi''(z) + (\gamma-\omega z)\,\phi'(z)-\omega\alpha\phi(z)=0.$$
The corresponding Bose invariant is
$$I(z) = \frac{-\omega^2 z^2 +
2\omega(\gamma-2\alpha)z+\gamma(2-\gamma)}{4z^2},
$$
where again every possible second degree polynomial can occur in the
numerator as one varies $\alpha$, $\gamma$, $\omega$. Hence, by the
same reasoning as above, the criterion of Problem
\ref{prob:findpot}A will be satisfied if and only if
$$I_1(z)E+I_0(z) = \frac{R(z)E + S(z)}{4z^2},$$
where $R(z)$ and $S(z)$ are polynomials of degree 2 or less,
such that $R(z)\geq0$ in the domain of interest.
The determining relation
for $z(r)$ follows from \eqref{eq:xvarrel}; it is
$$
z'(r) = \frac{2z}{\sqrt{R(z)}}.
$$
Setting $R(z)=r_2z^2+r_1z+r_0$, calculating $\{z,r\}$ and plugging the
result into \eqref{eq:potrel} one obtains the formula for Natanzon's
confluent hypergeometric potentials:
\begin{equation}
\label{eq:natconfpot}
U= \frac{-S(z)+1}{R(z)} +
\left( \frac{r_1}{z} - \frac{5}{4}\frac{(r_1^2-4
r_2r_0)}{R(z)} - r_2 \right) \frac{z^2}{R(z)^2}
\end{equation}
\section{Generalized Natanzon potentials}
\label{sect:genpot}
The present section is devoted to the solution of Problem
\ref{prob:findpot} for the case where ${\mathcal F}$ is the set of solutions,
$\phi(z)$, of equations of type \eqref{eq:algform}. The Bose
invariant for equation \eqref{eq:algform} is given by
$$
I(z)= \frac{T(z)}{ A(z)^2},
$$
where $T(z)$ is a polynomial of degree 2 or less determined by
quadratic combinations of the coefficients of $A(z)$, $B(z)$, and $C$.
The exact formula for $T(z)$ is not important; it can be readily
recovered from equation \eqref{eq:boseinv}. What is significant, is
that for a fixed $A(z)$, one can obtain every possible $T(z)$ of
degree 2 or less, from some choice of $B(z)$ and $C$. Consequently,
using the reformulation given by Problem \ref{prob:findpot}A, one must
seek all possible $I_1(z)$ and $I_0(z)$ such that for all values of
$E$, there exist $T(z;E)$ and $A(z;E)$ both of degree 2 or less, such
that
\begin{equation}
\label{eq:tacrit}
I_1(z)E+I_0(z) = \frac{T(z;E)}{ A(z;E)^2}.
\end{equation}
\begin{lemma}
\label{lem:iform}
Suppose that for all $E$ there exist $T(z;E)$ and $A(z;E)$ such that
relation \eqref{eq:tacrit} holds. Then, there exist $E$-independent
polynomials $A(z)$, $R(z)$, $S(z)$ of degree two or less such that
$I_1$ = $R/A^2$ and $I_0=S/A^2$.
\end{lemma}
\begin{proof}
Setting $E=0$ in \eqref{eq:tacrit}, one infers that $I_0=T_0/A_0^2$
where the degrees of $T_0$ and $A_0$ are two or less. Similarly by
setting $E=1$ one infers that $I_1 = T_1/A_1^2 - T_0/A_0^2$ where
the degrees of $T_1$ and $A_1$ are two or less. Thus, relation
\eqref{eq:tacrit} may be rewritten as
\begin{equation}
\label{eq:tacrit1}
\frac{ET_1}{A_1^2} + \frac{(1-E)T_0}{A_0^2} =
\frac{E T_1 A_0^2 + (1-E) T_0 A_1^2}{A_0^2 A_1^2} =
\frac{T(z;E)}{A(z;E)^2}.
\end{equation}
Given rational functions $P_0/Q_0$ and $P_1/Q_1$ where the
numerators and denominators are relatively prime polynomials, it's
easy to show that the denominator of the reduced form of
$$
\frac{P_0}{Q_0} + \lambda \frac{P_1}{Q_1},\quad
\lambda\in{\mathbb C}$$
is the least common multiple of $Q_0$ and $Q_1$
for all but a finite number of $\lambda$ values. This observation
implies that the least common multiple of $A_1$ and $A_0$ must have
degree less than or equal to $2$.
The rest of the proof will be done by cases, based on the degree of
$A_1$ and $A_0$. If both $A_0$ and $A_1$ are constants then there
is nothing to prove.
Next, consider the case where one of $A_0$ or $A_1$ is a constant,
but the other one isn't. Without loss of generality suppose it is
$A_1$ that is constant. Then $T_1$ must be constant also, for
otherwise the linear combinations of $T_0A_1^2$ and $T_1 A_0^2$ would not
always yield polynomials of degree 2 or less. Therefore the lemma
is true for this case also.
Suppose next that both $A_1$ and $A_0$ have degree 1. If $A_1$ and
$A_0$ have the same root, then there is nothing to prove. If $A_1$
and $A_0$ have different roots, then both $T_1$ and $T_0$ must be
constants, because otherwise linear combinations of $T_0A_1^2$ and
$T_1 A_0^2$ would not always yield polynomials of degree 2 or less.
Hence one can write
$$I_1=\frac{T_1A_0^2}{A_0^2A_1^2},\qquad I_0=
\frac{T_0A_1^2}{A_0^2A_1^2},
$$
and this proves the lemma for the case under consideration.
Finally, suppose that one or both of $A_1$ and $A_0$ is second
degree. Without loss of generality assume that $A_1$ has degree 2.
Consequently, $A_1$ must be the least common multiple of $A_0$ and
$A_1$, i.e. $A_0$ must be a factor of $A_1$. Now $A_0$ cannot be a
constant, because generically, the degree of linear combinations of
$T_1A_0^2$ and $T_0 A_1^2$ would be greater than $2$. If the degree
of $A_0$ is $2$, then there is nothing to prove. The last
possibility is that $A_1=\Delta A_0$, where both $A_0$ and $\Delta$
have degree $1$. In this case
$$I_1 E + I_0 = \frac{E T_1+(1-E)\Delta^2 T_0}{A_1^2},$$
and hence $T_0$ must be a constant. But one can therefore write
$$I_0 = \frac{T_0\Delta^2}{A_1^2},$$
i.e. the lemma is also true for this last case.
\end{proof}
Using the above Lemma, as well as the formulas \eqref{eq:xvarrel} and
\eqref{eq:upotform} one can now give the solution of Problem
\ref{prob:findpot} for the case where ${\mathcal F}$ is the set of solutions to
equation \eqref{eq:algform}. The desired potentials have the form
{\smaller[2]
\begin{equation}
\label{eq:genpot}
U(r) = \frac{-S(z)+{\mathcal D}(A)/4}{R(z)} +
\left( -\frac{3 R''(z)}{2} - \frac{5}{4}\frac{{\mathcal D}(R)}{R(z)} +
\frac{R'(z) A'(z)}{A(z)}\right) \, \frac{A(z)^2}{4R(z)^2},
\end{equation}
}
where $A(z)$, $R(z)$, $S(z)$ are polynomials of degree two or less,
where ${\mathcal D}$ denotes the discriminant operator, and where $z(r)$ is a
solution of
\begin{equation}
\label{eq:genxvar}
z'(r) = \frac{A(z)}{\sqrt{R(z)}}.
\end{equation}
Henceforth this class of
potentials will be referred to as the generalized Natanzon potentials.
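As a consistency check on \eqref{eq:genpot} and \eqref{eq:genxvar}, note that the hypergeometric Bose invariant above has denominator $(2z(1-z))^2$, so the relevant choice is $A(z)=2z(1-z)$; with this normalization \eqref{eq:genxvar} becomes exactly $z'=2z(1-z)/\sqrt{R}$, and a short symbolic computation (sketched below in SymPy) confirms that \eqref{eq:genpot} reproduces Natanzon's formula \eqref{eq:natanzonpot}.
\begin{verbatim}
import sympy as sp

z, r0, r1, r2, s0, s1, s2 = sp.symbols('z r0 r1 r2 s0 s1 s2')

def disc(p):
    """Discriminant b^2 - 4ac of a polynomial of degree <= 2 in z."""
    q = sp.Poly(p, z)
    return q.nth(1)**2 - 4*q.nth(2)*q.nth(0)

def genpot(A, R, S):
    """Generalized Natanzon potential, eq. (genpot), as a function of z."""
    return (-S + disc(A)/4)/R + \
           (-3*sp.diff(R, z, 2)/2 - sp.Rational(5, 4)*disc(R)/R
            + sp.diff(R, z)*sp.diff(A, z)/A) * A**2/(4*R**2)

R = r2*z**2 + r1*z + r0
S = s2*z**2 + s1*z + s0

# Natanzon's hypergeometric potential, eq. (natanzonpot):
U_nat = (-S + 1)/R + ((r1 - 2*(r2 + r1)*z)/(z*(1 - z))
                      - sp.Rational(5, 4)*(r1**2 - 4*r2*r0)/R
                      + r2) * z**2*(1 - z)**2/R**2

U_gen = genpot(2*z*(1 - z), R, S)   # Bose invariant T/(2z(1-z))^2, so A = 2z(1-z)
print(sp.cancel(U_gen - U_nat))     # 0: eq. (genpot) specializes to eq. (natanzonpot)
\end{verbatim}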
\section{New exactly-solvable potentials}
\label{sect:newpot}
Note that the presentation of the generalized Natanzon potentials
given by equations \eqref{eq:genpot} and \eqref{eq:genxvar} is
invariant under affine substitutions, $z\mapsto az+b$. Consequently,
no generality will be lost if one restricts $A(z)$ to one of the
following $5$ possibilities: 1, $z$, $z^2$, $z(z-1)$, $z^2+1$. The
second and the fourth case yield, respectively, the Natanzon confluent
hypergeometric, and the Natanzon hypergeometric potentials. If
$A(z)=z^2$, then the substitution $z\mapsto 1/z$ transforms
\eqref{eq:genpot} and \eqref{eq:genxvar} into the corresponding forms
for the case of $A(z)=z$. Therefore, if $A(z)=z^2$, one again obtains
the Natanzon confluent hypergeometric potentials. With a bit of work
one can check that at the level of solutions to the respective forms
of equation \eqref{eq:algform}, this transformation corresponds to the
well-known identity (see chapter 6.6 of \cite{bateman}):
{\smaller[1]
$$
{}_2F_0(\alpha,\alpha+1-\gamma; 1/z) = z^\alpha
\Psi(\alpha,\gamma;z).
$$}
where $\Psi$ is the confluent hypergeometric
function of the second kind.
The two remaining cases, $A(z)=1$ and $A(z)=1+z^2$, yield new families
of exactly solvable potentials. These will be referred to,
respectively, as case 1, and case 5 potentials, and will now be
examined in some detail. In each case it will be convenient to
rewrite equation \eqref{eq:algform} using a certain choice of adapted
parameters. These adapted parameters will be denoted by Greek
letters, and the resulting equation will be referred to as the primary
equation.
The solutions of the primary equations can be given in terms of
hypergeometric functions, and have a natural dependence on the adapted
parameters. The actual potential depends on a choice of $R(z)$. The
potential parameters --- these will be denoted using lower case Latin
letters --- and the energy parameter, $E$, will turn out to be related
to the adapted parameters by polynomial relations. It is important to
note that for fixed potential parameters, and a fixed value of $E$ one
must solve these relations in order to obtain the corresponding values
of the adapted parameters of the primary equation.
An examination of formula \eqref{eq:genpot} shows that in order for
$U(r)$ to be non-singular, the $z$-domain of the function shown in the
right hand side of \eqref{eq:genpot} must not contain any roots of
$R(z)$. Singular potentials will not be discussed here, and this
constraint greatly reduces the possible choices for $R(z)$.
According to equation \eqref{eq:genxvar} the physical coordinate of
the corresponding Natanzon potential is given by
$$r(z)=\int^z \frac{\sqrt{R(t)}}{A(t)}\, dt.$$
In all cases one can
explicitly calculate the above anti-derivative, but the inverse, i.e.
$z(r)$, cannot in general be specified explicitly. Indeed, one may
reasonably speculate that historically the study of Natanzon
potentials was delayed by the fact that the inverse, $z(r)$, of the
above anti-derivative can be given in terms of elementary functions
only for certain restricted choices of $R(z)$ and $A(z)$ (based on the
remarks found in the first paragraph of \cite{natanzon1} it would seem
that Natanzon shares this viewpoint).
Nonetheless, a great deal of information about the inverse is
available. First, one can always take the domain of $z(r)$ to be the
whole real line; this is a consequence of the fact that one excluded
those $R(z)$ that have roots in the domain of $z$. One can calculate
power series and asymptotic expansions for $z(r)$ and use these as the
basis for a numerical approximation. The graphs of potential curves
that are given below were generated using this approach.
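A minimal sketch of this numerical scheme (an illustration, not the code used for the figures) is the following: compute $r(z)$ by quadrature and invert it by root-finding, which is legitimate because the integrand $\sqrt{R}/A$ is positive on the working domain, so that $r(z)$ is strictly monotone. The choice $A=1$, $R=z^2+1$ made here is the one used for case 1 below.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

A = lambda t: 1.0               # illustrative choice (case 1 below)
R = lambda t: t**2 + 1.0

def r_of_z(zval, z0=0.0):
    return quad(lambda t: np.sqrt(R(t))/A(t), z0, zval)[0]

def z_of_r(rval, z_lo=-50.0, z_hi=50.0):
    # r(z) is strictly increasing since sqrt(R)/A > 0, so bisection applies
    return brentq(lambda t: r_of_z(t) - rval, z_lo, z_hi)

r_grid = np.linspace(-6.0, 6.0, 241)
z_grid = np.array([z_of_r(rv) for rv in r_grid])   # feed into U(z(r))
\end{verbatim}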
In the subsequent discussion $\Phi$ and $\Psi$ will denote the
confluent hypergeometric functions of the first and second kind, and
$F$ will denote the usual hypergeometric function, ${}_2F_1$. For the
source of this notation, as well as the various properties of these
functions the reader is referred to \cite{bateman}.
\noindent{\em Case 1.} $A(z) = 1$.
\par\noindent{\em Primary equation: }
$
\phi''(z) - 2\omega(z-\beta)\phi'(z)-4\alpha\omega\,\phi(z) = 0
$
\par\noindent{\em Primary solutions: }
$$
\phi_0=\Phi\!\left(\alpha;\;1/2;\;\omega(z-\beta)^2\right),\quad
\phi_1 = (z-\beta)\,\Phi\!\left(\alpha+1/2;\;3/2;\;\omega (z-\beta)^2\right).
$$
Note that the family of potentials under discussion is invariant under
substitutions of the form $z\mapsto z+k$, $k\in{\mathbb R}$.
In order to obtain a non-singular potential, $R(z)$ must not have real
roots, and consequently
it is sufficient to consider the case $R(z)=z^2+1$. The most general
form of the potential can then be obtained by a scaling transformation.
\par\noindent{\em Distance 1-form and physical variable:}
\begin{align}
\nonumber
\d r &=\sqrt{z^2+1}\,\d z, \qquad
r=\frac{1}{2} \left( z\sqrt{\strut z^2+1} + \log(z+\sqrt{z^2+1})\right).\\
\label{case1-zr.eqn}
z &\sim \pm \sqrt{2|r|},\quad r\rightarrow \pm\infty .
\end{align}
\noindent{\em Potential:}
\begin{align}
\nonumber
U&=\frac{az+b}{z^2+1} -\frac{3/4}{(z^2+1)^2}+\frac{ 5/4}{(z^2+1)^3} \\
\label{case1-parms.eqn}
E& = -\omega^2,\quad
a=-2\beta\omega^2,\quad
b=\omega^2(\beta^2-1)-\omega(1-4\alpha).
\end{align}
The resulting 2-parameter family of potentials is characterized by
the presence of two wells separated by a barrier --- see Figure
\ref{case1.fig}. The parameter $a$
controls the degree of asymmetry; if $a=0$, the potential is symmetric about
$r=0$. The parameter $b$ controls the height of the central barrier.
As $b$ increases the wells become smaller; if in addition $a=0$, then they
disappear altogether. As $b$ decreases the two wells merge into one.
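Since both $U(z)$ and $r(z)$ are given in closed form above, the curves of Figure~\ref{case1.fig} are conveniently reproduced parametrically; a small sketch (the parameter values shown are illustrative only):
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

def case1_curve(a, b, zmax=6.0, n=801):
    z = np.linspace(-zmax, zmax, n)
    r = 0.5*(z*np.sqrt(z**2 + 1) + np.log(z + np.sqrt(z**2 + 1)))
    U = (a*z + b)/(z**2 + 1) - 0.75/(z**2 + 1)**2 + 1.25/(z**2 + 1)**3
    return r, U

for a, b in [(0.0, -1.0), (0.0, 0.5), (1.0, -1.0)]:
    r, U = case1_curve(a, b)
    plt.plot(r, U, label=f"a={a}, b={b}")
plt.xlabel("r"); plt.ylabel("U(r)"); plt.legend(); plt.show()
\end{verbatim}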
\par\noindent{\em Eigenfunctions:}
\begin{equation}
\label{case1-efuncs.eqn}
\psi_i = \expf{-1/2\omega(z-\beta)^2} (z^2+1)^\frac{1}{4} \phi_i,\;
i=0,1.
\end{equation}
\noindent{\em Bound states:}
\noindent
A bound state occurs when $-2\alpha\in{\mathbb N}$ and $\omega>0$. This
directly implies that there are infinitely many bound states
if $a\neq 0$ or if $b<0$, and that there are no bound states otherwise.
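For fixed $(a,b)$ the bound-state energies are thus obtained by solving the relations \eqref{case1-parms.eqn}: setting $-2\alpha=n$ and eliminating $\beta=-a/(2\omega^2)$ leaves a single polynomial equation for $\omega$. The following sketch (not from the text; the reduction to a quartic is an elementary computation from \eqref{case1-parms.eqn}) returns $E_n=-\omega^2$.
\begin{verbatim}
import numpy as np

def case1_bound_energy(a, b, n):
    # From (case1-parms.eqn) with alpha = -n/2 and beta = -a/(2 w^2):
    #     4 w^4 + 4 (1 + 2n) w^3 + 4 b w^2 - a^2 = 0,   w > 0,   E_n = -w^2.
    roots = np.roots([4.0, 4.0*(1 + 2*n), 4.0*b, 0.0, -a**2])
    pos = [w.real for w in roots if abs(w.imag) < 1e-10 and w.real > 0]
    return -max(pos)**2 if pos else None    # None: no such bound state

print([case1_bound_energy(1.0, -2.0, n) for n in range(4)])
\end{verbatim}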
\noindent{\em Scattering.}
\noindent
The scattering states occur when $E>0$. Correspondingly,
$\omega=\i\hat{\omega}$ and $\alpha=\frac{1}{4}+\i\hat{\alpha}$, where $\hat{\omega}>0$ and
$\hat{\alpha}$ are real. As a consequence $\psi_0$ and
$\psi_1$ are real valued functions; this follows directly from
the well known formula (see chapter 6.3 of \cite{bateman}):
$$
\Phi(a;c;z) = {\mathrm e}^z\Phi(c-a;c;-z).
$$
An asymptotically free eigenfunction, call it $\psi_{\rm f}$, can be given
explicitly in terms of the
confluent hypergeometric function of the second kind:
$$\psi_{\rm f}=\omega^\alpha \expf{-1/2\omega(z-\beta)^2} (z^2+1)^\frac{1}{4}
\Psi(\alpha;1/2;\omega(z-\beta)^2).$$
From the well-known asymptotic formula (see chapter 6.13 of \cite{bateman})
$$
\Psi(a;c;z) \sim z^{-a} ,\quad z\rightarrow +\infty,
$$
and from (\ref{case1-zr.eqn}) it follows that
$$
\psi_{\rm f} \sim \expf{-\i\hat{\omega} |r|}
\expf{\i\left(\pm \frac{a}{\hat{\omega}^2\sqrt2}\sqrt{|r|} -\frac{ a^2}{4\hat{\omega}^3}\right)}
\left(\sqrt{2|r|}\mp\beta\right)^{2\i\hat{\alpha}},
$$
as $r\rightarrow \pm\infty$. Thus, asymptotically $\psi_{\rm f}$
represents an almost free particle traveling toward the center. The
discontinuity in the direction of motion is caused by the fact that
$\Psi(z)$ is not regular at $z=0$. The extra terms in the asymptotic
phase appear because of the slow rate --- on the order of $r^{-1}$ and
$r^{-1/2}$, depending respectively on whether $a$ is zero or not ---
at which the potential falls off toward zero.
From the relation between confluent hypergeometric functions of the
first and second kind (see chapter 6.7 of \cite{bateman}) one obtains
$$
\psi_0 = c_0 \psi_{\rm f} + \overline{c_0 \psi_{\rm f}},\quad
\psi_1 = \mathop{\rm sgn}\nolimits(z)(c_1 \psi_{\rm f} + \overline{c_1 \psi_{\rm f}}),
$$
where
$$c_0 = (-\omega)^{-\alpha}
\frac{\Gamma(\frac{1}{2})}{\Gamma(\frac{1}{4}-\i\hat{\alpha})},\quad
c_1 = (-\omega)^{-\frac{1}{2}-\alpha}
\frac{\Gamma(\frac{3}{2})}{\Gamma(\frac{3}{4}-\i\hat{\alpha})}
$$
It immediately follows that
$$\frac{1}{2}\left(\frac{\psi_0}{ c_0} - \frac{\psi_1}{ c_1}\right) =
\begin{cases}
T \bar{\psi_{\rm f}}, & z\rightarrow +\infty \\
\psi_{\rm f} + R\bar{\psi_{\rm f}}\; & z\rightarrow -\infty
\end{cases}
$$
where the reflection and transmission coefficients are given by
$$
T = \expf{\i\theta}(1-\expf{-2\alpha\pi \i})^{-1},\quad
R = \expf{\i\theta}(1-\expf{2\alpha\pi \i})^{-1},
$$
where
$$
\expf{\i\theta}
=\frac{\Gamma(\frac{1}{2}-\alpha)}{\Gamma(\alpha)}
E^{(\alpha-\frac{1}{4})}\expf{-\frac{\pi \i}{4}}.
$$
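As a numerical sanity check (not carried out in the text) one may verify that these coefficients satisfy the unitarity relation $|T|^2+|R|^2=1$; note that the prefactor $\expf{\i\theta}$ has unit modulus because $E>0$ and $\Gamma(\frac14-\i\hat{\alpha})$, $\Gamma(\frac14+\i\hat{\alpha})$ are complex conjugates.
\begin{verbatim}
import numpy as np
from scipy.special import gamma

def case1_TR(E, alpha_hat):
    alpha = 0.25 + 1j*alpha_hat          # scattering parametrization used above
    phase = (gamma(0.5 - alpha)/gamma(alpha)) * E**(alpha - 0.25) \
            * np.exp(-1j*np.pi/4)
    T = phase/(1.0 - np.exp(-2j*np.pi*alpha))
    R = phase/(1.0 - np.exp( 2j*np.pi*alpha))
    return T, R

for E, ah in [(0.5, 0.3), (2.0, -1.1), (7.0, 4.2)]:
    T, R = case1_TR(E, ah)
    print(abs(T)**2 + abs(R)**2)         # ~ 1.0 in each case
\end{verbatim}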
\noindent{\em Case 5.}
$A(z)= z^2+1$
\par\noindent{\em Primary equation: }
$$(1+z^2)\phi''(z) +
(\i(1-2\rho)+(2\sigma+1)z)\phi'(z)+(\sigma^2-\delta^2)\phi=0.$$
\noindent
The above equation is related to the usual hypergeometric equation,
$$\zeta(1-\zeta)\phi_{\zeta\zeta} +
(\gamma-\zeta(\alpha+\beta+1))\phi_\zeta - \alpha\beta\phi = 0,$$
by a linear change of parameters, and a complex-linear change of coordinates:
$$\sigma=\frac{\alpha+\beta}{2},\quad
\delta=\frac{\alpha-\beta}{2},\quad
\rho = \gamma-\frac{\alpha+\beta}{2},\quad
\zeta=\frac{1-\i z}{2}.
$$
\par\noindent{\em Primary solutions: }
\begin{align*}
\phi_1 &= F\left(\sigma+\delta,\, \sigma-\delta;\, \
{\rho+\sigma};\, (1-\i z)/2\right),\\
\phi_2 &= F\left(\sigma+\delta,\, \sigma-\delta;\, \
{1-\rho+\sigma};\, (1+\i z)/2\right)
\end{align*}
To obtain non-singular potentials one must take $R(z)$ without any
real roots. The resulting family of potentials depends on 4
parameters. A treatment of the most general potential, i.e.\ one
depending on all 4 parameters, would be unduly long and not particularly
illuminating. Thus, the focus here will be on a more manageable
3-parameter subclass, namely the potentials that correspond to
$R(z)=z^2+a^2$.
\par\noindent{\em Distance 1-form and physical variable:}
\begin{align*}
\d r&=\frac{\sqrt{z^2+a^2}}{ z^2+1}\,\d z,\qquad
r=\mathop{\rm sinh^{-1}}\nolimits\left( \frac{z}{a}\right)+
\sqrt{1-a^2} \mathop{\rm tanh^{-1}}\nolimits\left( \frac{\sqrt{1-a^2} z}{\sqrt{z^2+a^2}}\right)\\
z&\sim
a \exp\left( \sqrt{1-a^2}\,\mathop{\rm tanh^{-1}}\nolimits\left(\sqrt{1-a^2}\right)\,\right)\,\sinh(r) ,\quad r\rightarrow \pm\infty.
\end{align*}
\par\noindent{\em Potential: }
{\smaller[2]
\begin{align}
\label{case5-pot.eqn}
U&=\frac{b+cz}{ a^2+z^2} +\frac{1}{4}\frac{z^2+1}{ z^2+a^2}+
\left(\frac{z^2+1}{ z^2+a^2}\right)^2
\left( -\frac{1}{4}-\frac{1}{2}\frac{a^2+1}{ z^2+1}+\frac{5}{4}\frac{a^2}{
z^2+a^2}\right) \\
\label{case5-parms.eqn}
E&=-\delta^2,\; b= \delta^2(1-a^2)-\left(\rho-\frac{1}{2}\right)^2 -
\left(\sigma-\frac{1}{2}\right)^2 ,\\
\nonumber c&=-2\i\left(\rho-\frac{1}{2}\right)\left(\sigma-\frac{1}{2}\right).
\end{align}}
These potentials fall off exponentially toward zero for large $r$.
Setting $\rho=\frac{1}{2}$ one obtains potentials that coincide with a certain
subclass of Natanzon hypergeometric potentials. The correspondence is
given by the following substitution:
$z\mapsto 1+z^2,$
and at the level of solutions to the respective primary equations is
described by the following quadratic transformation of the
hypergeometric function (see chapter 2.11 of \cite{bateman}):
{\smaller[2]
$$F\left(\sigma+\delta,\,\sigma-\delta;\, 1/2+\sigma; \,(1-\i z)/2\right)
= F\left( (\sigma+\delta)/2, \,(\sigma-\delta)/2;\,
1/2+\sigma;\,1+z^2\right).
$$} The generic shape is that of two spikes for $a>1$, and two wells
for $0<a<1$; when $a=1$ one recovers a modified P\"oschl-Teller
potential. The parameter $b$ controls the height/depth of the central
spike/well, while $c$ is the skew parameter that controls the degree
of asymmetry in the potential.
The case $a>1$ results in the more interesting potential shapes,
and thus will be the focus of the remaining discussion.
Consider the symmetric potentials ($c=0$) for a fixed value of
$a>1$. The number of extrema in the potential curve depends on the
value of $b$. There are 3 critical values of $b$ where the
number of extrema changes:
$$
\frac{7a^2-3+3a^{-2}}{20},\quad
\frac{-a^2+9-9a^{-2}}{4},\quad
-a^2+\frac{3}{4}.
$$
At these critical values of $b$ some of the extrema merge, and one
obtains some distinguished potential shapes; these shapes are shown in
Figure \ref{case5-1.fig} (a).
At the value $b=(-a^2+9-9a^{-2})/4$ the potential takes the form
$$
\frac{a^2-1}{ 4a^4}\left( \frac{z^4((a^2-7)z^2+6a^4-12a^2)}{
(z^2+a^2)^3}-(a^2-7)\right)$$
One can show that $z=r/a+O(r^3)$ near $r=0$, and hence the first
3 $r$-derivatives of the potential vanish. As a
consequence, one obtains a
well with a very flat bottom. The potential also possesses two
local maxima; these correspond
to spike-like barriers on either side of the well. The variation of $a$ in
this type of
potential shape is shown in
Figure \ref{case5-1.fig} (b). The two barriers vanish
precisely at the critical value of $b=-a^2+\frac{3}{4}$.
For $c\neq0$ one can obtain similarly distinguished
potentials whenever $b$ attains a critical value where
the potential extrema merge.
There is no exact formula for these critical
values of $b$; they must be solved for numerically. The resulting
asymmetric potentials and their symmetric counterparts are shown
in Figure \ref{case5-2.fig}.
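The counting of extrema that underlies these statements is easy to carry out by computer; a sketch (SymPy, shown here for $c=0$ and $a=2$; the same routine is what one uses to locate the critical $b$ values numerically in the asymmetric case $c\neq0$):
\begin{verbatim}
import sympy as sp

z, b = sp.symbols('z b', real=True)

def case5_dU_numerator(a, c):
    # numerator of dU/dz for eq. (case5-pot.eqn) with R(z) = z^2 + a^2; b symbolic
    U = ((b + c*z)/(a**2 + z**2)
         + sp.Rational(1, 4)*(z**2 + 1)/(z**2 + a**2)
         + ((z**2 + 1)/(z**2 + a**2))**2
           * (-sp.Rational(1, 4) - sp.Rational(1, 2)*(a**2 + 1)/(z**2 + 1)
              + sp.Rational(5, 4)*a**2/(z**2 + a**2)))
    return sp.fraction(sp.together(sp.diff(U, z)))[0]

# Count extrema of U as a function of z (equivalently of r, since r(z) is
# strictly monotone) while b is scanned; the count jumps at the critical values.
num = case5_dU_numerator(a=2, c=0)
prev = None
for k in range(-40, 15):
    bval = sp.Rational(k, 10)
    n = len(sp.real_roots(sp.Poly(num.subs(b, bval), z)))
    if prev is not None and n != prev[1]:
        print("extrema count %d -> %d between b = %s and %s"
              % (prev[1], n, prev[0], bval))
    prev = (bval, n)
\end{verbatim}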
\par\noindent{\em Eigenfunctions:}
\begin{equation}
\label{case5-efuncs.eqn}
\psi_i =
(1-\i z)^{\frac{\sigma}{2}+\frac{\rho}{2}-\frac{1}{4}}
(1+\i z)^{\frac{\sigma}{2}-\frac{\rho}{2}+\frac{1}{4}}
\left(\frac{a^2+z^2}{ 1+z^2}\right)^\frac{1}{4}\phi_i,\; i=1,2.
\end{equation}
\par\noindent{\em Bound states:}
\noindent
An examination of relations (\ref{case5-parms.eqn}) will show that in
order to obtain a potential with real coefficients, $\delta$ must
either be real or imaginary; and either $\sigma-\frac{1}{2}$ or
$\rho-\frac{1}{2}$ must be real, while the other must be imaginary.
Note that the following transformation of the parameters is a symmetry of the
potential:
$$
\rho \mapsto \i\left(\sigma-1/2\right)+1/2,\quad
\sigma \mapsto -\i\left(\rho-1/2\right)+1/2.
$$
The transformation $\delta\rightarrow -\delta$ is also a potential
symmetry. The presence of these two symmetries means that without
loss of generality one can assume that $\sigma$ is real, that
$\Re(\rho)=1/2$, and that $\delta$ is either positive, or
positive imaginary. With these assumptions in place, $\psi_2$ is the
complex conjugate of $\psi_1$, and the latter is square integrable if
and only if $\delta>0$ and $\sigma+\delta\in-{\mathbb N}$. After a bit
of calculation one can show that this criterion implies that for fixed
$a$, $b$, $c$, the bound states are indexed by natural numbers,
$N=-(\sigma+\delta)$ such that
$$N < \frac{\sqrt{-2b+2\sqrt{b^2+c^2}}-1}{2}.$$
In particular, if $c=0$ (the symmetric potentials) then there will be
no bound states if
$b\geq-1/4$.
If $b<-1/4$, then the number of bound states is equal to the largest
integer smaller than $1/2+\sqrt{-b}$.
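The corresponding energies require the numerical inversion of the parameter relations alluded to earlier. A sketch (the reduction is an elementary computation from \eqref{case5-parms.eqn}, assuming the normalization above, i.e. $\sigma$ real and $\rho=\frac12+\i p$ with $p$ real): from $\sigma+\delta=-N$ one finds $p=-c/(2(N+\delta+\frac12))$ and hence the single equation $b=\delta^2(1-a^2)+c^2/(4(N+\delta+\frac12)^2)-(N+\delta+\frac12)^2$, to be solved for $\delta>0$; then $E_N=-\delta^2$.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def case5_bound_energies(a, b, c):
    bound = (np.sqrt(-2*b + 2*np.hypot(b, c)) - 1)/2   # admissible N satisfy N < bound
    energies = []
    for N in range(max(0, int(np.ceil(bound)))):
        f = lambda d: (d**2*(1 - a**2) + c**2/(4*(N + d + 0.5)**2)
                       - (N + d + 0.5)**2 - b)
        d_hi = 1.0                      # f(0+) > 0 for admissible N; bracket a root
        while f(d_hi) > 0:
            d_hi *= 2.0
        energies.append(-brentq(f, 1e-12, d_hi)**2)
    return energies

print(case5_bound_energies(a=2.0, b=-6.0, c=1.0))
\end{verbatim}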
\noindent{\em Scattering.}
\noindent
The scattering states occur when $E>0$, and hence without loss of generality
$\delta$ is positive imaginary.
For reasons detailed above, $\sigma$ will be assumed to be real, while
$\Re(\rho)$ will be assumed to be $1/2$.
To compute the reflection and
transmission coefficients it will be useful to introduce two more
solutions of the primary equation:
\begin{eqnarray*}
\phi_3 &=& \left(\,(\i z-1)/2\,\right)^{-\sigma-\delta}
F\left(\sigma+\delta,\delta+1-\rho;1+2\delta;\, 2/(1-\i z)\right),\\
\phi_4 &=& \left(\,(\i z-1)/2\,\right)^{-\sigma-\delta}
F\left(\sigma-\delta,\rho-\delta;1-2\delta;\, 2/(1+\i z)\right).\\
\end{eqnarray*}
As per the formula in (\ref{case5-efuncs.eqn}), let $\psi_3$ and
$\psi_4$ denote the corresponding eigenfunctions. The usefulness of
$\psi_3$ and $\psi_4$ is that they represent asymptotically free
particles traveling, respectively, towards and away from the origin:
\begin{align*}
\psi_3 &\sim K^{-\delta}
\expf{\mp (\sigma+\rho+\delta-1/2) \, \pi \i/2} \expf{-\delta|r|},\quad&
r\rightarrow \pm \infty,\\
\psi_4 &\sim K^{\delta}
\expf{\mp (\sigma+\rho-\delta-1/2)\, \pi \i/2} \expf{\delta|r|},\quad &
r\rightarrow \pm \infty,\\
\end{align*}
where
$$
K = \frac{a}{4} \exp\left(\, \sqrt{1-a^2}\mathop{\rm tanh^{-1}}\nolimits\left(\sqrt{1-a^2}\right)\,\right).
$$
Relations between the regular eigenfunctions, $\psi_1$, $\psi_2$, and
the irregular ones, $\psi_3$, $\psi_4$ are given by (see chapter 2.9
of \cite{bateman}):
$$
\psi_1 = c_3 \psi_3 + c_4\psi_4,\qquad
\psi_2 = \overline{c_4} \expf{\pm\pi \i (\sigma+\delta)} \psi_3 +
\overline{c_3} \expf{\pm \pi \i(\sigma-\delta)} \psi_4,
$$
where the $\pm$ in the second equation corresponds to the sign of $z$,
and where
$$
c_3 = \frac{\Gamma(\sigma+\rho)\Gamma(2\delta)}{
\Gamma(\rho+\delta)\Gamma(\sigma+\delta)},\quad
c_4 = \frac{\Gamma(\sigma+\rho)\Gamma(-2\delta)}{
\Gamma(\rho-\delta)\Gamma(\sigma-\delta)}.
$$
It follows that
\begin{gather*}
K^\delta \expf{-(\sigma+\delta+\rho-1/2)\, \pi \i/2}
\left(\frac{\psi_1 }{ c_3} \expf{\pi \i(\sigma+\delta)} - \frac{\psi_2
}{\overline{c_4}}
\right) = \\
\begin{cases}
T \expf{\delta r}, & r\rightarrow +\infty\\
\expf{\delta r} + R \expf{-\delta r},\; & r\rightarrow -\infty\\
\end{cases}
\end{gather*}
where elementary calculations will show that
\begin{eqnarray*}
T &=& \frac{K^{2\delta}
\Gamma(\sigma-\delta)\Gamma(1-\sigma-\delta)
\Gamma(\rho-\delta)\Gamma(1-\rho-\delta)}{2\pi\Gamma(-2\delta)
\Gamma(1-2\delta)} \\
R &=& T \left( \frac{\sin(\pi\sigma)\sin(\pi\rho)}{\sin(\pi\delta)}
- \frac{\i\cos(\pi\sigma)\cos(\pi\rho)}{\cos(\pi\delta)} \right)
\end{eqnarray*}
\begin{figure}[p]
\centering\noindent
\psfig{figure=case1-a.epsi,width=2.4in}
\hfil
\psfig{figure=case1-b.epsi,width=2.4in}
\caption{Case 1: roles of the $a$ and $b$ parameters.}
\label{case1.fig}
\end{figure}
\begin{figure}[p]
\centering
\noindent
\psfig{figure=case5-1a.epsi,width=2.4in}
\hfil
\psfig{figure=case5-1b.epsi,width=2.4in}
\caption{Case 5 symmetric potentials: (a) critical values of the $b$
parameter, (b) variation of the $a$ parameter in potentials with the
middle critical $b$ value.}
\label{case5-1.fig}
\end{figure}
\begin{figure}[p]
\centering
\noindent
\psfig{figure=case5-2a.epsi,width=2.4in}
\hfil
\psfig{figure=case5-2b.epsi,width=2.4in}
\caption{Case 5 asymmetric potentials: variation of the $c$ parameter
in potentials with (a) lowest critical $b$ value (b) middle critical
$b$ value.}
\label{case5-2.fig}
\end{figure}
| {
"timestamp": "1997-10-21T20:14:12",
"yymm": "9706",
"arxiv_id": "solv-int/9706007",
"language": "en",
"url": "https://arxiv.org/abs/solv-int/9706007",
"abstract": "The present article discusses the connection between exactly-solvable Schrodinger equations and the Liouville transformation. This transformation yields a large class of exactly-solvable potentials, including the exactly-solvable potentials introduced by Natanzon. As well, this class is shown to contain two new families of exactly solvable potentials.",
"subjects": "Exactly Solvable and Integrable Systems (nlin.SI)",
"title": "On the Liouville transformation and exactly-solvable Schrodinger equations",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9802808765013518,
"lm_q2_score": 0.7217432182679956,
"lm_q1q2_score": 0.7075110746126572
} |
https://arxiv.org/abs/1811.05197 | Affine Killing complete and geodesically complete homogeneous affine surfaces | An affine manifold is said to be geodesically complete if all affine geodesics extend for all time. It is said to be affine Killing complete if the integral curves for any affine Killing vector field extend for all time. We use the solution space of the quasi-Einstein equation to examine these concepts in the setting of homogeneous affine surfaces. | \section{Introduction}
Let $M$ be a connected smooth manifold of dimension $m$ which is equipped with a torsion free connection $\nabla$ on the tangent
bundle of $M$; the pair $\mathcal{M}=(M,\nabla)$ is said to be an {\it affine manifold}.
If $g$ is a pseudo-Riemannian metric on $M$, then the corresponding affine structure is obtained
by taking $\nabla$ to be the Levi-Civita connection. However, not all affine structures arise
in this fashion; such structures are said to be {\it not metrizable}. A diffeomorphism
from one affine manifold to another is said to be an {\it affine map} if
it intertwines the two connections.
Let $\Phi_t^X$ be the local 1-parameter flow of a vector field $X$ on $M$. The following 3 conditions are equivalent and
if any is satisfied, then $X$ is said to be
an {\it affine Killing vector field} (see Kobayashi and Nomizu~\cite{KN63}):
\begin{enumerate}
\item $\Phi_t^X$ is an affine map where defined.
\item The Lie derivative of $\nabla$ with respect to $X$ vanishes.
\item $[X,\nabla_YZ]-\nabla_Y[X,Z]-\nabla_{[X,Y]}Z=0$ for all smooth vector fields $Y$ and $Z$.
\end{enumerate}
Let $\mathfrak{K}(\mathcal{M})$ be the set of affine Killing vector fields. The
Lie bracket gives $\mathfrak{K}(\mathcal{M})$ the structure of a real Lie algebra. Furthermore,
if $X\in\mathfrak{K}(\mathcal{M})$, if $X(P)=0$, and if $\nabla X(P)=0$, then $X\equiv0$. Consequently,
$\dim\{\mathfrak{K}(\mathcal{M})\}\le m+m^2$; if equality holds, then $\mathcal{M}$ is flat.
An affine Killing vector field is said to be {\it complete} if the flow $\Phi_t^X$ exists for all $t$.
Let $\operatorname{Aff}(\mathcal{M})$ be the Lie group of all affine diffeomorphisms of $\mathcal{M}$.
The Lie algebra of $\operatorname{Aff}(\mathcal{M})$ is the space of complete affine Killing vector fields.
We say that $\mathcal{M}$ is {\it affine Killing complete} if all affine Killing vector fields are complete or, equivalently,
the Lie algebra of $\operatorname{Aff}(\mathcal{M})$ is $\mathfrak{K}(\mathcal{M})$.
Consequently, determining whether or not $\mathcal{M}$ is affine Killing complete is a central geometrical question.
A smooth curve $\sigma(t)$ in $M$ is said to be a {\it geodesic} if $\nabla_{\dot\sigma}\dot\sigma=0$.
We adopt the {\it Einstein convention} and sum over repeated indices to expand
$\nabla_{\partial_{x^i}}\partial_{x^j}=\Gamma_{ij}{}^k\partial_{x^k}$ in a system of local
coordinates. If $\sigma(t)=(x^1(t),\dots,x^m(t))$, then $\sigma$ is a geodesic if and only if the {\it geodesic equation} is satisfied, i.e.
\begin{equation}\label{E1.a}
\ddot\sigma^k+\Gamma_{ij}{}^k\dot\sigma^i\dot\sigma^j=0\text{ for all }k\,.
\end{equation}
$\mathcal{M}$ is said to be {\it geodesically complete} if every geodesic extends for infinite time.
Any geodesically complete affine manifold is affine Killing complete (see Kobayashi and Nomizu~\cite{KN63}) but the converse fails as we shall
see presently.
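In the Type~$\mathcal{A}$ setting, Equation~\eqref{E1.a} has constant coefficients and is easy to integrate numerically; the following sketch (an illustration only, not used in the proofs below) integrates the geodesic system for the structure with $\Gamma_{11}{}^1=\Gamma_{12}{}^2=1$ and all other symbols zero, i.e. ${\mathcal{M}}_1^6$ in the notation of Definition~\ref{D1.3}. Here the $x^1$-equation decouples to $\dot u=-u^2$ for $u=\dot x^1$, so a geodesic with $\dot x^1(0)=-1$ satisfies $x^1(t)=\log(1-t)$ and exists only for $t<1$; this structure is geodesically incomplete, consistent with Theorem~\ref{T1.7} below.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def geodesic(Gamma, x0, v0, t_max):
    """Integrate the geodesic equation; Gamma[i][j][k] = Gamma_{ij}^k (constants)."""
    G = np.asarray(Gamma, dtype=float)
    def rhs(t, y):
        v = y[2:]
        return np.concatenate([v, -np.einsum('ijk,i,j->k', G, v, v)])
    return solve_ivp(rhs, (0.0, t_max), np.concatenate([x0, v0]), rtol=1e-10)

G = np.zeros((2, 2, 2))
G[0, 0, 0] = 1.0                       # Gamma_{11}^1 = 1
G[0, 1, 1] = G[1, 0, 1] = 1.0          # Gamma_{12}^2 = Gamma_{21}^2 = 1
sol = geodesic(G, x0=[0.0, 0.0], v0=[-1.0, 1.0], t_max=0.99)
print(sol.y[0, -1], np.log(1.0 - sol.t[-1]))   # x^1(t) tracks log(1-t): blow-up at t = 1
\end{verbatim}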
If $\operatorname{Aff}(\mathcal{M})$ acts transitively on $M$, then $\mathcal{M}$ is said to be
{\it affine homogeneous}; there is a corresponding local theory if the diffeomorphisms in question are only assumed
to be locally defined. The classification of locally homogeneous affine surfaces by Opozda~\cite{Op04}
may be described as follows. Up to isomorphism, there
are two simply connected Lie groups of dimension $2$,
the translation group $\mathbb{R}^2$ and the $ax+b$ group $\mathbb{R}^+\times\mathbb{R}$.
A left invariant affine structure on $\mathbb{R}^2$ (resp. on $\mathbb{R}^+\times\mathbb{R}$) is said to be {\it Type~$\mathcal{A}$}
(resp. {\it Type~$\mathcal{B}$}). These geometries are globally homogeneous; $\operatorname{Aff}(\cdot)$ acts transitively
on such geometries. Every locally
homogeneous affine surface is either modeled on a Type~$\mathcal{A}$ geometry, on a Type~$\mathcal{B}$
geometry, or on the geometry of the round sphere $S^2$ in $\mathbb{R}^3$ with the Levi-Civita connection.
Any Riemannian metric on a compact manifold is complete. Thus
the sphere is geodesically complete. Similarly, any vector field on a compact manifold is complete and thus
the sphere is Killing complete.
For that reason, we will concentrate on studying the Type~$\mathcal{A}$ and Type~$\mathcal{B}$ geometries in this paper.
We emphasize that geodesic completeness (resp., affine Killing completeness) is equivalent to prolonging a system of second order
(resp., first order) non-linear ODEs. Even in the homogeneous setting these equations can be quite unmanageable. Consequently, instead
of a direct approach, we shall follow a different ansatz making use of the affine quasi-Einstein equation.
We will examine Killing completeness for both the Type~$\mathcal{A}$ and the Type~$\mathcal{B}$ geometries.
However, we will examine geodesic completeness only in the context of the Type~$\mathcal{A}$ geometries
as the quasi-Einstein equation proves not to be terribly useful in studying geodesic completeness for the Type~$\mathcal{B}$
geometries.
\subsection{The Hessian, the curvature, and the quasi-Einstein equation} Set
$$
\mathcal{H}\phi=(\partial_{x^i}\partial_{x^j}\phi-\Gamma_{ij}{}^k\partial_{x^k}\phi)dx^i\otimes dx^j\in S^2(M)\,.
$$
Define the curvature operator $R(\cdot,\cdot)$ and the Ricci tensor $\rho(\cdot,\cdot)$ by setting:
$$
R(u,v):=\nabla_u\nabla_v-\nabla_v\nabla_u-\nabla_{[u,v]} \text{ and }
\rho(x,y):=\operatorname{Tr}\{z\rightarrow R(z,x)y\}\,.
$$
As the Ricci tensor need not be symmetric,
we introduce the symmetrization $\rho_s(x,y):=\frac12\{\rho(x,y)+\rho(y,x)\}$. Let
$\mathcal{Q}(\mathcal{M})$ be the solution space of the {\it quasi-Einstein equation}:
$$
\mathcal{Q}(\mathcal{M}):=\{\phi\in C^\infty(M):\mathcal{H}\phi+\phi\rho_s=0\}\,.
$$
\subsection{Strong projective equivalence}
We say that two affine connections $\nabla$ and $\tilde\nabla$ are {\it strongly projectively equivalent}
if there exists a smooth function $\varphi$
so that $\tilde\nabla_XY=\nabla_XY+Y(\varphi)X+X(\varphi)Y$. In this setting,
we shall say that $\varphi$ provides a {\it strong projective equivalence} from $\mathcal{M}=(M,\nabla)$ to
${}^\varphi\mathcal{M}:=(M,\tilde\nabla)$. We say that $\mathcal{M}$ is {\it strongly
projectively flat} if $\mathcal{M}$ is strongly projectively equivalent to a flat connection.
We will prove the following result in Section~\ref{S2}.
\begin{lemma}\label{L1.1}Let $\mathcal{M}=(\mathbb{R}^2,\nabla)$ be a Type~$\mathcal{A}$ geometry.
There exists a linear function $\varphi(x^1,x^2)=a_1x^1+a_2x^2$ which provides
a strong projective equivalence from $\mathcal{M}$ to a flat Type~$\mathcal{A}$ geometry
and which satisfies $e^{-\varphi}\in\mathcal{Q}(\mathcal{M})$.\end{lemma}
There is a close relationship between strong projective equivalence and the solutions of the quasi-Einstein equation.
We refer to Brozos-V\'{a}zquez et al.~\cite{BGGV16} and to Gilkey and Valle-Regueiro~\cite{GV18} for the proof of the
following result.
\begin{theorem}\label{T1.2}
Let $\mathcal{M}=(M,\nabla)$ be a simply connected affine surface.
\begin{enumerate}
\item If $\phi\in\mathcal{Q}(\mathcal{M})$ with $\phi(0)=0$ and $d\phi(0)=0$, then $\phi\equiv0$.
\item $\dim\{\mathcal{Q}(\mathcal{M})\}\le 3$.
\item $\dim\{\mathcal{Q}(\mathcal{M})\}=3$ if and only if $\mathcal{M}$ is strongly projectively flat.
\item $\mathcal{Q}({}^{\varphi}\mathcal{M})=e^{\varphi}\mathcal{Q}(\mathcal{M})$.
\item Let $(M,\nabla)$ and $(\tilde M,\tilde\nabla)$ be two strongly projectively flat affine surfaces and
let $\Phi$ be a diffeomorphism
from $M$ to $\tilde M$. If $\Phi^*\mathcal{Q}(\tilde M,\tilde\nabla)=\mathcal{Q}(M,\nabla)$,
then $\Phi^*\tilde\nabla=\nabla$.
\end{enumerate}\end{theorem}
By Theorem~\ref{T1.2}, $\mathcal{Q}$ transforms conformally under strong projective deformations. Since the unparameterized geodesic
structure is not altered by projective deformations, $\mathcal{Q}$ is intimately related to the affine geodesic structure in this instance.
\subsection{Type~$\mathcal{A}$ geometries} The Christoffel symbols of a Type $\mathcal{A}$ structure on $\mathbb{R}^2$
take the form $\Gamma=\Gamma(a,b,c,d,e,f)$ for $(a,b,c,d,e,f)\in\mathbb{R}^6$ where
$$\Gamma(a,b,c,d,e,f):=\left\{\begin{array}{lll}\Gamma_{11}{}^1=a,&\Gamma_{11}{}^2=b,&
\Gamma_{12}{}^1=\Gamma_{21}{}^1=c\\
\Gamma_{12}{}^2=\Gamma_{21}{}^2=d,&\Gamma_{22}{}^1=e,&\Gamma_{22}{}^2=f
\end{array}\right\}.$$
Let $\mathcal{M}(a,b,c,d,e,f)$ be the corresponding affine structure on $\mathbb{R}^2$.
\begin{definition}\label{D1.3}
\rm We define the following Type~$\mathcal{A}$
affine structures ${\mathcal{M}}_i^j(\cdot)$ on $\mathbb{R}^2$; a direct computation then
establishes $\mathcal{Q}$:
\medbreak
${\mathcal{M}}_0^6:=\mathcal{M}(0,0,0,0,0,0)$, \qquad$\mathcal{Q}({\mathcal{M}}_0^6)=\operatorname{Span}\{\mathbbm{1},x^1,x^2\}$.
\par${\mathcal{M}}_1^6:=\mathcal{M}(1,0,0,1,0,0)$, \qquad$\mathcal{Q}({\mathcal{M}}_1^6)=\operatorname{Span}\{\mathbbm{1},e^{x^1},x^2e^{x^1}\}$.
\par${\mathcal{M}}_2^6:=\mathcal{M}(-1,0,0,0,0,1)$, \phantom{.....}$\mathcal{Q}({\mathcal{M}}_2^6)=\operatorname{Span}\{\mathbbm{1},e^{x^2},e^{-x^1}\}$.
\par${\mathcal{M}}_3^6:=\mathcal{M}(0,0,0,0,0,1)$, \phantom{........}$\mathcal{Q}({\mathcal{M}}_3^6)=\operatorname{Span}\{\mathbbm{1},x^1,e^{x^2}\}$.
\par${\mathcal{M}}_4^6:=\mathcal{M}(0,0,0,0,1,0)$,\phantom{........} $\mathcal{Q}({\mathcal{M}}_4^6)=\operatorname{Span}\{\mathbbm{1},x^2,(x^2)^2+2x^1\}$.
\par${\mathcal{M}}_5^6:=\mathcal{M}(1,0,0,1,-1,0)$,\phantom{.....}
$\mathcal{Q}({\mathcal{M}}_5^6)=\operatorname{Span}\{\mathbbm{1},e^{x^1}\cos(x^2),e^{x^1}\sin(x^2)\}$.
\par${\mathcal{M}}_1^4:=\mathcal{M}(-1,0,1,0,0,2)$,\phantom{.....}
$\mathcal{Q}({\mathcal{M}}_1^4)=\operatorname{Span}\{e^{x^2},x^2e^{x^2},e^{-x^1+x^2}\}$.
\par${\mathcal{M}}_2^4(c):=\mathcal{M}(-1,0,c,0,0,1+2c)\text{ for }c\notin\{0,-1\}$,
\par\qquad$\mathcal{Q}({\mathcal{M}}_2^4(c))=\operatorname{Span}\{e^{cx^2},e^{(1+c)x^2},e^{cx^2-x^1}\}$.
\par${\mathcal{M}}_3^4(c):=\mathcal{M}(0,0,c,0,0,1+2c)\text{ for }c\notin\{0,-1\}$, \par\qquad
$\mathcal{Q}({\mathcal{M}}_3^4(c))=\operatorname{Span}\{e^{cx^2},e^{(1+c)x^2},x^1e^{cx^2}\}$.
\par${\mathcal{M}}_4^4(c):=\mathcal{M}(0,0,1,0,c,2)$, $\mathcal{Q}({\mathcal{M}}_4^4(c))=\operatorname{Span}\{e^{x^2},x^2e^{x^2},
(\textstyle\frac12c(x^2)^2+x^1)e^{x^2}\}$.
\par${\mathcal{M}}_5^4(c):=\mathcal{M}(1,0,0,0,1+c^2,2c)$,
\par\qquad$\mathcal{Q}({\mathcal{M}}_5^4(c))=\operatorname{Span}\{e^{cx^2}\cos(x^2),e^{cx^2}\sin(x^2),
e^{x^1}\}$.
\par${\mathcal{M}}_1^2(a_1,a_2):=\mathcal{M}\left(\frac{a_1^2+a_2-1}{a_1+a_2-1},\frac{a_1^2-a_1}{a_1+a_2-1},\frac{a_1a_2}{a_1+a_2-1},\frac{a_1a_2}{a_1+a_2-1},\frac{a_2^2-a_2}{a_1+a_2-1},\frac{a_1+a_2^2-1}{a_1+a_2-1}\right)$ for $a_1a_2\ne0$
\par\qquad and $a_1+a_2\ne1$,
$\mathcal{Q}({\mathcal{M}}_1^2(a_1,a_2))=\operatorname{Span}\{e^{x^1},e^{x^2},e^{a_1x^1+a_2x^2}\}$.
\par ${\mathcal{M}}_2^2(b_1,b_2):=\mathcal{M}\left(1+b_1,0,b_2,1,\frac{1+b_2^2}{b_1-1},0\right)$ for $b_1\ne1$,
\par\qquad
$\mathcal{Q}({\mathcal{M}}_2^2(b_1,b_2))=\operatorname{Span}\{e^{x^1}\cos(x^2),e^{x^1}\sin(x^2),e^{b_1x^1+b_2x^2}\}$.
\par${\mathcal{M}}_3^2(c):=\mathcal{M}(2,0,0,1,c,1)$ for $c\ne0$,\par\qquad\qquad
$\mathcal{Q}({\mathcal{M}}_3^2(c))=\operatorname{Span}\{e^{x^1},(x^1-cx^2)e^{x^1},e^{x^1+x^2}\}$.
\par${\mathcal{M}}_4^2(\pm1):=\mathcal{M}(2,0,0,1,\pm1,0)$,\par\qquad\qquad
$\mathcal{Q}({\mathcal{M}}_4^2(\pm1))=\operatorname{Span}\{e^{x^1},x^2e^{x^1},(2x^1\pm (x^2)^2)e^{x^1}\}$.
\end{definition}
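The direct computations asserted in Definition~\ref{D1.3} are conveniently automated. The following SymPy sketch (illustrative; shown for ${\mathcal{M}}_4^4(c)$) computes $\rho_s$ from the Christoffel symbols and checks that $\mathcal{H}\phi+\phi\rho_s=0$ for the asserted basis of $\mathcal{Q}$.
\begin{verbatim}
import sympy as sp

x1, x2, c = sp.symbols('x1 x2 c')
X = (x1, x2)

def ricci_sym(G):
    # rho(x,y) = Tr{z -> R(z,x)y} for Gamma_{ij}^k = G[i][j][k] (possibly x-dependent)
    def rho(j, k):
        return sum(sp.diff(G[j][k][i], X[i]) - sp.diff(G[i][k][i], X[j])
                   + sum(G[j][k][m]*G[i][m][i] - G[i][k][m]*G[j][m][i]
                         for m in range(2))
                   for i in range(2))
    return [[sp.simplify((rho(j, k) + rho(k, j))/2) for k in range(2)] for j in range(2)]

def quasi_einstein(G, phi):
    # the 2x2 tensor  H(phi) + phi * rho_s
    rs = ricci_sym(G)
    return [[sp.simplify(sp.diff(phi, X[i], X[j])
                         - sum(G[i][j][k]*sp.diff(phi, X[k]) for k in range(2))
                         + phi*rs[i][j]) for j in range(2)] for i in range(2)]

# M_4^4(c) = M(0,0,1,0,c,2): Gamma_{12}^1 = Gamma_{21}^1 = 1, Gamma_{22}^1 = c,
# Gamma_{22}^2 = 2, all other symbols zero.
G = [[[0, 0], [1, 0]],
     [[1, 0], [c, 2]]]
for phi in (sp.exp(x2), x2*sp.exp(x2), (c*x2**2/2 + x1)*sp.exp(x2)):
    print(quasi_einstein(G, phi))      # each entry simplifies to 0
\end{verbatim}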
\subsection{Linear equivalence and parametrization}
We say that two Type~$\mathcal{A}$ geometries $(\mathbb{R}^2,\nabla_1)$ and $(\mathbb{R}^2,\nabla_2)$ are
{\it linearly equivalent} if some element of $\operatorname{GL}(2,\mathbb{R})$ intertwines these two geometries.
The parametrization of the
Type~$\mathcal{A}$ geometries given below in Theorem~\ref{T1.4} was established by Gilkey and Valle-Regueiro \cite{GV18};
we also refer to a slightly different
parametrization given in Brozos-V\'{a}zquez, Garc\'{i}a-R\'{i}o, and Gilkey~\cite{BGG16}.
\begin{theorem}\label{T1.4}
Let $\mathcal{M}$ be a Type~$\mathcal{A}$ structure.
\begin{enumerate}
\item The Ricci tensor of $\mathcal{M}$ is symmetric.
\item The following assertions are equivalent.
\begin{enumerate}
\item $\operatorname{Rank}\{\rho\}=2$.
\item $\dim\{\mathfrak{K}(\mathcal{M})\}=2$.
\item $\mathcal{M}$ is linearly equivalent to ${\mathcal{M}}_i^2(\cdot)$ for some $i$.
\end{enumerate}
\item The following assertions are equivalent.
\begin{enumerate}
\item $\operatorname{Rank}\{\rho\}=1$.
\item $\dim\{\mathfrak{K}(\mathcal{M})\}=4$.
\item $\mathcal{M}$ is linearly equivalent to ${\mathcal{M}}_i^4(\cdot)$ for some $i$.
\end{enumerate}
\item The following assertions are equivalent.
\begin{enumerate}
\item $\operatorname{Rank}\{\rho\}=0$.
\item $\dim\{\mathfrak{K}(\mathcal{M})\}=6$.
\item $\mathcal{M}$ is linearly equivalent to ${\mathcal{M}}_i^6(\cdot)$ for some $i$.
\item $\mathcal{M}$ is flat.
\end{enumerate}\end{enumerate}
\end{theorem}
Although ${\mathcal{M}}_i^j(\cdot)$ is not linearly equivalent to
${\mathcal{M}}_u^v(\cdot)$ for $(u,v)\ne(i,j)$, it can happen that
${\mathcal{M}}_i^j(\cdot)$ is linearly equivalent to ${\mathcal{M}}_i^j(\cdot)$ for different values of the parameters involved;
for example, we may interchange the coordinates $x^1\leftrightarrow x^2$ to see that
${\mathcal{M}}_1^2(a_1,a_2)$ is linearly equivalent to ${\mathcal{M}}_1^2(a_2,a_1)$. Giving
a precise description of the identifications describing the relevant moduli spaces is somewhat difficult; we refer to
\cite{BGG16,GV18} for further details, as it will play no role here.
The notation is chosen so that $\dim\{\mathfrak{K}({\mathcal{M}}_i^j(\cdot))\}=j$.
\subsection{Affine Killing completeness}
We will prove the following result in Section~\ref{S4}.
\begin{theorem}\label{T1.5}
Let $\mathcal{M}=(\mathbb{R}^2,\nabla)$ be a Type~$\mathcal{A}$ surface.
Then $\mathcal{M}$ is affine Killing complete if and only if $\mathcal{M}$
is linearly equivalent to ${\mathcal{M}}_0^6$, ${\mathcal{M}}_4^6$, ${\mathcal{M}}_3^4(c)$, ${\mathcal{M}}_4^4(c)$,
or to ${\mathcal{M}}_i^2(\cdot)$ for some $i$.
\end{theorem}
The structures $\mathcal{M}_1^6$, ${\mathcal{M}}_2^6$, ${\mathcal{M}}_3^6$, ${\mathcal{M}}_5^6$,
${\mathcal{M}}_1^4$, ${\mathcal{M}}_2^4(c)$, and ${\mathcal{M}}_5^4(c)$ are, up to linear equivalence, the only affine Killing incomplete
Type~$\mathcal{A}$ structures on $\mathbb{R}^2$.
In Section~\ref{S3}, we will exhibit affine immersions of
these structures into affine Killing complete Type~$\mathcal{A}$ surfaces and thereby show that these structures can be affine Killing completed.
\subsection{The geodesic equations}
In Section~\ref{S5}, we will establish the following result that reduces
the system of geodesic equations to a single ODE in the context of Type~$\mathcal{A}$ structures on $\mathbb{R}^2$. This will simplify our subsequent analysis enormously; it is exactly this step which fails for the Type~$\mathcal{B}$ geometries and which renders the analysis
of the geodesic structure of the Type~$\mathcal{B}$ geometries so difficult.
\begin{theorem}\label{T1.6} Let $\mathcal{M}$ be a Type~$\mathcal{A}$ surface.
There exists a linear function $\varphi$ so that
$\mathcal{Q}(\mathcal{M})=e^{\varphi}\operatorname{Span}\{\mathbbm{1},\phi_1,\phi_2\}$ and so that
the map $\Phi:=(\phi_1,\phi_2)$ defines an immersion of $\mathbb{R}^2$ in $\mathbb{R}^2$. Any geodesic on $\mathcal{M}$
locally has the form $\sigma(t)=\Phi^{-1}(\psi_\sigma(t)u_\sigma+v_\sigma)$ for some smooth function $\psi_\sigma$ and for
suitably chosen vectors $u_\sigma$ and $v_\sigma$ in $\mathbb{R}^2$.
\end{theorem}
Theorem~\ref{T1.6} is only a local result; however, since we are working
in the real analytic setting, this does not affect our ansatz. This point arises in the analysis of Section~\ref{S6.1.6} for example.
Our study of the geodesic structure in Type~$\mathcal{A}$ geometries in Section~\ref{S7} will
be based on Theorem~\ref{T1.6} and upon a knowledge
of $\mathcal{Q}(\mathcal{M})$ which is an analytic invariant; it is not simply a straightforward exercise
in computer algebra. The geodesic equation is a linked pair of non-linear equations in one variable; Theorem~\ref{T1.6}
reduces consideration to finding a single function of one variable. This approach permits us to determine in Section~\ref{S7}
all the geodesics of the affine manifolds ${\mathcal{M}}_i^j(\cdot)$ for $j=4$ and $j=6$; for $j=2$, we obtained
ODEs we could not solve although we did obtain sufficient information to establish whether or
not these geometries were geodesically complete. D'Ascanio et al. \cite{DGP17} determined
which non-flat Type~$\mathcal{A}$ geometries were geodesically complete using a very different approach.
In Section~\ref{S6}, we will establish the following result which extends their results by
taking into account the flat geometries; we believe it is a more straightforward treatment -- it also yields more
information.
\begin{theorem}\label{T1.7}
Let $\mathcal{M}=(\mathbb{R}^2,\nabla)$ be a Type~$\mathcal{A}$ surface.
Then $\mathcal{M}$ is geodesically complete if and only if
$\mathcal{M}$ is linearly equivalent to ${\mathcal{M}}_0^6$, to ${\mathcal{M}}_4^6$, to ${\mathcal{M}}_3^4(-\frac12)$,
or to ${\mathcal{M}}_2^2(-1,a)$ for some $a$.
\end{theorem}
The affine Killing vector fields of a Type~$\mathcal{A}$ geometry are real analytic. From this it follows that if
$\tilde{\mathcal{M}}$ is an affine surface which is modeled on a Type~$\mathcal{A}$ geometry $\mathcal{M}=(\mathbb{R}^2,\nabla)$ (where $\nabla$
has constant Christoffel symbols), then $\tilde{\mathcal{M}}$ is real analytic.
We say that a Type~$\mathcal{A}$ structure $\mathcal{M}$ on $\mathbb{R}^2$ is essentially geodesically incomplete if there is
no surface $\tilde{\mathcal{M}}$ which is modeled on ${\mathcal{M}}$ and which is geodesically complete.
It will follow from the analysis of Section~\ref{S6} that
any non-flat Type~$\mathcal{A}$ structure on $\mathbb{R}^2$ which is geodesically incomplete but not essentially geodesically incomplete
is linearly equivalent either to ${\mathcal{M}}_2^4(-\frac12)$
or to ${\mathcal{M}}_5^4(0)$; up to linear equivalence, ${\mathcal{M}}_2^4(-\frac12)$ and
${\mathcal{M}}_5^4(0)$ are the only non-flat Type~$\mathcal{A}$ structures on $\mathbb{R}^2$ which can
be geodesically completed. This is analogous to the situation when we considered the completion of affine Killing incomplete Type~$\mathcal{A}$
structures on $\mathbb{R}^2$.
\subsection{Type~$\mathcal{B}$ geometries}
$\nabla$ is a left invariant connection on the $ax+b$ group
$\mathbb{R}^+\times\mathbb{R}$ if and only if $\Gamma=(x^1)^{-1}\Gamma(a,b,c,d,e,f)$; we denote the
corresponding structure by $\mathcal{N}((x^1)^{-1}\Gamma(a,b,c,d,e,f))$.
\begin{definition}\label{D1.8}\rm We define the following Type~$\mathcal{B}$ affine structures $\mathcal{N}_i^j(\cdot)$
on $\mathbb{R}^+\times\mathbb{R}$; a direct computation then establishes $\mathcal{Q}$.
\par\quad$\mathcal{N}_0^6:=\mathcal{N}(\Gamma(0,0,0,0,0,0))$, $\mathcal{Q}(\mathcal{N}_0^6)=\operatorname{Span}\{\mathbbm{1},x^1,x^2\}$.
\par\quad$\mathcal{N}_1^6(\pm):=\mathcal{N}((x^1)^{-1}\Gamma(1,0,0,0,\pm1,0))$,
\par\qquad\qquad
$\mathcal{Q}(\mathcal{N}_1^6(\pm))=\operatorname{Span}\{\mathbbm{1},x^2,(x^1)^2\pm(x^2)^2\}$.
\par\quad$\mathcal{N}_2^6(c):=\mathcal{N}((x^1)^{-1}\Gamma(c-1,0,0,c,0,0))$ for $c\ne0$,\par\qquad\qquad
$\mathcal{Q}(\mathcal{N}_2^6(c))=\operatorname{Span}\{\mathbbm{1},(x^1)^c,(x^1)^cx^2\}$.
\par\quad$\mathcal{N}_3^6:=\mathcal{N}((x^1)^{-1}\Gamma(-2,1,0,-1,0,0))$,
$\mathcal{Q}(\mathcal{N}_3^6)=\operatorname{Span}\{\mathbbm{1},\frac1{x^1},\frac{x^2}{x^1}+\log(x^1)\}$.
\par\quad$\mathcal{N}_4^6:=\mathcal{N}((x^1)^{-1}\Gamma(0,1,0,0,0,0))$,
$\mathcal{Q}(\mathcal{N}_4^6)=\operatorname{Span}\{\mathbbm{1},x^1,x^2+x^1\log(x^1)\}$.
\par\quad$\mathcal{N}_5^6:=\mathcal{N}((x^1)^{-1}\Gamma(-1,0,0,0,0,0))$,
$\mathcal{Q}(\mathcal{N}_5^6)=\operatorname{Span}\{\mathbbm{1},\log(x^1),x^2\}$.
\par\quad$\mathcal{N}_6^6(c):=\mathcal{N}((x^1)^{-1}\Gamma(c,0,0,0,0,0))$ for $c\ne0,-1$,\par\qquad\qquad
$\mathcal{Q}(\mathcal{N}_6^6(c))=\operatorname{Span}\{\mathbbm{1},(x^1)^{1+c},x^2\}$.
\par\quad$\mathcal{N}_1^4(\kappa):=\mathcal{N}((x^1)^{-1}\Gamma(2\kappa,1,0,\kappa,0,0))$ for $\kappa\notin\{0,-1\}$,
\par\qquad\qquad$\mathcal{Q}(\mathcal{N}_1^4(\kappa))=\operatorname{Span}\{(x^1)^\kappa, (x^1)^{\kappa+1}, (x^1)^\kappa(x^2+x^1\log x^1)\}$.
\par\quad$\mathcal{N}_2^4(\kappa,\theta):=\mathcal{N}((x^1)^{-1}\Gamma(
2\kappa+\theta-1,0,0,\kappa,0, 0))$ for $\kappa\notin\{0,-\theta\}$ and $\theta\ne0$,
\par\qquad\qquad$\mathcal{Q}(\mathcal{N}_2^4(\kappa,\theta))=\operatorname{Span}\{(x^1)^\kappa, (x^1)^\kappa x^2, (x^1)^{\kappa+\theta} \}$.
\par\quad$\mathcal{N}_3^4(\kappa):=\mathcal{N}((x^1)^{-1}\Gamma(2\kappa-1,0,0,\kappa,0,0))$ for $\kappa\ne0$,
\par\qquad\qquad$\mathcal{Q}({\mathcal{N}_3^4(\kappa))}=\operatorname{Span}\{(x^1)^\kappa, (x^1)^\kappa x^2, (x^1)^\kappa \log x^1 \}$.
\par\quad$\mathcal{N}_1^3(\pm):=\mathcal{N}((x^1)^{-1}\Gamma(-\frac32,0,0,-\frac12,\mp\frac12,0))$,
$\mathcal{Q}(\mathcal{N}_1^3(\pm))=\{0\}$.
\par\quad$\mathcal{N}_2^3(c):=\mathcal{N}((x^1)^{-1}\Gamma(-\frac32,0,1,-\frac12,c,2))$,
$\mathcal{Q}(\mathcal{N}_2^3(c))=\{0\}$.
\par\quad$\mathcal{N}_3^3:=\mathcal{N}((x^1)^{-1}\Gamma(-1,0,0,-1,-1,0))$,
$\mathcal{Q}(\mathcal{N}_3^3)=\operatorname{Span}\{\frac1{x^1},\frac{x^2}{x^1},\frac{(x^2)^2-(x^1)^2}{x^1}\}$.
\par\qquad\qquad This is the affine structure of the
Lorentzian-hyperbolic plane given by\par\qquad\qquad the metric $ds^2=\frac{(dx^1)^2-(dx^2)^2}{(x^1)^2}$.
\par\quad$\mathcal{N}_4^3:=\mathcal{N}((x^1)^{-1}\Gamma(-1,0,0,-1,1,0))$,
$\mathcal{Q}(\mathcal{N}_4^3)=\operatorname{Span}\{\frac1{x^1},\frac{x^2}{x^1},\frac{(x^2)^2+(x^1)^2}{x^1}\}$.
\par\qquad\qquad This is the affine structure of the hyperbolic plane given by the metric
\par\qquad\qquad $ds^2=\frac{(dx^1)^2+(dx^2)^2}{(x^1)^2}$.
\end{definition}
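The same symbolic check applies to Definition~\ref{D1.8}; the only difference is that the Christoffel symbols now depend on $x^1$, so the curvature acquires derivative terms. A short sketch for the hyperbolic plane $\mathcal{N}_4^3$:
\begin{verbatim}
import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)
X = (x1, x2)

# N_4^3: Gamma = (x^1)^{-1} Gamma(-1,0,0,-1,1,0)
G = [[[-1/x1, 0], [0, -1/x1]],
     [[0, -1/x1], [1/x1, 0]]]

def rho(j, k):          # rho(x,y) = Tr{z -> R(z,x)y}, including the dGamma terms
    return sum(sp.diff(G[j][k][i], X[i]) - sp.diff(G[i][k][i], X[j])
               + sum(G[j][k][m]*G[i][m][i] - G[i][k][m]*G[j][m][i] for m in range(2))
               for i in range(2))

rho_s = [[sp.simplify((rho(i, j) + rho(j, i))/2) for j in range(2)] for i in range(2)]

for phi in (1/x1, x2/x1, (x2**2 + x1**2)/x1):
    print([[sp.simplify(sp.diff(phi, X[i], X[j])
                        - sum(G[i][j][k]*sp.diff(phi, X[k]) for k in range(2))
                        + phi*rho_s[i][j]) for j in range(2)] for i in range(2)])
# every entry is 0, confirming the asserted basis of Q(N_4^3)
\end{verbatim}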
We refer to Brozos-V\'{a}zquez et al.~\cite{BGG18} for the proof of Assertions~(1--3) in Theorem~\ref{T1.9} below.
Assertion~(4) will be established in Section~\ref{S8} and is
the appropriate generalization of Theorem~\ref{T1.4}~(4) to this setting; unlike
the case of the Type~$\mathcal{A}$ geometries, there is no classification for the generic case
$\dim\{\mathfrak{K}(\mathcal{N})\}=2$ and this is why the determination of which of these geometries is
geodesically complete is unsettled.
\begin{theorem}\label{T1.9}
Let $\mathcal{N}$ be a Type~$\mathcal{B}$ structure on $\mathbb{R}^+\times\mathbb{R}$.
\begin{enumerate}
\item $\dim\{\mathfrak{K}\}\in\{2,3,4,6\}$.
\item $\dim\{\mathfrak{K}(\mathcal{N})\}=3$ if and only if $\mathcal{N}$ is linearly equivalent to $\mathcal{N}_i^3(\cdot)$ for some $i$.
\item The following assertions are equivalent.
\begin{enumerate}
\item $\dim\{\mathfrak{K}(\mathcal{N})\}=4$.
\item $\mathcal{N}$ is linearly equivalent to $\mathcal{N}_i^4(\cdot)$ for some $i$;
\item $\mathcal{N}$ is also Type~$\mathcal{A}$.
\end{enumerate}
\item The following assertions are equivalent.
\begin{enumerate}
\item $\dim\{\mathfrak{K}(\mathcal{N})\}=6$.
\item $\mathcal{N}$ is linearly equivalent to $\mathcal{N}_i^6(\cdot)$ for some $i$.
\item $\mathcal{N}$ is flat.\end{enumerate}\end{enumerate}
\end{theorem}
We will prove the following result in Section~\ref{S8}.
\begin{theorem}\label{T1.10}
Let $\mathcal{M}$ be a Type~$\mathcal{B}$ structure on $\mathbb{R}^+\times\mathbb{R}$.
\begin{enumerate}
\item If $\dim\{\mathfrak{K}(\mathcal{M})\}=2$, then $\mathcal{M}$ is affine Killing complete.
\item If $\dim\{\mathfrak{K}(\mathcal{M})\}=3$, then $\mathcal{M}$ is affine Killing complete if and only if
$\mathcal{M}$ is linearly equivalent to the hyperbolic plane.
\item If $\dim\{\mathfrak{K}(\mathcal{M})\}=4$, then $\mathcal{M}$ is affine Killing complete.
\item If $\dim\{\mathfrak{K}(\mathcal{M})\}=6$, then $\mathcal{M}$ is affine Killing complete if and only
if $\mathcal{M}$ is linearly equivalent to $\mathcal{N}_0^6$ or $\mathcal{N}_5^6$.\end{enumerate}
\end{theorem}
The question of geodesic completeness is more subtle and will not be dealt with here since, unlike the
Type~$\mathcal{A}$ geometries, we do not have a suitable parametrization of the Type~$\mathcal{B}$ surfaces
where $\dim\{\mathfrak{K}(\mathcal{N})\}=2$.
\section{The proof of Lemma~\ref{L1.1}}\label{S2}
Let $\mathcal{M}=(\mathbb{R}^2,\nabla)$ be a Type~$\mathcal{A}$ geometry.
An affine surface $\mathcal{M}$ is strongly projectively flat if and only if both
$\rho$ and $\nabla\rho$ are totally symmetric (see, for example, Eisenhart~\cite{E70} or Nomizu and Sasaki~\cite{NS}). A direct computation shows that
$\rho$ and $\nabla\rho$ are in fact totally symmetric
if $\mathcal{M}$ is Type~$\mathcal{A}$ and thus every Type~$\mathcal{A}$ surface is strongly projectively flat.
However, this argument does not show that the associated flat surface is again Type~$\mathcal{A}$ nor does it show that
the equivalence can be obtained using a linear function. We proceed as follows.
Let $\varphi(x^1,x^2)=a_1x^1+a_2x^2$ for $(a_1,a_2)\in\mathbb{R}^2$. Let $\tilde{\mathcal{M}}:={}^{-\varphi}\mathcal{M}$
have Ricci tensor $\tilde\rho$. We wish to choose $(a_1,a_2)$ so $\tilde\rho=0$. We suppose first that $\Gamma_{11}{}^2\ne0$. By
rescaling $x^2$, we may assume that $\Gamma_{11}{}^2=1$. We solve the equation $\tilde\rho_{11}=0$ for $a_2$ to obtain
$$
a_2:=a_1^2 - a_1 \Gamma_{11}{}^1 - \Gamma_{12}{}^1 + \Gamma_{11}{}^1 \Gamma_{12}{}^2 - (\Gamma_{12}{}^2)^2 + \Gamma_{22}{}^2\,.
$$
This yields
\begin{eqnarray*}
\tilde\rho_{12}&=&-\Gamma_{22}{}^1 + (a_1 - \Gamma_{12}{}^2) (a_1^2 - a_1 \Gamma_{11}{}^1 - 2 \Gamma_{12}{}^1
+ \Gamma_{11}{}^1 \Gamma_{12}{}^2 - (\Gamma_{12}{}^2)^2 +
\Gamma_{22}{}^2)\\
\tilde\rho_{22}&=&(a_1 - \Gamma_{11}{}^1 + \Gamma_{12}{}^2)\tilde\rho_{12}\,.
\end{eqnarray*}
The crucial point is that $\tilde\rho_{12}$ divides $\tilde\rho_{22}$. Thus it suffices to choose $a_1$ so $\tilde\rho_{12}=0$.
Since $\tilde\rho_{12}$ is a monic cubic polynomial in $a_1$, it has a real root,
so we can find $a_1$ with $\tilde\rho_{12}=0$. We now have $\tilde\rho=0$ so $\tilde{\mathcal{M}}$ is flat as desired.
We suppose next that $\Gamma_{11}{}^2=0$. If $\Gamma_{22}{}^1\ne0$, we can interchange the roles of $x^1$ and $x^2$
and repeat the argument given above. We may therefore assume that $\Gamma_{22}{}^1=0$ as well. We make a direct computation to see
that taking $a_1=\Gamma_{12}{}^2$
and $a_2=\Gamma_{12}{}^1$ yields $\tilde\rho=0$. Since
$\mathbbm{1}\in\mathcal{Q}(\tilde{\mathcal{M}})$, we conclude $e^\varphi\in\mathcal{Q}(\mathcal{M})=e^\varphi\mathcal{Q}(\tilde{\mathcal{M}})$.\qed
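The procedure in this proof is entirely algorithmic; the following SymPy sketch (the particular Christoffel symbols are an arbitrary choice with $\Gamma_{11}{}^2=1$, used only for illustration) solves $\tilde\rho_{11}=0$ for $a_2$, finds a real root of the resulting cubic $\tilde\rho_{12}=0$ in $a_1$, and confirms that $\tilde\rho$ vanishes.
\begin{verbatim}
import sympy as sp

a1, a2 = sp.symbols('a1 a2', real=True)

def ricci(G):
    # Ricci tensor for constant Christoffel symbols G[i][j][k] = Gamma_{ij}^k
    return [[sum(sum(G[j][k][m]*G[i][m][i] - G[i][k][m]*G[j][m][i] for m in range(2))
                 for i in range(2)) for k in range(2)] for j in range(2)]

def deformed(G, da):
    # Christoffel symbols of ^{-phi}M for a linear phi with dphi = (da[0], da[1])
    return [[[G[i][j][k] - da[i]*int(j == k) - da[j]*int(i == k)
              for k in range(2)] for j in range(2)] for i in range(2)]

# an arbitrary Type A structure with Gamma_{11}^2 = 1 (illustration only)
G = [[[sp.Rational(1, 2), 1], [2, sp.Rational(-1, 3)]],
     [[2, sp.Rational(-1, 3)], [1, sp.Rational(3, 4)]]]

rho_t = ricci(deformed(G, (a1, a2)))
a2_sol = sp.solve(rho_t[0][0], a2)[0]               # rho~_11 = 0 is linear in a2
cubic = sp.expand(rho_t[0][1].subs(a2, a2_sol))     # rho~_12 = 0: a cubic in a1
a1_num = sp.real_roots(sp.Poly(cubic, a1))[0].evalf(30)
vals = {a1: a1_num, a2: a2_sol.subs(a1, a1_num)}
print([[e.subs(vals).evalf(5) for e in row] for row in rho_t])   # numerically zero
\end{verbatim}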
\section{Affine embeddings and immersions of Type~$\mathcal{A}$ structures}\label{S3}
We introduce an auxiliary affine surface $\tilde{{\mathcal{M}}}_5^4(c):=(\mathbb{R}^2,\nabla)$
where the only (possibly) non-zero Christoffel symbols of $\nabla$ are
$\Gamma_{22}{}^1=(1+c^2)x^1$ and $\Gamma_{22}{}^2=2c$; this is not a Type~$\mathcal{A}$ structure on $\mathbb{R}^2$. We have
$$
\mathcal{Q}(\tilde{{\mathcal{M}}}_5^4(c))=\operatorname{Span}\{e^{cx^2}\cos(x^2),e^{cx^2}\sin(x^2),x^1\}\,.
$$
We will
show presently in Section~\ref{S3.3} that $\operatorname{Aff}(\tilde{{\mathcal{M}}}_5^4(c))$ acts transitively on $\mathbb{R}^2$
and consequently this is a homogeneous geometry.
\begin{theorem}\label{T3.1}
\ \begin{enumerate}
\item ${\Theta}_1^6(x^1,x^2):=(e^{x^1},x^2e^{x^1})$ is an affine embedding of ${\mathcal{M}}_1^6$ in ${\mathcal{M}}_0^6$.
\item ${\Theta}_2^6(x^1,x^2):=(e^{x^2},e^{-x^1})$ is an affine embedding of ${\mathcal{M}}_2^6$ in ${\mathcal{M}}_0^6$.
\item ${\Theta}_3^6(x^1,x^2):=(x^1,e^{x^2})$ is an affine embedding of ${\mathcal{M}}_3^6$ in ${\mathcal{M}}_0^6$.
\item ${\Theta}_4^6(x^1,x^2):=(x^2,(x^2)^2+2x^1)$ is an affine isomorphism from ${\mathcal{M}}_4^6$ to ${\mathcal{M}}_0^6$.
\item ${\Theta}_5^6(x^1,x^2)=(e^{x^1}\cos(x^2),e^{x^1}\sin(x^2))$ is an affine immersion of ${\mathcal{M}}_5^6$ in
${\mathcal{M}}_0^6$.
\item ${\Theta}_1^4(x^1,x^2):=(e^{-x^1},x^2)$ is an affine embedding of ${\mathcal{M}}_1^4$ in ${\mathcal{M}}_4^4(0)$.
\item ${\Theta}_2^4(x^1,x^2):=(e^{-x^1},x^2)$ is an affine embedding of ${\mathcal{M}}_2^4(c)$ in ${\mathcal{M}}_3^4(c)$.
\item ${\Theta}_4^4(c)(x^1,x^2):=(x^1+\frac12c(x^2)^2,x^2)$ is an affine isomorphism from ${\mathcal{M}}_4^4(c)$ to ${\mathcal{M}}_4^4(0)$.
\item ${\Theta}_5^4(c)(x^1,x^2):=(e^{x^1},x^2)$ is an affine embedding of ${\mathcal{M}}_5^4(c)$ in $\tilde{{\mathcal{M}}}_5^4(c)$.
\end{enumerate}
\end{theorem}
\begin{proof} Because $\dim\{\mathcal{Q}(\cdot)\}=3$,
the geometries in question are all strongly projectively flat.
The affine maps in question intertwine the solution spaces $\mathcal{Q}(\cdot)$.
Thus Theorem~\ref{T3.1} follows from Theorem~\ref{T1.2}.
\end{proof}
\section{The proof of Theorem~\ref{T1.5}}\label{S4}
Let $\mathcal{M}$ be a Type~$\mathcal{A}$ surface model. We have $\dim\{\mathfrak{K}(\mathcal{M})\}\in\{2,4,6\}$.
If $\dim\{\mathfrak{K}(\mathcal{M})\}=2$, then
$\mathfrak{K}(\mathcal{M})=\operatorname{Span}\{\partial_{x^1},\partial_{x^2}\}$. The flow lines
of the affine Killing vector fields are straight lines with a linear parametrization and are complete
so Theorem~\ref{T1.5} is immediate. If $\dim\{\mathfrak{K}(\mathcal{M})\}=6$, then $\mathcal{M}$
is flat. We apply Theorem~\ref{T3.1};
${\Theta}_i^6$ is a diffeomorphism if $i=4$ and thus ${\mathcal{M}}_4^6$ is
affine Killing complete. ${\Theta}_i^6$ is not surjective if $i=1,2,3,5$ and thus ${\mathcal{M}}_i^6$ is affine Killing incomplete
for these values of $i$. We complete the proof of Theorem~\ref{T1.5} by dealing with the case
$\dim\{\mathfrak{K}(\mathcal{M})\}=4$.
By Theorem~\ref{T1.4} and Theorem~\ref{T3.1} we may assume that $\mathcal{M}={\mathcal{M}}_3^4(c)$, that
$\mathcal{M}={\mathcal{M}}_4^4(0)$, or that $\mathcal{M}$ may be replaced by $\tilde{{\mathcal{M}}}_5^4(c)$. We examine these 3
cases seriatim.
\subsection{Case 1. $\boldsymbol{{\mathcal{M}}_3^4(c)}$} We have
$\mathcal{Q}({\mathcal{M}}_3^4(c))=\operatorname{Span}\{e^{cx^2},e^{(1+c)x^2},x^1e^{cx^2}\}$. This is
not a particularly convenient form of this surface to work with. We set $u^1:=x^1e^{cx^2}$ and $u^2:=x^2$
to express
$\mathcal{Q}({\mathcal{M}}_3^4(c))=\operatorname{Span}\{e^{cu^2},e^{(1+c)u^2},u^1\}$. We define
$$
T(a_1,b_1,c_1,d_1)(u^1,u^2)=(e^{a_1}u^1+b_1e^{cu^2}+c_1e^{(1+c)u^2},u^2+d_1)\,.
$$
Because $T(a_1,b_1,c_1,d_1)^*\mathcal{Q}({\mathcal{M}}_3^4(c))=\mathcal{Q}({\mathcal{M}}_3^4(c))$,
$T(a_1,b_1,c_1,d_1)$ defines a diffeomorphism of $\mathbb{R}^2$ preserving the affine structure.
We verify
\begin{eqnarray*}
&&
T(a_1,b_1,c_1,d_1)\circ T(a_2,b_2,c_2,d_2)\\
&&\quad=T(a_1+a_2,b_2e^{a_1}+b_1e^{cd_2},c_2e^{a_1}+c_1e^{(1+c)d_2},d_1+d_2){\,,}\\
&&T(a_1,b_1,c_1,d_1)^{-1}=T(-a_1,-b_1e^{-a_1-cd_1},-c_1e^{-a_1+(-1-c)d_1},-d_1)\,.
\end{eqnarray*}
This gives $\mathbb{R}^4$ the structure of a Lie group and constructs a 4-parameter family of affine Killing vector fields
which for dimensional reasons must be $\mathfrak{K}({\mathcal{M}}_3^4(c))$ and thereby shows ${\mathcal{M}}_3^4(c)$
is affine Killing complete.
\subsection{Case 2. $\boldsymbol{{\mathcal{M}}_4^4(0)}$} We have
$\mathcal{Q}({\mathcal{M}}_4^4(0))=\operatorname{Span}\{e^{x^2},x^2e^{x^2},x^1e^{x^2}\}$. We clear the previous notation and set
$$
T(a_1,b_1,c_1,d_1)(x^1,x^2):=(e^{a_1}x^1+b_1x^2+c_1,x^2+d_1)\,.
$$
Since $T(a_1,b_1,c_1,d_1)^*\mathcal{Q}({\mathcal{M}}_4^4(0))=\mathcal{Q}({\mathcal{M}}_4^4(0))$,
$T(a_1,b_1,c_1,d_1)$ is a diffeomorphism of $\mathbb{R}^2$ preserving the affine structure.
The group structure on $\mathbb{R}^4$ is given by
\begin{eqnarray*}
&&T(a_1,b_1,c_1,d_1)\circ T(a_2,b_2,c_2,d_2)\\
&&\quad=T(a_1+a_2,b_1+b_2e^{a_1},c_1+b_1d_2+c_2e^{a_1},d_1+d_2),\\
&&T(a_1,b_1,c_1,d_1)^{-1}=T(-a_1,-b_1e^{-a_1},e^{-a_1}(b_1d_1-c_1),-d_1)\,.
\end{eqnarray*}
It now follows ${\mathcal{M}}_4^4(0)$ is affine Killing complete.
\subsection{Case 3. $\boldsymbol{\tilde{{\mathcal{M}}}_5^4(c)}$}\label{S3.3}
We have
$\mathcal{Q}(\tilde{{\mathcal{M}}}_5^4(c))=\operatorname{Span}\{e^{cx^2}\cos(x^2),e^{cx^2}\sin(x^2),x^1\}$.
We clear the previous notation and set
$$
T(a_1,b_1,c_1,d_1)(x^1,x^2):=(e^{a_1}x^1+b_1e^{cx^2}\cos(x^2)+c_1e^{cx^2}\sin(x^2),x^2+d_1)\,.
$$
Then $T(a_1,b_1,c_1,d_1)^*\mathcal{Q}(\tilde{{\mathcal{M}}}_5^4(c))=\mathcal{Q}(\tilde{{\mathcal{M}}}_5^4(c))$ so
$T(a_1,b_1,c_1,d_1)$ is a diffeomorphism of $\mathbb{R}^2$ preserving the affine structure. The group
structure is given by
\begin{eqnarray*}
&&T(a_1,b_1,c_1,d_1)\circ T(a_2,b_2,c_2,d_2)\\
&&\quad= T(a_1+a_2,e^{a_1} b_2+b_1 e^{c d_2} \cos (d_2)+c_1 e^{c d_2} \sin (d_2),\\
&&\qquad e^{a_1} c_2-b_1 e^{c d_2} \sin (d_2)+c_1 e^{c d_2} \cos (d_2),d_1+d_2),\\
&&T(a_1,b_1,c_1,d_1)^{-1}=T(-a_1,-e^{-a_1-c d_1} (b_1 \cos (d_1)-c_1 \sin (d_1)),\\
&&\qquad\qquad\qquad\qquad\qquad -e^{-a_1-c d_1} (b_1 \sin (d_1)+c_1 \cos (d_1)),-d_1)\,.
\end{eqnarray*}
It now follows $\tilde{{\mathcal{M}}}_5^4(c)$ is affine Killing complete. It is immediate that $\operatorname{Aff}(\tilde{{\mathcal{M}}}_5^4(c))$ acts
transitively on $\mathbb{R}^2$ so this is a homogeneous geometry.
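For instance, $T(0,b_1,0,d_1)(0,0)=(b_1,d_1)$, so the orbit of the origin under this group is already all of $\mathbb{R}^2$.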
\section{The proof of Theorem~\ref{T1.6}}\label{S5}
Let $\mathcal{M}=(\mathbb{R}^2,\nabla)$ be a Type~$\mathcal{A}$ structure on $\mathbb{R}^2$.
By Lemma~\ref{L1.1}, there exists a linear function $\varphi$ with $e^\varphi\in\mathcal{Q}(\mathcal{M})$ and so
$\tilde{\mathcal{M}}:={}^{-\varphi}\mathcal{M}$ is flat.
Since $e^{\varphi}\in\mathcal{Q}(\mathcal{M})$ and $\dim\{\mathcal{Q}(\mathcal{M})\}=3$, we have
$\mathcal{Q}(\mathcal{M})=e^{\varphi}\operatorname{Span}\{\mathbbm{1},\phi_1,\phi_2\}$.
Set $\Xi_P(\phi):=\{\phi,\partial_{x^1}\phi,\partial_{x^2}\phi\}(P)$ for $P\in\mathbb{R}^2$.
By Theorem~\ref{T1.2}, $\Xi_P$ is an injective map from $\mathcal{Q}(\mathcal{M})$ to $\mathbb{R}^3$.
Since $\dim\{\mathcal{Q}(\mathcal{M})\}=3$, $\Xi_P$ is bijective.
It now follows that $d\phi_1(P)$ and $d\phi_2(P)$ are linearly independent so $\Phi:=(\phi_1,\phi_2)$ is an immersion.
By Theorem~\ref{T1.2}, $\mathcal{Q}(\tilde{\mathcal{M}})=
\operatorname{Span}\{\mathbbm{1},\phi_1,\phi_2\}=\Phi^*\operatorname{Span}\{\mathbbm{1},x^1,x^2\}=\Phi^*\mathcal{Q}({\mathcal{M}}_0^6)$. Consequently,
$\tilde{\mathcal{M}}=\Phi^*{\mathcal{M}}_0^6$ by Theorem~\ref{T1.2}.
The affine geodesics in ${\mathcal{M}}_0^6$ are straight lines and can be written in the form $t u+v$ for $u$ and $v$ in $\mathbb{R}^2$.
Thus the affine geodesics in $\tilde{\mathcal{M}}$ locally take the form $\Phi^{-1}(tu+v)$. Since $\mathcal{M}$ and $\tilde{\mathcal{M}}$
are strongly projectively equivalent, the unparameterized geodesics of $\mathcal{M}$ and $\tilde{\mathcal{M}}$ agree. The desired result now
follows.\qed
\section{The proof of Theorem~\ref{T1.7}}\label{S6}
We divide the proof of Theorem~\ref{T1.7} into 3 cases depending on $\operatorname{Rank}\{\rho\}$ or, equivalently, on
$\dim\{\mathfrak{K}\}$; each is then divided further
depending on the particular family involved.
We use the ansatz of Theorem~\ref{T1.6}. Let $\sigma_{a,b}(t)$ be the
affine geodesic with $\sigma_{a,b}(0)=0$ and $\dot\sigma_{a,b}(0)=(a,b)$.
\subsection{Case 1. The flat geometries $\boldsymbol{{\mathcal{M}}_i^6}$}
These geometries all are locally affine equivalent to the affine plane $(\mathbb{R}^2,\Gamma_0^6)$;
this geodesically complete affine surface provides a local model for each of these geometries.
${\Theta}_i^6$ for $i=1,2,3$ embeds ${\mathcal{M}}_i^6$ in ${\mathcal{M}}_0^6$, ${\Theta}_4^6$ provides
a diffeomorphism between ${\mathcal{M}}_4^6$ and ${\mathcal{M}}_0^6$, and ${\Theta}_5^6$ immerses
${\mathcal{M}}_5^6$ in ${\mathcal{M}}_0^6$. Thus ${\mathcal{M}}_i^6$ is geodesically incomplete for
$i=1,2,3,5$ and ${\mathcal{M}}_4^6$ is geodesically complete.
\subsubsection{$\boldsymbol{{\mathcal{M}}_0^6}$}
$\sigma_{a,b}(t)=(at,bt)$. ${\mathcal{M}}_0^6$ is geodesically complete and
defines the flat affine plane $\mathbb{A}^2$.
\subsubsection{$\boldsymbol{{\mathcal{M}}_1^6}$ }
$\sigma_{a,b}(t)=(\log(1+at),\frac{bt}{1+at})$. ${\mathcal{M}}_1^6$ is geodesically
incomplete; $\sigma_{a,b}(t)$ is defined for all $t\in\mathbb{R}$ if and only if $a=0$.
\subsubsection{$\boldsymbol{{\mathcal{M}}_2^6}$}
$\sigma_{a,b}(t)=(-\log(1-at),\log(1+bt))$.
${\mathcal{M}}_2^6$ is geodesically incomplete; no non-trivial geodesic is defined for all $t\in\mathbb{R}$.
\subsubsection{$\boldsymbol{{\mathcal{M}}_3^6}$}
$\sigma_{a,b}(t)=(at,\log(1+bt))$. $\sigma_{a,b}(t)$ is defined for all $t\in\mathbb{R}$ if and only if $b=0$.
\subsubsection{$\boldsymbol{{\mathcal{M}}_4^6}$} $\sigma_{a,b}(t)=(at-\frac12b^2t^2,bt)$. ${\mathcal{M}}_4^6$ is geodesically complete.
\subsubsection{$\boldsymbol{{\mathcal{M}}_5^6}$}\label{S6.1.6}
$\sigma_{a,b}(t)=\left(\frac12\log((1+at)^2+b^2t^2),\arctan\left(\frac{tb}{1+at}\right)\right)$.
$\sigma_{a,b}(t)$ extends to be defined for all $t\in\mathbb{R}$
if and only if $b\ne0$.
\subsection{Case 2: The geometries $\boldsymbol{{\mathcal{M}}_i^4(\cdot)}$}
For these geometries, the Ricci tensor is a non-zero constant multiple $\lambda$
of $dx^2\otimes dx^2$. Suppose there exists a geodesically complete affine
surface $\tilde{\mathcal{M}}$ which is modeled on ${\mathcal{M}}_i^4(\cdot)$.
Let $\sigma$ be a small piece of a geodesic in ${\mathcal{M}}_i^4(\cdot)$
defined by $\sigma(t)=(x^1(t),x^2(t))$ which can be copied into $\tilde{\mathcal{M}}$. Then $\rho(\dot\sigma,\dot\sigma)(t)=\lambda(\dot x^2(t))^2$
extends, along the corresponding complete geodesic in $\tilde{\mathcal{M}}$, to a real analytic function of $t$
which is defined for all $t$. If we can exhibit a geodesic where $\dot x^2(t)$ is not bounded,
it then follows that ${\mathcal{M}}_i^4(\cdot)$ is essentially geodesically incomplete.
\subsubsection{$\boldsymbol{{\mathcal{M}}_1^4}$} This geometry is essentially geodesically incomplete because
$$\sigma_{a,b}(t)=\left\{\begin{array}{ll}
(-\log (1-\frac{a \log (2 b t+1)}{2 b}),\frac{1}{2} \log (2 b t+1))&\text{ if }b\ne0\\
(-\log(1-at),0)&\text{ if }b=0\end{array}\right\}\,.$$
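For example, taking $a=0$ and $b>0$ in the displayed formula gives $x^2(t)=\frac12\log(2bt+1)$, so $\dot x^2(t)=b(2bt+1)^{-1}$ is unbounded as $t\downarrow-\frac1{2b}$; by the criterion above, no geodesically complete surface can be modeled on ${\mathcal{M}}_1^4$.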
\subsubsection{$\boldsymbol{{\mathcal{M}}_2^4(c)}$}
If $c\ne-\frac12$, then $\sigma_{a,b}(t)$ is given by:
$$
\left\{\begin{array}{ll}
(\log (\frac{b}{a+b})-\log (1-\frac{a (2 b c t+b t+1)^{\frac{1}{2 c+1}}}{a+b}),
\frac{\log (2 b c t+b t+1)}{2 c+1})&\text{if }b\ne0,b\ne-a\\
\frac{1}{1+2c}\left(-\log(1+bt(1+2c)),\log(1+bt(1+2c))\right)&\text{if }b\ne0,b=-a\\
(-\log(1-at),0)&\text{if }b=0
\end{array}\right\}.$$
This geometry is essentially geodesically incomplete. If $c=-\frac12$, then
$$
\sigma_{a,b}(t)=\left\{\begin{array}{ll}
(-\log(1-at),0)&\text{if }b=0\\
(-\log(a(e^{bt}-1)-b)+\log(-b),bt)&\text{if }b<0\\
(-\log(-a(e^{bt}-1)+b)+\log(b),bt)&\text{if }b>0
\end{array}\right\}\,.
$$
This geometry is geodesically incomplete. By Theorem~\ref{T3.1}, there is an affine
embedding of ${\mathcal{M}}_2^4(-\frac12)$ in ${\mathcal{M}}_3^4(-\frac12)$. Since
we shall show presently that ${\mathcal{M}}_3^4(-\frac12)$ is geodesically
complete, the geometry ${\mathcal{M}}_2^4(-\frac12)$ can be geodesically completed.
\subsubsection{$\boldsymbol{{\mathcal{M}}_3^4(c)}$}
If $c\ne-\frac12$, let $\kappa:={1+2c}$. This geometry is essentially geodesically incomplete since
$$
\sigma_{a,b}(t)=\left\{\begin{array}{ll}
{\left(\frac ab((1+b t\kappa)^{{1}/{\kappa}}-1),\kappa^{-1}\log(1+b t\kappa)\right)}&\text{ if }b\ne0\\
(at,0)&\text{ if }b=0
\end{array}\right\}\,.
$$
If $c=-\frac12$, then this geometry is geodesically complete since
$$
\sigma_{a,b}(t)=\left\{\begin{array}{ll}
(\frac ab(e^{bt}-1),bt)&\text{ if }b\ne0\\
(at,0)&\text{ if }b=0
\end{array}\right\}\,.
$$
\subsubsection{$\boldsymbol{{\mathcal{M}}_4^4(c)}$} This geometry is essentially geodesically incomplete
since
$$\textstyle\sigma_{a,b}(t)=\left\{\begin{array}{ll}
(at,0)&\text{ if }b=0\\
(-\frac1{8b}\log(1+2bt)(-4a+bc\log(1+2bt)),\frac12\log(1+2bt))&\text{ if }b\ne0\end{array}\right\}\,.
$$
\subsubsection{$\boldsymbol{{\mathcal{M}}_5^4(c)}$} This has an affine embedding in $\tilde{{\mathcal{M}}}_5^4(c)$
which is not surjective and hence ${\mathcal{M}}_5^4(c)$ is geodesically incomplete.
\subsubsection{$\boldsymbol{\tilde{{\mathcal{M}}}_5^4(c)}$} We have $\rho=(1+c^2)dx^2\otimes dx^2$.
If $c=0$, then
$$
\sigma_{a,b}(t)=\left\{\begin{array}{ll}(at,0)&\text{ if }b=0\\
(\frac ab\sin(bt),bt)&\text{ if }b\ne0\end{array}\right\}\,.
$$
This geometry is geodesically complete. If $c\ne0$, then this geometry, and hence the geometry ${\mathcal{M}}_5^4(c)$,
is essentially geodesically incomplete because
$$
\sigma_{a,b}(t)=\left\{\begin{array}{ll}
(at,0)&\text{ if }b=0\\
\left(\frac ab(1+2bct)^{1/2}\sin(\frac{\log(1+2bct)}{2c}),\frac{\log(1+2bct)}{2c}\right)&\text{ if }b\ne0\end{array}\right\}\,.
$$
If $b\ne0$, then $\dot x^2$ does not remain bounded for all $t$. Thus no geodesic with $b\ne0$ can be completed. Since ${\mathcal{M}}_5^4(c)$
embeds as an open subset of $\tilde{{\mathcal{M}}}_5^4(c)$, this shows ${\mathcal{M}}_5^4(c)$ also is essentially geodesically incomplete for $c\ne0$.
\subsection{Case 3. The geometries $\boldsymbol{{\mathcal{M}}_i^2(\cdot)}$}
Suppose that $\tilde{\mathcal{M}}$ is a simply connected complete affine surface which is locally modeled on
${\mathcal{M}}_i^2(\cdot)$. Since
$\dim\{\mathfrak{K}(\tilde{\mathcal{M}})\}=2$, $\partial_{x^1}$ and
$\partial_{x^2}$ extend as Killing vector fields to all of $\tilde{\mathcal{M}}$. This shows that if $\gamma$ is a geodesic in
${\mathcal{M}}_i^2(\cdot)$, then $\rho(\dot\gamma,\partial_{x^i})$ is a bounded function on $\gamma$. We use this criterion
in what follows. In all cases, attempting to find the most general geodesic resulted in an ODE that we could
not solve explicitly.
\subsubsection{$\boldsymbol{{\mathcal{M}}_1^2(a_1,a_2)}$}
We obtain 3 possible geodesics $\sigma_i(t)=\log(t)\vec\alpha_i$ where
$$
\vec\alpha_1=\frac{(1,1)}{1+a_1+a_2},\quad
\vec\alpha_2=\frac{(1-a_2,a_1)}{1+a_1-a_2},\quad
\vec\alpha_3=\frac{(a_2,1-a_1)}{1-a_1+a_2}.
$$
The first geodesic is defined for $a_1+a_2+1\ne0$, the second for $a_1-a_2+1\ne0$, and the third for $-a_1+a_2+1\ne0$. At least
two geodesics are defined for any given geometry. We have $\dot\sigma=\frac1t(c,d)$ for $(0,0)\ne(c,d)\in\mathbb{R}^2$.
Thus this geometry is essentially geodesically incomplete.
\subsubsection{$\boldsymbol{{\mathcal{M}}_2^2(a_1,a_2)}$} Suppose $a_1\ne-1$.
We have a geodesic $\sigma(t)=\log(t)(\frac1{1+a_1},0)$. We conclude the geometry is essentially geodesically incomplete.
Suppose $a_1=-1$. We adapt an argument of Bromberg and Medina~\cite{B05}.
The geodesic equations become $\dot u=v(2au-\textstyle\frac12(1+a^2)v)$ and $\dot v=v(2u)$
or in matrix form:
$$
A\left(\begin{array}{c}u\\v\end{array}\right)
=v\left(\begin{array}{c}\dot u\\ \dot v\end{array}\right)\text{ for }
A:=\left(\begin{array}{cc}-2a&\frac12(1+a^2)\\-2&0\end{array}\right)\,.
$$
If $v(t_0)=0$ for any point in the parameter range, then $u(t)=u(t_0)$ and $v(t)=0$ solve this ODE.
Thus we may suppose without loss of generality $v$ does not change sign.
Introduce a new parameter $\tau$ so $\partial_\tau t=v(t)$ and let $U(\tau)=u(t(\tau))$ and $V(\tau)=v(t(\tau))$. We have
\begin{equation}\label{E6.a}
\partial_\tau\left(\begin{array}{c}U\\V\end{array}\right)=A\left(\begin{array}{c}U\\V\end{array}\right)\,.
\end{equation}
The eigenvalues of $A$ are $-a\pm\sqrt{-1}$. We solve Equation~(\ref{E6.a}) to see
$$
\left(\begin{array}{cc}U\\V\end{array}\right)=e^{-\tau a}\left\{\cos(\tau)\left(\begin{array}{cc}c_1\\c_2\end{array}\right)
+\sin(\tau)\left(\begin{array}{cc}-ac_1+\frac12(1+a^2)c_2\\-2c_1+ac_2\end{array}\right)\right\}\,.
$$
Thus $V=e^{-\tau a}(c_2\cos(\tau)+(-2c_1+ac_2)\sin(\tau))$. Since $V$ never vanishes, $\tau$ is restricted to
a parameter range of length at most $\pi$. It now follows that the original geodesic is defined for all $t\in\mathbb{R}$.
\subsubsection{$\boldsymbol{{\mathcal{M}}_3^2(c)}$ \bf and $\boldsymbol{{\mathcal{M}}_4^2(\pm)}$} We have
$\sigma_{a,0}=(\frac12\log(1+2at),0)$
so these geometries are essentially geodesically incomplete.
\section{The classification of flat Type~$\mathcal{B}$ geometries}\label{S7}
This section is devoted to the proof of Theorem~\ref{T1.6}~(4); by Theorem~\ref{T1.2}, it suffices to
classify the relevant solution spaces of the quasi-Einstein equation. Let $\mathcal{Q}=\mathcal{Q}(\mathcal{N})$
where $\mathcal{N}$ is a flat Type~$\mathcal{B}$ structure on $\mathbb{R}^+\times\mathbb{R}$. We work
modulo the action of the shear group $(x^1,x^2)\rightarrow(x^1,ax^1+bx^2)$ for $b\ne0$. Let
$\Phi=(\phi^1,\phi^2)$ be a local affine map from $\mathcal{N}$ to $\mathcal{M}_0^6$. We then have
$$
\mathcal{Q}=\Phi^*\operatorname{Span}\{\mathbbm{1},x^1,x^2\}=\operatorname{Span}\{\mathbbm{1},\phi^1,\phi^2\}\,.
$$
Since $\Phi$ is a local diffeomorphism, $\partial_{x^1}\mathcal{Q}\ne\{0\}$ and $\partial_{x^2}\mathcal{Q}\ne\{0\}$.
This rules out certain possibilities.
The vector fields $X:=x^1\partial_{x^1}+x^2\partial_{x^2}$ and $Y:=\partial_{x^2}$ are
Killing vector fields and therefore preserve $\mathcal{Q}$; the action of the Lie algebra $\operatorname{Span}\{X,Y\}$ on
$\mathcal{Q}$ is crucial. We complexify and
set $\mathcal{Q}_{\mathbb{C}}:=\mathcal{Q}\otimes_{\mathbb{R}}\mathbb{C}$;
elements of $\mathcal{Q}$ may be obtained by taking the
real and imaginary parts of complex solutions.
Decompose $\mathcal{Q}_{\mathbb{C}}=\oplus_\lambda\mathcal{Q}_\lambda$ as the direct sum of the generalized
eigenspaces of $X$ where
$$
\mathcal{Q}_\lambda:=\{f\in\mathcal{Q}_{\mathbb{C}}:(X-\lambda)^3f=0\}\,.
$$
The commutation relation $[X,\partial_{x^2}]=-\partial_{x^2}$ implies that
$$
\partial_{x^2}\mathcal{Q}_\lambda\subset\mathcal{Q}_{\lambda-1}\,.
$$
Choose $\lambda$ and $f\in\mathcal{Q}_\lambda$ so $\partial_{x^2}f\ne0$.
This implies $\mathcal{Q}_{\lambda-1}\ne0$. Thus, for dimensional reasons, $\dim\{\mathcal{Q}_\mu\}\le2$ for all $\mu$ and consequently
$$
\mathcal{Q}_\mu=\{f\in\mathcal{Q}_{\mathbb{C}}:(X-\mu)^2f=0\}\,.
$$
Since $\dim\{\mathcal{Q}\}=3$, $\{\mathcal{Q}_\lambda,\mathcal{Q}_{\lambda-1},\mathcal{Q}_{\lambda-2},\mathcal{Q}_{\lambda-3}\}$
can not all be non-trivial and thus, in particular, $(\partial_{x^2})^3f=0$ for any $f\in\mathcal{Q}_\lambda$. This implies any element of $\mathcal{Q}$
is a polynomial of degree at most 2 in $x^2$ with coefficients which are smooth functions of $x^1$.
If $(X-\lambda)f=0$,
then $f$ is a sum of elements of the form $(x^1)^{\lambda-k}(x^2)^k$ for $k\le 2$. If $(X-\lambda)^2f=0$, then $f$ is a sum of
elements of the form $(x^1)^{\lambda-k}(x^2)^k$ and $(x^1)^{\lambda-k}(x^2)^k\log(x^1)$ for $k\le2$. Since
$\dim\{\mathcal{Q}_\lambda\}\le2$, this is
the most complicated Jordan normal form possible.
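These assertions follow from the elementary identities
$$
X\big((x^1)^{\lambda-k}(x^2)^k\big)=\lambda\,(x^1)^{\lambda-k}(x^2)^k,\qquad
X\big((x^1)^{\lambda-k}(x^2)^k\log(x^1)\big)=\lambda\,(x^1)^{\lambda-k}(x^2)^k\log(x^1)+(x^1)^{\lambda-k}(x^2)^k\,,
$$
so monomials of total degree $\lambda$ are eigenvectors of $X$, while the terms involving $\log(x^1)$ are annihilated by $(X-\lambda)^2$ but not by $(X-\lambda)$.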
In principle, the parameter $\lambda$ could be complex. It will follow from our subsequent analysis that this is not the case.
We adopt the
notation of Definition~\ref{D1.8}.
\subsection*{Case 1}
Suppose first that there exists $f\in\mathcal{Q}$ which has degree at least 2 in $x^2$. Let $f\in\mathcal{Q}_\lambda$
satisfy $\partial_{x^2}^2f\ne0$. Then $\{f,\partial_{x^2}f,\partial_{x^2}^2f\}$ is a basis for $\mathcal{Q}$. This implies $\partial_{x^2}^2f=c\mathbbm{1}$
so $\lambda=2$. Since $f\in\mathcal{Q}_2$, $\partial_{x^2} f\in\mathcal{Q}_1$, and $\mathbbm{1}\in\mathcal{Q}_0$,
$\dim\{\mathcal{Q}_\mu\}\le1$ for all $\mu$ and there are no log terms. Thus
$f=(x^2)^2+ax^1x^2+b(x^1)^2$. We may replace $x^2$ by $\tilde x^2=x^2+\frac12ax^1$ to ensure $a=0$. Since $\mathcal{Q}=\operatorname{Span}\{f,2x^2,\mathbbm{1}\}$ and since
$\partial_{x^1}\{\mathcal{Q}\}\ne0$, $b\ne0$. Rescale $x^2$ and renormalize $f$ to assume that $f=(x^2)^2\pm(x^1)^2$ and obtain
$\mathcal{N}_1^6(\pm)$.
\medbreak We assume henceforth that every element of $\mathcal{Q}$ is at most linear in $x^2$.
Since $\partial_{x^2}\{\mathcal{Q}\}\ne\{0\}$, we can choose $\lambda$ so that
$f=a_0(x^1)x^2+a_1(x^1)\in\mathcal{Q}_\lambda$ for $a_0(x^1)\ne0$. This gives rise to the following possibilities.
\subsection*{Case 2} Suppose $\lambda\notin\{0,1\}$. Then $\mathcal{Q}_\lambda$,
$\mathcal{Q}_{\lambda-1}$, and $\mathcal{Q}_0$ are non-trivial and distinct; hence each is 1-dimensional and
$\mathcal{Q}=\mathcal{Q}_\lambda\oplus\mathcal{Q}_{\lambda-1}\oplus\mathcal{Q}_0$. If $\lambda$ is complex,
then $\mathcal{Q}_{\bar\lambda}$ is non-trivial and is not contained in $\mathcal{Q}_\lambda\oplus\mathcal{Q}_{\lambda-1}\oplus\mathcal{Q}_0$
which is impossible. Thus $\lambda$ is real,
as noted above. Since $\dim\{\mathcal{Q}_\lambda\}=1$, there are no $\log(x^1)$ terms and $f=(x^1)^{\lambda-1}x^2+(x^1)^{\lambda}c$.
Replacing $x^2$ by $x^2-cx^1$ then permits us to assume $f=(x^1)^{\lambda-1}x^2$ so
$\mathcal{Q}=\operatorname{Span}\{\mathbbm{1},(x^1)^{\lambda-1},(x^1)^{\lambda-1}x^2\}$ for $\lambda\ne0,1$.
This is $\mathcal{N}_2^6(c)$ for $c=\lambda-1\notin\{-1,0\}$.
We will deal with $\mathcal{N}_2^6(-1)$ subsequently.
\subsection*{Case 3} Suppose $\lambda=0$ so that $f=a_0(x^1)x^2+a_1(x^1)\in\mathcal{Q}_0$.
We then have $a_0(x^1)=\partial_{x^2}f\in\mathcal{Q}_{-1}$.
We also have $\mathbbm{1}\in\mathcal{Q}_0$.
Thus $\mathcal{Q}_{-1}$ is 1-dimensional so, after rescaling, we may take $a_0(x^1)=(x^1)^{-1}$ and
consequently $f=\frac{x^2}{x^1}+\varepsilon\log(x^1)$.
If $\varepsilon=0$, we obtain $\mathcal{N}_2^6(-1)$. If $\varepsilon\ne0$, we can rescale to obtain $\mathcal{N}_3^6$.
\subsection*{Case 4} Suppose $\lambda=1$ so $f=a_0(x^1)x^2+a_1(x^1)\in\mathcal{Q}_1$. Express
$$
f=x^2+x^2\alpha\log(x^1)+\beta x^1+\gamma x^1\log(x^1)\,.
$$
If $\alpha\ne0$, then $X$ has non-trivial Jordan normal form on $\mathcal{Q}_1$ so $\dim\{\mathcal{Q}_1\}\ge2$.
Furthermore $\partial_{x^2}f=\alpha\log(x^1)\in\mathcal{Q}_0$; since $\mathbbm{1}\in\mathcal{Q}_0$,
$\dim\{\mathcal{Q}_0\}\ge2$. This is false.
Thus $\alpha=0$. By replacing $x^2$ by $x^2+\beta x^1$, we may assume
$\beta=0$ and obtain
$f=x^2+\gamma x^1\log(x^1)$. If $\gamma\ne0$, then applying $(X-1)$ we see $x^1\in\mathcal{Q}_1$; this gives, after rescaling,
$\mathcal{N}_4^6$. Thus we may assume
$\gamma=0$ so $x^2\in\mathcal{Q}_1$. If $\dim\{\mathcal{Q}_1\}=2$, we obtain $\mathcal{N}_0^6$.
If $\dim\{\mathcal{Q}_0\}=2$, then $\log(x^1)\in\mathcal{Q}_0$ and we obtain $\mathcal{N}_5^6$.
Otherwise, we obtain $\mathcal{N}_6^6(c)$ for $c\ne0,-1$. This completes the classification of the
flat Type~$\mathcal{B}$ structures.\qed
\section{Affine embeddings and immersions of Type~$\mathcal{B}$ structures}
Define
$$
\begin{array}{lll}
\Psi_0^6(x^1,x^2)=(x^1,x^2),&\Psi_1^6(\pm1)(x^1,x^2)=(x^2,(x^1)^2\pm(x^2)^2),\\
\Psi_2^6(c)(x^1,x^2)=((x^1)^c,(x^1)^cx^2),&\Psi_3^6(x^1,x^2)=(\frac1{x^1},\frac{x^2}{x^1}+\log(x^1)),\\
\Psi_4^6(x^1,x^2)=(x^1,x^2+x^1\log(x^1)),&\Psi_5^6(x^1,x^2)=(\log(x^1),x^2),\\
\Psi_6^6(c)(x^1,x^2)=((x^1)^{1+c},x^2),&\Psi_1^4(x^1,x^2)=(x^2+x^1\log(x^1),\log(x^1)),\\
\Psi_2^4(\kappa,\theta)(x^1,x^2)=(x^2,\theta\log(x^1)),&\Psi_3^4(\kappa)(x^1,x^2)=(x^2,\kappa\log(x^1)).
\end{array}$$
\begin{theorem}\label{T8.1}
\ \begin{enumerate}
\item $\Psi_i^6(\cdot)$ is an affine embedding of $\mathcal{N}_i^6({\cdot})$ in ${\mathcal{M}}_0^6$ for any $i$.
\item $\Psi_1^4$ is an affine isomorphism from $\mathcal{N}_1^4(\kappa)$ to ${\mathcal{M}}_3^4(\kappa)$.
\item $\Psi_2^4(\kappa,\theta)$ is an affine isomorphism from $\mathcal{N}_2^4(\kappa,\theta)$ to
${\mathcal{M}}_3^4(\frac\kappa\theta)$.
\item $\Psi_3^4(\kappa)$ is an affine isomorphism from $\mathcal{N}_3^4(\kappa)$ to ${\mathcal{M}}_4^4(0)$.
\end{enumerate}
\end{theorem}
\begin{proof}
These geometries are all
strongly projectively flat and the diffeomorphisms in question intertwine the solution spaces $\mathcal{Q}(\cdot)$.
Thus Theorem~\ref{T8.1} follows from Theorem~\ref{T1.2}.
\end{proof}
\section{The proof of Theorem~\ref{T1.10}}\label{S8}
This section is devoted to the proof of Theorem~\ref{T1.10}. We apply Theorem~\ref{T8.1}.
Let $\mathcal{N}$ be a Type~$\mathcal{B}$ structure on $\mathbb{R}^+\times\mathbb{R}$. We distinguish cases.
\subsection*{Case 1} If $\dim\{\mathfrak{K}\}=6$, then
$\mathcal{N}$ is linearly equivalent to $\mathcal{N}_i^6$. The map $\Psi_i^6$ is an affine embedding of $\mathcal{N}_i^6$ in
$\mathbb{R}^2$ with the flat structure. If $i\ne5$, the embedding is not surjective and $\mathcal{N}_i^6$ is
affine Killing incomplete;
if $i=5$, then $\Psi_i^6$ is an isomorphism so $\mathcal{N}_5^6$ is affine Killing complete.
\subsection*{Case 2} If $\dim\{\mathfrak{K}(\mathcal{N})\}=4$, then $\mathcal{N}$ is linearly equivalent to
$\mathcal{N}_i^4(\cdot)$. The maps $\Psi_i^4$ are affine
isomorphisms of $\mathcal{N}_i^4(\cdot)$ with ${\mathcal{M}}_3^4(\cdot)$ or ${\mathcal{M}}_4^4(0)$; these are affine
Killing complete by Theorem~\ref{T1.5}.
\subsection*{Case 3} $\dim\{\mathfrak{K}(\mathcal{N})\}=3$. There exists $\sigma\in\{0,\pm1\}$ so that
$$
\mathfrak{K}(\mathcal{N})=\operatorname{Span}\{X:=2x^1x^2\partial_{x^1}+((x^2)^2+\sigma(x^1)^2)\partial_{x^2},
x^1\partial_{x^1}+x^2\partial_{x^2},\partial_{x^2}\}\,.
$$
\subsection*{Case 3a} $\mathcal{N}_1^3(\pm)$ or $\mathcal{N}_2^3(c)$. We have
$\sigma=0$. The curve $\xi(t)=(x^1(t),x^2(t))$ is a flow curve for $X$ precisely when
$\dot x^1(t)=2x^1(t)x^2(t)$ and $\dot x^2(t)=x^2(t)^2$. We take $\xi(t)=(t^{-2},-t^{-1})$ to solve these equations
and to see these structures are affine Killing incomplete.
\subsection*{Case 3b} $\mathcal{N}_3^3$. We have $\sigma=1$. The curve $\xi(t)=(x^1(t),x^2(t))$
is a flow curve for $X$ precisely when
$\dot x^1=2x^1x^2$ and $\dot x^2=(x^2)^2+(x^1)^2$. We solve these equations by taking
$x^1(t)=-\frac12t^{-1}$ and $x^2(t)=-\frac12t^{-1}$.
Consequently, this structure is affine Killing incomplete.
This structure is the Lorentzian-hyperbolic plane; it
isometrically embeds in the pseudo-sphere which is affine complete.
We refer to \cite{APG18} for a further discussion of these two geometries and to \cite{G17} for a discussion of the pseudo-group
of isometries.
\subsection*{Case 3c} $\mathcal{N}_4^3$. We have $\sigma=-1$. This is the hyperbolic plane and is
affine Killing complete.
\subsection*{Case 4} $\dim\{\mathfrak{K}(\mathcal{N})\}=2$.
${\mathfrak{K}}(\mathcal{N})=\operatorname{Span}\{x^1\partial_{x^1}+x^2\partial_{x^2},\partial_{x^2}\}$
and $\mathcal{N}$ is affine Killing complete.
\subsection*{Acknowledgments} We are grateful for useful comments by D. D'Ascanio and P. Pisani concerning these matters.
| {
"timestamp": "2018-11-14T02:09:45",
"yymm": "1811",
"arxiv_id": "1811.05197",
"language": "en",
"url": "https://arxiv.org/abs/1811.05197",
"abstract": "An affine manifold is said to be geodesically complete if all affine geodesics extend for all time. It is said to be affine Killing complete if the integral curves for any affine Killing vector field extend for all time. We use the solution space of the quasi-Einstein equation to examine these concepts in the setting of homogeneous affine surfaces.",
"subjects": "Differential Geometry (math.DG); Analysis of PDEs (math.AP)",
"title": "Affine Killing complete and geodesically complete homogeneous affine surfaces",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9802808724687408,
"lm_q2_score": 0.7217432182679956,
"lm_q1q2_score": 0.7075110717021476
} |
https://arxiv.org/abs/1205.5172 | Composition operator on model spaces | We study compactness property of composition operator acting from a model space generated by an inner function to the Hardy space. | \section{Introduction}
Let $\DD$ be the unit disk in the complex plane. Given a holomorphic function $\vf: \DD \to \DD$, denote by
\[
C_\vf: f \mapsto f\circ \vf
\]
the composition operator defined on holomorphic functions in $\DD$. This operator is bounded on the Hardy space
$H^2(\DD)$ (see e.g. \cite{S}). One of the intensively studied questions is when $C_\vf$ is a compact operator on various spaces of analytic functions.
We refer the reader to the monographs \cite{Shbook, CMc} for the history and basic results on composition operators.
Loosely speaking $C_\vf$ is compact on $H^2(\DD)$ if $\vf(z)$ does not approach the unit circle $\TT$ too fast as $z\to \TT$.
J.~Shapiro \cite{S} quantified this idea by using the Nevanlinna counting function
\[
N_\vf(w)= \sum_{\vf(z)=w} -\log|z|.
\]
He proved in particular that $C_\vf$ is compact on $H^2(\DD)$ if and only if
\beq
\label{compactness}
\lim_{|w|\rightarrow 1-}N_\vf(w)/(-\log|w|)=0.
\eeq
The basic tool in his argument is the Stanton formula
\beq
\label{normcomposition}
\|C_\vf f\|^2= 2 \int_\DD |f'(z)|^2 N_\vf(z) dA(z) + |f(\vf(0))|^2,
\eeq
where $A$ is the normalized area measure. It is obtained from the identity
\beq
\label{stanton}
\|f\|^2=2\int_\DD |f'(z)|^2\log \frac 1{|z|}dA(z)+|f(0)|^2, \ f\in H^2(\DD),
\eeq
by substituting $f\circ\vf$ in place of $f$.
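Indeed, since $\vf$ need not be univalent, one groups the preimages of each point: for any positive measurable function $u$,
\[
\int_\DD u(\vf(z))|\vf'(z)|^2\log\frac1{|z|}\,dA(z)=\int_\DD u(w)N_\vf(w)\,dA(w),
\]
and applying this with $u=|f'|^2$ gives (\ref{normcomposition}).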
Another way to describe the compactness property of $C_\vf$ is related to the Aleksandrov-Clark measures of $\vf$. These are the positive measures
$\mu_\alpha$ on $\TT$ defined by the relation
\[
\Re \frac {\alpha+ \vf(z)}{\alpha-\vf(z)}
= \int_\TT P_z d\mu_\alpha,
\]
where $P_z$ is the Poisson kernel, $\alpha\in\TT$. We refer the reader to the surveys \cite{PS, Saks} for more details. In \cite{Sar} D. Sarason showed how $C_\vf$ can be treated as an integral operator on the spaces $L^1(\TT)$ and $M(\TT)$ and proved that $C_\vf$ is compact on these spaces if and only if each $\mu_\alpha$ is absolutely continuous. Due to \cite{SSa}, it is further equivalent to $C_\vf$ being compact on $H^2(\DD)$ as well as on other Hardy spaces $H^p(\DD)$, see also \cite{CM}.
In this article we study the compactness of the operator $C_\vf : K_\vt \to H^2(\DD)$, where $\vt$ is an inner function in $\DD$ and $K_\vt= H^2(\DD)\ominus \vt H^2(\DD)$ is the corresponding model space. Consider the canonical factorization of $\vt$
\[
\vt(z)=B_\Lambda(z) \exp \left ( \int_\TT \frac {\xi+z}{\xi-z} d\omega (\xi) \right ),
\]
where $\Lambda$ is the zero set of $\vt$, $B_\Lambda$ is the corresponding Blaschke product, and $\omega$
is a singular measure on $\TT$. Functions in $K_\vt$ admit analytic continuation through $\TT \setminus \Sigma(\vt)$,
where
\[
\Sigma(\vt) = \left (\TT\cap {\rm Clos}(\Lambda) \right ) \cup {\rm supp}(\omega)
\]
is the {\em spectrum} of $\vt$ (see \cite{Nik}, Lecture 3).
Therefore the compactness property of $C_\vf$ does not suffer as the values of $\vf$ approach points in $\TT\setminus \Sigma(\vt)$. We quantify this idea below and give a condition that is necessary and sufficient for the compactness of $C_\vf:K_\vt\to H^2(\DD)$.
\subsection*{Acknowledgments} This work was started when the authors visited the Mathematics Department of the University of California, Berkeley. It is our pleasure to thank the Department for the hospitality
and Donald Sarason for useful discussions.
The authors also thank Anton Baranov for his comments on the preliminary version of this note and for showing us the inequality in \cite{C2} that is crucial for the proof of Theorem 1.
\section{Nevanlinna counting function}
In this section we give a counterpart of the condition (\ref{compactness}) for the operator $C_\vf:K_\vt\rightarrow H^2(\DD)$. The proof follows the ideas of \cite{S}.
The main new ingredients are estimates for the reproducing kernels and their derivatives given in Lemma \ref{l1} below.
Let $\kappa_w$ be the reproducing kernel for $K_\vt$,
$$
\kappa_w(\zeta)= \frac{1-{\overline{ \vt(w)} } \vt(\zeta)} {1-\bar w \zeta}, \quad
\|\kappa_w\|^2 = \frac{1-|\vt(w)|^2}{1-|w|^2},
$$
and let $\tilde{\kappa}_w$ be its normalized version
\[
\tilde{\kappa}_w(\zeta)= \left ( \frac{1-|w|^2} {1-|\vt(w)|^2} \right ) ^{1/2}
\frac{1-{\overline{ \vt(w)}} \vt(\zeta)}{1-\bar w \zeta}.
\]
By
\beq
\label{kernel}
k_w(\zeta)= \frac 1{1-\bar{w}\zeta}, \quad \tilde{k}_w(\zeta)= \frac{ (1-|w|^2)^{1/2}}{1-\bar{w}\zeta}
\eeq
we denote the reproducing kernel for $H^2$ and its normalized version.
\begin{lemma} \label{l1}
Let $\{w_n\}\subset \DD$, $|w_n|\to 1$ be such that
\beq
\label{thetaestimate}
|\vt(w_n)|<a,
\eeq
for some $a\in (0,1)$. Then\\
(i) $\tilde{\kappa}_{w_n}\xrightarrow{w*} 0$ as $n\to \infty$; \\
(ii) there exist $\eps >0$ , $c>0$ and $n_0$ such that
\beq
\label{estimatefrombelow}
| \kappa'_{w_n}(\zeta)|>\frac c {(1-|w_n|^2)^2}, \quad \zeta \in D_{\eps}(w_n)
\eeq
holds for any $n>n_0$, where $D_\eps(w)= \{\zeta; |\zeta-w|<\eps |1-\bar{\zeta}w|\}$ is a hyperbolic disk with center at $w$.
\end{lemma}
\begin{proof}
(i) It suffices to show that
\[
\frac{(1-|w_n|^2)^{1/2} }{ 1-\bar {w}_n \zeta} (1-\overline{ \vt(w_n)} \vt(\zeta)) \xrightarrow{w*} 0 \ {\mbox{ in} }\ L^2(\TT)
\ {\mbox{as}}\ \ |w_n|\to 1 .
\]
This in turn follows from the known fact that the normalized reproducing kernels $\tilde{k}_{w_n}$ for the Hardy space $H^2(\DD)$ tend weakly to 0 as $|w_n|\rightarrow 1$, see e.g. \cite{S}.
(ii) We start with the following well-known estimate
\beq
\label{wellknown}
|\vt'(\zeta)| \leq \frac {1-|\vt(\zeta)|^2}{1-|\zeta|^2}, \quad \zeta \in \DD.
\eeq
Together with (\ref{thetaestimate}) it readily yields
\beq
\label{nice}
|\vt(\zeta)|< b,\quad \zeta \in\cup_n D_\eps(w_n),
\eeq
for some $b<1$ and $\eps >0$.
We claim now that for sufficiently large $n_0$
\beq
\label{central}
|\kappa'_{\zeta}(\zeta)|> \frac {{\rm const}} {(1-|\zeta|^2)^2}, \quad \zeta\in\cup_{n>n_0} D_\eps(w_n).
\eeq
Indeed,
\[
\kappa_\zeta'(\zeta)= -\frac{\vt'(\zeta)\overline{\vt(\zeta)}}{1-|\zeta|^2} + \bar{\zeta} \frac {1-|\vt(\zeta)|^2}{(1-|\zeta|^2)^2}= A_1+A_2.
\]
It follows from \eqref{nice} that $|A_2|> c (1-|\zeta|^2)^{-2}$ for some $c>0$, and in order to prove \eqref{central}
it suffices to show that
\[
|A_1|<q|A_2| \quad \mbox{for some} \ q\in (0,1),
\]
when $\zeta\in\cup_{n>n_0}D_\eps(w_n)$. The relation \eqref{wellknown} yields
\[
|A_1|\le |\vt(\zeta)| \frac {1-|\vt(\zeta)|^2}{(1-|\zeta|^2)^2}<\frac{b}{|\zeta|}|A_2|,\quad \zeta\in\cup_n D_\eps(w_n).
\]
Since $b<1$ and $\inf\{|\zeta|: \zeta\in\cup_{n>m} D_{\eps}(w_n)\}\to 1$ as $m\to \infty$, the required estimate follows.
The inequality \eqref{central} proves (\ref{estimatefrombelow}) for the special case $\zeta=w_n$. In order to complete the proof consider the function
\[
g(w,\zeta) = \overline{\kappa'_w(\zeta)} = - \frac{\overline{\vt'(\zeta)}\vt(w)}{1-\bar{\zeta}w} + w \frac{1-\overline{\vt(\zeta)}\vt(w)}{(1-\bar{\zeta} w)^2}.
\]
We have
$|g(w_n,\zeta)|= |\kappa'_{w_n}(\zeta)|.$
On the other hand
\beq
\label{lagrange}
|g(w_n,\zeta)-g(\zeta,\zeta)|<|g'(\tilde{w}, \zeta)||\zeta-w_n|,
\eeq
for some point $\tilde{w}\in [\zeta,w_n]$, where the derivative is taken with respect to the first variable. A straightforward estimate shows
\[
|g'(\tilde{w}, \zeta)| < \frac {\mbox{const}}{(1-|w_n|)^3}, \ \tilde{w}, \zeta \in D_\eps(w_n),
\]
the constant being independent of $n$. Now, given any $\eta >0$ we can choose $\eps'<\eps$ such that
the right-hand side in \eqref{lagrange} does not exceed $\eta (1-|\zeta|^2)^{-2}$ when $\zeta\in D_{\eps'}(w_n)$.
Taking $\eta$ sufficiently small we obtain (\ref{estimatefrombelow}).
\end{proof}
In what follows we assume for simplicity that $\vf(0)=0$.
\begin{theorem}\label{th:1} The following statements are equivalent\\
(C) $ C_\vf:K_\vt \to H^2$ is a compact operator.\\
(N) The Nevanlinna counting function of $\vf$ satisfies
\beq
\label{basic}
N_\vf(w) \frac {1-|\vt(w)|^2}{1- |w|^2} \to 0 \ {\rm as} \ |w|\nearrow 1.
\eeq
\end{theorem}
\begin{proof}[Proof (N) $\Rightarrow$ (C)]
Since $N_\vf(w)(1-|w|^2)^{-1}$ and $1-|\vt(w)|^2$ are bounded, the condition (N) means that for any $a<1$
\[
\lim_{|\vt(w)|<a, |w|\rightarrow 1-} N_\vf(w)(1-|w|^2)^{-1}=0.\]
In particular, for any $p>0$
\[
N_\vf(w) \frac {(1-|\vt(w)|)^p}{1- |w|} \to 0 \ {\rm as} \ |w|\nearrow 1.
\]
We use the following inequality, see \cite[page 187]{C2} and \cite{ACS},
\begin{equation}
\label{cohn}
\|f\|^2 \ge C_p \int_\DD |f'(z)|^2 \frac{1-|z|}{(1-|\vt(z)|)^p}dA(z)+ |f(0)|^2 ,\quad f \in K_\vt,
\end{equation}
which is valid for some $p \in (0,1)$.
In our setting this formula replaces \eqref{stanton}. We follow the argument of J.~Shapiro;
a similar argument for compactness of the composition operator in some weighted spaces of analytic functions can be found in \cite[Ch. 3.2]{CMc}.
Let $K^{(n)}_\vt=\{ f\in K_\vt; f \ \mbox{has zero of order $n$ at the origin} \}$, and let
$\Pi^{(n)}:K_\vt \to K^{(n)}_\vt$ be the corresponding orthogonal projection.
We will prove that
\[
\|C_\vf \Pi^{(n)}\|_{K_\vt \to H^2} \to 0, \quad n\to \infty.
\]
Thus $C_\vf$ is compact as it can be approximated by the finite-rank operators $C_\vf (I-\Pi^{(n)}) $.
Indeed, given $f\in K_\vt$, $\|f\|=1$, denote $g_n= \Pi^{(n)}f$. We have $\|g_n\|\leq 1$ and, for each $R<1$, $\eps >0$ we can choose
$ n(\eps,R)$ independent of $f$ and such that
\[
|g_n(w)|< \eps, \ |g'_n(w)| < \eps, \ \mbox{for all} \ n>n(\eps, R), \ \mbox{and} \ |w|<R.
\]
It follows from \eqref{cohn} that
\[
\int_\DD |g_n'(z)|^2 \frac{1-|z|}{(1-|\vt(z)|)^p}dA(z) < C,
\]
with $C$ independent of $f$, $n$. Next, by (\ref{normcomposition}) we have
\begin{multline*}
\|C_\vf \Pi^{(n)}f\|^2 =\int_\DD |g_n'(z)|^2 N_\vf(z) dA(z) \leq \int_{|z|<R} + \int_{R<|z|<1} \leq \\
\max_{|z|<R}\left \{ |g'_n(z)|^2 \right \} \int_{|z|<R} N_\vf(z) dA(z) + \\
\max_{R<|z|<1} \left \{ N_\vf(z) \frac{(1-|\vt(z)|)^p}{1-|z|} \right \}
\underbrace { \int_{R<|z|<1}|g'_n(z)|^2 \frac{1-|z|}{(1-|\vt(z)|)^p} dA(z) }_
{\mbox {\tiny{this is less than }} C}.
\end{multline*}
Choosing first $R$ such that the second summand is small, and then $n$ large enough to provide smallness of the
first summand we can make the whole expression arbitrarily small for all $f\in K_\vt$, $\|f\|=1$.
\end{proof}
\begin{proof}[ Proof (C) $\Rightarrow$ (N)] Assume that $C_\vf$ is compact but (\ref{basic}) does not hold. Then there exists a sequence $\{w_n\} \subset \DD$, $|w_n|\to 1$, satisfying
\beq
\label{not}
N_\vf(w_n) \frac{1-|\vt(w_n)|^2}{1-|w_n|^2}>c>0.
\eeq
By the Littlewood subordination principle, which implies that $N_\vf(w)\leq \log\frac{1}{|w|}$, there exists $a<1$ such that (\ref{thetaestimate}) holds. Applying Lemma \ref{l1} (i) and the compactness of $C_\vf$, we get
$\|C_\vf\tilde{\kappa}_{w_n}\|^2 \to 0 \ \mbox{as} \ n\to \infty.$
On the other hand, (\ref{thetaestimate}), Lemma \ref{l1} (ii) and the subharmonicity inequality for $N_\vf$ (see \cite{S}) imply
\begin{multline}
\label{finalestimate}
\|C_\vf\tilde{\kappa}_{w_n}\|^2 \ge \int_\DD |\tilde{\kappa}_{w_n}'(\zeta)|^2 N_\vf(\zeta) dA(\zeta)\ge\\
c_1\int_{\DD}|\kappa_{w_n}'(\zeta)|^2(1-|w_n|^2) N_\vf(\zeta)dA(\zeta)\ge
\frac {c_2} {(1-|w_n|^2)^3}
\int_{D_\eps(w_n)} N_\vf(\zeta) dA(\zeta) \ge\\ \frac{c_\eps N_\vf(w_n)}{1-|w_n|^2}.
\end{multline}
We combine the last estimate with (\ref{not}) to get a contradiction.
\end{proof}
\section{Aleksandrov-Clark measures}
For $\alpha \in \TT$ let as before $\mu_\alpha$ be the Aleksandrov-Clark measure of $\vf$ corresponding to $\alpha$ and let
\[
d\mu_\alpha= h_\alpha dm + d\sigma_\alpha
\]
be its decomposition into absolutely continuous and singular parts, where $m$ is the normalized Lebesgue measure on $\TT$.
Then
\[
h_\alpha(\zeta)=\frac{1-|\vf(\zeta)|^2}{|\alpha-\vf(\zeta)|^2}
\]
for almost every $\zeta$ on $\TT$. As above, we assume for simplicity that $\vf(0)=0$; then $\|\mu_\alpha\|=1$.
We give a condition in terms of the Aleksandrov-Clark measures that is sufficient for the compactness; it is also necessary if
$\vt$ is a one-component inner function, i.e., the set $\{z\in \DD: |\vt(z)|<r\}$ is connected for some $r\in (0,1)$. The one-component inner functions were introduced by W.~S.~Cohn in \cite{C},
see also \cite{A} for a number of equivalent characterizations of one-component inner functions.
\begin{theorem} Let $\vt$ be a one-component inner function. The following statements are equivalent\\
(C) $ C_\vf:K_\vt \to H^2$ is a compact operator.\\
(S) $\sigma_\alpha=0$ for all $\alpha \in \Sigma(\vt)$.\\
Moreover, the implication (S) $\Rightarrow$ (C) holds for any inner function $\vt$.
\end{theorem}
The proof mainly follows the pattern described in \cite{Saks}, Section 7; see \cite{Sar} for the original approach and also \cite{CM}.
We need the following description of the spectrum of a one-component inner function.
\begin{lfact}
Let $\vt$ be a one-component inner function and $\alpha \in \TT$. The following statements are equivalent \\
(a) $\alpha \in \Sigma(\vt)$; (b) $\liminf_{w\rightarrow\alpha} |\vt(w)| <1$; (c) $\liminf_{r\rightarrow 1-} |\vt(r\alpha)| <1$.
\end{lfact}
The implications $(c)\Rightarrow (a)\Rightarrow (b)\Rightarrow (a)$ are straightforward and hold for any inner function, see \cite{Nik}, Lecture 3; $(a)\Rightarrow (c)$ is true when $\vt$ is one-component, it follows from \cite{VT}, Section 5, see also Theorem 1.11 in \cite{A}.
\begin{proof}[Proof (C) $\Rightarrow$ (S)] Fix $\alpha \in \Sigma(\vt)$ and choose a sequence $r_n\to 1$ so that
$|\vt(\alpha r_n)|<a<1$. By Lemma \ref{l1}~(i) and the compactness of $C_\vf$, we have
\[
\left \| C_\vf \tilde{\kappa}_{\alpha r_n}\right \|^2 \ge
\int_\TT \frac{1-r_n^2}{|1-\bar{\alpha} r_n \vf(\xi)|^2}
\frac {|1-\bar{\vt}(\alpha r_n) \vt(\vf(\xi))|^2}{1- |\vt(\alpha r_n)|^2} |d\xi| \ \to \ 0, \ {\rm as} \ n\to \infty.
\]
Since $|\vt(\alpha r_n)|<a<1$, this yields
\[
\| C_\vf \tilde{k}_{\alpha r_n} \|^2 =
\int_\TT \frac {1-r_n^2}{|1-\bar{\alpha}r_n \vf(\xi)|^2}|d\xi| \to 0, \ {\rm as} \ n\to \infty,
\]
where $\tilde{k}_w$ is the normalized reproducing kernel for $H^2$, see (\ref{kernel}).
The rest of the proof follows literally \cite{CM}, see also \cite{Saks}, Lemma 7.6. We give it here for
the sake of completeness. We have
\[
\|C_\vf\tilde{k}_{\alpha r_n} \|^2=
\int_{\TT} \frac{1-|r_n\vf(\xi)|^2}{|\alpha-r_n\vf(\xi)|^2}|d\xi| - \int_{\TT}r_n^2
\frac{1-|\vf(\xi)|^2}{|\alpha-r_n\vf(\xi)|^2}|d\xi| =: A_n-B_n.
\]
Clearly,
\[A_n=\Re\left(\frac{\alpha+r_n\vf(0)}{\alpha-r_n\vf(0)}\right)=1.\]
Further, by the monotone convergence theorem
\[
\lim_{n\rightarrow \infty}B_n =\int_\TT \frac{1-|\vf|^2}{|\alpha-\vf|^2}\,|d\xi|=\|h_\alpha\|_1.
\]
Then $\|\sigma_\alpha\|=1-\|h_\alpha\|_1=0$. This completes the proof (C) $\Rightarrow$ (S).
\end{proof}
We remark that the one-component condition was employed only in the description of the spectrum, so the following statement holds for any inner function:
{\em If $C_\vf:K_\vt\rightarrow H^2$ is a compact operator then $\sigma_\alpha=0$ for all $\alpha\in\TT$ such that $\liminf_{r\rightarrow 1-}|\vt(r\alpha)|<1$.}
\begin{proof}[Proof (S) $\Rightarrow$ (N)] We prove this implication; the implication (N) $\Rightarrow$ (C) then follows from Theorem \ref{th:1}. Suppose that (N) is false; then
\[
N_\vf(w_n) \frac{1-|\vt(w_n)|^2}{1-|w_n|^2} > c>0, \ {\rm for \ some} \ \{w_n\}\subset \DD, w_n\rightarrow\alpha\in\TT.
\]
Clearly $\alpha\in\Sigma(\vt)$ and \eqref{thetaestimate} holds for some $a<1$. Further, \[(1-|w_n|)^{-1}N_\vf(w_n)>c_1>0.\]
We obtain a contradiction in the same way as in \cite{CM}, see also \cite{Saks}, Theorem 7.5. We have, by a simpler version of (\ref{finalestimate}),
\[
\|C_\vf\tilde{k}_{w_n}\|^2\ge C\frac{N_\vf(w_n)}{1-|w_n|^2}>c_2>0.\]
On the other hand by the Fatou lemma,
\begin{multline*}
\limsup_{n\rightarrow\infty}\|C_\vf\tilde{k}_{w_n}\|^2=1 - \liminf_{n\rightarrow\infty}\int_\TT|w_n|^2 \frac{1-|\vf(\xi)|^2}{|1-\bar{w}_n\vf(\xi)|^2} |d\xi|\le\\
1 -\int_\TT\frac{1-|\vf(\xi)|^2}{|\alpha-\vf(\xi)|^2}|d\xi|=\|\sigma_\alpha\|=0,
\end{multline*}
which leads to a contradiction.
\end{proof}
\section{Examples and concluding remarks}
{\em Inner functions with one point spectra.} Consider the Paley-Wiener space $K_{\vt_1}$ generated by
\[
\vt_1(z)=e^{\frac{z+1}{z-1}},
\]
this space can be obtained from the classical Paley-Wiener space of entire functions by the substitution
$\zeta \mapsto \frac{z-i}{z+i}$. Then $\vt_1$ is a one-component inner function and $\Sigma(\vt_1)=\{1\}$.
Theorem \ref{th:1} and explicit calculation show that $C_\vf:K_{\vt_1}\to H^2$ is a compact operator if and only if
\beq
\label{PW}
\frac{N_\vf(w)}{\max \left ((1-|w|^2),|1-w|^2 \right )} \ \to \ 0, \ \ {\rm as} \ \ w\to 1.
\eeq
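In outline, the calculation behind (\ref{PW}) is the following: $\Re\frac{w+1}{w-1}=-\frac{1-|w|^2}{|1-w|^2}$, so $|\vt_1(w)|^2=e^{-2\frac{1-|w|^2}{|1-w|^2}}$ and hence
\[
\frac{1-|\vt_1(w)|^2}{1-|w|^2}\asymp\frac{1}{1-|w|^2}\min\left(1,\frac{1-|w|^2}{|1-w|^2}\right)=\frac{1}{\max\left(1-|w|^2,|1-w|^2\right)},
\]
so that condition (N) of Theorem \ref{th:1} takes the form (\ref{PW}).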
Consider now $D=\{w\in \DD; |w-1/4|<3/4\}$ and let $\vf$ be a conformal mapping $\vf : \DD \to D $, $\vf(0)=0, \vf(1)=1$.
Clearly (\ref{PW}) does not hold and the operator $C_\vf:K_{\vt_1}\to H^2$ is not compact. Evidently,
the Aleksandrov-Clark measure $\mu_1$ of $\vf$ is not absolutely continuous. Below we give an example of a (multi-component) inner function
$\vt$ with $\Sigma(\vt)=\{1\}$ and such that $C_\vf : K_\vt \to H^2$ is a compact operator. Thus (C) does not imply (S) for general $\vt$.
Take a sequence $t_m\searrow 0$ such that $\{\zeta_m\}=\{(1-t_m^3) ^{1/2}e^{it_m}\}$ is
an interpolating sequence in $\DD$. Given a sequence $\{\alpha_m\}\in l^1,\ \alpha_m\in(0,1),$ denote
$\Lambda=\{\lambda_m\}=\{(1-\alpha_mt_m^3)^{1/2}e^{it_m}\}$, this is also an interpolating sequence. Let now
$\vt=B_\Lambda$ be the Blaschke product corresponding to the sequence $\Lambda$. We claim that
$C_\vf:K_\vt \to H^2$ is a compact operator.
Indeed, $\|C_\vf \tilde{k}_\zeta\|\leq 1$ for $\zeta \in \DD$, where $\tilde{k}_\zeta$ is the normalized reproducing kernel for $H^2$;
this follows just from the fact that $C_\vf$ is contractive. In particular
\[
t_m^3 \int_\TT \frac {|d\xi|}{|1-\bar{\zeta}_m \vf(\xi)|^2}=\|C_\vf \tilde{k}_{\zeta_m}\|^2 \le 1.
\]
Since $|1-\bar{\zeta}_m \vf(\xi)|^2\le c |1-\bar{\lambda}_m \vf(\xi)|^2$, $\xi \in \TT$, we have,
\[
\|C_\vf \tilde{k}_{\lambda_m}\|^2 \asymp \alpha_m t_m^3 \int_\TT \frac {|d\xi|}{|1-\bar{\lambda}_m \vf(\xi)|^2}
\le C\alpha_m.
\]
On the other hand the system $\{\tilde{k}_{\lambda_m}\}$ forms a Riesz basis in $K_\vt$ (see e.g. \cite{Nik}, Lecture VII).
Compactness of $C_\vf:K_\vt \to H^2$ is now straightforward, alternatively it could be deduced from Theorem 1.
\medskip
{\em Concluding remarks.}
In the classical case of $H^2(\DD)$ the essential norm of the composition operator was obtained by J.~Shapiro
\[
\|C_\vf\|_e^2=\limsup_{|w|\rightarrow 1-}\frac{N_\vf(w)}{-\log|w|}.
\]
For a given one-component $\vt$ the equivalence of the norms proved in \cite{C2} and similar arguments give
\[
\|C_\vf : K_\vt\rightarrow H^2\|_e^2\asymp \limsup_{|w|\rightarrow 1-}N_\vf(w)\frac{1-|\vt(w)|^2}{1-|w|^2}.
\]
Let $\vf:\DD\rightarrow\DD$ be a holomorphic function and
$\vf^*$ be its radial boundary values. Define a measure $\nu_\vf$ on $\bar{\DD}$
by $\nu_\vf(E)=m((\vf^*)^{-1}(E))$ for any $E\subset \bar{\DD}$, where $m$ is
the Lebesgue measure on $\TT$. The composition operator $C_\vf$ on $H^2(\DD)$ is isometrically equivalent to the embedding of $H^2$ into
$L^2(\bar{\DD},\nu_\vf)$, see \cite{Mc, CMc} for details. The connecting between the Nevanlinna counting function and the measure $\nu_\vf$ was recently
studied in details in \cite{LQLR}.
Respectively, the compactness of the composition operator on $K_\vt$ can be reduced
to the question of the compactness of the embedding $K_\vt\hookrightarrow L^2(\bar{\DD},\nu_\vf)$.
It is well-known that the embeddings are easier to study for one-component inner functions $\vt$, see
\cite{C,C2}, and subsequent works \cite{ VT} and \cite{A,A1}.
The compactness of the embedding $K_\vt\hookrightarrow L^2(\bar{\DD},\mu)$ was studied by J.~A.~Cima and A.~L.~Matheson \cite{CM1} and by A.~D.~Baranov \cite{B}. The latter article contains in particular
necessary and sufficient conditions for the compactness of the embedding
for the case of one-component inner function. This approach also shows that for one-component
$\vt$ the compactness of the composition operator $C_\vf:K_\vt^ p\rightarrow H^p$
does not depend on $p\in(1,\infty)$.
| {
"timestamp": "2013-01-31T02:02:41",
"yymm": "1205",
"arxiv_id": "1205.5172",
"language": "en",
"url": "https://arxiv.org/abs/1205.5172",
"abstract": "We study compactness property of composition operator acting from a model space generated by an inner function to the Hardy space.",
"subjects": "Complex Variables (math.CV)",
"title": "Composition operator on model spaces",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.980280868436129,
"lm_q2_score": 0.7217432182679956,
"lm_q1q2_score": 0.7075110687916374
} |
https://arxiv.org/abs/2007.11216 | Boundedness of Dunkl-Hausdorff operator in Lebesgue spaces | In this paper, the $L^p_v(\R)$-boundedness of the Dunkl-Hausdorff operator $\displaystyle H_{\al,\phi} f(x)=\ent\frac{ |\phi(t)|}{|t|^{2\al+2}}f\lf(\frac{x}{t}\rh) dt $
has been characterized and for a certain type of weight $v$, the precise value of the norm $\|H_{\al,\phi}\|_{L^p_v(\R)\to L^p_v(\R)}$ has been obtained. This covers several of the existing results. Analogous results in two dimensions have also been proved. | \section{Introduction}
Let $v$ be a weight function, i.e., a function which is measurable, positive and finite almost everywhere on the specified domain. By $L^p_v(\R)$, $1\le p<\infty$, we denote the weighted Lebesgue space and a norm of a function $f\in L^p_v(\R)$ is given by
$$
\|f\|_{L^p_v(\R)}:=\lf(\ent|f(x)|^p v(x) dx\rh)^{1/p}.
$$
Occasionally, we shall be referring to the specific weight $v(x)=|x|^{2\al +1}$. The corresponding weighted Lebesgue space will be denoted by $L^p_\al(\R)$. The non-weighted Lebesgue space, i.e., when $v\equiv 1$, will be denoted by $L^p(\R)$.
Let $\phi\in L^1(\R)$. In the present paper, we are concerned with the Dunkl-Hausdorff operator \cite{daher, daher1, daher2, dunkl}
$$
H_{\al,\phi} f(x)=\ent\frac{ |\phi(t)|}{|t|^{2\al+2}}f\lf(\frac{x}{t}\rh) dt.
$$
When $\al=-1/2$, the operator $H_{\al,\phi}$ is the famous Hausdorff operator
$$
H_\phi f(x)=\ent\frac{|\phi(t)|}{|t|}f\lf(\frac{x}{t}\rh) dt,
$$
from which several well known operators can be deduced for suitable choices of $\phi$, e.g., for $\phi(t)=\frac{1}{t}\chi_{(1,\infty)}(t)$, the operator $H_\phi$ reduces to the standard Hardy averaging operator $$Hf(x)=\frac{1}{x}\int_0^x f(t)\,dt$$
while for $\phi(t)=\chi_{[0,1]}(t)$, it reduces to the adjoint of Hardy averaging operator $$H^*f(x)=\int_x^\infty \frac{f(t)}{t}\,dt.$$
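To verify the first reduction, note that for $x>0$ and $\phi(t)=\frac{1}{t}\chi_{(1,\infty)}(t)$ the substitution $s=x/t$ gives
$$
H_\phi f(x)=\int_1^\infty \frac{1}{t^2}\,f\lf(\frac{x}{t}\rh)dt=\frac1x\int_0^x f(s)\,ds,
$$
while for $\phi=\chi_{[0,1]}$ the same substitution gives $\displaystyle\int_0^1\frac1t\,f\lf(\frac xt\rh)dt=\int_x^\infty\frac{f(s)}{s}\,ds$.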
Similarly, other operators like the Calder\'on operator, the Ces\`aro operator and the fractional Riemann--Liouville operator can also be deduced from $H_\phi$; see \cite{gorka, jain} for details.
For more updates on the Hausdorff operator, its extensions and in the framework of other function spaces one may refer to \cite{ar, chen, kan, miya, lm} and the survey \cite{L}.
By the replacement $\phi(s)=\frac{1}{s}\psi\lf(\frac{1}{s}\rh)$, $s>0$, the operator $H_\phi$ (considered on $\R^+$) becomes equivalent to
$$G_\psi g(x)=\frac{1}{x}\int_0^\infty \psi\lf(\frac{t}{x}\rh)g(t)\,dt.$$
It was proved by Golberg (\cite{gold}, Theorem 1) that if $\psi\ge 0$ on $\R^+$ is such that $\displaystyle\ent \frac{\psi(t)}{\sqrt{t}} dt =:K<\infty$, then the operator $G_\psi$ (and consequently $H_\phi$) is a bounded operator on $L^2(\R^+)$ and $\|G_\psi\|\leq K.$ The $L^p$-boundedness of $G_\psi$ is derived in (\cite{HLP}, Theorem 319) and for many other extensions with sharp constants one may refer to (\cite{KS}, Theorem 6.4 and bibliographic notes to Chapter 2 therein). In \cite{jain}, the authors reestablished the $L^p(\R^+)$-boundedness of $G_\psi$ and via a new proof of the lower bound, obtained the precise value of $\|G_\psi\|_{L^p(\R^+)\to L^p(\R^+)}$ as
$$
\|G_\psi\|_{L^p(\R^+)\to L^p(\R^+)}= \int_0^\infty \frac{\psi(t)}{{t^{1/p}}}\, dt =:K_p,\quad 1<p<\infty.
$$
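For instance, the choice $\psi=\chi_{(0,1)}$ gives $G_\psi=H$ and $K_p=\int_0^1 t^{-1/p}\,dt=\frac{p}{p-1}$, which recovers the classical value of the norm of the Hardy averaging operator on $L^p(\R^+)$.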
Recently, in \cite{gorka}, a two weight characterization of the boundedness of $H_\phi$ between $L^p_v(\R^+)$ and $L^p_w(\R^+)$ has been given. Moreover, in the same paper, the corresponding boundedness has been studied in the framework of other function spaces as well, namely, grand Lebesgue spaces and variable exponent Lebesgue spaces.
Coming back to the Dunkl-Hausdorff operator $H_{\al,\phi}$, its $L_\al^1(\R)$ boundedness has been proved in \cite{daher} whereas $L_\al^p(\R)$ boundedness is obtained in \cite{daher2} and in each case, a sufficient condition has been provided. We, in this paper, generalize these results by providing a characterization for the $L_v^p(\R)$ boundedness of $H_{\al,\phi}$ and for a certain type of weight $v$, we provide the precise value of the norm $\|H_{\al,\phi}\|_{L^p_v(\R)\to L^p_v(\R)}$. Moreover, a sufficient condition has been proved for two weights and two indices boundedness, i.e., $H_{\al,\phi}:L^p_w(\R)\to L^q_v(\R)$ boundedness. These results have also been proved in the two dimensional framework.
\section{One dimensional case}
We begin by proving the following:
\begin{theorem}\label{t2.1}
Let $1<p<\infty$, $v$ be a weight function and $\phi\in L^1(\R)$ be such that $$\displaystyle \ent \frac{|\phi(t)|}{{|t|^{{2\al+2}-\frac{1}{p}}}}\lf(\sup_{y\in\R}\frac{v(ty)}{v(y)}\rh)^{1/p} dt =:A_{\sup}<\infty.$$ Then the operator $H_{\al,\phi}$ is a bounded operator on $L^p_v(\R)$ and
$$\|H_{ \al,\phi}f\|_{L^p_v(\R)}\leq A_{\sup} \|f\|_{L^{p}_{v}(\R)}.
$$
\end{theorem}
\begin{proof}
If $f \in L^p_v(\R)$, then by using the generalised Minkowski inequality, a change of variables and H\"older's inequality, we have
\begin{align}\label{enew1}
\|H_{ \al,\phi}f\|_{L^p_v(\R)}
& =\lf(\ent |H_{ \al,\phi} f(x)|^p v(x)dx \rh)^\frac{1}{p} \nonumber\\
& = \lf(\ent\lf|\ent \frac{|\phi(t)|}{|t|^{2\al+2}}f\lf(\frac{x}{t}\rh) dt\rh|^p v(x)dx \rh)^\frac{1}{p} \nonumber\\
& \leq \lf(\ent \lf(\ent \frac{|\phi(t)|}{|t|^{2\al+2}}\lf|f\lf(\frac{x}{t}\rh)\rh|v^{\frac{1}{p}}(x)dt\rh)^p dx \rh)^\frac{1}{p}\nonumber\\
& \leq \ent \frac{|\phi(t)|}{|t|^{2\al+2}}|t|^{\frac{1}{p}} \lf(\ent \lf|f(y)\rh|^p v(yt)dy\rh)^\frac{1}{p} dt\nonumber\\
& = \ent \frac{|\phi(t)|}{|t|^{{2\al+2}-\frac{1}{p}}}\lf(\ent \lf|f(y)\rh|^p v(y)v^{-1}(y) v(yt)dy\rh)^{\frac{1}{p}} dt\\
&\leq \lf(\ent \frac{|\phi(t)|}{|t|^{{2\al+2}-\frac{1}{p}}} \lf(\sup_{y\in\R}\frac{v(ty)}{v(y)}\rh)^{\frac{1}{p}} dt \rh)\lf(\ent |f(y)|^p v(y) dy \rh)^{\frac{1}{p}}\nonumber\\
& = A_{\sup} \|f\|_{L^p_v(\R)}\nonumber
\end{align}
and the assertion follows.
\end{proof}
The following theorem provides a converse of Theorem \ref{t2.1}. Here and throughout $p'$ denotes the conjugate index to $p$, i.e., ${1\over p}+{1\over p'}=1.$
\begin{theorem}\label{t2.2}
Let $1<p<\infty$, $v$ be a weight function and $\phi\in L^1(\R)$. If the operator $H_{\al,\phi}$ is a bounded operator on $L^p_v(\R)$, then $$\|H_{ \al,\phi}\|\geq \ent \frac{|\phi(t)|}{{|t|^{2\al+2-\frac{1}{p}}}}\lf(\inf_{y\in\R}\frac{v(ty)}{v(y)}\rh)^{1/p} dt=:A_{\inf}.$$
\end{theorem}
\begin{proof}
Let us consider $0\le f\in L^p_v(\R)$ and $0\le g\in L^{p'}_{v^{1-p'}}(\R)$. On using Fubini's Theorem and H\"older's inequality, we have
\begin{align}\label{K2}
{J} & := \ent \frac{|\phi(t)|}{|t|^{2\al+2}}\lf( \ent f\lf(\frac{x}{t}\rh) g(x)dx\rh) dt\\
& = \ent g(x)\lf(\ent \frac{|\phi(t)|}{|t|^{2\al+2}}f\lf(\frac{x}{t}\rh)dt\rh) dx\nonumber\\
&\leq \ent |g(x)| \lf|(H_{\al,\phi} f )(x)\rh| dx \nonumber\\
&= \ent |g(x)| v^{\frac{1}{p}}(x)v^{\frac{-1}{p}}(x)\lf|(H_{\al,\phi} f )(x)\rh| dx \nonumber\\
& \leq \lf(\ent \lf|\lf(H_{\al,\phi}f\rh)(x)\rh|^p v(x) dx\rh)^{1/p}\lf(\ent |g(x)|^{p'} v^{\frac{-p'}{p}}(x)dx\rh)^{^{\frac{1}{p'}}}\nonumber \\
& \leq \|H_{\al,\phi}\| \|f\|_{L^p_v(\R)}\|g\|_{L^{p'}_{v^{1-p'}}(\R)}.\label{h2.1}
\end{align}
For any interval $I=(a,b)$, $I'$ will denote the interval $(-b,-a)$. Now, for $u\in (0,1)$, let $I_1 = (u, 1/u)$ so that $I_1'=(-1/u,-u)$. Define the test functions
$$f_u(x)=\frac{v^{-1/p}(x)}{|x|^{1/p}}\chi_{I_1\cup I'_1}(x), \qquad g_u(x)=\frac{v^{1/p}(x)}{|x|^{1/p'}}\chi_{I_1\cup I'_1}(x).$$
Then it can be calculated that
\begin{equation}\label{h2.2}
\|f_u\|_{L^p_v(\R)}^p=\|g_u\|_{L^{p'}_{v^{1-p'}}(\R)}^{p'}=4 \log(1/u).
\end{equation}
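Indeed, the weight cancels, so that
$$
\|f_u\|_{L^p_v(\R)}^p=\ent\frac{\chi_{I_1\cup I'_1}(x)}{|x|}\,dx=2\int_u^{1/u}\frac{dx}{x}=4\log\frac1u,
$$
and the same computation applies to $\|g_u\|_{L^{p'}_{v^{1-p'}}(\R)}^{p'}$.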
Also, we have
\begin{align}\label{2.2}
h_u(t)
&:= \ent f_u\lf(\frac{x}{t}\rh) g_u(x)dx\nonumber\\
& = |t|^{1/p} \ent \frac{1}{|y|}\chi_{I_1\cup I'_1}(y)\chi_{I_1\cup I'_1}\lf(ty\rh) v^{-1/p}(y)v^{1/p}(ty) dy\nonumber\\
&= |t|^{1/p}\ent \frac{1}{|y|}\chi_{I_{t,u}}(y)v^{-1/p}(y)v^{1/p}(ty) dy\nonumber\\
&\geq \inf_{y \in \R} \lf(\frac{v(ty)}{v(y)}\rh)^{1/p} |t|^{1/p}\ent \frac{1}{|y|}\chi_{I_{t,u}}(y) dy
\end{align}
where
\begin{align*}
I_{t,u} &=
\begin{cases}
\lf(I_1\cup I'_1\rh)\cap(I_2\cup I'_2),& {\rm if}\, t\geq 0\\
\lf(I_1\cup I'_1\rh)\cap(I_3\cup I'_3), & {\rm if}\, t< 0
\end{cases}\\
&=
\begin{cases}
\lf(I_1\cap {I_2}\rh)\cup(I'_1\cap I'_2),& {\rm if}\, t\geq 0\\
\lf(I_1\cap I'_3\rh)\cup(I'_1\cap I_3), & {\rm if}\, t< 0
\end{cases}
\end{align*}
with $I_2=(\frac{u}{t}, \frac{1}{ut})$ and $I_3=(\frac{1}{tu}, \frac{u}{t})$. We divide $\R$ as
$$\R=\lf(-\infty, -\frac{1}{u^2}\rh]\cup \lf(-\frac{1}{u^2}, -1\rh]\cup(-1, -u^2]\cup(-u^2, 0]\cup(0,u^2]\cup(u^2,1]\cup\lf(1,\frac{1}{u^2}\rh]\cup\lf(\frac{1}{u^2},\infty\rh).$$
If $t\in \lf(-\infty, -\frac{1}{u^2}\rh]\cup[-u^2,u^2]\cup\lf[\frac{1}{u^2},\infty\rh)\cup\{-1,1\}$, then $I_{t,u}=\emptyset$, so that in this case
\begin{equation}\label{h2.3}
h_u(t)=0.
\end{equation}
If $t\in (-1/u^2,-1)$, then $I_{t,u} = \lf(u,-\frac{1}{tu}\rh)\cup\lf(\frac{1}{ut},{-u}\rh)$ and
\begin{equation}\label{e2.1}
\ent \frac{1}{|y|}\chi_{I_{t,u}}dy=-2\log|t|+4\log \frac{1}{u} .
\end{equation}
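To see \eqref{e2.1}, note that the two components of $I_{t,u}$ contribute equally; for instance
$$
\int_u^{-\frac{1}{tu}}\frac{dy}{y}=\log\frac{1}{|t|u^2}=-\log|t|+2\log\frac1u.
$$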
If $t\in \lf(-1, -u^2\rh)$, then $I_{t,u} = \lf(-\frac{1}{u}, \frac{u}{t}\rh)\cup\lf(-\frac{u}{t}, \frac{1}{u}\rh)$ and we have
\begin{equation}\label{e2.2}
\ent \frac{1}{|y|}\chi_{I_{t,u}}dy=2\log|t|+4\log \frac{1}{u}.
\end{equation}
If $t\in (u^2,1)$, then $I_{t,u}=\lf(\frac{u}{t}, \frac{1}{u}\rh)\cup\lf(-\frac{1}{u}, -\frac{u}{t}\rh)$ and
\begin{equation}\label{h2.4}
\ent \frac{1}{|y|}\chi_{I_{t,u}}dy=2\log|t|+4\log \frac{1}{u}.
\end{equation}
If $t\in \lf(1,\frac{1}{u^2}\rh)$, then $I_{t,u}= \lf(u, \frac{1}{tu}\rh)\cup\lf(-\frac{1}{tu}, -u\rh)$ and we have
\begin{equation}\label{h2.5}
\ent \frac{1}{|y|}\chi_{I_{t,u}}dy=-2\log|t|+4\log \frac{1}{u}.
\end{equation}
On taking $f$ and $g$ as $f_u$ and $g_u$ in (\ref{K2}) and then using \eqref{2.2}-\eqref{h2.5}, we obtain
\begin{align}\label{h2.6}
{J} &= \lf(\int^{-1/u^2}_{-\infty}+\int_{-1/u^2}^{-1}+\int^{-u^2}_{-1}+\int^0_{-u^2}+\int_0^{u^2}+\int_{u^2}^1+\int_1^{1/u^2}+\int_{1/u^2}^\infty\rh) \frac{|\phi(t)|}{|t|^{2\al+2}} h_u(t) dt \nonumber\\
& \geq 4 \log \frac{1}{u} \lf[\int^{-u^2}_{-1/u^2}\inf_{y \in \R} \lf(\frac{v(ty)}{v(y)}\rh)^{\frac{1}{p}}\frac{|\phi(t)|}{|t|^{{2\al+2}-\frac{1}{p}}}\lf(1-\frac{\xi(t)}{4\log \frac{1}{u}}\rh)dt\rh.\nonumber\\
& \hskip 1.0 in \lf.+ \int^{1/u^2}_{u^2}\inf_{y \in \R} \lf(\frac{v(ty)}{v(y)}\rh)^{\frac{1}{p}}\frac{|\phi(t)|}{|t|^{{2\al+2}-\frac{1}{p}}}\lf(1-\frac{\xi(t)}{4\log \frac{1}{u}}\rh)dt\rh],
\end{align}
where
\begin{equation*}
\xi(t)=
\begin{cases}
2\log |t|,& {\rm if}\, t\in (-1/u^2, -1)\cup (1,1/u^2) \\
-2\log |t| , & {\rm if}\, t \in (-1, -u^2)\cup (u^2, 1).
\end{cases}
\end{equation*}
Now, using \eqref{h2.6} and \eqref{h2.2} in \eqref{h2.1} for $f=f_u$ and $g=g_u$, we get
\begin{align*}
&\int^{-u^2}_{-1/u^2}\inf_{y \in \R} \lf(\frac{v(ty)}{v(y)}\rh)^{\frac{1}{p}}\frac{|\phi(t)|}{|t|^{{2\al+2}-\frac{1}{p}}}\lf(1-\frac{\xi(t)}{4\log \frac{1}{u}}\rh)dt \\
& \qquad + \int^{1/u^2}_{u^2}\inf_{y \in \R} \lf(\frac{v(ty)}{v(y)}\rh)^{\frac{1}{p}}\frac{|\phi(t)|}{|t|^{{2\al+2}-\frac{1}{p}}}\lf(1-\frac{\xi(t)}{4\log \frac{1}{u}}\rh)dt \leq \|H_{\al,\phi}\|.
\end{align*}
By the Monotone Convergence Theorem, the LHS $\displaystyle \uparrow A_{\inf}$ as $u\rightarrow 0$ and we are done.
\end{proof}
In view of Theorems \ref{t2.1} and \ref{t2.2}, a characterization for the boundedness of $H_{\al,\phi}:L^p_v(\R)\to L^p_v(\R)$ can be derived. In fact, the following is immediate:
\begin{theorem}\label{t2.3}
Let $1<p<\infty$, $v$ be a weight function and $\phi\in L^1(\R)$. Let the following be satisfied for some constant $c>0$:
$$
\sup_{y\in\R}\frac{v(ty)}{v(y)}\le c \inf_{y\in\R}\frac{v(ty)}{v(y)}.
$$
Then the operator $H_{\al,\phi}:L^p_v(\R)\to L^p_v(\R)$ is bounded if and only if $A_{\sup} <\infty$ and moreover the following estimates hold:
\begin{equation*}
{1\over c} A_{\sup} \le \|H_{\al,\phi}\|_{L^p_v(\R)\to L^p_v(\R)}\le A_{\sup}.
\end{equation*}
\end{theorem}
\begin{corollary}\label{t2.4} Let $1<p<\infty$, $v$ be a weight function and $\phi\in L^1(\R)$. If there exists a function $h$ such that $v(xy)=v(x)h(y)$, then $A_{\sup} = A_{\inf}$ and
\begin{equation*}
\|H_{\al,\phi}\|_{L^p_v(\R)\to L^p_v(\R)} = \ent \frac{|\phi(t)|}{{|t|^{2\al+2-\frac{1}{p}}}}h^{1/p}(t)\, dt.
\end{equation*}
\end{corollary}
For $\al=-1/2$ and $\phi(t)={1\over t}\chi_{(1,\infty)}(t)$, as pointed out earlier, the operator $H_{\al,\phi}$ becomes the Hardy averaging operator
$$
Hf(x)={1\over x}\int_0^x f(t)\, dt.
$$
Further, if we take $v(t)=t^\beta, \, \beta<p-1$, then
$\displaystyle A_{\sup}=\frac{p}{p-\beta-1}$ and
$$
\|H\|_{L^p_{t^\beta}(\R)\to L^p_{t^\beta}(\R)} = \frac{p}{p-\beta-1}.
$$
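Indeed, with $\al=-\frac12$, $\phi(t)=\frac1t\chi_{(1,\infty)}(t)$ and $h(t)=t^\beta$ we get
$$
A_{\sup}=\int_1^\infty t^{-2+\frac{1+\beta}{p}}\,dt=\frac{p}{p-\beta-1},
$$
which is finite precisely when $\beta<p-1$.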
The above discussion leads to the following corollary which, in fact, is the classical Hardy inequality (see e.g. \cite[Theorem 330]{HLP}, \cite[(3.6) p. 23]{KMP} or \cite[Theorem 6 p. 726]{KMP2}):
\begin{corollary}
Let $1<p<\infty$, let $\beta<p-1$ and let $v(x)=|x|^\beta$ be the weight function. Then the inequality
$$
\|Hf\|_{L^p_{|x|^\beta}(\R)} \le \left(\frac{p}{p-\beta-1} \right) \|f\|_{L^p_{|x|^\beta}(\R)}
$$
holds and the constant $\left(\frac{p}{p-\beta-1} \right)$ is sharp.
\end{corollary}
\begin{remark}
\begin{itemize}
\item [(i)] When $p=1$ and $v(x)=|x|^{2\al +1}$, Theorem \ref{t2.1} reduces to (\cite{daher}, Theorem 3.1).
\item [(ii)] When $v(x)=|x|^{2\al +1}$, Theorem \ref{t2.1} reduces to (\cite{daher2}, Theorem 1).
\item [(iii)] When $\al=-1/2$, Theorems \ref{t2.1}, \ref{t2.2}, \ref{t2.3} and Corollary \ref{t2.4} reduce to (\cite{gorka}, Theorems 1(i), 1(ii), Corollaries 1 and 2, respectively for $w=v$). Moreover, the functions here are defined on $\R$ unlike in \cite{gorka} where the domain is $\R^+$.
\end{itemize}
\end{remark}
\section{Two dimensional case}
In this section, we derive the two dimensional analogues of the results proved in Section 2.
\begin{definition}
For $\phi\in L^1(\R^2)$, the two dimensional Dunkl-Hausdorff operator is defined by
$$\sH_{\al, \phi}f(x_1,x_2) = \et \frac{|\phi(t_1,t_2)|}{|t_1t_2|^{2\al+2}}f\lf(\frac{x_1}{t_1},\frac{x_2}{t_2}\rh)dt_1dt_2.$$
\end{definition}
Now, we prove the boundedness of $\sH_{\al, \phi}$. By using the generalised Minkowski inequality, a change of variables and H\"older's inequality in two dimensions, the following theorem can be proved along the same lines as Theorem \ref{t2.1}; a brief sketch is included after the statement.
\begin{theorem}\label{t3.2}
Let $1<p<\infty$, let $v$ be a weight function and let $\phi\in L^1(\R^2)$ be such that $$\displaystyle \et \frac{|\phi(t_1, t_2)|}{{|t_1t_2|^{2\al+2-\frac{1}{p}}}}\lf(\sup_{(y_1, y_2)\in\R^2}\frac{v(t_1y_1, t_2y_2)}{v(y_1, y_2)}\rh)^{1/p} dt_1dt_2 =:\sA_{\sup}<\infty.$$ Then the operator $\sH_{\al,\phi}:L^p_v(\R^2)\to L^p_v(\R^2)$ is bounded and
$$
\|\sH_{\al, \phi}f\|_{L^p_v(\R^2)}\leq \sA_{\sup} \|f\|_{L^{p}_{v}(\R^2)}.
$$
\end{theorem}
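For completeness, here is a brief sketch of that argument (it follows the one-dimensional proof step by step): by the generalised Minkowski inequality and the substitution $x_i=t_iy_i$,
\begin{align*}
\|\sH_{\al, \phi}f\|_{L^p_v(\R^2)}
&\leq \et \frac{|\phi(t_1,t_2)|}{|t_1t_2|^{2\al+2}}\lf(\et \lf|f\lf(\frac{x_1}{t_1},\frac{x_2}{t_2}\rh)\rh|^p v(x_1,x_2)\,dx_1dx_2\rh)^{1/p}dt_1dt_2\\
&= \et \frac{|\phi(t_1,t_2)|}{|t_1t_2|^{2\al+2-\frac{1}{p}}}\lf(\et |f(y_1,y_2)|^p\, v(t_1y_1,t_2y_2)\,dy_1dy_2\rh)^{1/p}dt_1dt_2\\
&\leq \sA_{\sup}\,\|f\|_{L^{p}_{v}(\R^2)},
\end{align*}
the last step using the bound $v(t_1y_1,t_2y_2)\leq \lf(\sup_{(z_1,z_2)\in\R^2}\frac{v(t_1z_1,t_2z_2)}{v(z_1,z_2)}\rh) v(y_1,y_2)$.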
Towards the converse of Theorem \ref{t3.2}, we prove the following:
\begin{theorem}\label{t3.3}
Let $1<p<\infty$, let $v$ be a weight function and let $\phi\in L^1(\R^2)$. If the operator $\sH_{\al,\phi}:L^p_v(\R^2)\to L^p_v(\R^2)$ is bounded, then $$\|\sH_{\al, \phi}\|\geq \et \frac{|\phi(t_1, t_2)|}{{|t_1t_2|^{2\al+2-\frac{1}{p}}}}\lf(\inf_{(y_1, y_2)\in\R^2}\frac{v(t_1y_1, t_2y_2)}{v(y_1,y_2)}\rh)^{1/p} dt_1dt_2=:\sA_{\inf}.$$
\end{theorem}
\begin{proof}
Let $0\le f\in L^p_v(\R^2)$ and $0\le g\in L^{p'}_{v^{1-p'}}(\R^2)$. Using Fubini's theorem and H\"older's inequality, we have
\begin{align}\label{h0}
\mathcal{J}& := \et \frac{|\phi(t_1, t_2)|}{|t_1t_2|^{2\al+2}} \et f\lf(\frac{x_1}{t_1},\frac{x_2}{t_2}\rh) g(x_1,x_2)dx_1dx_2 dt_1dt_2\\
& = \et g(x_1,x_2) \lf(\et \frac{|\phi(t_1, t_2)|}{|t_1t_2|^{2\al+2}}f\lf(\frac{x_1}{t_1},\frac{x_2}{t_2}\rh)dt_1dt_2\rh) dx_1dx_2\nonumber\\
&\leq \et |g(x_1,x_2)| \lf|\lf(\sH_{\al, \phi} f \rh)(x_1,x_2)\rh| dx_1dx_2\nonumber\\
& \leq \lf(\et \lf|\lf(\sH_{\al, \phi}f\rh)(x_1,x_2)\rh|^p v(x_1, x_2) dx_1dx_2\rh)^{1/p}\lf(\et |g(x_1,x_2)|^{p'} v(x_1, x_2)^{\frac{-p'}{p}} dx_1dx_2\rh)^{1/p'}\nonumber \\
& \leq \|\sH_{\al, \phi}\| \|f\|_{L^p_v(\R^2)}\|g\|_{L^{p'}_{v^{1-p'}}(\R^2)}.\label{h2.1.1}
\end{align}
Now, for $u\in (0,1)$, we define the test functions
\begin{align*}
f_u(x_1,x_2)&=\frac{v^{-1/p}(x_1,x_2)}{|x_1x_2|^{\frac{1}{p}}}\chi_{I_1\cup I'_1\times I_1\cup I'_1}(x_1,x_2),\\
g_u(x_1,x_2)&=\frac{v^{1/p}(x_1,x_2)}{|x_1x_2|^{\frac{1}{p'}}}\chi_{I_1\cup I'_1\times I_1\cup I'_1}(x_1,x_2),
\end{align*}
where $I_1 = (u, 1/u)$ and as before $I'_1 = (-1/u, -u)$. Then it can be calculated that
\begin{equation}\label{h2.1.2}
\|f_u\|_{{L^p_v}(\R^2)}^p=\|g_u\|_{L^{p'}_{{v^{1-p'}}}(\R^2)}^{p'}=(4 \log(1/u))^2.
\end{equation}
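Indeed, the weight cancels in both integrands, so
\begin{equation*}
\|f_u\|_{L^p_v(\R^2)}^p=\et \frac{\chi_{I_1\cup I'_1\times I_1\cup I'_1}(x_1,x_2)}{|x_1x_2|}\,dx_1dx_2=\lf(2\int_u^{1/u}\frac{dx}{x}\rh)^2=\lf(4\log\frac{1}{u}\rh)^2,
\end{equation*}
and the same computation gives the value of $\|g_u\|_{L^{p'}_{v^{1-p'}}(\R^2)}^{p'}$.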
Also, on taking ${x_i}/{t_i}= y_i$ for $ i = 1,2$, we have
\begin{align*}
h_u(t_1,t_2)
&:= \et f_u\lf(\frac{x_1}{t_1}, \frac{x_2}{t_2}\rh) g_u(x_1,x_2)dx_1dx_2\\
& = |t_1t_2|^{\frac{1}{p}} \et \frac{1}{|y_1y_2|}\chi_{I_1\cup I'_1\times I_1\cup I'_1}(y_1,y_2)v^{-\frac{1}{p}}(y_1,y_2)v^{\frac{1}{p}}(t_1y_1,t_2y_2)\chi_{I_1\cup I'_1\times I_1\cup I'_1}(t_1y_1,t_2y_2)dy_1dy_2\\
&\geq \inf_{(y_1,y_2)\in\R^2}\frac{v^{\frac{1}{p}}(t_1y_1,t_2y_2)}{v^{\frac{1}{p}}(y_1,y_2)} |t_1t_2|^{\frac{1}{p}}\et \frac{1}{|y_1y_2|}\chi_{I_{t_1,t_2,u}}(y_1,y_2)dy_1dy_2,
\end{align*}
where
\begin{align*}
I_{t_1,t_2,u}&=
\begin{cases}
\lf(I_1\cup I'_1\rh) \cap\lf(I_2\cup I'_2\rh)\times \lf(I_1\cup I'_1\rh) \cap\lf(I_3\cup I'_3\rh), & {\rm if}\, t_1, t_2>0\\
\lf(I_1\cup I'_1\rh) \cap\lf(I_4\cup I'_4\rh)\times \lf(I_1\cup I'_1\rh) \cap\lf(I_5\cup I'_5\rh) , & {\rm if}\, t_1, t_2<0\\
\lf(I_1\cup I'_1\rh) \cap\lf(I_2\cup I'_2\rh)\times \lf(I_1\cup I'_1\rh) \cap\lf(I_5\cup I'_5\rh) , & {\rm if}\, t_1>0, t_2<0\\
\lf(I_1\cup I'_1\rh) \cap\lf(I_4\cup I'_4\rh)\times \lf(I_1\cup I'_1\rh) \cap\lf(I_3\cup I'_3\rh) , & {\rm if}\, t_1<0, t_2>0.
\end{cases} \\
&=
\begin{cases}
\lf(I_1\cap {I}_2\rh) \cup\lf(I'_1\cap I'_2\rh)\times \lf(I_1\cap I_3\rh) \cup\lf(I'_1\cap I'_3\rh), & {\rm if}\, t_1, t_2>0\\
\lf(I_1\cap I'_4\rh) \cup\lf(I'_1\cap I_4\rh)\times \lf(I_1\cap I'_5\rh) \cup\lf(I'_1\cap {I}_5\rh) , & {\rm if}\, t_1, t_2<0\\
\lf(I_1\cap {I}_2\rh) \cup\lf(I'_1\cap I'_2\rh)\times \lf(I_1\cap I'_5\rh) \cup\lf(I'_1\cap {I}_5\rh) , & {\rm if}\, t_1>0, t_2<0\\
\lf(I_1\cap I'_4\rh) \cup\lf(I'_1\cap I_4\rh)\times \lf(I_1\cap I_3\rh) \cup\lf(I'_1\cap I'_3\rh) , & {\rm if}\, t_1<0, t_2>0,
\end{cases}
\end{align*}
with
$$
I_2= \lf(\frac{u}{t_1}, \frac{1}{u t_1}\rh)\qquad I_3= \lf(\frac{u}{t_2}, \frac{1}{u t_2}\rh)\qquad I_4= \lf(\frac{1}{u t_1},\frac{u}{t_1}\rh)\qquad I_5=\lf(\frac{1}{u t_2}, \frac{u}{t_2}\rh).
$$
It is observed that if $t_1\in (-\infty, -\frac{1}{u^2}]\cup [-u^2,u^2]\cup\lf[\frac{1}{u^2},\infty\rh)\cup\{-1, 1\}$ and $t_2\in (-\infty, \infty)$, then
$I_{t_1,t_2,u} = \emptyset$
and therefore $h_u(t_1,t_2)=0$ in this case. The same happens if $t_2\in(-\infty, -\frac{1}{u^2}]\cup [-u^2,u^2]\cup\lf[\frac{1}{u^2},\infty\rh)\cup\{-1, 1\}$ and $t_1\in (-\infty, \infty)$; in that case too, $h_u(t_1,t_2)=0$.
We deal with the remaining cases as follows:
\smallskip
\noindent Case 1 : $t_1,t_2\in(u^2,1)$. In this case, it can be worked out that
\begin{equation*}
I_{t_1,t_2,u} = I_6\cup I'_6 \times I_7\cup I'_7, \quad \text{where } I_6= \lf(\frac{u}{t_1}, \frac{1}{u}\rh ),\quad I_7= \lf(\frac{u}{t_2}, \frac{1}{u}\rh )
\end{equation*}
and therefore,
\begin{equation*}
\et \frac{1}{|y_1y_2|}\chi_{I_{t_1,t_2,u}} (y_1,y_2)dy_1dy_2=\lf(4\log \frac{1}{u}\rh)^2\lf(1-\frac{\log |\frac{1}{t_1}|}{2\log \frac{1}{u}}\rh)\lf(1-\frac{\log |\frac{1}{t_2}|}{2\log \frac{1}{u}}\rh).
\end{equation*}
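Indeed, since $t_1\in(u^2,1)$,
\begin{equation*}
\int_{I_6\cup I'_6}\frac{dy_1}{|y_1|}=2\lf(\log\frac{1}{u}-\log\frac{u}{t_1}\rh)=4\log \frac{1}{u}\lf(1-\frac{\log |\frac{1}{t_1}|}{2\log \frac{1}{u}}\rh),
\end{equation*}
and the $y_2$-integral over $I_7\cup I'_7$ is computed in the same way; the product of the two one-dimensional integrals gives the displayed value. The remaining cases below are verified analogously.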
\noindent Case 2 : $t_1\in (1,\frac{1}{u^2})$, $t_2\in (u^2,1)$. In this case
\begin{equation*}
I_{t_1,t_2,u} = I_8\cup I'_8\times I_7\cup I'_7, \qquad \text{where } I_8=\lf(u, \frac{1}{ut_1}\rh)
\end{equation*}
so that
\begin{equation*}
\et \frac{1}{|y_1y_2|}\chi_{I_{t_1,t_2,u}}(y_1,y_2)dy_1dy_2=\lf(4\log \frac{1}{u}\rh)^2\lf(1-\frac{\log |t_1|}{2\log \frac{1}{u}}\rh)\lf(1-\frac{\log |\frac{1}{t_2}|}{2\log \frac{1}{u}}\rh).
\end{equation*}
\noindent Case 3 : $t_1\in (u^2,1)$, $t_2\in (1,\frac{1}{u^2})$. In this case
\begin{equation*}
I_{t_1,t_2,u} = I_6\cup I'_6 \times I_9\cup I'_9, \qquad \text{where } I_9=\lf(u, \frac{1}{ut_2}\rh)
\end{equation*}
so that
\begin{equation*}
\et \frac{1}{|y_1y_2|}\chi_{I_{t_1,t_2,u}}(y_1,y_2)dy_1dy_2=\lf(4\log \frac{1}{u}\rh)^2\lf(1-\frac{\log |\frac{1}{t_1}|}{2\log \frac{1}{u}}\rh)\lf(1-\frac{\log |t_2|}{2\log \frac{1}{u}}\rh).
\end{equation*}
\noindent Case 4 : $t_1,t_2\in (1,\frac{1}{u^2})$. In this case
\begin{equation*}
I_{t_1,t_2,u} = I_8\cup I'_8 \times I_9\cup I'_9
\end{equation*}
so that
\begin{equation*}
\et \frac{1}{|y_1y_2|}\chi_{I_{t_1,t_2,u}} (y_1,y_2)dy_1dy_2=\lf(4\log \frac{1}{u}\rh)^2\lf(1-\frac{\log |t_1|}{2\log \frac{1}{u}}\rh)\lf(1-\frac{\log |t_2|}{2\log \frac{1}{u}}\rh).
\end{equation*}
\noindent Case 5 : $t_1\in (-\frac{1}{u^2}, -1) ,t_2\in(u^2,1)$. In this case, it can be worked out that
\begin{equation*}
I_{t_1,t_2,u} = I_{10}\cup I'_{10} \times I_7\cup I'_7, \qquad \text{where } I_{10}= \lf(u, -\frac{1}{ut_1}\rh )
\end{equation*}
and therefore,
\begin{equation*}
\et \frac{1}{|y_1y_2|}\chi_{I_{t_1,t_2,u}}(y_1,y_2)dy_1dy_2=\lf(4\log \frac{1}{u}\rh)^2\lf(1-\frac{\log |t_1|}{2\log \frac{1}{u}}\rh)\lf(1-\frac{\log |\frac{1}{t_2}|}{2\log \frac{1}{u}}\rh).
\end{equation*}
\noindent Case 6 : $t_1\in (-1,-u^2)$, $t_2\in (u^2,1)$. In this case
\begin{equation*}
I_{t_1,t_2,u}=I_{11}\cup I'_{11} \times I_7\cup I'_7, \qquad \text{where } I_{11}=\lf(-\frac{u}{t_1}, \frac{1}{u}\rh)
\end{equation*}
so that
\begin{equation*}
\et \frac{1}{|y_1y_2|}\chi_{I_{t_1,t_2,u}}(y_1,y_2)dy_1dy_2=\lf(4\log \frac{1}{u}\rh)^2\lf(1-\frac{\log |\frac{1}{t_1}|}{2\log \frac{1}{u}}\rh)\lf(1-\frac{\log |\frac{1}{t_2}|}{2\log \frac{1}{u}}\rh).
\end{equation*}
\noindent Case 7 : $t_1\in (-\frac{1}{u^2},-1)$, $t_2\in (1,\frac{1}{u^2})$. In this case
\begin{equation*}
I_{t_1,t_2,u} = I_{10}\cup I'_{10} \times I_9\cup I'_9
\end{equation*}
so that
\begin{equation*}
\et \frac{1}{|y_1y_2|}\chi_{I_{t_1,t_2,u}} (y_1,y_2)dy_1dy_2=\lf(4\log \frac{1}{u}\rh)^2\lf(1-\frac{\log |t_1|}{2\log \frac{1}{u}}\rh)\lf(1-\frac{\log |t_2|}{2\log \frac{1}{u}}\rh).
\end{equation*}
\noindent Case 8 : $t_1\in(-1, -u^2)$, $t_2\in (1,\frac{1}{u^2})$. In this case
\begin{equation*}
I_{t_1,t_2,u} = I_{11}\cup I'_{11} \times I_9\cup I'_9
\end{equation*}
so that
\begin{equation*}
\et \frac{1}{|y_1y_2|}\chi_{I_{t_1,t_2,u}}(y_1,y_2)dy_1dy_2=\lf(4\log \frac{1}{u}\rh)^2\lf(1-\frac{\log |\frac{1}{t_1}|}{2\log \frac{1}{u}}\rh)\lf(1-\frac{\log |t_2|}{2\log \frac{1}{u}}\rh).
\end{equation*}
\noindent Case 9 : $t_1\in (-\frac{1}{u^2}, -1)$, $t_2\in (-1, -u^2)$. In this case
\begin{equation*}
I_{t_1,t_2,u} = I_{10}\cup I'_{10} \times I_{12}\cup I'_{12}, \qquad \text{where } I_{12}= \lf( -\frac{u}{t_2}, \frac{1}{u}\rh )
\end{equation*}
so that
\begin{equation*}
\et \frac{1}{|y_1y_2|}\chi_{I_{t_1,t_2,u}}(y_1,y_2)dy_1dy_2=\lf(4\log \frac{1}{u}\rh)^2\lf(1-\frac{\log |t_1|}{2\log \frac{1}{u}}\rh)\lf(1-\frac{\log |\frac{1}{t_2}|}{2\log \frac{1}{u}}\rh).
\end{equation*}
\noindent Case 10 : $t_1, t_2\in (-\frac{1}{u^2}, -1)$. In this case, it can be worked out that
\begin{equation*}
I_{t_1,t_2,u} = I_{10}\cup I'_{10} \times I_{13}\cup I'_{13}, \qquad \text{where } I_{13}= \lf( u, -\frac{1}{ut_2}\rh )
\end{equation*}
and therefore,
\begin{equation*}
\et \frac{1}{|y_1y_2|}\chi_{I_{t_1,t_2,u}}(y_1,y_2)dy_1dy_2=\lf(4\log \frac{1}{u}\rh)^2\lf(1-\frac{\log |t_1|}{2\log \frac{1}{u}}\rh)\lf(1-\frac{\log |t_2|}{2\log \frac{1}{u}}\rh).
\end{equation*}
\noindent Case 11 : $t_1, t_2\in (-1, -u^2)$. In this case
\begin{equation*}
I_{t_1,t_2,u} = I_{11}\cup I'_{11} \times I_{12}\cup I'_{12}
\end{equation*}
so that
\begin{equation*}
\et \frac{1}{|y_1y_2|}\chi_{I_{t_1,t_2,u}}(y_1,y_2)dy_1dy_2=\lf(4\log \frac{1}{u}\rh)^2\lf(1-\frac{\log |\frac{1}{t_1}|}{2\log \frac{1}{u}}\rh)\lf(1-\frac{\log |\frac{1}{t_2}|}{2\log \frac{1}{u}}\rh).
\end{equation*}
\noindent Case 12 : $t_1\in(-1, -u^2)$, $t_2\in (-\frac{1}{u^2},-1)$. In this case
\begin{equation*}
I_{t_1,t_2,u} = I_{11}\cup I'_{11} \times I_{13}\cup I'_{13}
\end{equation*}
so that
\begin{equation*}
\et \frac{1}{|y_1y_2|}\chi_{I_{t_1,t_2,u}}(y_1,y_2)dy_1dy_2=\lf(4\log \frac{1}{u}\rh)^2\lf(1-\frac{\log |\frac{1}{t_1}|}{2\log \frac{1}{u}}\rh)\lf(1-\frac{\log |{t_2}|}{2\log \frac{1}{u}}\rh).
\end{equation*}
\noindent Case 13 : $t_1\in(u^2, 1)$, $ t_2\in ( -1, -u^2)$. In this case, it can be worked out that
\begin{equation*}
I_{t_1,t_2,u} = I_6\cup I'_6 \times I_{12}\cup I'_{12}
\end{equation*}
and therefore,
\begin{equation*}
\et \frac{1}{|y_1y_2|}\chi_{I_{t_1,t_2,u}}(y_1,y_2)dy_1dy_2=\lf(4\log \frac{1}{u}\rh)^2\lf(1-\frac{\log |\frac{1}{t_1}|}{2\log \frac{1}{u}}\rh)\lf(1-\frac{\log |\frac{1}{t_2}|}{2\log \frac{1}{u}}\rh).
\end{equation*}
\noindent Case 14 : $t_1\in (1,\frac{1}{u^2})$, $t_2\in (-1, -u^2)$. In this case
\begin{equation*}
I_{t_1,t_2,u} = I_8\cup I'_8 \times I_{12}\cup I'_{12}
\end{equation*}
so that
\begin{equation*}
\et \frac{1}{|y_1y_2|}\chi_{I_{t_1,t_2,u}}(y_1,y_2)dy_1dy_2=\lf(4\log \frac{1}{u}\rh)^2\lf(1-\frac{\log |{t_1}|}{2\log \frac{1}{u}}\rh)\lf(1-\frac{\log |\frac{1}{t_2}|}{2\log \frac{1}{u}}\rh).
\end{equation*}
\noindent Case 15 : $t_1\in(u^2, 1)$, $t_2\in (-\frac{1}{u^2}, -1)$. In this case
\begin{equation*}
I_{t_1,t_2,u} = I_6\cup I'_6 \times I_{13}\cup I'_{13}
\end{equation*}
so that
\begin{equation*}
\et \frac{1}{|y_1y_2|}\chi_{I_{t_1,t_2,u}}(y_1,y_2)dy_1dy_2=\lf(4\log \frac{1}{u}\rh)^2\lf(1-\frac{\log |\frac{1}{t_1}|}{2\log \frac{1}{u}}\rh)\lf(1-\frac{\log |{t_2}|}{2\log \frac{1}{u}}\rh).
\end{equation*}
\noindent Case 16 : $t_1\in(1, \frac{1}{u^2})$, $t_2\in (-\frac{1}{u^2},-1)$. In this case
\begin{equation*}
I_{t_1,t_2,u} = I_8\cup I'_8 \times I_{13}\cup I'_{13}
\end{equation*}
so that
\begin{equation*}
\et \frac{1}{|y_1y_2|}\chi_{I_{t_1,t_2,u}}(y_1,y_2)dy_1dy_2=\lf(4\log \frac{1}{u}\rh)^2\lf(1-\frac{\log |{t_1}|}{2\log \frac{1}{u}}\rh)\lf(1-\frac{\log |{t_2}|}{2\log \frac{1}{u}}\rh).
\end{equation*}
Combining the above information and taking $f$ and $g$ as $f_u$ and $g_u$ respectively in (\ref {h0}), we obtain that
\begin{align}\label{2.t3}
\mathcal{J}= &\et \frac{|\phi(t_1,t_2)|}{|t_1t_2|^{2\al+2}}h_u(t_1,t_2)dt_1dt_2\nonumber\\
& \geq \et \frac{|\phi(t_1,t_2)|}{|t_1t_2|^{{2\al+2-\frac{1}{p}}}} \inf_{(y_1,y_2)\in \R^2}\lf(\frac {v(t_1y_1, t_2y_2)}{v(y_1,y_2)}\rh)^{\frac{1}{p}}\et \frac{1}{|y_1y_2|}\chi_{I_{t_1,t_2,u}}(y_1,y_2)dy_1dy_2 dt_1dt_2\nonumber\\
& =\lf(4 \log \frac{1}{u}\rh)^2 \lf( \int_{-\frac{1}{u^2}}^{-u^2}\int_{-\frac{1}{u^2}}^{-u^2}+ \int_{u^2}^{\frac{1}{u^2}}\int_{u^2}^{\frac{1}{u^2}}+\int_{u^2}^{\frac{1}{u^2}}\int_{-\frac{1}{u^2}}^{-u^2}+\int_{-\frac{1}{u^2}}^{-u^2}\int_{u^2}^{\frac{1}{u^2}}\rh)\lf(\frac{|\phi(t_1,t_2)|}{|t_1t_2|^{{2\al+2}-\frac{1}{p}}}\rh.\times\nonumber\\
& \quad \times \inf_{(y_1,y_2)\in \R^2}\lf(\frac {v(t_1y_1, t_2y_2)}{v(y_1,y_2)}\rh)^{\frac{1}{p}}\lf.\lf(1-\frac{\xi(t_1)}{2\log \frac{1}{u}}\rh)\lf(1-\frac{\xi(t_2)}{2\log \frac{1}{u}}\rh)dt_1dt_2\rh),
\end{align}
where for $i=1,2$
\begin{equation*}
\xi(t_i)=
\begin{cases}
\log |\frac{1}{t_i}|,& {\rm if}\, t_i\in (u^2, 1]\cup(-1, -u^2] \\
\log |t_i| , & {\rm if}\, t_i\in (1, 1/u^2]\cup (-1/u^2, -1).
\end{cases}
\end{equation*}
Now, using the test functions $f_u$, $g_u$ in (\ref{h2.1.1}) together with (\ref{h2.1.2}) and (\ref{2.t3}), we get
\begin{align*}
&\lf( \int_{-\frac{1}{u^2}}^{-u^2}\int_{-\frac{1}{u^2}}^{-u^2}+ \int_{u^2}^{\frac{1}{u^2}}\int_{u^2}^{\frac{1}{u^2}}+\int_{u^2}^{\frac{1}{u^2}}\int_{-\frac{1}{u^2}}^{-u^2}+\int_{-\frac{1}{u^2}}^{-u^2}\int_{u^2}^{\frac{1}{u^2}}\rh)
\lf(\frac{|\phi(t_1,t_2)|}{|t_1t_2|^{{{2\al+2}-\frac{1}{p}}}} \inf_{(y_1,y_2)\in \R^2}\lf(\frac {v(t_1y_1, t_2y_2)}{v(y_1,y_2)}\rh)^{\frac{1}{p}}\rh.\times\\
&\quad\times \lf.\lf(1-\frac{\xi(t_1)}{2\log \frac{1}{u}}\rh)\lf(1-\frac{\xi(t_2)}{2\log \frac{1}{u}}\rh)dt_1dt_2\rh)\leq \|\sH_{\al, \phi}\|.
\end{align*}
By the Monotone Convergence Theorem, on letting $u\rightarrow 0$ we obtain
$$\et\frac{|\phi(t_1,t_2)|}{|t_1t_2|^{{{2\al+2}-\frac{1}{p}}}} \inf_{(y_1,y_2)\in \R^2}\lf(\frac {v(t_1y_1, t_2y_2)}{v(y_1,y_2)}\rh)^{\frac{1}{p}} dt_1dt_2\leq \|\sH_{\al,\phi}\|$$
and we are done.
\end{proof}
Along the lines of Theorem \ref{t2.3}, a characterization of the boundedness of $\sH_{\al,\phi}:L^p_v(\R^2)\to L^p_v(\R^2)$ can be obtained. Precisely, we have the following:
\begin{theorem}\label{t3.4}
Let $1<p<\infty$, let $v$ be a weight function and let $\phi\in L^1(\R^2)$. Suppose that the following holds for some constant $c>0$ and every $(t_1,t_2)\in\R^2$:
$$
\sup_{(y_1,y_2)\in \R^2}\lf(\frac {v(t_1y_1, t_2y_2)}{v(y_1,y_2)}\rh)\le c \inf_{(y_1,y_2)\in \R^2}\lf(\frac {v(t_1y_1, t_2y_2)}{v(y_1,y_2)}\rh).
$$
Then the operator $\sH_{\al,\phi}:L^p_v(\R^2)\to L^p_v(\R^2)$ is bounded if and only if $\sA_{\sup} <\infty$ and moreover the following estimates hold:
\begin{equation*}
{1\over c} \sA_{\sup} \le \|\sH_{\al,\phi}\|_{L^p_v(\R^2)\to L^p_v(\R^2)}\le \sA_{\sup}.
\end{equation*}
\end{theorem}
\begin{corollary}\label{t3.5} Let $1<p<\infty$, let $v$ be a weight function and let $\phi\in L^1(\R^2)$. If there exists a function $h$ such that $v(x_1y_1,x_2y_2)=v(x_1,x_2)h(y_1,y_2)$ for all $(x_1,x_2),(y_1,y_2)\in\R^2$, then $\sA_{\sup} = \sA_{\inf}$ and
\begin{equation*}
\|\sH_{\al,\phi}\|_{L^p_v(\R^2)\to L^p_v(\R^2)} = \ent \frac{|\phi(t_1,t_2)|}{{|t_1t_2|^{2\al+2-\frac{1}{p}}}}h^{1/p}(t_1,t_2)\, dt_1dt_2.
\end{equation*}
\end{corollary}
\begin{remark}
For $\al=-1/2$, $\sH_{\al,\phi}$ reduces to the two dimensional Hausdorff operator
$$
\sH_{\phi}f(x_1,x_2)=\et \frac{|\phi(t_1,t_2)|}{|t_1t_2|}f\lf(\frac{x_1}{t_1},\frac{x_2}{t_2}\rh)dt_1dt_2
$$
which, on replacing $\phi(s_1,s_2)=\frac{1}{s_1s_2}\psi\lf(\frac{1}{s_1},\frac{1}{s_2}\rh)$, becomes equivalent to
$$
\mathscr G_\psi g(x_1,x_2)=\et \psi\lf(\frac{t_1}{x_1},\frac{t_2}{x_2}\rh) g(t_1,t_2)\,dt_1dt_2.
$$
The $L^p(\R^+\times \R^+)$-boundedness of $\mathscr G_\psi$ (consequently of $\mathscr H_\psi$) was proved in \cite{jain}. Moreover, if we take
$$
\phi(t_1,t_2)=\frac{1}{t_1t_2}\chi_{(1,\infty)}(t_1) \chi_{(1,\infty)}(t_2)
$$
then $\sH_{\phi} f$ becomes the two-dimensional Hardy operator \cite{sawyer}
$$
H_2 f(x_1,x_2)=\frac{1}{x_1x_2}\int_0^{x_1} \int_0^{x_2} f(t_1,t_2)\,dt_1dt_2.
$$
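Indeed, for $x_1x_2\neq 0$, substituting $s_i=x_i/t_i$ in each variable gives
\begin{equation*}
\sH_{\phi}f(x_1,x_2)=\int_1^{\infty}\!\!\int_1^{\infty}\frac{1}{t_1^2t_2^2}\,f\lf(\frac{x_1}{t_1},\frac{x_2}{t_2}\rh)dt_1dt_2
=\frac{1}{x_1x_2}\int_0^{x_1}\!\!\int_0^{x_2}f(s_1,s_2)\,ds_1ds_2=H_2f(x_1,x_2).
\end{equation*}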
\end{remark}
\section{Some generalizations}
In this section, we shall prove generalizations of some of the theorems obtained in the previous sections. To begin with, the following theorem is a two-weight generalization of Theorem \ref{t2.1}:
\begin{theorem}\label{t4.1}
Let $1<p<\infty$, $v,w$ be weight functions and $\phi\in L^1(\R)$ be such that $$\displaystyle \ent \frac{|\phi(t)|}{{|t|^{{2\al+2}-\frac{1}{p}}}}\lf(\sup_{y\in\R}\frac{v(ty)}{w(y)}\rh)^{1/p} dt =:B_{\sup}<\infty.$$ Then the operator $H_{\al,\phi}:L^p_w(\R)\to L^p_v(\R)$ is bounded and
$$\|H_{ \al,\phi}f\|_{L^p_v(\R)}\leq B_{\sup} \|f\|_{L^{p}_{w}(\R)}.
$$
\end{theorem}
\begin{proof}
It follows along the same lines as the proof of Theorem \ref{t2.1}, on replacing $vv^{-1}$ by $ww^{-1}$ in \eqref{enew1}.
\end{proof}
Theorem \ref{t4.1} has the following version for two indices $p,q$:
\begin{theorem}\label{t4.2}
Let $1<q<p<\infty$, let $v,w$ be weight functions and let $\phi\in L^1(\R)$ be such that $$\displaystyle \ent \frac{|\phi(t)|}{{|t|^{{2\al+2}-\frac{1}{q}}}}\lf(\ent\frac{[v(ty)]^\frac{p}{p-q}}{[w(y)]^\frac{q}{p-q}}dy\rh)^\frac{p-q}{pq} dt =:D_{\sup}<\infty.$$ Then the operator $H_{\al,\phi}:L^p_w(\R)\to L^q_v(\R)$ is bounded and
$$\|H_{ \al,\phi}f\|_{L^q_v(\R)}\leq D_{\sup} \|f\|_{L^{p}_{w}(\R)}.
$$
\end{theorem}
\begin{proof}
Let $f \in L^p_w(\R)$. Then, by using the generalised Minkowski inequality, a change of variables and H\"older's inequality for $p/q>1$, we have
\begin{align*}
\|H_{ \al,\phi}f\|_{L^q_v(\R)}
& =\lf(\ent |H_{ \al,\phi} f(x)|^q v(x)dx \rh)^\frac{1}{q} \\
& = \lf(\ent\lf|\ent \frac{|\phi(t)|}{|t|^{2\al+2}}f\lf(\frac{x}{t}\rh) dt\rh|^q v(x)dx \rh)^\frac{1}{q} \\
& \leq \lf(\ent \lf(\ent \frac{|\phi(t)|}{|t|^{2\al+2}}\lf|f\lf(\frac{x}{t}\rh)\rh|v^{\frac{1}{q}}(x)dt\rh)^q dx \rh)^\frac{1}{q}\\
& \leq \ent \frac{|\phi(t)|}{|t|^{2\al+2}}|t|^{\frac{1}{q}} \lf(\ent \lf|f(y)\rh|^q v(yt)dy\rh)^\frac{1}{q} dt\\
& = \ent \frac{|\phi(t)|}{|t|^{{2\al+2}-\frac{1}{q}}}\lf(\ent \lf|f(y)\rh|^q w^{\frac{q}{p}}(y)w^{-\frac{q}{p}}(y) v(yt)dy\rh)^{\frac{1}{q}} dt\\
&\leq \lf(\ent \frac{|\phi(t)|}{|t|^{{2\al+2}-\frac{1}{q}}} \lf(\ent\frac{[v(ty)]^\frac{p}{p-q}}{[w(y)]^\frac{q}{p-q}}dy\rh)^\frac{p-q}{pq} dt \rh)\lf(\ent |f(y)|^p w(y) dy \rh)^{\frac{1}{p}}\\
& = D_{\sup} \|f\|_{L^p_w(\R)}
\end{align*}
and the assertion follows.
\end{proof}
Two dimensional versions of Theorems \ref{t4.1} and \ref{t4.2} can also be proved. We only state the results.
\begin{theorem}
Let $1<p<\infty$, $v,w$ be weight functions and $\phi\in L^1(\R^2)$ be such that $$\displaystyle \et \frac{|\phi(t_1, t_2)|}{{|t_1t_2|^{{2\al+2}-\frac{1}{p}}}}\lf(\sup_{(y_1, y_2)\in\R^2}\frac{v(t_1y_1, t_2y_2)}{w(y_1,y_2)}\rh)^{1/p} dt_1dt_2 =:\mathscr B_{\sup}<\infty.$$ Then the operator $\sH_{\al,\phi}:L^p_w(\R^2)\to L^p_v(\R^2)$ is bounded and
$$\|\sH_{ \al,\phi}f\|_{L^p_v(\R^2)}\leq {\mathscr B}_{\sup} \|f\|_{L^{p}_{w}(\R^2)}.
$$
\end{theorem}
\begin{theorem}
Let $1<q<p<\infty$, let $v,w$ be weight functions and let $\phi\in L^1(\R^2)$ be such that $$\displaystyle \et \frac{|\phi(t_1,t_2)|}{{|t_1t_2|^{{2\al+2}-\frac{1}{q}}}}\lf(\et \frac{[v(t_1y_1, t_2y_2)]^\frac{p}{p-q}}{[w(y_1,y_2)]^\frac{q}{p-q}}\,dy_1dy_2\rh)^\frac{p-q}{pq} dt_1dt_2 =:\mathscr D_{\sup}<\infty.$$ Then the operator $\mathscr H_{\al,\phi}:L^p_w(\R^2)\to L^q_v(\R^2)$ is bounded and
$$\|\mathscr H_{ \al,\phi}f\|_{L^q_v(\R^2)}\leq \mathscr D_{\sup} \|f\|_{L^{p}_{w}(\R^2)}.
$$
\end{theorem}
\noindent {\it Acknowledgement.} The third author acknowledges the MATRICS Research Grant No. MTR/2017/000126 of SERB, Department of Science and Technology, India.
| {
"timestamp": "2020-07-23T02:09:30",
"yymm": "2007",
"arxiv_id": "2007.11216",
"language": "en",
"url": "https://arxiv.org/abs/2007.11216",
"abstract": "In this paper, the $L^p_v(\\R)$-boundedness of the Dunkl-Hausdorff operator $\\displaystyle H_{\\al,\\phi} f(x)=\\ent\\frac{ |\\phi(t)|}{|t|^{2\\al+2}}f\\lf(\\frac{x}{t}\\rh) dt $\nhas been characterized and for a certain type of weight $v$, the precise value of the norm $\\|H_{\\al,\\phi}\\|_{L^p_v(\\R)\\to L^p_v(\\R)}$ has been obtained. This covers several of the existing results. Analogous results in two dimensions have also been proved.",
"subjects": "Functional Analysis (math.FA)",
"title": "Boundedness of Dunkl-Hausdorff operator in Lebesgue spaces",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9802808678600414,
"lm_q2_score": 0.7217432182679956,
"lm_q1q2_score": 0.70751106837585
} |
https://arxiv.org/abs/1608.06782 | Determination of two unknown thermal coefficients through an inverse one-phase fractional Stefan problem | We consider a semi-infinite one-dimensional phase-change material with two unknown constant thermal coefficients among the latent heat per unit mass, the specific heat, the mass density and the thermal conductivity. Aiming at the determination of them, we consider an inverse one-phase Stefan problem with an over-specified condition at the fixed boundary and a known evolution for the moving boundary. We assume that the phase-change process presents latent-heat memory effects by considering a fractional time derivative of order $\alpha$ ($0<\alpha<1$) in the Caputo sense and a sharp front model for the interface. According to the choice of the unknown thermal coefficients, six inverse fractional Stefan problems arise. For each of them, we determine necessary and sufficient conditions on data to obtain the existence and uniqueness of a solution of similarity type. Moreover, we present explicit expressions for the temperature and the unknown thermal coefficients. Finally, we show that the results for the classical statement of this problem, associated with $\alpha=1$, are obtained through the fractional model when $\alpha\to 1^-$. | \section{Introduction}
Determination of thermal coefficients for phase-change materials through inverse Stefan problems has been widely studied during the last decades \cite{CeTa2015,CeTa2016,KaIs2012,Ta1982,Ta1983,Ta1987}. Especially, phase-change processes involving solidification or melting have been extensively studied because of their scientific and technological applications \cite{AlSo1993,Ca1984,CaJa1959,Cr1984,Fa2005,Gu2003,Lu1991,Ru1971,Ta2011}. A review of a long bibliography on moving and free boundary value problems for the heat equation can be consulted in \cite{Ta2000}. Recently, a new sort of Stefan problem including time-fractional derivatives has begun to be studied \cite{JiMi2009,KhZaFe2003,Vo2010,Vo2014,VoFaGa2013,RoSa2013,
RoSa2014,RoTa2014}. Some references on fractional derivatives can be found in \cite{KiSrTr2006,Ma2010,Po1999,GoLuMa1999,Lu2010,MaLuPa2001}. In \cite{RoSa2013,RoSa2014,RoTa2014}, free boundary value problems are considered which are obtained by replacing the time derivative in a one-phase Stefan problem by a fractional derivative of order $\alpha$ ($0<\alpha<1$) in the Caputo sense \cite{Ca1967}, and explicit solutions of similarity type are given for the resultant {\em fractional Stefan problems}. A physical interpretation of the problems considered in \cite{RoSa2013,RoSa2014,RoTa2014} is given in \cite{VoFaGa2013}. In that article, the authors derive fractional Stefan problems for phase-change processes by substituting the {\em local} expression of the heat flux given by the Fourier law for a new {\em non-local} definition. They consider a heat flux given as a weighted sum of local fluxes back in time, which they express in terms of the Riemann-Liouville integral of order $\alpha$ ($0<\alpha<1$) of the local flux given by the Fourier law. They also explain how this change implies that the new model takes into consideration latent-heat memory effects in the evolution of the phase-change process, and they give the parameter $\alpha$ the physical meaning of the {\em strength of memory retention}. This fractional model reduces to the classical Stefan problem when $\alpha=1$. The same occurs with the solutions of similarity type given in \cite{RoSa2013,RoSa2014,RoTa2014}, in the sense that they converge to the similarity solutions of the classical Stefan problems to which they are related when $\alpha\to 1^-$. To the authors' knowledge, the first use of inverse fractional Stefan problems for the determination of thermal coefficients has been made recently in \cite{Ta2015-c}. In that article the author studies the determination of one unknown thermal coefficient for a semi-infinite material through a fractional one-phase Stefan problem with an over-specified condition at the fixed boundary. Necessary and sufficient conditions on data to obtain the existence and uniqueness of solutions of similarity type are established, and explicit expressions for the temperature of the material, the free boundary and the unknown thermal coefficient are given. Moreover, it has been shown that the results obtained through the fractional model reduce to the results previously obtained in \cite{Ta1982} for the determination of one unknown thermal coefficient using a classical inverse Stefan problem. Encouraged by \cite{Ta1983,Ta2015-c}, we consider here the problem of determining two unknown thermal coefficients through an inverse fractional one-phase Stefan problem for which the evolution of the free boundary is known.
In order to have dimensional coherence in the time fractional heat equation as well as in the fractional Stefan condition, we have included two extra parameters $\mu_\alpha,\,\nu_\alpha\in(0,1]$ in the model, which are such that:
\begin{subequations}\label{munu-limit}
\begin{align}
\label{mu-limit}&\mu_\alpha\to 1\quad\quad\text{when }\alpha\to 1^-\\
\label{nu-limit}&\nu_\alpha\to 1\quad\quad\text{when }\alpha\to 1^-.
\end{align}
\end{subequations}
In particular, $\mu_\alpha$ and $\nu_\alpha$ can be considered equal to 1 with the corresponding physical dimension (see below).
\noindent More precisely, we consider the following inverse problem for a one-phase melting process:
\begin{subequations}\label{Problema}
\begin{align}
\label{1}&D^\alpha T(x,t)=\mu_{\alpha}\lambda^2T_{xx}(x,t)&0<x<s(t),\,&t>0\\
\label{2}&T(s(t),t)=T_m& &t>0\\
\label{3}&-kT_x(s(t),t)=\nu_\alpha\rho lD^\alpha s(t)& &t>0\\
\label{4}&T(0,t)=T_0& &t>0\\
\label{5}&kT_x(0,t)=-\frac{q_0}{t^{\alpha/2}}& &t>0
\end{align}
\end{subequations}
where the unknowns are the temperature $T$ [$^\circ$C] of the liquid phase and two thermal coefficients among:
\begin{table}[h!]
\begin{tabular}{rll}
$k>0$:& thermal conductivity& [$W\,{m^{-1}}\,\left(^\circ C\right)^{-1}$]\\
$\rho>0$:& mass density& [$kg\,m^{-3}$]\\
$l>0$:& latent heat per unit mass& [$J\,kg^{-1}$]\\
$c>0$:& specific heat& [$J\,kg^{-1}\,\left(^\circ C\right)^{-1}$]
\end{tabular}
\end{table}
\noindent According to \cite{Hr2011-a,Hr2011-b}, the coefficient $\mu_\alpha\lambda^2$ [$m^2\,s^{-\alpha}$] in equation (\ref{1}) is a sort of {\em fractional diffusion coefficient}, $\lambda^2$ [$m^2\,s^{-1}$] being the thermal diffusivity given by:
\begin{equation*}
\lambda^2=\frac{k}{\rho c}\hspace{2cm}(\lambda>0).
\end{equation*}
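A quick dimensional check confirms the role of these parameters: since $\mu_\alpha$ carries the dimension $s^{1-\alpha}$ (see the table below) and $[\lambda^2]=m^2\,s^{-1}$, one indeed has $[\mu_\alpha\lambda^2]=m^2\,s^{-\alpha}$, consistent with the operator $D^\alpha$ in (\ref{1}); similarly, $[\nu_\alpha\rho\, l\, D^\alpha s]=s^{\alpha-1}\cdot kg\,m^{-3}\cdot J\,kg^{-1}\cdot m\,s^{-\alpha}=W\,m^{-2}$, which is the dimension of the heat flux $-kT_x$ in (\ref{3}).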
\noindent We assume that the remaining coefficients:
\begin{table}[h!]
\begin{tabular}{rll}
$T_m>0$:& phase-change temperature& [$^\circ C$]\\
$T_0>T_m$:& temperature at the boundary $x=0$& [$^\circ C$]\\
$q_0>0$:& coefficient characterizing the heat flux at $x=0$& [$W\,m^{-2}\,s^{\alpha/2}$]\\
$0<\alpha<1$:& strength of the memory retention & (dimensionless)\\
$0<\mu_{\alpha}\leq 1$: & parameter required to have dimensional coherence in equation (\ref{1}) &[$s^{1-\alpha}$]\\
$0<\nu_{\alpha}\leq 1$: & parameter required to have dimensional coherence in condition (\ref{3}) &[$s^{\alpha-1}$]
\end{tabular}
\end{table}
\noindent involved in Problem (\ref{Problema}), are all known (e.g. through a phase-change experiment). Aiming at the simultaneous determination of two unknown thermal coefficients, we consider that the time evolution of the sharp interface $s$ is also known. More precisely, we follow \cite{RoSa2013,RoSa2014,RoTa2014} in assuming that it is given by:
\begin{equation}\label{s}
s(t)=\sigma t^{\alpha/2},\quad t>0,
\end{equation}
with $\sigma>0$ [$m\,s^{-\alpha/2}$]. The operator $D^\alpha$ in (\ref{1}) and (\ref{3}) represents the fractional time derivative of order $\alpha$ in the Caputo sense, which is defined by \cite{Ca1967}:
\begin{equation}\label{DerCaputo}
D^\alpha f(t)=\left\{\begin{array}{lcl}
\frac{1}{\Gamma(1-\alpha)}\displaystyle\int_0^t\frac{f'(\tau)}{(t-\tau)^\alpha}d\tau&\text{ if }&0<\alpha<1\\
f'(t)&\text{ if }&\alpha=1
\end{array}\right.,\quad t>0
\end{equation}
for any $f\in W^1(\mathbb{R}^+)=\left\{f\in C^1(\mathbb{R}^+)/f'\in L^1(\mathbb{R}^+)\right\}$, where $\Gamma$ is the Gamma function defined by:
\begin{equation*}
\Gamma(x)=\displaystyle\int_0^{+\infty} s^{x-1}\exp(-s)ds,\quad x>0.
\end{equation*}
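For instance, for $f(t)=t$ the definition above gives
\begin{equation*}
D^\alpha t=\frac{1}{\Gamma(1-\alpha)}\displaystyle\int_0^t (t-\tau)^{-\alpha}\,d\tau=\frac{t^{1-\alpha}}{\Gamma(2-\alpha)},\quad t>0,
\end{equation*}
which recovers the classical derivative $f'(t)=1$ as $\alpha\to 1^-$.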
\section{Solutions of similarity type}
We begin with a definition of being a solution to the inverse fractional Stefan problem (\ref{Problema}), which is based in the definition established in \cite{RoSa2013,RoSa2014} for the direct case.
\begin{definition}
The triplet given by the temperature $T$ and two unknown thermal coefficients among $k$, $\rho$, $l$ and $c$ is a solution to the inverse fractional one-phase Stefan problem (\ref{Problema}) if:
\begin{enumerate}\label{DefSol}
\item $T$ is defined in $\mathbb{R}_0^+\times\mathbb{R}_0^+$.
\item $T\in C(\Omega)\cap C_x^2(\mathbb{R}^+\times\mathbb{R}^+)\cap W_t^1(\mathbb{R}^+)$,
where $\Omega=\left\{(x,t)\,/\,0<x<s(t),\,t>0\right\}$ and $W_t^1(\mathbb{R}^+)=\left\{f:\mathbb{R}^+\times\mathbb{R}^+\to\mathbb{R}/\,f(x,\cdot)\in W^1(\mathbb{R}^+)\,\forall\,x>0\right\}$.
\item $T$ is a continuous function on $\Omega\cup\partial_p\Omega$, where $\partial_p\Omega=\left\{(0,t)/t>0\right\}\cup\left\{(s(t),t)/t>0\right\}$, except possibly at the point $(0,0)$, where the following condition must be satisfied:
\begin{equation*}
0\leq \displaystyle\liminf_{(x,t)\to(0,0)}T(x,t)\leq\displaystyle\limsup_{(x,t)\to(0,0)}T(x,t)<\infty.
\end{equation*}
\item For all $t>0$, the derivative $\frac{\partial}{\partial x}T(s(t),t)$ exists.
\item The unknown thermal coefficients are positive real numbers.
\item The temperature $T$ and the two unknown thermal coefficients verify (\ref{Problema}).
\end{enumerate}
\end{definition}
\begin{remark}
We note that the function $s$ given by (\ref{s}) is consistent with the definition of being a solution to a (direct) fractional one-phase Stefan problem given in \cite{RoSa2013,RoSa2014}, since it is a positive function belonging to $C(\mathbb{R}_0^+)\cap W^1(\mathbb{R}^+)$.
\end{remark}
Encouraged by \cite{RoSa2013,RoSa2014,RoTa2014,Ta2015-c}, in this section we are looking for a solution of similarity type to Problem (\ref{Problema}). That is, we look for a temperature function $T$ such that:
\begin{equation}\label{TPreliminar}
T(x,t)=A+B\left(1-W\left(-\frac{x}{\sqrt{\mu_\alpha}\lambda t^{\alpha/2}},-\frac{\alpha}{2},1\right)\right),\quad 0<x<s(t),\,t>0,
\end{equation}
where $A$ and $B$ are real numbers that must be determined, and $W$ is the {\em Wright function} given by \cite{Wr1934,Wr1940}:
\begin{equation}\label{Wright}
W(z,a,b)=\displaystyle\sum_{k=0}^{+\infty}\frac{z^k}{k!\Gamma(a k+b)},\quad z\in\mathbb{C},\,a>-1,\,b\in\mathbb{C}.
\end{equation}
According to \cite{RoSa2013,RoSa2014}, we know that the function $T$ given by (\ref{TPreliminar}) fulfills conditions 1 to 4 in Definition \ref{DefSol} and that it satisfies the fractional diffusion equation (\ref{1}). Noting that $W\left(0,-\frac{\alpha}{2},1\right)=1$, it follows from condition (\ref{4}) that:
\begin{equation}\label{A}
A=T_0.
\end{equation}
From this and condition (\ref{2}), we have that:
\begin{equation}\label{B}
B=\frac{T_m-T_0}{1-W\left(-\frac{\sigma}{\sqrt{\mu_\alpha}\lambda},-\frac{\alpha}{2},1\right)}.
\end{equation}
We observe that $W\left(-\frac{\sigma}{\sqrt{\mu_{\alpha}}\lambda},-\frac{\alpha}{2},1\right)\neq 1$ because $W\left(-x,-\frac{\alpha}{2},1\right)$ is a strictly decreasing function in $\mathbb{R}^+$ \cite{RoSa2013} and, as we have already noted, $W\left(0,-\frac{\alpha}{2},1\right)=1$.
\noindent By taking into consideration that the Wright function satisfies \cite{Wr1934,Wr1940}:
\begin{equation*}
\frac{d}{dz}W(z,a,b)=W(z,a,a+b),\quad z\in\mathbb{C},\,a>-1,\,b\in\mathbb{C},
\end{equation*}
and the fractional Caputo derivative of a power function with positive exponent is given by \cite{Po1999}:
\begin{equation*}
D^\alpha t^p=\frac{\Gamma(p+1)}{\Gamma\left(p-\alpha+1\right)}t^{p-\alpha},\quad t>0,\,p>0,
\end{equation*}
it follows from the fractional Stefan condition (\ref{3}) that:
\begin{equation}\label{Eq1-Preliminar}
\frac{\sqrt{\mu_\alpha}l\left[1-W\left(-\frac{\sigma}{\sqrt{\mu_{\alpha}}\lambda},-\frac{\alpha}{2},1\right)\right]}{\lambda c M_{\frac{\alpha}{2}}\left(\frac{\sigma}{\sqrt{\mu_\alpha}\lambda}\right)}=
\frac{(T_0-T_m)\Gamma\left(-\frac{\alpha}{2}+1\right)}{\nu_\alpha\sigma\Gamma\left(\frac{\alpha}{2}+1\right)},
\end{equation}
where $M_{\nu}$ is the {\em Mainardi function}, which is defined by \cite{Ma1995}:
\begin{equation}\label{Mainardi}
M_{\nu}(z)=W\left(-z,-\nu,1-\nu\right),\quad z\in\mathbb{C},\,0<\nu<1
\end{equation}
and satisfies $M_\nu(z)>0$ for all $z\in\mathbb{R}^+$ \cite{RoSa2013}.
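In particular, by the power rule just recalled, the fractional derivative of the moving boundary (\ref{s}) is
\begin{equation*}
D^\alpha s(t)=\sigma\,\frac{\Gamma\left(\frac{\alpha}{2}+1\right)}{\Gamma\left(1-\frac{\alpha}{2}\right)}\,t^{-\frac{\alpha}{2}},\quad t>0,
\end{equation*}
which is the quantity entering the right-hand side of the fractional Stefan condition (\ref{3}) in the computation leading to (\ref{Eq1-Preliminar}).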
\noindent Finally, when we consider $B$ given by (\ref{B}), the heat flux boundary condition (\ref{5}) implies that the following equality must be satisfied:
\begin{equation}\label{Eq2-Preliminar}
\frac{\sqrt{\mu_{\alpha}}q_0\lambda}{k}\left[1-W\left(-\frac{\sigma}{\sqrt{\mu_{\alpha}}\lambda},-\frac{\alpha}{2},1\right)\right]=\frac{T_0-T_m}{\Gamma\left(-\frac{\alpha}{2}+1\right)}.
\end{equation}
We have thus proved the following result:
\begin{theorem}\label{ThCaracterizacion}
If the moving boundary $s$ is defined by (\ref{s}), then the function $T$ given by:
\begin{equation}\label{T}
T(x,t)=T_0-\frac{T_0-T_m}{1-W\left(-\xi,-\frac{\alpha}{2},1\right)}\left[1-W\left(-\frac{x}{\sqrt{\mu_\alpha}\lambda t^{\alpha/2}},-\frac{\alpha}{2},1\right)\right],\quad 0<x<s(t),\,t>0
\end{equation}
is a solution to Problem (\ref{Problema}) with two unknown thermal coefficients among $k$, $\rho$, $l$ and $c$, if and only if these are a solution to the following system of equations:
\begin{subequations}\label{SistEq}
\begin{align}
\label{Eq1}&\frac{\xi\left[1-W\left(-\xi,-\frac{\alpha}{2},1\right)\right]}{M_{\frac{\alpha}{2}}(\xi)}=
\frac{c(T_0-T_m)\Gamma\left(-\frac{\alpha}{2}+1\right)}{\mu_\alpha\nu_\alpha l\Gamma\left(\frac{\alpha}{2}+1\right)}\\
\label{Eq2}&1-W\left(-\xi,-\frac{\alpha}{2},1\right)=\frac{\sqrt{k\rho c}(T_0-T_m)}{\sqrt{\mu_{\alpha}}q_0\Gamma\left(-\frac{\alpha}{2}+1\right)}
\end{align}
\end{subequations}
where the dimensionless parameter $\xi$ is defined by:
\begin{equation}\label{xi}
\xi=\frac{\sigma}{\sqrt{\mu_\alpha}\lambda}.
\end{equation}
\end{theorem}
\section{Existence and uniqueness of solutions of similarity type. Formulae for the two unknown thermal coefficients.}\label{SeFormulas}
In this section we will look for necessary and sufficient conditions on data to obtain existence and uniqueness of a solution to Problem (\ref{Problema}) for each possible choice of the two unknown thermal coefficients, as well as explicit formulae for them. Thanks to Theorem \ref{ThCaracterizacion}, this can be done by analysing and solving the system of equations (\ref{SistEq}) for each pair of unknown thermal coefficients. With the aim of organizing the main results of this section, we will write:
\begin{table}[h!]
\begin{tabular}{ll}
Case 1: Determination of $l$ and $c$&\hspace*{2cm}
Case 4: Determination of $c$ and $\rho$\\
Case 2: Determination of $c$ and $k$&\hspace*{2cm}
Case 5: Determination of $l$ and $\rho$\\
Case 3: Determination of $l$ and $k$&\hspace*{2cm}
Case 6: Determination of $\rho$ and $k$.
\end{tabular}
\end{table}
For each $\alpha\in(0,1)$, we introduce the real functions $F_\alpha$, $G_\alpha$ and $H_\alpha$ defined in $\mathbb{R}^+$ by:
\begin{subequations}\label{FGH}
\begin{align}
\label{F}&F_\alpha(x)=\frac{f_\alpha(x)}{x}\\
\label{G}&G_\alpha(x)=xf_\alpha(x)\\
\label{H}&H_\alpha(x)=\frac{xf_\alpha(x)}{M_{\frac{\alpha}{2}}(x)}
\end{align}
\end{subequations}
where
\begin{equation}\label{f}
f_\alpha(x)=1-W\left(-x,-\frac{\alpha}{2},1\right).
\end{equation}
The following result will be useful all throughout this section:
\begin{lemma}\label{LeFGHProp}
For any $\alpha\in(0,1)$, the real functions $F_\alpha$, $G_\alpha$ and $H_\alpha$ defined in (\ref{FGH}) satisfy the following conditions:
\begin{subequations}\label{FGHProp}
\begin{align}
\label{FProp}F_\alpha(0^+)&=\frac{1}{\Gamma\left(-\frac{\alpha}{2}+1\right)},
&F_\alpha(+\infty)&=0,
&F_\alpha'(x)&<0\quad\forall\,x>0\\
\label{GProp}G_\alpha(0^+)&=0,
&G_\alpha(+\infty)&=+\infty,
&G_\alpha'(x)&>0\quad\forall\,x>0\\
\label{HProp}H_\alpha(0^+)&=0,
&H_\alpha(+\infty)&=+\infty,
&H_\alpha'(x)&>0\quad\forall\,x>0.
\end{align}
\end{subequations}
\end{lemma}
\begin{proof}
The proof of (\ref{FProp}) was done in \cite{Ta2015-c}. The demonstrations of (\ref{GProp}) and (\ref{HProp}) follow from elementary computations and the following facts:
\begin{enumerate}
\item Since $0<\alpha<1$, $f_\alpha$ is a positive and strictly increasing function in $\mathbb{R}^+$ \cite{RoSa2013}.
\item Since $0<\alpha<1$, $M_{\frac{\alpha}{2}}$ is a positive and strictly decreasing function in $\mathbb{R}^+$ \cite{RoSa2013}.
\item $\displaystyle\lim_{x\to+\infty}f_\alpha(x)=1$ and $\displaystyle\lim_{x\to+\infty}M_{\frac{\alpha}{2}}(x)=0$ \cite{GoLuMa1999}.
\end{enumerate}
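For instance, writing out one of these computations: by the derivative rule for the Wright function recalled in the previous section, $f_\alpha'(x)=W\left(-x,-\frac{\alpha}{2},1-\frac{\alpha}{2}\right)=M_{\frac{\alpha}{2}}(x)>0$, and therefore
\begin{equation*}
G_\alpha'(x)=f_\alpha(x)+x\,f_\alpha'(x)=f_\alpha(x)+x\,M_{\frac{\alpha}{2}}(x)>0\quad\forall\,x>0.
\end{equation*}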
\end{proof}
\begin{theorem}[Case 1: Determination of $l$ and $c$]\label{Thlc}
If the moving boundary $s$ is given by (\ref{s}), then the Problem (\ref{Problema}) admits the solution $T$, $l$ and $c$ given by (\ref{T}) and:
\begin{subequations}\label{lc}
\begin{align}
\label{c-lc}&c=\frac{1}{\rho k}\left[\frac{q_0\sqrt{\mu_\alpha}\Gamma\left(-\frac{\alpha}{2}+1\right)\left[1-W\left(-\xi,-\frac{\alpha}{2},1\right)\right]}{T_0-T_m}\right]^2\\
\label{l-lc}&l=\frac{q_0^2\,\Gamma^3\left(-\frac{\alpha}{2}+1\right)\left[1-W\left(-\xi,-\frac{\alpha}{2},1\right)\right]M_{\frac{\alpha}{2}}(\xi)}{\nu_\alpha\rho k(T_0-T_m)\Gamma\left(\frac{\alpha}{2}+1\right)\xi}
\end{align}
\end{subequations}
respectively, $\xi$ being the only one solution to the equation:
\begin{equation}\label{EqXi-lc}
F_\alpha(x)=\frac{k(T_0-T_m)}{\sigma q_0\Gamma\left(-\frac{\alpha}{2}+1\right)},\quad x>0,
\end{equation}
if and only if the following inequality holds:
\begin{equation}\label{Cond-lc}
\frac{k(T_0-T_m)}{\sigma q_0}<1.
\end{equation}
\end{theorem}
\begin{proof}
Isolating $c$ from equation (\ref{Eq2}), we have that $c$ is given by (\ref{c-lc}). Now, by combining this with equation (\ref{Eq1}), one obtains that $l$ is given by (\ref{l-lc}). It must be noted that the parameter $\xi$ involved in both (\ref{c-lc}) and (\ref{l-lc}) depends on $c$. Nevertheless, it can be determined without making any reference to $c$ as follows. By replacing (\ref{c-lc}) in the definition of $\xi$ given in (\ref{xi}), we have that $\xi$ must be a solution to equation (\ref{EqXi-lc}). It follows from (\ref{FProp}) that the equation (\ref{EqXi-lc}) admits a solution if and only if its RHS is between $0$ and $\frac{1}{\Gamma\left(-\frac{\alpha}{2}+1\right)}$. To complete the proof, it only remains to observe that this is equivalent to saying that inequality (\ref{Cond-lc}) must hold and that, when this happens, equation (\ref{EqXi-lc}) has exactly one positive solution.
\end{proof}
\begin{theorem}[Case 2: Determination of $c$ and $k$]\label{Thck}
If the moving boundary $s$ is given by (\ref{s}), then the Problem (\ref{Problema}) admits the solution $T$, $c$ and $k$ given by (\ref{T}) and:
\begin{subequations}\label{ck}
\begin{align}
\label{c-ck}&c=\frac{\mu_{\alpha}\nu_\alpha l\Gamma\left(\frac{\alpha}{2}+1\right)\xi\left[1-W\left(-\xi,-\frac{\alpha}{2},1\right)\right]}{(T_0-T_m)\Gamma\left(-\frac{\alpha}{2}+1\right)M_{\frac{\alpha}{2}}(\xi)}\\
\label{k-ck}&k=\frac{q_0^2\,\Gamma^3\left(-\frac{\alpha}{2}+1\right)\left[1-W\left(-\xi,-\frac{\alpha}{2},1\right)\right]M_{\frac{\alpha}{2}}(\xi)}{\nu_\alpha\rho l(T_0-T_m)\Gamma\left(\frac{\alpha}{2}+1\right)\xi}
\end{align}
\end{subequations}
respectively, $\xi$ being the only one solution to the equation:
\begin{equation}\label{EqXi-ck}
M_{\frac{\alpha}{2}}(x)=\frac{\nu_\alpha\sigma\rho l\Gamma\left(\frac{\alpha}{2}+1\right)}{q_0\Gamma^2\left(-\frac{\alpha}{2}+1\right)},\quad x>0,
\end{equation}
if and only if the following inequality holds:
\begin{equation}\label{Cond-ck}
\frac{\nu_\alpha\sigma\rho l\Gamma\left(\frac{\alpha}{2}+1\right)}{q_0\Gamma\left(-\frac{\alpha}{2}+1\right)}<1.
\end{equation}
\end{theorem}
\begin{proof}
By following the same steps as in the proof of Theorem \ref{Thlc}, it can be shown that $c$ and $k$ must be given by (\ref{ck}), where the parameter $\xi$ should be a solution to equation (\ref{EqXi-ck}). Since the Mainardi function $M_{\frac{\alpha}{2}}$ is a strictly decreasing function from $\frac{1}{\Gamma\left(-\frac{\alpha}{2}+1\right)}$ to $0$ in $\mathbb{R}^+$ \cite{RoSa2013}, we have that the equation (\ref{EqXi-ck}) admits a solution if and only if its RHS is between $0$ and $\frac{1}{\Gamma\left(-\frac{\alpha}{2}+1\right)}$. This is equivalent to saying that inequality (\ref{Cond-ck}) holds. Moreover, when the data satisfy (\ref{Cond-ck}), equation (\ref{EqXi-ck}) has exactly one positive solution.
\end{proof}
\begin{theorem}[Case 3: Determination of $l$ and $k$]\label{Thlk}
If the moving boundary $s$ is given by (\ref{s}), then the Problem (\ref{Problema}) admits the solution $T$, $k$ and $l$ given by (\ref{T}) and:
\begin{subequations}\label{lk}
\begin{align}
\label{k-lk}&k=\frac{1}{\rho c}\left[\frac{q_0\sqrt{\mu_\alpha}\Gamma\left(-\frac{\alpha}{2}+1\right)\left[1-W\left(-\xi,-\frac{\alpha}{2},1\right)\right]}{T_0-T_m}\right]^2\\
\label{l-lk}&l=\frac{c(T_0-T_m)\Gamma\left(-\frac{\alpha}{2}+1\right)M_{\frac{\alpha}{2}}(\xi)}{\mu_\alpha\nu_\alpha\Gamma\left(\frac{\alpha}{2}+1\right)\xi\left[1-W\left(-\xi,-\frac{\alpha}{2},1\right)\right]}
\end{align}
\end{subequations}
respectively, $\xi$ being the only one solution to the equation:
\begin{equation}\label{EqXi-lk}
G_\alpha(x)=\frac{\sigma\rho c(T_0-T_m)}{\mu_\alpha q_0\Gamma\left(-\frac{\alpha}{2}+1\right)},\quad x>0.
\end{equation}
\end{theorem}
\begin{proof}
By proceeding analogously to the proofs of the previous Theorems \ref{Thlc} and \ref{Thck}, we have that $k$ and $l$ should be given by (\ref{lk}), $\xi$ being a solution to equation (\ref{EqXi-lk}). Since the RHS of the equation (\ref{EqXi-lk}) is a positive number, it follows from (\ref{GProp}) that the equation (\ref{EqXi-lk}) admits exactly one solution for any set of data.
\end{proof}
\begin{theorem}[Case 4: Determination of $c$ and $\rho$]\label{Thcrho}
If the moving boundary $s$ is given by (\ref{s}), then the Problem (\ref{Problema}) admits the solution $T$, $c$ and $\rho$ given by (\ref{T}), (\ref{c-ck}) and:
\begin{subequations}\label{crho}
\begin{align}
\label{rho-crho}&\rho=\frac{q_0^2\,\Gamma^3\left(-\frac{\alpha}{2}+1\right)\left[1-W\left(-\xi,-\frac{\alpha}{2},1\right)\right]M_{\frac{\alpha}{2}}(\xi)}{\nu_\alpha k l(T_0-T_m)\Gamma\left(\frac{\alpha}{2}+1\right)\xi}
\end{align}
\end{subequations}
respectively, $\xi$ being the only one solution to the equation (\ref{EqXi-lc}), if and only if inequality (\ref{Cond-lc}) holds.
\end{theorem}
\begin{proof}
It is similar to the demonstration of Theorem \ref{Thlc}.
\end{proof}
\begin{theorem}[Case 5: Determination of $l$ and $\rho$]\label{Thlrho}
If the moving boundary $s$ is given by (\ref{s}), then the Problem (\ref{Problema}) admits the solution $T$, $l$ and $\rho$ given by (\ref{T}), (\ref{l-lk}) and:
\begin{subequations}\label{lrho}
\begin{align}
\label{rho-lrho}&\rho=\frac{1}{k c}\left[\frac{q_0\sqrt{\mu_\alpha}\Gamma\left(-\frac{\alpha}{2}+1\right)\left[1-W\left(-\xi,-\frac{\alpha}{2},1\right)\right]}{T_0-T_m}\right]^2
\end{align}
\end{subequations}
respectively, $\xi$ being the only one solution to the equation (\ref{EqXi-lc}), if and only if inequality (\ref{Cond-lc}) holds.
\end{theorem}
\begin{proof}
It is similar to the demonstration of Theorem \ref{Thlc}.
\end{proof}
\begin{theorem}[Case 6: Determination of $\rho$ and $k$]\label{Thrhok}
If the moving boundary $s$ is given by (\ref{s}), then the Problem (\ref{Problema}) admits the solution $T$, $\rho$ and $k$ given by (\ref{T}) and:
\begin{subequations}\label{rhok}
\begin{align}
\label{rho-rhok}&\rho=\frac{q_0\mu_\alpha\Gamma\left(-\frac{\alpha}{2}+1\right)\xi\left[1-W\left(-\xi,-\frac{\alpha}{2},1\right)\right]}{\sigma c(T_0-T_m)}\\
\label{k-rhok}&k=\frac{\sigma q_0\Gamma\left(-\frac{\alpha}{2}+1\right)\left[1-W\left(-\xi,-\frac{\alpha}{2},1\right)\right]}{(T_0-T_m)\xi}
\end{align}
\end{subequations}
respectively, $\xi$ being the only one solution to the equation:
\begin{equation}\label{EqXi-rhok}
H_\alpha(x)=\frac{c(T_0-T_m)\Gamma\left(-\frac{\alpha}{2}+1\right)}{\mu_\alpha\nu_\alpha l\Gamma\left(\frac{\alpha}{2}+1\right)},\quad x>0.
\end{equation}
\end{theorem}
\begin{proof}
By following the same ideas as in the proof of Theorem \ref{Thlc}, we have that $\rho$ and $k$ must be given by (\ref{rhok}), where $\xi$ should be a solution to equation (\ref{EqXi-rhok}). By noting that the RHS of this equation is a positive number, it follows from (\ref{HProp}) that the equation (\ref{EqXi-rhok}) admits exactly one positive solution for any set of data.
\end{proof}
Table \ref{TbFrac} summarizes the formulae obtained for the two unknown thermal coefficients and the condition that data must verify to obtain them, for each one of the six possible choices of the two unknown thermal coefficients among $k$, $\rho$, $l$ and $c$ in Problem (\ref{Problema}) when the moving boundary $s$ is defined by (\ref{s}).
\newpage
\begin{table}[h!]
\caption{Formulae for the two unknown thermal coefficients and condition on data for\\ Problem (\ref{Problema})}\label{TbFrac}
\begin{tabular}{cllc}
\hline\\[-0.25cm]
Case & \hspace*{0.5cm}Formulae for the unknown & \hspace*{0.5cm}Equation for $\xi$ & Condition\\
&\hspace*{0.5cm}thermal coefficients& & for data\\[0.25cm]
\hline\\[-0.25cm]
1&
$c=\frac{1}{\rho k}\left[\frac{q_0\sqrt{\mu_\alpha}\Gamma\left(-\frac{\alpha}{2}+1\right)\left[1-W\left(-\xi,-\frac{\alpha}{2},1\right)\right]}{T_0-T_m}\right]^2$&
$F_\alpha(x)=\frac{k(T_0-T_m)}{\sigma q_0\Gamma\left(-\frac{\alpha}{2}+1\right)},\,\, x>0$&
$\frac{k(T_0-T_m)}{\sigma q_0}<1$\\[0.5cm]
&
$l=\frac{q_0^2\Gamma^3\left(-\frac{\alpha}{2}+1\right)\left[1-W\left(-\xi,-\frac{\alpha}{2},1\right)\right]M_{\frac{\alpha}{2}}(\xi)}{\nu_\alpha\rho k(T_0-T_m)\Gamma\left(\frac{\alpha}{2}+1\right)\xi}$&\\[0.25cm]
\hline\\[-0.25cm]
2&
$c=\frac{\mu_\alpha\nu_\alpha l\Gamma\left(\frac{\alpha}{2}+1\right)\xi\left[1-W\left(-\xi,-\frac{\alpha}{2},1\right)\right]}{(T_0-T_m)\Gamma\left(-\frac{\alpha}{2}+1\right)M_{\alpha/2}(\xi)}$&
$M_{\frac{\alpha}{2}}(x)=\frac{\nu_\alpha\sigma\rho l\Gamma\left(\frac{\alpha}{2}+1\right)}{q_0\Gamma^2\left(-\frac{\alpha}{2}+1\right)},\,\, x>0$&
$\frac{\nu_\alpha\sigma\rho l\Gamma\left(\frac{\alpha}{2}+1\right)}{q_0\Gamma\left(-\frac{\alpha}{2}+1\right)}<1$\\[0.5cm]
&
$k=\frac{q_0^2\Gamma^3\left(-\frac{\alpha}{2}+1\right)\left[1-W\left(-\xi,-\frac{\alpha}{2},1\right)\right]M_{\frac{\alpha}{2}}(\xi)}{\nu_\alpha\rho l(T_0-T_m)\Gamma\left(\frac{\alpha}{2}+1\right)\xi}$&\\[0.25cm]
\hline\\[-0.25cm]
3&
$k=\frac{1}{\rho c}\left[\frac{q_0\sqrt{\mu_\alpha}\Gamma\left(-\frac{\alpha}{2}+1\right)\left[1-W\left(-\xi,-\frac{\alpha}{2},1\right)\right]}{T_0-T_m}\right]^2$&
$G_\alpha(x)=\frac{\sigma\rho c(T_0-T_m)}{\mu_\alpha q_0\Gamma\left(-\frac{\alpha}{2}+1\right)},\,\, x>0$&
$-$\\[0.5cm]
&
$l=\frac{c(T_0-T_m)\Gamma\left(-\frac{\alpha}{2}+1\right)M_{\frac{\alpha}{2}}(\xi)}{\mu_\alpha\nu_\alpha\Gamma\left(\frac{\alpha}{2}+1\right)\xi\left[1-W\left(-\xi,-\frac{\alpha}{2},1\right)\right]}$&\\[0.25cm]
\hline\\[-0.25cm]
4&
$c=\frac{\mu_\alpha\nu_\alpha l\Gamma\left(\frac{\alpha}{2}+1\right)\xi\left[1-W\left(-\xi,-\frac{\alpha}{2},1\right)\right]}{(T_0-T_m)\Gamma\left(-\frac{\alpha}{2}+1\right)M_{\alpha/2}(\xi)}$&
$F_\alpha(x)=\frac{k(T_0-T_m)}{\sigma q_0\Gamma\left(-\frac{\alpha}{2}+1\right)},\,\, x>0$&
$\frac{k(T_0-T_m)}{\sigma q_0}<1$\\[0.5cm]
&
$\rho=\frac{q_0^2\Gamma^3\left(-\frac{\alpha}{2}+1\right)\left[1-W\left(-\xi,-\frac{\alpha}{2},1\right)\right]M_{\frac{\alpha}{2}}(\xi)}{\nu_\alpha k l(T_0-T_m)\Gamma\left(\frac{\alpha}{2}+1\right)\xi}$&\\[0.25cm]
\hline\\[-0.25cm]
5&
$l=\frac{c(T_0-T_m)\Gamma\left(-\frac{\alpha}{2}+1\right)M_{\alpha/2}(\xi)}{\mu_\alpha\nu_\alpha\Gamma\left(\frac{\alpha}{2}+1\right)\xi\left[1-W\left(-\xi,-\frac{\alpha}{2},1\right)\right]}$&
$F_\alpha(x)=\frac{k(T_0-T_m)}{\sigma q_0\Gamma\left(-\frac{\alpha}{2}+1\right)},\,\, x>0$&
$\frac{k(T_0-T_m)}{\sigma q_0}<1$\\[0.5cm]
&
$\rho=\frac{1}{k c}\left[\frac{q_0\sqrt{\mu_\alpha}\Gamma\left(-\frac{\alpha}{2}+1\right)\left[1-W\left(-\xi,-\frac{\alpha}{2},1\right)\right]}{T_0-T_m}\right]^2$&\\[0.25cm]
\hline\\[-0.25cm]
6&
$\rho=\frac{\mu_\alpha q_0\Gamma\left(-\frac{\alpha}{2}+1\right)\xi\left[1-W\left(-\xi,-\frac{\alpha}{2},1\right)\right]}{\sigma c(T_0-T_m)}$&
$H_\alpha(x)=\frac{c(T_0-T_m)\Gamma\left(-\frac{\alpha}{2}+1\right)}{\mu_\alpha\nu_\alpha l\Gamma\left(\frac{\alpha}{2}+1\right)},\,\, x>0$&
$-$\\[0.5cm]
&
$k=\frac{\sigma q_0\Gamma\left(-\frac{\alpha}{2}+1\right)\left[1-W\left(-\xi,-\frac{\alpha}{2},1\right)\right]}{(T_0-T_m)\xi}$&\\[0.25cm]
\hline
\end{tabular}
\end{table}
\newpage
\section{Convergence to the classic case when $\alpha\to 1^-$}
When $\alpha=1$, the time fractional derivative of order $\alpha$ in the Caputo sense of a function coincides with its classical time derivative. Then, if we allow $\alpha$ to be equal to 1 in Problem (\ref{Problema}) and we consider that case, we obtain that Problem (\ref{Problema}) reduces to the classical inverse one-phase Stefan problem studied in \cite{Ta1983}. This problem, which we will refer to as Problem (\ref{Problema}$^\star$), is given by the classical diffusion equation:
\begin{equation}\label{ED}
T_t(x,t)=\lambda^2T_{xx}(x,t),\quad 0<x<s(t),\,t>0,\tag{\ref{1}$^\star$}
\end{equation}
the classical Stefan condition:
\begin{equation}\label{CondStefan}
-kT_x(s(t),t)=\rho l\dot{s}(t),\quad t>0\tag{\ref{3}$^\star$}
\end{equation}
and conditions (\ref{2}), (\ref{4}) and (\ref{5}). Of course, to obtain (\ref{ED}) and (\ref{CondStefan}) we have also considered $\mu_\alpha=1$ and $\nu_\alpha=1$ in (\ref{1}) and (\ref{3}), respectively. According to the physical interpretation given in \cite{VoFaGa2013}, as $\alpha$ increases its value to 1, the ability of memory retention of the system diminishes to the limit case of no memory retention corresponding to $\alpha=1$. In this context, classical Stefan problems are representing phase-change processes whose temporal evolution can be described in terms of {\em local in time} properties ({\em absence of memory}).
The determination of two unknown thermal coefficients through a classical inverse one-phase Stefan problem was done in \cite{Ta1983}. In that article, necessary and sufficient conditions on data to obtain existence and uniqueness of a solution to Problem (\ref{Problema}$^\star$) are given, together with formulae for the unknown thermal coefficients. In several articles \cite{RoSa2013,RoSa2014,RoTa2014,Ta2015-c}, the convergence as $\alpha\to 1^-$ of the solution to a fractional Stefan problem with $0<\alpha<1$ to the solution of the associated classical problem obtained by considering $\alpha=1$ has been proved. Encouraged by those works, in this section we are interested in proving the convergence as $\alpha\to 1^-$ of the results obtained in Section \ref{SeFormulas} to the ones given in \cite{Ta1983}.
In order to emphasize the dependence on $\alpha$ of the formulae given in Theorems \ref{Thlc} to \ref{Thrhok}, we will mention it here explicitly. For example, if we are analysing the convergence of the solution to Problem (\ref{Problema}) given in Theorem \ref{Thlc}, we will refer to it as $T(x,t,\alpha)$, $l(\alpha)$ and $c(\alpha)$. We will also write $\xi(\alpha)$ to represent the coefficient defined by (\ref{xi}).
The next result will be useful in the following:
\begin{lemma}\label{LeWMProp}
For each $x>0$, the Wright and Mainardi functions are such that:
\begin{subequations}\label{WMProp}
\begin{align}
\label{WProp}&1-W\left(-x,-\frac{\alpha}{2},1\right)\to\erf\left(\frac{x}{2}\right),&\text{when }\alpha &\to 1^-\\
\label{MProp}&M_{\alpha/2}(x)\to \frac{1}{\sqrt{\pi}}\exp\left(-\frac{x^2}{4}\right),&\text{when } \alpha &\to 1^-,
\end{align}
\end{subequations}
where $\erf$ is the error function, which is defined by:
\begin{equation*}
\erf(x)=\frac{2}{\sqrt{\pi}}\displaystyle\int_0^x\exp(-s^2)ds,\quad x>0.
\end{equation*}
\end{lemma}
\begin{proof}
See \cite{RoSa2013}.
\end{proof}
\begin{theorem}[Convergence related to Case 1]\label{ThlcClassic}
If inequality (\ref{Cond-lc}) holds, then the solution $T(x,t,\alpha)$, $l(\alpha)$, $c(\alpha)$ to Problem (\ref{Problema}) given in Theorem \ref{Thlc} converges to the solution obtained in \cite{Ta1983}, which is given by:
\begin{subequations}\label{lcClassic}
\begin{align}
\label{T-Classic}&T(x,t)=T_0-\frac{T_0-T_m}{\erf\left(\frac{\sigma^\star}{\lambda}\right)}\erf\left(\frac{x}{2\lambda\sqrt{t}}\right),\quad 0<x<s(t),\,t>0\\
\label{c-lcClassic}&c=\frac{k}{\rho}\left(\frac{\xi^\star}{\sigma^\star}\right)^2\\
\label{l-lcClassic}&l=\frac{q_0\exp({-\xi^\star}^2)}{\rho\sigma^\star}
\end{align}
\end{subequations}
where $\sigma^\star$ is defined by:
\begin{equation}\label{sigmaStar}
\sigma^\star=\frac{\sigma}{2}
\end{equation}
and $\xi^\star$ is the only one solution to the equation:
\begin{equation}\label{EqXi-lcClassic}
\frac{\erf(x)}{x}=\frac{k(T_0-T_m)}{q_0\sigma^\star\sqrt{\pi}},\quad x>0.
\end{equation}
\end{theorem}
\begin{proof}
Taking the limit as $\alpha\to 1^-$ on both sides of equation (\ref{EqXi-lc}) and using (\ref{WProp}), we obtain the following equation:
\begin{equation}\label{EqXi-lcClassicPrelim}
\frac{\erf(x/2)}{x}=\frac{k(T_0-T_m)}{q_0\sigma \sqrt{\pi}},\quad x>0.
\end{equation}
On one hand, we have that the LHS of the equation (\ref{EqXi-lcClassicPrelim}) defines a strictly decreasing function from $\frac{1}{\sqrt{\pi}}$ to $0$ in $\mathbb{R}^+$. On the other hand, since inequality (\ref{Cond-lc}) holds, the RHS of equation (\ref{EqXi-lcClassicPrelim}) is between $0$ and $\frac{1}{\sqrt{\pi}}$. Therefore, it follows that equation (\ref{EqXi-lcClassicPrelim}) has exactly one positive solution. By introducing the parameter $\sigma^\star$ defined by (\ref{sigmaStar}), we can rewrite equation (\ref{EqXi-lcClassicPrelim}) as follows:
\begin{equation*}
\frac{\erf(x/2)}{x/2}=\frac{k(T_0-T_m)}{q_0\sigma^\star \sqrt{\pi}},\quad x>0,
\end{equation*}
and see that the solution $\xi(\alpha)$ to the equation (\ref{EqXi-lc}) is such that:
\begin{equation}\label{xi-lcLimit}
\xi(\alpha)\to 2\xi^\star,\quad\quad\text{when }\alpha\to 1^-
\end{equation}
$\xi^\star$ being the only one solution to the equation (\ref{EqXi-lcClassic}). From (\ref{munu-limit}), (\ref{WMProp}) and (\ref{xi-lcLimit}), it follows from elementary computations that:
\begin{subequations}\label{lc-limit}
\begin{align}
&c(\alpha)\to c,\quad\quad\text{when }\alpha\to 1^-\\
&l(\alpha)\to l,\quad\quad\,\,\text{when }\alpha\to 1^-
\end{align}
\end{subequations}
where $c$ and $l$ are given by (\ref{c-lcClassic}) and (\ref{l-lcClassic}), respectively. Finally, it follows from (\ref{mu-limit}), (\ref{WMProp}), (\ref{xi-lcLimit}) and (\ref{lc-limit}) that:
\begin{equation}\label{T-limit}
T(x,t,\alpha)\to T(x,t),\quad\quad\text{when }\alpha\to 1^-,
\end{equation}
for each pair $(x,t)$ with $0<x<s(t)$ and $t>0$, where $T(x,t)$ is given by (\ref{T-Classic}). We have thus proved that the solution to Problem (\ref{Problema}) given in Theorem \ref{Thlc} converges, as $\alpha\to 1^-$, to the solution to Problem (\ref{Problema}$^\star$) given in \cite{Ta1983}.
\end{proof}
\begin{remark}
We note that inequality (\ref{Cond-lc}) can be written as:
\begin{equation}\label{Cond-lcClassic}
\frac{k(T_0-T_m)}{2\sigma^\star q_0}<1,
\end{equation}
where $\sigma^\star$ is the parameter defined by (\ref{sigmaStar}); this is the condition established in \cite{Ta1983} to ensure the existence and uniqueness of the solution to Problem (\ref{Problema}$^\star$) given by (\ref{lcClassic}).
\end{remark}
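For illustration, the classical-limit solution of Case 1 can be computed numerically: the unique root of (\ref{EqXi-lcClassic}) is obtained with a standard bracketing root-finder and then (\ref{c-lcClassic}) and (\ref{l-lcClassic}) are evaluated. The following Python sketch uses hypothetical data values, chosen only so that inequality (\ref{Cond-lc}) holds:
\begin{verbatim}
# Hypothetical data (nondimensional), only to illustrate Theorem ThlcClassic.
import numpy as np
from scipy.special import erf
from scipy.optimize import brentq

k, rho, q0, sigma_star, T0, Tm = 1.0, 1.0, 0.8, 1.0, 1.0, 0.0

rhs = k*(T0 - Tm)/(q0*sigma_star*np.sqrt(np.pi))
assert k*(T0 - Tm)/(2*sigma_star*q0) < 1      # existence condition

g = lambda x: erf(x)/x - rhs                  # strictly decreasing on (0, inf)
hi = 1.0
while g(hi) > 0:
    hi *= 2.0
xi_star = brentq(g, 1e-12, hi)

c = (k/rho)*(xi_star/sigma_star)**2           # (c-lcClassic)
l = q0*np.exp(-xi_star**2)/(rho*sigma_star)   # (l-lcClassic)
print(xi_star, c, l)
\end{verbatim}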
\begin{theorem}[Convergence related to Case 2]\label{ThckClassic}
If inequality (\ref{Cond-ck}) holds for each $\alpha\in(0,1)$, then the solution $T(x,t,\alpha)$, $c(\alpha)$, $k(\alpha)$ to Problem (\ref{Problema}) given in Theorem \ref{Thck} converges to the solution obtained in \cite{Ta1983}, which is given by (\ref{T-Classic}) and:
\begin{subequations}\label{ckClassic}
\begin{align}
\label{c-ckClassic}&c=\frac{q_0\sqrt{\pi}\xi^\star\erf(\xi^\star)}{\rho\sigma^\star(T_0-T_m)}\\
\label{k-ckClassic}&k=\frac{\sigma^\star q_0\sqrt{\pi}\erf(\xi^\star)}{(T_0-T_m)\xi^\star}
\end{align}
\end{subequations}
where $\sigma^\star$ is given by (\ref{sigmaStar}) and $\xi^\star$ is the unique solution to the equation:
\begin{equation}\label{EqXi-ckClassic}
\exp(x^2)=\frac{q_0}{\rho l\sigma^\star},\quad x>0.
\end{equation}
\end{theorem}
\begin{proof}
Taking the limit as $\alpha\to 1^-$ on both sides of equation (\ref{EqXi-ck}) and taking (\ref{nu-limit}) and (\ref{MProp}) into consideration, we obtain the following equation:
\begin{equation}\label{EqXi-ckClassicPrelim}
\exp\left(\frac{x^2}{4}\right)=\frac{2q_0}{\sigma\rho l},\quad x>0.
\end{equation}
Since inequality (\ref{Cond-ck}) holds for all $\alpha\in(0,1)$, the following inequality also holds:
\begin{equation}\label{Cond-ckClassicPrelim}
\frac{2q_0}{\sigma\rho l}> 1.
\end{equation}
Therefore, equation (\ref{EqXi-ckClassicPrelim}) admits a unique positive solution. We note that equation (\ref{EqXi-ckClassicPrelim}) can be rewritten as:
\begin{equation*}
\exp\left(\left(\frac{x}{2}\right)^2\right)=\frac{q_0}{\sigma^\star\rho l},\quad x>0,
\end{equation*}
where $\sigma^\star$ is given by (\ref{sigmaStar}), from which we can see that the solution $\xi(\alpha)$ to equation (\ref{EqXi-ck}) is such that:
\begin{equation}\label{xi-ckLimit}
\xi(\alpha)\to 2\xi^\star,\quad\quad\text{when }\alpha\to 1^-,
\end{equation}
$\xi^\star$ being the unique solution to equation (\ref{EqXi-ckClassic}). It now follows from (\ref{munu-limit}), (\ref{WMProp}), (\ref{xi-ckLimit}) and elementary computations that:
\begin{subequations}\label{ck-limit}
\begin{align}
&c(\alpha)\to c,\quad\quad\text{when }\alpha\to 1^-\\
&k(\alpha)\to k,\quad\quad\,\,\text{when }\alpha\to 1^-
\end{align}
\end{subequations}
where $c$ and $k$ are given by (\ref{c-ckClassic}) and (\ref{k-ckClassic}), respectively. Finally, we have from (\ref{mu-limit}), (\ref{WMProp}), (\ref{xi-ckLimit}) and (\ref{ck-limit}) that $T(x,t,\alpha)$ satisfies (\ref{T-limit}).
\end{proof}
\begin{remark}
By introducing the parameter $\sigma^\star$ defined by (\ref{sigmaStar}), inequality (\ref{Cond-ckClassicPrelim}) can be rewritten as:
\begin{equation}\label{Cond-ckClassic}
\frac{q_0}{\rho l\sigma^\star}>1,
\end{equation}
which is the condition established in \cite{Ta1983} to ensure the existence and uniqueness of the solution to Problem (\ref{Problema}$^\star$) given by (\ref{T-Classic}) and (\ref{ckClassic}).
\end{remark}
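We also note that equation (\ref{EqXi-ckClassic}) can be solved in closed form, namely $\xi^\star=\sqrt{\ln\big(q_0/(\rho l\sigma^\star)\big)}$, whenever the condition $q_0/(\rho l\sigma^\star)>1$ holds. A short Python sketch with hypothetical data values then evaluates (\ref{ckClassic}):
\begin{verbatim}
# Hypothetical data (nondimensional), only to illustrate Theorem ThckClassic.
import numpy as np
from scipy.special import erf

rho, l, q0, sigma_star, T0, Tm = 1.0, 0.3, 2.0, 1.0, 1.0, 0.0
A = q0/(rho*l*sigma_star)
assert A > 1                                   # existence condition

xi_star = np.sqrt(np.log(A))                   # closed form of (EqXi-ckClassic)
c = q0*np.sqrt(np.pi)*xi_star*erf(xi_star)/(rho*sigma_star*(T0 - Tm))  # (c-ckClassic)
k = sigma_star*q0*np.sqrt(np.pi)*erf(xi_star)/((T0 - Tm)*xi_star)      # (k-ckClassic)
print(xi_star, c, k)
\end{verbatim}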
\begin{theorem}[Convergence related to Case 3]\label{ThlkClassic}
If inequality (\ref{Cond-lc}) holds, then the solution $T(x,t,\alpha)$, $l(\alpha)$, $k(\alpha)$ to Problem (\ref{Problema}) given in Theorem \ref{Thlk} converges to the solution obtained in \cite{Ta1983}, which is given by (\ref{T-Classic}) and:
\begin{subequations}\label{lkClassic}
\begin{align}
\label{l-lkClassic}&l=\frac{q_0\exp(-{\xi^\star}^2)}{\rho\sigma^\star}\\
\label{k-lkClassic}&k=\rho c\left(\frac{\sigma^\star}{\xi^\star}\right)^2
\end{align}
\end{subequations}
where $\sigma^\star$ is given by (\ref{sigmaStar}) and $\xi^\star$ is the unique solution to the equation:
\begin{equation}\label{EqXi-lkClassic}
x\erf(x)=\frac{\rho c\sigma^\star(T_0-T_m)}{q_0\sqrt{\pi}},\quad x>0.
\end{equation}
\end{theorem}
\begin{proof}
Taking the limit as $\alpha\to 1^-$ on both sides of equation (\ref{EqXi-lk}) and taking (\ref{mu-limit}) and (\ref{WProp}) into consideration, we obtain the following equation:
\begin{equation}\label{EqXi-lkClassicPrelim}
x\erf\left(\frac{x}{2}\right)=\frac{\sigma\rho c(T_0-T_m)}{q_0\sqrt{\pi}},\quad x>0.
\end{equation}
Since the LHS of equation (\ref{EqXi-lkClassicPrelim}) defines a strictly increasing function on $\mathbb{R}^+$, ranging from $0$ to $+\infty$, and the RHS of equation (\ref{EqXi-lkClassicPrelim}) is a positive number, it follows that equation (\ref{EqXi-lkClassicPrelim}) has a unique positive solution.
By introducing the parameter $\sigma^\star$ defined by (\ref{sigmaStar}), the equation (\ref{EqXi-lkClassicPrelim}) can be rewritten as:
\begin{equation*}
\frac{x}{2}\erf\left(\frac{x}{2}\right)=\frac{\sigma^\star\rho c(T_0-T_m)}{q_0\sqrt{\pi}},\quad x>0,
\end{equation*}
from which we can see that the solution $\xi(\alpha)$ to the equation (\ref{EqXi-lk}) is such that:
\begin{equation}\label{xi-lkLimit}
\xi(\alpha)\to 2\xi^\star,\quad\quad\text{when }\alpha\to 1^-
\end{equation}
$\xi^\star$ being the unique solution to equation (\ref{EqXi-lkClassic}). The rest of the proof runs as before.
\end{proof}
The following results can be proved in the same manner as the previous theorems in this section; we therefore omit their proofs.
\begin{theorem}[Convergence related to Case 4]\label{ThcrhoClassic}
If inequality (\ref{Cond-lc}) holds, then the solution $T(x,t,\alpha)$, $c(\alpha)$, $\rho(\alpha)$ to Problem (\ref{Problema}) given in Theorem \ref{Thcrho} converges to the solution obtained in \cite{Ta1983}, which is given by (\ref{T-Classic}) and:
\begin{subequations}\label{crhoClassic}
\begin{align}
\label{c-crhoClassic}&c=\frac{kl{\xi^\star}^2\exp({\xi^\star}^2)}{q_0\sigma^\star}\\
\label{rho-crhoClassic}&\rho=\frac{q_0\exp(-{\xi^\star}^2)}{l\sigma^\star}
\end{align}
\end{subequations}
where $\sigma^\star$ is given by (\ref{sigmaStar}) and $\xi^\star$ is the unique solution to equation (\ref{EqXi-lcClassic}).
\end{theorem}
\begin{theorem}[Convergence related to Case 5]\label{ThlrhoClassic}
The solution $T(x,t,\alpha)$, $l(\alpha)$, $\rho(\alpha)$ to Problem (\ref{Problema}) given in Theorem \ref{Thlrho} converges to the solution obtained in \cite{Ta1983}, which is given by (\ref{T-Classic}) and:
\begin{subequations}\label{lrhoClassic}
\begin{align}
\label{l-lrhoClassic}&l=\frac{q_0 c\sigma^\star\exp(-{\xi^\star}^2)}{k{\xi^\star}^2}\\
\label{rho-lrhoClassic}&\rho=\frac{k}{c}\left(\frac{\xi^\star}{\sigma^\star}\right)^2
\end{align}
\end{subequations}
where $\sigma^\star$ is given by (\ref{sigmaStar}) and $\xi^\star$ is the unique solution to equation (\ref{EqXi-lcClassic}).
\end{theorem}
\begin{theorem}[Convergence related to Case 6]\label{ThrhokClassic}
The solution $T(x,t,\alpha)$, $\rho(\alpha)$, $k(\alpha)$ to Problem (\ref{Problema}) given in Theorem \ref{Thrhok} converges to the solution obtained in \cite{Ta1983}, which is given by (\ref{T-Classic}) and:
\begin{subequations}\label{rhokClassic}
\begin{align}
\label{rho-rhokClassic}&\rho=\frac{q_0\exp(-{\xi^\star}^2)}{l\sigma^\star}\\
\label{k-rhokClassic}&k=\frac{q_0c\sigma^\star\exp(-{\xi^\star}^2)}{l{\xi^\star}^2}
\end{align}
\end{subequations}
where $\sigma^\star$ is given by (\ref{sigmaStar}) and $\xi^\star$ is the unique solution to the equation:
\begin{equation}\label{EqXi-rhokClassic}
x\erf(x)\exp(x^2)=\frac{c(T_0-T_m)}{l\sqrt{\pi}},\quad x>0.
\end{equation}
\end{theorem}
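For Case 6, the left-hand side of (\ref{EqXi-rhokClassic}) increases strictly from $0$ to $+\infty$ on $\mathbb{R}^+$, so a bracketing root-finder always succeeds. The following Python sketch (again with hypothetical data values) computes $\xi^\star$ and the coefficients (\ref{rhokClassic}):
\begin{verbatim}
# Hypothetical data (nondimensional), only to illustrate Theorem ThrhokClassic.
import numpy as np
from scipy.special import erf
from scipy.optimize import brentq

c, l, q0, sigma_star, T0, Tm = 1.5, 0.4, 1.0, 1.0, 1.0, 0.0
rhs = c*(T0 - Tm)/(l*np.sqrt(np.pi))

f = lambda x: x*erf(x)*np.exp(x**2) - rhs     # strictly increasing on (0, inf)
hi = 1.0
while f(hi) < 0:
    hi *= 2.0
xi_star = brentq(f, 1e-12, hi)

rho = q0*np.exp(-xi_star**2)/(l*sigma_star)               # (rho-rhokClassic)
k   = q0*c*sigma_star*np.exp(-xi_star**2)/(l*xi_star**2)  # (k-rhokClassic)
print(xi_star, rho, k)
\end{verbatim}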
Table \ref{TbClassic} summarizes, for each of the six possible choices of the two unknown thermal coefficients among $k$, $\rho$, $l$ and $c$ in Problem (\ref{Problema}) when $\alpha\to 1^-$, the formulae obtained for these coefficients and the condition that the data must satisfy in order to obtain them.
\newpage
\begin{table}[h!]
\caption{Formulae for the two unknown thermal coefficients and condition on data for\\ Problem (\ref{Problema}) when $\alpha\to 1^-$}\label{TbClassic}
\begin{tabular}{cllc}
\hline\\[-0.25cm]
Case & \hspace*{0.5cm}Formulae for the unknown & \hspace*{0.5cm}Equation for $\xi^\star$ & Condition\\
&\hspace*{0.5cm}thermal coefficients& & for data\\[0.25cm]
\hline\\[-0.25cm]
1&
$c=\frac{k}{\rho}\left(\frac{\xi^\star}{\sigma^\star}\right)^2$&
$\frac{\erf(x)}{x}=\frac{k(T_0-T_m)}{q_0\sigma^\star\sqrt{\pi}},\,\, x>0$&
$\frac{k(T_0-T_m)}{2\sigma^\star q_0}<1$\\[0.5cm]
&
$l=\frac{q_0\exp(-{\xi^\star}^2)}{\rho\sigma^\star}$&\\[0.25cm]
\hline\\[-0.25cm]
2&
$c=\frac{q_0\sqrt{\pi}\,\xi^\star\erf(\xi^\star)}{\rho\sigma^\star(T_0-T_m)}$&
$\exp(x^2)=\frac{q_0}{\rho l\sigma^\star},\,\, x>0$&
$\frac{q_0}{\rho l\sigma^\star}>1$\\[0.5cm]
&
$k=\frac{\sigma^\star q_0\sqrt{\pi}\,\erf(\xi^\star)}{(T_0-T_m)\xi^\star}$&\\[0.25cm]
\hline\\[-0.25cm]
3&
$k=\rho c\left(\frac{\sigma^\star}{\xi^\star}\right)^2$&
$x\erf(x)=\frac{\rho c\sigma^\star(T_0-T_m)}{q_0\sqrt{\pi}},\,\, x>0$&
$-$\\[0.5cm]
&
$l=\frac{q_0\exp(-{\xi^\star}^2)}{\rho\sigma^\star}$&\\[0.25cm]
\hline\\[-0.25cm]
4&
$c=\frac{kl{\xi^\star}^2\exp({\xi^\star}^2)}{q_0\sigma^\star}$&
$\frac{\erf(x)}{x}=\frac{k(T_0-T_m)}{q_0\sigma^\star\sqrt{\pi}},\,\, x>0$&
$\frac{k(T_0-T_m)}{2\sigma^\star q_0}<1$\\[0.5cm]
&
$\rho=\frac{q_0\exp(-{\xi^\star}^2)}{l\sigma^\star}$&\\[0.25cm]
\hline\\[-0.25cm]
5&
$l=\frac{q_0c\sigma^\star\exp(-{\xi^\star}^2)}{k{\xi^\star}^2}$&
$\frac{\erf(x)}{x}=\frac{k(T_0-T_m)}{q_0\sigma^\star\sqrt{\pi}},\,\, x>0$&
$\frac{k(T_0-T_m)}{2\sigma^\star q_0}<1$\\[0.5cm]
&
$\rho=\frac{k}{c}\left(\frac{\xi^\star}{\sigma^\star}\right)^2$&\\[0.25cm]
\hline\\[-0.25cm]
6&
$\rho=\frac{q_0\exp(-{\xi^\star}^2)}{l\sigma^\star}$&
$x\erf(x)\exp(x^2)=\frac{c(T_0-T_m)}{l\sqrt{\pi}},\,\, x>0$&
$-$\\[0.5cm]
&
$k=\frac{q_0c\sigma^\star\exp(-{\xi^\star}^2)}{l{\xi^\star}^2}$&\\[0.25cm]
\hline
\end{tabular}
\end{table}
\newpage
\section*{Conclusions}
In this article we have considered a semi-infinite one-dimensional phase-change material with two unknown constant thermal coefficients, assumed to be among the latent heat per unit mass, the specific heat, the mass density and the thermal conductivity. Their determination has been carried out through an inverse one-phase fractional Stefan problem with an overspecified condition at the fixed boundary of the material and a known evolution of the moving boundary. This problem was regarded as a melting process with latent-heat memory effects, which we have represented by replacing the classical time derivative in the diffusion equation and in the Stefan condition by a time fractional derivative of order $\alpha$ ($0<\alpha<1$) in the Caputo sense. Solutions of similarity type were sought, and necessary and sufficient conditions on the data for their existence and uniqueness were given for each of the six inverse fractional Stefan problems that arise according to the choice of the two unknown thermal coefficients. We have also obtained explicit expressions for the temperature and for the two unknown thermal coefficients. Finally, we have compared our results with those obtained for the determination of two coefficients through the classical statement ($\alpha=1$) of the inverse Stefan problem, and we have proved the convergence of our results (which take latent-heat memory effects into account) to those of the classical case (no memory retention).
\subsection*{Acknowledgements}
This paper has been partially sponsored by the Project PIP No. 0534 from CONICET-UA (Rosario, Argentina) and AFOSR-SOARD Grant FA 9550-14-1-0122.
\bibliographystyle{plain}
| {
"timestamp": "2016-08-25T02:03:22",
"yymm": "1608",
"arxiv_id": "1608.06782",
"language": "en",
"url": "https://arxiv.org/abs/1608.06782",
"abstract": "We consider a semi-infinite one-dimensional phase-change material with two unknown constant thermal coefficients among the latent heat per unit mass, the specific heat, the mass density and the thermal conductivity. Aiming at the determination of them, we consider an inverse one-phase Stefan problem with an over-specified condition at the fixed boundary and a known evolution for the moving boundary. We assume that the phase-change process presents latent-heat memory effects by considering a fractional time derivative of order $\\alpha$ ($0<\\alpha<1$) in the Caputo sense and a sharp front model for the interface. According to the choice of the unknown thermal coefficients, six inverse fractional Stefan problems arise. For each of them, we determine necessary and sufficient conditions on data to obtain the existence and uniqueness of a solution of similarity type. Moreover, we present explicit expressions for the temperature and the unknown thermal coefficients. Finally, we show that the results for the classical statement of this problem, associated with $\\alpha=1$, are obtained through the fractional model when $\\alpha\\to 1^-$.",
"subjects": "Mathematical Physics (math-ph)",
"title": "Determination of two unknown thermal coefficients through an inverse one-phase fractional Stefan problem",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9802808759252646,
"lm_q2_score": 0.7217432122827968,
"lm_q1q2_score": 0.7075110683296942
} |
https://arxiv.org/abs/1711.07775 | Distance multivariance: New dependence measures for random vectors | We introduce two new measures for the dependence of $n \ge 2$ random variables: distance multivariance and total distance multivariance. Both measures are based on the weighted $L^2$-distance of quantities related to the characteristic functions of the underlying random variables. These extend distance covariance (introduced by Székely, Rizzo and Bakirov) from pairs of random variables to $n$-tuplets of random variables. We show that total distance multivariance can be used to detect the independence of $n$ random variables and has a simple finite-sample representation in terms of distance matrices of the sample points, where distance is measured by a continuous negative definite function. Under some mild moment conditions, this leads to a test for independence of multiple random vectors which is consistent against all alternatives. | \section{Introduction and related work}
Distance multivariance $M_\rho(X_1, X_2,\dots, X_n)$ and total distance multivariance $\overline M_\rho(X_1, X_2, \dots, X_n)$ are new measures for the dependence of random variables $X_1, \dots, X_n$. They are closely related to distance covariance, as introduced by Sz\'ekely, Rizzo and Bakirov \cite{SzekRizzBaki2007, SzekRizz2009} and its generalizations presented in \cite{part1}. Distance multivariance inherits many of the features of distance covariance; in particular, see Theorem~\ref{thm:independence} below,
\begin{itemize}
\item
$M_\rho(X_1, \dots, X_n)$ and $\overline M_\rho(X_1, \dots, X_n)$ are defined for random variables\\ $X_1, \dots$, $X_n$ with values in spaces of arbitrary dimensions $\mathds{R}^{d_1}, \dots, \mathds{R}^{d_n}$;
\item
if each subfamily of $X_1, \dots, X_n$ with $n-1$ elements is independent, $M_\rho(X_1, \dots, X_n) = 0$ characterizes the independence of $X_1, \dots, X_n$;
\item
$\overline M_\rho(X_1, \dots, X_n) = 0$ characterizes the independence of $X_1, \dots, X_n$.
\end{itemize}
We emphasize that measuring the dependence of $n$ random variables is different from measuring their pairwise dependence, and for this reason bivariate dependence measures, such as distance covariance, cannot be used directly to detect overall independence. A classical example, Bernstein's coins, is discussed in Section~\ref{ex:bernstein}. The extension of distance covariance to more than two random variables was addressed in a short paragraph in Bakirov and Sz\'ekely \cite{BakiSzek2011}. Our approach is different from the approach suggested in \cite{BakiSzek2011}; it is, in fact, closer to the two approaches that were advised against in \cite{BakiSzek2011}. We will discuss and compare these approaches in greater detail in Section~\ref{sec:BS11}, once the necessary concepts have been introduced. Recently, Yao \emph{et al.}\ \cite{YaoZhanShao2017} introduced measures for pairwise dependence based on distance covariance. In contrast, distance multivariance does not only detect pairwise dependence, but any type of multivariate dependence. Jin and Matteson \cite{JinMatt2017} present measures for multivariate independence which also use distance covariance. The resulting exact estimators are computationally more complex than those of distance multivariance; \cite{Boet2017} shows that the approximate estimators of \cite{JinMatt2017} have less empirical power but are computationally of the same order as distance multivariance.
Another line of research considers dependence measures based on reproducing kernel Hilbert spaces, notably the Hilbert-Schmidt independence criterion (HSIC) of \cite{GretBousSmolScho2005}, which has been shown to be equivalent to distance covariance in \cite{SejdSripGretFuku2013}. Subsequently, HSIC has been extended from a bivariate dependence measure to a multivariate dependence measure, $\mathrm{dHSIC}$, in \cite{PfisBuehSchoPete2017}. We compare $\mathrm{dHSIC}$ to distance multivariance in Section~\ref{sec:dhsic}.
Similar to distance covariance in \cite{SzekRizzBaki2007} and its generalizations given in \cite{part1}, distance multivariance can be defined as a weighted $L^2$-norm of quantities related to the characteristic functions of $X_1, \dots, X_n$, cf.~Definition~\ref{def:multivariance} below. There are, however, further definitions of distance multivariance which are equivalent up to moment conditions. In particular, multivariance can be equivalently defined as \emph{Gaussian multivariance} by evaluating a Gaussian random field at the instances $(X_1, \dots, X_n)$ and taking certain expectations, see Section~\ref{sub:gaussian}. This generalizes Sz\'ekely-and-Rizzo's \cite[Def.~4]{SzekRizz2009} Brownian covariance which is recovered using $n=2$ and multiparameter Brownian motion as random field.
The sample versions of both distance multivariance and total distance multivariance have simple expressions in terms of the distance matrices of the sample points; this means that we can compute these statistics efficiently even for large samples and in high dimensions. In concrete terms, as we show in Theorem~\ref{thm:sample}, the square of the distance multivariance computed from samples $\bm{x}^{(1)},\dots, \bm{x}^{(N)}$ of the random vector $\bm{X} = (X_1, \dots, X_n)$ can be written as
\begin{equation*}
\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}^2_\rho(\bm{x}^{(1)},\dots, \bm{x}^{(N)})
= \frac{1}{N^2} \sum_{j,k=1}^N (A_1)_{jk} \cdot \ldots \cdot (A_n)_{jk}
\end{equation*}
where the $A_i$ are doubly centred distance matrices of the sample points of $X_i$, i.e.~$A_i := - CB_iC$ where $C$ is the centering matrix $C = I - \tfrac{1}{N}\mathds{1}$, $\mathds{1}=(1)_{j,k=1,\dots, N}$, $I=(\delta_{jk})_{j,k=1,\dots, N}$, and $B_i$ are the distance matrices of the sample points. The square of the sample \emph{total} distance multivariance has a similar form
\begin{equation*}
\mbox{}^{\scriptscriptstyle N}\kern-2.5pt\overline{M}^2_\rho(\bm{x}^{(1)},\dots, \bm{x}^{(N)})
= \frac{1}{N^2} \sum_{j,k=1}^N (1+(A_1)_{jk}) \cdot \ldots \cdot (1+ (A_n)_{jk}) - 1.
\end{equation*}
The (quasi-)distance that is used to compute $B_i$ can be chosen, under mild restrictions, from the class of real-valued continuous negative definite functions, cf.~\cite[Ch.~II]{BerFor}, \cite[Sec.~3.2]{Jaco2001}. In particular, we may use Euclidean and $p$-Minkowski distances with exponent $p \in (1,2]$. In the bivariate case, and using Euclidean distance, the sample distance covariance of Sz\'ekely and Rizzo \cite[Def.~3]{SzekRizz2009} is recovered.
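These finite-sample formulae translate directly into code. The following Python sketch is a minimal stand-in restricted to the Euclidean distance (the reference implementation is the R package \texttt{multivariance} mentioned at the end of this introduction); it computes both statistics from the doubly centred distance matrices $A_i$:
\begin{verbatim}
import numpy as np

def centered_distance_matrix(x):
    """A = -C B C for the Euclidean distance matrix B of a sample x of
    shape (N, d_i): subtract row means and column means, add the grand mean."""
    x = np.asarray(x, dtype=float)
    if x.ndim == 1:
        x = x[:, None]
    B = np.sqrt(((x[:, None, :] - x[None, :, :])**2).sum(axis=-1))
    return -(B - B.mean(axis=0, keepdims=True)
               - B.mean(axis=1, keepdims=True) + B.mean())

def sample_multivariance2(samples):
    """Squared sample distance multivariance of the marginals in `samples`
    (a list of arrays, each with N rows)."""
    A = [centered_distance_matrix(x) for x in samples]
    return np.mean(np.prod(A, axis=0))        # (1/N^2) sum_jk prod_i (A_i)_jk

def sample_total_multivariance2(samples):
    """Squared sample total distance multivariance."""
    A = [centered_distance_matrix(x) for x in samples]
    return np.mean(np.prod([1.0 + a for a in A], axis=0)) - 1.0
\end{verbatim}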
Finally, we show in Theorems~\ref{thm:Mdistconv} and \ref{thm:Mdconv} asymptotic properties of sample distance multivariance as $N$ tends to infinity; these results are multivariate analogues of those in \cite[Thm.~5]{SzekRizz2009}. Based on these results, we formulate two new distribution-free tests for the joint independence of $n$ random variables in Section~\ref{sec:test}. These tests are conservative, and a resampling approach can be used to construct tests achieving the nominal size; further results in this direction can be found in \cite{Boet2017}. The paper concludes in Section~\ref{ex:bernstein} with an extended example based on \emph{Bernstein's coins}, which demonstrates numerically that (total) distance multivariance is able to distinguish between pairwise independence and higher-order dependence of random variables. The example also illustrates the practical validity of the two tests that are proposed. A further example with sinusoidal dependence is discussed, illustrating the influence of the underlying distance on the dependence measure.
For the immediate use of distance multivariance in applications all necessary functions are provided in the R package \texttt{multivariance}, \cite{Boett2017R-1.0.5}.
\section{Preliminaries}\label{sec:prelim}
We consider a $d$-dimensional random vector $\bm{X} = (X_1, \dots, X_n)$, whose components $X_i$ are random variables taking values in $\mathds{R}^{d_i}$, $i=1,\dots, n$, and where $d = d_1 + \dots + d_n$. The characteristic function of $X_i$ is denoted by
$$
f_{X_i}(t_i) := \mathds{E}\mathrm{e}^{\mathrm{i} \scalp{X_i}{t_i}}, \quad t_i \in \mathds{R}^{d_i},
$$
and we write $\bm{t} = (t_1, \dots, t_n)$. In order to define the distance multivariance of $(X_1, \dots, X_n)$, we use \emph{L\'evy measures} $\rho_i$, i.e.\ Borel measures $\rho_i$ defined on $\mathds{R}^{d_i}\setminus\{0\}$ such that
\begin{equation}\label{eq:levy_integrability}
\int_{\mathds{R}^{d_i}\setminus\{0\}} \min\{|t_i|^2,1\}\,\rho_i(\mathrm{d}t_i) <\infty.
\end{equation}
Note that the measures $\rho_i$ need not be finite. Such measures appear in the L\'evy--Khintchine representation of infinitely divisible distributions, see \cite{sato1999levy}.
Throughout this paper we assume that $\rho_i$, $i=1,\dots,n$ are symmetric L\'evy measures with full topological support, cf.~\cite[Def.~2.3]{part1}, and we set $\rho := \rho_{1}\otimes\dots\otimes\rho_{n}$. To keep notation simple, we write $\int\dots\,\rho_i(\mathrm{d}t_i)$ and $\int_{\mathds{R}^{d_i}}\dots\,\rho_i(\mathrm{d}t_i)$ instead of the formally correct $\int_{\mathds{R}^{d_i}\setminus\{0\}}\dots\,\rho_i(\mathrm{d}t_i)$.
\begin{definition}
Let $(X_i)_{i=1,\dots, n}$ be random variables with values in $\mathds{R}^{d_i}$ and let the measures $\rho_i$ be given as above. With $\rho := \rho_1 \otimes \dots \otimes \rho_n$, we define
\smallskip\textup{a)}
\emph{Distance multivariance} $M_\rho \in [0,\infty]$ by
\begin{equation}\label{def:multivariance}
M_\rho^2(X_1,\dots,X_n)
:= \int_{\mathds{R}^d} \left|\mathds{E}\left( \prod_{i=1}^n \left(\mathrm{e}^{\mathrm{i} \scalp{X_i}{t_i}}- f_{X_i}(t_i)\right)\right)\right|^2 \rho(\mathrm{d}t_1, \dots, \mathrm{d}t_n),
\end{equation}
\smallskip\textup{b)}
\emph{Total distance multivariance} $\overline M_\rho \in [0,\infty]$ by
\begin{equation}\label{def:total_multivariance}
\overline M_\rho^2(X_1,\dots,X_n)
:= \sum_{\substack{1\leq i_1< \dots < i_m \leq n\\2 \leq m \leq n}} M_{\bigotimes_{j=1}^m\rho_{i_j}}^2(X_{i_1},\dots,X_{i_m}).
\end{equation}
\end{definition}
\begin{remark}
\ \ a) \
Using the tensor product for functions
$$
\left(g_1 \otimes \dots \otimes g_n\right) (x_1, \dots, x_n) = g_1(x_1) \cdot\ldots\cdot g_n(x_n),
$$
distance multivariance can be written in a compact way as
\begin{equation}\label{eq:M_rho_L2}
M_\rho(X_1,\dots,X_n)
= \left\| \mathds{E}\left[ \bigotimes_{i=1}^n \left(\mathrm{e}^{\mathrm{i} \scalp{X_i}{{\scriptscriptstyle\bullet}}}- f_{X_i}({\scriptscriptstyle\bullet})\right)\right]\right\|_{L^2(\rho)}.
\end{equation}
Thus, distance multivariance is the weighted $L^2$-norm of a quantity related to the characteristic functions of the $X_i$, analogous to the definition of distance covariance in Sz\'ekely, Rizzo and Bakirov \cite[Def.~1]{SzekRizzBaki2007}.
\smallskip b) \
Both $M_\rho$ and $\overline M_\rho$ are always well-defined in $[0,+\infty]$: For each $\bm{t} = (t_1, \dots, t_n)$ the product appearing in the integrand of \eqref{def:multivariance} can be bounded in absolute value by $2^n$; therefore, the expectation exists. The integrand of the $\rho$-integral is positive, and so the integral is always well-defined in $[0,+\infty]$. Just as in the bivariate case, see
\cite[Thm.~3.7, Rem.~3.8]{part1},
we need moment conditions on the random variables $X_i$ to guarantee finiteness of $M_\rho$ and $\overline M_\rho$, see Proposition~\ref{prop:representation} below.
\smallskip c) \
At first sight, total distance multivariance seems to suffer from a computational curse of dimension, since the sum \eqref{def:total_multivariance} extends over all subfamilies (comprising at least two members) of $(X_1, \dots, X_n)$, i.e.\ $2^n - 1 - n$ terms are summed. We will, however, show in Theorem~\ref{thm:sample}, that the finite sample version of $\overline M_\rho$ has the same computational complexity as $M_\rho$ and its computation requires only $\mathcal{O}(nN^2)$ operations given a sample of size $N$.
\end{remark}
Each L\'evy measure $\rho_i$ uniquely defines a real-valued \emph{continuous negative definite function}
\begin{equation}\label{eq:cndf}
\psi_i(y_i) := \int_{\mathds{R}^{d_i}} \left(1-\cos(\scalp{y_i}{t_i})\right) \,\rho_i(\mathrm{d}t_i) \quad \text{for\ \ } y_i\in \mathds{R}^{d_i},
\end{equation}
see e.g.~\cite[Cor.~3.7.9]{Jaco2001}.
The functions $\psi_i$ will play a key role in the finite-sample representation of distance multivariance and also appear in moment conditions. They are also the reason for the terms \emph{distance} multivariance (and \emph{distance} covariance, cf.~\cite{SzekRizz2009}), since $\psi_i$ yields well-known distance functions (and in many cases norms) in several important special cases. In particular, $x\mapsto |x|_{}^\alpha$ where $|\cdot|_{}$ is the standard $d_i$-dimensional Euclidean norm and $\alpha \in (0,2)$, can be represented using
$$
\rho_i(\mathrm{d}t_i) = c_{\alpha,d_i}\, |t_i|_{}^{-d_i - \alpha}\, \mathrm{d}t_i,
\quad \alpha \in (0,2),
\quad c_{\alpha,d_i} = \frac{\alpha 2^{\alpha-1} \Gamma\left(\tfrac{\alpha+d_i}{2}\right)}{\pi^{d_i/2} \Gamma\left(1-\tfrac{\alpha}{2}\right)},
$$
since
$$
|y_i|_{}^\alpha = c_{\alpha, d_i} \int_{\mathds{R}^{d_i}} \left(1-\cos \scalp{y_i}{t_i}\right) \,\frac{\mathrm{d}t_i}{|t_i|_{}^{d_i + \alpha}}.
$$
Also other Minkowski distances $|x|_{d_i,p} := \left(\sum_{j=1}^{d_i} |x_j|^p\right)^{1/p}$, for $p \in (1,2]$ can be written in the form \eqref{eq:cndf}; see \cite[Lemma~2.2 and Table~1]{part1} for this and further examples.
For the following results and proofs it will be useful to introduce some notation for various distributional copies of the vector $\bm{X} = (X_1, \dots, X_n)$. Recall that $\mathcal{L}(X_i)$ denotes the law of $X_i$ and define the random vectors
\begin{equation} \label{eq:X0X1}
\begin{aligned}
\bm{X}_0 &= (X_{0,1},\dots,X_{0,n}) &&\sim \quad\mathcal{L}(X_1) \otimes \dots \otimes \mathcal{L}(X_n),\\
\bm{X'}_0 &= (X'_{0,1},\dots,X'_{0,n}) &&\sim \quad\mathcal{L}(X_1) \otimes \dots \otimes \mathcal{L}(X_n),\\
\bm{X}_1 &= (X_{1,1},\dots,X_{1,n}) &&\sim \quad\mathcal{L}(X_1, \dots, X_n),\\
\bm{X'}_1 &= (X'_{1,1},\dots,X'_{1,n}) &&\sim \quad\mathcal{L}(X_1, \dots, X_n),
\end{aligned}
\end{equation}
such that the random vectors $\bm{X}_0, \bm{X'}_0, \bm{X}_1, \bm{X'}_1$ are independent. Note that the subscript `$1$' -- as in $\bm{X}_1$ and $\bm{X'}_1$ -- indicates that these vectors have the \emph{same distribution} as $\bm{X}$, while the subscript `$0$' -- as in $\bm{X}_0$ and $\bm{X'}_0$ -- means that these random vectors have the \emph{same marginal distributions} as $\bm{X}$, but their coordinates are independent.
\begin{definition} \label{def:moment}
We introduce the following moment conditions:
\smallskip\textup{a)}
The \emph{mixed moment condition} holds if
$$
\mathds{E}\left( \prod_{i=1}^n \psi_i(X_{k_i,i}-X'_{l_i,i})\right)< \infty
\quad \text{for all\ \ } k_i,l_i \in\{0,1\},\; i=1,\dots, n.
$$
\smallskip\textup{b)}
The \emph{psi-moment condition} holds if there exist $p_i \in [1,\infty)$ satisfying $\sum_{i=1}^n p_i^{-1} = 1$ such that
$$
\mathds{E} \psi_i^{p_i}(X_i) < \infty
\quad\text{for all\ \ } i = 1, \dots, n.
$$
In particular, one may choose $p_1 = \dots = p_n = n$. (The case $p_i=\infty$ is also admissible, but this means that $\psi_i$ must be bounded or $X_i$ must have compact support.)
\smallskip\textup{c)}
The \emph{$2p$-moment condition} holds if there exist $p_i \in [1,\infty)$ satisfying $\sum_{i=1}^n p_i^{-1} = 1$ such that
$$
\mathds{E} \big[|X_i|_{}^{2 p_i}\big] < \infty \quad \text{for all\ \ } i = 1, \dots, n;
$$
(the case $p_i=\infty$ is also admissible, but this means that $X_i$ is a.s.\ bounded).
\end{definition}
As shown in Lemma~\ref{lem:moments} in the supplement \cite{part2supp}, these moment conditions are ordered from weak to strong, i.e.~c) implies b) and b) implies a). Also note that b) and a) trivially hold (for any choice of $p_i$) if the functions $\psi_i$ are bounded.
\section{Distance multivariance and total distance multivariance}\label{sec:theory}
\subsection{Total distance multivariance characterizes independence}
We need the concept of $m$-independence of $n\geq m$ random variables.
\begin{definition}
Random variables $X_1,\dots,X_n$ are \emph{$m$-independent} (for some $m\leq n$) if for any sub-family $\{i_1,\dots, i_m\}\subset \{1,\dots, n\}$ the random variables $X_{i_1},\dots, X_{i_m}$ are independent.
\end{definition}
The condition of $(n-1)$-independence allows certain factorizations of expectations of products; the proof of the following Lemma is given in the supplement \cite{part2supp}:
\begin{lemma}\label{lem:factorization}
Let $Z_1,\dots,Z_n$ be $\mathds{C}$-valued random variables which are $(n-1)$-in\-de\-pen\-dent. Then
\begin{equation}
\mathds{E}\left(\prod_{i=1}^n (Z_i - \mathds{E} Z_i)\right) = \mathds{E}\left(\prod_{i=1}^n Z_i - \prod_{i=1}^n \mathds{E} Z_i\right).
\end{equation}
\end{lemma}
If we use the random variables $Z_i := \mathrm{e}^{\mathrm{i} \scalp{X_i}{t_i}}$, Lemma \ref{lem:factorization} yields the following result for characteristic functions.
\begin{corollary}\label{cor:mInd-charfun}
Let $X_{i_1},\dots,X_{i_m}$ be $(m-1)$-independent random variables; then
\begin{equation}\begin{aligned}
&\mathds{E}\left[\prod_{k=1}^m \left(\mathrm{e}^{\mathrm{i} \scalp{X_{i_k}}{t_{i_k}}} - f_{X_{i_k}}(t_{i_k})\right)\right]\\
&\qquad= f_{(X_{i_1},\dots,X_{i_m})}(t_{i_1},\dots,t_{i_m}) - f_{X_{i_1}}(t_{i_1})\cdot \ldots \cdot f_{X_{i_{m}}}(t_{i_m}).
\end{aligned}\end{equation}
\end{corollary}
This enables us to show that independence is indeed characterized by total distance multivariance.
\begin{theorem}\label{thm:independence}
\textup{a)}
Distance multivariance vanishes for independent random variables, i.e.
\begin{equation} \label{eq:indep-M}
X_1,\dots,X_n \text{\ \ are independent} \implies M_\rho(X_1,\dots,X_n) = 0.
\end{equation}
If $X_1, \dots, X_n$ are $(n-1)$-independent, then also the converse
holds.
\smallskip\textup{b)}
Total distance multivariance characterizes independence, i.e.
\begin{equation}
X_1,\dots,X_n \text{\ \ are independent} \iff \overline{M}_\rho(X_1,\dots,X_n) = 0.
\end{equation}
\end{theorem}
\begin{remark} \label{rem:M-n-independence} Note that multivariance is not just a building block of total multivariance, but has applications in its own right. The characterization of $n$-independence by $(n-1)$-independence and $M_\rho(X_1,\ldots,X_n) = 0$ can be used to detect (higher order) dependence structures; this is used in \cite{Boet2017}. Other applications can be found in the setting of independent component analysis (ICA). The algorithm of \cite{Como1994} aims to transform the input signal into pairwise independent random variables which, if all assumptions of ICA are satisfied, are also mutually independent. Thus, distance multivariance can be used to test the validity of these assumptions by testing for higher-order dependence given pairwise independence \cite{Bart2018}.
\end{remark}
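As a quick numerical illustration of this remark, consider one standard formulation of Bernstein's example (treated in detail in Section~\ref{ex:bernstein}): two independent fair coins $X,Y$ and $Z=\mathds{1}_{\{X=Y\}}$ are pairwise independent but not jointly independent. The following self-contained Python sketch (Euclidean distance; the sample size $N=2000$ is an arbitrary choice) shows that the pairwise sample multivariances are close to zero while the sample multivariance of the triple is not:
\begin{verbatim}
# Bernstein's coins: pairwise independent, jointly dependent (sketch only).
import numpy as np

rng = np.random.default_rng(0)
N = 2000
X = rng.integers(0, 2, N).astype(float)
Y = rng.integers(0, 2, N).astype(float)
Z = (X == Y).astype(float)

def A(v):                                    # doubly centred distance matrix
    B = np.abs(v[:, None] - v[None, :])
    return -(B - B.mean(axis=0, keepdims=True)
               - B.mean(axis=1, keepdims=True) + B.mean())

def M2(*vs):                                 # squared sample multivariance
    return np.mean(np.prod([A(v) for v in vs], axis=0))

print(M2(X, Y), M2(X, Z), M2(Y, Z))   # pairwise: of order 1/N
print(M2(X, Y, Z))                    # triple: bounded away from zero
\end{verbatim}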
\begin{proof}[Proof of Theorem \ref{thm:independence}]
Suppose that $X_1,\dots,X_n$ are independent. We have for all indices $\{i_1,\dots,i_m\}\subset\{1,\dots,n\}$
\begin{equation}
\mathds{E}\left[ \prod_{k=1}^m \left(\mathrm{e}^{\mathrm{i} \scalp{X_{i_k}}{t_{i_k}}}- f_{X_{i_k}}(t_{i_k})\right)\right]
= \prod_{k=1}^m \mathds{E}\left( \mathrm{e}^{\mathrm{i} \scalp{X_{i_k}}{t_{i_k}}}- f_{X_{i_k}}(t_{i_k})\right)
= 0,
\end{equation}
and, so, $M_{\bigotimes_{k=1}^m \rho_{i_k}}(X_{i_1},\dots,X_{i_m}) = 0$; this implies $\overline{M}_\rho(X_1,\dots,X_n)=0$.
For the converse statements suppose first that $X_1, \dots, X_n$ are $(n-1)$-independent and consider
$$
\kappa(t_1, \dots, t_n)
:= \mathds{E}\left[ \prod_{i=1}^n \left(\mathrm{e}^{\mathrm{i} \scalp{X_{i}}{t_{i}}}- f_{X_{i}}(t_{i})\right)\right].
$$
By definition, $M_\rho(X_1, \dots, X_n)$ is the $L^2(\rho)$-norm of $\kappa$. Since $\rho$ has full topological support and $\kappa$ is continuous, $M_\rho = 0$ implies that $\kappa \equiv 0$ everywhere on $\mathds{R}^d$. By Corollary~\ref{cor:mInd-charfun}, it follows that
$$
f_{(X_{1},\dots,X_{n})}(t_1,\dots,t_n) = f_{X_{1}}(t_1)\cdot \ldots \cdot f_{X_{n}}(t_n)
\quad\text{for all\ \ } t_1,\dots, t_n,
$$
i.e.\ the joint characteristic function of $X_1, \dots, X_n$ factorizes, and we conclude that $X_1, \dots, X_n$ are independent.
Finally, suppose that $\overline{M}_\rho(X_1,\dots,X_n)=0$, and thus that
\begin{equation} \label{eq:proofindependence}
M_{\bigotimes_{k=1}^m \rho_{i_k}}(X_{i_1},\dots,X_{i_m}) = 0
\quad\text{for any\ \ } \{i_1,\dots,i_m\} \subset \{1,\dots,n\}.
\end{equation}
Starting with subsets of size $2$, we note that
\begin{align}
\overline{M}_{\rho_{i_1}\otimes\rho_{i_2}}(X_{i_1},X_{i_2})
&= M_{\rho_{i_1}\otimes\rho_{i_2}}(X_{i_1},X_{i_2})\\
\notag &= \|f_{(X_{i_1},X_{i_2})} - f_{X_{i_1}}f_{X_{i_2}}\|_{L^2(\rho_{i_1}\otimes\rho_{i_2})} = 0
\end{align}
for all $\{i_1, i_2\} \subset \{1, \dots, n\}$; this means that the random variables $X_1,\dots, X_n$ are pairwise independent, hence $X_1,\dots,X_n$ are $2$-independent. Continuing with subsets of size $3$, \eqref{eq:proofindependence} together with the first part of the proof implies $3$-independence of $X_1, \dots, X_n$. Repeating this argument finally yields the independence of $X_1,\dots,X_n$.
\end{proof}
\subsection{Further properties and representations of multivariance}
Directly from Definition~\ref{def:multivariance} we see that for two random variables $X=X_1$ and $Y=X_2$ and L\'evy measures $\rho = \rho_1\otimes\rho_2$ the notions of multivariance $M_\rho$, total multivariance $\overline M_\rho$ and generalized distance covariance $V$ as defined in \cite[Def.~3.1]{part1}
coincide, i.e.
$$
M_\rho(X,Y) = \overline M_\rho(X,Y) = V(X,Y).
$$
The following properties are straightforward.
\begin{proposition}
Distance multivariance enjoys the following properties.
\begin{gather}
\label{eq:Msingle}
M_{\rho_i}(X_i) = 0 \quad\text{for all\ \ } i=1,\dots,n,\\
\label{eq:Msymmetric}
M_\rho(X_1,\dots,X_n) = M_\rho(c_1 X_1,\dots,c_n X_n)\quad \text{for\ \ } c_i \in \{-1,+1\}.
\intertext{Let $S\subset \{1,\dots,n\}$. If $(X_i, i\in S)$ is independent of $(X_i, i\in S^c)$, then}
\label{eq:Mfactors}
M_\rho(X_1,\dots,X_n) = M_{\bigotimes_{i\in S}\rho_i}(X_i, i\in S) \cdot M_{\bigotimes_{i\in S^c}\rho_i}(X_i, i\in S^c).
\end{gather}
\end{proposition}
\begin{proof}
If $n=1$, the expectation in \eqref{def:multivariance} becomes $\mathds{E}\left(\mathrm{e}^{\mathrm{i} \scalp{X_i}{t_i}} - \mathds{E}\mathrm{e}^{\mathrm{i} \scalp{X_i}{t_i}}\right) = 0$ and \eqref{eq:Msingle} follows. Property~\eqref{eq:Msymmetric} follows from the symmetry of the measures $\rho_i$. For the last property, note that the assumption of independence allows us to factorize the following expression
\begin{align*}
&\mathds{E}\left[\bigotimes_{i=1}^n \left(\mathrm{e}^{\mathrm{i} \scalp{X_i}{\scriptscriptstyle\bullet}}- f_{X_i}({\scriptscriptstyle\bullet})\right)\right] \\
&\qquad= \mathds{E}\left[\bigotimes_{i \in S} \left(\mathrm{e}^{\mathrm{i} \scalp{X_i}{\scriptscriptstyle\bullet}}- f_{X_i}({\scriptscriptstyle\bullet})\right)\right]
\cdot \mathds{E}\left[ \bigotimes_{i \in S^c} \left(\mathrm{e}^{\mathrm{i} \scalp{X_i}{\scriptscriptstyle\bullet}}- f_{X_i}({\scriptscriptstyle\bullet})\right)\right].
\end{align*}
Since also $\rho$ can be factorized into $\bigotimes_{i \in S} \rho_i$ and $\bigotimes_{i \in S^c} \rho_i$, \eqref{eq:Mfactors} follows.
\end{proof}
Another relevant aspect is the behaviour of (total) distance multivariance, when an independent component is added to a given random vector.
\begin{proposition}Let $X_{n+1}$ be independent from $(X_1, \dots, X_n)$. Then
\begin{align}
M_\rho(X_1,\dots,X_{n+1}) &= 0 \label{eq:M_add}\\
\overline{M}_\rho(X_1,\dots,X_{n+1}) &= \overline{M}_\rho(X_1,\dots,X_n) \label{eq:M_total_add}
\end{align}
\end{proposition}
\begin{proof}
The first equation follows from \eqref{eq:Mfactors} by taking $S = \{1, \dots, n\}$. If we insert this into \eqref{def:total_multivariance}, we see that all summands containing the index $i = n+1$ do not contribute to total distance multivariance. Hence, \eqref{eq:M_total_add} follows.
\end{proof}
\begin{remark}
In this context, it is interesting to anticipate \emph{normalized} total distance multivariance $\overline{\mathcal{M}}_\rho$ which will be defined in \eqref{eq:norm_total_DM}. If $X_{n+1}$ is independent from $(X_1, \dots, X_n)$ it is easy to check that
\begin{gather*}
\overline{\mathcal{M}}_\rho(X_1, \dots, X_{n+1}) = r(n) \cdot \overline{\mathcal{M}}_\rho(X_1, \dots, X_n)
\end{gather*}
where $r(n) = \sqrt{(2^n - n - 1)}/\sqrt{(2^{n+1} - n - 2)}$. Note that $r(n)$ is strictly increasing from $r(2) = 1/2$ to $\lim_{n \to \infty} r(n) = 1/\sqrt{2}$. Thus, the addition of an independent component affects $\overline{\mathcal{M}}_\rho$ by a factor from $[1/2, 1/\sqrt{2})$.
\end{remark}
We now turn to different representations of multivariance. The representation as $L^2(\rho)$-norm in \eqref{def:multivariance} is always well-defined, but may have infinite value. Under suitable moment conditions, multivariance is finite and can be represented in terms of the continuous negative definite functions $\psi_i$ given in \eqref{eq:cndf}. The proof of the following proposition can be found in the supplement \cite{part2supp}.
\begin{proposition}\label{prop:representation}
The squared multivariance $M_\rho^2 = M_\rho^2(X_1,\dots,X_n)$ can be written as
\begin{gather}\label{eq:Msum_1}
M_\rho^2
= \int \mathds{E}\left(\sum_{k,l \in \{0,1\}^{ n}} \sgn(k,l) \prod_{i=1}^n \mathrm{e}^{\mathrm{i} \scalp{(X_{k_i,i}-X'_{l_i,i})}{t_i}}\right) \rho(\mathrm{d}t),
\intertext{or}\label{eq:Msum_2}
M_\rho^2
= \int \mathds{E}\left(\sum_{k,l \in \{0,1\}^{ n}} \sgn(k,l) \prod_{i=1}^n \left[\cos(\scalp{(X_{k_i,i}-X'_{l_i,i})}{t_i}) - 1\right]\right) \rho(\mathrm{d}t),
\end{gather}
where
\begin{align*}
\sgn(k,l)
:= (-1)^{\sum\limits_{j=1}^n (k_j+l_j)}
=
\begin{cases}
+1, & \text{if $(k,l)$ contains an even no.\ of `$1$\!'\/s},\\
-1, & \text{if $(k,l)$ contains an odd no.\ of `$1$\!'\/s}.
\end{cases}
\end{align*}
If one of the moment conditions in Definition~\ref{def:moment} holds, then the distance multivariance $M_\rho(X_1, \dots, X_n)$ is finite, and the following representation holds
\begin{equation}\label{eq:Mprod}\begin{aligned}
M_\rho^2
&=\mathds{E}\Bigg(\prod_{i=1}^n \big[-\psi_i(X_{i}-X'_{i})+\mathds{E}(\psi_i(X_{i}-X'_{i}) \mid X_i)\\
&\qquad\qquad\quad\mbox{} + \mathds{E}(\psi_i(X_{i}-X'_{i}) \mid X'_i) -\mathds{E}\psi_i(X_{i}-X'_{i})\big]\Bigg).
\end{aligned}\end{equation}
\end{proposition}
\begin{remark}
\ \ a) \
The representations \eqref{eq:Msum_1} and \eqref{eq:Msum_2} have an interesting structural resemblance to the Leibniz' formula for determinants;
\eqref{eq:Mprod} is the analogue of \cite[Cor.~3.5]{part1}
for the bivariate case.
\smallskip b) \
In the bivariate case $n=2$, distance multivariance is also finite under the weaker moment condition $\mathds{E}\psi_1(X_1) + \mathds{E}\psi_2(X_2) < \infty$, cf.~\cite[Thm.~3.7]{part1}.
\end{remark}
We introduce yet another representation of distance multivariance, which helps to clarify the relation to the finite-sample form and the representation as \emph{Gaussian multivariance}, given in Section~\ref{sub:gaussian} below. For this, we need the centering operator $\Ce_\mathcal{F}$:
\begin{proposition}\label{prop:center}
Let $X$ be an integrable random variable on $(\Omega,\mathcal{A},\mathds{P})$ and $\mathcal{F}, \mathcal{F}'$ be sub-$\sigma$-algebras of $\mathcal{A}$. Set
\begin{equation}
\Ce_\mathcal{F} X:= X- \mathds{E}(X\mid\mathcal{F}).
\end{equation}
Then $\Ce$ is a linear operator and
\begin{align}
\Ce_{\{\emptyset,\Omega\}} X
&= X- \mathds{E} X,\\
\label{perp-double}\Ce_\mathcal{F} \Ce_{\mathcal{F}'} X
&= X - \mathds{E}(X \mid \mathcal{F}') - \mathds{E}(X \mid \mathcal{F}) + \mathds{E}(\mathds{E}(X \mid \mathcal{F}') \mid \mathcal{F}),\\
\label{perp-meas}
\Ce_\mathcal{F} \Ce_{\mathcal{F}'} X &= 0 \quad \text{if $X$ is $\mathcal{F}'$-measurable.}
\end{align}
If $\mathcal{F}'$ and $\mathcal{F}$ are independent, then $\mathds{E}(\Ce_{\mathcal{F}'}X \mid\mathcal{F}) = \Ce_{\{\emptyset,\Omega\}}\mathds{E}(X \mid \mathcal{F})$.
\end{proposition}
All assertions of the proposition follow directly from the properties of conditional expectations, and we omit the proof. Geometrically, $\Ce_\mathcal{F} X$ can be interpreted as the residual from the orthogonal projection of $X$ onto the set of $\mathcal{F}$-measurable functions. We will use the shorthand $\Ce_X := \Ce_{\sigma(X)}$.
\begin{corollary}\label{cor:M_center}
If one of the moment conditions in Definition~\ref{def:moment} holds, then
\begin{equation}\label{eq:Mprod_center}
M_\rho^2(X_1,\dots,X_n)
= \mathds{E}\left(\prod_{i=1}^n - \Ce_{X_i} \Ce_{X'_i} \psi_i(X_i - X'_i)\right)
\end{equation}
and
\begin{equation}\label{eq:Mprod_center_total}
\overline{M}_\rho^2(X_1,\dots,X_n)
= \mathds{E}\left(\prod_{i=1}^n \left(1 - \Ce_{X_i} \Ce_{X'_i} \psi_i(X_i - X'_i)\right)\right) - 1.
\end{equation}
The factors can be written explicitly as
\begin{align}\label{eq:psi_double_center}
\Ce_{X_i} \Ce_{X'_i} \psi_i(X_i - X'_i) =\,& \psi_i(X_i - X'_i) - \mathds{E}[\psi_i(X_i - X'_i) \mid X'_i] \\ &- \mathds{E}[\psi_i(X_i - X'_i) \mid X_i] + \mathds{E}\psi_i(X_i - X'_i). \notag
\end{align}
\end{corollary}
\begin{proof}
The identity \eqref{eq:psi_double_center} follows directly from the definition of the double centering operator in Prop.~\ref{prop:center}. The representation~\eqref{eq:Mprod_center} is an immediate consequence of \eqref{eq:Mprod} in Prop.~\ref{prop:representation}.
For representation~\eqref{eq:Mprod_center_total} of the total multivariance, write $a_i := - \Ce_{X_i} \Ce_{X'_i} \psi_i(X_i - X'_i)$. We can expand the product
$$
\prod_{i=1}^n (1 + a_i)
= \sum_{m=0}^n e_m(a_1, \dots, a_n),
$$
where the function $e_m(a_1, \dots, a_n)$ is the $m$th elementary symmetric polynomial in $(a_1, \dots, a_n),$ i.e.
$$e_m(a_1, \dots, a_n) = \sum_{1\leq i_1< \dots < i_m\leq n} a_{i_1} \cdot \ldots \cdot a_{i_m}.$$
In particular, $e_0(a_1, \dots, a_n) = 1$ and $e_1(a_1, \dots, a_n) = a_1 + \dots + a_n$. Taking expectations yields
\begin{equation}\begin{aligned}
\mathds{E} \left[\prod_{i=1}^n (1+ a_i)\right] - 1 &= \sum_{m=1}^n \mathds{E}\left[ e_m(a_1, \dots, a_n) \right] - 1 \\
&= \sum_{\substack{1\leq i_1< \dots < i_m \leq n\\2 \leq m \leq n}} \mathds{E}\left[ a_{i_1} \cdot \ldots \cdot a_{i_m} \right] \\
&= \sum_{\substack{1\leq i_1< \dots < i_m \leq n\\2 \leq m \leq n}} M^2_\rho(X_{i_1},\dots,X_{i_m}) \\&= \overline{M}_\rho^2(X_1,\dots,X_n),
\end{aligned}\end{equation}
as claimed. Note that the first elementary symmetric polynomial $e_1$ does not contribute since $\mathds{E}[a_i] = 0$ for all $i \in \{1, \dots, n\}$.
\end{proof}
\subsection{Gaussian multivariance}\label{sub:gaussian}
Recall that for a real-valued negative definite function $\psi:\mathds{R}^d\to\mathds{R}$ the matrix $(\psi(\xi_j)+\psi(\xi_k)-\psi(\xi_j-\xi_k))_{j,k=1,\dots,n}$, $\xi_1,\dots,\xi_n\in\mathds{R}^d$, $n\in\mathds{N}$, is positive semidefinite, see \cite[Def.~3.6.6]{Jaco2001}.
Therefore, we can associate with any cndf $\psi$ some Gaussian random field indexed by $\mathds{R}^d$.
\begin{definition}\label{def:gaussian}
Assume that
$X_1, \dots, X_n$ satisfy one of the moment conditions in Definition~\ref{def:moment} and let $G_1,\dots, G_n$ be independent (also independent of $X_1, \dots, X_n$), stationary Gaussian random fields with
\begin{equation}\label{gaussianprocess}
\mathds{E} G_i(\xi) = 0
\quad\text{and}\quad
\mathds{E}(G_i(\xi)G_i(\eta)) = \psi_i(\xi) + \psi_i(\eta) - \psi_i(\xi-\eta)
\end{equation}
for $\xi,\eta \in \mathds{R}^{d_i}$.
The \emph{Gaussian multivariance} of $(X_1, \dots, X_n)$ is defined by
\begin{equation}
\mathcal{G}^2(X_1,\dots,X_n) = \mathds{E}\left(\prod_{i=1}^n X_i^{G_i}X_i'^{G_i}\right)
\end{equation}
where $(X_1',\dots, X_n')$ is an independent copy of $(X_1,\dots,X_n)$ and
\begin{equation}\label{def:gaussiancentering}
X_i^{G_i} := G_i(X_i) - \mathds{E}(G_i(X_i) \mid G_i).
\end{equation}
\end{definition}
\begin{remark}
\ \ a) \
Using the centering operator $\Ce$ from Proposition~\ref{prop:center}, we can write \eqref{def:gaussiancentering} as $X_i^{G_i} = \Ce_{G_i} G_i(X_i)$.
\smallskip b) \
In the bivariate case $n=2$ Gaussian multivariance coincides with the Gaussian covariance defined in
\cite[Sec.~7]{part1}.
\smallskip c) \
If $\psi_i$ is given by the Euclidean norm, then $G_i$ is a Brownian field indexed by $\mathds{R}^{d_i}$. In particular, if $n=2$ and both $\psi_1$ and $\psi_2$ are given by the Euclidean norm, then $\mathcal{G}(X_1,X_2)$ coincides with the \emph{Brownian covariance} of Sz\'ekely and Rizzo \cite{SzekRizz2009}.
\smallskip d) \
If $\psi_i(x) = |x|^\alpha$, then $G_i$ is a fractional Brownian field with Hurst exponent $H = \frac{\alpha}{2}$, cf.~\cite[Sec.~4]{SzekRizz2009}.
\end{remark}
\begin{theorem}\label{thm:MequalG}
Suppose that one of the moment conditions of Definition~\ref{def:moment} holds and $\mathds{E}(\psi_i(X_i)^{\frac{n}{2}}) < \infty$ for $i=1,\ldots,n.$ Then distance multivariance and Gaussian multivariance coincide, i.e.
\begin{equation}
M_\rho(X_1,\dots,X_n) = \mathcal{G}(X_1,\dots,X_n).
\end{equation}
\end{theorem}
\begin{proof}
By Corollary~\ref{cor:M_center} we can represent squared multivariance in the product form \eqref{eq:Mprod_center}. Each of the factors can be rewritten as
\begin{equation}
\begin{aligned}
&- C_X C_{X'} \psi(X-X') \\
&\quad= C_X C_{X'}\left(\psi(X)+\psi(X') -\psi(X-X')\right)\\
&\quad= C_X C_{X'} \mathds{E}(G(X)G(X') \mid X,X') \\
&\quad= \mathds{E}(G(X)G(X') \mid X,X') - \mathds{E}(G(X) G(X') \mid X)\\
&\quad\qquad\qquad\mbox{} - \mathds{E}(G(X)G(X') \mid X') + \mathds{E}(G(X)G(X'))\\
&\quad= \mathds{E}\left[ \left( G(X)- \mathds{E}(G(X) \mid G)\right) \left(G(X')- \mathds{E}(G(X') \mid G)\right) \:\middle|\: X,X'\right]\\
&\quad = \mathds{E}(X^G X'^G \mid X,X'),\\
\end{aligned}
\end{equation}
where we have used the covariance structure \eqref{gaussianprocess} of the Gaussian process $G$ in the third line.
Putting everything together, we have
\begin{align*}
&M_\rho^2(X_1,\dots,X_n)
= \mathds{E} \left(\prod_{i =1}^n - C_{X_i} C_{X'_i} \psi_i(X_i-X_i')\right) \\
&= \mathds{E}\left(\prod_{i=1}^n \mathds{E}\left( X_i^{G_i}X_i'^{G_i} \;\middle|\; X_i,X_i'\right)\right)= \mathds{E}\left(\prod_{i=1}^n X_i^{G_i}X_i'^{G_i}\right)
= \mathcal{G}^2(X_1,\dots,X_n).
\end{align*}
Note that for the penultimate equality the absolute integrability of the integrand, i.e. $\mathds{E}\left(\prod_{i=1}^n |X_i^{G_i}X_i'^{G_i}|\right)< \infty$, is required.
Writing $\mathcal{F} := \sigma(X_i,i=1,\ldots,n)$ and $\mathcal{F}' := \sigma(X_i',i=1,\ldots,n)$, we obtain
\begin{equation*}
\begin{aligned}
\mathds{E}&\left(\prod_{i=1}^n |X_i^{G_i}X_i'^{G_i} |\right) = \mathds{E}\left(\prod_{i=1}^n \mathds{E}\left(|X_i^{G_i}X_i'^{G_i} | \;\middle|\; \mathcal{F},\mathcal{F}' \right)\right)\\
&\leq \mathds{E}\left(\prod_{i=1}^n \sqrt{\mathds{E}\left(|X_i^{G_i}|^2 \;\middle|\; \mathcal{F},\mathcal{F}'\right)\mathds{E}\left(|X_i'^{G_i} |^2 \;\middle|\; \mathcal{F},\mathcal{F}'\right)}\right)\\
&= \mathds{E}\left(\sqrt{\prod_{i=1}^n \mathds{E}\left(|X_i^{G_i}|^2 \;\middle|\;\mathcal{F} \right)}\right) \cdot \mathds{E}\left(\sqrt{\prod_{i=1}^n \mathds{E}\left(|X_i'^{G_i}|^2 \;\middle|\; \mathcal{F}'\right)}\right)\\
\label{eq:jensenm}& = \mathds{E}\left(\sqrt{\prod_{i=1}^n \mathds{E}\left(|X_i^{G_i}|^2 \;\middle|\; \mathcal{F}\right)}\right)^2
\leq \left(\prod_{i=1}^n\mathds{E}\left[\left(\mathds{E}\left(|X_i^{G_i}|^2 \;\middle|\;\mathcal{F}\right)\right)^\frac{n}{2}\right]\right)^\frac{2}{n}\\
&\leq \left(\prod_{i=1}^n\mathds{E}\left[\mathds{E}\left(|X_i^{G_i}|^n \;\middle|\;\mathcal{F}\right)\right]\right)^\frac{2}{n}
= \left(\prod_{i=1}^n\mathds{E}\left(|X_i^{G_i}|^n \right)\right)^\frac{2}{n},
\end{aligned}
\end{equation*}
where we used successively the independence of the $G_i$, the conditional H\"older inequality \cite[7.2.4]{ChowTeic1997}, the independence and identical distribution of $(X_i, i=1,\ldots,n)$ and $(X_i', i=1,\ldots,n)$, the generalized H\"older inequality \cite[p. 133, Pr. 13.5]{schilling-mims} and the conditional Jensen inequality \cite[7.1.4]{ChowTeic1997}.\\
Finally, note that for $n\in\mathds{N}$ the elementary inequality $|a+b|^n \leq 2^{n-1} (|a|^n+|b|^n)$ and the formula for absolute moments of Gaussian random variables, i.e. $\mathds{E}(|G_i(t)|^n) = 2^\frac{n}{2} \Gamma(\tfrac{n+1}{2}) \pi^{-\frac{1}{2}} [\mathds{E} G_i(t)^2]^\frac{n}{2},$ and $\mathds{E} [G_i(t)^2] = 2\psi_i(t)$ imply
\begin{equation}
\mathds{E}|X_i^{G_i}|^n \leq 2^n \mathds{E}|G_i(X_i)|^n
= 2^{2n} \Gamma(\tfrac{n+1}{2})\pi^{-\frac{1}{2}} \mathds{E}(\psi_i(X_i)^{\frac{n}{2}}),
\end{equation}
which proves the desired integrability.
\end{proof}
We conclude this section by comparing (total) distance multivariance to related approaches in \cite{BakiSzek2011} and to the multivariate Hilbert-Schmidt independence criterion ($\mathrm{dHSIC}$) of \cite{PfisBuehSchoPete2017}.
\subsection{Comparison with \textup{\cite{BakiSzek2011}}} \label{sec:BS11}
The problem of generalizing distance covariance of two random variables $X,Y$ to multiple variables has been discussed in a short paragraph `\emph{How to \textup{(}not\textup{)} extend \textup{[}distance covariance\textup{]} $\mathcal{V}(X,Y)$ to more than two random variables}' in \cite{BakiSzek2011}. In the notation of our paper they discuss for three random variables $X,Y,Z$ the following objects:
\medskip a)
Gaussian Covariance $\mathcal{G}(X,Y,Z) = \mathds{E}\left(X^G X'^G Y^G Y'^G Z^G Z'^G\right)$ (cf.~Section~\ref{sub:gaussian}) where $G$ is a Brownian motion. This approach is dismissed in \cite{BakiSzek2011} since it does not characterize the independence of $X,Y,Z$.
\smallskip b)
The quantity
\begin{equation}\label{eq:BS_int}
\int_{\mathds{R}^d} \left|\mathds{E} \left[\mathrm{e}^{\mathrm{i} (\scalp{X}{t_1} + \scalp{Y}{t_2} + \scalp{Z}{t_3})}\right] - f_{X}(t_1) f_{Y}(t_2) f_{Z}(t_3)\right|^2 \rho(\mathrm{d}t_1, \mathrm{d}t_2, \mathrm{d}t_3);
\end{equation}
this should be compared with the similar, yet different, expression \eqref{eq:M_rho_L2}. Bakirov and Sz\'ekely dismiss this approach, since the integral can become infinite if $Z\equiv 0$, even if $X$ and $Y$ are bounded and independent; note that in this case the three random variables $X,Y,Z$ are actually independent.
\smallskip c)
The (bivariate) distance covariance of $U \sim \mathcal{L}(X,Y,Z)$ and $V \sim \mathcal{L}(X) \otimes \mathcal{L}(Y) \otimes \mathcal{L}(Z)$. Bakirov and Sz\'ekely recommend to use this approach, since it is able to detect independence of $X,Y,Z$, but they do not follow up this approach with a deeper discussion.
\medskip
Comparing with our results, let us add a few comments. The approach a) is equivalent to the calculation of distance multivariance $M_\rho(X,Y,Z)$ (based on Euclidean distance), by Theorem~\ref{thm:MequalG}. Consistent with the remarks of \cite{BakiSzek2011}, distance multivariance cannot characterize independence, cf.~Theorem~\ref{thm:independence}. It serves, however, as a building block of \emph{total distance multivariance}, which \emph{does} characterize independence.
If $Z\equiv 0$, the expression \eqref{def:multivariance} is zero, i.e.\ it does not suffer from the particular integrability problems as \eqref{eq:BS_int}. However, under certain conditions, it coincides with \eqref{eq:BS_int}, see Corollary~\ref{cor:mInd-charfun}.
Compared with c), our approach has the advantage that both distance multivariance and total distance multivariance have a very simple and efficient finite-sample representation, which retains all the benefits of the bivariate distance covariance, cf.~Theorem~\ref{thm:sample}. Also the asymptotic properties of the estimators are similar to the bivariate case, cf.~Theorems~\ref{thm:Mdistconv}, \ref{thm:Mdconv} and Section~4
in \cite{part1}.
\subsection{Comparison with $\mathrm{dHSIC}$} \label{sec:dhsic}
The multivariate Hilbert-Schmidt independence criterion ($\mathrm{dHSIC}$) was recently introduced in \cite{PfisBuehSchoPete2017}. Using our notation, $\mathrm{dHSIC}$ is given by
\begin{equation}\label{eq:dHSIC}\begin{aligned}
\mathrm{dHSIC}(X_1,\ldots,X_n)
&:= \mathds{E}\left[ \prod_{i=1}^n k_i(X_i,X_i')\right] + \prod_{i=1}^n \mathds{E}\left[k_i(X_i,X_i')\right] \\
&\qquad\mbox{}- 2 \mathds{E}\left[\prod_{i=1}^n \mathds{E}\left[ k_i(X_i,X_i') \mid X_i\right]\right],
\end{aligned}\end{equation}
where the $k_i$ are continuous, bounded, characteristic, positive semidefinite kernels on $\mathds{R}^{d_i}$. Here, a kernel $k(x,y)$ is said to be characteristic, if
\begin{gather*}
\mu \mapsto \Pi(\mu) = \int k(x,\cdot) \mu(dx)
\end{gather*}
from the finite Borel measures to a suitable Hilbert space is an injective map, see \cite[Section 2.1]{PfisBuehSchoPete2017} for details.
Note that any continuous negative definite function $\psi_i$ gives rise to a continuous positive semidefinite kernel under the correspondence
\begin{equation}\label{eq:psi_to_kernel}
k_i(x,y) = \psi_i(x) + \psi_i(y) - \psi_i(y-x),
\end{equation}
see \cite{SejdSripGretFuku2013}. In the bivariate case ($n=2$) it is shown in \cite{SejdSripGretFuku2013} that $\mathrm{dHSIC}$ is equivalent to distance covariance with (quasi-)distance $\psi_i$. This raises the question whether the equivalence of $\mathrm{dHSIC}$ and (total) distance multivariance still holds in the case $n > 2$. Numerical experiments readily show that they are not identical, at least not under the correspondence \eqref{eq:psi_to_kernel}. Nevertheless, the experiments show a strong positive association between $\mathrm{dHSIC}$ and total multivariance. Clarifying the exact nature of this association remains an open question, but we present the following related result: Given the marginal distributions $\mathcal{L}(X_1), \dots, \mathcal{L}(X_n)$, we can find kernels $k_i$, \emph{depending on these distributions}, such that $\mathrm{dHSIC}$ coincides formally with (total) distance multivariance on the random vector $(X_1, \dots, X_n)$. Note that, in general, these kernels are unbounded and their sample versions depend on all samples; thus they are beyond the restrictions imposed in \cite{PfisBuehSchoPete2017}.
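For concreteness, the correspondence \eqref{eq:psi_to_kernel} is easy to verify numerically. The following Python sketch (illustration only; it uses the Euclidean cndf $\psi(x)=|x|$ and $50$ random points in $\mathds{R}^3$) checks that the resulting kernel matrix is positive semidefinite up to numerical error:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
pts = rng.normal(size=(50, 3))                     # 50 random points in R^3

psi = lambda v: np.linalg.norm(v, axis=-1)         # cndf psi(x) = |x|
K = (psi(pts)[:, None] + psi(pts)[None, :]
     - psi(pts[:, None, :] - pts[None, :, :]))     # k(x,y) = psi(x)+psi(y)-psi(x-y)
print(np.linalg.eigvalsh(K).min())                 # >= 0 up to rounding error
\end{verbatim}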
\begin{proposition}
Let $X_1, \dots, X_n$ satisfy one of the moment conditions of Definition~\ref{def:moment} and define the kernels
\begin{equation}\label{eq:kernel_lambda}\begin{aligned}
k^\lambda_i(x_i,x_i')
&:= -\psi_i(x_i-x_i') + \mathds{E}(\psi_i(x_i - X_i')) \\
&\qquad\mbox{}+ \mathds{E}(\psi_i(X_i - x_i')) - \mathds{E}(\psi_i(X_i-X_i')) + \lambda,
\end{aligned}\end{equation}
where $\lambda \ge 0$ and write $\mathrm{dHSIC}^\lambda$ for the corresponding quantity defined in \eqref{eq:dHSIC}. Then
\begin{align}
\mathrm{dHSIC}^0(X_1, \dots, X_n) &= M_\rho^2(X_1, \dots, X_n),\\
\mathrm{dHSIC}^1(X_1, \dots, X_n) &= \overline M_\rho^2(X_1,\dots,X_n).
\end{align}
The kernel $k^0_i$ is not characteristic in the sense of \textup{\cite[Section 2.1]{PfisBuehSchoPete2017}}.
\end{proposition}
\begin{proof}
Observe that $\mathds{E} \left[k^\lambda_i(X_i,X_i')\right] = \mathds{E}\left[ k^\lambda_i(X_i,X_i') \mid X_i\right] = \lambda$, so that \eqref{eq:dHSIC} simplifies to
\begin{gather*}
\mathrm{dHSIC}^\lambda(X_1,\ldots,X_n) = \mathds{E}\left[ \prod_{i=1}^n \left(\lambda + k^0_i(X_i,X_i')\right)\right] - \lambda^n.
\end{gather*}
This is equal to \eqref{eq:Mprod_center} for $\lambda = 0$ and to \eqref{eq:Mprod_center_total} for $\lambda = 1$. It remains to show that $k_i^0$ is not characteristic. To this end, denote by $\mu_i$ the distribution of $X_i$. Then
\begin{gather*}
\Pi(\mu_i)(y) = \int k^\lambda_i(x,y)\, \mu_i(dx) = \mathds{E} \left[k^\lambda_i(X_i,y)\right] = \lambda.
\end{gather*}
If $\lambda = 0$, then $\Pi(\mu_i) = 0 = \Pi(\bm{0})$, where $\bm{0}$ is the measure of mass zero. This shows that $\Pi$ is not injective, and therefore that $k_i^0$ is not characteristic.
\end{proof}
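For readers who wish to reproduce the numerical comparison alluded to above, the following NumPy sketch evaluates \eqref{eq:dHSIC} as a plug-in ($V$-statistic) with kernels obtained from a continuous negative definite function via \eqref{eq:psi_to_kernel}. All function and variable names are ours and purely illustrative; this is only a sketch and not the estimator implementation of \cite{PfisBuehSchoPete2017}.
\begin{verbatim}
import numpy as np

def kernel_from_psi(x, psi):
    # Kernel matrix K[j,k] = psi(x_j) + psi(x_k) - psi(x_k - x_j),
    # i.e. the correspondence k(x,y) = psi(x) + psi(y) - psi(y - x)
    # for a continuous negative definite function psi with psi(0) = 0.
    x = x.reshape(len(x), -1)
    p = psi(x)                               # psi(x_j), shape (N,)
    d = psi(x[None, :, :] - x[:, None, :])   # psi(x_k - x_j), shape (N, N)
    return p[:, None] + p[None, :] - d

def dhsic(samples, psi=None):
    # Plug-in (V-statistic) evaluation of the dHSIC formula; 'samples' is a
    # list of (N, d_i) arrays holding the n marginals of the N observations.
    if psi is None:
        psi = lambda z: np.linalg.norm(z, axis=-1)   # Euclidean distance
    N = len(samples[0])
    term1 = np.ones((N, N))      # entrywise product of all kernel matrices
    term2 = 1.0                  # product of the grand means
    term3 = np.ones(N)           # product of the row means
    for x in samples:
        K = kernel_from_psi(x, psi)
        term1 *= K
        term2 *= K.mean()
        term3 *= K.mean(axis=1)
    return term1.mean() + term2 - 2.0 * term3.mean()
\end{verbatim}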
\section{Statistical properties of distance multivariance}\label{sec:stats}
\subsection{Sample distance multivariance} \label{sec:sampleM}
We now consider a sample of $N$ observations $(\bm{x}^{(1)}, \dots, \bm{x}^{(N)})$ of the random vector $\bm{X} = (X_1, \dots, X_n)$. Every observation $\bm{x}^{(j)}$ is a vector in $\mathds{R}^d$, $d= d_1 + \dots + d_n$, of the form $\bm{x}^{(j)} = \left(x_1^{(j)}, \dots, x_n^{(j)} \right)$, with each $x_i^{(j)}$ in $\mathds{R}^{d_i}$. Given such a sample, we denote by $(\hat X^{}_1,\dots,\hat X^{}_n)$ the random vector with the corresponding empirical distribution. Evaluating distance multivariance at this vector, we obtain the sample distance multivariance
\begin{gather*}
\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}^2_\rho(\bm{x}^{(1)},\dots,\bm{x}^{(N)})
:= M^2_\rho(\hat X_1^{},\dots,\hat X_n^{}),
\end{gather*}
which turns out to have a surprisingly simple representation.
Recall that the Hadamard (or Schur) product of two matrices $A,B\in\mathds{R}^{N\times N}$ is the $N\times N$-matrix $A\circ B$ with entries $(A\circ B)_{jk} = A_{jk}B_{jk}$.
\begin{theorem}\label{thm:sample}
Let $(\bm{x}^{(1)}, \dots, \bm{x}^{(N)})$ be a sample of size $N$.
\smallskip\textup{a)}
The sample distance multivariance can be written as
\begin{equation} \label{eq:empGC}\begin{aligned}
\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}^2_\rho(\bm{x}^{(1)},\dots, \bm{x}^{(N)})
&= \frac{1}{N^2} \sum_{j,k=1}^N (A_1\circ \ldots \circ A_n)_{jk}\\
&= \frac{1}{N^2} \sum_{j,k=1}^N (A_1)_{jk} \cdot \ldots \cdot (A_n)_{jk};
\end{aligned}\end{equation}
here, $A_i := -CB_iC$, where $B_i = \left(\psi_i\big(x_i^{(j)}-x_i^{(k)}\big)\right)_{j,k = 1,\dots,N}$ is the matrix of pairwise $\psi_i$-distances and $C = I - \tfrac{1}{N}\mathds{1}$ is the centering matrix; here $\mathds{1}$ denotes the $N\times N$ matrix with all entries equal to $1$.
\smallskip\textup{b)}
The sample total distance multivariance can be written as
\begin{equation} \label{eq:empTGC}
\mbox{}^{\scriptscriptstyle N}\kern-2.5pt\overline{M}^2_\rho(\bm{x}^{(1)},\dots, \bm{x}^{(N)}) = \left[\frac{1}{N^2} \sum_{j,k=1}^N (1+(A_1)_{jk}) \cdot \ldots \cdot (1+ (A_n)_{jk})\right] - 1.
\end{equation}
\end{theorem}
\begin{remark}\label{rem:sample}
\ \ a) \
If $n$ is even, then $A_i$ can be replaced by $-A_i.$
This explains the different sign used in the case $n=2$, cf.\ \cite[Def.~3]{SzekRizz2009}
and \cite[Lem.~4.2, Rem.~4.3]{part1}.
If $n=2$, then $\sum_{j,k=1}^N (A_1 \circ A_2)_{jk} = \trace(A_2^\top A_1)$ and the generalized sample distance covariance from \cite[Sec.~4]{part1} is recovered. If in addition $\psi_i(x) = |x|$, i.e.~the Euclidean distance, then we get the sample distance covariance of Sz\'ekely \emph{et al.} \cite{SzekRizzBaki2007, SzekRizz2009}.
\smallskip b) \
Since the $\psi_i$ are continuous negative definite functions, the matrices $-B_i$ are conditionally positive definite matrices, i.e.\ $-\lambda^\top B_i \lambda \geq 0$ for all non-zero $\lambda$ in $\mathds{R}^N$ with $\lambda_1 + \dots + \lambda_N = 0$.
As double centerings of conditionally positive definite matrices, the matrices $A_i$ are positive semidefinite. By Schur's theorem, the $n$-fold Hadamard product of positive semidefinite matrices is again positive semidefinite, see Berg and Forst \cite[Lem.~3.2]{BerFor}. Since the sum of all entries of a positive semidefinite matrix is non-negative, this gives a simple explanation as to why $\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_\rho^2$ is always a non-negative number.
\smallskip c) \
Important special cases are when the $\psi_i$ are chosen as Euclidean distance, or as Minkowski distances. In these cases, each $B_i$ is a distance matrix. In general, $B_i$ need not be a distance matrix, since only $\sqrt{\psi_i}$, but not necessarily $\psi_i$ itself, defines a distance. Still, $\psi_i$ always defines a quasi-metric, i.e.~a metric with a relaxed triangle inequality, cf.~\cite[Sec.~2]{part1}.
\smallskip d) \
Even though total distance multivariance is defined as the sum of the multivariances of all $2^n - 1 - n$ subfamilies of $\{X_1, \dots, X_n\}$ with at least two members, cf.~\eqref{def:total_multivariance}, its empirical version \eqref{eq:empTGC} has a computational complexity of only $\mathcal{O}(nN^2)$.
\smallskip e) \
The row- and column sums of each $A_i$ are zero. This is a consequence of the double centering $A_i = -C B_i C$. \label{item:zero}
\smallskip f) \
Equation \eqref{eq:empGC} is a direct analogue of the representation \eqref{eq:Mprod_center}, when the centering operator is replaced by the centering matrix. The same is true for \eqref{eq:empTGC} in relation to \eqref{eq:Mprod_center_total}.
\end{remark}
\begin{proof}[Proof of Theorem~\ref{thm:sample}]
Since the support of the empirical distribution is finite, the moment conditions of Definition~\ref{def:moment} are trivially satisfied. Therefore, we can use the representation \eqref{eq:Mprod_center} to get
\begin{align}
\notag
\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}^2_\rho&(\bm{x}^{(1)},\dots,\bm{x}^{(N)})
= M^2_\rho(\hat X_1^{},\dots,\hat X_n^{})\\
\notag&= \mathds{E}\Bigg( \prod_{i=1}^n \Big[-\psi_i(\hat X^{}_{i}-\hat X'^{}_{i})
+ \mathds{E}\left(\psi_i\left(\hat X^{}_{i}-\hat X'^{}_{i}\right) \;\middle|\; \hat X^{}_i\right)\\
\label{eq:sample_interm}&\qquad\qquad\qquad\mbox{}
+ \mathds{E}\left(\psi_i\left(\hat X^{}_{i}-\hat X'^{}_{i}\right) \;\middle|\; \hat X'^{}_i\right)
- \mathds{E} \psi_i\left(\hat X^{}_{i}-\hat X'^{}_{i}\right)\Big]\Bigg)\\
\notag&= \frac{1}{N^2}\sum_{j,k=1}^N \Bigg( \prod_{i=1}^n \Big[ -\psi_i\left(x_i^{(j)}-x_i^{(k)}\right)
+ \mathds{E}\left(\psi_i\left(\hat X_i^{}-\hat X_i'^{}\right) \;\middle|\; \hat X_i^{} = x_i^{(j)}\right)\\
\notag&\qquad\qquad\qquad\mbox{}
+ \mathds{E}\left(\psi_i\left(\hat X_i^{}-\hat X_i'^{}\right) \;\middle|\; \hat X_i'^{} = x_i^{(k)}\right) - \mathds{E} \psi_i\left(\hat X_i^{}-\hat X_i'^{}\right) \Big]\Bigg).
\end{align}
Denoting by $\mathds{1}_N$ the column vector consisting of $N$ ones, we can rewrite the individual terms in \eqref{eq:sample_interm} as
\begin{subequations}\label{eq:sample_dm}
\begin{align}
\psi_i \left(x_i^{(j)} - x_i^{(k)}\right) &= (B_i)_{jk}\\
\mathds{E}\left(\psi_i \left(\hat X_i - \hat X_i'\right) \;\middle|\; \hat X_i = x_i^{(j)}\right) &= \frac{1}{N} \sum_{l=1}^N (B_i)_{jl} = \frac{1}{N} \left(\mathds{1}_N^\top B_i\right)_j \\
\mathds{E}\left(\psi_i \left(\hat X_i - \hat X_i'\right) \;\middle|\; \hat X'_i = x_i^{(k)}\right) &= \frac{1}{N} \sum_{m=1}^N (B_i)_{mk} = \frac{1}{N} \left(B_i \mathds{1}_N\right)_k \\
\mathds{E}\psi_i \left(\hat X_i - \hat X_i'\right) &= \frac{1}{N^2} \sum_{l,m=1}^N (B_i)_{ml} = \frac{1}{N^2}\, \mathds{1}_N^\top B_i \mathds{1}_N.
\end{align}
\end{subequations}
This shows that each factor on the right hand side of \eqref{eq:sample_interm} is the $(j,k)$-th entry of the matrix $A_i = -C B_i C$, and \eqref{eq:empGC} follows. The
representation \eqref{eq:empTGC} can be derived in complete analogy from \eqref{eq:Mprod_center_total}.
\end{proof}
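To make the matrix representation of Theorem~\ref{thm:sample} concrete, the following minimal NumPy sketch computes \eqref{eq:empGC} and \eqref{eq:empTGC} by double centering the $\psi_i$-distance matrices and forming Hadamard products. All names are ours and purely illustrative; a full implementation is provided by the R package \texttt{multivariance} \cite{Boett2017R-1.0.5}.
\begin{verbatim}
import numpy as np

def doubly_centred(B):
    # A = -C B C with the centering matrix C = I - (1/N) * (matrix of ones).
    N = B.shape[0]
    C = np.eye(N) - np.ones((N, N)) / N
    return -C @ B @ C

def sample_multivariance(samples, psi=None):
    # Squared sample distance multivariance: (1/N^2) * sum_{j,k} prod_i (A_i)_{jk}.
    # 'samples' is a list of (N, d_i) arrays, one per marginal X_i.
    if psi is None:
        psi = lambda z: np.linalg.norm(z, axis=-1)   # Euclidean distance
    A = 1.0
    for x in samples:
        x = x.reshape(len(x), -1)
        B = psi(x[:, None, :] - x[None, :, :])       # pairwise psi-distances B_i
        A = A * doubly_centred(B)                    # Hadamard product of the A_i
    return A.mean()

def sample_total_multivariance(samples, psi=None):
    # Squared sample total distance multivariance:
    # (1/N^2) * sum_{j,k} prod_i (1 + (A_i)_{jk}) - 1.
    if psi is None:
        psi = lambda z: np.linalg.norm(z, axis=-1)
    P = 1.0
    for x in samples:
        x = x.reshape(len(x), -1)
        B = psi(x[:, None, :] - x[None, :, :])
        P = P * (1.0 + doubly_centred(B))
    return P.mean() - 1.0
\end{verbatim}
The cost is dominated by setting up the $n$ distance matrices, in line with the $\mathcal{O}(nN^2)$ complexity noted in Remark~\ref{rem:sample}.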
\subsection{Estimating distance multivariance}
In this section we examine the properties of the sample distance multivariance $\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_\rho$ as an estimator of $M_\rho$. The corresponding results for the sample total distance multivariance will be presented in the next section.
\begin{theorem}[$\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_\rho$ is a strongly consistent estimator for $M_\rho$]\label{thm:sconsistent}
Let one of the moment conditions of Definition~\ref{def:moment} be satisfied. Then $\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_\rho$ is a strongly consistent estimator of $M_\rho$, i.e.
\begin{equation}
\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_\rho(\bm{X}^{(1)},\dots,\bm{X}^{(N)}) \xrightarrow[N\to \infty]{} M_\rho(X_1,\dots,X_n) \quad\text{a.s.}
\end{equation}
\end{theorem}
\begin{proof}
Inserting the representation \eqref{eq:sample_dm} into \eqref{eq:sample_interm}, we see that $\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_\rho$ is a $V$-statistic. Thus the convergence of the estimator $\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_\rho$ is just the strong law of large numbers for $V$-statistics.
\end{proof}
\begin{remark}
In the case of $n=2$ strong consistency can be obtained under the weaker moment condition $\mathds{E}\psi_i(X_i)<\infty$ for $i=1,2$, see \cite[Thm.~4.4]{part1}.
For $n \geq 3$ the arguments used in \cite{part1} break down. However, we show a weak consistency result under independence and relaxed moment conditions in Corollary~\ref{cor:wconsistent} below.
\end{remark}
The next result is our main result on the asymptotic properties of the estimator $\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_\rho$. The proof is technical and relegated to the supplement \cite{part2supp}.
\begin{theorem}[Asymptotic distribution of $\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_\rho$]\label{thm:Mdistconv}\mbox{}
\smallskip\textup{a)}
Let $X_1,\dots,X_n$ be independent random variables with $\mathds{E}\psi_i(X_i)<\infty$ and $\mathds{E} \left[\log^{1+\epsilon}(1+|X_i|^2)\right] <\infty$ for some $\epsilon>0$ and all $i=1,\dots, n$. Then
\begin{equation}\label{eq:G_conv}
N \cdot \mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_\rho^2(\bm{X}^{(1)},\dots,\bm{X}^{(N)}) \xrightarrow[N\to \infty]{d} \|\mathds{G}\|_{L^2(\rho)}^2
\end{equation}
where $\mathds{G}$ is a centred, i.e.~$\mathds{E} \mathds{G}(t) =0$, $\mathds{C}$-valued Gaussian process indexed by $\mathds{R}^{d}$ with covariance function
\begin{equation}\label{eq:G_cov}
\Cov(\mathds{G}(t),\mathds{G}(t'))
= \mathds{E}\big[\mathds{G}(t)\overline{\mathds{G}(t')}\big]
= \prod_{i=1}^n \left(f_{X_i}(t_i-t_i')-f_{X_i}(t_i) \overline{f_{X_i}(t_i')}\right).
\end{equation}
\smallskip\textup{b)}
Suppose that the random variables $X_1, \dots, X_n$ are $(n-1)$-in\-de\-pen\-dent, but not $n$-in\-de\-pen\-dent and that one of the moment conditions of Definition~\ref{def:moment} holds. Then
\begin{equation}\label{eq:Mdistconv_b}
N \cdot \mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}^2_\rho(\bm{X}^{(1)},\dots,\bm{X}^{(N)}) \xrightarrow[N\to \infty]{} \infty \quad\text{a.s.}
\end{equation}
\end{theorem}
\begin{remark}
\ \ a) \
The complex-valued Gaussian process $\mathds{G}$ has to be distinguished from the Gaussian processes $G_i$ that appear in Definition~\ref{def:gaussian} of the Gaussian multivariance.
\smallskip b) \
Using the results of \cite{Csoe1985}, the $\log$-moment condition in a) can be relaxed to a weaker (but more involved) integral test, cf.~\cite[Condition~($\star$)]{Csoe1985}.
From \cite[Lem.~2.7]{part1} it is readily seen that the log-moment condition in Thm.~\ref{thm:Mdistconv}.a) is equivalent to $\mathds{E}\left[\log^{1+\epsilon}\left(1\vee\sqrt{|X_1|^2+\dots+|X_n|^2}\right)\right]<\infty$.
\smallskip c) \
The expectation of the limit in \eqref{eq:G_conv} can be calculated as
\begin{equation}\label{eq:EG}
\mathds{E}(\|\mathds{G}\|_{L^2(\rho)}^2)
= \prod_{i=1}^n \int_{\mathds{R}^{d_i}} \left(1-|f_{X_i}(t_i)|^2\right) \rho_i(d t_i)
= \prod_{i=1}^n \mathds{E} \psi_i(X_i-X_i').
\end{equation}
\smallskip d) \
From Lemma~\ref{lem:ZN} in the supplement \cite{part2supp} it can be seen that $\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_\rho$ is a biased estimator of $M_\rho$, since in the case of non-degenerate and independent random variables
\begin{equation*}
\mathds{E} \Big[\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}^2_\rho(\bm{X}^{(1)},\dots,\bm{X}^{(N)})\Big]
= \frac{(N-1)^n+(-1)^n (N-1)}{N^{n+1}} \prod_{i=1}^n \mathds{E} \psi_i(X_i-X_i') > 0,
\end{equation*}
while $M_\rho^2(X_1,\dots,X_n)=0$. For bivariate distance covariance, this bias has already been discussed by Cope \cite{Cope2009} and Sz\'ekely and Rizzo \cite{SzekRizz2009a}.
\end{remark}
Finally, we present a weak consistency result for $\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_\rho$ under independence, which holds under milder moment conditions than the strong consistency result Theorem~\ref{thm:sconsistent}.
\begin{corollary}\label{cor:wconsistent}
Suppose that $X_1,\dots,X_n$ are independent random variables with $\mathds{E}\psi_i(X_i)<\infty$ and $\mathds{E}\left[\log^{1+\epsilon}(1+|X_i|^2)\right]<\infty$ for some $\epsilon>0$ and all $i=1,\dots, n$. Then
\begin{equation}
\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_\rho(\bm{X}^{(1)},\dots,\bm{X}^{(N)}) \xrightarrow[N\to \infty]{} 0 \quad\text{in probability.}
\end{equation}
\end{corollary}
\begin{proof}
The corollary is a direct consequence of Theorem \ref{thm:Mdistconv} and the observation that
\begin{equation*}
N Z_N \xrightarrow{d} Z
\implies Z_N \xrightarrow{d} 0
\implies Z_N \xrightarrow{\mathds{P}} 0;
\end{equation*}
the second implication follows since the limit in distribution is degenerate.
\end{proof}
\subsection{Estimating total distance multivariance}\label{sec:estimate_total}
To simplify notation we write $\rho_S = \bigotimes_{i\in S}\rho_i$. Recall that
\begin{equation}\label{eq:totalms}
\mbox{}^{\scriptscriptstyle N}\kern-2.5pt\overline{M}_\rho^2(\bm{X}^{(1)},\dots,\bm{X}^{(N)})
= \sum_{\substack{S\subset \{1,\dots,n\}\\ |S|\geq 2}} \mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_{\rho_S}^2(\bm{X}^{(1)},\dots,\bm{X}^{(N)}).
\end{equation}
Note that $M_{\rho_S}$ depends only on the random variables $(X_i, i\in S)$, i.e.\ $M_{\rho_S}= M_{\rho_S}(X_i, i\in S)$. This means that the sample version $\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_{\rho_S} = \mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_{\rho_S}(\bm{X}^{(1)},\dots,\bm{X}^{(N)})$ is computed only from the $S$-coordinates of the samples $\bm{X}^{(1)},\dots,\bm{X}^{(N)}$. The results of this section are mostly direct consequences of the results of the previous section (replacing $M_\rho$ by $M_{\rho_S}$ and $\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_\rho$ by $\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_{\rho_S}$).
\begin{corollary}[$\mbox{}^{\scriptscriptstyle N}\kern-2.5pt\overline{M}_\rho$ is a strongly consistent estimator of $\overline M_\rho$]\label{cor:tMsconsistent}
Assume that one of the moment conditions of Definition~\ref{def:moment} is satisfied. Then
\begin{equation}
\mbox{}^{\scriptscriptstyle N}\kern-2.5pt\overline{M}_\rho(\bm{X}^{(1)},\dots,\bm{X}^{(N)}) \xrightarrow[N\to \infty]{} \overline{M}_\rho(X_1,\dots,X_n) \text{ a.s.}
\end{equation}
\end{corollary}
\begin{proof}
Apply Theorem \ref{thm:sconsistent} to each $M_{\rho_S}$ in \eqref{eq:totalms}.
\end{proof}
\begin{corollary}\label{cor:tMwconsistent}
Let $X_1,\dots,X_n$ be independent random variables with $\mathds{E}\psi_i(X_i)<\infty$ and $\mathds{E}\left[\log^{1+\epsilon}(1+|X_i|^2) \right]<\infty$ for some $\epsilon>0$ and all $i=1,\dots, n$. Then
\begin{equation}
\mbox{}^{\scriptscriptstyle N}\kern-2.5pt\overline{M}_\rho(\bm{X}^{(1)},\dots,\bm{X}^{(N)}) \xrightarrow[N\to \infty]{} 0 \quad\text{in probability}.
\end{equation}
\end{corollary}
\begin{proof}
Apply Corollary \ref{cor:wconsistent} to each $M_{\rho_S}$ in \eqref{eq:totalms}.
\end{proof}
The next theorem is the analogue of the convergence result Theorem~\ref{thm:Mdistconv}. For each $S\subset\{1,\dots,n\}$, we denote by $\mathds{G}_S$ the centred Gaussian process
\begin{equation}
\mathds{G}_S(t_S) := \sum_{R\subset S} (-1)^{|S|-|R|} \int\mathrm{e}^{\mathrm{i} \scalp{x_R}{t_R}}\,\mathrm{d}B(x) \cdot \prod_{\smash{j} \in S\backslash R} f_j(t_j),
\end{equation}
cf.~\eqref{eq:conv_to_G} in the supplement \cite{part2supp}, indexed by $t_S \in \bigtimes_{i\in S} \mathds{R}^{d_i}$, and where $B$ is the Brownian bridge from \eqref{eq:convBB} in the supplement \cite{part2supp}. Applying Theorem~\ref{thm:Mdistconv} with $\{1, \dots, n\}$ replaced by $S$, we see that $\mathds{G}_S$ has covariance structure
\begin{equation}\label{eq:GS_cov}
\mathds{E}(\mathds{G}_S(t)\overline{\mathds{G}_S(t')}) = \prod_{i \in S} \left(f_{X_i}(t_i-t_i') - f_{X_i}(t_i)\overline{f_{X_i}(t_i')}\right).
\end{equation}
\begin{theorem}[Asymptotic distribution of $\mbox{}^{\scriptscriptstyle N}\kern-2.5pt\overline{M}_\rho$]\label{thm:Mdconv}\mbox{}
\smallskip\textup{a)}
Suppose that $X_1,\dots,X_n$ are independent with $\mathds{E}\psi_i(X_i)<\infty$ and $\mathds{E}\left[\log^{1+\epsilon}(1+|X_i|^2)\right] <\infty$ for some $\epsilon>0$ and all $i=1,\dots,n$. Then
\begin{equation}\label{eq:empMdist}
N\cdot\mbox{}^{\scriptscriptstyle N}\kern-2.5pt\overline{M}_\rho^2(\bm{X}^{(1)},\dots,\bm{X}^{(N)})
\xrightarrow[N\to \infty]{d} \sum_{\substack{S\subset \{1,\dots,n\}\\ |S| \geq 2}} \|\mathds{G}_S\|_{L^2(\rho_S)}^2.
\end{equation}
\smallskip\textup{b)}
Suppose that the random variables $X_1, \dots, X_n$ are not independent and that one of the moment conditions of Definition~\ref{def:moment} holds. Then
\begin{equation}
N\cdot\mbox{}^{\scriptscriptstyle N}\kern-2.5pt\overline{M}_\rho^2(\bm{X}^{(1)},\dots,\bm{X}^{(N)}) \xrightarrow[N\to\infty]{} \infty\quad\text{a.s.}
\end{equation}
\end{theorem}
\begin{remark}\label{rem:gaussian_qf}
Note that the processes $(\mathds{G}_S), S \subset \{1, \dots, n\}$ on the right hand side of \eqref{eq:empMdist} are \emph{jointly Gaussian}. Therefore, the limit appearing in \eqref{eq:empMdist} is a quadratic form of centred Gaussian random variables. This fact will be used in Subsection~\ref{sec:test} to construct a statistical test of (multivariate) independence. Further properties of the processes $\mathds{G}_S$ are discussed in \cite{BersBoet2018}.
\end{remark}
\begin{proof}[Proof of Theorem~\ref{thm:Mdconv}]
a) \ For any $S\subset \{1,\dots,n\}$ with $|S|\geq 2$, we know from Theorem~\ref{thm:Mdistconv} that
\begin{equation*}
N\cdot\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_{\rho_S}^2(\bm{X}^{(1)},\dots, \bm{X}^{(N)})
\xrightarrow[N\to\infty]{d} \|\mathds{G}_S\|_{L^2(\rho_S)}^2.
\end{equation*}
Since all processes $\mathds{G}_S$ are driven by the same Brownian bridge $B$, cf.~\eqref{eq:convBB} in the supplement \cite{part2supp}, the convergence holds jointly in $S$, and \eqref{eq:empMdist} follows.
\smallskip
b) \ By Corollary \ref{cor:tMsconsistent} we have $\mbox{}^{\scriptscriptstyle N}\kern-2.5pt\overline{M}_\rho \to \overline{M}_\rho$ almost surely. Moreover, $\overline{M}_\rho > 0$ by Theorem \ref{thm:independence}, since the random variables $(X_1, \dots, X_n)$ are not independent. Thus, $N\cdot\mbox{}^{\scriptscriptstyle N}\kern-2.5pt\overline{M}_\rho^2 \to \infty$ almost surely.
\end{proof}
\subsection{Normalizing and scaling distance multivariance}\label{sub:scaling}
With practical applications in mind, there are at least two reasons to consider rescaled versions of (total) distance multivariance:
\begin{itemize}
\item To obtain a \emph{distance multicorrelation} whose
value is bounded by $1$ -- analogous to the distance correlation of Sz\'ekely, Rizzo and Bakirov \cite[Def.~3]{SzekRizzBaki2007};
\item To normalize the asymptotic distribution of the sample (total) distance multivariance under independence, cf. Theorem~\ref{thm:Mdistconv} and Theorem~\ref{thm:Mdconv}.
\end{itemize}
We will use normalized multivariances as test statistics in two tests for independence in Section~\ref{sec:test}. For the scaling constants we use the convention $0/0 := 0$ in the following; this ensures that degenerate (i.e.~constant) random variables are also covered.
\subsubsection*{Distance multicorrelation}
\begin{definition}
Let $X_1, \dots, X_n$ be random variables with
$\mathds{E} \psi_i^n(X_i) < \infty$ for all $i=1,\dots, n$. We set
\begin{align*}
&a_i
:= \norm{\Ce_{X_i} \Ce_{X_i'}\psi_i(X_i - X_i')}_{L^n(\mathds{P})}
\end{align*}
and define \emph{distance multicorrelation} as
\begin{equation}
\mathcal{R}^2_\rho(X_1, \dots, X_n) := \frac{M^2_\rho(X_1, \dots, X_n)}{a_1 \cdot \ldots \cdot a_n}.
\end{equation}
\end{definition}
For the sample version of distance multicorrelation, we define
\begin{equation}
\mbox{}^{\scriptscriptstyle N}\kern-1.5pt a_i
:= \mbox{}^{\scriptscriptstyle N}\kern-1.5pt a_i( \bm{x}^{(1)}, \dots, \bm{x}^{(N)})
= \bigg(\frac{1}{N^2} \sum_{k,l=1}^N |(A_i)_{kl}|^n \bigg)^{1/n},
\end{equation}
where the $A_i$ are the doubly centred matrices from Theorem~\ref{thm:sample}, and set
\begin{equation}
\mbox{}^{\scriptscriptstyle N}\kern-1.5pt{\mathcal{R}}^2_\rho(\bm{x}^{(1)}, \dots, \bm{x}^{(N)})
:= \frac{1}{N^2} \sum_{k,l=1}^N \frac{(A_1)_{kl}}{\mbox{}^{\scriptscriptstyle N}\kern-1.5pt a_1} \cdot \ldots \cdot \frac{(A_n)_{kl}}{\mbox{}^{\scriptscriptstyle N}\kern-1.5pt a_n}.
\end{equation}
Note that $a_i = 0$ if and only if $X_i$ is degenerate; hence, by the convention $0/0 := 0$, $\mathcal{R}_\rho(X_1, \dots, X_n)$ is well-defined as a finite non-negative number.
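Continuing the NumPy sketch of Section~\ref{sec:sampleM}, the sample distance multicorrelation can be computed from the same doubly centred matrices $A_i$; again, the names are ours and the code is only illustrative.
\begin{verbatim}
import numpy as np

def sample_multicorrelation(samples, psi=None):
    # Squared sample distance multicorrelation:
    #   (1/N^2) * sum_{k,l} prod_i (A_i)_{kl} / a_i,
    # where a_i = ((1/N^2) * sum_{k,l} |(A_i)_{kl}|^n)^(1/n), with 0/0 := 0.
    if psi is None:
        psi = lambda z: np.linalg.norm(z, axis=-1)
    n = len(samples)
    prod = 1.0
    for x in samples:
        x = x.reshape(len(x), -1)
        B = psi(x[:, None, :] - x[None, :, :])   # pairwise psi-distances
        A = doubly_centred(B)                    # A_i = -C B_i C, see above
        a = (np.abs(A) ** n).mean() ** (1.0 / n)
        if a == 0:                               # degenerate marginal, 0/0 := 0
            return 0.0
        prod = prod * (A / a)
    return prod.mean()
\end{verbatim}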
\begin{proposition}\label{prop:dis-mult}
\textup{a)}
Distance multicorrelation and its sample version satisfy
\begin{equation}\label{eq:R_bound}
0 \leq \mathcal{R}_\rho(X_1, \dots, X_n) \leq 1
\quad\text{and}\quad
0 \leq \mbox{}^{\scriptscriptstyle N}\kern-1.5pt{\mathcal{R}}_\rho (\bm{x}^{(1)}, \dots, \bm{x}^{(N)}) \leq 1.
\end{equation}
\smallskip\textup{b)}
For iid copies $\bm{X}^{(1)}, \dots, \bm{X}^{(N)}$ of $\bm{X} = (X_1, \dots, X_n)$ it holds that
$$
\lim_{N \to \infty} \mbox{}^{\scriptscriptstyle N}\kern-1.5pt{\mathcal{R}}_\rho (\bm{X}^{(1)}, \dots, \bm{X}^{(N)})
= \mathcal{R}_\rho (X_1, \dots, X_n),
\quad\text{a.s.}
$$
\smallskip\textup{c)}
For $n=2$ and $\psi_1(x) = \psi_2(x) = |x|$ distance multicorrelation coincides with the distance correlation of \cite{SzekRizzBaki2007}.
\end{proposition}
\begin{remark}
Sz\'ekely and Rizzo \cite[Thm~4.(iv)]{SzekRizz2009} show for the case $n=2$ (i.e.\ for distance correlation) that $\mbox{}^{\scriptscriptstyle N}\kern-1.5pt{\mathcal{R}}_\rho(\bm{X}^{(1)}, \dots, \bm{X}^{(N)}) = 1$ implies that the sample points $(x_1^{(1)}, \dots, x_1^{(N)})$ and $(x_2^{(1)}, \dots,x_2^{(N)})$ can be transformed into each other by a Euclidean isometry composed with scaling by a non-negative number. An analogous result seems not to hold for distance multicorrelation in the case $n > 2$.
\end{remark}
\begin{proof}[Proof of Proposition~\ref{prop:dis-mult}]
By the generalized H\"older inequality for $n$-fold products (cf.~\cite[p.~133, Pr.~13.5]{schilling-mims}),
we have that
\begin{align*}
M^2_\rho(X_1, \dots, X_n)
&= \mathds{E}\left(\prod_{i=1}^n - \Ce_{X_i} \Ce_{X'_i} \psi_i(X_i - X'_i)\right)\\
&\leq \mathds{E}\left(\prod_{i=1}^n \left|\Ce_{X_i} \Ce_{X'_i} \psi_i(X_i - X'_i)\right|\right)\\
&\leq \prod_{i=1}^n \left\|\Ce_{X_i} \Ce_{X_i'}\psi_i(X_i - X_i')\right\|_{L^n(\mathds{P})} = a_1 \cdot \ldots \cdot a_n,
\end{align*}
and \eqref{eq:R_bound} follows.
For the convergence result, note that
\begin{equation}\label{eq:empEpsi}
\mbox{}^{\scriptscriptstyle N}\kern-1.5pt a_i^n
= \frac{1}{N^2} \sum_{k,l =1}^N \left|(A_i)_{kl} \right|^n \xrightarrow[N\to \infty]{}\mathds{E}\left[\left|\Ce_{X_i} \Ce_{X_i'}\psi_i(X_i-X_i')\right|^n\right]
= a_i^n
\end{equation}
by the law of large numbers for V-statistics, cf.~the proof of Theorem~\ref{thm:sconsistent}. Combining \eqref{eq:empEpsi} with Theorem~\ref{thm:sconsistent} proves b). Part c) follows from direct comparison with \cite{SzekRizz2009}.
\end{proof}
\subsubsection*{Normalized distance multivariance}
Alternatively, we can normalize distance multivariance in such a way that the limiting distribution under independence (cf.~Theorems~\ref{thm:Mdistconv} and \ref{thm:Mdconv}) has unit expectation.
\begin{definition}
Let $X_1, \dots, X_n$ be random variables with $\mathds{E}\psi_i(X_i) < \infty$ for all $i=1, \dots, n$, set
\begin{equation*}
b_i := \mathds{E} \psi_i(X_i - X_i')
\end{equation*}
and define \emph{normalized distance multivariance} as
\begin{equation}
\mathcal{M}^2_\rho(X_1, \dots, X_n) := \frac{M_\rho^2(X_1, \dots, X_n)}{b_1 \cdot \ldots \cdot b_n}.
\end{equation}
\end{definition}
For the sample version of normalized distance multivariance, we define
\begin{equation}
\mbox{}^{\scriptscriptstyle N}\kern-1.5pt b_i
:= \mbox{}^{\scriptscriptstyle N}\kern-1.5pt b_i( \bm{x}^{(1)}, \dots, \bm{x}^{(N)})
:= \frac{1}{N^2} \sum_{k,l=1}^N \psi_i\left(x_i^{(l)} - x_i^{(k)}\right)
= \frac{1}{N^2} \sum_{k,l=1}^N (B_i)_{kl},
\end{equation}
and set
\begin{equation}
\mbox{}^{\scriptscriptstyle N}\kern-2pt{\mathcal{M}}_\rho^2 (\bm{x}^{(1)}, \dots, \bm{x}^{(N)})
:= \frac{1}{N^2} \sum_{k,l=1}^N \frac{(A_1)_{kl}}{\mbox{}^{\scriptscriptstyle N}\kern-1.5pt b_1} \cdot \ldots \cdot \frac{(A_n)_{kl}}{\mbox{}^{\scriptscriptstyle N}\kern-1.5pt b_n}.
\end{equation}
\begin{corollary}\label{cor:Q_conv}
Suppose that $X_1,\dots,X_n$ are non-degenerate independent random variables with $\mathds{E}\psi_i(X_i)<\infty$ and $\mathds{E}\left[\log^{1+\epsilon}(1+|X_i|^2) \right]<\infty$ for some $\epsilon>0$ and all $i=1,\dots, n$. Then
\begin{equation}\label{eq:S_conv}
N \cdot \mbox{}^{\scriptscriptstyle N}\kern-2pt{\mathcal{M}}_\rho^2(\bm{X}^{(1)},\dots,\bm{X}^{(N)}) \xrightarrow[N\to \infty]{d} Q,
\end{equation}
where $Q = \|\mathds{G}\|_\rho^2 / (b_1 \cdot \ldots \cdot b_n)$ and $\mathds{E} Q = 1$.
\end{corollary}
\begin{proof}
This follows from Theorem \ref{thm:Mdistconv} in combination with
\begin{equation}\label{eq:empEpsi-2}
\mbox{}^{\scriptscriptstyle N}\kern-1.5pt b_i = \frac{1}{N^2} \sum_{k,l =1}^N \psi_i(X_i^{(k)}-X_i^{(l)}) \xrightarrow[N\to \infty]{}\mathds{E}\psi_i(X_i-X_i') = b_i,
\end{equation}
under the assumption $\mathds{E}\psi_i(X_i)<\infty$.
\end{proof}
It remains to find an analogous normalization for \emph{total} distance multivariance. For a subset $S \subset \{1, \dots, n\}$ define $M_{\rho_S}(X_1, \dots, X_n)$ as in Section~\ref{sec:estimate_total} and set $b_S = \prod_{i \in S} b_i$.
\begin{definition}
For the random variables $X_1, \dots, X_n$ we define the \emph{normalized total distance multivariance} as
\begin{equation}\label{eq:norm_total_DM}
\overline{\mathcal{M}}^2_\rho(X_1, \dots, X_n)
:= \frac{1}{2^n - 1 - n}\sum_{\substack{S\subset \{1,\dots,n\}\\ |S| \geq 2}}\frac{ M_{\rho_S}^2(X_i, i\in S)}{b_S}.
\end{equation}
Its sample version becomes
\begin{small}
\begin{align}
&\mbox{}^{\scriptscriptstyle N}\kern-1.5pt\overline{\mathcal{M}}^2_{\rho} (\bm{x}^{(1)}, \dots, \bm{x}^{(N)})\\
\notag &\quad:= \frac{1}{2^n - 1 - n} \left\{\frac{1}{N^2} \sum_{k,l=1}^N \left(1 + \frac{(A_1)_{kl}}{\mbox{}^{\scriptscriptstyle N}\kern-1.5pt b_1}\right) \cdot \ldots \cdot \left(1 + \frac{(A_n)_{kl}}{\mbox{}^{\scriptscriptstyle N}\kern-1.5pt b_n}\right) - 1\right\}.
\end{align}
\end{small}
\end{definition}
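The sample versions of the normalized and the normalized total distance multivariance can be sketched along the same lines, building on the (hypothetical) helper functions from Section~\ref{sec:sampleM}:
\begin{verbatim}
import numpy as np

def sample_normalized_multivariances(samples, psi=None):
    # Returns the squared sample normalized multivariance and the squared
    # sample normalized total multivariance; the scaling constant is
    # b_i = (1/N^2) * sum_{k,l} (B_i)_{kl}, with the convention 0/0 := 0.
    if psi is None:
        psi = lambda z: np.linalg.norm(z, axis=-1)
    n = len(samples)
    prod, prod_tot = 1.0, 1.0
    for x in samples:
        x = x.reshape(len(x), -1)
        B = psi(x[:, None, :] - x[None, :, :])
        A = doubly_centred(B)                  # see the sketch in Section 3.1
        b = B.mean()
        An = A / b if b > 0 else np.zeros_like(A)
        prod = prod * An
        prod_tot = prod_tot * (1.0 + An)
    M2 = prod.mean()
    Mbar2 = (prod_tot.mean() - 1.0) / (2 ** n - 1 - n)
    return M2, Mbar2
\end{verbatim}
After multiplication by the sample size $N$, the returned values are exactly the test statistics used in Section~\ref{sec:test} below.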
Similar to Corollary~\ref{cor:Q_conv}, we have the following result.
\begin{corollary}\label{cor:Q_conv_total}
Suppose that $X_1,\dots,X_n$ are non-degenerate independent random variables with $\mathds{E}\psi_i(X_i)<\infty$ and $\mathds{E}\left[\log^{1+\epsilon}(1+|X_i|^2) \right]<\infty$ for some $\epsilon>0$ and all $i=1,\dots, n$. Then
\begin{equation}\label{eq:S_conv_total}
N \cdot \mbox{}^{\scriptscriptstyle N}\kern-1.5pt\overline{\mathcal{M}}^2_\rho(\bm{X}^{(1)},\dots,\bm{X}^{(N)}) \xrightarrow[N\to \infty]{d} \overline{Q},
\end{equation}
where
$$
\overline{Q} = \frac{1}{2^n - n - 1} \sum_{\substack{S\subset \{1,\dots,n\}\\ |S| \geq 2}} \frac{\|\mathds{G}_S\|_{L^2(\rho_S)}^2}{b_S}
\quad\text{and}\quad
\mathds{E} \overline{Q} = 1.
$$
\end{corollary}
\begin{proof}
Convergence follows from Theorem~\ref{thm:Mdconv}. Note that the sum runs over $2^n - n - 1$ subsets and $\mathds{E}\left[\|\mathds{G}_S\|_{L^2(\rho_S)}^2\right] = b_S$ by \eqref{eq:EG}, applied with $\{1,\dots,n\}$ replaced by $S$.
\end{proof}
\subsection{Two tests for independence}\label{sec:test}
Based on the normalized multivariance statistics $\mathcal{M}_\rho$ and $\overline{\mathcal{M}}_\rho$ and the convergence results of Corollaries~\ref{cor:Q_conv} and \ref{cor:Q_conv_total}, we can formulate two statistical tests for the independence of the random variables $X_1,\dots, X_n$. To obtain a critical value for the test statistics, we use the same approach as Sz\'ekely and Rizzo \cite{SzekRizz2009}: Both limiting random variables $Q$ and $\overline{Q}$ are quadratic forms of centred Gaussian random variables, normalized to $\mathds{E} Q = \mathds{E}\overline{Q} = 1$. Hence, by \cite[p.~181]{SzekBaki2003},
\begin{equation} \label{eq:Qalpha}
\mathds{P}(Q \ge \chi_{1-\alpha}^2(1)) \le \alpha
\quad\text{and}\quad
\mathds{P}(\overline{Q} \ge \chi_{1-\alpha}^2(1)) \le \alpha,
\end{equation}
for all $0 < \alpha \le 0.215$, where $\chi_{1-\alpha}^2(1)$ denotes the $(1- \alpha)$-quantile of a chi-square distribution with one degree of freedom. Note that the bound \eqref{eq:Qalpha} is, in general, very rough; consequently, the following tests are quite conservative. The first test uses multivariance and, therefore, requires the a priori assumption of $(n-1)$-independence.
\begin{test} \label{testA}
Let $\bm{x}^{(1)}, \dots, \bm{x}^{(N)}$ be observations of the random vector $\bm{X} = (X_1, \dots, X_n)$, let $\alpha \in (0,0.215)$, and suppose that the moment conditions of Corollary~\ref{cor:Q_conv} and one of the moment conditions of Definition~\ref{def:moment} hold. Under the assumption of $(n-1)$-independence, the null hypothesis
\begin{align*}
\bm{H_0}:&\qquad (X_1, \dots, X_n) \quad\text{are independent}
\intertext{is rejected against the alternative hypothesis}
\bm{H_1}:&\qquad (X_1, \dots, X_n) \quad\text{are not independent}
\end{align*}
at level $\alpha$, if the normalized multivariance $\mbox{}^{\scriptscriptstyle N}\kern-2pt{\mathcal{M}}_\rho(\bm{x}^{(1)}, \dots, \bm{x}^{(N)})$ satisfies
$$
N \cdot \mbox{}^{\scriptscriptstyle N}\kern-2pt{\mathcal{M}}^2_\rho(\bm{x}^{(1)}, \dots, \bm{x}^{(N)})
\geq \chi^2_{1 - \alpha}(1).
$$
\end{test}
The second test uses \emph{total} multivariance and hence does not require a priori assumptions, except for the moment conditions. We emphasize that this test of mutual independence can be applied in very general settings: it is distribution-free and the random variables $X_1, \dots, X_n$ can take values in arbitrary dimensions.
\begin{test} \label{testB}
Let $(\bm{x}^{(1)}, \dots, \bm{x}^{(N)})$ be observations of the random vector $\bm{X} = (X_1, \dots, X_n)$, let $\alpha \in (0,0.215)$, and suppose that the moment conditions of Corollary~\ref{cor:Q_conv_total} and one of the moment conditions of Definition~\ref{def:moment} hold. The null hypothesis
\begin{align*}
\bm{H_0}:&\qquad (X_1, \dots, X_n) \quad\text{are independent}
\intertext{is rejected against the alternative hypothesis}
\bm{H_1}:&\qquad (X_1, \dots, X_n) \quad\text{are not independent}
\end{align*}
at level $\alpha$, if the normalized total multivariance $\mbox{}^{\scriptscriptstyle N}\kern-1.5pt\overline{\mathcal{M}}_\rho(\bm{x}^{(1)}, \dots, \bm{x}^{(N)})$ satisfies
$$
N \cdot \mbox{}^{\scriptscriptstyle N}\kern-1.5pt\overline{\mathcal{M}}^2_\rho(\bm{x}^{(1)}, \dots, \bm{x}^{(N)}) \ge \chi^2_{1 - \alpha}(1).
$$
\end{test}
Note that in Test \ref{testA} and Test \ref{testB} the moment conditions of Definition~\ref{def:moment} ensure the divergence (for $N \to \infty$) of the test statistics in the case of dependence, cf.~Theorem \ref{thm:Mdistconv} and Theorem \ref{thm:Mdconv}. Thus these tests are consistent against all alternatives.
In Section~\ref{ex:bernstein} below we give a numerical example of both tests that also allows us to assess their power for different sample sizes $N$.
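For illustration, a minimal sketch of Test~\ref{testB} based on the hypothetical functions introduced above reads as follows; as discussed, the chi-square bound makes the resulting test conservative.
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def test_total_independence(samples, alpha=0.05, psi=None):
    # Test B: reject the hypothesis of independence at level alpha
    # (0 < alpha <= 0.215) if N * (normalized total multivariance)^2
    # exceeds the (1 - alpha)-quantile of chi-square with 1 degree of freedom.
    N = len(samples[0])
    _, Mbar2 = sample_normalized_multivariances(samples, psi)   # sketch above
    statistic = N * Mbar2
    critical = chi2.ppf(1.0 - alpha, df=1)
    return {"statistic": statistic, "critical": critical,
            "reject": statistic >= critical}
\end{verbatim}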
\begin{remark}
If the marginal distributions are known, it is possible to perform a Monte Carlo test, where the $p$-value is obtained from the empirical (Monte Carlo) distribution of the test statistic under $H_0$. Even without knowledge of the marginal distributions, resampling tests can be performed. These and further tests based on distance multivariance are discussed in \cite{Boet2017,BersBoet2018}.
\end{remark}
\section{Examples}\label{ex:bernstein}
In this section we present two basic examples which illustrate some key aspects of distance multivariance:
\begin{itemize}
\item \textbf{Bernstein's coins:} This is a classical example of pairwise independence with higher order dependence. It shows that distance multivariance accurately detects multivariate dependence.
\item \textbf{Sinusoidal dependence:} This is a basic example which was considered in \cite{SejdSripGretFuku2013} to illustrate that distance covariance can perform poorly when used to detect small scale (local) dependencies. We show that the flexibility of generalized distance multivariance -- due to the choice of the distance functions $\psi_i$ -- can be used to improve the power of the test considerably.
\end{itemize}
\subsection{Bernstein's coins}
The first example of pairwise independent, but not (totally) independent random variables is attributed to S.N.~Bernstein, cf.~\cite[Sec.~V.3]{feller1971introduction}. We illustrate this example by using two identical fair coins, coin I and coin II. Based on independent tosses of these two coins, define the following events
\begin{gather*}
A = \{\text{coin I shows heads}\}, \qquad
B = \{\text{coin II shows tails}\}, \\
C = \{\text{both coins show the same side}\}.
\end{gather*}
All events have probability $\frac{1}{2}$, and they are pairwise independent, since
$$
\mathds{P}(A \cap B) = \mathds{P}(B \cap C) = \mathds{P}(C \cap A) = \frac{1}{4}.
$$
They are, however, not independent, since $A \cap B \cap C = \emptyset$, hence
$$
0 = \mathds{P}(A \cap B \cap C) \neq \mathds{P}(A)\cdot\mathds{P}(B)\cdot\mathds{P}(C) = \frac{1}{8}.
$$
Consequently, the distance covariances\footnote{In slight abuse of notation, we identify the events $A,B,C$ with the random variables $\mathds{1}_A(\omega), \mathds{1}_B(\omega), \mathds{1}_C(\omega)$.} of the pairs $(A,B)$, $(B,C)$ and $(C,A)$ should vanish, due to pairwise independence, while the distance multivariance and the total distance multivariance of the triplet $(A,B,C)$ should detect their higher-order dependence. We discuss both the analytic approach and the numerical simulation of the relevant quantities.
Let $\rho_A, \rho_B, \rho_C$ be one-dimensional symmetric L\'evy measures with the corresponding continuous negative definite functions $\psi_A$, $\psi_B$ and $\psi_C$. We write $\rho = \rho_A\otimes\rho_B\otimes\rho_C$ and $\rho_{AB}:=\rho_A\otimes\rho_B$ etc.
\subsubsection*{Analytic Approach}
First, note that pairwise independence yields
\begin{align*}
M_{\rho_{AB}}(A,B)^2
&= \int_{\mathds{R}^2} \left|f_{A,B}(r,s) - f_A(r)f_B(s)\right|^2 \rho_A\otimes\rho_B(\mathrm{d}r,\mathrm{d}s) \\
&= \int_{\mathds{R}}\int_{\mathds{R}} 0\,\rho_A(\mathrm{d}r)\,\rho_B(\mathrm{d}s) = 0,
\end{align*}
and similarly for $M_{\rho_{BC}}(B,C)$ and $M_{\rho_{AC}}(C,A)$. On the other hand, from the pairwise independence and Corollary~\ref{cor:mInd-charfun} we obtain
\begin{align*}
M_\rho&(A,B,C)^2
= \int_{\mathds{R}^3} \left|f_{A,B,C}(r,s,t) - f_A(r)f_B(s)f_C(t)\right|^2 \rho(\mathrm{d}r,\mathrm{d}s,\mathrm{d}t)\\
&= \frac{1}{64} \int_\mathds{R} \int_\mathds{R} \int_\mathds{R} |1 -\mathrm{e}^{\mathrm{i} r} |^2 |1 -\mathrm{e}^{\mathrm{i} s} |^2 |1 -\mathrm{e}^{\mathrm{i} t} |^2 \,\rho_A(\mathrm{d}r) \,\rho_B(\mathrm{d}s) \,\rho_C(\mathrm{d}t)\\
&= \frac{1}{8} \psi_A(1) \psi_B(1) \psi_C(1).
\end{align*}
\begin{figure}[htbp]
\centering
\subfigure[Multivariance without normalization]{\includegraphics[width=0.8\textwidth]{Fig/bernstein-1}}
\subfigure[Normalized multivariance]{\includegraphics[width=0.8\textwidth]{Fig/bernstein-2}}
\subfigure[Squared normalized multivariance scaled by sample size]{\includegraphics[width=0.8\textwidth]{Fig/bernstein-3}}
\caption{These plots show sample distance covariance $\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_{\rho_{AB}}(A,B)$ \textup{(}blue\textup{)}, sample distance multivariance $\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_\rho(A,B,C)$ \textup{(}red\textup{)} and sample total distance multivariance $\mbox{}^{\scriptscriptstyle N}\kern-2.5pt\overline{M}_\rho(A,B,C)$ \textup{(}green\textup{)} for Bernstein's coin toss experiment, cf.~Section~\ref{ex:bernstein}, averaged over $5000$ Monte-Carlo replications. Also shown are the empirical $5\%$ and $95\%$ quantiles \textup{(}dashed\textup{)}. Different scalings are used in the plots \textup{(a)--(c)}, and plot \textup{(c)} also shows the critical value \textup{(}significance level $\alpha=5\%$\textup{)} of the independence tests from Section~\ref{sec:test} \textup{(}long dashes, black\textup{)}.}
\label{fig:bernstein}
\end{figure}
In particular, for $\psi(x) = |x|$ we obtain
$$
M_\rho(A,B,C) = \overline{M}_\rho(A,B,C) = \frac{1}{2\sqrt{2}}.
$$
We calculate the scaling factors from Section~\ref{sub:scaling} as
$$
a_A = a_B = a_C = b_A = b_B = b_C = \frac{1}{2},
$$
which shows that multicorrelation and normalized multivariance coincide in this case, i.e.
$$
\mathcal{R}_\rho(A,B,C) = 1 = \mathcal{M}_\rho(A,B,C).
$$
Finally, normalized total multivariance is given by
$$
\overline{\mathcal{M}}_\rho(A,B,C) = \frac{1}{\sqrt{2^3 - 3 - 1}} \,\mathcal{M}_\rho(A,B,C) = \frac{1}{2}.
$$
\subsubsection*{Numerical Simulation}
To complement the analytical results by a numerical simulation, we have simulated $5000$ replications of $N=3,\dots,30$ tosses of Bernstein's coins. We calculated the pairwise sample distance covariances $\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_{\rho_{AB}}(A,B)$, $\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_{\rho_{BC}}(B,C)$, $\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_{\rho_{AC}}(C,A)$ as well as the sample distance multivariance $\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_\rho(A,B,C)$ and the sample total distance multivariance $\mbox{}^{\scriptscriptstyle N}\kern-2.5pt\overline{M}_\rho(A,B,C)$. We used Euclidean distance as underlying distance in all cases. Due to pairwise independence, the bivariate distance covariances should tend to zero for increasing $N$, while the multivariances should tend to the non-zero limits that we calculated analytically above.
Figure~\ref{fig:bernstein} shows the average values of the multivariance statistics over $5000$ replications, along with their empirical $5\%$ and $95\%$ quantiles. Plot (a) uses no scaling, plot (b) shows `normalized' quantities (cf.~Section~\ref{sub:scaling}) and plot (c) shows squared normalized quantities scaled by $N$, as they appear in Theorems~\ref{thm:Mdistconv} and~\ref{thm:Mdconv}. Also shown is the critical value $\chi^2_{0.95}(1)$ of the test proposed in Section~\ref{sec:test}. In summary, the numerical simulation shows that
\begin{itemize}
\item (Total) distance multivariance is able to distinguish correctly pairwise independence of the events $A,B,C$ from their higher-order dependence;
\item The sample statistics converge quickly to their analytic limits and numerically confirm the asymptotic results from Theorems~\ref{thm:Mdistconv} and~\ref{thm:Mdconv};
\item The hypothesis of pairwise independence of $A$ and $B$ would be correctly accepted in about $95\%$ of simulations, confirming the specificity of the proposed tests;
\item Test A (with the a priori assumption of pairwise independence) has a power exceeding $95\%$ for sample sizes $N>5$; Test B (no a priori assumptions) has a power exceeding $95\%$ for $N > 14$.
\end{itemize}
Note that all necessary functions and tests for such simulations and for the use of distance multivariance in applications are provided in the R package \texttt{multivariance} \cite{Boett2017R-1.0.5}.
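For readers working in Python, the following minimal sketch (reusing the hypothetical functions from Section~\ref{sec:sampleM}) reproduces the qualitative behaviour described above; it is not a substitute for the R package.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 1000
h1 = rng.integers(0, 2, size=(N, 1)).astype(float)   # coin I shows heads
h2 = rng.integers(0, 2, size=(N, 1)).astype(float)   # coin II shows heads
A = h1                                   # event A: coin I shows heads
B = 1.0 - h2                             # event B: coin II shows tails
C = (h1 == h2).astype(float)             # event C: both coins show the same side

# pairwise: squared sample distance covariance should be close to 0
print(sample_multivariance([A, B]))
# triplet: squared sample (total) multivariance should be close to 1/8 = 0.125
print(sample_multivariance([A, B, C]))
print(sample_total_multivariance([A, B, C]))
\end{verbatim}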
\subsection{Sinusoidal dependence}
In \cite[p. 2287]{SejdSripGretFuku2013} it was pointed out that for random variables $X$, $Y$ with joint \emph{sinusoidal density}
\begin{equation} \label{eq:sinusoidal}
f_l(x,y) := \frac{1}{4 \pi^2}(1+ \sin(lx)\sin(ly)) \text{ on } [-\pi,\pi]^2 \quad\text{for some $l\in \mathds{N}$}
\end{equation}
the detection of the dependence using distance covariance is poor for $l>1$. It was also noted that choosing (in our notation) $\psi_i(x) = |x|^\alpha$ with some $\alpha \neq 1$ might improve the power, see Figure~\ref{fig:sinusoidal}.(a). Using the bounded continuous negative definite function $\psi_i(x) = \frac{1}{\gamma} (1 - \exp(-\gamma|x|))$ with $\gamma > 0$ can increase the power considerably for larger $l$, see Figure \ref{fig:sinusoidal}.(b). Here we used the same sample parameters as in \cite{BerrSamw2017} (5000 samples, $N = 200$, significance level $\alpha = 0.05$). The $p$-values were calculated by Monte Carlo estimation with 10000 replications.
The following heuristic was used to choose the value of $\gamma$: Note that
\begin{equation}
\psi_i(x) := \frac{1}{\gamma} (1 - \exp(-\gamma|x|))
\end{equation}
is a bounded function which is strictly increasing for $x> 0$. Suppose we know that the local dependencies occur in a window of (Euclidean) distance $\delta$. Thus, it seems reasonable to \emph{neglect} all pairs which are further apart than $\delta$ by setting all their $\psi_i$-distances to (roughly) the same value, i.e.~we choose $\gamma$ such that $\psi_i(\delta) \geq 0.99 \cdot \sup_x \psi_i(x).$ This is achieved by setting $\gamma := - \ln(0.01)/\delta$. For the sinusoidal example $\delta$ is the half-period of the sine functions, i.e.~$\delta=\pi/l$. Let us compare the resulting test with the methods \texttt{MINT} and \texttt{MINTav}, which were proposed in \cite{BerrSamw2017} for a wide range of situations. Figure \ref{fig:multiMINT} shows, in the setting of sinusoidal data, that our proposed test outperforms \texttt{MINTav} and has power similar to the \textit{oracle test} \texttt{MINT}.
Note that \texttt{MINTav} uses no a priori information about the dependence scale, and that \texttt{MINT} computes the p-value using all possible parameters and selects a posteriori the parameter (for each setting) which yielded the highest power. In contrast, our test requires a heuristic parameter selection using certain a priori knowledge of the data generation mechanism.
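The heuristic is easy to put into code. The following sketch draws from $f_l$ by rejection sampling and plugs the bounded $\psi$ with the above choice of $\gamma$ into the hypothetical functions from the earlier sketches. Note that it uses the conservative chi-square test from Section~\ref{sec:test}, whereas the power study in Figure~\ref{fig:sinusoidal} is based on Monte Carlo $p$-values.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def sample_sinusoidal(N, l):
    # Rejection sampling from f_l(x,y) = (1 + sin(l x) sin(l y)) / (4 pi^2)
    # on [-pi, pi]^2, using the uniform density as proposal; the acceptance
    # probability is (1 + sin(l x) sin(l y)) / 2.
    out = np.empty((0, 2))
    while len(out) < N:
        xy = rng.uniform(-np.pi, np.pi, size=(2 * N, 2))
        acc = rng.uniform(size=2 * N) < 0.5 * (1 + np.sin(l * xy[:, 0]) * np.sin(l * xy[:, 1]))
        out = np.vstack([out, xy[acc]])
    return out[:N]

l, N = 3, 200
delta = np.pi / l                       # scale of the local dependence
gamma = -np.log(0.01) / delta           # heuristic choice of gamma
psi_bounded = lambda z: (1.0 - np.exp(-gamma * np.linalg.norm(z, axis=-1))) / gamma

xy = sample_sinusoidal(N, l)
x, y = xy[:, [0]], xy[:, [1]]
print(test_total_independence([x, y], psi=psi_bounded))   # scale-adapted psi
print(test_total_independence([x, y]))                    # Euclidean distance
\end{verbatim}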
Further extensions and details on resampling, Monte Carlo and other tests based on distance multivariance can be found in \cite{Boet2017, BersBoet2018}.
\begin{figure}[htbp]
\centering
\subfigure[$\psi_i(x)=|x|^\alpha$]{\includegraphics[width=0.45\textwidth]{Fig/sinusoidal-1}}
\subfigure[$\psi_i(x) = \frac{1}{\gamma} (1 - \exp(-\gamma|x|))$]{\includegraphics[width=0.45\textwidth]{Fig/sinusoidal-2}}
\caption{Power of tests based on distance multivariance for the sinusoidal example with density $f_l$ given in \eqref{eq:sinusoidal}. The parameter of the data is $l$; the parameters of the $\psi_i$ are $\alpha$ and $\gamma$, respectively. Here \textup{(a)} corresponds to the $\alpha$-stable case and \textup{(b)} uses a bounded continuous negative definite function.}
\label{fig:sinusoidal}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{Fig/multi-dcov-MINT-1}
\caption{Comparison of the power of distance multivariance with distance adapted to the dependence scale, classical distance covariance with Euclidean distance, \textup{\texttt{MINTav}} and \textup{\texttt{MINT}}. The latter two were recently introduced in \textup{\cite{BerrSamw2017}}, where it was shown that for this example they outperform many other dependence measures (in fact, all measures included in their comparison).}
\label{fig:multiMINT}
\end{figure}
\section*{Acknowledgments}
We are grateful to Ulrich Brehm (TU Dresden) for insightful discussions on (elementary) symmetric polynomials and to Georg Berschneider (TU Dresden) who read and commented on the whole text. We would also like to thank Gabor J.\ Sz\'ekely (NSF) for advice on the current literature. We thank the anonymous referees and the handling editor for their helpful comments.
Martin Keller-Ressel acknowledges support by the German Research Foundation (DFG) under grant ZUK~64 and KE~1736/1-1.
\setcounter{part}{18}
\part{Supplement to ``Distance multivariance: New dependence measures for random vectors''}
\renewcommand{\thesection}{\Alph{part}}
\section{Proofs and auxiliary results}
Here we collect supplementary material to \cite{part2}. It contains the proofs of some of the main results as well as a few additional statements: Lemma \ref{lem:moments} discusses the moment conditions introduced in Definition \ref{def:moment} and Lemma \ref{lem:ZN} analyses the estimator which is required for the proof of the main convergence result (Theorem~\ref{thm:Mdistconv}).
Unless otherwise mentioned, all numbered references refer to \cite{part2}.
\subsection{Proofs and auxiliary results for Section~\ref{sec:prelim}}
\begin{proof}[Proof of Lemma~\ref{lem:factorization}]
For arbitrary $a_i,b_i\in\mathds{C}$, $i=1,\dots, n$, we have
\begin{equation}\label{eq:ab_identity}
\prod_{i=1}^n(a_i-b_i) = \sum_{S\subset \{1,\dots,n\}} \left(\prod_{i \in S} a_i\right) \left(\prod_{i \in S^c} b_i\right) (-1)^{|S^c|},
\end{equation}
where $|S|$ denotes the cardinality of $S$ and $S^c:= \{1,\dots,n\}\setminus S$. Thus,
\begin{align*}
&\mathds{E}\left(\prod_{i=1}^n (Z_i - \mathds{E} Z_i)\right)
= \mathds{E}\left[\sum_{S\subset \{1,\dots,n\}} \left(\prod_{i \in S} Z_i\right) \left(\prod_{i \in S^c} \mathds{E}(Z_i)\right) (-1)^{|S^c|}\right]\\
&= \mathds{E}\left(\prod_{i=1}^n Z_i\right) + \sum_{\substack{S\subset \{1,\dots,n\}\\|S|\leq n-1}}\mathds{E}\left( \prod_{i \in S} Z_i\right) \left(\prod_{i \in S^c} \mathds{E}\left( Z_i\right)\right) (-1)^{|S^c|}\\
&= \mathds{E}\left(\prod_{i=1}^n Z_i\right) + \sum_{\substack{S\subset \{1,\dots,n\}\\|S|\leq n-1}} \left(\prod_{i=1}^n \mathds{E}\left( Z_i\right)\right) (-1)^{|S^c|}\\
&= \mathds{E}\left(\prod_{i=1}^n Z_i\right) - \prod_{i =1}^n\mathds{E}( Z_i);
\end{align*}
$(n-1)$-independence is used in the penultimate line.
\end{proof}
\begin{lemma}\label{lem:moments}
The moment conditions in Definition~\ref{def:moment} are ordered from weak to strong, i.e.~c) implies b) and b) implies a). In particular, the estimate
\begin{equation}\label{eq:prod_est}
\mathds{E}\left( \prod_{i=1}^n \psi_i(X_{k_i,i}-X'_{l_i,i})\right) \leq 4^n \prod_{i=1}^n \left(\mathds{E} \psi_i^{p_i}(X_{i})\right)^{1/p_i}
\end{equation}
holds for all $k_i, l_i \in \{0,1\}, i=1, \dots, n$ and all $p_i \in [1,\infty)$ with $\sum_{i=1}^n p_i^{-1} = 1$.
\end{lemma}
\begin{proof}
The implication from c) to b) follows from the fact that every continuous negative definite function is quadratically bounded, i.e.\ $|\psi(x)| \leq C(1 + |x|^2)$ for some $C > 0$, see \cite[Lem.~3.6.22]{Jaco2001}.
The implication from b) to a) follows directly from \eqref{eq:prod_est}. To show \eqref{eq:prod_est}, note that the generalized H\"older inequality for $n$-fold products
(cf.~\cite[p.~133, Pr.~13.5]{schilling-mims})
gives
$$
\mathds{E}\left( \prod_{i=1}^n \psi_i(X_{k_i,i}-X'_{l_i,i})\right)
\leq \prod_{i=1}^n \left(\mathds{E} \psi_i^{p_i}(X_{k_i,i}-X'_{l_i,i})\right)^{1/p_i}.
$$
Using an inequality for continuous negative definite functions (cf.~\cite[Eq.~(2.5)]{part1}, see also \cite[Lem.~3.6.21]{Jaco2001})
and the Minkowski inequality for the $L^{p_i}$-norm yields the bound
\begin{align*}
\left(\mathds{E} \psi_i^{p_i}(X_{k_i,i}-X'_{l_i,i})\right)^{1/p_i}
&\leq 2 \left(\mathds{E} \left[\psi_i(X_{k_i,i})+\psi_i(X'_{l_i,i})\right]^{p_i}\right)^{1/p_i}\\
&\leq 4 \left(\mathds{E} \psi_i^{p_i}(X_{i})\right)^{1/p_i}. \qedhere
\end{align*}
\end{proof}
\subsection{Proofs and auxiliary results for Section~\ref{sec:theory}}
\begin{proof}[Proof of Proposition~\ref{prop:representation}] Using \eqref{eq:X0X1}, we can rewrite $M_\rho^2$ in the following way:
\begin{equation}
\begin{aligned}\label{eq:Mrep}
M_\rho^2
&= \int \left| \mathds{E}\left[ \prod_{i=1}^n \left(\mathrm{e}^{\mathrm{i} \scalp{X_i}{t_i}}- f_{X_i}(t_i)\right)\right] \right|^2 \rho(\mathrm{d}t)\\
&= \int \left| \mathds{E}\left[ \prod_{i=1}^n \left(\mathrm{e}^{\mathrm{i} \scalp{X_{1,i}}{t_i}}- \mathrm{e}^{\mathrm{i} \scalp{X_{0,i}}{t_i}}\right)\right]\right|^2 \rho(\mathrm{d}t)\\
&= \int \mathds{E}\left[\prod_{i=1}^n \left(\mathrm{e}^{\mathrm{i} \scalp{X_{1,i}}{t_i}} - \mathrm{e}^{\mathrm{i} \scalp{X_{0,i}}{t_i}}\right)
\left(\mathrm{e}^{-\mathrm{i} \scalp{X'_{1,i}}{t_i}}- \mathrm{e}^{-\mathrm{i} \scalp{X_{0,i}'}{t_i}}\right)\right]\rho(\mathrm{d}t)\\
&= \int \mathds{E}\left[\sum_{k,l \in \{0,1\}^{ n}} (-1)^{\sum_{j=1}^n (k_j+l_j)} \prod_{i=1}^n \mathrm{e}^{\mathrm{i} \scalp{(X_{k_i,i}-X'_{l_i,i})}{t_i}}\right] \rho(\mathrm{d}t)
\end{aligned}
\end{equation}
and the last line already gives \eqref{eq:Msum_1}. By \eqref{eq:Msymmetric},
$$
M_\rho^2(X_1, X_2, \dots, X_n)
= \frac{1}{2} \left(M_\rho^2(X_1, X_2, \dots, X_n) + M_\rho^2(-X_1, X_2, \dots, X_n)\right).
$$
Applying this to \eqref{eq:Mrep} shows that the imaginary part of the complex exponential cancels for $i=1$. Repeated application to $i=2,\dots, n$ removes the other imaginary terms, and we obtain
\begin{equation}\label{eq:M_prod_interm}
M_\rho^2
= \int \mathds{E}\left[\sum_{k,l \in \{0,1\}^{ n}} \sgn(k,l) \prod_{i=1}^n \cos(\scalp{(X_{k_i,i}-X'_{l_i,i})}{t_i})\right] \rho(\mathrm{d}t).
\end{equation}
It remains to show that \eqref{eq:M_prod_interm} is equal to \eqref{eq:Msum_2}. For this, we note that the product appearing in \eqref{eq:Msum_2} is of the form
$$
\prod_{i=1}^n \left[\cos(\scalp{(X_{k_i,i}-X'_{l_i,i})}{t_i}) - 1\right]
= \prod_{i=1}^n \cos(\scalp{(X_{k_i,i}-X'_{l_i,i})}{t_i}) + \sum \prod_{i=1}^n c(k_i,l_i),
$$
where the sum extends over the $2^n-1$ remaining products of the expansion; in each of these, the factor $c(k_i,l_i)$ is either $\cos(\scalp{(X_{k_i,i}-X'_{l_i,i})}{t_i})$ or $-1$, and at least one factor equals $-1$. If, say, $c(k_m,l_m)=-1$ for some $m\in\{1,\dots, n\}$, we get with $k'=(k_1,\dots,k_{m-1},k_{m+1},\dots,k_n)$, $l'=(l_1,\dots,l_{m-1},l_{m+1},\dots,l_n)$,
\begin{align*}
&\sum_{k,l\in\{0,1\}^n} \sgn(k,l)\prod_{i=1}^nc(k_i,l_i)\\
&\qquad= -\sum_{\mathclap{k_m,l_m\in\{0,1\}}} \quad(-1)^{k_m+l_m} \quad\sum_{\mathclap{k',l'\in\{0,1\}^{n-1}}} \quad\sgn(k',l')\prod_{i\neq m} c(k_i,l_i).
\end{align*}
This expression is $0$ since the inner sum does not depend on $k_m,l_m$ and appears exactly four times, twice with positive and twice with negative sign.
This shows that \eqref{eq:Msum_2} is equal to \eqref{eq:M_prod_interm}.
Finally, by Lemma~\ref{lem:moments}, all moment conditions in Definition~\ref{def:moment} imply the mixed moment condition \ref{def:moment}.a), $\mathds{E}\left( \prod_{i=1}^n \psi_i(X_{k_i,i}-X'_{l_i,i})\right)< \infty$ for all $k,l\in\{0,1\}^n$. Under this condition, Fubini's theorem together with the tower property for conditional expectations and the independence properties \eqref{eq:X0X1} of $\bm{X}_0, \bm{X'}_0$ yield
\begin{equation}\begin{aligned}
M_\rho^2
&= \mathds{E}\left(\sum_{k,l \in \{0,1\}^{ n}} (-1)^{\sum_{j=1}^n (k_j+l_j)} \prod_{i=1}^n (- \psi_i(X_{k_i,i}-X'_{l_i,i})) \right)\\
&=\label{eq:Mpsi}
\mathds{E}\left(\prod_{i=1}^n \Psi_{i,0,1}\right)= \mathds{E}\left(\mathds{E}\left(\prod_{i=1}^n \Psi_{i,0,1} \;\middle|\; \bm{X}_1, \bm{X}'_1\right)\right)= \mathds{E}\left(\prod_{i=1}^n \doverline{\Psi}_{i} \right)
\end{aligned}\end{equation}
where
\begin{align*}
\Psi_{i,0,1} := &-\psi_i(X_{1,i}-X'_{1,i})+\psi_i(X_{1,i}-X'_{0,i})\\
&\quad\mbox{}+\psi_i(X_{0,i}-X'_{1,i})-\psi_i(X_{0,i}-X'_{0,i}),\\
\doverline{\Psi}_{i} := &-\psi_i(X_{i}-X'_{i})+\mathds{E}(\psi_i(X_{i}-X'_{i})\mid X_i)\\
&\quad\mbox{}+\mathds{E}(\psi_i(X_{i}-X'_{i})\mid X'_i)-\mathds{E}\psi_i(X_{i}-X'_{i}).\qedhere
\end{align*}
\end{proof}
\subsection{Proofs and auxiliary results for Section~\ref{sec:stats}}
\begin{lemma}\label{lem:ZN}
Let $\bm{X}^{(l)} := (X_1^{(l)},\dots,X_n^{(l)})$ be independent and identically distributed copies of $\bm{X} = (X_1,\dots,X_n)$ and set
\begin{equation}\label{eq:ZN}
Z_N(t)
:= \frac{1}{N} \sum_{l=1}^N \prod_{i=1}^n \left(\mathrm{e}^{\mathrm{i} \scalp{X_i^{(l)}}{t_i}} - \frac{1}{N} \sum_{k=1}^N \mathrm{e}^{\mathrm{i} \scalp{X_i^{(k)}}{t_i}}\right).
\end{equation}
Then
\begin{equation} \label{eq:GCNintrep}
\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_\rho(\bm{X}^{(1)},\dots,\bm{X}^{(N)})
= \|Z_N({\scriptscriptstyle\bullet})\|_{L^2(\rho)}.
\end{equation}
If $X_1,\dots,X_n$ are independent, then
\begin{align}
\mathds{E} Z_N(t) &= 0,\\
\mathds{E}\big(Z_N(t) \overline{Z_N(t')}\big) &= \frac{1}{N} \cdot C_N\cdot \prod_{i=1}^n \left[ f_{X_i}(t_i-t_i') - f_{X_i}(t_i) \overline{f_{X_i}(t_i')} \right],\\
\label{eq:squaredMN}
\mathds{E}\left(\left|\sqrt{N} Z_N(t) \right|^2\right) &= C_N \cdot \prod_{i=1}^n \left(1 - |f_{X_i}(t_i)|^2 \right),
\end{align}
with constant $C_N:= \frac{(N-1)^n + (-1)^n (N-1)}{N^{n}}.$
\end{lemma}
\begin{proof}
The equality \eqref{eq:GCNintrep} follows by inserting the empirical characteristic function into the representation \eqref{eq:M_rho_L2} of distance multivariance.
Assume that the random variables $X_1, \dots, X_n$ are independent. We obtain
$$
\mathds{E}\left(\mathrm{e}^{\mathrm{i} \scalp{X_i^{(l)}}{t_i}}-\frac{1}{N} \sum_{k=1}^N \mathrm{e}^{\mathrm{i} \scalp{X_i^{(k)}}{t_i}}\right)=0,\quad i=1,\dots, n,\; l = 1,\dots,N,
$$
hence, $\mathds{E} Z_N(t) = 0$. Next, consider
\begin{equation} \label{ZN-exp} \begin{aligned}
\mathds{E}(Z_N(t) \overline{Z_N(t')})
&= \frac{1}{N^2} \sum_{l,l' =1}^N \mathds{E}\left[ \prod_{i=1}^n \left(\mathrm{e}^{\mathrm{i} \scalp{X_i^{(l)}}{t_i}}-\frac{1}{N} \sum_{k=1}^N \mathrm{e}^{\mathrm{i} \scalp{X_i^{(k)}}{t_i}}\right)\right.\times\\
&\qquad\qquad\mbox{}\times\left.\prod_{i'=1}^n \left(\mathrm{e}^{-\mathrm{i} \scalp{X_{i'}^{(l')}}{t'_{i'}}}-\frac{1}{N} \sum_{k'=1}^N \mathrm{e}^{-\mathrm{i} \scalp{X_{i'}^{(k')}}{t'_{i'}}}\right) \right].
\end{aligned}\end{equation}
The independence of $X_i$, $X_{j}$ for $i\neq j$ implies
\begin{align*}
&\mathds{E}\left[ \prod_{i=1}^n \left(\mathrm{e}^{\mathrm{i} \scalp{X_i^{(l)}}{t_i}} - \frac{1}{N} \sum_{k=1}^N \mathrm{e}^{\mathrm{i} \scalp{X_i^{(k)}}{t_i}}\right)
\cdot \prod_{i'=1}^n \left(\mathrm{e}^{-\mathrm{i} \scalp{X_{i'}^{(l')}}{t'_{i'}}} - \frac{1}{N} \sum_{k'=1}^N \mathrm{e}^{-\mathrm{i} \scalp{X_{i'}^{(k')}}{t'_{i'}}}\right)\right]\\
&\qquad= \prod_{i=1}^n \mathds{E}\left[ \left(\mathrm{e}^{\mathrm{i} \scalp{X_i^{(l)}}{t_i}} - \frac{1}{N} \sum_{k=1}^N \mathrm{e}^{\mathrm{i} \scalp{X_i^{(k)}}{t_i}}\right)
\cdot \left(\mathrm{e}^{-\mathrm{i} \scalp{X_{i}^{(l')}}{t'_{i}}} - \frac{1}{N} \sum_{k'=1}^N \mathrm{e}^{-\mathrm{i} \scalp{X_{i}^{(k')}}{t'_{i}}}\right) \right]
\end{align*}
and each factor simplifies to
\begin{align*}
\mathds{E}&\left[ \left(\mathrm{e}^{\mathrm{i} \scalp{X_i^{(l)}}{t_i}} - \frac{1}{N} \sum_{k=1}^N \mathrm{e}^{\mathrm{i} \scalp{X_i^{(k)}}{t_i}}\right)
\cdot \left(\mathrm{e}^{-\mathrm{i} \scalp{X_{i}^{(l')}}{t'_{i}}} - \frac{1}{N} \sum_{k'=1}^N \mathrm{e}^{-\mathrm{i} \scalp{X_{i}^{(k')}}{t'_{i}}}\right) \right]\\
&= \mathds{E} \mathrm{e}^{\mathrm{i} \scalp{X_i^{(l)}}{t_i}-\mathrm{i} \scalp{X_i^{(l')}}{t_i'}} - 2 \frac{N-1}{N} f_{X_i}(t_i) \overline{f_{X_i}(t_i')} - \frac{2}{N} f_{X_i}(t_i-t_i') \\
&\qquad\mbox{}+ \frac{N^2-N}{N^2} f_{X_i}(t_i) \overline{f_{X_i}(t_i')} + \frac{N}{N^2} f_{X_i}(t_i-t_i')\\
&= \mathds{E} \mathrm{e}^{\mathrm{i} \scalp{X_i^{(l)}}{t_i}-\mathrm{i} \scalp{X_i^{(l')}}{t_i'}} - \frac{N-1}{N} f_{X_i}(t_i) \overline{f_{X_i}(t_i')} - \frac{1}{N} f_{X_i}(t_i-t_i').
\end{align*}
Thus, splitting the sum in \eqref{ZN-exp} into $l=l'$ and $l\neq l'$ yields
\begin{align*}
&\mathds{E}\big(Z_N(t) \overline{Z_N(t')}\big)\\
&= \frac{1}{N^2} \sum_{l,l' =1}^N \prod_{i=1}^n \left[\mathds{E} \mathrm{e}^{\mathrm{i} \scalp{X_i^{(l)}}{t_i} - \mathrm{i} \scalp{X_i^{(l')}}{t_i'}}
- \frac{N-1}{N} f_{X_i}(t_i) \overline{f_{X_i}(t_i')} - \frac{1}{N} f_{X_i}(t_i-t_i')\right]\\
&= \frac{N}{N^2} \prod_{i=1}^n \left[-\frac{N-1}{N} f_{X_i}(t_i) \overline{f_{X_i}(t_i')} - \left(\frac{1}{N}-1\right) f_{X_i}(t_i-t_i') \right] \\
&\qquad\mbox{} + \frac{N^2-N}{N^2} \prod_{i=1}^n \left[\left(-\frac{N-1}{N} +1\right) f_{X_i}(t_i) \overline{f_{X_i}(t_i')} - \frac{1}{N} f_{X_i}(t_i-t_i') \right]\\
&= \left(\frac{1}{N} \left(\frac{N-1}{N}\right)^n + \frac{N-1}{N} \left(\frac{-1}{N}\right)^n \right) \prod_{i=1}^n \left[f_{X_i}(t_i-t_i') - f_{X_i}(t_i) \overline{f_{X_i}(t_i')} \right]\\
&= \frac{(N-1)^n + (-1)^n (N-1)}{N^{n+1}} \prod_{i=1}^n \left[ f_{X_i}(t_i-t_i') - f_{X_i}(t_i) \overline{f_{X_i}(t_i')} \right].
\end{align*}
For $t'=t$ this reduces to
\begin{equation*}
\mathds{E}\left(\left|\sqrt{N} Z_N(t) \right|^2\right) = N\cdot \frac{(N-1)^n + (-1)^n (N-1)}{N^{n+1}}\prod_{i=1}^n \left(1 - |f_{X_i}(t_i)|^2 \right).\qedhere
\end{equation*}
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:Mdistconv}]
We start with part b), which is a simple consequence of the strong consistency of $\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_\rho$. Indeed, by Theorem~\ref{thm:sconsistent} we have $\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_\rho \to M_\rho$ a.s., and from Theorem~\ref{thm:independence} we know that $M_\rho > 0$ under the conditions of b), so that \eqref{eq:Mdistconv_b} follows.
For part a), let $Z_N(t)$ be defined as in \eqref{eq:ZN}. Then $\mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}_\rho(\bm{X}^{(1)},\dots,\bm{X}^{(N)}) = \|Z_N({\scriptscriptstyle\bullet})\|_{L^2(\rho)}$ by Lemma~\ref{lem:ZN}.
If $\sqrt{N} Z_N$ converges in distribution to a Gaussian process then, by Lemma \ref{lem:ZN}, this process is centred and has the covariance structure \eqref{eq:G_cov}, i.e.\ it is distributed as $\mathds{G}$.
In order to show convergence, we introduce the following notation. Denote by $F_{\bm{X}}$ the distribution function of $\bm{X}$ and by $\mbox{}^{\scriptscriptstyle N}\kern-1.5pt{F}_{\bm{X}}$ the empirical distribution function of the iid sequence $(\bm{X}^{(1)}, \dots, \bm{X}^{(N)})$. For a subset $S \subset \{1, \dots, n\}$ we write $t_S := (t_i)_{i \in S}$ and denote the corresponding empirical characteristic function by
\begin{equation*}
\mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_S(t_S) := \frac{1}{N} \sum_{j=1}^N \exp\left(\mathrm{i} \sum_{i \in S} \scalp{X_i^{(j)}}{t_i} \right) = \int\mathrm{e}^{\mathrm{i} \scalp{x_S}{t_S}}\,\mathrm{d}(\mbox{}^{\scriptscriptstyle N}\kern-1.5pt{F}_{\bm{X}}(x)).
\end{equation*}
If $S=\{i\}$ is a singleton, we write $\mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_i := \mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_{\{i\}}$.
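Note that $\mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_{\{1,\dots,n\}} = \mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}$ is the full empirical characteristic function and that, by the empty-sum convention, $\mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_{\emptyset} \equiv 1$; the latter convention is used in the expansion \eqref{eq:ZN2} below.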
By \cite[Thm~3.1, p.~208]{Csoe1981a} the $\log$-moment condition is sufficient for the convergence
\begin{equation}\label{eq:convBB}
{\scriptstyle\sqrt{N}} \left(\mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}(t) - f(t)\right)
= \int\mathrm{e}^{\mathrm{i} \scalp{x}{t}}\,\mathrm{d}\!\left( {\scriptstyle\sqrt{N}}(\mbox{}^{\scriptscriptstyle N}\kern-1.5pt{F}_{\bm{X}}(x) -F_{\bm{X}}(x))\right)
\xrightarrow[N\to \infty]{d} \int\mathrm{e}^{\mathrm{i} \scalp{x}{t}}\,\mathrm{d}B(x),
\end{equation}
where $B$ is a Brownian bridge indexed by $\mathds{R}^d$ (cf.~\cite[Eq.~(3.2)]{Csoe1981a}) and the distributional convergence is uniform (in $t$) on compact subsets of $\mathds{R}^d$.
Next, we rewrite $Z_N$ from \eqref{eq:ZN} as
\begin{equation}\label{eq:ZN2}
Z_N(t) = \sum_{S\subset \{1,\dots,n\}} (-1)^{n-|S|} \left(\mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_S(t_S) \cdot \prod_{\smash{j} \in S^c} \mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_j(t_j)\right).
\end{equation}
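For example, for $n=2$ this expansion reduces, using $\mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_{\emptyset} \equiv 1$, to
\begin{equation*}
Z_N(t) = \mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_{\{1,2\}}(t_1,t_2) - \mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_1(t_1)\, \mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_2(t_2),
\end{equation*}
i.e.\ to the difference between the joint empirical characteristic function and the product of the marginal empirical characteristic functions.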
In addition, we have the simple identity, cf.~\eqref{eq:ab_identity},
\begin{equation}\label{eq:prod}
\prod_{j=1}^n \left(f_j(t_j) - \mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_j(t_j) \right) = \sum_{S\subset \{1,\dots,n\}} (-1)^{n-|S|} \left(\prod_{\smash{j} \in S} f_j(t_j) \cdot \prod_{\smash{j} \in S^c} \mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_j(t_j)\right).
\end{equation}
Subtracting \eqref{eq:prod} from \eqref{eq:ZN2}, using that $f_S(t_S) = \prod_{j \in S} f_j(t_j)$ since the $X_i$ are independent in the setting of part a), and rearranging the resulting equation yields
\begin{align*}
\sqrt{N} Z_N(t)
&= \sum_{S\subset \{1,\dots,n\}} (-1)^{n-|S|} \sqrt{N} \big(\mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_S(t_S) - f_S(t_S)\big) \cdot \prod_{\smash{j} \in S^c} \mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_j(t_j) \\
&\qquad\mbox{}+ \sqrt{N} \prod_{j=1}^n \big(f_j(t_j)- \mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_j(t_j)\big).
\end{align*}
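For $n=2$, for instance, the decomposition reads (the summand for $S=\emptyset$ vanishes because $\mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_{\emptyset} = f_\emptyset = 1$)
\begin{align*}
\sqrt{N} Z_N(t)
&= \sqrt{N}\big(\mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_{\{1,2\}}(t_1,t_2) - f_1(t_1)f_2(t_2)\big)
- \sqrt{N}\big(\mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_1(t_1) - f_1(t_1)\big)\, \mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_2(t_2)\\
&\qquad\mbox{} - \sqrt{N}\big(\mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_2(t_2) - f_2(t_2)\big)\, \mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_1(t_1)
+ \sqrt{N}\big(f_1(t_1) - \mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_1(t_1)\big)\big(f_2(t_2) - \mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_2(t_2)\big),
\end{align*}
and only the last term will turn out to be asymptotically negligible.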
By \eqref{eq:convBB}, we have that
$$
\sqrt{N} \left(\mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_S(t_S) - f_S(t_S)\right) \xrightarrow[N\to \infty]{d} \int\mathrm{e}^{\mathrm{i} \scalp{x_S}{t_S}}\,\mathrm{d}B(x).
$$
By the Glivenko--Cantelli theorem, $\mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_j(t_j) \to f_j(t_j)$ as $N\to\infty$, uniformly in $t_j$ on compact sets, for all $j = 1, \dots, n$. Thus
\begin{align*}
&{\sqrt{N}}\prod_{j=1}^n \left(\mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_j(t_j)- f_j(t_j)\right)\\
&
= {\sqrt{N}}\left( \mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_1(t_1) - f_1(t_1)\right) \cdot \prod_{j=2}^n \left(\mbox{}^{\scriptscriptstyle N}\kern-1.5pt{f}_j(t_j)- f_j(t_j)\right)
\xrightarrow[N\to\infty]{} 0,
\end{align*}
since the first factor converges in distribution by \eqref{eq:convBB}, and is therefore bounded in probability, while each of the remaining $n-1$ factors tends to zero uniformly on compact sets.
Combining this with \eqref{eq:convBB} and the decomposition of $\sqrt{N} Z_N$ above yields the convergence
\begin{equation}\label{eq:conv_to_G}
\sqrt{N} Z_N(t) \xrightarrow[N\to \infty]{d} \sum_{S\subset \{1,\dots,n\}} (-1)^{n-|S|} \int\mathrm{e}^{\mathrm{i} \scalp{x_S}{t_S}}\,\mathrm{d}B(x) \cdot \prod_{\smash{j} \in S^c} f_j(t_j),
\end{equation}
which takes place uniformly on compacts. The right hand side is a complex-valued Gaussian process indexed by $\mathds{R}^d$; denoting this process by $\mathds{G}$, we have thus shown that for each $T > 0$,
\begin{equation}\label{eq:sum-conv}
\sqrt{N} Z_N \xrightarrow[N\to\infty]{d} \mathds{G}
\quad\text{on}\quad
\mathcal{C}_T:= (C(B^d_T), \|{\scriptscriptstyle\bullet}\|_{B^d_T}),
\end{equation}
where $B^d_T := B^d_T(0) := \{x\in \mathds{R}^d\,:\,|x| < T\}$ and $\|f\|_{B^d_T} := \sup_{x \in B^d_T }|f(x)|$. To obtain \eqref{eq:G_conv}, it remains to show that the $L^2(\rho)$-norms of both sides of \eqref{eq:sum-conv} also converge, and that $T$ can be sent to infinity.
To this end, we apply a truncation argument.
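Such a truncation is genuinely needed: for example, the L\'evy measure $\rho_i(\mathrm{d}x) = c_{d_i}\,|x|^{-d_i-1}\,\mathrm{d}x$, which corresponds (up to the normalising constant $c_{d_i}$) to $\psi_i(x) = |x|$, has infinite total mass due to the singularity at the origin, so $\rho$ itself is in general not a finite measure.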
Set
\begin{equation}
\rho_{i,\epsilon}(A) := \rho_{i}(A\cap (B_{1/\epsilon }^{d_i}\setminus B_{\epsilon}^{d_i})) \quad \text{ and } \quad \rho_i^{\epsilon} := \rho_i-\rho_{i,\epsilon},
\end{equation}
and note that the $\rho_{i,\epsilon}$ are finite measures for each $\epsilon >0$, by \eqref{eq:levy_integrability}. In addition, we define $\rho_{\epsilon} = \bigotimes_{i=1}^n\rho_{i,\epsilon}$ as well as $\rho^{\epsilon} = \bigotimes_{i=1}^n\rho_i^{\epsilon}$ and introduce, for this proof, the shorthand notation $\norm{{\scriptscriptstyle\bullet}}_{\rho_\epsilon} = \norm{{\scriptscriptstyle\bullet}}_{L^2(\rho_\epsilon)}$. Note that if $|x_i| \le 1/\epsilon$ for all $i=1,\dots, n$, then $|x| \le \sqrt{n}/\epsilon$ for $x=(x_1,\dots,x_n)\in\mathds{R}^d$, and hence we have
\begin{equation}
\left|\|h\|_{\rho_{\epsilon}} - \|h'\|_{\rho_{\epsilon}} \right|^2
\leq \left\|h-h'\right\|^2_{\rho_\epsilon}
\leq \sup_{|x|\,\leq \sqrt n/\epsilon} \left|h(x)-h'(x)\right|^2 \cdot \prod_{i=1}^n \rho_{i,\epsilon}(\mathds{R}^{d_i}),
\end{equation}
which shows that $\|{\scriptscriptstyle\bullet}\|^2_{\rho_{\epsilon}}$ is continuous on $\mathcal{C}_T$ for any $T \geq \sqrt{n}/\epsilon$. Thus, the continuous mapping theorem implies that for any $\epsilon > 0$
\begin{equation}\label{eq:G_cmt}
\|\sqrt{N} Z_N\|^2_{\rho_{\epsilon}} \xrightarrow[N\to\infty]{d} \|\mathds{G}\|^2_{\rho_{\epsilon}}.
\end{equation}
By the portmanteau theorem, the convergence \eqref{eq:G_conv} is equivalent to the statement
$$
\lim_{N \to \infty} \mathds{E}\left(h(N\cdot \mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}^2) - h(\norm{\mathds{G}}^2_\rho)\right) = 0
$$
for all bounded Lipschitz continuous functions $h: \mathds{R} \to \mathds{R}$. Denoting the Lipschitz constant of $h$ by $L_h$, we see
\begin{align}\notag
\left|\mathds{E} h(N\cdot \mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}^2) - \mathds{E} h(\norm{\mathds{G}}^2_\rho) \right|
&\leq L_h \mathds{E} \left| N\cdot \mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}^2 - \|\sqrt{N} Z_N\|^2_{\rho_\epsilon} \right|\\
&\label{eq:G_split}\qquad\mbox{}+ \left|\mathds{E} h\left(\|\sqrt{N} Z_N\|^2_{\rho_\epsilon}\right) - \mathds{E} h\left(\norm{\mathds{G}}^2_{\rho_\epsilon}\right)\right|\\
&\notag\qquad\mbox{}+ L_h \mathds{E} \left|\norm{\mathds{G}}^2_{\rho_\epsilon} -\norm{\mathds{G}}^2_{\rho}\right|.
\end{align}
The middle term tends to zero as $N \to \infty$, by \eqref{eq:G_cmt}. To estimate the other terms, define $\mu^\epsilon$ to be the measure given by
\begin{equation*}
\left(\rho_1^{\epsilon} \otimes \rho_2 \otimes {\dots} \otimes \rho_n \right)+
\left(\rho_1 \otimes \rho_2^{\epsilon} \otimes {\dots} \otimes \rho_n \right) +
\dots + \left( \rho_1 \otimes {\dots} \otimes \rho_{n-1} \otimes \rho_n^{\epsilon} \right).
\end{equation*}
For the first term on the right hand side of \eqref{eq:G_split} we get the bound
\begin{equation*}
\mathds{E} \left|N \cdot \mbox{}^{\scriptscriptstyle N}\kern-2.5pt{M}^2 - \|\sqrt{N} Z_N\|^2_{\rho_{\epsilon}}\right|
= \mathds{E} \left|\|\sqrt{N} Z_N\|_{\rho}^2 - \|\sqrt{N} Z_N\|^2_{\rho_{\epsilon}}\right|
\leq \mathds{E} \|\sqrt{N} Z_N\|^2_{\mu^\epsilon}.
\end{equation*}
Using \eqref{eq:squaredMN} and setting $C_N := [(N-1)^n+(-1)^n(N-1)]/N^n \leq 1$, we see that
\begin{equation}
\label{eq:Zn-on-bounded}
\mathds{E}\left( \|\sqrt{N} Z_N\|^2_{\mu^\epsilon}\right)
= C_N \sum_{k=1}^n \bigg[\|1-|f_{X_k}|^2\|^2_{ \rho_k^{\epsilon}} \prod_{\substack{i=1 \\ i\neq k}}^n \|1-|f_{X_i}|^2\|^2_{\rho_i}\bigg],
\end{equation}
and this expression converges to $0$ as $\epsilon\to 0$. This follows from dominated convergence, since
\begin{equation*}
\sum_{k=1}^n \bigg[\|1-|f_{X_k}|^2\|^2_{ \rho_k^{\epsilon}} \prod_{\substack{i=1 \\ i\neq k}}^n \|1-|f_{X_i}|^2\|^2_{\rho_i}\bigg]
\leq n \prod_{i=1}^n \mathds{E}\psi_i(X_i-X_i') < \infty.
\end{equation*}
The last term in \eqref{eq:G_split} can be estimated in a similar way. We have
\begin{equation} \label{eq:gauss-on-bounded}
\|\mathds{G}\|^2_{\rho} - \|\mathds{G}\|^2_{\rho_{\epsilon}}
\leq\|\mathds{G}\|^2_{\mu^\epsilon}
\xrightarrow[\epsilon \to 0]{} 0
\quad\text{a.s.}
\end{equation}
by dominated convergence, since $\lim_{\epsilon\to 0}\int g_i \,\mathrm{d}\rho_i^{\epsilon} = 0$ for integrable $g_i$ and
\begin{equation}\label{eq:EG-2}\begin{aligned}
\mathds{E}(\|\mathds{G}\|^2_{\mu^\epsilon})
\leq n \mathds{E}(\|\mathds{G}\|^2_{\rho})
&= n \prod_{i=1}^n \|1-|f_{X_i}|^2\|^2_{\rho_i}\\
&= n \prod_{i=1}^n \mathds{E}\psi_i(X_i-X_i') < \infty.
\end{aligned}\end{equation}
By \eqref{eq:Zn-on-bounded}, the first term on the right hand side of \eqref{eq:G_split} is bounded, uniformly in $N$, by a quantity that vanishes as $\epsilon \to 0$; the last term does not depend on $N$ and vanishes as $\epsilon \to 0$ by \eqref{eq:gauss-on-bounded} and \eqref{eq:EG-2}; and the middle term tends to zero as $N \to \infty$ for every fixed $\epsilon > 0$. Letting first $N \to \infty$ and then $\epsilon \to 0$ in \eqref{eq:G_split} thus shows the convergence result \eqref{eq:G_conv} and completes the proof. \end{proof}
| {
"timestamp": "2018-10-18T02:13:49",
"yymm": "1711",
"arxiv_id": "1711.07775",
"language": "en",
"url": "https://arxiv.org/abs/1711.07775",
"abstract": "We introduce two new measures for the dependence of $n \\ge 2$ random variables: distance multivariance and total distance multivariance. Both measures are based on the weighted $L^2$-distance of quantities related to the characteristic functions of the underlying random variables. These extend distance covariance (introduced by Székely, Rizzo and Bakirov) from pairs of random variables to $n$-tuplets of random variables. We show that total distance multivariance can be used to detect the independence of $n$ random variables and has a simple finite-sample representation in terms of distance matrices of the sample points, where distance is measured by a continuous negative definite function. Under some mild moment conditions, this leads to a test for independence of multiple random vectors which is consistent against all alternatives.",
"subjects": "Probability (math.PR); Statistics Theory (math.ST)",
"title": "Distance multivariance: New dependence measures for random vectors",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9802808759252645,
"lm_q2_score": 0.7217432122827968,
"lm_q1q2_score": 0.7075110683296941
} |